July 2007 , Volume 8 , Issue 1
Special Issue on
Differential Equations in Mathematical Biology
P. Magal and Shigui Ruan
2007, 8(1): i-ii doi: 10.3934/dcdsb.2007.8.1i +[Abstract](1279) +[PDF](44.0KB)
This special issue is the proceedings of the International Workshop on Differential Equations in Mathematical Biology held in Le Havre, France, July 11-13, 2005. The workshop brought together international researchers in Differential Equations and Mathematical Biology to communicate with each other about their current work. The topics of the workshop included various types of differential equations and their applications to biology and other related subjects, such as ecology, epidemiology, medicine, etc. More than 60 participants came from Algeria, Canada, Cameroon, Finland, France, Germany, Hungary, Italy, Japan, Lithuania, Mexico, The Netherlands, Portugal, Romania, Spain, South Africa, UK, and USA. The plenary speakers were Pierre Auger (IRD Bondy, France), Josef Hofbauer (University College London, UK), Michel Langlais (Bordeaux 2, France), Hal Smith (Arizona State, USA), Horst Thieme (Arizona State, USA), Glenn Webb (Vanderbilt, USA) and Jianhong Wu (York, Canada). There were also more than 40 presentations by other participants.
The 17 articles which appear in this special issue are from the participants of the Workshop and from other leading researchers in these subjects. Topics include malaria intra-host models, stem cell dynamics, tumor invasion, reaction-diffusion systems for competition and predation, traveling waves, optimal control in age structured models, host-parasitoid models, predator-prey models, HIV infection, immune system memory, bacteria infection, innate immune response, and antibiotic treatment.
For the full preface, please click the Full Text "PDF" button above.
P. Magal, Shigui Ruan. Preface. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): i-ii. doi: 10.3934/dcdsb.2007.8.1i.
General models of host-parasite systems. Global analysis
P. Adda, J. L. Dimi, A. Iggidir, J. C. Kamgang, G. Sallet and J. J. Tewa
2007, 8(1): 1-17 doi: 10.3934/dcdsb.2007.8.1 +[Abstract](2101) +[PDF](226.1KB)
We obtain global stability results for within-host models with $k$ age-classes of parasitized cells and two strains of parasites. The stability is determined by the value of the basic reproduction ratio $\mathcal R_0$. A competitive exclusion principle holds. This means that if $\mathcal R_0 >1$, then, generically, a unique equilibrium, corresponding to the extinction of one strain and the survival of the other, is globally asymptotically stable on the positive orthant.
P. Adda, J. L. Dimi, A. Iggidir, J. C. Kamgang, G. Sallet, J. J. Tewa. General models of host-parasite systems. Global analysis. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 1-17. doi: 10.3934/dcdsb.2007.8.1.
Modeling and asymptotic stability of a growth factor-dependent stem cell dynamics model with distributed delay
Mostafa Adimy and Fabien Crauste
2007, 8(1): 19-38 doi: 10.3934/dcdsb.2007.8.19 +[Abstract](1776) +[PDF](268.0KB)
Under the action of growth factors, proliferating and nonproliferating hematopoietic stem cells differentiate and divide, so as to produce blood cells. Growth factors act at different levels in the differentiation process, and we consider their action on the mortality rate (apoptosis) of the proliferating cell population. We propose a mathematical model describing the evolution of a hematopoietic stem cell population under the action of growth factors. It consists of a system of two age-structured evolution equations modeling the dynamics of the stem cell population coupled with a delay differential equation describing the evolution of the growth factor concentration. We first reduce our system of three differential equations to a system of two nonlinear differential equations with two delays and a distributed delay. We investigate some positivity and boundedness properties of the solutions, as well as the existence of steady states. We then analyze the asymptotic stability of the two steady states by studying the characteristic equation with delay-dependent coefficients obtained while linearizing our system. We obtain necessary and sufficient conditions for the global stability of the steady state describing the cell population's dying out, using a Lyapunov function, and we prove the existence of periodic solutions about the other steady state through the existence of a Hopf bifurcation.
Mostafa Adimy, Fabien Crauste. Modeling and asymptotic stability of a growth factor-dependent stem cell dynamics model with distributed delay. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 19-38. doi: 10.3934/dcdsb.2007.8.19.
Fast reaction limit and long time behavior for a competition-diffusion system with Dirichlet boundary conditions
E. C.M. Crooks, E. N. Dancer and Danielle Hilhorst
We consider a two-component competition-diffusion system with equal diffusion coefficients and inhomogeneous Dirichlet boundary conditions. Provided certain conditions on a limit problem hold and provided that the competition rate is sufficiently large, all non-negative solutions of the system converge to a stationary solution of the system as $ t \rightarrow \infty$.
E. C.M. Crooks, E. N. Dancer, Danielle Hilhorst. Fast reaction limit and long time behavior for a competition-diffusion system with Dirichlet boundary conditions. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 39-44. doi: 10.3934/dcdsb.2007.8.39.
An age and spatially structured model of tumor invasion with haptotaxis
Janet Dyson, Eva Sánchez, Rosanna Villella-Bressan and Glenn F. Webb
A model of tumor growth into surrounding tissue is analyzed. The model consists of a system of nonlinear partial differential equations for the populations of tumor cells, extracellular matrix macromolecules, oxygen concentration, and extracellular matrix degradative enzyme concentration. The spatial growth of the tumor involves the directed movement of tumor cells toward the extracellular matrix through haptotaxis. Cell age is used to track progression of cells through the cell cycle. The existence of unique global solutions is proved using the theory of fractional powers of analytic semigroup generators.
Janet Dyson, Eva Sánchez, Rosanna Villella-Bressan, Glenn F. Webb. An age and spatially structured model of tumor invasion with haptotaxis. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 45-60. doi: 10.3934/dcdsb.2007.8.45.
Some remarks on a singular reaction-diffusion system arising in predator-prey modeling
Sébastien Gaucel and Michel Langlais
2007, 8(1): 61-72 doi: 10.3934/dcdsb.2007.8.61 +[Abstract](1446) +[PDF](2687.1KB)
This note is dedicated to the question of global existence for solutions to a two component singular system of reaction-diffusion equations modeling predator-prey interactions in insular environments. Depending on a 2D parameter space, positive orbits of the underlying ODE system undergo interesting dynamics, e.g., finite time existence and global existence may coexist. These results are partially extended to the reaction-diffusion system in the case of identical diffusivities. Our analysis relies on an auxiliary non singular reaction-diffusion system whose solutions may or may not blow up in finite time. Numerical simulations illustrate our analysis, including numerical evidence of spatio-temporal oscillations.
Sébastien Gaucel, Michel Langlais. Some remarks on a singular reaction-diffusion system arising in predator-prey modeling. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 61-72. doi: 10.3934/dcdsb.2007.8.61.
Polytopic Lyapunov functions for persistence analysis of competing species
Frédéric Grognard, Frédéric Mazenc and Alain Rapaport
We show that stability of the equilibrium of a family of interconnected scalar systems can be proved by using a sum of monotonic $C^0$ functions as a Lyapunov function. We prove this result in the general framework of nonlinear systems and then in the special case of Kolmogorov systems. As an application, it is then used to show that intra-specific competition can explain coexistence of several species in a chemostat where they compete for a single substrate. This contrasts with the Competitive Exclusion Principle, which states that in the classical case (without this intra-specific competition) only one of the species will survive.
Frédéric Grognard, Frédéric Mazenc, Alain Rapaport. Polytopic Lyapunov functions for persistence analysis of competing species. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 73-93. doi: 10.3934/dcdsb.2007.8.73.
Interaction of diffusion and delay
Karl Peter Hadeler and Shigui Ruan
2007, 8(1): 95-105 doi: 10.3934/dcdsb.2007.8.95 +[Abstract](1631) +[PDF](153.7KB)
For reaction-diffusion equations with delay, the joint effects of diffusion and delay are studied. In particular, for two-dimensional systems where only the interaction between species is delayed, the interdependence of stability against delay and against diffusion (Turing instability) can be clearly exhibited. Turing instabilities occur largely independently of the delay, but periodic oscillations, constant in space or with low spatial frequency, can be achieved by increasing the delay or changing the diffusion rates.
Karl Peter Hadeler, Shigui Ruan. Interaction of diffusion and delay. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 95-105. doi: 10.3934/dcdsb.2007.8.95.
For which objective is birth process an optimal feedback in age structured population dynamics?
We consider the McKendrick linear model for the evolution of an age structured population. Usually the birth rate is given through a linear functional of the present population using the fertility rate. We investigate the question of the existence of an objective function, depending on the control and some observation of the state, for which the associated optimal control problem using the birth rate as a control would yield the previous relation using the fertility rate as the optimal closed loop form. Then we consider adaptation mechanisms that we model by including a desired value of the observation in the objective function. A modified fertility rate is derived.
Jacques Henry. For which objective is birth process an optimal feedback in age structured population dynamics?. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 107-114. doi: 10.3934/dcdsb.2007.8.107.
Phase plane analysis of travelling waves for higher order autocatalytic reaction-diffusion systems
Yuzo Hosono
This paper investigates the existence of travelling waves for the two component higher order autocatalytic reaction-diffusion systems by the phase plane analysis. We prove the existence of travelling waves for the system without decay for two extreme cases: the non-diffusive reactant case and the equal diffusive case. We further discuss the existence problem for the system with higher order decay when the reactant does not diffuse. Our analysis also gives the estimate of the minimal propagation speeds in terms of the order of autocatalysis.
Yuzo Hosono. Phase plane analysis of travelling waves for higher order autocatalytic reaction-diffusion systems. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 115-125. doi: 10.3934/dcdsb.2007.8.115.
The dynamics of bacterial infection, innate immune response, and antibiotic treatment
Mudassar Imran and Hal L. Smith
We develop a simple mathematical model of a bacterial colonization of host tissue which takes account of nutrient availability and innate immune response. The model features an infection-free state which is locally but not globally attracting implying that a super-threshold bacterial inoculum is required for successful colonization and tissue infection. A subset $B$ of the domain of attraction of the disease-free state is explicitly identified. The dynamics of antibiotic treatment of the infection is also considered. Successful treatment results if the antibiotic dosing regime drives the state of the system into $B$.
Mudassar Imran, Hal L. Smith. The dynamics of bacterial infection, innate immune response, and antibiotic treatment. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 127-143. doi: 10.3934/dcdsb.2007.8.127.
Allee effects in a discrete-time host-parasitoid model with stage structure in the host
S. R.-J. Jang
We study a single-species population model with two stages, adults and juveniles, together with its counterpart incorporating Allee effects. In these models, the fertility rate of an adult individual is assumed to be density dependent on the total adult population size, and the transition probability from juvenile to adult over each time unit is assumed to be constant. Both models exhibit a boundary $2$-cycle. Population persistence can occur for the model without the Allee effects. However, there exists a population threshold below which the population will go extinct if the Allee effects are considered. We also propose a host-parasitoid model with stage structure in the host. Both populations can coexist with each other under some conditions if Allee effects are ignored. On the other hand, there exists a host population threshold below which both populations become extinct if Allee effects are incorporated into the interaction.
S. R.-J. Jang. Allee effects in a discrete-time host-parasitoid model with stage structure in the host. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 145-159. doi: 10.3934/dcdsb.2007.8.145.
Complex dynamics of a simple epidemic model with a nonlinear incidence
Jianquan Li, Yicang Zhou, Jianhong Wu and Zhien Ma
A simple epidemic model with a nonlinear incidence rate and two compartments is studied. The backward bifurcation is described and the corresponding threshold is calculated. The Hopf bifurcation and Bogdanov-Takens bifurcation are analyzed, and numerical evidence for stable or unstable limit cycles is provided.
Jianquan Li, Yicang Zhou, Jianhong Wu, Zhien Ma. Complex dynamics of a simple epidemic model with a nonlinear incidence. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 161-173. doi: 10.3934/dcdsb.2007.8.161.
Periodic solutions in a delayed predator-prey model with nonmonotonic functional response
Wan-Tong Li and Yong-Hong Fan
By using the continuation theorem of coincidence degree theory, the existence of a positive periodic solution is established for the delayed predator-prey model with nonmonotonic functional response
$x^{\prime}(t)=x(t)\left[ a(t)-b(t)x(t)\right] -\frac{x(t)y(t)}{m^2+x^2(t)},$
$y^{\prime}(t)=y(t)\left[ \frac{\mu (t)x(t-\tau )}{m^2+x^2(t-\tau )} -d(t)\right],$
where $a(t)$, $b(t)$, $\mu (t)$ and $d(t)$ are all positive periodic continuous functions with period $\omega >0$, and $m>0$ and $\tau \geq 0$ are constants.
Wan-Tong Li, Yong-Hong Fan. Periodic solutions in a delayed predator-prey models with nonmonotonic functional response. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 175-185. doi: 10.3934/dcdsb.2007.8.175.
Biodiversity through co-opetition
Julián López-Gómez and M. Molina-Meyer
Ecology, Economy and Management Science require a huge interdisciplinary effort to ascertain the hidden mechanisms driving the evolution of communities and firm networks. This article shows that strategic alliances in competitive environments might provoke an explosive increase of productivity and stability through a feedback mechanism promoted by cooperation, while competition causes segregation within cooperative profiles. Some further speciation and radiation mechanisms enhancing innovation, facilitated by environmental heterogeneities or specific market regulations, might explain the biodiversity of life and the high complexity of industrial and financial markets. Extinctions are likely to occur through the lack of adaptation of the strongest competitors to sudden environmental stress.
Julián López-Gómez, M. Molina-Meyer. Biodiversity through co-opetition. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 187-205. doi: 10.3934/dcdsb.2007.8.187.
Modeling the rock - scissors - paper game between bacteriocin producing bacteria by Lotka-Volterra equations
Gunter Neumann and Stefan Schuster
In this paper we analyze the population dynamics of bacteria competing by anti-bacterial toxins (bacteriocins). Three types of bacteria involved in these dynamics can be distinguished: toxin producers, resistant bacteria and sensitive bacteria. Their interplay can be regarded as a rock-scissors-paper (RSP) game. Here, this is modeled by a reasonable three-dimensional Lotka-Volterra (LV) type differential equation system. In contrast to earlier approaches to modeling the RSP game such as replicator equations, all interaction terms have negative signs because the interaction between the three different types of bacteria is purely competitive, either by toxification or by competition for nutrients. The model allows one to choose asymmetric parameter values. Depending on parameter values, our model gives rise to a stable steady state, a stable limit cycle or a heteroclinic orbit with three fixed points, each fixed point corresponding to the existence of only one bacteria type. An alternative model, the May-Leonard model, leads to coexistence only under very restricted conditions. We carry out a comprehensive analysis of the generic stability conditions of our model, using, among other tools, the Volterra-Lyapunov method. We also give biological interpretations of our theoretical results, in particular, of the predicted dynamics and of the ranges of parameter values where different dynamic behavior occurs. For example, one result is that the intrinsic growth rate of the producer is lower than that of the resistant strain while its growth yield is higher. This is in agreement with experimental results for the bacterium Listeria monocytogenes.
Gunter Neumann, Stefan Schuster. Modeling the rock - scissors - paper game between bacteriocin producing bacteria by Lotka-Volterra equations. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 207-228. doi: 10.3934/dcdsb.2007.8.207.
A model of HIV-1 infection with HAART therapy and intracellular delays
Rachid Ouifki and Gareth Witten
We consider a model of HIV-1 infection with triple drug therapy (HAART) and three delays: the first delay represents the time between the infection and the viral production, the second is associated with the loss of target cells by infection, and the third represents the time for the newly produced virions to become infectious. We show that the incorporation of these delays improves the critical efficacy of the treatment, and destabilizes the infected steady state or leads to an infected steady state with more healthy cells and fewer infected cells and viruses. We also consider the case of periodic treatment. We analyze the stability of the viral free steady state and derive an effective strategy for reducing the viral load.
Rachid Ouifki, Gareth Witten. A model of HIV-1 infection with HAART therapy and intracellular delays. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 229-240. doi: 10.3934/dcdsb.2007.8.229.
Immune system memory realization in a population model
Jianhong Wu, Weiguang Yao and Huaiping Zhu
A general process of the immune system consists of an effector stage and a memory stage. Current theoretical studies of the immune system often focus on the memory stage and pay less attention to the role of non-immune-system substances, such as tissue cells, in adjusting the dynamical behavior of the immune system. We propose a mathematical population model to investigate the interaction between influenza A virus (IAV) susceptible tissue cells and generic immune cells when the tissue is invaded by IAV. We carry out a linear stability analysis and numerically study the Neimark-Sacker bifurcation of the model. The behavior of the model system agrees with some important experimental or clinical observations for IAV. However, we show that without considering the space between tissue cells, the expected memory stage does not form. By considering the space which allows antibodies to bind antigens, the memory stage then forms without losing the properties of the system in the effector stage.
Jianhong Wu, Weiguang Yao, Huaiping Zhu. Immune system memory realization in a population model. Discrete & Continuous Dynamical Systems - B, 2007, 8(1): 241-259. doi: 10.3934/dcdsb.2007.8.241.
At some point you might come upon an operation that you wish existed in TensorFlow or PyTorch, but for which no GPU implementation is available. It might even be something that is easily parallelizable on a GPU. So why not write your own CUDA kernel and integrate it into your main program? Let us start with the CUDA kernel itself, since it will be the same in both implementations.
Kernel: name of a function run by CUDA on the GPU.
Thread: CUDA will run many threads in parallel on the GPU. Each thread executes the kernel.
Blocks: Threads are grouped into blocks, a programming abstraction. Currently a thread block can contain up to 1024 threads.
When should we write a custom CUDA kernel?
Data size: you should make sure you will launch a lot of threads and blocks in order to beat the CUDA overhead. Otherwise, you might not see a great improvement between a CPU and GPU version.
Parallelizable: you should be able to pinpoint a single or double for loop where the iterations are independent of each other.
The only tricky part is to figure out how to balance the load: how many threads and blocks should be launched, what portion of the data is going to be processed by each of these.
A list of crop center coordinates in the original image, of shape ($M$, 3), where $M$ is the total number of crops.
The size $D$ of a crop (we require for simplicity that all crops have the same size).
The output should be a list of the crops and have a shape ($M$, $D$, $D$, $D$, $C$).
In our case, a first naive approach would be to assign to each thread a single voxel to copy from the input data array to the output crop array. We launch as many blocks as we have crops (i.e. $M$ blocks), and the threads inside a block will go over all the voxels of a single crop (i.e. $D^3 \times C$ voxels per block). Remember that the number of threads per block is fixed, so a single thread might have to work on several voxels, not just one.
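As a minimal host-side sketch of this launch geometry (the kernel name crop_kernel and all pointer names below are illustrative assumptions, not the original code; the kernel itself is sketched further down):

```
// Naive launch: one block per crop, a fixed number of threads per block
// (1024 is the current per-block limit mentioned above).
// M, image_ptr, centers_ptr, crops_ptr, etc. are assumed to be already allocated.
const int threads_per_block = 1024;
const int num_blocks = M;  // one block per crop
crop_kernel<<<num_blocks, threads_per_block>>>(image_ptr, centers_ptr, crops_ptr,
                                               image_size, num_channels, crop_size, M);
```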
The keyword __global__ signals that the function will be compiled by nvcc (the NVIDIA compiler, a wrapper around gcc) and run on the GPU. In our case we will need a pointer to the (flattened) big image, an array of (flattened) crop center coordinates, as well as the image size, the number of channels, the crop size, and the total number of crops. The output will be stored in the crops_ptr array.
Since the crop centers array was flattened, we retrieve the current crop center coordinates from the block index blockIdx.x. We specified one block per crop, hence the block index corresponds exactly to the crop index.
We have to process all the voxels of the output array. Each thread loops over them with a stride equal to the block dimension (i.e. the total number of threads working on this crop).
We reconstruct the coordinates of the current voxel inside the crop from the loop index. Note that this and all the following conversions between index and coordinates depend on how the array was flattened.
We retrieve the equivalent coordinates in the original image.
Finally, we copy the voxel from the original image array to the final output array.
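Putting these steps together, a minimal sketch of such a kernel could look as follows. This is only an illustration under some assumptions: the names are made up, the image is taken to be cubic with side image_size, both arrays are flattened in x, y, z, channel order, crop centers are integer voxel coordinates, and boundary checks are omitted for brevity.

```
__global__ void crop_kernel(const float* image_ptr,   // flattened input image
                            const int*   centers_ptr, // flattened (M, 3) crop centers
                            float*       crops_ptr,   // flattened (M, D, D, D, C) output
                            int image_size, int num_channels,
                            int crop_size, int num_crops)
{
    const int crop_idx = blockIdx.x;           // one block per crop
    if (crop_idx >= num_crops) return;

    // Center of the current crop in the original image.
    const int cx = centers_ptr[3 * crop_idx + 0];
    const int cy = centers_ptr[3 * crop_idx + 1];
    const int cz = centers_ptr[3 * crop_idx + 2];

    const int half = crop_size / 2;
    const int voxels_per_crop = crop_size * crop_size * crop_size * num_channels;

    // Each thread strides over the voxels of this crop.
    for (int i = threadIdx.x; i < voxels_per_crop; i += blockDim.x) {
        // Recover (x, y, z, c) inside the crop from the flat index.
        const int c = i % num_channels;
        const int z = (i / num_channels) % crop_size;
        const int y = (i / (num_channels * crop_size)) % crop_size;
        const int x =  i / (num_channels * crop_size * crop_size);

        // Equivalent coordinates in the original image.
        const int gx = cx - half + x;
        const int gy = cy - half + y;
        const int gz = cz - half + z;

        const long long src = (((long long)gx * image_size + gy) * image_size + gz)
                              * num_channels + c;
        const long long dst = (long long)crop_idx * voxels_per_crop + i;
        crops_ptr[dst] = image_ptr[src];
    }
}
```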
After profiling the previous CUDA kernel we found out that it wasn't much faster than a NumPy version running on the CPU. The reason is that the number of crops (estimated around 100) was not high enough to harness the GPU power, which relies on high parallelization. A second, more refined approach is to set the number of blocks to $M \times D$: each block will process a 2D slice of a single crop, i.e. $D^2 \times C$ voxels.
The kernel declaration does not change.
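Only the launch configuration and the index arithmetic inside the body change. A hedged sketch of the refined indexing, with the same illustrative names and flattening assumptions as above:

```
// Refined approach: launch M * D blocks, each handling one 2D slice (D * D * C voxels):
//   crop_kernel<<<num_crops * crop_size, threads_per_block>>>(...);
const int crop_idx = blockIdx.x / crop_size;   // which crop this block works on
const int slice_x  = blockIdx.x % crop_size;   // which slice of that crop (its x index)
const int voxels_per_slice = crop_size * crop_size * num_channels;

for (int i = threadIdx.x; i < voxels_per_slice; i += blockDim.x) {
    const int c = i % num_channels;
    const int z = (i / num_channels) % crop_size;
    const int y =  i / (num_channels * crop_size);
    // Map (slice_x, y, z, c) to image coordinates and copy, exactly as in the naive kernel.
}
```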
This was a short introduction to CUDA kernels. This custom one is able to crop a big image into smaller pieces. Now you probably want to compile it in order to integrate it into your main TensorFlow or PyTorch program, so continue with the next part of the tutorial.
\begin{document}
\title{Inapproximability of Matrix $p\rightarrow q$ Norms} \author{ $\qquad$ Vijay Bhattiprolu\thanks{Supported by NSF CCF-1422045 and CCF-1526092. {\tt [email protected]}. Part of the work was done while visiting NTU, Singapore.} \and Mrinalkanti Ghosh\thanks{Supported by NSF CCF-1254044 \tt [email protected]} \and Venkatesan Guruswami\thanks{Supported in part by NSF grant CCF-1526092. {\tt [email protected]}. Part of the work was done while visiting NTU, Singapore. }$\qquad$ \and Euiwoong Lee\thanks{Supported by the Simons Institute for the Theory of Computing. {\tt [email protected]}} \and Madhur Tulsiani \thanks{Supported by NSF CCF-1254044 \tt [email protected]} }
\setcounter{page}{0}
\date{}
\maketitle \draftbox \thispagestyle{empty}
We study the problem of computing the \onorm{p}{q} of a matrix $A \in {\mathbb R}^{m \times n}$, defined as
\[ \norm{p}{q}{A} ~:=~ \max_{x \in {\mathbb R}^n \setminus \{0\}} \frac{\norm{q}{Ax}}{\norm{p}{x}} \,. \]
This problem generalizes the spectral norm of a matrix ($p=q=2$) and the Grothendieck problem ($p=\infty$, $q=1$), and has been widely studied in various regimes.
When $p \geq q$, the problem exhibits a dichotomy: constant factor approximation algorithms are known if $2 \in [q,p]$, and the problem is hard to approximate within almost polynomial factors when $2 \notin [q,p]$.
The regime when $p < q$, known as \emph{hypercontractive norms}, is particularly significant for various applications but much less well understood. The case with $p = 2$ and $q > 2$ was studied by [Barak et al.\xspace, STOC'12] who gave sub-exponential algorithms for a promise version of the problem (which captures small-set expansion) and also proved hardness of approximation results based on the Exponential Time Hypothesis. However, no NP-hardness of approximation is known for these problems for any $p < q$.
We prove the first NP-hardness result for approximating hypercontractive norms.
We show that for any $1< p < q < \infty$ with $2 \notin [p,q]$,
$\norm{p}{q}{A}$ is hard to approximate within $2^{O((\log
n)^{1-\varepsilon})}$ assuming $\classfont{NP}\xspace \not \subseteq \BPTIME{2^{(\log
n)^{O(1)}}}$.
En route to the above result, we also prove new results for the case when $p \geq q$ with $2 \in [q,p]$.
For such $p$ and $q$, we show that
$\norm{p}{q}{A}$ is hard to approximate within any factor smaller
than $1/(\gamma_{p^\ast} \cdot \gamma_q)$, where for any $r$, $\gamma_r$
denotes the $r^{th}$ norm of a standard normal random variable, and $p^\ast := p/(p-1)$ is the dual norm of $p$. The hardness factor is tight for the cases when $p$ or $q$ equals $2$.
\ifnum\confversion=0 \pagenumbering{roman} \tableofcontents
\fi
\pagenumbering{arabic} \setcounter{page}{1}
\section{Introduction}\label{sec:intro}
We consider the problem of finding the \onorm{p}{q} of a given matrix $A \in {\mathbb R}^{m \times n}$, which is defined as
\[ \norm{p}{q}{A} ~:=~ \max_{x \in {\mathbb R}^n \setminus \{0\}} \frac{\norm{q}{Ax}}{\norm{p}{x}} \,. \]
The quantity $\norm{p}{q}{A}$ is a natural generalization of the well-studied spectral norm, which corresponds to the case $p=q=2$. For general $p$ and $q$, this quantity computes the maximum distortion (stretch) of the operator $A$ from the normed space $\ell_p^n$ to $\ell_q^m$.
The case when $p = \infty$ and $q = 1$ is the well known Grothendieck problem \cite{KN12, Pisier12}, where the goal is to maximize $\ip{y, Ax}$ subject to $\norm{\infty}{x}, \norm{\infty}{y} \leq 1$. In fact, via simple duality arguments (see \cref{sec:prelims}), the general problem of computing $\norm{p}{q}{A}$ can be seen to be equivalent to the following variant of the Grothendieck problem (and to $ \norm{q^*}{p^*}{A^T} $)
\[ \norm{p}{q}{A} ~=~ \max_{ \substack{\norm{p}{x} \leq 1 \\ \norm{q^*}{y} \leq 1}} \ip{y, Ax} ~=~ \norm{q^*}{p^*}{A^T} \,, \]
where $p^*, q^*$ denote the dual norms of $p$ and $q$, satisfying $1/p + 1/p^* = 1/q + 1/q^* = 1$.
\paragraph{Hypercontractive norms.}
The case when $p < q$, known as the case of \emph{hypercontractive} norms, also has a special significance to the analysis of random walks, expansion and related problems in hardness of approximation \cite{Biswal11, BBHKSZ12}.
The problem of computing $\norm{2}{4}{A}$ is also known to be equivalent to determining the maximum acceptance probability of a quantum protocol with multiple unentangled provers, and is related to several problems in quantum information theory \cite{HM13, BH15}.
Bounds on hypercontractive norms of operators are also used to prove expansion of small sets in graphs.
Indeed, if $f$ is the indicator function of set $S$ of measure $\delta$ in a graph with adjacency matrix $A$, then we have that for any $p \leq q$,
\[ \Phi(S) ~=~ 1 - \frac{\ip{f, A f}}{\norm{2}{f}^2} ~\geq~ 1 - \frac{\norm{q^*}{f} \cdot \norm{q}{Af}}{\delta} ~\geq~ 1 - \norm{p}{q}{A} \cdot \delta^{1/p - 1/q} \,. \]
It was proved by Barak et al.\xspace \cite{BBHKSZ12} that the above connection to small-set expansion can in fact be made two-sided for a special case of the \onorm{2}{q}.
They proved that to resolve the promise version of the small-set expansion (SSE) problem, it suffices to distinguish the cases $\norm{2}{q}{A} \leq c \cdot \sigma_{\min}$ and $\norm{2}{q}{A} \geq C\cdot \sigma_{\min}$, where $\sigma_{\min}$ is the least non-zero singular value of $A$ and $C > c > 1$ are appropriately chosen constants based on the parameters of the SSE problem.
Thus, the approximability of \onorm{2}{q} is closely related to the small-set expansion problem. In particular, proving the NP-hardness of approximating the \onorm{2}{q} is (necessarily) an intermediate goal towards proving the Small-Set Expansion Hypothesis of Raghavendra and Steurer \cite{RS10}.
However, relatively few algorithmic and hardness results are known for approximating hypercontractive norms.
A result of Steinberg \cite{Steinberg05} gives an upper bound of $O(\max\inbraces{m,n}^{25/128})$ on the approximation factor, for all $p,q$.
For the case of \onorm{2}{q} (for any $q > 2$), Barak et al.\xspace \cite{BBHKSZ12} give an approximation algorithm for the promise version of the problem described above, running in time $\exp\inparen{\tilde{O}(n^{2/q})}$.
They also provide an additive approximation algorithm for the \onorm{2}{4} (where the error depends on \onorm{2}{2} and \onorm{2}{\infty} of $A$), which was extended to the \onorm{2}{q} by Harrow and Montanaro \cite{HM13}.
Barak et al.\xspace also prove NP-hardness of approximating $\norm{2}{4}{A}$ within a factor of $1 + \tilde{O}(1/ n^{o(1)})$, and hardness of approximating better than $\exp\inparen{O((\log n)^{1/2-\varepsilon})}$ in polynomial time, assuming the Exponential Time Hypothesis (ETH).
This reduction was also used by Harrow, Natarajan and Wu \cite{HNW16} to prove that $\tilde{O}(\log n)$ levels of the Sum-of-Squares SDP hierarchy cannot approximate $\norm{2}{4}{A}$ within any constant factor.
It is natural to ask if the bottleneck in proving (constant factor) hardness of approximation for \onorm{2}{q} arises from the nature of the domain (the $\ell_2$ ball) or from the hypercontractive nature of the objective. As discussed in \cref{sec:overview}, \emph{all} hypercontractive norms present a barrier for gadget reductions, since if a ``true'' solution $x$ is meant to encode the assignment to a (say) label cover problem with consistency checked via local gadgets, then (for $q > p$), a ``cheating solution'' may make the value of $\norm{q}{Ax}$ very large by using a sparse $x$ which does not carry any meaningful information about the underlying label cover problem.
We show that (somewhat surprisingly, at least for the authors) it is indeed possible to overcome the barrier for gadget reductions for hypercontractive norms, for any $2 < p < q$ (and by duality, for any $p < q< 2$).
This gives the first NP-hardness result for hypercontractive norms (under randomized reductions). Assuming ETH, this also rules out a constant factor approximation algorithm running in time $2^{n^{\delta}}$ for some $\delta := \delta(p, q)$.
\begin{theorem} For any $p, q$ such that $1 < p \leq q < 2$ or $2 < p \leq q < \infty$ and a constant $c > 1$, it is NP-hard to approximate \onorm{p}{q} within a factor of $c$. The reduction runs in time $n^{B_p
\cdot q}$ for $2 < p < q$, where $B_p = {\mathrm{poly}}(1 / (1 - \gamma_{p^*}))$. \label{thm:main_hype_vanilla} \end{theorem}
We show that the above hardness can be strengthened to any constant factor via a simple tensoring argument.
In fact, this also shows that it is hard to approximate $\norm{p}{q}{A}$ within almost polynomial factors unless NP is in randomized quasi-polynomial time. This is the content of the following theorem.
\begin{theorem} For any $p, q$ such that $1 < p \leq q < 2$ or $2 < p \leq q < \infty$ and $\varepsilon > 0$, there is no polynomial time algorithm that approximates the \onorm{p}{q} of an $n \times n$ matrix within a factor $2^{\log^{1 - \varepsilon} n}$ unless $\classfont{NP}\xspace \subseteq \BPTIME{2^{(\log n)^{O(1)}}}$. When $q$ is an even integer, the same inapproximability result holds unless $\classfont{NP}\xspace \subseteq \DTIME{2^{(\log
n)^{O(1)}}}$ \label{thm:main_hype} \end{theorem}
We also note that the operator $A$ arising in our reduction in \cref{thm:main_hype_vanilla}
satisfies $\sigma_{\min}(A) \approx 1$ (and is in fact a product of a carefully chosen projection and a scaled random Gaussian matrix).
For such an $A$, we prove the hardness of distinguishing $\norm{p}{q}{A} \leq c$ and $\norm{p}{q}{A} \geq C$, for constants $C > c > 1$.
For the corresponding problem in the case of \onorm{2}{q}, Barak et al.\xspace \cite{BBHKSZ12} gave a subexponential algorithm running in time $\exp\inparen{O(n^{2/q})}$ (which works for every $C > c > 1$).
On the other hand, since the running time of our reduction is $n^{O(q)}$, assuming ETH, no algorithm can distinguish the above cases for \onorm{p}{q} in time $\exp\inparen{n^{o(1/q)}}$, for any $p \leq q$ when $2 \notin [p,q]$.
While the above results give some possible reductions for working with hypercontractive norms, it remains an interesting problem to understand the role of the domain as a barrier to proving hardness results for the \onorm{2}{q} problems. In fact, no hardness results are available even for the more general problem of polynomial optimization over the $\ell_2$ ball.
We view the above theorem as providing some evidence that while hypercontractive norms have been studied as a single class so far, the case when $2 \in [p,q]$ may be qualitatively different (with respect to techniques) from the case when $2 \notin [p,q]$.
This is indeed known to be true in the \emph{non-hypercontractive case} with $p \geq q$. In fact, our results are obtained via new hardness results for the case $p \geq q$, as described below.
\paragraph{The non-hypercontractive case.}
Several results are known in the case when $p \geq q$, and we summarize known results for matrix norms in \cref{fig:normbounds}, for the both the hypercontractive and non-hypercontractive cases.
While the case of $p=q=2$ corresponds to the spectral norm, the problem is also easy when $q = \infty$ (or equivalently $p = 1$) since this corresponds to selecting the row of $A$ with the maximum $\ell_{p^*}$ norm. Note that in general, \cref{fig:normbounds} is symmetric about the principal diagonal. Also note that if $\norm{p}{q}{A}$ is a hypercontractive norm ($p < q$) then so is the equivalent $\norm{q^*}{p^*}{A^T}$ (the hypercontractive and non-hypercontractive case are separated by the non-principal diagonal).
\begin{figure}
\caption{Upper and lower bounds for approximating $\|A\|_{p \rightarrow q}$. Arrows
indicate the region to which a boundary belongs and thicker shaded regions represent exact
algorithms. Our results are indicated by [$*$]. We omit UGC-based hardness results in the figure.}
\label{fig:normbounds}
\end{figure}
As is apparent from the figure, the problem of approximating $\norm{p}{q}{A}$ for $p \geq q$ admits good approximations when $2 \in [q,p]$, and is hard otherwise.
For the case when $2 \notin [q,p]$, an upper bound of $O(\max\{m,n\}^{25/128})$ on the approximation ratio was proved by Steinberg \cite{Steinberg05}. Bhaskara and Vijayaraghavan \cite{BV11} showed NP-hardness of approximation within any constant factor, and hardness of approximation within an $O\inparen{2^{(\log n)^{1-\varepsilon}}}$ factor for arbitrary $\varepsilon > 0$ assuming $\classfont{NP}\xspace \not\subseteq \DTIME{2^{(\log n)^{O(1)}}}$.
Determining the right constants in these approximations when $2 \in [q,p]$ has been of considerable interest in the analysis and optimization community.
For the case of \onorm{\infty}{1}, Grothendieck's theorem \cite{Grothendieck56} shows that the integrality gap of a semidefinite programming (SDP) relaxation is bounded by a constant, and the (unknown) optimal value is now called the Grothendieck constant $K_G$. Krivine \cite{Krivine77} proved an upper bound of $\pi/(2\ln(1+\sqrt{2})) = 1.782 \ldots$ on $K_G$, and it was later shown by Braverman et al.\xspace that $K_G$ is strictly smaller than this bound. The best known lower bound on $K_G$ is about $1.676$, due to (an unpublished manuscript of) Reeds \cite{Reeds91} (see also \cite{KO09} for a proof).
An upper bound of $K_G$ on the approximation factor also follows from the work of Nesterov \cite{Nesterov98} for any $p \geq 2 \geq q$. A later work of Steinberg \cite{Steinberg05} also gave an upper bound of $\min\inbraces{\gamma_p/\gamma_q, \gamma_{q^*}/\gamma_{p^*}}$, where $\gamma_p$ denotes $p^{th}$ norm of a standard normal random variable (i.e.,\xspace the $p$-th root of the $p$-th Gaussian moment). Note that Steinberg's bound is less than $K_G$ for some values of $(p,q)$, in particular for all values of the form $(2,q)$ with $q \leq 2$ (and equivalently $(p,2)$ for $p \geq 2$), where it equals $1/\gamma_q$ (and $1/\gamma_{p^*}$ for $(p,2)$).
On the hardness side, Bri\"et\xspace, Regev and Saket \cite{BRS15} showed NP-hardness of $\pi/2$ for the \onorm{\infty}{1}, strengthening a hardness result of Khot and Naor based on the Unique Games Conjecture (UGC) \cite{KN08} (for a special case of the Grothendieck problem when the matrix $A$ is positive semidefinite). Assuming UGC, a hardness result matching Reeds' lower bound was proved by Khot and O'Donnell \cite{KO09}, and hardness of approximating within $K_G$ was proved by Raghavendra and Steurer \cite{RS09}.
For a related problem known as the $L_p$-Grothendieck problem, where the goal is to maximize $\ip{x,Ax}$ for $\norm{p}{x} \leq 1$, results by Steinberg \cite{Steinberg05} and Kindler, Schechtman and Naor \cite{KNS10} give an upper bound of $\gamma_p^2$, and a matching lower bound was proved assuming UGC by \cite{KNS10}, which was strengthened to NP-hardness by Guruswami et al.\xspace \cite{GRSW16}.
However, note that this problem is quadratic and not necessarily bilinear, and is in general much harder than the Grothendieck problems considered here. In particular, the case of $p = \infty$ only admits an $\Theta(\log n)$ approximation instead of $K_G$ for the bilinear version \cite{AMMN06, ABHKS05}.
We extend the hardness results of \cite{BRS15} for the $\infty \rightarrow 1$ and $2 \rightarrow 1$ norms of a matrix to any $p \geq 2 \geq q$. The hardness factors obtained match the performance of known algorithms (due to Steinberg \cite{Steinberg05}) for the cases of $2 \rightarrow q$ and $p \rightarrow 2$.
\begin{theorem}
For any $p, q$ such that $\infty \geq p \geq 2 \geq q \geq 1$ and $\varepsilon > 0$,
it is NP-hard to approximate the \onorm{p}{q} within a factor $1 / (\gamma_{p^*} \gamma_q) - \varepsilon$.
\label{thm:main_nhc} \end{theorem}
In subsequent work \cite{BGGLT18b} motivated by the hardness results herein, we also give an improved approximation for \onorm{p}{q} when $2 \in [q,p]$ (inspired by the above hardness result) which achieves an approximation factor of $C_0\cdot (1 / (\gamma_{p^*} \gamma_q))$, where $C_0 \approx 1/(\ln(1+\sqrt{2}))$ is a constant comparable to that arising in Krivine's upper bound on the Grothendieck constant \cite{Krivine77}.
Both \cref{thm:main_hype_vanilla} and \cref{thm:main_nhc} are consequences of a more technical theorem, which proves hardness of approximating $\norm{2}{r}{A}$ for $r < 2$ (and hence $\norm{r^*}{2}{A}$ for $r^* > 2$) while providing additional structure in the matrix $A$ produced by the reduction. This is proved in \cref{sec:nhc}.
We also show our methods can be used to provide a simple proof (albeit via randomized reductions) of the $2^{\Omega((\log n)^{1-\varepsilon})}$ hardness for the non-hypercontractive case when $2 \notin [q,p]$, which was proved by \cite{BV11}. This is presented in \cref{sec:reverse}.
\subsection{Proof Overview}
\label{sec:overview}
\paragraph{The hardness of proving hardness for hypercontractive norms.}
Reductions for various geometric problems use a ``smooth'' version of the Label Cover problem, composed with long-code functions for the labels of the variables. In various reductions, including the ones by Guruswami et al.\xspace \cite{GRSW16} and Bri\"et\xspace et al.\xspace \cite{BRS15} (which we closely follow) the solution vector $x$ to the geometric problem consists of the Fourier coefficients of the various long-code functions, with a ``block'' $x_v$ for each vertex of the label-cover instance. The relevant geometric operation (transformation by the matrix $A$ in our case) consists of projecting to a space which enforces the consistency constraints derived from the label-cover problem, on the Fourier coefficients of the encodings.
However, this strategy presents two problems when designing reductions for hypercontractive norms. Firstly, while projections maintain the $\ell_2$ norm of encodings corresponding to consistent labelings and reduce that of inconsistent ones, their behavior is harder to analyze for $\ell_p$ norms for $p \neq 2$. Secondly, the \emph{global} objective of maximizing $\norm{q}{Ax}$ is required to enforce different behavior within the blocks $x_v$ than in the full vector $x$. The block vectors $x_v$ in the solution corresponding to a satisfying assignment of label cover are intended to be highly sparse, since they correspond to ``dictator functions'' which have only one non-zero Fourier coefficient. This can be enforced in a test using the fact that for a vector $x_v \in {\mathbb R}^t$, $\norm{q}{x_v}$ is a convex function of $\norm{p}{x_v}$ when $p \leq q$, and is maximized for vectors with all the mass concentrated in a single coordinate.
However, a global objective function which tries to maximize $\sum_v \norm{x_v}_q^q$, also achieves a high value from global vectors $x$ which concentrate all the mass on coordinates corresponding to few vertices of the label cover instance, and do not carry any meaningful information about assignments to the underlying label cover problem.
Since we can only check for a global objective which is the $\ell_q$ norm of some vector involving coordinates from blocks across the entire instance, it is not clear how to enforce local Fourier concentration (dictator functions for individual long codes) and global well-distribution (meaningful information regarding assignments of most vertices) using the same objective function.
While the projector $A$ also enforces a linear relation between the block vectors $x_u$ and $x_v$ for all edges $(u,v)$ in the label cover instance, using this to ensure well-distribution across blocks
seems to require a very high density of constraints in the label cover instance, and no hardness
results are available in this regime.
\paragraph{Our reduction.}
We show that when $2 \notin [p,q]$, it is possible to bypass the above issues using hardness of $\norm{2}{r}{A}$ as an intermediate (for $r < 2$). Note that since $\norm{r}{z}$ is a \emph{concave} function of $\norm{2}{z}$ in this case, the test favors vectors in which the mass is well-distributed and thus solves the second issue. For this, we use local tests based on the Berry-Ess\'een\xspace theorem (as in \cite{GRSW16} and \cite{BRS15}). Also, since the starting point now is the $\ell_2$ norm, the effect of projections is easier to analyze. This reduction is discussed in \cref{sec:nhc}.
By duality, we can interpret the above as a hardness result for $\norm{p}{2}{A}$ when $p > 2$ (using $r = p^*$). We then convert this to a hardness result for \onorm{p}{q} in the hypercontractive case by composing $A$ with an ``approximate isometry'' $B$ from $\ell_2 \rightarrow \ell_q$ (i.e.,\xspace $\forall y~ \norm{q}{By} \approx \norm{2}{y}$) since we can replace $\norm{2}{Ax}$ with $\norm{q}{BAx}$.
Milman's version of the Dvoretzky theorem \cite{Vershynin17} implies that random operators into a sufficiently high dimensional ($n^{O(q)}$) space satisfy this property, which then yields constant factor hardness results for the \onorm{p}{q}. A similar application of Dvoretzky's theorem also appears in an independent work of Krishnan et al.\xspace \cite{KMW18} on sketching matrix norms.
We also show that the hardness for hypercontractive norms can be amplified via tensoring. This was known previously for the \onorm{2}{4} using an argument based on parallel repetition for QMA \cite{HM13}, and for the case of $p = q$ \cite{BV11}. We give a simple argument based on convexity, which proves this for all $p \leq q$, but appears to have gone unnoticed previously. The amplification is then used to prove hardness of approximation within almost polynomial factors.
\paragraph{Non-hypercontractive norms.} We also use the hardness of $\norm{2}{r}{A}$ to obtain hardness for the non-hypercontractive case of $\norm{p}{q}{A}$ with $q < 2 < p$, by using an operator that ``factorizes'' through $\ell_2$.
In particular, we obtain hardness results for $\norm{p}{2}{A}$ and $\norm{2}{q}{A}$ (of factors $1/\gamma_{p^*}$ and $1/\gamma_q$ respectively) using the reduction in \cref{sec:nhc}. We then combine these hardness results using additional properties of the operator $A$ obtained in the reduction, to obtain a hardness of factor $(1/\gamma_{p^*}) \cdot (1/\gamma_{q})$ for the \onorm{p}{q} for $p > 2 > q$. The composition, as well as the hardness results for hypercontractive norms, are presented in \cref{sec:hypercontractive}.
We also obtain a simple proof of the $2^{\Omega((\log n)^{1-\varepsilon})}$ hardness for the non-hypercontractive case when $2 \notin [q,p]$ (already proved by Bhaskara and Vijayaraghavan \cite{BV11}) via an approximate isometry argument as used in the hypercontractive case. In the hypercontractive case, we started from a constant factor hardness of the \onorm{p}{2} and obtained the same factor for \onorm{p}{q} using the fact that for a random Gaussian matrix $B$ of appropriate dimensions, we have $\norm{q}{Bx} \approx \norm{2}{x}$ for all $x$. We then amplify the hardness via tensoring.
In the non-hypercontractive case, we start with a hardness for \onorm{p}{p} (obtained via the above isometry), which we \emph{first} amplify via tensoring. We then apply another approximate isometry result due to Schechtman \cite{Schechtman87}, which gives a samplable distribution ${\mathcal D}$ over random matrices $B$ such that with high probability over $B$, we have $\norm{q}{Bx} \approx \norm{p}{x}$ for all $x$.
We thus view the above results as showing that combined with a basic hardness for \onorm{p}{2}, the basic ideas of duality, tensoring, and embedding (which builds on powerful results from functional analysis) can be combined in powerful ways to prove strong results in both the hypercontractive and non-hypercontractive regimes.
\section{Preliminaries and Notation}\label{sec:prelims} \ifnum\confversion=0 \subsection{Matrix Norms} \fi For a vector $x\in{\mathbb R}^n$, throughout this paper we will use $x(i)$ to denote its $i$-th coordinate. For $p \in [1, \infty)$, we define $\cnorm{p}{\cdot}$ to denote the counting $p$-norm and $\enorm{p}{\cdot}$ to denote the expectation $p$-norm; i.e.,\xspace for a vector $x\in{\mathbb R}^n$, \[
\cnorm{p}{x} := \inparen{\sum_{i\in [n]} |x(i)|^{p}}^{1/p}
\quad \mbox{ and } \quad
\enorm{p}{x} := \Ex{i\sim [n]}{|x(i)|^p}^{1/p} =\inparen{\frac1n \cdot \sum_{i\in [n]} |x(i)|^{p}}^{1/p}. \] Clearly $\cnorm{p}{x} = \enorm{p}{x}\cdot n^{1/p}$.
For $p = \infty$, we define $\cnorm{\infty}{x} = \enorm{\infty}{x} := \max_{i \in [n]} |x(i)|$. We will use $p^*$ to denote the `dual' of $p$, i.e. $p^* = p/(p-1)$. Unless stated otherwise, we usually work with $\cnorm{p}{\cdot}$. We also define inner product $\cip{x,y}$ to denote the inner product under the counting measure unless stated otherwise; i.e.,\xspace for two vectors $x, y \in {\mathbb R}^n$, $\cip{x, y} := \sum_{i\in [n]} x(i)y(i)$.
\iffalse In this paper, we will use the general $\norm{p}{\cdot}$ notation in a statement only when the statement remains true after replacing all occurrences of $\norm{p}{\cdot}$ by counting norms, as well as after replacing all occurrences by expectation norms. Similarly, we use the general $\ip{\cdot, \cdot}$ notation when the statement holds under both counting and expectation inner products. \fi
We next record a well-known fact about $p$-norms that is used in establishing many duality statements.
\begin{observation} \label{p-norm:dual}
For any $p\in [1,\infty]$, $\cnorm{p}{x} = \sup_{\cnorm{p^*}{y}=1} \,\mysmalldot{y}{x}$. \end{observation}
We next define the primary problems of interest in this paper. \begin{definition}
For $p,q\in [1,\infty]$, the \onorm{p}{q} problem is to maximize
\[
\frac{\cnorm{q}{Ax}}{\cnorm{p}{x}}
\]
given an $m\times n$ matrix $A$.
\end{definition}
\begin{definition}
For $p,q\in [1,\infty]$, we define a generalization of the Grothendieck problem, namely \groth{p}{q}, as the
problem of computing
\[
\sup_{\cnorm{p}{y}=1} \,\sup_{\cnorm{q}{x}=1} \mysmalldot{y}{Ax}
\]
given an $m\times n$ matrix $A$. \end{definition}
The original Grothendieck problem is precisely \groth{\infty}{\infty}. We next state the well known equivalence of \onorm{p}{q}, \groth{q\textsuperscript{*}}{p}, and \onorm{q\textsuperscript{*}}{p\textsuperscript{*}}. \begin{observation} \label{onorm:groth}
For any $p,q\in [1,\infty]$ and any matrix $A$, \[ \cnorm{p}{q}{A} =\sup_{\cnorm{q^*}{y}=1}\,\sup_{\cnorm{p}{x}=1}\mysmalldot{y}{Ax} = \cnorm{q^*}{p^*}{A^T}. \] \end{observation} \begin{proof} Using $\mysmalldot{y}{Ax} = \mysmalldot{x}{A^Ty}$, \begin{align*}
\cnorm{p}{q}{A} &= \sup_{\cnorm{p}{x}=1}\cnorm{q}{Ax} = \sup_{\cnorm{p}{x}=1}\,\sup_{\cnorm{q^*}{y}=1}\mysmalldot{y}{Ax} = \sup_{\cnorm{q^*}{y}=1}\,\sup_{\cnorm{p}{x}=1}\mysmalldot{y}{Ax} \\ &=
\sup_{\cnorm{p}{x}=1}\,\sup_{\cnorm{q^*}{y}=1}\mysmalldot{x}{A^Ty}
=
\sup_{\cnorm{q^*}{y}=1}\cnorm{p^*}{A^Ty}
=
\cnorm{q^*}{p^*}{A^T}\,. \tag*{\qedhere}
\end{align*} \end{proof}
The following observation will be useful for composing hardness maps for \onorm{p}{2} and \onorm{2}{q} to get \onorm{p}{q} hardness for when $p>q$ and $p\geq 2 \geq q$. \begin{observation} \label{composition:soundness}
For any $p,q,r\in [1,\infty]$ and any matrices $B, C$, \[
\cnorm{p}{q}{BC}
=
\sup_x \frac{\cnorm{q}{BCx}}{\cnorm{p}{x}}
\leq
\sup_x \frac{\cnorm{r}{q}{B} \cnorm{r}{Cx}}{\cnorm{p}{x}} \leq
\cnorm{r}{q}{B} \cnorm{p}{r}{C}. \] \end{observation}
\ifnum\confversion=0
\subsection{Fourier Analysis} \label{sec:fourier} We introduce some basic facts about Fourier analysis of Boolean functions. Let $R \in {\mathbb{N}}$ be a positive integer, and consider a function $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$. For any subset $S \subseteq [R]$ let $\chi_S := \prod_{i \in S} x_i$. Then we can represent $f$ as \begin{equation} \label{eq:inverse_fourier} f(x_1, \dots, x_R) = \sum_{S \subseteq [R]} \widehat{f}(S) \cdot \chi_S(x_1, \dots x_R), \end{equation} where \begin{equation} \label{eq:fourier} \widehat{f}(S) = {\mathbb E}_{x \in \{ \pm 1 \}^R} [f(x) \cdot \chi_S(x)] \mbox{ for all } S \subseteq [R]. \end{equation} The {\em Fourier transform} refers to a linear operator $F$ that maps $f$ to $\widehat{f}$ as defined in~\eqref{eq:fourier}. We interpret $\widehat{f}$ as a $2^R$-dimensional vector whose coordinates are indexed by $S \subseteq [R]$. We endow $f$ with the expectation norm and $\widehat{f}$ with the counting norm; i.e.,
\enorm{p}{f} := \left( \Ex{x \in \{ \pm 1 \}^R}{|f(x)|^p} \right)^{1/p} \quad \mbox{ and } \quad
\cnorm{p}{\widehat{f}} := \left( \sum_{S \subseteq [R]} | \widehat{f}(S) |^p \right)^{1/p}. \] as well as the corresponding inner products $\mysmalldot{f}{g}$ and $\mysmalldot{\widehat{f}}{\widehat{g}}$ consistent with their $2$-norms. We also define the {\em inverse Fourier transform} $F^T$ to be a linear operator that maps a given $\widehat{f} : 2^R \rightarrow {\mathbb R}$ to $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$ defined as in~\eqref{eq:inverse_fourier}. We state the following well-known facts from Fourier analysis. \iffalse \begin{observation} $F$ is an orthogonal transformation; i.e., for any $f, g : \{ \pm 1 \}^R \rightarrow {\mathbb R}$, \[\eip{f, g} = \cip{F f, F g}.\] \end{observation} \fi \begin{observation} [Parseval's Theorem] For any $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$, $\enorm{2}{f} = \cnorm{2}{F f}$. \end{observation} \begin{observation} $F$ and $F^T$ form an adjoint pair; i.e., for any $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$ and $\widehat{g} : 2^R \rightarrow {\mathbb R}$, \[ \mysmalldot{\widehat{g}}{Ff} =
\mysmalldot{F^T \widehat{g}}{f}.
\] \end{observation} \begin{observation} $F^T F$ is the identity operator. \end{observation}
In~\cref{sec:nhc}, we also consider a {\em partial} Fourier transform $F_P$ that maps a given function $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$ to a vector $\widehat{f} : [R] \rightarrow {\mathbb R}$ defined as $\widehat{f}(i) = {\mathbb E}_{x \in \{ \pm 1 \}^R} [f(x) \cdot x_i]$ for all $i \in [R]$. It is the original Fourier transform where $\widehat{f}$ is further projected to $R$ coordinates corresponding to linear coefficients. The partial inverse Fourier transform $F_P^T$ is a transformation that maps a vector $\widehat{f} : [R] \rightarrow {\mathbb R}$ to a function $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$ as in~\eqref{eq:inverse_fourier} restricted to $S = \{ i \}$ for some $i \in [R]$. These partial transforms satisfy similar observations as above: (1) $\enorm{2}{f} \geq \cnorm{2}{F_P f}$, (2) $\enorm{2}{F_P^T \widehat{f}} = \cnorm{2}{\widehat{f}}$, (3) $F_P$ and $F_P^T$ form an adjoint pair, and (4) $(F_P^T F_P) f = f$ if and only if $f$ is a linear function.
\subsection{Smooth Label Cover} An instance of \problemmacro{Label Cover}\xspace is given by a quadruple ${\mathcal L} = (G, [R], [L], \Sigma)$ that consists of a regular connected graph $G = (V, E)$, a label set $[R]$ for some positive integer $R$, and a collection $\Sigma = ((\pi_{e, v}, \pi_{e, w}) : e = (v, w) \in E)$ of pairs of maps both from $[R]$ to $[L]$ associated with the endpoints of the edges in $E$. Given a {\em labeling} $\ell : V \rightarrow [R]$, we say that an edge $e = (v, w) \in E$ is {\em satisfied} if $\pi_{e, v}(\ell(v)) = \pi_{e, w}(\ell(w))$. Let ${\sf OPT}\xspace({\mathcal L})$ be the maximum fraction of edges satisfied by any labeling.
The following hardness result for \problemmacro{Label Cover}\xspace, given in~\cite{GRSW16}, is a slight variant of the original construction due to~\cite{Khot02}. The theorem also describes the various structural properties, including smoothness, that are identified by the hard instances. \begin{theorem} \label{thm:smooth_label_cover} For any $\xi >0$ and $J \in {\mathbb{N}}$, there exist positive integers $R = R(\xi, J), L = L(\xi, J)$ and $D = D(\xi)$, and a \problemmacro{Label Cover}\xspace instance $(G, [R], [L], \Sigma)$ as above such that \begin{itemize} \item (Hardness): It is NP-hard to distinguish between the following two cases: \begin{itemize} \item (Completeness): ${\sf OPT}\xspace({\mathcal L}) = 1$. \item (Soundness): ${\sf OPT}\xspace({\mathcal L}) \leq \xi$. \end{itemize}
\item (Structural Properties): \begin{itemize} \item ($J$-Smoothness): For every vertex $v \in V$ and distinct $i, j \in [R]$, we have \[ \Pr{e : v \in e}{\pi_{e,v}(i)=\pi_{e,v}(j)} \leq 1 / J. \]
\item ($D$-to-$1$): For every vertex $v \in V$, edge $e \in E$ incident on $v$, and $i \in [L]$, we have $|\pi^{-1}_{e, v}(i)| \leq D$; that is at most $D$ elements in $[R]$ are mapped to the same element in $[L]$.
\item (Weak Expansion): For any $\delta > 0$ and vertex set $V' \subseteq V$ such that $|V'| = \delta \cdot |V|$, the number of edges among the vertices in $V'$ is at least $(\delta^2 / 2) |E|$. \end{itemize}
\end{itemize} \end{theorem} \fi
\section{Hardness of \onorm{2}{r} with $r < 2$}\label{sec:nhc}
This section proves the following theorem that serves as a starting point of our hardness results. The theorem is stated for the expectation norm for consistency with the current literature, but the same statement holds for the counting norm, since if $A$ is an $n \times n$ matrix, $\cnorm{2}{r}{A} = n^{1/r -1/2} \cdot \enorm{2}{r}{A}$.
Note that the matrix $A$ used in the reduction below does not depend on $r$.
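For completeness, the conversion factor quoted above follows from the identity $\cnorm{r}{y} = n^{1/r} \cdot \enorm{r}{y}$ for vectors $y \in {\mathbb R}^n$:
\[
\cnorm{2}{r}{A} = \sup_{x \neq 0} \frac{\cnorm{r}{Ax}}{\cnorm{2}{x}} = \sup_{x \neq 0} \frac{n^{1/r} \cdot \enorm{r}{Ax}}{n^{1/2} \cdot \enorm{2}{x}} = n^{1/r - 1/2} \cdot \enorm{2}{r}{A}.
\]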
\begin{theorem} For any $\varepsilon > 0$,
there is a polynomial time reduction that takes a 3-CNF formula $\varphi$ and
produces a symmetric matrix $A \in {\mathbb R}^{n \times n}$ with $n = |\varphi|^{{\mathrm{poly}}(1 / \varepsilon)}$ such that
\begin{itemize}
\item (Completeness) If $\varphi$ is satisfiable, there exists $x \in
{\mathbb R}^{n}$ with $|x(i)| = 1$ for all $i \in [n]$ and $Ax = x$. In particular,
$\enorm{2}{r}{A} \geq 1$ for all $1 \leq r \leq \infty$.
\item (Soundness) If $\varphi$ is not satisfiable, then $\enorm{2}{r}{A} \leq \gamma_r + \varepsilon^{2 - r}$ for all $1 \leq r < 2$.
\end{itemize}
\label{thm:brs} \end{theorem}
We adapt the proof by Bri\"et\xspace, Regev and Saket for the hardness of $2 \rightarrow 1$ and $\infty \rightarrow 1$ norms to prove the above theorem. A small difference is that, unlike their construction which starts with a Fourier encoding of the long-code functions, we start with an evaluation table (to ensure that the resulting matrices are symmetric). We also analyze their dictatorship tests for the case of fractional $r$.
\ifnum\confversion=1
The details can be found in the full version of the paper.
\fi
\ifnum\confversion=0
\subsection{Reduction and Completeness} Let ${\mathcal L} = (G, [R], [L], \Sigma)$ be an instance of \problemmacro{Label Cover}\xspace with $G = (V, E)$.
In the rest of this section, $n = |V|$ and our reduction will construct a self-adjoint linear operator
${\mathbf A} : {\mathbb R}^N \rightarrow {\mathbb R}^N$ with $N = |V| \cdot 2^R$, which yields a symmetric $N \times N$ matrix representing ${\mathbf A}$ in the standard basis. This section concerns the following four Hilbert spaces based on the standard Fourier analysis composed with ${\mathcal L}$.
\begin{enumerate}
\item Evaluation space ${\mathbb R}^{2^R}$. Each function in this space is
denoted by $f : \{ \pm 1 \}^R \rightarrow \ensuremath{\mathbb{R}}$. The inner product is defined as
$\ip{f, g} := \Ex{x \in \{ \pm 1 \}^R}{f(x) g(x)}$, which
induces $\norm{2}{f} := \enorm{2}{f}$. We also define $\enorm{p}{f} := \Ex{x}{|f(x)|^p}^{1/p}$ in this space.
\item Fourier space ${\mathbb R}^{R}$. Each function in this space is denoted by $\widehat{f} : [R]
\rightarrow \ensuremath{\mathbb{R}}$. The inner product is defined as $\mysmalldot{\widehat{f}}{\widehat{g}} := \sum_{i
\in [R]} \widehat{f}(i) \widehat{g}(i)$, which induces $\norm{2}{\widehat{f}} := \cnorm{2}{\widehat{f}}$.
\item Combined evaluation space ${\mathbb R}^{V \times 2^R}$. Each function in this space is
denoted by ${\mathbf f} : V \times \{ \pm 1 \}^R \rightarrow \ensuremath{\mathbb{R}}$. The inner product is defined as
$\mysmalldot{{\mathbf f}}{{\mathbf g}} := \Ex{v \in V}{\Ex{x \in \{ \pm 1 \}^R}{{\mathbf f}(v, x)
{\mathbf g}(v, x)}}$, which induces $\norm{2}{{\mathbf f}} := \enorm{2}{{\mathbf f}}$.
We also define $\enorm{p}{{\mathbf f}} := \Ex{v, x}{|{\mathbf f}(v, x)|^p}^{1/p}$ in this space.
\item Combined Fourier space ${\mathbb R}^{V \times R}$. Each function in this space is denoted
by ${\bf \widehat{f}} : V \times [R] \rightarrow \ensuremath{\mathbb{R}}$. The inner product is defined as
$\mysmalldot{{\bf \widehat{f}}}{{\bf \widehat{g}}} := \Ex{v \in V}{\sum_{i \in [R]} {\bf \widehat{f}}(v, i)
{\bf \widehat{g}}(v, i) }$, which induces $\norm{2}{{\bf \widehat{f}}}$, which is neither a counting nor an
expectation norm. \end{enumerate} Note that ${\mathbf f} \in {\mathbb R}^{V \times 2^R}$ and a vertex $v \in V$ induce $f_v \in {\mathbb R}^{2^R}$ defined by $f_v(x) := {\mathbf f}(v, x)$, and similarly ${\bf \widehat{f}} \in {\mathbb R}^{V \times R}$ and a vertex $v \in V$ induce $\widehat{f}_v \in {\mathbb R}^{R}$ defined by $\widehat{f}_v(i) := {\bf \widehat{f}}(v, i)$. As defined in \cref{sec:fourier}, we use the following standard (partial) {\em Fourier transform} $F$ that maps $f \in {\mathbb R}^{2^R}$ to $\widehat{f} \in {\mathbb R}^{R}$ as follows.\footnote{We use only {\em linear Fourier coefficients} in this work. $F$ was defined as $F_P$ in~\cref{sec:fourier}.} \begin{equation} \widehat{f}(i) = (Ff) (i) := \Ex{x \in \{\pm 1 \}^R}{x_i f(x)}. \end{equation}
The (partial) {\em inverse Fourier transform} $F^T$ that maps $\widehat{f} \in {\mathbb R}^{R}$ to $f \in {\mathbb R}^{2^R}$ is defined by \begin{equation} f(x) = (F^T \widehat{f})(x) := \sum_{i \in [R]} x_i \widehat{f}(i). \end{equation}
This Fourier transform can be naturally extended to combined spaces by defining ${\mathbf F} : {\mathbf f} \mapsto {\bf \widehat{f}}$ as $f_v \mapsto \widehat{f}_v$ for all $v \in V$. Then ${\mathbf F}^T$ maps ${\bf \widehat{f}}$ to ${\mathbf f}$ as $\widehat{f}_v \mapsto f_v$ for all $v \in V$.
Finally, let ${\bf \widehat{P}} : {\mathbb R}^{V \times R} \rightarrow {\mathbb R}^{V \times R}$ be the orthogonal projector to the following subspace of the combined Fourier space: \begin{equation}
{\bf \widehat{L}} := \left\{ {\bf \widehat{f}} \in {\mathbb R}^{V \times R} :
\sum_{j \in \pi^{-1}_{e, u}(i)} \widehat{f}_u(j) = \sum_{j \in \pi^{-1}_{e, v}(i)}
\widehat{f}_v(j) \mbox{ for all } (u, v) \in E \mbox{ and } i \in [L] \right\}\,. \end{equation} Our transformation ${\mathbf A} : {\mathbb R}^{V \times 2^R} \rightarrow {\mathbb R}^{V \times 2^R}$ is defined by \begin{equation} {\mathbf A} := ({\mathbf F}^T) {\bf \widehat{P}} {\mathbf F}. \end{equation} In other words, given ${\mathbf f}$, we apply the Fourier transform for each $v \in V$, project the combined Fourier coefficients to ${\bf \widehat{L}}$ that checks the \problemmacro{Label Cover}\xspace consistency, and apply the inverse Fourier transform. Since ${\bf \widehat{P}}$ is a projector, ${\mathbf A}$ is self-adjoint by design.
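To make the construction concrete, the following self-contained Python/NumPy sketch builds the matrix of ${\mathbf A} = {\mathbf F}^T {\bf \widehat{P}} {\mathbf F}$ for a toy two-vertex instance and checks self-adjointness and the completeness property established below; the toy instance, parameter values, and all names are ours and purely illustrative (in particular, the toy maps have none of the structural properties of \cref{thm:smooth_label_cover}).
\begin{verbatim}
import numpy as np
from itertools import product

# Toy Label-Cover-style instance (hypothetical, for illustration only):
# two vertices u, v joined by one edge, R labels per vertex, L projected labels.
R, L = 4, 2
V = ['u', 'v']
pi = {('u', 'v'): {'u': [0, 0, 1, 1],   # pi_{e,u}: [R] -> [L]
                   'v': [0, 1, 0, 1]}}  # pi_{e,v}: [R] -> [L]

X = np.array(list(product([1, -1], repeat=R)))   # evaluation points of {+1,-1}^R
FP  = X.T / 2**R        # partial Fourier transform  f -> fhat (linear part)
FPT = X                 # partial inverse transform  fhat -> f

nV = len(V)
F_comb  = np.kron(np.eye(nV), FP)    # vertex-by-vertex transform
FT_comb = np.kron(np.eye(nV), FPT)   # vertex-by-vertex inverse transform

# Constraint matrix for the subspace \hat{L}: one row per (edge, projected label)
rows = []
for (u, v), maps in pi.items():
    for i in range(L):
        row = np.zeros(nV * R)
        for j in range(R):
            if maps[u][j] == i:
                row[V.index(u) * R + j] += 1
            if maps[v][j] == i:
                row[V.index(v) * R + j] -= 1
        rows.append(row)
C = np.array(rows)

# Orthogonal projector onto the nullspace of C, i.e., onto \hat{L}
P = np.eye(nV * R) - C.T @ np.linalg.pinv(C @ C.T) @ C

A = FT_comb @ P @ F_comb             # the operator A = F^T \hat{P} F
assert np.allclose(A, A.T)           # self-adjoint in the standard basis

# Completeness: the labeling l(u) = l(v) = 0 satisfies the edge, and
# f(w, x) = x_{l(w)} is a fixed point of A.
f = np.concatenate([X[:, 0], X[:, 0]])
assert np.allclose(A @ f, f)
\end{verbatim}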
We also note that a similar reduction that produces $({\mathbf F}^T) {\bf \widehat{P}}$ was used in Guruswami et al.\xspace~\cite{GRSW16} and Bri\"et\xspace et al.\xspace~\cite{BRS15} for subspace approximation and Grothendieck-type problems, and indeed this reduction suffices for~\cref{thm:brs} except the self-adjointness and additional properties in the completeness case.
\paragraph{Completeness.} We prove the following lemma for the completeness case. The intuition is that if ${\mathcal L}$ admits a good labeling, we can construct an ${\mathbf f}$ such that each $f_v$ is a linear function and ${\bf \widehat{f}}$ already lies in the subspace ${\bf \widehat{L}}$. Therefore, none of the Fourier transform, the projection onto ${\bf \widehat{L}}$, and the inverse Fourier transform changes ${\mathbf f}$.
\begin{lemma} [Completeness] Let $\ell : V \rightarrow [R]$ be a labeling that satisfies every
edge of ${\mathcal L}$. There exists a function ${\mathbf f} \in {\mathbb R}^{V \times 2^R}$ such that
${\mathbf f}(v, x)$ is either $+1$ or $-1$ for all $v \in V, x \in \{ \pm 1 \}^R$ and
${\mathbf A} {\mathbf f} = {\mathbf f}$. \end{lemma} \begin{proof} Let ${\mathbf f}(v, x) := x_{\ell(v)}$ for
every $v \in V, x \in \{ \pm 1 \}^R$. Consider ${\bf \widehat{f}} = {\mathbf F} {\mathbf f}$. For each
vertex $v \in V$, ${\bf \widehat{f}}(v, i) = \widehat{f}_v(i) = 1$ if $i = \ell(v)$ and $0$
otherwise. Since $\ell$ satisfies every edge of ${\mathcal L}$, ${\bf \widehat{f}} \in {\bf \widehat{L}}$ and
${\bf \widehat{P}} {\bf \widehat{f}} = {\bf \widehat{f}}$. Finally, since each $f_v$ is a linear function, the
partial inverse Fourier transform $F^T$ satisfies $(F^T) \widehat{f}_v = f_v$, which
implies that $({\mathbf F}^T) {\bf \widehat{f}} = {\mathbf f}$. Therefore, ${\mathbf A} {\mathbf f} = ({\mathbf F}^T {\bf \widehat{P}}
{\mathbf F}) {\mathbf f} = {\mathbf f}$. \end{proof}
\subsection{Soundness} We prove the following soundness lemma. This finishes the proof of~\cref{thm:brs} since~\cref{thm:smooth_label_cover} guarantees NP-hardness of \problemmacro{Label Cover}\xspace for arbitrarily small $\xi > 0$ and arbitrarily large $J \in {\mathbb{N}}$.
\begin{lemma}[Soundness] \label{lem:soundness} For every $\varepsilon > 0$,
there exist $\xi > 0$ (that determines $D = D(\xi)$ as in~\cref{thm:smooth_label_cover})
and $J \in {\mathbb{N}}$ such that if ${\sf OPT}\xspace({\mathcal L}) \leq \xi$, ${\mathcal L}$ is $D$-to-$1$, and ${\mathcal L}$ is $J$-smooth,
$\enorm{2}{r}{{\mathbf A}} \leq \gamma_{r} + 4\varepsilon^{2-r}$ for every $1 \leq r < 2$.
\end{lemma}
\begin{proof} Let ${\mathbf f} \in {\mathbb R}^{V \times 2^R}$ be an arbitrary vector such that $\enorm{2}{{\mathbf f}} = 1$. Let ${\bf \widehat{f}} = {\mathbf F} {\mathbf f}$, ${\bf \widehat{g}} = {\bf \widehat{P}} {\bf \widehat{f}}$, and ${\mathbf g} = {\mathbf F}^T {\bf \widehat{g}}$ so that ${\mathbf g} = ({\mathbf F}^T {\bf \widehat{P}} {\mathbf F}) {\mathbf f} = {\mathbf A} {\mathbf f}$. By Parseval's theorem, $\cnorm{2}{\widehat{f}_v} \leq \enorm{2}{f_v}$ for all $v \in V$ and $\norm{2}{{\bf \widehat{f}}} \leq \enorm{2}{{\mathbf f}} \leq 1$. Since ${\bf \widehat{P}}$ is an orthogonal projection, $\norm{2}{{\bf \widehat{g}}} \leq \norm{2}{{\bf \widehat{f}}} \leq 1$. Fix $1 \leq r < 2$ and suppose \begin{equation}
\enorm{r}{{\mathbf g}}^r = \Ex{v \in V}{\enorm{r}{g_v}^r} \geq \gamma_r^r +
4 \varepsilon^{2 - r}\,.
\label{eq:soundness} \end{equation}
Use~\cref{lem:dict} to obtain $\delta = \delta(\varepsilon)$ such that $\enorm{p}{g_v}^p > (\gamma_p^p + \varepsilon) \cnorm{2}{\widehat{g}_v}^p$ implies $\cnorm{4}{\widehat{g}_v} > \delta \cnorm{2}{\widehat{g}_v}$ for all $1 \leq p < 2$ (so that $\delta$ does not depend on $r$), and consider \begin{equation} V_0 := \{ v \in V : \cnorm{4}{\widehat{g}_v}
> \delta \varepsilon \mbox{ and } \cnorm{2}{\widehat{g}_v} \leq 1/\varepsilon \}.
\label{eq:vzero} \end{equation} We prove the following lemma that lower bounds the size of $V_0$. \begin{lemma}
For $V_0 \subseteq V$ defined as in~\eqref{eq:vzero}, we have $|V_0| \geq \varepsilon^2
|V|$. \end{lemma} \begin{proof} The proof closely follows the proof of Lemma 3.4 of~\cite{BRS15}.
Define the sets
\begin{align*}
V_1 &= \{ v \in V : \cnorm{4}{\widehat{g}_v}
\leq \delta \varepsilon \mbox{ and } \cnorm{2}{\widehat{g}_v} < \varepsilon \}, \\ V_2 &= \{ v
\in V : \cnorm{4}{\widehat{g}_v} \leq \delta \varepsilon \mbox{ and } \cnorm{2}{\widehat{g}_v}
\geq \varepsilon \}, \\ V_3 &= \{ v \in V : \cnorm{2}{\widehat{g}_v} > 1/\varepsilon \}.
\end{align*}
From~\eqref{eq:soundness}, we have
\begin{equation}
\sum_{v \in V_0} \enorm{r}{g_v}^r + \sum_{v \in V_1} \enorm{r}{g_v}^r + \sum_{v \in V_2}
\enorm{r}{g_v}^r + \sum_{v \in V_3} \enorm{r}{g_v}^r \geq (\gamma_r^r + 4
\varepsilon^{2 - r}) |V|\,.
\label{eq:soundness_sum}
\end{equation}
We bound the four sums on the left side of~\eqref{eq:soundness_sum} individually.
Parseval's theorem and the fact that $r < 2$ imply $\enorm{r}{g_v} \leq
\enorm{2}{g_v}=\cnorm{2}{\widehat{g}_v}$, and since $\cnorm{2}{\widehat{g}_v} \leq 1/\varepsilon$
for every $v \in V_0$, the first sum in~\eqref{eq:soundness_sum} can be bounded by
\begin{equation}
\sum_{v \in V_0} \enorm{r}{g_v}^r \leq |V_0| / \varepsilon^r.
\label{eq:soundness_zero}
\end{equation}
Similarly, using the definition of $V_1$,
the second sum in~\eqref{eq:soundness_sum} is at most $\varepsilon^r |V|$.
By~\cref{lem:dict}, for each $v \in V_2$, we have $\enorm{r}{g_v}^r \leq (\gamma_r^r +
\varepsilon) \cnorm{2}{\widehat{g}_v}^r$. Therefore, the third sum in~\eqref{eq:soundness_sum}
is bounded as
\begin{align}
\sum_{v \in V_2} \enorm{r}{g_v}^r &\leq (\gamma_r^r +
\varepsilon) \sum_{v \in V_2} \cnorm{2}{\widehat{g}_v}^r & \nonumber \\
&= (\gamma_r^r + \varepsilon) |V_2| {\mathbb E}_{v \in V_2} [ \cnorm{2}{\widehat{g}_v}^r ]
& \nonumber \\
&\leq (\gamma_r^r + \varepsilon) |V_2| {\mathbb E}_{v \in V_2} [ \cnorm{2}{\widehat{g}_v}^2 ]^{r / 2} &&
\mbox{(By Jensen using $r < 2$)} \nonumber \\
&= (\gamma_r^r + \varepsilon) |V_2| \bigg( \frac{\sum_{v \in V_2}
\cnorm{2}{\widehat{g}_v}^2 }{|V_2|} \bigg) ^{r / 2} &\nonumber \\
&\leq (\gamma_r^r + \varepsilon) |V_2|^{1 - r / 2} |V|^{r / 2} &&
(\sum_{v \in V_2} \cnorm{2}{\widehat{g}_v}^2 \leq \sum_{v \in V} \cnorm{2}{\widehat{g}_v}^2
\leq |V|) \nonumber \\
&\leq (\gamma_r^r + \varepsilon) |V|. &
\end{align}
Finally, the fourth sum in~\eqref{eq:soundness_sum} is bounded by
\begin{align}
\sum_{v \in V_3} \enorm{r}{g_v}^r &\leq \sum_{v \in V_3} \enorm{2}{g_v}^r
&& \mbox{(Since $r < 2$)}\nonumber \\
&= \sum_{v \in V_3} \cnorm{2}{\widehat{g}_v}^r && \mbox{(By Parseval's
theorem)} \nonumber \\
&= \sum_{v \in V_3} \cnorm{2}{\widehat{g}_v}^{r - 2}\cnorm{2}{\widehat{g}_v}^2 \nonumber \\
&< \sum_{v \in V_3} \varepsilon^{2 - r}\cnorm{2}{\widehat{g}_v}^2
&& (\cnorm{2}{\widehat{g}_v} > 1/\varepsilon \mbox{ for } v \in V_3,\mbox{ and } r < 2)
\nonumber \\
&= \varepsilon^{2 - r} \sum_{v \in V_3}\cnorm{2}{\widehat{g}_v}^2 \leq \varepsilon^{2- r}|V|.&&
\end{align} Combining the above with~\eqref{eq:soundness_sum} yields
\begin{align}
|V_0| & \geq \varepsilon^{r} \sum_{v\in V_0} \enorm{r}{g_v}^r \nonumber \\
& \geq \varepsilon^{r} \bigg( (\gamma_r^r + 4\varepsilon^{2 - r}) |V| -
\varepsilon^r |V| - (\gamma_r^r + \varepsilon)|V| - \varepsilon^{2-r} |V| \bigg)
\nonumber \\
& \geq \varepsilon^{r} \varepsilon^{2 - r} |V| = \varepsilon^2 |V|,
\end{align} where the last inequality uses the fact that
$\varepsilon^{2- r } \geq \varepsilon \geq \varepsilon^r$. \end{proof}
Therefore, $|V_0| \geq \varepsilon^2 |V|$ and every vertex $v \in V_0$ satisfies $\cnorm{4}{\widehat{g}_v} > \delta \varepsilon$ and $\cnorm{2}{\widehat{g}_v} \leq 1 / \varepsilon$. Using only these two facts together with ${\bf \widehat{g}} \in {\bf \widehat{L}}$, Bri\"et\xspace et al.\xspace~\cite{BRS15} proved that if the smoothness parameter $J$ is large enough given other parameters, ${\mathcal L}$ admits a labeling that satisfies a significant fraction of edges. \begin{lemma} [Lemma
3.6 of \cite{BRS15}] Let $\beta := \delta^2 \varepsilon^3$. There exists an absolute
constant $c' > 0$ such that if ${\mathcal L}$ is $T$-to-$1$ and $T / (c' \varepsilon^8
\beta^4)$-smooth for some $T \in {\mathbb{N}}$, there is a labeling that satisfies at least $\varepsilon^8 \beta^4
/1024$ fraction of $E$. \end{lemma} This finishes the proof of~\cref{lem:soundness} by setting $\xi := \varepsilon^8 \beta^4 /1024$ and $J := D(\xi) / (c' \varepsilon^8 \beta^4)$ with $D(\xi)$ defined in~\cref{thm:smooth_label_cover}. Given a $3$-SAT formula, $\varphi$, by the standard property of Smooth Label Cover, the size of the reduction is
$|\varphi|^{O(J \log (1 / \xi))} = |\varphi|^{{\mathrm{poly}}(1/\varepsilon)}$. \end{proof}
\fi
\section{Hardness of \onorm{p}{q}}\label{sec:hypercontractive} In this section, we prove our main results. We prove \cref{thm:main_nhc} on hardness of approximating \onorm{p}{q} when $p \geq 2 \geq q$, and \cref{thm:main_hype} on hardness of approximating \onorm{p}{q} when $2 < p < q$. By duality, the same hardness is implied for the case of $p < q < 2$.
Our result for $p \geq 2 \geq q$ in~\cref{sec:composition} follows from \cref{thm:brs} using additional properties in the completeness case. For hypercontractive norms, we start by showing constant factor hardness via reduction from \onorm{p}{2} (see \cref{isometry}), and then amplify the hardness factor by using the fact that all hypercontractive norms productivize under Kronecker product, which we prove in \cref{sec:productivize}. \subsection{Hardness for $p \geq 2 \geq q$} \label{sec:composition}
We use~\cref{thm:brs} to prove hardness of \onorm{p}{q} for $p \geq 2 \geq q$, which proves~\cref{thm:main_nhc}.
\begin{proofof}{\cref{thm:main_nhc}}
Fix $p, q$, and $\delta > 0$ such that $\infty \geq p \geq 2 \geq q$ and $p > q$.
Our goal is to prove that \onorm{p}{q} is NP-hard to approximate within a factor $1 / (\gamma_{p^*}\gamma_q + \delta)$.
For \onorm{2}{q} for $1 \leq q < 2$, Theorem~\ref{thm:brs} (with $\varepsilon \leftarrow \delta^{1/(2-q)}$) directly proves a hardness
ratio of $1/(\gamma_q + \varepsilon^{2-q}) = 1/(\gamma_q + \delta)$.
By duality, it also gives a $1/(\gamma_{p^*} + \delta)$ hardness for \onorm{p}{2} for $p > 2$.
For \onorm{p}{q} for $p > 2 > q$,
apply~\cref{thm:brs} with $\varepsilon = (\delta / 3)^{\max(1/(2-p^*), 1/(2-q))}$.
It gives a polynomial time reduction that produces a symmetric matrix $A \in {\mathbb R}^{n \times n}$ given a 3-SAT\xspace formula $\varphi$. Our instance for \onorm{p}{q} is $AA^T = A^2$.
\begin{itemize}
\item (Completeness) If $\varphi$ is satisfiable, there exists $x \in {\mathbb R}^{n}$ such that $|x(i)| = 1$ for all $i \in [n]$ and $Ax = x$. Therefore, $A^2
x = x$ and $\enorm{p}{q}{A^2} \geq 1$.
\item (Soundness) If $\varphi$ is not satisfiable, \begin{align*} \enorm{p}{2}{A} &~=~ \enorm{2}{p^*}{A} \leq \gamma_{p^*} + \varepsilon^{2-p^*} \leq \gamma_{p^*} +
\delta / 3, \mbox{ and } \\ \enorm{2}{q}{A} &~\leq~ \gamma_{q} + \varepsilon^{2-q} \leq \gamma_{q} + \delta / 3. \end{align*} This implies that
\[ \enorm{p}{q}{A^2} \leq \enorm{p}{2}{A} \enorm{2}{q}{A} \leq
(\gamma_{p^*} + \delta /3)(\gamma_{q} + \delta / 3) \leq
\gamma_{p^*}\gamma_q + \delta \,. \]
\end{itemize} This creates a gap of $1 / (\gamma_{p^*} \gamma_q + \delta)$ between the completeness and the soundness case. The same gap holds for the counting norm since $\cnorm{p}{q}{A^2} = n^{1/q - 1/p} \cdot \enorm{p}{q}{A^2}$. \end{proofof}
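The composition step used in the soundness case above is the standard submultiplicativity of mixed operator norms; for completeness,
\[
\enorm{p}{q}{A^2} = \sup_{\enorm{p}{x} = 1} \enorm{q}{A(Ax)} \leq \enorm{2}{q}{A} \cdot \sup_{\enorm{p}{x} = 1} \enorm{2}{Ax} = \enorm{p}{2}{A} \cdot \enorm{2}{q}{A}.
\]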
\subsection{Reduction from \onorm{p}{2} via Approximate Isometries} \label{isometry} Let $A \in {\mathbb R}^{n \times n}$ be a hard instance of \onorm{p}{2}. For any $q \geq 1$, if a matrix $B \in {\mathbb R}^{m \times n}$ satisfies $\cnorm{q}{Bx} = (1 \pm o(1)) \cnorm{2}{x}$ for all $x \in {\mathbb R}^n$, then $\norm{p}{q}{BA} = (1 \pm o(1)) \norm{p}{2}{A}$. Thus $BA$ will serve as a hard instance for \onorm{p}{q} if one can compute such a matrix $B$ efficiently. In fact, a consequence of the Dvoretzky-Milman theorem is that a sufficiently tall random matrix $B$ satisfies the aforementioned property with high probability. In other words, for $m=m(q,n)$ sufficiently large, a random linear operator from $\ell_2^n$ to $\ell_q^m$ is an approximate isometry.
To restate this from a geometric perspective, for $m(q,n)$ sufficiently larger than $n$, a random section of the unit ball in $\ell_q^m$ is approximately isometric to the unit ball in $\ell_2^n$. In the interest of simplicity, we will instead state and use a corollary of the following matrix deviation inequality due to Schechtman (see \cite{Schechtman06}, Chapter 11 in \cite{Vershynin17}). \begin{theorem}[Schechtman~\cite{Schechtman06}] \label{matrix:deviation:bound}
Let $B$ be an $m \times n$ matrix with i.i.d. $\gaussian{0}{1}$ entries. Let $f : {\mathbb R}^m\rightarrow {\mathbb R}$ be a
positive-homogeneous and subadditive function, and let $b$ be such that
$f(y)\leq b \cnorm{2}{y}$ for all $y\in {\mathbb R}^m$. Then for any $T\subset {\mathbb R}^n$,
\[
\sup_{x\in T} |f(Bx)-\Ex{f(Bx)}| = O(b\cdot \gamma(T) + t\cdot \mathrm{rad}(T))
\]
with probability at least $1-e^{-t^2}$,
where $\mathrm{rad}(T)$ is the radius of $T$, and
$\gamma(T)$ is the Gaussian complexity of $T$ defined as
\[
\gamma(T) := \Ex{g\sim \gaussian{0}{I_n}}{\sup_{t\in T} |\mysmalldot{g}{t}|}
\] \end{theorem} The above theorem is established by proving that the random process given by $X_x := f(Bx) - \Ex{f(Bx)}$ has sub-gaussian increments with respect to $L_2$ and subsequently appealing to Talagrand's Comparison tail bound.
We will apply this theorem with $f(\cdot) = \cnorm{q}{\cdot}$, $b=1$ and $T$ being the unit ball under $\cnorm{2}{\cdot}$. We first state a known estimate of $\Ex{f(Bx)} = \Ex{\cnorm{q}{Bx}}$ for any fixed $x$ satisfying $\cnorm{2}{x} = 1$. Note that when $\cnorm{2}{x} = 1$, $Bx$ has the same distribution as an $m$-dimensional random vector with i.i.d. $\gaussian{0}{1}$ coordinates.
\begin{theorem}[Biau and Mason~\cite{BM15}] \label{expected:qnorm}
Let $X\in{\mathbb R}^m$ be a random vector with i.i.d. $\gaussian{0}{1}$ coordinates. Then for any $q\geq 2$,
\[
\Ex{\cnorm{q}{X}} = m^{1/q}\cdot \gamma_q+O(m^{(1/q) - 1}).
\] \end{theorem}
We are now equipped to see that a tall random Gaussian matrix is an approximate isometry (as a linear map from $\ell_2^n$ to $\ell_q^m$) with high probability. \begin{corollary} \label{embedding}
Let $B$ be an $m \times n$ matrix with i.i.d. $\gaussian{0}{1}$ entries where $m = \omega(n^{q/2})$.
Then with probability at least $1-e^{-n}$, every vector $x\in {\mathbb R}^n$
satisfies, \[\cnorm{q}{Bx} = (1\pm o(1))\cdot m^{1/q} \cdot \gamma_q \cdot \cnorm{2}{x}. \] \end{corollary}
\begin{proof}
We apply \cref{matrix:deviation:bound} with function $f$ being the $\ell_q$ norm, $b=1$, and $t = \sqrt{n}$.
Further we set $T$ to be the $\ell_2$ unit sphere, which yields $\gamma(T) = \Theta(\sqrt{n})$ and
$\mathrm{rad}(T) = 1$. Applying \cref{expected:qnorm} yields that with probability at least
$1 - e^{-t^2} = 1-e^{-n}$, for all $x$ with $\cnorm{2}{x} = 1$, we have \begin{align*}
\left| \cnorm{q}{Bx} - m^{1/q} \cdot \gamma_q \right| & \leq
\left| \cnorm{q}{Bx} - \Ex{\cnorm{q}{X}} \right| +
\left| \Ex{\cnorm{q}{X}} - m^{1/q} \cdot \gamma_q \right| \\ & \leq O(b \cdot \gamma(T) + t \cdot \mathrm{rad}(T) + m^{(1/q)-1}) \\ & \leq O(\sqrt{n} + \sqrt{n} + m^{(1/q)-1})\\ & \leq o(m^{1/q}). \tag*{\qedhere} \end{align*} \end{proof}
We thus obtain the desired constant factor hardness: \begin{proposition} \label{hypercontractive:const:factor}
For any $p>2,~2\leq q< \infty$ and any $\varepsilon>0$, there is no polynomial time algorithm that
approximates \onorm{p}{q} (and consequently \onorm{q^*}{p^*}) within a factor of
$1/\gamma_{p^*}-\varepsilon$ ~unless $\classfont{NP}\xspace \subseteq \classfont{BPP}\xspace$. \end{proposition}
\begin{proof}
By \cref{embedding}, for every $n\times n$ matrix $A$ and a random $m\times n$ matrix $B$ with i.i.d.
$\gaussian{0}{1}$ entries ($m = \omega(n^{q/2})$), with probability at least $1-e^{-n}$, we have
\[
\cnorm{p}{q}{BA} =
(1\pm o(1))\cdot \gamma_q\cdot m^{1/q}\cdot \cnorm{p}{2}{A}.
\]
Thus the reduction $A\rightarrow BA$ combined with \onorm{p}{2} hardness implied by \cref{thm:brs},
yields the claim. \end{proof}
The generality of the concentration of measure phenomenon underlying the proof of the Dvoretzky-Milman theorem allows us to generalize \cref{hypercontractive:const:factor}, to obtain constant factor hardness of maximizing various norms over the $\ell_p$ ball ($p>2$). In this more general version, the strength of our hardness assumption is dependent on the Gaussian width of the dual of the norm being maximized. Its proof is identical to that of \cref{hypercontractive:const:factor}.
\begin{theorem} \label{p-to-anything:const:factor}
Consider any $p>2, \varepsilon>0$, and any family $(f_m)_{m\in {\mathbb{N}}}$ of positive-homogeneous and
subadditive functions where $f_m : {\mathbb R}^m\rightarrow {\mathbb R}$. Let $(b_m)_{m\in {\mathbb{N}}}$ be such that $f_m(y) \leq b_m\cdot
\cnorm{2}{y}$ for all $y$ and let $N=N(n)$ be such that $\gamma^*(f_N) = \omega(b_N\cdot \sqrt{n})$,
where
\[
\gamma^*(f_N) := \Ex{g\sim \gaussian{0}{I_N}}{f_N(g)}.
\]
Then unless $\classfont{NP}\xspace \subseteq \BPTIME{N(n)}$, there is no polynomial
time $(1/\gamma_{p^*}-\varepsilon)$-approximation algorithm for the problem of computing
$\sup_{\norm{p}{x} = 1} f_m(Ax)$, given an $m\times n$ matrix $A$. \end{theorem}
\subsection{Derandomized Reduction}\label{sec:derandomization} In this section, we show how to derandomize the reduction in \cref{hypercontractive:const:factor} to obtain NP-hardness when $q\geq 2$ is an even integer and $p > 2$. Similarly to~\cref{isometry}, given $A \in {\mathbb R}^{n \times n}$ as a hard instance of \onorm{p}{2}, our strategy is to construct a matrix $B \in {\mathbb R}^{m \times n}$ and output $BA$ as a hard instance of \onorm{p}{q}.
Instead of requiring $B$ to satisfy $\cnorm{q}{Bx} = (1 \pm o(1)) \cnorm{2}{x}$ for all $x \in {\mathbb R}^n$, we show that $\cnorm{q}{Bx} \leq (1 + o(1)) \cnorm{2}{x}$ for all $x \in {\mathbb R}^n$ and $\cnorm{q}{Bx} \geq (1 - o(1)) \cnorm{2}{x}$ when every coordinate of $x$ has the same absolute value. Since, in the completeness case, \cref{thm:brs} provides a vector $x$ with $Ax = x$ whose coordinates all have the same absolute value (so that $\cnorm{p}{2}{A}$ is witnessed by such a well-spread $x$), $BA$ serves as a hard instance for \onorm{p}{q}.
\ifnum\confversion=1
We use a construction of $q$-wise independent sets due to Alon, Babai and Itai \cite{ABI86} to prove the following derandomized version of our results. The details of the derandomization can be found in the full version of the paper.
\fi
\ifnum\confversion=0
We use the following construction of $q$-wise independent sets to construct such a $B$ deterministically.
\begin{theorem}[Alon, Babai, and Itai~\cite{ABI86}] \label{kwise:independence}
For any $k\in{\mathbb{N}}$, one can compute a set $S$ of vectors in $\{\pm 1\}^n$ of size $O(n^{k/2})$, in time
$n^{O(k)}$, such that the vector random variable $Y$ obtained by sampling uniformly from $S$ satisfies
that for any $I\in {[n]\choose k}$, the marginal distribution $Y\restrict{I}$ is the uniform distribution over
$\{\pm 1\}^{k}$. \end{theorem}
For a matrix $B$ whose rows form such a set $S$, a uniformly random row behaves like an $n$-dimensional Rademacher random vector with respect to $\cnorm{q}{\cdot}$.
\begin{corollary} \label{derandomized:operator}
Let $R\in{\mathbb R}^n$ be a vector random variable with i.i.d. Rademacher ($\pm 1$) coordinates.
For any even integer $q\geq 2$, there is an $m \times n$ matrix $B$ with $m = O(n^{q/2})$, computable in $n^{O(q)}$ time,
such that for all $x\in{\mathbb R}^n$, we have
\[
\cnorm{q}{Bx} = m^{1/q} \cdot \Ex{R}{\mysmalldot{R}{x}^q}^{1/q}.
\] \end{corollary} \begin{proof}\belowdisplayskip=-12pt
Let $B$ be the matrix whose rows are precisely the elements of the set $S$ given by \cref{kwise:independence} with $k = q$, so that $m = |S| = O(n^{q/2})$.
By \cref{kwise:independence},
\begin{align*}
\cnorm{q}{Bx}^q
=
\sum_{Y \in S}{\mysmalldot{Y}{x}^q}
&=
m\cdot \Ex{R}{\mysmalldot{R}{x}^q}
\,. \tag*{\qedhere}
\end{align*} \end{proof}
We use the following two results that will bound $\cnorm{p}{q}{BA}$ for the completeness case and the soundness case respectively.
\begin{theorem}[Stechkin~\cite{Stechkin61}] \label{stechkin:clt}
Let $R\in{\mathbb R}^n$ be a vector random variable with i.i.d. Rademacher coordinates. Then for any $q\geq 2$
and any $x\in {\mathbb R}^n$ whose coordinates have the same absolute value,
\[
\Ex{\mysmalldot{R}{x}^q}^{1/q} = (1-o(1))\cdot \gamma_q \cnorm{2}{x}.
\] \end{theorem}
\begin{theorem}[Khintchine inequality~\cite{Haagerup81}] \label{khintchine:clt}
Let $R\in{\mathbb R}^n$ be a vector random variable with i.i.d. Rademacher coordinates. Then for any $q\geq 2$
and any $x \in {\mathbb R}^n$,
\[
\Ex{\mysmalldot{R}{x}^q}^{1/q} \leq \gamma_q \cdot \cnorm{2}{x}.
\] \end{theorem}
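The following small Python computation (ours, purely illustrative) makes the two bounds concrete for an even $q$: a vector with all coordinates of equal magnitude nearly attains $\gamma_q$, while a standard basis vector stays strictly below it.
\begin{verbatim}
import numpy as np
from itertools import product
from math import gamma, pi, sqrt

def gamma_q(q):
    return (2**(q / 2) * gamma((q + 1) / 2) / sqrt(pi))**(1 / q)

def rademacher_q_norm(x, q):
    # exact E_R[<R,x>^q]^(1/q), enumerating all sign patterns (small n only)
    signs = np.array(list(product([1, -1], repeat=len(x))))
    return np.mean((signs @ x)**q)**(1 / q)

q, n = 4, 12
flat = np.ones(n) / np.sqrt(n)        # all coordinates of equal magnitude
spiky = np.zeros(n); spiky[0] = 1.0   # a single nonzero coordinate
print(rademacher_q_norm(flat, q), gamma_q(q))    # close to gamma_q  (Stechkin)
print(rademacher_q_norm(spiky, q), gamma_q(q))   # 1.0 < gamma_q     (Khintchine)
\end{verbatim}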
We finally prove the derandomized version of \cref{hypercontractive:const:factor} for even $q \geq 2$.
\fi
\begin{proposition} \label{hypercontractive:derandomized:const:factor}
For any $p > 2, \varepsilon>0$, and any even integer $q\geq 2$, it is NP-hard to approximate
\onorm{p}{q} within a factor of ~$1/\gamma_{p^*} - \varepsilon$. \end{proposition}
\ifnum\confversion=0
\begin{proof}
Apply \cref{thm:brs} with $r \leftarrow p^*$ and a sufficiently small parameter $\varepsilon_0 = \varepsilon_0(\varepsilon, p) > 0$. Given an instance $\varphi$ of 3-SAT\xspace,
\cref{thm:brs} produces a symmetric matrix $A \in {\mathbb R}^{n \times n}$ in polynomial time as a hard instance of \onorm{p}{2}. Our instance for \onorm{p}{q} is $BA$ where $B$ is
the $m \times n$ matrix given by \cref{derandomized:operator} with $m = O(n^{q/2})$.
\begin{itemize}
\item (Completeness) If $\varphi$ is satisfiable, there exists a vector $x\in\{\pm \frac{ 1}{\sqrt{n}}\}^n$ such that $Ax = x$.
So we have $\cnorm{q}{BAx} = \cnorm{q}{Bx} = (1-o(1))\cdot m^{1/q} \cdot \gamma_q$, where the last
equality uses \cref{derandomized:operator} and \cref{stechkin:clt}. Thus $\cnorm{p}{q}{BA}\geq
(1-o(1))\cdot m^{1/q} \cdot \gamma_q$.
\item (Soundness) If $\varphi$ is not satisfiable, then for any $x$ with $\cnorm{p}{x}=1$,
\begin{align*}
&\cnorm{q}{BAx} = m^{1/q} \cdot \Ex{R}{\mysmalldot{R}{Ax}^q}^{1/q} \leq m^{1/q} \cdot \gamma_q\cdot \cnorm{2}{Ax}
\\
\leq~ & m^{1/q} \cdot \gamma_q \cdot \cnorm{p}{2}{A}
\leq~ m^{1/q} \cdot \gamma_q \cdot (\gamma_{p^*}+\varepsilon_0^{2-p^*}),
\end{align*}
where the first inequality is a direct application of~\cref{khintchine:clt}. Comparing the two cases gives a gap of at least $(1-o(1))/(\gamma_{p^*}+\varepsilon_0^{2-p^*})$, which is at least $1/\gamma_{p^*}-\varepsilon$ once $\varepsilon_0$ is chosen sufficiently small.
\qedhere
\end{itemize} \end{proof}
\fi
\subsection{Hypercontractive Norms Productivize}\label{sec:productivize} We will next amplify our hardness results using the fact that hypercontractive norms productivize under the natural operation of Kronecker or tensor product. Bhaskara and Vijayaraghavan~\cite{BV11} showed this for the special case of $p=q$, and Harrow and Montanaro~\cite{HM13} showed it for \onorm{2}{4} (via parallel repetition for $\mathrm{QMA(2)}$). In this section we prove this claim whenever $p\leq q$.
\begin{theorem} \label{productivization}
Let $A$ and $B$ be $m_1\times n_1$ and $m_2\times n_2$ matrices respectively. Then for any $1\leq p\leq q < \infty$,
$\cnorm{p}{q}{A\otimes B} \leq \cnorm{p}{q}{A}\cdot \cnorm{p}{q}{B}$. \end{theorem}
\begin{proof}
We will begin with some notation. Let $a_{i},b_{j}$ respectively denote the $i$-th and $j$-th rows of $A$
and $B$. Consider any $z\in {\mathbb R}^{[n_1]\times [n_2]}$ satisfying $\cnorm{p}{z}=1$.
For $k\in [n_1]$, let $z_k\in {\mathbb R}^{n_2}$ denote the vector given by $z_k(\ell) := z(k,\ell)$.
For $j\in [m_2]$, let $\overline{z}_j\in {\mathbb R}^{n_1}$ denote the vector given by $\overline{z}_j(k) :=
\mysmalldot{b_{j}}{z_{k}}$.
Finally, for $k\in [n_1]$, let $\lambda_k := \cnorm{p}{z_k}^{p}$ and let $v_k\in {\mathbb R}^{m_2}$ be the vector given
by $v_k(j) := |\overline{z}_j(k)|^{p}/\lambda_k$.
We begin by ``peeling off'' $A$:
\begin{align*}
\cnorm{q}{(A\otimes B)z}^{q}
~=~
\sum_{i,j} |\mysmalldot{a_i\otimes b_j}{z}|^{q}
&~=~
\sum_{j} \sum_{i} |\mysmalldot{a_i}{\overline{z}_j}|^{q} \\
&~=~
\sum_{j} \cnorm{q}{A\overline{z}_j}^{q} \\
&~\leq~
\cnorm{p}{q}{A}^{q}\cdot \sum_{j} \cnorm{p}{\overline{z}_j}^{q} \\
&~=~
\cnorm{p}{q}{A}^{q}\cdot \sum_{j} \inparen{\cnorm{p}{\overline{z}_j}^{p}}^{q/p}
\end{align*}
In the special case of $p=q$, the proof essentially ends here: exchanging the order of summation gives $\sum_{j} \cnorm{p}{\overline{z}_j}^{p} = \sum_{k} \cnorm{p}{Bz_k}^{p}$, each term of which is at most
$\cnorm{p}{p}{B}^p\cdot \cnorm{p}{z_k}^p$, and these bounds sum to $\cnorm{p}{p}{B}^p$ since $\cnorm{p}{z} = 1$. To handle the case of
$q>p$, we will use a convexity argument:
\begin{align*}
&\cnorm{p}{q}{A}^{q}\cdot \sum_{j} \inparen{\cnorm{p}{\overline{z}_j}^{p}}^{q/p} \\
~=~
&\cnorm{p}{q}{A}^{q}\cdot \sum_{j} \inparen{\sum_k |\overline{z}_j(k)|^{p}}^{q/p} \\
~=~
&\cnorm{p}{q}{A}^{q}\cdot \cnorm{q/p}{\sum_k \lambda_k\cdot v_k}^{q/p}
&&(|\overline{z}_j(k)|^{p} = \lambda_k v_k(j)) \\
~\leq~
&\cnorm{p}{q}{A}^{q}\cdot \sum_k \lambda_k \cdot \cnorm{q/p}{v_k}^{q/p}
&&(\text{by convexity of }\norm{q/p}{\cdot}^{q/p} \text{ when } q\geq p) \\
~\leq~
&\cnorm{p}{q}{A}^{q}\cdot \max_k \cnorm{q/p}{v_k}^{q/p}
&&(\text{since } \sum_k \lambda_k = \cnorm{p}{z}^{p} = 1)
\end{align*}
It remains to show that $\cnorm{q/p}{v_k}^{q/p}$ is precisely $\cnorm{q}{Bz_k}^q/\cnorm{p}{z_k}^q$.
\begin{align*}
\cnorm{p}{q}{A}^{q}\cdot \max_k \cnorm{q/p}{v_k}^{q/p}
~=~
&\cnorm{p}{q}{A}^{q}\cdot \max_k \frac{1}{\cnorm{p}{z_k}^q}\cdot \sum_j |\overline{z}_j(k)|^q \\
~=~
&\cnorm{p}{q}{A}^{q}\cdot \max_k \frac{1}{\cnorm{p}{z_k}^q}\cdot \sum_j |\mysmalldot{b_j}{z_k}|^q \\
~=~
&\cnorm{p}{q}{A}^{q}\cdot \max_k \frac{\cnorm{q}{Bz_k}^q}{\cnorm{p}{z_k}^q} \\
~\leq~
&\cnorm{p}{q}{A}^{q}\cdot \cnorm{p}{q}{B}^q
\end{align*}
Thus we have established $\cnorm{p}{q}{A\otimes B} \leq \cnorm{p}{q}{A}\cdot \cnorm{p}{q}{B}$.
Lastly, the claim follows by observing that the statement is equivalent to the statement obtained by
replacing the counting norms with expectation norms. \end{proof}
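We also record the (standard) reverse inequality, which shows that these norms in fact multiply exactly and is the form in which \cref{productivization} is used below: for any $x \in {\mathbb R}^{n_1}$ and $y \in {\mathbb R}^{n_2}$,
\[
\cnorm{q}{(A\otimes B)(x \otimes y)} = \cnorm{q}{(Ax) \otimes (By)} = \cnorm{q}{Ax} \cdot \cnorm{q}{By}
\quad \text{and} \quad
\cnorm{p}{x \otimes y} = \cnorm{p}{x} \cdot \cnorm{p}{y},
\]
so taking $x$ and $y$ to (nearly) achieve $\cnorm{p}{q}{A}$ and $\cnorm{p}{q}{B}$ gives $\cnorm{p}{q}{A\otimes B} \geq \cnorm{p}{q}{A} \cdot \cnorm{p}{q}{B}$.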
\iffalse \begin{theorem} [Restatement of~\cref{thm:main_hype}]
For any $p, q$ such that $1 \leq p \leq q < 2$ or $2 < p \leq q < \infty$ and $\varepsilon > 0$,
there is no polynomial time algorithm that approximates the \onorm{p}{q} of an $n \times n$ matrix within a factor
$2^{\log^{1 - \varepsilon} n}$ unless $\classfont{NP}\xspace$ admits a randomized quasi-polynomial time algorithm.
When $q$ is an even integer, the same statement holds unless $\classfont{NP}\xspace$ admits a deterministic quasi-polynomial time algorithm. \end{theorem} \fi
We finally establish super constant NP-Hardness of approximating \onorm{p}{q}, proving \cref{thm:main_hype}. \begin{proofof}{\cref{thm:main_hype}} Fix $2 < p \leq q < \infty$. \cref{hypercontractive:const:factor} states that there exists $c = c(p, q) > 1$ such that any polynomial time algorithm approximating the \onorm{p}{q} of an $n \times n$-matrix $A$ within a factor of $c$ will imply $\classfont{NP}\xspace \subseteq \classfont{BPP}\xspace$. Using \cref{productivization}, for any integer $k \in {\mathbb{N}}$ and $N = n^k$, any polynomial time algorithm approximating the \onorm{p}{q} of an $N \times N$-matrix $A^{\otimes k}$ within a factor of $c^k$ implies that $\classfont{NP}\xspace$ admits a randomized algorithm running in time ${\mathrm{poly}}(N) = n^{O(k)}$. Under $\classfont{NP}\xspace \not\subseteq \classfont{BPP}\xspace$, any constant factor approximation algorithm is ruled out by setting $k$ to be a sufficiently large constant. For any $\varepsilon > 0$, setting $k = \log^{1/\varepsilon} n$ rules out an approximation factor of $c^k = 2^{O(\log^{1 - \varepsilon} N)}$ unless $\classfont{NP}\xspace \subseteq \BPTIME{2^{\log^{O(1)} n}}$.
By duality, the same statements hold for $1 < p \leq q < 2$. When $2 < p \leq q$ and $q$ is an even integer, all reductions become deterministic due to \cref{hypercontractive:derandomized:const:factor}. \end{proofof}
\subsection{A Simple Proof of Hardness for the Case $2 \notin [q,p]$} \label{sec:reverse}
In this section, we show how to prove an almost-polynomial factor hardness for approximating \onorm{p}{q} in the non-hypercontractive case when $2 > p \geq q$ (and the case $p \geq q > 2$ follows by duality).
This result is already known from the work of Bhaskara and Vijayaraghavan \cite{BV11}. We show how to obtain a more modular proof, composing our previous results with a simple embedding argument.
However, while the reduction in \cite{BV11} was deterministic, we will only give a randomized reduction below.
As in \cite{BV11}, we start with a strong hardness for the \onorm{p}{p}, obtained in \cref{thm:main_hype}. While the reduction in \cite{BV11} relied on special properties of the hard instances for \onorm{p}{p}, we can simply use the following embedding result of Schechtman \cite{Schechtman87} (phrased in a way convenient for our application).
\begin{theorem}[Schechtman \cite{Schechtman87}, Theorem 5] Let $q<p<2$ and $\varepsilon>0$. Then, there exists a polynomial time samplable distribution ${\mathcal D}$ on matrices $B \in {\mathbb R}^{m \times n}$ with $m = O_{\varepsilon}(n^{3})$, such that with probability $1-o(1)$ over $B \sim {\mathcal D}$, we have for every $x \in {\mathbb R}^n$, $\norm{\ell_q}{Bx} ~=~ (1 \pm \varepsilon) \cdot \norm{\ell_p}{x}$. \label{thm:schechtman} \end{theorem}
In fact the distribution ${\mathcal D}$ is based on $p$-stable distributions.
While the theorem in \cite{Schechtman87} does not mention the high probability bound or samplability, it is easy to modify the proof to obtain these properties. We provide a proof sketch below for completeness. We note that Schechtman obtains a stronger bound of $O(n^{1+p/q})$ on the dimension $m$ of the $\ell_q$ space, which requires a more sophisticated argument using ``Lewis weights''. However, we only state the weaker $O(n^3)$ bound above, which suffices for our purposes and is easier to convert to a samplable distribution.
We first prove the following hardness result for approximating \onorm{p}{q} in the reverse-hypercontractive case, using \cref{thm:schechtman}.
\begin{theorem} For any $p, q$ such that $1 < q \leq p < 2$ or $2 < q \leq p < \infty$ and $\varepsilon > 0$, there is no polynomial time algorithm that approximates the \onorm{p}{q} of an $n \times n$ matrix within a factor $2^{\log^{1 - \varepsilon} n}$ unless $\classfont{NP}\xspace \subseteq \BPTIME{2^{(\log n)^{O(1)}}}$. \label{thm:reverse} \end{theorem}
\begin{proof}
We consider the case $1 < q \leq p < 2$ (the other case follows via duality).
\cref{thm:main_hype} gives a reduction from SAT on $n$ variables to approximating the \onorm{p}{p} of matrices $A \in {\mathbb R}^{N \times N}$, with $N = 2^{(\log n)^{O(1/\varepsilon)}}$, within a factor $2^{(\log
N)^{1-\varepsilon}}$. Sampling a matrix $B$ from the distribution ${\mathcal D}$ given by \cref{thm:schechtman} (with dimension $N$) gives that it is also hard to approximate $\norm{p}{q}{BA} \approx \norm{p}{p}{A}$, within a factor $2^{(\log N)^{1-\varepsilon}}$. \end{proof}
We now give a sketch of the proof of \cref{thm:schechtman} including the samplability condition. The key idea is to embed the space $\ell_p^n$ into the infinite-dimensional space $L_q$ (for $0 < q \leq p < 2$) using $p$-stable random variables. The corresponding subspace of $L_q$ can then be embedded into $\ell_q^m$ if the random variables (elements of $L_q$) constructed in the previous step are bounded in $L_{\infty}$ norm. This is the content of the following claim.
\begin{claim}[Schechtman \cite{Schechtman87}, Proposition 4] \label{clm:schechtman} Let $\varepsilon > 0$ and $\Omega$ be an efficiently samplable probability space and let $V$ be an $n$-dimensional subspace of $L_q(\Omega)$, such that \[ M ~:=~ \sup\inbraces{\norm{L_{\infty}}{f} ~\mid~ \norm{L_{q}}{f} \leq 1, f \in V} ~<~ \infty \,. \] Then there exists a polynomial time samplable distribution ${\mathcal D}$ over linear operators $T: L_q(\Omega) \rightarrow {\mathbb R}^{m}$ for $m = C(\varepsilon, q) \cdot n \cdot M^q$ such that with probability $1 - o(1)$, we have that for every $f \in V$, $\norm{\ell_q}{Tf} = (1\pm \varepsilon) \cdot \norm{L_q}{f}$. \end{claim}
\begin{proofsketch} The linear operator is simply defined by sampling $x_1, \ldots, x_m \sim \Omega$ independently, and taking \[ Tf ~:=~ \frac{1}{m^{1/q}} \cdot \inparen{f(x_1), \ldots, f(x_m)} \qquad \forall f\,. \] The proof then follows by concentration bounds for $L_{\infty}$-bounded random variables, and a union bound over an epsilon net for the space $V$. \end{proofsketch}
The problem then reduces to constructing an embedding of $\ell_p^n$ into $L_q$, which is bounded in $L_{\infty}$ norm. While a simple embedding can be constructed using $p$-stable distributions, Schechtman uses a clever reweighting argument to control the $L_{\infty}$ norm. We show below that a simple truncation argument can also be used to obtain a somewhat crude bound on the $L_{\infty}$ norm, which suffices for our purposes and yields an easily samplable distribution.
We collect below the relevant facts about $p$-stable random variables needed for our argument, which can be found in many well-known references, including \cite{Indyk06, AlbiacK06}.
\begin{fact} For all $p \in (0,2)$, there exist (normalized) $p$-stable random variables $Z$ satisfying the following properties:
\begin{enumerate}
\item For $Z_1, \ldots, Z_n$ iid copies of $Z$, and for all $a \in {\mathbb R}^n$, the random variable \[ S ~:=~ \frac{a_1 \cdot Z_1 + \cdots + a_n \cdot Z_n}{\norm{\ell_p}{a}} \,, \] has distribution identical to $Z$.
\item For all $q < p$, we have \[ C_{p,q} ~:=~ \norm{L_q}{Z} ~=~ \inparen{\Ex{\abs{Z}^q}}^{1/q} ~<~ \infty \,. \]
\item There exists a constant $C_p$ such that for all $t > 0$, \[ \Pr{\abs{Z} \geq t} ~<~ \frac{C_p}{t^p} \,. \]
\item $Z$ can be sampled by choosing $\theta \in_R [-\pi/2, \pi/2]$, $r \in_R [0,1]$, and taking \[ Z ~=~ \frac{\sin(p\theta)}{(\cos(\theta))^{1/p}} \cdot \inparen{\frac{\cos((1-p) \cdot
\theta)}{\ln(1/r)}}^{(1-p)/p} \,. \] \end{enumerate}
\end{fact}
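Item 4 gives a Chambers--Mallows--Stuck-type sampler. As a purely illustrative check of the stability property (item 1), the following Python snippet (names and parameters ours) samples $p$-stable variables and compares the quantiles of $\ip{a,Z}/\norm{\ell_p}{a}$ with those of a fresh sample.
\begin{verbatim}
import numpy as np

def sample_p_stable(p, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric p-stable variables
    # (theta uniform on (-pi/2, pi/2), ln(1/r) ~ Exp(1); see item 4 above)
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(p * theta) / np.cos(theta)**(1 / p)
            * (np.cos((1 - p) * theta) / w)**((1 - p) / p))

rng = np.random.default_rng(1)
p, n, trials = 1.5, 8, 200_000
a = rng.standard_normal(n)
Z = sample_p_stable(p, (trials, n), rng)
S = Z @ a / np.linalg.norm(a, ord=p)   # should be distributed like a single Z
ref = sample_p_stable(p, trials, rng)
qs = [0.1, 0.25, 0.5, 0.75, 0.9]
print(np.quantile(S, qs))              # the two quantile vectors should
print(np.quantile(ref, qs))            # approximately agree
\end{verbatim}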
We now define an embedding of $\ell_p^n$ into $L_q$ with bounded $L_{\infty}$, using truncated $p$-stable random variables. Let $Z = (Z_1, \ldots, Z_n)$ be a vector of iid $p$-stable random variables as above, and let $B$ be a parameter to be chosen later. We consider the random variables \[ \Delta(Z) ~:=~ \indicator{\exists i \in [n] ~\abs{Z_i} > B} \quad \text{and} \quad Y ~:=~ (1 - \Delta(Z)) \cdot Z ~=~ \indicator{\forall i \in [n] ~\abs{Z_i} \leq B} \cdot Z\,. \]
For all $a \in {\mathbb R}^n$, we define the (linear) embedding \[ \varphi(a) ~:=~ \frac{ \ip{a,Y}}{C_{p,q}} ~=~ \frac{\ip{a,Z}}{C_{p,q}} - \Delta(Z) \cdot \frac{\ip{a,Z}}{C_{p,q}}\,. \] By the properties of $p$-stable distributions, we know that $\norm{L_q}{\ip{a,Z}/C_{p,q}} = \norm{\ell_p}{a}$ for all $a \in {\mathbb R}^n$. By the following claim, we can choose $B$ so that the second term only introduces a small error.
\begin{claim} For all $\varepsilon > 0$, there exists $B = O_{p,q,\varepsilon}(n^{1/p})$ such that for the embedding $\varphi$ defined above \[ \abs{\norm{L_q}{\varphi(a)} - \norm{\ell_p}{a}} ~\leq~ \varepsilon \cdot \norm{\ell_p}{a} \,. \] \end{claim}
\begin{proof} By triangle inequality, it suffices to bound $\norm{L_q}{\Delta(Z) \cdot \ip{a,Z}}$ by $\varepsilon \cdot C_{p,q} \cdot \norm{\ell_p}{a}$. Let $\delta > 0$ be such that $(1+\delta) \cdot q < p$. Using the fact that $\Delta(Z)$ is Boolean and H\"{o}lder's inequality, we observe that
\begin{align*} \norm{L_q}{\Delta(Z) \cdot \ip{a,Z}} &~=~ \inparen{\Ex{\abs{\ip{a,Z}}^q \cdot \Delta(Z)}}^{1/q} \\ &~\leq~ \inparen{\Ex{\abs{\ip{a,Z}}^{q(1+\delta)}}}^{1/(q(1+\delta))} \cdot
\inparen{\Ex{\Delta(Z)}}^{\delta/(q(1+\delta))} \\ &~=~ C_{p,(1+\delta)q} \cdot \norm{\ell_p}{a} \cdot \inparen{\Pr{\exists i \in [n] ~\abs{Z_i} \geq
B}}^{\delta/(q(1+\delta))} \\ &~\leq~ C_{p,(1+\delta)q} \cdot \norm{\ell_p}{a} \cdot \inparen{n \cdot \frac{C_p}{B^p}}^{\delta/(q(1+\delta))} \end{align*}
Thus, choosing $B = O_{\varepsilon, p, q}(n^{1/p})$ such that \[ \frac{C_{p,(1+\delta)q}}{C_{p,q}} \cdot \inparen{n \cdot \frac{C_p}{B^p}}^{\delta/(q(1+\delta))} ~\leq~ \varepsilon \] proves the claim. \end{proof}
Using the value of $B$ as above, we now observe a bound on $\norm{L_{\infty}}{\varphi(a)}$.
\begin{claim} Let $B = O_{\varepsilon,p,q}(n^{1/p})$ be chosen as above. Then, we have that \[ M ~:=~ \sup\inbraces{\norm{L_{\infty}}{\ip{a,Y}} ~\mid~ \norm{L_{q}}{\ip{a,Y}} \leq 1} ~=~ O_{\varepsilon,p,q}(n) \,. \] \end{claim}
\begin{proof} By the choice of $B$, we have that $\norm{L_{q}}{\ip{a,Y}} \geq (1-\varepsilon) \norm{\ell_p}{a}$. Thus, we can assume that $\norm{\ell_p}{a} \leq 2$. H\"{o}lder's inequality then gives for all such $a$,
\begin{align*} \abs{\ip{a,Y}} &~\leq~ \norm{\ell_1}{a} \cdot \norm{\ell_{\infty}}{Y} \\ &~\leq~ n^{1-1/p} \cdot \norm{\ell_p}{a} \cdot B \\ &~\leq~ 2 \cdot n^{1-1/p} \cdot B ~=~ O_{\varepsilon, p, q}(n) \,, \end{align*}
which proves the claim. \end{proof}
Using the above bound on $M$ in \cref{clm:schechtman} gives a bound of $m = O_{\varepsilon,p,q}(n^{q+1}) = O_{\varepsilon,p,q}(n^3)$. Moreover, the distribution over embeddings is efficiently samplable, since it is obtained by truncating $p$-stable random variables. This completes the proof of \cref{thm:schechtman}.
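As a final illustrative sanity check of the construction just described (again not part of the proof), the following Python snippet samples the truncated $p$-stable embedding and verifies $\norm{\ell_q}{Ta} \approx \norm{\ell_p}{a}$ on random vectors; the numerical constants and the Monte-Carlo estimate of $C_{p,q}$ are arbitrary choices of ours.
\begin{verbatim}
import numpy as np

def sample_p_stable(p, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric p-stable variables
    theta = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(p * theta) / np.cos(theta)**(1 / p)
            * (np.cos((1 - p) * theta) / w)**((1 - p) / p))

rng = np.random.default_rng(2)
p, q, n = 1.8, 1.2, 10
m = 40 * n**3                           # the O(n^3) regime of the theorem
trunc = 30.0 * n**(1 / p)               # truncation level B = O(n^{1/p})

Z = sample_p_stable(p, (m, n), rng)
Z[np.abs(Z).max(axis=1) > trunc] = 0.0  # Y = (1 - Delta(Z)) * Z, row by row
C_pq = np.mean(np.abs(sample_p_stable(p, 10**6, rng))**q)**(1 / q)  # C_{p,q}
T = Z / (C_pq * m**(1 / q))             # rows give the embedding ell_p^n -> ell_q^m

for _ in range(3):
    a = rng.standard_normal(n)
    print(np.linalg.norm(T @ a, ord=q) / np.linalg.norm(a, ord=p))  # approx. 1
\end{verbatim}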
\ifnum\confversion=0
\appendix
\section{Dictatorship Test}\label{sec:dic}
First we prove an implication of the Berry-Ess\'een\xspace estimate for fractional moments (similar to Lemma 3.3 of \cite{GRSW16}; see also \cite{KNS10}). \begin{lemma} There exist universal constants $c>0$ and $\delta_0>0$ such that the following statement is true.
If $X_1,\cdots,X_n$ are bounded independent random variables with $\abs{X_i}\le 1$,
$\Ex{X_i}=0$ for $i\in[n]$, $\sum_{i\in[n]}\Ex{X_i^2}=1$,
and $\sum_{i\in[n]}\Ex{\abs{X_i}^3}\leq \delta$ for some $0 < \delta < \delta_0$, then
for every $p\geq 1$:
\[
\inparen{\Ex{\abs{\sum_{j=1}^n X_j}^p}}^{\frac{1}{p}} \leq \gamma_p\cdot
\inparen{1+c\delta\inparen{\log\inparen{\nfrac{1}{\delta}}}^{\frac{p}{2}}}.
\]
\label{lem:clt-analytic-sparse} \end{lemma}
Now we state and prove the main lemma of this section: \begin{lemma}\label{lem:dict}
Let $f : \{ \pm 1 \}^R \rightarrow {\mathbb R}$ be a linear
function for
some positive integer $R \in {\mathbb{N}}$ and $\widehat{f} : [R] \rightarrow {\mathbb R}$ be its linear Fourier
coefficients defined by
\[ \widehat{f}(i) := \Ex{x \in \{ \pm 1 \}^R}{x_i f(x)}.\]
For all $\varepsilon > 0$, there exists $\delta > 0$
such that if $\enorm{r}{f} > (\gamma_r + \varepsilon) \cnorm{2}{\widehat{f}}$ then
$\cnorm{4}{\widehat{f}} > \delta \cnorm{2}{\widehat{f}}$ for all $1 \leq r < 2$. \end{lemma} \begin{proof}
We will prove this lemma by contradiction.
Let us assume $\cnorm{4}{\widehat{f}} \leq \delta\cnorm{2}{\widehat{f}}$, for $\delta$ to be
fixed later.
Let us define $y_i:= \frac{\widehat{f}(i)}{\cnorm{2}{\widehat{f}}}$ for $i \in [R]$; throughout this proof we write $n := R$.
Then, for all $x\in\pmone^R$,
\[
g(x):= \sum_{i\in[n]} x_i\cdot y_i = \frac{f(x)}{\cnorm{2}{\widehat{f}}}\,.
\]
Let $Z_i=x_i\cdot y_i$ be the random variable when $x_i$ is independently uniformly
randomly chosen from $\pmone$. Now
\[
\sum_{i\in[n]}\Ex{Z_i^2}=\sum_{i\in[n]}\frac{\widehat{f}(i)^2}{\cnorm{2}{\widehat{f}}^2}=1\,.
\]
and
\[
\sum_{i\in[n]}\Ex{\abs{Z_i}^3} = \sum_{i\in[n]}
\frac{\abs{\widehat{f}(i)}^3}{\cnorm{2}{\widehat{f}}^3}= \sum_{i\in[n]}
\frac{\abs{\widehat{f}(i)}^2}{\cnorm{2}{\widehat{f}}^2}\cdot \frac{\abs{\widehat{f}(i)}}{\cnorm{2}{\widehat{f}}}
\leq \frac{\cnorm{4}{\widehat{f}}^2}{\cnorm{2}{\widehat{f}}^2} \leq \delta^2 \,,
\]
where the penultimate inequality follows from the Cauchy-Schwarz inequality.
Hence, by applying \cref{lem:clt-analytic-sparse}
on the random variables $Z_1,\cdots, Z_n$, we get:
\begin{align*}
\frac{\enorm{r}{f}}{\cnorm{2}{\widehat{f}}}=
\enorm{r}{g} & = \inparen{ \Ex{x\in \pmone^n} {\abs{g(x)}^r}}^\frac{1}{r}\\
& = \inparen{ \Ex{x\in \pmone^n} {\abs{\sum_{i\in[n]}Z_i}^r}}^\frac{1}{r}\\
& \leq \gamma_r\inparen{1+c\delta^2 \inparen{\log{\frac{1}{\delta}}}^{r}}\\
\end{align*}
We choose $\delta = \delta(\varepsilon)>0$ small enough that $\delta < \min(\delta_0, 1/e)$ and
$c\,\delta^2\bigl(\log\tfrac{1}{\delta}\bigr)^{2} < \varepsilon$; since $1 \leq r < 2$ and $\gamma_r \leq \gamma_2 = 1$,
this guarantees $\delta^2\bigl(\log\tfrac{1}{\delta}\bigr)^{r} <
\frac{\varepsilon}{c\gamma_r}$ (in particular, $\delta$ does not depend on $r$). For this choice of $\delta$, we get
$\enorm{r}{f}\leq (\gamma_r+\varepsilon)\cnorm{2}{\widehat{f}}$ -- a contradiction.
This completes the proof. \end{proof}
Finally we prove \cref{lem:clt-analytic-sparse}: \begin{proofof}{\cref{lem:clt-analytic-sparse}}
The proof closely follows that of Lemma 2.1 of \cite{KNS10}.
From Berry-Ess\'een\xspace theorem (see \cite{Beek72} for the constant), we get that:
\[
\Pr{\abs{\sum_{i=1}^nX_i}\geq u} \leq \Pr{\abs{g}\geq u} +
2\sum_{i=1}^n\Ex{\abs{X_i}^3} \leq \Pr{\abs{g}\geq u} + 2\delta\,,
\] for every $u>0$ and where $g \sim \gaussian{0}{1}$.
By Hoeffding's lemma, \[ \Pr{\abs{\sum_{i\in[n]} X_i}\ge t} < 2 \ensuremath{\mathrm e}^{-2t^2} \]
for every $t>0$.
Combining the above observations, we get:
\begin{align*}
\Ex{\abs{\sum_{i=1}^n X_i}^p} & = \int_{0}^{\infty} p u^{p-1}
\Pr{\abs{\sum_{i=1}^nX_i}\geq u}du \\
& \leq \int_{0}^{a} pu^{p-1}\Pr{\abs{g}>u} du + 2\delta a^p +
2\int_{a}^{\infty} pu^{p-1} \ensuremath{\mathrm e}^{-2u^2} du\\
& = \sqrt{\frac{2}{\pi}}\int_{0}^{a} u^p \ensuremath{\mathrm e}^{-\nfrac{u^2}{2}}du + 2\delta
a^p + \frac{2p}{2^{\frac{p-1}{2}}}\int_{2a^2}^{\infty} z^{\frac{p+1}{2}-1} \ensuremath{\mathrm e}^{-z} dz\\
& =\gamma_p^p -\sqrt{\frac{2}{\pi}}\int_{a}^{\infty} u^p\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}}du + 2\delta a^p
+\Gamma\inparen{\frac{p+1}{2},2a^2} ~ \,,
\end{align*}
where $\Gamma(\cdot,\cdot)$ is the upper incomplete gamma function and $a$ is a large constant determined later depending on $\delta$ and $p$.
The second term is bounded as
\begin{align*}
\int_{a}^{\infty}u^p\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}}du = a^{p-1}
\ensuremath{\mathrm e}^{-\nfrac{a^2}{2}} + (p-1) \int_{a}^{\infty}
u^{p-2}\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}} du \leq a^{p-1}
\ensuremath{\mathrm e}^{-\nfrac{a^2}{2}} + \frac{p-1}{a^2} \int_{a}^{\infty}
u^{p}\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}} du \,.
\end{align*}
Hence $\int_{a}^{\infty}u^p\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}}du \leq
\frac{a^{p+1}e^{-\nfrac{a^2}{2}}}{1+a^2-p}$.
We know that $\Gamma(\nfrac{p+1}{2},x) \sim x^{\frac{p-1}{2}}\ensuremath{\mathrm e}^{-x}$ as $x\rightarrow\infty$.
We choose $a=\gamma_p \sqrt{\log\frac{1}{\delta}}$. Hence there exists $\delta_0$
so that for all small enough
$\delta<\delta_0$, we have $\Gamma(\nfrac{p+1}{2},2a^2) \sim 2^{\frac{p-1}{2}}a^{p-1}
\delta^{2\gamma_p^2}\ll \delta a^p$ where the last inequality follows from
the fact that $2\gamma_p^2 > 1$ (as $p>1$).
Putting all this together, we get:
\[
2\delta a^p +\Gamma\inparen{\frac{p+1}{2},2a^2}-
\sqrt{\frac{2}{\pi}}\int_{a}^{\infty}u^p\ensuremath{\mathrm e}^{-\nfrac{u^2}{2}}du
\ll 3\delta a^p -\sqrt{\frac{2}{\pi}}\frac{a^{p+1}e^{-\nfrac{a^2}{2}}}{1+a^2-p}
\leq
c\gamma_p^p \delta \inparen{\log{\frac{1}{\delta}}}^{\nfrac{p}{2}}\,,
\]
where $c$ is an absolute constant independent of $a$ and $p$.
This completes the proof of the lemma. \end{proofof}
\fi
\end{document} | arXiv |
We use a new compilation of the hadronic $R$-ratio from available data for the process $e^+e^-\to$ hadrons below the charm mass to determine the strong coupling $\alpha_s$, using finite-energy sum rules. Quoting our results at the $\tau$ mass to facilitate comparison to the results obtained from similar analyses of hadronic $\tau$-decay data, we find $\alpha_s(m_\tau^2)=0.298\pm 0.016\pm 0.006$ in fixed-order perturbation theory, and $\alpha_s(m_\tau^2)=0.304\pm 0.018\pm 0.006$ in contour-improved perturbation theory, where the first error is statistical, and the second error combines various systematic effects. These values are in good agreement with a recent determination from the OPAL and ALEPH data for hadronic $\tau$ decays. We briefly compare the $R(s)$-based analysis with the $\tau$-based analysis.
1. Add captions to Tables and enumerate them.
2. In section 2 the authors infer that more data in the region s < 4 GeV^2 should imply a more precise determination of αs as compared with the higher s region. However, I do not see why this should be true in general, since theoretical uncertainties play a crucial role in the extraction of αs.
Response: This is discussed in much more detail in Ref. , to which we refer through footnote 1.
3. Explain briefly how the errors quoted in the last column of would-be Table 1 are obtained.
Response: This is very simple: by propagation of the data errors. We added a note stating this.
4. The error on αs displayed in Figure 4 is not the total error. This is stated by the authors in footnote 5, but it should also be stated in the caption of Figure 4. The total error should possibly be indicated in the Figure, or reported in the text.
Response: We added a sentence to the caption indicating that the individual error bars on the points reflect the fit errors. The purpose of this plot is explained in detail in the text, and its goal is not to show the final values of the error bars on αs, which can be found in Eq. (6).
5. Data for w4 should also be included in Figure 4, according to the content of would-be Table 2.
Response: We did not do this to avoid clutter in the plot. We added a sentence to this effect in the caption of this figure.
6. What is the effect of neglected higher dimensional condensates on the data in Figure 4, and how do they compare with the duality violations (black circles) especially in the low smin0 region?
Response: No higher dimensional condensates have been neglected, as explained in the text. The condensates we included follow directly from Cauchy's theorem.
7. The authors should address more clearly the stability of their fitted results reported in Figure 4 when varying smin0 and smax0. An analogous observation applies to section 5, see point 9.
Response: This is discussed in detail in the paper this writeup summarizes, see Ref. , to which we refer extensively for all details.
8. What is the channel displayed in Figure 5 left panel, V, A, V+A? And what are the uncertainties in the fitted curves?
Response: We chose the V channel, as this makes most sense in comparison with the e+e-based plots. We added a clarification to the caption.
9. Section 5 concludes that the determination of αs from e+e− data and the one from τ data are consistent. However, there is hardly enough information that can be extracted from Figure 5 and the surrounding text. Importantly, what happens to the final value of αs and to the fitted parameters of the duality-violation model when one varies smin0 in the τ-data fits?
This is a relevant point. I think that an accurate study of this dependence is needed in order to assess the stability and consistency of the results and to draw conclusions. If a complete analysis cannot be worked out in a reasonably short time, the authors can at least acknowledge this point in their contribution and formulate their final remarks accordingly.
Response: This writeup is not concerned with the τ-based analysis, for which we refer to Refs. [11,18], and references therein.
The authors have implemented most of the revisions. Though I would have preferred a more thorough reaction to point 9 of my report, there is sufficient improvement and the contribution can now be accepted for publication. | CommonCrawl |
Understanding diaschisis models of attention dysfunction with rTMS
Javier O. Garcia, Lorella Battelli, Ela Plow, Zaira Cattaneo, Jean Vettel & Emily D. Grossman
Scientific Reports volume 10, Article number: 14890 (2020)
Visual attentive tracking requires a balance of excitation and inhibition across large-scale frontoparietal cortical networks. Using methods borrowed from network science, we characterize the induced changes in network dynamics following low frequency (1 Hz) repetitive transcranial magnetic stimulation (rTMS) as an inhibitory noninvasive brain stimulation protocol delivered over the intraparietal sulcus. When participants engaged in visual tracking, we observed a highly stable network configuration of six distinct communities, each with characteristic properties in node dynamics. Stimulation to parietal cortex had no significant impact on the dynamics of the parietal community, which already exhibited increased flexibility and promiscuity relative to the other communities. The impact of rTMS, however, was apparent distal from the stimulation site in lateral prefrontal cortex. rTMS temporarily induced stronger allegiance within and between nodal motifs (increased recruitment and integration) in dorsolateral and ventrolateral prefrontal cortex, which returned to baseline levels within 15 min. These findings illustrate the distributed nature by which inhibitory rTMS perturbs network communities and is preliminary evidence for downstream cortical interactions when using noninvasive brain stimulation for behavioral augmentations.
The effective deployment of visual attention depends on spatially competitive mechanisms distributed through frontoparietal cortical networks1,2,3. Patients with unilateral insult to these networks often suffer from hemispatial neglect, a failure to explore and orient towards contralesional space4. Although the severity of the hemispatial neglect typically decreases in the months following stroke onset, many of these individuals are left with permanent visual "extinction" in which events in the contralesional visual field are ignored when they occur simultaneous with salient events in the ipsilesional field5,6. Despite the active study of neural mechanisms supporting visuospatial attention, the means by which maladaptive chronic deficits in visual extinction persist are still poorly understood.
Neglect models recognize that localized insult to the attention network has cascading effects that propagate throughout the attention network1,5. Interhemispheric competition is a key mechanism in these models, in which mutual inhibition is disrupted between transcallosally-connected homotopic circuits, resulting in an upregulation of excitation in the contralesional tissue that is otherwise healthy7,8,9,10,11,12. In addition, the frontoparietal attention network is strongly connected within-hemisphere via association fibers, the structural and functional connectivity of which are both essential for effective deployment of visuospatial attention13,14. Thus, there exist multiple possible mechanisms by which diaschisis, or the downstream destabilization of functionally connected circuits, may promote and sustain the maladaptive functional organization of attention systems observed in chronic extinction.
The goal of this study is to characterize how targeted insult within the attention network may restructure mesoscale network architecture in the brain. We hypothesize that disrupting a single node of the attention network has the potential to shift the dynamics of network organization downstream from the stimulation site, mediated by functionally connected circuits. This is based, in part, on the observation that acute and localized insult has the potential to alter the hodological, or network, spreading of dysfunction15,16,17.
In this study, we induced disruption using noninvasive offline, repetitive transcranial magnetic stimulation (rTMS), delivered to the intraparietal sulcus (IPS), a key posterior node of the dorsal attention system18,19,20. Repetitive TMS is a particularly effective tool for modulating large-scale brain networks, including downstream from the stimulation site, by employing a propagation of pulses that travel through functionally connected systems18,21,22,23,24,25,26. Moreover, rTMS is a neuromodulatory tool that induces transient metaplastic changes in cortical circuits that are sustained well beyond the time of stimulation27,28,29,30. The extended action of rTMS makes it well-suited for pairing with functional magnetic resonance imaging (fMRI) using a "condition-and-map" approach by which the durable impacts of neuromodulation on the dynamics of functionally connected circuits can be evaluated31.
To evaluate the non-localized impact of inhibitory rTMS, we use network science tools by which large-scale functional brain systems may be described, including how they dynamically organize into interconnected communities and are perturbed by interventions32,33. Using these tools, researchers have identified an architecture in healthy brains that reflects a balance of segregation and integration in functionally specialized systems that is flexible to task demands34. Localized damage from stroke is linked to poor segregation in the lesioned hemisphere that spreads to the healthy (i.e., unlesioned) hemisphere16,35,36,37. And although modular community structure provides a level of robustness that protects network architecture against disruption induced by insult38,39, localized lesions to the highly integrative areas may in fact promote disease spreading39 and encourage more severe behavioral impairments17.
In this study, we focus on the community dynamics that evaluate time-varying interactions of nodes and network configurations. These metrics quantify the movements of individual nodes among network community affiliations (flexibility, cohesion, promiscuity), and quantify the dynamic deviations of nodal motifs from a standard model (recruitment and integration). These metrics, in particular, have been effective for identifying potential biomarkers of dysfunction in higher cognitive systems40 and have been proposed as potential indicators of positive outcomes in the context of rehabilitation after injury41. For these reasons, we hypothesize that the network dynamics may be informative for describing the destabilization of brain systems induced by neuromodulation.
Collectively, our results reveal rTMS transiently alters the integrative properties of brain regions distal from the stimulation site, particularly in prefrontal cortex. In these healthy participants, network dynamics restabilize to their archetypal (or stable) state within a time interval that approximates stimulation time. Our findings are evidence that the propagation of noninvasive brain stimulation may capitalize on the flexible dimensions of neural community architecture, with localized inhibition increasing network dynamics distal from the stimulation site.
This study proceeded as a "condition-and-map" paradigm, in which participants received 15 min of 1 Hz inhibitory active stimulation (or sham, collected in a separate session on a separate day) over the left intraparietal sulcus. rTMS was followed immediately, within 5 min, by the initiation of fMRI. During the brain imaging measures, participants engaged in bilateral visual tracking designed to elicit competitive interactions within the cortical attention system. Participants engaged in this task for three sequential 12-min scans. The analyses below reflect the characterization of network dynamics over the course of those 36 min following stimulation.
Community structure
We first identified the community structure in networks over the course of the task using dynamic community detection42,43, which is an algorithm to distill complex connectivity matrices (e.g., coherence matrices) into a series of coarse clusters of networks across time (see Dynamic community detection in "Materials and methods" section for specifics). In brief, this method optimizes the modularity function Q such that network assignments reflect a community organization that is the most different from a null model, composed of shuffled weighted edges derived from the observed connectivity estimates. After derivation of these temporally evolving communities, we may then characterize these network dynamics with metrics that describe specific community changes across time. We first use this method, however, to identify the consensus community structure in the sham condition, to reflect the most common community organization across time and space.
As shown in Fig. 1, cortical coherence dynamics are best characterized by six communities: a dorsal visual community (pink); a ventral visual community (gray); a parietotemporal community (yellow); a community capturing nodes of the default mode network (blue); a ventrolateral prefrontal community (green); and a dorsolateral prefrontal community (orange).
Community architecture during visual tracking, as assessed in the sham condition. Region centroids are represented as orbs plotted on a semi-transparent inflated brain. Colors indicate the consensus community structure, or the most common network architecture found across time and participants. Results indicate that this community structure is the most common across time and participants, with an average 84.6% similarity in partitioning between this community structure and the others across participants and time.
Community dynamics are robust to inhibitory rTMS
A comparison of the individual node metrics in the rTMS and sham conditions revealed that neuromodulation had little impact on the overall community dynamics when assessed over the entire 36 min following stimulation (Fig. 2). Node flexibility (movement between communities), cohesion (the movement of network motifs), and promiscuity (dispersion of movement across communities) did not differ statistically when assessed for the rTMS and sham conditions via a series of paired sample t-tests. When comparing sham to rTMS for each metric, no node survived FDR correction for multiple comparisons (q < 0.05)44.
Indeed, a correlation between the subject-averaged metrics showed that the rTMS metrics explained 84.1% (r = 0.92, p < 0.001), 84.2% (r = 0.92, p < 0.001), and 99.6% (r = 0.99, p < 0.001) of the variability in the corresponding sham metrics for flexibility, cohesion, and promiscuity, respectively. These results show that local dynamics are largely robust to the perturbations introduced by inhibitory rTMS.
Evaluating community dynamics
Although node dynamics were remarkably stable when pooled over 36 min of scanning, three communities contained subsets of nodes that were among those that exhibited extreme changes in community affiliation. These communities were characterized by nodes with dynamics that deviated significantly from what would be expected by chance (when assessed against a bootstrapped distribution of sample means, as shown by the gray shaded region in Fig. 2; see Population distributions of node metrics in the Statistical Significance segment in "Materials and methods" section). These nodes were largely within the parietotemporal community (PT; yellow), which included the stimulation site; the community organized around nodes in ventrolateral prefrontal cortex (VLPFC; green); and nodes aligned in a community of regions distributed through dorsolateral prefrontal cortex (DLPFC; orange).
Nodal community affiliation metrics in the rTMS and sham conditions. (A–C) Average metric for each node across participants for each of the estimated individual nodal metrics, plotted with the corresponding color of the representative community as indicated by the legend. Circular gray region denotes metric scores expected by chance (see Population distributions of node metrics in "Statistical significance" section for calculation specifics). Those scores exceeding 95% of the estimated null distribution are shown as larger tokens exceeding the null region. Semi-transparent inflated mesh brains inlaid in the upper left and lower right of each panel illustrate the spatial location of the nodes with metrics more extreme than expected by chance, with the size of the node scaled to the relative strength of the metric score. Images in the upper left of each panel show the associated metric score as assessed in the sham condition and those in the bottom right are from the TMS condition.
Nodes in left parietal cortex, which were targeted by the rTMS, were affiliated with a community characterized by high flexibility, cohesion, and promiscuity of the nodes. In other words, parietal brain regions changed community affiliation more than the population as a whole (high flexibility), with the new community affiliations tending to occur jointly with other nodes from within the PT community (high cohesion). These new community affiliations were also distributed across many networks (high promiscuity). Together, these characteristics illustrate the potential for nodes within the PT community to dynamically reorganize with other communities to promote large-scale integration across brain systems.
In contrast, nodes in the dorsolateral prefrontal cortex communities (shown in orange) revealed largely the opposite pattern of community dynamics. These nodes were less likely to change community affiliation (lower flexibility), and when nodes in this network altered their community allegiance they tended to do so in isolation (low cohesion) and to relatively few other communities (low promiscuity).
Although more diverse than the PT community, nodes in the ventrolateral prefrontal cortex demonstrated a similar pattern, with nodes cohesively changing community affiliation more frequently than expected by chance. The ventrolateral prefrontal cortex nodes, however, shifted affiliations across communities (promiscuity) consistent with that expected from a system of this modularity.
Inhibitory rTMS changes to community allegiance
We next characterized the rapid network reconfigurations induced by rTMS by evaluating the node dynamics within shorter sliding windows of time. Functionally-driven architectures evaluated over the entire scan session reflect a stationary snapshot of what is an inherently dynamic system, with rapid reconfigurations dependent on specific task contexts and constrained by the underlying structure of the network45,46. This time-dependent analysis considers both the dynamics of individual nodes (as above) and a quantitative measure of deviations of nodal motifs from a standard, reference community model. The reference system in this analysis is the consensus community structure as observed in the sham condition, which reflects the functionally-driven interactions without the rTMS intervention.
Recruitment is a measure of dynamic deviations from a model system, quantifying the extent to which nodes of a particular system cohere in the same community over time. High recruitment indicates sustained and stable movements in community affiliation between nodes of the same system, whereas low recruitment indicates a decrease in allegiance following rTMS. In contrast, the integrative properties of a system characterize the extent to which nodes of a particular system have high allegiance to nodes of other systems. High integration is indicative of a node that shares the same affiliation frequently with nodes from other systems (i.e., high inter-system allegiance), either in a sustained or flexible manner, whereas low integration is characteristic of a node that is weakly allied with nodes in other functional systems.
In the first 10 min after stimulation, an average of 62% of the nodes increased their promiscuity as compared to the standard (sham) network, 86% of nodes increased their recruitment properties, and 72% of nodes increased their integration characteristics. Statistical single-sample t-tests of the mean normalized SSD (Fig. 3) identified temporal windows within the first 10 min following stimulation in which node promiscuity across the entire brain significantly increased after stimulation as compared to sham (at 3.3 min: t(6) = 3.9, p = 0.008; 4.0 min: t(6) = 3.0, p = 0.024; 4.7 min: t(6) = 3.6, p = 0.011; 8.0 min: t(6) = 3.1, p = 0.022; and 8.7 min: t(6) = 3.4, p = 0.014). No temporal intervals survived multiple comparison correction when comparing flexibility and cohesion after the active stimulation and sham conditions.
rTMS-related nodal and motif dynamics. (A) Time-dependent changes in node metrics (flexibility, cohesion, promiscuity) following rTMS. Normalized SSD is the sum of squared differences (SSD) between sham and rTMS conditions, aggregated over all community nodes. Error bars indicate standard error of the between-subject means, computed from individual participant z-scored SSD timeseries. Metrics are estimated in approximately 40 s windows for concatenated 12 min scans. The shaded region isolates the first 10 min following stimulation during which the metrics most strongly deviate from the community structure as evaluated during Sham baseline. Bars along the bottom axis represent significant time points (uncorrected) from a single-sample t-test, with color indicating metric (e.g. orange = promiscuity). Asterisks (*) at the top of the figure indicate significant time points following an FDR correction for multiple comparisons (i.e. time point comparisons). (B) Recruitment (stability vs flexibility) and integration (connectedness vs isolation) coefficients over time. (C–E) Promiscuity, recruitment and integration computed during the first 10 min following rTMS (vs sham), shown for each node and organized by community.
When corrected for multiple comparisons, system recruitment increased significantly as compared to sham approximately 6.0 min after stimulation (t(6) = 6.3, p = 0.001). These changes in the network dynamics returned to a stabilized state over a duration roughly equivalent to the duration of stimulation, which is consistent with the expected duration of impact as assessed behaviorally20,47.
The largest transient increases in recruitment and integration of nodal motifs following rTMS were predominantly within the ventrolateral and dorsolateral prefrontal cortex communities: bilateral orbitofrontal cortices, the anterior cingulate and the fusiform (Fig. 4). These nodes exhibited increased allegiance within their own and to other communities. Supplementary motor areas of the precentral cortex also exhibited high recruitment and promiscuity following rTMS, indicating sustained allegiance within its own system with increased connectedness of individual nodes to other communities. Unexpectedly, rTMS did not substantially impact the allegiance of nodes directly under the stimulation site, evidenced by the relatively small change in integration and recruitment.
Changes in promiscuity and allegiance metrics immediately following rTMS. Color maps correspond to the top 20% regional increases of these metrics within 10 min of the delivery of rTMS. Top row: dorsal view. Bottom row: ventral view. (A) Nodes with greatest increase in promiscuity included bilateral dorsal precentral cortex, orbitofrontal and middle temporal cortices. (B,C) Nodes with the highest integration (left) and recruitment coefficients (right) include anterior cingulate, bilateral orbitofrontal and fusiform, and precentral sulcus (recruitment only).
Integrative properties of the stimulation site
An important consideration in network dynamics is whether individual nodes have a specialized role in the functional architecture, either in stabilizing the system or promoting information spreading across systems16,37,48. To determine whether the stimulation site, left intraparietal cortex, has unique structural properties in the system (i.e. high degree node, hub, etc.), we compared within-module degree, which is indicative of hub-like properties, and participation coefficient, a metric of integration, of the stimulation site to the overall distribution observed within our network (Fig. 5A). These metrics, unlike those previously discussed, are not dynamic. Instead, they characterize the strength and distribution of connectivity, as a static system, to the larger network as a whole.
Characterizing the stimulation site. (A) Scatterplot of within-module degree and participation coefficients. Vertical dotted lines mark the central 90% of the participation coefficient across the brain (~ 133 nodes), and horizontal lines mark the central 90% of the within-module degree across all 148 regions. (B) Orbs plotted at the centroid of the regions that show the 95th percentile of the within-module degree, designating the most 'hub-like' nodes within the brain. (C) Orbs plotted at the centroid of the regions that show the 95th percentile of the participation coefficient, representing the 'integrating' nodes within the brain. (D) Orbs plotted at the centroid of all regions of the parcellated Destrieux atlas, where the size of each node is scaled by the distance from the stimulation site or from the homologous region in the opposite hemisphere. The stimulation site (left intraparietal sulcus) is organized within the larger parietal, ventral temporal and orbitofrontal community (PT; shown in yellow).
Nodes with the highest within-module degree scores in this functional architecture were observed in dorsolateral prefrontal areas (orange; Fig. 5B); whereas regions with the highest participation coefficients during the visual tracking were in the occipital regions (pink; Fig. 5C). Nodes beneath the stimulation site in left IPS are neither hubs nor integrators, with a normalized node degree (− 0.01) and participation coefficient (0.78), a characteristic of many regions. Consequently, the community structure changes we observe following inhibitory rTMS over left IPS reflects the general susceptibility of the attention network to insult rather than unique connectedness of the targeted node.
Behavioral results
Another important consideration is the impact of rTMS on behavioral performance in the motion tracking task. Participants, on average, exhibited an 8% decline in contralateral tracking for the first 12 min following rTMS, with performance returning to that observed following sham for the 24 min following (Fig. 6). Immediately following stimulation, the impact of rTMS on performance trended, but did not reach, statistical significance (t(6) = −1.31, p = 0.24), likely due to the relatively few numbers of tracking trials completed during each scan. There was no effect of the rTMS in the 24–48 min following the stimulation (Run 2: t(6) = −0.76, p = 0.47; Run 3: t(6) = −0.75, p = 0.48; Run 4: t(6) = −0.13, p = 0.90). Bootstrap analyses confirmed that tracking in the contralateral visual field immediately following stimulation scored within the lowest quartile of performance as compared to later timepoints and tracking in the ipsilateral visual field.
Experimental task. Participants (A) were cued to attend to one of the four pinwheel wedges in each hemifield, (B) then tracked those wedges through 3 s of rotation at a speed individually calibrated for 85% accuracy (3 up, 1 down staircase procedure, conducted prior to stimulation). One pinwheel terminated in an upright position and (C) participants indicated which of the four wedges (up, down, left or right) corresponded to the cued wedge. (D) The mean changes in tracking accuracy following rTMS (as compared to sham baseline) for the contralateral and ipsilateral hemifields. The impact of rTMS on behavior was most apparent in the first experimental run of tracking, contralateral to the stimulation site (in the right visual field). Error bars indicate SE mean difference (TMS-Sham). (E) A bootstrap analysis in which run and hemifield labels were discarded to generate null distributions of expected change in performance (10,000 iterations) shows only 12.7% and 4.5% of contralateral and ipsilateral scores, respectively, have larger effect size than the impact of rTMS on contralateral tracking in the first experimental run (~ 12 min).
The goal of this study was to characterize the dynamics of mesoscale network architecture for visual attention disrupted by noninvasive neuromodulation. Inhibitory rTMS over left parietal cortex targets cortical networks whose competitive mechanisms support one's ability to engage in bilateral visual tracking20,49,50. We observed rTMS to introduce transient downstream changes to community dynamics that outlasted the 15-min stimulation interval, predominantly within lateral prefrontal brain systems. Prefrontal nodes increased the probability of allegiance with nodes in the same system (recruitment) and to other systems (integration), integrating with many communities at a high rate (high promiscuity). The changes in community dynamics following rTMS returned to the expected levels (as observed in a sham condition) within approximately 15 min. Together, these findings demonstrate intraparietal rTMS to destabilize the network architecture of nodes functionally connected to the stimulation site, inducing a temporary state of increased integration within an otherwise segregated system.
Relationship to models of attention dysfunction (neglect and extinction)
That inhibition targeting the parietal nodes of the attention network would impact functional systems in lateral prefrontal cortex is not surprising in the context of the larger attention system, which spans dual frontoparietal pathways1,2 and couples with a frontal cognitive control system depending on task demands51,52,53,54. Moreover, results suggest that structural and functional connectivity of the entire frontoparietal system is critical for effective deployment of visuospatial attention10,13,55,56.
The intraparietal sulcus, the region targeted in this study, is an integral parietal node in the dorsal attention network. In previous studies, the delivery of inhibitory rTMS over left parietal cortex impairs one's ability to engage in bilateral visual tracking20, a task sensitive to the sustained attention deficits observed in right parietal stroke patients19,57,58,59,60. The dorsal attention system is a neural system well-characterized by interhemispheric inhibitory connections that are more susceptible to excess excitation contralateral to a unilateral insult61. Neglect as a consequence of damage to either the frontal or parietal nodes in this network is attributed to a chronic imbalance in interhemispheric excitation and inhibition in the lesioned and contralateral parietal cortices, which can be re-established by delivering inhibitory rTMS to the healthy hemisphere49,50,62.
In terms of its network properties, the dorsal attention system lacks high centrality and instead is characterized by strong homotopic transcallosal connectivity that shapes interactions between contralateral maps of spatial attention2,55. These systems in particular are amenable to inhibitory noninvasive brain stimulation as a means for re-establishing normalized circuitry28. This is in contrast to nodes in the ventral attention network, which are characterized by high centrality, indicating they serve as hubs at which multiple brain networks are integrated38,63. As a result, direct insult to these regions is predicted to induce more pervasive and severe impairments16. Our results also implicate dysfunction in the integrative function of frontal circuits that may be a consequence of a localized lesion to parietal cortex. By characterizing modular reorganization in the neuromodulated state, we gain insights into the maladaptive reorganization that patients exhibit in the chronic phase of stroke recovery.
Comparison to theta-burst inhibitory neuromodulation
To our knowledge, this is the first characterization of large-scale network community dynamics following rTMS. Our findings, however, dovetail with other reports of inhibitory neuromodulation as a means to disrupt global connectivity. Both rTMS and inhibitory "theta burst" stimulation (TBS) induce plasticity in local and distally connected circuits27. Network architecture indicates inhibitory TBS functionally isolates the stimulation site, decreasing local and long-range network connectivity when delivered over motor cortex, as assessed by evaluating the estimated resting state participation coefficient 25 min following stimulation64. The impact is also not strictly local, as demonstrated by decreased resting state connectivity both within the stimulated hemisphere and in the unstimulated hemisphere in the alpha band following inhibitory TBS65. When applied using a multi-day accelerated protocol, TBS over left dorsolateral prefrontal cortex induces prolonged disruptions in network integration distributed throughout cortex, apparent 3 days after a week of stimulation66. Thus, neuromodulation has the potential to introduce sustained reorganization in large-scale systems beyond the targeted stimulation site.
A comment on resting state versus task-based susceptibility
We found that the robust consensus community did not differ significantly between the rTMS and sham manipulations, with no significant change to nodal dynamics when evaluated over the entire scanning session. Only when considered via a sliding window within the brief interval directly following rTMS did we observe significant changes in nodal promiscuity, integration and recruitment. That network architecture measured over durations exceeding those anticipated for the effects of offline rTMS would be resilient to perturbation is not particularly surprising. This is also consistent with previous findings that noninvasive brain stimulation has no impact on global metrics of network modularity or clustering coefficients, even when measured in the resting state65,66.
The robust community structure observed is also perhaps not entirely surprising in the context of previous work demonstrating the potency of tasks to increase connectivity between otherwise loosely connected networks. We note that, as compared to the resting state, task states are associated with increased network integration and decreased segregation of communities across a wide range of sensory, motor and cognitive tasks37,67,68, particularly for highly cognitively demanding tasks69,70. For example, extended practice of finger sequences that increases motor automaticity also decreases the modularity of brain systems, while increasing the number of transitions between network structures (i.e. the flexibility of the network)72. Moreover, those individuals who exhibit more flexibility of the network tend to demonstrate more learning. Thus, tasks can have a profound influence on the structure and neural dynamics of the underlying cortical architectures.
One hypothesis is that the increase in network integration during task-driven neural states reflects the strengthening of otherwise weak long-range connections, which in turn streamlines efficiencies while reducing neural flexibility into future brain states71. Our results are consistent with this proposal. Among the communities observed during visual tracking, some exhibited extreme dynamics compared to the population as a whole, as estimated using a null model. Nodes in the heteromodal parietotemporal cortex, which included the stimulation site, were among the most flexible, tending to coherently integrate with other communities, consistent with previous models of network ablation that reveal higher levels of network flexibility driven by heteromodal cortex72.
Susceptibility of the intraparietal sulcus to neuromodulation
Some nodes have specialized roles within the network, determined by their connectivity both within and between communities73. Network hubs, which have many connections within their own community (high within-network node degree) but relatively low integration across networks (participation coefficients), have a stabilizing role in brain networks37. In contrast, integrator nodes form bridges across communities, serving as connectors across otherwise segregated cortical networks74. The balance between hubs and integrators facilitates the cooperation between global synchrony and local computation, a hallmark of small-world networks, and facilitates rapid communication between distal sites75. Whereas too much integration may promote disease spreading, not enough may diminish interaction between modules39,76.
Most relevant to our findings, network hubs and integrators have differential sensitivity to perturbation. Network hubs are more resilient to insult as compared to more peripheral nodes48, and the extent to which a node serves as a bridge is linked to more disease spreading, greater diaschisis, and more significant behavioral deficits16,77. In computational models, structural and functional connectivity is particularly susceptible to lesions impacting nodes of high centrality38. In situ lesion analyses bear out the same findings and moreover show that damage to regions with high participation coefficients is associated with more significant behavioral impairments17. The impact on global network architecture is apparent, with damage to connector nodes increasing granularity of the community structure, quantified as increased modularity, in both the lesioned and healthy hemispheres36.
These findings support the proposal that neuromodulation will have the greatest clinical translation when delivered over high participation connector nodes with strong connectivity to nodes outside the network78. When impacted by acute brain injury, the left IPS (the targeted node in this study) is associated with disruptions to transcallosal homotopic connectivity and to the functional balance of excitation and inhibition across hemispheres, which in turn is correlated with the severity of hemispatial neglect following stroke10,79. The ventral attention network, on the other hand, is more closely associated with association fibers that support long-range connectivity within the same hemisphere14, which, when disrupted, nonetheless have significant potential to alter network architecture38,63. Although dorsal parietal cortex has high scores of integration in the resting state74, the integrative properties of this region will vary dynamically over time and as a function of tasks68. Attention tasks, in particular, have the effect of increasing the participation coefficients across the brain overall as compared to rest.
In our measures, which included BOLD measures corresponding to intervals of attentive tracking and inter-trial rest, the left IPS had modal scores for node degree and participation coefficient as compared to the population, qualifying neither as a connector nor as a hub. Nonetheless, we observed the impact of the rTMS to destabilize the dynamics in communities downstream from the stimulation site. We therefore conclude that in the context of an attentive tracking task, the IPS behaves sufficiently as an integrator to facilitate the disruptive influence of rTMS through frontoparietal circuits.
Inhibitory rTMS during visual tracking temporarily increases the within and between system integrative properties of functional systems of an otherwise stabilized and highly cohesive network. Rather than being restricted to the stimulation site, the influence of inhibitory rTMS on network architecture is most robust in the dynamics of nodal motifs outside of the targeted brain system. These findings suggest that connectivity and integrative properties of brain systems downstream from parietal cortex may underlie chronic attention dysfunction in parietal stroke.
Nine healthy participants (mean age ± SD 27.72 ± 5.99 years, 7 males) participated in the experiment. Two participants were excluded from analysis due to gradient artifacts in the MR data incompatible with timeseries analyses. All participants had normal or corrected-to-normal vision. All participants met the inclusion criteria for participation in both TMS80 and MRI, and experiments were carried out in accordance with the guidelines set out for this method80. Participants provided written informed consent in accordance with the experimental protocols approved by the Institutional Review Board of the Beth Israel Deaconess Medical Center, Boston, MA.
Experimental procedures are detailed in Plow et al.20 and follow previous research19,81. Briefly, participants engaged in a bilateral visual tracking task in which they monitored 3 s of rotation by two rotating pinwheels positioned in the left and right hemifields (Fig. 6). Rotation speed was fixed at the rate that elicited 85% tracking accuracy across both the right and left visual fields, calibrated individually in a session prior to the rTMS/imaging. Participants completed 36 tracking trials per scan (18 per hemifield), each separated by a 20 s intertrial interval. Stimuli were generated in MATLAB using the Psychophysics Toolbox82,83.
Participants received 15 min of 1 Hz rTMS at 75% of the maximum stimulator output or sham, conducted in two experimental sessions in a counter-balanced order. Stimulation was applied using a MagStim device (MagStim, Whitland, Wales, UK) with a 70-mm figure-of-eight coil, with the coil held such that the handle pointed posteriorly at an angle of 45° to the inter-hemispheric fissure, at an orientation that aligned it perpendicular to the left IPS. Sham stimulation was delivered identically to the TMS, with the exception that the coil was positioned with the edge at an angle perpendicular to the head. Stimulation was targeted to the left intraparietal sulcus (IPS), identified using frameless stereotaxic neuronavigation (BrainsightTM, Rogue Research Inc., Montreal, QC, Canada) co-registered with the individual participant's anatomical images (average Talairach (mean ± SD): X = −23.4 ± 5.2, Y = −67.6 ± 4.3, Z = 52.9 ± 2.5).
fMRI data collection was initiated within 4 min following active and sham rTMS. fMRI data were acquired using a whole-body 3T Philips scanner equipped with a standard birdcage headcoil. High-resolution (1 × 1 × 1.2 mm) T1-weighted MPRAGE images were subjected to automated surface-based segmentation and Destrieux atlas parcellation84 in Freesurfer85. Functional images (EPI, TR = 2 s, 20 axial slices acquired interleaved, 2.4 × 2.4 × 4 mm with a 0.5 mm gap, TE = 55 ms, flip angle = 90°) were collected for three successive 12:12 min scans (366 volumes per scan).
Preprocessing of the functional scans was conducted in Brain Voyager (Brain Innovation), including correction for slice acquisition timing, removal of linear trends, correction for rigid body movement within and across the volumes, and spatial smoothing (3 mm FWHM). fMRI volumes were then coregistered to the individual participant anatomies and morphed into surface space. The voxel timeseries for each surface region of interest was extracted, z-scored, then averaged into a single timeseries. This resulted in 148 timeseries corresponding to the mean brain activity from the 74 bilateral regions of the Destrieux atlas.
Our analytical pipeline is outlined in Fig. 7 and follows procedures recommended in a review of community distillation of neural networks86. This procedure is also consistent with a previous investigation of single-pulse TMS network dynamics within band-specific intrinsic oscillatory activity87.
Dynamic community detection overview. Schematic of the seven steps completed for the dynamic community detection and metric estimation within this dataset. Briefly, (1) cortical regions of interest were parcellated using the Destrieux atlas and (2) average regional time-courses were extracted from each of the 148 regions for each participant. (3) Timecourses were then passed through a continuous wavelet transformation and coherence was estimated between each pair of regions. (4) These connectivity matrices were then subjected to dynamic community detection across a set of parameters to determine the optimal "scale" in the dynamic community architecture. (5) A final community affiliation was calculated for the 'optimal' parameters, and these temporal labels were then used to (6) estimate community metrics (e.g., flexibility, cohesion, promiscuity, recruitment, and integration) and a representative community structure across participants (7).
Functional connectivity analysis To estimate functional connectivity of brain networks, we used an undirected coherence measurement as computed between each pair of the 148 Destrieux brain regions (Fig. 7, steps 1–3; for review, see88). The continuous wavelet transformation and coherence estimation were calculated using Matlab (Mathworks, Inc.) and a freely available wavelet toolbox89. Coherence was calculated across discrete non-overlapping 20-volume (40 s) temporal windows that spanned the 3 concatenated experimental runs (1,098 volumes) and averaged across frequencies corresponding to the task-relevant spectral content in fMRI BOLD connectivity, approximately 0.06–0.12 Hz90,91,92. This calculation yielded 54 unique coherence matrices representing the strength of each pairwise connection across the atlas parcellation.
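As an illustration of this step, the following sketch (Python) computes band-averaged coherence matrices in non-overlapping 20-volume windows from a regions × volumes array. It is a minimal stand-in only: it uses Welch-based magnitude-squared coherence from SciPy rather than the continuous wavelet coherence used in the actual analysis, and the array shape, window length, and band limits are assumptions taken from the description above.

import numpy as np
from scipy.signal import coherence

def windowed_coherence(ts, tr=2.0, win=20, fmin=0.06, fmax=0.12, nperseg=10):
    # ts: (n_regions, n_volumes) array of z-scored regional BOLD timeseries
    # returns (n_windows, n_regions, n_regions) band-averaged coherence matrices
    fs = 1.0 / tr                                     # TR = 2 s -> sampling rate 0.5 Hz
    n_reg, n_vol = ts.shape
    n_win = n_vol // win
    mats = np.zeros((n_win, n_reg, n_reg))
    for w in range(n_win):
        seg = ts[:, w * win:(w + 1) * win]
        for i in range(n_reg):
            for j in range(i + 1, n_reg):
                f, cxy = coherence(seg[i], seg[j], fs=fs, nperseg=nperseg)
                band = (f >= fmin) & (f <= fmax)      # task-relevant BOLD band
                mats[w, i, j] = mats[w, j, i] = cxy[band].mean()
    return mats

# example with synthetic data: 10 regions, 1,098 volumes -> 54 windows
rng = np.random.default_rng(0)
print(windowed_coherence(rng.standard_normal((10, 1098))).shape)  # (54, 10, 10)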
Dynamic community detection To capture changes in networks over the course of the task, we utilized a multilayer dynamic community detection analysis42,43. Following our previous work on fMRI data86,93, we investigated the network effects of stimulation across the 54 time windows. The community detection algorithm optimizes a modularity quality function, Q, using a Louvain-like greedy algorithm94 to assign brain regions to communities. The community assignments are dependent on two parameters: (1) a structural resolution γ parameter (γ = 0.25 to γ = 31.62, in 20 logarithmically spaced steps) and (2) a temporal resolution ω parameter (ω = 0.25 to ω = 31.62, also in 20 logarithmically spaced steps). We search this parameter space to find the optimal parameter pair for this unique stimulation dataset, determined by comparing the mean value of optimized value Q in the experimental data to the mean value of Q in a shuffled 'null' model of the data.
Since we were most interested in the impact of rTMS on normal community structure, we used the sham condition for the parameter search and compared these estimates to those generated by a "null model" created by randomly shuffling the pair-wise coherence estimates and therefore destroying the correlational structure. This allowed us to draw conclusions of the effects of rTMS unbiased to the scale determined by the parameters. For each parameter pairing, we compared the observed model's Sham Q and the null model's Null Q (shuffled connectivity patterns) for each participant (Fig. 7, step 4). These parameters were optimized for γ = 0.6952 and ω = 1.1965, which were used in the reported analyses (Fig. 7, step 5).
Community detection was completed 100 times for each participant and each condition, as the community detection algorithm is non-deterministic95. This yielded 100 sets of community labels for the 148 nodes for each condition and for each of the seven participants across time. These iterations and final community labels were then used to calculate the subsequent metrics.
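To make the multilayer formulation concrete, the sketch below assembles the supra-modularity matrix of the Mucha et al. multilayer framework from the windowed connectivity matrices for a given (γ, ω) pair. The community optimization itself is left to a placeholder call (louvain_like_optimize is hypothetical, standing in for the GenLouvain-style routine used in practice), and the layer array shape is an assumption.

import numpy as np

def multilayer_modularity_matrix(layers, gamma=1.0, omega=1.0):
    # layers: (T, N, N) weighted, undirected connectivity matrices (one per window)
    # returns the supra-modularity matrix B (N*T x N*T) and the normalization 2*mu;
    # multilayer Q is the sum of B over within-community node-layer pairs, divided by 2*mu
    T, N, _ = layers.shape
    B = np.zeros((N * T, N * T))
    two_mu = 0.0
    for t in range(T):
        A = layers[t]
        k = A.sum(axis=1)                              # node strengths in layer t
        two_m = k.sum()                                # twice the total edge weight
        two_mu += two_m
        sl = slice(t * N, (t + 1) * N)
        B[sl, sl] = A - gamma * np.outer(k, k) / two_m # Newman-Girvan null model
    for t in range(T - 1):                             # ordinal interlayer coupling
        idx = np.arange(t * N, (t + 1) * N)
        B[idx, idx + N] = B[idx + N, idx] = omega
        two_mu += 2 * omega * N
    return B, two_mu

# community labels would then come from a Louvain-like optimizer, e.g.
# labels = louvain_like_optimize(B)   # placeholder only, not a real library call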
Community metrics
We characterized the dynamic reconfiguration of spatially distributed regions following sham stimulation or active rTMS using metrics that can be broadly characterized within three categories:
Individual node metrics Using metrics from network science designed to understand the temporal dynamics of network changes, we quantify the movement of individual nodes between communities to determine the extent to which the community architecture is stationary or flexible. Node flexibility, cohesion and promiscuity are three metrics that have previously proven fruitful in describing how networks change following external stimulation96,97.
The flexibility of each node corresponds to the number of instances in which a node changes community affiliation, g, normalized by the total possible number of changes that could occur across the layers L42. In other words, the flexibility of a single node i, ξi, may be estimated with
$$\xi_{i} = \frac{g_{i}}{L - 1},$$
where L is the total number of temporal windows.
Cohesion quantifies the extent to which nodes shift community affiliations jointly across time, or the movement of network motifs98. Node cohesion is derived from a pairwise measurement, cohesion matrix M, where edge weight Mij denotes the number of times a pair of nodes moves to the same community together divided by L − 1 possible changes. Thus, the cohesion strength of node i, Ωi, as used in this work, is then defined as follows.
$$\Omega_{i} = \sum_{j \ne i} M_{ij}.$$
Promiscuity characterizes the fraction of all the communities in the network in which a node participates at least once96. Importantly, this metric may be used to determine whether a node's flexibility is high simply because it is switching between two communities or across all communities.
Taken together these individual node metrics are used to distill the network dynamics of each node, estimating how much a node changes across time but lacking some sensitivity to how the node changes across time. The following metrics of nodal allegiance inform on the latter.
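As a sketch of how these three node metrics can be computed from a nodes × windows matrix of community labels (the exact normalizations used in the study follow the cited literature; the version below is an illustrative approximation):

import numpy as np

def node_dynamics(labels):
    # labels: (n_nodes, L) integer community assignment of each node in each window
    n_nodes, L = labels.shape
    changes = labels[:, 1:] != labels[:, :-1]          # switch events per node and window
    flexibility = changes.sum(axis=1) / (L - 1)

    # cohesion strength: how often pairs of nodes switch together into the same community
    M = np.zeros((n_nodes, n_nodes))
    for t in range(L - 1):
        moved = np.where(changes[:, t])[0]
        for a in range(len(moved)):
            for b in range(a + 1, len(moved)):
                i, j = moved[a], moved[b]
                if labels[i, t + 1] == labels[j, t + 1]:
                    M[i, j] += 1
                    M[j, i] += 1
    cohesion = (M / (L - 1)).sum(axis=1)

    # promiscuity: fraction of all communities in which a node participates at least once
    n_comms = len(np.unique(labels))
    visited = np.array([len(np.unique(row)) for row in labels])
    promiscuity = visited / n_comms
    return flexibility, cohesion, promiscuity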
Dynamic allegiance metrics The second class of metrics quantifies the dynamics in node allegiance, the probability that any two nodes co-exist within the same community, as induced by rTMS. These metrics quantify the rapid reconfiguration within- and between-system dynamics of nodal motifs, with the comparison architecture established by a representative brain system46,99. In this study, the architecture of the reference brain system is constrained by the visual tracking task, and these metrics allow us to determine dynamic changes induced by rTMS of motifs within that predefined system.
Consensus community structure To estimate the most representative community structure, we follow the methodology determined by Doron et al.100, modified to estimate the most representative community structure for each individual across time. This scheme utilizes the z-score of the Rand coefficient101 which compares each pair of communities within the temporal dataset in terms of the total number of pairs that are in the same community. After each partition is compared, the structure with the highest Rand z-score is the most representative community. We compute this representative community from the Sham condition, which reflects system architecture during visual tracking but without the stimulation intervention. This process is similarly completed across participants, where the consensus community is built on each individual's representative community (Fig. 7), providing the common community structure across our participant group and time.
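A simplified version of this selection step is sketched below; it scores each temporal partition by its average pairwise similarity to all the other partitions and returns the most representative one, using scikit-learn's adjusted Rand index as a stand-in for the z-scored Rand coefficient described above.

import numpy as np
from sklearn.metrics import adjusted_rand_score

def representative_partition(partitions):
    # partitions: (n_partitions, n_nodes) community labels, e.g. one row per temporal window
    P = len(partitions)
    mean_sim = np.zeros(P)
    for a in range(P):
        mean_sim[a] = np.mean([adjusted_rand_score(partitions[a], partitions[b])
                               for b in range(P) if b != a])
    return partitions[int(np.argmax(mean_sim))]        # partition most similar to the rest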
We next estimated two metrics, recruitment and integration, which describe whether the node changes affiliation within its own brain system (recruitment) or with other systems (integration). Mathematically, both metrics begin by first estimating module allegiance. Similar to the cohesion matrix, we define allegiance matrix P, where edge weight Pij denotes the number of times a pair of nodes moves to the same community together divided by L − 1 possible changes. Then, both measurements are found by averaging the module allegiance within or outside a node's brain system. For this estimate, we define the aforementioned representative community as a set of node groups C = {C1…Ck}, where k is the number of communities in the representative community structure. Recruitment is the average probability, over the dynamic window, that a region retains the same community affiliation as nodes from its own community in the reference system (k1 = k2). Integration is the average probability that a region shares a community with nodes from communities other than its own (k1 ≠ k2). In other words, recruitment is the average allegiance within a node's own community (as defined by the reference community), whereas integration is the average allegiance with every other community.
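The sketch below computes a module-allegiance matrix, here as the fraction of windows in which two nodes share a community (a common convention that differs slightly in normalization from the switching-based description above), and derives recruitment and integration with respect to a reference partition:

import numpy as np

def recruitment_integration(labels, reference):
    # labels: (n_nodes, L) dynamic community labels; reference: (n_nodes,) consensus labels
    n_nodes, L = labels.shape
    P = np.zeros((n_nodes, n_nodes))                   # module allegiance matrix
    for t in range(L):
        P += labels[:, t][:, None] == labels[:, t][None, :]
    P /= L
    np.fill_diagonal(P, 0)

    recruitment = np.zeros(n_nodes)
    integration = np.zeros(n_nodes)
    for i in range(n_nodes):
        own = reference == reference[i]
        own[i] = False                                 # exclude the node itself
        other = reference != reference[i]
        recruitment[i] = P[i, own].mean() if own.any() else 0.0
        integration[i] = P[i, other].mean() if other.any() else 0.0
    return recruitment, integration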
Metric changes across time Dynamic changes to node and community metrics were also estimated across time for each participant and iteration of the community algorithm (Fig. 7, Step 5). The timeseries were epoched into sets of size 10 units here (10 community windows = 200 volumes), and all metrics were estimated from this epoch. To estimate time-evolving metric changes, the epoch was then consecutively shifted by 1 window (20 volumes) until all of the 54 community windows were used in the time evolving metric estimation (Fig. 3).
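This windowing can be expressed compactly as in the sketch below, where the epoch and step sizes follow the description above and metric_fn stands for any per-node metric (e.g., the node_dynamics sketch given earlier):

import numpy as np

def sliding_metric(labels, metric_fn, epoch=10, step=1):
    # labels: (n_nodes, n_windows); metric_fn returns one value per node for a label block
    n_nodes, L = labels.shape
    starts = range(0, L - epoch + 1, step)
    return np.stack([metric_fn(labels[:, s:s + epoch]) for s in starts], axis=1)

# e.g. a flexibility timecourse:
# flex_t = sliding_metric(labels, lambda x: node_dynamics(x)[0])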
Characterizing the stimulation site To characterize the connectivity structure of the node underlying the stimulation site, we also computed more common network metrics within-module degree and participation coefficient75,102 using the average pairwise coherence estimate from the Sham condition and the Matlab Network Community Toolbox (https://commdetect.weebly.com/). Similar to the previously defined recruitment and integration metrics, these metrics also require the definition of a reference community structure. Here, we used the same consensus community across participants (Fig. 1) and then estimated degree and participation relative to this structure.
Within-module degree is defined as the average coherence between a node and other nodes in the same community, z-scored to normalize across the community population. Participation coefficient denotes the diversity of a node's connectivity, computed from the proportion of a node's connections that are accounted for by its own community versus by communities other than its own. Thus, a participation coefficient of 0 denotes that connectivity is restricted to the node's own community, and a value approaching 1 denotes connectivity distributed entirely across communities other than its own.
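Both quantities can be computed directly from the average coherence matrix and the consensus labels; the sketch below follows the standard Guimerà–Amaral formulas and assumes strictly positive node strengths.

import numpy as np

def module_degree_zscore(W, comms):
    # W: (N, N) weighted connectivity (average coherence); comms: (N,) community labels
    z = np.zeros(W.shape[0])
    for c in np.unique(comms):
        idx = np.where(comms == c)[0]
        k_within = W[np.ix_(idx, idx)].sum(axis=1)     # strength to a node's own community
        sd = k_within.std()
        z[idx] = (k_within - k_within.mean()) / sd if sd > 0 else 0.0
    return z

def participation_coefficient(W, comms):
    k = W.sum(axis=1)                                  # total node strength (assumed > 0)
    pc = np.ones(W.shape[0])
    for c in np.unique(comms):
        pc -= (W[:, comms == c].sum(axis=1) / k) ** 2  # 1 - sum_s (k_is / k_i)^2
    return pc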
We use these measurements to determine whether, in a classical sense, the stimulated node is an integrator or a hub. Where previous research has used a strict limitation on node degree (z > 2.5) and participation coefficient to define hubs and classify nodes, we use a soft classification of hubs and integrators, loosely defining both based on where these nodes lie in the distribution of nodes within our dataset (> 95% of other nodes). We note that although overwhelming previous research evaluates hubs and integrators in the resting state, this analysis was conducted on task-driven community structure, and we know that fMRI BOLD measurements during a task show fundamentally different network configurations67. Thus, the hubs and integrators shown in Fig. 5 should only be interpreted within the confines of this study's task.
Statistical significance
Active versus sham TMS, metric comparison A series of paired-sample t-tests were performed to compare the derived flexibility, cohesion, and promiscuity metrics over the entire 36 min period following rTMS (Fig. 2) to the metrics derived from the sham condition. This was completed for each of the 148 nodes of the brain. Due to the high volume of statistical tests, the False Discovery Rate (FDR) was used to correct for multiple comparisons. A q = 0.05 was used as the statistical cut-off44.
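A minimal sketch of this comparison, assuming subject × node arrays of a given metric and using SciPy and statsmodels for the paired tests and Benjamini–Hochberg correction:

import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

def compare_conditions(metric_tms, metric_sham, q=0.05):
    # metric_tms, metric_sham: (n_subjects, n_nodes) arrays of one node metric
    t, p = ttest_rel(metric_tms, metric_sham, axis=0)        # one paired test per node
    reject, p_fdr, _, _ = multipletests(p, alpha=q, method='fdr_bh')
    return t, p_fdr, reject                                  # reject[i] = survives FDR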
Active versus sham TMS, correlations
Due to the highly similar metrics across both conditions over the entire temporal interval (36 min), we also estimated the mutual relationship between the estimated means across subjects for the active and sham rTMS conditions using a correlation analysis. For each metric, the observed mean value across subjects of the 148 nodes was compared in both conditions. Reported r and p values indicate the strength and significance of this relationship.
Population distributions of node metrics
As a follow-up statistical test to the unobserved differences between the active rTMS and sham conditions, we used a Monte Carlo method to identify the nodes (and networks) with the most extreme dynamics across both conditions. Probability distributions were derived from 10,000 random drawings of the estimated metrics across subjects and nodes for all conditions, separately for flexibility, cohesion, and promiscuity. For each of the 10,000 iterations, the metrics (e.g., flexibility) across the 7 participants and 148 total regions were combined to create a single vector of scores. A random 7 values were drawn from this vector and averaged to create a single mean estimated group score from the unlabeled distribution. Thus, the derived probability distribution is representative of the grand mean across subjects and nodes and may be used to indicate which observed metrics would be unlikely to arise from random draws of unlabeled data.
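A minimal sketch of this resampling procedure is given below; the shape of the metric array, sampling with replacement, and the fixed random seed are assumptions made for illustration.

```python
import numpy as np

def null_group_means(metric, n_subjects=7, n_draws=10_000, seed=0):
    """Null distribution of group means built from the pooled subject-by-node
    metric scores (`metric` has shape n_subjects x n_nodes), ignoring labels."""
    rng = np.random.default_rng(seed)
    pooled = metric.ravel()
    return np.array([rng.choice(pooled, size=n_subjects).mean()
                     for _ in range(n_draws)])

# a node would be flagged as "extreme" if its observed group mean exceeds,
# for example, the 95th percentile of this null distribution:
# extreme = metric.mean(axis=0) > np.percentile(null_group_means(metric), 95)
```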
Time-evolving metric comparison
To test the hypothesis that the unobserved differences were due to the decay of the active stimulation effects, we computed the change in network dynamics between the active and sham rTMS using sliding time-evolving windows (1 window = 200 volumes). Differences in the metric scores across time and brain regions were assessed using the average of the sum of squared differences (SSD) across nodes of the brain. This provides a coarse estimate of any observed change due to the rTMS. For each time point within the 10 min following rTMS (a time equal to the length of stimulation), we compared the subject's SSD estimate across nodes to zero using a single sample t-test. Due to the quantity of tests completed across time, the observed p values were subjected to a correction for multiple comparisons using the FDR correction. Corrected and uncorrected time points are shown in Fig. 3.
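The sketch below illustrates this window-wise comparison: squared active–sham differences are averaged across nodes, each window is tested against zero, and the resulting p values are FDR-corrected; array shapes and the statsmodels call are assumptions for illustration.

```python
from scipy import stats
from statsmodels.stats.multitest import multipletests

def ssd_timecourse(active, sham):
    """Mean squared active-sham difference across nodes.
    Inputs are arrays of shape (n_subjects, n_windows, n_nodes)."""
    return ((active - sham) ** 2).mean(axis=2)       # -> (n_subjects, n_windows)

def test_windows(ssd, q=0.05):
    """One-sample t-test of the SSD against zero at every window, FDR-corrected."""
    tvals, pvals = stats.ttest_1samp(ssd, popmean=0.0, axis=0)
    reject, p_fdr, _, _ = multipletests(pvals, alpha=q, method='fdr_bh')
    return tvals, p_fdr, reject
```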
Behavioral changes
Observed behavioral changes indicated a decrease in the mean performance within the first run of scanning following rTMS for the stimulus contralateral to the stimulation site. Due to the small sample size (N = 7, 18 trials per condition in the first 12 min following stimulation) and the unequal sampling of behavior (4 runs for ipsilateral performance vs 3 runs for contralateral performance), this single mean contralateral performance difference (rTMS-Sham performance) was compared to a probability distribution of bootstrapped means using a Monte Carlo method. The distributions of behavioral performance were estimated by computing the difference between the mean contralateral performance in Run 1 and seven values randomly drawn from one of two vectors of the estimated performance from the other runs (i.e., ipsilateral performance in the 4 runs and the 3 runs of contralateral performance). This was repeated for 10,000 random draws each for ipsilateral and contralateral performance. The derived probability distribution is representative of the grand mean across all subjects and stimulus types (contralateral/ipsilateral to the stimulation site) and may be used to indicate the significance of the observed contralateral performance in the first run (compared to all others). A one-tailed test of the effect of rTMS was computed by counting the proportion of the 10,000 differences that were below zero.
Buschman, T. J. & Kastner, S. From behavior to neural dynamics: an integrated theory of attention. Neuron 88, 127–144 (2015).
Corbetta, M. & Shulman, G. L. Spatial neglect and attention networks. Annu. Rev. Neurosci. 34, 569–599 (2011).
Husain, M. & Nachev, P. Space and the parietal cortex. Trends Cogn. Sci. 11, 30–36 (2007).
Stone, S. P., Halligan, P. W. & Greenwood, R. J. The incidence of neglect phenomena and related disorders in patients with an acute right or left hemisphere stroke. Age Ageing 22, 46–52 (1993).
Corbetta, M., Kincade, M. J., Lewis, C., Snyder, A. Z. & Sapir, A. Neural basis and recovery of spatial attention deficits in spatial neglect. Nat. Neurosci. 8, 1603–1610 (2005).
Vossel, S. et al. Visual extinction in relation to visuospatial neglect after right-hemispheric stroke: quantitative assessment and statistical lesion-symptom mapping. J. Neurol. Neurosurg. Psychiatry 82, 862–868 (2011).
Andrews, R. J. Transhemispheric diaschisis. A review and comment. Stroke 22, 943–949 (1991).
Baldassarre, A. et al. Large-scale changes in network interactions as a physiological signature of spatial neglect. Brain 137, 3267–3283 (2014).
Grefkes, C. & Fink, G. R. Connectivity-based approaches in stroke and recovery of function. Lancet Neurol. 13, 206–216 (2014).
He, B. J. et al. Breakdown of functional connectivity in frontoparietal networks underlies behavioral deficits in spatial neglect. Neuron 53, 905–918 (2007).
Nakashima, K., Kanba, M., Fujimoto, K., Sato, T. & Takahashi, K. Somatosensory evoked potentials over the non-affected hemisphere in patients with unilateral cerebrovascular lesions. J. Neurol. Sci. 70, 117–127 (1985).
Obeso, J. A., Marti-Masso, J. F. & Carrera, N. Somatosensory evoked potentials: abnormalities with focal brain lesions remote from the primary sensorimotor area. Electroencephalogr. Clin. Neurophysiol. 49, 59–65 (1980).
Chechlacz, M. et al. The central role of the temporo-parietal junction and the superior longitudinal fasciculus in supporting multi-item competition: evidence from lesion-symptom mapping of extinction. Cortex 49, 487–506 (2013).
Thiebaut de Schotten, M. et al. A lateralized brain network for visuospatial attention. Nat. Neurosci. 14, 1245–1246 (2011).
Ffytche, D. H. & Catani, M. Beyond localization: from hodology to function. Philos. Trans. R. Soc. B Biol. Sci. 360, 767–779 (2005).
Fornito, A., Zalesky, A. & Breakspear, M. The connectomics of brain disorders. Nat. Rev. Neurosci. 16, 159–172 (2015).
Warren, D. E. et al. Network measures predict neuropsychological outcome after brain injury. Proc. Natl. Acad. Sci. 111, 14247–14252 (2014).
Battelli, L., Grossman, E. D. & Plow, E. B. Local immediate versus long-range delayed changes in functional connectivity following rTMS on the visual attention network. Brain Stimul. 10, 263–269 (2017).
Battelli, L., Alvarez, G. A., Carlson, T. & Pascual-Leone, A. The role of the parietal lobe in visual extinction studied with transcranial magnetic stimulation. J. Cogn. Neurosci. 21, 1946–1955 (2009).
Plow, E. B. et al. The compensatory dynamic of inter-hemispheric interactions in visuospatial attention revealed using rTMS and fMRI. Front. Hum. Neurosci. 8, 226 (2014).
Ilmoniemi, R. J. et al. Neuronal responses to magnetic stimulation reveal cortical reactivity and connectivity. NeuroReport 8, 3537–3540 (1997).
Paus, T. et al. Transcranial magnetic stimulation during positron emission tomography: a new method for studying connectivity of the human cerebral cortex. J. Neurosci. 17, 3178–3184 (1997).
Fuggetta, G. Cortico–cortical interactions in spatial attention: a combined ERP/TMS study. J. Neurophysiol. 95, 3277–3280 (2006).
Bestmann, S. The physiological basis of transcranial magnetic stimulation. Trends Cogn. Sci. 12, 81–83 (2008).
Eldaief, M. C., Halko, M. A., Buckner, R. L. & Pascual-Leone, A. Transcranial magnetic stimulation modulates the brain's intrinsic activity in a frequency-dependent manner. Proc. Natl. Acad. Sci. 108, 21229–21234 (2011).
Garcia, J. O., Grossman, E. D. & Srinivasan, R. Evoked potentials in large-scale cortical networks elicited by TMS of the visual cortex. J. Neurophysiol. 106, 1734–1746 (2011).
Hallett, M. Transcranial magnetic stimulation and the human brain. Nature 406, 147–150 (2000).
Lefaucheur, J.-P. et al. Evidence-based guidelines on the therapeutic use of repetitive transcranial magnetic stimulation (rTMS). Clin. Neurophysiol. 125, 2150–2206 (2014).
Silasi, G. & Murphy, T. H. Stroke and the connectome: how connectivity guides therapeutic intervention. Neuron 83, 1354–1368 (2014).
Thickbroom, G. W. Transcranial magnetic stimulation and synaptic plasticity: experimental framework and human models. Exp. Brain Res. 180, 583–593 (2007).
Siebner, H. R. et al. Consensus paper: combining transcranial stimulation with neuroimaging. Brain Stimul. 2, 58–80 (2009).
Bassett, D. S. & Bullmore, E. T. Small-world brain networks revisited. Neuroscientist 23, 499–516 (2017).
Bullmore, E. & Sporns, O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198 (2009).
Deco, G. & Kringelbach, M. L. Great expectations: using whole-brain computational connectomics for understanding neuropsychiatric disorders. Neuron 84, 892–905 (2014).
Aerts, H., Fias, W., Caeyenberghs, K. & Marinazzo, D. Brain networks under attack: robustness properties and the impact of lesions. Brain 139, 3063–3083 (2016).
Gratton, C., Nomura, E. M., Pérez, F. & D'Esposito, M. Focal brain lesions to critical locations cause widespread disruption of the modular organization of the brain. J. Cogn. Neurosci. 24, 1275–1285 (2012).
Wig, G. S. Segregated systems of human brain networks. Trends Cogn. Sci. 21, 981–996 (2017).
Alstott, J., Breakspear, M., Hagmann, P., Cammoun, L. & Sporns, O. Modeling the impact of lesions in the human brain. PLoS Comput. Biol. 5, e1000408 (2009).
Filippi, M. et al. Assessment of system dysfunction in the brain through MRI-based connectomics. Lancet Neurol. 12, 1189–1199 (2013).
He, X. et al. Disrupted dynamic network reconfiguration of the language system in temporal lobe epilepsy. Brain 141, 1375–1389 (2018).
Sizemore, A. E. & Bassett, D. S. Dynamic graph metrics: tutorial, toolbox, and tale. NeuroImage 180, 417–427 (2018).
Bassett, D. S. et al. Dynamic reconfiguration of human brain networks during learning. Proc. Natl. Acad. Sci. 108, 7641–7646 (2011).
Mucha, P. J., Richardson, T., Macon, K., Porter, M. A. & Onnela, J.-P. Community structure in time-dependent, multiscale, and multiplex networks. Science 328, 876–878 (2010).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B Methodol. 57, 289–300 (1995).
Cole, M. W. et al. Multi-task connectivity reveals flexible hubs for adaptive task control. Nat. Neurosci. 16, 1348–1355 (2013).
Mattar, M. G., Cole, M. W., Thompson-Schill, S. L. & Bassett, D. S. A functional cartography of cognitive systems. PLOS Comput. Biol. 11, e1004533 (2015).
Hilgetag, C. C., Théoret, H. & Pascual-Leone, A. Enhanced visual spatial attention ipsilateral to rTMS-induced 'virtual lesions' of human parietal cortex. Nat. Neurosci. 4, 953–957 (2001).
Gollo, L. L., Roberts, J. A. & Cocchi, L. Mapping how local perturbations influence systems-level brain dynamics. NeuroImage 160, 97–112 (2017).
Agosta, S., Herpich, F., Miceli, G., Ferraro, F. & Battelli, L. Contralesional rTMS relieves visual extinction in chronic stroke. Neuropsychologia 62, 269–276 (2014).
Cazzoli, D., Müri, R. M., Hess, C. W. & Nyffeler, T. Treatment of hemispatial neglect by means of rTMS—a review. Restor. Neurol. Neurosci. 28, 499–510 (2010).
Cocchi, L., Zalesky, A., Fornito, A. & Mattingley, J. B. Dynamic cooperation and competition between brain systems during cognitive control. Trends Cogn. Sci. 17, 493–501 (2013).
Dixon, M. L. et al. Heterogeneity within the frontoparietal control network and its relationship to the default and dorsal attention networks. Proc. Natl. Acad. Sci. 115, E1598–E1607 (2018).
Gao, W. & Lin, W. Frontal parietal control network regulates the anti-correlated default and dorsal attention networks. Hum. Brain Mapp. 33, 192–202 (2012).
Spreng, R. N., Sepulcre, J., Turner, G. R., Stevens, W. D. & Schacter, D. L. Intrinsic architecture underlying the relations among the default, dorsal attention, and frontoparietal control networks of the human brain. J. Cogn. Neurosci. 25, 74–86 (2012).
Szczepanski, S. M. & Kastner, S. Shifting attentional priorities: control of spatial attention through hemispheric competition. J. Neurosci. 33, 5411–5421 (2013).
Thiebaut De Schotten, M. et al. Direct evidence for a parietal-frontal pathway subserving spatial awareness in humans. Science 309, 2226–2228 (2005).
Battelli, L. et al. Unilateral right parietal damage leads to bilateral deficit for high-level motion. Neuron 32, 985–995 (2001).
Cazzoli, D., Wurtz, P., Müri, R. M., Hess, C. W. & Nyffeler, T. Interhemispheric balance of overt attention: a theta burst stimulation study. Eur. J. Neurosci. 29, 1271–1276 (2009).
Göbel, S. M., Calabria, M., Farnè, A. & Rossetti, Y. Parietal rTMS distorts the mental number line: simulating 'spatial' neglect in healthy subjects. Neuropsychologia 44, 860–868 (2006).
Petitet, P., Noonan, M. P., Bridge, H., O'Reilly, J. X. & O'Shea, J. Testing the inter-hemispheric competition account of visual extinction with combined TMS/fMRI. Neuropsychologia 74, 63–73 (2015).
Grefkes, C. & Fink, G. R. Reorganization of cerebral networks after stroke: new insights from neuroimaging with connectivity approaches. Brain 134, 1264–1276 (2011).
Oliveri, M. et al. Left frontal transcranial magnetic stimulation reduces contralesional extinction in patients with unilateral right brain damage. Brain 122, 1731–1739 (1999).
Hagmann, P. et al. Mapping the structural core of human cerebral cortex. PLOS Biol. 6, e159 (2008).
Cocchi, L. et al. Dissociable effects of local inhibitory and excitatory theta-burst stimulation on large-scale brain dynamics. J. Neurophysiol. 113, 3375–3385 (2015).
Shafi, M. M., Brandon Westover, M., Oberman, L., Cash, S. S. & Pascual-Leone, A. Modulation of EEG functional connectivity networks in subjects undergoing repetitive transcranial magnetic stimulation. Brain Topogr. 27, 172–191 (2014).
Klooster, D. C. W. et al. Focal application of accelerated iTBS results in global changes in graph measures. Hum. Brain Mapp. https://doi.org/10.1002/hbm.24384 (2018).
Di, X., Gohel, S., Kim, E. H. & Biswal, B. B. Task vs. rest—different network configurations between the coactivation and the resting-state brain networks. Front. Hum. Neurosci. 7, 493 (2013).
Shine, J. M. et al. The dynamics of functional brain networks: integrated network states during cognitive task performance. Neuron 92, 544–554 (2016).
Cohen, J. R. & D'Esposito, M. The segregation and integration of distinct brain networks and their relationship to cognition. J. Neurosci. 36, 12083–12094 (2016).
Vatansever, D., Menon, D. K., Manktelow, A. E., Sahakian, B. J. & Stamatakis, E. A. Default mode dynamics for global functional integration. J. Neurosci. 35, 15254–15262 (2015).
Ercsey-Ravasz, M. et al. A predictive network model of cerebral cortical connectivity based on a distance rule. Neuron 80, 184–197 (2013).
Reddy, P. G. et al. Brain state flexibility accompanies motor-skill acquisition. NeuroImage 171, 135–147 (2018).
Sporns, O. Contributions and challenges for network models in cognitive neuroscience. Nat. Neurosci. 17, 652–660 (2014).
Bertolero, M. A., Yeo, B. T. T. & D'Esposito, M. The diverse club. Nat. Commun. 8, 1277 (2017).
Guimerà, R. & Nunes Amaral, L. A. Functional cartography of complex metabolic networks. Nature 433, 895–900 (2005).
Salathé, M. & Jones, J. H. Dynamics and control of diseases in networks with community structure. PLOS Comput. Biol. 6, e1000736 (2010).
Lynch, C. J. et al. Precision inhibitory stimulation of individual-specific cortical hubs disrupts information processing in humans. Cereb. Cortex 29, 3912–3921 (2019).
Sale, M. V., Mattingley, J. B., Zalesky, A. & Cocchi, L. Imaging human brain networks to improve the clinical efficacy of non-invasive brain stimulation. Neurosci. Biobehav. Rev. 57, 187–198 (2015).
Siegel, J. S. et al. Disruptions of network connectivity predict impairment in multiple behavioral domains after stroke. Proc. Natl. Acad. Sci. 113, E4367–E4376 (2016).
Rossi, S., Hallett, M., Rossini, P. M. & Pascual-Leone, A. Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clin. Neurophysiol. 120, 2008–2039 (2009).
Carlson, T. A., Alvarez, G. A. & Cavanagh, P. Quadrantic deficit reveals anatomical constraints on selection. Proc. Natl. Acad. Sci. 104, 13496–13500 (2007).
Brainard, D. H. The psychophysics toolbox. Spat. Vis. 10, 433–436 (1997).
Pelli, D. G. The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 437–442 (1997).
Destrieux, C., Fischl, B., Dale, A. & Halgren, E. Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature. NeuroImage 53, 1–15 (2010).
Fischl, B. FreeSurfer. NeuroImage 62, 774–781 (2012).
Garcia, J. O., Ashourvan, A., Muldoon, S. F., Vettel, J. M. & Bassett, D. S. Applications of Community Detection Techniques to Brain Graphs: Algorithmic Considerations and Implications for Neural Function. Proceedings of the IEEE PP, 1–22 (2018).
Garcia, J. O. et al. Reconfigurations within resonating communities of brain regions following TMS reveal different scales of processing. bioRxiv 500967 (2018) https://doi.org/10.1101/500967.
Friston, K. J. Functional and effective connectivity: a review. Brain Connect. 1, 13–36 (2011).
Grinsted, A., Moore, J. C. & Jevrejeva, S. Application of the cross wavelet transform and wavelet coherence to geophysical time series. Nonlinear Process. Geophys. 11, 561–566 (2004).
Lauritzen, T. Z., D'Esposito, M., Heeger, D. J. & Silver, M. A. Top–down flow of visual spatial attention signals from parietal to occipital cortex. J. Vis. 9, 18–18 (2009).
Sun, F. T., Miller, L. M. & D'Esposito, M. Measuring temporal dynamics of functional networks using phase spectrum of fMRI data. NeuroImage 28, 227–237 (2005).
Sun, F. T., Miller, L. M. & D'Esposito, M. Measuring interregional functional connectivity using coherence and partial coherence analyses of fMRI data. NeuroImage 21, 647–658 (2004).
Cooper, N. et al. Time-evolving dynamics in brain networks forecast responses to health messaging. Netw. Neurosci. 3, 138–156 (2018).
Blondel, V. D., Guillaume, J.-L., Lambiotte, R. & Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, P10008 (2008).
Good, B. H., de Montjoye, Y.-A. & Clauset, A. Performance of modularity maximization in practical contexts. Phys. Rev. E 81, 046106 (2010).
Papadopoulos, L., Puckett, J. G., Daniels, K. E. & Bassett, D. S. Evolution of network architecture in a granular material under compression. Phys. Rev. E 94, 032908 (2016).
Telesford, Q. K. et al. Detection of functional brain network reconfiguration during task-driven cognitive states. NeuroImage 142, 198–210 (2016).
Telesford, Q. K. et al. Cohesive network reconfiguration accompanies extended training. Hum. Brain Mapp. 38, 4744–4759 (2017).
Bassett, D. S., Yang, M., Wymbs, N. F. & Grafton, S. T. Learning-induced autonomy of sensorimotor systems. Nat. Neurosci. 18, 744–751 (2015).
Doron, K. W., Bassett, D. S. & Gazzaniga, M. S. Dynamic network structure of interhemispheric coordination. Proc. Natl. Acad. Sci. 109, 18661–18668 (2012).
Traud, A., Kelsic, E., Mucha, P. & Porter, M. Comparing community structure to characteristics in online collegiate social networks. SIAM Rev. 53, 526–543 (2011).
Power, J. D., Schlaggar, B. L., Lessov-Schlaggar, C. N. & Petersen, S. E. Evidence for hubs in human functional brain networks. Neuron 79, 798–813 (2013).
This study was supported in part by mission funding to the U.S. CCDC Army Research Laboratory and the "Harvard Catalyst" and the Harvard-Thorndike Clinical Research Center at Beth Israel Deaconess Medical Center (UL1 RR025758–NCRR–NIH). Ela B. Plow was supported by NIH career development award (1K01HD069504). Lorella Battelli was supported by the Autonomous Province of Trento, Call "Grandi Progetti 2012," project "Characterizing and improving brain mechanisms of attention—ATTEND".
The content is solely the responsibility of the authors and does not necessarily represent the official views of any funding agency or the U.S. Government.
US CCDC Army Research Laboratory, 459 Mulberry Pt Rd., Aberdeen Proving Ground, MD, 21005, USA
Javier O. Garcia & Jean Vettel
University of Pennsylvania, Philadelphia, PA, USA
Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Via Bettini 31, 38068, Rovereto, TN, Italy
Lorella Battelli
Berenson-Allen Center for Noninvasive Brain Stimulation, Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, 02215, USA
Department of Biomedical Engineering and Department of Physical Medicine and Rehabilitation, Cleveland Clinic, Cleveland, OH, 44195, USA
Ela Plow
Department of Psychology, University of Milano-Bicocca, 20126, Milan, Italy
Zaira Cattaneo
IRCCS Mondino Foundation, Pavia, Italy
University of California, Santa Barbara, Santa Barbara, CA, USA
Jean Vettel
Department of Cognitive Sciences, University of California Irvine, Irvine, CA, 92697, USA
Emily D. Grossman
Javier O. Garcia
E.P., Z.C. and L.B. collected all the data for this manuscript. J.G. conducted all analyses and prepared the figures. J.G., E.G. and J.V. wrote the manuscript. All authors contributed to manuscript revisions and approved the final version.
Correspondence to Javier O. Garcia.
Garcia, J.O., Battelli, L., Plow, E. et al. Understanding diaschisis models of attention dysfunction with rTMS. Sci Rep 10, 14890 (2020). https://doi.org/10.1038/s41598-020-71692-6
Slothouber–Graatsma puzzle
The Slothouber–Graatsma puzzle is a packing problem that calls for packing six 1 × 2 × 2 blocks and three 1 × 1 × 1 blocks into a 3 × 3 × 3 box. The solution to this puzzle is unique (up to mirror reflections and rotations). It was named after its inventors Jan Slothouber and William Graatsma.
The puzzle is essentially the same if the three 1 × 1 × 1 blocks are left out, so that the task is to pack six 1 × 2 × 2 blocks into a cubic box with volume 27.
Solution
The solution of the Slothouber–Graatsma puzzle is straightforward when one realizes that the three 1 × 1 × 1 blocks (or the three holes) need to be placed along a body diagonal of the box, as each of the 3 × 3 layers in the various directions needs to contain such a unit block. This follows from parity considerations, because the larger blocks can only fill an even number of the 9 cells in each 3 × 3 layer.[1]
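The argument above can also be checked by exhaustive search. The short Python sketch below (an illustrative brute-force check, not part of the original article) enumerates every way of placing six pairwise-disjoint 1 × 2 × 2 blocks in the 3 × 3 × 3 box and reports the positions of the three uncovered unit cells, which can be used to confirm the body-diagonal claim.

```python
from itertools import product

def block_placements():
    """All axis-aligned placements of a 1x2x2 block in the 3x3x3 box,
    encoded as 27-bit masks (one bit per unit cell)."""
    masks = []
    for thin in range(3):                  # axis along which the block is 1 unit thick
        dims = [2, 2, 2]
        dims[thin] = 1
        for origin in product(*(range(4 - d) for d in dims)):
            cells = product(*(range(o, o + d) for o, d in zip(origin, dims)))
            masks.append(sum(1 << (x * 9 + y * 3 + z) for x, y, z in cells))
    return masks

def search(masks, start=0, used=0, count=0, found=None):
    """Backtracking search for six pairwise-disjoint placements; the three
    uncovered cells are where the 1x1x1 blocks must go."""
    if found is None:
        found = []
    if count == 6:
        holes = tuple((i // 9, (i // 3) % 3, i % 3) for i in range(27) if not used >> i & 1)
        found.append(holes)
        return found
    for i in range(start, len(masks)):
        if used & masks[i] == 0:
            search(masks, i + 1, used | masks[i], count + 1, found)
    return found

solutions = search(block_placements())
print(len(solutions), "packings; distinct hole patterns:", set(solutions))
```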
Variations
The Slothouber–Graatsma puzzle is an example of a cube-packing puzzle using convex polycubes. More general puzzles involving the packing of convex rectangular blocks exist. The best known example is the Conway puzzle which asks for the packing of eighteen convex rectangular blocks into a 5 × 5 × 5 box. A harder convex rectangular block packing problem is to pack forty-one 1 × 2 × 4 blocks into a 7 × 7 × 7 box (thereby leaving 15 holes); the solution is analogous to the 5 × 5 × 5 case, and has three 1 × 1 × 5 cuboidal holes in mutually perpendicular directions covering all 7 slices.[1]
See also
• Soma cube
• Bedlam cube
• Diabolical cube
References
1. Elwyn R. Berlekamp, John H. Conway and Richard K. Guy: Winning ways for your mathematical plays, 2nd ed, vol. 4, 2004.
External links
• The Slothouber-Graatsma puzzle in Stewart Coffin's "The Puzzling World of Polyhedral Dissections"
• Jan Slothouber and William Graatsma: Cubic constructs
• William Graatsma and Jan Slothouber: Dutch mathematical art
How time-inconsistent preferences influence venture capital exit decisions? A new perspective for grandstanding
Yanzhao Li ORCID: orcid.org/0000-0002-8466-970X1,
Ju-e Guo1,
Shaolong Sun ORCID: orcid.org/0000-0002-3196-14591 &
Yongwu Li ORCID: orcid.org/0000-0001-8962-96972
Considering that the assumption of time consistency does not adequately reveal the mechanisms of exit decisions of venture capital (VC), this study proposes two kinds of time-inconsistent preferences (i.e., time-flow inconsistency and time-point inconsistency) to advance research in this field. Time-flow inconsistency is in line with the previous time inconsistency literature, while time-point inconsistency is rooted in the VC fund's finite lifespan. Based on the assumption about the strategies guiding future behaviors, we consider four types of venture capitalists: time-consistent, time-point-inconsistent, naïve, and sophisticated venture capitalists, of which the latter three are time-inconsistent. We derive and compare the exit thresholds of these four types of venture capitalists. The main results include: (1) time-inconsistent preferences accelerate the exits of venture capitalists; (2) the closer the VC funds' expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits; and (3) future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Our study provides a behavioral explanation for the empirical fact of young VCs' grandstanding.
Venture capital (VC) provides the imperative capital for the development of start-ups (Cumming 2012; Tavares-Gärtner et al. 2018; Ferreira and Pereira 2021). Additionally, due to its unique mode of operation, VC plays a role in coping with risks, facilitating venture success, and nurturing high-tech industries worldwide, especially in transition economies such as China (Guo and Jiang 2013). According to KPMG's quarterly report on VC trends named "Venture Pulse", both Asian and global VC transactions increased by more than 40% in 2018, reaching a record of $93.5 billion and $254.7 billion US dollars, respectively. Among them, Chinese VC transaction volume reached a record of 70.5 billion US dollars, with an increase of 52.9%, compared with 46.1 billion US dollars in 2017 (KPMG 2019).
The exit, divestment from the VC's portfolio, is crucial since it achieves the sale of shares and thus determines the VC fund's final payoffs. Consequently, exit payoffs are an important signal of VC funds' quality (Cumming 2010). Particularly, for young and less-prestigious VC funds, exit payoffs may be the only quality signal and directly affect subsequent fundraising (Cumming 2010; Gompers 1996). Therefore, young VCs are more likely to grandstand by pushing firms to go public or sell privately held firms earlier than older VCs (Gompers 1996; Lee and Wahal 2004; Amor and Kooli 2020). Considering that young VCs are more prone to time-inconsistent behavior than older VCs,Footnote 1 this study, for the first time, aims to provide a behavioral explanation for young VC grandstanding from the perspective of decision-makers' time preferences. Specifically, we explore the optimal exit decisions of venture capitalists under time-inconsistent preferences.
Previous studies have investigated various factors influencing VC exit decisions, such as legal institutions and agency problems. For example, Cumming et al. (2006) provide the first cross-country empirical insight into the relationship between legality and VC exits based on a sample of 12 Asia–Pacific countries and regions. Cumming (2008) documents that strong VC control increases the likelihood that start-ups would exit through trade sales rather than through initial public offerings (IPOs). Anderson et al. (2017) report the effects of political ties of VC funds on VC exits and find that political ties facilitate VCs' successful exits via Chinese stock and mergers and acquisitions (M&A) markets. However, these studies rely on the assumption that individuals are perfectly rational. Recently, pioneering research in behavioral finance has discovered many decision biases, thus effectively explaining the anomaly and puzzle phenomenon in reality (Tian 2016). Nevertheless, only a few studies have attempted to incorporate behavioral finance theory into the VC exit decision research. Notably, Bock and Schmidt (2015) first examine the determinants of VC exit behavior after the lockup expiry in IPOs by considering insights from prospect theory. Nevertheless, the intertemporal choice of VC exit decisions and the resulting time-inconsistent preferences have been neglected in previous studies.Footnote 2
Many experimental studies on time preferences suggest that time-inconsistent preferences are more realistic than time-consistent preference, and they seriously distort the behavior of decision-makers (Strotz 1955; Thaler 1981; Loewenstein and Prelec 1992). Concretely, time-inconsistent preferences assume that decision-makers' discount rates for payoffs decrease over time. Therefore, decision-makers prefer current payoffs rather than future payoffs (Laibson 1997; O'Donoghue and Rabin 1999; Grenadier and Wang 2007). By relaxing the assumption of constant discount rates, time-inconsistent preferences provide a new theoretical perspective for accurately describing decision-makers' behavioral choices. As a result, time-inconsistent preferences are widely used in many fields such as investment (Grenadier and Wang 2007; Tian 2016; Luo et al. 2020), consumption (Liu et al. 2020), insurance (Chen et al. 2016) and contract design (Li et al. 2016; Wang et al. 2020).
Time-inconsistent preferences mentioned earlier are caused by individual time preferences, which essentially depend only on the individual's time sensitivity to flow payoffs. Beyond that, the finite lifespan of VC funds, determined by VC's particular organizational structure,Footnote 3 is also a source of time inconsistency among venture capitalists. The finite lifespan of VC funds forces venture capitalists to sell all projects before maturity. Although VC funds always have extension periods to facilitate exits, fund investors (limited partners) observe the delayed exit behavior of venture capitalists and then evaluate the quality of VC funds accordingly (Gompers 1996; Cumming 2010; Amor and Kooli 2020). This pressure makes venture capitalists have a lower utility perception of payoffs after the expiry date. Relevant evidence can be found in previous research on the finite lifespan of VC funds. For example, Kandel et al. (2011) prove that the termination of all unfinished projects at the fund's maturity leads to suboptimal decisions during later stages of investment. They sum up this phenomenon as venture capitalists' myopia induced by the finite lifespan of VC funds. Additionally, Arcot et al. (2015) investigate whether secondary buyouts are value-maximizing or reflect opportunistic behavior, and demonstrate that VC funds under expiration pressure engage more in secondary buyouts. Therefore, it is reasonable to believe that the discount rate of venture capitalists drops rapidly after VC funds expire, which is also in line with time-inconsistent preferences. Consistent with this, Guo et al. (2018) present a similar argument: for the long-term transit investment problem, the utility of transit projects completed during and outside the term is significantly different for city mayors.
To distinguish the two kinds of time inconsistencies mentioned, we propose time-flow and time-point inconsistencies to promote understanding of VC exit decisions. As shown in Fig. 1, the exit decision of VC starts with an exit opportunity and ends with the exit exercise. Thus, if venture capitalists exercise the exit option before the expiration, the perceived exit payoffs are affected by time-flow inconsistency. Additionally, if venture capitalists exit during the extension period, both time-flow and time-point inconsistencies influence the perceived exit payoffs. Based on the assumption about the strategies guiding the future behaviors (Grenadier and Wang 2007; Tian 2016), we consider four types of venture capitalists: time-consistent, time-point-inconsistent, naïve, and sophisticated venture capitalists, of which the latter three are time-inconsistent. All time-inconsistent venture capitalists are aware of time-point inconsistency, but naive venture capitalists misunderstand time-flow inconsistency and assume that the future selves caused by time-flow inconsistency act according to preferences of the current self. In contrast, sophisticated venture capitalists know that future selves choose strategies that are optimal for themselves.
The exit decision of VC and its embedding in the duration of VC fund
This study first presents the model setup of venture capitalists' time-inconsistent preferences and develops an optimal VC exit decisionFootnote 4 through trade salesFootnote 5 based on the fact that the VC and the acquiring firm share synergies brought by the M&A. Then, we derive the optimal exit thresholds of the four types of venture capitalists using the well-established real options approach,Footnote 6 considering the uncertainty and option nature of VC exits. Finally, a comparative static analysis and the corresponding model implications are presented. The main results are summarized as follows: (1) time-inconsistent preferences accelerate the exit of venture capitalists, verifying the grandstanding of young VCs; (2) the closer the VC funds' expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits; and (3) future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Under the action of these mechanisms, we can observe that (1) all time-inconsistent venture capitalists exit earlier than the time-consistent ones; (2) generally, sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists; and (3) when the degree of time-point inconsistency is much greater than that of time-flow inconsistency, and the exit opportunity is close to the VC fund's expiry date, naïve venture capitalists exit later than time-point-inconsistent venture capitalists, and sophisticated venture capitalists exit last among the three defined time-inconsistent venture capitalists.
Our study contributes to the literature in the following ways. First, to the best of our knowledge, this is the first study to consider VC exit decisions under time-inconsistent preferences. Given the vital role that intertemporal choice plays in VC decision-making, our study broadens the theoretical understanding of VC exit decisions. Second, since Gompers (1996) proposed the grandstanding hypothesis of young VCs, researchers have constantly tried various theories and perspectives such as signal games (Grenadier and Malenko 2011), demand side (Butler and Goktan 2013), and multiple agents (Sethuram et al. 2021), to explain this hypothesis. In contrast to prior studies, we are the first to provide a behavioral explanation from the perspective of time preferences. Third, we extend the decision-making framework of time-inconsistent agents established by Grenadier and Wang (2007). Specifically, we propose time-flow and time-point inconsistencies to model individuals' time preferences and the effect of the finite lifespan of VC funds, respectively. This modeling framework is more realistic for VC exit decisions (Kandel et al. 2011; Ferreira and Pereira 2021) and applicable to other intertemporal choice issues with time restrictions.
The remainder of this paper is organized as follows. "Section Model setup" provides the model setup, including the venture capitalist's time-inconsistent preferences and the exit decision via trade sales. "Section Time-consistent and time-inconsistent venture capitalists" derives the optimal exit timing for time-consistent and time-inconsistent venture capitalists. "Section Model implications" presents the comparative static analysis and model implications. Finally, "Section Conclusions" concludes the paper.
Model setup
Venture capitalists' time-inconsistent preferences
Following Chen et al. (2016), Harris and Laibson (2013), and Grenadier and Wang (2007), we describe the time-inconsistent preferences of venture capitalists using a continuous-time quasi-hyperbolic discount function. The venture capitalist is described as a finite number of selves with random lifespans. Each self represents the current stage at which the venture capitalist exercises the decision while considering the utility of the future selves' exercise decisions. As shown in Fig. 2, we call \(t_{0}\) the start time moment, which signals the birth of self 0 and means that an exit option emerges for the venture capitalist to sell the shares of the invested start-up. Let \(T_{L}\) denote the duration from \(t_{0}\) to the VC fund's expiry date. We assume that \(T_{L}\) is exponentially distributed with parameter \(\lambda_{L}\). Let \(t_{n}\) be the birth time of self \(n\) and the death time of self \(n - 1\) (\(n = 1,2,3,...,L - 1,L\)). The lifespan \(T_{n} = t_{n + 1} - t_{n}\) (excluding \(T_{L}\)) of self \(n\) is assumed to be exponentially distributed with the parameter \(\lambda_{f}\). If instead the VC fund's expiry date arrives before the next self generated by time-flow inconsistency, the next self is changed to self \(L\) generated by time-point inconsistency. The duration of self \(L - 1\) is assumed to be exponentially distributed with parameter \(\lambda_{p}\). We note that \(1/\lambda\) represents the expected inter-arrival time of selves because \(\lambda\) is the arrival intensity of the Poisson process; thus we have \(E\left[ {\left( {L - 1} \right)/\lambda_{f} + 1/\lambda_{p} } \right] = E\left[ {1/\lambda_{L} } \right]\).
Two kinds of time inconsistencies and the resulting multiple selves
We use \(D_{n} \left( {t,s} \right)\) to denote the inter-temporal discount function of self \(n\), giving self \(n\)'s discounted value at time \(t\) of one dollar received at the future time \(s\). For payoffs obtained in the duration of self \(n\), self \(n\) uses a standard discount function \(e^{{ - \rho \left( {s - t} \right)}}\), where the constant discount rate \(\rho > 0\). For payoffs obtained after the death of the current self, self \(n\) uses the discount function \(\delta e^{{ - \rho \left( {s - t} \right)}}\), that is, the standard discount function multiplied by a reduction factor \(\delta\). After the arrival of self \(n + 1\), the venture capitalist uses the updated discount function \(D_{n + 1} \left( {t,s} \right)\) for evaluation. To distinguish the impact of time-flow and time-point inconsistencies, we define different reduction factors: \(\delta_{f}\) for self \(n\) (\(n = 0\sim L - 2\)) and \(\delta_{p}\) for self \(L - 1\), where \(\delta_{f} > \delta_{p}\) because time-point inconsistency reduces the discounting of payoffs more.
Then the inter-temporal discount function is given by
$$D_{n} \left( {t,s} \right) = \begin{cases} e^{ - \rho \left( {s - t} \right)} , & {\text{if }}s \in \left[ {t_{n} ,t_{n + 1} } \right), \\ \delta e^{ - \rho \left( {s - t} \right)} , & {\text{if }}s \in \left[ {t_{n + 1} ,\infty } \right), \end{cases}$$
for \(s > t\) and \(t \in \left[ {t_{n} ,t_{n + 1} } \right)\).
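As an illustration only, the snippet below evaluates this two-regime discount function for the current self; the numerical values of \(\rho\), \(\delta\), and the dates are hypothetical and serve only to show how payoffs arriving after the birth of the next self are discounted more heavily.

```python
import numpy as np

def D_n(t, s, t_next, rho=0.06, delta=0.8):
    """Quasi-hyperbolic discount factor of the current self at time t for a
    payoff at time s, where t_next is the birth date of the next self
    (rho and delta are hypothetical illustrative values)."""
    base = np.exp(-rho * (s - t))
    return base if s < t_next else delta * base

# a payoff before the next self is discounted exponentially; a payoff after it
# is additionally scaled down by the reduction factor delta < 1
print(D_n(0.0, 1.0, t_next=2.0), D_n(0.0, 3.0, t_next=2.0))
```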
The exit decision via trade sale
Consider that the venture capitalist has an opportunity to exit the invested start-up by trade sales and the invested start-up has uncertain prospects for development. Let \(P_{t}^{T}\) denote the start-up profit at time \(t\). We suppose that the profit is given by a geometric Brownian motion:
$$dP_{t}^{T} = \alpha P_{t}^{T} dt + \sigma P_{t}^{T} dB_{t} , \, t \ge 0,$$
where \(dB_{t}\) is the increment of a standard Wiener process, \(\alpha\) is the expected growth rate of the profit, and \(\sigma\) is the profit volatility.
Following Thijssen (2008), we assume that the profit \(P_{t}^{T}\) consists of a deterministic part, denoted by \(Q^{T}\), and a stochastic component, denoted by \(X_{t}\). The stochastic shock is assumed to be multiplicative, that is, \(P_{t}^{T} = Q^{T} X_{t}\). Similarly, for the acquiring and merged firms, there are \(P_{t}^{A} = Q^{A} X_{t}\) and \(P_{t}^{M} = Q^{M} X_{t}\). The deterministic component is regarded as a result of competition in the product market. The stochastic component indicates uncertainty. Therefore, the stochastic component follows a geometric Brownian motion:
$$dX_{t} = \alpha X_{t} dt + \sigma X_{t} dB_{t} , \, t \ge 0, \, X_{0} = x.$$
The discount rate of the acquiring firm is assumed to be the risk-free rate in this M&A. In addition, the payment of dividends, provided by the invested firm, is far less than the payoff from selling the shares held in the invested firm for the VC. Therefore, the exit can be regarded as the only way to obtain lump-sum payoffs. Thus, even though the profit of the invested firm is given in flows over time, the venture capitalist's time preference does not affect the expected present value of the profit flow generated by the invested firm.
Gao et al. (2013) highlight that the acquiring firm could expand its business more efficiently by achieving economies of scale and scope. In this study, this positive effect is characterized as a synergy, that is, the deterministic profit generated by the M&A is larger than the sum of the deterministic profits of the constituent firms, which is \(Q^{M} > Q^{A} + Q^{T}\).
The value of the acquiring firm before the M&A is as follows:
$$V^{A} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {Q^{A} X_{t} } \right)dt = \frac{{Q^{A} x}}{\rho - \alpha }} .$$
The value of the merged firm after the M&A is as follows:
$$V^{M} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {Q^{M} X_{t} } \right)dt = \frac{{Q^{M} x}}{\rho - \alpha } > } \frac{{Q^{A} x}}{\rho - \alpha } = V^{A} \left( x \right).$$
The value of the start-up's shares held by the VC before the M&A is as follows:
$$V_{VC}^{T} \left( x \right) = E\int_{0}^{\infty } {e^{ - \rho t} \left( {\phi Q^{T} X_{t} } \right)} dt = \phi \frac{{Q^{T} x}}{\rho - \alpha },$$
where \(\phi\) is the VC's share in the start-up.
We assume that the VC uses the participating convertible preferred (PCP) stock to invest,Footnote 7 which brings the highest payoffs to the VC in the M&A (Arcot 2014). The exit payoff obtained by the VC is \(P_{VC} \left( x \right)\) as follows:
$$P_{VC} \left( x \right) = d + \phi \left[ {P\left( x \right) - d} \right],$$
where \(P\left( x \right)\) is the value of cash or cash equivalents paid by the acquiring company to purchase the entire equity of the start-up and \(d\) is the preferential fixed claim in the M&A. Nonparticipating convertible preferred stock or common stock is equivalent to the exception when \(d = 0\) and is therefore covered.
The VC and the acquiring firm negotiate the merger price to obtain a Pareto effective synergistic value distribution. The merger price is determined using the Nash bargaining game equilibrium. We suppose that the negotiation ability of the venture capitalist is \(\beta_{VC}\), and that of the acquiring firm is \(\beta_{A} = 1 - \beta_{VC}\). Following Alvarez and Stenbacka (2006), the merger price is the solution to the optimization problem below:
$${\text{sup}}_{{p^{*} }} \left[ {P_{VC} \left( x \right) - V_{VC}^{T} \left( x \right)} \right]^{{\beta_{VC} }} \left[ {V^{M} \left( x \right) - P\left( x \right) - V^{A} \left( x \right)} \right]^{{\beta_{A} }} ,$$
where \(P_{VC} \left( x \right) - V_{VC}^{T} \left( x \right)\) and \(V^{M} \left( x \right) - P\left( x \right) - V^{A} \left( x \right)\) are the value-added payoffs obtained by the VC and the acquiring firm through the M&A, respectively. Therefore, we find that the Nash bargaining solution is given by
$$P^{*} \left( x \right) = \frac{{Q^{T} x}}{\rho - \alpha } - \left( {1 - \beta_{VC} } \right)\frac{1 - \phi }{\phi }d + \beta_{VC} \left[ {\frac{{\left( {Q^{M} - Q^{A} - Q^{T} } \right)x}}{\rho - \alpha }} \right].$$
The exit payoff obtained by the VC is as follows:
$$\begin{aligned} P_{VC}^{*} \left( x \right) & = \frac{{\phi Q^{T} x}}{\rho - \alpha } + \beta_{VC} \left( {1 - \phi } \right)d + \phi \beta_{VC} \left[ {\frac{{\left( {Q^{M} - Q^{A} - Q^{T} } \right)x}}{\rho - \alpha }} \right] \\ & = V_{VC}^{T} \left( x \right) + \beta_{VC} \left( {1 - \phi } \right)d + \phi \beta_{VC} \Delta V\left( x \right). \\ \end{aligned}$$
The above formula shows that the VC payoffs in trade sales consist of three parts. The first part \(V_{VC}^{T} \left( x \right)\) is the value of the shares held by the VC when the invested start-up maintains an independent operation, and the second part \(\beta_{VC} \left( {1 - \phi } \right)d\) is the profit from the priority settlement in the M&A. The last part \(\phi \beta_{VC} \Delta V\left( x \right)\) is the synergistic benefits shared by the VC.
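To make this decomposition concrete, the short sketch below evaluates the bargaining price and the VC's exit payoff for one set of hypothetical parameter values (all numbers are assumptions, chosen only so that the synergy condition \(Q^{M} > Q^{A} + Q^{T}\) holds), and checks the identity \(P_{VC} \left( x \right) = d + \phi \left[ {P\left( x \right) - d} \right]\).

```python
# hypothetical parameter values, chosen only for illustration
rho, alpha = 0.06, 0.02          # discount rate and profit growth rate
Q_T, Q_A, Q_M = 1.0, 4.0, 6.0    # deterministic profits; note Q_M > Q_A + Q_T (synergy)
phi, beta_vc, d = 0.3, 0.5, 0.5  # VC share, VC bargaining power, preferred fixed claim
x = 1.0                          # current level of the stochastic shock

V_T = Q_T * x / (rho - alpha)               # stand-alone value of the start-up
dV = (Q_M - Q_A - Q_T) * x / (rho - alpha)  # synergistic value created by the M&A

price = V_T - (1 - beta_vc) * (1 - phi) / phi * d + beta_vc * dV        # Nash price P*(x)
payoff = phi * V_T + beta_vc * (1 - phi) * d + phi * beta_vc * dV       # VC payoff P_VC*(x)

assert abs(payoff - (d + phi * (price - d))) < 1e-9   # consistent with P_VC = d + phi*(P - d)
```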
The payoff distribution of the VC by trade sales is a stochastic process affected by market uncertainty. Therefore, it is necessary to choose the optimal exit threshold to maximize the VC payoffs (Li et al. 2017). We set \(C\) as the cost of exit for the VC. The venture capitalist maximizes the expected discounted value of the exit payoffs:
$$\mathop {\sup }\limits_{\tau \ge t} E_{t} \left[ {D_{n} \left( {t,\tau } \right)\left( {P_{VC}^{*} \left( {X_{\tau } } \right) - C} \right)} \right],$$
where \(E_{t} \left[ \cdot \right]\) denotes the expectation operator.
Time-consistent and time-inconsistent venture capitalists
This section discusses the exit decisions of time-consistent and the three types of time-inconsistent venture capitalists. The time-consistent case is the benchmark model. The three defined time-inconsistent cases allow us to explore the net effect of time-point inconsistency and the complex superposition effect of two kinds of time-inconsistent preferences. This modeling setup can provide a systematic insight into the comprehensive impact of time-inconsistent preferences on VC exit decisions.
According to the model setup of venture capitalists' time-inconsistent preferences in "Section Venture capitalists' time-inconsistent preferences", we present the three defined time-inconsistent venture capitalists' decision selves and the inter-arrival times in Fig. 3. Obviously, \(\lambda_{L} < \lambda_{pN} < \lambda_{pS}\). To compare the exit thresholds of self 0 from \(t_{0}\), we follow the standard backward derivation procedure, starting with self \(L\) and then moving back to self 0. To ensure the same expected duration left before the expiry date, we have \(E\left[ {1/\lambda_{L} } \right] = E\left[ {1/\lambda_{f} + 1/\lambda_{pN} } \right] = E\left[ {\left( {L - 1} \right)/\lambda_{f} + 1/\lambda_{pS} } \right]\).
The exit decision comparison of the three defined time-inconsistent venture capitalists
Time-consistent venture capitalists
Let \(F\left( x \right)\) denote the time-consistent venture capitalist's exit opportunity value function and \(x^{*}\) be the optimal exit threshold. Then, according to the continuous-time Bellman equation \(\rho Fdt = E\left( {dF} \right)\), \(F\left( x \right)\) satisfies the ordinary differential equation below (see Dixit and Pindyck (1994) for details):
$$\frac{1}{2}\sigma^{2} x^{2} F^{\prime\prime}\left( x \right) + \alpha xF^{\prime}\left( x \right) - \rho F\left( x \right) = 0.$$
Equation (12) is solved using the following value-matching and smooth-pasting conditions:
$$F\left( {x^{*} } \right) = P_{VC}^{*} \left( {x^{*} } \right) - C,\quad F^{{\prime }} \left( {x^{*} } \right) = \left[ {P_{VC}^{*} \left( {x^{*} } \right) - C} \right]^{{\prime }} .$$
The stochastic process implies the boundary condition \(F\left( 0 \right) = 0\), so the solution takes the form \(F\left( x \right) = Ax^{{\beta_{1} }}\), where \(\beta_{1} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{2\rho }{{\sigma^{2} }}} > 1\).
The exit threshold \(x^{*}\) is given by
$$x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }},$$
where \(\eta = \frac{{\phi \left[ {Q^{T} + \beta_{VC} \left( {Q^{M} - Q^{A} - Q^{T} } \right)} \right]}}{\rho - \alpha }\) and \(\theta = C - \beta_{VC} d\left( {1 - \phi } \right)\).
Equation (13) reveals that when the exit cost \(C\) increases, \(x^{*}\) increases (\(\frac{{\partial x^{*} }}{\partial C} > 0\)). In turn, when the venture capitalist's negotiation ability \(\beta_{VC}\) increases or the fixed income \(d\) increases, \(x^{*}\) decreases (\(\frac{{\partial x^{*} }}{{\partial \beta_{VC} }} < 0{\text{ and }}\frac{{\partial x^{*} }}{\partial d} < 0\)).
The option value \(F\left( x \right)\) before exiting is given by
$$F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
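The closed-form threshold and option value are straightforward to evaluate numerically. The sketch below does so for one hypothetical parameterization (the same illustrative values as in the earlier payoff sketch, plus an assumed volatility \(\sigma\) and exit cost \(C\) chosen so that \(\theta > 0\)).

```python
import numpy as np

# hypothetical parameters (illustrative only)
rho, alpha, sigma = 0.06, 0.02, 0.25
Q_T, Q_A, Q_M = 1.0, 4.0, 6.0
phi, beta_vc, d, C = 0.3, 0.5, 0.5, 2.0

eta = phi * (Q_T + beta_vc * (Q_M - Q_A - Q_T)) / (rho - alpha)
theta = C - beta_vc * d * (1 - phi)             # positive for these values

# positive root of 0.5*sigma^2*b*(b - 1) + alpha*b - rho = 0
a = alpha / sigma**2 - 0.5
beta1 = -a + np.sqrt(a**2 + 2 * rho / sigma**2)

x_star = beta1 * theta / ((beta1 - 1) * eta)    # time-consistent exit threshold

def F(x):
    """Option value of waiting for x below the threshold x_star."""
    return (x / x_star) ** beta1 * (eta * x_star - theta)

print(f"beta1 = {beta1:.3f}, x* = {x_star:.3f}, F(0.5 x*) = {F(0.5 * x_star):.3f}")
```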
Time-point-inconsistent venture capitalists
The assumption about the time-point-inconsistent venture capitalist is natural for the following reasons. First, venture capitalists cannot ignore the impact of time-point inconsistency due to the importance of VC funds' finite lifespan. Second, the most critical advantage of VC is its ability to serve as long-term committed patient capital to help start-ups achieve powerful sustained compounding of growth (Klingler-Vidra 2016; Arundale 2020). Hence, venture capitalists may selectively ignore time-flow inconsistency from overconfident beliefs in the ability to commit (Grenadier and Wang 2007).
This optimization problem, which solves the exit threshold of self 0, is transformed into a two-stage optimization problem solved by backward induction. Self \(L\) faces the same problem as the time-consistent venture capitalist. Then we analyze the exit timing selection of self 0. Let \(G\left( x \right)\) and \(x_{G}\) denote the value function and exercise threshold for self 0, respectively. Drawing on the continuation value function proposed by Grenadier and Wang (2007), \(G\left( x \right)\) solves the differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} G^{\prime\prime}\left( x \right) + \alpha xG^{\prime}\left( x \right) - \rho G\left( x \right) + \lambda_{L} \left[ {F^{c} \left( x \right) - G\left( x \right)} \right] = 0,$$
where \(F^{c} \left( x \right)\) is self 0's continuation value function, upon the arrival of self \(L\), occurring at the intensity \(\lambda_{L}\), and \(F^{c} \left( x \right) = \delta_{p} F\left( x \right)\). Equation (15) is solved using the following value-matching and smooth-pasting conditions:
$$G\left( {x_{G} } \right) = P_{VC}^{*} \left( {x_{G} } \right) - C = \eta x_{G} - \theta ,\quad G^{{\prime }} \left( {x_{G} } \right) = \left[ {P_{VC}^{*} \left( {x_{G} } \right) - C} \right]^{{\prime }} = \eta .$$
After standard calculations, we obtain the exit exercise threshold and option value for the time-point-inconsistent venture capitalist.
Proposition 1
For the time-point-inconsistent venture capitalist, the value of the exit option is as follows:
$$G\left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{2} - \beta_{1} }}\left( {x^{*} - x_{G} } \right)\left( {\frac{x}{{x_{G} }}} \right)^{{\beta_{2} }} + \delta_{p} F\left( x \right),$$
where \(\beta_{2} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{L} } \right)}}{{\sigma^{2} }}} > \beta_{1}\) and \(x_{G}\) is the optimal exit threshold given by
$$x_{G} = \frac{1}{{\eta \left( {\beta_{2} - 1} \right)}}\left[ {\beta_{2} \theta + \left( {\beta_{2} - \beta_{1} } \right)\delta_{p} F\left( {x_{G} } \right)} \right].$$
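Because \(x_{G}\) is defined only implicitly, it is typically obtained numerically. The sketch below solves the fixed-point condition of Proposition 1 with a root finder, reusing the illustrative parameters of the previous sketch and adding hypothetical values for \(\lambda_{L}\) and \(\delta_{p}\); it is a numerical illustration, not part of the paper's derivation.

```python
import numpy as np
from scipy.optimize import brentq

# illustrative parameters (assumed), matching the previous sketch
rho, alpha, sigma = 0.06, 0.02, 0.25
eta, theta = 11.25, 1.825
lam_L, delta_p = 0.2, 0.6          # expiry intensity and time-point reduction factor

def beta_root(lam):
    """Positive root of 0.5*sigma^2*b*(b - 1) + alpha*b - (rho + lam) = 0."""
    a = alpha / sigma**2 - 0.5
    return -a + np.sqrt(a**2 + 2 * (rho + lam) / sigma**2)

beta1, beta2 = beta_root(0.0), beta_root(lam_L)
x_star = beta1 * theta / ((beta1 - 1) * eta)               # time-consistent benchmark

def F(x):
    return (x / x_star) ** beta1 * (eta * x_star - theta)  # time-consistent option value

def gap(x):
    """The zero of this function is the threshold x_G of Proposition 1."""
    return eta * (beta2 - 1) * x - beta2 * theta - (beta2 - beta1) * delta_p * F(x)

x_G = brentq(gap, 1e-9, x_star)
print(f"x* = {x_star:.3f}, x_G = {x_G:.3f}")   # x_G < x*: the time-inconsistent VC exits earlier
```

The same root-finding approach carries over to the implicit equations for the naïve thresholds \(x_{N,1}\) and \(x_{N,0}\) derived below.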
Naïve venture capitalists
The naïve venture capitalist realizes time-point inconsistency but misunderstands the decision criterion of future selves generated by time-flow inconsistency. In our model, this type of venture capitalist believes that self \(n\left( {n = 1 \sim L - 1} \right)\) will adopt strategies consistent with the current self (self 0). In other words, the discount function \(D_{n} \left( {t,s} \right)\)\(\left( {n = 1\sim L - 1} \right)\) will not update and is always equal to \(D_{0} \left( {t,s} \right)\). Thus, it is a three-stage optimization problem solved by backward induction.
First, let us consider the optimization problem from the perspective of self \(L\). As mentioned above, self \(L\) faces the same problem as the time-consistent VC. Let \(N_{L} \left( x \right)\) and \(x_{N,L}\) denote self \(L\)'s value function and exercise threshold, respectively.
$$N_{L} \left( x \right) = F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
$$x_{N,L} = x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }}.$$
Then, self 1 faces the case in which there is only self \(L\) in the future. This situation is similar to that of the time-point-inconsistent venture capitalist. The only difference is in the arrival intensity of self \(L\). Self 1's value function \(N_{1} \left( x \right)\) and exercise threshold \(x_{N,1}\) are given in Eqs. (20) and (21).
$$N_{1} \left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{3} - \beta_{1} }}\left( {x^{*} - x_{N,1} } \right)\left( {\frac{x}{{x_{N,1} }}} \right)^{{\beta_{3} }} + \delta_{p} F\left( x \right),$$
$$x_{N,1} = \frac{1}{{\eta \left( {\beta_{3} - 1} \right)}}\left[ {\beta_{3} \theta + \left( {\beta_{3} - \beta_{1} } \right)\delta_{p} F\left( {x_{N,1} } \right)} \right],$$
where \(\beta_{3} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{pN} } \right)}}{{\sigma^{2} }}} > \beta_{2} > \beta_{1}\).
Next, self 0 decides their exercise threshold \(x_{N,0}\), considering their future selves' exercise thresholds. The continuation value function \(N_{1}^{c} \left( x \right)\) of self 0 is calculated as follows. If self 1 is alive when their threshold \(x_{N,1}\) is reached, then the exit option is exercised, and its payoff to self 0 is \(\delta_{f} \left[ {P_{VC}^{*} \left( {x_{N,1} } \right) - C} \right]\). However, if self 1 dies and self \(L\) arrives before \(x_{N,1}\) is reached, then self 0's continuation value \(N_{1}^{c} \left( x \right)\) changes into self 1's continuation value \(F^{c} \left( x \right)\). Thus, \(N_{1}^{c} \left( x \right)\) solves the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} N_{1}^{c\prime\prime } \left( x \right) + \alpha xN_{1}^{c\prime } \left( x \right) - \rho N_{1}^{c} \left( x \right) + \lambda_{pN} \left[ {F^{c} \left( x \right) - N_{1}^{c} \left( x \right)} \right] = 0,$$
where \(F^{c} \left( x \right) = \delta_{p} F\left( x \right)\). The value-matching condition is given by
$$N_{1}^{c} \left( {x_{N,1} } \right) = \delta_{f} \left[ {P_{VC}^{*} \left( {x_{N,1} } \right) - C} \right] = \delta_{f} \left( {\eta x_{N,1} - \theta } \right).$$
The value-matching condition ensures the continuity of the continuation value function. We note that solving \(N_{1}^{c} \left( x \right)\) only requires a boundary condition. To simplify the expression, we define \(Y = x_{N,1}^{{ - \beta_{3} }} \left[ {\delta_{f} \left( {\eta x_{N,1} - \theta } \right) - \delta_{p} F\left( {x_{N,1} } \right)} \right]\). Self 0's continuation value function is then \(N_{1}^{c} \left( x \right) = Yx^{{\beta_{3} }} + \delta_{p} F\left( x \right)\).
Self 0 maximizes their value function \(N_{0} \left( x \right)\), taking the continuation value function \(N_{1}^{c} \left( x \right)\) as given and choosing their exit threshold \(x_{N,0}\). Thus, \(N_{0} \left( x \right)\) solves the differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} N_{0}^{\prime \prime } \left( x \right) + \alpha xN_{0}^{\prime } \left( x \right) - \rho N_{0} \left( x \right) + \lambda_{f} \left[ {N_{1}^{c} \left( x \right) - N_{0} \left( x \right)} \right] = 0.$$
It is solved by using the value-matching and smooth-pasting conditions:
$$N_{0} \left( {x_{N,0} } \right) = P_{VC}^{*} \left( {x_{N,0} } \right) - C = \eta x_{N,0} - \theta ,\quad N_{0}^{{\prime }} \left( {x_{N,0} } \right) = \left[ {P_{VC}^{*} \left( {x_{N,0} } \right) - C} \right]^{{\prime }} = \eta .$$
We assume that the general solution of \(N_{0} \left( x \right)\) takes the form below, and we verify the conjecture in "Appendix 1". We then discuss the two cases separately.
$$N_{0} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} \ne \lambda_{pN} , \hfill \\ \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} ,\quad if \, \lambda_{f} = \lambda_{pN} . \hfill \\ \end{gathered} \right.$$
If \(\lambda_{f} \ne \lambda_{pN}\), we note that \(\beta_{3} \ne \beta_{4}\). The value of the naïve venture capitalist exit option is given by
$$N_{0} \left( x \right) = \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} ,$$
where \(\varepsilon = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pN} }}\), \(U_{0} = x_{N,0}^{{ - \beta_{4} }} \left[ {\eta x_{N,0} - \theta - \delta_{p} F\left( {x_{N,0} } \right) - \varepsilon Yx_{N,0}^{{\beta_{3} }} } \right]\), \(\beta_{4} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{f} } \right)}}{{\sigma^{2} }}}\), and the exit threshold \(x_{N,0}\) is the solution to Eq. (26).
$$x_{N,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{N,0} } \right) + \frac{{\beta_{4} - \beta_{3} }}{{\eta \left( {\beta_{4} - 1} \right)}}\varepsilon Yx_{N,0}^{{\beta_{3} }} .$$
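The three-step backward induction just described can be implemented directly. The sketch below (again an illustration, with assumed values for \(\eta\), \(\theta\), \(\delta_{p}\), \(\delta_{f}\), \(\lambda_{f}\) and \(\lambda_{pN}\), and with \(\beta_{1}\) taken as the root at \(\lambda = 0\)) solves Eq. (21) for \(x_{N,1}\), computes \(Y\) and \(\varepsilon\), and then solves Eq. (26) for \(x_{N,0}\) by fixed-point iteration.

import math

alpha, sigma, rho = 0.02, 0.2, 0.06    # base case reported in the numerical section
eta, theta = 1.0, 10.0                 # hypothetical exit payoff eta*x - theta
delta_p, delta_f = 0.3, 0.7            # assumed reduction factors (delta_f >= delta_p)
lambda_f, lambda_pN = 1.0, 0.5         # assumed arrival intensities (unequal case)

def beta(lam):
    a = alpha / sigma ** 2 - 0.5
    return 0.5 - alpha / sigma ** 2 + math.sqrt(a ** 2 + 2.0 * (rho + lam) / sigma ** 2)

beta1, beta3, beta4 = beta(0.0), beta(lambda_pN), beta(lambda_f)
x_star = beta1 * theta / ((beta1 - 1.0) * eta)
F = lambda x: (x / x_star) ** beta1 * (eta * x_star - theta)

def solve(g, x0=theta / eta, tol=1e-10, max_iter=1000):
    # Plain fixed-point iteration; converges for these illustrative values.
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Step 1: self 1's threshold from Eq. (21)
x_N1 = solve(lambda x: (beta3 * theta + (beta3 - beta1) * delta_p * F(x)) / (eta * (beta3 - 1.0)))

# Step 2: the coefficient Y of self 0's continuation value, and epsilon
Y = x_N1 ** (-beta3) * (delta_f * (eta * x_N1 - theta) - delta_p * F(x_N1))
eps = lambda_f / (lambda_f - lambda_pN)

# Step 3: self 0's threshold from Eq. (26)
x_N0 = solve(lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x)
                        + (beta4 - beta3) * eps * Y * x ** beta3) / (eta * (beta4 - 1.0)))

print(x_star, x_N1, x_N0)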
If \(\lambda_{f} = \lambda_{pN}\), we note that \(\beta_{3} = \beta_{4}\) (hereinafter referred to as \(\beta_{4}\)). The value of the naïve venture capitalist exit option is given by
$$N_{0} \left( x \right) = \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} ,$$
where \(R_{1} = - \frac{{\lambda_{f} Y}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}\), \(R_{0} = x_{N,0}^{{ - \beta_{4} }} \left[ {\eta x_{N,0} - \theta - \delta_{p} F\left( {x_{N,0} } \right) - R_{1} x_{N,0}^{{\beta_{4} }} \log x_{N,0} } \right]\), and the exit threshold \(x_{N,0}\) is the solution to Eq. (28).
$$x_{N,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{N,0} } \right) - \frac{{R_{1} x_{N,0}^{{\beta_{4} }} }}{{\eta \left( {\beta_{4} - 1} \right)}}.$$
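For the knife-edge case \(\lambda_{f} = \lambda_{pN}\), the corresponding sketch computes \(R_{1}\) from \(Y\) and solves Eq. (28); as before, all payoff and preference parameters other than \(\alpha\), \(\sigma\) and \(\rho\) are assumptions used only for demonstration.

import math

alpha, sigma, rho = 0.02, 0.2, 0.06    # base case reported in the numerical section
eta, theta = 1.0, 10.0                 # hypothetical exit payoff eta*x - theta
delta_p, delta_f = 0.3, 0.7            # assumed reduction factors
lambda_f = 1.0                         # with lambda_pN equal to lambda_f in this case

def beta(lam):
    a = alpha / sigma ** 2 - 0.5
    return 0.5 - alpha / sigma ** 2 + math.sqrt(a ** 2 + 2.0 * (rho + lam) / sigma ** 2)

beta1, beta4 = beta(0.0), beta(lambda_f)       # here beta3 = beta4
x_star = beta1 * theta / ((beta1 - 1.0) * eta)
F = lambda x: (x / x_star) ** beta1 * (eta * x_star - theta)

def solve(g, x0=theta / eta, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Self 1's threshold, Eq. (21) with beta3 = beta4
x_N1 = solve(lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x)) / (eta * (beta4 - 1.0)))

# Y and the log-term coefficient R1
Y = x_N1 ** (-beta4) * (delta_f * (eta * x_N1 - theta) - delta_p * F(x_N1))
R1 = -lambda_f * Y / (alpha + 0.5 * sigma ** 2 * (2.0 * beta4 - 1.0))

# Self 0's threshold, Eq. (28)
x_N0 = solve(lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x)
                        - R1 * x ** beta4) / (eta * (beta4 - 1.0)))

print(x_N1, x_N0)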
Sophisticated venture capitalists
The sophisticated venture capitalist foresees both time-point and time-flow inconsistencies, as shown in Fig. 3. This type of venture capitalist clearly and correctly understands that all future selves will adopt strategies based on their own interests. Therefore, we need to start with self \(L\), the last self, and derive the value function, continuation value function, and exit threshold for each self backward until self 0.
Similar to the naïve venture capitalist, we give self \(L\)'s value function \(S_{L} \left( x \right)\) and exercise threshold \(x_{S,L}\) directly.
$$S_{L} \left( x \right) = F\left( x \right) = \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right).$$
$$x_{S,L} = x^{*} = \frac{{\beta_{1} \theta }}{{\left( {\beta_{1} - 1} \right)\eta }}.$$
Self \(L - 1\) faces a case similar to that of self 1 of the naïve venture capitalist. Let \(S_{L - 1} \left( x \right)\) and \(x_{S,L - 1}\) denote self \(L - 1\)'s value function and exercise threshold, respectively. It is noted that the arrival intensity of self \(L\) changes again.
$$S_{L - 1} \left( x \right) = \frac{{\eta \left( {\beta_{1} - 1} \right)}}{{\beta_{5} - \beta_{1} }}\left( {x^{*} - x_{S,L - 1} } \right)\left( {\frac{x}{{x_{S,L - 1} }}} \right)^{{\beta_{5} }} + \delta_{p} F\left( x \right),$$
$$x_{S,L - 1} = \frac{1}{{\eta \left( {\beta_{5} - 1} \right)}}\left[ {\beta_{5} \theta + \left( {\beta_{5} - \beta_{1} } \right)\delta_{p} F\left( {x_{S,L - 1} } \right)} \right],$$
where \(\beta_{5} = \frac{1}{2} - \frac{\alpha }{{\sigma^{2} }} + \sqrt {\left( {\frac{\alpha }{{\sigma^{2} }} - \frac{1}{2}} \right)^{2} + \frac{{2\left( {\rho + \lambda_{pS} } \right)}}{{\sigma^{2} }}}\). In general, \(\beta_{4} \ge \beta_{5}\).
We assume the general solution of \(S_{L - 2} \left( x \right)\) that satisfies the corresponding differential equation takes the form below, and we solve the exit threshold \(x_{S,L - 2}\) by the value-matching and smooth-pasting conditions.
$$S_{L - 2} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varphi Zx^{{\beta_{5} }} + U_{L - 2,0} x^{{\beta_{4} }}, \quad if \, \lambda_{f} \ne \lambda_{pS} , \hfill \\ \delta_{p} F\left( x \right) + R_{L - 2,1} x^{{\beta_{4} }} \log x + R_{L - 2,0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} = \lambda_{pS} . \hfill \\ \end{gathered} \right.$$
The proof for the general solution of \(S_{L - 2} \left( x \right)\) is similar to the case of the naïve venture capitalist, so it is not repeated here.
(1) If \(\lambda_{f} \ne \lambda_{pS}\), we note that \(\beta_{4} \ne \beta_{5}\). Self \(L - 2\)'s exercise threshold \(x_{S,L - 2}\) is the solution to Eq. (34).
$$x_{S,L - 2} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,L - 2} } \right) + \frac{{\beta_{4} - \beta_{5} }}{{\eta \left( {\beta_{4} - 1} \right)}}\varphi Zx_{S,L - 2}^{{\beta_{5} }} ,$$
where \(\varphi = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pS} }}\) and \(Z = x_{S,L - 1}^{{ - \beta_{5} }} \left[ {\delta_{f} \left( {\eta x_{S,L - 1} - \theta } \right) - \delta_{p} F\left( {x_{S,L - 1} } \right)} \right]\).
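The first two steps of the sophisticated venture capitalist's problem mirror the naïve case with \(\lambda_{pS}\) in place of \(\lambda_{pN}\). The sketch below (illustrative values only, with \(\beta_{1}\) again taken as the root at \(\lambda = 0\)) solves Eq. (33) for \(x_{S,L-1}\), computes \(Z\) and \(\varphi\), and then solves Eq. (34) for \(x_{S,L-2}\).

import math

alpha, sigma, rho = 0.02, 0.2, 0.06    # base case reported in the numerical section
eta, theta = 1.0, 10.0                 # hypothetical exit payoff eta*x - theta
delta_p, delta_f = 0.3, 0.7            # assumed reduction factors
lambda_f, lambda_pS = 1.0, 0.4         # assumed arrival intensities (unequal case)

def beta(lam):
    a = alpha / sigma ** 2 - 0.5
    return 0.5 - alpha / sigma ** 2 + math.sqrt(a ** 2 + 2.0 * (rho + lam) / sigma ** 2)

beta1, beta4, beta5 = beta(0.0), beta(lambda_f), beta(lambda_pS)
x_star = beta1 * theta / ((beta1 - 1.0) * eta)
F = lambda x: (x / x_star) ** beta1 * (eta * x_star - theta)

def solve(g, x0=theta / eta, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Self L-1's threshold, Eq. (33)
x_SL1 = solve(lambda x: (beta5 * theta + (beta5 - beta1) * delta_p * F(x)) / (eta * (beta5 - 1.0)))

# Z and phi, then self L-2's threshold, Eq. (34)
Z = x_SL1 ** (-beta5) * (delta_f * (eta * x_SL1 - theta) - delta_p * F(x_SL1))
phi = lambda_f / (lambda_f - lambda_pS)
x_SL2 = solve(lambda x: (beta4 * theta + (beta4 - beta1) * delta_p * F(x)
                         + (beta4 - beta5) * phi * Z * x ** beta5) / (eta * (beta4 - 1.0)))

print(x_SL1, x_SL2)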
In summary, for \(n \le L - 3\), self \(n\)'s continuation value function \(S_{n + 1}^{c} \left( x \right)\) satisfies the differential equation below:
$$\frac{1}{2}\sigma^{2} x^{2} S_{n + 1}^{c\prime \prime} \left( x \right) + \alpha xS_{n + 1}^{c\prime} \left( x \right) - \rho S_{n + 1}^{c} \left( x \right) + \lambda_{f} \left[ {S_{n + 2}^{c} \left( x \right) - S_{n + 1}^{c} \left( x \right)} \right] = 0.$$
The value-matching condition is given by
$$S_{n + 1}^{c} \left( {x_{S,n + 1} } \right) = \delta_{f} \left[ {P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C} \right] = \delta_{f} \left( {\eta x_{S,n + 1} - \theta } \right).$$
The solutions for the continuation value functions \(S_{n + 1}^{c} \left( x \right)\) are presented in "Appendix 2", and \(S_{n + 1}^{c} \left( x \right)\) is given by
$$S_{n + 1}^{c} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - n - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - n - 3} {W_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
where the formula for \(W_{n + 1,i}\) is detailed in "Appendix 2".
Self \(n + 1\)'s value function \(S_{n + 1} \left( x \right)\) satisfies the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{n + 1}^{\prime \prime } \left( x \right) + \alpha xS_{n + 1}^{\prime } \left( x \right) - \rho S_{n + 1} \left( x \right) + \lambda_{f} \left[ {S_{n + 2}^{c} \left( x \right) - S_{n + 1} \left( x \right)} \right] = 0.$$
$$S_{n + 1} \left( {x_{S,n + 1} } \right) = P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C = \eta x_{S,n + 1} - \theta ,\quad S_{n + 1}^{{\prime }} \left( {x_{S,n + 1} } \right) = \left[ {P_{VC}^{*} \left( {x_{S,n + 1} } \right) - C} \right]^{{\prime }} = \eta .$$
The solutions for the value functions \(S_{n + 1} \left( x \right)\) are presented in "Appendix 3", and \(S_{n + 1} \left( x \right)\) is given by
$$S_{n + 1} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - n - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - n - 3} {U_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
where the formula for \(U_{n + 1,i}\) is detailed in "Appendix 3".
(2) If \(\lambda_{f} = \lambda_{pS}\), we note that \(\beta_{4} = \beta_{5}\) (hereinafter referred to as \(\beta_{4}\)). Self \(L - 2\)'s exercise threshold \(x_{S,L - 2}\) is the solution to Eq. (39).
$$x_{S,L - 2} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,L - 2} } \right) - \frac{{R_{L - 2,1} }}{{\eta \left( {\beta_{4} - 1} \right)}}x_{S,L - 2}^{{\beta_{4} }} ,$$
where \(R_{L - 2,1} = - \frac{{\lambda_{f} Z}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}\).
In summary, for \(n \le L - 3\), self \(n\)'s continuation value function \(S_{n + 1}^{c} \left( x \right)\) is given by
$$S_{n + 1}^{c} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - n - 2} {V_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
Self \(n + 1\)'s value function \(S_{n + 1} \left( x \right)\) is given by
$$S_{n + 1} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - n - 2} {R_{n + 1,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
The solutions for the continuation value function \(S_{n + 1}^{c} \left( x \right)\) and the value function \(S_{n + 1} \left( x \right)\) are similar to those in the case \(\lambda_{f} \ne \lambda_{pS}\), so they are not repeated here.
If \(\lambda_{f} \ne \lambda_{pS}\), we note that \(\beta_{4} \ne \beta_{5}\). The value of the sophisticated venture capitalist exit option is given by
$$S_{0} \left( x \right) = \delta_{p} F\left( x \right) + \varphi^{L - n - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{L - 2} {U_{0,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
and the exit threshold \(x_{S,0}\) is as follows:
$$\begin{aligned} x_{S,0} &= \frac{{\beta_{3} \theta }}{{\eta \left( {\beta_{3} - 1} \right)}} + \frac{{\beta_{3} - \beta_{1} }}{{\eta \left( {\beta_{3} - 1} \right)}}\delta_{p} F\left( {x_{S,0} } \right) + \frac{{\beta_{3} - \beta_{4} }}{{\eta \left( {\beta_{3} - 1} \right)}}\varphi^{L - 1} Zx_{S,0}^{{\beta_{5} }}\\&\quad - \frac{1}{{\eta \left( {\beta_{3} - 1} \right)}}\sum\limits_{k = 1}^{L - 2} {kU_{0,k} \left( {\log x_{S,0} } \right)^{k - 1} x_{S,0}^{{\beta_{4} }} } .\end{aligned}$$
If \(\lambda_{f} = \lambda_{pS}\), we note that \(\beta_{4} = \beta_{5}\) (hereinafter referred to as \(\beta_{4}\)). The value of the sophisticated venture capitalist exit option is given by
$$S_{0} \left( x \right) = \delta_{p} F\left( x \right) + \sum\limits_{i = 0}^{L - 1} {R_{0,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } ,$$
where the derivation of \(R_{0,i}\) is similar to that of \(W_{0,i}\) (see "Appendix 2" for details), and the exit threshold \(x_{S,0}\) is as follows:
$$x_{S,0} = \frac{{\beta_{4} \theta }}{{\eta \left( {\beta_{4} - 1} \right)}} + \frac{{\beta_{4} - \beta_{1} }}{{\eta \left( {\beta_{4} - 1} \right)}}\delta_{p} F\left( {x_{S,0} } \right) - \frac{1}{{\eta \left( {\beta_{4} - 1} \right)}}\sum\limits_{k = 1}^{L - 1} {kR_{0,k} \left( {\log x_{S,0} } \right)^{k - 1} x_{S,0}^{{\beta_{4} }} } .$$
Model implications
This section compares the exit thresholds of the four types of venture capitalists by numerically examining the properties of the model solutions. The base parameter values are set as \(\alpha = 0.02\), \(\sigma = 0.2\), and \(\rho = 0.06\) following Dixit and Pindyck (1994), which ensures \(\rho > \alpha\) for convergence; \(Q^{M} = 1.7\), \(Q^{A} = 1\), and \(Q^{T} = 0.5\) based on Thijssen (2008), which ensures \(Q^{M} > Q^{A} + Q^{T}\) for positive synergy; the VC's negotiation ability \(\beta_{VC} = 0.2\) (any value between 0 and 1 is admissible); the VC's shareholding \(\phi = 0.4\) (likewise between 0 and 1); the VC's preferential fixed claim \(d = 0.4\); and the exit cost \(C = 10\).
Comparison of time-consistent and time-point-inconsistent venture capitalists
Figure 4 shows how the exit thresholds \(x^{*}\) of time-consistent venture capitalists and \(x_{G}\) of time-point-inconsistent venture capitalists change with parameters \(\delta_{p}\) and \(\lambda_{L}\), respectively. We separately set \(\lambda_{L} = 1\) in Fig. 4a and \(\delta_{p} = 0.3\) in Fig. 4b. Our results show that time-point-inconsistent venture capitalists exit earlier than time-consistent venture capitalists, in that \(x_{G} < x^{*}\). We also observe that as the reduction factor \(\delta_{p}\) increases from 0 to 1 or the arrival intensity \(\lambda_{L}\) decreases from 1 to 0, the degree of time inconsistency gradually decreases, and the exit threshold \(x_{G}\) gradually approaches \(x^{*}\).
Exit thresholds change with respect to the reduction factor \(\delta_{p}\) and arrival intensity \(\lambda_{L}\)
The economic intuition for these findings is as follows. The exit decision of the time-point-inconsistent venture capitalist is determined by the current and the future selves. The VC fund's expiration can be viewed as the death of the current self and the birth of the future self, meaning that the future self starts to take over the right to make exit decisions. Considering that the payoffs obtained from exercising the exit option by the future self are further discounted by an extra reduction factor, the current self prefers to exercise the option themselves. As a result, time-inconsistent preferences accelerate the exit of venture capitalists. Venture capitalists managing young and less-prestigious VC funds are more sensitive to fund expiration and more prone to time-inconsistent preferences than those in charge of older VC funds. Therefore, we conclude that young VCs exit earlier than older VCs. This conclusion is also supported by recent empirical evidence (Amor and Kooli 2020; Sethuram et al. 2021).
Comparison of the three defined time-inconsistent venture capitalists
Figures 5 and 6 demonstrate how the exit thresholds \(x_{G}\) of time-point-inconsistent venture capitalists and \(x_{N,0}\) of naive venture capitalists change with the parameter \(\delta_{p}\) under \(\lambda_{f} = \lambda_{pN}\) and \(\lambda_{f} \ne \lambda_{pN}\), respectively. To ensure \(\delta_{f} \ge \delta_{p}\), we set \(\delta_{f} = 0.7\) and \(\delta_{p}\) from 0 to 0.7. To model the different durations left before the expiry date, we examine the exit thresholds of the above two types of venture capitalists by adjusting \(\lambda_{pN}\) while keeping \(\lambda_{f} = 1\) all the time. The time-point-inconsistent venture capitalist is directly faced with the pressure of the VC fund's expiration, while the naïve venture capitalist is also faced with the pressure of time-flow inconsistency in addition to the VC fund's expiration.
Exit thresholds change with the reduction factor \(\delta_{p}\) under \(\lambda_{f} = \lambda_{pN} = 1\)
Exit thresholds change with the reduction factor \(\delta_{p}\) under \(\lambda_{f} \ne \lambda_{pN}\). The parameter values are \(\lambda_{f} = 1\), a \(\lambda_{pN} = 0.8\), b \(\lambda_{pN} = 0.5\), c \(\lambda_{pN} = 0.2\), and d \(\lambda_{pN} = 0.05\)
Let us begin with the special case of \(\lambda_{f} = \lambda_{pN}\). Figure 5 shows that \(x_{G} > x_{N,0}\) at \(\delta_{p} > 0.18\), and \(x_{G} < x_{N,0}\) at \(\delta_{p} < 0.18\). Thus, we denote the intersection point where \(\delta_{p} = 0.18\) by \(\delta_{pe}\). We find that when \(\delta_{p} = \delta_{f} = 0.7\) (i.e., we do not distinguish between the two kinds of time inconsistency), our model reduces to the case of Grenadier and Wang (2007), and the exit threshold of the time-point-inconsistent venture capitalist is strictly greater than that of the naïve venture capitalist. Nevertheless, we note that when \(\delta_{p} < \delta_{pe}\), the naïve venture capitalist abnormally exits later than the time-point-inconsistent venture capitalist. In this case, the time-point-inconsistent venture capitalist faces the powerful but distant pressure of the VC fund's expiration. In contrast, the naïve venture capitalist faces the same expiration pressure but, in the first place, faces a low degree of pressure from time-flow inconsistency. For the naïve venture capitalist, the future self caused by time-flow inconsistency may act as a buffer against the detrimental effect, thus weakening the impact of time-point inconsistency. We further test this inference by comparing the exit thresholds of naïve and sophisticated venture capitalists.
Now let us analyze the more general cases of \(\lambda_{f} \ne \lambda_{pN}\). Figure 6a–d show the cases where \(\lambda_{pN}\) = 0.8, 0.5, 0.2 and 0.05, respectively. We find that as \(\lambda_{pN}\) decreases, both \(x_{G}\) and \(x_{N,0}\) increase, while the increase of \(x_{G}\) is greater than that of \(x_{N,0}\). When \(\lambda_{pN}\) decreases to a specific value, \(x_{G}\) becomes greater than \(x_{N,0}\) over the whole range of \(\delta_{p}\). These findings suggest that the farther away the expiry dates are, the more the two types of venture capitalists delay their exit. However, considering that we adjust \(\lambda_{pN}\) while keeping \(\lambda_{f}\) unchanged, the exit threshold of the time-point-inconsistent venture capitalist is more affected than that of the naïve venture capitalist. The intuition is straightforward. As the duration left before the expiry date increases, the impact of the VC fund's expiration decreases, making the effect of time-flow inconsistency more prominent. In addition, it is worth noting that the case of \(\lambda_{f} \gg \lambda_{pN}\), that is, the situation shown in Fig. 6, is the most common in reality. After all, the future self caused by individual time preferences arrives earlier than the VC fund's expiry date. Therefore, the naïve venture capitalist typically exits earlier than the time-point-inconsistent venture capitalist.
Figure 7 shows how the exit thresholds \(x_{G}\), \(x_{N,0}\) and \(x_{S,0}\) change with the parameter \(\delta_{p}\) under different numbers of selves generated by time-flow inconsistency. We set \(\lambda_{f} = \lambda_{pS} = 0.5\) to facilitate the calculation and \(\delta_{f} = 0.7\) as in the above cases. Hence, the number of selves \(L + 1\) reflects the different durations left before the expiry date. Due to the complicated derivation, we only provide three cases where \(L + 1\) = 4, 5, and 6, respectively.
Exit thresholds change with the reduction factor \(\delta_{p}\). The parameter values are a \(L + 1 = 4\); b \(L + 1 = 5\); c \(L + 1 = 6\)
We denote the intersection of curves \(x_{G}\) and \(x_{S,0}\) as \(\delta_{pG}\) and the intersection of curves \(x_{N,0}\) and \(x_{S,0}\) as \(\delta_{ps}\), respectively. As shown in Fig. 7a, when \(\delta_{p} > \delta_{ps}\), especially when \(\delta_{p} = \delta_{f} = 0.7\), our model again reduces to the case of Grenadier and Wang (2007), and we find that sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists. When \(\delta_{p} < \delta_{pG}\) and \(\delta_{pG} < \delta_{p} < \delta_{ps}\), we observe that \(x_{S,0} > x_{G} > x_{N,0}\) and \(x_{G} > x_{S,0} > x_{N,0}\), respectively. These results confirm the inference we proposed in our analysis of Fig. 5 that the future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. In particular, sophisticated venture capitalists exit later than naïve venture capitalists because they are fully aware of the multiple future selves caused by time-flow inconsistency, which further weakens the effect of time-point inconsistency.
Moreover, it is observed that the growth of \(x_{G}\), \(x_{N,0}\) and \(x_{S,0}\) decreases in turn as the number of selves \(L + 1\) increases. This result indicates that a gradual increase in the number of future selves enhances the dominant effect of time-flow inconsistency on the sophisticated venture capitalist, thereby prompting the current self to make the exit decision earlier than the naïve venture capitalist. Thus, it is foreseeable that when the number of future selves reaches a specific threshold, the sophisticated venture capitalist exits earlier than the naïve venture capitalist, who in turn exits earlier than the time-point-inconsistent venture capitalist. This finding is also in line with real-world practice because the arrival intensity of time-flow inconsistency is usually much greater than that of time-point inconsistency.
This study integrates time-inconsistent preferences into VC exit decisions under uncertainty. In transition economies such as China, many VC funds have been set up and put into operation. For young and less-prestigious VC funds, obtaining significant payoffs and exiting successfully before the VC fund's expiration can effectively signal the high quality of the VC fund and guarantee follow-up fundraising. Consequently, venture capitalists face two opposite decision drivers: waiting for the optimal exit and exiting early to gain reputational value. In this study, the time-inconsistent preferences of venture capitalists are incorporated into the optimal exit option framework to model the above VC exit decisions accurately.
Considering that both individual time preferences and the finite lifespan of VC funds contribute to the time-inconsistent preferences of venture capitalists, two kinds of time inconsistencies (i.e., time-flow and time-point inconsistencies) are proposed. Based on the assumption of time inconsistencies described, we derive and compare the exit thresholds of the four types of venture capitalists. The main findings of the numerical experiments are as follows. First, time-inconsistent preferences accelerate the exit of venture capitalists, and thus all types of time-inconsistent venture capitalists exit earlier than time-consistent ones. Our model is the first to provide a behavioral explanation for the empirical facts of young VCs' grandstanding from the perspective of decision-makers' time preferences. Second, the closer the VC funds' expiry dates are, the more likely time-inconsistent venture capitalists are to accelerate their exits. Third, future selves caused by time-flow inconsistency weaken the effect of time-point inconsistency. Our findings explain both the standard and the abnormal phenomena. Generally, sophisticated venture capitalists exit earlier than naïve venture capitalists, who in turn exit earlier than time-point-inconsistent venture capitalists. In some special situations, when the degree of time-point inconsistency is much greater than that of time-flow inconsistency and the exit opportunity is close to the VC fund's expiry date, naïve venture capitalists exit later than time-point-inconsistent venture capitalists, and sophisticated venture capitalists exit last among the three defined time-inconsistent venture capitalists.
Two possible extensions to this study are introduced below. First, our study follows the previous literature and assumes that the discount reduction factor \(\delta_{f}\) is fixed for each self. Nevertheless, a more realistic assumption would be that the closer to the expiration date, the smaller \(\delta_{f}\) is. Second, it is assumed that venture capitalists retain control over their capital exit from the invested firm. Although in reality venture capital contracts do have provisions that give venture capitalists this right, entrepreneurs lose the private benefits of control and hence may oppose trade sales. Therefore, the model could be extended to account for the game between the entrepreneur, the venture capitalist, and the acquiring firm.
Young VCs rely more on individual decisions, whereas older VCs have well-established decision rules and processes. Grenadier and Wang (2007) argue that individuals or small private partnerships are more prone to time inconsistency than firms, because the organizational structure and professional management of firms may mitigate or remove the time inconsistency from the firms' decisions.
Yoon (2020) praises time inconsistency in intertemporal choice as one of the most influential findings in the judgment and decision literature.
The limited partnership has been the main organizational form of VC funds for the past three decades. It is designed with a finite lifespan: about 8–10 years, with an extension of 2–3 years allowed to facilitate exit (Gompers and Lerner, 1999; Kandel et al., 2011).
We note that Ferreira and Pereira (2021) also define a similar model. Their model follows the exit dynamic process that is generally consistent with ours, except that they set the premium unilaterally requested by VC, while we determine it by sharing the synergies brought by M&A.
Trade sales (or M&As) have been the most common exit route for venture capitalists (Bienz and Walz, 2010; Félix et al., 2014; Ferreira and Pereira, 2021).
For the foundation of real options approach, see Dixit and Pindyck (1994). For recent applications of real options in venture capital decision making, see Tavares-Gärtner et al. (2018), and Ferreira and Pereira (2021).
Kaplan and Strömberg (2003) report that nearly 80% of all venture contracts use convertible preferred stock and that in nearly half of those cases the stock is participating.
For example, drag-along rights and tag-along rights, see Cumming (2010) and Bienz and Walz (2010).
Alvarez LHR, Stenbacka R (2006) Takeover timing, implementation uncertainty, and embedded divestment options. Rev Financ 10:417–441. https://doi.org/10.1007/s10679-006-9002-y
Amor SB, Kooli M (2020) Do M&A exits have the same effect on venture capital reputation than IPO exits? J Bank Financ 111:105704. https://doi.org/10.1016/j.jbankfin.2019.105704
Anderson HD, Chi J, Wang Q (2017) Political ties and VC exits: evidence from China. China Econ Rev 44:48–66. https://doi.org/10.1016/j.chieco.2017.03.007
Arcot S (2014) Participating convertible preferred stock in venture capital exits. J Bus Ventur 29:72–87. https://doi.org/10.1016/j.jbusvent.2013.06.001
Arcot S, Fluck Z, Gaspar J-M et al (2015) Fund managers under pressure: rationale and determinants of secondary buyouts. J Financ Econ 115:102–135. https://doi.org/10.1016/j.jfineco.2014.08.002
Arundale K (2020) Syndication and cross-border collaboration by venture capital firms in Europe and the USA: a comparative study. Ventur Cap 22:355–376. https://doi.org/10.1080/13691066.2020.1847414
Bienz C, Walz U (2010) Venture capital exit rights. J Econ Manag Strategy 19:1071–1116. https://doi.org/10.1111/j.1530-9134.2010.00278.x
Bock C, Schmidt M (2015) Should I stay, or should I go?—How fund dynamics influence venture capital exit decisions. Rev Financ Econ 27:68–82. https://doi.org/10.1016/j.rfe.2015.09.002
Butler AW, Goktan MS (2013) On the role of inexperienced venture capitalists in taking companies public. J Corp Financ 22:299–319. https://doi.org/10.1016/j.jcorpfin.2013.06.004
Chen S, Wang X, Deng Y et al (2016) Optimal dividend-financing strategies in a dual risk model with time-inconsistent preferences. Insur Math Econ 67:27–37. https://doi.org/10.1016/j.insmatheco.2015.11.005
Cumming D (2008) Contracts and exits in venture capital finance. Rev Financ Stud 21:1947–1982. https://doi.org/10.1093/rfs/hhn072
Cumming D (2010) Venture capital: investment strategies, structures, and policies. Wiley, Hoboken
Cumming D (2012) The Oxford handbook of venture capital. Oxford University Press, Oxford
Cumming D, Fleming G, Schwienbacher A (2006) Legality and venture capital exits. J Corp Financ 12:214–245. https://doi.org/10.1016/j.jcorpfin.2004.12.004
Dixit AK, Pindyck RS (1994) Investment under uncertainty. Princeton University Press, Princeton
Félix EGS, Pires CP, Gulamhussen MA (2014) The exit decision in the European venture capital market. Quant Financ 14:1115–1130. https://doi.org/10.1080/14697688.2012.714903
Ferreira RM, Pereira PJ (2021) A dynamic model for venture capitalists' entry–exit investment decisions. Eur J Oper Res 290:779–789. https://doi.org/10.1016/j.ejor.2020.08.014
Gao X, Ritter JR, Zhu Z (2013) Where have all the IPOs gone? J Financ Quant Anal 48:1663–1692. https://doi.org/10.1017/S0022109014000015
Gompers P, Lerner J (1999) An analysis of compensation in the U.S. venture capital partnership. J Financ Econ 51:3–44. https://doi.org/10.1016/S0304-405X(98)00042-7
Gompers PA (1996) Grandstanding in the venture capital industry. J Financ Econ 42:133–156. https://doi.org/10.1016/0304-405X(96)00874-4
Grenadier SR, Malenko A (2011) Real options signaling games with applications to corporate finance. Rev Financ Stud 24:3993–4036. https://doi.org/10.1093/rfs/hhr071
Grenadier SR, Wang N (2007) Investment under uncertainty and time-inconsistent preferences. J Financ Econ 84:2–39. https://doi.org/10.1016/j.jfineco.2006.01.002
Guo D, Jiang K (2013) Venture capital investment and the performance of entrepreneurial firms: Evidence from China. J Corp Financ 22:375–395. https://doi.org/10.1016/j.jcorpfin.2013.07.001
Guo Q-W, Chen S, Schonfeld P et al (2018) How time-inconsistent preferences affect investment timing for rail transit. Transp Res Pt B-Methodol 118:172–192. https://doi.org/10.1016/j.trb.2018.10.009
Harris C, Laibson D (2013) Instantaneous gratification. Q J Econ 128:205–248. https://doi.org/10.1093/qje/qjs051
Kandel E, Leshchinskii D, Yuklea H (2011) VC funds: aging brings myopia. J Financ Quant Anal 46:431–457. https://doi.org/10.1017/s0022109010000840
Kaplan SN, Strömberg P (2003) Financial contracting theory meets the real world: an empirical analysis of venture capital contracts. Rev Econ Stud 70:281–315. https://doi.org/10.1111/1467-937X.00245
Klingler-Vidra R (2016) When venture capital is patient capital: seed funding as a source of patient capital for high-growth companies. Socio-Econ Rev 14:691–708. https://doi.org/10.1093/ser/mww022
KPMG (2019) Venture Pulse Q4 2018. Available at: https://assets.kpmg/content/dam/kpmg/xx/pdf/2019/01/kpmg-venture-pulse-q4-2018.pdf
Laibson D (1997) Golden eggs and hyperbolic discounting. Q J Econ 112:443–478. https://doi.org/10.1162/003355397555253
Lee PM, Wahal S (2004) Grandstanding, certification and the underpricing of venture capital backed IPOs. J Financ Econ 73:375–407. https://doi.org/10.1016/j.jfineco.2003.09.003
Li H, Mu C, Yang J (2016) Optimal contract theory with time-inconsistent preferences. Econ Model 52:519–530. https://doi.org/10.1016/j.econmod.2015.09.032
Li X, Wu X, Zhou W (2017) Optimal stopping investment in a logarithmic utility-based portfolio selection problem. Financ Innov 3:28. https://doi.org/10.1186/s40854-017-0080-y
Liu L, Niu Y, Wang Y et al (2020) Optimal consumption with time-inconsistent preferences. Econ Theory 70:785–815. https://doi.org/10.1007/s00199-019-01228-1
Loewenstein G, Prelec D (1992) Anomalies in intertemporal choice: evidence and an interpretation. Q J Econ 107:573–597. https://doi.org/10.2307/2118482
Luo P, Tian Y, Yang Z (2020) Real option duopolies with quasi-hyperbolic discounting. J Econ Dyn Control 111:103829. https://doi.org/10.1016/j.jedc.2019.103829
O'Donoghue T, Rabin M (1999) Doing it now or later. Am Econ Rev 89:103–124. https://doi.org/10.1257/aer.89.1.103
Sethuram S, Taussig M, Gaur A (2021) A multiple agency view of venture capital investment duration: the roles of institutions, foreignness, and alliances. Glob Strateg J. https://doi.org/10.1002/gsj.1402
Strotz RH (1955) Myopia and inconsistency in dynamic utility maximization. Rev Econ Stud 23:165–180. https://doi.org/10.2307/2295722
Tavares-Gärtner M, Pereira PJ, Brandão E (2018) Heterogeneous beliefs and optimal ownership in entrepreneurial financing decisions. Quant Financ 18:1947–1958. https://doi.org/10.1080/14697688.2018.1432882
Thaler R (1981) Some empirical evidence on dynamic inconsistency. Econ Lett 8:201–207. https://doi.org/10.1016/0165-1765(81)90067-7
Thijssen JJJ (2008) Optimal and strategic timing of mergers and acquisitions motivated by synergies and risk diversification. J Econ Dyn Control 32:1701–1720. https://doi.org/10.1016/j.jedc.2007.06.016
Tian Y (2016) Optimal capital structure and investment decisions under time-inconsistent preferences. J Econ Dyn Control 65:83–104. https://doi.org/10.1016/j.jedc.2016.02.001
Wang Y, Huang W, Liu B et al (2020) Optimal effort in the principal-agent problem with time-inconsistent preferences. N Am Econ Financ 52:100909. https://doi.org/10.1016/j.najef.2019.01.006
Yoon H (2020) Impatience and time inconsistency in discounting models. Manag Sci 66:5850–5860. https://doi.org/10.1287/mnsc.2019.3496
The authors thank the editor and the reviewers for invaluable comments and suggestions, which have improved the quality of this paper immensely.
This paper was supported by the Major Program of the National Social Science Foundation of China under Grant No. 17ZDA083, the National Natural Science Foundation of China under Grant No. 71932002, and the Natural Science Foundation of Beijing Municipality under Grant No. 9192001.
School of Management, Xi'an Jiaotong University, No. 28, Xianning West Road, Beilin District, Xi'an, 710049, China
Yanzhao Li, Ju-e Guo & Shaolong Sun
College of Economics and Management, Beijing University of Technology, No. 100, Ping Le Yuan, Chaoyang District, Beijing, 100124, China
Yongwu Li
Yanzhao Li
Ju-e Guo
Shaolong Sun
YZ.L and J.G conceived of the presented idea. YZ.L participated in the design of the study and drafted the manuscript. J.G supervised the findings. S.S participated in project administration. YW.L participated in the design of the study and writing the manuscript. All the authors provided critical feedback and helped shape the research, analysis, and manuscript. All the authors read and approved the manuscript.
Authors' information
Yanzhao Li received bachelor degree of Engineering and bachelor degree of Economics from Xi'an Jiaotong University, China, in 2017. He is currently pursuing the Ph.D. degree in management science and engineering at School of Management, Xi'an Jiaotong University, China. His research interests include venture capital, behavioral finance, and data analysis. He has published one paper in Tourism Economics.
Ju-e Guo received her Ph.D. degree in School of Management, Xi'an Jiaotong University, China, in 2001. She is currently a professor at School of Management, Xi'an Jiaotong University, China and executive deputy director of Research Center of Chinese Management. She has published more than 30 papers in international journals, such as Energy Economics and International Journal of Production Economics. Her research interests include investment and financing decision, risk management, and energy policy.
Shaolong Sun received his Ph.D. degree in Management Science and Engineering at the Institute of Systems Science, Academy of Mathematics and Systems Sciences, Chinese Academy of Sciences, China, in 2019. He is currently an associate professor at School of Management, Xi'an Jiaotong University, China. He has published over 20 papers in leading journals including Tourism Management, Energy Economics, and IEEE Transactions on Cybernetics. His research interests include big data mining, machine learning, and economic and financial forecasting.
Yongwu Li received his Ph.D. degree in Probability and Mathematical Statistics from Lanzhou University, Lanzhou, China, in 2014. Currently, he is an associate professor in the College of Economics and Management, Beijing University of Technology. He has published more than 20 papers in international journals, such as IEEE Systems Journal, Insurance: Mathematics and Economics, Journal of Optimization Theory and Applications, and Operations Research Letters. His research interests include financial engineering, mathematic finance, and the applications of stochastic control and optimization methods in operations research.
Correspondence to Yongwu Li.
The proof for the general solution of \(N_{0} \left( x \right)\).
The general solution we assume is given by
$$N_{0} \left( x \right) = \left\{ \begin{gathered} \delta_{p} F\left( x \right) + \varepsilon Yx^{{\beta_{3} }} + U_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} \ne \lambda_{pN} , \hfill \\ \delta_{p} F\left( x \right) + R_{1} x^{{\beta_{4} }} \log x + R_{0} x^{{\beta_{4} }} , \quad if \, \lambda_{f} = \lambda_{pN} . \hfill \\ \end{gathered} \right.$$
Take the first and second derivatives of \(N_{0} \left( x \right)\) and substitute them into the differential equation \(\frac{1}{2}\sigma^{2} x^{2} N_{0}^{\prime \prime } \left( x \right) + \alpha xN_{0}^{\prime } \left( x \right) - \rho N_{0} \left( x \right) + \lambda_{f} \left[ {N_{1}^{c} \left( x \right) - N_{0} \left( x \right)} \right] = 0\).
(1) If \(\lambda_{f} \ne \lambda_{pN}\), we note that \(\beta_{3} \ne \beta_{4}\).
For the \(x^{{\beta_{1} }}\) term, we have
$$\delta_{p} F\left( x \right)\left[ {\frac{1}{2}\sigma^{2} \beta_{1} \left( {\beta_{1} - 1} \right) + \alpha \beta_{1} - \rho } \right] = 0.$$
For the \(x^{{\beta_{3} }}\) term, we have
$$\frac{1}{2}\sigma^{2} \beta_{3} \left( {\beta_{3} - 1} \right)x^{{\beta_{3} }} \varepsilon Y + \alpha \beta_{3} x^{{\beta_{3} }} \varepsilon Y - \rho x^{{\beta_{3} }} \varepsilon Y + \lambda_{f} \left[ {x^{{\beta_{3} }} Y - x^{{\beta_{3} }} \varepsilon Y} \right] = 0.$$
Simplifying Eq. (49) above, we obtain
$$\varepsilon = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pN} }}.$$
For the \(x^{{\beta_{4} }}\) term, we have
$$U_{0} x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \left( {\rho + \lambda_{f} } \right)} \right] = 0.$$
(2) If \(\lambda_{f} = \lambda_{pN}\), we note that \(\beta_{3} = \beta_{4}\).
For the \(x^{{\beta_{4} }} \log x\) term, we have
$$\begin{aligned} & R_{1} x^{{\beta_{4} }} \log x\left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \rho - \lambda_{f} } \right] \\ & \quad + x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} R_{1} \left( {2\beta_{4} - 1} \right) + \alpha R_{1} + \lambda_{f} Y} \right] = 0. \\ \end{aligned}$$
The first bracket vanishes by the definition of \(\beta_{4}\), so the remaining \(x^{{\beta_{4} }}\) term requires
$$\frac{1}{2}\sigma^{2} R_{1} \left( {2\beta_{4} - 1} \right) + \alpha R_{1} + \lambda_{f} Y = 0.$$
We obtain
$$R_{1} = - \frac{{\lambda_{f} Y}}{{\alpha + \frac{1}{2}\sigma^{2} \left( {2\beta_{4} - 1} \right)}}.$$
For the \(x^{{\beta_{4} }}\) term, we have
$$R_{0} x^{{\beta_{4} }} \left[ {\frac{1}{2}\sigma^{2} \beta_{4} \left( {\beta_{4} - 1} \right) + \alpha \beta_{4} - \left( {\rho + \lambda_{f} } \right)} \right] = 0.$$
Solving for the continuation value function \(S_{n + 1}^{c} \left( x \right)\).
Let \(n = N - \left( {j + 1} \right)\), for \(j = 2,3,...,N - 1\), where \(N\) denotes the index of the last self (i.e., \(N = L\) in the notation of the main text). Then \(S_{n + 1}^{c} \left( x \right) = S_{N - j}^{c} \left( x \right)\). We assume the general solution of \(S_{N - j}^{c} \left( x \right)\) is given in Eq. (57).
$$S_{N - j}^{c} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 1} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 2} {W_{N - j,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
Further, it can be concluded that
$$S_{N - j + 1}^{c} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 2} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 3} {W_{{N - \left( {j - 1} \right),i}} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
We substitute the general solutions of \(S_{N - j}^{c} \left( x \right)\) and \(S_{N - j + 1}^{c} \left( x \right)\), and the first and second derivatives of \(S_{N - j}^{c} \left( x \right)\) into the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{N - j}^{c\prime \prime } \left( x \right) + \alpha xS_{N - j}^{c\prime } \left( x \right) - \rho S_{N - j}^{c} \left( x \right) + \lambda_{f} \left[ {S_{N - j + 1}^{c} \left( x \right) - S_{N - j}^{c} \left( x \right)} \right] = 0.$$
For the \(x^{{\beta_{1} }}\) term, Eq. (59) always holds. For the \(x^{{\beta_{5} }}\) term, we have
$$\frac{1}{2}\sigma^{2} \beta_{5} \left( {\beta_{5} - 1} \right)x^{{\beta_{5} }} \varphi^{j - 1} Z + \alpha \beta_{5} x^{{\beta_{5} }} \varphi^{j - 1} Z - \rho x^{{\beta_{5} }} \varphi^{j - 1} Z + \lambda_{f} \left[ {x^{{\beta_{5} }} \varphi^{j - 2} Z - x^{{\beta_{5} }} \varphi^{j - 1} Z} \right] = 0.$$
Simplifying the above equation, we obtain
$$\varphi = \frac{{\lambda_{f} }}{{\lambda_{f} - \lambda_{pS} }}.$$
For the \(x^{{\beta_{4} }}\) term, we set the coefficients for each \(x^{{\beta_{4} }} \left( {\log x} \right)^{k}\) term to 0 and then we obtain
$$\begin{aligned} & \frac{{\sigma^{2} }}{2}\left[ {\left( {2\beta_{4} - 1} \right)\left( {k + 1} \right)W_{N - j,k + 1} + \left( {k + 2} \right)\left( {k + 1} \right)W_{N - j,k + 2} } \right] \\ & \quad + \alpha \left( {k + 1} \right)W_{N - j,k + 1} + \lambda_{f} W_{N - j + 1,k} = 0,\quad for\,\quad k = 1,2, \ldots ,j - 1. \\ \end{aligned}$$
We assume that \(\mu = \frac{{ - \beta_{4} }}{{{{\sigma^{2} \beta_{4}^{2} } \mathord{\left/ {\vphantom {{\sigma^{2} \beta_{4}^{2} } {2 + \rho + \lambda_{f} }}} \right. \kern-\nulldelimiterspace} {2 + \rho + \lambda_{f} }}}}\). The coefficient \(W_{N - j,k}\) is given by
$$W_{N - j,k} = \frac{{\lambda_{f} }}{k}\left[ {\mu \sum\limits_{n = 0}^{j - k - 2} {\left( {\frac{{\sigma^{2} \mu }}{2}} \right)^{n + 1} W_{N - j + 1,k + n} \prod_{m = 0}^{n} \left( {k + m} \right) + \mu W_{N - j + 1,k - 1} } } \right],$$
for \(k = 1,2,...,j - 2\). And \(W_{N - j,0}\) is solved using the value-matching condition:
$$\begin{aligned} W_{N - j,0} & = x_{S,N - j}^{{ - \beta_{4} }} \left[ {\delta_{f} \left( {\eta x_{S,N - j} - \theta } \right) - \delta_{p} \left( {\frac{{x_{S,N - j} }}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) - \varphi^{j - 1} Zx_{S,N - j}^{{\beta_{5} }} } \right] \\ & \quad - \sum\limits_{i = 1}^{j - 2} {W_{N - j,i} \left( {\log x_{S,N - j} } \right)^{i} } . \\ \end{aligned}$$
Solving for the value function \(S_{n + 1} \left( x \right)\)
Let \(n = N - \left( {j + 1} \right)\), for \(j = 2,3,...,N - 1\). Then \(S_{n + 1} \left( x \right) = S_{N - j} \left( x \right)\). We assume the general solution of \(S_{N - j} \left( x \right)\) is given in Eq. (65).
$$S_{N - j} \left( x \right) = \delta_{p} \left( {\frac{x}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) + \varphi^{j - 1} Zx^{{\beta_{5} }} + \sum\limits_{i = 0}^{j - 2} {U_{N - j,i} \left( {\log x} \right)^{i} x^{{\beta_{4} }} } .$$
We substitute the general solutions of \(S_{N - j} \left( x \right)\) and \(S_{N - j + 1} \left( x \right)\), and the first and second derivatives of \(S_{N - j} \left( x \right)\) into the following differential equation:
$$\frac{1}{2}\sigma^{2} x^{2} S_{N - j}^{\prime \prime } \left( x \right) + \alpha xS_{N - j}^{\prime } \left( x \right) - \rho S_{N - j} \left( x \right) + \lambda_{f} \left[ {S_{N - j + 1}^{c} \left( x \right) - S_{N - j} \left( x \right)} \right] = 0.$$
For the \(x^{{\beta_{1} }}\) and \(x^{{\beta_{5} }}\) terms, we reach the same conclusions as in "Appendix 2". For the \(x^{{\beta_{4} }}\) term, we set the coefficient of each \(x^{{\beta_{4} }} \left( {\log x} \right)^{k}\) term to 0 and obtain
$$\begin{aligned} & \frac{{\sigma^{2} }}{2}\left[ {\left( {2\beta_{4} - 1} \right)\left( {k + 1} \right)U_{N - j,k + 1} + \left( {k + 2} \right)\left( {k + 1} \right)U_{N - j,k + 2} } \right] \\ & \quad + \alpha \left( {k + 1} \right)U_{N - j,k + 1} + \lambda_{f} W_{N - j + 1,k} = 0,\quad {\text{for}}\,\quad k = 1,2, \ldots ,j - 2. \\ \end{aligned}$$
The coefficient \(U_{N - j,k}\) is given by
$$U_{N - j,k} = W_{N - j,k} ,$$
for \(k = 1,2, \ldots ,j - 1\). And \(U_{N - j,0}\) is solved using the value-matching condition:
$$\begin{aligned} U_{N - j,0} & = x_{S,N - j}^{{ - \beta_{4} }} \left[ {\eta x_{S,N - j} - \theta - \delta_{p} \left( {\frac{{x_{S,N - j} }}{{x^{*} }}} \right)^{{\beta_{1} }} \left( {\eta x^{*} - \theta } \right) - \varphi^{j - 1} Zx_{S,N - j}^{{\beta_{5} }} } \right] \\ & \quad - \sum\limits_{i = 1}^{j - 2} {U_{N - j,i} \left( {\log x_{S,N - j} } \right)^{i} } . \\ \end{aligned}$$
Li, Y., Guo, Je., Sun, S. et al. How time-inconsistent preferences influence venture capital exit decisions? A new perspective for grandstanding. Financ Innov 8, 1 (2022). https://doi.org/10.1186/s40854-021-00305-6
Time-inconsistent preferences
Exit decisions | CommonCrawl |
Applying tuberculosis management time to measure the tuberculosis infectious pool at a local level in Ethiopia
Senedu Bekele Gebreegziabher1,2,
Gunnar Aksel Bjune2 &
Solomon Abebe Yimer1,2,3,4
Measuring the size of the infectious pool of tuberculosis (TB) is essential to understand the burden and monitor trends of TB control program performance. This study applied the concept of TB management time to estimate and compare the size of the TB infectious pool between 2009 and 2014 in West Gojjam Zone of Amhara Region, Ethiopia.
New sputum smear-positive and smear-negative pulmonary TB (PTB) and retreatment cases who attended 30 randomly selected public health facilities in West Gojjam Zone from October 2013 to October 2014 were consecutively enrolled in the study. In order to determine the infectious period, the TB management time (number of days from the onset of cough until start of anti-TB treatment) was computed for each patient category. The number of undiagnosed TB cases was estimated and hence the TB management time for the undiagnosed category was calculated. The total size of the TB infectious pool during the study period for the study zone was estimated as the annual number of infectious person days.
New smear-positive and smear-negative PTB cases contributed 25,050 and 12,931 infectious person days per year to the TB infectious pool, respectively. The retreatment and presently undiagnosed cases contributed 8840 and 34,310 infectious person days per year, respectively. The total size of the TB infectious pool in West Gojjam Zone during the study period was estimated at 81,131 infectious person days per year or 3405 infectious person days per 100,000 population per year. Compared to a similar study done in 2009 in the study area, the current study showed a reduction of the TB infectious pool by 244,279 infectious person days.
TB management time is a simple and practical tool that may help to estimate and compare the changes in the size of the TB infectious pool at local level. It may also be used as an indicator to monitor the changes in TB control program performance.
Please see Additional file 1 for translations of the abstract into the five official working languages of the United Nations.
Although important progress in tuberculosis (TB) control has been made with great global commitment, TB remains a major global health problem [1]. According to the World Health Organization (WHO) global report, there were an estimated 10.4 million new TB cases and 1.8 million deaths from TB in 2015 [1]. An estimated one-third of the world's population is infected with TB [2], serving as a reservoir that continuously contributes to the TB infectious pool.
Measuring the size of the infectious pool of TB is essential to understand the burden and monitor trends of TB control program performance [3]. Currently, WHO uses case notification data, national TB prevalence surveys and data audits to estimate the TB burden [1]. These various sources have yielded considerable information to estimate the TB burden. However, in resource-poor countries, where problems with data quality such as incomplete records and untimely reports are frequently documented [4], data from the routine surveillance system may not provide accurate information on the real burden of TB. In addition, national TB prevalence surveys, which are recommended in high-TB-burden countries where many cases and deaths are missed by routine reporting, are relatively costly and laborious [5, 6]. Currently, there is no simple, inexpensive and practical method that can be applied to measure the TB infectious pool at the local level.
A recent study proposed the concept of TB management time as an alternative parameter to estimate the TB infectious pool at the local level [3]. TB management time is basically applied by defining the infectious period contributed by different TB patient categories in a given year. The current study was conducted to achieve two objectives. The first aim was to compare the change in the size of the infectious pool between 2009 and 2014. The concept of TB management time was first introduced in 2009 to estimate the infectious pool of TB in West Gojjam Zone of Amhara Region. Therefore, using the same tool, we wanted to compare the change in the dynamics of the TB infectious pool between 2009 and 2014. The second aim was to address some limitations in the application of TB management time in the previous study, so that the tool can be applied in a wider perspective. In the former study, the TB management times for smear-negative and retreatment PTB cases were calculated based on evidence obtained from other studies, mostly from low-TB-burden countries. Therefore, by addressing the limitations of the former study, this study applied TB management time to estimate and compare the size of the TB infectious pool between 2009 and 2014 in the study area.
Study setting
This study was conducted in West Gojjam Zone of Amhara Region, Ethiopia. West Gojjam Zone is one of the ten zones of the Amhara Region. The total population is estimated at 2,382,497 [7]. A total of 30 public health facilities providing TB diagnostic and treatment services were included in the study. Simple random sampling method was used to select study sites. First, we obtained list of all public health facilities providing TB diagnostic and treatment services in West Gojjam Zone. Accordingly, 73 health centers and one hospital were providing TB diagnostic and treatment services during the study period. Of these, 29 health centers were randomly selected. We also added one hospital which is the only available hospital in the study zone. This makes a total of 30 study sites.
Seventy-six private health facilities (hospitals and higher clinics) were providing health services to the population in the study zone. Of these, six private health institutions had TB diagnostic and treatment facilities during the study period. However, the private health facilities were not included in this study.
Operational definition of variables
The national guideline for clinical and programmatic management of TB, which is adapted from the WHO TB treatment guidelines was followed to diagnose, classify and define TB cases [8].
A new case of TB is defined as a patient who has never had treatment for TB or who has taken anti-TB drugs for less than one month.
Smear-positive PTB: a patient with at least two initial sputum smear examinations positive for acid-fast bacilli (AFB) by direct microscopy, or one initial smear examination positive for AFB by direct microscopy and culture positive, or one initial smear examination positive for AFB by direct microscope and radiographic abnormalities consistent with active TB.
Smear-negative PTB: a patient with symptoms suggestive of TB with at least three AFB negative sputum smear examinations, radiographic abnormalities consistent with active PTB, no response to a course of broad spectrum antibiotics and a decision by a clinician to treat with a full course of anti-TB chemotherapy.
Retreatment cases include three sub-categories: treatment failure, relapse and default cases. Treatment after failure is a patient who was started on retreatment after the previous treatment had failed. A default case is defined as a patient who was previously treated for TB and came back for treatment having previously defaulted. A relapse case is a patient who was previously declared cured or treatment completed and is currently diagnosed with bacteriologically positive (sputum smear or culture).
TB management time is defined as the time interval from onset of cough until first start of anti-TB treatment.
Study design, population and data collection
This was a health facility based cross-sectional study conducted in 30 public health facilities in West Gojjam Zone from October 2013 to October 2014. All newly diagnosed smear-positive and smear-negative PTB and retreatment cases ≥15 years of age who attended the study sites during the study period were consecutively interviewed at the time of treatment initiation.
Socio-demographic characteristics of patients, symptoms suggestive of TB, the date when cough started, the date of the first visit to a health care provider, and the date of the first start of anti-TB treatment were collected using a semi-structured questionnaire. The questionnaire was pretested at a health facility to assess its clarity, consistency and completeness prior to the actual data collection. Trained health officers and nurses at each study site collected the data. To assure data quality, frequent supervision was carried out by the principal investigator and other supervisors throughout the data collection period. Extrapulmonary TB cases were not included in this study.
Data were entered, cleaned and analyzed using IBM Statistical Package for the Social Sciences (SPSS) Version 22 (SPSS Inc. Chicago, IL, USA). Descriptive statistics such as proportions and medians with interquartile ranges (IQRs) were computed. The median TB management time for each PTB patient category (new smear-positive, new smear-negative, retreatment and undiagnosed /not yet detected cases) were computed. Figure 1 shows components of TB management time.
The components of TB management time. Note: TB management time is defined as the time interval from onset of cough until first start of anti-TB treatment. The figure describes different types of delays within the interval
The median TB management time for smear-positive TB patients was calculated based on data collected from new smear-positive PTB cases who attended the study sites during the study period. Previous studies showed that 30% to 58% of smear-positive PTB cases remained infectious two weeks after treatment initiation [9, 10]. The median time required for sputum culture conversion after commencement of anti-TB treatment varied from 23 to 39 days for smear-positive cases [11,12,13]. In order to estimate the total infectious period (the period from the onset of cough until the patient is estimated to become non-infectious following initiation of treatment), an average infectious period of 30 days after the start of treatment was thus added to the median TB management time of smear-positive cases.
The median TB management time for smear-negative PTB cases was also computed using data obtained from the cross-sectional study conducted in the study area. However, the challenge here was defining the number of smear-negative TB cases that were culture positive. We used data from former local studies to estimate the proportion of culture positive cases from smear-negative cases. The national TB prevalence survey in Ethiopia indicated that 57% of smear-negative cases were culture positive. Another study in Ethiopia revealed that 47% smear-negative cases were culture positive [14, 15]. Based on these facts, we used an average of these two studies (52%) to estimate the number of smear-negative culture positive cases in our study. Finally, by multiplying the calculated number of smear-negative culture positive cases with the calculated median TB management time, we estimated the total infectious period for smear-negative TB patients.
The infectious period for retreatment cases was calculated for each retreatment subcategory of patients (relapse, treatment failure and treatment after default). Firstly, the median TB management time for relapse, treatment failure and treatment after default cases were computed using data obtained from the cross-sectional study conducted in the study area. For treatment failure cases, the median time period from start of treatment until failed treatment was also computed. Likewise, for treatment after default cases, the median time period from start of treatment until default, and from treatment default until returns to the health facilities and start of retreatment regimen were computed from the data collected in this study.
A recently conducted study in the Amhara Region (the study region) showed that the proportions of multidrug-resistant TB (MDR-TB) among relapse, treatment failure and default cases were 15.9%, 21.7% and 24.1%, respectively [16]. Previous studies indicated that the median time required for sputum culture conversion for MDR-TB cases varied from 36 days to 90 days [17,18,19]. However, as the proportion of estimated MDR-TB cases in the retreatment category reported from the Amhara Region was not high, i.e., within the range of 15.9%–24.1% [16], we added an average infectious period of 30 days after the start of retreatment to the calculated TB management time of each patient in the retreatment category.
The undiagnosed TB cases were estimated based on evidence from the population-based national TB prevalence survey in Ethiopia [14]. Accordingly, it was estimated that 28% of the smear-positive PTB cases were undiagnosed in the study area. A previous systematic review indicated that undiagnosed PTB cases remain infectious for an average of 3 years [20]. We considered an infectious period of 365 days for the undiagnosed TB cases since our aim was to define the infectious pool for one year.
The total infectious period for new smear-positive PTB, smear-negative PTB and retreatment cases was calculated by multiplying the total number of cases in each patient category during the study period by the infectious period calculated for that category. Likewise, the total infectious period for the undiagnosed cases was calculated by multiplying the total number of estimated undiagnosed cases by the estimated infectious period for this patient category. Finally, the infectious days contributed by each PTB patient category (smear-positive, smear-negative, retreatment and undiagnosed TB cases) were summed to estimate the total infectious days for the study year. The infectious pool was measured in terms of infectious person days per 100,000 population. The total size of the infectious pool of the study zone during the study period was calculated by adding the total number of infectious person days contributed by each TB patient category.
The following assumption and equation, which were applied in the former study, were used to calculate the total size of the TB infectious pool in the current study.
Assumption: Let the median infectious period and the total number of cases (seen during one year) for smear-positives, smear-negatives, relapse cases, treatment failures, treatment after default cases, and undiagnosed cases be A1 and N1, A2 and N2, A3 and N3, A4 and N4, A5 and N5, and A6 and N6, respectively. Hence, the estimated total infectious pool can be calculated using the following eq. [3].
$$ \mathrm{Total\ infectious\ pool}={A}_1{N}_1+\dots +{A}_6{N}_6=\sum \limits_{i=1}^{6}{A}_i{N}_i. $$
A total of 334 new sputum smear-positive TB cases were included in the study. The median TB management time for the smear-positive category was 45 days (interquartile range, 23–128 days). By adding an average infectious period of 30 days after commencement of anti-TB treatment, each new smear-positive case was found to have contributed an estimated infectious period of 75 days to the TB infectious pool. A total of 334 new smear-positive TB cases contributed 25,050 infectious person days during the study period (Table 1). In the 2009 study conducted in the same study area, a total of 1250 new smear-positive TB cases contributed 128,750 infectious person days in one year [3].
Table 1 Estimated infectious pool of TB in West Gojjam Zone of Amhara Region, Ethiopia from October 2013 to October 2014
There were 372 new smear-negative TB cases. Among these, 193 (52%) were estimated to be culture positive cases. The median TB management time for new smear-negative patients was 67 days (interquartile range, 25–152 days). A total of 193 smear-negative cases contributed 12,931 infectious person days. The median TB management time estimated for new smear-negative PTB cases was higher than that for new smear-positive PTB cases (Fig. 2). In 2009, 1998 new smear-negative PTB cases identified in the current study area contributed 39,960 infectious person days to the infectious pool of TB in one year [3].
Fig. 2 Box plots of the median TB management time. Note: Box plots showing the median TB management time from onset of cough until start of TB treatment for new smear-positive and new smear-negative pulmonary TB cases
It was estimated that 94 (28%) of the new sputum smear-positive TB cases were undiagnosed in the study zone. The estimated infectious period for this category was 365 days; thus, the 94 undiagnosed TB cases contributed 34,310 infectious person days to the TB infectious pool. In 2009, the estimated 416 undiagnosed cases contributed 151,840 infectious person days to the TB infectious pool in the study area [3].
The median TB management time for relapse cases was 51 days (interquartile range 24–97 days). By adding an infectious period of 30 days after start of retreatment, one relapse case contributed 81 infectious person days. A total of 54 relapse cases identified in the study thus contributed 4374 infectious person days. In 2009, 45 relapse cases contributed 2700 infectious person days to the TB infectious pool in the study area [3].
The median TB management time for treatment failure cases was 98 days (interquartile range 47–235 days). The median time from start of treatment until failed treatment was 165 days. In addition, an infectious period of 30 days after the commencement of retreatment was added. Accordingly, the infectious period of one treatment failure case was estimated at 293 days (98 + 165 + 30 days). A total of ten treatment failure cases enrolled in our study thus contributed 2930 infectious person days to the TB infectious pool, whereas in the 2009 study a total of nine treatment failure cases contributed 1620 infectious person days over a one-year period [3].

The median TB management time for treatment after default cases was 43 days (interquartile range 25–105 days). The median time from start of treatment until default was 49 days, while the median time from treatment default until return to the health facilities and start of the retreatment regimen was 70 days. By adding an average infectious period of 30 days after the start of retreatment, the contribution of one treatment after default case was estimated at 192 infectious person days (43 + 49 + 70 + 30 days). A total of eight treatment after default cases enrolled in this study contributed 1536 infectious person days to the TB infectious pool. Six treatment after default cases contributed 540 infectious person days to the TB infectious pool of the study area in 2009 [3].
The total estimated infectious pool of TB from October 2013 to October 2014 for West Gojjam Zone of Amhara Region was 81,131 person days, or 3405 infectious person days per 100,000 population. In contrast, the total infectious pool of TB in the study area was estimated at 325,410 person days, or 15,447 person days per 100,000 population, in the 2009 study [3].
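For illustration, the aggregation defined by the equation in the Methods section can be reproduced from the category figures reported above. The short Python sketch below is not part of the original analysis (which was done in SPSS), and the population figure used for the rate is a hypothetical placeholder rather than a value taken from this article.

```python
# (median infectious period in days, number of cases) per patient category
categories = {
    "new smear-positive":           (75, 334),
    "smear-negative culture-pos.":  (67, 193),
    "undiagnosed":                  (365, 94),
    "relapse":                      (81, 54),
    "treatment failure":            (293, 10),
    "treatment after default":      (192, 8),
}

total_person_days = sum(a * n for a, n in categories.values())
print(total_person_days)  # 81131 infectious person days

population = 2_400_000  # hypothetical zonal population, for illustration only
print(round(total_person_days / population * 100_000))  # person days per 100,000
```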
Measuring the size of the infectious pool of TB is essential to understand the burden and monitor trends of TB control program performance [3]. In this study, we estimated the TB infectious pool using TB management time as a simple tool. The estimated number of infectious person days contributed to the TB infectious pool by new smear-positive patients in the current study is lower than that reported in the former study [3]. The main reason for this difference may be the marked change in the median TB management time between the current study (45 days) and the former study (73 days). This may indicate improvement in health-seeking behavior among patients and in the diagnostic capacity of health facilities. Secondly, the number of smear-positive PTB cases treated in one year in the previous study was larger than in the current study and thus contributed more infectious person days.
It has been shown in low TB burden countries that smear-negative patients are capable of transmitting the disease [21,22,23,24], and new infections originating from them contribute significantly to the burden of TB transmission [25]. The contribution of smear-negative cases to the TB infectious pool in the former study was estimated by applying evidence from developed countries [3, 26]; accordingly, smear-negative cases were assumed to contribute 20% of the smear-positive cases. In the current study, however, the contribution of smear-negative culture positive cases was estimated based on evidence from Ethiopia [14, 15], and the result indicates that smear-negative culture positive cases contributed 15.9% of the infectious person days in the infectious pool.
The median TB management time estimated for new smear-negative PTB cases is high compared to that of new smear-positive PTB cases. This is due to the fact that the majority of health centers in the study area were relying on smear microscopy to diagnose PTB during the study period. Smear microscopy has very low sensitivity [27], and many patients can receive false negative results. According to the national TB diagnostic and treatment guideline of Ethiopia [8], the diagnostic process for smear-negative TB patients may take between 15 and 30 days before anti-TB treatment is initiated. This long duration before diagnosis and start of treatment for smear-negative cases may therefore increase the median TB management time.
The undiagnosed TB cases contributed the largest number of infectious person days to the infectious pool. Undiagnosed cases remain infectious throughout the year [20] and serve as a continuous source of new infections. Nevertheless, the estimated number of infectious person days for this category of patients in the current study is relatively lower compared to the 2009 study [3]. This may be related to improved TB diagnostic and treatment facilities in the study area [28]. As health-seeking behaviour and diagnostic facilities improve, undiagnosed cases are more likely to be detected. The geographical DOTS coverage in Ethiopia and the study region is 100% [14, 29], indicating that most TB patients have access to TB diagnosis and treatment services. This may reduce diagnostic delay and the backlog of undiagnosed TB cases. However, the result of the current study should be interpreted cautiously, given that smear microscopy was the basic diagnostic tool used to diagnose TB during the study period.
The retreatment category accounted for 11% of the total size of the TB infectious pool. This is higher than the 1.5% reported in the 2009 study. In the current study, the median TB management times for the subcategories of the retreatment group were calculated from the data obtained in this study, which may be considered a relatively acceptable estimate. In contrast, the infectious periods for the retreatment subcategories in the 2009 study were computed based on evidence obtained from other studies. In addition, the average infectious period after start of treatment for retreatment cases was updated in the current study.
Estimating the infectious pool of TB requires defining the infectious period for each TB patient category. The infectious period begins at the onset of symptoms, particularly cough [30]. Cough is the cardinal symptom of PTB [31]. Ninety-eight percent of new and all retreatment PTB cases in our study reported cough, and most were able to report the time when their cough first started. We used the date of first onset of cough and the date of first treatment initiation as the key parameters needed to define the TB management time. The tool (TB management time) can be applied and evaluated at the local level: local TB control program managers can use it to analyze changes in the TB infectious pool and monitor the performance of the TB control program.
The size of the infectious pool of TB estimated using TB management time in West Gojjam Zone from October 2013 to October 2014 is lower than that estimated in 2009 [3]. This may be related to a number of factors that reduce delays in TB diagnosis and treatment. Improved access to TB diagnostic and treatment services [28], and the increasing involvement of health extension workers (community health workers) in early identification and referral of TB suspects to the nearest health facilities where AFB smear microscopy is available, are the most likely reasons. Furthermore, the number of patients and the infectious period for each patient category play a pivotal role in determining the size of the TB infectious pool. As described earlier, the numbers of cases in each patient category in the current study were lower than in the former study [3].
Limitations of the study
This study has potential limitations that should be considered for improved application of the tool. The study was carried out only in government health facilities; private health facilities were not included. Therefore, one may argue that the number of patients seen in private health facilities could affect the estimation of the size of the infectious pool. However, as the number of private health facilities involved in TB diagnostic and treatment services in the study zone is very low, this is unlikely to have a significant effect on the estimate of the infectious pool. The number of undiagnosed PTB cases during the study period was estimated based on the national TB prevalence survey result for Ethiopia, which may vary across the different regions of the country. This may have resulted in under- or overestimation of undiagnosed cases in this study. However, the undiagnosed cases will be identified sooner or later as access to diagnostic and treatment facilities and the health-seeking behavior of patients improve. We also believe that the undiagnosed cases are not a decisive category for the application of the tool.
In addition, some patients may not accurately remember the exact date of onset of their symptoms, making the data subject to recall bias. However, a local calendar listing the main religious and national days was used to help patients remember the date of onset of their symptoms.
One of our objectives in conducting the current study was to address some limitations related to using TB management time. One may question the validity of comparing the 2009 infectious pool size with that of 2014 when some of the proportions used in the parameter were adjusted for the current study. The limitations concerned defining the infectious periods for smear-negative, retreatment and undiagnosed TB cases. In the 2009 study, the infectious period for smear-negatives was calculated by assuming that smear-negative TB cases contributed 20% of the smear-positive cases, and this assumption was used to estimate the infectious person days contributed by the smear-negative category. In the current study, in contrast, we computed the median TB management time for smear-negative cases from the data collected in this study and estimated the proportion of smear-negative culture positive cases based on local evidence [14, 15]. In the previous study, the smear-negative category accounted for 12.3% of the infectious pool, while in the current study it accounted for 15.9%, a difference of 3.6% between the two studies. The undiagnosed TB case proportion used in the 2009 study was estimated at 33%, whereas we used 28% in the current study, a 5% difference. As described earlier, the retreatment category accounted for 1.5% of the infectious pool in the previous study and 11% in the current study. The size of the infectious pool in 2014 declined compared with 2009. Moreover, if we had applied the same proportions for smear-negative, retreatment and undiagnosed TB cases used in 2009 to the 2014 data, the total size of the infectious pool would still have been much lower than in 2009. This indicates that the adjustments we applied for 2014 do not make a big difference in the total size of the infectious pool between 2009 and 2014. Overall, this demonstrates that the tool can be used to monitor the size of the TB infectious pool in different time periods.
The total infectious pool of TB estimated using TB management time from October 2013 to October 2014 in West Gojjam Zone is lower than that estimated in 2009. The undiagnosed TB patient category, followed by the smear-positive patient group, contributed the largest number of infectious person days to the infectious pool in the study zone.
A simple and inexpensive tool is essential to estimate the infectious pool of TB and monitor program performance at local level. Systematic recording of TB management time in the unit TB registry book may help to estimate the infectious pool of TB and monitor trends of TB control program performance at the local level. Additional validation studies including both public and private health facilities need to be conducted before full-scale implementation of the parameter. In addition, further research is needed to validate the contribution of pulmonary smear-negative and retreatment cases to the infectious pool of TB. Moreover, a study that explores the feasibility of implementing the parameter at local level is warranted.
AFB:
acid-fast bacilli
DOTS:
directly observed treatment short-course
IQR:
interquartile range
MDR-TB:
multi-drug resistant TB
PTB:
pulmonary tuberculosis
SPSS:
Statistical Package for the Social Sciences
TB:
tuberculosis
World Health Organization. Global Tuberculosis Report. Geneva, Switzerland: WHO; 2016. WHO/HTM/TB/2016.13.
World Health Organization Tuberculosis fact sheet. WHO; 2016. Available from http://www.who.int/mediacentre/factsheets/fs104/en.
Yimer SA, Holm-Hansen C, Storla DG, Bjune GA. Tuberculosis management time: an alternative parameter for measuring the tuberculosis infectious pool. Tropical Med Int Health. 2014;19(3):313–20.
Mphatswe W, Mate KS, Bennett B, Ngidi H, Reddy J, Barker PM, et al. Improving public health information: a data quality intervention in KwaZulu-Natal, South Africa. Bull World Health Organ. 2012;90(3):176–82.
Cowling K, Dandona R, Dandona L. Improving the estimation of the tuberculosis burden in India. Bull World Health Organ. 2014;92(11):817–25.
Dye C, Bassili A, Bierrenbach AL, Broekmans JF, Chadha VK, Glaziou P, et al. Measuring tuberculosis burden, trends and the impact of control programmes. Available: http://apps.who.int/tb/advisory_bodies/impact_measurement_taskforce/meetings/lid_measuring_tb_burden_supportingmaterial.pdf.
Central Statistics Authority of Ethiopia. Summary and statistical report of the 2007 Population and Housing Census. Addis Ababa: Central Statistics Authority of Ethiopia; 2008.
Federal Ministry of Health Ethiopia. Guidelines for clinical and programmatic management of TB, leprosy and TB/HIV in Ethiopia. 5th ed. Addis Ababa, Ethiopia: Ministry of Health of Ethiopia; 2012.
Senkoro M, Mfinanga SG, Mørkve O. Smear microscopy and culture conversion rates among smear positive pulmonary tuberculosis patients by HIV status in Dar es Salaam, Tanzania. BMC Infect Dis. 2010;10:210.
Bouti K, Aharmim M, Marc K, Soualhi M, Zahraoui R, Benamor J, et al. Factors influencing sputum conversion among smear-positive pulmonary tuberculosis patients in Morocco. ISRN Pulmonology. 2013;2013:486507.
Telzak EE, Fazal BA, Pollard CL, Turett GS, Justman JE, Blum S. Factors influencing time to sputum conversion among patients with smear-positive pulmonary tuberculosis. Clin Infect Dis. 1997;25(3):666–70.
Parikh R, Nataraj G, Kanade S, Khatri V, Mehta P. Time to sputum conversion in smear-positive pulmonary TB patients on category I DOTS and factors delaying it. J Assoc Physicians India. 2012;60:22–6.
Kanda R, Nagao T, Tho NV, Ogawa E, Murakami Y, Osawa M, et al. Factors affecting time to sputum culture conversion in adults with pulmonary tuberculosis: a historical cohort study without censored cases. PLoS One. 2015;10(11):e0142607.
Federal Ministry of Health of Ethiopia (FMOH). First Ethiopian national population based tuberculosis prevalence survey. Addis Ababa: FMOH; 2011.
Keflie TS, Ameni G. Microscopic examination and smear negative pulmonary tuberculosis in Ethiopia. Pan Afr Med J. 2014;19:162.
Nigus DM, Lingerew WM, Beyene BA, Tamiru AA, Lemma MT, Melaku MY. Prevalence of multi drug resistant tuberculosis among presumptive multi drug resistant tuberculosis cases in Amhara National Regional State, Ethiopia. J Mycobac Dis. 2014;4:3.
Ndusilo ND, Heysell SK, Mpagama SG, Gratz J, Segesela FH, Pazia SJ, et al. Improvement in plasma drug activity during the early treatment interval among Tanzanian patients with multidrug-resistant tuberculosis. PLoS One. 2015;10(3):e0122769.
Tierney DB, Franke MF, Becerra MC, Alcántara Virú FA, Bonilla CA, et al. Time to culture conversion and regimen composition in multidrug-resistant tuberculosis treatment. PLoS One. 2014;9(9):e108035.
Kurbatova EV, Gammino VM, Bayona J, Becerra MC, Danilovitz M, Falzon D, et al. Predictors of sputum culture conversion among patients treated for multidrug-resistant tuberculosis. Int J Tuberc Lung Dis. 2012;16(10):1335–43.
Tiemersma EW, van der Werf MJ, Borgdorff MW, Williams BG, Nagelkerke NJ. Natural history of tuberculosis: duration and fatality of untreated pulmonary tuberculosis in HIV negative patients: a systematic review. PLoS One. 2011;6(4):e17601.
Behr MA, Warren SA, Salamon H, Hopewell PC, Ponce de Leon A, Daley CL, et al. Transmission of mycobacterium tuberculosis from patients smear negative for acid-fast bacilli. Lancet. 1999;353:444–9.
Hernández-Garduño E, Cook V, Kunimoto D, Elwood RK, Black WA, FitzGerald JM. Transmission of tuberculosis from smear negative patients: a molecular epidemiology study. Thorax. 2004;59:286–90.
Tostmann A, Kik SV, Kalisvaart NA, Sebek MM, Verver S, Boeree MJ, et al. Tuberculosis transmission by patients with smear- negative pulmonary tuberculosis in a large cohort in the Netherlands. Clin Infect Dis. 2008;47(9):1135–42.
Lawn SD, Edwards DJ, Wood R. Tuberculosis transmission from patients with smear- negative pulmonary tuberculosis in sub-Saharan Africa. Clin Infect Dis. 2009;48(4):496–7.
Fitzwater SP, Caviedes L, Gilman RH, Coronel J, Chira DL, Salazar C. Prolonged infectiousness of tuberculosis patients in a directly observed therapy short-course program with standardized therapy. Clin Infect Dis. 2010;51(4):371–8.
Storla DG, Yimer S, Bjune GA. Can treatment delay be utilized as a key variable for monitoring the pool of infectious tuberculosis in a population? J Infect Dev Ctries. 2010;4(2):083–90.
Parsons LM, Somoskövi A, Gutierrez C, Lee E, Paramasivan CN, Abimiku A, et al. Laboratory diagnosis of tuberculosis in resource-poor countries: challenges and opportunities. Clin Microbiol Rev. 2011;24:314–50.
West Gojjam Zone Health Department. Annual health service report. Finoteselam: Ethiopia; 2013.
Amhara Regional State Health Bureau: Annual health service report. Bahir-Dar, Ethiopia; 2009.
Department of Health and Human Services Centers for Disease Control and Prevention Division of Tuberculosis Elimination. Protecting people. Effective TB Interviewing for Contact Investigation: Self-Study Modules. Atlanta, Georgia. Available: https://www.cdc.gov/tb/publications/guidestoolkits/interviewing/tbinterviewing_ssmodules.pdf.
Buregyeya E, Criel B, Nuwaha F, Colebunders R. Delays in diagnosis and treatment of pulmonary tuberculosis in Wakiso and Mukono districts, Uganda. BMC Public Health. 2014;14:586.
We would like to thank the Amhara Regional State Health Bureau, the West Gojjam Zone Health Department, the Districts and Town Administrations Health Offices for their support. We wish to thank all health professionals who participated in the data collection. Last but not least, we are very much grateful to the study subjects who consented to participate in the study.
University of Oslo funded the study. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
All data generated or analysed during this study are included in this published article and its supplementary information files. (Additional files 2 and 3).
Amhara Regional State Health Bureau, Bahir Dar, Ethiopia
Senedu Bekele Gebreegziabher & Solomon Abebe Yimer
Department of Community Medicine, Institute of Health and Society, University of Oslo, Oslo, Norway
Senedu Bekele Gebreegziabher, Gunnar Aksel Bjune & Solomon Abebe Yimer
Oslo University Hospital, Oslo, Norway
Solomon Abebe Yimer
Department of Bacteriology and Immunology, Division of Infectious Disease Control, Norwegian Institute of Public Health, Oslo, Norway
Senedu Bekele Gebreegziabher
Gunnar Aksel Bjune
SBG, GAB and SAY designed the study. SBG conducted the data collection. SBG, SAY and GAB performed the data analysis. SBG and SAY drafted the manuscript. SBG, SAY and GAB edited the manuscript. All authors read and approved the final manuscript.
Correspondence to Senedu Bekele Gebreegziabher.
The Regional Committee for Medical Research Ethics (REK Øst) in Oslo, Norway and the National Research Ethics Review Committee (NRERC) in Addis Ababa, Ethiopia approved this study. In addition, letters of support and permission to conduct the study in the local area were obtained from the local administrations. All participants were fully informed about the study, and written informed consent was obtained from each participant willing to take part. For participants under the age of 18 years, written consent was obtained from their parents/legal guardians. The participants were assured of the confidentiality of the data.
"Not applicable" in this section.
Multilingual abstracts in the five official working languages of the United Nations. (PDF 671 kb)
Data set used for the article. The data set consists of 706 new pulmonary TB cases included in the study and the variables used in this article. (XLS 226 kb)
Data set used for the article. The data set consists of 72 retreatment pulmonary TB cases included in the study and the variables used in this article. (XLS 53 kb)
Gebreegziabher, S.B., Bjune, G.A. & Yimer, S.A. Applying tuberculosis management time to measure the tuberculosis infectious pool at a local level in Ethiopia. Infect Dis Poverty 6, 156 (2017). https://doi.org/10.1186/s40249-017-0371-6
TB management time
Infectious pool
West Gojjam zone | CommonCrawl |
\begin{document}
\begin{abstract}
In this article an anisotropic interaction model avoiding collisions is proposed. The starting point is a general isotropic interacting particle system, as used for swarming or follower-leader dynamics. An anisotropy is induced by rotation of the force vector resulting from the interaction of two agents. In this way the anisotropy leads to a smooth evasion behaviour. In fact, the proposed model generalizes the standard models and compensates their drawback of not being able to avoid collisions. Moreover, the model allows for a formal passage to the limit 'number of particles to infinity', leading to a mesoscopic description in the mean-field sense. Possible applications are autonomous traffic, swarming or pedestrian motion. Here, we focus on the latter, as the model is validated numerically using two scenarios in pedestrian dynamics. The first one investigates the pattern formation in a channel, where two groups of pedestrians walk in opposite directions. The second experiment considers a crossing with one group walking from left to right and the other one from bottom to top. The well-known patterns of lanes in the channel and travelling waves at the crossing can be reproduced with the help of this anisotropic model at both the microscopic and the mesoscopic level. In addition, the 'right-before-left' and 'left-before-right' rules appear intrinsically for different anisotropy parameters. \end{abstract}
\maketitle
\section{Introduction} Modelling of pedestrian dynamics is a challenging problem that has gained a lot of attention in the last decades \cite{BellomoDegondTadmor,MFGames,DegondHierachy,BellomoCrowdDynamics,Helbing1,HelbingMolnar,Yanagisawa}. The applied mathematics community as well as engineers developed a variety of models that focus on different features, e.g.~collision avoidance by defining side-stepping procedures \cite{MTWsidestepping}, by minimizing appropriate cost functionals \cite{BailoCarrilloDegond,Borzi} or by computing collision times to anticipate a future collision and adjust the trajectories of the pedestrians involved accordingly \cite{BailoCarrilloDegond,Christiani,DegondVision}.
Other models are more interested in social aspects such as the influence of groups on evacuation dynamics \cite{AlbiInvisible,Helbing1}. Moreover, comparisons to real data and statistical model fitting, e.g.~in evacuation dynamics, helped in the optimization of sites and obstacles \cite{Bode2,Bode1,ChristianiPeri,GomesMTW,HelbingExperiments,HelbingPanic,HelbingEmpirical,PiccoliTosin,SiebenSchumannSeyfried}. Especially in evacuation dynamics, it is important to include the structure of the surrounding into the model. This can be done with the help of the Eikonal equation \cite{improvedHughes,MTWHuges,KlarTiwariMahato,Goatin}. In addition, the interaction of pedestrians and cars is an important field of study in the interest of security and traffic management \cite{BorscheMeurer1,BorscheMeurer2,Roundabouts}.
To the authors' knowledge, the community widely agrees that isotropic interaction models that are used for the modelling of swarms of birds, schools of fish, sheep-dog interactions or opinion dynamics \cite{AlbiPareschi,Sheep1,CBO2,CBO1,MotschTadmor} are inappropriate for a detailed and realistic description of pedestrian dynamics \cite{BailoCarrilloDegond}. This lack of detail is unfortunate since these models allow for formulations on the microscopic, mesoscopic and macroscopic scale and even the limiting procedures are well understood, see \cite{AlbiPareschi,CarrilloReview,smoothed,bodyattitutecoordination,DegondMotsch,Golse,JabinMeanfield,Vicsek} for an overview. A starting point for such a hierarchy in pedestrian dynamics is given in \cite{ChistianiPiccoli,DegondHierachy}, where the relation of kinetic and macroscopic models for pedestrian dynamics is discussed.
Here, we adapt isotropic interaction models for crowd dynamics in two dimensions in order to build an appropriate model for such dynamics including collision avoidance. The main objective of this article is to reveal the interesting link between potential-based isotropic interaction schemes and anisotropic schemes allowing for pattern formation. In fact, the introduction of only one additional parameter allows for this significant change in the dynamics.
First, the model is described on the particle level, then a formal limit to a kinetic equation is discussed. The key new ingredient is the introduction of an anisotropy in the form of a rotation applied to the force vector of the interaction. This rotation does not model a property, like a view cone, explicitly. One may think of it as the perception that adjusting the velocity helps to avoid a collision, a perception everyone undergoes unconsciously in everyday life, for example, when strolling around the city. In fact, the rotation allows us to model the idea of evasion in an elementary and smooth way. Moreover, it is straightforward to include interactions with obstacles, which can be represented at fixed positions with artificial velocities. The approach is inspired by \cite{Kreusser2,Kreusser3,Kreusser1}, where the formation of fingerprint patterns is studied with the help of an anisotropic force field. Further studies may consider the anisotropy parameter to be density- or obstacle-dependent, in the sense that the rotation becomes larger or smaller in locations where obstacles or high densities are present.
Besides the application to pedestrian dynamics, the approach is very general and can be used for many other interaction schemes as well. For example, applications to swarms of autonomous agents, as discussed in \cite{TaylorLuzziNowzari}, are very interesting and important for traffic and transport in the future.
A driving question in this research is whether the formation of lanes in a channel and travelling waves at a crossing, as found in \cite{Appert-Rolland,Ranetbauer1,MTWsidestepping,Ranetbauer2}, can be reproduced with the help of the rotation anisotropy. The answer is affirmative, as the simulation results at the particle and the mean-field level will show.
Many models for pedestrian dynamics work with view cones or other sensorial perceptions to approximate collision times. Hence, they influence the future trajectories of the pedestrians to make the simulations more realistic and to avoid collisions \cite{BailoCarrilloDegond,DegondHierachy,EversFetecauRyzhik, EversFetecauSun}. These approaches lead to anisotropies as well. Note that this kind of anisotropy has a different effect than the anisotropy discussed in the present paper. Indeed, considering view cones leads to an anisotropy in the sense that binary interactions are not symmetric. For example, one agent sees the other, and interacts with him or her, while the other one does not see the first and thus does not react.
In contrast, the anisotropy introduced in the following rotates the vectors of the interaction forces for each of the two interacting pedestrians individually. Hence, if one pedestrian interacts with another, the second will also interact with the first and the corresponding force vectors are rotated symmetrically.
An advantage of the anisotropic interaction induced by this rotation is that the dynamics can be written as a system of ordinary differential equations (ODEs). Further, the pedestrians interact with all their neighbours at the same time. In particular, there is no need to estimate times of collision and to choose a specific pedestrian to interact with first.
Moreover, a formal mean-field limit leads to a formulation on the mesoscopic scale (PDE). Numerical results on the micro and mean-field level show the influence of the anisotropy for pattern formation.
The article is organized as follows: a detailed motivation and description of pairwise interactions is given in the next section. Based on the pairwise interactions, the microscopic model for the pedestrian dynamics is derived in Section~\ref{sec:micro}. The formal derivation of the mean-field limit is given in Section~\ref{sec:scaling}. As we are interested in realistic simulations, we discuss boundary conditions and the handling of obstacles in Section~\ref{sec:boundaryObstacles}. Numerical schemes and their implementation are discussed in Section~\ref{sec:numerics}, before we discuss numerical results in Section~\ref{sec:simulations}. We conclude and give an outlook on future work in Section~\ref{sec:conclusionOutlook}.
\section{Collision avoidance by anisotropy}
We recall an isotropic interaction model to clarify the starting point of the modifications. Following Newton's Second Law, isotropic interaction models are second order ODE systems \cite{AlbiPareschi,Sheep1,smoothed,MotschTadmor,Vicsek}, given by
\begin{align*}
\frac{d}{dt} x_i &= v_i, \quad i=1,\dots, N, \\
\frac{d}{dt} v_i &= -\frac{1}{N}\sum_{j\ne i} K(x_i,x_j,v_i,v_j) ,\qquad i=1,\dots, N,
\end{align*}
where the force $K(x_i,x_j,v_i,v_j)$ can arise as gradient of a potential, for example the Morse potential \cite{Morse},
\begin{equation*}
K(x_i,x_j,v_i,v_j) = \nabla_{x_i} \left( R e^{-|x_i - x_j| / r} - A e^{-|x_i - x_j| / a} \right), \quad A,R,a,r > 0,
\end{equation*}
or an alignment force, as in the case of the Cucker-Smale model \cite{smoothed}
\begin{equation*}
K(x_i,x_j,v_i,v_j) = \frac{1}{(1 + \| x_i - x_j\|^2)^\beta} (v_j - v_i), \quad \beta \ge 0.
\end{equation*}
For reasons of well-posedness the system is supplemented with the initial data
$x(0) = x_0, \; v(0) = v_0.$
Here and in the following we use the notation $x = (x_i)_{i=1,\dots,N}$ with $x_i(t) = (x_i^1(t), x_i^2(t)) \in \mathbb{R}^{2}$ and $v= (v_i)_{i=1,\dots,N}$ with $v_i(t) = (v_i^1(t), v_i^2(t)) \in \mathbb{R}^{2}$ for all times $t$ and $i=1,\dots,N.$
These systems of ODEs are used to model behaviour of swarms of birds, flocks of sheep or fish.
As we are interested in the patterns arising in pedestrian dynamics, we first add some intelligence in the form of a desired journey to the system. In fact, we add an acceleration term $u(x,v)$ which may depend on the current position or velocity of the agents. This leads to the system of equations given by
\begin{align*}
\frac{d}{dt} x_i &= v_i, \quad i=1,\dots, N, \\
\frac{d}{dt} v_i &= u(x_i,v_i) -\frac{1}{N}\sum_{j\ne i} K(x_i,x_j,v_i,v_j) ,\qquad i=1,\dots, N,
\end{align*}
With the help of the acceleration term $u$ we can model the scenario of two particles approaching each other, as shown for $\lambda = 0$ in Figure~\ref{fig:influence of lambda}. The particles push each other away from their desired velocity, leading to undesired behaviour.
Now, we modify the model to allow for collision avoidance such that each particle can return to the desired velocity after the interaction. First, we discuss pairwise interactions before we extend the idea to the microscopic model for $N \in \mathbb{N}$ agents. From the standard attraction-repulsion-alignment schemes, see e.g.~\cite{smoothed,Vicsek,MotschTadmor}, we borrow the idea that the interaction of two agents is based on their distance or their velocities. Thinking of collision avoidance, a drawback of these models is that the force is aligned with the vector connecting the positions or velocities of the two agents, respectively. Indeed, it may happen that two approaching agents stop in front of each other and are not able to proceed with their journey. In the following we generalize the well-known ODE based systems for particle interaction in order to derive a reasonable model allowing for collision avoidance. The main ingredient is a rotation matrix which is based on the current state information of each agent, such that no anticipation of states or collisions in the future is necessary. Naturally, each agent interacts with all other pedestrians. Hence, using this anisotropic model allows for anticipation in a very elementary way. The details of the rotation used for the anisotropy are the following.
We let two agents, represented by positions $x, y\in \mathbb{R}^2$ and velocities $v, w \in \mathbb{R}^2$, adjust their direction of movement if their velocity vectors point towards each other. We recall that the angle between two vectors $v, w$ is given by $\arccos(\frac{v \cdot w}{\norm{v}\norm{w}})$. Hence, for two agents approaching each other head-on, this quantity is $\arccos(-1) = \pi.$ To be able to model the tendency to move to the left or the right, we introduce an anisotropy parameter $\lambda \in [-1, 1]$
leading to the interaction angle \begin{equation}\label{eq:alpha}
\alpha = \begin{cases} \lambda \arccos(\frac{v \cdot w}{\norm{v}\, \norm{w}}) , &\text{for } v\ne 0,w \ne 0 \\ 0, & \text{else} \end{cases}. \end{equation}
Note that $\alpha$ is well-defined, since the Cauchy-Schwarz inequality ensures $|v \cdot w| \le \norm{v}\,\norm{w}.$
The resulting angle is now introduced into the rotation matrix that is multiplied to the force vector. In more detail, we have \begin{equation}\label{eq:rotationmatrix}
\mathcal M(v,w) = \begin{pmatrix} \cos \alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix}. \end{equation} Note that for $\lambda= 0$ the rotation matrix equals the identity. Hence, the model proposed here is a generalization of standard isotropic interaction models, see e.g.~\cite{smoothed,Vicsek}.
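For illustration, consider two agents approaching each other exactly head-on, i.e.~$w = -c\,v$ for some $c>0$. Then \eqref{eq:alpha} yields $\alpha = \lambda \arccos(-1) = \lambda \pi$, and for the value $\lambda = 1/4$ used in the numerical examples below each interaction force is rotated by $\pi/4$,
\begin{equation*}
\mathcal M(v,w) = \begin{pmatrix} \cos\tfrac{\pi}{4} & -\sin\tfrac{\pi}{4} \\ \sin\tfrac{\pi}{4} & \cos\tfrac{\pi}{4} \end{pmatrix} = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix},
\end{equation*}
so that the repulsive force no longer acts purely along the line of approach but pushes each agent partly to the side.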
In the case of two agents, the equations read
\begin{align*}
\frac{d}{dt} x_1 &= v_1, &\frac{d}{dt} v_1 = u(x_1,v_1) - \frac{1}{2} \mathcal M(v_1,v_2) K(x_1,x_2,v_1,v_2), \\
\frac{d}{dt} x_2 &= v_2, &\frac{d}{dt} v_2 = u(x_2,v_2) - \frac{1}{2} \mathcal M(v_2,v_1) K(x_2,x_1,v_2,v_1).
\end{align*} In particular, approaching agents slightly turn the direction of the velocity vector, and back-stepping may only happen if the total force of approaching agents is very strong. More details on the influence of the parameter $\lambda$, in particular the natural appearance of the 'right-before-left' rule and the 'left-before-right' rule, are visualized in Figure~\ref{fig:visLambda} and Figure~\ref{fig:visLambda_Cross}. Also, for agents walking in the same direction the matrix equals the identity, leading to an isotropic interaction. As we assume small interaction ranges, this isotropic interaction takes place only when agents enter the comfort range of others. More details on this are given in the numerics section.
\begin{figure}
\caption{Visualization of the influence of $\lambda.$ For $\lambda = 0$ we are in the case of the grey vector. The interaction force reduces the force resulting from the desired velocity. The particles may stop in front of each other. For $\lambda > 0$ we are in the yellow region. The force vector resulting from the interaction is turned. Adding this rotated vector and the vector resulting from the desired velocity yields the evasive behaviour. The red particle moves to the bottom and the blue particle moves to the top. For $\lambda < 0$ the roles of the two particles change. We are then in the blue region and the red particle moves to the top and the blue particle to the bottom.}
\label{fig:visLambda}
\end{figure}
\begin{figure}
\caption{Visualization of the influence of $\lambda$ in a crossing scenario. Top: For $\lambda = 0$ we are in the case of the grey vector. The particles push each other diagonally and leave the path given by the desired velocity, see Figure~\ref{fig:influence of lambda} in the middle. For $\lambda > 0$ we are in the yellow region. The force vector resulting from the interaction is turned. Adding this rotated vector and the vector resulting from the desired velocity yields the evasive behaviour. The red particle slows down and moves to the top before accelerating and adjusting its velocity to get back onto its path, and the blue particle does not slow down as much as the red one and moves to the right. For $\lambda < 0$ the roles of the two particles change. We are then in the blue region and the red particle moves to the top and the blue particle moves to the right before being pushed back onto its path. Intrinsically, we see here the 'right-before-left' rule for $\lambda > 0$ and 'left-before-right' for $\lambda <0.$ }
\label{fig:visLambda_Cross}
\end{figure} Altogether, the force acting on two agents positioned at $x$ and $y$ with velocities $v$ and $w$, respectively, is now given by the force from the standard interaction models multiplied by the rotation matrix.
\subsection*{Choice of $\lambda$}
Clearly, the sign of $\lambda$ decides whether the agents have the tendency to move to the right ($\lambda > 0$) or to the left ($\lambda < 0$) to avoid a collision. The exact value of $\lambda$ is a modelling choice. Assuming that both agents reduce their speed w.r.t.~the desired direction when approaching a possible collision leads to $\lambda \in (-1/2, 1/2).$ In fact, for $\lambda = \pm 0.5$ only one agent slows down, while the other keeps its speed in the desired direction but moves to the side.
Assuming that the speed is changed significantly in a collision scenario motivates to choose $\lambda \in [-1/4, 1/4]$ as shown in Figure~\ref{fig:visLambda} and Figure~\ref{fig:visLambda_Cross}. In case of a crossing scenario $\lambda = \pm 1/4$ happens to imply naturally the 'right-before-left' rule or 'left-before-right' rule, respectively. This is the justification to choose $\lambda = 1/4$ for all numerical examples in the remainder. The anisotropic binary interaction is illustrated in the following example, before we state the microscopic model for arbitrary number of agents in the next section.
\begin{example}
We consider two pedestrians in a channel and at a crossing. Therefore, let $u(x,v) = u_{r/b} - v$. Here, the indices $r$ and $b$ refer to a red and a blue pedestrian, and the desired velocities $u_r$ and $u_b$ will be fixed for the whole simulation. As interaction potential we choose the Morse potential as proposed in \cite{Morse}.
The influence of the rotation is illustrated in Figure~\ref{fig:influence of lambda}. In the first plot, the blue agent moves from the right to the left with $u_b = (-1,0)^T$ and the red agent from the left to the right with $u_r = (1,0)^T.$ The parameter $\lambda$ is varied over the values $-0.25, 0, 0.25$. In Figure~\ref{fig:influence of lambda} (bottom) the crossing scenario is shown; the red pedestrian walks to the right, the blue one walks from bottom to top with $u_b = (0,1)^T.$
\begin{figure}
\caption{Illustration of isotropic and anisotropic pairwise interaction induced by different values of $\lambda$. The initial positions of the pedestrians are highlighted with a point. Top: Two pedestrians are approaching each other. Positive $\lambda$ leads to right-stepping, negative $\lambda$ results in left-stepping. The isotropic case corresponds to $\lambda = 0.$ Bottom: Two pedestrians meet at a crossroad. In the isotropic case, both depart from their desired trajectory. For $\lambda \ne 0$ the two return to their desired velocity after avoiding the collision. In the case of the crossing with $\lambda = -0.25$ the blue pedestrian slows down, while the red pedestrian accelerates to avoid the collision. Then, the pedestrians relax their velocities towards the desired velocity. For $\lambda = 0.25$ the two pedestrians change their roles.
}
\label{fig:influence of lambda}
\end{figure}
For $\lambda = -0.25$ the agents turn to the left, which is appropriate for countries with left-hand traffic. Similarly, for $\lambda = 0.25$ we have the case of right-hand traffic. For $\lambda = 0$ we are in the isotropic setting. We see that approaching agents stop in front of each other and are not able to pass. In Figure~\ref{fig:influence of lambda} (bottom) we see the scenario of two pedestrians with trajectories crossing each other. Again we observe the influence of $\lambda$. In the isotropic case the agents are not able to pass each other; in both other cases the pedestrians adjust their route and continue their way after a slight interaction. In the case of the crossing with $\lambda = -0.25$ the blue pedestrian slows down, while the red pedestrian accelerates to avoid the collision. Then, the pedestrians relax their velocities towards the desired velocity. For $\lambda = 0.25$ the two pedestrians change their roles. Note that the collision is avoided in all anisotropic cases.
\end{example}
\section{Microscopic Model}\label{sec:micro}
In this section we introduce a general microscopic model for $N \in \mathbb{N}$ agents based on the pairwise interactions discussed above.
We introduce the anisotropy by rotating the force vector of the particle system derived from Newton's Second Law. In particular, we obtain a second order ODE for each agent which reads
\begin{subequations}\label{eq:ODE}
\begin{align}
\frac{d}{dt} x_i &= v_i, \quad i=1,\dots, N, \\
\frac{d}{dt} v_i &= u(x_i,v_i) -\frac{1}{N}\sum_{j\ne i} \mathcal M(v_i,v_j)\,K(x_i,x_j,v_i,v_j) ,\qquad i=1,\dots, N, \label{eq:velocityODE}
\end{align}
with anisotropy governed by the rotation matrix
\begin{equation}
\mathcal M(v_i,v_j) = \begin{pmatrix} \cos\alpha_{ij} & -\sin \alpha_{ij} \\ \sin\alpha_{ij} & \cos\alpha_{ij} \end{pmatrix},
\end{equation}
where the rotation angle is given by
\begin{equation}
\alpha_{ij} = \begin{cases} \lambda \arccos(\frac{v_i \cdot v_j}{\norm{v_i}\, \norm{v_j}}) , &\text{for } v_i\ne 0,v_j \ne 0 \\ 0, & \text{else} \end{cases},\quad
\end{equation}
for $\lambda \in [-0.25,0.25]$ fixed. The system is supplemented with the initial data
$x(0) = x_0, \; v(0) = v_0.$
\end{subequations}
Additionally, the model contains desired velocities $u(x_i,v_i)$ of each particle. The strength of interaction is described by $K$ and $\mathcal M$ rotates the vector in which the force acts on the agent. At this point we do not specify any boundary behaviour as it will strongly depend on the geometry under consideration. In the numerical section, we shall consider a channel and a crossing with periodic and reflecting boundary conditions.
Note that each agent interacts with \textit{all} other pedestrians. This is in contrast to many other collision avoiding models, where the interaction considers only the agent $j$ that will collide with agent $i$ first. Moreover, the interaction depends on the positions and the velocities of the agents, which increases the computational cost. We overcome this issue by limiting the interaction to a finite neighbourhood, as described in the following remark and in the section on the numerical schemes.
\begin{remark}
Note that, even though every agent interacts with all other agents in this model, the interaction range is in reality quite short. The range parameter in the interaction potential accounts for this behaviour. For the numerical simulations, we use the Morse potential with short range interactions. Of course, the approach also works for models where agents interact only locally in a finite neighbourhood. Since the interaction depends on the positions and velocities of the agents, the model is computationally very costly. This is the reason why we use a finite range of interaction for the numerical simulations later on. See below for more details.
\end{remark}
\begin{example}
For the pedestrian scenarios in the numerics section, the above system can be specified as follows. We consider two groups of pedestrians, red ($r$) and blue ($b$), with $N_r$ and $N_b$ group members, respectively. We set $N:= N_r + N_b$. Each group has a desired velocity which we denote by $u_r$ for the red and $u_b$ for the blue group. We use $u(x_i,v_i) = u_b - v_i$ for the pedestrians of the blue group and $u(x_i,v_i) = u_r - v_i$ for the pedestrians of the red group. Note that in this case the desired velocity depends only on the group each pedestrian belongs to and the current velocity of the agent. Moreover, we consider forces arising from Morse potentials that depend only on the distance of the two interacting agents. This leads to $K(x_i,x_j,v_i,v_j) = K(x_i,x_j).$ Altogether, \eqref{eq:velocityODE} can be rewritten as
\begin{subequations}\label{eq:ODE2groups}
\begin{align}
\frac{d}{dt} v_i &= (u_r-v_i) -\frac{1}{N}\sum_{j\ne i} \mathcal M(v_i,v_j)\,K(x_i,x_j) ,\qquad i=1,\dots, N_r, \\
\frac{d}{dt} v_i &= (u_b-v_i) -\frac{1}{N}\sum_{j\ne i} \mathcal M(v_i,v_j)\,K(x_i,x_j) ,\qquad i=N_r+1,\dots, N.
\end{align}
\end{subequations}
\end{example}
\begin{remark}
For $N \gg 1$ there is the well-known mean-field approximation of the interacting particle system in terms of measures which describe the probability of finding particles at a given location $x$ with velocity $v$. The mean-field approximation is formally derived in the following section.
\end{remark}
\begin{remark}
We do not need a relaxation parameter in $u(x,v)$, as we can balance the terms with the help of the strength parameter of the interaction.
\end{remark}
\section{Formal derivation of mesoscopic model}\label{sec:scaling}
In many interesting applications, like swarming or evacuation dynamics, many agents are involved. This motivates seeking a mesoscopic description of the model, which we derive in the following. Based on the agents' information given by \eqref{eq:ODE} we define the empirical measure
$$f^N(t,x,v) = \frac{1}{N}\sum_{i=1}^N \delta(x_i(t) - x, v_i(t) -v).$$
Let $\varphi$ be an arbitrary test function. Using \eqref{eq:ODE} we compute
\begin{align*}
\frac{d}{dt} &\Big\langle f^N(t,x,v), \varphi(x,v) \Big\rangle = \Big\langle \partial_t f^N, \varphi \Big\rangle \\
&= \frac1{N} \left[\; \sum_{i=1}^{N} \nabla_x \varphi(x_i(t),v_i(t)) \cdot v_i \right. \\
&\left.\quad+ \nabla_v \varphi(x_i(t),v_i(t)) \cdot \Big( u(x_i,v_i) -\frac{1}{N}\sum_{j\ne i} \mathcal M(v_i,v_j)\,K(x_i,x_j,v_i,v_j) \Big) \right] \\
&= \frac1{N} \left[\; \sum_{i=1}^{N} \nabla_x \varphi(x_i(t),v_i(t)) \cdot v_i \right. \\
&\left.\quad+ \nabla_v \varphi(x_i(t),v_i(t)) \cdot \Big( u(x_i,v_i) -\int \mathcal M(v_i,\bar v)\,K(x_i,\bar x,v_i,\bar v)\, df_t^N(\bar x,\bar v) \Big) \right]\\
&= \Big\langle \nabla_x \varphi(x,v) \cdot v \\
&\quad\qquad + \nabla_v \varphi(x,v) \cdot \Big( u(x,v) -\int \mathcal M(v,\bar v)\,K(x,\bar x,v,\bar v)\, df_t^N(\bar x,\bar v)\Big) ,f_t^N \Big\rangle. \\
\end{align*}
Now, assuming the existence of a limiting measure satisfying $f = \lim\limits_{N\rightarrow\infty} f^{N} $ we formally pass to the limit $N \rightarrow\infty$ and use the variational lemma to obtain the evolution equation
\begin{subequations}\label{eq:PDE}
\begin{equation}
\partial_t f + v\cdot \nabla_x f = \nabla_v \cdot \left[\left( \int \mathcal M(v,\bar v) K(x,\bar x,v,\bar v)\, df(\bar x, \bar v) - u(x,v) \right ) f \right]
\end{equation}
supplemented with initial condition
\begin{equation}
f(0,x,v) = f_0(x,v).
\end{equation}
\end{subequations}
\begin{remark}\label{rem:meanfield}
We emphasize that the mean-field equation cannot be derived in the case where every particle has its own desired velocity, as the particles would no longer be indistinguishable. One approach to prescribe a velocity field $u(x,v)$ is to use the solution of an appropriate Eikonal equation, see for example \cite{improvedHughes,MTWHuges,KlarTiwariMahato}. An investigation of a system coupled to an Eikonal equation is planned for future work. Here, we stick to the pedestrian scenarios discussed above. The details of the PDE in this specific case are discussed in the example below.
\end{remark}
\begin{example}
Under the assumption that the crowd splits into two groups $f_r$ (red) and $f_b$ (blue) of $N_r$ and $N_b$ agents, respectively, we prescribe different desired velocities $u_r$ and $u_b.$ Without loss of generality we arrange the agents of the red group at the first $N_r$ positions of $x$ and $v$ and the blue group at the indices $N_r +1$ to $N.$ This leads to \begin{align*}
u(x_i,v_i) &= u_r - v_i,\quad &&i=1,\dots,N_r, \\
u(x_i,v_i) &= u_b -v_i,\quad &&i = N_r+1,\dots, N.
\end{align*}
Further, it holds
\begin{align*}
f^N(t,x,v) &= f^N_r(t,x,v) + f^N_b(t,x,v) \\&= \frac{1}{N} \left( \sum_{i=1}^{N_r} \delta(x_i(t) - x, v_i(t) -v) +\sum_{i=N_r + 1}^N \delta(x_i(t) - x, v_i(t) -v) \right).
\end{align*}
Using the linearity of the convolution w.r.t.~$f^N$, we obtain the coupled PDE system given by
\begin{subequations}\label{eq:pde-system}
\begin{align}
\partial_t f_r + v\cdot \nabla_x f_r &= \nabla_v \cdot \left[\left( \int \mathcal M(v,\bar v) K(x,\bar x,v,\bar v)\, df(\bar x, \bar v) - (u_r-v) \right ) f_r \right] , \\
\partial_t f_b + v\cdot \nabla_x f_b &= \nabla_v \cdot \left[\left( \int \mathcal M(v,\bar v) K(x,\bar x, v,\bar v)\, df(\bar x, \bar v) -(u_b -v) \right) f_b \right].
\end{align}
\end{subequations}
Note that the equations are coupled, as $f = f_r + f_b$ appears in the interaction terms.
For the discussion of the numerical schemes later on, we introduce the notation
$$ S_i(f) f_i := \left( \int \mathcal M(v,\bar v)K(x,\bar x,v,\bar v)\, df(\bar x, \bar v) - (u_i-v) \right) f_i, \qquad i \in \{r,b\}. $$
Then the evolution equations reduce to
\begin{subequations}\label{eq:short_hand}
\begin{align}
\partial_t f_r + \nabla_x \cdot (vf_r) &= \nabla_v \cdot \left(S_r(f) f_r\right), \qquad
\partial_t f_b + \nabla_x \cdot (vf_b) = \nabla_v \cdot \left( S_b(f) f_b \right), \\
f_r(0,x,v) &= f_{r,0}(x,v),\qquad\qquad\qquad\qquad\, f_b(0,x,v) = f_{b,0}(x,v).
\end{align}
\end{subequations}
All studies in the numerical section will be based on these equations.
\end{example}
The mean-field equations derived here are expected to be helpful for analytical investigations of stationary states and the formation of patterns in future work. Here, the main interest is to see whether or not the collision avoidance and pattern formation on the microscopic level translates to the mesoscopic system.
\section{Boundary Conditions and Obstacles}\label{sec:boundaryObstacles}
The purpose of modelling agent dynamics is to predict their behaviour in events of evacuation or in crowded sites. Therefore, it is reasonable to assume that the domain is bounded, and to be even more realistic, that there are obstacles in the domain. We begin with the discussion of boundary conditions.
\subsection{Boundary conditions}
To model the fact that agents are in general not scared of walls, but usually keep some distance to a wall, we apply reflective boundary conditions. In fact, if an agent moves towards a boundary, he or she will be reflected. Thus, the velocity component which points towards the boundary is transformed into a velocity component pointing away from the boundary. Besides being realistic, these boundary conditions preserve mass, which is a desired property in this context, especially in the mean-field limit. A sketch of the reflecting behaviour is given in Figure~\ref{fig:boundaryConditions}.
\begin{figure}
\caption{Illustration of the reflective boundary conditions. The black particle with velocity depicted with the dashed vector is about to leave the domain in the next time step. Due to the reflecting boundary conditions, it is projected into the domain and the y-component of its velocity is reflected (black vector).}
\label{fig:boundaryConditions}
\end{figure}
In order to study pattern formation, we assume periodic boundary conditions at the inflow and outflow of the domain.
\begin{example}
For the simulations we consider a channel and a crossing. The domain and its periodic (yellow, green) and reflecting (black) boundaries are sketched in Figure~\ref{fig:boundarysetting}. The plot on the top refers to the channel setting, the one on the bottom to the crossing.
\begin{figure}
\caption{Illustration of the boundary setting. The green and yellow parts of the boundary refer to periodic boundary conditions. Walls are indicated by black lines. In the case of the crossing, particles that leave the domain through a green boundary flow in at the other green boundary, and analogously for the yellow boundary parts.}
\label{fig:boundarysetting}
\end{figure}
\end{example}
\subsection{Obstacles}
The pairwise interaction introduced above can easily be adapted to interactions with obstacles. Indeed, we define obstacles as artificial agents having a fixed position and an artificial velocity pointing outward of the obstacle. Positions and artificial velocity vectors of a circle-shaped obstacle are illustrated in Figure~\ref{fig:illustraction obstacle} (left).
\begin{figure}
\caption{Left: Illustration of artificial agents with fixed position and artificial velocities modelling a circular obstacle. Right: Influence of a circular obstacle on the trajectories of the pedestrians.}
\label{fig:illustraction obstacle}
\end{figure}
The plot on the right shows the influence of this obstacle on trajectories of $20$ agents, $10$ red ones and $10$ blue ones.\\
\section{Numerical schemes and implementation}\label{sec:numerics}
In the following we discuss the numerical schemes used for the implementation on the microscopic and the mesoscopic scale. We begin with the particle implementation.
\subsection{Particle Scheme}
In order to solve the ODE system for the particles, we use a leapfrog scheme combined with a splitting for the velocity update. In more detail, we compute
\begin{align*}
x_i^{k'} &= x_i^k + \frac\tau2 v_i^k, \qquad &&v_i^{k'} = (v_i^k + \tau u_i)/(1+\tau) \\
v_i^{k+1} &= v_i^{k'} + \tau M(v^{k'}) K(x^{k'}), \qquad &&x_i^{k+1} = x_i^{k'} + \frac\tau2 v_i^{k+1}.
\end{align*}
Here and in the following $\tau$ denotes the time step. Note that the part involving the desired velocity is solved implicitly and that the interaction is independent of the group membership of the two interacting particles. This allows for a straightforward vectorized implementation of the scheme.
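A minimal sketch of one such time step in vectorized Python is given below; it assumes a user-supplied routine \texttt{interaction} returning the summed interaction term $M(v)K(x)$ for all agents, so the names and signatures are illustrative and not taken from the actual implementation.
\begin{verbatim}
import numpy as np

def particle_step(x, v, u, tau, interaction):
    """One leapfrog/splitting step for all N agents.

    x, v, u : arrays of shape (N, 2) with positions, velocities and
              desired velocities
    tau     : time step
    interaction(x, v) -> array of shape (N, 2) with the summed
              interaction term M(v) K(x) for every agent
    """
    x_half = x + 0.5 * tau * v               # half step in space
    v_half = (v + tau * u) / (1.0 + tau)     # implicit relaxation to u
    v_new = v_half + tau * interaction(x_half, v_half)
    x_new = x_half + 0.5 * tau * v_new       # second half step in space
    return x_new, v_new
\end{verbatim}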
To include the periodic boundary conditions in the particle scheme, we use copies of the particles of interest. These artificial particles represent the particles on the other end of the domain in the computation of the interaction forces. Moreover, we restrict the domain of interaction in order to be consistent with the mean-field scheme. See Remark~\ref{rem:domain of interaction} below for more details.
\subsection{Mean-field Scheme}
For the mean-field simulation we employ a Strang splitting scheme. That means, a Semi-Lagrangian solver \cite{KlarReuterswaerd,Sonnendruecker} is used in the spatial domain and a Semi-Implicit Finite-Volume scheme is used in the velocity space. The evolution is treated analogously for both groups. Using the shorthand notation introduced in \eqref{eq:short_hand} and $i \in \{r,b\}$ we get
\begin{subequations}\label{mf-splitting}
\begin{align}\label{dis_1}
\partial_t f_i^{k'} &= -\frac{1}{2} \nabla_v \cdot ( S_i(f^{k'}) f_i^{k'}), \qquad && f_i^{k'} = f_i(t), \\
\label{dis_2} \partial_t f_i^{k''} &= - v \cdot \nabla_x f_i^{k''}, \qquad && f_i^{k''} = f_i^{k'}(t + \tau), \\
\label{dis_3} \partial_t f_i^{k+1} &= -\frac{1}{2} \nabla_v \cdot (S_i(f^{k'}) f_i^{k+1}), \qquad && f_i^{k+1} = f_i^{k''}(t+\tau) .
\end{align}
\end{subequations}
The transport in \eqref{dis_2} is computed using a Semi-Lagrangian method. In fact, the computations are based on a fixed grid and the mass is transported along characteristics. These characteristic curves are computed with the help of a second order Runge-Kutta scheme in a pre-processing step. In every transport step of the time loop, the initial point is a grid point. The endpoint of a characteristic is very unlikely to be a grid point again; hence we need an interpolation to compute the new values on the grid points. For the interpolation we use a polynomial reconstruction with cubic Bezier curves, which is of second order and has the nice property that it never leaves the convex hull of the control points. Additionally, we have to obey periodic or reflective boundary conditions in each step.
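For illustration, a stripped-down one-dimensional version of such a semi-Lagrangian transport step with periodic boundary conditions could look as follows; a constant advection speed and simple linear interpolation are assumed here in place of the full characteristics and the cubic Bezier reconstruction of the actual code.
\begin{verbatim}
import numpy as np

def semi_lagrangian_step(f, xgrid, speed, tau):
    """Advect the grid function f by a constant speed over one time
    step on a uniform, periodic 1D grid (illustrative sketch only)."""
    dx = xgrid[1] - xgrid[0]
    length = dx * len(xgrid)
    # trace the characteristic backwards to the departure point
    departure = (xgrid - tau * speed - xgrid[0]) % length + xgrid[0]
    # periodic extension so interpolation near the right edge is valid
    x_ext = np.concatenate([xgrid, [xgrid[0] + length]])
    f_ext = np.concatenate([f, [f[0]]])
    return np.interp(departure, x_ext, f_ext)
\end{verbatim}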
For the transport in velocity space, i.e.~\eqref{dis_1} and \eqref{dis_3}, a second order finite volume scheme is employed. The advection is approximated by a Lax-Wendroff flux \cite{LeVeque, Quarteroni} and oscillations arising from non-smooth solutions are limited using a van-Leer method \cite{VanLeer}. In the velocity domain we do not need to fix boundary conditions, as we assume to never reach the boundary of the domain. More details on the schemes without boundary conditions can be found in \cite{Sheep1,CarrilloKlarRoth}.
\begin{remark}
Note that in contrast to these references the interaction of the agents in this contribution depends not only on their pairwise distance, but also on their velocities. This has to be taken care of in the finite volume scheme. Moreover, the dependence on the velocities increases the cost of computing the interaction forces significantly.
\end{remark}
In order to avoid cancellation, we make use of the structure $f = f_r + f_b$ and compute the interaction with each group separately. Hence, we do the procedure explained in \eqref{mf-splitting} first for the interaction with the red group and then for the blue group.
\begin{remark}\label{rem:domain of interaction}
Another important observation is the following: the interaction takes place in a very short range, thus to speed up the computation of the interaction force, we consider only points in the neighbourhood. Indeed, the domain of interaction at grid location $(i,j)$ is given by all the points $(k,l)$ with $k=i-2 \text{ to } i+2$ and $l=j-2\text{ to }j+2$. This speeds up the computation of the interaction forces significantly.
In order to make the simulations on the particle and the mean-field level comparable, we restrict the interaction domain in the particle scheme to a finite value as well.
\end{remark}
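In a grid-based implementation this restriction amounts to limiting the loops over interaction partners to a small window around the current cell; a schematic version, with a hypothetical routine \texttt{kernel} returning the interaction weight between two cells, reads:
\begin{verbatim}
def local_interaction(rho, kernel, i, j, width=2):
    """Sum the interaction kernel over the (2*width+1)^2 neighbourhood
    of cell (i, j); rho is a 2D array of cell densities."""
    nx, ny = rho.shape
    total = 0.0
    for k in range(max(i - width, 0), min(i + width + 1, nx)):
        for l in range(max(j - width, 0), min(j + width + 1, ny)):
            total += kernel(i, j, k, l) * rho[k, l]
    return total
\end{verbatim}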
All codes for the particle simulations are written vectorized in Python 3.6, the mean-field simulations are implemented in Fortran 90 and their results are plotted with Matlab R2018a.
\section{Numerical Results}\label{sec:simulations}
In this section we discuss numerical results for the two settings sketched in Figure~\ref{fig:boundarysetting} on the microscopic and on the mesoscopic level. We begin with the channel setting which has periodic boundary conditions on the left and the right side and reflective boundary conditions on the top and the bottom, see Figure~\ref{fig:boundarysetting} (top). Then we discuss the crossing with periodic boundary conditions at top and bottom and left and right, respectively.
The following simulation results are computed using interaction forces resulting from the Morse Potential \cite{Morse}
$$ P(d) = Re^{-d/r} - Ae^{-d/a} $$
with strength parameters $A=0, R = 500$ and range parameters $r = 1.5, a = 1.5$. Further, $d = |x_i -x_j|$ denotes the distance between two interacting pedestrians. Moreover, we assume that all pedestrians tend to step to the right to avoid a collision, i.e., $\lambda = 0.25$.
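A small sketch of the resulting repulsive force in Python is given below; it only evaluates the (isotropic) gradient of the Morse potential, while the anisotropy factor and the exact scaling used in the actual model are omitted.
\begin{verbatim}
import numpy as np

R, A = 500.0, 0.0     # strength parameters
r, a = 1.5, 1.5       # range parameters

def morse_potential(d):
    """Morse potential P(d) at distance d."""
    return R * np.exp(-d / r) - A * np.exp(-d / a)

def repulsion(xi, xj):
    """Force on pedestrian i exerted by pedestrian j, i.e. minus the
    gradient of P(|xi - xj|) with respect to xi."""
    diff = xi - xj
    d = max(np.linalg.norm(diff), 1e-12)   # avoid division by zero
    dP = -(R / r) * np.exp(-d / r) + (A / a) * np.exp(-d / a)
    return -dP * diff / d
\end{verbatim}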
\begin{remark}
Here $A = 0$ corresponds to the modeling assumption that the pedestrians have only repulsive influence on each other. Groups of friends or families are not considered here. A study of the influence of these on the simulations would be interesting as future work.
\end{remark}
\subsection{Pedestrians in a channel}
The random initial positions are distributed uniformly in the domain $\Omega_x = [-45,45] \times [-15,15]$. The initial velocities of the blue and the red particles are distributed in $$\Omega_{v_b} = [-0.3,-0.1] \times [-0.2,0.2]\quad \text{and} \quad \Omega_{v_r} = [0.1,0.3] \times [-0.2,0.2],$$ respectively. Further parameters are
$dt = 0.01,\; N_b \in \{10, 150\},\; N_r \in \{10, 150\}$ at the particle level and $dt = 0.05,\; nx = 100,\; ny = 40,\; nv_1 = 20,\; nv_2 = 20$ for the mean-field simulation. Here $nx, ny, nv_1$ and $nv_2$ denote the number of discretization points for the spatial and velocity domain in $x$- and $y$-direction, respectively. Note that this choice implies that the red and the blue group have mass 0.5, as $f^N$ is by construction a probability measure.
\begin{figure}
\caption{Lane formation in a channel - particle simulation with only a few particles involved; we see the formation of multiple horizontal lanes as a stationary state.
The parameters are $N_b=10=N_r, t=150.$ The arrows show the velocities of the particles.}
\label{fig:lowdensity}
\end{figure}
In Figure~\ref{fig:lowdensity} we see the positions of the groups with $N_r = 10 = N_b$ at time $t=600.$ Here, four horizontal lanes have formed and the interaction forces are small enough that this is a stable configuration. For higher particle densities we expect the formation of only two lanes.
\begin{remark}
Numerical results suggest that there is a relation between the number of lanes formed and the parameter $r$ which scales the interaction radius of the particles. This seems to be in analogy to the investigation of the stationary number of opinions in opinion formation simulations, see \cite{MotschTadmor}. It would be very interesting to quantify the number of lanes for given interaction range $r.$
\end{remark}
\begin{figure}
\caption{Lane formation in a channel - particle simulation. The blue pedestrians go from right to left, the red from left to right, i.e. $u_b = (-0.2,0)^T$ and $u_r = (0.2,0)^T.$ The arrows show the velocities of the pedestrians. As $\lambda = 0.25,$ pedestrians prefer to step to the right to avoid a collision. The snapshots are made at the times $t=0,\; 50,\; 100,\; 150,\; 200,\; 250.$}
\label{fig:laneformationParticles}
\end{figure}
Next, we investigate the channel setting with $N_b = 250 = N_r$ and the corresponding mean-field simulation. Figure~\ref{fig:laneformationParticles} shows the results of the particle simulation. Initially the particles are uniformly distributed over the whole domain as shown in the plot on the top-left. As time evolves we see the formation of diagonal stripes ($t=50,\;100$). The red particles are moving towards the bottom of the domain and the blue group is moving upwards. After some time a stationary distribution of two lanes, the red at the bottom of the domain and the blue at the top of the domain has formed ($t=250$).
This is in contrast to the behaviour shown in Figure~\ref{fig:lowdensity}, suggesting that there is a correlation between the interaction radius and the number of lanes. A detailed study on this is work in progress.
\begin{remark}
Heuristically, the formation of lanes can be explained by sequential occurrences of evasive behaviour. If we consider ourselves as one particle in the high-density crowd, then soon we see someone approaching us. We need to adjust our velocity in order to avoid a collision. Then the next one approaches us and, again, we avoid the collision. This behaviour terminates when we are at the top or bottom boundary of the domain, or when we are surrounded by others walking in the same direction and hence feel no need to avoid any collision. The velocity vectors in Figure~\ref{fig:laneformationParticles} underline this thought experiment.
\end{remark}
\begin{figure}
\caption{Lane formation in a channel - mean-field simulation. Initially the red and the blue group are uniformly distributed in the domain. The difference of the densities vanishes as shown in the plot on the top-left. Then we see the formation of diagonal stripes at time t=50 and finally a formation of lanes as time proceeds. The plots correspond to the times t=0, 250, 500, 750, 1000, 1250.}
\label{fig:laneformationmeanfield}
\end{figure}
The same observations are made for the simulation on the mean-field level. To discuss the results, we introduce
\begin{equation}\label{quantities}
\rho_i(t,x) := \int f_i(t,x,v)\; \mathrm{d}v\quad \text{and}\quad \phi_i(t,v) := \int f_i(t,x,v) \; \mathrm{d}x, \qquad i=\{r,b\}.
\end{equation}
Figure~\ref{fig:laneformationmeanfield} shows the quantity $\rho_r(t,\cdot) - \rho_b(t, \cdot)$ at several time instances $t$. Initially the quantity is zero, as both groups are uniformly distributed in the spatial domain. As the groups start to cluster and to form lanes, we see the density of the blue group depicted in blue and the density of the red group in red. As expected from the formal derivation above, the results match those of the particle level.
The evolution of $$\rho_i^2(t,x^2) = \int f_i(t,x^1,x^2,v_1,v_2)\; \mathrm{d}(v_1,v_2)\; \mathrm{d}x^1\qquad i\in \{r,b\}$$ is illustrated in Figure~\ref{fig:densitydistribution}. Note that the quantities are averaged with respect to the $x^1$-component as well in order to emphasize the evolution in the $x^2$-component.
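On the discrete level these marginals are obtained by summing the distribution over the corresponding grid directions and multiplying by the cell volumes; a compact sketch, assuming the distribution of one group is stored as a 4D array over $(x^1,x^2,v^1,v^2)$, reads:
\begin{verbatim}
import numpy as np

def marginals(f, dx1, dx2, dv1, dv2):
    """Spatial and velocity marginals of a distribution f[i1, i2, j1, j2]
    discretized on a tensor grid in (x1, x2, v1, v2)."""
    rho = f.sum(axis=(2, 3)) * dv1 * dv2    # rho(t, x)
    phi = f.sum(axis=(0, 1)) * dx1 * dx2    # phi(t, v)
    rho_x2 = rho.sum(axis=0) * dx1          # additionally averaged over x1
    return rho, phi, rho_x2
\end{verbatim}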
\begin{figure}
\caption{Density distribution in a channel - mean-field simulation. The densities $\rho_i^2, i \in \{r, b\}$ are integrated w.r.t. $x^1$ in order to extract the information along the $x^2$-axis.}
\label{fig:densitydistribution}
\end{figure}
Finally, we study the evolution of the velocity density $\phi_r(t,\cdot) + \phi_b(t,\cdot)$ in Figure~\ref{fig:velocitychannel}. We see the initial distribution at the top-left plot. The relaxation towards the desired velocities is achieved after a very short time. Then only small changes in the $v^2$-component lead to the formation of the lanes.
\begin{figure}
\caption{Lane formation in a channel - mean-field simulation. Initially the red and the blue group are uniformly distributed in the velocity domain $\Omega_{v_r}$ and $\Omega_{v_b}$, respectively, as shown in the plot on the top-left. Then the relaxation towards the desired velocities $u_r = (0.2,0)^T$ and $u_b = (-0.2,0)^T$ starts, see plot on the top-right. Only small changes in the $y$-coordinates lead the crowd to the lane configuration as can be seen when comparing the top-left and the bottom plot of the velocity density. The time instances are $t=0,\; 250,\; 1500.$}
\label{fig:velocitychannel}
\end{figure}
\begin{remark}
For a negative anisotropy parameter, i.e., $\lambda=-0.25$, the roles of the two groups change. In this case, the red lane forms at the top of the domain and the blue lane at the bottom of the domain.
\end{remark}
\subsection{Pedestrians at a crossing}
For the simulation of the crossing the initial positions and velocities are randomly distributed in $\Omega_{x} = [-40,40] \times [-40,40]$ and $\Omega_{v} = [-0.1,0.1] \times [-0.1,0.1]$, respectively. The desired velocity for the red group is $u_r = (0.2,0)^T$ and for the blue group it is $u_b = (0,0.2)^T.$ All other parameters are as in the previous case, except for $nx = 40 = ny.$ Moreover, boundary conditions are periodic from left to right and from top to bottom as indicated by the green and yellow lines in Figure~\ref{fig:travellingwavesParticles}.
\begin{figure}
\caption{Travelling waves at a crossing - particle simulation. The blue go from bottom to top, the red from left to right, i.e. $u_b = (0,0.2)$ and $u_r = (0.2,0).$ The arrows show the velocities of the pedestrians. As $\lambda = 0.25,$ pedestrians prefer to step to the right to avoid a collision. From top-left to bottom-right: $t = 0$, $t = 50$, $t = 100$, $t = 150$, $t = 200$ and $t=250$. The simulation was done with $N_b=150=N_r.$ At $t=250$ the pattern allows most pedestrians to walk with their desired velocities.}
\label{fig:travellingwavesParticles}
\end{figure}
The figure shows the corresponding simulation results for the particle scheme. Initially the particles are distributed uniformly; after some time we see patterns forming. This process ends in stable travelling waves as shown in the plot at the bottom-right ($t=250$).
\begin{figure}
\caption{Travelling waves at a crossing - mean-field simulation. Initially the red and the blue group are uniformly distributed in the spatial domain. The difference of the densities vanishes as shown in the plot on the top-left. At time t=50 the groups start to separate, see the plot in the top-right. Afterwards the line pattern is forming (t=150, 375 and 750). Finally, we see the stationary configuration of travelling waves at time t=1500 in the bottom-right plot.}
\label{fig:travellingwavesmeanfield}
\end{figure}
Simulation results for the mean-field setting are depicted in Figure~\ref{fig:travellingwavesmeanfield} and Figure~\ref{fig:velocitycross}. The plots show the spatial and velocity densities of the red and the blue group, which are defined as in the channel setting, see \eqref{quantities}.
In Figure~\ref{fig:travellingwavesmeanfield} we see a plot of the difference $\rho_r(t,x) - \rho_b(t,x)$ at various time instances $t.$ Hence, in locations where the blue group is clustered, we see blue colour and in locations with a majority of the red group we see red colour. At the beginning both groups are uniformly distributed over the spatial domain; the difference is therefore zero, as shown in the top-left plot. As time evolves we see the formation of groups which ends in a stable configuration of travelling waves as can be seen in the bottom-right plot of Figure~\ref{fig:travellingwavesmeanfield}. Thanks to the travelling wave pattern, almost every pedestrian walks with his/her desired velocity. Note that these patterns are also reported in many other simulations and experiments with crossing flows, see for example \cite{Appert-Rolland,Ranetbauer1,Cividini,Ranetbauer2}.
The velocity density $\phi_r(t,\cdot) + \phi_b(t,\cdot)$ is shown in Figure~\ref{fig:velocitycross} for different time instances $t$. Initially the red and the blue group are uniformly distributed in the velocity domain $\Omega_v$ as shown in the plot on the top-left. Then the relaxation towards the desired velocities $u_r = (0.2,0)^T$ and $u_b = (0, 0.2)^T$ starts, see plot on the top-right. Only small changes in the $v^1$- and $v^2$-coordinate lead the crowd to the travelling waves configuration as can be seen when comparing the top-left and the bottom plot of the velocity density.
\begin{figure}
\caption{Travelling waves at a crossing - mean-field simulation. Initially the red and the blue group are uniformly distributed in the velocity domain $\Omega_v$ as shown in the plot on the top-left. Then the relaxation towards the desired velocities $u_r = (0.2,0)^T$ and $u_b = (0, 0.2)^T$ starts, see plot on the top-right. Only small changes in the $v^1$- and $v^2$-coordinate lead the crowd to the travelling waves configuration as can be seen when comparing the top-left and the bottom plot of the velocity density. The time instances are $t=0, 250, 1500.$}
\label{fig:velocitycross}
\end{figure}
\section{Conclusion and Outlook}\label{sec:conclusionOutlook}
We discussed an anisotropic interaction model that allows for collision avoidance. The model is based on general swarm models and uses a rotation matrix to allow for a smooth evasion behaviour. First, we looked at a binary interaction of two agents; then we stated the particle system for $N$ agents and formally derived a mesoscopic formulation. We illustrated how obstacles can be included in the model conveniently. The behaviour of the model is studied numerically with the help of two scenarios. The first one considers a channel with two groups of pedestrians, one group walking from right to left and the other group from left to right. We see the typical lane formation observed in many other works and in reality. The second scenario investigates the behaviour of pedestrians at a crossing. Again, we see the typical travelling wave formations on both the particle and the mean-field level.
In a next step, the model shall be analysed: a study of well-posedness on the ODE and PDE level is planned. Further, a rigorous proof of the relation between the particle and the mean-field system as $N\rightarrow \infty$ is to be achieved. Finally, it is interesting to find an analytical way to show the pattern formation as $T\rightarrow \infty$. Based on this work, an optimization framework corresponding to an evacuation scenario and a parameter identification based on data are planned for further studies.
\end{document} | arXiv |
Leo Zippin
Leo Zippin (January 25, 1905 – May 11, 1995) was an American mathematician.[1] He is best known for solving Hilbert's Fifth Problem with Deane Montgomery and Andrew M. Gleason in 1952.
Biography
Leo Zippin was born in 1905 to Bella Salwen and Max Zippin, who had emigrated to New York City from the Ukraine in 1903. He did his undergraduate and graduate work at the University of Pennsylvania, completing his doctorate in 1929. His doctoral adviser was John Robert Kline.
Leo Zippin is the author of The Uses of Infinity and, together with Deane Montgomery, of the monograph Topological Transformation Groups. In 1952, he, along with Andrew M. Gleason and Deane Montgomery, solved Hilbert's fifth problem.[2][3]
Personal life
He married Frances Levinson in 1932. They had two children, Nina, a prominent literary critic and literary historian, and Vivian.[4] He taught at Queens College in Flushing, NY.
References
1. "Leo Zippin, 90, Dies; Solved Math Puzzle". NY Times. May 20, 1995.
2. Stein, Howard (1977). "Some Philosophical Prehistory of General Relativity". In Earman, John; Glymour, Clark N.; Stachel, John J. (eds.). Foundations of space-time theories. U of Minnesota Press. pp. 38–. ISBN 978-0-8166-0807-2. Retrieved 22 May 2011.
3. Montgomery, Deane; Zippin, Leo (May 1952). "Small subgroups of finite-dimensional groups". Proc Natl Acad Sci U S A. 38 (5): 440–442. doi:10.1073/pnas.38.5.440. PMC 1063582. PMID 16589121.
4. known as Vivian B. Narehood, an attorney at Gibbel Kraybill & Hess Archived 2013-12-05 at the Wayback Machine
External links
• O'Connor, John J.; Robertson, Edmund F., "Leo Zippin", MacTutor History of Mathematics Archive, University of St Andrews
• Leo Zippin at the Mathematics Genealogy Project
| Wikipedia |
\begin{document}
\title{Disturbance by optimal discrimination} \author{Ry\^uitir\^o Kawakubo} \email{[email protected]} \affiliation{Department of Physics, Keio University, Yokohama 223-8522, Japan} \author{Tatsuhiko Koike} \email{[email protected]} \affiliation{Department of Physics, Keio University, Yokohama 223-8522, Japan} \date{\today}
\pacs{03.65.Ta, 03.67.-a} \begin{abstract} We discuss the disturbance caused by measurements which unambiguously discriminate between given candidate states. We prove that such an optimal measurement necessarily makes distinguishable states indistinguishable when the inconclusive outcome is obtained. The result was previously shown by Chefles~[Phys. Lett. A 239, 339 (1998)] under restrictions on the class of quantum measurements and on the definition of optimality. Our theorems remove these restrictions and are also applicable to infinitely many candidate states. Combining them with our previous results, one can obtain concrete mathematical conditions for the resulting states.
The method may have a wide variety of applications in contexts other than state discrimination. \end{abstract}
\maketitle
\section{Introduction}
Optimal quantum measurements play fundamental roles in many subjects in quantum foundations and quantum information. The subjects include error-disturbance relations~\cite{Ozawa04}, quantum coding theorems~\cite{Hol98}, entanglement distillation~\cite{Bennett96a}
state estimation~\cite{Helstrom76}, state discrimination~\cite{Chefles00}, state protection~\cite{Wakamura17a}, etc. Although it is sometimes difficult to obtain concrete forms of optimal measurements, the characterization of those
measurements that
maximize the precision, maximize the transmission rate, etc., gives fundamental bounds in quantum mechanics and on the efficiency of various information processing. In this paper, we discuss a natural and intuitive principle in optimality: The measurement which does ``something'' the best leaves no room for doing ``something'' afterwards. Intuitively, if the room remained, one would be able to improve the original measurement by composing it with a measurement which does some amount of ``something'' later. This simple reasoning seems to have a wide scope.
However, it is not necessarily trivial to implement the idea as a rigorous statement in general or in each subject. One must carefully choose the definition of optimality, assumptions and conclusions.
We will demonstrate, as a prototype, that the principle works well and makes the original problem more transparent in the context of unambiguous state discrimination, where we also notice subtleties in the application thereof. The principle generalizes the previous results and simplifies the proof, which may not be achieved by other approaches such as extremity (in the mathematical sense) of the optimal measurements, even if we restrict ourselves to convex evaluation functions.
Unambiguous discrimination is one of possible frameworks for state discrimination. There, one must answer the correct input state among the candidates after performing a quantum measurement. One must not take a state for another, though one can answer ``inconclusive'' or ``?.'' Unambiguous discrimination between two states was introduced by Ivanovic~\cite{Ivanovic1987257} and developed in Refs.~\cite{Dieks1988303,Peres198819,Jaeger199583}. Chefles~\cite{Chefles1998339} showed that finitely many pure states are distinguishable if and only if they are linearly independent. Feng \textit{et.~al.}~\cite{PhysRevA.70.012308} extended the result to mixed states. There are also interesting examples where infinitely many candidate states are involved. An example is the set of coherent states corresponding to a lattice in the classical phase space, which was considered by von Neumann~\cite{vonneumann32} in the context of simultaneous measurements of position and momentum. The present authors~\cite{1751-8121-49-26-265201} generalized the results above on unambiguous discrimination to infinitely many candidate states. The application to von Neumann's lattice led to a natural characterization of Planck's constant from the viewpoint of state discrimination.
The studies of unambiguous discrimination above mainly discuss its possibility and accuracy. Not much attention was paid to the disturbance caused by unambiguous discrimination measurements. One exception is a part of Chefles's work~\cite{Chefles1998339}. He showed, under some restrictions explained below, that optimal discrimination measurements change the input states to linearly dependent (and thus indistinguishable) ones if the inconclusive outcome {``?''} is obtained. The claim is interesting
because it concerns the disturbance caused by an optimal measurement. However, besides the finiteness of the candidate states, the results obtained there were restrictive in the following sense. First, the states to be discriminated were assumed to be pure. Second, not all measurements allowed by quantum mechanics were considered. The measurements were restricted to those which change pure states to pure states when the outcome is inconclusive. It is quite common, however, that the output state is mixed even if
the input is pure.
Third, a particular evaluation function was considered to define the optimality. Namely, existence of a prior probability distribution is assumed and the average success probability was chosen. Even in the unambiguous discrimination between finite number of states, it may be as natural, for example, to define the optimality by maximization of the minimum success probability taken over the candidate states.
In this paper, we show that optimal unambiguous discrimination measurements make distinguishable states indistinguishable provided that the outcome is the inconclusive one. We apply the simple principle described at the beginning of this paper and derive the result directly from the definition of the optimality, not resorting to the detailed mathematical properties of the states. The results are free from all restrictions mentioned above, and is valid for mixed states, for general measurements, for a wide class of evaluation functions (which are not necessarily convex), and for infinitely many candidate states.
A careful treatment is necessary for infinitely many states since distinguishability naturally splits into slightly different levels, distinguishability and uniform distinguishability~\cite{1751-8121-49-26-265201}. One can obtain detailed mathematical characterization of the states resulting from optimal measurements by combining the results in Ref.~\cite{1751-8121-49-26-265201} and those in the present work.
The paper is organized as follows. In Sec.~\ref{sec-revmes}, we briefly review quantum measurement theory. We introduce the concept of unambiguous discrimination in Sec.~\ref{sec-uud}. We present our main result on distinguishability in Sec.~\ref{sec-thm} and that on uniform distinguishability in Sec.~\ref{sec-thm-uni}. The excluded case in the main results, which is itself of interest, is addressed in Sec.~\ref{sec-perfect}. Section~\ref{sec-con} is for conclusion and discussions.
\section{Brief review of quantum measurement theory} \label{sec-revmes} In this section, we quickly review quantum measurement theory~\cite{DL70} to the extent that is necessary for this paper. We consider the measurement with countably many outcomes.
Let $\Omega$ denote the set of possible outcomes and $\hil{H}$ denote a system (a separable Hilbert space) to be measured. A state is expressed by a density operator $\rho\in \bone{H}$, $\rho\geqslant0$, $\tr\rho=1$, where $\bone{H}$ denotes the Banach space of trace class operators on $\hil{H}$. A measurement on $\hil{H}$ with the outcome set $\Omega$ is mathematically described by an \textit{instrument} $(\inst{I}_\omega)_{\omega\in\Omega}$. Each element $\inst{I}_\omega$ of the instrument is a linear map $\inst{I}_\omega:\bone{H}\to \bone{H}$ [that is bounded with respect to the trace norm on $\bone{H}$], which describes a weighted state change caused by the measurement. Each $\inst{I}_\omega$ sends a state $\rho$ to an ``unnormalized'' state, namely,
\begin{align}
\inst{I}_\omega\rho
=\mathrm{P}(\omega|\rho) \rho_\omega,
\label{eq-Irho} \end{align}
where $\mathrm{P}(\omega|\rho)$ is the probability of obtaining an outcome $\omega\in\Omega$ and $\rho_\omega$ is the resulting state when the outcome $\omega$ is obtained. The equation \eqref{eq-Irho}
defines $\mathrm{P}(\omega|\rho)$ and $\rho_\omega$ uniquely (unless $\tr[\inst{I}_\omega\rho]=0$):
\begin{align}
\mathrm{P}(\omega|\rho)
&=\tr[\inst{I}_\omega\rho],&
\rho_\omega
&=\frac{\inst{I}_\omega\rho}{ \tr{[\inst{I}_\omega\rho]} }.
\end{align}
In order to interpret these quantities as probabilities and quantum
state, respectively,
an instrument is assumed to satisfy the following two conditions.
(CP) Complete positivity:
$\inst{I}_\omega$ is completely positive for all $\omega\in\Omega$,
i.e., its trivial extensions
$\inst{I}_\omega\otimes \mathrm{id}_{\mathbb{C}^{n\times n}}:
B^1(\hil{H}\otimes \mathbb{C}^n)\to B^1(\hil{H}\otimes \mathbb{C}^n)$
are positive maps for all $n\in\mathbb{N}$.
(TP) Trace preserving property:
The sum $\sum \inst{I}_\omega$ preserves the trace of operators, i.e.,
$\tr[(\sum \inst{I}_\omega)\rho]=\tr \rho$ for all $\rho\in\bone{H}$.
The property CP, especially the positivity, guarantees positivity of $\rho_\omega$ and $\mathrm{P}(\omega|\rho)$. The property TP guarantees the conservation of probability. It is known that the instruments with the above two properties correspond exactly to the realizable measurements (e.g.~\cite{doi:10.1063/1.526000}).
\section{Unambiguous discrimination}
\label{sec-uud}
We begin with the precise definition of unambiguous discrimination.
\begin{defi}\label{defi-uud}
An \emph{unambiguous discrimination measurement}
between
$(\rho_j)_{j\in J}$ is
an instrument $(\inst{I}_\omega)_{\omega \in J\cup\{?\}}$ that satisfies,
for $j\ne k\in J$,
\begin{align}
\tr[\inst{I}_j\rho_k]
&=0,&
\tr[\inst{I}_j\rho_j]>0.
\label{eq-def-uud}
\end{align}
We call $\tr[\inst{I}_j\rho_j]$ \emph{success probabilities}.
The states $(\rho_j)_{j\in J}$ are said to be
\emph{distinguishable}
when they admit at least one unambiguous discrimination measurement
between them.
\end{defi}
Let $(\rho_j)_{j\in J}$ be distinguishable
candidate states.
Then
$(\rho_j)_{j\in J}$
admit an unambiguous discrimination measurement
$(\inst{I}_\omega)_{\omega \in J\cup\{?\}}$.
When the true state is $\rho_j$,
one obtains the outcome $j$ or {``?''}
with probabilities $\tr[\inst{I}_j\rho_j]$ or $\tr[\inst{I}_?\rho_j]$, respectively,
and does not obtain any other outcome,
namely,
$\tr[\inst{I}_j\rho_j]+\tr[\inst{I}_?\rho_j]=1$.
When the outcome is $j$,
we can decide the true state is $\rho_j$ \textit{with certainty}.
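For orientation, we recall the standard two-state example (which is not used in the arguments below). For two pure states $\rho_j=|\psi_j\rangle\langle\psi_j|$, $j=1,2$, with overlap $s=|\langle\psi_1|\psi_2\rangle|<1$, there exists an unambiguous discrimination measurement with equal success probabilities
\begin{align*}
\tr[\inst{I}_1\rho_1]=\tr[\inst{I}_2\rho_2]=1-s,
\end{align*}
and for equal prior probabilities this value is optimal for the average success probability; this is the Ivanovic--Dieks--Peres bound~\cite{Ivanovic1987257,Dieks1988303,Peres198819}.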
In the rest of this section,
we would like to discuss the
ways to quantify how good an unambiguous discrimination measurement is.
In the presence of a prior probability density $(p_j)_{j\in J}$,
which was assumed by
Dieks~\cite{Dieks1988303} and Chefles~\cite{Chefles1998339},
it is reasonable to evaluate
an unambiguous discrimination measurement $(\inst{I}_\omega)_{\omega\in J\cup\{?\}}$
by the average success probability
\begin{align}
f_{\text{av}}
:=\sum_{j\in J} p_jq_j,
\label{eq-fav}
\end{align}
where $q_j=\tr[\inst{I}_j\rho_j]$ are success probabilities.
However, this is not the only way.
Even in the case of finite $J$,
there are important examples in which
$f$ is not of the form
\eqref{eq-fav}.
The minimum success probability
\begin{align}
f_{\text{min}}
:=\min\{\,q_j \mid j\in J\,\}
\label{eq-fmmin}
\end{align}
is such an example.
Roughly speaking,
$1/f_{\text{min}}$
trials are enough to determine the
true state.
This is the operational meaning of the minimum success probability
$f_{\text{min}}$.
Generalizing these two examples,
we define the class of evaluation functions for unambiguous
discrimination.
\begin{defi}\label{defi-evf}
We call a function $f:(0,1]^J\to\mathbb{R}$ an evaluation function if
\begin{align}
x_j>y_j \ \text{for all}\ j\in J\implies f(x_j)>f(y_j)
\end{align} holds for all $(x_j)_{j\in J},\, (y_j)_{j\in J} \in (0,1]^J$. We say an unambiguous discrimination measurement $(\inst{I}_\omega)_{\omega \in J\cup\{?\}}$ between the states $(\rho_j)_{j\in J}$ is better if the value $f(\tr[\inst{I}_j\rho_j])$ is larger, and optimal if no other measurement exceeds the value.
\end{defi}
The class of ``evaluation functions'' defined here is so large
that,
when $J$ is finite,
one could hardly imagine any functions
that suit the term
and do not belong to the defined class.
For example,
the class contains
$f_{\text{av}}$ and
$f_{\text{min}}$.
An evaluation function need not be linear or convex.
When $J$ is infinite,
however,
$f_{\text{inf}}$ [see \eqref{eq-finf}],
which is a natural generalization of
$f_{\text{min}}$,
is excluded from the class.
Such functions are more suitably discussed in the context of
uniform distinguishability
(see Sec.~\ref{sec-thm-uni}).
\section{Result on distinguishability} \label{sec-thm}
Now, we can state the first main result.
\begin{theo}
Optimal discriminations make
distinguishable states indistinguishable
when the discrimination fails (gives ``?'').
Namely, assume that
the measurement
$(\inst{I}_{\omega})_{\omega\in J\cup \{?\}}$
achieves an optimal
unambiguous discrimination between the states
$(\rho_j)_{j\in J}$
and that the condition
\begin{align}
&\tr[\inst{I}_j\rho_j]<1 \quad\text{ for all $j\in J$}
\label{theo-eq-assumption}
\end{align}
holds.
Then the resulting states
under the condition that
the outcome is {``?''}, defined by
\begin{align}
\left(\frac{\inst{I}_?\rho_j}{\tr[\inst{I}_?\rho_j]} \right)_{j\in J},
\label{theo-eq}
\end{align}
are not distinguishable.
Here,
the optimality is defined
by an arbitrary evaluation function
in Definiton~\ref{defi-evf}.
\end{theo}
Note that the condition $\tr[\inst{I}_j\rho_j]<1$ is equivalent to
$\tr[\inst{I}_?\rho_j]>0$, which ensures the well-definedness of
the resulting states \eqref{theo-eq}.
We will discuss the case
that
this condition fails in Sec.~\ref{sec-perfect}.
We
explain the idea of the proof first
and then give the formal
one.
The simple but important idea is to prove the contrapositive,
namely,
if the states \eqref{theo-eq}
are distinguishable then the discrimination measurement
$(\inst{I}_\omega)_{\omega\in J\cup\{?\}}$ is not optimal.
We thus construct
a discrimination measurement
that is better than the original one,
$(\inst{I}_\omega)_{\omega\in J\cup\{?\}}$,
assuming that \eqref{theo-eq} are distinguishable.
The measurement consisting of the following two steps does the
task.
\begin{enumerate}
\item
Perform the original discrimination
$(\inst{I}_{\omega})_{\omega\in
J\cup\{?\}}$.
If the outcome of this measurement is $j\in J$,
then decide the true state is $\rho_j$
regardless of the second step.
If the outcome is {``?''},
then defer the decision and proceed to the next step.
\item
Perform the discrimination of states \eqref{theo-eq},
whose existence is guaranteed by the assumption.
If the outcome of this measurement is $j\in J$,
decide the true state is $\rho_j$.
Otherwise, give up on the decision and answer {``?''}.
\end{enumerate}
We show below
that this combined discrimination measurement
truly improves the original one
$(\inst{I}_{\omega})_{\omega\in J\cup\{?\}}$.
\begin{proof}[Proof of the Theorem]
Note that
\begin{align}
\tr[\inst{I}_?\rho_j]
&= 1-\tr[\inst{I}_j\rho_j]
>0
\end{align}
by the assumption of the Theorem.
We will prove the contrapositive.
Let us assume \eqref{theo-eq} admits an unambiguous discrimination
measurement $(\inst{I}'_\omega)_{\omega\in {\jincon}}$.
Because $(\inst{I}_{\omega})_{\omega\in{\jincon}}$ and $(\inst{I}'_{\omega})_{\omega\in{\jincon}}$
discriminate between $(\rho_j)_{j\in J}$ and between \eqref{theo-eq}, respectively,
one obtains, by Definition~\ref{defi-uud},
\begin{align}
&\tr\left[\inst{I}_j\rho_k\right]
=0,&
& \tr\left[\inst{I}_j\rho_j\right]>0,
\label{theo-pr-eq1}\\
&\tr\left[\inst{I}_j
\frac{\inst{I}_?\rho_k}{\tr[\inst{I}_?\rho_k]}
\right]
=0,&
&
\tr\left[\inst{I}_j
\frac{\inst{I}_?\rho_j}{\tr[\inst{I}_?\rho_j]}
\right]
>0.
\label{theo-pr-eq2}
\end{align}
for $j, k\in J$ with $j\ne k$.
Let us define an instrument
$(\inst{J}_{\omega})_{\omega\in {\jincon}}$
by
\begin{align}
\inst{J}_j
&:=
\bigg(\sum_{\omega'\in{\jincon}}\inst{I}'_{\omega'}\bigg) \inst{I}_j+ \inst{I}'_{j} \inst{I}_?, &
j\in J,\\
\inst{J}_?&:=
\inst{I}'_?\inst{I}_?,
\label{theo-pr-eq3}
\end{align}
We will prove that
the instrument $(\inst{J}_{\omega})_{\omega\in{\jincon}}$ unambiguously discriminates between the
states $(\rho_j)_{j\in J}$ better than $(\inst{I}_\omega)_{\omega\in{\jincon}}$
in the rest of this proof.
First, it is readily seen that
$ (\inst{J}_{\omega})_{\omega\in {\jincon}}$ is an instrument
since
$\sum_{\omega\in {\jincon}} \inst{J}_\omega=\sum_{\omega',\omega\in {\jincon}}
\inst{I}'_{\omega'}\inst{I}_{\omega} $.
Second,
we calculate the probabilities $\tr\left[ \inst{J}_{j}\rho_k \right]$
for all $j,k\in J$:
\begin{align}
&\tr\left[ \inst{J}_{j}\rho_k \right]\notag\\
&=
\tr\left[
\left( \sum \inst{I}'_{\omega} \right) \inst{I}_j \rho_k
\right]
+
\tr\left[
\inst{I}'_{j}\inst{I}_? \rho_k
\right]
\notag\\
&=
\tr\left[ \inst{I}_j \rho_k \right]
+
(\tr[ \inst{I}_? \rho_k ])
\tr\left[
\inst{I}'_{j} \frac{\inst{I}_? \rho_k}{ \tr[\inst{I}_? \rho_k] }
\right]
\notag\\
&=
\delta_{j,k} \left(
\tr\left[ \inst{I}_k \rho_k \right]
+
(\tr[ \inst{I}_? \rho_k ])
\tr\left[
\inst{I}'_{k} \frac{\inst{I}_? \rho_k}{ \tr[\inst{I}_? \rho_k] }
\right]
\right),
\label{theo-rp-succpr}
\end{align}
where the first equality is by
the definition of
$ (\inst{J}_{\omega})_{\omega\in {\jincon}}$,
the second follows from the TP property of $\sum \inst{I}'_{\omega}$,
and the third is by \eqref{theo-pr-eq1} and \eqref{theo-pr-eq2}.
Finally, we show that $(\inst{J}_{\omega})_{\omega\in{\jincon}}$
unambiguously discriminates between $(\rho_j)_{j\in J}$ better than $(\inst{I}_\omega)_{\omega\in {\jincon}}$.
We see that each
$\tr[\inst{J}_j\rho_j]$ is
strictly larger than
$\tr[\inst{I}_j\rho_j]$:
\begin{align}
&\tr[\inst{J}_j\rho_j] - \tr[\inst{I}_j\rho_j]
=
\tr[ \inst{I}_? \rho_j ]
\tr\left[
\inst{I}'_{j} \frac{\inst{I}_? \rho_j}{ \tr[\inst{I}_? \rho_j] }
\right]
>0,
\label{theo-eq-posi}
\end{align}
where the equality is by \eqref{theo-rp-succpr},
and the inequality is
by \eqref{theo-pr-eq1} and \eqref{theo-pr-eq2}.
Eqs.~\eqref{theo-rp-succpr} and \eqref{theo-eq-posi} prove, in particular,
that $(\inst{J}_\omega)_{\omega\in {\jincon}}$
unambiguously discriminates between $(\rho_j)_{j\in J}$.
Let $f$ be any evaluation function described in the Definition~\ref{defi-evf}.
Then, by \eqref{theo-eq-posi},
we have $f(\tr[\inst{J}_j\rho_j])> f(\tr[\inst{I}_j\rho_j])$.
In other words,
the instrument
$(\inst{J}_{\omega})_{\omega\in{\jincon}}$
discriminates between
$(\rho_j)_{j\in J}$
better than $(\inst{I}_\omega)_{\omega\in {\jincon}}$.
This completes the proof.
\end{proof}
We will discuss
the assumption \eqref{theo-eq-assumption} in the Theorem
in Sec.~\ref{sec-perfect}.
\section{Result on uniform distinguishability} \label{sec-thm-uni}
As the number of states becomes infinite, the concept of distinguishability
is naturally divided into two:
``distinguishability'' and ``uniform distinguishability''~\cite{1751-8121-49-26-265201}.
We discussed the former
in the preceding sections.
We consider the latter in this section.
We recall
that the class of evaluation functions
in the Definition in Sec.~\ref{sec-uud}
becomes slightly restricted
when
the index set $J$ is not finite.
Although $f_{\text{av}}$
is
included in the class,
an important
evaluation function
\begin{align}
f_{\text{inf}}(x_j)
:=\inf\{\, x_j\mid j\in J\,\},
\label{eq-finf}
\end{align}
which generalizes
$f_{\text{min}}$,
is excluded.
The function
$f_{\text{inf}}$
has a definite operational meaning
similar to
$f_{\text{min}}$.
Hence, it is
a natural demand to include
such an evaluation function.
It can be done
by introducing the uniform distinguishability.
We provide the uniform version of
Definitions~\ref{defi-uud} and \ref{defi-evf},
and the Theorem.
\begin{defi-a}\label{defi-uud-unif}
A \emph{uniform unambiguous discrimination measurement}
between
$(\rho_j)_{j\in J}$ is
an instrument $(\inst{I}_\omega)_{\omega \in J\cup\{?\}}$ that satisfies,
for $j\ne k\in J$,
\begin{align}
\tr[\inst{I}_j\rho_k]
&=0,&
\inf\{\,\tr[\inst{I}_j\rho_j] \mid j\in J\,\}>0.
\label{eq-def-uud-unif}
\end{align}
The states $(\rho_j)_{j\in J}$ are said to be
\emph{uniformly distinguishable}
when they admit at least one
uniform
unambiguous discrimination measurement
between them.
\end{defi-a}
Distinguishability is a weaker condition than
uniform distinguishability.
Consider, for example,
the states $(\rho_j)_{j\in\mathbb{N}}$ that
are merely distinguishable
with success probabilities
$q_j=\tr[\inst{I}_j\rho_j]=1/j$.
In this case,
one cannot predict how many trials suffice
to determine the true state
before performing the discrimination
since $1/f_\text{inf}(q_j)=1/0=\infty$.
Uniform distinguishability
cures this problem
and
provides a natural framework for infinitely many states.
\begin{defi-a}\label{defi-evf-unif}
We call a function $f:(0,1]^J\to\mathbb{R}$
an evaluation function \emph{for uniform discrimination} if
\begin{align}
\inf\{\, x_j-y_j \mid j\in J\,\} >0\implies f(x_j)-f(y_j)>0,
\label{def-evfunc-inf}
\end{align}
holds, where $(x_j)_{j\in J},\, (y_j)_{j\in J} \in (0,1]^J$.
\end{defi-a}
Note that the function $f_\text{inf}$ is an evaluation function for
uniform discrimination as well as $f_\text{av}$.
Evaluation functions for uniform discrimination
comprise a
larger class than mere evaluation functions do.
\begin{theo-a}
Optimal uniform discrimination measurements make
uniformly distinguishable states not uniformly distinguishable
when the discrimination fails (gives ``?'').
Namely, assume that
the measurement
$(\inst{I}_{\omega})_{\omega\in J\cup \{?\}}$
achieves an optimal uniform
unambiguous discrimination between the states
$(\rho_j)_{j\in J}$
and that the condition
\begin{align}
&\sup\{\,\tr[\inst{I}_j\rho_j] \mid j\in J\,\}<1
\label{theo-eq-assumption-a}
\end{align}
holds.
Then the resulting states
under the condition that
the outcome is {``?''}, defined by
\begin{align}
\left(\frac{\inst{I}_?\rho_j}{\tr[\inst{I}_?\rho_j]} \right)_{j\in J},
\end{align}
are not uniformly distinguishable.
Here,
the optimality is defined
by an arbitrary evaluation function
for uniform discrimination
in Definition~\ref{defi-evf}\,$'$.
\end{theo-a}
The definitions and theorem with prime symbols
are equivalent to those without prime in Sec.~\ref{sec-thm} when $J$
is finite.
When $J$ becomes countably infinite,
``uniform distinguishability'' becomes a stronger condition than mere
``distinguishability'' and evaluation functions for uniform
discrimination form a larger class than that of mere evaluation
functions.
The proof of the Theorem$'$ can be given in a way similar to that of
the Theorem
and is omitted [the essential point is
to replace the inequalities ``$\cdots>0$'' with ``$\inf\{\,\cdots\mid
j\in J\,\}>0$''
in
Eqs.~\eqref{theo-pr-eq1},
\eqref{theo-pr-eq2} and
\eqref{theo-eq-posi}].
We note
that an assumption \eqref{theo-eq-assumption-a},
which is stronger
than \eqref{theo-eq-assumption}
in the previous Theorem,
is necessary in the Theorem$'$.
In fact,
when
$\sup q_j=1$ and $q_j<1$,
the original uniform discrimination
(with success probabilities $q_j$)
is not
necessarily improved by a
subsequent uniform discrimination measurement
(with $q_j'$).
The reason is
that the improvements in success probabilities are given by
$(1-q_j)q_j'$ [see \eqref{theo-eq-posi}].
\section{Separation of perfectly distinguishable states}
\label{sec-perfect}
In the Theorem, we assumed that
optimal discrimination measurement
satisfies $\tr[\inst{I}_j\rho_j]<1$.
The excluded case was $\tr[\inst{I}_?\rho_j]=0$,
or equivalently,
$\tr[\inst{I}_j\rho_j]=1$.
We discuss such cases in this section.
\begin{defi}
Let $(\rho_j)_{j\in J}$ be states and
$K$ be a subset of $J$.
The
states $(\rho_j)_{j\in K}$
are said to be
perfectly distinguishable
if there exists
an
unambiguous discrimination measurement
$(\inst{I}_{\omega})_{\omega \in J\cup\{\incon\}}$
between
$(\rho_j)_{j\in J}$
such that
\begin{align}
&\tr[\inst{I}_k\rho_k]=1
\end{align}
holds for all $k\in K$.
\end{defi}
\begin{prop}
Assume
that
the
states $(\rho_j)_{j\in J}$ are
distinguishable and
that $(\rho_j)_{j\in K},\, K\subset J$, are
perfectly distinguishable.
Then there exists a (two-outcome projection) measurement
$(\inst{L},\inst{L}')$ such that,
\begin{align}
&\inst{L}\rho_k=\rho_k, &
&\inst{L}'\rho_k=0,&
&k\in K,
\label{eq-prop-1}\\
\intertext{and}
&\inst{L}\rho_\ell=0,&
&\inst{L}'\rho_\ell=\rho_\ell,&
&\ell\in J\setminus K
\label{eq-prop-2}
\end{align}
holds.
\end{prop}
\begin{proof}
Assume $(\rho_k)_{k\in K}$ are perfectly distinguishable by an unambiguous discrimination measurement $(\inst{I}_\omega)_{\omega\in J\cup\{?\}}$.
Let $L\in\bof{H}$ be an operator such that, for all $\rho\in\bone{H}$,
\begin{align}
\sum_{k\in K} \tr[\inst{I}_k\rho]
=\tr[LL^*\rho],
\end{align}
i.e.,
$LL^*$ is the sum of so-called positive-operator valued measure
(POVM) elements for outcomes in $K$.
Then $LL^*\leqslant1$.
Let $P$ be the projection onto the (norm) closure of
$L\hil{H}=\{\, L\xi \mid \xi \in\hil{H} \,\}$.
Then $PL=L$.
Define the instrument $(\inst{L},\inst{L}')$ by
\begin{align}
\inst{L}\rho&=P\rho P,&
\inst{L}'\rho&=(1-P)\rho(1-P),
\end{align}
where $\rho\in\bone{H}$.
We will prove this instrument satisfies
the conditions \eqref{eq-prop-1} and \eqref{eq-prop-2} in the Proposition.
First, we prove \eqref{eq-prop-1}.
Fix $k\in K$.
Because $(\inst{I}_{\omega})_{\omega \in J\cup\{\incon\}}$ is perfect on $K$, we have
\begin{align}
1
&=\tr[\inst{I}_k\rho_k]
\notag \\
&
\leqslant \sum_{k'\in K}\tr[\inst{I}_{k'}\rho_k]
=\tr[LL^*\rho_k]
=\tr[LL^*(P\rho_k P)] \notag \\
&\qquad\qquad\qquad\qquad\qquad\qquad\quad
\leqslant \tr[1(P\rho_k P)].
\end{align}
Therefore $\tr[P\rho_kP]=1$.
Thus, by the cyclic property of the trace, $\tr[(1-P)\rho_k(1-P)]=0$.
Then one has $\rho_k^{1/2}(1-P)=0$,
which proves \eqref{eq-prop-1}.
Second, we prove \eqref{eq-prop-2}.
Fix $\ell\in J\setminus K$.
By the assumption that $(\inst{I}_\omega)_{\omega\in J\cup\{\incon\}}$
unambiguously discriminates between the states,
we have $\tr[\inst{I}_k \rho_\ell]=0$ for all $k\in K$.
Then $0=\sum_{k\in K} \tr[\inst{I}_k\rho_\ell]=\tr[L^*\rho_\ell L]$
and $\rho_\ell^{1/2}L=0$.
Since $P$ is the projection onto the closure of $L\hil{H}$,
we have $\rho_\ell^{1/2}P=0$.
Thus $(1-P)\rho_\ell(1-P)=\rho_\ell$,
which proves \eqref{eq-prop-2}. \end{proof}
By the measurement described in the Proposition, we can see whether the true state $\rho_j$ belongs to $(\rho_k)_{k\in K}$ or to $(\rho_\ell)_{\ell\in J\setminus K}$ without disturbing the true state.
The Theorem in Sec.~\ref{sec-thm} is not applicable when an optimal instrument perfectly discriminates between some of the states. In such a case, one can remove all perfectly distinguishable states beforehand by the Proposition above and then apply the Theorem. Therefore, we can assume \eqref{theo-eq-assumption} in the Theorem without loss of generality.
However, we cannot always assume
\eqref{theo-eq-assumption-a} in the Theorem$'$
physically.
When $q_j:=\tr[\inst{I}_j\rho_j]<1$ and $\sup q_j=1$, the states after an optimal uniform discrimination measurement with the inconclusive outcome may be uniformly distinguishable.
\section{Conclusion and discussions}
\label{sec-con}
We have shown that optimal unambiguous discrimination makes distinguishable states indistinguishable under the condition that the inconclusive outcome {``?''} is obtained. The results extend the previously known ones to infinitely many candidates and output states, which can be pure or mixed, to all quantum mechanically possible measurements, and to virtually all evaluation functions that define optimality. Our proof was based on a simple principle. The best measurements leave no room to carry out the task further. If the resulting states are distinguishable, we can achieve more accurate discrimination by discriminating the resulting states. This made the proof almost obvious and, at the same time, removed restrictions in the previous work~\cite{Chefles1998339}. We would like to emphasize that our proofs of the Theorems do not depend on the criteria of the distinguishability
such as linear independence. The Theorems are a direct consequence of the definition of optimality.
We have also discussed uniform discrimination, which becomes slightly stronger than mere distinguishability when the number of candidate states becomes infinite. We showed the Theorem$'$ for uniform distinguishability. The conclusion of the Theorem$'$ becomes slightly weaker; however, the class of evaluation functions is enlarged and includes the natural measure $f_{\text{inf}}$. The difference between the Theorem and the Theorem$'$ reflects the subtlety of handling infinitely many states.
Besides the main results, we have discussed the case that some candidate states are perfectly distinguishable. We showed that such candidate states can be separated by a two-outcome measurement without disturbing the true state. This
reflects the nature of unambiguous discrimination.
If one is interested in more detailed descriptions of the properties of the resulting states, he/she can make use of the results in our previous work. It was proved that countably many pure states are distinguishable if and only if they are minimal and that they are uniformly distinguishable if and only if they are Riesz-Fischer (both mathematical properties are generalizations of linear independence to the case of infinitely many vectors). We also derived the condition for countably many general (possibly mixed) states to be distinguishable (see also \cite{PhysRevA.70.012308} for the case of finitely many states). On the other hand, the condition for countably many general states to be \textit{uniformly} distinguishable can in principle be stated based on our previous work; however, the condition so derived seems too complicated to be practical. It may be an interesting problem to simplify the condition for uniform distinguishability, which
helps understand the resulting states or the disturbance of the optimal discrimination.
Finally, we would like to make a general comment on the simple principle which we have used to prove the theorems (that the optimal measurements leave no room to carry out the task further). The idea is itself very simple and obvious so that everyone understands it readily. However, it is not trivial in what context this idea really works well and how to apply the idea to each context. Note that the fact that the idea works in the context of unambiguous discrimination under an appropriate setting, as we have demonstrated in this paper, is itself not trivial. For example, without the inconclusive outcome, it might be impossible to make use of the idea in
a similar manner. The simple idea seems to be applicable to a wide variety of subjects
and also allows a unified discussion. It would be an interesting work to find other subjects
in which the idea can draw useful conclusions.
\end{document} | arXiv |
A Clear Look at EBITDA
By Ben McClure
Normally, investors focus on cash flow, net income, and revenues as the basic measures of corporate health and value. But over the years, another measure has crept into quarterly reports and accounts: earnings before interest, taxes, depreciation, and amortization (EBITDA). While investors can use EBITDA to analyze and compare profitability between companies and industries, they should understand that there are serious limits to what the metric can tell them about a company. Here we look at why this measure has become so popular and why, in many cases, it should be treated with caution.
Earnings before interest, taxes, depreciation, and amortization (EBITDA) is a metric that measures a company's overall financial performance.
In the mid-1980s, investors began to use EBITDA to determine if a distressed company would be able to pay back the interest on a leveraged buyout deal.
EBITDA is now commonly used to compare the financial health of companies and to evaluate firms with different tax rates and depreciation policies.
Among its drawbacks, EBITDA is not a substitute for analyzing a company's cash flow and can make a company look like it has more money to make interest payments than it really does.
EBITDA also ignores the quality of a company's earnings and can make it look cheaper than it really is.
EBITDA is a measure of profits. While U.S. generally accepted accounting principles (GAAP) do not require companies to disclose their EBITDA, it can be worked out using the information found in a company's financial statements.
The usual shortcut to calculate EBITDA is to start with operating profit, also called earnings before interest and tax (EBIT), and then add back depreciation and amortization. However, an easier and more straightforward formula for calculating EBITDA is as follows:
EBITDA = Net Profit + Interest + Taxes + Depreciation + Amortization
The earnings, tax, and interest figures are found on the income statement, while the depreciation and amortization figures are normally found in the notes to operating profit or on the cash flow statement.
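As a quick illustration with invented figures: if a company reports net profit of $20 million, interest expense of $5 million, taxes of $7 million, depreciation of $6 million, and amortization of $2 million, then its EBITDA is $20M + $5M + $7M + $6M + $2M = $40 million. (These numbers are hypothetical and serve only to show the arithmetic.)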
Should You Ignore EBITDA?
EBITDA first came to prominence in the mid-1980s as leveraged buyout investors examined distressed companies that needed financial restructuring. They used EBITDA to calculate quickly whether these companies could pay back the interest on these financed deals.
Leveraged buyout bankers promoted EBITDA as a tool to determine whether a company could service its debt in the near term, say over a year or two. Looking at the company's EBITDA-to-interest coverage ratio could give investors a sense of whether a company could meet the heavier interest payments it would face after restructuring.
The use of EBITDA has since spread to a wide range of businesses. Its proponents argue that EBITDA offers a clearer reflection of operations by stripping out expenses that can obscure how the company is really performing.
Easily Understood Financial Health of a Company
Interest, which is largely a function of management's choice of financing, is ignored in EBITDA. Taxes are left out because they can vary widely depending on acquisitions and losses in prior years; this variation can distort net income. Finally, EBITDA removes the arbitrary and subjective judgments that can go into calculating depreciation and amortization, such as useful lives, residual values, and various depreciation methods.
By eliminating these items, EBITDA makes it easier to compare the financial health of various companies. It is also useful for evaluating firms with different capital structures, tax rates, and depreciation policies. EBITDA further gives investors a sense of how much money a young or restructured company might generate before it has to hand over payments to creditors and the taxman.
All the same, one of the biggest reasons for EBITDA's popularity is that it shows higher profit numbers than just operating profits. It has become the metric of choice for highly leveraged companies in capital-intensive industries such as cable and telecommunications.
While EBITDA may be a widely accepted indicator of performance, using it as a single measure of earnings or cash flow can be very misleading. A company can make its financial picture more attractive by touting its EBITDA performance, shifting investors' attention away from high debt levels and unsightly expenses against earnings. In the absence of other considerations, EBITDA provides an incomplete and dangerous picture of financial health. Here are four good reasons to be wary of EBITDA.
No Substitute for Cash Flow
Some analysts and journalists urge investors to use EBITDA as a measure of cash flow. This advice is illogical and hazardous for investors. For starters, taxation and interest are real cash items, and, therefore, they're not at all optional. A company that does not pay its government taxes or services its loans will not stay in business for long.
Unlike proper measures of cash flow, EBITDA ignores changes in working capital, the cash needed to cover day-to-day operations. This is most problematic in cases of fast-growing companies, which require increased investment in receivables and inventory to convert their growth into sales. Those working capital investments consume cash, but they are neglected by EBITDA.
Even if a company just breaks even on an EBITDA basis, it will not generate enough cash to replace the basic capital assets used in the business. Treating EBITDA as a substitute for cash flow can be dangerous because it gives investors incomplete information about cash expenses.
If you want to know the cash from operations, just flip to the company's cash flow statement.
Skews Interest Coverage
EBITDA can easily make a company look like it has more money to make interest payments. Consider a company with $10 million in operating profits and $15 million in interest charges. By adding back depreciation and amortization expenses of $8 million, the company suddenly has EBITDA of $18 million and appears to have enough money to cover its interest payments.
Depreciation and amortization are added back based on the flawed assumption that these expenses are avoidable. Even though depreciation and amortization are non-cash items, they can't be postponed indefinitely. Equipment inevitably wears out and funds will be needed to replace or upgrade it.
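To make the arithmetic of that example explicit, the following sketch contrasts EBIT-based and EBITDA-based interest coverage using the hypothetical figures above:

```python
operating_profit = 10_000_000            # EBIT
interest_charges = 15_000_000
depreciation_amortization = 8_000_000

ebitda = operating_profit + depreciation_amortization

ebit_coverage = operating_profit / interest_charges    # 0.67x: cannot cover interest
ebitda_coverage = ebitda / interest_charges            # 1.20x: looks (misleadingly) sufficient

print(f"EBIT coverage:   {ebit_coverage:.2f}x")
print(f"EBITDA coverage: {ebitda_coverage:.2f}x")
```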
Ignores Quality of Earnings
While subtracting interest payments, tax charges, depreciation, and amortization from earnings may seem simple enough, different companies use different earnings figures as the starting point for EBITDA. In other words, EBITDA is susceptible to the earnings accounting methods found on the income statement. Even if you account for the distortions that result from interest, taxation, depreciation, and amortization, the earnings figure in EBITDA is still unreliable.
Makes Companies Look Cheaper Than They Are
Worst of all, EBITDA can make a company look less expensive than it really is. When analysts look at stock price multiples of EBITDA rather than bottom-line earnings, they produce lower multiples.
A company may trade at what appears to be a low multiple to its forecast EBITDA, making it appear to be a bargain. However, when comparing that same company using other multiples—such as operating profits or estimated net income—that same company may trade at much higher multiples. To gain a complete picture of a company's valuation, investors need to consider other price multiples besides EBITDA when assessing a company's worth.
Despite its widespread use, EBITDA isn't defined in generally accepted accounting principles, or GAAP. As a result, companies can report EBITDA as they wish. The problem with doing this is that EBITDA doesn't give a complete picture of a company's performance. In many cases, investors may be better off avoiding EBITDA or using it in conjunction with other, more meaningful metrics.
Identity for convolution of central binomial coefficients: $\sum\limits_{k=0}^n \binom{2k}{k}\binom{2(n-k)}{n-k}=2^{2n}$
It's not difficult to show that
$$(1-z^2)^{-1/2}=\sum_{n=0}^\infty \binom{2n}{n}2^{-2n}z^{2n}$$
On the other hand, we have $(1-z^2)^{-1}=\sum z^{2n}$. Squaring the first power series and comparing terms gives us
$$\sum_{k=0}^n \binom{2k}{k}\binom{2(n-k)}{n-k}2^{-2n}=1$$
that is,
$$\sum_{k=0}^n \binom{2k}{k}\binom{2(n-k)}{n-k}=2^{2n}$$
My question: is there a more direct, combinatorial proof of this identity? I've been racking my brains trying to come up with one but I'm not having much success.
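A quick numerical sanity check of the identity for small $n$ (this is only a verification, not the combinatorial proof being asked for):

```python
from math import comb

for n in range(10):
    lhs = sum(comb(2 * k, k) * comb(2 * (n - k), n - k) for k in range(n + 1))
    assert lhs == 4 ** n, (n, lhs)
print("identity verified for n = 0..9")
```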
combinatorics summation binomial-coefficients
Michael Rozenberg
Skatche
$\begingroup$ This is identity 5.39 (p. 187, 2nd ed.) in Concrete Mathematics. The proof they give there uses Vandermonde's convolution and the identity $\binom{-1/2}{n} = (-1/4)^n \binom{2n}{n}$. It's not a combinatorial proof, but it might be interesting to take a look at anyway. $\endgroup$ – Mike Spivey May 9 '11 at 16:15
$\begingroup$ See here for another combinatorial proof. $\endgroup$ – punctured dusk Apr 8 '15 at 15:30
It is possible to give a direct combinatorial proof, but it is quite difficult to find it.
One possibility is to use paths between points with integer coordinates and steps $(1,1)$ and $(1,-1)$.
1) $\binom{2n}{n}$ counts all paths from $(0,0)$ to $(2n,0)$.
2) $2^{2n}$ counts all paths starting from $(0,0)$ with $2n$ steps.
3) $\binom{2n}{n}$ counts all paths with $2n$ steps that never touch the $x$-axis again after the start. (This one is not obvious, but can be proved with a bijection.)
Now you can conclude that all paths are a concatenation of a path that returns a certain number of times to the $x$-axis and a path that never does.
Note that the main difficulty here was that the two binomial coefficients are interpreted differently.
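For small $n$, the counts in 1)–3) and the resulting convolution can be verified by brute-force enumeration of paths; a rough sketch:

```python
from itertools import product
from math import comb

def prefix_sums(path):
    total, sums = 0, []
    for step in path:
        total += step
        sums.append(total)
    return sums

n = 5
all_paths = list(product((1, -1), repeat=2 * n))
# 1) paths of 2n steps ending on the x-axis
returning = sum(1 for p in all_paths if prefix_sums(p)[-1] == 0)
# 3) paths of 2n steps that never touch the x-axis again after the start
avoiding = sum(1 for p in all_paths if 0 not in prefix_sums(p))
assert returning == comb(2 * n, n)
assert avoiding == comb(2 * n, n)
# 2) all 4**n paths decompose by the last return to the x-axis
assert sum(comb(2 * k, k) * comb(2 * (n - k), n - k) for k in range(n + 1)) == 4 ** n
```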
Edited to add reference: In Richard P. Stanley: Enumerative Combinatorics Volume 1, Chapter 1, in the solution to exercise 2c, the following reference is given:
The problem of giving a combinatorial proof was raised by P. Veress and solved by G. Hajos in the 1930s. A recent proof appears in D.J. Kleitman, Studies in Applied Math. 54 (1975), 289 - 292. See also M. Sved, Math. Intelligencer, vol.6, no. 4 (1984), 44-45.
But I have not looked to check which article gives the proof I have outlined above.
$\begingroup$ Here is the Sved article; I can't seem to find an electronic version of Kleitman's. $\endgroup$ – J. M. is a poor mathematician May 9 '11 at 13:01
$\begingroup$ There's also another combinatorial proof by de Angelis in Amer. Math. Montly, Aug-Sept 2006, and a probabilistic proof, which generalizes to $\prod_{i_1+\dots+i_m=n} \binom{2i_1}{i_1} \dots \binom{2i_m}{i_m} = (4^n/n!) \cdot \Gamma(m/2+n) / \Gamma(m/2)$, by Chang & Xu in Amer. Math. Montly, Feb 2011. $\endgroup$ – Hans Lundmark May 9 '11 at 15:13
$\begingroup$ Links: jstor.org/stable/27642007 and jstor.org/stable/10.4169/amer.math.monthly.118.02.175 $\endgroup$ – Hans Lundmark May 9 '11 at 15:18
$\begingroup$ @Hand Lundmark: The product should be a sum. Thanks to all for the references. $\endgroup$ – Phira May 9 '11 at 15:20
$\begingroup$ The hard part of the proof is hidden in part (3) with the words "can be proved". Scott's answer here tries to actually prove this part with a giant lemma, but it doesn't look correct. The Sved article (second one here) gives a bijection due to Gessel but the description is slightly confusing: in the (c)-to-(b) move, when it refers to "the segment following it", this is referring to the entire path after the first contact point. (Took me a while to figure that out!) $\endgroup$ – Matt Dec 14 '11 at 8:43
Here is another proof, one that I slightly prefer. I'll start with the hardest part.
Lemma. The number of all words of length $n$ in the alphabet $\{A,B\}$ such that no prefix (left factor) of it contains more letters $B$ than $A$, is $\binom n{\lceil n/2\rceil}$.
Instead of these words one may also take, interpreting $A$ as an up-step and $B$ as a down-step, paths as in the answer by Phira that never go below the horizontal axis; or one can formulate them as ballot sequences as in Bertrand's ballot problem, with the difference that we allow $B$ to catch up with $A$ without overtaking, and that the (non-negative) size of the eventual lead of $A$ is not fixed.
Proof. The following step can be applied to any word for which some prefix does contain more letters $B$ than$~A$: find the smallest prefix for which the majority of its letters $B$ over its letters $A$ is maximal among all prefixes, and change its last letter (which is a $B$) into $A$. There is an inverse step that can be applied to any word with more letters $A$ than letters $B$ (or more generally to a word for which some suffix (right factor) has this property): find the smallest suffix for which the majority of its letters $A$ over its letters $B$ is maximal among all suffixes, and change its first letter (which is an $A$) into $B$. The easiest way to see that these are inverse operations is that the presence of subwords in the Dyck language for $\{A,B\}$ has no effect on these operations (in particular they will never change inside such words), and that what remains when ignoring such subwords is of the form $BB\ldots BAA\ldots A$, where the last $B$ respectively first $A$ will be changed. Now given a word of length $n$ with $\lceil n/2\rceil$ letters $A$ and $\lfloor n/2\rfloor$ letters $B$, one can iterate the first operation until no prefix contains more letters $B$ than $A$, and conversely given a word of length $n$ satisfying that condition, if there are $d\geq0$ more letters $A$ than $B$ in all, one can iterate the reverse operation $\lfloor d/2\rfloor$ times to obtain a word of length $n$ with $\lceil n/2\rceil$ letters $A$ and $\lfloor n/2\rfloor$ letters $B$. This bijection proves the lemma. QED
Now to prove the identity of the question, consider the words of length $2n+1$ in which the letters $A$ are in the majority; their number is $2^{2n+1}/2=2^{2n}$. Consider the longest prefix (possibly empty) in which there are as many letters $A$ as $B$; it has an even length $2k$, and given that length there are $\binom{2k}k$ possibilities for this prefix. The next letter is necessarily an $A$, and after that there is a suffix of length $2n-2k$ in which no prefix (of that suffix) contains more letters $B$ than $A$. By the lemma there are $\binom{2n-2k}{n-k}$ of them, whence the result.
darij grinberg
Marc van Leeuwen
$\begingroup$ Instead of "subwords in the Dyck language for $\left\{A,B\right\}$", can't you just say "subwords of the form $AB$"? $\endgroup$ – darij grinberg Feb 10 '15 at 19:20
$\begingroup$ I would also define what you mean by "majority of its letters $B$ over its letters $A$" (you seem to use it for "number of its letters $B$ minus number of its letters $A$", but it also could be mistaken for a ratio by someone not used to combinatorics). Otherwise, this is a very nicely written half-page introduction into crystal operators; I didn't expect to see a bijective proof of this identity that short! $\endgroup$ – darij grinberg Feb 10 '15 at 19:27
There is also a probabilistic proof of this identity.
Start with an urn containing one red marble and one blue marble. Make a series of $n$ draws from the urn; for each draw, remove a random ball in the urn, then put it back, along with two extra balls of the same color. We then ask, what is the probability that exactly $k$ of the draws were red?
The probability that the first $k$ draws are red and the last $n-k$ are blue is $$ \frac12\cdot\frac{3}4\cdot\frac5{6}\cdots\cdot\frac{2k-1}{2k}\cdot\frac1{2k+2}\cdot\frac{3}{2k+4}\cdots\frac{2(n-k)-1}{2n}=\frac{(2k-1)!!(2(n-k)-1)!!}{(2n)!!} $$ where $n!!=\prod_{k=0}^{\lfloor n/2\rfloor-1}(n-2k)=n(n-2)(n-4)\cdots$.
It is not hard to see that every sequence of $k$ red and $n-k$ blue draws has this same probability; rearranging the order of draws just changes the order of the factors in the numerator. Therefore, the probability of $k$ red draws is, using the identities $(2n)!!=2^nn!$ and $(2k-1)!!=\frac{(2k)!}{(2k)!!}=\frac{(2k)!}{2^kk!}$, $$ \binom{n}k\frac{(2k-1)!!(2(n-k)-1)!!}{(2n)!!}=\frac{\binom{2k}k\binom{2(n-k)}{n-k}}{2^{2n}} $$ Since these probabilities must sum to $1$, the desired identity follows!
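The urn scheme is also easy to simulate; the following small sketch (trial count chosen arbitrarily) compares empirical frequencies with the closed form:

```python
import random
from math import comb

def polya_red_draws(n):
    """One run of the urn: start with 1 red, 1 blue; each draw adds 2 balls of the drawn color."""
    red, blue, reds = 1, 1, 0
    for _ in range(n):
        if random.random() < red / (red + blue):
            red += 2
            reds += 1
        else:
            blue += 2
    return reds

n, trials = 6, 200_000
counts = [0] * (n + 1)
for _ in range(trials):
    counts[polya_red_draws(n)] += 1

for k in range(n + 1):
    exact = comb(2 * k, k) * comb(2 * (n - k), n - k) / 4 ** n
    print(k, round(counts[k] / trials, 4), round(exact, 4))
```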
Mike Earnest
Here is another proof by Egecioglu. It was published as a technical report, not a journal paper, so it's not easy to find.
Alexander Burstein
Proceedings from the 12th International BBCC conference
PanDelos: a dictionary-based method for pan-genome content discovery
Vincenzo Bonnici, Rosalba Giugno & Vincenzo Manca
Pan-genome approaches allow the discovery of homology relations in a set of genomes by determining how gene families are distributed among them. Retrieving the complete gene distribution of a class of genomes is an NP-hard problem, and computational costs grow with the number of analyzed genomes because all-against-all gene comparisons are required to solve the problem completely. In the presence of phylogenetically distant genomes, the variability introduced by gene duplication and transmission makes the task of recognizing homologous genes even more difficult. A challenge in this field is the design of fast and adaptive similarity measures that can recover a suitable pan-genome structure of homology relations.
We present PanDelos, a stand-alone tool for the discovery of pan-genome contents among phylogenetically distant genomes. The methodology is based on information theory and network analysis. It is parameter-free because thresholds are automatically deduced from the context. PanDelos avoids sequence alignment by introducing a measure based on k-mer multiplicity. The k-mer length is defined according to general arguments rather than empirical considerations. Candidate homology relations are integrated into a global network and groups of homologous genes are extracted by applying a community detection algorithm.
PanDelos outperforms the existing approaches, Roary and EDGAR, in terms of running time and quality of the discovered content. Tests were run on collections of real genomes, previously used in analogous studies, and on synthetic benchmarks that represent a fully trusted golden truth. The software is available at https://github.com/GiugnoLab/PanDelos.
A pan-genome can be abstractly considered as a structure defined on a set of genomes. The structure is built by identifying groups of homologous genes [1]. Two genes are homologous if they share a common ancestral gene. Homologous genes can be distinguished into paralogous, when homology occurs within the same genome, or orthologous, when homology occurs between different genomes. We call pan-genome content discovery the determination of homologous groups within a collection of genomes.
Different mechanisms are involved in gene transmission. Paralogy is linked to sequence duplication within the same genome. Orthology is associated with "vertical" transmission, which happens among genomes in the same lineage and involves most of the genetic content. On the contrary, "horizontal" transmission occurs between genomes of organisms of different lineages and involves one or a few genes. Genes present in every genome are the core genes of the pan-genome and may be involved in essential living functionalities. Sequences shared by a subset of genomes are referred to as dispensable and represent variable features. Singleton genes are present in only one genome and represent some genome-specific functionality. The collective analysis of all the genes serves many specific interests, for example the study of a bacterial strain of a given species [2, 3]. Pan-genome analyses have found many applications in clinical studies [4, 5]; for example, they help in identifying drug-target genes [6, 7] and in exploring phylogenetic lineages of bacteria [8] that can be linked to strain-specific disease phenotypes [9].
Approaches to pan-genome content discovery need to take into account that gene duplication and transmission may introduce sequence alterations [10–13]. These variations make the task of recognizing homologous genes difficult, especially when ancestor genomes are no longer available. Core genes are often under strong evolutionary selection, so their sequences are transmitted almost without alteration. The amount of variation affecting dispensable genes is more heterogeneous, and the similarity between homologous sequences tends to decrease with phylogenetic distance. When closely related organisms are analyzed, reasonable thresholds on sequence similarity can be applied to recognize gene families. However, when distant genomes are compared, global thresholds become less feasible, and suitable notions are needed to define adaptive thresholds, especially when the genomes show non-uniform phylogenetic distances.
The discovery of a pan-genome content is an NP-hard problem [14], and the complexity of the analysis grows with the number of input genomes, mainly because all-against-all comparisons between gene sets are required to solve the task. State-of-the-art tools for pan-genome analysis are Roary [15] and EDGAR [16]. They use heuristics to reduce the computational requirements, both in the definition of thresholds for sequence alignments and in the number of comparisons their procedures need. Both approaches are based on a widely used strategy that searches for reciprocally most similar genes between compared genomes [17, 18].
Roary combines an approach for clustering gene sequences (CD-HIT) with a procedure based on reciprocal BLAST alignments. CD-HIT [19] clusters sequences by counting the presence of k-mers, substrings of length k, among the analyzed sequences at different values of k. The results of CD-HIT are merged with normalized BLAST scores [20] and clustered via the MCL algorithm [21]. Roary's procedure requires intensive tuning of user-defined parameters to set the thresholds for discarding low homology values. Parameters are set globally, making Roary perform best on closely related genomes.
EDGAR uses adaptive thresholds that depend on the distribution of BLAST gene scores. The retrieval of such a distribution is made feasible by normalizing the alignment scores, taking the self-alignment score of a sequence as the maximum one. The approach is suitable for comparing genomes with a considerable phylogenetic distance, but it has some disadvantages. It requires an expensive number of sequence alignments: for each 1-vs-1 genome comparison, the complete gene sets of the two genomes must be cross-aligned. EDGAR chooses the threshold on the minimum feasible score by computing the distribution of the normalized BLAST scores of all genes. Scores are summed up and represented in a histogram, and a beta distribution is calculated from the mean and standard deviation of the observed values. A 97% quantile of the density function is used as a cutoff to assess orthology. The quantile was identified by manual inspection of hundreds of histograms from real cases.
Roary and EDGAR are based on sequence alignment; however, alternative alignment-free strategies can be used, for example to retrieve the domain architecture of homologous genes [22] or to detect horizontal gene transfer [23].
We present PanDelos, a methodology for discovering pan-genome contents of phylogenetically distant organisms, based on information theory and network analysis. It is parameter-free: thresholds are automatically deduced from the context. PanDelos avoids sequence alignment by introducing a similarity measure based on k-mer multiplicities rather than on the simple presence or absence of k-mers. The strength of the strategy is supported by a non-empirical choice of the most appropriate k-mer length. Moreover, the selection of the minimum similarity for which two sequences are eligible to be homologs is inspired by knowledge coming from read mapping in next-generation sequencing and from sequence reconstruction processes. Reciprocal best hits in 1-vs-1 genome comparisons, aimed at discovering orthologous genes, are used as a basis to infer thresholds for paralog discovery. Homology relations are integrated into a global network, and groups of homologous genes are extracted from it by applying a community detection algorithm. PanDelos outperforms the existing approaches, Roary and EDGAR, in terms of running time and quality of the discovered content, both in real applications and in synthetic benchmarks that represent a fully trusted golden truth.
The detection of gene homology performed by PanDelos is divided into 5 main steps that combine a candidate selection based on k-dictionaries with a refinement procedure developed by means of network analysis. First, an optimal value of the word length k is chosen according to properties of the input collection of genomes. Genes are then compared and candidate homologous pairs are selected. The selection first requires a minimum amount of intersection between the k-dictionaries of two genes. Then, the generalized Jaccard similarity is used to measure the similarity between genes in order to extract bidirectional best hits. The extraction produces a homology network from which, at the end of a refinement procedure, gene families are retrieved. Figure 1 gives an overview of the overall schema.
Overview of PanDelos Pan-genome computation of three genomes (represented as blue, red, and violet). Genomes are taken in input as list of genetic sequences (represented as colored rectangles). The homology detection schema is divided into 5 steps. PanDelos, at first, chooses an optimal word length that is used to compare dictionaries of genetic sequences. The 1-vs-1 genome comparisons are performed. An initial candidate gene pairs selection is obtained by applying a minimum percentage threshold on the dictionary intersection. Then, PanDelos computes generalized Jaccard similarities among genes (shown in the bottom left matrix). Only pairs of genes that passed the threshold applied on dictionary percentages are taken in consideration for the similarity computation. Pairs that did not pass the threshold are represented by gray tiles. Next, PanDelos computes bidirectional best hits (BBH), here represented with green borders. On the bottom right, a similarity network, made of reciprocal best hits is shown. Border colors represent the genomes to which genes belong. A final computational step discards edges in inconsistent components of the network and returns the final list of gene families. A component is inconsistent if it contains two genes belonging to the same genome that are not accounted as paralogs. A family may contains orthologous as well as paralogous genes, such as the yellow/brown ones. Families are finally classified as singletons, dispensable or core depending on their presence among genomes (borders of the rectangles represent the genomes the genes belong to)
In what follows, we first describe the details of PanDelos, together with the engineering and extension of existing data structures that allow it to reach high performance and efficiency.
Basic notation
A gene is represented as a string s over the amino acid alphabet Γ, s=a1a2…ah, with ai∈Γ for 1≤i≤h. The k-mers of s are the substrings of s having length k. A sequence s, having length |s|, contains |s|−k+1 occurrences of k-mers. A k-mer w may occurs several times within s. The number of times that w occurs in s is called the multiplicity of w in s and it is denoted by cs(w). The k-dictionary Dk(s) of s is given by the set of all distinct k-mers occurring in s:
$$D_{k}(s) = \{\, s[i..i+k] : 1 \leq i \leq |s| - k + 1 \,\}, $$

where s[i..i+k] denotes the substring of s starting at position i and ending before position i+k, i.e. spanning k symbols.
Given a population of n individual genomes, we denote by \(\mathbb {G}^{i} = \{s_{1}, s_{2},\dots,s_{m} \}\) the set of genes of the i-th individual. The genetic length of \(\mathbb {G}^{i}\) is given by the sum of the lengths of the genes in \(\mathbb {G}^{i}\) and it is denoted by \(\langle \mathbb {G}^{i} \rangle \). On the contrary, when whole DNA sequences are taken into account the genomic length of the i-th individual, |Gi|, is given by the total length of the DNA sequence. In what follows, we use the term genome to indicate both a DNA sequence G and the corresponding set of genes \(\mathbb {G}\). The context will suggest the intended appropriated meaning.
Choosing an optimal word length for gene dictionary construction
A dictionary-based measure is highly sensitive to the length k of the words that compose the dictionary. In analyzing whole genome sequences, a crucial resolution is given by k=log4|G|, where 4 is the cardinality of the nucleotide alphabet [24, 25]. This value was proven to reveal structural laws that emerge from the maximum entropic difference between real genomes and random ones of the same length. In our case, genes are represented by the amino acid sequences of the proteins they encode, thus the alphabet to be considered is Γ rather than the 4-symbol nucleotide alphabet. Thus, we take into account the set of genetic sequences belonging to all the n input genomes by setting the value of the optimal word length k as:
$$k = {log}_{|\Gamma|} \sum\limits_{i=1}^{n} \langle \mathbb{G}^{i} \rangle. $$
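As a minimal sketch of this choice, assuming the 20-letter amino acid alphabet and simple rounding of the logarithm (the rounding rule is an assumption, not stated in the text):

```python
import math

def optimal_k(genomes, alphabet_size=20):
    """genomes: list of gene sets, each gene an amino acid string.
    Returns k = log_|Gamma|(total genetic length), rounded to an integer."""
    total_genetic_length = sum(len(gene) for genes in genomes for gene in genes)
    return max(1, round(math.log(total_genetic_length, alphabet_size)))
```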
Selection of candidate gene pairs
Unfortunately, no theory exists to define a non-empirical threshold for the application of the Jaccard similarity in the context of gene comparison. Thus, a preliminary step filters pairs of genes that are candidates to be homologous. The intersection coverage of the dictionaries of two genes is used as a criterion of relational relevance between sequences. The criterion requires that the k-mers of Dk(s∩t)=Dk(s)∩Dk(t) occur in s and t with a minimal percentage.
PanDelos creates a set CH of candidate homologous genes by computing, for each pair of genes s and t, \(s \in \mathbb {G}^{i}\) and \(t \in \mathbb {G}^{j}\), the percentage of k-mer occurrences of s that belong to \(D_{k}(s \cap t)\). It is given by

$$p_{k}(s \rightarrow t) = \frac{{\sum\nolimits}_{w \in D_{k}(s \cap t)} c_{s}(w)}{|s| - k + 1}. $$
PanDelos considers two genes s and t as candidate homologs only if both pk(s→t) and pk(t→s) reach the minimum value of 2/k.
The threshold 2/k is not empirically defined but motivated by the following argument. From a sequence s we can extract at most |s|/k distinct non-overlapping k-mers; when the average multiplicity of k-mers in s is close to 1, this number is close to 1/k of the number of all k-mer occurrences of s. However, the lack of overlap denies any possibility of reconstructing s from such a k-dictionary, because there is no indication of how the different k-mers must be arranged to form s. Therefore, we assume that a minimum amount of overlap between consecutive k-mers extracted from s is obtained by doubling the above fraction 1/k. This argument suggests fixing the threshold on pk(s→t) and pk(t→s) at 2/k. In conclusion, s and t are considered homologous candidate genes if pk(s→t)≥2/k and pk(t→s)≥2/k.
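A minimal sketch of the candidate filter, using hash-based k-mer counting over plain amino acid strings (the actual tool relies on the extended suffix array described later):

```python
from collections import Counter

def kmer_counts(seq, k):
    """Multiplicity of each k-mer occurring in seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def coverage(counts_s, counts_t, seq_len_s, k):
    """Fraction of k-mer occurrences of s whose k-mer also occurs in t."""
    shared = set(counts_s) & set(counts_t)
    return sum(counts_s[w] for w in shared) / (seq_len_s - k + 1)

def is_candidate_pair(s, t, k):
    cs, ct = kmer_counts(s, k), kmer_counts(t, k)
    return (coverage(cs, ct, len(s), k) >= 2 / k and
            coverage(ct, cs, len(t), k) >= 2 / k)
```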
Dictionary based gene sequence similarity detection
For each pair of genomes, \(\mathbb {G}^{i}\) and \(\mathbb {G}^{j}\), and for each candidate pair of genes (s,t), such that \(s \in \mathbb {G}^{i}\) and \(t \in \mathbb {G}^{j}\), PanDelos computes their sequences similarity by applying a generalized Jaccard similarity among the k-dictionaries. Note that, in the search for paralogous genes, i is equal to j.
Given two sequences, s and t, and Dk(s∪t)=Dk(s)∪Dk(t) the union of their k-dictionaries, PanDelos uses the following generalized Jaccard similarity Jk(s,t) on k-mer multiplicities:
$$J_{k}(s,t) = \frac{ {\sum\nolimits}_{w \in D_{k}(s \cup t)} min(c_{s}(w), c_{t}(w)) }{{\sum\nolimits}_{w \in D_{k}(s \cup t)} max(c_{s}(w), c_{t}(w))}. $$
It takes values in the interval [0,1]. It is independent of the lengths of the compared sequences and thus it is suitable for comparing sets of sequences having a wide range of lengths.
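Reusing the k-mer counters from the previous sketch, the generalized Jaccard similarity can be written as:

```python
def generalized_jaccard(counts_s, counts_t):
    """Generalized Jaccard similarity on k-mer multiplicities (Counter maps)."""
    union = set(counts_s) | set(counts_t)
    numerator = sum(min(counts_s[w], counts_t[w]) for w in union)    # a Counter returns 0 for absent keys
    denominator = sum(max(counts_s[w], counts_t[w]) for w in union)
    return numerator / denominator if denominator else 0.0
```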
Extraction of gene pairs by bidirectional similarity
In order to obtain the set CHO of orthologous candidate genes, PanDelos computes bidirectional best hits (BBHs) on genes in CH.
Given a gene \(s \in \mathbb {G}^{i}\), the set of best hits of s towards a genome \(\mathbb {G}^{j}\) is given by:
$$BH\left(s, \mathbb{G}^{j}\right) = \left\{ t \in \mathbb{G}^{j} : J_{k}(s,t) = \underset{v \in \mathbb{G}^{j}}{\text{max} } J_{k}(s,v) \right\}. $$
The set of bidirectional best hits of s towards \(\mathbb {G}^{j}\) is given by:
$$BBH\left(s, \mathbb{G}^{j}\right) \,=\, \left\{ t \in \mathbb{G}^{j} : t \in \!BH\left(s,\mathbb{G}^{j}\right) \text{and}\ s \in BH\left(t, \mathbb{G}^{i}\right) \right\} $$
Only genes involved in at least one BBH are kept in CHO.
The BBH strategy is commonly used in pan-genomic analyses, however, it may capture sequences having low similarity. This behavior especially arises with singleton sequences. Two unrelated singletons of the two genomes may form a BBH simply because no orthologs exist and they are reciprocally the best match. PanDelos avoids these cases by performing the BBH strategy only on genes in CH, i.e. on candidate genes 'similar enough'.
In order to obtain the set CHP of paralogous candidate genes, at the end of every 1-vs-1 genome comparison, the minimum score among the BBHs of orthologous candidate genes is used to infer new paralogs. Recalling that PanDelos also compares each genome to itself, the intra-genome BBHs with a score equal to or greater than the minimum inter-genome BBH score (orthology score) are accounted as paralogous. This rule states that the score accounting for orthologous sequences can be used as a threshold to define two genes as paralogous, because their similarity is at least as strong as the minimum trusted similarity between orthologs.
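A rough sketch of the best-hit and bidirectional-best-hit extraction, assuming a map sims that stores the generalized Jaccard similarity of candidate pairs only (pairs that failed the 2/k filter are simply absent):

```python
def best_hits(sims, genes_i, genes_j):
    """For each gene s of genome i, the set of genes of genome j with maximal similarity."""
    bh = {}
    for s in genes_i:
        scores = {t: sims[(s, t)] for t in genes_j if (s, t) in sims}
        if scores:
            best = max(scores.values())
            bh[s] = {t for t, v in scores.items() if v == best}
    return bh

def bidirectional_best_hits(sims, genes_i, genes_j):
    bh_ij = best_hits(sims, genes_i, genes_j)
    sims_ji = {(t, s): v for (s, t), v in sims.items()}
    bh_ji = best_hits(sims_ji, genes_j, genes_i)
    return {(s, t) for s, hits in bh_ij.items() for t in hits if s in bh_ji.get(t, set())}
```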
Gene family detection by network coherence refinement
PanDelos constructs an undirected weighted network from homology information, where each vertex is labelled with a pair \((s, \mathbb {G}_{i})\) formed by a candidate gene and the genome to which it belongs, and where an edge connects two vertices if they are in CHO or CHP. The edge weights are the scores computed applying the generalized Jaccard similarity on candidate genes.
The network may be formed by several connected components, which are the initial candidate homologous gene families. A connected component is defined as inconsistent if it contains two genes belonging to the same genome that are not accounted as paralogs, namely that are not connected by an edge. The inconsistency is resolved by recursively splitting the component into subgroups until a set of consistent subgroups is reached. PanDelos uses the Girvan-Newman algorithm for community detection, which calculates the edge betweenness centrality within the component and progressively removes the edge with the highest centrality [26]. PanDelos normalizes the edge weights by means of the maximum weight present in each connected component.
The resulting pan-genomic structure is given by the final set of consistent connected components plus singleton genes, i.e. the singleton vertices in the network. Components containing genes of all genomes represent the core of the pan-genome. The other components contain the dispensable genes.
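A simplified sketch of the refinement step using networkx; edge weights and their per-component normalization are omitted here, so the Girvan-Newman step operates on the unweighted graph:

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

def is_consistent(nodes, graph, genome_of):
    """A component is consistent if every two genes of the same genome are directly linked (paralogs)."""
    nodes = list(nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if genome_of[u] == genome_of[v] and not graph.has_edge(u, v):
                return False
    return True

def extract_families(graph, genome_of):
    families = []
    queue = [graph.subgraph(c).copy() for c in nx.connected_components(graph)]
    while queue:
        comp = queue.pop()
        if comp.number_of_nodes() <= 1 or is_consistent(comp.nodes, comp, genome_of):
            families.append(set(comp.nodes))
        else:
            # split the inconsistent component once and re-examine the resulting parts
            parts = next(girvan_newman(comp))
            queue.extend(comp.subgraph(p).copy() for p in parts)
    return families
```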
Data structures engineering for fast similarity computation
Limitations of enhanced suffix arrays for pan-genome computations
Given a string s, a suffix array (SA) [27] reports the lexicographically ordered suffixes of s together with their start positions in s. Substring search by means of an SA can be sped up by performing binary searches. An enhanced suffix array (ESA) [28] is a combination of the SA with the LCP (Longest Common Prefix) array, which gives the length of the longest common prefix of a suffix with the one lexicographically preceding it. An ESA allows for efficient recovery of k-mer multiplicities [29]: the values of the LCP array define contiguous regions of the ESA, called LCP-intervals, which identify all the occurrences of the k-mers. Additionally, a further array, here called N, reports for each suffix the distance from its start to the first forward occurrence of the special symbol N [30]. The symbol N is used to represent positions in s that must be discarded in dictionary operations. The ESA structure performs k-mer enumeration in linear time by just doubling the memory requirement of a simple SA. Since each k-mer must be checked for N inclusion, this verification would increase the time complexity by a factor of k; however, with the additional N array, the complexity remains linear.
Examples of indexing structures. In the top side of the image a, an example of the indexing structure ESA+N is shown for the string WLLPPP. The string is indexed by lexicographically sorting its suffixes. The arrays SA, LCP and N are computed according to the ordering. The indexing structure is composed of the three arrays; the other columns shown in the image are virtually extracted. The SA array stores the start positions of the suffixes and is used to keep track of the lexicographic ordering. Values along the LCP and N arrays are used to identify intervals that correspond to specific k-mers [30]. The 1-mer L is identified by an interval that covers the first two positions of the structure, while the 1-mer P covers three positions and the 1-mer W covers one position. Thus, the multiplicities of L, P and W are respectively 2, 3, and 1. 2-mer intervals are shown in the second column from the left. Note that the third position is not covered when 2-mer intervals are extracted because it cannot identify the start of any 2-mer. The second section of the image b, c, d, e, f shows the extension of the indexing structure in order to manage sets of strings. Four input strings, s1, s2, t1 and t2, are indexed. Firstly, a global string is built by concatenating the four strings and by putting a special symbol (represented as N) at the concatenation joints. Then, similarly to the single-string case, the suffixes of the global sequence are sorted in lexicographic order. The sorting procedure defines the content of the SA array, and the LCP, N and SID arrays are computed in accordance with it. The SID array reports, for each suffix, the sequence from which it originates. The indexing structure helps in extracting the information, namely the multiplicities of the 2-mers in every sequence, that is ideally represented in the matrix b. The illustrations d, e and f show the final values that the matrices M, P1 and P2 take after every 2-mer of the global sequence has been taken into account
Methodologies for efficient similarity calculation in sets of strings have been developed by means of suffix trees [31]. This type of data structure inspired the development of suffix arrays as a memory-efficient implementation that does not increase time requirements. However, enhanced suffix arrays are designed for the analysis of a single reference string and are less suitable for sets of strings.
Given two sequences, s and t, the comparison of their k-dictionaries can be performed by listing the k-mers of s and searching for them in t. Taking into account an ESA+N structure, the k-mers listing is performed in |s| time, and the search of each k-mer in t takes \(\mathcal {O}(k \cdot log(|t|))\) time. Since at most |s| distinct k-mers are in s, the overall time is \(\mathcal {O}(|s| \cdot k \cdot log(|t|))\).
The search described above takes into account only Dk(s), but t may contain k-mers not listed in the dictionary of s, thus the time requirement is doubled because Dk(t) must also be scanned. In 1-vs-1 genome comparison, the process must be repeated for every pair of genes belonging to the two genomes, resulting in a highly expensive procedure. In the next section, we address a way to efficiently improve the procedure.
PanDelos data structure engineering
We extend the ESA+N structure [30] in order to speed up the comparison of k-dictionaries when multiple sequences are taken into account. The goal is to compute the generalized Jaccard similarities between a set of sequences simultaneously.
The generalized Jaccard similarity between s and t can be expressed as:
$$J_{k}(s,t) = \frac{a}{b + c}, $$
where a is the sum of the minimum multiplicities of k-mers shared by the two sequences, b is the sum of the maximum multiplicities of the shared k-mers, and c is the sum of the multiplicities of k-mers appearing only in one of the two sequences.
Given a and b for every pair of sequences, c can be obtained as
$$c = (|s| + |t| - 2k + 2) - (a + b), $$
where (|s|+|t|−2k+2) is the sum of multiplicity of all k-mers in s and t. Therefore it can be rewritten as:
$$J_{k}(s,t) = \frac{a}{|s| + |t| - 2k + 2 - a}. $$
Given two genomes, \(\mathbb {G}^{1}=\{s_{1}, \dots, s_{n}\}\) and \(\mathbb {G}^{2}=\{t_{1}, \dots, t_{m}\}\), the genes are concatenated into a single global sequence s1·N·⋯·N·sn·N·t1·N·⋯·N·tm. An ESA+N structure is built, and the separation by N symbols ensures that extracted k-mers do not cross between gene sequences. The data structure is extended with a further array, called SID. Given an LCP-interval, which represents a specific k-mer, the content of the corresponding interval of the SID array reports the identifiers of the sequences in which the k-mer is present. Moreover, the number of times a sequence identifier is repeated within the interval corresponds to the multiplicity of the k-mer within that specific sequence.
For each pair of sequences involved in the interval, we compute the sum of minima (the a term in the generalized Jaccard formula) by accumulating such partial sums and storing them in a matrix M
$$M[i,j] = \sum\limits_{w \in D_{k}\left(s_{i} \cup t_{j}\right)} min\left(c_{s_{i}}(w), c_{t_{j}}(w)\right), $$
for 1≤i≤n and 1≤j≤m.
Once the matrix is filled, the similarities are finally computed by the formula:
$$J\left(s_{i},t_{j}\right) = \frac{ M[i,j] }{ \left|s_{i}\right| + \left|t_{j}\right| -2k +2 - M[i,j]}. $$
In this way, we avoid comparisons between sequences that do not share any k-mers, and eliminate the logarithmic factor of searching k-mers in multiple suffix arrays, one for each gene sequence.
Similarly, two additional matrices are stored in order to evaluate candidate homologous genes: P1[i,j], reporting the percentage of multiplicities of k-mers of si shared with tj, and P2[i,j], storing the vice versa:
$$P_{1}[i,j] = \sum\limits_{w \in D_{k}(s_{i} \cap t_{j}) }\frac{c_{s_{i}}(w)}{|s|-k+1} $$
$$P_{2}[i,j] = \sum\limits_{w \in D_{k}(s_{i} \cap t_{j})} \frac{c_{t_{j}}(w)}{|t|-k+1}. $$
Figure 2 reports an example for four sequences that are concatenated into a single one. Then 2-mers, and their associated sequence identifiers, are retrieved from the data structure. In the example, two genomes are compared. The first genome contains the sequences s1 and s2, and the second genome contains the sequences t1 and t2. The word length 2 is chosen as the best k value, thus sequences are compared by means of the multiplicities of the 2-mers they contain. Ideally, the matrix (c) has to be computed in order to calculate the Jaccard similarity between the sequences. For higher values of k, the storage and update of such a matrix may require high computational efforts, thus its rows are computed on the fly by identifying k-mer intervals along the indexing structure and by iterating them. A linear iteration over the structure lists the 2-mers, together with the number of times they appear within each original sequence. During the iteration the three matrices M, P1 and P2, in Fig. 2d, e and f, are updated. After every 2-mer has been iterated, the Jaccard similarities are computed by means of the M matrix, while the coverage percentages are computed by means of the P1 and P2 matrices.
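A simplified, hash-based sketch of this computation, grouping k-mer occurrences per sequence with a dictionary instead of an actual ESA+SID structure (the suffix-array implementation is what avoids the hashing overhead):

```python
from collections import defaultdict, Counter

def min_sum_matrix(genes_a, genes_b, k):
    """M[(i, j)] = sum over shared k-mers of the minimum multiplicities between genes_a[i] and genes_b[j]."""
    occurrences = defaultdict(Counter)           # k-mer -> {(side, index): multiplicity}
    for side, genes in (("a", genes_a), ("b", genes_b)):
        for idx, seq in enumerate(genes):
            for i in range(len(seq) - k + 1):
                occurrences[seq[i:i + k]][(side, idx)] += 1
    M = defaultdict(int)
    for counts in occurrences.values():
        in_a = [(i, c) for (side, i), c in counts.items() if side == "a"]
        in_b = [(j, c) for (side, j), c in counts.items() if side == "b"]
        for i, ca in in_a:
            for j, cb in in_b:
                M[(i, j)] += min(ca, cb)
    return M

def jaccard_from_min_sums(M, genes_a, genes_b, k):
    """J(s_i, t_j) = M[i,j] / (|s_i| + |t_j| - 2k + 2 - M[i,j])."""
    return {(i, j): m / (len(genes_a[i]) + len(genes_b[j]) - 2 * k + 2 - m)
            for (i, j), m in M.items()}
```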
PanDelos has been compared to Roary and EDGAR. Roary is a stand-alone computational tool written in Perl; it runs under Linux and takes as input genomic data in GFF format. We used the tool with its default parameter settings, except for the experiments regarding the Mycoplasma genus, where we performed parameter tuning to improve its performance. EDGAR is a web-based tool that provides precomputed analyses performed on individuals grouped by species. PanDelos is a pipeline composed of Java and Python modules and takes in input genomic data in GFF format. Tests were run on a machine equipped with an Intel Core i7-5960X CPU and 64 GB of RAM, running Ubuntu 16.04 64-bit Linux.
Several notions of phylogenetic distance have been defined in the literature. Each distance captures a specific aspect of genome evolution. Here, we refer to a distance [32] that is widely used to infer phylogenetic trees of bacterial populations [33]. The measure computes the cosine similarity between the composition vectors of the proteomes of the compared genomes. The distance reaches a minimum of 0 for genomes having the same composition, and a maximum of 1 for completely unrelated proteomes.
Comparisons on real collections of genomes
We compared PanDelos, EDGAR and Roary on four real cases. We used two collections of genomes originally used to evaluate the performances of Roary and EDGAR, i.e. 7 isolates of the Typhi serotype of the Salmonella enterica species, which is known to have very closely related genomes, and 14 isolates of the Xanthomonas campestris species. The Typhi serotype and the Xanthomonas genus have been used as reference cases to study the performance of Roary and EDGAR, respectively. We further selected, from the datasets available in EDGAR, 10 isolates of the Escherichia coli species and 64 isolates of the Mycoplasma genus. These two collections show opposite properties in terms of phylogenetic distances: the former is a group of closely related genomes, while the latter is a collection of highly distant sequences. The identifiers of the selected isolates are reported in Additional file 1: Tables S1–S4. Their properties, summarized in Additional file 1: Figures S1–S4, show variation in the number of sequences per genome. A high variability in genetic sequence length is also reported (from 13 to thousands of amino acids).
Table 1 reports the average phylogenetic distances within the analyzed real populations. The Escherichia coli dataset has the most similar genomes; in fact, their phylogenetic distances reach the lowest values. The population with the highest variability is the Mycoplasma dataset. Details regarding the phylogenetic distances of these datasets are shown in Additional file 1: Figures S5–S8.
Table 1 Phylogenetic distances (average and standard deviation) for the four real datasets
The three tools show similar performance on the 7 closely related isolates of Salmonella enterica, whereas Roary showed low performance on the Xanthomonas campestris collection (see Tables 2 and 3). For each tool, the tables report the number of gene families shared among genomes: singletons appear in only one genome, core genes are shared among all the 7 genomes, and the remaining accessory genes are shared by 2 to 6 genomes. On the Xanthomonas campestris collection, while PanDelos and EDGAR found circa 9k gene families, Roary reported more than 17k groups, with a double amount of singletons and of groups shared among a low number of genomes. Notably, PanDelos and EDGAR discovered the presumably correct pan-genome content, having a high number of singletons and core genes. Similar results are reported for the Escherichia coli collection (see Table 4); however, as for Xanthomonas campestris, Roary found a large number of families shared by a low number of genomes. The percentage of core genes, w.r.t. the total aggregated families, computed by PanDelos was 37%, while Roary reached 31%. In the Mycoplasma isolates, PanDelos found 22 core genes while EDGAR found only 14 (see Table 5). Among those 14 genes, only one was absent from the list of 22 core genes given by PanDelos. PanDelos and EDGAR found a total of 13,181 and 12,344 families, respectively, thus the core percentages are less than 1% (0.16% and 0.17%). Roary, launched with default parameters, did not detect core genes: it found dispensable families shared among at most 12 genomes and did not discover genes with higher sharing. We therefore reduced the Roary threshold on the BLAST score, which is by default equal to 95%, until Roary reported core genes. With an identity threshold set to 65%, Roary found only 2 core genes. A detailed description of the core genes found by the three approaches is given in Additional file 1: Table S5. PanDelos and EDGAR are in accordance for 10 core genes. The results obtained for all four collections are graphically summarized in Additional file 1: Figure S9.
Table 2 Number of genomes per gene family in 7 serotype Typhi of the Salmonella enterica species
Table 3 Number of genomes per gene family in 14 Xanthomonas campestris species
Table 4 Number of genomes per gene family in 10 Escherichia coli isolates
Table 5 Number of genomes per gene family in 64 Mycoplasma genus
Comparisons on collections of synthetic genomes
Since the exact phylogeny is not known for real data, we created a synthetic benchmark simulating genome evolution.
The generated population can be represented as an n-ary tree, where the root is the common ancestor genome and the leaves are genomes without progeny. Starting from an existing genome, we generated descendants by vertical transmission (copy of parent genes), loss of parental genetic material, or addition of new genetic material (in order to simulate horizontal transfer). Synthetic generation of gene families is a studied problem in the literature [34]; however, few studies have proposed a methodology to simulate genome evolution in a pan-genome context. The IGM model [35] simulates vertical and horizontal transmission, but the available implementation creates and evolves genes having almost the same lengths, which is unrealistic considering the real variability in gene lengths. The SimBAC approach [36] simulates variations at the genomic level that can occur during bacterial evolution, but it does not keep track of gene transmission, so the real homology relationships are lost. For these reasons, we implemented an in-house procedure, inspired by existing approaches, to simulate bacterial evolution. The procedure traces homology relationships and generates synthetic collections whose properties are similar to real bacterial populations. The procedure is briefly described below.
From a parent gene set, 0.1% of the genes were removed and 1% totally new genes were added. 80% of the transmitted genes were varied by adding, removing or changing a given percentage of amino acids. Finally, 0.01% of the genes were duplicated. We generated 4 populations of 2000 individuals from 2 real Mycoplasma genomes by applying two different variation percentages, 0.5% and 1%. From each population, we extracted the 50 individuals closest to the ancestor genome, referred to as roots, and 50 peripheral individuals (leaves of the n-ary tree), referred to as leaves.
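A rough sketch of a single descendant-generation step under these rates; the mutation model and the lengths of newly added genes are illustrative assumptions, not the exact in-house procedure:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_gene(length):
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def mutate(gene, rate):
    """Substitute, delete or insert amino acids, each position with total probability `rate`."""
    out = []
    for ch in gene:
        r = random.random()
        if r < rate / 3:                      # substitution
            out.append(random.choice(AMINO_ACIDS))
        elif r < 2 * rate / 3:                # deletion
            continue
        elif r < rate:                        # insertion
            out.append(ch)
            out.append(random.choice(AMINO_ACIDS))
        else:
            out.append(ch)
    return "".join(out)

def generate_descendant(parent_genes, variation):
    genes = [g for g in parent_genes if random.random() > 0.001]        # lose 0.1% of the genes
    genes = [mutate(g, variation) if random.random() < 0.8 else g       # vary 80% of the transmitted genes
             for g in genes]
    genes += [g for g in genes if random.random() < 0.0001]             # duplicate 0.01% of the genes
    new_genes = max(1, int(0.01 * len(parent_genes)))                   # add 1% totally new genes
    genes += [random_gene(random.randint(100, 1000)) for _ in range(new_genes)]
    return genes
```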
Additional file 1: Figure S10 shows the phylogenetic relationships of the four 2000-individual populations. Structural properties of the phylogenetic tree of one of the populations are reported in Additional file 1: Table S6: the total number of individuals, the number of peripheral genomes, and the average number of descendants are shown for each depth of the tree. The generated populations show compositional properties, namely number of genes per genome, variation of gene lengths within each genome and pan-genomic trends, that are similar to real collections (see Additional file 1: Figure S11). The realistic composition and trends are also maintained in the 50-individual sub-populations (see Additional file 1: Figure S12). Table 6 reports the average phylogenetic distances within the extracted sub-populations, and further details are given in Additional file 1: Figures S13–S20. The synthetic sub-populations show realistic distances. The roots extracted from the populations generated with a 0.5% locus variation percentage seem to show unrealistic average distances; however, the detailed information shows genomic distances similar to the Escherichia coli collection.
Table 6 Phylogenetic distances (average and standard deviation) for the four extracted synthetic sub-populations
We used the synthetic dataset as a golden truth to compare PanDelos, EDGAR and Roary on the quality of the retrieved families. We evaluated the performance of the methodologies by comparing the sets of homology relationships that they extract from the input genomes and how such relationships infer the pan-genomic distribution. We measured the number of true positive (TP) relationships retrieved by each approach, i.e. the correct homologies; the number of false positive (FP) relationships, i.e. the wrongly reported homology relationships; the number of true negative (TN) relationships, i.e. the correctly discarded homology relations; and the number of false negative (FN) relationships, i.e. the links that are not extracted by the approach but are present in the synthetic data. We then combined the above measures into an f-measure, which informs about the accuracy of the results. The measure reaches its best value at 1 and its worst at 0.
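Assuming the standard F1 formulation (the exact variant is not spelled out in the text), the combination can be computed as:

```python
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall; TN does not enter the F1 score."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```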
Table 7 shows that PanDelos and EDGAR keep a high amount of true positives, as does Roary on the roots collections. Roary's true positives decrease significantly on the leaves collections, and this behavior is directly linked to an increase in false negatives. PanDelos and EDGAR are mostly not affected by false positives, while Roary's performance decreases as the number of input sequences increases. The total number of possible relationships reaches the order of billions of links when all the input sequences can be linked to each other; however, PanDelos and EDGAR show good performance in discarding most of the unfeasible relationships (TN values). Both algorithms have f-measure values close to 1 for every collection, but PanDelos shows higher values. Roary shows very low performance on the leaves datasets. This behavior is also reflected in the CDiff value, which measures the number of gene families that have been erroneously split or merged by the tools w.r.t. the golden truth. Ideally, a gene family is a connected component of the homology network that forms a clique, namely every possible edge between the vertices of the component is present. A discovery methodology may miss some of the edges in a component without losing its overall connectivity. On the contrary, high amounts of missing edges may split components, and wrongly assigned links may merge multiple components. The CDiff values reported for PanDelos are mostly linked to phylogenetic distances: low values are reported for collections of highly similar genomes, the roots, and higher values for datasets of more distant genomes, the leaves. A similar trend is observed for the CDiff values of EDGAR; however, that methodology is affected by higher values compared to PanDelos.
Table 7 Performances of PanDelos, EDGAR and Roary on the synthetic datasets
Finally, we evaluated the execution times of PanDelos and Roary over synthetic data. Figure 3 shows the time costs of the two methodologies as the number of analyzed genomes varies from 10 to 50. Times were recorded for the four datasets extracted from the two populations generated from the Mycoplasma genitalium G37 genome. Roary outperforms PanDelos on the roots dataset generated with a 0.5% locus variation percentage, which is the collection with the lowest, and probably unrealistic, average phylogenetic distance (see also Table 6). The two approaches show comparable performance on the roots dataset obtained with 1% locus variation; however, the 0.24 average distance of this collection is lower than the averages computed on the real datasets (whose minimum is the 0.28 of the Escherichia coli collection, see Tables 1 and 6). PanDelos clearly outperforms Roary on the leaves datasets, the ones whose average distances are similar to real cases. Moreover, the performance of PanDelos is not affected by phylogenetic distance but depends only on the number of input genomes. This trend is in contrast with Roary, which is affected by both the number of input genomes and the phylogenetic distance. In fact, for both datasets, the running time of PanDelos has a stable increase of 15x (from 32 to 507 s on 0.5% variation, and from 39 to 509 s on 1% variation). On the contrary, Roary has an increase of 14x (from 159 to 2320 s) on the 0.5% variation dataset, and of 17x (from 202 to 3448 s) on the 1% variation dataset. Similar results were obtained by running the two tools on the synthetic datasets generated from the Mycoplasma pneumoniae M129 genome (see Additional file 1: Figure S21).
Execution times of PanDelos and Roary over the four synthetic datasets extracted from the two populations generated from the Mycoplasma genitalium G37 genome. Time requirements were measured for five different numbers of analyzed genomes, from 10 to 50. Execution times are reported in seconds
Concerning the real-data collections, Roary generally performs similarly to the other two methodologies on populations with low phylogenetic distances, namely Salmonella enterica and Escherichia coli, but it behaves quite differently on the other datasets, reporting a higher number of singletons and a lower number of core families. On the contrary, PanDelos and EDGAR show coherent trends, and the homology detection of both approaches can be considered realistic. However, PanDelos was able to detect more core genes in the collections having the highest and most variable phylogenetic distances.
Regarding the synthetic benchmarks, the lower performance of EDGAR, mainly expressed by CDiff values higher than those of PanDelos, may be linked to the high number of false negative homology relations, which break gene families into subgroups. This result agrees with the behavior of PanDelos and EDGAR observed on the collection of 64 real Mycoplasma individuals.
We presented PanDelos, a methodology for discovering the pan-genome content of closely related as well as phylogenetically distant genomes. The advantages of the approach are the absence of user-defined parameters, a similarity measure based on dictionaries, and the choice of optimal dictionaries by applying theoretical concepts emerging from the informational analysis of genomes [24]. PanDelos handles the intrinsic complexity of phylogenetic distances among input genomes by searching for gene communities over a global normalized homology network. Finally, PanDelos extends the suffix array data structure to efficiently compute the similarity between sets of sequences. Comparisons on real and synthetic cases have demonstrated that PanDelos outperforms the existing methods Roary and EDGAR.
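As a rough illustration of what a dictionary-based similarity looks like, two gene sequences can be compared through their k-mer multisets. This is a toy sketch only, not the actual PanDelos measure or its optimal choice of k; the sequences and function names are invented for the example.

```python
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def dictionary_similarity(a, b, k):
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum((ca & cb).values())  # multiset intersection of shared k-mers
    return 2 * shared / (sum(ca.values()) + sum(cb.values()))

print(dictionary_similarity("ATGCTAGCTA", "ATGCTAGGTA", k=4))  # ~0.57
```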
Delos is the core island of the Cyclades archipelago.
BH: Best hit
BBH: Bidirectional best hit
ESA: Enhanced suffix array
LCP: Longest common prefix
SA: Suffix array
Vernikos G, Medini D, Riley DR, Tettelin H. Ten years of pan-genome analyses. Curr Opin Microbiol. 2015; 23:148–54.
Medini D, Donati C, Tettelin H, Masignani V, Rappuoli R. The microbial pan-genome. Curr Opin Genet Dev. 2005; 15(6):589–94.
Tettelin H, Riley D, Cattuto C, Medini D. Comparative genomics: the bacterial pan-genome. Curr Opin Microbiol. 2008; 11(5):472–7.
Holt KE, Parkhill J, Mazzoni CJ, Roumagnac P, Weill F-X, Goodhead I, Rance R, Baker S, Maskell DJ, Wain J, et al. High-throughput sequencing provides insights into genome variation and evolution in salmonella typhi. Nat Genet. 2008; 40(8):987–93.
Earle SG, Wu C-H, Charlesworth J, Stoesser N, Gordon NC, Walker TM, Spencer CC, Iqbal Z, Clifton DA, Hopkins KL, et al. Identifying lineage effects when controlling for population structure improves power in bacterial association studies. Nat Microbiol. 2016; 1:16041.
Serruto D, Serino L, Masignani V, Pizza M. Genome-based approaches to develop vaccines against bacterial pathogens. Vaccine. 2009; 27(25):3245–50.
Muzzi A, Masignani V, Rappuoli R. The pan-genome: towards a knowledge-based discovery of novel targets for vaccines and antibacterials. Drug Discov Today. 2007; 12(11):429–39.
Zhang Y, Sievert SM. Pan-genome analyses identify lineage-and niche-specific markers of evolution and adaptation in epsilonproteobacteria. Front Microbiol. 2014; 5:110.
D'Auria G, Jiménez-Hernández N, Peris-Bondia F, Moya A, Latorre A. Legionella pneumophila pangenome reveals strain-specific virulence factors. BMC Genomics. 2010; 11(1):181.
Brittnacher MJ, Fong C, Hayden H, Jacobs M, Radey M, Rohmer L. Pgat: a multistrain analysis resource for microbial genomes. Bioinformatics. 2011; 27(17):2429–30.
Contreras-Moreira B, Vinuesa P. Get_homologues, a versatile software package for scalable and robust microbial pangenome analysis. Appl Environ Microbiol. 2013; 79(24):7696–701.
Benedict MN, Henriksen JR, Metcalf WW, Whitaker RJ, Price ND. Itep: an integrated toolkit for exploration of microbial pan-genomes. BMC Genomics. 2014; 15(1):24373.
Chaudhari NM, Gupta VK, Dutta C. Bpga-an ultra-fast pan-genome analysis pipeline. Sci Rep. 2016; 6.
Nguyen N, Hickey G, Zerbino DR, Raney B, Earl D, Armstrong J, Kent WJ, Haussler D, Paten B. Building a pan-genome reference for a population. J Comput Biol. 2015; 22(5):387–401.
Page AJ, Cummins CA, Hunt M, Wong VK, Reuter S, Holden MT, Fookes M, Falush D, Keane JA, Parkhill J. Roary: rapid large-scale prokaryote pan genome analysis. Bioinformatics. 2015; 31(22):3691–3.
Blom J, Kreis J, Spänig S, Juhre T, Bertelli C, Ernst C, Goesmann A. Edgar 2.0: an enhanced software platform for comparative gene content analyses. Nucleic Acids Res. 2016; 44(W1):22–8.
Rasko DA, Myers GS, Ravel J. Visualization of comparative genomic analyses by blast score ratio. BMC Bioinformatics. 2005; 6(1):2.
Sahl JW, Caporaso JG, Rasko DA, Keim P. The large-scale blast score ratio (ls-bsr) pipeline: a method to rapidly compare genetic content between bacterial genomes. PeerJ. 2014; 2:332.
Li W, Jaroszewski L, Godzik A. Clustering of highly homologous sequences to reduce the size of large protein databases. Bioinformatics. 2001; 17(3):282–3.
Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic local alignment search tool. J Mol Biol. 1990; 215(3):403–10.
Enright AJ, Van Dongen S, Ouzounis CA. An efficient algorithm for large-scale detection of protein families. Nucleic Acids Res. 2002; 30(7):1575–84.
Syamaladevi DP, Joshi A, Sowdhamini R. An alignment-free domain architecture similarity search (adass) algorithm for inferring homology between multi-domain proteins. Bioinformation. 2013; 9(10):491.
Cong Y, Chan Y-b, Ragan MA. A novel alignment-free method for detection of lateral genetic transfer based on tf-idf. Sci Rep. 2016; 6:30308.
Bonnici V, Manca V. Informational laws of genome structures. Sci Rep. 2016; 6:28840.
Manca V. The principles of informational genomics. Theor Comput Sci. 2017; 701:190–202.
Girvan M, Newman ME. Community structure in social and biological networks. Proc Natl Acad Sci. 2002; 99(12):7821–6.
Manber U, Myers G. Suffix arrays: a new method for on-line string searches. SIAM J Comput. 1993; 22(5):935–48.
Abouelhoda MI, Kurtz S, Ohlebusch E. The enhanced suffix array and its applications to genome analysis. In: International Workshop on Algorithms in Bioinformatics. Berlin: Springer: 2002. p. 449–63.
Kurtz S, Narechania A, Stein JC, Ware D. A new method to compute k-mer frequencies and its application to annotate large repetitive plant genomes. BMC Genomics. 2008; 9(1):517.
Bonnici V, Manca V. Infogenomics tools: A computational suite for informational analysis of genomes. J Bioinforma Proteomics Rev. 2015; 1:8–14.
Rieck K, Laskov P. Linear-time computation of similarity measures for sequential data. J Mach Learn Res. 2008; 9(Jan):23–48.
Qi J, Wang B, Hao B-I. Whole proteome prokaryote phylogeny without sequence alignment: a k-string composition approach. J Mol Evol. 2004; 58(1):1–11.
Qi J, Luo H, Hao B. Cvtree: a phylogenetic tree reconstruction tool based on whole genomes. Nucleic Acids Res. 2004; 32(suppl_2):45–7.
Stoye J, Evers D, Meyer F. Rose: generating sequence families. Bioinformatics (Oxford, England). 1998; 14(2):157–63.
Baumdicker F, Hess WR, Pfaffelhuber P. The infinitely many genes model for the distributed genome of bacteria. Genome Biol Evol. 2012; 4(4):443–56.
Brown T, Didelot X, Wilson DJ, De Maio N. Simbac: simulation of whole bacterial genomes with homologous recombination. Microbial Genomics. 2016; 2(1).
We thank the Fondo Sociale Europeo, provided by Regione del Veneto, for partially supporting this work.
This work has been partially supported by the following projects: GNCS-INDAM, Fondo Sociale Europeo, National Research Council Flagship Projects Interomics, JOINT PROJECTS 2016-JPVR16FNCL, and JOINT PROJECTS 2017-B33C17000440003. This work has been partially supported by the project of the Italian Ministry of Education, Universities and Research (MIUR) "Dipartimenti di Eccellenza 2018-2022". Publication costs for this manuscript were sponsored by JOINT PROJECTS 2016-JPVR16FNCL and JOINT PROJECTS 2017-B33C17000440003.
Data and materials are available at the web site https://github.com/GiugnoLab/PanDelos.
This article has been published as part of BMC Bioinformatics Volume 19 Supplement 15, 2018: Proceedings of the 12th International BBCC conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-19-supplement-15.
Rosalba Giugno and Vincenzo Manca contributed equally to this work
Department of Computer Science, University of Verona, Strada le Grazie, 15, Verona, 37134, Italy
Vincenzo Bonnici, Rosalba Giugno & Vincenzo Manca
All authors contributed equally to this work. All authors have read and approved the final manuscript.
Correspondence to Vincenzo Bonnici.
Additional file
Supplementary materials of PanDelos: a dictionary-based method for pan-genome content discovery. (PDF 5543 kb)
Bonnici, V., Giugno, R. & Manca, V. PanDelos: a dictionary-based method for pan-genome content discovery. BMC Bioinformatics 19, 437 (2018). https://doi.org/10.1186/s12859-018-2417-6
Pan-genome
Distant genomes
k-mer dictionary
January 2015, 2(1): 1-24. doi: 10.3934/jcd.2015.2.1
Modularity of directed networks: Cycle decomposition approach
Nataša Djurdjevac Conrad, Ralf Banisch and Christof Schütte
Freie Universität Berlin, Department of Mathematics and Computer Science, Arnimallee 6, 14195 Berlin, Germany
Received August 2014; Revised February 2015; Published August 2015
The problem of decomposing networks into modules (or clusters) has gained much attention in recent years, as it can account for a coarse-grained description of complex systems, often revealing functional subunits of these systems. A variety of module detection algorithms have been proposed, mostly oriented towards finding hard partitionings of undirected networks. Despite the increasing number of fuzzy clustering methods for directed networks, many of these approaches tend to neglect important directional information. In this paper, we present a novel random walk based approach for finding fuzzy partitions of directed, weighted networks, where edge directions play a crucial role in defining how well nodes in a module are interconnected. We will show that cycle decomposition of a random walk process connects the notion of network modules and information transport in a network, leading to a new, symmetric measure of node communication. Finally, we will use this measure to introduce a communication graph, for which we will show that although being undirected it inherits important directional information of modular structures from the original network.
Keywords: Directed networks, measure of node communication, modules, cycle decomposition.
Mathematics Subject Classification: Primary: 60J20, 05C81; Secondary: 94C1.
Citation: Nataša Djurdjevac Conrad, Ralf Banisch, Christof Schütte. Modularity of directed networks: Cycle decomposition approach. Journal of Computational Dynamics, 2015, 2 (1) : 1-24. doi: 10.3934/jcd.2015.2.1
How many elements are in the intersection of the set of all the prime numbers less than 30 and the set of all the odd numbers greater than zero?
In other words, we're looking for the number of positive odd prime numbers less than 30. We go through all odd numbers less than 30 and note how many of them are prime. We get that 3, 5, 7, 11, 13, 17, 19, 23, and 29 are all of the positive odd prime numbers less than 30, a total of $\boxed{9}$ elements in the intersection.
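As a quick check of the count, here is a short, purely illustrative snippet:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

odd_primes = [n for n in range(1, 30, 2) if is_prime(n)]
print(odd_primes)       # [3, 5, 7, 11, 13, 17, 19, 23, 29]
print(len(odd_primes))  # 9
```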
A stone is thrown vertically upwards with a velocity of 40 m/s and is caught back. Taking g = 10 m/s², calculate the maximum height reached by the stone. A stone thrown downward with an initial velocity of 34.3 m/s travels a distance of s meters, where s(t) = 4.9t² + 34.3t and t is in seconds. If a stone is thrown downward at 34.3 m/s from a height of 294 m, how long will it take the stone to reach the ground? I am putting 294 in for t and trying to solve it that way. Is this correct? When a rock is thrown upwards at an angle, its vertical velocity starts to decrease because of the gravitational pull. The stone can then start to ... Two seconds later, a stone is thrown vertically (from the same initial height as the ball) with an initial speed of 24 m/s. At what height above the release... Stone 2 thrown upwards: initial velocity (u) = 25 m/s, acceleration due to gravity (g) is negative = −9.8 m/s². At time t, the 2nd stone reaches height h₂; then, using the equation s = ut + ½at² ⇒ h₂ ... Q45: A ball is thrown with some velocity u m/s. Show that under free fall it will reach the ground with the same velocity. A stone is thrown vertically upwards so that its height, measured in feet, after t seconds is given by s(t) = 100t − 16t². What is its initial velocity? A softball of mass 250 g is thrown with an initial velocity of 16 m/s at an angle θ to the horizontal. The maximum height achieved by the ball from its point of...
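For the 34.3 m/s question above, the 294 m should be substituted for the distance s(t), not for t; solving 4.9t² + 34.3t = 294 gives t = 5 s. A small check (illustrative only):

```python
import math

# Solve 4.9 t^2 + 34.3 t = 294 for t (the positive root).
a, b, c = 4.9, 34.3, -294.0
t = (-b + math.sqrt(b**2 - 4*a*c)) / (2*a)
print(t)  # 5.0 seconds
```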
Problem A stone is thrown from the top of a building with an initial velocity of 20.0 m/s straight upward, at an initial height of 50.0 m above the ground. The stone just misses the edge of the roof on its way down, as shown in Figure 2.20. Determine (a) the time needed for the stone to reach its maximum height, (b) the maximum height, (c) the ...
Stonehenge — a place for healing or for the worship of ancestors. Mysterious rocks. C. The 1,500-year-old pyramids, located near the town of Merida, may be less popular than their equivalents in Egypt, but they are just as remarkable. Although there are many structures there like the Temple of the Warriors... A stone is thrown with an initial velocity of 39 ft/s from the edge of a bridge that is 42 ft above the ground. The height of this stone above the ground t seconds after it is thrown is f(t) = −16t² + 39t + 42. If a second stone is thrown from the ground, then its height above the ground after t seconds is given by ... A stone is thrown at an angle of 30 degrees above the horizontal from a building 45 meters high with an initial speed of 23 m/s. Find the minimum speed of the projectile during its flight.
Dec 24, 2020 · 1. Motion of a projectile. A stone thrown upward with an initial velocity of 128 feet per second will attain a height of h feet in t seconds, where h(t) = 128t − 16t², 0 ≤ t ≤ 8. Initial velocity is v(0), which is 50. The maximum height of the rock occurs when the velocity is 0. (Think about it: that is the time when the rock is neither moving up nor down.) Sep 14, 2011 · From the top of a cliff overlooking a lake, a person throws two stones. The two stones have identical initial speeds of v0 = 13.7 m/s and are thrown at an angle θ = 28.0°, one below the horizontal and one above the horizontal. Well, the final velocity is going to be your initial velocity plus your acceleration times the change in time. If you are starting at 10 m/s and you are accelerated at 1 m/s², then after 1 second you will be going 1 m/s faster than that (11 m/s). So this right here is your final velocity. Jul 17, 2013 · Alex throws a 5 kg stone down a well with an initial velocity of 5.4 m/s. The stone falls for 2.5 seconds before he hears it hit the bottom.
A stone is thrown off a 78.4 m high cliff with an initial horizontal velocity of 5 m/s. Find the final horizontal component of the velocity; the initial y velocity is 0 m/s. At what angle of launch will a projectile go the greatest horizontal distance? Oct 26, 2010 · A stone thrown downward with an initial velocity of 34.3 m/s will travel a distance of s meters, where s(t) = 4.9t² + 34.3t and t is in seconds. If a stone is thrown downward at 34.3 m/s from a height of 294 m, how long will it take the stone to reach the ground? Since the stone is thrown horizontally, it has no component of vertical velocity at the moment it is thrown. This stone will have the same time of flight as a stone dropped from rest off the same ... The initial velocity of the stone, u = 40 m/s. We need to find the maximum height reached by the stone, the net displacement and the total distance covered by the stone. At maximum height, the final velocity v = 0. Using the equation of kinematics as ...
This book contains grade 10, 11 and 12 revision questions with answers in physics. A stone is thrown vertically upward with an initial velocity of 40 m/s. Taking g = 10 m/s², find the maximum height reached by the stone. What is the net displacement and the total distance covered by the stone? Nov 28, 2012 · A stone is thrown with an initial velocity of 20 meters per second straight upward from the edge of a cliff 100 meters above a canyon floor. The stone just misses the cliff's edge on its way down... "Stepping Stone" is the latest in a series of songs apologizing to people who have been negatively affected by Em's career. A 'stepping stone' is a term used to describe a means of progress, and in this song is used to refer to how D12 wound up being an unintentional career boost for Em, while the...
An egg is thrown horizontally off the roof of SI, which is 60 meters high, with an initial velocity of 6.5 m/s. How long does it take to hit the ground? How far does it go in the x direction? A stone is thrown horizontally at a speed of 5.0 m/s from the top of a cliff 78.4 m high. How long does ... The stone moves under gravity freely and reaches the floor 5 s after being thrown. (a) Find H, (b) the horizontal distance covered. To find the height we first need to find the initial vertical velocity of the stone. A stone is dropped from the top of a tall cliff and 1 sec later a second stone is thrown vertically downward with a velocity of 20 m/sec. A stone is thrown into the air with an initial velocity of 50 m/s at 37 degrees to the horizontal. Find (a) the total time the ball is in the air and (b) the total horizontal distance it travels. Initial velocity synonyms, initial velocity pronunciation, initial velocity translation, English dictionary definition of initial velocity: the velocity of a moving body at starting; especially, the velocity of a projectile as it leaves the mouth of a firearm from which it is discharged.
When hunting for new eggs in the various habitats of Dragon Cave, users are presented with a set of "mystery eggs". These eggs will all display a black egg with a red question mark on it, and are only identifiable by their written description.
A stone is thrown vertically upward with an initial velocity of 40 m/s. Taking g = 10 m/s², find the maximum height reached by the stone. What is the net displacement and the total distance covered by the stone?
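For this standard problem (u = 40 m/s, g = 10 m/s², stone caught back at its starting point), the numbers work out as below; the snippet is only an illustrative check.

```python
u, g = 40.0, 10.0
h_max = u**2 / (2 * g)   # from v^2 = u^2 - 2*g*h with v = 0 at the top
print(h_max)             # 80.0 m maximum height
print(0.0)               # net displacement: the stone returns to its starting point
print(2 * h_max)         # 160.0 m total distance travelled (up and down)
```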
Assuming the stone can be represented by a point mass with no air drag, the acceleration of gravity being 9.81 m/s² and the initial stone velocity being zero, we have the average speed v(ave) = d/t = 19.6 / 2.057 = 9.528 m/s (meters per second) and the final speed v(final) = a · t = 9.81 × 2.057 = 20.18 m/s. A stone is thrown from the top of a building upward at an angle of 30° to the horizontal with an initial speed of 24 m/s. Conceptual Example 10, Two Ways to Throw a Stone: From the top of a cliff, a person throws two stones. The stones have identical initial speeds, but stone 1 is thrown downward at some angle below the horizontal and stone 2 is thrown upward at the same angle above the horizontal. Neglecting air resistance, ... Where u = initial velocity of the stone = 40 m/s. A stone of mass 64.0 g is thrown vertically upward from the ground with an initial speed of 20.0 m/s. 47. A 0.250-kg ball is thrown upwards with an initial velocity of 12.0 m/s at an angle of 30.0 degrees. After reaching terminal velocity, the skydiver opens his parachute. Shortly thereafter, there is an instant in time in which the skydiver encounters an air resistance force of 1180 Newtons.
A Stone is Thrown Vertically Upward with an Initial Velocity of 40 m/s. Taking g = 10 m/s², Find the Maximum Height Reached by the Stone. What is the Net Displacement and the Total Distance Covered by the Stone? Solution for Question-1: A stone is thrown vertically upwards with a velocity of 20 m/s from the top of a tower 25 m high. Make calculations for the following… Jason Kendall throws a baseball with a horizontal component of velocity of 25 m/s. It takes 3.00 s to come back to its original height. Calculate its horizontal range, its initial vertical component of velocity and its initial angle of projection.
A stone is thrown vertically upward with an initial velocity of 40 m/s. Taking g = 10 m/s², find the maximum height reached by the stone. What is the net displacement and the total distance covered by the stone? Astronomers think the collision actually occurred at a relatively low velocity, with the object striking the Earth at an oblique angle and an initial impactor velocity below 4 km/s. In fact, many astronomers believe that the Earth experienced dozens of such collisions with... This is zero initial velocity; the acceleration is 10 m/s² and the time interval is 2.5 s, so the velocity would be 25 m per second in the downward direction. A stone is thrown vertically upward with a speed of 15.5 m/s from the edge of a cliff 75.0 m high (Fig. 2-48). (a) How much later does it reach the bottom of the... A stone is thrown vertically upward with a speed of 12.0 m/s from the edge of a cliff 70.0 m high; (a) How much later does it reach the bottom of the cliff? (b) What is its speed just before hitting? (c) What total distance did it travel? 7. A ball is thrown vertically upward with an initial speed of 20 m/s.
Near the surface of the Earth, an object in free fall in a vacuum will accelerate at approximately 9.8 m/s², independent of its mass. With air resistance acting on an object that has been dropped, the object will eventually reach a terminal velocity, which is around 53 m/s (190 km/h or 118 mph) for a human skydiver. A body dropped from a certain height attains the same velocity as another falling with an initial velocity u from a height h below the first body.
A stone is thrown with an initial velocity of 35 ft/s from the edge of a bridge that is 43 ft above the ground. The height of this stone above the ground t seconds after it is thrown is f(t) = −16t² + 35t + 43.
Nov 17, 2008 · A stone is thrown with an initial velocity of 20 meters per second straight upward from the edge of a cliff 100 meters above a canyon floor. The stone just misses the cliff's edge on its way down...
The stone just stopped at the apex and came back. Another boy projected a stone horizontally from the top of the mountain. Calculate: (a) the height of the mountain, (b) the time taken for the stone to follow the trajectory, and (c) the range if the horizontal velocity is 20 m/s.
A ball of mass 0.20 kg is thrown vertically upward with an initial velocity of 20 m/s.
Negation introduction
Negation introduction is a rule of inference, or transformation rule, in the field of propositional calculus.
Type: Rule of inference
Field: Propositional calculus
Statement: If a given antecedent implies both the consequent and its complement, then the antecedent is a contradiction.
Symbolic statement: $(P\rightarrow Q)\land (P\rightarrow \neg Q)\rightarrow \neg P$
Negation introduction states that if a given antecedent implies both the consequent and its complement, then the antecedent is a contradiction.[1][2]
Formal notation
This can be written as: $(P\rightarrow Q)\land (P\rightarrow \neg Q)\rightarrow \neg P$
An example of its use would be an attempt to prove two contradictory statements from a single fact. For example, if a person were to state "Whenever I hear the phone ringing I am happy" and then state "Whenever I hear the phone ringing I am not happy", one can infer that the person never hears the phone ringing.
Many proofs by contradiction use negation introduction as reasoning scheme: to prove ¬P, assume for contradiction P, then derive from it two contradictory inferences Q and ¬Q. Since the latter contradiction renders P impossible, ¬P must hold.
Proof
Step | Proposition | Derivation
1 | $(P\to Q)\land (P\to \neg Q)$ | Given
2 | $(\neg P\lor Q)\land (\neg P\lor \neg Q)$ | Material implication
3 | $\neg P\lor (Q\land \neg Q)$ | Distributivity
4 | $\neg (Q\land \neg Q)$ | Law of noncontradiction
5 | $\neg P$ | Disjunctive syllogism (3,4)
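The same rule can also be checked mechanically. Below is a minimal sketch in Lean 4, assuming only core Lean; the theorem name is ours, not a standard library name.

```lean
-- ¬P unfolds to P → False, so assume p : P and derive False from the two hypotheses.
theorem negation_introduction {P Q : Prop} (h₁ : P → Q) (h₂ : P → ¬Q) : ¬P :=
  fun p => h₂ p (h₁ p)
```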
See also
• Reductio ad absurdum
References
1. Wansing, Heinrich, ed. (1996). Negation: A Notion in Focus. Berlin: Walter de Gruyter. ISBN 3110147696.
2. Haegeman, Lilliane (30 Mar 1995). The Syntax of Negation. Cambridge: Cambridge University Press. p. 70. ISBN 0521464927.
\begin{document}
\title{Operads and Operadic Algebras in Homotopy Theory} \author{Michael A. Mandell} \address{Department of Mathematics, Indiana University, Bloomington, IN} \email{[email protected]}
\date{January 3, 2021}
\maketitle \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Operads first appear in the book \textit{Geometry of Iterated Loop Spaces} by J.\ P.\ May~\cite{May-GILS}, though Boardman and Vogt had earlier implicitly defined a mathematically equivalent notion as a ``PROP in standard form''~\cite[\S2]{BV-Bull}. In those works, operads and operadic algebra structures provide a recognition principle and a delooping machine for $n$-fold loop spaces and infinite loop spaces. The basic idea is that an operad should encode the operations in some kind of homotopical algebraic structure. For example, an $n$-fold loop space $\Omega^{n}X$ comes with $n$ different multiplications $(\Omega^{n}X)^{2}\to \Omega^{n}X$, which can be iterated and generalized to a space of $m$-ary maps ${\opsymbfont{C}}_{n}(m)$ (from $(\Omega^{n}X)^{m}$ to $\Omega^{n}X$); here ${\opsymbfont{C}}_{n}$ is the Boardman-Vogt little $n$-cubes operad (see Construction~\ref{cons:lilcubes} and Section~\ref{sec:loop} below). The content of the recognition theorem is that ${\opsymbfont{C}}_{n}$ specifies a structure that is essentially equivalent to the structure of an $n$-fold loop space for connected spaces. It was clear even at the time of introduction that operads were a big idea and in the almost 50 years since then, operads have found a wide range of other uses in a variety of areas of mathematics: a quick MathSciNet search for papers since 2015 with ``operad'' in the title comes up with papers in combinatorics, algebraic geometry, nonassociative algebra, geometric group theory, free probability, mathematical modeling, and physics, as well as in algebraic topology and homological algebra.
Even the topic of operads in algebraic topology is too broad to cover or even summarize in a single article. This expository article concentrates on what the author views as the basic topics in the homotopy theory of operadic algebras: the definition of operads, the definition of algebras over operads, structural aspects of categories of algebras over operads, model structures on algebra categories, and comparison of algebra categories when changing operad or underlying category. In addition, it includes two applications of the theory: The original application to $n$-fold loop spaces, and an application to algebraic models of homotopy types (chosen purely on the basis of author bias). This leaves out a long list of other topics that could also fit in this handbook, such as model structures on operads, Koszul duality, deformation theory and Quillen (co)homology, multiplicative structures in stable homotopy theory (for example, on Thom spectra, $K$-theory spectra, etc.), Deligne and Kontsevich conjectures, string topology, factorization homology, construction of moduli spaces, and Goodwillie calculus, just to name a few areas.
\subsection*{Notation and conventions}
Although we concentrate on operads and operadic algebras in topology, much of the background applies very generally. Because of this and because we will want to discuss both the case of spaces and the case of spectra, we will use neutral notation: let ${\catsymbfont{M}}$ denote a symmetric monoidal category \cite[\S1.4]{Kelly-BasicConcepts}, writing $\mathbin{\square}$ for the monoidal product and $\mathbf{1}$ for the unit. (We will uniformly omit notation for associativity isomorphisms and typically omit notation for commutativity isomorphisms, but when necessary, we will write $c_{\sigma}$ for the commutativity isomorphism associated to a permutation $\sigma$.) Usually, we will want ${\catsymbfont{M}}$ to have coproducts and sometimes more general colimits, which we will expect to commute with $\mathbin{\square}$ on each side (keeping the other side fixed). This exactness of $\mathbin{\square}$ is automatic if the monoidal structure is closed~\cite[\S1.5]{Kelly-BasicConcepts}, i.e., if for each fixed object $X$ of ${\catsymbfont{M}}$, the functor $(-)\mathbin{\square} X$ has a right adjoint; this is often convenient to assume, and when we do, we will use $F(X,-)$ for the right adjoint. The three basic classes of examples to keep in mind are: \begin{enumerate} \item ``Convenient categories of topological spaces'' including compactly generated weak Hausdorff spaces~\cite{McCord-ISP}; then $\mathbin{\square}$ is the categorical product, $\mathbf{1}$ is the final object (one point space), and $F(X,Y)$ is the function space, often written $Y^{X}$. \item ``Modern categories of spectra'' including EKMM $S$-modules~\cite{EKMM}, symmetric spectra~\cite{HSS}, and orthogonal spectra~\cite{MMSS}; then $\mathbin{\square}$ is the smash product, $\mathbf{1}$ is the sphere spectrum, and $F(-,-)$ is the function spectrum. \item The category of chain complexes of modules over a commutative ring $R$; then $\mathbin{\square}$ is the tensor product over $R$, $\mathbf{1}$ is the complex $R$ concentrated in degree zero, and $F(-,-)$ is the $\Hom$-complex $\Hom_{R}(-,-)$. \end{enumerate} (We now fix a convenient category of spaces and just call it ``the category of spaces'' and the objects in it ``spaces'', ignoring the classical category of topological spaces.)
In the context of operadic algebras in spectra (i.e., (ii) above), it is often technically convenient to use operads of spaces. However, for uniformity of exposition, we have written this article in terms of operads internally in ${\catsymbfont{M}}$. The unreduced suspension functor $\Sigma^{\infty}_{+}(-)$ converts operads in spaces to operads in the given category of spectra.
\subsection*{Outline}
The basic idea of an operad is that the pieces of it should parametrize a class of $m$-ary operations. From this perspective, the fundamental example of an operad is the \term{endomorphism operad} of an object $X$, \[ {\opsymbfont{E}}{\mathrm{nd}}_{X}(m):=F(X^{(m)},X)\qquad \qquad X^{(m)}:=\underbrace{X\mathbin{\square} \dotsb \mathbin{\square} X}_{m\text{ factors}}, \] which parametrizes all $m$-ary maps from $X$ to itself. Abstracting the symmetry and composition properties leads to the definition of operad in~\cite{May-GILS}. We review this definition in Section~\ref{sec:def1}.
Section~\ref{sec:ainf} presents some basic examples of operads important in topology, including some $A_{\infty}$ operads, $E_{\infty}$ operads, and $E_{n}$ operads.
May chose the term ``operad'' to match the term ``monad'' (see~\cite{May-MacLane}), to show their close connection. Basically, a monad is an abstract way of defining some kind of structure on objects in a category, and an operad gives a very manageable kind of monad. Section~\ref{sec:monad} reviews the monad associated to an operad and defines algebras over an operad.
Section~\ref{sec:modules} gives the basic definition of a module over an operadic algebra and reviews the basics of the homotopy theory of module categories.
Section~\ref{sec:limit} discusses limits and colimits in categories of operadic algebras. It includes a general filtration construction that often provides the key tool to study pushouts of operadic algebras homotopically in terms of colimits in the underlying category. Section~\ref{sec:enrich} discusses when categories of operadic algebras are enriched, and in the case of categories of algebras enriched over spaces, discusses the geometric realization of simplicial and cosimplicial algebras. Although these sections may seem less basic and more technical than the previous sections, the ideas here provide the tools necessary for further work with operadic algebras using the modern methods of homotopy theory.
\iffalse Consideration of the monads associated to operads leads to a different formulation of operad in terms of symmetric sequences and the plethysm product. The author learned this definition in Getzler-Jones~\cite{GetzlerJones}, which attributes it to Joyal~\cite{Joyal} and Smirnov~\cite{Smirnov}. We review this definition and the corresponding definition of operadic algebras in Section~\ref{sec:def2}. \fi
Model structures on categories of operadic algebras provide a framework for proving comparison theorems and rectification theorems. Section~\ref{sec:model} reviews some aspects of model category theory for categories of operadic algebras. In the terminology of this article, a \term{comparison theorem} is an equivalence of homotopy theories between categories of algebras over different operads that are equivalent in some sense (for example, between categories of algebras over different $E_{\infty}$ operads) or between categories of algebras over equivalent base categories (for example, $E_{\infty}$ algebras in spaces versus $E_{\infty}$ algebras in simplicial sets). A \term{rectification theorem} is a comparison theorem when one of the operads is discrete in some sense: a comparison theorem for the category of algebras over an $A_{\infty}$ operad and the category of associative algebras is an example of a rectification theorem, as is the comparison theorem for $E_{\infty}$ algebras and commutative algebras in modern categories of spectra. Section~\ref{sec:compare} discusses these and other examples of comparison and rectification theorems. In both Sections~\ref{sec:model} and~\ref{sec:compare}, instead of stating theorems of maximal generality, we have chosen to provide ``Example Theorems'' that capture some examples of particular interest in homotopy theory and stable homotopy theory. Both the statements and the arguments provide examples: the arguments apply or can be adapted to apply in a wide range of generality.
The Moore space is an early rectification technique (pre-dating operads and $A_{\infty}$ monoids) for producing a genuine associative monoid version of the loop space; the construction applies generally to a little $1$-cubes algebra to produce an associative algebra that we call the \term{Moore algebra}. The concept of modules over an operadic algebra leads to another way of producing an associative algebra, called the \term{enveloping algebra}. Section~\ref{sec:rectass} compares these constructions and the rectification of $A_{\infty}$ algebras constructed in Section~\ref{sec:compare}.
Sections~\ref{sec:loop} and~\ref{sec:cochains} review two significant applications of the theory of operadic algebras. Section~\ref{sec:loop} reviews the original application: the theory of iterated loop spaces and the recognition principle in terms of $E_{n}$ algebras. Section~\ref{sec:cochains} reviews the equivalence between the rational and $p$-adic homotopy theory of spaces with the homotopy theory of $E_{\infty}$ algebras.
\subsection*{Acknowledgments}
The author benefited from conversations and advice from Clark Barwick, Agn\`es Beaudry, Julie Bergner, Myungsin Cho, Bj{\o}rn Dundas, Tyler Lawson, Andrey Lazarev, Amnon Neeman, Brooke Shipley, and Michael Shulman while working on this chapter. The author thanks Peter May for his mentorship in the 1990s (and beyond) while learning these topics and for help straightening out some of the history described here. The author thanks the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the program ``Homotopy harnessing higher structures'' (HHH) when work on this chapter was undertaken; this work was supported by: EPSRC Grant Number EP/R014604/1. The author was supported in part by NSF grants DMS-1505579 and DMS-1811820 while working on this project. Finally, the author thanks Andrew Blumberg for extensive editorial advice.
\section{Operads and Endomorphisms} \label{sec:def1}
We start with the definition of an operad. The collection of $m$-ary endomorphism objects ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)=F(X^{(m)},X)$ provides the prototype for the definition, and we use its intrinsic structure to motivate and explain it. Although the endomorphism objects only make sense when the symmetric monoidal category is ``closed'' (which means that function objects exist), the definition of operad will not require or assume function objects, nor will the definition of operadic algebra in Section~\ref{sec:monad}. To take in the picture, it might be best just to take ${\catsymbfont{M}}$ to be the category of spaces, the category of vector spaces over a field, or the category of sets on first introduction to this material.
In our basic classes of examples, and more generally as a principle of enriched category theory, function objects behave like sets of morphisms: the counit of the defining adjunction \[ F(X,Y)\mathbin{\square} X\to Y \] is often called the \textit{evaluation map} (and denoted $ev$). It allows ``element-free'' definition and study of composition: iterating evaluation maps \[ F(Y,Z)\mathbin{\square} F(X,Y)\mathbin{\square} X\to F(Y,Z)\mathbin{\square} Y\to Z \] induces (by adjunction) a \term{composition map} \[ \circ \colon F(Y,Z)\mathbin{\square} F(X,Y)\to F(X,Z). \] One can check just using the basic properties of adjunctions that this composition is associative in the obvious sense. It is also unital: the identity element of ${\catsymbfont{M}}(X,X)$ specifies a map $1_{X}\colon \mathbf{1}\to F(X,X)$, \[ \id_{X} \in {\catsymbfont{M}}(X,X)\cong {\catsymbfont{M}}(\mathbf{1} \mathbin{\square} X,X)\cong {\catsymbfont{M}}(\mathbf{1},F(X,X)), \] where the first isomorphism is induced by the unit isomorphism; essentially by construction, the composite \[ \mathbf{1}\mathbin{\square} X\overto{1_{X}\mathbin{\square} \id_{X}} F(X,X)\mathbin{\square} X\overto{ev} X \] is the unit isomorphism. It follows that the diagram \[ \xymatrix{ \mathbf{1}\mathbin{\square} F(X,Y)\ar[r]^{\cong}\ar[d]_{1_{Y}\mathbin{\square} \id_{F(X,Y)}} &F(X,Y)\ar@{=}[d] &F(X,Y)\mathbin{\square} \mathbf{1}\ar[l]_{\cong}\ar[d]^{\id_{F(X,Y)}\mathbin{\square} 1_{X}}\\ F(Y,Y)\mathbin{\square} F(X,Y)\ar[r]_-{\circ}&F(X,Y)&F(X,Y)\mathbin{\square} F(X,X)\ar[l]^-{\circ} } \] commutes, where the top-level isomorphisms are the unit isomorphisms. More is true: the function objects enrich the category ${\catsymbfont{M}}$ over itself, and the $\mathbin{\square},F$ parametrized adjunction is itself enriched \cite[\S1.5--6]{Kelly-BasicConcepts}.
In the case when ${\catsymbfont{M}}$ is the category of spaces, the evaluation map is just the map that evaluates functions on their arguments; thinking in these terms will make the formulas and checks clearer for the reader not used to working with adjunctions. Since in the category of spaces $\mathbf{1}$ is the one point space, a map out of $\mathbf{1}$ just picks out an element of the target space and the map $\mathbf{1}\to F(X,X)$ is just the map that picks out the identity map of $X$.
The basic compositions above generalize to associative and unital $m$-ary compositions; now for simplicity and because it is the main case of interest here, we restrict to considering a fixed object $X$. The $m$-ary composition takes the form \[ F(X^{(m)},X)\mathbin{\square} (F(X^{(j_{1})},X)\mathbin{\square} \dotsb \mathbin{\square} F(X^{(j_{m})},X))\to F(X^{(j)},X) \] where $j=j_{1}+\dotsb+j_{m}$ and (as in the introduction) $X^{(m)}$ denotes the $m$th $\mathbin{\square}$ power of $X$; we think of the $m$-ary composition as plugging in the $m$ $j_{i}$-ary maps into the first $m$-ary map; it is adjoint to the map \begin{multline*} F(X^{(m)},X)\mathbin{\square} F(X^{(j_{1})},X)\mathbin{\square} \dotsb \mathbin{\square} F(X^{(j_{m})},X) \mathbin{\square} X^{(j)} \cong\\ F(X^{(m)},X)\mathbin{\square} F(X^{(j_{1})},X)\mathbin{\square} \dotsb \mathbin{\square} F(X^{(j_{m})},X) \mathbin{\square} X^{(j_{1})}\mathbin{\square} \dotsb \mathbin{\square} X^{(j_{m})} \to X \end{multline*} that does the evaluation map \[ F(X^{(j_{i})},X)\mathbin{\square} X^{(j_{i})}\to X, \] then collects the resulting $m$ factors of $X$ and does the evaluation map \[ F(X^{(m)},X)\mathbin{\square} X^{(m)}\to X. \] In this double evaluation, implicitly we have shuffled some of the factors of $X$ past some of the endomorphism objects, but we take care not to permute factors of $X$ among themselves or the endomorphism objects among themselves. This defines a composition map \[ \Gamma^{m}_{j_{1},\dotsc,j_{m}}\colon {\opsymbfont{E}}{\mathrm{nd}}_{X}(m)\mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m})\to {\opsymbfont{E}}{\mathrm{nd}}_{X}(j). \] The composition is associative and unital in the obvious sense (which we write out in the definition of an operad, Definition~\ref{def:operad}, below).
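For instance, when ${\catsymbfont{M}}$ is the category of spaces, so that the evaluation maps literally evaluate functions on their arguments, this composition is the familiar substitution formula
\[
\Gamma^{m}_{j_{1},\dotsc,j_{m}}(g;f_{1},\dotsc,f_{m})=g\circ (f_{1}\times \dotsb \times f_{m})
\]
for $g$ in ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)$ and $f_{i}$ in ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{i})$.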
We now begin systematically writing ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)$ for $F(X^{(m)},X)$. We note that ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)=F(X^{(m)},X)$ has a right action by the symmetric group $\Sigma_{m}$ induced by the left action of $\Sigma_{m}$ on $X^{(m)}$ corresponding to permuting the $\mathbin{\square}$-factors. In general, for a permutation $\sigma$, we write $c_{\sigma}$ for the map that permutes $\mathbin{\square}$-factors and $a_{\sigma}$ for the action of $\sigma$ on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)$, i.e., the map that does $c_{\sigma}$ on the domain of ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)=F(X^{(m)},X)$. We now study what happens when we permute the various factors in the formula for $\Gamma$ above. (As these are a bit tricky, we do the formulas out here and repeat them below in the definition of an operad, Definition~\ref{def:operad}.)
First consider what happens when we permute the factors of $X$. We have nothing to say for an arbitrary permutation of the factors of $X$, but in the composition $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$, we can say something for a permutation that permutes the factors only within their given blocks of size $j_{1},\dotsc,j_{m}$, i.e., when the overall permutation $\sigma$ of all $j$ factors is the block sum of permutations $\sigma_{1}\oplus \dotsb \oplus \sigma_{m}$ with $\sigma_{i}$ in $\Sigma_{j_{i}}$. By extranaturality, performing the right action of $\sigma_{i}$ on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{i})$ and evaluating is the same as applying the left action of $\sigma_{i}$ on $X^{(j_{i})}$ and evaluating. It follows that the composition $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ is $(\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}})$-equivariant, where $\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}$ acts on the source through the $\Sigma_{j_{i}}$-actions on the ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{i})$'s and on the target through the block sum inclusion into $\Sigma_{j}$ and the $\Sigma_{j}$-action on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j)$.
Permuting the endomorphism object factors is easier to understand when we also permute the corresponding factors of $X$. In the context of $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$, for $\sigma$ in $\Sigma_{m}$, let $\sigma_{j_{1},\dotsc,j_{m}}$ be the element of $\Sigma_{j}$ that permutes the blocks $X^{(j_{1})}$,\dots, $X^{(j_{m})}$ as $\sigma$ permutes $1$,\dots,$m$. So, for example, if $m=3$, $j_{1}=1$, $j_{2}=3$, $j_{3}=2$, and $\sigma=(23)$, then $\sigma_{1,3,2}$ is the permutation \[ (23)_{1,3,2}= \left\{ \vcenter{\tiny \xymatrix@R-1pc@C-1pc{ 1\ar[d]&2\ar[d]&3\ar[d]&4\ar[d]&5\ar[d]&6\ar[d]\\ 1&5&6&2&3&4 }} \right\} = (25364). \] In ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m})\mathbin{\square} X^{(j)}$, if we apply $\sigma$ to permute the endomorphism object factors and $\sigma_{j_{1},\dotsc,j_{m}}$ to permute the $X$ factors, then evaluation pairs the same factors as with no permutation and the diagram \[ \xymatrix{ ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m}))\mathbin{\square} X^{(j)} \ar[r]^-{ev} \ar[d]_{c_{\sigma} \mathbin{\square} c_{\sigma_{j_{1},\dotsc,j_{m}}}} &X^{(m)}\ar[d]^{c_{\sigma}}\\ ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{\sigma^{-1}(1)})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{\sigma^{-1}(m)}))\mathbin{\square} X^{(j)}\ar[r]_-{ev} &X^{(m)} } \] commutes. \iffalse (As indicated above, here $c_{(-)}$ denotes the $\mathbin{\square}$ commutativity isomorphism for the given permutation; for the left vertical map, we note that since $\sigma$ sends $\sigma^{-1}(i)$ to $i$, in general $c_{\sigma}$ is a map $Z_{1}\mathbin{\square} \dotsb \mathbin{\square} Z_{m}\to Z_{\sigma^{-1}(1)}\times \dotsb \mathbin{\square} Z_{\sigma^{-1}(m)}$.)\fi This now tells us what happens with $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ and the permutation action on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)$: the composite of the right action of $\sigma$ on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(m)$ with $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ \begin{multline*} {\opsymbfont{E}}{\mathrm{nd}}_{X}(m)\mathbin{\square} ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m}))\\ \overto{a_{\sigma} \mathbin{\square} \id} {\opsymbfont{E}}{\mathrm{nd}}_{X}(m)\mathbin{\square} ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m})) \overto{\Gamma^{m}_{j_{1},\dotsc,j_{m}}} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j) \end{multline*} is equal to the composite of the $\mathbin{\square}$-permutation $c_{\sigma}$ on the ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{i})$'s, the composition map $\Gamma^{m}_{j_{\sigma^{-1}(1)},\dotsc,j_{\sigma^{-1}(m)}}$, and the right action of $\sigma_{j_{1},\dotsc,j_{m}}$ on ${\opsymbfont{E}}{\mathrm{nd}}_{X}(j)$ \begin{multline*} {\opsymbfont{E}}{\mathrm{nd}}_{X}(m)\mathbin{\square} ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{m}))\\ \overto{\id\mathbin{\square} c_{\sigma}} {\opsymbfont{E}}{\mathrm{nd}}_{X}(m)\mathbin{\square} ({\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{\sigma^{-1}(1)})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j_{\sigma^{-1}(m)}))\\ \overto{\Gamma^{m}_{j_{\sigma^{-1}(1)},\dotsc,j_{\sigma^{-1}(m)}}} {\opsymbfont{E}}{\mathrm{nd}}_{X}(j)
\overto{a_{\sigma_{j_{1},\dotsc,j_{m}}}}{\opsymbfont{E}}{\mathrm{nd}}_{X}(j). \end{multline*}
See Figure~\ref{fig:operadhardperm} on p.~\pageref{fig:operadhardperm} for this equation written as a diagram.
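For readers who want to experiment with the block permutations $\sigma_{j_{1},\dotsc,j_{m}}$, the following short Python sketch reproduces the worked example above: it places block $i$ in the slot determined by $\sigma$ and reads the resulting two-line array as a permutation. The encoding of permutations as dictionaries $i\mapsto \sigma(i)$ is ours and purely for illustration.
\begin{verbatim}
def block_permutation(sigma, sizes):
    # sigma: a permutation of {1,...,m} as a dict i -> sigma(i); sizes = [j_1,...,j_m].
    # Block i (the i-th group of consecutive letters) is placed in slot sigma(i);
    # the bottom row of the two-line array is the resulting arrangement of 1,...,j,
    # read off as the permutation sending i to the entry below it.
    m, j = len(sizes), sum(sizes)
    starts = [sum(sizes[:i]) for i in range(m)]
    blocks = [list(range(starts[i] + 1, starts[i] + sizes[i] + 1)) for i in range(m)]
    inv = {v: k for k, v in sigma.items()}          # slot k receives block sigma^{-1}(k)
    bottom = [x for k in range(1, m + 1) for x in blocks[inv[k] - 1]]
    return {i + 1: bottom[i] for i in range(j)}

# the example above: m = 3, (j_1, j_2, j_3) = (1, 3, 2), sigma = (23)
assert block_permutation({1: 1, 2: 3, 3: 2}, [1, 3, 2]) == \
    {1: 1, 2: 5, 3: 6, 4: 2, 5: 3, 6: 4}            # the cycle (25364)
\end{verbatim}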
Although we did not emphasize this above, we need to allow any of $m$, $j_{1},\dotsc,j_{m}$, or $j$ to be zero, where we understand empty $\mathbin{\square}$-products to be the unit $\mathbf{1}$. The formulations above still work with this extension, using the unit isomorphism where necessary. The purpose of allowing these ``zero-ary'' operations is that it allows us to encode units into the structure: for example, in the context of spaces, $\mathbf{1}$ is the one point space $*$, and to describe the structure of a topological monoid, not only do we need the binary operation $X\times X\to X$, but we also need the zero-ary operation $*\to X$ for the unit.
Rewriting the properties of ${\opsymbfont{E}}{\mathrm{nd}}_{X}$ above as a definition, we get an element-free version of the definition of operad of May~\cite[1.2]{May-GILS}.\footnote{In the original definition, May required ${\opsymbfont{O}}(0)=\mathbf{1}$ in order to provide ${\opsymbfont{O}}$-algebras with units, which was desirable in the iterated loop space context, but standard convention has since dropped this requirement to allow non-unital algebras and other unit variants.}
\begin{defn}\label{def:operad} An operad in a symmetric monoidal category ${\catsymbfont{M}}$ consists of a sequence of objects ${\opsymbfont{O}}(m)$, $m=0,1,2,3,\dotsc$, together with: \begin{enumerate}\deflist \item\label{part:operad:action} A right action of the symmetric group $\Sigma_{m}$ on ${\opsymbfont{O}}(m)$ for all $m$, \item\label{part:operad:unitmap} A \textit{unit map} $1\colon \mathbf{1}\to {\opsymbfont{O}}(1)$, and \item\label{part:operad:comp} A \textit{composition rule} \[ \Gamma^{m}_{j_{1},\dotsc,j_{m}}\colon {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\to {\opsymbfont{O}}(j) \] for every $m$, $j_{1},\dotsc,j_{m}$, where $j=j_{1}+\dotsb+j_{m}$, typically written $\Gamma$ when $m$ and $j_{1},\dotsc,j_{m}$ are understood or irrelevant, \end{enumerate} satisfying the following conditions: \begin{enumerate} \item\label{part:operad:assoc} The composition rule $\Gamma$ is associative in the sense that for any $m$, $j_{1},\dotsc,j_{m}$ and $k_{1},\dotsc,k_{j}$, letting $j=j_{1}+\dotsb +j_{m}$, $k=k_{1}+\dotsb+k_{j}$, $t_{i}=j_{1}+\dotsb+j_{i-1}$ (with $t_{1}=0$), and $s_{i}=k_{t_{i}+1}+\dotsb+k_{t_{i}+j_{i}}$, the equation \begin{multline*} \hspace{3\listparindent} \Gamma^{j}_{k_{1},\dotsc,k_{j}}\circ (\Gamma^{m}_{j_{1},\dotsc,j_{m}}\mathbin{\square} \id_{{\opsymbfont{O}}(k_{1})}\mathbin{\square} \dotsb \mathbin{\square} \id_{{\opsymbfont{O}}(k_{j})})\\ = \Gamma^{m}_{s_{1},\dotsc,s_{m}}\circ (\id_{{\opsymbfont{O}}(m)}\mathbin{\square}\ \Gamma^{j_{1}}_{k_{1},\dotsc,k_{j_{1}}}\mathbin{\square} \dotsb \mathbin{\square} \Gamma^{j_{m}}_{k_{t_{m}+1},\dotsc,k_{j}})\circ c \end{multline*} holds in the set of maps \[ {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\mathbin{\square} {\opsymbfont{O}}(k_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(k_{j})\to {\opsymbfont{O}}(k) \] where $c$ is the $\mathbin{\square}$-permutation \begin{multline*} \hspace{3\listparindent} {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\mathbin{\square} {\opsymbfont{O}}(k_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(k_{j}) \to\\ \hspace{3\listparindent} {\opsymbfont{O}}(m)\mathbin{\square} ({\opsymbfont{O}}(j_{1})\mathbin{\square} {\opsymbfont{O}}(k_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(k_{j_{1}})) \mathbin{\square}\dotsb\\ \dotsb \mathbin{\square} ({\opsymbfont{O}}(j_{m})\mathbin{\square} {\opsymbfont{O}}(k_{t_{m}+1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(k_{j})) \end{multline*} that shuffles the ${\opsymbfont{O}}(k_{\ell})$'s and ${\opsymbfont{O}}(j_{i})$'s as displayed (see Figure~\ref{fig:operadassoc} on p.~\pageref{fig:operadassoc} for the diagram);
\item\label{part:operad:unit} The unit map $1$ is a left and right unit for the composition rule $\Gamma$ in the sense that $\Gamma^{1}_{m}\circ (1\mathbin{\square} \id)$ \[ \mathbf{1}\mathbin{\square} {\opsymbfont{O}}(m)\overto{1\mathbin{\square} \id}{\opsymbfont{O}}(1)\mathbin{\square} {\opsymbfont{O}}(m)\overto{\Gamma^{1}_{m}}{\opsymbfont{O}}(m) \] is the unit isomorphism and $\Gamma^{m}_{1,\dotsc,1}\circ (\id\mathbin{\square} 1^{(m)})$ \[ {\opsymbfont{O}}(m)\mathbin{\square} \mathbf{1}^{(m)} \overto{\id\mathbin{\square} 1^{(m)}} {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(1)^{(m)} \overto{\Gamma^{m}_{1,\dotsc,1}}{\opsymbfont{O}}(m) \] is the iterated unit isomorphism for ${\opsymbfont{O}}(m)$ for all $m$; \item\label{part:operad:easyperm} The map $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ is $(\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}})$-equivariant for the block sum inclusion of $\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}$ in $\Sigma_{j}$; and \item\label{part:operad:hardperm} For any $m$, $j_{1},\dotsc,j_{m}$ and any $\sigma \in \Sigma_{m}$, the equation \begin{multline*} \hspace{3\listparindent} \Gamma^{m}_{j_{1},\dotsc,j_{m}}\circ (a_{\sigma}\mathbin{\square} \id_{{\opsymbfont{O}}(j_{1})}\mathbin{\square} \dotsb \mathbin{\square}\id_{{\opsymbfont{O}}(j_{m})}) \\ =a_{\sigma_{j_{1},\dotsc,j_{m}}}\circ \Gamma^{m}_{j_{\sigma^{-1}(1)},\dotsc,j_{\sigma^{-1}(m)}}\circ (\id_{{\opsymbfont{O}}(m)}\mathbin{\square} c_{\sigma}) \end{multline*} holds in the set of maps \[ {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\to {\opsymbfont{O}}(j) \] where $\sigma_{j_{1},\dotsc,j_{m}}$ denotes the block permutation in $\Sigma_{j}$ corresponding to $\sigma$ on the blocks of size $j_{1},\dotsc,j_{m}$, $a$ denotes the right action of~\eqref{part:operad:action}, and $c_{\sigma}$ denotes the $\mathbin{\square}$-permutation corresponding to $\sigma$ (see Figure~\ref{fig:operadhardperm} on p.~\pageref{fig:operadhardperm} for the diagram). \end{enumerate} \end{defn}
\begin{figure}
\caption{The diagram for \ref{def:operad}.\eqref{part:operad:assoc}}
\label{fig:operadassoc}
\end{figure}
\begin{figure}
\caption{The diagram for \ref{def:operad}.\eqref{part:operad:hardperm}}
\label{fig:operadhardperm}
\end{figure}
A map of operads consists of a map of each object that commutes with the structure:
\begin{defn}\label{def:opmap} A map of operads $(\{{\opsymbfont{O}}(m)\},1,\Gamma)\to (\{{\opsymbfont{O}}'(m)\},1',\Gamma')$ consists of $\Sigma_{m}$-equivariant maps $\phi_{m}\colon {\opsymbfont{O}}(m)\to {\opsymbfont{O}}'(m)$ for all $m$ such that \[ \Gamma^{\prime m}_{j_{1},\dotsc,j_{m}} \circ (\phi_{m}\mathbin{\square} \phi_{j_{1}}\mathbin{\square} \dotsb \mathbin{\square} \phi_{j_{m}}) = \phi_{j}\circ \Gamma^{m}_{j_{1},\dotsc,j_{m}} \] for all $m$, $j_{1},\dotsc,j_{m}$ and $1'=\phi_{1}\circ 1$; in commuting diagrams: \[ \def\opmapstrut{\vphantom{{\opsymbfont{O}}'(m)\mathbin{\square} {\opsymbfont{O}}'(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}'(j_{m})}} \xymatrix@C+1pc{ {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m}) \opmapstrut \ar[r]^-{\Gamma^{m}_{j_{1},\dotsc,j_{m}}} \ar[d]_{\phi_{m}\mathbin{\square} \phi_{j_{1}}\mathbin{\square} \dotsb \mathbin{\square} \phi_{j_{m}}} &{\opsymbfont{O}}(j)\ar[d]^{\phi_{j}}\\ {\opsymbfont{O}}'(m)\mathbin{\square} {\opsymbfont{O}}'(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}'(j_{m}) \ar[r]_-{\Gamma^{\prime m}_{j_{1},\dotsc,j_{m}}} &{\opsymbfont{O}}'(j) } \qquad \xymatrix@C-1pc{ &\mathbf{1}\opmapstrut\ar[dl]_{1}\ar[dr]^{1'}\\ {\opsymbfont{O}}(1)\ar[rr]_{\phi_{1}}\opmapstrut&\relax\hskip-1pc&{\opsymbfont{O}}'(1). } \] \end{defn}
The endomorphism operad ${\opsymbfont{E}}{\mathrm{nd}}_{X}$ gives an example of an operad in any closed symmetric monoidal category (for any object $X$). Here are some additional important examples.
\begin{example}[The identity operad] Assume the symmetric monoidal category ${\catsymbfont{M}}$ has an initial object $\emptyset$. If $\mathbin{\square}$ preserves the initial object in each variable, $\emptyset\mathbin{\square} (-)\cong \emptyset\cong (-)\mathbin{\square} \emptyset$ (which is automatic in the closed case, i.e., when function objects exist), we also have the example of the \term{identity operad} ${\opsymbfont{I}}$, which has ${\opsymbfont{I}}(1)=\mathbf{1}$ (with $1$ the identity) and ${\opsymbfont{I}}(m)$ the initial object for $m\neq 1$; this is the initial object in the category of operads. \end{example}
\begin{example}[The commutative algebra operad] The operad ${\opsymbfont{C}}{\mathrm{om}}$ exists in any symmetric monoidal category: \[ {\opsymbfont{C}}{\mathrm{om}}(m)=\mathbf{1} \] for all $m$ with the trivial symmetric group actions and composition law $\Gamma$ given by the unit isomorphism; its category of algebras (see the next section) is isomorphic to the category of commutative monoids for $\mathbin{\square}$ in ${\catsymbfont{M}}$ (defined in terms of the usual diagrams, i.e., \cite[VII\S3]{MacLane-Categories} plus commutativity); see Example~\ref{ex:comm}. \end{example}
\begin{example}[The associative algebra operad]\label{ex:Ass} If ${\catsymbfont{M}}$ has finite coproducts and $\mathbin{\square}$ preserves finite coproducts in each variable, then we also have the operad ${\opsymbfont{A}}{\mathrm{ss}}$: \[ {\opsymbfont{A}}{\mathrm{ss}}(m)=\myop\coprod\limits_{\Sigma_{m}}\mathbf{1} \] with symmetric group action induced by the natural (right) action of $\Sigma_{m}$ on $\Sigma_{m}$ and composition law $\Gamma$ induced by block permutation and block sum of permutations, \[ \sigma \in \Sigma_{m}, \tau_{1}\in \Sigma_{j_{1}},\dotsc, \tau_{m}\in \Sigma_{j_{m}} \mapsto \sigma_{j_{1},\dotsc,j_{m}}\circ (\tau_{1}\oplus \dotsb \oplus \tau_{m})\in \Sigma_{j}. \] Its category of algebras is isomorphic to the category of monoids for $\mathbin{\square}$ in ${\catsymbfont{M}}$; see Example~\ref{ex:mon}. \end{example}
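In the category of sets, ${\opsymbfont{A}}{\mathrm{ss}}(m)$ is just $\Sigma_{m}$ and the composition law is literally the displayed formula. The following Python sketch spells it out, reusing the \texttt{block\_permutation} function and the dictionary encoding of permutations from the earlier sketch (both of which are our own illustrative conventions, not part of the formal development).
\begin{verbatim}
def compose_perms(p, q):
    return {i: p[q[i]] for i in q}                   # (p o q)(i) = p(q(i))

def block_sum(taus, sizes):
    # tau_1 (+) ... (+) tau_m acting on consecutive blocks of the given sizes
    out, offset = {}, 0
    for tau, j in zip(taus, sizes):
        for i in range(1, j + 1):
            out[offset + i] = offset + tau[i]
        offset += j
    return out

def gamma_ass(sigma, taus):
    # composition in Ass (in sets): sigma in Sigma_m, taus = [tau_1,...,tau_m]
    # with tau_i in Sigma_{j_i}; returns sigma_{j_1,...,j_m} o (tau_1 (+) ... (+) tau_m)
    sizes = [len(tau) for tau in taus]
    return compose_perms(block_permutation(sigma, sizes), block_sum(taus, sizes))
\end{verbatim}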
For operads like ${\opsymbfont{A}}{\mathrm{ss}}$, it is often useful to work in terms of \textit{non-symmetric operads}, which come without the permutation action.
\begin{defn}\label{def:nonsym} A \textit{non-symmetric operad} consists of a sequence of objects ${\opsymbfont{O}}(m)$, $m=0,1,2,3,\dotsc$, together with a unit map and composition rule as in \ref{def:operad}.\eqref{part:operad:unitmap} and \eqref{part:operad:comp} satisfying the associativity and unit rules of \ref{def:operad}.\eqref{part:operad:assoc} and~\eqref{part:operad:unit}. A map of non-symmetric operads consists of a map of their object sequences that commutes with the unit map and the composition rule. \end{defn}
Forgetting the permutation action on ${\opsymbfont{C}}{\mathrm{om}}$ gives a non-symmetric operad called $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$ that is the non-symmetric version of the operad ${\opsymbfont{A}}{\mathrm{ss}}$. In general, under the finite coproduct assumption in Example~\ref{ex:Ass}, given a non-symmetric operad $\overline{{\opsymbfont{O}}}$, the levelwise product $\overline{{\opsymbfont{O}}}\mathbin{\square} {\opsymbfont{A}}{\mathrm{ss}}$ (with $m$th object $\overline{{\opsymbfont{O}}}(m)\mathbin{\square} {\opsymbfont{A}}{\mathrm{ss}}(m)$) has the canonical structure of an operad; it is the \term{operad associated to $\overline{{\opsymbfont{O}}}$}. In the category of spaces (or sets, but not in the category of abelian groups, the category of chain complexes, or the various categories of spectra), an operad ${\opsymbfont{O}}$ comes from a non-symmetric operad exactly when it admits a map to ${\opsymbfont{A}}{\mathrm{ss}}$: the corresponding non-symmetric operad $\overline{{\opsymbfont{O}}}$ has $\overline{{\opsymbfont{O}}}(m)$ the subobject that maps to the identity permutation summand of ${\opsymbfont{A}}{\mathrm{ss}}(m)$, and there is a canonical isomorphism ${\opsymbfont{O}}\cong \overline{{\opsymbfont{O}}}\mathbin{\square} {\opsymbfont{A}}{\mathrm{ss}}$ (that depends only on the original choice of map ${\opsymbfont{O}}\to {\opsymbfont{A}}{\mathrm{ss}}$).
\section{\texorpdfstring{$A_{\infty}$, $E_{\infty}$, and $E_{n}$}{Ainfty, Einfty, and En} Operads} \label{sec:ainf}
This section reviews some of the most important classes of examples of operads in homotopy theory, the $A_{\infty}$, $E_{\infty}$, and $E_{n}$ operads. We concentrate on the case of (unbased) spaces, with some notes about the appropriate definition of such operads in other contexts. For example, in stable homotopy theory, the unbased suspension spectrum functor $\Sigma^{\infty}_{+}$ converts model $E_{n}$ operads into operads in the various modern categories of spectra. The universal role played by spaces in homotopy theory typically allows for reasonable definitions of these classes of operads in any homotopy theoretic setting.
The terminology of $A_{\infty}$ space and the basic model of an $A_{\infty}$ operad, due to Stasheff~\cite{Stasheff-Ainfty}, preceded the definition of operad by several years.
\begin{defn}\label{defn:Ainfty} An $A_{\infty}$ operad in spaces is a non-symmetric operad whose $m$th space is contractible for all $m$. \end{defn}
Informally, an operad (with symmetries) is $A_{\infty}$ when there is an understood isomorphism to the operad associated to some $A_{\infty}$ operad. The definition of $A_{\infty}$ operad usually has a straightforward generalization to other symmetric monoidal categories with a notion of homotopy theory: contractibility corresponds to a weak equivalence to the unit $\mathbf{1}$ of the symmetric monoidal structure, and we should add the requirement that the non-symmetric operad composition rule be a weak equivalence for all indices (which is automatic in spaces). One wrinkle is that a flatness condition may be needed and should be imposed to ensure that the functor $\overline{{\opsymbfont{O}}}(m)\mathbin{\square} X^{(m)}$ is weakly equivalent to $X^{(m)}$ (cf.~Section~\ref{sec:compare}); in the case of spaces, contractibility implicitly includes such a condition (although in the category of spaces itself, the monoidal product $\times$ preserves all weak equivalences in each variable). In symmetric spectra and orthogonal spectra, a good flatness condition is to be homotopy equivalent to a cofibrant object; in EKMM $S$-modules, a good flatness condition is to be homotopy equivalent to a semi-cofibrant object (see~\cite[\S6]{LewisMandell-MMMC}).
We have already seen an example of an $A_{\infty}$ operad: the operad $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$ is an $A_{\infty}$ operad. The associahedra ${\opsymbfont{K}}(m)$ of Stasheff~\cite[I.\S6]{Stasheff-Ainfty} have the structure of a non-symmetric operad using the insertion maps [\textit{ibid.}] for the composition rule, and this is an example of an $A_{\infty}$ operad. The Boardman-Vogt little $1$-cubes (non-symmetric) operad $\overline{{\opsymbfont{C}}}_{1}$ described below gives a third example.
Next we discuss $E_{\infty}$ operads. Recall that a free $\Sigma_{m}$-cell complex is a space built by cells of the form $(\Sigma_{m}\times D^{n},\Sigma_{m}\times S^{n-1})$, where $D^{n}$ denotes the unit disk in ${\mathbb{R}}^{n}$. The definition of $E_{\infty}$ operad asks for the constituent spaces to have the $\Sigma_{m}$-equivariant homotopy type of a free $\Sigma_{m}$-cell complex and the non-equivariant homotopy type of a point.
\begin{defn}\label{defn:Einfty} An operad ${\opsymbfont{E}}$ in spaces is an $E_{\infty}$ operad when for each $m$, its $m$th space is a universal $\Sigma_{m}$ space: ${\opsymbfont{E}}(m)$ has the $\Sigma_{m}$-equivariant homotopy type of a free $\Sigma_{m}$-cell complex and is non-equivariantly contractible. \end{defn}
Unlike the $A_{\infty}$ case, the operad ${\opsymbfont{C}}{\mathrm{om}}$ is not $E_{\infty}$ as its spaces do not have free actions. The Barratt-Eccles operad ${\opsymbfont{E}}\Sigma$ provides an example:
\begin{example}[The Barratt-Eccles operad]\label{ex:be} Let ${\opsymbfont{E}}\Sigma(m)$ denote the nerve of the category $E\Sigma_{m}$ whose set of objects is $\Sigma_{m}$ and which has a unique map between any two objects. The symmetric group $\Sigma_{m}$ acts strictly on the category and the nerve ${\opsymbfont{E}}\Sigma(m)$ inherits a $\Sigma_{m}$-action; moreover, as the action of $\Sigma_{m}$ on the simplices is free, the simplicial triangulation of ${\opsymbfont{E}}\Sigma(m)$ has the structure of a free $\Sigma_{m}$-cell complex. It is non-equivariantly contractible because every object of $E\Sigma_{m}$ is a zero object. The operad composition is induced by an operad structure on the sequence of categories $E\Sigma_{m}$, given on objects by block permutations and block sums of permutations as in the operad structure on ${\opsymbfont{A}}{\mathrm{ss}}$. The resulting operad is called the \term{Barratt-Eccles operad}. \end{example}
Boardman and Vogt~\cite[\S2]{BV-Bull} defined another $E_{\infty}$ operad, built out of linear isometries.
\begin{example}[The linear isometries operad]\label{ex:L} The Boardman-Vogt linear isometries operad ${\opsymbfont{L}}$ has its $m$th space the space of linear isometries \[ ({\mathbb{R}}^{\infty})^{m}={\mathbb{R}}^{\infty}\oplus\dotsb \oplus {\mathbb{R}}^{\infty}\to {\mathbb{R}}^{\infty} \] (where ${\mathbb{R}}^{\infty}=\bigcup {\mathbb{R}}^{n}$), with operad structure defined as in the example of an endomorphism operad. The topology comes from the identification \[ {\opsymbfont{L}}(m)=\lim\nolimits_{k} \colim_{n} {\opsymbfont{I}}(({\mathbb{R}}^{k})^{m},{\mathbb{R}}^{n}) \] for ${\opsymbfont{I}}(({\mathbb{R}}^{k})^{m},{\mathbb{R}}^{n})$ the space of linear isometries $({\mathbb{R}}^{k})^{m}\to {\mathbb{R}}^{n}$ (with the usual manifold topology). The $\Sigma_{m}$-action induced by the action on the direct sum $({\mathbb{R}}^{\infty})^{m}$ is clearly free; each ${\opsymbfont{I}}(({\mathbb{R}}^{k})^{m},{\mathbb{R}}^{n})$ is a $\Sigma_{m}$-manifold, and ${\opsymbfont{L}}(m)$ is homotopy equivalent to a free $\Sigma_{m}$-cell complex. Since ${\opsymbfont{I}}(({\mathbb{R}}^{k})^{m},{\mathbb{R}}^{n})$ is $(n-km-1)$-connected, it follows that ${\opsymbfont{L}}(m)$ is non-equivariantly contractible. \end{example}
The Boardman-Vogt little $\infty$-cubes operad ${\opsymbfont{C}}_{\infty}$ described below gives a third example of an $E_{\infty}$ operad.
The requirement for freeness derives from infinite loop space theory. As we review in Section~\ref{sec:loop}, infinite loop spaces are algebras for the little $\infty$-cubes operad ${\opsymbfont{C}}_{\infty}$. As we review in Section~\ref{sec:compare}, for any $E_{\infty}$ operad ${\opsymbfont{E}}$ in spaces, the category of ${\opsymbfont{E}}$-algebras has an equivalent homotopy theory to the category of ${\opsymbfont{C}}_{\infty}$-algebras. On the other hand, any algebra in spaces for the operad ${\opsymbfont{C}}{\mathrm{om}}$ must be a generalized Eilenberg-Mac~Lane space, and the category of ${\opsymbfont{C}}{\mathrm{om}}$-algebras does not have an equivalent homotopy theory to the category of ${\opsymbfont{C}}_{\infty}$-algebras. In generalizing the notion of $E_{\infty}$ to other categories, getting the right category of algebras is key. For symmetric spectra, orthogonal spectra, and EKMM $S$-modules and for chain complexes of modules over a ring containing the rational numbers, it is harmless to allow ${\opsymbfont{C}}{\mathrm{om}}$ to fit the definition of $E_{\infty}$ operad (cf.~Examples~\ref{ex:dmeq2}, \ref{ex:dmeq3}); in spaces and chain complexes of modules over a finite field, some freeness condition is required. In general, the condition should be a flatness condition on ${\opsymbfont{O}}(m)$ for $({\opsymbfont{O}}(m)\mathbin{\square} X^{(m)})/\Sigma_{m}$ as a functor of $X$ (for suitable $X$) (cf.~Definition~\ref{def:dmeq}).
Unlike the definition of $E_{\infty}$ or $A_{\infty}$ operad, which are defined in terms of homotopical conditions on the constituent spaces, the definition of $E_{n}$ operads for other $n$ depends on specific model operads first defined by Boardman-Vogt~\cite{BV-Bull} called the \textit{little $n$-cubes operads} ${\opsymbfont{C}}_{n}$.
\begin{cons}[The little $n$-cubes operad]\label{cons:lilcubes} The $m$th space ${\opsymbfont{C}}_{n}(m)$ of the little $n$-cubes operad is the space of $m$ ordered almost disjoint parallel axis affine embeddings of the unit $n$-cube $[0,1]^{n}$ in itself. So ${\opsymbfont{C}}_{n}(0)$ is a single point representing the unique way to embed $0$ unit $n$-cubes in the unit $n$-cube. A parallel axis affine embedding of the unit cube in itself is a map of the form \[ (t_{1},\dotsc,t_{n})\in [0,1]^{n}\mapsto (x_{1}+a_{1}t_{1},\dotsc,x_{n}+a_{n}t_{n})\in [0,1]^{n} \] for some fixed $(x_{1},\dotsc,x_{n})$ and $(a_{1},\dotsc,a_{n})$ with each $a_{i}>0$, $x_{i}\geq 0$, and $x_{i}+a_{i}\leq 1$; it is determined by the point $(x_{1},\dotsc,x_{n})$ where it sends $(0,\dotsc,0)$ and the point \[ (y_{1},\dotsc,y_{n})=(x_{1}+a_{1},\dotsc,x_{n}+a_{n}) \] where it sends $(1,\dotsc,1)$. So ${\opsymbfont{C}}_{n}(1)$ is homeomorphic to the subspace \[ \{ ((x_{1},\dotsc,x_{n}),(y_{1},\dotsc,y_{n}))\in [0,1]^{n}\times [0,1]^{n}\mid x_{1}<y_{1}, x_{2}<y_{2}, \dotsc, x_{n}<y_{n}\} \] of $[0,1]^{n}\times [0,1]^{n}$. For $m\geq 2$, almost disjoint means that the images of the open subcubes are disjoint (the embedded cubes only intersect on their boundaries), and ${\opsymbfont{C}}_{n}(m)$ is homeomorphic to a subset of ${\opsymbfont{C}}_{n}(1)^{m}$. The map $1$ is specified by the element of ${\opsymbfont{C}}_{n}(1)$ that gives the identity embedding of the unit $n$-cube. The action of the symmetric group is to re-order the embeddings. The composition law $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ composes the $j_{1}$ embeddings in ${\opsymbfont{C}}_{n}(j_{1})$ with the first embedding in ${\opsymbfont{C}}_{n}(m)$, the $j_{2}$ embeddings in ${\opsymbfont{C}}_{n}(j_{2})$ with the second embedding in ${\opsymbfont{C}}_{n}(m)$, etc., to give $j=j_{1}+\dotsb+j_{m}$ total embeddings. See Figure~\ref{fig:lilcubes} for a picture in the case $n=2$. Taking cartesian product with the identity map on $[0,1]$ takes a self-embedding of the unit $n$-cube to a self-embedding of the unit $(n+1)$-cube and induces maps of operads ${\opsymbfont{C}}_{n}\to {\opsymbfont{C}}_{n+1}$ that are closed inclusions of the underlying spaces. Let ${\opsymbfont{C}}_{\infty}(m)=\bigcup {\opsymbfont{C}}_{n}(m)$; the operad structures on the ${\opsymbfont{C}}_{n}$ fit together to define an operad structure on ${\opsymbfont{C}}_{\infty}$. \end{cons}
\begin{figure}
\caption{Composition of Little 2-Cubes}
\label{fig:lilcubes}
\end{figure}
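To illustrate the composition law of Construction~\ref{cons:lilcubes} in the simplest case $n=1$, here is a short Python sketch: a little $1$-cube is recorded by its endpoints $(x,y)$ with $0\leq x<y\leq 1$, and composition affinely re-embeds each list of little intervals into the corresponding interval of the outer element. The representation and the names are ours, chosen only for illustration.
\begin{verbatim}
def embed(outer, inner):
    # affinely re-embed the interval inner = (x, y) of [0,1] into outer = (a, b)
    a, b = outer
    x, y = inner
    return (a + (b - a) * x, a + (b - a) * y)

def gamma_little_intervals(c, ds):
    # c: a list of m almost disjoint subintervals of [0,1] (an element of C_1(m));
    # ds: a list of m such lists, ds[i] of length j_i; the result plugs the
    # intervals of ds[i] into the i-th interval of c, giving j_1+...+j_m intervals.
    return [embed(outer, inner) for outer, d in zip(c, ds) for inner in d]

# plug two intervals into the first half and one interval into the second half
c  = [(0.0, 0.5), (0.5, 1.0)]
ds = [[(0.0, 0.4), (0.6, 1.0)], [(0.25, 0.75)]]
assert gamma_little_intervals(c, ds) == [(0.0, 0.2), (0.3, 0.5), (0.625, 0.875)]
\end{verbatim}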
The space ${\opsymbfont{C}}_{n}(m)$ has the $\Sigma_{m}$-equivariant homotopy type of the configuration space $C(m,{\mathbb{R}}^{n})$ of $m$ (ordered) points in ${\mathbb{R}}^{n}$, or equivalently, $C(m,(0,1)^{n})$ of $m$ points in $(0,1)^{n}$. To see this, since both spaces are free $\Sigma_{m}$-manifolds (non-compact, and with boundary in the case of ${\opsymbfont{C}}_{n}(m)$), it is enough to show that they are non-equivariantly weakly equivalent, but it is in fact no harder to produce a $\Sigma_{m}$-equivariant homotopy equivalence explicitly. We have a $\Sigma_{m}$-equivariant map ${\opsymbfont{C}}_{n}(m)\to C(m,(0,1)^{n})$ by taking the center point of each embedded subcube. It is easy to define a $\Sigma_{m}$-equivariant section of this map by continuously choosing cubes centered on the given configuration; one way to do this is to make them all have the same side length, equal to $1/2$ of the minimum of the pairwise distances between the points and the distances from the points to the boundary of $[0,1]^{n}$. A $\Sigma_{m}$-equivariant homotopy from the composite map on ${\opsymbfont{C}}_{n}(m)$ to the identity could (for example) first linearly shrink all sides that are bigger than their original length and then linearly expand all remaining sides to their original length. In particular, ${\opsymbfont{C}}_{n}(1)$ is always contractible and ${\opsymbfont{C}}_{n}(2)$ is $\Sigma_{2}$-equivariantly homotopy equivalent to the sphere $S^{n-1}$ with the antipodal action. For $m>2$, the configuration spaces can be described in terms of iterated fibrations, and their Borel homology was calculated by Cohen in~\cite{Cohen-Bulletin} and~\cite[IV]{CohenLadaMay}.
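One concrete way to write down the section described in the preceding paragraph is sketched below in Python; the use of the maximum metric to measure distances (convenient for axis-parallel cubes) and all of the names are our own choices, made only for illustration.
\begin{verbatim}
import itertools

def section(points):
    # points: a list of m distinct points of (0,1)^n, as tuples; returns m
    # axis-parallel cubes (min corner, max corner) centered at the points, all
    # with the same side length, half the minimum of the pairwise distances
    # and the distances to the boundary of [0,1]^n (maximum metric throughout).
    def dist(p, q):
        return max(abs(a - b) for a, b in zip(p, q))
    to_boundary = [min(min(t, 1 - t) for t in p) for p in points]
    pairwise = [dist(p, q) for p, q in itertools.combinations(points, 2)]
    side = 0.5 * min(pairwise + to_boundary)
    return [(tuple(t - side / 2 for t in p), tuple(t + side / 2 for t in p))
            for p in points]

# two disjoint squares of side 0.125 centered at the two given points of (0,1)^2
print(section([(0.25, 0.5), (0.75, 0.5)]))
\end{verbatim}
Reordering the input points permutes the output cubes in the same way, which is the $\Sigma_{m}$-equivariance used above.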
We can say more about the homotopy types in the cases $n=1$, $n=2$, and $n=\infty$. For $n=1$, the natural order of the interval $[0,1]$ gives a natural order to the embedded sub-intervals ($1$-cubes); let $\overline{{\opsymbfont{C}}}_{1}(m)$ denote the subspace of ${\opsymbfont{C}}_{1}(m)$ where the sub-intervals are numbered in their natural order. The spaces $\overline{{\opsymbfont{C}}}_{1}(m)$ are contractible and form a non-symmetric operad with ${\opsymbfont{C}}_{1}$ (canonically) isomorphic to the associated operad. In other words, the map of operads ${\opsymbfont{C}}_{1}\to {\opsymbfont{A}}{\mathrm{ss}}$ that takes a sequence of embeddings and just remembers the order they come in is a $\Sigma_{m}$-equivariant homotopy equivalence at each level. In particular, ${\opsymbfont{C}}_{1}$ is an $A_{\infty}$ operad.
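In set-level terms the map ${\opsymbfont{C}}_{1}(m)\to {\opsymbfont{A}}{\mathrm{ss}}(m)=\Sigma_{m}$ just records the left-to-right order of the $m$ sub-intervals. Here is a minimal Python sketch, with our own (purely illustrative) convention for encoding the resulting permutation as a dictionary.
\begin{verbatim}
def order_permutation(intervals):
    # send a list of m almost disjoint subintervals of [0,1], given by their
    # endpoints, to the permutation recording their left-to-right order:
    # i -> the position of the i-th interval when read from left to right
    by_position = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    return {i + 1: by_position.index(i) + 1 for i in range(len(intervals))}

assert order_permutation([(0.6, 0.9), (0.1, 0.3), (0.4, 0.5)]) == {1: 3, 2: 1, 3: 2}
\end{verbatim}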
For $n=2$, the configuration space $C(m,{\mathbb{R}}^{2})$ is easily seen to be an Eilenberg-Mac~Lane space $K(A_{m},1)$ where $A_{m}$ is the pure braid group (of braids with fixed endpoints) on $m$ strands (see, for example, \cite[\S 4]{May-GILS}).
For $n=\infty$, ${\opsymbfont{C}}_{\infty}$ is an $E_{\infty}$ operad; each ${\opsymbfont{C}}_{\infty}(m)$ is a universal $\Sigma_{m}$-space. To see this, it is easier to work with \[ C(m,{\mathbb{R}}^{\infty}):=\bigcup C(m,{\mathbb{R}}^{n}). \] Choosing a homeomorphism $(0,1)\cong {\mathbb{R}}$ that sends $1/2$ to $0$, the induced homotopy equivalences ${\opsymbfont{C}}_{n}(m)\to C(m,{\mathbb{R}}^{n})$ are compatible with the inclusions ${\opsymbfont{C}}_{n}(m)\to {\opsymbfont{C}}_{n+1}(m)$ and $C(m,{\mathbb{R}}^{n})\to C(m,{\mathbb{R}}^{n+1})$; as these inclusions are embeddings of closed submanifolds (with boundary in the case of ${\opsymbfont{C}}_{n}(m)$), the induced map \[ {\opsymbfont{C}}_{\infty}(m)=\bigcup {\opsymbfont{C}}_{n}(m)\to \bigcup C(m,{\mathbb{R}}^{n})=C(m,{\mathbb{R}}^{\infty}) \] remains a homotopy equivalence. One way to see that $C(m,{\mathbb{R}}^{\infty})$ is non-equivariantly contractible is to start by choosing a homotopy through injective linear maps from the identity on ${\mathbb{R}}^{\infty}$ to the shift map that on basis elements sends $e_{i}$ to $e_{i+m}$. We then homotope the configuration (each of whose points now has its first $m$ coordinates equal to zero) so that the $i$th point has $i$th coordinate $1$ and the remainder of the first $m$ coordinates zero. Finally, we homotope the configuration to the configuration with $i$th point at $e_{i}$.
We use the operads ${\opsymbfont{C}}_{n}$ to define $E_{n}$ operads:
\begin{defn}\label{def:En} An operad ${\opsymbfont{E}}$ in spaces is an $E_{n}$ operad when there is a zigzag of maps of operads relating it to ${\opsymbfont{C}}_{n}$, each of which is a $\Sigma_{m}$-equivariant homotopy equivalence on $m$th spaces for all $m$. \end{defn}
This definition is standard, but a bit awkward, because it defines a property, whereas a better definition would define a structure and ask for at least a preferred equivalence class of zigzag.
As we review in Section~\ref{sec:compare}, such maps induce equivalences of homotopy categories of algebras (indeed, Quillen equivalences). We have implicitly given two different definitions of $E_{\infty}$ operad; the following proposition justifies this.
\begin{prop} An operad ${\opsymbfont{E}}$ of spaces is $E_{\infty}$ in the sense of Definition~\ref{defn:Einfty} if and only if it is $E_{\infty}$ in the sense of Definition~\ref{def:En}. \end{prop}
Before reviewing the proof, we state the following closely related proposition.
\begin{prop}\label{prop:E1} An operad ${\opsymbfont{E}}$ of spaces is $E_{1}$ if and only if it is isomorphic to the associated operad of an $A_{\infty}$ operad. \end{prop}
The previous two propositions (and their common proof) are the gist of the second half of \S3 of May~\cite{May-GILS}. In each case one direction is clear, since ${\opsymbfont{C}}_{1}$ and ${\opsymbfont{C}}_{\infty}$ are $A_{\infty}$ and $E_{\infty}$ (respectively), and the conditions of Definitions~\ref{defn:Ainfty} and~\ref{defn:Einfty} are preserved by the zigzags considered in Definition~\ref{def:En}. The proof of the other direction is to exhibit an explicit zigzag:
\begin{proof} Let ${\opsymbfont{E}}$ be the operad in question and assume it is either $E_{\infty}$ in the sense of Definition~\ref{defn:Einfty} (for the first proposition) or $A_{\infty}$ in the sense of Definition~\ref{defn:Ainfty}ff (for the second proposition). In the case of the first proposition, consider the product in the category of operads ${\opsymbfont{C}}_{\infty}\times {\opsymbfont{E}}$; it satisfies \[ ({\opsymbfont{C}}_{\infty}\times {\opsymbfont{E}})(m)={\opsymbfont{C}}_{\infty}(m)\times {\opsymbfont{E}}(m) \] with the diagonal $\Sigma_{m}$-action and the unit and composition maps the product of those for ${\opsymbfont{C}}_{\infty}$ and ${\opsymbfont{E}}$. The projections \[ {\opsymbfont{C}}_{\infty}\from {\opsymbfont{C}}_{\infty}\times {\opsymbfont{E}}\to {\opsymbfont{E}} \] give a zigzag as required by Definition~\ref{def:En}. For the second proposition, do the same trick with the non-symmetric operads $\overline{{\opsymbfont{E}}}$ and $\overline{{\opsymbfont{C}}}_{1}$ and then pass to the associated operads. \end{proof}
Definitions~\ref{defn:Ainfty} and~\ref{defn:Einfty} mean that identifying $A_{\infty}$ and $E_{\infty}$ operads is pretty straightforward. In unpublished work, Fiedorowicz~\cite{Fiedorowicz-BraidedOperad} defines the notion of a \term{braided operad}, which provides a good criterion for identifying $E_{2}$ operads. For $n>2$ (finite), the spaces ${\opsymbfont{C}}_{n}(m)$ are not Eilenberg-Mac\ Lane spaces (for $m>1$), and that makes identification of such operads much harder; however, Berger~\cite[1.16]{Berger-CombConfig} proves a theorem (that he attributes to Fiedorowicz) that gives a method to identify $E_{n}$ operads that seems to work well in practice; see~\cite[\S14]{McClureSmith-CosimplicialCubes}, \cite[\S1.6]{BergerFresse-Combinatorial}.
The work of Dunn~\cite{Dunn-Tensor} and Fiedorowicz-Vogt~\cite{FV-Add} is the start of an abstract identification of $E_{n}$ operads: The derived tensor product of $n$ $E_{1}$ operads is an $E_{n}$ operad. Here ``tensor product'' refers to the Boardman-Vogt tensor product of operads (or PROPs) in \cite[2\S3]{BV-PROP}, which is the universal pairing subject to ``interchange'', meaning that an ${\opsymbfont{O}}\otimes {\opsymbfont{P}}$-algebra structure consists of an ${\opsymbfont{O}}$-algebra and a ${\opsymbfont{P}}$-algebra structure on a space where the ${\opsymbfont{O}}$- and ${\opsymbfont{P}}$-structure maps commute (see \textit{ibid.}\ for more details on the construction of the tensor product). This still essentially defines $E_{n}$ operads in terms of reference models, though in principle, it gives a wide range of additional models. (The author does not know an example where this is actually put to use, but \cite{BFV-THH} comes close.) The concept of interchange makes sense in any cartesian symmetric monoidal structure, so this also in principle tells how to extend the notion of $E_{n}$ to other cartesian symmetric monoidal categories with a reasonable homotopy theory of operads for which the Boardman-Vogt tensor product is reasonably well-behaved. (Again, the author knows no examples where this is put to use, but perhaps work by Barwick (unpublished), Gepner (unpublished), and Lurie~\cite{Lurie-DAGVI} on $E_{n}$ structures is in a similar spirit.)
In categories suitably related to spaces, $E_{n}$ algebras are defined by a reference model suitably related to ${\opsymbfont{C}}_{n}$. For example, in the context of simplicial sets, the total singular complex of the little $n$-cubes operad has the canonical structure of an operad of simplicial sets, and we define $E_{n}$ operads in terms of this reference model. In symmetric spectra and orthogonal spectra, we have the reference model given by the unbased suspension spectrum functor: an operad is an $E_{n}$ operad when it is related to $\Sigma^{\infty}_{+}{\opsymbfont{C}}_{n}$ by a zigzag of operad maps that are (non-equivariant) weak equivalences on $m$th objects for all $m$. For categories of chain complexes, we use the singular chain complex of the little $n$-cubes operad to define the reference model. To make the singular chains an operad, we use the Eilenberg-Mac Lane shuffle map to relate tensor product of chains to chains on the cartesian product; the shuffle map is a lax symmetric monoidal natural transformation \[ C_{*}(X)\otimes C_{*}(Y)\to C_{*}(X\times Y), \] meaning that it commutes strictly with the symmetry isomorphisms \[ C_{*}(X)\otimes C_{*}(Y)\cong C_{*}(Y)\otimes C_{*}(X) \qquad C_{*}(X\times Y) \cong C_{*}(Y\times X) \] and makes the following associativity diagram commute. \[ \xymatrix{ C_{*}(X)\otimes C_{*}(Y)\otimes C_{*}(Z)\ar[r]\ar[d] &C_{*}(X\times Y)\otimes C_{*}(Z)\ar[d]\\ C_{*}(X)\otimes C_{*}(Y\times Z)\ar[r]&C_{*}(X\times Y\times Z) } \] See, for example, \cite[\S29]{May-Simplicial}.
The fact that $E_{n}$ operads need to be defined in terms of a reference model is not entirely satisfactory, especially in homotopical contexts that are not topological. Nevertheless, the definition for spaces, simplicial sets, or chain complexes seems to suffice to cover all other contexts that arise in practice.\footnote{In theory, the definition for simplicial sets should suffice for all homotopical contexts, but this may require changing models, which for a particular problem may be inconvenient or more complicated, or make it less concrete.}
\section{Operadic Algebras and Monads} \label{sec:monad}
In the original context of iterated loop spaces and in many current contexts in homotopy theory and beyond, the main purpose of operads is to parametrize operations, which is to say, to define operadic algebras. For a closed symmetric monoidal category, there are three equivalent definitions, one in terms of operations, one in terms of endomorphism operads, and one in terms of monads. This section reviews the three definitions.
Viewing ${\opsymbfont{O}}(m)$ as parametrizing some $m$-ary operations on an object $X$ means that we have an \textit{action map} \[ {\opsymbfont{O}}(m)\mathbin{\square} X^{(m)}\to X. \] Since the right action of $\Sigma_{m}$ on ${\opsymbfont{O}}(m)$ corresponds to reordering the arguments of the operations, applying $\sigma \in \Sigma_{m}$ to ${\opsymbfont{O}}(m)$ (and then performing the action map) should have the same effect as applying $\sigma$ to permute the factors in $X^{(m)}$. A concise way of saying this is to say that the map is equivariant for the diagonal (left) action on the source ${\opsymbfont{O}}(m)\mathbin{\square} X^{(m)}$ and the trivial action on the target $X$ (using the standard convention that the left action $\sigma$ on ${\opsymbfont{O}}(m)$ is given by the right action of $\sigma^{-1}$). The action map should also respect the composition law $\Gamma$, making $\Gamma$ correspond to composition of operations, and respect the identity $1$, making $1$ act by the identity operation. The following gives the precise definition:
\begin{defn}\label{def:algebra} Let ${\catsymbfont{M}}$ be a symmetric monoidal category and ${\opsymbfont{O}}=(\{{\opsymbfont{O}}(m)\},\Gamma,1)$ an operad in ${\catsymbfont{M}}$. An ${\opsymbfont{O}}$-algebra (in ${\catsymbfont{M}}$) consists of an object $A$ in ${\catsymbfont{M}}$ together with \textit{action maps} \[ \xi_{m}\colon {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}\to A \] that are equivariant for the diagonal (left) $\Sigma_{m}$-action on the source and the trivial $\Sigma_{m}$-action on the target and that satisfy the following associativity and unit conditions: \begin{enumerate} \item For all $m$, $j_{1},\dotsc,j_{m}$, \[ \xi_{m}\circ (\id_{{\opsymbfont{O}}(m)}\mathbin{\square} \xi_{j_{1}}\mathbin{\square} \dotsb \mathbin{\square} \xi_{j_{m}})=\xi_{j}\circ (\Gamma^{m}_{j_{1},\dotsc,j_{m}}\mathbin{\square} \id_{A}^{(j)}), \] i.e., the diagram \[ \xymatrix@C+3pc{ {\opsymbfont{O}}(m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\mathbin{\square} A^{(j)} \ar[r]^-{\Gamma^{m}_{j_{1},\dotsc,j_{m}}\mathbin{\square} \id_{A}^{(j)}} \ar[d]_{\id_{{\opsymbfont{O}}(m)}\mathbin{\square} \xi_{j_{1}}\mathbin{\square} \dotsb \mathbin{\square} \xi_{j_{m}}} &{\opsymbfont{O}}(j)\mathbin{\square} A^{(j)}\ar[d]^{\xi_{j}}\\ {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}\ar[r]_{\xi_{m}} &A } \] commutes.
\item The map $\xi_{1}\circ (1\mathbin{\square} \id_{A})\colon \mathbf{1}\mathbin{\square} A\to A$ is the unit isomorphism for $\mathbin{\square}$.
\end{enumerate} A map of ${\opsymbfont{O}}$-algebras from $(A,\{\xi_{m}\})$ to $(A',\{\xi'_{m}\})$ consists of a map $f\colon A\to A'$ in ${\catsymbfont{M}}$ that commutes with the action maps, i.e., that make the diagrams \[ \xymatrix{ {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}
\ar[r]^-{\xi_{m}}
\ar[d]_{\id_{{\opsymbfont{O}}(m)}\mathbin{\square} f^{(m)}} &A\ar[d]^{f}\\ {\opsymbfont{O}}(m)\mathbin{\square} A^{\prime(m)}\ar[r]_-{\xi'_{m}} &A' } \] commute for all $m$. We write ${\catsymbfont{M}}[{\opsymbfont{O}}]$ for the category of ${\opsymbfont{O}}$-algebras. \end{defn}
\begin{example} When ${\catsymbfont{M}}$ has an initial object and $\mathbin{\square}$ preserves the initial object in each variable, the structure of an algebra over the identity operad ${\opsymbfont{I}}$ is no extra structure on an object of ${\catsymbfont{M}}$. \end{example}
As per~(ii) above and as illustrated in the previous example, the $1$ in the structure of the operad corresponds to the identity operation. In some contexts algebras have units; when that happens, the unit is encoded in ${\opsymbfont{O}}(0)$ as in the examples of monoids and commutative monoids. Recall that a \term{monoid object for $\mathbin{\square}$ in ${\catsymbfont{M}}$} (or \term{$\mathbin{\square}$-monoid} for short) consists of an object $M$ together with a \textit{multiplication map} $\mu \colon M\mathbin{\square} M\to M$ and \textit{unit map} $\eta \colon \mathbf{1}\to M$ satisfying the following associativity and unit diagrams \[ \xymatrix{ M\mathbin{\square} M\mathbin{\square} M\mathstrut \ar[r]^-{\mu \mathbin{\square} \id}\ar[d]_{\id\mathbin{\square} \mu} &M\mathbin{\square} M\ar[d]^{\mu}\\ M\mathbin{\square} M\mathstrut\ar[r]_-{\mu}&M }\qquad \xymatrix{ \mathbf{1}\mathbin{\square} M\mathstrut\ar[r]^-{\eta \mathbin{\square} \id}\ar[dr]_{\cong} &M\mathbin{\square} M\ar[d]_{\mu} &M\mathbin{\square} \mathbf{1}\ar[l]_-{\id\mathbin{\square} \eta}\ar[dl]^{\cong}\\ &M } \] (where the diagonal maps are the unit isomorphisms in ${\catsymbfont{M}}$). The \textit{opposite multiplication} is the composite of the symmetry morphism $c\colon M\mathbin{\square} M\to M\mathbin{\square} M$ with $\mu$, and a $\mathbin{\square}$-monoid is \textit{commutative} when $\mu=\mu \circ c$.
\begin{example}\label{ex:comm} Given a ${\opsymbfont{C}}{\mathrm{om}}$-algebra $A$, defining $\eta$ to be the action map $\xi_{0}$ \[ \eta \colon \mathbf{1}={\opsymbfont{C}}{\mathrm{om}}(0)\overto{\xi_{0}} A \] and $\mu$ to be the composite of the (inverse) unit isomorphism and the action map $\xi_{2}$ \[ \mu \colon A\mathbin{\square} A \cong {\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} A\mathbin{\square} A\overto{\xi_{2}}A \] endows $A$ with the structure of a commutative $\mathbin{\square}$-monoid: associativity follows from the fact that the maps $\Gamma^{2}_{1,2}$ and $\Gamma^{2}_{2,1}$ are both unit maps for $\mathbin{\square}$ so under the canonical isomorphisms \begin{align*} A\mathbin{\square} A\mathbin{\square} A &\cong {\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} ({\opsymbfont{C}}{\mathrm{om}}(1)\mathbin{\square} {\opsymbfont{C}}{\mathrm{om}}(2)) \mathbin{\square} (A\mathbin{\square} A\mathbin{\square} A)\\ A\mathbin{\square} A\mathbin{\square} A &\cong {\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} ({\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} {\opsymbfont{C}}{\mathrm{om}}(1)) \mathbin{\square} (A\mathbin{\square} A\mathbin{\square} A) \end{align*} both maps induce the same map $A\mathbin{\square} A\mathbin{\square} A\to A$. Likewise, the unit condition follows from the fact that \begin{align*} \Gamma^{2}_{0,1}&\colon {\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} ({\opsymbfont{C}}{\mathrm{om}}(0)\mathbin{\square} {\opsymbfont{C}}{\mathrm{om}}(1))\to {\opsymbfont{C}}{\mathrm{om}}(1)\\ \Gamma^{2}_{1,0}&\colon {\opsymbfont{C}}{\mathrm{om}}(2)\mathbin{\square} ({\opsymbfont{C}}{\mathrm{om}}(1)\mathbin{\square} {\opsymbfont{C}}{\mathrm{om}}(0))\to {\opsymbfont{C}}{\mathrm{om}}(1) \end{align*} are both unit maps. The multiplication is commutative because the action of the symmetry map on $\mathbf{1}={\opsymbfont{C}}{\mathrm{om}}(2)$ is trivial. Conversely, we can convert a commutative $\mathbin{\square}$-monoid to a ${\opsymbfont{C}}{\mathrm{om}}$-algebra by taking $\xi_{0}$ to be the unit $\eta$, $\xi_{1}$ to be the unit isomorphism for $\mathbin{\square}$, $\xi_{2}$ to be induced by the unit isomorphism for $\mathbin{\square}$ and the multiplication, and all higher $\xi_{m}$'s induced by the unit isomorphism for $\mathbin{\square}$ and (any) iterated multiplication. This defines a bijective correspondence between the set of commutative $\mathbin{\square}$-monoid structures and the set of ${\opsymbfont{C}}{\mathrm{om}}$-algebra structures on a fixed object and an isomorphism between the category of commutative $\mathbin{\square}$-monoids and the category of ${\opsymbfont{C}}{\mathrm{om}}$-algebras. \end{example}
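For a concrete picture of the converse direction, here is a minimal Python sketch in the category of sets, where ${\opsymbfont{C}}{\mathrm{om}}(m)\mathbin{\square} A^{(m)}$ is just $A^{m}$, so the $m$th action map is simply an $m$-ary function given by iterated multiplication; the packaging below is ours and purely illustrative.
\begin{verbatim}
from functools import reduce

def com_algebra_structure(mult, unit):
    # from a commutative monoid (mult, unit) on a set A to the action maps of a
    # Com-algebra: xi(m) is the m-ary map A^m -> A given by iterated
    # multiplication, with xi(0) picking out the unit.
    def xi(m):
        def act(*args):
            assert len(args) == m
            return reduce(mult, args, unit)
        return act
    return xi

xi = com_algebra_structure(lambda a, b: a + b, 0)   # the commutative monoid (Z, +, 0)
assert xi(0)() == 0 and xi(3)(1, 2, 3) == 6
\end{verbatim}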
For a non-symmetric operad, defining an algebra in terms of the associated operad and defining it in terms of the analogue of Definition~\ref{def:algebra} without the equivariance requirement produce the same structure.
\begin{example}\label{ex:mon} The constructions of Example~\ref{ex:comm} applied to the non-symmetric operad $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$ give a bijective correspondence between the set of $\mathbin{\square}$-monoid structures and the set of ${\opsymbfont{A}}{\mathrm{ss}}$-algebra structures on a fixed object and an isomorphism between the category of $\mathbin{\square}$-monoids and the category of ${\opsymbfont{A}}{\mathrm{ss}}$-algebras. \end{example}
The monoid and commutative monoid objects in the category of sets (with the usual symmetric monoidal structure given by cartesian product) are just the monoids and commutative monoids in the usual sense. Likewise, in spaces, they are the topological monoids and topological commutative monoids. In the category of abelian groups (with the usual symmetric monoidal structure given by the tensor product), the monoid objects are the rings and the commutative monoid objects are the commutative rings. In the category of chain complexes of $R$-modules for a commutative ring $R$ (with the usual symmetric monoidal structure given by tensor product over $R$), the monoid objects are the differential graded $R$-algebras and the commutative monoid objects are the commutative differential graded $R$-algebras. In a modern category of spectra, the monoid objects are called \term{${\mathbb{S}}$-algebras} or sometimes \term{strictly associative ring spectra}. Some authors take the term ``ring spectrum'' to be synonymous with ${\mathbb{S}}$-algebra, but others take it to mean the weaker notion of monoid object in the stable category (or even weaker notions). Work of Schwede-Shipley~\cite[3.12.(3)]{SS-Monoidal} shows that the homotopy category of monoid objects in any modern category of spectra is equivalent to an appropriate full subcategory of the (mutually equivalent) homotopy category of monoid objects in EKMM $S$-modules, symmetric spectra, or orthogonal spectra (at least when ``modern category of spectra'' is used as a technical term to mean a model category with a preferred equivalence class of symmetric monoidal Quillen equivalence to the currently known modern categories of spectra); cf.~Example Theorem~\ref{ethm:compare} below. The analogous result does not hold for commutative monoid objects; see~\cite{Lawson-CommGamma}. The term ``commutative ${\mathbb{S}}$-algebra'' is typically reserved for examples where the homotopy category of commutative monoid objects is equivalent to an appropriate full subcategory of the (mutually equivalent) homotopy category of commutative monoid objects in EKMM $S$-modules, symmetric spectra, or orthogonal spectra.
Returning to the discussion of operadic algebras, in the case when ${\catsymbfont{M}}$ is a closed symmetric monoidal category, adjoint to the action map \[ \xi_{m}\colon {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}\to A \] is a map \[ \phi_{m}\colon {\opsymbfont{O}}(m)\to F(A^{(m)},A)={\opsymbfont{E}}{\mathrm{nd}}_{A}(m). \] Equivariance for $\xi_{m}$ is equivalent to equivariance for $\phi_{m}$. Similarly, conditions~(i) and~(ii) in the definition of ${\opsymbfont{O}}$-algebra (Definition~\ref{def:algebra}) are adjoint to the diagrams in the definition of map of operads (Definition~\ref{def:opmap}). This proves the following proposition, which gives an alternative definition of ${\opsymbfont{O}}$-algebra.
\begin{prop}\label{prop:algend} Let ${\catsymbfont{M}}$ be a closed symmetric monoidal category, let ${\opsymbfont{O}}$ be an operad in ${\catsymbfont{M}}$, and let $X$ be an object in ${\catsymbfont{M}}$. The adjunction rule $\xi_{m} \leftrightarrow \phi_{m}$ above defines a bijection between the set of ${\opsymbfont{O}}$-algebra structures on $X$ and the set of maps of operads ${\opsymbfont{O}}\to {\opsymbfont{E}}{\mathrm{nd}}_{X}$. \end{prop}
In the case when ${\catsymbfont{M}}$ is (countably) cocomplete (has (countable) colimits) and $\mathbin{\square}$ preserves (countable) colimits in each variable (which includes the case when it is closed), algebras can also be defined in terms of a monad. The idea for the underlying functor is to gather the domains of all the action maps into a coproduct; since the action maps are equivariant with target having the trivial action, they factor through the coinvariants (quotient by the symmetric group action), and this goes into the definition.
\begin{notn}\label{notn:monad} Let ${\catsymbfont{M}}$ be a symmetric monoidal category with countable colimits, and let ${\opsymbfont{O}}$ be an operad in ${\catsymbfont{M}}$. Define the endofunctor ${\mathbb{O}}$ of ${\catsymbfont{M}}$ (i.e., functor ${\mathbb{O}}\colon {\catsymbfont{M}}\to {\catsymbfont{M}}$) by \[ {\mathbb{O}} X = \myop\coprod_{m=0}^{\infty} {\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} X^{(m)} \] (where ${\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}}X^{(m)}:=({\opsymbfont{O}}(m)\mathbin{\square} X^{(m)})/\Sigma_{m}$). \end{notn}
(When we use other letters for operads, we typically use the corresponding letters for the associated monad; for example, we write ${\mathbb{A}}$ for the monad associated to an operad ${\opsymbfont{A}}$, ${\mathbb{B}}$ for the monad associated to an operad ${\opsymbfont{B}}$, etc.)
The action maps for an ${\opsymbfont{O}}$-algebra $A$ then specify a map $\xi\colon {\mathbb{O}} A\to A$; the conditions for defining an ${\opsymbfont{O}}$-structure also admit a formulation in terms of this map. The basic observation is that we have a canonical isomorphism \begin{multline*} ({\mathbb{O}} X)^{(m)}\cong \myop\coprod_{j_{1}=0}^{\infty}\dotsb \myop\coprod_{j_{m}=0}^{\infty} ({\opsymbfont{O}}(j_{1})\mathbin{\square}_{\Sigma_{j_{1}}}X^{(j_{1})})\mathbin{\square} \dotsb \mathbin{\square} ({\opsymbfont{O}}(j_{m})\mathbin{\square}_{\Sigma_{j_{m}}}X^{(j_{m})}) \\\cong \myop\coprod_{j=0}^{\infty}\,\myop\coprod_{\putatop{j_{1},\dotsc,j_{m}\vrule height0pt width0pt depth1ex}{\sum j_{i}=j}} ({\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m}))\mathbin{\square}_{\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}}X^{(j)} \end{multline*} using the symmetry isomorphism to shuffle like factors without permuting them. We can use this isomorphism to give ${\mathbb{O}} X$ the canonical structure of an ${\opsymbfont{O}}$-algebra, defining the action map \[ \mu_{m}\colon {\opsymbfont{O}}(m)\mathbin{\square} ({\mathbb{O}} X)^{(m)}\to {\mathbb{O}} X \] by commuting the coproduct past $\mathbin{\square}$, using the operad composition law, and passing to the quotient by the full permutation group \begin{multline*} {\opsymbfont{O}}(m)\mathbin{\square} ({\mathbb{O}} X)^{(m)}\cong \myop\coprod_{j=0}^{\infty}\,\myop\coprod_{\putatop{j_{1},\dotsc,j_{m}\vrule height0pt width0pt depth1ex}{\sum j_{i}=j}} {\opsymbfont{O}}(m)\mathbin{\square} ({\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})) \mathbin{\square}_{\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}}X^{(j)} \\ \overto{\coprod\coprod \Gamma^{m}_{j_{1},\dotsc,j_{m}}\mathbin{\square} \id_{X}^{(j)}} \myop\coprod_{j=0}^{\infty}\,\myop\coprod_{\putatop{j_{1},\dotsc,j_{m}\vrule height0pt width0pt depth1ex}{\sum j_{i}=j}} {\opsymbfont{O}}(j) \mathbin{\square}_{\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}}X^{(j)} \to \myop\coprod_{j=0}^{\infty} {\opsymbfont{O}}(j) \mathbin{\square}_{\Sigma_{j}} X^{(j)}={\mathbb{O}} X. \end{multline*} The pictured map is well-defined because of the $(\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}})$-equivariance of $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ (\ref{def:operad}.\eqref{part:operad:easyperm}). The other permutation rule (\ref{def:operad}.\eqref{part:operad:hardperm}) implies that $\mu_{m}$ is $\Sigma_{m}$-equivariant. The remaining two parts of the definition of operad show that the $\mu_{m}$ define an ${\opsymbfont{O}}$-algebra structure map: \ref{def:operad}.\eqref{part:operad:assoc} and~\eqref{part:operad:unit} imply \ref{def:algebra}.(i) and~(ii), respectively. This ${\opsymbfont{O}}$-algebra structure then defines a map \[ \mu\colon {\mathbb{O}}{\mathbb{O}} X\to {\mathbb{O}} X \] as above, which is natural in $X$. The map $1\mathbin{\square} \id_{X}$ also induces a natural transformation \[ \eta \colon X\to {\mathbb{O}} X. \] These two maps together give ${\mathbb{O}}$ the structure of a monad.
\begin{prop}\label{prop:monad} Let ${\catsymbfont{M}}$ be a symmetric monoidal category with countable colimits and assume that $\mathbin{\square}$ commutes with countable colimits in each variable. For an operad ${\opsymbfont{O}}$, the functor ${\mathbb{O}}$ and natural transformations $\mu$, $\eta$ form a monad: the diagrams \[ \xymatrix{ {\mathbb{O}}{\mathbb{O}}{\mathbb{O}} X\mathstrut\ar[r]^{\mu}\ar[d]_{{\mathbb{O}}\mu}&{\mathbb{O}}{\mathbb{O}} X\ar[d]^{\mu}\\ {\mathbb{O}}{\mathbb{O}} X\mathstrut\ar[r]_{\mu}&{\mathbb{O}} X }\qquad \xymatrix{ {\mathbb{O}} X\mathstrut\ar[r]^{\eta}\ar@{=}[dr]&{\mathbb{O}}{\mathbb{O}} X\ar[d]^{\mu}\\ &{\mathbb{O}} X\mathstrut } \] commute (where the top map in the lefthand diagram is the map $\mu$ for the object ${\mathbb{O}} X$). \end{prop}
The proof amounts to applying \ref{def:algebra}.(i)--(ii) to the ${\opsymbfont{O}}$-algebra ${\mathbb{O}} X$.
\begin{example} Under the hypotheses of the previous proposition, the monad associated to the identity operad ${\opsymbfont{I}}$ is canonically isomorphic (via the unit isomorphism) to the identity monad $\Id$. The monad associated to the operad ${\opsymbfont{C}}{\mathrm{om}}$ is canonically isomorphic to the free commutative monoid monad \[ {\mathbb{P}} X = \myop\coprod_{j=0}^{\infty} X^{(j)}/\Sigma_{j}. \] The monad associated to the operad ${\opsymbfont{A}}{\mathrm{ss}}$ is canonically isomorphic to the free monoid monad \[ {\mathbb{T}} X = \myop\coprod_{j=0}^{\infty} X^{(j)}. \] \end{example}
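In the category of sets these are the familiar word and multiset constructions; the following Python sketch (truncated at a fixed length so that the output is finite, with names of our own choosing) makes the comparison concrete.
\begin{verbatim}
from collections import Counter
from itertools import product

def free_monoid(X, max_length=3):
    # TX = the disjoint union of the X^(j): words (tuples) in X, of length <= max_length
    return [w for j in range(max_length + 1) for w in product(sorted(X), repeat=j)]

def free_comm_monoid(X, max_length=3):
    # PX = the disjoint union of the X^(j)/Sigma_j: words up to reordering, i.e. multisets
    seen = []
    for w in free_monoid(X, max_length):
        multiset = Counter(w)
        if multiset not in seen:
            seen.append(multiset)
    return seen

X = {'a', 'b'}
assert len(free_monoid(X)) == 1 + 2 + 4 + 8        # words of length at most 3
assert len(free_comm_monoid(X)) == 1 + 2 + 3 + 4   # multisets of size at most 3
\end{verbatim}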
An algebra over the monad ${\mathbb{O}}$ consists of an object $A$ and a map $\xi \colon {\mathbb{O}} A\to A$ such that the diagrams \[ \xymatrix{ {\mathbb{O}}{\mathbb{O}} A\mathstrut\ar[r]^{\mu}\ar[d]_{{\mathbb{O}} \xi}&{\mathbb{O}} A\ar[d]^{\xi}\\ {\mathbb{O}} A\mathstrut\ar[r]_{\xi}&A }\qquad \xymatrix{ A\mathstrut\ar[r]^{\eta}\ar@{=}[dr]&{\mathbb{O}} A\ar[d]^{\xi}\\ \relax\mathstrut&A } \] commute. Given an ${\opsymbfont{O}}$-algebra $(A,\{\xi_{m}\})$, the map $\xi\colon {\mathbb{O}} A\to A$ constructed as the coproduct of the induced maps on coinvariants then is an ${\mathbb{O}}$-algebra action map. Conversely, given an ${\mathbb{O}}$-algebra $(A,\xi)$, defining $\xi_{m}$ to be the composite \[ {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}\to {\mathbb{O}} A\overto{\xi}A, \] the maps $\xi_{m}$ make $A$ an ${\opsymbfont{O}}$-algebra. This gives a second alternative definition of ${\opsymbfont{O}}$-algebra.
\begin{prop}\label{prop:algmon} Under the hypotheses of Proposition~\ref{prop:monad}, for $X$ an object of ${\catsymbfont{M}}$, the correspondence $\{\xi_{m}\}\leftrightarrow \xi$ above defines a bijection between the set of ${\opsymbfont{O}}$-algebra structures on $X$ and the set of ${\mathbb{O}}$-algebra structures on $X$ and an isomorphism between the category of ${\opsymbfont{O}}$-algebras and the category of ${\mathbb{O}}$-algebras. \end{prop}
\section{Modules over Operadic Algebras}\label{sec:modules}
Just as an operad defines a category of algebras, an algebra defines a category of modules. Because this chapter concentrates on the theory of operadic algebras, we will only touch on the theory of modules. A complete discussion could fill a book, and many of the aspects of the theory of operads we omit in this chapter (including Koszul duality, Quillen (co)homology, and the Deligne and Kontsevich conjectures) correspond to statements about categories of modules; even an overview could form its own chapter. We restrict ourselves to a brief review of the definitions and the homotopy theory.
The original definition of modules over an operadic algebra seems to be due to Ginzburg and Kapranov~\cite[\S1.6]{GinzburgKapranov}.
\begin{defn}\label{def:module} For an operad ${\opsymbfont{O}}$ and an ${\opsymbfont{O}}$-algebra $A$, an $({\opsymbfont{O}},A)$-module (or just $A$-module when ${\opsymbfont{O}}$ is understood) consists of an object $M$ of ${\catsymbfont{M}}$ and structure maps \[ \zeta_{m} \colon {\opsymbfont{O}}(m+1)\mathbin{\square} (A^{(m)}\mathbin{\square} M)\to M \] for $m\geq 0$ such that the associativity, symmetry, and unit diagrams in Figure~\ref{fig:module} commute. A map of $A$-modules is a map of the underlying objects of ${\catsymbfont{M}}$ that commutes with the structure maps. \end{defn}
\begin{figure}
\caption{The diagrams for Definition~\ref{def:module}}
\label{fig:module}
\end{figure}
Although the definition appears to favor $A$ on the left, we obtain analogous righthand structure maps \[ {\opsymbfont{O}}(m+1)\mathbin{\square} (M\mathbin{\square} A^{(m)})\to M \] satisfying the analogous righthand version of the diagrams in Figure~\ref{fig:module} by applying an appropriate permutation. Thus, an $A$-module structure can equally be regarded as either a left or right module structure. The following example illustrates this point.
\begin{example} When ${\opsymbfont{O}}={\opsymbfont{A}}{\mathrm{ss}}$, the (symmetric) operad for associative algebras and $A$ is an ${\opsymbfont{O}}$-algebra (i.e., $\mathbin{\square}$-monoid), an $({\opsymbfont{O}},A)$-module in the sense of the previous definition is precisely an $A$-bimodule in the usual sense: it has structure maps \[ \lambda \colon A\mathbin{\square} M\to M\qquad \text{and}\qquad \rho\colon M\mathbin{\square} A\to M \] satisfying the following associativity, unity, and interchange diagrams \begin{gather*} \xymatrix{ A\mathbin{\square} A\mathbin{\square} M\ar[r]^-{\mu\mathbin{\square}\id}\ar[d]_-{\id\mathbin{\square}\lambda} &A\mathbin{\square} M\ar[d]^-{\lambda} &M\mathbin{\square} A\mathbin{\square} A\ar[r]^-{\id\mathbin{\square}\mu}\ar[d]_-{\rho\mathbin{\square}\id} &M\mathbin{\square} A\ar[d]^-{\rho}\\ A\mathbin{\square} M\ar[r]_-{\lambda}&M&M\mathbin{\square} A\ar[r]_-{\rho}&M }\\ \xymatrix{ A\mathbin{\square} M\ar[dr]_-{\lambda} &\mathbf{1}\mathbin{\square} M\cong M\mathbin{\square} \mathbf{1}
\ar[d]\ar[l]_-{\eta\mathbin{\square}\id}\ar[r]^-{\id\mathbin{\square}\eta} &M\mathbin{\square} A\ar[dl]^-{\rho}\\ &M }\qquad \xymatrix{ A\mathbin{\square} M\mathbin{\square} A\ar[r]^-{\lambda \mathbin{\square} \id}\ar[d]_-{\id\mathbin{\square}\rho} &M\mathbin{\square} A\ar[d]^-{\rho}\\ A\mathbin{\square} M\ar[r]_-{\lambda}&M } \end{gather*} where $\mu$ denotes the multiplication and $\eta$ the unit for $A$ and the unlabeled arrow is the unit isomorphism for $\mathbin{\square}$. \end{example}
Obtaining a theory of modules closer to the idea of a left module (or right module) over an associative algebra requires working with non-symmetric operads.
\begin{defn}\label{def:leftmodule} Let $\overline{{\opsymbfont{O}}}$ be a non-symmetric operad and let $A$ be an $\overline{{\opsymbfont{O}}}$-algebra. A left $(\overline{{\opsymbfont{O}}},A)$-module (or just left $A$-module when $\overline{{\opsymbfont{O}}}$ is understood) consists of an object $M$ of ${\catsymbfont{M}}$ and structure maps \[ \zeta_{m} \colon \overline{{\opsymbfont{O}}}(m+1)\mathbin{\square} (A^{(m)}\mathbin{\square} M)\to M \] for $m\geq 0$ such that the associativity and unit diagrams in Figure~\ref{fig:module} commute (with $\overline{{\opsymbfont{O}}}$ in place of ${\opsymbfont{O}}$). A map of left $A$-modules is a map of the underlying objects of ${\catsymbfont{M}}$ that commutes with the structure maps. \end{defn}
We also have the evident notion of a right $A$-module defined in terms of structure maps \[ \zeta_{m} \colon \overline{{\opsymbfont{O}}}(m+1)\mathbin{\square} (M\mathbin{\square} A^{(m)})\to M \] and the analogous righthand associativity and unit diagrams.
Unlike the case of operadic algebras, where a non-symmetric operad and its corresponding symmetric operad give rise to the same theory of algebras, in the case of modules the two resulting notions are very different.
\begin{example} When $\overline{{\opsymbfont{O}}}=\overline{{\opsymbfont{A}}{\mathrm{ss}}}$, the non-symmetric operad for associative algebras and $A$ is an $\overline{{\opsymbfont{O}}}$-algebra (i.e., a $\mathbin{\square}$-monoid), a left $(\overline{{\opsymbfont{A}}{\mathrm{ss}}},A)$-module in the sense of the previous definition is precisely a left $A$-module in the usual sense defined in terms of an associative and unital left action map $A\mathbin{\square} M\to M$. Likewise, a right $(\overline{{\opsymbfont{A}}{\mathrm{ss}}},A)$-module is precisely a right $A$-module in the usual sense. \end{example}
Under mild hypotheses, the category of $({\opsymbfont{O}},A)$-modules is a category of modules for a $\mathbin{\square}$-monoid called the enveloping algebra of $A$.
\begin{cons}[The enveloping algebra]\label{cons:UA} Assume that ${\catsymbfont{M}}$ admits countable colimits and $\mathbin{\square}$ preserves countable colimits in each variable. For an operad ${\opsymbfont{O}}$ and an ${\opsymbfont{O}}$-algebra $A$, let $U^{{\opsymbfont{O}}}A$ (or $UA$ when ${\opsymbfont{O}}$ is understood) be the following coequalizer \[ \xymatrix@C-1pc{ \myop\coprod_{m=0}^{\infty}{\opsymbfont{O}}(m+1)\mathbin{\square}_{\Sigma_{m}} ({\mathbb{O}} A)^{(m)}\ar@<.5ex>[r]\ar@<-.5ex>[r] &\myop\coprod_{m=0}^{\infty}{\opsymbfont{O}}(m+1) \mathbin{\square}_{\Sigma_{m}} A^{(m)}\ar[r] &U^{{\opsymbfont{O}}}A } \] where we regard $\Sigma_{m}$ as the usual subgroup of $\Sigma_{m+1}$ of permutations that fix $m+1$. Here one map is induced by the action map ${\mathbb{O}} A\to A$ and the other is induced by the operadic multiplication \begin{multline*} {\opsymbfont{O}}(m+1)\mathbin{\square} (\overline{{\mathbb{O}}} A)^{(m)} \cong \myop\coprod_{j_{1},\dotsc,j_{m}}{\opsymbfont{O}}(m+1)\mathbin{\square} ({\opsymbfont{O}}(j_{1})\mathbin{\square} A^{(j_{1})})\mathbin{\square}\dotsb \mathbin{\square} ({\opsymbfont{O}}(j_{m})\mathbin{\square} A^{(j_{m})})\\ \cong \myop\coprod_{j_{1},\dotsc,j_{m}}\bigl({\opsymbfont{O}}(m+1)\mathbin{\square} ({\opsymbfont{O}}(j_{1})\mathbin{\square}\dotsb \mathbin{\square} {\opsymbfont{O}}(j_{m})\mathbin{\square} \mathbf{1})\bigr)\mathbin{\square} A^{(j)}\\ \overto{\coprod \Gamma^{m+1}_{j_{1},\dotsc,j_{m},1}\mathbin{\square} \id} {\opsymbfont{O}}(j+1)\mathbin{\square} A^{(j)} \end{multline*} (where we have omitted writing $1\colon \mathbf{1}\to {\opsymbfont{O}}(1)$ and as always $j=j_{1}+\dotsb+j_{m}$). Let $\eta \colon \mathbf{1}\to UA$ be the map induced by $1\colon \mathbf{1}\to {\opsymbfont{O}}(1)$ and the inclusion of the $m=0$ summand and let $\mu\colon UA\mathbin{\square} UA\to UA$ be the map induced from the maps \[ ({\opsymbfont{O}}(m+1)\mathbin{\square} A^{(m)})\mathbin{\square} ({\opsymbfont{O}}(n+1)\mathbin{\square} A^{(n)})\to {\opsymbfont{O}}(m+n+1)\mathbin{\square} A^{(m+n)} \] obtained from the map $\circ_{m+1}\colon {\opsymbfont{O}}(m+1)\mathbin{\square} {\opsymbfont{O}}(n+1)\to {\opsymbfont{O}}(m+n+1)$ defined as the composite \[ {\opsymbfont{O}}(m+1)\mathbin{\square} {\opsymbfont{O}}(n+1)\cong {\opsymbfont{O}}(m+1)\mathbin{\square}(\mathbf{1}\mathbin{\square} \dotsb \mathbin{\square}\mathbf{1} \mathbin{\square} {\opsymbfont{O}}(n+1)) \overto{\!\Gamma^{m+1}_{1,\dotsc,1,n+1}\!} {\opsymbfont{O}}(m+n+1) \] (where again we have omitted writing $1\colon \mathbf{1}\to {\opsymbfont{O}}(1)$). Associativity of the operad multiplication implies that $\eta$ and $\mu$ give $UA$ the structure of an associative monoid for $\mathbin{\square}$ and the resulting object is called the \term{enveloping algebra} of $A$ over ${\opsymbfont{O}}$. \end{cons}
An easy argument from the definitions and universal property of the coequalizer proves the following proposition.
\begin{prop}\label{prop:modulecat} Assume ${\catsymbfont{M}}$ admits countable coproducts and $\mathbin{\square}$ preserves them in each variable. Let ${\opsymbfont{O}}$ be an operad and let $A$ be an ${\opsymbfont{O}}$-algebra. For an object $X$ of ${\catsymbfont{M}}$, $({\opsymbfont{O}},A)$-module structures on $X$ are in bijective correspondence with left $U^{{\opsymbfont{O}}}A$-module structures. In particular, the category of $({\opsymbfont{O}},A)$-modules is isomorphic to the category of left $U^{{\opsymbfont{O}}}A$-modules. \end{prop}
Similarly, in the case of non-symmetric operads, we can construct a left module enveloping algebra $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A$ (denoted $\overline U\vrule height1em depth0pt width0pt A$ when $\overline{{\opsymbfont{O}}}$ is understood) as the following coequalizer \begin{equation}\label{eq:nUA} \xymatrix@C-1pc{ \myop\coprod_{m=0}^{\infty}\overline{{\opsymbfont{O}}}(m+1)\mathbin{\square} (\overline{{\mathbb{O}}} A)^{(m)}\ar@<.5ex>[r]\ar@<-.5ex>[r] &\myop\coprod_{m=0}^{\infty}\overline{{\opsymbfont{O}}}(m+1) \mathbin{\square} A^{(m)}\ar[r] &\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A } \end{equation} with maps defined as in Construction~\ref{cons:UA}. The analogous identification of module categories holds.
\begin{prop}\label{prop:modulecat2} Assume ${\catsymbfont{M}}$ admits countable coproducts and $\mathbin{\square}$ preserves them in each variable. Let $\overline{{\opsymbfont{O}}}$ be a non-symmetric operad and let $A$ be an $\overline{{\opsymbfont{O}}}$-algebra. For an object $X$ of ${\catsymbfont{M}}$, left $(\overline{{\opsymbfont{O}}},A)$-module structures on $X$ are in bijective correspondence with left $\overline U\vrule height1em depth0pt width0pt A$-module structures. In particular, the category of left $(\overline{{\opsymbfont{O}}},A)$-modules is isomorphic to the category of left $\overline U\vrule height1em depth0pt width0pt A$-modules. \end{prop}
We develop some tools to study enveloping algebras in the next section. In the meantime, we can identify the enveloping algebra in some specific examples.
\begin{example} For ${\opsymbfont{O}}={\opsymbfont{A}}{\mathrm{ss}}$ and $A$ an ${\opsymbfont{A}}{\mathrm{ss}}$-algebra (a $\mathbin{\square}$-monoid), $U^{{\opsymbfont{A}}{\mathrm{ss}}}A$ is $A\mathbin{\square} A^{\op}$, the usual enveloping algebra for a $\mathbin{\square}$-monoid. Viewing $A$ as an $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$-algebra, $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}{\mathrm{ss}}}}A$ is the $\mathbin{\square}$-monoid $A$. If $A$ is a ${\opsymbfont{C}}{\mathrm{om}}$-algebra (a commutative $\mathbin{\square}$-monoid), then $U^{{\opsymbfont{C}}{\mathrm{om}}}A$ makes sense and is also the $\mathbin{\square}$-monoid $A$. \end{example}
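Concretely, when ${\catsymbfont{M}}$ is the category of abelian groups with $\mathbin{\square}=\otimes$ and $A$ is an associative ring, Proposition~\ref{prop:modulecat} recovers the classical identifications
\[
({\opsymbfont{A}}{\mathrm{ss}},A)\text{-modules}\ \cong\ A\text{-bimodules}\ \cong\ \text{left } (A\otimes A^{\op})\text{-modules},
\]
and for $A$ commutative, $({\opsymbfont{C}}{\mathrm{om}},A)$-modules are just ordinary $A$-modules.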
\begin{example} Let ${\opsymbfont{L}}$ denote the Boardman-Vogt linear isometries operad of Example~\ref{ex:L}. For an ${\opsymbfont{L}}$-algebra, the underlying space of $U^{{\opsymbfont{L}}}A$ is the pushout \[ \xymatrix{ {\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)} {\opsymbfont{L}}(0)\ar[d]_{\circ_{1}}
\ar[r]^-{\id\times\xi_{0}}&{\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)} A\ar[d]\\ {\opsymbfont{L}}(1)\ar[r]&U^{{\opsymbfont{L}}}A } \] where $\circ_{1}$ is the map induced by $1\colon *\to {\opsymbfont{L}}(1)$ and $\Gamma^{2}_{0,1}$ (as in Construction~\ref{cons:UA}) and the right action on ${\opsymbfont{L}}(2)$ of ${\opsymbfont{L}}(1)\cong {\opsymbfont{L}}(1)\times *$ is via $\Gamma^{2}_{1,1}\circ(\id\times 1)$. The inclusions of the $m=0$ and $m=1$ summands induce the map from the pushout above to the coequalizer defining $U^{{\opsymbfont{L}}}A$; the inverse isomorphism uses the ``Hopkins' Lemma'' \cite[I.5.4]{EKMM} isomorphism \begin{equation}\tag{HL} {\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)\times {\opsymbfont{L}}(1)}({\opsymbfont{L}}(i)\times {\opsymbfont{L}}(j))\cong {\opsymbfont{L}}(i+j) \end{equation} for $i,j\geq 1$. The pushout explicitly admits maps in from the $m=0$ and $m=1$ summands of the coequalizer and for $m>1$, we have the following map. \begin{multline*} {\opsymbfont{L}}(m+1)\times_{\Sigma_{m}} A^{(m)} \cong {\opsymbfont{L}}(m+1)\times_{\Sigma_{m}\times {\opsymbfont{L}}(1)} (A^{(m)}\times {\opsymbfont{L}}(1))\\ \mathrel{\mathop{\cong}\limits_{\textrm{(HL)}}} {\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)\times {\opsymbfont{L}}(1)}(({\opsymbfont{L}}(m)\times_{\Sigma_{m}}A^{(m)})\times {\opsymbfont{L}}(1))\\ \overto{\id\times (\xi_{m}\times \id)} {\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)\times {\opsymbfont{L}}(1)}(A\times {\opsymbfont{L}}(1)) \cong {\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)}A \end{multline*} The previous display also indicates how the multiplication of $U^{{\opsymbfont{L}}}A$ works in the pushout description: it is induced by the map \begin{multline*} ({\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)} A)\times ({\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)} A) \cong ({\opsymbfont{L}}(2)\times {\opsymbfont{L}}(2))\times A^{(2)} \\\overto{\circ_{2}\times \id} {\opsymbfont{L}}(3)\times A^{(2)}\to {\opsymbfont{L}}(3)\times_{\Sigma_{2}} A^{(2)}\to
{\opsymbfont{L}}(2)\times_{{\opsymbfont{L}}(1)}A \end{multline*} where the last map is the $m=2$ case of the map above. It turns out that the map $U^{{\opsymbfont{L}}}A\to A$ induced by the operadic algebra action maps is always a weak equivalence. (The proof is not obvious but uses the ideas from EKMM, especially~\cite[I.8.5,XI.3.1]{EKMM} in the context of the theory of ${\opsymbfont{L}}(1)$-spaces as in for example~\cite[\S6]{BasterraMandell-Homology}, \cite[\S4.6]{Blumberg-Progress}, or \cite[\S4.3]{BlumbergCohenS-THHThom}.) If we forget the symmetries in ${\opsymbfont{L}}$ to create a non-symmetric operad ${\opsymbfont{L}}_{\not\Sigma}$, then $\overline U\vrule height1em depth0pt width0pt^{{\opsymbfont{L}}_{\not\Sigma}}A\cong U^{{\opsymbfont{L}}}A$. Even when $A$ is just an ${\opsymbfont{L}}_{\not\Sigma}$-algebra, $\overline U\vrule height1em depth0pt width0pt^{{\opsymbfont{L}}_{\not\Sigma}}A$ can still be identified as the same pushout construction pictured above using the analogous comparison isomorphisms with the coequalizer definition~\eqref{eq:nUA}. Analogous formulations also hold in the context of orthogonal spectra, symmetric spectra, and EKMM $S$-modules using the operad $\Sigma^{\infty}_{+}{\opsymbfont{L}}$ (in the respective categories). In the context of Lewis-May spectra, these observations are closely related to the foundations of EKMM $S$-modules and the properties of the smash product ($\wedge_{{\opsymbfont{L}}}$, $\wedge$, and $\wedge_{A}$); this is the start of a much longer story on monoidal products and balanced products for $A_{\infty}$ module categories (see, for example, \cite{Smash} and~\cite[\S17-18]{BlumbergMandell-TTHH}). \end{example}
Although in both of the previous two examples, we had an isomorphism of enveloping algebras for symmetric and non-symmetric constructions, this is not a general phenomenon, as can be seen, for example, by comparing $U^{{\opsymbfont{A}}{\mathrm{ss}}}$ and $\overline U\vrule height1em depth0pt width0pt^{{\opsymbfont{A}}{\mathrm{ss}}_{\not\Sigma}}$ where ${\opsymbfont{A}}{\mathrm{ss}}_{\not\Sigma}$ is the non-symmetric operad formed from ${\opsymbfont{A}}{\mathrm{ss}}$ by forgetting the symmetry. (In this style of notation, $\overline{{\opsymbfont{A}}{\mathrm{ss}}}={\opsymbfont{C}}{\mathrm{om}}_{\not\Sigma}$.)
The left module enveloping algebra construction for the non-symmetric little $1$-cubes operad, $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{C}}}_{1}}(-)$, also admits a concrete description~\cite[\S2]{Smash}, which we review in Section~\ref{sec:rectass}. It shares the feature with the previous two examples that for any $\overline{{\opsymbfont{C}}}_{1}$-algebra $A$, $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{C}}}_{1}}A$ is weakly equivalent to $A$ (see [\textit{ibid.},1.1] or Proposition~\ref{prop:UAhty}).
Given Propositions~\ref{prop:modulecat} and~\ref{prop:modulecat2}, the homotopy theory of modules over operadic algebras reduces to (1) the homotopy theory of modules over $\mathbin{\square}$-monoids and (2) the homotopy theory of $U^{{\opsymbfont{O}}}A$ (or $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A$) as a functor of ${\opsymbfont{O}}$ (or $\overline{{\opsymbfont{O}}}$) and $A$. The latter first requires a study of the homotopy theory of operadic algebras that we review (in part) in the next few sections before returning to this question in Corollary~\ref{cor:modcomp}. On the other hand the homotopy theory of modules over $\mathbin{\square}$-monoids is very straightforward, and we give a short review of the main results in the remainder of this section. We discuss this in terms of closed model categories. (For an overview of closed model categories as a setting for homotopy theory, we refer the reader to~\cite{DwyerSpalinski}.) The following theorem gives a comprehensive result in some categories of primary interest.
\begin{thm}\label{thm:modone} Let $({\catsymbfont{M}},\mathbin{\square},\mathbf{1})$ be the category of simplicial sets, spaces, symmetric spectra, orthogonal spectra, EKMM $S$-modules, simplicial abelian groups, chain complexes, or any category of modules over a commutative monoid object in one of these categories, with the usual monoidal product and one of the standard cofibrantly generated model structures. Let $A$ be a monoid object in ${\catsymbfont{M}}$. The category of $A$-modules is a closed model category with weak equivalences and fibrations created in ${\catsymbfont{M}}$. \end{thm}
The proof in all cases is much like the argument in~\cite[VII\S4]{EKMM} or~\cite[2.3]{SS-AlgMod}. Heuristically, whenever the small object argument applies and $\mathbin{\square}$ behaves well with respect to weak equivalences, pushouts, and sequential or filtered colimits, a version of the previous theorem should hold. For an example of a more general statement, see~\cite[4.1]{SS-AlgMod}.
A map of monoid objects $A\to B$ induces an obvious \term{restriction of scalars} functor from the category of $B$-modules to the category of $A$-modules. When ${\catsymbfont{M}}$ admits coequalizers and $\mathbin{\square}$ preserves coequalizers in each variable (as is the case in the examples in the previous theorem), the restriction of scalars functor admits a left adjoint \term{extension of scalars} functor $B\mathbin{\square}_{A}(-)$ which on the underlying objects is constructed as the coequalizer \[ \xymatrix@C-1pc{ B\mathbin{\square} A\mathbin{\square} M\ar@<-.5ex>[r]\ar@<.5ex>[r]&B\mathbin{\square} M\ar[r]&B\mathbin{\square}_{A}M, } \] where one map is induced by the $A$-action on $M$ and the other by the $A$ action on $B$ (induced by the map of monoid objects). In the case when the categories of modules have closed model structures with weak equivalences and fibrations created in the underlying category ${\catsymbfont{M}}$, this adjunction is automatically a Quillen adjunction, which implies a derived adjunction on homotopy categories. When the map $A\to B$ is a weak equivalence, we can often expect the Quillen adjunction to be a Quillen equivalence and induce an equivalence of homotopy categories; this is in particular the case in the setting of the previous theorem.
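In the most familiar case, when ${\catsymbfont{M}}$ is the category of abelian groups and $A\to B$ is a ring homomorphism, the two maps in the coequalizer send $b\otimes a\otimes m$ to $b\otimes am$ and to $ba\otimes m$, respectively, and the coequalizer is the usual relative tensor product $B\otimes_{A}M$.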
\begin{thm} Let ${\catsymbfont{M}}$ be one of the symmetric monoidal model categories of Theorem~\ref{thm:modone}. A weak equivalence of monoid objects induces a Quillen equivalence on categories of modules. \end{thm}
Again, significantly more general results hold; see, for example, \cite{LewisMandell-MMMC}, especially Theorem~8.3 and the subsection of Section~1 entitled ``Extension of scalars''.
\section{Limits and Colimits in Categories of Operadic Algebras} \label{sec:limit}
Before going on to the homotopy theory of categories of operadic algebras, we say a few words about certain constructions, limits and colimits in this section, and geometric realization in the next section. While limits of operadic algebras are pretty straightforward (as explained below), colimits tend to be more complicated and we take some space to describe in detail what certain colimits look like.
We start with limits. Let $D\colon {\catsymbfont{D}}\to {\catsymbfont{M}}[{\opsymbfont{O}}]$ be a diagram, i.e., a functor from a small category ${\catsymbfont{D}}$, where ${\catsymbfont{M}}$ is a symmetric monoidal category and ${\opsymbfont{O}}$ is an operad in ${\catsymbfont{M}}$. By neglect of structure, we can regard $D$ as a diagram in ${\catsymbfont{M}}$, and suppose the limit $L$ exists in ${\catsymbfont{M}}$. Then for each $d\in {\catsymbfont{D}}$, we have the canonical map $L\to D(d)$, and using the ${\opsymbfont{O}}$-algebra structure map for $D(d)$, we get a map \[ {\opsymbfont{O}}(m)\mathbin{\square} L^{(m)}\to {\opsymbfont{O}}(m)\mathbin{\square} D(d)^{(m)}\to D(d). \] These maps satisfy the required compatibility to define a map \[ {\opsymbfont{O}}(m)\mathbin{\square} L^{(m)}\to L, \] which together are easily verified to provide structure maps for an ${\opsymbfont{O}}$-algebra structure on $L$. This ${\opsymbfont{O}}$-algebra structure has the universal property for the limit of $D$ in ${\catsymbfont{M}}[{\opsymbfont{O}}]$.
\begin{prop} For any symmetric monoidal category ${\catsymbfont{M}}$, any operad ${\opsymbfont{O}}$ in ${\catsymbfont{M}}$, and any diagram of ${\opsymbfont{O}}$-algebras, if the limit exists in ${\catsymbfont{M}}$, then it has a canonical ${\opsymbfont{O}}$-algebra structure that gives the limit in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. \end{prop}
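For instance, if the product $A\times B$ of two ${\opsymbfont{O}}$-algebras exists in ${\catsymbfont{M}}$, its structure map ${\opsymbfont{O}}(m)\mathbin{\square} (A\times B)^{(m)}\to A\times B$ is the unique map whose projection to $A$ is the composite
\[
{\opsymbfont{O}}(m)\mathbin{\square} (A\times B)^{(m)}\to {\opsymbfont{O}}(m)\mathbin{\square} A^{(m)}\to A
\]
(the first map induced by the projection $A\times B\to A$, the second the structure map of $A$), and likewise for $B$; in other words, the product of ${\opsymbfont{O}}$-algebras is the product in ${\catsymbfont{M}}$ with the coordinatewise structure.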
We cannot expect general colimits of operadic algebras to be formed in the underlying category, as can be seen from the examples of coproducts of $\mathbin{\square}$-monoids (${\opsymbfont{A}}{\mathrm{ss}}$-algebras) or of commutative $\mathbin{\square}$-monoids (${\opsymbfont{C}}{\mathrm{om}}$-algebras). The discussion of colimits simplifies if we assume that ${\catsymbfont{M}}$ has countable colimits and that $\mathbin{\square}$ preserves countable colimits in each variable, so that Proposition~\ref{prop:algmon} holds and the category of ${\opsymbfont{O}}$-algebras is the category of algebras over the monad ${\mathbb{O}}$. The main technical tool in this case is the following proposition; because we have assumed in particular that $\mathbin{\square}$ preserves coequalizers in each variable, it follows that the $m$th $\mathbin{\square}$-power functor preserves reflexive coequalizers (see \cite[II.7.2]{EKMM} for a proof) and the filtered colimits that exist (by an easy cofinality argument).
\begin{prop}\label{prop:reflex} If ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad}, then for any operad ${\opsymbfont{O}}$, the monad ${\mathbb{O}}$ preserves reflexive coequalizers in ${\catsymbfont{M}}$ and the filtered colimits that exist in ${\catsymbfont{M}}$. \end{prop}
Recall that a reflexive coequalizer is a coequalizer \[ \xymatrix@C-1pc{ X\ar@<.5ex>[r]^{a}\ar@<-.5ex>[r]_{b}&Y\ar[r]^{c}&C } \] where there exists a map $r\colon Y\to X$ such that $a\circ r=\id_{Y}$ and $b\circ r=\id_{Y}$; $r$ is called a \term{reflexion}. The proposition says that if the above coequalizer exists in ${\catsymbfont{M}}$ and is reflexive then the diagram obtained by applying ${\mathbb{O}}$ \[ \xymatrix@C-1pc{ {\mathbb{O}} X\ar@<.5ex>[r]^{{\mathbb{O}} a}\ar@<-.5ex>[r]_{{\mathbb{O}} b}&{\mathbb{O}} Y\ar[r]^{{\mathbb{O}} c}&{\mathbb{O}} C } \] is also a (reflexive) coequalizer diagram in ${\catsymbfont{M}}$. Now suppose that $a$ and $b$ are maps of ${\opsymbfont{O}}$-algebras. Then the diagrams \[ \xymatrix@-1pc{ {\mathbb{O}} X\ar[r]^{{\mathbb{O}} a}\ar[d]&{\mathbb{O}} Y\ar[d] &&{\mathbb{O}} X \ar[r]^{{\mathbb{O}} b}\ar[d]&{\mathbb{O}} Y\ar[d]\\ X\ar[r]_{a}&Y&&X\ar[r]_{b}&Y } \] commute (where the vertical maps are the ${\opsymbfont{O}}$-algebra structure maps) and we get an induced map \[ {\mathbb{O}} C\to C. \] Repeating this for ${\mathbb{O}} X\xymatrix@C-1pc{\ar@<.5ex>[r]\ar@<-.5ex>[r]&} {\mathbb{O}} Y$ and the two maps ${\mathbb{O}}{\mathbb{O}} X\xymatrix@C-1pc{\ar@<.5ex>[r]\ar@<-.5ex>[r]&} {\mathbb{O}}{\mathbb{O}} Y$ to ${\mathbb{O}} X\xymatrix@C-1pc{\ar@<.5ex>[r]\ar@<-.5ex>[r]&} {\mathbb{O}} Y$, we see that the map ${\mathbb{O}} C\to C$ constructed above is an ${\opsymbfont{O}}$-algebra structure map and an easy check of universal properties shows that $C$ with this ${\opsymbfont{O}}$-algebra structure is the coequalizer in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. This shows that if a pair of parallel arrows in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ has a reflexion in ${\catsymbfont{M}}$, then the coequalizer in ${\catsymbfont{M}}$ has the canonical structure of an ${\opsymbfont{O}}$-algebra and is the coequalizer in ${\catsymbfont{M}}[{\opsymbfont{O}}]$.
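A basic example is the canonical presentation of an ${\opsymbfont{O}}$-algebra $(A,\xi)$: the diagram
\[
\xymatrix@C-1pc{ {\mathbb{O}}{\mathbb{O}} A\ar@<.5ex>[r]^-{\mu}\ar@<-.5ex>[r]_-{{\mathbb{O}}\xi}&{\mathbb{O}} A\ar[r]^-{\xi}&A }
\]
is a reflexive coequalizer with reflexion ${\mathbb{O}}\eta$, and it is even a split coequalizer in ${\catsymbfont{M}}$ (split by $\eta_{A}\colon A\to {\mathbb{O}} A$ and $\eta_{{\mathbb{O}} A}\colon {\mathbb{O}} A\to {\mathbb{O}}{\mathbb{O}} A$); it reappears in several of the constructions below.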
We can turn the observation above into a construction of colimits of arbitrary shapes in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. Given a diagram $D\colon {\catsymbfont{D}}\to {\catsymbfont{M}}[{\opsymbfont{O}}]$, assume that the colimit of the underlying functor to ${\catsymbfont{M}}$ exists, and denote it by $\colim^{{\catsymbfont{M}}}D$. If $\colim^{{\catsymbfont{M}}}{\mathbb{O}} D$ also exists, then we get a pair of parallel arrows \begin{equation}\label{eq:coeq} \xymatrix@C-1pc{ {\mathbb{O}}(\colim^{{\catsymbfont{M}}} {\mathbb{O}} D)\ar@<.5ex>[r]\ar@<-.5ex>[r]&{\mathbb{O}}(\colim^{{\catsymbfont{M}}} D) } \end{equation} where one arrow is induced by the ${\opsymbfont{O}}$-algebra structure maps ${\mathbb{O}} D(d)\to D(d)$ and the other is the composite \[ {\mathbb{O}}(\colim^{{\catsymbfont{M}}} {\mathbb{O}} D)\overto{{\mathbb{O}} i} {\mathbb{O}}{\mathbb{O}}(\colim^{{\catsymbfont{M}}} D)\overto{\mu} {\mathbb{O}}(\colim^{{\catsymbfont{M}}} D) \] where $\mu$ is the monadic multiplication ${\mathbb{O}}{\mathbb{O}}\to {\mathbb{O}}$ and \[ i\colon \colim^{{\catsymbfont{M}}} {\mathbb{O}} D\to {\mathbb{O}}(\colim^{{\catsymbfont{M}}} D) \] is the map assembled from the maps ${\mathbb{O}} D(d)\to {\mathbb{O}}(\colim^{{\catsymbfont{M}}} D)$ induced by applying ${\mathbb{O}}$ to the canonical maps $D(d)\to \colim^{{\catsymbfont{M}}} D$. We also have a reflexion \[ {\mathbb{O}}(\colim^{{\catsymbfont{M}}} D)\to {\mathbb{O}}(\colim^{{\catsymbfont{M}}} {\mathbb{O}} D) \] induced by the unit map $D(d)\to {\mathbb{O}} D(d)$. Thus, the coequalizer of~\eqref{eq:coeq} in ${\catsymbfont{M}}$ has the canonical structure of an ${\opsymbfont{O}}$-algebra which provides the coequalizer in ${\catsymbfont{M}}[{\opsymbfont{O}}]$; a check of universal properties shows that the coequalizer is the colimit in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ of~$D$.
\begin{prop} Assume ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad}. For any operad ${\opsymbfont{O}}$ and any diagram $D\colon {\catsymbfont{D}}\to {\catsymbfont{M}}[{\opsymbfont{O}}]$, if the colimit of $D$ and the colimit of ${\mathbb{O}} D$ exist in ${\catsymbfont{M}}$, then the colimit of $D$ exists in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ and is given by the coequalizer of the reflexive pair displayed in~\eqref{eq:coeq}. \end{prop}
For example, the coproduct $A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]} B$ in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ can be constructed as the coequalizer \[ \xymatrix@C-1pc{ {\mathbb{O}}({\mathbb{O}} A \amalg {\mathbb{O}} B)\ar@<.5ex>[r]\ar@<-.5ex>[r] &{\mathbb{O}}(A \amalg B)\ar[r]&A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]}B. } \] In the case when $B={\mathbb{O}} X$ for some $X$ in ${\catsymbfont{M}}$, we can say more by recognizing that the category of ${\opsymbfont{O}}$-algebras under $A$ is itself the category of algebras over an operad.
\begin{cons}[The enveloping operad]\label{cons:eop} For $m\geq 0$, define ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(m)$ by the coequalizer diagram \begin{equation*}\label{eq:env} \xymatrix@C-1pc{ \myop\coprod_{\ell=0}^{\infty}{\opsymbfont{O}}(\ell+m)\mathbin{\square}_{\Sigma_{\ell}} ({\mathbb{O}} A)^{(\ell)}\ar@<.5ex>[r]\ar@<-.5ex>[r] &\myop\coprod_{\ell=0}^{\infty}{\opsymbfont{O}}(\ell+m)\mathbin{\square}_{\Sigma_{\ell}} A^{(\ell)}\ar[r] &{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(m) } \end{equation*} where one arrow is induced by the operadic multiplication \[ \Gamma^{\ell+m}_{j_{1},\dotsc,j_{\ell},1,\dotsc,1}\colon {\opsymbfont{O}}(\ell+m)\mathbin{\square} {\opsymbfont{O}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{O}}(j_{\ell})\mathbin{\square} \mathbf{1}\mathbin{\square}\dotsb \mathbin{\square}\mathbf{1}\to {\opsymbfont{O}}(j+m) \] and the other by the ${\opsymbfont{O}}$-algebra action map ${\mathbb{O}} A\to A$. We think of the $\ell$ factors of $A$ (or ${\mathbb{O}} A$) as being associated with the first $\ell$ inputs of ${\opsymbfont{O}}(\ell+m)$, leaving the last $m$ inputs open. We then have a $\Sigma_{m}$-action induced from the $\Sigma_{m}$-action on ${\opsymbfont{O}}(\ell+m)$ on the open inputs, unit map $1\colon \mathbf{1}\to {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(1)$ induced by the unit map of ${\opsymbfont{O}}$ (on the summand $\ell=0$), and operadic composition $\Gamma$ induced by applying the operadic multiplication of ${\opsymbfont{O}}$ using the open inputs. \end{cons}
This operad is called the \term{enveloping operad} of $A$ and generalizes the enveloping algebra $U^{{\opsymbfont{O}}}A$ of Construction~\ref{cons:UA}: for $m=1$, ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(1)$ is precisely the coequalizer defining $U^{{\opsymbfont{O}}}A$ and the operad unit and multiplication $\Gamma^{1}_{1}$ coincide with the $\mathbin{\square}$-monoid unit and multiplication.
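For instance, for ${\opsymbfont{O}}={\opsymbfont{C}}{\mathrm{om}}$ and $A$ a commutative $\mathbin{\square}$-monoid, every ${\opsymbfont{C}}{\mathrm{om}}(\ell+m)$ is the unit $\mathbf{1}$, so the coequalizer in Construction~\ref{cons:eop} does not depend on $m$: it is the canonical presentation of $A$ as the coequalizer of the pair of maps ${\mathbb{P}}{\mathbb{P}} A\to {\mathbb{P}} A$ given by the monad multiplication and by ${\mathbb{P}}$ of the action map. One can check from this that
\[
{\opsymbfont{U}}^{{\opsymbfont{C}}{\mathrm{om}}}_{A}(m)\cong A \qquad\text{for every } m\geq 0,
\]
with the operadic composition induced by the multiplication of $A$, so that ${\opsymbfont{U}}^{{\opsymbfont{C}}{\mathrm{om}}}_{A}$-algebras are commutative $\mathbin{\square}$-monoids under $A$.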
To return to the discussion of the category of ${\opsymbfont{O}}$-algebras under $A$, we note that for $m=0$, the coequalizer in Construction~\ref{cons:eop} is \[ \xymatrix@C-1pc{ {\mathbb{O}} {\mathbb{O}} A\ar@<.5ex>[r]\ar@<-.5ex>[r]&{\mathbb{O}} A\ar[r]&{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(0), } \] giving a canonical isomorphism $A\to {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(0)$, and so a ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$-algebra $T$ comes with a structure map $A\to T$. Looking at the summands with $\ell=0$ above, we get a map of operads ${\opsymbfont{O}}\to {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$, giving $T$ an ${\opsymbfont{O}}$-algebra structure; the map $A\to T$ is a map of ${\opsymbfont{O}}$-algebras. On the other hand, given an ${\opsymbfont{O}}$-algebra $B$ and a map of ${\opsymbfont{O}}$-algebras $A\to B$, we have maps \[ {\opsymbfont{O}}(\ell+m)\mathbin{\square} A^{(\ell)} \mathbin{\square} B^{(m)}\to {\opsymbfont{O}}(\ell+m)\mathbin{\square} B^{(\ell)} \mathbin{\square} B^{(m)} \to B \] which together induce maps ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(m)\mathbin{\square} B^{(m)}\to B$ that are easily checked to provide ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$-algebra structure maps. This gives a bijection between the structure of an ${\opsymbfont{O}}$-algebra under $A$ and the structure of a ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$-algebra.
\begin{prop} When ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad}, then for an object $X$ of ${\catsymbfont{M}}$, the set of ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$-algebra structures on $X$ is in bijective correspondence with the set of ordered pairs consisting of an ${\opsymbfont{O}}$-algebra structure on $X$ and a map of ${\opsymbfont{O}}$-algebras $A\to X$ for that structure. \end{prop}
As a consequence we have the following description of the coproduct of ${\opsymbfont{O}}$-algebras $A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]} {\mathbb{O}} X$, since $A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]}{\mathbb{O}}(-)$ is the left adjoint of the forgetful functor from ${\opsymbfont{O}}$-algebras under $A$ to ${\catsymbfont{M}}$.
\begin{prop} When ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad}, \[ A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]} {\mathbb{O}} X\cong {\mathbb{U}}^{{\opsymbfont{O}}}_{A}X = \myop\coprod_{m=0}^{\infty}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(m)\mathbin{\square}_{\Sigma_{m}}X^{(m)} \] (where the coproduct symbol undecorated by a category denotes coproduct in ${\catsymbfont{M}}$). \end{prop}
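Continuing the example ${\opsymbfont{O}}={\opsymbfont{C}}{\mathrm{om}}$ above: since ${\opsymbfont{U}}^{{\opsymbfont{C}}{\mathrm{om}}}_{A}(m)\cong A$ with trivial $\Sigma_{m}$-action, the proposition specializes to
\[
A\amalg^{{\catsymbfont{M}}[{\opsymbfont{C}}{\mathrm{om}}]} {\mathbb{P}} X\cong \myop\coprod_{m=0}^{\infty}A\mathbin{\square}(X^{(m)}/\Sigma_{m})\cong A\mathbin{\square} {\mathbb{P}} X,
\]
the expected formula for the free commutative algebra under $A$ on an object $X$.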
The decomposition above can be useful even without further information on ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$, but in fact we can be more concrete about what ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$ looks like in the case when $A$ is built up iteratively from pushouts of free objects in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. As a base case, an easy calculation gives \[ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{{\mathbb{O}} X}(m)=\myop\coprod_{\ell=0}^{\infty}{\opsymbfont{O}}(\ell+m)\mathbin{\square}_{\Sigma_{\ell}} X^{(\ell)}. \] Now suppose $A'=A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]}_{{\mathbb{O}} X}{\mathbb{O}} Y$ for some maps $X\to A$ and $X\to Y$ in ${\catsymbfont{M}}$; we can then describe ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}$ in terms of ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}$ and pushouts in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ as follows. (In particular, the calculation of ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(0)$ describes $A'$ in these terms and the calculation of ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(1)$ describes $UA'$ in these terms.) First, using the observations on colimits above, a little work shows that the coequalizer defining ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}$ simplifies in this case to \[ \xymatrix@C-1pc{ \myop\coprod_{\ell=0}^{\infty}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(\ell+m)
\mathbin{\square}_{\Sigma_{\ell}}(X\amalg Y)^{(\ell)}
\ar@<.5ex>[r]\ar@<-.5ex>[r] &\myop\coprod_{\ell=0}^{\infty}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(\ell+m)
\mathbin{\square}_{\Sigma_{\ell}}Y^{(\ell)}
\ar[r]&{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m) } \] where one map is induced by the map $X\to A$ ($={\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(0)$) and the other is induced by the map $X\to Y$. We then have a filtration on ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$ by powers of $Y$; specifically, define $F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$ by the coequalizer \[ \xymatrix@C-1pc{ \myop\coprod_{\ell=0}^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(\ell+m)
\mathbin{\square}_{\Sigma_{\ell}}(X\amalg Y)^{(\ell)}
\ar@<.5ex>[r]\ar@<-.5ex>[r] &\myop\coprod_{\ell=0}^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(\ell+m)
\mathbin{\square}_{\Sigma_{\ell}}Y^{(\ell)}
\ar[r]&F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m) } \] Then $\colim_{k} F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)={\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$. Comparing the universal properties for $F^{k-1}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$ and $F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$, we see that the following diagram is a pushout (in ${\catsymbfont{M}}$). \[ \xymatrix{ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k-1}}(X\mathbin{\square} Y^{(k-1)})
\ar[r]\ar[d] &{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k}}Y^{(k)}\ar[d]\\ F^{k-1}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)\ar[r]&F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m) } \]
This describes ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}$ in terms of iterated pushouts in ${\catsymbfont{M}}$, but we can do somewhat better, as can be seen in the example where ${\catsymbfont{M}}$ is the category of spaces and $X\to Y$ is a closed inclusion. In the pushout above, the top horizontal map comes from the map \[ \Sigma_{k}\times_{\Sigma_{k-1}}(X\times Y^{k-1})\to Y^{k} \] which fails to be an inclusion for $k>1$ except in trivial cases; however, the image of this map can be described as an iterated pushout, starting with $X^{k}$ and gluing in higher powers of $Y$. This works as well in the general case (which we now return to).
Let $Q^{k}_{0}(X\rightarrow Y)=X^{(k)}$, an object of ${\catsymbfont{M}}$ with a $\Sigma_{k}$-action and a $\Sigma_{k}$-equivariant map to $Y^{(k)}$. Inductively, for $i>0$, define $Q^{k}_{i}(X\rightarrow Y)$ as the pushout \begin{equation}\label{eq:Ql} \begin{gathered} \xymatrix@C-1ex{ \Sigma_{k}\times_{\Sigma_{k-i}\times\Sigma_{i}}
(X^{(k-i)}\mathbin{\square} Q^{i}_{i-1}(X\rightarrow Y))\ar[r]\ar[d]& \Sigma_{k}\times_{\Sigma_{k-i}\times \Sigma_{i}}
(X^{(k-i)}\mathbin{\square} Y^{(i)})\ar[d]\\ Q^{k}_{i-1}(X\rightarrow Y)\ar[r] &Q^{k}_{i}(X\rightarrow Y) } \end{gathered} \end{equation} with the evident $\Sigma_{k}$-action and $\Sigma_{k}$-equivariant map \[ Q^{k}_{i}(X\rightarrow Y)\to Y^{(k)}. \] Then for all $j>0$, we have a $(\Sigma_{j}\times \Sigma_{k})$-equivariant map \[ X^{(j)}\mathbin{\square} Q^{k}_{i}(X\rightarrow Y)\to Q^{j+k}_{i}(X\rightarrow Y) \] induced by the map \[ X^{(j)}\mathbin{\square} X^{(k-i)}\mathbin{\square} Y^{(i)}\cong X^{(j+k-i)}\mathbin{\square} Y^{(i)}\to
Q^{j+k}_{i}(X\rightarrow Y) \] and the compatible (inductively defined) map \[ X^{(j)}\mathbin{\square} Q^{k}_{i-1}(X\rightarrow Y)\to Q^{j+k}_{i-1}(X\rightarrow Y) \to Q^{j+k}_{i}(X\rightarrow Y), \] which allows us to continue the induction. In the case when ${\catsymbfont{M}}$ is the category of topological spaces and $X\to Y$ is a closed inclusion, the maps \[ Q^{k}_{0}(X\rightarrow Y)\to \dotsb \to Q^{k}_{k-1}(X\rightarrow Y)\to Y^{(k)} \] are closed inclusions with $Q^{k}_{i}(X\rightarrow Y)$ the subspace of $Y^{k}$ where at most $i$ coordinates are in $Y\setminus X$. In the general case, an inductive argument shows that the map \[ \Sigma_{k}\times_{\Sigma_{k-i}\times \Sigma_{i}}(X^{(k-i)}\mathbin{\square} Y^{(i)}) \to Q^{k}_{i}(X\rightarrow Y) \] is a categorical epimorphism and that the map \[ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k-1}}
(X\mathbin{\square} Y^{(k-1)}) \to {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k}}Q^{k}_{k-1}(X\rightarrow Y) \] is a categorical epimorphism. Since this factors the map \[ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k-1}}
(X\mathbin{\square} Y^{(k-1)}) \to {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k}}Y^{(k)}, \] we get the following more sophisticated identification of $F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)$ as a pushout. \begin{equation}\label{eq:poenv} \begin{gathered} \xymatrix{ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k}}Q^{k}_{k-1}(X\rightarrow Y)
\ar[r]\ar[d] &{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}(k+m)\mathbin{\square}_{\Sigma_{k}}Y^{(k)}\ar[d]\\ F^{k-1}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m)\ar[r]&F^{k}{\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}(m) } \end{gathered} \end{equation} In practice, the map $Q^{k}_{k-1}(X\to Y)\to Y^{(k)}$ is some kind of cofibration when $X\to Y$ is nice enough; the above formulation is then useful for deducing homotopical information in the presence of cofibrantly generated model category structures, as discussed in Section~\ref{sec:model}.
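As a small check on the construction, consider the lowest nontrivial case $k=2$, $i=1$: the pushout~\eqref{eq:Ql} reduces to
\[
\xymatrix@C-1pc{ X\mathbin{\square} X\ar[r]\ar[d]&X\mathbin{\square} Y\ar[d]\\ Y\mathbin{\square} X\ar[r]&Q^{2}_{1}(X\rightarrow Y) }
\]
with the two maps out of $X\mathbin{\square} X$ induced by $X\to Y$ in the second and first variable, respectively, and with $\Sigma_{2}$ acting through the symmetry isomorphism; when ${\catsymbfont{M}}$ is spaces and $X\to Y$ is a closed inclusion, this recovers the subspace $(X\times Y)\cup (Y\times X)$ of $Y\times Y$.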
\section{Enrichment and Geometric Realization} \label{sec:enrich}
Categories of operadic algebras in spaces or spectra come with a canonical enrichment in spaces, i.e., they have mapping spaces and an intrinsic notion of homotopy. While more abstract notions of homotopy, for example, in terms of model structures, now play a more significant role in homotopy theory, the topological enrichment provides some powerful tools, including and especially geometric realization of simplicial objects.
We begin with a general discussion of enrichment of operadic algebra categories. When ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad}, Proposition~\ref{prop:algmon} describes the maps in the category of ${\opsymbfont{O}}$-algebras as an equalizer: \[ \xymatrix@C-1pc{ {\catsymbfont{M}}[{\opsymbfont{O}}](A,B)\ar[r]&{\catsymbfont{M}}(A,B)\ar@<-.5ex>[r]\ar@<.5ex>[r]&{\catsymbfont{M}}({\mathbb{O}} A,B) } \] where one arrow ${\catsymbfont{M}}(A,B)\to {\catsymbfont{M}}({\mathbb{O}} A,B)$ is induced by the action map ${\mathbb{O}} A\to A$ and the other arrow is induced by applying the functor ${\mathbb{O}}\colon {\catsymbfont{M}}(A,B)\to {\catsymbfont{M}}({\mathbb{O}} A,{\mathbb{O}} B)$ and then using the action map ${\mathbb{O}} B\to B$. When ${\catsymbfont{M}}$ is enriched over a complete symmetric monoidal category (for example, when the mapping sets of ${\catsymbfont{M}}$ are topologized or simplicial), then ${\catsymbfont{M}}[{\opsymbfont{O}}]$ becomes enriched exactly when ${\mathbb{O}}$ has the structure of an enriched functor, defining the enrichment of ${\catsymbfont{M}}[{\opsymbfont{O}}]$ by the equalizer above. Clearly it is not always possible for ${\mathbb{O}}$ to be enriched: in the case when ${\catsymbfont{M}}$ is the category of abelian groups and ${\opsymbfont{O}}={\opsymbfont{A}}{\mathrm{ss}}$ or ${\opsymbfont{C}}{\mathrm{om}}$, then ${\mathbb{O}}$ is not an additive functor so cannot be enriched over abelian groups; this corresponds to the fact that the category of rings and the category of commutative rings are not enriched over abelian groups. On the other hand, enrichments over spaces and simplicial sets are usually inherited by algebra categories; the reason, as we now explain, derives from the fact that spaces and simplicial sets are cartesian.
For convenience, consider the case when ${\catsymbfont{M}}$ is a closed symmetric monoidal category, let $({\catsymbfont{E}},\times,\mathbf{*})$ be a symmetric monoidal category (which we will eventually assume to be cartesian), and let $L\colon {\catsymbfont{E}}\to {\catsymbfont{M}}$ be a strong symmetric monoidal functor that is a left adjoint; let $R$ denote its right adjoint. For formal reasons $R$ is then lax symmetric monoidal and in particular $RF$ provides an ${\catsymbfont{E}}$-enrichment of ${\catsymbfont{M}}$ (where, as always, $F$ denotes the mapping object in ${\catsymbfont{M}}$). These hypotheses are not all necessary but avoid some review of enriched category theory and concisely state a lot of coherence data that more minimal hypotheses would force us to spell out. The iterated symmetric monoidal product in ${\catsymbfont{M}}$ then gives a multivariable enriched functor \[ RF(A_{1},B_{1}) \times \dotsb \times RF(A_{m},B_{m})\to RF(A_{1}\mathbin{\square} \dotsb \mathbin{\square} A_{m},B_{1}\mathbin{\square} \dotsb \mathbin{\square} B_{m}). \] Now assume that $\times$ is a cartesian monoidal product, meaning that it is the categorical product, the unit is the final object, and the symmetry and unit isomorphisms are the universal ones. With this assumption, we have a natural diagonal map $E\to E\times E$ (for objects $E$ of ${\catsymbfont{E}}$), which we can apply in particular to the object $RF(A,B)$ to get a natural map \begin{equation}\label{eq:enr} RF(A,B)\to RF(A,B)\times\dotsb \times RF(A,B)\to RF(A^{(m)},B^{(m)}). \end{equation} This makes the $m$th $\mathbin{\square}$-power into an ${\catsymbfont{E}}$-enriched functor for $m>0$. In the case $m=0$, we have the final map \[ RF(A,B)\to *\to R\mathbf{1}\overto{\cong} RF(A^{(0)},B^{(0)}). \] From here the rest is easy: the $\mathbin{\square},F$ adjunction gives a natural (and ${\catsymbfont{E}}$-natural) map \[ RF(A^{(m)},B^{(m)})\to RF({\opsymbfont{O}}(m)\mathbin{\square} A^{(m)},{\opsymbfont{O}}(m)\mathbin{\square} B^{(m)}) \] and the composite to $RF({\opsymbfont{O}}(m)\mathbin{\square} A^{(m)},{\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} B^{(m)})$ admits a canonical factorization \[ RF(A,B)\to RF({\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} A^{(m)},{\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} B^{(m)}) \] since the target is a limit (in ${\catsymbfont{E}}$) that exists by right adjoint considerations when the quotient ${\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} B^{(m)}=({\opsymbfont{O}}(m)\mathbin{\square} B^{(m)})/\Sigma_{m}$ in ${\catsymbfont{M}}$ exists. When we assume that ${\catsymbfont{M}}$ has countable coproducts, we can compose further into \[ RF({\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}} A^{(m)},{\mathbb{O}} B), \] and the countable categorical product over $m$ exists, giving an ${\catsymbfont{E}}$-natural map \[ RF(A,B)\to RF({\mathbb{O}} A,{\mathbb{O}} B) \] which provides the ${\catsymbfont{E}}$-enrichment of ${\mathbb{O}}$. We state this as the following theorem.
\begin{thm}\label{thm:enrich} Let ${\catsymbfont{M}}$ be a closed symmetric monoidal category with countable colimits, and let ${\opsymbfont{O}}$ be an operad in ${\catsymbfont{M}}$. Let ${\catsymbfont{E}}$ be a cartesian monoidal category and let ${\catsymbfont{E}}\to {\catsymbfont{M}}$ be a strong symmetric monoidal functor with a right adjoint. Regarding ${\catsymbfont{M}}$ as ${\catsymbfont{E}}$-enriched via the right adjoint, the category ${\catsymbfont{M}}[{\opsymbfont{O}}]$ of ${\opsymbfont{O}}$-algebras has a canonical ${\catsymbfont{E}}$-enrichment for which the forgetful functor ${\catsymbfont{M}}[{\opsymbfont{O}}]\to {\catsymbfont{M}}$ is ${\catsymbfont{E}}$-enriched. \end{thm}
We apply this now in the discussion of geometric realizations of (co)simplicial objects. Let ${\catsymbfont{S}}$ denote either the category of spaces or of simplicial sets, and write $C(-,-)$ for the internal mapping objects in ${\catsymbfont{S}}$. To avoid awkward circumlocutions, we will refer to objects of ${\catsymbfont{S}}$ as spaces in either case for the rest of the section. We now assume that ${\catsymbfont{M}}$ is closed symmetric monoidal and has countable coproducts and that we have a left adjoint symmetric monoidal functor $L$ from ${\catsymbfont{S}}$ to ${\catsymbfont{M}}$, as above, so that Theorem~\ref{thm:enrich} applies. We write $R$ for the right adjoint to $L$ as above, so that $RF(-,-)$ provides mapping spaces for ${\catsymbfont{M}}$. The category ${\catsymbfont{M}}$ then has \term{tensors} $X\otimes T$ and \term{cotensors} $T\pitchfork Y$, defined by the natural isomorphisms \begin{align*} RF(X\otimes T,-)&\cong C(T,RF(X,-))&&\text{(tensor)}\\ RF(-,T\pitchfork Y)&\cong C(T,RF(-,Y))&&\text{(cotensor)} \end{align*} for spaces $T$ and objects $X$ and $Y$ of ${\catsymbfont{M}}$, constructed as follows.
\begin{prop} In the context of Theorem~\ref{thm:enrich}, tensors and cotensors with spaces exist and are given by $X\otimes T = X\mathbin{\square} LT$ and $T\pitchfork Y=F(LT,Y)$ for a space $T$ and objects $X,Y$ in ${\catsymbfont{M}}$. \end{prop}
The proposition is an easy consequence of the formal isomorphism \begin{equation}\label{eq:MSComp} RF(LT,X)\cong C(T,RX), \end{equation} natural in spaces $T$ and objects $X$ of ${\catsymbfont{M}}$; the isomorphism in the forward direction is adjoint to the map \[ RF(LT,X)\times T\to RF(LT,X)\times RLT\to R(F(LT,X)\mathbin{\square} LT)\to RX \] and the isomorphism in the backwards direction is adjoint to the map $LC(T,RX)\to F(LT,X)$ adjoint to the map \[ LC(T,RX)\mathbin{\square} LT\cong L(C(T,RX)\times T)\to LRX\to X. \]
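For instance, when ${\catsymbfont{M}}={\catsymbfont{S}}$ and $L$ is the identity functor, these formulas reduce to the familiar tensor and cotensor of spaces: $X\otimes T=X\times T$ and $T\pitchfork Y=C(T,Y)$.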
Let $RF^{{\catsymbfont{M}}[{\opsymbfont{O}}]}(-,-)$ denote the mapping spaces constructed above for the category of ${\opsymbfont{O}}$-algebras; despite the suggestion of the notation, this is not typically a composite functor. For an ${\opsymbfont{O}}$-algebra $A$, $F(-,A)$ does not typically carry a canonical ${\opsymbfont{O}}$-algebra structure, but for a space $T$, $F(LT,A)=T\pitchfork A$ does: the structure map \[ {\opsymbfont{O}}(n)\mathbin{\square} (T\pitchfork A)^{(n)}\to T\pitchfork A \] is adjoint to the map \[ {\opsymbfont{O}}(n)\mathbin{\square} (T\pitchfork A)^{(n)}\mathbin{\square} LT = {\opsymbfont{O}}(n)\mathbin{\square} (F(LT,A))^{(n)}\mathbin{\square} LT \to A \] constructed as the composite \begin{multline*} {\opsymbfont{O}}(n)\mathbin{\square} (F(LT,A))^{(n)}\mathbin{\square} LT \to {\opsymbfont{O}}(n)\mathbin{\square} (F(LT,A))^{(n)}\mathbin{\square} (LT)^{(n)} \\\to {\opsymbfont{O}}(n)\mathbin{\square} A^{(n)}\to A \end{multline*} using the diagonal map on the space $T$ and the structure map on $A$. A check of universal properties then shows that $T\pitchfork A$ is the cotensor of $A$ with $T$ in the category of ${\opsymbfont{O}}$-algebras. Tensors in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ can be constructed as reflexive coequalizers \[ \xymatrix@C-1pc{ {\mathbb{O}}({\mathbb{O}} A\otimes T)
\ar@<.5ex>[r]\ar@<-.5ex>[r] &{\mathbb{O}}(A\otimes T)\ar[r] &A\otimes^{{\catsymbfont{M}}[{\opsymbfont{O}}]} T. } \] Writing $\Delta[n]$ for the standard $n$-simplex, we then have the standard definition of geometric realization of simplicial objects in ${\catsymbfont{M}}$ and ${\catsymbfont{M}}[{\opsymbfont{O}}]$ (without additional assumptions) and geometric realization (often called ``Tot'') of cosimplicial objects in ${\catsymbfont{M}}$ and ${\catsymbfont{M}}[{\opsymbfont{O}}]$ when certain limits exist. Given a simplicial object $X_{\ssdot}$ or a cosimplicial object $Y^{\ssdot}$, the degeneracy subobject $sX_{n}$ of $X_{n}$ is defined as the colimit of the degeneracy maps and the degeneracy quotient object $sY^{n}$ of $Y^{n}$ is defined as the limit (if it exists) of the degeneracy maps. (In some literature, $sX_{n}$ is called the ``latching object'' and $sY^{n}$ the ``matching object''; see
\cite[\S15.2]{Hirschhorn-ModelCategories}.) The geometric realization of $X_{\ssdot}$ in ${\catsymbfont{M}}$ or ${\catsymbfont{M}}[{\opsymbfont{O}}]$ is then the sequential colimit of $|X_{\ssdot}|_{n}$, where $|X_{\ssdot}|_{0}=X_{0}$ and
$|X_{\ssdot}|_{n}$ is defined inductively as the pushout \[ \xymatrix@C-1pc{ (sX_{n}\otimes \Delta[n]) \cup_{(sX_{n}\otimes \partial \Delta[n])}
(X_{n}\otimes \partial \Delta[n])\ar[r]\ar[d] &X_{n}\otimes \Delta[n]\ar[d]\\
|X_{\ssdot}|_{n-1}\ar[r]&|X_{\ssdot}|_{n} } \] with both the tensor and the pushouts performed in ${\catsymbfont{M}}$ to define the geometric realization in ${\catsymbfont{M}}$ or performed in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ to define the geometric realization in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. The analogous, opposite construction defines the geometric realization of $Y^{\ssdot}$ when all the limits exist. Because cotensors and limits (when they exist) coincide in ${\catsymbfont{M}}$ and ${\catsymbfont{M}}[{\opsymbfont{O}}]$, geometric realization of cosimplicial objects (when it exists) also coincides in ${\catsymbfont{M}}$ and ${\catsymbfont{M}}[{\opsymbfont{O}}]$. Because pushouts generally look very different in ${\catsymbfont{M}}$ than in ${\catsymbfont{M}}[{\opsymbfont{O}}]$, one might expect that geometric realization of simplicial objects in ${\catsymbfont{M}}$ and in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ would also look very different; this turns out not to be the case.
\begin{thm}\label{thm:georeal} Assume ${\catsymbfont{M}}$ satisfies the hypotheses of Theorem~\ref{thm:enrich} for ${\catsymbfont{E}}$ either the category of spaces or the category of simplicial sets. \begin{enumerate} \item Let $A^{\ssdot}$ be a cosimplicial object in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. If the limits defining the geometric realization (``Tot'') exist in ${\catsymbfont{M}}$, then that geometric realization has the canonical structure of an ${\opsymbfont{O}}$-algebra and is isomorphic to the geometric realization (``Tot'') in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. \item Let $A_{\ssdot}$ be a simplicial object in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. Then the geometric realization of $A_{\ssdot}$ in ${\catsymbfont{M}}$ has the canonical structure of an ${\opsymbfont{O}}$-algebra and is isomorphic to the geometric realization of $A_{\ssdot}$ in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. \end{enumerate} \end{thm}
As discussed above, only~(ii) requires additional argument. For clarity in the argument for the theorem, we will write $|{\cdot}|$ for geometric realization in ${\catsymbfont{M}}$ and $|{\cdot}|^{{\catsymbfont{M}}[{\opsymbfont{O}}]}$ for geometric realization in ${\catsymbfont{M}}[{\opsymbfont{O}}]$. The key fact is the following lemma.
\begin{lem}\label{lem:prod} For ${\catsymbfont{M}}$ as in the previous theorem, geometric realization in ${\catsymbfont{M}}$ is strong symmetric monoidal. \end{lem}
\begin{proof} Although we wrote a more constructive definition of geometric realization above, it can also be described as a coend \[
|X_{\ssdot}|=\int^{{\mathbf \varDelta}^{\op}}X_{\ssdot}\otimes \Delta[\bullet], \] where ${\mathbf \varDelta}$ denotes the category of simplexes (the category with objects $[n]=\{0,\dotsc,n\}$ for $n=0,1,2,\dotsc$, and maps the non-decreasing functions) and $\Delta[n]$ denotes the standard $n$-simplex in spaces or simplicial sets. Because the symmetric monoidal product $\mathbin{\square}$ for ${\catsymbfont{M}}$ is assumed to commute with colimits in each variable, we can identify the product of geometric realizations also as a coend \begin{equation*}\label{eq:grc}
|X_{\ssdot}|\mathbin{\square}|Y_{\ssdot}| \cong
\int^{{\mathbf \varDelta}^{\op}\times {\mathbf \varDelta}^{\op}}
(X_{\ssdot}\mathbin{\square} Y_{\ssdot})\otimes
(\Delta[\bullet]\times \Delta[\bullet]). \end{equation*} On the other hand, \[
|X_{\ssdot} \mathbin{\square} Y_{\ssdot}|=
\int^{{\mathbf \varDelta}^{\op}}\diag(X_{\ssdot}\mathbin{\square} Y_{\ssdot}) \otimes \Delta[\bullet]. \] Next, we need a purely formal observation, which is an adjoint form of the Yoneda lemma: if coproducts of appropriate cardinality exist in ${\catsymbfont{D}}$, then given a functor $F\colon {\catsymbfont{C}}\to {\catsymbfont{D}}$, functoriality of $F$ induces a natural isomorphism \[ \int^{c\in {\catsymbfont{C}}}F(c)\times {\catsymbfont{C}}(c,-)\overto{\cong}F(-) \] (where $\times$ denotes coproduct over the given set; the coend exists and the identification holds with no further hypotheses on ${\catsymbfont{C}}$ or ${\catsymbfont{D}}$). Applying this to \[ F((\bullet,\bullet))=X_{\ssdot} \mathbin{\square} Y_{\ssdot}\colon {\mathbf \varDelta}^{\op}\times {\mathbf \varDelta}^{\op}\to {\catsymbfont{M}} \] and pre-composing with $\diag$, we get an isomorphism \[ X_{p}\mathbin{\square} Y_{p}\cong \int^{(m,n)\in {\mathbf \varDelta}^{\op}\times {\mathbf \varDelta}^{\op}}(X_{m}\mathbin{\square} Y_{n})\times ({\mathbf \varDelta}^{\op}(m,p)\times {\mathbf \varDelta}^{\op}(n,p)) \] of functors $p\in {\mathbf \varDelta}^{\op}\to {\catsymbfont{M}}$. Commuting coends, we can reorganize the double coend \[
|X_{\ssdot} \mathbin{\square} Y_{\ssdot}|\cong
\int^{p\in {\mathbf \varDelta}^{\op}}\hspace{-1ex} \bigg(
\int^{(m,n)\in {\mathbf \varDelta}^{\op}\times {\mathbf \varDelta}^{\op}} \hspace{-1.5ex}
(X_{m}\mathbin{\square} Y_{n})
\times ({\mathbf \varDelta}^{\op}(m,p)\times {\mathbf \varDelta}^{\op}(n,p)) \bigg) \otimes \Delta[p] \] as \[ \int^{(m,n)\in {\mathbf \varDelta}^{\op}\times {\mathbf \varDelta}^{\op}}
(X_{m}\mathbin{\square} Y_{n}) \otimes \bigg(
\int^{p\in {\mathbf \varDelta}^{\op}}
({\mathbf \varDelta}^{\op}(m,p)\times {\mathbf \varDelta}^{\op}(n,p))\times
\Delta[p] \bigg). \] In the latter formula, the expression in parentheses is the coend formula for the geometric realization (in spaces) of the product of standard simplices (in simplicial sets) $\Delta[m]_{\ssdot}\times \Delta[n]_{\ssdot}$, which is $\Delta[m]\times \Delta[n]$ by the ``classic'' version of the lemma for geometric realization in spaces. This then constructs the natural isomorphism
$|X_{\ssdot}|\mathbin{\square} |Y_{\ssdot}|\cong |X_{\ssdot}\mathbin{\square} Y_{\ssdot}|$, and a little more fiddling with coends shows that this natural transformation is symmetric monoidal. \end{proof}
As a consequence of the previous lemma, we have a natural isomorphism
${\mathbb{O}}|X_{\ssdot}|\cong |{\mathbb{O}} X_{\ssdot}|$, making the appropriate diagrams commute so that the geometric realization (in ${\catsymbfont{M}}$) of a simplicial object $A_{\ssdot}$ in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ obtains the natural structure of an ${\opsymbfont{O}}$-algebra. Moreover, the canonical maps $A_{n}\otimes
\Delta[n]\to |A_{\ssdot}|$ induce maps of ${\opsymbfont{O}}$-algebras $A_{n}\otimes^{{\catsymbfont{M}}[{\opsymbfont{O}}]}\Delta[n]\to
|A_{\ssdot}|$ that assemble into a natural map of ${\opsymbfont{O}}$-algebras \[
|A_{\ssdot}|^{{\catsymbfont{M}}[{\opsymbfont{O}}]}\to |A_{\ssdot}|. \]
In the case when $A_{\ssdot}={\mathbb{O}} X_{\ssdot}$, under the identification of colimits $|{\mathbb{O}} X_{\ssdot}|^{{\catsymbfont{M}}[{\opsymbfont{O}}]}={\mathbb{O}}|X_{\ssdot}|$, this map is the isomorphism
${\mathbb{O}}|X_{\ssdot}|\to |{\mathbb{O}} X_{\ssdot}|$ above. To see that it is an isomorphism for arbitrary $A_{\ssdot}$, write $A_{\ssdot}$ as a (reflexive) coequalizer \[ \xymatrix@C-1pc{ {\mathbb{O}} {\mathbb{O}} A_{\ssdot}\ar@<.5ex>[r]\ar@<-.5ex>[r]&{\mathbb{O}} A_{\ssdot}\ar[r]&A_{\ssdot}, } \] apply the functors, and compare diagrams.
\section{Model Structures for Operadic Algebras} \label{sec:model}
The purpose of this section is to review the construction of model structures on some of the categories of operadic algebras that are of interest in homotopy theory; we use these in the next section in comparison theorems giving Quillen equivalences between some of these categories.
Constructing model structures for algebras over operads is a special case of constructing model structures for algebras over monads; chapter~VII of EKMM~\cite{EKMM} seems to be an early reference for this kind of result, but it concentrates on the category of LMS-spectra and related categories. Schwede-Shipley~\cite{SS-AlgMod} studies the general case of monads in cofibrantly generated monoidal model categories, which Spitzweck~\cite{Spitzweck-Operads} specializes to the case of operads. Because less sharp results hold in the general case than in the special cases of interest, we state the results on model structures as a list of examples. This is an ``example theorem'' both in the sense that it gives a list of examples and in the sense that it fits into the general rubric of the kind of theorem that should hold very generally under appropriate technical hypotheses with essentially the same proof outline. Some terminology and notation are explained after the statement.
\begin{ethm}\label{ethm:model} Let ${\catsymbfont{M}}$ be a symmetric monoidal category with a cofibrantly generated model structure and let ${\opsymbfont{O}}$ be an operad in ${\catsymbfont{M}}$ from one of the examples listed below. Then the category of ${\opsymbfont{O}}$-algebras in ${\catsymbfont{M}}$ is a closed model category with: \begin{enumerate} \item Weak equivalences the underlying weak equivalences in ${\catsymbfont{M}}$ \item Fibrations the underlying fibrations in ${\catsymbfont{M}}$ \item Cofibrations the retracts of regular ${\mathbb{O}} I$-cofibrations \end{enumerate} This theorem holds in particular in the examples: \begin{enumerate}\deflist \item ${\catsymbfont{M}}$ is the category of symmetric spectra (of spaces or simplicial sets) with its positive stable model structure or orthogonal spectra with its positive stable model structure or the category of EKMM $S$-modules with its standard model structure (with $\mathbin{\square}$ the smash product, $\mathbf{1}$ the sphere spectrum) and ${\opsymbfont{O}}$ is any operad in ${\catsymbfont{M}}$. \cite[8.1]{ChadwickMandell} \item ${\catsymbfont{M}}$ is the category of spaces or simplicial sets (with $\mathbin{\square}=\times$, $\mathbf{1}=*$), or simplicial $R$-modules for some simplicial commutative ring $R$ (with $\mathbin{\square}=\otimes_{R}$, $\mathbf{1}=R$) and ${\opsymbfont{O}}$ is any operad. \item ${\catsymbfont{M}}$ is the category of (unbounded) chain complexes in $R$-modules for a commutative ring $R$ (with $\mathbin{\square} =\otimes_{R}$, $\mathbf{1}=R$) and either $R\supset {\mathbb{Q}}$ or ${\opsymbfont{O}}$ admits a map of operads ${\opsymbfont{O}}\to {\opsymbfont{O}}\otimes {\opsymbfont{E}}$ which is a section for the map ${\opsymbfont{O}}\otimes {\opsymbfont{E}}\to {\opsymbfont{O}}\otimes {\opsymbfont{C}}{\mathrm{om}}\cong {\opsymbfont{O}}$, where ${\opsymbfont{E}}$ is any $E_{\infty}$ operad that naturally acts on the normalized cochains of simplicial sets. \cite[3.1.3]{BergerFresse-Combinatorial} \item ${\catsymbfont{M}}$ is a monoidal model category in the sense of \cite[3.1]{SS-AlgMod} that satisfies the Monoid Axiom of \cite[3.3]{SS-AlgMod} and ${\opsymbfont{O}}$ is a cofibrant operad in the sense of \cite[\S3]{Spitzweck-Operads}. \cite[\S4, Theorem~4]{Spitzweck-Operads} \end{enumerate} \end{ethm}
The category of EKMM ${\mathbb{L}}$-spectra \cite[I\S4]{EKMM} also fits into example~(a) if we allow ${\catsymbfont{M}}$ to be a ``weak'' symmetric monoidal category in the sense of \cite[II.7.1]{EKMM}; the theorem then covers categories of operadic algebras in LMS spectra for operads over the linear isometries operad that have the form ${\opsymbfont{O}}\times {\opsymbfont{L}}\to {\opsymbfont{L}}$; see \cite[3.5]{ChadwickMandell}.
In part~(c), we note that for an operad that satisfies the section condition (or when $R\supset {\mathbb{Q}}$), the functor ${\opsymbfont{O}}(n)\otimes_{R[\Sigma_{n}]}(-)$ preserves exactness of (homologically) bounded-below exact sequences of $R$-free $R[\Sigma_{n}]$-modules (for all $n$). For operads that satisfy this more general condition but not necessarily the section condition, the algebra category still has a theory of cofibrant objects and a good homotopy theory for those objects; see, for example, \cite[\S2]{Einfty}.
It is beyond the scope of the present chapter to do a full review of closed model category theory terminology, but we recall that a ``cofibrantly generated model category'' has a set $I$ of ``generating cofibrations'' and a set $J$ of ``generating acyclic cofibrations'' for which the Quillen small object argument can be done (perhaps transfinitely, but in the examples of (a), (b), and (c), sequences suffice). Then \[ {\mathbb{O}} I = \{ {\mathbb{O}} f \mid f\in I\} \] is the set of maps of ${\opsymbfont{O}}$-algebras obtained by applying ${\mathbb{O}}$ to each of the maps in $I$. The point of ${\mathbb{O}} I$ is that a map of ${\opsymbfont{O}}$-algebras has the right lifting property with respect to ${\mathbb{O}} I$ in ${\opsymbfont{O}}$-algebras exactly when the underlying map in ${\catsymbfont{M}}$ has the right lifting property with respect to $I$. The same definition and observations apply replacing $I$ with $J$. The strategy for proving the previous theorem is to define the fibrations and weak equivalences of ${\opsymbfont{O}}$-algebras as in (i) and (ii), and define cofibrations in terms of the left lifting property (obtaining the characterization in (iii) as a theorem). The advantage of this approach is that fibrations and acyclic fibrations are also characterized by lifting properties: a map of ${\opsymbfont{O}}$-algebras is a fibration if and only if it has the right lifting property with respect to ${\mathbb{O}} J$ and a map of ${\opsymbfont{O}}$-algebras is an acyclic fibration if and only if it has the right lifting property with respect to ${\mathbb{O}} I$. For these lifting properties, we can attempt the small object argument. We now outline the remaining steps in this approach.
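The observation about ${\mathbb{O}} I$ is just the usual adjunction bijection between lifting problems; spelled out (a routine check, recorded here for convenience): for $f\colon X\to Y$ in $I$ (or $J$) and a map of ${\opsymbfont{O}}$-algebras $p\colon A\to B$, squares \[ \xymatrix@-1pc{ {\mathbb{O}} X\ar[r]\ar[d]_{{\mathbb{O}} f}&A\ar[d]^{p}\\ {\mathbb{O}} Y\ar[r]\ar@{..>}[ur]&B } \qquad\text{and}\qquad \xymatrix@-1pc{ X\ar[r]\ar[d]_{f}&A\ar[d]^{p}\\ Y\ar[r]\ar@{..>}[ur]&B } \] in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ and in ${\catsymbfont{M}}$ (respectively) correspond under the free-forgetful adjunction, as do their diagonal lifts, so $p$ has the right lifting property with respect to ${\mathbb{O}} f$ if and only if the underlying map in ${\catsymbfont{M}}$ has the right lifting property with respect to $f$.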
Recall that a \term{regular ${\mathbb{O}} I$-cofibration} is a map formed as a (transfinite) composite of pushouts along coproducts of maps in ${\mathbb{O}} I$. This is the generalization of the notion of a \term{relative ${\mathbb{O}} I$-cell complex}, which is the colimit of a sequence of pushouts of coproducts of maps in ${\mathbb{O}} I$; in the case of examples~(a), (b), and (c), in a regular ${\mathbb{O}} I$-cofibration the transfinite composite can always be replaced simply by a sequential composite and so a regular ${\mathbb{O}} I$-cofibration is a relative ${\mathbb{O}} I$-cell complex. The small object argument for $I$ and $J$ in ${\catsymbfont{M}}$ implies the small object argument for ${\mathbb{O}} I$ and ${\mathbb{O}} J$, which gives a factorization in ${\opsymbfont{O}}$-algebras of a map as either a regular ${\mathbb{O}} I$-cofibration followed by an acyclic fibration or a regular ${\mathbb{O}} J$-cofibration followed by a fibration. (A small wrinkle comes up in going from the small object argument in ${\catsymbfont{M}}$ to the small object argument in ${\catsymbfont{M}}[{\opsymbfont{O}}]$ in the topological examples of (a) and (b): we need to check that regular ${\mathbb{O}} I$-cofibrations are nice maps, for example, closed inclusions on the constituent spaces; see the ``Cofibration Hypothesis'' of \cite[VII\S4]{EKMM} or \cite[5.3]{MMSS}.)
This gets us most of the way to a model structure. Having defined a cofibration of ${\opsymbfont{O}}$-algebras as a map that has the left lifting property with respect to the acyclic fibrations, the free-forgetful adjunction shows that regular ${\mathbb{O}} I$-cofibrations are cofibrations; moreover, it follows formally that any cofibration is the retract of a regular ${\mathbb{O}} I$-cofibration: given a cofibration $f\colon A\to B$, factor it as $p\circ i$ for $i\colon A\to B'$ a regular ${\mathbb{O}} I$-cofibration and $p\colon B'\to B$ an acyclic fibration, then solving the lifting problem \[ \xymatrix@-1pc{ A\ar[r]^{i}\ar[d]_{f}&B'\ar[d]^{p}\\ B\ar@{..>}[ur]_{g}\ar[r]_{\id}&B } \] to produce a map $g\colon B\to B'$ exhibits $f$ as a retract of $i$. \[ \xymatrix@-1pc{ A\ar[r]^{\id}\ar[d]_{f}&A\ar[r]^{\id}\ar[d]_{i}&A\ar[d]_{f}\\ B\ar[r]_{g}&B'\ar[r]_{p}&B } \] We can try the same thing with regular ${\mathbb{O}} J$-cofibrations; they have the left lifting property with respect to all fibrations so are in particular cofibrations, but are they weak equivalences? This is the big question and what keeps us from having a fully general result for Theorem~\ref{ethm:model} (especially in (c)). If regular ${\mathbb{O}} J$-cofibrations are weak equivalences, then the trick in the previous argument shows that every acyclic cofibration is a retract of a regular ${\mathbb{O}} J$-cofibration, and the lifting property for acyclic cofibrations follows as does the other factorization, proving the model structure. (Conversely, if the model structure exists, because regular ${\mathbb{O}} J$-cofibrations have the left lifting property for all fibrations, it follows that they are weak equivalences.)
In many examples, including examples (a) and (b) in the theorem above, the homogeneous filtration on the pushout that we studied in Section~\ref{sec:limit} can be used to prove that regular ${\mathbb{O}} J$-cofibrations are weak equivalences. Specifically, for $X\to Y$ a map in $J$, taking $A'=A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]}_{{\mathbb{O}} X}{\mathbb{O}} Y$, the case $m=0$ of the filtration on the enveloping operad for $A$ gives a filtration on $A'$ by objects of ${\catsymbfont{M}}$ starting from $A$. Now from the inductive definition of $Q^{k}_{k-1}(X\rightarrow Y)$ in~\eqref{eq:Ql}, it can be checked in examples (a) and (b) that the map $Q^{k}_{k-1}(X\rightarrow Y)\to Y^{(k)}$ is an equivariant Hurewicz cofibration of the underlying spaces or a monomorphism of the underlying simplicial sets as well as being a weak equivalence. The pushout~\eqref{eq:poenv} then identifies the maps in the filtration of $A'$ as weak equivalences as well. (This approach can also be used to prove versions of the ``Cofibration Hypothesis'' of \cite[VII\S4]{EKMM} or \cite[5.3]{MMSS} that regular ${\mathbb{O}} I$-cofibrations are closed inclusions on the constituent spaces.)
Example~(d) is similar, except that it uses a filtration argument on the construction of a cofibrant operad; see \cite[\S4]{Spitzweck-Operads}.
Example~(c) fits into the case of the general theorem of Schwede-Shipley \cite[2.3]{SS-AlgMod}, where every object is fibrant and has a path object. To complete the argument here, we need to show that every map $f\colon A\to B$ factors as a weak equivalence followed by a fibration: \[ \xymatrix@C=1.5pc{ A\ar[r]^-{\simeq}&A'\ar@{->>}[r]&B. } \] We then get the factorization as an acyclic cofibration followed by a fibration by using the factorization already established: \[ \xymatrix@C=1.5pc{ A\ar@{{ >}{-}{>}}[r]^-{\simeq}&A''\ar@{->>}[r]^-{\simeq}&A'\ar@{->>}[r]&B. } \] In the case of~(c) where we hypothesize a map of operads ${\opsymbfont{O}}\to {\opsymbfont{O}}\otimes {\opsymbfont{E}}$, this map gives a natural ${\opsymbfont{O}}$-algebra structure on $B\otimes C^{*}(-)$; the hypothesis that the composite map on ${\opsymbfont{O}}$ is the identity implies that the canonical isomorphism \[ B\cong B\otimes C^{*}(\Delta[0]) \] is an ${\opsymbfont{O}}$-algebra map. Looking at the maps between $\Delta[0]$ and $\Delta[1]$, we get maps of ${\opsymbfont{O}}$-algebras \[ B\to B\otimes C^{*}(\Delta[1])\to B\times B \] and the usual mapping path object construction \[ \xymatrix@C=1.5pc{ A\ar[r]^-{\simeq}&A\times_{B}(B\otimes C^{*}(\Delta[1]))\ar@{->>}[r]&B } \] consists of maps of ${\opsymbfont{O}}$-algebras and gives the factorization. In the case when $R\supset {\mathbb{Q}}$, the polynomial de Rham functor $A^{*}$ reviewed in Section~\ref{sec:cochains} is a functor from simplicial sets to commutative differential graded ${\mathbb{Q}}$-algebras, which can be used in the same way to construct a factorization \[ \xymatrix@C=1.5pc{ A\ar[r]^-{\simeq}&A\times_{B}(B\otimes_{{\mathbb{Q}}} A^{*}(\Delta[1]))\ar@{->>}[r]&B. } \]
In the case of operadic algebras in spaces in example~(b) and EKMM $S$-modules in example~(a), we have another argument taking advantage of the topological enrichment. In these examples, the maps in $J$ are deformation retractions, and so the maps in ${\mathbb{O}} J$ are deformation retractions in the category of ${\opsymbfont{O}}$-algebras. It follows that regular ${\mathbb{O}} J$-cofibrations are also deformation retractions and in particular homotopy equivalences. Since homotopy equivalences are weak equivalences, regular ${\mathbb{O}} J$-cofibrations are weak equivalences in examples where this argument can be made. The specific examples again also fit into the case of \cite[2.3]{SS-AlgMod} where every object is fibrant and has a path object.
\section{Comparison and Rectification Theorems for Operadic Algebras} \label{sec:compare}
This section discusses Quillen equivalences and Quillen adjunctions between the model categories in Example Theorem~\ref{ethm:model}. In particular, when we change from simplicial sets to spaces or when we change the underlying symmetric monoidal category between the Quillen equivalent modern categories of spectra, we get Quillen equivalences of categories of operadic algebras under only mild technical hypotheses on the operad; this gives several comparison theorems. We also consider Quillen adjunctions and Quillen equivalences obtained by change of operads. In wide generality, the augmentation map ${\opsymbfont{A}}\to {\opsymbfont{A}}{\mathrm{ss}}$ for an $A_{\infty}$ operad induces a Quillen equivalence between categories of algebras. Likewise, in the case of modern categories of spectra, the augmentation map ${\opsymbfont{E}}\to {\opsymbfont{C}}{\mathrm{om}}$ for an $E_{\infty}$ operad induces a Quillen equivalence between categories of algebras. These comparison theorems are rectification theorems in that they show that a homotopical algebraic structure can be replaced up to weak equivalence with a strict algebraic structure.
We begin by reviewing the change of operad adjunction. Let $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ be a map of operads in a symmetric monoidal category ${\catsymbfont{M}}$. Such a map certainly gives a restriction functor $U_{f}$ from ${\opsymbfont{B}}$-algebras to ${\opsymbfont{A}}$-algebras, and under mild hypotheses, this functor has a left adjoint. As in the discussion of colimits in Section~\ref{sec:limit}, if we assume that ${\catsymbfont{M}}$ satisfies the hypotheses of Proposition~\ref{prop:monad} then we can define $P_{f}\colon {\catsymbfont{M}}[{\opsymbfont{A}}]\to {\catsymbfont{M}}[{\opsymbfont{B}}]$ by the reflexive coequalizer \[ \xymatrix@C-1pc{ {\mathbb{B}}({\mathbb{A}} A)\ar@<.5ex>[r]\ar@<-.5ex>[r]&{\mathbb{B}} A\ar[r]&P_{f}(A) } \] where ${\mathbb{A}}$ and ${\mathbb{B}}$ denote the monads associated to ${\opsymbfont{A}}$ and ${\opsymbfont{B}}$, one arrow is induced by the ${\opsymbfont{A}}$-algebra structure on $A$, and the other arrow is the composite ${\mathbb{B}}{\mathbb{A}}\to {\mathbb{B}}{\mathbb{B}}\to {\mathbb{B}}$ induced by the map of operads $f$ and the monadic product on ${\mathbb{B}}$. As a side remark, not related to the rest of this section, we note that in this situation the category of ${\opsymbfont{B}}$-algebras can be identified as the category of algebras for the monad $U_{f}P_{f}$ in ${\catsymbfont{M}}[{\opsymbfont{A}}]$ (for a general formal proof, see~\cite[II.6.6.1]{EKMM}).
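For orientation, note that on free algebras the pushforward has a simple description: since the forgetful functor ${\catsymbfont{M}}[{\opsymbfont{B}}]\to {\catsymbfont{M}}$ factors as $U_{f}$ followed by the forgetful functor ${\catsymbfont{M}}[{\opsymbfont{A}}]\to {\catsymbfont{M}}$, its left adjoint (the free ${\opsymbfont{B}}$-algebra functor) is naturally isomorphic to the composite of the left adjoints, that is, \[ P_{f}({\mathbb{A}} Z)\cong {\mathbb{B}} Z \] naturally in objects $Z$ of ${\catsymbfont{M}}$.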
Now suppose that ${\catsymbfont{M}}$ has a closed model structure and ${\catsymbfont{M}}[{\opsymbfont{A}}]$ and ${\catsymbfont{M}}[{\opsymbfont{B}}]$ are closed model categories with fibrations and weak equivalences created in ${\catsymbfont{M}}$. For a map of operads $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$, we then get a Quillen adjunction \[ P_{f}\colon {\catsymbfont{M}}[{\opsymbfont{A}}]\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}[{\opsymbfont{B}}]\;{:}\, U_{f}. \] When can we expect it to be a Quillen equivalence? It is tempting to define an equivalence of operads in ${\catsymbfont{M}}$ to be a map $f$ such that the derived adjunction induces an equivalence of homotopy categories; then we have a tautological result that an equivalence of operads induces a Quillen equivalence of model structures. Instead we propose the following definition, which leads to a theorem with some substance (Example Theorem~\ref{ethm:rect}). It is the condition used in practice in proving comparison and rectification theorems.
\begin{defn}\label{def:dmeq} Let ${\catsymbfont{M}}$ be a closed model category with countable coproducts and with a symmetric monoidal product that preserves countable colimits in each variable. We say that a map $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ of operads in ${\catsymbfont{M}}$ is a \term{derived monad equivalence} if the induced map ${\mathbb{A}} Z\to {\mathbb{B}} Z$ is a weak equivalence for every cofibrant object $Z$ in~${\catsymbfont{M}}$. \end{defn}
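Unwinding the definition, the map ${\mathbb{A}} Z\to {\mathbb{B}} Z$ is the coproduct over $m\geq 0$ of the maps \[ {\opsymbfont{A}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)}\to {\opsymbfont{B}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)} \] induced by $f$, using the usual formula for the monad associated to an operad in terms of its homogeneous pieces.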
Though we have not put enough hypotheses on ${\catsymbfont{M}}$ to ensure it, in practice countable coproducts of reasonable objects in ${\catsymbfont{M}}$ will preserve and reflect weak equivalences and then $f$ will be a derived monad equivalence if and only if each of the maps \[ {\opsymbfont{A}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)}\to {\opsymbfont{B}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)} \] is a weak equivalence. In our examples of main interest, we have more intrinsic sufficient conditions for a map of operads to be a derived monad equivalence.
\begin{example}\label{ex:dmeq1} In the category of spaces (or more generally, any topological or simplicial model category), a map of operads $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ that induces an equivariant homotopy equivalence ${\opsymbfont{A}}(m)\to {\opsymbfont{B}}(m)$ for all $m$ is a derived monad equivalence. Indeed, the map ${\mathbb{A}} Z\to {\mathbb{B}} Z$ is a homotopy equivalence for all $Z$ (and a homotopy equivalence in a topological or simplicial model category is a weak equivalence). As a special case, when $\overline{{\opsymbfont{A}}}$ is a non-symmetric operad with $\overline{{\opsymbfont{A}}}(m)$ contractible for all $m$, the map of operads ${\opsymbfont{A}}\to {\opsymbfont{A}}{\mathrm{ss}}$ is a derived monad equivalence. \end{example}
\begin{example}\label{ex:dmeq2} In the category of symmetric spectra (of spaces or simplicial sets) with its positive stable model structure or the category of orthogonal spectra with its positive model structure, a map of operads $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ that induces a (non-equivariant) weak equivalence ${\opsymbfont{A}}(n)\to {\opsymbfont{B}}(n)$ for all $n$ is a derived monad equivalence. This can be proved by generalizing the argument of~\cite[15.5]{MMSS} (see~\cite[8.3.(i)]{ChadwickMandell} for slightly more details). In the case of EKMM $S$-modules, if $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ is a map of operads of spaces that is a (non-equivariant) homotopy equivalence ${\opsymbfont{A}}(n)\to {\opsymbfont{B}}(n)$ for all $n$, then $\Sigma^{\infty}_{+}f$ is a derived monad equivalence. This can be proved by generalizing the argument of~\cite[III.5.1]{EKMM}. (See~\cite[8.3.(ii)]{ChadwickMandell} for a more general statement.) In particular, in these categories, the augmentation map ${\opsymbfont{E}}\to {\opsymbfont{C}}{\mathrm{om}}$ for an $E_{\infty}$ operad (assumed to come from spaces in the EKMM $S$-module case) is a derived monad equivalence. \end{example}
\begin{example}\label{ex:dmeq3} In the context of chain complexes of $R$-modules, a map of operads ${\opsymbfont{A}}\to {\opsymbfont{B}}$ that is an $R[\Sigma_{n}]$-module chain homotopy equivalence ${\opsymbfont{A}}(n)\to {\opsymbfont{B}}(n)$ for all $n$ is a derived monad equivalence. If the functors ${\opsymbfont{A}}(n)\otimes_{R[\Sigma_{n}]}(-)$ and ${\opsymbfont{B}}(n)\otimes_{R[\Sigma_{n}]}(-)$ preserve exactness of (homologically) bounded-below exact sequences of $R$-free $R[\Sigma_{n}]$-modules (for all $n$), then a weak equivalence ${\opsymbfont{A}}\to {\opsymbfont{B}}$ is a derived monad equivalence. This occurs in particular for part~(c) of Example Theorem~\ref{ethm:model} when ${\opsymbfont{A}}$ and ${\opsymbfont{B}}$ both satisfy the stated operad hypotheses. \end{example}
To go with these examples, we have the following example theorem.
\begin{ethm}\label{ethm:rect} Let ${\catsymbfont{M}}$ be a symmetric monoidal category and $f\colon {\opsymbfont{A}}\to {\opsymbfont{B}}$ a map of operads in ${\catsymbfont{M}}$, where ${\catsymbfont{M}}$, ${\opsymbfont{A}}$, and ${\opsymbfont{B}}$ fall into one of the examples of Example Theorem~\ref{ethm:model}.(a)-(c). If $f$ is a derived monad equivalence then the Quillen adjunction $P_{f}\colon {\catsymbfont{M}}[{\opsymbfont{A}}]\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}[{\opsymbfont{B}}]\;{:}\, U_{f}$ is a Quillen equivalence. \end{ethm}
Again, as in the previous section, this is an ``example theorem'' in that it gives an example of the kind of theorem that holds much more generally with a proof that can also be adapted to work much more generally. We outline the proof after the change of categories theorem below, as the arguments for both are quite similar.
In terms of change of categories, one should expect comparison theorems of the following form to hold quite generally: \begin{quotation} Let \[ L\colon {\catsymbfont{M}}\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}'\;{:}\, R \] be a Quillen equivalence between monoidal model categories with $L$ strong symmetric monoidal, and let ${\opsymbfont{O}}$ be an operad in ${\catsymbfont{M}}$. With some technical hypotheses, the adjunction \[ L\colon {\catsymbfont{M}}[{\opsymbfont{O}}]\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}'[L{\opsymbfont{O}}]\;{:}\, R \] on operadic algebra categories is also a Quillen equivalence. \end{quotation} A minimal technical hypothesis is that $L{\opsymbfont{O}}$ be ``the right thing'', and an easy way to ensure this is to put some kind of cofibrancy condition on the objects ${\opsymbfont{O}}(n)$. In our cases of interest, we could certainly state such a theorem, but it would not cover the example in modern categories of spectra when ${\opsymbfont{O}}$ is the suspension spectrum functor applied to an operad of spaces; for such an operad, the spectra ${\opsymbfont{O}}(n)$ will not be cofibrant. On the other hand, in these examples the right adjoint preserves all weak equivalences and not just weak equivalences between fibrant objects; in this setup it seems more convenient to consider an operad ${\opsymbfont{O}}'$ in ${\catsymbfont{M}}'$ and a map of operads ${\opsymbfont{O}}\to R{\opsymbfont{O}}'$ (or equivalently, $L{\opsymbfont{O}}\to {\opsymbfont{O}}'$) that induces a weak equivalence \[ {\mathbb{O}} Z\to R({\mathbb{O}}' LZ) \] for all cofibrant objects $Z$ of ${\catsymbfont{M}}$. We state such a theorem for our examples of interest.
\begin{ethm}\label{ethm:compare} Let $L\colon {\catsymbfont{M}}\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}'\;{:}\, R$ be one of the Quillen adjunctions of symmetric monoidal categories listed below. Let ${\opsymbfont{A}}$ be an operad in ${\catsymbfont{M}}$, let ${\opsymbfont{B}}$ be an operad in ${\catsymbfont{M}}'$, and let $f\colon {\opsymbfont{A}}\to R{\opsymbfont{B}}$ be a map of operads that induces a weak equivalence \[ {\mathbb{A}} Z\to R({\mathbb{B}} LZ) \] for all cofibrant objects $Z$ of ${\catsymbfont{M}}$. Then the induced Quillen adjunction \[ P_{L,f}\colon {\catsymbfont{M}}[{\opsymbfont{A}}]\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}'[{\opsymbfont{B}}]\;{:}\, U_{R,f} \] is a Quillen equivalence. This theorem holds in particular in the examples: \begin{enumerate}\deflist \item ${\catsymbfont{M}}$ is the category of simplicial sets (with the usual model structure) or the category of symmetric spectra of simplicial sets, ${\catsymbfont{M}}'$ is the category of spaces or the category of symmetric spectra in spaces (resp.), and $L,R$ is the geometric realization, singular simplicial set adjunction. \item ${\catsymbfont{M}}$ is the category of symmetric spectra, ${\catsymbfont{M}}'$ is the category of orthogonal spectra and $L,R$ is the prolongation, restriction adjunction of~\cite[p.~442]{MMSS}. \item ${\catsymbfont{M}}$ is the category of symmetric spectra or orthogonal spectra, ${\catsymbfont{M}}'$ is the category of EKMM $S$-modules, and $L,R$ is the adjunction of \cite{Schwede-EKMM} or \cite[I.1.1]{MMM}. \end{enumerate} \end{ethm}
As indicated in the paragraph above the statement, the statement takes advantage of the fact that in the examples being considered in this section, the right adjoint preserves all weak equivalences; a general statement for other examples should use a fibrant replacement for ${\mathbb{B}} LZ$ in place of ${\mathbb{B}} LZ$. The proof sketch below also takes advantage of this property of the right adjoint. In generalizing the argument to the case when fibrant replacement is required in the statement, the fibrant replacement of the filtration can be performed in ${\catsymbfont{M}}'$.
The proof of the theorems above uses the homogeneous filtration on a pushout of the form $A'=A\amalg^{{\catsymbfont{M}}[{\opsymbfont{O}}]}_{{\mathbb{O}} X}{\mathbb{O}} Y$ studied in Section~\ref{sec:limit}. This is the $m=0$ case of the filtration on the enveloping operad ${\opsymbfont{U}}^{{\opsymbfont{O}}}_{A'}$, and we will need to use the filtration on the whole operad for an inductive argument even though we are only interested in the $m=0$ case in the end. We will use uniform notation in the sketch proof that follows, taking ${\catsymbfont{M}}'={\catsymbfont{M}}$ with adjoint functors $L$ and $R$ to be the identity in the case of Example Theorem~\ref{ethm:rect}. We use the notation $I$ for the preferred set of generators for the cofibrations of ${\catsymbfont{M}}$ (as in Section~\ref{sec:model}).
Because fibrations and weak equivalences in the algebra categories are created in the underlying symmetric monoidal categories, the adjunction $P_{L,f}$, $U_{R,f}$ is automatically a Quillen adjunction (as indicated already in the statements), and we just have to prove that the unit of the adjunction \begin{equation}\label{eq:uniteq} A\to U_{R,f}(P_{L,f}A) \end{equation} is a weak equivalence for any cofibrant ${\opsymbfont{A}}$-algebra $A$. Every cofibrant ${\opsymbfont{A}}$-algebra is the retract of an ${\mathbb{A}} I$-cell ${\opsymbfont{A}}$-algebra, and so it suffices to consider the case when $A$ is an ${\mathbb{A}} I$-cell ${\opsymbfont{A}}$-algebra; then $A=\colim A_{n}$ where $A_{0}={\opsymbfont{A}}(0)$ and each $A_{n+1}$ is formed from $A_{n}$ by cell attachment (of possibly an infinite coproduct of cells). As mentioned parenthetically in Section~\ref{sec:model} and as we shall see below, the underlying maps $A_{n}\to A_{n+1}$ are nice enough that $A$ is the homotopy colimit (in ${\catsymbfont{M}}$ or ${\catsymbfont{M}}[{\opsymbfont{A}}]$) of the system of the finite stages $A_{n}$ (and likewise for $P_{L,f}A$, which is a cell ${\mathbb{B}} LI$-algebra with stages $P_{L,f}A_{n}$). Thus, it will be enough to see that~\eqref{eq:uniteq} is a weak equivalence for each $A_{n}$. By the hypothesis of the theorem, we know that this holds for $A_{0}$ (which is the free ${\opsymbfont{A}}$-algebra on the initial object of ${\catsymbfont{M}}$); moreover, as the enveloping operad of $A_{0}$ is ${\opsymbfont{A}}$ and the enveloping operad of $P_{L,f}A_{0}$ is ${\opsymbfont{B}}$, we can assume as an inductive hypothesis that \[ {\mathbb{U}}^{{\opsymbfont{A}}}_{A_{n}}Z\to R({\mathbb{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n}}LZ) \] is a weak equivalence for all cofibrant $Z$; in other words, we can assume by induction that the hypothesis of the theorem holds for the map of enveloping operads ${\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}\to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n}})$. It then suffices to prove that the hypothesis of the theorem holds for the map of enveloping operads ${\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}\to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n+1}})$; this is because in the categories ${\catsymbfont{M}}$ and ${\catsymbfont{M}}'$ of the examples, countable coproducts preserve and reflect weak equivalences and the unit map $A_{n+1}\to U_{R,f}(P_{L,f}A_{n+1})$ is the restriction of the map of monads to the homogeneous degree zero summand (at least in the homotopy category of ${\catsymbfont{M}}$).
To prove this, let $X\to Y$ be the coproduct of maps in $I$ such that $A_{n+1}=A_{n}\amalg^{{\catsymbfont{M}}[{\opsymbfont{A}}]}_{{\mathbb{A}} X}{\mathbb{A}} Y$ and consider the filtration on ${\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}(m)$ and ${\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n+1}}(m)$ studied in Section~\ref{sec:limit}. We note that the induction hypothesis on $A_{n}$ also implies that the map \begin{multline*} {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}(m) \mathbin{\square}_{\Sigma_{m_{1}}\times \dotsb \times \Sigma_{m_{i}}} (Z_{1}^{(m_{1})}\mathbin{\square} \dotsb \mathbin{\square} Z^{(m_{i})}_{i}) \\\to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n}}(m) \mathbin{\square}_{\Sigma_{m_{1}}\times \dotsb \times \Sigma_{m_{i}}} (LZ_{1}^{(m_{1})}\mathbin{\square} \dotsb \mathbin{\square} LZ^{(m_{i})}_{i})) \end{multline*} is a weak equivalence for all cofibrant objects $Z_{1},\dotsc,Z_{i}$ (where $m=m_{1}+\dotsb +m_{i}$) as this is a summand of the map \[ {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}(m) \mathbin{\square}_{\Sigma_{m}} (Z_{1}\amalg \dotsb \amalg Z_{i})^{(m)} \to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n}}(m) \mathbin{\square}_{\Sigma_{m}} L(Z_{1}\amalg \dotsb \amalg Z_{i})^{(m)}). \] Looking at the pushout square~\eqref{eq:Ql} that inductively defines $Q^{k}_{i}(X\rightarrow Y)$, a bit of analysis shows that in our example categories the maps $Q^{k}_{i-1}\to Q^{k}_{i}$ are $\Sigma_{k}$-equivariant Hurewicz cofibrations (or in the simplicial categories, maps that geometrically realize to such). It follows that for any cofibrant object $Z$, the maps \begin{multline*} {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}(k+m) \mathbin{\square}_{\Sigma_{k}\times \Sigma_{m}} (Q^{k}_{i-1}(X\rightarrow Y) \mathbin{\square} Z^{(m)})\\\to {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}(k+m) \mathbin{\square}_{\Sigma_{k}\times \Sigma_{m}} ( Q^{k}_{i}(X\rightarrow Y) \mathbin{\square} Z^{(m)}) \end{multline*} are (or geometrically realize to) Hurewicz cofibrations (likewise in ${\catsymbfont{M}}'$) and that the maps \begin{multline*} {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n}}(k+m) \mathbin{\square}_{\Sigma_{k}\times \Sigma_{m}} (Q^{k}_{i}(X\rightarrow Y)\mathbin{\square} Z^{(m)})\\\to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n}}(k+m) \mathbin{\square}_{\Sigma_{k}\times \Sigma_{m}} (Q^{k}_{i}(LX\rightarrow LY)\mathbin{\square} LZ^{(m)})) \end{multline*} are weak equivalences. Now the pushout square~\eqref{eq:poenv} shows that for any cofibrant object $Z$, at each filtration level $k$, the map \[ F^{k-1}{\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)}\to F^{k}{\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)} \] is (or geometrically realizes to) a Hurewicz cofibration (likewise in ${\catsymbfont{M}}'$) and that the maps \[ F^{k}{\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)} \to R(F^{k}{\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}LZ^{(m)}) \] are weak equivalences. The colimit is then weakly equivalent to the homotopy colimit and we get a weak equivalence \[ {\opsymbfont{U}}^{{\opsymbfont{A}}}_{A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)} \to R({\opsymbfont{U}}^{{\opsymbfont{B}}}_{P_{L,f}A_{n+1}}(m)\mathbin{\square}_{\Sigma_{m}}LZ^{(m)}), \] completing the induction and the sketch proof of Example Theorems~\ref{ethm:rect} and~\ref{ethm:compare}.
The argument above proved the comparison theorems by proving equivalences of enveloping operads. Since the unary part of the enveloping operad is the enveloping algebra, we also get module category comparison results. We state this as the following corollary, which says that as long as the algebras are cofibrant, changing categories by Quillen equivalences and the algebras by derived monad equivalences results in Quillen equivalent categories of modules.
\begin{cor}\label{cor:modcomp} Let $L\colon {\catsymbfont{M}}\xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{M}}'\;{:}\, R$ be one of the Quillen adjunctions of symmetric monoidal categories in Example Theorem~\ref{ethm:compare} or the identity functor adjunction on one of the categories in Example Theorem~\ref{ethm:rect}. Let $f\colon {\opsymbfont{A}}\to R{\opsymbfont{B}}$ be a map of operads that induces a weak equivalence ${\mathbb{A}} Z\to R({\mathbb{B}} LZ)$ for all cofibrant objects $Z$, and let $g\colon A\to RB$ be a weak equivalence of ${\opsymbfont{A}}$-algebras for an ${\opsymbfont{A}}$-algebra $A$ and a ${\opsymbfont{B}}$-algebra $B$. If $A$ and $B$ are cofibrant (in ${\catsymbfont{M}}[{\opsymbfont{A}}]$ and ${\catsymbfont{M}}'[{\opsymbfont{B}}]$, respectively), then $f$ and $g$ induce a Quillen equivalence of the category of $({\opsymbfont{A}},A)$-modules and the category of $({\opsymbfont{B}},B)$-modules. \end{cor}
\begin{proof}[Sketch proof] The argument above shows that under the given hypotheses, the map of $\mathbin{\square}$-monoids $U^{{\opsymbfont{A}}}A\to R(U^{{\opsymbfont{B}}}B)$ is a weak equivalence. The left and right adjoint functors in the Quillen adjunction on module categories are given by $U^{{\opsymbfont{B}}}B\mathbin{\square}_{L(U^{{\opsymbfont{A}}}A)}L(-)$ and $R$, respectively. These both preserve coproducts, homotopy cofiber sequences, and sequential homotopy colimits up to weak equivalence. It follows that the unit of the adjunction $X\to R(U^{{\opsymbfont{B}}}B\mathbin{\square}_{L(U^{{\opsymbfont{A}}}A)}LX)$ is a weak equivalence for every cofibrant $A$-module~$X$. \end{proof}
The analogous result also holds for modules over algebras on non-symmetric operads, proved by essentially the same filtration argument: we have a non-symmetric version $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}_{A}(m)$ of Construction~\ref{cons:eop}. In this case, the resulting objects do not assemble into an operad; nevertheless, $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}_{A}(1)$ still has the structure of a $\mathbin{\square}$-monoid and coincides with the (non-symmetric) enveloping algebra $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A$. The non-symmetric analogue of~\eqref{eq:poenv} holds, and the filtration argument (under the hypotheses of the previous corollary) proves that the map $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}}}A\to R(\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{B}}}}B)$ is a weak equivalence of $\mathbin{\square}$-monoids. We conclude that the unit map $X\to R(\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{B}}}}B\mathbin{\square}_{L\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}}}A}LX)$ is a weak equivalence for every cofibrant $A$-module $X$.
\section{Enveloping Algebras, Moore Algebras, and Rectification} \label{sec:rectass}
In the special case of Example~\ref{ex:dmeq1}, Example Theorem~\ref{ethm:rect} gives an equivalence of the homotopy category of $A_{\infty}$ algebras (over a given $A_{\infty}$ operad) with the homotopy category of associative algebras, in particular constructing an associative algebra rectification of an $A_{\infty}$ algebra. We know another way to construct an associative algebra from an $A_{\infty}$ algebra, namely the (non-symmetric) enveloping algebra. In the case when the $A_{\infty}$ operad is the operad of little $1$-cubes $\overline{{\opsymbfont{C}}}_{1}$, there is also a classical rectification called the Moore algebra. The purpose of this section is to compare these constructions.
We first consider the rectification of Example Theorem~\ref{ethm:rect} and the non-symmetric enveloping algebra. Let $\overline{{\opsymbfont{O}}}$ be a non-symmetric operad and $\epsilon \colon \overline{{\opsymbfont{O}}}\to \overline{{\opsymbfont{A}}{\mathrm{ss}}}$ a weak equivalence. Under the hypotheses of Example Theorem~\ref{ethm:rect}, the rectification (change of operads) functor $P_{\epsilon}$ associated to $\epsilon$ gives a $\mathbin{\square}$-monoid $P_{\epsilon}A$ and a map of $\overline{{\opsymbfont{O}}}$-algebras $A\to P_{\epsilon}A$ that is a weak equivalence when $A$ is cofibrant. As part of the proof of Example Theorem~\ref{ethm:rect}, we get a weak equivalence of enveloping operads \[ {\opsymbfont{U}}^{{\opsymbfont{O}}}_{A}\to {\opsymbfont{U}}^{{\opsymbfont{A}}{\mathrm{ss}}}_{P_{\epsilon}A}. \] As mentioned at the end of the previous section, the non-symmetric version of this argument also works to give a weak equivalence of $\mathbin{\square}$-monoids \[ \overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A\to \overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}{\mathrm{ss}}}}(P_{\epsilon}A). \] Moreover, in the case of the associative algebra operad $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$, we have a natural isomorphism of $\mathbin{\square}$-monoids $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}{\mathrm{ss}}}}M\to M$ for any $\mathbin{\square}$-monoid $M$. Putting this together, we get the following theorem.
\begin{thm}\label{thm:Uwk} Let ${\catsymbfont{M}}$ be a symmetric monoidal category and ${\opsymbfont{O}}$ an $A_{\infty}$ operad that fall into one of the examples of Theorem~\ref{ethm:model}.(a)-(c). Write $\epsilon\colon \overline{{\opsymbfont{O}}}\to \overline{{\opsymbfont{A}}{\mathrm{ss}}}$ for the weak equivalence identifying ${\opsymbfont{O}}$ as an $A_{\infty}$ operad. If $A$ is a cofibrant ${\opsymbfont{O}}$-algebra then the natural maps \[ A\to P_{\epsilon}A\cong \overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{A}}{\mathrm{ss}}}}P_{\epsilon}A\from \overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{O}}}}A \] are weak equivalences of ${\opsymbfont{O}}$-algebras. \end{thm}
We now focus on $A_{\infty}$ algebras for the little $1$-cubes operad $\overline{{\opsymbfont{C}}}_{1}$, where we can describe results both more concretely and in much greater generality. For the rest of the section we work in the context of a symmetric monoidal category enriched over topological spaces as in Section~\ref{sec:enrich}: let ${\catsymbfont{M}}$ be a closed symmetric monoidal category with countable colimits, and let $L\colon {\catsymbfont{S}}\to {\catsymbfont{M}}$ be a strong symmetric monoidal left adjoint functor (whose right adjoint we denote as $R$). Then as per Theorem~\ref{thm:enrich}, ${\catsymbfont{M}}$ becomes enriched over topological spaces and we have a notion of homotopies and homotopy equivalences in ${\catsymbfont{M}}$, defined in terms of mapping spaces or equivalently in terms of tensor with the unit interval. We also have $L\overline{{\opsymbfont{C}}}_{1}$ as a non-symmetric operad in ${\catsymbfont{M}}$; for an $L\overline{{\opsymbfont{C}}}_{1}$-algebra $A$, we give a concrete construction of the enveloping algebra $\overline U\vrule height1em depth0pt width0pt A$, mostly following~\cite[\S2]{Smash}. We first write the formulas and then explain where they come from.
\begin{cons}\cite[\S2]{Smash} Let $\bar D$ be the space of subintervals of $[0,1]$ and let $D$ be the subspace of $\bar D$ of those intervals that do not start at $0$. We have a canonical isomorphism $\bar D\cong \overline{{\opsymbfont{C}}}_{1}(1)$ (sending a subinterval to the $1$-tuple containing it) that we elide notation for. Under this isomorphism, the composition law $\Gamma^{1}_{1}$ defines a pairing $\gamma \colon \bar D\times \bar D\to \bar D$ that satisfies the formula \[ \gamma ([x,y],[x',y'])=[x+(y-x)x',x+(y-x)y']. \] We note that $\gamma$ restricts to a pairing $D\times D\to D$ and for formal reasons $\gamma$ is associative \begin{multline*} \gamma (\gamma ([x,y],[x',y']),[x'',y''])=\\ [x+(y-x)x'+(y-x)(y'-x')x'',x+(y-x)x'+(y-x)(y'-x')y'']\\ =\gamma([x,y],\gamma ([x',y'],[x'',y''])) \end{multline*} and unital \[ \gamma ([0,1],[x,y])=[x,y]=\gamma([x,y],[0,1]), \] making $\bar D$ a topological monoid and $D$ a sub-semi-group. Define $\alpha \colon D\times D\to \overline{{\opsymbfont{C}}}_{1}(2)$ by \[ \alpha([x,y],[x',y'])=([0,\tfrac{x}{x+(y-x)x'}], [\tfrac{x}{x+(y-x)x'},1]). \] Let $DA$ be the object of ${\catsymbfont{M}}$ defined by the following pushout diagram \[ \xymatrix@-1pc{ LD \mathbin{\square} \mathbf{1} \ar[d]\ar[r]&LD \mathbin{\square} A\ar[d]\\ L\bar D\mathbin{\square} \mathbf{1}\ar[r]&DA } \] where the top map is induced by the composite of the isomorphism $\mathbf{1}\cong L\overline{{\opsymbfont{C}}}_{1}(0)$ (from the strong symmetric monoidal structure on $L$) and the $L\overline{{\opsymbfont{C}}}_{1}$-action map $L\overline{{\opsymbfont{C}}}_{1}(0)\to A$. We use $\gamma$ and $\alpha$ to define a multiplication on $DA$ as follows. We use the map \[ (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A)\to LD\mathbin{\square} A\to DA \] coming from the map \begin{multline*} (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A)\cong L(D\times D)\mathbin{\square} (A\mathbin{\square} A) \to\\ L(D\times \overline{{\opsymbfont{C}}}_{1}(2))\mathbin{\square} (A\mathbin{\square} A) \cong LD \mathbin{\square} (L\overline{{\opsymbfont{C}}}_{1}(2)\mathbin{\square} (A\mathbin{\square} A)) \to LD\mathbin{\square} A \end{multline*} induced by the map $(\gamma,\alpha)\colon D\times D\to D\times \overline{{\opsymbfont{C}}}_{1}(2)$ and the $L\overline{{\opsymbfont{C}}}_{1}$-action map on $A$. We note that both associations \[ (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A)\to LD\mathbin{\square} A \] coincide: both factor through the map \[ (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} A) \cong L(D\times D\times D)\mathbin{\square} A^{(3)}\to L(D\times \overline{{\opsymbfont{C}}}_{1}(3))\mathbin{\square} A^{(3)} \] induced by the map $D\times D\times D\to D\times \overline{{\opsymbfont{C}}}_{1}(3)$ given on the $D$ factor as $\gamma \circ (\gamma \times \id)=\gamma \circ (1\times \gamma)$ and on the $\overline{{\opsymbfont{C}}}_{1}(3)$ factor by the formula \[ [x,y],[x',y'],[x'',y'']\mapsto ([0,a],[a,b],[b,1]) \] where \[ a=\frac{x}{x+(y-x)(x'+(y'-x')x'')}, \qquad b=\frac{x+(y-x)x'}{x+(y-x)(x'+(y'-x')x'')}. 
\] When restricted to maps \[ (LD\mathbin{\square} \mathbf{1})\mathbin{\square} (LD\mathbin{\square} A), (LD\mathbin{\square} A)\mathbin{\square} (LD\mathbin{\square} \mathbf{1})\to DA, \] this map coincides with the map induced by just $\gamma$ and the unit isomorphism of ${\catsymbfont{M}}$ and so extends to compatible maps \begin{align*} (L\bar D \mathbin{\square} \mathbf{1})\mathbin{\square} (L\bar D \mathbin{\square} \mathbf{1})&\to DA\\ (L\bar D \mathbin{\square} \mathbf{1})\mathbin{\square} (LD \mathbin{\square} A)&\to DA\\ (LD \mathbin{\square} A)\mathbin{\square} (L\bar D \mathbin{\square} \mathbf{1})&\to DA \end{align*} and defines an associative multiplication on $DA$. The map $\mathbf{1}\to DA$ induced by the inclusion of the element $[0,1]$ of $\bar D$ is a unit for this multiplication. \end{cons}
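To illustrate the formulas on a concrete instance (the numbers are chosen only for illustration): for $[x,y]=[\tfrac{1}{2},1]$ and $[x',y']=[\tfrac{1}{4},\tfrac{3}{4}]$ in $D$, \[ \gamma([\tfrac{1}{2},1],[\tfrac{1}{4},\tfrac{3}{4}])=[\tfrac{5}{8},\tfrac{7}{8}] \qquad\text{and}\qquad \alpha([\tfrac{1}{2},1],[\tfrac{1}{4},\tfrac{3}{4}])=([0,\tfrac{4}{5}],[\tfrac{4}{5},1]), \] so in the product the new element of $D$ is $[\tfrac{5}{8},\tfrac{7}{8}]$ and the two copies of $A$ are multiplied using the element $([0,\tfrac{4}{5}],[\tfrac{4}{5},1])$ of $\overline{{\opsymbfont{C}}}_{1}(2)$ (compare the discussion of the multiplication in terms of boxes below).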
To understand the construction, it is useful to think of $D$ as a subspace of $\overline{{\opsymbfont{C}}}_{1}(2)$ rather than a subspace of $\overline{{\opsymbfont{C}}}_{1}(1)$ via the embedding \[ [x,y]\mapsto ([0,x],[x,y]). \] Then we have a map $DA\to \overline U\vrule height1em depth0pt width0pt A$ sending $L\bar D\mathbin{\square} \mathbf{1}$ and $LD\mathbin{\square} A$ to the 0 and 1 summands \[ L\bar D\mathbin{\square} \mathbf{1}\cong L\overline{{\opsymbfont{C}}}_{1}(1)\mathbin{\square} A^{(0)}\qquad \text{and}\qquad LD\mathbin{\square} A\to L\overline{{\opsymbfont{C}}}_{1}(2)\mathbin{\square} A \] in the coequalizer~\eqref{eq:nUA} for $\overline U\vrule height1em depth0pt width0pt A$. We also have a map back that sends the summand $L\overline{{\opsymbfont{C}}}_{1}(n+1)\mathbin{\square} A^{(n)}$ (for $n\geq 1$) to $LD\mathbin{\square} A$ by remembering just the last interval and using the rest to do the multiplication on $A$; specifically, for $[x_{1},y_{1}],\dotsc,[x_{n+1},y_{n+1}]$, we use the element of $\overline{{\opsymbfont{C}}}_{1}(n)$ corresponding to \[ [\tfrac{x_{1}}{x_{n+1}},\tfrac{y_{1}}{x_{n+1}}],\dotsc,[\tfrac{x_{n}}{x_{n+1}},\tfrac{y_{n}}{x_{n+1}}] \] for the map $A^{(n)}\to A$. It is straightforward to check that these give inverse isomorphisms of objects of ${\catsymbfont{M}}$; see \cite[2.5]{Smash}.
The isomorphism of the previous paragraph then forces the formula for the multiplication. Intuitively speaking, the first box in $D$ (viewed as a subset of $\overline{{\opsymbfont{C}}}_{1}(2)$) holds the algebra (from the tensor) and the second box is a placeholder to plug in the module variable; the complement $\bar D \setminus D$ corresponds to the first box having length zero and then only the unit of the algebra can go there. For the composition, the right copy gets plugged into the second box of the left copy to give an element of $\overline{{\opsymbfont{C}}}_{1}(3)$ (i.e., the operadic composition $\ell\circ_{2}r=\Gamma^{2}_{1,2}(\ell;1,r)$ where $\ell$ is the element of the left copy of $D$ and $r$ is the element of the right copy of $D$); the first and second boxes are, on the one hand, rescaled to an element of $\overline{{\opsymbfont{C}}}_{1}(2)$ that does the multiplication on the copies of $A$ and, on the other hand, joined so that, together with the third box, they give the new element of $D$ (viewed as a subspace of $\overline{{\opsymbfont{C}}}_{1}(2)$). The associativity is straightforward to visualize in terms of plugging in boxes when written down on paper. (See Section~2 of~\cite{Smash}.) In the case when one of the elements comes from $\bar D \setminus D$, the corresponding copy of $A$ is restricted to the unit $\mathbf{1}$ and the first box of zero length also works like a unit.
Using the isomorphism of $\mathbin{\square}$-monoids $\overline U\vrule height1em depth0pt width0pt A\cong DA$, we now have the following comparison theorem for the underlying objects of $\overline U\vrule height1em depth0pt width0pt A$ and $A$.
\begin{prop}\label{prop:UAhty}\cite[1.1]{Smash} The map of $\overline U\vrule height1em depth0pt width0pt A$-modules $\overline U\vrule height1em depth0pt width0pt A\cong \mathbf{1}\mathbin{\square} \overline U\vrule height1em depth0pt width0pt A\to A$ induced by the map $\mathbf{1}\cong L\overline{{\opsymbfont{C}}}_{1}(0)\to A$ is a homotopy equivalence of objects of ${\catsymbfont{M}}$. \end{prop}
\begin{proof} In concrete terms, the map in the statement is induced by the map \[ LD\mathbin{\square} A\to L\overline{{\opsymbfont{C}}}_{1}(1)\mathbin{\square} A\to A \] for the map $D\to \overline{{\opsymbfont{C}}}_{1}(1)$ that sends $[x,y]$ to $([0,x])$, which is compatible with the map \[ L\bar D\mathbin{\square} \mathbf{1}\to \mathbf{1}\to A. \] We can use any element of $D$ to produce a map (in ${\catsymbfont{M}}$) from $A$ to $\overline U\vrule height1em depth0pt width0pt A$; a path to the operad identity element $1$ in $\overline{{\opsymbfont{C}}}_{1}(1)$ (which corresponds to $[0,1]\subseteq [0,1]$) then induces a homotopy of the composite map $A\to A$ to the identity map of $A$. We can construct a homotopy from the composite to the identity on $\overline U\vrule height1em depth0pt width0pt A$ using a homotopy of self-maps of $\overline{{\opsymbfont{C}}}_{1}(1)$ from the identity to the constant map on $1$ (combined with the $\overline{{\opsymbfont{C}}}_{1}(1)$ action map on $A$) and a homotopy of self-maps of the pair $(\bar D,D)$ from the constant map (on the chosen element of $D$) to the identity map. For example, if the chosen element of $D$ corresponds to the subinterval $[a,b]$ (with $a\neq 0$) then the linear homotopy \[ [x,y],t \mapsto [xt+a(1-t),yt+b(1-t)] \] is such a homotopy of self-maps of the pair. \end{proof}
In the context of spaces, J.~C.~Moore invented an associative version of the based loop space by parametrizing loops with arbitrary length intervals. This idea extends to the current context to give another even simpler construction of a $\mathbin{\square}$-monoid equivalent (in ${\catsymbfont{M}}$) to an $L\overline{{\opsymbfont{C}}}_{1}$-algebra $A$.
\begin{cons} Define $MA$ to be the object of ${\catsymbfont{M}}$ defined by the pushout diagram \[ \xymatrix@-1pc{ L{\mathbb{R}}^{>0} \mathbin{\square} \mathbf{1} \ar[d]\ar[r]&L{\mathbb{R}}^{>0} \mathbin{\square} A\ar[d]\\ L{\mathbb{R}}^{\geq 0}\mathbin{\square} \mathbf{1}\ar[r]&MA } \] (where ${\mathbb{R}}^{>0}\subset {\mathbb{R}}^{\geq 0}$ are the usual subspaces of positive and non-negative real numbers, respectively). We give this the structure of a $\mathbin{\square}$-monoid with the unit $\mathbf{1}\to MA$ induced by the inclusion of $0$ in ${\mathbb{R}}^{\geq 0}$ and multiplication $MA\mathbin{\square} MA\to MA$ induced by the map \begin{multline*} (L{\mathbb{R}}^{>0} \mathbin{\square} A)\mathbin{\square} (L{\mathbb{R}}^{>0} \mathbin{\square} A) \cong L({\mathbb{R}}^{>0}\times {\mathbb{R}}^{>0})\mathbin{\square} (A\mathbin{\square} A)\\ \to L({\mathbb{R}}^{>0}\times \overline{{\opsymbfont{C}}}_{1}(2))\mathbin{\square} (A\mathbin{\square} A) \cong L{\mathbb{R}}^{>0}\mathbin{\square} (L\overline{{\opsymbfont{C}}}_{1}(2)\mathbin{\square} (A\mathbin{\square} A)) \to L{\mathbb{R}}^{>0}\mathbin{\square} A \end{multline*} induced by the $\overline{{\opsymbfont{C}}}_{1}$-action on $A$ and the map \[ c\colon (r,s)\in {\mathbb{R}}^{>0}\times {\mathbb{R}}^{>0} \mapsto (r+s, ([0,\tfrac{r}{r+s}],[\tfrac{r}{r+s},1])) \in {\mathbb{R}}^{>0}\times \overline{{\opsymbfont{C}}}_{1}(2). \] \end{cons}
The idea is that the element of ${\mathbb{R}}^{>0}$ specifies a length (with the zero length only available for the unit) and the multiplication uses the proportionality of the two lengths to choose an element of $\overline{{\opsymbfont{C}}}_{1}(2)$ for the multiplication on $A$; the two lengths add to give the length in the result. In the case when ${\catsymbfont{M}}$ is the category of spaces and $A=\Omega X$ is the based loop space of a space $X$, $MA$ is the Moore loop space. An element is specified by an element $r$ of ${\mathbb{R}}^{\geq 0}$ together with an element of $\Omega X$ (which must be the basepoint when $r=0$) but can be visualized as a based loop parametrized by $[0,r]$ (or for $r=0$ the constant length zero loop at the basepoint). The multiplication concatenates loops by concatenating the parametrizations, an operation that is strictly associative and unital.
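Concretely, strict associativity can also be read off from the formula for $c$ (a routine check): for lengths $r,s,t\in {\mathbb{R}}^{>0}$, both ways of associating a triple product produce the total length $r+s+t$ and act on $A\mathbin{\square} A\mathbin{\square} A$ through the same element of $\overline{{\opsymbfont{C}}}_{1}(3)$, namely \[ ([0,\tfrac{r}{r+s+t}],[\tfrac{r}{r+s+t},\tfrac{r+s}{r+s+t}],[\tfrac{r+s}{r+s+t},1]), \] obtained either by plugging the $\overline{{\opsymbfont{C}}}_{1}(2)$ component of $c(r,s)$ into the first input of that of $c(r+s,t)$ or by plugging the component of $c(s,t)$ into the second input of that of $c(r,s+t)$; since the action of $\overline{{\opsymbfont{C}}}_{1}$ on $A$ respects operadic composition, the two composites agree.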
We can compare the $\mathbin{\square}$-monoids $MA$ and $\overline U\vrule height1em depth0pt width0pt A$ through a third $\mathbin{\square}$-monoid $NA$ constructed as follows. Let $N={\mathbb{R}}^{> 0}\times {\mathbb{R}}^{>0}\times {\mathbb{R}}^{\geq 0}$, let $\bar N={\mathbb{R}}^{\geq 0}\times {\mathbb{R}}^{>0}\times {\mathbb{R}}^{\geq 0}$, and define $NA$ by the pushout diagram \[ \xymatrix@-1pc{ LN \mathbin{\square} \mathbf{1} \ar[d]\ar[r]&LN \mathbin{\square} A\ar[d]\\ L\bar N\mathbin{\square} \mathbf{1}\ar[r]&NA. } \] We have maps $\bar N\times \bar N\to \bar N$ and $N\times N\to \overline{{\opsymbfont{C}}}_{1}(2)$ defined by \begin{align*} ((r,s,t),(r',s',t'))\in \bar N\times \bar N &\mapsto (r+sr',ss',st'+t) \in \bar N\\ ((r,s,t),(r',s',t'))\in N\times N &\mapsto c(r,sr')=([0,\tfrac{r}{r+sr'}],[\tfrac{r}{r+sr'},1]) \in \overline{{\opsymbfont{C}}}_{1}(2), \end{align*} which we use to construct the multiplication on $NA$ by the same scheme as above \[ (LN\mathbin{\square} A)\mathbin{\square} (LN\mathbin{\square} A)\cong L(N\times N)\mathbin{\square} (A\mathbin{\square} A)\to L(N\times \overline{{\opsymbfont{C}}}_{1}(2))\mathbin{\square} (A\mathbin{\square} A)\to LN\mathbin{\square} A. \] The unit is the map $\mathbf{1}\to NA$ induced by the inclusion of $(0,1,0)$ in $\bar N$.
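As a quick check of the formulas (routine arithmetic), writing $\cdot$ for the pairing on $\bar N$ displayed above: $(0,1,0)$ is a two-sided unit, \[ (0,1,0)\cdot(r',s',t')=(r',s',t') \qquad\text{and}\qquad (r,s,t)\cdot(0,1,0)=(r,s,t), \] and the pairing is associative, both associations of a triple product being \[ (r+sr'+ss'r'',\; ss's'',\; ss't''+st'+t). \]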
The parametrizing space $N=\{(r,s,t)\}$ generalizes $D$ by allowing $[r,s]$ to be a subinterval of $[0,r+s+t]$ instead of $[0,1]$, or from another perspective, generalizes lengths in the definition on the Moore algebra by incorporating a scaling factor $s$ and padding of length $t$. In other words, we have maps \begin{align*} [x,y]\in \bar D&\mapsto (x,y-x,1-y)\in \bar N\\ r\in {\mathbb{R}}^{\geq 0}&\mapsto (r,1,0)\in \bar N. \end{align*} These maps induce maps of $\mathbin{\square}$-monoids $\overline U\vrule height1em depth0pt width0pt A\cong DA\to NA$ and $MA\to NA$, respectively, and the argument of Proposition~\ref{prop:UAhty} shows that these maps are homotopy equivalences in ${\catsymbfont{M}}$. We state this as the following theorem, repeating the conventions of this part of the section for easy reference.
\begin{thm}\label{thm:UNM} Let ${\catsymbfont{M}}$ be a closed symmetric monoidal category admitting countable colimits and enriched over spaces via a strong symmetric monoidal left adjoint functor $L$. Then for algebras over the little $1$-cubes operad ($L\overline{{\opsymbfont{C}}}_{1}$-algebras) the non-symmetric enveloping algebra $\overline U\vrule height1em depth0pt width0pt A$ and the Moore algebra $MA$ fit in a natural zigzag of $\mathbin{\square}$-monoids \[ \overline U\vrule height1em depth0pt width0pt A\to NA\from MA \] where the maps are homotopy equivalences in ${\catsymbfont{M}}$. Moreover, the canonical maps $\overline U\vrule height1em depth0pt width0pt A\to A$ and $MA\to A$ are homotopy equivalences in ${\catsymbfont{M}}$. \end{thm}
To compare $MA$ and $A$ as $A_{\infty}$ algebras, we use a new $A_{\infty}$ operad $\nsCl$ defined as follows.
\begin{cons} Let $\nsCl(0)={\mathbb{R}}^{\geq 0}$ and for $m>0$, let $\nsCl(m)$ be the set of ordered pairs $(S,r)$ with $r$ a positive real number and $S$ a list of $m$ almost non-overlapping closed subintervals of $[0,r]$ in their natural order, topologized analogously as in the definition of $\overline{{\opsymbfont{C}}}_{1}$ (as a semilinear submanifold of ${\mathbb{R}}^{2m+1}$). The operadic composition is defined by scaling and replacement of the subintervals: the basic composition \begin{multline*} \Gamma^{1}_{j}((([x,y]),r),(([x'_{1},y'_{1}],\dotsc,[x'_{j},y'_{j}]),r')) =\\(([x+ax'_{1},x+ay'_{1}],\dotsc,[x+ax'_{j},x+ay'_{j}]),r+a(r'-1)) \end{multline*} (with $a:=y-x$) scales the interval $[0,r']$ to length $ar'$ and inserts that in place of $[x,y]\subset [0,r]$; the resulting final interval then has size $r-a+ar'$. The general composition $\Gamma^{m}_{j_{1},\dotsc,j_{m}}$ does this operation on each of the $m$ subintervals: \begin{multline*} \Gamma^{m}_{j_{1},\dotsc,j_{m}}\colon (([x^{0}_{1},y^{0}_{1}],\dotsc,[x^{0}_{m},y^{0}_{m}]),r_{0}),\\ (([x^{1}_{1},y^{1}_{1}],\dotsc,[x^{1}_{j_{1}},y^{1}_{j_{1}}]),r_{1}), \dotsc, (([x^{m}_{1},y^{m}_{1}],\dotsc,[x^{m}_{j_{m}},y^{m}_{j_{m}}]),r_{m})\\ \mapsto\\ (([x^{0}_{1}+a_{1}x^{1}_{1},x^{0}_{1}+a_{1}y^{1}_{1}],\dotsc, [s_{m-1}+x^{0}_{m}+a_{m}x^{m}_{j_{m}}, s_{m-1}+x^{0}_{m}+a_{m}y^{m}_{j_{m}}]), r_{0}+s_{m}) \end{multline*} where $a_{i}:=y^{0}_{i}-x^{0}_{i}$ and $s_{i}=a_{1}(r_{1}-1)+\dotsb +a_{i}(r_{i}-1)$. In the case when one of the $j_{i}$ is zero, that $j_{i}$ contributes no subintervals but still scales the original subinterval $[x^{0}_{i},y^{0}_{i}]$ to length $a_{i}r_{i}$ (or removes it when $r_{i}=0$). The operad identity element is the element $(([0,1]),1)\in \nsCl(1)$. \end{cons}
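For example (with arbitrarily chosen numbers), the basic composition gives \[ \Gamma^{1}_{1}\bigl((([0,2]),3),(([1,2]),4)\bigr)=(([2,4]),9): \] here $a=2$, so the interval $[0,4]$ is scaled to length $8$ and inserted in place of $[0,2]\subset[0,3]$, giving an ambient interval of length $3-2+8=9$, and the marked subinterval $[1,2]\subset[0,4]$ is scaled and placed as $[2,4]$.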
The maps $\overline{{\opsymbfont{C}}}_{1}(m)\to \nsCl(m)$ that include $\overline{{\opsymbfont{C}}}_{1}(m)$ as the length $1$ subspace assemble to a map of operads $i\colon \overline{{\opsymbfont{C}}}_{1}\to \nsCl$. We also have a map of operads $j\colon \overline{{\opsymbfont{A}}{\mathrm{ss}}}\to \nsCl$ induced by sending the unique element of $\overline{{\opsymbfont{A}}{\mathrm{ss}}}(m)$ to the element \[ (([0,1],[1,2],\dotsc,[m-1,m]),m) \] of $\nsCl(m)$. Using the map $j$, an $L\nsCl$-algebra has the underlying structure of a $\mathbin{\square}$-monoid. A straightforward check of universal properties proves the following proposition.
\begin{prop} The functor that takes a $\overline{{\opsymbfont{C}}}_{1}$-algebra $A$ to its Moore algebra $MA$ is naturally isomorphic to the functor that takes $A$ to the underlying $\mathbin{\square}$-monoid of the pushforward $P_{Li}A$ for the map of operads $Li\colon L\overline{{\opsymbfont{C}}}_{1}\to L\nsCl$. \end{prop}
The $\nsCl$-action map $L\nsCl(m)\mathbin{\square} (MA)^{(m)}\to MA$ is induced by the map \[ \nsCl(m)\times ({\mathbb{R}}^{>0})^{m}\to \nsCl(m)\times \nsCl(1)^{m} \overto{\Gamma^{m}_{1,\dotsc,1}} \nsCl(m)\cong {\mathbb{R}}^{>0}\times \overline{{\opsymbfont{C}}}_{1}(m) \] that includes ${\mathbb{R}}^{>0}$ in $\nsCl(1)$ by $r\mapsto (([0,r]),r)$, where the isomorphism is the map that takes an element $(([x_{1},y_{1}],\dotsc,[x_{m},y_{m}]),r)$ of $\nsCl(m)$ to the element $(r,([x_{1}/r,y_{1}/r],\dotsc,[x_{m}/r,y_{m}/r]))$ of ${\mathbb{R}}^{>0}\times \overline{{\opsymbfont{C}}}_{1}(m)$.
The map of $\overline{{\opsymbfont{C}}}_{1}$-algebras that is the unit of the change of operads adjunction $A\to P_{Li}A$ is induced by the inclusion of $1$ in ${\mathbb{R}}^{>0}$ and is a homotopy equivalence by a (simpler) version of the homotopy argument of Proposition~\ref{prop:UAhty}. We do not see how to do a similar argument for the pushforward $P_{Lj}$ from $\mathbin{\square}$-monoids to $\nsCl$-algebras, so we do not have a direct comparison of $\overline{{\opsymbfont{C}}}_{1}$-algebras between $A$ (or $P_{Li}A$) and $MA$ with the $\overline{{\opsymbfont{C}}}_{1}$-algebra structure inherited from its $\mathbin{\square}$-monoid structure without some kind of rectification result (such as Example Theorem~\ref{ethm:rect}) comparing the category of $L\nsCl$-algebras with the category of $\overline{{\opsymbfont{A}}{\mathrm{ss}}}$-algebras.
The argument in~\cite[2.5]{Smash} that identifies $\overline U\vrule height1em depth0pt width0pt^{\overline{{\opsymbfont{C}}}_{1}}A$ as $DA$ generalizes to identify $\overline U\vrule height1em depth0pt width0pt^{\nsCl}P_{Li}A$ as $NA$; the maps in Theorem~\ref{thm:UNM} can then be viewed as the natural maps on enveloping algebras induced by maps of operads and maps of algebras.
\iffalse
\section{Operads and the Plethysm Product of Symmetric Sequences} \label{sec:def2}
Taking the relationship between operads and monads as primary leads to another formulation of the definition of operad and the definition of operadic algebra. Philosophically (and mathematically in some symmetric monoidal categories ${\catsymbfont{M}}$ such as sets, spaces, abelian groups, chain complexes, and others), operads can be viewed as monads with the extra structure of a decomposition into homogeneous pieces (the ${\opsymbfont{O}}(m)\mathbin{\square}_{\Sigma_{m}}X^{(m)}$'s). From this perspective, \textit{symmetric sequences} correspond to functors. (Throughout this section, we assume that the symmetric monoidal category ${\catsymbfont{M}}$ has countable colimits and that $\mathbin{\square}$ commutes with colimits in each variable.)
\begin{defn}\label{def:symseq} A \term{symmetric sequence} ${\opsymbfont{X}}$ consists of objects ${\opsymbfont{X}}(m)$,\break for $m=0,1,2,\dotsc$, together with a right $\Sigma_{m}$-action on ${\opsymbfont{X}}(m)$ for each $m$. A map of symmetric sequences ${\opsymbfont{X}}\to {\opsymbfont{X}}'$ consists of $\Sigma_{m}$-equivariant maps ${\opsymbfont{X}}(m)\to {\opsymbfont{X}}'(m)$ for all $m$. \end{defn}
Associated to a symmetric sequence ${\opsymbfont{X}}$, we have an endofunctor ${\mathbb{F}}_{{\opsymbfont{X}}}$ of ${\catsymbfont{M}}$ defined by \[ {\mathbb{F}}_{{\opsymbfont{X}}}(Z)=\myop\coprod_{m=0}^{\infty}{\opsymbfont{X}}(m)\mathbin{\square}_{\Sigma_{m}}Z^{(m)}. \] We have used a different notation than in~\ref{notn:monad} to be able to treat ${\opsymbfont{X}}$ as a variable, but for an operad ${\opsymbfont{O}}$, the underlying functor of the monad ${\mathbb{O}}$ is precisely ${\mathbb{F}}_{{\opsymbfont{O}}}$. A map of symmetric sequences ${\opsymbfont{X}}\to{\opsymbfont{X}}'$ induces a natural transformation of functors ${\mathbb{F}}_{{\opsymbfont{X}}}\to {\mathbb{F}}_{{\opsymbfont{X}}'}$. The key observation is that ${\mathbb{F}}_{{\opsymbfont{X}}}\circ {\mathbb{F}}_{{\opsymbfont{Y}}}$ is actually naturally (in ${\opsymbfont{X}}$ and ${\opsymbfont{Y}}$) the functor associated to a symmetric sequence ${\opsymbfont{X}}\circ {\opsymbfont{Y}}$, called the \term{plethysm}; it is defined by looking at ${\mathbb{F}}_{{\opsymbfont{X}}}\circ {\mathbb{F}}_{{\opsymbfont{Y}}}$ and collecting homogeneous terms:
\begin{defn}\label{def:plethysm} For symmetric sequences ${\opsymbfont{X}}$ and ${\opsymbfont{Y}}$, define the \textit{plethysm} ${\opsymbfont{X}}\circ {\opsymbfont{Y}}$ to be the symmetric sequence \[ ({\opsymbfont{X}}\circ {\opsymbfont{Y}})(j)= \myop\coprod_{m=0}^{\infty} {\opsymbfont{X}}(m)\mathbin{\square}_{\Sigma_{m}} \left( \myop\coprod_{\putatop{j_{1},\dotsc,j_{m}\vrule height0pt width0pt depth1ex}{\sum j_{i}=j}}
({\opsymbfont{Y}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{Y}}(j_{m})) _{\Sigma_{j_{1}}\times \dotsb \times \Sigma_{j_{m}}}^{\Sigma_{j}} \right) \] where $(-)_{\Sigma}^{\Sigma_{j}}$ denotes induction for the subgroup $\Sigma< \Sigma_{j}$. \end{defn}
For $H$ a subgroup of a group $G$ and a right $H$-object $Y$, the induction $Y_{H}^{G}$ can be defined concisely in terms we have already seen as \[ Y_{H}^{G}\cong Y\mathbin{\square}_{H}\left(\myop\coprod_{G} \mathbf{1}\right) \] using the left action of $H$ on $G$ for the colimit in $\mathbin{\square}_{H}$ and the commuting right action of $G$ on $G$ to make this a $G$-object. In more elementary terms, choosing coset representatives $\{\gamma_{\alpha}\}$ for $H\backslash G$, \[ Y_{H}^{G}\cong \myop\coprod_{H \backslash G} Y \] where $g\in G$ acts (on the right) by sending the copy of $Y$ in the summand indexed by $H\gamma_{\alpha}$ to the summand indexed by $H\gamma_{\alpha}g$ by applying the action of $h\in H$ where $\gamma_{\alpha}g=h\gamma_{\beta}$ for a coset representative $\gamma_{\beta}$.
Just as composition of functors is associative (but not symmetric), so is plethysm of symmetric sequences. It also has a two-sided unit given by the symmetric sequence of the identity operad ${\opsymbfont{I}}$: \[ {\opsymbfont{I}}(m)=\begin{cases} \mathbf{1}&m=1,\\ \emptyset&\text{(the initial object) otherwise.} \end{cases} \] Since $\mathbin{\square}$ commutes with coproducts in each variable, $\emptyset\mathbin{\square} Z\cong \emptyset$ and we have a canonical natural isomorphism ${\mathbb{F}}_{{\opsymbfont{I}}}\cong \Id$. The following theorem relates the structure on the category of symmetric sequences to the structure on the category of endofunctors of ${\catsymbfont{M}}$.
\begin{thm}\label{thm:plethysm} The category of symmetric sequences in ${\catsymbfont{M}}$ is monoidal (but not symmetric monoidal) for $\circ$ with unit ${\opsymbfont{I}}$. The construction ${\mathbb{F}}_{(-)}$ extends to a strong monoidal functor from symmetric sequences in ${\catsymbfont{M}}$ to endofunctors in ${\catsymbfont{M}}$. \end{thm}
We leave the details to the reader as they take little more than bookkeeping of terms, but there are quite a lot of terms to write out for the iterated plethysms. In many contexts (such as when ${\catsymbfont{M}}$ is sets, spaces, abelian groups, chain complexes, various categories of spectra, and others) the construction ${\mathbb{F}}_{(-)}$ is a faithful functor from symmetric sequences to endofunctors and the theorem can be deduced without much work from associativity of composition of endofunctors (after checking that the associativity isomorphism for endofunctors restricts to an associativity isomorphism for symmetric sequences).
Monads are the monoids for the composition product in the category of endofunctors, a natural question is then what are the monoids for the plethysm product. If ${\opsymbfont{X}}$ is a monoid for the plethysm product, then from the various summands in ${\opsymbfont{X}}\circ {\opsymbfont{X}}$, we get maps \[ \Gamma^{m}_{j_{1},\dotsc,j_{m}}\colon {\opsymbfont{X}}(m)\mathbin{\square} {\opsymbfont{X}}(j_{1})\mathbin{\square} \dotsb \mathbin{\square} {\opsymbfont{X}}(j_{m})\to {\opsymbfont{X}}\circ {\opsymbfont{X}}\to {\opsymbfont{X}} \] and the map ${\opsymbfont{I}}\to{\opsymbfont{X}}$ gives a map $1\colon \mathbf{1}\to{\opsymbfont{X}}(1)$. A straightforward check then shows that this gives ${\opsymbfont{X}}$ the structure of an operad. On the other hand, starting with an operad ${\opsymbfont{O}}$, the various components of the composition law $\Gamma$ assemble to a map \[ \mu_{\Gamma}\colon {\opsymbfont{O}}\circ {\opsymbfont{O}}\to {\opsymbfont{O}} \] which is well-defined because of the equivariance rules \ref{def:operad}.\eqref{part:operad:easyperm}--\eqref{part:operad:hardperm}; the associativity rule for $\Gamma$ \ref{def:operad}.\eqref{part:operad:assoc} implies associativity of $\mu_{\Gamma}$, and the unit rule \ref{def:operad}.\eqref{part:operad:unit} shows that the map ${\opsymbfont{I}}\to {\opsymbfont{O}}$ (from the unit $1$) is a unit for $\mu_{\Gamma}$. This proves the following theorem, another alternative definition of operad.
\begin{thm} For a symmetric sequence ${\opsymbfont{X}}$ in ${\catsymbfont{M}}$, the constructions above give a bijective correspondence between operad structures on ${\opsymbfont{X}}$ (with the given underlying symmetric sequence) and plethysm monoid structures on ${\opsymbfont{X}}$. \end{thm}
Algebras over an operad also have an alternative definition in this context. Say that a symmetric sequence ${\opsymbfont{X}}$ is a \textit{zero sequence} if ${\opsymbfont{X}}(m)=\emptyset$ (the initial object) for $m>0$. Then the full subcategory of zero sequences is isomorphic to the category ${\catsymbfont{M}}$. When ${\opsymbfont{X}}$ is a zero sequence, then ${\opsymbfont{O}}\circ {\opsymbfont{X}}$ is also a zero sequence and \[ ({\opsymbfont{O}}\circ {\opsymbfont{X}})(0)\cong {\mathbb{O}}({\opsymbfont{X}}(0)). \] Under the isomorphism between ${\catsymbfont{M}}$ and zero sequences, ${\opsymbfont{O}}$-algebras are exactly the left ${\opsymbfont{O}}$-objects (for the plethysm product) in zero sequences. \fi
\section{\texorpdfstring{$E_{n}$}{En} spaces and Iterated Loop Space Theory}\label{sec:loop}
The recognition principle for iterated loop spaces provided the first application for operads. Although the summary here has been spiced up with model category notions and terminology (in the adjoint functor formulation of~\cite[\S8]{May-What}), the mathematics has not changed significantly from the original treatment by May in~\cite{May-GILS}, except for the improvements noted in the appendix to~\cite{CohenLadaMay}, which extend the results from connected to grouplike $E_{n}$ spaces. ($E_{n}$ spaces $=$ $E_{n}$ algebras in spaces.)
The original idea for the little $n$-cubes operads ${\opsymbfont{C}}_{n}$ and the start of the relationship between $E_{n}$ spaces and $n$-fold loop spaces is the Boardman-Vogt observation that every $n$-fold loop space comes with the natural structure of a ${\opsymbfont{C}}_{n}$-algebra. The action map \[ {\opsymbfont{C}}_{n}(m)\times \Omega^{n}X \times \dotsb \times \Omega^{n}X\to \Omega^{n} X \] is defined as follows. We view $S^{n}$ as $[0,1]^{n}/\partial$. Given an element $c\in {\opsymbfont{C}}_{n}(m)$, and elements $f_{1},\dotsc,f_{m}\colon S^{n}\to X$ of $\Omega^{n}X$, let $f_{c;f_{1},\dotsc,f_{m}}\colon S^{n}\to X$ be the function that sends a point $x$ in $S^{n}$ to the base point if $x$ is not in one of the embedded cubes; the $i$th embedded cube gets sent to $X$ using the inverse of the embedding and the quotient map $[0,1]^{n}\to S^{n}$ followed by the map $f_{i}\colon S^{n}\to X$. This is a continuous based map $S^{n}\to X$ since the boundary of each embedded cube gets sent to the base point. Phrased another way, $c$ defines a based map \[ S^{n}\to S^{n}\vee \dotsb \vee S^{n} \] with the $i$th embedded cube mapping to the $i$th wedge summand of $S^{n}$ by collapsing all points not in an open cube to the base point and rescaling; we then apply $f_{i}\colon S^{n}\to X$ to the $i$th summand to get a composite map $S^{n}\to X$.
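For instance, in the simplest case $n=1$ and $m=2$ (a sketch included only for orientation), the element $c\in{\opsymbfont{C}}_{1}(2)$ given by the two subintervals $[0,\tfrac13]$ and $[\tfrac23,1]$ sends a pair of loops $f_{1},f_{2}\in\Omega X$ to the loop that traverses $f_{1}$ on $[0,\tfrac13]$, sits at the base point on $[\tfrac13,\tfrac23]$, and traverses $f_{2}$ on $[\tfrac23,1]$; up to the constant stretch in the middle and reparametrization, this is the usual concatenation of loops, and the associated map $S^{1}\to S^{1}\vee S^{1}$ is the pinch map.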
The construction of the previous paragraph factors $\Omega^{n}$ as a functor from based spaces to ${\opsymbfont{C}}_{n}$-spaces ($=$ ${\opsymbfont{C}}_{n}$-algebras in spaces). It is clear that not every ${\opsymbfont{C}}_{n}$-space arises as $\Omega^{n}X$ because $\pi_{0}\Omega^{n}X$ is a group (for its canonical multiplication), whereas for the free ${\opsymbfont{C}}_{n}$-space ${\mathbb{C}}_{n}X$, $\pi_{0}{\mathbb{C}}_{n}X$ is not a group unless $X$ is the empty set; for example, $\pi_{0}{\mathbb{C}}_{n}X\cong {\mathbb{N}}$ when $X$ is path connected. We say that a ${\opsymbfont{C}}_{n}$-space $A$ is \term{grouplike} when $\pi_{0}A$ is a group (for its canonical multiplication). The following is the fundamental theorem of iterated loop space theory; it gives an equivalence of homotopy theories between $n$-fold loop spaces and grouplike ${\opsymbfont{C}}_{n}$-spaces.
\begin{thm}[{May~\cite{May-GILS}, Boardman-Vogt~\cite[\S6]{BV-PROP}}]\label{thm:FTILST} The functor $\Omega^{n}$ from based\break spaces to ${\opsymbfont{C}}_{n}$-spaces is a Quillen right adjoint. The unit of the derived adjunction \[ A\to \Omega^{n}B^{n}A \] is an isomorphism in the homotopy category of ${\opsymbfont{C}}_{n}$-spaces if (and only if) $A$ is grouplike. The counit of the derived adjunction \[ B^{n}\Omega^{n}X\to X \] is an isomorphism in the homotopy category of spaces if (and only if) $X$ is $(n-1)$-connected; in general it is an $(n-1)$-connected cover. \end{thm}
We have written the derived functor of the left adjoint in Theorem~\ref{thm:FTILST} as $B^{n}$, suggesting an iterated bar construction. Although neither the point-set adjoint functor nor the model for its derived functor used in the argument of Theorem~\ref{thm:FTILST} is constructed iteratively, Dunn~\cite{Dunn-Uniqueness} shows that the derived functor is naturally equivalent to an iterated bar construction.
As a consequence of the statement of the theorem, the unit of the derived adjunction $A\to \Omega^{n}B^{n}A$ is the initial map in the homotopy category of ${\opsymbfont{C}}_{n}$-spaces from $A$ to a grouplike ${\opsymbfont{C}}_{n}$-space and so deserves to be called ``group completion''. Group completion has various characterizations and for the purposes of sketching the ideas behind the proof of the theorem, it works best to choose one of them as the definition and state the property of the unit map as a theorem. One such characterization uses the classifying space construction, which we understand as the Eilenberg-Mac\ Lane bar construction (after converting the underlying ${\opsymbfont{C}}_{1}$-spaces to topological monoids) or the Stasheff bar construction (choosing compatible maps from the Stasheff associahedra into the spaces ${\opsymbfont{C}}_{n}(m)$).
\begin{defn}\label{def:groupcompletion} A map $f\colon A\to G$ of ${\opsymbfont{C}}_{n}$-spaces is a \term{group completion} if $G$ is grouplike and $f$ induces a weak equivalence of classifying spaces. \end{defn}
In the case $n>1$ (and under some hypotheses if $n=1$), Quillen~\cite{Quillen-GroupCompletion} gives a homological criterion for a map to be a group completion: if $G$ is grouplike, then a map $A\to G$ of ${\opsymbfont{C}}_{n}$-spaces is a group completion if and only if \[ H_{*}(A)[(\pi_{0}A)^{-1}]\to H_{*}(G) \] is an isomorphism. Counterexamples exist in the case $n=1$ (indeed, McDuff~\cite{McDuff-DiscreteMonoids} gives a counterexample for every loop space homotopy type), but recent work of Braun, Chuang, and Lazarev~\cite{BCL-DerivedLocalization} gives an analogous derived category criterion in terms of derived localization at the multiplicative set $\pi_{0}A$. Using Definition~\ref{def:groupcompletion} or any equivalent independent characterization of group completion, we have the following addendum to Theorem~\ref{thm:FTILST}.
\begin{add} The unit of the derived adjunction in Theorem~\ref{thm:FTILST} is group completion. \end{add}
The homotopical heart of the proof of Theorem~\ref{thm:FTILST} is the May-Cohen-Segal Approximation Theorem (\cite[\S6--7]{May-GILS}, \cite{Cohen-Bulletin}, \cite{Segal-ConfigurationSpaces}), which we now review. This theorem studies a version of the free ${\opsymbfont{C}}_{n}$-algebra functor $\tilde{\mathbb{C}}_{n}$ whose domain is the category of based spaces, where the base point becomes the identity element in the ${\opsymbfont{C}}_{n}$-algebra structure. This version of the free functor has the advantage that for a connected space $X$, $\tilde{\mathbb{C}}_{n} X$ is also a connected space; May's Approximation Theorem identifies $\tilde{\mathbb{C}}_{n} X$ in this case as a model for $\Omega^{n}\Sigma^{n}X$. Cohen (following conjectures of May) and Segal (working independently) then extended this to non-connected spaces: the group completion of $\tilde{\mathbb{C}}_{n} X$ is a model for $\Omega^{n}\Sigma^{n}X$.
For a based space $X$, $\tilde{\mathbb{C}}_{n}X$ is formed as a quotient of \[ {\mathbb{C}}_{n} X= \coprod {\opsymbfont{C}}_{n}(m)\times_{\Sigma_{m}}X^{m} \] by the equivalence relation that identifies $(c,(x_{1},\dotsc,x_{i},*,\dotsc,*))\in {\opsymbfont{C}}_{n}(m)\times X^{m}$ with $(c',(x_{1},\dotsc,x_{i}))\in {\opsymbfont{C}}_{n}(i)\times X^{i}$ for $c'=\Gamma(c;1,\dotsc,1,0,\dotsc,0)$ where $1$ denotes the identity element in ${\opsymbfont{C}}_{n}(1)$ and $0$ denotes the unique element in ${\opsymbfont{C}}_{n}(0)$. This is actually an instance of the operad pushforward construction: let ${\opsymbfont{I}}_{\mathrm{dbp}}$ be the operad with ${\opsymbfont{I}}_{\mathrm{dbp}}(0)={\opsymbfont{I}}_{\mathrm{dbp}}(1)=*$ and ${\opsymbfont{I}}_{\mathrm{dbp}}(m)=\emptyset$ for $m>1$. The functor associated to ${\opsymbfont{I}}_{\mathrm{dbp}}$ is the functor $(-)_{+}$ that adds a disjoint base point with the monad structure $((-)_{+})_{+}\to (-)_{+}$ that identifies the two disjoint base points; the category of algebras for this monad is the category of based spaces. The functor $\tilde{\mathbb{C}}_{n}$ from based spaces to ${\opsymbfont{C}}_{n}$-algebras is the pushforward $P_{f}$ for $f$ the unique map of operads ${\opsymbfont{I}}_{\mathrm{dbp}}\to {\opsymbfont{C}}_{n}$: formally $P_{f}$ is the coequalizer described in Section~\ref{sec:compare}, that in this case takes the form \[ \xymatrix@C-1pc{ {\mathbb{C}}_{n}(X_{+})\ar@<.5ex>[r]\ar@<-.5ex>[r]&{\mathbb{C}}_{n}X\ar[r]&\tilde{\mathbb{C}}_{n}X. } \] As mentioned in an aside in that section (or as can be seen concretely here using the operad multiplication on ${\opsymbfont{C}}_{n}$ directly), the endofunctor $\tilde{\mathbb{C}}_{n}$ on based spaces (i.e., $U_{f}P_{f}$) has the structure of a monad, and we can identify the category of ${\opsymbfont{C}}_{n}$-spaces as the category of algebras over the monad~$\tilde{\mathbb{C}}_{n}$.
The factorization of the functor $\Omega^{n}$ through ${\opsymbfont{C}}_{n}$-spaces has the formal consequence of producing a map of monads (in based spaces) \[ \tilde{\mathbb{C}}_{n}\to \Omega^{n}\Sigma^{n}. \] Formally the map is induced by the composite \[ \tilde{\mathbb{C}}_{n}X\overto{\tilde{\mathbb{C}}_{n}\eta}\tilde{\mathbb{C}}_{n}\Omega^{n}\Sigma^{n}X\overto{\xi} \Omega^{n}\Sigma^{n}X, \] where $\eta$ is the unit of the $\Sigma^{n},\Omega^{n}$-adjunction and $\xi$ is the ${\opsymbfont{C}}_{n}$-action map. This map has the following concrete description: an element $(c,(x_{1},\dotsc,x_{m}))\in {\opsymbfont{C}}_{n}(m)\times X^{m}$ maps to the element $\gamma \colon S^{n}\to \Sigma^{n}X$ of $\Omega^{n}\Sigma^{n}X$ given by the composite of the map \[ S^{n}\to S^{n}\vee\dotsb \vee S^{n} \] associated to $c$ (as described above) and the map \[ S^{n}\cong \Sigma^{n} \{x_{i}\}_{+}\subset \Sigma^{n}X \] on the $i$th factor of $S^{n}$. Either using this concrete description, or following diagrams in a formal categorical argument, it is straightforward to check that this defines a map of monads. We can now state the May-Cohen-Segal Approximation Theorem.
\begin{thm}[{May-Cohen-Segal Approximation Theorem~\cite[6.1]{May-GILS}, \cite[3.3]{Cohen-Bulletin}, \cite[Theorem~2]{Segal-ConfigurationSpaces}}]\label{thm:MA} \noindent \par\noindent For any non-degenerately based space $X$, the map of ${\opsymbfont{C}}_{n}$-spaces $\tilde{\mathbb{C}}_{n}X\to \Omega^{n}\Sigma^{n}X$ is group completion. \end{thm}
(``Non-degenerately based'' means that the inclusion of the base point is a cofibration. Both $\tilde{\mathbb{C}}_{n}$ and $\Omega^{n}\Sigma^{n}$ preserve weak equivalences in non-degenerately based spaces, but for other spaces, either or both may have the wrong weak homotopy type.)
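For example, though we do not use this below: for $X=S^{0}$ the basepoint identifications give $\tilde{\mathbb{C}}_{n}S^{0}\cong\coprod_{m\geq 0}{\opsymbfont{C}}_{n}(m)/\Sigma_{m}$, with ${\opsymbfont{C}}_{n}(m)/\Sigma_{m}$ homotopy equivalent to the unordered configuration space of $m$ points in ${\mathbb{R}}^{n}$, and the theorem recovers the classical statement that the group completion of this disjoint union of configuration spaces is weakly equivalent to $\Omega^{n}S^{n}$.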
From here a sketch of the proof of Theorem~\ref{thm:FTILST} goes as follows. Since $\Omega^{n}$ as a functor from based spaces to based spaces has left adjoint $\Sigma^{n}$, a check of universal properties shows that the functor from ${\opsymbfont{C}}_{n}$-spaces to based spaces defined by the coequalizer \[ \xymatrix@C-1pc{ \Sigma^{n}\tilde{\mathbb{C}}_{n}A\ar@<-.5ex>[r]\ar@<.5ex>[r] &\Sigma^{n}A\ar[r]&\Sigma^{n}\otimes_{{\mathbb{C}}_{n}} A } \] is the left adjoint to $\Omega^{n}$ viewed as a functor from based spaces to ${\opsymbfont{C}}_{n}$-spaces. (In the coequalizer, one map is induced by the ${\opsymbfont{C}}_{n}$-action map on $A$ and the other is adjoint to the map of monads $\tilde{\mathbb{C}}\to \Omega^{n}\Sigma^{n}$.) Because $\Omega^{n}$ preserves fibrations and weak equivalences, this is a Quillen adjunction.
The main tool to study the $\Sigma^{n}\otimes_{{\mathbb{C}}_{n}}(-),\Omega^{n}$-adjunction is the two-sided monadic bar construction, invented in~\cite[\S9]{May-GILS} for this purpose. Given a monad ${\mathbb{T}}$ and a right action of ${\mathbb{T}}$ on a functor $F$ (say, to based spaces), the two-sided monadic bar construction is the functor on ${\mathbb{T}}$-algebras $B(F,{\mathbb{T}},-)$ defined as the geometric realization of the simplicial object \[ B_{m}(F,{\mathbb{T}},A)=F\underbrace{{\mathbb{T}}\dotsb {\mathbb{T}}}_{m}A \] with face maps induced by the action map $F{\mathbb{T}}\to F$, the multiplication map ${\mathbb{T}}{\mathbb{T}}\to{\mathbb{T}}$ and the action map ${\mathbb{T}} A\to A$, and degeneracy maps induced by the unit map $\Id\to {\mathbb{T}}$. In the case when $F={\mathbb{T}}$, the simplicial object $B_{\ssdot}({\mathbb{T}},{\mathbb{T}},A)$ has an extra degeneracy and the map from $B_{\ssdot}({\mathbb{T}},{\mathbb{T}},A)$ to the constant simplicial object on $A$ is a simplicial homotopy equivalence (in the underlying category for ${\mathbb{T}}$, though not generally in the category of ${\mathbb{T}}$-algebras).
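Unwinding the definition in the lowest simplicial degrees (recorded here only for orientation): $B_{0}(F,{\mathbb{T}},A)=FA$ and $B_{1}(F,{\mathbb{T}},A)=F{\mathbb{T}}A$, with the two face maps $F{\mathbb{T}}A\to FA$ induced respectively by the right action $F{\mathbb{T}}\to F$ and by the action map ${\mathbb{T}}A\to A$, and the degeneracy $FA\to F{\mathbb{T}}A$ induced by the unit $\Id\to{\mathbb{T}}$.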
Because geometric realization commutes with colimits and finite cartesian products, we have a canonical isomorphism \[ \tilde{\mathbb{C}}_{n}B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)\to B(\tilde{\mathbb{C}}_{n}\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A) \] and the multiplication map $\tilde{\mathbb{C}}_{n}\tilde{\mathbb{C}}_{n}\to \tilde{\mathbb{C}}_{n}$ then gives $B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)$ the natural structure of a ${\opsymbfont{C}}_{n}$-algebra. (See Section~\ref{sec:enrich} for a more general discussion.) For the same reason, the canonical map \[ \Sigma^{n}\otimes_{{\mathbb{C}}_{n}} B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)\to B(\Sigma^{n}\otimes_{{\mathbb{C}}_{n}}\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)=B(\Sigma^{n},\tilde{\mathbb{C}}_{n},A) \] is an isomorphism. The latter functor clearly\footnote{At the time when May wrote the argument, this was far from clear: some of the first observations about when geometric realization of simplicial spaces preserves levelwise weak equivalences were developed in~\cite[\S11]{May-GILS} precisely for this argument.} preserves weak equivalences of ${\opsymbfont{C}}_{n}$-spaces $A$ whose underlying based spaces are non-degenerately based. (In addition to being a hypothesis of May-Cohen-Segal Approximation Theorem, non-degenerately based here also ensures that the inclusion of the degenerate subspace (or latching object) is a cofibration.) As a consequence of Theorem~\ref{thm:georeal} it follows that when the underlying based space of $A$ is cofibrant (which is the case in particular when $A$ is cofibrant as a ${\opsymbfont{C}}_{n}$-space), then $B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)$ is a cofibrant ${\opsymbfont{C}}_{n}$-space. Because $\Sigma^{n}\otimes_{{\mathbb{C}}_{n}}(-)$ is a Quillen left adjoint, it preserves weak equivalences between cofibrant objects, and looking at a cofibrant approximation $A'\overto{\sim}A$, we see from the weak equivalences \[ B(\Sigma^{n},\tilde{\mathbb{C}}_{n},A)\overfrom{\sim}B(\Sigma^{n},\tilde{\mathbb{C}}_{n},A')\cong \Sigma^{n}\otimes_{{\mathbb{C}}_{n}} B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A')\overto{\sim} \Sigma^{n}\otimes_{{\mathbb{C}}_{n}} A' \] that $B(\Sigma^{n},\tilde{\mathbb{C}}_{n},A)$ models the derived functor $B^{n}A$ of $\Sigma^{n}\otimes_{{\mathbb{C}}_{n}}(-)$ whenever $A$ is non-degenerately based.
To complete the argument, we need the theorem of~\cite[\S12]{May-GILS} that $\Omega^{n}$ commutes up to weak equivalence with geometric realization of (proper) simplicial spaces that are $(n-1)$-connected in each level. Then for $A$ non-degenerately based, we have that the vertical maps are weak equivalences of ${\opsymbfont{C}}_{n}$-spaces \[ \xymatrix{ B(\tilde{\mathbb{C}}_{n},\tilde{\mathbb{C}}_{n},A)\ar[r]\ar[d] &B(\Omega^{n}\Sigma^{n},\tilde{\mathbb{C}}_{n},A)\ar[d]\\ A&\Omega^{n}B(\Sigma^{n},\tilde{\mathbb{C}}_{n},A) } \] while by the May-Cohen-Segal Approximation Theorem, the horizontal map is group completion. This proves that the unit of the derived adjunction is group completion.
For the counit of the derived adjunction, we have from the model above that $B^{n}$ is always $(n-1)$-connected and the unit \[ \Omega^{n}X\to \Omega^{n}B^{n}\Omega^{n}X \] on $\Omega^{n}X$ is a weak equivalence. Looking at $\Omega^{n}$ of the counit, \[ \Omega^{n}B^{n}\Omega^{n}X\to \Omega^{n}X, \] the composite with the unit is the identity on $\Omega^{n}X$, and so it follows that $\Omega^{n}$ of the counit is a weak equivalence. Thus, the counit of the derived adjunction is an $(n-1)$-connected cover map.
\iffalse ********** say something about the $n=\infty$ case \fi
\section{\texorpdfstring{$E_{\infty}$}{Einfty} Algebras in Rational and \texorpdfstring{$p$}{p}-Adic Homotopy Theory}\label{sec:cochains}
In the 1960's and 1970's, Quillen~\cite{Quillen-RHT} and Sullivan~\cite{Sullivan73,Sullivan-ICT} showed that the rational homotopy theory of simply connected spaces (or simplicial sets) has an algebraic model in terms of rational differential graded commutative algebras or coalgebras. In the 1990's, the author proved a mostly analogous theorem relating $E_{\infty}$ differential graded algebras and $p$-adic homotopy theory and a bit later some results for using $E_{\infty}$ differential graded algebras or $E_{\infty}$ ring spectra to identify integral homotopy types. In this section, we summarize this theory following mostly the memoir of Bousfield-Gugenheim~\cite{BG}, and the papers~\cite{Einfty}\footnote{In the published version, in addition to several other unauthorized changes, the copy editors changed the typefaces with the result that the same symbols are used for multiple different objects or concepts; the preprint version available at the author's home page \url{https://pages.iu.edu/~mmandell/papers/einffinal.pdf} does not have these changes and should be much more readable.} and~\cite{Integral}. In what follows $k$ denotes a commutative ring, which is often further restricted to be a field.
In both the rational commutative differential graded algebra case and the $E_{\infty}$ $k$-algebra case, the theory simplifies by working with simplicial sets instead of spaces, and the functor is some variant of the cochain complex. Sullivan's approach to rational homotopy theory uses a rational version of the de Rham complex, originally due to Thom (unpublished), consisting of forms that are polynomial on simplices and piecewise matched on faces:
\begin{defn} The algebra $\nabla^{*}[n]$ of polynomial forms on the standard simplex $\Delta[n]$ is the rational commutative differential graded algebra free on generators $t_{0},\dotsc,t_{n}$ (of degree zero), $dt_{0},\dotsc,dt_{n}$ (of degree one) subject to the relations $t_{0}+\dotsb+t_{n}=1$ and $dt_{0}+\dotsb+dt_{n}=0$ (as well as the differential relation implicit in the notation). \end{defn}
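For example, $\nabla^{*}[0]\cong{\mathbb{Q}}$, and for $n=1$ the relations $t_{0}=1-t_{1}$ and $dt_{0}=-dt_{1}$ eliminate $t_{0}$ and $dt_{0}$, so $\nabla^{*}[1]\cong{\mathbb{Q}}[t]\otimes\Lambda(dt)$ (polynomial on $t=t_{1}$ in degree zero, exterior on $dt=dt_{1}$ in degree one, with $d(t)=dt$), the algebra of polynomial forms on the unit interval.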
Viewing $t_{0},\dotsc,t_{n}$ as the barycentric coordinate functions on $\Delta[n]$ determines their behavior under face and degeneracy maps, making $\nabla^{*}[\bullet]$ a simplicial rational commutative differential graded algebra.
\begin{defn} For a simplicial set $X$, the rational de Rham complex $A^{*}(X)$ is the rational graded commutative algebra of maps of simplicial sets from $X$ to $\nabla^{*}[\bullet]$, or equivalently, the end over the simplex category \[ A^{*}(X):={\mathbf \varDelta}^{\op}{\opsymbfont{S}}\mathrm{et}(X,\nabla^{*}[\bullet])= \int_{{\mathbf \varDelta}^{\op}}{\opsymbfont{S}}\mathrm{et}(X_{n},\nabla^{*}[n])= \int_{{\mathbf \varDelta}^{\op}} \,\prod_{X_{n}}\nabla^{*}[n] \] (the last formula indicating how to regard $A^{*}(X)$ as a rational commutative differential graded algebra). \end{defn}
More concretely, $A^{*}(X)$ is the rational commutative differential graded algebra where an element of degree $q$ consists of a choice of element of $\nabla^{q}[n]$ for each non-degenerate $n$-simplex of $X$ (for all $n$) which agree under restriction by face maps, with multiplication and differential done on each simplex. (When $X$ is a finite simplicial complex, $A^{*}(X)$ also has a Stanley-Reisner ring style description; see \cite[G.\iffalse(\fi{i})]{Sullivan73}.) The simplicial differential graded ${\mathbb{Q}}$-module $\nabla^{*}[\bullet]$ is a contractible Kan complex in each fixed degree $q$ (``the extension lemma''~\cite[1.1]{BG}) and is acyclic in the sense that the inclusion of the unit ${\mathbb{Q}}\to \nabla^{*}[n]$ is a chain homotopy equivalence for each fixed $n$ (``the Poincar\'e lemma''~\cite[1.3]{BG}). These formal properties imply that the cohomology of $A^{*}(X)$ is canonically naturally isomorphic to $H^{*}(X;{\mathbb{Q}})$, the rational cohomology of $X$ (even uniquely naturally isomorphic, relative to the canonical isomorphism ${\mathbb{Q}}\cong A^{*}(\Delta[0])$). The canonical isomorphism can be realized as a chain map to the normalized cochain complex $C^{*}(X;{\mathbb{Q}})$ defined in terms of integrating differential forms; see~\cite[1.4,2.1,2.2]{BG}.
In the $p$-adic case, we can use the normalized cochain complex $C^{*}(X;k)$ directly as it is naturally an $E_{\infty}$ $k$-algebra. In the discussion below, we use the $E_{\infty}$ $k$-algebra structure constructed by Berger-Fresse~\cite[\S2.2]{BergerFresse-Combinatorial} for the Barratt-Eccles operad ${\opsymbfont{E}}$ (the normalized chains of the Barratt-Eccles operad of categories or simplicial sets described in Example~\ref{ex:be}). Hinich-Schechtmann~\cite{HinichSchechtman} and (independently) Smirnov~\cite{Smirnov} appear to be the first to explicitly describe a natural operadic algebra structure on cochains; McClure-Smith~\cite{McClureSmith-MultivariableCubes} describes a natural $E_{\infty}$ structure that generalizes classical $\cup_{i}$ product and bracket operations. The ``cochain theory'' theory of~\cite{Cochains} shows that all these structures are equivalent in the sense that they give naturally quasi-isomorphic functors into a common category of $E_{\infty}$ $k$-algebras, as does the polynomial de Rham complex functor $A^{*}$ when $k={\mathbb{Q}}$.
Both $A^{*}(X)$ and $C^{*}(X;k)$ fit into adjunctions of the contravariant type that send colimits to limits. Concretely, for a rational commutative differential graded algebra $A$ and an $E_{\infty}$ $k$-algebra $E$, define simplicial sets by the formulas \[ T(A):={\catsymbfont{C}}_{{\mathbb{Q}}}(A,\nabla^{*}[\bullet]), \qquad U(E):={\catsymbfont{E}}_{k}(E,C^{*}(\Delta[\bullet])), \] where ${\catsymbfont{C}}_{{\mathbb{Q}}}$ denotes the category of rational commutative differential graded algebras and ${\catsymbfont{E}}_{k}$ denotes the category of $E_{\infty}$ $k$-algebras (over the Barratt-Eccles operad). An easy formal argument shows that \[ A^{*}\colon {\mathbf \varDelta}^{\op}{\opsymbfont{S}}\mathrm{et} \xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{C}}_{{\mathbb{Q}}}^{\op}\;{:}\, T, \qquad C^{*}\colon {\mathbf \varDelta}^{\op}{\opsymbfont{S}}\mathrm{et} \xymatrix@C-1pc{\ar@<.5ex>[r]&\ar@<.5ex>[l]} {\catsymbfont{E}}_{k}^{\op}\;{:}\, U, \] are adjunctions. As discussed in Section~\ref{sec:model}, both ${\catsymbfont{C}}_{{\mathbb{Q}}}$ and ${\catsymbfont{E}}_{k}$ have closed model structures with weak equivalences the quasi-isomorphisms and fibrations the surjections. Because both $A^{*}$ and $C^{*}$ preserve homology isomorphisms and convert injections to surjections, these are Quillen adjunctions. The main theorems of \cite{BG} and \cite{Einfty} then identify subcategories of the homotopy categories on which the adjunction restricts to an equivalence.
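To indicate the formal argument in the rational case: a map of simplicial sets $X\to T(A)$ assigns to each $n$-simplex of $X$ a map of rational commutative differential graded algebras $A\to\nabla^{*}[n]$, compatibly with the face and degeneracy maps, and this is exactly the same data as a single algebra map $A\to A^{*}(X)$ by the end description of $A^{*}(X)$ above; the $E_{\infty}$ case is identical with $C^{*}(\Delta[\bullet])$ in place of $\nabla^{*}[\bullet]$.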
Before stating the theorems, first recall the $H_{*}(-;k)$-local model structure on simplicial sets: this has cofibrations the inclusions and weak equivalences the $H_{*}(-;k)$ homology isomorphisms. When $k$ is a field, the weak equivalences depend only on the characteristic, and we also call this the \term{rational model structure} (in the case of characteristic zero) or the \term{$p$-adic model structure} (in the case of characteristic $p>0$); we call the associated homotopy categories, the \term{rational homotopy category} and \term{$p$-adic homotopy category}, respectively. As with any localization, the local homotopy category is the homotopy category of local objects (that is to say, the fibrant objects): in the case of rational homotopy theory, the local objects are the Kan complexes of the homotopy type of rational spaces. In $p$-adic homotopy theory, the local objects are the Kan complexes that satisfy a $p$-completeness property described explicitly in~\cite[\S5,7--8]{Bousfield-LocSpace}.
We say that a simplicial set $X$ is \term{finite $H_{*}(-;k)$-type} (or \term{finite rational type} when $k$ is a field of characteristic zero or \term{finite $p$-type} when $k$ is a field of characteristic $p>0$) when $H_{*}(X;k)$ is finitely generated over $k$ in each degree (or, equivalently if $k$ is a field, when $H^{*}(X;k)$ is finite dimensional in each degree). Similarly a rational commutative differential graded algebra or $E_{\infty}$ $k$-algebra $A$ is \term{finite type} when its homology is finitely generated over $k$ in each degree. It is \term{simply connected} when the inclusion of the unit induces an isomorphism $k\to H^{0}(A)$, $H^{1}(A)\cong 0$, and $H^{n}(A)\cong 0$ for $n<0$ (with the usual cohomological grading convention that $H^{n}(A):=H_{-n}(A)$). With this terminology, the main theorem of \cite{BG} is the following:
\begin{thm}[{\cite[Section~8, Theorem~9.4]{BG}}]\label{thm:rht} The polynomial de Rham complex functor, $A^{*}\colon {\mathbf \varDelta}^{\op}{\opsymbfont{S}}\mathrm{et} \to {\catsymbfont{C}}_{{\mathbb{Q}}}^{\op}$, is a left Quillen adjoint for the rational model structure on simplicial sets. The left derived functor restricts to an equivalence of the full subcategory of the rational homotopy category consisting of the simply connected simplicial sets of finite rational type and the full subcategory of the homotopy category of rational commutative differential graded algebras consisting of the simply connected rational commutative differential graded algebras of finite type. \end{thm}
For the $p$-adic version below, we need to take into account Steenrod operations. For $k={\mathbb{F}}_{p}$, the Steenrod operations arise from the coherent homotopy commutativity of the $p$-fold multiplication, which is precisely encoded in the action of the $E_{\infty}$ operad. Specifically, the $p$th complex ${\opsymbfont{E}}(p)$ of the operad is a $k[\Sigma_{p}]$-free resolution of $k$, and by neglect of structure, we can regard it as a $k[C_{p}]$-free resolution of $k$ where $C_{p}$ denotes the cyclic group of order $p$. The operad action induces a map \[ {\opsymbfont{E}}(p)\otimes_{k[C_{p}]}(C^{*}(X;k))^{(p)}\to {\opsymbfont{E}}(p)\otimes_{k[\Sigma _{p}]}(C^{*}(X;k))^{(p)}\to C^{*}(X;k). \] The homology of ${\opsymbfont{E}}(p)\otimes_{k[C_{p}]}(C^{*}(X;k))^{(p)}$ is a functor of the homology of $C^{*}(X;k)$ and the Steenrod operations $P^{s}$ are precisely the image of certain classes under this map; see, for example, \cite[2.2]{May-Steenrod}. This process works for any $E_{\infty}$ $k$-algebra, not just the cochains on spaces, to give natural operations on the homology of ${\opsymbfont{E}}$-algebras, usually called Dyer-Lashof operations. The numbering conventions for the Dyer-Lashof operations are the opposite of those of the Steenrod operations: on the cohomology of $C^{*}(X;{\mathbb{F}}_{p})$, the Dyer-Lashof operation $Q^{s}$ performs the Steenrod operation $P^{-s}$. If $k$ is characteristic $p$ but not ${\mathbb{F}}_{p}$, the operations constructed this way are ${\mathbb{F}}_{p}$-linear but satisfy $Q^{s}(ax)=\phi(a)Q^{s}(x)$ for $a\in k$, where $\phi$ denotes the Frobenius automorphism of $k$.
The ${\mathbb{F}}_{p}$ cochain algebra of a space has the special property that the Steenrod operation $P^{0}=Q^{0}$ is the identity operation on its cohomology; this is not true of the zeroth Dyer-Lashof operation in general. Indeed, for a commutative ${\mathbb{F}}_{p}$-algebra regarded as an $E_{\infty}$ ${\mathbb{F}}_{p}$-algebra, $Q^{0}$ is the Frobenius. (The fact that $Q^{0}$ is the identity for the ${\mathbb{F}}_{p}$-cochain algebra of a space is related to the fact that it comes from a cosimplicial ${\mathbb{F}}_{p}$-algebra where the Frobenius in each degree is the identity.) So when $X$ is finite $p$-type, the cohomology of $C^{*}(X;k)$ in each degree has a basis that is fixed by $Q^{0}$. We say that a finite type $E_{\infty}$ $k$-algebra is \term{spacelike} when in each degree its homology has a basis that is fixed by $Q^{0}$. The main theorem of~\cite{Einfty} is the following:
\begin{thm}[{\cite[Main Theorem, Theorem~A.1]{Einfty}}]\label{thm:pht} The cochain complex with coefficients in $k$, $C^{*}(-;k)\colon {\mathbf \varDelta}^{\op}{\opsymbfont{S}}\mathrm{et} \to {\catsymbfont{E}}_{k}^{\op}$, is a left Quillen adjoint for the $H_{*}(-;k)$-local model structure on simplicial sets. If $k={\mathbb{Q}}$ or $k$ is characteristic $p$ and $1-\phi$ is surjective on $k$, then the left derived functor restricts to an equivalence of the full subcategory of the $H_{*}(-;k)$-local homotopy category consisting of the simply connected simplicial sets of finite $H_{*}(-;k)$-type and the full subcategory of the homotopy category of $E_{\infty}$ $k$-algebras consisting of the spacelike simply connected $E_{\infty}$ $k$-algebras of finite type. \end{thm}
Given the Quillen equivalence between rational commutative differential graded algebras and $E_{\infty}$ ${\mathbb{Q}}$-algebras (Theorem~\ref{ethm:rect}) and the natural quasi-isomorphism (zigzag) between $A^{*}(-)$ and $C^{*}(-;{\mathbb{Q}})$ \cite[p.~549]{Cochains}, the rational statement in Theorem~\ref{thm:pht} is equivalent to Theorem~\ref{thm:rht}. The Sullivan theory in Theorem~\ref{thm:rht} often includes observations on \term{minimal models}. A simply connected finite type rational commutative differential graded algebra $A$ has a cofibrant approximation $A'\to A$ whose underlying graded commutative algebra is free and such that the differential of every element is decomposable (i.e., is a sum of terms, all of which are word length greater than 1 in the generators); $A'$ is called a minimal model and is unique up to isomorphism. As a consequence, simply connected simplicial sets of finite rational type are rationally equivalent if and only if their minimal models are isomorphic. The corresponding theory also works in the context of $E_{\infty}$ ${\mathbb{Q}}$-algebras with the analogous definitions and proofs. The corresponding theory does not work in the context of $E_{\infty}$ algebras in characteristic $p$ for reasons closely related to the fact that unlike the rational homotopy groups, the $p$-adic homotopy groups of a simplicial set are not vector spaces.
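For example (the standard first example): a minimal model for $S^{2}$ is the free graded commutative algebra on generators $x$ in degree $2$ and $y$ in degree $3$ with $dx=0$ and $dy=x^{2}$; its cohomology is ${\mathbb{Q}}[x]/(x^{2})\cong H^{*}(S^{2};{\mathbb{Q}})$, and the differential of $y$ is visibly decomposable.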
The equivalences in Theorems~\ref{thm:rht} and~\ref{thm:pht} also extend to the nilpotent simplicial sets of finite type, but the corresponding category of $E_{\infty}$ $k$-algebras does not have a known intrinsic description in the $p$-adic homotopy case; in the rational case, the corresponding algebraic category consists of the finite type algebras whose homology is zero in negative cohomological degrees and whose $H^{0}$ is isomorphic as a ${\mathbb{Q}}$-algebra to the cartesian product of copies of ${\mathbb{Q}}$ (cf.~\cite[\S3]{EqEinfty}).
For other fields not addressed in the second part of Theorem~\ref{thm:pht}, the adjunction does not necessarily restrict to the indicated subcategories and even when it does, it is never an equivalence. To be an equivalence, the unit of the derived adjunction would have to be an $H_{*}(-;k)$-isomorphism for simply connected simplicial sets of finite type. If $k\neq {\mathbb{Q}}$ is characteristic zero, then the right derived functor of $U$ takes $C^{*}(S^{2};k)$ to a simplicial set with $\pi_{2}$ isomorphic to $k$; if $k$ is characteristic $p$, then the right derived functor of $U$ takes $C^{*}(S^{2};k)$ to a simplicial set with $\pi_{1}$ isomorphic to the cokernel of $1-\phi$. See \cite[App.~A]{Einfty} for more precise results. Because the algebraic closure of a field $k$ of characteristic $p$ does have $1-\phi$ surjective, even when $C^{*}(-;k)$ is not an equivalence, it can be used to detect $p$-adic equivalences. The paper~\cite{Integral} extends this kind of observation to the case $k={\mathbb{Z}}$:
\begin{thm}[{\cite[Main Theorem]{Integral}}]\label{thm:integral} Finite type nilpotent spaces or simplicial sets $X$ and $Y$ are weakly equivalent if and only if $C^{*}(X;{\mathbb{Z}})$ and $C^{*}(Y;{\mathbb{Z}})$ are quasi-isomorphic as $E_{\infty}$ ${\mathbb{Z}}$-algebras. \end{thm}
Using the spectral version of Theorem~\ref{thm:pht} in~\cite[App.~C]{Einfty}, the proof of the previous theorem in \cite{Integral} extends to show that when $X$ and $Y$ are finite nilpotent simplicial sets then $X$ and $Y$ are weakly equivalent if and only if their Spanier-Whitehead dual spectra are weakly equivalent as $E_{\infty}$ ring spectra. (This was the subject of a talk by the author at the Newton Institute in December 2002.)
We use the rest of the section to outline the argument for Theorems~\ref{thm:rht} and~\ref{thm:pht}, using the notation of the latter. We fix a field $k$, which is either ${\mathbb{Q}}$ or is characteristic $p>0$ and has $1-\phi$ surjective. We write $C^{*}$ for $C^{*}(-;k)$; when $k={\mathbb{Q}}$ and we are working in the context of Theorem~\ref{thm:rht}, we understand $C^{*}$ as $A^{*}$. We also use $C^{*}$ to denote the derived functor and write $\mathbf{U}$ for the derived functor of its adjoint. The idea of the proof, going back to Sullivan, is to work with Postnikov towers, and so the first step is to find cofibrant approximations for $C^{*}(K(\pi,n))$. For $k={\mathbb{Q}}$, this is easy since $H^{*}(K({\mathbb{Q}},n);{\mathbb{Q}})$ is the free graded commutative algebra on a generator in degree $n$.
\begin{prop}\label{prop:rems} If $k={\mathbb{Q}}$ then $C^{*}(K({\mathbb{Q}},n))$ is quasi-isomorphic to the free ($E_{\infty}$ or commutative differential graded) ${\mathbb{Q}}$-algebra on a generator in cohomological degree $n$. \end{prop}
We use the notation ${\mathbb{E}} k[n]$ to denote the free $E_{\infty}$ $k$-algebra on a generator in cohomological degree $n$. When $k$ is characteristic $p$, there is a unique map in the homotopy category from ${\mathbb{E}} k[n]\to C^{*}(K({\mathbb{Z}}/p,n))$ that sends the generator $x_{n}$ to a class $i_{n}$ representing the image of the tautological element of $H^{n}(K({\mathbb{Z}}/p,n);{\mathbb{Z}}/p)$. Unlike the characteristic zero case, this is not a quasi-isomorphism since $Q^{0}[i_{n}]=[i_{n}]$ in $H^{*}(C^{*}(K({\mathbb{Z}}/p,n)))$, but $Q^{0}[x_{n}]\neq [x_{n}]$ in $H^{*}({\mathbb{E}} k[n])$. Let $B_{n}$ be the homotopy pushout of a map ${\mathbb{E}} k[n]\to {\mathbb{E}} k[n]$ sending the generator to a class representing $[x_{n}]-Q^{0}[x_{n}]$ and the map ${\mathbb{E}} k[n]\to k$ sending the generator to $0$. Then the map ${\mathbb{E}} k[n]\to C^{*}(K({\mathbb{Z}}/p,n))$ factors through a map $B_{n}\to C^{*}(K({\mathbb{Z}}/p,n))$. (The map in the homotopy category turns out to be independent of the choices.) The following is a key result of \cite{Einfty}, whose proof derives from a calculation of the relationship between the Dyer-Lashof algebra and the Steenrod algebra.
\begin{thm}[{\cite[6.2]{Einfty}}]\label{thm:pems} Let $k$ be a field of characteristic $p>0$; then $B_{n}\to C^{*}(K({\mathbb{Z}}/p,n))$ is a cofibrant approximation. \end{thm}
(As suggested by the hypothesis, we do not need $1-\phi$ to be surjective in the previous theorem; indeed, the easiest way to proceed is to prove it in the case $k={\mathbb{F}}_{p}$ and it then follows easily for all fields of characteristic $p$ by extension of scalars.)
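For reference, the homotopy pushout defining $B_{n}$ above can be displayed as the square \[ \xymatrix@-1pc{ {\mathbb{E}} k[n]\ar[r]\ar[d]&{\mathbb{E}} k[n]\ar[d]\\ k\ar[r]&B_{n} } \] in which the top map sends the generator to a representative of $[x_{n}]-Q^{0}[x_{n}]$ and the left map sends it to $0$.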
The two previous results can be used to calculate $\mathbf{U}(C^{*}(K({\mathbb{Q}},n)))$ and\break $\mathbf{U}(C^{*}(K({\mathbb{Z}}/p,n)))$. In the rational case, \[ \mathbf{U}(C^{*}(K({\mathbb{Q}},n)))\simeq U({\mathbb{E}} {\mathbb{Q}}[n]) = Z(C^{n}(\Delta[\bullet])), \] the simplicial set of $n$-cocycles of $C^{*}(\Delta[\bullet];{\mathbb{Q}})$; this is the original model for $K({\mathbb{Q}},n)$, and a straightforward argument shows that the unit map $K({\mathbb{Q}},n)\to \mathbf{U}(C^{*}(K({\mathbb{Q}},n)))$ is a weak equivalence (the identity map with this model). In the context of Theorem~\ref{thm:rht}, the same kind of argument is made in \cite[10.2]{BG}. In the $p$-adic case, we likewise have that $U({\mathbb{E}} k[n])$ is the original model for $K(k,n)$, and so we get a fiber sequence \[ \Omega K(k,n)\to \mathbf{U}(C^{*}(K({\mathbb{Z}}/p,n)))\to K(k,n)\to K(k,n). \] The map $K(k,n)\to K(k,n)$ is calculated in \cite[6.3]{Einfty} to be the map that on $\pi_{n}$ induces $1-\phi$. The kernel of $1-\phi$ is ${\mathbb{F}}_{p}$ and the unit map $K({\mathbb{Z}}/p,n)\to \mathbf{U}(C^{*}(K({\mathbb{Z}}/p,n)))$ is an isomorphism on $\pi_{n}$. As a consequence, when $1-\phi$ is surjective (as we are assuming), the unit map is a weak equivalence for $K({\mathbb{Z}}/p,n)$.
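In other words (a sketch of the long exact sequence bookkeeping): since the right-hand map induces $1-\phi$ on $\pi_{n}$, the fiber sequence gives $\pi_{n}\mathbf{U}(C^{*}(K({\mathbb{Z}}/p,n)))\cong\ker(1-\phi)\cong{\mathbb{F}}_{p}$ and $\pi_{n-1}\mathbf{U}(C^{*}(K({\mathbb{Z}}/p,n)))\cong\operatorname{coker}(1-\phi)$, which vanishes exactly when $1-\phi$ is surjective on $k$.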
The game now is to show that for all finite type simply connected (or nilpotent) simplicial sets, the derived unit map $X\to \mathbf{U} C^{*}(X)$ is a rational or $p$-adic equivalence\iffalse and the strategy is to work with Postnikov towers\fi. The next result tells how to construct a cofibrant approximation for a homotopy pullback; it is not a formal consequence of the Quillen adjunction, but rather a version of the Eilenberg-Moore theorem.
\begin{prop}[{\cite[\S3]{BG}, \cite[\S3]{Einfty}}]\label{prop:em} Let \[ \xymatrix@-1pc{ W\ar[r]\ar[d]&Y\ar[d]\\Z\ar[r]&X } \] be a homotopy fiber square of simplicial sets. If $X,Y,Z$ are finite $H_{*}(-;k)$-type and $X$ is simply connected, then \[ \xymatrix@-1pc{ C^{*}(X)\ar[r]\ar[d]&C^{*}(Y)\ar[d]\\C^{*}(Z)\ar[r]&C^{*}(W) } \] is a homotopy pushout square of $E_{\infty}$ $k$-algebras or rational commutative differential graded algebras. \end{prop}
Since we can write $K({\mathbb{Z}}/p^{m},n)$ as the homotopy fiber of a map \[ K({\mathbb{Z}}/p^{m-1},n)\to K({\mathbb{Z}}/p,n+1), \] we see that the unit of the derived adjunction is a weak equivalence also for $K({\mathbb{Z}}/p^{m},n)$ (when $k$ is characteristic $p$). Likewise, since products are homotopy pullbacks, we also get that the unit of the derived adjunction is a weak equivalence for $K(A,n)$ when $A$ is a ${\mathbb{Q}}$ vector space (when $k={\mathbb{Q}}$) or when $A$ is a finite $p$-group (when $k$ is characteristic $p$). Although also not a formal consequence of the adjunction, it is elementary to see that when a simplicial set $X$ is the homotopy limit of a sequence $X_{j}$ and the map $\colim H^{*}(X_{j};k)\to H^{*}(X;k)$ is an isomorphism, then $C^{*}(X)$ is the homotopy colimit of $C^{*}(X_{j})$ and $\mathbf{U} C^{*}(X)$ is the homotopy limit of $\mathbf{U} C^{*}(X_{j})$. It follows that for $K({\mathbb{Z}}^{\wedge}_{p},n)$, the unit of the derived adjunction is a weak equivalence (when $k$ is characteristic $p$). For any finitely generated abelian group $A$, the map $K(A,n)\to K(A\otimes {\mathbb{Q}},n)$ is a rational equivalence and the map $K(A,n)\to K(A^{\wedge}_{p},n)$ is a $p$-adic equivalence. Putting these results and tools all together, we see that the unit of the derived adjunction is an $H_{*}(-;k)$-equivalence for any $X$ that can be built as a sequential homotopy limit $\holim X_{j}$ where $X_{0}=*$, the connectivity of the map $X\to X_{j}$ goes to infinity, and each $X_{j+1}$ is the homotopy fiber of a map $X_{j}\to K(\pi_{j+1},n)$ for $\pi_{j+1}$ a finitely generated abelian group, or the rationalization (when $k={\mathbb{Q}}$) or $p$-completion (when $k$ is characteristic $p$) of a finitely generated abelian group. In particular, for a simply connected simplicial set, applying this to the Postnikov tower, we get the following result.
\begin{thm} Assume $k={\mathbb{Q}}$ or $k$ is characteristic $p>0$ and $1-\phi$ is surjective. If $X$ is a simply connected simplicial set of finite $H_{*}(-;k)$-type, then the unit of the derived adjunction $X\to \mathbf{U} C^{*}(X)$ is an $H_{*}(-;k)$-equivalence. \end{thm}
The previous theorem formally implies that $C^{*}$ induces an equivalence of the $H_{*}(-;k)$-local homotopy category of simply connected simplicial sets of finite $H_{*}(-;k)$-type with the full subcategory of the homotopy category of $E_{\infty}$ $k$-algebras or rational commutative differential graded algebras consisting of the objects in its image. The remainder of the proof of Theorems~\ref{thm:rht} and~\ref{thm:pht} is identifying this image subcategory. In the case when $k={\mathbb{Q}}$, it is straightforward to see that a finite type simply connected algebra has a cofibrant approximation that $\mathbf{U}$ turns into a simply connected principal rational finite type Postnikov tower. The argument for $k$ of characteristic $p$ is analogous, but more complicated; see~\cite[\S7]{Einfty}.
\let\bibcomma\relax
\end{document}
\begin{document}
\title{Precision metrology using weak measurements}
\author{Lijian Zhang} \email{[email protected]} \affiliation{National Laboratory of Solid State Microstructures and College of Engineering and Applied Sciences, Nanjing University, Nanjing 210093, China} \affiliation{Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China.} \affiliation{Max Planck Institute for Structure and Dynamics of Material, Hamburg 22761, Germany}
\author{Animesh Datta} \affiliation{Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom} \affiliation{Clarendon Laboratory, Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom}
\author{Ian A. Walmsley} \affiliation{Clarendon Laboratory, Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom}
\date{\today} \begin{abstract}
Weak values and measurements have been proposed as means to achieve dramatic enhancements in metrology based on the greatly increased range of possible measurement outcomes. Unfortunately, the very large values of measurement outcomes occur with highly suppressed probabilities. This raises three vital questions in weak-measurement-based metrology, namely, (Q1) Does post-selection enhance the measurement precision~? (Q2) Does weak measurement offer better precision than strong measurement~? (Q3) Is it possible to beat the standard quantum limit or to achieve the Heisenberg limit with weak measurement using only classical resources~? We analyse these questions for two prototypical, and generic, measurement protocols and show that while the answers to the first two questions are negative for both protocols, the answer to the last is affirmative for measurements with phase-space interactions, and negative for configuration space interactions. Our results, particularly the ability of weak measurements to perform at par with strong measurements in some cases, are instructive for the design of weak-measurement-based protocols for quantum metrology.
\end{abstract}
\pacs{03.65.Ta, 42.50.Lc, 06.20.-f, 42.50.St}
\maketitle
Weak measurements reveal partial information about a quantum state without ``collapsing'' it. This is done by coupling a measurement apparatus (MA) feebly to a test quantum system (QS), the dynamics of which is of interest. The procedure involves probing the QS at an intermediate stage between a pre-selected prepared state and a post-selected state which typically has little overlap with the prepared state~\cite{PhysRevLett.60.1351}. A subsequent projective measurement on the MA yields an outcome known as the ``weak value". The fact that the weak value may lie outside the spectrum of the measurement operator lends itself to some interesting results. This phenomenon has been used to study numerous quantum effects~\cite{PhysRevLett.74.2405, Yokota2009, lundeen:020404, ruskov:200404, williams:026804, goggin_09, palacios2010experimental, PhysRevLett.92.043601, PhysRevLett.93.203902, PhysRevLett.109.100404, chen2013experimental, Molmer2001151, Resch2004, Mir2007, kocsis2011observing} as well as to reconstruct the wavefunctions of quantum states~\cite{lundeen2011direct, PhysRevLett.108.070402, Salvail2013, wu2013state}.
Weak values may dramatically amplify the small perturbations of the meter state arising from the coupling between the QS and MA~\cite{PhysRevLett.66.1107, susa2012optimal, RevModPhys.86.307}. This amplification makes weak measurements potentially useful in estimating the coupling strength with enhanced precision~\cite{Hosten2008, dixon:173601, brunner_2009, 2013arXiv1306.4768X, PhysRevLett.107.133603, PhysRevLett.106.080405, PhysRevLett.109.013901}. Yet, the amplification effect of weak measurement comes at the cost of a reduced rate at which data can be acquired due to the requirement to select almost orthogonal pre- and post-selected states of the QS. This leads to a majority of trials being ``lost". Thus, the central question is whether the amplification effect of a weak measurement can overcome the corresponding reduction in the occurrence of such events to provide an estimation at a precision surpassing conventional techniques. This issue has garnered substantial interest recently~\cite{starling:041803, PhysRevA.84.052111}, in particular the amplification of information~\cite{2013arXiv1306.2409T, 2013arXiv1307.4016F, combes2013probabilistic} and their role in alleviating technical imperfections~\cite{PhysRevLett.107.133603, PhysRevA.87.012115, PhysRevX.4.011032, PhysRevX.4.011031, 2014arXiv1402.0199V}. However, an unequivocal agreement as to the ultimate efficacy of weak measurements in precision metrology is still lacking. Our endeavour in this work is to provide such an answer in the ideal scenario (i.e. without technical imperfections).
In this Letter, we show that post-selection does not enhance the precision of estimation, that weak measurements do not offer better precision than strong measurements, and that it is possible to beat the standard quantum limit and to achieve the Heisenberg limit of quantum metrology with weak measurements using only classical resources. These apparently contradictory conclusions arise from a complete consideration of where the maximum information resides in the weak measurement protocol. Our results are valid both for single-particle MA states, in which the QS couples to a continuous degree of freedom of the MA, and for multi-particle states of a bosonic MA. Although in both cases the MA may have similar mathematical representations, the degrees of freedom involved are different, and therefore the scaling of the precision is different and is in consequence analyzed separately. Our analysis properly counts the resources involved in the measurement process, enabling us to compare the precision of different measurement strategies and strengths using tools of classical and quantum Fisher information. Weak measurements have a rich structure, and offer some prospects for novel strategies for quantum-enhanced metrology. Nonetheless, we show that a new approach is required to harness this potential.
\textit{Framework:} Our aim is to estimate a parameter associated with the interaction between two systems. We focus on the situation in which one of them, the QS, is a two-state system with eigenstates $\{|-1\rangle, |+1\rangle\}$ of an observable $\hat{S}$ and corresponding eigenvalues $-1$ and $+1$. The initial (pre-selected) state of the QS is prepared as $|\psi_i\rangle = \cos (\theta_i/2)|-1\rangle + \sin (\theta_i/2) e^{i\phi_i} |+1\rangle$. The initial state of the other system, the MA, is $\ket{\Phi_i}$. The coupling strength $g$, which is to be estimated, appears in the Hamiltonian $H = -g \delta(t-t_0) \hat{S} \hat{M}$ coupling the MA to the QS, where $\hat{M}$ is an observable of the MA. After this interaction, the joint state of the MA and the QS is \begin{equation}
|\Psi_j\rangle = \cos \frac{\theta_i}{2}|-1\rangle |\Phi_{-g}\rangle + \sin \frac{\theta_i}{2} e^{i\phi_i} |+1\rangle |\Phi_{+g}\rangle, \label{eq:state_joint} \end{equation}
where $|\Phi_{\pm g}\rangle = \exp(\mp i g \hat{M}) \ket{\Phi_i}$. Post-selecting the QS in the state $|\psi_f\rangle = \cos (\theta_f/2)|-1\rangle + \sin (\theta_f/2) e^{i\phi_f} |+1\rangle$ leads to the MA state $|\Phi_d\rangle = \left(\gamma_{d}^{-}|\Phi_{-g}\rangle + \gamma_{d}^{+}|\Phi_{+g}\rangle \right)/\sqrt{p_d}, $ with $\gamma_{d}^{-} = \cos (\theta_i/2) \cos (\theta_f/2)$, $\gamma_{d}^{+} = \sin (\theta_i/2) \sin (\theta_f/2) \exp(i\phi_0)$ and $\phi_0 = \phi_i - \phi_f$. The probability of successful post-selection, i.e., of obtaining $\ket{\Phi_d}$, is $p_d$. When the post-selection fails (with probability $p_r = 1-p_d$), the MA state, which is not considered in the original protocol and is often ignored in experiments, is
$ |\Phi_r\rangle = \left(\gamma_{r}^{-}|\Phi_{-g}\rangle + \gamma_{r}^{+} |\Phi_{+g}\rangle \right)/\sqrt{1-p_d},$ where $\gamma_{r}^{-} = \cos (\theta_i/2) \sin (\theta_f/2)$, $\gamma_{r}^{+} = - \sin (\theta_i/2) \cos (\theta_f/2) \exp(i\phi_0)$. Repeating the pre-selection--coupling--post-selection process $N$ times yields $Np_d$ copies of $\ket{\Phi_d}$ and $N(1-p_d)$ copies of $\ket{\Phi_r}$. The best attainable precision in estimating $g$ is given by the Cram\'{e}r-Rao bound $\Delta^2 g \geq 1/(NF_{tot})$~\cite{PhysRevLett.72.3439}, where $F_{tot}$ is the total classical and quantum Fisher information (FI) contained in the different stages of the pre-selection--coupling--post-selection process. Note that the single-parameter Cram\'{e}r-Rao bound, both quantum and classical, can always be attained asymptotically for large $N$ with maximum-likelihood estimation.
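As a toy illustration of this last point (a minimal sketch; the Bernoulli model below is hypothetical and unrelated to the specific meter states considered in this Letter), a short simulation shows the variance of a maximum-likelihood estimate approaching the Cram\'{e}r-Rao bound $1/(NF)$:
\begin{verbatim}
import numpy as np

# Hypothetical toy model: a single Bernoulli outcome with success
# probability p(g) = sin^2(g).  Its classical Fisher information is
# F = (dp/dg)^2 / [p(1-p)] = 4, so the Cramer-Rao bound for N trials
# is 1/(4N).  The maximum-likelihood estimate is g_hat = arcsin(sqrt(k/N)).
rng = np.random.default_rng(0)
g_true, N, reps = 0.7, 2000, 5000
k = rng.binomial(N, np.sin(g_true)**2, size=reps)
g_hat = np.arcsin(np.sqrt(k / N))
print(np.var(g_hat), 1.0 / (4 * N))   # both close to 1.25e-4
\end{verbatim}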
Depending on the estimation protocol, $F_{tot}$ may take different values. To date, almost all applications of weak measurement to precision metrology focus on the amplification effect of weak values, which corresponds to considering only the information about $g$ contained in $|\Phi_d\rangle$. In this situation $F_{tot} = p_d Q_{d}$, where $Q_{d}$ is the quantum FI (QFI) of $\ket{\Phi_d}$, \textit{i.e.} the maximum FI that can be achieved with the optimal measurement on $\ket{\Phi_d}$, namely a set of projection operators onto the eigenstates of the symmetric logarithmic derivative of $\ket{\Phi_d}$~\cite{PhysRevLett.72.3439}.
$p_d Q_{d}$ can be viewed as the total information in the post-selected meter state. In addition, one may monitor the failure mode $|\Phi_r\rangle$ to achieve better precision in parameter estimation~\cite{PhysRevA.86.040102, PhysRevLett.110.083605} and state tomography~\cite{wu2013state}. The maximum information in the failure mode is $(1-p_d)Q_r$, where $Q_r$ is the QFI of $\ket{\Phi_r}$. Finally, the distribution $\{p_d,1-p_d\}$ of the post-selection outcomes on the QS also contains information about $g$. This distribution yields a classical FI $F_p$, which we refer to as the information in the post-selection process. Accounting for all these contributions, we have (see the supplementary material for a proof) \begin{equation} F_{tot} = p_d Q_d + (1-p_d)Q_r + F_p. \label{eq:ftot} \end{equation}
The whole process (post-selection plus measurements on the MA state) is a special case of a global measurement on the joint state $|\Psi_j\rangle$~\cite{supp}; therefore $F_{tot}$ is no larger than the QFI $Q_j$ of $|\Psi_j\rangle$, \textit{i.e.} post-selection cannot increase the precision in estimating $g$. This seemingly straightforward result provides important insight into the relation between the amplification effect and the measurement precision, and allows us to access the rich structure of weak measurement and evaluate its quantum advantages. In particular, we note that $Q_d$ or $Q_r$ alone may be larger than $Q_j$ due to the amplification effect of weak values. Nevertheless, this apparent gain of information is completely canceled by the small probability of successful post-selection. Moreover, the post-selection process itself may contain important information $F_p \geq 0$. This analysis goes beyond previous studies~\cite{2013arXiv1306.2409T} by considering all the contributions to the total information, and thus provides a complete answer to Q1 posed in the abstract. We note that a similar conclusion was independently and contemporaneously reached in~\cite{combes2013probabilistic}. In the following sections, we provide answers to Q2 and Q3 for both configuration-space and phase-space interactions.
\textit{Configuration-space interactions:} We begin with the most widely used scenario in weak measurement~\cite{PhysRevLett.60.1351, PhysRevLett.66.1107, PhysRevLett.92.043601, PhysRevLett.93.203902, Hosten2008, dixon:173601, brunner_2009, 2013arXiv1306.4768X, PhysRevLett.106.080405, PhysRevLett.109.013901}, where both the QS and MA are single-particle states, possibly in different degrees of freedom of the same particle~\footnote{To be precise, the QS and MA in these implementations are (multi-mode) coherent states. Yet as we will show, the following analysis can be applied with little modification}. In this situation, the MA is normally prepared in a Gaussian state, written in terms of two conjugate variables as \begin{eqnarray}
|\Phi\rangle &=& \int dq \frac{1}{(2\pi\sigma^2)^{1/4}} \exp(-\frac{q^2}{4\sigma^2}) |q\rangle \nonumber \\
&=& \int dp \frac{(2 \sigma^2)^{1/4}}{\pi^{1/4}} \exp(-\sigma^2 p^2) |p\rangle, \label{eq:meter_gauss}
\end{eqnarray} where $p$ and $q$ are, e.g., momentum and position or time and frequency. The two representations are related via a Fourier transform. The interaction Hamiltonian between the QS and MA is chosen as $H = -g \delta(t-t_0) \hat{S} \hat{q}$. Note that this interaction Hamiltonian entangles the QS with an \textit{external} degree of freedom of the MA; it does not change the particle-number distribution of the MA state. After the interaction and post-selection, the MA state becomes $|\Phi_k\rangle = \int dp \phi_k(g, p) |p\rangle$ ($k = d,r$) with \begin{equation} \phi_k(g,p) = \frac{(2 \sigma^2)^{1/4}}{\pi^{1/4}\sqrt{p_k}} \left[\gamma_{k}^{-}e^{-\sigma^2 (p+g)^2} + \gamma_{k}^{+} e^{-\sigma^2 (p-g)^2}\right]. \label{eq:phi_phys} \end{equation} The probability of successful post-selection is \begin{equation} p_d = \frac{1+\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \cos\phi_0 e^{-2s^2}}{2}, \label{eq:ps_phys} \end{equation} with $s = g\sigma$ characterising the measurement strength. With Eqns. (\ref{eq:phi_phys}, \ref{eq:ps_phys}) we can evaluate $Q_d$, $Q_r$ and $F_p$~\cite{supp}: \begin{eqnarray} Q_d & = & \frac{4\sigma^2}{p_d} \left[p_d + S\left(2s^2-1\right)-\frac{1}{p_d}S^2 s^2\right], \nonumber \label{eq:qm_phys} \\ Q_r & = & \frac{4\sigma^2}{1-p_d} \left[1-p_d - S\left(2s^2-1\right)-\frac{1}{1-p_d}S^2 s^2\right], \nonumber \label{eq:qf_phys} \\ F_p & = & \frac{4\sigma^2s^2 S^2}{p_d(1-p_d)}, \label{eq:fp_phys} \end{eqnarray} where $S=e^{-2s^2}\sin\theta_i \sin\theta_f \cos\phi_0$. Further, the QFI of the joint meter-system state before post-selection is $Q_j = 4\sigma^2$. We can now calculate $F_{tot}$ for different estimation strategies. In particular, if we take all the contributions in Eq.~(\ref{eq:ftot}) into account, we have $F_{tot} = Q_j$, \textit{i.e.} we achieve the maximal precision. A commonly employed strategy retains only the information in the successfully post-selected meter state. In this case, the complicated functional form of $F_{tot}=p_d Q_d$ demands numerical maximization over $\psi_i$ and $\psi_f$. Nonetheless, the limits that can be obtained analytically suffice to answer Q2. In the weak measurement limit, defined as $s\rightarrow 0$, \begin{equation} p_d Q_d = 2 \sigma^2 (1+\cos\theta_i \cos\theta_f - \sin\theta_i \sin\theta_f \cos\phi_0), \label{eq:fi_MA_weak} \end{equation}
the maximum value of which is $4\sigma^2$, attained when either $\theta_i = -\theta_f$ and $\phi_0 =0$ or $\theta_i = \theta_f$ and $\phi_0 = \pi$. Interestingly, this does not coincide in general with the situation in which the weak value is largest, which requires $p_d = |\langle \psi_i | \psi_f \rangle|^2 \rightarrow 0$~\cite{PhysRevA.84.052111}. In the limit of strong measurement, $s \gg 1$,
\begin{equation} p_d Q_d = 2 \sigma^2 (1+\cos\theta_i \cos\theta_f), \label{eq:fi_MA_strong} \end{equation} which also attains the maximum of $4\sigma^2$, but now when the pre- and post-selected states are both $\ket{+1}$ or both $\ket{-1}$. In both these limits, $p_d Q_d = Q_j$, $F_p=0$ and $Q_r = 0$.
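These closed-form expressions can be checked directly; the minimal script below (illustrative only) confirms that the three contributions always add up to $Q_j = 4\sigma^2$, as stated above, for randomly drawn pre- and post-selections, couplings and meter widths:
\begin{verbatim}
import numpy as np

def gaussian_meter_FIs(theta_i, theta_f, phi0, g, sigma):
    # Closed-form expressions quoted in the text (Gaussian meter).
    s  = g * sigma
    S  = np.exp(-2 * s**2) * np.sin(theta_i) * np.sin(theta_f) * np.cos(phi0)
    pd = 0.5 * (1 + np.cos(theta_i) * np.cos(theta_f) + S)
    Qd = 4 * sigma**2 / pd * (pd + S * (2 * s**2 - 1) - S**2 * s**2 / pd)
    Qr = 4 * sigma**2 / (1 - pd) * (1 - pd - S * (2 * s**2 - 1)
                                    - S**2 * s**2 / (1 - pd))
    Fp = 4 * sigma**2 * s**2 * S**2 / (pd * (1 - pd))
    return pd, Qd, Qr, Fp

rng = np.random.default_rng(1)
for _ in range(5):
    th_i, th_f, phi0 = rng.uniform(0.1, np.pi - 0.1, size=3)
    g, sigma = rng.uniform(0.1, 2.0, size=2)
    pd, Qd, Qr, Fp = gaussian_meter_FIs(th_i, th_f, phi0, g, sigma)
    print((pd * Qd + (1 - pd) * Qr + Fp) / (4 * sigma**2))  # = 1.0 each time
\end{verbatim}
In the weak ($s \ll 1$) and strong ($s \gg 1$) limits the same expressions reproduce Eqns.~(\ref{eq:fi_MA_weak}) and (\ref{eq:fi_MA_strong}), respectively.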
More generally, non-Gaussian MA states also achieve this precision (see the supplementary material for a proof). This may be relevant to recent experiments that exploit this resource~\cite{shomroni2013demo, PhysRevLett.109.040401}. The conclusion is that, for a fixed meter uncertainty $\sigma$, the precision in the weak measurement limit, i.e., when estimating a small coupling $g$ through pre-selection--coupling--post-selection, is no better than that in the strong measurement limit, i.e., when the coupling is large. However, if the parameter to be estimated is fixed, the precision always improves if we use a meter state with larger $\sigma$, as is evident from Eqns.~(\ref{eq:fi_MA_weak}, \ref{eq:fi_MA_strong}) and from $F_{tot}$, since the FIs are proportional to $\sigma^2$. This answers Q2 for the configuration-space-interaction scenario.
This analysis focuses on the effect of the uncertainty in the external degrees of freedom of the MA, as in previous works~\cite{PhysRevLett.66.1107, Hosten2008, dixon:173601, brunner_2009, 2013arXiv1306.4768X, PhysRevLett.106.080405, PhysRevLett.109.013901, starling:041803, PhysRevA.84.052111, PhysRevA.86.040102}, showing that weak measurements may or may not offer an advantage there. In quantum metrology, however, the relevant measure of the resource required to effect a measurement is the average number of photons ($n$) in the MA state. The scaling of the precision of estimation with respect to $n$ is the signature of whether the system is capable of operating beyond the standard quantum limit (in which the FI scales linearly in $n$) and offering genuine quantum advantages. Since the interaction Hamiltonian does not change particle-number distributions, for a QS and MA prepared in (multi-mode) coherent states with amplitude $\alpha$, the post-selected meter states are also multi-mode coherent states, and the FIs in Eqns. (\ref{eq:fi_MA_weak}, \ref{eq:fi_MA_strong}) pick up an additional factor of $n=|\alpha|^2$. The scaling is thus at the standard quantum limit. This is the answer to Q3 for the configuration-space-interaction scenario.
\textit{Phase-space interactions:} We now consider an interaction that can change the particle-number distribution of the MA. The initial state of the QS, $\ket{\psi_i}$, is the same as before, while the MA is prepared in a coherent state $\ket{\alpha}$. A state-dependent interaction with $\hat{M} = \hat{n}$, where $\hat{n}$ is the particle number operator, leads to~\footnote{See, for instance, Eq. (2) in Ref.~\cite{PhysRevLett.107.133603}, where $g = \phi_0/2$.} \begin{equation}
|\Psi\rangle = \cos \frac{\theta_i}{2}|-1\rangle |\alpha \rangle + \sin \frac{\theta_i}{2} e^{i\phi_i} |+1\rangle |\alpha e^{i2g}\rangle, \label{eq:psi_full_2} \end{equation} or~\cite{2014arXiv1409.3488J} \begin{equation}
|\Psi\rangle = \cos \frac{\theta_i}{2}|-1\rangle |\alpha e^{-ig}\rangle + \sin \frac{\theta_i}{2} e^{i\phi_i} |+1\rangle |\alpha e^{ig}\rangle. \label{eq:psi_full} \end{equation}
Both states yield the same precision in estimating $g$ when $n$ is large. In the following, we focus on the symmetric form in Eq.~(\ref{eq:psi_full}). The meter states after post-selection are ($k=d,r$) $|\Phi_k\rangle = \left(\gamma_{k}^{-}|\alpha e^{-ig}\rangle + \gamma_{k}^{+} |\alpha e^{ig}\rangle\right)/\sqrt{p_k}. $
The probabilities of obtaining these states and the corresponding FIs are given in~\cite{supp}. Again, the QFIs are attainable with the optimal measurements on $|\Phi_k\rangle$.
The QFI of the system-meter state in Eq.~(\ref{eq:psi_full}) is~\cite{supp} $ Q_{j} = 4n^2\sin^2\theta_i+4n, $ where $n=|\alpha|^2$ is again the mean photon number (or energy) of the meter state. (Similarly, the QFI of the state in Eq.~(\ref{eq:psi_full_2}) is $Q_{j} = 4n^2\sin^2\theta_i + 4n\left[4\sin^2(\theta_i/2)\right]$.)
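The expression for $Q_j$ can also be verified numerically by expanding the joint state of Eq.~(\ref{eq:psi_full}) in a truncated Fock basis. The following minimal sketch is purely illustrative; the truncation dimension and parameter values are chosen ad hoc, and $\phi_i$ is set to zero (on which $Q_j$ does not depend):
\begin{verbatim}
import numpy as np

def coherent(alpha, dim):
    # Fock-space amplitudes of |alpha>, truncated at `dim` photons.
    amps = np.zeros(dim, dtype=complex)
    amps[0] = np.exp(-abs(alpha)**2 / 2)
    for k in range(1, dim):
        amps[k] = amps[k - 1] * alpha / np.sqrt(k)
    return amps

def joint_state(g, alpha, theta_i, dim):
    # Joint state of Eq. (psi_full); block 0 <-> |-1>, block 1 <-> |+1>.
    psi = np.zeros(2 * dim, dtype=complex)
    psi[:dim] = np.cos(theta_i / 2) * coherent(alpha * np.exp(-1j * g), dim)
    psi[dim:] = np.sin(theta_i / 2) * coherent(alpha * np.exp(1j * g), dim)
    return psi

def qfi(state_fn, g, eps=1e-5):
    # Pure-state QFI, 4(<dpsi|dpsi> - |<psi|dpsi>|^2), by central differences.
    psi  = state_fn(g)
    dpsi = (state_fn(g + eps) - state_fn(g - eps)) / (2 * eps)
    return 4 * (np.vdot(dpsi, dpsi) - abs(np.vdot(psi, dpsi))**2).real

alpha, theta_i, g = 2.0, 0.7, 0.3
n = abs(alpha)**2
print(qfi(lambda x: joint_state(x, alpha, theta_i, 80), g),
      4 * n**2 * np.sin(theta_i)**2 + 4 * n)   # the two numbers agree
\end{verbatim}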
$Q_{j}$ is the maximum amount of information, and can exhibit quantum scaling ($\sim n^2$) depending on the initial system state. The expression for $Q_{j}$ immediately suggests that $\theta_i=0,\pi$ will never provide a better-than-classical scaling. These are the two cases when the initial state is an eigenstate of $\hat{S}$, so that no entanglement is generated between the QS and MA. Indeed, for $|\psi_i\rangle = \ket{\pm 1},$ $p_d Q_d = 2n(1\pm \cos\theta_f),$ $(1-p_d)Q_{r}=2n(1\mp \cos\theta_f)$ and $F_p=0.$ Thus, $F_{tot}=4n,$ but the information may be equally shared between the successful and the failed post-selection mode. This is important since the failed post-selection mode is generally discarded completely~\cite{PhysRevLett.66.1107, Hosten2008, dixon:173601, brunner_2009, 2013arXiv1306.4768X, PhysRevLett.106.080405, PhysRevLett.109.013901, PhysRevLett.107.133603}.
In contrast, the maximal $Q_j$ is found for $\theta_i=\pi/2$. We immediately find that $\theta_f=0,\pi$ provides no better-than-classical scaling either. Thus, we set $\theta_f=\pi/2$ as well and find that, as $g\rightarrow 0$, \begin{equation} F_p=4n^2. \label{eq:heisenberg} \end{equation} This result shows that quantum-enhanced scaling can be attained in the sensing of the coupling parameter $g$ in a weak measurement setup. On the other hand, in this same situation the QFIs for both the successful and failed post-selection modes scale classically; $p_d Q_d=4n\sin^2(\phi_0/2)$ and $(1-p_d)Q_r=4n\cos^2(\phi_0/2)$, where $\phi_0 = \phi_i-\phi_f$. This shows that $p_d Q_d$ achieves its maximum when $\phi_0 \rightarrow \pi$, \textit{i.e.} when $\psi_i$ and $\psi_f$ are orthogonal. Note also that if we take all the contributions into account we have $F_{tot}=Q_{j}.$ This is a particularly interesting situation since most, if not all, earlier experiments considered only the information $Q_d$ contained in the successfully post-selected MA state. Yet, as our calculation shows, the post-selection process has much more information, and indeed scales at the Heisenberg limit. The parameter $g$ can be estimated with the precision derived in Eq.~(\ref{eq:heisenberg}) from the statistics of the success/failure of the post-selection using a maximum likelihood estimator.
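This Heisenberg scaling of $F_p$ can be illustrated with a rough numerical sketch (illustrative only; it assumes nothing beyond the definitions above and the standard coherent-state overlap $\langle\alpha e^{-ig}|\alpha e^{ig}\rangle = \exp[n(e^{2ig}-1)]$, and the subleading correction to $4n^2$ is linear in $n$):
\begin{verbatim}
import numpy as np

def p_success(g, n, phi0):
    # Post-selection probability for theta_i = theta_f = pi/2 and a
    # coherent meter state with mean photon number n = |alpha|^2.
    return (0.5 + 0.5 * np.exp(n * (np.cos(2 * g) - 1))
                * np.cos(phi0 + n * np.sin(2 * g)))

def dp_dg(g, n, phi0):
    # Analytic derivative of p_success with respect to g.
    return (-n * np.exp(n * (np.cos(2 * g) - 1))
               * np.sin(phi0 + 2 * g + n * np.sin(2 * g)))

phi0 = np.pi
for n in [1, 10, 100, 1000]:
    g = 0.01 / n                       # deep in the weak-coupling regime
    p = p_success(g, n, phi0)
    F_p = dp_dg(g, n, phi0)**2 / (p * (1 - p))
    print(n, F_p / (4 * n**2))         # ratio tends to 1 as n grows
\end{verbatim}
The ratio approaches unity for large $n$, consistent with Eq.~(\ref{eq:heisenberg}).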
\begin{figure}
\caption{Contributions to the total information from the three constituents in the conditional-phase-rotation scenario with pre- and post-selected QS states $\psi_i = (\ket{-1}+\ket{+1})/\sqrt{2}$ and $\psi_f = (\ket{-1}-\ket{+1})/\sqrt{2}$, and initial MA state $|\alpha\rangle$. Red: $F_p$; green: $p_d Q_d$; blue: $(1-p_d)Q_r$. The sum of the three quantities, $F_{tot}$, equals $Q_j = 4n^2+4n$, the total QFI of the joint system-meter state, with $n = |\alpha|^2$.}
\label{fig:allqfi}
\end{figure}
For interaction strengths $g > 0,$ the contributions of the different terms in $F_{tot}$ change. In Fig.~(\ref{fig:allqfi}), we plot the FI and QFIs contributing to $F_{tot}$ for $\phi_0=\pi.$ Exploiting a symmetry of our model, we only plot the results for $g$ in the range $[0,\pi/2]$. As shown earlier, for $g \rightarrow 0$ the main contribution comes from $F_p,$ the classical FI in the post-selection distribution. As $g$ increases, $F_p$ falls, and the information in the post-selected states for both successful and failed QS measurement outcomes rises. For $g=\pi/2,$ we plot the contributions in greater detail in Fig.~(\ref{fig:qfi}) for $\phi_0=\pi.$ In this case, $F_p=0$ while $(1-p_d)Q_r$ and $p_d Q_d$ are almost equal. Indeed, the difference between the QFIs decreases with $n$, as $ p_d Q_d - (1-p_d)Q_r = -4n(n-1) \exp(-2n).$ For $n \gg 1,$ up to a small exponential correction, there is thus as much information in the successful post-selection mode as in the failed mode, and both of them scale better than classically. In all cases, the total $F_{tot}$ still matches the maximum attainable QFI, that is, $Q_{j}.$
These results provide answers to Q2 and Q3 for the conditional-phase-shift scenario.
\begin{figure}
\caption{Classical and quantum FIs for $g=\pi/2$ in the conditional-phase-rotation scenario with $\psi_i = (\ket{-1}+\ket{+1})/\sqrt{2}$ and $\psi_f = (\ket{-1}-\ket{+1})/\sqrt{2}$, and initial MA state $|\alpha\rangle$. Green: $p_d Q_d$; blue: $(1-p_d)Q_r$; black: $Q_{j}=4n^2+4n$; brown: classical scaling of $4n$. $F_{p}$ is not shown since it vanishes. The green and blue lines add up to the black line.}
\label{fig:qfi}
\end{figure}
\textit{Discussion and Conclusions:} It is perhaps unsurprising that the Heisenberg limit for estimating the coupling parameter $g$ in the conditional-phase-shift interaction can be attained when the system-meter coupling is strong, since in that case the post-selected MA states are Schr\"odinger-cat states. That is, the measurement protocol produces highly non-classical states in the joint system. In the case of weak coupling ($g\rightarrow 0$), however, the post-selected MA states are classical, and the Heisenberg scaling arises only in the post-selection process itself. How this conditioning step using a classical MA state achieves a precision beyond the standard quantum limit is therefore an interesting open question.
Our calculations show that not only the failed post-selection mode but also the post-selection process itself contains useful information. The analysis provides answers to three long-standing questions in the study of weak measurement posed in the abstract: (A1) Post-selection cannot enhance the measurement precision, even when all the contributions are taken into account. (A2) For equal resources, weak measurement does not give improved precision over strong measurement when both measurements are optimized. In particular, this result applies to all previous experiments that have explored weak-measurement enhancements to precision metrology. (A3) Weak measurement that modifies the particle-number distribution of the meter state can yield quantum-enhanced precision, even though no non-classical states need be involved. These results highlight the rich structure of weak measurement and shed new light on both the understanding of quantum measurement and the development of new technologies for practical quantum metrology.
\begin{acknowledgments} We thank M. Barbieri for several useful comments on the manuscript. This work was supported by National Basic Research Programme of China (No. 2011CBA00205), the Engineering and Physical Sciences Research Council (EP/H03031X/1, EP/K034480/1, EP/K04057X/1, EP/M01326X/1 and EP/M013243/1), the Air Force Office of Scientific Research (European Office of Aerospace Research and Development), and the Priority Academic Program Development of Jiangsu Higher Education Institutions. LZ acknowledges support from Alexander von Humboldt Foundation and 1000 Youth Fellowship Program of China. \end{acknowledgments}
\begin{thebibliography}{47} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(1988)\citenamefont
{Aharonov}, \citenamefont {Albert},\ and\ \citenamefont
{Vaidman}}]{PhysRevLett.60.1351}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Aharonov}}, \bibinfo {author} {\bibfnamefont {D.~Z.}\ \bibnamefont
{Albert}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Vaidman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {1351}
(\bibinfo {year} {1988})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steinberg}(1995)}]{PhysRevLett.74.2405}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{Steinberg}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages}
{2405} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yokota}\ \emph {et~al.}(2009)\citenamefont {Yokota},
\citenamefont {Yamamoto}, \citenamefont {Koashi},\ and\ \citenamefont
{Imoto}}]{Yokota2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Yokota}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yamamoto}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Koashi}}, \ and\ \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Imoto}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf
{\bibinfo {volume} {11}},\ \bibinfo {pages} {033011} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lundeen}\ and\ \citenamefont
{Steinberg}(2009)}]{lundeen:020404}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Lundeen}}\ and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{Steinberg}},\ }\href {http://link.aps.org/abstract/PRL/v102/e020404}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {020404} (\bibinfo
{year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ruskov}\ \emph {et~al.}(2006)\citenamefont {Ruskov},
\citenamefont {Korotkov},\ and\ \citenamefont {Mizel}}]{ruskov:200404}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Ruskov}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Korotkov}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Mizel}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {eid}
{200404} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Williams}\ and\ \citenamefont
{Jordan}(2008)}]{williams:026804}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~S.}\ \bibnamefont
{Williams}}\ and\ \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Jordan}},\ }\href {http://link.aps.org/abstract/PRL/v100/e026804} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {100}},\ \bibinfo {eid} {026804} (\bibinfo {year}
{2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Goggin}\ \emph {et~al.}(2011)\citenamefont {Goggin},
\citenamefont {Almeida}, \citenamefont {Barbieri}, \citenamefont {Lanyon},
\citenamefont {O'Brien}, \citenamefont {White},\ and\ \citenamefont
{Pryde}}]{goggin_09}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont
{Goggin}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Almeida}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Barbieri}}, \bibinfo
{author} {\bibfnamefont {B.~P.}\ \bibnamefont {Lanyon}}, \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {O'Brien}}, \bibinfo {author}
{\bibfnamefont {A.~G.}\ \bibnamefont {White}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.~J.}\ \bibnamefont {Pryde}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Proceedings of the National Academy of
Sciences}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {1256}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Palacios-Laloy}\ \emph {et~al.}(2010)\citenamefont
{Palacios-Laloy}, \citenamefont {Mallet}, \citenamefont {Nguyen},
\citenamefont {Bertet}, \citenamefont {Vion}, \citenamefont {Esteve},\ and\
\citenamefont {Korotkov}}]{palacios2010experimental}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Palacios-Laloy}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Mallet}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nguyen}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bertet}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Vion}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Esteve}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~N.}\ \bibnamefont {Korotkov}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature Physics}\ }\textbf {\bibinfo {volume}
{6}},\ \bibinfo {pages} {442} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Solli}\ \emph {et~al.}(2004)\citenamefont {Solli},
\citenamefont {McCormick}, \citenamefont {Chiao}, \citenamefont {Popescu},\
and\ \citenamefont {Hickmann}}]{PhysRevLett.92.043601}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont
{Solli}}, \bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont
{McCormick}}, \bibinfo {author} {\bibfnamefont {R.~Y.}\ \bibnamefont
{Chiao}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}, \
and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Hickmann}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {043601} (\bibinfo
{year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brunner}\ \emph {et~al.}(2004)\citenamefont
{Brunner}, \citenamefont {Scarani}, \citenamefont {Wegm\"uller},
\citenamefont {Legr\'e},\ and\ \citenamefont
{Gisin}}]{PhysRevLett.93.203902}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Brunner}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Scarani}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Wegm\"uller}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Legr\'e}}, \ and\ \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {93}},\ \bibinfo {pages} {203902} (\bibinfo {year}
{2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rozema}\ \emph {et~al.}(2012)\citenamefont {Rozema},
\citenamefont {Darabi}, \citenamefont {Mahler}, \citenamefont {Hayat},
\citenamefont {Soudagar},\ and\ \citenamefont
{Steinberg}}]{PhysRevLett.109.100404}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~A.}\ \bibnamefont
{Rozema}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Darabi}},
\bibinfo {author} {\bibfnamefont {D.~H.}\ \bibnamefont {Mahler}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Hayat}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Soudagar}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {109}},\ \bibinfo {pages} {100404} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2013)\citenamefont {Chen},
\citenamefont {Zou}, \citenamefont {Xu}, \citenamefont {Tang}, \citenamefont
{Li}, \citenamefont {Han}, \citenamefont {Li}, \citenamefont {Ni},
\citenamefont {Yu}, \citenamefont {Li} \emph
{et~al.}}]{chen2013experimental}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zou}}, \bibinfo
{author} {\bibfnamefont {X.-Y.}\ \bibnamefont {Xu}}, \bibinfo {author}
{\bibfnamefont {J.-S.}\ \bibnamefont {Tang}}, \bibinfo {author}
{\bibfnamefont {Y.-L.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{Y.-J.}\ \bibnamefont {Han}}, \bibinfo {author} {\bibfnamefont {C.-F.}\
\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.-C. G. H.-q.}\
\bibnamefont {Ni}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yu}},
\bibinfo {author} {\bibfnamefont {M.-f.}\ \bibnamefont {Li}}, \emph
{et~al.},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv
preprint arXiv:1306.1027}\ } (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M\o lmer}(2001)}]{Molmer2001151}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{M\o lmer}},\ }\href {\doibase http://dx.doi.org/10.1016/S0375-9601(01)00783-6}
{\bibfield {journal} {\bibinfo {journal} {Physics Letters A}\ }\textbf
{\bibinfo {volume} {292}},\ \bibinfo {pages} {151 } (\bibinfo {year}
{2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Resch}\ \emph {et~al.}(2004)\citenamefont {Resch},
\citenamefont {Lundeen},\ and\ \citenamefont {Steinberg}}]{Resch2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont
{Resch}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Lundeen}},
\ and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physics Letters
A}\ }\textbf {\bibinfo {volume} {324}},\ \bibinfo {pages} {125} (\bibinfo
{year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mir}\ \emph {et~al.}(2007)\citenamefont {Mir},
\citenamefont {Lundeen}, \citenamefont {Mitchell}, \citenamefont {Steinberg},
\citenamefont {Garretson},\ and\ \citenamefont {Wiseman}}]{Mir2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Mir}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Lundeen}},
\bibinfo {author} {\bibfnamefont {M.~W.}\ \bibnamefont {Mitchell}}, \bibinfo
{author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}}, \bibinfo {author}
{\bibfnamefont {J.~L.}\ \bibnamefont {Garretson}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.~M.}\ \bibnamefont {Wiseman}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo
{volume} {9}},\ \bibinfo {pages} {287} (\bibinfo {year} {2007})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Kocsis}\ \emph {et~al.}(2011)\citenamefont {Kocsis},
\citenamefont {Braverman}, \citenamefont {Ravets}, \citenamefont {Stevens},
\citenamefont {Mirin}, \citenamefont {Shalm},\ and\ \citenamefont
{Steinberg}}]{kocsis2011observing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Kocsis}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Braverman}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ravets}}, \bibinfo
{author} {\bibfnamefont {M.~J.}\ \bibnamefont {Stevens}}, \bibinfo {author}
{\bibfnamefont {R.~P.}\ \bibnamefont {Mirin}}, \bibinfo {author}
{\bibfnamefont {L.~K.}\ \bibnamefont {Shalm}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{332}},\ \bibinfo {pages} {1170} (\bibinfo {year} {2011})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Lundeen}\ \emph {et~al.}(2011)\citenamefont
{Lundeen}, \citenamefont {Sutherland}, \citenamefont {Patel}, \citenamefont
{Stewart},\ and\ \citenamefont {Bamber}}]{lundeen2011direct}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Lundeen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Sutherland}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Patel}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Stewart}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Bamber}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {474}},\ \bibinfo {pages} {188} (\bibinfo {year}
{2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lundeen}\ and\ \citenamefont
{Bamber}(2012)}]{PhysRevLett.108.070402}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Lundeen}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Bamber}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {108}},\ \bibinfo {pages} {070402}
(\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Salvail}\ \emph {et~al.}(2013)\citenamefont
{Salvail}, \citenamefont {Agnew}, \citenamefont {Johnson}, \citenamefont
{Bolduc}, \citenamefont {Leach},\ and\ \citenamefont {Boyd}}]{Salvail2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~Z.}\ \bibnamefont
{Salvail}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Agnew}},
\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Johnson}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Bolduc}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Leach}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nat Photon}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {316} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}(2013)}]{wu2013state}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Wu}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific
reports}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {1193}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ritchie}\ \emph {et~al.}(1991)\citenamefont
{Ritchie}, \citenamefont {Story},\ and\ \citenamefont
{Hulet}}]{PhysRevLett.66.1107}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~W.~M.}\
\bibnamefont {Ritchie}}, \bibinfo {author} {\bibfnamefont {J.~G.}\
\bibnamefont {Story}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~G.}\
\bibnamefont {Hulet}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {66}},\ \bibinfo
{pages} {1107} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Susa}\ \emph {et~al.}(2012)\citenamefont {Susa},
\citenamefont {Shikano},\ and\ \citenamefont {Hosoya}}]{susa2012optimal}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Susa}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Shikano}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hosoya}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {052110} (\bibinfo
{year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dressel}\ \emph {et~al.}(2014)\citenamefont
{Dressel}, \citenamefont {Malik}, \citenamefont {Miatto}, \citenamefont
{Jordan},\ and\ \citenamefont {Boyd}}]{RevModPhys.86.307}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Dressel}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Malik}},
\bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont {Miatto}}, \bibinfo
{author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.~W.}\ \bibnamefont {Boyd}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {86}},\ \bibinfo {pages} {307} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hosten}\ and\ \citenamefont
{Kwiat}(2008)}]{Hosten2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Hosten}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Kwiat}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf
{\bibinfo {volume} {319}},\ \bibinfo {pages} {787} (\bibinfo {year}
{2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dixon}\ \emph {et~al.}(2009)\citenamefont {Dixon},
\citenamefont {Starling}, \citenamefont {Jordan},\ and\ \citenamefont
{Howell}}]{dixon:173601}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont
{Dixon}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Starling}},
\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Howell}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett.}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {eid} {173601}
(\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brunner}\ and\ \citenamefont
{Simon}(2010)}]{brunner_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Brunner}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Simon}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo
{pages} {010405} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2013)\citenamefont {Xu},
\citenamefont {Kedem}, \citenamefont {Sun}, \citenamefont {Vaidman},
\citenamefont {Li},\ and\ \citenamefont {Guo}}]{2013arXiv1306.4768X}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-Y.}\ \bibnamefont
{Xu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Kedem}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Sun}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Vaidman}}, \bibinfo {author} {\bibfnamefont
{C.-F.}\ \bibnamefont {Li}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-C.}\
\bibnamefont {Guo}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {111}},\
\bibinfo {pages} {033604} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Feizpour}\ \emph {et~al.}(2011)\citenamefont
{Feizpour}, \citenamefont {Xing},\ and\ \citenamefont
{Steinberg}}]{PhysRevLett.107.133603}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Feizpour}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Xing}}, \
and\ \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Steinberg}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {133603} (\bibinfo
{year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zilberberg}\ \emph {et~al.}(2011)\citenamefont
{Zilberberg}, \citenamefont {Romito},\ and\ \citenamefont
{Gefen}}]{PhysRevLett.106.080405}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Zilberberg}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Romito}},
\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Gefen}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {080405} (\bibinfo
{year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gorodetski}\ \emph {et~al.}(2012)\citenamefont
{Gorodetski}, \citenamefont {Bliokh}, \citenamefont {Stein}, \citenamefont
{Genet}, \citenamefont {Shitrit}, \citenamefont {Kleiner}, \citenamefont
{Hasman},\ and\ \citenamefont {Ebbesen}}]{PhysRevLett.109.013901}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Gorodetski}}, \bibinfo {author} {\bibfnamefont {K.~Y.}\ \bibnamefont
{Bliokh}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Stein}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Genet}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Shitrit}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Kleiner}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Hasman}}, \ and\ \bibinfo {author} {\bibfnamefont {T.~W.}\
\bibnamefont {Ebbesen}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo
{pages} {013901} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Starling}\ \emph {et~al.}(2009)\citenamefont
{Starling}, \citenamefont {Dixon}, \citenamefont {Jordan},\ and\
\citenamefont {Howell}}]{starling:041803}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Starling}}, \bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont {Dixon}},
\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Jordan}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Howell}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{80}},\ \bibinfo {eid} {041803} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Zhu}\ \emph {et~al.}(2011)\citenamefont {Zhu},
\citenamefont {Zhang}, \citenamefont {Pang}, \citenamefont {Qiao},
\citenamefont {Liu},\ and\ \citenamefont {Wu}}]{PhysRevA.84.052111}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Zhu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Pang}}, \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Qiao}}, \bibinfo {author} {\bibfnamefont
{Q.}~\bibnamefont {Liu}}, \ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Wu}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo
{pages} {052111} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tanaka}\ and\ \citenamefont
{Yamamoto}(2013)}]{2013arXiv1306.2409T}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Tanaka}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Yamamoto}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {042116}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ferrie}\ and\ \citenamefont
{Combes}(2014)}]{2013arXiv1307.4016F}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ferrie}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Combes}},\ }\href {\doibase 10.1103/PhysRevLett.112.040406} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {112}},\ \bibinfo {pages} {040406} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Combes}\ \emph {et~al.}(2014)\citenamefont {Combes},
\citenamefont {Ferrie}, \citenamefont {Jiang},\ and\ \citenamefont
{Caves}}]{combes2013probabilistic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Combes}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Ferrie}},
\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.~M.}\ \bibnamefont {Caves}},\ }\href {\doibase
10.1103/PhysRevA.89.052117} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {052117}
(\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knee}\ \emph {et~al.}(2013)\citenamefont {Knee},
\citenamefont {Briggs}, \citenamefont {Benjamin},\ and\ \citenamefont
{Gauger}}]{PhysRevA.87.012115}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Knee}}, \bibinfo {author} {\bibfnamefont {G.~A.~D.}\ \bibnamefont {Briggs}},
\bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}}, \ and\
\bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont {Gauger}},\ }\href
{\doibase 10.1103/PhysRevA.87.012115} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo
{pages} {012115} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knee}\ and\ \citenamefont
{Gauger}(2014)}]{PhysRevX.4.011032}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Knee}}\ and\ \bibinfo {author} {\bibfnamefont {E.~M.}\ \bibnamefont
{Gauger}},\ }\href {\doibase 10.1103/PhysRevX.4.011032} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {4}},\
\bibinfo {pages} {011032} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jordan}\ \emph
{et~al.}(2014{\natexlab{a}})\citenamefont {Jordan}, \citenamefont
{Mart\'{i}nez-Rinc\'{o}n},\ and\ \citenamefont {Howell}}]{PhysRevX.4.011031}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Jordan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Mart\'{i}nez-Rinc\'{o}n}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~C.}\
\bibnamefont {Howell}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages}
{011031} (\bibinfo {year} {2014}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vaidman}(2014)}]{2014arXiv1402.0199V}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}\ \bibnamefont
{Vaidman}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {ArXiv e-prints: 1402.0199}\ }
(\bibinfo {year} {2014})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Braunstein}\ and\ \citenamefont
{Caves}(1994)}]{PhysRevLett.72.3439}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Braunstein}}\ and\ \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Caves}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo {pages} {3439}
(\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hofmann}\ \emph {et~al.}(2012)\citenamefont
{Hofmann}, \citenamefont {Goggin}, \citenamefont {Almeida},\ and\
\citenamefont {Barbieri}}]{PhysRevA.86.040102}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~F.}\ \bibnamefont
{Hofmann}}, \bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont {Goggin}},
\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Almeida}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Barbieri}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf
{\bibinfo {volume} {86}},\ \bibinfo {pages} {040102} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Str\"{u}bi}\ and\ \citenamefont
{Bruder}(2013)}]{PhysRevLett.110.083605}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Str\"{u}bi}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Bruder}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {083605}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{sup()}]{supp}
\BibitemOpen
\href@noop {} {\bibinfo {journal} {Supplementary material}\ }\BibitemShut
{NoStop} \bibitem [{Note1()}]{Note1}
\BibitemOpen \bibfield {journal} { }\bibinfo {note} {To be precise, the QS and MA in these
implementations are (multi-mode) coherent states. Yet as we will show, the
following analysis can be applied with little modification}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Shomroni}\ \emph {et~al.}(2013)\citenamefont
{Shomroni}, \citenamefont {Bechler}, \citenamefont {Rosenblum},\ and\
\citenamefont {Dayan}}]{shomroni2013demo}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Shomroni}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Bechler}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rosenblum}}, \ and\
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Dayan}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {111}},\ \bibinfo {pages} {023604} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Puentes}\ \emph {et~al.}(2012)\citenamefont
{Puentes}, \citenamefont {Hermosa},\ and\ \citenamefont
{Torres}}]{PhysRevLett.109.040401}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Puentes}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Hermosa}}, \
and\ \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Torres}},\
}\href {\doibase 10.1103/PhysRevLett.109.040401} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\
\bibinfo {pages} {040401} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{Note2()}]{Note2}
\BibitemOpen
\bibinfo {note} {See, for instance, Eq. (2) in Ref.~\cite{PhysRevLett.107.133603}, where $g = \phi_0/2$.}\BibitemShut
{Stop} \bibitem [{\citenamefont {Jordan}\ \emph
{et~al.}(2014{\natexlab{b}})\citenamefont {Jordan}, \citenamefont
{Tollaksen}, \citenamefont {Troupe}, \citenamefont {Dressel},\ and\
\citenamefont {Aharonov}}]{2014arXiv1409.3488J}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont
{Jordan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Tollaksen}},
\bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Troupe}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Dressel}}, \ and\ \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {Aharonov}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {ArXiv e-prints: 1409.3488}\ }
(\bibinfo {year} {2014}{\natexlab{b}})}\BibitemShut
{NoStop} \end{thebibliography}
\section{Supplementary Material} \subsection{Derivation of $F_{tot}$} \label{sec:f_tot}
The quantum FI $Q_k$ of $|\Phi_k\rangle$ ($k=d,r$) is given by
\begin{equation} Q_k = 4\left[\left(\frac{d\langle\Phi_k|}{dg}\right) \left(\frac{d|\Phi_k\rangle}{dg}\right) - \left|\langle \Phi_k |\left(\frac{d|\Phi_k\rangle}{dg}\right)\right|^2 \right]. \label{eq:qfi} \end{equation}
$Q_k$ can be achieved with an optimal POVM. Assume the optimal measurement for $|\Phi_d\rangle$ is $\{\Pi^{d}_{1}, \cdots, \Pi^{d}_{V}\}$ with outcome probabilities $\{P(1|\textrm{detect}), \cdots ,P(V|\textrm{detect})\}$, where
\begin{equation} P(v|\textrm{detect}) = \langle \Phi_d |\Pi^{d}_{v} |\Phi_d\rangle, \textrm{ for } v=1\cdots V. \end{equation} Then we have
\begin{equation} Q_d = \sum_{v=1}^{V} \frac{1}{P(v|\textrm{detect})} \left(\frac{d(P(v|\textrm{detect}))}{dg}\right)^2.
\end{equation} Similarly, the optimal measurement for $|\Phi_r\rangle$ is $\{\Pi^r_{1}, \cdots, \Pi^r_{W}\}$ with outcome probabilities $\{P(1|\textrm{reject}), \cdots ,P(W|\textrm{reject})\}$. Then post-selection on the QS followed by the optimal measurement on the MA states can be considered as a POVM performed on the joint state $\ket{\Psi_j}$ with elements $\{|\psi_f\rangle\langle\psi_f|\otimes\Pi^{d}_{1}, \cdots, |\psi_f\rangle\langle\psi_f|\otimes\Pi^{d}_{V}, |\psi_f^{\perp}\rangle\langle\psi_f^{\perp}|\otimes\Pi^{r}_{1}, \cdots, |\psi_f^{\perp}\rangle\langle\psi_f^{\perp}|\otimes\Pi^{r}_{W}\}$, where $|\psi_f^{\perp}\rangle$ is the state of the QS when the post-selection fails. The probabilities associated with each outcome are $\{p_d\times P(1|\textrm{detect}), \cdots ,p_d\times P(V|\textrm{detect}), (1-p_d)\times P(1|\textrm{reject}), \cdots ,(1-p_d)\times P(W|\textrm{reject})\}$. The total Fisher information is given by \begin{eqnarray}
F_{tot} & = & \sum_{v=1}^{V} \frac{1}{p_d P(v|\textrm{detect})} \left(\frac{d(p_d P(v|\textrm{detect}))}{dg}\right)^2 \nonumber \\
& & + \sum_{w=1}^{W} \frac{1}{(1-p_d) P(w|\textrm{reject})} \left(\frac{d((1-p_d) P(w|\textrm{reject}))}{dg}\right)^2 \nonumber \\ & = & p_d Q_d + (1-p_d) Q_r + F_p, \label{eq:fisher_total} \end{eqnarray} where \begin{equation} F_p = \frac{1}{p_d}\left(\frac{d p_d}{dg}\right)^2+\frac{1}{1-p_d}\left(\frac{d(1-p_d)}{dg}\right)^2.
\end{equation} If we ignore the meter state when the post-selection fails, the whole process can still be considered as a POVM performed on the joint state with $\{|\psi_f\rangle\langle\psi_f|\otimes\Pi^{d}_{1}, \cdots, |\psi_f\rangle\langle\psi_f|\otimes\Pi^{d}_{V}, |\psi_f^{\perp}\rangle\langle\psi_f^{\perp}|\otimes \hat{I}\}$. The probabilities associated with each outcome are $\{p_d P(1|\textrm{detect}), \cdots ,p_d P(V|\textrm{detect}), 1-p_d\}$. The total FI is given by \begin{equation} F_{tot} = p_d Q_d + F_p. \end{equation}
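Equation~(\ref{eq:fisher_total}) is, at this level, an identity about conditioned statistics and can be checked numerically for any smooth parameterization of the outcome probabilities. The following minimal sketch is purely illustrative: the toy distributions are hypothetical, and the classical FIs of the two conditional distributions stand in for the $Q_k$ attained by the optimal measurements.
\begin{verbatim}
import numpy as np

def branch_probs(g):
    # Hypothetical toy model: post-selection succeeds with probability
    # p_d(g); each branch has its own g-dependent outcome distribution.
    p_d   = 1.0 / (1.0 + np.exp(-g))
    P_det = np.array([np.cos(g)**2, np.sin(g)**2])
    P_rej = np.array([np.cos(2 * g)**2, np.sin(2 * g)**2])
    return p_d, P_det, P_rej

def fisher(prob_fn, g, eps=1e-6):
    # Classical Fisher information of the distribution prob_fn(g).
    p  = prob_fn(g)
    dp = (prob_fn(g + eps) - prob_fn(g - eps)) / (2 * eps)
    return np.sum(dp**2 / p)

def joint(x):
    # Full conditioned statistics over (branch, outcome).
    pd, Pd, Pr = branch_probs(x)
    return np.concatenate((pd * Pd, (1 - pd) * Pr))

g = 0.8
p_d, P_det, P_rej = branch_probs(g)
F_joint = fisher(joint, g)
F_det = fisher(lambda x: branch_probs(x)[1], g)
F_rej = fisher(lambda x: branch_probs(x)[2], g)
F_p   = fisher(lambda x: np.array([branch_probs(x)[0],
                                   1 - branch_probs(x)[0]]), g)
print(F_joint, p_d * F_det + (1 - p_d) * F_rej + F_p)   # the two agree
\end{verbatim}
The printed values coincide, mirroring the decomposition $F_{tot} = p_d Q_d + (1-p_d) Q_r + F_p$.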
\subsection{Configuration-space interactions with an arbitrary MA state}
We generalize the situation considered in the manuscript to an arbitrary MA state \begin{equation}
|\Phi\rangle = \int dp f(p) |p\rangle, \end{equation} where the probability amplitude $f(p)$ satisfies the conditions \begin{eqnarray}
& & \int_{-\infty}^{\infty} |f(p)|^2 dp = 1, \label{eq:normlize} \\
& & |f(p)| \rightarrow 0 \textrm{ when } p \rightarrow \pm\infty, \label{eq:f_infty} \\
& & |f'(p)| \rightarrow 0 \textrm{ when } p \rightarrow \pm\infty. \label{eq:f_derive_infty} \end{eqnarray} After the interaction between the QS and MA, the joint state is \begin{eqnarray}
|\Psi_j\rangle & = & \cos\frac{\theta_i}{2} |-1\rangle \int dp f(p+g) |p\rangle \nonumber \\
&& + \sin\frac{\theta_i}{2} e^{i\phi_i} |+1\rangle \int dp f(p-g) |p\rangle. \label{eq:joint_state_arbitary} \end{eqnarray} Using the conditions that \begin{eqnarray} \frac{\partial f(p+g)}{\partial g} & = & f'(p+g), \label{eq:deri_pos_g} \\ \frac{\partial f(p-g)}{\partial g} & = & -f'(p-g), \label{eq:deri_neg_g} \end{eqnarray} we have \begin{eqnarray}
\frac{d|\Psi_j\rangle}{dg} & = & \cos\frac{\theta_i}{2} |-1\rangle \int dp f'(p+g) |p\rangle \nonumber \\
&& - \sin\frac{\theta_i}{2} e^{i\phi_i} |+1\rangle \int dp f'(p-g) |p\rangle, \end{eqnarray} then \begin{eqnarray}
\left(\frac{d\langle\Psi_j|}{dg}\right) \left(\frac{d|\Psi_j\rangle}{dg}\right) & = & \cos^2 \frac{\theta_i}{2} \int dp |f'(p+g)|^2 \nonumber \\
& & + \sin^2 \frac{\theta_i}{2} \int dp |f'(p-g)|^2 \nonumber \\
& = & \int dp |f'(p)|^2,
\end{eqnarray} where we have used $\int_{-\infty}^{\infty} dp |f'(p+g)|^2 = \int_{-\infty}^{\infty} dp |f'(p-g)|^2 = \int_{-\infty}^{\infty} dp |f'(p)|^2$. Similarly we have \begin{eqnarray}
\langle \Psi_j| \left(\frac{d|\Psi_j\rangle}{dg}\right) &=& \cos^2 \frac{\theta_i}{2} \int dp \tilde{f}(p+g) f'(p+g) \nonumber \\ & & - \sin^2 \frac{\theta_i}{2} \int dp \tilde{f}(p-g) f'(p-g) \nonumber \\ & = & \cos \theta_i \int dp \tilde{f}(p) f'(p), \end{eqnarray} where $\tilde{f}(p)$ is the conjugate of $f(p)$. Again we have used $\int_{-\infty}^{\infty} dp \tilde{f}(p+g) f'(p+g) = \int_{-\infty}^{\infty} dp \tilde{f}(p-g) f'(p-g) = \int_{-\infty}^{\infty} dp \tilde{f}(p) f'(p)$. So we have the quantum FI of the joint meter-system state
\begin{eqnarray} Q_j &=& 4\left[\left(\frac{d\langle\Psi_j|}{dg}\right) \left(\frac{d|\Psi_j\rangle}{dg}\right) - \left|\langle \Psi_j |\left(\frac{d|\Psi_j\rangle}{dg}\right)\right|^2 \right] \nonumber \\
&=& 4\left( \int dp |f'(p)|^2 - \cos^2 \theta_i \left| \int dp \tilde{f}(p) f'(p) \right|^2\right). \label{eq:qj_arbi_meter} \end{eqnarray} From Eq.~(\ref{eq:qj_arbi_meter}) we can see that $Q_j$ is independent of the value of $g$, \textit{i.e.} the measurement strength. It is worth investigating this result a bit further. Since \begin{widetext} \begin{eqnarray} \int dp \left[\tilde{f} (p) f'(p) + f(p) \tilde{f}'(p)\right] & = & \int dp \left[\tilde{f} (p+g) f'(p+ g) + f(p+g) \tilde{f}'(p+g)\right] \nonumber \\ &=& \int dp \left[\tilde{f} (p+g) \frac{\partial f(p+ g)} {\partial g} + f(p+g) \frac{\partial\tilde{f}(p+g)}{\partial g}\right] \nonumber \\
&=& \frac{d \int dp |f(p+g)|^2}{dg}, \nonumber \\ & = & 0 \end{eqnarray} \end{widetext} we have \begin{equation} \int dp \tilde{f}(p) f'(p) = - \int dp f(p) \tilde{f}'(p), \label{eq:int_f_f_deri} \end{equation}
\textit{i.e.} $\int dp \tilde{f}(p) f'(p)$ is either 0 or an imaginary number. In particular, if $f(p)$ is a real function, $\int dp \tilde{f}(p) f'(p)=0$, and $Q_j$ is independent of the choice of the initial QS state. If $\int dp \tilde{f}(p) f'(p)$ is not zero, $Q_j$ reaches its maximum when $\cos \theta_i = 0$, \textit{i.e.} with the initial QS state $|\psi_i\rangle = (|-1\rangle \pm e^{i\phi_i} |+1\rangle)/\sqrt{2}$.
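As a quick numerical cross-check of Eqs.~(\ref{eq:qj_arbi_meter}) and (\ref{eq:int_f_f_deri}), the two integrals can be evaluated on a momentum grid. The short script below is our own illustration (not part of the original derivation); the chirped Gaussian amplitude and all parameter values are arbitrary choices.
\begin{verbatim}
import numpy as np

# Cross-check of Eq. (qj_arbi_meter) for an illustrative complex Gaussian
# amplitude f(p) = N exp(-p^2/(4 s^2) + i b p); s, b, theta_i are arbitrary.
p = np.linspace(-40.0, 40.0, 200001)
dp = p[1] - p[0]
s, b, theta_i = 1.0, 0.7, np.pi / 3

f = np.exp(-p**2 / (4 * s**2) + 1j * b * p)
f /= np.sqrt(np.sum(np.abs(f)**2) * dp)      # normalization, Eq. (normlize)
fprime = (-p / (2 * s**2) + 1j * b) * f      # f'(p), evaluated analytically

I1 = np.sum(np.abs(fprime)**2) * dp          # int |f'(p)|^2 dp
I2 = np.sum(np.conj(f) * fprime) * dp        # int f~(p) f'(p) dp
print(I2)                                    # ~ 0 + 0.7j: purely imaginary

Qj = 4 * (I1 - np.cos(theta_i)**2 * abs(I2)**2)
print(Qj)                                    # = 1/s^2 + 4 b^2 sin^2(theta_i) here
\end{verbatim}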
Now we consider the effect of post-selection, after which the MA state becomes $|\Phi_k\rangle = \int dp \phi_k (g,p) |p\rangle$ ($k = d,r$) with \begin{equation} \phi_k (g,p) = \frac{1}{\sqrt{p_k}} \left[ \gamma_k^{-} f(p+g) + \gamma_k^{+} f(p-g)\right] = \frac{1}{\sqrt{p_k}} \vartheta_k (p,g), \end{equation} where we define $\vartheta_k (p,g) = \gamma_k^{-} f(p+g) + \gamma_k^{+} f(p-g) $ for the analysis below. The probability of successful post-selection is \begin{widetext}
\begin{equation} p_d = \int dp |\vartheta_d (p,g)|^2 = \frac{1+\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \textrm{Re}\left(\int dp \tilde{f}(p-g) f(p+g) e^{i\phi_0} \right)}{2} \label{eq:success_prob_arti_meter} \end{equation} \end{widetext}
and the probability that the post-selection fails is $p_r = \int dp |\vartheta_r (p,g)|^2 = 1-p_d$. We have \begin{eqnarray}
p_k Q_k & = & 4 \left[ \int dp \left| \frac{\partial \vartheta_k (p,g)}{\partial g} \right|^2 \right. \nonumber \\
& &\left. - \frac{1}{p_k} \left| \int dp \tilde{\vartheta}_k (p,g) \frac{\partial \vartheta_k (p,g)}{\partial g} \right|^2\right], \label{eq:qfi_post_arbi_meter} \\ F_p & = & \sum_{k = d,r} \frac{1}{p_k} \left(\frac{dp_k}{dg} \right)^2. \label{eq:cfi_post_arbi_meter}
\end{eqnarray} Since $p_k = \int dp |\vartheta_k(p,g)|^2$, we have \begin{equation} \frac{dp_k}{dg} = \int dp \frac{\partial \tilde{\vartheta}_k(p,g)}{\partial g} \vartheta_k (p,g) + \int dp \tilde{\vartheta}_k (p,g) \frac{\partial \vartheta_k(p,g)}{\partial g}. \label{eq:dp_dg} \end{equation}
Substituting Eq. (\ref{eq:dp_dg}) into Eq. (\ref{eq:cfi_post_arbi_meter}), and summing over all the contributions to the total Fisher information, we have \begin{widetext}
\begin{equation} F_{tot} = \sum_{k = d, r} \left\{ 4 \int dp \left| \frac{\partial \vartheta_k (p,g)}{\partial g} \right|^2 + \frac{1}{p_k}\left[ \int dp \left( \frac{\partial \tilde{\vartheta}_k(p,g)}{\partial g} \vartheta_k (p,g) - \tilde{\vartheta}_k (p,g) \frac{\partial \vartheta_k(p,g)}{\partial g}\right) \right]^2 \right\}. \label{eq:ftot_arbi_meter} \end{equation} \end{widetext} Here we are interested in the situations in the weak and strong measurement limit, \textit{i.e.} $g\to 0$ and $g \to \infty$. The results for a general $g$ will be given elsewhere.
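Equation~(\ref{eq:ftot_arbi_meter}) is also straightforward to evaluate numerically at any finite $g$. The sketch below is a convenience script of ours, not a result of the text: it uses a real Gaussian $f(p)$ and writes the coefficients $\gamma_k^{\pm}$ out from the pre- and post-selection overlaps as we read their definitions (an assumption on our part); all parameter values are illustrative. For this real-$f$ example it confirms that $F_{tot}$ approaches $Q_j$ in both the weak and the strong limit.
\begin{verbatim}
import numpy as np

# Evaluate Eq. (ftot_arbi_meter) at finite g for a real Gaussian f(p).
# The gamma_k coefficients below are our reading of the pre-/post-selection
# overlaps; all parameter values are illustrative only.
theta_i, theta_f, phi0 = np.pi / 3, np.pi / 2, 0.4
gamma = {
    'd': (np.cos(theta_f / 2) * np.cos(theta_i / 2),
          np.sin(theta_f / 2) * np.sin(theta_i / 2) * np.exp(1j * phi0)),
    'r': (np.sin(theta_f / 2) * np.cos(theta_i / 2),
          -np.cos(theta_f / 2) * np.sin(theta_i / 2) * np.exp(1j * phi0)),
}

p = np.linspace(-60.0, 60.0, 300001)
dp = p[1] - p[0]
f  = lambda x: (2 * np.pi)**-0.25 * np.exp(-x**2 / 4)   # real Gaussian, s = 1
df = lambda x: -x / 2 * f(x)                            # its derivative

def F_tot(g):
    total = 0.0
    for gm, gp in gamma.values():
        th  = gm * f(p + g) + gp * f(p - g)             # vartheta_k(p, g)
        dth = gm * df(p + g) - gp * df(p - g)           # d vartheta_k / d g
        pk  = np.sum(np.abs(th)**2) * dp
        ov  = np.sum(np.conj(th) * dth) * dp
        total += 4 * (np.sum(np.abs(dth)**2) * dp - abs(ov)**2 / pk)  # p_k Q_k
        total += (2 * ov.real)**2 / pk                  # classical part, Eq. (dp_dg)
    return total

Qj = 4 * np.sum(np.abs(df(p))**2) * dp                  # Eq. (qj_arbi_meter), real f
print(Qj, F_tot(1e-3), F_tot(30.0))                     # weak and strong limits ~ Qj
\end{verbatim}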
In the weak measurement limit with $g \to 0$, we have \begin{widetext} \begin{eqnarray} p_d & = & \frac{1+\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \cos\phi_0}{2}, \label{eq:success_prob_arbi_meter_weak} \\ p_r & = & \frac{1-\cos\theta_i \cos\theta_f - \sin\theta_i \sin\theta_f \cos\phi_0}{2}, \label{eq:fail_prob_arbi_meter_weak} \\
\int dp \left| \frac{\partial \vartheta_d (p,g)}{\partial g} \right|^2 & = & \frac{1+\cos\theta_i \cos\theta_f - \sin\theta_i \sin\theta_f \cos\phi_0}{2} \int dp |f'(p)|^2, \label{eq:int_partial_d_weak} \\
\int dp \left| \frac{\partial \vartheta_r (p,g)}{\partial g} \right|^2 & = & \frac{1-\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \cos\phi_0}{2} \int dp |f'(p)|^2, \label{eq:int_partial_r_weak} \\
\int dp \tilde{\vartheta}_d (p,g) \frac{\partial \vartheta_d(p,g)}{\partial g} & = & \frac{\cos\theta_i + \cos\theta_f + i \sin\theta_i\sin\theta_f \sin\phi_0}{2} \int dp \tilde{f} (p) f'(p),
\label{eq:int_f_partial_f_d_weak} \\
\int dp \tilde{\vartheta}_r (p,g) \frac{\partial \vartheta_r(p,g)}{\partial g} & = & \frac{\cos\theta_i - \cos\theta_f - i \sin\theta_i\sin\theta_f \sin\phi_0}{2} \int dp \tilde{f} (p) f'(p), \label{eq:int_f_partial_f_r_weak} \end{eqnarray} \end{widetext} Substituting Eqns. (\ref{eq:success_prob_arbi_meter_weak} - \ref{eq:int_f_partial_f_r_weak}) and Eq. (\ref{eq:int_f_f_deri}) into Eq. (\ref{eq:ftot_arbi_meter}), we have \begin{widetext}
\begin{equation} F_{tot} = 4 \left\{\int dp |f'(p)|^2 - \frac{1}{2} \left[\frac{(\cos\theta_i + \cos\theta_f)^2}{1+\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \cos\phi_0} + \frac{(\cos\theta_i - \cos\theta_f)^2}{1-\cos\theta_i \cos\theta_f - \sin\theta_i \sin\theta_f \cos\phi_0}\right] \left| \int dp \tilde{f}(p) f'(p) \right|^2\right\}. \label{eq:f_tot_arbi_meter_weak} \end{equation} \end{widetext} If $\int dp \tilde{f}(p) f'(p) = 0$, for example when $f(p)$ is a real function, $F_{tot}$ always equals $Q_j$. The Gaussian MA state discussed in the main text is a specific example of this situation. If $\int dp \tilde{f}(p) f'(p) \neq 0$, $F_{tot}$ achieves its maximum value $Q_j$ when the pre- and post-selected QS states satisfy the condition \begin{equation} \cos\theta_f \sin\theta_i - \cos \theta_i \sin \theta_f \cos\phi_0 = 0. \label{eq:max_cond_weak} \end{equation}
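The role of Eq.~(\ref{eq:max_cond_weak}) can be checked numerically by scanning $\theta_f$ in Eq.~(\ref{eq:f_tot_arbi_meter_weak}) for fixed meter integrals. The snippet below is our own check; the two integrals are stand-in values (e.g. those produced by the chirped-Gaussian example above) and the angles are illustrative.
\begin{verbatim}
import numpy as np

# Maximize Eq. (f_tot_arbi_meter_weak) over theta_f; the meter integrals
# I1 = int|f'|^2 and I2sq = |int f~ f'|^2 are stand-in values.
I1, I2sq = 0.74, 0.49
theta_i, phi0 = np.pi / 3, 0.4

def F_tot(theta_f):
    ci, cf = np.cos(theta_i), np.cos(theta_f)
    s = np.sin(theta_i) * np.sin(theta_f) * np.cos(phi0)
    bracket = (ci + cf)**2 / (1 + ci * cf + s) + (ci - cf)**2 / (1 - ci * cf - s)
    return 4 * (I1 - 0.5 * bracket * I2sq)

th = np.linspace(1e-3, np.pi - 1e-3, 200001)
best = th[np.argmax(F_tot(th))]
print(np.cos(best) * np.sin(theta_i)
      - np.cos(theta_i) * np.sin(best) * np.cos(phi0))    # ~ 0, Eq. (max_cond_weak)
print(F_tot(best), 4 * (I1 - np.cos(theta_i)**2 * I2sq))  # maximum reaches Q_j
\end{verbatim}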
In the strong measurement limit with $g \to \infty$, we have $\int dp \tilde{f} (p+g) f(p-g) = 0$, $\int dp \tilde{f} (p+g) f'(p-g) = 0$ and $\int dp \tilde{f}'(p+g) f'(p-g)=0$. Thus \begin{widetext} \begin{eqnarray} p_d & = & \frac{1+\cos\theta_i \cos\theta_f}{2}, \label{eq:success_prob_arbi_meter_strong} \\ p_r & = & \frac{1-\cos\theta_i \cos\theta_f}{2}, \label{eq:fail_prob_arbi_meter_strong} \\
\int dp \left| \frac{\partial \vartheta_d (p,g)}{\partial g} \right|^2 & = & \frac{1+\cos\theta_i \cos\theta_f}{2} \int dp |f'(p)|^2, \label{eq:int_partial_d_strong} \\
\int dp \left| \frac{\partial \vartheta_r (p,g)}{\partial g} \right|^2 & = & \frac{1-\cos\theta_i \cos\theta_f}{2} \int dp |f'(p)|^2, \label{eq:int_partial_r_strong} \\
\int dp \tilde{\vartheta}_d (p,g) \frac{\partial \vartheta_d(p,g)}{\partial g} & = & \frac{\cos\theta_i + \cos\theta_f}{2} \int dp \tilde{f} (p) f'(p), \label{eq:int_f_partial_f_d_strong} \\
\int dp \tilde{\vartheta}_r (p,g) \frac{\partial \vartheta_r(p,g)}{\partial g} & = & \frac{\cos\theta_i - \cos\theta_f }{2} \int dp \tilde{f} (p) f'(p), \label{eq:int_f_partial_f_r_strong} \end{eqnarray} \end{widetext} Substituting Eqns. (\ref{eq:success_prob_arbi_meter_strong} - \ref{eq:int_f_partial_f_r_strong}) and Eq. (\ref{eq:int_f_f_deri}) into Eq. (\ref{eq:ftot_arbi_meter}), we have \begin{widetext}
\begin{equation} F_{tot} = 4 \left\{\int dp |f'(p)|^2 - \frac{1}{2} \left[\frac{(\cos\theta_i + \cos\theta_f)^2}{1+\cos\theta_i \cos\theta_f} + \frac{(\cos\theta_i - \cos\theta_f)^2}{1-\cos\theta_i \cos\theta_f}\right] \left| \int dp \tilde{f}(p) f'(p) \right|^2\right\}. \label{eq:f_tot_arbi_meter_strong} \end{equation} \end{widetext}
Again if $\int dp \tilde{f}(p) f'(p) = 0$, $F_{tot}$ always equals $Q_j$. If $\int dp \tilde{f}(p) f'(p) \neq 0$, $F_{tot}$ achieves its maximum value $Q_j$ when $\cos\theta_f =0$ with the post-selected QS state $|\psi_f\rangle = (|-1\rangle \pm e^{i\phi_f} |+1\rangle)/\sqrt{2}$.
The above results show that the precisions one can achieve with weak and strong measurements are the \textit{same} for an arbitrary MA state when both measurements are optimized.
If one retains only the information in the successfully post-selected meter state, $F_{tot} = p_d Q_d$. In the weak measurement limit $g \to 0$, \begin{widetext}
\begin{equation} p_d Q_d = 4\left[ \frac{1+\cos\theta_i \cos\theta_f - \sin\theta_i \sin\theta_f \cos\phi_0}{2} \int dp |f'(p)|^2 - \frac{(\cos\theta_i + \cos\theta_f)^2 + \sin^2 \theta_i \sin^2 \theta_f \sin^2 \phi_0}{2(1+\cos\theta_i \cos\theta_f + \sin\theta_i \sin\theta_f \cos\phi_0)} \left| \int dp \tilde{f}(p) f'(p) \right|^2 \right]. \label{f_partial_arbi_meter_weak} \end{equation} \end{widetext} Similarly in the strong measurement limit $g \to \infty$, \begin{widetext}
\begin{equation} p_d Q_d = 4\left[ \frac{1+\cos\theta_i \cos\theta_f}{2} \int dp |f'(p)|^2 - \frac{(\cos\theta_i + \cos\theta_f)^2 }{2(1+\cos\theta_i \cos\theta_f)} \left| \int dp \tilde{f}(p) f'(p) \right|^2 \right]. \label{f_partial_arbi_meter_strong} \end{equation} \end{widetext} The values given by Eqs. (\ref{f_partial_arbi_meter_weak}) and (\ref{f_partial_arbi_meter_strong}) are generally smaller than $Q_j$. Yet if $\int dp \tilde{f}(p) f'(p) = 0$, both can reach $Q_j$ under the conditions discussed in the main text.
\subsection{Quantum FI of the joint system-meter state in Eq.~(10)}
As shown in Sec.~\ref{sec:f_tot}, post-selection on the QS state followed by the measurement on the MA state can be considered as a measurement on the joint state $|\Psi_j\rangle$. Therefore, we can estimate the quantum FI of $|\Psi_j\rangle$, which gives an upper bound on the precision; this bound may not be achievable, since $|\psi_f\rangle$ may not be the optimal post-selection.
We have \begin{eqnarray}
\frac{d|\Psi_j\rangle}{dg} & = & \cos \frac{\theta_i}{2}|-1\rangle (-i\alpha e^{-ig} \hat{a}^{\dag} )|\alpha e^{-ig}\rangle \nonumber \\
&& + \sin \frac{\theta_i}{2} e^{i\phi_i} |+1\rangle (i \alpha e^{ig} \hat{a}^{\dag})|\alpha e^{ig}\rangle \nonumber \\
& = & -i\alpha \hat{a}^{\dag} \left(e^{-ig} \cos \frac{\theta_i}{2}|-1\rangle |\alpha e^{-ig}\rangle \right.\nonumber \\
&& \left. - e^{ig} \sin \frac{\theta_i}{2} e^{i\phi_i} |+1\rangle |\alpha e^{ig}\rangle\right) \end{eqnarray} then \begin{equation}
\left(\frac{d\langle\Psi_j|}{dg}\right) \left(\frac{d|\Psi_j\rangle}{dg}\right) = n^2 + n. \end{equation} and \begin{equation}
\langle \Psi_j |\left(\frac{d|\Psi_j\rangle}{dg}\right) = -i|\alpha|^2 \left(\cos^2 \frac{\theta_i}{2} - \sin^2 \frac{\theta_i}{2}\right) = -i n \cos\theta_i. \end{equation} So we have the quantum FI of the joint state \begin{eqnarray}
Q_{j} & = & 4\left[\left(\frac{d\langle\Psi_j|}{dg}\right) \left(\frac{d|\Psi_j\rangle}{dg}\right) - \left|\langle \Psi_j |\left(\frac{d|\Psi_j\rangle}{dg}\right)\right|^2 \right] \nonumber \\ & = & 4n^2 \sin^2 \theta_i + 4n.
\label{eq:qfi_joint} \end{eqnarray}
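Equation~(\ref{eq:qfi_joint}) can also be verified independently by constructing $|\Psi_j\rangle$ of Eq.~(10) in a truncated Fock basis and differentiating by finite differences. The check below is our own; the truncation, $\alpha$, $\theta_i$, $\phi_i$ and $g$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

# Finite-difference check of Eq. (qfi_joint) in a truncated Fock basis.
N, alpha, theta_i, phi_i, g, eps = 60, 2.0, np.pi / 3, 0.3, 0.7, 1e-5
ns = np.arange(N)
fact = np.cumprod(np.concatenate(([1.0], ns[1:])))       # n!

def coherent(a):                                         # Fock amplitudes of |a>
    return np.exp(-abs(a)**2 / 2) * a**ns / np.sqrt(fact)

def psi(g):                                              # joint state of Eq. (10)
    m1 = np.cos(theta_i / 2) * coherent(alpha * np.exp(-1j * g))
    p1 = np.sin(theta_i / 2) * np.exp(1j * phi_i) * coherent(alpha * np.exp(1j * g))
    return np.concatenate((m1, p1))                      # |-1> block, |+1> block

dpsi = (psi(g + eps) - psi(g - eps)) / (2 * eps)
Qj = 4 * (np.vdot(dpsi, dpsi).real - abs(np.vdot(psi(g), dpsi))**2)

n = abs(alpha)**2
print(Qj, 4 * n**2 * np.sin(theta_i)**2 + 4 * n)         # the two should agree
\end{verbatim}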
\subsection{Quantum and classical FI for weak measurement in phase space}
The probability of successful post-selection $p_d$, that is, of obtaining the state $|\Phi_d\rangle$, is given by \begin{equation} p_d = \frac{1+\mathcal{A}\cos(\mathcal{B}-2g)+\mathcal{C}}{2}, \label{eq:f} \end{equation}
where $\mathcal{A} = \sin\theta_i \sin\theta_f \exp(-2n\sin^2 g),\mathcal{B}=n \sin 2g + 2g+\phi_0,$ $\mathcal{C}=\cos\theta_i\cos\theta_f,$ and $n=|\alpha|^2.$ The classical FI in this post-selection process is \begin{equation} F_p = \frac{4 n^2 \mathcal{A}^2 \sin^2 \mathcal{B}}{1-\left[\mathcal{A}\cos(\mathcal{B}-2g)+\mathcal{C}\right]^2}. \label{eq:info_post_phase} \end{equation} The quantum FI for the state after successful post-selection is \begin{widetext} \begin{equation} Q_d = \frac{4}{p_d} \left\{ \frac{n}{2} (1+\mathcal{C} - \mathcal{A}\cos\mathcal{B})
+ \frac{n^2}{2} \left[1+\mathcal{C} - \mathcal{A}\cos(\mathcal{B}+2g)\right]
- \frac{1}{p_d} \frac{n^2}{4} (\cos^2 \theta_i + \cos^2 \theta_f + 2\mathcal{C} + \mathcal{A}^2\sin^2\mathcal{B})\right\}. \end{equation} The quantum FI for the state after failed post-selection is \begin{equation} Q_r = \frac{4}{1-p_d} \left\{ \frac{n}{2} (1-\mathcal{C} + \mathcal{A}\cos\mathcal{B}) + \frac{n^2}{2} \left[1-\mathcal{C} + \mathcal{A}\cos(\mathcal{B}+2g)\right]
- \frac{1}{1-p_d} \frac{n^2}{4} (\cos^2 \theta_i + \cos^2 \theta_f - 2\mathcal{C} + \mathcal{A}^2\sin^2\mathcal{B})\right\}. \end{equation} \end{widetext}
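For completeness, the closed-form expressions above can be evaluated directly; the snippet below does so for one illustrative parameter set and prints $F_{tot}=p_d Q_d+(1-p_d)Q_r+F_p$ next to the joint-state value $Q_j=4n^2\sin^2\theta_i+4n$ of Eq.~(\ref{eq:qfi_joint}).
\begin{verbatim}
import numpy as np

# Evaluate p_d, F_p, Q_d, Q_r from the closed forms above (illustrative values).
n, theta_i, theta_f, phi0, g = 4.0, np.pi / 3, np.pi / 2, 0.3, 0.05

A = np.sin(theta_i) * np.sin(theta_f) * np.exp(-2 * n * np.sin(g)**2)
B = n * np.sin(2 * g) + 2 * g + phi0
C = np.cos(theta_i) * np.cos(theta_f)

p_d = (1 + A * np.cos(B - 2 * g) + C) / 2
F_p = 4 * n**2 * A**2 * np.sin(B)**2 / (1 - (A * np.cos(B - 2 * g) + C)**2)

ci2, cf2 = np.cos(theta_i)**2, np.cos(theta_f)**2
Q_d = 4 / p_d * (n / 2 * (1 + C - A * np.cos(B))
                 + n**2 / 2 * (1 + C - A * np.cos(B + 2 * g))
                 - n**2 / (4 * p_d) * (ci2 + cf2 + 2 * C + A**2 * np.sin(B)**2))
Q_r = 4 / (1 - p_d) * (n / 2 * (1 - C + A * np.cos(B))
                       + n**2 / 2 * (1 - C + A * np.cos(B + 2 * g))
                       - n**2 / (4 * (1 - p_d)) * (ci2 + cf2 - 2 * C
                                                   + A**2 * np.sin(B)**2))

F_tot = p_d * Q_d + (1 - p_d) * Q_r + F_p
print(F_tot, 4 * n**2 * np.sin(theta_i)**2 + 4 * n)      # F_tot stays below Q_j
\end{verbatim}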
\end{document} | arXiv |
Abstract: We study the problem of decomposing a volume bounded by a smooth surface into a collection of Voronoi cells. Unlike the dual problem of conforming Delaunay meshing, a principled solution to this problem for generic smooth surfaces remained elusive. VoroCrust leverages ideas from $\alpha$-shapes and the power crust algorithm to produce unweighted Voronoi cells conforming to the surface, yielding the first provably-correct algorithm for this problem. Given an $\epsilon$-sample on the bounding surface, with a weak $\sigma$-sparsity condition, we work with the balls of radius $\delta$ times the local feature size centered at each sample. The corners of this union of balls are the Voronoi sites, on both sides of the surface. The facets common to cells on opposite sides reconstruct the surface. For appropriate values of $\epsilon$, $\sigma$ and $\delta$, we prove that the surface reconstruction is isotopic to the bounding surface. With the surface protected, the enclosed volume can be further decomposed into an isotopic volume mesh of fat Voronoi cells by generating a bounded number of sites in its interior. Compared to state-of-the-art methods based on clipping, VoroCrust cells are full Voronoi cells, with convexity and fatness guarantees. Compared to the power crust algorithm, VoroCrust cells are not filtered, are unweighted, and offer greater flexibility in meshing the enclosed volume by either structured grids or random samples. | CommonCrawl |
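The seeding step described above can be mimicked in a low-dimensional toy setting. The sketch below is not the VoroCrust algorithm of the paper (no local-feature-size estimation, no sparsity handling, and a 2D curve instead of a surface); it merely illustrates, under these simplifying assumptions, how the corners of a union of sample-centered balls yield candidate Voronoi sites on both sides of the sampled shape.

    import numpy as np

    # Toy 2D analogue of the VoroCrust seeding idea (illustration only, not the
    # paper's algorithm): sample a closed curve, center a disk of radius delta
    # at every sample, and keep the corners of the union of disks -- pairwise
    # circle intersections not strictly covered by any other disk -- as sites.
    m, delta = 40, 0.3                          # illustrative sample count / radius
    t = 2 * np.pi * np.arange(m) / m
    centers = np.c_[np.cos(t), np.sin(t)]       # samples of the unit circle (lfs = 1)

    inner, outer = [], []
    for i in range(m):
        for j in range(i + 1, m):
            d = np.linalg.norm(centers[j] - centers[i])
            if d <= 0.0 or d >= 2 * delta:      # disks do not properly intersect
                continue
            h = np.sqrt(delta**2 - (d / 2)**2)
            u = (centers[j] - centers[i]) / d
            mid = (centers[i] + centers[j]) / 2
            for s in (+1.0, -1.0):
                q = mid + s * h * np.array([-u[1], u[0]])
                if np.all(np.linalg.norm(centers - q, axis=1) >= delta - 1e-9):
                    (inner if np.linalg.norm(q) < 1.0 else outer).append(q)

    print(len(inner), len(outer))               # two site families; the Voronoi
                                                # facets they share trace the circle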
Photo: Universität Paderborn, Adelheid Rutenburges
Prof. Dr. Wolf Gero Schmidt
Theoretical Materials Physics
Head - Professor - Chair Holder
Center for Optoelectronics and Photonics (CeOPP)
Committee - Professor
Member - Professor
05251/60-2677
w.g.schmidt(at)upb(dot)de
N3.347
Faculty of Science
Dean - Professor
2018 - today
Dean of the Faculty of Science
Vice Dean of the Faculty of Science
Chair offer (W3) from Bielefeld Univ, declined
Full Prof (W3) in theoretical physics, Paderborn Univ
Assoc Prof, Massey Univ, New Zealand
Habilitation, Jena Univ
Adjunct Assistant Prof, North Carolina State Univ, USA
Postdoctoral fellow with J Bernholc, NC State U, USA
PhD in physics with Friedhelm Bechstedt, Jena Univ
Visiting researcher with WS Verwoerd, University of South Africa
Visiting researcher with GP Srivastava, Exeter Univ, UK
Diploma studies in physics, Jena Univ
Clean and Hydrogen‐Adsorbed AlInP(001) Surfaces: Structures and Electronic Properties
L.J. Glahn, I.A. Ruiz Alvarado, S. Neufeld, M.A. Zare Pour, A. Paszuk, D. Ostheimer, S. Shekarabi, O. Romanyuk, D.C. Moritz, J.P. Hofmann, W. Jaegermann, T. Hannappel, W.G. Schmidt, physica status solidi (b) (2022), 259(11), 2200308
P-Terminated InP (001) Surfaces: Surface Band Bending and Reactivity to Water
D.C. Moritz, I.A. Ruiz Alvarado, M.A. Zare Pour, A. Paszuk, T. Frieß, E. Runge, J.P. Hofmann, T. Hannappel, W.G. Schmidt, W. Jaegermann, ACS Applied Materials & Interfaces (2022), 14(41), pp. 47255-47261
Electrochemical performance of $\mathrm{KTiOAsO_4}$ (KTA) in potassium-ion batteries from density-functional theory
A. Bocchini, U. Gerstmann, T. Bartley, H. Steinrück, G. Henkel, W.G. Schmidt, Phys. Rev. Materials (2022), 6, pp. 105401
https://journals.aps.org/prmaterials/abstract/10.1103/PhysRevMaterials.6.105401
DC Ionic Conductivity in KTP and Its Isomorphs: Properties, Methods for Suppression, and Its Connection to Gray Tracking
L. Padberg, V. Quiring, A. Bocchini, M. Santandrea, U. Gerstmann, W.G. Schmidt, C. Silberhorn, C. Eigner, Crystals (2022), 12(10)
We study the DC conductivity in potassium titanyl phosphate (KTiOPO4, KTP) and its isomorphs KTiOAsO4 (KTA) and Rb1%K99%TiOPO4 (RKTP) and introduce a method by which to reduce the overall ionic conductivity in KTP by a potassium nitrate treatment. Furthermore, we create so-called gray tracking in KTP and investigate the ionic conductivity in theses areas. A local unintended reduction of the ionic conductivity is observed in the gray-tracked regions, which also induce additional optical absorption in the material. We show that a thermal treatment in an oxygen-rich atmosphere removes the gray tracking and brings the ionic conductivity as well as the optical transmission back to the original level. These studies can help to choose the best material and treatment for specific applications.
Oxygen vacancies in $\mathrm{KTiOPO_4}$: Optical absorption from hybrid DFT
A. Bocchini, U. Gerstmann, W.G. Schmidt, Phys. Rev. B (2022), 105, pp. 205118
Bound polaron formation in lithium niobate from ab initio molecular dynamics
M. Krenz, U. Gerstmann, W.G. Schmidt, Applied Physics A (2022), 128(6), 480
Polarons influence decisively the performance of lithium niobate for optical applications. In this work, the formation of (defect) bound polarons in lithium niobate is studied by ab initio molecular dynamics. The calculations show a broad scatter of polaron formation times. Rising temperature increases the share of trajectories with long formation times, which leads to an overall increase of the average formation time with temperature. However, even at elevated temperatures, the average formation time does not exceed the value of 100 femtoseconds, i.e., a value close to the time measured for free, i.e., self-trapped polarons. Analyzing individual trajectories, it is found that the time required for the structural relaxation of the polarons depends sensitively on the excitation of the lithium niobate high-frequency phonon modes and their phase relation.
Electron–Nuclear Coherent Coupling and Nuclear Spin Readout through Optically Polarized $V_B^-$ Spin States in hBN
F.F. Murzakhanov, G.V. Mamin, S.B. Orlinskii, U. Gerstmann, W.G. Schmidt, T. Biktagirov, I. Aharonovich, A. Gottscholl, A. Sperlich, V. Dyakonov, V.A. Soltamov, Nano Letters (2022), 22(7), pp. 2718-2724
Water/InP(001) from Density Functional Theory
I.A. Ruiz Alvarado, W.G. Schmidt, ACS Omega (2022), 7(23), pp. 19355-19364
Reconstructions of the As-Terminated GaAs(001) Surface Exposed to Atomic Hydrogen
M. Karmo, I.A. Ruiz Alvarado, W.G. Schmidt, E. Runge, ACS Omega (2022), 7(6), pp. 5064-5068
Quasiparticle energies and optical response of RbTiOPO4 and KTiOAsO4
S. Neufeld, A. Schindlmayr, W.G. Schmidt, Journal of Physics: Materials (2022), 5(1), 015002
Many-body perturbation theory based on density-functional theory calculations is used to determine the quasiparticle band structures and the dielectric functions of the isomorphic ferroelectrics rubidium titanyl phosphate (RbTiOPO4) and potassium titanyl arsenide (KTiOAsO4). Self-energy corrections of more than 2 eV are found to widen the transport band gaps of both materials considerably to 5.3 and 5.2 eV, respectively. At the same time, both materials are characterized by strong exciton binding energies of 1.4 and 1.5 eV, respectively. The solution of the Bethe-Salpeter equation based on the quasiparticle energies results in onsets of the optical absorption within the range of the measured data.
A density-functional theory study of hole and defect-bound exciton polarons in lithium niobate
F. Schmidt, A.L. Kozub, U. Gerstmann, W.G. Schmidt, A. Schindlmayr, Crystals (2022), 12(11), 1586
Hole polarons and defect-bound exciton polarons in lithium niobate are investigated by means of density-functional theory, where the localization of the holes is achieved by applying the +U approach to the oxygen 2p orbitals. We find three principal configurations of hole polarons: (i) self-trapped holes localized at displaced regular oxygen atoms and (ii) two other configurations bound to a lithium vacancy either at a threefold coordinated oxygen atom above or at a twofold coordinated oxygen atom below the defect. The latter is most stable and in excellent quantitative agreement with measured g factors from electron paramagnetic resonance. Due to the absence of mid-gap states, none of these hole polarons can explain the broad optical absorption centered between 2.5 and 2.8 eV that is observed in transient absorption spectroscopy, but such states appear if a free electron polaron is trapped at the same lithium vacancy as the bound hole polaron, resulting in an exciton polaron. The dielectric function calculated by solving the Bethe-Salpeter equation indeed yields an optical peak at 2.6 eV in agreement with the two-photon experiments. The coexistence of hole and exciton polarons, which are simultaneously created in optical excitations, thus satisfactorily explains the reported experimental data.
Electron polarons in lithium niobate: Charge localization, lattice deformation, and optical response
F. Schmidt, A.L. Kozub, U. Gerstmann, W.G. Schmidt, A. Schindlmayr, in: New Trends in Lithium Niobate: From Bulk to Nanocrystals, MDPI, 2022, pp. 231-248
Lithium niobate (LiNbO3), a material frequently used in optical applications, hosts different kinds of polarons that significantly affect many of its physical properties. In this study, a variety of electron polarons, namely free, bound, and bipolarons, are analyzed using first-principles calculations. We perform a full structural optimization based on density-functional theory for selected intrinsic defects with special attention to the role of symmetry-breaking distortions that lower the total energy. The cations hosting the various polarons relax to a different degree, with a larger relaxation corresponding to a larger gap between the defect level and the conduction-band edge. The projected density of states reveals that the polaron states are formerly empty Nb 4d states lowered into the band gap. Optical absorption spectra are derived within the independent-particle approximation, corrected by the GW approximation that yields a wider band gap and by including excitonic effects within the Bethe-Salpeter equation. Comparing the calculated spectra with the density of states, we find that the defect peak observed in the optical absorption stems from transitions between the defect level and a continuum of empty Nb 4d states. Signatures of polarons are further analyzed in the reflectivity and other experimentally measurable optical coefficients.
Controlled growth of ordered monolayers of N-heterocyclic carbenes on silicon
M. Franz, S. Chandola, M. Koy, R. Zielinski, H. Aldahhak, M. Das, M. Freitag, U. Gerstmann, D. Liebig, A.K. Hoffmann, M. Rosin, W.G. Schmidt, C. Hogan, F. Glorius, N. Esser, M. Dähne, Nature Chemistry (2021), pp. 828-835
Electronic structure of the Si(111)3×3R30∘−B surface from theory and photoemission spectroscopy
H. Aldahhak, C. Hogan, S. Lindner, S. Appelfeller, H. Eisele, W.G. Schmidt, M. Dähne, U. Gerstmann, M. Franz, Physical Review B (2021)
Spin Polarization, Electron–Phonon Coupling, and Zero-Phonon Line of the NV Center in 3C-SiC
H. Jurgen von Bardeleben, J. Cantin, U. Gerstmann, W.G. Schmidt, T. Biktagirov, Nano Letters (2021), 21(19), pp. 8119-8125
Surface localized phonon modes at the Si(553)-Au nanowire system
J. Plaickner, E. Speiser, C. Braun, W.G. Schmidt, N. Esser, S. Sanna, Physical Review B (2021)
InP and AlInP(001)(2 × 4) Surface Oxidation from Density Functional Theory
I.A. Ruiz Alvarado, M. Karmo, E. Runge, W.G. Schmidt, ACS Omega (2021), pp. 6297-6304
Adatom mediated adsorption of <scp>N‐heterocyclic</scp> carbenes on Cu(111) and Au(111)
M. Jain, U. Gerstmann, W.G. Schmidt, H. Aldahhak, Journal of Computational Chemistry (2021), 43(6), pp. 413-420
Potassium titanyl phosphate Z- and Y-cut surfaces from density-functional theory
S. Neufeld, A. Bocchini, W.G. Schmidt, Physical Review Materials (2021)
F. Schmidt, A.L. Kozub, U. Gerstmann, W.G. Schmidt, A. Schindlmayr, Crystals (2021), 11(5), 542
Polaronic enhancement of second-harmonic generation in lithium niobate
A.L. Kozub, A. Schindlmayr, U. Gerstmann, W.G. Schmidt, Physical Review B (2021), 104(17), 174110
Density-functional theory within a Berry-phase formulation of the dynamical polarization is used to determine the second-order susceptibility χ(2) of lithium niobate (LiNbO3). Defect trapped polarons and bipolarons are found to strongly enhance the nonlinear susceptibility of the material, in particular if localized at NbV–VLi defect pairs. This is essentially a consequence of the polaronic excitation resulting in relaxation-induced gap states. The occupation of these levels leads to strongly enhanced χ(2) coefficients and allows for the spatial and transient modification of the second-harmonic generation of macroscopic samples.
GaInP/AlInP(001) Interfaces from Density Functional Theory
L. Meier, W.G. Schmidt, physica status solidi (b) (2021), 259(1), 2100462
Understanding gray track formation in KTP: $\mathrm{Ti}^{3+}$ centers studied from first principles
A. Bocchini, C. Eigner, C. Silberhorn, W.G. Schmidt, U. Gerstmann, Phys. Rev. Materials (2020), 4, pp. 124402
Electron paramagnetic resonance study of ferroelectric phase transition and dynamic effects in a Mn2+ doped [NH4][Zn(HCOO)3] hybrid formate framework
M. Navickas, L. Giriūnas, V. Kalendra, T. Biktagirov, U. Gerstmann, W.G. Schmidt, M. Mączka, A. Pöppl, J. Banys, M. Šimėnas, Physical Chemistry Chemical Physics (2020), 22, pp. 8513-8521
EPR spectroscopy reveals the universality class and dynamic effects of the [NH4][Zn(HCOO)3] hybrid formate framework.
Vibration-Driven Self-Doping of Dangling-Bond Wires on Si(553)-Au Surfaces
C. Braun, S. Neufeld, U. Gerstmann, S. Sanna, J. Plaickner, E. Speiser, N. Esser, W.G. Schmidt, Physical Review Letters (2020), 124(14)
Spin decontamination for magnetic dipolar coupling calculations: Application to high-spin molecules and solid-state spin qubits
T. Biktagirov, W.G. Schmidt, U. Gerstmann, Physical Review Research (2020)
Tetracene Ultrathin Film Growth on Hydrogen-Passivated Silicon
J. Niederhausen, R.W. MacQueen, K. Lips, H. Aldahhak, W.G. Schmidt, U. Gerstmann, Langmuir (2020), pp. 9099-9113
A photoredox catalysed Heck reaction via hole transfer from a Ru(ii)-bis(terpyridine) complex to graphene oxide
M. Rosenthal, J. Lindner, U. Gerstmann, A. Meier, W.G. Schmidt, R. Wilhelm, RSC Advances (2020), 10(70), pp. 42930-42937
A hole transfer from an excited Ru unit towards graphene oxide significantly improved the photocatalytic activity of the complexes.
Toward Efficient Toxic-Gas Detectors: Exploring Molecular Interactions of Sarin and Dimethyl Methylphosphonate with Metal-Centered Phthalocyanine Structures
H. Aldahhak, P. Powroźnik, P. Pander, W. Jakubik, F.B. Dias, W.G. Schmidt, U. Gerstmann, M. Krzywiecki, The Journal of Physical Chemistry C (2020)(124), pp. 6090-6102
Vibrational Raman spectroscopy on adsorbate-induced low-dimensional surface structures
E. Speiser, N. Esser, B. Halbig, J. Geurts, W.G. Schmidt, S. Sanna, Surface Science Reports (2020), 75(1), 100480
Photocatalytic properties of graphene‐supported titania clusters from density‐functional theory
S. Badalov, R. Wilhelm, W.G. Schmidt, Journal of Computational Chemistry (2020), pp. 1921-1930
Photochemical Ring Opening of Oxirane Modeled by Constrained Density Functional Theory
M. Krenz, U. Gerstmann, W.G. Schmidt, ACS Omega (2020), pp. 24057-24063
Band Alignment at GaxIn1–xP/AlyIn1–yP Alloy Interfaces from Hybrid Density Functional Theory Calculations
L. Meier, C. Braun, T. Hannappel, W.G. Schmidt, physica status solidi (b) (2020), 258(2), 2000463
Free and defect-bound (bi)polarons in LiNbO3: Atomic structure and spectroscopic signatures from ab initio calculations
F. Schmidt, A.L. Kozub, T. Biktagirov, C. Eigner, C. Silberhorn, A. Schindlmayr, W.G. Schmidt, U. Gerstmann, Physical Review Research (2020), 2(4), 043002
Polarons in dielectric crystals play a crucial role for applications in integrated electronics and optoelectronics. In this work, we use density-functional theory and Green's function methods to explore the microscopic structure and spectroscopic signatures of electron polarons in lithium niobate (LiNbO3). Total-energy calculations and the comparison of calculated electron paramagnetic resonance data with available measurements reveal the formation of bound polarons at Nb_Li antisite defects with a quasi-Jahn-Teller distorted, tilted configuration. The defect-formation energies further indicate that (bi)polarons may form not only at Nb_Li antisites but also at structures where the antisite Nb atom moves into a neighboring empty oxygen octahedron. Based on these structure models, and on the calculated charge-transition levels and potential-energy barriers, we propose two mechanisms for the optical and thermal splitting of bipolarons, which provide a natural explanation for the reported two-path recombination of bipolarons. Optical-response calculations based on the Bethe-Salpeter equation, in combination with available experimental data and new measurements of the optical absorption spectrum, further corroborate the geometries proposed here for free and defect-bound (bi)polarons.
T. Biktagirov, W.G. Schmidt, U. Gerstmann, Physical Review Research (2020), 2(2)
Quasiparticle and excitonic effects in the optical response of KNbO3
F. Schmidt, A. Riefer, W.G. Schmidt, A. Schindlmayr, M. Imlau, F. Dobener, N. Mengel, S. Chatterjee, S. Sanna, Physical Review Materials (2019), 3(5), 054401
The cubic, tetragonal, and orthorhombic phase of potassium niobate (KNbO3) are studied based on density-functional theory. Starting from the relaxed atomic geometries, we analyze the influence of self-energy corrections on the electronic band structure within the GW approximation. We find that quasiparticle shifts widen the direct (indirect) band gap by 1.21 (1.44), 1.58 (1.55), and 1.67 (1.64) eV for the cubic, tetragonal, and orthorhombic phase, respectively. By solving the Bethe-Salpeter equation, we obtain the linear dielectric function with excitonic and local-field effects, which turn out to be essential for good agreement with experimental data. From our results, we extract an exciton binding energy of 0.6, 0.5, and 0.5 eV for the cubic, tetragonal, and orthorhombic phase, respectively. Furthermore, we investigate the nonlinear second-harmonic generation (SHG) both theoretically and experimentally. The frequency-dependent second-order polarization tensor of orthorhombic KNbO3 is measured for incoming photon energies between 1.2 and 1.6 eV. In addition, calculations within the independent-(quasi)particle approximation are performed for the tetragonal and orthorhombic phase. The novel experimental data are in excellent agreement with the quasiparticle calculations and resolve persistent discrepancies between earlier experimental measurements and ab initio results reported in the literature.
Water Splitting Reaction at Polar Lithium Niobate Surfaces
C. Dues, W.G. Schmidt, S. Sanna, ACS Omega (2019), pp. 3850-3859
Oxygen and potassium vacancies in KTP calculated from first principles
A. Bocchini, S. Neufeld, U. Gerstmann, W.G. Schmidt, Journal of Physics: Condensed Matter (2019), 385401
Potassium titanyl phosphate (KTP) quasiparticle energies and optical response
S. Neufeld, A. Bocchini, U. Gerstmann, A. Schindlmayr, W.G. Schmidt, Journal of Physics: Materials (2019), 2(4), 045003
The KTiOPO4 (KTP) band structure and dielectric function are calculated on various levels of theory starting from density-functional calculations. Within the independent-particle approximation an electronic transport gap of 2.97 eV is obtained that widens to about 5.23 eV when quasiparticle effects are included using the GW approximation. The optical response is shown to be strongly anisotropic due to (i) the slight asymmetry of the TiO6 octahedra in the (001) plane and (ii) their anisotropic distribution along the [001] and [100] directions. In addition, excitonic effects are very important: The solution of the Bethe–Salpeter equation indicates exciton binding energies of the order of 1.5 eV. Calculations that include both quasiparticle and excitonic effects are in good agreement with the measured reflectivity.
Excited-state band mapping and momentum-resolved ultrafast population dynamics in In/Si(111) nanowires investigated with XUV-based time- and angle-resolved photoemission spectroscopy
C.W. Nicholson, M. Puppin, A. Lücke, U. Gerstmann, M. Krenz, W.G. Schmidt, L. Rettig, R. Ernstorfer, M. Wolf, Physical Review B (2019), 99(15), 155107
Polytypism driven zero-field splitting of silicon vacancies in 6H-SiC
T. Biktagirov, W.G. Schmidt, U. Gerstmann, B. Yavkin, S. Orlinskii, P. Baranov, V. Dyakonov, V. Soltamov, Physical Review B (2018), 98(19)
Spin pairing versus spin chains at Si(553)-Au surfaces
C. Braun, U. Gerstmann, W.G. Schmidt, Physical Review B (2018), 98(12)
Impact of finite-temperature and condensed-phase effects on theoretical X-ray absorption spectra of transition metal complexes
P. Müller, K. Karhan, M. Krack, U. Gerstmann, W.G. Schmidt, M. Bauer, T.D. Kühne, Journal of Computational Chemistry (2018), pp. 712-716
Plasmon spectroscopy: Robust metallicity of Au wires on Si(557) upon oxidation
Z. Mamiyev, T. Lichtenstein, C. Tegenkamp, C. Braun, W.G. Schmidt, S. Sanna, H. Pfnür, Physical Review Materials (2018), 2(6)
Structural dynamics upon photoexcitation-induced charge transfer in a dicopper(i)–disulfide complex
M. Naumova, D. Khakhulin, M. Rebarz, M. Rohrmüller, B. Dicke, M. Biednov, A. Britz, S. Espinoza, B. Grimm-Lebsanft, M. Kloz, N. Kretzschmar, A. Neuba, J. Ortmeyer, R. Schoch, J. Andreasson, M. Bauer, C. Bressler, W.G. Schmidt, G. Henkel, M. Rübhausen, Physical Chemistry Chemical Physics (2018), pp. 6274-6286
A study of structural evolution upon photoinduced charge transfer in a dicopper complex with biologically relevant sulfur coordination.
Calculation of spin-spin zero-field splitting within periodic boundary conditions: Towards all-electron accuracy
T. Biktagirov, W.G. Schmidt, U. Gerstmann, Physical Review B (2018), 97(11)
Erratum: Optical properties of titanium-doped lithium niobate from time-dependent density-functional theory [Phys. Rev. Materials 1, 034401 (2017)]
M. Friedrich, W.G. Schmidt, A. Schindlmayr, S. Sanna, Physical Review Materials (2018), 2(1), 019902
Vibrational properties of the Au-(√3×√3)/Si(111) surface reconstruction
B. Halbig, M. Liebhaber, U. Bass, J. Geurts, E. Speiser, J. Räthel, S. Chandola, N. Esser, M. Krenz, S. Neufeld, W.G. Schmidt, S. Sanna, Physical Review B (2018), 97(3)
Temperature stabilizes rough Au/Ge(001) surface reconstructions
K. Seino, S. Sanna, W.G. Schmidt, Surface Science (2018), 667, pp. 101-104
Probing quasi-one-dimensional band structures by plasmon spectroscopy
T. Lichtenstein, Z. Mamiyev, C. Braun, S. Sanna, W.G. Schmidt, C. Tegenkamp, H. Pfnür, Physical Review B (2018), 97(16)
Electric Field Induced Raman Scattering at the Sb-InP(110) Interface: The Surface Dipole Contribution
N. Esser, W.G. Schmidt, physica status solidi (b) (2018), 1800314
Beyond the molecular movie: Dynamics of bands and bonds during a photoinduced phase transition
C.W. Nicholson, A. Lücke, W.G. Schmidt, M. Puppin, L. Rettig, R. Ernstorfer, M. Wolf, Science (2018), pp. 821-825
Ultrafast nonequilibrium dynamics offer a route to study the microscopic interactions that govern macroscopic behavior. In particular, photoinduced phase transitions (PIPTs) in solids provide a test case for how forces, and the resulting atomic motion along a reaction coordinate, originate from a nonequilibrium population of excited electronic states. Using femtosecond photoemission, we obtain access to the transient electronic structure during an ultrafast PIPT in a model system: indium nanowires on a silicon(111) surface. We uncover a detailed reaction pathway, allowing a direct comparison with the dynamics predicted by ab initio simulations. This further reveals the crucial role played by localized photoholes in shaping the potential energy landscape and enables a combined momentum- and real-space description of PIPTs, including the ultrafast formation of chemical bonds.
Unraveling the Oxidation and Spin State of Mn–Corrole through X-ray Spectroscopy and Quantum Chemical Analysis
M. Paszkiewicz, T. Biktagirov, H. Aldahhak, F. Allegretti, E. Rauls, W. Schöfberger, W.G. Schmidt, J.V. Barth, U. Gerstmann, F. Klappenberger, The Journal of Physical Chemistry Letters (2018), pp. 6412-6420
Identifying On-Surface Site-Selective Chemical Conversions by Theory-Aided NEXAFS Spectroscopy: The Case of Free-Base Corroles on Ag(111)
H. Aldahhak, M. Paszkiewicz, E. Rauls, F. Allegretti, S. Tebi, A.C. Papageorgiou, Y. Zhang, L. Zhang, T. Lin, T. Paintner, R. Koch, W.G. Schmidt, J.V. Barth, W. Schöfberger, S. Müllegger, F. Klappenberger, U. Gerstmann, Chemistry - A European Journal (2018), pp. 6787-6797
Electric Field Induced Raman Scattering at the Sb–InP(110) Interface: The Surface Dipole Contribution
N. Esser, W.G. Schmidt, physica status solidi (b) (2018)(256), 1800314
Signatures of transient Wannier-Stark localization in bulk gallium arsenide
C. Schmidt, J. Bühler, A. Heinrich, J. Allerbeck, R. Podzimski, D. Berghoff, T. Meier, W.G. Schmidt, C. Reichl, W. Wegscheider, D. Brida, A. Leitenstorfer, Nature Communications (2018), 9, pp. 2890
Zn–VI quasiparticle gaps and optical spectra from many-body calculations
A. Riefer, N. Weber, J. Mund, D.R. Yakovlev, M. Bayer, A. Schindlmayr, C. Meier, W.G. Schmidt, Journal of Physics: Condensed Matter (2017), 29(21), 215702
The electronic band structures of hexagonal ZnO and cubic ZnS, ZnSe, and ZnTe compounds are determined within hybrid-density-functional theory and quasiparticle calculations. It is found that the band-edge energies calculated on the G0W0 (Zn chalcogenides) or GW (ZnO) level of theory agree well with experiment, while fully self-consistent QSGW calculations are required for the correct description of the Zn 3d bands. The quasiparticle band structures are used to calculate the linear response and second-harmonic-generation (SHG) spectra of the Zn–VI compounds. Excitonic effects in the optical absorption are accounted for within the Bethe–Salpeter approach. The calculated spectra are discussed in the context of previous experimental data and present SHG measurements for ZnO.
New pyridinium based ionic dyes for the hydrogen evolution reaction
D.D. Konieczna, H. Biller, M. Witte, W.G. Schmidt, A. Neuba, R. Wilhelm, Tetrahedron (2017), pp. 142-149
Solving the Bethe-Salpeter equation for the second-harmonic generation in Zn chalcogenides
A. Riefer, W.G. Schmidt, Physical Review B (2017), 96(23)
Si(775)-Au atomic chains: Geometry, optical properties, and spin order
C. Braun, C. Hogan, S. Chandola, N. Esser, S. Sanna, W.G. Schmidt, Physical Review Materials (2017), 1(5)
Polaron optical absorption in congruent lithium niobate from time-dependent density-functional theory
The optical properties of congruent lithium niobate are analyzed from first principles. The dielectric function of the material is calculated within time-dependent density-functional theory. The effects of isolated intrinsic defects and defect pairs, including the NbLi4+ antisite and the NbLi4+−NbNb4+ pair, commonly addressed as a bound polaron and bipolaron, respectively, are discussed in detail. In addition, we present further possible realizations of polaronic and bipolaronic systems. The absorption feature around 1.64 eV, ascribed to small bound polarons [O. F. Schirmer et al., J. Phys.: Condens. Matter 21, 123201 (2009)], is nicely reproduced within these models. Among the investigated defects, we find that the presence of bipolarons at bound interstitial-vacancy pairs NbV−VLi can best explain the experimentally observed broad absorption band at 2.5 eV. Our results provide a microscopic model for the observed optical spectra and suggest that, besides NbLi antisites and Nb and Li vacancies, Nb interstitials are also formed in congruent lithium-niobate samples.
Efficient PAW-based bond strength analysis for understanding the In/Si(111)(8 × 2) - (4 × 1) phase transition
A. Lücke, U. Gerstmann, T.D. Kühne, W.G. Schmidt, Journal of Computational Chemistry (2017), pp. 2276-2282
LiNbO3 surfaces from a microscopic perspective
S. Sanna, W.G. Schmidt, Journal of Physics: Condensed Matter (2017), 413001
Optically excited structural transition in atomic wires on surfaces at the quantum limit
T. Frigge, B. Hafke, T. Witte, B. Krenzer, C. Streubühr, A. Samad Syed, V. Mikšić Trontl, I. Avigo, P. Zhou, M. Ligges, D. von der Linde, U. Bovensiepen, M. Horn-von Hoegen, S. Wippermann, A. Lücke, S. Sanna, U. Gerstmann, W.G. Schmidt, Nature (2017), 544, pp. 207-211
Current density analysis of electron transport through molecular wires in open quantum systems
D. Nozaki, W.G. Schmidt, Journal of Computational Chemistry (2017), 38, pp. 1685-1692
[Cu6(NGuaS)6]2+ and its oxidized and reduced derivatives: Confining electrons on a torus
M. Witte, M. Rohrmüller, U. Gerstmann, G. Henkel, W.G. Schmidt, S. Herres-Pawlis, Journal of Computational Chemistry (2017), pp. 1752-1761
On-Surface Site-Selective Cyclization of Corrole Radicals
S. Tebi, M. Paszkiewicz, H. Aldahhak, F. Allegretti, S. Gonglach, M. Haas, M. Waser, P.S. Deimel, P.C. Aguilar, Y. Zhang, A.C. Papageorgiou, D.A. Duncan, J.V. Barth, W.G. Schmidt, R. Koch, U. Gerstmann, E. Rauls, F. Klappenberger, W. Schöfberger, S. Müllegger, ACS Nano (2017), pp. 3383-3391
X-ray Spectroscopy of Thin Film Free-Base Corroles: A Combined Theoretical and Experimental Characterization
H. Aldahhak, M. Paszkiewicz, F. Allegretti, D.A. Duncan, S. Tebi, P.S. Deimel, P. Casado Aguilar, Y. Zhang, A.C. Papageorgiou, R. Koch, J.V. Barth, W.G. Schmidt, S. Müllegger, W. Schöfberger, F. Klappenberger, E. Rauls, U. Gerstmann, The Journal of Physical Chemistry C (2017), 121, pp. 2192-2200
Electron paramagnetic resonance calculations for hydrogenated Si surfaces
M. Rohrmüller, W.G. Schmidt, U. Gerstmann, Physical Review B (2017), 95(12)
Tuning the conductivity along atomic chains by selective chemisorption
F. Edler, I. Miccoli, J.P. Stöckmann, H. Pfnür, C. Braun, S. Neufeld, S. Sanna, W.G. Schmidt, C. Tegenkamp, Physical Review B (2017), 95(12)
Molecular Orbital Rule for Quantum Interference in Weakly Coupled Dimers: Low-Energy Giant Conductivity Switching Induced by Orbital Level Crossing
D. Nozaki, A. Lücke, W.G. Schmidt, The Journal of Physical Chemistry Letters (2017), pp. 727-732
Understanding band alignments in semiconductor heterostructures: Composition dependence and type-I–type-II transition of natural band offsets in nonpolar zinc-blende AlxGa1−xN/AlyGa1−yN composites
M. Landmann, E. Rauls, W.G. Schmidt, Physical Review B (2017)
Optical properties of titanium-doped lithium niobate from time-dependent density-functional theory
The optical properties of pristine and titanium-doped LiNbO3 are modeled from first principles. The dielectric functions are calculated within time-dependent density-functional theory, and a model long-range contribution is employed for the exchange-correlation kernel in order to account for the electron-hole binding. Our study focuses on the influence of substitutional titanium atoms on lithium sites. We show that an increasing titanium concentration enhances the values of the refractive indices and the reflectivity.
Consistent atomic geometries and electronic structure of five phases of potassium niobate from density-functional theory
F. Schmidt, M. Landmann, E. Rauls, N. Argiolas, S. Sanna, W.G. Schmidt, A. Schindlmayr, Advances in Materials Science and Engineering (2017), 2017, 3981317
We perform a comprehensive theoretical study of the structural and electronic properties of potassium niobate (KNbO3) in the cubic, tetragonal, orthorhombic, monoclinic, and rhombohedral phase, based on density-functional theory. The influence of different parametrizations of the exchange-correlation functional on the investigated properties is analyzed in detail, and the results are compared to available experimental data. We argue that the PBEsol and AM05 generalized gradient approximations as well as the RTPSS meta-generalized gradient approximation yield consistently accurate structural data for both the external and internal degrees of freedom and are overall superior to the local-density approximation or other conventional generalized gradient approximations for the structural characterization of KNbO3. Band-structure calculations using a HSE-type hybrid functional further indicate significant near degeneracies of band-edge states in all phases which are expected to be relevant for the optical response of the material.
Vibration eigenmodes of the Au-(5×2)/Si(111) surface studied by Raman spectroscopy and first-principles calculations
M. Liebhaber, B. Halbig, U. Bass, J. Geurts, S. Neufeld, S. Sanna, W.G. Schmidt, E. Speiser, J. Räthel, S. Chandola, N. Esser, Physical Review B (2016), 94(23)
LiNbO3 electronic structure: Many-body interactions, spin-orbit coupling, and thermal effects
A. Riefer, M. Friedrich, S. Sanna, U. Gerstmann, A. Schindlmayr, W.G. Schmidt, Physical Review B (2016), 93(7), 075205
The influence of electronic many-body interactions, spin-orbit coupling, and thermal lattice vibrations on the electronic structure of lithium niobate is calculated from first principles. Self-energy calculations in the GW approximation show that the inclusion of self-consistency in the Green function G and the screened Coulomb potential W opens the band gap far stronger than found in previous G0W0 calculations but slightly overestimates its actual value due to the neglect of excitonic effects in W. A realistic frozen-lattice band gap of about 5.9 eV is obtained by combining hybrid density functional theory with the QSGW0 scheme. The renormalization of the band gap due to electron-phonon coupling, derived here using molecular dynamics as well as density functional perturbation theory, reduces this value by about 0.5 eV at room temperature. Spin-orbit coupling does not noticeably modify the fundamental gap but gives rise to a Rashba-like spin texture in the conduction band.
LiTaO3 phonon dispersion and ferroelectric transition calculated from first principles
M. Friedrich, A. Schindlmayr, W.G. Schmidt, S. Sanna, Physica Status Solidi B (2016), 253(4), pp. 683-689
The phonon dispersions of the ferro‐ and paraelectric phase of LiTaO3 are calculated within density‐functional perturbation theory. The longitudinal optical phonon modes are theoretically derived and compared with available experimental data. Our results confirm the recent phonon assignment proposed by Margueron et al. [J. Appl. Phys. 111, 104105 (2012)] on the basis of spectroscopical studies. A comparison with the phonon band structure of the related material LiNbO3 shows minor differences that can be traced to the atomic‐mass difference between Ta and Nb. The presence of phonons with imaginary frequencies for the paraelectric phase suggests that it does not correspond to a minimum energy structure, and is compatible with an order‐disorder type phase transition.
Vibrational properties of LiNb1−xTaxO3 mixed crystals
M. Rüsing, S. Sanna, S. Neufeld, G. Berth, W.G. Schmidt, A. Zrenner, H. Yu, Y. Wang, H. Zhang, Physical Review B (2016)
Strain-induced quasi-one-dimensional rare-earth silicide structures on Si(111)
F. Timmer, R. Oelke, C. Dues, S. Sanna, W.G. Schmidt, M. Franz, S. Appelfeller, M. Dähne, J. Wollschläger, Physical Review B (2016), 94(20)
Experimental and Theoretical High-Energy-Resolution X-ray Absorption Spectroscopy: Implications for the Investigation of the Entatic State
N.J. Vollmers, P. Müller, A. Hoffmann, S. Herres-Pawlis, M. Rohrmüller, W.G. Schmidt, U. Gerstmann, M. Bauer, Inorganic Chemistry (2016), 55, pp. 11694-11706
Optical response of the Cu2S2 diamond core in Cu2II(NGuaS)2Cl2
M. Witte, B. Grimm-Lebsanft, A. Goos, S. Binder, M. Rübhausen, M. Bernard, A. Neuba, S. Gorelsky, U. Gerstmann, G. Henkel, W.G. Schmidt, S. Herres-Pawlis, Journal of Computational Chemistry (2016), 37(23-24), pp. 2181-2192
Surface vibrational Raman modes of In:Si(111) (4×1) and (8×2) nanowires
E. Speiser, N. Esser, S. Wippermann, W.G. Schmidt, Physical Review B (2016), 94(7)
Temperature-Dependent Hole Mobility and Its Limit in Crystal-Phase P3HT Calculated from First Principles
A. Lücke, F. Ortmann, M. Panhans, S. Sanna, E. Rauls, U. Gerstmann, W.G. Schmidt, The Journal of Physical Chemistry B (2016), 120, pp. 5572-5580
Inhomogeneous and Homogeneous Line Broadening of Optical Spectra of PTCDA Molecules Adsorbed at Step Edges of Alkali Halide Surfaces
A. Paulheim, C. Marquardt, H. Aldahhak, E. Rauls, W.G. Schmidt, M. Sokolowski, The Journal of Physical Chemistry C (2016), 10, pp. 11926-11937
Grand canonical Peierls transition in In/Si(111)
E. Jeckelmann, S. Sanna, W.G. Schmidt, E. Speiser, N. Esser, Physical Review B (2016), 93(24)
Rare-earth silicide thin films on the Si(111) surface
S. Sanna, C. Dues, W.G. Schmidt, F. Timmer, J. Wollschläger, M. Franz, S. Appelfeller, M. Dähne, Physical Review B (2016), 93(19)
Atomic size effects studied by transport in single silicide nanowires
I. Miccoli, F. Edler, H. Pfnür, S. Appelfeller, M. Dähne, K. Holtgrewe, S. Sanna, W.G. Schmidt, C. Tegenkamp, Physical Review B (2016)
A Bifunctional Electrocatalyst for Oxygen Evolution and Oxygen Reduction Reactions in Water
W. Schöfberger, F. Faschinger, S. Chattopadhyay, S. Bhakta, B. Mondal, J.A.A.W. Elemans, S. Müllegger, S. Tebi, R. Koch, F. Klappenberger, M. Paszkiewicz, J.V. Barth, E. Rauls, H. Aldahhak, W.G. Schmidt, A. Dey, Angewandte Chemie International Edition (2016), pp. 2350-2355
Manipulation resolves non-trivial structure of corrole monolayer on Ag(111)
S. Tebi, H. Aldahhak, G. Serrano, W. Schöfberger, E. Rauls, W.G. Schmidt, R. Koch, S. Müllegger, Nanotechnology (2016), 27, 025704
Surface induced vibrational modes in the fluorescence spectra of PTCDA adsorbed on the KCl(100) and NaCl(100) surfaces
A. Paulheim, C. Marquardt, M. Sokolowski, M. Hochheim, T. Bredow, H. Aldahhak, E. Rauls, W.G. Schmidt, Physical Chemistry Chemical Physics (2016), 18, pp. 32891-32902
We report a combined experiment-theory study on low energy vibrational modes in fluorescence spectra of perylene-3,4,9,10-tetracarboxylic acid dianhydride (PTCDA) molecules.
Impurity-Mediated Early Condensation of a Charge Density Wave in an Atomic Wire Array
H.W. Yeom, D.M. Oh, S. Wippermann, W.G. Schmidt, ACS Nano (2016), 10, pp. 810-814
GaN m-plane: Atomic structure, surface bands, and optical response
M. Landmann, E. Rauls, W.G. Schmidt, M. Neumann, E. Speiser, N. Esser, Physical Review B (2015)
Liquid Crystal (8CB) Molecular Adsorption on Lithium Niobate Z-Cut Surfaces
C. Braun, S. Sanna, W.G. Schmidt, The Journal of Physical Chemistry C (2015), pp. 9342-9346
Phonon dispersion and zero-point renormalization of LiNbO3 from density-functional perturbation theory
M. Friedrich, A. Riefer, S. Sanna, W.G. Schmidt, A. Schindlmayr, Journal of Physics: Condensed Matter (2015), 27(38), 385402
The vibrational properties of stoichiometric LiNbO3 are analyzed within density-functional perturbation theory in order to obtain the complete phonon dispersion of the material. The phonon density of states of the ferroelectric (paraelectric) phase shows two (one) distinct band gaps separating the high-frequency (~800 cm−1) optical branches from the continuum of acoustic and lower optical phonon states. This result leads to specific heat capacites in close agreement with experimental measurements in the range 0–350 K and a Debye temperature of 574 K. The calculated zero-point renormalization of the electronic Kohn–Sham eigenvalues reveals a strong dependence on the phonon wave vectors, especially near Γ. Integrated over all phonon modes, our results indicate a vibrational correction of the electronic band gap of 0.41 eV at 0 K, which is in excellent agreement with the extrapolated temperature-dependent measurements.
Defect complexes in congruent LiNbO3 and their optical signatures
Y. Li, W.G. Schmidt, S. Sanna, Physical Review B (2015)
Raman scattering efficiency in LiTaO3 and LiNbO3 crystals
S. Sanna, S. Neufeld, M. Rüsing, G. Berth, A. Zrenner, W.G. Schmidt, Physical Review B (2015)
Modeling atomic force microscopy at LiNbO3 surfaces from first-principles
S. Sanna, C. Dues, W.G. Schmidt, Computational Materials Science (2015), pp. 145-150
Mechanism for nuclear and electron spin excitation by radio frequency current
S. Müllegger, E. Rauls, U. Gerstmann, S. Tebi, G. Serrano, S. Wiespointner-Baumgarthuber, W.G. Schmidt, R. Koch, Physical Review B (2015), 92(22)
Single PTCDA molecules on planar and stepped KCl and NaCl(100) surfaces
H. Aldahhak, W.G. Schmidt, E. Rauls, Surface Science (2015), pp. 278-281
Diindenoperylene adsorption on Cu(111) studied with density-functional theory
H. Aldahhak, E. Rauls, W.G. Schmidt, Surface Science (2015), pp. 260-265
| CommonCrawl
About PAC-Bayesian bounds in learning theory
Consider PAC-Bayesian bounds used in learning theory (as defined in say section $1.2$, page $3$ of this paper, https://arxiv.org/pdf/1707.09564.pdf).
I want to know what is the precise mathematical expression that is asked to be true when one says that the "prior distribution has to be independent of the training data". Can someone kindly state that as an equation in probability theory?
This is confusing because to make sense of a notion of "independence" here one has to somehow first be able to imagine the "prior" and the "data" both as random variables on the same probability space. To the best of my knowledge the only formal notion of independence is that between $2$ random variables, both of which are mappings from the same probability space.
If, say, one had to make a list of $K$ priors and allow oneself to choose one of these priors, then there is a way to rewrite such bounds with an extra $\log(K)$ term in the numerator under the square-root on the RHS.
Now let's say that I do some auxiliary experiments with training this predictor on certain samples and realize that there is a certain prior which gives me a good RHS. Then I guess I am not allowed to use this prior in the bound, because it is now dependent on the data. But,
(a) Am I allowed to include this prior in the list of $K$ prior options? Or will that also break the assumptions of this proof?
(b) Am I allowed to use this prior in a PAC-Bayes bound for the same predictor if I have changed the data distribution from which the samples are being taken (between the auxiliary experiments and the case when the bound is being evaluated), or if I have thrown away from the support of the distribution the part of the data on which I did these auxiliary experiments?
bayesian pac-learning
gradstudent
Conceptually, there are two different types of distributions: the data distribution, i.e. so we can draw data $x\sim\mathcal{D}$, and the (various) distributions over learning models, often expressed as distributions over the parameter spaces (e.g. if you're familiar with Bayesian neural networks), so we can for example write $\theta \sim \mathcal{R}$ to draw, say, a set of random weights for a neural network.
The idea in PAC-Bayes is that you learn a distribution over predictors, $Q$, so that if you draw a random predictor $f_\theta\sim Q$ (which really means $\theta\sim Q$ I suppose but I'm following their notation), then $f_\theta$ should perform well on the data. In other words, $Q$ depends on the training data, $T=\{x_i\}_i,\,x_i\sim\mathcal{D}$. We can think of this as assuming $Q$ has parameters $\phi$, and we are learning $\phi$ from $T$. Thus, $Q$ is dependent on $T$, and thus $f_\theta\sim Q$ is dependent on it as well.
In the paper, they consider a special case, where $f_{w+u}\sim Q$, where $w$ is learned but presumably deterministic, and $u$ is random (so I'd write $w+u\sim Q$). Consider the joint distribution $p(\theta, T)$ where $\theta=w+u$. Because $\theta$ is learned, it is dependent on $T$, so we cannot say $p(\theta,T)=p(\theta|T)p(T)=p(\theta)p(T)$.
Now what do you do if you haven't seen any training data yet? You still want to be able to draw random predictors. So, as a Bayesian :), we define a prior $P$, so that we can draw $f_\psi \sim P$, where $P$ has no dependence on the data $T$, i.e. it is not learned. So we can write $p(\psi,T)=p(\psi)p(T)$, because $T$ and $\psi$ are independent.
This is confusing because to make sense of a notion of "independence" here one has to somehow first be able to imagine the "prior" and the "data" both as random variables on the same probability space. To the best of my knowledge the only formal notion of independence is that between 2 random variables, both of which are mappings from the same probability space.
The prior is not a random variable, it is a distribution. The key is that there are two spaces, the set of data $\mathcal{X}$ and that of model parameters $\Phi$, and there are distributions ($\mathcal{D}$ and $Q$ & $P$) on those spaces. Our notion of independence is on one space, if you want to think of it that way: it is on the product space, i.e. $ S= \mathcal{X}\times \Phi$, so that we can define joint distributions like $p(\psi,T)$ or $p(\theta,T)$. At least that's how I see it.
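To state the requirement as a single equation (this is just my formalization, not a quote from the paper): viewing the prior's random draw $\psi$ and the training sample $T$ as random variables on the product space $S$ above, "the prior is independent of the training data" means $\Pr[\psi \in A,\ T \in B] = \Pr[\psi \in A]\cdot\Pr[T \in B]$ for all measurable $A\subseteq\Phi$ and $B\subseteq\mathcal{X}^n$ (with $n$ the number of training points), i.e. exactly the density factorization $p(\psi,T)=p(\psi)p(T)$ written above. In practice this is ensured simply by fixing $P$ through a rule that never looks at $T$, e.g. by choosing it before the sample is drawn.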
Am I allowed to include this prior in the list of K prior options? Or will that also break the assumptions of this proof?
I'm not sure about this; I don't know the theorem you are talking about. But for this theorem, or a modification of it, I think it will break the proof assumptions: it would let you put $Q$ itself in the list, for example, which seems problematic.
However, you might be interested in papers doing this: e.g., Parrado-Hernandez et al, PAC-Bayes Bounds with Data Dependent Priors
Am I allowed to use this prior in a PAC-Bayes bound for the same predictor if I have changed the data distribution from which the samples are being taken (between the auxiliary experiments and the case when the bound is being evaluated), or if I have thrown away from the support of the distribution the part of the data on which I did these auxiliary experiments?
Interesting question. I think this is possible but with different bounds. Specifically, maybe you could think of this as an instance of domain adaptation (for which there is a lot of work on learning bounds, which contain an additional term based on the difference between the old and new distributions). Note that PAC-Bayes bounds in the domain adaptation context (e.g., Germain et al, A New PAC-Bayesian Perspective on Domain Adaptation) still utilize a prior fixed before seeing the source or the target domains. You cannot escape this, usually, in a truly Bayesian method. What I am suggesting is considering the change in distribution used in the method you proposed as a form of domain adaptation. I'm not an expert in this area though. :)
Edit in response to the comments:
what is the fundamental mathematical difference between an RV and a distribution?
I think I will merely paraphrase a section from the Wikipedia article on RVs, which adequately describes it (italics mine):
The domain of a random variable is a sample space, which is interpreted as the set of possible outcomes of a random phenomenon.
A random variable has a probability distribution, which specifies the probability of its values. Random variables can be discrete, endowed with a probability mass function; or continuous, via a probability density function; or a mixture of both types.
Two random variables with the same probability distribution can still differ in terms of their associations with, or independence from, other random variables. The realizations of a random variable, that is, the results of randomly choosing values according to the variable's probability distribution function, are called random variates.
Re "The prior is not a random variable, it is a distribution": so what is the difference between a distribution and a random variable? Isn't it true that for any distribution you can describe there exists a random variable? – Kirk Walla Feb 27 '20 at 16:31
@KirkWalla Of course every RV has a distribution and vice versa. But they're not exactly the same object. Just like samples from a distribution (i.e., instantiations of an RV) are not the same as the RV itself. I felt this semantic distinction may have been confusing the OP. You can consider independence for RVs (or equivalently for their distributions using e.g. densities) separately, but mixing and matching the nomenclature/notation seemed to be leading to confusion. – user3658307 Feb 28 '20 at 21:10
@user3658307 I kind of understand what you might be trying to say, but it is still not very clear to me: what is the fundamental mathematical difference between an RV and a distribution? – Kirk Walla Feb 28 '20 at 23:39
@KirkWalla I've edited in a quote from Wikipedia which might help. – user3658307 Feb 29 '20 at 0:00
| CommonCrawl
Giovanni Battista Pisani
Giovanni Battista Pisani, also known as Gio. Battista Pisani, was a 17th-century Genoese mathematician.[1]
Biography
He wrote The First Book of Modern Cursive Letters (Il primo libro di lettere corsive moderne, 1641) about calligraphy,[2] followed by other educational works, Arithmetic Memorial (Memoriale aritmetico, 1644) and Arithmetic Garden (Giardino aritmetico, 1646), intended to solve arithmetic problems that were especially related to merchant activity.[3]
Works
• Il primo libro di lettere corsive moderne (in Italian). Genoa: Stampato appresso l'autore. 1641.
• Giardino aritmetico: nel quale con brevità, e facilità non più usata, sciogliesi ogni più intricato laberinto de' conti mercantili (in Italian). Milan: Lodovico Monza. 1646.
• Memoriale aritmetico: per indirizzo et aiuto di chi desidera con ogni facilità e sicurezza apprendere la prattica del conteggiare (in Italian). Venice: Stefano Curti. 1686 [1644].
References
1. Cerboni, Giuseppe (1887). Sur l'importance d'unifier les études de la comptabilité (in French). Rome: Imprimerie héritiers Botta. p. 132.
2. Whalley, Joyce Irene (1980). The Pen's Excellencie: Calligraphy of Western Europe and America. Tunbridge Wells: Midas Books. pp. 188, 205.
3. Atti della Accademia ligure di scienze e lettere (in Italian). Genoa: Accademia ligure di scienze e lettere. 2003. p. 241.
| Wikipedia |
What is the sum of the positive whole number divisors of 210?
The prime factorization of $210$ is $2 \cdot 3 \cdot 5 \cdot 7$. It follows that the sum of the divisors of $210$ is equal to $(1 + 2)(1 + 3)(1+5)(1+7)$, as each factor of $210$ is represented when the product is expanded. It follows that the answer is equal to $3 \cdot 4 \cdot 6 \cdot 8 = \boxed{576}$. | Math Dataset |
\begin{document}
\title{Three Dimensional Optimization of Scaffold Porosities for Bone Tissue Engineering}
\author{
Patrick Dondl \\
University of Freiburg \\
\texttt{[email protected]}
\And
Marius Zeinhofer \\
University of Freiburg\\
\texttt{[email protected]} }
\maketitle
\begin{abstract}
We consider the scaffold design optimization problem associated to the three dimensional, time dependent model for scaffold mediated bone regeneration considered in \cite{dondl2021efficient}. We prove existence of optimal scaffold designs and present numerical evidence that optimized scaffolds mitigate stress shielding effects from exterior fixation of the scaffold at the defect site. \end{abstract} \keywords{Scaffold Mediated Bone Growth, Optimizing Bone Scaffolds, PDE constrained optimization, Optimal Control}
\section{Introduction} In this work we extend our previously proposed model from \cite{dondl2021efficient} for bone regeneration in the presence of a bioresorbable porous scaffold to include a scaffold architecture optimization problem that allows geometry and patient dependent optimal scaffold designs. We prove the existence of optimal scaffold density distributions under assumptions on the model's geometry and boundary conditions that are realistic for applications, i.e., non-smooth domains and mixed Dirichlet-Neumann boundary conditions. Furthermore, we present numerical simulations that indicate how to mitigate the negative impact of stress shielding on the bone regeneration process that appears in vivo due to the external fixation of the scaffold at the defect site, as documented in, e.g., \cite{sumner1992determinants, huiskes1992relationship, behrens2008numerical, arabnejad2017fully} in the context of total hip arthroplasty or \cite{viateau07, Terjesen09} for femoral defects.
Our model was previously proposed in \cite{poh2019optimization} and analysed in \cite{dondl2021efficient}. Its essential processes are an interplay between the mechanical and biological environment which we model by a coupled system of PDEs and ODEs. The mechanical environment is represented by a linear elastic equation and the biological environment through reaction-diffusion equations as well as logistic ODEs, modelling signalling molecules and cells/bone respectively. Material properties are incorporated using homogenized quantities not resolving any scaffold microstructure. This makes the model efficient in computations and thus allows to solve the scaffold architecture optimization problem numerically with manageable computational cost.
The article is organized as follows. In the following section, we provide a brief introduction to tissue engineering for the treatment of severe bone defects and present our computational model and the corresponding scaffold optimization problem. Then, we discuss its weak formulation in Section~\ref{section:mathematical_formulation}, followed by the presentation of the main analytical results in Section~\ref{section:main_results} together with an outline of their proofs. We proceed with the discussion of the numerical simulations in Section~\ref{section:simulations}. The detailed proofs of the main analytical results are provided in Appendix~\ref{section:proofs_of_the_main_results}.
\subsection{Scaffold Mediated Bone Growth}\label{section:scaffold_mediated_bone_growth} The treatment of critical-sized bone defects ($>$25 mm) and restoration of skeletal functions is challenging using current treatment options, see \cite{nauth2018critical}. Non-unions of the defect, i.e.,\ when no bridging is achieved after $>$9 months and healing stagnates for 3 months, pose severe problems for patients, as discussed in \cite{calori2017non}, and have a relevant financial dimension. For the UK, \cite{stewart2019fracture} estimates the annual healthcare costs at $\pounds 320$ million. Further, healing prospects can be worsened by comorbidities such as diabetes, which is associated with compromised bone regeneration capacity, see \cite{marin2018impact}.
Recent research illustrates the potential of porous, bio-resorbable support structures as temporary support. These structures are called scaffolds and are implanted in the defect site to provide stability and to allow vascularization and guidance for new bone formation. Promising results were recently obtained in vivo and in clinical studies. We refer to \cite{petersen2018biomaterial, cipitria2012porous, paris2017scaffold, pobloth2018mechanobiologically}. These works also indicate that material choice and scaffold design are critical variables for a successful healing outcome, yet they are at present not fully understood.
Several objectives need to be considered for the scaffold design, including (a) the pore size, porosity and shape of the microstructure, which have a significant influence on vascularization, cell proliferation and cell differentiation; (b) the mechanical properties of the scaffold, which need to guarantee a proper strain distribution within the defect site in order for bone to grow; (c) information concerning comorbidities of the patient that imply reduced bone growth or bone density, for example caused by diabetes, which needs to be incorporated in the scaffold design. Hence, designing scaffolds that are optimized for the patient and defect site at hand is of fundamental importance. With the possibilities of additive manufacturing, personalized scaffold designs are within reach.
So far, the question of optimal scaffold design has been dominated by trial-and-error approaches. However, this workflow is expensive and prohibits patient specific designs. Topology optimization techniques have successfully been exploited for optimal design questions, yielding scaffolds that meet elastic optimality with given porosity or fluid permeability, as shown in \cite{dias2014optimization, coelho2015bioresorbable, lin2004novel, guest2006optimizing, challis2012computationally, kang2010topology,Wang:2016js, dondl_bone_shape_opt}. However, these methods usually lack the ability to resolve the time dependence of the elastic moduli in scaffold mediated bone growth, which is integral to an optimal scaffold design.
Based on the previous studies \cite{poh2019optimization, dondl2021efficient}, we propose a PDE constrained optimization problem for the optimal porosity (or, equivalently, density) distribution of a scaffold. The advantages of optimized scaffolds include (a) the mitigation of stress shielding effects, which are caused by external fixation of the defect site and result in areas of low stress within the scaffold. This leads to poor bone regeneration, as mechanical stimulus is indispensable for bone growth. (b) Optimized scaffolds can be designed patient dependent by altering the model's parameters. It is straightforward to include into the model a reduced bone regeneration capability due to, e.g., diabetes.
Note that the model proposed in \cite{dondl2021efficient} does not resolve the porous micro-structure of the scaffold design, but uses coarse-grained values instead, i.e., values averaged over a volume representative of the scaffold microstructure (a representative volume element, RVE), and thus optimizes only these macroscopic quantities. In a scaffold based on a unit cell design, the scaffold volume fraction (or equivalently, the porosity) changes on a larger length-scale than the unit cell design. Likewise, the other quantities of the model can be viewed as locally averaged values. We stress that this homogenization viewpoint of the model is essential for the feasibility of the optimization problem and we refer to \cite{dondl2021efficient} for more information concerning this viewpoint. The resulting optimal scaffold porosity distributions can easily be introduced in a 3d printable, periodic microstructure based, scaffold design: one simply considers a one-parameter family of microstructures, where the thickness of struts or surfaces in the microstructure is given by the parameter. The parameter can then be chosen non-constant over the scaffold domain such that at each periodic unit cell the correct scaffold density is recovered as an average.
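As a purely illustrative example of this last step (an idealized geometry, not necessarily the microstructure used in applications), consider a simple cubic unit cell of side length one containing three orthogonal struts of square cross-section with width $w\in[0,1]$. By inclusion-exclusion, the solid volume fraction of the cell is
\begin{equation*}
	\rho = 3w^2 - 2w^3,
\end{equation*}
which increases monotonically from $0$ to $1$ as $w$ ranges over $[0,1]$. A locally varying optimal density $\rho(x)$ can thus be realized by solving this cubic equation for $w$ in each unit cell and printing the corresponding strut width.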
The main contributions of this article are as follows. We analyze the optimal scaffold design problem mathematically and prove the existence of optimal scaffold designs under realistic assumptions on the geometry of the computational domain, the boundary conditions and functional relationships in Section~\ref{section:main_results}. The existence of such scaffolds is a necessary prerequisite for a successful numerical treatment of the optimization problem. In Section~\ref{section:simulations} we then investigate numerically how optimized scaffold designs mitigate the negative impacts of stress shielding. Our findings suggest that our three dimensional optimization routine is able to successfully mitigate adverse stress shielding effects that may arise from external scaffold fixation.
\subsection{The System of Equations}\label{section:the_system_of_equations} The underlying paradigm of the model is that an interplay of the biological and the mechanical environment is responsible for bone growth, where the mechanical environment is described through displacements and strains and the biological environment through bio-active molecules (signalling molecules) and different cell types. The model is a coupled system of evolution equations composed of a linear elastic equilibrium equation for every point in time, diffusion equations for the bio-active molecules and ordinary differential equations for the concentration of osteoblasts and the volume fraction of bone. The model considered here is a concrete instance of the more general model proposed in \cite{dondl2021efficient}.
By $\Omega \subset \mathbb{R}^3$ we denote the computational domain, i.e.,\ the bone defect site. We let $I = [0,T]$ be a finite time interval. On $\Omega$ we keep track of the local scaffold volume fraction which we call $\rho(x)$, where $x\in\Omega$. Therefore the porosity $\theta$ is $\theta(x) = 1 - \rho(x)$, but we use only $\rho$. We do not resolve a time dependency for $\rho$. This is due to the experimental findings in \cite{pitt1981aliphatic} which have shown that, in the time-frame of two years, PCL degrades largely due to bulk erosion. Only the molecular mass decreases, which we model by the exponential decay $\sigma (t) = e^{-k_1 t}$. Thus the product $\rho(x)\cdot\sigma(t)$ quantifies the mechanical properties of PCL over time and space. We denote the bone volume fraction averaged over an RVE by $b(t,x)$; the variables $b, \sigma$ and $\rho$ then determine the material properties of the bone-scaffold composite. We work in the linear elastic regime and use the notation $\mathbb{C}(\rho,\sigma,b)$ for the elastic tensor capturing the material properties.
Besides isotropy and ellipticity, we assume little for the tensor $\mathbb{C}(\rho,\sigma,b)$. Choosing a microstructure allows one to explicitly specify $\mathbb{C}(\rho, \sigma, b)$ in numerical applications. By $u(t,x)$ we denote the displacement field satisfying the mechanical equilibrium equation, see \eqref{StrongElasticEquation}. We denote the strain by $\varepsilon(u)$.
We represent the biological environment through two bio-active molecules $a_1(t,x)$ and $a_2(t,x)$, which should be viewed as endogenous angiogenic and osteoinductive factors. These molecules diffuse depending on the scaffold density $\rho$, which we model by $D(\rho)$ in the equation \eqref{StrongDiffusionEquation}. We leave this as an abstract functional relationship for the same reasons as discussed in the context of the elastic tensor. Further, we assume exponential decay of the bio-active molecules as well as their production in the presence of strain and a local density of osteoblast cells, which is denoted by $c(t,x)$. Essential for the production of growth factors is the mechanical stimulus $S(\varepsilon(u))$. The quantity $S(\varepsilon(u))$ is derived from the strain; admissible choices include the strain's magnitude or octahedral shear strains. We allow flexibility in the choice of $S$, see also the discussion in Section \ref{section:mathematical_formulation}. The strain dependent reaction term is motivated by Wolff's law for bone remodeling, see \cite{wolff1892gesetz}. Strain as a driving force for bone regeneration is also supported by more recent work, for example in \cite{ruff2006s}. Note that the concentration of bio-active molecules is normalized to unity in healthy tissue. This dictates the decay and production rates in concrete simulations.
The equation \eqref{StrongCellODE} governs the production of bone-forming cells (here: osteoblasts), which is modeled by logistic growth with a driving factor depending on both bio-active molecules $a_1$ and $a_2$ and a proliferation term $(1+k_7c)$. The factor $1-c(1-\rho)^{-1}$ encodes the osteoblast carrying capacity and implies that the osteoblast density is bounded by the amount of available space, i.e., the space not filled by the scaffold. We neglect diffusion in this equation as osteoblasts diffuse at a significantly lower rate than the bio-active molecules. Modeling only one cell type in our present model is a simplification; an extension of the model is easily feasible. The equation for bone growth \eqref{StrongODE} is similar to the one for the osteoblast concentration. We remark that osteoblasts and bone do not compete for space; this encodes the assumption that, e.g., $b=1$ and $c=1$ means that osteoblasts reside ``in saturation'' within healthy bone. Our system of equations is \begin{align}
0 &= \operatorname{div}\big(
\mathbb{C}(\rho,\sigma,b)\varepsilon(u)
\big) \label{StrongElasticEquation}
&\parbox{14em}{(mechanical equilibrium)}
\\
d_ta_i
&=
\label{StrongDiffusionEquation}
\operatorname{div}\big(
D(\rho)\nabla a_i
\big)
+
k_{2,i}S(\varepsilon(u)) c
-
k_{3,i}a_i
&\parbox{14em}{(diffusion, generation, and decay of $i=1,2$ bio-molecules)}
\\
d_tc
&=
\label{StrongCellODE}
k_6a_1a_2(1+k_7c)\bigg(1 - \frac{c}{1-\rho}
\bigg)
&\parbox{14em}{(osteoblast generation)}
\\
d_tb
&=
\label{StrongODE}
k_4a_1c\bigg(
1-\frac{b}{1-\rho}
\bigg)
&\parbox{14em}{(bone regeneration driven by $a_1$ and $c$).}
\end{align}
In the above system $k_1,k_{2,i}, k_{3,i}, k_4, k_6, k_7\geq0$, $i=1,2$ are constants that need to be determined from experiments, compare to Section \ref{section:simulations} where we discuss certain choices. The functional relationships $\mathbb{C}, D(\rho)$ and $S(\cdot)$ are all required to satisfy certain technical assumptions that guarantee the well-posedness of the above system. We discuss this in detail in Section \ref{section:mathematical_formulation}. For concrete examples of the functional relationships we refer to \cite{dondl2021efficient}.
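For orientation only, a simple family of choices consistent with the ellipticity and boundedness assumptions of Section~\ref{section:mathematical_formulation} (and not the calibrated relationships from \cite{dondl2021efficient}) is
\begin{equation*}
	\mathbb{C}(\rho,\sigma,b) = \big( \epsilon_0 + E_s\,\rho\,\sigma(t) + E_b\, b \big)\,\mathbb{C}_1, \qquad D(\rho) = d_0\,(1-\rho)\,\operatorname{Id}, \qquad S(\varepsilon(u)) = |\varepsilon(u)|,
\end{equation*}
with a fixed isotropic elasticity tensor $\mathbb{C}_1$, constants $\epsilon_0, E_s, E_b, d_0 > 0$ and the Frobenius norm $|\cdot|$. Since $0\leq \rho\,\sigma(t) \leq 1$, $0\leq b\leq 1$ and $\rho$ is bounded away from $1$, these choices are uniformly elliptic and bounded; they are meant purely as an illustration of the abstract requirements.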
Finally, we need to specify boundary conditions. For the elastic equilibrium equation we allow mixed boundary conditions including the limiting cases of a pure displacement boundary condition and a pure stress boundary condition. As for the bio-active molecules we assume that these are in saturation, i.e., $a(t,x) = 1$ adjacent to bone and on the rest of the boundary of $\Omega$ we assume no-flux boundary conditions. For the initial time-point we propose $a_i(0,x) = a_{i,0} = 0$, for $i=1,2$ inside of $\Omega$. This choice reflects the scenario of a scaffold that is not preseeded with exogenous growth factors. However, different choices of $a_{i,0}$ are admissible and allow the model to cover e.g., pre-seeding with osteoinductive factors. Finally, at the initial time we assume that no osteoblasts and no regenerated bone are present inside the domain of computation. In formulas, it holds for all $i = 1,2$
\begin{align}
a_i(0,x)\label{InitialMolecules}
&=
0
&\parbox{18em}{for all $x\in\Omega$}
\\
a_i(t,x)\label{BoundaryDirichletMolecules}
&=
1
&\parbox{18em}{for all $t\in I$, $x$ adjacent to healthy bone}
\\
D(\rho)\nabla a_i(t,x)\cdot\eta
&=
0
&\parbox{18em}{for all $t\in I$, $x$ not adjacent to healthy bone} \label{BoundaryNeumannMolecules}
\\
\big(
\mathbb{C}(\rho,\sigma,b)\varepsilon(u(t,x))
\big)\cdot\eta
&=
g_N(x)
&\parbox{18em}{on the Neumann boundary of $\Omega$}
\\
u(t,x)
&=
g_D(x)
&\parbox{18em}{on the Dirichlet boundary of $\Omega$}
\\
c(0,x) = b(0,x)\label{InitialBone}
&=
0
&\parbox{18em}{for all $x\in \Omega$.}
\end{align}
One may also consider Robin type boundary conditions for the diffusion equations instead of equation \eqref{BoundaryNeumannMolecules}. The model allows for a time dependent choice of the mechanical loading $g_D$ and $g_N$. Due to the long regeneration time horizon of approximately $12$ months, however, it is not expedient to resolve very short time-scales of, e.g., the mechanics of physical therapy. Instead, we consider suitably time-averaged loading conditions here. \subsection{The Optimization Problem}\label{section:the_optimization_problem}
In the system \eqref{StrongElasticEquation} - \eqref{StrongODE} above, the function $\rho$, i.e., the scaffold's volume fraction, is a design parameter -- also called control variable -- that we can control in applications. For example, a given scaffold volume fraction distribution could be additively manufactured. Given a certain control variable $\rho$, we denote the solution of the system \eqref{StrongElasticEquation} - \eqref{StrongODE} by
\begin{equation*}
y_\rho \coloneqq (u_\rho,a^1_\rho, a_\rho^2,c_\rho,b_\rho)
\end{equation*}
to stress the dependency on $\rho$. We will also use the notation $\phi(\rho) = y_\rho$ and refer to $\phi$ as the solution operator of the system \eqref{StrongElasticEquation} - \eqref{StrongODE}. Depending on the state $y_\rho$, we can measure the control variable's performance by the value of an objective function $J$ evaluated at $\rho$ and $y_\rho$. We are interested in minimizing or maximizing the objective function over the set of admissible control variables. In other words, we are interested in the optimization problem of finding
\begin{equation}\label{eq:optimization_problem_heuristically}
\underset{\rho}{\operatorname{argmin}}J(\rho,y_\rho) \quad \text{subjected to} \quad \rho\in P,
\end{equation}
where the set $P$ encodes for example that $\rho$ takes values in the unit interval (necessary for a reasonable volume fraction). The fact that corresponding to $\rho$, we consider the solution $y_\rho$ makes this a PDE-constrained optimization problem and $\rho \in P$ introduces box constraints on the control variable. The concrete form of $J$ is an engineering choice. For instance, the amount of regenerated bone at a certain time-point in the healing process should be maximized. Another alternative we pursue is to maximize the temporal minimum of the elastic modulus. In the case of a hard load for the elastic equation, the elastic modulus at a time-point $t$ is proportional to the elastic energy $\mathcal{E}(t)$, i.e.,
\begin{equation*}
\mathcal{E}_\rho(t) = \frac12\int_\Omega \mathbb{C}(\rho,\sigma(t),b(t))\varepsilon(u(t)):\varepsilon(u(t))\mathrm dx,
\end{equation*}
where $u$ and $b$ solve the system \eqref{StrongElasticEquation} - \eqref{StrongODE} corresponding to $\rho$. The minimum of $\mathcal{E}$ over the whole regeneration process describes the weakest state of the bone-scaffold structure during healing. This gives rise to the objective function
\begin{equation*}
\hat J (\rho) = \min_{t\in I}\mathcal{E}_\rho(t)
\end{equation*}
and the \emph{maximization} problem of finding
\begin{equation*}
\rho^* \in \underset{\rho \in P}{\operatorname{argmax}}\hat J(\rho).
\end{equation*}
The set $P$ encodes pointwise constraints on $\rho$, i.e., the necessity of enforcing $\rho(x) \in [0,1]$ in order to be a meaningful volume fraction. The notation $\hat J$ instead of $J$ is chosen to indicate that the variables $u$ \& $b$ appearing in the definition of $\mathcal{E}_\rho$ are solving the system \eqref{StrongElasticEquation} - \eqref{StrongODE}. Usually, $\hat J$ is called the reduced objective function to distinguish it from the objective function $J$ that does not require $u$ and $b$ to solve the PDE system. If we use a soft load instead of a hard load, the elastic modulus is proportional to the inverse of the elastic energy, hence the objective function becomes
\begin{equation*}
\hat J(\rho) = \max_{t\in I}\mathcal{E}_\rho(t)
\end{equation*}
and the optimization consists of finding
\begin{equation*}
\rho^* \in \underset{\rho \in P}{\operatorname{argmin}}\hat J(\rho),
\end{equation*}
i.e., is a \emph{minimization} problem.
\begin{remark}
The proposed objective functions are not smooth as they involve minimizing or maximizing over $t\in I$. For a numerical implementation, one might therefore approximate the minimum or maximum functional by an $L^p(I)$ norm with a large negative or positive value of $p$, respectively.
\end{remark}
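To make the preceding remark concrete, one possible smooth surrogate (stated here only as an illustration) is the power mean
\begin{equation*}
	\min_{t\in I}\mathcal{E}_\rho(t) \approx \bigg( \frac{1}{|I|}\int_I \mathcal{E}_\rho(t)^{-p}\,\mathrm dt \bigg)^{-1/p}, \qquad \max_{t\in I}\mathcal{E}_\rho(t) \approx \bigg( \frac{1}{|I|}\int_I \mathcal{E}_\rho(t)^{p}\,\mathrm dt \bigg)^{1/p},
\end{equation*}
for a large exponent $p\geq 1$; both expressions are differentiable in $\mathcal{E}_\rho$ as long as $\mathcal{E}_\rho$ stays positive, and they converge to the minimum and maximum, respectively, as $p\to\infty$.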
Another choice of objective function is to consider the amount of regenerated bone after a given time $T$. This results in the definition
\begin{equation*}
\hat J(\rho) = \int_\Omega b(T)\mathrm dx
\end{equation*}
\begin{remark}
Care needs to be taken with respect to the functional relationships in the system \eqref{StrongElasticEquation} - \eqref{StrongODE} when choosing the amount of regenerated bone as an objective. This requires an adequate choice of $S(\cdot)$. If $S(\cdot)$ is chosen to be the Frobenius norm, the above objective function promotes very weak scaffolds as these lead to high strains and high bone growth. A more sensible choice for $S(\cdot)$ in this case is to use a filter, i.e., only strains with a certain range of magnitude lead to non-vanishing values of $S(\cdot)$.
\end{remark}
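To illustrate the filter mentioned in the preceding remark, one conceivable choice (given only as an example, not as the stimulus function used in our simulations) is the hat-shaped window
\begin{equation*}
	S(\varepsilon(u)) = S_0\,\max\bigg\{0,\ 1 - \frac{\big|\,|\varepsilon(u)| - s_0\,\big|}{w}\bigg\},
\end{equation*}
which vanishes unless the strain magnitude lies within a distance $w$ of a target value $s_0$ and is bounded by $S_0$. Being Lipschitz and bounded, such a choice is compatible with the assumptions on $S(\cdot)$ stated in Section~\ref{section:mathematical_formulation}.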
\section{Mathematical Formulation}\label{section:mathematical_formulation}
\subsection{Notation and Preliminaries}\label{section:notation_and_preliminaries} By $X$ we usually denote a generic Banach space and with $X^*$ we denote its dual space. The dual pairing for $f\in X^*$ and $x\in X$ is denoted by $\langle f,x \rangle_X$. If a sequence $(x_n)_{n\in\mathbb{N}}$ converges weakly in $X$ to $x$ we write $x_n\rightharpoonup x$. For an interval $I=[0,T]$ and $p\in[1,\infty]$ we denote the Bochner space of $p$-integrable functions with values in $X$ by $L^p(I,X)$, see for instance \cite{diestel1977vector}. Further, we write $W^{1,p}(I,X,Y)$ for the vector valued Sobolev space consisting of functions $u\in L^p(I,X)$ with distributional derivative $d_tu \in L^p(I,Y)$ where $X$ and $Y$ are Banach spaces with $X\hookrightarrow Y$, see for instance \cite{boyer2012mathematical}.
We say a bounded, open set $\Omega\subset\mathbb{R}^d$ is a Lipschitz domain if $\overline{\Omega}$ is a Lipschitz manifold with boundary, see \cite[Definition 1.2.1.2]{grisvard2011elliptic}. In the following we will denote the cube $[-1,1]^d\subset\mathbb{R}^d$ by $Q$, its half $\{x\in Q\mid x_d <0\}$ by $Q_-$, the hyperplane $\{ x\in Q \mid x_d = 0 \}$ by $\Sigma$ and $\{x\in\Sigma\mid x_{d-1}<0\}$ by $\Sigma_0$. The following definition is due to Gr\"oger, see \cite{groger1989aw}. \begin{definition}[Gr\"oger Regular Sets]\label{definition:groger_regular_sets}
Let $\Omega\subset\mathbb{R}^d$ be bounded and open and $\Gamma\subset\partial\Omega$ a relatively open set. We call $\Omega\cup\Gamma$ Gr\"oger regular, if for every $x\in\partial\Omega$ there are open sets $U,V\subset\mathbb{R}^d$ with $x\in U$, and a bijective, bi-Lipschitz map $\phi:U\to V$, such that $\phi(x) = 0$ and $\phi(U\cap (\Omega\cup\Gamma) )$ is either $Q_-$, $Q_-\cup\Sigma$ or $Q_-\cup\Sigma_0$. \end{definition} It can easily be seen that a Gr\"oger regular set $\Omega$ (no matter the choice $\Gamma \subset \partial\Omega$) is a Lipschitz domain, see \cite[Theorem 5.1]{haller2009holder}. The requirement of Gr\"oger regularity is very mild and all applications we have in mind fall in this category. This claim is justified by the following useful characterization of Gr\"oger regular sets in two and three dimensions that allows one to check Gr\"oger regularity almost ``by appearance''. The results are due to \cite{haller2009holder}. \begin{theorem}[Gr\"oger Regular Sets in 2D, Theorem 5.2 in \cite{haller2009holder}]\label{theorem:groger_regular_sets_in_2d}
Let $\Omega \subset \mathbb{R}^2$ be a Lipschitz domain and $\Gamma\subset\partial\Omega$ be relatively open. Then $\Omega\cup\Gamma$ is Gr\"oger regular if and only if $\overline{\Gamma}\cap(\partial\Omega\setminus\Gamma)$ is finite and no connected component of $\partial\Omega\setminus\Gamma$ consists of a single point. \end{theorem} \begin{theorem}[Gr\"oger Regular Sets in 3D, Theorem 5.4 in \cite{haller2009holder}]\label{theorem:groger_regular_sets_in_3d}
Let $\Omega \subset \mathbb{R}^3$ be a Lipschitz domain and $\Gamma\subset\partial\Omega$ be relatively open. Then $\Omega\cup\Gamma$ is Gr\"oger regular if and only if the following two conditions hold
\begin{itemize}
\item [(i)] $\partial\Omega\setminus\Gamma$ is the closure of its interior.
\item [(ii)] For any $x\in \overline{\Gamma}\cap(\partial\Omega\setminus\Gamma)$ there is an open neighborhood $U_x$ of $x$ and a bi-Lipschitz map $\phi:U_x\cap\overline{\Gamma}\cap(\partial\Omega\setminus\Gamma)\to (-1,1)$.
\end{itemize} \end{theorem}
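As a concrete illustration relevant for our application (a simplified geometry, not necessarily the one used in the simulations), let $\Omega = (0,1)^3$ be a cuboidal defect domain and let $\Gamma$ be the lateral part of the boundary, i.e., $\partial\Omega$ without the two closed end faces on which, e.g., displacements will later be prescribed. Then $\partial\Omega\setminus\Gamma$ consists of the two closed end faces, which form the closure of their relative interior, and $\overline{\Gamma}\cap(\partial\Omega\setminus\Gamma)$ consists of the edges of these faces, which are locally bi-Lipschitz equivalent to an interval. By Theorem~\ref{theorem:groger_regular_sets_in_3d}, $\Omega\cup\Gamma$ is therefore Gr\"oger regular.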
\subsection{Setting}\label{section:setting_optimal_control} In this Section we state the precise framework we use for the optimal control result. We begin by specifying the assumptions on the domain.
\textbf{The Domain.} We consider a finite time interval $I = [0,T]$. The spatial domain $\Omega \subset \mathbb{R}^d$ with $d = 1,2,3$ is assumed to be an open, bounded and connected Lipschitz domain. We consider partitions of the boundary $\partial\Omega$, namely \begin{gather*}
\partial\Omega = \Gamma_N^e \cup \Gamma_D^e, \quad\text{and}\quad \partial\Omega = \Gamma_N^d\cup\Gamma_D^d \end{gather*}
that will be used for the elastic and the diffusion equation respectively. For the partition of the elastic equation we assume $|\Gamma_D^e| \neq 0$. We require both $\Omega \cup \Gamma_N^e$ and $\Omega \cup \Gamma_N^d$ to be Gr\"oger regular, see Definition~\ref{definition:groger_regular_sets} or \cite{groger1989aw} and \cite{haller2009holder}.
\begin{remark}
Note the following things.
\begin{itemize}
\item [(i)] For the elastic equation we exclude a pure Neumann problem; however, this case could be included by passing to a suitable quotient space. We exclude it for convenience and brevity only.
\item[(ii)] The assumption of Gr\"oger regularity is very mild and all desirable application settings we have in mind easily satisfy this requirement. Compare to \cite{haller2009holder} for more information.
\end{itemize} \end{remark}
\textbf{The Control Space.} The set of control variables is defined to be \begin{equation}\label{equation:set_of_control_variables}
P = \left\{ \rho \in H^2(\Omega) \mid 0 < c_P \leq \rho(x) \leq C_P < 1 \right\}, \end{equation} where $c_P$ and $C_P$ are two fixed constants. Note that in the spatial dimensions $d=1,2,3$ the space $H^2(\Omega)$ embeds into $C^0(\Omega)$, hence the pointwise condition imposed in the above definition is well-defined.
\textbf{The State Space and the Equations.} Consider the state space \begin{equation*}
Y = C^0(I,H^1_{D_e}(\Omega)) \times \big[ H^1(I,H^1_{D_d}(\Omega),H^1_{D_d}(\Omega)^*)\cap L^4(I,C^0(\Omega)) \big]^2 \times W^{1,2}_0(I,C^0(\Omega))^2 \end{equation*} and the space \begin{equation*}
W = L^2(I,H^1_{D_e}(\Omega))^* \times \big[ L^2(I,H^1_{D_d}(\Omega))^* \big]^2 \times L^2(\Omega)^2 \times L^2(I,C^0(\Omega))^2. \end{equation*} Then the state equations can be written in the form $e(y,\rho) = 0$ with the constraint operator \begin{equation*}
e:Y\times P \to W, \quad (y,\rho) = (\tilde u,\tilde a_1,\tilde a_2, b, c, \rho)\mapsto e(y,\rho) \end{equation*} given by \begin{align}\label{equation:state_equations_optimal_control_setting}
e(y,\rho)
=
\begin{pmatrix}
\iint \mathbb{C}(\rho,\sigma, b)\varepsilon(\tilde u+u_D):\varepsilon(\cdot)\mathrm dx\mathrm dt - \int_I \int_{\partial\Omega} g_N\cdot \mathrm ds\mathrm dt
\\
\\
\int_I\langle d_t\tilde a_1,\cdot\rangle_{H^1_{D_d}(\Omega)}\mathrm dt + \iint D(\rho)\nabla \tilde a_1\nabla\cdot + k_{3,1}(\tilde a_1+1)\cdot\mathrm dx\mathrm dt - \iint k_{2,1}S(\varepsilon(u_0+u_D)) c\cdot\mathrm dx\mathrm dt
\\
\\
\int_I\langle d_t\tilde a_2,\cdot\rangle_{H^1_{D_d}(\Omega)}\mathrm dt + \iint D(\rho)\nabla \tilde a_2\nabla\cdot + k_{3,2}(\tilde a_2+1)\cdot\mathrm dx\mathrm dt - \iint k_{2,2}S(\varepsilon(u_0+u_D)) c\cdot\mathrm dx\mathrm dt
\\
\\
\tilde a_1(0) + 1
\\
\\
\tilde a_2(0) + 1
\\
\\
d_tc - k_6(\tilde a_1 + 1)(\tilde a_2 + 1)(1+k_7c)\left( 1 - \frac{c}{1 - \rho} \right)
\\
\\
d_tb - k_4(\tilde a_1 + 1)c\left( 1 - \frac{b}{1 - \rho} \right)
\end{pmatrix} \end{align} We frequently use the notation \begin{equation*}
a_i = \tilde a_i + 1, \quad \text{and} \quad u = \tilde u + u_D. \end{equation*}
\textbf{Functional Relationships.} To make fully sense of the above definition of $e$ we still need to clarify the assumptions made on the data and functional relationships. We begin with the function $\sigma$. We assume that it is smooth, depends only on time and is bounded away from zero, i.e., \begin{equation}\label{equation:assumption_on_sigma}
\sigma\in C^\infty(I), \quad\text{with }\sigma(t) > 0 \text{ for all }t\in I. \end{equation} Usually, we set $\sigma$ to be an exponential decay. For the material properties $\mathbb{C}$ of the elastic equation we require that it is a map \begin{equation*}
\mathbb{C}:\operatorname{dom}(\mathbb{C})\subset C^0(I\times \Omega)\times C^0(\Omega) \to C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s))), \quad \text{with}\quad (b,\rho)\mapsto \mathbb{C}(b,\sigma,\rho). \end{equation*} The concrete definition of $\operatorname{dom}(\mathbb{C})$ is not so important, however, as a minimal requirement it should hold \begin{equation*}
\bigcup_{\rho\in P}\{ b\in C^0(I\times\Omega) \mid 0 \leq b(t,x) \leq 1 - \rho(x) \}\times\{\rho\} \subset \operatorname{dom}(\mathbb{C}). \end{equation*} Furthermore, we need $\mathbb{C}(\cdot,\sigma,\rho)$ to be Lipschitz continuous with Lipschitz constant independent of $\rho$ and $\mathbb{C}$ is assumed to be continuous on all of $\operatorname{dom}(\mathbb{C})$. Finally, we require \begin{equation}\label{equation:assumption_on_C_boundedness}
\sup_{(b,\rho)\in\operatorname{dom}(\mathbb{C})}\lVert \mathbb{C}(b,\sigma,\rho) \rVert_{L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s))} < \infty \end{equation} and \begin{equation}\label{equation:assumption_on_C_ellipticity}
\inf_{(b,\rho)\in\operatorname{dom}(\mathbb{C})}\left[\inf_{M\in\mathcal{M}_s,\ |M|=1}\mathbb{C}(b,\sigma,\rho)M:M\right] \geq c_{\mathbb{C}}, \end{equation} for a constant $c_{\mathbb{C}}>0$, i.e., $\mathbb{C}(b,\sigma,\rho)M:M\geq c_{\mathbb{C}}|M|^2$ for all $M\in\mathcal{M}_s$ and all $(b,\rho)\in\operatorname{dom}(\mathbb{C})$. We need a further regularity property of $\mathbb{C}$. We assume that $b(t)\in C^\alpha(\Omega)$, $\rho\in C^\alpha(\Omega)$ for an $\alpha \in (0,1)$ implies that the coefficient functions \begin{equation*}
C_{ijkl}(t)\coloneqq \left[ \mathbb{C}(b,\sigma,\rho)(t) \right]_{ijkl} \end{equation*} are members of $C^\alpha(\Omega)$ and that there exists a constant $C>0$ not depending on $b$ and $\rho$ such that \begin{equation}\label{equation:holder_regulariy_of_coefficients}
\lVert C_{ijkl}(t) \rVert_{C^\alpha(\Omega)} \leq C\lVert b(t) \rVert_{C^\alpha(\Omega)}\lVert \rho \rVert_{C^\alpha(\Omega)}. \end{equation} For the boundary data $u_D$ and $g_N$ of the elliptic equation we assume that \begin{equation}\label{equation:assumption_on_g_N}
g_N \in C^0(I,L^2(\partial\Omega)) \end{equation} and that the Dirichlet boundary data is given through a function \begin{equation}\label{equation:assumption_on_u_D}
u_D \in C^0(I,H^{1+\theta}(\Omega)), \end{equation} meaning that the boundary information can be lifted to all of $\Omega$ such that the lift has the above regularity in time and space, where $\theta > 0$ can be arbitrarily small. In practice, this is easy to verify as we mainly work with Dirichlet boundary conditions that do not vary in time. The material properties $D(\rho)$ used in the diffusion equation are a map \begin{equation*}
D:\operatorname{dom}(D)\subset C^0(\Omega) \to L^\infty(\Omega,\mathcal{M}_s), \quad\text{with}\quad \rho\mapsto D(\rho) \end{equation*} that we require to be continuous with respect to the uniform norm on $\operatorname{dom}(D)$. The domain of $D$ will usually satisfy \begin{equation*}
\{ \rho\in C^0(\Omega) \mid 0 < c_P \leq \rho(x) \leq C_P < 1 \} \subset \operatorname{dom}(D), \end{equation*} where $c_P$ and $C_P$ are the positive constants appearing in the definition of $P$. We also require $D$ to be uniformly elliptic independently of $\rho\in\operatorname{dom}(D)$, i.e., \begin{equation}\label{equation:assumption_on_D_ellipticity}
\inf_{\rho\in\operatorname{dom}(D)}\left[ \inf_{\xi \in \mathbb{R}^d,\ |\xi|=1} D(\rho)\xi\cdot\xi \right] \geq c_D, \end{equation} for a constant $c_D > 0$, i.e., $D(\rho)\xi\cdot\xi \geq c_D|\xi|^2$ for all $\xi\in\mathbb{R}^d$ and all $\rho\in\operatorname{dom}(D)$. Finally, for the function $S(\cdot)$ we assume that it is given through a map on matrices \begin{equation*}
S(\cdot):\mathbb{R}^{d\times d} \to [0,\infty) \end{equation*} that we require to be Lipschitz and to obey an estimate of the form \begin{equation}\label{equation:assumption_on_smoothing_of_norm_for_symmetric_gradient}
S(A) \leq C_1|A| + C_2\quad\text{for all }A\in\mathbb{R}^{d\times d}, \end{equation}
where $C_1, C_2 >0$ and $|A|$ denotes the Euclidean (or any) norm of a matrix. Furthermore, we need $S$ to be continuous, more precisely, we assume that if $(v_k)\subset L^2(\Omega,\mathbb{R}^{d\times d})$ is a sequence, then it holds \begin{equation}\label{equation:assumption_on_weak_continuity_of_smoothing_of_norm}
v_k \to v \quad\text{in }L^2(\Omega,\mathbb{R}^{d\times d}) \quad \Rightarrow \quad S(v_k) \to S(v) \quad\text{in }L^2(\Omega). \end{equation}
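Note that, since $S(\cdot)$ is Lipschitz with some constant $L>0$, property \eqref{equation:assumption_on_weak_continuity_of_smoothing_of_norm} is in fact automatic: pointwise we have $|S(v_k(x)) - S(v(x))| \leq L\,|v_k(x) - v(x)|$, and integrating over $\Omega$ yields $\lVert S(v_k) - S(v) \rVert_{L^2(\Omega)} \leq L\,\lVert v_k - v \rVert_{L^2(\Omega,\mathbb{R}^{d\times d})}$. We state it separately for ease of reference.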
We conclude this section by recalling the well-posedness of the PDE-ODE system, which follows from \cite{dondl2021efficient}. \begin{theorem}
Assume that the setting described in this section holds. Then, for every $\rho \in P$ there exists a unique solution $y=(\tilde u, \tilde a_1, \tilde a_2, b, c)\in Y$ satisfying $e(y,\rho) = 0$, i.e., solving the state equations \eqref{equation:state_equations_optimal_control_setting}. \end{theorem} \begin{proof}
This follows almost as an application of Theorem 3.2 in \cite{dondl2021efficient}. Note that our assumptions here are slightly stronger, so the requirements in \cite{dondl2021efficient} are trivially satisfied. The only extension to the results in \cite{dondl2021efficient} is to show the improved integrability for the functions $a_1$ and $a_2 \in L^4(I,C^0(\Omega))$. To this end, we use the estimate $(3.11)$ in \cite{dondl2021efficient} which shows that the right-hand sides of the diffusion equations satisfy
\begin{equation*}
k_{2,i}S(\varepsilon(u_0+u_D)) c \in L^\infty(I,L^2(\Omega)).
\end{equation*}
This allows us to apply the maximal $L^p$ regularity result in Lemma~\ref{lemma:the_bound_for_a} and to obtain $\tilde a_i \in L^p(I,C^0(\Omega))$ for all $p\in [2,\infty)$.
\subsection{Objective Function}\label{section:objective_function} Here we formulate the class of objective functions we are able to treat in the setting of the optimal control result. For every time-point $t\in I$ and state control pair $(y,\rho) \in Y \times P$ we consider the elastic energy \begin{equation}
\mathcal{E}:Y\times P \to C^0(I), \quad \mathcal{E}(y,\rho)(t) = \frac12 \int_\Omega \mathbb{C}(b(t),\sigma(t),\rho)\varepsilon(u(t)):\varepsilon(u(t))\mathrm dx. \end{equation} For most of our objective functions we desire $\mathcal{E}$ to take values in $C^0(I)$, as we want to have access to point evaluations. This is the reason to require the continuity of the solutions to the elastic equation in the definition of $Y$. Primarily, we are interested in the reduced elastic energy $\hat{\mathcal{E}}$, that is, we are interested in $\mathcal{E}(y,\rho)$ only when $(y,\rho)$ solves the system of equations, i.e., when it holds $e(y,\rho) = 0$. We define \begin{equation}
\hat{\mathcal{E}}: P \to C^0(I), \quad \hat{\mathcal{E}}(\rho) = \mathcal{E}(y_\rho,\rho) \end{equation} and here it holds $e(y_\rho,\rho) = 0$. We provide now the proof that $\mathcal{E}$ takes values in $C^0(I)$. \begin{lemma}\label{lemma:strict_positivity_of_elastic_energy}
For all $(y,\rho)\in Y\times P$ we have $\mathcal{E}(y,\rho)\in C^0(I)$. If it holds $e(y,\rho) = 0$ and $\tilde u(t) +u_D(t) \neq 0$, then $\mathcal{E}(y,\rho)(t) > 0$. \end{lemma} \begin{proof}
As $\tilde u+u_D \in C^0(I, H^1(\Omega))$ by the definition of the state space $Y$ and the material tensor $\mathbb{C}$ is a member of the space $C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))$ it follows that
\begin{equation*}
\mathbb{C}(\sigma, \rho, b)\varepsilon(\tilde u+u_D):\varepsilon(\tilde u+u_D) \in C^0(I,L^1(\Omega)).
\end{equation*}
Using the continuity of integration over $\Omega$ on $L^1(\Omega)$, we get $\mathcal{E}(y,\rho)\in C^0(I)$. Now, let $e(y,\rho) = 0$. We can estimate
\begin{equation*}
\mathcal{E}(y,\rho)(t) \geq c \lVert \tilde u(t) + u_D(t) \rVert^2_{H^1(\Omega)},
\end{equation*}
with the constant $c > 0$ depending on the constant appearing in Korn's inequality and the ellipticity constant $c_{\mathbb{C}}$. As it holds $e(y,\rho) = 0$, for every $t\in I$ the function $\tilde u(t) + u_D(t)$ solves an elastic equation, hence can only vanish if the boundary conditions are homogeneous for this time-point which leads to $\tilde u(t) + u_D(t) = 0$. This is excluded in the statement of the Lemma and the proof is complete. \end{proof} We state now the structural assumption we impose for our admissible objective functions. \begin{assumption}\label{assumption:objective_function_optimal_control}
Let $\mathcal{F}:\operatorname{dom}(\mathcal{F})\subset C^0(I) \to \mathbb{R}$ be a continuous map and assume that the domain of $\mathcal{F}$ satisfies
\begin{equation}\label{equation:domain_of_F}
\left\{ v \in C^0(I) \mid v(t) > 0 \text{ for all }t\in I \right\} \subset \operatorname{dom}(\mathcal{F}).
\end{equation}
Furthermore, let $\mathcal{G}:C^0(I\times \Omega) \to \mathbb{R}$ be a continuous function. Using the elastic energy $\mathcal{E}$ and functionals $\mathcal{F}$, $\mathcal{G}$ as above, we define the prototypical objective function as
\begin{equation*}
J: Y \times P \to \mathbb{R}, \quad J(y,\rho) = \mathcal{F}\left( \mathcal{E}(y,\rho) \right) + \mathcal{G}(b)
\end{equation*}
in case the domain of $\mathcal{F}$ allows $\mathcal{E}(y,\rho)$ as an argument. The function $b$ denotes the bone component of the state variable $y$. More importantly, we define the reduced objective
\begin{equation*}
\hat J : P \to \mathbb{R}, \quad \hat J(\rho) = \mathcal{F}\left( \mathcal{E}(\phi(\rho),\rho) \right) + \mathcal{G}(b).
\end{equation*}
Note that the assumption \eqref{equation:domain_of_F} together with Lemma \ref{lemma:strict_positivity_of_elastic_energy} guarantees that $\mathcal{E}(\phi(\rho),\rho)$ is an admissible argument of $\mathcal{F}$. Finally, we assume that $\hat J$ is bounded from below if we are interested in a minimization problem and we assume $\hat J$ to be bounded from above if we are interested in maximization. \end{assumption} \begin{remark} We discuss how the examples discussed in Section~\ref{section:the_optimization_problem} fall in the abstract setting described above.
\begin{enumerate}
\item [(i)] Choosing the minimum (or maximum) functional
\begin{equation*}
\min: C^0(I) \to \mathbb{R}, \quad v\mapsto \min_{t\in I}v(t)
\end{equation*}
for $\mathcal{F}$ is conforming with Assumption~\ref{assumption:objective_function_optimal_control} as clearly $\operatorname{min}$ and $\operatorname{max}$ are continuous functionals on $C^0(I)$.
\item[(ii)] Smooth approximations of the minimum and the maximum are given by $L^p(I)$ norms with large values of $|p|$. A positive value for $p$ serves as an approximation of the maximum and a negative value is suitable for the approximation of the minimum. In the latter case, i.e., $p<0$, one chooses
\begin{equation*}
\operatorname{dom}\left( \lVert\cdot\rVert_{L^p(I)} \right) \coloneqq \left\{ v \in C^0(I) \mid v(t) > 0 \text{ for all }t\in I \right\}.
\end{equation*}
It is straightforward to show that $\lVert \cdot \rVert_{L^p(I)}$ is continuous with respect to the uniform norm, also for negative exponents. In fact, it is even Fr\'echet differentiable.
\item[(iii)] The choice
\begin{equation*}
\mathcal{G}(b) = \int_\Omega b(T) \mathrm dx,
\end{equation*}
corresponds to the objective of regenerated bone at time $T$. Clearly, $\mathcal{G}$ is continuous and evaluating $\mathcal{G}$ only at functions $b$ that solve the state equations shows that $\mathcal{G}$ is bounded.
\end{enumerate} \end{remark}
\section{Main Results}\label{section:main_results} Our main result establishes the existence of an optimal control in the set $P\subset H^2(\Omega)$ given the objective function $\hat J$ is regularized by an $H^2(\Omega)$ norm. \begin{theorem}[Optimal Control]\label{theorem:optimal_control_approx_objective}
Assume we are in Setting \ref{section:setting_optimal_control} and let $\eta > 0$ be fixed. Then there exists a minimizer $\rho^* = \rho^*(\eta) \in P$ to the regularized objective
\begin{equation*}
\hat{J}(\rho^*) + \eta\lVert \rho^* \rVert^2_{H^2(\Omega)} = \inf_{\rho \in P}\left[ \hat{J}(\rho) + \eta\lVert \rho \rVert^2_{H^2(\Omega)} \right].
\end{equation*} \end{theorem} \begin{proof}
The proof is established in the course of the article. \end{proof}
In order to incorporate the pointwise constraint encoded in the definition of the control space $P$, see \eqref{equation:set_of_control_variables}, in a numerical simulation one can use a soft penalization. This usually corresponds to a continuous functional $\mathcal{K}:C^0(\Omega)\to[0,\infty)$. Also in this setting we can establish the existence of an optimal control. \begin{corollary}\label{corollary:optimal_control_numerical_penalization}
Assume we are in Setting \ref{section:setting_optimal_control} and let $\mathcal{K}:C^0(\Omega)\to[0,\infty)$ be a continuous, non-negative functional. Then there exists an optimal control $\rho^\dag = \rho^\dag(\eta,\mathcal{K}) \in H^2(\Omega)$ to the regularized and penalized objective, i.e.,
\begin{equation*}
\hat J(\rho^\dag) + \eta \lVert \rho^\dag \rVert^2_{H^2(\Omega)} + \mathcal{K}(\rho^\dag) = \inf_{\rho\in H^2(\Omega)}\left[ \hat J(\rho) + \eta \lVert \rho \rVert^2_{H^2(\Omega)} + \mathcal{K}(\rho) \right].
\end{equation*} \end{corollary} \begin{proof}
The proof is established in the course of the article. \end{proof} \begin{remark}
A few comments regarding the above results are in order.
\begin{enumerate}
\item[(i)] For some objectives we might be interested in a maximizer rather than a minimizer. In this case, one subtracts the regularizer $\eta \lVert \cdot \rVert_{H^2(\Omega)}$ and the soft penalty $\mathcal{K}$ and the results are still valid. For brevity, we discuss only minimization problems in the remainder.
\item [(ii)] As discussed in Section \ref{section:objective_function}, we have some freedom in the choice of $\hat J$. From a modelling perspective a maximum or minimum over all time-points of the elastic energy seems reasonable. On the other hand, for the numerical treatment a smooth approximation thereof is preferable, e.g., an $L^p(I)$ norm. Note that all these choices are covered by our main result.
\item[(iii)] The Tikhonov penalization term $\eta\lVert\cdot\rVert^2_{H^2(\Omega)}$ is artificial. It serves to generate compactness of minimizing sequences and an optimal control result without this term seems out of reach.
\item[(iv)] It is presently unclear to us if the optimal control problem possesses a unique solution.
\end{enumerate} \end{remark}
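To illustrate the soft penalization appearing in Corollary~\ref{corollary:optimal_control_numerical_penalization}, one conceivable choice (given purely as an example) is
\begin{equation*}
	\mathcal{K}(\rho) = \kappa \int_\Omega \big( \max\{0,\ \rho - C_P\} \big)^2 + \big( \max\{0,\ c_P - \rho\} \big)^2 \,\mathrm dx
\end{equation*}
with a large penalty parameter $\kappa > 0$. This functional is continuous with respect to the uniform norm, non-negative, and vanishes exactly on those $\rho$ that satisfy the pointwise bounds $c_P \leq \rho \leq C_P$ from the definition of $P$.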
The strategy to prove Theorem~\ref{theorem:optimal_control_approx_objective} and Corollary~\ref{corollary:optimal_control_numerical_penalization} is the direct method of the calculus of variations and crucially relies on rather specific regularity properties of the diffusion equations and the elastic equation that imply convenient compact embeddings. The technical results concerning these regularity properties are established in Appendix \ref{section:proofs_of_the_main_results}. In this Section, we assume the implications of the compact embeddings and show how this leads to a proof of Theorem~\ref{theorem:optimal_control_approx_objective}. We stress that the mixed boundary conditions, rough coefficients and jump initial conditions are responsible for the technical difficulties.
\begin{proposition}\label{proposition:proof_under_assumptions} Assume we are in Setting \ref{section:setting_optimal_control}. Let $(\rho_k)\subset P$ be a minimizing sequence for $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)}$ and denote by $(u_k) \subset C^0(I,H^1(\Omega)) $, $(a^1_k),(a^2_k) \subset H^1(I,H^1(\Omega),H^1_D(\Omega)^*)$ and $(b_k), (c_k) \subset W^{1,2}(I,C^0(\Omega)) $ the corresponding solutions to the system \ref{equation:state_equations_optimal_control_setting}. Assume that there is a common subsequence (not relabeled) of $(\rho_k), (u_k), (a^1_k), (a^2_k), (b_k), (c_k)$ and elements $\rho^* \in P$, $u^* \in C^0(I,H^1(\Omega))$, $a_1^*$, $a_2^* \in H^1(I,H^1(\Omega),H^1_D(\Omega)^*)$ and $b^*, c^* \in W^{1,2}(I, C^0(\Omega))$ such that \begin{itemize}
\item [(A1)] $\rho_k \to \rho^*$ in $C^0(\Omega)$ and $\rho_k \rightharpoonup \rho^*$ in $H^2(\Omega)$ \label{item:A1},
\item[(A2)] $u_k \to u^*$ in $C^0(I, H^1(\Omega))$,
\item[(A3)] $a^i_k \rightharpoonup a_i^*$ in $H^1(I,H^1(\Omega), H^1_D(\Omega)^*)$, \quad $i=1,2$
\item[(A4)] $b_k \to b^*$ in $C^0(I\times\Omega)$
\item[(A5)] $c_k \to c^*$ in $C^0(I\times\Omega)$ \end{itemize} then $(\rho^*, u^*, a_1^*, a_2^*, b^*)$ solves the system \ref{equation:state_equations_optimal_control_setting} and $\rho^*$ is a minimizer of $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)}$ over the set $P$, i.e., satisfies \begin{equation*}
\hat{J}(\rho^*) + \eta\lVert \rho^* \rVert^2_{H^2(\Omega)} = \inf_{\rho \in P}\left[ \hat{J}(\rho) + \eta\lVert \rho \rVert^2_{H^2(\Omega)} \right]. \end{equation*} \end{proposition} \begin{proof}
There are two things to show. First, we need to guarantee that the tuple $(\rho^*,u^*,a_1^*, a_2^*,b^*)$ still solves the system of equations \ref{equation:state_equations_optimal_control_setting}. And secondly, we need to prove that $\rho^*$ is in fact a minimizer. We start with the second point, assuming for the moment that $(\rho^*,u^*,a_1^*, a_2^*,b^*)$ solves the correct equations. We show that it holds
\begin{equation*}
\hat J(\rho^*) + \eta\lVert \rho^* \rVert_{H^2(\Omega)}^2
\leq
\liminf_{k\to\infty}\left[ \hat J(\rho_k) + \eta\lVert \rho_k \rVert^2_{H^2(\Omega)} \right]
=
\inf_{\rho\in P}\left[ \hat{J}(\rho) + \eta\lVert \rho \rVert^2_{H^2(\Omega)} \right],
\end{equation*}
that is, the classical lower semi-continuity property required in the application of the direct method of the calculus of variations. Clearly, the map
\begin{equation*}
H^2(\Omega)\to \mathbb{R}, \quad \rho\mapsto \eta\lVert \rho \rVert_{H^2(\Omega)}^2
\end{equation*}
is convex and norm continuous, hence weakly lower semi-continuous, that is, it holds
\begin{equation*}
\eta\lVert \rho^* \rVert_{H^2(\Omega)}^2 \leq \liminf_{k\to\infty}\eta\lVert \rho_k \rVert_{H^2(\Omega)}^2
\end{equation*}
by the assumption $\rho_k\rightharpoonup\rho^*$ in $H^2(\Omega)$ on the minimizing sequence. To proceed, recall our structural assumption on the objective function, i.e.,
\begin{equation*}
\hat J = \mathcal{F}\left( \hat{\mathcal{E}}(\rho) \right) + \mathcal{G}(b),
\end{equation*}
where $\mathcal{F}:C^0(I)\to \mathbb{R}$ and $\mathcal{G}:C^0(I\times\Omega)\to \mathbb{R}$ are assumed to be continuous. Since $b_k\to b^*$ in $C^0(I\times\Omega)$ by (A4) and $\mathcal{G}$ is continuous, it thus suffices to show that $\hat{\mathcal{E}}(\rho_k)\to \hat{\mathcal{E}}(\rho^*)$ in $C^0(I)$. For convenience, let us now set $\mathbb{C}^* = \mathbb{C}(\rho^*,\sigma,b^*)$ and $\mathbb{C}_k = \mathbb{C}(\rho_k,\sigma,b_k)$. We then compute
\begin{align*}
\lVert \hat{\mathcal{E}}(\rho_k) - \hat{\mathcal{E}}(\rho^*) \rVert_{C^0(I)}
&=
\frac12 \left\lVert \int_\Omega \left[\mathbb{C}_k-\mathbb{C}^*\right]\varepsilon(u_k):\varepsilon(u_k)
+
\mathbb{C}^*\varepsilon(u_k-u^*):\varepsilon(u_k)
+
\mathbb{C}^*\varepsilon(u^*):\varepsilon(u_k-u^*)\mathrm dx
\right\rVert_{C^0(I)}
\\
&\leq
\lVert \mathbb{C}_k - \mathbb{C}^* \rVert_{C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))} \lVert \varepsilon(u_k) \rVert^2_{C^0(I,L^2(\Omega))}
\\&+
\lVert \mathbb{C}^* \rVert_{C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))} \lVert \varepsilon(u_k - u^*) \rVert_{C^0(I,L^2(\Omega))} \lVert \varepsilon(u_k) \rVert_{C^0(I,L^2(\Omega))}
\\&+
\lVert \mathbb{C}^* \rVert_{C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))} \lVert \varepsilon(u^*) \rVert_{C^0(I,L^2(\Omega))} \lVert \varepsilon(u_k - u^*) \rVert_{C^0(I,L^2(\Omega))}.
\end{align*}
Using the continuity assumption for $\mathbb{C}$ and the convergence $b_k\to b^*$ in $C^0(I\times\Omega)$ and $\rho_k\to\rho^*$ in $C^0(\Omega)$ we get that
\begin{equation*}
\lVert \mathbb{C}_k - \mathbb{C}^* \rVert_{C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))} \to 0.
\end{equation*}
Furthermore, the convergence $u_k\to u^*$ in $C^0(I,H^1(\Omega))$ implies both a uniform bound on $\lVert \varepsilon(u_k) \rVert_{C^0(I,L^2(\Omega))}$ and the convergence
\begin{equation*}
\lVert \varepsilon(u_k - u^*) \rVert_{C^0(I,L^2(\Omega))} \to 0.
\end{equation*}
Hence, we have established $\hat{\mathcal E}(\rho_k)\to\hat{\mathcal{E}}(\rho^*)$ in $C^0(I)$ and conclude
\begin{equation*}
\hat J(\rho^*) + \eta\lVert \rho^* \rVert^2_{H^2(\Omega)} \leq \lim_{k\to\infty}\hat J(\rho_k) + \liminf_{k\to\infty}\eta\lVert \rho_k \rVert^2_{H^2(\Omega)} \leq \liminf_{k\to\infty}\left[ \hat J(\rho_k) + \eta\lVert \rho_k \rVert_{H^2(\Omega)}^2 \right]
\end{equation*}
which settles the claim.
We still need to show that $(\rho^*,u^*,a_1^*, a_2^*,b^*)$ is in fact a solution to the system \ref{equation:state_equations_optimal_control_setting}. For the elastic equation, we consider, for an arbitrary test function $\varphi\in L^2(I,H^1_{D_e}(\Omega))$, the identity
\begin{equation*}
\iint \mathbb{C}(\rho_k,\sigma,b_k)\varepsilon(u_k):\varepsilon(\varphi)\mathrm dx\mathrm dt = \int_I \int_{\partial\Omega}g_N\varphi\mathrm ds\mathrm dt
\end{equation*}
and the continuity assumption on $\mathbb{C}$ together with the convergences assumed for $\rho_k$, $b_k$ and $u_k$ is more than sufficient to pass to the limit.
In the same spirit, we consider the diffusion equations with a test function $\varphi \in L^2(I,H^1_{D_d}(\Omega))$
\begin{equation*}
\int_I\langle d_ta^i_k,\varphi\rangle_{H^1_{D_d}(\Omega)}\mathrm dt + \iint D(\rho_k)\nabla a^i_k\nabla \varphi + k_3 (a_k^i)\varphi \mathrm dx\mathrm dt = \iint k_2 S(\varepsilon(u_k)) c_k\varphi\mathrm dx\mathrm dt, \quad i=1,2.
\end{equation*}
For the left-hand side of the diffusion equations we can easily pass to the limit by the weak convergence of $a^i_k$ and the strong convergence of $D(\rho_k)$ that we have available through the continuity assumption on $D$ and $\rho_k\to\rho^*$ in $C^0(\Omega)$. For the right-hand sides we use the implication
\begin{equation*}
u_k\to u^*\text{ in }C^0(I,H^1(\Omega))\quad\Rightarrow\quad S(\varepsilon(u_k)) \to S(\varepsilon(u^*)) \text{ in }L^2(I\times\Omega).
\end{equation*}
Hence, the limit for the diffusion equations can also be correctly identified. To establish the initial condition of the limit, consider the continuous linear map
\begin{equation*}
H^1(I,H^1_{D_d}(\Omega), H^1_{D_d}(\Omega)^*) \to C^0(I,L^2(\Omega)) \to L^2(\Omega), \quad a\mapsto a(0).
\end{equation*}
Since continuous linear maps are weakly sequentially continuous, this shows that $a_i^*(0)$ vanishes, as desired.
To pass to the limit in the cell ODE, we look at its fixed-point equation
\begin{equation*}
c_k(t) = \int_0^t k_6a^k_1(s)a^k_2(s)(1 + k_7c_k(s))\left(1 - \frac{c_k(s)}{1 - \rho_k}\right)\mathrm ds,
\end{equation*}
which holds in the space $C^0(\Omega)$ for all $t\in I$. Multiplying this equation by a smooth test function $\varphi \in C_c^\infty(\Omega)$ and integrating over $\Omega$ yields, for the left-hand side,
\begin{equation*}
\int_\Omega c_k(t)\varphi\mathrm dx \to \int_\Omega c^*(t)\varphi\mathrm dx \quad \text{as}\quad k\to \infty.
\end{equation*}
The convergence $c_k \to c^*$ in the space $C^0(I\times\Omega)$ is more than sufficient for the above limit passage. Before we treat the limit of the right-hand side, we note that the Aubin--Lions compactness result, see for instance \cite{simon1986compact}, provides the compact embedding
\begin{equation*}
H^1(I,H^1_D(\Omega), H^1_D(\Omega)^*) \hookrightarrow\hookrightarrow L^2(I,L^2(\Omega))
\end{equation*}
which is essentially due to the fact that the space triple $(H^1_D(\Omega),L^2(\Omega), H^1_D(\Omega)^*)$ satisfies the requirements of the Ehrling Lemma; this in turn is guaranteed by the Rellich--Kondrachov compactness theorem, which provides the compact embedding of $H^1_D(\Omega)$ into $L^2(\Omega)$. Note that the boundary regularity assumed for $\Omega$ is chosen to support the Rellich--Kondrachov theorem. Hence we get the convergence
\begin{equation*}
a^k_1 a^k_2 \to a^*_1 a^*_2 \quad\text{in}\quad L^1(I,L^1(\Omega))\cong L^1(I\times\Omega).
\end{equation*}
Using the above convergence together with $c_k\to c^*$ in $C^0(I\times\Omega)$ and $\rho_k \to \rho^*$ in $C^0(\Omega)$, we employ Fubini's theorem and pass to the limit:
\begin{align*}
\int_\Omega \int_0^t k_6a^k_1 a^k_2(1 + k_7c_k)\left(1 - \frac{c_k}{1 - \rho_k}\right)\mathrm ds \, \varphi \, \mathrm dx &= \int_0^t\int_\Omega k_6 a^k_1 a^k_2 (1 + k_7c_k)\left(1 - \frac{c_k}{1 - \rho_k}\right)\varphi \,\mathrm dx \,\mathrm ds
\\
&\to
\int_0^t\int_\Omega k_6a^*_1 a^*_2 (1 + k_7c^*)\left(1 - \frac{c^*}{1 - \rho^*}\right)\varphi \mathrm dx \mathrm ds
\\
&=
\int_\Omega\int_0^t k_6 a^*_1 a^*_2 (1 + k_7c^*)\left(1 - \frac{c^*}{1 - \rho^*}\right) \mathrm ds \, \varphi \, \mathrm dx.
\end{align*}
Invoking the fundamental lemma of the calculus of variations, we obtain
\begin{equation*}
c^*(t) = \int_0^t k_6 a_1^* a_2^* (1+k_7c^*)\left( 1 - \frac{c^*}{1-\rho^*} \right)\mathrm ds
\end{equation*}
for every $t \in I$. This implies that $c^*$ satisfies the correct limit equation. Clearly, we can repeat the same argument to guarantee that $b^*$ satisfies the appropriate limit equation. \end{proof} \begin{remark}
By discussing the requirements $(A1)$--$(A5)$ above, we give a rough idea of how they are established.
\begin{itemize}
\item [(i)] The fact that $J$ is bounded from below implies that the regularization term $\eta\lVert\cdot\rVert^2_{H^2(\Omega)}$ automatically leads to an $H^2(\Omega)$ bound on any minimizing sequence $(\rho_k)\subset P$. Thus there exists $\rho^* \in P$ and a (not re-labeled) subsequence $(\rho_k)$ with $\rho_k \rightharpoonup \rho^*$ in $H^2(\Omega)$. Employing the compactness
\begin{equation*}
H^2(\Omega) \hookrightarrow\hookrightarrow C^0(\Omega)
\end{equation*}
which holds in up to three spatial dimensions, we obtain the desired convergence $\rho_k \to \rho^*$ in $C^0(\Omega)$.
\item[(ii)] A uniform bound on the $C^0(I,H^1(\Omega))$ norm of the sequence $(u_k)$ is easily established, as Lemma \ref{lemma:bound_for_u_k} shows. However, this does not provide assumption (A2), which can only be achieved through a compactness argument. In fact -- given H\"older continuous coefficient functions of $\mathbb{C}(\rho_k,\sigma, b_k)$ -- one is able to show that for every $t\in I$ the solution $u_k(t)$ is a member of $H^{1+\theta}(\Omega)$ for a sufficiently small $\theta >0$, as an application of the main theorem of \cite{haller2019higher}. Compare also Lemma \ref{lemma:higher_regularity_elliptic_equation} for a discussion of the applicability of this result. Then, given the relative compactness of the sequences $(b_k)$ in $C^0(I\times\Omega)$ and $(\rho_k) \subset C^0(\Omega)$, one can apply a vector-valued version of the Arzel\`a-Ascoli theorem to derive the relative compactness of $(u_k)$ in $C^0(I,H^1(\Omega))$. As discussed in (iv), the compactness of $(b_k)$ relies on a H\"older regularity result for diffusion equations.
\item[(iii)] Similarly, a uniform bound for the sequences $(a^i_k)$ in $H^1(I,H^1(\Omega),H^1_D(\Omega)^*)$ norm can be established by standard computations, thus implying the desired existence of $a_i^*$ and corresponding subsequence. We provide the details in Lemma \ref{lemma:bound_for_a_k}.
\item[(iv)] The existence of a subsequence $(b_k)$ and $b^*\in C^0(I\times\Omega)$ with $b_k\to b^*$ in $C^0(I\times\Omega)$ requires the biggest effort. We achieve this by deriving a $W^{1,2}(I,C^\alpha(\Omega))$ bound on $(b_k)$ for an $\alpha \in (0,1)$. Investigating the structure of the cell and bone ODEs, we see that such a regularity and bound can only be established if we are able to show that the sequences $(a^i_k)$ are bounded in $L^2(I,C^\alpha(\Omega))$. It is this regularity and boundedness result for the diffusion equation on which the whole proof rests; we state it in Lemma \ref{lemma:the_bound_for_a}, but the derivation of this result is the topic of \cite{dondl2021regularity}.
Coming back to the boundedness of $(b_k)$ in $W^{1,2}(I,C^\alpha(\Omega))$, note that this implies the desired existence of $b^*\in C^0(I\times\Omega)$ together with a subsequence $b_k \to b^*$ in $C^0(I\times\Omega)$ via the embeddings
\begin{equation*}
W^{1,2}(I,C^\alpha(\Omega)) \hookrightarrow C^\beta(I,C^\alpha(\Omega)) \hookrightarrow C^{\min(\alpha,\beta)}(I\times\Omega)\hookrightarrow\hookrightarrow C^0(I\times\Omega).
\end{equation*}
\item[(v)] To summarize: $(A1)$ is clear, $(A3)$ is established in Lemma \ref{lemma:bound_for_a_k}, and $(A2)$, $(A4)$ and $(A5)$ rely on the regularity result for diffusion equations stated in Lemma \ref{lemma:the_bound_for_a} and the main result of \cite{haller2019higher}. The derivation of the $W^{1,2}(I,C^\alpha(\Omega))$ bound for $(b_k)$ is carried out in Lemma \ref{lemma:bound_for_b_k}, the bound for $(c_k)$ in Lemma \ref{lemma:bound_for_c_k}.
\end{itemize} \end{remark}
\section{Simulations}\label{section:simulations} In this section we present numerical simulations of optimal scaffold density distributions. Our motivation is the treatment of large tibial defects, and we are especially interested in stress shielding effects caused by external fixation of the scaffold. Our numerical findings indicate that a three dimensional scaffold density optimization is of substantial importance for the mitigation of stress shielding effects.
\subsection{Stress Shielding}\label{section:stress_shielding} Bone adapts according to the mechanical environment it is subjected to. This important property of bone is well known and commonly referred to as Wolff's law, see \cite{wolff1892gesetz}. It has far-reaching consequences for bone tissue engineering. More precisely, prosthetic implants are often made of materials that are stiffer than bone and thus change the mechanical environment in their vicinity. This often leads to bone regions that are subjected to less stress than in a healthy bone and consequently to bone resorption, a phenomenon known as stress shielding, which has been extensively studied, e.g., in the context of total hip arthroplasty, see \cite{sumner1992determinants, huiskes1992relationship, behrens2008numerical, arabnejad2017fully}. The bone resorption in the vicinity of the prosthetic implant can lead to serious complications such as periprosthetic fracture and aseptic loosening, and revision surgeries -- if needed -- can be complicated; we refer to \cite{arabnejad2017fully}.
It is to be expected that stress shielding effects also play an important role in scaffold mediated bone growth, for example caused by the external fixation of the scaffold with a metal plate. This leads to under-loading in the vicinity of the fixating element. To quantify these effects it is crucial to use a three dimensional computational model; a one dimensional simplification, as for instance discussed in \cite{poh2019optimization}, cannot resolve the asymmetries that induce the effect.
\subsection{The Computational Model} Our concrete model setup is almost identical to the one presented in \cite{dondl2021efficient} as far as the state equations are concerned. For the reader's convenience we briefly repeat the state equations and boundary conditions: \begin{align*}
0 &= \operatorname{div}\Big( \mathbb{C}(\rho,\sigma, b)\varepsilon(u) \Big)
\\
d_ta_1 &= \operatorname{div}\Big( D(\rho) \nabla a_1 \Big) + k_{2,1}|\varepsilon(u)|c - k_{3,1}a_1
\\
d_ta_2 &= \operatorname{div}\Big( D(\rho) \nabla a_2 \Big) + k_{2,2}|\varepsilon(u)|c - k_{3,2}a_2
\\
d_tc &= k_6a_1a_2(1 + k_7c)\bigg( 1 - \frac{c}{1 - \rho} \bigg)
\\
d_tb &= k_4a_1c\bigg( 1 - \frac{b}{1 - \rho} \bigg). \end{align*} We use the same boundary conditions as in \cite{dondl2021efficient}, with the exception of the elastic equation, which is subjected to pure Neumann boundary conditions with a constant surface traction stemming from a force of $0.3\operatorname{kN}$ applied to the top and bottom of the cylindrical domain. We propose to view this as a maximal force that occurs repeatedly; compare the discussion in \cite{dondl2021efficient} for a more detailed reasoning. The bioactive molecules $a_1,a_2$ are assumed to be in saturation adjacent to the initial, healthy bone matrix at the top and bottom of the domain, and a scenario without preseeding throughout the domain (i.e., a zero initial condition) is considered. For the model constants and functional relationships we refer to \cite{dondl2021efficient}.
As an objective function to measure scaffold performance, we use the maximum over the temporal evolution of the scaffold-bone composite's elastic energy. Due to the soft load in the numerical experiments, the reciprocal of the elastic energy is proportional to the elastic modulus of the scaffold-bone system, a reasonable measure of stability. The optimization's goal is to minimize this temporal maximum while respecting the state equations and an additional constraint on $\rho$ to not take values outside the unit interval.\footnote{A scaffold volume fraction should always take values between zero and one in order to be reasonably interpreted as a volume fraction. More restrictively, $\rho$ should even be bounded away from zero and one.} In formulas, we denote by $\mathcal{E}$ the elastic energy \begin{equation*}
\mathcal{E}(y,\rho)(t) = \frac12\int_\Omega \mathbb{C}(\rho(x),\sigma(t),b(t,x))\varepsilon(u(t,x)):\varepsilon(u(t,x))\mathrm{d}x, \end{equation*} where $y=(u,a_1,a_2,c,b)$ is the state variable. The minimization problem is to find \begin{equation}\label{equation:repeat_optimization_problem_simulation_section}
\rho \in \operatorname{argmin}\left[ \max_{t\in I}\mathcal{E}(y,\rho)(t) \right], \quad \text{subject to }e(y,\rho)=0\text{ and }\rho\in P, \end{equation} where $P$ encodes that $\rho$ is bounded away from zero and one. Numerically, we replace the temporal maximum by an $L^p(I)$ norm (with, e.g.,\ $p=5$) to smoothly approximate it. The pointwise constraint $\rho\in P$ is treated by a soft penalty and $e(y,\rho)=0$ by the adjoint method. \subsection{Numerical Implementation} The numerical realization of the PDE constrained optimization problem is based on the adjoint approach, see for instance \cite{hinze2008optimization} for a derivation of the method. This means that the constraint $e(y,\rho)=0$ (in the notation of Section~\ref{section:mathematical_formulation}) is parametrized by the solution operator $\rho\mapsto \phi(\rho)$ satisfying $e(\phi(\rho),\rho)=0$, thereby eliminating the constraint from the optimization. Computing the derivative of the reduced objective with respect to $\rho$ yields an adjoint equation that is structurally similar to the state equations \eqref{equation:state_equations_optimal_control_setting}. Having access to the derivative of the reduced objective, we use an $L^2(\Omega)$ gradient flow to solve the optimization problem \eqref{equation:repeat_optimization_problem_simulation_section}. As this leads to reasonable results, more sophisticated optimization algorithms were not deemed necessary.
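The following minimal sketch illustrates this reduced-gradient loop in Python-style pseudocode. It is not our actual implementation: the routines \texttt{solve\_state}, \texttt{solve\_adjoint}, \texttt{reduced\_gradient} and \texttt{elastic\_energy} are hypothetical placeholders for the finite element solvers of the state and adjoint systems, and the step size, iteration count and box bounds are illustrative values only.
\begin{verbatim}
# Sketch of the adjoint-based L^2(Omega) gradient flow (illustrative only).
# solve_state, solve_adjoint, reduced_gradient, elastic_energy are
# hypothetical placeholders, not library functions.
import numpy as np

def smooth_max(energies, p=5):
    # L^p(I) surrogate for the temporal maximum of the elastic energy.
    return np.sum(np.abs(np.asarray(energies)) ** p) ** (1.0 / p)

def optimize_density(rho0, step=1e-2, iterations=200, p=5, lo=0.05, hi=0.95):
    rho = np.clip(rho0, lo, hi)                # box constraint rho in P (illustrative bounds)
    for _ in range(iterations):
        y = solve_state(rho)                   # y = (u, a1, a2, c, b) with e(y, rho) = 0
        j = smooth_max(elastic_energy(y, rho), p)   # monitored objective value (not reused here)
        z = solve_adjoint(rho, y, p=p)         # adjoint system for the L^p objective
        g = reduced_gradient(rho, y, z)        # L^2(Omega) gradient of the reduced objective
        rho = np.clip(rho - step * g, lo, hi)  # explicit gradient-flow step with soft projection
    return rho
\end{verbatim}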
We use the Computational Geometry Algorithms Library CGAL (\cite{boissonnat2000triangulations}) to generate tetrahedral meshes for the spatial resolution of diffusion and elasticity via P1 finite elements in both the state and the adjoint equation. The meshes used in our simulations consist of roughly $40$k tetrahedra. The time dependence and the couplings in the equations are treated by a semi-implicit ansatz, treating explicitly only those quantities that are not yet available at the current time step due to the couplings of the equations. The ODEs are solved on every element separately, yielding a spatially constant approximation of their solution. Due to the comparatively simple structure of the time dependent equations, a coarse time stepping can be employed, with one temporal increment corresponding to one week of the regeneration process.
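To make the elementwise treatment of the ODEs concrete, the following sketch performs one update of the cell and bone densities on all elements at once, using the state equations above. It is an assumption for illustration only: a simple forward Euler step is shown, the actual scheme may differ, and the arrays \texttt{a1}, \texttt{a2}, \texttt{c}, \texttt{b}, \texttt{rho} as well as the constants \texttt{k4}, \texttt{k6}, \texttt{k7} are simply the elementwise quantities of the model.
\begin{verbatim}
# Illustrative elementwise update of the cell and bone ODEs over one time
# increment dt (one week). a1, a2, c, b, rho are arrays with one entry per
# element; k4, k6, k7 are the model constants. Forward Euler is used here
# purely to illustrate the elementwise, semi-implicit coupling.
import numpy as np

def update_cell_and_bone(a1, a2, c, b, rho, dt, k4, k6, k7):
    c_new = c + dt * k6 * a1 * a2 * (1.0 + k7 * c) * (1.0 - c / (1.0 - rho))
    # The bone update already uses the freshly computed cell density c_new.
    b_new = b + dt * k4 * a1 * c_new * (1.0 - b / (1.0 - rho))
    return c_new, b_new
\end{verbatim}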
\subsection{Discussion} \begin{figure}
\caption{Two optimized scaffold densities for different mechanical environments. Architecture A is the result of a one dimensional optimization routine which has afterwards been transferred to a three dimensional setting, whereas for architecture B the metal fixateur is included in the optimization routine in a three dimensional model.}
\label{fig:optimized_scaffolds}
\end{figure} In figure~\ref{fig:optimized_scaffolds} we display two optimized scaffold densities. Architecture A corresponds to the outcome of an essentially one dimensional experimental setup. To produce architecture A, the fixateur (marked in gold) is excluded from the computations and a compressive soft load is applied to the top and bottom of the cylindrical domain. This parallels the optimization routine proposed in \cite{poh2019optimization} and produces a qualitatively similar result. The scaffold architecture B is obtained from an optimization routine including the fixateur and thus takes into account the drastic change in the mechanical environment introduced by the fixating element. With external fixation, the mechanical stimulus is almost absent in the vicinity of the fixateur. Naturally, this influences the scaffold optimization, and an important merit of a three dimensional model is the ability to resolve these stress shielding effects and to adapt the architecture of an optimal scaffold accordingly.
Architecture A in figure \ref{fig:optimized_scaffolds} depicts a scaffold with a higher density in the middle region. This is a reasonable outcome, as regenerated bone grows back at the scaffold ends where it is attached to the intact bone tissue. Therefore, the central scaffold region needs to maintain structural integrity by itself for a longer time. The overall shape is very similar to the results obtained in \cite{poh2019optimization} with a one dimensional model, which is not surprising as our experiment is essentially one dimensional.
Architecture B, corresponding to the experiment including the fixateur and depicted on the right of figure \ref{fig:optimized_scaffolds}, shows a considerably different distribution. A higher density in the central part is favorable for the same reason as in the experiment excluding the fixateur; however, in the vicinity of the stiff metal plate a comparatively low scaffold density is predicted. High porosity in this region of the scaffold is beneficial, as it increases the mechanical stimulus due to reduced stability and enhances vascularization\footnote{In our model vascularization is resolved through the diffusion of bio-active molecules.}. Both effects promote bone ingrowth in the region close to the fixateur.
To illustrate the benefits of scaffold architecture B over scaffold architecture A with respect to stress shielding (compare figure~\ref{fig:optimized_scaffolds}), we use architecture A in a numerical experiment \emph{including external fixation}. We then compare the strain distributions for the two architectures. Figure~\ref{fig:initial_strain_distribution} shows the strain magnitude distributions at the initial time-point, when no bone has regenerated yet. We clearly observe that architecture B mitigates stress shielding in the vicinity of the fixateur in comparison to architecture A. This trend is sustained two months into the regeneration process, as can be observed in figure~\ref{fig:2months_strain_distribution}. We remark that the reduction of stress shielding is not directly part of the objective function with respect to which the optimization is carried out. Rather, this effect is an implicit, favorable consequence of the objective function in~\eqref{equation:repeat_optimization_problem_simulation_section}, which advocates for its usage in scaffold design optimization.
\begin{figure}
\caption{Strain magnitude distributions at the initial time-point, before any bone has regenerated, for architectures A and B under external fixation.}
\label{fig:initial_strain_distribution}
\end{figure}
\begin{figure}
\caption{Strain magnitude distributions two months into the regeneration process for architectures A and B under external fixation.}
\label{fig:2months_strain_distribution}
\end{figure}
\section{Conclusions and Future Research} We analyzed a three dimensional, homogenized model for bone growth in the presence of a porous, bio-resorbable scaffold and considered the associated problem of optimal scaffold design. This leads to a PDE constrained optimization problem for which we proved the existence of an optimal control, i.e.,\ an optimal scaffold density distribution. We presented proof-of-concept numerical experiments illustrating the benefits of a three dimensional optimization routine. For future work, we propose to use the computational model in detailed numerical simulations and to study the optimized scaffold architectures in vivo.
\appendix \section{Proofs of the Main Results}\label{section:proofs_of_the_main_results} \begin{lemma}[$C^0(I,H^1(\Omega))$ bound for $u$]\label{lemma:bound_for_u_k}
Let $\mathbb{C} \in C^0(I,L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s)))$ be uniformly elliptic with ellipticity constant $\lfloor \mathbb{C} \rfloor$ independent of $t\in I$ and $x\in\Omega$, i.e., it holds
\begin{equation*}
\mathbb{C}(t,x)M:M \geq \lfloor \mathbb{C} \rfloor\lvert M \rvert^2, \quad\text{for all }M\in \mathcal{M}_s\text{ and }(t,x)\in I\times\Omega.
\end{equation*}
Furthermore, let $f \in C^0(I,H^1_D(\Omega)^*)$ be a fixed right-hand side. Then the unique solution $u\in L^2(I,H^1_D(\Omega))$ to
\begin{equation}\label{equation:abstract_elastic_equation}
\iint \mathbb{C}\varepsilon(u):\varepsilon(\cdot)\mathrm dx \mathrm dt = \int_I \langle f,\cdot \rangle_{H_D^{1}(\Omega)}\mathrm dt \quad \text{in } L^2(I,H^1_D(\Omega))^*
\end{equation}
is a member of the space $C^0(I,H^1_D(\Omega))$ and satisfies
\begin{equation*}
\lVert u \rVert_{C^0(I,H^1(\Omega))} \leq C\left( \lfloor \mathbb{C}\rfloor, C_{\text{Korn}} \right)\cdot \lVert f \rVert_{C^0(I,H^1_D(\Omega)^*)}.
\end{equation*} \end{lemma} \begin{proof}
The equation \eqref{equation:abstract_elastic_equation} implies that $u$ satisfies almost everywhere in $I$
\begin{equation*}
\underbrace{\int_\Omega \mathbb{C}(t)\varepsilon(u(t)):\varepsilon(\cdot)\mathrm dx}_{\eqqcolon \mathcal{T}_tu(t)} = f(t) \quad\text{in }H^1_D(\Omega)^*
\end{equation*}
upon applying the isometry $L^2(I,H^1_D(\Omega))^* \to L^2(I,H^1_D(\Omega)^*)$ to both sides of the equation. Testing with $u(t)$ and invoking Korn's inequality yields
\begin{equation*}
\lfloor \mathbb{C} \rfloor C_{\text{Korn}} \cdot \lVert u(t) \rVert^2_{H^1_D(\Omega)} \leq \lfloor \mathbb{C} \rfloor \cdot \lVert \varepsilon(u(t)) \rVert_{L^2(\Omega)}^2 \leq \lVert f(t) \rVert_{H^1_D(\Omega)^*}\lVert u(t) \rVert_{H^1_D(\Omega)}.
\end{equation*}
Hence,
\begin{equation*}
\lVert u(t) \rVert_{H^1_D(\Omega)}
\leq
\left( \lfloor \mathbb{C} \rfloor C_{\text{Korn}} \right)^{-1} \lVert f(t) \rVert_{H^1_D(\Omega)^*}
\leq
\left( \lfloor \mathbb{C} \rfloor C_{\text{Korn}} \right)^{-1} \lVert f \rVert_{C^0(I,H^1_D(\Omega)^*)},
\end{equation*}
meaning that the $H^1(\Omega)$ bound on $u(t)$ is independent of $t\in I$. To show that $u$ is continuous in time, we compute for $t,s \in I$
\begin{equation*}
f(t) - f(s) = \mathcal{T}_tu(t) - \mathcal{T}_su(s) = \mathcal{T}_t(u(t)-u(s)) + \mathcal{T}_t(u(s)) - \mathcal{T}_su(s).
\end{equation*}
Using the coercivity of $\mathcal{T}_t$ we find
\begin{equation*}
\lVert u(t) - u(s) \rVert_{H^1(\Omega)} \leq \frac{1}{\lfloor \mathbb{C} \rfloor C_{\text{Korn}}}\left[ \lVert f(t) - f(s) \rVert_{H^1(\Omega)^*} + \lVert \mathcal{T}_tu(s) - \mathcal{T}_su(s) \rVert_{H^1_D(\Omega)^*} \right]
\end{equation*}
By the assumption $f\in C^0(I,H^1_D(\Omega)^*)$ it is clear that the first term above tends to zero when $|t-s|\to 0$. It remains to estimate
\begin{align*}
\lVert \mathcal{T}_tu(s) - \mathcal{T}_su(s) \rVert_{H^1_D(\Omega)^*}
& \leq
\sup_{\lVert \varphi \rVert_{H^1_D(\Omega)}\leq 1}\int_\Omega \left[ \mathbb{C}(t) - \mathbb{C}(s) \right]\varepsilon(u(s)):\varepsilon(\varphi)\mathrm dx
\\
& \leq
\lVert \mathbb{C}(t) - \mathbb{C}(s) \rVert_{L^\infty(\Omega, \mathcal{L}(\mathcal{M}_s))}\lVert u(s) \rVert_{H^1_D(\Omega)}.
\end{align*}
The time-independent bound on $\lVert u(s) \rVert_{H^1_D(\Omega)}$ and the continuity assumption on $\mathbb{C}$ imply the assertion. \end{proof} \begin{lemma}[Equi-Continuity]\label{lemma:equi_continuity_of_uk} Assume $(\rho_k)\subset P$ is any sequence, $(b_k)\subset W_{\rho_k}$ is an equi-continuous sequence in $C^0(I,C^0(\Omega))$ and $(f_k)$ is an equi-continuous and bounded sequence in $C^0(I,H^1_D(\Omega)^*)$. Assume that $\mathbb{C}(\rho_k,\sigma, b_k)$ satisfies the assumptions of Setting \ref{section:setting_optimal_control}, i.e., in particular, it holds \begin{equation}
\lVert \mathbb{C}(\rho_k,\sigma, b_k)(t) - \mathbb{C}(\rho_k, \sigma, b_k)(s) \rVert_{L^\infty(\Omega, \mathcal{L}(\mathcal{M}_s))} \leq C \lVert b_k(t) - b_k(s) \rVert_{C^0(\Omega)} \end{equation} for a constant $C$ that depends neither on the data $\rho_k\in P$ and $b_k\in W_{\rho_k}$ nor on $s,t\in I$. Denote by $u_k$ the unique solution of \begin{equation*}
\iint \mathbb{C}(\rho_k,\sigma,b_k)\varepsilon(u_k):\varepsilon(\cdot)\mathrm dx \mathrm dt = \int_I \langle f_k,\cdot \rangle_{H^1_D(\Omega)}\mathrm dt \quad \text{in }L^2(I,H^1_D(\Omega))^*. \end{equation*} Then, $(u_k)$ lies in $C^0(I,H^1_D(\Omega))$ and is equi-continuous in this space. \end{lemma} \begin{proof}
We are in the situation of Lemma \ref{lemma:bound_for_u_k}; hence we know that $u_k$ is a member of the space $C^0(I,H^1_D(\Omega))$ and we only need to establish the equi-continuity. To this end, repeating the computations in Lemma \ref{lemma:bound_for_u_k} for $u_k$ instead of $u$, we arrive at
\begin{align*}
\lVert u_k(t) - u_k(s) \rVert_{H^1(\Omega)}
&\leq
\frac{1}{\lfloor \mathbb{C}_k \rfloor C_{\text{Korn}}}\left[ \lVert f_k(t) - f_k(s) \rVert_{H^1(\Omega)^*} + \lVert \mathbb{C}_k(t) - \mathbb{C}_k(s) \rVert_{L^\infty(\Omega,\mathcal{L}(\mathcal{M}_s))} \lVert u_k(s) \rVert_{H^1(\Omega)} \right]
\\&\leq
\frac{1}{\lfloor \mathbb{C}_k \rfloor C_{\text{Korn}}}\left[ \lVert f_k(t) - f_k(s) \rVert_{H^1(\Omega)^*} + C\lVert b_k(t) - b_k(s) \rVert_{C^0(\Omega)} \right]
\end{align*}
as $\lVert u_k(s) \rVert_{H^1(\Omega)}$ is bounded uniformly in $k\in\mathbb{N}$ and $s\in I$ by Lemma \ref{lemma:bound_for_u_k}, through the boundedness we assumed for $(f_k)$. Then, the equi-continuity of $(f_k)$ and $(b_k)$ yields the equi-continuity of $(u_k)$. \end{proof} The following lemma summarizes the main result of \cite{haller2019higher}. We restrict ourselves to the generality needed for our application, which, however, is not the most general situation treated there. We refer the reader to \cite{haller2019higher} for relaxations concerning boundary regularity, regularity of coefficients and the differential operator. \begin{lemma}[Higher Regularity for Elliptic Systems]\label{lemma:higher_regularity_elliptic_equation}
Let $\mathbb{C} \in L^\infty(\Omega, \mathcal{L}(\mathcal{M}_s))$ be uniformly elliptic, i.e., there exists $\lfloor \mathbb{C} \rfloor > 0$ such that
\begin{equation*}
\mathbb{C}M:M \geq \lfloor \mathbb{C} \rfloor |M|^2, \quad \text{for all }M\in\mathcal{M}_s.
\end{equation*}
Assume that $\mathbb{C}_{ijkl}\in C^\alpha(\Omega)$ for a fixed but arbitrarily small $\alpha >0$. Then, there exists $\theta = \theta(\alpha) > 0$ such that for every $f\in H^{1-\theta}_D(\Omega)^*$ the solution $u\in H^1_D(\Omega)$ to
\begin{equation*}
\int_\Omega \mathbb{C}\varepsilon(u):\varepsilon(\cdot)\mathrm dx = f \quad \text{in }H^1_D(\Omega)^*
\end{equation*}
is in fact a member of $H^{1+\theta}(\Omega)$ and we can estimate
\begin{equation*}
\lVert u \rVert_{H^{1+\theta}(\Omega)} \leq C\lVert \mathbb{C} \rVert_{C^\alpha(\Omega)}\lVert f \rVert_{H^{1 - \theta}_D(\Omega)^*},
\end{equation*}
where $C$ does not depend on the concrete form of $\mathbb C$. \end{lemma} \begin{proof}
This follows from Theorem 1 and Lemma 1 in \cite{haller2019higher}. \end{proof} The last result we need to establish the relative compactness of $(u_k)$ in $C^0(I,H^1_D(\Omega))$ is -- not surprisingly -- a vector-valued version of the Arzel\`a-Ascoli Theorem, which we recall here for convenience. \begin{theorem}[Characterization of Relative Compactness in $C^0(K,X)$ Spaces]\label{theorem:arzela_ascoli_vector_valued}
Let $X$ be a Banach space and $K$ a compact metric space. Then a set $\mathcal{F}\subset C^0(K,X)$ is relatively compact if and only if the following two conditions hold:
\begin{itemize}
\item [(i)] The set $\mathcal{F}$ is equi-continuous, that is, for all $t\in K$ and all $\varepsilon >0$ there exists a neighborhood $U(t)\subset K$ such that
\begin{equation*}
\sup_{u\in \mathcal{F}}\lVert u(t) - u(s) \rVert_{X} \leq \varepsilon\quad\text{for all }s\in U(t).
\end{equation*}
\item[(ii)] For all $t\in K$ the set
\begin{equation*}
\{ u(t)\mid u\in \mathcal{F} \} \subset X
\end{equation*}
is relatively compact.
\end{itemize} \end{theorem}
The focus of the next lemma lies on the a priori estimates for linear parabolic equations. \begin{lemma}[A Priori Estimate for Parabolic Evolution Equations]\label{lemma:bound_for_a_k}
Let $(i,X,H)$ be a Gelfand triple, $M:X\to X^*$ a linear coercive operator with coercivity constant $\lfloor M \rfloor$, i.e., it holds
\begin{equation*}
\langle M a, a\rangle_X \geq \lfloor M \rfloor \lVert a \rVert_X^2, \quad \text{for all }a\in X.
\end{equation*}
Let $I = [0,T]$ denote a time interval, $f\in L^2(I,X^*)$ a fixed right-hand side and $a_0\in H$ an initial value. Then there exists a unique solution $a\in H^1(I,X,X^*)$ with $a(0)=a_0$ to
\begin{equation*}
\int_I \langle d_ta, \cdot \rangle_X \mathrm dt + \int_I\langle Ma, \cdot \rangle_X\mathrm dt = \int_I \langle f,\cdot \rangle_X \mathrm dt \quad \text{in } L^2(I,X)^*.
\end{equation*}
Furthermore, the norm of the solution $a$ can be estimated by
\begin{equation}\label{equation:a_priori_estimate_evolution_problems}
\lVert a \rVert_{H^1(I,X,X^*)} \leq C\left( \lVert M \rVert_{\mathcal{L}(X,X^*)}, \lfloor M \rfloor^{-1} \right)\cdot \left( \lVert a(0) \rVert_{H} + \lVert f \rVert_{L^2(I,X^*)}\right),
\end{equation}
with $C$ being monotonously increasing in $\lVert M \rVert_{\mathcal{L}(X,X^*)}$ and $\lfloor M \rfloor^{-1}$. \end{lemma} \begin{proof}
We establish only the estimate \eqref{equation:a_priori_estimate_evolution_problems}; the existence of a solution is the well known maximal regularity result of J.\ L.\ Lions, see for instance \cite[Part II, Section 6]{ern2013theory}. To derive the estimate, we note that by the natural isometry $L^2(I,X^*) = L^2(I,X)^*$ the function $a$ satisfies a pointwise almost-everywhere equation in $X^*$, namely
\begin{equation*}
d_ta(s) + Ma(s) = f(s)
\end{equation*}
which, at time $s\in I$, we can test with $a(s) \in X$ and integrate from $0$ to $t$. Then, we apply the partial integration formula for Gelfand triples and estimate using the coercivity of $M$ and Young's inequality
\begin{align*}
\frac12 \lVert a(t) \rVert_H^2 + \lfloor M \rfloor \int_0^t\lVert a(s) \rVert^2_X\mathrm ds
&\leq
\frac12 \lVert a(0) \rVert_H^2 + \lVert f \rVert_{L^2(I,X^*)}\lVert a \rVert_{L^2(I,X)}
\\&\leq
\frac12 \lVert a(0) \rVert_H^2 + \frac{1}{2\lfloor M \rfloor}\lVert f \rVert^2_{L^2([0,t],X^*)} + \frac{\lfloor M \rfloor}{2} \lVert a \rVert^2_{L^2([0,t],X)},
\end{align*}
which leads to
\begin{equation*}
\frac12 \lVert a(t) \rVert_H^2 + \frac{\lfloor M \rfloor}{2} \int_0^t\lVert a(s) \rVert^2_X\mathrm ds
\leq
\frac12 \lVert a(0) \rVert_H^2 + \frac{1}{2\lfloor M \rfloor}\lVert f \rVert^2_{L^2([0,t],X^*)}.
\end{equation*}
We obtain, by estimating the terms on the left-hand side separately and taking the supremum over $t\in I$, both
\begin{gather*}
\lVert a \rVert_{C^0(I,H)}^2 \leq \lVert a(0) \rVert_H^2 + \frac{1}{\lfloor M \rfloor}\lVert f \rVert^2_{L^2(I,X^*)}
\quad \text{and} \quad
\lVert a \rVert^2_{L^2(I,X)} \leq \frac{1}{\lfloor M \rfloor}\lVert a(0) \rVert_H^2 + \frac{1}{\lfloor M \rfloor^2}\lVert f \rVert^2_{L^2(I,X^*)}.
\end{gather*}
To estimate the $L^2(I,X)^*$ norm of $d_ta$, we use that $a$ is the solution of the parabolic equation to estimate
\begin{align*}
\lVert d_ta \rVert_{L^2(I,X)^*}
&=
\sup_{\lVert \varphi \rVert_{L^2(I,X)}\leq 1} \int_I \langle d_ta, \varphi \rangle_X\mathrm dt
\\ & \leq
\sup_{\lVert \varphi \rVert_{L^2(I,X)}\leq 1} \left[ \int_I |\langle Ma,\varphi\rangle_X|\mathrm dt + \int_I|\langle f,\varphi \rangle_X| \mathrm dt \right]
\\ & \leq
\lVert M \rVert_{\mathcal{L(X,X^*)}}\lVert a \rVert_{L^2(I,X)} + \lVert f \rVert_{L^2(I,X^*)}.
\end{align*}
Inserting the previous estimate for $a$ in $L^2(I,X)$ norm, we can bound $d_ta$ in $L^2(I,X)^*$ norm. Combining the considerations for $a$ and $d_ta$ lets us bound the $H^1(I,X,X^*)$ norm as desired. \end{proof} \begin{lemma}[$L^p(I,C^\alpha(\Omega))$ Bound for $(a_k)$]\label{lemma:the_bound_for_a}
Assume $\Omega\subset \mathbb{R}^d$ with $d=1,2,3$ and $\partial\Omega = \Gamma_N\cup\Gamma_D$ where $\Omega \cup \Gamma_N$ is Gr\"oger regular. Let $D\in L^\infty(\Omega,\mathcal{M}_s)$ be uniformly elliptic with ellipticity constant $\lfloor D \rfloor >0$, $k > 0$, $f\in L^p(I,L^2(\Omega))$ for a fixed $p > 2$ and $a_0 \in L^\infty(\Omega)$ some essentially bounded initial condition. Then there exists $\alpha = \alpha(p)\in(0,1)$ independent of $D$ and $f$ such that the solution $a \in H^1(I,H^1_D(\Omega),H^1_D(\Omega)^*)$ to
\begin{align*}
\int_I \langle d_ta,\cdot \rangle_{H^1_D(\Omega)}\mathrm dt + \iint D\nabla a\nabla \cdot + k a\cdot \mathrm dx \mathrm dt &= \iint f\cdot \mathrm dx\mathrm dt
\\
a(0) = a_0
\end{align*}
is a member of $L^p(I,C^\alpha(\Omega))$ and satisfies the estimate
\begin{equation*}
\lVert a \rVert_{L^p(I,C^\alpha(\Omega))} \leq C\left(p,\lfloor D \rfloor, \lVert D \rVert_{L^\infty(\Omega,\mathcal{M}_s)}\right)\left( \lVert f \rVert_{L^p(I,L^2(\Omega))} + \lVert a_0 \rVert_{L^\infty(\Omega)} \right).
\end{equation*} \end{lemma} \begin{proof}
The proof of this Lemma exceeds the scope of this manuscript and is the main result of \cite{dondl2021regularity}. \end{proof} \begin{remark}
We comment on some of the aspects leading to the complexity of the proof of Lemma \ref{lemma:the_bound_for_a}.
\begin{itemize}
\item [(i)] The mixed boundary conditions, rough coefficients and the jump initial condition prevent the standard theory from being applicable. If it were not for this roughness, an $L^2(I,H^2(\Omega))$ result could be derived by standard theory, see for instance \cite{evans1998partial}.
\item[(ii)] Even invoking the theory of abstract parabolic equations as described in \cite{amann1995linear} only almost suffices. In fact, combining the results in \cite{amann1995linear} with \cite{haller2009holder} yields $L^2(I,C^\alpha(\Omega))$ regularity only if $a_0$ lies in a suitable trace space for initial conditions. The trace space in this case is $H^1_D(\Omega)$ and not $L^\infty(\Omega)$.
\item[(iii)] The strategy to prove Lemma \ref{lemma:the_bound_for_a} is therefore to treat the cases $f = 0$, $a(0) = a_0$ and $f = f$, $a(0) = 0$ separately and then to use the superposition principle for linear equations. For details we refer to \cite{dondl2021regularity}.
\end{itemize} \end{remark} We now treat the cell ODE. We need to establish that a solution in $W^{1,2}(I,C^\alpha(\Omega))$ exists and is suitably bounded in terms of the data. We already have access to the fact that a long-time solution in $W^{1,2}(I,C^0(\Omega))$ exists; hence the crucial part is to control the H\"older norm of this solution. This can be done by accessing the solution $c$ through its formulation as a fixed point and then estimating its H\"older norm, given suitable regularity of the data.
\begin{lemma}\label{lemma:holder_bound_c_k}
Let $a_1$ and $a_2$ be functions in $L^4(I,C^\alpha(\Omega))$ with $a_1, a_2 \geq 0$. Assume that $\rho \in C^\alpha(\Omega)$ satisfies $0 < \rho < 1$ and $k_6$ and $k_7$ are positive constants. Then there exists a solution $c \in W^{1,2}(I,C^0(\Omega))$ to the equation
\begin{equation*}
d_tc = k_6a_1a_2(1+k_7c)\left( 1 - \frac{c}{1-\rho} \right), \quad c(0) = 0
\end{equation*}
with $0\leq c \leq 1$. Furthermore, we can control the $\alpha$-H\"older seminorm of $c$ in the following way
\begin{equation*}
\lfloor c(t) \rfloor_\alpha \leq C\left( \lVert a_1 \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert a_2 \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert \rho \rVert_{C^\alpha(\Omega)} \right),
\end{equation*}
with the constant $C$ being monotone in its arguments. \end{lemma} \begin{proof}
The existence of a solution in the space $W^{1,2}(I,C^0(\Omega))$ was already established in Theorem 3.2 in \cite{dondl2021efficient}. We are only concerned with the control over the H\"older seminorm. To simplify notation, we prove the statement for an ODE of the form
\begin{equation}\label{local}
d_tc = m(1 + c)(1 - \theta c), \quad c(0) = 0
\end{equation}
with $m \in L^2(I,C^\alpha(\Omega))$ and $\theta \in C^\alpha(\Omega)$ with $0 < \theta^{-1}(x) < 1$, which implies that the solution $c$ to \eqref{local} takes values in the unit interval, i.e., $c(t,x)\in [0,1]$, see Lemma B.5. The existence of $c$ in $W^{1,2}(I,C^0(\Omega))$ solving \eqref{local} implies, upon integrating, that $c(t)$ is given by
\begin{equation*}
c(t) = \int_0^t m(s)(1+c(s))(1-\theta c(s))\mathrm ds,
\end{equation*}
with the integral being a $C^0(\Omega)$ valued Bochner integral. As point evaluation at $x\in \overline{\Omega}$ is continuous and linear from $C^0(\Omega)$ to $\mathbb{R}$, it also holds
\begin{equation*}
c(t,x) = \int_0^t m(s,x)(1+c(s,x))(1-\theta(x) c(s,x))\mathrm ds.
\end{equation*}
We use the above formula and the triangle inequality to estimate
\begin{align*}
|c(t,x) - c(t,y)| &\leq \underbrace{\int_0^t|m(s,x)-m(s,y)|\mathrm ds}_{\eqqcolon A}
+
\underbrace{\int_0^t |m(s,x)c(s,x) - m(s,y)c(s,y)| \mathrm ds}_{\eqqcolon B}
\\&+
\underbrace{\int_0^t |m(s,y)\theta(y)c(s,y) - m(s,x)\theta(x)c(s,x)| \mathrm ds}_{\eqqcolon C}
\\&+
\underbrace{\int_0^t |m(s,y)\theta(y)c(s,y)^2 - m(s,x)\theta(x)c(s,x)^2| \mathrm ds}_{\eqqcolon D}.
\end{align*}
For brevity, we set $\tilde m(t,x) = m(t,x)\theta(x)$. Inferring that $c$ takes values in $[0,1]$, we claim that the above estimate leads to
\begin{equation}\label{local_holder_estimate}
|c(t,x) - c(t,y)| \leq \int_0^t \left( 2\lfloor m(s) \rfloor_\alpha + \lVert m(s) \rVert_{C^0(\Omega)}\lfloor c(s) \rfloor_\alpha + 3\lVert \Tilde{m}(s) \rVert_{C^0(\Omega)} \lfloor c(s) \rfloor_\alpha + 2\lfloor \Tilde{m}(s) \rfloor_\alpha \right)|x-y|^\alpha\mathrm ds.
\end{equation}
Dividing by $|x-y|^\alpha$ and taking the supremum over pairs $(x,y)\in\overline{\Omega}^2$ with $x\neq y$ we get
\begin{equation*}
\lfloor c(t) \rfloor_\alpha
\leq
\int_0^t \underbrace{2 \left( \lfloor m(s) \rfloor_\alpha + \lfloor \tilde m(s) \rfloor_\alpha \right)}_{\eqqcolon \alpha(s)} + \underbrace{\left( \lVert m(s) \rVert_{C^0(\Omega)} + 3\lVert \tilde m(s) \rVert_{C^0(\Omega)} \right)}_{\eqqcolon \beta(s)} \lfloor c(s) \rfloor_\alpha \mathrm ds.
\end{equation*}
Hence, by Gr\"onwall's lemma we get
\begin{equation*}
\lfloor c(t) \rfloor_\alpha \leq \left( 1 + \lVert \beta \rVert_{L^1(I)}\exp\left(\lVert \beta \rVert_{L^1(I)}\right) \right)\lVert \alpha \rVert_{L^1(I)}
\end{equation*}
with
\begin{align*}
\lVert \alpha \rVert_{L^1(I)} &\leq 2 \lVert m \rVert_{L^1(I,C^\alpha(\Omega))} + 2 \lVert \tilde m \rVert_{L^1(I,C^\alpha(\Omega))}
\\
\lVert \beta \rVert_{L^1(I)} &\leq \lVert m \rVert_{L^1(I,C^0(\Omega))} + 3 \lVert \tilde m \rVert_{L^1(I,C^0(\Omega))}.
\end{align*}
As the $L^2$ norm dominates the $L^1$ norm on a bounded interval, we are done, provided we still supply the details of the computations that led to \eqref{local_holder_estimate}. To this end, note that we may estimate $(A)$ by
\begin{equation*}
\int_0^t |m(s,x)-m(s,y)|\mathrm ds \leq \int_0^t \lfloor m(s) \rfloor_\alpha|x-y|^\alpha \mathrm ds.
\end{equation*}
Using the triangle inequality and the pointwise properties of $c$, we estimate $(B)$ by
\begin{align*}
\int_0^t |m(s,x)c(s,x) - m(s,y)c(s,y)| \mathrm ds
&\leq
\int_0^t |m(s,x)|\lfloor c(s) \rfloor_\alpha |x - y|^\alpha \mathrm ds + \int_0^t |c(s,y)|\lfloor m(s)\rfloor_{\alpha}|x-y|^\alpha\mathrm ds
\\&\leq
\int_0^t \lVert m(s)\rVert_{C^0(\Omega)} \lfloor c(s) \rfloor_\alpha |x - y|^\alpha \mathrm ds + \int_0^t \lfloor m(s)\rfloor_{\alpha}|x-y|^\alpha\mathrm ds.
\end{align*}
Using again the abbreviation $\tilde m = m\theta$ and noting that $\tilde m$ has the same regularity as $m$, we can estimate the term $(C)$ in analogy to term $(B)$ by
\begin{equation*}
\int_0^t |m(s,y)\theta(y)c(s,y) - m(s,x)\theta(x)c(s,x)| \mathrm ds
\leq
\int_0^t \lVert \tilde m(s) \rVert_{C^0(\Omega)}\lfloor c(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds + \int_0^t \lfloor \tilde m(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds.
\end{equation*}
To estimate $(D)$ we need to split the term
\begin{equation*}
(D) = \underbrace{\int_0^t |\tilde m(s,y)|\,\left| c(s,y)^2 - c(s,x)^2 \right|\mathrm ds}_{\eqqcolon D_1} + \underbrace{\int_0^t |c(s,x)|^2\left| \tilde m(s,y) - \tilde m(s,x) \right| \mathrm ds}_{\eqqcolon D_2}.
\end{equation*}
Using $c(s,y)^2 - c(s,x)^2 = c(s,y)(c(s,y)-c(s,x)) + c(s,x)(c(s,y)-c(s,x))$ and $c(s,x)\in[0,1]$, we estimate $(D_1)$
\begin{align*}
(D_1) &\leq \int_0^t |\tilde m(s,y)|\, |c(s,y)|\,\lfloor c(s)\rfloor_\alpha |x-y|^\alpha + |\tilde m(s,y)|\,|c(s,x)|\,\lfloor c(s) \rfloor_\alpha|x-y|^\alpha\mathrm ds
\\&\leq
\int_0^t 2|\tilde m(s,y)|\,\lfloor c(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds
\\&\leq
\int_0^t 2\lVert \tilde m(s)\rVert_{C^0(\Omega)}\,\lfloor c(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds
\end{align*}
and for $(D_2)$
\begin{equation*}
(D_2) \leq \int_0^t |c(s,x)|^2\lfloor \tilde m(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds
\leq
\int_0^t \lfloor \tilde m(s) \rfloor_\alpha |x-y|^\alpha \mathrm ds.
\end{equation*}
Collecting all estimates yields the claim and the proof is complete. \end{proof}
\begin{lemma}\label{lemma:bound_for_c_k}
Let $a_1$ and $a_2$ be functions in $L^4(I,C^\alpha(\Omega))$ with $a_1, a_2 \geq 0$. Assume that $\rho \in C^\alpha(\Omega)$ satisfies $0 < \rho < 1$ and $k_6$ and $k_7$ are positive constants. Then there exists a unique solution $c \in W^{1,2}(I,C^\alpha(\Omega))$ to the equation
\begin{equation*}
d_tc = k_6a_1a_2(1+k_7c)\left( 1 - \frac{c}{1-\rho} \right), \quad c(0) = 0
\end{equation*}
with $0\leq c \leq 1$. Furthermore, we can control the full $\alpha$-H\"older norm of $c$ in the following way
\begin{equation*}
\lVert c \rVert_{W^{1,2}(I,C^\alpha(\Omega))} \leq C\left( \lVert a_1 \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert a_2 \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert \rho \rVert_{C^\alpha(\Omega)} \right),
\end{equation*}
with the constant $C$ being monotone in its arguments. \end{lemma} \begin{proof}
We use again the notation
\begin{equation*}
d_tc = m(1+c)(1-\theta c), \quad c(0)=0,
\end{equation*}
where $m\in L^2(I,C^\alpha(\Omega))$ and $m(t,x) \geq 0$ and $\theta \in C^\alpha(\Omega)$. Thus, the inducing function $F$ in the sense of Theorem B.2 in \cite{dondl2021efficient} is given by
\begin{equation*}
F:I\times C^\alpha(\Omega) \to C^\alpha(\Omega), \quad F(t,c) = m(t)(1 + c)(1 - \theta c).
\end{equation*}
To prove the existence of a unique short-time solution in the space $W^{1,2}(I_\delta, C^\alpha(\Omega))$, we need $F$ to be of Carath\'eodory regularity. Clearly, $F(\cdot,c):I\to C^\alpha(\Omega)$ is Bochner measurable as $m$ is. Furthermore, $F(t,\cdot):C^\alpha(\Omega) \to C^\alpha(\Omega)$ is continuous. This is due to the fact that $C^\alpha(\Omega)$ is a Banach algebra.
To proceed, we need a boundedness and a Lipschitz condition on bounded subsets of $C^\alpha(\Omega)$, compare to Theorem B.2 in \cite{dondl2021efficient}. To this end, let $B\subset C^\alpha(\Omega)$ be a bounded set. For $c \in B$ we estimate
\begin{equation*}
\lVert F(t,c) \rVert_{C^\alpha(\Omega)} \leq C \lVert m(t) \rVert_{C^\alpha(\Omega)}\lVert 1 + c \rVert_{C^\alpha(\Omega)}\lVert 1-\theta c \rVert_{C^\alpha(\Omega)}
\end{equation*}
The term $\lVert 1 + c \rVert_{C^\alpha(\Omega)}\lVert 1-\theta c \rVert_{C^\alpha(\Omega)}$ can be bounded in terms of the measure of $\Omega$, the assumed boundedness of $B$ and the $C^\alpha(\Omega)$ norm of $\theta$. Hence, there exists a constant $C = C(\Omega, \lVert \theta \rVert_{C^\alpha(\Omega)}, B)$ such that
\begin{equation*}
\lVert F(t,c) \rVert_{C^\alpha(\Omega)} \leq C\left( \Omega, \lVert \theta \rVert_{C^\alpha(\Omega)},B \right) \lVert m(t) \rVert_{C^\alpha(\Omega)}
\end{equation*}
and by assumption, the map $t\mapsto \lVert m(t) \rVert_{C^\alpha(\Omega)}$ is a member of $L^2(I)$. Now, let $c, \bar c \in B$ and consider the difference
\begin{align*}
\lVert F(t,c) - F(t,\bar c) \rVert_{C^\alpha(\Omega)} &\leq C \lVert m(t) \rVert_{C^\alpha(\Omega)} \lVert (1+c)(1-\theta c) - (1+\bar c)(1 - \theta\bar c) \rVert_{C^\alpha(\Omega)}
\\&\leq
C\lVert m(t) \rVert_{C^\alpha(\Omega)}\left[ \lVert 1+ \theta \rVert_{C^\alpha(\Omega)} \lVert c - \bar c \rVert_{C^\alpha(\Omega)} + \lVert \theta \rVert_{C^\alpha(\Omega)}\left\lVert c^2 - \bar c^2 \right\rVert_{C^\alpha(\Omega)} \right].
\end{align*}
We look at the quadratic term separately
\begin{align*}
\left\lVert c^2 - \bar c^2 \right\rVert_{C^\alpha(\Omega)} \leq C \lVert c - \bar c \rVert_{C^\alpha(\Omega)}\lVert c + \bar c \rVert_{C^\alpha(\Omega)} \leq C\left(B\right)\lVert c-\bar c \rVert_{C^\alpha(\Omega)}.
\end{align*}
Hence, there exists a function $L_B \in L^2(I)$ such that
\begin{equation*}
\lVert F(t,c) - F(t,\bar c) \rVert_{C^\alpha(\Omega)} \leq L_B(t)\lVert c - \bar c \rVert_{C^\alpha(\Omega)}.
\end{equation*}
Consulting Theorem B.2 in \cite{dondl2021efficient}, the estimates above provide the existence of an interval $[0,\delta] = I_\delta$ and a unique function $c\in W^{1,2}(I_\delta, C^\alpha(\Omega))$ solving the ODE.
To show that the solution can be extended to all of $I=[0,T]$, we extend the solution $c$ to the maximal interval $[0,t^*)$ of existence. For any $t_0\in [0,t^*)$ we set $c_0 = c(t_0)$ and consider the initial value problem
\begin{equation*}
d_tc = k_6a_1a_2(1+k_7c)\left(1 - \frac{c}{1-\rho} \right), \quad c(t_0) = c_0.
\end{equation*}
Then this problem has a unique solution in $W^{1,2}([t_0-\tilde \delta, t_0 + \tilde\delta]\cap I, C^\alpha(\Omega))$ for some suitable $\tilde \delta > 0$. In fact, $\tilde \delta$ depends only on the $L^2(I,C^\alpha(\Omega))$ norm of $a_1a_2$, the $C^\alpha(\Omega)$ norm of $(1-\rho)^{-1}$ and the $L^2(I,C^\alpha(\Omega))$ norm of $c$ on $[0,t^*)$. This implies that $\tilde\delta$ does not depend on the position of $t_0\in[0,t^*)$, and thus $t^*=T$ and the solution extends to the closed interval $I$.
Finally, the promised bound on the $W^{1,2}(I,C^\alpha(\Omega))$ norm of $c$ is easily established using $c(t,x)\in[0,1]$ and the estimate on the H\"older seminorm of Lemma \ref{lemma:holder_bound_c_k}. \end{proof}
We now show how to establish the existence of solutions to the bone ODE in the space $W^{1,2}(I,C^\alpha(\Omega))$. Furthermore, we show that the $W^{1,2}(I,C^\alpha(\Omega))$ norm of such solutions can be bounded, given bounded data in the right spaces. This could in principle be done by the same arguments as for the cell equation; however, the bone ODE is linear and thus admits a more elegant approach. \begin{lemma}\label{lemma:bound_for_b_k}
Let $X$ be a Banach algebra and denote by $C_X > 0$ the norm of its multiplication and assume that $p > 1$. By $W^{1,p}_0(I,X)$ we denote the vector-valued Sobolev space with vanishing initial conditions. For a function $m\in L^p(I,X)$ we define the multiplication operator
\begin{equation*}
M: C^0(I,X) \to L^p(I,X), \quad Mv = t\mapsto m(t)v(t).
\end{equation*}
Then the map
\begin{equation*}
d_t + M: W^{1,p}_0(I,X) \to L^p(I,X), \quad v\mapsto d_tv + Mv
\end{equation*}
is a linear homeomorphism. Furthermore, given a right-hand side $f\in L^p(I,X)$ we may bound the solution $v$ to $d_tv + Mv = f$ in the following way
\begin{equation*}
\lVert v \rVert_{W^{1,p}_0(I,X)} \leq C \left( |I|, \lVert m \rVert_{L^p(I,X)}, C_X \right)\lVert f \rVert_{L^p(I,X)},
\end{equation*}
i.e., the norm of $v$ depends only on $f$ and $m$ measured in the $L^p(I,X)$ norm, and the constant $C$ is monotone in these quantities. \end{lemma} \begin{proof}
The continuity and linearity of the map $d_t + M$ are clear. Its bijectivity follows as an application of Theorem B.2 in \cite{dondl2021efficient}. To this end, note that the inducing function $F:I\times X \to X$ of Theorem B.2 in \cite{dondl2021efficient} is given by
\begin{equation*}
F:I\times X \to X, \quad F(t,x) = m(t)x.
\end{equation*}
This is clearly a Carath\'eodory function and it holds for $x,y \in X$
\begin{equation*}
\lVert F(t,x) - F(t,y) \rVert_X \leq C \lVert m(t) \rVert_X\lVert x-y \rVert_X.
\end{equation*}
The function $C\lVert m(\cdot) \rVert_X$ is a member of $L^p(I)$ with $p > 1$, and therefore the existence of a unique solution $v \in W^{1,p}_0(I,X)$ is established. To provide the bound, we employ Gr\"onwall's inequality. Note that, by the fundamental theorem of calculus, the solution $v$ satisfies the integral identity
\begin{equation*}
v(t) = \int_0^t f(s) - m(s)v(s)\mathrm ds
\end{equation*}
and consequently the estimate
\begin{equation*}
\lVert v(t) \rVert_X \leq \int_0^t \lVert f(s) \rVert_X + C_X\lVert m(s) \rVert_X\lVert v(s) \rVert_X\mathrm ds.
\end{equation*}
Using Gr\"onwall's inequality yields
\begin{equation*}
\lVert v(t) \rVert_X \leq \left[ 1 + C_X\lVert m \rVert_{L^1(I,X)}\exp\left( C_X \lVert m \rVert_{L^1(I,X)} \right) \right]\cdot \lVert f \rVert_{L^1(I,X)}.
\end{equation*}
Clearly, this implies a bound in $C^0(I,X)$ norm for $v$ of the form
\begin{equation*}
\lVert v \rVert_{C^0(I,X)}\leq C \left( \lVert m \rVert_{L^1(I,X)}, C_X \right)\lVert f \rVert_{L^1(I,X)}
\end{equation*}
and consequently also in $L^p(I,X)$. To bound $d_tv$, we use the equation satisfied by $v$ and estimate
\begin{align*}
\lVert d_tv \rVert_{L^p(I,X)} &= \lVert f - Mv \rVert_{L^p(I,X)}
\\
&\leq
\lVert f \rVert_{L^p(I,X)} + C_X\lVert v \rVert_{C^0(I,X)}\lVert m \rVert_{L^p(I,X)}
\\
&\leq
C \left( |I|, \lVert m \rVert_{L^p(I,X)}, C_X \right)\lVert f \rVert_{L^p(I,X)}.
\end{align*} \end{proof} \begin{lemma}\label{lemma:providing_assumptions}
Assume $(\rho_k)$ is a minimizing sequence for $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)}$. Then properties $(A1)$-$(A5)$ hold. \end{lemma} \begin{proof}
We begin with $(A1)$. The regularizing term $\eta\lVert\cdot\rVert^2_{H^2(\Omega)}$ leads to an $H^2(\Omega)$ bound for any minimizing sequence $(\rho_k)$ of $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)}$, as $\hat J$ is bounded from below. Then there exists a subsequence (not relabeled) with
\begin{equation*}
\rho_k \rightharpoonup \rho^*\quad\text{in }H^2(\Omega).
\end{equation*}
Using the compactness of the embedding $H^2(\Omega)\hookrightarrow\hookrightarrow C^0(\Omega)$, we get
\begin{equation*}
\rho_k \to \rho^* \quad \text{in }C^0(\Omega)
\end{equation*}
and invoking the closedness of $P$ in $C^0(\Omega)$ yields $\rho^* \in P$ as desired.
We first provide a weaker statement than $(A2)$; namely, we prove that $(u_k)$ is bounded uniformly in $C^0(I,H^1(\Omega))$. At the end of the proof, we establish $(A2)$ in full. In Setting \ref{section:setting_optimal_control} we assumed that the boundary conditions satisfied by $u_k$ are
\begin{gather*}
\mathbb{C}(\rho_k,\sigma,b_k)\varepsilon(u_k)\cdot \eta = g_N, \quad u_k=u_D
\end{gather*}
on $\Gamma_N$ and $\Gamma_D$ respectively, where $g_N \in C^0(I,L^2(\partial\Omega)) \subset C^0(I,H^{1/2}(\partial\Omega)^*)$ and $u_D\in C^0(I,H^{1+\theta}(\Omega))$ implying that $(u_D)_{|\Gamma_D}\in H^{1/2}(\partial\Omega)$. The unique solutions $\tilde u_k \in L^2(I,H^1_D(\Omega))$ to
\begin{equation}\label{equation:equation_for_u_0_k}
\iint \mathbb{C}(\rho_k, \sigma, b_k)\varepsilon(\tilde u_k):\varepsilon(\cdot)\mathrm dx\mathrm dt = \underbrace{\int_I\langle g_N, \cdot \rangle_{H^{1/2}(\partial\Omega)} \mathrm dt - \iint \mathbb{C}(\rho_k, \sigma, b_k)\varepsilon(u_D):\varepsilon(\cdot)\mathrm dx\mathrm dt}_{\eqqcolon f_k \in L^2(I,H^1_D(\Omega))^*} \quad \text{in }L^2(I,H^1_D(\Omega))^*
\end{equation}
therefore have right-hand sides $f_k$ that can be interpreted as members of $C^0(I,H^1_D(\Omega)^*)$. This is due to the assumption $g_N \in C^0(I,H^{1/2}(\partial\Omega)^*)$ and a standard computation showing that the map
\begin{equation*}
t \mapsto \int_\Omega \mathbb{C}(\rho_k,\sigma, b_k)(t)\varepsilon(u_D(t)):\varepsilon(\cdot)\mathrm dx
\end{equation*}
is a member of $C^0(I,H^1_D(\Omega)^*)$. Hence Lemma \ref{lemma:bound_for_u_k} is applicable and shows that
\begin{equation*}
\lVert \tilde u_k \rVert_{C^0(I,H^1(\Omega))} \leq C\left( \lfloor \mathbb{C}(\rho_k,\sigma,b_k) \rfloor, C_{\text{Korn}} \right) \lVert f_k \rVert_{C^0(I,H^1_D(\Omega)^*)}.
\end{equation*}
As $\mathbb{C}$ is coercive and essentially bounded, this estimate can be made independent of $k\in \mathbb{N}$.
Clearly, we have not yet proven $(A2)$; we first continue with the other assumptions.
We now turn to $(A3)$, which is an application of Lemma \ref{lemma:bound_for_a_k}. The corresponding Gelfand triple is $(\operatorname{Id},H^1_D(\Omega),L^2(\Omega))$ and the operators $M_k$ are
\begin{equation*}
M_k:H^1_D(\Omega) \to H^1_D(\Omega)^*, \quad M_ka = \int_\Omega D(\rho_k)\nabla a\nabla\cdot + k_3a\cdot\mathrm dx.
\end{equation*}
The coercivity constant of $M_k$ can be estimated from below independently of $(\rho_k)$ by
\begin{equation*}
\lfloor M_k \rfloor \geq \min(\lfloor D(\rho_k) \rfloor, k_3).
\end{equation*}
On the other hand, the operator norm of $M_k$ can be estimated to
\begin{equation*}
\lVert M_k \rVert = \sup_{\lVert a \rVert \leq 1, \lVert \varphi \rVert \leq 1}\int_\Omega D(\rho_k)\nabla a \nabla \varphi + k_3 a\varphi\mathrm dx \leq \lVert D(\rho_k) \rVert_{L^\infty(\Omega,\mathcal{M}_s)} + k_3.
\end{equation*}
The right-hand sides of the equation are given by
\begin{equation*}
f_k = \iint \left( k_2S(\varepsilon(u_k)) c_k - k_3 \right) \cdot \mathrm dx \mathrm dt,
\end{equation*}
consequently their norm can be estimated
\begin{align*}
\lVert f_k \rVert_{L^2(I,H^1_D(\Omega)^*)}
& \leq
\lVert f_k \rVert_{L^2(I,L^2(\Omega)^*)}
\\ &\leq
\left[ \iint \left( k_2S(\varepsilon(u_k)) c_k - k_3 \right)^2\mathrm dx\mathrm dt \right]^{1/2}
\\ &\leq
k_2\lVert c_k \rVert_{C^0(I\times\Omega)}\lVert u_k \rVert_{L^2(I,H^1_D(\Omega))} + |I\times\Omega|^{1/2}k_3.
\end{align*}
This is uniformly bounded in $k\in \mathbb{N}$ by the bound on $(u_k)$ and the pointwise properties of $c_k$, i.e., $0\leq c_k \leq 1$. Thus we apply Lemma \ref{lemma:bound_for_a_k} to obtain
\begin{align*}
\lVert a_k \rVert_{H^1(I,H^1_D(\Omega), H^1_D(\Omega)^*)} &\leq C\left( \lVert M_k \rVert, \lfloor M_k\rfloor^{-1} \right)\cdot\left( \lVert a_k(0) \rVert_{L^2(\Omega)} + \lVert f_k \rVert_{L^2(I,H^1_D(\Omega)^*)} \right)
\\&\leq
C\left( \lVert D(\rho_k) \rVert_{L^\infty(\Omega,\mathcal{M}_s)}, \lfloor D(\rho_k) \rfloor^{-1}, k_3 \right)\cdot\left( \lVert a_k(0) \rVert_{L^2(\Omega)} + \lVert f_k \rVert_{L^2(I,H^1_D(\Omega)^*)} \right).
\end{align*}
By the ellipticity and boundedness of $D$, the constant initial conditions $\tilde a^i_k(0)\equiv 1$ and the estimate for $(u_k)$, we see that the sequences $(a^i_k)$ are bounded uniformly in $k\in\mathbb{N}$. Using the reflexivity of the Hilbert space $H^1(I,H^1_D(\Omega), H^1_D(\Omega)^*)$ to extract weakly convergent subsequences with limits $\tilde a_i^*$, we obtain $(A3)$.
We proceed with $(A4)$ and aim to apply Lemma \ref{lemma:bound_for_b_k} to the ODE
\begin{equation}\label{equation:repeat_bone_ode}
d_tb_k = k_4a^1_k\left( 1 + \frac{b_k}{1-\rho_k} \right), \quad b_k(0) = 0,
\end{equation}
with $X = C^\alpha(\Omega)$ and $p = 2$. To this end we rearrange \eqref{equation:repeat_bone_ode} to
\begin{equation*}
d_tb_k - \frac{k_4 a^1_k}{1-\rho_k}b_k = k_4a^1_k,
\end{equation*}
thus
\begin{gather*}
m_k = \frac{k_4 a^1_k}{1 - \rho_k}\quad \text{and}\quad f_k = k_4a^1_k
\end{gather*}
in the notation of Lemma \ref{lemma:bound_for_b_k}. Due to the embedding $H^2(\Omega)\hookrightarrow\hookrightarrow C^\alpha(\Omega)$ in three spatial dimensions, we get $\rho_k\in C^\alpha(\Omega)$ and also $(1 - \rho_k)^{-1}\in C^\alpha(\Omega)$ with a uniform bound in H\"older norm. Then, Lemma \ref{lemma:bound_for_b_k} guarantees that $(a^1_k)$ is bounded uniformly in $L^2(I,C^\alpha(\Omega))$ which implies such a bound for $(m_k)$ and $(f_k)$. We may therefore use Lemma \ref{lemma:bound_for_b_k} to obtain
\begin{equation*}
\lVert b_k \rVert_{W^{1,2}(I,C^\alpha(\Omega))} \leq C\left( I, C_{C^\alpha(\Omega)}, \lVert m_k \rVert_{L^2(I,C^\alpha(\Omega))} \right)\lVert f_k \rVert_{L^2(I,C^\alpha(\Omega))}
\end{equation*}
and guarantee that the bound is independent of $k\in \mathbb{N}$. Furthermore, we have the compact embedding
\begin{equation*}
W^{1,2}(I,C^\alpha(\Omega))\hookrightarrow\hookrightarrow C^0(I\times\Omega).
\end{equation*}
This yields the relative compactness of $(b_k)$ in $C^0(I\times\Omega)$ and thus the existence of $b^*\in C^0(I\times\Omega)$ and a (not relabeled) subsequence with
\begin{equation*}
b_k \to b^*.
\end{equation*}
To provide the existence of $c^*$ and a subsequence $c_k\to c^*$ in $C^0(I\times\Omega)$, we note that Lemma \ref{lemma:bound_for_c_k} provides a bound of the $W^{1,2}(I,C^\alpha(\Omega))$ norm of $c_k$ of the form
\begin{equation*}
\lVert c_k \rVert_{W^{1,2}(I,C^\alpha(\Omega))} \leq C \left( \lVert a^1_k \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert a^2_k \rVert_{L^2(I,C^\alpha(\Omega))}, \lVert \rho_k \rVert_{C^\alpha(\Omega)} \right)
\end{equation*}
with $C$ being increasing in its arguments. As we have proved suitable bounds for $(a^1_k)$, $(a^2_k)$ and $(\rho_k)$, this yields a $W^{1,2}(I, C^\alpha(\Omega))$ bound for $(c_k)$ that does not depend on $k\in\mathbb{N}$. Therefore, the sequence $(c_k)$ is relatively compact in $C^0(I\times\Omega)$ and $(A5)$ follows.
We are still left with showing the existence of a subsequence of $(u_k)$ and a function $u^*\in C^0(I,H^1(\Omega))$ such that
\begin{equation*}
u_k \to u^*\quad\text{in }C^0(I,H^1(\Omega)).
\end{equation*}
To this end, note that we have established that $(b_k)$ is relatively compact in $C^0(I,C^0(\Omega))$. Hence, applying the Arzel\`a-Ascoli Theorem, $(b_k)$ is equi-continuous as well. Now, going back to \eqref{equation:equation_for_u_0_k} we can easily compute that the sequence $(f_k)$ is indeed equi-continuous in $C^0(I,H^1_D(\Omega)^*)$. Again, this is essentially due to the equi-continuity of $(b_k)$ in the space $C^0(I,C^0(\Omega))$. Hence, applying Lemma \ref{lemma:equi_continuity_of_uk} yields the equi-continuity of $(u_k)$ in $C^0(I,H^1(\Omega))$. In view of the Arzel\`a-Ascoli Theorem we still need to show that the sets
\begin{equation*}
\{ u_k(t) \in H^1(\Omega)\mid k\in\mathbb{N} \}
\end{equation*}
are relatively compact in $H^1(\Omega)$. This can be established by looking at the equation satisfied by $u_k(t)$ for every fixed $t\in I$ and applying Lemma \ref{lemma:higher_regularity_elliptic_equation}. Indeed, $u_k(t) =\tilde u_k(t) + u_D(t)$ satisfies
\begin{equation*}
\int_\Omega \mathbb{C}(\rho_k,\sigma,b_k)(t)\varepsilon(\tilde u_k(t)):\varepsilon(\cdot)\mathrm dx = \int_{\partial\Omega}g_N(t)\cdot\mathrm ds - \int_\Omega\mathbb{C}(\rho_k,\sigma,b_k)(t)\varepsilon(u_D(t)):\varepsilon(\cdot)\mathrm dx.
\end{equation*}
The assumptions on $g_N$ and $u_D$ guarantee that the right-hand side lies in $H^{1-\theta}(\Omega)^*$, and the H\"older regularity established for $(b_k)$, i.e., $(b_k(t))\subset C^\alpha(\Omega)$, allows us to deduce the $H^{1+\theta}(\Omega)$ regularity of $u_k(t)$. Additionally, the uniform $W^{1,2}(I,C^\alpha(\Omega))$ bound for the sequence $(b_k)$ established before yields a bound for the $C^\alpha(\Omega)$ norms of $(b_k(t))$ and hence also for the coefficients of $\mathbb{C}(\rho_k,\sigma,b_k)$ via assumption \eqref{equation:holder_regulariy_of_coefficients}. Collecting these bounds in fact implies that \begin{equation*}
\sup_{k\in \mathbb{N}}\lVert u_k(t) \rVert_{H^{1+\theta}(\Omega)} \leq C
\end{equation*}
and invoking the compactness result of Rellich which states that
\begin{equation*}
H^{1+\theta}(\Omega)\hookrightarrow\hookrightarrow H^1(\Omega)
\end{equation*}
we obtain the missing ingredient needed to apply the Arzel\`a-Ascoli Theorem to $(u_k)$ in the space $C^0(I,H^1(\Omega))$. This eventually guarantees the validity of assumption $(A2)$. \end{proof} We can now conclude the section by providing the proofs of Theorem \ref{theorem:optimal_control_approx_objective} and Corollary \ref{corollary:optimal_control_numerical_penalization}. \begin{proof}[Proof of Theorem \ref{theorem:optimal_control_approx_objective}]
Let $(\rho_k)\subset P$ be a minimizing sequence for $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)}$. Lemma \ref{lemma:providing_assumptions} shows that the assumptions $(A1)- (A5)$ hold and Proposition \ref{proposition:proof_under_assumptions} shows that this leads to the existence of an accumulation point $\rho^*\in P$ of the sequence $(\rho_k)$ which is a minimizer of $\hat J + \eta\lVert \cdot \rVert_{H^2(\Omega)}^2$. \end{proof} \begin{proof}[Proof of Corollary \ref{corollary:optimal_control_numerical_penalization}]
Let $(\rho_k)\subset H^2(\Omega)$ be a minimizing sequence for $\hat J + \eta\lVert\cdot\rVert^2_{H^2(\Omega)} + \mathcal{K}$. Revisiting the proof of Proposition \ref{proposition:proof_under_assumptions} shows that the additional term $\mathcal{K}$ does not lead to complications in the lower semicontinuity as it is assumed to be continuous on $C^0(I\times\Omega)$, i.e., a compact perturbation. Furthermore, as we assumed that $\mathcal{K}$ takes non-negative values only, also the coercivity of the objective function is not violated through the addition of $\mathcal{K}$. \end{proof}
\end{document}
What is the greatest product obtainable from two integers whose sum is 1998?
Let the two integers be $x$ and $1998-x$. The product which needs to be maximized is $1998x-x^2$. Completing the square results in $-(x-999)^2+999^2$. Since $-(x-999)^2\le 0$, the expression is maximized when $x=999$, which results in a value of $999^2=\boxed{998001}$.
# The fundamental theorem of calculus
The fundamental theorem of calculus is a powerful result that connects integration and differentiation. It states that if a function $f$ is continuous on an interval $[a, b]$, then the function $F(x) = \int_a^x f(t) dt$ is an antiderivative of $f$ on that interval, that is, $F'(x) = f(x)$ for all $x$ in $[a, b]$. Consequently, $\int_a^b f(x) dx = F(b) - F(a)$ for any antiderivative $F$ of $f$.
To prove the fundamental theorem, we use the concept of a Riemann sum. A Riemann sum approximates the definite integral of a function by summing the areas of rectangles under the curve of the function.
The fundamental theorem can be used to find an antiderivative of a function $f$ by integrating $f$ with respect to $x$ and then fixing the constant of integration, for example with the condition $F(a) = 0$.
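As a minimal sketch, this antiderivative computation can also be checked in MATLAB, assuming the Symbolic Math Toolbox is available (the integrand $x^2$ and the choice $a = 0$ are example values):

```matlab
% Symbolic antiderivative of f(x) = x^2 (requires the Symbolic Math Toolbox).
syms x
f = x^2;
F = int(f, x);           % returns x^3/3 (the constant of integration is omitted)
% Fix the constant so that F(a) = 0, here with a = 0.
a = 0;
F0 = F - subs(F, x, a);  % still x^3/3, since F(0) = 0
disp(F0)
```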
## Exercise
Find the antiderivative of the function $f(x) = x^2$ using the fundamental theorem of calculus.
# Integration by parts formula and its properties
Integration by parts is a formula that allows us to integrate certain functions more easily. The formula is given by:
$$\int u dv = uv - \int v du$$
where $u$ and $v$ are functions of $x$.
To use the integration by parts formula, we first need to split the integrand into two factors $u$ and $dv$, where $u$ should become simpler when differentiated and $dv$ should be easy to integrate; then $du = u' \, dx$ and $v = \int dv$. With this choice, we can rewrite the integral as:
$$\int u dv = uv - \int v du$$
which simplifies the integration process.
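For example, to evaluate $\int x e^x dx$, choose $u = x$ and $dv = e^x dx$, so that $du = dx$ and $v = e^x$. Then

$$\int x e^x dx = x e^x - \int e^x dx = (x - 1) e^x + C.$$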
## Exercise
Use the integration by parts formula to evaluate the integral $\int x^2 e^x dx$.
# Introduction to MATLAB for numerical computations
MATLAB is a powerful programming environment for numerical computations. It is widely used in scientific and engineering fields for tasks such as data analysis, numerical integration, and solving differential equations.
To use MATLAB for numerical computations, we need to write a script that defines the functions, defines the intervals for integration, and calculates the numerical results.
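As a minimal sketch (the integrand, interval, and number of sample points below are example choices), such a script can use the built-in `trapz` function:

```matlab
% Numerically integrate f(x) = x.^2 over [0, 1] from sampled values
% using MATLAB's built-in trapz function.
f = @(x) x.^2;            % define the function as a handle
a = 0; b = 1;             % define the interval of integration
x = linspace(a, b, 101);  % sample points
y = f(x);                 % function values at the sample points
I = trapz(x, y);          % numerical result (the exact value is 1/3)
fprintf('Approximate integral: %.6f\n', I);
```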
# Numerical integration methods: trapezoidal rule and Simpson's rule
The trapezoidal rule is a numerical integration method that approximates the definite integral of a function by dividing the interval into equal-sized subintervals and calculating the area under the curve using trapezoids.
The composite trapezoidal rule with $n$ subintervals is given by:

$$\int_a^b f(x) dx \approx \Delta x \left[ \frac{1}{2} \left( f(a) + f(b) \right) + \sum_{i=1}^{n-1} f(x_i) \right]$$
where $x_i = a + i\Delta x$ and $\Delta x = \frac{b - a}{n}$.
Simpson's rule is another numerical integration method that approximates the definite integral of a function by dividing the interval into equal-sized subintervals and calculating the area under the curve using parabolic segments.
For a single application of Simpson's rule over $[a, b]$, using the midpoint as the third node, the formula is given by:

$$\int_a^b f(x) dx \approx \frac{b - a}{6} \left[ f(a) + 4f\left( \frac{a + b}{2} \right) + f(b) \right]$$
# Implementing trapezoidal and Simpson's rules in MATLAB
To implement the trapezoidal rule in MATLAB, we can write a script that defines the function, defines the interval for integration, and calculates the numerical result using a loop or, more idiomatically, vectorized operations.
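A minimal sketch of such a script might look like this (the integrand, interval, and $n$ are example choices):

```matlab
% Composite trapezoidal rule for the integral of x.^2 over [0, 1].
f = @(x) x.^2;               % integrand
a = 0; b = 1;                % integration limits
n = 100;                     % number of subintervals
dx = (b - a) / n;
x = linspace(a, b, n + 1);   % n+1 equally spaced nodes
% Endpoints get weight 1/2, interior nodes get weight 1.
T = dx * (0.5*f(x(1)) + sum(f(x(2:end-1))) + 0.5*f(x(end)));
fprintf('Trapezoidal approximation: %.6f (exact value 1/3)\n', T);
```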
To implement Simpson's rule in MATLAB, we can write a script that defines the function, defines the interval for integration, and calculates the numerical result in the same way; for the composite rule, the number of subintervals must be even.
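A corresponding sketch for the composite Simpson's rule might look like this (again, the integrand, interval, and the even value of $n$ are example choices):

```matlab
% Composite Simpson's rule for the integral of x.^2 over [0, 1].
f = @(x) x.^2;
a = 0; b = 1;
n = 100;                          % number of subintervals (must be even)
dx = (b - a) / n;
x = linspace(a, b, n + 1);
y = f(x);
% Weight pattern 1, 4, 2, 4, ..., 2, 4, 1 scaled by dx/3.
S = (dx/3) * (y(1) + 4*sum(y(2:2:end-1)) + 2*sum(y(3:2:end-2)) + y(end));
fprintf('Simpson approximation: %.10f (exact value 1/3)\n', S);
```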
## Exercise
Write a MATLAB script that implements the trapezoidal rule to approximate the integral $\int_0^1 x^2 dx$ using $n = 100$ subintervals.
## Exercise
Write a MATLAB script that implements Simpson's rule to approximate the integral $\int_0^1 x^2 dx$ using $n = 100$ subintervals.
# Comparison of the methods and their error bounds
To compare the accuracy of the trapezoidal and Simpson's rules, we can calculate the error bounds for each method.
The error bound for the trapezoidal rule is given by:
$$\Delta \int_a^b f(x) dx \le \frac{(b - a)^3}{12n^2} \max_{x \in [a, b]} |f''(x)|$$
The error bound for Simpson's rule is given by:
$$\Delta \int_a^b f(x) dx \le \frac{(b - a)^5}{180n^4} \max_{x \in [a, b]} |f^{(4)}(x)|$$
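For example, for $f(x) = x^2$ on $[0, 1]$ we have $\max_{x \in [0, 1]} |f''(x)| = 2$ and $f^{(4)}(x) = 0$, so the trapezoidal error is at most $\frac{2}{12n^2} = \frac{1}{6n^2}$, while Simpson's rule is exact for this integrand.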
## Exercise
Calculate the error bounds for the trapezoidal and Simpson's rules for the integral $\int_0^1 x^2 dx$ using $n = 100$.
# Handling improper integrals and singularities
When integrating functions with singularities or improper integrals, we need to be careful to handle these cases correctly.
To handle singularities, we can use techniques such as integration by substitution or integration by parts.
To handle an improper integral whose integrand has a singularity at an interior point $c$, we can split the integral into two parts: one from $a$ to $c$ and another from $c$ to $b$, and treat each part as a limit. Improper integrals over infinite intervals are handled similarly by taking a limit of the upper (or lower) integration bound.
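As a minimal sketch, MATLAB's built-in adaptive quadrature routine `integral` can evaluate the integral in the exercise below (note that $1/x$ has no singularity inside $[1, 2]$):

```matlab
% Evaluate the integral of 1/x from 1 to 2 with adaptive quadrature.
f = @(x) 1 ./ x;
I = integral(f, 1, 2);
fprintf('Integral of 1/x on [1, 2]: %.10f (exact value log(2) = %.10f)\n', I, log(2));
```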
## Exercise
Integrate the function $f(x) = \frac{1}{x}$ from $1$ to $2$ using MATLAB.
# Applications of numerical integration in real-world problems
Numerical integration is widely used in real-world problems such as engineering, physics, and economics.
For example, numerical integration can be used to calculate the area under a curve in engineering applications, the energy consumption of a device in physics, and the present value of a future cash flow in economics.
## Exercise
Calculate the area under the curve of the function $f(x) = x^2$ from $0$ to $1$ using MATLAB.
# Solving ordinary differential equations using numerical integration
Numerical integration can also be used to solve ordinary differential equations (ODEs).
To solve an ODE using numerical integration, we can use methods such as the Euler method, the Runge-Kutta method, or the Adams-Bashforth method.
In MATLAB, we can use the built-in function `ode45` to solve ODEs numerically.
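A minimal sketch for the exercise below might look like this, writing the independent variable as $t$ (the time interval $[0, 2]$ is an example choice):

```matlab
% Solve y' = -2*t*y with y(0) = 1 on [0, 2] using ode45.
odefun = @(t, y) -2 * t .* y;        % right-hand side of the ODE
[t, y] = ode45(odefun, [0 2], 1);    % numerical solution
plot(t, y, 'o', t, exp(-t.^2), '-')  % compare with the exact solution exp(-t^2)
legend('ode45 solution', 'exact solution y = exp(-t^2)')
xlabel('t'); ylabel('y(t)')
```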
## Exercise
Solve the ODE $y' = -2xy$ with initial condition $y(0) = 1$ using MATLAB.
# Error estimation and convergence analysis for numerical integration methods
To analyze the error of numerical integration methods, we can use techniques such as error estimation and convergence analysis.
Error estimation involves calculating the error bound for each method and comparing it to the actual error.
Convergence analysis involves studying the behavior of the numerical results as the number of subintervals increases.
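A minimal sketch of such a convergence study for the composite trapezoidal rule might look like this (the integrand and the list of $n$ values are example choices); the error should shrink roughly by a factor of $4$ each time $n$ doubles, consistent with the $O(1/n^2)$ error bound:

```matlab
% Convergence study of the composite trapezoidal rule for x.^2 on [0, 1].
f = @(x) x.^2;
a = 0; b = 1; exact = 1/3;
for n = [10 20 40 80 160]
    dx = (b - a) / n;
    x = linspace(a, b, n + 1);
    T = dx * (0.5*f(x(1)) + sum(f(x(2:end-1))) + 0.5*f(x(end)));
    fprintf('n = %4d   error = %.3e\n', n, abs(T - exact));
end
```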
## Exercise
Analyze the convergence of the trapezoidal and Simpson's rules for the integral $\int_0^1 x^2 dx$ as the number of subintervals increases.
## Exercise
Calculate the error of the trapezoidal and Simpson's rules for the integral $\int_0^1 x^2 dx$ using $n = 100$ subintervals.
\begin{document}
\title{The Largest Laplacian and Signless Laplacian H-Eigenvalues of a Uniform Hypergraph}
\author{ Shenglong Hu \thanks{Email: [email protected]. Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.},\hspace{4mm} Liqun Qi \thanks{Email: [email protected]. Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong. This author's work was supported by the Hong Kong Research Grant Council (Grant No. PolyU 501909, 502510, 502111 and 501212).}, \hspace{4mm} Jinshan Xie \thanks{Email: [email protected]. School of Mathematics and Computer Science, Longyan University, Longyan, Fujian, China.}}
\date{\today} \maketitle
\begin{abstract} In this paper, we show that the largest Laplacian H-eigenvalue of a $k$-uniform nontrivial hypergraph is strictly larger than the maximum degree when $k$ is even. A tight lower bound for this eigenvalue is given. For a connected even-uniform hypergraph, this lower bound is achieved if and only if it is a hyperstar. However, when $k$ is odd, in certain cases the largest Laplacian H-eigenvalue is equal to the maximum degree, which is a tight lower bound. On the other hand, tight upper and lower bounds for the largest signless Laplacian H-eigenvalue of a $k$-uniform connected hypergraph are given. For a connected $k$-uniform hypergraph, the upper (respectively lower) bound of the largest signless Laplacian H-eigenvalue is achieved if and only if it is a complete hypergraph (respectively a hyperstar). The largest Laplacian H-eigenvalue is always less than or equal to the largest signless Laplacian H-eigenvalue. When the hypergraph is connected, the equality holds here if and only if $k$ is even and the hypergraph is odd-bipartite.
\noindent {\bf Key words:}\hspace{2mm} Tensor, H-eigenvalue, hypergraph, Laplacian, signless Laplacian
\noindent {\bf MSC (2010):}\hspace{2mm} 05C65; 15A18 \end{abstract}
\section{Introduction} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} In this paper, we study the largest Laplacian and signless Laplacian H-eigenvalues of a uniform hypergraph. The largest Laplacian and signless Laplacian H-eigenvalues refer to respectively the largest H-eigenvalue of the Laplacian tensor and the largest H-eigenvalue of the signless Laplacian tensor. This work is motivated by the classic results for graphs \cite{c97,bh11,crs07,zl02,z07}. Please refer to \cite{l07,hq12a,cd12,q12a,pz12,lqy12,rp09,r09,xc12c,hq13,cpz08,hhlq12,l05,q05,q12b,yy11,xc12a} for recent developments on spectral hypergraph theory and the essential tools from spectral theory of nonnegative tensors.
This work is a companion to the recent study on the eigenvectors of the zero Laplacian and signless Laplacian eigenvalues of a uniform hypergraph by Hu and Qi \cite{hq13b}. For the literature on the Laplacian-type tensors for a uniform hypergraph, which has become an active research frontier in spectral hypergraph theory, please refer to \cite{hq12a,lqy12,xc12c,q12a,hq13,xc12a,hq13b} and references therein. Among others, Qi \cite{q12a}, and Hu and Qi \cite{hq13} respectively systematically studied the Laplacian and signless Laplacian tensors, and the Laplacian of a uniform hypergraph. These three notions of Laplacian-type tensors are more natural and simpler than those in the literature.
The rest of this paper is organized as follows. Some definitions on eigenvalues of tensors and uniform hypergraphs are presented in the next section. The class of hyperstars is introduced. We discuss in Section 3 the largest Laplacian H-eigenvalue of a $k$-uniform hypergraph. We show that when $k$ is even, the largest Laplacian H-eigenvalue has a tight lower bound that is strictly larger than the maximum degree. Extreme hypergraphs in this case are characterized, which are the hyperstars. When $k$ is odd, a tight lower bound is exactly the maximum degree. However, we are not able to characterize the extreme hypergraphs in this case. Then we discuss the largest signless Laplacian H-eigenvalue in Section 4. Tight lower and upper bounds for the largest signless Laplacian H-eigenvalue of a connected hypergraph are given. Extreme hypergraphs are characterized as well. For the lower bound, the extreme hypergraphs are hyperstars; and for the upper bound, the extreme hypergraphs are complete hypergraphs. The relationship between the largest Laplacian H-eigenvalue and the largest signless Laplacian H-eigenvalue is discussed in Section 5. The largest Laplacian H-eigenvalue is always less than or equal to the largest signless Laplacian H-eigenvalue. When the hypergraph is connected, the equality holds here if and only if $k$ is even and the hypergraph is odd-bipartite. This result can help to find the largest Laplacian H-eigenvalue of an even-uniform hypercycle. Some final remarks are made in the last section.
\section{Preliminaries}\label{sec-p} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} Some definitions of eigenvalues of tensors and uniform hypergraphs are presented in this section.
\subsection{Eigenvalues of Tensors}\label{s-et} In this subsection, some basic definitions on eigenvalues of tensors are reviewed. For comprehensive references, see \cite{q05,hhlq12} and references therein. Especially, for spectral hypergraph theory oriented facts on eigenvalues of tensors, please see \cite{q12a,hq13}.
Let $\mathbb R$ be the field of real numbers and $\mathbb R^n$ the $n$-dimensional real space. $\mathbb R^n_+$ denotes the nonnegative orthant of $\mathbb R^n$. For integers $k\geq 3$ and $n\geq 2$, a real tensor $\mathcal T=(t_{i_1\ldots i_k})$ of order $k$ and dimension $n$ refers to a multiway array (also called hypermatrix) with entries $t_{i_1\ldots i_k}$ such that $t_{i_1\ldots i_k}\in\mathbb{R}$ for all $i_j\in[n]:=\{1,\ldots,n\}$ and $j\in[k]$. In this paper, tensors always refer to $k$-th order real tensors, and the dimensions will be clear from the context. Given a vector $\mathbf{x}\in \mathbb{R}^{n}$, ${\cal T}\mathbf{x}^{k-1}$ is defined as the $n$-dimensional vector whose $i$-th element is $\sum\limits_{i_2,\ldots,i_k\in[n]}t_{ii_2\ldots i_k}x_{i_2}\cdots x_{i_k}$ for all $i\in[n]$. Let ${\cal I}$ be the identity tensor of appropriate dimension, i.e., when the dimension is $n$, $i_{i_1\ldots i_k}=1$ if and only if $i_1=\cdots=i_k\in [n]$, and $i_{i_1\ldots i_k}=0$ otherwise. The following definition was introduced by Qi \cite{q05}.
\begin{Definition}\label{def-00} Let $\mathcal T$ be a $k$-th order $n$-dimensional real tensor. For some $\lambda\in\mathbb{R}$, if the polynomial system $\left(\lambda {\cal I}-{\cal T}\right)\mathbf{x}^{k-1}=0$ has a solution $\mathbf{x}\in\mathbb{R}^n\setminus\{0\}$, then $\lambda$ is called an H-eigenvalue and $\mathbf x$ an H-eigenvector. \end{Definition} It is seen that H-eigenvalues are real numbers \cite{q05}. By \cite{hhlq12,q05}, we have that the number of H-eigenvalues of a real tensor is finite. By \cite{q12a}, we have that all the tensors considered in this paper have at least one H-eigenvalue. Hence, we can denote by $\lambda(\mathcal T)$ (respectively $\mu(\mathcal T)$) the largest (respectively smallest) H-eigenvalue of a real tensor $\mathcal T$.
For a subset $S\subseteq [n]$, we denote by $|S|$ its cardinality. For a vector $\mathbf x\in\mathbb R^n$, we denote by $\mbox{sup}(\mathbf x):=\{i\in[n]\;|\;x_i\neq 0\}$ its {\em support}.
\subsection{Uniform Hypergraphs}
In this subsection, we present some essential concepts of uniform hypergraphs which will be used in the sequel. Please refer to \cite{b73,c97,bh11,hq13,q12a} for comprehensive references.
In this paper, unless stated otherwise, a hypergraph means an undirected simple $k$-uniform hypergraph $G$ with vertex set $V$, which is labeled as $[n]=\{1,\ldots,n\}$, and edge set $E$. By $k$-uniformity, we mean that for every edge $e\in E$, the cardinality $|e|$ of $e$ is equal to $k$. Throughout this paper, $k\geq 3$ and $n\geq k$. Moreover, since the trivial hypergraph (i.e., $E=\emptyset$) is of less interest, we consider only hypergraphs having at least one edge (i.e., nontrivial) in this paper.
For a subset $S\subset [n]$, we denote by $E_S$ the set of edges $\{e\in E\;|\;S\cap e\neq\emptyset\}$. For a vertex $i\in V$, we simplify $E_{\{i\}}$ as $E_i$. It is the set of edges containing the vertex $i$, i.e., $E_i:=\{e\in E\;|\;i\in e\}$. The cardinality $|E_i|$ of the set $E_i$ is defined as the {\em degree} of the vertex $i$, which is denoted by $d_i$. Two different vertices $i$ and $j$ are {\em connected} to each other (or the pair $i$ and $j$ is connected), if there is a sequence of edges $(e_1,\ldots,e_m)$ such that $i\in e_1$, $j\in e_m$ and $e_r\cap e_{r+1}\neq\emptyset$ for all $r\in[m-1]$. A hypergraph is called {\em connected} if every pair of different vertices of $G$ is connected. For $S\subseteq V$, the hypergraph with vertex set $S$ and edge set $\{e\in E\;|\;e\subseteq S\}$ is called the {\em sub-hypergraph} of $G$ induced by $S$. We will denote it by $G_S$. A hypergraph is {\em regular} if $d_1=\cdots=d_n=d$. A hypergraph $G=(V,E)$ is {\em complete} if $E$ consists of all the possible edges. In this case, $G$ is regular, and moreover $d_1=\cdots=d_n=d={n-1\choose k-1}$. In the sequel, unless stated otherwise, all the notations introduced above are reserved for these specific meanings.
For the sake of simplicity, we mainly consider connected hypergraphs in the subsequent analysis. By the techniques in \cite{q12a,hq13}, the conclusions on connected hypergraphs can be easily generalized to general hypergraphs.
The following definition for the Laplacian tensor and signless Laplacian tensor was proposed by Qi \cite{q12a}.
\begin{Definition}\label{def-l} Let $G=(V,E)$ be a $k$-uniform hypergraph. The {\em adjacency tensor} of $G$ is defined as the $k$-th order $n$-dimensional tensor $\mathcal A$ whose $(i_1 \ldots i_k)$-entry is: \begin{eqnarray*} a_{i_1 \ldots i_k}:=\left\{\begin{array}{cl}\frac{1}{(k-1)!}&if\;\{i_1,\ldots,i_k\}\in E,\\0&\mbox{otherwise}.\end{array}\right. \end{eqnarray*} Let $\mathcal D$ be a $k$-th order $n$-dimensional diagonal tensor with its diagonal element $d_{i\ldots i}$ being $d_i$, the degree of vertex $i$, for all $i\in [n]$. Then $\mathcal L:=\mathcal D-\mathcal A$ is the {\em Laplacian tensor} of the hypergraph $G$, and $\mathcal Q:=\mathcal D+\mathcal A$ is the {\em signless Laplacian tensor} of the hypergraph $G$. \end{Definition}
In the following, we introduce the class of hyperstars.
\begin{Definition}\label{def-bi-hm}
Let $G=(V,E)$ be a $k$-uniform hypergraph. If there is a disjoint partition of the vertex set $V$ as $V=V_0\cup V_1\cup\cdots\cup V_d$ such that $|V_0|=1$ and $|V_1|=\cdots=|V_d|=k-1$, and $E=\{V_0\cup V_i\;|\;i\in [d]\}$, then $G$ is called a {\em hyperstar}. The degree $d$ of the vertex in $V_0$, which is called the {\em heart}, is the {\em size} of the hyperstar. The edges of $G$ are {\em leaves}, and the vertices other than the heart are vertices of leaves. \end{Definition} It is an obvious fact that, with a possible renumbering of the vertices, all the hyperstars with the same size are identical. Moreover, by Definition \ref{def-00}, we see that the process of renumbering does not change the H-eigenvalues of either the Laplacian tensor or the signless Laplacian tensor of the hyperstar. The trivial hyperstar is the one-edge hypergraph; its spectrum is well understood \cite{cd12}. In the sequel, unless stated otherwise, a hyperstar refers to a hyperstar of size $d>1$. For a vertex $i$ other than the heart, the leaf containing $i$ is denoted by $le(i)$. An example of a hyperstar is given in Figure 1.
\begin{figure}
\caption{An example of a $3$-uniform hyperstar of size $3$. An edge is pictured as a closed curve with the containing solid disks the vertices in that edge. Different edges are in different curves with different colors. The red (also in dashed margin) disk represents the heart.}
\end{figure}
The notions of odd-bipartite and even-bipartite even-uniform hypergraphs are introduced in \cite{hq13b}.
\begin{Definition}\label{def-bi-odd} Let $k$ be even and $G=(V,E)$ be a $k$-uniform hypergraph. It is called {\em odd-bipartite} if either it is trivial (i.e., $E=\emptyset$) or there is a disjoint partition of the vertex set $V$ as $V=V_1\cup V_2$ such that $V_1,V_2\neq \emptyset$ and every edge in $E$ intersects $V_1$ with exactly an odd number of vertices. \end{Definition} An example of an odd-bipartite hypergraph is given in Figure 2.
\begin{figure}
\caption{An example of an odd-bipartite $4$-uniform hypergraph. The bipartition is clear from the different colors (also the dashed margins from the solid ones) of the disks.}
\end{figure}
\section{The Largest Laplacian H-Eigenvalue} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} This section presents some basic facts about the largest Laplacian H-eigenvalue of a uniform hypergraph. We start the discussion on the class of hyperstars.
\subsection{Hyperstars} Some properties of hyperstars are given in this subsection.
The next proposition is a direct consequence of Definition \ref{def-bi-hm}.
\begin{Proposition}\label{prop-0} Let $G=(V,E)$ be a hyperstar of size $d>0$. Then except for one vertex $i\in [n]$ with $d_i=d$, we have $d_j=1$ for the others. \end{Proposition}
By Theorem 4 of \cite{q12a}, we have the following lemma.
\begin{Lemma}\label{lem-00} Let $G=(V,E)$ be a $k$-uniform hypergraph with its maximum degree $d>0$ and $\mathcal L=\mathcal D-\mathcal A$ be its Laplacian tensor. Then $\lambda (\mathcal L)\geq d$. \end{Lemma}
When $k$ is even and $G$ is a hyperstar, Lemma \ref{lem-00} can be strengthened as in the next proposition.
\begin{Proposition}\label{prop-1} Let $k$ be even and $G=(V,E)$ be a hyperstar of size $d>0$ and $\mathcal L=\mathcal D-\mathcal A$ be its Laplacian tensor. Then $\lambda (\mathcal L)>d$. \end{Proposition}
\noindent {\bf Proof.} Suppose, without loss of generality, that $d_1=d$. Let $\mathbf x\in\mathbb R^n$ be a nonzero vector such that $x_1=\alpha\in\mathbb R$, and $x_2=\cdots=x_n=1$. Then, we see that \begin{eqnarray*} \left(\mathcal L\mathbf x^{k-1}\right)_1=d \alpha^{k-1}-d, \end{eqnarray*} and for $i\in\{2,\ldots,n\}$ \begin{eqnarray*} \left(\mathcal L\mathbf x^{k-1}\right)_i=1-\alpha. \end{eqnarray*} Thus, if $\mathbf x$ is an H-eigenvector of $\mathcal L$ corresponding to an H-eigenvalue $\lambda$, then we must have \begin{eqnarray*} d \alpha^{k-1}-d=\lambda\alpha^{k-1},\;\mbox{and}\; 1-\alpha=\lambda. \end{eqnarray*} Hence, \begin{eqnarray*} (1-\lambda)^{k-1}(\lambda-d)+d=0. \end{eqnarray*} Let $f(\lambda):=(1-\lambda)^{k-1}(\lambda-d)+d$. We have that \begin{eqnarray*} f(d)=d>0, \;\mbox{and}\;f(d+1)=(-d)^{k-1}+d<0. \end{eqnarray*} Consequently, $f(\lambda)=0$ does have a root in the interval $(d,d+1)$. Hence $\mathcal L$ has an H-eigenvalue $\lambda>d$. The result follows. \ep
The next lemma characterizes H-eigenvectors of the Laplacian tensor of a hyperstar corresponding to an H-eigenvalue other than one.
\begin{Lemma}\label{lem-0} Let $G=(V,E)$ be a hyperstar of size $d>0$ and $\mathbf x\in\mathbb R^n$ be an H-eigenvector of the Laplacian tensor of $G$ corresponding to a nonzero H-eigenvalue other than one. If $x_i=0$ for some vertex of a leaf (other than the heart), then $x_j=0$ for all the vertices $j$ in the leaf containing $i$ and other than the heart. Moreover, in this situation, if $h$ is the heart, then $x_h\neq 0$. \end{Lemma}
\noindent {\bf Proof.} Suppose that the H-eigenvalue is $\lambda\neq 1$. By the definition of eigenvalues, we have that for the vertex $j$ other than the heart and the vertex $i$, \begin{eqnarray*} \left(\mathcal L\mathbf x^{k-1}\right)_j=x_j^{k-1}-\prod_{s\in le(j)\setminus\{j\}} x_s=x_j^{k-1}-0=\lambda x_j^{k-1}. \end{eqnarray*} Since $\lambda\neq 1$, we must have that $x_j=0$.
With a similar proof, we get the other conclusion by contradiction, since $h\in le(i)$ for all vertices $i$ of leaves and $\mathbf x\neq 0$. \ep
The next lemma characterizes the H-eigenvectors of the Laplacian tensor of a hyperstar corresponding to the largest Laplacian H-eigenvalue.
\begin{Lemma}\label{lem-1}
Let $G=(V,E)$ be a hyperstar of size $d>1$. Then there is an H-eigenvector $\mathbf z\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$ satisfying that $|z_i|$ is a constant for $i\in\mbox{sup}(\mathbf z)$ and $i$ being not the heart. \end{Lemma}
\noindent {\bf Proof.} Suppose that $\mathbf y\in\mathbb R^{n}$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$. Without loss of generality, let $1$ be the heart and hence $d_1=d$. Note that, by Lemma \ref{lem-00}, we have that $\lambda(\mathcal L)\geq d>1$. By Lemma \ref{lem-0}, without loss of generality, we can assume that $\mbox{sup}(\mathbf y)=[n]$ and $y_1>0$. In the following, we construct an H-eigenvector $\mathbf z\in\mathbb R^{n}$ corresponding to $\lambda(\mathcal L)$ from $\mathbf y$ such that
$|z_2|=\cdots=|z_{n}|$.
(I). We first prove that for every leaf $e\in E$, $|y_t|$ is a constant for all $t\in e\setminus\{1\}$.
For an arbitrary but fixed leaf $e\in E$, suppose that $|y_i|=\max\{|y_j|\;|\;j\in e\setminus\{1\}\}$ and $|y_s|=\min\{|y_j|\;|\;j\in e\setminus\{1\}\}$. If $|y_i|=|y_s|$, then we are done. In the following, suppose on the contrary that $|y_i|>|y_s|$. Then, we have \begin{eqnarray*}
(\lambda(\mathcal L)-1)|y_i|^{k-1}=y_1\prod_{j\in e\setminus\{1,i\}}|y_j|, \;\mbox{and}\;(\lambda(\mathcal L)-1)|y_s|^{k-1}=y_1\prod_{j\in e\setminus\{1,s\}}|y_j|. \end{eqnarray*}
By the definitions of $|y_i|$ and $|y_s|$, we have $y_1\prod_{j\in e\setminus\{1,i\}}|y_j|< y_1\prod_{j\in e\setminus\{1,s\}}|y_j|$. On the other hand, we have $(\lambda(\mathcal L)-1)|y_i|^{k-1}>(\lambda(\mathcal L)-1)|y_s|^{k-1}$. Hence, a contradiction is derived. Consequently, for every leaf $e\in E$, $|y_t|$ is a constant for all $t\in e\setminus\{1\}$.
(II). We next show that all the numbers in this set \begin{eqnarray*} \left\{\alpha_s:=\prod_{j\in e_s\setminus\{1\}}y_j,\;e_s\in E\right\} \end{eqnarray*} are of the same sign.
When $k$ is even, suppose that $y_i<0$ for some $i$. Then \begin{eqnarray}\label{new-4} 0>(\lambda(\mathcal L)-1)y_i^{k-1}=-y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j. \end{eqnarray} Thus, an odd number of vertices in $le(i)$ takes negative values. By \reff{new-4}, we must have that there exists some $i\in e$ such that $y_i<0$ for every $e\in E$. Otherwise, $(\lambda(\mathcal L)-1)y_i^{k-1}>0$, together with $-y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j<0$, would lead to a contradiction. Hence, all the numbers in this set \begin{eqnarray*} \left\{\alpha_s:=\prod_{j\in e_s\setminus\{1\}}y_j,\;e_s\in E\right\} \end{eqnarray*} are negative.
When $k$ is odd, suppose that $y_i<0$ for some $i$. Then \begin{eqnarray}\label{new-3} 0<(\lambda(\mathcal L)-1)y_i^{k-1}=-y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j. \end{eqnarray} Thus, a positive even number of vertices in $le(i)$ takes negative values. Thus, if there is some $s\in le(i)$ such that $y_s>0$, then \begin{eqnarray*} 0<(\lambda(\mathcal L)-1)y_s^{k-1}=-y_1\prod_{j\in le(s)\setminus\{1,s\}}y_j. \end{eqnarray*} Since $s\in le(i)$, we have $le(i)=le(s)$ and $i\in le(s)$. Hence, $y_1\prod_{j\in le(s)\setminus\{1,s\}}y_j>0$. A contradiction is derived. By \reff{new-3}, we must have that there exists some $i\in e$ such that $y_i<0$ for every $e\in E$. Consequently, $y_j<0$ for all $j\neq 1$. Hence, all the numbers in this set \begin{eqnarray*} \left\{\alpha_s:=\prod_{j\in e_s\setminus\{1\}}y_j,\;e_s\in E\right\} \end{eqnarray*} are positive.
(III.) We construct the desired vector $\mathbf z$.
If the product $\prod_{j\in e\setminus\{1\}}y_j$ is a constant for every leaf $e\in E$, then take $\mathbf z=\mathbf y$ and we are done. In the following, suppose on the contrary that the set \begin{eqnarray*} \left\{\alpha_s:=\prod_{j\in e_s\setminus\{1\}}y_j,\;e_s\in E\right\} \end{eqnarray*} takes more than one numbers. Let $\mathbf z\in\mathbb R^n$ be the vector such that \begin{eqnarray*} z_j=\left(\frac{\sum_{e_s\in E}\alpha_s}{d\alpha_t}\right)^{1 \over k-1}y_j,\;j\in e_t\setminus\{1\} \end{eqnarray*}
and $z_1=y_1$. Note that $|z_2|=\cdots=|z_{n}|$, since $|y_j|^{k-1}=|\alpha_t|$ for all $j\in e_t\setminus\{1\}$ and $e_t\in E$. Then \begin{eqnarray*} \left(\mathcal L\mathbf z^{k-1}\right)_1&=&dz_1^{k-1}-\sum_{e_s\in E}\prod_{j\in e_s\setminus\{1\}}z_j\\ &=&dy_1^{k-1}-\sum_{e_s\in E}\frac{\sum_{e_t\in E}\alpha_t}{d\alpha_s}\prod_{j\in e_s\setminus\{1\}}y_j\\ &=&dy_1^{k-1}-\sum_{e_s\in E}\frac{\sum_{e_t\in E}\alpha_t}{d\alpha_s}\alpha_s\\ &=&dy_1^{k-1}- \sum_{e_t\in E}\alpha_t\\ &=&dy_1^{k-1}- \sum_{e_t\in E}\prod_{j\in e_t\setminus\{1\}}y_j\\ &=&\lambda(\mathcal L) y_1^{k-1}\\ &=&\lambda(\mathcal L) z_1^{k-1}. \end{eqnarray*} For any $i\neq 1$ with $i\in e_s$ for some $s$, we have \begin{eqnarray*} \left(\mathcal L\mathbf z^{k-1}\right)_i&=&z_i^{k-1}-z_1\prod_{j\in e_s\setminus\{1,i\}}z_j\\ &=&\frac{\sum_{e_t\in E}\alpha_t}{d\alpha_s}y_i^{k-1}-y_1\frac{\sum_{e_t\in E}\alpha_t}{d\alpha_s}\prod_{j\in e_s\setminus\{1,i\}}y_j\\ &=&\lambda(\mathcal L)\frac{\sum_{e_t\in E}\alpha_t}{d\alpha_s}y_i^{k-1}\\ &=&\lambda(\mathcal L)z_i^{k-1}. \end{eqnarray*} By Definition \ref{def-00}, $\mathbf z$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$ with the required property. The result follows. \ep
The next corollary follows directly from the proof of Lemma \ref{lem-1}.
\begin{Corollary}\label{cor-4} Let $k$ be odd and $G=(V,E)$ be a hyperstar of size $d>1$. If $\mathbf z\in\mathbb R^n$ is an H-eigenvector of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$, then $z_i$ is a constant for $i\in\mbox{sup}(\mathbf z)$ and $i$ being not the heart. Moreover, whenever $\mbox{sup}(\mathbf z)$ contains a vertex other than the heart, the signs of the heart and the vertices of leaves in $\mbox{sup}(\mathbf z)$ are opposite. \end{Corollary} However, in Section \ref{sec-odd}, we will show that $\mbox{sup}(\mathbf z)$ is a singleton which is the heart.
The next lemma is useful; it follows from a proof similar to that of \cite[Theorem 5]{q05}.
\begin{Lemma}\label{lem-5} Let $k$ be even and $G=(V,E)$ be a $k$-uniform hypergraph. Let $\mathcal L$ be the Laplacian tensor of $G$. Then \begin{eqnarray}\label{lp-c}
\lambda(\mathcal L)=\max\left\{\mathcal L\mathbf x^k:=\mathbf x^T(\mathcal L\mathbf x^{k-1})\;|\;\sum_{i\in [n]}x_i^k=1,\;\mathbf x\in\mathbb R ^n\right\}. \end{eqnarray} \end{Lemma}
The next lemma is an analogue of Corollary \ref{cor-4} for $k$ being even.
\begin{Lemma}\label{lem-2} Let $k$ be even and $G=(V,E)$ be a hyperstar of size $d>0$. Then there is an H-eigenvector $\mathbf z\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$ satisfying that $z_i$ is a constant for $i\in\mbox{sup}(\mathbf z)$ and $i$ being not the heart. \end{Lemma}
\noindent {\bf Proof.}
In the proof of Lemma \ref{lem-1}, $d>1$ is required only to guarantee $\lambda(\mathcal L)>1$. However, when $k$ is even, by Proposition \ref{prop-1}, $\lambda(\mathcal L)>1$ whenever $d>0$. Hence, there is an H-eigenvector $\mathbf x\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$ satisfying that $|x_i|$ is a constant for $i\in\mbox{sup}(\mathbf x)$ and $i$ being not the heart.
Suppose, without loss of generality, that $1$ is the heart. By Lemma \ref{lem-0}, without loss of generality, suppose that $\mbox{sup}(\mathbf x)=[n]$. If $x_1>0$, then let $\mathbf y=-\mathbf x$, and otherwise let $\mathbf y=\mathbf x$.
Suppose that $y_i<0$ for some $i$ other than $1$. Then \begin{eqnarray*} 0>(\lambda(\mathcal L)-1)y_i^{k-1}=-y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j. \end{eqnarray*} Thus, a positive even number of vertices in $le(i)$ other than $1$ takes negative values. Hence, all the values in this set \begin{eqnarray*} \left\{\prod_{j\in e_s\setminus\{1\}}y_j,\;e_s\in E\right\} \end{eqnarray*}
are positive. Let $\mathbf z\in\mathbb R^n$ such that $z_1=y_1$ and $z_i=|y_i|$ for the others. We have that if $y_i>0$, then \begin{eqnarray*} (\lambda(\mathcal L)-1)z_i^{k-1}=(\lambda(\mathcal L)-1)y_i^{k-1}=y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j=z_1\prod_{j\in le(i)\setminus\{1,i\}}z_j; \end{eqnarray*} and if $y_i<0$, then \begin{eqnarray*}
(\lambda(\mathcal L)-1)z_i^{k-1}=(\lambda(\mathcal L)-1)|y_i|^{k-1}=-y_1\prod_{j\in le(i)\setminus\{1,i\}}y_j=z_1\prod_{j\in le(i)\setminus\{1,i\}}z_j. \end{eqnarray*} Here, the second equality follows from the fact that $\prod_{j\in le(i)\setminus\{1,i\}}y_j<0$ in this situation. Moreover, \begin{eqnarray*}
(\lambda(\mathcal L)-d)z_1^{k-1}=(\lambda(\mathcal L)-d)y_1^{k-1}=\sum_{e_s\in E}\prod_{j\in e_s\setminus\{1\}}y_j=\sum_{e_s\in E}|\prod_{j\in e_s\setminus\{1\}}y_j|=\sum_{e_s\in E}\prod_{j\in e_s\setminus\{1\}}z_j. \end{eqnarray*} Consequently, $\mathbf z$ is the desired H-eigenvector. \ep
The next theorem gives the largest Laplacian H-eigenvalue of a hyperstar for $k$ being even.
\begin{Theorem}\label{thm-3} Let $k$ be even and $G=(V,E)$ be a hyperstar of size $d>0$. Let $\mathcal L$ be the Laplacian tensor of $G$. Then $\lambda(\mathcal L)$ is the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. \end{Theorem}
\noindent {\bf Proof.} By Lemma \ref{lem-2}, there is an H-eigenvector $\mathbf x\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ satisfying that $x_i$ is a constant for $i\in\mbox{sup}(\mathbf x)$ and $i$ being not the heart. By the argument in the proof of Proposition \ref{prop-1}, we have that $\lambda(\mathcal L)$ is the largest real root of the equation $(1-\lambda)^{k-1}(\lambda-w)+w=0$. Here $w$ is the size of the sub-hyperstar $G_{\mbox{sup}(\mathbf x)}$ of $G$.
Let $f(\lambda):=(1-\lambda)^{k-1}(\lambda-d)+d$. Then, $f^{\prime}(\lambda)=(1-\lambda)^{k-2}((k-1)(d-\lambda)+1-\lambda)$. Hence, $f$ is strictly decreasing in the interval $(d,+\infty)$. Moreover, $f(d+1)<0$. Consequently, $f$ has a unique real root in the interval $(d,d+1)$ which is the maximum. Thus, by Proposition \ref{prop-1}, we must have $\mbox{sup}(\mathbf x)=[n]$. The result follows. \ep
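For illustration, when $k=4$ and $d=2$, the equation becomes $(1-\lambda)^{3}(\lambda-2)+2=0$; since its left-hand side equals $0.3125>0$ at $\lambda=2.5$ and $-0.4576<0$ at $\lambda=2.6$, the largest Laplacian H-eigenvalue of a $4$-uniform hyperstar of size $2$ lies in $(2.5,2.6)$.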
The next corollary is a direct consequence of Theorem \ref{thm-3}.
\begin{Corollary}\label{cor-1} Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two hyperstars of size $d_1$ and $d_2>0$, respectively. Let $\mathcal L_1$ and $\mathcal L_2$ be the Laplacian tensors of $G_1$ and $G_2$ respectively. If $d_1>d_2$, then $\lambda(\mathcal L_1)>\lambda(\mathcal L_2)$. \end{Corollary}
When $k$ is even, the proofs of Lemmas \ref{lem-1} and \ref{lem-2}, and Theorem \ref{thm-3} actually imply the next corollary.
\begin{Corollary}\label{cor-3} Let $k$ be even and $G=(V,E)$ be a hyperstar of size $d>0$. If $\mathbf x\in\mathbb R^n$ is an H-eigenvector of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$, then $\mbox{sup}(\mathbf x)=[n]$. Hence, there is an H-eigenvector $\mathbf z\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$ satisfying that $z_i$ is a constant for all the vertices other than the heart. \end{Corollary}
\subsection{Even-Uniform Hypergraphs}
In this subsection, we present a tight lower bound for the largest Laplacian H-eigenvalue and characterize the extreme hypergraphs when $k$ is even.
The next theorem gives the lower bound, which is tight by Theorem \ref{thm-3}.
\begin{Theorem}\label{thm-4} Let $k$ be even and $G=(V,E)$ be a $k$-uniform hypergraph with the maximum degree being $d>0$. Let $\mathcal L$ be the Laplacian tensor of $G$. Then $\lambda(\mathcal L)$ is not smaller than the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. \end{Theorem}
\noindent {\bf Proof.} Suppose that $d_s=d$, the maximum degree. Let $G'=(V',E')$ be a $k$-uniform hypergraph such that $E'=E_s$ and $V'$ consists of the vertex $s$ and the vertices which share an edge with $s$. Let $\mathcal L'$ be the Laplacian tensor of $G'$. We claim that $\lambda(\mathcal L)\geq \lambda(\mathcal L')$.
Suppose that $|V'|=m\leq n$ and $\mathbf y\in\mathbb R^m$ is an H-eigenvector of $\mathcal L'$ corresponding to the H-eigenvalue $\lambda(\mathcal L')$ such that $\sum_{j\in [m]}y_j^k=1$. Suppose, without loss of generality, that $V'=[m]$, and the degree of vertex $j\in [m]$ in the hypergraph $G'$ is $d_j'$. Let $\mathbf x\in\mathbb R^n$ such that \begin{eqnarray}\label{new-5} x_i=y_i,\;\forall i\in [m],\;\mbox{and}\;x_i=0,\forall i>m. \end{eqnarray} Obviously, $\sum_{i\in [n]}x_i^k=\sum_{j\in [m]}y_j^k=1$. Moreover, \begin{eqnarray} \mathcal L\mathbf x^k&=&\sum_{i\in [n]}d_ix_i^k-k\sum_{e\in E}\prod_{j\in e}x_j\nonumber\\ &=&d_sx_s^k+\sum_{j\in [m]\setminus\{s\}}d'_jx_j^k-k\sum_{e\in E_s}\prod_{t\in e}x_t\nonumber\\ &&+\sum_{j\in [m]\setminus\{s\}}(d_j-d'_j)x_j^k+\sum_{j\in [n]\setminus [m]}d_jx_j^k-k\sum_{e\in E\setminus E_s}\prod_{t\in e}x_t\nonumber\\ &=&d_sx_s^k+\sum_{j\in [m]\setminus\{s\}}d'_jx_j^k-k\sum_{e\in E_s}\prod_{j\in e}x_j+\sum_{e\in E\setminus E_s}\left(\sum_{t\in e}x_t^k-k\prod_{w\in e}x_w\right)\nonumber\\ &=&\mathcal L'\mathbf y^k+\sum_{e\in E\setminus E_s}\left(\sum_{t\in e}x_t^k-k\prod_{w\in e}x_w\right)\nonumber\\ &\geq&\mathcal L'\mathbf y^k\label{ext-2}\\ &=&\lambda(\mathcal L').\nonumber \end{eqnarray}
Here the inequality follows from the fact that $\sum_{t\in e}x_t^k-k\prod_{w\in e}|x_w|\geq 0$ by the arithmetic-geometric mean inequality. Thus, by the characterization \reff{lp-c} (Lemma \ref{lem-5}), we get the conclusion since $\lambda(\mathcal L)\geq \mathcal L\mathbf x^k$.
For the hypergraph $G'$, we define a new hypergraph by renumbering the vertices in the following way: fix the vertex $s$, and for every edge $e\in E_s$, number the rest $k-1$ vertices as $\{(e,2),\ldots,(e,k)\}$. Let $\bar G=(\bar V,\bar E)$ be the $k$-uniform hypergraph with $\bar V:=\{s,(e,2),\ldots,(e,k),\;\forall e\in E_s\}$ and $\bar E:=\{\{s,(e,2),\ldots,(e,k)\}\;|\;e\in E_s\}$. It is easy to see that $\bar G$ is a hyperstar with size $d>0$ and the heart being $s$ (Definition \ref{def-bi-hm}). Let $\mathbf z\in\mathbb R^{d(k-1)+1}$ be an H-eigenvector of the Laplacian tensor $\bar{\mathcal L}$ of $\bar G$ corresponding to $\lambda(\bar{\mathcal L})$. Suppose that $\sum_{t\in \bar V}z_t^k=1$. By Corollary \ref{cor-3}, we can choose $\mathbf z$ such that $z_i$ is a constant for all vertices $i$ other than the heart $s$. Let $\mathbf y\in \mathbb R^m$ be defined by $y_i$ being this constant for all $i\in [m]\setminus \{s\}$ and $y_s=z_s$. Then, by a direct computation, we see that \begin{eqnarray*} \mathcal L'\mathbf y^k=\bar{\mathcal L}\mathbf z^k=\lambda(\bar{\mathcal L}). \end{eqnarray*} Moreover, $\sum_{j\in [m]}y_j^k\leq \sum_{t\in \bar V}z_t^k=1$. By \reff{lp-c} and the fact that $\lambda(\bar{\mathcal L})>0$ (Theorem \ref{thm-3}), we see that \begin{eqnarray}\label{ext} \lambda(\mathcal L')\geq \lambda(\bar{\mathcal L}). \end{eqnarray} Consequently, $\lambda(\mathcal L)\geq \lambda(\bar{\mathcal L})$. By Theorem \ref{thm-3}, $\lambda(\bar{\mathcal L})$ is the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. Consequently, $\lambda(\mathcal L)$ is no smaller than the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. \ep
By the proof of Theorem \ref{thm-4}, the next theorem follows immediately.
\begin{Theorem}\label{cor-2} Let $k$ be even, and $G=(V,E)$ and $G'=(V',E')$ be two $k$-uniform hypergraphs. Suppose that $\mathcal L$ and $\mathcal L'$ be the Laplacian tensors of $G$ and $G'$ respectively. If $V\subseteq V'$ and $E\subseteq E'$, then $\lambda (\mathcal L)\leq\lambda(\mathcal L')$. \end{Theorem}
The next lemma helps us to characterize the extreme hypergraphs with respect to the lower bound of the largest Laplacian H-eigenvalue.
\begin{Lemma}\label{lem-4} Let $k\geq 4$ be even and $G=(V,E)$ be a hyperstar of size $d>0$. Then there is an H-eigenvector $\mathbf z\in\mathbb R^n$ of the Laplacian tensor $\mathcal L$ of $G$ corresponding to $\lambda(\mathcal L)$ such that in every edge exactly two vertices other than the heart take negative values. \end{Lemma}
\noindent {\bf Proof.} Suppose, without loss of generality, that $1$ is the heart. By Corollary \ref{cor-3}, there is an H-eigenvector $\mathbf x\in\mathbb R^n$ of $\mathcal L$ corresponding to $\lambda(\mathcal L)$ such that $x_i$ is a constant for the vertices other than the heart. By Theorem \ref{thm-3}, we have that this constant is nonzero. If $x_2<0$, then let $\mathbf y=-\mathbf x$, and otherwise let $\mathbf y=\mathbf x$. We have that $\mathbf y$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$.
Let $\mathbf z\in\mathbb R^n$. We set $z_1=y_1$, and for every edge $e\in E$ and two arbitrarily chosen vertices $i_{e,1},i_{e,2}\in e\setminus\{1\}$ we set $z_{i_{e,1}}=-y_{i_{e,1}}<0$, $z_{i_{e,2}}=-y_{i_{e,2}}<0$ and $z_j=y_j>0$ for the other vertices $j\in e\setminus\{1,i_{e,1},i_{e,2}\}$. Then, by a direct computation, we can conclude that $\mathbf z$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$. \ep
The next theorem is the main result of this subsection, which characterizes the extreme hypergraphs with respect to the lower bound of the largest Laplacian H-eigenvalue.
\begin{Theorem}\label{thm-5} Let $k\geq 4$ be even and $G=(V,E)$ be a $k$-uniform connected hypergraph with the maximum degree being $d>0$. Let $\mathcal L$ be the Laplacian tensor of $G$. Then $\lambda(\mathcal L)$ is equal to the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$ if and only if $G$ is a hyperstar. \end{Theorem}
\noindent {\bf Proof.} By Theorem \ref{thm-3}, only necessity needs a proof. In the following, suppose that $\lambda(\mathcal L)$ is equal to the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. Suppose that $d_s=d$ as before.
Define $G'$ and $\bar G$ as in Theorem \ref{thm-4}. Specifically, let $G'=(V',E')$ be the $k$-uniform hypergraph such that $E'=E_s$ and $V'$ consists of the vertex $s$ and the vertices which share an edge with $s$. Let $\mathcal L'$ be the Laplacian tensor of $G'$. Fix the vertex $s$, and for every edge $e\in E_s$, number the rest
$k-1$ vertices as $\{(e,2),\ldots,(e,k)\}$. Let $\bar G=(\bar V,\bar E)$ be the $k$-uniform hypergraph such that $\bar V:=\{s,(e,2),\ldots,(e,k),\;\forall e\in E_s\}$ and $\bar E:=\{\{s,(e,2),\ldots,(e,k)\}\;|\;e\in E_s\}$.
With the same proof as in Theorem \ref{thm-4}, by Lemma \ref{lem-5}, we have that the inequality in \reff{ext} is an equality if and only if
$|\bar V|=m$. Since otherwise $\sum_{j\in [m]}y_j^k<\sum_{t\in \bar V}z_t^k=1$, which together with $\lambda(\bar \mathcal L)>0$ and \reff{lp-c} implies that $\lambda(\mathcal L')>\lambda(\bar \mathcal L)$. Hence, if $\lambda(\mathcal L)$ is equal to the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$, then $G'$ is a hyperstar. In this situation, the inequality in \reff{ext-2} is an equality if and only if $G'=G$. The sufficiency is clear.
For the necessity, suppose that $G'\neq G$. Then there is an edge $\bar e\in E$ \begin{itemize} \item [(i)] either containing both vertices in $[m]$ and vertices in $[n]\setminus [m]$, since $G$ is connected, \item [(ii)] or containing only vertices in $[m]\setminus\{s\}$. \end{itemize} For the case (i), it is easy to get a contradiction since $\sum_{t\in \bar e}x_t^k-k\prod_{w\in \bar e}x_w=\sum_{t\in \bar e\cap [m]}x_t^k>0$. Note that this situation happens if and only if $m<n$. Then, in the following we assume that $m=n$. For the case (ii), we must have that there are $q\geq 2$ edges $e_a\in E_s, a\in [q]$ in $G'$ such that $e_a\cap \bar e\neq\emptyset$ for all $a\in [q]$. By Lemma \ref{lem-4}, let $\mathbf y\in\mathbb R^n$ be an H-eigenvector of the Laplacian tensor $\mathcal L'$ of $G'$ satisfying that exactly two vertices other than the heart in every edge take negative values. Moreover, we can normalize $\mathbf y$ such that $\sum_{i\in [n]}y_i^k=1$. Since $m=n$, by \reff{new-5}, we have $\mathbf x=\mathbf y$. Consequently, by Lemma \ref{lem-5}, we have \begin{eqnarray} \lambda(\mathcal L)\geq \mathcal L\mathbf x^k&=&\mathcal L'\mathbf x^k+\sum_{e\in E\setminus E_s}\left(\sum_{t\in e}x_t^k-k\prod_{w\in e}x_w\right)\nonumber\\ &=&\lambda(\mathcal L')+\sum_{e\in E\setminus E_s}\left(\sum_{t\in e}x_t^k-k\prod_{w\in e}x_w\right)\nonumber\\ &\geq &\lambda(\mathcal L')+\sum_{t\in \bar e}x_t^k-k\prod_{w\in \bar e}x_w.\nonumber \end{eqnarray} If $\prod_{w\in \bar e}x_w<0$, then we get a contradiction since $\lambda(\mathcal L')$ is equal to the unique real root of the equation $(1-\lambda)^{k-1}(\lambda-d)+d=0$ in the interval $(d,d+1)$. In the following, we assume that $\prod_{w\in \bar e}x_w>0$. We have two cases: \begin{itemize} \item [(1)] $x_w>0$ or $x_w<0$ for all $w\in \bar e$, \item [(2)] $x_b>0$ for some $b\in\bar e$ and $x_c<0$ for some $c\in\bar e$. \end{itemize}
Note that $|e_a\cap \bar e|\leq k-2$ for all $a\in [q]$. For an arbitrary but fixed $a\in [q]$, define $\{f_1,f_2\}:=\{f\in e_a\setminus\{s\}\;|\;x_f<0\}$.
(I). If $f_1, f_2 \in \bar e$, then we choose an $h\in e_a$ such that $h\neq s$, $h\notin \bar e$ and $x_h>0$. Since $k\geq 4$ is even, such an $h$ exists. It is a direct computation to see that $\mathbf z\in\mathbb R^n$ such that $z_{f_1}=-x_{f_1}>0$, $z_h=-x_h<0$, and $z_i=x_i$ for the others is still an H-eigenvector of $\mathcal L'$ corresponding to $\lambda(\mathcal L')$. More importantly, $\prod_{w\in \bar e}z_w<0$. Hence, replacing $\mathbf y$ by $\mathbf z$, we get a contradiction.
(II). If $f_1\in \bar e$ and $f_2\notin\bar e$, then either there is an $h\in \bar e\cap e_a$ such that $h\neq s$ and $x_h>0$, or there is an $h\in e_a$ such that $h\neq s$, $h\notin \bar e$ and $x_h>0$. Since $k\geq 4$ is even, such an $h$ exists. For the former case, set $\mathbf z\in\mathbb R^n$ such that $z_{h}=-x_{h}<0$, $z_{f_2}=-x_{f_2}>0$, and $z_i=x_i$ for the others; and for the latter case, set $\mathbf z\in\mathbb R^n$ such that $z_{f_1}=-x_{f_1}>0$, $z_h=-x_h<0$, and $z_i=x_i$ for the others. Then, it is a direct computation to see that $\mathbf z$ is still an H-eigenvector of $\mathcal L'$ corresponding to $\lambda(\mathcal L')$. We also have that $\prod_{w\in \bar e}z_w<0$. Hence, replacing $\mathbf y$ by $\mathbf z$, we get a contradiction.
(III). The proof for the case $f_2\in \bar e$ and $f_1\notin\bar e$ is similar.
(IV). If $f_1,f_2\notin \bar e$, then there is some $b\in\bar e\cap e_a$ such that $x_b>0$, then similarly it is a direct computation to see that $\mathbf z\in\mathbb R^n$ such that $z_{b}=-x_{b}<0$, $z_{f_1}=-x_{f_1}>0$, and $z_i=x_i$ for the others is still an H-eigenvector of $\mathcal L'$ corresponding to $\lambda(\mathcal L')$. We also have that $\prod_{w\in \bar e}z_w<0$. Consequently, a contradiction can be derived.
Thus, $G=G'$ is a hyperstar. \ep
Theorems \ref{thm-4} and \ref{thm-5} generalize the classical result for graphs \cite{gm94,zl02}.
\subsection{Odd-Uniform Hypergraphs}\label{sec-odd}
In this subsection, we discuss odd-uniform hypergraphs. Note that there does not exist an analogue of Lemma \ref{lem-5} for $k$ being odd. Hence it is difficult to characterize the extreme hypergraphs for the lower bound of the largest H-eigenvalue of the Laplacian tensor.
\begin{Theorem}\label{thm-2} Let $k$ be odd and $G=(V,E)$ be a hyperstar of size $d>0$. Let $\mathcal L$ be the Laplacian tensor of $G$. Then $\lambda(\mathcal L)=d$. \end{Theorem}
\noindent {\bf Proof.} The case for $d=1$ follows by direct computation, since in this case, multiplying the $i$-th eigenvalue equation by $x_i$ gives, for all $i\in [k]$, \begin{eqnarray*} (\lambda(\mathcal L)-1)x_i^k=-\prod_{j\in [k]}x_j. \end{eqnarray*} If $\lambda(\mathcal L)>1$, then $x_i^k=x_j^k$ for all $i, j\in [k]$. Since $k$ is odd and $\mathbf x \neq 0$, we have $x_i=x_j \neq 0$ for all $i,j\in [k]$. This implies that $0<\lambda(\mathcal L)-1=-1<0$, a contradiction.
In the following, we consider cases when $d>1$. Suppose, without loss of generality, that $1$ is the heart. It is easy to see that the H-eigenvector $\mathbf x:=(1,0,\ldots,0)\in\mathbb R^n$ corresponds to the H-eigenvalue $d$. Suppose that $\mathbf x\in\mathbb R^n$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$. In the following, we show that $\mbox{sup}(\mathbf x)=\{1\}$, which implies that $\lambda(\mathcal L)=d$.
Suppose on the contrary that $\mbox{sup}(\mathbf x)\neq\{1\}$. By Lemma \ref{lem-0} and Corollary \ref{cor-4}, without loss of generality, we assume that $\mbox{sup}(\mathbf x)=[n]$ and $\mathbf x$ is of the following form \begin{eqnarray*} \alpha:=x_1>0, \;\mbox{and}\; x_2=\cdots=x_n=-1. \end{eqnarray*} Then, we see that \begin{eqnarray*} (\mathcal L\mathbf x^{k-1})_1=d \alpha^{k-1}-d=\lambda(\mathcal L)\alpha^{k-1}, \end{eqnarray*} and for $i\in\{2,\ldots,n\}$ \begin{eqnarray*} (\mathcal L\mathbf x^{k-1})_i=1+\alpha=\lambda(\mathcal L). \end{eqnarray*} Consequently, \begin{eqnarray*} (d-\lambda(\mathcal L))(\lambda(\mathcal L)-1)^{k-1}=d. \end{eqnarray*} Hence, we must have $\lambda(\mathcal L)<d$. This is a contradiction. Hence, $\lambda(\mathcal L)=d$. \ep
When $k$ is odd, Theorem \ref{thm-2}, together with Lemma \ref{lem-00}, implies that the maximum degree is a tight lower bound for the largest Laplacian H-eigenvalue.
We now give a lower bound for the largest Laplacian H-eigenvalue of a $3$-uniform complete hypergraph.
\begin{Proposition}\label{prop-2} Let $G=(V,E)$ be a $3$-uniform complete hypergraph. Let $\mathcal L$ be the Laplacian tensor of $G$ and $n=2m$ for some positive integer $m$. Then $\lambda(\mathcal L)\geq {n-1\choose 2}+m-1$, which is strictly larger than $d = {n-1\choose 2}$, the maximum degree of $G$. \end{Proposition}
\noindent {\bf Proof.} Let $\mathbf x\in\mathbb R^n$ be defined as $x_1=\cdots=x_m=1$ and $x_{m+1}=\cdots=x_{2m}=-1$. We have that \begin{eqnarray*} \left(\mathcal L\mathbf x^2\right)_1&=&{n-1\choose 2}x_1^2 -\sum_{1<i<j\in [n]}x_ix_j\\ &=&{n-1\choose 2}-\sum_{1<i<j\in [m]}x_ix_j-\sum_{m+1\leq i<j\in [2m]}x_ix_j-\sum_{1<i\in [m],\;m+1\leq j\leq 2m}x_ix_j\\
&=&{n-1\choose 2}-\sum_{1<i<j\in [m]}x_ix_j-\sum_{m+1\leq i<j\in [2m]}x_ix_j+\sum_{1<i\in [m],\;m+1\leq j\leq 2m}\left|x_ix_j\right|\\ &=&{n-1\choose 2}-{m-1\choose 2}-{m\choose 2}+(m-1)m\\ &=&\left[{n-1\choose 2}+m-1\right]x_1^2. \end{eqnarray*} Thus, for any $p = 2, \cdots, m$, we have that $$\left(\mathcal L\mathbf x^2\right)_p = \left(\mathcal L\mathbf x^2\right)_1 = \left[{n-1\choose 2}+m-1\right]x_p^2.$$ Similarly, for any $p\in \{m+1,\ldots, 2m\}$, we have that \begin{eqnarray*} \left(\mathcal L\mathbf x^2\right)_p&=&\left(\mathcal L\mathbf x^2\right)_n\\ &=&{n-1\choose 2}x_n^2 -\sum_{1\leq i<j\in [n-1]}x_ix_j\\ &=&{n-1\choose 2}-\sum_{1\leq i<j\in [m]}x_ix_j-\sum_{m+1\leq i<j\in [2m-1]}x_ix_j-\sum_{1\leq i\in [m],\;m+1\leq j\leq 2m-1}x_ix_j\\
&=&{n-1\choose 2}-\sum_{1\leq i<j\in [m]}x_ix_j-\sum_{m+1\leq i<j\in [2m-1]}x_ix_j+\sum_{1\leq i\in [m],\;m+1\leq j\leq 2m-1}\left|x_ix_j\right|\\ &=&{n-1\choose 2}-{m\choose 2}-{m-1\choose 2}+m(m-1)\\ &=&\left[{n-1\choose 2}+m-1\right]x_p^2. \end{eqnarray*} Thus, $\mathbf x$ is an H-eigenvector of $\mathcal L$ corresponding to the H-eigenvalue ${n-1\choose 2}+m-1$. \ep
We have the following conjecture.
\begin{Conjecture}\label{con-1} Let $k\geq 3$ be odd and $G=(V,E)$ be a $k$-uniform connected hypergraph with the maximum degree being $d>0$. Let $\mathcal L$ be the Laplacian tensor of $G$. Then $\lambda(\mathcal L)$ is equal to $d$ if and only if $G$ is a hyperstar. \end{Conjecture}
\section{The Largest Signless Laplacian H-eigenvalue} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} In this section, we discuss the largest signless Laplacian H-eigenvalue of a $k$-uniform hypergraph. Since the signless Laplacian tensor $\mathcal Q$ is nonnegative, the situation is much clearer than the largest Laplacian H-eigenvalue.
The next proposition gives bounds on $\lambda(\mathcal Q)$.
\begin{Proposition}\label{thm-41} Let $G=(V,E)$ be a $k$-uniform hypergraph with maximum degree being $d>0$, and $\mathcal A$ and $\mathcal{Q}$ be the adjacency tensor and the signless Laplacian tensor of $G$ respectively. Then \begin{eqnarray*} \max\left\{d,\frac{2\sum_{i\in [n]}d_i}{n}\right\}\leq \lambda(\mathcal{Q})\leq \lambda(\mathcal{A})+d. \end{eqnarray*} \end{Proposition}
\noindent {\bf Proof.} The first inequality follows from \cite[Corollary 12]{q12a}. For the second, by \cite[Theorem 11]{q12a}, we have that \begin{eqnarray*} \lambda(\mathcal{Q})&=&\max_{\sum_{i\in [n]}x_i^k=1,\;\mathbf x\in\mathbb R_+^n} \mathcal{Q}\mathbf{x}^k=\max_{\sum_{i\in [n]}x_i^k=1,\;\mathbf x\in\mathbb R_+^n} (\mathcal{A}+\mathcal{D})\mathbf{x}^k\nonumber\\ &\leq&\max_{\sum_{i\in [n]}x_i^k=1,\;\mathbf x\in\mathbb R_+^n} \mathcal{A}\mathbf{x}^k+\max_{\sum_{i\in [n]}x_i^k=1,\;\mathbf x\in\mathbb R_+^n} \mathcal{D}\mathbf{x}^k=\lambda(\mathcal{A})+d. \end{eqnarray*} Consequently, the second inequality follows. \ep
\begin{Lemma}\label{lem-6} Let $G =(V,E)$ be a $k$-uniform regular connected hypergraph with degree $d>0$, and $\mathcal Q$ be its signless Laplacian tensor. Then, $\lambda(\mathcal Q)=2d$. \end{Lemma}
\noindent {\bf Proof.} Note that the vector of all ones is an H-eigenvector of $\mathcal Q$ corresponding to the H-eigenvalue $2d$. Since $\mathcal Q$ is weakly irreducible (\cite[Lemma 3.1]{pz12}), the result follows from \cite[Lemmas 2.2 and 2.3]{hq12a}. \ep
The next proposition gives a tight upper bound on the largest signless Laplacian H-eigenvalue and characterizes the extreme hypergraphs.
\begin{Proposition}\label{thm-42} Let $G =(V,E)$ be a $k$-uniform hypergraph and $G'$ be a sub-hypergraph of $G$. Let $\mathcal{Q}$ and $\mathcal{Q}'$ be the signless Laplacian tensor of $G$ and $G'$, respectively. Then, \begin{eqnarray} \lambda(\mathcal{Q}') \leq \lambda(\mathcal{Q}).\nonumber\end{eqnarray} Furthermore, if $G'$ and $G$ are both connected, then $\lambda(\mathcal{Q}')=\lambda(\mathcal{Q})$ if and only if $G'=G$. Consequently, \begin{eqnarray} \lambda(\mathcal{Q})&\leq & 2{n-1 \choose k-1},\nonumber \end{eqnarray} and equality holds if and only if $G$ is a $k$-uniform complete hypergraph. \end{Proposition}
\noindent {\bf Proof.} The first conclusion follows from \cite[Theorem 3.19]{yy11}. The remaining follows from \cite[Theorem 3.20]{yy11}, \cite[Theorem 4]{q12b} and \cite[Lemma 3.1]{pz12} (see also \cite[Lemmas 2.2 and 2.3]{hq13}) which imply that there is a unique positive H-eigenvector of $\mathcal Q$ and the corresponding H-eigenvalue must be $\lambda(\mathcal Q)$ whenever $G$ is connected, and the fact that the vector of all ones is an H-eigenvector of $\mathcal Q$ corresponding to the H-eigenvalue $2{n-1 \choose k-1}$ when $G$ is a complete hypergraph (Lemma \ref{lem-6}). \ep
When $k=2$ (i.e., the usual graph), Propositions \ref{thm-41} and \ref{thm-42} reduce to the classic results in graph theory \cite{crs07}.
The next theorem gives a tight lower bound for $\lambda(\mathcal Q)$ and characterizes the extreme hypergraphs.
\begin{Theorem}\label{thm-44} Let $G=(V,E)$ be a $k$-uniform connected hypergraph with the maximum degree being $d>0$ and $\mathcal{Q}$ be the signless Laplacian tensor of $G$. Then \begin{eqnarray*} \lambda(\mathcal{Q})&\geq &d+ d \left(\frac{1}{\alpha_*}\right)^{k-1}, \end{eqnarray*} where $\alpha_*\in (d-1,d]$ is the largest real root of $\alpha^k+(1-d)\alpha^{k-1}-d=0$, with equality holding if and only if $G$ is a hyperstar. \end{Theorem}
\noindent {\bf Proof.} Suppose that $d_s=d$. Let $G'$ be the hypergraph $G_{S}$ with $S$ being the vertices in the set $E_s$. As in the proof of Proposition \ref{thm-4}, for the hypergraph $G'$, we define a new hypergraph by renumbering the vertices in the following way: fix the vertex $s$, and for every edge $e\in E_s$, number the rest $k-1$ vertices as $\{(e,2),\ldots,(e,k)\}$. Let $\bar G=(\bar V,\bar E)$ be the $k$-uniform hypergraph such that $\bar V:=\{s,(e,2),\ldots,(e,k),\;\forall e\in E_s\}$ and $\bar E:=\{\{s,(e,2),\ldots,(e,k)\}\;|\;e\in E_s\}$. It is easy to see that $\bar G$ is a hyperstar with order $d>0$ and the heart being $s$. Let $\mathbf z\in\mathbb R^{kd-k+1}$ be a vector such that $z_s=\alpha>0$ and $z_j=1$ for all $j\in\bar V\setminus\{s\}$. By a similar proof of Proposition \ref{prop-1}, we see that $\mathbf z$ is an H-eigenvector of the signless Laplacian tensor $\bar \mathcal Q$ of $\bar G$ if and only if $\alpha$ is a real root of the following equation \begin{eqnarray}\label{sl} \alpha^k+(1-d)\alpha^{k-1}-d=0. \end{eqnarray} In this situation, the H-eigenvalue is $\lambda=1+\alpha$.
By \cite[Theorem 11]{q12a} and \cite[Lemmas 2.2 and 2.3]{hq13}, or by a proof similar to that of Proposition \ref{prop-1}, we can show that $\lambda(\mathcal Q)\geq \lambda(\bar\mathcal Q)$ with equality holding if and only if $G=\bar G$. Moreover, letting $\alpha_*$ be the largest real root of the equation \reff{sl}, by \reff{sl} we have \begin{eqnarray*} \lambda_*=1+\alpha_*=d+d\left(\frac{1}{\alpha_*}\right)^{k-1}. \end{eqnarray*} By a proof similar to that of Theorem \ref{thm-3}, we can show that the equation \reff{sl} has a unique real root in the interval $(d-1, d]$, which is its largest real root. Since $\bar G$ is connected, by \cite[Lemmas 2.2 and 2.3]{hq13} and \cite[Lemma 3.1]{pz12}, we have that $\lambda(\bar\mathcal Q)=1+\alpha_*$. Consequently, the results follow. \ep
When $G$ is a 2-uniform hypergraph, we know that $\alpha_*=d$, hence Theorem \ref{thm-44} reduces to $\lambda(\mathcal{Q})\geq d+1$ \cite{crs07}.
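As a rough numerical illustration, for $k=3$ and $d=2$ the equation \reff{sl} becomes $\alpha^3-\alpha^2-2=0$, whose largest real root is $\alpha_*\approx 1.70$; Theorem \ref{thm-44} then gives $\lambda(\mathcal{Q})\geq 2+2\left(\frac{1}{\alpha_*}\right)^{2}\approx 2.70$ for every $3$-uniform connected hypergraph with maximum degree $2$, with equality exactly when $G$ is a hyperstar of size $2$.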
\section{The Relation between The Largest Laplacian and Signless Laplacian H-Eigenvalues} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} In this section, we discuss the relationship between the largest Laplacian H-eigenvalue and the largest signless Laplacian H-eigenvalue.
The following theorem characterizes this relationship. This theorem generalizes the classical result in spectral graph theory \cite{zl02,z07}.
\begin{Theorem}\label{prop-3} Let $G=(V,E)$ be a $k$-uniform hypergraph. Let $\mathcal L,\mathcal Q$ be the Laplacian and signless Laplacian tensors of $G$ respectively. Then \begin{eqnarray*} \lambda(\mathcal L)\leq\lambda(\mathcal Q). \end{eqnarray*} If furthermore $G$ is connected and $k$ is even, then \begin{eqnarray*} \lambda(\mathcal L)=\lambda(\mathcal Q) \end{eqnarray*} if and only if $G$ is odd-bipartite. \end{Theorem}
\noindent {\bf Proof.} The first conclusion follows from Definition \ref{def-00} and \cite[Proposition 14]{q12a}.
We now prove the second conclusion. We first prove the sufficiency. We assume that $G$ is odd-bipartite. Suppose that $\mathbf x\in\mathbb R^n$ is a nonnegative H-eigenvector of $\mathcal Q$ corresponding to $\lambda(\mathcal Q)$. Then, \cite[Lemma 2.2]{hq12a} implies that $\mathbf x$ is a positive vector, i.e., all its entries are positive. Suppose that $V=V_1\cup V_2$ is an odd-bipartition of $V$ such that $V_1,V_2\neq \emptyset$ and every edge in $E$ intersects $V_1$ with exactly an odd number of vertices. Let $\mathbf y\in\mathbb R^n$ be defined such that $y_i=x_i$ whenever $i\in V_1$ and $y_i=-x_i$ for the others. Then, for $i\in V_1$, we have \begin{eqnarray*} \left[(\mathcal D-\mathcal A)\mathbf y^{k-1}\right]_i&=&d_iy_i^{k-1}-\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}y_j\\ &=&d_ix_i^{k-1}+\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}x_j\\ &=&\left[(\mathcal D+\mathcal A)\mathbf x^{k-1}\right]_i\\ &=&\lambda(\mathcal Q) x_i^{k-1}\\ &=&\lambda(\mathcal Q) y_i^{k-1}. \end{eqnarray*} Here the second equality follows from the fact that exactly an odd number of vertices in $e$ takes negative values for every $e\in E_i$. Similarly, we have for $i\in V_2$, \begin{eqnarray*} \left[(\mathcal D-\mathcal A)\mathbf y^{k-1}\right]_i&=&d_iy_i^{k-1}-\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}y_j\\ &=&-d_ix_i^{k-1}-\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}x_j\\ &=&-\left[(\mathcal D+\mathcal A)\mathbf x^{k-1}\right]_i\\ &=&-\lambda(\mathcal Q) x_i^{k-1}\\ &=&\lambda(\mathcal Q) y_i^{k-1}. \end{eqnarray*} Here the second equality follows from the fact that exactly an even number of vertices in $e\setminus\{i\}$ takes negative values for every $e\in E_i$, and the last from the fact that $y_i=-x_i$. Thus, $\lambda(\mathcal Q)$ is an H-eigenvalue of $\mathcal L$. This, together with the first conclusion, implies that $\lambda(\mathcal L)=\lambda(\mathcal Q)$.
In the following, we prove the necessity of the second conclusion. We assume that $\lambda(\mathcal L)=\lambda(\mathcal Q)$. Let $\mathbf x\in\mathbb R^n$ be an H-eigenvector of $\mathcal L$ corresponding to the H-eigenvalue $\lambda(\mathcal L)$ such that $\sum_{i\in [n]}x_i^k=1$. Then, \begin{eqnarray*} \left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i=\lambda(\mathcal L)x_i^{k-1},\;\forall i\in [n]. \end{eqnarray*}
Let $\mathbf y\in\mathbb R^n$ be defined such that $y_i=|x_i|$ for all $i\in [n]$. By \reff{lp-c}, we see that \begin{eqnarray}\label{new-1}
\lambda(\mathcal L)&=&\sum_{i\in [n]}x_i \left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i=\sum_{i\in [n]}|x_i|\left| \left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i\right|\nonumber\\ &\leq& \sum_{i\in [n]}y_i\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i\leq\lambda(\mathcal Q). \end{eqnarray}
Thus, all the inequalities in \reff{new-1} should be equalities. By \cite[Lemma 2.2]{q12a} and \cite[Theorem 2.1(iii)]{hq13}, we have that $\mathbf y$ is an H-eigenvector of $\mathcal Q$ corresponding to the H-eigenvalue $\lambda(\mathcal Q)$, and it is a positive vector. Let $V_1:=\{i\in [n]\;|\;x_i>0\}$ and $V_2:=\{i\in [n]\;|\;x_i<0\}$. Then, $V_1\cup V_2=[n]$, since $\mathbf y$ is positive. Since $G$ is connected and nontrivial, we must have that $V_2\neq \emptyset$. Otherwise $\left|\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i\right|<\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i$, since $(\mathcal A\mathbf x^{k-1})_i>0$ in this situation. We also have that $V_1\neq\emptyset$, since otherwise $\left|\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i\right|=|-d_iy_i^{k-1}+(\mathcal A\mathbf y^{k-1})_i|<\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i$.
Moreover, since the first inequality in \reff{new-1} must be an equality, we must have that for all $i\in V_1$, \begin{eqnarray*} \lambda(\mathcal Q)y_i^{k-1}&=&\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i=\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i. \end{eqnarray*} We have that \begin{eqnarray*} \left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i=d_iy_i^{k-1}+\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}y_j,\;\mbox{and}\; \left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i=d_ix_i^{k-1}-\sum_{e\in E_i}\prod_{j\in e\setminus\{i\}}x_j. \end{eqnarray*}
Hence, for every $e\in E_i$ with $i\in V_1$, we must have that $|e\cap V_2|$ is an odd number. Similarly, we can show that for every $e\in E_i$ with $i\in V_2$, we must have that $|e\cap V_1|$ is an odd number. Consequently, $G$ is odd-bipartite by Definition \ref{def-bi-odd}.\ep
In the following, we give an application of Theorem \ref{prop-3}.
\begin{Definition}\label{def-hc}
Let $G=(V,E)$ be a $k$-uniform nontrivial hypergraph. If there is a disjoint partition of the vertex set $V$ as $V=V_1\cup\cdots\cup V_s$ such that $|V_1|=\cdots=|V_s|=k$, and \begin{itemize}
\item [(i)] $E=\{V_i\;|\;i\in [s]\}$,
\item [(ii)] $|V_1\cap V_2|=\cdots=|V_{s-1}\cap V_s|=|V_s\cap V_1|=1$, and $V_i\cap V_j=\emptyset$ for the other cases, \item [(iii)] the intersections $V_1\cap V_2$, \ldots, $V_s\cap V_1$ are mutually different. \end{itemize} then $G$ is called a {\em hypercycle}. $s$ is the {\em size} of the hypercycle. \end{Definition} It is easy to see that a $k$-uniform hypercycle of size $s>0$ has $n=s(k-1)$ vertices, and is connected. Figure 3 (i) is an example of a $4$-uniform hypercycle of size $3$.
\begin{figure}
\caption{(i) is an example of a $4$-uniform hypercycle of size $3$. The intersections are in dashed margins. (ii) is an illustration of an odd-bipartition of the $4$-uniform hypercycle. The partition is clear from the different colors of the disks (also the dashed margins from the solid ones).}
\end{figure}
The next lemma says that the largest signless Laplacian H-eigenvalue of a hypercycle is easy to characterize.
\begin{Lemma}\label{lem-7} Let $G=(V,E)$ be a $k$-uniform hypercycle of size $s>0$ and $\mathcal Q$ be its signless Laplacian tensor. Then, $\lambda(\mathcal Q)=2+2\beta^{k-2}$ with $\beta$ being the unique positive solution of the equation $2\beta^k+\beta^2-1=0$ which is in the interval $(\frac{1}{2}, 1)$. \end{Lemma}
\noindent {\bf Proof.} By \cite[Theorem 3.20]{yy11}, \cite[Theorem 4]{q12b} and \cite[Lemma 3.1]{pz12} (see also \cite[Lemmas 2.2 and 2.3]{hq13}), if we can find a positive H-eigenvector $\mathbf x\in\mathbb R^n$ of $\mathcal Q$ corresponding to an H-eigenvalue $\mu$, then $\mu=\lambda(\mathcal Q)$.
Let $x_i=\alpha$ whenever $i$ is an intersection of the edges of $G$ and $x_i=\beta$ for the others. Without loss of generality, we assume that $\alpha =1$. Then, for an intersection vertex $i$, we have that $d_i=2$ and \begin{eqnarray*} (\mathcal Q\mathbf x^{k-1})_i=2\alpha^{k-1}+2\alpha\beta^{k-2}=2+2\beta^{k-2}; \end{eqnarray*} and for the other vertices $j$, we have that $d_j=1$ and \begin{eqnarray*} (\mathcal Q\mathbf x^{k-1})_j=\beta^{k-1}+\alpha^2\beta^{k-3}=\beta^{k-1}+\beta^{k-3}. \end{eqnarray*} If there are some $\mu>0$ and $\beta>0$ such that \begin{eqnarray}\label{new-6} 2+2\beta^{k-2}=\mu,\;\mbox{and}\;\beta^{k-1}+\beta^{k-3}=\mu\beta^{k-1}, \end{eqnarray} then $\mu=\lambda(\mathcal Q)$ by the discussion at the beginning of this proof. We assume that \reff{new-6} has a required solution pair. Then, \begin{eqnarray*} 2\beta^{2k-3}+\beta^{k-1}-\beta^{k-3}=0,\;i.e.,\; 2\beta^k+\beta^2-1=0. \end{eqnarray*} Let $g(\beta):=2\beta^k+\beta^2-1$. Then $g(1)>0$ and \begin{eqnarray*} g(\frac{1}{2})=\frac{1}{2^{k-1}}+\frac{1}{4}-1<0. \end{eqnarray*} Thus, \reff{new-6} does have a solution pair with $\beta\in (\frac{1}{2}, 1)$ and $\mu=2+2\beta^{k-2}$. Since $\mathcal Q$ has a unique positive H-eigenvector (\cite[Lemmas 2.2 and 2.3]{hq13}), the equation $2\beta^k+\beta^2-1=0$ has a unique positive solution which is in the interval $(\frac{1}{2}, 1)$. Hence, the result follows. \ep
By Theorem \ref{prop-3} and Lemma \ref{lem-7}, we can get the following corollary, which characterizes the largest Laplacian H-eigenvalue of a hypercycle when $k$ is even.
\begin{Corollary}\label{cor-5} Let $k$ be even and $G=(V,E)$ be a $k$-uniform hypercycle of size $s>0$. Let $\mathcal L$ be its Laplacian tensor. Then, $\lambda(\mathcal L)=2+2\beta^{k-2}$ with $\beta$ being the unique positive solution of the equation $2\beta^k+\beta^2-1=0$ which is in the interval $(\frac{1}{2}, 1)$. \end{Corollary}
\noindent {\bf Proof.} By Theorem \ref{prop-3} and Lemma \ref{lem-7}, it suffices to show that when $k$ is even, a $k$-uniform hypercycle is odd-bipartite.
Let $V=V_1\cup\cdots\cup V_s$ such that $|V_1|=\cdots=|V_s|=k$ be the partition of the vertices satisfying the hypotheses in Definition \ref{def-hc}. Denote $V_s\cap V_1$ as $i_1$, $V_1\cap V_2$ as $i_2$, \ldots, $V_{s-1}\cap V_s$ as $i_s$. For every $j\in [s]$, choose a vertex $r_j\in V_j$ such that $r_j\notin\{i_1,\ldots,i_s\}$. Let $S_1:=\{r_j\;|\;j\in [s]\}$ and $S_2=V\setminus S_1$. Then it is easy to see that $S_1\cup S_2=V$ is an odd-bipartition of $G$ (Definition \ref{def-bi-odd}). An illustration of such a partition is shown in Figure 3 (ii).
Thus, the result follows. \ep
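For example, when $k=4$ the equation $2\beta^k+\beta^2-1=0$ reads $2\beta^4+\beta^2-1=0$, which gives $\beta^2=\frac{1}{2}$, i.e., $\beta=\frac{1}{\sqrt{2}}\in(\frac{1}{2},1)$. Hence, $\lambda(\mathcal L)=\lambda(\mathcal Q)=2+2\beta^{2}=3$ for every $4$-uniform hypercycle, regardless of its size.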
The next proposition says that when $k$ is odd, the two H-eigenvalues cannot be equal for a connected nontrivial hypergraph.
\begin{Proposition}\label{prop-4} Let $k$ be odd and $G=(V,E)$ be a $k$-uniform connected nontrivial hypergraph. Let $\mathcal L,\mathcal Q$ be the Laplacian and signless Laplacian tensors of $G$ respectively. Then \begin{eqnarray*} \lambda(\mathcal L)<\lambda(\mathcal Q). \end{eqnarray*} \end{Proposition}
\noindent {\bf Proof.} Suppose that $\mathbf x\in\mathbb R^n$ is an H-eigenvector of $\mathcal L$ corresponding to $\lambda(\mathcal L)$ such that $\sum_{i\in [n]}|x_i|^k=1$. Then, we have that \begin{eqnarray*} \lambda(\mathcal L)x_i^{k-1}=(\mathcal L\mathbf x^{k-1})_i=\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i,\;\forall i\in [n]. \end{eqnarray*} Hence, \begin{eqnarray}\label{new-2}
\lambda(\mathcal L)&=&\sum_{i\in [n]}|x_i|\left|(\mathcal L\mathbf x^{k-1})_i\right|=\sum_{i\in [n]}|x_i|\left|\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i\right|\nonumber\\
&\leq& \sum_{i\in [n]}|x_i|\left[(\mathcal D+\mathcal A)|\mathbf x|^{k-1}\right]_i\leq \lambda(\mathcal Q). \end{eqnarray}
If $\mbox{sup}(\mathbf x)\neq [n]$, then $\lambda(\mathcal L)<\lambda(\mathcal Q)$ by \cite[Lemma 2.2]{hq13}. Hence, in the following we assume that $\mbox{sup}(\mathbf x)=[n]$. We prove the conclusion by contradiction. Suppose that $\lambda(\mathcal L)=\lambda(\mathcal Q)$. Then all the inequalities in \reff{new-2} should be equalities. By \cite[Theorem 11]{q12a}, $\mathbf y:=|\mathbf x|$ is an H-eigenvector of $\mathcal Q$ corresponding to the H-eigenvalue $\lambda(\mathcal Q)$, and it is a positive vector. Similarly to the proof of Theorem \ref{prop-3}, we can get a bipartition of $V$ as $V=V_1\cup V_2$ with $V_1,V_2\neq\emptyset$. Moreover, for all $i\in V$, \begin{eqnarray*}
\lambda(\mathcal Q)y_i^{k-1}&=&\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_i=\left|\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_i\right|. \end{eqnarray*}
Suppose, without loss of generality, that $x_1>0$. Then, we have that $|e\cap V_2|<k-1$ is an odd number for every $e\in E_1$. Since $G$ is connected and nontrivial, we have that $E_1\neq \emptyset$. Suppose that $2\in\bar e\cap V_2$ with $\bar e\in E_1$. We have $x_2<0$ and \begin{eqnarray*}
\left|\left[(\mathcal D-\mathcal A)\mathbf x^{k-1}\right]_2\right|&=&\left|d_2x_2^{k-1}-\sum_{e\in E_2}\prod_{w\in e\setminus\{2\}}x_w\right|\\
&=&\left|d_2x_2^{k-1}-\sum_{e\in E_2\setminus\{\bar e\}}\prod_{w\in e\setminus\{2\}}x_w-\prod_{w\in\bar e\setminus\{2\}}x_w\right|\\
&=&\left|d_2|x_2|^{k-1}-\sum_{e\in E_2\setminus\{\bar e\}}\prod_{w\in e\setminus\{2\}}x_w-\prod_{w\in\bar e\setminus\{2\}}|x_w|\right|\\
&\leq&\left|\left|d_2|x_2|^{k-1}+\sum_{e\in E_2\setminus\{\bar e\}}\prod_{w\in e\setminus\{2\}}|x_w|\right|-\prod_{w\in\bar e\setminus\{2\}}|x_w|\right|\\
&<&\left|d_2|x_2|^{k-1}+\sum_{e\in E_2}\prod_{w\in e\setminus\{2\}}|x_w|\right|\\ &=&\left[(\mathcal D+\mathcal A)\mathbf y^{k-1}\right]_2. \end{eqnarray*} Thus, we get a contradiction. Consequently, $\lambda(\mathcal L)<\lambda(\mathcal Q)$. \ep
Combining Theorem \ref{prop-3} and Proposition \ref{prop-4}, we have the following theorem.
\begin{Theorem} Let $G=(V,E)$ be a $k$-uniform hypergraph. Let $\mathcal L,\mathcal Q$ be the Laplacian and signless Laplacian tensors of $G$ respectively. Then \begin{eqnarray*} \lambda(\mathcal L)\leq\lambda(\mathcal Q). \end{eqnarray*} If furthermore $G$ is connected, then \begin{eqnarray*} \lambda(\mathcal L)=\lambda(\mathcal Q) \end{eqnarray*} if and only if $k$ is even and $G$ is odd-bipartite. \end{Theorem}
\section{Final Remarks} \setcounter{Theorem}{0} \setcounter{Proposition}{0} \setcounter{Corollary}{0} \setcounter{Lemma}{0} \setcounter{Definition}{0} \setcounter{Remark}{0} \setcounter{Conjecture}{0} \setcounter{Example}{0} \hspace{4mm} In this paper, the largest Laplacian and signless Laplacian H-eigenvalues of a uniform hypergraph are discussed. The largest signless Laplacian H-eigenvalue is the spectral radius of the signless Laplacian tensor \cite{cpz08,q12a,yy11}, since the signless Laplacian tensor is a nonnegative tensor. There is a sophisticated theory for the spectral radius of a nonnegative tensor. Thus, the corresponding theory for the largest signless Laplacian H-eigenvalue is clear. On the other hand, the largest Laplacian H-eigenvalue is more subtle. It can be seen that there are neat and simple characterizations for the lower bound of the largest Laplacian H-eigenvalue of an even-uniform hypergraph (Theorem \ref{thm-5}). These are largely due to Lemma \ref{lem-5}. For odd-uniform hypergraphs, however, the current theory is incomplete; this will be the next topic to investigate.
{\bf Acknowledgement.} The authors are grateful to Prof. Jia-Yu Shao for his comments, and Prof. Xiaodong Zhang for Reference \cite{z07}.
\end{document}
# Fundamentals of derivatives and rules of differentiation
A derivative measures the rate at which a function changes as its input changes. It tells us how the function behaves locally, providing information about its slope or rate of change at a specific point. The derivative of a function is denoted by the prime symbol (') or by using the notation $\frac{dy}{dx}$, where $y$ is the dependent variable and $x$ is the independent variable.
To find the derivative of a function, we use the process of differentiation. The derivative of a function $f(x)$ is defined as the limit of the difference quotient as the change in $x$ approaches zero:
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
This limit represents the instantaneous rate of change of the function at a specific point. It gives us the slope of the tangent line to the graph of the function at that point.
Now that we understand the concept of a derivative, let's explore the rules of differentiation. These rules allow us to find the derivative of more complex functions by applying a set of predefined rules.
1. Constant Rule: The derivative of a constant function is zero. For example, if $f(x) = 5$, then $f'(x) = 0$.
2. Power Rule: The derivative of a power function is obtained by multiplying the exponent by the coefficient and reducing the exponent by one. For example, if $f(x) = ax^n$, where $a$ and $n$ are constants, then $f'(x) = anx^{n-1}$.
3. Sum and Difference Rule: The derivative of a sum or difference of functions is equal to the sum or difference of their derivatives. For example, if $f(x) = g(x) + h(x)$, then $f'(x) = g'(x) + h'(x)$.
4. Product Rule: The derivative of a product of two functions is given by the first function times the derivative of the second function, plus the second function times the derivative of the first function. For example, if $f(x) = g(x) \cdot h(x)$, then $f'(x) = g'(x) \cdot h(x) + g(x) \cdot h'(x)$.
5. Quotient Rule: The derivative of a quotient of two functions is given by the denominator function times the derivative of the numerator function, minus the numerator function times the derivative of the denominator function, all divided by the square of the denominator function. For example, if $f(x) = \frac{g(x)}{h(x)}$, then $f'(x) = \frac{g'(x) \cdot h(x) - g(x) \cdot h'(x)}{(h(x))^2}$.
These rules provide a systematic way to find the derivative of a wide range of functions. By applying these rules, we can analyze the behavior of functions and solve various problems in engineering and science.
Consider the function $f(x) = 3x^2 + 2x - 1$. To find its derivative, we can apply the power rule and the sum rule:
$$f'(x) = (3 \cdot 2x^{2-1}) + (2 \cdot 1x^{1-1}) + (0)$$
$$f'(x) = 6x + 2$$
The derivative of $f(x)$ is $6x + 2$. This tells us that the slope of the tangent line to the graph of $f(x)$ at any point is given by the expression $6x + 2$.
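If you have access to Mathematica (which is used later in this text for differential equations and optimization), you can check these differentiation rules symbolically with the built-in `D` function; the expressions below are illustrative choices.

```mathematica
D[3 x^2 + 2 x - 1, x]       (* 2 + 6 x, matching the example above *)
D[x^2 Sin[x], x]            (* product rule: 2 x Sin[x] + x^2 Cos[x] *)
Simplify[D[Sin[x]/x, x]]    (* quotient rule: (x Cos[x] - Sin[x])/x^2 *)
```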
## Exercise
Find the derivative of the following functions using the rules of differentiation:
1. $f(x) = 4x^3 - 2x^2 + 5x - 3$
2. $g(x) = \sqrt{x} + \frac{1}{x}$
3. $h(x) = e^x \cdot \sin(x)$
### Solution
1. $f'(x) = 12x^2 - 4x + 5$
2. $g'(x) = \frac{1}{2\sqrt{x}} - \frac{1}{x^2}$
3. $h'(x) = e^x \cdot \sin(x) + e^x \cdot \cos(x)$
# Applications of derivatives in engineering and science
One common application of derivatives is in physics, where they are used to describe the motion of objects. The derivative of a position function gives us the velocity of an object, while the derivative of the velocity function gives us the acceleration. By analyzing these derivatives, we can understand how objects move and predict their future positions.
Derivatives are also used in optimization problems, where the goal is to find the maximum or minimum value of a function. For example, in engineering, we may want to find the dimensions of a container that minimize material usage while still meeting certain capacity requirements. By taking the derivative of the cost function with respect to the dimensions, we can find the optimal solution.
In economics, derivatives are used to analyze supply and demand curves. The derivative of a demand function gives us the rate at which the quantity demanded changes with respect to the price. This information is crucial for understanding market behavior and making informed decisions.
Derivatives are also used in signal processing, where they are used to analyze and manipulate signals. For example, the derivative of a sound wave can be used to determine its frequency or pitch. In image processing, derivatives are used to detect edges and features in images.
These are just a few examples of the many applications of derivatives in engineering and science. By understanding the concepts and rules of differentiation, we can apply them to solve a wide range of problems and gain valuable insights into the world around us.
Consider a ball being thrown into the air. The height of the ball at any time can be described by the function $h(t) = -16t^2 + 20t + 5$, where $t$ is the time in seconds and $h$ is the height in feet. To find the velocity and acceleration of the ball, we can take the derivatives of the position function:
$$v(t) = h'(t) = -32t + 20$$
$$a(t) = v'(t) = -32$$
The velocity function $v(t)$ tells us how fast the ball is moving at any time, while the acceleration function $a(t)$ tells us how the velocity is changing. By analyzing these derivatives, we can understand the motion of the ball and answer questions such as when it reaches its maximum height or how long it takes to hit the ground.
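As a small sketch (Mathematica is introduced later in this text), we can let the computer answer these questions about the ball; the function names `h`, `v`, and `a` are our own choices.

```mathematica
h[t_] := -16 t^2 + 20 t + 5;   (* height in feet *)
v[t_] := h'[t];                (* velocity: -32 t + 20 *)
a[t_] := v'[t];                (* acceleration: -32 *)
Solve[v[t] == 0, t]            (* maximum height is reached at t = 5/8 seconds *)
h[5/8]                         (* maximum height: 45/4 = 11.25 feet *)
NSolve[h[t] == 0, t]           (* the positive root is the time when the ball hits the ground *)
```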
## Exercise
A car is traveling along a straight road. The position of the car at any time $t$ can be described by the function $s(t) = 2t^3 - 5t^2 + 3t + 10$, where $s$ is the position in meters and $t$ is the time in seconds. Find the velocity and acceleration of the car at any time $t$.
### Solution
The velocity function is given by the derivative of the position function:
$$v(t) = s'(t) = 6t^2 - 10t + 3$$
The acceleration function is given by the derivative of the velocity function:
$$a(t) = v'(t) = 12t - 10$$
# Introduction to differential equations and their solutions
A differential equation typically involves an unknown function and its derivatives. The goal is to find a function that satisfies the equation. The order of a differential equation is determined by the highest derivative that appears in the equation. For example, a first-order differential equation involves the first derivative, while a second-order differential equation involves the second derivative.
Differential equations can be classified into different types based on their form. Some common types include:
- Ordinary differential equations (ODEs): These equations involve a single independent variable and its derivatives. They are used to model phenomena that occur in one dimension, such as the motion of a particle along a straight line.
- Partial differential equations (PDEs): These equations involve multiple independent variables and their derivatives. They are used to model phenomena that occur in multiple dimensions, such as the heat distribution in a solid.
Solving a differential equation involves finding a function that satisfies the equation. The general solution of a differential equation is a family of functions that includes all possible solutions. Additional conditions, known as initial or boundary conditions, are needed to determine the specific solution that describes a particular physical system.
Consider the following first-order ordinary differential equation:
$$\frac{dy}{dx} = x^2 - 3x$$
To solve this equation, note that the right-hand side depends only on $x$, so we can integrate both sides directly with respect to $x$:
$$y = \int (x^2 - 3x) dx$$
Applying the power rule to each term gives the general solution:
$$y = \frac{1}{3}x^3 - \frac{3}{2}x^2 + C$$
where $C$ is an arbitrary constant.
## Exercise
Solve the following first-order ordinary differential equation:
$$\frac{dy}{dx} = 2x - 1$$
### Solution
To solve this equation, we can separate the variables and integrate both sides:
$$\int dy = \int (2x - 1) dx$$
This simplifies to:
$$y = x^2 - x + C$$
where $C$ is the constant of integration.
# Solving differential equations using Mathematica
To solve a differential equation in Mathematica, we can use the `DSolve` function. This function takes the differential equation as input and returns the general solution. Let's look at an example.
Consider the following differential equation:
$$\frac{dy}{dx} = x^2 - 3x$$
To solve this equation using Mathematica, we can write:
```mathematica
DSolve[y'[x] == x^2 - 3x, y[x], x]
```
The output will be the general solution of the differential equation:
$$y(x) = \frac{1}{3}x^3 - \frac{3}{2}x^2 + C$$
where $C$ is the constant of integration.
Mathematica also allows us to specify initial or boundary conditions to determine the specific solution that describes a particular physical system. We can use the `DSolve` function with additional arguments to include these conditions. Let's see an example.
Consider the following initial value problem:
$$\frac{dy}{dx} = x^2 - 3x, \quad y(0) = 2$$
To solve this problem using Mathematica, we can write:
```mathematica
DSolve[{y'[x] == x^2 - 3x, y[0] == 2}, y[x], x]
```
The output will be the specific solution that satisfies the initial condition:
$$y(x) = \frac{1}{3}x^3 - \frac{3}{2}x^2 + 2$$
# Applications of differential equations in engineering and science
One common application of differential equations is in population dynamics. The growth or decline of a population over time can be modeled using a differential equation. For example, the logistic equation is a commonly used model for population growth. It is given by:
$$\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)$$
where $P$ is the population, $t$ is time, $r$ is the growth rate, and $K$ is the carrying capacity.
Another important application of differential equations is in physics, particularly in the study of motion. The motion of objects can be described using differential equations, such as Newton's second law of motion. For example, the equation of motion for a simple harmonic oscillator is given by:
$$\frac{d^2x}{dt^2} + \omega^2x = 0$$
where $x$ is the displacement of the object, $t$ is time, and $\omega$ is the angular frequency.
In addition to population dynamics and physics, differential equations are also used in many other fields, such as electrical engineering, chemical engineering, and economics. They provide a powerful tool for modeling and analyzing complex systems.
Using Mathematica, we can solve these differential equations and obtain their solutions. The built-in functions in Mathematica, such as `DSolve` and `NDSolve`, make it easy to solve a wide range of differential equations and analyze their behavior.
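For instance, the two models mentioned above can be handled directly; the parameter values used below ($r = 0.5$, $K = 100$, $P(0) = 10$) are illustrative choices, not part of the models themselves.

```mathematica
DSolve[x''[t] + w^2 x[t] == 0, x[t], t]   (* harmonic oscillator: C[1] Cos[w t] + C[2] Sin[w t] *)

logistic = NDSolve[{P'[t] == 0.5 P[t] (1 - P[t]/100), P[0] == 10}, P, {t, 0, 25}];
Plot[Evaluate[P[t] /. logistic], {t, 0, 25}]   (* S-shaped growth toward the carrying capacity K = 100 *)
```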
# Fundamentals of integration and rules of integration
Integration is a fundamental concept in calculus that deals with finding the area under a curve. It has wide-ranging applications in engineering and science, such as calculating the total distance traveled by an object, finding the volume of a solid, and determining the average value of a function.
To understand integration, we first need to understand the concept of an antiderivative. An antiderivative of a function $f(x)$ is a function $F(x)$ whose derivative is equal to $f(x)$. In other words, if $F'(x) = f(x)$, then $F(x)$ is an antiderivative of $f(x)$.
The process of finding an antiderivative is called integration. The symbol $\int$ is used to represent integration, and the function being integrated is called the integrand. For example, $\int f(x) dx$ represents the integral of $f(x)$ with respect to $x$.
There are several rules and techniques that can be used to evaluate integrals. In this section, we will cover the fundamental rules of integration, including the power rule, the constant rule, and the sum rule. These rules allow us to find the antiderivative of a wide variety of functions.
The power rule states that if $f(x) = x^n$, where $n$ is any real number except -1, then the antiderivative of $f(x)$ is given by:
$$\int x^n dx = \frac{1}{n+1}x^{n+1} + C$$
where $C$ is the constant of integration.
The constant rule states that if $f(x) = c$, where $c$ is a constant, then the antiderivative of $f(x)$ is given by:
$$\int c dx = cx + C$$
where $C$ is the constant of integration.
The sum rule states that if $f(x) = g(x) + h(x)$, then the antiderivative of $f(x)$ is the sum of the antiderivatives of $g(x)$ and $h(x)$:
$$\int (g(x) + h(x)) dx = \int g(x) dx + \int h(x) dx$$
These rules provide a starting point for evaluating integrals. However, more complex functions may require additional techniques, such as substitution or integration by parts. We will explore these techniques in later sections.
Let's apply the power rule to find the antiderivative of the function $f(x) = 3x^2$. Using the power rule, we have:
$$\int 3x^2 dx = 3 \cdot \frac{1}{3}x^3 + C = x^3 + C$$
where $C$ is the constant of integration.
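Mathematica's built-in `Integrate` function can be used to check antiderivatives; note that it omits the arbitrary constant $C$.

```mathematica
Integrate[3 x^2, x]     (* x^3, matching the example above (up to the constant C) *)
Integrate[x^n, x]       (* x^(1 + n)/(1 + n), the power rule for n != -1 *)
Integrate[5 + 2 x, x]   (* 5 x + x^2, illustrating the constant and sum rules *)
```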
## Exercise
Find the antiderivative of the following functions using the power rule:
1. $f(x) = 5x^4$
2. $g(x) = \frac{1}{2}x^{-3}$
3. $h(x) = 7x^0$
### Solution
1. $\int 5x^4 dx = \frac{5}{5}x^5 + C = x^5 + C$
2. $\int \frac{1}{2}x^{-3} dx = \frac{1}{2} \cdot \frac{1}{-2}x^{-2} + C = -\frac{1}{4}x^{-2} + C$
3. $\int 7x^0 dx = 7x + C$
# Applications of integration in engineering and science
One common application of integration is in finding the area under a curve. This can be used to calculate the total distance traveled by an object, the volume of a solid, or the amount of material needed to construct a structure.
For example, let's say we have a velocity function $v(t)$ that describes the velocity of an object at each point in time. To find the total distance traveled by the object, we can integrate the absolute value of the velocity function over a specific time interval. This gives us the area under the velocity curve, which represents the total distance traveled.
Another application of integration is in finding the average value of a function. This can be useful in analyzing data or determining the average value of a physical quantity over a given interval.
Integration is also used in solving differential equations, which are equations that involve derivatives. By integrating both sides of a differential equation, we can find the general solution that satisfies the equation.
In engineering, integration is used in various fields such as electrical engineering, mechanical engineering, and civil engineering. It is used to analyze systems, calculate forces, and solve optimization problems.
In science, integration is used in physics to calculate quantities such as work, energy, and power. It is also used in biology to model population growth and in chemistry to calculate reaction rates.
In the following sections, we will explore specific examples and applications of integration in engineering and science.
A common application of integration is in calculating the work done by a force. The work done by a force is equal to the integral of the force over the distance traveled. For example, let's say we have a constant force of 10 Newtons acting on an object that moves 5 meters in the direction of the force. The work done by the force can be calculated as:
$$\text{Work} = \int_0^5 F \, dx = \int_0^5 10 \, dx$$
where $x$ represents the distance traveled in meters. Evaluating the definite integral, we get:
$$\text{Work} = \left[10x\right]_0^5 = 10 \cdot 5 - 10 \cdot 0 = 50 \text{ Joules}$$
## Exercise
Calculate the work done by a force of 8 Newtons acting on an object that moves 3 meters in the direction of the force.
### Solution
The work done by the force can be calculated as:
$$\text{Work} = \int_0^3 F \, dx = \int_0^3 8 \, dx$$
Evaluating the definite integral, we get:
$$\text{Work} = \left[8x\right]_0^3 = 8 \cdot 3 - 8 \cdot 0 = 24 \text{ Joules}$$
# Introduction to multivariable calculus
Multivariable calculus is an extension of single-variable calculus that deals with functions of multiple variables. In single-variable calculus, we study functions of a single variable and their derivatives and integrals. In multivariable calculus, we extend these concepts to functions of multiple variables and their partial derivatives and multiple integrals.
In this section, we will introduce the basic concepts of multivariable calculus, including functions of multiple variables, partial derivatives, and the gradient. We will also explore the applications of multivariable calculus in engineering and science.
A function of multiple variables is a function that depends on more than one variable. For example, a function $f(x, y)$ could depend on both $x$ and $y$. We can think of $x$ and $y$ as the independent variables, and $f(x, y)$ as the dependent variable.
The partial derivative of a function with respect to one of its variables measures how the function changes when only that variable is varied, holding all other variables constant. For example, the partial derivative $\frac{\partial f}{\partial x}$ measures how $f(x, y)$ changes with respect to $x$, while keeping $y$ constant.
The gradient of a function is a vector that points in the direction of the steepest increase of the function at a given point. It is a generalization of the derivative to multiple dimensions. The gradient is denoted by the symbol $\nabla$ and is defined as:
$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}, ...\right)$$
The gradient is useful in optimization problems, where we want to find the maximum or minimum of a function.
In engineering and science, multivariable calculus is used in various fields such as physics, economics, and computer science. It is used to model and analyze complex systems, optimize designs, and solve problems involving multiple variables.
In the following sections, we will delve deeper into the concepts of multivariable calculus and explore their applications in engineering and science.
Suppose we have a function $f(x, y) = x^2 + 2xy + y^2$. We can calculate the partial derivatives of $f$ with respect to $x$ and $y$ as follows:
$$\frac{\partial f}{\partial x} = 2x + 2y$$
$$\frac{\partial f}{\partial y} = 2x + 2y$$
The gradient of $f$ is:
$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = (2x + 2y, 2x + 2y)$$
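These partial derivatives and the gradient can be checked with Mathematica's built-in `D` and `Grad` functions:

```mathematica
f[x_, y_] := x^2 + 2 x y + y^2;
D[f[x, y], x]            (* 2 x + 2 y *)
D[f[x, y], y]            (* 2 x + 2 y *)
Grad[f[x, y], {x, y}]    (* {2 x + 2 y, 2 x + 2 y} *)
```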
## Exercise
Calculate the partial derivatives and the gradient of the function $g(x, y) = 3x^2 - 4xy + 2y^2$.
### Solution
The partial derivatives of $g$ with respect to $x$ and $y$ are:
$$\frac{\partial g}{\partial x} = 6x - 4y$$
$$\frac{\partial g}{\partial y} = -4x + 4y$$
The gradient of $g$ is:
$$\nabla g = \left(\frac{\partial g}{\partial x}, \frac{\partial g}{\partial y}\right) = (6x - 4y, -4x + 4y)$$
# Applications of multivariable calculus in engineering and science
Multivariable calculus has numerous applications in engineering and science. It allows us to model and analyze complex systems, optimize designs, and solve problems involving multiple variables.
One common application of multivariable calculus is in physics, where it is used to model and analyze the motion of objects. For example, we can use multivariable calculus to calculate the velocity and acceleration of an object moving in three-dimensional space. We can also use it to analyze the forces acting on an object and determine its equilibrium position.
Multivariable calculus is also used in economics to model and analyze supply and demand curves. It allows us to optimize production and consumption decisions, and determine the equilibrium price and quantity in a market.
In computer science, multivariable calculus is used in machine learning and data analysis. It allows us to model and analyze complex datasets, and optimize algorithms for tasks such as image recognition and natural language processing.
In engineering, multivariable calculus is used in various fields such as electrical engineering, mechanical engineering, and civil engineering. It is used to model and analyze systems, optimize designs, and solve problems involving multiple variables.
In the following sections, we will explore specific examples and applications of multivariable calculus in engineering and science.
Suppose we have a system of equations that describes the motion of a particle in three-dimensional space:
$$\frac{dx}{dt} = 2t$$
$$\frac{dy}{dt} = 3t^2$$
$$\frac{dz}{dt} = 4t^3$$
We can use multivariable calculus to calculate the velocity and acceleration of the particle at any given time. The velocity is the derivative of the position with respect to time, and the acceleration is the derivative of the velocity with respect to time.
The velocity of the particle is given by:
$$\mathbf{v}(t) = \left(\frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt}\right) = (2t, 3t^2, 4t^3)$$
The acceleration of the particle is given by:
$$\mathbf{a}(t) = \frac{d\mathbf{v}}{dt} = \left(\frac{d^2x}{dt^2}, \frac{d^2y}{dt^2}, \frac{d^2z}{dt^2}\right) = (2, 6t, 12t^2)$$
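A short Mathematica check of this computation (the name `vel` is our own choice):

```mathematica
vel[t_] := {2 t, 3 t^2, 4 t^3};   (* the velocity vector from the example *)
D[vel[t], t]                      (* acceleration: {2, 6 t, 12 t^2} *)
```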
## Exercise
Calculate the velocity and acceleration of a particle in two-dimensional space, given the following equations:
$$\frac{dx}{dt} = 3t$$
$$\frac{dy}{dt} = 4t^2$$
### Solution
The velocity of the particle is given by:
$$\mathbf{v}(t) = \left(\frac{dx}{dt}, \frac{dy}{dt}\right) = (3t, 4t^2)$$
The acceleration of the particle is given by:
$$\mathbf{a}(t) = \frac{d\mathbf{v}}{dt} = \left(\frac{d^2x}{dt^2}, \frac{d^2y}{dt^2}\right) = (3, 8t)$$
# Optimization techniques and their applications
One common optimization problem is finding the maximum or minimum of a function subject to certain constraints. This is known as constrained optimization. For example, we may want to find the maximum volume of a box that can be made from a sheet of cardboard of fixed size.
To solve constrained optimization problems, we can use techniques such as Lagrange multipliers and linear programming. Lagrange multipliers allow us to find the maximum or minimum of a function subject to equality constraints. Linear programming allows us to find the maximum or minimum of a linear function subject to linear inequality constraints.
Another type of optimization problem is unconstrained optimization, where we want to find the maximum or minimum of a function without any constraints. For example, we may want to find the minimum cost of producing a certain quantity of a product.
To solve unconstrained optimization problems, we can use techniques such as gradient descent and Newton's method. Gradient descent is an iterative optimization algorithm that uses the gradient of the function to find the minimum. Newton's method is an iterative optimization algorithm that uses the derivative of the function to find the minimum.
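As a minimal sketch of the idea behind gradient descent (the objective $(x-3)^2$, the step size $0.1$, and the starting point $0$ are arbitrary illustrative choices, not a recommended setup):

```mathematica
f[x_] := (x - 3)^2;
step[x_] := x - 0.1 f'[x];   (* move a small amount against the derivative *)
NestList[step, 0., 10]       (* the iterates approach the minimizer x = 3 *)
```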
Optimization techniques are used in various fields such as engineering, economics, and computer science. They are used to optimize designs, maximize profits, and solve complex problems involving multiple variables.
In the following sections, we will delve deeper into optimization techniques and explore their applications in engineering and science.
Suppose we want to find the maximum volume of a rectangular box that can be made from a sheet of cardboard of fixed size. The sheet of cardboard has a length of 10 units and a width of 6 units, so its area is $10 \cdot 6 = 60$ square units.
Let's denote the length, width, and height of the box as $l$, $w$, and $h$ respectively. The volume of the box is given by $V = lwh$.
We want to maximize the volume of the box subject to the constraint that the total surface area of the box is equal to the area of the sheet of cardboard.
The surface area of the box is given by $A = 2lw + 2lh + 2wh$. The constraint is $A = 60$.
To solve this constrained optimization problem, we can use Lagrange multipliers. The Lagrangian function is defined as $L = V - \lambda(A - 60)$, where $\lambda$ is the Lagrange multiplier.
We can take the partial derivatives of $L$ with respect to $l$, $w$, $h$, and $\lambda$ and set them equal to zero to find the critical points. For example, $\frac{\partial L}{\partial l} = wh - 2\lambda(w + h) = 0$. Solving these equations, we find that the maximum volume occurs when the box is a cube with $l = w = h = \sqrt{10}$, and the multiplier is $\lambda = \frac{\sqrt{10}}{4}$.
The maximum volume of the box is $V = (\sqrt{10})^3 = 10\sqrt{10} \approx 31.6$ cubic units.
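As a sketch of how this could be checked numerically (using Mathematica's built-in `NMaximize` function; the constraint is the total-surface-area condition $2lw + 2lh + 2wh = 60$ from the example above):

```mathematica
(* Numerically maximize the volume subject to the surface-area constraint from the text *)
NMaximize[{l w h, 2 l w + 2 l h + 2 w h == 60, l > 0, w > 0, h > 0}, {l, w, h}]
(* approximately {31.62, {l -> 3.162, w -> 3.162, h -> 3.162}}, i.e. a cube with side Sqrt[10] *)
```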
## Exercise
Find the minimum cost of producing a certain quantity of a product, given the cost function $C(q) = 100 + 5q^2$, where $q$ is the quantity of the product.
### Solution
To find the minimum cost, we need to find the minimum of the cost function $C(q) = 100 + 5q^2$.
We can use unconstrained optimization techniques to solve this problem. Taking the derivative of $C(q)$ with respect to $q$ and setting it equal to zero, we get:
$$\frac{dC}{dq} = 10q = 0$$
Solving this equation, we find that the minimum occurs when $q = 0$.
The minimum cost of producing the product is $C(0) = 100 + 5(0)^2 = 100$.
# Using Mathematica for optimization problems
Mathematica is a powerful computational software that can be used to solve optimization problems. It provides a wide range of built-in functions and algorithms for optimization, making it a valuable tool for engineers and scientists.
In this section, we will explore how to use Mathematica for optimization problems. We will cover topics such as defining objective functions, setting constraints, and finding optimal solutions.
To get started, let's first understand the basic syntax and structure of Mathematica. Mathematica uses a functional programming language, which means that computations are performed by applying functions to arguments.
To define an objective function, we can use a pattern-based definition with `:=` (or, alternatively, the `Function` or `#` pure-function notation). For example, to define a simple objective function $f(x) = x^2$, we can write:
```mathematica
f[x_] := x^2
```
To find the minimum or maximum of the objective function, we can use the `Minimize` or `Maximize` functions. For example, to find the minimum of $f(x)$, we can write:
```mathematica
Minimize[f[x], x]
```
Mathematica will return the minimum value and the corresponding value of $x$.
In addition to defining objective functions, we can also set constraints on the variables. Mathematica provides various functions for setting constraints, such as `Element`, `Less`, `Greater`, etc. For example, to set the constraint $x > 0$, we can write:
```mathematica
Element[x, Reals] && x > 0
```
To solve optimization problems with constraints, we can use the `Minimize` or `Maximize` functions with the constraints specified. For example, to find the minimum of $f(x)$ subject to the constraint $x > 0$, we can write:
```mathematica
Minimize[{f[x], Element[x, Reals] && x > 0}, x]
```
Mathematica will find the minimum value of $f(x)$ that satisfies the constraint.
In the following examples, we will explore more advanced optimization problems and demonstrate how to use Mathematica to solve them.
Suppose we have the objective function $f(x, y) = x^2 + y^2$ and we want to find the minimum subject to the constraint $x + y = 1$.
To solve this problem using Mathematica, we can define the objective function and the constraint as follows:
```mathematica
f[x_, y_] := x^2 + y^2
constraint = x + y == 1
```
Then, we can use the `Minimize` function to find the minimum:
```mathematica
Minimize[{f[x, y], constraint}, {x, y}]
```
Mathematica will return the minimum value of $f(x, y)$ and the corresponding values of $x$ and $y$ that satisfy the constraint.
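As a quick hand check of this particular example, we can substitute the constraint $y = 1 - x$ into the objective function and minimize the resulting single-variable function:

$$x^2 + (1 - x)^2 = 2x^2 - 2x + 1, \qquad \frac{d}{dx}\left(2x^2 - 2x + 1\right) = 4x - 2 = 0 \implies x = \frac{1}{2}$$

so the minimum value is $\frac{1}{2}$, attained at $x = y = \frac{1}{2}$, which matches the result returned by `Minimize`.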
## Exercise
Use Mathematica to find the maximum of the objective function $f(x) = \sin(x)$ subject to the constraint $0 \leq x \leq \pi$.
### Solution
To solve this problem using Mathematica, we can define the objective function and the constraint as follows:
```mathematica
f[x_] := Sin[x]
constraint = 0 <= x <= Pi
```
Then, we can use the `Maximize` function to find the maximum:
```mathematica
Maximize[{f[x], constraint}, x]
```
Mathematica will return the maximum value of $f(x)$ and the corresponding value of $x$ that satisfies the constraint.
Projection (relational algebra)
In relational algebra, a projection is a unary operation written as $\Pi _{a_{1},...,a_{n}}(R)$, where $R$ is a relation and $a_{1},...,a_{n}$ are attribute names. Its result is defined as the set obtained when the components of the tuples in $R$ are restricted to the set $\{a_{1},...,a_{n}\}$ – it discards (or excludes) the other attributes.[1]
In practical terms, if a relation is thought of as a table, then projection can be thought of as picking a subset of its columns. For example, if the attributes are (name, age), then projection of the relation {(Alice, 5), (Bob, 8)} onto attribute list (age) yields {5,8} – we have discarded the names, and only know what ages are present.
Projections may also modify attribute values. For example, if $R$ has attributes $a$, $b$, $c$, where the values of $b$ are numbers, then $\Pi _{a,\ b*0.5,\ c}(R)$ is like $R$, but with all $b$-values halved.[2]
Related concepts
The closely related concept in set theory (see: projection (set theory)) differs from that of relational algebra in that, in set theory, one projects onto ordered components, not onto attributes. For instance, projecting $(3,7)$ onto the second component yields 7.
Projection is relational algebra's counterpart of existential quantification in predicate logic. The attributes not included correspond to existentially quantified variables in the predicate whose extension the operand relation represents. The example below illustrates this point.
Because of the correspondence with existential quantification, some authorities prefer to define projection in terms of the excluded attributes. In a computer language it is of course possible to provide notations for both, and that was done in ISBL and several languages that have taken their cue from ISBL.
A nearly identical concept occurs in the category of monoids, called a string projection, which consists of removing all of the letters in the string that do not belong to a given alphabet.
When implemented in the SQL standard, the "default projection" returns a multiset instead of a set, and the π projection is obtained by the addition of the DISTINCT keyword to eliminate duplicate data.
Example
For an example, consider the relations depicted in the following two tables which are the relation Person and its projection on (some say "over") the attributes Age and Weight:
${\text{Person}}$
Name Age Weight
Harry 34 180
Sally 28 164
George 28 170
Helena 54 154
Peter 34 180

$\Pi _{\text{Age,Weight}}({\text{Person}})$
Age Weight
34 180
28 164
28 170
54 154
Suppose the predicate of Person is "Name is age years old and weighs weight." Then the given projection represents the predicate, "There exists Name such that Name is age years old and weighs weight."
Note that Harry and Peter have the same age and weight, but since the result is a relation, and therefore a set, this combination only appears once in the result.
More formally the semantics of projection are defined as follows:
$\Pi _{a_{1},...,a_{n}}(R)=\{\ t[a_{1},...,a_{n}]:\ t\in R\ \},$
where $t[a_{1},...,a_{n}]$ is the restriction of the tuple $t$ to the set $\{a_{1},...,a_{n}\}$ so that
$t[a_{1},...,a_{n}]=\{\ (a',v)\ |\ (a',v)\in t,\ a'\in \{a_{1},...,a_{n}\}\},$
where $(a',v)$ is an attribute value, $a'$ is an attribute name, and $v$ is an element of that attribute's domain — see Relation (database).
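For instance, taking $t$ to be the first tuple of the Person relation above, $t=\{({\text{Name}},{\text{Harry}}),({\text{Age}},34),({\text{Weight}},180)\}$, its restriction to the attribute set $\{{\text{Age}},{\text{Weight}}\}$ is $t[{\text{Age}},{\text{Weight}}]=\{({\text{Age}},34),({\text{Weight}},180)\}$.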
The result of a projection $\Pi _{a_{1},...,a_{n}}(R)$ is defined only if $\{a_{1},...,a_{n}\}$ is a subset of the header of $R$.
Projection over no attributes at all is possible, yielding a relation of degree zero. In this case the cardinality of the result is zero if the operand is empty, otherwise one. The two relations of degree zero are the only ones that cannot be depicted as tables.
See also
• Projection (set theory)
References
1. "Relational Algebra". cs.rochester.edu. Retrieved 2014-07-28.
2. http://www.csee.umbc.edu/~pmundur/courses/CMSC661-02/rel-alg.pdf See Problem 3.8.B on page 3
\begin{document}
\pagestyle{empty}
\title{Phase transitions and {\em all that}} \author{Gabriel Istrate\thanks{e-mail: [email protected],
NISAC, National Infrastructure Simulation Analysis Center,
Los Alamos National Laboratory,
Mail Stop M 997, Los Alamos, NM 87545, U.S.A.}} \date{}
\maketitle \thispagestyle{empty} \section{Introduction}
Since the experimental paper of Cheeseman, Kanefsky and Taylor \cite{cheeseman-kanefsky-taylor}, {\em phase transitions in combinatorial problems} have held the promise of shedding light on the ``practical'' algorithmic complexity of combinatorial problems. However, the connection conjectured in \cite{cheeseman-kanefsky-taylor} was easily seen to be inaccurate. A much more realistic possible connection has been highlighted by the results (based on experimental evidence and nonrigorous arguments from Statistical Mechanics) of Monasson et al. \cite{2+p:nature} (see also \cite{2+p:rsa}). These results supported the conjecture that it is {\em first-order phase transitions} that have algorithmic implications for the complexity of restricted classes of algorithms, including the important class of {\em Davis-Putnam-Logemann-Loveland (DPLL) algorithms} \cite{beame:dp}.
There exists, indeed, a nonrigorous argument supporting this conjecture: phase transitions amount to nonanalytical behavior of a certain {\em order parameter}; the phase transition is {\em first order} if the order parameter is actually discontinuous. At least for random $k$-SAT \cite{monasson:zecchina} the order parameter suggested by Statistical Mechanics considerations has a purely combinatorial interpretation: it is the {\em backbone} of the formula, the set of literals that assume the same value in all optimal assignments. But intuitively one can relate (see e.g. the presentation of this argument by Achlioptas, Beame and Molloy \cite{achlioptas:beame:molloy:slides}) the size of the backbone to the complexity of DPLL algorithms, when run on random $k$-SAT instances slightly above the phase transition: All literals in the backbone require well-defined values in order to satisfy the formula. But a DPLL algorithm has very few ways to know what those ``right'' values are. If w.h.p. the backbone of formulas above the transition contains a positive fraction of the literals that is bounded away from zero as we approach the transition (which happens in a case of a first-order phase transition) then, intuitively, DPLL will misassign a variable having $\Omega(n)$ height in the tree representing the behavior of the algorithm, and will be forced to backtrack on the given variable. {\em The conclusion of this intuitive argument is that a first-order phase transition implies a $2^{\Omega(n)}$ lower bound for the running time of any DPLL algorithm, valid with high probability for random instances located slightly above the transition. }
While previous rigorous results \cite{2+psat:ralcom97,scaling:window:2sat,achlioptas:beame:molloy}, supported these intuitions, to date, the extent of a connection between first-order phase transitions and algorithmic complexity was unclear.
{\bf The goals of this paper are \begin{enumerate} \item To remedy this, and formally establish a connection between first-order phase transitions and the resolution complexity of random satisfiability problems, and \item To take steps towards obtaining a complete classification of the order of phase transition in generalized satisfiability problems. \end{enumerate} }
To accomplish these goals
\begin{enumerate} \item we obtain (Theorem~\ref{dichotomy:threshold}) a complete characterization of sharp/coarse thresholds in the random generalized satisfiability model due to Molloy \cite{molloy-stoc2002}. ``Physical'' arguments (see discussion below) imply that it makes no sense to study the order of the phase transition unless the problem has a sharp threshold. \item we rigorously prove (Theorem~\ref{3sat:first-order}) that random 3-SAT has a first-order phase transition. We extend this result in several ways: first (Theorem~\ref{2+p-sat:first-order}) to random $(2+p)$-satisfiability, the original problem from \cite{2+p:nature}, obtaining further theoretical support for the heuristic results of \cite{2+p:nature}. Second, we give a sufficient condition (Theorem~\ref{sufficient:first-order}) for the existence of a first-order phase transition. We then show (Theorem~\ref{implicates:first-order}) that all problems whose constraints have no implicates of size at most two satisfy this condition. \item we show that in all the cases where we can prove the existence of a first-order phase transition, such problems have a $2^{\Omega(n)}$ lower bound on their resolution complexity (and hence on the complexity of DPLL algorithms as well \cite{beame:dp}). Indeed, the two phenomena ($2^{\Omega(n)}$ resolution complexity and the existence of a first-order phase transition) have common causes. \item in contrast, we show (Theorem~\ref{second:order}) that, for {\em any generalized satisfiability problem}, a second-order phase transition implies, for every $\alpha >0$, an $O(2^{\alpha \cdot n})$ upper bound on the resolution complexity of their random instances (in the region where most formulas are unsatisfiable). \end{enumerate}
\section{Preliminaries}
Throughout the paper we will assume familiarity with the general concepts of phase transitions in combinatorial problems (see e.g. \cite{martin:monasson:zecchina}), random structures \cite{bol:b:random-graphs}, and proof complexity \cite{beame:proof:survey}. Some papers whose concepts and methods we use in detail (and with which we assume greater familiarity) include \cite{friedgut:k:sat}, \cite{chvatal:szemeredi:resolution}, \cite{ben-sasson:resolution:width}.
Consider a monotonically increasing problem $A=(A_{n})$,
under the constant probability model $\Gamma(n,p)$, which independently sets each bit of the random string to 1 with probability $p$. As usual, for $\epsilon >0$ let $p_{\epsilon}= p_{\epsilon}(n)$ denote the canonical probability such that $\Pr_{x \in \Gamma(n,p_{\epsilon}(n))}[x \in A]= \epsilon$.
The probability that a random sample $x$ satisfies property $A$ (i.e. $x\in A$) is a monotonically increasing function of $p$. {\em Sharp thresholds} are those for which this function has a ``sudden jump'' from value 0 to 1:
\begin{definition} \label{sharp} Problem $A$ has a {\em sharp threshold} iff for every $0<\epsilon < 1/2$, we have $\lim_{n\rightarrow \infty} \frac{p_{1-\epsilon}(n)- p_{\epsilon}(n)}{p_{1/2}(n)} = 0$. $A$ has {\em a coarse threshold} if for some $\epsilon > 0$ it holds that $\underline{\lim}_{n\rightarrow \infty} \frac{p_{1-\epsilon}(n)- p_{\epsilon}(n)}{p_{1/2}(n)} > 0$. \end{definition}
For satisfiability problems (whose complements are monotonically increasing) the constant probability model amounts to adding every constraint (among those allowed by the syntactic specification of the model) to the random formula independently with probability $p$. Related definitions can be given for the other two models for generating random structures, the {\em counting model} and {\em the multiset model} \cite{bol:b:random-graphs}. Under reasonable conditions \cite{bol:b:random-graphs} these models are equivalent, and we will liberally switch between them. In particular, for a satisfiability problem $A$ and an instance $\Phi$ of $A$, $c_{A}(\Phi)$ will denote its {\em constraint density}, the ratio between the number of clauses and the number of variables of $\Phi$. To specify the random model in these latter cases we have to specify the constraint density as a function of $n$, the number of variables. We will use $c_{A}$ to denote the value of the constraint density $c_{A}(\Phi)$ (in the counting/multiset models) corresponding to taking $p=p_{1/2}$ in the constant probability model. $c_{A}$ is a function of $n$ that is believed to tend to a constant limit as $n\rightarrow \infty$. However, Friedgut's proof \cite{friedgut:k:sat} of a sharp threshold in $k$-SAT (and our results) leave this issue open.
The original investigation of the order of the phase transition in $k$-SAT used an order parameter called {\em the backbone}. Bollob\'{a}s et al. \cite{scaling:window:2sat} have investigated the order of the phase transition in 2-SAT under a different order parameter, a ``monotonic version'' of the backbone called {\em the spine}.
\begin{equation}\label{spine:initial}
Spine(\Phi) = \{ x\in Lit | (\exists) \Xi \subseteq \Phi, \Xi \in SAT, \Xi \wedge \{\overline{x}\}\in \overline{SAT}\}. \end{equation}
They showed that random 2-SAT has a continuous (second-order) phase transition: the size of the spine, normalized by dividing it by the number of variables, approaches zero (as $n\rightarrow \infty$) for $c<c_{2-SAT}=1$, and is continuous at $c=c_{2-SAT}$. By contrast, nonrigorous arguments from Statistical Mechanics \cite{monasson:zecchina} imply that for 3-SAT the spine jumps discontinuously from zero to positive values at the transition point $c=c_{3-SAT}$ (a first-order phase transition).
It is easy to see that the intuition concerning the connection between the complexity of DPLL algorithms and the size of the backbone (discussed briefly in the introduction) extends to the spine as well. In this paper whenever we will discuss the order of a phase transition we will do it with respect to this latter order parameter.
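For concreteness, the definition of the spine can be checked directly on small instances by exhaustive search; the following is a purely definitional, exponential-time sketch (the encoding of literals as signed integers is our own convention):
\begin{verbatim}
from itertools import chain, combinations, product

def satisfiable(clauses, n):
    """Exhaustive satisfiability test; clauses are sets of nonzero integers
    over variables 1..n (positive = variable, negative = its negation)."""
    return any(
        all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for a in product([False, True], repeat=n))

def spine(clauses, n):
    """Literals x for which some satisfiable subformula Xi has Xi AND {not x}
    unsatisfiable -- a direct transcription of the displayed definition."""
    literals = [s * v for v in range(1, n + 1) for s in (1, -1)]
    result = set()
    subformulas = chain.from_iterable(
        combinations(clauses, r) for r in range(len(clauses) + 1))
    for xi in subformulas:
        xi = list(xi)
        if not satisfiable(xi, n):
            continue
        for x in literals:
            if x not in result and not satisfiable(xi + [{-x}], n):
                result.add(x)
    return result
\end{verbatim}
For instance, for the formula $\{x_{1}\}\wedge\{\overline{x_{1}}\vee x_{2}\}$ this returns exactly the literals $x_{1}$ and $x_{2}$.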
We would like to obtain a complete classification of the order of the phase transition in random satisfiability problems. A preliminary problem we have to deal with is characterizing those problems that have a sharp threshold: indeed, Physics considerations require that, in order for the study of the (order of the) phase transition to be meaningful, the order parameter (in the case of $k$-SAT the spine) has to be, w.h.p., concentrated around its expected value (in Physics parlance it is a {\em self averaging quantity}), and it is zero up to a certain value of the control parameter (in our case constraint density $c$) and positive above it. In the case of $k$-SAT these conditions imply that $k$-SAT has a sharp threshold. The argument (a ``folklore'' one) can be formally expressed by the following
\begin{lemma}\label{spine-threshold} Let $c$ be an arbitrary {\em constant} value for the constraint density function. \begin{enumerate} \item If $c< \underline{lim}_{n\rightarrow \infty} c_{k-SAT}(n)$ then
$\lim_{n \rightarrow \infty} \frac{|Spine(\Phi)|}{n} =0$. \item If for some $c$ there exists $\delta > 0$ such that w.h.p. (as
$n\rightarrow \infty$) $\frac{|Spine(\Phi)|}{n} > \delta$ then $\lim_{n\rightarrow \infty}
\Pr[\Phi \in SAT]= 0$, that is $c> \overline{lim}_{n\rightarrow \infty} c_{k-SAT}(n)$. \end{enumerate} \end{lemma}
The argument is generic enough to extend to {\em all} constraint satisfaction problems. So a necessary condition for the study of the phase transition to be meaningful is that the problem have a sharp threshold.
\section{Coarse and sharp thresholds of random generalized satisfiability problems}
In this section we obtain a complete classification of thresholds of random satisfiability problems, under
Molloy's recent model of random constraint satisfaction problems from \cite{molloy-stoc2002} (specialized to satisfiability problems, i.e. problems with domain $\{0,1\}$). This affirmatively solves an open problem raised in \cite{creignou:daude:sat2002}.
\begin{definition}\label{model} Consider the set of all $2^{2^{k}}-1$ potential nonempty Boolean constraints on $k$ variables $X_{1}, \ldots, X_{k}$. We specify a probability distribution ${\cal P}$ which selects a single random constraint, and let ${\cal C}= supp({\cal P})$ be the set of constraints to which ${\cal P}$ assigns positive probability.
A random formula from $SAT_{n,M}({\cal P})$ is specified by the following procedure: \begin{itemize} \item $n$ is the number of variables. \item $M$ is the number of clauses. They are chosen by the following procedure: first select, uniformly at random and with repetition, $M$ hyperedges of the complete $k$-uniform hypergraph on the $n$ variables. \item for each hyperedge choose a random ordering of the variables involved. Choose a random constraint according to ${\cal P}$ and apply it on the list of (ordered) variables. \end{itemize} $SAT({\cal C})$ refers to the random model corresponding to ${\cal P}$ being the uniform distribution on ${\cal C}$. \end{definition}
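For illustration, the sampling procedure can be transcribed as follows (a hypothetical encoding of ours: a constraint template is represented by its set of allowed $\{0,1\}^{k}$ tuples, and ${\cal P}$ by a list of weights):
\begin{verbatim}
import random

def sample_formula(n, M, templates, weights, k):
    """Sketch of SAT_{n,M}(P): pick M hyperedges uniformly with repetition,
    order their variables at random, and apply a template drawn from P."""
    formula = []
    for _ in range(M):
        # random.sample returns k distinct variables in random order, which
        # serves both as the random hyperedge and as its random ordering
        scope = tuple(random.sample(range(1, n + 1), k))
        template = random.choices(templates, weights=weights, k=1)[0]
        formula.append((scope, template))
    return formula
\end{verbatim}
An applied constraint $(\mathit{scope},\mathit{template})$ is then satisfied by an assignment exactly when the tuple of values taken by the scope variables belongs to the template.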
It turns out we face a technical difficulty when studying sharp and coarse thresholds in Molloy's model; it cannot be directly mapped onto the constant probability model for which the notion of a sharp threshold in Definition~\ref{sharp} works. The definition of a sharp threshold we need to employ is the one from \cite{molloy-stoc2002}
\begin{definition}\label{sharp:2} $SAT({\cal P})$ is said to have {\em a sharp threshold of satisfiability} if there exists a function $c(n)$ bounded away from 0 such that, for any $\epsilon >0$ if $M<(c(n)-\epsilon) n$ then $SAT_{n,M}({\cal P})$ is a.s. satisfiable and if $M>(c(n)+\epsilon) n$ then $SAT_{n,M}({\cal P})$ is a.s. unsatisfiable. On the other hand, if there exist two functions $M_{1}(n), M_{2}(n)$ such that $M_{1}(n)/M_{2}(n)$ is bounded away from zero, and the satisfaction probability of random instances from $SAT_{n,M_{1}}({\cal P})$, $SAT_{n,M_{2}}({\cal P})$ is bounded away from both 0 and 1 then $SAT({\cal P})$ is said to have {\em a coarse threshold.} \end{definition}
However, just as in \cite{molloy-stoc2002} (where this was done in the case when ${\cal P}$ is the uniform distribution), for $k\geq 3$ one can map Molloy's model onto a modified version of the constant probability model, defined as follows: Let $p_{1}, \ldots p_{r}$ be positive numbers between zero and 1. A random sample $x$ from the model $\Gamma_{p_{1},\ldots p_{r}}(n,p)$ is obtained in the following way: divide the bits of $x$ into $r$ equal groups. Set each of the bits in the $i$'th group to 1 independently with probability $p\cdot p_{i}$. For this model the definitions of $p_{\epsilon}$ and sharp/coarse threshold from Definition~\ref{sharp} carry over, and are equivalent to those from Definition~\ref{sharp:2}.
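Concretely, sampling from $\Gamma_{p_{1},\ldots,p_{r}}(n,p)$ can be sketched as follows (the assignment of bit positions to groups is an arbitrary convention of ours, and we assume for simplicity that $r$ divides $n$):
\begin{verbatim}
import random

def sample_gamma(n, p, group_probs):
    """The modified constant-probability model: the n bits are split into
    r = len(group_probs) equal groups, and a bit in group i is set to 1
    independently with probability p * group_probs[i]."""
    r = len(group_probs)
    assert n % r == 0, "for simplicity, assume r divides n"
    size = n // r
    return [int(random.random() < p * group_probs[i])
            for i in range(r) for _ in range(size)]
\end{verbatim}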
Indeed, let $r$ be the cardinality of the support of distribution ${\cal P}$, and $p_{1}, \ldots, p_{r}$ be the associated positive probabilities.
In its general setting Molloy's model is specified as follows: divide the potential constraints into groups of $rk!$ constraints, corresponding to all possible applications of the $r$ constraint templates on a fixed set of $k$ variables. For each such group, independently, we make the decision to include at least one of the constraints in the group with probability $p=\frac{M}{rk!{{n}\choose {k}}}$ (going from $M$ clauses to including each potential edge independently with probability $p$ can be done just as in the uniform case from \cite{molloy-stoc2002}).
Each realization of constraint template $i$ is chosen with probability $p_{i}/k!$. Denote this model by $M(n,p,p_{1}, \ldots, p_{r})$.
Defining $f(x) = [(1+x pp_{1}/k!)\cdot (1+xpp_{2}/k!) \cdot \ldots \cdot (1+xpp_{r}/k!)]^{k!}- x$ we have $f(1)>0 $ and, since (by a simple calculus argument) the minimum of $f(x)$ over the choices of $p_{i}\geq 0$, $\sum p_{i}=1$
is obtained when one of them is 1 and the others are zero, \[ f(\frac{1}{1-p})\geq [1+\frac{p}{k!(1-p)}]^{k!}-\frac{1}{1-p}\geq 0. \]
Let $\alpha= \alpha(n) >0$ be the smallest solution of the equation $f(\alpha) =0$. Thus $\frac{1}{1-p} \leq \alpha$.
Define, for $i=1,r$, \[ p^{\prime}_{i}= \frac{1/k! \cdot \alpha \cdot p_{i}}{1+1/k! \cdot \alpha pp_{i}} \]
\begin{claim} The following hold for any $p=\theta(n^{1-k})$: \begin{enumerate} \item For every formula $\Phi$ such that no two constraints on the same set of variables appear in it, \[ \Pr_{M(n,p,p_{1}, \ldots, p_{r})}(\Phi)\geq \Pr_{\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)}[\Phi]. \] Consequently \begin{eqnarray*} \Pr_{M(n,p,p_{1}, \ldots, p_{r})}[SAT({\cal P})]\geq
\Pr_{\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)}[SAT({\cal P})| \\ \mbox{ no two constraints on the same set of variables appear in }\Phi ]. \end{eqnarray*} \item On the other hand, there exists $f(n) = 1+o(1)$ such that for every formula $\Phi$ such that no two constraints on the same set of variables appear in it, \[ \Pr_{M(n,p,p_{1}, \ldots, p_{r})}(\Phi)\leq f(n) \Pr_{\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)}[\Phi]. \] Consequently \begin{eqnarray*} \Pr_{M(n,p,p_{1}, \ldots, p_{r})}[SAT({\cal P})]\leq
(1+o(1))\Pr_{\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)}[SAT({\cal P})| \\ \mbox{ no two constraints on the same set of variables appear in }\Phi ]. \end{eqnarray*} \end{enumerate} \end{claim}
Indeed, consider the set of constraints on a fixed set of given variables. The probability (under $\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}$) that a given clause of type $i$ is included, and none of the others are is equal to $\frac{pp^{\prime}_{i}}{1- pp^{\prime}_{i}}\cdot [(1-pp^{\prime}_{1})\ldots (1-pp^{\prime}_{r})]^{k!} $. But \[ \frac{pp^{\prime}_{i}}{1-pp^{\prime}_{i}}= \alpha pp_{i}/k!. \]
Also \[ 1-pp^{\prime}_{i}= \frac{1}{1+\alpha pp_{i}/k!}, \]
so, by the definition of $\alpha$,
\[ [(1-pp^{\prime}_{1})\ldots (1-pp^{\prime}_{r})]^{k!}= \frac{1}{\alpha}. \]
This means that the probability that in a given set of constraints exactly one constraint (of type $i$) is chosen is equal to $pp_{i}/k!$, the same as in model $M$. On the other hand the probability that {\em no} constraint is chosen is equal to $[(1-pp^{\prime}_{1})\ldots (1-pp^{\prime}_{r})]^{k!}= \frac{1}{\alpha}$. But the same probability in model $M$ is $1-p$, and we know that $1-p \geq \frac{1}{\alpha}$. In both models, decisions on different sets of $k$ variables are independent. The conclusion is that $M$ assigns a larger probability than $\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}$ to any sample $x$ to which it assigns positive probability. Point (1) follows.
Point (2) has a similar proof: by calculus the maximum value of $f(x)$ is obtained when the $p_{i}$'s are equal, so \[ 0 = f(\alpha) \leq (1+\frac{\alpha p}{rk!})^{rk!} - \alpha \leq e^{\alpha p} - \alpha. \]
Since $p=o(1)$, for large enough $n$ $e^{\alpha p} \leq 1+ \alpha p + (\alpha p)^{2}$, so $(\alpha p)^{2}+ \alpha (p-1) +1 > 0$, in other words \[ \alpha (1-p) \leq 1+ (\alpha p)^{2}. \]
But the ratio of the probabilities associated to any given $\Phi$ by $M$ and $\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)$ verifies \[ \frac{ \Pr_{M(n,p,p_{1}, \ldots, p_{r})}(\Phi)}{\Pr_{\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)}[\Phi]}\leq [\alpha (1-p)]^{rk! {{n}\choose {k}}}\leq (1+ (\alpha p)^{2})^{rk! {{n}\choose {k}}}. \] Since $p=\theta(n^{1-k})$ and $k\geq 3$ the right-hand side is a function of $n$ that is $1+o(1)$.
To prove the result it is enough to observe that for $k\geq 3$ the expected number of times a random formula in $\Gamma_{p^{\prime}_{1}, \ldots, p^{\prime}_{r}}(n,p)$ contains two different clauses on the same set of variables is $o(1)$, since that will imply that the satisfaction probabilities in the two models are related by a $1-o(1)$ factor. Indeed, this number is \[ {{n}\choose {k}} \cdot [\sum_{\alpha,\beta} \frac{pp^{\prime}_{\alpha}pp^{\prime}_{\beta}}{(k!)^{2}}], \]
where indices $\alpha,\beta$ span the set of different pairs of clauses from a group. Since each group is finite (contains $rk!$ clauses) and $p=\theta(n^{1-k})$, this expected value is $\theta(n^{2-k})$, which is $o(1)$ for $k\geq 3$.
$\Box$\newline
\begin{definition} Constraint $C_{2}$ is {\em an implicate of $C_{1}$}
iff every satisfying assignment for $C_{1}$ satisfies $C_{2}$. \end{definition}
\begin{definition} A Boolean constraint $C$ {\em strongly depends on a literal} if it has a unit clause as an implicate. \end{definition}
\begin{definition} A Boolean constraint $C$ {\em strongly depends on a 2-XOR relation} if there exist $i,j\in \{1,\ldots,k\}$ such that the constraint ``$x_{i}\neq x_{j}$'' is an implicate of $C$. \end{definition}
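As a brief illustration (an example of ours, not taken from the paper): the ternary constraint $C(x_{1},x_{2},x_{3})= x_{1}\wedge (x_{2}\vee x_{3})$ has the unit clause $x_{1}$ as an implicate, hence strongly depends on a literal; the ternary constraint $C(x_{1},x_{2},x_{3})= (x_{1}\neq x_{2})$ trivially strongly depends on a 2-XOR relation; by contrast, the ``not-all-equal'' constraint $NAE(x_{1},x_{2},x_{3})$ falls in neither case, since any assignment to at most two of its variables extends to a satisfying one, so it has no implicate on at most two variables.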
Our result is:
\begin{theorem}\label{dichotomy:threshold} Consider a generalized satisfiability problem $SAT({\cal P})$ (that is not trivially satisfiable by the ``all zeros'' or ``all ones'' assignment). Let ${\cal C}= supp({\cal P})$. \begin{enumerate} \item if some constraint in ${\cal C}$ strongly depends on a literal then $SAT({\cal P})$ has a coarse threshold. \item if some constraint in ${\cal C}$ strongly depends on a 2-XOR relation then $SAT({\cal P})$ has a coarse threshold. \item in all other cases $SAT({\cal P})$ has a sharp threshold. \end{enumerate} \end{theorem}
\begin{proof}
\begin{enumerate} \item Suppose some constraint $C$ has a unit clause as an implicate. We claim that $SAT({\cal P})$ has a coarse threshold in the region where the expected number of clauses is $\theta(\sqrt{n})$.
It is easy to see that in this region the probability that such a formula is satisfiable is bounded away from zero: consider a random formula with $c\sqrt n$ constraints, and let $H$ be the $k$-uniform formula hypergraph. The expected number of pairs of edges $C_{i}$, $C_{j}$ that have nonempty intersection is \[ {{c\cdot \sqrt n}\choose {2}}\cdot (1 - \frac{{{n-k}\choose {k} }}{{{n}\choose {k} }})\leq \frac{(ck)^{2}}{2} \]
Therefore with constant positive probability (that depends on $c$), all vertices will have degree at most 1 in the hypergraph $H$, and the formula will be satisfiable.
If, on the other hand, both positive and negative unit clauses are implicates of constraints in ${\cal P}$ then one can adapt the well-known lower bound on the probability of intersection of two random sets of size $\theta(\sqrt{n})$ to show that, with constant probability a random formula will contain two contradictory unit clauses as implicates, and be unsatisfiable.
The proof is similar in the case when only one type of unit clause (w.l.o.g.\ assume it is the positive unit clauses) appears as an implicate of constraints in ${\cal C}$. Since $SAT({\cal C})$ is not trivial there exists a constraint $C_{1}\in {\cal C}$ with an implicate of the type $\overline{x_{1}}\vee \ldots \vee \overline{x_{b}}$, $b\geq 2$. We deal first with the case when there exists a constraint $C_{2}\neq C_{1}$ such that $C_{2}$ has a unit clause as an implicate. Then it is easy to construct a formula $F$ consisting of $b$ copies of $C_{2}$ and one copy of $C_{1}$ that implies the (unsatisfiable) formula $\{x_{1}, \ldots, x_{b},\overline{x_{1}}\vee \ldots \vee \overline{x_{b}}\}$. It is easy to see that the expected number of copies of $F$ in a random instance of $SAT({\cal P})$ with $\theta(\sqrt{n})$ clauses is constant, so the probability that the instance is unsatisfiable is bounded away from zero.
Finally, consider the case when the only constraint in ${\cal C}$ that has a unit implicate is $C_{1}$ itself. In this case one can use a trick similar to that used in the last paragraph of subsection~\ref{all:together}: we use half of the random copies of $C_{1}$ to imply (random) unit clauses, and the other half to imply (random) copies of $\overline{x_{1}}\vee \ldots \vee \overline{x_{b}}$. This way we can produce, with constant probability, a copy of the formula $F$.
\item Suppose now that $C$ does not fall in the first case but has a 2-XOR implicate. In this case Creignou and Daud\'{e} have shown, when ${\cal P}$ is the uniform distribution (and this extends directly to the case of a general probability distribution as well), that $p_{1/2}= \Omega(n^{1-k})$ and the expected number of constraints is $\theta(n)$. Let $c_{SAT({\cal P})}\cdot n$ be the expected number of constraints corresponding to $p_{1/2}$. Then there exists $\delta > 0 $ such that, for every $n$, $c_{SAT({\cal P})}= c_{SAT({\cal P})}(n) > \delta$.
Let us consider a random formula with $c\cdot n$ constraints. By the well-known result on triangles in random graphs it follows that with positive probability one can use $C$ to create a ``contradictory triangle''. Therefore it is easy to see that for every $c>0$ the satisfaction probability is bounded away from 1. It is easy to see that this statement, together with the fact that $c_{SAT({\cal P})}(n) > \delta$, implies that $SAT({\cal P})$ has a coarse threshold.
\item We will concentrate in the sequel on the last case. As discussed previously, for $k\geq 3$ Molloy's model can be mapped onto a version of the constant probability model. In the case $k=2$ we can establish the existence of a sharp threshold in a direct manner, by the same method as the one used by Chv\'{a}tal and Reed for 2-SAT \cite{mickgetssome} (the complete proof of this case will be presented in the full version). Indeed, by the first two points of the Theorem and the assumption $k=2$, the only possible constraints in ${\cal C}$ are $x\vee y$, $\overline{x} \vee \overline{y}$, $\overline{x}\vee y$, $x \vee \overline{y}$, $x=y$, and the first two are always present.
Let us now consider the case $k\geq 3$, using the modified version of the constant probability model. We note first that there exists a simple observation
that allows us to reduce the problem to the case when ${\cal P}$ is the uniform distribution: the Friedgut-Bourgain result on sharp/coarse threshold properties in monotone problems \cite{friedgut:k:sat} uses the following result, an easy consequence of the Mean Value Theorem: if a monotonic property $A$ does {\em not} have a sharp threshold (under model $\Gamma(n,p)$) then there exists $p^{*}=p(n)$ and a constant $C>0$ such that (for infinitely many $n$)
\begin{equation}\label{coarse} p^{*}\cdot I(p^{*}) < C, \end{equation}
where $I(p^{*})=\frac{d\mu_{p}(A)}{dp}|_{p=p^{*}}$.
The same argument works when $A$ is considered under model $\Gamma_{p_{1},\ldots p_{r}}(n,p)$. Moreover, it is an easy consequence of Russo's Lemma for $\Gamma_{p_{1},\ldots p_{r}}(n,p)$ that if equation~\ref{coarse} holds for $p^{*}$ and some tuple ${p_{1},\ldots p_{r}}$, then it also holds (with a different constant $C$) for $p^{*}$ and the tuple $p_{1}= \ldots = p_{r}=1/r$. In other words it is enough to obtain a contradiction to the assumption that $SAT({\cal P})$ did not have a sharp threshold in the case when ${\cal P}$ is the uniform distribution, which is what we show next.
\subsection{A base case}
To prove the theorem in the uniform case we will first consider a ``base case'' that is easier to explain, and will be of use in solving the general case: let $a,b$ be two integers (not necessarily equal), both greater or equal to 2. Let $S$ be a set consisting of two constraints $C_{1}, C_{2}$ of arity $a$, respectively $b$, specified by $C_{1}= \overline{x_{1}}\vee \ldots \overline{x_{a}}$, $C_{2}= x_{1}\vee \ldots x_{b}$. One can represent $SAT(S)$ in the framework of Definition~\ref{model} by ``simulating'' $C_{1}$, $C_{2}$ by suitable constraints of arity $\max\{a,b\}$.
We first outline how to prove that $SAT(S)$ has a sharp threshold: we apply the Friedgut-Bourgain result \cite{friedgut:k:sat} and infer that if $SAT(S)$ did not have a sharp threshold then, for some $\epsilon, \delta_{0}, K>0$ and some probability $p=p(n)\in [p_{\epsilon}, p_{1-\epsilon}]$
\begin{enumerate} \item either $ \Pr_{p=p(n)} [\Phi \mbox{ contains some } F\in \overline{SAT}\mbox{
with }|F|\leq K] > \delta_{0}$, or
\item there exists a fixed satisfiable formula $F_{0}$, $|F_{0}|\leq K$ such that $ \Pr_{p=p(n)} [\Phi \wedge F_{0} \in \overline{SAT}] - \Pr [ \Phi \in \overline{SAT}] > \delta_{0}$.
\end{enumerate}
One easy observation is that in the second alternative we can always assume that $F_{0}$ consists of a conjunction of unit clauses: if $F_{0}$ is satisfiable and satisfies (2), then so does the conjunction of unit clauses specifying one satisfying assignment of $F_{0}$. The first alternative is eliminated by a result (Proposition 4.6) from \cite{creignou:daude:sat2002}. The key to disproving the second alternative, in the case of $k$-SAT, is a geometric result, Lemma 5.7 in \cite{friedgut:k:sat}. We restate it here for completeness.
\begin{lemma}\label{friedgut} For a sequence $A=(A_{n})$ of subsets of the $n$-dimensional hypercube,
$A\subseteq \{0,1\}^{n}$, define $A$ to be {\em $(d,m,\epsilon)$-coverable} if the probability for a union of a random choice of $d$ subcubes (hyperplanes) of codimension $m$ to cover $A$ is greater than $\epsilon$ for large enough $n$.
Let $f(n)$ be any function that tends to infinity as $n\rightarrow \infty$. For fixed $k$, $d$, and $\epsilon$ any $A$ that is $(d,1,\epsilon)$-coverable is $(f(n),k, \epsilon)$-coverable. \end{lemma}
The connection with satisfiability can be explained as follows: the set $A$ in the application of the Lemma~\ref{friedgut} will (intuitively) refer to the set of satisfying assignments of random formula $\Phi$. Hyperplanes of codimension 1 are associated to unit clauses, more precisely to the set of assignments {\em forbidden} by a given unit clause. The fact that $A$ can be covered with probability $\epsilon$ by a union of $d$ random hyperplanes of codimension 1 parallels the fact that with probability $\epsilon$, $\Phi \wedge F_{0}$ becomes unsatisfiable. This is what the result of Friedgut-Bourgain gives us (for $\epsilon = \delta_{0}$, under the assumption that $k$-SAT does not have a sharp threshold). Hyperplanes of codimension $k$ correspond to the set of assignments forbidden by a given $k$-clause, and the conclusion of the geometric lemma is that adding any small (but unbounded) number $f(n)$ of random $k$-CNF clauses to random formula $\Phi$ boosts the probability of {\em not} being satisfiable at least as much as the addition of the (constantly many) unit clauses in $F_{0}$.
For small enough $f(n)$ this statement can be directly refuted, by concentration results for the binomial distribution (Lemma 5.6 in \cite{friedgut:k:sat}). A simpler and more general way to derive it is given as Lemma 3.1 in \cite{achlioptas:friedgut:kcol}.
The same outline works for the case we consider. To state the geometric result we need, however, to work with two types of hyperplanes:
\begin{definition} Let $H_{n}=\{0,1\}^{n}$ be the $n$-dimensional hypercube, and let $w_{i}$ denote the value of the $i$'th bit of element $w\in H_{n}$. A {\em positive hyperplane of codimension $d$} is a subset of points of $H_{n}$ defined by a system of equations $ x_{i_{1}}= \ldots = x_{i_{d}}=1$, where the $x_{i}$'s are distinct variables. Negative hyperplanes have a similar definition. \end{definition}
Our version of the geometric Lemma is
\begin{lemma}\label{geometric} For a sequence $A=(A_{n})$ of subsets of the $n$-dimensional hypercube,
$A_{n}\subseteq \{0,1\}^{n}$ define $A$ to be {\em $(n_{1}, d_{1}, d_{2},m_{1}, m_{2},\epsilon)$-coverable} if the probability of a union of a random choice of $d_{1}$ negative hyperplanes of codimension $m_{1}$ and $d_{2}$ positive hyperplanes of codimension $m_{2}$ to cover $A_{n}$ is at least $\epsilon$ if $n\geq n_{1}$. Let $f(n)$, $g(n)$ be any functions that tend to infinity as $n\rightarrow \infty$. For fixed $k_{1}$, $k_{2}$, $d_{1}$, $d_{2}$, and $0<\delta <\epsilon$, there exists $n_{2}$ that depends on $k_{1}, k_{2}, d_{1}, d_{2},\epsilon,\delta, n_{1}$ (but {\em not} $A$) such that for any $n\geq n_{2}$ any $A_{n}\subseteq \{0,1\}^{n}$ that is $(n_{1},d_{1},d_{2},1,1,\epsilon)$-coverable is $(n_{2},f(n),g(n),k_{1},k_{2}, \epsilon - \delta)$-coverable. \end{lemma} We will in fact prove a stronger version of the Lemma:
\begin{lemma}\label{geometric:2} For a sequence $A=(A_{n})$ of subsets of the $n$-dimensional hypercube, assume that \[ \Pr[A\subseteq H_{1}\cup \ldots H_{d}]\geq \epsilon \]
for all $n\geq n_{1}$, where the $H_{i}$'s are random hyperplanes of codimension 1, $d_{1}$ of them negative, $d_{2}$ of them positive.
Let $f(n)$, $g(n)$ be any functions that tends to infinity as $n\rightarrow \infty$. For fixed $k_{1}$,$k_{2}$, $d$, and $\delta >0$, there exists $n_{*}=n(n_{1},d_{1},d_{2},k_{1}, k_{2}, \epsilon,\delta,f,g)$, (however it does {\em not} depend on $A$) such that for any $n\geq n_{*}$ and any $i$, $0\leq i\leq d$ \begin{equation}\label{conclusion} Pr[A\subseteq P_{1}\cup \ldots \cup P_{\frac{if(n)}{d}}\cup N_{1}\ldots \cup N_{\frac{ig(n)}{d}}\cup H_{i+1}\cup \ldots \cup H_{d}]\geq Pr[A\subseteq H_{1}\cup \ldots \cup H_{d}]-\frac{i\delta}{d}, \end{equation}
where the $N_{i}$'s are random negative hyperplanes of codimension $k_{1}$ and the $P_{i}$'s are random positive hyperplanes of codimension $k_{2}$. \end{lemma}
\begin{proof}
It is easy to see that one can assume that $d|f(n)$, $d|g(n)$ (since it is enough to prove the lemma for $\overline{f}(n)=d\lfloor f(n)/d \rfloor$, $\overline{g}(n)=d\lfloor g(n)/d \rfloor$).
We will prove the lemma by double induction on $d_{1}, d_{2}$. By symmetry we only need to consider two ``base cases:'' \\
{\bf Case 1: $d_{1}=0$, $d_{2}=1$} \\
In this case (and the dual, $d_{1}=1$, $d_{2}=0$) we can replace, for $i=1$,
equation~\ref{conclusion} by the stronger: \begin{equation}\label{conclusion:d=1} \Pr[A\subseteq P_{1}\cup \ldots \cup P_{f(n)}\cup N_{1}\ldots \cup N_{g(n)}]\geq 1-\delta. \end{equation}
The hypothesis implies that for $n\geq n_{1}$ there exist $n\cdot \epsilon$ positive hyperplanes of codimension $1$ such that \[ A_{n} \subseteq P^{(n)}_{1}\cap \ldots \cap P^{(n)}_{n\cdot \epsilon}. \]
We will assume, w.l.o.g., in what follows that $A$ is in fact {\em equal} to the right hand side. If $KP$ is a random positive hyperplane of codimension $k_{1}$ then \[ \Pr[A \not \subseteq KP] = 1-\frac{{{n\cdot \epsilon}\choose {k_{1}}}}{{{n}\choose {k_{1}}}}. \]
Indeed, suppose KP is specified by the (random set of) equations $x^{(n)}_{1}= \ldots =x^{(n)}_{k_{1}}=1$. The condition that $A\subseteq KP$ is equivalent to \[ \{x^{(n)}_{1}, \ldots x^{(n)}_{k_{1}}\} \subseteq \{p^{(n)}_{1}, \ldots, p^{(n)}_{n\cdot \epsilon}\}, \]
where $\{p^{(n)}_{1}, \ldots, p^{(n)}_{n\cdot \epsilon}\}$ are the literals that specify the hyperplanes $P^{(n)}_{1}, \ldots, P^{(n)}_{n\cdot \epsilon}$.
Thus the probability that $A$ is included in the union of $g(n)$ random positive hyperplanes $KP_{i}$ of codimension $k_{1}$ is at least $1 - \Pr[ (\forall i): A\not \subseteq KP_{i}]$, which is equal to \[ 1 - (1-\frac{{{n\cdot \epsilon}\choose {k_{1}}}}{{{n}\choose {k_{1}}}})^{g(n)} \sim 1- (1-\epsilon^{k_{1}})^{g(n)}\rightarrow 1 \mbox{ as } n\rightarrow \infty. \]
It follows that there exists $n_{*}=n(n_{1},d_{1},d_{2},k_{1}, k_{2}, \epsilon,\delta,f,g)$ such that for $n\geq n_{*}$ the left-hand side is larger than $1-\delta$.
{\bf Case 2: $d_{1}+d_{2}>1$} \\
It is enough to prove that there exists $n_{i}=n(n_{1},d_{1},d_{2},k_{1}, k_{2}, \epsilon,\delta,f,g,i)$ such that~\ref{conclusion} holds, for a fixed value of $i$, $0\leq i \leq d$, when $n\geq n_{i}$. Then we can take $n_{*}=max\{n_{i}\}$.
We prove this by induction on $i$. The claim is clearly true for $i=0$. Assume, therefore, that the claim is true up to $i$; we will prove it for $i+1$.
Denote for all $j$ \[ p_{j}= \Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{(j-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(j-1)g(n)}{d}}\cup H_{j}\cup \ldots \cup H_{d}]. \]
To accomplish that it is enough to show that \begin{equation}\label{inductive:step} p_{i+1}\geq p_{i}-\delta/d, \end{equation}
By the Bayes formula:
\begin{eqnarray*}\label{bayes} p_{i}=\Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}\cup H_{i}\cup \ldots \cup H_{d}] = \\
= \sum_{B}
\ Pr[B\subseteq H_{i}| A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \cdot \\ \cdot \Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \end{eqnarray*}
Assume without loss of generality that $H_{i}$ is a positive hyperplane.
Let $\gamma= \frac{\delta}{2d}$. Let \[ C_{\gamma}=\{B\subseteq \{0,1\}^{n}: \Pr[B\subseteq P] > \gamma\}, \] where $P$ denotes a random positive hyperplane of codimension one.
Let $\alpha$ be the sum of those terms in~\ref{bayes} corresponding to sets $B\in C_{\gamma}$, and let $\beta$ be the sum corresponding to sets $B\in \overline{C_{\gamma}}$.
From the definition of $C_{\gamma}$ it follows that \[ 0\leq \beta \leq \gamma, \]
therefore, since each conditional probability in the expansion of $p_{i}$ is at most one, \begin{eqnarray*}\label{inequality} \Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) \in C_{\gamma}]\geq \\ \geq \Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}\cup H_{i}\cup \ldots \cup H_{d}] -\gamma= \\ = p_{i}-\gamma. \end{eqnarray*}
On the other hand
\begin{eqnarray*}\label{bayes:2} p_{i+1}=\Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{if(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{ig(n)}{d}}\cup H_{i+1}\cup \ldots \cup H_{d}] = \\
= \sum_{B} \ Pr[B\subseteq P_{\frac{(i-1)f(n)}{d}+1}\cup \ldots \cup P_{\frac{if(n)}{d}}
\cup N_{\frac{(i-1)g(n)}{d}+1}\cup \ldots \cup N_{\frac{ig(n)}{d}}| \\ A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \cdot \\ \cdot \Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \end{eqnarray*}
Let $\overline{f}(n)=f(n)/d$, $\overline{g}(n)= g(n)/d$. Since the $N_{i}$'s, $P_{i}$'s are random hyperplanes, one can rewrite the previous recurrence as
\begin{eqnarray*}\label{bayes:3} p_{i+1}=\Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{if(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{ig(n)}{d}}\cup H_{i+1}\cup \ldots \cup H_{d}] = \\
= \sum_{B} \Pr[B\subseteq P_{1}\cup \ldots \cup P_{\overline{f(n)}}
\cup N_{1}\cup \ldots \cup N_{\overline{g(n)}}| \\ A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \cdot \\ \cdot \Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \end{eqnarray*}
Since all terms are nonnegative, one can obtain a lower bound on $p_{i+1}$ by restricting the latter sum to those $B\in C_{\gamma}$.
Applying Case 1 (the base case) for all $B\in C_{\gamma}$ and $n\geq n_{i}=n_{*}(n_{1},0,1,k_{1},k_{2},\gamma,\gamma,\overline{f},\overline{g})$, we infer that for all such $B$, \[ \Pr[B\subseteq P_{1}\cup \ldots \cup P_{\overline{f(n)}} \cup N_{1}\cup \ldots \cup N_{\overline{g(n)}}]\geq (1-\gamma), \] therefore \begin{eqnarray*} p_{i+1}= \Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{if(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{ig(n)}{d}}\cup H_{i+1}\cup \ldots \cup H_{d}]\geq \\ (1-\gamma)\cdot \sum_{B\in C_{\gamma}} Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}) = B] \\ = (1-\gamma) \cdot Pr[A\setminus (P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}})\in C_{\gamma}] \\ \geq (1-\gamma)\cdot [\Pr[A \subseteq P_{1}\cup \ldots \cup P_{\frac{(i-1)f(n)}{d}}\cup N_{1}\cup \ldots \cup N_{\frac{(i-1)g(n)}{d}}\cup H_{i}\cup \ldots \cup H_{d}] -\gamma] = \\ (1-\gamma)\cdot [p_{i}-\gamma]. \end{eqnarray*}
Since $p_{i}\leq 1$, \[ p_{i+1}\geq (1-\gamma)\cdot (p_{i}-\gamma)= p_{i} - \gamma \cdot [1+p_{i}-\gamma]\geq p_{i}-2\gamma. \]
which is precisely equation~\ref{inductive:step} (that we wanted to prove).
\end{proof}
$\Box$\newline
\subsection{How to contradict Lemma~\ref{geometric}}
It is now easy to infer the fact that $SAT(S)$ has a sharp threshold, by
using the previous lemma with $d_{1}= |F_{0}\cap Var|$,
$d_{2}=|F_{0}|-d_{1}$, $k_{1}=b$, $k_{2}=a$, $\epsilon = \delta_{0}$, $\delta = \delta_{0}/2$, and a refutation of the geometric lemma similar to the one for 3-SAT.
The conclusion of the Geometric Lemma (similar to the one for $k$-SAT) is that adding any number $f(n)$ of copies of $\overline{x_{1}}\vee \ldots \vee \overline{x_{a}}$ and $g(n)$ copies of $x_{1}\vee \ldots \vee x_{b}$ suffices to boost the unsatisfiability probability by a constant.
However, Lemma 3.1 from \cite{achlioptas:friedgut:kcol} asserts that adding up to $o(\sqrt{n})$ random clauses is not enough to boost the unsatisfiability probability by more than $o(1)$.
Because of the nature of the random model, adding $H(n)=o(\sqrt{n})$ random clauses ensures that w.h.p. we have $\Theta(H(n))$ copies of each type of clause, as long as $H(n)$ grows faster than some power of $n$. Taking the number of such copies to be the functions $f(n)$, $g(n)$ contradicts the consequence of the Geometric Lemma.
Note that we do {\em not} make use of all the details of the random model (such as the precise number of copies of each clause in random instances at $p=p(n)$), but only that: \begin{itemize} \item the expected number of copies of both $C_{1}$ and $C_{2}$ in a random formula at $p=p(n)$ is unbounded. This is enough to make the analog of Lemma 3.1 from \cite{achlioptas:friedgut:kcol} work. \item the clauses are independent. \end{itemize}
\subsection{Putting it all together} \label{all:together}
In the previous section we have proved a geometric lemma that is used to prove that the above-defined set $SAT(S)$ has a sharp threshold.
Consider now a set ${\cal C}$ of constraints that satisfies condition (3) of the theorem. Since $SAT({\cal C})$ is not trivially satisfied by the ``all zeros'' or the ``all ones'' assignment, there exist constraints $C_{1}$, $C_{2}$ in ${\cal C}$ and $a,b \geq 1$ such that $C_{1}\models x_{1}\vee \ldots x_{a}$, $C_{2}\models \overline{x_{1}}\vee \ldots \vee \overline{x_{b}}$. In fact $a,b \geq 2$, otherwise some constraint in ${\cal C}$ would strongly depend on a literal.
Just as in the base case, condition (i) in the Friedgut-Bourgain result is eliminated by the result of Creignou and Daud\'{e}, and the formula $F_{0}$ in condition (ii) can be assumed to consist of a conjunction of unit clauses. Reflecting this fact, the geometric lemma needed for the general case has the same hypothesis as the one of Lemma~\ref{geometric}: the set $A$ can be covered by a union of random hyperplanes of codimension 1. However, the covering desired in the conclusion no longer consists of hyperplanes, but of (general) sets of points in the hyperplane, corresponding to sets of assignments forbidden by a certain constraint $C$.
The critical observation (easiest to make in the case when $C_{1}\neq C_{2}$ ) is that {\bf Lemma~\ref{geometric} implies the geometric result we need for this case}: since $C_{1}\models x_{1}\vee \ldots x_{a}$, the ``forbidden set'' associated to $C_{1}$ contains the ``forbidden set'' associated to constraint $x_{1}\vee \ldots x_{a}$ in Lemma~\ref{geometric}, the hyperplane $x_{1}= \ldots = x_{a}=0$. A similar result holds for $C_{2}$ and the positive hyperplanes.
Thus each ``covering set'' in a conclusion of the (general case of the ) geometric lemma contains a corresponding ``covering set'' from the conclusion of Lemma~\ref{geometric}. In other words {\bf the geometric lemma for $SAT({\cal C})$ follows from the geometric lemma for the base case by monotonicity}.
All that is left to show is that the analog of Lemma 3.1 from \cite{achlioptas:friedgut:kcol} also works in this general case. We have previously observed that this amounts to showing that the expected number of copies of $C_{1}$, $C_{2}$ in a random formula is unbounded. But Proposition 3.5 in \cite{creignou:daude:sat2002} (slightly generalized to probability distributions ${\cal P}$ that are not uniform) and the details of the random model imply that in fact this number is linear.
A simple modification of this argument holds when $C_{1}=C_{2}$. To see this, note that, in order to obtain a contradiction, it is enough that Lemma~\ref{geometric} be refuted for {\em some} small enough (but unbounded) $f(n),g(n)$.
The expected number of copies of $C_{1}$ in a random formula is linear; denote it by $h(n)$. Dividing the set of such copies into two (random) sets of cardinality $h(n)/2$ yields unboundedly many random copies of $C_{1}$ used to imply $x_{1}\vee \ldots \vee x_{a}$ in the previous argument and unboundedly many copies used to imply $\overline{x_{1}}\vee \ldots \vee \overline{x_{b}}$. We then apply the same strategy as in the first case.
To summarize: the proof follows from the corresponding argument for the base case by monotonicity. It critically uses the fact that we are {\em not} in cases (i) or (ii) of the Theorem, since it is only under this assumption that the first alternative in the Friedgut-Bourgain argument can be eliminated.
\end{enumerate} \end{proof}
$\Box$\newline
\section{3-SAT has a first-order phase transition}
\begin{theorem}\label{3sat:first-order} $k$-SAT, $k\geq 3$ has a first-order phase transition. In other words there exists $\eta > 0 $ such that for every sequence $p = p(n)$
we have \begin{equation}\label{jump} \lim_{n\rightarrow \infty} \Pr_{p=p(n)}[\Phi \in SAT] = 0 \Rightarrow
\lim_{n\rightarrow \infty}\Pr_{p=p(n)}[ \frac{|Spine(\Phi)|}{n}\geq \eta]= 1. \end{equation} \end{theorem}
\begin{proof}
We start by giving a simple sufficient condition for a literal to belong to the spine of the formula:
\begin{claim}\label{spine:unsat} Let $\Phi$ be a minimally unsatisfiable formula, and let $x$ be a literal that appears in $\Phi$. Then $x\in Spine(\Phi)$. \end{claim} \begin{proof} Let $C$ be a clause that contains $x$. By the minimal unsatisfiability of $\Phi$, $\Phi \setminus \{C\} \in SAT$. On the other hand $\Phi \setminus \{C\} \wedge \{x\} \in \overline{SAT}$, otherwise $\Phi$ would also be satisfiable. Thus $x \in Spine(\Phi)$. \end{proof} \qedbox
Thus, to show that 3-SAT has a first-order phase transition it is enough to show that a random unsatisfiable formula contains w.h.p. a minimally unsatisfiable subformula containing a linear number of literals. A way to accomplish this is by using the two ingredients of the Chv\'{a}tal-Szemer\'{e}di proof \cite{chvatal:szemeredi:resolution} that random 3-SAT has exponential resolution size w.h.p. They are explicitly stated to make the argument self-contained:
\begin{claim} There exists a constant $\delta>0$ such that for every constant $c>c_{3-SAT}$ with high probability (as $n\rightarrow \infty$) a random formula $\Phi$ with $n$ variables and $cn$ clauses has no minimally unsatisfiable subformula of size less than $\delta \cdot n$. \end{claim}
\begin{claim} There exists $\eta >0$ so that w.h.p. for every $c>c_{3-SAT}$ all subformulas of a random formula $\Phi$ having between $(\delta/2)\cdot n$ and $\delta\cdot n$ clauses contain at least $\eta\cdot n$ (pure) literals (corresponding to different variables). \end{claim}
The argument is now transparent: if $\Phi$ is unsatisfiable then w.h.p. a minimally unsatisfiable subformula $\Xi$ of $\Phi$ has size at least $\delta n$. By the second claim, applied to an arbitrary subformula of $\Xi$ of size $(3\delta n)/4$, we infer that w.h.p. $\Xi$ contains at least $\eta\cdot n$ different variables.
\end{proof}
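As a toy sanity check (our own example, using the brute-force spine sketch from the Preliminaries): the eight possible 3-clauses on three variables form a minimally unsatisfiable 3-CNF and, in accordance with Claim~\ref{spine:unsat}, every literal occurring in it lies in its spine.
\begin{verbatim}
# All eight 3-clauses over x1, x2, x3: a minimally unsatisfiable 3-CNF.
phi = [{1 if not a else -1, 2 if not b else -2, 3 if not c else -3}
       for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert spine(phi, 3) == {1, -1, 2, -2, 3, -3}
\end{verbatim}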
\section{First-order phase transitions and resolution complexity of random generalized satisfiability problems}
In this section we extend the previous result (and the connection between first-order phase transitions and resolution complexity) to other classes of satisfiability problems. Interestingly, we find that a condition Molloy investigated in \cite{molloy-stoc2002} is a sufficient condition for the existence of a first-order phase transition.
It turns out that there are differences between the case of random $k$-SAT and the general case that force us to employ an alternative definition of the spine. The most obvious one is that formula~(\ref{spine:initial}) involves negations of variables, whereas Molloy's model does not. But it has more serious problems: consider for example {\em $k$-uniform hypergraph 2-coloring}, specified as $SAT(\{C_{0}\})$, where $C_{0}(x_{1}, \ldots, x_{k})$ has the interpretation ``not all of $x_{1}, \ldots x_{k}$ are equal''. Because of the built-in symmetry under permuting colors 0 and 1, the spine of {\em any} instance is empty (under definition~(\ref{spine:initial})). Similar phenomena have appeared before (and forced a different definition of the backbone/spine) in {\em $k$-coloring} \cite{frozen:development} (symmetry = permutation of colors) or {\em graph partition} \cite{graph-partition:transition} (symmetry = permutation of sides). There are other ways (to be detailed in the journal version of the paper) in which the original definition of the spine behaves differently in the general case than in that of random $k$-SAT. Our solution is to define the concept of spine of an instance of a satisfiability problem $SAT(\cal P)$ in a slightly different way. The definition is consistent with those in \cite{frozen:development}, \cite{graph-partition:transition}.
\begin{definition}$
Spine(\Phi) = \{ x\in Var | (\exists) \Xi \subseteq \Phi, \Xi \in SAT, (\exists) C\in {\cal C}, x\in C,\mbox{ such that } \Xi \wedge C \in \overline{SAT}\}$.
\end{definition}
It is easy to see that, for $k$-CNF formulas whose (original) spine contains at least three literals, a variable $x$ is in the (new version of the) spine if and only if either $x$ or $\overline{x}$ was present in the old version. In particular the new definition does not change the order of the phase transition of random $k$-SAT. Moreover the proof of Claim~\ref{spine:unsat} carries over to the general case. The {\em resolution complexity} of an instance $\Phi$ of $SAT({\cal P})$ is defined as the resolution complexity of the formula obtained by converting each constraint of $\Phi$ to CNF-form.
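One naive such conversion, given here only for concreteness (one clause per forbidden tuple; the paper does not commit to a particular encoding, and the tuple representation of constraints is our assumption), is:
\begin{verbatim}
from itertools import product

def constraint_to_cnf(scope, allowed, k):
    """One clause per forbidden {0,1}^k tuple; the clause is falsified exactly
    by assignments agreeing with that tuple on 'scope' (1-based variables)."""
    clauses = []
    for values in product((0, 1), repeat=k):
        if values in allowed:
            continue
        clauses.append({v if b == 0 else -v for v, b in zip(scope, values)})
    return clauses
\end{verbatim}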
\begin{definition} Let ${\cal P}$ be such that $SAT({\cal P})$ has a sharp threshold. Problem $SAT({\cal P})$ has a {\em first-order phase transition} if there exists $\eta > 0 \mbox{ such that for every sequence } p = p(n)$
we have \begin{equation}\label{jump:general} \lim_{n\rightarrow \infty} \Pr_{p=p(n)}[\Phi \in SAT] = 0 \Rightarrow
\lim_{n\rightarrow \infty}\Pr_{p=p(n)}[ \frac{|Spine(\Phi)|}{n}\geq \eta]= 1. \end{equation} If, on the other hand, for every $\epsilon >0$ there exists $p^{\epsilon}(n)$ with \begin{equation}\label{jump:continuous} \lim_{n\rightarrow \infty} \Pr_{p=p^{\epsilon}(n)}[\Phi \in SAT] = 0 \mbox{ and }
\lim_{n\rightarrow \infty}\Pr_{p=p^{\epsilon}(n)}[ \frac{|Spine(\Phi)|}{n}\geq \epsilon]= 0 \end{equation} we say that $SAT({\cal P})$ has a {\em second-order phase transition}\footnote{strictly speaking the order of the phase transition is {\em at least two}.}. \end{definition}
A first observation is that a second-order phase transition has computational implications:
\begin{theorem}\label{second:order} Let ${\cal P}$ be such that $SAT({\cal P})$ has a second-order phase transition. Then for every constant $c>\overline{lim}_{n\rightarrow \infty} c_{SAT({\cal P})}(n)$, and {\em every $\alpha>0$}, random formulas of constraint density $c$ have w.h.p. resolution complexity $O(2^{k\cdot \alpha \cdot n})$. \end{theorem}
\begin{proof}
By the analog of Claim~\ref{spine:unsat} for the general case, if $SAT({\cal P})$ has a second-order phase transition then for every $c>c_{SAT({\cal P})}$ and for every $\alpha>0$, minimally unsatisfiable subformulas of a random formula $\Phi$ with constraint density $c$ have w.h.p. size at most $\alpha \cdot n$. Consider the backtrack tree of the natural DPLL algorithm (that tries to satisfy clauses one at a time) on such a minimally unsatisfiable subformula $F$. By the usual correspondence between DPLL trees and resolution complexity (e.g. \cite{beame:dp}, p. 1) it yields a resolution proof of the unsatisfiability of $\Phi$ having size at most $2^{k\cdot \alpha \cdot n+1}$.
\end{proof}
$\Box$\newline
\begin{definition} For a formula $F$ define $
c^{*}(F)= \max\{ \frac{|Constraints(G)|}{|Var(G)|}: \emptyset \neq G \subseteq F\}$. \end{definition}
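Purely as an illustration of this definition (an exponential-time transcription of ours, representing each constraint just by the tuple of its variables):
\begin{verbatim}
from itertools import chain, combinations

def c_star(scopes):
    """Brute-force c^*(F): 'scopes' lists, for each constraint, the tuple of
    its (nonempty set of) variables; we maximize |G| / |Var(G)| over all
    nonempty subformulas G."""
    best = 0.0
    nonempty = chain.from_iterable(
        combinations(scopes, r) for r in range(1, len(scopes) + 1))
    for sub in nonempty:
        variables = {v for scope in sub for v in scope}
        best = max(best, len(sub) / len(variables))
    return best
\end{verbatim}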
The next result gives a sufficient condition for a generalized satisfiability problem to have a first-order phase transition.
\begin{theorem} \label{sufficient:first-order} Let ${\cal C}$ be a set of constraints such that $SAT({\cal C})$ has a sharp threshold. If there exists $\epsilon > 0$ such that for every minimally unsatisfiable formula $F$ it holds that \[ c^{*}(F) > \frac{1+\epsilon}{k-1} \] then for every ${\cal P}$ with $supp({\cal P})={\cal C}$
$SAT({\cal P})$ has a first-order phase transition.
\end{theorem} \begin{proof}
We first recall the following concept from \cite{chvatal:szemeredi:resolution}:
\begin{definition} Let $x,y>0$. A $k$-uniform hypergraph with $n$ vertices is {\em ($x$,$y$)-sparse} if every set of $s\leq xn$ vertices contains at most $ys$ edges. \end{definition}
We also recall Lemma 1 from the same paper. \begin{lemma}\label{sparsity:hypergraph} Let $k,c>0$ and $y$ be such that $(k-1)y>1$. Then w.h.p. the random $k$-uniform hypergraph with $n$ vertices and $cn$ edges is $(x,y)$-sparse, where \begin{eqnarray*} \epsilon = y-1/(k-1), \\ x = (\frac{1}{2e}(\frac{y}{ce})^{y})^{1/\epsilon}. \end{eqnarray*} \end{lemma}
The critical observation is that the existence of a minimally unsatisfiable formula of size $xn$ and with $c^{*}(F) > \frac{1+\epsilon}{k-1}$ implies that the $k$-uniform hypergraph associated to the given formula is {\em not} $(x,y)$-sparse, for $y= \frac{1+\epsilon}{k-1}$.
But, according to Lemma~\ref{sparsity:hypergraph}, w.h.p. a random $k$-uniform hypergraph with $cn$ edges is $(x_{0},y)$-sparse, for $x_{0}=(\frac{1}{2e}(\frac{y}{ce})^{y})^{1/\epsilon}$ (a direct application of Lemma 1 in their paper). We infer that any formula with fewer than $x_{0}\cdot n /K$ constraints is satisfiable, therefore the same is true for any formula with $x_{0}\cdot n/K$ clauses picked up from the clausal representation of constraints in $\Phi$.
The second ingredient (expansion of the formula hypergraph, which guarantees that subformulas of intermediate size contain linearly many variables) can be proved similarly.
\end{proof} \qedbox
One can give an explicitly defined class of satisfiability problems for which the previous result applies:
\begin{theorem}\label{implicates:first-order} Let ${\cal P}$ be such that $SAT({\cal P})$ has a sharp threshold. If {\em no} constraint $C\in {\cal C}=supp({\cal P})$ has an implicate of length at most 2 then \begin{enumerate} \item for every minimally unsatisfiable formula $F$ \[ c^{*}(F)\geq \frac{2}{2k-3}. \] Therefore $SAT({\cal P})$ satisfies the conditions of the previous theorem, i.e. it has a first-order phase transition. \item Moreover $SAT({\cal P})$ also has $2^{\Omega(n)}$ resolution complexity\footnote{this result subsumes some of the recent results in \cite{mitchell:cp02}}. \end{enumerate} \end{theorem}
\begin{proof}
\begin{enumerate} \item For any real $r \geq 1$, formula $F$ and set of constraints $G\subseteq F$,
define the {\em $r$-deficiency of $G$}, $\delta_{r}(G)=
r|Constraints(G)|-2|Var(G)|$.
Also define \begin{equation} \label{max} \delta^{*}_{r}(F)= \max\{\delta_{r}(G): \emptyset \neq G \subseteq F\} \end{equation}
We claim that for any minimally unsatisfiable $F$, $\delta^{*}_{2k-3}(F)\geq 0$. Indeed, assume this was not true. Then there exists a minimally unsatisfiable $F$ such that: \begin{equation}\label{deff} \delta_{2k-3}(G)\leq -1\mbox{ for all }\emptyset \neq G\subseteq F. \end{equation} \begin{proposition} \label{1-transversal} Let $F$ be a formula for which condition~\ref{deff} holds. Then there exists
an ordering $C_{1}, \ldots, C_{|F|}$ of constraints in $F$ such that each constraint $C_{i}$ contains at least $k-2$ variables that appear in {\em no} $C_{j}$, $j<i$. \end{proposition}
\begin{proof} Denote by $v_{i}$ the number of variables that appear in {\em exactly} $i$ constraints of $F$. We have \[
\sum_{i\geq 1} i\cdot v_{i} = k\cdot |Constraints(F)|. \]
Since every variable that appears in at least two constraints of $F$ contributes at least 2 to the left-hand side, $2|Var(F)|-v_{1}\leq k\cdot |Constraints(F)|$. This can be
rewritten as $v_{1}\geq 2|Var(F)|-k|Constraints(F)|> |Constraints(F)|\cdot
(2k-3 - k)= (k-3)\cdot |Constraints(F)|$ (we have used condition~\ref{deff}). Therefore there exists at least one constraint in $F$ with at least $k-2$ variables that are free (i.e., appear in no other constraint) in $F$. Let $C$ be such a constraint; we set $C_{|F|}=C$ and apply this argument recursively to $F\setminus \{C\}$, all of whose nonempty subformulas still satisfy condition~\ref{deff}. \end{proof}
$\Box$\newline
Call the (at least) $k-2$ new variables of $C_{i}$ {\em free in $C_{i}$}, and the remaining (at most two) variables {\em bound in $C_{i}$}. Let us show now that $F$ cannot be minimally unsatisfiable. Construct a satisfying assignment for $F$ incrementally, considering the constraints in the order $C_{1}, \ldots, C_{|F|}$: when constraint $C_{j}$ is considered, only its (at most two) bound variables can already have received values; since $C_{j}$ has no implicate of size at most two, its remaining variables can be set in a way that satisfies $C_{j}$. This yields a satisfying assignment for $F$, a contradiction with our assumption that $F$ was minimally unsatisfiable.
Therefore $\delta^{*}_{2k-3}(F)\geq 0$, a statement equivalent to our conclusion.
\item To prove the resolution complexity lower bound we use the size-width connection for resolution complexity obtained in \cite{ben-sasson:resolution:width}: we prove that there exists $\eta >0$ such that w.h.p. random instances of $SAT({\cal P})$ having constraint density $c$ have resolution width at least $\eta \cdot n$.
We use the same strategy as in \cite{ben-sasson:resolution:width} \begin{enumerate} \item prove that w.h.p. minimally unsatisfiable subformulas are ``large''. \item prove that any clause implied by a satisfiable formula of ``intermediate'' size will contain ``many'' literals. \end{enumerate}
Indeed, define, for an unsatisfiable formula $\Phi$ and a (possibly empty) clause $C$, \[
\mu(C)=\min\{|\Xi|: \Xi\subseteq \Phi, \Xi \models C\}. \]
\begin{claim} There exists $\eta_{1}>0$ such that for any $c>0$, w.h.p. $\mu(\Box)\geq \eta_{1}\cdot n$ (where $\Phi$ is a random instance of $SAT({\cal P})$ having constraint density $c$). \end{claim}
\begin{proof} In the proof of Theorem~\ref{sufficient:first-order} we have shown that there exists $\eta_{0}>0$ such that w.h.p. any unsatisfiable subformula of a random formula $\Phi$ has at least $\eta_{0} \cdot n$ constraints. Therefore {\em any} formula made of {\em clauses} in the CNF-representation of constraints in $\Phi$, and which has fewer than $\eta_{0} \cdot n$ clauses, is satisfiable (a satisfying assignment of the at most $\eta_{0}\cdot n$ underlying constraints also satisfies these clauses), and the claim follows by taking $\eta_{1} = \eta_{0}$. \end{proof}
$\Box$\newline
The only (slightly) nontrivial step of the proof, which critically uses the fact that constraints in ${\cal P}$ do not have implicates of length at most two, is to prove that clause implicates of subformulas of ``medium'' size have ``many'' variables. Formally:
\begin{claim}\label{expansion} There exists $d>0$ and $\eta_{2}>0$ such that w.h.p., for every clause
$C$ such that $d/2k\cdot n \leq \mu(C)\leq d/k\cdot n$, $|C|\geq \eta_{2}\cdot n$. \end{claim}
\begin{proof}
Fix $\epsilon>0$. It is easy to see that if $c^{*}(F)<\frac{2}{2k-3+\epsilon}$ then for every subformula $G$ of $F$, at least $\frac{\epsilon}{3} \cdot |Constraints(G)|$ of its constraints have at least $k-2$ private variables: Indeed, since $c^{*}(G)<\frac{2}{2k-3+\epsilon}$, by a reasoning similar to the one we made
previously, $v_{1}(G)\geq (k-3+\epsilon)|Constraints(G)|$. Since constraints in $G$ have arity $k$, at least $\epsilon/3 \cdot |Constraints(G)|$ of them have at least $k-2$ ``private'' variables (variables appearing in no other constraint of $G$).
Choose $y=\frac{2}{2k-3+\epsilon}$ in Lemma~\ref{sparsity:hypergraph} for $\epsilon>0$ a small enough constant. Since the problem has a sharp threshold in the region where the number of clauses is linear, \[ d=\inf\{x(y,c): c\geq c_{SAT({\cal P})}\} >0. \]
W.h.p. all subformulas of $\Phi$ having size less than $d/k \cdot n$ have a formula hypergraph that is $(x,y)$-sparse, therefore fall under the scope of the previous argument.
Let $\Xi$ be a subformula of $\Phi$, having minimal size, such that $\Xi \models C$. We claim:
\begin{claim} For every constraint $D$ of $\Xi$ with at least $k-2$ ``private'' variables (i.e., variables that appear in no other constraint of $\Xi$), at least one of these ``private'' variables appears in $C$. \end{claim}
Indeed, suppose there exists such a constraint $D$ of $\Xi$ none of whose private variables appears in $C$.
Because of the minimality of $\Xi$ there exists an assignment $\sigma$ that satisfies $\Xi \setminus \{D\}$ but satisfies neither $D$ nor $C$. Since $D$ has no implicate of size at most two, there exists an assignment $\tau$, differing from $\sigma$ only on the private variables of $D$, that satisfies $\Xi$. But since $C$ does not contain any of the private variables of $D$, $\sigma$ coincides with $\tau$ on the variables of $C$. The conclusion is that $\tau$ does not satisfy $C$, which contradicts the fact that $\Xi\models C$.
The proof of Claim~\ref{expansion} (and of item 2. of Theorem~\ref{implicates:first-order}) follows: since for any clause $K$ of one of the original constraints $\mu(K)=1$, since $\mu(\Box)>\eta_{1}\cdot n$ and since w.l.o.g. $0<d<\eta_{1}$ (otherwise replace $d$ with the smaller value) there exists a clause $C$ such that \begin{equation}\label{intermediate} \mu(C)\in [d/2k \cdot n, d/k \cdot n]. \end{equation}
Indeed,
let $C^{\prime}$ be a clause in the resolution refutation of $\Phi$ that is minimal with the property that $\mu(C^{\prime})> d/k\cdot n$. Then at least one clause $C$ of the two involved in deriving $C^{\prime}$ satisfies equation~\ref{intermediate}: by minimality both of these clauses have $\mu$-value at most $d/k\cdot n$, and since their $\mu$-values sum to at least $\mu(C^{\prime})>d/k\cdot n$, one of them has $\mu$-value at least $d/2k\cdot n$.
By the previous claim, $C$ contains at least one ``private'' variable from each clause of $\Xi$ that has $k-2$ private variables. Therefore $|C|\geq \eta_{2}\cdot n$, with $\eta_{2}=\epsilon d/(6k)$.
\end{proof}
\end{enumerate}
\end{proof} \qedbox
It is instructive to note that the condition in the theorem is violated (as expected) by random 2-SAT, as well as by random 1-in-$k$ SAT: the formula $C(x_{1}, x_{2}, \ldots, x_{k-1}, x_{k})\wedge C(\overline{x_{k}}, x_{k+1}, \ldots , x_{2k-2}, x_{1})\wedge C(\overline{x_{1}}, x_{2k-1}, \ldots, x_{3k-3}, \overline{x_{k}}) \wedge C(x_{k}, x_{3k-2}, \ldots, x_{4k-4}, x_{1})$ (where $C$ is the constraint ``1-in-$k$'') is minimally unsatisfiable, but has clause/variable ratio $1/(k-1)$ and implicates $\overline{x_{1}} \vee \overline{x_{k}}$ and $x_{1}\vee x_{k}$.
It would be tempting to speculate that whenever both $x\vee y$ and $\overline{x}\vee \overline{y}$ are implicates of clauses in ${\cal C}$ then $SAT({\cal P})$ has a second-order phase transition for every distribution ${\cal P}$ with $supp({\cal P})= {\cal C}$. That is, however, {\em not} true, at least for some distributions ${\cal P}$. Consider the random $(2+p)$-SAT model of Monasson et al. \cite{2+p:nature}. In this model $p$ is a fixed real in $[0,1]$. A random instance of $(2+p)$-SAT with $n$ variables and $c\cdot n$ clauses is obtained by choosing $pcn$ random clauses of length 3 and $(1-p)cn$ random clauses of length 2. It was shown in \cite{2+p:nature} (using the nonrigorous replica method) that \begin{enumerate} \item $(2+p)$-SAT has a second-order phase transition for $0\leq p \leq p_{0}\sim 0.413...$. \item the transition becomes first-order for $p > p_{0}$. \item when the transition changes from second-order to first-order the complexity of a certain DPLL algorithm changes from polynomial to exponential. \end{enumerate}
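For concreteness, here is a short sketch of how such an instance can be sampled (the code is ours and purely illustrative; the function names are not taken from the literature):
\begin{verbatim}
# Illustrative sketch: sampling a random (2+p)-SAT instance with n
# variables and c*n clauses, a fraction p of which have length 3 and
# the remainder length 2.
import random

def random_clause(n, length):
    # pick `length` distinct variables and negate each with probability 1/2
    variables = random.sample(range(1, n + 1), length)
    return [v if random.random() < 0.5 else -v for v in variables]

def random_2p_sat(n, c, p):
    m = int(round(c * n))
    m3 = int(round(p * m))          # clauses of length 3
    m2 = m - m3                     # clauses of length 2
    return ([random_clause(n, 3) for _ in range(m3)]
            + [random_clause(n, 2) for _ in range(m2)])

if __name__ == "__main__":
    formula = random_2p_sat(n=100, c=2.0, p=0.4)
    print(len(formula), "clauses; first clause:", formula[0])
\end{verbatim}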
Several rigorous results have complemented these findings. Achlioptas et al. \cite{2+psat:ralcom97} have shown that for $0\leq p \leq 0.4$ the phase transition in $(2+p)$-SAT only depends on the ``2-SAT part''. One can perhaps use the techniques of \cite{scaling:window:2sat} to confirm statement 1 above.
It is easily seen that $(2+p)$-SAT can be represented in Molloy's framework. On the other hand, Achlioptas, Beame and Molloy \cite{achlioptas:beame:molloy} have shown that for those $p$ for which $(2+p)$-SAT does {\em not} behave like 2-SAT the resolution complexity of the problem is exponential. Using Theorem~\ref{second:order} and results in \cite{achlioptas:beame:molloy} we get:
\begin{theorem}\label{2+p-sat:first-order} Let $p\in [0,1]$ be such that there exist $\epsilon >0$ and $c<\frac{1-\epsilon}{1-p}$ such that random instances of $(2+p)$-SAT with $n$ variables and $c\cdot n$ clauses are w.h.p. unsatisfiable. Then $(2+p)$-SAT has a first-order phase transition. \end{theorem}
It would be interesting to obtain a complete characterization of the order of the phase transition in an arbitrary problem $SAT({\cal P})$. Such a characterization, however, requires substantial advances: exactly locating the ``tricritical point'' $p_{0}$ in random $(2+p)$-SAT (or merely deciding whether or not it equals $0.4$) is an open problem. A complete characterization would yield a solution to this problem as a byproduct.
On the other hand Theorem~\ref{2+p-sat:first-order} suggests an interesting conjecture: whenever the location of the phase transition is {\em not} determined by the implicates of size at most two in the given formula, the phase transition in $SAT({\cal P})$ is first-order. Perhaps techniques in \cite{achlioptas:beame:molloy} can help settle this question.
\section{Discussion}
We have shown that the existence of a first-order phase transition in a random satisfiability problem is often correlated with a $2^{\Omega(n)}$ peak in the complexity of resolution/DPLL algorithms at the transition point.
As for the extent of the connection, it is easy to see that it does not extend to a substantially larger class of algorithms: consider random $k$-XOR-SAT, the problem of testing the satisfiability of random systems of linear equations over ${\bf Z}_{2}$ in which every equation involves $k$ variables. $k$-XOR-SAT is a version of XOR-SAT, one of the polynomial-time cases of satisfiability from Schaefer's Dichotomy Theorem \cite{schaefer-dich}. Indeed, it is easily solved by Gaussian elimination. But Ricci-Tersenghi et al. \cite{zecchina:kxorsat} have presented a non-rigorous argument using the replica method that supports the existence of a first-order phase transition, and we can show this formally (as a direct consequence of Theorem~\ref{implicates:first-order}):
\begin{proposition} Random $k$-XOR-SAT, $k\geq 3$, has a first-order phase transition. \end{proposition}
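To illustrate the remark above that $k$-XOR-SAT is solved in polynomial time, the following sketch (ours, purely illustrative; the helper names are not standard) decides the satisfiability of a random system by a compact form of Gaussian elimination over ${\bf Z}_{2}$, maintaining a row basis indexed by the leading variable of each reduced row:
\begin{verbatim}
# Illustrative sketch: deciding a random k-XOR-SAT system by Gaussian
# elimination over Z_2.  Each equation is a pair
# (list of k distinct variable indices, right-hand side bit).
import random

def random_kxorsat(n, m, k):
    return [(random.sample(range(n), k), random.randint(0, 1))
            for _ in range(m)]

def is_satisfiable(system):
    basis = {}                      # leading bit -> (row mask, rhs)
    for variables, rhs in system:
        mask = 0
        for v in variables:
            mask |= 1 << v          # encode the equation as a bit mask
        while mask:
            top = mask.bit_length() - 1
            if top not in basis:    # new pivot: store the reduced row
                basis[top] = (mask, rhs)
                break
            bmask, brhs = basis[top]
            mask ^= bmask           # eliminate the leading variable
            rhs ^= brhs
        else:                       # row reduced to the equation 0 = rhs
            if rhs == 1:
                return False        # contradiction: system inconsistent
    return True

if __name__ == "__main__":
    print(is_satisfiable(random_kxorsat(n=200, m=150, k=3)))
\end{verbatim}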
To sum up: {\bf the intuitive argument states that a first-order phase transition correlates with a $2^{\Omega(n)}$ lower bound of the complexity of DPLL algorithms at the transition. This is true in many cases, and the underlying reason is that the two phenomena (the jump in the order parameter and the resolution complexity lower bound) have common causes. However, at least for satisfiability problems, this connection does not extend substantially beyond the class of DPLL algorithms.}
\section*{Acknowledgments}
I thank Madhav Marathe, Anil Kumar and Cris Moore for useful comments. In particular Cris made the observation that led to the realization that my previous results implied Theorem~\ref{second:order}.
This work has been supported by the Department of Energy under contract W-7405-ENG-36.
\end{document}
C.07 Predicting individualized risk of recurrence: development and validation of a DNA-methylation based nomogram in meningioma
F Nassiri, Y Mamatjan, S Suppiah, J Badhiwala, S Mansouri, S Karimi, O Saarela, L Poisson, H Ng, H Noushmehr, P Harter, P Baumgarten, M Weller, M Preusser, C Mende, F Sahm, A von Deimling, K Aldape, G Zadeh
Journal: Canadian Journal of Neurological Sciences / Volume 46 / Issue s1 / June 2019
Published online by Cambridge University Press: 05 June 2019, p. S13
Print publication: June 2019
Background: Challenges in predicting risk of recurrence for individual patients with meningioma limits appropriate selection of patients who may benefit from adjuvant radiation therapy to delay recurrence. Here, we aimed to develop and validate a combined clinicomolecular predictor of early recurrence for individual patients with meningiomas. Methods: A methylation-based predictor of 5-year recurrence-free-survival (RFS) was developed using DNA-methylation profiles from a training cohort of 228 patients. Model performance was compared to a standard-of-care histological-based model using three independent cohorts (N=54 ;N=140; N=64 patients). Subsequently, a nomogram that integrated the methylome-based predictor with prognostic clinical factors was developed and validated. Results: The methylome-based predictor of 5-year RFS performed favorably compared to a grade-based predictor when tested using the three validation cohorts (ΔAUC=0.10, 95%CI 0.03 – 0.018) and was independently associated with RFS on multivariable Cox regression analysis (HR=3.6, 95%CI 1.8–7.2, P<0.001). A nomogram combining the methylome-predictor with clinical factors demonstrated greater discrimination for recurrence than a nomogram using clinical factors alone (ΔAUC=0.25, 95%CI 0.22–0.27) and resulted in two risk groups with distinct recurrence patterns (HR=7.7, 95%CI 5.3–11.1, P<0.001) and clinical implications. Conclusions: Our validated models provide important novel prognostic information that could be used to individualize decisions regarding post-operative therapeutic interventions in meningioma.
OS14 - 208 The prognostic role of pre-operative complete blood count (CBC) in progression-free survival in patients with meningioma
S. Karimi, P.D. Tonge, L. Gonen, R. Tabasinejad, G. Zadeh, K. Aldape
Journal: Canadian Journal of Neurological Sciences / Volume 43 / Issue S4 / October 2016
Published online by Cambridge University Press: 18 October 2016, p. S6
Factors which might influence outcome in patients with meningioma are not well understood. Previous studies have examined associations of laboratory blood values, including hemoglobin levels, with patient outcomes in cancer. We hypothesized that changes in CBC before tumor resection can be used as a prognostic factor for tumor recurrence/progression in meningioma. To address this, we gathered the clinical and pre-operative CBC results for final analysis from 226 patients (64 males and 162 females) who underwent craniotomy for primary meningioma (grades: 157 WHO GI, 59 GII, 10 GIII) at our institution between 2001 and 2015. Individual parameters were analyzed for correlation with progression-free survival. The median recurrence-free survival (RFS) was not reached and follow-up ranged from 0.3 to 14 years. Fifty-six patients (25%) had anemia and 30% of the patients showed leukocytosis using standard cut-offs. On univariate analyses, low hemoglobin (Hb) level, as well as high leukocyte (Lkc), neutrophil (Neutro) and monocyte counts, correlated with worse RFS. As expected, tumor grade was correlated with RFS. Low Hb level and high Lkc and Neutro counts were all significantly associated with RFS after adjusting for grade. Strikingly, 32% of patients with pre-operative anemia experienced a recurrence at 5 years, compared with only 11% of non-anemic patients. Conclusion: In this exploratory study, we find that pre-operative CBC data, which is readily available, may contain prognostic information relevant to subsequent risk of recurrence or progression in meningioma. While the biological mechanism for these associations is not clear, they represent hypotheses for further investigation.
Do Current Lattice Boltzmann Methods for Diffusion and Advection-Diffusion Equations Respect Maximum Principle and the Non-Negative Constraint?
S. Karimi, K. B. Nakshatrala
Journal: Communications in Computational Physics / Volume 20 / Issue 2 / August 2016
Published online by Cambridge University Press: 21 July 2016, pp. 374-404
Print publication: August 2016
The Lattice Boltzmann Method (LBM) has established itself as a popular numerical method in computational fluid dynamics. Several advancements have been recently made in LBM, which include multiple-relaxation-time LBM to simulate anisotropic advection-diffusion processes. Because of the importance of LBM simulations for transport problems in subsurface and reactive flows, one needs to study the accuracy and structure preserving properties of numerical solutions under the LBM. The solutions to advective-diffusive systems are known to satisfy maximum principles, comparison principles, the non-negative constraint, and the decay property. In this paper, using several numerical experiments, it will be shown that current single- and multiple-relaxation-time lattice Boltzmann methods fail to preserve these mathematical properties for transient diffusion-type equations. We will also show that these violations may not be removed by simply refining the discretization parameters. More importantly, it will be shown that meeting stability conditions alone does not guarantee the preservation of the aforementioned mathematical principles and physical constraints in the discrete setting. A discussion on the source of these violations and possible approaches to avoid them is included. A condition to guarantee the non-negativity of concentration under LBM in the case of isotropic diffusion is also derived. The impact of this research is twofold. First, the study poses several outstanding research problems, which should guide researchers to develop LBM-based formulations for transport problems that respect important mathematical properties and physical constraints in the discrete setting. This paper can also serve as a good source of benchmark problems for such future research endeavors. Second, this study cautions the practitioners of the LBM for transport problems with the associated numerical deficiencies of the LBM, and provides guidelines for performing predictive simulations of advective-diffusive processes using the LBM.
By Magdalena Anitescu, Charles E. Argoff, Arash Asher, Nyla Azam, Nomen Azeem, Sachin K. Bansal, Jose E. Barreto, Rodrigo A Benavides, Niteesh Bharara, Justin B. Boge, Robert B. Bolash, Thomas K. Bond, Christopher Centeno, Zachariah W. Chambers, Jonathan Chang, Grace Chen, Hamilton Chen, Jeffry Chen, Jianguo Cheng, Natalia Covarrubias, Claire J. Creutzfeldt, Gulshan Doulatram, Amirpasha Ehsan, Ike Eriator, Jeff Ericksen, Mark Etscheidt, Frank J. E. Falco, Jack Fu, Timothy Furnish, Annemarie E. Gallagher, Kingsuk Ganguly, Eugene Garvin, Cliff Gevirtz, Scott E. Glaser, Brandon J. Goff, Harry J. Gould, Christine Greco, Jay S. Grider, Maged Guirguis, Qiao Guo, Justin Hata, John Hau, Garett J. Helber, Eric R. Helm, Lori Hill Marshall, Dean Hommer, Jeffrey Hopcian, Eric S. Hsu, Jakun Ing, Tracy P. Jackson, Gaurav Jain, Chrystina Jeter, Alan David Kaye, James Kelly, Soorena Khojasteh, Ankur Khosla, Daniel Krashin, Monika A. Krzyzek, Prasad Lakshminarasimhiah, Steven Michael Lampert, Garrett LaSalle, Quan D. Le, Ankit Maheshwari, Edward R. Mariano, Joaquin Maury, John P. McCallin, John Michels, Natalia Murinova, Narendren Narayanasamy, Rebekah L. Nilson, Elliot Palmer, Vikram B. Patel, Devin Peck, Donald B. Penzien, Danielle Perret Karimi, Tilak Raj, Michael R. Rasmussen, Mohit Rastogi, Rahul Rastogi, Nashaat N. Rizk, Rinoo V. Shah, Paul A. Sloan, Julian Sosner, A. Raj Swain, Minyi Tan, Natacha Telusca, Santhosh A. Thomas, Andrea Trescot, Michael Truong, Jason Tucker, Richard D. Urman, Brandon A. Van Noord, Nihir Waghela, Irene Wu, Jiang Wu, Jijun Xu, Jinghui Xie, William Yancey
Edited by Alan David Kaye, Louisiana State University, Rinoo V. Shah
Book: Case Studies in Pain Management
Published online: 05 October 2014
Print publication: 16 October 2014, pp xi-xv
Can the same dose data be estimated from phantoms with different anatomies?
K. Karimi Shahri, L. Rafat Motavalli, H. Miri Hakimabad
Journal: Radioprotection / Volume 48 / Issue 4 / October 2013
In this paper, the effect of additional adipose and muscle layers was investigated on the effective dose and the organ absorbed dose. Calculations were performed using the Monte Carlo N-Particle Transport Code (MCNP) and the ORNL mathematical phantom for external photon and neutron beams. Variations in adipose and muscle tissue thickness were modeled by adding layers of adipose and soft tissues around the torso of the phantom. The effective dose decreased by about 7%–40% when the thickness of the extra layer increased from 0.5 to 5 cm considering all photon energies (10 keV–10 MeV) and neutron energies (10⁻⁹–20 MeV) for anterior-posterior, posterior-anterior, left-lateral, right-lateral, rotation and isotropic irradiation geometries. The results calculated here were compared with those reported in previous studies such as those of the VIPMAN, NORMAN05, MASH-3 and ICRP reference voxel phantoms. Our data show that adding proper adipose or muscle layers to two very different phantoms can cause similar effective dose values, and also more than half of the organ absorbed doses have satisfactory agreement.
Chemical composition, cell wall features and degradability of stem, leaf blade and sheath in untreated and alkali-treated rice straw
E. Ghasemi, G. R. Ghorbani, M. Khorvash, M. R. Emami, K. Karimi
Journal: animal / Volume 7 / Issue 7 / July 2013
Published online by Cambridge University Press: 11 March 2013, pp. 1106-1112
Print publication: July 2013
Three dominant morphological fractions (i.e. leaf blade (LB), leaf sheath (LS) and stem) were analysed for chemical composition and ruminal degradability in three rice straw varieties. In one variety treated with alkali, cell wall features were also characterized using Fourier Transform Infrared Spectroscopy (FTIR) and Scanning Electron Microscopy. The highest concentrations of cell wall carbohydrates (hemicellulose and cellulose) were observed in LS, whereas the highest concentrations of non-fibre (silica, phenolic compounds and CP) and lignin were recorded for LB. The stem had the lowest silica and hemicellulose contents but intermediate levels of other components. In terms of ruminal degradability, stem ranked higher than LB, which was followed by LS. Hemicellulose was found to be less degradable than either dry matter or cellulose in all the three fractions investigated. FTIR results indicated that the highest levels of hydrogen bonding, esterification and crystallinity within the cell wall components belonged to LS. In the alkaline treatment, these indices decreased to a larger extent for leaf fractions and a greater improvement was achieved in the degradability of LB and LS compared with that of stem. In the 24-h ruminal incubation, the silicified layer of epidermis and the underlying cell walls showed a rigid structure in the control fractions, whereas the treatment with NaOH resulted in crimping of the silicified cuticle layer and the loss of integrity in cell structure. Despite the highest silica and lignin contents observed in LB, LS showed the lowest degradability, which might be due to its high level of hydrogen bonding, crystallinity and esterification within its cell wall components as well as its high hemicellulose content.
Prenatal stress enhances severity of atherosclerosis in the adult apolipoprotein E-deficient mouse offspring via inflammatory pathways
H. Ho, S. Lhotak, M. E. Solano, K. Karimi, M. K. Pincus, R. C. Austin, P. Arck
Journal: Journal of Developmental Origins of Health and Disease / Volume 4 / Issue 1 / February 2013
Published online by Cambridge University Press: 01 November 2012, pp. 90-97
Print publication: February 2013
Atherosclerosis is the underlying cause of cardiovascular disease and stroke. Endothelial cell dysfunctions are early events in atherosclerosis, resulting in the recruitment of circulating monocytes. The immune system can elicit an inflammatory response toward the atherosclerotic lesion, thereby accelerating lesion growth. Risk factors for atherosclerosis include hypertension, smoking, stress perception or low birth weight. As prenatal stress challenge decreases the birth weight and affects the offspring's postnatal immune response, we aimed to investigate whether prenatal stress contributes to the development of atherosclerosis in mice. Syngenic pregnant apolipoprotein E-deficient (apoE−/−) dams were exposed to sound stress on gestation days 12.5 and 14.5. The presence and size of atherosclerotic plaques in the offspring at the age of 15 weeks was evaluated by histomorphology, accompanied by flow cytometric analysis of the frequency and phenotype of monocytes/macrophages and regulatory T (Treg) cells in the blood. Further, cytokine secretion of peripheral blood lymphocytes was analyzed. In response to prenatal stress challenge, an increased frequency of large atherosclerotic plaques was detectable in apoE−/− offspring, which was particularly profound in females. Prenatal stress also resulted in alterations of the offspring's immune response, such as a decreased frequency of Treg cells in blood, alterations of macrophage populations in blood and an increased secretion of inflammatory cytokines. We provide novel evidence that prenatally stressed adult offspring show an increased severity of atherosclerosis. As Treg cells are key players in dampening inflammation, the observed increase in atherosclerosis may be due to the lack of Treg cell frequency. Future interdisciplinary research is urgently required to understand the developmental origin of prenatal stress-induced atherosclerosis. The availability of our model may facilitate and foster such research endeavors.
ON THE REGULAR DIGRAPH OF IDEALS OF COMMUTATIVE RINGS
M. AFKHAMI, M. KARIMI, K. KHASHYARMANESH
Journal: Bulletin of the Australian Mathematical Society / Volume 88 / Issue 2 / October 2013
Published online by Cambridge University Press: 16 October 2012, pp. 177-189
Let $R$ be a commutative ring. The regular digraph of ideals of $R$, denoted by $\Gamma (R)$, is a digraph whose vertex set is the set of all nontrivial ideals of $R$ and, for every two distinct vertices $I$ and $J$, there is an arc from $I$ to $J$ whenever $I$ contains a nonzero divisor on $J$. In this paper, we study the connectedness of $\Gamma (R)$. We also completely characterise the diameter of this graph and determine the number of edges in $\Gamma (R)$, whenever $R$ is a finite direct product of fields. Among other things, we prove that $R$ has a finite number of ideals if and only if $\mathrm {N}_{\Gamma (R)}(I)$ is finite for all vertices $I$ in $\Gamma (R)$, where $\mathrm {N}_{\Gamma (R)}(I)$ is the set of all vertices adjacent to $I$ in $\Gamma (R)$.
Different effective dose conversion coefficients for monoenergetic neutron fluence from 10⁻⁹ MeV to 20 MeV – A methodological comparative study
H. Miri H., L. Rafat M., K. Karimi S.
Journal: Radioprotection / Volume 47 / Issue 2 / April 2012
Print publication: April 2012
Calculations are presented of the effective doses per unit neutron fluence according to the ICRP publications 60 and 103. The Monte Carlo N-Particle (MCNPX) code was used for six geometrical conditions of irradiation (Anterior-Posterior, Posterior-Anterior, Left-Lateral, Right-Lateral, Rotation and Isotropic) on Oak Ridge National Laboratory (ORNL) modified mathematical adult phantoms for monoenergetic neutrons from 10⁻⁹ MeV to 20 MeV. The conversion coefficients were compared with the results of an analytical phantom (Medical Internal Radiation Dose (MIRD-5)) and some voxel models (ICRP/ICRU Reference Voxel Phantom (ICRP/ICRU RVP), HANAKO, TARO and Visible Human Project (VIPMAN)). From these comparisons, one can conclude that large discrepancies between data sets appear when different wR values and phantom sizes have been used for the calculations. Furthermore, the differences in applied Monte Carlo codes or simulated body models could make some discrepancies less than 15%.
D-15 A New Approach to High Throughput Diffraction Analysis
S. Roncallo, S. A. Ansari, D. W. Lane, O. Karimi, K. D. Rogers, J. M. Gregoire
Journal: Powder Diffraction / Volume 25 / Issue 2 / June 2010
Published online by Cambridge University Press: 20 May 2016, p. 210
ENT manifestations in Iranian patients with primary antibody deficiencies
A Aghamohammadi, K Moazzami, N Rezaei, A Karimi, M Movahedi, M Gharagozlou, S Abdollahzade, N Pouladi, A Kouhi, M Moin
Journal: The Journal of Laryngology & Otology / Volume 122 / Issue 4 / April 2008
One hundred and nine patients with primary antibody deficiencies were selected in order to determine the frequency of ENT complications.
Demographic information and ENT medical histories were collected for each patient. Duration of study for each patient was divided into two periods of before diagnosis and after diagnosis and the initiation of treatment.
Eighty-two of 109 patients (75.2 per cent) experienced ENT infections during the course of the disease (63: otitis media, 75: sinusitis and nine: mastoiditis). At the time of diagnosis, 52 (47.7 per cent) out of 109 patients presented with an ENT symptom. The frequencies of episodes were 27 for sinusitis and 25 for otitis media (one complicated with mastoiditis). After immunoglobulin replacement therapy the incidence of otitis media was reduced from 1.75 before treatment to 0.39 after treatment per patient per year (p = 0.008). The incidence of sinusitis also significantly decreased from 2.38 to 0.78 (p value = 0.011).
ENT infections are common medical problems in primary antibody deficiency patients. Persistent and recurrent ENT infections should be suspected as originating from a possible underlying immunodeficiency.
Sites of phytase and xylanase activities in the gastrointestinal tract of broiler chickens
A.A. Sadeghi, P. Shawrang, K. Karimi
Journal: Proceedings of the British Society of Animal Science / Volume 2007 / April 2007
Published online by Cambridge University Press: 23 November 2017, p. 28
Because digesta vary in pH in different gastrointestinal segments of poultry, exogenous phytase or xylanase may exhibit differences in activity along the gastrointestinal tract. Previous reports indicated that the stomach is the major site of exogenous microbial phytase activity, with no further activity found in the small intestine of piglets. Information regarding exogenous phytase or xylanase activity in the gastrointestinal tract of poultry is largely unavailable. Because exogenous phytase or xylanase activity in the digesta is extremely low, normal phytase or xylanase activity measurements are prone to errors resulting from background interference contributed by the exogenous inorganic phosphate or xylose in the digesta (Walsh et al., 1995). The aim of this study was to utilize electrophoresis activity stain to detect the activity of phytase or xylanase in different gastrointestinal segments of broiler chickens fed diets containing exogenous enzymes.
Effects of using dry sugar beet molasses on dairy cattle performance
K Karkoodi, S S Mirghaffari, F Forudi, N Karimi
Journal: Proceedings of the British Society of Animal Science / Volume 2006 / March 2006
Published online by Cambridge University Press: 23 November 2017, p. 135
Print publication: March 2006
Sugarcane and sugar beet molasses are dark brown juices which remain after the crystallization or concentration of sugar syrup. The problematic nature of molasses has prompted sugar producers to address the difficulties of preservation, transportation and mixing, as well as the problem of sticky concentrate mixtures and their hard handling in cold weather. Thus, in recent years, the Iranian sugar industry has moved to develop dehydration technology for molasses, but it was not clear whether the applied technology could be efficiently employed and whether such a product could replace sugar beet pulp, which is scarce in cold seasons. This trial was performed to study whether dry sugar beet molasses (DSBM) could efficiently replace sugar beet pulp with no side effects.
Fracture mechanisms of GaAs under nanoscratching
J.-M. Solletti, M. Parlinska-Wojtan, J. Tharian, K. Wasmer, J. Michler, C. Ballif, D. Schulz, A. Karimi
Journal: MRS Online Proceedings Library Archive / Volume 841 / 2004
Published online by Cambridge University Press: 01 February 2011, R9.15
Print publication: 2004
Nanoscratching on GaAs (001) by a pyramidal diamond tip (Berkovich) indenter has been carried out under different loads, scratching velocities and directions. Plastic deformation and fractures induced by scratching have been investigated by atomic force microscopy (AFM), and by scanning and transmission electron microscopy (SEM and TEM, respectively). Surface images revealed radial and surface tensile cracks. Focused ion beam (FIB) milling of the contact area revealed median and shear fracture distribution in the volume. The different cracks were characterized for various scratching conditions in terms of their direction of propagation, extension and frequencies. Plastic deformations have been characterized by vertical displacement of material. No purely ductile zone was observed; GaAs deformation occurred by fractures and plastic strain. Their preponderances are discussed in terms of material properties.
Wavelength-Invariant Resist Composed of Bimetallic Layers
Y. Tu, M. Karimi, N. Morawej, W. N. Lennard, T. W. Simpson, J. Peng, K. L. Kavanagh, G. H. Chapman
Published online by Cambridge University Press: 11 February 2011, N3.8
Two layer co-sputtered Bi over In thin films (40 nm/layer) act as a microfabrication resist with many potential applications. Their physical, chemical and optical characteristics change after laser exposures that produce a rapid thermal anneal in selected areas. Unlike organic photoresists, Bi/In is a bimetallic thermal resist whose sensitivity shows a near wavelength invariance for wavelengths from Near IR to UV. The laser-induced patterns are developed by an etch that selectively removes unexposed areas and retains converted ones. The optical density (OD) of 40 nm thick Bi/In films on quartz substrates, for example, changes from 3.3 OD to 0.37 OD in the annealed area. This has enabled the creation of direct-write photomasks for standard photoresist exposures. In this paper, the composition, morphology, and nanostructure of the resist before and after laser processing were studied in order to determine the mechanism of the laser-induced material conversion. AFM, XRD, and TEM show that the as-deposited films are polycrystalline, continuous, but with a rough, island morphology. Furnace anneals in air above the eutectic temperature (150–250°C, 3 hours) result in the formation of the tetragonal phase BiIn with a small degree of oxidation. The island morphology is maintained but there is evidence of melting and recrystallization. Transparency is much lower than after laser annealing. RBS and NRA depth profile analysis show that Bi/In films exposed to laser annealing in air contain a large fraction of oxygen and suggest that the converted film may be a BiIn0.6O6 /Bi0.3InO6 bilayer.
Magnetostrictive Damping to Reduce Noise and Vibrations
A. Karimi, P. H. Giauque, J. L. Martin
Published online by Cambridge University Press: 16 February 2011, 353
The magnetomechanical damping capacity of cast and thermally sprayed Fe-Cr based alloys has been investigated using free and forced vibration techniques. The coatings were deposited using a vacuum plasma spraying method and the cast alloys were prepared in a high frequency furnace under an argon atmosphere. Three laboratory devices including a torsion pendulum, a resonant bar and a cantilever were used to cover a wide range of frequencies and amplitudes varying between f = 1Hz to 10 kHz, and ε = 10−6 to 10−3. The damping capacity of the plasma sprayed coatings was found to be comparable to that of cast alloys. Appropriate heat treatments improved the damping capacity of both coatings and cast alloys by several times. The variation of the loss factor as function of the vibration amplitude showed a maximum, but versus frequency exhibited a slightly monotonous character. The magnetic domains were observed using the magneto-optical Kerr effect and their modification under heat treatments was associated with different values of the damping capacities. | CommonCrawl |
\begin{document}
\title{The largest fragment of a homogeneous fragmentation process}
\author{Andreas Kyprianou, Francis Lane and Peter M\"orters\footnote{ Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK}}
\date{}
\maketitle
\begin{abstract} \noindent We show that in homogeneous fragmentation processes the largest fragment at time $t$ has size $$e^{-t \Phi'(\overline{p})}t^{-\frac32 (\log \Phi)'(\overline{p})+o(1)},$$ where $\Phi$ is the L\'evy exponent of the fragmentation process, and $\overline{p}$ is the unique solution of the equation \smash{$(\log \Phi)'(\bar{p})=\frac1{1+\bar{p}}$.} We argue that this result is in line with predictions arising from the classification of homogeneous fragmentation processes as logarithmically correlated random fields. \end{abstract}
\section{Introduction}
There has been considerable interest in the past couple of years in a universality class of stochastic models called \emph{logarithmically correlated fields}. This class includes branching Brownian motion~\cite{B78, R13, ABK13}, branching random walks~\cite{AR09, shi, A13}, the Gaussian free field on a planar lattice domain~\cite{D06, BZ12, BDZ16}, the logarithmically correlated random energy model~\cite{FDR09}, Gaussian $1/f$-noise~\cite{FDR12}, nested conformal loops~\cite{A15}, and Gaussian multiplicative chaos~\cite{RV14, M15} to name just a few. Plenty of interesting features arise from the conjectured membership of combinatorial and probabilistic objects such as eigenvectors of random matrix ensembles in this class, conjectures of Fyodorov, Hiary and Keating~\cite{FHK12, FK14} on the maximum of the Riemann zeta function on an interval of the critical line and of the characteristic polynomial of random unitary matrices being well-known examples, see~\cite{Arg16} for a survey.
Let us briefly describe some of the heuristic features of this class, as sketched, for example, in~\cite{FG14}.
Characteristic of these models is that, loosely speaking, at a large fixed level~$n$ they can be described as a centred field $(V(x) \colon x\in 2^{-n} {\mathbb Z}^d \cap (0,1)^d)$ with correlations obeying a scaling of the type \begin{equation}\label{var0}
\mathbb{E} \big[ V(x) V(y) \big] \sim - d \, \Psi''(0) \, \log |x-y|, \qquad \mbox{ if } \mbox{$2^{-n}$} \ll |x-y| \ll 1, \end{equation} where $\Psi$ is a characteristic exponent given as $$\mathbb{E}\big[ e^{pV(x)}\big] \sim 2^{dn(\Psi(p)-1)}.$$ The conjectured behaviour that the models in this universality class have in common relates to their extremal geometry. It has been argued (at varying levels of detail and rigour) that the highest peak at level~$n$ in a logarithmically correlated field satisfies \begin{equation}\label{P1} \max_{x\in 2^{-n} {\mathbb Z}^d \cap [0,1]^d} V(x) \en{=} \Psi'(\bar{q}) (d \log 2) n
- \frac32 (\log \Psi)'(\bar{q}) \log n + O(1) \end{equation} in probability, where $\bar{q}$ solves the equation \smash{$\Psi'(\bar{q})=\frac{\Psi(\bar{q})}{\bar{q}}$.} In some cases finer results have been obtained, including the precise distribution of the asymptotic random constant of order one in the expansion of $\max V(x)$ and fine results on the peaks seen from the largest peak, see for example~\cite{Br83, A13, ABK13}.
An alternative approach to logarithmically correlated fields comes from the work of Fyodorov, Le Doussal and Rosso~\cite{FDR09}. They look at random fields satisfying a multifractal formalism and conjecture that, under natural conditions, the disorder-induced multifractality implies a logarithmic scaling of the correlations. The highest peak then satisfies \begin{equation}\label{P2} \max_{x\in 2^{-n} {\mathbb Z}^d \cap [0,1]^d} V(x) \en{=} \alpha_+ (d \log 2) \, n + \frac32 \big(f'(\alpha_+)\big)^{-1} \log n +O(1), \end{equation} where $f(\alpha)= \dim\{ x \colon \lim \frac1n V(x) = \alpha (d \log 2) \big\}>0$ is the multifractal spectrum on the domain $(\alpha_-, \alpha_+)$ with boundary values given by $f(\alpha_-),f(\alpha_+)=0$. The multifractal formalism relates the spectrum to the characteristic exponent through the Legendre transform $\Psi(q)=\max\{ f(\alpha)+ q \alpha\}$.
The purpose of the present paper is to align the class of homogeneous fragmentation processes with the universality class of logarithmically correlated fields by describing the processes' extremal behaviour and arguing that the rigorous result we obtain is consistent with the predictions obtained from the heuristics above. This makes homogeneous fragmentation processes one of the very few examples of a \emph{non-Gaussian} field where the universality hypothesis can be verified. It also gives non-rigorous evidence that further properties of the class of logarithmically correlated fields, such as convergence of the constant order term in \eqref{P2} to a random variable of a particular shape and existence of a freezing transition, also hold in this case, but we will not give technical proofs of this.
Fragmentation processes represent the (typically) continuous splitting of an object into smaller parts. We describe a fragmentation process by means of a random family $\{ {\mathcal I}^x(t) \colon x\in (0,1), t\geq 0\}$ of intervals such that ${\mathcal I}^x(t)\subseteq (0,1)$ is the interval containing $x$ at time~$t$. We assume that the following consistency relations are satisfied: \begin{enumerate} \item $x\in {\mathcal I}^x(t)$ \hspace{2pt}; \item ${\mathcal I}^x(t)\subseteq {\mathcal I}^x(s)$ if $s<t$ \hspace{2pt}; \enskip and \item if $y\in{\mathcal I}^x(t)$, then ${\mathcal I}^x(t)={\mathcal I}^y(t)$. \end{enumerate} \pagebreak[3]
The random evolution of the fragmentation is given by a \emph{dislocation measure}~$\nu$ defined on the partitions of the unit interval. Every interval $ {\mathcal I}^x(t)$ decomposes independently at rate $\nu(du)$ into parts whose relative sizes are given by the partition~$u$. If the measure $\nu$ is finite then the process $(\log I^x(t) \colon t>0)$ is a random walk for every $x\in(0,1)$ and the entire system is a branching random walk. Our interest is therefore mostly in the case of infinite dislocation measure, when both particle movement and branching become instantaneous and classical results on branching random walk cannot be applied. A rigorous definition of the process in the infinite dislocation measure case will be given in Section~2 of this paper.
We let $\upsilon$ be uniformly distributed in the unit interval and define the L\'evy exponent, when it is finite, as \[
\Phi(p) := - \log \prob{E} \big[ |{\mathcal I}^\upsilon(1)|^p \big]<\infty, \] or, equivalently, in terms of the dislocation measure as \[
\Phi(p) = \int \bigg( 1 - \sum_{i = 1}^{\infty} |u_i|^{p+1} \bigg) \nu(du) , \] where $(u_i: i \in \mathbb{N})$ is an enumeration of the partition sets of $u$, see~\cite{bas} for details. Our main result, Theorem~\ref{mainthm}, describes the size of the largest fragment at time $t\uparrow\infty$ as \[
\max_{x\in[0,1]} |{\mathcal I}^x(t)|=e^{-t \Phi'(\bar{p})}t^{-\frac32 (\log \Phi)'(\bar{p})+o(1)}, \] where \smash{$\bar{p}$} is the unique solution of the equation $(\log \Phi)'(\bar{p})=\frac1{1+\bar{p}}$.
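To fix ideas, here is a simple example, chosen by us only to make these quantities explicit (it has a finite dislocation measure and hence falls into the branching random walk regime): suppose every fragment splits at rate one at a uniformly chosen point, i.e.\ $\nu$ is the law of the partition $\{(0,U),(U,1)\}$ with $U$ uniform on $(0,1)$. Then \[ \Phi(p) = 1 - \prob{E}\big[ U^{p+1} + (1-U)^{p+1} \big] = 1 - \frac{2}{p+2} = \frac{p}{p+2}, \] so that $(\log \Phi)'(p)=\frac1p-\frac1{p+2}$ and the equation $(\log \Phi)'(\bar{p})=\frac1{1+\bar{p}}$ reduces to $\bar{p}^{\,2}=2$, i.e.\ $\bar{p}=\sqrt{2}$. In this case $\Phi'(\bar{p})=2/(2+\sqrt{2})^{2}=3-2\sqrt{2}$, and the largest fragment at time $t$ has size $e^{-(3-2\sqrt{2})t}\, t^{-\frac32(\sqrt{2}-1)+o(1)}$.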
\section{Preliminaries and Main Result}
Before stating the main result of this paper, we briefly discuss the definition of conservative homogeneous interval fragmentation processes and some of their basic properties. An informal description of such a process is as follows. The process starts from some initial configuration of fragments (i.e. subsets of $(0,1)$), which break up independently of one another as time passes. In general, these fragmentation events occur instantaneously in time. Looking at a single fragment at a given time, its subsequent evolution (after scaling to unit length) looks precisely the same as the fragmentation of any other (similarly scaled) particle. This means, in particular, that our fragmentations are time-homogeneous - the rate of `breaking up' is independent of particle size. Finally, we allow no loss of mass; the sum of the lengths of the fragments at any given time equals the sum of the lengths of the fragments in the initial configuration.
Let us now briefly state the formal definition of a conservative homogeneous interval fragmentation process, referring to~\cite{bas} for proofs and further details. Let $\mathcal{U}$ denote the space of open subsets of $(0,1)$, which serves as our state-space.
Each set $u \in \mathcal{U}$ has a unique decomposition into disjoint, non-empty, open intervals. The intervals comprising this decomposition are referred to as the \textit{fragments} or \textit{particles} of the set, and represent the `pieces' of the object that `falls apart at random'. For $u,v \in \mathcal{U}$, we define the distance between $u$ and $v$ to be the Hausdorff distance between $(0,1)\setminus u$ and $(0,1) \setminus v$ (see \cite{bertselfsim}). We also endow $\mathcal{U}$ with the $\sigma$-algebra generated by the open sets corresponding to this distance, which we denote by $\mathcal{B}(\mathcal{U})$.
Our basic data are a family $(q_{t}\colon t > 0)$ of probability measures defined on $(\mathcal{U},\mathcal{B}(\mathcal{U}))$. We fix an interval $I:= (a,b) \subseteq (0,1)$ and write $\mathcal{I}$ for the set of open subsets of $I$ (with the distance inherited from $\mathcal{U}$ and the corresponding $\sigma$-algebra). We introduce the affine map $g_I : (0,1) \rightarrow I$, and retain the notation $g_I$ for its natural extension to a map from $\mathcal{U}$ to $\mathcal{I}$. We write $q^I_{t}$ for the image measure of $q_{t}$ under the map $g_I$, so that $q^I_{t}$ is a probability measure on $\mathcal{I}$. Given an open set $u \in \mathcal{U}$ and a measurable enumeration $(u_i: i \in \mathbb{N})$ of the intervals in its decomposition, we write $q^u_{t} $ for the distribution of $\cup X_i$ where the $X_i$ are independent random variables with laws $q^{u_i}_{t}$ respectively.
\begin{definition} \thlabel{fragproc} A Markov process $ U:=(U(t): t \geq 0)$ taking values in $\mathcal{U}$ is called a \textit{conservative homogeneous interval fragmentation} if it has the following properties: \begin{enumerate} \item $U$ is continuous in probability; \item $U$ is nested in the sense that $s>t \Rightarrow U(s) \subseteq U(t)$; \item \textit{Fragmentation property}: there exists some family $(q_{t}: t > 0)$ of probability measures on $\mathcal{U}$ such that
$$ \forall t \geq 0 \quad \forall s>t \quad \forall A \in \mathcal{B}(\mathcal{U}) \quad \text{\textnormal{\prob{P}}}\big(U(s) \in A \hspace{3pt} \big| \hspace{3pt}U(t)\big) = q^{U(t)}_{s-t}(A) ;$$
\item $|U(t)| = 1$ for all $t \geq 0$. \end{enumerate} \end{definition}
The filtration generated by $U$ is denoted by $ \mathcal{F}:=(\mathcal{F}_t : t \geq 0)$,
and the law of the fragmentation started from $u \in \mathcal{U}$ by $\prob{P}_u$, with corresponding expectation operator $\prob{E}_u$. We define $\prob{P} := \prob{P}_{(0,1)}$ with expectation operator $\prob{E}$.
Denoting by $u^*$ the largest interval component of $u \in \mathcal{U}$ we call a measure $\nu$ on $\mathcal{U}$ a \textit{dislocation measure} if it satisfies $\nu((0,1)) = 0$ and
\begin{equation} \label{integrability}
\int_{\mathcal{U}} \left( 1 - |u^*| \right) \nu(du) < \infty, \end{equation} c.f. Definition 2.6 of \cite{bas}. Given a homogeneous interval fragmentation we obtain a dislocation measure~$\nu$ by letting, for $u \in\mathcal{B}(\mathcal U)$ and $I=(0,1)$,
\[
\nu(u) \en{=}
\lim_{t\downarrow 0} \frac1t \big( q_t^I(u)- q_0^I(u) \big). \]
The measure $\nu$ is called the \textit{dislocation measure corresponding to} $U$, and it characterises the law of~$U$.
Next we introduce the collection of \textit{tagged fragments}. Given a fragmentation process $U$ and $x \in (0,1)$, the \textit{x-tagged process} is simply the process of intervals in $U$ containing $x$. We write $\mathcal{I}^x(t)$ for this fragment at time $t \geq 0$, and $ I^x(t)$ for its length. We also introduce the family of processes $(\xi^x : x \in (0,1) ) $, where $\xi^x(t):= -\log I^x(t) $. Letting $\upsilon$ denote a uniform random variable on $(0,1)$ which is independent of all the random variables introduced above, the processes $(\mathcal{I}_t),(I_t)$ and $(\xi_t)$ are defined by replacing $x$ with $\upsilon$ in the preceding definitions. These are the corresponding \textit{randomly tagged} processes. Importantly, $\xi$ is a subordinator. We denote its Laplace exponent by $\Phi(p) := - \log \prob{E} ( e^{-p \xi(1)})$, which exists and is infinitely differentiable on the interval $(\underline{p},\infty)$ for some $\underline{p} \in [-1,0]$.
Using the concavity of $\Phi$, it is easy to show that the equation \begin{equation} \label{overbar} \frac{\Phi(p)}{1+p} = \Phi'(p) \end{equation} has a unique solution $\overline{p} \in (\underline{p}, \infty)$, and that this solution is positive. The value $\overline{p}$ has great importance in the present context. For instance, with $c_{\overline{p}} := \Phi'(\overline{p})$, we have \[ \lim_{t \rightarrow \infty} \frac{\inf_{x \in (0,1)} \xi^x_t}{t} = c_{\overline{p}} \quad \textrm{a.s.}, \] giving the first term in the asymptotic expansion of the size of the largest particle (see, for example, \cite{BER01}). \\
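Note that \eqref{overbar} can equivalently be written as $(\log \Phi)'(\overline{p}) = \Phi'(\overline{p})/\Phi(\overline{p}) = \frac{1}{1+\overline{p}}$, which is the form of the equation used in the Introduction.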
We are now ready to state the main result of this paper, which identifies the second term of this asymptotic expansion in terms of $\overline{p}$ :
\begin{thm} \thlabel{mainthm} Starting from any initial configuration in $\mathcal{U}$, $$\frac{\inf_{x \in (0,1)} \xi^x(t) - c_{\overline{p}}t}{\log t} \enskip \longrightarrow \enskip \frac{3}{2}(\overline{p} + 1)^{-1} \enskip =: l \quad \textrm{in probability as} \enskip t \uparrow \infty.$$ \end{thm}
The proof is based on martingale methods and is close in spirit to that of~\cite{shi}. Roughly speaking, we will define random variables that count the number of particles that are too large or small. Using two tools - a Many-to-One Lemma
and a change of measure (to be introduced shortly) - we will estimate the moments of these random variables using the fluctuation theory of L\'evy processes.
To be precise, let us introduce the processes $\zeta^x_t := \xi^x_t - c_{\overline{p}}t$ for each $x \in (0,1)$, $t \geq 0$, and the corresponding randomly tagged process $\zeta_t := \xi_t - c_{\overline{p}}t$ for $t \geq 0$. For each $p > \underline{p} $ we also define the process $(\mathcal{E}_p (t) : t \geq 0)$ by \[ \mathcal{E}_p (t) := \exp \big( \Phi(p)t - p\xi_t \big). \] This process is a unit mean $(\mathcal{F},\prob{P})$-martingale, allowing us to define the family of probability measures $\big( \prob{Q}^{p} : p > \underline{p})$ by \[ \frac{d {\prob{Q}}^{p}}{d \prob{P}} \bigg\vert_{\mathcal{F}_t} \en{=} \mathcal{E}_p (t) \qquad \mbox{ for } t \geq 0. \]
In fact we will only use $\prob{Q} := \prob{Q}^{\overline{p}}$. This is because, as a consequence of the equation defining $\overline{p}$, \eqref{overbar}, the spectrally positive L\'evy process $(\zeta, \prob{Q})$ has zero mean. It is also well-known that $\zeta$ has finite moments of all orders under $\prob{Q}$. These special properties allow us to use results on L\'evy processes with zero mean and finite variance, which are collected in the appendix.
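To see why the mean vanishes, note that under $\prob{Q}^{p}$ the tagged process $\xi$ is again a subordinator, now with Laplace exponent $q \mapsto \Phi(p+q)-\Phi(p)$, since \[ \prob{Q}^{p}\big( e^{-q\xi_t} \big) = \prob{E}\big( e^{-q\xi_t}\, \mathcal{E}_p(t) \big) = e^{-(\Phi(p+q)-\Phi(p))t}. \] Differentiating at $q=0$ and taking $p=\overline{p}$ gives $\prob{Q}(\xi_1) = \Phi'(\overline{p}) = c_{\overline{p}}$, and hence $\prob{Q}(\zeta_1)=0$.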
For a set $A \subset (0,1)$, we use the notation $\sum_{[x]_t : A}$ to represent sums taken over the (countable) collection of \textit{distinct} fragments at time $t$ that are subsets of $A$. We also write $\sum_{[x]_t }$ for $\sum_{[x]_t : (0,1)}$, the sum taken over \textit{all} distinct fragments at time $t$. For a Borel set $B \subset \mathbb{R}$, $|B|$ stands for the Lebesgue measure of $B$. Using this notation, we make the simple observation that for any $u \in \mathcal{U}$, $t \geq 0$, and measurable non-negative function $F$ on paths of tagged fragments, we can write \begin{align*} \text{\textnormal{\prob{E}}}_u \sum_{[x]_t} F( \xi^x_s : s \leq t) \fal{=} \sum_{i=1}^{\infty} \prob{E}_u \sum_{[x]_t : u_i} F( \xi^x_s : s \leq t) \\ \al{=}
\sum_{i=1}^{\infty} \prob{E} \sum_{[x]_t : (0,1)} F( \xi^x_s - \log |u_i|: s \leq t) \\ \al{=}
\sum_{i=1}^{\infty} \text{\textnormal{\prob{E}}} \big( I^{-1}_t F( \xi_s - \log |u_i| : s \leq t) \big) , \end{align*}
where the sums in $i$ should be regarded as finite in case $u$ consists of finitely many blocks. To illustrate the notation, $\sum_{[x]_t : u_i}$ sums over distinct particles at time $t$ which result from the fragmentation of the interval $u_i$. In the second equality we have used the fragmentation property. To get from second to third, we introduce the factor $I^x_t \cdot (I^x_t)^{-1}$ inside the second sum, which can then be interpreted as a size-biased pick. Proceeding to make the change of measure $\prob{E} \rightarrow \prob{Q}$, we obtain the following Many-to-One lemma:
\begin{lem} \emph{(MT1)} \thlabel{mt1} For any measurable, non-negative function $F$ on paths of tagged fragments and any $u = (u_1, u_2, ...) \in \mathcal{U}$ we have
\[ \text{\textnormal{\prob{E}}}_{u} \sum_{[x]_t} F( \zeta^x_s : s \leq t) = \sum_{i=1}^{\infty} \text{\textnormal{\prob{Q}}} \big( e^{\zeta_t(\overline{p}+1)} F( \zeta_s - \log |u_i| : s \leq t) \big) ,\] In particular, \[ \text{\textnormal{\prob{E}}} \sum_{[x]_t} F( \zeta^x_s : s \leq t) = \text{\textnormal{\prob{Q}}} \big( e^{\zeta_t(\overline{p}+1)} F( \zeta_s : s \leq t) \big) . \] \end{lem}
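Explicitly, the change of measure behind Lemma~\ref{mt1} reads as follows: writing $I^{-1}_t = e^{\xi_t}$ and using the definition of $\prob{Q}$, for any non-negative functional $F$ of the tagged path up to time $t$, \[ \prob{E}\big( e^{\xi_t} F \big) = \prob{Q}\big( e^{\xi_t}\, \mathcal{E}_{\overline{p}}(t)^{-1} F \big) = \prob{Q}\big( e^{(\overline{p}+1)\xi_t - \Phi(\overline{p})t} F \big) = \prob{Q}\big( e^{(\overline{p}+1)\zeta_t} F \big), \] where the last equality uses $\Phi(\overline{p}) = (\overline{p}+1)\, c_{\overline{p}}$, which is \eqref{overbar}.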
To prove \thref{mainthm}, it suffices to prove the following two statements for arbitrary $u \in \mathcal{U}$:
\begin{equation} \prob{P}_u \left( \inf_{x \in (0,1)} \zeta^x(t) \leq \alpha \log t \hspace{2pt} \right) \rightarrow 0 \quad \textrm{as} \quad t \uparrow \infty \enskip \textrm{ for all } \enskip \alpha < l \hspace{3pt}; \quad \label{liminf} \end{equation} \begin{equation} \limsup_{t \rightarrow \infty} \frac{ \inf_{x \in (0,1)} \zeta^x(t)}{\log t} \leq l \hspace{2pt} \quad \prob{P}_u-\textrm{almost surely.} \label{limsup} \end{equation}
The structure of the remainder of the paper is as follows. In Section~$3$ we prove \eqref{liminf}, and in Section~$4$ we prove~\eqref{limsup}, the more challenging result. The arguments are analogous to those in \cite{shi}, but there are significant differences at the technical level, occurring particularly in the proof of \eqref{limsup}. The analogous part of the proof in \cite{shi} makes certain moment assumptions that are not satisfied in our framework. We do not need these moment assumptions, as we are able to exploit the special features of our fragmentation processes - namely, that particles decrease in size, and no mass is lost.
In Section~5 we align our result with heuristics on logarithmically correlated fields. Our proof relies on fine results on L\'evy processes, which are provided in the appendix, Section~$6$.
\section{Proof of \eqref{liminf}}
Fix an arbitrary $\alpha \in (0,l)$, $k \in \mathbb{N}$ and $u = (u_1,u_2,...) \in \mathcal{U}$. Define, for $t \geq 0$, the random variable \begin{equation} \label{sumrv} Z^k_t := \sum_{[x]_t} \textbf{1} \Big( \zeta^x_t \leq \alpha \log t, \enskip \underline{\zeta}^x_t \geq -k \Big), \end{equation} where $\underline{\zeta}^x_t := \inf_{0 \leq s \leq t} \zeta^x_{\hspace{1pt} s}$. This random variable counts the number of `bad' particles (with a truncation we will remove later).
We estimate the mean of $Z^k_t$ under $\prob{E}_u$ as follows, recalling that $\zeta$ is the randomly tagged process corresponding to the family of processes $\left( \zeta^x : x \in (0,1) \right)$: \begin{align} \label{firstest} \prob{E}_u Z^k_t \enskip &= \enskip
\sum_{i=1}^{\infty} \prob{Q} \Big( e^{\zeta_t(\overline{p}+1)} \textbf{1} \big( \zeta_t - \log |u_i| \leq \alpha \log t, \enskip \underline{\zeta}_t - \log |u_i| \geq -k \big) \Big) \nonumber \\
&\leq \enskip t^{\alpha(\overline{p}+1)} \sum_i |u_i|^{\overline{p}+1} \prob{Q} \Big( \zeta_t - \log |u_i| \leq \alpha \log t, \enskip \underline{\zeta}_t - \log |u_i| \geq -k \Big). \end{align}
In the first line we use MT1 (Lemma~\ref{mt1}), and in the second we bound the exponential factor using the indicator. Recalling that $(\zeta, \prob{Q})$ is a spectrally positive L\'evy process with zero mean and finite variance, we can estimate a typical probability on the right-hand side of the previous inequality using \thref{shicor1}: \begin{align} \label{secondest}
\prob{Q} \Big( \zeta_t - \log |u_i| \leq \alpha \log t, \enskip \underline{\zeta}_t - \log |u_i| \geq -k \Big) \enskip &\leq \enskip
\gamma \, t^{-3/2} (k-\log |u_i| +1)(k+\alpha \log t)^2 \nonumber \\
&\leq \enskip \gamma_k \, t^{-3/2} (\log t)^2 (1- \log|u_i|), \end{align}
for some constants $\gamma, \gamma_k >0$ (where the latter depends on $k$). Putting this back into \eqref{firstest}, we find that \begin{equation} \prob{E}_u Z^k_t \enskip \leq \enskip
\gamma_k t^{\alpha(\overline{p}+1)} t^{-3/2} (\log t)^2 \sum_i |u_i|^{\overline{p}+1} (1-\log |u_i|). \end{equation}
Since $\overline{p} > 0$, the function $x \mapsto x^{\overline{p}} (1-\log x) $ has an upper bound $K >0$ on $(0,1)$, so the sum on the right-hand side is bounded by $K \sum |u_i| = K$. We deduce that \begin{equation} \prob{E}_u Z^k_t \enskip \leq \enskip K \gamma_k \: t^{\alpha(\overline{p}+1)} t^{-3/2} (\log t)^2. \end{equation}
Since $\alpha(\overline{p}+1) < l(\overline{p}+1) = 3/2$, this quantity goes to zero as $t \rightarrow \infty$.
To complete this part of the proof we must remove the truncation $\underline{\zeta}^x_t \geq -k$ in \eqref{sumrv}. To this end, we introduce the
\emph{intrinsic additive martingale} corresponding to $\overline{p}$, \[ M_t := e^{\Phi(\overline{p}) t} \sum_{[x]_t} I^x(t)^{1+\overline{p}} = \sum_{[x]_t} \exp \big( -(1+\overline{p}) \hspace{2pt} \zeta^x_t \big) . \] By the martingale convergence theorem, $M_t$ converges to a finite limit $\prob{P}_{u}$-almost surely as $t \rightarrow \infty$. Noting that $\overline{p} > 0$, we get $\inf_{t \geq 0} \inf_{x \in (0,1)} \zeta^x_t > -\infty$ $\prob{P}_{u}$-a.s. Letting $B_k := \big\lbrace \inf_{t \geq 0} \inf_{x \in (0,1)} \zeta^x_t \geq -k \big\rbrace$
for each $k \in \mathbb{N}$, it follows that \begin{equation} \label{truncation2} \lim_{k \rightarrow \infty} \prob{P}_u (B_k) = 1 . \end{equation}
Next fix an arbitrary $\epsilon > 0$, and (using \eqref{truncation2}) select $k=k(\epsilon) \in \mathbb{N}$ so large that $\prob{P}_{u}(B_k) \geq 1 - \epsilon$. Observing that $Z^k_t \geq \textbf{1}_{B_k}
\sum_{[x]_t} \textbf{1} (\zeta^x_t \leq \alpha \log t)$ for all $t \geq 0$, we may then write, \begin{align} \label{truncation3} \prob{P}_{u}(Z^k_t = 0) \enskip &\leq \enskip \prob{P}_{u} \bigg[ B_k \cap \Big\{ \sum_{[x]_t} \textbf{1} \big( \zeta^x_t \leq \alpha \log t \big) = 0 \Big\}\bigg] \enskip + \enskip \prob{P}_{u} (B_k^c) \nonumber \\ &\leq \enskip \prob{P}_{u}\bigg[ \sum_{[x]_t} \textbf{1} \big(\zeta^x_t \leq \alpha \log t \big) = 0 \bigg] \enskip + \enskip \epsilon, \end{align}
for all $t \geq 0$. We have already shown that $\prob{E}_{u} (Z^k_t) \rightarrow 0$ as $t \rightarrow \infty$, and so, since $Z^k_t$ takes values in $\{0,1,2,...\}$, we deduce that $\prob{P}_{u} (Z^k_t = 0) \rightarrow 1$ as $t \uparrow \infty$. Combining this observation with \eqref{truncation3} we conclude that
$$ 1 \enskip = \enskip \liminf_{t \rightarrow \infty} \prob{P}_{u} (Z^k_t = 0) \enskip \leq \enskip \liminf_{t \rightarrow \infty}\prob{P}_{u}\bigg[ \sum_{[x]_t} \textbf{1} \big(\zeta^x_t \leq \alpha \log t \big) \hspace{3pt} = 0 \bigg] \enskip + \enskip \epsilon \enskip . $$
Since $\epsilon > 0$ was arbitrary, we get $1=\lim_{t \rightarrow \infty}\prob{P}_{u} \Big[ \sum_{[x]_t} \textbf{1} \big(\zeta^x_t \leq \alpha \log t \big) \enskip = 0 \Big]$. Finally, observe that $$ \bigg\lbrace \sum_{[x]_t} \textbf{1} (\zeta^x_t \leq \alpha \log t)\enskip = 0 \bigg\rbrace \enskip \subseteq \enskip \left\lbrace \inf_{x \in (0,1)} \zeta^x_t > \alpha \log t \right\rbrace,$$ so that $\prob{P}_{u} \big( \inf_{x \in (0,1)} \zeta^x_t > \alpha \log t \big) \rightarrow 1$ as $t \uparrow \infty,$ which implies that \eqref{liminf} holds. \qed
\section{Proof of \eqref{limsup}}
In this part of the proof, we can work under $\prob{P}$ without loss of generality. To see why, note that we are now trying to show the existence of `big' particles (in the sense made precise by \eqref{limsup}). This means that, starting the fragmentation from general $u \in \mathcal{U}$, we can immediately look only at the largest particle at time $t$ descending from $u^*$, whose size we call $B^{u^*}_t$. Let $B_t$ denote the size of the largest fragment at time~$t$ in a fragmentation issued from $(0,1)$. The fragmentation property implies that $(B^{u^*}_t , \prob{P}_u)$ is equal in law to $(|u^*| B_t, \prob{P})$. The numerator in
\eqref{limsup} corresponding to these two processes will therefore only differ by the additive constant $-\log |u^*| $, which goes to zero in the limit upon division by $\log t$.
Let $C>0$ be the larger of the two constants provided by \thref{nonzeroliminf} and \thref{shiprop1}. Introduce the following intervals: \begin{equation}
J_s(t) \en{:=}
\begin{cases}
[-1, \infty) &\mbox{if} \enskip 0 \leq s \leq t \\
[l \log t , \infty) &\mbox{if} \enskip t < s < 2t \\
[l \log t, l \log t + 2C] &\mbox{if} \enskip s = 2t.
\end{cases} \end{equation}
For $x \in (0,1)$ and $u,v \in [0,2t]$, define the events $A^x_{2t,[u,v]} := \{ \zeta^x_s \in J_s(t) \enskip \forall s \in [u,v] \}$, and write $A^x_{2t}:= A^x_{2t,[0,2t]}$. In what follows, $A_{2t}$ (with no superscript) means $A^{\upsilon}_{2t}$, where $\upsilon$ is the uniformly distributed random tag in $(0,1)$ in the definition of $\zeta$. Finally, define the random variable $Z_t := \sum_{[x]_{2t}} \textbf{1}_{A^x_{2t}}$.
The first step is to bound $\prob{E} Z_t$ from below. Using MT1 (Lemma~\ref{mt1}),
we obtain \begin{align*} \label{lower}
\prob{E} Z_t \en{=}
\prob{Q}
\big(
e^{\zeta_t(\overline{p}+1)} \,
\textbf{1}_{A_{2t}}
\big) \en{\geq}
\gamma \, t^{3/2} \, \prob{Q}(A_{2t}) \en{\geq}
\gamma' \en{>}
0, \end{align*} for some $\gamma, \gamma' > 0$ and all large $t$. In the first inequality we have used the indicator to bound the exponential factor from below; the second uses \thref{nonzeroliminf}.
Next, we bound the second moment of $Z_t$ from above. To this end we introduce the notation $\mathcal{D}_s$ to denote the random set of all fragmentation times in $[0,s]$, which, in general, is almost surely dense in $[0,s]$. For $r \in \mathcal{D}_{s}$ write $B_{[z]_r}$ for the event that the interval $\mathcal{I}^z_{r-}$ shatters at time $r$. Note that for $r \in \mathcal{D}_s$ precisely one of the indicators
\smash{$\textbf{1}_{B_{[z]_{r-}}} $} over all distinct fragments $[z]_{r-} \subset (0,1)$ takes the value $1$ (simultaneous fragmentation of distinct blocks is a null event). We then make the decomposition \begin{equation} \label{decomp} Z_t^2 = Z_t + \Lambda_t, \end{equation}
where \begin{align} \label{defLambda}
\Lambda_t \fal{:=}
\sum_{r \in \mathcal{D}_{2t}} \hspace{3pt}
\sum_{[z]_{r-} : (0,1)} \textbf{1}_{A^z_{[0,r-]}} \hspace{4pt}
\textbf{1}_{B_{[z]_r}}
\sum_{\substack{[x]_r , [y]_r : [z]_{r-} \\ [x]_r \neq [y]_r}}
\hspace{4pt}
\sum_{\substack{[u]_{2t} : [x]_{r} \\ [v]_{2t} : [y]_{r}}}
\textbf{1}_{A^u_{[r,2t]}} \textbf{1}_{A^v_{[r,2t]}}\\ \al{=}
\sum_{r \in \mathcal{D}_{2t}} \hspace{3pt}
\sum_{[z]_{r-} : (0,1)}
\Lambda^z_{r}, \end{align}
where the second line defines $\Lambda^z_{r}$. As we are temporarily regarding $t$ as fixed, we have written $A^w_{[u,v]}$ for $A^w_{2t,[u,v]}$. This decomposition is similar to the one used in \cite{shi}, but we have the added complication that the sum in $r$ is over a random (dense) set. To explain this decomposition, first note that $Z_t^2 = \sum_{[u]_{2t}} \textbf{1}_{A^u_{2t}} \cdot \sum_{[v]_{2t}} \textbf{1}_{A^v_{2t}}$. The $Z_t$ in \eqref{decomp} comes from the terms in this product where $\mathcal{I}^u_{2t} = \mathcal{I}^v_{2t}$. When $\mathcal{I}^u_{2t} \neq \mathcal{I}^v_{2t}$, we find their most recent common ancestor $\mathcal{I}^z_{r-} $ just before it fragments (at time $r$) into the distinct ancestors $ \mathcal{I}^x_{r}$ and $ \mathcal{I}^y_{r}$ of $\mathcal{I}^u_{2t}$ and $\mathcal{I}^v_{2t}$ respectively.
Our aim is to bound $\prob{E} \Lambda_t $ from above. The first part of the calculation uses the fragmentation property to make the summand indexed by $r$ in \eqref{defLambda} measurable with respect to $\mathcal{F}_r$. To this end, we first show that, for all $s>0$, the set $\mathcal{D}_s$ almost surely has an enumeration $(r_1,r_2,...)$ with the property that each $r_i$ is an $\mathcal{F}$-stopping time. Fix $s>0$, and a strictly increasing (deterministic) sequence $(a_i) \subset [0,1)$ with $a_1 = 0$ and $\lim a_i = 1$. If $[z]_{r-}$ (for some $z \in (0,1)$) is the particle that shatters at time $r \in \mathcal{D}_s$, then the fragments at time $r$ resulting from this fragmentation event are given by an affine image of some $u_r \in \mathcal{U}$. We write $u_r^*$ for the largest interval component of $u_r$.
We then introduce the sets $\mathcal{D}_{s,n} := \{r \in \mathcal{D}_s : |u^*_r| \in [a_n,a_{n+1}) \}$.
Of course, $\mathcal{D}_s = \bigcup_{n \in \mathbb{N}} \mathcal{D}_{s,n}$, and, as we will now show, $D_{s,n} := \# \mathcal{D}_{s,n} < \infty$ almost surely, for all $n \in \mathbb{N}$. To this end, we rewrite $D_{s,n}$ as follows: \begin{equation*}
D_{s,n} \en{=}
\sum_{0 \leq r \leq s} \textbf{1}_{B_{[z]_r}}
\textbf{1}_{\big( \, |u^*_r| \in [a_n,a_{n+1}) \big) \,} . \end{equation*} Using the compensation formula (see page 99 of \cite{KYP14}), we deduce that $\prob{E} D_{s,n} = s \, \nu \big( u^* \in [a_n,a_{n+1}) \big)$. It remains to note that, for all $n \in \mathbb{N}$, \begin{equation*}
\nu \big( u^* \in [a_n,a_{n+1}) \big) \en{\leq}
(1-a_{n+1})^{-1} \int_{\mathcal{U}} \left( 1 - |u^*| \right) \nu(du) \en{<}
\infty. \end{equation*} The desired enumeration is then obtained by listing the elements of each (almost surely finite) set $\mathcal{D}_{s,n}$ in order of increasing size, and concatenating the resulting sequences.
Using the enumeration $(r_1,r_2,...)$ constructed above (with $s=2t$) and the non-negativity of the terms in \eqref{defLambda}, we can now take the first step towards estimating $\prob{E} \Lambda_t$, writing \begin{equation} \label{enum}
\prob{E} \Lambda_t \en{=}
\sum_{i = 1}^{\infty}
\prob{E} \sum_{[z]_{r_i- }:(0,1)} \Lambda^z_{r_i} \en{=}
\sum_{i = 1}^{\infty}
\prob{E} \hspace{2pt} \prob{E}_{\mathcal{F}_{r_i}}
\sum_{[z]_{r_i- }:(0,1)} \Lambda^z_{r_i} . \end{equation}
In the second equality we have conditioned the term in the sum labelled by $r_i$ on the sigma-algebra $\mathcal{F}_{r_i}$. Next we calculate these conditional expectations. Fixing $r = r_i$ for some $i \in \mathbb{N}$, we have \begin{equation}
\prob{E}_{\mathcal{F}_{r}}
\sum_{[z]_{r-} : (0,1)}
\Lambda^z_{r} \en{=}
\sum_{[z]_{r-} : (0,1)} \textbf{1}_{A^z_{[0,r-]}} \hspace{4pt} \textbf{1}_{B_{[z]_r}}
\sum_{\substack{[x]_r , [y]_r : [z]_{r-} \\ [x]_r \neq [y]_r}}
\hspace{4pt}
\prob{E}_{\mathcal{F}_r}
\sum_{\substack{[u]_{2t} : [x]_{r} \\ [v]_{2t} : [y]_{r}}}
\textbf{1}_{A^u_{[r,2t]}} \textbf{1}_{A^v_{[r,2t]}}, \end{equation}
where we have used the fact that $r$ is $\mathcal{F}_r$-measurable. We then write \begin{equation} \label{indep} \prob{E}_{\mathcal{F}_r} \sum_{\substack{[u]_{2t} : [x]_{r} \\ [v]_{2t} : [y]_{r}}} \textbf{1}_{A^u_{[r,2t]}} \textbf{1}_{A^v_{[r,2t]}} \en{=} \left( \prob{E}_{\mathcal{F}_r} \sum_{[u]_{2t} : [x]_{r}} \textbf{1}_{A^u_{[r,2t]}} \right) \left( \prob{E}_{\mathcal{F}_r} \sum_{[v]_{2t} : [x]_{r}} \textbf{1}_{A^v_{[r,2t]}} \right) \end{equation} for $x,y \in (0,1)$ such that $\mathcal{I}^x_{2t} \neq \mathcal{I}^y_{2t}$, using the independent evolution of distinct particles. Now we calculate a typical factor on the right-hand side of \eqref{indep} (explanations follow the calculation): \begin{align}
\prob{E}_{\mathcal{F}_r}
\sum_{[u]_{2t} : [x]_{r}}
\textbf{1}_{A^u_{[r,2t]}} \fal{=}
\prob{E}_{\mathcal{F}_r}
\sum_{[u]_{2t} : [x]_{r}}
\textbf{1}_{ \big(
\zeta^u_s \in J_s(t) \enskip \forall s \in [r,2t]
\big)}
\nonumber\\ \al{=}
\prob{E}_{\mathcal{F}_r}
\sum_{[u]_{2t} : [x]_{r}}
\frac{I^u_{2t}}{I^x_r} \frac{I^x_r}{I^u_{2t}}
\textbf{1}_{ \big(
\zeta^u_s \in J_s(t) \enskip \forall s \in [r,2t] \big)}
\nonumber\\ \al{=}
\left(
\prob{E} \,
I_{2t-r}^{-1}
\textbf{1}_{ \big(
\alpha + \zeta_s \in J_{s+r}(t) \enskip \forall s \in [0,2t-r]
\big)}
\right)
\bigg\vert_{\alpha = \zeta^x_r}
\nonumber \\ \al{=}
\left( \prob{Q} \,
e^{\zeta_{2t-r}(\overline{p} + 1)} \textbf{1}_{ \big(
\alpha + \zeta_s \in J_{s+r}(t) \enskip \forall s \in [0,2t-r]
\big)}
\right)
\bigg\vert_{\alpha = \zeta^x_r} \nonumber \en{=}
F(\zeta^x_r), \end{align}
where, for $\alpha \in \mathbb{R}$, \begin{equation*}
F(\alpha) \en{:=}
\prob{Q} \,
e^{\zeta_{2t-r}(\overline{p} + 1)} \textbf{1}_{ \big( \alpha + \zeta_s \in J_{s+r}(t) \enskip \forall s \in [0,2t-r] \big)} . \end{equation*}
In the first line we just write down the definition of the events $A^u_{[r,2t]}$; the second artificially introduces a size-biased pick; the third makes use of the size-biased pick together with the fragmentation property; and the final line makes the change of measure $\prob{E} \rightarrow \prob{Q}$. So far, we have shown that \begin{equation}
\prob{E}_{\mathcal{F}_{r}} \sum_{[z]_{r-} : (0,1)}
\Lambda^z_{r} \en{=}
\sum_{[z]_{r-} : (0,1)} \textbf{1}_{A^z_{[0,r-]}} \hspace{4pt} \textbf{1}_{B_{[z]_r}}
\sum_{\substack{[x]_r , [y]_r : [z]_{r-} \\ [x]_r \neq [y]_r}}
F(\zeta^x_r) F(\zeta^y_r). \end{equation}
Putting this expression back into \eqref{enum} and exchanging the summation and expectation, we arrive at \begin{equation} \label{reduction}
\prob{E} \Lambda_t \en{=}
\prob{E} \sum_{r \in \mathcal{D}_{2t}} \hspace{3pt}
\sum_{[z]_{r-} : (0,1)} \textbf{1}_{A^z_{[0,r-]}} \hspace{4pt} \textbf{1}_{B_{[z]_r}}
\sum_{\substack{[x]_r , [y]_r : [z]_{r-} \\ [x]_r \neq [y]_r}}
F(\zeta^x_r) F(\zeta^y_r). \end{equation}
We have now succeeded in making the $r$-indexed summand $\mathcal{F}_r$-measurable, which will allow us to use the compensation formula (see page 99 of \cite{KYP14}).
To this end, define the function $G: \mathbb{R} \times \mathcal{U} \rightarrow [0,\infty]$ by \[
G(\alpha, u) \en{:=} \sum_{a_u \neq b_u} F(\alpha - \log |a_u|) F(\alpha -\log |b_u|) \] where the sum is over the distinct interval components $a_u, b_u \subset u$.\footnote{
The function $G$ can be constructed in a measurable way by ordering the interval components of $u \in \mathcal{U}$ in order of decreasing length, $(u_1,u_2,...)$, and then writing the sum as $\sum_{i=1}^{\infty} \sum_{j\neq i} F(\alpha - \log |u_i|) F(\alpha -\log |u_j|) $.} Using the compensation formula, we can move from \eqref{reduction} to \begin{align} \label{precomp}
\prob{E} \Lambda_t \fal{=}
\int_0^{2t} dr \, \cdot \,
\prob{E} \left(
\sum_{[z]_{r-} : (0,1)} \textbf{1}_{A^z_{[0,r-]}}
\int_{\mathcal{U}} G(\zeta^z_{r-},u) \nu (du)
\right) \nonumber \\ \al{=}
\int_0^{2t} dr \, \cdot \,
\prob{Q} \left(
e^{\zeta_{r-}(\overline{p}+1)}
\textbf{1}_{A_{[0,r-]}}
\int_{\mathcal{U}} G(\zeta_{r-},u) \nu (du)
\right) \nonumber \\ \al{=}
\int_0^{2t} dr \, \cdot \, \lambda(r), \end{align}
where the final equality defines $\lambda(r)$
as the integrand of the previous line. Here, $\nu$ is the dislocation measure introduced in Section $2$, which satisfies the integrability condition \eqref{integrability}.
\textbf{Notation:} In the remainder of this section, positive constants (independent of $t$) will be denoted by $\gamma > 0$, the value of which will change from one inequality to another.
We state the next part of the proof as a lemma:
\begin{lem} \hspace{10pt} $\displaystyle{\text{\textnormal{\prob{E}}} \Lambda_t = \int_0^{2t} \lambda(r) dr = O\big((\log t)^3 \, \big)}$ \quad as \enskip $\displaystyle{t \uparrow \infty}$. \end{lem}
\begin{proof}
First we estimate $F(\alpha - \log |a_u|)$ for interval components $a_u$ of $u \in \mathcal{U}$ and $\alpha \in \mathbb{R}$: using the indicator to bound the exponent we have \begin{align} \label{estF}
F(\alpha - \log |a_u|) \fal{=}
\prob{Q} \,
e^{\zeta_{2t-r}(\overline{p} + 1)} \textbf{1}_{ \big( \alpha - \log |a_u| + \zeta_s \in J_{s+r}(t) \enskip \forall s \in [0,2t-r] \big)} \nonumber \\ \al{\leq}
\gamma \, t^{3/2} \, \, |a_u|^{\overline{p} + 1}e^{-\alpha (\overline{p} +1)}
f(\alpha - \log |a_u|), \end{align} for some $\gamma > 0$, with \[ f(\theta) \en{:=} \prob{Q}\bigg(\theta + \zeta_s \in J_{s+r}(t) \enskip \forall s \in [0,2t-r] \bigg), \qquad \textrm{for} \enskip \theta \in \mathbb{R}. \]
We estimate $f$ in two different ways, depending on the value of $r$. For $r \in [t,2t]$, \thref{shiprop1} provides the estimate \begin{equation*} f(\theta) \en{\leq} \prob{Q} \bigg( \zeta_{2t-r} \in [l \log t - \theta, l \log t - \theta + 2C] \bigg) \en{\leq} \gamma \: n_{2t-r}, \end{equation*} with $n_{\theta} := \theta^{-1/2} \wedge 1$ for $\theta \geq 0$. Referring back to \eqref{precomp}, this leads to the bound \begin{equation} \label{firstLambda}
\int_t^{2t} \lambda(r) \, dr \en{\leq}
\gamma \, I_1 \, t^3
\int_0^{2t} dr \, \cdot
n_{2t-r}^2
\prob{Q} \left(
e^{-\zeta_{r}(\overline{p}+1)}
\textbf{1}_{A_{[0,r-]}}
\right) \, , \end{equation}
where \[
I_1 \en{:=} \int_{\mathcal{U}} \nu(du) \, \cdot \sum_{a_u \neq b_u} |a_u|^{\overline{p}+1} |b_u|^{\overline{p}+1} \, . \]
Let us check that $I_1$ is finite. Indeed, \begin{align*}
\sum_{a_u \neq b_u} |a_u|^{\overline{p}+1} |b_u|^{\overline{p}+1} \fal{\leq}
\sum_{a_u \neq b_u} |a_u| |b_u| \en{=}
\sum_{a_u} |a_u| (1-|a_u|)\\ \al{\leq}
(1-|u^*|) + \sum_{a_u \neq u^*} |a_u| \\ \al{=}
2 (1-|u^*|). \end{align*}
In the first inequality we use the facts that $|a_u|,|b_u| < 1$ and $\overline{p} > 0$; in the first equality we fix an interval component $a_u$ of $u \in \mathcal{U}$ and sum over the interval components $b_u \neq a_u$ of $u$; and in the second inequality we use the fact that $|a_u| \in (0,1)$.
The finiteness of $I_1$ then follows from \eqref{integrability}. It remains to estimate the expectation in \eqref{firstLambda}:
\begin{align*}
\prob{Q} \big( e^{-\zeta_{r-} (\overline{p}+1)} \textbf{1}_{A_{[0,r-]}} \big) \fal{\leq}
\gamma \, t^{-3/2} \,
\prob{Q}
\big(
\textbf{1}_{A_{[0,r-]}} \textbf{1}_{( \zeta_{r-} \leq 2 \,l \log t )}
\big)
+
\gamma \, \prob{Q}
\big(
e^{-\zeta_{r-} (\overline{p}+1)}
\textbf{1}_{A_{[0,r-]}}
\textbf{1}_{( \zeta_{r-} > 2 \,l \log t )}
\big) \\[3pt] \al{\leq}
\gamma \, t^{-3/2} \prob{Q}\big( \underline{\zeta}_{r-} \geq - 1 , \: \zeta_{r-} \leq 2 l \log t \big) + \gamma \, t^{-3}\\[3pt] \al{\leq}
\gamma \, t^{-3/2} (r^{-3/2} \wedge 1) (\log t)^2 + \gamma \, t^{-3}. \end{align*}
In the first line we split the event $\{ \zeta_{r-} \geq l \log t\} \subset A_{[0,r-]}$ into the events $\{ \zeta_{r-} > 2 l \log t\}$ and $\{ l \log t \leq \zeta_{r-} \leq 2 l \log t\}$. In the second line, we discard some information from the indicator on the interval $[t,r]$ and estimate the exponential factor in the second term using the indicator $\textbf{1}_{( \zeta_{r-} > 2 \,l \log t )}$. In the final line, we use \thref{shicor1} to estimate the remaining expectation. Returning to \eqref{firstLambda}, we conclude that \begin{equation*}
\int_t^{2t} \lambda(r) \, dr \en{\leq}
\int_t^{2t} \, dr \, \cdot
\left[
\gamma \, (\log t)^2 \, t^{3/2} \,
(r^{-3/2} \wedge 1) \, n_{2t-r}^2
\en{+}
\gamma \, n_{2t-r}^2
\right]. \end{equation*}
Elementary analysis allows us to conclude that $\displaystyle{\int_t^{2t} \lambda(r) dr = O\big((\log t)^3\big),}$ as required.
Now we look at $\lambda(r)$ for $r \in [0,t]$. This time we make the estimate
\begin{align*}
f(\theta) \fal{\leq}
\prob{Q} \left(
\underline{\zeta}_{2t-r} \geq -1 - \theta,
\enskip
\zeta_{2t-r} \in [l \log t - \theta, l \log t - \theta + 2C] \,
\right) \\ \al{\leq}
\gamma \, (1+\theta) \, (\log t) \, (2t-r)^{-3/2} \\ \al{\leq}
\gamma \, (1+\theta) \, (\log t) \, t^{-3/2}. \end{align*}
In the first inequality we throw away some information from the indicator on the interval $[t,2t-r)$; in the second we use \thref{shicor1}; and the final inequality uses the fact that $r \in [0,t]$. Making the substitution $\theta = \alpha - \log |a_u|$, we arrive at \begin{align*}
f(\alpha - \log |a_u|) \fal{\leq}
\gamma \, (1+\alpha - \log |a_u| ) \, (\log t) \, t^{-3/2}\\ \al{\leq}
2 \gamma \,(2+\alpha) \, ( 1 - \log |a_u| ) \, (\log t) \, t^{-3/2} \end{align*} for $\alpha \geq -1$ (recall we intend to make the substitution $\alpha = \zeta_{r-} \geq -1$). This leads to the bound \[
\lambda(r) \en{\leq}
\gamma \, I_2 \, (\log t)^2 \,
\prob{Q} \left(
e^{-\zeta_{r}(\overline{p}+1)}
(2+\zeta_{r-})^2
\textbf{1}_{A_{[0,r-]}}
\right) , \]
where \[
I_2 \en{:=}
\int_{\mathcal{U}} \nu(du) \, \cdot \sum_{a_u,b_u} |a_u|^{\overline{p}+1} |b_u|^{\overline{p}+1}
( 1 - \log |a_u| )
( 1 - \log |b_u| ) . \]
This time we note that the function $x \mapsto x^{\overline{p}} (1 - \log x)$ is bounded on $[0,1]$, since $\overline{p} > 0$. This allows us to write $I_2 \leq K \int_{\mathcal{U}} \nu(du) \cdot \sum |a_u| |b_u|$ (for some $K>0$), which is finite by the same arguments we used for $I_1$. To complete the proof we define $\tau_0^- := \inf \{s \geq 0 : \zeta_s < 0 \} $, and let $\prob{Q}_1$ denote the law of $1+\zeta_t$ under $\prob{Q}$. We note then that \begin{align*} \int_0^t dr \cdot \prob{Q} \, \left( (2+\zeta_{r-})^2 e^{-\zeta_{r-}(\overline{p} + 1)} \textbf{1}_{A_{[0,r-]}} \right) \fal{\leq} e^{\overline{p} + 1} \, \prob{Q}_1 \int_0^{\tau_0^-} (1+\zeta_{r-})^2 e^{-\zeta_{r-}(\overline{p} + 1)} \: dr \, . \end{align*}
Defining the function $h: [0,\infty) \rightarrow [0,\infty)$ by $h(\theta) := (1+\theta)^2 \, e^{-(\overline{p}+1)\theta}$, and bearing in mind that $\zeta$ is spectrally positive, we apply Theorem 20 (page 196) of \cite{BER96} to make the following calculation: \begin{align*}
\prob{Q}_1 \int_0^{\tau_0^-} (1+\zeta_{r-})^2 e^{-\zeta_{r-}(\overline{p} + 1)} \: dr \fal{=}
\prob{Q}_1 \int_0^{\tau_0^-} h(\zeta_{r-})\: dr \\ \al{=}
\gamma \int_0^{\infty} dy \, \cdot \,
\int_0^1 dz \, \cdot \,
h(1+y-z) \end{align*} for some $\gamma > 0$. It remains to note that the right-hand side of the previous display is bounded by $K \int_0^\infty e^{-w} dw < \infty$, for some finite constant $K>0$ (since $1+\overline{p} > 1$). \end{proof}
Let us collect together the facts we have established in this section so far: for some $\gamma_1, \gamma_2 >0$, we have \begin{align} &\prob{E} (Z_t) \geq \gamma_1 \enskip ; &&\hspace{-120pt}\textrm{and} \label{fact1}\\ &Z_t^2 \en{=} Z_t + \Lambda_t \enskip , &&\hspace{-120pt}\textrm{with} \label{fact2}\\ &\prob{E} \Lambda_t \enskip \leq \enskip \gamma_2 (\log t)^3 \label{fact3} , \end{align}
for all large $t$. Following page $7$ of \cite{shi}, we make the following simple calculation, valid for all large $t$: \begin{equation} \label{fact4} \prob{E} (Z_t^2) \enskip \leq \enskip \gamma_2 (\log t)^3 + \prob{E}(Z_t) \enskip \leq \enskip \left[ \sfrac{\gamma_2}{\gamma_1}(\log t)^3 + 1 \right] \prob{E}(Z_t) \enskip \leq \enskip \left[ \sfrac{\gamma_2}{\gamma_1}(\log t)^3 + 1 \right] \sfrac{1}{\gamma_1} \prob{E} (Z_t)^2 , \end{equation}
where the first inequality uses \eqref{fact2} and \eqref{fact3}, and the next two inequalities use \eqref{fact1}. First making use of the Paley-Zygmund inequality,
and then of \eqref{fact4}, we find that
\begin{equation*} \prob{P} (Z_t > 0) \enskip \geq \enskip \frac{\prob{E}(Z_t)^2}{\prob{E}(Z_t^2)} \enskip \geq \enskip \frac{\gamma}{(\log t)^3} \hspace{5pt}. \end{equation*}
We then note that
\begin{equation*} \left\lbrace \min_{x \in (0,1)} \zeta^x_{2t} \enskip > \enskip l\log t + 2C\right\rbrace \enskip \subseteq \enskip \left\lbrace Z_t = 0 \right\rbrace \end{equation*}
so that, for all sufficiently large $t$, we have \begin{align} \prob{P} \left\lbrace \min_{x \in (0,1)} \zeta^x_t \enskip > \enskip l\log t + 2C\right\rbrace \enskip \fal{\leq} \enskip \prob{P} \left\lbrace \min_{x \in (0,1)} \zeta^x_t \enskip > \enskip l\log \sfrac{t}{2} + 2C\right\rbrace \nonumber \\ \al{\leq} \enskip 1 - \frac{\gamma}{(\log t)^3} . \end{align}
Now we need to know the rate at which the number of exceptionally large particles grows. To be precise define, in the notation of \cite{krell}, sets $G_{c,\alpha,\beta}(t) := \{ \mathcal{I}^x(t) : x \in (0,1) , \alpha e^{-ct} < I^x(t) < \beta e^{-ct} \}$ for $0<\alpha <1< \beta$ and $c \in \mathbb{R}$. A result from~\cite{bertoin} shows that for $c \in (c_{\overline{p}}, \Phi'(\underline{p}+)) $ there exists $\tilde{\rho}(c)>0$, depending only on $c$, and not on $\alpha$ or $\beta$, such that $$ \lim_{t \rightarrow \infty} \frac{1}{t} \log \#G_{c,\alpha,\beta}(t) = \tilde{\rho}(c) \quad \textrm{a.s.} $$
We fix a small $\delta>0$ and $c:= c_{\overline{p}}+\delta$, and define sets $\mathcal{N}(t):= \{ \mathcal{I}^x(t) \colon \xi^x_t - ct \leq 1 \} $. We deduce that for $\rho=\tilde\rho(c)>0$ we have \begin{equation} \label{rate1} \lim_{t \rightarrow \infty} \frac{1}{t} \log \#\mathcal{N}(t) \geq \rho \quad \prob{P}-a.s. \end{equation}
Next, fix an arbitrary $\epsilon>0$ and define $T_n := T(n,\epsilon) := \inf \{ t \geq 0 : \#\mathcal{N}(t) \geq n^{\epsilon} \}$. We choose the $\lfloor n^{\epsilon} \rfloor$ largest elements of $\mathcal{N}(T_n)$
and label them $\{ \mathcal{I}^{n,j} : 1 \leq j \leq \lfloor n^{\epsilon} \rfloor \}$ in order of increasing size. We then write $\xi^{n,j,x}_t$ to denote the $-\log$ of the size of the particle containing $x \in \mathcal{I}^{n,j}$ at each time $t \geq T_n$. Note, for instance, that $ \xi^{n,j,x}_{T_n} = - \log I^{n,j}$ for all $x \in \mathcal{I}^{n,j} $. For all $n \in \mathbb{N}$ we have \[ \prob{P} \left( \max_{s \in [\frac{n}{2},n] \cap \mathbb{N}} \enskip \min_{1 \leq j \leq \lfloor n^{\epsilon} \rfloor} \enskip \min_{x \in \mathcal{I}^{n,j}} \enskip \xi^{n,j,x}_{ T_n + s} - c_{\overline{p}}(T_n + s) > \max_{1 \leq j \leq \lfloor n^{\epsilon} \rfloor} \xi^{n,j} -c_{\overline{p}} T_n + l \log n + 2C \right) \]
\begin{align} \label{nexpression} &\leq
\enskip \sum_{s \in [\frac{n}{2},n] \cap \mathbb{N}} \prob{P} \left( \inf_{x \in (0,1)} \xi^x_s -
c_{\overline{p}}s \enskip > \enskip l \log t + 2C \right)^{\lfloor n^{\epsilon} \rfloor } \leq \enskip
\sum_{s \in [\frac{n}{2},n] \cap \mathbb{N}} \left( 1 - \frac{\gamma}{(\log s)^3}
\right)^{\lfloor n^{\epsilon} \rfloor } \nonumber \\[3pt] &\leq \enskip
\frac{n}{2} \left( 1 - \frac{\gamma}{(\log n)^3} \right)^{ n^{\epsilon} -1}. \end{align}
The final expression is summable in $n$ (see \thref{sum}). By the Borel-Cantelli lemma, we deduce that, $\prob{P}$-almost surely, \begin{align} \max_{s \in [\frac{n}{2},n] \cap \mathbb{N}} \enskip \min_{1 \leq j \leq \lfloor n^{\epsilon} \rfloor} \enskip \min_{x \in \mathcal{I}^{n,j}} \enskip \xi^{n,j,x}_{ T_n + s} - c_{\overline{p}}(T_n + s) \enskip &\leq \enskip \max_{1 \leq j \leq \lfloor n^{\epsilon} \rfloor} \xi^{n,j} -c_{\overline{p}} T_n + l \log n + 2C \nonumber \\ &\leq \enskip 1 + (c_{\overline{p}}+\delta) T_n - c_{\overline{p}} T_n + l \log n + 2C \nonumber \\[5pt] &= \enskip \delta T_n + l \log n + 2C + 1 . \label{maxminmax} \end{align}
The final ingredient we need to finish the proof is to show that
\begin{equation} \lim_{n \rightarrow \infty} \frac{T(n,\epsilon)}{\log n} \enskip \leq \enskip \frac{\epsilon}{\rho} \quad \prob{P}-\textrm{a.s.} \label{rate} \end{equation}
To do this, fix $\epsilon' \in (0, \rho)$. Then, by \eqref{rate1}, there is some almost-surely finite random variable $T \geq 0$ such that, almost surely, $t \geq T$ implies $\#\mathcal{N}(t) \geq e^{(\rho - \epsilon')t}$. Consequently, for all $t \geq T$ we know that $$ T(n,\epsilon) \enskip \leq \enskip \inf \Big\lbrace t \geq 0 : e^{(\rho - \epsilon')t} \geq n^{\epsilon} \Big\rbrace \enskip = \enskip \frac{\epsilon \log n } {\rho - \epsilon'} .$$ This yields \eqref{rate}. Combining \eqref{maxminmax} and \eqref{rate} we find that $\prob{P}$-almost surely, for all large~$n$, we have
\begin{equation*} \max_{s \in [\frac{n}{2},n] \cap \mathbb{N}} \enskip \min_{1 \leq j \leq n} \enskip \min_{x \in \mathcal{I}^{n,j}} \enskip \xi^{n,j,x}_{ T_n + s} - c_{\overline{p}}(T_n + s) \enskip \leq
\enskip \left( \sfrac{2\epsilon \delta}{\rho}+ l \right) \log n + 2C + 1. \end{equation*}
By \eqref{rate}, we can write almost surely that $T_n = \epsilon \cdot O( \log n)$. We immediately deduce that for all large $n$ we have \[
\inf_{x \in (0,1)} \xi^x_{n+ T_n } - c_{\overline{p}}(n+ \epsilon \, O(\log n)) \en{\leq}
\left( \sfrac{2\epsilon \delta}{\rho}+l \right) \log n + 2C + 1
\hspace{20pt} \prob{P}- \text{a.s.} \] Noting that $T_n \geq 0$ and that $t \mapsto \xi^x_t$ is monotonically increasing, we deduce that, for all large $n$, \[
\inf_{x \in (0,1)} \xi^x_n - c_{\overline{p}} n \en{\leq}
\left( \sfrac{2\epsilon \delta}{\rho}+l \right) \log n + \epsilon \, O(\log n)
\hspace{20pt} \prob{P}- \text{a.s.} \] Since $\epsilon>0$ can be made arbitrarily small, we conclude that
$$ \limsup_{\mathbb{N} \ni n \rightarrow \infty} \frac{ \inf_{x \in (0,1)} \xi^x_n - c_{\overline{p}}n }{\log n} \enskip \leq \enskip l \hspace{20pt} \prob{P}-\text{a.s.} $$
By monotonicity of $t\mapsto \xi^x_t$ we see that the limit can be taken through all real values, completing the proof of~\eqref{limsup}. \quad \qed
\section{Physical heuristics}
In this section we argue \emph{informally} how our result can be put in line with the predictions described for logarithmically correlated random fields in the introduction. To cast the model into this framework we let
$$V(x)=\log |{\mathcal I}^x(t)| - \mathbb{E} \log |{\mathcal I}^x(t)|, \qquad \mbox{ for } x\in 2^{-n} {\mathbb Z} \cap [0,1] \mbox{ and } t=\frac{n \log 2}{\Phi'(0)}, $$ where the choice of time scale comes from matching the spatial scale $2^{-n}$ to $e^{-t\Phi'(0)}$, which is the typical length of a tagged fragment at time~$t$ and hence the scale on which $V$ needs to be sampled. We have
\begin{align*}
e^{-t\Phi(p)} & =\mathbb{E}\big[ \big| \mathcal{I}^{\upsilon}(t)\big|^{p} \big]
\approx \sum_{x \in 2^{-n}\mathbb{Z} \cap(0,1)} \mathbb{E} \big| \mathcal{I}^{x}(t)\big|^{p+1}
\approx \exp\big( (p+1) \mathbb{E} \log | \mathcal{I}^\upsilon(t)| + (n \log 2) \Psi(p+1)\big). \end{align*}
Observing that $\mathbb{E} \log |{\mathcal I}^\upsilon(t)|\sim- t\Phi'(0)$ we get $$\Psi(p+1) = 1 + p - \frac{\Phi(p)}{\Phi'(0)}.$$ We introduce, for $x,y \in (0,1)$, the stopping time $T = T(x,y) := \inf \{t \geq 0: \mathcal{I}^x_t \neq \mathcal{I}^y_t \} $, the time when $x$ and $y$ are first split apart. Fixing an arbitrary $t>0$ and abbreviating $\tau = t \wedge T$ we can decompose
$$ |{\mathcal I}^x(t)| =
|{\mathcal I}^x(\tau -)| \times \Delta^x_{\tau} \times |\tilde{\mathcal I}^{\tilde x}(t-\tau)|,$$
where $\Delta^x_s := |{\mathcal I}^x(s)|/|{\mathcal I}^x(s-)|$, the process $(\tilde{\mathcal I}^x(s) \colon s \geq 0)$ is a fragmentation process, which is independent of what happened up to time~$t$, and $\tilde x\in(0,1)$ is the relative position of $x$ in ${\mathcal I}^x(t)$. Taking $\log$ on both sides of the decomposition and centering gives
$V(x,t) \sim V(x,\tau -) + \tilde{V}(\tilde{x},t-\tau),$ as $t\uparrow\infty$, where we define $V(z,t)= \log |{\mathcal I}^z(t)| - \mathbb{E} \log |{\mathcal I}^z(t)|$. Taking expectations and using the independence we get $\mathbb{E} [ V(x,t) V(y,t) \big] \sim \mathbb{E} \big[ V(x,\tau -) V(y, \tau -)] .$ Using Wald's identity (see Theorem 3 of \cite{hall70}) we calculate the expectation on the right and obtain $$ \mathbb{E} \big[ V(x,t) V(y,t) \big] \sim \mathbb{E}\big[ t \wedge T\big] \, \Phi'(0)\Psi''(0). $$
Recalling that $t=\frac{n \log 2}{\Phi'(0)}$ we look at a regime where
$$t\Phi'(0)=n \log 2 \gg -\log |x-y|.$$ Observe that the right-hand side is at least
$-\log |{\mathcal I}^x(T-)| \sim T \Phi'(0)$ and hence $\mathbb{E}[ t \wedge T]\sim \mathbb{E}[T]$. We obtain \begin{equation}\label{var1} \mathbb{E} \big[ V(x) V(y) \big] \sim \mathbb{E}\big[ T(x,y) \big] \, \Phi'(0)\Psi''(0)
\qquad \mbox{ if } \mbox{$2^{-n}$} \ll |x-y| \ll 1. \end{equation} This is a result of the type~\eqref{var0}, if the distance of points $x,y$ on the interval is measured not with the euclidean metric, but with respect to the natural random metric coming from our problem, defined by
$d(x,y)=|\mathcal{I}^x(T(x,y)-)|$ and therefore $-\log d(x,y)\sim {\Phi'(0)T(x,y)}.$
The result can also be partially claimed for the euclidean set-up, as $\log |x-y| \leq \log |{\mathcal I}^x(T(x,y)-)| \sim \log d(x,y) \, \Phi'(0)$ but we will see below that working in this framework will lead to a loss of accuracy.
The physicist's prediction~\eqref{P1} hence gives
$$\max_x \log |\mathcal{I}^x(t)| + t \Phi'(0) \approx \Psi'(\bar{q}) n \log 2 - \frac32 (\log \Psi)'(\bar{q}) \log n.$$ Recalling that $\Psi(p+1)=1+p-\frac{\Phi(p)}{\Phi'(0)}$ we get $\bar{q}=\bar{p}+1$ and hence
$$\max_x \log |\mathcal{I}^x(t)| \approx -\frac{\Phi'(\bar{p})}{\Phi'(0)} n \log 2 - \frac32 \frac1{\bar{p}+1} \log n \sim -\Phi'(\bar{p}) t - \frac32 \frac1{\bar{p}+1} \log t,$$ which is in line with our rigorous result.
To relate our story to the multifractal approach of Fyodorov, Le Doussal and
Rosso~\cite{FDR09} we first recall the multifractal spectrum for homogeneous fragmentation processes obtained by Berestycki~\cite{b03} and refined by Krell~\cite{krell}. We define
$q_\beta$ by $\Phi'(q_\beta)=\beta$. Then, for every $\beta$ making the right-hand side below positive, almost surely,
$$\dim_{|\cdot|}\big\{ x\in (0,1) \colon \lim_{t\uparrow\infty} -\sfrac1t \log |\mathcal{I}^x(t)| = \beta\big\} = 1+ q_\beta -\frac{\Phi(q_\beta)}{\beta}.$$ Perhaps surprisingly, this formula does \emph{not} put our result in line with the prediction of Fyodorov, Le Doussal and Rosso.
The prediction can however be reconciled with our results, if one moves to the appropriate metric, which in our case is again the random metric $d$. While for fixed intervals the ratio of lengths with respect to $d$ and the Euclidean metric are typically bounded from zero and infinity, the optimal coverings implicit in the Hausdorff dimension above use random intervals for which these diameters are radically different. Indeed, given $\beta$ the covering intervals $I$ for the corresponding set have metric diameters given by their length to the power $\Phi'(0)/\beta$ (see for example~\cite{M09}). As a result the multifractal spectrum in the intrinsic random metric becomes
$$\dim_{d}\big\{ x\in (0,1) \colon \lim_{t\uparrow\infty} -\sfrac1t \log |\mathcal{I}^x(t)| = \beta\big\} = \frac{\beta}{\Phi'(0)}\Big(1+ q_\beta\Big) -\frac{\Phi(q_\beta)}{\Phi'(0)}.$$ This can be translated as $$\dim_{d}\big\{ x\in (0,1) \colon V(x) \approx \alpha (\log 2) n \big\}
=\Psi(p_\alpha)- \alpha p_\alpha=: f(\alpha),$$ where $p_\alpha$ is given by $\Psi'(p_\alpha)=\alpha$. Hence $f'(\alpha)=-p_\alpha$. The right end of the spectrum, $\alpha_+$, is characterised by the equation $\Psi(p_{\alpha_+})=p_{\alpha_+}\Psi'(p_{\alpha_+})$, hence $p_{\alpha_+}=\bar q$ and $\alpha_+=\Psi'(\bar q)$ aligning the prediction of~\eqref{P2} with our result.
\section{Appendix on L\'evy Processes}
In this section we extend the lemmas found in the appendix of~\cite{shi} from random walks to L\'evy processes with finite variance and zero mean. The proofs proceed by contradiction: we assume that the various statements do not hold for appropriate L\'evy processes, and then generate a random walk contradicting the results in \cite{shi} by discretization. We begin by stating two elementary lemmas which will be of use in carrying out such arguments. The first is a topological lemma whose proof can be found in \cite{king}. The second is a simple observation, recorded for convenience. Throughout this section we write $X$ for the process $(X_t)_{t \geq 0}$.
\begin{lem} \label{topologicallemma} \quad Let $U\subseteq[0,\infty)$ be open and unbounded. Then there exists $h > 0$ such that $nh \in U$ for infinitely many~$n \in \mathbb{N}$. \end{lem}
\begin{lem} \thlabel{rcont} \quad Let $X$ be a real-valued stochastic process issued from zero with almost surely right-continuous paths. Then \[ \forall \epsilon > 0 \quad \forall \delta > 0 \quad \exists \hspace{1pt} a>0 \quad\mbox{ such that } \quad \text{\textnormal{\prob{P}}}\big( {\norm{X}{[0,a]}} > \delta \big)\hspace{2pt} < \hspace{2pt} \epsilon \hspace{1pt}, \]
where ${\norm{X}{[0,a]}} := \sup_{0 \leq t \leq a} |X_{\hspace{1pt} t} |$ . \end{lem}
Now we state the first of our results on L\'evy processes.
\begin{prop} \thlabel{shiprop1} Let $X$ be a L\'evy process with zero mean and finite variance. Then $$ \exists C_{0}>0 \quad \exists c>0 \quad\mbox{ such that }\quad \forall h \geq C_{0} \quad \forall t > 0 \qquad \sup_{r \in \mathbb{R}} \hspace{2pt} \text{\textnormal{\prob{P}}} \big(r \leq X_t \leq r+h \big)\enskip \leq \enskip c \hspace{2pt} \frac{h}{t^{1/2}} \hspace{5pt}. $$ \end{prop}
\begin{pf} Assume the above statement is not true, i.e. for some such L\'evy process $X$
\begin{equation}\forall n \in \mathbb{N} \qquad \exists \hspace{1pt} h_n \geq n \qquad \exists t_n > 0 \quad \exists r_n \in \mathbb{R} \quad\mbox{ such that }\quad \prob{P}\big(r_n \leq X_{t_n} \leq r_n+h_n \big)\enskip > \enskip n \hspace{2pt} \frac{h_n}{t_n^{1/2}} \hspace{5pt}. \label{contrhyp} \end{equation}
Now select an $a>0$ corresponding to the choices $\epsilon=\frac{1}{2}$ and $\delta = 1$ in \thref{rcont}. Evidently, for all $n \in \mathbb{N}$, \begin{align*} \prob{P}\big( r_n -1 \leq X_{t} \leq r_n+h_n +1 \enskip \forall t \in [t_n,t_n+a] \big) \enskip &\geq \enskip \prob{P}\big( r_n \leq X_{t_n} \leq r_n+h_n , \enskip \norm{X_t -X_{t_n}}{{t \in [t_n,t_n+a]}} \hspace{2pt} < \hspace{2pt} 1 \big)\\ &\geq \enskip \frac{1}{2} \hspace{2pt} \prob{P} \big( r_n \leq X_{t_n} \leq r_n+h_n \big) \enskip \geq \enskip \frac{n}{2}\hspace{2pt} \frac{h_n}{t_n^{1/2}} \hspace{5pt}, \end{align*} where in the second inequality we have used the Markov property of the L\'evy process at time $t_n$. Let $U:= \bigcup_{n=1}^\infty (t_n,t_n+a)$, which is an open set. Note that, to prevent the probability in \eqref{contrhyp} exceeding one we must have $t_n \geq n^4$, proving that $U$ is unbounded. Lemma~6.1 therefore supplies an $h>0$ and two strictly increasing sequences $(m_j)$ and $(n_j)$ of natural numbers with the property that, for all $j \in \mathbb{N}$ we have $m_{j}h \in [t_{n_j}, t_{n_j}+a]$. Note that ${t_{n_j}}/{m_j} \to h$ as $j \to \infty$. In particular, there exists $K>0$ such that \smash{${{K}/{m_j^{1/2}}}< {1}/{t_{n_j}^{1/2}}$} for all $j \in \mathbb{N}$. Now define a random walk on $\mathbb{R}$ by $S_n := X_{nh}$, and note that this random walk has zero mean and finite variance. We estimate \begin{align*} \prob{P}\big(r_{n_j}-1 \leq S_{m_j} \enskip \leq \enskip r_{n_j}+h_{n_j}+1\big) \enskip \fal{\geq} \enskip \prob{P}\big( r_{n_j}-1 \leq X_{t} \leq r_{n_j}+h_{n_j}+1 \quad \forall t \in [t_{n_j},t_{n_j}+a] \big) \\ \al{\geq} \frac{K}{2} n_j \frac{h_{n_j}}{m_j^{1/2}}. \end{align*}
Taking suprema and assuming without loss of generality that $h_{n_j}\geq 2$ for all $j \in \mathbb{N}$, we find that, for all $j \in \mathbb{N} $, $$ \sup_{r \in \mathbb{R}} \prob{P}(r \leq S_{m_j} \leq r+h_{n_j}+2) \quad \geq \quad {\frac{K}{4} n_j \frac{h_{n_j}+2}{m_j^{1/2}}}, $$ contradicting (A.1) in \cite{shi}. \qed \end{pf}
\begin{prop} \thlabel{shiprop2} Let $X$ be a L\'evy process with zero mean and finite variance. Then, with $\underline{X}_t := \inf_{0 \leq s \leq t}X_{\hspace{1pt} s}$, we have $$ \limsup_{t \to \infty} \enskip {t^{1/2}} \enskip \sup_{u \geq 0} \enskip \frac{1}{u+1} \enskip
\text{\textnormal{\prob{P}}} \big( \hspace{1pt}\underline{X}_t \geq -u \big) < \infty \enskip .$$
\end{prop}
\begin{pf} The statement in the proposition is equivalent to the following statement: $$ {\exists C>0 \quad \exists T > 0 \quad \mbox{ such that} \quad t \geq T \Rightarrow \enskip \sup_{u \geq 0} \enskip \frac{1}{u+1} \prob{P}\big( \hspace{1pt}\underline{X}_t \geq -u \big) \enskip \leq \enskip \frac{C}{t^{1/2}}} \enskip . $$
For a contradiction, let us assume the converse of this statement holds. Then $$ {\forall n \in \mathbb{N} \quad \exists t_n \geq n \quad \exists u_n \geq 0 \quad \mbox{ such that} \quad
\frac{1}{u_n+1} \prob{P}\big( \hspace{1pt}\underline{X}_{\hspace{1pt}{t_n}} \geq -u_n \big) \enskip \geq \enskip \frac{n}{t_n^{1/2}}} \enskip . $$
As in \thref{shiprop1}, select $a>0$ with the following property: $$ \frac{1}{u_n+1} \prob{P} \big( \hspace{1pt}\underline{X}_{\hspace{1pt}{t}} \geq -u_n-1 \quad \forall t \in [t_n,t_n+a] \big) \enskip \geq \enskip \frac{n}{2 t_n^{1/2}} \enskip . $$
Now choose sequences $(m_j)$ and $(n_j)$, and $K>0$ precisely as in the proof of \thref{shiprop1}. Select furthermore an $M>0$ with the property that $\frac{1}{u} \leq \frac{M}{u+1} \quad \forall u \geq 1$ . Defining the random walk $(S_n)_{n \in \mathbb{N}}$ as in \thref{shiprop1}, we estimate \begin{align*} \frac{K}{2} n_j {\frac{1}{m_j^{1/2}}} \quad &\leq \quad \frac{1}{u_{n_j}+1} \enskip \prob{P} \big( \hspace{1pt}\underline{X}_{\hspace{1pt}{t}} \geq -u_{n_j}-1 \quad \forall t \in [t_{n_j},t_{n_j}+a] \big) \enskip \leq \enskip
\frac{1}{u_{n_j}+1} \enskip \prob{P} \big( \hspace{1pt}\underline{S}_{\hspace{1pt}{m_{j}}}\geq -u_{n_j}-1 \big) \enskip \\
&\leq \enskip \sup_{u \geq 0} \enskip \frac{1}{u+1} \enskip \prob{P}\big( \hspace{1pt}\underline{S}_{\hspace{1pt}{m_{j}}}\geq -u-1 \big) = \enskip \sup_{u \geq 1} \enskip \frac{1}{u} \enskip \prob{P}\big( \hspace{1pt}\underline{S}_{\hspace{1pt}{m_{j}}}\geq -u \big)\\ &\leq \enskip M \enskip \sup_{u \geq 1} \enskip \frac{1}{u+1} \enskip \prob{P}\big( \hspace{1pt}\underline{S}_{\hspace{1pt}{m_{j}}}\geq -u\big) \enskip \leq \enskip M \enskip \sup_{u \geq 0} \enskip \frac{1}{u+1} \enskip \prob{P}\big( \hspace{1pt}\underline{S}_{\hspace{1pt}{m_{j}}}\geq -u \big)\enskip . \end{align*} This contradicts (A.3) in \cite{shi}. \qed \end{pf}
With \thref{shiprop1} and \thref{shiprop2} in hand, the proof of the following corollary follows verbatim from the proof of Lemma A.1 of \cite{shi}.
\begin{cor} \thlabel{shicor1} Let $C_0$ be the constant whose existence is guaranteed by \thref{shiprop1}. Then there exists $c>0$ such that, for any $f: \mathbb{R}^{+}_{0} \rightarrow \mathbb{R}^{+}_{0}$ bounded away from $0$, and any $g: \mathbb{R}^{+}_{0} \rightarrow \mathbb{R}$ such that $g(t) \geq -f(t) \enskip \forall t \in \mathbb{R}^{+}_{0}$, we have, for all $t \geq 0$, \[ \text{\textnormal{\prob{P}}} \Big(g(t) \leq X_t \leq g(t)+ C_0, \enskip \underline{X}_t \geq -f(t) \Big) \enskip \leq \enskip c \hspace{2pt} \frac{ \big\lbrace \big(f(t)+1\big) \wedge t^{1/2}\big\rbrace \big\lbrace \big(g(t)+f(t)+1\big) \wedge t^{1/2}\big\rbrace}{t^{3/2}}, \] for all $t \geq 0$ where $x \wedge y := \min\{x,y\}$. In particular, there exists $c' > 0$ such that for all such $f$ and $g$ we have, for all $t \geq 0$, \[ \qquad \text{\textnormal{\prob{P}}} \Big(X_t \leq g(t), \enskip \underline{X}_t \geq -f(t) \Big) \enskip \leq \enskip c' \hspace{2pt} \frac{\big\lbrace \big(f(t)+1 \big) \wedge t^{1/2}\big\rbrace \big\lbrace \big(g(t)+f(t)+1\big)^2 \wedge t \big\rbrace}{t^{3/2}}. \] \end{cor}
\begin{prop} \thlabel{nonzeroliminf} Let $X$ be a L\'{e}vy process of the form $(Y_t - ct)_{t \geq 0}$, where $Y$ is a pure-jump subordinator and $c>0$. Assume that $X$ has zero mean and finite variance. For $\alpha>0$ let $X^{\alpha}_t := X_t + \alpha$. Then there exists $C>0$ such that, for any $f\colon [0,\infty) \rightarrow \mathbb{R}$ satisfying $\limsup_{t \rightarrow \infty} t^{-1/2}f(t)< \infty$ and $f(t) \geq \alpha$, for all large $t$, we have \begin{equation} \label{eq:shia4levy} \liminf_{t \rightarrow \infty} \hspace{3pt} t^{3/2} \hspace{3pt}\text{\textnormal{\prob{P}}} \Big(\underline{X}^{\alpha}_t \geq 0, \enskip \min_{t \leq s \leq 2t} X^{\alpha}_s \geq f(t), \enskip f(t) \leq X^{\alpha}_{2t} < f(t) + C\Big) \enskip > \enskip 0. \end{equation} \end{prop}
\begin{pf} Let us assume that there exists no such constant $C>0$, and fix an $\alpha>0$. Select an $a>0$ corresponding to the choices $\epsilon = \frac{1}{2}$ and $\delta = 1$ in Lemma \ref{rcont}. Finally, choose an $h \in (0, \frac{1}{4} \min \{a, \frac{\alpha}{c}\})$. Define a random walk $(S_n)$ by $S_n := X_{nh}$ and note that $(S_n)$ satisfies the hypotheses of Lemma~A.3 in \cite{shi}. Let $K$ denote the positive constant corresponding to $(S_n)$ whose existence is guaranteed by Lemma A.3 in \cite{shi} (there, $K$ is called $2C$), and pick $\tilde{C} > K + 1 + \alpha$. Since, in particular, we are assuming that \eqref{eq:shia4levy} does not hold for $C = \tilde{C}$, we infer the existence of a sequence $(t_k) \subseteq [0,\infty)$ such that $\lim_{k \rightarrow \infty} t_k = \infty$ with the property \begin{equation} \forall k \in \mathbb{N} \quad \quad
\left({\frac{t_k}{h}}\right)^{3/2} \prob{P} \Big(\underline{X}^{\alpha}_{t_k} \geq 0, \enskip \inf_{{t_k} \leq s \leq 2{t_k}} X^{\alpha}_s \geq f(t_k), \enskip f(t_k) \leq X^{\alpha}_{2t_k} < f(t_k) + \tilde{C} \Big) \enskip < \enskip \frac{1}{k} \enskip. \label{equ1}
\end{equation} Now define $n_k:= \lfloor{\frac{t_k - 1}{h}} \rfloor$. Note in particular that $(n_k+1) h \in [t_k - h, t_k]$; this will allow us to ensure that $X^{\alpha}_{t_k} \geq f(t_k)$ in the following computation. Define $a_{n_k} := f(t_k)+\alpha$ for each $k \in \mathbb{N}$, and $a_n := 0$ whenever there is no $k$ such that $n = n_k$. The important thing to note is that for any $j,k \in \mathbb{N}$ with $j \leq k$ and all $r \geq 0$ we have
$$ \min_{s \in [jh, jh +r]} X^{\alpha}_ s \enskip \geq \enskip S_j -rc + \alpha \enskip \geq \enskip \underline{S}_k - rc + \alpha \enskip \geq \enskip \underline{S}_k \qquad \textrm{whenever} \qquad r \leq \frac{\alpha}{c}. $$
Consequently, whenever $ r \leq \frac{\alpha}{c}$ we find, for any $k \in \mathbb{N}$, that $\underline{X}^{\alpha}_{kh+r} \geq \underline{S}_k$. Recalling that $n_k h \in [t_k - 2h, t_k - h]$, we can write $t_k = n_kh + r_h $ for some $r_h \in [h,2h]$. Consequently, we deduce that
$$ \underline{X}^{\alpha}_{t_k}\enskip = \enskip \underline{X}^{\alpha}_{n_k h+r_h} \enskip \geq \enskip \underline{S}_{n_k} \qquad \textrm{provided}\qquad r_h \leq \frac{\alpha}{c}, $$
and the condition on $r_h$ holds because we have selected $h < \frac{\alpha}{2c}$. We will use this in the computation below, where we require $\{ \underline{S}_{n_k} \geq 0 \} \subseteq \{ \underline{X}^{\alpha}_{t_k} \geq 0 \}$. By the same considerations, we also have the inclusion $$ \bigg\{ \inf\limits_{{t_k} \leq s \leq 2{t_k}} X^{\alpha}_s \geq f(t_k) \bigg\} \enskip \subseteq \enskip \bigg\{ \min_{n_k <j \leq 2n_k} S_j \geq \enskip f(t_k)+ \alpha \bigg\} ,$$ since we have in fact picked $h < \frac{\alpha}{4c}$. We can therefore estimate \begin{align} \label{inequ2} \left({\sfrac{t_k}{h}}\right)^{3/2} \, &\prob{P} \Big(\underline{X}^{\alpha}_{t_k} \geq 0, \enskip \inf\limits_{{t_k} \leq s \leq 2{t_k}} X^{\alpha}_s \geq f(t_k), \enskip f(t_k) \leq X^{\alpha}_{2t_k} < f(t_k) + \tilde{C}\Big) \notag\\ &\geq \enskip n_k^{3/2} \, \prob{P} \, \Big(\underline{S}_{n_k} \geq 0, \enskip \min_{n_k <j \leq 2n_k} S_j \geq \enskip f(t_k)+ \alpha, \enskip f(t_k) + \alpha \leq S_{2n_k} < f(t_k)+ \tilde{C} -1, \nonumber \\ & \hspace{150pt} \norm{X_t -X_{2n_k h}}{{t \in [2n_k h ,2t_k]}} < 1\Big) \nonumber \\ &\geq \enskip \sfrac12\, {n_k^{3/2}}\, \prob{P} \, \Big(\underline{S}_{n_k} \geq 0, \enskip \min_{n_k <j \leq 2n_k} S_j \geq a_{n_k}, \enskip a_{n_k} \leq S_{2n_k} < a_{n_k} + K\Big). \end{align}
In the second inequality we used the fact that $h < \frac{a}{4}$ and the Markov property of $X^{\alpha}$ at time $2n_k h$. Combining \eqref{equ1} and \eqref{inequ2}, we find that, for all $k\in\mathbb{N}$, we have $$n_k^{3/2} \, \prob{P} \Big(\underline{S}_{n_k} \geq 0, \enskip \min_{n_k <j \leq 2n_k} S_j \geq a_{n_k}, \enskip a_{n_k} \leq S_{2n_k} < a_{n_k} + K\Big) \enskip \leq \enskip \frac{2}{k}, $$ contradicting Lemma A.3 in \cite{shi}. \qed \end{pf}
We finish this appendix with an arithmetic fact required in Section~4.
\begin{lem} \thlabel{sum} For any $\alpha, \gamma>0$ and $k \in \mathbb{N}$ we have $$\sum_{n = 4}^{\infty} n \left( 1 - \frac{1}{(\log n)^k} \right)^{n^\alpha } \enskip < \enskip \infty.$$ \end{lem}
\begin{pf} It suffices to show that \[ \int_4^{\infty} x \left( 1 - (\log x)^{-k} \, \right)^{x^\alpha} dx \en{=} \int_{\log 4}^{\infty} e^{2x} \left( 1 - x^{-k} \, \right)^{e^{\alpha x}} dx \enskip < \enskip \infty . \] To prove this integrability, we show that the second integrand is $o(e^{-x})$ as $x \rightarrow \infty$, or, equivalently, that $e^{\alpha x} \left( \log(x^k) - \log(x^k-1) \right) - 3x \enskip \rightarrow \enskip \infty$ as $x \rightarrow \infty$. For all $t > 1$ we have $\log' (s) \geq \frac{1}{t} \enskip \forall s \in [t-1,t]$, so $\log(x^k) - \log(x^k-1) \geq \frac{1}{x^k}$ for all $x > 1$. It remains to note that $x^{-k} e^{\alpha x} - 3 x \rightarrow \infty$ as $x \rightarrow \infty$. \qed \end{pf}
{\bf Acknowledgement:} We thank Yan Fyodorov who suggested this project to us. We would like to thank three anonymous referees for their careful reading of an earlier version of this paper. F.L. was supported by an EPSRC studentship for the duration of this project.
\newcommand{\bergaaa}[1]{}
\end{document}
Structure sensitive complexity for symbol-free sequences
Cheng-Yuan Liou, Aleksandr A Simak & Jiun-Wei Liou
The study proposes our extended method to assess structure complexity for symbol-free sequences, such as literal texts, DNA sequences, rhythm, and musical input. The method is based on the L-system and on topological entropy for context-free grammars. Inputs are represented as binary trees. Different input features are represented separately within the tree structure and the actual node contents. Our method infers the tree-generating grammar and estimates its complexity. This study reviews our previous results on texts and DNA sequences and provides new information regarding them. We also show new results measuring the complexity of Chinese classical texts and of music samples with rhythm and melody components. Our method demonstrates enough sensitivity to extract quasi-regularly structured fragments of Chinese texts and to detect irregularly styled samples of music input. To our knowledge, there is no other method that can detect such quasi-regular patterns.
This work introduces a general complexity assessment of structural properties for different types of input. Input sequences are represented as binary trees; the concept of the L-system (Wikipedia 2005) is borrowed to infer rewriting rules and to build corresponding context-free grammars, which are later used to assess a complexity score (Kuich 1970). This complexity score is closely related to the notion of entropy (Shannon 1948). The current work is intended to establish a general view of this kind of structural complexity assessment.
One initial work in this field focused on the complexity of musical rhythm (Liou et al. 2010), where the binary tree representation fits almost perfectly. Later, our proposed method was applied to the complexity of DNA sequences (Liou et al. 2013a, b). From this arose the question of representation: how can other input types be transformed into a binary tree while keeping the complexity assessment the same? The third study adapted the complexity assessment to general texts encoded as symbol-free sequences (Liou et al. 2013a, b). The symbol-free representation was an important milestone: it allowed the method to be extended to more generic input data, such as Chinese paragraphs. Finally, the present study turns back to music with an attempt to reconsider the initial assessment, redefine it, and make the method capable of naturally incorporating both musical melody and rhythm.
Complexity assessment
This section provides a generic version of the earlier proposed method for structural complexity assessment (Liou et al. 2010). Our method remains essentially the same; however, the basic data structure and definitions were modified to equip the approach with new capabilities. Previous studies also paid attention to the equivalence of bracketed strings and binary tree rewriting systems. This study considers that equivalence as already justified, so bracketed strings no longer appear in the generalized version of the method. Instead, we focus on other issues, updating the notation of our formal grammars and proposing a better view of the classification step. All adjustments remain consistent with the previous conclusions and important statements.
The procedure of transformation from arbitrary input encoding to the binary tree depends on the nature of the input. Despite this, following remains the same: the resulting binary tree reflects and corresponds to the structure of the input. We will not provide exact specifications here on how to transform different kinds of input into corresponding binary trees, but each following section dedicated to one particular kind does provide such necessary explanation in detail.
Our binary tree is defined as follows (Fig. 1):
Made-up binary tree. Every proper sub-tree (a.) or node (b.) is a binary tree, but (c.) is not
Every sub-tree of a binary tree is a binary tree itself;
Every node except the root has a parent node;
Every node can have exactly two or none child nodes;
Every child node is labeled as left or right;
Every node can store some content inside.
Using a branching factor of two gives the tree two useful properties: it is relatively simple to maintain and general enough to take into account input properties, which are known to be local in linguistics (Gibson 1998) and music (Simonton 1984).
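To make the data structure concrete, the following minimal sketch (our own illustration in Python, not part of the original method) encodes the definition above: a node stores optional content and has either exactly two labeled children or none.

```python
# Minimal sketch of the binary tree defined above (illustrative only).
# A node stores some content and has either exactly two children
# (labeled left and right) or no children at all.

class Node:
    def __init__(self, content=None, left=None, right=None):
        # Enforce the "exactly two or none" rule for child nodes.
        assert (left is None) == (right is None), "a node needs 0 or 2 children"
        self.content = content
        self.left = left
        self.right = right

    def is_leaf(self):
        return self.left is None

# Example: a root with two leaf children.
tree = Node(content=0, left=Node(content=1), right=Node(content=2))
```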
L-system
Every such tree can be considered the result of consecutive development starting from the root. Each development step corresponds to the next tree level, and the nodes at any level are the result of development at the previous level. The process continues gradually until the original tree is replicated identically. Such a development mechanism can be formalized with biology-inspired parallel string rewriting systems, or L-systems (Prusinkiewicz and Lindenmayer 1996). The L-system is a special case of a formal grammar (Chomsky 1956). The core of its capabilities is a set of rewriting rules (specifying exactly how every element shall be rewritten), which are applied in parallel, naturalistically reflecting the processes of cell division and plant growth (Lindenmayer 1968). To replicate the tree, it is necessary to construct a complete set of rewriting rules based on the labels of the nodes and to start the rewriting procedure with the root node as the initial symbol.
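As a brief illustration of parallel rewriting (a toy example of ours, not part of the proposed method), consider Lindenmayer's classic two-symbol system with the rules A -> AB and B -> A; every symbol of the current string is rewritten simultaneously at each step.

```python
# Toy illustration of parallel rewriting in an L-system (Lindenmayer's
# classic "algae" system): rules are applied to all symbols simultaneously.

RULES = {"A": "AB", "B": "A"}

def rewrite(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)  # parallel application
    return s

print(rewrite("A", 5))  # A -> AB -> ABA -> ABAAB -> ABAABABA -> ABAABABAABAAB
```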
Rewriting rules
Every node in our binary tree, except the root, is labeled to denote whether it is the left or the right child. It is necessary to assign a unique label to the root node as well; thus, every node in the binary tree is labeled. Let the symbol L stand for the left child, R for the right one, and P for the tree root, all in uppercase as shown in the figure (Fig. 2). Those labels form the set of terminal symbols of the rewriting system, and their corresponding lowercase symbols l, r, and p form the set of non-terminals. The root node non-terminal is the initial starting symbol (or axiom, in formal systems).
Properly labeled binary tree
Next, for every node in a tree starting with the root, its corresponding rewriting rule is created and placed into a rewriting set one by one (Fig. 3):
Rewriting rule creation for dashed node with value 1. List 1. Rewriting rules inside the rewriting set for binary tree from Fig. 2
The left-hand side of a rewriting rule contains the node's non-terminal symbol with its left context, defined by traversing the parent nodes up to the root inclusively and concatenating their labels.
The right-hand side of the rule contains the node label itself, which is a terminal symbol, followed by the non-terminals of its children in case the node has children.
An additional node-content-setting operation is denoted by brackets on the right-hand side of the rule, immediately after the terminal symbol, enclosing the content that is to be placed inside the node at the moment of rewriting.
List 1 shows the rewriting rule set for this particular binary tree (Fig. 2) after the procedure above is completed.
Thus, such a parallel rewriting system is an unambiguous context-sensitive formal grammar, which is capable of replicating the original tree identically (Chomsky 1959).
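The rule-creation procedure above can be sketched in a few lines of code. The sketch below is our own illustration (not the authors' implementation); nodes are represented as (content, left, right) tuples, and the output format loosely follows the description above: the left-hand side is the node's lowercase non-terminal prefixed with the concatenated labels of its ancestors, and the right-hand side is the terminal label, the bracketed content, and the child non-terminals.

```python
# Hedged sketch of rewriting-rule extraction for a labeled binary tree.
# Nodes are (content, left, right) tuples; leaves have left = right = None.

def extract_rules(node, label="P", context=""):
    content, left, right = node
    lhs = context + label.lower()              # ancestor context + non-terminal
    rhs = "%s[%s]" % (label, content)          # terminal with content setting
    if left is not None:
        rhs += " l r"                          # non-terminals of the children
    rules = [(lhs, rhs)]
    if left is not None:
        rules += extract_rules(left, "L", context + label)
        rules += extract_rules(right, "R", context + label)
    return rules

tree = (0, (1, None, None), (2, None, None))
for lhs, rhs in extract_rules(tree):
    print(lhs, "->", rhs)
# p -> P[0] l r
# Pl -> L[1]
# Pr -> R[2]
```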
Homomorphism and isomorphism
A curious reader may note two things. First, for every node in a binary tree, there is exactly one corresponding rewriting rule. Second, some rewriting rules are quite similar and may appear redundant. The latter claim also holds for tree nodes and even sub-trees. Indeed, some sections of a binary tree may share exactly the same structure and even the same placement of node content. To extract such repeated structures based on their similarity and to bound the redundancy of the rewriting set, two auxiliary definitions are provided:
Homomorphism in rewriting rules
Two rewriting rules are homomorphic if and only if they assign equal contents to their terminals.
In terms of a binary tree, it means that after the rewriting procedure has been completed, homomorphic nodes share the same content.
Isomorphism on level X in rewriting rules
Two rewriting rules are isomorphic on depth X if and only if they are homomorphic and rules corresponding to their non-terminals are relatively isomorphic on depth X-1. Isomorphism on level 0 indicates homomorphism.
After the rewriting has been completed, two sub-trees of a binary tree are considered isomorphic (on depth X) if their root nodes share the same content and their descendants form an equal structure and relatively share the same content (up to depth X-1).
It is possible to classify all rewriting rules from List 1 using a certain level of isomorphism (Table 1).
Table 1 Classified rewriting rules with respect to isomorphism levels
Table 2 Final rewriting set after the classification is finished, rules positions are corresponding to Table 1
It is good to place boundaries on isomorphism depth. Obviously, the lower bound of the isomorphism domain is 0, while the upper bound is the number of levels of the original binary tree. However, such isomorphism depth bounds are quite meaningless. The lower bound does not involve any structural information, while the upper bound does not leave anything to compare with the whole tree. Thus, the meaningful lower and upper bounds of the isomorphism depth for rewriting rules are 1 and the depth of the original tree minus 1.
The classification of rewriting rules is one of the most important steps for structural complexity assessment. It reveals the hidden redundancy of a binary tree to the explicit form, exploiting the redundancy of the corresponding rewriting set.
All isomorphic rewriting rules are labeled with a single class label (Table 1). However, such a direct procedure is computationally expensive, regardless of whether the chosen domain is rewriting rules or tree nodes. The isomorphism check would be performed repeatedly, dozens of times on the same inputs, expanding by a factor of two for every additional level of required isomorphism depth. A good analogy is a straightforward recursive implementation of Fibonacci number computation.
A more elegant and less computationally expensive way is to iteratively assign class labels to all tree nodes, depending on the node's label and its child nodes' labels from the previous iteration (Fig. 4). It assumes breadth-first node ordering. The first iteration considers only node content and is equal to 0-depth isomorphism, or homomorphism. Each new iteration increases the isomorphism level by 1; thus, the total number of iterations is bounded by the depth of the tree (counting also the 0th initial iteration).
Rewriting rules classification within tree nodes domain. (a.)—initial tree, (b.)—zeroth iteration (homomorphism), (c.)—first iteration (isomorphism-1), (d.)—second iteration (isomorphism-2)
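A hedged sketch of this iterative classification is given below (our own illustration, not the authors' implementation). Nodes are (content, left, right) tuples; iteration 0 groups nodes by content alone, and every further pass refines the classes using the children's labels from the previous pass, raising the isomorphism depth by one.

```python
# Hedged sketch of the iterative class-label assignment (illustrative only).
# Nodes are (content, left, right) tuples; leaves have left = right = None.

def flatten(node, contents, kids):
    """Flatten the tree into parallel arrays; returns the node's index.
    (Depth-first here for brevity; the ordering does not change the classes.)"""
    i = len(contents)
    contents.append(node[0])
    kids.append((None, None))
    if node[1] is not None:
        kids[i] = (flatten(node[1], contents, kids),
                   flatten(node[2], contents, kids))
    return i

def classify(root, depth):
    contents, kids = [], []
    flatten(root, contents, kids)

    table = {}
    labels = [table.setdefault(c, len(table)) for c in contents]  # iteration 0

    for _ in range(depth):               # each pass adds one isomorphism level
        table, new = {}, []
        for i, (l, r) in enumerate(kids):
            key = (labels[i],
                   None if l is None else labels[l],
                   None if r is None else labels[r])
            new.append(table.setdefault(key, len(table)))
        labels = new
    return labels

tree = (0, (1, (1, None, None), (0, None, None)),
           (1, (1, None, None), (0, None, None)))
print(classify(tree, depth=1))  # the two identical sub-trees share class labels
```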
The new class labels (the final node values) are propagated to the corresponding rewriting rules to compose a new rewriting set, replacing, for each rule, the left-hand side with its class label and the right-hand side with the class labels of its children (Table 2). Some rules in the set will have duplicates; alternatively, every rule can occur exactly once with an associated counter recording how many times it actually appears. This information is required for the subsequent complexity assessment. All labels are considered non-terminal symbols, and additional productions to a dedicated terminal symbol are added to the set for formality. The initial symbol is the class label of the root node.
This new parallel rewriting system is a stochastic context-free formal grammar capable of reproducing the original binary tree as well as many other similar trees.
Complexity formula
As mentioned above, a set of classified rewriting rules is a context-free grammar. Thus, the redundancy in the tree (its hidden structure) can be explored by assessing the complexity of the tree-generating grammar (Liou et al. 2010), which is closely related to the notion of entropy for context-free grammars (Kuich 1970).
The complexity of a context-free grammar for binary trees can be evaluated in the following three steps:
Assume that there are $n$ classes of rules and that each class $C_i$ contains $n_i$ rules. Let $V_i \in \{C_1, C_2, \ldots, C_n\}$, $U_{ij} \in \{R_{ij}, i = 1, 2, \ldots, n, j = 1, 2, \ldots, n_i\}$, and $a_{ijk} \in \{x, x = 1, 2, \ldots, n\}$, where each $U_{ik}$ has the following form:
$$ {U}_{i1}\to {V}_{a_{i11}}{V}_{a_{i12}},{U}_{i2}\to {V}_{a_{i21}}{V}_{a_{i22}},\dots \to \dots {U}_{i{n}_i}\to {V}_{a_{i{n}_i1}}{V}_{a_{i{n}_i2}}. $$
The generating function of $V_i$, denoted $V_i(z)$, is defined as:
$$ V_i(z)=\frac{\sum_{p=1}^{n_i} n_{ip}\, z\, V_{a_{ip1}}(z)\, V_{a_{ip2}}(z)}{\sum_{q=1}^{n_i} n_{iq}}, $$
If $V_i$ does not have non-terminals, set $V_i(z) = 1$.
After formulating the generating functions $V_i(z)$, we intend to find the largest value of $z$, $z_{max}$, at which $V_1(z_{max})$ still converges ($V_1$ here denotes the root node rule of the binary tree). After obtaining $z_{max}$ of $V_1(z)$, we set $R = z_{max}$ (the radius of convergence). We define the complexity of a binary tree as:
$$ {K}_0=- \ln\ R. $$
A numerical estimation algorithm is used because there is no analytical solution for such a system of equations. We rewrite the generating functions and use region tests to approximate the complexity, as follows:
Rewrite the generating functions:
$$ \begin{cases} V_i^m(z') = \dfrac{\sum_{p=1}^{n_i} n_{ip}\, z'\, V_{a_{ip1}}^{m-1}(z')\, V_{a_{ip2}}^{m-1}(z')}{\sum_{q=1}^{n_i} n_{iq}} \\[1.5ex] V_i^0(z') = 1 \end{cases} $$
At each iteration, calculate the values from \(V_i^0(z')\) up to \(V_i^m(z')\). When \(V_i^{m-1}(z') = V_i^m(z')\) for all \(i\), we say that \(V_i^m\) has converged for \(z'\). We set \(m = 200\).
We search for \(z'_{max}\) using a dichotomy (bisection) search, checking values of \(z'\) between 0 and 1 for convergence of \(V_i^m\).
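A compact C++ sketch of this estimation procedure is given below. It follows the equations exactly as written above; the rule encoding, the divergence guard (the cap), and the toy grammar in main() are our own illustrative assumptions, not the authors' implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// For class i, rule p rewrites V_i into V_child1 * V_child2 and occurs 'count'
// times; a class with no rules is treated as terminal (V_i(z) = 1).
struct Rule { int child1; int child2; double count; };

// Iterate V_i^m(z) from V_i^0(z) = 1 and report whether the values settle to a
// fixed point (converge) for this z. The cap is a practical divergence guard.
bool converges(const std::vector<std::vector<Rule>>& rules, double z,
               int maxIter = 200, double tol = 1e-9, double cap = 1e12) {
    std::vector<double> v(rules.size(), 1.0), w(rules.size(), 1.0);
    for (int m = 0; m < maxIter; ++m) {
        for (std::size_t i = 0; i < rules.size(); ++i) {
            if (rules[i].empty()) { w[i] = 1.0; continue; }      // terminal class
            double num = 0.0, den = 0.0;
            for (const Rule& r : rules[i]) {
                num += r.count * z * v[r.child1] * v[r.child2];
                den += r.count;
            }
            w[i] = num / den;
            if (!std::isfinite(w[i]) || w[i] > cap) return false;
        }
        double diff = 0.0;
        for (std::size_t i = 0; i < v.size(); ++i) diff = std::max(diff, std::fabs(w[i] - v[i]));
        v = w;
        if (diff < tol) return true;                             // fixed point reached
    }
    return false;
}

int main() {
    // Toy grammar: class 0 rewrites into (0,1) three times and into (1,1) once;
    // class 1 is terminal.
    std::vector<std::vector<Rule>> rules = { { {0, 1, 3.0}, {1, 1, 1.0} }, {} };

    double lo = 0.0, hi = 1.0;                  // dichotomy (bisection) search on z'
    for (int step = 0; step < 40; ++step) {
        double mid = 0.5 * (lo + hi);
        if (converges(rules, mid)) lo = mid; else hi = mid;
    }
    std::printf("z_max ~ %.6f  =>  K0 = -ln(z_max) ~ %.4f\n", lo, -std::log(lo));
}
```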
DNA sequences
In modern bioinformatics, finding an efficient way to locate sequence fragments with biological meaning is an important issue. There are two broadly used categories of methods—sequence complexity (Koslicki 2011) and structure pattern analysis (Manna and Liou 2006; Tino 1998; Peng et al. 1992). Koslicki (2011) presented a method for computing the complexity of a sequence using a redefined topological entropy, so that the complexity score does not converge to zero for longer sequences. Hao et al. (2000) showed that rare subsequences can be found using their proposed graphical representation of DNA sequences. Zhang and Zhang (1994) analyzed nucleotide occurrence probabilities using four nucleotide-related functions to draw 3D curve plots.
Our previous study attempted to combine statistical and structural properties of input DNA sequences (Liou et al. 2013a, b) within a single assessment. We replaced the four-nucleotide sequence with a binary tree and assessed the initial sequence complexity by fragmenting the tree into smaller sub-trees and computing the complexity score for each sub-tree independently. The study focused on the encoding issue: how to represent a four-nucleotide DNA sequence as a binary tree. We used four fixed tree representations, one for each nucleotide base A, T, C, and G (Fig. 5).
Fixed tree structures for encoding corresponding nucleotides
Thus, every input sequence element can be replaced with corresponding tree, and two neighboring trees are combined together under one made-up common root, recursively (Fig. 6).
Complete binary tree for encoded nucleotide sequence
All of the following steps, such as rewriting rules extraction, classification, and numerical estimation of complexity scores remain the same as stated in the section above.
The study also compared the method with topological entropy (Koslicki 2011), revealing the advantages of structural complexity. Both methods were able to detect statistical properties of test sequences, but only the structural complexity assessment was sensitive to changes in the order of sequence sub-words. In addition, for some inputs, Koslicki's method cannot handle amino-acid sequences efficiently (the required fragment size grows exponentially with sub-word length and alphabet size), whereas structural complexity does not have such limitations and can be applied directly to any amino-acid sequence.
The study successfully represented symbol sequences as binary trees by encoding sequence symbols with fixed tree structures for the subsequent structural complexity assessment. However, a possible dependency of the final complexity scores on the chosen fixed representations remained a matter for future study at that time.
Below we provide a plot (Fig. 7) of the front part of the structural complexity score for the Zaire Ebola virus (ZEBOV); there are approximately 4000 values, one for each nucleotide. The whole genome is about 19,000 nucleotides long, and it encodes seven structural proteins in the following order: nucleoprotein NP, polymerase cofactor VP35, VP40, GP, transcriptional activator VP30, VP24, and RNA polymerase L. The blue and red lines represent two different fragmentation sizes, 64 and 32 nucleotides, respectively. The green dashed lines mark positions found in the genomic database (U.S. National Library of Medicine) that correspond to changes in the amplitude of the complexity scores. The first and second green lines are the start (470) and end (2689) positions of the nucleoprotein coding sequence (CDS); this segment tends to display a higher complexity with positive gaps and quite short but deep negative spikes. The third green line covers the polyA signal (3015…3026) for the nucleoprotein, the intergenic region (3027…3031), and the transcription start signal (3032…3043) for the next protein, the polymerase complex protein VP35. The last green line is the beginning of the VP35 coding sequence (3129…4151), where the complexity scores return to their typically higher values.
Zaire Ebola virus complexity scores for 4000 nucleotides, two size segments, isomorphism level 2
Text sequences
Despite successful attempts at encoding input elements with fixed tree structures, two questions were still waiting to be answered:
How can we efficiently encode a sequence for alphabet cardinalities higher than the number of nucleotide bases? Encoding every alphabet symbol as a fixed tree structure requires deeper trees for larger alphabets, and the complexity assessment then tends to measure the dependencies between those fixed structures;
How do different encodings affect the complexity scores?
Our third study on structural complexity (Liou et al. 2013a, b) addressed both questions. It concerned regular text sequences, representing natural text as a symbol-free sequence. Symbol-free sequences assume an intermediate encoding from symbolic text to binary strings. Two intermediate encodings were used: naive binary (BIN) and the more advanced Lempel-Ziv-Welch (LZW) (Welch 1984). For English text, BIN encodes 27 alphabet characters (26 Latin letters and the space character) directly as the binary representation of the symbol's integer index. LZW is a lossless, dictionary-based data compression encoder: it saves certain sub-strings, from shortest to longest, in the dictionary and replaces their occurrences in the input sequence with the corresponding dictionary indexes. After the intermediate encoding, every 2-bit fragment of the symbol-free binary string was represented with the already known fixed tree structures (Fig. 8). The subsequent complexity assessment remains the same.
Fixed tree structures to encode 2-bit segments of binary string
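To make the BIN pathway concrete, here is a small illustrative C++ sketch; the assignment of 2-bit values to the four fixed tree structures (labelled A, C, G, T below) is an arbitrary assumption of ours, and only lowercase letters and spaces are handled:

```cpp
#include <cstdio>
#include <string>

// Illustrative BIN encoding: 27 symbols (26 letters + space) -> 5-bit codes,
// concatenated into one binary string, then read out as 2-bit fragments that
// each select one of the four fixed tree structures.
int main() {
    const char* text = "to be";
    std::string bits;
    for (const char* p = text; *p; ++p) {
        int idx = (*p == ' ') ? 26 : (*p - 'a');              // symbol index 0..26
        for (int b = 4; b >= 0; --b) bits += ((idx >> b) & 1) ? '1' : '0';
    }
    const char label[4] = { 'A', 'C', 'G', 'T' };             // arbitrary assignment
    std::string trees;
    for (std::size_t i = 0; i + 1 < bits.size(); i += 2)
        trees += label[(bits[i] - '0') * 2 + (bits[i + 1] - '0')];
    std::printf("bits : %s\ntrees: %s\n", bits.c_str(), trees.c_str());
    // Note: 5-bit codes are not a multiple of the 2-bit fragment size, so the
    // last bit of one symbol and the first bit of the next get mixed in one
    // fragment -- the misalignment discussed later in the text.
}
```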
This study compared the sequence complexity for both intermediate encodings. Interestingly, the complexity for BIN remains quite uniform over the encoded sequence, while LZW tends to have lower complexity scores in the front and higher scores in the rear of the sequence. Since LZW saves the regular patterns found in the front part and absorbs them later, fewer regular patterns remain toward the end of the sequence. Structural complexity was also compared with linguistic complexity (LC) and topological entropy (TE), which showed similar behavior on the intermediate encodings.
The study analyzed the intermediate encodings, but some parts of question 2 still remained. Theoretically, there should be no difference in the complexity score if all fixed tree replacements are unique and the replacement procedure is a one-to-one function.
However, when we satisfied the above two conditions using the intermediate BIN encoding and ran the test, our results were surprising (Fig. 9). We tried four different encodings for the same binary string fragments "00," "01," "10," and "11," with each encoding by fixed tree structures denoted by its nucleotide letter.
Four different fixed-structure encodings surprisingly reveal different complexity scores
Later investigation showed that the intermediate BIN encoding represents the 27 symbols as binary strings of length 5, whereas the fixed tree replacements are aligned to length 2. The original symbols of the input sequence became shredded because of this misalignment: some fixed-representation substitutions were formed by the ending bit of one symbol and the starting bit of the next one. This is not important when one merely measures the relative complexity of an incoming transmission stream, but when one has to reveal the structural complexity of the input sequence, such alignment does matter. Since the fixed tree representation replaces 2-bit fragments of the encoded string, the intermediate encoding should be aligned to a multiple of 2.
Chinese texts
In this section, Chinese texts are considered as an extreme case of possible application for structural complexity. The alphabet (symbol set) size of such input sequences is of the order of thousands and can easily exceed the input sequence length. Such alphabet cardinality may also impose some restrictions on the encoding due to the limited memory capacity of modern computers.
There are four great classical novels of Chinese literature (Shep 2011), which are commonly regarded as the greatest and most influential works of premodern Chinese fiction. Two of these classical Chinese novels—"Dream of the Red Chamber" (Trad. Chinese "紅樓夢") by Cao Xueqin (18th century) and "Romance of the Three Kingdoms" (Trad. Chinese "三國演義") by Luo Guanzhong (14th century)—were selected for analysis with the developed structural complexity method.
The intermediate encoding of the input Unicode symbols (e.g., u4e00, u4e8c) removes the "u" character and treats each of the 4 hex digits of the two-byte code as an ASCII symbol of 8 bits. Thus, every initial input symbol was encoded as a 32-bit binary string, and the strings were concatenated. Next, the four fixed tree representations were applied to compose binary trees for every 1024-bit segment of the binary input string. Those trees were used as input to perform structural complexity assessments with isomorphism level 8.
The most fascinating result we have discovered so far is the significantly lower complexity scores for sentences containing regular structures. When sentences display a more regular structure than the regular narrative plot (for instance, some poetic inserts), the structural complexity score tends to be lower. Below we provide a few instances of this effect for both novels, in descending order from the highest (less regularity) to the lowest (more regularity) complexity scores. For those who do not feel confident in Chinese, we recommend paying attention to the regularities in the sequences of symbols. Some of these regularities are typical for classical Chinese, and some of them are something more.
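As an illustration of this intermediate encoding step, a minimal C++ sketch follows; the example code points and the output formatting are ours:

```cpp
#include <cstdio>
#include <string>

// Illustrative encoding for Chinese text: each code point is written as its
// 4-digit hex form (e.g. u4e00 -> "4e00"); every hex digit is treated as an
// ASCII character of 8 bits, giving a 32-bit binary string per symbol.
int main() {
    unsigned int codePoints[] = { 0x4e00, 0x4e8c };          // example symbols
    std::string bits;
    for (unsigned int cp : codePoints) {
        char hex[5];
        std::snprintf(hex, sizeof(hex), "%04x", cp);         // "4e00", "4e8c"
        for (int i = 0; i < 4; ++i)                          // 4 ASCII chars x 8 bits
            for (int b = 7; b >= 0; --b)
                bits += ((hex[i] >> b) & 1) ? '1' : '0';
    }
    std::printf("%zu bits, first symbol: %s\n", bits.size(), bits.substr(0, 32).c_str());
}
```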
Dream of the Red Chamber:
和他好,他偏不和你好,你怎麼樣?你不和他好,他偏要和你好,你怎麼
「癡情司」,「結怨司」,「朝啼司」,「暮哭司」,「春感司」,「
便是『了』,『了』便是『好』;若不『了』便不『好』;若要『好』,
、賈敕、賈效、賈敦、賈赦、賈政、賈琮、賈 、賈珩、賈珖、賈琛、賈
、太婆婆、媳婦、孫子媳婦、重孫子媳婦、親孫子媳婦、姪孫子、重孫子
Romance of the Three Kingdoms:
建。建生廣陵侯劉哀。哀生膠水侯劉憲。憲生祖邑侯劉舒。舒生祁陽侯劉
也;不讀詩書,是口濁也;不納忠言,是耳濁也;不通古今,是身濁也;
之人,然後有非常之事;有非常之事,然後立非常之功。夫非常者,固
Chapter 102:
,方者為牛腹。垂者為牛舌,曲者為牛肋。刻者為牛齒,立者為牛角。細
5. Chapter 20:
劉昂。昂生漳侯劉祿。祿生沂水侯劉戀。戀生欽陽侯劉英。英生安國侯劉
To our knowledge, there is no other method which can detect such quasi-regular sections.
An earlier study (Liou et al. 2010) proposed a complexity measure for musical rhythm, representing it as a binary tree. Such a representation is very natural for rhythm, because note durations are generally binary subdivisions (halves, quarters, eighths, and so on). The study focused only on the rhythm, ignoring another important music component—the melody. Melody gives information on tone transitions through time, as specified by the rhythm.
This section explains how to incorporate the melody component of music into the assessment. It is the first section where the input data have a multilayer structure (Fig. 10) and the corresponding binary tree representations are truly layered (Fig. 11). This occurs because two beats of a rhythmic line can sound at the same time. Hypothetically, for text sequences, this would correspond to two characters occupying one position simultaneously, one character occupying more than one position, or even both. Binary trees are capable of representing such input by definition. However, there is still the issue of how to bind tonal information into the tree. Representing tonal information with the already known fixed tree structures could be a possible solution, but this would cause an unexpected difficulty: representing both rhythm and melody with only the structural properties of a tree makes them indistinguishable, which later causes issues similar to the misalignment of intermediate data encodings. The solution we propose is to keep the rhythm within the structure of the binary tree and the melody within the content of the tree nodes. This section is also novel in representing different kinds of input features with separate tree properties.
Blues lesson 57, exercises 6 (left) and 7 (right)
Multilayered music binary tree for Blues lesson 57 exercises 7. Nodes contain MIDI codes and NAN values are dedicated to keep tree made-up upper part separate
Rock lesson 205, exercise 1 and exercise 5 (with and without typical hi-hat pattern)
We decided to evaluate the structural complexity method on a test dataset, a collection of drum lessons in three styles: Rock, Blues, and Jazz. The collection was created and published online by Rudy Lievens, a drummer with over 25 years of experience, on his personal website devoted to drums (Lievens, 2013). The exercise materials are provided as note sheets and MP3 or MIDI files for listening and downloading. A few particular files could not be downloaded properly and were later excluded from the assessments. In total, after downloading, Rock had 7533 exercises, while Blues and Jazz had 8594 and 12609 exercises, respectively. Typical lesson note sheets are provided below (Fig. 10).
We preprocessed all the data. The procedure works as uniformly as possible; a single implementation was used to preprocess all dataset samples. The procedure recognized and properly fixed the following cases: uncertain note onsets and time lags, upbeats and syncopations, and triplets and grace notes. All notes were adjusted to the most suitable positions. Samples with triplets underwent an additional transformation in which their durations were multiplied by 3/2. Detected grace notes served as indicators to extend the notes they were joined to up to the proper length.
All samples are rather short and structurally similar to each other within one style. Thus, straightforward structural complexity assessment on each sample with isomorphism level 1 does not reveal fascinating results. We decided to assess complexity of each style first and later try to distinguish the most atypical samples within each style. To do so, an additional structure called the universal rewriting rules set is required. This universal rules set contains all rewriting rules from all the samples within one style corpus.
The complexity assessment procedure was adapted for the current task and was performed in three steps. Step 1 converted each preprocessed MIDI file into its binary tree, extracted the rewriting rules, and classified them with isomorphism level 1. Step 2 placed the classified rewriting rules into the universal rules set and accurately maintained their relative probabilities (occurrence scores). The final step assessed the complexity for each sample and for each universal set. The numerical estimation of structural complexity for individual samples remains the same, with just one difference—instead of the individual rule scores, the corresponding scores from the universal rules sets were substituted. To assess the complexity of each style, the numerical estimation was applied to each universal rewriting set directly.
The assessment of structural complexity on the Rock, Blues, and Jazz universal sets gives scores of 2.96, 2.54, and 3.98, respectively. Each set also has a different number of rewriting rules—142, 172, and 688. One might note a significant dependency between the complexity scores and the sizes of the rewriting sets. For example, Jazz has 688 rewriting rules, and its complexity score is dramatically higher. However, a higher number of rules in the set is not the only prerequisite for a higher complexity score; the relative probabilities of the rules and the connections between the rules are actually more important. For example, Blues has 172 rewriting rules, but its complexity score is still significantly lower than that of Rock with 142 rules. The Rock universal set has fewer rules, but they are organized in a more comprehensive way. We illustrate this with two figures (Figs. 13 and 14).
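A minimal sketch of Step 2, pooling the classified rules of one style corpus into a universal set, is shown below; the data types and toy samples are illustrative assumptions of ours:

```cpp
#include <cstdio>
#include <map>
#include <tuple>
#include <vector>

// Classified rules from every sample of one style corpus are pooled into a
// universal set, accumulating occurrence counts so that relative probabilities
// reflect the whole corpus rather than a single sample.
using RuleKey = std::tuple<int, int, int>;   // class -> (left class, right class)
using RuleSet = std::map<RuleKey, double>;   // rule -> occurrence count

RuleSet buildUniversalSet(const std::vector<RuleSet>& perSample) {
    RuleSet universal;
    for (const RuleSet& sample : perSample)
        for (const auto& kv : sample)
            universal[kv.first] += kv.second;          // accumulate occurrences
    return universal;
}

int main() {
    std::vector<RuleSet> corpus = {
        { { {0, 1, 1}, 3.0 }, { {1, 1, 1}, 2.0 } },    // toy sample 1
        { { {0, 1, 1}, 1.0 }, { {2, 1, 1}, 4.0 } }     // toy sample 2
    };
    RuleSet universal = buildUniversalSet(corpus);
    std::printf("universal set holds %zu distinct rules\n", universal.size());
}
```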
Rock universal set rules interconnections tree representation, complexity score 2.96
Blues universal set rules interconnections tree representation, complexity score 2.54
A higher complexity score, as well as a larger universal set for a particular corpus, might be direct evidence of a more comprehensive music style. A larger universal set, regardless of the corresponding complexity score, might point to the richness of the music and of the overall musical expression.
We also identified some samples with extremely high complexity. Later examination revealed that they differ from all the other samples of their style. The most evident and easiest to understand are several Rock exercises in which the hi-hat rhythmic line, standard for the style, is absent. Figure 12 shows two samples with and without such a hi-hat pattern.
Chomsky N (1956) Three models for the description of language. IRE Trans Inform Theory 2(3):113–124
Chomsky N (1959) On certain formal properties of grammars. Inf Control 2(2):137–167
Gibson E (1998) Linguistic complexity: locality of syntactic dependencies. Cognition 68(1):1–76
Hao B-L, Lee HC, Zhang S-Y (2000) Fractals related to long DNA sequences and complete genomes. Chaos, Solitons Fractals 11(6):825–836
Koslicki D (2011) Topological entropy of DNA sequences. Bioinformatics 27:1061–1067
Kuich W (1970) On the entropy of context-free languages. Inf Control 16(2):173–200
Lievens, R. (2013). Retrieved 2013, from drum beats, drum lessons and Midi loops: http://www.edrumbeats.com/
Lindenmayer A (1968) Mathematical models for cellular interactions in development I. Filaments with one-sided inputs. J Theor Biol 18(3):280–299
Liou C-Y, Wu T-H, Chia-Ying L (2010) Modelling complexity in musical rhythm. Complexity 15:19–30
Liou C-Y, Liou D-R, Simak AA, Huang B-S (2013a) Syntactic sensitive complexity for symbol-free sequence. In: LNCS, 4th International Conference, IScIDE 2013, Beijing, China, July 31 – August 2, 8261. pp 14–21
Liou C-Y, Tseng S-H, Cheng W-C, Tsai H-Y (2013b) Structural complexity of DNA sequence. Comput Math Methods Med 2013:11
Manna S, Liou C-Y (2006) Reverse engineering approach in molecular evolution: simulation and case study with enzyme proteins, Proceedings of the 2006 International Conference on Bioinformatics & Computational Biology, BIOCOMP'06. Las Vegas, Nevada, pp 529–533
Peng C-K, Buldyrev SV, Goldberger A, Havlin S, Sciortino F, Simons M et al (1992) Long-range correlations in nucleotide sequences. Nature 356(6365):168–170
Prusinkiewicz P, Lindenmayer A (1996) The algorithmic beauty of plants. Springer-Verlag, New York
Shannon, C. (1948). The mathematical theory of communication. The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October
Shep, S. J. (2011). Paper and print technology. In The Encyclopedia of the Novel, Volume 2 of Wiley-Blackwell Encyclopedia of Literature (p. 596). John Wiley & Sons New Jersey, USA.
Simonton DK (1984) Melodic structure and note transition probabilities: a content analysis of 15,618 classical themes. Psychol Music 12:3–16
Tino P (1998) Spatial representation of symbolic sequences through iterative function systems. Systems Man Cybernetics A 29(4):386–393
U.S. National Library of Medicine. (n.d.). Retrieved from National Center for Biotechnology Information: http://www.ncbi.nlm.nih.gov/
Welch TA (1984) A technique for high-performance data compression. Computer 17(6):8–19
Wikipedia. (2005). L-system. Retrieved October 1, 2013, from Wikipedia, the free encyclopedia: http://en.wikipedia.org/wiki/L-system
Zhang R, Zhang C (1994) Z curves, an intuitive tool for visualizing and analyzing the DNA sequences. J Biomol Struct Dyn 11(4):767–782
This work was supported by the Ministry of Science and Technology MOST 103-2221-E-002-180 and MOST 104-2811-H-001-004. We also greatly appreciate Rudy Lievens permission to use his data in our research.
Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
Cheng-Yuan Liou & Aleksandr A Simak
Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
Jiun-Wei Liou
Cheng-Yuan Liou
Aleksandr A Simak
Correspondence to Cheng-Yuan Liou.
CY conceived the original concept of the study, participated in the design of methods, coordinated the research, and process of drafting the manuscript. AA updated the methods applied in this research, collected and prepared the data for the music analysis, carried out thoroughly the analysis of the music complexity, and drafted the manuscript. JW participated in the analyses of text sequences and has been involved in revising it critically for important content. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Liou, CY., Simak, A.A. & Liou, JW. Structure sensitive complexity for symbol-free sequences. Appl Inform 2, 6 (2015). https://doi.org/10.1186/s40535-015-0011-9
Formal Grammar
Terminal Symbol
Complexity Score | CommonCrawl |
# Working with big numbers in C++
To use the Boost multiprecision library, first include the necessary header files:
```cpp
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
```
Next, let's create some big numbers:
```cpp
boost::multiprecision::cpp_int big_integer("12345678901234567890");          // construct from a string: the literal would overflow built-in integer types
boost::multiprecision::cpp_dec_float_50 big_float("0.12345678901234567890"); // 50-digit decimal float; a string keeps every digit
```
Now, let's perform some arithmetic operations with these big numbers:
```cpp
boost::multiprecision::cpp_int sum = big_integer + boost::multiprecision::cpp_int("12345678901234567890");
boost::multiprecision::cpp_dec_float_50 product = big_float * boost::multiprecision::cpp_dec_float_50("0.12345678901234567890");
```
## Exercise
Create two big numbers using the Boost multiprecision library and perform the following operations:
- Addition
- Subtraction
- Multiplication
- Division
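One possible solution sketch for the exercise above (the numeric values are arbitrary):

```cpp
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    using boost::multiprecision::cpp_int;
    using boost::multiprecision::cpp_dec_float_50;     // 50 decimal digits

    cpp_int a("987654321098765432109876543210");
    cpp_int b("123456789012345678901234567890");
    std::cout << "sum        = " << a + b << '\n';
    std::cout << "difference = " << a - b << '\n';
    std::cout << "product    = " << a * b << '\n';
    std::cout << "quotient   = " << a / b << '\n';     // integer division

    cpp_dec_float_50 x("1.2345678901234567890");
    cpp_dec_float_50 y("0.0000000001234567890");
    std::cout << "x / y      = " << x / y << '\n';
    return 0;
}
```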
# Limits and continuity in mathematical analysis
A limit is a way to describe the behavior of a function as its input approaches a certain value. The limit of a function f(x) as x approaches a is denoted as:
$$\lim_{x \to a} f(x)$$
A function is continuous at a point if it is defined at that point, the limit of the function as x approaches that point exists, and the limit equals the value of the function at that point.
Let's consider the function f(x) = x^2. We can find the limit of this function as x approaches 2:
$$\lim_{x \to 2} x^2 = 4$$
This means that the function f(x) = x^2 is continuous at x = 2.
## Exercise
Find the limit of the function f(x) = x^3 as x approaches 1.
# Numerical integration techniques
The trapezoidal rule is a simple method for approximating the definite integral of a function. The formula for the trapezoidal rule is:
$$\int_{a}^{b} f(x) dx \approx \frac{h}{2} \left[ f(a) + f(b) + 2 \sum_{i=1}^{n-1} f(a + ih) \right]$$
where h = (b - a)/n is the width of each trapezoid and n is the number of trapezoids.
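Here is a short C++ sketch of the composite trapezoidal rule as written above; it evaluates the integral of x^2 on [0, 1], which is also the exercise at the end of this section:

```cpp
#include <cstdio>
#include <functional>

// Composite trapezoidal rule with n sub-intervals, following the formula above.
double trapezoid(const std::function<double(double)>& f, double a, double b, int n) {
    double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i) sum += f(a + i * h);
    return h * sum;
}

int main() {
    // Integral of x^2 on [0, 1]; the exact value is 1/3.
    double approx = trapezoid([](double x) { return x * x; }, 0.0, 1.0, 100);
    std::printf("trapezoidal approximation: %.6f\n", approx);   // prints ~0.333350
}
```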
Simpson's rule is an improvement over the trapezoidal rule. The formula for Simpson's rule is:
$$\int_{a}^{b} f(x) dx \approx \frac{h}{3} \left[ f(a) + f(b) + 4 \sum_{i\ \mathrm{odd}} f(a + ih) + 2 \sum_{i\ \mathrm{even}} f(a + ih) \right]$$
where h = (b - a)/n is the width of the intervals and n is the (even) number of intervals; the first sum runs over odd i from 1 to n - 1 and the second over even i from 2 to n - 2.
The Romberg integration method is an adaptive method for approximating the definite integral of a function. It starts with the trapezoidal rule and refines the approximation using Richardson's extrapolation.
## Exercise
Use the trapezoidal rule to approximate the integral of the function f(x) = x^2 from 0 to 1.
# Optimization algorithms for mathematical problems
Gradient descent is a first-order optimization algorithm that uses the negative gradient of a function to find its minimum. The update rule for gradient descent is:
$$x_{n+1} = x_n - \eta \cdot \nabla f(x_n)$$
where x is the variable, n is the current iteration, η is the learning rate, and ∇f(x) is the gradient of the function f(x).
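A small C++ sketch of this update rule applied to f(x) = x^2, the function used in the exercise below:

```cpp
#include <cstdio>

// Gradient descent on f(x) = x^2, whose gradient is 2x.
int main() {
    double x = 5.0;              // starting point
    double eta = 0.1;            // learning rate
    for (int n = 0; n < 100; ++n) {
        double grad = 2.0 * x;   // gradient of f at the current x
        x -= eta * grad;         // x_{n+1} = x_n - eta * grad f(x_n)
    }
    std::printf("minimum found near x = %.6f\n", x);   // approaches 0
}
```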
The Newton-Raphson method is a second-order optimization algorithm that uses the inverse of the Hessian matrix of a function to find its minimum. The update rule for the Newton-Raphson method is:
$$x_{n+1} = x_n - H(x_n)^{-1} \cdot \nabla f(x_n)$$
where H(x) is the Hessian matrix of the function f(x).
The conjugate gradient method is an iterative optimization algorithm that builds a sequence of mutually conjugate search directions from successive gradients, avoiding explicit computation or inversion of the Hessian matrix of the function. The update rule for the conjugate gradient method is:
$$x_{n+1} = x_n + \alpha_n \cdot p_n$$
where p_n is the search direction, formed from the current negative gradient and the previous direction, and αn is the step size, typically chosen by a line search.
## Exercise
Use the gradient descent algorithm to find the minimum of the function f(x) = x^2.
# Taylor series and their applications
The Taylor series for a function f(x) is given by:
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n$$
where f(x) is the function, a is the point of expansion, and n is the order of the derivative.
Taylor series have several applications in mathematical analysis, including:
- Approximating functions
- Solving differential equations
- Finding limits of functions
- Studying the behavior of functions
## Exercise
Find the Taylor series for the function f(x) = e^x around the point a = 0.
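For reference, every derivative of e^x is e^x and e^0 = 1, so the expansion around a = 0 is:

$$ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$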
Now that you have learned about Taylor series and their applications, you can use this knowledge to approximate functions, solve differential equations, find limits of functions, and study the behavior of functions. | Textbooks |
Selberg's 1/4 conjecture
In mathematics, Selberg's conjecture, also known as Selberg's eigenvalue conjecture, conjectured by Selberg (1965, p. 13), states that the eigenvalues of the Laplace operator on Maass wave forms of congruence subgroups are at least 1/4. Selberg showed that the eigenvalues are at least 3/16. Subsequent works improved the bound, and the best bound currently known is 975/4096≈0.238..., due to Kim & Sarnak (2003).
For the conjecture about the Riemann zeta function, see Selberg's zeta function conjecture.
The generalized Ramanujan conjecture for the general linear group implies Selberg's conjecture. More precisely, Selberg's conjecture is essentially the generalized Ramanujan conjecture for the group GL2 over the rationals at the infinite place, and says that the component at infinity of the corresponding representation is a principal series representation of GL2(R) (rather than a complementary series representation). The generalized Ramanujan conjecture in turn follows from the Langlands functoriality conjecture, and this has led to some progress on Selberg's conjecture.
References
• Gelbart, S. (2001) [1994], "Selberg conjecture", Encyclopedia of Mathematics, EMS Press
• Kim, Henry H.; Sarnak, Peter (2003), "Functoriality for the exterior square of GL4 and the symmetric fourth of GL2. Appendix 2.", Journal of the American Mathematical Society, 16 (1): 139–183, doi:10.1090/S0894-0347-02-00410-1, ISSN 0894-0347, MR 1937203
• Selberg, Atle (1965), "On the estimation of Fourier coefficients of modular forms", in Whiteman, Albert Leon (ed.), Theory of Numbers, Proceedings of Symposia in Pure Mathematics, vol. VIII, Providence, R.I.: American Mathematical Society, pp. 1–15, ISBN 978-0-8218-1408-6, MR 0182610
• Luo, W.; Rudnick, Z.; Sarnak, P. (1995-03-01). "On Selberg's eigenvalue conjecture". Geometric & Functional Analysis. 5 (2): 387–401. doi:10.1007/BF01895672. ISSN 1420-8970.
External links
• "Selberg conjecture - Encyclopedia of Mathematics". encyclopediaofmath.org. Retrieved 2022-06-08.
| Wikipedia |
Is there any stable orbit around a black hole?
Is there any stable orbit around a black hole so that the spacecraft will remain in orbit without any disturbance over a long period of time ?
orbit probe black-hole
called2voyage♦
$\begingroup$ space.stackexchange.com/questions/1909/… writes to say > ... It's perfectly safe to orbit a black hole - as long as you don't cross the event horizon, you're fine ... $\endgroup$ – Everyone Sep 12 '13 at 11:04
$\begingroup$ A quick reminder that our Sun, and thus our entire solar system, orbits around a black-hole right now and we have a stable orbit. $\endgroup$ – EtherDragon Sep 12 '13 at 21:50
$\begingroup$ @EtherDragon - I believe I've already mentioned that in my answer several hours before your comment. And indeed, the question is not specifically asking for close proximity orbits, so they can be considered stable. $\endgroup$ – TildalWave Sep 12 '13 at 22:27
$\begingroup$ Can I add to this question? Or should I ask this on another question? All answers seem to say that yes, there is a stable orbit. BUT: considering that a black hole is that huge ever-hungry monster, won't it (given time) accumulate more matter and mass and increase in size, eventually turning what was supposed to be a stable orbit into an orbit of "decay"? $\endgroup$ – msb Sep 13 '13 at 21:51
$\begingroup$ Black Hole Orbits: Zoom-Whirls and Four-Leaf Clovers: "Orbits around a black hole can be fascinatingly intricate." $\endgroup$ – Simon Woodside May 28 at 20:15
The answer is "yes", and there are a surprising number of ways to argue this. You will probably want to look at the question about small orbits around black holes in Physics Stack Exchange.
At sufficient distances, black holes are not special
Black holes behave the same as any other spherical collection of matter. This is a conclusion of the (intimidating) equations of general relativity (GR), but it's not a surprising result. Newtonian gravity ("classical" gravity) states that any spherically symmetric collection of matter will behave the same. This means that the radial distribution of the sun's mass doesn't affect its tug on the Earth. The vast majority of the sun's mass is contained below 1/4th of its visible radius. But if it were uniform, it would behave exactly the same! This is a very strong statement, and general relativity follows suit.
In short, we only need general relativity when a combination of parameters puts the system in the regime of highly relativistic effects. For gravity regarding a point mass orbiting a large mass, this is often dictated by the gravitational potential, which you should be familiar with as $\frac{G M }{r}$. When this value starts to get close to $\frac{c^2}{2}$, then you have to worry about these strange relativistic distortions.
The story of Mercury's precession is a common anecdote about GR, but this only deals with the accumulation of small change over a long period of time. Specifically 43 arc-seconds over a century, which is something like 1/100th of a degree... in 100 years. However, since the solar system has been around for billions of years, this can still have an impact on orbital stability.
That's a very minute correction to our conclusions. Otherwise, it generally stands to say that whatever orbits are stable for any large body are also stable for a black hole. Actually, a black hole will be even more stable. All planets and stars have a complex gravitational tapestry due to density variations arising from composition changes. If you collapse any of these bodies to a black hole, they'll have to expel their angular momentum. At long distances from a black hole, the gravitational field will be exceptionally constant, and this leads to greater orbital stability.
When we're talking about Sagittarius A* and things like that, we're generally still in this regime.
Truly relativistic orbits are weird
Once you get close enough such that the potential is close to the relativistic limits, the dynamics change. You can't make a blanket statement that "all orbits are stable/unstable". Things you can say:
obviously anything that passes within the event horizon is gone
all orbits that pass within the "IBCO" are dead unless "rockets" are used
unless you're beyond the "ISCO", your orbit will be highly elliptical and precess
If your orbit precession hits an angle that divides $2 \pi$ evenly, then it's "stable"
My apologies for the jargon here. It's fairly hard to say any of this concisely without the specific terminology.
Let me just establish that IBCO (Innermost Bound Circular Orbit, which is 1.5 times the event horizon radius) is basically the line of death. If you cross this point, you can only escape if you have "powerful rockets". I use this terminology because the physicists use it, but it's really a lie. No rocket would be powerful enough to bring you back out unless you were right next to the IBCO. Nonetheless, there are other ways to get an exception - rotating black holes, for instance. Without these, venturing beyond the IBCO will plunge you to the event horizon. But now, the crazy thing about the IBCO is that you can dance right on the edge of it for basically an unlimited amount of time, and hop right back out.
Now, my points #3 & 4 are basically that orbits that come close to the IBCO have "zoom-whirl" behavior. This can be avoided fully, but only beyond the "ISCO" (Innermost Stable Circular Orbit). Beyond that point, you can orbit in a circle and it will be stable indefinitely. If you don't meet this criterion, the orbit moves over time like Mercury's. But the shift over every revolution can be any angle. That means that you can spin around the IBCO a few times, and then come back and retrace your previous path to the ISCO. You can repeat this exact path indefinitely.
This demonstrates the types of stable orbits you can obtain, but real life will have a more complicated collection of parameters. Most black holes are rotating, and they apparently rotate at a large fraction of their theoretical maximum. Plus, there's other material around it. But both of these open up avenues to get energy from it. These complications will likely prevent "stable" orbits, but since they're non-conservative, you can sort-of "surf" the environment around the black hole until you deplete all its usable energy (hint: that's a lot).
AlanSEAlanSE
$\begingroup$ """The vast majority of the sun's mass is contained below 1/4th of its visible radius. But if it were uniform, it would behave exactly the same!""" - I'd emphasize the original point here by mentioning that if it were all within its Swartzchild radius of 3 km (making it a black hole) it would also be the same. $\endgroup$ – Random832 Sep 12 '13 at 18:47
$\begingroup$ I remember I found a black hole orbit simulator somewhere on the Web, alas! I forgot the linky. $\endgroup$ – Deer Hunter Sep 12 '13 at 20:02
$\begingroup$ One simple way to think of orbital instability around black hole is: Relativistic mass is a gravitational mass. The faster you move, the heavier you are. The lower the orbit the faster you move. The heavier you are the more the black hole pulls you, speeding you up even more. $\endgroup$ – SF. Sep 16 '13 at 11:25
$\begingroup$ @SF.: Your argument is incorrect. Inertia also increases. $\endgroup$ – Ben Crowell Jun 10 '14 at 18:46
$\begingroup$ Is the Innermost Bound Circular Orbit (IBCO) the same as the photon sphere? $\endgroup$ – Simon Woodside May 28 at 20:20
Well yes, and the best proof I can think of are the stars orbiting our galaxy's galactic center black hole, which is a supermassive black hole in the center of the Milky Way in the Sagittarius A* region:
Inferred orbits of 6 stars around supermassive black hole candidate Sagittarius A* at the Milky Way galactic centre. (Source: Wikipedia)
It could also be argued, that we all orbit around this black hole anyway, since the whole galaxy orbits around its barycenter, where this supermassive black hole in the galaxy nucleus is located.
$\begingroup$ Note that the elliptical orbits must have some precession -- they shouldn't overlap themselves $\endgroup$ – Manishearth Sep 12 '13 at 13:27
Yes, there are, to a reasonable approximation1. For a nonspinning black hole, there are exactly 4 things that can happen (assuming that you haven't thrown the object directly at the black hole):
The object will be going too fast, and it will just continue to infinity with a slight deflection in path.
The object will be going too slow, and it will spiral into the center
The object goes at a perfect speed at an angle perpendicular to the position vector, giving you a circular orbit
The object goes at a (different) perfect speed at an angle not perpendicular to the position vector, giving a rosetta orbit:
This is similar to an elliptic orbit, except that the major/minor axes themselves rotate. Mercury also prominently displays this kind of orbit (nowhere near as much as one orbiting a black hole, though), as it exhibits a prominent precession of the perihelion — the ellipse itself seems to orbit around the sun.
In the case of a spinning black hole, things become more complicated as a spiraling object may actually reverse the direction of rotation due to frame dragging.
1. See Mark Adler's answer. Similar to orbits arising from the electromagnetic force, two orbiting bodies emit gravitational waves, which leads to an eventual loss of energy and inspiral. However, this process is very slow except near the end.
ManishearthManishearth
Technically, in General Relativity, there are no stable orbits in any two-body system, period. Regardless of mass or black holiness. Bodies rotating about their mutual center of mass will emit gravitational radiation. By conservation of energy, the orbits will get smaller, and given enough time, they will crash into each other.
Practically, this takes so long to happen in most circumstances that we find in the universe, that it is not a consideration on the time scales of age of the universe. However in unusual circumstances with large masses orbiting each other tightly, in this case two neutron stars, this has been observed:
The orbit has decayed since the binary system was initially discovered, in precise agreement with the loss of energy due to gravitational waves predicted by Einstein's general theory of relativity.
In general relativity the energy of a "test-body" in motion around a Schwarzschild (spherically symmetric, non-rotating) black hole can be written as:
$$E=mc^2\left(\frac{\sqrt{1-\frac{2GM}{rc^2}}}{\sqrt{1-\frac{v^2}{c^2\left((1-\frac{2GM}{rc^2})^2(\hat{r}\cdot\hat{v})^2+(1-\frac{2GM}{rc^2})|\hat{r}\times\hat{v}|^2\right)}}}\right)$$.
This can be written as:
$$E=mc^2\left(\frac{{1-\frac{2GM}{rc^2}}}{\sqrt{1-\frac{2GM}{rc^2}-\frac{v^2}{c^2\left((1-\frac{2GM}{rc^2})(\hat{r}\cdot\hat{v})^2+|\hat{r}\times\hat{v}|^2\right)}}}\right)$$.
In the special case of pure circular motion you have:
$$E=mc^2\left(\frac{{1-\frac{2GM}{rc^2}}}{\sqrt{1-\frac{2GM}{rc^2}-\frac{v^2}{c^2}}}\right)$$.
Just as classically (it can be shown), for a pure circular orbit you have $v=\sqrt{GM/r}$ and we can thus write:
$$E=mc^2\left(\frac{{1-\frac{2GM}{rc^2}}}{\sqrt{1-\frac{3GM}{rc^2}}}\right)$$.
I believe (I have not checked it out) that by differentiating it can be seen that this expression has a minimum at $r=6GM/c^2$, which is known as the radius of the "innermost stable circular orbit". This means that any circular orbit with $r>6GM/c^2$ is stable in the sense that circular orbits infinitesimally closer to the black hole require less energy. However, below $r=6GM/c^2$ it actually requires more energy to sustain a circular orbit closer to the black hole, and such orbits are therefore unstable. If you try to sustain a circular orbit closer than this distance you will inevitably crash into the black hole.
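For completeness, writing $u=GM/(rc^2)$, the location of that minimum can be checked directly from the circular-orbit expression above:

$$\frac{d}{du}\left(\frac{1-2u}{\sqrt{1-3u}}\right)=\frac{3u-\tfrac{1}{2}}{\left(1-3u\right)^{3/2}}=0\quad\Rightarrow\quad u=\frac{1}{6}\quad\Rightarrow\quad r=\frac{6GM}{c^2}.$$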
From the expressions above we also see that it requires "infinite energy" (an object must travel at the speed of light) to sustain a spherical orbit at the "photon sphere" ($r=3GM/c^2$).
Answer: Circular orbits are stable for $r>6GM/c^2$.
AgerhellAgerhell
Would there be any benefit to sending a probe to a black hole?
Is it possible to orbit a black-hole AT the event horizon?
Could a black hole be used as portable gravity device?
Are there any plans to send a probe to orbit a black hole?
What Would Happen if We (Theoretically) Send a Drone With a Camera Inside a Black Hole?
What's the record for the fastest orbit around Earth?
What difficulties would there be for a probe to attain a high retrograde solar orbit?
Time dilation between Earth and a similar planet revolving around a supermassive Black hole
Galactic center. Questions about super massive black hole
Why was Dawn placed into an orbit that would only be stable for "decades" | CommonCrawl |
Fresh banana pseudo-stems as a tropical lignocellulosic feedstock for methane production
Chao Li ORCID: orcid.org/0000-0003-2393-48451,2,
Gangjin Liu2,3,
Ivo A. Nges1,
Liangwei Deng2,
Mihaela Nistor3 &
Jing Liu1,3
The banana pseudo-stem is a low-lignin-content lignocellulosic biomass that can be used for methane production. In recent years, anaerobic digestion (AD) of dried banana stems for methane production has attracted considerable attention. However, there is limited information regarding methane production from the fresh banana pseudo-stem. The direct usability of fresh banana stems as a resource for renewable energy production through AD is a prerequisite for an improved waste management system, culminating in sustainable as well as socioeconomic development of local banana-producing communities.
In this study, three series of experiments were performed simultaneously to investigate the methane production from fresh banana pseudo-stems for the first time. The tests included size reduction, enzyme addition, and co-digestion of banana stems with cow manure.
The achieved methane yields were 287, 340, and 347 mL g−1 volatile solids for the banana stem with particle sizes of 5, 10, and 20 mm, respectively. The highest yield was obtained at the particle size of 20 mm, showing a 21 % increase compared to the particle size of 5 mm. However, the particle size of 5 mm showed a high initial rate of hydrolysis, evident from the highest hydrolysis rate constant of 0.152 d−1 as compared to 0.110 d−1 for the 20-mm particle size. The addition of enzyme and co-digestion improved the rate of hydrolysis, evident from a higher rate constant compared to the control, although there was no improvement in the ultimate methane production.
This study demonstrates the usability of the fresh banana stem for efficient and high methane production after simply applying a minimal size reduction. The implementation of such a study will benefit society especially rural banana-producing areas both toward renewable energy generation and sustainable waste management. These could lead to job creation and an improved standard of living.
Bananas are widely consumed fruits, with over 140 million metric tons produced annually. India and China are the leading banana producers, accounting for about half of the total production [1]. Hainan is the largest tropical island in China; it had a population of 9,108,000 at the end of 2015 and harbors abundant biomass resources. In the agricultural sector, banana trees are widely planted, and banana production is of primary economic importance. Annually, about 12.1 million tons of bananas are produced in China, with 2.0 million tons coming from Hainan province. By using the residue/product ratio of 2.4 [2], 4.8 million tons of banana pseudo-stems, sheaths, central cylinders, and leaves is produced in Hainan province [3]. The stem, which is the major component, is often abandoned in the plantation or processed into low-grade animal feed by local farmers. However, these practices may cause serious environmental and ecological problems such as eutrophication. On the other hand, the banana stem contains a large amount of biodegradable biomass that is a potential resource. Therefore, it is imperative to find environmentally friendly methods for both pollutant control and efficient utilization of the waste biomass from the banana stem.
The anaerobic digestion (AD) process, in which biogas is generated and can be upgraded to bio-methane as a substitute for natural gas, is widely recognized as a promising technique for the treatment of various organic wastes [4, 5]. However, low methane yields and poor substrate degradation rates are frequently reported when lignocellulosic biomasses are used as feedstock [6]. The recalcitrant structure of lignocellulosic wastes often requires suitable treatment to change their chemical and physical properties in order to improve biodegradability for enhanced methane production and substrate utilization [7]. In order to increase the rate of hydrolysis and optimize the biological conversion, physical, chemical, and biological pretreatments are often implemented, e.g., milling/grinding, acid or alkali treatment, and the addition of enzymes and microorganisms [6, 8–10]. Mechanical size reduction of banana stems could increase the rate of degradation and the methane yield by increasing the surface area of the substrate available for efficient enzymatic or microbial attack. Mild, environmentally friendly pretreatments using enzymes or cellulose-degrading microorganisms provide another promising alternative for enhancing hydrolysis in the AD of lignocellulosic wastes [8, 9]. Although the cost of enzymatic treatment is relatively high and enzyme activity can be lost to some extent, compared to pretreatments with microorganisms, enzymes can act in the presence of different toxic and recalcitrant feedstocks with low requirements on environmental conditions such as pH, temperature, and salinity [8, 11]. The above research activities provide some potential strategies to enhance the AD of lignocellulosic waste; however, to date, pretreatment methods for improving the methane production of tropical and subtropical lignocellulosic wastes such as banana stems are still limited.
Considering economic factors, some large full-scale biogas plants have already evaluated the feasibility of co-digesting manure with banana stems on Hainan Island [12]. Banana stems and other tropical biomass can be used as supplementary substrates in biogas plants to mitigate and optimize both the feedstock supply costs and the environmental impact of livestock waste treatment. Moreover, the addition of suitable substrates can provide better process stability by balancing the macro- and micronutrients and by avoiding the accumulation of toxic by-products [13, 14]. Meanwhile, the wide range of nutrients and the balanced C/N ratio in co-digestion might avoid ammonia inhibition through simple dilution.
Studies on the AD of banana waste have most often reported the pre-dried banana stem used as a substrate for methane production [6, 15, 16]. Since the banana stem is bulky in nature, the main constraint in the value chain will be drying to reduce bulkiness and ease transport. However, dry substrates are prone to flotation in an anaerobic digester [14] and hence to poor contact with the microbial consortium. Also, studies on methane production from wet or fresh banana stems are scarcely reported in the scientific literature. It is, therefore, worth investigating the production of methane from wet or fresh banana stems.
The current study aimed primarily at evaluating the feasibility of using the fresh banana stem as a feedstock for methane production and at concomitantly investigating the effects of physical treatment (size reduction ranging from 5 to 20 mm) and enzymatic treatment (Celluclast and MethaPlus) on the methane potential, biodegradability, and methane production rate. Co-digestion of the fresh banana stem with cow manure was also investigated to test the applicability of the banana stem as a lignocellulosic co-substrate in methane production from animal manure. Such a study thus entails the transformation of waste into value-added products, addressing both environmental pollution and the inefficient utilization of resources. Generation of renewable energy from fresh banana stems could benefit rural communities, which are often deprived of electrical energy. This may also reduce the dependence on firewood or charcoal, which are known to provide for domestic energy requirements. Such a development could help save trees, lower emissions that cause climate change, and reduce the fumes from millions of tons of firewood that threaten the health of the impoverished local communities.
Substrates and inoculum
The banana stem used in this study was collected from a banana farm in Yongzhuang village of Haikou city, Hainan province, China (19° 59′ 05.4″ N 110° 15′ 21.7″ E). The tough outer waxy layers and inflorescence centers were separated by hand, keeping only the tender, succulent intermediate section. The samples were well packed in gas-tight containers and transported to the Department of Biotechnology at the University of Lund, Sweden, within 24 h. The substrate was stored in a refrigerator at 4 °C prior to use. The cow manure (CM) used as a co-substrate was obtained from an animal farm in Plönninge (Skåne, Sweden).
The inoculum used in the study was collected from a mesophilic biogas plant (Ellinge, Eslöv, Sweden) treating potato waste and sewage sludge. The particulate matter (>1 mm) was removed from the inoculum by passing through a 1-mm pore size sieve. The inoculum was stored at room temperature for 7 days under anaerobic conditions to decrease the endogenous methane production. The pH of the inoculum was 7.3, and the partial alkalinity was 3600 mg L−1. Other characteristics of the substrates and inoculum are presented in Table 1.
Table 1 Characteristics of substrates and inoculum
Substrate pretreatments and co-digestion
Three parallel batch AD tests were performed in the present study to investigate the effect of size reduction, enzyme addition, and co-digestion of the banana stem with CM.
In order to study the effect of particle size on the AD process, the banana stems were manually peeled and then cut with scissors into small pieces of three different particle sizes, 5, 10, and 20 mm, hereafter denoted as B5, B10, and B20, respectively (Fig. 1). When cutting the banana stem, the juice from the banana stem was collected. However, liquid loss during the cutting procedure could not be totally avoided, a condition that led to the slight differences in the total solid (TS) and volatile solid (VS) values (Table 1).
Sketch of the experimental design and protocol where B 5 stands for the 5-mm samples, B 10 for the 10-mm samples, B 20 for the 20-mm samples, and CM for cow manure
Enzymatic treatment
Only the B20 banana stem sample was subjected to enzymatic treatment. Two enzyme preparations were used in this study: a pure enzyme and an enzyme mixture. The first was pure Celluclast (cellulase) 1.5 L, obtained from Novozyme Inc. (Denmark), which is derived from Trichoderma reesei. Its optimum activity was 700 endoglucanase units (EGU) g−1 or 70 filter paper units (FPU) g−1 at pH 4.5–6.0 and 50–60 °C. The second was MethaPlus® L100, obtained from DSM Biogas (Delft, Netherlands). MethaPlus® L100 is a complex enzyme mixture which is active in a pH range of 4.5 to 8.0. Under typical methane fermentation conditions, MethaPlus® L100 is active in a temperature range of 35–50 °C. The enzymes (Celluclast 1.5 L and MethaPlus® L100) were added into the reactors prior to the AD assay of B20, wherein 0.19 and 1.14 mL of Celluclast were added, corresponding to enzyme loads of 10 and 60 FPU g−1 VS, respectively. For MethaPlus® L100, 2.2 mL was added, which corresponded to an enzyme loading of 10 FPU g−1 VS (Fig. 1).
Co-digestion with cow manure
The co-digestion test was performed with all three banana stem particle sizes (B5, B10, and B20). In order to investigate the performance of co-digestion with a small proportion of CM, two VS-based regimes were designed and tested: the first contained 15 % CM plus 85 % banana stems, and the second contained a higher amount of CM (35 %) and a smaller fraction of banana stems (65 %) (see Fig. 1).
Biochemical methane potential tests
Three series of biochemical methane potential (BMP) assays (Fig. 1), namely BMP of mechanically pretreated samples (size reduction), BMP of enzyme-treated samples, and BMP of co-digested banana stem and CM, were performed in parallel with the aid of an automatic methane potential test system (AMPTS II) (Bioprocess Control AB, Sweden) under mesophilic conditions (37 ± 0.5 °C). The batch digestion trials were conducted in 15 reactors of 500 mL each. The substrates and inoculum were mixed at a ratio of 1:2 in terms of gram VS [17] to avoid potential substrate inhibition and low microbial density. All the reactors were sealed with rubber stoppers and connected to a mechanical agitator to provide complete mixing. During the test, only methane was measured, as carbon dioxide was scrubbed off from the produced gas with the aid of concentrated NaOH. A detailed description of the AMPTS, and of the experimental protocol followed, can be found in previous studies by Badshah et al. [18].
One blank and two sets of controls were included in the test. In the blank experiment, only the inoculum was used to measure the indigenous methane production from the inoculum, which was subtracted from the total methane produced from all test samples. The positive control with cellulose (Avicel PH-101, Sigma-Aldrich, St. Louis, MO, USA) was used to give an idea of the inoculum response toward "standard" substrates. The BMP of Celluclast/MethaPlus was also evaluated (third control) and subtracted from the enzyme-treated test samples. All tests were performed in triplicates and terminated after 34 days of incubation when the daily methane production was less than 1 % of the total methane production. The methane produced was automatically normalized to standard conditions, i.e., 0 °C and 1013 hPa by AMPTS software.
The content of TS and VS of all the samples including those of the different particle sizes and CM was determined in triplicates according to standard methods [19]. The pH was measured by the automatic titrator TitroLine Easy (Schott Instrument, Germany). The content of hemicellulose, cellulose, and lignin were determined by analysis of neutral detergent fiber (NDF), acid detergent fiber (ADF), and acid detergent lignin (ADL) in ground samples as described in a previous study [20]. The elements of C, H, and N of the banana stems were determined using a CE-440 elemental analyzer (EAI Co., USA), while S and O were determined by a 5E-8S II sulfur analyzer (Kaiyuan Co., China) and a PE 2400 II elemental analyzer (Perkin Elmer, USA), respectively. Chemical oxygen demand (COD) was determined with the aid of the Dr. Lange test kit LCK 914 (HACH LANGE GmbH, Germany).
Based on the cumulative methane production curves, the first-order kinetic equation [21] was used to estimate the BMP. The first-order kinetic equation is as follows:
$$ \mathrm{B}\mathrm{M}\mathrm{P}(t)={\mathrm{BMP}}_{\infty}\cdot \left(1- \exp \left(-k\cdot \left(t-\theta \right)\right)\right) $$
where BMP(t) is the methane potential (N mL g−1 VS) at time t (days), BMP∞ is the maximum or ultimate methane potential (N mL g−1 VS) of the substrate, k is the rate constant or hydrolysis rate constant (d−1), and θ is the lag time constant (days).
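As an illustration only, the model can be evaluated with a few lines of C++; the parameter values below are examples of the right order of magnitude, not fitted results from this study:

```cpp
#include <cmath>
#include <cstdio>

// First-order kinetic model for cumulative methane production, as defined above.
double bmp(double t, double bmpInf, double k, double theta) {
    return (t <= theta) ? 0.0 : bmpInf * (1.0 - std::exp(-k * (t - theta)));
}

int main() {
    const double bmpInf = 290.0;   // ultimate methane potential, N mL g-1 VS (example)
    const double k = 0.15;         // hydrolysis rate constant, d-1 (example)
    const double theta = 0.0;      // lag time, days (example)
    for (int day = 0; day <= 34; day += 2)
        std::printf("day %2d: %6.1f N mL g-1 VS\n", day, bmp(day, bmpInf, k, theta));
    // Time needed to reach 90 % of the ultimate potential: t90 = theta + ln(10) / k
    std::printf("t90 = %.1f days\n", theta + std::log(10.0) / k);
}
```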
Dixon's test (P ≤ 0.05) was used to check for outliers in the replicate-BMP tests. One-way analysis of variance (ANOVA) was performed with the Statistical Package for the Social Science (SPSS), version 16, to assess statistical differences in methane yield between the various sample sizes, enzymatic treatments, and co-digestion at a 95 % confidence level to accept or reject the null hypothesis.
Inoculum activity and substrate characterization
The inoculum showed a good methanogenic activity which was deduced from the positive control experiment using cellulose as the substrate. The BMP value of cellulose was 368 mL g−1 VS after 34 days of digestion, which represented 89 % of the theoretical yield of 415 mL g−1 VS [22, 23].
The banana stem is a cylinder of packed, overlapping leaf sheaths, which are very succulent and fleshy [15]. Both substrates (banana stem and CM) showed very high moisture contents, above 93 % w/w (Table 1). The main components in the banana stem were cellulose (30.08 %) and hemicelluloses (27.79 %), with a low lignin content (6.08 %) (Table 2), making the banana stem an ideal candidate for methane generation because of its low lignin content. Similar values have been reported in other studies wherein banana residues were used as feedstock in AD [16]. The ash content of the banana stem is about 20 % expressed on DM, represented mainly by potassium, calcium, and silicon salts, which is considerably higher than in other agricultural straws [24]. The C/N ratio of the banana stem was 31, which is about, or slightly above, the maximum value of 30 reported to be favorable for methane production [14]. Manure, on the other hand, is known to have a rather low C/N ratio [25], which is why banana stems and manure could be co-substrates of choice in methane production.
Table 2 Compositional analysis of banana pseudo-stem
Effect of particle sizes on methane production rate and BMP of banana stems
Figure 2 shows the 34-day methane production rate (right axis) and cumulative methane yield (left axis) for the different particle sizes of banana stems. B5 showed the highest methane production rate on day 1 (69.5 mL g−1 VS d−1) while B20 showed the lowest (48.0 mL g−1 VS d−1). These corresponded to hydrolysis constants, simulated by the first-order equation, of 0.15, 0.12, and 0.11 d−1 for B5, B10, and B20, respectively (P < 0.01) (Table 3). In contrast to other lignocellulosic substrates, wherein two peaks are often noted [26], only one peak was observed in the present study. It is plausible that easily degradable non-structural carbohydrates and hemicelluloses (water-soluble carbohydrates or water extractives), which are of smaller molecular weight and amorphous, together with structural carbohydrates in the banana stem, were easily hydrolyzed by enzymes in the reaction broth [26, 27]. In all the tests (B5, B10, and B20), 90 % of the methane was produced within 2 weeks of incubation (Fig. 2). The methane production rate increased with decreasing particle size (B5 > B10 > B20), showing that the smaller the particle size, the higher the methane production rate.
Methane yields and methane production rates for the 5- (B5), 10- (B10), and 20-mm (B20) particle sizes of the banana stem
Table 3 The simulation results of the anaerobic digestion process of banana stem with/without cow manure
The 34-day methane yields (mL g−1 VS) were 289 ± 1.81, 340 ± 18.18, and 347 ± 15.85 for B5, B10, and B20, respectively, with no significant difference (P ≤ 0.05) between B10 and B20. On the other hand, the B5 methane yield was significantly lower (P ≤ 0.05) than those of B10 and B20. The methane yields therefore increased with particle size (B5 < B10 < B20) (Fig. 1). B20 and B10 showed a 20 and 18 % increase in methane yield, respectively, as compared to B5, which was significantly different (P ≤ 0.05). The finding of decreasing BMP with decreasing particle size is in stark contrast with other studies where decreasing particle size has been reported to improve methane yields [7, 26]. The explanation may lie in the morphological structure of the banana stem. The dermal tissues in the banana stem consist of a well-defined epidermis with radially oblong, wide cells with thick cuticles [28]. When the banana stem was reduced in size by cutting, the highly concentrated juice from the oblong cells could have been lost. That is, the smaller the particle size, the more organic juice was lost. In fact, the COD of the clear juice collected during chopping of samples showed an organic content value (dissolved COD) of 4401 mg L−1. It is plausible, therefore, to state that banana juice contains soluble organic compounds, easily convertible to methane, that could have been lost during extreme size reduction. This may also occur during drying prior to AD. This hypothesis may explain why the methane yields in the present study, using wet/fresh banana stem, are significantly higher than those reported in studies wherein pre-dried samples are used. Another plausible explanation is that the decrease in particle size increased the surface area of the banana stem, making it more accessible to bacterial/enzymatic attack and thus leading to accelerated hydrolysis, as evident from the increasing hydrolysis constant. It is probable that the accelerated hydrolysis led to the accumulation of volatile fatty acids, which might have inhibited the biogas process and hence lowered the BMP values [29, 30]. The results also showed that the largest particle size (B20) had the highest biodegradability.
Effect of enzyme addition on the BMP of B20 banana stem samples
Only the largest particle size (B20) was used in the enzymatic treatment. The 34-day methane production rates (right axis) and cumulative methane yields (left axis) from the B20 banana stem samples with and without enzyme addition are shown in Fig. 3. The test with an enzyme load of 60 FPU g−1 VS showed the highest methane production rate of 67 mL g−1 VS d−1 on day 1. This finding was corroborated by the highest hydrolysis rate constant (k_h) presented in Table 3. As in the tests with the different particle sizes, 90 % of the total methane was produced within 2 weeks of incubation and the methane production showed a single-peak curve. All the enzyme-treated samples showed similar methane yields, which were 320 ± 20.60, 343 ± 14.20, and 336 ± 31.10 mL g−1 VS for 10 FPU Celluclast, 60 FPU Celluclast, and 10 FPU MethaPlus, respectively. Analysis of variance (P ≤ 0.05) showed that these methane yields did not differ significantly from each other or from the control (no enzyme addition). These findings are in line with other studies where insignificant changes in BMP were obtained when enzymes were added to lignocellulosic wastes [22, 31]. The BMP may not improve after enzyme addition in AD systems; nevertheless, a considerable improvement of the hydrolysis rate can be achieved [31]. In this study, a higher hydrolysis rate was obtained as compared to the control experiment when a higher dose of Celluclast was added to the reactors. Both enzymes used in the present study were cellulolytic in nature, meaning that they can break down cellulose to easily convertible glucose molecules. However, cellulolytic microbes are known to evolve as individual degraders or as part of a "chain reaction" in microbial communities breaking down cellulose [32] in some environments such as AD processes [26].
Methane production rates of banana stems with and without enzyme addition. FPU stands for filter paper units, C15L stands for Celluclast, and Met stands for MethaPlus
Effect of co-digestion with different fractions of manures on AD of banana stem samples
The methane production rate (right axis) and cumulative methane yield (left axis) of the three different particle sizes of banana stem co-digested with 15 and 35 % g VS of CM are displayed in Fig. 4. The co-digestion of B5 with 35 % CM showed the highest methane production rate, peaking at 74 mL g−1 VS d−1 (Fig. 4b), while it was 66 mL g−1 VS d−1 (Fig. 4a) when a lower amount of CM (B5 with 15 % CM) was added. Meanwhile, the k_h values were in the same range, i.e., 0.148 and 0.152 d−1, respectively. Neither difference was significant (P > 0.05).
Methane yields and methane production rates of B5, B10, and B20 samples with 15 % (a) and 35 % (b) of cow manure
The methane yields of B5 co-digested with 15 and 35 % of CM were 287 ± 26.39 and 311 ± 3.20 mL g−1 VS, respectively. The methane yields were 290 ± 16.39 and 278 ± 8.34 mL g−1 VS for B10 and 296 ± 19.03 and 322 ± 9.18 mL g−1 VS for B20, both co-digested with 15 and 35 % of CM addition, respectively. The results show that co-digestion did not lead to a significant (P ≤ 0.05) improvement in methane yield. In fact, for B10 and B20, mono-digestion showed significantly higher methane yields (P ≤ 0.05). Co-digestion may lead either to synergy, wherein higher yields are achieved through the provision of vital nutrients and buffering, or to antagonism, where lower yields are obtained [14]. In the present study, it is probable that the inoculum was rich in buffering capacity (partial alkalinity was 3600 mg L−1) and may have contained the necessary nutrients and cofactors needed for the biochemistry of methane production.
Though many studies have shown an improvement of anaerobic co-digestion over mono-digestion [33, 34], this study showed that mono-digestion was the better option in terms of methane yields. This might be because the C/N ratio of the banana stem is already appropriate for AD and because the inoculum used in the present study showed a rather high partial alkalinity. These conditions should have provided a conducive environment for the mono-AD of the banana stem.
The energy potential of fresh banana pseudo-stem
Many studies have reported the BMP of dried banana stem to range from 195 to 256 mL g−1 VS [35–38]; these yields are much lower than the value obtained in this study. This disparity can be attributed to the fresh or wet nature of the banana stem used in the present study. It is probable that high moisture aided the biodegradability or putrescibility by providing an initially richer medium for enzymatic attack and hence methanogenesis [39]. Water is consumed during hydrolysis of macromolecules, and it has been reported that below a certain moisture content (<25 %), no biodegradation takes place [40]. Pommier et al. also reported a linear correlation between methane production and moisture content. On the other hand, drying may lead to floatation of the biomass and to constriction, shrinkage, and loss of porosity in the cells, making them less accessible to enzymatic attack through a decreased surface-area-to-volume ratio. As mentioned earlier, the sap or liquid fraction of the banana stem may be rich in soluble organics easily convertible to methane, which are otherwise lost during drying. Bound water (water contained in the substrate, or moisture) is therefore essential for an effective and efficient AD process. Increased moisture favors bacterial/methanogen colonization of the waste. Increased moisture content also leads to an increase in water activity, which has been reported to correlate positively with bacterial/archaeal growth and hence methane production [41]. Water content may therefore positively impact methane production through an increased microbial growth rate and bioavailability of the solid banana waste [42]. It must be mentioned that the use of wet banana stem in lieu of dry samples may present some logistical and preservation problems. Nonetheless, the fresh banana stem can be preserved via ensiling or co-ensiling with other fractions of the banana plant waste such as leaves and peels.
A full-scale, continuously stirred tank biogas plant can therefore be operated with the banana pseudo-stem as the sole substrate or co-digested with manure. Considering the 4.8 × 10^6 tons of wet banana stem produced in Hainan province, chopped to a particle size of 20 mm with a VS content of 3.78 % w/w and a BMP of 347 m3 ton−1 VS, the yearly methane production could sum up to 6.29 × 10^7 m3 CH4. This corresponds to a calorific or lower heating value of 2.26 × 10^12 kJ year−1 (1 m3 = 35,900 kJ) or 0.63 TWh year−1 (1 m3 = 9.97 kWh) [16]. Assuming a conversion efficiency of 43 %, the yearly energy recovery from banana stem could amount to 0.27 TWh year−1. This would account for 1.46 % of the yearly power consumption in Hainan province. Thus, it is plausible to state that the banana stem contains a huge amount of energy that can be harnessed through AD.
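The scale-up arithmetic above can be checked with a short back-of-the-envelope script; all inputs are the figures quoted in the text, and the conversion factors are the ones given in brackets.

```python
# Back-of-the-envelope check of the Hainan scale-up estimate, using only
# figures quoted in the text above.
wet_stem_tons  = 4.8e6     # wet banana stem per year in Hainan province
vs_fraction    = 0.0378    # VS content, 3.78 % w/w
bmp            = 347.0     # m3 CH4 per ton VS (B20)
lhv_kj_per_m3  = 35_900.0  # lower heating value of methane
kwh_per_m3     = 9.97
conversion_eff = 0.43

ch4_m3 = wet_stem_tons * vs_fraction * bmp          # ~6.3e7 m3 CH4 per year
energy_kj = ch4_m3 * lhv_kj_per_m3                  # ~2.3e12 kJ per year
energy_twh = ch4_m3 * kwh_per_m3 / 1e9              # kWh -> TWh, ~0.63 TWh per year
recovered_twh = energy_twh * conversion_eff         # ~0.27 TWh per year

print(f"CH4: {ch4_m3:.2e} m3/year")
print(f"Energy: {energy_kj:.2e} kJ/year = {energy_twh:.2f} TWh/year")
print(f"Recovered electricity: {recovered_twh:.2f} TWh/year")
```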
The present study demonstrated that wet banana stems can be used as substrates in both mono- and co-digestion processes with cow manure. The methane production of the fresh banana stem as a mono-substrate could reach 347 mL g−1 VS, which is higher than the values reported for other lignocellulosic biomasses and especially for dried banana stems. Mechanical cutting to a suitable size was deemed the most economically viable option owing to a possibly lower energy requirement and the high achievable methane yield. The results of enzyme addition and of co-digestion with cow manure showed a limited improvement of methane production for the banana stem, while the hydrolysis rates were improved. In addition, in all the tests, 90 % of the total methane was produced within 2 weeks of incubation. Fresh banana stems can, therefore, become a cheap lignocellulosic substrate for methane production, especially through co-ensiling with other morphological fractions for better storage or preservation.
Padam BS, Tin HS, Chye FY, Abdullah MI (2014) Banana by-products: an under-utilized renewable food biomass with great potential. J Food Sci Technol 51(12):3527–3545
Yamaguchi J, Araki S (2004) Biomass production of banana plants in the indigenous farming system of the East African Highland: a case study on the Kamachumu Plateau in northwest Tanzania. Agric Ecosyst Environ 102(1):93–111
Mohapatra D, Mishra S, Sutar N (2010) Banana and its by-product utilization: an overview. J Sci Ind Res 69(5):323–329
Neves LC M d, Converti A, Vessoni Penna TC (2009) Biogas production: new trends for alternative energy sources in rural and urban zones. Chem Eng Technol 32(8):1147–1153
Scholz M, Alders M, Lohaus T, Wessling M (2015) Structural optimization of membrane-based biogas upgrading processes. J Membr Sci 474:1–10
Zhang C, Li J, Liu C, Liu X, Wang J, Li S, Fan G, Zhang L (2013) Alkaline pretreatment for enhancement of biogas production from banana stem and swine manure by anaerobic codigestion. Bioresour Technol 149:353–358
Taherzadeh MJ, Karimi K (2008) Pretreatment of lignocellulosic wastes to improve ethanol and biogas production: a review. Int J Mol Sci 9(9):1621–1651
Parawira W (2012) Enzyme research and applications in biotechnological intensification of biogas production. Crit Rev Biotechnol 32(2):172–186
Herbel Z, Rákhely G, Bagi Z, Ivanova G, Ács N, Kovács E, Kovács KL (2010) Exploitation of the extremely thermophilic Caldicellulosiruptor saccharolyticus in hydrogen and biogas production from biomasses. Environ Technol 31(8-9):1017–1024
Wang H, Tao Y, Temudo M, Schooneveld M, Bijl H, Ren N, Wolf M, Heine C, Foerster A, Pelenc V (2015) An integrated approach for efficient biomethane production from solid bio-wastes in a compact system. Biotechnol for biofuels 8(1):1
Gianfreda L, Rao MA (2004) Potential of extra cellular enzymes in remediation of polluted soils: a review. Enzym Microb Technol 35(4):339–354
Pei P, Zhang C, Li J, Chang S, Li S, Wang J, Zhao M, Li J, Yu M, Chen X (2014) Optimization of NaOH pretreatment for enhancement of biogas production of banana pseudo-stem fiber using response surface methodology. Bio Resources 9(3):5073–5087
Scano EA, Asquer C, Pistis A, Ortu L, Demontis V, Cocco D (2014) Biogas from anaerobic digestion of fruit and vegetable wastes: experimental results on pilot-scale and preliminary performance evaluation of a full-scale power plant. Energy Convers Manag 77:22–30
Nges IA, Escobar F, Fu X, Björnsson L (2012) Benefits of supplementing an industrial waste anaerobic digester with energy crops for increased biogas production. Waste Manage 32(1):53–59
Tock JY, Lai CL, Lee KT, Tan KT, Bhatia S (2010) Banana biomass as potential renewable energy resource: a Malaysian case study. Renew Sust Energ Rev 14(2):798–805
Kamdem I, Hiligsmann S, Vanderghem C, Bilik I, Paquot M, Thonart P (2013) Comparative biochemical analysis during the anaerobic digestion of lignocellulosic biomass from six morphological parts of Williams Cavendish banana (Triploid Musa AAA group) plants. World J Microbiol Biotechnol 29(12):2259–2270
Raposo F, De la Rubia M, Fernández-Cegrí V, Borja R (2012) Anaerobic digestion of solid organic substrates in batch mode: an overview relating to methane yields and experimental procedures. Renew Sust Energ Rev 16(1):861–877
Badshah M, Lam DM, Liu J, Mattiasson B (2012) Use of an automatic methane potential test system for evaluating the biomethane potential of sugarcane bagasse after different treatments. Bioresour Technol 114:262–269
APHA (2005) Total, fixed, and volatile solids in solid and semisolid samples. In: Eaton AD, Clesceri LS, Rice EW, Greenberg AE, Franson MA (eds) Standard methods for the examination of water and wastewater, 21st edn. American Public Health Association/American Water Works Association/Water Environment Federation, Baltimore
Van Soest PV, Robertson J, Lewis B (1991) Methods for dietary fiber, neutral detergent fiber, and nonstarch polysaccharides in relation to animal nutrition. J Dairy Sci 74(10):3583–3597
Angelidaki I, Alves M, Bolzonella D, Borzacconi L, Campos J, Guwy A, Kalyuzhnyi S, Jenicek P, Van Lier J (2009) Defining the biomethane potential (BMP) of solid organic wastes and energy crops: a proposed protocol for batch assays. Water Sci Technol 59(5):927–934
Kreuger E, Sipos B, Zacchi G, Svensson S-E, Björnsson L (2011) Bioconversion of industrial hemp to ethanol and methane: the benefits of steam pretreatment and co-production. Bioresour Technol 102(3):3457–3465
Raposo F, Fernández‐Cegrí V, De la Rubia M, Borja R, Béline F, Cavinato C, Demirer G, Fernández B, Fernández‐Polanco M, Frigon J (2011) Biochemical methane potential (BMP) of solid organic substrates: evaluation of anaerobic biodegradability using data from an international interlaboratory study. J Chem Technol Biotechnol 86(8):1088–1098
Oliveira L, Cordeiro N, Evtuguin D, Torres I, Silvestre A (2007) Chemical composition of different morphological parts from 'Dwarf Cavendish' banana plant and their potential as a non-wood renewable source of natural products. Ind Crop Prod 26(2):163–172
Angelidaki I, Ellegaard L (2003) Codigestion of manure and organic wastes in centralized biogas plants. Appl Biochem Biotechnol 109(1-3):95–105
Shen S, Nges IA, Yun J, Liu J (2014) Pre-treatments for enhanced biochemical methane potential of bamboo waste. Chem Eng J 240:253–259
Li M-F, Fan Y-M, Xu F, Sun R-C, Zhang X-L (2010) Cold sodium hydroxide/urea based pretreatment of bamboo for bioethanol production: characterization of the cellulose rich fraction. Ind Crop Prod 32(3):551–559
Cordeiro N, Belgacem M, Torres I, Moura J (2004) Chemical composition and pulping of banana pseudo-stems. Ind Crop Prod 19(2):147–154
Izumi K, Okishio Y-k, Nagao N, Niwa C, Yamamoto S, Toda T (2010) Effects of particle size on anaerobic digestion of food waste. Int Biodeter Biodegr 64(7):601–608
Tumutegyereize P, Muranga F, Kawongolo J, Nabugoomu F (2013) Optimization of biogas production from banana peels: effect of particle size on methane yield. Afr J Biotechnol 10(79):18243–18251
Romano RT, Zhang R, Teter S, McGarvey JA (2009) The effect of enzyme addition on anaerobic digestion of JoseTall Wheat Grass. Bioresour Technol 100(20):4564–4571
Yang B, Dai Z, Ding S-Y, Wyman CE (2011) Enzymatic hydrolysis of cellulosic biomass. Biofuels 2(4):421–449
El-Mashad HM, Zhang R (2010) Biogas production from co-digestion of dairy manure and food waste. Bioresour Technol 101(11):4021–4028
Lehtomäki A, Huttunen S, Rintala J (2007) Laboratory investigations on co-digestion of energy crops and crop residues with cow manure for methane production: effect of crop to manure ratio. Resour Conserv Recycl 51(3):591–609
Kalia V, Sonakya V, Raizada N (2000) Anaerobic digestion of banana stem waste. Bioresour Technol 73(2):191–193
Cheng S, Li Z, Xu C, Yang L (2009) Experimental study on biochemical methane potential of banana tree waste
Khan MT, Maurer C, Argyropoulos D, Brule M, Mueller J (2009) Anaerobic digestion of banana waste, a potential source of energy in Uganda. In: Proceedings Tropentag (2009): International Research and Food Security
Zainol N (2012) Kinetics of biogas production from banana stem waste. InTech, Biogas Europe, p 408
Hernández-Berriel MC, Márquez-Benavides L, González-Pérez D, Buenrostro-Delgado O (2008) The effect of moisture regimes on the anaerobic degradation of municipal solid waste from Metepec (Mexico). Waste Manage 28:S14–S20
Pommier S, Chenu D, Quintard M, Lefebvre X (2007) A logistic model for the prediction of the influence of water on the solid waste methanization in landfills. Biotechnol Bioeng 97(3):473–482
Bourgeois C, Mescle J, Zucca J (1996) Microbiologie alimentaire. Lavoisier, Tome 1 Paris, p 672
Pommier S, Chenu D, Quintard M, Lefebvre X (2007) A logistic model for the prediction of the influence of water on the solid waste methanization in landfills. Biotechnol Bioeng 97:473–482
The authors would like to thank the lab work support of Ms. Jinhua Chen. The authors wish to express their gratitude to the Agricultural Departments of the Hainan Provincial Government. The Open Fund of Key Laboratory of Development and Application of Rural Renewable Energy (2015014), Ministry of Agriculture in China, is gratefully acknowledged for the financial support.
CL conceived the study, initiated the collaboration with the partner, and handled both the collection of the data and the drafting of the manuscript. IAN and MN participated in the design of the study and took the lead in the collection and analysis of the literature material and the REEP House data; GL led the drafting of the manuscript. LD and JL contributed to the development of the literature review and the drafting of the manuscript. All authors read and approved the final manuscript.
Author affiliations:
Department of Biotechnology, Lund University, Naturvetarvägen 14, P.O. Box 124, SE-221 00, Lund, Sweden: Ivo A. Nges & Jing Liu
Key Laboratory of Development and Application of Rural Renewable Energy, Ministry of Agriculture, Chengdu, 610041, China: Gangjin Liu & Liangwei Deng
Bioprocess Control, Scheelevägen 22, SE-223 63, Lund, Sweden: Gangjin Liu & Mihaela Nistor
Correspondence to Chao Li.
Li, C., Liu, G., Nges, I.A. et al. Fresh banana pseudo-stems as a tropical lignocellulosic feedstock for methane production. Energ Sustain Soc 6, 27 (2016) doi:10.1186/s13705-016-0093-9
Banana stems
Co-digestion
Phenomenology of buoyancy-driven turbulence: Recent results
Mahendra K. Verma, Abhishek Kumar, Ambrish Pandey
In this paper, we review the recent developments in the field of buoyancy-driven turbulence. Scaling and numerical arguments show that the stably-stratified turbulence with moderate stratification has kinetic energy spectrum $E_u(k) \sim k^{-11/5}$ and the kinetic energy flux $\Pi_u(k) \sim k^{-4/5}$, which is called Bolgiano-Obukhov scaling. The energy flux for the Rayleigh-B\'{e}nard convection (RBC), however, is approximately constant in the inertial range, which results in Kolmogorov's spectrum ($E_u(k) \sim k^{-5/3}$) for the kinetic energy. The phenomenology of RBC should apply to other flows where the buoyancy feeds the kinetic energy, e.g. bubbly turbulence and fully-developed Rayleigh-Taylor instability. This paper also covers several models that predict the Reynolds and Nusselt numbers of RBC. Recent works show that the viscous dissipation rate of RBC scales as $\sim \mathrm{Ra}^{1.3}$, where $\mathrm{Ra}$ is the Rayleigh number.
New Journal of Physics
Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Rayleigh-Benard convection
buoyancy-driven flows
stably-stratified turbulence
thermally-driven turbulence
Verma, M. K., Kumar, A., & Pandey, A. (2017). Phenomenology of buoyancy-driven turbulence: Recent results. New Journal of Physics, 19(2). http://www.mendeley.com/research/phenomenology-buoyancydriven-turbulence-recent-results
Friday afternoon, 31st May, and Saturday morning, 1st June 2019, at Zentrum Mathematik, Technische Universität München.
Gioia Carinci: Inclusion process, sticky Brownian motion and condensation.
Abstract: Many real world phenomena can be modelled by dynamic random networks. We will focus on preferential attachment models where the networks grow node by node and edges with the new vertex are added randomly depending on a sublinear function of the degree of the older vertex. Using Stein's method provides rates of convergence for the total variation distance between the evolving degree distribution and an asymptotic power-law distribution as the number of vertices tends to infinity. This is a joint work with Carina Betken and Marcel Ortgiese.
Abstract: In this joint work with Gerónimo Uribe-Bravo, we prove and extend results from the physics literature about a random walk with random reinforced relocations. The "walker" evolves in $\mathbb Z^d$ or $\mathbb R^d$ according to a Markov process, except at some random jump-times, where it chooses a time uniformly at random in its past, and instantly jumps to the position it occupied at that random time. This walk is by definition non-Markovian, since the walker needs to remember all its past. Under moment conditions on the inter-jump-times, and provided that the underlying Markov process verifies a distributional limit theorem, we show a distributional limit theorem for the position of the walker at large time. The proof relies on exploiting the branching structure of this random walk with random relocations; we are able to extend the model further by allowing the memory of the walker to decay with time.
Abstract: I discuss crossing probabilities of multiple interfaces in the critical Ising model with alternating boundary conditions. In the scaling limit, they are conformally invariant expressions given by so-called pure partition functions of multiple SLE(kappa) with kappa=3. I also describe analogous results for critical percolation and the Gaussian free field. This is joint work with Hao Wu (Yau Center / Tsinghua University).
Abstract: In this talk, I will focus on the behavior of the following cluster growth models: internal DLA, the rotor model, and the divisible sandpile model. These models can be run on any infinite graph, and they are based on particles moving around according to some rule (that can be either random or deterministic) and aggregating. Describing the limit shape of the cluster these particles produce is one of the main questions one would like to answer. For some of the models, the fractal nature of the cluster is, from the mathematical point of view, far from being understood. I will give an overview of the known limit shapes for the above-mentioned growth models; in particular, I will present a limit shape universality result on the Sierpinski gasket graph, and conclude with some open questions. The results are based on collaborations with J. Chen, W. Huss, and A. Teplyaev.
FO3 and the algebra of binary relations
Submitted by Jan Van den Bussche on Tue, 07/31/2018 - 09:23
Image credit: Why U
The algebra of binary relations, denoted here simply by RA, is one of the oldest logics, created by De Morgan, Peirce and Schröder in the 19th century. Tarski and his collaborators revived the logic in the 20th century. Nowadays RA is still very relevant as it lies at the basis of dynamic logics, description logics, and XPath-like query languages for tree and graph data.
A very interesting result by Tarski and Givant is the equivalence between RA and \(\mathrm{FO}^3\), the three-variable fragment of first-order logic. The monograph by Tarski and Givant on the formalization of set theory without variables contains a proof, but that monograph is not for the impatient. The book by Marx and Venema on multi-dimensional modal logic contains an elegant and short proof. As that book is again not something you can readily dive in and start reading somewhere in the middle, we give here a self-contained version of the Marx-Venema proof.
Let us begin by recalling the definition of RA. We start from a binary relational vocabulary \(\Upsilon\). Recall that an \(\Upsilon\)-structure \(I\) consists of a nonempty domain of atomic data elements, denoted by \(\mathrm{dom}(I)\), and, for every relation name \(R\) from \(\Upsilon\), a binary relation \(I(R)\) on \(\mathrm{dom}(I)\). We can think of \(\Upsilon\) as a database schema with only binary relation names, and of \(I\) as an instance of that schema. The only difference is that usually database instances are defined without an explicit domain, instead using the active-domain semantics for first-order logic. Below we continue to work with structures with explicit domains, but everything can be adapted to work with active-domain semantics instead.
The expressions \(e\) of RA are generated by the following grammar:
$$ e ::= R \mid e \cup e \mid \neg e \mid e \circ e \mid e^{-1} \mid {\it id} \mid {\it all} $$ Here, as above, the symbol \(R\) ranges over the relation names from \(\Upsilon\). Given any \(\Upsilon\)-structure \(I\), an RA-expression \(e\) defines a binary relation \(e(I)\) on \(\mathrm{dom}(I)\) in a natural manner. Obviously, \(\cup\) is union; \(\neg\) is complement with respect to \(\mathrm{dom}(I)^2\). The operator \(\circ\) is the composition of two binary relations, i.e., \(r \circ s = \{(a,c) \mid \exists b : (a,b) \in r\) and \((b,c) \in s\}\). The operator \({}^{-1}\), called converse, inverts a binary relation, i.e., \(r^{-1} = \{(b,a) \mid (a,b) \in r\}\). Finally, the primitives \({\it id}\) and \({\it all}\) evaluate to the identity relation and the complete binary relation on \(\mathrm{dom}(I)\), respectively.
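To make the semantics concrete, here is a small Python sketch (added for illustration only, not part of the original post) that represents binary relations as sets of pairs over an explicit domain and implements each RA operator directly from the definitions above.

```python
# Sketch of the RA operators over binary relations represented as sets of pairs.
from itertools import product

def union(r, s):          return r | s
def complement(r, dom):   return set(product(dom, repeat=2)) - r
def compose(r, s):        return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}
def converse(r):          return {(b, a) for (a, b) in r}
def identity(dom):        return {(a, a) for a in dom}
def all_pairs(dom):       return set(product(dom, repeat=2))

# toy structure: R o (S o T)
dom = {1, 2, 3}
R, S, T = {(1, 2)}, {(2, 3)}, {(3, 1)}
print(compose(R, compose(S, T)))   # {(1, 1)}
```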
It is not difficult to see that RA is subsumed by \(\mathrm{FO}^3\), in that for every RA-expression \(e\) there exists a first-order logic formula \(\varphi\) with two free variables and three variables overall, so that on every structure, \(\varphi\) defines the same relation as \(e\). For example, the expression \(R \circ (S \circ T)\) can be equivalently expressed as $$ \{(x,y) \mid \exists z(R(x,z) \land \exists x(S(z,x) \land T(x,y)))\}. $$ Note the reuse of the variable \(x\). The interesting result is that the converse holds as well: for every \(\varphi\) as above there exists an \(e\) defining the same relation on every structure.
To prove this result, let us fix the three available variables as \(x\), \(y\) and \(z\). We begin by defining, for every two-element subset \(\{u,v\} \subseteq \{x,y,z\}\), the logic \(L_{\{u,v\}}\). This logic contains all atomic formulas over \(\Upsilon\) involving the variables \(u\) and \(v\), as well as an explicit proposition \(\top\) for true; is closed under the boolean connectives; and also contains all quantified formulas of the form \(\exists w(\gamma \land \chi)\), where \(w\) is the third variable, i.e., the variable not in \(\{u,v\}\), and \(\gamma\) and \(\chi\) are formulas in \(L_{\{u,w\}}\) and \(L_{\{w,v\}}\), respectively. Thus, the three logics \(L_{\{x,y\}}\), \(L_{\{x,z\}}\) and \(L_{\{y,z\}}\) are defined by mutual recursion. Note that the only variables that can occur free in a formula in \(L_{\{u,v\}}\) are \(u\) and \(v\).
It is not difficult to see that, for every formula \(\varphi\) in \(L_{\{u,v\}}\), we can express \(\{(u,v) \mid \varphi\}\) in RA. For example, we can express $$ \{(x,y) \mid \exists z(R(z,x) \land \top) \land y=x\} $$ by \((R^{-1}\circ {\it all}) \cap {\it id}\), where we use intersection \(\cap\) as a shorthand which can be expressed in terms of union and complement.
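The equivalence claimed in this example can also be checked mechanically on a small structure; the following self-contained snippet (again an illustration added here, not part of the original argument) evaluates both the RA expression and the three-variable formula by brute force and confirms they define the same relation.

```python
# Brute-force check on a toy structure that (R^-1 o all) ∩ id defines the same
# relation as {(x,y) | ∃z (R(z,x) ∧ ⊤) ∧ y = x}.
dom = {1, 2, 3}
R = {(1, 2), (3, 3)}

all_rel = {(a, b) for a in dom for b in dom}
r_inv   = {(b, a) for (a, b) in R}
comp    = {(a, c) for (a, b) in r_inv for (b2, c) in all_rel if b == b2}
ident   = {(a, a) for a in dom}
ra_result = comp & ident

fo_result = {(x, y) for x in dom for y in dom
             if any((z, x) in R for z in dom) and y == x}

assert ra_result == fo_result == {(2, 2), (3, 3)}
print(ra_result)
```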
We now claim that every \(\mathrm{FO}^3\) formula over \(\Upsilon\) is equivalent to a boolean combination of formulas from \(L_{\{x,y\}}\), \(L_{\{x,z\}}\) and \(L_{\{y,z\}}\). This claim can be proven by structural induction. The basis of the induction is formed by the atomic formulas. Since \(\Upsilon\) has only binary relation names, every atomic formula is always in \(L_{\{x,y\}}\), \(L_{\{x,z\}}\) or \(L_{\{y,z\}}\) as desired. For the induction step, the boolean connectives pose no problem. So, the only case that is not immediately clear is when \(\varphi\) is of the form \(\exists w\, \psi\). Let \(u\) and \(v\) be the two variables other than \(w\). By induction, we can write \(\psi\) as a boolean combination of formulas from \(L_{\{u,v\}}\), \(L_{\{u,w\}}\) and \(L_{\{w,v\}}\). We can put this boolean combination in disjunctive normal form and distribute the existential quantifier over the disjuncts, so that we are left with formulas of the form \(\exists w(\beta \land \gamma \land \chi)\), where \(\beta\), \(\gamma\) and \(\chi\) are from \(L_{\{u,v\}}\), \(L_{\{u,w\}}\) and \(L_{\{w,v\}}\), respectively. We can rewrite this formula as \(\beta \land \exists w(\gamma \land \chi)\) which is a formula in \(L_{\{u,v\}}\) as desired.
The result now follows. Let \(\varphi\) be an \(\mathrm{FO}^3\) formula with two free variables \(u\) and \(v\). Let \(w\) be the third variable. By the above claim, we can write \(\varphi\) as a boolean combination of formulas from \(L_{\{u,v\}}\), \(L_{\{u,w\}}\) and \(L_{\{w,v\}}\). Of these constituent formulas, those that are in \(L_{\{u,w\}}\) can have only \(u\) free, so they can be rewritten as \(L_{\{u,v\}}\)-formulas by swapping \(w\) and \(v\). Similarly, the constituent formulas that are in \(L_{\{v,w\}}\) can be rewritten as \(L_{\{u,v\}}\)-formulas by swapping \(w\) and \(u\). The final result is an \(L_{\{u,v\}}\)-formula, which concludes the proof, as we have seen earlier that \(\{(u,v)\mid \varphi\}\), with \(\varphi\) an \(L_{\{u,v\}}\)-formula, is expressible in RA.
Thanks to Floris Geerts for suggesting that I write this blog post.
query languages
Poverty alleviation strategies under informality: evidence for Latin America
Martin Caruso Bloeck, Sebastian Galiani & Federico Weinschelbaum
Latin American Economic Review, volume 28, Article number: 14 (2019)
Strategies based on growth and inequality reduction require a long-run horizon, and this paper therefore argues that those strategies need to be complemented by poverty alleviation programs. With regard to such programs, informality in Latin America and the Caribbean is a primary obstacle to carrying out means-tested income-support programs, and countries in the region have therefore mostly relied on proxy means testing mechanisms. This paper studies the relative effectiveness of these and other mechanisms by way of a formal model in which workers choose between job opportunities in the formal and informal sectors. Although the means testing mechanism allows for a more pro-poor design of transfers, it distorts labor decisions made by workers. On the other hand, (exogenous) proxy means testing does not cause distortions, but its pro-poor quality is constrained by the power of observable characteristics to infer income levels. However, since taxation is necessary to fund programs, redistribution becomes less effective, especially for programs other than means testing. The paper concludes by discussing the implications of these results for the design of more efficient targeting programs.
For several decades, poverty has been on a steady downward path, both worldwide and in Latin America and the Caribbean (LAC). Growth in incomes and a decline in inequality in the 2000s have triggered these improvements. Still, there is room for progress, and poverty reduction remains a top priority for societies and policymakers in the LAC region. This paper is therefore dedicated to assessing the effectiveness of poverty reduction strategies.
In terms of poverty, LAC has consistently ranked in the middle of the remaining regions of the world, below Europe and Central Asia and the Middle East and Northern Africa, but above South Asia, East Asia and the Pacific, and sub-Saharan Africa. With the exception of this last region, all regions have gone through a sustained process of poverty reduction. The fall in poverty in LAC has been underpinned by long-run growth in the region, although there has been frequent macroeconomic instability. Inequality generally rose in the 1990s, while it has fallen in the 2000s. With some exceptions, the gains in the 2000s have more often offset the declines in the 1990s, implying that declines in inequality have contributed to declines in poverty.
This paper estimates the contribution of growth and inequality toward poverty reduction by means of regression-based decompositions, finding that growth has been the main driver behind falling poverty in LAC. This is not because declining inequality is ineffective at reducing poverty—far from it, the estimates indicate that the elasticity of poverty with respect to inequality is rather large, especially in the region. The reason is rather that inequality has not fallen consistently from its values at the beginning of the sample.
That being said, growth and sustained declines in inequality are necessarily long-term strategies for poverty reduction. Even when growth and declines in inequality are sustained, this paper argues that there is a need to complement these developments with poverty alleviation programs. This is because improvements do not automatically spill over to everyone, and groups without the proper human capital and access to better opportunities are unlikely to be able to gain their share of these benefits.
The effectiveness of poverty alleviation programs depends crucially on whether they can efficiently target the poorest sector of the population. High-income countries have mainly relied on means testing strategies to gauge this efficiently, but this requires being able to verify whether incomes have been reported accurately. In a context with huge levels of informality, as is the case in LAC, incomes are generally unobservable to program administrators. As a result, developing countries have adopted proxy means testing mechanisms to assess the poverty status of potential beneficiaries.
This paper compares the performance of these and other alternative mechanisms by means of a formal model. While means testing may steer workers away from more productive opportunities in the informal sector, it allows for a more flexible design of transfers and a greater pro-poor character for a program. On the other hand, proxy means testing generates no distortions in workers' decisions, but the overall pro-poor nature of the program is constrained by the ability of observable characteristics to accurately predict poverty levels. Additionally, since means testing programs do not provide income support for informal workers, the effect of complementing means testing with transfers to informal workers is analyzed. When transfers are granted to all informal workers, the level of filtration to the non-poor is possibly large, while it is not possible to provide greater transfers to the poorest informal workers. As a result, indiscriminate transfers to informal workers are a rather inefficient way to reduce the incidence and depth of poverty. Transfers to informal workers assigned through proxy means testing generate distortions in labor market decisions, a result that contrasts with the effect of proxy means testing alone.
Finally, this paper considers how the design of redistributive programs affects the budget constraints of the public sector. Informality diminishes public revenues for a given tax rate or requires a higher rate for a desired revenue level. Additionally, it makes taxation of the non-poor and targeting of the poor less efficient, weakening the overall distributive effect of public programs. These effects are stronger if income support is provided to informal workers, even by means of proxy means testing. In view of this, this paper argues that greater reliance on means testing may be suitable in LAC. The main reasons are that means testing is a more pro-poor design, curbs the size of the informal sector and the losses in revenues associated with it, and avoids unnecessary filtration to the non-poor.
The next section of this paper discusses methodological issues behind the measurement of poverty and presents the evidence on poverty in LAC and the rest of the world. Section 3 studies the role of growth and inequality in poverty reduction and estimates the elasticities of poverty with respect to growth and inequality. It also quantifies the contribution of each of these factors to poverty reduction in LAC. Section 4 uses a formal model to examine income-support programs for the poor in the context of high informality and draws conclusions for improvements in policy design. Finally, Sect. 5 puts forth the conclusions and implications of the paper.
Measuring monetary poverty
The measurement of poverty first requires establishing a threshold to distinguish the poor from the non-poor. Although it is hard to justify such a discontinuity in the welfare distribution, the practicality and usefulness of poverty lines have made them hard to replace.
There are two possible criteria with which to set the poverty line. The first is an absolute poverty threshold. The logic behind an absolute poverty line requires establishing a series of needs to be satisfied by means of a basket of goods and pricing this basket. Caloric intake is a classic element of such baskets. The alternative is a relative poverty line, which can be defined as a fixed rule of the distribution of welfare. For example, a relative poverty line may be defined as 50% of mean per capita income.
The key difference between these principles is that absolute poverty lines are meant to identify persons with the greatest needs, while relative poverty lines are conceived to adjust to their social context. For example, as incomes grow in a country, the relative poverty line adjusts automatically, reflecting a broadening of the needs that society considers basic. By comparison, absolute poverty will unequivocally fall as incomes grow, provided growth is at least partially shared by the lower tail of the income distribution.
Although most developing countries tend to use absolute poverty lines, there is consistent evidence that these are influenced by relative factors. The first piece of evidence to support this claim is that countries with higher incomes tend to opt for higher poverty lines, even after adjusting for differences in purchasing power, as is shown in Fig. 1 . Additionally, after sustained periods of growth, practically the entire population will have overcome a given poverty threshold. When this happens, the poverty rate will remain stagnant at very low levels. Figure 2 illustrates this point for Chile. After a sustained period of poverty reduction, the Ministry of Social Development made a substantial methodological change in how to measure poverty, including an upward adjustment of the poverty line. Although such changes are generally grounded in changes in spending patterns of a reference population, development is an undeniable cause of such changes. Thus, even if not explicitly, poverty lines do adjust to social progress.
Fig. 1 Poverty lines and ln of consumption per capita [Source: Gasparini et al. (2014) based on Ravallion et al. (2009)]
Fig. 2 Poverty rates in Chile (percent) (Source: Prepared by the authors based on information from Chile's Ministry of Social Development)
As a consequence, differences in national poverty rates reflect not only true differences in poverty between countries, but also the methodological differences in how poverty is measured. Making poverty rates comparable therefore requires the use of a uniform methodology and a line that adjusts for differences in price levels between countries. The World Bank popularized the use of the $1-per-person-per-day poverty line, which has been updated several times to account for inflation in the USA, reaching US$1.25 in 2005 and US$1.90 in 2011. The poverty line is adjusted by a factor reflecting differences in the purchasing power of the US dollar in each country. Thus, the line is adjusted to each country to reflect the cost of a uniform basket of goods.
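As a toy illustration (not part of the original analysis), the snippet below converts the US$1.90 line into local currency units with a hypothetical PPP conversion factor and computes a headcount over hypothetical incomes.

```python
# Toy illustration of applying a PPP-adjusted poverty line; the PPP factor and
# incomes are hypothetical, not taken from the paper.
ppp_factor = 9.5                      # local currency units per PPP dollar
line_lcu = 1.90 * ppp_factor          # daily poverty line in local currency

incomes_lcu = [12.0, 15.5, 20.0, 8.0, 40.0]   # per capita daily incomes
poor = [y < line_lcu for y in incomes_lcu]
print(f"line = {line_lcu:.2f} LCU/day, headcount = {sum(poor) / len(poor):.2f}")
```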
Once the poverty line has been set and the poor have been identified, the task that follows is to build a synthetic index from the distribution of income of the poor. Following Sen (1976), a poverty index should comply with two axioms. The first is the monotonicity axiom, which states that a decrease in the income of a person below the poverty line must increase the poverty index. The second is the transfer axiom, which states that any pure transfer from a poor person to someone who is richer must result in an increase in poverty. The headcount ratio does not satisfy either of these axioms, while the poverty gap does not satisfy the transfer axiom.
Foster et al. (FGT) (1984) propose a family of indexes in the following form:
$$ \text{FGT}\left( \alpha \right) = \frac{1}{N}\sum_{i = 1}^{N} \left( 1 - \frac{x_{i}}{z} \right)^{\alpha} 1\left( x_{i} < z \right), \qquad \alpha \ge 0, $$
where \( x_{i} \) is the income of person or household \( i \), \( z \) is the poverty line, \( N \) is the total population or number of households, and \( 1\left( {. } \right) \) is an indicator function. When \( \alpha = 0 \), this index is the poverty headcount ratio, while \( \alpha = 1 \) delivers the poverty gap. Any \( \alpha > 1 \) will satisfy the transfer and monotonicity axioms. This has popularized the use of \( {\text{FGT}}\left( 2 \right) \) as a poverty index, commonly known as poverty gap squared. This paper uses poverty lines adjusted for purchasing power parity and poverty indexes from Foster et al. (1984).
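A minimal implementation of this family of indexes is sketched below (for illustration only); the income vector is hypothetical, and α = 0, 1, 2 reproduce the headcount ratio, the poverty gap, and the squared poverty gap, respectively.

```python
# Sketch of the FGT(alpha) family over a vector of incomes and a poverty line z.
import numpy as np

def fgt(incomes, z, alpha):
    x = np.asarray(incomes, dtype=float)
    poor = x < z
    gap = 1.0 - x / z                       # relative shortfall (ignored for the non-poor)
    return np.mean(np.where(poor, gap ** alpha, 0.0))

incomes = [0.8, 1.5, 2.5, 4.0, 10.0]        # hypothetical per capita daily incomes (PPP$)
z = 1.9
for a in (0, 1, 2):
    print(f"FGT({a}) = {fgt(incomes, z, a):.3f}")
```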
Evidence of poverty and inequality
Table 1 shows several poverty indicators worldwide and for different developing regions since 1981. The period has been one of consistent poverty reduction worldwide. Extreme poverty and moderate poverty have fallen by over 30 and 20 percentage points, respectively. The decline in extreme poverty has reached a point that this benchmark has become of little use, and more moderate poverty lines have become the more relevant benchmark. The fall in poverty is evident in all regions and all indicators, although certain differences prevail. East Asia and the Pacific have been by far the most dynamic in poverty reduction, while progress has been the slowest in relative terms in sub-Saharan Africa. LAC is consistently ranked in the middle compared to the remaining regions: While far from the low poverty levels of Europe and Central Asia, it fares considerably well when compared to regions such as sub-Saharan Africa or South Asia. Poverty levels seem to be historically similar to those of the Middle East and Northern Africa, and also similar to those of East Asia and the Pacific in the most recent measures.
Table 1 Poverty rates worldwide.
Table 2 shows poverty rates within LAC for the latest years available. As mentioned when discussing Table 1, extreme poverty is very low for most countries in the region: Only one of the 17 countries has a rate that is over 10%, and only five have a rate above 5%. Nevertheless, moderate poverty is still widespread in the region. Moreover, it can be seen that there is substantial variation in the poverty rates.
Table 2 Comparison of poverty rates in LAC in 2015.
Drivers of poverty reduction
The evidence reviewed so far shows a consistent decline in the poverty rate, although it does little to explain the drivers behind this trend. Because of the importance of understanding the underlying factors behind the decline in poverty, the remainder of this section is dedicated to its analysis. Schematically, changes in the poverty rate can have three sources, which are explained with the help of Fig. 3. Panel A shows an income distribution and a poverty line (vertical dashed line). Given that the poverty rate is calculated as the percentage of the population below the poverty line, the brown area below income distribution and to the left of the line is a graphical representation of the poverty rate. Panel B shows us that an increase in the value of poverty line, for example because the goods in the basket become more expensive, increases the poverty rate by a magnitude represented by the red shaded area. This is called the "line effect," and it is a result of changes in the real value of the poverty line. In Panel C, the income distribution is shifted to the right, representing an even increase in incomes. Before the income shift, the poverty rate is represented by the brown and red shaded areas, while only those in the red area remain poor after the income increase. Thus, the brown area represents the fall in poverty resulting from an increase in incomes, called the "growth effect." Last, poverty rates can change because the income distribution becomes more or less egalitarian. These changes are shown in Panel D, where the distribution in red is more even than that in black. If the income distribution were to evolve from the red to the black, poverty would fall by the area shaded brown. This is the "distribution effect."
Fig. 3 Drivers behind poverty reduction. Vertical dashed lines represent hypothetical poverty lines (Source: illustration prepared by the authors)
Simple as it is, this logic provides an outstanding framework to understand the drivers behind the fall in poverty. The following sections are dedicated to studying the growth and the redistribution components of changes in poverty.
Economic growth is generally believed to be a necessary factor behind poverty reduction strategies, given that there are constraints to income distribution. As incomes grow, a given poverty line should be accessible to more people. Figure 4 shows the growth in mean and median incomes adjusted for purchasing power parity (PPP) in the region. The figure shows that all countries experienced at least some growth over the 1981–2014 period. On average, mean incomes grew by 69% from their base year, while median incomes grew by 82%. Additionally, the figure shows the volatile history for which LAC is well known. The decade starting in 1990 had mixed results, as individual country mean growth rates ranged from − 2% per annum (Venezuela) to over 3% per annum (Chile, Honduras, Panama). In terms of median income, the growth rates ranged from − 2.8% per annum (Venezuela and Paraguay) to 4.5% per annum (Honduras). In the 2000s, there was strong growth in most countries, more than offsetting previous declines in incomes. Growth rates at the beginning of the millennium were quite sensitive to the timing of the crises and in many cases were inflated as a result of the subsequent recovery, but mean incomes still grew by between 3 and 4% per annum in most countries after 2005. Median incomes grew faster, in the range of 4 to even 6% per annum after 2005.
Fig. 4 Growth in mean and median income adjusted by purchasing power parity. The base year is the first year available for each country. A linear trend is assumed for countries with 1 or 2 years of missing data [Source: Prepared by the authors based on PovCalNet (2017)]
In all, despite several macroeconomic crises, countries in the region have generally gone through a period of sustained income growth. This growth is expected to have contributed to the fall in poverty discussed previously.
Latin America has long been recognized as a region with comparatively high levels of inequality. For example, Londoño and Székely (2000) characterized the region as having "excess inequality" given that countries in the region have greater inequality than would be expected for their income levels. Figure 5 illustrates the concept. The figure shows that inequality in Latin America is higher than in any other region in the world. Moreover, even the least unequal countries in the region have inequality levels that would be among the highest in any other region. This shows that inequality is not only high on average, but also a phenomenon that extends throughout the entire region.
Fig. 5 Gini index worldwide (Source: Prepared by the authors based on the World Bank's World Development Indicators)
The evolution of inequality is a history of ebb and flow. Table 3 shows that reforms in the 1990s tended to increase inequality in most countries in LAC. On the other hand, the past decade of commodity booms and strong economic growth has been accompanied by a generalized fall in inequality in the region. With the exceptions of Costa Rica and Paraguay, the decline in inequality over the past decade was enough to offset the increases in inequality in the earlier period.
Table 3 Gini index in Latin America and the Caribbean since the 1990s.
One would intuitively believe that the decline in inequality must have contributed to the decline in poverty. However, the claim that both growth and falling inequality played a role in poverty reduction is qualitative. A quantitative decomposition of the decline in poverty into growth and inequality factors would contribute to a more comprehensive understanding of the process behind poverty reduction. The next section takes on that task.
Decompositions of changes in poverty
There is a wide range of methods by which changes in poverty can be decomposed into several possible explanatory factors. One of the simplest methodologies, which will be employed here, is a regression-based decomposition. The method consists of estimating a variant of the following equation:
$$ \ln \left( {P_{it} } \right) = \alpha + \beta \ln \left( {M_{it} } \right) + \gamma \ln \left( {I_{it} } \right) + \delta_{t} + \eta_{i} + \varepsilon_{it} , $$
where \( P_{it} \) is a poverty index, \( M_{it} \) is an income index, \( I_{it} \) is an inequality index, \( \alpha \) is a constant, \( \delta_{t} \) and \( \eta_{i} \) are time and country fixed effects, and \( \varepsilon_{it} \) is an error term. As the previous expression is written in logarithmic form, the parameters \( \beta \) and \( \gamma \) constitute the elasticities of the poverty rate with respect to income and inequality, respectively. The coefficient \( \beta \) is expected to be negative, while \( \gamma \) is expected to be positive.
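As an illustration only, and not the authors' code, the following sketch estimates a variant of Eq. (1) on a simulated panel, absorbing the time and country fixed effects with dummy variables; the variable names (pov, inc, gini) and all parameter values are hypothetical.

```python
# A minimal sketch of a two-way fixed effects estimation of Eq. (1) on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for c in [f"country_{i}" for i in range(30)]:
    eta = rng.normal()                          # country fixed effect
    for year in range(2000, 2015):
        inc = np.exp(rng.normal(3.0, 0.5))      # mean income (PPP), hypothetical
        gini = np.exp(rng.normal(3.6, 0.2))     # inequality index, hypothetical
        ln_pov = 5.0 - 2.0 * np.log(inc) + 1.5 * np.log(gini) + eta + rng.normal(0, 0.1)
        rows.append(dict(country=c, year=year, pov=np.exp(ln_pov), inc=inc, gini=gini))
df = pd.DataFrame(rows)

# The coefficients on log income and log inequality are the elasticities in Eq. (1).
fit = smf.ols("np.log(pov) ~ np.log(inc) + np.log(gini) + C(year) + C(country)",
              data=df).fit()
print(fit.params[["np.log(inc)", "np.log(gini)"]])   # should recover roughly -2 and 1.5
```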
Variants of Eq. (1) are estimated based on data extracted from PovCalNet (2017). This source provides internationally comparable poverty and inequality indexes and data regarding consumption and income adjusted for purchasing power parity. Adjustments were made to the data given that the indexes reported are based on the distribution of income for some countries, on consumption for others, and on both for yet others. We used the countries for which both income- and consumption-based indexes were available to calculate the average ratio between the two and then applied this correction factor to the countries for which only income-based indexes were provided. All countries for which at least two observations were available were kept in the final sample. The final dataset obtained consists of an unbalanced panel of 135 countries with data from 1984 to 2014, and the results of estimating Eq. (1) are shown in Table 4. Columns (1), (4), and (7) show the results for the full sample of 135 countries; columns (2), (5), and (8) exclude 19 high-income countries from the sample; and columns (3), (6), and (9) limit the sample to countries in LAC. The upper panel shows the results for the US$1.9 PPP poverty line, while the lower panel uses the US$3.2 PPP line. Poverty indexes used are the headcount ratio (columns 1–3), the poverty gap (columns 4–6), and the poverty gap squared (columns 7–9).
Table 4 Estimation results.
The elasticities of both growth and inequality have the expected sign throughout the different estimations. At around − 2, the growth elasticity of poverty is similar to that of previous studies (Ravallion and Chen 1997; Ravallion 1997; Kraay 2006). Exclusion of high-income countries has a minimal effect on the parameters estimated when compared to the full sample. However, restricting the estimation to LAC countries has a notable effect on the coefficients. The table shows that the effectiveness of growth for poverty reduction becomes much smaller for all poverty indexes except for the poverty gap squared with the US$1.9 line. Additionally, inequality reduction is more effective in reducing poverty in LAC than in the other countries. Table 9 in Appendix shows the same results using median consumption instead of the mean. The results remain largely similar in spite of this change.
Higher-than-average inequality is a key factor that explains the difference in coefficients between LAC countries and the full sample. For example, the effect of a small change in income on the poverty rate is given by the density of the income distribution around the poverty line. Under certain simplifying assumptions on the distribution of income, high inequality generates a low density around the poverty line, and therefore a low income elasticity of poverty (Bourguignon 2003). Empirical evidence is consistent with this explanation (Ravallion 1997). Additionally, falling inequality generates a proportionally greater reduction in poverty in a place where inequality is high, because this implies larger transfers from rich to poor.
We now assess the quantitative impact of growth and inequality for each country. Table 5 shows the LAC countries in the sample and the change in the poverty rate, income, and inequality expressed in log points. The growth (inequality) effect is calculated as the product of the change in log points in income (inequality) and the elasticity featured in Table 4. The mean change in the poverty rate is − 1.11 log points, while the growth and inequality effects average − 0.97 and − 0.12 log points, respectively. These magnitudes indicate that growth has been responsible for the bulk of poverty reduction, although inequality has also played a role. The decomposition of the remaining poverty indexes is featured in Tables 10, 11, 12, 13, and 14 in Appendix. These show that the growth effect accounts for between 70 and 90% of the change in the poverty rate, while inequality explains around 12%.
Table 5 Decomposition of changes in headcount ratio (US$3.20 line).
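The accounting behind this kind of decomposition is simple enough to sketch in a few lines; the elasticities below are placeholders in the spirit of Table 4, not the estimated values, and the example numbers are hypothetical.

```python
# Minimal sketch of the decomposition used in Table 5: the growth (inequality) effect
# is the log change in income (inequality) multiplied by its elasticity.
def decompose(d_ln_poverty, d_ln_income, d_ln_gini, beta=-2.0, gamma=1.5):
    """beta and gamma are placeholder elasticities, not the paper's estimates."""
    growth_effect = beta * d_ln_income
    inequality_effect = gamma * d_ln_gini
    residual = d_ln_poverty - growth_effect - inequality_effect
    return growth_effect, inequality_effect, residual

# Example: poverty fell by 1.10 log points, income rose 0.50, the Gini fell 0.08.
g, i, r = decompose(d_ln_poverty=-1.10, d_ln_income=0.50, d_ln_gini=-0.08)
print(f"growth effect={g:+.2f}, inequality effect={i:+.2f}, residual={r:+.2f}")
```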
These estimates lead to the conclusion that growth has been the central element behind poverty reduction in LAC. This is not because declines in inequality are ineffective in reducing poverty. Quite the contrary, Table 4 suggests that the elasticity of poverty with respect to inequality is rather large. The reason seems to be that countries in the region have failed to reduce inequality substantially over long periods of time.
Despite the fact that both factors facilitate overcoming the poverty threshold, there is still a nucleus of poor people for whom that progress is elusive. This is because they may be incapable of building the assets (tangible or otherwise) needed to benefit from these trends. What is more, persistent poverty interferes with the proper upbringing of children, generating an intergenerational transmission of poverty. Long-term poverty reduction strategies therefore require that growth, and measures that foster equality without dampening growth much, be complemented with poverty alleviation targeted at the core of the poor population. Unfortunately, adequate identification of this population is quite a challenge, so the following section is dedicated to this issue.
Poverty alleviation programs and targeting
Sustained economic growth does not automatically spill over to the entire population. Capitalizing on growth often requires the correct set of skills and opportunities, which many may lack. Those incapable of capitalizing on growth may even be done a disservice by it, as growth puts pressure on the prices of vital goods, such as food and land in locations with better access to these opportunities.
The persistence of these inequalities has long been a rationale for poverty alleviation programs based on ethical grounds. Given that structural factors impede a permanent improvement in the standard of living, programs such as food stamps, subsidized housing, and income support have typically been implemented to mitigate the effect of poverty. Moreover, poverty status may also inhibit proper investments in health and schooling of children—investments that are socially desirable in the long run. This argument adds an efficiency basis for poverty alleviation, and these concerns have explicitly been taken into account in the design of income-support programs such as conditional cash transfer (CCT) programs, which are popular in LAC and require compliance with specified schooling and health criteria for children.
Successful targeting in any poverty alleviation program depends on the possibility of correctly identifying who is poor and who is not. High-income countries generally rely on employment outcomes to determine eligibility for welfare programs. For example, workers who lose their job qualify for unemployment insurance, workers earning less than a given threshold are entitled to tax credits, and so on. Ownership of assets may also be taken into account to determine eligibility, as is the case with Medicaid in the USA.
However, employment-based criteria have traditionally been considered unsuitable for welfare programs in LAC because of the region's large informal sector. Table 6 shows the informal economy in relation to GDP. Roughly one-third of economic activity in LAC is informal. Table 7 shows informality rates in labor markets. With an unweighted average of 49.9% and a minimum of over 30%, informality is widespread across the region. In this context, programs like unemployment insurance are virtually impossible to monitor given the difficulty of distinguishing the unemployed from informal workers. What is more, although there is a correlation between informality and poverty, and an inverse correlation between informality and income, these relations are not very strong. Informality rates are large even for middle- and high-income families, and income testing alone would generate a large filtration of public resources to the non-poor.
Table 6 Informal economy in 2005 (percent of GDP).
Table 7 Informality rates in labor markets.
Given that employment outcomes are an unreliable targeting mechanism in LAC, alternate mechanisms have become necessary, as will be discussed in the following section.
Targeting in highly informal economies
As discussed earlier, poverty alleviation programs include a broad range of transfers, some in-kind, others in the form of discounts or refunds (i.e., distortion of relative prices), and also some in cash. Additionally, income-support programs have eligibility requirements that aim to reduce filtrations outside the target population. For poverty alleviation programs, the most immediate measure of whether persons should receive benefits is their income or consumption level, which makes means testing a preferred targeting mechanism. However, administrative agencies have limited power to verify the income of potential beneficiaries in largely informal economies, and means testing has been sidelined as a result.
The literature has proposed a number of alternative targeting mechanisms. Conditionality, for example, has made income-support programs more palatable for non-beneficiaries. If people are going to be getting money, the reasoning goes, they might as well be expected to do something in return. Besley and Coate (1992) argue that conditionality works as a targeting mechanism, as those whose time is too valuable to dedicate to compliance will exclude themselves from the program. Conditionality is widely used in LAC, but this design feature probably has more to do with encouraging investments in the human capital of children than with screening the poor from the non-poor.
In a paper that compares the targeting efficiency of different program designs, Alatas et al. (2012) empirically assess whether community-based targeting performs better than proxy means testing and a hybrid mechanism in a field experiment. They find that community-based targeting performs worse than proxy means testing overall, although it performs slightly better at identifying the poorest. However, even though proxy means testing performs better, the authors find that communities where the community-based mechanism is used are more satisfied with the outcome. They also find that a reason for this is that communities have an objective function that is more comprehensive than simply a household consumption level.
At the extreme on the trade-off between inclusion and exclusion errors are universal basic income programs. Hanna and Olken (2018) discuss this format in detail, finding several shortcomings in this design. Typically, one would expect individuals to have a positive marginal tax rate, even at low levels of income, and even if universal basic income generates a negative average tax rate for them. However, this is difficult to put into practice in countries where a relatively small fraction of the population pays taxes, because the rest are either exempt or informal. Furthermore, they argue that eligibility requirements not only have an effect on the trade-off between inclusion and exclusion errors, but also on the size of transfers families end up receiving. There is a point where more lenient eligibility implies that a given mass of resources has to be distributed among more families, generating decreasing marginal effects on welfare.
In LAC, policymakers have relied heavily on proxy means testing as a targeting mechanism, to the point where proxy means testing has become an "industry standard" (De Wachter and Galiani 2006). Following Barr (1998), proxy means testing has a number of advantages compared to means testing: It does not discourage work effort as much, under some circumstances it requires less information, and it provides a stable assessment of a family's quality of life. Moreover, proxy means testing is especially useful in countries with a high degree of informality, where means testing is unreliable.
However, proxy means testing has its detractors. For starters, there is growing concern regarding its actual efficiency. Stampini and Tornarolli (2012) show that the targeting efficiency of CCT programs in LAC has waned as these programs have grown (Table 8). Although targeting errors are inevitable, large errors will negatively affect the programs' poverty reduction effect and may also generate a sense of unfairness, especially if the program relies on a scoring system that is difficult for the average person to understand. Moreover, programs with large errors may be hard to redesign, as leakages to the middle class increase support for the program.
Table 8 Targeting efficiency of conditional cash transfer programs (percent).
The next section develops a simple model to assess the targeting efficiency of different mechanisms in a context where workers can choose to work in the formal or informal sector. Overall, the conclusions are quite intuitive and illustrative of the agenda going forward for research and policy.
A model of targeting efficiency
This section is dedicated to examining the targeting efficiency of several alternative mechanisms in situations where workers have employment opportunities in the formal as well as the informal sectors. The difference is that income is unverifiable in the latter, and therefore policymakers cannot use it in means testing. It is assumed that there is a mass R of public revenue to be allocated to transfers with the goal of minimizing a poverty index given a poverty line of \( \bar{w} \). Alternative programs can be evaluated along several dimensions. The first dimension is targeting efficiency, which looks at whether the program can be designed so as to transfer income to poor workers. The second is the degree to which the program generates economic distortions, measured as the changes in labor decisions induced by the program. As will become clear, distortions in labor decisions reduce the overall effectiveness of a program, even if all beneficiaries are poor.
Both the poverty gap and the poverty gap squared are considered as the relevant indexes. The poverty gap is a very intuitive index, where targeting efficiency of a program simply comes down to how much it can increase the average income of poor families. However, when studying optimal policies, the poverty gap admits infinitely many optimal solutions, making comparisons difficult. For this reason, the analysis will be complemented with the poverty gap squared, which assigns a greater weight to targeting the poorest of all. This is a valid concern for policy design and a realistic feature of social preferences.
It is assumed that there is a unitary mass of workers who are characterized by two random variables: a wage in the formal sector \( w_{f} \) and a wage in the informal sector \( w_{i} \). When discussing proxy means testing, an additional feature will be added to workers, but this is not necessary for now. Workers will choose the sector that grants them the greatest income after transfers, which vary depending on the setup of each transfer program. For analytical convenience, it is assumed that the distributions of wages are uniform and independent:
$$ w_{f} \sim U\left[ {0,\bar{w}_{f} } \right] $$
$$ w_{i} \sim U\left[ {0,\bar{w}_{i} } \right] $$
$$ w_{f} |w_{i} \sim U\left[ {0,\bar{w}_{f} } \right] $$
$$ w_{i} |w_{f} \sim U\left[ {0,\bar{w}_{i} } \right]. $$
Moreover, it is assumed that \( \bar{w}_{f} > \bar{w}_{i} \), so incomes in the formal sector dominate those in the informal sector in a first-order stochastic sense. Although this assumption is largely unnecessary for this paper, it is included because it is a realistic feature of labor markets. Note that absent any transfer program, workers will choose whatever sector generates greater income for them.Footnote 2 Hence, the following expression corresponds to the distribution of wages after the choice of sectors:
$$ F\left( w \right) = \Pr (w_{f} < w \wedge w_{i} < w) $$
$$ F\left( w \right) = \Pr (w_{f} < w)*\Pr (w_{i} < w) $$
$$ F\left( w \right) = w^{2} / \left( {\bar{w}_{f} *\bar{w}_{i} } \right)\;{\text{if}}\;w < \bar{w}_{f} \;{\text{and}}\;w < \bar{w}_{i} , $$
where the second expression holds because the distributions of wages are independent and the last expression arises after replacing the appropriate cumulative density functions. It is assumed that \( \bar{w} < \bar{w}_{f} \) and \( \bar{w} < \bar{w}_{i} \), so \( F\left( w \right) = w^{2} / \left( {\bar{w}_{f} * \bar{w}_{i} } \right) \) is the functional form of the cumulative density function in the space that is relevant for measuring poverty. The poverty gap and poverty gap squared that would result absent any public program are given by:
$$ {\text{PG}} = \mathop \int \limits_{0}^{{\bar{w}}} \left( {\bar{w} - w} \right){\text{d}}F\left( w \right) = \frac{{\bar{w}^{3} }}{{3 \bar{w}_{f} . \bar{w}_{i} }} $$
$$ {\text{PGS}} = \mathop \int \limits_{0}^{{\bar{w}}} \left( {\bar{w} - w} \right)^{2} {\text{d}}F\left( w \right) = \frac{{\bar{w}^{4} }}{{6\bar{w}_{f} . \bar{w}_{i} }}. $$
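These closed forms can be checked numerically. The following sketch simulates the model's assumptions (independent uniform wages, with each worker taking the larger of the two) using hypothetical parameter values and compares the simulated poverty gap and poverty gap squared with the expressions above.

```python
# Monte Carlo check of the closed forms for PG and PGS under the model's assumptions.
import numpy as np

rng = np.random.default_rng(2)
wf_bar, wi_bar, w_bar = 3.0, 2.0, 1.0        # hypothetical parameters, w_bar < both caps
wf = rng.uniform(0, wf_bar, 1_000_000)
wi = rng.uniform(0, wi_bar, 1_000_000)
w = np.maximum(wf, wi)                       # income after the choice of sector

poor = w < w_bar
pg_sim  = np.mean((w_bar - w) * poor)
pgs_sim = np.mean((w_bar - w) ** 2 * poor)
pg_formula  = w_bar**3 / (3 * wf_bar * wi_bar)
pgs_formula = w_bar**4 / (6 * wf_bar * wi_bar)
print(pg_sim, pg_formula)     # the two values should be close
print(pgs_sim, pgs_formula)
```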
Optimal targeting
As a benchmark, it is assumed that the policymaker observes each worker's wage opportunities in both sectors, so that the transfer can be made a function of the wages available in the formal and informal sectors. Let \( t^{*} \left( {w_{f} ,w_{i} } \right) \ge 0 \) be the functional form of the optimal transfer. Naturally, the structure of the optimal function will depend on the welfare measure used. First, the poverty gap is considered as the objective function and the properties of an optimal transfer function are enunciated in the following proposition.Footnote 3
Proposition 1: A function \( t^{*} \left( {w_{f} ,w_{i} } \right) \) minimizes the poverty gap if the following conditions hold:
$$ {\text{If}}\; t^{*} \left( {w_{f} ,w_{i} } \right) > 0, w = \hbox{max} \left\{ {w_{f} ,w_{i} } \right\} + t^{*} \left( {w_{f} ,w_{i} } \right) \le \bar{w} $$
$$ \mathop \int \limits_{0}^{{\bar{w}_{i} }} \mathop \int \limits_{0}^{{\bar{w}_{f} }} t^{*} \left( {w_{f} ,w_{i} } \right){\text{d}}F\left( {w_{f} } \right){\text{d}}F\left( {w_{i} } \right) = R. $$
Proposition 1 indicates that the transfers can only be positive for poor workers who do not cross the poverty line as a result of the transfer. The second condition simply states that all the money available is spent. As it turns out, there are an infinite number of functions that can meet these criteria. However, when the poverty gap squared is considered as the objective function, \( t^{*} \left( . \right) \) has a unique functional form. Again, this result is shown in the form of a proposition.
Proposition 2: The function \( t^{*} \left( {w_{f} ,w_{i} } \right) \) that minimizes the poverty gap squared has the following form:
$$ t^{*} \left( {w_{f} ,w_{i} } \right) = \left\{ {\begin{array}{ll} {\bar{t} - \hbox{max} \left\{ {w_{f} ,w_{i} } \right\}} & \quad {{\text{if}}\; \;\bar{t} > \hbox{max} \left\{ {w_{f} ,w_{i} } \right\} } \\ 0 & \quad {\text{otherwise}} \\ \end{array} } \right., $$
where \( \bar{t} = \sqrt[3]{{3 R*\bar{w}_{f} *\bar{w}_{i} }} \) and \( \bar{t} < \bar{w} \).
In this case, the optimal function raises the income of everyone below \( \bar{t} \) to this level, and \( \bar{t} \) is set to the value at which the budget restriction is exhausted. The reason for this is that the marginal decrease in poverty is greater the lower the income, and therefore it is always optimal to increase transfers to the poorest families. Additionally, note that the function in Proposition 2 satisfies the conditions in Proposition 1, and so this function also minimizes the poverty gap.
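A numerical sketch of the Proposition 2 rule is shown below, with hypothetical parameter values: it raises every poor worker's income to the common level \( \bar{t} \) and checks that the implied spending matches the budget R.

```python
# Sketch of the Proposition 2 transfer: top up all incomes below t_bar to t_bar,
# where t_bar is the level that exactly exhausts the budget R.
import numpy as np

def optimal_transfer(wf, wi, R, wf_bar, wi_bar):
    t_bar = (3 * R * wf_bar * wi_bar) ** (1 / 3)
    w = np.maximum(wf, wi)
    return np.maximum(t_bar - w, 0.0), t_bar

rng = np.random.default_rng(3)
wf_bar, wi_bar, w_bar, R = 3.0, 2.0, 1.0, 0.01     # hypothetical values
wf = rng.uniform(0, wf_bar, 1_000_000)
wi = rng.uniform(0, wi_bar, 1_000_000)
t, t_bar = optimal_transfer(wf, wi, R, wf_bar, wi_bar)

print("t_bar =", t_bar, "below the poverty line:", t_bar < w_bar)
print("spending per worker =", t.mean(), "vs budget R =", R)   # approximately equal
```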
For completeness, the expressions for the poverty gap (PG) and the poverty gap squared (PGS) under the optimal transfer are shown below.
$$ {\text{PG}}^{*} = \frac{{\bar{w}^{3} - \bar{t}^{3} }}{{3 \bar{w}_{f} . \bar{w}_{i} }} $$
$$ {\text{PGS}}^{*} = \frac{{\bar{w}^{4} - \bar{t}^{3} \left( {4\bar{w} - 3\bar{t}} \right)}}{{6 \bar{w}_{f} . \bar{w}_{i} }}. $$
Rather than focusing on the expressions of the poverty gap and the poverty gap squared, greater emphasis will be placed on the form of the transfer functions and the effects they generate. It is understood that unless a transfer function can satisfy the conditions expressed in Propositions 1 and 2, the resulting poverty gap and poverty gap squared will be greater than these values. The study of possible transfer programs will begin with a universal program.
A universal transfer program
A universal program is perhaps the simplest welfare program that is feasible to apply. The mass of workers was normalized to equal one, and each worker gets a transfer R. Thus, the support of the distributions is shifted as follows:
$$ w_{f}^{U} \sim U\left[ {R,\bar{w}_{f} + R} \right] $$
$$ w_{i}^{U} \sim U\left[ {R,\bar{w}_{i} + R} \right]. $$
The cumulative density function for wages is now:
$$ F_{U} \left( w \right) = \left( {w - R} \right)^{2} /\left( {\bar{w}_{f} * \bar{w}_{i} } \right)\;\;{\text{if}}\;\;w > R. $$
Now, the expressions corresponding to the poverty gap and the poverty gap squared are:
$$ {\text{PG}}_{U} = \mathop \int \limits_{R}^{{\bar{w}}} \left( {\bar{w} - w} \right){\text{d}}F_{U} \left( w \right) = \frac{{\left( {\bar{w} - R} \right)^{3} }}{{3 \bar{w}_{f} . \bar{w}_{i} }} $$
$$ {\text{PGS}}_{U} = \mathop \int \limits_{R}^{{\bar{w}}} \left( {\bar{w} - w} \right)^{2} {\text{d}}F_{U} \left( w \right) = \frac{{\left( {\bar{w} - R} \right)^{4} }}{{6 \bar{w}_{f} . \bar{w}_{i} }}. $$
One can see that the effect of the universal program on both poverty indexes is identical to that of a reduction in the poverty line by a magnitude of \( R \). The universal program is very inefficient in terms of targeting, a result that resembles the discussion by Hanna and Olken (2018) on universal basic income with a high minimum non-taxable income. This is because everyone receives the transfer, regardless of whether they are above or below the poverty line. However, the universal program generates no distortions: All workers choose the same sector that they would have chosen absent the program. Although universal programs perform poorly in terms of poverty reduction, the fact that they do not affect labor choices is a positive feature of these types of programs. Given that the deficiency of the universal program is the large number of filtrations, the next section looks at a means testing scheme, where it can be ensured that all beneficiaries are poor.
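Before moving on, the following sketch compares, for the same hypothetical budget R, the poverty gap under the universal transfer and under the Proposition 2 benchmark; it is intended only to illustrate the targeting inefficiency discussed above.

```python
# Sketch: universal transfer versus the Proposition 2 benchmark for the same budget R.
import numpy as np

rng = np.random.default_rng(4)
wf_bar, wi_bar, w_bar, R = 3.0, 2.0, 1.0, 0.01      # hypothetical values
w = np.maximum(rng.uniform(0, wf_bar, 1_000_000), rng.uniform(0, wi_bar, 1_000_000))

def poverty_gap(income, line=w_bar):
    return np.mean(np.maximum(line - income, 0.0))

t_bar = (3 * R * wf_bar * wi_bar) ** (1 / 3)
pg_none      = poverty_gap(w)
pg_universal = poverty_gap(w + R)                    # everyone receives R
pg_optimal   = poverty_gap(np.maximum(w, t_bar))     # poor workers raised to t_bar
print(f"no program: {pg_none:.4f}  universal: {pg_universal:.4f}  optimal: {pg_optimal:.4f}")
```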
Means testing transfer program
Means testing programs rely on direct indicators of the standard of living to determine eligibility. In practice, means testing programs may take into account factors such as a person's assets or rental income. For the purpose of simplicity, it is assumed that the means assessment relies exclusively on the income earned in a formal sector job. If formal income is below the poverty line, the worker is granted income support. Since wages in the informal sector are unverifiable, only formal workers are eligible for the public transfer.
It is assumed that workers in the formal sector are entitled to a transfer that is a function of their reported wage, \( t\left( {w_{f} } \right) \). For purposes of simplicity, it is assumed that all poor workers get a positive transfer that is proportional to the distance from the poverty line:
$$ t\left( {w_{f} } \right) = \left( {\bar{w} - w_{f} } \right)b\;\;{\text{if}}\;\;\bar{w} > w_{f} , $$
where \( b \in \left[ {0,1} \right] \) is a parameter that determines the degree of phasing out of the program. Since only formal workers are eligible for the transfer and workers choose the formal sector if \( w_{f} + t\left( {w_{f} } \right) > w_{i} \), the following expression shows how the cumulative density function changes as a result of the transfer:
$$ F_{\text{MT}} \left( w \right) = \left\{ {\begin{array}{ll} {\Pr (\left( {1 - b} \right) w_{f} + b\bar{w} < w \wedge w_{i} < w)} & \quad {{\text{if}} \;b*\bar{w} < w < \bar{w}} \\ {\Pr (w_{f} < w \wedge w_{i} < w)} & \quad {{\text{if}}\;w \ge \bar{w}} \\ \end{array} } \right. $$
$$ F_{\text{MT}} \left( w \right) = \left\{ {\begin{array}{ll} {\left( {w^{2} - b\bar{w}w} \right)/ \left[ {\left( {1 - b} \right)*\bar{w}_{f} * \bar{w}_{i} } \right] } & \quad {{\text{if}}\;\;b*\bar{w} < w < \bar{w}} \\ {w^{2} / \left( {\bar{w}_{f} * \bar{w}_{i} } \right)} & \quad {{\text{if}}\;\;w \ge \bar{w}} \\ \end{array} } \right.. $$
Additionally, the cost of the program is represented by the following expression, which can be equated to \( R \) to find the (maximum) value of \( b \):
$$ \mathop \int \limits_{0}^{{\bar{w}}} \mathop \int \limits_{0}^{{w_{f} \left( {1 - b} \right) + b\bar{w}}} \frac{{b*\left( {\bar{w} - w_{f} } \right)}}{{\bar{w}_{f} * \bar{w}_{i} }} {\text{d}}w_{i} {\text{d}}w_{f} = \frac{{\bar{w}^{3} *\left( {\frac{b}{6} + \frac{{b^{2} }}{3}} \right)}}{{\bar{w}_{f} * \bar{w}_{i} }}. $$
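As an illustration, the sketch below solves this budget condition for \( b \) in closed form (it is quadratic in \( b \)), simulates the resulting sector choice, and verifies that simulated spending matches a hypothetical budget R; all parameter values are illustrative, and \( b \) is assumed to lie in [0, 1].

```python
# Sketch of the means testing program: pick b so that the cost of transfers equals R.
import numpy as np

rng = np.random.default_rng(5)
wf_bar, wi_bar, w_bar, R = 3.0, 2.0, 1.0, 0.01      # hypothetical values
wf = rng.uniform(0, wf_bar, 1_000_000)
wi = rng.uniform(0, wi_bar, 1_000_000)

# Budget condition: w_bar^3 * (b/6 + b^2/3) / (wf_bar * wi_bar) = R, i.e. 2b^2 + b - 6c = 0.
c = R * wf_bar * wi_bar / w_bar**3
b = (-1 + np.sqrt(1 + 48 * c)) / 4                  # positive root, assumed <= 1

formal_income = np.where(wf < w_bar, (1 - b) * wf + b * w_bar, wf)
choose_formal = formal_income > wi                  # the transfer distorts the sector choice
transfer = np.where(choose_formal & (wf < w_bar), b * (w_bar - wf), 0.0)
income = np.where(choose_formal, formal_income, wi)

print("b =", b, " simulated cost =", transfer.mean(), " budget R =", R)
print("poverty gap =", np.mean(np.maximum(w_bar - income, 0.0)))
```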
The discussion now turns to the intuition behind the welfare effects of this type of policy. By design, only poor workers get the transfers, no worker crosses the poverty line as a result of the transfer, and all money is spent if \( b \) is chosen appropriately. One may be tempted to believe that this type of transfer is optimal in the sense described by Proposition 1. Unfortunately, this is not so. The reason is that many workers sacrifice better employment opportunities in the informal sector to become eligible for the transfer. This is a distortion caused by the welfare program. Figure 6 shows that the after-transfer distribution of income dominates the distribution absent a program in a first-order-stochastic sense, but the pre-transfer distribution is dominated by the distribution absent a program. Thus, the effectiveness of the program in terms of reducing the poverty gap is partly offset by workers forgoing productive jobs in the informal sector.
(Source: illustration prepared by the authors)
Distribution of pre-transfer and post-transfer wages. MT: means testing
In terms of the poverty gap squared, it is noteworthy that because one can choose \( b > 0 \), this transfer mechanism has a clear pro-poor nature. However, wages are not equalized for the poorest and the overall effect is therefore suboptimal unless the budget is sufficient to lift everybody out of poverty. Moreover, even if one allowed for a more flexible functional form, the poverty gap squared under optimal targeting would still be unattainable. Again, the reason for this is that the transfer program channels workers away from more productive jobs in the informal sector.
In spite of this, it is clear that poverty falls as measured by both indexes. The expressions for the poverty gap and the poverty gap squared are
$$ {\text{PG}}_{\text{MT}} = \frac{{\bar{w}^{3} }}{{3* \bar{w}_{f} . \bar{w}_{i} }}*\frac{{1 - \frac{3}{2}*b + \frac{1}{2} *b^{3} }}{1 - b} $$
$$ {\text{PGS}}_{\text{MT}} = \frac{{\bar{w}^{4} }}{{6 \bar{w}_{f} . \bar{w}_{i} }}*\left( {b + 1} \right)\left( {b - 1} \right)^{2} . $$
It is easy to verify that these indexes are smaller than the corresponding ones absent means testing, and poverty is reduced as a result of the program. An interesting question is whether universal programs can reduce poverty more than means testing programs. In terms of the poverty gap, it seems logical that means testing is more effective because it generates no filtration to the non-poor. However, it is technically possible that, if filtration is low (e.g., because practically everyone is poor) or if distortions caused by means testing are large, then a universal program may be more effective at reducing the poverty gap than a means testing program. In terms of the poverty gap squared, this is again possible, but less likely because the design of the transfer is more pro-poor under means testing.
A proxy means testing program
To consider the effect of a proxy means testing program, it is assumed there is a series of subsets \( X = \left\{ {x_{1} , x_{2} , \ldots , x_{N} } \right\} \) that constitute a partition of the space of possible wages. The proportion of members in each group is \( p_{{x_{i} }} \), with \( \mathop \sum \nolimits_{i} p_{{x_{i} }} = 1 \). Figure 7 is a graphical representation of such possible subsets.
Graphic representation of the distribution of wages and characteristics
Let us assume that the government transfers \( t_{x} \) to all members of group \( x \) and that the transfer is only conditional on belonging to the group, may differ between groups, and may be equal to zero. The poverty indexes that arise from the transfer are:
$$ \mathop \sum \limits_{i = 1}^{N} \mathop \smallint \limits_{0}^{{\bar{w}}} v\left( {w + t_{{x_{i} }} } \right) dF\left( {w|x_{i} } \right) p_{{x_{i} }} , $$
where \( v\left( w \right) = \left( {\bar{w} - w} \right) 1\left\{ {\bar{w} > w} \right\} \) for the poverty gap and \( v\left( w \right) = \left( {\bar{w} - w} \right)^{2} 1\left\{ {\bar{w} > w} \right\} \) for the poverty gap squared. Let \( T\left( X \right) \) denote a series of transfers for groups \( \left\{ {x_{1} , x_{2} , \ldots , x_{N} } \right\} \). De Wachter and Galiani (2006) show that the optimal targeting rule corresponds to the following expression:
$$ T^{*} \left( X \right) = \arg \mathop {\hbox{max} }\limits_{T\left( X \right)} \mathop \sum \limits_{{x_{i} }} \left( {\mathop \smallint \limits_{0}^{{\bar{w}}} v\left( w \right) - v\left( {w + t_{{x_{i} }} } \right) dF\left( {w|x_{i} } \right) - \lambda t_{{x_{i} }} } \right) p_{{x_{i} }} , $$
where \( \lambda \) is a parameter that can be scaled so that total spending equals available revenue. Intuitively, this rule says that positive transfers must be assigned to groups for which poverty reduction is larger than \( \lambda t_{{x_{i} }} \).
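A minimal sketch of this allocation rule is shown below: for a given shadow price \( \lambda \), each group receives the transfer that maximizes its poverty reduction net of \( \lambda \) times its cost, and \( \lambda \) is then adjusted so that total spending equals the budget. The groups, their population shares, and their income samples are all hypothetical.

```python
# Sketch of a proxy-means allocation rule in the spirit of the text (illustrative only).
import numpy as np

rng = np.random.default_rng(6)
w_bar, R = 1.0, 0.02
groups = {"x1": rng.uniform(0.0, 0.8, 5000),    # mostly poor
          "x2": rng.uniform(0.3, 2.0, 5000),    # mixed
          "x3": rng.uniform(1.0, 3.0, 5000)}    # mostly non-poor
shares = {"x1": 0.2, "x2": 0.5, "x3": 0.3}

def pgs(w):
    """Poverty gap squared for a sample of incomes."""
    return np.mean(np.maximum(w_bar - w, 0.0) ** 2)

def best_transfer(w, lam, grid=np.linspace(0, 1.0, 201)):
    gains = [pgs(w) - pgs(w + t) - lam * t for t in grid]
    return grid[int(np.argmax(gains))]

def spending(lam):
    return sum(shares[g] * best_transfer(w, lam) for g, w in groups.items())

lo, hi = 1e-6, 10.0                  # bisect on lam so that spending(lam) equals R
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if spending(mid) > R else (lo, mid)
lam = 0.5 * (lo + hi)
for g, w in groups.items():
    print(g, "transfer =", round(best_transfer(w, lam), 3))
```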
Note that the proxy means testing program generates no distortion in the formality decision, as wages are not taken explicitly into account when determining the transfer. Thus, in terms of targeting efficiency, proxy means testing programs must be at least as good as universal programs, as granting every group the same transfer is feasible within this framework. The reason is that \( x \) has been assumed to be strictly exogenous. The implications of potentially endogenous characteristics will be discussed in the following section.
Given a value \( \lambda \), the transfer to a group will fall the greater the share of non-poor members if \( v\left( . \right) \) is the poverty gap, or the higher incomes are in a group if \( v\left( . \right) \) is the poverty gap squared. Figure 8 illustrates a case where members of group \( x^{\prime} \) receive larger transfers than those of group \( x^{\prime\prime} \) in spite of the fact that many of the poorest workers belong to group \( x^{\prime\prime} \). This is because the poorest workers are indistinguishable from the non-poor workers in their group, and the high degree of inclusion errors outweighs the benefits of granting the poorest a greater transfer.
Characteristics and transfers
Overall, the targeting efficiency of proxy means testing depends on whether the space of characteristics \( X \) can separate the poor from the non-poor and identify the poorest, in particular. The following propositions determine conditions under which proxy means testing minimizes the poverty gap and the poverty gap squared:
Proposition 3: Assume there is a subset \( X_{p} = \left\{ {x_{1}^{p} ,x_{2}^{p} , \ldots , x_{m}^{p} } \right\} \subset X \) such that all workers in \( X_{p} \) are poor. Additionally, assume there exists a series of transfers \( T\left( {X_{p} } \right) \) such that \( \hbox{max} \left\{ {w_{f} ,w_{i} } \right\} + t\left( x \right) \le \bar{w} \) for all workers in groups \( x \in X_{p} \) , and \( \mathop \sum \limits_{i = 1}^{m} t\left( {x_{i}^{p} } \right) p_{{x_{i}^{p} }} = R \) . Then, \( T\left( {X_{p} } \right) \) minimizes the poverty gap.
Proposition 4: Let \( \bar{t} \) be the same value found in Proposition 2. Assume there is a series of subspaces \( X_{p} = \left\{ {x_{1}^{p} ,x_{2}^{p} , \ldots , x_{m}^{p} } \right\} \) such that
$$ \begin{aligned} & x_{1}^{p} = \left[ {0, \frac{{\bar{t}}}{m}} \right]*\left[ {0, \frac{{\bar{t}}}{m}} \right] \hfill \\ & x_{2}^{p} = \left[ {0, 2\frac{{\bar{t}}}{m}} \right]*\left[ {0, 2\frac{{\bar{t}}}{m}} \right] - x_{1}^{p} \hfill \\ & \ldots \hfill \\ & x_{m}^{p} = \left[ {0, \bar{t}} \right]*\left[ {0, \bar{t}} \right] - x_{1}^{p} - x_{2}^{p} - \ldots - x_{m - 1}^{p} . \hfill \\ \end{aligned} $$
Additionally, assume a transfer function \( t\left( x \right) = \bar{t} - E\left( {w|x} \right) \) for all \( x \in X_{p} \). As \( m \) approaches infinity, the transfer function approaches the efficient one.
Once again, the conditions set by Proposition 3 seem reasonable. In fact, the greatest requirement is that the space of characteristics identifies a subset of the poor population; it requires neither identifying all the poor people nor separating the poor from the non-poor. However, as argued in Proposition 1, these conditions seem feasible only because the poverty gap is imperfect as a poverty measure. On the other hand, conditions set by Proposition 4 are unrealistic, as these conditions imply being able to infer the wage level of all of the poorest workers.
The combination of multiple programs
Both means testing and proxy means testing programs can leave poor families without support. In this context, it is common for policymakers to combine these types of programs to ensure that more poor families receive at least some coverage. This section considers the effect of combining different programs to cover this gap, starting by considering the basic structure of the means testing program in Fig. 9. Note that the poor families who do not receive income support belong to the informal sector (shown in dark orange), and the goal is therefore to study transfer programs that are conditional on informality.
Poor workers and income support
As has been argued previously, targeting efficiency is maximized if income support can be made a decreasing function of income. However, because incomes in the informal sector are unobservable, this is not feasible. Instead, one first considers combining the means testing program with a lump-sum transfer for all informal workers. As readers can probably guess, this program is likely to be rather inefficient, especially if there are a large number of informal workers who are non-poor. As a consequence, this section also examines the effects of combining the means testing program with a proxy means testing for informal workers.
Because these programs are more general than means testing, they can trivially do as well as means testing in terms of poverty reduction. However, the model predicts that conditioning income support on informality tends to increase the latter. In the context of the model, this is not particularly disturbing, but in reality, the costs associated with informality are likely to be much larger than those shown here.
A means testing program with a transfer to informal workers
This section examines the case of combining a means testing program with lump-sum payments to informal workers. It is again assumed that workers choose a sector; formal workers with \( w_{f} < \bar{w} \) are granted income support equal to \( t_{f} \left( {w_{f} } \right) = b\left( {\bar{w} - w_{f} } \right) \), so that their after-transfer income is \( b\bar{w} + \left( {1 - b} \right)w_{f} \), while workers who choose the informal sector receive a lump sum of \( t_{i} \). Assuming \( t_{i} < b\bar{w} \), Fig. 10 shows how workers allocate themselves into the formal and informal sectors. One can see that, compared to the case where there is no transfer to informal workers (\( t_{i} = 0 \)), the inclusion of the second transfer encourages more workers to choose the informal sector.
Formal and informal workers in a combined program
The total cost of transfers to formal workers is given by:
$$ C_{f} = \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }}\left\{ {\bar{w}^{3} \left[ {\frac{{b - b^{2} }}{6} + \frac{{b^{2} }}{2}} \right] - \frac{{t_{i} b}}{2}\bar{w}^{2} } \right\}. $$
Note that this is a generalization of the cost of the means testing program; a value of \( t_{i} = 0 \) delivers the same expression seen previously. Additionally, the cost of transfers to informal workers is given by:
$$ C_{i} = \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }} \left\{ {t_{i} \bar{w}^{2} \frac{{\left( {1 - b} \right)}}{2} + t_{i}^{2} \bar{w}_{i} + \frac{{t_{i}^{3} }}{2}} \right\}. $$
The appropriate expressions for the poverty gap and the poverty gap squared are given by:
$$ {\text{PG}}_{M} = \frac{1}{{\left( {1 - b} \right)\bar{w}_{i} . \bar{w}_{f} }}\left\{ {\frac{{\bar{w}^{3} }}{3}\left[ {1 - \frac{3}{2}b + \frac{{b^{3} }}{2}} \right] - \bar{w}^{2} \frac{{t_{i} }}{2}\left[ {1 - b\left( {2 - b} \right)} \right]} \right\} $$
$$ {\text{PGS}}_{M} = \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }}\left[ {\frac{{\bar{w}^{4} }}{6}\left( {1 + b} \right)\left( {1 - b} \right)^{2} + \frac{{\bar{w}^{3} }}{6}t_{i} \left( {1 - b} \right)^{2} } \right]. $$
Given that the goal is to minimize the poverty index, one can find an optimal solution by choosing \( t_{i} \) and \( b \) that minimizes these poverty measures subject to the budget constraint:
$$ \mathop {\hbox{min} }\limits_{{t_{i} ,b}} P_{m} $$
$$ {\text{s}}.{\text{t}}.: R \ge \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }}\left\{ {\bar{w}^{3} \left[ {\frac{{b - b^{2} }}{6} + \frac{{b^{2} }}{2}} \right] - \frac{{t_{i} b}}{2}\bar{w}^{2} + t_{i} \bar{w}^{2} \frac{{\left( {1 - b} \right)}}{2} + t_{i}^{2} \bar{w}_{i} + \frac{{t_{i}^{3} }}{2}} \right\}, $$
where \( {\text{P}}_{M} \) is a poverty index, like \( {\text{PG}}_{M} \) or \( {\text{PGS}}_{M} \). The solution to this problem requires equalizing the marginal reduction in poverty per dollar for \( b \) and \( t_{i}\):
$$ \frac{{\partial P_{m} \left( {t_{i}^{*} ,b^{*} } \right)/\partial t_{i} }}{{\partial \left( {C_{f} \left( {t_{i}^{*} ,b^{*} } \right) + C_{i} \left( {t_{i}^{*} ,b^{*} } \right)} \right)/\partial t_{i} }} = \frac{{\partial P_{m} \left( {t_{i}^{*} ,b^{*} } \right)/\partial b}}{{\partial \left( {C_{f} \left( {t_{i}^{*} ,b^{*} } \right) + C_{i} \left( {t_{i}^{*} ,b^{*} } \right)} \right)/\partial b}}. $$
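In practice, this problem can also be solved numerically. The sketch below grid-searches over \( (b, t_{i}) \) pairs that respect a hypothetical budget R and keeps the pair with the lowest simulated poverty gap; it illustrates the logic and is not the paper's solution method.

```python
# Sketch: choose (b, t_i) to minimize the simulated poverty gap subject to the budget.
import numpy as np

rng = np.random.default_rng(7)
wf_bar, wi_bar, w_bar, R = 3.0, 2.0, 1.0, 0.02      # hypothetical values
wf = rng.uniform(0, wf_bar, 100_000)
wi = rng.uniform(0, wi_bar, 100_000)

def simulate(b, t_i):
    """Sector choice, cost per worker, and poverty gap for a given (b, t_i)."""
    formal_net = np.where(wf < w_bar, (1 - b) * wf + b * w_bar, wf)
    informal_net = wi + t_i
    formal = formal_net > informal_net
    transfer = np.where(formal,
                        np.where(wf < w_bar, b * (w_bar - wf), 0.0),  # means-tested part
                        t_i)                                          # lump sum to informal
    income = np.where(formal, formal_net, informal_net)
    return transfer.mean(), np.mean(np.maximum(w_bar - income, 0.0))

best = None
for b in np.linspace(0, 1, 26):
    for t_i in np.linspace(0, 0.2, 21):
        cost, pg = simulate(b, t_i)
        if cost <= R and (best is None or pg < best[0]):
            best = (pg, b, t_i)
print("poverty gap = %.4f at b = %.2f, t_i = %.3f" % best)
```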
There are at least two problems with combining these policies, which are shown in Fig. 10. The first is the possibility that many informal workers are non-poor, and therefore the program will have a large number of filtrations (i.e., leakages). The second is that the transfer to informal workers increases the incentive to join this sector, both for poor and for non-poor workers. As a consequence, transfers to informal workers are an inefficient mechanism to provide income support to poor and informal workers, and given these issues one would expect \( t_{i}^{*} \) to be rather small. These undesirable properties can be mitigated by means of a proxy means testing transfer to informal workers, which is discussed in the following section.
A means testing program with a proxy means transfer to informal workers
In examining the effect of proxy means testing programs, we argued that their overall effectiveness in terms of poverty reduction depends on the empirical relation between observable characteristics and income. When a set of characteristics is a good predictor of low income, a large transfer can be assigned to this group to reduce poverty. In this case, the total effect of proxy means testing as a complement to means testing is more complex because many workers will be eligible for both subsidies and will only be able to choose one. Moreover, the proxy means testing complement will generally induce workers to choose the informal sector relative to the case where income support is provided by means of a means testing program alone.
Because the effect of the proxy means testing complement depends crucially on the relation between the observable characteristics and income, it is hard to generalize results. Instead, we examine the effect of proxy means testing complements by studying three cases of different possible relations between characteristics, income, and formality status.
Case 1: Observable characteristics identify poor informal workers
Figure 11 shows the case where observable characteristics identify workers who under a means testing program alone would be poor and informal. In this case, workers with characteristics \( x^{\prime} \) would receive a subsidy that is conditional on being informal and holding these characteristics. The magnitude of the subsidy will depend on the specific welfare measure used and on how poor workers in \( x' \) are relative to the rest of the poor. Relative to the case of means testing alone, one sees that in this case there is no effect on formality.
Characteristics identify poor informal workers
Case 2: Observable characteristics identify very poor formal workers
Figure 12 shows the case in which observable characteristics identify the poorest families. In this case, it is more efficient to give informal workers with characteristics \( x^{\prime\prime} \) a large subsidy. This is because these characteristics are strong predictors of poverty, and therefore making transfers to this group is very efficient. Moreover, while the means testing transfer has a limited capacity to transfer income to these people (the maximum transfer is \( b\bar{w} \)), policymakers have much more discretion to make transfers by means of proxy means testing. As a result, the expected transfer for this group would be rather large. However, Fig. 12 also shows the undesirable consequences of giving this group a large subsidy: These workers would mainly choose the informal sector in order to qualify for the lump-sum payment.
Characteristics identify very poor formal workers
Case 3: Observable characteristics identify nearly non-poor formal workers
Finally, Fig. 13 shows a situation where the observable characteristics identify workers who are poor but are on the verge of graduating from this status. In this case, it is optimal not to grant workers with characteristics \( x^{\prime\prime\prime} \) any additional transfer. Rather, if negative transfers are feasible, it may even be optimal to reduce the transfers to \( x^{\prime\prime\prime} \) in order to increase the budget available for the remaining poor. Since no proxy means testing transfer is granted to this group, there are no additional effects on the choice between sectors relative to a stand-alone means testing program.
Characteristics identify nearly non-poor formal workers
Welfare programs and the budget restriction of the government
The analysis so far has assumed that the budget available for redistribution was given. This extension to the model shows that raising revenues has several adverse consequences in the presence of a large informal sector. Since taxes can only be borne by formal workers, these workers will tend to switch toward the informal sector as the government raises tax rates, dampening revenues. The government will either have to raise taxes further or make do with the diminished revenue. As a result, both the excess taxation and the revenue shortfall serve as measures of the costs of informality.
As previously, it is assumed that workers choose the sector that generates the largest disposable income. It is assumed that formal workers pay taxes at a uniform rate \( \tau \) and all workers who are poor receive a transfer with the same structure as the former example. Hence, disposable income (utility) for formal workers is:
$$ U_{f} = w_{f} \left( {1 - \tau } \right) + t_{f} \left( {w_{f} } \right), $$
$$ t_{f} \left( {w_{f} } \right) = \left\{ {\begin{array}{ll} {\left[ {\bar{w} - \left( {1 - \tau } \right)w_{f} } \right]b} & \quad {{\text{if}}\; \bar{w} > \left( {1 - \tau } \right)w_{f} } \\ 0 & \quad {{\text{if}} \;\bar{w} \le \left( {1 - \tau } \right)w_{f} } \\ \end{array} } \right.. $$
For informal workers, utility is given by their wage and a transfer for all non-registered workers. Hence, disposable income for informal workers is:
$$ U_{i} = w_{i} + t_{i} . $$
Workers choose the formal sector whenever \( U_{f} > U_{i} \) and choose the informal sector otherwise. Hence, the shares of informal and formal workers are given by:
$$ S_{i} = \frac{1}{2} \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }}\left\{ {\bar{w}^{2} \frac{1 - b}{1 - \tau } + \left( {\bar{w}_{i} + \frac{{\bar{w}}}{1 - \tau }} \right)\left( {\bar{w}_{i} - \bar{w} + t_{i} } \right)} \right\} $$
$$ S_{f} = 1 - S_{i} . $$
The goal is to find the expression for the budget restriction of the public sector. Because both public revenue and spending on the means testing program depend on the distribution of wages in the formal sector, one must first find the expression for the latter. Its corresponding cumulative density function is:
$$ F\left( {w_{f} } \right) = \left\{ {\begin{array}{ll} {\frac{1}{{S_{f} }}\left\{ {\frac{{w_{f} }}{2} \left[ {w_{f} \left( {1 - \tau } \right)\left( {1 - b} \right) + 2\left( {b\bar{w} - t_{i} } \right)} \right]} \right\}} & \quad {{\text{if}} \;\;w_{f} \ge 0 \;{\text{and}}\;w_{f} < \frac{{\bar{w}}}{1 - \tau }} \\ {\frac{1}{{S_{f} }}\left\{ {\phi_{1} + \frac{1}{2}\left( {w_{f} - \frac{{\bar{w}}}{1 - \tau }} \right)\left[ {w\left( {2 - \tau } \right) - 2t_{i} } \right]} \right\}} & \quad {{\text{if}} \;\;w_{f} \ge \frac{{\bar{w}}}{1 - \tau } \;{\text{and}}\;w_{f} < \frac{{\overline{{w_{i} }} }}{1 - \tau } + t_{i} } \\ {\frac{1}{{S_{f} }}\left\{ {\phi_{2} + \frac{{w_{f} }}{{\overline{{w_{f} }} }}} \right\}} & \quad {{\text{if}} \;\;w_{f} \ge \frac{{\overline{{w_{i} }} }}{1 - \tau } + t_{i} \;{\text{and}}\;w_{f} \le \overline{{w_{f} }} } \\ \end{array} ,} \right. $$
$$ \phi_{1} = \frac{{\bar{w}}}{{2\left( {1 - \tau } \right)}} \left[ {\bar{w}\left( {1 - b} \right) + 2\left( {b\bar{w} - t_{i} } \right)} \right] $$
$$ \phi_{2} = \phi_{1} + \frac{1}{2}\left( {\frac{{\bar{w}_{i} - \bar{w}}}{1 - \tau } + t_{i} } \right)\left[ {\left( {\frac{{\bar{w}_{i} }}{1 - \tau } + t_{i} } \right)\left( {2 - \tau } \right) - 2t_{i} } \right]. $$
We are now ready to write the budget restriction of the public sector, starting with the expression for public revenue, bearing in mind that only the formal sector is taxed:
$$ R = \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }} \left\{ {\mathop \smallint \limits_{0}^{{\frac{{\bar{w}}}{1 - \tau }}} \tau w_{f} \left[ {w_{f} \left( {1 - \tau } \right)\left( {1 - b} \right) + b\bar{w} - t_{i} } \right] {\text{d}}w_{f} + \mathop \smallint \limits_{{\frac{{\bar{w}}}{1 - \tau }}}^{{\frac{{\bar{w}_{i} }}{1 - \tau } + t_{i} }} \tau w_{f} \left[ {2w_{f} \left( {2 - \tau } \right) - 2t_{i} - \left( {2 - \tau } \right)\frac{{\bar{w}}}{1 - \tau } } \right]{\text{d}}w_{f} + \tau \frac{1}{2} \bar{w}_{i} \left[ {\bar{w}_{f}^{2} - \left( {\frac{{\bar{w}_{i} }}{1 - \tau } - t_{i} } \right)^{2} } \right]} \right\}. $$
Finally, the spending side of the budget restriction is as follows:
$$ S = \frac{1}{{\bar{w}_{i} . \bar{w}_{f} }}\mathop \int \limits_{0}^{{\frac{{\bar{w}}}{1 - \tau }}} \left[ {\bar{w} - \left( {1 - \tau } \right)w_{f} } \right]b \left[ {w_{f} \left( {1 - \tau } \right)\left( {1 - b} \right) + b\bar{w} - t_{i} } \right] {\text{d}}w_{f} + S_{i} t_{i} . $$
There are two factors to be discussed in this setting. The first is the effect of taxation. From the expression of \( S_{i} \), one can see that informality rises with \( \tau \). This is because workers are prone to choose the informal sector where they avoid taxation. However, this narrows the tax base. Moreover, workers who are switching between sectors at the margin are wealthier as the tax rate increases, and taxation is also less progressive as a result. The endogenous limitations to tax collection contrast with those discussed in Hanna and Olken (2018), whose setup emphasizes that a high minimum non-taxable income prevents tax collection from middle-income workers, but in which tax collection is nevertheless pro-poor.
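Before turning to the second factor, a small simulation of the model above illustrates the first; the parameter values are hypothetical, and the point is only that the informal share rises and the tax base shrinks as \( \tau \) increases.

```python
# Sketch of the model with taxation: higher tax rates push workers into informality.
import numpy as np

rng = np.random.default_rng(8)
wf_bar, wi_bar, w_bar, b, t_i = 3.0, 2.0, 1.0, 0.3, 0.05   # hypothetical values
wf = rng.uniform(0, wf_bar, 500_000)
wi = rng.uniform(0, wi_bar, 500_000)

for tau in (0.0, 0.1, 0.2, 0.3):
    transfer_f = np.where(w_bar > (1 - tau) * wf, b * (w_bar - (1 - tau) * wf), 0.0)
    u_formal = (1 - tau) * wf + transfer_f      # disposable income in the formal sector
    u_informal = wi + t_i                       # disposable income in the informal sector
    formal = u_formal > u_informal
    informality = 1 - formal.mean()
    revenue = np.mean(np.where(formal, tau * wf, 0.0))
    print(f"tau={tau:.1f}  informality={informality:.3f}  revenue per worker={revenue:.3f}")
```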
The second factor is the effect of transfers to informal workers in this context. Given a tax rate, increases in transfers to informal workers both further narrow the tax base and increase the number of non-poor informal workers. As a result, transfers to informal workers negatively affect both sides of the government budget constraint: Revenues are reduced and spending is made less efficient. This contrasts with the effect of transfers to formal workers, which underpin the size of the formal sector and public revenues. In addition, the change in the formality choice of workers due to taxation generates intricate efficiency effects. This is a factor to be taken into account in programs that intend to widen the tax base, starting from a situation with a relatively high minimum untaxed income, as discussed in Hanna and Olken (2018).
Figure 14 schematically shows the differences between the case in which public revenue is given and the case with taxation. The blue dashed line shows the frontier between formal and informal workers under taxation, while the red line shows the same frontier absent taxation. One can see that the blue line is to the left of the red line, indicating that there is more informality. Moreover, the gap between the blue and red lines widens to the right, indicating that even well-off workers choose the informal sector in order to avoid being subject to taxes, implying less progressive taxation. Finally, because more poor workers are now in the informal sector, they are not eligible for means-tested transfers, which are more pro-poor than general transfers to informal workers. As a result, spending is targeted less effectively.
Taxation and formality choice
Proxy means testing programs and taxation
This section discusses targeting by proxy means testing in the context of taxation. First to be discussed is the case where income support is provided through proxy means testing alone. Relative to the case where the revenue was given, more workers choose the informal sector to avoid taxation, reducing funding available for income-support programs. The first effect of taxation is therefore to reduce the overall program size. Additionally, the size of the program also affects how transfers are allocated to characteristics. The lower budget will affect mainly those who are poor but not the poorest of all groups. Figure 15 shows the case of two groups of characteristics, \( x^{\prime} \) and \( x^{\prime\prime} \), where both groups are poor but the former is poorer. The toll of the diminished revenue will mainly affect group \( x^{\prime\prime} \), while it will have a smaller effect on group \( x^{\prime} \).
Taxation and income support to different types of poor groups
Next to be considered is the case in which formal workers are targeted by means testing and informal workers by proxy means testing. The introduction of taxation in this case implies that the benefit of joining the informal sector for workers is twofold. The first benefit is that of avoiding taxation. The second is the possibility of receiving transfers assigned by proxy means testing. However, from the point of view of the policymaker, the first of these is a cost, and the second will also have an opportunity cost if the beneficiary is not poor. For policymakers, introducing taxation implies that the costs of transfers rise, but they do so more steeply for inclusion errors than exclusion errors. As was the case previously, policymakers will be inclined to implement smaller programs, given the higher cost of redistribution.
Program comparison and discussion
Having laid out the basic features of each program, this section compares them. Of the several arrangements discussed, it was found that the means testing program allows for a pro-poor transfer design but channels workers away from productive jobs in the informal sector, which partly offsets the impact of the transfers. Transfers in the proxy means testing program do not generate distortions, but the pro-poor character of the system of transfers is constrained by the existing relation between poverty and observable characteristics.
The extent to which a proxy means testing mechanism is pro-poor is, first and foremost, an empirical question. The evidence presented previously indicated that classification errors were rather frequent, and this should tilt the balance in favor of means testing programs. However, in practice poor families often work in the informal sector, and this motivates policymakers to promote income support outside the formal sector. The model here indicates that the most general effect of income support to informal workers is an increase in the size of this sector. The effectiveness of these complementary programs in reducing poverty depends on the relation between informality and poverty if support is conditional on informality or, once again, on the strength of the correlation between the available observable characteristics and poverty if support is assigned through a proxy means testing design.
Additionally, workers tend to switch to the informal sector as a way to avoid taxation, and governments then require higher tax rates to achieve a desired revenue level compared to a case without an informal sector. Unlike in the discussion presented by Hanna and Olken (2018), the difficulty of phasing out a universal basic income program in this model does not come from a high non-taxable minimum income, but rather from the endogenous sector choice of workers.
If transfers are made to informal workers, more people are encouraged to choose the informal sector to qualify for the subsidy. Unless the relation between informality and poverty is extremely close, the level of filtration is large and the income received by the poor is diluted in the mass of informal workers. This is a general feature of expanding the base of eligibility for social programs (Hanna and Olken 2018), with an additional distortion generated by the switch in sector. A similar logic applies to proxy means testing transfers: Filtration implies that the cost of inclusion errors rises sharply, and policymakers would be more tolerant of exclusion errors. As a consequence, the redistributive capacity of the government is more constrained in these circumstances.
From a long-term perspective, Latin America and the Caribbean have benefited from a substantial decrease in the incidence and depth of poverty. However, when compared to other developing regions, LAC has performed in line with its peers and experienced declines in poverty that are far from extraordinary. The main driver behind this fall in poverty has been economic growth: not because reducing inequality is ineffective, but rather because it has not been consistently achieved over the period studied. Still, growth is a channel that operates over the long run, and even when it facilitates improving the living standards of many poor people, spillovers to all are not straightforward. For those for whom improvements remain elusive, poverty alleviation in the form of income-support programs is necessary.
To assess the effectiveness of income-support programs, this paper built a formal model where workers have employment opportunities in both the formal and informal sectors, and examined how means testing programs compare to proxy means testing programs. Because means testing programs are more flexible, a pro-poor design is possible, while the pro-poor character of proxy means testing programs is constrained by the relation between observable characteristics and poverty. However, implementation of means testing programs may be problematic if there is underreporting of income at the intensive margin. Meanwhile, the relation between observable characteristics and poverty is not stable over time, which may weaken the targeting efficiency of proxy means testing programs, and these programs are more distortive in a context where raising revenue for redistribution is necessary.
While proxy means testing is the industry standard for developing economies, means testing is much more widespread in the developed world. The ability of a government to observe incomes is a leading factor in determining the targeting design. It is possible that if developing countries continue on the growth path that they have experienced the last several decades, they will eventually acquire the capabilities that are necessary for means testing programs. If this is so, the transition from proxy means testing to means testing will be a natural one.
On the other hand, a more skeptical view would suspect that the existing structure of transfer programs affects the aforementioned process in some way, possibly slowing it. Additionally, one would also wonder whether a safety net designed for informal workers will hold them back in informality, and whether this retention has implications for human capital acquisition and long-run growth.
A central feature of this dichotomy is the effectiveness of policies in shifting the formality decisions of workers. In particular, are subsidies and tax incentives sufficiently large to pull a significant share of the labor force into the formal sector? Moreover, will complementary reforms be necessary to ease the burden of formality on firms, and should special consideration be given to smaller firms, or to those that are typically believed to have difficulty complying with these requirements? Finally, there is the question of whether proxy means testing and means testing programs will have to coexist during the hypothetical transition, and, if so, what this coexistence should be like.
Data used in this paper are available online at http://iresearch.worldbank.org/PovcalNet/povOnDemand.aspx.
Since we use measures of growth that are adjusted for differences in the purchasing power of the dollar, the line effect is nil by default. The alternative would be to factor out changes in purchasing power from the growth effect based on exchange rate movements and differential inflation, which generate changes in the value of lines expressed in real domestic currency. However, we disregard this calculation as the common practice is to express welfare in a purchasing-power-parity-adjusted measure.
Clearly, there are other factors that affect workers' utility and, hence, their decisions as to whether to work in the formal or informal sector (Galiani and Weinschelbaum 2012). They are omitted here solely for the sake of simplicity.
Both Propositions 1 and 2 hold when R is small enough that there is still poverty after the transfers.
Alatas V, Banerjee A, Hanna R, Olken B, Tobias J (2012) Targeting the poor: evidence from a field experiment in Indonesia. Am Econ Rev 102(4):1206–1240
Barr N (1998) The economics of the welfare state. Stanford University Press, Stanford
Besley T, Coate S (1992) Workfare versus welfare: incentive arguments for work requirements in poverty-alleviation programs. Am Econ Rev 82(1):249–261
Bourguignon F (2003) The growth elasticity of poverty reduction: explaining heterogeneity across countries and time periods. In: Eicher T, Turnovsky S (eds) Inequality and growth, theory and policy implications. The MIT Press, Cambridge
De Wachter S, Galiani S (2006) Optimal income support targeting. Int Tax Public Financ 13(6):661–684
Foster J, Greer J, Thorbecke E (1984) A class of decomposable poverty measures. Econometrica 52:761–766
Galiani S, Weinschelbaum F (2012) Modeling informality formally: households and firms. Econ Inq 50:821–838
Gasparini L, Sosa Escudero W, Cicowiez M (2014) Pobreza y desigualdad en América Latina: conceptos, herramientas y aplicaciones. CEDLAS Working Paper 0171. Universidad Nacional de La Plata
Hanna R, Olken BA (2018) Universal basic incomes versus targeted transfers: anti-poverty programs in developing countries. J Econ Perspect 32(4):201–226
Kraay A (2006) When is growth pro-poor? Evidence from a panel of countries. J Dev Econ 80(1):198–227
Londoño JL, Székely M (2000) Persistent poverty and excess inequality: Latin America, 1970–1995. J Appl Econ 3(1):93–134
PovCalNet (2017) Select countries aggregation. World Bank, Washington, DC. http://iresearch.worldbank.org/PovcalNet/povOnDemand.aspx. Accessed 25 Oct 2017
Ravallion M (1997) Can high-inequality developing countries escape absolute poverty? Econ Lett 56(1):51–57
Ravallion M, Chen S (1997) What can new survey data tell us about recent changes in distribution and poverty? World Bank Econ Rev 11(2):357–382
Ravallion M, Chen S, Sangraula P (2009) Dollar a day revisited. World Bank Econ Rev 23(2):163–184
Schneider F, Buehn A, Montenegro C (2010) New estimates for the shadow economies all over the world. Int Econ J 24(4):443–461
SEDLAC (2016) Socio-economic database for Latin America and the Caribbean (CEDLAS). La Plata. http://www.cedlas.econo.unlp.edu.ar/wp/en/estadisticas/sedlac/. Accessed 10 Jan 2016
Sen A (1976) Poverty: an ordinal approach to measurement. Econometrica 44(2):219–231
Stampini M, Tornarolli L (2012) The growth of conditional cash transfers in Latin America and the Caribbean: did they go too far? IZA Policy Paper No. 49. Institute for the Study of Labor, Bonn
We would like to thank an anonymous referee for reviewing this paper.
Generous funding for this Project was provided by the Inter-American Development Bank.
Department of Economics, University of California, Berkeley, Berkeley, USA
Martin Caruso Bloeck
Department of Economics, University of Maryland, College Park, USA
Sebastian Galiani
Department of Economics, Universidad Torcuato Di Tella, Buenos Aires, Argentina
Federico Weinschelbaum
The authors contributed equally to the writing and reviewing of this paper. All authors read and approved the final manuscript.
Correspondence to Martin Caruso Bloeck.
See Tables 9, 10, 11, 12, 13, and 14.
Table 9 Estimation of Eq. (1) with median consumption.
Table 10 Decomposition of changes in the headcount ratio (US$1.9 purchasing power parity line).
Table 11 Decomposition of changes in the poverty gap (US$1.9 purchasing power parity line).
Table 13 Decomposition of changes in the poverty gap squared (US$1.9 purchasing power parity line).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Caruso Bloeck, M., Galiani, S. & Weinschelbaum, F. Poverty alleviation strategies under informality: evidence for Latin America. Lat Am Econ Rev 28, 14 (2019) doi:10.1186/s40503-019-0074-4
Accepted: 21 October 2019
Means testing
Proxy means testing
It is known to be finitely generated, but the question of finite presentation is still open. Given that the mapping class group is well known to be finitely presented, a natural question to ask is then: under what conditions, if any, does every finitely generated subgroup of a given group admit a finite presentation? A group satisfying this property is said to be coherent. Of course, if the mapping class group could be shown to be coherent, then the question of finite presentation of the Torelli group would be trivially solved. All I have done is say: if every finitely generated subgroup of the mapping class group is finitely presented, then the Torelli group is finitely presented, which isn't very helpful. But the question of coherence could provide a roundabout way of solving this problem. More importantly, however, it serves as motivation for the question: under what conditions can a group be said to be coherent?
Using this we can embed in with generators , , and . And similarly we can embed in with . So we have that these linear groups are also not coherent.
Surface groups are 1-relator groups. Moreover, any finitely generated subgroup of a surface group is either a free group or another surface group, and thus is finitely presented, so surface groups are coherent. This is a nice easy case, as surfaces have a powerful underlying topological restriction, namely the classification of surfaces.
In fact it is also known that the fundamental groups of 3-manifolds are coherent, and similarly it is the underlying topological properties that allow this to be shown, specifically the Scott core theorem.
What if we let X be the wedge of two circles, then take Y to be the direct product of two copies of X? If we thicken Y slightly we obtain a 3-manifold. However, the fundamental group of Y is unaffected by this thickening, remaining F2 x F2, which as you have already said is incoherent?
Don't know if you are still reading this, but the mapping class group contains copies of $F_2 \times F_2$ once the genus is at least $2$, so it is not coherent.
Hey – thanks for the reply! I know how to get an sitting in there (take Dehn twists about curves with and use ping pong lemma). I guess then with genus 2 or higher we have enough room to do this twice disjointly?
The structure of subgroup of mapping class groups generated by two Dehn twists. Proc. Japan Acad. Ser. A Math. Sci. 72 (1996), no. 10, 240–241.
\begin{document}
\title{Arithmetic of abelian varieties in Artin-Schreier extensions} \author{Rachel Pries} \address{Department of Mathematics\\ Colorado State University\\ Fort Collins, CO 80523} \email{[email protected]} \author{Douglas Ulmer} \address{School of Mathematics \\ Georgia Institute of Technology\\ Atlanta, GA 30332} \email{[email protected]}
\begin{abstract}
We study abelian varieties defined over function fields of curves in
positive characteristic $p$, focusing on their arithmetic in the
system of Artin-Schreier extensions. First, we prove that the
$L$-function of such an abelian variety vanishes to high order at
the center point of its functional equation under a parity condition
on the conductor. Second, we develop an Artin-Schreier variant of a
construction of Berger. This yields a new class of Jacobians over
function fields for which the Birch and Swinnerton-Dyer conjecture
holds. Third, we give a formula for the rank of the Mordell-Weil
groups of these Jacobians in terms of the geometry of their fibers
of bad reduction and homomorphisms between Jacobians of auxiliary
Artin-Schreier curves. We illustrate these theorems by computing
the rank for explicit examples of Jacobians of arbitrary dimension
$g$, exhibiting Jacobians with bounded rank and others with
unbounded rank in the tower of Artin-Schreier extensions. Finally,
we compute the Mordell-Weil lattices of an isotrivial elliptic curve
and a family of non-isotrivial elliptic curves. The latter exhibits
an exotic phenomenon whereby the angles between lattice vectors are
related to point counts on elliptic curves over finite fields. Our
methods also yield new results about supersingular factors of
Jacobians of Artin-Schreier curves.\\
MSC2000: Primary 11G10, 11G40, 14G05;
Secondary 11G05, 11G30, 14H25, 14J20, 14K15 \end{abstract}
\maketitle
\section{Introduction} Let $k$ be a finite field of characteristic $p>0$ and suppose $F=k(\mathcal{C})$ is the function field of a smooth, projective curve $\mathcal{C}$ over $k$. Given an abelian variety $J$ defined over $F$, the Birch and Swinnerton-Dyer (BSD) conjecture relates the $L$-function of $J$ and the Mordell-Weil group $J(F)$. In particular, it states that the algebraic rank of the Mordell-Weil group equals the analytic rank, the order of vanishing of the $L$-function at $s=1$. If the BSD conjecture is true for $J$ over $F$ and if $K/F$ is a finite extension, it is not known in general whether the BSD conjecture is true for $J$ over $K$.
In \cite{Ulmer07b}, the second author studied the behavior of a more general class of $L$-functions over geometrically abelian extensions $K/F$. Specifically, for certain self-dual symplectic or orthogonal representations $\rho:G_F\to\mathrm{GL}_n({\overline{\mathbb{Q}}_\ell})$ of weight $w$, there is a factorization of $L(\rho, K, T)$, with factors indexed by orbits of the character group of $\gal(K/F)$ under Frobenius, and a criterion for a factor to have a zero at the center point of its functional equation. Under a parity condition on the conductor of $\rho$, this implies that the order of vanishing of $L(\rho, K_d, T)$
at $T=|k|^{-(w+1)/2}$ is unbounded among Kummer extensions of the form $K_d=k(t^{1/d})$ of $F=k(t)$, see \cite[Theorem 4.7]{Ulmer07b}.
The system of rational Kummer extensions of function fields also plays a key role in the papers \cite{Berger08, Ulmer13a, Ulmer14a}. For example, \cite{Berger08} proves that the BSD conjecture holds for Jacobians $J_X/K_d$ when $X$ is in the class of curves defined by equations of the form $f(x)-tg(y)$ over $F=k(t)$ and $K_d$ is in the Kummer tower of fields $K_d=k(t^{1/d})$. Also, \cite{Ulmer13a}, gives a formula for the rank of $J_X$ over $K_d$ which depends on homomorphisms between the Jacobians of auxiliary Kummer curves.
In this paper, we study these phenomena for the system of Artin-Schreier extensions of function fields of positive characteristic. The main results are analogous to those described above: an unboundedness of analytic ranks result (Corollary~\ref{cor:analytic-ranks}), a proof of the BSD conjecture for Jacobians of a new class of curves $X$ over an Artin-Schreier tower of fields (Corollary~\ref{cor:BSD}), and a formula for the rank of the Mordell-Weil group of $J_X$ over Artin-Schreier extensions which depends on homomorphisms between the Jacobians of auxiliary Artin-Schreier curves (Theorem~\ref{thm:ranks}).
There are several reasons why the Artin-Schreier variants of these theorems are quite compelling. First, the curves which can be studied using the Artin-Schreier variant include those defined by an equation of the form $f(x)-g(y)-t$ over $F=k(t)$. The geometry of these curves is comparatively easy to analyze, allowing us to apply the main results in broad generality. For example, Proposition~\ref{prop:higher-g-unbounded} illustrates that the hyperelliptic curve $x^2=g(y)+t$ with $g(y) \in k[y]$ of degree $N$ satisfies the BSD conjecture, with unbounded rank in the tower of Artin-Schreier extensions of $k(t)$, under the very mild conditions that $p \nmid N$ and the finite critical values of $g(y)$ are distinct. Second, the structure of endomorphism rings of Jacobians of Artin-Schreier curves is sometimes well-understood. This allows us to compute the exact value of the rank of the Mordell-Weil group in several natural cases. Finally, some apparently unusual lattices appear as Mordell-Weil lattices of elliptic curves covered by our analysis. We illustrate this for the family of elliptic curves $Y^2=X(X+16b^2)(X+t^2)$ (where $b$ is a parameter in a finite field) in Subsection~\ref{Slattice}.
\begin{figure}
\caption{Leitfaden}
\label{f:leitfaden}
\end{figure}
Here is an outline of the paper. In Section~\ref{Sanalytic}, we consider certain elementary abelian extensions $K$ of $F=k(\mathcal{C})$ with $\deg(K/F)=q$ a power of $p$, and we study the $L$-functions $L(\rho, K, T)$ of certain self-dual representations $\rho:G_F\to\mathrm{GL}_n({\overline{\mathbb{Q}}_\ell})$. Using results about Artin conductors of twists of $\rho$ by characters of $\gal(K/F)$, we prove a lower bound for the order of vanishing of $L(\rho, K, T)$ at the center point of the functional equation. In the case of an abelian variety $J$ over $F$ whose conductor satisfies a parity condition, this yields a lower bound for the order of vanishing of $L(J/K, s)$ at $s=1$, Corollary~\ref{cor:analytic-ranks}.
In Section~\ref{s:Berger}, we prove that a new class of surfaces has the DPCT property introduced by Berger. More precisely, we prove that a surface associated to the curve $X$ given by an equation of the form $f(x)-g(y)-t$ over $F=k(t)$ is dominated by a product of curves and furthermore, this DPC property is preserved under pullback to the field $K_q:=F(u)/(u^q-u-t)$ for all powers $q$ of $p$. It follows that the BSD conjecture holds for the Jacobians of this class of curves $X$ over this Artin-Schreier tower of fields, Corollary~\ref{cor:BSD}. In Section~\ref{s:exs1}, we combine the results from Sections~\ref{Sanalytic} and~\ref{s:Berger} to give a broad array of examples of Jacobians over rational function fields $k(u)$ which satisfy the BSD conjecture and have large Mordell-Weil rank, see, e.g., Proposition~\ref{prop:higher-g-unbounded}.
Section~\ref{s:rank} contains a formula for the rank of $J_X$ over $K_q$ in terms of the geometry of the fibers of bad reduction of $X$ and the rank of the group of homomorphisms between the Jacobians of auxiliary curves. (The auxiliary curves are $\mathcal{C}_q$ and $\mathcal{D}_q$, defined by equations $z^q-z=f(x)$ and $w^q-w=g(y)$, and we consider homomorphisms which commute with the $\mathbb{F}_q$-Galois actions on $\mathcal{C}_q$ and $\mathcal{D}_q$, see Theorem~\ref{thm:ranks}.) Section~\ref{s:exactrank} contains three applications of the rank formula: first, by considering cases where $\mathcal{C}_q$ is ordinary and $\mathcal{D}_q$ has $p$-rank $0$, we construct examples of curves $X$ over $F$ with arbitrary genus for which the rank of $J_X$ over $K_q$ is bounded independent of $q$; second, looking at the case when $f=g$, we construct examples of curves $X$ over $F$ with arbitrary genus for which the rank of $X$ over $K_q$ goes to infinity with $q$; third, we combine the lower bound for the analytic rank and the rank formula to deduce the existence of supersingular factors of Jacobians of Artin-Schreier curves.
Finally, in Section~\ref{s:explicit}, we construct explicit points and compute heights for two examples. When $q \equiv 2 \bmod 3$, the isotrivial elliptic curve $E$ defined by $Y^2+tY=X^3$ has rank
$2(q-1)$ over $K_q=\mathbb{F}_{q^2}(u)$ where $u^q-u=t$. We construct a subgroup of finite index in the Mordell-Weil group, and we conjecture that the index is $|\sha(E/K_q)|^{1/2}$ (which is known to be finite in this case). For $b\not\in\{0,1,-1\}$, the non-isotrivial curve $Y^2=X(X+16b^2)(X+t^2)$ has rank $q-1$ over $K_q$, and again we construct a subgroup of finite index in the Mordell-Weil group. In this case, the lattice generated by $q-1$ explicit points is in a certain sense a perturbation of the lattice $A_{q-1}^*$ where the fluctuations are determined by point counts on another family of elliptic curves. This rather exotic situation has, to our knowledge, not appeared in print before.
An appendix, Section~\ref{s:AS-covers}, collects all the results we need about ramification, Newton polygons, and endomorphism algebras of Artin-Schreier curves.
Figure~\ref{f:leitfaden} shows dependencies between the sections. A dashed line indicates a very mild dependency which can be ignored to first approximation, whereas a solid line indicates a more significant dependency. We have omitted dependencies to the appendix; these exist in Sections~2, 6, and 7, and at one place in Section~3.
We would like to thank Chris Hall for useful data about $L$-functions of Artin-Schreier extensions and stimulating conversations about the topics in this paper. Thanks also to the referee for a critical reading of the paper. The first author was partially supported by NSF grant DMS-11-01712.
\section{Analytic ranks} \label{Sanalytic} In this section, we use results from \cite{Ulmer07b} to show that analytic ranks are often large in Artin-Schreier extensions. The main result is Corollary~\ref{cor:analytic-ranks}.
\subsection{Notation}\label{ss:notation}
Let $p$ be a prime number, let ${\mathbb{F}_p}$ be the field of $p$ elements, and let $k$ be a finite field of characteristic $p$. We write $r=|k|$ for the cardinality of $k$. Let $F=k(\mathcal{C})$ be the function field of a smooth, projective, irreducible curve $\mathcal{C}$ over $k$. Let $F^{sep}$ be a separable closure of $F$. We write ${\overline{\mathbb{F}}_p}$ for the algebraic closure of ${\mathbb{F}_p}$ in $F^{sep}$. Let $G_F=\gal(F^{sep}/F)$ be the Galois group of $F$.
Let $\ell\neq p$ be a prime number and let ${\overline{\mathbb{Q}}_\ell}$ be an algebraic closure of the $\ell$-adic numbers. Fix a representation $\rho:G_F\to\mathrm{GL}_n({\overline{\mathbb{Q}}_\ell})$ satisfying the hypotheses of \cite[\S4.2]{Ulmer07b}. In particular, $\rho$ is assumed to be self-dual of some weight $w$ and sign $-\epsilon$. When $\epsilon=1$ we say $\rho$ is symplectic and when $\epsilon=-1$ we say $\rho$ is orthogonal.
The representation $\rho$ gives rise to an $L$-function $L(\rho,F,T)$ given by an Euler product as in \cite[\S4.3]{Ulmer07b}. We write
$L(\rho,K,T)$ for $L(\rho|_{G_K},K,T)$ for any finite extension $K$ of $F$ contained in $F^{sep}$.
In \cite[\S4]{Ulmer07b}, we studied the order of vanishing of $L(\rho,K,T)/L(\rho,F,T)$ at the center point $T= r^{-(w+1)/2}$ when $K/F$ is a Kummer extension. Here we want to study the analogous order when $K/F$ is an Artin-Schreier extension.
\subsection{Extensions}\label{ss:extensions} Let $q$ be a power of $p$ and write $\wp_q(z)$ for the polynomial $z^q-z$. We will consider field extensions $K$ of $F$ of the form \begin{equation}\label{eq:K} K=K_{\wp_q,f}=F[z]/(\wp_q(z)-f) \end{equation} for $f\in F \setminus k$. We assume throughout that ${\overline{\mathbb{F}}_p} K$ is a field, a condition which is guaranteed when $f$ has a pole of order prime to $p$ at some place of $F$. As described in Lemma~\ref{l:ASgalois}, under this assumption, the degree $q$ field extension $K/F$ is ``geometrically abelian'' in the sense that ${\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F$ is Galois with abelian Galois group. In fact, setting $H=\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)$, we have a canonical isomorphism $H\cong \mathbb{F}_q$, where ${\mathbb{F}_q}$ is the subfield of $F^{sep}$ of cardinality $q$. The element $\alpha\in\mathbb{F}_q$ corresponds to the automorphism of ${\overline{\mathbb{F}}_p} K$ which sends the class of $z$ in \eqref{eq:K} to $z+\alpha$.
It will be convenient to consider a more general class of geometrically abelian extensions whose Galois groups are elementary abelian $p$-groups. Suppose that $A$ is a monic, separable, additive polynomial, in other words a polynomial of the form $$A(z)=z^{p^\nu}+\sum_{i=0}^{\nu-1}a_iz^{p^i}$$ with $a_i\in{\overline{\mathbb{F}}_p}$ and $a_0\neq0$. Recall from Subsection~\ref{ss:additive} that there is a bijection between such polynomials $A$ and subgroups of ${\overline{\mathbb{F}}_p}$ which associates to $A$ the group $H_A$ of its roots. The field generated by the coefficients of $A$ is the field of $p^\mu$ elements, where $p^\mu$ is the smallest power of $p$ such that $H_A$ is stable under the $p^\mu$-power Frobenius.
Suppose $f\in F$ has a pole of order prime to $p$ at some place of $F$ and that $A$ has coefficients in $k$. Then we have a field extension $$K=K_{A,f}=F[z]/(A(z)-f).$$ It is geometrically Galois over $F$, with $\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)$ canonically isomorphic to $H_A$.
By Lemma~\ref{Ladditive}, if $A$ has roots in ${\mathbb{F}_q}$, then there exists another monic, separable, additive polynomial $B$ such that the composition $A\circ B$ equals $\wp_q$. Furthermore, this implies that $K_{A,f}$ is a subfield of $K_{\wp_q,f}$ and that $\gal({\overline{\mathbb{F}}_p} K_{A,f}/{\overline{\mathbb{F}}_p} F)$ is a quotient of ${\mathbb{F}_q}$, namely $B({\mathbb{F}_q})$. In particular, for many questions, we may reduce to the case where $K_{A,f}$ is the Artin-Schreier extension $K_{\wp_q,f}$.
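For instance (a one-line verification added in editing, consistent with the composition recorded in Subsection~\ref{ss:chars} below): with $A(z)=z^{r^\nu}+z$ and $B=\wp_{r^\nu}$, one has $$A(B(z))=(z^{r^\nu}-z)^{r^\nu}+(z^{r^\nu}-z)=z^{r^{2\nu}}-z=\wp_q(z),\qquad q=r^{2\nu}.$$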
\subsection{Characters}\label{ss:chars} Let $K=K_{\wp_q,f}$ be an Artin-Schreier extension of $F$ as in Subsection~\ref{ss:extensions}, and let $H=\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)\cong{\mathbb{F}_q}$. Fix once and for all a non-trivial additive character $\psi_0:{\mathbb{F}_p}\to{\overline{\mathbb{Q}}_\ell}^\times$. Let $\hat H=\Hom(H,{\overline{\mathbb{Q}}_\ell}^\times)$ be the group of ${\overline{\mathbb{Q}}_\ell}$-valued characters of $H$. Then we have an identification $\hat H\cong\mathbb{F}_q$ under which $\beta\in\mathbb{F}_q$ corresponds to the character $\chi_\beta:H\to{\overline{\mathbb{Q}}_\ell}^\times$, $\alpha\mapsto\psi_0(\tr_{\mathbb{F}_q/{\mathbb{F}_p}}(\alpha\beta))$.
Next we consider actions of $G_k=\gal({\overline{k}}/k)$ on $H$ and $\hat H$. To define them, consider the natural projection $G_F\to G_k$, and let $\Phi$ be any lift of the (arithmetic) generator of $G_k$, namely the $r$-power Frobenius. Using this lift, $G_k$ acts on $H=\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)$ on the left by conjugation, and it is easy to see that under the identification $H\cong \mathbb{F}_q$, $\Phi$ acts on $\mathbb{F}_q$ via the $r$-th power Frobenius.
We also have an action of $G_k$ on $\hat H$ on the right by precomposition: $(\chi_\beta)^{\Phi}(\alpha)=\chi_\beta(\Phi(\alpha))=\chi_\beta(\alpha^r)$. Since $$\tr_{\mathbb{F}_q/{\mathbb{F}_p}}(\alpha^r\beta)=\tr_{\mathbb{F}_q/{\mathbb{F}_p}}(\alpha\beta^{r^{-1}})$$ we see that $(\chi_\beta)^{\Phi}=\chi_{\beta^{r^{-1}}}$.
If $A$ is a monic, separable, additive polynomial with coefficients in $k$ and group of roots $H_A$, then the character group of $H_A$ is naturally a subgroup of $\hat H$, and it is stable under the $r$-power Frobenius. More precisely, by Lemma~\ref{Ladditive}(2), $H_A$ is the quotient $B({\mathbb{F}_q})$ of ${\mathbb{F}_q}$, and so its character group is identified with $(\ker B)^\perp=(\im A)^\perp$ where the orthogonal complements are taken with respect to the trace pairing $(x,y)\mapsto\tr_{{\mathbb{F}_q}/{\mathbb{F}_p}}(xy)$.
As seen in Example~\ref{ex:A}, if $r$ is a power of an odd prime $p$ and $A(z)=z^{r^\nu}+z$, then the group $H_A$ of roots of $A$ generates ${\mathbb{F}_q}$ where $q=r^{2\nu}$. In this case, $A \circ B =\wp_q$ when $B=\wp_{r^\nu}$. If $f\in F$ has a pole of order prime to $p$ at some place of $F$, then the field extension $K_{A,f}$ is a subextension of $K_{\wp_q,f}$ and its character group is identified with $(\ker B)^\perp=H_A$.
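The orbit structure on $H_A$ lends itself to a direct computation. The following Python sketch (an editorial addition, not part of the original text) enumerates the non-zero roots of $A(z)=z^{r^\nu}+z$ in $\mathbb{F}_q^\times$, $q=r^{2\nu}$, through their exponents with respect to a fixed generator of $\mathbb{F}_q^\times$: such a root satisfies $z^{r^\nu-1}=-1$, so its exponent $k$ satisfies $k(r^\nu-1)\equiv (q-1)/2 \pmod{q-1}$, and the $r$-power Frobenius acts on exponents by $k\mapsto rk \bmod (q-1)$. For a few small odd $r$ and $\nu$, the script confirms that there are $r^\nu-1$ non-zero roots, that every Frobenius orbit has size at most $2\nu$, and hence that there are at least $(r^\nu-1)/(2\nu)$ non-trivial orbits, as used in Lemma~\ref{lemma:factorization} below.
\begin{verbatim}
# Editorial sketch: Frobenius orbits on the non-zero roots of
# A(z) = z^(r^nu) + z in F_q, q = r^(2 nu), with r an odd prime power.
# Roots are represented by exponents relative to a generator of F_q^*.
def orbit_data(r, nu):
    q = r ** (2 * nu)
    n = q - 1
    # exponents k with k*(r^nu - 1) = (q - 1)/2  (mod q - 1)
    roots = [k for k in range(n) if (k * (r ** nu - 1)) % n == n // 2]
    seen, orbits = set(), []
    for k in roots:
        if k in seen:
            continue
        orbit, j = [], k
        while j not in seen:
            seen.add(j)
            orbit.append(j)
            j = (r * j) % n   # action of the r-power Frobenius
        orbits.append(orbit)
    return roots, orbits

for r, nu in [(3, 1), (3, 2), (5, 1), (7, 1)]:
    roots, orbits = orbit_data(r, nu)
    assert len(roots) == r ** nu - 1
    assert all(len(o) <= 2 * nu for o in orbits)
    print(r, nu, len(orbits), (r ** nu - 1) // (2 * nu))
\end{verbatim}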
\subsection{Ramification and conductor} We fix a place $v$ of $F$ and consider a decomposition subgroup $G_v$ of $G=G_F$ at the place $v$ and its inertia subgroup $I_v$.
Recall from \cite[Chap.~IV]{SerreLF} that the upper numbering of ramification groups is compatible with passing to a quotient, and so defines a filtration on the inertia group $I_v$, which we denote by $I_v^t$ for real numbers $t\ge0$. By the usual convention, we set $I_v^t=I_v$ for $-1< t\le0$.
Let $\rho:G_F\to\mathrm{GL}_n({\overline{\mathbb{Q}}_\ell})$ be a Galois representation as above, acting on $V={\overline{\mathbb{Q}}_\ell}^n$. We denote the local exponent at a place $v$ of the conductor of $\rho$ by $f_v(\rho)$. We refer to \cite{Serre70} for the definition.
Now let $\chi:G_F\to{\overline{\mathbb{Q}}_\ell}^\times$ be a finite order character. We say ``$\chi$ is more deeply ramified than $\rho$ at $v$'' if there exists a non-negative real number $t$ such that $\rho(I_v^t)=\{id\}$ and $\chi(I_v^t)\neq\{id\}$. In other words, $\chi$ is non-trivial further into the ramification filtration than $\rho$ is. Let $t_0$ be the largest number such that $\chi$ is non-trivial on $I_v^{t_0}$ and recall that $f_v(\chi)=1+t_0$ \cite[VI, \S2, Proposition 5]{SerreLF}.
\begin{lemma}\label{lemma:ramification}
If $\chi$ is more deeply ramified than $\rho$ at $v$, then $$f_v(\rho\otimes\chi)=\deg(\rho)f_v(\chi).$$ \end{lemma}
\begin{proof}
This is an easy exercise and presumably well-known to experts. It
is asserted in \cite[Lemma~9.2(3)]{DokchitsersG}, and a detailed
argument is given in \cite{UlmerC}. \end{proof}
A particularly useful case of the lemma occurs when $\rho$ is tamely ramified and $\chi$ is wildly ramified, e.g., when $\chi$ is an Artin-Schreier character.
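As a concrete instance (an illustration added in editing): if $\rho$ is attached to an elliptic curve with multiplicative reduction at $v$, then $\rho$ is tamely ramified there, so $\rho(I_v^t)=\{id\}$ for all $t>0$ and $f_v(\rho)=1$, while a non-trivial character $\chi$ of an Artin-Schreier extension $K_{\wp_q,f}/F$, at a place $v$ where $f$ has a pole of order $d$ prime to $p$, has its unique break at $t_0=d$, so $f_v(\chi)=d+1$ and the lemma gives $f_v(\rho\otimes\chi)=\deg(\rho)f_v(\chi)=2(d+1)$.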
\subsection{Factoring $L(\rho,K,T)$} Fix a monic, separable, additive polynomial $A$ with coefficients in $k$ and a function $f\in F$ such that $f$ has a pole of order prime to $p$ at some place of $F$. Let $K=K_{A,f}$ be the corresponding extension, whose geometric Galois group $\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)$ is canonically identified with the group $H=H_A$ of roots of $A$. Let ${\mathbb{F}_q}$ be the subfield of $F^{sep}$ generated by $H_A$. Recall the Galois representation $\rho$ fixed above. In this section, we record a factorization of the $L$-function $L(\rho,K,T)$.
In Subsection~\ref{ss:chars} above, we identified the character group of $H$ with a subgroup of ${\mathbb{F}_q}$ which is stable under the $r$-power Frobenius. As in \cite[\S3]{Ulmer07b}, we write $o\subset\hat H\subset\mathbb{F}_q$ for an orbit of the action of $\Fr_r$. Note that the cardinality of the orbit $o$ through $\beta\in{\mathbb{F}_q}$ is equal to the degree of the field extension $k(\beta)/k$, and is therefore at most $2\nu$.
As in \cite[\S4.4]{Ulmer07b}, we have a factorization $$L(\rho,K,T)=\prod_{o\subset\hat H}L(\rho\otimes\sigma_o,F,T)$$ and a criterion for the factor $L(\rho\otimes\sigma_o,F,T)$ to have a zero at $T=\epsilon r^{-(w+1)/2}$ (or more generally to be divisible by a certain polynomial).
To unwind that criterion, we need to consider self-dual orbits. More precisely, note that the inverse of $\chi_\beta$ is $(\chi_\beta)^{-1}=\chi_{-\beta}$. Thus an orbit $o$ is self-dual in the sense of \cite[\S3.4]{Ulmer07b} if and only if there exists a positive integer $\nu$ such that $\beta^{r^\nu}=-\beta$ for all $\beta\in o$. The trivial orbit $o=\{0\}$ is of course self-dual in this sense. To ensure that there are many other self-dual orbits, we may assume $r$ is odd and take $A(z)=z^{r^{\nu}}+z$ for some positive integer $\nu$. Then if $\beta$ is a non-zero root of $A$, the orbit through $\beta$ is self-dual. Since the size of this orbit is at most $2\nu$, we see that there are at least $(r^\nu-1)/(2\nu)$ non-trivial self-dual orbits in this case.
We also note that if $\beta\neq0$, then the order of the character $\chi_\beta$ is $p$ and since we are assuming $r$, and thus $p$, is odd, we have that $\chi_\beta$ has order $>2$. Summarizing, we have the following.
\begin{lemma}\label{lemma:factorization}
Let $k$ be a finite field of cardinality $r$ and characteristic $p
>2$. Suppose $A(z)=z^{r^\nu}+z$. Suppose $f\in F$ has a pole of
order prime to $p$, and let $K=K_{A,f}$. Let $\rho$ be a
representation of $G_F$ as in Subsection~\ref{ss:notation}. Then we
have a factorization $$L(\rho,K,T)=\prod_{o\subset\hat H}L(\rho\otimes\sigma_o,F,T)$$ where the product is over the orbits of the $r$-power Frobenius on the roots of $A$. Aside from the orbit $o=\{0\}$, there are at least $(r^\nu-1)/2\nu$ orbits, each of which is self-dual, has cardinality at most $2\nu$, and consists of characters of order $p>2$. \end{lemma}
\subsection{Parity conditions}\label{ss:parity} According to \cite[Thm.~4.5]{Ulmer07b}, $L(\rho\otimes\sigma_o,F,T)$ vanishes at $T=r^{-(w+1)/2}$ if $\rho$ is symplectic of weight $w$, $o$ is a self-dual orbit, and if the degree of $\cond(\rho\otimes\chi_\beta)$ is odd for one, and therefore all, $\beta\in o$. Thus to obtain a large order of vanishing, we should arrange matters so that $\rho\otimes\chi_\beta$ satisfies the conductor parity condition for many orbits $o$. This is not hard to do using Lemma~\ref{lemma:ramification}.
Indeed, let $S$ be the set of places where $\chi_\beta$ is ramified, and suppose that $\chi_\beta$ is more deeply ramified than $\rho$ at each $v\in S$. Suppose also that $\sum_{v\not\in S}f_v(\rho)\deg(v)$ is odd. Then using Lemma~\ref{lemma:ramification} we have $$\deg\cond(\rho\otimes\chi_\beta)= \sum_{v\in S}\deg(\rho)f_v(\chi_\beta)\deg(v)+ \sum_{v\not\in
S}f_v(\rho)\deg(v).$$ Since $\rho$ is symplectic, it has even degree, and so our assumptions imply that $\deg\cond(\rho\otimes\chi_\beta)$ is odd.
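To make the bookkeeping concrete (an illustrative computation added in editing): take $F=k(t)$ and $f=t$, so that $S=\{\infty\}$ and every non-trivial $\chi_\beta$ has $f_\infty(\chi_\beta)=1+1=2$. If $\rho$ is the representation attached to an elliptic curve $E/F$ (so $\deg(\rho)=2$) with good reduction at $t=\infty$, then $$\deg\cond(\rho\otimes\chi_\beta)=2\cdot 2\cdot 1+\deg\cond(E),$$ and the parity condition holds exactly when the conductor of $E$ has odd degree.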
\subsection{High ranks} Putting everything together, we get results guaranteeing large analytic ranks in Artin-Schreier extensions:
\begin{thm}\label{thm:analytic-ranks}
Let $k$ be a finite field of cardinality $r$ and characteristic $p
>2$. Let $\nu \in {\mathbb N}$ and let $k'$ be the field of
$q=r^{2\nu}$ elements. Let $F=k(\mathcal{C})$ and
$\rho:G_F\to\mathrm{GL}_n({\overline{\mathbb{Q}}_\ell})$ be as in Subsection~\ref{ss:notation}.
Assume that $\rho$ is symplectically self-dual of weight $w$.
Choose $f\in F$ with at least one pole of order prime to $p$.
Suppose that either \textup{(1)} $K=K_{A,f}$ where $A(z)=z^{r^\nu}
+z$, or \textup{(2)} $K=K_{\wp_q, f}$ where $\wp_q(z)=z^q-z$ as in
Subsection~\ref{ss:extensions}.
Let $S$ be the set of places of $F$ where $K/F$ is ramified and suppose that $\rho$ is at worst tamely ramified at each place $v\in S$.
Suppose also that $\sum_{v\not\in S}f_v(\rho)\deg(v)$ is odd. Then $$\ord_{s=(w+1)/2}\frac{L(\rho,K,s)}{L(\rho,F,s)}\ge(r^\nu-1)/(2\nu)$$ and $$\ord_{s=(w+1)/2}\frac{L(\rho,k'K,s)}{L(\rho,k'F,s)}\ge(r^\nu-1).$$ \end{thm}
\begin{proof}
For Case (1), the first inequality is an easy consequence of the
preceding subsections and \cite[Thm.~4.5]{Ulmer07b}. Indeed, by
Lemma~\ref{lemma:factorization}, we have a factorization $$L(\rho,K,T)=\prod_{o\subset\hat H}L(\rho\otimes\sigma_o,F,T)$$ where the product is over the orbits of the $r$-power Frobenius on the roots of $A$. The factor on the right corresponding to the orbit $o=\{0\}$ is just $L(\rho,F,T)$, and by the lemma, all the other orbits are self-dual and consist of characters of order $>2$. The hypotheses on the ramification of $\rho$ allow us to apply Lemma~\ref{lemma:ramification} to conclude that $\deg\cond(\rho\otimes\chi_\beta)$ is odd for all roots $\beta\neq0$ of $A$. Thus \cite[Thm.~4.5]{Ulmer07b} implies that each of the factors $L(\rho\otimes\sigma_o,F,T)$ is divisible by
$1-(r^{(w+1)/2}T)^{|o|}$, and in particular, has a zero at $T=r^{-(w+1)/2}$. Since there are at least $(r^\nu-1)/(2\nu)$ non-trivial orbits, we obtain the desired lower bound.
Over any extension $k'$ of $k$ of degree divisible by $2\nu$, we have a further factorization $$L(\rho\otimes\sigma_o,k'F,T)= \prod_{\beta\in o}L(\rho\otimes\chi_\beta,k'F,T)$$ and each factor $L(\rho\otimes\chi_\beta,k'F,T)$ is divisible by
$(1-|k'|^{(w+1)/2}T)$ and thus vanishes at $s=(w+1)/2$. This establishes the second lower bound in Case (1).
The lower bounds for Case (2) are an immediate consequence of those for Case (1) since $K_{A,f}$ is a subextension of $K_{\wp_q,f}$ by Example~\ref{ex:A}. \end{proof}
\begin{rem} If $F={\mathbb{F}_p}(t)$ and $f=t$, then the Artin-Schreier
extension given by $u^q-u=t$ is again a rational function field.
Thus starting with a suitable $\rho$ and taking a large degree
Artin-Schreier extension, or by taking multiple extensions, we
obtain another proof of unbounded analytic ranks over the fixed
ground field ${\mathbb{F}_p}(u)$. \end{rem}
As an illustration, we specialize Theorem~\ref{thm:analytic-ranks} to the case where $\rho$ is given by the action of $G_F$ on the Tate module of an abelian variety over $F$.
\begin{cor}\label{cor:analytic-ranks}
Let $k$ be a finite field of cardinality $r$ and characteristic $p
>2$. Let $\nu \in {\mathbb N}$ and let $k'$ be the field of
$q=r^{2\nu}$ elements. Suppose $J$ is an abelian variety over a
function field $F=k(\mathcal{C})$ as in Subsection~\ref{ss:notation}.
Choose $f\in F$ with at least one pole of order prime to $p$.
Suppose that either \textup{(1)} $K=K_{A,f}$ where $A(z)=z^{r^\nu}
+z$, or \textup{(2)} $K=K_{\wp_q, f}$ where $\wp_q(z)=z^q-z$ as in
Subsection~\ref{ss:extensions}. Let $S$ be the set of places of $F$
where $K/F$ is ramified. Suppose that $J$ is at worst tamely
ramified at all places in $S$ and that the degree of the part of the
conductor of $J$ away from $S$ is odd. Then $$\ord_{s=1}L(J/K,s)\ge(r^\nu-1)/(2\nu)$$ and $$\ord_{s=1}L(J/k'K,s)\ge(r^\nu-1).$$ \end{cor}
\subsection{Orthogonal $\rho$ and supersingularity}
Consider the set-up of Theorem~\ref{thm:analytic-ranks}, except that we assume that $\rho$ is orthogonally self-dual instead of symplectically self-dual, and we replace the parity condition there with the assumption that $$\deg(\rho)\sum_{v\in S}(-\ord_v(f)+1)\deg(v)+\sum_{v\not\in
S}f_v(\rho)\deg(v)$$
is odd. Then \cite[Thm.~4.5]{Ulmer07b} implies that if $o$ is an orbit with $o\neq\{0\}$, then $L(\rho\otimes\sigma_o,F,T)$ is divisible by $1+(r^{(w+1)/2}T)^{|o|}$. In particular, over a large enough finite extension $k'$ of $k$, at least $r^\nu-1$ of the inverse roots of the $L$-function $L(\rho,K,T)/L(\rho,F,T)$ are equal to
$|k'|^{(w+1)/2}$.
We apply this result to the case when $\rho$ is the trivial representation to conclude that the Jacobians of certain Artin-Schreier curves have many copies of a supersingular elliptic curve as isogeny factors. This implies that the slope 1/2 occurs with high multiplicity in their Newton polygons as defined in Subsection~\ref{ss:slopes}. However, as explained in Subsection~\ref{ss:slopesandss}, the occurrence of slope 1/2 in the Newton polygon of an abelian variety usually does not give any information about whether the abelian variety has a supersingular elliptic curve as an isogeny factor. This gives the motivation for this result. More precisely:
\begin{prop}\label{prop:ss-factors}
With the notation of Corollary~\ref{cor:analytic-ranks}, write $$\dvsr_\infty(f)=\sum_{i=1}^m a_iP_i$$ where the $P_i$ are distinct ${\overline{k}}$-valued points of $\mathcal{C}$. Assume that $p\nmid a_i$ for all $i$ and that $\sum_{i=1}^m(a_i+1)$ is odd. Let $\mathcal{J}$ (resp.\ $\mathcal{J}_{A,f}$, $\mathcal{J}_{\wp_q, f}$) be the Jacobian of $\mathcal{C}$ (resp.\ the cover $\mathcal{C}_{A,f}$ of $\mathcal{C}$ defined by $A(z)=f$, the cover $\mathcal{C}_{\wp_q,f}$ of $\mathcal{C}$ defined by $\wp_q(z)=f$). Then up to isogeny over ${\overline{k}}$, the abelian varieties $\mathcal{J}_{A,f}/\mathcal{J}$ and $\mathcal{J}_{\wp_q, f}/\mathcal{J}$ each contain at least $(r^\nu-1)/2$ copies of a supersingular elliptic curve. \end{prop}
\begin{proof} We give only a brief sketch, since this result plays a minor role in the rest of the paper. An argument parallel to that in the proof of Theorem~\ref{thm:analytic-ranks} shows that the numerator of the zeta function of $\mathcal{C}_{A,f}$ divided by that of $\mathcal{C}$ is divisible by $$\left(1+r^\nu T^{2\nu}\right)^{(r^\nu-1)/(2\nu)}.$$
Thus over a large extension $k'$ of $k$, at least $r^\nu-1$ of the inverse roots of the zeta function are equal to $|k'|^{1/2}$. Honda-Tate theory then shows that the Jacobian has a supersingular elliptic curve as an isogeny factor with multiplicity at least $(r^\nu-1)/2$. \end{proof}
We will see in Section~\ref{s:AS-covers} that the lower bound of Proposition~\ref{prop:ss-factors} is not always sharp.
\subsection{The case $p=2$} The discussion of the preceding subsections does not apply when $p=2$ since in that case all characters of $H$ have order 2. To get high ranks when $p=2$, we can use the variant of \cite[Thm.~4.5]{Ulmer07b} suggested in \cite[4.6]{Ulmer07b}. In this variant, instead of symmetric or skew-symmetric matrices, we have orthogonal matrices, and zeroes are forced because $1$ is always an eigenvalue of an orthogonal matrix of odd size, and $\pm1$ are always eigenvalues of an orthogonal matrix of even size and determinant $-1$. The details are somewhat involved and tangential to the main concerns of this paper, so we will not include them here.
\subsection{Artin-Schreier-Witt extensions} The argument leading to Theorem~\ref{thm:analytic-ranks} generalizes easily to the situation where we replace Artin-Schreier extensions with Artin-Schreier-Witt extensions. This generalization is relevant even if $p=2$. We sketch very roughly the main points.
Let $W_n(F)$ be the ring of Witt vectors of length $n$ with coefficients in $F$. We choose ${\bf f}\in W_n(F)$ and we always assume that its first component $f_1$ is such that $x^q-x-f_1$ is irreducible in $F[x]$ and so defines an extension of $F$ of degree $q$. Then adjoining to $F$ the solutions (in $W_n(F^{sep})$) of the equation $\Fr_q({\bf x})-{\bf x}={\bf f}$ yields a field extension of $F$ which is geometrically Galois with group $W_n({\mathbb{F}_q})$. The character group of this Galois group can be identified with $W_n({\mathbb{F}_q})$ and we have an action of $G_k$ (i.e., the $r$-power Frobenius where
$r=|k|$) on the characters of this group.
Choose a positive integer $\nu$ and consider the situation above where $q=r^{2\nu}$. We claim that there are $r^{n\nu}$ solutions in $W_n({\mathbb{F}_q})$ to the equation $\Fr_{r^\nu}({\bf x})+{\bf x}=0$. For $p>2$, this is clear---just take Witt vectors whose entries satisfy $x^{r^\nu}+x=0$. For $p=2$, the entries of $-{\bf x}$ are messy functions of those of ${\bf x}$, so we give a different argument. Namely, let us proceed by induction on $n$. For $n=1$, ${\bf
x}_1=(1)$ is a solution. Suppose that ${\bf
x}_{n-1}=(a_1,\dots,a_{n-1})$ satisfies $\Fr_{r^\nu}({\bf x})+{\bf
x}=0$. Then we have $$\Fr_{r^\nu}(a_1,\dots,a_{n-1},0)+(a_1,\dots,a_{n-1},0)=(0,\dots,0,b_n)$$ and it is easy to see that $b_n$ lies in the field of $r^{\nu}$ elements. We can thus solve the equation $a_n^{r^\nu}+a_n=b_n$, and then ${\bf
x}_n=(a_1,\dots,a_n)$ solves $\Fr_{r^\nu}({\bf x}_n)+{\bf x}_n=0$. With one solution which is a unit in $W_n({\mathbb{F}_q})$ in hand, we remark that any multiple of our solution by an element of $W_n(\mathbb{F}_{r^\nu})$ is another solution, so we have $r^{n\nu}$ solutions in all.
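To see the induction in the smallest non-trivial case (a worked check added in editing): take $p=2$ and $n=2$, where the Witt sum is $(a_1,a_2)+(b_1,b_2)=(a_1+b_1,\,a_2+b_2+a_1b_1)$. Starting from $a_1=1$, $$\Fr_{r^\nu}(1,0)+(1,0)=(1+1,\;0+0+1\cdot 1)=(0,1),$$ so $b_2=1\in\mathbb{F}_{r^\nu}$. Choosing $a_2\in{\mathbb{F}_q}$ with $a_2^{r^\nu}+a_2=1$, which is possible because $x\mapsto x^{r^\nu}+x$ maps $\mathbb{F}_{r^{2\nu}}$ onto $\mathbb{F}_{r^\nu}$ with kernel $\mathbb{F}_{r^\nu}$, we indeed get $$\Fr_{r^\nu}(1,a_2)+(1,a_2)=(0,\;a_2^{r^\nu}+a_2+1)=(0,0).$$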
Next we note that the self-dual orbits $o\subset W_n({\mathbb{F}_q})$ (i.e., those orbits stable under ${\bf x}\mapsto -{\bf x}$) are exactly the orbits whose elements satisfy $\Fr_{r^\nu}({\bf x})+{\bf x}=0$. These orbits are of size at most $2\nu$. If $p>2$, all but the orbit $o=\{0\}$ consist of characters of order $>2$, whereas if $p=2$, all but $p^\nu$ of the orbits consist of characters of order $>2$. Thus taking $p>2$, or $p=2$ and $n>1$, we have a plentiful supply of orbits which are self-dual and consist of characters of order $>2$.
The last ingredient needed to ensure a high order of vanishing for the $L$-function is a conductor parity condition. This can be handled in a manner quite parallel to the cases considered in Subsection~\ref{ss:parity}. Namely, we choose ${\bf f}\in W_n(F)$ so that at places where $\rho$ and characters $\chi$ are ramified, $\chi$ should be so more deeply, and the remaining part of the conductor of $\rho$ should have odd degree. Then $\rho\otimes\chi$ will have conductor of odd degree.
\section{Surfaces dominated by a product of curves in Artin-Schreier
towers}\label{s:Berger}
In this section, we extend a construction of Berger to another class of surfaces, following \cite[\S\S 4-6]{Ulmer13a}.
\subsection{Construction of the surfaces}\label{ss:data} Let $k$ be a field with $\ch(k)=p$ and let $K=k(t)$. Suppose $\mathcal{C}$ and $\mathcal{D}$ are smooth projective irreducible curves over $k$. Suppose $f: \mathcal{C} \to \mathbb{P}^1$ and $g: \mathcal{D} \to \mathbb{P}^1$ are non-constant separable rational functions.
Write the polar divisors of $f$ and $g$ as: $$\dvsr_\infty(f)=\sum_{i=1}^{m} a_i P_i\quad\text{and}\quad \dvsr_\infty(g)=\sum_{j=1}^{n} b_j Q_j$$ where the $P_i$ and the $Q_j$ are distinct ${\overline{k}}$-valued points of $\mathcal{C}$ and $\mathcal{D}$. Let $$M=\sum_{i=1}^{m} a_i\quad\text{and}\quad N=\sum_{j=1}^{n} b_j.$$ We make the following {\it standing assumption\/}: \begin{equation}\label{eq:hyp} p \nmid a_i\text{ for }1 \leq i \leq m\quad\text{and}\quad p \nmid b_j\text{ for }1 \leq j \leq n. \end{equation}
We use the notation $\mathbb{P}^1_{k, t}$ to denote the projective line over $k$ with a chosen parameter $t$. Define a rational map $\psi_1: \mathcal{C} \times_k \mathcal{D} {\dashrightarrow} \mathbb{P}^1_{k,t}$ by the formula $t=f(x)-g(y)$ or more precisely $$\psi_1(x, y)= \begin{cases} [f(x)-g(y):1] & \text{if $x \not \in \{P_i\}$ and $y \not \in \{Q_j\}$}\\ [1:0] & \text{if $x \in \{P_i\}$ and $y\not\in\{Q_j\}$}\\ [1:0] & \text{if $x \not\in \{P_i\}$ and $y\in\{Q_j\}$}. \end{cases}$$ The map $\psi_1$ is undefined at each of the points in the set $${\mathbb B}=\{(P_i, Q_j) \mid 1 \leq i \leq m, 1 \leq j \leq n\}.$$ Let $U=\mathcal{C} \times_k \mathcal{D} - {\mathbb B}$ and note that the restriction
$\psi_1|_U:U \to \mathbb{P}^1_{k,t}$ is a morphism. We call the points in ${\mathbb B}$ ``base points'' because they are the base points of the pencil of divisors on $\mathcal{C} \times_k \mathcal{D}$ defined by $\psi_1$. Namely, for each closed point $v \in \mathbb{P}^1_{k,t}$, let $\overline{\psi_1^{-1}(v)}$ denote the Zariski closure in $\mathcal{C}
\times_k \mathcal{D}$ of $(\psi_1|_U)^{-1}(v)$. The points in $\mathbb B$ lie in each member of this family of divisors.
We note that the fiber of $\psi_1$ over $v=\infty$ is a union of horizontal and vertical divisors: $$\displaystyle \overline{\psi_1^{-1}(\infty)}=\left(\cup_{i=1}^m \{a_iP_i\} \times
\mathcal{D}\right) \cup \left(\cup_{j=1}^n \mathcal{C} \times \{b_jQ_j\}\right).$$ In particular, the complement of this divisor in $\mathcal{C}\times\mathcal{D}$ is again a product of (open) curves. This is the underlying geometric reason why the open sets considered in Proposition~\ref{prop:DPC} below are dominated by products of curves, and ultimately why we are able to deduce the Tate and BSD conjectures in Theorem~\ref{thm:Berger} below.
Suppose $\phi_1:\mathcal{X} \to \mathcal{C} \times_k \mathcal{D}$ is a blow-up such that the composition $\pi_1=\psi_1 \circ \phi_1: \mathcal{X} \to \mathcal{C} \times_k \mathcal{D} {\dashrightarrow} \mathbb{P}^1_{k,t}$ is a generically smooth morphism. The statement of Theorem~\ref{thm:Berger} below is independent of the choice of $\phi_1$. In Proposition~\ref{prop:blowupgenus}, we will construct a specific blow-up $\phi_1$ in order to compute the genus of $X$ in terms of the orders of the poles of $f$ and $g$. We will use this construction later in Section~\ref{s:rank} to find a formula for the rank of the Mordell-Weil group of the Jacobian of $X$.
Let $X \to \spec(K)$ be the generic fiber of $\pi_1$ so that $X$ is a smooth curve over $K=k(t)$. In Theorem~\ref{thm:Berger}, we show that $\mathcal{X}$ is dominated by a product of curves and $X$ is irreducible over $\overline{k}K \simeq \overline{k}(t)$, thus proving the Tate conjecture for $\mathcal{X}$ and the BSD conjecture for the Jacobian of $X$ when $k$ is a finite field.
More generally, we prove analogous results for the entire system of rational Artin-Schreier extensions of $k[t]$. Let $q$ be a power of $p$ and set $\wp_q(u)=u^q-u$. We write $\mathcal{Y}_q=\mathbb{P}^1_{k,u}$ and we define a covering $\mathcal{Y}_q\to\mathbb{P}^1_{k,t}$ by setting $t=\wp_q(u)$. We write $K_q$ for the function field of $\mathcal{Y}_q$, so that $K_q\cong k(u)$ and $K_q/k(t)$ is an extension of degree $q$. When the ground field $k$ contains ${\mathbb{F}_q}$, then $K_q/k(t)$ is an ${\mathbb{F}_q}$-Galois extension.
Consider the base change: $$\begin{array}{cccc} \mathcal{S}_q:=& \mathcal{Y}_q \times_{\mathbb{P}^1_{k,t}} \mathcal{X} & \rightarrow & \mathcal{X}\\ & \downarrow& & \downarrow\\ & \mathcal{Y}_q & \longrightarrow & \mathbb{P}^1_{k,t}. \end{array}$$ Because both $\mathcal{Y}_q$ and $\mathcal{X}$ have critical points over $\infty$, the fiber product $\mathcal{S}_q$ will usually not be smooth over $k$, or even normal. Let $\phi_q: \mathcal{X}_q \to \mathcal{S}_q$ be a blow-up of the normalization of $\mathcal{S}_q$ such that $\mathcal{X}_q$ is smooth over $k$. The statement of Theorem~\ref{thm:Berger} is independent of the choice of $\phi_q$. Let $\pi_q:\mathcal{X}_q \to \mathcal{Y}_q$ be the composition and let $X_q \to \spec(K_q)$ be its generic fiber. Note that $X_q \cong X \times_{\spec{K}} \spec(K_q)$.
\begin{thm} \label{thm:Berger}
Given data $k$, $\mathcal{C}$, $\mathcal{D}$, $f$, $g$, and $q$ as above, consider
the fibered surface $\pi_q:\mathcal{X}_q \to \mathcal{Y}_q$ and the curve $X_q/K_q$
constructed as above. Then \begin{enumerate} \item $\mathcal{X}_q$ is dominated by a product of curves; \item $X_q$ is irreducible and remains irreducible over $\overline{k}K_q \cong \overline{k}(u)$; \item If $k$ is finite, the Tate conjecture holds for $\mathcal{X}_q$ and the
BSD conjecture holds for the Jacobian of $X_q$. \end{enumerate} These results also hold for $\mathcal{X}$ and $X$. \end{thm}
The Tate conjecture mentioned in part (3) of Theorem~\ref{thm:Berger} refers to Tate's second conjecture, $\rk \NS(\mathcal{X}_q)= - \ord_{s=1} \zeta(\mathcal{X}_q, s)$, stated in \cite{Tate65}. The BSD conjecture mentioned in part (3) of Theorem~\ref{thm:Berger} and in Corollary~\ref{cor:BSD} refers both to the basic BSD conjecture, $\rk(J_{X_q}(K_q))= \ord_{s=1} L(J_{X_q}/K_q,s)$, and to the refined BSD conjecture relating the leading coefficient of the $L$-function to other arithmetic invariants; see \cite{Tate66b}. See also \cite[6.1.1, 6.2.3, and 6.2.5]{Ulmer14b} for further discussion of these conjectures.
We now introduce some notation useful for proving Theorem~\ref{thm:Berger}. Let $\mathcal{C}_q$ be the smooth projective irreducible curve covering $\mathcal{C}$ defined by $\wp_q(z)=f$ and let $\mathcal{D}_q$ be the smooth, projective irreducible curve covering $\mathcal{D}$ defined by $\wp_q(w)=g$. The morphisms $\mathcal{C}_q\to\mathcal{C}$ and $\mathcal{D}_q\to\mathcal{D}$ are geometric $\mathbb{F}_q$-Galois covers, i.e., after extending the ground field to $\overline{k}$, these covers are Galois and there is a canonical identification of the Galois group with $\mathbb{F}_q$.
Let $\mathcal{C}^\circ\subset\mathcal{C}$ and $\mathcal{C}_q^\circ \subset \mathcal{C}_q$ be the complements of the points above the poles of $f$. Similarly, let $\mathcal{D}^\circ\subset\mathcal{D}$ and $\mathcal{D}_q^\circ \subset \mathcal{D}_q$ be the complements of the points above the poles of $g$. Then $\mathcal{C}_q^\circ \to \mathcal{C}^\circ$ and $\mathcal{D}_q^\circ \to \mathcal{D}^\circ$ are \'etale geometric $\mathbb{F}_q$-Galois covers. Let $\mathcal{X}^\circ=\mathcal{C}^\circ\times\mathcal{D}^\circ$, and let $\mathcal{X}_q^\circ \subset \mathcal{X}_q$ be the complement of $\pi_q^{-1}(\infty_{\mathcal{Y}_q})$.
\begin{prop} \label{prop:DPC} For each power $q$ of $p$,
there is a canonical isomorphism $$\mathcal{X}_q^\circ \cong (\mathcal{C}_q^\circ \times_k \mathcal{D}_q^\circ)/\mathbb{F}_q$$ where $\mathbb{F}_q$ acts diagonally. \end{prop}
\begin{proof}
By definition, $\mathcal{X}^\circ$ is the open subset of $\mathcal{C} \times
\mathcal{D}$ where $f(x)$ and $g(y)$ are regular. Also, $\mathcal{X}_q^\circ$ is
the closed subset of $$\mathcal{X}^\circ \times_k \mathcal{Y}_q = \mathcal{C}^\circ \times_k \mathcal{D}^\circ \times_k \mathcal{Y}_q$$ with coordinates $(x,y,u)$ where $f(x)-g(y)=\wp_q(u)$. On the other hand, $\mathcal{C}_q^\circ \times_k \mathcal{D}_q^\circ$ is isomorphic to the closed subset of $$\left(\mathcal{C}^\circ \times_k \mathcal{Y}_q\right) \times_k \left(\mathcal{D}^\circ \times_k \mathcal{Y}_q\right)$$ with coordinates $(x,z,y,w)$ where $f(x)=\wp_q(z)$ and $g(y)=\wp_q(w)$.
Since $\wp_q$ is additive, $\wp_q(z-w)=\wp_q(z)-\wp_q(w)=f(x)-g(y)$ on this closed subset. Letting $u=z-w$, the morphism $(x,z,y,w) \mapsto (x,y, z-w)$ therefore presents $\mathcal{C}_q^\circ \times_k \mathcal{D}_q^\circ$ as an $\mathbb{F}_q$-torsor over $\mathcal{X}_q^\circ$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Berger}]
By Proposition~\ref{prop:DPC}, there is a rational dominant map
$\mathcal{C}_q \times \mathcal{D}_q {\dashrightarrow} \mathcal{X}_q$ given by: $$(x,z,y,w) \mapsto (x,y,z-w).$$ This proves that $\mathcal{X}_q$ is dominated by a product of curves. Also, $\mathcal{X}_q$ is geometrically irreducible since $\mathcal{C}_q$ and $\mathcal{D}_q$ are geometrically irreducible. This proves that $X_q$ remains irreducible over $\overline{k}(u)$. Part (3) is a consequence of part (1) and Tate's theorem on endomorphisms of abelian varieties over finite fields. See, for example, \cite[8.2.2, 6.1.2, and 6.3.1]{Ulmer14b}. The claims for $\mathcal{X}$ and $X$ follow similarly from the fact that $\mathcal{X}$ is birational to $\mathcal{C}\times_k\mathcal{D}$. \end{proof}
Using \cite[8.2.1 and 6.3.1]{Ulmer14b}, we see that if $X$ is a curve over a function field $F$ and the BSD conjecture holds for $X$ over a finite extension $K$, then it also holds over any subextension $F\subset K'\subset K$. The following is thus immediate from Theorem \ref{thm:Berger} and Lemma~\ref{Ladditive}.
\begin{cor}\label{cor:BSD}
Let $X$ be a smooth projective irreducible curve over $K=k(u)$ and
assume that there are rational functions $f(x) \in k(x)$ and $g(y)
\in k(y)$ and a separable additive polynomial $A(u) \in k[u]$ such
that $X$ is birational to the curve $$\left\{f(x)-g(y)-A(u)=0\right\} \subset\mathbb{P}^1_K \times_K\mathbb{P}^1_K.$$ Then the BSD conjecture holds for the Jacobian of $X$. \end{cor}
We note that an argument similar to \cite[Rem.~12.2]{Ulmer14a} shows that the hypothesis that $A$ is separable is not needed.
To determine the genus of $X_q$ and for later use, we now proceed to construct a specific blow-up $\phi_1:\mathcal{X}\to\mathcal{C}\times_k\mathcal{D}$ which resolves the indeterminacy of the rational map $\psi_1:\mathcal{C}\times_k\mathcal{D} {\dashrightarrow}\mathbb{P}^1_{k,t}$ and yields a morphism $\pi_1:\mathcal{X}\to\mathbb{P}^1_{k,t}$.
\begin{prop} \label{prop:blowupgenus} The genus of the smooth proper irreducible curve $X_q$ over $K_q$ is $$g_{X_q}=Mg_{\mathcal{D}} + Ng_{\mathcal{C}} +(M-1)(N-1) - \sum_{i,j} \delta(a_i, b_j)$$ where $\delta(a,b):=(ab-a-b+\gcd(a,b))/2$. \end{prop}
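Before giving the proof, here is a quick sanity check of the formula (added in editing): take $\mathcal{C}=\mathcal{D}=\mathbb{P}^1$, $f(x)=x^3$, and $g(y)=y^2$, with $p>3$ so that hypothesis~\eqref{eq:hyp} holds. Then $M=3$, $N=2$, $g_\mathcal{C}=g_\mathcal{D}=0$, there is a single base point, and $\delta(3,2)=(6-3-2+1)/2=1$, so the formula gives $g_{X_q}=(3-1)(2-1)-1=1$, consistent with the generic fiber being the plane cubic $y^2=x^3-t$. Similarly, $f(x)=x^2$ and $g(y)=y^2$ (with $p>2$) give $\delta(2,2)=1$ and $g_{X_q}=0$, matching the fact that $x^2-y^2=t$ has genus zero over $K_q$.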
\begin{proof} The proof of Proposition~\ref{prop:blowupgenus} is very similar to the proof of \cite[Thm~3.1]{Berger08}; see also \cite[\S 4.4]{Ulmer13a}. It uses facts about the arithmetic genus of curves of bidegree $(M,N)$ in $\mathcal{C}\times_k\mathcal{D}$, the adjunction formula, and resolution of singularities.
The procedure to resolve the singularity at each base point
$(P_i,Q_j)$ is the same so we fix one such point and drop $i$ and
$j$ from the notation. Thus assume that $(P,Q)$ is a base point,
that $f$ has a pole of order $a$ at $P$, and that $g$ has a pole of
order $b$ at $Q$. Choose uniformizers $x$ and $y$ at $P$ and $Q$
respectively, so that $f=ux^{-a}$ and $g=vy^{-b}$ where $u$ and $v$
are units in the local rings at $P$ and $Q$ respectively. The map
$\psi_1$ is thus given in neighborhood of $(P,Q)$ in projective
coordinates by $[uy^b-vx^a:x^ay^b]$.
The resolution of the indeterminacy at $(P,Q)$ takes place in three
stages. The first stage, which we discuss now, occurs only when
$a\neq b$. Suppose that is the case and blow up the point $(P,Q)$
on $\mathcal{C}\times_k\mathcal{D}$. Then there is a unique point of indeterminacy
upstairs. If $a<b$, we introduce new coordinates $x=x_1y_1$ and
$y=y_1$ in which the blow up composed with $\psi_1$ becomes
$[uy_1^{b_1}-vx_1^{a_1}:x_1^{\alpha_1}y_1^{\beta_1}]$ where $a_1=a$,
$b_1=b-a$, $\alpha_1=a$ and $\beta_1=b$. The unique point of
indeterminacy is at $x_1=y_1=0$. If $a>b$, we introduce new
coordinates $x=x_1$ and $y=x_1y_1$ in which the blow up composed
with $\psi_1$ becomes
$[uy_1^{b_1}-vx_1^{a_1}:x_1^{\alpha_1}y_1^{\beta_1}]$ where
$a_1=a-b$, $b_1=b$, $\alpha_1=a$ and $\beta_1=b$. The unique point
of indeterminacy is at $x_1=y_1=0$. In both cases, note that
$\alpha_1\ge a_1$ and $\beta_1\ge b_1$.
We now proceed inductively within this case. Suppose that at step
$\ell$ our map is given locally by
$[uy_\ell^{b_\ell}-vx_\ell^{a_\ell}:x_\ell^{\alpha_\ell}y_\ell^{\beta_\ell}]$
and $a_\ell\neq b_\ell$. The point $x_\ell=y_\ell=0$ is the point
of indeterminacy. If $a_\ell<b_\ell$, we set
$x_\ell=x_{\ell+1}y_{\ell+1}$ and $y_\ell=y_{\ell+1}$ so that our
map becomes $[uy_{\ell+1}^{b_{\ell+1}}-vx_{\ell+1}^{a_{\ell+1}}:
x_{\ell+1}^{\alpha_{\ell+1}}y_{\ell+1}^{\beta_{\ell+1}}]$ where
$a_{\ell+1}=a_\ell$, $b_{\ell+1}=b_\ell-a_\ell$,
$\alpha_{\ell+1}=\alpha_\ell$ and
$\beta_{\ell+1}=\beta_\ell+\alpha_\ell-a_\ell$. On the other hand,
if $a_\ell>b_\ell$, we set $x_\ell=x_{\ell+1}$ and
$y_\ell=x_{\ell+1}y_{\ell+1}$ so that our map becomes
$[uy_{\ell+1}^{b_{\ell+1}}-vx_{\ell+1}^{a_{\ell+1}}:
x_{\ell+1}^{\alpha_{\ell+1}}y_{\ell+1}^{\beta_{\ell+1}}]$ where
$a_{\ell+1}=a_\ell-b_\ell$, $b_{\ell+1}=b_\ell$,
$\alpha_{\ell+1}=\alpha_\ell+\beta_\ell-b_\ell$ and
$\beta_{\ell+1}=\beta_\ell$. (We use here that $\alpha_\ell\ge
a_\ell$ and $\beta_\ell\ge b_\ell$ and we note that these
inequalities continue to hold at step $\ell+1$.)
Let $\gamma(a,b)$ be the number of steps to proceed from $(a,b)$ to
$(\gcd(a,b),0)$ by subtracting the smaller of $a$ or $b$ from the
larger at each step (cf.~\cite[fourth paragraph of
\S4.4]{Ulmer13a}). Then after $j=\gamma(a,b)-1$ steps as in the
preceding paragraph, our map is given by
$[uy_j^{b_j}-vx_j^{a_j}:x_j^{\alpha_j}y_j^{\beta_j}]$ where
$a_j=b_j=\gcd(a,b)$. To lighten notation, let us write $c$ for
$\gcd(a,b)$, $\alpha$ for $\alpha_j$, $\beta$ for $\beta_j$, $x$ for
$x_j$, and $y$ for $y_j$, so that our map is
$[uy^c-vx^c:x^{\alpha}y^{\beta}]$ and the unique point of
indeterminacy in these coordinates is $x=y=0$. Note that $\alpha,
\beta \geq c$. This completes the first stage of the resolution of
indeterminacy.
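As an aside, the numerical bookkeeping of the first stage is just a subtractive Euclidean algorithm on the pair $(a,b)$, carrying the exponents $(\alpha,\beta)$ along. The short script below (our own illustration, not used in the proof; the name \texttt{first\_stage} is ours, and only the exponents are recorded, not the geometry of the blow-ups) traces the recursion and reports $c=\gcd(a,b)$, the final $\alpha$ and $\beta$, the number $\gamma(a,b)-1$ of blow-ups performed in this stage, and the quantity $\alpha+\beta-c$, which will reappear as $\delta$ in the second stage below.
\begin{verbatim}
def first_stage(a, b):
    # Trace (a_l, b_l, alpha_l, beta_l) through the first stage,
    # starting from (a, b, a, b).
    al, bl, alpha, beta = a, b, a, b
    steps = 0
    while al != bl:
        if al < bl:
            bl, beta = bl - al, beta + alpha - al
        else:
            al, alpha = al - bl, alpha + beta - bl
        steps += 1
    c = al  # c = gcd(a, b); steps = gamma(a, b) - 1
    return c, alpha, beta, steps, alpha + beta - c

# The case a = 4, b = 6 treated below: c = 2, alpha = 8, beta = 6,
# two blow-ups in the first stage, and alpha + beta - c = 12.
print(first_stage(4, 6))
\end{verbatim}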
The second stage consists of a single blow up at $x=y=0$.
Introducing coordinates $x=rs$, $y=s$, our map becomes
$[u-vr^c:r^\alpha s^{\beta+\alpha-c}]$ and there are now $c$ points
of indeterminacy, namely the $c$ solutions of $r^c=u/v$, $s=0$.
(Note that $u(x)=u(rs)$ and $v(y)=v(s)$ are both constant along the
exceptional divisor $s=0$, so the equation $r^c=u/v$ has exactly $c$
solutions on that divisor.) Let $\delta=\beta+\alpha-c$.
The third stage consists of dealing with each of the $c$ points of
indeterminacy in parallel. Focus on one of them: Replace $r$ with
$r-\omega$ where $\omega$ is one of the zeroes of $r^c-u/v$ so that
our map becomes $[wr:zs^{\delta}]$, the point of interest is
$r=s=0$, and $w$ and $z$ are units in the local ring at that point.
We now blow up $\delta$ times: Setting $r=r_1s_1$, $s=s_1$, our map
becomes $[wr_1:zs_1^{\delta-1}]$; setting $r_1=r_2s_2$ and $s_1=s_2$
our map is $[wr_2:zs_2^{\delta-2}]$; ...; and after $\delta$ steps
our map is $[wr_\delta:z]$, which is everywhere defined.
\begin{figure}
\caption{Resolution for $a=4$, $b=6$}
\end{figure}
Figure 1 above, illustrating the case $a=4$, $b=6$, may help to digest the various steps. The vertical line in the lower left is the proper transform of $\mathcal{C}\times\{Q\}$, and the horizontal line in the upper right is the proper transform of $\{P\}\times\mathcal{D}$. The two lines adjacent to them are the components introduced in the first stage of the resolution, where $(a,b)=(4,6)$ becomes $(2,2)$ in 2 steps (so $\gamma=3$). The line of slope 1 is the component introduced in step 2. The chains leading away from this last line are the components introduced in the third step, where $\delta=12$ (but we have only drawn half of each chain, indicating the rest with $\dots$).
Now we go back and consider a general element of the pencil defined by $\psi$ and its proper transform at each stage. For all but finitely many values of $t$, the element of the pencil parameterized by $t$ is smooth away from the base points. In a neighborhood of a base point $(P,Q)$ where $f$ and $g$ have poles of order $a$ and $b$ respectively, $\mathcal{F}=\overline{\psi_1^{-1}(t)}$ is given by $uy^b-vx^a-tx^ay^b$. The tangent cone of $\mathcal{F}$ at $(0,0)$ is a single line $x=0$ or $y=0$ and so there is a unique point over $(P,Q)$ on the proper transform of $\mathcal{F}$. The situation is similar for each of the first $\gamma(a,b)-1$ blow ups, and after the last of them, the proper transform of $\mathcal{F}$ is given locally by $uy^c-vx^c-tx^\alpha y^\beta$ in the notation at the end of the first stage above.
Now at the second stage the tangent cone consists of $c$ lines and there are $c$ points over $x=y=0$ in the proper transform. Locally the proper transform is given by $wr-zs^\delta$, and this is smooth in a neighborhood of the exceptional divisor. Therefore, there are no further changes in the isomorphism type of the proper transform in the third stage. In other words, the fibers of $\pi_1$ are isomorphic to the elements of the pencil appearing after the second stage.
To compute the genus of the fibers, we note that the multiplicity of the point of indeterminacy on $\mathcal{F}$ at the $\ell$-th step of the first stage is $e_\ell=\min(a_\ell,b_\ell)$ and at the second stage it is $c=\gcd(a,b)$. Thus the change in arithmetic genus at step $\ell$ is $e_\ell (e_\ell-1)/2$ and the change in the last step is $c(c-1)/2$. Summing these contributions and noting that the arithmetic genus of the elements of the original pencil is $Mg_{\mathcal{D}} + Ng_{\mathcal{C}} +(M-1)(N-1)$ yields the asserted formula for the genus $g_{X_q}$ of the generic fiber of $\pi_1$. (See \cite[\S\S3.7 and 3.8]{Berger08} for more details on computing the sum.) This completes the proof. \end{proof}
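To illustrate the count, consider the base point with $(a,b)=(4,6)$ pictured in Figure 1. The multiplicities are $e_0=\min(4,6)=4$ and $e_1=\min(4,2)=2$ at the two first-stage blow-ups, and $c=\gcd(4,6)=2$ at the second stage, so the total drop in arithmetic genus at this point is $$\frac{4\cdot 3}{2}+\frac{2\cdot 1}{2}+\frac{2\cdot 1}{2}=6+1+1=8=\frac{4\cdot 6-4-6+2}{2}=\delta(4,6),$$ as predicted by the formula.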
It is worth noting that the algorithm presented above for resolving the indeterminacy of $\psi_1$ sometimes leads to a morphism $\mathcal{X}\to\mathbb{P}^1_{k,t}$ which is not relatively minimal. In general, one needs to contract several $(-1)$-curves to arrive at a relatively minimal morphism.
\begin{rem}\label{rem:components}
For later use we note that the exceptional divisor of
the last blow up in stage three (at each of $c=\gcd(a,b)$ points)
maps isomorphically onto the base $\mathbb{P}^1_{k,t}$ whereas all the other
exceptional divisors introduced in the resolution map to the point
$\infty=[1,0]\in\mathbb{P}^1_{k,t}$. In particular, $\pi_1:\mathcal{X}\to\mathbb{P}^1_{k,t}$ always
has a section, and $X$ always has a $k(t)$-rational point. \end{rem}
\section{Examples---lower bounds on ranks}\label{s:exs1} Our goal in this section is to combine the construction of Theorem~\ref{thm:Berger} with the analytic ranks bound in Corollary~\ref{cor:analytic-ranks} to give examples of Jacobians which satisfy the BSD conjecture and which have large Mordell-Weil rank. This is an analogue for Artin-Schreier extensions of some results in \cite{Ulmer07b} for Kummer extensions.
\subsection{Notation}\label{ss:types} Throughout this section, $k$ is a finite field of cardinality $r$, a power of $p$. Given an integer $M$ and a partition $M=a_1+\cdots+a_m$, we say that a rational function $f$ on $\mathbb{P}^1$ is {\it of type $(a_1+\cdots+a_m)$\/} if the polar divisor has multiplicities $a_1,\dots,a_m$, i.e., $$\dvsr_\infty(f)=\sum_{i=1}^m a_iP_i$$ where the $P_i$ are distinct ${\overline{k}}$-valued points. We assume throughout that $p \nmid a_1 \cdots a_m$. Given two non-constant rational functions $f$ on $\mathcal{C}$ and $g$ on $\mathcal{D}$ over $k$, Proposition~\ref{prop:blowupgenus} gives a formula for the genus of the smooth proper curve over $k(t)$ with equation $f-g=t$ in terms of the types of $f$ and $g$.
\subsection{Elliptic curves}\label{ss:ell-curves} Suppose now that $\mathcal{C}=\mathcal{D}=\mathbb{P}^1$ over $k$ and that $f$ and $g$ are rational functions on $\mathbb{P}^1$. Straightforward calculation reveals that if the types of $f$ and $g$ are on the following list, then the curve $X$ over $k(t)$ given by $f(x)-g(y)=t$ has genus 1: $$(2,1+1), (1+1,1+1), (2,3), (2,2+1), (2,4), (2,2+2), (3,3).$$ (We omit pairs of types obtained from these by exchanging the two partitions and assume $p \not = 2,3$ as necessary).
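For instance, for the pair of types $(2,2+1)$ the formula of Proposition~\ref{prop:blowupgenus} reads as follows: here $M=2$, $N=3$, $g_{\mathcal{C}}=g_{\mathcal{D}}=0$, and the two base points contribute $\delta(2,2)=1$ and $\delta(2,1)=0$, so $$g_X=(2-1)(3-1)-\delta(2,2)-\delta(2,1)=2-1-0=1.$$ The other pairs of types in the list are checked in the same way.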
For example, to illustrate the $(2, 1+1)$ case, let $f(x)$ be a quadratic polynomial, so that $f$ has type $(2)$. Let $g_1(y)$ and $g_2(y)$ be polynomials with $\deg g_1\le 2$ and $\deg g_2=2$ such that $g_2$ has distinct roots and $g_1$ and $g_2$ are relatively prime in $k[y]$, so that $g=g_1/g_2$ has type $(1+1)$. For such a choice of $f$ and $g$, the curve $f(x)-g(y)=t$ has genus 1.
\subsection{Elliptic curves of high rank} Recall that $K=k(t)$, $q$ is a power of $p$, and $K_q=k(u)$ with $u^q-u=t$. The next result says that for certain types as in the previous section and ``generic'' $f$ and $g$, the elliptic curve $X$ has unbounded rank over $K_q$ as $q$ varies.
\begin{prop}\label{prop:ell-ranks1}
Suppose that $p>2$ and $f$ and $g$ are rational functions on $\mathbb{P}^1$
over $k$ of type $(2,2+1)$ or of type $(2,4)$.
Suppose that the finite critical values of $g$ are distinct. Then
the curve $X$ defined by $f(x)-g(y)=t$ is elliptic, it satisfies the
BSD conjecture over $K_q$ for all $q$, and the rank of $X(K_q)$ is
unbounded as $q$ varies. More precisely, if $q$ has the form
$q=r^{2\nu}$ and $k'$ is the field of $r^{2\nu}$ elements, then $$\rk X(K_q)\ge \frac{r^\nu-1}{2\nu}$$ and $$\rk X(k'K_q)\ge r^\nu-1.$$ \end{prop}
\begin{proof}
Proposition~\ref{prop:blowupgenus} shows that $X$ has genus 1, and
Remark~\ref{rem:components} shows that $X$ has a $k(t)$-rational
point, so $X$ is elliptic.
By the Riemann-Hurwitz formula, a rational function of degree $M$
has $2M-2$ critical points (counting multiplicities). A pole of
order $a$ is a critical point of multiplicity $a-1$. Thus a
rational function $f$ of type $(2)$ has 1 critical point which is
not a pole, and therefore 1 finite critical value. A rational
function $g$ of type $(2+1)$ has 3 non-polar critical points, and so
3 finite critical values. Similarly, a rational function of type
$(4)$ has 3 non-polar critical points and 3 finite critical values.
By ``generic'' we mean that the finite critical values of $g$ are
distinct, and we impose no restriction on $f$.
Now consider the rational map $\psi_1:\mathcal{C}\times_k\mathcal{D} {\dashrightarrow}
\mathbb{P}^1_{k,t}$ given by $t=f(x)-g(y)$ and the blow up $\phi_1:
\mathcal{X}\to\mathcal{C}\times\mathcal{D}$ constructed in the proof of
Proposition~\ref{prop:blowupgenus} that resolves the indeterminacy
of $\psi_1$, yielding a proper morphism $\pi_1:\mathcal{X}\to\mathbb{P}^1_{k,t}$
whose generic fiber is $X$. Away from the fiber over $t=\infty$,
the critical points of $\pi_1$ are precisely the simultaneous
critical points of $f$ and $g$. Under our hypotheses, these are
simple critical points, and so the critical points of $\pi_1$ away
from the fiber at infinity are ordinary double points. Moreover, by
the counts in the previous paragraph, there are precisely three such
ordinary double points. This shows that $X$ has multiplicative
reduction over three finite places of the $t$-line, and good
reduction at all other finite places. Thus the degree of the finite
part of the conductor of $X$ is 3.
Next we claim that $X$ (or rather the representation
$H^1(X\times\overline{K},{\mathbb{Q}_\ell})$ for any $\ell\neq p$) is tamely
ramified at $t=\infty$. One way to see this is to use the algorithm
in the proof of Proposition~\ref{prop:blowupgenus} to compute the
reduction type of $X$ at $t=\infty$. One finds that $X$ has Kodaira
type $I_3^*$ in the $(2,2+1)$ case and Kodaira type $III^*$ in the
$(2,4)$ case. In both cases, $X$ is tamely ramified at $t=\infty$
for any $p>2$. (Another possibility is to use the method of the
proof of Proposition~\ref{prop:higher-g-unbounded} below to see that
$X$ obtains good reduction over an extension of $k((t^{-1}))$ of
degree 4.)
Now we may apply Corollary~\ref{cor:analytic-ranks} to conclude that
we have $\ord_{s=1}L(X/K_q,s)\ge(r^\nu-1)/(2\nu)$ and
$\ord_{s=1}L(X/k'K_q,s)\ge r^\nu-1$. Moreover, by
Theorem~\ref{thm:Berger}, $X$ satisfies the BSD conjecture, so we
also have a lower bound on the algebraic ranks, i.e., on $\rk
X(K_q)$ and $\rk X(k'K_q)$.
This completes the proof of the proposition. \end{proof}
The curves in Proposition~\ref{prop:ell-ranks1} can of course be made quite explicit. Let us consider the case of types $(2,2+1)$. Since $f$ and $g$ have unique double poles, these occur at rational points, and we may assume they are both at infinity. Thus $f(x)$ is a quadratic polynomial which, after a change of coordinates on $x$ and $t$, we may take to be $x^2$, and $g$ has the form $$g(y)=\frac{ay^3+by^2+cy+d}y$$ for scalars $a,b,c,d$. A small calculation reveals that $X$ has the Weierstrass form $$y^2=x^3+(t+c)x^2+bdx+ad^2.$$ The discriminant of this model is a cubic polynomial in $t$ and the genericity condition is simply that the discriminant have distinct roots. To see that the locus where it is satisfied is not empty, we may specialize as follows: If $p>3$, take $a=d=1$ and $b=c=0$, so that $X$ is $y^2=x^3+tx^2+1$. The discriminant is then $-16(4t^3+27)$ which has distinct roots. If $p=3$, take $a=b=d=1$ and $c=0$, in which case the discriminant is $-t^3+t^2-1$, a polynomial with distinct roots in characteristic 3.
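These discriminant computations are easily reproduced by machine. The following sympy sketch (ours, purely for illustration; the helper name \texttt{curve\_disc} is not part of the text) recovers both specializations:
\begin{verbatim}
import sympy as sp

t, x = sp.symbols('t x')

def curve_disc(a, b, c, d):
    # For y^2 = f(x) with f a monic cubic, the curve discriminant
    # is 16 times the discriminant of f in x.
    f = x**3 + (t + c)*x**2 + b*d*x + a*d**2
    return sp.expand(16*sp.discriminant(f, x))

# p > 3: a = d = 1, b = c = 0 gives -16*(4*t**3 + 27).
print(sp.factor(curve_disc(1, 0, 0, 1)))

# p = 3: a = b = d = 1, c = 0; reduce mod 3 and check squarefreeness.
D3 = sp.Poly(curve_disc(1, 1, 0, 1), t, modulus=3)
print(D3.as_expr())                              # -t**3 + t**2 - 1
print(D3.gcd(D3.diff(t)).degree() == 0)          # True: distinct roots
\end{verbatim}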
\subsection{Unbounded rank in most
genera}\label{ss:higher-g-unbounded} The main idea of the previous subsection generalizes easily to most genera.
We define a pair of polynomials $(f,g)$ to be ``generic'' if the set of differences $f(x_i)-g(y_j)$, where $x_i$ and $y_j$ run through the non-polar critical points of $f$ and $g$ respectively, has maximum possible cardinality. In other words, we require that if $(i,j)\neq(i',j')$, then $f(x_i)-g(y_j)\neq f(x_{i'})-g(y_{j'})$. Note that this condition imposes no constraint on a quadratic polynomial $f$ since it has only one finite critical value.
\begin{prop}\label{prop:higher-g-unbounded}
Fix an integer $g_X>0$ such that $p$ does not divide $N=2g_X+2$.
Suppose that $f$ and $g$ are a pair of ``generic'' rational
functions on $\mathbb{P}^1$ \textup{(}generic in the sense mentioned
above\textup{)} of type $(2,N)$. Then the smooth proper curve
defined by $f(x)-g(y)=t$ has genus $g_X$, its Jacobian $J_X$
satisfies the BSD conjecture over $K_q$ for all $q$, and the rank of
$J_X(K_q)$ is unbounded as $q$ varies through powers of $p$. \end{prop}
\begin{proof}
We may assume that the unique poles of $f$ and $g$ are at infinity,
so that $f$ and $g$ are polynomials. After a further change of
coordinates on $x$ and $t$, we may take $f(x)=x^2$. Thus $X$ is a
hyperelliptic curve \begin{equation}\label{eq:higher-genus-X} x^2=a_Ny^N+a_{N-1}y^{N-1}+\cdots+a_0+t \end{equation} where $a_0,\dots,a_N\in k$ and $a_N \neq 0$. The BSD conjecture is true for $J_X$ by Theorem~\ref{thm:Berger} and the genus of $X$ is $g_X=(N-1)-\delta(2,N)$, as seen in Proposition~\ref{prop:blowupgenus}.
Our genericity assumption is that the $N-1$ finite critical values of $g$ are distinct. As in the proof of Proposition \ref{prop:ell-ranks1}, we see that $X$ has an ordinary, non-separating double point at $N-1$ places of $\mathbb{P}^1$, and it has good reduction at all other finite places. This shows that the degree of the finite part of the conductor of $X$ is $N-1=2g_X-1$, an odd integer.
We now claim that at $t=\infty$, $X$ obtains good reduction over an extension of degree $N$. Since $p \nmid N$ by hypothesis, this implies that $X$ is tame at $t=\infty$. To check the claim, let $v$ satisfy $t=v^{-N}$ and change coordinates in (\ref{eq:higher-genus-X}) by setting $x=x_1/v^{N/2}$ and $y=y_1/v$. The resulting model of $X$ is $$x_1^{2}=a_Ny_1^{N}+\sum_{i=0}^{N-1}a_iy_1^{i}v^{N-i}+1.$$ This curve visibly has good reduction at $v=0$ which establishes our claim.
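Concretely, for any fixed even $N$ this change of coordinates can be checked symbolically; the following sketch (ours, with $N=4$ and generic coefficients, not needed for the proof) confirms it:
\begin{verbatim}
import sympy as sp

v, x1, y1 = sp.symbols('v x1 y1')
N = 4                                # any even N works the same way
a = sp.symbols('a0:%d' % (N + 1))    # a0, ..., aN

# x^2 = a_N y^N + ... + a_0 + t with t = v^(-N), x = x1/v^(N/2), y = y1/v.
x, y, t = x1/v**(N//2), y1/v, v**(-N)
lhs = x**2 - (sum(a[i]*y**i for i in range(N + 1)) + t)

# Multiplying by v^N clears the pole and gives the model in the text.
model = sp.expand(lhs*v**N)
expected = x1**2 - (a[N]*y1**N
                    + sum(a[i]*y1**i*v**(N - i) for i in range(N)) + 1)
print(sp.simplify(model - expected))  # 0
\end{verbatim}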
Now Corollary~\ref{cor:analytic-ranks} applies and shows that when $q=r^{2\nu}$ $$\ord_{s=1}L(J_X/K_q,s)\ge(r^\nu-1)/(2\nu).$$ Since $J_X$ satisfies the BSD conjecture, we get a similar lower bound on the rank and this completes the proof of the proposition. \end{proof}
As an explicit example, assume that $p\nmid(2g_X+2)(2g_X+1)$ and take $N=2g_X+2$, $f(x)=x^2$, and $g(y)=y^N+y$, so that $X$ is the hyperelliptic curve $$x^2=y^N+y+t.$$ The finite critical values of $g$ are $\alpha(N-1)/N$ where $\alpha$ runs through the roots of $\alpha^{N-1}=-1/N$, and these values are distinct under our assumptions on $p$. Thus this pair $(f,g)$ is generic and we get an explicit hyperelliptic curve whose Jacobian has unbounded rank in the tower of fields $K_q$.
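For the record, the critical values quoted here come from a one-line computation: $g'(y)=Ny^{N-1}+1$ vanishes exactly at the roots $\alpha$ of $\alpha^{N-1}=-1/N$, and at such a root $$g(\alpha)=\alpha^N+\alpha=\alpha\cdot\alpha^{N-1}+\alpha=-\frac{\alpha}{N}+\alpha=\frac{(N-1)\alpha}{N}.$$ Since $p\nmid N(N-1)$, distinct roots $\alpha$ give distinct critical values.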
\section{A rank formula}\label{s:rank} In this section, $k$ will be a general field of characteristic $p$, not necessarily finite. In the main result, we will assume $k$ is algebraically closed for convenience, but this is not essential.
\subsection{The Jacobian of $X$} We write $J_X$ for the Jacobian of the curve $X$ over $K=k(t)$ discussed in Theorem~\ref{thm:Berger}. Recall that for a power $q$ of $p$, we set $K_q=k(u)$ where $\wp_q(u)=u^q-u=t$. Our main goal in this section is to give a formula for the rank of the Mordell-Weil group (as defined just below) of $J_X$ over $K_q$.
First we recall the $K_q/k$-trace of $J_X$, which we denote by $(B_q,\tau_q)$. By definition, $(B_q,\tau_q)$ is the final object in the category of pairs $(B,\tau)$ where $B$ is an abelian variety over $k$ and $\tau:B\times_kK_q\to J_X$ is a morphism of abelian varieties over $K_q$. See \cite{Conrad06} for a modern account.
\begin{prop}
For every power $q$ of $p$, the $K_q/k$-trace of $J_X$ is
canonically isomorphic to $J_\mathcal{C}\times J_\mathcal{D}$. \end{prop}
\begin{proof}
The proof is very similar to that of \cite[Prop.~5.6]{Ulmer13a},
although somewhat simpler since our hypothesis that $p$ does not
divide the pole orders of $f$ and $g$ implies that $\mathcal{C}_q$ and
$\mathcal{D}_q$ are irreducible. We omit the details.
\end{proof}
\begin{defn} The {\it Mordell-Weil group\/} of $J_X$ over $K_q$, denoted $MW(J_X/K_q)$ is defined to be $$\frac{J_X(K_q)}{\tau_qB_q(k)}.$$ \end{defn}
\subsection{Two numerical invariants} Recall that we have constructed a smooth projective surface $\mathcal{X}$ equipped with a generically smooth morphism $\pi_1:\mathcal{X}\to\mathbb{P}^1_{k,t}$ whose generic fiber is $X/K$. For each closed point $v$ of $\mathbb{P}^1_{k,t}$, let $f_v$ denote the number of irreducible components in the fiber of $\pi_1$ over $v$. We define $$c_1(q)=q\sum_{v\neq\infty}(f_v-1)\deg v$$ where the sum is over the finite closed points of $\mathbb{P}^1_{k,t}$.
Using the notation established at the beginning of Subsection~\ref{ss:data}, we define $$c_2=\left(\sum_{i=1}^m\sum_{j=1}^n\gcd(a_i,b_j)\right)-m-n+1.$$
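For example, in the situation of Proposition~\ref{prop:ell-ranks1}, where $f$ has type $(2)$ and $g$ has type $(2+1)$, we have $m=1$, $n=2$, and $$c_2=\bigl(\gcd(2,2)+\gcd(2,1)\bigr)-1-2+1=3-2=1,$$ while for $f$ of type $(2)$ and $g$ of type $(1+1)$ the same recipe gives $c_2=(1+1)-1-2+1=0$.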
We can now state the main result of this section.
\begin{thm}\label{thm:ranks} Assume that $k$ is algebraically closed. Given data $\mathcal{C}$, $\mathcal{D}$, $f$, and $g$ as above, consider the smooth proper model $X$ of \[\{f-g-t=0\} \subset \mathcal{C} \times_k \mathcal{D} \times_k \spec(K)\] over $K=k(t)$ as constructed above. Let $J_X$ be the Jacobian of $X$. Recall that $K_q=k(u)$ with $u^q-u=t$. Then, with $c_1(q)$ and $c_2$ as defined above, we have $$\rk\MW(J_X/K_q) = \rk\Hom_{k-av}(J_{\mathcal{C}_q}, J_{\mathcal{D}_q})^{{\mathbb{F}_q}} - c_1(q) + c_2.$$ Here $\Hom_{k-av}$ denotes homomorphisms of abelian varieties over $k$ and the exponent ${\mathbb{F}_q}$ signifies those homomorphisms which commute with the ${\mathbb{F}_q}$ actions on $J_{\mathcal{C}_q}$ and $J_{\mathcal{D}_q}$. \end{thm}
\begin{rem}\label{rem:q=1}
The theorem also holds for $X/K$: We have
$\rk\MW(J_X/K) = \rk\Hom_{k-av}(J_{\mathcal{C}}, J_{\mathcal{D}}) - c_1(1) + c_2$.
The proof is a minor variation of what follows, but we omit it to
avoid notational complications. \end{rem}
\begin{proof}
The proof is very similar to that of \cite[Thm~6.4]{Ulmer13a}: we
will construct a good model $\pi_q: \mathcal{X}_q\to\mathbb{P}^1_{k,u}$ of $X/K_q$
and use the Shioda-Tate formula.
First consider the rational map $\psi_q: \mathcal{C}_q \times_k \mathcal{D}_q {\dashrightarrow}
\mathbb{P}^1_{k,u}$ defined by the formula $u=z-w$. For each pair $(i,j)$
with $1\le i\le m$, $1\le j\le n$, there is a unique point $(\tilde
P_i,\tilde Q_j)\in\mathcal{C}_q\times_k\mathcal{D}_q$ over $(P_i,Q_j)\in
\mathcal{C}\times_k\mathcal{D}$. The indeterminacy locus of $\psi_q$ is $\{(\tilde
P_i,\tilde Q_j)\}$. At each of these base points, the blow-ups
required to resolve the indeterminacy of $\psi_q$ are identical to
those described in the proof of Proposition~\ref{prop:blowupgenus}
(resolving the indeterminacy of $\psi_1$ at $(P_i,Q_j)$). For each
$(i,j)$, write the total number of blow-ups over $\{(\tilde
P_i,\tilde Q_j)\}$ as $N_{ij}+\gcd(a_i,b_j)$ and recall that
$N_{ij}$ of the exceptional divisors map to $\infty\in\mathbb{P}^1$ whereas
$\gcd(a_i,b_j)$ of them map isomorphically onto $\mathbb{P}^1_{k,u}$. Let
$\widetilde{\mathcal{C}_q\times_k\mathcal{D}_q}$ denote this blow-up of $\mathcal{C}_q
\times_k \mathcal{D}_q$.
The action of $\mathbb{F}_q^2$ on $\mathcal{C}_q\times_k\mathcal{D}_q$ lifts canonically to
$\widetilde{\mathcal{C}_q\times_k\mathcal{D}_q}$. In fact, it is clear that the
action of $\mathbb{F}_q^2$ on the tangent space at $\{(\tilde P_i,\tilde
Q_j)\}$ is trivial, so every point in the exceptional divisor is fixed
and these are the only fixed points. Therefore the quotient
$\mathcal{X}_q:=\widetilde{\mathcal{C}_q\times_k\mathcal{D}_q}/{\mathbb{F}_q}$ (quotient by the
diagonal subgroup ${\mathbb{F}_q}\subset\mathbb{F}_q^2$) is smooth. The resolved
morphism $\widetilde{\mathcal{C}_q\times_k\mathcal{D}_q}\to\mathbb{P}^1_{k,u}$ factors
through $\mathcal{X}_q$ and defines a morphism $\pi_q:\mathcal{X}_q\to\mathbb{P}^1_{k,u}$
whose generic fiber is $X/K_q$.
It is classical (and reviewed in \cite[II.8.4]{Ulmer11}) that $$\NS(\mathcal{C}_q\times_k\mathcal{D}_q)\cong \Hom_{k-av}(J_{\mathcal{C}_q},J_{\mathcal{D}_q}) \oplus\mathbb{Z}^2.$$ Noting that the blow-ups are fixed by the action of $\mathbb{F}_q^2$ and taking ${\mathbb{F}_q}$ invariants, we find that $$\NS(\mathcal{X}_q)\cong\Hom_{k-av}(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}\oplus\mathbb{Z}^{2 +\sum_{i,j}(N_{ij}+\gcd(a_i,b_j))}$$ and so \begin{equation}\label{eq:rkNS} \rk\NS(\mathcal{X}_q)=\rk\Hom_{k-av}(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}+2 +\sum_{i,j}(N_{ij}+\gcd(a_i,b_j)). \end{equation}
We apply the Shioda-Tate formula \cite{Shioda99} to $\mathcal{X}_q$. It says \begin{equation}\label{eq:rkNS2} \rk\NS(\mathcal{X}_q)=\rk MW(J_X/K_q)+2+\sum_{u}(f_{u,q}-1). \end{equation} Here the sum is over the closed points of $\mathbb{P}^1_{k,u}$ and $f_{u,q}$ denotes the number of irreducible components in the fiber over $u$. As we noted at the beginning of the proof of Proposition~\ref{prop:DPC}, the complement $\mathcal{X}^0_q$ of $\pi_q^{-1}(\infty_u)$ in $\mathcal{X}_q$ is the fiber product of $\wp_q:\mathbb{A}^1_{k,u}\to\mathbb{A}^1_{k,t}\subset \mathbb{P}^1_{k,t}$ and $\pi_1: \mathcal{X}\to\mathbb{P}^1_{k,t}$. Thus $$\sum_{u\neq\infty}(f_{u,q}-1)=q\sum_{t\neq\infty}(f_{t,1}-1)=c_1(q).$$ Also, $$f_{\infty,q}=\sum_{i,j}N_{ij}+m+n.$$ Substituting these into equation~\eqref{eq:rkNS2}, comparing with equation~\eqref{eq:rkNS}, and solving for $\rk MW(J_X/K_q)$ yields the claimed equality, namely $$\rk\MW(J_X/K_q) = \rk\Hom_{k-av}(J_{\mathcal{C}_q}, J_{\mathcal{D}_q})^{{\mathbb{F}_q}} - c_1(q) + c_2.$$ This completes the proof of the theorem. \end{proof}
\section{Examples---exact rank calculations} \label{s:exactrank} In this section, we use the rank formula of Theorem~\ref{thm:ranks} and results from the Appendix to give examples of various behaviors of ranks in towers of Artin-Schreier extensions.
\subsection{Preliminaries} Throughout this section, we let $k={\overline{\mathbb{F}}_p}$ and let $f$ and $g$ be rational functions on $\mathcal{C}=\mathcal{D}=\mathbb{P}^1$ with poles of order prime to $p$. Let $X$ be the smooth proper model of $\{f(x)-g(y)-t =0\} \subset \mathbb{P}^1_K \times_K \mathbb{P}^1_K$ where $K=k(t)$. We noted in Subsection~\ref{ss:ell-curves} above that $X$ is an elliptic curve when $f$ and $g$ have various types of low degree. If either $f$ or $g$ is a linear fractional transformation, then Proposition~\ref{prop:blowupgenus} shows that $X$ is rational, so its Jacobian is trivial and there is nothing to say about ranks. Also, if $f$ and $g$ are both quadratic and both have a double pole at some point, then $X$ is again rational by Proposition~\ref{prop:blowupgenus}. The first interesting case is thus when $(f,g)$ has type $(2,1+1)$.
\subsection{Elliptic curves with bounded ranks}\label{ss:ell-bounded} Assume that $p>2$ and that $(f,g)$ has type $(2,1+1)$, i.e., that $f$ and $g$ are quadratic rational functions such that $f$ has a double pole and $g$ has two distinct poles. Up to a change of coordinates on $x$ and $t$, we may assume that $f(x)=x(x-a)$ with $a\in\{0,1\}$. Also $g(y)=(y-1)(y-b)/y$ for some parameter $b\in k^\times$. The curve $X$ is then the curve of genus 1 with affine equation $$x(x-a)y-(y-1)(y-b)=ty.$$ The change of coordinates $(x,y)\to(y/x,x)$ brings $X$ into the Weierstrass form $$y^2-axy=x^3+(t-1-b)x^2+bx.$$
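The passage to this Weierstrass form is a routine substitution; the following sympy sketch (ours, included only as a check) verifies it:
\begin{verbatim}
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')

# Affine equation of X: x(x-a)y - (y-1)(y-b) = ty.
curve = x*(x - a)*y - (y - 1)*(y - b) - t*y

# Apply (x, y) -> (y/x, x) and clear the denominator.
substituted = curve.subs({x: y/x, y: x}, simultaneous=True)
weierstrass = sp.expand(substituted*x)

expected = sp.expand(y**2 - a*x*y - (x**3 + (t - 1 - b)*x**2 + b*x))
print(sp.simplify(weierstrass - expected))  # 0
\end{verbatim}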
Examining the discriminant and $j$-invariant of this model shows that $X$ has $I_1$ reduction at two finite values of $t$ and good reduction at all other finite places, so $c_1(q)=0$ for all $q$. It follows immediately from the definition that $c_2=0$ as well.
Thus our rank formula says that $$\rk X(K_q)=\rk\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{\mathbb{F}_q}.$$
Now since $f$ has a unique pole, by Lemma~\ref{lemma:p-rank}, $J_{\mathcal{C}_q}$ has $p$-rank 0 for all $q$. On the other hand, $g$ has simple poles, so the same lemma shows that $J_{\mathcal{D}_q}$ is ordinary for all $q$. Thus $\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})=0$ and we have $\rk X(K_q)=0$ for all $q$.
\subsection{Higher genus, bounded rank} The idea of Subsection~\ref{ss:ell-bounded} extends readily to higher genus. Namely, it is possible to construct curves $X$ of every genus such that the rank of $J_X(K_q)$ is a constant independent of $q$. Let $f$ be the reciprocal of a polynomial of degree $M$ with distinct roots, and let $g=y^N$. Then $X$ has genus $g_X=(M-1)(N-1)$ by Proposition~\ref{prop:blowupgenus}.
By Lemma~\ref{lemma:p-rank}, $J_{\mathcal{C}_q}$ is ordinary whereas $J_{\mathcal{D}_q}$ has $p$-rank zero. It follows that $\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})=0$ and {\it a fortiori\/} $\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}=0$. Since the term $c_1$ in the rank formula is non-positive (and goes to $-\infty$ with $q$ if it is not identically zero), and since $c_2$ is a constant, we see that in fact $c_1=0$ and the rank of $J_X(K_q)$ is bounded (in fact constant) independently of $q$.
If $p>2$, we may take $N=2$ and $M$ arbitrary to get examples of every genus. If $p=2$, we may take $M=2$ and $N$ odd to get examples of every even genus.
When $p=2$, a similar construction produces examples of curves with odd genus. Indeed, let $\mathcal{C}$ be an ordinary elliptic curve and let $f$ be a function on $\mathcal{C}$ with $M \geq 2$ simple poles. Applying Lemmas~\ref{lemma:ASbasics} and~\ref{lemma:p-rank}, we see that $\mathcal{C}_q$ is an ordinary curve of genus $M(q-1)+1$. If $\mathcal{D}=\mathbb{P}^1$ and $g=y^N$ with $N$ odd, then $\mathcal{D}_q$ has $p$-rank $0$ so $\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})=0$ as before. By Proposition~\ref{prop:blowupgenus}, $X$ has genus $N+(M-1)(N-1)$. Taking $N=3$ yields examples of every odd genus $\geq 5$.
\subsection{Elliptic curves with unbounded ranks}\label{ss:(1+1,1+1)a} Now suppose that $f=g$ is a quadratic rational function with two distinct poles.
We may choose coordinates so that $f(x)=(x-1)(x-a)/x$ and $g(y)=(y-1)(y-a)/y$ for some parameter $a\in k^\times$. The curve $X$ is then the curve of genus 1 with affine equation $$(x-1)(x-a)y-(y-1)(y-a)x=txy.$$ The change of coordinates $$(x,y)\to\left(-a\frac{(x-a)^2+ty}{(x-a)y},-a\frac{(x-a)}{y}\right)$$ brings $X$ into the Weierstrass form $$y^2-txy=x^3-2ax^2+a^2x.$$
Straightforward calculation with Tate's algorithm gives the reduction types of $X$. When $p>2$, we find that $X$ has reduction of type $I_1$ at two finite places ($t=\pm\sqrt{16a}$), reduction of type $I_2$ at $t=0$, and good reduction at all other finite places. When $p=2$, $X$ has reduction type $III$ and conductor exponent 3 at $t=0$, and it has good reduction at all other finite places. (Thus, the analytic ranks result of Corollary~\ref{cor:analytic-ranks} gives a non-trivial lower bound on the rank of $X(K_q)$ which we will see presently is not sharp.) In all cases it follows that $c_1(q)=q$. It is also immediate from the definition that $c_2=1$.
Next, we note that $\mathcal{C}_q=\mathcal{D}_q$ and so $$\Hom(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}=\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}.$$ Moreover, by Lemma~\ref{lemma:p-rank}, $\mathcal{C}_q$ is ordinary. Since $k={\overline{\mathbb{F}}_p}$, we know from Honda-Tate theory (cf.~Lemma~\ref{lemma:ASendos}) that $\en(J_{\mathcal{C}_q})$ is commutative of rank $2g_{\mathcal{C}_q}=2(q-1)$. Since this ring is commutative and the ${\mathbb{F}_q}$-action is by endomorphisms, every endomorphism commutes with it, so $\rk\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}=2(q-1)$, and the rank formula gives $$\rk X(K_q)=2(q-1)-q+1=q-1.$$
We will study this example in much more detail in Section~\ref{Slattice}. In particular, we will give explicit generators of a subgroup of finite index in $X(K_q)$.
\subsection{Another elliptic curve with unbounded
ranks}\label{ss:3-3a} In this example we take $p\neq3$ and $f=g=x^3$. Then $X$ is the isotrivial elliptic curve $x^3-y^3-t=0$ with $j$-invariant 0. The change of coordinates $$(x,y)\to\left(\frac{y+9t}{3x},\frac{y}{3x}\right)$$ brings $X$ into Weierstrass form $$y^2+9ty=x^3-27t^2.$$ Tate's algorithm shows that $X$ has good reduction away from 0 and $\infty$, and reduction type $IV$ at $0$. (In particular, the analytic ranks result of Corollary~\ref{cor:analytic-ranks} does not give a non-trivial lower bound on the rank.) It follows that $c_1(q)=2q$ and $c_2=2$. The rank formula shows that $\rk X(K_q)=\rk\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}-2(q-1)$.
Suppose that $p\equiv2\pmod3$. Then the curve $\mathcal{C}_q$ is supersingular of genus $q-1$ (in other words, its Newton polygon has all slopes equal to $1/2$). Applying Lemma~\ref{lemma:ASendos} part (3), we find that the rank of $\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}$ is $4(q-1)$ and $\rk X(K_q)=2(q-1)$. In Subsection~\ref{ss:isotrivial-points} below, we will write down explicit points generating a finite index subgroup of $X(K_q)$.
\subsection{Higher genus, unbounded rank} \label{S:highergenus} It is clear from Lemma~\ref{lemma:ASendos} that when we take $f=g$ in the construction of Section~\ref{s:Berger}, in many cases the main term of the rank formula, namely $\rk\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}$, will go to infinity with $q$. If we can arrange the geometry so that $c_1$ is not too large, we will have unbounded ranks. In this subsection, we show that this is not difficult to do.
Before giving constructions, we record two easy lemmas about irreducibility of curves.
\begin{lemma}\label{lemma:few-nodes-irreducible} Suppose that $C\subset\mathbb{P}^1\times\mathbb{P}^1$ is a curve of bidegree $(M,N)$ which has only ordinary double points as singularities. Suppose further that the number of double points is less than $\min(M,N)$. Then $C$ is irreducible. \end{lemma}
\begin{proof}
If $C$ is reducible, then it is the union of curves of bidegrees
$(i,j)$ and $(M-i,N-j)$ for some $(i,j)\neq(0,0)$ and $\neq(M,N)$.
The intersection number of the two components is $(M-i)j+(N-j)i$ and
it is not hard to check that the minimum of this function over the
allowable values of $(i,j)$ is $\min(M,N)$. Thus if $C$ has fewer
than $\min(M,N)$ ordinary double points and no other singularities,
then it cannot be reducible. \end{proof}
\begin{lemma}\label{lemma:big-Gal-irreducible}
Let $L$ be an arbitrary field and let $f(x)=a(x)/b(x)\in L(x)$ be a rational
function of degree $M$ such that $a(x)-b(x)t$ is irreducible and
separable in $\overline{L}(t)[x]$. Suppose that the Galois group $G$ of
the splitting field of $a(x)-b(x)t$ over $\overline{L}(t)$ is a
2-transitive subgroup of $S_M$. Then the plane curve with affine
equation $f(x)-f(y)=0$ \textup{(}or rather
$a(x)b(y)-a(y)b(x)=0$\textup{)} has exactly two irreducible
components over $\overline{L}$. \end{lemma}
\begin{proof}
Consider the morphism $\pi_x:\mathbb{P}^1_{L,x}\to\mathbb{P}^1_{L,t}$ given by
$x\mapsto t=f(x)$. The corresponding extension of function fields
is $L(t) \hookrightarrow L(t)[x]/(a(x)-b(x)t)\cong L(x)$. Make a
similar definition of $\pi_y$ with $y$ replacing $x$
everywhere. Then the curve $f(x)-f(y)=0$ is the fiber product of
$\pi_x$ and $\pi_y$. The function field (or rather total ring of
fractions) of this fiber product is
$\overline{L}(x)\otimes_{\overline{L}(t)}\overline{L}(y)$. By basic
field theory, its set of irreducible components over $\overline{L}$ is in
bijection with the set of orbits of $G$ acting on ordered pairs of
roots of $a(x)-b(x)t$ in $\overline{L(t)}$. By our hypotheses,
there are exactly two of these, namely the diagonal (corresponding
to the component $x=y$), and the rest. Thus $f(x)-f(y)=0$ has
exactly two components. \end{proof}
We return to the construction of Section~\ref{s:Berger} and consider the case where $k={\overline{\mathbb{F}}_p}$ and $f=g$. We assume that $f$ has degree $M\ge2$ and is generic in the following sense: if the critical values of $f:\mathbb{P}^1_{x} \to\mathbb{P}^1$ are $\alpha_1,\dots,\alpha_{2M-2}$, then our assumption is that the set of differences $\alpha_i-\alpha_j$ for $i\neq j$ has maximum cardinality, namely $(2M-2)(2M-3)$. (This is slightly different than the condition that the pair $(f,f)$ be generic in the sense of Subsection~\ref{ss:higher-g-unbounded}.)
Our assumption implies in particular that $f$ has $2M-2$ distinct critical values. Therefore, the type of $f$ (in the sense of Subsection~\ref{ss:types}) is $1+1+\cdots+1$, i.e., $f$ has $M$ simple poles. In this case the genus of $\mathcal{C}_q$ is $(M-1)(q-1)$, $J_{\mathcal{C}_q}$ is ordinary by Lemma~\ref{lemma:p-rank}, and $$\rk\en(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}=2g_{\mathcal{C}_q}=2(M-1)(q-1)$$ by Lemma~\ref{lemma:ASendos}.
Now let $X$ be the curve over $k(t)$ defined by $f(x)-f(y)-t=0$, with regular proper model $\pi:\mathcal{X}\to\mathbb{P}^1_{k,t}$. By Proposition~\ref{prop:blowupgenus}, the genus of $X$ is $(M-1)^2$. Arguing as in Subsection~\ref{ss:higher-g-unbounded}, we see that the fibers of $\pi$ away from $t=0,\infty$ are either smooth, or have a single ordinary double point. By Lemma~\ref{lemma:few-nodes-irreducible}, they are thus irreducible. If we assume further that $f$ has a large Galois group (in the sense of Lemma~\ref{lemma:big-Gal-irreducible}), then the fiber of $\pi$ over $t=0$ has two components. Thus $c_1(q)=q$ and our rank formula says that $$\rk MW(J_X/K_q)=2(M-1)(q-1)-q+c_2.$$ Since $M\ge2$, the rank is unbounded as $q$ varies. (The reader has no doubt already noticed that the case $M=2$ is exactly the situation of Subsection~\ref{ss:(1+1,1+1)a}.)
\subsection{Explicit curves of higher genus and unbounded rank} As a complement to the preceding subsection, we give an example showing that even with fairly special choices of $f=g$, we get unbounded ranks. Namely, let us take $f=1/(x^m-1)$ where $m>1$ is prime to $2p$. Then the curve $X$ over $k(t)$ has equation $$y^m-x^m-t(x^m-1)(y^m-1)=0.$$
It is obvious that the fiber of $\mathcal{X}$ over $t=0$ is reducible, with $m$ components. We claim that for all other finite values of $t$, the fiber is irreducible. In other words, we claim that for all $a\in k^\times$, the plane curve $$\mathcal{X}_a:\qquad y^m-x^m-a(x^m-1)(y^m-1)=0$$ is irreducible. Since the only critical values of $f$ are $0$ and $-1$, both with multiplicity $m-1$, the fibers away from $t\in\{0,\pm1,\infty\}$ are smooth and thus, by Lemma~\ref{lemma:few-nodes-irreducible}, irreducible. The fiber over $t=-1$ is the curve $$x^my^m-2x^m+1=0.$$ We can see that this is irreducible by considering it as a Galois cover of $\mathbb{P}^1_{k,x}$ with Galois group $\mu_m$. To wit, the cover is totally ramified over the regular points $x=(1/2)^{1/m}$, $y=0$, so the curve must be irreducible. The argument at $t=1$ is similar and we omit it.
Using the results of the preceding paragraph, we find that $c_1(q)=(m-1)q$, $c_2=(m-1)^2$, and our rank formula yields \begin{align*} \rk MW(X/K_q)&=2(m-1)(q-1)-(m-1)q+(m-1)^2\\ &=(q+m-3)(m-1) \end{align*} which grows linearly with $q$.
\subsection{Analytic ranks and supersingular factors} In this subsection, we show that the rank formula of Theorem~\ref{thm:ranks} gives a connection between the symplectic and orthogonal versions of the analytic rank lower bounds, i.e., between Corollary~\ref{cor:analytic-ranks} and Proposition~\ref{prop:ss-factors}.
Consider the situation of Proposition~\ref{prop:ell-ranks1} with $(f,g)$ generic of type $(2,2+1)$ and $p$ odd. We suppose that $f$ and $g$ are defined over a finite field $k_0$ of cardinality $r$, and we let $k={\overline{\mathbb{F}}_p}$ and $K=k(t)$. We assume that $q$ is a power of $r^2$ and set $K_q={\overline{\mathbb{F}}_p}(u)$ with $u^q-u=t$.
The curve $X$ given by $f-g=t$ has genus 1, and by Proposition~\ref{prop:ell-ranks1} we have $$\rk X(K_q)-\rk X(K)\ge \sqrt{q}-1.$$
The proof of Proposition~\ref{prop:ell-ranks1} shows that $X$ has three finite places of bad reduction, each with a single ordinary double point. It follows from Lemma~\ref{lemma:few-nodes-irreducible} that the fibers are irreducible, so $c_1(q)=0$. It is immediate that $c_2=1$, so the rank formula of Theorem~\ref{thm:ranks} reads $$\rk X({\overline{\mathbb{F}}_p}(u))=\rk\Hom_{{\overline{\mathbb{F}}_p}}(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}+1.$$ The formula of Remark~\ref{rem:q=1} for $\rk X(K)$ shows that $\rk X(K)=1$. Considering the lower bound of the preceding paragraph, we find that $$\rk\Hom_{{\overline{\mathbb{F}}_p}}(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}\ge\sqrt{q}-1.$$
Now the Jacobian of $\mathcal{C}_q$ is supersingular of dimension $(q-1)/2$. By Lemma~\ref{lemma:ASbasics} and Theorem~\ref{thm:slopes}, the Jacobian of $\mathcal{D}_q$ has dimension $3(q-1)/2$ and slopes 0,$1/2$, and 1, each with multiplicity $(q-1)$. The slopes suggest, but do not prove, that $J_{\mathcal{D}_q}$ has supersingular elliptic curves as isogeny factors. The ranks formula does prove this. Indeed, if $e$ is the multiplicity of the supersingular elliptic curve in the Jacobian of $\mathcal{D}_q$, then $$\rk\Hom_{{\overline{\mathbb{F}}_p}}(J_{\mathcal{C}_q},J_{\mathcal{D}_q})^{{\mathbb{F}_q}}=4\frac{q-1}2e\frac1{q-1}=2e.$$
Therefore $2e\ge \sqrt{q}-1$, and we see that $J_{\mathcal{D}_q}$ has a supersingular elliptic curve as an isogeny factor with multiplicity at least $(\sqrt{q}-1)/2$. This is exactly the conclusion we would obtain by applying Proposition~\ref{prop:ss-factors} directly to $\mathcal{D}_q$.
A similar discussion applies when we take $(f,g)$ to have type $(2,N)$ with $N$ even. If $p\equiv1\pmod N$, slope considerations (as in Theorem~\ref{thm:slopes}) suggest supersingular factors. Without this congruence on $p$, we know little about slopes. Still, for all $p\nmid 2N$ we get supersingular factors in $J_{\mathcal{D}_q}$ directly from Proposition~\ref{prop:ss-factors} or indirectly via Corollary~\ref{cor:analytic-ranks} and the rank formula of Theorem~\ref{thm:ranks}.
\section{Examples---Explicit points and heights} \label{s:explicit}
\subsection{A variant of the construction of Section~\ref{s:Berger}} There is a slight modification of the construction of Section~\ref{s:Berger} which is very useful for producing explicit points. To explain it, choose data $\mathcal{C}$, $\mathcal{D}$, $f$ and $g$ as usual. Assume that $f=g$ and that the covers $f:\mathcal{C}\to\mathbb{P}^1$ and $g:\mathcal{D}\to\mathbb{P}^1$ are geometrically Galois, necessarily with the same group $G$. For $q$ a power of $p$, we have the curves $\mathcal{C}_q$ and $\mathcal{D}_q$ with equations $z^q-z=f(x)$ and $w^q-w=g(y)$ respectively. The surface $\mathcal{X}_q$ is birational to the quotient of $\mathcal{C}_q\times\mathcal{D}_q$ by the diagonal action of ${\mathbb{F}_q}$, and its function field is generated by $x$, $y$, and $u$ with $u=z-w$.
Now consider the graph of Frobenius $Fr_q:\mathcal{C}_q\to\mathcal{D}_q$, i.e., the set $$\{(x,z,y,w)=(x,z,x^q,z^q)\}\subset\mathcal{C}_q\times\mathcal{D}_q.$$ Its image in $\mathcal{X}_q$ is $\{(x,y,u)=(x,x^q,z-z^q)=(x,x^q,-f(x))\}$ which is obviously a multisection of $\mathcal{X}_q\to\mathbb{P}^1_u$ whose degree over $\mathbb{P}^1_u$ is equal to the degree of $f$. It is more convenient to have a section, and we can arrange for this by dividing $\mathcal{X}_q$ by the action induced by the diagonal or anti-diagonal action of $G$ on $\mathcal{C}_q\times\mathcal{D}_q$. (The two quotients can be different; they can even give rise to curves $X$ with different genera, and which to take is dictated by the circumstances at hand.) Calling (a nice model of) the quotient $\mathcal{X}'_q$, and writing $X'/K_q$ for the generic fiber of $\mathcal{X}'_q\to\mathbb{P}^1_u$, the image of the graph of Frobenius in $\mathcal{X}'_q$ will then be a section and will give rise to a $K_q$-rational point of $X'$. We will use this variant in the two examples that follow.
\subsection{An isotrivial elliptic curve with explicit
points}\label{ss:isotrivial-points} For this example, we assume that $q\equiv2\pmod3$, and we take $f=g=x^3$. The curve $X$ thus has equation $x^3-y^3=t$. We take the quotient by $G=\mu_3$ acting anti-diagonally (i.e., $(x,y)\mapsto(\zeta x,\zeta^{-1}y)$). The invariants are generated by $X=xy$ and $Y=-x^3$ and the relation between them is $Y^2+tY=X^3$. This is the equation of our curve $X'$. Note that $X'$ and $X$ are 3-isogenous elliptic curves, so they have the same Mordell-Weil rank and the prime-to-3 parts of their Tate-Shafarevich groups are isomorphic. In Subsection~\ref{ss:3-3a}, we found that the rank of $X({\overline{\mathbb{F}}_q}(u))$ is $2(q-1)$. Presently we will find explicit points generating a subgroup of $X'(\mathbb{F}_{q^2}(u))$ of this rank.
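For the record, the stated relation between the invariants is immediate: with $X=xy$ and $Y=-x^3$ we have $$Y^2+tY=x^6-tx^3=x^3(x^3-t)=x^3y^3=X^3,$$ using $x^3-y^3=t$.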
To ease notation, we write $E$ for $X'$. Note that $E$ is isotrivial, with $j$-invariant $j=0$. It becomes isomorphic to a constant curve $E_0$ over ${\mathbb{F}_p}(t^{1/3})$. The underlying $E_0$ is supersingular since we have assumed that $p\equiv2\mod3$.
Thus our aim is to find points on $$E:\qquad Y^2+tY=X^3$$ over $K=k(u)$ where $u^q-u=t$ and where $k$ is the field of $q^2$ elements.
\begin{prop} The torsion subgroup of $E(K)$ is isomorphic to $\mathbb{Z}/3\mathbb{Z}$,
with non-trivial points $(0,0)$ and $(0,-t)$. \end{prop}
\begin{proof}
Let $P=(X,Y) \in E(K)$ be a non-trivial torsion point and let
$L=K(v)$ where $v^3=t$. Over $L$, the change of coordinates
$X=v^2x'$, $Y=v^3y'$ gives an isomorphism between $E$ and the
constant curve $E_0: (y^{\prime})^2+y'=(x^{\prime})^3$. It is well
known (see for example \cite[I.6.1]{Ulmer11}) that the torsion points
of $E_0(L)$ are defined over the finite constant field. Thus
$(X,Y)=(av^2,bv^3)$ for some $a,b\in k$. Since these coordinates
are also in $K$, we must have $a=0$, and then it follows easily that
$b=0$ or $b=-1$, yielding the two points in the statement of the
proposition. \end{proof}
Next we construct some non-torsion points. Using the graph of Frobenius, we find a point $(X,Y)=(u^{(q+1)/3},u)$ on $E(K)$. More precisely, the graph of Frobenius $Fr_q:\mathcal{C}_q\to\mathcal{D}_q$ is a curve in $\mathcal{C}_q\times\mathcal{D}_q$. Its image in $\mathcal{X}_q$ (which is birational to $\left(\mathcal{C}_q\times\mathcal{D}_q\right)/{\mathbb{F}_q}$) yields a multisection of $\mathcal{X}_q\to\mathbb{P}^1_u$ of degree 3, given by $y=x^q$ and $u=-x^3$. Taking the quotient by the action of $\mu_3$ discussed above yields the section $X=xy=u^{(q+1)/3}$, $Y=-x^3=u$ whose generic fiber is the desired rational point.
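One can also verify this point directly: with $X=u^{(q+1)/3}$, $Y=u$ and $t=u^q-u$, $$Y^2+tY=u^2+(u^q-u)u=u^{q+1}=X^3.$$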
Now using the Galois group of $k(u)/k(t)$, and the automorphism group of $E$, we get $3q$ points labelled by $i\in\mathbb{Z}/3\mathbb{Z}$ and $\alpha\in{\mathbb{F}_q}$: $$P_{i,\alpha}=\left(\zeta^i(u+\alpha)^{(q+1)/3},u+\alpha\right).$$
Considering the divisor of $Y-(u+\alpha)$ shows that $\sum_{i\in(\mathbb{Z}/3\mathbb{Z})}P_{i,\alpha}=0$. Considering the divisor of $X-Y^{(q+1)/3}\zeta^i$ shows that $\sum_{\alpha\in{\mathbb{F}_q}}P_{i,\alpha}$ is the 3-torsion point $(0,0)$. Thus the subgroup of $E(K)$ generated by the $P_{i,\alpha}$ has rank at most $2(q-1)$ and contains all the torsion points of $E(K)$. We will see by calculating heights that it has rank exactly $2(q-1)$.
In the following result, we normalize away the factors of $\log r$ in the canonical height, as in \cite[Ch.~4]{Ulmer14b}.
\begin{prop} \label{p:height} The height pairing on $E(K)$ satisfies: $$\langle P_{i,\alpha},P_{j,\beta}\rangle= \langle P_{i-j,\alpha-\beta},P_{0,0}\rangle.$$ We have $$\langle P_{i,\alpha},P_{0,0}\rangle= \begin{cases} \frac{2(q-1)}3&\text{if $i=0,\alpha=0$}\cr -\frac23&\text{if $i=0$, $\alpha\neq0$}\cr -\frac{q-1}3&\text{if $i\neq0$, $\alpha=0$}\cr \frac13&\text{if $i\neq0$, $\alpha\neq0$} \end{cases} $$ \end{prop}
\begin{proof} We refer to \cite{Shioda99} or \cite[4.3]{Ulmer14b} for a detailed account of the height pairing.
That $\langle P_{i,\alpha},P_{j,\beta}\rangle= \langle P_{i-j,\alpha-\beta},P_{0,0}\rangle$ follows from the fact that $E$ is defined over ${\mathbb{F}_p}(t)$ and the height pairing is invariant under the action of $\gal(K/{\mathbb{F}_p}(t))$. Thus to compute the pairing in general, we may reduce to the case where $(j,\beta)=(0,0)$. What has to be computed are intersection numbers and the components above places of $K$ which contain the reductions of points.
We write $\mathcal{X}'\to\mathbb{P}^1_u$ for the regular minimal model of $E/K$ and we write $P_{i,\alpha}$ also for the sections of $\mathcal{X}'$ corresponding to the points with these labels. We write $O$ for the 0-section of $\mathcal{X}'$. With this notation, as in \cite[Lemma 1.18]{CoxZucker79}, the height is given by $$\langle P_{i,\alpha},P_{0,0}\rangle= -\left(P_{i,\alpha}-O\right) \cdot \left(P_{0,0}-O+D\right)$$ where the dot signifies the intersection product on $\mathcal{X}'$, and where $D$ is a divisor with $\mathbb{Q}$-coefficients supported in fibers such that $P_{0,0}-O+D$ is orthogonal to all components of all fibers of $\mathcal{X}'\to\mathbb{P}^1$. The divisor $D$ is easily calculated once we know which component of each fiber $P_{0,0}$ lands on, cf.~\cite{CoxZucker79}.
Standard calculations using Tate's algorithm \cite{Tate75} show that $E$ has reduction type $IV$ at the places $u=\gamma \in{\mathbb{F}_q}$ and over $u=\infty$. The non-identity components correspond to components of the tangent cone $Y(Y+t)=0$.
The height (or degree) of $\mathcal{X}'\to\mathbb{P}^1_u$ (in the sense of \cite[III.2.4]{Ulmer11}) is $(q+1)/3$, so the self-intersection of any section is $-(q+1)/3$. So, $O \cdot O=P_{0,0} \cdot P_{0,0}=-(q+1)/3$. We see that $P_{i,\alpha} \cdot O=0$ for all $(i,\alpha)$ because the points $P_{i,\alpha}$ have polynomial coordinates of low degree. We briefly summarize the calculations needed to compute the multiplicity of the intersection of $P_{i,
\alpha}$ and $P_{0,0}$ for $(i,\alpha) \neq (0,0)$. (i) If $\alpha=0$, then the multiplicity is $(q-2)/3$ at $u=0$ and is zero at the other finite places. (ii) If $\alpha\neq0$, the equation for the $Y$ coordinate shows the multiplicity is zero at every finite place. (iii) At infinity, the multiplicity is $(q+1)/3$ if $i=0$ and is $(q-2)/3$ if $i \neq 0$. Putting these local contributions together gives the ``geometric part'' of the height, namely $-(P_{i,\alpha}-O)\cdot(P_{0,0}-O)$.
Similar calculations show that $P_{i,\alpha}$ lands on the identity component when $\alpha\neq\gamma$ and on the non-identity component indexed by $Y=0$ at $\alpha=\gamma$ and at $\infty$. Thus the ``correction factor'' $-(P_{i,\alpha}-O)\cdot D$ is $-4/3$ if $\alpha=0$ and $-2/3$ if $\alpha\neq0$, as in \cite[Lemma 1.19]{CoxZucker79}. Summing the geometric part and the correction factor gives the heights asserted in the statement of the proposition. \end{proof}
Let $V$ be the subgroup of $E(K)$ generated by $\{P_{i,\alpha} \mid i\in\mathbb{Z}/3\mathbb{Z}, \ \alpha\in{\mathbb{F}_q}\}$. It follows immediately from Proposition~\ref{p:height} that $V$ has rank $2(q-1)$. Write $A_n^*$ for the lattice of rank $n$ dual to the $A_n$ root lattice (cf.~\cite[4.6.6]{ConwaySloaneSPLG}). It is well known to have discriminant $(n+1)^{n-1}$. For a real number $a$, write $aA_n^*$ for the scaling of $A_n^*$ by $a$. Then the sublattice of $E(K)/tor$ generated by the $P_{i,\alpha}$ is isomorphic to the tensor product lattice $A_2^*\otimes(\frac13A_{q-1}^*)$. It thus has discriminant $$R'=q^{2(q-2)}3^{1-q}.$$
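The rank assertion is also easy to check numerically for small $q$: the Gram matrix of the $3q$ points can be assembled directly from Proposition~\ref{p:height} and its rank computed by machine. The sketch below is ours and only illustrative; since the pairing depends only on whether $i=j$ and whether $\alpha=\beta$, the labels $0,\dots,q-1$ may stand in for the elements of ${\mathbb{F}_q}$.
\begin{verbatim}
from sympy import Matrix, Rational

def height(q, i, alpha, j, beta):
    # <P_{i,alpha}, P_{j,beta}> as given by the Proposition above.
    if i == j and alpha == beta:
        return Rational(2*(q - 1), 3)
    if i == j:
        return Rational(-2, 3)
    if alpha == beta:
        return Rational(-(q - 1), 3)
    return Rational(1, 3)

q = 5  # any q with q = 2 (mod 3)
idx = [(i, al) for i in range(3) for al in range(q)]
G = Matrix([[height(q, i, al, j, be) for (j, be) in idx]
            for (i, al) in idx])
print(G.rank() == 2*(q - 1))  # True
\end{verbatim}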
Now $E(K)_{tor}=V_{tor}$ and $R'$ is the discriminant of the lattice $V/V_{tor}$. The discriminant of the full lattice $E(K)/E(K)_{tor}$ is thus $R'/[E(K):V]^2$. The integrality result of \cite[Prop.~9.1]{Ulmer14a} shows that $[E(K):V]$ divides $q^{q-2}$.
The degree of the $L$-function of $E$ over $K$ is $2(q-1)$. Since the rank of $E(K)$ is at least this big (by the height computation), it is equal to $2(q-1)$ and the $L$-function of $E$ is $(1-q^{2(1-s)})^{2(q-1)}$. (Recall that the ground field $k$ is the field of $q^2$ elements.) In particular, the leading term of the $L$-function at $s=1$ is 1. Using the BSD formula, we find that
$$[E(K):V]^2=|\sha(E/K)|q^{\frac43(q-2)}.$$ It follows that $q^{\frac23(q-2)}$ divides the index $[E(K):V]$. Also by \cite[Prop.~9.1]{Ulmer14a}, the order of $\sha(E/K)$ is a power of $p$ which divides $q^{\frac23(q-2)}$.
Experience with analogous situations suggests that there should be an easily constructed subgroup of $E(K)$ whose index is
$|\sha(E/K)|^{1/2}$. We now propose a candidate for this subgroup.
First, we note that since $q\equiv2\pmod3$, the curve $\mathcal{C}_q$ is a quotient of the Hermitian (Fermat) curve $F$ with equation $x_1^{q+1}=z_1^q-z_1$ via the map $$(z_1,x_1)\mapsto(z=z_1,x=x_1^{(q+1)/3}).$$ Choose elements $\alpha,\beta,\gamma$ in ${\overline{\mathbb{F}}_p}$ satisfying $\alpha^{q^2-1}=-1$, $\gamma=\alpha^q$, and $\beta^q-\beta=\gamma^{q+1}=-\alpha^{q+1}$. Then we have an automorphism of $F$ given by $$(z_1,x_1)\mapsto(z_1+\alpha x_1+\beta,x_1+\gamma).$$ We take the graph of this automorphism and map it to $\mathcal{C}_q\times\mathcal{D}_q$, then on to $\mathcal{X}_q$ and $\mathcal{X}'_q$, which leads to a rational point. After some simplifying algebra, we arrive at the following points:
If $p>2$, for each solution $\beta$ of $\beta^{q-1}=-1$ we have a point $$P_\beta:\qquad(X,Y)=\left(-\left(\frac{u^2-\beta^2}{2\beta}\right)^{(q+1)/3},
\frac{(u-\beta)^{q+1}}{2\beta}\right).$$ For each choice of $\beta$, we may act on $P_\beta$ by elements of the Galois group of $k(u)/k(t)$ (sending $u$ to $u+\alpha$ with $\alpha\in{\mathbb{F}_q}$) and the automorphism group of $E$ (sending $X$ to $\zeta_3X$). This leads to a set of $3q(q-1)$ points, all with coordinates in $K=k(u)=\mathbb{F}_{q^2}(u)$.
If $p=2$, it is convenient to index our points by elements $\beta\in\mathbb{F}_{q^2}\setminus{\mathbb{F}_q}$. The corresponding point is $$P_\beta:\qquad(X,Y)= \left(\left(\frac{(u+\beta)(u+\beta^q)}{\beta+\beta^q}\right)^{(q+1)/3}, \frac{(u+\beta)^{q+1}}{\beta+\beta^q}\right)$$ and for each value of $\beta$ we can apply automorphisms of $E$ to get a triple of points. Again we get a total of $3q(q-1)$ points.
Recall that $V$ is the subgroup of $E(K)$ generated by the $P_{i,\alpha}$. Let $V_1$ be the subgroup generated by $V$, the $P_\beta$, and their images under the action of $\gal(K/{\mathbb{F}_p}(t))$ and $\aut(E)$. We conjecture that $[V_1:V]\overset?=q^{\frac23(q-2)}$ or equivalently that
$$[E(K):V_1]^2\overset?=|\sha(E/K)|.$$
For all prime powers $q\le32$ with $q\equiv2\pmod3$, we have confirmed this conjecture by using machine calculation to compute the height pairings $\langle P_\beta,P_{i,\alpha}\rangle$.
\begin{rem}
If we again take $f=g=x^3$, but assume that $q\equiv1\pmod3$, then
we do not get any interesting results about ranks (other than what
can be deduced from the above if $q$ is a power of $p$ with
$p\equiv2\pmod 3$). The reason is that we have little control of
the Jacobian of $\mathcal{C}_q$ in this case. It might well be ordinary, in
which case we would have $\rk X({\overline{\mathbb{F}}_q}(u))=0$. \end{rem}
\subsection{A family of non-isotrivial elliptic curves with explicit
points} \label{Slattice}
In this subsection, let $k={\mathbb{F}_q}$ and suppose $q$ is odd. Let $f(x)=(x-1)(x-a)/x$ for some $a\in k\setminus\{0,1\}$ and let $\mathcal{X}$ be a smooth projective surface over $k$ birational to the affine surface in $\mathbb{A}^3$ with coordinates $(x,y,t)$ defined by $$f(x)-f(y)=t.$$ We may choose $\mathcal{X}$ such that there is a morphism $\pi_1:\mathcal{X}\to\mathbb{P}^1_t$ extending the projection $(x,y,t)\mapsto t$. Let $\mathcal{X}_q$ be a smooth proper model of the fiber product of $\pi_1$ and $\mathbb{P}^1_u\to\mathbb{P}^1_t$, $t=u^q-u$. The generic fiber of $\mathcal{X}_q\to\mathbb{P}^1_u$ is the curve over $k(u)$ studied in Subsection~\ref{ss:(1+1,1+1)a} above.
Let $\mathcal{C}_q$ and $\mathcal{D}_q$ be the smooth projective curves defined by the equations $z^q-z=f(x)$ and $w^q-w=f(y)$ respectively. We saw in the course of analyzing the construction of Section~\ref{s:Berger} that $\mathcal{X}_q$ is birational to the quotient of $\mathcal{C}_q\times_k\mathcal{D}_q$ by the diagonal action of ${\mathbb{F}_q}$. As in the previous section, we want to take a further quotient. Note that since $f$ is quadratic, $\mathcal{C}_q$ and $\mathcal{D}_q$ are double covers of the $z$- and $w$-lines respectively; thus they are Galois covers with group $\mathbb{Z}/2\mathbb{Z}$. We let $\mathcal{X}'_q$ be (a smooth projective model of) the quotient of $\mathcal{X}_q$ by the diagonal action of $\mathbb{Z}/2\mathbb{Z}$.
We have a morphism $\mathcal{X}'_q\to\mathbb{P}^1_u$ sitting in a commutative diagram \begin{equation*} \xymatrix{ \mathcal{X}_q\ar[d]\ar[r]&\mathcal{X}'_q\ar[d]\\ \mathbb{P}^1_u\ar@{=}[r]&\mathbb{P}^1_u} \end{equation*} We will see in a moment that the generic fiber $X'$ of $\mathcal{X}'_q\to\mathbb{P}^1_u$ is an elliptic curve over $k(u)$ and so the morphism $X\to X'$ induced by $\mathcal{X}_q\to\mathcal{X}'_q$ is a 2-isogeny. It follows that the rank of $X'(k(u))$ is equal to the rank of $X(k(u))$, and we showed in Subsection~\ref{ss:(1+1,1+1)a} that this rank is $q-1$. Our main goal in this section is to exhibit an explicit set of points generating a subgroup of $X'(k(u))$ of finite index.
We now proceed to find an explicit equation for $X'$, working birationally, i.e., with function fields. The function field of $\mathcal{X}_q$ is generated by $x$, $y$, and $u$, with relation $f(x)-f(y)=u^q-u$. The action of $\mathbb{Z}/2\mathbb{Z}$ sends $x$ to $a/x$, $y$ to $a/y$, and fixes $u$. Let $$s_1=(x+\frac ax),\qquad s_2=(y+\frac ay),\quad\text{and}\quad s_3=(x-\frac ax)(y-\frac ay).$$ It is easy to see that the field of invariants of $\mathbb{Z}/2\mathbb{Z}$ acting on $\mathcal{X}_q$ is generated by $s_1$, $s_3$ and $u$. (Note that $u^q-u=t=s_1-s_2$.) The relations are generated by \begin{align*} s_3^2&=\left(s_1^2-4a\right)\left(s_2^2-4a\right)\\ &=\left(s_1^2-4a\right)\left((s_1-t)^2-4a\right) \end{align*}
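Both identities used here, $f(x)-f(y)=s_1-s_2$ and $s_3^2=(s_1^2-4a)(s_2^2-4a)$, can be confirmed mechanically; here is a short sympy check (ours, for illustration only):
\begin{verbatim}
import sympy as sp

x, y, a = sp.symbols('x y a')
f = lambda v: (v - 1)*(v - a)/v

s1, s2 = x + a/x, y + a/y
s3 = (x - a/x)*(y - a/y)

print(sp.simplify(f(x) - f(y) - (s1 - s2)))              # 0
print(sp.simplify(s3**2 - (s1**2 - 4*a)*(s2**2 - 4*a)))  # 0
\end{verbatim}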
It is thus evident that the generic fiber of $\mathcal{X}'_q\to\mathbb{P}^1_u$ is the curve $X'$ of genus 1 with equation \begin{equation}\label{eq:X'} s_3^2=\left(s_1^2-4a\right)\left(s_1^2-2ts_1+t^2-4a\right). \end{equation}
Now for convenience (explained below), we {\it assume that $a$ is a
square in $k$\/}, say $a=b^2$. Then $X'$ has the $k(u)$-rational point $s_1=-2b$, $s_3=0$. We use this point as origin and make the substitution $$s_1=-2b\left(\frac{X+4bt}{X-4bt}\right)\quad s_3=\frac{4bt(4b+t)Y}{(X-4bt)^2}$$ which brings $X'$ into the Weierstrass form \begin{equation}\label{eq:E} E:\qquad Y^2=X(X+16b^2)(X+t^2). \end{equation} (Note that $E$ is closely related to the Legendre curve.)
We are now going to write down some explicit points of $E(k(u))$. First consider the graph of the $q$-power Frobenius morphism $\mathcal{C}_q\to\mathcal{D}_q$, which is the closed subset $\Gamma\subset\mathcal{C}_q\times_k\mathcal{D}_q$ defined by $y=x^q$, $w=z^q$. The image of $\Gamma$ in $\mathcal{X}_q$ is defined by $y=x^q$ and $u=z-z^q=-f(x)$. The image of $\Gamma$ in $\mathcal{X}'_q$ is defined by \begin{align*} s_1&=f(x)+a+1\\ &=-u+a+1 \end{align*} and \begin{align*} s_3&=\left(x-\frac ax\right)^{q+1}\\ &=\left((-u+a+1)^2-4a\right)^{(q+1)/2}\\ &=\left(u^2-2(a+1)u+(a-1)^2\right)^{(q+1)/2}. \end{align*} The image of $\Gamma$ in $\mathcal{X}'_q$ turns out to be a section and yields the rational point \begin{align*} X&=4bt\left(\frac{u^q-(b-1)^2}{u-(b+1)^2}\right)\\ Y&=\frac{4bt(4b+t)\left(u^2-2(a+1)u+(a-1)^2\right)^{(q+1)/2}} {\left(u-(b+1)^2\right)^2} \end{align*} on $E(k(u))$.
We write $Q(u)$ for the point in $E(k(u))$ defined by the last display. Since $E$ is defined over $k(t)$ and the Galois group of $k(u)/k(t)$ acts via the substitutions $u\mapsto u+\alpha$, it is clear that $Q(u+\alpha)$ lies in $E(k(u))$ for all $\alpha\in{\mathbb{F}_q}$. To streamline coordinates, let $P(u)=Q(u+(b+1)^2)$, so that $P(u)$ is given by \begin{align*} X&=4bt\left(\frac{u^q+4b}{u}\right)\\ Y&=\frac{4bt(4b+t)\left(u^2+4bu\right)^{(q+1)/2}}{u^2}. \end{align*} For $\alpha\in{\mathbb{F}_q}$, write $P_\alpha$ for $P(u-\alpha)$.
(We note that the curve \eqref{eq:X'} has two evident rational points, namely the two points at infinity. Instead of using one of them to go to the Weierstrass form \eqref{eq:E}, we assumed that $a=b^2$ in $k$ and used the point $s_1=-2b$, $s_3=0$. This does not affect the model \eqref{eq:E}, but it does change the points $P(u)$ by translation by a torsion point. We made the choices we did because they simplify the coordinates of $P(u)$.)
Our next goal is to prove that the points $P_\alpha$ generate a subgroup of $E(k(u))$ of finite index. Normally we would prove a result like this using heights, but as we will see below, the height pairings in this example are exotic, and it seems difficult to calculate the relevant determinant. Instead, we proceed using the construction of Section~\ref{s:rank} directly.
First, we need a preliminary result on $\en_{k-av}(J_{\mathcal{C}_q})$. Recall from Lemma~\ref{lemma:ASendos}(2) that $\en_{k-av}^0(J_{\mathcal{C}_q})$ is commutative of rank $2(q-1)$ since $J_{\mathcal{C}_q}$ is ordinary of dimension $q-1$.
\begin{lemma}\label{lemma:indep-of-endos}
The subgroup of $\en_{k-av}^0(J_{\mathcal{C}_q})$ generated by the
endomorphisms $[\alpha]$ and $\Fr\circ[\alpha]$ for
$\alpha\in{\mathbb{F}_q}$ has rank $2(q-1)$, and thus has finite index in
$\en_{k-av}(J_{\mathcal{C}_q})$. \textup{(}Here $\Fr$ is the $q$-power
Frobenius.\textup{)} \end{lemma}
\begin{proof}
Lemma~\ref{lemma:ASendos}(1) implies that the subgroup of
$\en_{k-av}^0(J_{\mathcal{C}_q})$ generated by the endomorphisms $[\alpha]$
for $\alpha\in{\mathbb{F}_q}$ has rank $q-1$. It is clear that
$\sum_\alpha[\alpha]=0$, so $\{[\alpha]|\alpha\in{\mathbb{F}_q}\setminus0\}$
is linearly independent.
Since $\Fr$ is not a zero divisor in $\en_{k-av}^0(J_{\mathcal{C}_q})$, the
subgroup of $\en_{k-av}^0(J_{\mathcal{C}_q})$ generated by the endomorphisms
$\Fr\circ[\alpha]$ for $\alpha\in{\mathbb{F}_q}$ also has rank $q-1$ and
$\{\Fr\circ[\alpha]|\alpha\in{\mathbb{F}_q}\setminus0\}$ is also
independent.
We will show that the two subgroups generated by
$$\{[\alpha]|\alpha\in{\mathbb{F}_q}\setminus0\}\qquad\text{and}
\qquad\{\Fr\circ[\alpha]|\alpha\in{\mathbb{F}_q}\setminus0\}$$ are independent.
To that end, we consider the (effective) action of
$\en_{k-av}^0(J_{\mathcal{C}_q})$ on
$H^1(J_{\mathcal{C}_q},{\overline{\mathbb{Q}}_\ell})=H^1(\mathcal{C}_q,{\overline{\mathbb{Q}}_\ell})$. Computing the latter
using the Leray spectral sequence for the finite map $\mathcal{C}_q\to
\mathbb{P}^1_{x}$ and decomposing for the action of ${\mathbb{F}_q}$, we find that $$H^1(\mathcal{C}_q,{\overline{\mathbb{Q}}_\ell})\cong\oplus_{\beta\in{\mathbb{F}_q}}W_\beta$$ where $W_\beta$ is the subspace of $H^1(\mathcal{C}_q,{\overline{\mathbb{Q}}_\ell})$ where ${\mathbb{F}_q}$ acts via the character $\alpha\mapsto\psi_0(\tr_{{\mathbb{F}_q}/{\mathbb{F}_p}}(\alpha\beta))$. (Here $\psi_0$ is a fixed character ${\mathbb{F}_p}\to{\overline{\mathbb{Q}}_\ell}^\times$.) Using the Grothendieck-Ogg-Shafarevich formula, we see that each $W_\beta$ with $\beta\neq0$ has dimension 2, and $W_0=\{0\}$. Using an exponential sum expression for the action of $\Fr$ on $W_{\beta}$, we see that for $\beta\neq0$, $\Fr$ has two distinct eigenvalues on $W_\beta$, one a $p$-adic unit, the other a non-unit.
Now suppose that we have a linear dependence, i.e., that there are integers $a_\alpha$ and $b_\alpha$ such that $$\sum_{\alpha\in{\mathbb{F}_q}^\times}a_\alpha[\alpha]+b_{\alpha}[\alpha]\circ\Fr=0$$ in $\en_{k-av}^0(J_{\mathcal{C}_q})$. Then as endomorphisms of $H^1(\mathcal{C}_q,{\overline{\mathbb{Q}}_\ell})$, we have $$\sum_{\alpha\in{\mathbb{F}_q}^\times}a_\alpha[\alpha] =-\sum_{\alpha\in{\mathbb{F}_q}^\times}b_{\alpha}[\alpha]\circ\Fr.$$ Suppose that the left hand side is not zero. Then there is a $\beta$ such that the left hand side is not 0 on $W_\beta$. But the left hand side acts as a (non-zero) scalar on $W_\beta$ (namely $\sum_\alpha a_\alpha\psi_0(\tr_{{\mathbb{F}_q}/{\mathbb{F}_p}}(\alpha\beta))$). On the other hand, the right hand side acts as a (non-zero) scalar composed with Frobenius, and thus has two distinct eigenvalues. This is a contradiction, and so we must have $$\sum_{\alpha\in{\mathbb{F}_q}^\times}a_\alpha[\alpha] =\sum_{\alpha\in{\mathbb{F}_q}^\times}b_{\alpha}[\alpha]\circ\Fr=0.$$ It then follows from Lemma~\ref{lemma:ASendos}(1) that $a_\alpha=b_\alpha=0$ for all $\alpha$. This completes the proof of the lemma. \end{proof}
We now return to the curve $E$.
\begin{thm}\label{thm:rank} The points $P_\alpha\in E(k(u))$ generate a subgroup of rank $q-1$ and of finite index in $E(k(u))$. The relation among them is that $\sum_{\alpha\in{\mathbb{F}_q}}P_\alpha$ is torsion. \end{thm}
\begin{proof} To see that the subgroup generated by the $P_\alpha$ has finite index in $E(K_q)$, we consider in more detail the geometry of the construction of Section~\ref{s:rank}. We have $\mathcal{C}_q\times\mathcal{D}_q$ with its action of $\mathbb{F}_q^2$, its blow up $\mathcal{S}$, and the quotient $\mathcal{S}/{\mathbb{F}_q}$ by the diagonal ${\mathbb{F}_q}$. The resulting $\mathcal{X}_q=\mathcal{S}/{\mathbb{F}_q}$ is equipped with a morphism $\pi_q$ to $\mathbb{P}^1_u$ whose generic fiber is $X/k(u)$. It is also equipped with an action of ${\mathbb{F}_q}$ (namely $\mathbb{F}_q^2$ modulo the diagonal) which induces the action of $\gal(k(u)/k(t))={\mathbb{F}_q}$ on $X$. There is an isogeny $X\to X'\cong E$ and the $P_\alpha$ come from sections of $\pi_q$, so it will suffice to show that the corresponding points in $X(K_q)$ generate a subgroup of finite index.
Now the Shioda-Tate theorem tells us that the Mordell-Weil group $X(K_q)$ is a quotient of the N\'eron-Severi group $\NS(\mathcal{X}_q)$. In the course of the proof of Theorem~\ref{thm:ranks} we saw that \begin{align*} \NS(\mathcal{X}_q)&\cong\en_{k-av}(J_{\mathcal{C}_q})^{{\mathbb{F}_q}}\oplus\mathbb{Z}^{2 +\sum_{i,j}(N_{ij}+\gcd(a_i,b_j))}\\ &\cong\en_{k-av}(J_{\mathcal{C}_q})\oplus\mathbb{Z}^{10} \end{align*} where the factor $\mathbb{Z}^{10}$ corresponds to the classes of the exceptional divisors of the blow-ups and the classes of (the images of) $\mathcal{C}_q\times\{pt\}$ and $\{pt\}\times\mathcal{D}_q$.
We claim that the classes in the factor $\mathbb{Z}^{10}$ all map to torsion points in $X(K_q)$. Indeed, it is clear from the discussion above that they are fixed by the action of ${\mathbb{F}_q}$ on $\mathcal{X}_q$. Thus they land in the ${\mathbb{F}_q}$-invariant part of $X(K_q)$, which is precisely $X(k(t))$, and we know the latter group has rank 0. (This claim can also be checked by straightforward, but tedious, computation.) It follows from the claim that the image of $\en_{k-av}(J_{\mathcal{C}_q})$ in $X(K_q)$ is a subgroup of finite index.
By Lemma~\ref{lemma:indep-of-endos}, the subgroup of $\en_{k-av}^0(J_{\mathcal{C}_q})$ generated by the endomorphisms $[\alpha]$ and $\Fr\circ[\alpha]$ for $\alpha\in{\mathbb{F}_q}$ has finite index in $\en_{k-av}(J_{\mathcal{C}_q})$. The corresponding points in $X(K_q)$ are the images of the graphs of these endomorphisms. Moreover, it is easy to see that the graph of $[\alpha]$ maps to one component of the fiber over $u=\alpha$ (the component ``$x=y$'') in $\mathcal{X}_q$. Therefore, these endomorphisms map to zero in $X(K_q)$.
It follows that the image of the remaining endomorphisms $\Fr\circ[\alpha]$ generates a finite index subgroup of $X(K_q)$. Their images in $E(K_q)$ are precisely the points $P_\alpha$, and we proved in Subsection~\ref{ss:(1+1,1+1)a} that $E(K_q)$ has rank $q-1$, so we have established the first claim of the theorem.
Since $\sum P_\alpha$ lies in $E(k(t))$ and we know that the rank of $E(k(t))$ is zero, the sum must be torsion. (We could also note that Lemma~\ref{lemma:ASendos} implies that $\sum_\alpha\Fr\circ[\alpha]$ is trivial in $\en_{k-av}(J_{\mathcal{C}_q})$.)
This completes the proof of the theorem.
\end{proof}
\begin{rem}
In contrast to the situation of \cite{Ulmer14a}, 2-descent is
not sufficient to prove that the ``visible'' points $P_\alpha$
generate a finite index subgroup of $E(K_q)$. More precisely, when
$q\neq p$, the index of the subgroup generated by the $P_\alpha$ in
$E(K_q)$ is divisible by a large power of $2$. \end{rem}
We turn now to a consideration of the heights of the $P_\alpha$. For $\gamma\in{\mathbb{F}_q}$, write $tr_\gamma$ for the integer defined as follows: Consider the fiber of the family \eqref{eq:X'} over $t=\gamma$. In other words, let $X'_\gamma$ be the smooth projective curve given by \eqref{eq:X'} with $\gamma$ substituted for $t$. Then $tr_\gamma$ is defined by the equality $$\#X'_\gamma({\mathbb{F}_q})=q-tr_\gamma+1.$$ If $\chi$ denotes the non-trivial quadratic character of ${\mathbb{F}_q^\times}$, then we may also define $tr_\gamma$ as \begin{align*} tr_\gamma &=-1-\sum_{\beta\in{\mathbb{F}_q}}
\chi\left(\left(\beta^2-4a\right)
\left(\beta^2-2\gamma\beta+\gamma^2-4a\right)\right)\\ &=-1-\sum_{\beta\in{\mathbb{F}_q}}
\chi\left(\beta(\beta+4b)
(\beta-\gamma)(\beta-\gamma+4b)\right). \end{align*} (The first equality comes from the standard count of points on a hyperelliptic curve as an exponential sum. The second comes from a change of variables $\beta\mapsto\beta+2b$.)
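As a quick numerical illustration (ours, not needed for the sequel), the following Python snippet checks that the two character sums above agree, as they must by the change of variables $\beta\mapsto\beta+2b$. Here $q$ is taken to be an odd prime, so that $\chi$ is computed by Euler's criterion; the sample values $q=13$, $b=2$ are arbitrary.
\begin{verbatim}
q, b = 13, 2
a = (b * b) % q

def chi(x):                          # quadratic character of F_q, chi(0) = 0
    x %= q
    if x == 0:
        return 0
    return 1 if pow(x, (q - 1) // 2, q) == 1 else -1

def tr_v1(gamma):                    # first character-sum expression
    return -1 - sum(chi((beta**2 - 4*a) *
                        (beta**2 - 2*gamma*beta + gamma**2 - 4*a))
                    for beta in range(q))

def tr_v2(gamma):                    # second expression, after beta -> beta + 2b
    return -1 - sum(chi(beta * (beta + 4*b) * (beta - gamma) * (beta - gamma + 4*b))
                    for beta in range(q))

assert all(tr_v1(g) == tr_v2(g) for g in range(q))
print([tr_v2(g) for g in range(q)])
\end{verbatim}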
\begin{thm}\label{thm:heights} The height pairings $\langle P_\alpha,P_\beta\rangle$ are given by $$\langle P_\alpha,P_\beta\rangle= \begin{cases} \frac{(3q-1)(q-1)}{4q}+\frac12&\text{if $\alpha=\beta$}\\ \\ \frac{1-3q}{4q}+\frac14\chi(-1)&\text{if $\alpha-\beta=\pm4b$}\\ \\ \frac{1-3q}{4q}+\frac14tr_{\alpha-\beta}&\text{if $\alpha-\beta\neq0,\pm4b$} \end{cases} $$ \end{thm}
\begin{rems}\mbox{} \begin{enumerate} \item If we were to ignore the second term in each of these heights,
the lattice generated by the $P_\alpha$ would be a scaling of the
lattice $A_{q-1}^*$. We may view the actual lattice as a
``perturbation'' of $A_{q-1}^*$ where the fluctuations are controlled
by point counts on an auxiliary family of elliptic curves. This
seems to us an exotic phenomenon somewhat reminiscent of mirror
symmetry.
\item The terms $1/2$ and $\frac14\chi(-1)$ in the height formula may
also be viewed as traces. To wit, we consider the ``middle
extension sheaf'' $\mathcal{F}$ on $\mathbb{P}^1_t$ associated to the family
\eqref{eq:X'}. Then for $\gamma\neq0,\pm4b$, we have
$$tr_\gamma=\tr\left(\Fr_q|\mathcal{F}_\gamma\right)$$ where $\mathcal{F}_\gamma$ is the stalk of $\mathcal{F}$ at a geometric point over $t=\gamma$. One can then show that for $\gamma=\pm4b$ we have
$$\tr\left(\Fr_q|\mathcal{F}_\gamma\right)=\chi(-1)$$ and for $\gamma=0$ or $\gamma=\infty$ we have
$$\tr\left(\Fr_q|\mathcal{F}_\gamma\right)=1.$$ \item As a check, we note that the Lefschetz trace formula for $\mathcal{F}$
implies that
$$\sum_{\gamma\in\mathbb{P}^1_t({\mathbb{F}_q})}\tr\left(\Fr_q|\mathcal{F}_\gamma\right)=0.$$ Thus if we interpret the $1/2$ in the formula for $\langle P_0,P_0\rangle$ as
$$\frac14\left(\tr\left(\Fr_q|\mathcal{F}_0\right)+\tr\left(\Fr_q|\mathcal{F}_\infty\right)\right),$$ then we see that the sum $\sum_{\alpha\in{\mathbb{F}_q}}P_\alpha$ is orthogonal to all $P_\alpha$, i.e., it is torsion. This is in agreement with Theorem~\ref{thm:rank}. \end{enumerate} \end{rems}
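The last remark can be illustrated numerically (this is our own sanity check, not part of the proof): assembling the full Gram matrix $\langle P_\alpha,P_\beta\rangle$ from Theorem~\ref{thm:heights} for a small prime $q$, with $tr_\gamma$ computed by the character sum above, one finds in exact arithmetic that every row sums to zero, i.e., $\sum_\alpha P_\alpha$ pairs to zero against each $P_\beta$.
\begin{verbatim}
from fractions import Fraction

q, b = 13, 2

def chi(x):
    x %= q
    return 0 if x == 0 else (1 if pow(x, (q - 1) // 2, q) == 1 else -1)

def tr(gamma):
    return -1 - sum(chi(beta * (beta + 4*b) * (beta - gamma) * (beta - gamma + 4*b))
                    for beta in range(q))

def pairing(alpha, beta):            # <P_alpha, P_beta> as in the Theorem
    d = (alpha - beta) % q
    if d == 0:
        return Fraction((3*q - 1) * (q - 1), 4*q) + Fraction(1, 2)
    if d in ((4*b) % q, (-4*b) % q):
        return Fraction(1 - 3*q, 4*q) + Fraction(chi(-1), 4)
    return Fraction(1 - 3*q, 4*q) + Fraction(tr(d), 4)

gram = [[pairing(x, y) for y in range(q)] for x in range(q)]
assert all(sum(row) == 0 for row in gram)
print("all row sums vanish for q =", q)
\end{verbatim}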
\begin{proof}[Proof of Theorem~\ref{thm:heights}] Since $E$ is defined over $k(t)$, and the height pairing is invariant under the action of $\gal(k(u)/k(t))$, we may reduce to the case where $\beta=0$. Thus we consider $\langle P_\alpha,P_0\rangle$ and we have to compute $$-(P_\alpha-O)\cdot(P_0-O+D_{P_0}).$$
The height of $E/k(u)$ (in the sense of \cite[III.2.4]{Ulmer11}) is equal to $q$, so we have $O^2=-q$.
Next we consider $P_\alpha\cdot O$. Rewriting the coordinates of $P_\alpha$ slightly, we have \begin{align*} X(P_\alpha)&=4b\frac t{u-\alpha}\left(u-\alpha+4b\right)^q\\ Y(P_\alpha)&=4b\frac t{u-\alpha}(4b+t)
\left(u-\alpha+4b\right)^{(q+1)/2}(u-\alpha)^{(q-1)/2} \end{align*} and since $u-\alpha$ divides $t$, we see that these coordinates are polynomials in $u$. This shows that $P_\alpha$ and $O$ do not meet over any finite place of $k(u)$. Moreover, the degree in $u$ of $X(P_\alpha)$ is $2q-1$ and the degree in $u$ of $Y(P_\alpha)$ is $3q-1$. Since these degrees are $<2q$ and $<3q$ respectively, $P_\alpha$ and $O$ also do not meet over $u=\infty$. Thus we have $$P_\alpha\cdot O=P_0\cdot O=0.$$
Now we consider the disposition of the points at the $3q+1$ places of bad reduction, namely $u\in{\mathbb{F}_q}$ (so $t=0$), $u^q-u=t=\pm4b$, and $u=\infty$.
At the places $u\in{\mathbb{F}_q}$, $E$ has multiplicative reduction of type $I_4$. At $u=\alpha$, $X(P_\alpha)\neq0$, so $P_\alpha$ lands on the identity component. At $u=\alpha-4b$, $X(P_\alpha)$ vanishes to high order, so $P_\alpha$ lands on the component labeled 2. At $u\in{\mathbb{F}_q}$, $u\neq\alpha,\alpha-4b$, $X(P_\alpha)$ and $Y(P_\alpha)$ both vanish simply and so $P_\alpha$ lands on the component labeled either 1 or 3. Which case occurs is determined by the sign of $$Y(P_\alpha)/X(P_\alpha) =4b(u-\alpha)^{(q-1)/2}(u-\alpha+4b)^{-(q-1)/2}=\pm4b.$$ We make the convention that component 1 corresponds to the case $+4b$ above. Considering components shows that if $\alpha\neq0$, $P_\alpha$ and $P_0$ do not meet over $u=0$, $-4b$, $\alpha$, or $\alpha-4b$. At other places with $u\in{\mathbb{F}_q}$, they both land on component $1$ or $3$ and we have to look closer for a possible intersection. Consider the $X$ coordinate of $P_\alpha$ over $u=\beta$ after the blow up at which components 1 and 3 appear. It is $$4b\phi(\beta)(\beta-\alpha+4b)$$ where $\phi(u)=t/((u-\alpha)(u-\beta))$ so that $\phi(\beta)=-1/(\beta-\alpha)$. The $X$ coordinate in question is thus $4b(\beta-\alpha+4b)/(\alpha-\beta)$. The map $\alpha\mapsto 4b(\beta-\alpha+4b)/(\alpha-\beta)$ is a linear fractional transformation, thus injective, so there are no intersections between $P_\alpha$ and $P_0$ over places with $u\in{\mathbb{F}_q}$.
Now consider places $u=\beta$ where $u^q-u=t=4b$. At such a place, $$X(P_\alpha)(\beta)=16b^2\frac{\beta-\alpha+8b}{\beta-\alpha}\neq0.$$ Also, we have $X(P_\alpha)(\beta)=-16b^2$ if and only if $2\beta=2\alpha-8b$, but this is impossible since $\beta\not\in{\mathbb{F}_q}$. This shows that $P_\alpha$ lands on the identity component at these places. Also, since $\alpha\mapsto (\beta-\alpha+8b)/(\beta-\alpha)$ is injective, $X(P_\alpha)\neq X(P_{\alpha'})$ if $\alpha\neq\alpha'$, i.e., there are no points of intersection at these places.
At places $u=\beta$ where $u^q-u=t=-4b$, we have $X(P_\alpha)(\beta)=-16b^2$ and so $P_\alpha$ always lands on the non-identity component. A short calculation reveals that $$X(P_\alpha)=4b\frac{-\beta^q+\alpha}{\beta-\alpha} \left((u-\beta)-(u-\beta)^q\right).$$ After the blow up which makes the non-identity component appear, $X(P_\alpha)$ evaluates to $4b(-\beta^q+\alpha)(\beta-\alpha)$ at $u=\beta$, and $\alpha\mapsto 4b(-\beta^q+\alpha)(\beta-\alpha)$ is injective. Thus there are no points of intersection between $P_\alpha$ and $P_0$ at the places where $t=-4b$.
Next, we consider the situation at $u=\infty$, where $E$ has reduction of type $I_{4q}$. Setting $v=u^{-1}$ and changing coordinates $X=v^{-2q}X'$, $Y=v^{-3q}Y'$, the point $P_\alpha$ has coordinates: \begin{align*} X'(P_\alpha)&= 4b\left(v(1-\alpha v)^{q-1}-v^q\right)
\left(1-\alpha v^q+4b\alpha^q\right)\\ Y'(P_\alpha)&= 4b\left(v(1-\alpha v)^{q-1}-v^q\right)
\left(4bv^q+1-v^{q-1}\right)\left(1+4bv\right)^{(q+1)/2}. \end{align*} Since $X'$ and $Y'$ both vanish simply, $P_\alpha$ lands on the component labeled 1. In fact, each $P_\alpha$ lands on the same point on that component. (In natural coordinates this is the point $(4b,1)$.) Moreover, by considering the next term in the Taylor expansions of $X'$ and $Y'$ near $v=0$, we see that the local intersection multiplicity in $P_\alpha\cdot P_0$ is 1.
Finally, we consider possible intersections between $P_\alpha$ and $P_0$ at places where $E$ has good reduction. At a place where the $X$-coordinates coincided, we would have $$4bt\frac{u^q+4b}u=4bt\frac{u^q-\alpha+4b}{u-\alpha}.$$ Since we have already treated the places where $t=0$, we may assume $t\neq0$ and then the equality above holds if and only if $u^q+4b=u$, i.e., if and only if $t=-4b$. We already treated these places as well, so there are no further points of intersection.
Summarizing, we have shown that the ``geometric'' part of the height pairing is: $$-(P_\alpha-O)\cdot(P_0-O)=\begin{cases} 2q&\text{if $\alpha=0$}\\ q-1&\text{if $\alpha\neq0$}. \end{cases}$$
As for the ``correction factor'' $-D_{P_0}\cdot P_\alpha$, the local contributions at $t=4b$ are 0, they are $1/2$ at each of the $q$ places where $t=-4b$, and they are $(4q-1)/4q$ at $u=\infty$.
The correction factors over $t=0$ are more interesting. Namely, at $u=\beta$ with $\beta\neq\alpha,\alpha-4b$, $P_\alpha$ lands on component $\pm1$ where the sign is controlled by whether or not $$(\beta-\alpha)^{(q-1)/2}(\beta-\alpha+4b)^{-(q-1)/2}=1$$ i.e., by whether or not $$(\beta-\alpha)(\beta-\alpha+4b)$$ is a square in ${\mathbb{F}_q}$.
If $\alpha=0$, then $P_0$ lands on the identity component at $u=0$, on the component $2$ at $u=-4b$, and on component $\pm1$ at other places $u=\beta$ with $\beta\in{\mathbb{F}_q}$. Thus the contribution to the correction factor at places over $t=0$ is $-(3q-2)/4$, the total correction factor is $$-P_0\cdot D_{P_0}=-\frac{5q^2+2q-1}{4q}$$ and the height pairing is $$\langle P_0,P_0\rangle =\frac{3q^2-2q+1}{4q}=\frac{(3q-1)(q-1)}{4q}+\frac12.$$
If $\alpha=-4b$, then at $\beta=0$ and $\beta=-4b$, one of $P_0$ or $P_\alpha$ lands on the identity component and the local contribution is zero. At $\beta=-8b$, $P_0$ lands on component $\pm1$ and $P_\alpha$ lands on component 2 for a local contribution of $-1/2$. At other places over $t=0$, $P_0$ and $P_\alpha$ lie on components $\pm1$ and the sum of the local contributions is \begin{multline*} -\sum_{\beta\neq0,-4b,-8b}\left(\frac12
+\frac14\chi\left(\beta(\beta+4b)(\beta+4b)(\beta+8b)\right)\right)\\ =-\frac{q-3}2-\frac14\sum_{\beta\neq0,-4b,-8b}\chi((\beta+8b)/\beta). \end{multline*} The last sum is easily seen to be $-1-\chi(-1)$ and so the sum of the local contributions over all places over $t=0$ is $$-\frac{2q-5}4+\frac14\chi(-1).$$ The total correction factor is $$-P_\alpha\cdot D_{P_0}=\frac{-4q^2+q+1}{4q}+\frac14\chi(-1)$$ and the height pairing is $$\langle P_\alpha,P_0\rangle=\frac{(1-3q)}{4q}+\frac14\chi(-1).$$
The case $\alpha=4b$ is very similar to that of $\alpha=-4b$ and we leave it as an exercise for the reader.
Now assume that $\alpha\neq0,\pm4b$. Then at $\beta=0$ and $\beta=\alpha$, one of $P_0$ or $P_\alpha$ lands on the identity component and the local contribution is 0. At $\beta=-4b$ and $\beta=\alpha-4b$, one of $P_0$ or $P_\alpha$ lands on component 2 and the other lands on component $\pm1$, so we get local contributions of $-1/2$. At the other $q-4$ places over $t=0$, both $P_0$ and $P_\alpha$ land on components $\pm1$. The sum of the local contributions at these places is \begin{multline*} -\sum_{\beta\neq0,-4b,\alpha,\alpha-4b}\left(\frac12
+\frac14\chi\left(\beta(\beta+4b)(\beta-\alpha)(\beta-\alpha+4b)\right)\right)\\ =-\frac{q-4}2+\frac14\left(1+tr_\alpha\right). \end{multline*} (For the last equality, see the display just before the statement of the Theorem.) Thus the sum of the local contributions at places over $t=0$ is $$-\frac{2q-5}4+\frac14tr_\alpha,$$ the total correction factor is $$-P_\alpha\cdot D_{P_0}=\frac{-4q^2+q+1}{4q}+\frac14tr_\alpha,$$ and the height pairing is $$\langle P_\alpha,P_0\rangle=\frac{(1-3q)}{4q}+\frac14tr_\alpha.$$
This completes the proof of the Theorem. \end{proof}
It would be very interesting to have a conceptual explanation for the appearance of point counts in the height pairings.
\section{Appendix: Auxiliary results on Artin-Schreier
covers}\label{s:AS-covers}
In this section, we collect results on Artin-Schreier curves and the Newton polygons and endomorphism algebras of their Jacobians.
\subsection{The genus and $p$-rank of Artin-Schreier
curves} \label{ss:ASgenus}
Suppose $k$ is a perfect field of characteristic $p$. Suppose $\mathcal{C}$ is a smooth projective irreducible curve over $k$ with function field $F=k(\mathcal{C})$. Let $f(x) \in F$ be a non-constant rational function. Write $\dvsr_\infty(f(x))=\sum_{i=1}^m a_i P_i$ with distinct $P_i \in \mathcal{C}(\overline{k})$ and all $a_i\neq0$.
For a power $q$ of $p$, let $\mathcal{C}_{q,f}$ be the smooth projective curve with function field $F[z]/(z^q-z-f)$ and let $\tau_{q,f}:\mathcal{C}_{q,f} \to \mathcal{C}$ be the morphism corresponding to the field extension $F\hookrightarrow F[z]/(z^q-z-f(x))$. We assume throughout that $\mathcal{C}_{q,f}$ is geometrically irreducible. This holds, for example, if $f$ has a pole of order prime to $p$ at some place of $F$.
\begin{lemma} \label{l:ASgalois}
If $k$ contains $\mathbb{F}_q$, then $\tau_{q,f}:\mathcal{C}_{q,f} \to \mathcal{C}$ is a
Galois cover and its Galois group $G$ is canonically identified with
$\mathbb{F}_q$. \end{lemma}
\begin{proof}
This is a straightforward generalization of
\cite[6.4.1(a-b)]{StichtenothAFFC}. \end{proof}
\begin{lemma}\label{lemma:ASbasics}
Let $k$, $q$, $f$, and $\mathcal{C}_{q,f}$ be as above. Suppose that all
the poles of $f$ have order prime to $p$. \begin{enumerate} \item The branch locus of $\tau_{q,f}$ is $\{P_1,\dots,P_m\}$. Above each
point $P_i$, the cover $\tau_{q,f}$ is totally ramified. If $k$ contains
${\mathbb{F}_q}$ and $G_i^t$ denotes the ramification subgroup of $G$
at $P_i$ in the upper numbering, then $G_i^{a_i}=G$ and $G_i^t$ is
trivial for $t>a_i$. \item The genus $g_{q,f}$ of $\mathcal{C}_{q,f}$ and the genus $g_{\mathcal{C}}$ of $\mathcal{C}$ are related by the formula $$2g_{q,f} -2 = q(2g_{\mathcal{C}}-2) + (q-1)\sum_{i=1}^m (a_i+1).$$ In particular, if $\mathcal{C} \simeq \mathbb{P}^1$, then $g_{q,f}=\frac12(q-1)\left(-2+\sum_{i=1}^m (a_i+1)\right)$. \end{enumerate} \end{lemma}
\begin{proof} This is a straightforward generalization of \cite[6.4.1(c-g)]{StichtenothAFFC}. \end{proof}
Let $\mathcal{J}_{q,f}$ be the Jacobian of $\mathcal{C}_{q,f}$ and let $\mathcal{J}_{q,f}[p]$ be its $p$-torsion group scheme. Recall that the {\it p-rank} of $\mathcal{J}_{q,f}$ is the integer $s$ such that $\#\mathcal{J}_{q,f}[p](\overline{k})=p^s$. The $p$-rank is at most the genus $g_{q,f}$ of $\mathcal{C}_{q,f}$, and $\mathcal{C}_{q,f}$ and $\mathcal{J}_{q,f}$ are said to be {\it ordinary\/} if the $p$-rank is maximal, i.e., $s=g_{q,f}$ \cite[Section 1.1]{ChaiOort09}.
\begin{lemma} \label{lemma:p-rank}
The $p$-rank of $\mathcal{J}_{q,f}$ is $s=1+q(s_{\mathcal{C}} -1) + m(q-1)$. In
particular, if $\mathcal{C} \simeq \mathbb{P}^1$, then $\mathcal{J}_{q,f}$ is ordinary if
and only if the poles of $f$ are all simple, and $\mathcal{J}_{q,f}$ has
$p$-rank $0$ if and only if $f$ has exactly one pole. \end{lemma}
\begin{proof} This follows from the Deuring-Shafarevich formula \cite[Thm.~4.2]{Subrao75}. \end{proof}
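The genus and $p$-rank formulas above are easy to tabulate; the following throwaway helper (ours) does so for $\mathcal{C}\simeq\mathbb{P}^1$, given $q$ and the pole orders of $f$ (all assumed prime to $p$), and reproduces the two special cases just mentioned.
\begin{verbatim}
def genus_and_p_rank(q, pole_orders):
    # C = P^1, so g_C = 0 and the p-rank of its (trivial) Jacobian is 0
    m = len(pole_orders)
    g = (q - 1) * (sum(a + 1 for a in pole_orders) - 2) // 2
    s = (m - 1) * (q - 1)          # = 1 + q*(0 - 1) + m*(q - 1)
    return g, s

print(genus_and_p_rank(9, [1, 1]))   # (8, 8): two simple poles, ordinary
print(genus_and_p_rank(9, [2]))      # (4, 0): a single pole, p-rank zero
\end{verbatim}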
\subsection{Quotients of Artin-Schreier curves} \label{ss:additive}
This section contains two results about subextensions of the Artin-Schreier extension $F \hookrightarrow F[z]/(z^q-z-f)$. The first allows us to reduce questions about the structure of the Jacobian of the curve $\mathcal{C}_{q,f}$ given by the equation $z^q-z=f$ to the case $q=p$; it is used in Subsection~\ref{ss:slopes}.
\begin{lemma}\label{lemma:decompose} Suppose $\mathcal{C} \simeq \mathbb{P}^1$. Let $S$ be a set of representatives for the cosets of ${\mathbb{F}_p^\times} \subset {\mathbb{F}_q^\times}$. For $\mu \in S$, let ${\mathcal{Z}}_\mu$ be the Artin-Schreier curve $z^p-z=\mu f$ and let $\mathcal{J}_\mu$ be the Jacobian of ${\mathcal{Z}}_\mu$. Then there is an isogeny $$\mathcal{J}_{q,f} \sim \oplus_{\mu \in S} \mathcal{J}_\mu.$$ \end{lemma}
\begin{proof}
By \cite[Proposition 1.2]{GarciaStichtenoth91}, the set $\{Z_\mu \to
\mathbb{P}^1 \mid \mu \in S\}$ is the set of degree $p$ covers $Z \to
\mathbb{P}^1$ which are quotients of $\tau: \mathcal{C}_{q,f} \to \mathbb{P}^1$.
The result then follows from \cite[Theorem C]{KaniRosen89}. \end{proof}
The second result is used in Section~\ref{Sanalytic} where we need to work with a more general class of Artin-Schreier extensions. To that end, recall that there is a bijection between finite subgroups of ${\overline{\mathbb{F}}_p}$ and monic, separable, additive polynomials, i.e., polynomials of the form $$A(x) = x^{p^\nu}+\sum_{i=0}^{\nu-1}a_ix^{p^i}$$ with $a_i\in{\overline{\mathbb{F}}_p}$ and $a_0\neq0$. The bijection identifies a subgroup $H$ with the polynomial $A_H(x):=\prod_{\alpha\in
H}(x-\alpha)$ and identifies a polynomial $A$ with the group $H_A$ of its roots. For example, when $H$ is the field of order $q$, then $A_H(x)$ is the polynomial $\wp_q(x)=x^q-x$. For general $H$, note that the field generated by the coefficients of $A_H$ is the field of $p^\mu$ elements, where $p^\mu$ is the smallest power of $p$ such that $H$ is stable under the $p^\mu$-power Frobenius.
Now suppose $f \in F$ where $F$ is the function field of a smooth projective curve defined over $k$. We assume that $f$ has a pole of order prime to $p$ at some place of $F$. Suppose $A$ is a monic, separable, additive polynomial with coefficients in $k$. Then we have a field extension $$K=K_{A,f}=F[x]/(A(x)-f).$$ It is geometrically Galois over $F$ and the Galois group $\gal({\overline{\mathbb{F}}_p} K/{\overline{\mathbb{F}}_p} F)$ is canonically isomorphic to $H_A$. This Galois group is stable under the $r$-power Frobenius since $A$ is assumed to have coefficients in $k$.
The next lemma is used in Section~\ref{Sanalytic} to reduce questions about the field $K_{A,f}$ to the analogous questions about the field $K_{\wp_q,f}$.
\begin{lemma} \label{Ladditive} Let $A$ be a monic, separable, additive polynomial with roots in ${\mathbb{F}_q}$. \begin{enumerate} \item Then there exists a monic, separable additive polynomial $B$
such that the composition $A\circ B$ is $\wp_q$. \item Suppose $f \in F$ has a pole of order prime to $p$ at some place
of $F$. Suppose $A \circ B=\wp_q$. Then $K_{A,f}$ is a subfield of
$K_{\wp_q,f}$ and the geometric Galois group $\gal({\overline{\mathbb{F}}_p}
K_{A,f}/{\overline{\mathbb{F}}_p} F)$ is a quotient of ${\mathbb{F}_q}$, namely $B({\mathbb{F}_q})$. \end{enumerate} \end{lemma}
\begin{proof}
Let $B$ be the polynomial identified with the subgroup $A({\mathbb{F}_q})$.
Then $B\circ A$ has degree $q$ and kills ${\mathbb{F}_q}$, so must be equal to
$\wp_q$. Next, we note that the set of additive polynomials with
coefficients in ${\mathbb{F}_q}$ together with the ring structure given by
addition and composition of polynomials is a (non-commutative)
domain, and $\wp_q$ is in its center. (Both of these are most
easily checked by noting that the ring in question is isomorphic to
Drinfeld's ring of twisted polynomials ${\mathbb{F}_q}\{\tau\}$ where $\tau
a=a^p\tau$ for $a\in {\mathbb{F}_q}$.) Since $B\circ A=\wp_q$, we see that
$A\circ B\circ A=A\circ\wp_q=\wp_q\circ A$, and canceling yields the
first claim $A\circ B=\wp_q$. The second claim follows directly
from the first. \end{proof}
\begin{ex}\label{ex:A}
Assume that $r$ is a power of an odd prime $p$ and fix a positive
integer $\nu$. Let $A(x)=x^{r^\nu}+x$. The group $H_A$ of roots of
$A$ generates ${\mathbb{F}_q}$ where $q=r^{2\nu}$. Setting $B=\wp_{r^\nu}$, we
have $A\circ B=\wp_q$. If $f\in F$ has a pole of order prime to $p$
at some place of $F$, then the field extension $K_{A,f}$ is a
subextension of $K_{\wp_q,f}$.
\end{ex}
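The example is easy to check by machine; the short script below (ours) verifies, over ${\mathbb{F}_3}$ with $r=3$ and $\nu=1$, that composing $A(x)=x^{3}+x$ with $B=\wp_{3}$ yields $\wp_{9}$.
\begin{verbatim}
p = 3

def polymul(f, g):                   # coefficient lists, f[i] = coeff of x^i, mod p
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, c in enumerate(g):
            out[i + j] = (out[i + j] + a * c) % p
    return out

def compose(f, g):                   # f(g(x)) mod p, by Horner's rule
    out = [f[-1] % p]
    for a in reversed(f[:-1]):
        out = polymul(out, g)
        out[0] = (out[0] + a) % p
    return out

A = [0, 1, 0, 1]                     # x^3 + x
B = [0, -1, 0, 1]                    # x^3 - x, i.e. wp_3
wp9 = [0, -1 % p] + [0] * 7 + [1]    # x^9 - x
assert compose(A, B) == wp9
\end{verbatim}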
\subsection{Slopes of Artin-Schreier curves} \label{ss:slopes} Next we review the definition of the Newton polygon of a curve $\mathcal{C}$ of genus $g$ defined over a finite field from \cite[Sections 1.16, 1.18, 3,5, 3.8, 4.38, 4.49, 10.17]{ChaiOort09}. The Newton polygon of $\mathcal{C}$ is the Newton polygon of (the $p$-divisible group of) its Jacobian $\mathcal{J}$. It is a symmetric Newton polygon of height $2g$ and dimension $g$; in other words, it is a lower convex polygon in $\mathbb{R}^2$, starting at $(0,0)$ and ending at $(2g,g)$, whose break points are integral, such that the slopes $\lambda$ are rational numbers in the interval $[0,1]$ and the slopes $\lambda$ and $1 - \lambda$ occur with the same multiplicity. The Newton polygon is determined by its sequence of slopes, written in ascending order, and these are the $p$-adic values of the zeros of the relative Frobenius morphism $\pi_A$. More precisely, if $A$ is a simple abelian variety defined over a finite field $k$ of cardinality $r$, then Tate proved that $\pi_A$ generates a field which is the center of $\en^0(A)$ \cite[Section 10.17]{ChaiOort09}. Viewed as an algebraic number, $\pi_A$ has absolute value $\sqrt{r}$ in every embedding of $\mathbb{Q}(\pi_A)$ in $\mathbb{C}$ (a Weil $\sqrt{r}$-number). The slopes of the Newton polygon of $A$ are the $p$-adic valuations of $\pi_A$ and the multiplicity of $\lambda$ in the Newton polygon is the sum of the degrees $[\mathbb{Q}(\pi_A)_v: \mathbb{Q}_p]$ over all places $v$ of $\mathbb{Q}(\pi_A)$ above $p$ such that $\lambda=v(\pi_A)/v(r)$. If $\mathcal{J}$ is not simple, then its slopes are the concatenation of the slopes of its simple factors.
Next, for $k$ a finite field of characteristic $p$, a power $q$ of $p$, and $f\in k(x)$ a rational function with poles of order prime to $p$, we define a (Hodge) polygon $HP=HP(f,q)$ as follows. Write the polar divisor of $f$ as $\dvsr_\infty(f)=\sum_{i=1}^ma_iP_i$ where the $a_i$ are all prime to $p$ and the $P_i$ are distinct. Define a collection of slopes by taking slopes 0 and 1 with multiplicity $(m-1)(q-1)$ and, for each pole $P_i$ with $a_i>1$, slopes $1/a_i, 2/a_i,\dots,(a_i-1)/a_i$ each with multiplicity $q-1$. We have in total $2g_{\mathcal{C}_{f,q}}$ slopes, which we place in ascending order and call $s_1,\dots,s_{2g}$. Then $HP$ is defined to be the graph of the piecewise linear function $\psi$ on $[0,2g]$ with $\psi(0)=0$ and with slope $s_i$ on $[i-1,i]$.
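The slope multiset of $HP(f,q)$ is simple to generate; the sketch below (ours) does so from the pole orders and confirms that there are $2g$ slopes, that they are symmetric under $\lambda\mapsto1-\lambda$, and that they sum to $g$ (so the polygon ends at $(2g,g)$).
\begin{verbatim}
from fractions import Fraction

def hodge_slopes(q, pole_orders):
    m = len(pole_orders)
    slopes = ([Fraction(0)] * ((m - 1) * (q - 1))
              + [Fraction(1)] * ((m - 1) * (q - 1)))
    for a in pole_orders:
        for j in range(1, a):
            slopes += [Fraction(j, a)] * (q - 1)
    return sorted(slopes)

q, poles = 5, [1, 3]
s = hodge_slopes(q, poles)
g = (q - 1) * (sum(a + 1 for a in poles) - 2) // 2
assert len(s) == 2 * g
assert sorted(1 - x for x in s) == s      # symmetry lambda <-> 1 - lambda
assert sum(s) == g                        # endpoint (2g, g)
\end{verbatim}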
Note that $NP(\mathcal{C}_{f,q})$ and $HP(f,q)$ have the same endpoints, namely $(0,0)$ and $(2g,g)$, and $NP(\mathcal{C}_{f,q})$ lies on or over $HP(f,q)$ \cite{Katz79b}. The following is an immediate consequence of \cite[Theorem 1.1 \& Corollary 1.3]{Zhu04} (which is the case $p=q$) and Lemma~\ref{lemma:decompose} above.
\begin{thm}\label{thm:slopes} Suppose $\mathcal{C} \simeq \mathbb{P}^1$.
The Newton polygon $NP(\mathcal{C}_{f,q})$ coincides with the Hodge polygon
$HP(f,q)$ if and only if $p\equiv1\pmod{\lcm(a_i)}$. \end{thm}
The curve $\mathcal{C}_{q,f}$ is ordinary if and only if the only slopes of its Newton polygon are $0$ and $1$. As an example of the theorem, note that if all poles of $f(x) \in k(x)$ are simple, then the congruence condition is empty and the Newton and Hodge polygons coincide. Moreover, the latter has only slopes 0 and 1, giving another proof that $\mathcal{C}_{f,q}$ is ordinary in this case.
\subsection{Slopes, $p$-ranks, and supersingular
factors} \label{ss:slopesandss} In this subsection, we collect a few remarks about slopes, $p$-ranks, and supersingular elliptic curves appearing in Jacobians of Artin-Schreier curves. Throughout, $\mathcal{C}_{q,f}$ is the Artin-Schreier cover of $\mathcal{C}$ determined by the equation $z^q-z=f$.
By definition, $\mathcal{C}_{q,f}$ is {\it supersingular} if and only if all of the slopes of its Newton polygon equal $1/2$ \cite[Section 1.1]{ChaiOort09}.
If $\mathcal{C}_{q,f}$ is supersingular, then there is an isogeny $\mathcal{J}_{q,f} \otimes \overline{k} \sim \oplus_{i=1}^g E$ for a supersingular elliptic curve $E$ \cite[Theorem 4.2]{Oort74}. As seen in \cite[Sections 1.1 and 5.3]{ChaiOort09}, if $\mathcal{C}_{q,f}$ is supersingular, its Jacobian has $p$-rank 0, but the converse is in general false when $g_{q,f} \geq 3$.
Note that if the Jacobian of $\mathcal{C}_{q,f}$ has a supersingular elliptic curve as an isogeny factor of multiplicity $e$ (i.e., $\mathcal{J}_{q,f} \otimes \overline{k} \sim E^e\oplus A$), then $2e$ of its slopes are $1/2$. The converse is false unless $e=g_{q,f}$; for every isogeny type other than the supersingular one, there exists an absolutely simple abelian variety having that isogeny type \cite{LenstraOort74}.
Suppose that $\dvsr_\infty(f)=\sum_{i=1}^ma_iP_i$ where as usual the $P_i$ are distinct ${\overline{k}}$-valued points of $\mathbb{P}^1$ and the $a_i$ are prime to $p$. If some $a_i$ is even, then the Hodge polygon of $f$ has a segment of slope $1/2$. If furthermore $p\equiv 1 \pmod{\lcm(a_i)}$, then by Theorem~\ref{thm:slopes}, the Newton polygon of $\mathcal{C}_{q,f}$ also has a segment of slope $1/2$ and so it is possible that the Jacobian $\mathcal{J}_{q,f}$ of $\mathcal{C}_{q,f}$ has supersingular factors.
One case where it does follow immediately that $\mathcal{J}_{q,f}$ has supersingular factors is when $p$ is odd and $f$ has exactly one pole of order 2 and no other poles. Indeed, in this situation, the Newton and Hodge polygons are equal, and the latter is a segment of slope $1/2$. Since its length is $q-1$, it follows that over ${\overline{k}}$, $\mathcal{J}_{q,f}$ is isogenous to a supersingular elliptic curve to the power $(q-1)/2$. More generally, any Artin-Schreier curve that dominates this example will also have supersingular factors. This includes the Artin-Schreier curves $z^p-z=g(x)^2$ for any rational function $g(x)$ having poles of order prime to $p$.
Finally, we note that a \emph{different} parity condition on the $a_i$
leads to supersingular factors, and therefore to slopes 1/2. Indeed, according to Proposition~\ref{prop:ss-factors}, if $\sum(a_i+1)$ is odd and $q$ is a power of $r^2=|k|^2$, then $\mathcal{C}_{f,q}$ has a supersingular elliptic curve as isogeny factor with multiplicity at least $(\sqrt{q}-1)/2$. (Note that the hypothesis here implies that at least one of the $a_i$ is even, making a connection with the previous paragraph.) This lower bound for the multiplicity of supersingular curves as isogeny factors is often not sharp, as can be seen from the main result of \cite{vanderGeervanderVlugt95}.
\subsection{Endomorphism algebras of Artin-Schreier curves} The endomorphism algebras of Artin-Schreier curves are known only in special cases. We include some partial results here which are used multiple times in Sections~\ref{s:exactrank} and~\ref{s:explicit}. Throughout this subsection, we assume that $k$ contains the field of $q$ elements.
Let $\mathbb{Q}[H]$ be the group algebra of the group $H\cong{\mathbb{F}_q}$. By the Perlis-Walker theorem \cite{PerlisWalker50}, $$\mathbb{Q}[H] \simeq \mathbb{Q} \oplus \bigoplus_{a \in S} \mathbb{Q}(\zeta_p)$$ where $S$ is a set of representatives of the cosets of $\mathbb{F}_p^* \subset \mathbb{F}_q^*$. Let $W$ be ${\overline{\mathbb{Q}}_\ell}^{q-1}$ with ${\mathbb{F}_q}$ acting by the direct sum of its $q-1$ nontrivial characters.
Let $\mathcal{C}_{q,f}$ be as in the previous subsection, and let $\mathcal{J}_{q,f}$ be its Jacobian. Consider the endomorphism algebra $\en^0(\mathcal{J}_{q,f})=\en_k(\mathcal{J}_{q,f})\otimes\mathbb{Q}$.
If $k$ contains the field of $q$ elements, then $H\cong{\mathbb{F}_q}$ acts on $\mathcal{C}_{q,f}$. The action of $H$ on $\mathcal{C}_{q,f}$ induces a homomorphism $\mathbb{Q}[H]\to\en^0(\mathcal{J}_{q,f})$. Let $\en^0(\mathcal{J}_{q,f})^{H}$ denote the subalgebra of endomorphisms which commute with the action of $H$, in other words, the subalgebra commuting with the image of $\mathbb{Q}[H]\to\en^0(\mathcal{J}_{q,f})$. We consider the composition $\mathbb{Q}[H] \to \en^0(\mathcal{J}_{q,f}) \subset \en^0(H^1(\mathcal{C}_{q,f}\times{\overline{k}}, \overline{\mathbb{Q}}_\ell))$, where $\ell \not = p$ is prime.
\begin{prop} \label{proposition:rep} Suppose $\mathcal{C} \simeq \mathbb{P}^1$.
There is a $\mathbb{Q}[H]$-module isomorphism $$H^1(\mathcal{C}_{q,f}\times\overline{k},\overline{\mathbb{Q}}_\ell) \simeq W^R$$ where $R=2g_{q,f}/(q-1)=-2+\sum(a_i+1)$. \end{prop}
\begin{proof}
Consider the representation $\rho_{q,f}$ determined by the action of
$H$ on $H^1(\mathcal{C}_{q,f}\times{\overline{k}}, \overline{\mathbb{Q}}_{\ell})$. By the
Lefschetz fixed point theorem \cite[V.2.8]{MilneEC}, the character
$\chi(\rho_{q,f})$ satisfies $$\chi(\rho_{q,f})=2 \chi_{{\rm triv}} -2 \chi_{\rm reg} + \sum_{i=1}^m A_i,$$ where $A_i$ is the character of the Artin representation attached to the branch point $P_i$ and $\chi_{\rm reg}$ is the character of the regular representation. By Lemma~\ref{lemma:ASbasics}(2) and \cite[VI]{SerreLF}, for $\sigma \in {\mathbb{F}_q}$, $$A_i(\sigma)= \begin{cases} -(a_i+1) & \sigma \in \mathbb{F}_q, \ \sigma \not = {\rm id}\\ (a_i+1)(q-1) & \sigma = {\rm id}. \end{cases}$$ Thus $$\chi(\rho_{q,f})(\sigma)= \begin{cases} 2-\sum_{i=1}^m(a_i+1) & \sigma \not = {\rm id}\\ (q-1)(-2+\sum_{i=1}^m(a_i+1))& \sigma = {\rm id} \end{cases} $$ and therefore by Lemma~\ref{lemma:ASbasics}(3), $$\chi(\rho_{q,f})(\sigma)= \begin{cases} -R& \sigma \not = {\rm id}\\ (q-1)R& \sigma = {\rm id}. \end{cases} $$ which is the character of $W^R$. \end{proof}
\begin{lemma}\label{lemma:ASendos}
Suppose that $k$ contains the field of $q$ elements, so that
$\mathcal{C}_{f,q}\to\mathbb{P}^1$ is an ${\mathbb{F}_q}$-Galois extension and $H={\mathbb{F}_q}$ acts
on the Jacobian $\mathcal{J}_{q,f}$ of $\mathcal{C}_{q,f}$. \begin{enumerate} \item The image of $\mathbb{Q}[H]\to\en^0(\mathcal{J}_{q,f})$ has dimension $q-1$. \item If $\mathcal{C}_{q,f}$ is ordinary and $k$ is algebraic over ${\mathbb{F}_p}$, then
$\en^0(\mathcal{J}_{q,f})$ is commutative of dimension $2g_{q,f}$ and so
$\en^0(\mathcal{J}_{q,f})^{H}=\en^0(\mathcal{J}_{q,f})$ has dimension $2g_{q,f}$.
\item If $\mathcal{C}_{q,f}$ is supersingular and $k$ contains the field of $p^2$
elements, then $$\en^0(\mathcal{J}_{q,f}) \cong M_g(D)$$ where $D$ is the quaternion algebra ramified at $p$ and $\infty$, and $\en^0(\mathcal{J}_{q,f})^{H}$ has dimension $4g_{q,f}^2/(q-1)$. \end{enumerate} \end{lemma}
\begin{proof}\mbox{} \begin{enumerate} \item This follows from Proposition~\ref{proposition:rep}. \item See \cite[Theorem~2(c)]{Tate66a}. \item The fact that $\en^0(\mathcal{J}_{q,f}) \cong M_g(D)$ can be found in
\cite[Theorem~2(d)]{Tate66a}. (The assumption that $\mathbb{F}_{p^2}\subset
k$ guarantees that the endomorphism algebra of a supersingular
elliptic curve is $D$.) By part (1), the dimension of the image of
$\mathbb{Q}[H]\to\en^0(\mathcal{J}_{q,f})$ is $q-1$. By the double centralizer theorem
\cite[Theorem~2.43]{KnappAA}, $\en^0(\mathcal{J}_{q,f})^{H}$ has dimension
$4g_{q,f}^2/(q-1)$. \end{enumerate} \end{proof}
{}
\end{document}
\begin{document}
\title{\Large \bf Spin Networks and Anyonic Topological
Computing }
\begin{abstract} We review the $q$-deformed spin network approach to Topological Quantum Field Theory and apply these methods to produce unitary representations of the braid groups that are dense in the unitary groups. \end{abstract}
\keywords{braiding, knotting, linking, spin network, Temperley -- Lieb algebra, unitary representation.}
\section{INTRODUCTION}
This paper describes the background for topological quantum computing in terms of Temperley -- Lieb Recoupling Theory. This is a recoupling theory that generalizes standard angular momentum recoupling theory, generalizes the Penrose theory of spin networks and is inherently topological. Temperley -- Lieb Recoupling Theory is based on the bracket polynomial model for the Jones polynomial. It is built in terms of diagrammatic combinatorial topology. The same structure can be explained in terms of the $SU(2)_{q}$ quantum group, and has relationships with functional integration and Witten's approach to topological quantum field theory. Nevertheless, the approach given here will be unrelentingly elementary. Elementary does not necessarily mean simple. In this case an architecture is built from simple beginnings and this architecture and its recoupling language can be applied to many things including: colored Jones polynomials, Witten--Reshetikhin--Turaev invariants of three manifolds, topological quantum field theory and quantum computing. \bigbreak
In quantum computing, the application is most interesting because the recoupling theory yields representations of the Artin Braid group into unitary groups $U(n)$. These represententations are {\em dense} in the unitary group, and can be used to model quantum computation universally in terms of representations of the braid group. Hence the term: topological quantum computation. \bigbreak
In this paper, we outline the basics of the Temperley -- Lieb Recoupling Theory, and show explicitly how unitary representations of the braid group arise from it. We will return to this subject in more detail in subsequent papers. In particular, we do not describe the context of anyonic models for quantum computation in this paper. Rather, we concentrate here on showing how naturally unitary representations of the braid group arise in the context of the Temperley -- Lieb Theory. For the reader interested in the relevant background in anyonic topological quantum computing we recommend the following references \{ \cite{F,FR98,FLZ,F5,F6,Kitaev,MR,Preskill,Wilczek} \}. \bigbreak
Here is a very condensed presentation of how unitary representations of the braid group are constructed via topological quantum field theoretic methods. For simplicity assume that one has a single (mathematical) particle with label $P$ that can interact with itself to produce either itself labeled $P,$ or itself with the null label $*.$ When $*$ interacts with $P$ the result is always $ P. $ When $*$ interacts with $*$ the result is always $*.$ One considers process spaces where a row of particles labeled $P$ can successively interact, subject to the restriction that the end result is $P.$ For example the space $V[(ab)c]$ denotes the space of interactions of three particles labeled $P.$ The particles are placed in the positions $a,b,c.$ Thus we begin with $(PP)P.$ In a typical sequence of interactions, the first two $P$'s interact to produce a $*,$ and the $*$ interacts with $P$ to produce $P.$ \[ (PP)P \longrightarrow (*)P \longrightarrow P. \] \noindent In another possibility, the first two $P$'s interact to produce a $ P,$ and the $P$ interacts with $P$ to produce $P.$ \[ (PP)P \longrightarrow (P)P \longrightarrow P. \] It follows from this analysis that the space of linear combinations of processes $V[(ab)c]$ is two dimensional. The two processes we have just described can be taken to be the qubit basis for this space. One obtains a representation of the three strand Artin braid group on $V[(ab)c]$ by assigning appropriate phase changes to each of the generating processes. One can think of these phases as corresponding to the interchange of the particles labeled $a$ and $b$ in the association $(ab)c.$ The other operator for this representation corresponds to the interchange of $b$ and $c.$ This interchange is accomplished by a {\it unitary change of basis mapping} \[ F:V[(ab)c] \longrightarrow V[a(bc)]. \] \noindent If \[ A:V[(ab)c] \longrightarrow V[(ba)c] \] is the first braiding operator (corresponding to an interchange of the first two particles in the association) then the second operator \[ B:V[(ab)c] \longrightarrow V[(ac)b] \] is accomplished via the formula $B = F^{-1}AF$ where the $A$ in this formula acts in the second vector space $V[a(bc)]$ to apply the phases for the interchange of $b$ and $c.$ \bigbreak
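To make this concrete, here is a small numerical illustration (ours); it simply quotes the well-known ``Fibonacci'' matrices, which are commonly associated with the case $r=5$ of the recoupling theory described below, rather than deriving them. With $A$ the diagonal phase gate, $F$ the real symmetric recoupling matrix (so $F^{2}=I$) and $B=F^{-1}AF$, the code checks unitarity and the braid relation $ABA=BAB$.
\begin{verbatim}
import numpy as np

phi = (1 + 5 ** 0.5) / 2                     # golden ratio
A = np.diag([np.exp(-4j * np.pi / 5), np.exp(3j * np.pi / 5)])
F = np.array([[1 / phi, 1 / np.sqrt(phi)],
              [1 / np.sqrt(phi), -1 / phi]])
B = np.linalg.inv(F) @ A @ F

I2 = np.eye(2)
assert np.allclose(F @ F, I2)                # F is its own inverse
assert np.allclose(A @ A.conj().T, I2)       # A is unitary
assert np.allclose(B @ B.conj().T, I2)       # B is unitary
assert np.allclose(A @ B @ A, B @ A @ B)     # braid relation
\end{verbatim}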
In this scheme, vector spaces corresponding to associated strings of particle interactions are interrelated by {\it recoupling transformations} that generalize the mapping $F$ indicated above. A full representation of the Artin braid group on each space is defined in terms of the local interchange phase gates and the recoupling transformations. These gates and transformations have to satisfy a number of identities in order to produce a well-defined representation of the braid group. These identities were discovered originally in relation to topological quantum field theory. In our approach the structure of phase gates and recoupling transformations arises naturally from the structure of the bracket model for the Jones polynomial \cite{JO}. Thus we obtain a knot-theoretic basis for topological quantum computing. \bigbreak
\section {Spin Networks and Temperley -- Lieb Recoupling Theory} In this section we discuss a combinatorial construction for spin networks that generalizes the original construction of Roger Penrose \cite{Penrose}. The result of this generalization is a structure that satisfies all the properties of a graphical $TQFT$ as described in our paper on braiding and universal quantum gates \cite{UG}, and specializes to classical angular momentum recoupling theory in the limit of its basic variable. The construction is based on the properties of the bracket polynomial \cite{KA87}. A complete description of this theory can be found in the book ``Temperley -- Lieb Recoupling Theory and Invariants of Three-Manifolds" by Kauffman and Lins \cite{KL}. \bigbreak
The ``$q$-deformed" spin networks that we construct here are based on the bracket polynomial relation. View Figure 1 and Figure 2. \bigbreak
\begin{figure}
\caption{\bf Basic Projectors}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Two Strand Projector}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Trivalent Vertex}
\end{figure}
\bigbreak
In Figure 1 we indicate how the basic projector (symmetrizer, Jones-Wenzl projector) is constructed on the basis of the bracket polynomial expansion \cite{KA87}. In this technology, a symmetrizer is a sum of tangles on $n$ strands (for a chosen integer $n$). The tangles are made by summing over braid lifts of permutations in the symmetric group on $n$ letters, as indicated in Figure 1. Each elementary braid is then expanded by the bracket polynomial relation, as indicated in Figure 1, so that the resulting sum consists of flat tangles without any crossings (these can be viewed as elements in the Temperley -- Lieb algebra). The projectors have the property that the concatenation of a projector with itself is just that projector, and if you tie two lines on the top or the bottom of a projector together, then the evaluation is zero. This general definition of projectors is very useful for this theory. The two-strand projector is shown in Figure 2. Here the formula for that projector is particularly simple. It is the sum of two parallel arcs and two turn-around arcs, the latter taken with coefficient $-1/d$, where $d = -A^{2} - A^{-2}$ is the loop value for the bracket polynomial. Figure 2 also shows the recursion formula for the general projector. This recursion formula is due to Jones and Wenzl and the projector in this form, developed as a sum in the Temperley -- Lieb algebra (see Section 5 of this paper), is usually known as the {\em Jones--Wenzl projector}. \bigbreak
The projectors are combinatorial analogs of irreducible representations of a group (the original spin nets were based on $SU(2)$ and these deformed nets are based on the quantum group corresponding to SU(2)). As such the reader can think of them as ``particles". The interactions of these particles are governed by how they can be tied together into three-vertices. See Figure 3. In Figure 3 we show how to tie three projectors, of $a,b,c$ strands respectively, together to form a three-vertex. In order to accomplish this interaction, we must share lines between them as shown in that Figure so that there are non-negative integers $i,j,k$ so that $a = i + j, b = j + k, c = i + k.$ This is equivalent to the condition that $a + b + c$ is even and that the sum of any two of $a,b,c$ is greater than or equal to the third. For example $a + b \ge c.$ One can think of the vertex as a possible particle interaction where $[a]$ and $[b]$ interact to produce $[c].$ That is, any two of the legs of the vertex can be regarded as interacting to produce the third leg. \bigbreak
There is a basic orthogonality of three vertices as shown in Figure 4. Here if we tie two three-vertices together so that they form a ``bubble'' in the middle, then the resulting network with labels $a$ and $b$ on its free ends is a multiple of an $a$-line (meaning a line with an $a$-projector on it) or zero (if $a$ is not equal to $b$). The multiple is compatible with the results of closing the diagram in the equation of Figure 4 so that the two free ends are identified with one another. On closure, as shown in the Figure, the left hand side of the equation becomes a Theta graph and the right hand side becomes a multiple of a ``delta'' where $\Delta_{a}$ denotes the bracket polynomial evaluation of the $a$-strand loop with a projector on it. The $\Theta(a,b,c)$ denotes the bracket evaluation of a theta graph made from three trivalent vertices and labeled with $a, b, c$ on its edges.
\begin{figure}
\caption{\bf Orthogonality of Trivalent Vertices}
\end{figure}
\bigbreak
There is a recoupling formula in this theory in the form shown in Figure 5. Here there are ``$6$-j symbols", recoupling coefficients that can be expressed, as shown in Figure 7, in terms of tetrahedral graph evaluations and theta graph evaluations. The tetrahedral graph is shown in Figure 6. One derives the formulas for these coefficients directly from the orthogonality relations for the trivalent vertices by closing the left hand side of the recoupling formula and using orthogonality to evaluate the right hand side. This is illustrated in Figure 7.
\begin{figure}
\caption{\bf Recoupling Formula }
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Tetrahedron Network}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Tetrahedron Formula for Recoupling Coefficients}
\end{figure}
\bigbreak
Finally, there is the braiding relation, as illustrated in Figure 8.
\begin{figure}
\caption{\bf Local Braiding Formula}
\end{figure}
\bigbreak
With the braiding relation in place, this $q$-deformed spin network theory satisfies the pentagon, hexagon and braiding naturality identities needed for a topological quantum field theory. All these identities follow naturally from the basic underlying topological construction of the bracket polynomial. One can apply the theory to many different situations.
\subsection{Evaluations} In this section we discuss the structure of the evaluations for $\Delta_{n}$ and the theta and tetrahedral networks. We refer to \cite{KL} for the details behind these formulas. Recall that $\Delta_{n}$ is the bracket evaluation of the closure of the $n$-strand projector, as illustrated in Figure 4. For the bracket variable $A,$ one finds that $$\Delta_{n} = (-1)^{n}\frac{A^{2n+2} - A^{-2n-2}}{A^{2} - A^{-2}}.$$ One sometimes writes the {\it quantum integer} $$[n] = (-1)^{n-1}\Delta_{n-1} = \frac{A^{2n} - A^{-2n}}{A^{2} - A^{-2}}.$$ If $$A=e^{i\pi/2r}$$ where $r$ is a positive integer, then $$\Delta_{n} = (-1)^{n}\frac{\sin((n+1)\pi/r)}{\sin(\pi/r)}.$$ Here the corresponding quantum integer is $$[n] = \frac{\sin(n\pi/r)}{\sin(\pi/r)}.$$ Note that $[n+1]$ is a positive real number for $n=0,1,2,\dots,r-2$ and that $[r]=0$ (equivalently, $\Delta_{r-1}=0$). \bigbreak
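These formulas are easy to confirm numerically; the fragment below (ours, included only as an illustration) checks, for a sample value of $r$, that the expression for $\Delta_n$ in $A$ agrees with the sine formula, that $[1],\dots,[r-1]$ are positive, and that $\Delta_{r-1}=0$.
\begin{verbatim}
import cmath, math

r = 5
A = cmath.exp(1j * math.pi / (2 * r))

def delta_A(n):            # (-1)^n (A^{2n+2} - A^{-2n-2}) / (A^2 - A^{-2})
    return (-1) ** n * (A ** (2*n + 2) - A ** (-2*n - 2)) / (A ** 2 - A ** (-2))

def q_int(n):              # [n] = sin(n pi / r) / sin(pi / r)
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

for n in range(r):
    assert abs(delta_A(n) - (-1) ** n * q_int(n + 1)) < 1e-12
assert all(q_int(n + 1) > 0 for n in range(r - 1))
assert abs(q_int(r)) < 1e-12           # Delta_{r-1} = 0
\end{verbatim}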
The evaluation of the theta net is expressed in terms of quantum integers by the formula $$\Theta(a,b,c) = (-1)^{m + n + p}\frac{[m+n+p+1]![n]![m]![p]!}{[m+n]![n+p]![p+m]!}$$ where $$a=m+p, b=m+n, c=n+p.$$ Note that $$(a+b+c)/2 = m + n + p.$$ \bigbreak
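The theta evaluation can likewise be tabulated directly from this formula; the helper below (ours) does so at $A=e^{i\pi/2r}$ and checks the special case $\Theta(a,0,a)=\Delta_{a}$, which is immediate from the formula since $m=n=0$ there.
\begin{verbatim}
import math

r = 7

def q_int(n):
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

def q_fact(n):                         # quantum factorial [n]! = [1][2]...[n]
    out = 1.0
    for k in range(1, n + 1):
        out *= q_int(k)
    return out

def delta(n):
    return (-1) ** n * q_int(n + 1)

def theta(a, b, c):
    # a = m+p, b = m+n, c = n+p (a+b+c even, triangle inequalities assumed)
    m, n, p = (a + b - c) // 2, (b + c - a) // 2, (c + a - b) // 2
    return (-1) ** (m + n + p) * q_fact(m + n + p + 1) * q_fact(n) * q_fact(m) \
        * q_fact(p) / (q_fact(m + n) * q_fact(n + p) * q_fact(p + m))

for a in range(r - 1):
    assert abs(theta(a, 0, a) - delta(a)) < 1e-9
\end{verbatim}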
When $A=e^{i\pi/2r},$ the recoupling theory becomes finite with the restriction that only three-vertices (labeled with $a,b,c$) are {\it admissible} when $a + b +c \le 2r-4.$ All the summations in the formulas for recoupling are restricted to admissible triples of this form. \bigbreak
\subsection{Symmetry and Unitarity} The formula for the recoupling coefficients given in Figure 7 has less symmetry than is actually inherent in the structure of the situation. By multiplying all the vertices by an appropriate factor, we can reconfigure the formulas in this theory so that the revised recoupling transformation is orthogonal, in the sense that its transpose is equal to its inverse. This is a very useful fact. It means that when the resulting matrices are real, then the recoupling transformations are unitary. \bigbreak
Figure 9 illustrates this modification of the three-vertex. Let $Vert[a,b,c]$ denote the original $3$-vertex of the Temperley -- Lieb recoupling theory. Let $ModVert[a,b,c]$ denote the modified vertex. Then we have the formula $$ModVert[a,b,c] = \frac{\sqrt{\sqrt{\Delta_{a} \Delta_{b} \Delta_{c}}}}{ \sqrt{\Theta(a,b,c)}}\,\, Vert[a,b,c].$$
\noindent {\bf Lemma.} For the bracket evaluation at the root of unity $A = e^{i\pi/2r}$ the factor $$f(a,b,c) = \frac{\sqrt{\sqrt{\Delta_{a} \Delta_{b} \Delta_{c}}}}{ \sqrt{\Theta(a,b,c)}}$$ is real, and can be taken to be a positive real number for $(a,b,c)$ admissible (i.e. with $a + b + c \le 2r -4$). \bigbreak
\noindent {\bf Proof.} By the results from the previous subsection, $$\Theta(a,b,c) = (-1)^{(a+b+c)/2}\hat{\Theta}(a,b,c)$$ where $\hat{\Theta}(a,b,c)$ is positive real, and $$\Delta_{a} \Delta_{b} \Delta_{c} = (-1)^{(a+b+c)} [a+1][b+1][c+1]$$ where the quantum integers in this formula can be taken to be positive real. It follows from this that $$f(a,b,c) = \sqrt{\frac{\sqrt{[a+1][b+1][c+1]}}{\hat{\Theta}(a,b,c)}},$$ showing that this factor can be taken to be positive real. This completes the proof. \bigbreak
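Numerically, the Lemma can be spot-checked as follows (our own sketch, not part of the proof): for every admissible triple at a chosen $r$, the quantity under the square roots is positive, so $f(a,b,c)$ is indeed a positive real number.
\begin{verbatim}
import math

r = 6

def q_int(n):
    return math.sin(n * math.pi / r) / math.sin(math.pi / r)

def q_fact(n):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_int(k)
    return out

def theta_hat(a, b, c):                # |Theta(a,b,c)|, the positive real part
    m, n, p = (a + b - c) // 2, (b + c - a) // 2, (c + a - b) // 2
    return q_fact(m + n + p + 1) * q_fact(n) * q_fact(m) * q_fact(p) / (
        q_fact(m + n) * q_fact(n + p) * q_fact(p + m))

def admissible(a, b, c):
    return ((a + b + c) % 2 == 0 and a + b >= c and b + c >= a and c + a >= b
            and a + b + c <= 2 * r - 4)

for a in range(r - 1):
    for b in range(r - 1):
        for c in range(r - 1):
            if admissible(a, b, c):
                f = math.sqrt(math.sqrt(q_int(a+1) * q_int(b+1) * q_int(c+1))
                              / theta_hat(a, b, c))
                assert f > 0
print("f(a,b,c) > 0 for all admissible triples at r =", r)
\end{verbatim}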
In Figure 10 we show how this modification of the vertex affects the non-zero term of the orthogonality of trivalent vertices (compare with Figure 4). We refer to this as the ``modified bubble identity." The coefficient in the modified bubble identity is $$\sqrt{ \frac{\Delta_{b}\Delta_{c}}{\Delta_{a}} } = (-1)^{(b+c-a)/2} \sqrt{\frac{[b+1][c+1]}{[a+1]}}$$ where $(a,b,c)$ form an admissible triple. In particular $b+c-a$ is even and hence this factor can be taken to be real. \bigbreak
We rewrite the recoupling formula in this new basis and emphasize that the recoupling coefficients can be seen (for fixed external labels $a,b,c,d$) as a matrix transforming the horizontal ``double-$Y$'' basis to a vertically disposed double-$Y$ basis. In Figures 11, 12 and 13 we have shown the form of this transformation, using the matrix notation $$M[a,b,c,d]_{ij}$$ for the modified recoupling coefficients. In Figure 11 we derive an explicit formula for these matrix elements. The proof of this formula follows directly from trivalent--vertex orthogonality (see Figures 4 and 7), and is given in Figure 11. The result shown in Figure 11 and Figure 12 is the following formula for the recoupling matrix elements. $$M[a,b,c,d]_{ij} = ModTet \left( \begin{array}{ccc} a & b & i \\ c & d & j \\ \end{array} \right)/\sqrt{\Delta_{a}\Delta_{b}\Delta_{c}\Delta_{d}}$$ where $\sqrt{\Delta_{a}\Delta_{b}\Delta_{c}\Delta_{d}}$ is short-hand for the product $$\sqrt{ \frac{\Delta_{a}\Delta_{b}}{\Delta_{j}} }\sqrt{ \frac{\Delta_{c}\Delta_{d}}{\Delta_{j}} } \Delta_{j}$$ $$= (-1)^{(a+b-j)/2}(-1)^{(c+d-j)/2} (-1)^{j} \sqrt{ \frac{[a+1][b+1]}{[j+1]}}\sqrt{ \frac{[c+1][d+1]}{[j+1]}} [j+1]$$ $$ = (-1)^{(a+b+c+d)/2}\sqrt{[a+1][b+1][c+1][d+1]}$$ In this form, since $(a,b,j)$ and $(c,d,j)$ are admissible triples, we see that this coefficient can be taken to be real, and its value is independent of the choice of $i$ and $j.$ The matrix $M[a,b,c,d]$ is real-valued. \bigbreak
\noindent It follows from Figure 12 (turn the diagrams by ninety degrees) that $$M[a,b,c,d]^{-1} = M[b,d,a,c].$$ In Figure 14 we illustrate the formula $$M[a,b,c,d]^{T} = M[b,d,a,c].$$ It follows from this formula that $$M[a,b,c,d]^{T} = M[a,b,c,d]^{-1}.$$ {\it Hence $M[a,b,c,d]$ is an orthogonal, real-valued matrix.}
\begin{figure}
\caption{\bf Modified Three Vertex}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Modified Bubble Identity}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Derivation of Modified Recoupling Coefficients}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Modified Recoupling Formula}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Modified Recoupling Matrix}
\end{figure}
\bigbreak
\begin{figure}
\caption{\bf Modified Matrix Transpose}
\end{figure}
\bigbreak
\noindent {\bf Theorem.} In the Temperley -- Lieb theory we obtain unitary (in fact real orthogonal) recoupling transformations when the bracket variable $A$ has the form $A = e^{i\pi/2r}$. Thus we obtain families of unitary representations of the Artin braid group from the recoupling theory at these roots of unity. \bigbreak
\noindent {\bf Proof.} The proof is given in the discussion above. \bigbreak
\noindent{\bf Remark.} The density of these representations will be taken up in a subsequent paper. \bigbreak
\section*{ACKNOWLEDGMENTS} Most of this effort was sponsored by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory, Air Force Materiel Command, USAF, under agreement F30602-01-2-05022. Some of this effort was also sponsored by the National Institute for Standards and Technology (NIST). The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright annotations thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Defense Advanced Research Projects Agency, the Air Force Research Laboratory, or the U.S. Government. (Copyright 2006.) \bigbreak
\end{document}
Software & Systems Modeling
pp 1–40
ChronoSphere: a graph-based EMF model repository for IT landscape models
Martin Haeusler
Thomas Trojer
Johannes Kessler
Matthias Farwick
Emmanuel Nowakowski
Ruth Breu
Regular Paper
IT Landscape models represent the real-world IT infrastructure of a company. They include hardware assets such as physical servers and storage media, as well as virtual components like clusters, virtual machines and applications. These models are a critical source of information in numerous tasks, including planning, error detection and impact analysis. The responsible stakeholders often struggle to keep such a large and densely connected model up-to-date due to its inherent size and complexity, as well as due to the lack of proper tool support. Even though modeling techniques are very suitable for this domain, existing tools do not offer the required features, scalability or flexibility. In order to address these challenges and meet the requirements that arise from this application domain, we combine domain-driven modeling concepts with scalable graph-based repository technology and a custom language for model-level queries. We analyze in detail how we synthesized these requirements from the application domain and how they relate to the features of our repository. We discuss the architecture of our solution which comprises the entire data management stack, including transactions, queries, versioned persistence and metamodel evolution. Finally, we evaluate our approach in a case study where our open-source repository implementation is employed in a production environment in an industrial context, as well as in a comparative benchmark with an existing state-of-the-art solution.
Model-driven engineering · Model repositories · Versioning · Graph database · IT landscape
Communicated by Dr. Ana Moreira.
Model-driven engineering (MDE) is a discipline that aims at improving the processes, workflows and products of existing engineering areas by applying models as an abstraction layer. The primary field of application for MDE has traditionally always been software engineering [64]. However, the key innovations of MDE are not domain specific. The general concept of using a metamodel to define a structure and then instantiating it to create actual objects applies to a wide range of problems. When comparing different use cases it becomes evident that modeling concepts tend to be employed in areas that exhibit high complexity and heterogeneity in their domain structures, such as cloud orchestration [22], self-adaptive software [5], automotive systems [24] or Enterprise Architecture Management (EAM) [43]. However, there are still many potential application areas for model-driven approaches that have barely been investigated so far. EAM focuses exclusively on the strategic aspects of IT management. Standard metamodels (such as ArchiMate [43]) have been developed for this domain, yet these metamodels focus primarily on high-level business services and capabilities. The actual assets (or Configuration Items (CIs) [7]) on the operative level are captured in a coarse-grained way that does not allow for deeper analysis, or are excluded entirely.
Configuration items typically comprise physical servers, applications, databases and network infrastructure. We refer to the collection of all assets in a company as the IT Landscape (also known as resource landscape [36]). The IT Landscapes of major, globally operating companies can grow to considerable dimensions. Due to agility requirements, they are increasingly subject to frequent evolution and technology shifts. Recent examples include the extensive usage of virtualization platforms in data centers, the advent of cloud computing and the emergence of As-A-Service solutions. Furthermore, even though commonalities do exist, every company has its own architecture and vision behind its landscape. The terminology also varies, as there is no generally accepted definition across all stakeholders for common terms like Service or Application. Responsible persons and teams often struggle in their continuous efforts to properly document these landscapes due to their inherent size and complexity. The absence of reliable and up-to-date documentation can result in slow error detection, loss of traceability of changes and misguided planning processes due to poor information situations. Ultimately, these issues can lead to problems which cause very high costs for the companies if they remain unaddressed [30, 51].
The need for tool support in the area of IT Landscape documentation is evident, and model engineering is well-suited to provide the required concepts. However, the existing MDE tool infrastructure is insufficient when it comes to satisfying the requirements of this domain. Existing solutions either do not scale with the number of elements in a real-world IT Landscape documentation, do not offer the necessary analysis capabilities, or lack the flexibility needed in long-term projects. Several state-of-the-art model repositories employ relational databases, even though the object-relational gap is well-known to cause additional complexity and performance overhead. Furthermore, the required commitment to a fixed schema across all entries impedes the ability to perform metamodel evolution processes without altering past revisions. In recent years, the NoSQL family of databases has expanded, and graph databases in particular are an excellent fit for storing model data [1, 4]. The central research question we focus on in this paper is how to combine domain-driven modeling concepts and technologies with the innovations from the graph database community in order to build a model repository which is suitable for IT Landscape documentation.
In this paper, we present a solution for storing, versioning and querying IT Landscape models called ChronoSphere. ChronoSphere is a novel open-source EMF model repository that addresses the needs of this domain, in particular scalable versioning, querying and persistence. It utilizes innovative database technology and is based on a modular architecture which allows individual elements to be used as standalone components outside the repository context. Even though ChronoSphere has been designed for the IT Landscape use case, the core implementation is domain independent and may also serve other use cases (see Sect. 9.4). In our inter-disciplinary efforts to realize this repository, we also contributed to the state-of-the-art in the database community, in particular in the area of versioned data storage and graph versioning. We evaluate our approach in an industrial case study in collaboration with Txture GmbH.1 This company employs our ChronoSphere implementation as the primary storage back-end in their commercial IT Landscape modeling tool.
The remainder of this paper is structured as follows. In Sect. 2, we first describe the IT Landscape use case in more detail. We then extract the specific requirements for our solution from this environment and discuss how they were derived from the industrial context. Section 3 provides a high-level overview of our approach. In Sects. 4 through 6, we discuss the details of our solution. In Sect. 7, we present the application of our repository in an industrial context. Section 8 evaluates the performance of ChronoSphere in comparison with other model repository solutions, which is followed by a feature-based comparison of related work in several different areas in Sect. 9. We conclude the paper with an outlook to future work in Sect. 10 and a summary in Sect. 11. Sections 4 through 6 consist of a revised, updated and extended version of the content presented in our previous work, mainly [25, 27, 28]. The remaining sections (most notably 2, 7 and 8) have never been published before.
2 Use case and requirement analysis
The overarching goal in IT Landscape documentation is to produce and maintain a model which reflects the current IT assets of a company and their relationships with each other. As these assets change over time, keeping this model up-to-date is a continuous task, rather than a one-time effort.
Fig. 1 The IT landscape environment
Table 1 Common data sources for IT landscape documentation
Databases: MySQL (www.mysql.com), PostgreSQL (www.postgresql.org), Microsoft SQL Server (www.microsoft.com/en-us/sql-server)
Virtualization platforms: VMware vCenter Server (www.vmware.com/products/vcenter-server), OpenStack (www.openstack.org), Red Hat Enterprise Virtualization (www.redhat.com/en/technologies/virtualization)
Enterprise architecture management tools: IteraPlan (www.iteraplan.de/en), MEGA (www.mega.com/en/product/enterprise-architecture), Sparx Systems Enterprise Architect (www.sparxsystems.eu)
Cloud Computation Platforms: Amazon EC2 (https://aws.amazon.com/ec2), Microsoft Azure (https://azure.microsoft.com/en-us), Google Cloud Compute Engine (https://cloud.google.com/compute)
Container orchestration mechanisms: Google Kubernetes (https://kubernetes.io), Docker Swarm (https://docs.docker.com/engine/swarm)
From a repository perspective, the use case of IT Landscape documentation is unique because it is both a database scenario (involving large datasets) and a design scenario where multiple users manually edit the model in a concurrent fashion (see Fig. 1). The amount and quality of information which is available in external data sources depends on the degree of automation and standardization in the company. For companies with a lower degree of automation, users will want to edit the model manually to keep it up-to-date. In companies that have a sophisticated automation chain in place, the majority of data can be imported into the repository without manual intervention. Typical data sources involved in such a scenario are listed in Table 1.
After gathering and consolidating the required information in a central repository, typical use cases are centered around analysis and reporting. A user usually starts a session with a query that finds all assets that match a list of criteria, such as "Search for all Virtual Machines which run a Linux Operating System" or "Search for the Cluster named 'Production 2' located in Vienna". Finding an asset based on its name is a particularly common starting query.
From the result of this initial global query, the user will often want to analyze this particular asset or group of assets. Common use cases involve impact analysis and root cause analysis. The central question in impact analysis is "What would be the impact on my applications if a given Physical Server fails?" and can be answered by a transitive dependency analysis starting from the Physical Server and resolving the path to the transitively connected applications (crossing the virtualization, clustering and load balancing layers in between). Root cause analysis is the inverse question: given an Application, the task is to find all Physical Servers on which the application transitively depends. This insight allows the search space to be reduced in case of an incident (ranging from performance problems to total application outage).
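For illustration, such a transitive root cause query can be phrased as a traversal over the property-graph representation that our repository uses internally (see Sect. 3). The sketch below uses the Apache TinkerPop Gremlin API on an in-memory TinkerGraph rather than ChronoSphere's own model-level query language; the element name, the type labels ("Application", "PhysicalServer") and the edge labels ("runsOn", "hostedOn") are hypothetical.

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;
import java.util.List;
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.out;

public class RootCauseAnalysisSketch {
    public static void main(String[] args) {
        // In-memory graph for illustration; in ChronoSphere the model is mapped to a property graph.
        GraphTraversalSource g = TinkerGraph.open().traversal();

        // Root cause analysis: from an application, follow dependency edges transitively
        // down to the physical servers on which it (indirectly) runs.
        List<Vertex> servers = g.V()
                .has("Application", "name", "Online Shop")   // hypothetical start element
                .repeat(out("runsOn", "hostedOn"))            // hypothetical edge labels
                .emit()                                       // also visit intermediate elements
                .hasLabel("PhysicalServer")                   // hypothetical type label
                .dedup()
                .toList();

        for (Vertex server : servers) {
            String name = server.value("name");
            System.out.println(name);
        }
    }
}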
Finally, analyzing the history of a single element, or of the model as a whole, is another important use case in IT Landscape management. For example, users are interested in the number of applications employed in their company over time. Version control becomes essential in such scenarios, because it allows queries over time to be formulated after the actual insertion of the data has happened (whereas for a statistic on a non-versioned store the query would have to be known in advance to track the relevant data at insertion time). Per-element history traces are also important, as they make it possible to identify who performed a certain change, which properties of the asset have been modified, and when the modification has occurred.
In the remainder of this section, we focus on the most important influences from the industrial context, how we derived requirements for our repository from them, and how these requirements are met by technical features. Figure 2 provides an overview of this process.
Fig. 2 Traceability matrix between context, requirements and features
2.1 Deriving requirements from the context
IT architectures and their terminology (e.g., the exact definition of general terms like Service or Application) vary by company. Therefore, the structure of the resulting IT Landscape models also differs. One solution to these Heterogeneous Architectures [C1] is to unify them under a common, fixed metamodel (e.g., ArchiMate [43]). However, this can lead to poor acceptance in practice due to its rigidity and the additional complexity introduced by its generality. From our past surveys and case studies [17, 18, 19, 20, 73], we inferred the requirement that the metamodel should be configurable by the user [R1]. The companies which utilize IT Landscape models the most are usually large companies [C2], or companies with a strong focus on IT (such as data centers). This entails that the corresponding models will grow to considerable sizes, and a repository must offer the necessary scalability [R5].
Documenting the IT Landscape of a company is a continuous effort. In the industrial context, we are therefore faced with long-running endeavors [C4] that span several years. In situations where responsible persons change and team members leave while new ones join, the ability to comprehend and reflect upon past decisions becomes crucial. Versioning the model content [R2] meets these demands, and also enables use cases that involve auditing, as well as use cases where versioning is required for legal compliance [C6]. The underlying requirement for these use cases is to not only store the version history, but also to analyze history traces [R8]. During a long-term documentation project, the metamodel sometimes also needs to be adapted [R3], for example due to technological shifts [C3] that introduce new types of assets. Examples for technological shifts include the introduction of virtualization technologies in data centers, and the advent of cloud computing. Another direct consequence of long-running projects is that the change history of individual model elements can grow to large sizes [R5] which must be considered in the technical realization of a repository.
In industrial contexts, several different stakeholders collaborate in documenting the IT Landscape. Depending on the scope of the project, stakeholders can involve a wide variety of roles, ranging from IT operations experts to database managers and enterprise architects [C5]. This entails that the repository must support concurrent access [R6] for multiple users. Another requirement that follows directly from concurrent access is that the structural consistency of the model contents must be ensured by the repository [R6, R9] (e.g., conformance to the metamodel and referential integrity). From the analysis perspective, concurrent access is also a threat to the consistency and reproducibility of query results, which is required for reliable model analysis [R9]. Apart from analyzing the current and past states of the model, IT Landscape models are also used to plan for future transformations [C7]. The general workflow involves the creation of "to-be" scenarios based on the current state which are then compared against each other in order to select the best candidate. In order to cope with such use cases, the repository must support branching [R4]. Branches allow the plans to be based on the actual model and to change it independently without affecting the as-is state. Since the comparison of two model states is an important part of planning (as well as a number of other use cases), the repository needs to be able to evaluate an analysis query on any model version and on any branch [R7] without altering the query.
2.2 Deriving features from requirements
From the set of requirements, we inferred the technical features which our model repository has to support. As we want our users to be able to provide their own metamodels, we employ the standard Eclipse Modeling Framework (EMF [70]) as our modeling language of choice [F1]. The fact that the data managed by our repository consists of a large and densely connected model which has to be put under version control led to the decision to employ a per-element versioning strategy [F2], as a coarse-grained whole-model versioning strategy would cause performance issues for such models.
Supporting a user-defined metamodel, element versioning and metamodel evolution at the same time is a challenging task. The combination of these requirements entails that our repository must support metamodel versioning, metamodel evolution and instance co-adaptation [F3]. From a technical perspective, it is also inadvisable to create a full copy of a model version each time a branch is created due to the potential size of the model. We therefore require a branching mechanism that is lightweight [F4] in that it reuses the data from the origin branch rather than copying it when a new branch is created. Since IT Landscape models can grow to large sizes and will potentially not fit into the main memory of the machine which runs our repository, we require a mechanism for dynamic on-demand loading and unloading of model elements [F5].
A technical feature which is crucial for efficient querying of the entire model is indexing [F6]. The primary index allows to locate a model element by its unique ID without linear iteration over all elements. Secondary indices can be defined by the user for a given metamodel and can be used to efficiently find all elements in the model where a property is set to a specified value (e.g., finding all servers where the name contains "Production"). In addition, indexing has to consider the versioned nature of our repository, as we want our indices to be usable for all queries, regardless of the chosen version or branch. In other words, even a query on a model version that is one year old should be able to utilize our indices. In order for queries to be executable on any branch and timestamp, we require a framework that allows for the creation of queries that are agnostic to the chosen branch and version [F9].
All queries and modifications in our repository are subject to concurrent access. We meet this requirement by providing full ACID [38] end-to-end transaction support in our repository [F8]. Finally, in order to support long histories, we implement a feature called Temporal Rollover which enables the archiving of historical entries [F7]. This feature allows for indefinite growth of element histories and will be explained in detail in later sections.
3 Solution overview
The overarching goal of our efforts is to provide a model repository that fulfills the requirements in Sect. 2. Our initial prototypes were based on standard technology, such as object-relational mappers and SQL databases. However, we soon realized that table-based representations were not an ideal fit for the structure of model data. The main issues we faced with these solutions were related to scalability and performance [R5]. The fact that most SQL databases require a fixed schema also proved to be very limiting when taking the requirement for metamodel evolution [R3] into consideration.
During our search for alternatives, we were inspired by approaches such as MORSA [54] and Neo4EMF [4]. We investigated various NoSQL storage solutions and eventually settled on graph databases. Graph databases do not require a fixed schema, they offer fast execution of navigational queries, and the bijective mapping between model data and graph data is both simpler and faster than object-relational mappings. However, existing graph databases on the market did not offer built-in versioning capabilities [R2]. Using a general-purpose graph database (e.g., Neo4j2 or Titan3) and managing the versioning process entirely on the application side has already been proven by various authors to be possible [8, 66, 67, 71]. However, such approaches greatly increase the complexity of the resulting graph structure as well as the complexity of the queries that operate on it. This reduces the maintainability, performance and scalability [R5] of such systems.
Fig. 3 ChronoSphere data management stack
After discovering this gap in both research and industrial solutions, we created a versioned graph database called ChronoGraph [27], which is the first graph database with versioning support that is compliant to the Apache TinkerPop standard. ChronoGraph is listed4 as an official TinkerPop implementation and available as an open-source project on GitHub.5 Since graph structures need to be transformed into a format that is compatible with the sequential nature of hard drives, we also required a versioned storage solution. Our explorative experiments with different back-ends of Titan DB demonstrated that key-value stores are a very suitable fit for storing graph data. We created ChronoDB [25], a versioned key-value store, to act as the storage backend for ChronoGraph. The full source code of this project is also available on GitHub.6 The resulting repository solution therefore consists of three layers, allowing for a coherent architecture and a clean separation of concerns.
Figure 3 shows the data management concepts of ChronoSphere. At the very top, in Fig. 3 Part A, we are working with EObjects and their corresponding EClasses, EPackages and other Ecore elements. It is important that the model and the metamodel are stored together. This will become a critical factor when dealing with metamodel evolution. This combined model needs to be persisted and versioned [R1–R4]. ChronoSphere maps it to a property graph representation [58] for this purpose. This representation is conceptually very close to the model form. Our model-to-graph mapping is inspired by Neo4EMF [4]. We will discuss related work in this area in more detail in Sect. 9.
The property graph management in Fig. 3 Part B is provided by ChronoGraph. In order to achieve a serial form for the model data that can be persisted to disk, ChronoGraph disassembles the property graph into individual Star Graphs, one for each vertex (i.e., node). A star graph is a sub-graph that is centered around one particular vertex. Figure 3 Part C shows the star graph of vertex v1. Creating star graphs for each vertex is a special kind of graph partitioning. When linking the star graphs again by replacing IDs by vertices, the original graph can be reconstructed from this partitioning. This reconstruction can occur fully or only partially, which makes this solution very suitable for lazy loading techniques [R5].
In the next step, we transform the star graph of each vertex into a binary string using the Kryo7 serializer, and pass the result to the underlying ChronoDB, our versioned Key-Value-Store. When the transaction is committed [R6], the commit timestamp is assigned to each pair of modified keys and corresponding binary values, creating time-key-value triples as shown in Fig. 3 Part D. ChronoDB then stores these triples in a Temporal Data Matrix (Fig. 3 Part E) which is implemented as a B\(^{+}\)-Tree [61]. Each row in this matrix represents the full history of a single element, each column represents a model revision, and each cell represents the data of one particular element for a given ID at a given timestamp. We will define and discuss this matrix structure in more detail in the following section.
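As an illustration of this serialization step, the following sketch turns a (heavily simplified) star graph into a binary value using the Kryo serializer, as it would then be stored as the value of a time-key-value triple. The StarGraph class shown here is only a stand-in for the actual star graph representation used by ChronoGraph.

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.HashMap;

public class StarGraphSerializationSketch {

    /** Simplified stand-in for the star graph of a single vertex. */
    public static class StarGraph {
        public String vertexId;
        public HashMap<String, Object> properties = new HashMap<>();
        // adjacent vertices are referenced by ID only, keyed by edge label
        public HashMap<String, ArrayList<String>> outgoingEdges = new HashMap<>();
    }

    public static void main(String[] args) {
        StarGraph star = new StarGraph();
        star.vertexId = "v1";
        star.properties.put("name", "Production Server 2");
        star.outgoingEdges.put("runsOn", new ArrayList<>());

        Kryo kryo = new Kryo();
        kryo.setRegistrationRequired(false); // keep the sketch simple

        // Serialize the star graph into the binary value of a time-key-value triple.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (Output output = new Output(buffer)) {
            kryo.writeObject(output, star);
        }
        byte[] binaryValue = buffer.toByteArray(); // stored under (commit timestamp, vertex ID)

        // Deserialization when the vertex is loaded on demand.
        try (Input input = new Input(new ByteArrayInputStream(binaryValue))) {
            StarGraph loaded = kryo.readObject(input, StarGraph.class);
            System.out.println(loaded.properties.get("name"));
        }
    }
}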
4 Solution part I: ChronoDB
ChronoDB [25] is a versioned key-value store and the bottom layer in our architecture. Its main responsibilities are persistence, versioning, branching and indexing. As all other components in our architecture rely on this store, we formalized its data structures and operations during the design phase.
4.1 Formal foundation
Salzberg and Tsotras identified three key query types which have to be supported by a data store in order to provide the full temporal feature set [62]. For versioning purposes, this set can be reused by restricting the features to timestamps instead of time ranges. This gives rise to the following three types of possible queries:
Pure-Timeslice Query Given a point in time (e.g., date and time), find all keys that existed at that time.
Range-Timeslice Query Given a set of keys and a point in time, find the value for each key which was valid at that time.
Pure-Key Query Given a set of keys, for each key find the values that comprise its history.
We use these three core query types as the functional requirements for our formalization approach. For practical reasons, we furthermore require that inserted entries never have to be modified again. In this way, we can achieve a true append-only store. In order to maintain the traceability of changes over time (e.g., for auditing purposes [R8]), we also require that the history of a key must never be altered, only appended.
The key concept behind our formalism is based on the observation that temporal information always adds an additional dimension to a dataset. A key-value format has only one dimension, which is the key. By adding temporal information, the two resulting dimensions are the key, and the time at which the value was inserted. Therefore, a matrix is a very natural fit for formalizing the versioning problem, offering the additional advantage of being easy to visualize. The remainder of this section consists of definitions which provide the formal semantics of our solution, interleaved with figures and (less formal) textual explanations.
Definition 1 (Temporal Data Matrix)
Let T be the set of all timestamps with \(T \subseteq \mathbb {N}\). Let \(\mathcal {S}\) denote the set of all non-empty strings and K be the set of all keys with \(K \subseteq \mathcal {S}\). Let \(\mathbb {B}\) define the set of all binary strings with \(\mathbb {B} \subseteq \{0,1\}^+ \cup \{null, \epsilon \}\). Let \(\epsilon \in \mathbb {B}\) be the empty binary string with \(\epsilon \ne null\). We define the Temporal Data Matrix\(\mathcal {D} \in \mathbb {B}^{\infty \times \infty }\) as:
$$\begin{aligned} \mathcal {D}: T \times K \rightarrow \mathbb {B} \end{aligned}$$
We define the initial value of a given Temporal Data Matrix D as:
$$\begin{aligned} D_{t,k} := \epsilon \qquad \forall t \in T, \forall k \in K \end{aligned}$$
In Definition 1, we define a Temporal Data Matrix, which is a key-value mapping enhanced with temporal information [R2, R3]. Note that the number of rows and columns in this matrix is infinite. In order to retrieve a value from this matrix, a key string and a timestamp are required. We refer to such a pair as a Temporal Key. The matrix can contain an array of binary values in every cell, which can be interpreted as the serialized representation of an arbitrary object. The formalism is therefore not restricted to any particular value type. The dedicated null value (which is different from all other bit-strings and also different from the \(\epsilon \) values used to initialize the matrix) will be used as a marker that indicates the deletion of an element later in Definition 3.
In order to guarantee the traceability of changes [R8], entries in the past must not be modified, and new entries may only be appended to the end of the history, not inserted at an arbitrary position. We use the notion of a dedicated now timestamp for this purpose.
Definition 2 (Now Operation)
Let D be a Temporal Data Matrix. We define the function \(now: \mathbb {B}^{\infty \times \infty } \rightarrow T\) as:
$$\begin{aligned} now(D) = max(\{t | k \in K, D_{t,k} \ne \epsilon \} \cup \{0\}) \end{aligned}$$
Definition 2 introduces the concept of the now timestamp, which is the largest (i.e., latest) timestamp at which data has been inserted into the store so far, initialized at zero for empty matrices. This particular timestamp will serve as a safeguard against temporal inconsistencies in several operations. We continue by defining the temporal counterparts of the put and get operations of a key-value store.
Definition 3 (Temporal Write Operation)
Let D be a Temporal Data Matrix. We define the function \(put: \mathbb {B}^{\infty \times \infty } \times T \times K \times \mathbb {B} \rightarrow \mathbb {B}^{\infty \times \infty }\) as:
$$\begin{aligned} put(D,t,k,v) = D' \end{aligned}$$
with \(v \ne \epsilon \), \(t > now(D)\) and
$$\begin{aligned} D'_{i,j} := {\left\{ \begin{array}{ll} v &{} \hbox {if } t = i \wedge k = j\\ D_{i,j} &{} \hbox {otherwise}\\ \end{array}\right. } \end{aligned}$$
The write operation put replaces a single entry in a Temporal Data Matrix by specifying the exact coordinates and a new value for that entry. All other entries remain the same as before. Please note that, while v must not be \(\epsilon \) in the context of a put operation (i.e., a cell cannot be "cleared"), v can be null to indicate a deletion of the key k from the matrix. Also, we require that an entry must not be overwritten. This is given implicitly by the fact that each put advances the result of now(D), and further insertions are only allowed after that timestamp. Furthermore, write operations are not permitted to modify the past in order to preserve consistency and traceability, which is also asserted by the condition on the now timestamp. This operation is limited in that it allows to modify only one key at a time. In the implementation, we generalize it to allow simultaneous insertions in several keys via transactions.
Definition 4 (Temporal Read Operation)
Let D be a Temporal Data Matrix. We define the function \(get: \mathbb {B}^{\infty \times \infty } \times T \times K \rightarrow \mathbb {B}\) as:
$$\begin{aligned} get(D,t,k) := {\left\{ \begin{array}{ll} D_{u,k} &{} \text{ if } u \ge 0 \wedge D_{u,k} \ne null\\ \epsilon &{} \text{ otherwise } \end{array}\right. } \end{aligned}$$
with \(t \le now(D)\) and
$$\begin{aligned} u := max(\{x | x \in T, x \le t, D_{x,k} \ne \epsilon \} \cup \{-1\}) \end{aligned}$$
The function get first attempts to return the value at the coordinates specified by the key and timestamp (\(u = t\)). If that position is empty, we scan for the entry in the same row with the highest timestamp and a non-empty value, considering only entries with lower timestamps than the request timestamp. In the formula, we have to add \(-~1\) to the set from which u is chosen to cover the case where there is no other entry in the row. If there is no such entry (i.e., \(u = -~1\)) or the entry is null, we return the empty binary string, otherwise we return the entry with the largest encountered timestamp.
This process is visualized in Fig. 4. In this figure, each row corresponds to a key, and each column to a timestamp. The depicted get operation is working on timestamp 5 and key 'd'. As \(D_{5,d}\) is empty, we attempt to find the largest timestamp smaller than 5 where the value for the key is not empty, i.e., we move left until we find a non-empty cell. We find the result in \(D_{1, d}\) and return v1. This is an important part of the versioning concept: a value for a given key is assumed to remain unchanged until a new value is assigned to it at a later timestamp. This allows any implementation to conserve memory on disk, as writes only occur if the value for a key has changed (i.e., no data duplication is required between identical revisions). Also note that we do not need to update existing entries when new key-value pairs are being inserted, which allows for pure append-only storage. In Fig. 4, the value v1 is valid for the key 'd' for all timestamps between 1 and 5 (inclusive). For timestamp 0, the key 'd' has value v0. Following this line of argumentation, we can generalize and state that a row in the matrix, identified by a key \(k \in K\), contains the history of k. This is formalized in Definition 5. A column, identified by a timestamp \(t \in T\), contains the state of all keys at that timestamp, with the additional consideration that value duplicates are not stored as they can be looked up in earlier timestamps. This is formalized in Definition 6.
Fig. 4 A get operation on a Temporal Data Matrix [25]
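To make the semantics of Definitions 1 through 4 concrete, the following minimal in-memory sketch realizes a Temporal Data Matrix with one sorted map per key. It is not the ChronoDB implementation (which stores the matrix in an on-disk B\(^{+}\)-Tree, see Sect. 4.2.1), but it mirrors the put and get semantics defined above.

import java.util.HashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Minimal in-memory sketch of Definitions 1-4; not the actual ChronoDB implementation. */
public class TemporalDataMatrixSketch {

    /** Dedicated deletion marker, standing in for the 'null' value of the formalism. */
    public static final byte[] DELETED = new byte[0];

    // One row per key; each row maps change timestamps to values (the non-empty cells).
    private final Map<String, NavigableMap<Long, byte[]>> rows = new HashMap<>();
    private long now = 0; // Definition 2: latest timestamp written so far

    /** Definition 3: writes are only allowed after now(D); the past is never modified. */
    public void put(long timestamp, String key, byte[] value) {
        if (timestamp <= this.now) {
            throw new IllegalArgumentException("Cannot write at or before now(D) = " + this.now);
        }
        this.rows.computeIfAbsent(key, k -> new TreeMap<>()).put(timestamp, value);
        this.now = timestamp;
    }

    /** Definition 4: returns the value written by the latest change at or before 'timestamp'. */
    public byte[] get(long timestamp, String key) {
        NavigableMap<Long, byte[]> row = this.rows.get(key);
        if (row == null) {
            return null; // key never written
        }
        Map.Entry<Long, byte[]> latest = row.floorEntry(timestamp); // scan "to the left"
        if (latest == null || latest.getValue() == DELETED) {
            return null; // no version yet at that time, or the key was deleted
        }
        return latest.getValue();
    }
}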
Definition 5 (History Operation)
Let D be a Temporal Data Matrix, and t be a timestamp with \(t \in T, t \le now(D)\). We define the function \(history: \mathbb {B}^{\infty \times \infty } \times T \times K \rightarrow 2^{T}\) as:
$$\begin{aligned} history(D,t,k) := \{x | x \in T, x \le t, D_{x,k}\ne \epsilon \} \end{aligned}$$
In Definition 5, we define the history of a key k up to a given timestamp t in a Temporal Data Matrix D as the set of timestamps less than or equal to t that have a non-empty entry for key k in D. Note that the resulting set will also include deletions, as null is a legal value for \(D_{x,k}\) in the formula. The result is the set of timestamps where the value for the given key changed. Consequently, performing a get operation for these timestamps with the same key will yield different results, producing the full history of the temporal key.
Definition 6 (Keyset Operation)
Let D be a Temporal Data Matrix, and t be a timestamp with \(t \in T, t \le now(D)\). We define the function \(keyset: \mathbb {B}^{\infty \times \infty } \times T \rightarrow 2^{K}\) as:
$$\begin{aligned} keyset(D,t) := \{x | x \in K, get(D,t,x)\ne \epsilon \} \end{aligned}$$
As shown in Definition 6, the keyset in a Temporal Data Matrix changes over time. We can retrieve the keyset at any desired time by providing the appropriate timestamp t. Note that this works for any timestamp in the past, in particular we do not require that a write operation has taken place precisely at t in order for the corresponding key(s) to be contained in the keyset. In other words, the precise column of t may consist only of \(\epsilon \) entries, but the key set operation will also consider earlier entries which are still valid at t. The version operation introduced in Definition 7 operates in a very similar way, but returns tuples containing keys and values, rather than just keys.
Definition 7 (Version Operation)
Let D be a Temporal Data Matrix, and t be a timestamp with \(t \in T, t \le now(D)\). We define the function \(version: \mathbb {B}^{\infty \times \infty } \times T \rightarrow 2^{K \times \mathbb {B}}\) as:
$$\begin{aligned} version(D,t) := \{\langle k,v\rangle | k \in keyset(D,t), v = get(D,t,k)\} \end{aligned}$$
Figure 5 illustrates the key set and version operations by example. In this scenario, the key set (or version) is requested at timestamp \(t = 5\). We scan each row for the latest non-\(\epsilon \) entry and add the corresponding key of the row to the key set, provided that a non-\(\epsilon \) right-most entry exists (i.e., the row is not empty) and is not null (the value was not removed). In this example, keyset(D, 5) would return \(\{a,c,d\}\), assuming that all non-depicted rows are empty. b and f are not in the key set, because their rows are empty (up to and including timestamp 5), and e is not in the set because its value was removed at timestamp 4. If we would request the key set at timestamp 3 instead, e would be in the key set. The operation version(D, 5) returns \(\{ \langle a,v0\rangle , \langle c, v2\rangle , \langle d, v4\rangle \}\) in the example depicted in Fig. 5. The key e is not represented in the version because it did not appear in the key set.
Fig. 5 Performing a keyset or version operation on a Temporal Data Matrix [25]
Table 2 Mapping capabilities to operations [25]
Pure-Timeslice query: equivalent to the keyset operation
Range-Timeslice query: one get operation per given key
Pure-Key query: one history operation per given key
With the given set of operations, we are able to answer all three kinds of temporal queries identified by Salzberg and Tsotras [62], as indicated in Table 2. Due to the restrictions imposed onto the put operation (see Definition 3), data cannot be inserted before the now timestamp (i.e., the history of an entry cannot be modified). Since the validity range of an entry is determined implicitly by the empty cells between changes, existing entries never need to be modified when new ones are being added. The formalization therefore fulfills all requirements stated at the beginning of this section.
4.2 Implementation
ChronoDB is our implementation of the concepts presented in the previous section. It is a fully ACID compliant, process-embedded, temporal key-value store written in Java. The intended use of ChronoDB is to act as the storage backend for a graph database, which is the main driver behind numerous design and optimization choices. The full source code is freely available on GitHub under an open-source license.
4.2.1 Implementing the matrix
As the formal foundation includes the concept of a matrix with infinite dimensions, a direct implementation is not feasible. However, a Temporal Data Matrix is typically very sparse. Instead of storing a rigid, infinite matrix structure, we focus exclusively on the non-empty entries and expand the underlying data structure as more entries are being added.
There are various approaches for storing versioned data on disk [15, 46, 50]. We reuse existing, well-known and well-tested technology for our prototype instead of designing custom disk-level data structures. The temporal store is based on a regular B\(^{+}\)-Tree [61]. We make use of the implementation of B\(^{+}\)-Trees provided by the TUPL8 library. In order to form an actual index key from a Temporal Key, we concatenate the actual key string with the timestamp (left-padded with zeros to achieve equal length), separated by an '@' character. Using the standard lexicographic ordering of strings, we receive an ordering as shown in Table 3. This implies that our B\(^{+}\)-Tree is ordered first by key, and then by timestamp. The advantage of this approach is that we can quickly determine the value of a given key for a given timestamp (i.e., get is reasonably fast), but the keyset (see Definition 6) is more expensive to compute.
Table 3 Ascending Temporal Key ordering by example [25]
Temporal key a@0123 (key string a, timestamp 123)
Temporal key a@1000 (key string a, timestamp 1000)
Temporal key aa@0100 (key string aa, timestamp 100)
Temporal key b@0001 (key string b, timestamp 1)
Temporal key ba@0001 (key string ba, timestamp 1)
The put operation appends the timestamp to the user key and then performs a regular B\(^{+}\)-Tree insertion. The temporal get operation requires retrieving the next lower entry with the given key and timestamp.
This is similar to regular B\(^{+}\)-Tree search, except that the acceptance criterion for the search in the leaf nodes is "less than or equal to" instead of "equal to", provided that nodes are checked in descending key order. TUPL natively supports this functionality. After finding the next lower entry, we need to apply a post-processing step in order to ensure correctness of the get operation. Using Table 3 as an example, if we requested aa@0050 (which is not contained in the data), searching for the next-lower key produces a@1000. The key string in this temporal key (a) is different from the one which was requested (aa). In this case, we can conclude that the key aa did not exist up to the requested timestamp (50), and we return null instead of the retrieved result.
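The following sketch illustrates the temporal key layout and the "next lower entry" search including the post-processing step described above. A TreeMap stands in for the TUPL-based B\(^{+}\)-Tree, and the padding width is chosen here purely for illustration so that lexicographic order coincides with numeric timestamp order.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

/** Sketch of the temporal key layout and the "next lower entry" search. */
public class TemporalKeySketch {

    /** Builds keys of the form "a@0000000000000000123": ordered by key string, then timestamp. */
    public static String temporalKey(String userKey, long timestamp) {
        return userKey + "@" + String.format("%019d", timestamp); // pad to the max digits of a long
    }

    /** Temporal get: find the next lower entry, then verify that it belongs to the requested key. */
    public static byte[] temporalGet(NavigableMap<String, byte[]> tree, String userKey, long timestamp) {
        Map.Entry<String, byte[]> entry = tree.floorEntry(temporalKey(userKey, timestamp));
        if (entry == null || !entry.getKey().startsWith(userKey + "@")) {
            // We landed on a different key string (e.g. "a@..." when "aa@..." was requested),
            // so the requested key did not exist up to the requested timestamp.
            return null;
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        NavigableMap<String, byte[]> tree = new TreeMap<>();
        tree.put(temporalKey("a", 123), "a-v1".getBytes());
        tree.put(temporalKey("aa", 100), "aa-v1".getBytes());
        System.out.println(temporalGet(tree, "aa", 50));              // null: "aa" did not exist yet
        System.out.println(new String(temporalGet(tree, "aa", 200))); // "aa-v1"
    }
}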
Due to the way we set up the B\(^{+}\)-Tree, adding a new revision to a key (or adding an entirely new key) has the same runtime complexity as inserting an entry into a regular B\(^{+}\)-Tree. Temporal search also has the same complexity as regular B-Tree search, which is \(\mathcal {O}(\hbox {log}(n))\), where n is the number of entries in the tree. From the formal foundations onward, we assert by construction that our implementation will scale equally well when faced with one key and many versions, many keys with one revision each, or any distribution in between [R5]. An important property of our data structure setup is that, regardless of the versions-per-key distribution, the data structure never degenerates into a list, maintaining an access complexity of \(\mathcal {O}(\hbox {log}(n))\) by means of regular B\(^{+}\)-Tree balancing without any need for additional algorithms.
Fig. 6 Lightweight branching concept
4.2.2 Branching
Figure 6 shows how the branching mechanism works in ChronoDB [R4]. Based on our matrix formalization, we can create branches of our history at arbitrary timestamps. To do so, we generate a new, empty matrix that will hold all changes applied to the branch it represents. We would like to emphasize that existing entries are not duplicated. We therefore create lightweight branches. When a get request arrives at the first column of a branch matrix during the search, we redirect the request to the matrix of the parent branch, at the branching timestamp, and continue from there. In this way, the data from the original branch (up to the branching timestamp) is still fully accessible in the child branch.
Fig. 7 Querying data stored in branches
For example, as depicted in Fig. 7, if we want to answer a get request for key c on branch branchA and timestamp 4, we scan the row with key c to the left, starting at column 4. We find no entry, so we redirect the call to the origin branch (which in this case is master), at timestamp 3. Here, we continue left and find the value \(c_{1}\) on timestamp 1. Indeed, at timestamp 4 and branch branchA, \(c_{1}\) is still valid. However, if we issue the same original query on master, we would get \(c_{4}\) as our result. This approach to branching can also be employed recursively in a nested fashion, i.e., branches can in turn have sub-branches. The primary drawback of this solution is related to the recursive "backstepping" to the origin branch during queries. For deeply nested branches, this process will introduce a considerable performance overhead, as multiple B\(^{+}\)-Trees (one per branch) need to be opened and queried in order to answer this request. This happens more often for branches which are very thinly populated with changes, as this increases the chances of our get request scan ending up at the initial column of the matrix without encountering an occupied cell. The operation which is affected most by branching with respect to performance is the keySet operation (and all other operations that rely on it), as this requires a scan on every row, leading to potentially many backstepping calls.
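The backstepping mechanism can be sketched as follows, building on the TemporalDataMatrixSketch shown earlier; the class and field names are illustrative and not part of the actual ChronoDB API.

/** Sketch of the "backstepping" lookup for lightweight branches. */
public class BranchSketch {

    private final BranchSketch origin;        // null for the master branch
    private final long branchingTimestamp;    // timestamp at which this branch was created
    private final TemporalDataMatrixSketch matrix = new TemporalDataMatrixSketch();

    public BranchSketch(BranchSketch origin, long branchingTimestamp) {
        this.origin = origin;
        this.branchingTimestamp = branchingTimestamp;
    }

    public TemporalDataMatrixSketch getMatrix() {
        return this.matrix;
    }

    /** A read first scans this branch's own matrix; if nothing is found, it is redirected
     *  to the origin branch, capped at the branching timestamp. */
    public byte[] get(long timestamp, String key) {
        byte[] value = this.matrix.get(timestamp, key);
        if (value != null) {
            return value;
        }
        if (this.origin == null) {
            return null; // reached the master branch without finding an entry
        }
        // Note: a complete implementation must also distinguish "no entry in this branch"
        // from an explicit deletion recorded in this branch, which must not be overridden.
        return this.origin.get(Math.min(timestamp, this.branchingTimestamp), key);
    }
}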
4.2.3 Caching
A disk access is always slow compared to an in-memory operation, even on a modern solid state drive (SSD). For that reason, nearly all database systems include some way of caching the most recent query results in main memory for later reuse. ChronoDB is no exception, but the temporal aspects demand a different approach to the caching algorithm than in regular database systems, because multiple transactions can simultaneously query the state of the stored data at different timestamps. Due to the way we constructed the Temporal Data Matrix, the chance that a given key does not change at every timestamp is very high. Therefore, we can potentially serve queries at many different timestamps from the same cached information by exploiting the periods in which a given key does not change its value. For the caching algorithm, we apply some of the ideas found in the work of Ramaswamy [57] in a slightly different way, adapted to in-memory processing and caching idioms.
Fig. 8 Temporal caching principle
Figure 8 displays an example for our temporal caching approach which we call Mosaic. When the value for a temporal key is requested and a cache miss occurs, we retrieve the value together with the validity range (indicated by gray background in the figure) from the persistent store, and add the range together with its value to the cache. Validity ranges start at the timestamp in which a key-value pair was modified (inclusive) and end at the timestamp where the next modification on that pair occurred (exclusive). For each key, the cache manages a list of time ranges called a cache row, and each range is associated with the value for the key in this period. As these periods never overlap, we can sort them in descending order for faster search, assuming that more recent entries are used more frequently. A cache look-up is performed by first identifying the row by the key string, followed by a linear search through the cached periods.9 We have a cache hit if a period containing the requested timestamp is found. When data is written to the underlying store, we need to perform a write-through in our cache, because validity ranges that have open-ended upper bounds potentially need to be shortened due to the insertion of a new value for a given key. The write-through operation is fast, because it only needs to check if the first validity range in the cache row of a given key is open-ended, as all other entries are always closed ranges. All entries in our cache (regardless of the row they belong to) share a common least recently used registry which allows for fast cache eviction of the least recently read entries.
In the example shown in Fig. 8, retrieving the value of key d at timestamp 0 would result in adding the validity range [0; 1) with value v0 to the cache row. This is the worst-case scenario, as the validity range only contains a single timestamp, and can consequently be used to answer queries only on that particular timestamp. Retrieving the same key at timestamps 1 through 4 yields a cache entry with a validity range of [1; 5) and value v1. All requests on key d from timestamp 1 through 4 can be answered by this cache entry. Finally, retrieving key d on a timestamp greater than or equal to 5 produces an open-ended validity period of \([5;\infty )\) with value v2, which can answer all requests on key d with a timestamp larger than 4, assuming that non-depicted columns are empty. If we would insert a key-value pair of \(\langle d, v3\rangle \) at timestamp 10, the write-through operation would need to shorten the last validity period to [5; 10) and add a cache entry containing the period \([10;\infty )\) with value v3.
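A single cache row of this Mosaic cache can be sketched as follows; the class and method names are illustrative. The sketch shows the linear scan over the descending validity ranges and the write-through step that shortens an open-ended range.

import java.util.ArrayList;
import java.util.List;

/** Sketch of one row of the "Mosaic" cache. */
public class CacheRowSketch {

    /** A cached value, valid from 'from' (inclusive) to 'to' (exclusive). */
    static final class Range {
        final long from;
        long to;            // Long.MAX_VALUE marks an open-ended validity range
        final Object value;
        Range(long from, long to, Object value) {
            this.from = from;
            this.to = to;
            this.value = value;
        }
    }

    // Validity ranges of this key, disjoint and sorted by 'from' in descending order.
    private final List<Range> ranges = new ArrayList<>();

    /** Called after a cache miss: stores the loaded value together with its validity range. */
    public void put(long from, long to, Object value) {
        this.ranges.add(new Range(from, to, value));
        this.ranges.sort((a, b) -> Long.compare(b.from, a.from));
    }

    /** Cache look-up: linear scan for the range that contains the requested timestamp. */
    public Object get(long timestamp) {
        for (Range r : this.ranges) {
            if (r.from <= timestamp && timestamp < r.to) {
                return r.value;                    // cache hit
            }
            if (r.to <= timestamp) {
                break;                             // all remaining ranges are older; miss
            }
        }
        return null;                               // cache miss
    }

    /** Write-through for a new version at 'timestamp': shorten the open-ended range, if any. */
    public void writeThrough(long timestamp) {
        if (!this.ranges.isEmpty() && this.ranges.get(0).to == Long.MAX_VALUE) {
            this.ranges.get(0).to = timestamp;
        }
    }
}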
4.2.4 Incremental commits
Database vendors often provide specialized ways to batch-insert large amounts of data into their databases that allow for higher performance than the usage of regular transactions. ChronoDB provides a similar mechanism, with the additional challenge of keeping versioning considerations in mind along the way. Even when inserting large amounts of data into ChronoDB, we want the history to remain clean, i.e., it should not contain intermediate states where only a portion of the overall data was inserted. We therefore need to find a way to conserve RAM by writing incoming data to disk while maintaining a clean history. For this purpose, the concept of incremental commits was introduced in ChronoDB. This mechanism allows to mass-insert (or mass-update) data in ChronoDB by splitting it up into smaller batches while maintaining a clean history and all ACID properties for the executing transaction.
Fig. 9 Incremental commits
Figure 9 shows how incremental commits work in ChronoDB. The process starts with a regular transaction inserting data into the database before calling commitIncremental(). This writes the first batch (timestamp 2 in Fig. 9) into the database and releases it from RAM. However, the now timestamp is not advanced yet. We do not allow other transactions to read these new entries, because there is still data left to insert. We proceed with the next batches of data, calling commitIncremental() after each one. After the last batch was inserted, we conclude the process with a call to commit(). This will merge all of our changes into one timestamp on disk. In this process, the last change to a single key is the one we keep. In the end, the timestamps between the first initial incremental commit (exclusive) to the timestamp of the final commit (inclusive) will have no changes (as shown in timestamps 3 and 4 in Fig. 9). With the final commit, we also advance the now timestamp of the matrix and allow all other transactions to access the newly inserted data. By delaying this step until the end of our operation, we keep the possibility to roll back our changes on disk (for example in case that the process fails) without violating the ACID properties for all other transactions. Also, if data generated by a partially complete incremental commit process is present on disk at database start-up (which occurs when the database is unexpectedly shut down during an incremental commit process), these changes can be rolled back as well, which allows incremental commit processes to have "all or nothing" semantics.
A disadvantage of this solution is that there can be only one concurrent incremental commit process on any data matrix. This process requires exclusive write access to the matrix, blocking all other (regular and incremental) commits until it is complete. However, since we only modify the head revisions and now does not change until the process ends, we can safely perform read operations in concurrent transactions, while an incremental commit process is taking place. Overall, incremental commits offer a way to insert large quantities of data into a single timestamp while conserving RAM without compromising ACID safety at the cost of requiring exclusive write access to the database for the entire duration of the process. These properties make them very suitable for data imports from external sources, or large scale changes that affect most of the key-value pairs stored in a matrix. This will become an important factor when we consider global model evolutions in the model repository layer [R3]. We envision incremental commits to be employed for administrative tasks which do not recur regularly, or for the initial filling of an empty database.
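A typical mass-insert using this mechanism could look as follows. Only the commitIncremental() and commit() calls reflect the process described above; the surrounding API (db.tx(), tx.put(...), tx.rollback()) is assumed here for illustration and may differ from the actual ChronoDB interface.

// Hedged sketch of a mass-insert via incremental commits.
void massInsert(ChronoDB db, List<Map<String, Object>> batches) {
    ChronoDBTransaction tx = db.tx();          // assumed API for opening a transaction
    try {
        for (Map<String, Object> batch : batches) {
            for (Map.Entry<String, Object> entry : batch.entrySet()) {
                tx.put(entry.getKey(), entry.getValue());
            }
            tx.commitIncremental();            // batch is written to disk, but 'now' is not advanced
        }
        tx.commit();                           // all batches become visible as a single new version
    } catch (RuntimeException e) {
        tx.rollback();                         // "all or nothing": partially written data is discarded
        throw e;
    }
}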
4.2.5 Supporting long histories
In order to create a sustainable versioning mechanism, we need to ensure that our system can support a virtually unlimited number of versions [R2, R5]. Ideally, we also should not store all data in a single file, and old files should remain untouched when new data is inserted (which is important for file-based backups). For these reasons, we must not constrain our solution to a single B-Tree. The fact that past revisions are immutable in our approach led to the decision to split the data along the time axis, resulting in a series of B-Trees. Each tree is contained in one file, which we refer to as a chunk file. An accompanying meta file specifies the time range which is covered by the chunk file. The usual policy of ChronoDB is to maximize sharing of unchanged data as much as possible. Here, we deliberately introduce data duplication in order to ensure that the initial version in each chunk is complete. This allows us to answer get queries within the boundaries of a single chunk, without having to navigate to the previous one. As each access to another chunk has CPU and I/O overhead, we should avoid accesses on more than one chunk to answer a basic query. Without duplication, accessing a key that has not changed for a long time could potentially lead to a linear search through the chunk chain which contradicts the requirement for scalability [R5].
Fig. 10 The temporal rollover process by example [26]
The algorithm for the "rollover" procedure outlined in Fig. 10 works as follows.
In Line 1 of Algorithm 1, we fetch the latest timestamp where a commit has occurred in our current head revision chunk. Next, we calculate the full head version of the data in Line 2. With the preparation steps complete, we set the end of the validity time range to the last commit timestamp in Line 3. This only affects the metadata, not the chunk itself. We now create a new, empty chunk in Line 4, and set the start of its validity range to the split timestamp plus one (as chunk validity ranges must not overlap). The upper bound of the new validity range is infinity. In Line 5 we copy the head version of the data into the new chunk. Finally, we update our internal look-up table in Line 6. This entire procedure only modifies the last chunk and does not touch older chunks, as indicated by the grayed-out boxes in Fig. 10.
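The six steps of Algorithm 1 can be summarized in code as follows; the Chunk and ChunkManager types and all method names are placeholders for illustration rather than actual ChronoDB code.

// Sketch of the rollover procedure (Algorithm 1), mirroring the six steps described above.
void rollover(Chunk headChunk, ChunkManager chunkManager) {
    long lastCommit = headChunk.getLatestCommitTimestamp();                    // Line 1
    Map<String, byte[]> headVersion = headChunk.getHeadVersion(lastCommit);    // Line 2: full head version
    headChunk.getMetaData().setValidTo(lastCommit);                            // Line 3: close validity range (metadata only)
    Chunk newChunk = chunkManager.createChunk(lastCommit + 1, Long.MAX_VALUE); // Line 4: new empty chunk, valid to infinity
    newChunk.writeInitialVersion(headVersion);                                 // Line 5: duplicate the head version
    chunkManager.getTimeRangeLookup().put(lastCommit + 1, newChunk);           // Line 6: register in the look-up table
}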
The look-up table that is being updated in Algorithm 1 is a basic tree map which is created at start-up by reading the metadata files. For each encountered chunk, it contains an entry that maps its validity period to its chunk file. The periods are sorted in ascending order by their lower bounds, which is sufficient because overlaps in the validity ranges are not permitted. For example, after the rollover depicted in Fig. 10, the time range look-up would contain the entries shown in Table 4.
Table 4 Time range look-up [26]
Time range \([0 \ldots 300]\): chunk number 0
Time range \([301 \ldots 1000]\): chunk number 1
Time range \([1001 \ldots \infty ]\): chunk number 2
In our implementation, we specifically employ a tree map for the look-up shown in Table 4, because the purpose of this look-up is to quickly identify the correct chunk to address for an incoming request. Incoming requests have a timestamp attached, and this timestamp may occur exactly at a split, or anywhere between split timestamps. As this process is triggered very often in practice and the time range look-up map may grow quite large over time, we are considering implementing a cache based on the least-recently-used principle that contains the concrete resolved timestamp-to-chunk mappings in order to cover the common case where one particular timestamp is requested more than once in quick succession.
With this algorithm, we can support a virtually unlimited number of versions [R6] because new chunks always only contain the head revision of the previous ones, and we are always free to roll over once more should the history within the chunk become too large. We furthermore do not perform writes on old chunk files anymore, because our history is immutable. Regardless, thanks to our time range look-up, we have close to \(O(\log {}n)\) access complexity to any chunk, where n is the number of chunks.
This algorithm is a trade-off between disk space and scalability. We introduce data duplication on disk in order to provide support for large histories. The key question that remains is when this process happens. We require a metric that indicates the amount of data in the current chunk that belongs to the history (as opposed to the head revision) and thus can be archived if necessary by performing a rollover. We introduce the Head–History–Ratio (HHR) as the primary metric for this task, which we define as follows:
$$\begin{aligned} HHR(e, h) = {\left\{ \begin{array}{ll} e, &{} \text {if } e = h\\ \frac{h}{e-h}, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
...where e is the total number of entries in the chunk, and h is the size of the subset of entries that belong to the head revision (excluding entries that represent deletions). By dividing the number of entries in the head revision by the number of entries that belong to the history, we get a proportional notion of how much history is contained in the chunk that works for datasets of any size. It expresses how many entries we will "archive" when a rollover is executed. When new commits add new elements to the head revision, this value increases. When a commit updates existing elements in the head revision or deletes them, this value decreases. We can employ a threshold as a lower bound on this value to determine when a rollover is necessary. For example, we may choose to perform a rollover when a chunk has an HHR value of 0.2 or less. This threshold will work independently of the absolute size of the head revision. The only case where the HHR threshold is never reached is when exclusively new (i.e., never seen before) keys are added, steadily increasing the size of the head revision. However, in this case, we would not gain anything by performing a rollover, as we would have to duplicate all of those entries into the new chunk to produce a complete initial version. Therefore, the HHR metric is properly capturing this case by never reaching the threshold, thus never indicating the need for a rollover.
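The metric and its threshold check translate directly into code, as shown in this small self-contained sketch:

/** Head-History-Ratio (HHR) as defined above; e is the total number of entries in the
 *  chunk, h the number of entries belonging to the head revision (excluding deletions). */
public final class HhrSketch {

    public static double headHistoryRatio(long e, long h) {
        if (e == h) {
            return e;                            // no history in the chunk yet
        }
        return (double) h / (double) (e - h);
    }

    /** A rollover is triggered once the ratio falls to or below a chosen threshold, e.g. 0.2. */
    public static boolean needsRollover(long e, long h, double threshold) {
        return headHistoryRatio(e, h) <= threshold;
    }
}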
4.2.6 Secondary indexing
There are two kinds of secondary indices in ChronoDB. On the one hand, there are indices which are managed by ChronoDB itself ("system indices") and on the other hand there are user-defined indices. As indicated in Table 3, the primary index for each matrix in ChronoDB has its keys ordered first by user key and then by version. In order to allow for efficient time range queries, we maintain a secondary index that is first ordered by timestamp and then by user key. Further system indices include an index for commit metadata (e.g., commit messages) that maps from timestamp to metadata, as well as auxiliary indices for branching (branch name to metadata). User-defined indices [R5] help to speed up queries that request entries based on their contents (rather than their primary key). An example for such a query is find all persons where the first name is 'Eva'. Since ChronoDB stores arbitrary Java objects, we require a method to extract the desired property value to index from the object. This is accomplished by defining a ChronoIndexer interface. It defines the index(Object) method that, given an input object, returns the value that should be put on the secondary index. Each indexer is associated with a name. That name is later used in a query to refer to this index. The associated query language provides support for a number of string matching techniques (equals, contains, starts with, regular expression...), numeric matching (greater than, less than or equal to...) as well as Boolean operators (and, or, not). The query engine also performs optimizations such as double negation elimination. Overall, this query language is certainly less expressive than other languages such as SQL. Since ChronoDB is intended to be used as a storage engine and embedded in a database frontend (e.g., a graph database), these queries will only be used internally for index scans while more sophisticated expressions are managed by the database frontend. Therefore, this minimalistic Java-embedded DSL has proven to be sufficient. An essential drawback of this query mechanism is that the number of properties available for querying is determined by the available secondary indices. In other words, if there is no secondary index for a property, that property cannot be used for filtering. This is due to ChronoDB being agnostic to the Java objects it is storing. In absence of a ChronoIndexer, it has no way of extracting a value for an arbitrary request property from the object. This is a common approach in database systems: without a matching secondary index, queries require a linear scan of the entire data store. When using a database frontend, this distinction is blurred, and the difference between an index query and a non-index query is only noticeable in how long it takes to produce the result set.
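As an example, an indexer for the query "find all persons where the first name is 'Eva'" could look as follows. The index(Object) method follows the ChronoIndexer contract described above; the exact interface signature, the Person class, and the registration and query calls shown in the trailing comment are assumptions for illustration.

/** Sketch of a user-defined indexer for a "firstName" secondary index. */
public class FirstNameIndexer implements ChronoIndexer {

    @Override
    public String index(Object object) {
        if (object instanceof Person) {
            return ((Person) object).getFirstName();  // value placed on the secondary index
        }
        return null;                                  // object is not covered by this index
    }
}

// Possible registration under the index name "firstName" and a query using it (illustrative):
//   indexManager.addIndexer("firstName", new FirstNameIndexer());
//   tx.find().where("firstName").isEqualTo("Eva");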
In contrast to the primary index, entries in the secondary index are allowed to have non-unique keys. For example, if we index the "name" attribute, then there may be more than one entry where the name is set to "John". We therefore require a different approach than the temporal data matrices employed for the primary index. Inspired by the work of Ramaswamy et al. [57], we make use of explicit time windows. Non-unique indices in versioned contexts are special cases of the general interval stabbing problem [31].
Table 5 Secondary indexing in ChronoDB: each row holds the index name, branch, keyspace, key, indexed value, and the validity range from (inclusive) to (exclusive), where \(\infty \) denotes an unbounded upper limit. The example rows are discussed below.
Table 5 shows an example of a secondary index. As such a table can hold all entries for all indices, we store the index for a particular entry in the "index" column. The branch, keyspace and key columns describe the location of the entry in the primary index. The "value" column contains the value that was extracted by the ChronoIndexer. "From" and "To" express the time window in which a given row is valid. Any entry that is newly inserted into this table initially has its "To" value set to infinity (i.e., it is valid for an unlimited amount of time). When the corresponding entry in the primary index changes, the "To" value is updated accordingly. All other columns are effectively immutable.
In the concrete example shown in Table 5, we insert three key-value pairs (with keys e1, e2 and e3) at timestamp 1234. Our indexer extracts the value for the "name" index, which is "john" for all three values. The "To" column is set to infinity for all three entries. Querying the secondary index at that timestamp for all entries where "name" is equal to "john" would therefore return the set containing e1, e2 and e3. At timestamp 5678, we update the value associated with key e2 such that the indexer now yields the value "jack". We therefore need to terminate the previous entry (row #2) by setting the "To" value to 5678 (upper bounds are exclusive), and inserting a new entry that starts at 5678, has the value "jack" and an initial "To" value of infinity. Finally, we delete the key e3 in our primary index at timestamp 7890. In our secondary index, this means that we have to limit the "To" value of row #3 to 7890. Since we have no new value due to the deletion, no additional entries need to be added.
This tabular structure can now be queried using well-known techniques also employed by SQL. For usual queries, the branch and index are fixed, the value is specified as a search string together with a condition (e.g., "starts with [jo]"), and we know the timestamp for which the query should be evaluated. We process the timestamp by searching only for entries where
$$\begin{aligned} From \le timestamp < To \end{aligned}$$
in addition to the conditions specified for the other columns. Selecting only the documents for a given branch is more challenging, as we need to traverse the origin branches upwards until we arrive at the master branch, performing one subquery for each branch along the way and merging the intermediate results accordingly.
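The half-open validity interval (inclusive "from", exclusive "to") can be captured in a simple predicate. The row representation below is a hypothetical simplification; only the interval semantics are taken from the description above.

// Minimal sketch of the temporal validity check on secondary index rows.
// The row representation is hypothetical; only the half-open interval
// semantics (inclusive "from", exclusive "to") follow the text above.
public final class SecondaryIndexRow {
    final String index, branch, keyspace, key, value;
    final long validFrom;   // inclusive
    final long validTo;     // exclusive; Long.MAX_VALUE models "infinity"

    SecondaryIndexRow(String index, String branch, String keyspace,
                      String key, String value, long validFrom, long validTo) {
        this.index = index; this.branch = branch; this.keyspace = keyspace;
        this.key = key; this.value = value;
        this.validFrom = validFrom; this.validTo = validTo;
    }

    /** True if this row is visible at the given request timestamp. */
    boolean isVisibleAt(long timestamp) {
        return validFrom <= timestamp && timestamp < validTo;
    }
}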
Fig. 11 Transaction control with and without versioning [25]
4.2.7 Transaction control
Consistency and reliability are two major goals in ChronoDB. It offers full ACID transactions with the highest possible read isolation level (serializable, see [38]). Figure 11 shows an example with two sequence diagrams with identical transaction schedules. A database server is communicating with an Online Analytics Processing (OLAP [10]) client that owns a long-running transaction (indicated by gray bars). The process involves messages (arrows) sending queries with timestamps and computation times (blocks labeled with "c") on both machines. A regular Online Transaction Processing (OLTP) client wants to make changes to the data which is analyzed by the OLAP client. The left figure shows what happens in a non-versioned scenario with pessimistic locking. The server needs to lock the relevant contents of the database for the entire duration of the OLAP transaction, otherwise we risk inconsistencies due to the incoming OLTP update. We need to delay the OLTP client until the OLAP client closes the transaction. Modern databases use optimistic locking and data duplication techniques (e.g., MVCC [6]) to mitigate this issue, but the core problem remains: the server needs to dedicate resources (e.g., locks, RAM...) to client transactions over their entire lifetime. With versioning, the OLAP client sends the query plus the request timestamp to the server. This is a self-contained request; no additional information or resources are needed on the server, and yet the OLAP client achieves full isolation over the entire duration of the transaction, because it always requests the same timestamp. While the OLAP client is processing the results, the server can safely allow the modifications of the OLTP client, because it is guaranteed that any modification will only append a new version to the history. The data at the timestamp on which the OLAP client is working is immutable. Client-side transactions act as containers for transient change sets and metadata, most notably the timestamp and branch name on which the transaction is working. Security considerations aside, transactions can be created (and disposed) without involving the server. An important problem that remains is how to handle situations in which two concurrent OLTP transactions attempt to change the same key-value pair. ChronoDB allows the user to select from several conflict handling modes (e.g., reject, last writer wins) or to provide a custom conflict resolver implementation.
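The following self-contained Java sketch illustrates why append-only versioning yields this isolation guarantee. It is a toy stand-in with invented names, not the ChronoDB API: reads are pinned to a timestamp, while commits only append entries with higher timestamps.

import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy illustration of snapshot isolation through append-only versioning.
 * This is a simplified stand-in, not the ChronoDB implementation.
 */
public class SnapshotIsolationDemo {
    // key -> (timestamp -> value); new commits only append higher timestamps.
    private final Map<String, NavigableMap<Long, Object>> matrix = new ConcurrentHashMap<>();

    synchronized void commit(long timestamp, String key, Object value) {
        matrix.computeIfAbsent(key, k -> new TreeMap<>()).put(timestamp, value);
    }

    /** Read as of a fixed timestamp: the latest version at or before it. */
    Object get(long timestamp, String key) {
        NavigableMap<Long, Object> history = matrix.getOrDefault(key, new TreeMap<>());
        Map.Entry<Long, Object> entry = history.floorEntry(timestamp);
        return entry == null ? null : entry.getValue();
    }

    public static void main(String[] args) {
        SnapshotIsolationDemo store = new SnapshotIsolationDemo();
        store.commit(100, "revenue", 10);

        long olapTimestamp = 100;                        // OLAP transaction is pinned to this timestamp
        Object before = store.get(olapTimestamp, "revenue");

        store.commit(200, "revenue", 42);                // concurrent OLTP commit appends a new version

        Object after = store.get(olapTimestamp, "revenue");
        assert before.equals(after);                     // the OLAP client still sees its immutable snapshot
    }
}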
5 Solution part II: ChronoGraph
ChronoGraph is our versioned graph database which is built on top of ChronoDB. ChronoGraph implements the Apache TinkerPop standard, the de-facto standard interface for graph databases. We first provide a high-level overview of TinkerPop; then, we focus on the concepts of ChronoGraph itself.
Fig. 12 ChronoGraph architecture [27]
5.1 Apache TinkerPop
The TinkerPop framework is the de-facto standard interface between applications and graph databases. Its main purpose is to allow application developers to exchange a graph database implementation with another one without altering the application source code that accesses the database. The TinkerPop standard is designed in a modular fashion. The core module is the property graph API [58] which specifies the Java interfaces for vertices, edges, properties and other structural elements.
In a property graph, each vertex and edge can have properties which are expressed as key–value pairs. According to the standard, each vertex must have an identifier which is unique among all vertices, and the same is true for edges. In practice, database vendors often recommend using identifiers which are globally unique in the database. Furthermore, in addition to the unique ID and the user-defined properties, each vertex and edge has a label, which is defined to be a single-valued string that is intended to be used for categorization purposes. All user-defined properties are untyped by definition, i.e., no restriction is imposed on which values a user-defined property may have. However, some graph database vendors such as Titan DB and OrientDB offer the possibility to define a schema which is evaluated at runtime. The only unsupported value for any user-defined property is the null value. Instead of assigning null to a property, it is recommended to delete the property on the target graph element entirely.
Another module in the TinkerPop standard is the graph query language Gremlin. In contrast to the property graph API, which is only a specification, Gremlin comes with a default implementation that is built upon the property graph API interfaces. This implementation also includes a number of built-in query optimization strategies. Other modules include a standard test suite for TinkerPop vendors, and a generic server framework for graph databases called Gremlin Server.
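The following example shows standard TinkerPop usage (not ChronoGraph-specific code) with the in-memory reference implementation TinkerGraph: a small property graph is created through the property graph API and then queried with Gremlin.

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph;

public class TinkerPopExample {
    public static void main(String[] args) {
        // Build a small property graph: two labeled vertices with properties and one edge.
        TinkerGraph graph = TinkerGraph.open();
        Vertex john = graph.addVertex(T.label, "person", "name", "John Doe");
        Vertex jane = graph.addVertex(T.label, "person", "name", "Jane Doe");
        john.addEdge("knows", jane, "since", 1999);

        // Query the graph with Gremlin: whom does John know?
        GraphTraversalSource g = graph.traversal();
        String result = g.V().has("person", "name", "John Doe")
                         .out("knows")
                         .<String>values("name")
                         .next();
        System.out.println(result); // prints "Jane Doe"
    }
}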
5.2 ChronoGraph architecture
Our open-source project ChronoGraph provides a fully TinkerPop-compliant graph database implementation with additional versioning capabilities. In order to achieve this goal, we employ a layered architecture as outlined in Fig. 12a. In the remainder of this section, we provide an overview of this architecture in a bottom-up fashion.
The bottom layer of the architecture is a versioned key-value store, i.e., a system capable of working with time–key–value tuples as opposed to plain key–value pairs in regular key-value stores. For the implementation of ChronoGraph, we use ChronoDB, as introduced in Sect. 4.
ChronoGraph itself consists of three major components. The first component is the graph structure management. It is responsible for managing the individual vertices and edges that form the graph, as well as their referential integrity [R9]. As the underlying storage mechanism is a key-value store, the graph structure management layer also performs the partitioning of the graph into key-value pairs and the conversion between the two formats. We present the technical details of this format in Sect. 5.3. The second component is the transaction management. The key concept here is that each graph transaction is associated with a timestamp on which it operates. Inside a transaction, any read request for graph content will be executed on the underlying storage with the transaction timestamp. ChronoGraph supports full ACID transactions [R6] with the highest possible isolation level ("serializable", also known as "snapshot isolation", as defined in the SQL Standard [38]). The underlying versioning system acts as an enabling technology for this highest level of transaction isolation, because any given version of the graph, once written to disk, is effectively immutable. All mutating operations are stored in the transaction until it is committed, which in turn produces a new version of the graph, with a new timestamp associated with it. Due to this mode of operation, we do not only achieve repeatable reads, but also provide effective protection from phantom reads, which are a common problem in concurrent graph computing. The third and final component is the query processor itself which accepts and executes Gremlin queries on the graph system. As each graph transaction is bound to a branch and timestamp, the query language (Gremlin) remains agnostic of both the branch and the timestamp, which allows the execution of any query on any desired timestamp and branch [R8].
The application communicates with ChronoGraph by using the regular TinkerPop API, with additional extensions specific to versioning. The versioning itself is entirely transparent to the application to the extent where ChronoGraph can be used as a drop-in replacement for any other TinkerPop 3.x compliant implementation. The application is able to make use of the versioning capabilities via additional methods, but their usage is entirely optional and not required during regular operation that does not involve history analysis.
5.3 Data layout
In order to store graph data in our Temporal Key–Value Store, we first need to disassemble the graph into partitions that can be serialized as values and be addressed by keys. Then, we need to persist these pairs in the store. We will first discuss how we disassemble the graph, followed by an overview of the concrete key–value format and how versioning affects this process.
5.3.1 Partitioning: the star graph format
Like many other popular graph databases, e.g., Titan DB, we rely on the Star Graph partitioning in order to disassemble the graph into manageable pieces.
Fig. 13 Star Graph partitioning by example
Figure 13 shows an example of a star graph. A star graph is a subset of the elements of a full graph that is calculated given an origin vertex, in this case v0. The star graph contains all properties of the vertex, including the id and the label, as well as all incoming and outgoing edges (including their label, id and properties). All adjacent vertices of the origin vertex are represented in the star graph by their ids. Their attributes and remaining edges (indicated by dashed lines in Fig. 13) are not contained in the star graph of v0. This partitioning was chosen due to its ability to reconstruct the entire graph from disk without duplicating entire vertices or attribute values. Furthermore, it is suitable for lazy loading of individual vertices, as only the immediate neighborhood of a vertex needs to be loaded to reconstruct it from disk.
5.3.2 Key–value layout
Starting from a star graph partitioning, we design our key–value layout. Since all graph elements in TinkerPop are mutable by definition and our persistent graph versions have to be immutable, we perform a bijective mapping step before persisting an element. We refer to the persistent, immutable version as a Record, and there is one type of record for each structural element in the TinkerPop API. For example, the mutable Vertex element is mapped to an immutable VertexRecord. A beneficial side-effect of this approach is that we hereby gain control over the persistent format, and can evolve and adapt each side of the mapping individually if needed. Table 6 shows the contents of the most important record types.
Table 6 TinkerPop API to Record Mapping [27]
VertexRecord: id, label, PropertyKey \(\rightarrow \) PropertyRecord, In: EdgeLabel \(\rightarrow \) EdgeTargetRecord, Out: EdgeLabel \(\rightarrow \) EdgeTargetRecord
EdgeRecord: id of InVertex, id of OutVertex
PropertyRecord: PropertyKey, PropertyValue
EdgeTargetRecord: id of edge, id of other-end Vertex
In Table 6, all id and label elements, as well as all PropertyKeys, are of type String. The PropertyValue in the PropertyRecord is assumed to be in byte array form. An arrow in the table indicates that the record contains a mapping, usually implemented with a regular hash map. An element that deserves special attention is the EdgeTargetRecord that does not exist in the TinkerPop API. Traversing from one vertex to another via an edge label is a very common task in a graph query. In a naive mapping, we would traverse from a vertex to an adjacent edge and load it, find the id of the vertex at the other end, and then resolve the target vertex. This involves two steps where we need to resolve an element by ID from disk. However, we cannot store all edge information directly in a VertexRecord, because this would involve duplication of all edge properties on the other-end vertex. We overcome this issue by introducing an additional record type. The EdgeTargetRecord stores the id of the edge and the id of the vertex that resides at the "other end" of the edge. In this way, we can achieve basic vertex-to-vertex traversal in one step. At the same time, we minimize data duplication and can support edge queries (e.g., g.traversal().E() in TinkerPop), since we have the full EdgeRecords as standalone elements. A disadvantage of this solution is the fact that we still need to do two resolution steps for any query that steps from vertex to vertex and has a condition on a property of the edge in between. This trade-off is common for graph databases, and we share it with many others, e.g., Neo4j. We will discuss this with a concrete example in Sect. 5.4.
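The record types of Table 6 can be sketched as plain Java classes. Field names and types beyond those listed in the table are assumptions; in particular, a real implementation would likely map each edge label to a collection of EdgeTargetRecords rather than a single one.

import java.util.Map;

// Hedged sketch of the persistent record types listed in Table 6.
// Field names and exact types are assumptions derived from the table and the surrounding text.
final class PropertyRecord {
    String propertyKey;
    byte[] propertyValue;     // property values are stored in byte array form
}

final class EdgeTargetRecord {
    String edgeId;            // id of the edge
    String otherEndVertexId;  // id of the vertex at the "other end"
}

final class VertexRecord {
    String id;
    String label;
    Map<String, PropertyRecord> properties;       // PropertyKey -> PropertyRecord
    Map<String, EdgeTargetRecord> incomingEdges;  // EdgeLabel -> EdgeTargetRecord (likely multi-valued in practice)
    Map<String, EdgeTargetRecord> outgoingEdges;  // EdgeLabel -> EdgeTargetRecord (likely multi-valued in practice)
}

final class EdgeRecord {
    String inVertexId;        // id of InVertex
    String outVertexId;       // id of OutVertex
}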
For each record type, we create a keyspace in the underlying key–value store. We serialize the record elements into a binary sequence. This binary sequence serves as the value for the key–value pairs and the id of the element is used as the corresponding key. The type of record indicates which keyspace to use, completing the mapping to the key–value format. The inverse mapping involves the same steps: given an element ID and type, we resolve the key–value pair from the appropriate keyspace by performing a key look-up using the ID. Then, we deserialize the binary sequence, and apply our bijective element-to-record mapping in the inverse direction. When loading a vertex, the properties of the incoming and outgoing edges will be loaded lazily, because the EdgeTargetRecord does not contain this information and loading edge properties immediately would therefore require an additional look-up. The same logic applies to resolving the other-end vertices of EdgeTargetRecords, allowing for a lazy (and therefore efficient and RAM-conserving) solution.
Fig. 14 Mapping a graph to key-value format
Figure 14 shows an example of the translation process between the Graph format and the Key-Value-Store format. In this example, we express the fact "John Doe knows Jane Doe since 1999" in a property graph format. Each graph element is transformed into an entry in the key–value store. In the example, we use a JSON-like syntax; our actual implementation employs a binary serialization format. Please note that the presented value structures correspond to the schema for records presented in Table 6.
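As the figure itself is not reproduced here, the following sketch gives a rough impression of the resulting entries. The identifiers (v1, v2, e1) and the JSON-like value layout are invented for readability; the actual store serializes the records from Table 6 into a binary format.

import java.util.HashMap;
import java.util.Map;

// Hedged, JSON-like illustration of the key-value entries for
// "John Doe knows Jane Doe since 1999"; identifiers and value layout are invented.
public class KeyValueLayoutExample {
    public static void main(String[] args) {
        Map<String, String> vertexKeyspace = new HashMap<>();
        Map<String, String> edgeKeyspace = new HashMap<>();

        vertexKeyspace.put("v1",
            "{ label: 'person', properties: { name: 'John Doe' }, out: { knows: [ { edge: 'e1', target: 'v2' } ] } }");
        vertexKeyspace.put("v2",
            "{ label: 'person', properties: { name: 'Jane Doe' }, in: { knows: [ { edge: 'e1', target: 'v1' } ] } }");
        edgeKeyspace.put("e1",
            "{ label: 'knows', outVertex: 'v1', inVertex: 'v2', properties: { since: 1999 } }");
    }
}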
5.4 Versioning concept
When discussing the mapping from the TinkerPop structure to the underlying key–value store in Sect. 5.3, we did not touch the topic of versioning. This is due to the fact that our key–value store ChronoDB is performing the versioning on its own. The graph structure does not need to be aware of this process. We still achieve a fully versioned graph, an immutable history and a very high degree of sharing of common (unchanged) data between revisions. This is accomplished by attaching a fixed timestamp to every graph transaction. This timestamp is always the same as in the underlying ChronoDB transaction. When reading graph data, at some point in the resolution process we perform a get(...) call in the underlying key–value store, resolving an element (e.g., a vertex) by ID. At this point, ChronoDB uses the timestamp attached to the transaction to perform the temporal resolution. This will return the value of the given key, at the specified timestamp.
Fig. 15 Example: Navigating in a graph version
In order to illustrate this process, we consider the example in Fig. 15. We open a transaction at timestamp 1234 and execute the following Gremlin query:
V("v0").out("e0").outE("e1").has("p", "x").inV()
Translated into natural language, this query:
starts at a given vertex (v0),
navigates along the outgoing edge labeled as e0 to the vertex at the other end of the edge,
from there navigates to the outgoing edge labeled as e1,
checks that the edge has a property p which is set to value x,
and finally navigates to the target vertex of that edge.
We start the execution of this query by resolving the vertex v0 from the database. Since our transaction uses timestamp 1234, ChronoDB will look up the temporal key v0@1234, and return the value labeled as B in Fig. 15. Value A is not visible because it was overwritten by B at timestamp 1203, and value C is also not visible because it was written after our transaction timestamp. Next, we navigate the outgoing edge labeled as e0. Our store does contain information on that edge, but since the query does not depend on any of its properties, we use the EdgeTargetRecord stored in B and directly navigate to v1. We therefore ask ChronoDB for the value associated with temporal key v1@1234, and receive value G. For the next query step, we have a condition on the outgoing edge e1. Our EdgeTargetRecord in value G does not contain enough information to evaluate the condition; hence, we need to resolve the edge from the store. Querying the temporal key e1@1234 will return the value H, which is shared with the previous version because it was not changed since then. After evaluating the condition that the property "p" on edge version H is indeed set to the value "x" (as specified in the query), we continue our navigation by resolving the target of e1, which is v2. The temporal key v2@1234 will result in the value K being returned.
Note that this final navigation step starts at an element that was reused from the commit at timestamp 1065 and ends at the state of v2 that was produced by the commit at timestamp 1203. This is possible because graph elements refer to each other by ID, but these references do not include the branch or timestamp. This information is injected from the transaction at hand, allowing for this kind of navigation and data reuse. This is a major step toward fulfilling requirement [R8]. As ChronoDB offers logarithmic access time to any key-value pair on any version, this is also in line with requirement [R5].
5.5 TinkerPop compatibility and extensions
The Apache TinkerPop API is the de-facto standard interface between graph databases and applications built on top of them. We therefore want ChronoGraph to implement and be fully compliant with this interface as well. However, in order to provide our additional functionality, we need to extend the default API at several points. There are two parts to this challenge. The first part is compliance with the existing TinkerPop API; the second part is the extension of this API in order to allow access to new functionality. In the following sections, we will discuss these points in more detail.
5.5.1 TinkerPop API compliance
As we described in Sects. 5.3 and 5.4, our versioning approach is entirely transparent to the user. This makes compliance with the default TinkerPop API easier to achieve. The key aspect that we need to ensure is that every transaction receives a proper timestamp when the regular transaction opening method is invoked. In a non-versioned database, there is no decision to make at this point, because there is only one graph in a single state. The logical choice for a versioned graph database is to return a transaction on the current head revision, i.e., the timestamp of the transaction is set to the timestamp of the latest commit. This aligns well with the default TinkerPop transaction semantics—a new transaction t1 should see all changes performed by other transactions that were committed before t1 was opened. When a commit occurs, the changes are always applied to the head revision, regardless of the timestamp at hand, because history states are immutable in our implementation in order to preserve traceability of changes. As the remainder of our graph database, in particular the implementation of the query language Gremlin, is unaware of the versioning process, there is no need for further specialized efforts to align versioning with the TinkerPop API.
We employ the TinkerPop Structure Standard Suite, consisting of more than 700 automated JUnit tests, in order to assert compliance with the TinkerPop API itself. This test suite is set up to scan the declared Graph Features (i.e., optional parts of the API), and enable or disable individual tests based on these features. With the exception of Multi-Properties and the Graph Computer, we currently support all optional TinkerPop API features, which results in 533 tests being executed. We had to manually disable 8 of those remaining test cases due to problems within the test suite, primarily due to I/O errors related to illegal file names on our Windows-based development system. The remaining 525 tests all pass on our API implementation.
5.5.2 TinkerPop extensions
Having asserted conformance to the TinkerPop API, we created custom extensions that give access to the features unique to ChronoGraph. As the query language Gremlin itself remains completely untouched in our case, and the graph structure (e.g., Vertex and Edge classes) is unaware of the versioning process (as indicated in Sect. 5.4), we are left with one possible extension point, which is the Graph interface itself. In order to offer queries access to timestamps other than the head revision, we need to add a method to open a transaction on a user-provided timestamp. By default, a transaction in TinkerPop on a Graph instance g is opened without parameters. We expand the transaction class by adding several overloads which accept the desired target branch and version. Using these additional overloads, the user can specify the timestamp (as a java.util.Date or java.lang.Long) on which the transaction should be based, as well as the branch to operate on. This small change of adding an additional time argument is all it takes for the user to make full use of the time travel feature; the entire remainder of the TinkerPop API, including the structure elements and the Gremlin query language, behaves as defined in the standard. Opening a transaction without parameters defaults to opening a transaction on the latest version of the master branch, which is also in line with the TinkerPop API specification.
In order to provide access to the history of a single Vertex or Edge, we added explicit query methods to our Graph implementation. These methods allow access to the history of any given edge or vertex. The history is expressed by an Iterator over the change timestamps of the element in question, i.e., whenever a commit changed the element, its timestamp will appear in the values returned by the iterator. The user of the API can then use any of these timestamps as an argument to g.tx().open(...) in order to retrieve the state of the element at the desired point in time. The implementation of the history methods delegates the call directly to the underlying ChronoDB, which retrieves the history of the key–value pair associated with the ID of the given graph element. This history is extracted from the primary index, which is first sorted by key (which is known in both scenarios) and then by timestamp. This ordering allows the two history operations to be very efficient as only the element ID requires a look-up in logarithmic time, followed by backwards iteration over the primary index (i.e., iteration over change timestamps) until a different ID is encountered (c.f. Table 3).
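The sketch below summarizes the described extension points as a hypothetical interface. Only the pattern of opening a transaction on a user-provided timestamp (g.tx().open(...)) is taken from the text; all other names are illustrative and will differ from the actual ChronoGraph API.

import java.util.Date;
import java.util.Iterator;

/**
 * Hedged sketch of the versioning-related API surface described above.
 * All names are illustrative assumptions; the actual ChronoGraph interfaces differ.
 */
interface VersionedGraphApi {

    /** Opens a transaction on the head revision of the master branch (TinkerPop default behavior). */
    GraphTx tx();

    /** Additional overloads: open a transaction on a given branch and/or point in time. */
    GraphTx tx(Date timestamp);
    GraphTx tx(String branch, long timestamp);

    /** History of a single element: all commit timestamps at which it changed. */
    Iterator<Long> getVertexHistory(Object vertexId);
    Iterator<Long> getEdgeHistory(Object edgeId);

    interface GraphTx extends AutoCloseable {
        @Override
        void close();
    }
}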
The final requirement with respect to versioning capabilities is the demand for an operation that lists all changes within a given time range, regardless of the affected elements. In order to meet this requirement, we added another pair of methods to our Graph implementation. These methods (one for vertices, one for edges) accept time ranges and grant access to iterators that return TemporalKeys. These keys are pairs of actual element identifiers and change timestamps. Just as their element-specific counterparts, it is intended that these timestamps are used for opening transactions on them in order to inspect the graph state. Combined calls to next() on the vertex iterator and the edge iterator will yield the complete list of changes upon iterator exhaustion. Analogous to their element-specific siblings, these methods redirect directly to the underlying ChronoDB instance, where a secondary temporal index is maintained that is first ordered by timestamp and then by key. This secondary index is constructed per keyspace. Since vertices and edges reside in disjoint keyspaces, these two operations do not require further filtering and can make direct use of the secondary temporal index.
5.6 Transaction semantics
The Apache TinkerPop API is currently available in its third version. It evolved alongside its implementations, which range from local graphs (e.g., the in-memory reference implementation TinkerGraph) to highly distributed systems (e.g., Titan DB). Due to this diversity, the requirements toward transaction semantics, in particular behavior under concurrent access [R6], are specified very loosely in TinkerPop itself. For example, when iterating over the outgoing edges of a vertex, TinkerPop only specifies that the iteration itself should never return a null value and should never throw a ConcurrentModificationException, but details regarding the visibility of changes made by other, concurrent transactions are unspecified.
Since the reference implementation TinkerGraph, which is provided alongside the API, does not support transactions, we had to design the transaction semantics by ourselves. When we implemented ChronoDB, we envisioned it to be a system suitable for storing data for analysis purposes, therefore the consistency of a view and the contained data is paramount. As all stored versions are effectively immutable, we chose to implement a full ACID transaction model in ChronoDB with the highest possible isolation level ("Serializable" [38]). As ChronoGraph is based on ChronoDB, it follows the same transaction model. To the best of our knowledge, ChronoGraph is currently the only implementation of the TinkerPop API v3.x that is full ACID in the strict sense, as many others opt for repeatable reads isolation (e.g., OrientDB), while ChronoGraph supports snapshot isolation. A proposal for snapshot isolation for Neo4j was published recently [55], but it is not part of the official version. Graph databases without ACID transactions and snapshot isolation often suffer from issues like Ghost Vertices or Half Edges which can cause inconsistent query results and are very difficult to deal with as an application developer. These artifacts are negative side-effects of improper transaction isolation, and application developers have to employ techniques such as soft deletes (i.e., the addition of "deleted" flags instead of true element deletions) in order to avoid them. As ChronoGraph adheres to the ACID properties, these inconsistencies cannot appear by design.
5.7 Functionality and usage implications
Our implementation stands in stark contrast to existing solutions. We implement the versioning process at a lower level, in the generic temporal key–value store ChronoDB. This store is aware of the semantics of the versioning process, and is capable of solving the problem of long histories [26] (c.f. Sect. 4.2), unlike the previously mentioned solutions. There are no additional mapping steps required in order to achieve graph versioning; in fact, our graph-to-key-value mapping is very similar to the algorithm employed by Titan DB. In particular, no additional auxiliary graph elements are introduced for the purpose of versioning. To the end user, the versioning process is completely transparent, as our implementation is fully compliant with the standard TinkerPop API for non-versioned graphs. There is no need for translating one graph query into another in order to run it on a different version of the graph. A developer familiar with the TinkerPop API can start using ChronoGraph without any particular knowledge about its versioned nature. By offering additional methods, which are very much in line with the intentions of the TinkerPop API, we grant access to the versioning-related features. Additionally, ChronoGraph is fully ACID compliant with snapshot isolation for concurrent transactions, preventing common artifacts that arise in other, non-ACID graph databases, such as ghost vertices and half edges. Our solution is strongly based on immutability of existing versions, which aids in preserving traceability of changes and allows extensive sharing of data that remained unchanged between revisions.
5.8 Conflict resolution
In case of concurrent write transactions, the versioning engine is sometimes faced with conflicting commits. This situation occurs when two transactions simultaneously intend to modify the very same graph element. In this section, we describe our current conflict resolution approach.
The conflict resolution algorithm implemented in ChronoGraph differentiates between addition and removal of entire graph elements on the one hand and property value changes on the other hand. Additions of graph elements can never cause a true conflict: even if two concurrent transactions add a vertex or edge with the same new identifier (which is highly unlikely, since we employ universally unique identifiers), then the resulting state in both transactions is identical: the new vertex or edge exists. They may still differ in their properties, which we consider at a later stage.
When either side of the conflict is a graph element deletion, we are faced with two options: either we undo the deletion to apply the changes from the other side of the conflict, or we retain the deletion and discard the other changes. In our current implementation, the removal of a graph element always takes precedence over any other conflicting modification on this element. This may cause the loss of property changes on the deleted element in the concurrent transaction. However, the alternative of "undeleting" the graph element is even more undesirable. In particular if the deleted element was a vertex, then its adjacent edges have also been deleted, which would result in a completely isolated vertex if we chose to undelete it. Isolated vertices are no problem for the storage engine, but they don't add much value to the semantics of the graph, as they will never contribute to the results of traversal queries (since they are not connected to any other element).
In case of a conflict on property values, it is important to know that ChronoGraph tracks all modifications on vertex and edge properties individually. This means that the conflict resolution algorithm has access to the information on whether or not any given vertex property or edge property has been modified within the transaction. For example, if two concurrent transactions perform a commit on the same vertex, and one of them sets the firstname property to John, while the other sets the lastname property to Doe, then the conflict is resolved by accepting both changes (the resulting vertex will have a firstname of John and a lastname of Doe). Only if both transactions modify the same property on the same graph element does a true conflict occur. Here, we employ the same strategy as for graph elements: if either side of the conflict is a deletion, the deletion wins. If neither side is a deletion, then the last writer wins. For example, if one transaction sets firstname to John and a concurrent transaction sets firstname to Jack on the same graph element, then the conflict is resolved by using the firstname value from the transaction which was committed later in time.
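The property-level merge rules described above can be expressed compactly. The following sketch uses a simplified change-set representation (property name to new value, with null denoting a removal) and is not ChronoGraph's actual conflict resolver.

import java.util.HashMap;
import java.util.Map;

/**
 * Self-contained sketch of the property-level merge rules described above.
 * A "change set" maps property names to new values, where null denotes removal.
 */
public final class PropertyConflictResolution {

    /**
     * Merges two concurrent change sets on the same graph element.
     * Disjoint property changes are both accepted; on the same property,
     * a removal wins over a value change, otherwise the later commit wins.
     */
    static Map<String, Object> merge(Map<String, Object> earlier, Map<String, Object> later) {
        Map<String, Object> result = new HashMap<>(earlier);
        for (Map.Entry<String, Object> change : later.entrySet()) {
            String property = change.getKey();
            if (!earlier.containsKey(property)) {
                result.put(property, change.getValue());   // disjoint change: accept both sides
            } else if (earlier.get(property) == null || change.getValue() == null) {
                result.put(property, null);                 // deletion wins
            } else {
                result.put(property, change.getValue());    // last writer wins
            }
        }
        return result;
    }
}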
5.9 Limitations and drawbacks
Our approach is tailored toward the use case of having a versioned graph (as opposed to a temporal graph), which entails that queries on a single timestamp are the prevalent form of read access. Even though we support additional auxiliary methods for traversing the history of a single vertex or edge, and listing all changes within a given time range, our approach is far less suitable for use cases with an emphasis on temporal analysis that require time range queries, or detection of patterns on the time axis (as in graph stream analysis [47, 56]). For example, answering the question "Which elements often change together?", while possible in our solution, cannot be implemented in an efficient way that does not require linear scanning through the commit logs. Another example would be the query "List all vertices that have ever been adjacent to a given one", which would again involve linear iteration in our solution. In general, our graph is a TinkerPop implementation and therefore optimized with the traversal language Gremlin in mind. As such, it does not lend itself as well to declarative, pattern-driven search approaches like Cypher as a dedicated Cypher graph implementation (e.g., Neo4j) would.
Fig. 16 Conceptual ChronoSphere metamodel [28]
We are currently also not offering any means for distributing the graph among multiple machines (see Sect. 10 for details). This limits the scale of our graph to sizes manageable within the physical memory and computing resource restrictions of a single machine. An essential drawback of our solution is that, due to the versioned nature of our data, we cannot rely as much on dictionaries with \(\mathcal {O}(1)\) access times (e.g., Hash Maps) as regular general-purpose graph databases, because of the temporal resolution steps that happen on every navigation. Those steps have a complexity of \(\mathcal {O}(\log {}(n))\), which also limits the scalability of our graph.
Finally, we have to acknowledge the fact that ChronoGraph is an ongoing work-in-progress research project, therefore numerous optimization possibilities have not been exploited yet. For a detailed evaluation of ChronoGraph, we refer the interested reader to our previous work [27].
6 Solution part III: ChronoSphere
ChronoSphere is our novel open-source graph-based EMF model repository. It provides a wide variety of features known from other solutions, such as querying, persistence, versioning and branching, and furthermore supports unique features such as metamodel evolution and snapshot-level transaction isolation. ChronoSphere does not assume the presence of a runtime environment such as OSGi, but can be integrated into such frameworks if required. The software is distributed via standard Maven repositories, which makes it easily accessible for a wide range of modern dependency management tools such as Gradle, Apache Ivy or Apache Maven. ChronoSphere is implemented in pure Java, which allows it to run in any environment compatible with the Java 8 SE standard. This includes a broad spectrum of scenarios, from single-user desktop applications to highly concurrent enterprise-level server back-ends. The only notable exception where ChronoSphere cannot be used is JVM implementations that do not support Java reflection (e.g., Android devices), which is required for serialization and deserialization of objects in the lower levels of ChronoDB. As ChronoSphere provides its own data store and has no dependencies on an external database, it can be easily embedded into any EMF-based application.
A conceptual metamodel of ChronoSphere is shown in Fig. 16. A ChronoSphere instance manages a number of named Branches (with master as the predefined one) [R4], and each Branch refers to its origin (recursively). Each Branch contains any number of Versions [R8], which in turn contain a user-defined (Ecore-based) Metamodel and an InstanceModel, which is a collection of EObjects that adhere to the EClasses in the Metamodel. A ChronoSphere instance can then create Transactions [R6] on a given Version by starting them on a transactionTimestamp, which is usually obtained from a user-provided java.util.Date. This is a fairly common setup for versioning- and branching-enabled model repositories. A detail deserving special attention is the fact that a Version and a Metamodel are bound to each other in a one-to-one relationship. This is a requirement for metamodel evolution [R3], which we will discuss in Sect. 6.4.
6.1 Graph layout
In order to store EMF metamodels and their corresponding instance models in our graph database, we need to define a bijective transformation for each element between its EMF (in-memory) representation and its (persistent) graph representation. Our approach to this task is inspired by Neo4EMF [4] which uses a similar mapping.
Fig. 17 Model-to-graph mapping by example (simplified)
Figure 17 shows a small example for our model-to-graph mapping. Please note that this example is not complete; several properties were omitted for visual clarity. As outlined earlier, we store the Ecore metamodel (EPackages, EClasses, ...) together with the actual instance model (EObjects) in the same graph in order to support metamodel evolution and model versioning at the same time. The two vertices at the top represent two EPackages, with "MySubPackage" being owned by "MyPackage". The metamodel also contains two EClasses, one of which has an EAttribute and an EReference attached. Note that, in contrast to regular Ecore, we attach unique identifiers to every element in the meta- and instance model. This allows for easier object comparison in cases where an element was loaded more than once from the graph.
The two vertices with box icons represent actual EObjects. Each EObject vertex is connected with an edge labeled as "eClass" to the vertex in the metamodel that represents the EClass of the EObject. References are represented as edges as well. They use the unique ID of the EReference to which they belong as the edge label, prefixed with "eRef_". This allows for efficient identification during the mapping process and eliminates the possibility of unintentional name clashes on edges. A similar pattern is applied for attribute values. The left EObject vertex has a property prefixed with "eAttr_", followed by the unique identifier of the "name" EAttribute. Again, this schema prevents name clashes and allows for fast identification of the corresponding meta-element. By following this schema, we can efficiently and unambiguously map each EMF model into its graph representation, and vice versa.
There are several additional details to be considered which are not represented in this figure. For example, Ecore allows defining an EReference which is many-valued and has a fixed ordering for its targets. By definition, edges on a vertex are unordered. Hence, we need to assign explicit numeric "order" attributes to these edges to retain this information. Even though such corner cases do exist and require special attention during the design of the mapping algorithm, the overall graph format is very concise, especially when compared to the large number of tables required in equivalent SQL representations.
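The mapping conventions described in this section can be sketched against the standard EMF and TinkerPop APIs. The id resolution (idOf) and the vertex lookup are hypothetical helpers; only the "eClass" edge and the "eAttr_"/"eRef_" prefixes are taken from the description above, and many-valued references (which additionally require order attributes) are skipped for brevity.

import java.util.Map;
import java.util.function.Function;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EReference;

/**
 * Hedged sketch of the model-to-graph mapping conventions described above.
 * The idOf function and the vertexById lookup are hypothetical helpers, not ChronoSphere code.
 */
public final class ModelToGraphMapping {

    static void mapEObject(EObject eObject,
                           Map<String, Vertex> vertexById,
                           Function<Object, String> idOf) {
        Vertex vertex = vertexById.get(idOf.apply(eObject));

        // Link the instance vertex to the vertex representing its EClass.
        vertex.addEdge("eClass", vertexById.get(idOf.apply(eObject.eClass())));

        // Attribute values become vertex properties, prefixed with "eAttr_" plus the attribute id.
        for (EAttribute attribute : eObject.eClass().getEAllAttributes()) {
            if (eObject.eIsSet(attribute)) {
                vertex.property("eAttr_" + idOf.apply(attribute), eObject.eGet(attribute));
            }
        }

        // Single-valued references become edges, labeled with "eRef_" plus the reference id.
        for (EReference reference : eObject.eClass().getEAllReferences()) {
            if (!eObject.eIsSet(reference) || reference.isMany()) {
                continue; // many-valued references additionally need explicit order attributes
            }
            EObject target = (EObject) eObject.eGet(reference);
            vertex.addEdge("eRef_" + idOf.apply(reference), vertexById.get(idOf.apply(target)));
        }
    }
}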
6.2 EQuery
Storing and retrieving models are basic capabilities that are offered by all model repositories. However, only very few tools allow for queries that operate on model content, such as CDO. Often, languages like OCL [53], EOL [42] or the Hibernate Query Language (HQL) are employed for this purpose. HQL translates directly into SQL queries, utilizing the object-relational mapping information. From the modeling perspective, HQL is therefore a rather low-level language that is furthermore bound specifically to stores implemented in SQL. It operates on the storage level, as opposed to working on the model level. OCL allows for model-level queries, but the execution of these statements often suffers from poor performance on larger instance models. Both OCL and HQL require that their statements are written as plain strings in an application, effectively circumventing any validation by compilers.
In the context of ChronoSphere, we introduce a new query language called EQuery. It uses familiar syntax and is implemented as an internal domain-specific language (DSL) embedded in Java itself. EQuery is based on traversals rather than declarative statements. Queries form an integral part of an application's business logic, and embedding them directly into the Java source code has many benefits. Application developers can make full use of the Java compiler for validation, and benefit from their Java IDEs when it comes to editor support features, such as code completion and syntax highlighting. Queries will never go out of sync with the code that operates on them, and Java-level type safety is also preserved at compile time, which cannot be achieved with string-based query formats. Finally, EQuery also defines generic traversal steps that accept Java Lambda Expressions, which greatly enhances the flexibility and expressivity of queries.
EQuery works by first stating a starting element, then navigating the model from there to arrive at the desired result element(s). In between, EQuery allows for a wide range of operations, including filters, loops and subqueries. Before execution, the traversals are scanned and optimized to make best use of the indices maintained by ChronoSphere [R5]. Internally, those queries are translated to graph traversal queries on the underlying ChronoGraph structure. Those queries allow for highly efficient, stream-based lazy evaluation. The entire EQuery framework is also easily extensible with new operations, such as the evaluation of OCL statements [53] or expressions in the Epsilon Object Language [42]. There is also an ongoing research project for translating OCL statements directly into graph traversals called Mogwaï [11], which might also be incorporated into our framework in the future.
The remainder of this section is dedicated to examples on how to use EQuery. We use the IT Landscape metamodel for this purpose, as shown in Fig. 20.
Listing 1 shows a basic example of the EQuery syntax. Here, we start our traversal from the set of all EObjects in our model. We then narrow the search space by restricting the elements to have an EClass named "Application". We then filter the EAttributes of the EObjects and look for Applications with a "name" equal to "Mail Server". Then, we navigate along the "runsOn" EReference using the familiar eGet(...) syntax. The result is a stream of arbitrary objects which needs to be filtered to include only EObjects. Finally, we convert the stream into a Set for further processing. All of this is expressed within regular Java code.
Listing 2 shows a query that retrieves all clusters which are mixed, i.e., run on physical as well as virtualized hardware. The query starts from the set of all clusters and then specifies an and(...) step. This step will filter out all EObjects where at least one of the provided subqueries does not produce any object. Each subquery starts at the same object, which is the one which was passed into the and step. In our example, we pass in a cluster object into the and step, and check if it runs on at least one physical and at least one virtual machine.
Our final query example in Listing 3 involves the retrieval of all Applications which run on virtualized hardware. To do so, we start from all applications in our model, and create a name for this traversal step ("apps"). This name can be arbitrary and is not connected to the model in any way; it merely acts as a marker in our query. From the applications, we navigate along the "runsOn" EReference to the virtual machines. For every application where this navigation step produced a valid result (i.e., application.eGet("runsOn") is non-empty), we check if at least one of the resulting EObjects is of type Virtual Machine. If there is such an instance, we navigate back to the previously marked "apps" position. This produces a stream that only contains EObjects that have at least one target for the "runsOn" EReference which is a Virtual Machine. It is important to note here that the back step will only be performed if at least one element passed the previous step. This style of backtracking works in arbitrary distances between query steps, and the programmer can define an arbitrary number of (uniquely) named steps to jump back to. They are useful in cases where the required filter condition is not on an element itself but on its neighbors, or in order to avoid long backtracking navigation. A final detail worth mentioning in Listing 3 is that we first retrieve the EClasses from the EPackage. In most places, the EQuery API allows the programmer to choose between passing a name (EClass name, EAttribute name...) or a variable of the appropriate type. Generally speaking, passing in the metamodel element directly is more efficient than specifying it by name. In all further listings, we will not include the variable definitions.
6.2.1 EQuery validation and type safety
As EQuery statements are fragments of Java code, they are subject to the Java type system and will automatically be checked by the Java compiler. Furthermore, as there is no media disruption between the query and the code which surrounds it, the compiler will also assert that the result of the query is of the type expected by the surrounding code. For example, in Listing 3 the compiler can check that the result is of type \({\texttt {Set<EObject>}}\). Aside from type errors, the compiler will also check that the general syntactic structure of the query is correct (e.g., no unbalanced braces, no mistyped operators...). Among the errors which will not be noticed by the Java compiler are mistyped names of metamodel elements and their types. For example, eGet("name") is always correct according to the Java compiler, even if the EAttribute name does not exist in the current metamodel. Furthermore, the Java compiler cannot infer that eGet("name") will produce an element of type String; all it knows is that it produces some result Object. For such cases, EQuery provides content filtering and casting methods (e.g., asEObject(), asNumber()...) which first apply an instanceof filter and then downcast the passing elements.
OCL takes a very different approach. From the perspective of a Java developer, OCL is an external DSL, which means that an OCL expression is embedded into a Java program as a string literal. By using tools such as the Dresden OCL compiler [12], it is possible to type-check an OCL statement, provided that both the statement and the corresponding metamodel are known. However, even though this is a strong validation, it cannot check the interactions between the query literal and the application which executes the query. On a Java API level, the result of an ocl.evaluate(literal) method call will always be of type Object which then needs to be downcast to the expected type. As both the content of the OCL string literal as well as the downcast itself can be changed freely without causing type errors from the Java compiler, we argue that this method does not provide full type safety for application developers. This is not limited to OCL: all query languages which rely on string literal representation (such as SQL and HQL) also suffer from the same issue.
Due to the fact that in our use case the metamodel is evolving dynamically at runtime and queries have to be created dynamically rather than coming from design-time-constant string literals, we decided to implement our query language as a Java-embedded DSL to retain as much type safety as possible by relying on the Java type system.
6.3 Metamodel evolution
One of the major benefits of employing model repositories is the freedom of defining a custom, domain-specific metamodel for any given use case. In practice, users often cannot take full advantage of this benefit because they are hampered by the lack of proper tool support, in particular in cases where the metamodel evolves over the duration of a project [37]. These cases are very common in industrial contexts with long-running endeavors. In traditional enterprise software engineering scenarios, developers create database scripts that migrate the database schema (and contained data) from one version to the next. There is a wide variety of tools for this purpose (e.g., Flyway and LiquiBase). In a model-based environment, this translates to the concept of metamodel evolution [R3], sometimes also referred to as metamodel adaptation [74]. The key challenge of metamodel evolution is to keep the instance model consistent with the metamodel, i.e., the instances need to be co-adapted such that they conform to the new metamodel [9].
Fig. 18 Metamodel evolution in ChronoSphere [28]
For some evolutionary metamodel changes, no instance co-adaptation is required. For example, when adding a new EAttribute to an existing EClass, the existing instances are still valid (provided that the attribute is not mandatory), they just have no value set for the new attribute. Other basic examples include the addition of new EClasses or increasing the multiplicity of an EAttribute from multiplicity-one to multiplicity-many. However, far more complex examples exist as well, and in many cases, fully automatic and deterministic instance co-adaptation is not possible. Cicchetti et al. refer to such cases as unresolvable breaking changes [9]. For instance, we consider a metamodel that contains an EClass A. The next version of the same metamodel does not contain A anymore, but a new EClass named B instead. Even though there are algorithms for model differencing [16, 40, 72], in the absence of unique identifiers (e.g., UUIDs) and change logs we cannot tell if A was removed and B was added, or if A was merely renamed to B. In the first case, we would have to delete all instances of A; in the second case, we would need to migrate them to become instances of B. This basic example shows that instance co-adaptation requires semantic information about the change, which is only available to the application developer. For this reason, ChronoSphere provides an API for managing metamodel evolution with instance co-adaptation [R3]. Rose et al. provide a summary of related approaches [60]. This in-place transformation approach is in line with Wimmer [75] and Meyers [48], with the notable difference that we propose a Java API instead of ATL processes or DSLs [35, 59]. The concept is also similar to the Model Change Language [49]. This API offers three different modes of operation:
Changes without need for instance adaptation
This kind of evolution is intended for the most basic changes that do not require any kind of instance co-adaptation. Examples include adding EClasses, adding EAttributes, or increasing feature multiplicities from one to many. This category is also known as non-breaking changes [9]. The developer only provides the new version of the metamodel and loads it into ChronoSphere, which will create a new version in the history.
One-to-one correspondence
When instance co-adaptation is required, a common case is that each EObject from the old model will still correspond to (at most) one EObject in the new model. Examples for changes in this category include the renaming of EClasses and EAttributes. For such cases, the ChronoSphere metamodel evolution engine provides the developer with a predefined evolution process and a predefined element iteration order. The developer implements an Incubator that is responsible for either migrating a given EObject to match the new metamodel, or deleting it if it is obsolete. The Incubator is specific to a given source and target metamodel and contains the semantics and domain-specific constraints of the migration, expressed in Java source code.
Generic adaptation
In more complex cases, a one-to-one correspondence of elements can no longer be established, for example when an EClass is refactored and split up into two separate classes. In such cases, ChronoSphere provides a generic Evolution Controller interface that is in full control over the instance co-adaptation. It receives the migration context, which provides utility methods for querying the old and new model states. The migration process as well as the iteration order of elements are defined by the implementation of the controller. For that reason, implementing an evolution controller is the most powerful and expressive way of defining a migration, but also the most technically challenging one that entails the highest effort for the developer. Just like the incubators from the one-to-one correspondence case, such migration controllers are specific to a given source and target metamodel version.
By offering these features, we implement the metamodel evolution requirement [R3]. Since we only adapt the latest version of the model to the new metamodel, the old model instances still conform to their corresponding metamodel. We must not touch these instances, because this would violate the requirements for versioning and traceability of changes [R8, R9]. Hence, we need to put the metamodel under version control as well.
As shown in Fig. 18, every version in every branch of the model can have its own metamodel to which it corresponds. A direct consequence of this approach is that the application developer needs to be aware of those (potentially) multiple metamodels, and create queries dynamically based on that metamodel. While this will entail additional efforts in development, it is the only fully consistent way of managing versioned models with evolving metamodels.
The alternative would be to retroactively adapt every stored version of the model to a single new metamodel. However, since this adaptation process is not guaranteed to conserve all information (e.g., consider a new metamodel where an EAttribute has been deleted), we would not be able to guarantee traceability anymore. Consequently, we would introduce a considerable threat to the validity of audits. By storing a new metamodel alongside the co-adapted instance model, we restrict the impact of a metamodel evolution to a single version (e.g., the version that simultaneously introduces \(m_{4}\) and \({mm_{2}}\) in Fig. 18) and can still guarantee traceability in the remaining sections of our data. As we will discuss in the remainder of this section, in our approach, we can guarantee traceability even across metamodel evolutions.
Algorithm 2 shows how metamodel evolution works in ChronoSphere when using an Incubator. At the beginning of the metamodel evolution algorithm, we open two transactions on the repository, and we refer to them as txOld and txNew. We will use txOld in order to read the repository state before the evolution has occurred, and txNew to perform our modifications. We assume that txOld contains a metamodel and a corresponding instance model (otherwise the evolution is a regular insertion). It is crucial at this point that these two transactions are able to work in parallel, and are furthermore completely isolated from each other. Our first actual modification is to override the previous metamodel in txNew. We can safely do so because the original is still stored in txOld. There is no metamodel differencing taking place in this phase; we perform a plain overwrite. We initially delete all elements in txNew (lines 4 to 5) and start with an empty instance model. Afterward, we begin our first instance evolution phase (lines 6 to 9). We iterate over all EObjects stored in the old repository state, and ask our Incubator for a new EClass for this particular EObject. If there is a corresponding EClass in the new metamodel, we recreate the EObject with the same ID, preserving the historical traceability link. Otherwise, we discard the EObject. In lines 10 through 12, we iterate over the elements that received a new EClass previously and look for their counterparts in txOld. We ask the Incubator to transfer any desired EAttribute values from the old version to the new one, which may also involve a value transformation step. For the fulfillment of all of its tasks, the Incubator has access to the Ecore API as well as the ChronoSphere API, allowing for very clean and expressive implementations. Finally, we construct the EReference instances by iterating over the EObjects again (lines 13 to 15). Once more, the Incubator is responsible for the actual semantics. In the last phase, we perform the commit that persists our changes to disk (and creates a new entry in the version history), and roll back the historical transaction.
Overall, we have maintained our traceability links (by retaining the IDs of EObjects) and performed a metamodel evolution with instance adaptation that is ACID safe and creates a clean history without partially evolved intermediate states. The evolution process with a full-fledged Evolution Controller works in much the same way. The primary difference is that lines 6 through 15 are replaced by a call to the controller, allowing for a maximum of flexibility in the controller implementation. In practice, this algorithm requires efficient management of RAM, in particular when working with larger models. Since txNew needs to manage all changes applied to all model elements, the change set can grow very large. Incremental commits mitigate this problem by flushing batches to disk while providing the same level of ACID safety and equivalent histories.
6.4 Advantages and limitations of the incubator approach
Using the incubator algorithm as described in the previous section reduces the amount of manual coding required to perform a metamodel evolution with instance adaptation. The incubator assists the programmer in the common case that an EObject before the metamodel evolution conforms to at most one EObject after the metamodel evolution. This covers many cases, including:
Removal of classes
Addition of new classes
Renaming of any metamodel element
Any changes to EAttributes and EReferences
Any combination of changes listed above
In all of these cases, the incubator provides utilities such as a fixed migration process and a fixed element iteration order. The goal of the incubator approach is to drastically reduce the required amount of manual coding for common cases. However, it is not applicable in all scenarios. For example, when a single class C is split into two classes (\(C_{1}\) and \(C_{2}\)), then each EObject E conforming to C must be split into two EObjects \(E_{1}\) and \(E_{2}\), where \(E_{1}\) becomes an instance of \(C_{1}\), and \(E_{2}\) an instance of \(C_{2}\). Cases such as this (which are less common and often fairly complex) are not covered by the incubator. In such cases, the programmer needs to implement the migration code manually. We consider the current features to be a baseline that provides the necessary raw functionality. We aim to build more sophisticated and user-friendly evolution mechanisms on top in the future, gradually reducing the amount of required manual coding (see Sect. 10).
Fig. 19 Analyzing transitive dependencies of IT assets in Txture's interactive visualizations
6.5 Transaction and versioning concepts
Transactional safety [R6] is the foundation of all features in ChronoSphere which are related to collaboration and evolution. Originally coined by the database community, this concept has since been adopted by other domains as well. In the context of modeling and model repositories, transactional safety implies that several clients can work in parallel on a model without interfering with each other.
In ChronoSphere, a client24 requests a transaction, then operates on it by executing queries and applying changes locally in this transaction. Afterward, the transaction can either be committed (local changes will be made available globally by creating a new version) or rolled back (reverting all local changes). The isolation property defined in the SQL Standard [38] states that any operation executed within a transaction must not be affected by other, concurrent transactions [R6]. In order to achieve the highest possible isolation level (serializable [38], also known as snapshot isolation), databases traditionally either need to perform excessive pessimistic locking, or allocate considerable amounts of RAM to open transactions in order to retain duplicates of concurrently modified entries. Thanks to its versioning capabilities, ChronoSphere can provide snapshot isolation with minimal locking, no additional memory overhead and without sacrificing performance. This is a direct consequence of our design: once a model version is committed, it is effectively immutable. Further changes will create new versions. Therefore, as long as a client is working on any given version (i.e., the used transaction timestamp does not change), the model content will not change, thus guaranteeing snapshot isolation.
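The following sketch illustrates the resulting behavior of a concurrent reader and writer. The transaction API calls shown here are simplified assumptions, not necessarily the exact ChronoSphere signatures.

    // Two concurrent clients; API names are illustrative assumptions.
    ChronoSphereTransaction reader = repository.tx(); // pinned to the version that is latest right now
    ChronoSphereTransaction writer = repository.tx();

    writer.attach(newModelElement);  // local change, invisible to 'reader'
    writer.commit();                 // creates a new, immutable version

    // 'reader' still operates on the version it was opened on: its queries keep
    // returning the same, reproducible results until the client explicitly
    // requests a newer timestamp.
    reader.rollback();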
7 Industrial case study
At the time of writing this document, ChronoSphere and its sub-components are already being used in production in industry. They are embedded in the commercial IT Landscape documentation tool Txture,25 which is in turn deployed on-site at several customers. In this section, we explore how the interplay between Txture and ChronoSphere works and how ChronoSphere contributes to the use cases of Txture.
Fig. 20 Metamodel for IT landscapes (simplified)
Txture is a software tool for documenting the IT Landscape of companies. The customers of Txture employ it for several different use cases, including impact analysis, history analysis, enterprise architecture management, transformation planning and as a supportive tool for datacenter outsourcing. Txture uses ChronoSphere as its primary data storage. Its clients interact with Txture via a web-based user interface. This UI offers interactive, query-based near-real-time visualizations to the end user. The supported visualization types include graph-based views, tables and charts. The graph-based views (as shown in Fig. 19) are suitable for many IT Operation and Enterprise Architecture use cases, because they allow the user to navigate the connections between model elements in an interactive way, expanding areas of interest and collapsing others into groups (or hiding them entirely). This workflow allows the user to abstract away distracting details on the fly. Each navigation step performed by the user on the front-end is translated into a query step and added to the existing base query. The server core is responsible for executing these queries and directly relies upon the capabilities of ChronoSphere. The metamodel in Txture can be adapted to suit the needs of the customer (usually starting from a best-practice model synthesized from past projects; a simplified version is shown in Fig. 20). Therefore, queries cannot be hard-coded; they need to be created on the fly in order to conform to the metamodel. In a string-based approach (e.g., OCL strings), the only solution would be to build queries from individual pieces via string concatenation, which is an error-prone process: the Java compiler cannot offer any checks with respect to the content of a query string. EQuery is implemented as a Java-embedded internal DSL (cf. Sect. 6.2), which is beneficial for this scenario, as queries can be assembled dynamically one step at a time without the need for string concatenations. As queries are expressed as Java code and ChronoSphere is intended as a tool for application developers, further measures are required to make ad-hoc queries available to end-users. Txture implements a custom query syntax for end-users which is then translated into EQuery statements. Another approach for ad-hoc queries could be to employ a JVM scripting language such as Groovy,26 which would allow end-users to type their EQuery statements directly into a text field and evaluate them.
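For illustration, assembling such a query dynamically could look roughly like the following Java snippet. The builder methods shown here are assumptions for illustration and not necessarily the exact EQuery API.

    // Dynamically assembling a query without string concatenation (illustrative API).
    QueryBuilder query = tx.find().allInstancesOf("Service");
    if (nameFilter != null) {
        query = query.has("name", nameFilter);   // checked by the Java compiler, no string splicing
    }
    for (String reference : userSelectedNavigationSteps) {
        query = query.eGet(reference);           // each UI navigation step appends one query step
    }
    List<EObject> result = query.toList();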
The incremental commits offered by ChronoSphere (see Sect. 4.2) are also used by Txture. Data imports from external data sources, most notably from a Configuration Management Database (CMDB [7]), often produce a large quantity of new elements to insert into the model. Incremental commits allow Txture to perform this insertion in a number of smaller successive steps, avoiding memory limitation issues on the server while preserving a history that does not contain any intermediate (incomplete) states. Furthermore, this preserves the atomic nature of imports: even if previous incremental commit portions have already been written to disk, an error that occurs toward the end of the process can still cancel the entire process without corrupting the database content.
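A minimal sketch of such a batched import, assuming an incremental commit operation on the transaction (method names may differ in the actual API):

    // Importing a large CMDB extract in batches; API names are assumptions.
    ChronoSphereTransaction tx = repository.tx();
    int counter = 0;
    for (EObject element : importedElements) {
        tx.attach(element);
        if (++counter % 10_000 == 0) {
            tx.commitIncremental();  // flush the current batch to disk, keep the transaction open
        }
    }
    tx.commit();                     // the history shows one atomic import, no intermediate states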
A feature that is used in particular by Enterprise Architects in Txture is the planning feature. It allows an architect to apply arbitrary changes to the current state of the model, without making them part of the "as-is" architecture or making them visible to anybody else. Txture provides the same suite of analysis tools for the resulting plans that is also available on the "as-is" model, in addition to comparison features that allow the architect to analyze the impact of the plan on the real model. This is realized using the branching feature of ChronoSphere. When a new plan is created by a user, Txture switches the currently selected branch for that user from the master ("as-is") branch to a new one. Since this branching is lightweight in ChronoSphere, the user does not experience any waiting time during this switch, even when creating new plans on top of large models.
A similar argument can be made for the Time Machine feature in Txture. By selecting a date and time in the past, a user can move the entire client to that particular point in time and review the contents of all existing visualizations calculated on that model state. The versioning feature of ChronoSphere acts as the enabling technology here: Txture uses the date and time selected by the user on the user interface as the request timestamp for the ChronoSphere transaction. As all queries are timestamp-agnostic, all visualizations adapt to the new situation automatically as their backing queries are being re-evaluated by the server.
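Conceptually, serving such a historical view boils down to opening a transaction on the user-selected timestamp, roughly as sketched below (method names are assumptions):

    // Opening a read-only view on a historical model state; API names are illustrative.
    long requestTimestamp = userSelectedDate.getTime();           // java.util.Date chosen in the UI
    ChronoSphereTransaction tx = repository.tx(requestTimestamp); // transaction pinned to that version
    // Every query issued on 'tx' now evaluates against the model as it was at that point in time.
    List<EObject> services = tx.find().allInstancesOf("Service").toList();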
The fact that each revision is immutable in ChronoSphere also brings additional advantages that are utilized by Txture. Any client starts out on the latest revision of the "as-is" model at the time of logging in. When a new model version is created at the server, a notification is sent out to all clients, informing them about the existence of this new update. On most systems, this would entail that the client has to react immediately and fetch the new information from the server in order to prevent synchronization issues. Txture clients are not forced to interrupt their current task, because the version a client is currently viewing is immutable. The user can continue to analyze the currently selected model version indefinitely as the server will yield consistently reproducible query results for any version. Should the user choose to switch to the latest version, the transaction timestamp associated with the session is updated to the most recent one, refreshing all query results and client views in the process.
8 Performance evaluation
In comparison with other repository solutions, ChronoSphere offers several additional features while making fewer assumptions about the input model (e.g., we do not assume the metamodel to remain constant over time). This raises the question of how well our approach performs in comparison with other solutions. In this section, we present a comparative benchmark between ChronoSphere and the Eclipse CDO repository. We specifically selected CDO for three reasons:
CDO is widely considered to be the "gold standard" in model repositories.
In the spectrum of competing solutions, CDO is closest to ChronoSphere with respect to features.
CDO is based on a relational store, which allows for a discussion of individual advantages and drawbacks when comparing it to our graph-based approach.
For this benchmark, we operate on a fixed Ecore model (as required by CDO). This model uses the Metamodel for IT Landscape Management (Fig. 20 shows a simplified version, the full Ecore file is available online27). Since instance models in the IT Landscape environment are sensitive information and therefore confidential, we created a synthetic model in collaboration with our industry partners at Txture. This model exhibits the same inner structure (element counts, associations, ...) as a typical real-world model. The instance model file is available online.28 It consists of approximately 200000 EObjects.
This benchmark includes four individual scenarios. The scenarios have been synthesized from the most commonly used queries in Txture, as introduced in Sect. 1. In each scenario, we grant 5 GB of RAM to the Java Virtual Machine (Java 1.8 Update 161) which executes the code. It is possible to run the benchmark with less RAM; however, in such cases the garbage collector can impact the individual runtimes in unpredictable ways. The relevant hardware in our test system included an Intel i7 5820K CPU (3.30 GHz) and a Crucial CT500MX SSD, operated by Windows 10. All tests were executed on a pre-warmed JVM. Each figure in the benchmark shows the results of the scenario, averaged over 10 independent executions. Variances are not included in the figures due to their small magnitudes. We would like to emphasize that CDO offers several ways of executing queries. One of them is the usage of the Hibernate Query Language (HQL29). This language requires deep knowledge of the underlying storage, as the user needs to be aware of the way in which database tables are generated from model classes. Another way of querying the data in CDO is by using its programming interface directly. The third and final way of querying data in CDO is by specifying OCL [53] queries. We argue that OCL queries are the only option to formulate queries on the abstraction level of the model, without requiring any knowledge of the underlying storage. In this benchmark, we will therefore only utilize OCL queries in CDO. We use the latest stable version of CDO as of January 2018.
8.1 Model insertion
The first benchmark scenario consists of loading the 200000 EObjects into the system and persisting them to disk.
Fig. 21 Insertion performance
As Fig. 21 shows, there are significant performance differences during model insertion when using different database back-ends for CDO. The embedded H2 database offers faster insertion speed compared to PostgreSQL. ChronoSphere is the middle ground between the two. The primary factor slowing ChronoSphere down in this comparison is the fact that both H2 and PostgreSQL have access to fixed schemas, which allows for a very compact binary representation of each EObject. ChronoSphere uses a more flexible approach, resulting in less densely packed data structures and consequently lower insertion speed. CDO also achieves this performance by deliberately working without foreign key constraints in the database itself, thus forfeiting one of the biggest advantages of relational storage systems.
8.2 Assets By Name
A very common task for any repository is to allow a user to find an element by its name. In this scenario, we are searching for a Physical Machine by name. We use the following queries:
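(A sketch of the general shape of these queries is given below; the class and attribute names follow the simplified metamodel in Fig. 20, and the EQuery builder methods are assumptions for illustration.)

    // OCL (executed by CDO):
    String ocl = "PhysicalMachine.allInstances()->select(pm | pm.name = '<name_placeholder>')";

    // EQuery (fluent Java, illustrative method names):
    List<EObject> result = tx.find()
        .allInstancesOf("PhysicalMachine")
        .has("name", name)            // resolved via the secondary index on 'name'
        .toList();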
We repeat each query 100 times, using a different replacement for the \({\texttt {<name\_placeholder>}}\) in every iteration. Using the same name in every execution would entail the possibility that the tool under test simply returns a cached query result, which would contradict our measurement goals (Fig. 22).
Fig. 22 Assets By Name performance
Given that relational databases have been designed for precisely this kind of query, one might assume that CDO is capable of answering it very efficiently. However, it turns out that the CDO query evaluation engine for OCL does not inspect the provided OCL statement for optimization possibilities. Instead of creating a suitable SQL statement and forwarding it to the underlying database, the OCL statement is evaluated simply by iterating over the EObjects in memory. Another shortcoming of CDO is the fact that it does not create secondary indices on model data, and also does not offer such an option through its API. While it is possible to manually create secondary indices in the underlying database, they will not be utilized by CDO. ChronoSphere analyzes and optimizes the passed query and utilizes a secondary index on the element name, allowing for greatly reduced query response times in this scenario.
8.3 Root cause analysis
After retrieving a particular element by name, a common task in the IT Landscape context is to perform a Root Cause Analysis. Such an analysis attempts to track down the root cause for a failing asset. For example, an application might fail to operate properly because the underlying virtual machine fails. The query therefore needs to retrieve the transitive dependencies of a given asset. For this particular scenario, we consider the transitive dependencies of a Service asset down to the PhysicalMachine level. This is commonly referred to as the deployment stack. We measure the accumulated time for 1000 executions of this query on different initial start objects. The queries in OCL and EQuery are formulated as follows:
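(A sketch of the general shape of these queries is given below; the reference name runsOn and the EQuery step names other than closure are assumptions based on the simplified metamodel in Fig. 20.)

    // OCL (executed by CDO), with the Service as query context (self):
    // transitively collect everything the asset runs on.
    String ocl = "self->closure(e | e.runsOn)";

    // EQuery (illustrative method names): start at the Service and repeatedly
    // follow the 'runsOn' navigation down to the PhysicalMachine level.
    List<EObject> deploymentStack = tx.find()
        .startingFrom(service)
        .closure("runsOn")
        .toList();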
In this benchmark, we utilize the closure statement in both languages to navigate the transitive paths. We require this statement because the path length is unknown—a Virtual Machine may run on a Cluster which in turn is deployed on Virtual Machines. Such a query is very difficult to express in SQL and would require recursive JOIN operations. However, since CDO resolves OCL queries via in-memory iteration, it circumvents this issue.
Fig. 23 Root cause analysis performance
Figure 23 shows that CDO provides acceptable performance in this scenario; however, ChronoSphere outperforms it by a factor of 2. The reason for this speedup lies within the architecture of ChronoSphere: after consolidating the EQuery steps, it transforms them into a Gremlin graph traversal. This highly optimized traversal engine then executes the query on the data structures provided by ChronoGraph. We avoid the overhead of the reflective EObject API completely, granting ChronoSphere faster query execution.
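For illustration, the generated traversal could look roughly like the following Gremlin (Java) snippet; the edge label runsOn and the use of a vertex label for the EClass are assumptions and do not reflect ChronoSphere's exact graph layout.

    import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.hasLabel;
    import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.out;

    import java.util.List;
    import org.apache.tinkerpop.gremlin.process.traversal.Path;

    // Gremlin traversal corresponding to the deployment-stack query (illustrative).
    List<Path> deploymentStacks = g.V(serviceVertexId)
        .repeat(out("runsOn"))              // follow outgoing 'runsOn' edges ...
        .until(hasLabel("PhysicalMachine")) // ... until the physical level is reached
        .path()
        .toList();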
8.4 Impact analysis
Root Cause Analysis, as introduced in the previous section, finds the cause of failure of a particular asset. The inverse question is also relevant in IT Landscape management: given an asset, we need to determine which higher-level assets are affected if this asset fails. For our model-level query, this means that we need to find the incoming transitive dependencies of a model element. For our benchmark, we perform the impact analysis from a Physical Machine all the way up to the Services. We formulated the corresponding queries as follows:
In this particular scenario, we encounter a shortcoming of OCL: it does not allow us to formulate query steps across the incoming references of an object. This issue has been recognized by tool authors, and there are extensions to OCL which allow for an eInverse step, e.g., in the Acceleo framework.30 However, as CDO only supports the official OCL standard, our options for formulating this query (without having to modify our metamodel) are very limited. We iterate over all services, build the deployment stack for each service, and then check if our Physical Machine in question is contained in the resulting set of elements.
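(The sketch below illustrates both variants; all metamodel and method names are assumptions. OCL has to iterate over all Services as described above, while EQuery can navigate the incoming runsOn references directly.)

    // OCL workaround (executed by CDO), with the PhysicalMachine as query context (self):
    String ocl = "Service.allInstances()->select(s | s->closure(e | e.runsOn)->includes(self))";

    // EQuery (illustrative method names): follow incoming references upwards.
    List<EObject> affectedServices = tx.find()
        .startingFrom(physicalMachine)
        .closureIncoming("runsOn")     // backward navigation; step name is an assumption
        .isInstanceOf("Service")
        .toList();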
Fig. 24 Impact analysis performance
As Fig. 24 clearly shows, the limitations of OCL severely impact the performance of the query in CDO. In contrast, the EQuery expression in ChronoSphere has approximately the same performance as the forward-navigating Root Cause Analysis query. Please note that we reduced the number of input elements from 1000 to 10 in this scenario in order to get results from CDO within a reasonable time frame. Multiplying the result of ChronoSphere by 100 yields the same results as in the Root Cause Analysis scenario. This demonstrates that ChronoSphere queries can navigate along outgoing and incoming references without any performance penalty.
8.5 Threats to validity
We tried to make the comparison in this section as fair as possible. Nevertheless, some threats to the validity of our results could not be eliminated during the process. First of all, the exact behavior of the Just-in-Time Compiler of the Java Platform, as well as its garbage collector, cannot be completely pre-determined. This adds some inherent variance to the results, which we tried to mitigate by pre-warming the JVM and assigning sufficient RAM to the Java process. In the CDO cases, we implemented the queries on a CDO client, with the CDO Server process running on the same machine in order to eliminate network latency. CDO does offer an embedded mode; however, in our tests we unfortunately found this mode to be unstable and prone to a number of unpredictable runtime exceptions. One might argue that implementing the queries in HQL would have yielded better performance in CDO; however, we chose OCL because it operates on the model level rather than on the persistence level. Also, in HQL it is currently not possible to formulate queries which are recursive or rely on transitive closures, which we require in our benchmark scenarios. The employed model, even though crafted in collaboration with the experts at Txture to ensure real-life conditions, is a synthetic construct, which might lead to different results in real applications.
Table 7 Benchmark result summary (execution times in ms): CDO (H2), CDO (PGSQL) and ChronoSphere compared for Model Loading, Root Cause Analysis, Impact Analysis and Assets By Name. Bold value represents the best score in the respective row.
8.6 Benchmark summary
In this comparative evaluation (which is summarized in Table 7), we demonstrated that ChronoSphere offers competitive performance, even though it does not require a metamodel to be constant over time. The underlying graph database allows for schema-free model storage without sacrificing query execution speed. This benchmark also demonstrates that avoiding O/R-mapping techniques has a positive influence on the overall performance. We also showcased the expressiveness of our query framework, which allows for more flexible navigation than OCL. This advantage is crucial in the IT Landscape domain and generally beneficial in model analysis. Our results are also in line with the extensive comparative benchmark by Barmpis et al. [1], which demonstrates the advantages of graph-based storage over relational solutions for model persistence.
Please note that we focused exclusively on ChronoSphere in this evaluation. For an evaluation on ChronoDB [25] and ChronoGraph [27], we refer the interested reader to the respective publications.
9 Discussion and related work
Over the years, a considerable number of model repositories have been developed by various authors. Pierantonio et al. provide a good overview in their paper [13]. In this section, we will compare our approach to other solutions which are conceptually close to our repository. As our approach also entailed the development of lower-level infrastructure, we will also consider related work in those areas. This section is structured in a bottom-up fashion, starting with the related work in the area of versioned key-value stores and concluding with related work in the model repository area.
9.1 Related Key-Value-Store versioning solutions
Database content versioning is a well-known topic. Early work in this area dates back to 1986, when Richard Snodgrass published his article on Temporal Databases [69]. Adding a time dimension to the data stored in a database considerably increases the complexity of the data management, because the additional dimension introduces new demands regarding data consistency, and several tried-and-true solutions are no longer applicable to the same extent as with non-versioned data (e.g., hash tables for caching). Key-value stores have become attractive formats for versioned data due to their simple nature when compared to relations, graphs or documents.
Sridhar Ramaswamy published a paper in 1997 on indexing for temporal databases [57]. He proposes an approach based on validity ranges for each entry, to which he refers as windows. Each entry is valid from its insertion until its validity is explicitly terminated by an update or deletion. This transforms the problem of searching in a versioned database into an instance of the well-known interval stabbing problem [65]: given a set of intervals and a number, find all intervals containing this number. This approach strongly inspired our efforts. The major difference between the algorithms we employ in ChronoDB (cf. Sect. 4.1) and Ramaswamy's approach is that in our case, the upper limit of each validity window is given implicitly by the matrix structure. We therefore do not need to update a previously stored validity range in our database when a new version is added. This allows our data store to operate in a strictly append-only fashion, which entails a number of technical advantages in the implementation. Also, deletions of entries do not impact our data structure in any different way than inserts or modifications, which was an issue in Ramaswamy's solution.
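The effect of this implicit upper bound can be illustrated with a strongly simplified, single-key example in Java; ChronoDB's actual matrix layout and index organization differ.

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Simplified temporal history of a single key: insertion timestamp -> value.
    NavigableMap<Long, String> history = new TreeMap<>();
    history.put(100L, "v1");  // valid from t=100 onwards
    history.put(250L, "v2");  // implicitly ends the validity of "v1"; no range update required
    history.put(400L, null);  // a deletion is just another entry

    // The value valid at time t is the entry with the largest timestamp <= t.
    Map.Entry<Long, String> atT = history.floorEntry(180L);  // yields (100, "v1")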
Felber et al. [21] propose a different solution. For every key, the store manages a list of values, each value corresponding to one version. This is a simple and effective system for managing elements with independent histories (e.g., wiki pages). However, this solution does not preserve the historical correlation between elements. For example, given a key \(k_{1}\) with values \(a_{1}\) and \(a_{2}\), and a key \(k_{2}\) with values \(b_{1}\) and \(b_{2}\), there is no way to tell if the entry \((k_{1},a_{1})\) existed at the same time as \((k_{2},b_{1})\), \((k_{2},b_{2})\), or neither of them. This temporal correlation is crucial when individual values can contain references to one another, as is the case with ChronoGraph.
Commercial database vendors have also explored the possibilities for database content versioning. Lomet et al. [44, 45] developed an approach named ImmortalDB, which was later integrated into Microsoft SQL Server. This solution is based on history chains: each entry refers to its predecessor via a pointer (alongside timestamp metadata). This approach allows for high performance of queries on the latest version of the data. However, as the request timestamps are moved further back in time, the query performance degrades linearly, as the history chains need to be traversed. ChronoDB avoids this problem and offers logarithmic access time to any entry, regardless of its age. Further commercial projects in this area include Temporal Tables in IBM DB2 [63] and Oracle's Flashback technology [32]. The choice between history chains and time-indexing (as shown in Sect. 4) depends on the use case. History chains offer almost the same performance for queries on the latest version as an unversioned system, but when previous versions are requested, the performance decreases linearly with the age of the requested version, as all intermediate chain links need to be traversed. Time-indexed solutions are engineered to offer nearly identical query performance on any requested version, but the overall query performance decreases in a logarithmic fashion as new versions are being added. We mitigate this issue by splitting the data along the time axis, thus limiting the search space of each request (see Algorithm 1). For systems which primarily serve the latest version, history chains are a viable choice. However, in particular for the use case of IT Landscapes, the performance of historical queries matters to the end users, as the repository is used for audits and history analysis.
9.2 Related graph versioning solutions
In comparison with Key-Value stores and SQL databases, graph databases as we know them today are a relatively new technology. Consequently, there are fewer approaches regarding content versioning.
Considering the large amount of development and quality assurance efforts that has been invested into existing graph databases (e.g., Neo4J or TitanDB), it is a tempting idea to integrate versioning in these systems rather than developing new ones. Castelltort and Laurent published one of the first papers [8] that seek to integrate versioning into general-purpose graph databases. This is achieved by creating a layout for a "meta-graph" that can contain and manage multiple versions of itself. The graph database contains this meta-graph, and incoming queries need to be aware of this structure in order to extract the information they require. As Castelltort and Laurent clearly show in their paper, the complexity of queries sharply increases in such a scenario. Due to the increased number of checks that need to be performed by each query, the performance inevitably degrades as well. Perhaps the largest drawback of this approach is that the application needs to be aware of and manage this additional layer of complexity. There are several different approaches for creating a layout for the meta-graph, e.g., the solution proposed by Taentzer et al. [71] which is based on modeling differences between versions as graphs. There is one central issue which is shared by all layouts: Given a suitable set of graph changes, they introduce vertices in the graph which have a very high degree of incoming and/or outgoing edges for the purpose of version control. Such super vertices represent a problematic corner case in any graph database and may lead to poor performance and storage space utilization. As ChronoGraph manages version control on a lower level, there is no need to introduce any additional graph elements in order to achieve versioning. The disadvantage of our solution in that regard is that a completely new implementation was required and existing graph databases could not be reused.
Other related approaches, e.g., by Semertzidis and Pitoura [66, 67] or by Han et al. [29], assume the existence of a series of graph snapshots as input to their solutions. These approaches do not aim for online transaction processing (OLTP) capabilities and focus on the analysis of a series of static graphs. A direct comparison with our approach is therefore not feasible. However, the data managed by ChronoGraph may serve as an input to those tools, as each graph revision can be extracted individually and consequently be treated as a series of snapshots.
9.3 Related repository solutions
Table 8 Model repository feature comparison of Eclipse CDO, MORSA, EMFStore, MagicDraw Teamwork Server, MagicDraw Teamwork Cloud, Hawk Model Indexer, Neo4EMF, GreyCat, GenMyModel and MDEForge (storage technologies range from SQL/document stores and key–value stores to versioned graphs and cloud/SaaS offerings)
Table 8 shows a comparison of related model repositories based on the required features we established in Sect. 2. The table can be divided into two sections, which are cloud (Software-as-a-Service) solutions and on premise solutions. While cloud-based solutions for EAM models exist (e.g., Iteraplan31), more fine-grained IT Landscape models are widely considered to be very sensitive data in industry. Unauthorized access to such models could guide a potential attacker to the servers where the impact of the attack is maximized. Most companies therefore require an on premise deployment of the software that manages their IT Landscapes. This eliminates cloud-based tools such as MDEForge [3] and GenMyModel [14] from the list of possible repository candidates for IT Landscape models.
Connected Data Objects (CDO)32 is widely considered to be the gold standard of model repositories. This repository uses SQL databases to store model data in a versioned fashion. CDO handles the versioning process internally; it does not make use of versioning features in the underlying database. With respect to features, CDO is the most complete competitor to ChronoSphere. However, CDO exhibits several weaknesses when employed in practice [68]. In particular, the lack of any support for metamodel evolution motivated our decision to implement a novel model repository. ChronoSphere also avoids the Object-Relational Mapping (O/R-Mapping) process which is employed by CDO in order to transfer model data into and out of the underlying database. As we have shown in Sect. 6.1, there is a natural pattern for transforming model elements into graph elements (and vice versa), whereas O/R-Mapping is a fairly involved process [39], both conceptually and with respect to resource usage (e.g., CPU and RAM). We also performed a comparative benchmark between CDO and ChronoSphere in Sect. 8.
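The following sketch, using the EMF and TinkerPop APIs, hints at why this mapping is natural; the property keys and the handling of the EClass are simplified assumptions and do not reflect ChronoSphere's exact graph layout.

    import org.apache.tinkerpop.gremlin.structure.Graph;
    import org.apache.tinkerpop.gremlin.structure.Vertex;
    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EObject;

    // Simplified EObject-to-vertex mapping (property keys are illustrative).
    public static Vertex toVertex(Graph graph, EObject eObject) {
        Vertex vertex = graph.addVertex("eClass", eObject.eClass().getName());
        for (EAttribute attribute : eObject.eClass().getEAllAttributes()) {
            if (eObject.eIsSet(attribute)) {
                vertex.property(attribute.getName(), eObject.eGet(attribute));
            }
        }
        // EReferences become labeled edges between the vertices of source and target EObjects.
        return vertex;
    }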
Neo4EMF [4] was the first storage solution for EMF models that is based on graph database technology. This work has inspired and guided our efforts. ChronoSphere utilizes a similar model-to-graph mapping as Neo4EMF, albeit a different implementation for technical reasons. Neo4EMF showed the advantages of graph-based persistence; however, it is a persistence framework rather than a model repository. Central features, such as versioning, branching and ACID transaction support, remain unaddressed by this technology.
Hawk Model Indexer [2] is another solution in the realm of model engineering that seeks to utilize graph-based persistence for models. As the name implies, Hawk is primarily an indexer. It is therefore not responsible for the actual model persistence, but rather for creating a secondary structure to speed up incoming queries. Hawk is intended to be used as an assistive technology and does not qualify as a standalone model repository. Just as with Neo4EMF, features like versioning and branching are not considered by this approach.
MORSA [54] was one of the first NoSQL model repositories. It stores models in a document-based backend (MongoDB33). The main features of MORSA include model versioning and persistence. However, in contrast to ChronoSphere, MORSA treats a model as a single atomic artifact. Queries on the model content are therefore not supported. Also, the versioning process takes place on the granularity of the entire model (rather than per-element as in ChronoSphere). MORSA is suitable for storing and retrieving hand-crafted models of smaller sizes. The large models generated by automated processes in IT Landscapes would introduce a prohibitive amount of overhead for whole-model versioning approaches.
EMFStore [41] is a model repository that operates on model differences, which are stored in files in an XML-based format. This allows EMFStore to support per-element versioning and branching as well as efficient storage utilization for long history chains. However, EMFStore does not offer support for model content indexing and/or querying. Retrieving a model version requires a checkout operation as seen in traditional version control systems, e.g., Git or SVN. The commercial tool MagicDraw Teamwork Server34 follows a similar approach as EMFStore. Teamwork Server internally stores the XMI [52] representation of a model in a folder controlled by SVN, which introduces similar scalability issues as discussed for MORSA. ChronoSphere follows a different approach: each version of each model element is accessible in logarithmic time without requiring a checkout procedure of the entire model. Also, ChronoSphere allows for indexing and querying of the model content, in contrast to the other mentioned solutions in this category. Teamwork Server has been superseded by MagicDraw Teamwork Cloud,35 which employs a per-element versioning approach and is based on Apache Cassandra. Even though this approach allows for higher scalability, due to the nature of Cassandra, this solution cannot support ACID transactions. As of the current version (19.0), according to the official API documentation,36 Teamwork Cloud does not offer any extended querying capabilities beyond retrieving a model as a whole and picking individual elements by ID. It does, however, utilize the same retrieval model as ChronoSphere, where elements are retrieved by stating their ID as well as their branch and timestamp.
An approach that is conceptually close to ChronoSphere, but does not declare itself as a model repository, is GreyCat37 [33, 34]. GreyCat stores models in versioned graphs. It is based on the Kevoree Modeling Framework [23] (KMF), which is an alternative to EMF that focuses on models-at-runtime scenarios. KMF metamodels can be automatically derived from Ecore definitions (hence we consider the metamodeling process to be Ecore compliant in Table 8). Much like ChronoGraph, GreyCat implements its own graph layer on top of a NoSQL storage. In its storage layer, GreyCat uses history chains (traces in GreyCat terminology) which consist of change operations. Thus, GreyCat utilizes difference-based versioning, whereas ChronoGraph employs state-based versioning. While GreyCat also implements a property graph, it does not support the TinkerPop API. A major difference that sets ChronoSphere and GreyCat apart is the fact that GreyCat relies heavily on code generation. This entails that the metamodel for GreyCat is fixed for all versions and branches, and metamodel evolution cannot be supported in the same way as it is possible in ChronoSphere. GreyCat (and its predecessors) and ChronoSphere have been developed during the same time period as independent projects, which is the reason why neither of them is built on top of the other. The existence of two independent solutions also highlights both the importance of versioned model storage and the suitability of property graphs for this task.
Further related work specifically in the IT Landscape domain includes Configuration Management Databases (CMDBs). There is a wide variety of commercial products on the market (e.g., BMC Atrium,38 ServiceNow39 or HP Universal CMDB40). A direct comparison with ChronoSphere is infeasible because CMDBs are tightly bound to their application domain, whereas our solution is generic and domain independent. The metamodel in a CMDB is usually fixed and tailored toward the IT operations use case. Versioning capabilities are also found in CMDB products, but they are often limited to the history of single elements (i.e., it is not possible to move an entire view with multiple elements back in time). Overall, we do not consider CMDBs to be model repositories because they do not utilize a metamodeling language (e.g., Ecore), they are domain-specific and operate on a fixed metamodel. However, a model repository can be used as the back-end of a CMDB or EAM application.
Table 9 The Chronos technology stack
ChronoDB [25] (Versioned Key-Value Store): https://github.com/MartinHaeusler/chronos/tree/master/org.chronos.chronodb
ChronoGraph [27] (Versioned TinkerPop Graph Database): https://github.com/MartinHaeusler/chronos/tree/master/org.chronos.chronograph
ChronoSphere [28] (Ecore Model Repository): https://github.com/MartinHaeusler/chronos/tree/master/org.chronos.chronosphere
Table 8 shows two important facts. On the one hand, all of the features we implemented in ChronoSphere (except for historical archiving) are present in at least one related tool. This emphasizes the importance of the chosen feature set. On the other hand, this table shows that ChronoSphere also fulfills all requirements of a general-purpose model repository and is therefore not restricted to IT Landscape modeling in any way.
9.4 ChronoSphere as a generic model repository
ChronoSphere has been created specifically for the use case of IT Landscape documentation. However, the resulting concepts and software are generic and have no dependencies to this particular domain. As a general-purpose EMF model repository with a rich feature set, ChronoSphere can be applied in a wide range of use cases, in particular in models-at-runtime scenarios. In the context of this paper, we decided to focus on the domain for which the tool was originally created. While the features of ChronoSphere are generic, it is optimized for the workloads expected in the IT Landscape domain (model sizes, frequency of insertions and queries, number of concurrent users...). We will conduct further studies in the future where we apply ChronoSphere in different domains.
10 Outlook and future work
ChronoSphere and its components currently operate exclusively in local deployments. However, as other projects (e.g., Neo4J and TitanDB) have shown, graph databases lend themselves well to distribution across several machines. One of our future goals is to create a distributed version of ChronoGraph for greater scalability. The fact that this database operates in a versioned, append-only fashion should ease this transition as the common problem of encountering stale data is eliminated by design. Due to the chosen layer separation, the code base of ChronoSphere itself will remain largely unchanged. Overall, we hope to achieve a distributed model repository that can scale well with even larger models and higher numbers of concurrent users.
EQuery, the query framework introduced by ChronoSphere, is constantly evolving. With inspiration from Project Mogwaï [11], we aim for the inclusion of OCL expression evaluation into our EQuery framework. This will allow programmers to have an ocl(String) step in the query where the OCL statement is provided as a string. This string will then be analyzed and transformed into a graph query, which is then used as a subquery in the overall evaluation. By similar means, we intend to integrate the Epsilon Object Language [42] as sub-expressions in our query framework. This will allow ChronoSphere to offer support for several different query languages within the same underlying engine.
The metamodel evolution facilities in ChronoSphere are intended as a baseline. They offer the atomic commands required by any evolution mechanism and focus on raw functionality. However, certain use cases may require more sophisticated approaches, e.g., transformations based on differences between two given metamodels. We plan on introducing additional abstraction layers on top of the current imperative design in order to support such transformations, gradually reducing and ultimately eliminating the required amount of manual coding efforts.
Finally, we will continue our ongoing efforts to increase the overall code quality, test coverage, documentation and performance of the implementation (Table 9). ChronoSphere is a work-in-progress project that uses a large amount of software that was developed specifically for this purpose and consequently includes fewer off-the-shelf software components than other projects. There is still a lot of room for improvement in the implementation details, which we intend to explore in the near future.
11 Summary
In this paper, we presented ChronoSphere, a novel open-source EMF model repository. This model repository was designed and implemented to support large IT Landscape models in industrial contexts, but is generic and can be employed in any EMF-based modeling scenario. We described how we inferred the requirements from the IT Landscape context and how they relate to the technical features offered by ChronoSphere. We then focused on the concepts behind our repository implementation, which also contributed to the state-of-the-art in versioned data storage and graph databases. We discussed the commonalities and differences of our solution with respect to related repository technology. Our concepts and technology were evaluated in a case study where ChronoSphere is used as the primary storage backend by the industrial IT Landscape modeling and analysis tool Txture. Building on the use cases of this case study, we performed a comparative benchmark with a state-of-the-art model repository and demonstrated the competitive performance of our solution. ChronoSphere is a fresh impulse in the area of model repositories, not only in terms of its features and implemented standards, but first and foremost in that it provides the entire data management stack, allowing for a clean and consistent architecture. As all of the individual components are available as open-source software, each aspect is accessible for experimentation and innovation in future research projects. ChronoSphere is an all-new approach to model repositories, and we hope that it will serve as a platform for future projects in research and industry alike.
www.txture.io.
https://neo4j.com/.
http://titan.thinkaurelius.com/.
http://tinkerpop.apache.org/.
https://github.com/MartinHaeusler/chronos/tree/master/org.chronos.chronograph.
https://github.com/MartinHaeusler/chronos/tree/master/org.chronos.chronodb.
https://github.com/EsotericSoftware/kryo.
https://github.com/cojen/Tupl.
In most cases, linear search in main memory is still faster than performing \(\mathcal {O}(\hbox {log}(n))\) disk accesses. Also, the cache usually does not contain all data, so these lists will remain short.
http://s3.thinkaurelius.com/docs/titan/1.0.0/schema.html.
https://orientdb.com/docs/last/Tutorial-Using-schema-with-graphs.html.
All circles in Fig. 15 represent serialized vertex records or edge records.
We do not support multivalued properties directly as intended by TinkerPop. However, we do support regular properties of List or Set types.
The Graph Computer is the entry point to the distributed Online Analytics Processing (OLAP) API. Support for this feature may be added in future versions of ChronoGraph.
Transactions are an optional feature in TinkerPop 3.
Vertices that have been deleted by transaction t1 while being modified concurrently by transaction t2 do not disappear from the graph; they remain as Ghosts.
Half Edges refer to the situation where an edge is only traversable and visible in one direction, i.e., the out-vertex lists the edge as outgoing, but the in-vertex does not list it as incoming, or vice versa.
https://www.osgi.org/.
https://mvnrepository.com/artifact/com.github.martinhaeusler/org.chronos.chronosphere.
https://docs.jboss.org/hibernate/orm/3.3/reference/en/html/queryhql.html.
In Ecore, the operation eGet(...) can be used to access references and attributes. While EReferences always point to other EObjects, EAttributes can have arbitrary values. Thus, eGet(...) in EQuery returns an untyped stream of objects.
https://flywaydb.org/.
http://www.liquibase.org/.
We use the term "client" to refer to application code that operates on top of ChronoSphere. This can be a remote method invocation, another thread or simply a method call to the public API.
https://groovy-lang.org.
https://git.io/vxZl3.
https://git.io/vxZWp.
https://wiki.eclipse.org/Acceleo/Acceleo_Operations_Reference.
https://www.iteraplan.de/en.
https://wiki.eclipse.org/CDO.
https://www.mongodb.com/.
https://www.nomagic.com/products/teamwork-server.
https://www.nomagic.com/products/teamwork-cloud.
https://osmc.nomagic.com/19.0/swagger/index.html#.
https://greycat.ai.
https://www.bmc.com/it-solutions/cmdb-configuration-management.html.
https://www.servicenow.com/.
http://cmshelpcenter.saas.hp.com/CMS/10.30/ucmdb-docs/docs/eng/doc_lib/Content/DIC_Guide.htm.
Open access funding provided by Austrian Science Fund (FWF). This work was partially funded by the research project "txtureSA" (FWF-Project P 29022).
Barmpis, K., Kolovos, D.: Comparative analysis of data persistence technologies for large-scale models. In: Proceedings of the 2012 Extreme Modeling Workshop. ACM (2012). http://dl.acm.org/citation.cfm?id=2467314. Accessed 12 Feb 2019
Barmpis, K., Kolovos, D.: Hawk: towards a scalable model indexing architecture. In: Proceedings of the Workshop on Scalability in Model Driven Engineering, p. 6. ACM (2013)
Basciani, F., Di Rocco, J., Di Ruscio, D., Di Salle, A., Iovino, L., Pierantonio, A.: MDEForge: an extensible web-based modeling platform. In: CloudMDE@MoDELS, pp. 66–75 (2014)
Benelallam, A., Gómez, A., et al.: Neo4EMF, a scalable persistence layer for EMF models. In: Modelling Foundations and Applications, pp. 230–241. Springer (2014)
Bennaceur, A., France, R., Tamburrelli, G., Vogel, T., Mosterman, P.J., Cazzola, W., Costa, F.M., Pierantonio, A., Tichy, M., Akşit, M., et al.: Mechanisms for leveraging models at runtime in self-adaptive software. In: Models@run.time, pp. 19–46. Springer (2014)
Bernstein, P.A., Goodman, N.: Multiversion concurrency control theory and algorithms. ACM Trans. Database Syst. (TODS) 8(4), 465–483 (1983)
Cannon, D.L.: ITIL Service Operation. TSO The Stationery Office (2007)
Castelltort, A., Laurent, A.: Representing history in graph-oriented NoSQL databases: a versioning system. In: Eighth International Conference on Digital Information Management (ICDIM 2013), Islamabad, Pakistan, September 10–12, 2013, pp. 228–234 (2013). https://doi.org/10.1109/ICDIM.2013.6694022
Cicchetti, A., Di Ruscio, D., Eramo, R., Pierantonio, A.: Automating co-evolution in model-driven engineering. In: 12th International IEEE Enterprise Distributed Object Computing Conference, EDOC'08, pp. 222–231. IEEE (2008)
Codd, E.F., Codd, S.B., Salley, C.T.: Providing OLAP (on-line analytical processing) to user-analysts: an IT mandate. Codd Date 24 (1993)
Daniel, G., Sunyé, G., Cabot, J.: Mogwaï: a framework to handle complex queries on large models. In: Tenth IEEE International Conference on Research Challenges in Information Science, RCIS 2016, Grenoble, France, June 1–3, 2016, pp. 1–12 (2016). https://doi.org/10.1109/RCIS.2016.7549343
Demuth, B., Wilke, C.: Model and object verification by using Dresden OCL. In: Proceedings of the Russian-German Workshop Innovation Information Technologies: Theory and Practice, Ufa, Russia, pp. 687–690. Citeseer (2009)
Di Rocco, J., Di Ruscio, D., Iovino, L., Pierantonio, A.: Collaborative repositories in model-driven engineering. IEEE Softw. 32(3), 28–34 (2015). https://doi.org/10.1109/MS.2015.61
Dirix, M., Muller, A., Aranega, V.: GenMyModel: an online UML CASE tool. In: ECOOP (2013)
Easton, M.C.: Key-sequence data sets on indelible storage. IBM J. Res. Dev. 30(3), 230–241 (1986)
ben Fadhel, A., Kessentini, M., Langer, P., Wimmer, M.: Search-based detection of high-level model changes. In: 28th IEEE International Conference on Software Maintenance (ICSM), pp. 212–221. IEEE (2012)
Farwick, M., Agreiter, B., Breu, R., Ryll, S., Voges, K., Hanschke, I.: Requirements for automated enterprise architecture model maintenance. In: International Conference on Enterprise Information Systems (ICEIS). SciTePress (2011)
Farwick, M., Breu, R., Hauder, M., Roth, S., Matthes, F.: Enterprise architecture documentation: empirical analysis of information sources for automation. In: Hawaii International Conference on System Sciences (HICSS). IEEE, Wailea (2013)
Farwick, M., Schweda, C.M., Breu, R., Voges, K., Hanschke, I.: On enterprise architecture change events. In: Lecture Notes in Business Information Processing, vol. 131 LNBIP, pp. 129–145. Barcelona, Spain (2012). https://doi.org/10.1007/978-3-642-34163-2_8. Accessed 12 Feb 2019
Farwick, M., Trojer, T., Breu, M., Ginther, S., Kleinlercher, J., Doblander, A.: A case study on textual enterprise architecture modeling. In: Enterprise Distributed Object Computing Conference Workshops (EDOCW). IEEE (2013)
Felber, P., Pasin, M., Riviere, E., Schiavoni, V., Sutra, P., Coelho, F., et al.: On the support of versioning in distributed key-value stores. In: 33rd IEEE SRDS 2014, Nara, Japan, October 6–9, 2014, pp. 95–104 (2014). https://doi.org/10.1109/SRDS.2014.35
Ferry, N., Almeida, M., Solberg, A.: The MODAClouds Model-Driven Development, pp. 23–33. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-46031-4_3
Fouquet, F., Nain, G., Morin, B., Daubert, E., Barais, O., Plouzeau, N., Jézéquel, J.M.: Kevoree modeling framework (KMF): efficient modeling techniques for runtime use (2014). arXiv preprint arXiv:1405.6817
Giese, H., Hildebrandt, S., Neumann, S.: Model synchronization at work: keeping SysML and AUTOSAR models consistent. In: Graph Transformations and Model-Driven Engineering, vol. 5765, pp. 555–579 (2010)
Haeusler, M.: Scalable versioning for key-value stores. In: DATA 2016—Proceedings of 5th International Conference on Data Management Technologies and Applications, Lisbon, Portugal, 24–26 July, 2016, pp. 79–86 (2016). https://doi.org/10.5220/0005938700790086
Haeusler, M., Breu, R.: Sustainable management of versioned data. In: Proceedings of the 24th PhD Mini-Symposium. Budapest University of Technology and Economics (2017)
Haeusler, M., Trojer, T., Kessler, J., Farwick, M., Nowakowski, E., Breu, R.: ChronoGraph: versioning support for OLTP TinkerPop graphs. In: DATA 2017—Proceedings of 5th International Conference on Data Management Technologies and Applications, Madrid, Spain, 24–26 July, 2017 (2017) (in print)
Haeusler, M., Trojer, T., Kessler, J., Farwick, M., Nowakowski, E., Breu, R.: Combining versioning and metamodel evolution in the ChronoSphere model repository, vol. 10706. LNCS (2018). https://doi.org/10.1007/978-3-319-73117-9_11
Han, W., Miao, Y., Li, K., Wu, M., Yang, F., Zhou, L., Prabhakaran, V., Chen, W., Chen, E.: Chronos: a graph engine for temporal graph analysis. In: Proceedings of the Ninth European Conference on Computer Systems, p. 1. ACM (2014)
Hanschke, I.: Strategic IT Management: A Toolkit for Enterprise Architecture Management. Springer, Berlin (2009). https://doi.org/10.1007/978-3-642-05034-3
Hanson, E.N.: The Interval Skip List: A Data Structure for Finding All Intervals that Overlap a Point, pp. 153–164. Springer, Berlin (1991). https://doi.org/10.1007/BFb0028258
Hart, M., Jesse, S.: Oracle Database 10G High Availability with RAC, Flashback and Data Guard, 1st edn. McGraw-Hill Inc, New York (2004)
Hartmann, T., Fouquet, F., Jimenez, M., Rouvoy, R., Le Traon, Y.: Analyzing complex data in motion at scale with temporal graphs. In: The 29th International Conference on Software Engineering and Knowledge Engineering (SEKE'17), p. 6. KSI Research (2017)
Hartmann, T., Fouquet, F., Nain, G., Morin, B., Klein, J., Barais, O., Le Traon, Y.: A native versioning concept to support historized models at runtime. In: International Conference on Model Driven Engineering Languages and Systems, pp. 252–268. Springer (2014)
Herrmannsdoerfer, M., Benz, S., Juergens, E.: COPE-Automating Coupled Evolution of Metamodels and Models, pp. 52–76. Springer, Berlin (2009). https://doi.org/10.1007/978-3-642-03013-0_4
Huber, N., Brosig, F., Kounev, S.: Modeling dynamic virtualized resource landscapes. In: Proceedings of the 8th International ACM SIGSOFT Conference on Quality of Software Architectures, QoSA '12, pp. 81–90. ACM, New York (2012). https://doi.org/10.1145/2304696.2304711
Iovino, L., Pierantonio, A., Malavolta, I.: On the impact significance of metamodel evolution in MDE. J. Object Technol. 11(3), 1–3 (2012)
ISO: SQL Standard 2011 (ISO/IEC 9075:2011) (2011)
Keith, M., Schnicariol, M.: Object-relational mapping. In: Pro JPA 2, pp. 69–106. Springer (2009)
Khelladi, D.E., Hebig, R., Bendraou, R., Robin, J., Gervais, M.P.: Detecting Complex Changes During Metamodel Evolution, pp. 263–278. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19069-3_17
Koegel, M., Helming, J.: EMFStore: a model repository for EMF models. In: Kramer, J., Bishop, J., Devanbu, P.T., Uchitel, S. (eds.) ICSE, vol. 2, pp. 307–308. ACM (2010). https://dl.acm.org/citation.cfm?doid=1810295.1810364
Kolovos, D.S., Paige, R.F., Polack, F.: The Epsilon Object Language (EOL). In: Proceedings of Second European Conference on Model Driven Architecture-Foundations and Applications, ECMDA-FA 2006, Bilbao, Spain, July 10–13, 2006, pp. 128–142 (2006). https://doi.org/10.1007/11787044_11
Lankhorst, M.M., Proper, H.A., Jonkers, H.: The architecture of the ArchiMate language. In: Enterprise, Business-Process and Information Systems Modeling, pp. 367–380. Springer (2009)
Lomet, D., Barga, R., Mokbel, M., Shegalov, G.: Transaction time support inside a database engine. In: Proceedings of the 22nd ICDE, pp. 35–35 (2006)
Lomet, D., Hong, M., Nehme, R., Zhang, R.: Transaction time indexing with version compression. Proc. VLDB Endow. 1(1), 870–881 (2008)
Lomet, D., Salzberg, B.: Access methods for multiversion data. SIGMOD Rec. 18(2), 315–324 (1989). https://doi.org/10.1145/66926.66956
McGregor, A.: Graph stream algorithms: a survey. SIGMOD Rec. 43(1), 9–20 (2014). https://doi.org/10.1145/2627692.2627694
Meyers, B., Wimmer, M., Cicchetti, A., Sprinkle, J.: A generic in-place transformation-based approach to structured model co-evolution. Electron. Commun. EASST 42 (2012)
Narayanan, A., Levendovszky, T., Balasubramanian, D., Karsai, G.: Automatic domain model migration to manage metamodel evolution. Model Driven Engineering Languages and Systems, pp. 706–711 (2009)
Nascimento, M., Dunham, M., Elmasri, R.: M-IVTT: an index for bitemporal databases. In: Wagner, R., Thoma, H. (eds.) Database and Expert Systems Applications, Lecture Notes in Computer Science, pp. 779–790. Springer, Berlin (1996). https://doi.org/10.1007/BFb0034730
Nowakowski, E., Farwick, M., Trojer, T., Häusler, M., Kessler, J., Breu, R.: Enterprise architecture planning: analyses of requirements from practice and research. In: Proceedings of the 50th Hawaii International Conference on System Sciences (2017)
OMG: XML metadata interchange (XMI). OMG (2007). https://www.omg.org/spec/XMI
OMG: Object Constraint Language (OCL), version 2.3.1 (2012). http://www.omg.org/spec/OCL/2.3.1/. Accessed 12 Feb 2019
Pagán, J.E., Cuadrado, J.S., Molina, J.G.: Morsa: a scalable approach for persisting and accessing large models. In: International Conference on Model Driven Engineering Languages and Systems, pp. 77–92. Springer (2011)
Patiño Martínez, M., Sancho, D., Jiménez Peris, R., Brondino, I., Vianello, V., Dhamane, R.: Snapshot isolation for Neo4j. In: Advances in Database Technology (EDBT). OpenProceedings.org (2016)
Pigné, Y., Dutot, A., Guinand, F., Olivier, D.: GraphStream: a tool for bridging the gap between complex systems and dynamic graphs. In: Emergent Properties in Natural and Artificial Complex Systems. Satellite Conference within the 4th European Conference on Complex Systems (ECCS'2007), vol. abs/0803.2 (2008). arXiv:0803.2093
Ramaswamy, S.: Efficient indexing for constraint and temporal databases. In: Database Theory-ICDT'97, pp. 419–431. Springer (1997)
Rodriguez, M.A., Neubauer, P.: The graph traversal pattern. In: Graph Data Management: Techniques and Applications, pp. 29–46 (2011). https://doi.org/10.4018/978-1-61350-053-8.ch002
Rose, L.M., Kolovos, D.S., Paige, R.F., Polack, F.A.: Model migration with Epsilon Flock. ICMT 10, 184–198 (2010)
Rose, L.M., Paige, R.F., Kolovos, D.S., Polack, F.A.: An analysis of approaches to model migration. In: Proceedings of Joint MoDSE-MCCM Workshop, pp. 6–15 (2009)
Salzberg, B.: File Structures: An Analytic Approach. Prentice-Hall Inc, Upper Saddle River (1988)
Salzberg, B., Tsotras, V.J.: Comparison of access methods for time-evolving data. ACM Comput. Surv. (CSUR) 31(2), 158–221 (1999)
Saracco, C., Nicola, M., Gandhi, L.: A matter of time: temporal data management in DB2 10. IBM developerWorks (2012)
Schmidt, D.C.: Model-driven engineering. Comput. IEEE Comput. Soc. 39(2), 25 (2006)
Schmidt, J.M.: Interval stabbing problems in small integer ranges. In: ISAAC, pp. 163–172. Springer (2009)
Semertzidis, K., Pitoura, E.: Durable graph pattern queries on historical graphs. In: IEEE 32nd International Conference on Data Engineering (ICDE), pp. 541–552. IEEE (2016)
Semertzidis, K., Pitoura, E.: Time traveling in graphs using a graph database. In: Proceedings of the Workshops of the (EDBT/ICDT) (2016)
Seybold, D., Domaschka, J., Rossini, A., Hauser, C.B., Griesinger, F., Tsitsipas, A.: Experiences of models@ run-time with EMF and CDO. In: Proceedings of the 2016 ACM SIGPLAN International Conference on Software Language Engineering, pp. 46–56. ACM (2016)
Snodgrass, R.T.: Temporal databases. IEEE Comput. 19, 35–42 (1986)
Steinberg, D., Budinsky, F., Merks, E., Paternostro, M.: EMF: Eclipse Modeling Framework. Pearson Education, London (2008)
Taentzer, G., Ermel, C., Langer, P., Wimmer, M.: A fundamental approach to model versioning based on graph modifications: from theory to implementation. Softw. Syst. Model. 13(1), 239–272 (2014). https://doi.org/10.1007/s10270-012-0248-x CrossRefGoogle Scholar
Toulmé, A., Inc, I.: Presentation of EMF compare utility. In: Eclipse Modeling Symposium, pp. 1–8 (2006)Google Scholar
Trojer, T., Farwick, M., Häusler, M., Breu, R.: Living models of IT architectures: challenges and solutions. Softw. Serv. Syst. 8950, 458–474 (2015)CrossRefzbMATHGoogle Scholar
Wachsmuth, G.: Metamodel adaptation and model co-adaptation. In: ECOOP 2007—Object-Oriented Programming, 21st European Conference, Berlin, Germany, July 30–August 3, 2007, Proceedings, pp. 600–624 (2007). https://doi.org/10.1007/978-3-540-73589-2_28
Wimmer, M., Kusel, A., Schönböck, J., Retschitzegger, W., Schwinger, W., Kappel, G.: On using inplace transformations for model co-evolution. In: Proceedings of 2nd International Workshop Model Transformation with ATL, vol. 711, pp. 65–78 (2010)Google Scholar
\begin{definition}[Definition:Partial Quotient]
Let $k$ be a field.
Let $(a_n)_{n\geq0}$ be a continued fraction in $k$, either finite or infinite.
Let $n \geq 0$ be a natural number.
The '''$n$th partial quotient''' is the $n$th term $a_n$.
\end{definition}
\begin{definition}[Definition:General Logarithm/Common/Historical Note]
'''Common logarithms''' were developed by {{AuthorRef|Henry Briggs}}, as a direct offshoot of the work of {{AuthorRef|John Napier}}.
After seeing the tables that {{AuthorRef|John Napier|Napier}} published, {{AuthorRef|Henry Briggs|Briggs}} consulted {{AuthorRef|John Napier|Napier}}, and suggested defining them differently, using base $10$.
In $1617$, {{AuthorRef|Henry Briggs|Briggs}} published a set of tables of logarithms of the first $1000$ positive integers.
In $1624$, he published tables of logarithms which included $30 \, 000$ logarithms going up to $14$ decimal places.
Before the advent of cheap means of electronic calculation, '''common logarithms''' were widely used as a technique for performing multiplication.
\end{definition}
Effect of hemoadsorption during cardiopulmonary bypass surgery – a blinded, randomized, controlled pilot study using a novel adsorbent
Martin H. Bernardi^1, Harald Rinoesl^1, Klaus Dragosits^2, Robin Ristl^3, Friedrich Hoffelner^4, Philipp Opfermann^1, Christian Lamm^2, Falk Preißing^2, Dominik Wiedemann^4, Michael J. Hiesmayr^1 & Andreas Spittler^2,5
Critical Care volume 20, Article number: 96 (2016)
Cardiopulmonary bypass (CPB) surgery initiates a systemic inflammatory response, which is associated with postoperative morbidity and mortality. Hemoadsorption (HA) of cytokines may suppress inflammatory responses and improve outcomes. We tested the effect of a new HA sorbent (CytoSorb™; CytoSorbents Europe GmbH, Berlin, Germany) installed in the CPB circuit on changes in pro- and anti-inflammatory cytokine levels, inflammation markers, and the patients' perioperative course.
In this first pilot trial, 37 blinded patients undergoing elective CPB surgery at the Medical University of Vienna were randomly assigned to the HA group (n = 19) or the control group (n = 18). The primary outcome was the difference in cytokine levels (IL-1β, IL-6, IL-18, TNF-α, and IL-10) within the first five postoperative days. We also analyzed whether there were any differences in ex vivo lipopolysaccharide (LPS)-induced TNF-α production, a reduction of high-mobility group box 1 (HMGB1), or other inflammatory markers. Additionally, fluid components, blood products, catecholamine treatment, bioelectrical impedance analysis (BIA), and 30-day mortality were analyzed.
We did not find differences in our primary outcome immediately following the HA treatment, although we observed differences for IL-10 24 hours after CPB (HA: median 0.3, interquartile range (IQR) 0–4.5; control: not traceable, P = 0.0347) and 48 hours after CPB (median 0, IQR 0–1.2 versus not traceable, P = 0.0185). We did not find any differences for IL-6 between both groups, and other cytokines were rarely expressed. We found differences in pretreatment levels of HMGB1 (HA: median 0, IQR 0–28.1; control: median 48.6, IQR 12.7–597.3, P = 0.02083) but no significant changes to post-treatment levels. No differences in inflammatory markers, fluid administration, blood substitution, catecholamines, BIA, or 30-day mortality were found.
We did not find any reduction of the pro-inflammatory response in our patients and therefore no changes in their perioperative course. However, IL-10 showed a longer-lasting anti-inflammatory effect. The clinical impact of prolonged IL-10 needs further evaluation. We also observed strong inter-individual differences in cytokine levels; therefore, patients with an exaggerated inflammatory response to CPB need to be identified. The implementation of HA during CPB was feasible.
ClinicalTrials.gov: NCT01879176, registration date: June 7, 2013.
Cardiopulmonary bypass (CPB) surgery initiates a systemic inflammatory response induced by extrinsic and intrinsic factors [1–3]. Monocytes and high-mobility group box 1 protein (HMGB1), a chromatin protein encoded by the HMGB1 gene in humans, are important players in systemic inflammation and are among the main producers of pro- and anti-inflammatory cytokines [4, 5]. Once activated by the extracorporeal circuit, they may lead to a dysregulation of inflammatory homeostasis and increased levels of both pro- and anti-inflammatory plasma mediators such as tumor necrosis factor-alpha (TNF-α), interleukin-1β (IL-1β), IL-6, IL-10, and IL-18 [4, 6–9]. This strong inflammatory response induces post-surgical monocyte immunosuppression, which is indicated by an impaired ex vivo lipopolysaccharide (LPS)-induced TNF-α response [10].
All of these factors may lead to a prolonged postoperative course, including a delayed weaning from mechanical ventilation, recovery of organ functions, and discharge from the intensive care unit (ICU). Thus, measures to decrease the inflammatory process have the potential to improve the perioperative course [11]. Hemoadsorption (HA) using the CytoSorb™ adsorber (CytoSorbents Europe GmbH, Berlin, Germany) is a recent technology that has shown rapid elimination of many key cytokines that cannot be filtered by using current blood purification techniques [12].
The primary aim of this first single-center, blinded, randomized, and controlled pilot study was to investigate differences in pro- and anti-inflammatory cytokines in patients undergoing cardiac surgery with CPB using the CytoSorb™ adsorber compared with a control group within the first 5 postoperative days (POD). Furthermore, we investigated whether we could observe any differences in ex vivo LPS-induced TNF-α production, a reduction of HMGB1, or other inflammatory markers. We also investigated differences in fluid management and the use of catecholamines, and differences in edema formation as determined by analysis of body composition by bioelectrical impedance analysis (BIA). Additionally, we compared length of ICU stay, respirator therapy, and 30-day mortality.
This study was approved by the ethics committee of the Medical University of Vienna with reference number EK Nr: 1095/2013. Furthermore, we reported the study to the Austrian Federal Office for Safety in Health Care (INS-621000-0505) and registered it at ClinicalTrials.gov (NCT01879176) before recruitment started. Written informed consent to participate and consent to publish were obtained from each patient.
Study design and patients
This study was a randomized, blinded (in patients), controlled, single-center trial in 46 adult patients undergoing elective open heart surgery (coronary artery bypass graft [CABG], valve surgery, combined procedure) with an expected CPB duration of more than 120 minutes at the Department of Cardiac Surgery, Medical University of Vienna, Vienna, Austria. The study was conducted between Sept. 10, 2013, and May 6, 2015, at our department.
We excluded the following interventions or conditions: declined informed consent, transplant surgery, scheduled insertion of a cardiac assist device, thrombendarterectomy of the pulmonary arteries, emergency and urgent procedures, serum creatinine of more than 2 mg/dl, C-reactive protein (CRP) of more than 2 mg/dl, bilirubin of more than 2 mg/dl, body mass index (BMI) of less than 18 kg/m2, pregnancy, history of stroke, and patients receiving chemotherapy, anti-leukocyte drugs, TNF-α blockers, immunosuppressive drugs (e.g., tocilizumab) or with any diagnosed disease state that has produced leukopenia (e.g., acquired immune deficiency syndrome). Patient selection is shown in Fig. 1.
The selection process for patients included in the study
Eligible patients were enrolled the day before surgery by one of the physicians involved in the study and randomly assigned into one of two groups (HA or control). Randomization was performed as block randomization by the online Randomizer for Clinical Trials 1.7.0 (https://www.meduniwien.ac.at/randomizer). To create homogenous and comparable groups, the randomization was stratified by sex and procedures.
Primary outcome
The primary outcome was differences in the evolution of cytokine levels when the CytoSorb™ adsorber was used for HA during cardiopulmonary bypass.
Secondary outcomes
Secondary outcomes were differences in LPS-induced release of TNF-α; differences in the expression of HMGB1; changes in serum CRP or procalcitonin (PCT) concentrations; differences in the need of fluid components (crystalloid and colloid solutions), blood products (erythrocytes, fresh frozen plasma, and platelets), or catecholamine treatment; and changes in BIA, length of ICU stay, and 30-day mortality.
Number of patients
Because this is the first randomized controlled study and no prior data on cytokine level alterations in cardiac surgery patients using this HA device were available, we considered a mean difference of one standard deviation between groups to be a clinically relevant effect. Under this assumption, a t test-based calculation showed that 16 individuals per group are required to achieve 80 % power at a significance level of 5 %. We therefore planned a total of 40 patients to allow an adequately powered analysis with 20 % dropouts due to a complicated intraoperative course. To avoid the risk of low power, we increased the number of patients to 46 after completion of the 36th patient, because a total of 7 patients had dropped out at this time.
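For orientation, this calculation can be reproduced with standard power-analysis software; the following minimal Python sketch (the statsmodels call is our choice here, and the exact result of roughly 16–17 patients per group depends on the approximation used) only illustrates the stated assumptions:

# Two-sample t test: effect size = 1 SD, two-sided alpha = 0.05, power = 0.80.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05, power=0.80)
print(round(n_per_group, 1))  # about 16.7, i.e. roughly 16-17 patients per group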
Preoperative patient data (age, weight, height, sex, BIA, European system for cardiac operative risk evaluation [EuroSCORE], diagnosis, preoperative myocardial infarction within 24 hours [MCI], history of bronchial asthma, chronic obstructive pulmonary disease [COPD], insulin- or non-insulin-dependent diabetes mellitus [IDDM, NIDDM], history of chronic kidney disease [CKD], dialysis, left ventricular ejection fraction [LVEF], stable and unstable angina pectoris, cardiac decompensation, peripheral arterial obstructive disease [PAOD], and arterial hypertension), surgery-related factors (type of operation, duration of anaesthesia and surgery, duration of CPB and aortic cross-clamp [AoCC], unplanned insertion of assist devices, amount of fluids [crystalloids or colloids], need of catecholamines [noradrenalin, dobutamin, levosimendan, vasopressin, or milrinon], and need of blood products [erythrocytes, fresh frozen plasma, or thrombocytes] or coagulation factors [fibrinogen, prothrombin complex concentrate, desmopressin, or recombinant factor VIIa]), intraoperative diuresis, and postoperative data (use of catecholamines, BIA, length of stay in the intensive care unit [LOS-ICU], length of mechanical ventilation, or need of extracorporeal membrane oxygenation [ECMO]) were collected on a case report form.
Anaesthesia was induced and CPB circuit was primed (1000 ml crystalloid and 500 ml colloid solution together with 5000 IE heparin, and 100 ml mannitol 20 %) in accordance with institutional standards. CPB was performed by using non-pulsatile flow at 2.5 l · min−1 · m−2, a non-heparin-coated circuit, and a membrane oxygenator (Quadrox™, Maquet, Hirrlingen, Germany, or Capiox, Terumo, Eschborn, Germany). All study cases were performed by experienced cardiac anaesthesia fellows supervised by senior cardiac anaesthesiologists, both trained in transesophageal echocardiography (TOE), which was used to monitor myocardial performance and the impact of fluid loading and inotropic support on left and right ventricular function. Blood transfusion was performed in accordance with Society of Thoracic Surgeons/Society of Cardiovascular Anesthesiologists (STS-SCA) transfusion guidelines [13, 14], and administration of coagulation factors was based predominantly on rotational thromboelastometry (ROTEM) variables and the coagulation profile of each patient.
In the intervention group, we installed the 300 ml CytoSorb™ adsorber on the CPB machine. The active component of the CytoSorb™ device consists of adsorbent polymer beads composed of porous polymerized divinylbenzene. These beads have pores that can adsorb hydrophobic molecules in a size range of approximately 10 to 55 kD, which is sufficient to remove almost all known cytokines. The polymer beads are encased in a polycarbonate canister commonly used in commercially available dialyzers. Blood was pumped actively through the CytoSorb™ cartridge by using a side arm coming from the venous outflow tube and given back to the venous reservoir prior to the oxygenator. The flow through the cartridge was controlled by a roller pump with 200 ml/min to standardize flow conditions in all treated patients. The addition of 20 ml crystalloid solution was necessary to fill the additional line in the treatment group. The control group was treated similarly, but no adsorber was installed.
Blood sampling
Blood samples were drawn in pyrogen-free vials, and plasma was separated by centrifugation and frozen (−80 °C). Blood samples for cytokines (IL-1β, IL-6, IL-18, TNF-α, and IL-10) were determined at the following time points: A, before induction of anesthesia; B, before CPB; C, at the end of CPB; D, 2 hours after CPB; E, 24 hours after CPB; F, 48 hours after CPB; and G, 120 hours after CPB. The ex vivo LPS-induced TNF-α production was measured at the time points A–C, F, and G; and HMGB1 at time points B, D, and E. For the quantification of IL-1β, IL-6, TNF-α, and IL-10, we used the BD™ Cytometric Bead Array (CBA) Human Inflammatory Cytokines (BD Biosciences Europe, Erembodegem, Belgium) Kit; for quantification of IL-18, Human IL-18 Instant, ELISA (eBioscience, Inc., San Diego, CA, USA), and for quantification of HMGB1 the high-mobility group box 1 (HMGB1), ELISA Kit (MyBioSource, Inc., San Diego, CA, USA). For the measurement of ex vivo LPS-induced TNF-α, lipopolysaccharide from Escherichia coli was purchased (Sigma-Aldrich GmbH, Vienna, Austria) and prepared. For the analysis of LPS-induced TNF-α release, Human TNF-α Instant, ELISA (eBioscience, Inc.) was performed on each sample. All analyses were conducted in accordance with the protocols of the manufacturers.
Blood samples for CRP, procalcitonin, albumine, fibrinogen, hemoglobin, thrombocytes, and leukocytes were determined at the following time points: a baseline value within 24 hours preoperatively (BL), 1st postoperative morning (1.POD), 2nd postoperative morning (2.POD), and 5th postoperative morning (5.POD).
Bioelectrical impedance analysis
We performed BIA by using 800 μA at 50 kHz with a single-frequency bioimpedance analyzer (Model BIA 101; Akern-RJL, Pontassieve, Italy). The skin was cleaned, and adhesive pregelled electrodes (Bianostic AT; Data-Input GmbH, Wedemark, Germany) were placed on the hand (source on the third metacarpophalangeal joint and the detector on the wrist, between the distal prominences of the radius and ulna) and the foot (source on the third metatarsophalangeal joint and the detector on the ankle, between the medial and lateral malleoli) of the right side while patients were in a recumbent position with the limbs abducted from the body. Measurements were performed within 24 hours preoperatively, 1.POD, 2.POD, and 5.POD. The measured BIA variables were resistance (R), reactance (Xc), the phase angle (arctan Xc/R), and total body water (TBW). TBW was calculated by the following formulas according to the BIA analyzer we used [15, 16]:
$$
\begin{aligned}
\text{Female: } \mathrm{TBW} &= 0.382 \times (\mathrm{Ht}^2/R) + 0.105 \times \mathrm{weight} + 8.315\\
\text{Male: } \mathrm{TBW} &= 0.396 \times (\mathrm{Ht}^2/R) + 0.143 \times \mathrm{weight} + 8.399
\end{aligned}
$$
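A minimal Python sketch of how these formulas are applied (the input values below are hypothetical and only illustrate the calculation):

# Total body water (litres) from single-frequency BIA using the formulas above.
# height_cm corresponds to Ht, resistance_ohm to R, weight_kg to weight.
def total_body_water(sex, height_cm, resistance_ohm, weight_kg):
    ht2_over_r = height_cm ** 2 / resistance_ohm
    if sex.lower() == "female":
        return 0.382 * ht2_over_r + 0.105 * weight_kg + 8.315
    return 0.396 * ht2_over_r + 0.143 * weight_kg + 8.399

print(round(total_body_water("male", 178, 480, 82), 1))  # hypothetical patient, ~46 l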
Demographic and clinical baseline data were summarized by mean and standard deviation or mean and range, expressed through minimum and maximum, for metric variables or absolute frequencies for categorical variables. Differences between groups were analyzed by using the Student's t test for continuous variables and Fisher's exact test for categorical variables.
The distributions of the cytokine levels were highly skewed; therefore, between-group differences of these variables were assessed by using the non-parametric Wilcoxon rank-sum test. The distributions were described by median and interquartile range (IQR) expressed through the first quartile and the third quartile.
The distributions of laboratory values were largely symmetric without severe outliers. These variables were described by mean and standard deviation, and the effect of HA was assessed by using analysis of covariance (ANCOVA) models. In these models, the outcome is explained by the treatment group (HA versus control), the stratification variables (procedure and sex), and the observed preoperative baseline value (except when analyzing the baseline differences). To describe the correlation between cytokine levels and duration of procedure, we calculated Spearman's rank correlation coefficients at the end of treatment time (time point C).
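As an illustration of such an ANCOVA model, a minimal Python sketch with statsmodels (the data file and the column names group, sex, procedure, baseline, and postop are assumptions, not the actual study database):

import pandas as pd
import statsmodels.formula.api as smf

# Postoperative value explained by treatment group, the stratification
# variables (procedure, sex), and the observed preoperative baseline value.
df = pd.read_csv("lab_values.csv")  # hypothetical tidy data table
model = smf.ols("postop ~ C(group) + C(sex) + C(procedure) + baseline", data=df).fit()
print(model.summary())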
The analysis of IL-10 suggests that the decrease after HA may follow an exponential function. To investigate this, we fitted the following model for time-dependent decay of IL-10 for each group by using the non-linear least squares method: mean IL-10 = A*exp(λ*time). Also, to obtain a robust global test, we used a re-randomization test with 10,000 repeats. The patients were repeatedly randomly assigned anew with the same block randomization procedure as originally applied. In each repeat, a chi-squared-type statistic and a P value were calculated as the proportion of resampled statistics being equal to or larger than the observed statistic.
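A minimal sketch of this non-linear least squares fit (the group means below are placeholders rather than the study data, and the starting values p0 are arbitrary):

import numpy as np
from scipy.optimize import curve_fit

def decay(t, A, lam):
    # mean IL-10 = A * exp(lam * t)
    return A * np.exp(lam * t)

t_hours = np.array([0.0, 2.0, 24.0, 48.0])     # time after the end of CPB
mean_il10 = np.array([13.1, 9.0, 1.5, 0.2])    # placeholder group means (pg/ml)
(A_hat, lam_hat), _ = curve_fit(decay, t_hours, mean_il10, p0=(10.0, -0.1))
print(A_hat, lam_hat)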
In total, 46 patients were included in the study and randomly assigned to one of two groups (HA or control). Nine patients (5 HA and 4 control) dropped out of the study after randomization: in six patients, the procedure was rescheduled to another day or to later in the day, when the study team was no longer available. One patient (HA) developed hemodynamic instability after skin incision, leading to cardiopulmonary resuscitation and emergency initiation of CPB, so no adsorber could be installed, and in two patients (1 HA and 1 control) we lost the follow-up of our main outcome measurements, so we excluded them as well. Finally, we analyzed 37 patients; 19 patients had been randomly assigned to the HA group and 18 patients to the control group. For the analysis of our primary outcome, we excluded the patients with unexpected post-treatment ECMO therapy (n = 2) because of differences in cytokine exaggeration (Fig. 1). Our patients had a mean age of 66 ± 12 years, the mean EuroSCORE was 5.4, 30 % of them were female, and the mean temperature during CPB was 33 ± 2 °C. All patients survived the 30-day period, except one patient (HA) who died on the 22nd postoperative day because of multiple surgical complications. All other pre-, intra-, and postoperative patient characteristics showed no differences between the groups. Detailed results are shown in Table 1.
Table 1 Patient and surgical characteristics
We measured high amounts of IL-6 in both groups, increasing after CPB with a peak value 2 hours after CPB (HA: median 120.8, IQR 49.0–160.8 versus control: median 118.7, IQR 68.4–255.9 pg/ml, P = 0.6781). One patient (control) showed an increase of IL-6 after skin incision (2.1 pg/ml); in all other patients, the activation started during CPB. No significant difference was found between the treatment and control groups at any time point. The correlation between IL-6 at the end of treatment (time point C) and treatment duration was 0.34 for the HA group and 0.46 for the control group.
For IL-10, we observed an increase in one patient (HA) after induction of anesthesia (11.3 pg/ml). In four patients (2 HA and 2 control), the activation of IL-10 started before CPB; one of those also had an increase of IL-6. We did not find any similarities among those patients with pre-CPB increased levels of cytokines. IL-10 reached a peak value at the end of CPB (HA: median 13.1, IQR 3.3–18.7 versus control: median 18.5, IQR 5.7–68.0 pg/ml, P = 0.1562). The decrease of IL-10 appeared to occur earlier in the control group, with significant differences 24 hours after CPB (HA: median 0.3, IQR 0–4.5 pg/ml versus control: median 0, P = 0.0347) and 48 hours after CPB (HA: median 0, IQR 0–1.2 pg/ml versus control: not traceable, P = 0.0185). The correlation between IL-10 at the end of treatment (time point C) and treatment duration was 0.02 for the HA group and 0.32 for the control group.
The exponential decay model matched rather well the observed mean values and showed different characteristics for the two groups (P = 0.0188). For detailed results of the analysis of the cytokine evolution, see Table 2 and Figs. 2 and 3.
Table 2 Comparison of cytokine levels
Comparison of median cytokine levels in picograms per milliliter. Red lines indicate the patients in the CytoSorb™ treatment group. Black lines indicate the patients in the control group. Error bars correspond to interquartile ranges (first quartile, third quartile). Asterisks mark differences between both groups at a significance P < 0.05
Exponential decay model for mean interleukin-10 (IL-10). Black circles indicate IL-10 values for the control group, and red squares indicate IL-10 values for the CytoSorb™ group
We detected traceable values in only two patients for TNF-α and IL-18 and in one patient for IL-1β over the perioperative period. Therefore, we did not perform any statistical analysis on TNF-α, IL-18, and IL-1β.
Owing to technical problems with thawing of our frozen blood samples, we additionally lost one patient in the HA group and two patients in the control group. So we were able to analyze our primary outcome in only 16 patients in the HA group and 16 patients in the control group (Fig. 1).
Secondary outcome parameters
Ex vivo LPS-induced TNF-α release could be stimulated at all measured time points. We observed a significant difference between the groups preoperatively (HA: median 2216, IQR 1742–2659 versus control: median 3364, IQR 2579–4893 pg/ml, P = 0.004) and a reduction of LPS-induced TNF-α after CPB, but no significant difference between the groups. On the 2.POD, LPS-induced TNF-α again reached preoperative values but was significantly lower in the HA group (HA: median 788, IQR 679–1272 versus control: median 3959, IQR 2088–4777 pg/ml, P = 0.0115), as on the 5.POD (HA: median 1737, IQR 843–2535 versus control: median 3358, IQR 3017–3672 pg/ml, P = 0.0205). Unfortunately, we were able to analyze only 18 patients (9 HA and 9 control) on the 2.POD and 17 patients (9 HA and 8 control) on the 5.POD (Table 2).
HMGB1 showed a significantly different expression at baseline before treatment (HA: median 0, IQR 0–28.1 versus control: median 48.6, IQR 12.7–597.3 pg/ml, P = 0.0208) but no differences in the period after CPB, although post-treatment maximum levels in the control group were nearly double those of the HA group (HA: 705 versus control: 1594 pg/ml). The post hoc analysis was possible in 29 patients (15 HA and 14 control). We did not observe any differences in other inflammation markers such as CRP or PCT, or differences in leukocyte, thrombocyte, hemoglobin, albumin, or fibrinogen levels. The analysis of differences before and after the intervention showed significant decreases in hemoglobin, albumin, and thrombocytes, consistent with hemodilution, as well as significant increases in CRP and leukocytes, consistent with the usual postoperative inflammation, within both groups. We did not find differences in the change of PCT, HMGB1, or fibrinogen levels within either group. Detailed information is shown in Tables 3 and 4.
Table 3 Comparison of laboratory values
Table 4 Differences between preoperative and postoperative values
We performed BIA in 19 patients (9 HA and 10 control). However, no differences in baseline values of resistance, phase angle, or TBW were observed between the groups (Table 5).
Table 5 Comparison of bioelectrical impedance analysis
We also analyzed the need for catecholamines in both groups within the first 24 hours, but we did not analyze differences on day 2 or 5, because only 20 patients (9 HA and 11 control) remained in the ICU after 24 hours. We did not find any differences in the need for noradrenalin, dobutamin, or levosimendan, although it is worth mentioning that only eight patients in the control group and 11 patients in the HA group received dobutamin, and only eight patients (5 HA and 3 control) were treated with levosimendan (Table 6).
Table 6 Catecholamine treatment
Cardiac surgery is associated with an unpredictable activation of the immune system with an increase of pro-inflammatory cytokines as well as a decrease of anti-inflammatory cytokines, which is caused by blood contact with artificial surfaces and therefore is linked to adverse outcomes [17, 18]. In this (to our knowledge) first controlled study in patients undergoing on-pump cardiac surgery treated with the CytoSorb™ adsorber, no significant differences of pro-inflammatory cytokine levels were found. Even though a reduction of absolute levels within the first 24 hours after CPB is noticeable, no significant changes were observed. Given that we did not observe any adverse device-related side effects or differences in reduction of blood cells or albumin, our study shows that using the CytoSorb™ adsorber cartridge in a CPB circuit is technically feasible.
The CytoSorb™ adsorber is indicated to reduce cytokine concentrations in various clinical situations with elevated cytokine levels; it has been tested previously and demonstrated significant cytokine adsorption [19–21], an effect we could not reproduce in our patients. There may be several reasons for this outcome. First, we had a mean treatment time of 191 ± 56 minutes, which may be too short to allow a significant reduction of cytokine levels, although we did not find any correlation between cytokine peaks and treatment time. In all previously conducted studies [19–21] and case reports [12, 22, 23], the treatment time ranged from at least 4 hours up to 4 days. In a porcine CPB model with 5 hours of treatment time, likewise no effect on IL-6 or TNF-α was found [24]. Second, the CPB-triggered immunoactivation and the increase in cytokines and inflammation markers that we observed occurred after CPB and therefore after the HA treatment. Thus, the concentration-dependent adsorption of CytoSorb™ may explain why we did not observe any HA of cytokines. Third, we may have assumed an overly optimistic effect and therefore planned too small a sample size, which is also suggested by the strong inter-individual differences in cytokine levels that we observed. However, at the time of planning this pilot study, no clinically relevant data concerning our primary outcome were available. Fourth, we included only the least sick cohort of patients undergoing cardiac surgery. Although we did not restrict the EuroSCORE levels of our included patients, our exclusion criteria resulted in a moderate preoperative risk for postoperative outcomes. In a recently published cohort analysis [25] representing more than 9,000 cardiac surgical patients operated on at our center, we found similar demographics, so that, in our opinion, the population we investigated represents a cross-section of elective moderate-risk patients operated on at our center.
We rarely observed the production of TNF-α in our patients and this goes hand in hand with the contentious role of TNF-α within CPB; although some studies have shown an increase, others have not [26]. Also, IL-1β has been found to be detectable in only a small proportion of patients with systemic inflammatory response syndrome and sepsis [27].
IL-10 is thought to downregulate cytokine production [27], and high concentrations have been observed in our patients. Our results follow a similar time course to the pro-inflammatory cytokines with an early peak and subsequently falling concentrations. Interestingly, we observed a slower decrease of postoperative IL-10 levels in the HA group and therefore a longer effect up to 48 hours postoperatively. Recently, higher IL-10 levels following cardiac surgery have been associated with a decreased risk of mortality [28]. Although we did not find a reduction in mortality, this arguable immuno-protective effect needs to be investigated further.
We also found significant time-dependent changes in ex vivo LPS-induced TNF-α release. Surprisingly, we had already observed a lower stimulation rate in the HA group before treatment. This could reflect effect modification by comorbidities such as diabetes mellitus (DM). There is good evidence that patients with DM have an immunosuppressive condition and a poorer humoral response, including decreased cytokine production, and therefore have an increased susceptibility to infections [29–31]. Although there were no differences between the groups, the lowest levels of LPS-induced TNF-α release were found in HA-treated patients with DM. Additionally, it may be an effect of associated preoperative drug therapy such as metformin, aspirin, or statins, which we did not record. HMGB1 is associated with the inflammatory response after ischemia/reperfusion injury following cardiopulmonary bypass, and it has been shown that a reduction of HMGB1 decreases markers of cardiac damage. Also, elevated levels of HMGB1 have been correlated with the severity of heart failure [32–35]. However, we observed preoperative differences in HMGB1; therefore, it is not possible to interpret whether there is a post-treatment difference. In line with our LPS-induced TNF-α results, we cannot rule out a hidden phenotypic difference between the groups despite randomization that may be associated with higher levels of HMGB1 before CPB. We think the effects of HMGB1 and HA on patients' postoperative course should be investigated further.
Last, we observed no differences in our patients' body composition, need for catecholamines, or postoperative fluid balances. Therefore, we cannot conclude that there is less edema formation in the HA group. Unfortunately, we did not record systemic vascular resistance after CPB to assess whether the HA group was more vasodilated.
A limitation of a study such as ours may be effect modification owing to omitted or unobserved confounding risk indicators, although we included the most relevant risk indicators to rule out any systematic effect. However, we did not monitor the preoperative use of non-steroidal anti-inflammatory drugs (e.g., aspirin), statins, or metformin, which may have anti-inflammatory effects [36, 37]. Another limitation is that our study design was not double-blinded but only patient-blinded. Although blinding in studies involving operative management and medical devices is rarely feasible, we do not think this would have affected the outcome. Additionally, owing to technical problems, we were not able to perform BIA in every patient, and we have to report that our BIA monitor is validated in healthy subjects only up to 66 years of age [16]. Last, a limitation of the study is that we did not measure cytokine levels before and after the HA cartridge and therefore cannot be sure what proportion of blood was treated. We can only estimate that, despite the small amount of 3 to 4 % of total blood volume purified each minute, more than 99 % of the blood volume had passed through the HA device after our mean treatment time of 191 minutes.
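The order of magnitude of this estimate can be illustrated with a simple well-mixed, single-compartment model; the assumed total blood volume of about 5 l is an assumption for illustration only:

import math

flow_l_per_min = 0.2      # 200 ml/min through the CytoSorb cartridge
blood_volume_l = 5.0      # assumed total blood volume (not measured in the study)
treatment_min = 191       # mean treatment time reported above

untreated = math.exp(-flow_l_per_min / blood_volume_l * treatment_min)
print(f"treated at least once: {100 * (1 - untreated):.2f} %")  # > 99 %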
We conclude that the use of the CytoSorb™ adsorber cartridge in a CPB circuit is technically feasible. We observed a longer-lasting anti-inflammatory effect of IL-10 in the HA group, which needs to be investigated further. We did not observe relevant changes in the evolution of pro-inflammatory cytokines in patients treated with the CytoSorb™ adsorber device during CPB, and we did not find any effects on our patients' clinical outcomes; future studies should therefore aim to increase the sample size. Because several pathways are involved in the complex pathogenesis of the inflammatory reaction to CPB, the inhibition of a single pathway may not suppress the entire pro-inflammatory cascade sufficiently to significantly improve clinical outcomes [38]. We found an inhomogeneous inflammatory response, expressed by high inter-individual differences in cytokine levels between our patients. Greater homogeneity may be achieved by identifying those patients or procedures that trigger an exaggerated inflammatory response to CPB, such as patients with endocarditis, procedures on the aortic arch requiring hypothermic circulatory arrest, or transplant surgery. This hypothesis requires further investigation.
Using the CytoSorb™ adsorber during cardiopulmonary bypass did not change patients' perioperative course.
Activation of cytokines due to cardiopulmonary bypass is inhomogeneous and shows high inter-individual differences between our patients. Patients with an exaggerated inflammatory response to cardiac surgery need to be identified.
A longer-lasting anti-inflammatory effect of IL-10 was observed in patients who were treated with hemadsorption.
The use and installation of the CytoSorb™ adsorber on CPB were technically feasible, and no adverse device-related side effects occurred.
Abbreviations
ACT: Activated clotting time
aHTN: Arterial hypertension
AoCC: Aortic cross-clamp
BIA: Bioelectrical impedance analysis
BIS: Bispectral index
CABG: Coronary artery bypass graft
CPB: Cardiopulmonary bypass
CPR: Cardiopulmonary resuscitation
EuroSCORE: European system for cardiac operative risk evaluation
HA: Hemoadsorption
HMGB1: High-mobility group box 1
IDDM: Insulin-dependent diabetes mellitus
LOS-ICU: Length of stay in the intensive care unit
LPS: Lipopolysaccharide
LVEF: Left ventricular ejection fraction
MCI: Myocardial infarction
NIDDM: Non-insulin-dependent diabetes mellitus
PAOD: Peripheral arterial obstructive disease
PCC: Prothrombin complex concentrate
PCT: Procalcitonin
POD: Postoperative day
TBW: Total body water
TNF-α: Tumor necrosis factor-alpha
Tomic V, Russwurm S, Moller E, Claus RA, Blaess M, Brunkhorst F, et al. Transcriptomic and proteomic patterns of systemic inflammation in on-pump and off-pump coronary artery bypass grafting. Circulation. 2005;112:2912–20.
Diegeler A, Doll N, Rauch T, Haberer D, Walther T, Falk V, et al. Humoral immune response during coronary artery bypass grafting: A comparison of limited approach, "off-pump" technique, and conventional cardiopulmonary bypass. Circulation. 2000;102(19 Suppl 3):III95–III100.
Chew MS, Brandslund I, Brix-Christensen V, Ravn HB, Hjortdal VE, Pedersen J, et al. Tissue injury and the inflammatory response to pediatric cardiac surgery with cardiopulmonary bypass: a descriptive study. Anesthesiology. 2001;94:745–53. discussion 5A.
de Jong PR, Schadenberg AW, van den Broek T, Beekman JM, van Wijk F, Coffer PJ, et al. STAT3 regulates monocyte TNF-alpha production in systemic inflammation caused by cardiac surgery with cardiopulmonary bypass. PLoS One. 2012;7:e35070.
Shen X, Li WQ. High-mobility group box 1 protein and its role in severe acute pancreatitis. World J Gastroenterol. 2015;21:1424–35.
Seghaye M, Duchateau J, Bruniaux J, Demontoux S, Bosson C, Serraf A, et al. Interleukin-10 release related to cardiopulmonary bypass in infants undergoing cardiac operations. J Thorac Cardiovasc Surg. 1996;111:545–53.
Sablotzki A, Welters I, Lehmann N, Menges T, Gorlach G, Dehne M, et al. Plasma levels of immunoinhibitory cytokines interleukin-10 and transforming growth factor-beta in patients undergoing coronary artery bypass grafting. Eur J Cardiothorac Surg. 1997;11:763–8.
Tarnok A, Schneider P. Pediatric cardiac surgery with cardiopulmonary bypass: pathways contributing to transient systemic immune suppression. Shock. 2001;16 Suppl 1:24–32.
Franke A, Lante W, Fackeldey V, Becker HP, Thode C, Kuhlmann WD, et al. Proinflammatory and antiinflammatory cytokines after cardiac operation: different cellular sources at different times. Ann Thorac Surg. 2002;74:363–70. discussion 70-1.
Exner R, Tamandl D, Goetzinger P, Mittlboeck M, Fuegger R, Sautner T, et al. Perioperative GLY-GLN infusion diminishes the surgery-induced period of immunosuppression: accelerated restoration of the lipopolysaccharide-stimulated tumor necrosis factor-alpha response. Ann Surg. 2003;237:110–5.
Whitlock RP, Devereaux PJ, Teoh KH, Lamy A, Vincent J, Pogue J, et al. Methylprednisolone in patients undergoing cardiopulmonary bypass (SIRS): a randomised, double-blind, placebo-controlled trial. Lancet. 2015;386:1243–53.
Basu R, Pathak S, Goyal J, Chaudhry R, Goel RB, Barwal A. Use of a novel hemoadsorption device for cytokine removal as adjuvant therapy in a patient with septic shock with multi-organ dysfunction: a case study. Indian J Crit Care Med. 2014;18:822–4.
Society of Thoracic Surgeons Blood Conservation Guideline Task Force, Ferraris VA, Brown JR, Despotis GJ, Hammon JW, Reece TB, et al. 2011 update to the Society of Thoracic Surgeons and the Society of Cardiovascular Anesthesiologists blood conservation clinical practice guidelines. Ann Thorac Surg. 2011;91:944–82.
Society of Thoracic Surgeons Blood Conservation Guideline Task Force, Ferraris VA, Ferraris SP, Saha SP, Hessel 2nd EA, Haan CK, et al. Perioperative blood transfusion and blood conservation in cardiac surgery: the Society of Thoracic Surgeons and The Society of Cardiovascular Anesthesiologists clinical practice guideline. Ann Thorac Surg. 2007;83(5):S27–86.
Kushner RF, Schoeller DA. Estimation of total body water by bioelectrical impedance analysis. Am J Clin Nutr. 1986;44:417–24.
Kyle UG, Bosaeus I, De Lorenzo AD, Deurenberg P, Elia M, Gomez JM, et al. Bioelectrical impedance analysis--part I: review of principles and methods. Clin Nutr. 2004;23:1226–43.
Nebelsiek T, Beiras-Fernandez A, Kilger E, Mohnle P, Weis F. Routine use of corticosteroids to prevent inflammation response in cardiac surgery. Recent Pat Cardiovasc Drug Discov. 2012;7:170–4.
Day JR, Taylor KM. The systemic inflammatory response syndrome and cardiopulmonary bypass. Int J Surg. 2005;3:129–40.
Kellum JA, Venkataraman R, Powner D, Elder M, Hergenroeder G, Carter M. Feasibility study of cytokine removal by hemoadsorption in brain-dead humans. Crit Care Med. 2008;36:268–72.
Song M, Winchester J, Albright RL, Capponi VJ, Choquette MD, Kellum JA. Cytokine removal with a novel adsorbent polymer. Blood Purif. 2004;22:428–34.
Kellum JA, Song M, Venkataraman R. Hemoadsorption removes tumor necrosis factor, interleukin-6, and interleukin-10, reduces nuclear factor-kappaB DNA binding, and improves short-term survival in lethal endotoxemia. Crit Care Med. 2004;32:801–5.
Pattnaik SK, Panda B. CytoSorb-friend or foe!! Indian J Crit Care Med. 2015;19:296.
Bruenger F, Kizner L, Weile J, Morshuis M, Gummert JF. First successful combination of ECMO with cytokine removal therapy in cardiogenic septic shock: a case report. Int J Artif Organs. 2015;38:113–6.
Vocelka CR, Jones KM, Mikhova KM, Ebisu RM, Shar A, Kellum JA, et al. Role of cytokine hemoadsorption in cardiopulmonary bypass-induced ventricular dysfunction in a porcine model. J Extra Corpor Technol. 2013;45:220–7.
Bernardi MH, Schmidlin D, Schiferer A, Ristl R, Neugebauer T, Hiesmayr M, et al. Impact of preoperative serum creatinine on short- and long-term mortality after cardiac surgery: a cohort study. Br J Anaesth. 2015;114:53–62.
Warren OJ, Smith AJ, Alexiou C, Rogers PL, Jawad N, Vincent C, et al. The inflammatory response to cardiopulmonary bypass: part 1--mechanisms of pathogenesis. J Cardiothorac Vasc Anesth. 2009;23:223–31.
Clark MA, Plank LD, Connolly AB, Streat SJ, Hill AA, Gupta R, et al. Effect of a chimeric antibody to tumor necrosis factor-alpha on cytokine and physiologic responses in patients with severe sepsis--a randomized, clinical trial. Crit Care Med. 1998;26:1650–9.
Zhang WR, Garg AX, Coca SG, Devereaux PJ, Eikelboom J, Kavsak P, et al. Plasma IL-6 and IL-10 Concentrations Predict AKI and Long-Term Mortality in Adults after Cardiac Surgery. J Am Soc Nephrol. 2015;26:3123–32.
Frasnelli SC, de Medeiros MC, Bastos Ade S, Costa DL, Orrico SR, Rossa JC. Modulation of immune response by RAGE and TLR4 signalling in PBMCs of diabetic and non-diabetic patients. Scand J Immunol. 2015;81:66–71.
Tanaka Y. Immunosuppressive mechanisms in diabetes mellitus. Nippon Rinsho. 2008;66:2233–7.
Koh GC, Peacock SJ, van der Poll T, Wiersinga WJ. The impact of diabetes on the pathogenesis of sepsis. Eur J Clin Microbiol Infect Dis. 2012;31:379–88.
Wang N, Min X, Li D, He P, Zhao L. Geranylgeranylacetone protects against myocardial ischemia and reperfusion injury by inhibiting high-mobility group box 1 protein in rats. Mol Med Rep. 2012;5:521–4.
Gratia S, Kay L, Potenza L, Seffouh A, Novel-Chate V, Schnebelen C, et al. Inhibition of AMPK signalling by doxorubicin: at the crossroads of the cardiac responses to energetic, oxidative, and genotoxic stress. Cardiovasc Res. 2012;95:290–9.
Bangert A, Andrassy M, Muller AM, Bockstahler M, Fischer A, Volz CH, et al. Critical role of RAGE and HMGB1 in inflammatory heart disease. Proc Natl Acad Sci U S A. 2016;113:E155–64.
Asavarut P, Zhao H, Gu J, Ma D. The role of HMGB1 in inflammation-mediated organ injury. Acta Anaesthesiol Taiwan. 2013;51:28–33.
O'Neal Jr HR, Koyama T, Koehler EA, Siew E, Curtis BR, Fremont RD, et al. Prehospital statin and aspirin use and the prevalence of severe sepsis and acute lung injury/acute respiratory distress syndrome. Crit Care Med. 2011;39:1343–50.
Landis RC, Brown JR, Fitzgerald D, Likosky DS, Shore-Lesserson L, Baker RA, et al. Attenuating the Systemic Inflammatory Response to Adult Cardiopulmonary Bypass: A Critical Review of the Evidence Base. J Extra Corpor Technol. 2014;46:197–211.
Paparella D, Micelli M, Favoino B, D'Alo M, Fiore T, de Luca Tupputi Schinosa L. Anti-heparin-platelet factor 4 antibodies after cardiopulmonary bypass: role of HLA expression. Haematologica. 2001;86:326–7.
We thank all the medical staff from the Department of Cardiothoracic and Vascular Anesthesia and the Department for Cardiac Surgery for contributing to the study. Special thanks go to our medical students David Hirschl and Christoph Steinkellner for their invaluable help in data collection.
Disclosure of funding
Materials for this study have been partly financially supported by CytoSorbents Europe GmbH. All other sources of funding for laboratory measurements and human resources were departmental and institutional funding.
Department of Cardiothoracic and Vascular Anaesthesia and Intensive Care Medicine, Medical University of Vienna, Waehringer Guertel 18-20, A-1090, Vienna, Austria
Martin H. Bernardi, Harald Rinoesl, Philipp Opfermann & Michael J. Hiesmayr
Department of Surgery, Research Laboratories, Medical University of Vienna, Lazarettgasse 14, A-1090, Vienna, Austria
Klaus Dragosits, Christian Lamm, Falk Preißing & Andreas Spittler
Centre for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Spitalgasse 23, A-1090, Vienna, Austria
Robin Ristl
Department of Cardiac Surgery, Medical University of Vienna, Waehringer Guertel 18-20, A-1090, Vienna, Austria
Friedrich Hoffelner & Dominik Wiedemann
Core Facilities, Core Facility Flow Cytometry, Medical University of Vienna, Lazarettgasse 14, 1090, Vienna, Austria
Andreas Spittler
Correspondence to Martin H. Bernardi.
MHB and MJH have received travel funding for a lecture from CytoSorbents Europe GmbH. All other authors declare that they have no competing interests.
MHB contributed to conception and design, to drafting of the article, to critical revision of the article for important intellectual content, to final approval of the article, and to laboratory work, patient recruitment, and follow-up investigations. FH contributed to conception and design. MJH and AS contributed to conception and design, to drafting of the article, to critical revision of the article for important intellectual content, and to final approval of the article. RR contributed to drafting of the article and to critical revision of the article for important intellectual content and provided statistical expertise. HR and DW contributed to critical revision of the article for important intellectual content and to laboratory work, patient recruitment, and follow-up investigations. KD, PO, CL, and FP contributed to laboratory work, patient recruitment, and follow-up investigations. All authors read and approved the final manuscript.
Bernardi, M.H., Rinoesl, H., Dragosits, K. et al. Effect of hemoadsorption during cardiopulmonary bypass surgery – a blinded, randomized, controlled pilot study using a novel adsorbent. Crit Care 20, 96 (2016). https://doi.org/10.1186/s13054-016-1270-0
Keywords: CytoSorb, High-mobility group box 1
\begin{document}
\title{Quadrant Marked Mesh Patterns and the $r$-Stirling Numbers}
\abstract{\textit{Marked mesh patterns} are a very general type of permutation pattern. We examine a particular marked mesh pattern originally defined by Kitaev and Remmel, and show that its generating function is described by the $r$-Stirling numbers. We examine some ramifications of various properties of the $r$-Stirling numbers for this generating function, and find (seemingly new) formulas for the $r$-Stirling numbers in terms of the classical Stirling numbers and harmonic numbers. We also answer some questions posed by Kitaev and Remmel and show a connection to another mesh pattern introduced by Kitaev and Liese.}
\section{Introduction}
The notion of a marked mesh pattern in a permutation is a generalization of classical permutation patterns, and is in fact a common generalization of a number of different variations of permutation patterns that have been of recent interest. Br\"{a}nd\'{e}n and Claesson introduced mesh patterns \cite{BC}, and \'{U}lfarsson developed marked mesh patterns \cite{U}. We will give a somewhat loose definition of general marked mesh patterns here before turning to the specific examples studied in this paper.
Let $\sigma$ be a permutation written in one-line notation: $\sigma = \sigma_{1}\sigma_{2}\cdots \sigma_{n}$. The graph of $\sigma$ is the set of points of the form $(i, \sigma_{i})$ for $i$ from $1$ to $n$. A \textit{marked mesh pattern} of length $k$ is the graph of a permutation $\sigma$ in $S_{k}$, drawn on a grid, with some regions of the grid demarcated and labeled with a symbol `$=a$', `$\leq a$', or `$\geq a$' for some non-negative integer $a$. This diagram is intended to be a schematic drawing of the graph of a permutation $\sigma$ in $S_{n}$ (with $n>k$), where the drawn points are actual points of the graph of $\sigma$, but the spaces between may have been compressed. Unmarked regions of the pattern place no restriction on $\sigma$, but the marked regions of the graph of $\sigma$ must contain a number of points satisfying the expression that marks the region. (Since the mark ``$=0$'' is the most commonly used, a shaded region with no other mark is assumed to be marked with ``$=0$''.)
The main pattern of interest for this paper is shown in Figure \ref{fig:patterndef}. We denote this pattern as $MMP^{k}$. We precisely define this pattern by saying that $\sigma_{i}$ matches the pattern $MMP^{k}$ in $\sigma$ if there are at least $k-1$ indices $j > i$ with $\sigma_{j} > \sigma_{i}$ and $j < m$, where $\sigma_{m} = n$; and no indices $j < i$ with $\sigma_{j} > \sigma_{i}$. That is, when checking this pattern, we count the points in the graph of $\sigma$ that appear in the rectangle bounded in the lower left by $(i,\sigma_{i})$ and in the upper right by $(m,n)$. For $\sigma_{i}$ to match $MMP^{k}$, there must be at least $k$ points in that rectangle (including the point $(m,n)$ itself, but not $(i, \sigma_{i})$,) and no points above and to the left of $(i, \sigma_{i})$. (This matches the diagram of the pattern because the part of the shaded region along the top of the diagram means that the upper-right point must be $(m,n)$, while the portion of the shaded region on the left gives the restriction on points occurring before $\sigma_{i}$. The middle region specifies the minimum number of points between $\sigma_{i}$ and $n$ in $\sigma$.)
\begin{centering}
\begin{figure}
\caption{The Pattern $MMP^{k}$ }
\label{fig:patterndef}
\end{figure}
\end{centering}
The graph of $\sigma$ leads to terminology that will be useful. Draw a horizontal and a vertical line through the point $(i,\sigma_{i})$, which will divide the graph of $\sigma$ into four quadrants. The usual quadrant numbering then lets us more easily describe the position of points relative to $(i, \sigma_{i})$. For example, points above and to the left of $(i, \sigma_{i})$ are said to be in the second quadrant relative to $\sigma_{i}$, or, more succinctly, in $\sigma_{i}$'s second quadrant. Kitaev and Remmel \cite{KR} began a systematic study of patterns for which the restrictions of the pattern can be described in terms of the quadrants relative to $\sigma_{i}$, (including $MMP^{k}$) which they called \textit{quadrant marked mesh patterns}. The main goal of this paper is to describe the generating function for $MMP^{k}$ in terms of $r$-Stirling numbers (allowing us to explain some of the results of Kitaev and Remmel \cite{KR} more fully) and to examine some related combinatorial questions. Specifically, we will find some new recurrences and descriptions for the $r$-Stirling numbers, and note a connection to another marked mesh pattern.
\section{Preliminaries}
For any permutation $\sigma$, we define $mmp^{k}(\sigma)$ to be the number of entries $\sigma_{i}$ that match $MMP^{k}$ in $\sigma$:
\[ mmp^{k}(\sigma) = \left| \{ i \, | \, 1 \leq i \leq n \textrm{ and } i \textrm{ matches } MMP^{k} \textrm{ in } \sigma \} \right|. \]
We will write our permutations in one-line notation. Generally, for $\sigma \in S_{n}$, we use $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{n}$ throughout to denote the entries of $\sigma$. We will also often write $\sigma$ as \begin{equation} \sigma = a_{1,1}a_{1,2} \cdots a_{1,m_{1}}a_{2,1} \cdots a_{2,m_{2}} \cdots a_{p,1} \cdots a_{p,m_{p}}, \label{eq:cycstruct} \end{equation} where $a_{1,1} < a_{2,1} < \cdots < a_{p,1}$, and $a_{q,1} > a_{q,m}$ for any $q$ from 1 to $p$ and $m$ from 2 to $m_{q}$. We will refer to any of the substrings $a_{q,1} a_{q,2} \cdots a_{q,m_{q}}$ as a \textit{pseudocycle} of $\sigma$. The entries $a_{q,1}$ are called \textit{left-to-right maxima} of $\sigma$, since if $\sigma$ is read left-to-right, each entry $a_{q,1}$ is the largest entry read so far. For example, in the permutation $56418732$, the three pseudocycles are the substrings $5, 641$, and $8732$, so we would set $a_{1,1} = 5$, $a_{2,1} = 6$, etc.
We note that the permutations in $S_{n}$ with exactly $k$ pseudocycles are counted by the (unsigned) Stirling numbers of the first kind, denoted $c(n,k)$. These numbers are given by \[ x(x+1)(x+2) \cdots (x+n-1) = \sum_{k=1}^{n} c(n,k)x^{k}.\]
We also define the reduction operator $\textrm{red()}$, which turns a string of $k$ distinct integers into a permutation in $S_{k}$ by setting $\textrm{red}(a_{1}a_{2} \cdots a_{k})$ to be the unique permutation $\sigma \in S_{k}$ resulting from arranging $1$ through $k$ in the same relative order as the numbers $a_{i}$. For example, $\textrm{red}(3625) = 2413$.
\section{Generating Functions}
We now begin our study of the distribution of $mmp^{k}$ on $S_{n}$ with a basic observation relating the number of occurrences of the pattern $MMP^{k}$ in a permutation $\sigma$ to the pseudocycle structure of $\sigma$.
\begin{lemma} \label{lem:basic} If $\sigma \in S_{n}$, then for any $k > 1$, $mmp^{k}(\sigma) \leq \max(0,n-k).$ Moreover, using the notation of \eqref{eq:cycstruct} for $\sigma$, if $mmp^{k}(\sigma) = j$, then the elements that match $MMP^{k}$ in $\sigma$ are exactly $a_{1,1}, a_{2,1}, \ldots, a_{j,1}$. \end{lemma}
Proof. Use the notation of equation \eqref{eq:cycstruct} for $\sigma$ and assume $\sigma$ has $p$ pseudocycles. Then we note that if $\sigma_{i}$ matches $MMP^{k}$ in $\sigma$, then $\sigma_{i}$ is a left-to-right maximum, and thus is one of the $p$ entries $a_{q,1}$ for some $1 \leq q \leq p$. It also implies that there are at least $k$ entries after $\sigma_{i}$, so that $i \leq n-k$. For the second statement, we note that if $a_{i,1}$ matches $MMP^{k}$ in $\sigma$, for $i > 1$, then there are at least $k-1$ elements in the $i+1$st through $p-1$st pseudocycles that are greater than $a_{i,1}$. These same $k-1$ elements occur between $a_{i-1,1}$ and $a_{p,1}$, so that $a_{i-1,1}$ also matches $MMP^{k}$ in $\sigma$. $\square$
The standard generating function for the statistic $mmp^{k}$ is \begin{equation} \label{eq:genfunc} R_{n}^{k}(x) = \sum_{\sigma \in S_{n}} x^{mmp^{k}(\sigma)}.\end{equation} However, a slight alteration to $R_{n}^{k}(x)$ makes its description more straightforward. Let $\sigma = \sigma_{1}\sigma_{2} \cdots \sigma_{n}$. We say that $0$ matches $MMP^{k}$ in $\sigma$ if $n = \sigma_{i}$ for some $i \geq k$. This is really an extension of the definition of $MMP^{k}$, since if we were to prepend a 0 to $\sigma$ and consider it as a permutation $\sigma'$ of $0, \ldots, n$, then $0$ would in fact match $MMP^{k}$ in $\sigma'$ as long as $n$ occurs at the $k$th position or later in $\sigma$. In order to specify when we wish to include this expanded notion of matching $MMP^{k}$, we will define \[ mmp^{k'}(\sigma) = \left| \{ i \, | \, 0 \leq i \leq n \textrm{ and } \sigma_{i} \textrm{ matches } MMP^{k} \textrm{ in } \sigma \} \right|. \] Conventionally, we will write $\sigma_{0} = 0$ when we want to consider this leading 0 as part of the permutation. Notice that $mmp^{k'}(\sigma) = 0$ if and only if $n$ occurs at the $k-1$st position or earlier in $\sigma$, and we describe this by saying that $\sigma$ \textit{cannot match} $MMP^{k}$. If $mmp^{k'}(\sigma) = 1$, then we say that $\sigma$ \textit{almost matches} $MMP^{k}$. Note that $n = \sigma_{k}$ is a sufficient but not necessary condition for $mmp^{k'}(\sigma) = 1$. We also point out that if $mmp^{k'}(\sigma) > 0$, then in fact $mmp^{k'}(\sigma) = mmp^{k}(\sigma) + 1$.
As an example that this notion is useful, it can easily be checked that \[ R_{4}^{2}(x) = 17 + 6x + x^{2}.\] However, there are 6 permutations in $S_{4}$ that cannot match $MMP^{2}$, and 11 that almost match $MMP^{2}$. Thus, it seems in some ways more natural to write \[ R_{4}^{2}(x) = 6 + 11 + 6x + x^{2} = c(4,1) + c(4,2) + c(4,3)x + c(4,4)x^{2}.\] More generally, we have the following, which generalizes a result of Kitaev and Remmel \cite[Proposition 3]{KR}.
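These counts are easy to confirm by enumerating $S_{4}$ directly. A possible continuation of the earlier Python sketch (again informal; it reuses the function mmp defined there) is the following.
\begin{verbatim}
from itertools import permutations
from collections import Counter

def mmp_prime(sigma, k):
    # mmp^{k'}: also count the virtual entry sigma_0 = 0, which matches
    # exactly when n sits at the k-th position or later.
    if sigma.index(len(sigma)) + 1 >= k:
        return mmp(sigma, k) + 1
    return 0

S4 = list(permutations(range(1, 5)))
print(Counter(mmp(s, 2) for s in S4))              # Counter({0: 17, 1: 6, 2: 1})
print(sum(1 for s in S4 if mmp_prime(s, 2) == 0))  # 6, cannot match MMP^2
print(sum(1 for s in S4 if mmp_prime(s, 2) == 1))  # 11, almost match MMP^2
\end{verbatim}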
\begin{theorem} \label{thm:k2} For $s > 0$, the coefficient of $x^{s}$ in $R_{n}^{2}(x)$ is $c(n,s+2)$. The number of permutations that almost match $MMP^{2}$ is $c(n,2)$, while the number of permutations that cannot match $MMP^{2}$ is $c(n,1) = (n-1)!$. \end{theorem}
Proof. First, we claim that if $\sigma$ has $p>2$ pseudocycles, then $mmp^{2}(\sigma) = p-2$. To see this, write $\sigma$ in the notation of equation \eqref{eq:cycstruct}. As in Lemma \ref{lem:basic}, if $\sigma_{i}$ matches $MMP^{2}$ in $\sigma$, then $\sigma_{i}$ is one of the entries $a_{q,1}$ for some $1 \leq q \leq p$. Conversely, every entry of the form $a_{q,1}$ for $1 \leq q \leq p-2$ matches the pattern $MMP^{2}$ in $\sigma$, since for such $q$, the entries $a_{q+1,1}$ and $a_{p,1} = n$ will lie in the first quadrant relative to $a_{q,1}$. But since $a_{p-1,1}$ is the next-to-last left-to-right maximum, the only entry larger than $a_{p-1,1}$ that occurs after it and weakly before $n$ is $n$ itself, so $a_{p-1,1}$ does not match $MMP^{2}$. Hence, $mmp^{2}(\sigma) = p-2$, as claimed.
For $s > 0$, this implies that the number of permutations $\sigma$ that contribute a term of $x^{s}$ to the sum in Equation \eqref{eq:genfunc} is precisely the number of permutations with $s+2$ pseudocycles, which is counted by $c(n,s+2)$. The count of permutations that cannot match $MMP^{2}$ is immediate from the definition, and the remaining $c(n,2)$ permutations must almost match $MMP^{2}$. $\square$
To more easily keep track of $mmp^{k'}$, we define a new generating function $P_{n}^{k}(x)$ by setting \[ P_{n}^{k}(x) = \sum_{\sigma \in S_{n}} x^{mmp^{k'}(\sigma)}. \] For example, \[ P_{4}^{2}(x) = 6 + 11x + 6x^{2} + x^{3}.\] Of course, $R_{n}^{k}(x)$ is easily recoverable from $P_{n}^{k}(x)$ and vice versa, so we will focus our attention on $P_{n}^{k}(x)$. We also define $C_{n,k,j}$ to be the coefficient of $x^{j}$ in $P_{n}^{k}(x)$.
For a fixed $k$, we can generate $P_{n}^{k}(x)$ using the following recursive description:
\begin{theorem} \label{thm:main} For a fixed value of $k > 1$ and any $n \geq k$ and $j > 0$, \[ C_{n,k,j} = (n-1) C_{n-1,k,j} + C_{n-1,k,j-1}.\] That is, \[P_{n}^{k}(x) = (x+n-1) P_{n-1}^{k}(x).\] Also, \[ P_{k-1}^{k}(x) = (k-1)!. \] \end{theorem}
Proof. First, assume $j > 1$. If $mmp^{k'}(\sigma) = j$, then $\sigma$ must match $MMP^{k}$ at the elements that start each of its first $j-1$ pseudocycles and at 0. We can count such permutations as follows:
If $\sigma_{1} = 1$, then dropping $1$ from $\sigma$ will completely eliminate the first pseudocycle, but will not affect any of the others. Thus $\textrm{red}(\sigma_{2}\sigma_{3} \cdots \sigma_{n})$ will match $MMP^{k}$ at exactly $j-1$ places - the starting point of each of its first $j-2$ pseudocycles and 0. There are exactly $C_{n-1,k,j-1}$ such permutations, and the described correspondence is a bijection, since the inverse map consists of increasing each element of the permutation by 1 and inserting 1 at the start of the string. If $\sigma_{1} \neq 1$, then $1$ does not match $MMP^{k}$ in $\sigma$, and in fact removing it from $\sigma$ will not affect any of the entries $\sigma_{i}$ that do match $MMP^{k}$ in $\sigma$, with the possible exception of 0. However, since $j > 1$, at least one entry besides 0 matches $MMP^{k}$ in $\sigma$ and still does so after $1$ is removed and we reduce; in particular, $n$ still occurs at the $k$th position or later, so 0 matches $MMP^{k}$ in $\textrm{red}(\sigma_{1}\sigma_{2} \cdots \hat{1} \cdots \sigma_{n})$ as well. Thus $mmp^{k'}(\textrm{red}(\sigma_{1}\sigma_{2} \cdots \hat{1} \cdots \sigma_{n})) = j$. But, since 1 could be in positions 2 through $n$ in $\sigma$, each permutation $\sigma' \in S_{n-1}$ with $mmp^{k'}(\sigma') = j$ will appear as $\textrm{red}(\sigma_{1}\sigma_{2} \cdots \hat{1} \cdots \sigma_{n})$ for $n-1$ different $\sigma \in S_{n}$. Thus, there are $(n-1) C_{n-1,k,j}$ such $\sigma \in S_{n}$.
If $j=1$, then the theorem claims that the number of permutations in $S_{n}$ that almost match $MMP^{k}$ is equal to $n-1$ times the number of permutations in $S_{n-1}$ that almost match $MMP^{k}$ plus the number of permutations in $S_{n-1}$ that cannot match $MMP^{k}$. This actually can be deduced as a consequence of the result for $j>1$ and the fact that the sum of the coefficients of $P_{n}^{k}(x)$ is $n!$, but we can construct a bijection as well. We assume now that $\sigma$ almost matches $MMP^{k}$. If $n$ appears after the $k$th position of $\sigma$, then $mmp^{k}(\sigma) = 0$ implies that $\sigma_{1} \neq 1$. In this case, or if $n = \sigma_{k}$ and 1 appears after $n$, then the argument from the second case above essentially applies. The only alteration that we must make is to note that under our assumptions, $n$ will still be in the $k$th position or later after 1 is deleted, so that $0$ will still match $MMP^{k}$. This accounts for the term $(n-1)C_{n-1,k,1}$.
For the permutations $\sigma$ where $\sigma_{k} = n$ and 1 appears before $n$, we create a permutation in $S_{n-1}$ that cannot match $MMP^{k}$ as follows - first we swap the positions of $n$ and 1, and then delete 1 and apply $red()$. This is a bijection, since the inverse map consists of increasing each element of $\sigma'$ by 1, inserting 1 in the $k$th position, and swapping the positions of 1 and $n$.
The final statement - the base case for the recursion - is clear from the definition of ``cannot match''. $\square$
A direct consequence of the second and third statements of this theorem is an easy description of $P_{n}^{k}(x)$.
\begin{corollary} \label{cor:genfunc} For $n \geq k-1$, \[ P_{n}^{k}(x) = ((k-1)!) \cdot \prod_{i=k-1}^{n-1} (x+i) .\] \end{corollary}
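As a small numerical check of Corollary \ref{cor:genfunc} (informal, and relying on the brute-force helpers sketched earlier), one can compare the distribution of $mmp^{k'}$ over $S_{n}$ with the product formula for a few values of $n$ and $k$; sympy is used only for the polynomial bookkeeping.
\begin{verbatim}
from itertools import permutations
from collections import Counter
import sympy as sp

x = sp.symbols('x')
for k in range(2, 5):
    for n in range(k - 1, 7):
        dist = Counter(mmp_prime(s, k) for s in permutations(range(1, n + 1)))
        lhs = sum(c * x**j for j, c in dist.items())
        rhs = sp.factorial(k - 1)
        for i in range(k - 1, n):
            rhs = rhs * (x + i)
        assert sp.expand(lhs - rhs) == 0
\end{verbatim}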
\section{The \texorpdfstring{$r$}{r}-Stirling Numbers}
The recursive relationship in Theorem \ref{thm:main} is the standard recursion for the (unsigned) Stirling numbers of the first kind. In light of Theorem \ref{thm:k2}, this is not entirely surprising. Of course, for $k > 2$, the initial conditions for $P_{n}^{k}(x)$ are different from those for $c(n,k)$. The resulting coefficients of $P_{n}^{k}(x)$ are a generalization of the Stirling numbers of the first kind defined originally by Mitrinovic \cite{M}, but have appeared in many guises. See Broder \cite{B} and Koutras \cite{Kout} for some examples. We will use the notation $\rstir{n}{m}{r}$ of Broder \cite{B}. These numbers are the $r$-Stirling numbers of the first kind, which are defined as follows. We let $\rstir{n}{m}{r}$ be the number of permutations in $S_{n}$ with exactly $m$ left-to-right maxima such that $n,n-1, \ldots, n-r+1$ are all left-to-right maxima. (We will typically refer to these as just the $r$-Stirling numbers. While $r$-Stirling numbers of the second kind exist, we will not use them here.) Theorem \ref{thm:main} is enough to show the following result, but our main goal is an explicit combinatorial correspondence connecting $P_{n}^{k}(x)$ and the $r$-Stirling numbers.
\begin{theorem} \label{thm:rstir} For $n \geq k$, the coefficient $C_{n,k,j}$ is given by $(k-1)!\rstir{n}{j+k-1}{k-1}$. \end{theorem}
Proof. First, fix $j \geq 0$ and assume $\sigma \in S_{n}$ has exactly $j+k-1$ left-to-right maxima, including $n$, $n-1$, \ldots, $n-k+2$. Say that these left-to-right maxima occur at $\sigma_{i_{1}}, \sigma_{i_{2}}, \ldots, \sigma_{i_{j+k-1}} = n$. We will construct $(k-1)!$ permutations counted by $C_{n,k,j}$, and show that this gives a 1-to-$(k-1)!$ correspondence. We first note that if $j\geq 1$, $\sigma_{i_{j-1}}$ matches $MMP^{k}$ in $\sigma$, since $\sigma_{i_{j}}, \ldots, \sigma_{i_{j+k-1}} = n$ are all larger than $\sigma_{i_{j-1}}$ and occur after it. If $j=1$ we interpret this conventionally to mean $\sigma_{i_{0}} = 0$ matches $MMP^{k}$ in $\sigma$. Similar logic shows that if $j=0$, then $n$ must occur in position $k-1$ or later in $\sigma$.
Now our argument changes based on whether $\sigma_{i_{j}}$ matches $MMP^{k}$ in $\sigma$. Again, if $j=0$, we will conventionally treat $\sigma_{i_{0}}=0$.
Case 1: First assume $j>0$. If $\sigma_{i_{j}}$ does not match $MMP^{k}$ in $\sigma$, then we claim that any elements $\sigma_{r} \geq \sigma_{i_{j}}$ that occur between $\sigma_{i_{j-1}}$ and $n$ must be one of $\sigma_{i_{j}}$, $\sigma_{i_{j+1}}, \ldots, \sigma_{i_{j+k-2}}$. If $\sigma_{r}$ were such an ``extra'' element, it could not occur before $\sigma_{i_{j}}$ since $\sigma_{i_{j}}$ is a left-to-right maximum, and it could not occur after $\sigma_{i_{j}}$, or else $\sigma_{i_{j}}$ would match $MMP^{k}$ in $\sigma$. Then if we rearrange the elements $\sigma_{i_{j}}$, $\sigma_{i_{j+1}}, \ldots, \sigma_{i_{j+k-2}}$, but leave all the other entries of $\sigma$ in the same place, then $\sigma_{i_{j-1}}$ will still match $MMP^{k}$ in the resulting permutation. Moreover, no matter how these elements are rearranged, none of them will match $MMP^{k}$ in the resulting permutation, since there are only $k-1$ elements between positions $i_{j}$ and $i_{j+k-1}$ that are at least $\sigma_{i_{j}}$, which is the least of all these elements. Then taking all possible rearrangements of $\sigma_{i_{j}}$ through $\sigma_{i_{j+k-2}}$ gives us $(k-1)!$ permutations $\sigma'$ with $mmp^{k'}(\sigma') = j$.
The analogous case for $j=0$ is the set of permutations with $\sigma_{k-1} = n$. We can rearrange the first $k-1$ entries of $\sigma$ in any order, and all of the resulting permutations cannot match $MMP^{k}$.
Case 2: Now assume $\sigma_{i_{j}}$ does match $MMP^{k}$ in $\sigma$. (Note that we now make no assumptions on $j$, so that case 2 includes the possibility that $j=0$ and $n$ appears at position $k$ or later.) Since the $k-1$ elements $n$, $n-1, \ldots, n-k+2$ must be among the left-to-right maxima in $\sigma$, and $\sigma_{i_{j}}$ is the $k$th largest left-to-right maximum, we must have $\sigma_{i_{j+1}} = n-k+2$, $\sigma_{i_{j+2}} = n-k+3$, etc. This implies that the elements that are larger than $\sigma_{i_{j}}$ that occur between $\sigma_{i_{j}}$ and $n$ are either in the list $\sigma_{i_{j+1}}, \ldots, \sigma_{i_{j+k-1}}$, or occur between some $\sigma_{i_{r}}$ and $\sigma_{i_{r+1}}$. We will create a new permutation $\sigma'$ by rearranging $\sigma$. The entry $\sigma_{i_{j-1}}$ and all entries before it will be left in the same place. However, some of the entries that occur after $\sigma_{i_{j-1}}$ but before $n$ will be moved to the end of the string, according to the following algorithm.
For each element $\sigma_{i_{r}}$, for $r$ from $j+1$ to $j+k-2$, we call an entry that lies between $\sigma_{i_{r}}$ and $\sigma_{i_{r+1}}$ that is larger than $\sigma_{i_{j-1}}$ a ``large follower'' of $\sigma_{i_{r}}$. If $\sigma_{i_{r}}$ has no large followers, then the string $\sigma_{i_{r}} \cdots \sigma_{i_{r+1}-1}$ will not be directly moved when we rearrange $\sigma$. If, however, $\sigma_{i_{r}}$ has at least one large follower, then we will remove the substring starting at $\sigma_{i_{r}}$ and ending at the entry before its final large follower, and move that substring to the end of $\sigma$. If several entries $\sigma_{i_{r}}$ have large followers, then we move the strings starting at each $\sigma_{i_{r}}$ in order based on the last large follower of $\sigma_{i_{r}}$, starting with the smallest one. The permutation that results after we have moved all of these strings will be called $\sigma'$.
For example, let $n=8$, $k=4$, and $j=2$. (Then we are constructing a permutation in $S_{8}$ that matches $MMP^{4}$ exactly once.) If \[ \sigma = 13625748, \textrm{ then } \sigma' = 13548762. \] We have $\sigma_{i_{j}} = 3$, and thus we are in case 2. Both 6 and 7 have large followers, and the algorithm tells us to move the substrings 7 and 62 to the end of the string, in that order, since the remaining large followers are 4 and 5 respectively. As another example, if \[ \sigma = 13647582, \textrm{ then } \sigma' = 13458267,\] since both 6 and 7 are moved to the end of the string (after the 2), in that order.
Now, we show that the constructed permutation $\sigma'$ satisfies $mmp^{k'}(\sigma') = j$. To do this, it suffices to check that $\sigma_{i_{j}}$ does not match $MMP^{k}$ in $\sigma'$, which will imply that $mmp^{k'}(\sigma') = j$. But, the given algorithm moves all but one large follower from each substring $\sigma_{i_{r}} \cdots \sigma_{i_{r+1}-1}$ after $n$ so that only one element from that substring that is greater than $\sigma_{i_{j-1}}$ remains between $\sigma_{i_{j-1}}$ and $n$. And as noted above, all entries of $\sigma$ greater than $\sigma_{i_{j}}$ that occur between $\sigma_{i_{j}}$ and $n$ are in one of those substrings. Thus $\sigma_{i_{j}}$ matches $MMP^{k-1}$ but not $MMP^{k}$ in $\sigma'$. Then, as in case 1, taking all possible rearrangements of the $k-1$ largest elements that occur after $\sigma_{i_{j-1}}$ and before $n$ gives a total of $(k-1)!$ permutations $\sigma''$ with $mmp^{k'}(\sigma'') = j$, constructed from the original $\sigma$.
Our work so far has given us a way of constructing $(k-1)!$ permutations $\sigma'$ with $mmp^{k'}(\sigma') = j$ out of each permutation $\sigma$ counted by $\rstir{n}{j+k-1}{k-1}$. Now, we must show that the correspondence we have constructed is exactly one-to-$(k-1)!$ and onto. To do so, we construct a right inverse for the correspondence.
Let $\sigma \in S_{n}$ be a permutation with $mmp^{k'}(\sigma) = j$. We must construct a permutation $\sigma'$ with exactly $j+k-1$ left-to-right maxima, including the set $X = \{n-1, \ldots, n-k+2\}$. Let $s$ be the number of elements from $X$ that occur after $n$ in $\sigma$. By assumption, $\sigma_{i_{j-1}}$, the $j-1$st left-to-right maximum, is the last element that matches $MMP^{k}$ in $\sigma$. Let $a_{1}, a_{2}, \ldots, a_{k-1}$ be the $k-1$ largest elements that occur between $\sigma_{i_{j-1}}$ and $n$. Then we rearrange the $a_{i}$ in such a way that, if we were to switch the positions of the $i$th element (in order as the numbers appear in $\sigma$) of $X$ that appears after $n$ with the $i$th smallest of the $a_{i}$, the elements of $X$ would appear in order.
Finally, for each element $r$ from $n-k+2$ to $n-1$ that occurs after $n$, we remove the substring starting at $r$ and ending just before the next occurrence of an entry from $n-k+2$ to $n-1$. Each of these substrings is then re-inserted earlier in the permutation, so that the substring starting with $n-k+1+i$ is inserted just before the $i$th smallest of $a_{1}, \ldots, a_{k-1}$.
For example, let $n=7$, $k=4$, and $j=2$. Then given the permutation $\sigma = 1324756$, we have $s=2$, and the $a_{i}$ are the elements 3,2, and 4. We rearrange 2,3, and 4 in $\sigma$ to get $1423756$ since swapping 5 with 2 and 6 with 3 would put 456 in order. Next, we remove the strings 5 and 6 from $1423756$ and re-insert them just before 2 and 3 respectively to get $\sigma' = 1452637$. One can easily check that if we apply the original algorithm to $\sigma'$, the $(k-1)!$ permutations we make from $\sigma'$ are $\sigma$ and the permutations we get by rearranging 2,3, and 4 in $\sigma$.
Because of the way that we arranged the $a_{i}$, each of the elements $n-k+2$ through $n-1$ will be a left-to-right maximum in the resulting permutation. Also, by assumption, there are exactly $j-1$ left-to-right maxima that occur at or before $\sigma_{i_{j-1}}$, and $k$ that occur after it (including $n$). Thus, there are exactly $j+k-1$ left-to-right maxima in the resulting permutation. Moreover, this process will reverse the correspondence described above, because of the way (including the order) we chose to move elements with large followers after $n$ in the original algorithm. Thus, the original correspondence is 1-to-$(k-1)!$. $\square$
This description of the coefficients of $P_{n}^{k}(x)$ allows us to answer an unresolved question about a formula for these coefficients. Kitaev and Remmel \cite{KR} observed that a formula for $C_{n,4,2}$ (from entry A001712 in the OEIS \cite{OEIS}) is \[ 6 \sum_{i=2}^{n-3} (-1)^{n+i+1} \binom{i}{2} 3^{i-2} s(n-3,i).\] This formula is, in fact, an example of a formula of Mitrinovic \cite{M}, which we prove a version of here.
\begin{theorem} \label{thm:KRans} For $k \geq 2$, and $0 \leq j \leq n-k+1$, we have \[ \frac{1}{(k-1)!} C_{n,k,j} = \rstir{n}{j+k-1}{k-1} = \sum_{i=j}^{n-k+1} \binom{i}{j} c(n-k+1,i) (k-1)^{i-j}. \] Thus \[ C_{n,k,j} = (k-1)! \sum_{i=j}^{n-k+1} \binom{i}{j} c(n-k+1,i) (k-1)^{i-j}.\] \end{theorem}
Proof. We construct a permutation $\sigma$ counted by $\rstir{n}{j+k-1}{k-1}$. First, we know that the elements $n,n-1, \ldots, n-k+2$ must be left-to-right maxima, and so must occur in order in $\sigma$. We choose a way to insert the entries $1$ through $n-k+1$ to complete $\sigma$. For any $i$ from $j$ to $n-k+1$, choose one of the $c(n-k+1,i)$ permutations of $1, \ldots, n-k+1$ (in one-line notation) with exactly $i$ pseudocycles. We choose $j$ of these pseudocycles to be inserted before $n-k+2$, and insert them in increasing order of their first entry, so that those first entries are left-to-right maxima of $\sigma$. The other $i-j$ pseudocycles will then be inserted somewhere after $n-k+2$: between $n-k+2$ and $n-k+3$, between $n-k+3$ and $n-k+4$, etc., up to between $n-1$ and $n$, or after $n$. That gives $k-1$ possible places that a given pseudocycle could be inserted. If multiple pseudocycles are inserted in the same place, we arrange the pseudocycles in increasing order of their first entries. There are exactly $\binom{i}{j} c(n-k+1,i) (k-1)^{i-j}$ ways to construct $\sigma$ in this way, and every such $\sigma$ can be constructed uniquely in this way. The formula for $C_{n,k,j}$ follows immediately from the first part. $\square$.
Example. Let $n=7$, $k=4$, and $j=2$. Then the theorem gives us \[ 119 = \rstir{7}{5}{3} = 6 \cdot 1 \cdot 9 + 3 \cdot 6 \cdot 3 + 1 \cdot 11 \cdot 1 .\] The first term counts permutations using the permutation 1234 (consisting of 4 different pseudocycles) inserted around 5,6, and 7. The second term includes permutations where 1,2,3, and 4 are arranged in a permutation with three pseudocycles, while the last term fits 1,2,3,4 into two pseudocycles.
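This example can also be confirmed by brute force (again informally, reusing the helper mmp_prime from the earlier sketches): among the $7!$ permutations of $S_{7}$, exactly 119 have five left-to-right maxima that include 5, 6 and 7, and exactly $3! \cdot 119 = 714$ satisfy $mmp^{4'}(\sigma) = 2$.
\begin{verbatim}
from itertools import permutations

def left_to_right_maxima(sigma):
    maxima, best = [], 0
    for v in sigma:
        if v > best:
            maxima.append(v)
            best = v
    return maxima

S7 = list(permutations(range(1, 8)))
count_rstir = 0
for s in S7:
    m = left_to_right_maxima(s)
    if len(m) == 5 and {5, 6, 7} <= set(m):
        count_rstir += 1
print(count_rstir)                                   # 119
print(sum(1 for s in S7 if mmp_prime(s, 4) == 2))    # 714
\end{verbatim}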
The second part of this theorem is somewhat difficult to see directly. Theoretically, we can apply the correspondence described in Theorem \ref{thm:rstir} to the proof of this theorem to understand it. However, the manner of counting is different for permutations in Case 1 of the proof of Theorem \ref{thm:rstir} as opposed to Case 2. For example, if $n=6$, $k=4$, and $j=2$, the term $\binom{3}{2} \cdot c(3,3) \cdot 3^{1}$ counts the permutations 123564, 124365, 124563, 134256, 134526, 134562, 234156, 234516, and 234561. The middle 3 and last 3 of these are easily counted - we choose two numbers out of 1,2,3 to start the permutation and then choose for the remaining one to be placed after 4,5, or 6. However, the first 3 (which correspond to Case 1 in Theorem \ref{thm:rstir}) are counted by choosing 1 and 2 to start the permutation and then choosing which of 3,4, or 5 is placed after 6. Now the factor 3 chooses an element, rather than a position for that element. These two cases make the counting argument much clearer for the $r$-Stirling numbers proper than for the coefficients $C_{n,k,j}$.
\section{A Recurrence For Fixed \texorpdfstring{$n$}{n}}
Now, we examine some more consequences of this description of the coefficients of $P_{n}^{k}(x)$. Broder \cite[Theorem 3]{B} shows that the numbers $\rstir{n}{m}{r}$ satisfy the following recursion in $r$:
\[ \rstir{n}{m}{r} = \frac{1}{r-1} \left( \rstir{n}{m-1}{r-1} - \rstir{n}{m-1}{r}\right) .\]
We prove an equivalent result directly for the coefficients of $P_{n}^{k}(x)$. First, we make an observation.
\begin{lemma} \label{lem:second} Let $\sigma$ be a permutation. For $ j\geq 0$, \[ \textrm{ if } mmp^{k'}(\sigma) = j+1\textrm{, then } mmp^{(k+1)'}(\sigma) = j+1 \textrm{ or }j.\] \end{lemma}
Proof. The condition of matching $MMP^{k+1}$ is strictly stronger than matching $MMP^{k}$, so $mmp^{(k+1)'}(\sigma) \leq mmp^{k'}(\sigma)$. This implies the result for $j=0$, so we now assume $j \geq 1$. Let $\sigma = a_{1}a_{2} \cdots a_{n}$, and let $a_{i_{1}},a_{i_{2}}, \ldots, a_{i_{j}}$ be the $j$ elements (besides 0) which match $MMP^{k}$ in $\sigma$, written so that $i_{1} < i_{2} < \cdots < i_{j}$. Since $a_{i_{j}}$ matches $MMP^{k}$ in $\sigma$, we know that $a_{i_{j-1}} < a_{i_{j}}$, and that there are at least $k$ entries of $\sigma$ larger than $a_{i_{j}}$ that occur after $a_{i_{j}}$ but (weakly) before $n$. Then those $k$ entries and $a_{i_{j}}$ are a set of $k+1$ elements that are larger than $a_{i_{j-1}}$ and occur after it in $\sigma$, but weakly before $n$. Thus $a_{i_{j-1}}$ matches $MMP^{k+1}$ in $\sigma$. Then Lemma \ref{lem:basic} implies that $a_{i_{1}}, \ldots, a_{i_{j-2}}$ match $MMP^{k+1}$, and $mmp^{(k+1)'}(\sigma) \geq j$. $\square$
\begin{theorem} \label{thm:main2} Let $k > 1$ and $j > 0$. Then
\[ (k-1) \cdot \left| \{ \sigma \in S_{n} \, | \, mmp^{k'}(\sigma) = j \textrm{ and } mmp^{(k+1)'}(\sigma) = j-1 \} \right| \]\[ \qquad = \left| \{ \sigma \in S_{n} \, | \, mmp^{k'}(\sigma) = j-1 \textrm{ and } mmp^{(k+1)'}(\sigma) = j-1 \} \right|. \]
As a result, $C_{n,k+1,j-1}$ is given by \[ k \cdot \left| \{ \sigma \in S_{n} \, | \, mmp^{k'}(\sigma) = j \textrm{ and } mmp^{(k+1)'}(\sigma) = j-1 \} \right| .\] \end{theorem}
Proof. We give an explicit $(k-1)$-to-one correspondence between the two sets in question. Write $\sigma$ in the notation of $\eqref{eq:cycstruct}$, with $a_{1,1} < a_{2,1} < \cdots < a_{p,1}$ the left-to-right maxima of $\sigma$. Moreover, assume that $mmp^{k'}(\sigma) = j$ and $mmp^{(k+1)'}(\sigma) = j-1$. Then, by Lemma $\ref{lem:basic}$, $a_{j-1,1}$ matches $MMP^{k}$ but not $MMP^{k+1}$. (If $j=1$, then $a_{0,1}$ is again considered to be the ``virtual'' entry 0 at the start of $\sigma$.) Then, there are exactly $k-1$ entries of $\sigma$ that are larger than $a_{j-1,1}$ that occur after $a_{j-1,1}$ but strictly before $a_{p,1} = n$. Say that these entries are indexed $a_{i_1}, a_{i_2}, \ldots, a_{i_{k-1}}$, with $a_{j-1,1} < a_{i_1} < a_{i_2} < \cdots < a_{i_{k-1}}$. Note that the $a_{i_q}$ and $a_{j-1,1}$ are the $k$ largest elements that occur in the $j-1$st through $p-1$st pseudocycles of $\sigma$, since the $a_{i_{q}}$ are the only elements larger than $a_{j-1,1}$ that occur in those pseudocycles.
Then, for $q$ from $1$ to $k-1$, define $\pi_{q}$ to be the permutation identical to $\sigma$, except with the positions of $a_{j-1,1}$ and $a_{i_q}$ switched. We claim that $mmp^{k'}(\pi_{q}) = j-1 = mmp^{(k+1)'}(\pi_{q})$. First, we note that the entries $a_{1,1}, a_{2,1}, \ldots, a_{j-2,1}$ still match $MMP^{k+1}$ in $\pi_{q}$, since for any $r$ from $1$ to $j-2$, the set of numbers between $a_{r,1}$ and $a_{p,1}$ in $\pi_{q}$ is the same as in $\sigma$. However, $a_{i_q}$ does not match $MMP^{k}$ in $\pi_{q}$ since the only entries of $\pi_{q}$ that are larger than $a_{i_q}$ that occur after the $j-2$nd pseudocycle but before the last pseudocycle are $a_{i_{q+1}}, \ldots, a_{i_{k-1}}$. However, there are only $k-1-q < k-1$ such elements. Moreover, since $a_{j-1,1}$ began the $j-1$st pseudocycle of $\sigma$, it was greater than all the entries of $\sigma$ before it. Since $a_{i_q} > a_{j-1,1}$, we know that $a_{i_q}$ is now the first entry of the $j-1$st pseudocycle of $\pi_{q}$. Thus, $mmp^{k'}(\pi_{q}) = j-1 = mmp^{(k+1)'}(\pi_{q})$.
To finish the proof, we construct the correspondence in the opposite direction. Let $\pi$ be any permutation with $mmp^{k'}(\pi) = j-1$ and $mmp^{(k+1)'}(\pi) = j-1$. Write \[ \pi = a_{1,1}a_{1,2} \cdots a_{1,m_{1}}a_{2,1} \cdots a_{2,m_{2}} \cdots a_{p,1} \cdots a_{p,m_{p}},\] with the usual notation. Then, by assumption and Lemma $\ref{lem:basic}$, $a_{j-2,1}$ matches $MMP^{k+1}$ and $MMP^{k}$ in $\pi$, but $a_{j-1,1}$ matches neither. Thus, there are at least $k$ entries of $\pi$ larger than $a_{j-2,1}$ that occur after $a_{j-2,1}$ but strictly before $a_{p,1} = n$, but at most $k-1$ of those occur after $a_{j-1,1}$. Since none of those entries larger than $a_{j-2,1}$ can occur in the $j-2$nd pseudocycle, there must be exactly $k$ of these - $a_{j-1,1}$ and $k-1$ others that occur after $a_{j-1,1}$ - which we call $a_{i_1}, a_{i_2}, \ldots, a_{i_k}$. Assume that $a_{i_1} < a_{i_2} < \cdots < a_{i_{k}}$. Let $q$ be the index so that $a_{j-1,1} = a_{i_q}$. Notice that $q \neq 1$, since if it were, then $a_{j-1,1}$ would match $MMP^{k}$ in $\pi$. Then, let $\sigma$ be the permutation identical to $\pi$, except with the positions of $a_{i_1}$ and $a_{i_q}$ switched. Then by construction, $\pi$ is exactly the permutation $\pi_{q}$ constructed from $\sigma$ as in the previous paragraph.
This shows that the correspondence above is exactly $(k-1)$-to-one, and its image is the entire set $\{ \sigma \in S_{n} \, | \, mmp^{k'}(\sigma) = j-1 \textrm{ and } mmp^{(k+1)'}(\sigma) = j-1 \}$.
The second part of the result then follows from the first part and Lemma \ref{lem:second}. $\square$
This result can be seen in the following table displaying the generating functions $P_{5}^{k}(x)$, beginning with $k=2$ in the first row. The arrows between the polynomials demonstrate the number of permutations in the sets described by the theorem. For example, the arrow from $10x^3$ in the first row to $18x^2$ in the second row depicts the 9 permutations $\sigma$ with $mmp^{2'}(\sigma) = 3$ and $mmp^{3'}(\sigma) = 2$. We can determine the labels on the arrows from right-to-left, since the first diagonal arrow on the right in each row must be labeled with the coefficient of the highest degree term, each diagonal arrow determines the vertical arrow pointing to the same term (by the theorem), and the total labels on the two arrows coming from any given term must add up to the coefficient of that term.
\begin{figure}
\caption{A Recursive Calculation of $P_{5}^{k}(x)$ for $k \leq 5$. }
\label{fig:extable}
\end{figure}
We define the notation \[ m_{n,k,i,j} = \left| \{ \sigma \in S_{n} \, | \, mmp^{k'}(\sigma) = i \textrm{ and } mmp^{(k+1)'}(\sigma) = j \} \right| \] for the marks on the arrows in diagrams like this one.
As a result, we can easily deduce the following recurrence relation satisfied by the coefficients of $P_{n}^{k}(x)$. It essentially describes the process of determining one row of the above table from the previous row.
\begin{corollary} For $k \geq 2$ and $0 \leq j \leq n-k$, \[ C_{n,k+1,j} = \sum_{i=j+1}^{n-k+1} k(1-k)^{i-j-1} C_{n,k,i} . \] \end{corollary}
Proof. We prove the corollary inductively. By Lemma \ref{lem:second}, if $mmp^{k'}(\sigma) = n-k+1$, then $mmp^{(k+1)'}(\sigma) = n-k$. Thus the theorem implies that $C_{n,k+1,n-k} = k \cdot C_{n,k,n-k+1}$. That is, the corollary holds for $j = n-k$. Then, assume that the formula in the corollary holds for some $j > 0$. This implies (using the second part of the theorem) that \begin{equation} \label{eq:corIH} \sum_{i = j+1}^{n-k+1} (k)(1-k)^{i-j-1} C_{n,k,i} = C_{n,k+1,j} = k \cdot m_{n,k,j+1,j}. \end{equation} Then, by Lemma \ref{lem:second} and the theorem, \begin{equation} \label{eq:corsec} C_{n,k+1,j-1} = m_{n,k,j-1,j-1} + m_{n,k,j,j-1} = k \cdot m_{n,k,j,j-1} \end{equation} and \begin{equation} \label{eq:corthird} m_{n,k,j,j-1} = C_{n,k,j} - m_{n,k,j,j} = C_{n,k,j} - (k-1) m_{n,k,j+1,j}.\end{equation} Thus, by \eqref{eq:corIH}, \eqref{eq:corsec}, and \eqref{eq:corthird}, \[ C_{n,k+1,j-1} = k(C_{n,k,j} + (1-k)m_{n,k,j+1,j}) \] \[ = kC_{n,k,j} + (1-k) \cdot \sum_{i = j+1}^{n-k+1} (k)(1-k)^{i-j-1} C_{n,k,i} \] \[ = \sum_{i = j}^{n-k+1} (k)(1-k)^{i-j} C_{n,k,i}. \] That is, the formula holds for $j-1$, and by induction, it holds for all $j \geq 0$. $\square$
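A quick numerical check of this recurrence (informal, and independent of the earlier sketches) can be run against the closed form of Corollary \ref{cor:genfunc}, for instance with $n = 7$.
\begin{verbatim}
from math import factorial

def coeffs(n, k):
    # Coefficients C_{n,k,j}, j = 0, ..., n-k+1, of (k-1)!(x+k-1)...(x+n-1).
    poly = [factorial(k - 1)]
    for i in range(k - 1, n):
        new = [0] * (len(poly) + 1)
        for d, c in enumerate(poly):
            new[d] += i * c
            new[d + 1] += c
        poly = new
    return poly

n = 7
for k in range(2, 5):
    C_k, C_k1 = coeffs(n, k), coeffs(n, k + 1)
    for j in range(n - k + 1):
        rhs = sum(k * (1 - k) ** (i - j - 1) * C_k[i]
                  for i in range(j + 1, n - k + 2))
        assert C_k1[j] == rhs
\end{verbatim}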
\begin{corollary} \label{cor:sumform} For $k \geq 2$ and $0 \leq j \leq n-k$, \[ \rstir{n}{j+k}{k} = \sum_{i=j+1}^{n-k+1} (1-k)^{i-j-1} \rstir{n}{i+k-1}{k-1}. \] \end{corollary}
Proof. Simply substituting the formula for $C_{n,k,j}$ given in Theorem \ref{thm:rstir} into the previous corollary and canceling $k!$ from both sides gives this result. $\square$
This corollary is a special case of a theorem of Broder \cite[Theorem 19]{B}, but is included since it is such an easy consequence of Theorem \ref{thm:main2}. However, our main goal for the results of this section (and pictures like Figure \ref{fig:extable}) is to describe a (seemingly new) connection between the $r$-Stirling numbers and the classical Stirling numbers.
\section{The Classical Stirling Numbers}
The recursive description of the coefficients of $P_{n}^{k}(x)$ gives us another way to compute an explicit formula for any coefficient $C_{n,k,j}$. For example, we can compute a formula for the coefficients of $P_{n}^{3}(x)$ as follows. We assume that $n$ is large, and begin by writing the entries $c(n,1)$, $c(n,2)$, etc. in the first row. We can then mark the arrows in the diagram left-to-right - the first vertical arrow (and thus the first diagonal arrow) is marked with $c(n,1)$. Thus the second pair of arrows are both marked with $c(n,2) - c(n,1)$. Continuing in this fashion, we can fill out the entire second row of the diagram.
\begin{figure}
\caption{Table to compute $C_{n,3,j}$ in terms of $c(n,k)$.}
\end{figure}
The alternating sum of Stirling numbers of the first kind that appear in this diagram is in fact plus or minus the sum of signed Stirling numbers of the first kind, whichever makes the total sum positive. Thus, the computations depicted in the diagram prove:
\begin{corollary} \label{k3cor}
For any $n > 1$, \[ P_{n}^{3}(x) = \sum_{j=0}^{n-2} 2\left| \sum_{i=1}^{j+1} s(n,i) \right| x^{j} .\]
\end{corollary}
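This corollary is also easy to test numerically (informally, reusing the coefficient routine coeffs from the sketch above), by generating the signed Stirling numbers from their standard recursion $s(n,k) = s(n-1,k-1) - (n-1)s(n-1,k)$.
\begin{verbatim}
def signed_stirling_row(n):
    s = [[0] * (n + 1) for _ in range(n + 1)]
    s[0][0] = 1
    for m in range(1, n + 1):
        for k in range(1, n + 1):
            s[m][k] = s[m - 1][k - 1] - (m - 1) * s[m - 1][k]
    return s[n]

for n in range(2, 8):
    s = signed_stirling_row(n)
    predicted = [2 * abs(sum(s[1:j + 2])) for j in range(n - 1)]
    assert predicted == coeffs(n, 3)
\end{verbatim}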
Kitaev and Remmel pointed out a special case of this theorem \cite[Proposition 4]{KR}. As another example, we find a formula for the number of permutations that almost match $MMP^{k}$. To do so, we fill out only the first two columns of the diagram above. The first vertical and diagonal arrows are determined, and from the first diagonal arrow and the second entry in each row, we can compute the mark on the second vertical arrow. Thus we can compute the entire table one column at a time, since each entry in the $k$th row of the table is $\frac{k}{k-1}$ times the mark on the vertical arrow pointing to it.
\begin{figure}
\caption{Table to compute $C_{n,k,1}$ and $C_{n,k,2}$ in terms of $c(n,k)$ }
\label{fig:seccolform}
\end{figure}
We can see from this computation that the number of permutations that almost match $MMP^{k}$ is of the form $(k-1)c(n,2) - Ac(n,1)$ for some positive constant $A$, which can be found recursively, or using boundary conditions to solve for $A = \frac{c(k,2) -(k-2)!}{(k-2)!}$. (The numerator of the coefficient of $c(n,1)$ in this formula is A052881 in the OEIS \cite{OEIS}.)
In light of Theorem \ref{thm:rstir}, this formula expresses $\rstir{n}{r+1}{r}$ as a linear combination of the classical Stirling numbers. But this procedure could be continued, so that we have the following:
\textbf{Fact}: For any $m > r \geq 1$, there are constants $a_{m,r,1}, a_{m,r,2}, \ldots, a_{m,r,m-r+1}$ so that, for all $n \geq m$, we have \[ \rstir{n}{m}{r} = \sum_{i=1}^{m-r+1} a_{m,r,i} c(n,i) .\]
We will describe these coefficients more precisely in Theorem \ref{thm:lincomb} below. Note that the $r$-Stirling numbers were previously known to be linear combinations of classical Stirling numbers of the first kind. Koutras \cite[Equation 1.8]{Kout} (for example) describes $\rstir{n}{m}{r}$ as a linear combination of $c(k,k), c(k+1,k), \ldots, c(n,k)$ rather than $c(n,1),c(n,2), \ldots, c(n,k)$, as we have here. Broder \cite[Equation 43]{B} also describes $\rstir{n}{m}{r}$ as a linear combination of Stirling numbers, but as an integer linear combination of $c(n,m), c(n,m+1), \ldots, c(n,n)$, so that the formulas described in this fact are new. (In fact, Broder's formula can be recovered by drawing a table like Figure \ref{fig:seccolform} that starts from the right-hand side rather than the left.)
\section{Harmonic Sums}
The Stirling numbers (and thus the $r$-Stirling numbers) have a number of connections to harmonic numbers, which we now explore. We define the \textit{$n$th harmonic number} to be $H_{n}^{(1)} = \sum_{i=1}^{n} \frac{1}{i}$. We recursively define the \textit{iterated harmonic sum} for $j > 1$ by \[ H_{n}^{(j)} = \sum_{i=1}^{n} \frac{H_{i}^{(j-1)}}{i} .\] Conventionally, we will set $H_{n}^{(0)} = 1$ for any $n > 0$.
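These iterated sums are easy to tabulate exactly; for instance, the following is an informal Python sketch using exact rational arithmetic.
\begin{verbatim}
from fractions import Fraction

def H(n, j):
    # Iterated harmonic sum H_n^{(j)}, with H_n^{(0)} = 1.
    if j == 0:
        return Fraction(1)
    return sum(H(i, j - 1) / i for i in range(1, n + 1))

print([n * H(n - 1, 1) for n in range(2, 6)])  # 2, 9/2, 22/3, 125/12
print([n * H(n - 1, 2) for n in range(2, 5)])  # 2, 21/4, 85/9
\end{verbatim}
(The values are printed in lowest terms; note that $22/3 = 44/6$, $125/12 = 250/24$, and $85/9 = 340/36$.)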
We will be most interested in the sequences $n H_{n-1}^{(j)}$. For example, $n H_{n-1}^{(1)}$ is the sequence \[ 2,9/2, 44/6, 250/24, \ldots,\] which are exactly the coefficients of $c(n,1)$ that appear in the second column of Figure \ref{fig:seccolform}. We can also check that $n H_{n-1}^{(2)}$ is \[ 2, 21/4, 340/36, \ldots, \] which appear in the third column of Figure \ref{fig:seccolform}. This leads us to the following theorem:
\begin{theorem} \label{thm:lincomb} For $j>0$, and $k>2$, the coefficients of $P_{n}^{k}(x)$ satisfy \[ C_{n,k,j} = \sum_{i=1}^{j+1} (k-1) H_{k-2}^{(j+1-i)} (-1)^{j+1-i} c(n,i).\] \end{theorem}
Proof. We proceed inductively. For $k=3$, the statement is exactly the same as Corollary \ref{k3cor}. Now, we assume the result for a fixed value of $k$, and prove the result for $C_{n,k+1,j}$ by induction on $j$. Again, the result for $j=0$ is obvious since $C_{n,k+1,0} = k(n-1)! = k c(n,1)$, as the theorem claims. Now, we assume the result for $C_{n,k+1,j}$ for some fixed $j$, and prove the result for $j+1$.
By assumption, $C_{n,k+1,j} = \sum_{i=1}^{j+1} k H_{k-1}^{(j+1-i)} (-1)^{j+1-i} c(n,i).$ Then, we imagine the tables as drawn in Figure \ref{fig:seccolform}, focusing on $C_{n,k,j+1}$, $C_{n,k+1,j}$, and $C_{n,k+1,j+1}$. Recall the notation $m_{n,k,j,j'}$ for the mark on the arrow pointing from $C_{n,k,j}$ to $C_{n,k+1,j'}$. By Theorem \ref{thm:main2}, $m_{n,k,j+1,j} = \sum_{i=1}^{j+1} H_{k-1}^{(j+1-i)} (-1)^{j+1-i} c(n,i).$ Then, we compute: \begin{align*} m_{n,k,j+1,j+1} = & C_{n,k,j+1} - \sum_{i=1}^{j+1} H_{k-1}^{(j+1-i)} (-1)^{j+1-i} c(n,i) \\ = & \sum_{i=1}^{j+2} (k-1) H_{k-2}^{(j+2-i)} (-1)^{j+2-i} c(n,i) - \sum_{i=1}^{j+1} H_{k-1}^{(j+1-i)} (-1)^{j+1-i} c(n,i) \\ = & (k-1)c(n,j+2) + \sum_{i=1}^{j+1} (-1)^{j+2-i} c(n,i) \left( (k-1) H_{k-2}^{(j+2-i)} + H_{k-1}^{(j+1-i)}\right) \\ = & (k-1) \left( c(n,j+2) + \sum_{i=1}^{j+1} (-1)^{j+2-i} c(n,i) \left( H_{k-2}^{(j+2-i)} + \frac{1}{k-1} H_{k-1}^{(j+1-i)}\right) \right) \\ = & (k-1) \left( c(n,j+2) + \sum_{i=1}^{j+1} (-1)^{j+2-i} c(n,i) H_{k-1}^{(j+2-i)} \right). \end{align*}
But again by Theorem \ref{thm:main2}, $C_{n,k+1,j+1}$ is \[ \frac{k}{k-1}m_{n,k,j+1,j+1} = \sum_{i=1}^{j+2} (-1)^{j+2-i} k \cdot c(n,i) H_{k-1}^{(j+2-i)},\] as desired. $\square$
Loeb \cite{L} defines a generalization of the Stirling numbers $s(n,k)$ for arbitrary values of $n$, and examines these numbers when $n$ is a negative integer. Briefly, $s(n,k)$ can be defined for all integers $n$ and $k \geq 0$ using the recursive relation $s(n+1,k) = s(n,k-1) - ns(n,k)$ and the initial conditions $s(n,0) = \delta_{n0}$.
For our purposes, we will only use the following:
\begin{theorem} (\cite[Theorem 2]{L}) Using Loeb's \cite{L} definition of $s(-n,k)$, we have \[ (-1)^{k}n!s(-n,k) = H_{n}^{(k)}.\] \end{theorem}
Substituting this formula for $H_{m}^{(k)}$ into Theorem \ref{thm:lincomb} gives us an appealing formula for the coefficients $C_{n,k,j}$ and the $r$-Stirling numbers in terms of $c(n,k)$ and $s(-n,k)$.
\begin{corollary} For $j>0$ and $k>2$, we have \[ C_{n,k,j} = \sum_{i=1}^{j+1} (k-1)!s(2-k,j+1-i) c(n,i) \] and \[ \rstir{n}{j+r}{r} = \sum_{i=1}^{j+1} s(1-r,j+1-i)c(n,i).\] \end{corollary}
Interestingly, this is not the only connection between $r$-Stirling numbers and iterated harmonic sums. We define another set of iterated harmonic sums as follows. Set \[ H_{n,j}^{1} = \sum_{1 \leq i_{1} < i_{2} < \cdots < i_{j} \leq n} \frac{1}{i_{1}i_{2} \cdots i_{j}}, \] and then for $k >1$, recursively define \[ H_{n,j}^{k} = \sum_{i=1}^{n} H_{i,j}^{k-1} .\] (Notice the slight change in notation to distinguish this notion from $H_{n}^{(j)}$.) Of course, $H_{n,j}^{k} = 0$ for any $n < j$.
First, we note that the generating function for the Stirling numbers of the first kind, \[ (x+1)(x+2) \cdots (x+n-1) = \sum_{j=0}^{n-1} c(n,j+1) x^{j}\] easily implies that \[ c(n,j+1) = \sum_{1 \leq i_{1} < i_{2} < \cdots < i_{n-j-1} \leq n-1} i_{1}i_{2} \cdots i_{n-j-1} = e_{n-j-1}(1,2, \ldots, n-1).\] (Here, $e_{k}$ is the usual elementary symmetric function.) We can re-write this formula for $j>0$ as \begin{equation} \label{eq:harm1} c(n,j+1) = (n-1)! \sum_{1 \leq i_{1} < i_{2} < \cdots < i_{j} \leq n-1} \frac{1}{i_{1}i_{2} \cdots i_{j}} = (n-1)! H_{n-1,j}^{1}.\end{equation}
Similarly, the $r$-Stirling numbers have generating function \[ (x+r)(x+r+1) \cdots (x+n-1) = \sum_{j=0}^{n-r} \rstir{n}{j+r}{r}x^{j}, \] so that \[ \rstir{n}{j+r}{r} = \sum_{r \leq i_{1} < i_{2} < \cdots < i_{n-j-r} \leq n-1} i_{1}i_{2} \cdots i_{n-j-r} = e_{n-j-r}(r,r+1,\ldots, n-1), \] which is equivalent to \[ \rstir{n}{j+r}{r} = \frac{(n-1)!}{(r-1)!} \sum_{r \leq i_{1} < i_{2} < \cdots < i_{j} \leq n-1} \frac{1}{i_{1}i_{2} \cdots i_{j}}.\] A combinatorial interpretation of this formula is described by Broder \cite[Theorem 7]{B}.
However, there is another way to generalize equation \eqref{eq:harm1}. We begin with another recurrence we can use to describe the $r$-Stirling numbers, which is due to Broder \cite[Lemma 11]{B}.
\begin{proposition} \label{prop:recur} For $r>1$ and $n \geq m \geq r$, the $r$-Stirling numbers satisfy \[ \rstir{n}{m}{r} = \sum_{i = 0}^{n-m} \frac{(n-r)!}{(m+i-r)!} \rstir{m+i-1}{m-1}{r-1} .\] \end{proposition}
Proof. We count the permutations counted by $\rstir{n}{m}{r}$ according to the position of $n$. Consider a permutation $\sigma$ in $S_{n}$ with $m$ left-to-right maxima that include $n, n-1, \ldots, n-r+1$. Since $n$ is the $m$th largest left-to-right maximum, it must occur at position $m$ or later in $\sigma$. Now, for $i$ from $0$ to $n-m$, we notice that if $n$ is in position $m+i$, then the $n - i - m$ entries that occur after $n$ in $\sigma$ will not be left-to-right maxima, so they must be chosen from $1,2, \ldots, n-r$. There are then $\frac{(n-r)!}{(m+i-r)!}$ ways to choose and place these elements. We also note that exactly $m-1$ more left-to-right maxima must occur before $n$, including $n-1,n-2, \ldots, n-r+1$. This implies that $\textrm{red}(\sigma_{1}\sigma_{2} \cdots \sigma_{m+i-1})$ is a permutation in $S_{m+i-1}$ with exactly $m-1$ left-to-right maxima, including $m+i-1, m+i-2, \ldots, m+i-r+1$, of which there are exactly $\rstir{m+i-1}{m-1}{r-1}$. $\square$
Then another formula for the $r$-Stirling numbers is given by the following:
\begin{theorem} For $r \geq 1$, $m > r$ and $n \geq r$, \[ \rstir{n}{m}{r} = (n-r)! H_{n-r,m-r}^{r}.\] \end{theorem}
Proof. The result is clear if $r=1$, since then the formula coincides with that given in \eqref{eq:harm1}. We proceed by induction on $r$. Recall that \[(n-r)! H_{n-r,m-r}^{r} = (n-r)! \sum_{i=1}^{n-r} H_{i,m-r}^{r-1} = (n-r)! \sum_{i=m-r}^{n-r} H_{i,m-r}^{r-1} \] \[ = (n-r)! \sum_{i=0}^{n-m} H_{m+i-r, m-r}^{r-1} \] We claim that, for $i$ from $0$ to $n-m$, the term $(n-r)! H_{m+i-r,m-r}^{r-1}$ counts the permutations $\sigma$ with $n$ in position $m+i$. By the proof of Proposition \ref{prop:recur}, there are $\frac{(n-r)!}{(m+i-r)!} \rstir{m+i-1}{m-1}{r-1}$ such permutations. However, inductively, we may assume that \[ \rstir{m+i-1}{m-1}{r-1} = (m+i-r)! H_{m+i-r,m-r}^{r-1}.\]
Then the total number of permutations we have just counted is \[ \frac{(n-r)!}{(m+i-r)!} \cdot (m+i-r)!H_{m+i-r,m-r}^{r-1} = (n-r)! H_{m+i-r,m-r}^{r-1}.\] Then summing over all $i$ from $0$ to $n-m$ gives the desired result. $\square$
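This identity, too, can be checked mechanically for small parameters (an informal Python sketch using exact fractions; the function names are ad hoc, and the $r$-Stirling numbers are read off the generating function quoted earlier).
\begin{verbatim}
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

def rstirling(n, m, r):
    # Coefficient of x^(m-r) in (x+r)(x+r+1)...(x+n-1).
    poly = [1]
    for a in range(r, n):
        new = [0] * (len(poly) + 1)
        for d, c in enumerate(poly):
            new[d] += a * c
            new[d + 1] += c
        poly = new
    return poly[m - r]

def H2(n, j, k):
    # The iterated sums H_{n,j}^k defined above, as exact fractions.
    if k == 1:
        return sum(Fraction(1, prod(c)) for c in combinations(range(1, n + 1), j))
    return sum(H2(i, j, k - 1) for i in range(1, n + 1))

for n in range(2, 9):
    for r in range(1, n):
        for m in range(r + 1, n + 1):
            assert rstirling(n, m, r) == factorial(n - r) * H2(n - r, m - r, r)
\end{verbatim}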
Although this particular description of this formula for the $r$-Stirling numbers seems to be new, the formula itself is not entirely new. We could trace back the nested sums $H_{n-r,m-r}^{r}$ to express $\rstir{n}{m}{r}$ as an integer linear combination of the original harmonic sums $H_{i,m-1}^{1} = \frac{1}{i!} \rstir{1+i}{m}{1}$. That formula appears to be a special case of a theorem of Broder \cite[Theorem 12, the case $p=1$]{B}.
Adamchik \cite{A} makes some other connections between the classical Stirling numbers and generalized harmonic numbers.
\section{Border Patterns}
Finally, we note that the $r$-Stirling numbers $\rstir{n}{r+2}{r}$ are also related to another permutation pattern. Kitaev and Liese \cite{KL} introduce the \textit{border mesh pattern} $p$ in Figure \ref{fig:patternp}: \begin{figure}
\caption{The pattern $p$ }
\label{fig:patternp}
\end{figure}
This pattern is called a border pattern since the squares around the outside edge of the pattern are all marked ``$=0$''. We say that a permutation $\sigma = \sigma_{1} \cdots \sigma_{n}$ matches the pattern $p$ if $\sigma_{1} > 1$, $\sigma_{n} = n$, and there is an entry $a$ so that $\sigma_{1} < a < n$, and $a$ occurs after 1 in $\sigma$. (The first dot in the picture is $\sigma_{1}$, the second must be 1, and the fourth must be $n$. The third dot in the picture is $a$.) When we count occurrences of $p$, we will count every set of 4 entries of $\sigma$ that meet the description of the pattern. Since $\sigma_{1}$, 1, and $n$ must be involved in the pattern, this is equivalent to counting the number of possibilities for $a$ - i.e. the number of entries between 1 and $n$ that are larger than $\sigma_{1}$.
\begin{theorem} For any $k \geq 1$, the number of permutations in $S_{n}$ that match $p$ exactly $k$ times is exactly the number of permutations in $S_{n-1}$ that match $MMP^{k+1}$ exactly once, but almost match $MMP^{k+2}$. \end{theorem}
Proof. Again, we can give a bijective proof. First, we note that if $\sigma = \sigma_{1} \cdots \sigma_{n} \in S_{n}$ matches $p$ exactly $k$ times, then $\sigma = \sigma_{1} A 1 B n$, where $A$ and $B$ are strings, and $B$ contains exactly $k$ elements larger than $\sigma_{1}$. Then we form $\sigma' = \sigma_{1} B n A$, and let $\phi = (\sigma_{1}-1) B' (n-1) A'$, where $A'$ and $B'$ are obtained from $A$ and $B$, respectively, by subtracting 1 from each number in the string. Then $\phi \in S_{n-1}$ since $\sigma'$ contained $2, \ldots, n$.
Since the string $B$ contains exactly $k$ elements greater than $\sigma_{1}$, $B'$ will contain exactly $k$ elements greater than $\sigma_{1}-1$. Hence $\phi$ will match $MMP^{k+1}$ but not $MMP^{k+2}$. This map will be a bijection since its inverse can be constructed easily. $\square$
\end{document} | arXiv |
ChoiMap
From QETLAB
Produces the Choi map or one of its generalizations
Other toolboxes required
Related functions
ReductionMap
Function category
Superoperators
ChoiMap is a function that returns the Choi matrix of the linear map on $3 \times 3$ matrices that acts as follows:
\[\begin{bmatrix}x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33}\end{bmatrix} \mapsto \begin{bmatrix}ax_{11}+bx_{22}+cx_{33} & -x_{12} & -x_{13} \\ -x_{21} & cx_{11}+ax_{22}+bx_{33} & -x_{23} \\ -x_{31} & -x_{32} & bx_{11}+cx_{22}+ax_{33}\end{bmatrix},\]
where $a,b,c$ are given real numbers. This map is positive if and only if $a \geq 0$, $a + b + c \geq 2$, and $bc \geq (1-a)^2$ whenever $0 \leq a \leq 1$[1].
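The same matrix is easy to reproduce outside of QETLAB. The following Python/NumPy sketch (not part of QETLAB; the function name is chosen here only for illustration) assembles the Choi matrix as the block matrix whose (i,j) block is the image of the matrix unit E_ij under the map above:
import numpy as np

def choi_map_matrix(a=1, b=1, c=0):
    def phi(x):
        diag = np.diag([a * x[0, 0] + b * x[1, 1] + c * x[2, 2],
                        c * x[0, 0] + a * x[1, 1] + b * x[2, 2],
                        b * x[0, 0] + c * x[1, 1] + a * x[2, 2]])
        return diag - (x - np.diag(np.diag(x)))
    C = np.zeros((9, 9))
    for i in range(3):
        for j in range(3):
            E = np.zeros((3, 3))
            E[i, j] = 1
            C += np.kron(E, phi(E))
    return C

print(choi_map_matrix())          # standard Choi map (a = b = 1, c = 0)
print(choi_map_matrix(0, 1, 1))   # the reduction-map case discussed below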
C = ChoiMap()
C = ChoiMap(A,B,C)
Argument descriptions
A,B,C: Real parameters of the Choi map. If they are not provided, the default Choi map (with A = B = 1 and C = 0) is returned.
The standard Choi map
The following code returns the Choi matrix of the Choi map and then verifies that the Choi map is indeed positive (i.e., verifies that its Choi matrix is block positive):
>> C = ChoiMap()

C =

1 0 0 0 -1 0 0 0 -1
0 0 0 0 0 0 0 0 0
0 0 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0
-1 0 0 0 1 0 0 0 -1
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 0
-1 0 0 0 -1 0 0 0 1
>> IsBlockPositive(C) % verify that the Choi map is positive
The reduction map
The reduction map is the map $R$ defined by $R(X) = {\rm Tr}(X)I - X$, where $I$ is the identity operator. The reduction map is the Choi map that arises when $a = 0$, $b = c = 1$:
>> ChoiMap(0,1,1)
>> full(ReductionMap(3))
Click here to view this function's source code on github.
1. S. J. Cho, S.-H. Kye, and S. G. Lee. Generalized Choi maps in three-dimensional matrix algebra. Linear Algebra Appl., 171:213, 1992.
Content on this wiki is available under the Creative Commons Attribution Share Alike license. The QETLAB source code is available under the BSD license. | CommonCrawl |
Zdeněk Hedrlín
Zdeněk Hedrlín (1933 – April 22, 2018) was a Czech mathematician, specializing in universal algebra and combinatorial theory, both in pure and applied mathematics.
Zdeněk Hedrlín received his PhD from Prague's Charles University in 1963. His thesis on commutative semigroups was supervised by Miroslav Katětov.[1] Hedrlín held the title of Docent (associate professor) at Charles University. He worked at its Faculty of Mathematics and Physics for over 60 years, until his death at age 85. He was among the first Czech mathematicians to do research on category theory.[2]
Already in the mid-1960s, the Prague school of Zdeněk Hedrlín, Aleš Pultr and Věra Trnková had devised a particularly nice notion of concrete categories over Set, the so-called functor-structured categories ...[3]
In 1970 Hedrlín was an Invited Speaker at the International Congress of Mathematicians in Nice.[4] In the later part of his career, he focused on applications of relational structures and led very successful special and interdisciplinary seminars. Applications to biological cell behavior earned him and his students a European grant.[2] (He and his students worked on computational cell models of cancer.)
Hedrlín was a member of the editorial board of the Journal of Pure and Applied Algebra.[5] His Erdős number is 1.[6] His doctoral students include Vojtěch Rödl.[1]
Selected publications
• Hedrlín, Z. (1961). "On common fixed points of commutative mappings" (PDF). Commentationes Mathematicae Universitatis Carolinae. 2 (4): 25–28.
• Hedrlín, Zdeněk (1962). "On number of commutative mappings from finite set into itself (Preliminary communication)" (PDF). Commentationes Mathematicae Universitatis Carolinae. 3 (1): 32.
• Hedrlín, Z.; Pultr, A. (1963). "Remark on topological spaces with given semigroups" (PDF). Commentationes Mathematicae Universitatis Carolinae. 4 (4): 161–163.
• Hedrlín, Z.; Pultr, A. (1964). "Relations (graphs) with given finitely generated semigroups". Monatshefte für Mathematik. 68 (3): 213–217. doi:10.1007/BF01298508. S2CID 120856684.
• Pultr, A.; Hedrlín, Z. (1964). "Relations (graphs) with given infinite semigroups". Monatshefte für Mathematik. 68 (5): 421–425. doi:10.1007/BF01304185. S2CID 122610862.
• Baayen, P. C.; Hedrlin, Z. (1964). "On the existence of well distributed sequences in compact spaces" (PDF). Stichting Mathematisch Centrum. Zuivere Wiskunde.
• Hedrlín, Z.; Pultr, A. (1965). "Symmetric relations (undirected graphs) with given semigroups". Monatshefte für Mathematik. 69 (4): 318–322. doi:10.1007/BF01297617. S2CID 120384797.
• Vopěnka, P.; Pultr, A.; Hedrlín, Z. (1965). "A rigid relation exists on any set" (PDF). Commentationes Mathematicae Universitatis Carolinae. 6 (2): 149–155.
• Hedrlín, Zdeněk; Pultr, Aleš (1966). "On full embeddings of categories of algebras". Illinois Journal of Mathematics. 10 (3): 392–406. doi:10.1215/ijm/1256054991. (over 160 citations)
• Hedrlín, Z.; Pultr, A. (1966). "On Rigid Undirected Graphs". Canadian Journal of Mathematics. 18: 1237–1242. doi:10.4153/CJM-1966-121-7. S2CID 124453196.
• Hedrlín, Z.; Vopěnka, P. (1966). "An undecidable theorem concerning full embeddings into categories of algebras" (PDF). Commentationes Mathematicae Universitatis Carolinae. 7 (3): 401–409.
• Hedrlín, Z.; Pultr, A.; Trnková, V. (1967). "Concerning a categorial approach to topological and algebraic theories" (PDF). In: (ed.): General Topology and its Relations to Modern Analysis and Algebra, Proceedings of the second Prague topological symposium, 1966. Academia Publishing House of the Czechoslovak Academy of Sciences, Praha. pp. 176–181.
• Hedrlín, Zdeněk; Lambek, Joachim (1969). "How comprehensive is the category of semigroups?". Journal of Algebra. 11 (2): 195–212. doi:10.1016/0021-8693(69)90054-4.
• Hedrlín, Zdeněk (1969). "On universal partly ordered sets and classes" (PDF). Journal of Algebra. 11 (4): 503–509. doi:10.1016/0021-8693(69)90089-1.
• Hedrlín, Z.; Mendelsohn, E. (1969). "The Category of Graphs with a Given Subgraph-with Applications to Topology and Algebra". Canadian Journal of Mathematics. 21: 1506–1517. doi:10.4153/CJM-1969-165-5. S2CID 124324655.
• Goralčík, Pavel; Hedrlín, Zdeněk (1971). "On reconstruction of monoids from their table fragments". Mathematische Zeitschrift. 122: 82–92. doi:10.1007/BF01113568. S2CID 120230682.
• Chvatal, V.; Erdös, P.; Hedrlín, Z. (1972). "Ramsey's theorem and self-complementary graphs". Discrete Mathematics. 3 (4): 301–304. doi:10.1016/0012-365X(72)90087-8.
• Goralčík, P.; Hedrlín, Z.; Koubek, V.; Ryšunková, J. (1982). "A game of composing binary relations" (PDF). R.A.I.R.O.: Informatique Théorique. 16 (4): 365–369. doi:10.1051/ita/1982160403651.
• Hedrlín, Z.; Hell, P.; Ko, C.S. (1982). "Homomorphism Interpolation and Approximation". Algebraic and Geometric Combinatorics. North-Holland Mathematics Studies. Vol. 65. pp. 213–227. doi:10.1016/S0304-0208(08)73267-5. ISBN 9780444863652.
References
1. Zdeněk Hedrlín at the Mathematics Genealogy Project
2. Kratochvíl, Jan (April 26, 2018). "Zemřel doc. Zdeněk Hedrlín (deceased docent Zdeněk Hedrlín)". Matematicko-fyzikální fakulty Univerzity Karlovy (Mathematics and Physics Faculty of Charles University).
3. Koslowski, Jürgen; Melton, Austin, eds. (6 December 2012). "Chapter. Contributions and importance of Professor George E. Strecker's Research by Jürgen Koslowski". Categorical Perspectives. Springer Science & Business Media. pp. 63–90. ISBN 978-1-4612-1370-3. (quote from p. 73)
4. Hedrlín, Z. (1970). "Extensions of structures and full embeddings of categories". In: Actes du Congrès international des mathématiciens, 1–10 Septembre 1970, Nice. Vol. 1. pp. 319–321.
5. "Managing Editors; Editors" (PDF). Journal of Pure and Applied Algebra.
6. Chvatal, V.; Erdös, P.; Hedrlín, Z. (1972). "Ramsey's theorem and self-complementary graphs". Discrete Mathematics. 3 (4): 301–304. doi:10.1016/0012-365X(72)90087-8.
| Wikipedia |
communications earth & environment
The vulnerability of lakes to climate change along an altitudinal gradient
Love Råman Vinnå ORCID: orcid.org/0000-0002-9108-80571,
Iselin Medhaug ORCID: orcid.org/0000-0002-1115-88962,
Martin Schmid ORCID: orcid.org/0000-0001-8699-56911 &
Damien Bouffard ORCID: orcid.org/0000-0002-2005-97181
Communications Earth & Environment volume 2, Article number: 35 (2021)
Studies of future 21st century climate warming in lakes along altitudinal gradients have been partially obscured by local atmospheric phenomena unresolved in climate models. Here we forced the physical lake model Simstrat with locally downscaled climate models under three future scenarios to investigate the impact on 29 Swiss lakes, varying in size along an altitudinal gradient. Results from the worst-case scenario project substantial change at the end of the century in duration of ice-cover at mid to high altitude (−2 to −107 days), stratification duration (winter −17 to −84 days, summer −2 to 73 days), while lower and especially mid altitude (present-day mean annual air temperature from 9 °C to 3 °C) dimictic lakes risk shifting to monomictic regimes (seven out of the eight lakes). Analysis further indicates that for many lakes shifts in mixing regime can be avoided by adhering to the most stringent scenario.
Lakes are commonly cited as sentinels of climate change1. The physical response of lakes to changes in atmospheric forcing is traditionally assessed by quantifying modifications in their thermal structure. These modifications typically include evolution of lake surface and bottom temperatures, the duration of the summer and winter stratification and the duration of ice cover. The mixing regime of lakes2 is a physical parameter describing the timing and frequency with which lake temperatures fully homogenize on an annual basis. Compared with other physical characteristics, these water-column homogenization events exert the largest overall influence on the functioning of lake ecosystems. A change in mixing regime thus profoundly alters a lake by enhancing or preventing vertical fluxes of nutrients and dissolved gases. This in turn can reshape food web dynamics in the lake3,4,5,6,7,8,9. Numerous studies have already reported specific effects of climate change on lake temperature10,11,12, ice cover13,14,15,16,17, stratification and mixing regime18,19,20,21 and nonlinear seasonal interactions22 at global and regional scales. These changes have been linked to trends in atmospheric climate variables (e.g. air temperature) and to individual lake characteristics (e.g. volume, area, transparency) either by statistical methods12,16,20,22,23 or process-based numerical models10,11,14,15,19,21,24,25.
In addition to the well-studied latitudinal gradients in lake response to climatic change19, altitudinal gradients in atmospheric conditions26,27 may also influence the thermal structure of lakes28. Lake mixing regimes have classically been defined at global scales as a function of latitude and altitude2. Given the altitude-dependence of both climate trends26,27 and lake mixing regimes2, we hypothesize that lake response to climate change varies as a function of altitude. This hypothesis carries major implications for the vulnerability of lakes in terms of their ecosystems and their role in the carbon cycle. The remoteness of high altitude lakes, however, limits the availability of long-term datasets that might elucidate these relations26,29. Given the lack of long-term data, modelling approaches can fill in gaps by estimating responses from past lake data and future projected climate scenarios. This study investigates these altitude dependencies by modelling the response of 29 lakes in Switzerland, spanning an altitudinal gradient from 193 to 1797 m a.s.l. (Fig. 1g and Table 1). The numerical simulations are performed with the deterministic one-dimensional physical lake model Simstrat30,31.
Fig. 1: Time series and trend maps of projected climate from 1981 to 2099.
Temperature a–c and surface downward solar radiation d–f shown at local scale (downscaled RCM simulations). Shaded areas show the full range of annual means relative to a 1981–2010 base period for all models under scenarios RCP8.5 (orange; 17 models), RCP4.5 (green; 7 models) and RCP2.6 (blue; 7 models) with ensemble mean linear trends given for each graph. For other atmospheric forcing variables see Supplementary Fig. 2. Topography of Switzerland g with 29 lakes (numbered by altitude, see Table 1 for details) modelled using the Simstrat physical lake model (also in h). Decadal model median trend in surface downward solar radiation (RCP8.5) from 17 regional climate models for h Switzerland and i Europe, together with j regional European altitude vs. latitude dependent trends (10W to 40E, 35N to 70N). Maps created with QGIS v3.4, geographical data obtained for topography from www.gadm.org version 2.8, country boundaries www.naturalearthdata.com version 3.1.0, and lakes www.diva-gis.org/datadown.
Table 1 Properties of the lakes included in the present study with lakes ranked according to increasing altitude (first column) and volume (second column).
An important challenge when modelling lakes along an altitudinal gradient is that the altitude-dependence of climate forcing in Regional Climate Model (RCM) projections is represented by averages over individual grid cells. The complex topography of mountainous areas can strongly affect local atmospheric conditions. In the present study, we address this challenge by using recently developed RCM projections downscaled to the local scale for Switzerland32. We specifically focus on the period from 1981 to 2099 using RCM projections for the three Representative Concentration Pathways (RCP), RCP8.5, RCP4.5 and RCP2.6 (Supplementary Fig. 1). The numbers at the end of the RCPs indicate the increased radiative forcing in W m−2 in the year 2100 compared to preindustrial levels. The worst-case scenario RCP8.5 implies continuously rising global greenhouse gas emissions, in RCP 4.5 emissions are peaking around 2050 and subsequently declining, while the most stringent scenario RCP2.6 was designed to limit global warming to 2 °C in line with the Paris agreement and requires stringent measures for emission reduction and net negative emissions towards the end of the twenty-first century33. Differences in climate between the late twenty-first century and the base period are here expressed as the median, mean and variance for 2071–2099 compared to 1982–2010.
The results of our simulations indicate that climate change will cause substantial alterations in the thermal structure of the 29 investigated lakes. Mean annual lake surface temperatures are projected to consistently increase for all lakes. Projected changes in other thermal properties, such as lake bottom temperature and the duration of summer and winter stratification as well as ice cover, depend on individual lake properties such as lake volume and altitude. The projected changes in stratification and ice cover duration are largest for high altitude lakes. Nonetheless, these lakes will maintain their annual ice cover and a dimictic stratification regime throughout the twenty-first century. Low to mid altitude (<1500 m a.s.l.) small lakes (<0.5 km3) were found to be especially sensitive to changes. These lakes are projected to lose ice cover and change from a dimictic to a monomictic regime during the twenty-first century. Results based on different emission scenarios indicate that changes in mixing regime and loss of ice cover can be counteracted, but not entirely avoided, with climate protection measures as projected by scenario RCP2.6. The results of the present study cannot be directly transferred to other regions because the altitudinal variation of both lake thermal properties and trends in climate variables are not necessarily similar elsewhere. However, we would expect similar sensitivities of lakes to climate change in other regions where lake mixing regimes range from primarily monomictic at lower altitudes to dimictic at high altitudes. The investigations in the present study are limited to the direct impacts of projected changes in climate conditions on the thermal structure of lakes. Further investigations are needed to quantify possible indirect impacts, especially those resulting from changes in the lake catchments34, including but not limited to glacier retreat and permafrost thaw35, alterations of snow melt dynamics36,37, river flow regimes and resulting changes in loads of suspended particles and nutrients38,39. On a hopeful note, this study shows that many lakes can potentially avoid shifts in mixing regimes and sustain ice cover throughout the twenty-first century if greenhouse gas concentration trajectories remain within envelopes envisioned under RCP2.6.
Altitudinal gradients in RCM forcing and heat fluxes
An analysis of the downscaled dataset confirmed altitude-dependent trends associated with several climate variables (Fig. 1). For RCP8.5, averaged air temperature trends across all RCM simulations are ~20% larger at 1700 m a.s.l (0.54 °C decade−1, Fig. 1c) than those at 450 m a.s.l. (0.44 °C decade−1, Fig. 1a). Conversely, surface downward solar radiation (305–2800 nm) is projected to decrease at high altitudes (−0.84 W m−2 decade−1 at 1700 m a.s.l. for RCP8.5, Fig. 1f) but not at low altitudes (Fig. 1d). Changes in shortwave solar radiation can in principle result from changes in cloud cover, in atmospheric water content, or in aerosol concentrations. A recent analysis showed that global climate models (GCMs) generally project significantly increasing surface downward shortwave radiation over Europe due to projected reduced anthropogenic aerosols40. However, most EURO-CORDEX RCMs do not take into account the future evolution of these aerosols. The projected spatially variable shortwave radiation in these models, with decreasing radiation at high altitudes and latitudes (Fig. 1i, j) and increasing radiation elsewhere, is therefore likely related to the spatial variability in projected trends of cloud cover, cloud types and atmospheric water content in the RCM simulations. In the alpine region of Switzerland, this results in a significant negative altitudinal gradient of the solar radiation trend of ~−0.63 W m−2 decade−1 km−1. For parts of the pre-alps, Jura Mountains and most mid to low altitudes (<~1000 m a.s.l.) in south-west Europe, the RCMs show a positive trend for solar radiation (Fig. 1h–j). The projected wind speed reductions of ~0.003 (low altitude) to 0.008 (high altitude) m s−1 decade−1 for RCP8.5 are in line with historical observations for the alpine region but notably smaller than observed recent atmospheric stilling in central and northern Europe24. Trends are also projected for some other atmospheric variables used in the forcing of the Simstrat model (Supplementary Fig. 2), but their effects on lake temperatures are comparably small as discussed in the following paragraph.
Lake surface temperatures vary in equilibrium with atmospheric forcing, typically increasing by ~0.8 °C for a 1.0 °C change in air temperature, by ~0.4 °C for a 10 W m−2 change in surface downward solar radiation, by 0.1 °C for a 0.1 m s−1 decrease in wind speed and by 0.1 °C for a 1% increase in relative humidity41. These responses are, however, modulated by lake characteristics and mixing conditions. The projected forcing trends (Fig. 1a–f) therefore imply that for RCP8.5, air temperature trends would cause high altitude lakes to warm ~0.1 °C decade−1 faster than low altitude lakes. The projected decrease in surface downward solar radiation, however, would reduce this warming rate by ~0.035 °C decade−1. Projected trends in wind speed (Supplementary Fig. 2a–c) suggest a surface warming at high altitude of ~0.008 °C decade−1 and relative humidity trends (Supplementary Fig. 2g–i) a cooling by ~0.02 (mid to low altitude) to ~0.008 (high altitude) °C decade−1, respectively. Precipitation is only used in Simstrat for the calculation of the snow cover on ice, which then affects ice growth and melting, and the projected precipitation trends will not have a relevant impact on this process. In summary, the air temperature and solar radiation trends have the largest impact on lake temperatures, and the trends of the other atmospheric variables are therefore not discussed further in this manuscript.
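These sensitivities can be combined into a rough estimate of the altitude contrast in surface warming rates. The following Python sketch (not the MATLAB analysis used in this study) simply multiplies the published sensitivities41 by the RCP8.5 forcing trends quoted above; the relative humidity term is omitted for brevity and all numbers are illustrative only.

    # Illustrative sketch only: combine published lake surface sensitivities (ref. 41)
    # with the RCP8.5 forcing trends quoted in the text.
    SENS_AIR = 0.8          # degC lake surface warming per degC air warming
    SENS_SOLAR = 0.4 / 10   # degC per W m-2 of surface downward solar radiation
    SENS_WIND = 0.1 / 0.1   # degC per m s-1 decrease in wind speed

    def surface_warming_rate(d_air, d_solar, d_wind_decrease):
        """Approximate lake surface warming rate in degC per decade."""
        return SENS_AIR * d_air + SENS_SOLAR * d_solar + SENS_WIND * d_wind_decrease

    high = surface_warming_rate(0.54, -0.84, 0.008)   # ~1700 m a.s.l.
    low = surface_warming_rate(0.44, 0.0, 0.003)      # ~450 m a.s.l.
    print(high, low, high - low)   # altitude contrast of roughly 0.05 degC per decade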
Changes in lake temperature are driven by heat fluxes between air and water, and these fluxes depend on meteorological conditions and lake surface temperatures41,42. Average annual heat fluxes among Swiss lakes display an altitude-dependent change in the response to climate change, strongest for RCP8.5 and weakest for RCP2.6 (Fig. 2). The effect is strongest in mid- to high-altitude lakes (from ~800 m a.s.l.), where surface downward solar radiation (HS) and uptake (HA) as well as emission (HW) of infrared longwave radiation change substantially. The heat-flux trends primarily arise due to the reduced duration of lake ice cover (Fig. 3e). As ice cover recedes, lakes absorb more heat in the form of incoming longwave radiation (HA) and surface downward solar radiation (HS). These changes also mean increased heat loss by latent heat flux (HE) and outgoing longwave radiation (HW). Altitudinal gradients in air temperature and surface downward solar radiation forcing for different RCMs, combined with the large heat flux modification due to loss of lake ice, create a complex altitude-dependence in air-water heat flux variation. Finally, despite the negative trend in solar radiation at high altitudes in the atmosphere (Fig. 1), the net annual absorbed shortwave solar radiation is projected to increase for high altitude lakes due to the loss of ice (Fig. 2). Using the relationship above, the increased absorption of surface downward solar radiation due to loss of ice cover under RCP8.5 at 1700 m a.s.l. (~1.98 W m−2 decade−1) leads to an increase in lake surface temperature by ~0.079 °C decade−1.
Fig. 2: Altitudinal variation in projected trends of lakes surface heat fluxes during the twenty-first century.
Heat fluxes consist of uptake (HA) and emission (HW) of infrared longwave radiation, evaporation/condensation (HE), sensible heat flux (HC) and uptake of surface downward solar radiation (HS). Positive values indicate heat uptake and negative values indicate heat loss by the lake. Data points represent the trend for each lake obtained from regression analysis of annual (1982–2099) and scenario-specific means (17 RCP8.5 orange circles, 7 RCP2.6 blue triangles, 7 RCP 4.5 green diamonds). Linear regressions (solid lines) are shown with simultaneous prediction bounds for the fitted function (dashed lines, 95% confidence level), for regression coefficients and statistics see Supplementary Table 1.
Fig. 3: Median changes in projected thermal properties of lakes from the reference period (1982–2010) to the late twenty-first century (2071–2099) under three RCP scenarios.
For a lake surface temperature (at 1 m depth), b bottom temperatures (1 m above lake bottom), c duration of summer stratification, d duration of winter stratification and e duration of ice cover for scenario RCP8.5 (orange), RCP4.5 (green) and RCP2.6 (blue). The circular arrows indicate lakes ordered by altitude, from the lowest at 193 m a.s.l. (#1) to the highest at 1797 m a.s.l. (#29). For lake details see Table 1.
Climate change impact on the thermal structure of lakes
The variation in the projected impacts of climate change on lake properties including thermal structure, ice and stratification results from: (i) the projected greenhouse gas concentration trajectories (RCP climate assumptions); (ii) the variation of projected atmospheric changes by the climate model chains for a given RCP; and (iii) individual lake characteristics and how they modulate lake response to forcing scenarios.
For lake surface temperatures in the late twenty-first century, the RCPs are clearly the most important driver of change (Figs. 3a and 4a and Supplementary Fig. 3a–c). The average projected lake surface temperature warming compared to the reference period (average of 29 lakes in Figs. 3a and 4a) is 3.3 °C (standard deviation, SD: 0.28 °C) for RCP8.5, 1.7 °C (SD: 0.16 °C) for RCP4.5 and 0.9 °C (SD: 0.11 °C) for RCP2.6. The standard deviations given here are a measure of the variation between individual lake responses. In comparison, the variation induced by the climate models is measured by the standard deviations of all model runs for each individual lake. These are on average 0.57 °C for RCP8.5, 0.35 °C for RCP4.5 and 0.27 °C for RCP2.6, indicating that the variance induced by the climate models is about twice as large as the variation between individual lakes. A slight altitudinal difference exists (Fig. 3a and Supplementary Fig. 3c), with RCP8.5 causing an average lake surface temperature increase of 3.26 °C (SD: 0.31 °C) for high altitude lakes (>1600 m a.s.l., #27–29), 3.58 °C (SD: 0.33 °C) for mid altitude lakes (800–1600 m a.s.l., #21–26), and 3.29 °C (SD: 0.23 °C) for low altitude lakes (400–800 m a.s.l., #1–20).
Fig. 4: Changes in a lake surface temperature (at 1 m depth) and b bottom temperatures (1 m above lake bottom), c duration of summer stratification, d duration of winter stratification and e duration of ice cover for scenario RCP8.5 (orange), RCP4.5 (green) and RCP2.6 (blue). Lakes are ordered by volume as denoted by the circular arrows from the smallest (#1; 0.003 km3) to the largest (#29; 89 km3). For lake details see Table 1.
Lake bottom temperatures are projected to warm at slower rates, estimated (average of 29 lakes in Figs. 3b and 4b) as 1.6 °C (SD: 0.87 °C) for RCP8.5, 0.93 °C (SD: 0.59 °C) for RCP4.5 and 0.48 °C (SD: 0.37 °C) for RCP2.6. For bottom temperatures, the variation between lakes is larger than that induced by the climate models for individual lakes, which amounts to 0.39 °C for RCP8.5, 0.32 °C for RCP4.5 and 0.25 °C for RCP2.6. Remarkably, all emission scenarios result in lower bottom-water warming rates in smaller lakes (<0.15 km3 and <8.5 km2; #10, 14, 18, 19, 21–29; with RCP8.5 modelled median changes from 0.06 to 2.10 °C, Fig. 4b). The reason for this is that despite climate change, the cooling phase in fall and winter will remain strong enough to extract the amount of heat accumulated in summer, and cool the entire lake down toward the temperature of maximum density of ~4 °C. The lake bottom temperature is thus reset every winter independent of the exact meteorological conditions. This effect is strongest in high altitude lakes, resulting in diminished bottom temperature warming rates at high altitudes. For RCP8.5 the average projected deep water temperature increase is 0.18 °C (SD: 0.10 °C) for high altitude lakes, 1.02 °C (SD: 0.71 °C) for mid altitude lakes and 2.03 °C (SD: 0.6 °C) for low altitude lakes (Fig. 3b and Supplementary Fig. 3f).
Future stratification duration in lakes
Our model results suggest that differential warming of surface and deep waters can influence the strength and duration of a given lake's stratified period. Results show substantial changes in the durations of summer and winter stratification (Fig. 3c, d). For dimictic lakes, the estimated inverse stratification duration (average of 14 lakes, Figs. 3d and 4d) decreases by 30 days (SD: 28 days) for RCP8.5, by 18 days (SD: 13 days) for RCP4.5 and by 13 days (SD: 8 days) for RCP2.6. The estimated duration of summer stratification (average of 29 lakes in Figs. 3c and 4c) increases by 30 days (SD: 16 days) for RCP8.5, by 15 days (SD: 12 days) for RCP4.5 and by 9 days (SD: 9 days) for RCP2.6. RCP8.5 resulted in the largest changes in summer stratification with an average increase of 50 days (SD: 5 days) for high-altitude lakes (>1600 m a.s.l., #27–29), 49 days (SD: 14 days) for mid-altitude lakes (800–1600 m a.s.l., #21–26) and 22 days (SD: 9 days) for low-altitude lakes (400–800 m a.s.l., #1–20). The corresponding values for RCP2.6 are 13 days (SD: 2 days) for high-altitude lakes, 16 days (SD: 9 days) for mid-altitude lakes and 7 days (SD: 9 days) for low-altitude lakes. Estimates of the duration of winter stratification show similar altitude-dependence, where high-altitude lakes exhibit the largest decrease (Fig. 3d). RCP8.5 results in an average decrease in inverse stratification of 73 days (SD: 14 days) at high altitudes, 22 days (SD: 19 days) at mid altitudes and 14 days (SD: 15 days) at low altitudes. The corresponding values for RCP2.6 are 21 days (SD: 2 days), 12 days (SD: 9 days) and 6 days (SD: 1 day). Projected changes in summer stratification also depend on lake size (Fig. 4c), with smaller lakes (<0.5 km3) changing faster than larger lakes (>0.5 km3), which is in line with the slower bottom temperature changes in these lakes. There is no clear size dependence of the projected trends in inverse stratification (Fig. 4d).
Climate impact on ice cover duration
During the twentieth century, ice cover for larger and deeper lakes on the Swiss plateau (between 400 and 700 m a.s.l.) drastically diminished to a point of near disappearance43. Model estimates for the twenty-first century indicate that persistent ice cover only develops on small, mid- to high-altitude lakes. These lakes experience shorter ice-cover duration (from #14 to 29; Fig. 3e). The ice cover duration (average of 12 lakes, Figs. 3e and 4e) is projected to decrease by 54 days (SD: 35 days) for RCP8.5, 34 days (SD: 18 days) for RCP4.5 and 18 days (SD: 7 days) for RCP2.6 in the late twenty-first century compared to the reference period.
As for inverse stratification, the RCP8.5 scenario has the most severe impacts on ice cover for lakes at high altitudes. The average decrease of ice cover duration under RCP8.5 is 85 days (SD: 24 days) at high altitudes, 44 days (SD: 39 days) at mid altitudes and 41 days (SD: 18 days) at low altitudes. The corresponding values for RCP2.6 are 20 days (SD: 6 days), 18 days (SD: 9 days) and 17 days (SD: 4 days). Even though the rate of change is faster at higher altitudes, lower altitude lakes risk complete loss of ice cover. Of the three lakes that are projected to completely lose ice cover during the late twenty-first century (#14 with RCP8.5, #22 with all RCPs, and #24 with RCP4.5 and RCP8.5), one resides at mid-altitude and two at low altitude. Lake volume cannot be linked to ice loss (Fig. 4e), since only the smaller lakes in this study are ice-covered in winter. Loss of ice leads to increased seasonal warming, mainly in spring, with consequences for stratification duration and mixing regimes as discussed below.
Changes in lake mixing regimes
Changes in the thermal structure of lakes also modify mixing regimes. For example, a dimictic lake that presently experiences seasonal ice-cover will first lose ice cover under climate warming. This subsequently prevents winter stratification and shifts the lake over to a monomictic state. Depending on its inherent tendency to undergo complete mixing, which is a function of its morphology and exposure to wind, a lake may either remain a warm monomictic lake or further shift to an oligomictic or meromictic state with additional warming. Table 1 and Fig. 5 show this shift in mixing regime along an altitudinal gradient for four typical lakes. High altitude lakes such as Lake St. Moritz (#27, 1768 m a.s.l., Fig. 5b) are projected to remain dimictic under all climate scenarios although with reduced duration of winter stratification and ice cover. This loss of ice will cause a greater increase in early summer lake surface temperatures and duration of summer stratification relative to that estimated from air temperature trends alone. Mid altitude lakes, such as Lac de Joux (#23, 1004 m a.s.l., Fig. 5c) or Klöntalersee (#21, 848 m a.s.l., Fig. 5d), are projected to partially or completely shift from dimictic to monomictic under some climate scenarios. This shift clearly depends on greenhouse gas concentration trajectories. The model indicates seven out of the eight dimictic lakes will shift from predominantly dimictic to primarily monomictic regimes during late twenty-first century under RCP8.5. Only three lakes make this projected transition under RCP2.6 and five make it under RCP4.5. The larger, low altitude lakes that already exhibit monomictic regimes will remain in their present states (Table 1).
Fig. 5: Altitude-dependent changes in stratification regime for three RCP scenarios.
a Ensemble mean (using 17 climate models for RCP8.5, and 7 models for RCP4.5 and RCP2.6) percentage of years (from 2011 to 2099) with different stratification regime relative to the mean regime during the reference period (1982–2010), where 100% indicates a shift from dimictic to annually persistent monomictic. The y-axis shows the mean air temperature in the reference period for each lake forcing station and indicates altitude for four lakes (not to scale). Smoothed lines encompass all 29 lakes under each scenario while symbols represent individual lakes (RCP2.6 blue diamonds, RCP4.5 green squares, RCP8.5 orange circles). b–e Percentage of occurrence of inversed winter (solid line) and summer (dashed) stratification on each day of the year (DoY) for four lakes spanning an altitude range from 419 to 1768 m a.s.l. Values are averaged over 17 or 7 climate models and 28 years for the reference (top grey) and future (bottom coloured) periods; a value of 100% means that stratification is present in all model runs and each of the 28 years on that specific day of the year. The lines are smoothed with a 10-day running mean.
The present study focuses on shifts from dimictic to monomictic mixing regimes. Due to lack of modelled instances, we did not include a detailed analysis of shifts among deep lakes from monomictic to oligomictic states or the frequency of deep mixing events among oligomictic lakes. Our results generally show that mixing regimes will shift at mid altitudes and that this shift will be larger under scenarios of higher greenhouse gas concentration trajectories (Fig. 5a). Our study suggests that the global distribution of lake thermal regions will not only shift across latitudes10 but also across altitudinal gradients. This shift is likely to obscure historical definitions of lake mixing regimes based on altitude and latitude2 especially under RCP8.5. New mixing regime criteria should also consider the effects of water clarity and bathymetry recorded by this and other studies44,45.
By regulating heat storage, mixing regimes are important modulators of climate impacts on lakes18. A shift from a dimictic to a warm monomictic regime increases the heat storage potential of lakes which do not cool below 4 °C. A reduced frequency and/or intensity of deep water renewal can cause hypoxia or anoxia46 and thereby increase deep water phosphorus content47. Reduced oxygen availability can severely reduce the habitat for higher life forms. Along with temperature increases, these may especially affect cold water fish, who face elevated temperatures from above and diminished oxygen concentrations below6,48. Shifts in mixing regime with climate change are one of the most fundamental changes that can take place in lakes. The present study outlines an important altitude dependence of these shifts, which appear most pronounced for mid altitude lakes.
Climate forcing
In this study, we use RCM simulations from EURO-CORDEX49. The RCMs cover the European region and are forced along their boundaries using information from GCMs (Supplementary Fig. 1) for the period 1981–2099 for three different future climate scenarios. Within the RCM domain, the climate is allowed to evolve freely and includes variables for formation and breakdown of terrestrial snow and ice.
The climate data50 used to model lake responses in this study were developed under the Swiss climate scenarios framework, CH2018 (ref. 32). In this framework, RCM simulations were statistically downscaled from the model grid to local (station) scales and aligned to atmospheric observations through non-parametric empirical quantile mapping to remove RCM biases32,51,52,53,54. To do this, a day-of-year dependent correction function was created by relating the statistical distribution of the observed station data to the overlying model grid cell during the calibration period 1981–2010. The correction function was then applied to the entire simulated period51, implicitly removing the RCM biases52,53 and linking the grid scale with the local scale54.
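As a rough illustration of this downscaling step, the Python sketch below performs a simplified empirical quantile mapping between a model grid cell and a station record over a common calibration period. The CH2018 procedure additionally applies the correction per day of year and per variable, so this function is only a schematic and its names are invented for illustration.

    import numpy as np

    def empirical_quantile_map(model_cal, obs_cal, model_scenario):
        """Simplified empirical quantile mapping (schematic of the CH2018 idea):
        map simulated values onto the observed distribution using the quantile
        relation established over the calibration period 1981-2010."""
        quantiles = np.linspace(0.01, 0.99, 99)
        model_q = np.quantile(model_cal, quantiles)   # model grid-cell quantiles
        obs_q = np.quantile(obs_cal, quantiles)       # station observation quantiles
        # each scenario value is mapped to the observed value at the same quantile
        return np.interp(model_scenario, model_q, obs_q)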
Here, we use daily data for mean air temperature (tas), precipitation (pr), global radiation (rsds), relative humidity (hurs) and wind speed (sfcwind) covering the period 1981–2099 from 17 RCM simulations and 3 climate scenarios downscaled to Swiss meteorological stations maintained by MeteoSwiss (Swiss Federal Office of Meteorology and Climatology). Supplementary Table 2 lists station details, and the pairing of meteorological stations with individual lakes is given in Table 1. Using downscaled RCM data at meteorological stations ensures consistency and representativeness for the lake model, which was calibrated with observations from the stations. For four lakes (#23, 24, 28, 29), some of the downscaled atmospheric variables were not available at the closest station. In these cases, data were combined from two nearby stations (multiple stations, Table 1). To maintain consistency, measured atmospheric variables used in the calibration were combined in the same manner. Air temperature was adjusted for station altitude differences by −0.56 °C for every 100 m increase in altitude (the known gradient for Switzerland28).
Three RCPs (RCP2.6, RCP4.5, and RCP8.5) were used for the twenty-first century climate projections (Supplementary Fig. 1). Model simulations were made using two different spatial resolutions, EUR-11 (0.11°/~12 km) and EUR-44 (0.44°/~50 km). To visualize regional differences in Europe (Fig. 1i, j), we constructed an ensemble median dataset for each RCP using the RCMs in Supplementary Fig. 1. EUR-11 and EUR-44 simulations were combined according to Eq. (1) to include information from all models while retaining high-resolution features of the EUR-11 simulations (CH2018, Ch. 4.2.2)32.
$$\Phi_{ik} = Q\left(\bar{\Psi}_i^1, \ldots, \bar{\Psi}_i^N, \phi_i^1, \ldots, \phi_i^M\right) + Q\left(\tilde{\Psi}_{ik}^1, \ldots, \tilde{\Psi}_{ik}^N\right) \quad (1)$$
$$\tilde{\Psi}_{ik}^n = \Psi_k^n - \bar{\Psi}_{ik}^n \quad (2)$$
The ensemble median Φ is provided on the EUR-11 grid and combines N model simulations originally on the EUR-11 (Ψ) grid and M model simulations originally on the EUR-44 (ϕ) grid. N and M vary depending on climate scenario (see Supplementary Fig. 1). Overbars indicate regridded data, while k and i indicate whether the data was originally derived from the EUR-11 or EUR-44 grid, respectively. The term ik indicates the use of information from both resolutions. First, the EUR-11 simulations were regridded to the EUR-44 grid (\(\bar \Psi _i^n\)) using a bilinear distance weighting. The median (Q) was calculated for all simulations on the EUR-44 grid, and then regridded to EUR-11 resolution as indicated by the first term on the right-hand side of Eq. (1). Second, in order to retain the high-resolution features, \(\bar \Psi _i^n\) was regridded back to the original EUR-11 grid. This resulted in low resolution versions (\(\bar \Psi _{ik}\)) of the high-resolution simulations (Ψk) for the EUR-11 grid. The median was then calculated from the anomalies in the EUR-11 simulations relative to the low-resolution version (Eq. (2)). These were added to the combined median estimate from EUR-44 and EUR-11.
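The two-step combination of Eqs. (1) and (2) can be summarized schematically as follows. This Python sketch assumes placeholder regridding functions (regrid_to_44, regrid_to_11, e.g. bilinear distance weighting) that are not specified here, and it is not the code used to produce Fig. 1.

    import numpy as np

    def combined_ensemble_median(psi_11, phi_44, regrid_to_44, regrid_to_11):
        """Schematic of Eqs. (1)-(2): psi_11 and phi_44 are lists of 2-D fields on
        the EUR-11 and EUR-44 grids; regrid_to_44/regrid_to_11 are placeholder
        regridding functions. Returns the combined median field on the EUR-11 grid."""
        psi_bar_44 = [regrid_to_44(p) for p in psi_11]              # EUR-11 -> EUR-44
        first_term = regrid_to_11(
            np.median(np.stack(psi_bar_44 + list(phi_44)), axis=0))  # first term of Eq. (1)
        psi_bar_11 = [regrid_to_11(p) for p in psi_bar_44]          # low-res versions on EUR-11
        anomalies = [p - pb for p, pb in zip(psi_11, psi_bar_11)]   # Eq. (2)
        return first_term + np.median(np.stack(anomalies), axis=0)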
The physical lake model Simstrat requires air temperature, wind speed, precipitation, vapour pressure, downward shortwave solar radiation and either downward longwave radiation or fractional cloud cover as forcing variables. These variables were obtained from the CH2018 framework downscaled scenarios, yet two were not available: downward longwave radiation and fractional cloud cover. Therefore, shortwave solar radiation was used to estimate fractional cloud cover, c, which was then used by Simstrat to calculate downward longwave radiation. The fractional cloud cover was estimated as
$$c = \frac{P_{\mathrm{sf}}}{k_{\mathrm{cld}} - k_{\mathrm{clr}}} + 1 - \frac{k_{\mathrm{cld}}}{k_{\mathrm{cld}} - k_{\mathrm{clr}}} \quad (3)$$
where Psf is the potential solar fraction defined as Psf = Is/Cssr. In this expression, Is is observed (or downscaled RCM) shortwave solar irradiance and Cssr is clear sky solar radiation. Equation (3) linearly interpolates cloud cover between 1.0 (Psf equal to the clearness index for complete cloud cover, kcld) and 0.0 (Psf equal to the clear sky index, kclr). These two indices depend on location55 and were obtained during the calibration period (lake dependent; Supplementary Table 3) by minimizing the root mean square difference between observed and modelled c at each meteorological station. Indices are given in 0.05 increments. Cssr is calculated using the Lake Heat Flux Analyzer56. The statistical distributions of the estimates for c are compared to observations during the calibration period (Supplementary Figure 4). Supplementary Table 2 lists values for k.
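For orientation, Eq. (3) reduces algebraically to c = (Psf − kclr)/(kcld − kclr), i.e. a linear interpolation between the two calibrated clearness indices. A minimal Python sketch of this estimate is given below; the clipping to the physical range [0, 1] is an added assumption not stated explicitly above.

    import numpy as np

    def cloud_cover(shortwave, clear_sky, k_cld, k_clr):
        """Sketch of Eq. (3): fractional cloud cover from the potential solar
        fraction Psf = Is / Cssr, interpolated linearly between the overcast
        clearness index k_cld (c = 1) and the clear-sky index k_clr (c = 0)."""
        p_sf = np.asarray(shortwave, float) / np.asarray(clear_sky, float)
        c = (p_sf - k_clr) / (k_cld - k_clr)   # algebraic form of Eq. (3)
        return np.clip(c, 0.0, 1.0)            # assumed clipping to the physical range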
Lake model
The physical deterministic one-dimensional lake model Simstrat30 (v. 2.1.2) includes surface lake ice31, a constant lake-dependent geothermal heat flux and a k–ε turbulence closure scheme driven by wind and internal waves. The recently added ice and snow module31, based on previous work57,58,59, generates estimates of snow pack, which can delay ice melting27 by insulating the ice cover. The version of the model used here did not include inflows and outflows; future cooling effects of increased meltwater inflows, drainage-area processes or anthropogenic discharge and thermal use, which could potentially modify lake temperatures28,34, are therefore not represented beyond the historical parameter calibration. Visible light attenuation, geothermal heat flux, bathymetry, initial conditions, air pressure and latitude were all lake dependent.
The lake model was calibrated with in situ data for 27 of the 29 lakes (no observations available for lakes #24 and 25) with four model parameters (Supplementary Table 3) using PEST (model-independent parameter estimation software; http://pesthomepage.org). Parameters selected for calibration and the rationale for their selection have been described previously31. The remaining model parameters are set to standard values. To match the temporal resolution of climate projections, calibrations used daily averaged measurements from nearby meteorological stations (Table 1). To check the sensitivity of the daily temporal resolution of the forcing data, Simstrat was additionally calibrated using hourly instead of daily averaged forcing. This gave similar model performance for both time increments (Supplementary Fig. 5).
Among available one-dimensional lake models, Simstrat has been shown to generate accurate results for both surface and bottom waters in deep lakes60. The original Simstrat model has a tendency to overestimate the internal wave energy in winter and thus the occurrence of mixing events in deep lakes. This can be avoided by filtering out high-frequency wind events (removing wind events shorter than ~¼ of the first internal wave mode period)61 or by using a seasonally varying parameterization for α7. In order to limit the number of parameters and manage computational resources, the model used seasonally varying α only for lakes deeper than 150 m, an approach which adequately improved model results (Supplementary Table 3). The point where α_w (winter) changes to α_s (summer) can be assigned to a specific day of the year7, but this transition is expected to change with changing climate. The model therefore switches α_s to α_w once thermal stratification disappears, i.e., if the maximum Brunt–Väisälä frequency (N2) during a time step drops below a set limit. N2 is equal to −(g/ρ)·dρ/dz, where g is gravitational acceleration, ρ is water density, and z is the vertical axis pointing upward. The N2 limit depends on the grid resolution of the model (0.5 m) and was set to 2 × 10−4 s−2 based on temporal evaluation of maximum values of N2 during the reference period (1982–2010). Using real measured atmospheric forcing of Lake Geneva between 1981 and 2011 (31 years), the seasonal α method resulted in 16 modelled overturn events compared with 15 observed events (detected in monthly CTD profiles).
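The switching rule for the seiche parameter can be written schematically as below. This Python sketch is not the Simstrat source code, and the density profile handling is simplified.

    import numpy as np

    def brunt_vaisala_sq(rho, z, g=9.81):
        """N^2 = -(g / rho) * d(rho)/dz on layer interfaces, with z pointing upward."""
        rho, z = np.asarray(rho, float), np.asarray(z, float)
        return -(g / rho[:-1]) * np.diff(rho) / np.diff(z)

    def seiche_alpha(rho, z, alpha_winter, alpha_summer, n2_limit=2e-4):
        """Use the summer parameter while the column is stratified (maximum N^2 at or
        above the limit of 2e-4 s^-2 for the 0.5 m grid), the winter parameter otherwise."""
        return alpha_summer if brunt_vaisala_sq(rho, z).max() >= n2_limit else alpha_winter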
According to the conventions of the CH2018 climate analyses, a 30-year reference time period from 1981 to 2010 was used and compared to the late twenty-first century period from 2070 to 2099. However, since the first simulation year could include spin-up effects, we dropped it from the analysis and used 1982 to 2010 as the reference period instead. To be consistent, the first year was also dropped for the late twenty-first century, and averages were calculated from 2071 to 2099. This classical, approximately thirty-year time frame aims at removing random effects due to interannual variability33.
All calculations were performed in MATLAB R 2017b and statistical properties such as trends, means, medians and variances were calculated first for each individual RCM and then for each RCP. For example, in Fig. 3a, the median surface temperature change from the reference period to late twenty-first century was calculated as follows. First, annual means (temperatures) and annual sums (stratification, ice) for each RCM simulation were obtained. Second, differences between the two periods were calculated for each RCM simulation. Third, medians, means and variances across the models (7 models in RCP2.6 and RCP4.5, and 17 for RCP8.5) of this difference were calculated (Figs. 3 and 4 and Supplementary Fig. 6).
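The order of operations matters here (per-model aggregation and differencing before the across-model statistic). A compact Python sketch of this bookkeeping, with invented variable names and not the MATLAB routines used in the study, is given below.

    import numpy as np

    def period_change(annual_values, years, stat=np.median):
        """Schematic of the Fig. 3/4 statistics: annual_values has shape
        (n_model_runs, n_years); aggregate each run over the two periods,
        difference them per run, then apply the statistic across runs."""
        years = np.asarray(years)
        ref = (years >= 1982) & (years <= 2010)
        fut = (years >= 2071) & (years <= 2099)
        per_run = annual_values[:, fut].mean(axis=1) - annual_values[:, ref].mean(axis=1)
        return stat(per_run)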
A lake is considered to be stratified if the temperature difference (ΔT) between the surface water (10 m mean) and the bottom water (lowest point) exceeds 1 °C62. The surface mean temperature was averaged over the top 10 m to remove short-term temperature fluctuations. Lakes investigated in this study are all deep (Table 1), and water exchange between surface and bottom is limited when a significant density gradient develops. In summer, ΔT > 1 °C indicates stable stratification, while in winter ΔT < −1 °C indicates inverse stable stratification. Multiple stratified periods in both summer and winter, separated by short gaps (<15 days) due to strong short-term mixing events, were merged prior to estimating the total annual stratification duration.
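A minimal Python sketch of this stratification bookkeeping (daily surface and bottom temperatures in, number of stratified days out, with the <15-day gap merging) is shown below; it illustrates the stated rule and is not the routine used in this study.

    import numpy as np

    def stratified_days(surface_t, bottom_t, summer=True, max_gap=15):
        """Count (inverse) stratified days: dT > +1 degC (summer) or dT < -1 degC
        (winter), merging interruptions shorter than max_gap days."""
        dT = np.asarray(surface_t, float) - np.asarray(bottom_t, float)
        flag = dT > 1.0 if summer else dT < -1.0
        idx = np.flatnonzero(flag)
        if idx.size == 0:
            return 0
        merged = flag.copy()
        for start, end in zip(idx[:-1], idx[1:]):
            if 1 < end - start <= max_gap:     # gap of 1..(max_gap - 1) days
                merged[start:end] = True       # merge the short interruption
        return int(merged.sum())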
Future and past mixing regimes were classified as either warm monomictic (overturn once in winter), dimictic (overturn twice per year, in autumn and in spring) or oligomictic (irregular overturns, not occurring every year). A lake overturn is defined to have occurred if surface temperature (10 m average) becomes colder or equal to bottom temperature after a previous summer stratification (monomictic, oligomictic and dimictic lakes), or if the surface temperature becomes warmer or equal to bottom temperature following an inverse stratification (dimictic lakes).
Stratification regimes were obtained annually (Fig. 5a), and as periodic means during the reference period (1982–2010) and for the late twenty-first century (2071–2099) (Table 1 and Fig. 5b–e). A lake was first classified for each simulated year as dimictic, monomictic or meromictic (no mixing). The final classification was then defined as dimictic or monomictic, if that was the regime occurring in most years for each climate scenario, or as oligomictic, if it was classified as meromictic in most years.
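The classification logic of the two preceding paragraphs can be condensed into a few lines. The following Python sketch uses invented function names and is only a schematic of the stated rules.

    from collections import Counter

    def classify_year(overturn_after_summer, overturn_after_inverse):
        """One simulated year: dimictic if the lake overturns both after summer and
        after inverse (winter) stratification, monomictic if only after summer
        stratification, meromictic if no overturn occurred."""
        if overturn_after_summer and overturn_after_inverse:
            return "dimictic"
        if overturn_after_summer:
            return "monomictic"
        return "meromictic"

    def classify_period(yearly_regimes):
        """Final regime over a period: the majority yearly regime, with a majority
        of meromictic years reported as oligomictic (irregular overturns)."""
        regime, _ = Counter(yearly_regimes).most_common(1)[0]
        return "oligomictic" if regime == "meromictic" else regime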
Simstrat simulation output are available in an open access repository (https://doi.org/10.25678/0002SF). The RCM dataset (DAILY-LOCAL) used to force Simstrat can be obtained upon request from the Swiss National Centre for Climate Services (NCCS, https://doi.org/10.18751/Climate/Scenarios/CH2018/1.0). The raw CORDEX data can be accessed through the Earth System Grid Federation (ESGF: https://esgf-node.llnl.gov/projects/esgf-llnl/).
The one-dimensional Simstrat physical lake model can be accessed on github (https://github.com/Eawag-AppliedSystemAnalysis/Simstrat).
Adrian, R. et al. Lakes as sentinels of climate change. Limnol. Oceanogr. 54, 2283–2297 (2009).
Hutchinson, G. E. & Löffler, H. The thermal classification of lakes. Proc. Natl. Acad. Sci. USA 42, 84–86 (1956).
Schwefel, R., Müller, B., Boisgontier, H. & Wüest, A. Global warming affects nutrient upwelling in deep lakes. Aquat. Sci. 81, 50 (2019).
Yankova, Y., Neuenschwander, S., Köster, O. & Posch, T. Abrupt stop of deep water turnover with lake warming: drastic consequences for algal primary producers. Sci. Rep. 7, 13770 (2017).
Kraemer, B. M. et al. Global patterns in lake ecosystem responses to warming based on the temperature dependence of metabolism. Glob. Change Biol. 23, 1881–1890 (2017).
Missaghi, S., Hondzo, M. & Herb, W. Prediction of lake water temperature, dissolved oxygen, and fish habitat under changing climate. Clim. Change 141, 747–757 (2017).
Schwefel, R., Gaudard, A., Wüest, A. & Bouffard, D. Effects of climate change on deepwater oxygen and winter mixing in a deep lake (Lake Geneva): comparing observational findings and modeling. Water Resour. Res. 52, 8811–8826 (2016).
Posch, T., Köster, O., Salcher, M. M. & Pernthaler, J. Harmful filamentous cyanobacteria favoured by reduced water turnover with lake warming. Nat. Clim. Change 2, 809–813 (2012).
Peeters, F., Straile, D., Lorke, A. & Livingstone, D. M. Earlier onset of the spring phytoplankton bloom in lakes of the temperate zone in a warmer climate. Glob. Change Biol. 13, 1898–1909 (2007).
Maberly, S. C. et al. Global lake thermal regions shift under climate change. Nat. Commun. 11, 1232 (2020).
Piccolroaz, S., Woolway, R. I. & Merchant, C. J. Global reconstruction of twentieth century lake surface water temperature reveals different warming trends depending on the climatic zone. Clim. Change 160, 427–442 (2020).
O'Reilly, C. M. et al. Rapid and highly variable warming of lake surface waters around the globe. Geophys. Res. Lett. 42, 10773–10781 (2015).
Sharma, S. et al. Widespread loss of lake ice around the Northern Hemisphere in a warming world. Nat. Clim. Change 9, 227–231 (2019).
Gebre, S., Boissy, T. & Alfredsen, K. Sensitivity of lake ice regimes to climate change in the Nordic region. Cryosphere 8, 1589–1605 (2014).
Dibike, Y., Prowse, T., Saloranta, T. & Ahmed, R. Response of Northern Hemisphere lake-ice cover and lake-water thermal structure patterns to a changing climate. Hydrol. Process. 25, 2942–2953 (2011).
Weyhenmeyer, G. A., Meili, M. & Livingstone, D. M. Nonlinear temperature response of lake ice breakup. Geophys. Res. Lett. 31, L07203 (2004).
Magnuson, J. J. et al. Historical trends in lake and river ice cover in the northern hemisphere. Science 289, 1743–1746 (2000).
Shatwell, T., Thiery, W. & Kirillin, G. Future projections of temperature and mixing regime of European temperate lakes. Hydrol. Earth Syst. Sci. 23, 19 (2019).
Woolway, R. I. & Merchant, C. J. Worldwide alteration of lake mixing regimes in response to climate change. Nat. Geosci. 12, 271–276 (2019).
Kraemer, B. M. et al. Morphometry and average temperature affect lake stratification responses to climate change. Geophys. Res. Lett. 42, 4981–4988 (2015).
Kirillin, G. Modeling the impact of global warming on water temperature and seasonal mixing regimes in small temperate lakes. Boreal Environ. Res. 15, 279–293 (2010).
Austin, J. A. & Colman, S. M. Lake superior summer water temperatures are increasing more rapidly than regional air temperatures: a positive ice-albedo feedback. Geophys. Res. Lett. 34, L06604 (2007).
Wang, J. et al. Temporal and spatial variability of great lakes ice cover, 1973–2010. J. Clim. 25, 1318–1329 (2012).
Woolway, R. I. et al. Northern hemisphere atmospheric stilling accelerates lake thermal responses to a warming world. Geophys. Res. Lett. 46, 11983–11992 (2019).
Schmid, M. & Köster, O. Excess warming of a Central European lake driven by solar brightening. Water Resour. Res. 52, 8103–8116 (2016).
Pepin, N. et al. Elevation-dependent warming in mountain regions of the world. Nat. Clim. Change 5, 424–430 (2015).
Ceppi, P., Scherrer, S. C., Fischer, A. M. & Appenzeller, C. Revisiting Swiss temperature trends 1959-2008. Int. J. Climatol. 32, 203–213 (2012).
Livingstone, D. M., Lotter, A. F. & Kettle, H. Altitude-dependent differences in the primary physical response of mountain lakes to climatic forcing. Limnol. Oceanogr. 50, 1313–1325 (2005).
Riffler, M., Lieberherr, G. & Wunderle, S. Lake surface water temperatures of European Alpine lakes (1989–2013) based on the Advanced Very High Resolution Radiometer (AVHRR) 1 km data set. Earth Syst. Sci. Data 7, 1–17 (2015).
Goudsmit, G.-H., Burchard, H., Peeters, F. & Wüest, A. Application of k-ε turbulence models to enclosed basins: The role of internal seiches. J. Geophys. Res. 107, 3230 (2002).
Gaudard, A., Råman Vinnå, L., Bärenbold, F., Schmid, M. & Bouffard, D. Toward an open access to high-frequency lake modeling and statistics data for scientists and practitioners—the case of Swiss lakes using Simstrat v2.1. Geosci. Model Dev. 12, 3955–3974 (2019).
CH2018. CH2018—Climate scenarios for Switzerland. Technical Report, National Center for Climate Services, Zürich. pp. 271 (2018).
IPCC. Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change 1535 [eds. Stocker, T. F. et al.]. (Cambridge University Press, 2013).
Råman Vinnå, L., Wüest, A., Zappa, M., Fink, G. & Bouffard, D. Tributaries affect the thermal response of lakes to climate change. Hydrol. Earth Syst. Sci. 22, 31–51 (2018).
Sommer, C. et al. Rapid glacier retreat and downwasting throughout the European Alps in the early 21st century. Nat. Commun. 11, 3209 (2020).
Christianson, K. R. & Johnson, B. M. Combined effects of early snowmelt and climate warming on mountain lake temperatures and fish energetics. Arct. Antarct. Alp. Res. 52, 130–145 (2020).
Roberts, D. C., Forrest, A. L., Sahoo, G. B., Hook, S. J. & Schladow, S. G. Snowmelt timing as a determinant of lake inflow mixing. Water Resour. Res. 54, 1237–1251 (2018).
Christianson, K. R., Johnson, B. M. & Hooten, M. B. Compound effects of water clarity, inflow, wind and climate warming on mountain lake thermal regimes. Aquat. Sci. 82, 6 (2020).
Mi, H., Fagherazzi, S., Qiao, G., Hong, Y. & Fichot, C. G. Climate change leads to a doubling of turbidity in a rapidly expanding Tibetan lake. Sci. Total Environ. 688, 952–959 (2019).
Boé, J., Somot, S., Corre, L. & Nabat, P. Large discrepancies in summer climate change over Europe as projected by global and regional climate models: causes and consequences. Clim. Dyn. 54, 2981–3002 (2020).
Schmid, M., Hunziker, S. & Wüest, A. Lake surface temperatures in a changing climate: a global sensitivity analysis. Clim. Change 124, 301–315 (2014).
Fink, G., Schmid, M., Wahl, B., Wolf, T. & Wüest, A. Heat flux modifications related to climate-induced warming of large European lakes. Water Resour. Res. 50, 2072–2085 (2014).
Hendricks-Franssen, H. J. & Scherrer, S. C. Freezing of lakes on the Swiss plateau in the period 1901–2006. Int. J. Climatol. 28, 421–433 (2008).
Lewis, W. M. A revised classification of lakes based on mixing. Can. J. Fish. Aquat. Sci. 40, 1779–1787 (1983).
Kirillin, G. & Shatwell, T. Generalized scaling of seasonal thermal stratification in lakes. Earth Sci. Rev. 161, 179–190 (2016).
North, R. P., North, R. L., Livingstone, D. M., Köster, O. & Kipfer, R. Long-term changes in hypoxia and soluble reactive phosphorus in the hypolimnion of a large temperate lake: consequences of a climate regime shift. Glob. Change Biol. 20, 811–823 (2014).
Ficker, H., Luger, M. & Gassner, H. From dimictic to monomictic: empirical evidence of thermal regime transitions in three deep alpine lakes in Austria induced by climate change. Freshw. Biol. 62, 1335–1345 (2017).
Magee, M. R., McIntyre, P. B. & Wu, C. H. Modeling oxythermal stress for cool-water fishes in lakes using a cumulative dosage approach. Can. J. Fish. Aquat. Sci. 75, 1303–1312 (2018).
Giorgi, F., Jones, C. & Asrar, G. R. Addressing climate information needs at the regional level: the CORDEX framework. WMO Bull. 58, 175–183 (2009).
CH2018 Project Team. CH2018—Climate Scenarios for Switzerland (National Centre for Climate Services, 2018).
Feigenwinter, I. et al. Exploring Quantile Mapping as a Tool to Produce User-Tailored Climate Scenarios for Switzerland. Technical Report No. 270, 44 (MeteoSwiss, 2018).
Ivanov, M. A. & Kotlarski, S. Assessing distribution-based climate model bias correction methods over an alpine domain: added value and limitations. Int. J. Climatol. 37, 2633–2653 (2017).
Rajczak, J., Kotlarski, S. & Schär, C. Does quantile mapping of simulated precipitation correct for biases in transition probabilities and spell lengths? J. Clim. 29, 1605–1615 (2016).
Rajczak, J., Kotlarski, S., Salzmann, N. & Schär, C. Robust climate scenarios for sites with sparse observations: a two-step bias correction approach. Int. J. Climatol. 36, 1226–1243 (2016).
Flerchinger, G. N., Xaio, W., Marks, D., Sauer, T. J. & Yu, Q. Comparison of algorithms for incoming atmospheric long-wave radiation. Water Resour. Res. https://doi.org/10.1029/2008WR007394 (2009).
Woolway, R. I. et al. Automated calculation of surface energy fluxes with high-frequency lake buoy data. Environ. Model. Softw. 70, 191–198 (2015).
Leppäranta, M. Freezing of Lakes and the Evolution of their Ice Cover (Springer, 2014).
Leppäranta, M. In The Impact of Climate Change on European Lakes (ed. George, G.) 63–83 (Springer, 2010).
Saloranta, T. M. & Andersen, T. MyLake—a multi-year lake simulation model code suitable for uncertainty and sensitivity analysis simulations. Ecol. Modell. 207, 45–60 (2007).
Perroud, M., Goyette, S., Martynov, A., Beniston, M. & Anneville, O. Simulation of multiannual thermal profiles in deep Lake Geneva: a comparison of one-dimensional lake models. Limnol. Oceanogr. 54, 1574–1594 (2009).
Gaudard, A. et al. Optimizing the parameterization of deep mixing and internal seiches in one-dimensional hydrodynamic models: a case study with Simstrat v1.3. Geosci. Model. Dev. 10, 3411–3423 (2016).
Foley, B., Jones, I. D., Maberly, S. C. & Rippey, B. Long-term changes in oxygen depletion in a small temperate lake: effects of climate change and eutrophication: oxygen depletion in a small lake. Freshw. Biol. 57, 278–289 (2012).
This study was financially supported by the Swiss Federal Office for the Environment (FOEN) as part of the project "Hydrologische Grundlagen zum Klimawandel" (Hydro-CH2018). We thank Silje Sørland for EURO-CORDEX expertise, Adrien Gaudard for helping with the Simstrat model setup, Alfred Wüest and Hendrik Huwald for contributing to the development of the project proposal.
Eawag, Swiss Federal Institute of Aquatic Science and Technology, Surface Waters—Research and Management, Kastanienbaum, Switzerland
Love Råman Vinnå, Martin Schmid & Damien Bouffard
Institute for Atmospheric and Climate Science, ETH Zürich, 8092, Zürich, Switzerland
Iselin Medhaug
D.B. and M.S. conceived and supervised the study; L.R.V., D.B. and M.S. designed the methodology; L.R.V. performed the lake modelling, the data curation and the formal analysis; D.B. and M.S. contributed to the formal analysis; I.M. analysed the raw RCM data; data visualization was designed by L.R.V., I.M., D.B. and M.S. and implemented by L.R.V.; L.R.V. prepared the initial draft of the manuscript; D.B., I.M. and M.S. reviewed and edited the manuscript.
Correspondence to Love Råman Vinnå or Damien Bouffard.
Peer review information Primary handling editor: Heike Langenberg.
Råman Vinnå, L., Medhaug, I., Schmid, M. et al. The vulnerability of lakes to climate change along an altitudinal gradient. Commun Earth Environ 2, 35 (2021). https://doi.org/10.1038/s43247-021-00106-w
\begin{document}
\title{Non-linear monotone positive maps}
\author{Masaru Nagisa} \address[Masaru Nagisa]{Graduate School of Science, Chiba University, Chiba, 263-8522, Japan} \email{[email protected]} \author{Yasuo Watatani} \address[Yasuo Watatani]{Department of Mathematical Sciences, Kyushu University, Motooka, Fukuoka, 819-0395, Japan} \email{[email protected]}
\maketitle
\begin{abstract} We study several classes of general non-linear positive maps between $C^*$-algebras, which are not necessarily completely positive maps. We abstractly characterize the class of compositions of *-multiplicative maps and positive linear maps as the class of non-linear maps of boundedly positive type. We consider three classes of non-linear positive maps defined only on the positive cones, which are the classes of monotone, supercongruent and concave maps. Any concave map is monotone. The intersection of the monotone maps and the supercongruent maps characterizes the class of monotone Borel functional calculus. We give many examples of non-linear positive maps, which show that there exist no other relations among these three classes in general.
\end{abstract}
\section{Introduction} We study several classes of general non-linear positive maps between $C^*$-algebras. Ando-Choi \cite{A-C} and Arveson \cite{Ar2} investigated non-linear completely positive maps and extended the Stinespring dilation theorem. Ando-Choi showed that any non-linear completely positive map is decomposed as a doubly infinite sum of compressions of completely positive linear maps on certain $C^*$-tensor products. Arveson obtained a similar expression for bounded completely positive complex-valued functions on the open unit ball of a unital $C^*$-algebra. Hiai-Nakamura \cite{H-N} studied a non-linear counterpart of Arveson's Hahn-Banach type extension theorem \cite{Ar1} for completely positive linear maps. Beltita-Neeb \cite{B-N} studied non-linear completely positive maps and dilation theorems for real involutive algebras. Recently Dadkhah-Moslehian \cite{D-M} studied some properties of non-linear positive maps like Lieb maps and the multiplicative domain for 3-positive maps.
We study general non-linear positive maps between $C^*$-algebras, which are not necessarily completely positive maps. First we study a non-completely positive variation of a Stinespring type dilation theorem. Let $A$ and $B$ be $C^*$-algebras. We consider non-linear positive maps $\varphi : A \rightarrow B$. For instance, *-multiplicative maps, positive linear maps and their compositions are typical examples of non-linear positive maps. We characterize the class of compositions of these algebraically simple maps abstractly as the class of non-linear maps of boundedly positive type. This class is different from the class of non-linear completely positive maps, because the transpose map of the $n$ by $n$ matrix algebra for $n \geq 2$ is contained in the class. They are not necessarily real analytic.
Another typical example of a non-linear positive map is given by the functional calculus of a continuous positive function. See, for example, \cite{bhatia1}, \cite{bhatia2} and \cite{Si}. In particular, operator monotone functions are important for the study of operator means in the Kubo-Ando theory \cite{kuboando}. Osaka-Silvestrov-Tomiyama \cite{O-S-T} studied monotone operator functions on $C^*$-algebras. Recently Hansen-Moslehian-Najafi \cite{hansenmn} characterized the continuous functional calculus by an operator convex function as being of Jensen type. Moreover a sufficient condition is given by Anjidani \cite{An}.
We consider three classes of non-linear positive maps defined only on the positive cones, which are the classes of monotone, supercongruent and concave maps. Let $A$ be a $C^*$-algebra. We denote by $A^+$ the cone of all positive elements. A non-linear positive map $\varphi : A^+ \rightarrow B^+$ between $C^*$-algebras $A$ and $B$ is said to be {\it monotone} if for any $x, y \in A^+$, $x \leq y$ implies that $\varphi(x) \leq \varphi(y)$. We say $\varphi : A^+ \rightarrow B^+$ is {\it supercongruent} if $c\varphi(a)c \le \varphi(cac)$ for any $a\in A^+$ and any contraction $c\in A^+$. A positive map $\varphi : A^+ \rightarrow B^+$ is said to be {\it concave}
if $\varphi (tx + (1-t)y) \geq t\varphi (x) + (1-t)\varphi (y)$
for any $x, y \in A^+$ and $t \in [0,1]$.
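To fix ideas, consider the commutative case $A = B = {\mathbb C}$, so that $A^+ = [0,\infty)$ and a positive map is simply a function $f : [0,\infty) \rightarrow [0,\infty)$. Then monotonicity and concavity reduce to the usual notions for functions, while supercongruence reads $c^2 f(a) \le f(c^2 a)$ for $a \ge 0$ and $0 \le c \le 1$. For instance $f(t) = \sqrt{t}$ satisfies all three conditions, since $c^2 \sqrt{a} \le c \sqrt{a} = \sqrt{c^2 a}$.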
Let $f: [0,\infty) \rightarrow [0,\infty)$ be an operator monotone {\it continuous} function, $H$ a Hilbert space and $\varphi_f : B(H)^+ \rightarrow B(H)^+$ the continuous functional calculus by $f$, defined by $\varphi_f(a) = f(a)$ for $a \in B(H)^+$. Then $\varphi_f$ is a monotone, supercongruent, concave and normal positive map.
Let $M$ be a von Neumann algebra on a Hilbert space $H$ and $\varphi : M^+ \rightarrow M^+$ be the non-linear positive map defined by
$\varphi(a) = (\text{the range projection of } a)$ for $a \in M^+$. Then $\varphi$
is monotone, supercongruent and normal. In fact, this map is the functional calculus
of $a$ by the Borel function $\chi_{(0,\infty)}$ on $[0,\infty)$.
In this paper we shall show that any concave map is monotone. The intersection of the monotone maps and the supercongruent maps characterizes the class of monotone Borel functional calculus. We give many examples of non-linear positive maps, which show that there exist no other relations among these three classes.
We also discuss the ambiguity of operator means for non-invertible positive operators related to our theorem. Based on the theory of Grassmann manifolds, Bonnabel-Sepulchre \cite{B-S} and Batzies-H\"{u}per-Machado-Leite \cite{B-H-M-L} introduced the geometric mean for positive semidefinite matrices or projections of fixed rank. Fujii \cite{F} extended it to a general theory of means of positive semidefinite matrices of fixed rank.
Noncommutative function theory is important and related to our paper. But the domain of a noncommutative function is graded, which is different from our simple one-domain setting. Therefore we do not discuss the relation with them here. It will be discussed in the future.
Finally we show a matrix version of the Choquet integral \cite{Ch}, the Sugeno integral \cite{Su} or, more generally, the inclusion-exclusion integral by Honda-Okazaki \cite{H-O} for non-additive monotone measures as another type of example of non-linear monotone positive maps.
This work was supported by JSPS KAKENHI Grant Number JP17K18739.
\section{Non-linear maps of boundedly positive type}
Let $A$ and $B$ be $C^*$-algebras. We consider non-linear positive maps
$\varphi : A \rightarrow B$. For instance, *-multiplicative maps, positive linear maps and their compositions are typical examples of non-linear positive maps. In this section, we characterize the class of compositions of these algebraically simple maps abstractly as the class of non-linear maps of boundedly positive type. This class is different from the class of non-linear completely positive maps, because the transpose map of the $n$ by $n$ matrix algebra for $n \geq 2$ is contained in the class. They are not necessarily real analytic.
\begin{definition} \rm
Let $A$ and $B$ be $C^*$-algebras. A map $\varphi : A \rightarrow B$ is said to be of positive type if for any finite subset $\{a_1,a_2,\dots , a_n\} \subset A$ and any finite subset $\{\alpha_1,\alpha_2,\dots , \alpha_n\}\subset {\mathbb C} $ $$ 0 \leq \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a_j) . $$
A map $\varphi : A \rightarrow B$ is said to be of boundedly positive type if for any $a \in A$, there exists a constant $K=K_a > 0$ such that
for any finite subset $\{a_1,a_2,\dots , a_n\} \subset A$ and any finite subset $\{\alpha_1,\alpha_2,\dots , \alpha_n\}\subset {\mathbb C} $ $$ 0 \leq \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a^*aa_j)
\leq K
\sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a_j) . $$ Recall that a map $\varphi : A \rightarrow B$ is said to be positive if for any $a \in A$ $0 \leq \varphi (a^*a)$. Assume that $A$ is unital. Then it is clear that if $\varphi : A \rightarrow B$ is of boundedly positive type, then $\varphi$ is of positive type. If $\varphi$ is of positive type, then $\varphi$ is positive. \end{definition}
\begin{example} \rm Let $A$ and $B$ be $C^*$-algebras. If a map $\varphi : A \rightarrow B$ is a positive linear map, then $\varphi$ is of boundedly positive type. In fact,
for any non-zero $a \in A$, put $K = \|a\|^2> 0$ . Then
for any finite subset $\{a_1,a_2,\dots , a_n\} \subset A$ and any finite subset $\{\alpha_1,\alpha_2,\dots , \alpha_n\}\subset {\mathbb C} $ \begin{align*} 0 \leq \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a^*aa_j)
& = \varphi ((\sum_{i=1}^{n}{\alpha_i}a_i) ^* a^*a
(\sum_{j=1}^{n}{\alpha_j}a_j))\\
& \leq \|a\|^2
\sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a_j) . \end{align*} We may put $K = 1$ if $ a = 0$. \end{example}
\begin{example} \rm Let $A$ and $B$ be $C^*$-algebras. If a map $\varphi : A \rightarrow B$ is *-multiplicative, that is, $\varphi(ab) = \varphi(a)\varphi(b)$ and $\varphi(a^*) = \varphi(a)^*$ for any $a,b \in A$, then $\varphi$ is of boundedly positive type. In fact,
for any $a \in A$, put $K = \|\varphi(a)\|^2 + 1> 0$ . Then
for any finite subset $\{a_1,a_2,\dots , a_n\} \subset A$ and any finite subset $\{\alpha_1,\alpha_2,\dots , \alpha_n\}\subset {\mathbb C} $ \begin{align*} 0 \leq \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a^*aa_j)
& = (\sum_{i=1}^{n} {\alpha_i}\varphi(a_i)) ^*{\varphi(a)}^*{\varphi(a)}
(\sum_{j=1}^{n} {\alpha_j}\varphi(a_j)) \\
& \leq (\|\varphi(a)\|^2 + 1)
\sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a_j) . \end{align*} For example the determinant $det: M_n({\mathbb C}) \rightarrow {\mathbb C}$ is of boundedly positive type. Let $B = A \otimes_{min} \dots \otimes_{min} A$ and $\varphi : A \rightarrow B$ be defined by $\varphi (a) = a \otimes \dots \otimes a$, then $\varphi$ is of boundedly positive type. \end{example}
We shall study the class of maps of boundedly positive type. Let $A$, $B$ and $C$ be unital $C^*$-algebras. If $\varphi_1 : A \rightarrow C$ is *-multiplicative and $\varphi_2 : C \rightarrow B$ is a positive linear map, then the composition $\varphi = \varphi_2 \circ \varphi_1$ is of boundedly positive type. Conversely any map of boundedly positive type is of this form.
\begin{theorem} Let $A$ and $B$ be unital $C^*$-algebras. Consider a map $\varphi: A \rightarrow B$ . Then the following are equivalent: \\ \begin{enumerate} \item[$(1)$] $\varphi$ is of boundedly positive type. \item[$(2)$] There exists a unital $C^*$-algebra $C$,
a *-multiplicative map $\varphi_1 : A \rightarrow C$
and a positive linear map $\varphi_2 : C \rightarrow B$ such that $\varphi$ is
the composition $\varphi = \varphi_2 \circ \varphi_1$ of these maps. \end{enumerate} \end{theorem} \begin{proof} (2) $\Rightarrow$ (1): Assume (2).
For any $a \in A$, put $K = \|\varphi_1(a)\|^2 + 1> 0$ . Then
for any finite subset $\{a_1,a_2,\dots , a_n\} \subset A$ and any finite subset $\{\alpha_1,\alpha_2,\dots , \alpha_n\}\subset {\mathbb C} $ \begin{align*} 0 \leq \sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a^*aa_j)
& =\varphi_2( (\sum_{i=1}^{n}
{\alpha_i}\varphi_1(a_i)) ^*{\varphi_1(a)}^*{\varphi_1(a)}
(\sum_{j=1}^{n} {\alpha_j}\varphi_1(a_j)) ) \\
& \leq (\|\varphi_1(a)\|^2 + 1)\varphi_2( (\sum_{i=1}^{n}
{\alpha_i}\varphi_1(a_i)) ^*
(\sum_{j=1}^{n} {\alpha_j}\varphi_1(a_j)) ) \\
& \leq (\|\varphi_1(a)\|^2 + 1)
\sum_{i=1}^{n} \sum_{j=1}^{n} \overline{\alpha_i}{\alpha_j} \varphi (a_i^*a_j) . \end{align*} (1) $\Rightarrow$ (2): Assume (1). Let ${\mathcal S}_A$ be a *-semigroup defined by ${\mathcal S}_A = A$ as a set with the product $ab$ and the involution $a^*$ borrowed from $A$. Consider an algebraic *-semigroup algebra ${\mathbb C}[{\mathcal S}_A] $
of ${\mathcal S}_A$ with linear basis $\{u_a \ | \ a \in A \}$ : $$ {\mathbb C}[{\mathcal S}_A]
= \{ x = \sum_i x_iu_{a_i } \ | \ x_i \in {\mathbb C}, a_i \in A \} $$ with the product $u_a u_b = u_{ab}$ and the involution $(u_a)^* = u_{a^*}$ for $a,b \in A$ . Thus ${\mathbb C}[{\mathcal S}_A] $ is a algebraic *-algebra. Define a linear map $\psi : {\mathbb C}[{\mathcal S}_A] \rightarrow B$ by $$ \psi(\sum_i x_iu_{a_i } ) = \sum_i x_i\varphi({a_i }). $$ For any state $\omega$ on $B$, define a linear functional $\psi_{\omega}$ on ${\mathbb C}[{\mathcal S}_A] $ by $\psi_ {\omega} = \omega \circ \psi $. We introduce a pre-inner product $<,>_{\omega}$ on ${\mathbb C}[{\mathcal S}_A] $ by $$ <x,y>_{\omega} := \psi_ {\omega}(y^*x) = \omega(\sum_i \sum_j \overline{y_j}{x_i} \varphi (b_j^*a_i)) $$ for $x = \sum_i x_iu_{a_i },\ y= \sum_i y_ju_{b_j} \in {\mathbb C}[{\mathcal S}_A] $, $(x_i, y_j \in {\mathbb C}, a_i , b_j \in A ).$ Then $$<x,x>_{\omega} = \psi_ {\omega}(x^*x) = \omega(\sum_i \sum_j \overline{x_j}{x_i} \varphi (a_j^*a_i)) \geq 0, $$ since $\varphi$ is of positive type . Let
$N_{\omega} := \{ x \in {\mathbb C}[{\mathcal S}_A] \ | \ <x,x>_{\omega} =0 \}$. Define a Hilbert space $H_{\omega}$ by the completion of ${\mathbb C}[{\mathcal S}_A]/ N_{\omega}$.
Let $\eta_{\omega} : {\mathbb C}[{\mathcal S}_A] \rightarrow H_{\omega}$
be the canonical map such that
$<\eta_{\omega}(x), \eta_{\omega}(y)> = <x,y>_{\omega}$.
Then we have a *-representation
$\pi_{\omega} : {\mathbb C}[{\mathcal S}_A] \rightarrow B(H_{\omega})$ such that
$\pi_{\omega}(u_a)\eta_{\omega}(x) = \eta_{\omega}(u_ax)$ and
$\pi_{\omega}(u_a)^* = \pi_{\omega}((u_a)^*) = \pi_{\omega}(u_{a^*})$
for $a \in A$ . In fact
\begin{align*}
\|\eta_{\omega}(u_ax)\|^2 &= <\sum_i x_iu_{aa_i }, \sum_j x_ju_{aa_j}>_{\omega}\\ &=\omega(\sum_i \sum_j \overline{x_j}{x_i} \varphi (a_j^*a^*aa_i))\\ &\leq K\omega (\sum_i \sum_j
\overline{x_j}{x_i} \varphi (a_j^*a_i)) = K \|\eta_{\omega}(x)\|^2 \end{align*} since $\varphi$ is of boundedly positive type, where $K$ depends only on $a$. Therefore $\pi_{\omega}(u_a)$ is a well defined bounded operator with
$\|\pi_{\omega}(u_a)\| \leq \sqrt{K}$. Since $A$ has a unit $I$, we have that $$ <\pi_{\omega}(u_a)\eta_{\omega}(u_I), \eta_{\omega}(u_I)> = <\eta_{\omega}(u_au_I), \eta_{\omega}(u_I)> ={\omega}(\psi(u_a)) = {\omega}(\varphi(a)) $$ Moreover for $x = \sum_i x_iu_{a_i } \in {\mathbb C}[{\mathcal S}_A] $ , we have that $$ {\omega}(\psi(x)) = <\pi_{\omega}(x)\eta_{\omega}(u_I), \eta_{\omega}(u_I)> $$ Next we shall consider a $\varphi$-universal representation $\pi_u$ of a *-algebra ${\mathbb C}[{\mathcal S}_A]$ on a Hilbert space $H_u$ as follows: Put $$
H_u = \oplus \{H_{\omega} \ | \ \omega \text{ is a state on } B \} $$ and $$
\pi_u = \oplus \{\pi_{\omega} \ | \ \omega \text{ is a state on } B \}. $$ For $x \in {\mathbb C}[{\mathcal S}_A]$ define the $\varphi$-universal seminorm $$
\|x\|_u := \sup \{ \|\pi_{\omega}(x)\| \ | \ \omega \text{ is a state on } B \}
\leq \sum_i |x_i|\sqrt{K_{a_i}} < \infty. $$ Let $C$ be the completion of ${\mathbb C}[{\mathcal S}_A]/ {\rm Ker } \pi_u$ by
the induced norm $\|[x]\|_u$ . Then $C$ is a $C^*$-algebra and isomorphic to the closure of $\pi_u({\mathbb C}[{\mathcal S}_A])$. We also have a *-representaion $\overline{\pi_u}$ of $C$ on $H_u$ such that $\overline{\pi_u}([x]) = \pi_u(x)$. \\ Next we shall show that for $x \in {\mathbb C}[{\mathcal S}_A]$ $$
\| \psi (x) \| \leq 2\|\varphi(I)\| \|x\|_u. $$ In fact, since \begin{align*}
|{\omega}(\psi(x))
&|= |<\pi_{\omega}(x)\eta_{\omega}(u_I), \eta_{\omega}(u_I)> |\\
&\leq \| \ \pi_{\omega}(x) \| \|\eta_{\omega}(u_I)\|^2
= \| \ \pi_{\omega}(x) \| {\omega}(\varphi (I))
\leq \| \varphi (I) \| \| \ \pi_u(x) \|. \end{align*} we have that $$
\| \psi (x) \| \leq 2(\text{ numerical radius of } \psi (x))
\leq 2\|\varphi(I)\| \|\pi_u(x)\|
\leq 2\|\varphi(I)\| \| x \|_u $$ Therefore ${\rm Ker } \pi_u \subset {\rm Ker } \psi$. Hence there exists a linear map $\tilde{\psi} : {\mathbb C}[{\mathcal S}_A]/ {\rm Ker } \pi_u \rightarrow B$ such that $\tilde{\psi}([x]) = \psi (x)$. Moreover $\tilde{\psi} $ extends to a linear map $\varphi_2 : C \rightarrow B$ by the boundedness of $\tilde{\psi}$. Then $\varphi_2$ is a positive linear map. In fact, for $x \in {\mathbb C}[{\mathcal S}_A]$, since $\varphi$ is of positive type, $$ \varphi_2([x^*x]) = \psi(x^*x)= \sum_i \sum_j \overline{x_j}{x_i} \varphi (a_j^*a_i) \geq 0. $$ Any positive element in $C$ can be approximated by these $[x^*x]$ for
$x \in {\mathbb C}[{\mathcal S}_A]/{\rm Ker } \pi_u$. By the continuity of $\varphi_2$ ,
$\varphi_2$ is also positive. \\
We define $\varphi_1: A \rightarrow C$ by $\varphi_1(a) = [u_a]$.
Then $\varphi_1$ is *-multiplicative. Moreover
$$
\varphi_2 \circ \varphi_1(a) = \varphi_2([u_a]) = \psi (u_a) = \varphi(a).
$$
for $a \in A.$
\end{proof} \begin{definition} \rm Let $A$ and $B$ be $C^*$-algebras. For a map $\varphi : A \rightarrow B$ and a natural number $n$, $\varphi_n: M_n(A) \rightarrow M_n(B)$ is defined by $\varphi_n((a_{ij})_{ij})= (\varphi(a_{ij}))_{ij}$. Then $\varphi$ is completely positive if $\varphi_n$ is positive for any $n$. $\varphi$ is said to be positive definite if, for any $n$ and for any $\{a_1,a_2,\dots , a_n\} \subset A$, $(\varphi(a_i^*a_j))_{ij}\in M_n(B)$ is positive as in \cite[Definition 2.8]{B-N}. \end{definition} \begin{remark} \rm (1) Let $A$ and $B$ be $C^*$-algebras and let $\varphi: A \rightarrow B$ be a non-linear map. If $\varphi$ is completely positive, then $\varphi$ is positive definite. If $\varphi$ is positive definite, then $\varphi$ is of positive type. But the converses do not hold. In fact, the transpose map of the $n$ by $n$ matrix algebra for $n \geq 2$ is of positive type but is not positive definite. \\ (2) We should note that the class of completely positive maps and the class of maps of boundedly positive type are different. For example, let $A = B = {\mathbb C}$ and $\varphi(z) = e^z$. Then $\varphi$ is completely positive but is not of boundedly positive type. In fact, there exists no constant $K > 0$ such that for any $z \in {\mathbb C}$, $\varphi(z^*3^2z) \leq K \varphi(z^*z)$. The transpose map of the $n \times n$ matrix algebra for $n \geq 2$ is of boundedly positive type but is
not completely positive. \end{remark}
\section{Some classes of non-linear positive maps defined only on the positive cones}
Let $A$ be a $C^*$-algebra. We denote by $A^+$ the cone of all positive elements. In this section we consider non-linear positive maps defined only on the positive cones.
\begin{definition} \rm Let $A$ and $B$ be $C^*$-algebras. A non-linear positive map $\varphi : A^+ \rightarrow B^+$ is said to be {\it monotone } if for any $x, y \in A^+$, $x \leq y$ implies that $\varphi(x) \leq \varphi(y)$. We say $\varphi : A^+ \rightarrow B^+$ is {\it supercongruent} if $c\varphi(a)c \le \varphi(cac)$ for any $a\in A^+$ and any contraction $c\in A^+$. A positive map $\varphi : A^+ \rightarrow B^+$ is said to be {\it concave}
if $\varphi (tx + (1-t)y) \geq t\varphi (x) + (1-t)\varphi (y)$
for any $x, y \in A^+$ and $t \in [0,1]$.
When $A$ and $B$ are von Neumann algebras, $\varphi : A^+ \rightarrow B^+$ is
said to be {\it normal} if , for any bounded increasing net $a_{\nu} \in A^+$, \[ \varphi(\sup_{\nu} a_{\nu}) = \sup_{\nu} \varphi(a_{\nu}) . \] \end{definition}
\begin{example} \rm Let $f: [0,\infty) \rightarrow [0,\infty)$ be an operator monotone continuous function, $H$ a Hilbert space and $\varphi_f : B(H)^+ \rightarrow B(H)^+$ the continuous functional calculus by $f$ defined by $\varphi_f(a) = f(a)$ for $a \in B(H)^+$. Then $\varphi_f$ is a monotone, supercongruent, concave and normal positive map. \label{monotonesupercongruent} \end{example}
There exists a non-linear positive map $\varphi : B(H)^+ \rightarrow B(H)^+$ which is monotone, supercongruent and normal but is not a continuous functional calculus. For example, let $\varphi(a)$ be the projection onto the closure of the range of $a \in B(H)^+$; then $\varphi(a)$ is called the range projection of $a$ or the support projection of $a$ and is equal to the projection onto the orthogonal complement of the kernel of $a$ (\cite[2.22]{stratila}).
\begin{proposition} Let $M$ be a von Neumann algebra on a Hilbert space $H$ and $\varphi : M^+ \rightarrow M^+$ be the non-linear positive map defined by
$\varphi(a) ={\text {(the range projection of $a$)}}$ for $a \in M^+$. Then $\varphi$
is monotone, supercongruent and normal.
\label{rangeprojection} \end{proposition}
\begin{proof} For $a,b\in M^+$, we remark the following facts: \begin{itemize}
\item $\varphi(a) \le \varphi(b)$ is equivalent to ${\rm Ker }(a) \supset {\rm Ker }(b)$.
\item ${\rm Ker}(a) = {\rm Ker}(\varphi(a))$.
\item $\varphi(a)\ge a$ if $\|a\|\le 1$.
\item $\varphi(a)=a$ if $a$ is a projection. \end{itemize}
For $0\le a \le b$, it is clear that ${\rm Ker }(a) \supset {\rm Ker }(b)$. So $\varphi$ is monotone.
For any contraction $c$, we have \[ c^*ac\xi = 0 \Rightarrow a^{1/2}c \xi = 0 \Rightarrow \varphi(a)c\xi =0 \Rightarrow c^*\varphi(a)c\xi =0. \] Since ${\rm Ker }(c^*ac) \subset {\rm Ker }(c^*\varphi(a)c)$, $\varphi(c^*\varphi(a)c) \le \varphi(c^*ac)$. Because $c^* \varphi(a)c$ is a contraction, $$ c^*\varphi(a)c\le \varphi(c^*\varphi(a)c) \le \varphi(c^*ac) $$. So $\varphi$ is supercongruent.
Let $\{a_\nu\}$ be a bounded increasing net in $ M^+$. Since \[ {\rm Ker}(\sup_\nu \varphi(a_\nu)) = \bigcap_\nu {\rm Ker}(\varphi(a_\nu))
= \bigcap_\nu {\rm Ker}(a_\nu) = {\rm Ker}(\sup_\nu a_\nu) \] and $\sup_\nu \varphi(a_\nu)$ is a projection, we have \[ \varphi(\sup_\nu a_\nu) = \varphi (\sup_\nu \varphi(a_\nu)) = \sup_\nu \varphi(a_\nu). \] So $\varphi$ is normal on $ M^+$. \end{proof}
We shall study and compare these properties of being monotone, supercongruent and concave for general non-linear positive maps $\varphi : A^+ \rightarrow B^+$ on the whole positive cone $ A^+$ of a $C^*$-algebra $A$. If $\varphi$ is concave, then $\varphi$ is monotone. But there exist no other relations between them in general, as follows:
\begin{proposition} Let $A$ and $B$ be $C^*$-algebras and $\varphi : A^+ \rightarrow B^+$ be a non-linear positive map. If $\varphi$ is concave, then $\varphi$ is monotone. \end{proposition} \begin{proof} Assume that $\varphi$ is concave. For $0\le a \le b$, we define $\{a_{k}\}$ as follows: \[ a_{0}=a, \; a_{1}=b, a_{k} = a + k(b-a) \geq 0 \ \ k=0,1,2,\ldots \] Then $a_{k+1}=\frac{a_{k}+a_{k+2}}{2}$. By the concavity of $\varphi$, it follows that \[ \varphi(a_{k+1}) \ge \frac{1}{2}(\varphi(a_{k})+ \varphi(a_{k+2}) ) . \] So we have \[ \varphi(a_{k+2})-\varphi(a_{k+1}) \le \varphi(a_{k+1}) -\varphi(a_{k}), \; k=0,1,2,\ldots \] and \[ \varphi(a_{n}) = \varphi(a_{0}) + \sum_{k=1}^{n} (\varphi(a_{k})-\varphi(a_{k-1}) )
\le \varphi(a_{0}) + n (\varphi(a_{1})-\varphi(a_{0}) ). \] We may assume that $A \subset B(H)$ for some Hilbert space $H$. We shall show that $\varphi(a) \le \varphi(b)$. On the contrary, suppose it were not so. Then there exists a vector $\xi \in H$ such that
$ \langle \varphi(a)\xi | \xi \rangle > \langle \varphi(b)\xi | \xi \rangle $.
That is, $ \langle(\varphi(a_1)-\varphi(a_0))\xi | \xi \rangle <0 $.
Then $\langle \varphi(a_{n})\xi | \xi \rangle <0$ for sufficiently large $n$. This contradicts the fact that $\varphi(a_n) \ge 0$ for any $n$. Hence we have that $\varphi(a) \le \varphi(b)$. \end{proof}
\begin{proposition} There exist many non-linear positive maps on the positive cones of some $C^*$-algebras which satisfy anyone of the following conditions: \begin{enumerate}
\item[$(1)$] $\varphi$ is concave and not supercongruent.
\item[$(2)$] $\varphi$ is monotone, not concave and not supercongruent.
\item[$(3)$] $\varphi$ is not monotone and supercongruent.
\item[$(4)$] $\varphi$ is not monotone and not supercongruent.
\end{enumerate} \end{proposition} \begin{proof} In each case, this is verified by constructing concrete examples below. \end{proof} Remark. We shall show that if $\varphi$ is monotone and supercongruent, then $\varphi$ is concave in the next section. \\
\noindent Example (1-1) Let $M$ be a ${\rm II}_1$-factor and $\tau$ the trace on $M$. Define $\varphi : M^+ \rightarrow M^+$ by $\varphi(a) = \tau( a ){\bf 1}$ for $a \in M^+ $. Since $\varphi$ is the restriction of a linear map, $\varphi$ is concave. Let $p$ be a projection in $M$ with $\tau(p)=\frac{1}{2}$. Since \[ p \tau({\bf 1})p = p \nleq \tau(p{\bf 1} p){\bf 1} = \tau(p) {\bf 1} =\frac{1}{2}{\bf 1}, \] $\varphi$ is not supercongruent.
\noindent Example (1-2) Let $H$ be a Hilbert space and $M=B(H)$. Consider a projection
$p(\neq {\bf 1})$ of $B(H)$ and take a vector $\xi \in pH$ with $\|\xi\|=1$. Define $\varphi : M^+ \rightarrow M^+$ by $\varphi(a) = \langle a\xi, \xi \rangle {\bf 1}$ for $a \in B(H)^+$. Since $\varphi$ is the restriction of a linear map, $\varphi$ is concave. By the fact \[ ({\bf 1}-p)\varphi(p)({\bf 1} -p) = {\bf 1} -p \nleq \varphi(({\bf 1}-p)p({\bf 1}-p))=0, \] $\varphi$ is not supercongruent.
\noindent Example (2-1) Let $H = \ell ^2(\mathbb N)$ and $M = B(H)$. Consider a maximal abelian *-subalgebra $A \cong \ell^{\infty}(\mathbb N)$ of $B(H)$ and a conditional expectation $E$ of $B(H)$ onto $A$. We define $\varphi(a ) = E(a)^{2}$ for $a \in B(H)^+$. Since $E$ is a positive linear map and the mapping $A^+ \ni a \mapsto a^{2} \in A^+$ is monotone ($A$ being abelian), $\varphi$ is monotone. By the fact \begin{gather*}
\frac{\varphi(0{\bf 1})+\varphi(2{\bf 1})}{2} =2{\bf 1} \nleq {\bf 1} =\varphi({\bf 1}) = \varphi(\frac{0{\bf 1} +2{\bf 1}}{2}), \\
\frac{{\bf 1}}{2}\varphi({\bf 1})\frac{{\bf 1}}{2}=\frac{{\bf 1}}{4} \nleq \frac{{\bf 1}}{16}=\varphi(\frac{{\bf 1}}{4})
=\varphi(\frac{{\bf 1}}{2}\cdot {\bf 1} \cdot \frac{{\bf 1}}{2}), \end{gather*} $\varphi$ is not concave and not supercongruent.
\noindent Example (3-1) Let $H$ be a Hilbert space and $M=B(H)$. For $a \in B(H)^+$, define
$\varphi(a) = \begin{cases} {\bf 1} & \|a\|\le 1 \\ a & \|a\|>1 \end{cases}$. \\ Let $p(\neq {\bf 1})$ be a projection. Then we have \[ \varphi(\frac{1}{2}p) = {\bf 1} \nleq \varphi(2p)=2p. \] So $\varphi$ is not monotone.
Let $c \in B(H)$ be a contraction. If $\|a\|\le 1$, then $c^{*}\varphi(a) c=c^{*}c\le {\bf 1} = \varphi(c^{*}ac)$. If $\|a\|>1$, then $\varphi(a)=a$ and \begin{align*}
\varphi(c^{*}ac) & = \begin{cases} c^{*}ac, \quad & \|c^{*}ac\| >1 \\
{\bf 1}, & \|c^{*}ac\|\le 1 \end{cases} \\
& \ge c^{*}\varphi(a)c . \end{align*} So $\varphi$ is supercongruent.
\noindent Example (3-2) Let $M$ be a ${\rm II}_1$-factor. Define $\varphi : M^+ \rightarrow M^+$ by $\varphi(a) = \begin{cases} {\bf 1} & a \text{ is invertible} \\
2 {\bf 1} & a \text{ is not invertible} \end{cases} $ \\ for $a \in M^+$. It is clear that $\varphi$ is not monotone since, for any non-invertible positive contraction $a$, \[ a\le {\bf 1} \quad \text{ but } \quad \varphi(a) =2 {\bf 1} \nleq {\bf 1} =\varphi({\bf 1}). \] Let $c \in M^+$ be a contraction. If $a$ is invertible, then \[ c^*\varphi(a) c = c^{*}c \le {\bf 1} \le\varphi(c^{*}ac). \] If $a$ is not invertible, \[ c^{*}\varphi(a)c = 2 c^{*}c \le 2{\bf 1} =\varphi(c^{*}ac), \] where we use the fact that a left invertible element in a factor of type ${\rm II}_1$ is invertible, so that $c^{*}ac$ is not invertible. So we have that $\varphi$ is supercongruent.
\noindent Example (3-3) Let $M$ be a ${\rm II}_1$-factor and $\tau$ the normalized trace on $M$. Let $\alpha :[0,1]\longrightarrow [0,\infty)$ be a decreasing and non-constant function. For $a \in M^+$, let $r(a)$ be the range projection of $a$. Define $\varphi : M^+ \rightarrow M^+$ by \[ \varphi(a) = \alpha (\tau( r(a) ) ) {\bf 1} . \] Since $\alpha$ is decreasing and non-constant, there exist $t_{0}$, $t_{1}$ with $0\le t_{0}<t_{1}\le 1$ and $\alpha(t_{0})>\alpha(t_{1})$. We can choose projections $p,q$ with $\tau(p)=t_0$, $\tau(q)=t_1$, and $p\le q$. Then we have $\varphi(p) > \varphi(q)$. So $\varphi$ is not monotone.
For $x\in M$, we denote $r(x)$ (resp. $s(x)$) the range projection of $x$ (resp. the support projection of $x$). Let $c,x \in M^+$ with $\|c\|\le 1$. We set $p=r(x)$ and consider the polar decomposition of pc as follows: \[ pc = hv , \] where $h\ge 0$ and $r(v)=s(h)\le p$, $v^{*}v=s(pc)$, and $vv^{*}=s(h)$. Since $M$ is a factor of type ${\rm II}_{1}$, there exists a unitary $u\in M$ satisfying $u^{*}s(h) = v^{*}$. Then we have \begin{align*}
s(c^{*}xc) & = s(c^{*}pxpc) = s(v^{*}hxhv) = s(u^{*}hxhu) \\
& = u^{*} s(hxh)u \le u^{*}s(p)u = u^{*}s(x) u. \end{align*} Since \[ \tau(s(c^{*}xc))\le \tau(u^{*}s(x)u) = \tau(s(x)), \] we can prove the supercongruence of $\varphi$ as follows: \[ \varphi(c^{*}xc) = \alpha(\tau(s(c^{*}xc))){\bf 1} \ge \alpha(\tau(s(x))){\bf 1} \ge c^{*}\alpha(\tau(s(x)))c
= c^{*}\varphi(x) c. \]
\noindent Example (3-4) Let $H$ be a Hilbert space and $M=B(H)$. For $a \in B(H)^+$, define \[ \varphi(a) = \begin{cases} {\bf 1}, \quad & {\rm rank}(a)=\infty \\
2{\bf 1} & {\rm rank}(a)<\infty \end{cases} . \] Let $p$ be a finite rank projection. By the fact $\varphi(p)=2 {\bf 1} >{\bf 1} = \varphi({\bf 1})$, $\varphi$ is not monotone. If ${\rm rank}(a) <\infty$, then ${\rm rank}(c^*ac)<\infty$ and \[ c^{*}\varphi(a)c =2c^{*}c \le 2 I = \varphi(c^{*}ac).\] If ${\rm rank}(a) =\infty$, then \[ c^{*}\varphi(a)c =c^{*}c \le I \leq \varphi(c^{*}ac).\] So $\varphi$ is supercongruent.
\noindent Example (4-1) Let $H$ be a Hilbert space and $M=B(H)$. For $a \in B(H)^+$, define $\varphi(a)= a^{2}$. Because $f(x) = x^2$ is not an operator monotone function, $\varphi$ is not monotone. \[ \frac{1}{2}{\bf 1} \cdot \varphi({\bf 1}) \cdot \frac{1}{2}{\bf 1} = \frac{1}{4}{\bf 1} \nleq
\varphi(\frac{1}{2}{\bf 1} \cdot {\bf 1} \cdot \frac{1}{2}{\bf 1}) = \varphi(\frac{1}{4}{\bf 1}) =\frac{1}{16}{\bf 1}. \] This implies that $\varphi$ is not supercongruent.
\noindent Example (4-2) Let $f$ be a real function $f(x) = 1 \vee x$. Let $H$ be a Hilbert space and $M=B(H)$. For $a \in B(H)^+$, define $\varphi(a)= {\bf 1} \vee a = f(a)$ by a functional calculus. Consider \begin{gather*} a=\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} \le
b= \begin{pmatrix} 3 & 0 \\ 0 & 3/2 \end{pmatrix}. \\
\varphi(a) = \begin{pmatrix} 3/2 & 1/2 \\ 1/2 & 3/2 \end{pmatrix} \nleq
\varphi(b) = \begin{pmatrix} 3 & 0 \\ 0 & 3/2 \end{pmatrix}. \end{gather*} Thus $\varphi$ is not monotone.
Consider
\begin{gather*} a=\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} , \quad
c= \begin{pmatrix} 1/2 & 1/2\\ 1/2& 1 /2\end{pmatrix}. \\
c\varphi(a)c = c\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}c =\begin{pmatrix} 3/4 & 3/4 \\ 3/4 & 3/4 \end{pmatrix} \nleq
\varphi(cac) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \end{gather*} Hence $\varphi$ is not supercongruent.
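The two counterexamples above can also be checked numerically. The following NumPy sketch (an illustration for the reader only, not part of the argument; the helper names are ours) verifies that the functional calculus by $f(t)=1\vee t$ is neither monotone nor supercongruent on $2\times 2$ positive matrices.
\begin{verbatim}
import numpy as np

def calc(f, a):
    # continuous functional calculus f(a) of a Hermitian matrix a
    w, v = np.linalg.eigh(a)
    return v @ np.diag(f(w)) @ v.conj().T

def is_psd(x, tol=1e-10):
    return np.min(np.linalg.eigvalsh((x + x.conj().T) / 2)) >= -tol

f = lambda t: np.maximum(1.0, t)          # f(t) = 1 v t

a = np.array([[1.0, 1.0], [1.0, 1.0]])
b = np.array([[3.0, 0.0], [0.0, 1.5]])
print(is_psd(b - a))                      # True : a <= b
print(is_psd(calc(f, b) - calc(f, a)))    # False: f(a) <= f(b) fails, not monotone

a2 = np.array([[2.0, 0.0], [0.0, 0.0]])
c = np.array([[0.5, 0.5], [0.5, 0.5]])    # positive contraction
print(is_psd(calc(f, c @ a2 @ c) - c @ calc(f, a2) @ c))
                                          # False: c f(a) c <= f(c a c) fails
\end{verbatim}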
\section{Characterization of monotone maps given by Borel functional calculus} Let $M$ be a von Neumann algebra on a Hilbert space $H$ and $\varphi : M^+ \rightarrow M^+$ be the non-linear positive map defined by the range projection $\varphi(a)$ of $a \in M^+$. Then we showed that $\varphi$ is monotone, supercongruent and normal. This is a typical example of a non-linear
positive map which is
monotone, supercongruent and normal but is not of the form of a continuous functional
calculus. We should remark that this map is given by the Borel functional calculus of the
Borel function $\chi_{(0,\infty)}$ on $[0,\infty )$ as follows: \[ \varphi(a) = \chi_{(0,\infty)}(a), \] where \[ \chi_{(0,\infty)} (t) = \begin{cases} 0 & t = 0 \\ 1 & t>0 \end{cases} .\]
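In finite dimensions this Borel functional calculus is easy to make explicit. The following sketch (illustrative only and not part of the text; it assumes NumPy, and the function name is ours) computes $\chi_{(0,\infty)}(a)$ from the spectral decomposition and checks that it is indeed the range projection of $a$.
\begin{verbatim}
import numpy as np

def range_projection(a, tol=1e-10):
    # Borel functional calculus chi_{(0,infty)}(a) of a positive semidefinite a
    w, v = np.linalg.eigh(a)
    chi = (w > tol).astype(float)   # chi_{(0,infty)} evaluated on the spectrum
    return v @ np.diag(chi) @ v.conj().T

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 2))
a = x @ x.T                          # positive semidefinite, rank 2
p = range_projection(a)
print(np.allclose(p @ p, p))         # True: p is a projection
print(np.allclose(p @ a, a))         # True: p acts as the identity on the range of a
\end{verbatim}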
In this section, we shall characterize monotone maps given by Borel functional calculus.
At first we recall Borel functional calculus.
Let $\Omega$ be a metrizable topological space and $C(\Omega)$ the set of all complex-valued continuous functions on $\Omega$. We denote by $\mathcal{B}(\Omega)$ the set of all bounded complex Borel functions on $\Omega$. For a bounded self-adjoint linear operator $a\in B(H)$ there exists a correspondence \[ \mathcal{B}(\sigma(a)) \ni f \mapsto f(a) \in B(H) \] satisfying \begin{itemize}
\item[(1)] $f(a) = \alpha_0 {\bf 1} + \alpha_1 a + \cdots + \alpha_n a^n$ for any polynomial $f(\lambda) =\alpha_0 + \alpha_1\lambda + \cdots + \alpha_n\lambda^n$.
\item[(2)] If $(f_n)_n$ is a bounded sequence in $\mathcal{B}(\sigma(a))$ which tends to $f\in \mathcal{B}(\sigma(a))$ with respect to the pointwise convergence topology, then the sequence $(f_n(a))_n$ of operators tends to the operator $f(a)$ in the strong operator topology.
\item[(3)] If $f$ is continuous on $\sigma(a)$, then the Borel functional calculus
coincides with the continuous functional calculus. \end{itemize} Moreover, this correspondence is a *-homomorphism of $\mathcal{B}(\sigma(a))$ onto the von Neumann algebra generated by $a$ (see \cite[2.20]{stratila}). We call $f(a)$ the Borel functional calculus of $a$ by $f\in \mathcal{B}(\sigma(a))$.
The following fact is well-known (see, \cite[Theorem V.2.3]{bhatia1}).
\begin{lemma} Let $f$ be an operator monotone {\it continuous} function on an interval $J$ which contains $0$ and $f(0)\ge 0$. Then we have \[ c^{*}f(a)c \le f(c^{*}ac), \]
for any $a=a^{*}\in B(H)$ with $\sigma(a)\subset J$ and any $c\in B(H)$ with $\|c\|\le 1$. \end{lemma}
\begin{theorem} Let $M$ be an infinite-dimensional factor on a Hilbert space $H$ and $\varphi : M^+ \longrightarrow M^+$ be a non-linear positive map. Then the following are equivalent: \begin{enumerate} \item[$(1)$] $\varphi$ is monotone and supercongruent. \item[$(2)$] There exists a Borel function $f:[0,\infty) \longrightarrow [0,\infty)$ such that $f$ is continuous on $(0,\infty)$ , operator monotone on $(0,\infty)$ with \[ f(0) \le \lim_{t\to 0+}f(t), \] and $\varphi(a)$ is equal to the Borel functional calculus $f(a)$ of $a$ by $f$ for any $a\in M^+$. \end{enumerate} \label{Borelcharacterization} \end{theorem} \begin{proof} (1) $\Rightarrow$ (2): Assume that $\varphi$ is monotone and supercongruent. Firstly, we shall show that, for any $a \in M^+$ and any projection $p \in M$, if $ap = pa$, then $p\varphi(a) = \varphi(a) p = p\varphi(pap)p$.
In fact, suppose that $ap = pa$. Then $pap=a^{1/2}pa^{1/2}\le a$. Since $\varphi$ is supercongruent and monotone, we have \[ p \varphi(a) p \le \varphi (pap) \le \varphi(a). \] The positivity of $\varphi(a)-p\varphi(a)p$ implies $p\varphi(a)({\bf 1}-p)=0$ and $({\bf 1} -p)\varphi(a)p=0$. So $p\varphi(a) = \varphi(a) p = p\varphi(pap)p$.
Take $t{\bf 1}$ for any $t \in [0, \infty)$. Because $p(t{\bf 1}) = (t{\bf 1})p$, we have that $p\varphi(t{\bf 1}) = \varphi(t{\bf 1})p$. Since $M$ is a factor, $\varphi(t{\bf 1})$ is a scalar operator $f(t){\bf 1}$. Thus $f$ turns out to be a (not necessarily continuous) function $f:[0,\infty)\longrightarrow [0,\infty)$ such that $$ \varphi(t{\bf 1}) = f(t){\bf 1} \ \ {\ for \ any \ } t \in [0, \infty). $$ By the monotonicity of $\varphi$, $f$ is increasing on $[0,\infty)$. In particular, we have \[ f(0) \le \lim_{t\to 0+} f(t). \]
Moreover for any $a \in M^+$ and any $\xi \in {\rm Ker } a$, we have $$ \varphi(a)\xi = f(0)\xi $$ In fact, let $r$ be the range projection of $a$. Then ${\bf 1} - r$ is the projection onto the ${\rm Ker } a$. Since $ar = ra$, we have that $\varphi(a)r = r\varphi(a) = r\varphi(rar)r$. Similarly, $\varphi(a)({\bf 1} - r) = ({\bf 1} - r)\varphi(a)$ and $$ \varphi(a)({\bf 1} - r) = ({\bf 1} - r)\varphi( ({\bf 1} - r)a ({\bf 1} - r)) ({\bf 1} - r) = ({\bf 1} - r)\varphi( 0{\bf 1}) ({\bf 1} - r) = f(0)({\bf 1} - r) $$ Hence $\varphi (a)\xi = f(0)({\bf 1} - r)\xi = f(0)\xi $.
We shall show that for any $n\in \mathbb{N}$, $t_i\in [0,\infty)$ and projections $p_i \in M$ $(i=1,2,\ldots,n)$ with $\sum_{i=1}^n p_i= {\bf 1}$, \[ \varphi(\sum_{i=1}^n t_ip_i) = \sum_{i=1}^n f(t_i)p_i. \] In fact, put $a = \sum_{i=1}^n t_ip_i$ and take any $k=1,2,\ldots,n$ and fix it. Put $b = t_k{\bf 1}$. Then $ap_k = p_ka $ and $bp_k = p_kb$. Therefore $$ p_k\varphi(a) = \varphi(a) p_k = p_k\varphi(p_kap_k)p_k = p_k\varphi(t_kp_k)p_k $$ and $$ f(t_k)p_k = \varphi(b) p_k= p_k\varphi(p_kbp_k)p_k = p_k\varphi(t_kp_k)p_k. $$ Hence \[ \varphi(\sum_{i=1}^n t_ip_i) = \sum_{i=1}^n f(t_i)p_i. \]
Next we shall show that, for any {\it invertible} $a \in M^+$ and any sequence $(a_n)_n $
in $M^+$ with $a_n \leq a$ , if $\|a_n - a \| \rightarrow 0$, then
$\|\varphi(a_n) - \varphi(a)\| \rightarrow 0$. In fact, let $$ c_n = a^{-1} \# a_n := a^{-1/2}(a^{1/2}a_na^{1/2})^{1/2}a^{-1/2} $$ be the geometric operator mean of $a^{-1}$ and $ a_n$ (see \cite{kuboando}). Then $a_n = c_nac_n$ and $$ 0 \leq c_n \leq a^{-1/2}(a^{1/2}aa^{1/2})^{1/2}a^{-1/2} = {\bf 1}. $$
Because $\|a_n - a \| \rightarrow 0$, we have that $\|c_n - {\bf 1} \| \rightarrow 0$. Since $\varphi$ is monotone and supercongruent, $$ 0 \leq c_n \varphi(a)c_n \leq \varphi(c_n ac_n) = \varphi(a_n) \leq \varphi(a). $$ Then $\varphi(a) - \varphi(a_n) \leq \varphi(a) - c_n \varphi(a)c_n$. Hence $$
\| \varphi(a) - \varphi(a_n) \| \leq \|\varphi(a) - c_n \varphi(a)c_n\| \rightarrow 0. $$ We shall also show that, for any {\it invertible} $a \in M^+$ and any sequence $(b_n)_n $
in $M^+$ with $a\leq b_n$ , if $\|b_n - a \| \rightarrow 0$, then
$\|\varphi(b_n) - \varphi(a) \| \rightarrow 0$. In fact, let $$ d_n = a \# b_n^{-1} := a^{1/2}(a^{-1/2}b_n^{-1}a^{-1/2})^{1/2}a^{1/2} $$ be the geometric operator mean of $a$ and $b_n^{-1}$. Then $a= d_nb_nd_n$ and $$ 0 \leq d_n \leq a^{1/2}(a^{-1/2}a^{-1}a^{-1/2})^{1/2}a^{1/2} ={\bf 1}. $$
Because $\|b_n - a \| \rightarrow 0$, we have that $\|d_n - {\bf 1} \| \rightarrow 0$. Since $\varphi$ is monotone and supercongruent, $$ 0 \leq d_n\varphi(b_n)d_n \leq \varphi (d_n b_nd_n)= \varphi(a), $$ and $0 \leq \varphi(b_n) \leq d_n^{-1}\varphi(a)d_n^{-1}$. Then $$\varphi(b_n) - \varphi(a) \leq d_n^{-1}\varphi(a)d_n^{-1} - \varphi(a) $$ Hence
$\| \varphi(b_n) - \varphi(a) \| \leq \| d_n^{-1}\varphi(a)d_n^{-1} - \varphi(a)
\| \rightarrow 0$.
In particular, since $\varphi(t{\bf 1}) = f(t){\bf 1}$, the function $f$ is continuous on $(0,\infty)$.
Moreover $f$ is a Borel function on $[0,\infty)$.
For any {\it invertible} element $a \in M^+$, we shall show that $\varphi(a)$ is
equal to the continuous functional calculus of $a$ by $f|_{(0,\infty)}$ on $(0,\infty)$, that is, $\varphi(a)=f(a)$. We may assume that $\sigma(a)\subset [\alpha, \beta]$ for some $0 <\alpha \leq \beta$ in $(0,\infty)$. For any positive integer $n$, we define a function $g_n$ on $[\alpha, \beta]$ as follows: \[ g_n(t) = \begin{cases} \alpha, & \alpha \le t \le \alpha + \dfrac{\beta-\alpha}{2^n} \\
\alpha +(k-1)\dfrac{\beta-\alpha}{2^n}, & \alpha +(k-1)\dfrac{\beta-\alpha}{2^n}<t
\le \alpha +k\dfrac{\beta-\alpha}{2^n} \end{cases} ,\] where $k=2,3,\ldots,2^n$. Put $a_n =g_n(a)$. Then we have $0 \leq a_n \le a$, $\sigma(a_n)\subset [\alpha, \beta]$ and $\varphi(a_n) =f(a_n)$, because $a_n$ has a finite spectrum.
Since $\|a_n - a \| \rightarrow 0$, $\|\varphi(a_n) - \varphi(a)\| \rightarrow 0$. On the other hand, since $f$ is continuous on $(0,\infty)$ and the continuous functional calculus by $f$ on $[\alpha, \beta]$ is norm continuous,
$\|f(a_n) - f(a)\|\rightarrow 0$. Therefore $\varphi(a)=f(a)$.
Because $M$ is an infinite-dimensional factor, $M$ contains any finite matrix algebra $M_n({\mathbb C})$. Hence $f$ is an operator monotone continuous function on $(0,\infty)$.
For a possibly non-invertible element $a \in M^+$ in general, we shall show that $\varphi(a)$ is equal to the Borel functional calculus of $a$ by $f$, that is, $\varphi(a)=f(a)$. This case is a little bit subtle. We may assume that $\sigma(a)\subset [0, \beta]$ for some $\beta \geq 0$.
For any positive integer $n$, we define a function $\tilde{g_n}$ on $[0,\beta]$ as follows: \[ \tilde{g_n}(t) = \begin{cases} 0, & 0 \le t \le \dfrac{\beta}{2^n} \\
\dfrac{(k-1)\beta}{2^n}, & \dfrac{(k-1)\beta}{2^n}<t
\le \dfrac{k\beta}{2^n} \end{cases} , \] where $k=2,3,\ldots,2^n$. Put $a_n =\tilde{g_n}(a)$. If $m\le n$ we have $a_m\le a_n\le a$. Since $\varphi$ is monotone,
$\varphi(a_m) \le \varphi(a_n) \le \varphi(a)$. And
$\varphi(a_n) =f(a_n)$, because $a_n$ has a finite spectrum. Since the sequence $\{ \tilde{g}_{n}\}$ converges to the identity map on $[0, \beta]$ with respect to the pointwise convergence topology, the increasing sequence $(a_n )_n = (\tilde{g}_{n}(a))_n$ in $M^+$
converges to $a$ in the strong operator topology.
At this moment we do not know whether $\varphi$ is normal. But, only for this particular sequence $(a_n )_n $,
we can show that $\varphi(a_n)$ converges to $\varphi(a)$ in the weak operator topology.
In fact, let $\tilde{h_n}$ be a bounded Borel function on $[0,\beta]$ as follows: \[ \tilde{h_n}(t) =\begin{cases} 0, & 0\le t \le \dfrac{\beta}{2^n} \\
\sqrt{ \dfrac{\tilde{g_n}(t)}{t} } & \dfrac{\beta}{2^n}<t \le \beta \end{cases}. \] Then $0\le \tilde{h}_{n}\le 1$ and $\{\tilde{h}_{n}\}$ converges pointwise to $\chi_{(0,\beta]}$. We set $c_n = \tilde{h_n}(a)$. Then the sequence $(c_{n} )_n$ of positive contractions strongly converges to the range projection $r = \chi_{(0,\beta]}(a)$ of $a$ and $c_{n}ac_{n}= a_{n}$.
For any $\xi \in H$, put $\xi_1 := r\xi$ and $\xi_2 := ({\bf 1} - r)\xi \in {\rm Ker } a$. Since $a_n \leq a$, ${\rm Ker } a \subset {\rm Ker } a_n$ and $\xi_2 \in {\rm Ker } a_n$. Because $ar = ra$ and $a_nr = ra_n$, $\varphi(a)r = r\varphi(a)$ and $\varphi(a_n)r = r\varphi(a_n)$ . Since $\varphi$ is monotone and supercongruent, $c_n \varphi(a)c_n \le \varphi(c_nac_n) = \varphi(a_n) \le \varphi(a)$.
Thus we have \[ 0\le \varphi(a)-\varphi(a_n) \le \varphi(a) -c_n\varphi(a)c_n . \] Then we have \begin{align*}
0 & \leq <(\varphi(a) - \varphi(a_n))\xi, \xi> \\
= & <(\varphi(a) - \varphi(a_n))\xi_1, \xi_1>
+ <(\varphi(a) - \varphi(a_n))\xi_2, \xi_2>\\
\leq & <(\varphi(a) - c_n \varphi(a)c_n)\xi_1, \xi_1>
+ <f(0)\xi_2, \xi_2>- <f(0)\xi_2, \xi_2> \\
= & <\varphi(a)\xi_1, \xi_1> - <\varphi(a)c_n\xi_1, c_n\xi_1>. \end{align*}
Since $c_n\xi_1$ converges to $r\xi_1 = \xi_1$, we conclude that $\varphi(a_n)$ converges to $\varphi(a)$ in the weak operator topology.
We should note that a Borel functional calculus is not normal in general. But we shall show that the Borel functional calculus $\varphi_f$ on $M^+$ by the particular function $f$ is normal. In fact, define a continuous function $F:[0,\infty)\longrightarrow [0,\infty)$ by \[ F(t) = f(t) - \lim_{t\to 0+} f(t). \] Then $F$ is operator monotone on $[0,\infty)$. In fact, for $0\le a \le b$ and any $\epsilon>0$, $F(a+\epsilon {\bf 1})\le F(b+\epsilon {\bf 1})$ because $f$ is operator monotone on $(0,\infty)$. By the continuity of $F$, we can get $F(a)\le F(b)$ by making $\epsilon$ tend to $0$. Thus $F$ is operator monotone function on $[0,\infty)$ with $F(0)=0$. The functional calculus $\varphi_F$ by the continuous function $F$ is normal. The function $f$ is decomposed into \[ f(t) = F(t) + k \chi_{(0,\infty)}(t), \qquad k =\lim_{t\to 0+}f(t)-f(0)\ge 0. \] Then the Borel functional calculus $\varphi_f$ of $a\in \mathcal{M}^+$ by $f$ has the form: \[ \varphi_f(a) = \varphi_F(a) + k \varphi_{(0,\infty)}(a), \] where $\varphi_{(0,\infty)}(a)$ is the Borel functional calculus of $a$ by $\chi_{(0,\infty)}$ and in fact the range projection of $a$. Hence $\varphi_{(0,\infty)}$ is normal by Propositon \ref{rangeprojection}. Therefore the Borel functional calculus $\varphi_f$ by $f$ is normal.
Finally, since $\varphi(a_n)$ converges $\varphi(a)$ and $f(a_n)$ converges $f(a)$ in the weak operator topology and $\varphi(a_n) =f(a_n)$, we conclude that $\varphi(a) = f(a)=\varphi_f(a)$, the Borel functional calculus of $a$ by $f$. \\ (2) $\Rightarrow$ (1): Suppose that there exists a Borel function $f:[0,\infty) \longrightarrow [0,\infty)$ such that $f$ is continuous on $(0,\infty)$ , operator monotone on $(0,\infty)$ with \[ f(0) \le \lim_{t\to 0+}f(t), \] and $\varphi(a)$ is equal to the Borel functional calculus $f(a)= \varphi_f(a)$ of $a$ by $f$ for any $a\in M^+$. We define a continuous function $F:[0,\infty)\longrightarrow [0,\infty)$ by \[ F(t) = f(t) - \lim_{t\to 0+} f(t). \] Then as in the preceding discussion, $F$ is operator monotone on $[0,\infty)$ with $F(0)=0$. Hence the continuous functional calculus $\varphi_F$ is supercongruent as in Example \ref{monotonesupercongruent}. The function $f$ is decomposed into \[ f(t) = F(t) + k \chi_{(0,\infty)}(t), \qquad k =\lim_{t\to 0+}f(t)-f(0)\ge 0, \] and \[ \varphi_f(a) = \varphi_F(a) + k \varphi_{(0,\infty)}(a), \] where $\varphi_{(0,\infty)}(a)$ is the range projection of $a$ and $\varphi_{(0,\infty)}$ is supercongruent. Hence $\varphi$ is monotone and supercongruent.
\end{proof}
By the above theorem, the restricted norm continuity and the normality of the non-linear positive map are satisfied automatically without assuming them a priori.
\begin{corollary} Let $M$ be an infinite-dimensional factor and $\varphi : M^+ \longrightarrow M^+$ be a non-linear positive map. If $\varphi$ is monotone and supercongruent, then $\varphi$ is normal on $M^+ $ and $\varphi$ is norm continuous on the set of positive invertible elements $(M^+)^{-1}$. Moreover $\varphi$ is concave. \end{corollary} \begin{proof} All of the assertions except concavity were proved in the discussion in the proof of the theorem above. Since $f_n(t) = t^{1/n}$ is an operator concave function on $[0,\infty)$ and $\chi_{(0,\infty)}(t) = \lim_{n \to \infty} f_n(t)$, $\chi_{(0,\infty)}$ is also an operator concave function. Since $F$ is operator monotone on $[0,\infty)$, $F$ is also operator concave. Therefore $\varphi$ is concave. \end{proof}
In the above Theorem, if we weaken the supercongruence condition so that $$ c\varphi(a)c \le \varphi(cac) $$ is required only for positive {\it invertible} contractions $c \in M^+$ and $a \in M^+$, then the conclusion of the Theorem above does not hold in general. In fact, let the functions $f_\alpha$ $(\alpha\ge 0)$ be operator monotone on $[0,\infty)$ and increasing in $\alpha$, that is, \begin{gather*}
a, b \in M^+ \text{ with } a\le b \Rightarrow f_{\alpha}(a)\le f_{\alpha}(b) \\
\text{ and } \alpha \le \beta \Rightarrow f_{\alpha}(t) \le f_{\beta}(t) \quad (t\in[0,\infty)), \tag{*} \end{gather*} for any factor $M$. For example, it is well known that \[ f(t) = \alpha -\frac{1}{t+1} \quad (\alpha\ge 0) \] is operator monotone on $[0,\infty)$ (\cite{bhatia1}, \cite{bhatia2}, \cite{hiaipetz}). So the function \[ f_{\alpha}(t) = \frac{\alpha}{\alpha+1}-\frac{1}{t+1} \quad (\alpha\ge 0) \] satisfies the condition (*).
\begin{proposition} We assume that $M = B(H)$ for a separable Hilbert space $H$ and the operator monotone function $f_{\alpha}$ on $[0,\infty)$ with the property (*) and \[ f_{\infty}(t) = \lim_{\alpha \to \infty}f_{\alpha}(t) <\infty \] exists for all $t\in[0,\infty)$. We define the map $\varphi : M^+\longrightarrow M^+$ as follows: \[ \varphi(a) = f_{{\rm rank}(a)}(a) \qquad a \in M^+, \] where ${\rm rank}(a) = \dim($the closure of $a\mathcal{H})$. Then we have the following. \begin{enumerate}
\item[$(1)$] $a, b \in M^+$ with $a\le b$ $\Rightarrow$ $\varphi(a)\le \varphi(b)$.
\item[$(2)$] For any invertible $c\in M$, $c^{*}\varphi(a)c \le \varphi (c^{*}ac)$ $(a\in M^+)$.
\item[$(3)$] If $f_m\neq f_n$ for some $m,n\in \mathbb{N}$, then $\varphi$ is not given as the continuous function calculus. \end{enumerate} \end{proposition} \begin{proof} (1) Since $a\le b$, ${\rm rank}a \le {\rm rank} b$. So we have \[ \varphi(a) = f_{{\rm rank}(a)}(a) \le f_{{\rm rank}(a)}(b) \le f_{{\rm rank}(b)}(b) = \varphi(b). \]
(2) Since the mapping $f_{{\rm rank}(a)}$ is operator monotone on $[0,\infty)$, we have \[ c^*f_{{\rm rank}(a)}(a)c = f_{{\rm rank}(a)}(c^*ac) , \] using the approximation of polynomials for $f_{{\rm rank}(a)}$. By the invertibility of $c$, we have ${\rm rank}(c^*ac) ={\rm rank}(a)$ and \[ c^* \varphi(a) c = \varphi(c^*ac). \]
(3) By definiton, we have $\varphi(t{\bf 1}) = f_{\infty}(t) {\bf 1}$ for any $t\in [0,\infty)$. We assume $m<n$ and $f_m(t_0)<f_n(t_0)$ for some $t_0\in (0,\infty)$. For a projection $p\in M$ with ${\rm rank}(p) = m$, we have \[ \varphi(t_0 p) = f_m(t_0 p) <f_\infty(t_0 p) . \] \end{proof}
Finally we shall discuss the ambiguity of operator means for non-invertible positive operators related to our Theorem, if we do {\it not} assume the upper semi-continuity for operator means. We follow the original paper of Kubo-Ando \cite{kuboando}; see also \cite{bhatia2}, \cite{hiaipetz}.
\begin{corollary} Let $M$ be an infinite-dimensional factor. If the mapping \[ \sigma: M^+\times M^+\ni (a,b) \mapsto a\sigma b \in M^+\] satisfies the following conditions: \begin{enumerate}
\item[$(1)$] $a\le c$ and $b \le d$ imply $a \sigma b \le c\sigma d$.
\item[$(2)$] For any $c\in M^+$, $c(a\sigma b)c \le (cac)\sigma (cbc)$. \end{enumerate} then there exist non-negative real valued, increasing, continuous functions $f$ and $g$ on $(0,\infty)$ such that \begin{align*}
a\sigma b & = b^{1/2}f(b^{-1/2}ab^{-1/2})b^{1/2} \\
& = a^{1/2}g(a^{-1/2}ba^{-1/2})a^{1/2} \end{align*} for any positive invertible operators $a,b \in M^+$. But we do not know how to represent $a\sigma b$ for positive non-invertible operators $a,b \in M^+$. \end{corollary} \begin{proof} We define the mapping $\varphi: M^+ \longrightarrow M^+$ as follows: \[ \varphi(a) = a\sigma {\bf 1} \qquad (a\in M^+). \] It is clear that \[ a\le b \; \Rightarrow \varphi(a)=a\sigma {\bf 1} \le b\sigma {\bf 1} = \varphi(b) \] and for any contraction $c\in M^+$, \[ c\varphi(a)c = c(a\sigma {\bf 1})c \le (cac)\sigma (c^2) \le (cac)\sigma {\bf 1} = \varphi(cac). \] By Theorem \ref{Borelcharacterization}, we can get the desired function $f$ and the relation \[ f(a) = a\sigma {\bf 1} \qquad (a\in (M^+)^{-1}). \]
For any positive invertible operators $a, b\in M^+$, we have $$ a\sigma b = b^{1/2}f(b^{-1/2}ab^{-1/2})b^{1/2} $$ in the usual way: \begin{align*}
a \sigma b & = b^{1/2}\bigl(b^{-1/2}(a\sigma b) b^{-1/2}\bigr) b^{1/2} \\
& \le b^{1/2}( (b^{-1/2}ab^{-1/2}) \sigma {\bf 1} )b^{1/2} = b^{1/2}f(b^{-1/2}ab^{-1/2})b^{1/2} \\
& \le a\sigma b. \end{align*}
We also define $\psi: M^+ \longrightarrow M^+$ as follows: \[ \psi(a) = {\bf 1} \sigma a \qquad (a\in M^+). \] Then we can get the function $g$ satisfying \[ a\sigma b = a^{1/2}g(a^{-1/2}ba^{-1/2})a^{1/2} \quad \text{ for any } a,b \in (M^+)^{-1}. \] \end{proof}
\begin{remark} \rm We do not know how to represent $a\sigma b$ for positive non-invertible operators $a,b \in M^+$. Based on the theory of Grassmann manifolds, Bonnabel-Sepulchre \cite{B-S} and Batzies-H\"{u}per-Machado-Leite \cite{B-H-M-L} introduced the geometric mean for positive semidefinite matrices or projections of fixed rank. Fujii \cite{F} extended it to a general theory of means of positive semidefinite matrices of fixed rank. \end{remark}
\section{Non-additive measures and non-linear monotone positive maps} In this section we begin to study non-linear monotone positive maps related to non-additive measures. A non-additive measure is also called a capacity, fuzzy measure, submeasure, monotone measure, etc.\ in different fields. Non-additive measures were first studied by Choquet \cite{Ch} and Sugeno \cite{Su}. They proposed the Choquet integral and the Sugeno integral with respect to monotone measures.
\begin{definition} \rm
Let $\Omega$ be a set and ${\mathcal B}$ a $\sigma$-field on $\Omega$. A function $\mu: {\mathcal B} \rightarrow [0, \infty]$ is called a monotone measure if $\mu$ satisfies \begin{enumerate} \item[$(1)$] $\mu(\emptyset) = 0$, and \item[$(2)$] For any $A,B \in {\mathcal B}$, if $A \subset B$, then $\mu(A) \leq \mu(B)$. \end{enumerate} \end{definition}
We recall the discrete Choquet integral with respect to a monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$. Let ${\mathcal B} = P(\Omega)$ be the set of all subsets of $\Omega$ and $\mu: {\mathcal B} \rightarrow [0, \infty)$ be a finite monotone measure. \begin{definition} \rm The discrete Choquet integral of $f = (x_1, x_2,\dots, x_n) \in [0,\infty)^n$ with respect to a monotone measure $\mu$ on a finite set $\Omega = \{1,2, \dots, n\}$ is defined as follows: $$ (C)\int f d\mu = \sum_{i=1}^{n-1} (x_{\sigma(i) }- x_{\sigma(i+1)})\mu(A_i) + x_{\sigma(n) }\mu(A_n) , $$ where $\sigma$ is a permutation on $\Omega$ such that $x_{\sigma(1)} \geq x_{\sigma(2)} \geq \dots \geq x_{\sigma(n)}$, $A_i = \{\sigma(1),\sigma(2),\dots,\sigma(i)\}$. Here we should note that $$ f = \sum_{i=1}^{n-1} (x_{\sigma(i) }- x_{\sigma(i+1)})\chi_{A_i} + x_{\sigma(n) }\chi_{A_n} $$ \end{definition} Let $A = {\mathbb C}^n$ and define $(C-\varphi)_{\mu} :( {\mathbb C}^n)^+ \rightarrow {\mathbb C}^+$ by the Choquet integral $(C-\varphi)_{\mu}(f) = (C)\int f d\mu$. Then $(C-\varphi)_{\mu}$ is a non-linear monotone positive map such that $(C-\varphi)_{\mu}(\alpha f) = \alpha(C- \varphi)_{\mu}(f)$ for a positive scalar $\alpha$.
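For concreteness, the following sketch (illustrative only, assuming NumPy; the monotone measure used in the example is ours) computes the discrete Choquet integral directly from the definition above, with $\mu$ given as a function on subsets of $\{0,\dots,n-1\}$ (0-based indices).
\begin{verbatim}
import numpy as np

def choquet(x, mu):
    # discrete Choquet integral of x = (x_1,...,x_n) w.r.t. a monotone measure mu,
    # where mu is a function on frozensets of 0-based indices
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)                   # x[order[0]] >= x[order[1]] >= ...
    xs = x[order]
    total = 0.0
    for i in range(len(x)):
        A_i = frozenset(order[: i + 1])      # A_i = {sigma(1),...,sigma(i)}
        drop = xs[i] - (xs[i + 1] if i + 1 < len(x) else 0.0)
        total += drop * mu(A_i)
    return total

n = 3
mu = lambda A: (len(A) / n) ** 2             # monotone but not additive
print(choquet([3.0, 1.0, 2.0], mu))          # 1/9 + 4/9 + 1 = 14/9
\end{verbatim}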
We shall consider a matrix version of the discrete Choquet integral. \begin{proposition} Let $\mu: {\mathcal B} \rightarrow [0, \infty)$ be a finite monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$ with ${\mathcal B} = P(\Omega)$. Let $A = M_n({\mathbb C})$ and define $(C-\varphi)_{\mu} : (M_n({\mathbb C}))^+ \rightarrow {\mathbb C}^+$ as follows: For $a \in (M_n({\mathbb C}))^+$, let $\lambda(a) = (\lambda_1(a),\lambda_2(a),\dots, \lambda_n(a))$ be the list of the eigenvalues of $a$ in decreasing order : $\lambda_1(a), \geq \lambda_2(a) \geq \dots \geq \lambda_n(a)$ with counting multiplicities. Let $$ (C-\varphi)_{\mu}(a) = \sum_{i=1} ^{n-1} ( \lambda_i(a)- \lambda_{i+1}(a))\mu(A_i) + \lambda_n (a) \mu(A_n) , $$ where $A_i = \{1,2,\dots,i \}$. Then $(C-\varphi)_{\mu}$ is a unitary invariant non-linear monotone positive map such that $(C-\varphi)_{\mu}(\alpha a) = \alpha (C-\varphi)_{\mu}(a)$ for a positive scalar $\alpha$. \end{proposition} \begin{proof} For $a, b \in (M_n({\mathbb C}))^+$, suppose that $0 \leq a \leq b$. By the mini-max principle for eigenvalues, we have that $\lambda_i(a) \leq \lambda_i(b)$ for $i = 1,2,\dots,n$. \begin{align*} (C-\varphi)_{\mu}(a) & = \sum_{i=1} ^{n-1} ( \lambda_i (a) - \lambda_{i+1} (a))\mu(A_i) + \lambda_n (a) \mu(A_n) \\
& = \sum_{i=2} ^n \lambda_i (a)(\mu(A_i) - \mu(A_{i-1}) ) +
\lambda_1 (a)\mu(A_1) \\
& \leq \sum_{i=2} ^n \lambda_i (b)(\mu(A_i) - \mu(A_{i-1}) ) +
\lambda_1 (b)\mu(A_1) \\
&= (C-\varphi)_{\mu} (b) \end{align*} since $\mu$ is a monotone measure. Thus $\varphi_{\mu}$ is monotone. It is clear that $\varphi_{\mu} (\alpha a) = \alpha \varphi_{\mu} (a)$ for a positive scalar $\alpha$ by the definiton and $\varphi_{\mu}$ is a unitary invariant. \end{proof} Furthermore we can replace a monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$ by a positive operator-valued monotone measure $\mu: {\mathcal B} \rightarrow B(H)^+$ for some Hilbert space $H$ , that is, \begin{enumerate} \item[$(1)$] $\mu(\emptyset) = 0$, and \item[$(2)$] For any $X,Y \in {\mathcal B} = P(\Omega)$, if $X \subset Y$, then $\mu(X) \leq \mu(Y)$. \end{enumerate} We have a similar result as follows: \begin{proposition} Let $H$ be a Hilbert space , $\mu: {\mathcal B} \rightarrow B(H)^+$ be a positive operator-valued monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$ with ${\mathcal B} = P(\Omega)$. Define $(C-\varphi)_{\mu} : (M_n({\mathbb C}))^+ \rightarrow B(H)^+$ as follows: For $a \in (M_n({\mathbb C}))^+$, let $\lambda(a) = (\lambda_1(a),\lambda_2(a),\dots, \lambda_n(a))$ be the list of the eigenvalues of $a$ in decreasing order with counting multiplicities. Let $$ (C-\varphi)_{\mu}(a) = \sum_{i=1} ^{ n-1} ( \lambda_i(a)- \lambda_{i+1}(a))\mu(A_i) + \lambda_n(a) \mu(A_n) , $$ where $A_i = \{1,2,\dots,i\}$. Then $(C-\varphi)_{\mu} $ is a a unitary invariant non-linear monotone positive map such that $(C-\varphi)_{\mu}(\alpha a) = \alpha (C-\varphi)_{\mu}(a)$ for a positive scalar $\alpha$. \end{proposition} \begin{proof} Use the similar argument as above. \end{proof}
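Since the sets $A_i=\{1,\dots,i\}$ form a fixed nested chain, the matrix Choquet integral only uses the nondecreasing values $\mu(A_1)\le \dots \le \mu(A_n)$. The following sketch (illustrative only, assuming NumPy; the chosen values of $\mu(A_i)$ and the test matrices are ours) computes $(C-\varphi)_{\mu}(a)$ from the ordered eigenvalues and checks monotonicity on one pair $a\le b$.
\begin{verbatim}
import numpy as np

def matrix_choquet(a, muA):
    # muA[i-1] = mu(A_i) for the nested chain A_1 = {1} c ... c A_n = Omega
    lam = np.sort(np.linalg.eigvalsh(a))[::-1]   # lambda_1(a) >= ... >= lambda_n(a)
    n = len(lam)
    total = 0.0
    for i in range(n):
        drop = lam[i] - (lam[i + 1] if i + 1 < n else 0.0)
        total += drop * muA[i]
    return total

muA = [0.2, 0.5, 1.0]                            # nondecreasing along the chain
a = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
b = a + np.eye(3)                                # a <= b
print(matrix_choquet(a, muA), matrix_choquet(b, muA))    # 1.4  2.4
print(matrix_choquet(a, muA) <= matrix_choquet(b, muA))  # True (mini-max principle)
\end{verbatim}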
Honda and Okazaki \cite{H-O} proposed the inclusion-exclusion integral with respect to a monotone measure, which is a generalization of the Lebesgue integral and the the Choquet integral. We can also consider a matrix version of the inclusion-exclusion integral. \begin{proposition} Let $H$ be a Hilbert space , $\mu: {\mathcal B} \rightarrow B(H)^+$ be a positive operator-valued monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$ with ${\mathcal B} = P(\Omega)$. Fix a positive number $K$. Let $(\Omega, P(\Omega),\mu, I,K)$ be an interactive monotone measure space such that the interaction operator $I$ is positive and monotone in the sense of \cite{H-O}. Define $$
(I-\varphi)_{\mu} : \{a \in (M_n({\mathbb C}))^+ \ | \ \sigma (a) \subset [0,K] \} \rightarrow B(H)^+ $$ as follows: For $a \in (M_n({\mathbb C}))^+$ with the spectrum $\sigma (a) \subset [0,K]$ , let $\lambda(a) = (\lambda_1(a),\lambda_2(a),\dots, \lambda_n(a))$ be the list of the eigenvalues of $a$ in decreasing order with counting multiplicities. Let $$ (I-\varphi)_{\mu}(a) = \sum_{A \in P(\Omega) }
(\sum_{B\supset A} (-1)^{|B\setminus A|}I(\lambda(a)|B))\mu(A). $$ Then $(I-\varphi)_{\mu} $ is a a unitary invariant non-linear monotone positive map. \end{proposition} \begin{proof} For $a, b \in (M_n({\mathbb C}))^+$ with the spectra $\sigma (a) \subset [0,K]$ and $\sigma (b) \subset [0,K]$, suppose that $0 \leq a \leq b$. By the mini-max principle for eigenvalues, we have that $\lambda_i(a) \leq \lambda_i(b)$ for $i = 1,2,\dots,n$. Since the interaction operator $I$ is monotone, $$
\sum_{B\supset A} (-1)^{|B\setminus A|}I(\lambda(a)|B)
\leq \sum_{B\supset A} (-1)^{|B\setminus A|}I(\lambda(b)|B). $$ Therefore $(I-\varphi)_{\mu}(a) \leq (I-\varphi)_{\mu}(b)$. \end{proof}
Next we recall the Sugeno integral with respect to a monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$. \begin{definition} \rm The discrete Sugeno integral of $f = (x_1, x_2,\dots, x_n) \in [0,\infty)^n$ with respect to a monotone measure $\mu$ on a finite set $\Omega = \{1,2, \dots, n\}$ is defined as follows: $$ (S)\int f d\mu = \vee_{i=1}^{n} (x_{\sigma(i) } \wedge \mu(A_i) ) , $$ where $\sigma$ is a permutaion on $\Omega$ such that $x_{\sigma(1)} \geq x_{\sigma(2)} \geq \dots \geq x_{\sigma(n)}$ , $A_i = \{\sigma(1),\sigma(2),\dots,\sigma(i)\}$ and $\vee =\max$ , $\wedge = \min$. Here we should note that $$
f = \vee_{i=1}^{n} (x_{\sigma(i) } \chi_{A_i}) $$ \end{definition} Let $A = {\mathbb C}^n$ and define $(S-\varphi)_{\mu} :( {\mathbb C}^n)^+ \rightarrow {\mathbb C}^+$ by the Sugeno integral $(S-\varphi)_{\mu}(f) = (S)\int f d\mu$. Then $(S-\varphi)_{\mu}$ is a non-linear monotone positive map such that $(S-\varphi)_{\mu}(\alpha f) = \alpha(S- \varphi)_{\mu}(f)$ for a positive scalar $\alpha$.
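Analogously to the Choquet case, the following sketch (illustrative only, assuming NumPy; the measure in the example is ours) computes the discrete Sugeno integral by replacing sums and products with $\vee$ and $\wedge$.
\begin{verbatim}
import numpy as np

def sugeno(x, mu):
    # discrete Sugeno integral of x w.r.t. a monotone measure mu on frozensets
    x = np.asarray(x, dtype=float)
    order = np.argsort(-x)
    xs = x[order]
    vals = []
    for i in range(len(x)):
        A_i = frozenset(order[: i + 1])          # A_i = {sigma(1),...,sigma(i)}
        vals.append(min(xs[i], mu(A_i)))         # x_{sigma(i)} wedge mu(A_i)
    return max(vals)                             # vee over i

mu = lambda A: (len(A) / 3) ** 0.5               # monotone, non-additive
print(sugeno([3.0, 1.0, 2.0], mu))               # 1.0
\end{verbatim}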
We shall consider a matrix version of the discrete Sugeno integral. \begin{proposition} Let $\mu: {\mathcal B} \rightarrow [0, \infty)$ be a finite monotone measure on a finite set $\Omega = \{1,2, \dots, n\}$ with ${\mathcal B} = P(\Omega)$. Let $A = M_n({\mathbb C})$ and define $(S-\varphi)_{\mu} : (M_n({\mathbb C}))^+ \rightarrow {\mathbb C}^+$ as follows: For $a \in (M_n({\mathbb C}))^+$, let $\lambda(a) = (\lambda_1(a),\lambda_2(a),\dots, \lambda_n(a))$ be the list of the eigenvalues of $a$ in decreasing order : $\lambda_1(a), \geq \lambda_2(a) \geq \dots \geq \lambda_n(a)$ with counting multiplicities. Let $$ (S-\varphi)_{\mu}(a) = \vee_{i=1}^{n} (\lambda_i(a) \wedge \mu(A_i) ) $$ where $A_i = \{1,2,\dots,i \}$. Then $(S-\varphi)_{\mu}$ is a unitary invariant non-linear monotone positive map such that $(S-\varphi)_{\mu}(\alpha a) = \alpha (S-\varphi)_{\mu}(a)$ for a positive scalar $\alpha$. \end{proposition} \begin{proof} For $a, b \in (M_n({\mathbb C}))^+$, suppose that $0 \leq a \leq b$. Since $\lambda_i(a) \leq \lambda_i(b)$ for $i = 1,2,\dots,n$, \begin{align*} (S-\varphi)_{\mu}(a) & = \vee_{i=1}^{n} (\lambda_i(a) \wedge \mu(A_i) ) \\
& \leq \vee_{i=1}^{n} (\lambda_i(b) \wedge \mu(A_i) )
= (S-\varphi)_{\mu} (b). \end{align*} Thus $\varphi_{\mu}$ is monotone. It is clear that $\varphi_{\mu} (\alpha a) = \alpha \varphi_{\mu} (a)$ for a positive scalar $\alpha$ by the definition and $\varphi_{\mu}$ is unitary invariant. \end{proof}
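A sketch of the matrix version (illustrative only, assuming NumPy; the values of $\mu(A_i)$ along the nested chain and the test matrices are ours), again built from the ordered eigenvalues, together with a check of monotonicity on one pair $a\le b$.
\begin{verbatim}
import numpy as np

def matrix_sugeno(a, muA):
    # muA[i-1] = mu(A_i) for the nested chain A_1 c ... c A_n
    lam = np.sort(np.linalg.eigvalsh(a))[::-1]
    return max(min(lam[i], muA[i]) for i in range(len(lam)))

muA = [0.5, 1.5, 2.5]
a = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0], [0.0, 0.0, 1.0]])
b = a + np.eye(3)                                     # a <= b
print(matrix_sugeno(a, muA), matrix_sugeno(b, muA))   # 1.0  2.0
print(matrix_sugeno(a, muA) <= matrix_sugeno(b, muA)) # True
\end{verbatim}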
\end{document} | arXiv |
\begin{document}
\title{ Uniform regularity and vanishing viscosity limit for the free surface Navier-Stokes
equations}
\author{Nader Masmoudi} \address{Courant Institute, 251 Mercer Street, New York, NY 10012-1185, USA,} \email{[email protected] } \author{Frederic Rousset} \address{IRMAR, Universit\'e de Rennes 1, campus de Beaulieu, 35042 Rennes cedex, France} \email{ [email protected] } \date{} \begin{abstract} We study the inviscid limit of the free boundary Navier-Stokes equations. We prove the existence of solutions on a uniform time interval by using a suitable functional framework based on Sobolev conormal spaces.
This allows us to use a strong compactness argument to justify the inviscid limit.
Our approach does not rely on the justification of asymptotic expansions.
In particular, we get a new existence result for the Euler equations
with free surface from the one
for Navier-Stokes.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
The study of fluid motions with free interfaces has attracted a lot of attention during the last thirty years.
When the fluid is viscous, that is to say for the Navier-Stokes equation, the local well-posedness for the Cauchy problem was shown
for example in the classical works by Beale
\cite{Beale83} and Tani \cite{Tani96}; we also refer to the work by Solonnikov \cite{Solonnikov} and to \cite{Guo-Tice}
for example. These results rely heavily on the smoothing effect due to the presence of viscosity.
When viscosity is neglected, it is much more difficult to control the regularity of the free surface. The first attempts to get local well-posedness
for the free-surface Euler equation
go back to the works by Nalimov \cite{Nalimov74} and H.~Yoshihara \cite{Yosihara82} for small data in two dimensions. A major
breakthrough was achieved in the nineties by S.~Wu \cite{Wu97,Wu99} who solved the irrotational problem in dimension two and three (see also the works by Craig \cite{Craig85}, Beale, Hou, and Lowengrub \cite{BHL93}, Ogawa and Tani \cite{OT02}, Schneider and Wayne \cite{SW02}, Lannes \cite{Lannes05},
Ambrose and Masmoudi \cite{AM05}, \cite{AM09}). In this
irrotational case, where dispersive effects are predominant, an almost global existence result was achieved by S. Wu \cite{Wu10} in the two-dimensional case and global existence results in three dimensions were proved recently by Germain, Masmoudi and Shatah \cite{GMS09cras,GMS11prep} and by
S.~Wu \cite{Wu11}. We also refer to \cite{Alazard} for a local well-posedness result below the standard regularity threshold. For the full system (without the irrotational assumption)
there are well-posedness results starting with the works by
Christodoulou and Lindblad \cite{CL00} and Lindblad \cite{Lindblad05}. Alternative approaches were proposed
by
Coutand and Shkoller \cite{CS07},
Shatah and Zeng \cite{SZ08} and
P.~Zhang and Z.~Zhang \cite{ZZ08}. Note that the well-posedness of the Euler equation requires a Taylor sign condition for the pressure
on the boundary.
A classical problem in fluid mechanics, since Reynolds numbers for real flows are very high,
is to study the behaviour of the solutions of the Navier-Stokes equation at high Reynolds number which corresponds
to small viscosity. A natural conjecture is of course to expect that the limit is given by a solution of the Euler equation.
Nevertheless in the presence of boundaries the situation can be more complicated as we shall describe below.
The main result of this work is the justification of the inviscid limit when we start from the free surface Navier-Stokes equation.
The study of the vanishing viscosity limit for the Navier-Stokes equation has a long history. We shall distinguish three different approaches to the problem. The first one consists in proving
that strong solutions to the Navier-Stokes equation exist on an interval of time independent of the viscosity parameter
and then passing to the limit by strong compactness arguments. The second approach is based on weak compactness arguments
starting from Leray weak solutions of the Navier-Stokes equation. Note that in these two approaches the well-posedness
of the Euler equation is not used. On the contrary, the third approach relies on the a priori knowledge that sufficiently smooth
solutions of the Euler equation exist: it consists either in comparing directly
by a modulated type energy argument a Leray weak solution of the Navier-Stokes
equation with an approximate solution whose leading order is given by a smooth solution of the Euler equation or
in directly justifying asymptotic expansions by studying a modified Navier-Stokes equation for the remainder.
When there is no boundary, the vanishing viscosity limit can be justified by using the three types of methods:
we refer to Swann \cite{Swann71} and Kato \cite{Kato72} (see also \cite{Masmoudi07cmp}) for results with the first approach,
to DiPerna and Majda \cite{DM87cmp,DM87cpam} for the second approach in the two-dimensional case
and to the book \cite{Majda-Bertozzi} for example for the third approach. Note that the second approach yields the existence
of global weak solution for the Euler equation and thus in three dimensions only results with the first and
third approach are known.
In a fixed domain with boundary, the situation is more complicated and depends on the type of boundary conditions that
we impose on the boundary.
Let us first describe the situation in the case of the homogeneous Dirichlet boundary condition.
For the first approach, the main difficulty is that for data of size of order one, all the known existence
results of strong solutions for the Navier-Stokes equation yield uniform estimates that are valid only on a time interval which shrinks to zero when
the viscosity goes to zero. This has a physical explanation: boundary layers form
in a small vicinity of the boundary. Indeed, close to the boundary, the solution $u^\varepsilon$ of the Navier-Stokes
equation, $\varepsilon$ being the viscosity parameter,
is expected to behave like
$ u^\varepsilon \sim U(t,y, z/\sqrt{\varepsilon})$,
if the boundary is locally given by $z=0$, where $U$ is some profile which is not described by the Euler equation.
Because of this small scale behaviour, one can see that there is no hope to get uniform estimates in all the spaces
for which the 3-D Euler equation is known to be well-posed ($H^s$, $s>5/2$ for example). Most of the works
on the topic are thus concentrated on the justification of the third approach: namely the construction of
a high order approximate solution (involving two spatial scales, thus of matched asymptotics type) and the control of the remainder. Nevertheless, the equation that
governs the boundary layers that appear in the formal expansion, the Prandtl equation, can be ill-posed in Sobolev spaces \cite{GD10},
thus it is not always possible to even construct an approximate solution. Moreover, even when this is possible, some
instabilities make it impossible to control the remainder on a suitable interval of time \cite{Grenier}, \cite{Guo-Nguyen}.
Consequently, in a domain with fixed boundary with homogeneous Dirichlet boundary condition, the only full result is the work by
Sammartino and Caflisch \cite{SC98a} where the asymptotic expansion is justified in the analytic framework.
There is also a result by Kato \cite{Kato86} where a necessary condition for the convergence to hold is exhibited and the work \cite{Masmoudi98arma} where the vertical viscosity is assumed to be much smaller than the horizontal one. The case of a non-homogeneous Dirichlet boundary condition,
which happens
for example when there is injection or suction on the boundary, has also been studied. In this case the size of the boundary layer is $\varepsilon$
(against $\sqrt{\varepsilon}$ in the previous case), which makes the situation very different. The construction and justification of the asymptotic expansion, even for
general hyperbolic-parabolic systems, have been widely studied recently; we refer to
\cite{Gisclon-Serre, Grenier-Gues, Grenier-Rousset, Metivier-Zumbrun,
Gues-Metivier-Williams-Zumbrun, Rousset, Temam-Wang}.
Let us finally describe the situation for the Navier-Stokes equation with a free surface. At first, we note that a minimal amount of regularity is needed in order to define the motion of the free surface; therefore, to our knowledge, there is
no suitable notion of weak solution for this equation.
Consequently, it seems that to justify the inviscid limit one can only use either the first approach or the justification
of matched asymptotic expansions. The main drawbacks of this last approach are first that a lot of regularity is needed
in order to construct the expansion and second that the a priori knowledge of the well-posedness of the free surface Euler equation is required.
From a mathematical point of view, this is not very satisfactory since the local theory for the Euler equation is much more difficult
to establish than the one for the Navier-Stokes equation. Moreover, because of the parabolic behaviour of the equation,
one does not expect to need more regularity on the data in order to solve the viscous equation than the inviscid one.
In this work, we shall thus avoid the justification of asymptotic expansions, which is what is usually done, and rely on the first approach.
In this approach, one of the main difficulties is
to prove
that the local smooth solution of the Navier-Stokes equation (which comes from the classical works
\cite{Beale83}, \cite{Tani96}) can be continued on an interval of time independent of the viscosity.
In the vicinity of the free surface the expected behaviour of the solution is
$ u^\varepsilon \sim u(t,x)+ \sqrt{\varepsilon}U(t,y, z/\sqrt{\varepsilon})$. Again, we observe that the $H^s$ norm, $s>5/2$, still cannot be bounded
on an interval of time independent of $\varepsilon$.
Nevertheless, the situation seems more favorable since one can expect that the Lipschitz norm which is the crucial quantity
for the inviscid problem remains bounded.
We shall consequently use a functional framework based on conormal Sobolev spaces which minimizes the needed
amount of normal regularity but still gives a control of the Lipschitz norm of the solution, in order to prove that
the solution of the Navier-Stokes equation exists and is uniformly bounded on an interval of time independent of $\varepsilon$.
Such an approach to the inviscid limit was recently used in our work \cite{MR11prep}
in the much simpler case of the Navier-Stokes equation with the Navier boundary condition on a fixed boundary.
As a consequence of our estimates, we shall justify the inviscid limit by strong compactness arguments.
Note that we shall also get as a corollary the existence
of a solution to the free surface Euler equation.
In particular this gives an approach by a physical regularization to the construction of solutions of the Euler equation which does not use the Nash-Moser
iteration scheme. Of course, in order to get our result, we shall need to assume the
Taylor sign condition for the pressure
on the boundary which is needed to have local well-posedness for the limit Euler system.
\subsection{The free surface Navier-Stokes equations}
Let us now write down the system that we shall study. We consider the motion of an incompressible viscous fluid at high Reynolds number submitted to the influence of gravity.
The equation for motion is thus the incompressible Navier-Stokes equation \begin{equation} \label{NS} \partial_{t} u + u \cdot\nabla u + \nabla p = \varepsilon \Delta u, \quad \nabla \cdot u = 0 , \quad x \in \Omega_{t}, \, t>0 \end{equation} where $u(t,x)\in \mathbb{R}^3$ is the velocity of the fluid and $p\in \mathbb{R}$ is the pressure.
We shall assume that the fluid domain $\Omega_{t}$ is given by \begin{equation} \label{omegat} \Omega_{t}= \big\{ x \in \mathbb{R}^3, \quad x_{3} < h(t, x_{1}, x_{2}) \big\}\end{equation}
thus the surface is given by the graph
$ \Sigma_{t}= \big\{ x \in \mathbb{R}^3, \quad x_{3} = h(t, x_{1}, x_{2}) \big\}$
where $h(t, x_{1}, x_{2})$, which defines the free surface, is also an unknown of the problem.
The parameter $\varepsilon>0$ which is the inverse of a Reynolds number is small.
In this work, we shall focus on the 3-D equation set in the simplest domain. Note that this framework is relevant
to describe for example atmospheric or oceanic motions. All our results are also valid in the two-dimensional case
and can be extended to more general domains.
On the boundary, we impose two boundary conditions. The first one is of kinematic nature and states
that fluid particles do not cross the surface:
\begin{equation}
\label{bord1}
\partial_{t} h = u\big(t, x_{1}, x_{2}, h(t, x_{1}, x_{2})\big) \cdot {\bf N}, \quad (x_{1}, x_{2}) \in \mathbb{R}^2
\end{equation}
where ${\bf N}$ is the outward normal given by
\begin{equation}
\label{Ndef} {\bf N}= \left( \begin{array}{ccc} - \partial_{1} h \\ -\partial_{2} h \\ 1 \end{array} \right).
\end{equation}
The second one expresses the continuity of the stress tensor on the surface. Assuming
that there is vacuum above the fluid, we get
\begin{equation}
\label{bord2}
p \, {\bf n}- 2 \, \varepsilon \, S u \,{\bf n} = {g}\, h\, {\bf n}, \quad x \in \Sigma_{t}
\end{equation} where $S$ is the symmetric part of the gradient $$ Su = {1 \over 2} \big( \nabla u + \nabla u^t\big),$$
${\bf n}$ is the outward unit normal (${\bf n}= {\bf N}/|{\bf N}|$) and $g$ is a positive number which measures the influence of gravity on the motion of the fluid
(in the dimensional form of the equation this would be the gravitational acceleration constant). Note that the term involving the gravity force in \eqref{NS} has been incorporated in the pressure term; this is why it appears in the right-hand side of \eqref{bord2}. We neglect surface tension effects.
At $t=0$, we supplement the system with the initial condition \begin{equation} \label{initintro} h_{/t=0} = h_{0}, \quad u_{/t=0}= u_{0}. \end{equation} Note that this means in particular that the initial domain $\Omega_{0}$ is given.
As usual in free boundary problems, we shall need to fix the fluid domain. A convenient choice in our case
is to consider a family of diffeomorphisms $\Phi(t, \cdot)$ of the form
\begin{eqnarray}
\label{diff}
\Phi(t, \cdot) : \, \mathcal{S}= \mathbb{R}^2 \times (-\infty, 0) & \rightarrow & \Omega_{t} \\
\nonumber x= (y,z) &\mapsto & (y, \varphi(t, y, z))
\end{eqnarray}
and we have to choose $\varphi(t, \cdot)$. Note that we have to ensure that $\partial_{z} \varphi>0$
in order to get that $\Phi(t, \cdot)$ is a diffeomorphism.
Once the choice of $\varphi$ is made, we reduce the problem into the fixed domain
$ \mathcal{S} $ by setting
\begin{equation}
\label{vdef}
v(t,x)= u(t, \Phi(t,x)), \quad q(t,x)= p(t, \Phi(t,x) ) \quad x \in \mathcal{S}.
\end{equation}
Many choices are possible.
A basic choice is to take $\varphi(t,y,z) = z+ \eta(t,y)$. Such a choice would fit in the inviscid case,
i.e. for the Euler equation when $\varepsilon= 0$. Indeed, the energy estimate and the physical condition yield that $h$ and $u$
have the same regularity, and hence with this choice $\Phi$ has in $\mathcal{S}$
the same regularity as $h$. In the Navier-Stokes case, the dissipation term in the energy estimate yields that
$ \sqrt{\varepsilon }\, u$ is one derivative smoother than $u$ and $h$
and hence it is natural to choose a diffeomorphism such that $\Phi$ has
the same regularity as $u$ in $\mathcal{S}$. This is not given by our previous choice since,
by using the boundary condition \eqref{bord1}, we shall only get that $ \sqrt{\varepsilon }\, h$, and hence
$ \sqrt{\varepsilon}\, \varphi$, gains $1/2$ derivative.
The easiest way
to fix this difficulty is to take a smoothing diffeomorphism
as in \cite{Schweizer05}, \cite{Lannes05}.
We thus choose $\varphi$ such that
\begin{equation}
\label{eqphi}
\varphi(t,y,z)= A z + \eta(t, y, z)
\end{equation}
where $\eta$ is given by the extension of $h$ defined by
\begin{equation}
\label{eqeta}
\hat{\eta}(\xi, z)= \chi \big(z {\xi } \big) \hat{h}(\xi)
\end{equation}
where $\hat{\cdot}$ stands for the Fourier transform with respect to the $y$ variable and
$\chi$ is a smooth, even, compactly supported function such that $ \chi= 1$ on $B(0, 1)$.
The number $A>0$ is chosen so that
\begin{equation}
\label{Adeb}
\partial_{z} \varphi(0,y,z) \geq 1, \quad \forall x \in \mathcal{S}
\end{equation}
which ensures that $\Phi(0, \cdot)$ is a diffeomorphism.
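Let us record one elementary instance of the smoothing effect of this extension (a sketch; the systematic study of the regularity of $\eta$ is carried out in section \ref{sectionprelimeta}): since $\chi$ is compactly supported, we have $\int_{-\infty}^0 |\chi(z \xi)|^2\, dz \lesssim |\xi|^{-1}$ for $\xi \neq 0$, and hence, by the Plancherel theorem,
$$ \| \nabla_{y} \eta(t, \cdot) \|_{L^2(\mathcal{S})}^2 \lesssim \int_{\mathbb{R}^2} |\xi|^2\, \big|\hat{h}(t,\xi)\big|^2 \Big( \int_{-\infty}^0 |\chi(z \xi )|^2\, dz \Big)\, d\xi \lesssim |h(t)|_{1 \over 2}^2, $$
so that $\eta$ gains one half derivative with respect to $h$ in $\mathcal{S}$; this extra half derivative is precisely the gain that the basic choice discussed above does not provide.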
Another way to reduce the problem to a fixed domain would be to use standard Lagrangian coordinates. This coordinate system also
has the advantage that $\Phi$ has the same regularity in $\mathcal{S}$ as the velocity. Nevertheless, since here the velocity
will only have conormal regularity in $\mathcal{S}$, the Lagrangian map will also have only conormal regularity.
Consequently one main advantage of the choice \eqref{eqeta} compared to Lagrangian coordinates is that we still have for
$\eta$ in $\mathcal{S}$ a standard Sobolev regularity even if $v$ only has a conormal one.
To transform the equations by using \eqref{vdef}, we
introduce the operators
$\partial_{i}^\varphi$, $i=t, \, 1, \, 2, \, 3$
such that
$$ \partial_{i}u \circ \Phi(t, \cdot)= \partial^{\varphi}_{i}v, \quad i= t, \, 1, \, 2, \, 3;
$$
the precise expression of these operators will be given later.
We thus get that by using the change of variable \eqref{diff}, the equation
\eqref{NS} for $u$, $p$ becomes an equation for $v$, $q$ in the fixed domain $\mathcal{S}$
which reads:
\begin{equation}
\label{NSv}
\partial^{\varphi}_{t}v + \big(v \cdot \nabla^\varphi \big)v + \nabla^\varphi q = \varepsilon \Delta^\varphi v =2 \varepsilon\nabla^\varphi \cdot S^\varphi v , \quad
\nabla^\varphi \cdot v = 0, \quad x \in \mathcal{S}\end{equation}
where we have naturally set
$$ \nabla^\varphi q = \left( \begin{array}{ccc} \partial^{\varphi}_{1} q \\ \partial^{\varphi}_{2} q \\ \partial^{\varphi}_{3} q \end{array} \right), \quad
\Delta^\varphi= \sum_{i=1}^3 (\partial^{\varphi}_{i})^2, \quad \nabla^\varphi\cdot v= \sum_{i=1}^3 \partial^{\varphi}_{i} v_{i}, \quad
v \cdot \nabla^\varphi= \sum_{i=1}^3 v_{i}\partial^{\varphi}_{i}$$
and $S^\varphi v= {1 \over 2}\big( \nabla^\varphi v + (\nabla^\varphi v)^t)$.
The two boundary conditions \eqref{bord1}, \eqref{bord2} become on $z=0$
\begin{eqnarray}
\label{bordv1} & & \partial_{t}h = v \cdot {\bf N}= -v_{y}(t,y,0) \cdot \nabla h(t,y) + v_{3}(t,y,0), \\
\label{bordv2} & & q\,{\bf n} - 2 \varepsilon S^\varphi v \cdot {\bf n} = gh \,{\bf n}
\end{eqnarray}
where we use the notation $v=(v_{y}, v_{3})^t$. We shall also use the notation $\nabla= (\nabla_{y}, \partial_{z})^t$.
From now on, we shall thus study in $\mathcal{S}$ the system
\eqref{NSv}, \eqref{bordv1}, \eqref{bordv2} with $\varphi$ given by \eqref{eqphi}, \eqref{eqeta}. We add to this system
the initial conditions
\begin{equation}
\label{initv}
h_{/t=0}= h_{0}, \quad v_{/t=0}= v_{0}.
\end{equation}
To measure the regularity of functions
defined in $\mathcal{S}$, we shall use Sobolev conormal spaces.
Let us introduce the vector fields
$$Z_{i}= \partial_{i}, \, i=1, \, 2, \quad Z_{3}= { z \over 1 - z } \partial_{z}.$$
We define the Sobolev conormal space $H^{m}_{co}$ as
$$ H^m_{co}(\mathcal{S})= \big\{ f \in L^2(\mathcal{S}), \quad Z^\alpha f \in L^2 (\mathcal{S}), \quad | \alpha| \leq m \big\}$$
where $ Z^\alpha = Z_{1}^{\alpha_{1}} Z_{2}^{\alpha_{2}} Z_{3}^{\alpha_{3}} $
and we set
$$ \| f \|_{m}^2 =\sum_{|\alpha| \leq m} \| Z^\alpha f \|^2_{L^2}, \quad \| f \| = \| f \|_0 = \| f \|_{L^2}.
$$
In a similar way, we set
$$ W^{m, \infty}_{co}(\mathcal{S})= \big\{ f \in L^\infty(\mathcal{S}), \quad Z^\alpha f \in L^\infty (\mathcal{S}), \quad | \alpha| \leq m \big\}$$
and
$$ \| f \|_{m, \infty} =\sum_{|\alpha| \leq m} \| Z^\alpha f \|_{L^\infty}.$$
For vector fields, we also take the sum over their components.
Note that the use of these spaces is classical in (hyperbolic) boundary value problems, see \cite{Gues,Hormander,Rauch,Tartakoff} for example.
Finally for functions defined on $\mathbb{R}^2$ (like $h$ in our problem), we use the notation $|\cdot |_{m}$ for
the standard Sobolev norm.
\subsection{Main results}
Our aim is to get a local well-posedness result for strong solutions of \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2}
which is valid on an interval of time independent of $\varepsilon$ for $\varepsilon \in (0,1]$. Note that such a result will also imply the local
existence of strong solutions for the Euler equation. As is well-known
\cite{Wu99,AM05,CS07,SZ08}, when there is no surface tension, a Taylor sign condition is needed
to get local well-posedness for the Euler equation. For the Euler equation in a domain of
the form \eqref{omegat},
the Taylor sign condition reads
\begin{equation}
\label{Taylor} -\partial_{N}p + g \geq c_{0}>0, \quad x \in \Sigma_{t}.
\end{equation}
Before stating our main result, we need to understand what kind of Taylor sign condition
(necessary in order to get a local existence result uniform with respect to $\varepsilon$) we have to impose
for the Navier-Stokes equation.
By using the divergence free condition, we get as usual that the pressure $q$ solves in $\mathcal{S}$ the elliptic equation
$$ \Delta^\varphi q = - \nabla^\varphi \cdot (v\cdot \nabla^\varphi v).$$
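Indeed, applying $\nabla^\varphi \cdot$ to \eqref{NSv} and using that the operators $\partial^{\varphi}_{i}$ commute (see \eqref{comD} below) together with the constraint $\nabla^\varphi \cdot v =0$, the time derivative and the viscous term give no contribution:
$$ \nabla^\varphi \cdot \big( \partial^{\varphi}_{t} v \big) = \partial^{\varphi}_{t}\big( \nabla^\varphi \cdot v \big) = 0, \qquad \nabla^\varphi \cdot \big( \varepsilon \Delta^\varphi v \big) = \varepsilon\, \Delta^\varphi \big( \nabla^\varphi \cdot v \big) = 0, $$
so that only the convective term remains in the right-hand side.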
Moreover, by using the boundary condition \eqref{bord2}, we get that on the boundary
$$q_{/z=0} = 2 \varepsilon S^\varphi v \,{\bf n} \cdot \,{\bf n} + gh.$$
We shall thus decompose the pressure into an "Euler" part and a "Navier-Stokes" part by setting $q= q^E + q^{NS}$ with
$$ \Delta^\varphi q^E = - \nabla^\varphi \cdot (v\cdot \nabla^\varphi v), \quad q^E_{/z=0} = gh$$
and
$$ \Delta^\varphi q^{NS} =0, \quad q^{NS}_{/z=0} = 2 \varepsilon S^\varphi v \,{\bf n} \cdot \,{\bf n}.$$
The main idea is that the part $q^{NS}$ is small when $\varepsilon$ is small, while $q^{E}$, which is of order one, is the part which should converge to the pressure of the Euler equation
when $\varepsilon$ goes to zero. Consequently, the Taylor sign condition has to be imposed on $q^E$. After the change
of coordinates, this becomes
\begin{equation}
\label{Taylor2}
g- \partial_{z}^\varphi q^E_{/z=0} \geq c_{0}>0.
\end{equation}
Note that since we shall indeed prove that the part $q^{NS}$ of the pressure is small when $\varepsilon$ is small, we shall actually obtain
that for $\varepsilon$ sufficiently small, the total pressure verifies the Taylor condition.
Finally, let us introduce a last notation: we denote by $\Pi= {Id} - {\bf n}\otimes {\bf n}$ the projection on the tangent space of the boundary.
Our main result reads:
\begin{theoreme}
\label{main} For $m \geq 6$, consider initial data $(h_{0}^\varepsilon, v_{0}^\varepsilon)$ such that \begin{equation} \label{hypinit} \sup_{\varepsilon \in (0, 1]} \Big(
|h_{0}^\varepsilon|_{m}+ \varepsilon^{1\over 2} |h_{0}^\varepsilon|_{m+{1\over 2}}
+ \|v_{0}^\varepsilon\|_{m} + \| \partial_{z} v_{0}^{\varepsilon}\|_{m-1} + \| \partial_{z} v_{0}^\varepsilon\|_{1, \infty} + \varepsilon^{1 \over 2} \|\partial_{zz}^2 v_{0}^\varepsilon \|_{L^\infty} \Big) \leq R_{0}, \end{equation}
and assume that the Taylor sign condition \eqref{Taylor2} is verified
and that the compatibility condition
$ \Pi S^\varphi v_{0}^\varepsilon {\bf n}_{/z=0}= 0$ holds.
Then, there exists $T>0$ and $C>0$ such that for every $\varepsilon\in (0, 1]$, there exists a unique solution $(v^\varepsilon, h^\varepsilon)$ of \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2}, \eqref{eqphi}, \eqref{eqeta} which is
defined on $[0, T]$ and satisfies the estimate: \begin{equation}
\label{mainborne1}\sup_{[0, T]} \big(\|v^\varepsilon\|_{m}^2 + |h^\varepsilon|_{m}^2 + \|\partial_{z}v^\varepsilon\|_{m-2}^2 + \|\partial_{z} v^\varepsilon\|_{1, \infty}^2 \big) + \| \partial_{z} v^\varepsilon\|_{L^4([0,T], H^{m-1}_{co})}^2\leq C.\end{equation} Moreover, we also have the estimate \begin{equation}
\label{mainborne2} \sup_{[0, T]}\big( \varepsilon |h^\varepsilon|_{m+{1 \over 2}}^2 + \varepsilon \|\partial_{zz}v^\varepsilon\|_{L^\infty}^2\big) + \varepsilon \int_{0}^T\big( \| \nabla v^\varepsilon \|_{m}^2+
\|\nabla \partial_{z}v^\varepsilon\|_{m-2}^2\big) \leq C. \end{equation} \end{theoreme}
Note that in the above result, we have separated
the estimates \eqref{mainborne1} that are independent of $\varepsilon$ from the ones in
\eqref{mainborne2} that depend on $\varepsilon$ and
are useful to get a closed estimate for the Navier-Stokes case.
This is why we have stated an estimate of $\|\partial_{z} v^\varepsilon\|_{m-1}$
which is not pointwise in time but only $L^4$. The reason why we do not expect $\sup_{[0,T]}\|\partial_{z} v^\varepsilon\|_{m-1} $
to be uniformly bounded with respect to $\varepsilon$ will be explained below. It is related to the boundary control of
the vorticity for the Navier-Stokes system.
Nevertheless, we point out that there is no loss
of regularity: we have that $\partial_{z} v^\varepsilon (t, \cdot)\in H^{m-1}_{co}$ for every $t \in [0, T]$, with bounds that may blow up as $\varepsilon$ tends to zero.
We shall also provide estimates for the pressure during the proof.
Note that the uniform existence time $T$ is a priori also limited by the validity of the Taylor sign condition \eqref{Taylor2}.
For the Euler equation, it was pointed out by S. Wu \cite{Wu97} that in our infinite depth framework with
zero vorticity in the bulk,
a maximum principle applied to the equation for the pressure yields that the Taylor sign condition always holds.
Nevertheless, this argument breaks down when vorticity is not zero. Finally, let us point out that the compatibility condition $ \Pi S^\varphi v_{0}^\varepsilon {\bf n}_{/z=0}= 0$ that we assume at the initial time
is exactly the same as (1.8) in \cite{Beale83}.
The main part of the paper will be devoted to the proof of Theorem \ref{main}. We shall explain the main steps and the main difficulties
below. We immediately see
that, by using the compactness provided by the uniform estimates of Theorem \ref{main}, we shall easily get as a corollary
the justification of the inviscid limit and the existence of a solution to the limit
Euler system:
\begin{theoreme}
\label{theoinviscid}
Under the assumptions of Theorem \ref{main}, assume in addition
that there exists $(h_{0}, v_{0})$ such that
\begin{equation}
\label{hypinviscid} \lim_{\varepsilon \rightarrow 0} \|v^\varepsilon_{0} - v_{0}\|_{L^2(\mathcal{S})} + |h^\varepsilon_{0} - h_{0} |_{L^2(\mathbb{R}^2)}= 0. \end{equation}
Then there exists $(h(t,x), v(t,x))$ with $ Z^\alpha \nabla v \in L^{\infty}([0, T] \times \mathcal{S})$, $| \alpha | \leq 1$ and
$$v \in L^\infty([0, T], H^m_{co}(\mathcal{S})), \,
\partial_{z}v \in L^\infty([0, T], H^{m-2}_{co}(\mathcal{S})), \,
h \in L^\infty([0, T], H^m(\mathbb{R}^2))$$
such that
$$ \lim_{\varepsilon \rightarrow 0} \sup_{[0,T]}\Big( \|v^\varepsilon -v\|_{L^2(\mathcal{S})} + \| v^\varepsilon -v\|_{L^\infty(\mathcal{S})} +
|h^\varepsilon -h |_{L^2(\mathbb{R}^2)} + |h^\varepsilon -h |_{W^{1, \infty}(\mathbb{R}^2)}\Big)= 0 $$ and which is the unique solution to the free surface Euler equation
in the sense that
\begin{equation}
\label{eulerint} \partial^{\varphi}_{t}v + \big(v \cdot \nabla^\varphi \big)v + \nabla^\varphi q= 0, \quad \nabla^\varphi \cdot v= 0, \quad x \in \mathcal{S}
\end{equation}
with the boundary conditions
\begin{equation}
\label{eulerb}\partial_{t}h= v \cdot {\bf N} \quad \hbox{and} \quad q= gh\end{equation}
at $z=0$, $\varphi$ being still defined by \eqref{eqphi}, \eqref{eqeta}.
\end{theoreme}
We thus provide the justification of the inviscid limit in $L^2$ and $L^\infty$ norms. The convergence in higher norms (conormal for $v$,
standard Sobolev for $h$) follows by interpolation and the uniform estimate \eqref{mainborne1}. Note that we do not obtain by compactness the convergence
of $v^\varepsilon$ in Lipschitz norm. This is expected since in that norm, the boundary layer profile cannot be neglected when we pass to the limit.
The above result also provides a new existence and uniqueness result for solutions (with minimal normal regularity of the velocity) to the free surface Euler system.
Note that by using the equation for the vorticity, one can easily propagate higher normal regularity and thus recover
the results of \cite{Lindblad05,SZ08,ZZ08}. In particular, we get that
$ \partial_{z}v \in L^\infty([0, T], H^{m-1}_{co}(\mathcal{S}))$.
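Let us recall the equation in question: for the limit system \eqref{eulerint}, the vorticity $\omega = \nabla^\varphi \times v$ solves (as follows from composing the usual vorticity equation with the diffeomorphism $\Phi$) the transport equation
$$ \partial^{\varphi}_{t} \omega + \big( v \cdot \nabla^\varphi \big) \omega = \big( \omega \cdot \nabla^\varphi \big) v, \quad x \in \mathcal{S}, $$
and, because of the kinematic boundary condition in \eqref{eulerb}, the boundary $\{z=0\}$ is characteristic for the transport operator $\partial^{\varphi}_{t} + v \cdot \nabla^\varphi$, so that conormal estimates for $\omega$ can be performed without any boundary condition; this is how higher normal regularity of $v$ can be propagated.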
\subsection{Sketch of the proof and organization of the paper}
In order to prove Theorem \ref{main}, since classical local existence results of smooth solutions are available in the literature
\cite{Beale81,Tani96}, the main difficulty is to get a priori estimates
on a time interval small but independent of $\varepsilon$, in terms of the initial data only, of the quantities that appear in \eqref{mainborne1}, \eqref{mainborne2}, for a sufficiently smooth solution of the equation. For notational convenience, we shall suppress the subscript $\varepsilon$ in the proof
of the a priori estimates
and simply denote $(v^\varepsilon, h^\varepsilon)$ by $(v,h)$. Let us define:
\begin{align*}
\mathcal{N}_{m, T}(v,h) & = \sup_{[0, T]}\big( \|v(t) \|_{m}^2 + \|\partial_{z}v\|_{m-2}^2 + |h(t)|_{m}^2+ \varepsilon |h(t)|_{m+{1 \over 2}}^2
+ \varepsilon \|\partial_{zz}v(t)\|_{L^\infty}^2 + \|\partial_{z} v(t)\|_{1, \infty}^2 \big) \\
\nonumber & \quad \quad \quad + \|\partial_{z}v\|_{L^4([0,T], H^{m-1}_{co})}^2
+ \varepsilon \int_{0}^T \|\nabla v\|_{m}^2 + \varepsilon \int_{0}^T \|\nabla \partial_{z}v\|_{m-2}^2.
\end{align*}
The crucial point is thus to get a closed control of the above quantity on a sufficiently small
time interval which is independent of $\varepsilon$.
The main part of the paper will be devoted to these a priori estimates.
Many steps will be needed in order to get the result.
\subsubsection*{Step 1: Estimates of $v$ and $h$}
The first step will be to estimate $Z^\alpha v$ and $Z^\alpha h$ for
$0 \leq |\alpha| \leq m$. When $\alpha=0$, the estimate is a consequence of the energy identity for the free surface Navier-Stokes
equation, which reads
$$ {d \over dt} \Big( \int_{\mathcal{S}} |v|^2 \, d\mathcal{V}_{t} + g \int_{z=0} |h|^2\, dy \Big) + 4 \varepsilon \int_{\mathcal{S}} |S^\varphi v|^2\, d\mathcal{V}_{t}=0.$$
Here $d\mathcal{V}_{t}$ stands for the natural volume element induced by the change of variable \eqref{eqphi}: $d\mathcal{V}_{t}=\partial_{z}\varphi(t,y,z)\, dydz$.
The difficulty in getting estimates for higher order derivatives is that the coefficients (which depend on $h$) in the equation \eqref{NSv} are not smooth enough
(even with the use of the smoothing diffeomorphism that we have taken) to control
the commutators in the usual way.
For example, for the transport term which reads:
$$ \partial_{t}^\varphi + v \cdot \nabla^\varphi = \partial_{t} + v_{y}\partial_{y} + { 1 \over \partial_{z} \varphi}\big( v\cdot N - \partial_{t} \eta)\partial_{z},
\quad N= (-\partial_{1}\varphi, -\partial_{2} \varphi, 1)^t, $$
the commutator between $Z^\alpha$ and this term in the equation involves in particular the term
$ (v \cdot Z^\alpha N)\partial_{z}v$ which can be estimated only with the help of $\|Z^\alpha N\|_{L^2} \sim |h|_{m+{1 \over 2}}$.
This yields
a loss of $1/2$ derivative.
We also get similar problems when we compute for the pressure term the commutator between $Z^\alpha$ and $\nabla^\varphi q$.
This difficulty was solved by Alinhac in \cite{Alinhac}. The main idea is that some cancellation occurs
when we use the good unknown
$V^\alpha= Z^\alpha v - {\partial_{z}^\varphi v} Z^\alpha \eta$. Indeed, let us write our equation under the abstract form $$ \mathcal{N}(v,q, \varphi)= \partial^{\varphi}_{t}v + \big(v \cdot \nabla^\varphi \big)v + \nabla^\varphi q -2\varepsilon \nabla^\varphi \cdot \big( S^\varphi v\big).$$ Then, if $ \mathcal{N}(v,q, \varphi )=0, $ the linearized equation can be written as
\begin{eqnarray*}
& & D\mathcal{N}(v, q, \varphi) \cdot(\dot v, \dot q, \dot \varphi) = \\
& & \quad \quad \quad \quad \big( \partial_{t}^\varphi +( v \cdot \nabla^\varphi)- 2 \varepsilon \nabla^\varphi \cdot \big( S^\varphi \cdot \big) \big)\big(\dot v- \partial^{\varphi}_{z} v \, \dot \varphi \big)
+ \nabla^\varphi\big( \dot q - \partial^{\varphi}_{z} q \, \dot \varphi\big) \\
& & \quad \quad \quad \quad \quad \quad \quad \quad + \big( \dot v \cdot \nabla^\varphi\big) v - \dot \varphi (\partial^{\varphi}_{z} v \cdot \nabla^\varphi) v.
\end{eqnarray*} This means that, thanks to the introduction of the good unknown, the fully linearized equation has the same structure as the equation linearized with respect to the $v$ variable only. The justification of this identity will be recalled in
section \ref{prelim}.
By using this crucial remark, we get that the equation for $(Z^\alpha v, Z^\alpha q, Z^\alpha \eta)$ can be written as $$ \partial^{\varphi}_{t} V^\alpha + v \cdot \nabla^\varphi V^\alpha + \nabla^\varphi Q^\alpha- 2 \varepsilon \nabla^\varphi \cdot S^\varphi V^\alpha
=l.o.t.$$
with $V^\alpha = Z^\alpha v - {\partial_{z}^\varphi v} Z^\alpha \eta$, $Q^\alpha= Z^\alpha q - {\partial_{z}^\varphi q} Z^\alpha \eta$
and $l.o.t.$ means lower order terms with respect to $v$ and $h$ that can be controlled by $ \mathcal{N}_{m, T}(v,h) $.
Consequently, we can perform an $L^2$ type energy estimate for this equation.
By using energy estimates, we shall get an identity under the form
$$ {d \over dt }{1 \over 2} \int_{\mathcal{S}} |V^\alpha|^2 \, d\mathcal{V}_{t} + {d \over dt }{1 \over 2} \int_{z=0} (g - \partial_{z}^\varphi q^E) |Z^\alpha h|^2
= \cdots$$
which yields a good control of the regularity of the surface only if the sign condition $g-\partial_{z}^\varphi q^E \geq c_{0}>0$
is satisfied.
The main conclusion of this step will be that
\begin{equation} \label{step1est}
\big\|\big(Z^m v - \partial_{z}^\varphi v Z^m \eta\big)(t)\big\|^2 + |h(t)|_{m}^2 \leq C_{0}+ t \Lambda(R)+ \int_{0}^t \|\partial_{z}v\|_{m-1}^2 \end{equation}
where $C_{0}$ depends only on the initial data and $\Lambda$ is some continuous increasing function
in all its arguments (independent of $\varepsilon$)
as soon as
$$ Q_{m}(t) = \|v\|_{m}^2 + |h|_{m}^2 + \|\partial_{z}v\|_{m-2}^2 + \|v\|_{2, \infty}^2 + \|\partial_{z}v\|_{1, \infty}^2 + \varepsilon \|\partial_{zz}v\|_{L^\infty}^2 \leq R$$
for $t \in [0, T^\varepsilon]$.
\subsubsection*{Step 2: Normal derivative estimates I} In order to close the argument, we need estimates for
$\partial_{z}v$. We shall first estimate $\|\partial_{z}v \|_{L^\infty_{t}(H^{m-2}_{co})}$.
This is not sufficient to control the right-hand side in \eqref{step1est}, but this will be important in order to get the $L^\infty$ estimates.
The main idea is to use the equivalent quantity
$$ S_{N}= \Pi S^\varphi v\, N$$
which vanishes on the boundary.
This allows us to perform conormal estimates on the convection-diffusion type equation with homogeneous Dirichlet boundary condition
satisfied by $S_{N}$.
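Let us briefly indicate why $S_{N}$ vanishes on the boundary: the right-hand side of the boundary condition \eqref{bordv2} is purely normal, so that applying the tangential projection $\Pi$ gives
$$ \Pi \big( 2\, \varepsilon\, S^\varphi v \, {\bf n} \big)_{/z=0} = \Pi \big( (q - g h)\, {\bf n} \big)_{/z=0} = 0, $$
and since ${\bf N}$ is parallel to ${\bf n}$, we indeed get $S_{N} = \Pi S^\varphi v\, {\bf N} = 0$ on $\{z=0\}$ for every $\varepsilon>0$.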
This yields again an estimate under the form
$$ \|\partial_{z}v(t)\|_{m-2}^2 \leq C_{0}+ t \Lambda(R)+ \int_{0}^t \|\partial_{z}v\|_{m-1}^2.$$
\subsubsection*{Step 3: $L^\infty$ estimates}
We also have to estimate the $L^\infty$ norms that occur in the definition of $Q_{m}$. Note that we cannot use the Sobolev embedding (as is classically done) since the conormal spaces do not control normal derivatives.
The estimate of $\|v\|_{2, \infty}$ is a consequence of an anisotropic Sobolev estimate and
thus, the difficult part is to estimate $\|\partial_{z}v\|_{1, \infty}$. Again, it is more convenient to estimate
the equivalent quantity $\| S_{N}\|_{1,\infty}$ since $S_{N}$ solves
a convection diffusion equation with homogeneous boundary condition. The estimate of $\|S_{N}\|_{L^\infty}$
is a consequence of the maximum principle for this equation. The control
of $\| Z_{i}S_{N}\|_{L^\infty}$
is more difficult. The main reason is that a crude estimate of the commutator between $Z_{i}$
and the variable coefficient operator $\Delta^\varphi$ involves terms with more normal derivatives: two normal derivatives of $S_{N}$
and hence three normal derivatives of $v$. To overcome
this difficulty, we note that at this step, the regularity of the surface
is not really a problem: we want to estimate a fixed number of derivatives of $v$ in $L^\infty$ while
$m$ can be considered as large as we need. Consequently, the idea is to change the coordinate system
into a normal geodesic one in order to get the simplest possible expression for the Laplacian.
By neglecting all the terms that can be estimated by the previous steps, we get a simple
equation
under the form
$$ \partial_{t} \tilde S_{N} + z \partial_{z}w_{3}(t,y,0) \partial_{z}
\tilde S_{N} + w_{y}(t,y,0) \cdot \nabla_{y} \tilde S_{N} - \varepsilon \partial_{zz} \tilde S_{N} =
l.o.t$$
where $\tilde{S}_{N}$ stands for $S_{N}$ expressed in the new coordinate system and $w$ is the vector field that we obtain from
$v$ by the change of variable.
This is a one-dimensional Fokker-Planck type equation (with an additional drift term in the tangential direction that can be eliminated
by using Lagrangian coordinates in this direction) for which the Green function is explicit
and hence, we can use it to estimate $\|Z_{i}\tilde S_{N}\|_{L^\infty}$.
Again the conclusion of this step is an estimate of the form
$$ \|\partial_{z} v\|_{1, \infty}^2+ \varepsilon \|\partial_{zz}v\|_{L^\infty}^2 \leq C_{0}+t \Lambda(R) + \int_{0}^t \|\partial_{z}v\|_{m-1}^2.$$
\subsubsection*{Step 4: Normal derivative estimates II}
In order to close our estimate, we still need to estimate
$ \| \partial_{z}v\|_{m-1}$. For this estimate, it does not seem to be a good idea to use $S_{N}$ as an equivalent quantity for $\partial_{z}v$.
Indeed, the equation for $Z^{m-1}S_{N}$ involves $Z^{m-1} D^2 q$ as a source term and we note that since the Euler part of the pressure
involves a harmonic function that verifies $q^E =gh$ on the boundary, we have that
$$ Z^{m-1}D^2 q^E \sim Z^{m-1} D^{3\over 2} h \sim |h|_{m+{1 \over 2}}$$
and hence we do not have enough regularity of the surface. To get a better estimate,
it is natural to try to use the vorticity $\omega= \nabla^\varphi \times v$ in order to eliminate the pressure. We shall get that
indeed $Z^{m-1} \omega$ solves an equation under the form
$$\partial_{t} Z^{m-1}\omega + V \cdot \nabla Z^{m-1}\omega - \varepsilon \Delta^\varphi Z^{m-1} \omega= l.o.t$$
Nevertheless, while for the Euler equation the vorticity, which solves a transport equation with a
characteristic boundary, is easy to estimate, for the Navier-Stokes system
in a domain with boundaries it is much more difficult to control.
The difficulty in the case of the Navier-Stokes system is that we need an estimate of the value
of the vorticity at the boundary in order to estimate it in the interior.
Since on the boundary we have roughly $Z^{m-1}\omega \sim Z^m v + Z^m h$,
we only have by using a trace estimate
a (uniform) control by known quantities (and in particular the energy dissipation of the Navier-Stokes equation) of
$$ \sqrt{\varepsilon}\int_{0}^t |Z^{m-1} \omega_{/z=0}|_{L^2(\mathbb{R}^2)}^2.$$
In this case, by a simple computation on the heat equation which is given in section \ref{sectionheat},
we see that the best estimate that we can expect, when we study the problem with zero initial datum
$$ \partial_{t} f - \varepsilon \Delta f= 0, \, z<0, \quad f_{/z=0}= f^b$$ and with a boundary data $f^b$ satisfying an estimate as above, is
$$ \int_{0}^{+ \infty} e^{-2\gamma t } \big\|( \gamma + |\partial_{t}|)^{1\over 4} f \big\|_{L^2(\mathcal{S})}^2 \, dt
\leq \sqrt{\varepsilon} \int_{0}^{+ \infty} e^{-2 \gamma t}|f^b|_{L^2(\mathbb{R}^2)}^2 \, dt\leq C.$$ Consequently, we see that we get a control of $f$ in $H^{1\over 4}((0, T), L^2)$, which gives by Sobolev embedding an estimate of $f$
in $L^4([0, T], L^2(\mathcal{S}))$ only.
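Let us sketch, in the simplest one-dimensional situation $\partial_{t} f - \varepsilon \partial_{zz} f = 0$, $z<0$, $f_{/z=0}= f^b$, $f_{/t=0}=0$ (taking the tangential variables into account only reinforces the decay in $z$), where this gain of $1/4$ of a derivative in time comes from. Taking the Fourier transform in time of $e^{-\gamma t} f$ (dual variable $\tau$), the bounded solution is $\hat{f}(\tau, z)= e^{z \sqrt{(\gamma + i \tau)/\varepsilon}}\, \hat{f^b}(\tau)$, where the square root with positive real part is chosen, and hence
$$ \int_{-\infty}^0 \big| \hat{f}(\tau, z) \big|^2\, dz = \frac{\big|\hat{f^b}(\tau)\big|^2}{2\, {\rm Re}\, \sqrt{(\gamma + i \tau)/\varepsilon}} \lesssim \frac{\sqrt{\varepsilon}}{\big( \gamma^2 + \tau^2 \big)^{1 \over 4}}\, \big|\hat{f^b}(\tau)\big|^2. $$
Multiplying by $(\gamma^2 + \tau^2)^{1\over 4}$, integrating in $\tau$ and using the Plancherel theorem essentially yields the estimate stated above (the complete computation is the one given in section \ref{sectionheat}).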
Motivated by this result on the heat equation,
we shall get an estimate of $\|Z^{m-1}\omega\|_{L^4((0,T), L^2)}$.
Note that the transport term in the equation has an important effect. Indeed, in the previous example of the heat equation,
if we add a constant drift $c \cdot \nabla f $ in the equation, we obtain a smoothing effect under the form
$$ \int_{0}^{+ \infty} e^{-2\gamma t } \big\|( \gamma + |\partial_{t}+c \cdot \nabla |)^{1\over 4} f \big\|^2 \, dt.$$
In order to estimate $\|Z^{m-1}\omega\|_{L^4((0,T), L^2)}$, we shall consequently
first switch into Lagrangian coordinates in order to eliminate the transport term and hence look for an estimate of
$ \|(Z^{m-1} \omega ) \circ X\|_{H^{1 \over 4}([0,T], L^2)}$ since $(Z^{m-1}\omega)\circ X$ solves a purely parabolic
equation. Note that the price to pay is that the parabolic operator that we get after the change of variable has coefficients with limited
uniform regularity (in particular with respect to the time and normal variables). To get this estimate, we use a microlocal symmetrizer
based on a "partially" semiclassical paradifferential calculus i.e. based on the weight $(\gamma^2 + |\tau|^2 + |\sqrt{\varepsilon}\, \xi|^4)^{1 \over 4})$.
The use of paradifferential calculus in place of pseudodifferential one is needed because of the only low regularity uniform estimates of the coefficients that are known.
The main properties of this calculus can be seen as a consequence of the general quasi-homogeneous calculus studied in \cite{Metivier-Zumbrun}
and are recalled at
the end of the paper.
By Sobolev embedding this yields an estimate of $\|(Z^{m-1} \omega ) \circ X\|_{L^{4}([0,T], L^2)}$
and thus of $\|Z^{m-1} \omega\|_{L^{4}([0,T], L^2)}$ by a change of variable.
This finally allows us to get an estimate of $\|Z^{m-1}\partial_{z} v\|_{L^4((0,T), L^2)}$.
The estimate of $ \mathcal{N}_{m, T}(v,h)$ on some uniform time interval follows by combining the estimates of the four steps. Note that at
the end, we also have to check that
the Taylor sign condition and the condition that $\Phi(t, \cdot)$ is a diffeomorphism remain true.
\subsubsection*{Organisation of the paper}
The paper is organized as follows. In section \ref{prelim}, we recall the main properties of Sobolev conormal spaces, in particular, the product laws,
embeddings and trace estimates that we shall use. We also state some geometric identities that are linked to the change of variable
\eqref{id0} and the Korn inequality that we shall use to control the energy dissipation term in \eqref{NSv}.
In section \ref{sectionprelimeta}, we study the regularity properties of $\eta$ given by \eqref{eqeta}.
Next, the main part of the paper is devoted to the proof of the a priori estimates leading to the proof of Theorem \ref{main}.
We first recall the energy identity for the Navier-Stokes equation \eqref{NSv} in section \ref{sectionbasic}.
The aim of the next three sections is to get the estimates of higher order conormal derivatives.
In section \ref{sectionconorm}, we study the equations satisfied by $(Z^\alpha v, Z^\alpha h)$ and prove the estimates for
lower order commutators and boundary values. In section \ref{sectionpressure}, we prove the needed estimates for the pressure using elliptic regularity in conormal spaces
and in section \ref{sectionconorm2}, we get the desired estimates for $\|v\|_{m}$ and $|h|_{m}.$
Next, in section \ref{sectionnorm1}, we start the study of the estimates for normal derivatives and prove an estimate
for $\|\partial_{z} v \|_{m-2}$. In section \ref{sectionLinfty}, we prove the needed $L^\infty$ estimates.
Finally, in section \ref{sectionnorm2}, we prove the estimate for
$\|\partial_{z} v \|_{L^4 ([0,T], H^{m-1}_{co})}$.
The results of paradifferential calculus that are needed for this section are recalled in section \ref{sectionparadiff}.
The proof of the existence
part of Theorem \ref{main} is given in section \ref{sectionexist} and the uniqueness part is briefly explained in
section \ref{sectionunique}. The proof of Theorem \ref{theoinviscid} is given in section \ref{sectioninviscid} and section \ref{sectiontech} is devoted to the proof of some technical lemmas.
Throughout the paper, the notation $\Lambda(\cdot, \cdot)$ stands for a continuous increasing function of all its arguments, independent of $\varepsilon$, which may change
from line to line.
\section{Functional framework and geometric identities}
\label{prelim}
\subsection{Some properties of Sobolev conormal spaces}
We have already recalled the notations that we shall use for Sobolev conormal spaces.
It will be convenient to also use the following notation. We set for $m \geq 1$:
$$ E^m= \big\{ f \in H^m_{co}, \quad \partial_{z}f \in H^{m- 1}_{co}\big\}, \quad E^{m, \infty}= \big\{ f \in W^{m, \infty}_{co}, \quad \partial_{z}f \in W^{m-1, \infty}_{co}\big\}$$
and we define the norms:
$$ \|f\|_{E^m}^2= \|f\|_{m}^2 + \|\partial_{z} f \|_{m-1}^2, \quad \|f\|_{E^{m,\infty}}= \|f\|_{m, \infty}+ \|\partial_{z} f \|_{m-1, \infty}.$$
We shall also use Sobolev tangential spaces defined for $s\in \mathbb{R}$ by
$$ H^s_{tan}(\mathcal{S})= \big\{ f \in L^2(\mathcal{S}), \quad \Lambda^s f \in L^2 (\mathcal{S})\big\}$$
where $\Lambda^s$ is the tangential Fourier multiplier by $\big(1+ |\xi|^2 \big)^{s \over 2}$ i.e.
$\mathcal{F}_{y} \big( \Lambda^s f \big)(\xi, z)= \big(1+ |\xi|^2 \big)^{s \over 2} \mathcal{F}_{y}( f) (\xi, z)$
where $\mathcal{F}_{y}$ is the partial Fourier transform in the $y$ variable. We set
$$ \|f \|_{H^{s}_{tan}}= \| \Lambda^s f \|_{L^2}.$$
Note that we have
$$ \| f\|_{H^s_{tan}} \lesssim \| f\|_{m}$$
for $m \in \mathbb{N}$, $m \geq s$.
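Note that the vector fields $Z_{i}$ commute with each other, but not with $\partial_{z}$; a direct computation gives
$$ [Z_{3}, \partial_{z}] = - \frac{1}{(1-z)^2}\, \partial_{z}, $$
and, since $1-z \geq 1$ on $\mathcal{S}$, one checks by induction that for every $\alpha$ the commutator $[Z^\alpha, \partial_{z}]$ can be written as a combination of terms of the form $c_{\beta}(z)\, \partial_{z} Z^{\beta}$ with $|\beta| \leq |\alpha|-1$ and coefficients $c_{\beta}$ bounded together with their derivatives.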
In this paper, we shall use repeatedly the following Gagliardo-Nirenberg-Moser type properties of the Sobolev conormal spaces:
\begin{prop}
\label{sob}
We have the following product and commutator estimates:
\begin{itemize}
\item
For $u, \,v\in L^ \infty \cap H^k_{co}$, $k \geq 0$:
\begin{equation}
\label{gues}
\| Z^{\alpha_{1}} u\, Z^{\alpha_{2}} v \| \lesssim \|u\|_{L^\infty} \, \|v\|_{k} + \|v\|_{L^\infty} \|u\|_{k}, \quad
|\alpha_{1}| + |\alpha_{2}|=k.
\end{equation}
\item For $ 1 \leq | \alpha | \leq k $, $g \in H^{k-1}_{co} \cap L^\infty$, $f \in H^k_{co}$ such
that $Zf \in L^\infty$, we have
\begin{equation}
\label{com}
\| [Z^\alpha, f]g \| \lesssim \|Zf\|_{k-1} \| g \|_{L^\infty} + \|Z f \|_{L^\infty} \|g \|_{k-1}.
\end{equation}
\item For $| \alpha |=k \geq 2$, we define the symmetric commutator $[Z^\alpha, f, g]= Z^\alpha (f g) - Z^\alpha f \, g - f Z^\alpha \,g$.
Then, we have the estimate
\begin{equation}
\label{comsym} \| [Z^\alpha, f, g] \| \lesssim \|Zf \|_{L^\infty}\|Zg\|_{k-2} + \|Z g \|_{L^\infty} \|Z f \|_{k-2}.
\end{equation}
\end{itemize}
\end{prop}
\begin{proof}
The proof of \eqref{gues} is classical and can be found for example in \cite{Gues}. The commutator estimates \eqref{com}, \eqref{comsym}
follow from \eqref{gues} and the Leibniz formula. Indeed, we have
$$[Z^\alpha, f]g = \sum_{ \tiny{ \begin{array}{ll}\beta + \gamma = \alpha, \\
\beta \neq 0 \end{array} } } C_{\beta, \gamma} Z^\beta f Z^\gamma g$$
and hence since $\beta \neq 0$, we can write $Z^\beta = Z^{\tilde \beta} Z_{i}$, with $ |\tilde \beta |= | \beta |-1, $ to get that
$$ \|Z^{\tilde \beta} Z_{i}f\, Z^\gamma g \| \lesssim \|Zf\|_{L^\infty} \|g\|_{k-1} + \|g\|_{L^\infty} \|Zf\|_{k-1}$$
thanks to \eqref{gues}. The proof of \eqref{comsym} can be obtained by a similar argument.
\end{proof}
We shall also need embedding and trace estimates for these spaces:
\begin{prop} \label{proptrace} \begin{itemize}
\item For $s_{1}\geq 0 $, $s_{2} \geq 0$ such that $s_{1}+ s_{2} >2$ and $f$ such that $f \in H^{s_{1}}_{tan}$, $\partial_{z} f \in H^{s_{2}}_{tan}$, we have the anisotropic Sobolev embedding:
\begin{equation}
\label{emb}
\|f \|_{L^\infty}^2 \lesssim \|\partial_{z}f \|_{H^{s_{2}}_{tan}} \, \|f\|_{H^{s_{1}}_{tan}}.
\end{equation}
\item For $ f \in H^{1 }(\mathcal{S})$, we have the trace estimates:
\begin{equation}
\label{trace}
|f(\cdot, 0)|_{H^s(\mathbb{R}^2)} \leq C \| \partial_{z} f \|^{1\over 2}_{H^{s_{2}}_{tan}}\, \|f\|^{1\over 2}_{H^{s_{1}}_{tan}},
\end{equation}
with $s_{1}+ s_{2}= 2 s \geq 0$.
\end{itemize}
\end{prop}
As a consequence of \eqref{emb}, we shall use very often that:
\begin{rem}
\label{remLinfty}
For $k \geq 5$, we have:
$$ \| f\|_{2, \infty}^2 \lesssim \|\partial_{z} f\|_{k-2} \, \| f\|_{k}.$$
\end{rem}
\begin{proof}
To get \eqref{emb}, we first note that, thanks to the one-dimensional Sobolev estimate, we have
$$ |\hat f (\xi, z) |^2 \lesssim \int_{-\infty}^0 |\partial_{z} \hat f(\xi, z' ) |\, | \hat f (\xi, z')| \, dz' $$
and hence, we obtain from Cauchy-Schwarz and the fact that $s_{1}+ s_{2}>2$ that
$$ \|f\|_{L^\infty} \leq \sup_{z}\int_{\xi}| \hat f(\xi, z)| \, d\xi \lesssim\Big( \int_{\mathbb{R}^2}\big( 1 + |\xi |\big)^{s_{1}+ s_{2}}
\int_{-\infty}^0 |\partial_{z} \hat f(\xi, z ) |\, | \hat f (\xi, z)| \, dz \, d \xi \Big)^{1 \over 2 } \lesssim \| \partial_{z} \Lambda^{s_{2}} f \|^{1\over 2} \, \| \Lambda^{s_{1}} f\|^{1 \over 2}.$$
The trace estimates \eqref{trace} are also elementary in $\mathcal{S}$. To get the estimate, it suffices
to write that
$$ |f(\cdot, 0)|_{H^s(\mathbb{R}^2)}^2 = 2 \int_{\mathbb{R}^2} \int_{-\infty}^0 \partial_{z}\Lambda^s f(z,y)\, \Lambda^sf(z,y) dz dy$$
and the result follows from Cauchy-Schwarz and the fact that
$$ \int_{\mathbb{R}^2 }\partial_{z}\Lambda^sf(z,y)\, \Lambda^sf(z,y) \, dy= \int_{\mathbb{R}^2 }\partial_{z}\Lambda^{s_{1}}f(z,y)\, \Lambda^{2s - s_{1}}f(z,y) \, dy.$$
\end{proof}
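Let us also indicate, as a sketch, how Remark \ref{remLinfty} follows from \eqref{emb}: applying \eqref{emb} to $Z^\alpha f$ for $|\alpha| \leq 2$ with $s_{1}= k-2$ and $s_{2}= k-4$ (so that $s_{1}+s_{2}= 2k-6 >2$ for $k \geq 5$), and using, as noted above, that the commutator $[\partial_{z}, Z^\alpha]$ only produces terms of the form $c(z)\, \partial_{z} Z^{\beta}$ with bounded $c$ and $|\beta| \leq |\alpha|-1$, we get
$$ \| Z^\alpha f \|_{L^\infty}^2 \lesssim \| \partial_{z} Z^\alpha f \|_{H^{k-4}_{tan}}\, \| Z^\alpha f \|_{H^{k-2}_{tan}} \lesssim \| \partial_{z} f \|_{k-2}\, \| f \|_{k}. $$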
For later use, we also recall the classical tame Sobolev-Gagliardo-Nirenberg-Moser and commutator estimates
in $\mathbb{R}^2$:
\begin{prop}
\label{sobbord}
For $s \in \mathbb{R}$, $s\geq 0$, we have
\begin{eqnarray}
\label{sobr2} & & |\Lambda^s(fg) |_{L^2(\mathbb{R}^2)}
\leq C_{s} \big( | f|_{L^\infty(\mathbb{R}^2)} |g|_{H^s(\mathbb{R}^2)} + | g|_{L^\infty(\mathbb{R}^2)} |f|_{H^s(\mathbb{R}^2)} \big) \\
& & \label{comr2}
\| [\Lambda^s, f ] \nabla g\|_{L^2(\mathbb{R}^2)}
\leq C_{s} \big( | \nabla f|_{L^\infty(\mathbb{R}^2)} |g|_{H^s(\mathbb{R}^2)} + |\nabla g|_{L^\infty(\mathbb{R}^2)} |f|_{H^s(\mathbb{R}^2)} \big)\\
& & \label{cont2D} |u v|_{1 \over 2} \lesssim |u|_{1, \infty} |v|_{1\over 2}
\end{eqnarray}
where $\Lambda^s$ is the Fourier multiplier by $ \big(1+ |\xi|^{2}\big)^{s \over 2}$.
\end{prop}
These estimates are also classical; we refer for example to \cite{Chemin95}. Note that the last estimate can be obtained as
a consequence of (1) and (5) in Theorem \ref{symbolic}.
We shall also need to use results about
semiclassical paradifferential calculus in section \ref{sectionnorm2}. They are described
in section \ref{sectionparadiff}.
\subsection{The equations in the fixed domain}
By using the change of variables \eqref{vdef} and the definition \eqref{eqphi}, \eqref{eqeta}, we obtain that
\begin{eqnarray*}
\big( \partial_{i} u\big)(t, y, \varphi) & = & \big( \partial_{i} v - { \partial_{i} \varphi \over \partial_{z} \varphi}
\partial_{z} v\big)(t,y,z), \quad i= 0,\, 1,\, 2 \\
\big( \partial_{3} u\big)(t, y, \varphi) & = &\big( {1 \over \partial_{z} \varphi} \partial_{z} v\big)(t,y,z)
\end{eqnarray*}
where we set $\partial_{0}= \partial_{t}$.
We thus introduce the following operators
$$ \partial^\varphi_{i} = \partial_{i} - { \partial_{i} \varphi \over \partial_{z} \varphi}
\partial_{z}, \quad i=0, \, 1,\, 2, \quad \partial^{\varphi}_{3}= \partial^{\varphi}_{z}= {1 \over \partial_{z} \varphi} \partial_{z} $$
in order to have
\begin{equation}
\label{id0}
\partial_{i}u \circ \Phi= \partial^{\varphi}_{i}v, \quad i= 0, \, 1, \, 2, \, 3.
\end{equation}
This yields by using the change of variable \eqref{diff} that $(v,q)$ solves in the fixed domain $\mathcal{S}$ the system \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2} that was introduced previously.
Note that thanks to this definition, since the operators $\partial_{i}$ commute, we immediately
get that
\begin{equation}
\label{comD}
[ \partial^{\varphi}_{i}, \partial^{\varphi}_{j}]= 0, \quad \forall i, \, j.
\end{equation}
Since the Jacobian of the change of variable \eqref{diff} is $\partial_{z} \varphi$, it is natural to
use on $\mathcal{S}$ when performing energy estimates the following weighted $L^2$ scalar products:
$$ \int_{\mathcal{S}} f\, g \, d \mathcal{V}_{t}, \quad d\mathcal{V}_{t} = \partial_{z}\varphi(t,y,z) \, dy dz
$$
and, for vector fields on $\mathcal{S}$,
$$ \int_{\mathcal{S}} v \cdot w \, d \mathcal{V}_{t} $$
where $\cdot$ stands for the standard
scalar product of $\mathbb{R}^3$.
With this notation, we have the following integration by parts identities for the operators $\partial_{i}^\varphi$ and
the above weighted scalar products:
\begin{lem}
\label{lemipp}
\begin{eqnarray}
\label{ipp1}
& &\int_{\mathcal{S}}\partial^{\varphi}_{i}f\, g \, d\mathcal{V}_{t} = - \int_{\mathcal{S}} f\, \partial^{\varphi}_{i}g\, d\mathcal{V}_{t} + \int_{z=0} fg \, {\bf N}_{i}\, dy, \quad i= 1, \, 2, \, 3, \\
& & \label{ipp2} \int_{\mathcal{S}}\partial^{\varphi}_{t}f\, g \, d\mathcal{V}_{t} = \partial_{t} \int_{\mathcal{S}}f\, g \, d\mathcal{V}_{t} - \int_{\mathcal{S}}f\, \partial^{\varphi}_{t} g \, d\mathcal{V}_{t} - \int_{z=0} f g\, \partial_{t} h
\end{eqnarray}
where the outward normal ${\bf N}$ is given by \eqref{Ndef}.
\end{lem}
Note that in the above formulas, though it is not always explicitly mentioned, in each occurrence
of $\varphi$, this function has to be taken at time $t$.
We recall that by the choice \eqref{eqphi}, we have that on $z=0$, $\varphi=h$.
These formulas are straightforward consequences of the standard integration by parts formulas.
Also, as an immediate corollary, we get:
\begin{cor}
\label{coripp}
Assume that $v(t, \cdot)$ is a vector field on $\mathcal{S}$ such that $\nabla^\varphi \cdot v=0$.
Then for all smooth functions $f$, $g$ and smooth vector fields $u$, $w$, we have the identities:
\begin{eqnarray}
\label{ippt} & & \int_{\mathcal{S}}\big( \partial^{\varphi}_{t} f + v \cdot \nabla^\varphi f\big) f \, d\mathcal{V}_{t}= {1 \over 2} \partial_{t} \int_{\mathcal{S}}
|f|^2\, d\mathcal{V}_{t} - {1 \over 2} \int_{z= 0} |f|^2 \big( \partial_{t}h - v \cdot {\bf N} \big) dy, \\
\label{ippD} & & \int_{\mathcal{S}} \Delta^\varphi f\, g \, d\mathcal{V}_{t}= - \int_{\mathcal{S}} \nabla^\varphi f \cdot \nabla^\varphi g\,
d\mathcal{V}_{t} + \int_{z=0} \nabla^\varphi f \cdot {\bf N}\, g\, dy, \\
& & \label{ippS}
\int_{\mathcal{S}} \nabla^\varphi \cdot (S^\varphi u) \cdot w \, d\mathcal{V}_{t}= - \int_{\mathcal{S}} S^\varphi u \cdot S^\varphi w\,
d\mathcal{V}_{t} + \int_{z=0} (S^\varphi u\, {\bf N}) \cdot w\, dy.
\end{eqnarray}
\end{cor}
\subsection{Alinhac good unknown}
In order to perform high order energy estimates, we shall need to study the equation that we get
after applying conormal derivatives to the equation \eqref{NSv}. The standard way to proceed is
to say that the obtained equation has the structure
\begin{equation}
\label{naif} \partial^{\varphi}_{t}Z^\alpha v + \big(v \cdot \nabla^\varphi \big) Z^\alpha v + \nabla^\varphi Z^\alpha q = \varepsilon \Delta^\varphi Z^\alpha v + l.o.t\end{equation}
where $l.o.t$ stands for lower order terms and then to perform the natural energy estimate for this equation.
The difficulty in free boundary problems is that very often $\varphi$ is not smooth enough to consider all the standard commutators
as lower order terms. Nevertheless, there is a crucial cancellation pointed out by Alinhac \cite{Alinhac} which
allows us to perform high order energy estimates. Since applying a derivative to the equation is like linearizing the equation,
this cancellation can be explained in terms of a link between full and partial linearization.
Let us set
\begin{eqnarray*}
& & \mathcal{N}(v,q, \varphi) =
\partial^{\varphi}_{t}v + \big(v \cdot \nabla^\varphi \big)v + \nabla^\varphi q -2\varepsilon \nabla^\varphi \cdot \big( S^\varphi v\big) , \\
& & d(v,\varphi)= \nabla^\varphi \cdot v,\\
& & \mathcal{B}(v,q,\varphi)= 2 \varepsilon S^\varphi v \, {\bf N} - (q-gh){\bf N}.
\end{eqnarray*}
It is more convenient to use the above form of equation \eqref{NSv}
in view of the boundary condition \eqref{bord2}.
Note that by formal differentiation with respect to all the unknowns, we obtain the linearized operators:
\begin{eqnarray}
\label{full1}
D\mathcal{N} (v, q, \varphi) \cdot(\dot v, \dot q, \dot \varphi)& = &
\partial^{\varphi}_{t}\dot v + \big(v \cdot \nabla^\varphi \big)\dot v + \nabla^\varphi \dot q - 2 \,\varepsilon \nabla^\varphi \cdot\big( S^\varphi \dot v\big)
+\big( \dot v \cdot \nabla^\varphi \big)v \\
\nonumber & &
- \partial^{\varphi}_{z} v \big( \partial^{\varphi}_{t} \dot \varphi + v \cdot \nabla^\varphi \dot \varphi \big) + \partial_{z}^\varphi q \, \nabla^\varphi \dot
\varphi \\
\nonumber & & + 2 \varepsilon \nabla^\varphi\big( \partial^{\varphi}_{z} v \otimes \nabla^\varphi \dot \varphi + \nabla^\varphi \dot \varphi \otimes \partial^{\varphi}_{z} v \big)
+ 2 \varepsilon \partial^{\varphi}_{z} \big( S^\varphi v\big)\, \nabla^\varphi \dot \varphi, \\
\label{full3} D d (v, \varphi) \cdot (\dot v, \dot \varphi)& = & \nabla^\varphi \cdot \dot v - \nabla^\varphi \dot \varphi
\cdot \partial^{\varphi}_{z} v, \\
\label{full4} D \mathcal{B}(v,q,\varphi) \cdot(\dot v, \dot q, \dot \varphi) & = & 2 \varepsilon S^\varphi \dot v {\bf N} - \partial^{\varphi}_{z} v \otimes \nabla^\varphi \dot \varphi {\bf N}
- \nabla^\varphi \dot \varphi \otimes \partial^{\varphi}_{z} v {\bf N}- (\dot q -g \dot h) {\bf N} \\
\nonumber & & +\big( 2 \varepsilon S^\varphi v - (q- gh)\big)\dot {\bf N},
\end{eqnarray}
where $\dot {\bf N}= (-\partial_{1} \dot \varphi, -\partial_{2} \dot \varphi, 0)^t$.
We have the following crucial identities, first observed by Alinhac \cite{Alinhac}, which relate
full and partial linearization.
\begin{lem}
\label{lemal}
We have the following identities:
\begin{eqnarray*}
D\mathcal{N}(v, q, \varphi) \cdot(\dot v, \dot q, \dot \varphi) & = &
\big( \partial_{t}^\varphi +( v \cdot \nabla^\varphi)- 2 \varepsilon \nabla^\varphi \cdot \big( S^\varphi \cdot \big) \big)\big(\dot v- \partial^{\varphi}_{z} v \, \dot \varphi \big)
+ \nabla^\varphi\big( \dot q - \partial^{\varphi}_{z} q \, \dot \varphi\big) \\
& & \big( \dot v \cdot \nabla^\varphi\big) v + \dot \varphi \big( \partial^{\varphi}_{z}\big( \mathcal{N}(v,q, \varphi )\big)
- (\partial^{\varphi}_{z} v \cdot \nabla^\varphi) v \big) \\
D d(v, \varphi)\cdot(\dot v, \dot \varphi) &= &
\nabla^\varphi \cdot \big( \dot v - \partial^{\varphi}_{z}v\, \dot \varphi \big) + \dot \varphi \,\partial^{\varphi}_{z} \big( d(v, \varphi) \big),\\
D \mathcal{B}(v,q,\varphi) \cdot(\dot v, \dot q, \dot \varphi) & = & 2 \varepsilon S^\varphi\big( \dot v -\partial^{\varphi}_{z} v \,\dot \varphi \big) {\bf N}
+ 2 \varepsilon \, \dot \varphi \, \partial_{z} \big( S^\varphi v\big) {\bf N} - (\dot q -g \dot h) {\bf N} \\
\nonumber & & +\big( 2 \varepsilon S^\varphi v - (q- gh)\big)\dot {\bf N},
\end{eqnarray*}
\end{lem}
As a consequence of this lemma, if $(v,q, \varphi)$ solves \eqref{NSv} we get that
\begin{align}
\label{NSal1}
D\mathcal{N}(v, q, \varphi) \cdot(\dot v, \dot q, \dot \varphi)
= \big( \partial_{t}^\varphi +( v \cdot \nabla^\varphi)- \varepsilon \Delta^\varphi \big)\big(\dot v- \partial^{\varphi}_{z} v \, \dot \varphi \big)+ \nabla^\varphi\big( \dot q - \partial^{\varphi}_{z} q \, \dot \varphi\big) \\ \nonumber+ \big( \dot v \cdot \nabla^\varphi\big) v - \dot \varphi \, (\partial^{\varphi}_{z} v \cdot \nabla^\varphi) v
\end{align}
and
\begin{equation}
\label{NSal2}
\nabla^\varphi \cdot \big( \dot v- \partial^{\varphi}_{z} v \, \dot \varphi \big) = 0.
\end{equation}
The main consequence of these identities is that even if the naive form \eqref{naif} of the equation for high order derivatives
cannot be used, we can almost use it in the sense that if we replace $Z^\alpha v$ and $Z^\alpha q$ by the corresponding
``good unknowns'' $Z^\alpha v - \partial_{z}^\varphi v\, Z^\alpha \eta$, $Z^\alpha q - \partial_{z}^\varphi q\, Z^\alpha \eta$
in the left-hand side, then we indeed get lower order terms in the right-hand side of \eqref{naif}.
\begin{proof}
The proof follows from simple algebraic manipulations.
There are many ways to explain this cancellation. It can be seen as a consequence
of \eqref{comD}.
Let us set
$$ \mathcal{A}_{i}(v,\varphi)= \partial^{\varphi}_i v, \quad \mathcal{F}_{ij}(v,\varphi)= \partial^{\varphi}_{i} \partial^{\varphi}_{j} v$$
for $i=0, \, 1, \, 2, \, 3$.
We note that for $i=0, \, 1, \, 2$
$$ D \mathcal{A}_{i}(v,\varphi)\cdot(\dot v, \dot \varphi)=
\partial^{\varphi}_{i} \dot v - \partial^{\varphi}_{z}v\, \partial^{\varphi}_{i} \dot \varphi= \partial^{\varphi}_{i}\big( \dot v - \partial^{\varphi}_{z}v \, \dot \varphi)
+\partial^{\varphi}_{i} \partial^{\varphi}_{z}v \, \dot \varphi. $$
Next, since $\partial^{\varphi}_{i}$ and $\partial^{\varphi}_{z}$ commute, we also have
$$ \partial^{\varphi}_{i} \partial^{\varphi}_{z}v= \partial^{\varphi}_{z} \partial^{\varphi}_{i} v = \partial^{\varphi}_{z}\big( \mathcal{A}_{i}(v, \varphi) \big) $$
and consequently, we find that
\begin{equation}
\label{al11}
D \mathcal{A}_{i}(v,\varphi)\cdot(\dot v, \dot \varphi) =
\partial^{\varphi}_{i}\big( \dot v - \partial^{\varphi}_{z}v \, \dot \varphi) + \dot \varphi \, \partial^{\varphi}_{z} \big( \mathcal{A}_{i}(v, \varphi) \big).
\end{equation}
A similar computation shows that this relation is also true for $i=3$.
In a similar way, we have that for $i=1, \, 2$, $j=1, \, 2$
\begin{eqnarray*} D \mathcal{F}_{ij}(v, \varphi) \cdot(\dot v, \dot \varphi) & = &
\partial^{\varphi}_{i}\big( \partial^{\varphi}_{j} \dot v - \partial^{\varphi}_{z}v\, \partial^{\varphi}_{j} \dot \varphi \big) - \partial^{\varphi}_{i}\dot \varphi\, \partial^{\varphi}_{z}\big(\partial^{\varphi}_{j}v \big) \\
& = & \partial^{\varphi}_{ij} \big( \dot v - \partial^{\varphi}_{z} v \, \dot \varphi \big) + \dot \varphi \partial^{\varphi}_{i}\partial^{\varphi}_{j}\partial^{\varphi}_{z}v
\end{eqnarray*}
and hence we find by using again \eqref{comD} that
\begin{equation}
\label{al12}
D \mathcal{F}_{ij}(v,\varphi)\cdot(\dot v, \dot \varphi) = \partial^{\varphi}_{ij} \big( \dot v - \partial^{\varphi}_{z} v \, \dot \varphi \big) + \dot \varphi \, \partial^{\varphi}_{z}\big( \mathcal{F}_{ij}(v, \varphi) \big).
\end{equation}
A similar computation shows that this relation also holds true when $i= 3$ or $j=3$.
The proof of Lemma \ref{lemal} easily follows by using the two relations
\eqref{al11}, \eqref{al12}.
\end{proof}
\subsection{Control from the dissipation term}
In view of the integration by parts formulas \eqref{ippD}, \eqref{ippS}, we need to prove that the control of quantities
like $\int_{\mathcal{S}} | \nabla^\varphi f|^2 \, d \mathcal{V}_{t}$
yields a control of the standard $H^1$ norm of $f$. We also
need Korn type inequalities to control the energy dissipation term. This is the aim of the final part of this preliminary section.
\begin{lem}
\label{mingrad}
Assume that $\partial_{z} \varphi \geq c_{0} $ and $\|\nabla \varphi \|_{L^\infty} \leq 1/c_{0}$ for some $c_{0}>0$.
Then there exists $\Lambda_{0}= \Lambda({1\over c_{0}})>0$ such that
$$
\| \nabla f \|_{L^2(\mathcal{S})}^2 \leq \Lambda_{0}
\int_{\mathcal{S}} | \nabla^\varphi f |^2\, d \mathcal{V}_{t}.
$$
\end{lem}
\begin{proof}
By using the definition of the operators $\partial_{i}^\varphi$,
we first note that
$$ |\partial_{z}f| \leq | \partial_{z} \varphi|\, | \partial_{z}^\varphi f|. $$
Hence we find
$$ \| \partial_{z} f\|_{L^2(\mathcal{S})} \leq \Lambda_{0} \| \partial_{z}^\varphi f\|_{L^2(\mathcal{S})}$$
and since $ d \mathcal{V}_{t}= \partial_{z} \varphi dx \geq c_{0} dx$ by assumption, this yields
$$ \| \partial_{z} f\|_{L^2(\mathcal{S})}^2 \leq \Lambda_{0} \int_{\mathcal{S}} |\partial_{z}^\varphi f |^2 d\mathcal{V}_{t}.$$
Next, since for $i=1,\, 2$, we have
$$ | \partial_{i} f| \leq | \partial_{i}^\varphi f| + |\partial_{i} \varphi| \, |\partial_{z}^\varphi f| \leq \Lambda_{0} | \nabla^\varphi f|, $$
we also obtain that
$$ \| \partial_{i} f\|_{L^2(\mathcal{S})}^2 \leq \Lambda_{0} \int_{\mathcal{S}} |\nabla^\varphi f |^2 d\mathcal{V}_{t}, \quad i=1, \, 2.$$
This ends the proof of Lemma \ref{mingrad}.
\end{proof}
In the next proposition we state an adapted version of the classical Korn inequality in $\mathcal{S}$:
\begin{prop}
\label{Korn}
If $\partial_{z} \varphi \geq c_{0} $, $\|\nabla \varphi \|_{L^\infty} + \|\nabla^2 \varphi\|_{L^\infty} \leq {1 \over c_{0}}$ for some $c_{0}>0,$
then there exists $\Lambda_{0}= \Lambda(1/c_{0})>0$, such that for every $v \in H^1(\mathcal{S})$, we have
\begin{equation}
\label{estKorn}
\|\nabla v\|_{L^2(\mathcal{S})}^2 \leq \Lambda_{0} \Big( \int_{\mathcal{S}} |S^\varphi v |^2 \, d\mathcal{V}_{t} + \| v\|_{L^2(\mathcal{S})}^2 \Big)
\end{equation}
where $$S^\varphi v= {1 \over 2 }\big({ \nabla^\varphi v + \nabla^\varphi v^t}\big). $$
\end{prop}
For the sake of completeness, we shall give a proof of this estimate in subsection \ref{sectionKorn}.
\section{Preliminary estimates of $\eta$}
\label{sectionprelimeta}
In this section, we shall begin our a priori estimates for a sufficiently smooth solution
of \eqref{eqphi}, \eqref{NSv}.
We assume that $A$ is chosen such that $\partial_{z} \varphi_{0}(y, z) \geq 1$ at the initial time.
We shall work on an interval of time $[0, T^\varepsilon]$ such that
\begin{equation}
\label{min}
\partial_{z} \varphi(t,y,z) \geq {c_{0}}, \, \forall t \in [0, T^\varepsilon]
\end{equation}
for some $c_{0}>0$. This ensures that for every $t \in [0, T^\varepsilon]$, $\Phi(t, \cdot)$ is a diffeomorphism.
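Let us recall that the map $\Phi(t, \cdot)$ defined by \eqref{diff} is of the form $\Phi(t,y,z)= \big( y, \varphi(t,y,z)\big)$, so that its Jacobian is simply
$$ \det\big( D_{y,z} \Phi(t,y,z)\big)= \partial_{z} \varphi(t,y,z), \qquad d\mathcal{V}_{t}= \partial_{z}\varphi \, dy\, dz, $$
and the lower bound \eqref{min} thus prevents this change of variables from degenerating on $[0, T^\varepsilon]$.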
We shall first estimate $\eta$ given by \eqref{eqeta} in terms of $h$ and $v$.
Our first result is:
\begin{prop}
\label{propeta} We have the following estimates for $\eta$ defined by \eqref{eqeta}
\begin{eqnarray}
\label{etaharm} & & \forall s \geq 0, \quad \| \nabla \eta (t)\|_{H^s(\mathcal{S})} \leq C_{s} |h(t)|_{s+{1 \over 2} }, \\
\label{dtetaharm} & & \forall s \in \mathbb{N}, \quad \| \nabla \partial_{t} \eta \|_{H^s(\mathcal{S})}
\leq C_{s}\big( 1+ \|v\|_{L^\infty} + |\nabla h|_{L^\infty} \big)( \|v\|_{E^{s+1}} + |\nabla h|_{s+ {1 \over 2}}\big)
\end{eqnarray}
and moreover, we also have the $L^\infty $ estimates
\begin{eqnarray}
\label{etainfty}
& & \forall s \in \mathbb{N}, \quad \| \eta \|_{W^{s, \infty}} \leq C_{s} |h |_{s, \infty },\\
\label{dtetainfty}
& & \forall s \in \mathbb{N}, \quad \| \partial_{t} \eta \|_{W^{s, \infty}}
\leq C_{s}\big( 1+ |\nabla h|_{s, \infty} \big) \|v\|_{s, \infty }
\end{eqnarray}
where $C_{s}$ depends only on $s$.
\end{prop}
Note that the above estimates ensure that $\eta$ has a standard Sobolev regularity in $\mathcal{S}$
and not only a conormal one. This is one of the main advantages of the choice of the diffeomorphism
given by \eqref{diff} and \eqref{eqphi}, \eqref{eqeta}.
\begin{proof}
From the explicit expression \eqref{eqeta}, we get that
$$ \int_{-\infty}^0 \big( |\xi|^2\, |\hat{\eta}(\xi, z) |^2 + |\partial_{z}\hat{\eta}(\xi, z) |^2\big)\, dz \lesssim | \xi| \,|\hat{h}(\xi)|^2$$
and hence
\eqref{etaharm} follows by using the Bessel identity.
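Let us detail this computation: since by \eqref{eqeta} we have $\hat{\eta}(\xi, z)= \chi(z\, \xi)\, \hat{h}(\xi)$ (with $\chi$ a smooth function with sufficient decay, as in the construction of $\eta$), the change of variables $z'= |\xi|\, z$ yields
$$ \int_{-\infty}^0 \big( |\xi|^2\, |\chi(z \xi)|^2 + \big| \xi \cdot (\nabla \chi)(z\xi)\big|^2 \big)\, dz = |\xi| \int_{-\infty}^0 \big( |\chi( z' \omega)|^2 + \big|\omega\cdot (\nabla \chi)(z' \omega)\big|^2\big) \, dz', \qquad \omega= {\xi \over |\xi|}, $$
and the last integral is bounded uniformly in $\omega$; it then suffices to multiply by $|\hat{h}(\xi)|^2$ and to integrate in $\xi$.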
By using \eqref{etaharm}, we get that
$$ \|\nabla \partial_{t} \eta \|_{s} \lesssim |\partial_{t}h |_{s+{1 \over 2 }}.$$
By using \eqref{bordv1} and \eqref{sobr2}, we get by setting $v^b(t,y)= v(t,y,0)$ that
$$ |\partial_{t}h |_{s+{1 \over 2 }} \leq |v^b\cdot {\bf N}|_{s+ {1 \over 2}}
\lesssim \|v\|_{L^\infty} |\nabla h|_{s+{1\over 2}} + (1+ |\nabla h|_{L^\infty}) |v^b|_{s+ {1 \over 2 }}$$
and hence \eqref{dtetaharm} follows by using the trace inequality \eqref{trace}.
For the $L^\infty$ estimates, we observe that we can write
\begin{equation}
\label{etaconv} \eta(y,z)= {1 \over z^2} \psi( {\cdot \over z})\star_{y} h:= \psi_{z} \star_{y} h
\end{equation}
where $\star_{y}$ stands for a convolution in the $y$ variable and $\psi$ is in $L^1(\mathbb{R}^2)$. Consequently,
from the Young inequality for convolutions, we get that
$$ \| \eta \|_{L^\infty} \lesssim |h|_{L^\infty}, \quad \|\partial_{i} \eta \|_{L^\infty} \lesssim |\nabla h|_{L^\infty}, \quad i=1, \, 2.$$
For $\partial_{z} \eta$, we note that
$$ \partial_{z} \hat \eta= \big( \xi_{1} \partial_{1} \chi + \xi_{2} \partial_{2} \chi\big)(z \xi)\, \hat h (\xi) = \nabla \chi\big (\xi z) \cdot \mathcal{F}_{y}( \nabla h)(\xi).$$
This yields that
$$ \partial_{z} \eta = {1 \over z^2} \psi^{(1)}( {\cdot \over z})\star_{y} \nabla h$$
where $\psi^{(1)}$ is again an $L^1$ function and hence we obtain that
$$ \|\partial_{z} \eta \|_{L^\infty} \lesssim | \nabla h |_{L^\infty}.$$
The estimates of the higher order derivatives in \eqref{etainfty} then follow by induction.
To prove \eqref{dtetainfty}, we write, thanks to \eqref{etainfty} and \eqref{bord1}, that
$$ \| \partial_{t} \eta \|_{W^{s, \infty}} \lesssim |v^b \cdot {\bf N}|_{s, \infty} \lesssim \|v\|_{s, \infty}\big( 1 + |\nabla h|_{s, \infty}\big).$$
\end{proof}
Next, we shall study how the smoothing effect of the Navier-Stokes equation on the velocity can be used to
improve the regularity of the surface.
Since in the following we need to estimate very often expressions like $f/\partial_{z} \varphi$ where
$f \in H^s(\mathcal{S})$ or $H^s_{co}(\mathcal{S})$, we shall first state a general lemma:
\begin{lem}
For every $m \in \mathbb{N}$, we have
\begin{equation}
\label{quot} \big\| {f \over \partial_{z} \varphi} \big\|_{m} \leq \Lambda\big( {1 \over c_{0}}, |h|_{1, \infty} + \|f \|_{L^\infty} \big)\big(|h|_{m+{1 \over 2}} +
\|f\|_{m} \big) \end{equation}
\end{lem}
\begin{rem}
Of course, the above estimate is also valid in standard Sobolev spaces:
$$ \big\| {f \over \partial_{z} \varphi} \big\|_{H^s} \leq \Lambda\big( {1 \over c_{0}}, |h|_{1, \infty} + \|f \|_{L^\infty} \big)\big( |h|_{
s+{1 \over 2}} +
\|f\|_{H^s} \big)$$
for $s \in \mathbb{R}$, $s \geq {1 \over 2}.$
\end{rem}
\begin{proof}
Since $\partial_{z} \varphi= A + \partial_{z} \eta$, we note that
$$ {f \over \partial_{z} \varphi}= { f\over A} - {f\over A }{ \partial_{z} \eta \over A+ \partial_{z} \eta } = {f \over A} - {f\over A}\, F(\partial_{z} \eta)$$
where $F(x)= x/(A+ x)$ is a smooth function which is bounded together with all its derivatives on
$A+x \geq c_{0}>0$ and such that $F(0)=0$.
Consequently, by using \eqref{gues}, we get that
$$ \|F(\partial_{z} \eta) \|_{m} \lesssim \Lambda( {1 \over c_{0}}, \|\nabla \eta \|_{L^\infty}\big) \| \partial_{z} \eta\|_{m}$$
and further that
$$ \|{f \over \partial_{z} \varphi } \|_{m} \lesssim \|f /A \|_{m}+ \Lambda\big({1 \over c_{0}}, \| \nabla \eta \|_{L^\infty} + \|f/A \|_{L^\infty} \big)\big(\| \partial_{z} \eta \|_{m}
+ \|f/A \|_{m} \big).$$
The result follows by using \eqref{etainfty} and \eqref{etaharm}.
\end{proof}
Next, we study the gain in the regularity of the surface which is induced by the gain of regularity on the velocity for
the Navier-Stokes equation.
\begin{prop}
\label{heps}
For every $m\in \mathbb{N}$, $\varepsilon \in(0,1)$, we have the estimate
$$ \varepsilon\, |h(t)|_{m+{1 \over 2}}^2
\leq \varepsilon\, |h_{0}|_{m+{1 \over 2}}^2
+ \varepsilon \int_{0}^t | v^b |_{m+ {1\over 2 }}^2 + \int_{0}^t \Lambda_{1, \infty}
\big( \|v\|_{m}^2 + \varepsilon\,|h|_{m+{1 \over 2}}^2 \big)\, d\tau$$
where
\begin{equation} \label{Lam1inf} \Lambda_{1, \infty}= \Lambda(
|\nabla h |_{L^\infty(\mathbb{R}^2)} + \| v \|_{1,\infty} \big) \end{equation}
and $v^b= v_{/z=0}$.
\end{prop}
Note that by using the trace inequality \eqref{trace} (with $s_{1}=0, \, s_{2}= 1,\, s= {1/2}$), we can write that
$$ \varepsilon \int_{0}^t | v^b |_{m+ {1\over 2 }}^2 \leq \varepsilon \int_{0}^t \| \nabla v \|_{m}^2 + \varepsilon \int_{0}^t \|v\|_{m}^2$$
hence the second term in the right hand side of the estimate of Proposition \ref{heps} can indeed be absorbed by an energy dissipation term.
Nevertheless, since the exact form of the energy dissipation term for our high order estimates will be more complicated,
we shall give the exact way to control this term later.
\begin{proof}
By using \eqref{bord1}, we get that
$$ \partial_{t} \Lambda^{m+{1 \over 2}} h + v_y(t,y,0) \cdot \nabla \Lambda^{m+{1 \over 2}} h -
\Lambda^{m+ {1 \over 2 }} v_{3}(t,y,0) + [\Lambda^{m+{1 \over 2}}, v_{y}(t,y,0)]\cdot \nabla h=0.$$
From a standard energy estimate for this transport equation, we thus obtain
\begin{align*} {d \over dt } {1 \over 2} \varepsilon \, |h|_{m+{1 \over 2}}^2
& \lesssim \varepsilon\, |\nabla_{y} v^b |_{L^\infty(\mathbb{R}^2)} |h|_{m+{1 \over 2}}^2
\\ & +\varepsilon \big( |v ^b|_{m+ {1 \over 2}} + | [\Lambda^{m+{1 \over 2}}, v_{y}(t,y,0)]\cdot \nabla h |_{L^2(\mathbb{R}^2)}\big) |h|_{m+{1 \over 2}}
\end{align*}
where $v^b= v(t,y,0)$.
By using the commutator estimate \eqref{comr2},
we have
$$ | [\Lambda^{m+{1 \over 2}}, v_{y}(t,y,0)]\cdot \nabla h |_{L^2(\mathbb{R}^2)}
\lesssim |\nabla_{y} v^b |_{L^\infty(\mathbb{R}^2)} |h|_{m+{1 \over 2}}
+ |\nabla h |_{L^\infty(\mathbb{R}^2)} |v^b|_{m+{1 \over 2}}$$ and hence, we obtain
\begin{align}
\label{h1} {d \over dt } {1 \over 2} \varepsilon \, |h|_{m+{1 \over 2}}^2
& \lesssim \varepsilon\, |\nabla_{y} v^b |_{L^\infty(\mathbb{R}^2)} |h|_{m+{1 \over 2}}^2 \\ \nonumber & +
\big( 1 + |\nabla h |_{L^\infty(\mathbb{R}^2)}\big) \varepsilon \, |v ^b|_{m+ {1 \over 2}}
|h|_{m+{1 \over 2}}.
\end{align}
The result follows from the Young inequality
\begin{equation}
\label{young}
ab \leq \delta a^p + C_{\delta} b^q, \quad {1\over p } + {1 \over q}= 1, \quad a, \, b\geq 0,
\end{equation}
with $p=q=2$ and an integration in time.
This ends the proof of Proposition \ref{heps}.
\end{proof}
By combining Proposition \ref{propeta} and Proposition \ref{heps}, we also obtain
that:
\begin{cor}
\label{coreta}
For every $m\in \mathbb{N}$, $\varepsilon \in(0,1)$, we have
$$ \varepsilon\, \|\nabla \eta(t)\|_{H^{m}(\mathcal{S})}^2
\leq \varepsilon\, C_{m}\, |h_{0}|_{m+{1 \over 2}}^2
+ \varepsilon \int_{0}^t | v^b |_{m+{1\over 2 }}^2 + \int_{0}^t \Lambda_{1,\infty}
\big( \|v\|_{m}^2 + \varepsilon\,|h|_{m+{1 \over 2}}^2 \big)\, d\tau$$
where $\Lambda_{1,\infty} $ is defined in \eqref{Lam1inf}.
\end{cor}
\section{Basic $L^2$ estimate}
\label{sectionbasic}
We now start the first main part of our a priori estimates, namely estimates for $Z^m v$ and $Z^m h$.
The easiest case is $m=0$, which corresponds to the physical energy.
\begin{prop}
\label{basicL2}
For any smooth solution of \eqref{NSv}, we have the energy identity:
$$ {d \over dt} \Big( \int_{\mathcal{S}} |v|^2 \, d\mathcal{V}_{t} + g \int_{z=0} |h|^2\, dy \Big) + 4 \varepsilon \int_{\mathcal{S}} |S^\varphi v|^2\, d\mathcal{V}_{t}=0.$$
\end{prop}
\begin{proof}
By using \eqref{NSv} and the boundary condition \eqref{bord1}, we get that
$$ {d \over dt} \int_{\mathcal{S}} |v|^2 \, d\mathcal{V}_{t}= 2 \int_{\mathcal{S}} \nabla^\varphi \cdot( 2 \varepsilon S^\varphi v - q \, \mbox{Id}\big)
\cdot v\, d \mathcal{V}_{t}$$
and hence by using the integration by parts formula \eqref{ipp2}, we find that
$$ {d \over dt} \int_{\mathcal{S}} |v|^2 \, d\mathcal{V}_{t} + 4 \varepsilon \int_{\mathcal{S}} |S^\varphi v|^2\, d\mathcal{V}_{t}= 2 \int_{\mathcal{S}} q\, \nabla^\varphi \cdot v \, d\mathcal{V}_{t} +
2 \int_{z=0} \big(2 \varepsilon S^\varphi v - q \mbox{Id} \big) {\bf N} \cdot v \, dy.$$
Next, by using successively \eqref{bord2} and \eqref{bord1}, we observe that
$$ 2 \int_{z=0} \big(2 \varepsilon S^\varphi v - q \mbox{Id} \big) {\bf N} \cdot v \, dy = - 2 \int_{z=0} g h \, v \cdot {\bf N} \, dy
= - \int_{z=0} g {d \over dt } |h|^2 \,dy$$
and the result follows.
\end{proof}
\begin{cor}
\label{corL2}
If $\partial_{z} \varphi\geq c_{0}>0$, $|h|_{2, \infty} \leq {1 \over c_{0}}$ for $t \in [0, T^\varepsilon]$, then we have
$$ \|v(t)\|^2 + \varepsilon \int_{0}^t \| \nabla v \|^2 \leq \Lambda({1 \over c_{0}}) \Big(\|v_{0}\|^2+ \int_{0}^t \| v\|^2 \Big), \quad \forall t \in [0, T^\varepsilon].$$
\end{cor}
\begin{proof}
It suffices to combine Proposition \ref{basicL2} and Propositions \ref{Korn}, \ref{mingrad}.
\end{proof}
\section{Equations satisfied by $(Z^\alpha v, Z^\alpha h, Z^\alpha q)$} \label{sectionconorm}
\subsection{A commutator estimate}
The next step in order to perform higher order conormal estimates is to compute the equation satisfied by $Z^\alpha v$. We thus need
to commute $Z^\alpha$ with each term in the equation \eqref{NSv}.
It is thus useful to establish the following general expressions and estimates for commutators that we shall use many times.
We first notice that for $i=1, \, 2,\, 3$, we have for any smooth function $f$
\begin{equation}
\label{com1}
Z^\alpha \partial^{\varphi}_{i} f= \partial^{\varphi}_{i} Z^\alpha f - \partial^{\varphi}_{z}f \partial^{\varphi}_{i} Z^\alpha \eta+
\mathcal{C}^\alpha_{i}(f) \end{equation}
where the commutator $\mathcal{C}^\alpha_{i}(f)$ is given for $\alpha \neq 0$ and $i \neq 3$ by
\begin{equation}
\label{Cialpha}
\mathcal{C}^\alpha_{i}(f)= \mathcal{C}^\alpha_{i, 1}(f)+ \mathcal{C}^\alpha_{i,2}(f)+
\mathcal{C}^\alpha_{i, 3}(f)
\end{equation}
with
\begin{align*}
\mathcal{C}^{\alpha}_{i,1}& = -\big[ Z^\alpha, {\partial_{i} \varphi \over \partial_{z} \varphi }, \partial_{z} f \big], \\
\mathcal{C}^{\alpha}_{i,2}& = - \partial_{z} f \big[ Z^\alpha, \partial_{i} \varphi, {1 \over \partial_{z} \varphi}\big] - \partial_{i} \varphi \Big( Z^\alpha\big( {1 \over \partial_{z} \varphi}\big) + {Z^\alpha \partial_{z}
\eta \over (\partial_{z} \varphi)^2 } \Big) \partial_{z} f, \\
\mathcal{C}^\alpha_{i, 3}& = - {\partial_{i} \varphi \over \partial_{z} \varphi} [Z^\alpha, \partial_{z}]f
+ {\partial_{i} \varphi \over (\partial_{z} \varphi)^2}\, \partial_{z} f\, [Z^\alpha, \partial_{z}] \eta.
\end{align*}
Note that for $i=1, \, 2$ we have $\partial_{i}\varphi= \partial_{i}\eta$ and that for $\alpha \neq 0$,
$Z^\alpha \partial_{z}\varphi= Z^\alpha \partial_{z}\eta$. This is why we have replaced $\varphi$ by $\eta$ in
some terms of the above expressions.
For $i=3$, we have the same kind of decomposition for
the commutator (basically, it suffices to replace $\partial_{i}\varphi$ by $1$ in the above expressions).
The commutators $\mathcal{C}^\alpha_{i}(f)$ enjoy the following estimate
\begin{lem}
\label{comi}
For $1 \leq | \alpha |\leq m$, $i= 1, \, 2, \, 3$, we have
\begin{align}
\label{comiest} \| \mathcal{C}_{i}^\alpha(f) \| & \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty } + \|\nabla f \|_{1, \infty} \big) \big( \|\nabla f \|_{m-1} + |h
|_{m- {1 \over 2 }}
\big).
\end{align}
\end{lem}
The meaning of this lemma, combined with \eqref{com1}, is that, with the notations of the proof of Lemma
\ref{lemal}, we have
\begin{equation}
\label{com2}
Z^\alpha \partial^{\varphi}_{i}f = D \mathcal{A}_{i}(f, \varphi) \cdot (Z^\alpha f, Z^\alpha \eta) + \mathcal{C}_{i}^\alpha
\end{equation}
where the commutator $\mathcal{C}_{i}^\alpha$ is of lower order in both $f$ and $\eta$.
\subsubsection*{Proof of Lemma \ref{comi}}
We give the proof for $i=1, \, 2$, the last case being similar and slightly easier.
We shall first estimate $\mathcal{C}_{i,1}^\alpha$.
Thanks to \eqref{comsym}, we have
$$ \| \mathcal{C}^\alpha_{i,1} \|_{L^2} \leq \| Z\big( {\partial_{i } \varphi \over \partial_{z} \varphi } \big) \|_{L^\infty}
\, \|\partial_{z} f \|_{m-1} + \| Z \partial_{z} f \|_{L^\infty} \, \|
{\partial_{i } \varphi \over \partial_{z} \varphi } \|_{m-1}.$$
Consequently, by using \eqref{quot}, we first get that
\begin{align*} \| \mathcal{C}^\alpha_{i,1} \|_{L^2} \leq \Lambda({1\over c_{0}},
\|\nabla \varphi \|_{1, \infty} + |h|_{1, \infty} + \|Z \partial_{z} f \|_{L^\infty}\big)
\big( \|\partial_{i}\eta \|_{m-1} +|h|_{m-{1 \over 2 }} + \|\partial_{z} f \|_{m-1}\big),
\end{align*}
next, by using \eqref{eqphi}, we obtain $$
\| \mathcal{C}_{i, 1}^\alpha(f) \| \leq \Lambda\big( {1 \over c_{0}}, \|\nabla \eta \|_{1, \infty}+ |h|_{1, \infty} + \|\nabla f \|_{1, \infty} \big) \big( \|\nabla f \|_{m-1} + |h|_{m-{1\over 2 }} + \|\nabla \eta \|_{m-1}\big)
$$
and finally, by using \eqref{etaharm}, \eqref{etainfty}, we get
\begin{equation}
\label{Calpha1}
\| \mathcal{C}_{i, 1}^\alpha(f) \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} + \|\nabla f \|_{1, \infty} \big) \big( \|\nabla f \|_{m-1} + |h|_{m-{1\over 2 }} \big).
\end{equation}
To estimate the first term in $\mathcal{C}_{i,2}^\alpha$, we use the same kind of arguments: by using \eqref{comsym}
and \eqref{eqphi}, we first get that
$$
\| \partial_{z} f \big[ Z^\alpha, \partial_{i} \varphi, {1 \over \partial_{z} \varphi}\big] \|
\leq \Lambda ({1 \over c_{0}}, \|\partial_{z} f \|_{L^\infty} + \| \nabla \eta \|_{1, \infty} \big)\big( \| \nabla \eta\|_{m-1} + \|{ Z \partial_{z}\eta \over (\partial_{z} \varphi)^2}\|_{m-2} \big)$$
and hence by using \eqref{quot} and \eqref{etaharm}, \eqref{etainfty}, we find
$$ \| \partial_{z} f \big[ Z^\alpha, \partial_{i} \varphi, {1 \over \partial_{z} \varphi}\big] \|
\leq \Lambda ({1 \over c_{0}}, \| \nabla f \|_{L^\infty} + | h|_{2, \infty} \big) | h|_{m-{1\over 2}}.$$
To estimate the second type of terms in $\mathcal{C}_{i, 2}^\alpha$, we note that
for $|\alpha| \geq 1$, we can write
\begin{equation}
\label{Z1/dzphi} Z^\alpha \big({ 1 \over \partial_{z} \varphi} \big) = - Z^{\tilde \alpha} \big({ Z_{j} \partial_{z} \eta \over (\partial
_{z } \varphi)^2 } \big), \quad |\tilde \alpha | = |\alpha |-1,\end{equation}
hence, we obtain for $|\alpha | \geq 2 $ that
$$ \partial_{i} \varphi \Big( Z^\alpha\big( {1 \over \partial_{z} \varphi}\big) + {Z^\alpha \partial_{z}
\eta \over (\partial_{z} \varphi)^2 } \Big) \partial_{z} f
= - \partial_{i}\varphi\, \partial_{z} f\, [ Z^{\tilde \alpha }, {1 \over (\partial_{z} \varphi)^2} ] Z_{j}\partial_{z}\eta $$
and by using again \eqref{com}, \eqref{quot} and \eqref{etaharm}, \eqref{etainfty}, we also obtain that
\begin{equation}
\label{Calpha2}
\| \mathcal{C}_{i, 2}^\alpha(f) \| \leq \Lambda\big( {1 \over c_{0}}, | h |_{2, \infty} + \|\nabla f \|_{L^\infty} \big) | h |_{m-{1 \over 2 } }.
\end{equation}
It remains to estimate $\mathcal{C}_{i, 3}^\alpha$.
Since
$$[Z_{3}, \partial_{z}]= - \partial_{z} \big( { z \over 1- z } \big) \partial_{z}= - {1 \over (1-z)^2}\, \partial_{z}, $$
we can prove by induction that
\begin{equation}
\label{idcom} [Z^\alpha, \partial_{z}] h= \sum_{| \beta | \leq m-1} c_{\beta } \partial_{z} ( Z^\beta h)\end{equation}
for some harmless smooth bounded functions $c_{\beta}.$
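To illustrate how the coefficients remain bounded, one can check directly the case of two normal fields: with $Z_{3}= {z \over 1-z}\, \partial_{z}$ and $w= {1 \over (1-z)^2}$, the identity $[Z_{3}, \partial_{z}]= - w\, \partial_{z}$ yields
$$ [Z_{3}^2, \partial_{z}] h = - 2 w\, \partial_{z}\big( Z_{3} h \big) + \big( w^2 - Z_{3} w\big)\, \partial_{z} h, \qquad w^2 - Z_{3} w= {1- 2z \over (1-z)^4}, $$
which is indeed of the form \eqref{idcom}, with coefficients that are smooth and bounded on $\{ z \leq 0 \}$.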
This yields
\begin{align*} \big\| {\partial_{i}\varphi \over \partial_{z} \varphi} [Z^\alpha, \partial_{z}] f \big\|_{L^2}
+ \big\| {\partial_{i} \varphi \over (\partial_{z}
\varphi)^2} [Z^\alpha, \partial_{z} ] \varphi \, \partial_{z} f \big\|_{L^2}
\leq \Lambda( {1 \over c_{0}}, \|\partial_{i} \varphi \|_{L^\infty} + \|\partial_{z} f \|_{L^\infty}
)\big( \|\partial_{z} f \|_{m-1} \\+ \| \partial_{i} \varphi \| + \|\partial_{z} \eta \|_{m-1} \big)
\end{align*}
Indeed, the last term comes from the fact that thanks to \eqref{eqphi},
the commutator $ [Z^\alpha, \partial_{z}]\varphi$ can be decomposed into a harmless bounded
term and the commutator $ [Z^\alpha, \partial_{z}]\eta $ which is in $L^2$.
This yields by using again \eqref{etaharm}
\begin{align}
\nonumber
\| \mathcal{C}^\alpha_{i, 3} \| & \leq
\Lambda( {1 \over c_{0}}, \|\partial_{i} \varphi \|_{L^\infty} + \|\partial_{z} f \|_{L^\infty}
)\big( \|\partial_{z} f \|_{m-1} + \| \nabla \eta \|_{m-1} \big) \\
\label{Calpha3} & \leq \Lambda( {1 \over c_{0}}, | h |_{1, \infty} + \|\nabla f \|_{L^\infty}
)\big( \|\partial_{z} f \|_{m-1} + |h|_{m-{1 \over 2 }} \big).
\end{align}
To end the proof of Lemma \ref{comi}, it suffices to collect \eqref{Calpha1}, \eqref{Calpha2},
\eqref{Calpha3}.
The estimate for $\mathcal{C}_{3}^\alpha$ can be obtained through very similar arguments.
This ends the proof of Lemma \ref{comi}.
\subsection{Interior equation satisfied by $(Z^\alpha v, Z^\alpha q, Z^\alpha \varphi)$}
We shall prove the following:
\begin{lem}
\label{lemValpha}
For $1 \leq | \alpha | \leq m$, let us set $V^\alpha = Z^\alpha v - \partial^{\varphi}_{z}v\, Z^\alpha \eta, $ $Q^\alpha = Z^\alpha q - \partial^{\varphi}_{z}q\, Z^\alpha \eta,$
then we get the system
\begin{eqnarray}
\label{eqValpha} & & \partial^{\varphi}_{t} V^\alpha + v \cdot \nabla^\varphi V^\alpha + \nabla^\varphi Q^\alpha- 2 \varepsilon \nabla^\varphi \cdot S^\varphi V^\alpha
+ \mathcal{C}^\alpha(q)
+ \mathcal{C}^\alpha(\mathcal{T}) \\
\nonumber & & \quad \quad \quad \quad \quad \quad \quad = \varepsilon \mathcal{D^\alpha}\big( S^\varphi v \big) + \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha (v)\big) +(\partial^{\varphi}_{z}v \cdot \nabla^\varphi v)
Z^\alpha \eta
, \\
\label{divValpha}
& & \nabla^\varphi \cdot V^\alpha + \mathcal{C}^\alpha (d)= 0.
\end{eqnarray}
where the commutators $\mathcal{C}^\alpha(q)$, $\mathcal{C}^\alpha(d)$ and $\mathcal{E}^\alpha(v)$ satisfy the estimates:
\begin{eqnarray}
\label{Cq} & & \|\mathcal{C}^\alpha (q) \| \leq \Lambda\big( { 1\over c_{0}}, |h|_{2, \infty} + \| \nabla q \|_{1, \infty} \big) \big(
\|\nabla q \|_{m-1}+ |h |_{m-{1\over 2}} \big), \\
\label{Cd}& & \|\mathcal{C}^\alpha (d) \| \leq \Lambda\big( { 1\over c_{0}}, |h|_{2, \infty} + \| \nabla v \|_{1, \infty} \big) \big(
\|\nabla v \|_{m-1}+ |h |_{m-{1\over 2}} \big), \\
\label{CT} & &\| \mathcal{C}^\alpha (\mathcal{T}) \| + \|\mathcal{E}^\alpha(v) \| \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} + \|v\|_{E^{2, \infty}}
\big) \big( \|v\|_{E^m}+ |h|_{m-{1\over 2} }\big)
\end{eqnarray} and $\mathcal{D}^\alpha (S^\varphi v ) $ is given (using the summation convention over the repeated index $j$) by $$\mathcal{D}^\alpha\big( S^\varphi v \big)_{i}= 2 \, \mathcal{C}_{j}^\alpha \big( S^\varphi v)_{ij}. $$
\end{lem}
Note that we will control the commutator $\mathcal{D}^\alpha(S^\varphi v)$ later by using integration by parts.
\begin{proof}
We need to compute
the equation solved by $Z^\alpha v$.
By using the previous notations, we first note that
\begin{equation}
\label{comp1}
Z^\alpha \nabla^\varphi q= \nabla^\varphi Z^\alpha q -\partial_{z}^\varphi q\, \nabla^\varphi Z^\alpha \varphi
+ \mathcal{C}^\alpha(q)
\end{equation}
where
$$ \mathcal{C}^\alpha(q)= \left(\begin{array}{lll} \mathcal{C}_{1}^\alpha(q) \\ \mathcal{C}_{2}^\alpha (q) \\
\mathcal{C}_{3}^\alpha(q) \end{array}\right)$$
and thus thanks to lemma \ref{comi}, we have the estimate
\begin{equation}
\label{comp2}
\| \mathcal{C}^\alpha(q) \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{2,\infty } + \|\nabla q \|_{1, \infty} \big) \big( \|\nabla q \|_{m-1}
+ | h |_{m-{1 \over 2 } }\big).
\end{equation}
In a similar way, we get that \begin{equation} \label{div1} Z^\alpha \big( \nabla^\varphi \cdot v \big)= \nabla^\varphi \cdot Z^\alpha v - \nabla^\varphi Z^\alpha \varphi \cdot
\partial_{z}^\varphi v + \mathcal{C}^\alpha(d) \end{equation} where $$ \mathcal{C}^\alpha(d) = \sum_{i=1}^3 \mathcal{C}^\alpha_{i}(v) $$
and hence, thanks to Lemma \ref{comi}, we also have that
\begin{equation}
\label{comd2}
\| \mathcal{C}^\alpha(d) \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{2,\infty } + \|\nabla v \|_{1, \infty} \big) \big( \|\nabla v \|_{m-1}
+ | h |_{m-{1 \over 2 } }\big).
\end{equation}
Next, we shall expand the transport part of \eqref{NSv}.
$$ Z^\alpha \Big( \partial^{\varphi}_{t} + v \cdot \nabla^\varphi \Big)v.$$
We first note that
\begin{equation}
\label{transportW} \partial^{\varphi}_{t} + v \cdot \nabla^\varphi
= \partial_{t}+ v_{y} \cdot \nabla_{y} + V_{z} \partial_{z}\end{equation}
where $V_{z}$ is defined by
\begin{equation}
\label{Wdef}
V_{z}= {1\over \partial_{z} \varphi} v_{z}= {1 \over \partial_{z} \varphi} \big( v\cdot {\bf N}- \partial_{t} \varphi\big)= {1\over \partial_{z} \varphi}\big(v\cdot{\bf N}- \partial_{t} \eta\big)
\end{equation} and where ${\bf N}(t,y,z)$ is defined as
$$ {\bf N}(t,y,z)=\big(-\partial_{1} \varphi(t,y,z), -\partial_{2}\varphi(t,y,z), 1\big)^t
=\big(-\partial_{1} \eta(t,y,z), -\partial_{2}\eta(t,y,z), 1\big)^t.$$
Note that ${\bf N}$ is defined in the whole $\mathcal{S}$ and that ${\bf N}(t,y,0)$ is indeed the outward normal to the boundary
that we have used before.
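Let us recall for convenience how \eqref{transportW} and \eqref{Wdef} are obtained: writing $\partial^{\varphi}_{t}= \partial_{t}- {\partial_{t}\varphi \over \partial_{z} \varphi}\partial_{z}$, $\partial^{\varphi}_{i}= \partial_{i}- {\partial_{i}\varphi \over \partial_{z} \varphi}\partial_{z}$ for $i=1, \, 2$ and $\partial^{\varphi}_{z}= {1 \over \partial_{z}\varphi} \partial_{z}$ (see also \eqref{graddiv} below), we get by collecting the coefficients of $\partial_{z}$ that
$$ \partial^{\varphi}_{t} + v\cdot \nabla^\varphi = \partial_{t} + v_{y}\cdot \nabla_{y} + {1 \over \partial_{z}\varphi}\big( v_{3}- v_{1}\partial_{1}\varphi - v_{2} \partial_{2}\varphi - \partial_{t}\varphi\big)\partial_{z} = \partial_{t}+ v_{y}\cdot \nabla_{y} + V_{z}\, \partial_{z}, $$
since $v\cdot {\bf N}= v_{3}- v_{1}\partial_{1}\varphi - v_{2}\partial_{2}\varphi$.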
For the vector fields $v_{z}$ and $V_{z}$, we have the estimates:
\begin{lem}
\label{lemestW}
We have for $m \geq 2$, the estimates
\begin{eqnarray}
& & \label{vzinfty} \|v_{z}\|_{1, \infty} + \|V_{z}\|_{1, \infty}\leq \Lambda\big({1\over c_{0}}, \|v\|_{1, \infty} + |h|_{2, \infty}\big), \\
& & \label{Zvzm-2} \|Zv_{z}\|_{m-2} + \|ZV_{z}\|_{m-2} \leq \Lambda\big( {1 \over c_{0}}, \|v\|_{1, \infty} + |h|_{2, \infty} \big) \big(\|v\|_{E^{m-1}} + |h|_{m-{1\over 2 } }\big).
\end{eqnarray}
\end{lem}
{\bf Proof of Lemma \ref{lemestW}}
For the first estimate, we use that
$$ \|v_{z}\|_{L^\infty} + \|V_{z}\|_{L^\infty} \leq \Lambda\big({1\over c_{0}} , \|v\|_{L^\infty}+ \| \nabla \eta \|_{L^\infty} +\| \partial_{t} \eta \|_{L^\infty} \big)
\leq \ \Lambda\big({1\over c_{0}} , \|v\|_{L^\infty} +|h|_{1, \infty}\big)$$
thanks to \eqref{etainfty}, \eqref{dtetainfty}.
In the same way, we get that
$$ \|Zv_{z}\|_{L^\infty} + \|Z V_{z} \|_{L^\infty} \leq \Lambda({1 \over c_{0}}, \|v \|_{1, \infty} + \|\nabla \eta \|_{W^{1, \infty}} + \| \partial_{t} \eta \|_{W^{1, \infty}} \big)
\leq \Lambda({1 \over c_{0}}, \|v \|_{1, \infty} +|h|_{2, \infty} \big)$$
by using also \eqref{dtetainfty}.
By using \eqref{gues} and Proposition \ref{propeta}, we also obtain that
\begin{eqnarray}
\nonumber
\|Z v_{z}\|_{m-2} & \lesssim & \|\nabla \partial_{t} \eta \|_{m-2}+ \|v\|_{L^\infty}( 1+ \|\nabla \eta \|_{m-1}) + \|\nabla \eta \|_{L^\infty} \|v\|_{m-1}\\
\label{Zpetitvz} & \lesssim &
\Lambda\big( \|v\|_{L^\infty} + |h|_{1, \infty} \big) \big(\|v\|_{E^{m-1}} + |h|_{m-{1\over 2 } }\big).\end{eqnarray}
Note that $\partial_{t} \eta$ is not in $L^2$; this is why we are obliged to apply one derivative to $v_{z}$ before estimating
it in $L^2$ in order to use Proposition \ref{propeta}.
Finally, we note that we have
$$\| Z V_{z}\|_{m-2} \lesssim \|Z\big( {1\over \partial_{z} \varphi} \big) v_{z}\|_{m-2} + \|{1\over \partial_{z} \varphi} Zv_{z} \|_{m-2}$$
and hence, by using \eqref{gues}, \eqref{quot}, we obtain that
\begin{equation}
\label{Zvzm-2prov}
\| Z V_{z} \|_{m-2} \leq \Lambda\big( {1 \over c_{0}}, \|\nabla \eta \|_{1, \infty} + \|v_{z}\|_{1, \infty} \big) \big(
\| Zv_{z}\|_{m-2}+ \|\partial_{z} \eta \|_{m-2}\big).\end{equation}
Finally, \eqref{Zvzm-2} follows from \eqref{Zvzm-2prov} by using \eqref{Zpetitvz}, \eqref{vzinfty} and Proposition \ref{propeta}.
This ends the proof of Lemma \ref{lemestW}.
\begin{rem}
\label{remdzVz}
By using similar arguments, we also get that for $m \geq 3$, we have
$$ \|\partial_{z} Z V_{z}\|_{m-3} \leq \Lambda\big( \|v\|_{E^{1, \infty}} + |h|_{2, \infty} \big) \big(\|v\|_{E^{m-1}} + |h|_{m-{1\over 2 } }\big)$$
and
$$ \|\partial_{z} V_{z}\|_{m-3} \leq \Lambda\big( \|v\|_{E^{1, \infty}} + |h|_{2, \infty} \big) \big(\|v\|_{E^{m-2}} + |h|_{m-{3\over 2 } }\big).$$
\end{rem}
By using the identity \eqref{transportW}, we thus get that \begin{eqnarray}\nonumber & & \!\!\!\! Z^\alpha \big( \partial^{\varphi}_{t} + v \cdot \nabla^\varphi \big)v \\ \nonumber& & = \big(\partial_{t}+ v_{y} \cdot \nabla_{y} + V_{z} \partial_{z} \big) Z^\alpha v + \big(v\cdot Z^\alpha {\bf N}- \partial_{t} Z^\alpha \eta \big) \partial^{\varphi}_{z} v - \partial^{\varphi}_{z} Z^\alpha \eta \big( v \cdot {\bf N} - \partial_{t} \eta\big)
\partial^{\varphi}_{z}v + \mathcal{C}^\alpha(\mathcal{T}) \\ \label{T1}& & = \big( \partial^{\varphi}_{t}+ v \cdot \nabla^\varphi\big)Z^\alpha v - \partial^{\varphi}_{z}v \big( \partial^{\varphi}_{t}+ v \cdot \nabla^\varphi) Z^\alpha \eta
+ \mathcal{C}^\alpha(\mathcal{T})
\end{eqnarray}
where the commutator $\mathcal{C}^\alpha(\mathcal{T})$ is defined by
$$ \mathcal{C}^\alpha(\mathcal{T})= \sum_{i=1}^6 \mathcal{T}_{i}^\alpha,$$
\begin{eqnarray*}
& & \mathcal{T}_{1}^\alpha= [Z^\alpha, v_{y}]\partial_{y} v, \quad \mathcal{T}_{2}^\alpha = [Z^\alpha, V_{z},\partial_{z}v],
\quad \mathcal{T}_{3}^\alpha= {1 \over \partial_{z}\varphi} [Z^\alpha, v_{z}]\partial_{z}v,\\
& & \mathcal{T}_{4}^\alpha =\Big( Z^{\alpha}\big( {1 \over \partial_{z} \varphi} \big) + { \partial_{z}\, Z^{\alpha} \eta \over
(\partial_{z} \varphi)^2} \Big) v_{z} \partial_{z} v,
\quad \mathcal{T}_{5}^\alpha= v_{z}\partial_{z}v { [Z^\alpha, \partial_{z}]\eta \over (\partial_{z} \varphi)^2 }
+ V_{z} [Z^\alpha, \partial_{z}]v,\\
& & \mathcal{T}_{6}^\alpha =
[Z^\alpha, v_{z}, {1 \over \partial_{z} \varphi}] \partial_{z} v.
\end{eqnarray*} To estimate these commutators, we use similar arguments as in the proof of Lemma \ref{comi}. In particular, we use the estimates \eqref{com}, \eqref{comsym} and \eqref{quot} combined with Lemma \ref{lemestW} and also again the estimates \eqref{etaharm}, \eqref{etainfty}. This yields
\begin{equation}
\label{CalphaTproof}
\| \mathcal{C}^\alpha(\mathcal{T}) \| \leq
\Lambda\big( {1 \over c_{0}}, |h|_{2, \infty}+ \|v\|_{E^{2, \infty}}\big)\big( \|v\|_{E^{m}} + |h|_{m-{1\over 2}}\big)
\end{equation}
and hence the estimate \eqref{CT} for $\mathcal{C}^\alpha(\mathcal{T})$ is proven.
It remains to compute $ \varepsilon Z^\alpha \Delta^\varphi v$. Since $\nabla^\varphi \cdot v = 0$, it is more convenient to use that
$Z^\alpha \Delta^\varphi v= 2 Z^\alpha \nabla^\varphi \cdot (S^\varphi v).$ By using \eqref{com1}, we thus use the following expansion
$$ 2 Z^\alpha \nabla^\varphi \cdot (S^\varphi v) = 2 \nabla^\varphi \cdot \big( Z^\alpha \, S^\varphi v \big) - 2 \big( \partial^{\varphi}_{z}\, S^\varphi v \big) \nabla^\varphi
( Z^\alpha \varphi) + \mathcal{D^\alpha}\big( S^\varphi v \big)$$
where
\begin{equation}
\label{Ddef} \mathcal{D}^\alpha\big( S^\varphi v \big)_{i}= 2 \, \mathcal{C}_{j}^\alpha \big( S^\varphi v)_{ij}\end{equation}
by using the summation convention over repeated indices.
Next, since we can again expand
\begin{equation}
\label{comS} 2 Z^\alpha \big( S^\varphi v \big) = 2 S^\varphi \big( Z^\alpha v \big) - \partial^{\varphi}_{z} v \otimes \nabla^\varphi Z^\alpha \varphi
- \nabla^\varphi Z^\alpha \varphi
\otimes \partial^{\varphi}_{z} v + \mathcal{E}^\alpha(v)\end{equation}
with
$$\big( \mathcal{E}^\alpha v\big)_{ij}= \mathcal{C}^\alpha_{i}(v_{j})+ \mathcal{C}^\alpha_{j}(v_{i}), $$
we obtain that
\begin{eqnarray}
\label{deltaexp}
& &\varepsilon Z^\alpha \Delta^\varphi v = \\
\nonumber & & 2 \, \varepsilon\, \nabla^\varphi \cdot S^\varphi(Z^\alpha v ) - 2 \varepsilon \nabla^\varphi \cdot \Big( \partial^{\varphi}_{z} v \otimes \nabla^\varphi Z^\alpha \varphi
- \nabla^\varphi Z^\alpha \varphi \otimes \partial^{\varphi}_{z} v \Big) -
2 \varepsilon \big( \partial^{\varphi}_{z}\, S^\varphi v \big) \nabla^\varphi
( Z^\alpha \varphi) \\
\nonumber
& & + \varepsilon \mathcal{D^\alpha}\big( S^\varphi v \big) + \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha v\big).\end{eqnarray}
Thanks to Lemma \ref{comi}, we also have the estimate
\begin{equation}
\label{Ealphap}\|\mathcal{E}^\alpha(v) \| \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} + \|\nabla v\|_{1, \infty}
\big)\big( \|v\|_{m}+ \|\partial_{z} v \|_{m-1}+ |h|_{m-{1\over 2} }\big).\end{equation}
Consequently, by collecting
\eqref{comp1}, \eqref{div1}, \eqref{T1} and \eqref{deltaexp}, we get in view of \eqref{full1}, \eqref{full3} that $(Z^\alpha v, Z^\alpha q, Z^\alpha \varphi)$ solves
the equation
$$ D\mathcal{N}(v,q,\varphi) \cdot\big( Z^\alpha v, Z^\alpha q, Z^\alpha \eta\big) - \big(Z^\alpha v \cdot \nabla^\varphi \big)v + \mathcal{C}^\alpha(q)
+ \mathcal{C}^\alpha(\mathcal{T})= \varepsilon \mathcal{D^\alpha}\big( S^\varphi v \big) + \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha v\big)$$
and the constraint
$$ D d( v, \varphi) \cdot \big(Z^\alpha v, Z^\alpha \eta\big) + \mathcal{C}^\alpha(d)= 0.$$
Consequently, by using Lemma \ref{lemal} and \eqref{NSal1}, \eqref{NSal2}, we get that $(V^\alpha, Q^\alpha)$ with $V^\alpha = Z^\alpha v - \partial^{\varphi}_{z} v\, Z^\alpha \eta$, $Q^\alpha= Z^\alpha q- \partial^{\varphi}_{z}q Z^\alpha \eta$
solves
\begin{align*} \partial^{\varphi}_{t} V^\alpha + v \cdot \nabla^\varphi V^\alpha + \nabla^\varphi Q^\alpha- 2 \varepsilon \nabla^\varphi \cdot S^\varphi V^\alpha
+ \mathcal{C}^\alpha(q)
+ \mathcal{C}^\alpha(\mathcal{T}) \quad \quad \quad \quad \\
= \varepsilon \mathcal{D^\alpha}\big( S^\varphi v \big) + \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha v\big) + \big(\partial_{z}^\varphi v \cdot
\nabla^\varphi v\big)Z^\alpha \eta
\end{align*}
with
$$ \nabla^\varphi \cdot V^\alpha + \mathcal{C}^\alpha (d)= 0.$$
Since we have proven the estimates \eqref{Ealphap}, \eqref{comp2}, \eqref{comd2}, this ends the proof of Lemma
\ref{lemValpha}.
\end{proof}
\subsection{Estimates of the boundary values}
We shall also need to compute the boundary condition satisfied by $Z^\alpha v$ when $\alpha_{3} = 0$
(for $\alpha_{3}\neq 0$, we have $Z^\alpha v= 0$ on the boundary). As a preliminary, in order to control
the commutators, we first establish the following
\begin{lem}
\label{lembord}
For every $s \geq 0$, $s \in \mathbb{R}$, we have the following estimates at $z=0$:
\begin{eqnarray}
\label{dzvb}
| \nabla v(\cdot, 0) |_{s} & \leq & \Lambda\big( \|v\|_{{1, \infty}} + |h|_{2, \infty} \big) \big( |v(\cdot, 0)|_{s+1}
+|h|_{s+1}\big).
\end{eqnarray}
\end{lem}
\begin{proof}
Note that the estimate is obvious for $|\partial_{i} v(\cdot, 0)|_{s}$, $i=1, \,2$. Consequently, the only difficulty
is to estimate $|\partial_{z} v(\cdot, 0)|_{s}$.
Since $\nabla^\varphi \cdot v=0$, we get that
$$ \partial_{z} v \cdot {\bf N}= -\partial_{z} \varphi \big(\partial_{1} v_{1}+ \partial_{2} v_{2}\big)= -\big( A + \partial_{z} \eta \big) \big( \partial_{1} v_{1}
+ \partial_{2} v_{2} \big)$$
and hence that
\begin{equation}
\label{eqdzvn} \partial_{z}v \cdot {\bf n}=-{1 \over |{\bf N}| } \big( A + \partial_{z} \eta \big) \big( \partial_{1} v_{1} + \partial_{2} v_{2}\big), \quad |{\bf N}|= \big( 1 + (\partial_{1} \eta)^2 + (\partial_{2} \eta)^2 \big)^{1 \over 2} .\end{equation}
Consequently, by using \eqref{sobr2} and \eqref{quot} (or actually its version on the boundary), we get that
$$ | \partial_{z}v \cdot {\bf n} |_{s} \leq \Lambda\big( \|v\|_{1, \infty} + |h|_{2, \infty} + \|\partial_{z} \eta \|_{L^\infty} \big)
\big( |v(\cdot, 0)|_{s+1}+ |Z{\bf N}|_{s-1} + |\partial_{z} \eta(\cdot, 0) |_{s}\big) $$ and hence, by using Proposition \ref{propeta} and the trace estimate \eqref{trace} which yields
$$ |\partial_{z} \eta(\cdot, 0) |_{s}\lesssim \| \partial_{z} \eta \|_{H^{ s +{1\over 2}}(\mathcal{S})} \lesssim |h|_{s+1},$$ we get \begin{equation} \label{dzvnb1}
| \partial_{z}v \cdot {\bf n} |_{s} \leq \Lambda\big( \|v\|_{1, \infty} + |h|_{2, \infty} \big)
\big( |v(\cdot, 0)|_{s+1}+ |h|_{s+1}\big). \end{equation} It remains to control $\Pi\big( \partial_{z}v\big)$ where $\Pi= Id - {\bf n}\otimes {\bf n}$ is the orthogonal projection on $({\bf n})^\perp$
which is the tangent space to the boundary.
We shall use the boundary condition \eqref{bord2} which yields
\begin{equation}
\label{bordpi} \Pi\big( S^\varphi v \, {\bf n}) =0.\end{equation}
To expand $\Pi (S^\varphi v\, {\bf n}\big)$, we shall use the local basis in $\Omega_{t}$ induced by \eqref{diff} that we denote
by $(\partial_{y^1}, \partial_{y^2}, \partial_{y^3})$.
Note that by definition, we have by using \eqref{vdef}
that $(\partial_{y^i}u)(t, \Phi(t, \cdot))= \partial_{i}v.$
We also consider the induced riemannian metric defined by $g_{ij}= \partial_{y^i}\cdot \partial_{y^j}$
and we define as usual the metric $g^{ij}$ as the inverse of $g_{ij}$.
Then we obtain that
\begin{equation}
\label{Su} 2 Su \, {\bf n}= \partial_{{\bf n}}u + g^{ij} \big(\partial_{y^j} u \cdot {\bf n}\big) \partial_{y^i}.\end{equation}
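One way to check \eqref{Su} is to write $2 S u\, {\bf n}= \partial_{\bf n} u + W$, where the vector $W$ is characterized by $W\cdot X= {\bf n}\cdot \partial_{X} u$ for every vector $X$; expanding $W$ on the (non orthonormal) basis $(\partial_{y^1}, \partial_{y^2}, \partial_{y^3})$ by means of the inverse metric then gives
$$ W= g^{ij}\, \big( W \cdot \partial_{y^j}\big)\, \partial_{y^i}= g^{ij}\, \big( \partial_{y^j} u \cdot {\bf n} \big)\, \partial_{y^i}. $$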
Consequently, since
\begin{align*}
\partial_{{\bf N}}u & = - \partial_{1} \varphi \partial_{1} u -\partial_{2} \varphi \partial_{2}u + \partial_{3}u \\
& = - \partial_{1} \varphi \partial^{\varphi}_{1} v -\partial_{2} \varphi \partial^{\varphi}_{2}v + \partial^{\varphi}_{3}v \\ & = {1\over \partial_{z} \varphi} \big( 1+ | \nabla h|^2 \big)\partial_{z}v
-\partial_{1}\varphi \partial_{1}v -\partial_{2}\varphi \partial_{2}v,
\end{align*}
we get from \eqref{bordpi} that
\begin{equation}
\label{eqdzvPi} \Pi \partial_{z}v= {\partial_{z} \varphi \over 1 + | \nabla h |^2}\Big( \Pi \big( \partial_{1} h \partial_{1} v + \partial_{2} h \partial_{2} v \big) -
g^{ij} \big( \partial_{j}v \cdot {\bf n} \big) \Pi \partial_{y^i}\Big).
\end{equation}
Consequently, by using the same product estimates as above, we find that
\begin{align*}
| \Pi \partial_{z} v|_{s} & \leq \Lambda\big( \|v\|_{1, \infty} + \|\partial_{z}v\cdot {\bf n} \|_{L^\infty} + |h|_{2, \infty} \big) \big( |v(\cdot, 0)|_{s+1}
+ |h|_{s+1}+ | \partial_{z}v \cdot {\bf n} |_{s} \big) \\
& \leq \Lambda\big( \|v\|_{1, \infty} + |h|_{2, \infty} \big) \big( |v(\cdot, 0)|_{s+1}
+ |h|_{s+1}\big)
\end{align*}
where the last estimate comes from \eqref{dzvnb1} and the fact that from \eqref{eqdzvn} we obviously have
$$ \|\partial_{z}v \cdot {\bf n} \|_{L^\infty} \leq \Lambda( \| \nabla \eta \|_{L^\infty} + \|v\|_{1, \infty}\big).$$
To conclude, it suffices to combine the last estimate and \eqref{dzvnb1} since
$$ | (\partial_{z}v)(\cdot, 0) |_{s} \leq | \Pi \partial_{z}v |_{s} + |\partial_{z}v \cdot {\bf n}|_{s}.$$
\end{proof}
Next, we shall obtain the boundary condition for $Z^\alpha v$. Again, note that the only interesting case occurs when $\alpha_{3}=0$. Indeed,
otherwise $Z^\alpha v=0$, $Z^\alpha \eta= 0$ on the boundary. We start with the study of the boundary condition \eqref{bordv2}.
\begin{lem}
\label{lembordV}
For every $\alpha$, $1 \leq |\alpha | \leq m$ such that $\alpha_{3}= 0$ we have that on $\{z=0\}$
\begin{equation}
\label{bordV}
2 \varepsilon S^\varphi V^\alpha\, {\bf N} -\big( Z^\alpha q - g Z^\alpha h\big){\bf N} + \big(2 \varepsilon S^\varphi v - \big( q - gh) \big) Z^\alpha {\bf N}= \mathcal{C}^\alpha(\mathcal{B}) - 2 \varepsilon Z^\alpha h \partial^{\varphi}_{z}\big( S^\varphi v \big){\bf N}
\end{equation}
where $V^\alpha = Z^\alpha v - \partial^{\varphi}_{z} v\, Z^\alpha \eta$, $Q^\alpha = Z^\alpha q- \partial^{\varphi}_{z}q \, Z^\alpha \eta$
and the commutator $\mathcal{C}^\alpha(\mathcal{B})$ enjoys the estimate:
\begin{equation}
\label{CalphaB}
| \mathcal{C}^\alpha(\mathcal{B}) |_{L^2(\mathbb{R}^2)} \leq \Lambda\big( {1 \over c_{0}} , | h|_{2, \infty} +
\|v\|_{E^{2, \infty}} \big) \big( \varepsilon |v^b|_{m}+ \varepsilon | h |_{m} \big).
\end{equation}
\end{lem}
\begin{proof}
By applying $Z^\alpha$ to the boundary condition \eqref{bord2} and by using the expansion \eqref{comS}, we get that
\begin{equation}
\label{bordV1}
\varepsilon \big( 2 S^\varphi \big( Z^\alpha v \big) - \partial^{\varphi}_{z} v \otimes \nabla^\varphi Z^\alpha \varphi
- \nabla^\varphi Z^\alpha \varphi \otimes \partial^{\varphi}_{z} v \big) {\bf N}- \big( Z^\alpha q - g Z^\alpha h\big) {\bf N}
+ \big( 2 \varepsilon S^\varphi v - (q- gh ) \big) Z^\alpha {\bf N} = \mathcal{C}^\alpha(\mathcal{B})
\end{equation}
where
$$ \mathcal{C}^\alpha(\mathcal{B})= - \varepsilon \mathcal{E}^\alpha(v) - \mathcal{C}^\alpha (\mathcal{B})_{1} + \mathcal{C}^\alpha(\mathcal{B})_{2}$$
with $\mathcal{E}$ defined after \eqref{comS} and
\begin{align*}
& \mathcal{C}^\alpha (\mathcal{B})_{1}= \sum_{ \tiny{ \begin{array}{ll}\beta + \gamma = \alpha, \\
0 <| \beta | <|\alpha| \end{array} }} \varepsilon Z^\beta \big(S^\varphi v \big) \, Z^\gamma {\bf N}, \\ & \mathcal{C}^\alpha(\mathcal{B})_{2}= \sum_{ \tiny{ \begin{array}{ll}\beta + \gamma = \alpha, \\
0 <| \beta | <|\alpha| \end{array} }} Z^\beta \big( q- g h\big)\, Z^\gamma {\bf N}.
\end{align*}
Next, by using the product and commutator estimates of Proposition \ref{sobbord} on the boundary, we obtain that
\begin{align*} | \mathcal{C}^\alpha(\mathcal{B})_{1}|_{L^2(\mathbb{R}^2)} & \lesssim \varepsilon\, \| S^\varphi v\|_{1, \infty} |Z {\bf N}|_{m-2}
+ \varepsilon | S^\varphi v|_{m-1} |Z {\bf N}|_{L^\infty(\mathbb{R}^2)} \\
&\leq \Lambda\big( {1 \over c_{0}}, \|\nabla \eta \|_{1, \infty} + \|\nabla v \|_{1, \infty} \big) \big( \varepsilon |h|_{m} +
\varepsilon |\nabla v |_{m-1}\big)
\end{align*}
and hence, thanks to \eqref{etainfty} and \eqref{dzvb}, we get that
$$ | \mathcal{C}^\alpha(\mathcal{B})_{1}|_{L^2(\mathbb{R}^2)} \lesssim
\Lambda\big( {1 \over c_{0}}, |h|_{ 2, \infty} + \|v\|_{E^{2, \infty}} \big)
\big( \varepsilon |h|_{m}+ \varepsilon |v^b|_{m}\big).$$
To estimate $\mathcal{C}^\alpha(\mathcal B)_{2}$, we first note that thanks to \eqref{bordv2}, we have
$ Z^\beta (q- gh)= Z^\beta\big( 2 \varepsilon S^\varphi v\, {\bf n} \cdot {\bf n}\big) $ and we get
in a similar way that
\begin{align*}
| \mathcal{C}^\alpha(\mathcal{B})_{2}|_{L^2(\mathbb{R}^2)} & \lesssim
\big( | Z q^{NS} |_{L^\infty} + |h|_{L^\infty} \big) | \nabla h |_{m-1} + | \nabla h |_{{1, \infty}} \big( | Z q^{NS}|_{m-2} \big) \\
& \leq \Lambda({1 \over c_{0}}, |h|_{2, \infty} + \|v\|_{E^{2,\infty}} \big) \big( \varepsilon |v^b|_{m} + \varepsilon |h|_{m})
\end{align*}
where we have set $q^{NS}= 2 \varepsilon S^\varphi v\, {\bf n} \cdot {\bf n}$
and the last estimate comes from \eqref{dzvb}.
It remains to estimate $\mathcal{E}^\alpha(v)$ and hence $|\mathcal{C}_{i}^\alpha (v_{j})|_{L^2(\mathbb{R}^2)}.$
At first, we note that $\mathcal{C}_{i, 3}^\alpha(v_{j})= 0$ since we only consider the case that $\alpha_{3}= 0$.
Then, we get as in the proof of Lemma \ref{comi} that
$$ |\mathcal{C}_{i}^\alpha (v_{j})|_{L^2(\mathbb{R}^2)} \leq \Lambda\big( {1 \over c_{0}} , \| \nabla \eta \|_{1, \infty}
+ \| \nabla v \|_{1, \infty}\big) \big( | \nabla v |_{m-1} + |\nabla \eta |_{m-1}\big).$$
Finally, we can use Lemma \ref{lembord} and \eqref{etainfty}, \eqref{etaharm} to get that
$$ |\mathcal{C}_{i}^\alpha (v_{j})|_{L^2(\mathbb{R}^2)} \leq \Lambda\big( {1 \over c_{0}} , | h|_{2, \infty} +
\|v\|_{E^{2, \infty}} \big) \big( |v^b|_{m} + | h |_{m}\big).$$
This yields
$$ | \varepsilon \mathcal{E}^\alpha(v)|_{L^2(\mathbb{R}^2)} \leq \Lambda\big( {1 \over c_{0}} , | h|_{2, \infty} +
\|v\|_{E^{2, \infty}} \big)\big( \varepsilon |v^b|_{m} + \varepsilon | h |_{m}\big)$$
and the estimate \eqref{CalphaB} follows.
To get \eqref{bordV} from \eqref{bordV1}, it suffices to use Lemma \ref{lemal}.
\end{proof}
It remains to study the kinematic boundary condition \eqref{bordv1}. \begin{lem} \label{lembordh}
For every $\alpha$, $1 \leq |\alpha | \leq m$ such that $\alpha_{3}= 0$ we have that on $\{z=0\}$
\begin{equation}
\label{bordh}
\partial_{t} Z^\alpha h - v^b \cdot Z^\alpha {\bf N} - V^\alpha \cdot {\bf N} = \mathcal{C}^\alpha(h)
\end{equation}
where
\begin{equation}
\label{bordhC}
| \mathcal{C}^\alpha(h)|_{L^2(\mathbb{R}^2)}
\leq \Lambda\big( {1 \over c_{0}}, | v|_{E^{1, \infty}} + |h|_{2, \infty} \big) \big( |h|_{m} + \|v\|_{E^{m}} \big). \end{equation} \end{lem} \begin{proof} We immediately get from \eqref{bordv1} that $$ \partial_{t} Z^\alpha h - v^b \cdot Z^\alpha {\bf N} - V^\alpha \cdot {\bf N} = - [ Z^\alpha, v^b_{y}, \nabla_{y}h] + {(\partial_{z} v)^b \over \partial_{z} \varphi} \cdot {\bf N}\,
Z^\alpha h:= \mathcal{C}^\alpha(h)$$
where we recall that we use the notation $f^b= f_{/z=0}$ for the trace on $z=0$.
Consequently, by using the formula \eqref{comsym} on the boundary, we get that
$$ | \mathcal{C}^\alpha(h)|_{L^2(\mathbb{R}^2)} \leq \Lambda\big( {1 \over c_{0}}, |Z v|_{L^\infty} + |h|_{2, \infty} + |\partial_{z}v |_{L^\infty} \big) \big( |h|_{m} + |Z v^b|_{m-2} \big)$$
and the result follows by using the trace estimate \eqref{trace}.
\end{proof}
\section{Pressure estimates} \label{sectionpressure} In view of the equation \eqref{eqValpha}, in order to estimate the right hand side, we need to control the pressure.
This is the aim of this section. By applying $\nabla^\varphi \cdot$ to the equation \eqref{NSv}, we get that the pressure $q$ solves in $\mathcal{S}$ the system \begin{equation} \label{eqpression} \Delta^\varphi q = - \nabla^\varphi \cdot \big(v\cdot \nabla^\varphi v\big), \quad q_{/z=0} = 2 \varepsilon S^\varphi v \, {\bf n} \cdot {\bf n} + gh. \end{equation} We shall split the pressure into an "Euler" and a "Navier-Stokes" part with the following decomposition: \begin{equation} \label{pressuredec} q= q^E + q^{NS} \end{equation} where $q^E$ solves \begin{equation} \label{peuler} \Delta^\varphi q^E = - \nabla^\varphi \cdot \big(v\cdot \nabla^\varphi v\big), \quad q^E_{/z=0}= gh, \end{equation} and $q^{NS}$ solves \begin{equation} \label{qNS} \Delta^\varphi q^{NS} = 0, \quad q^{NS}_{/z=0}= 2 \varepsilon S^\varphi v \,{\bf n} \cdot \,{\bf n}. \end{equation} The idea behind this decomposition is that we shall need more regularity on $v$ to estimate $q^{NS}$ but thanks to the gain
of the $\varepsilon$ factor, this term will be controlled by the viscous regularization. The Euler pressure solves
the same equation with the same boundary condition as in the free boundary Euler equation. Thanks to the explicit expressions of the operators $\partial^{\varphi}_{i}$, we get that the operator $\Delta^\varphi$ can be expressed as \begin{equation} \label{deltaphi} \Delta^{\varphi}f = {1 \over \partial_{z} \varphi} \nabla \cdot \big( E \nabla f \big),\end{equation}
with the matrix $E$ defined by
$$ E= \left( \begin{array}{ccc} \partial_{z} \varphi & 0 & - \partial_{1} \varphi \\
0 & \partial_{z} \varphi & -\partial_{2} \varphi \\ -\partial_{1} \varphi & -\partial_{2} \varphi &
{1 + (\partial_{1} \varphi)^2 + (\partial_{2} \varphi)^2 \over \partial_{z} \varphi} \end{array} \right)
= \frac1{\partial_{z} \varphi} PP^* $$
and where we have for the gradient and the divergence the expressions:
\begin{equation}
\label{graddiv} \nabla^\varphi \cdot v= {1 \over \partial_{z}\varphi} \nabla\cdot \big( P v \big) , \quad \nabla^\varphi f= {1 \over \partial_{z} \varphi} P^* \nabla f,
\quad P= \left( \begin{array}{ccc} \partial_{z} \varphi & 0 & 0 \\ 0 & \partial_{z} \varphi & 0 \\ -\partial_{1} \varphi & - \partial_{2} \varphi & 1 \end{array}
\right).\end{equation}
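Note that \eqref{deltaphi} is a direct consequence of \eqref{graddiv}: since $\Delta^\varphi= \nabla^\varphi \cdot \nabla^\varphi$ and $E= {1 \over \partial_{z}\varphi} P P^*$, we have
$$ \Delta^\varphi f = {1 \over \partial_{z}\varphi}\, \nabla \cdot \Big( P\, {1 \over \partial_{z} \varphi}\, P^* \nabla f \Big)= {1\over \partial_{z}\varphi}\, \nabla \cdot \big( E \nabla f \big). $$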
Note that $E$ is symmetric and positive definite and that if
$ \| \nabla_y \varphi \|_{L^\infty} \leq { 1 \over c_{0}} $ and
$ \partial_{z} \varphi \geq c_{0}>0$ then
there exists $\delta(c_{0})>0$ such that
\begin{equation}
\label{Eminor}
E X \cdot X \geq \delta |X|^2, \quad \forall X \in \mathbb{R}^3.
\end{equation}
Moreover, we note that
\begin{equation}
\label{ELinfty}
\| E\|_{W^{k, \infty}} \leq \Lambda({1 \over c_{0}}, |h|_{k+1, \infty})
\end{equation}
and that by using the decomposition
$$ E= \mbox{Id}_{A} + \tilde{E}, \quad \tilde E= \left( \begin{array}{ccc} \partial_{z}\eta & 0 & - \partial_{1} \eta \\
0 & \partial_{z}\eta & -\partial_{2} \eta \\ -\partial_{1} \eta & -\partial_{2} \eta &
{A\big( (\partial_{1} \eta)^2 + (\partial_{2} \eta)^2\big) - \partial_{z} \eta \over \partial_{z} \varphi} \end{array} \right), \quad
\, \mbox{Id}_{A}=\mbox{diag} (1, 1, 1/A),$$
we have the estimate
\begin{equation}
\label{EHs} \| \tilde{E}\|_{H^s} \leq \Lambda({1 \over c_{0}}, |h|_{1, \infty} \big) |h|_{s+ {1 \over 2}} .
\end{equation}
Before proving the estimates that we need for $q^E$ and $q^{NS}$, we shall establish
general elliptic estimates for the Dirichlet problem of
the operator $\nabla\cdot\big( E \nabla \cdot \big)$.
\begin{lem}
\label{lemelgen}
Consider the elliptic equation in $\mathcal{S}$
\begin{equation}
\label{elgen}
- \nabla \cdot \big( E \nabla \rho)= \nabla \cdot F, \quad \rho_{/z=0}= 0.
\end{equation}
Then we have the estimates:
\begin{eqnarray}
\label{ElH2} & & \| \nabla \rho \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{1, \infty}\big) \|F\|_{L^2}, \quad
\| \nabla^2 \rho \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty}\big)\big( \|\nabla \cdot F \| + \|F\|_{1} \big), \\
\label{ElHmco} & & \| \nabla \rho \|_{k}
\leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} + \|F \|_{H^2_{tan}}+ \| \nabla \cdot F \|_{H^1_{tan}} \big)\big( |h|_{k+{1 \over 2}}+ \|F\|_{k} \big), \quad
k \geq 1, \\
\label{Eldzzco} & & \| \partial_{zz} \rho \|_{k-1} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} + \|F \|_{H^2_{tan}}+ \| \nabla \cdot F \|_{H^1_{tan}} \big) \\
\nonumber & & \quad \mbox{\hspace{6.5cm}} \big( |h|_{k+{1 \over 2}}+ \|F\|_{k} + \| \nabla \cdot F \|_{k-1} \big), \quad k \geq 2.
\end{eqnarray}
\end{lem}
Note that the estimate \eqref{ElHmco} is tame in the sense that it is linear with respect to the highest
norms of the source term $F$ and of $h$.
Nevertheless, in the statement, the function $\Lambda$ depends on $k$.
\begin{proof}
We can construct by standard variational arguments a solution of \eqref{elgen}
which satisfies
the homogeneous $H^1$ estimate
\begin{equation}
\label{rhoH1} \| \nabla \rho \| \leq \Lambda( {1 \over c_{0}}) \| F\|.
\end{equation}
Next, the classical elliptic regularity result for \eqref{elgen} (see \cite{Gilbarg}, \cite{Lannes05} for example) yields \begin{equation} \label{rhoH2}
\| \nabla \rho \|_{H^1} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} \big) \big( \| \nabla \cdot F \| + \|F \|_{H^1_{tan}} \big).
\end{equation}
Before proving \eqref{ElHmco}, we shall need another auxiliary estimate for low order derivatives in order to control $\| \nabla \rho\|_{L^\infty}$.
Indeed, from \eqref{emb}, we get that
\begin{equation}
\label{qinfty0}
\| \nabla \rho \|_{L^\infty}^2 \lesssim \|\partial_{z} \nabla \rho \|_{H^{1}_{tan}} \, \| \nabla \rho \|_{H^2_{tan}}.
\end{equation} By applying the tangential derivative $\partial_{y}^\alpha $ to \eqref{elgen} for $| \alpha |= 2$, we get
that $$- \nabla \cdot \big( E \nabla \partial_{y}^\alpha \rho\big) = \nabla \cdot\big( \partial_{y}^{\alpha} F\big) + \nabla \cdot \big( [\partial_{y}^{\alpha}, E] \nabla \rho\big), \quad
\partial_{y}^\alpha \rho_{/z= 0}= 0$$
and the standard energy estimate gives that
$$ \| \nabla \partial_{y}^\alpha \rho \| \leq \Lambda({1 \over c_{0}}) \big( \| \partial_{y}^\alpha F \| + \| [\partial_{y}^\alpha, E] \nabla \rho\|\big).$$
To control the commutator, we can expand it to get that
$$ \| [\partial_{y}^\alpha, E] \nabla \rho\| \lesssim \|E\|_{1, \infty} \| \nabla \rho \|_{H^1_{tan}} + \|(\partial_{y}^\alpha \tilde E)\nabla \rho\|
\leq\Lambda\big({1 \over c_{0}}, |h|_{2, \infty} \big)\| \nabla \rho \|_{H^1_{tan}} + \|(\partial_{y}^\alpha
\tilde E)\nabla \rho\| $$
where we have used \eqref{ELinfty} for the second estimate. To control the last term above, we use successively the H\"older
inequality and the Sobolev inequality to get that
\begin{equation}
\label{L4} \|(\partial_{y}^\alpha
\tilde E)\nabla \rho\| \lesssim \| \partial_{y}^\alpha \tilde E \|_{L^3} \| \nabla \rho \|_{L^6} \lesssim \|\partial_{y}^\alpha \tilde E\|_{H^{1\over2}}
\, \| \nabla \rho \|_{H^1}, \quad | \alpha |= 2.\end{equation}
Consequently, by using \eqref{EHs} and \eqref{rhoH2}, we obtain
\begin{equation}
\label{rhos0}
\| \nabla \rho \|_{H^2_{tan}} \leq \Lambda\big({1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} \big) \big( \|\nabla \cdot F\| + \|F\|_{H^{2}_{tan}}\big).
\end{equation}
Since the equation \eqref{elgen} gives
\begin{equation}
\label{eqdzzrho} \partial_{zz} \rho ={1 \over E_{33}} \Big( \nabla \cdot F - \partial_{z}\big( \sum_{j<3}E_{3,j}\partial_{j} \rho \big)-
\sum_{i<3, \, j} \partial_{i}\big( E_{i j} \partial_{j} \rho\big) \Big)\end{equation}
we also obtain by using \eqref{rhos0}, \eqref{rhoH2} and \eqref{L4} that
\begin{equation}
\label{dzzrho1} \|\partial_{zz} \rho \|_{H^1_{tan}} \leq \Lambda\big({1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} \big)\big( \| \nabla \cdot F \|_{H^1_{tan}}
+ \|F\|_{H^2_{tan}} \big).\end{equation}
By using the anisotropic Sobolev embedding \eqref{qinfty0} and \eqref{rhos0}, \eqref{dzzrho1}, we finally get that
\begin{equation}
\label{qinfty1}
\|\nabla \rho \|_{L^\infty} \leq \Lambda\big({1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} \big)\big( \| \nabla \cdot F \|_{H^1_{tan}}
+ \|F\|_{H^2_{tan}} \big).
\end{equation}
We can now establish the estimate \eqref{ElHmco} by induction.
Note that the case $k=1$ is just a consequence of \eqref{ElH2}. Next, we want to apply derivatives to the equation,
use the induction assumption to control commutators and use the energy estimate \eqref{rhoH1} for the obtained equation.
Note that for the last part of the argument we need the source term to be in divergence form. Consequently,
a difficulty arises since $Z_{3}$ does not commute with $\partial_{z}$.
To solve this difficulty, we can introduce a modified field such that
\begin{equation}
\label{comtilde}
\tilde Z_{3} \partial_{z}= \partial_{z} Z_{3}.
\end{equation}
This is achieved by setting
$$ \tilde Z_{3} f = Z_{3} f + {1 \over (1- z) ^2} f.$$
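Indeed, with $Z_{3}= {z \over 1- z}\, \partial_{z}$, a direct computation gives
$$ \partial_{z} Z_{3} f= \partial_{z}\Big( {z \over 1-z}\, \partial_{z} f \Big)= {1 \over (1-z)^2}\, \partial_{z} f + {z \over 1-z}\, \partial_{zz} f = \tilde{Z}_{3} \partial_{z} f, $$
so that \eqref{comtilde} holds.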
Let us also set $\tilde Z ^\alpha = Z_{1}^{\alpha_{1}} Z_{2}^{\alpha_{2}} \tilde{Z}_{3}^{\alpha_{3}}.$ Note that
$$\|\tilde Z^\alpha f - Z^\alpha f \| \lesssim \|f\|_{|\alpha| - 1}.$$
By applying $\tilde Z^\alpha$ to \eqref{elgen}, we first get
$$ \nabla \cdot \big( Z^\alpha \big( E \nabla \rho )\big) = \nabla \cdot\Big( Z^\alpha F +\big( \tilde{Z}^\alpha - Z^\alpha \big) F_{h}
- \big( \tilde Z^\alpha - Z^\alpha\big) (E\nabla \rho)_{h}\Big)$$
where for a vector $X \in \mathbb{R}^3$, we set $X_{h}= (X_{1}, X_{2}, 0)^t$. Next, this yields
\begin{equation}
\label{elgender}
- \nabla \cdot \big( E \cdot \nabla Z^\alpha\rho\big) = \nabla \cdot\Big( Z^\alpha F +\big( \tilde{Z}^\alpha - Z^\alpha \big) F_{h}
- \big( \tilde Z^\alpha - Z^\alpha\big) (E\nabla \rho)_{h}\Big) + \nabla \cdot \mathcal{C}
\end{equation}
where the commutator $\mathcal{C}$ is given by
$$
\mathcal{C}= - \big( E [Z^\alpha, \nabla] \rho\big) - \big( \sum_{ \beta + \gamma= \alpha, \beta \neq 0}
c_{\beta, \gamma} Z^\beta E \cdot Z^\gamma \nabla \rho \big).$$
From the standard energy estimate since $Z^\alpha \rho$ vanishes
on the boundary, we get that
\begin{equation}
\label{elprov3} \| \nabla \rho\|_{k} \leq \Lambda({1 \over c_{0}} ) \big( \|F\|_{k} + \|E \nabla \rho \|_{k-1} + \| E [Z^\alpha , \nabla] \rho \| + \big\| \sum_{ \beta + \gamma= \alpha, \beta \neq 0} c_{\beta, \gamma}
Z^\beta E \, Z^\gamma \nabla \rho \big\|\big).\end{equation}
To estimate the right hand-side, we first use \eqref{gues} and \eqref{EHs} to get
\begin{eqnarray*} \|E \nabla \rho \|_{k-1} & \lesssim & (1 + \|\tilde{E}\|_{L^\infty}) \|\nabla \rho \|_{k-1} + \| \nabla \rho \|_{L^\infty} \|\tilde{E}\|_{k-1} \\
& \lesssim & \Lambda ({1 \over c_{0}}, |h|_{1, \infty }) \| \nabla \rho \|_{k-1} + \Lambda( {1 \over c_{0}}, |h|_{1, \infty}) |h|_{k- {1 \over 2}} \| \nabla \rho \|_{L^\infty}.
\end{eqnarray*}
Next, $\|\nabla \rho \|_{k-1}$ is controlled by the induction assumption and we use \eqref{qinfty1} to get
$$ \|E \nabla \rho \|_{k-1} \lesssim \Lambda \big({1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} + \|F\|_{H^2_{tan}} + \| \nabla \cdot F\|_{H^1_{tan}} \big)\big(
|h|_{k- {1 \over 2}} + \|F\|_{k- 1} \big).$$
In a similar way, we can use \eqref{idcom} to get that
$$[Z^\alpha, \partial_{z}]= \sum_{ | \beta | \leq | \alpha |-1} \varphi_{\alpha, \beta} \partial_{z}\big( Z^\beta \cdot \big)$$
for some harmless bounded functions $\varphi_{\alpha, \beta}$, and hence
$$ \| E [Z^\alpha , \nabla] \rho \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{1, \infty}\big) \| \nabla \rho \|_{k-1}.$$
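As a simple illustration of the structure of $[Z^\alpha, \partial_{z}]$ used here, note that, by \eqref{comtilde}, for a single normal field we have
$$ [Z_{3}, \partial_{z}] f = \big( Z_{3} - \tilde Z_{3} \big) \partial_{z} f = - {1 \over (1-z)^2}\, \partial_{z} f,$$
which is of the stated form with $\beta = 0$ and $\varphi_{\alpha, \beta}= -(1-z)^{-2}$, a bounded function on $\{z \leq 0\}$.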
Moreover, by using \eqref{gues} and \eqref{EHs}, we get that
$$\big\| \sum_{ \beta + \gamma= \alpha, \beta \neq 0} c_{\beta, \gamma}
Z^\beta E \, Z^\gamma \nabla \rho \big\| \lesssim \Lambda( |h|_{2, \infty} + \|\nabla \rho \|_{L^\infty} )(|h|_{k+ {1 \over 2}} + \|\nabla \rho\|_{k-1}).$$
To conclude, we plug the last two estimates into \eqref{elprov3} and use the induction assumption, together with \eqref{qinfty1}, to estimate $\| \nabla \rho\|_{k-1}$. This proves \eqref{ElHmco}.
Finally, to get \eqref{Eldzzco}, it suffices to use the equation \eqref{elgen} in the form \eqref{eqdzzrho} together with the product estimate \eqref{gues}, the estimates \eqref{ELinfty}, \eqref{EHs}, \eqref{qinfty1}, and the previous estimate
\eqref{ElHmco}.
\end{proof}
Also, it will be convenient to have at our disposal the same kind of estimates as in Lemma \ref{lemelgen} but with
a nonhomogeneous Dirichlet boundary condition.
\begin{lem}
\label{Elnh}
Consider the elliptic equation in $\mathcal{S}$
\begin{equation}
\label{eqelgenbis}
- \nabla \cdot \big( E \nabla \rho)= 0, \quad \rho_{/z=0}= f^b.
\end{equation}
Then we have the estimate:
\begin{eqnarray}
\label{estElnh1}
& & \| \nabla \rho \|_{H^k(\mathcal{S})} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3}+|f^b|_{1, \infty} + |f^b|_{5 \over 2} \big)
\big( |h|_{k+{1 \over 2}} + |f^b|_{k+{1 \over 2}}\big).
\end{eqnarray}
\end{lem}
The regularity involving $|h|_{3}$ in the estimate \eqref{estElnh1} is not optimal; nevertheless, since in the end we shall
use the Sobolev embedding to control $|h|_{2, \infty}$, it is sufficient for our purposes.
\begin{proof}
We write $\rho$ under the form
$\rho= \rho^H + \rho^r$
where $\rho^H$ will absorb the boundary data and $\rho^r$ solves
\begin{equation}
\label{eqqr}
- \nabla \cdot \big( E \nabla \rho^{r} \big)= \nabla \cdot\big( E \nabla \rho^H\big), \quad \rho^{r}_{/z= 0}= 0.
\end{equation}
For $\rho^H$, we choose as in \eqref{eqeta}
$$ \hat \rho^H(\xi, z)= \chi(z \, \xi) \hat{f}^b.$$
Consequently, the estimates of Proposition \ref{propeta} are also valid for $\rho^H$; in particular, we have for every $k \geq 0$:
\begin{equation}
\label{estqrbis} \|\nabla \rho^H \|_{H^k} \lesssim |f^b|_{k+ {1 \over 2 }}, \quad \|\rho^H\|_{k, \infty} \lesssim \| f^b\|_{k, \infty}.
\end{equation}
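Let us sketch why the first estimate in \eqref{estqrbis} holds for $k=0$; we assume, as is standard for this kind of lifting, that $\chi$ is smooth, compactly supported and equal to $1$ at the origin (so that in particular $\rho^H_{/z=0}= f^b$). By the Plancherel theorem in the $y$ variable and the change of variables $s= z |\xi|$, we get
$$ \| \partial_{z} \rho^H \|_{L^2(\mathcal{S})}^2 = \int_{\mathbb{R}^2} \Big( \int_{-\infty}^0 \big| \xi \cdot (\nabla \chi)(z \xi) \big|^2\, dz \Big) |\hat{f}^b(\xi)|^2\, d\xi
\lesssim \int_{\mathbb{R}^2} |\xi| \, |\hat{f}^b(\xi)|^2\, d\xi \lesssim |f^b|_{1 \over 2}^2,$$
and the tangential derivatives $\partial_{i} \rho^H$, $i=1, \, 2$, are handled in the same way with $\chi$ in place of $\nabla \chi$.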
Next, we can estimate the solution of \eqref{eqqr}. In Lemma \ref{lemelgen}, we have studied the elliptic problem
\eqref{elgen} in Sobolev conormal spaces. Nevertheless, by using the same approach one can also get
the estimate corresponding to \eqref{ElHmco} in standard Sobolev spaces. This yields
\begin{align*}
\| \nabla \rho^r \|_{H^k} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} + \|E\nabla \rho^H\|_{H^2} \big)
\big( |h|_{k+{1 \over 2}} + \| E \nabla \rho^H \|_{H^k} \big).
\end{align*}
To estimate the right-hand side, since $\tilde E$ and $\rho^H$ have standard Sobolev regularity in $\mathcal{S}$,
we get
thanks to \eqref{estqrbis}, \eqref{EHs} and \eqref{ELinfty} that
\begin{eqnarray*}
& & \| E \nabla \rho^H \|_{H^k} \lesssim \Lambda\big( |h|_{1, \infty} + |f^b|_{1, \infty}\big) \big( |h|_{k+ {1 \over 2 }} + |f^b|_{k+ {1 \over 2 }}\big).
\end{eqnarray*}
This yields
\begin{equation}
\label{estqr}\| \nabla \rho^r \|_{H^k} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} +|f^b|_{1, \infty} + |h|_{3} + |f^b|_{5 \over 2} \big)
\big( |h|_{k+{1 \over 2}} + |f^b|_{k+{1 \over 2}}\big).
\end{equation}
Consequently, \eqref{estElnh1} is proven.
\end{proof}
We shall first establish regularity estimates for $q^{NS}$.
\begin{prop}
\label{propPNS}
For $q^{NS}$, we have for $m \geq 1$ the estimate:
\begin{equation}
\label{estqNS}
\| \nabla q^{NS} \|_{H^{m-1}} \leq \Lambda\big( {1 \over c_{0}},|h|_{2, \infty} + \|v\|_{E^{2, \infty}} + |h|_{4} + \|v\|_{E^4} \big)
\big( | \varepsilon v^b |_{m+{1 \over 2}} + \varepsilon |h|_{m+{1 \over 2 }}\big), \end{equation} and the $L^\infty$ estimate \begin{equation}
\label{qNSLinfty} \|\nabla q^{NS} \|_{L^\infty} \leq \varepsilon \Lambda\big( {1 \over c_{0}},|h|_{2, \infty} + \|v\|_{E^{2, \infty}} + |h|_{4} + \|v\|_{E^4} \big). \end{equation}
\end{prop}
\begin{proof} We need to estimate the solution of \begin{equation} \label{qNSbis}
- \nabla \cdot \big( E \nabla q^{NS} \big)= 0, \quad q^{NS}_{/z= 0}= 2 \varepsilon S^\varphi v {\bf n} \cdot {\bf n},
\end{equation}
therefore, it suffices to use Lemma \ref{Elnh} with $f^b =2 \varepsilon S^\varphi v\, {\bf n} \cdot {\bf n}$. We first have
$$ |f^b|_{1, \infty} \leq \Lambda \big({1 \over c_{0}}, |v|_{E^{2, \infty}} + |h|_{2, \infty} \big)$$
and hence by using \eqref{eqdzvn} and \eqref{eqdzvPi}, we find
$$ |f^b|_{1, \infty} \leq \Lambda \big({1 \over c_{0}}, |v|_{E^{1, \infty}} + |h|_{2, \infty} + \varepsilon \| v \|_{2, \infty}\big)$$
and
\begin{align*} | f^b|_{m-{1 \over 2}} & \leq \Lambda\big( {1 \over c_{0}}, |v|_{E^{1, \infty}} + |h|_{2, \infty} \big) \big( \varepsilon | \nabla v(\cdot, 0)|_{m-{1 \over2}}
+ \varepsilon |h|_{m+{1 \over 2}}\big) \\
& \leq\Lambda\big( {1 \over c_{0}}, |v|_{E^{1, \infty}} + |h|_{2, \infty} \big) \big( \varepsilon | v(\cdot, 0)|_{m+{1 \over2}}
+ \varepsilon |h|_{m+{1 \over 2}}\big)
\end{align*}
where the last estimate comes from \eqref{dzvb}.
In particular, since the above estimate holds for every $m$, we also have
$$ |f^b|_{5\over 2 } \leq \Lambda\big( {1 \over c_{0}}, |v|_{E^{1, \infty}} + |h|_{2, \infty} + \varepsilon |h|_{7\over 2} + \varepsilon | v^b|_{7\over 2}\big).$$
To conclude and get \eqref{estqNS}, we use the trace estimate \eqref{trace} which yields
\begin{equation}
\label{trace7/2} | v^b|_{7\over 2} \lesssim \| \partial_{z}v \|_{3} + \|v\|_{4} . \end{equation}
Finally, to obtain \eqref{qNSLinfty}, it suffices to use the standard Sobolev embedding in $\mathcal{S}$ to write
$$ \|\nabla q^{NS} \|_{L^\infty} \lesssim \| \nabla q^{NS}\|_{2} $$
combined with \eqref{estqNS} and the trace estimate \eqref{trace7/2}.
This ends the proof of Proposition \ref{propPNS}.
\end{proof}
It remains to estimate $q^E$.
\begin{prop}
\label{proppE}
For $q^{E}$, we have the estimate: \begin{eqnarray}
\label{estqE}
& & \| \nabla q^{E} \|_{m-1} + \|\partial_{zz} q^E \|_{m-2} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty}+ \|v\|_{E^{1 , \infty}} + |h|_{3} +
|v|_{E^3}\big)
\big( |h|_{m-{1 \over 2}} + |v|_{E^m} \big), \\
\label{estqElinfty} & & \| \nabla q^E \|_{1, \infty} + \| \partial_{z}^2 q^E \|_{L^\infty} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty}+ \|v\|_{E^{1 , \infty}} +
|h|_{4} + \|v\|_{E^{4}}\big), \\
\label{estqElinfty2} & & \| \nabla q^E \|_{2, \infty} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty}+ \|v\|_{E^{1 , \infty}} +
|h|_{5} + \|v\|_{E^{5}}\big) \end{eqnarray}
\end{prop}
\begin{proof}
By using \eqref{peuler} and \eqref{graddiv}, we see that we have to solve the elliptic problem
\begin{equation}
\label{peuler2}
- \nabla \cdot \big( E \nabla q^E\big) = \nabla \cdot \big( P ( v \cdot \nabla^\varphi v)\big) = \partial_{z}\varphi \nabla^\varphi v \cdot \nabla^\varphi v,
\, (y,z) \in \mathcal{S},
\quad q^E_{/z=0} = gh.
\end{equation}
We can split this equation in two parts by setting $q^E = q^b + q^i$ where $q^b$ solves the homogeneous equation
$$ - \nabla \cdot \big( E \nabla q^b\big) = 0$$
in $\mathcal{S}$ with nonhomogeneous boundary condition $ q^b_{/z=0}= gh$
and $q^i$ solves
$$ - \nabla \cdot \big( E \nabla q^i\big) = \nabla \cdot \big( P ( v \cdot \nabla^\varphi v)\big) = \partial_{z}\varphi \nabla^\varphi v \cdot \nabla^\varphi v,
\, (y,z) \in \mathcal{S},
\quad q^i_{/z=0} = 0.$$
We get the estimate of $q^b$ as a consequence of Lemma \ref{Elnh} with $f^b= gh$. We find
$$ \| \nabla q^b \|_{m-1} + \|\partial_{zz} q^b \|_{m-2} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3 }\big) |h|_{m-{1 \over 2}}.$$
To estimate $q^i$ we can use Lemma \ref{lemelgen}. This yields
\begin{multline*} \| \nabla q^i \|_{m-1} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} + \|P ( v \cdot \nabla^\varphi v)\|_{2}
+ \|\partial_{z}\varphi \nabla^\varphi v \cdot \nabla^\varphi v\|_{1} \big) \\ \big( |h|_{m-{1 \over 2 }}
+ \|P ( v \cdot \nabla^\varphi v)\|_{m-1} \big).
\end{multline*}
By using again product estimates we thus find
$$ \| \nabla q^i \|_{m-1} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty}+ \|v\|_{E^{1 , \infty}} + |h|_{3 } + \|v\|_{E^3} \big)
\big( |h|_{m-{1 \over 2}} + |v|_{E^m} \big).$$
In a similar way, using Lemma \ref{lemelgen}, we get the estimate for $ \partial_{zz} q^i$.
To get \eqref{estqElinfty}, we use \eqref{sob} to write that
$$ \| \nabla q^E \|_{k, \infty}^2 \lesssim \| \partial_{z} \nabla q^{E}\|_{k+1}\, \| \nabla q^E \|_{k+2}, \quad k=1, \, 2,$$
and hence we get the estimate for $\| \nabla q^E \|_{k,\infty},$ $k=1, \, 2$ by using \eqref{estqE} with $m= 4$ or $m=5$. Finally, to estimate $\|\partial_{zz} q^E \|_{L^\infty}$, it suffices
to rewrite the equation \eqref{peuler2} under the form \eqref{eqdzzrho} and to use the estimate for $\| \nabla q^E \|_{1, \infty}$ just established.
This ends the proof of Proposition \ref{proppE}.
\end{proof}
It remains a last estimate for $q^E$ that will be used for the control of the Taylor sign condition.
\begin{prop}
\label{proptaylor}
For $T\in (0, T^\varepsilon)$, we have the estimate
\begin{multline*} \int_{0}^T | (\partial_{z} \partial_{t} q^E)^b |_{L^\infty} \leq \int_{0}^T \Lambda \big( {1 \over c_{0}}, \|v\|_{6}+ \|\partial_{z}v \|_{4} + \|v\|_{E^{2, \infty}} +|h|_{6}+ |h|_{3, \infty}\big) \\
\cdot \big( 1 + \varepsilon \|\partial_{zz}v\|_{L^\infty} + \varepsilon \| \partial_{zz} v\|_{3}\big).
\end{multline*}
\end{prop}
The proof of Proposition \ref{proptaylor} is a consequence of the following general lemma:
\begin{lem}
\label{lemtaylor}
Consider the solution $\rho$ of the equation
\begin{equation}
\label{taylorel}
- \nabla \cdot \big(E \nabla \rho\big)= \nabla \cdot F, \quad \rho_{/z=0}=0.
\end{equation}
Then, for every $k>1$, $\rho$ satisfies the estimate
$$ \|(\partial_{z} \rho)^b \|_{L^\infty} \leq \Lambda \big( {1 \over c_{0}}, |h|_{k+2, \infty} \big)
\big( \| F\|_{H^{k+ 1}_{tan}} + |F^b|_{L^\infty(\mathbb{R}^2)}\big).$$
\end{lem}
We recall that we denote the trace of a function $f$ on the boundary $z=0$ by $f_{/z=0}$ or $f^b$.
\begin{proof}
The first step is to establish that $\rho$ satisfies for every $m\geq 0$ the estimate:
\begin{equation}
\label{eltaylor1}
\| \nabla \rho \|_{H^m_{tan}} \leq \Lambda \big( {1 \over c_{0}}, |h|_{m+1, \infty}\big) \|F\|_{H^m_{tan}}.
\end{equation}
To get this estimate, it suffices to apply $\partial_{y}^\alpha$ with $| \alpha| \leq m$ to \eqref{taylorel}
and to use the standard energy estimate in a classical way.
Next, we can rewrite \eqref{taylorel} as
\begin{equation}
\label{taylorel2}
- E_{33}\partial_{zz} \rho = \partial_{z} F + R
\end{equation}
with $R$ given by
\begin{equation}
\label{Rtaylor}
R= \sum_{i<3} \partial_{i} F_{i} + \partial_{z} E_{33} \partial_{z} \rho + \sum_{i<3}\Big( \partial_{i}\big( E \nabla \rho)_{i} +\partial_{z}(E_{3i}\partial_{i}\rho)\Big).
\end{equation}
For the moment, we see \eqref{taylorel2} as an ordinary differential equation in $z$, the variable $y$ being only a parameter.
We multiply \eqref{taylorel2} by $\partial_{z} \rho$ and we integrate in $z$ to get after integration by parts:
\begin{equation}
\label{eltaylor2}| \partial_{z} \rho(0)|^2 \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} \big)
\big( |\partial_{z} \rho|_{L^2_{z}}^2 + |R|_{L^2_{z}}^2 + |F(0)|\, | \partial_{z} \rho(0)| + \big| \int_{-\infty}^0 F\, \partial_{zz} \rho\, dz \big|\big).\end{equation} Note that here $| \cdot |$ stands for the absolute value. To estimate the last term, we use again
the equation \eqref{taylorel2} to obtain
$$ \big| \int_{-\infty}^0 F\, \partial_{zz} \rho\, dz \big| \leq \big| \int_{-\infty}^0 F\, {\partial_{z} F \over E_{33}} \, dz \big|
+ \Lambda\big({1 \over c_{0}}\big) |F|_{L^2_{z}}\, |R|_{L^2_{z}}$$
and hence, after an integration by parts, we find
$$ \big| \int_{-\infty}^0 F\, \partial_{zz} \rho\, dz \big| \leq \Lambda\big({1 \over c_{0}}, |h|_{2, \infty}\big)
\big( |F(0)|^2 + |F|_{L^2_{z}}^2 + |R|_{L^2_{z}}^2 \big).$$
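Here the integration by parts is simply (assuming that $F$ vanishes as $z \to -\infty$)
$$ \int_{-\infty}^0 F\, {\partial_{z} F \over E_{33}}\, dz = {|F(0)|^2 \over 2 E_{33}(0)} - {1 \over 2} \int_{-\infty}^0 |F|^2\, \partial_{z}\Big( {1 \over E_{33}} \Big)\, dz,$$
the boundary term accounting for the $|F(0)|^2$ contribution, while the last integral is controlled by $\Lambda\big( {1 \over c_{0}}, |h|_{2, \infty}\big) |F|_{L^2_{z}}^2$.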
Consequently, we get from \eqref{eltaylor2} that
$$| \partial_{z} \rho(0)| \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} \big)
\big( |\partial_{z} \rho|_{L^2_{z}} + |R|_{L^2_{z}} + |F(0)| \big).$$
Now, by taking the supremum in $y$ and by using the two-dimensional Sobolev embedding for the right hand side (except for the last term), we find that
$$ | (\partial_{z} \rho)^b|_{L^\infty(\mathbb{R}^2)} \leq \Lambda \big( {1 \over c_{0}}, |h|_{2, \infty} \big)
\big( \|\partial_{z} \rho\|_{H^k_{tan}} + \|R\|_{H^k_{tan}} + |F^b|_{L^\infty(\mathbb{R}^2)} \big)$$
for $k>1$.
To conclude, we see from the definition of $R$ that
$$ \|R\|_{H^k_{tan}} \lesssim \Lambda\big( {1 \over c_{0}}, |h|_{k+2,\infty}\big) \|\nabla \rho\|_{H^{k+1}_{tan}}+ \|F\|_{H^{k+1}_{tan}}$$
and hence, we get from \eqref{eltaylor1} that
$$ | (\partial_{z} \rho)^b|_{L^\infty(\mathbb{R}^2)} \leq \Lambda \big( {1 \over c_{0}}, |h|_{k+2, \infty} \big)
\big( \| F\|_{H^{k+ 1}_{tan}} + |F^b|_{L^\infty(\mathbb{R}^2)}\big).$$
This ends the proof of Lemma \ref{lemtaylor}.
\end{proof}
We are now in position to give the proof of Proposition \ref{proptaylor}:
{\bf Proof of Proposition \ref{proptaylor}.}
We note that $\partial_{t} q^E$ solves the elliptic equation
$$ \nabla\cdot( E \nabla \partial_{t} q^E) = \nabla \cdot\big( \partial_{t} \big( P (v\cdot \nabla^\varphi v) )\big)-
\nabla \cdot \big( \partial_{t} E \, \nabla q^E\big), \quad \partial_{t} q^E_{/z=0} = g \partial_{t} h $$
Consequently, we can again split $\partial_{t} q^E= q^i + q^B$, where $q^B$ absorbs the boundary term:
$$ \nabla\cdot( E \nabla q^B)= 0, \quad q^B_{/z=0}= g \partial_{t} h$$
and $q^i$ solves
\begin{equation}
\label{eqtaylorqi}
\nabla\cdot( E \nabla q^i) = \nabla \cdot\big( \partial_{t} \big( P (v\cdot \nabla^\varphi v)\big) \big)-
\nabla \cdot \big( \partial_{t} E \, \nabla q^E\big), \quad q^i_{/z=0} = 0.
\end{equation}
The estimate of $\|\nabla q^B\|_{L^\infty}$ is a consequence of Lemma \ref{Elnh}. Indeed, for $k>3/2$, we have
$$ \| \nabla q^B\|_{L^\infty} \lesssim \| \nabla q^B \|_{H^k}$$
and from the estimate of Lemma \ref{Elnh}, we find that
$$
\| \nabla q^B\|_{L^\infty} \leq \Lambda\big({ 1 \over c_{0}}, |h|_{2, \infty} + |h|_{3} +|\partial_{t} h|_{3}\big).
$$
From the boundary condition \eqref{bordv1}, we obtain
$$ |\partial_{t}h|_{3} \leq \Lambda\big( \|v\|_{L^\infty} + |h|_{1, \infty}\big) \big( |h|_{4}+ |v^b|_{3}\big)
\leq \Lambda\big( \|v\|_{L^\infty} + |h|_{1, \infty}+ |h|_{4} + \|v\|_{E^4}\big)$$
where the last estimate comes from the trace inequality \eqref{trace}. We thus finally get for $q^B$ that
\begin{equation}
\label{qBtaylor}
\| \nabla q^B\|_{L^\infty} \leq \Lambda\big({ 1 \over c_{0}}, |h|_{2, \infty} + \|v\|_{L^\infty} + |h|_{4} + \|v\|_{E^4}\big).
\end{equation}
It remains to estimate $q^i$. We shall use Lemma \ref{lemtaylor}, but we need to be careful with the structure of the right-hand side
in \eqref{eqtaylorqi}. We first note thanks to the identities \eqref{graddiv} that
$$ \nabla \cdot\big( \partial_{t} \big( P (v\cdot \nabla^\varphi v) )\big)= \nabla \cdot \big( (\partial_{t} P) v\cdot \nabla^\varphi v\big)
+ \nabla \cdot \big( P (\partial_{t}v \cdot \nabla^\varphi v) \big) + \partial_{z}\varphi \nabla^\varphi\cdot \big( v \cdot \partial_{t}(\nabla^\varphi v)\big).$$
For the last term, by using again \eqref{graddiv} and the summation convention on repeated indices, we can write
$$ \partial_{z}\varphi \nabla^\varphi \cdot \big( v \cdot \partial_{t}(\nabla^\varphi v)\big)=
\partial_{z}\varphi\, \partial_{i}^\varphi\Big( v \cdot \partial_{t}\big( {1 \over \partial_{z}\varphi }P^*\nabla v_{i}\big)\Big)
= \partial_{z}\varphi\, \partial_{i}^\varphi\Big( v \cdot \partial_{t}\big( {1 \over \partial_{z}\varphi }P^*\big) \nabla v_{i}\Big) +
\partial_{z}\varphi \, \partial_{i}^\varphi\big( v \cdot \nabla^\varphi \partial_{t} v_{i}\big)$$
and the crucial observation is that since $\nabla^\varphi \cdot v=0$, we have
$$ \partial_{i}^\varphi\big( v \cdot \nabla^\varphi \partial_{t} v_{i}\big)= \partial_{i}^\varphi \big( v_{j}\partial_{j}^\varphi \partial_{t}v_{i}\big)
= \partial_{j}^\varphi \big( \partial_{t}v_{i}\partial_{i}^\varphi v_{j}\big) +
\partial_{j}^\varphi \big( \partial^{\varphi}_{i} \partial_{t}v_{i} \, v_{j}\big).$$
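Since this is the key cancellation of the proof, let us sketch the elementary computation behind it. Writing $w= \partial_{t} v$, using the Leibniz rule, the fact that the operators $\partial^{\varphi}_{i}$ commute (being the Cartesian derivatives expressed in the fixed domain) and $\partial^{\varphi}_{j} v_{j}= 0$, we have
$$ \partial_{i}^\varphi\big( v_{j}\, \partial_{j}^\varphi w_{i}\big) = \partial^{\varphi}_{i} v_{j}\, \partial^{\varphi}_{j} w_{i} + v_{j}\, \partial^{\varphi}_{j} \partial^{\varphi}_{i} w_{i}
= \partial^{\varphi}_{j}\big( \partial^{\varphi}_{i} v_{j}\, w_{i}\big) - w_{i}\, \partial^{\varphi}_{i}\big( \partial^{\varphi}_{j} v_{j}\big) + \partial^{\varphi}_{j}\big( v_{j}\, \partial^{\varphi}_{i} w_{i}\big) - \partial^{\varphi}_{j} v_{j}\, \partial^{\varphi}_{i} w_{i},$$
and the second and fourth terms of the right-hand side vanish thanks to the divergence free condition.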
Again since $\nabla^\varphi \cdot v= 0$, we can write
$$\partial^{\varphi}_{i} \partial_{t}v_{i} = - {1 \over \partial_{z}\varphi} \nabla \cdot(\partial_{t} P v)
$$
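Indeed, recalling (as used above, see \eqref{graddiv}) that $\nabla \cdot (P X)= \partial_{z}\varphi\, \nabla^\varphi \cdot X$ for a vector field $X$, the divergence free condition reads $\nabla \cdot (P v)=0$; differentiating this identity with respect to time gives
$$ 0 = \partial_{t}\big( \nabla \cdot (P v) \big) = \nabla \cdot \big( \partial_{t} P\, v\big) + \nabla \cdot \big( P\, \partial_{t} v\big) = \nabla \cdot \big( \partial_{t} P\, v\big) + \partial_{z}\varphi\, \partial^{\varphi}_{i} \partial_{t} v_{i},$$
which is the claimed identity.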
and hence, we obtain
$$ \partial_{i}^\varphi\big( v \cdot \nabla^\varphi \partial_{t} v_{i}\big)= \partial_{j}^\varphi \big( \partial_{t}v_{i}\partial_{i}^\varphi v_{j}\big)
- \partial_{j}^\varphi \Big( v_{j}
{1 \over \partial_{z}\varphi} \nabla \cdot(\partial_{t} P v) \Big).$$
Consequently, we have proven that
\begin{multline*} \nabla \cdot\big( \partial_{t} \big( P (v\cdot \nabla^\varphi v) )\big)=
\nabla\cdot \big( (\partial_{t} P) v\cdot \nabla^\varphi v\big)+ 2 \nabla \cdot \big( P (\partial_{t}v \cdot \nabla^\varphi v) \big)
+ \nabla \cdot \big( v \cdot \partial_{t}\big( {1 \over \partial_{z}\varphi }P^*\big) \nabla v\big) \\
- \nabla \cdot\Big( {1 \over \partial_{z}\varphi} \nabla \cdot(\partial_{t} P v) \, v \Big).
\end{multline*}
This allows us to observe that
\eqref{eqtaylorqi} is under the form \eqref{taylorel} with
\begin{equation}
\label{defFtaylor} F= (\partial_{t} P) v\cdot \nabla^\varphi v + 2 P (\partial_{t}v \cdot \nabla^\varphi v)+ v \cdot \partial_{t}\big( {1 \over \partial_{z}\varphi }P^*\big) \nabla v - {1 \over \partial_{z}\varphi} \nabla \cdot(\partial_{t} P\, v)\, v + \partial_{t} E \nabla q^E.\end{equation}
Consequently, by using Lemma \ref{lemtaylor} for $k=2$, we get that
$$ | \partial_{z}q^i_{/z=0}|_{L^\infty} \leq \Lambda\big({ 1 \over c_{0}}, |h|_{4, \infty}\big)
\big( \|F\|_{3} + |F^b|_{L^\infty}\big)$$
and hence that
\begin{equation}
\label{taylorqi3} \int_{0}^T | \partial_{z}q^i_{/z=0}|_{L^\infty} \leq \int_{0}^T
\Lambda\big({ 1 \over c_{0}}, |h(t)|_{4, \infty}\big) \big( \|F\|_{3} + |F^b|_{L^\infty}\big).
\end{equation}
Next, by using Proposition \ref{propeta} and \eqref{gues}, we get that
$$ \|F\|_{3} \leq \Lambda \big( {1 \over c_{0}}, \|v\|_{E^4}+ \|v\|_{E^{1, \infty}} +|h|_{4}+ |h|_{1, \infty}+ |\partial_{t} h|_{5} +
|\partial_{t}h|_{2, \infty} + \| \nabla q^E \|_{3}\big)\big( 1 + \|\partial_{t} v\|_{L^\infty} + \|\partial_{t}v\|_{3}\big).$$
Therefore, by using again the boundary condition \eqref{bordv1} and Proposition \ref{proppE}, we find
$$ \|F\|_{3} \leq \Lambda \big( {1 \over c_{0}}, \|v\|_{E^5}+ \|v\|_{E^{2, \infty}} +\|v\|_{6}+|h|_{6}+ |h|_{3, \infty}\big)( 1 + \| \partial_{t}v \|_{L^\infty}
+ \|\partial_{t}v \|_{3}\big).$$
By using again the expression \eqref{defFtaylor}, we also find
\begin{align*}
|F^b|_{L^\infty} & \leq \Lambda\big( {1 \over c_{0}}, |h|_{3, \infty} + \|v\|_{E^{2, \infty}} + \|\nabla q^E \|_{L^\infty}
\big)\big( 1 + \| \partial_{t} v\|_{L^\infty}\big) \\
& \leq \Lambda\big( {1 \over c_{0}}, |h|_{3, \infty} + \|v\|_{E^{2, \infty}} + |h|_{4}+ \|v\|_{E^4}\big)\big( 1 + \|\partial_{t}v \|_{L^\infty}\big)
\end{align*}
where the last estimate comes from Proposition \ref{proppE}. Consequently, by plugging these last two estimates
in \eqref{taylorqi3}, we find
$$ \int_{0}^T | \partial_{z}q^i_{/z=0}|_{L^\infty} \leq \int_{0}^T
\Lambda \big( {1 \over c_{0}}, \|v\|_{E^5}+ \|v\|_{E^{2, \infty}} + \|v\|_{6}+|h|_{6}+ |h|_{3, \infty}\big)( 1 + \| \partial_{t}v \|_{L^\infty}
+ \|\partial_{t}v \|_{3}\big).$$
To conclude, we can use the equation \eqref{NSv} to get
\begin{multline*} \|\partial_{t}v \|_{3} + \| \partial_{t}v \|_{L^\infty} \leq \Lambda \big( {1 \over c_{0}}, \|v\|_{E^5}+ \|v\|_{E^{2, \infty}} +|h|_{5}+ |h|_{2, \infty}\big) \\
\cdot \big( 1 + \varepsilon \|\partial_{zz}v\|_{L^\infty} + \varepsilon \| \partial_{zz} v\|_{3} + \|\nabla q\|_{L^\infty} + \|\nabla q \|_{3}\big)
\end{multline*}
and then Proposition \ref{propPNS} and Proposition \ref{proppE} combined with the trace estimate \eqref{trace} to find
\begin{multline*} \|\partial_{t}v \|_{3} + \| \partial_{t}v \|_{L^\infty} \leq \Lambda \big( {1 \over c_{0}}, \|v\|_{E^5}+ \|v\|_{E^{2, \infty}} +|h|_{5}+ |h|_{2, \infty}\big) \\
\cdot \big( 1 + \varepsilon \|\partial_{zz}v\|_{L^\infty} + \varepsilon \| \partial_{zz} v\|_{3}\big).
\end{multline*}
This ends the proof of Proposition \ref{proptaylor}.
\section{Conormal estimates for $v$ and $h$}
\label{sectionconorm2}
\subsection{Control given by the good unknown}
To perform our higher order energy estimates, we shall use the good unknown
$ V^\alpha= Z^\alpha v - \partial^{\varphi}_{z}v Z^\alpha \varphi$. A crucial point is therefore to establish that
the control of quantities of this type, together with that of $Z^\alpha h$, yields a control of all
the needed quantities.
We shall perform a priori estimates on an interval of time $[0, T^\varepsilon]$ for which we assume that
\begin{equation}
\label{apriori}
\partial_{z} \varphi \geq {c_{0}}, \quad |h|_{2, \infty} \leq {1 \over c_{0}}, \quad g - (\partial_{z}^\varphi q^E)_{/z=0} \geq {c_{0}\over 2}, \quad \forall t \in [0, T^\varepsilon]
\end{equation}
for some $c_{0}>0$. In the following, we shall use the notation $\Lambda_{0}= \Lambda(1/c_{0})$.
Note in particular that this will allow us to use the Korn inequality recalled in Proposition \ref{Korn}.
Let us introduce a few notations. As we have already used, we set
$$ V^\alpha= Z^\alpha v - \partial_{z}^\varphi v Z^\alpha h, \, \alpha \neq 0, \quad V^0= v$$
and we shall use the norms:
$$ \|V^m(t) \|^2 = \sum_{| \alpha| \leq m} \|V^\alpha (t)\|^2, \quad \| S^\varphi V^m(t) \|^2= \sum_{| \alpha| \leq m } \| S^\varphi V^\alpha(t) \|^2.$$
By using \eqref{apriori}, for every $t \in [0, T^\varepsilon]$ we have the equivalence
\begin{equation}
\label{equiv1}
\|v\|_{m} \lesssim \|V^m\| + \Lambda({1 \over c_{0}} , \| \nabla v\|_{L^\infty}) |h|_{m-{1 \over 2 }}, \quad \|V^m\| \lesssim
\|v\|_{m} + \Lambda({1 \over c_{0}} , \| \nabla v\|_{L^\infty} )|h|_{m-{1 \over 2 }}.
\end{equation}
\subsection{Main estimate}
\begin{prop}
\label{conormv}
Let us define for $t \in [0, T^\varepsilon]$,
\begin{equation}
\label{deflambdainfty1} \Lambda_{\infty}(t)= \Lambda \big( {1 \over c_{0}}, \|v(t)\|_{E^{2, \infty}} + \varepsilon^{1 \over 2} \|\partial_{zz}v\|_{L^\infty}+|h(t)|_{4} + |v(t)|_{E^4} \big),
\end{equation}
then, for every $t \in [0, T^\varepsilon]$, a sufficiently smooth solution of \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2} satisfies
for every $m \geq 0$ the estimate
\begin{align*}
& \|V^m(t)\|^2 + |h(t)|_{m}^2 + \varepsilon |h(t)|_{m+{1 \over 2}}^2 + \varepsilon \int_{0}^t \| \nabla V^m\|^2 \\
& \leq \Lambda_{0}\big(\| V^m(0)\|^2 + |h(0)|_{m}^2 + \sqrt{\varepsilon}|h(0)|_{m+{1 \over 2}}^2\big) \\
& \quad + \int_{0}^t\Lambda_{\infty} \big( 1 + | (\partial_z \partial_{t}q^E)^b |_{L^\infty}
\big) \big(\|V^m\|^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2}}^2\big) + \int_{0}^{t} \Lambda_{\infty} \|\partial_{z} v\|_{m-1}^2
\end{align*}
where as before $\Lambda_{0}= \Lambda(1/c_{0})$.
\end{prop}
\begin{proof}
By using the equations \eqref{eqValpha}, \eqref{divValpha} and the boundary condition \eqref{bordV}, the same integrations by parts as in the proof of Lemma \ref{basicL2} yield
that
\begin{equation}
\label{en1}
{d \over dt} \int_{\mathcal{S}} |V^\alpha |^2 d\mathcal{V}_{t} + 4 \varepsilon \int_{\mathcal{S}} |S^\varphi V^\alpha |^2\, d\mathcal{V}_{t}
= \mathcal{R}_{S}+ \mathcal{R}_{C} + 2 \int_{z= 0} \big( 2 \varepsilon S^\varphi V^\alpha - Q^\alpha \mbox{Id} \big) {\bf N} \cdot V^\alpha\, dy
\end{equation}
where we set
\begin{align}
\label{RS} & \mathcal{R}_{S}= 2 \int_{\mathcal{S}} \Big( \varepsilon \mathcal{D}^\alpha(S^\varphi v) + \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha (v) \big) \Big) \cdot V^\alpha
\, d\mathcal{V}_{t}, \\
\label{RC} & \mathcal{R}_{C}=- 2 \int_{\mathcal{S}} \Big( \big(\mathcal{C}^\alpha(\mathcal{T})+ \mathcal{C}^\alpha(q)\big) \cdot V^\alpha - \mathcal{C}^\alpha (d) Q^\alpha \Big)\, d\mathcal{V}_{t}.
\end{align}
Let us start with the analysis of the last term in \eqref{en1}. Note that this is the crucial term for which we shall need
to use the physical condition. We first notice that if $\alpha_{3} \neq 0$ since $V^\alpha_{/z=0}=0$,
this term vanishes. Consequently, we only need to study the case $\alpha_{3}= 0$.
By using \eqref{bordV}, we get that
\begin{align}
\label{enbor} \int_{z= 0} \big( 2 \varepsilon S^\varphi V^\alpha - Q^\alpha \mbox{Id} \big) {\bf N} \cdot V^\alpha & = \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q\, Z^\alpha h \big) {\bf N}\cdot
V^\alpha \\
& \nonumber - \int_{z= 0} \big(2 \varepsilon S^\varphi v - \big( q-gh){\mbox{Id}} \big) Z^\alpha {\bf N} \cdot V^\alpha \, dy + \mathcal{R}_{B}
\end{align}
where
\begin{equation}
\label{RB}
\mathcal{R}_{B}= \int_{z=0} \big( \mathcal{C}^\alpha(\mathcal{B}) - 2 \varepsilon Z^\alpha h\, \partial^{\varphi}_{z} \big( S^\varphi v\big) {\bf N}) \cdot V^\alpha \, dy.
\end{equation}
To estimate $\mathcal{R}_{B}$, we can use \eqref{CalphaB} to get that
\begin{align*}
| \mathcal{R}_{B}| & \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} +\|v \|_{E^{2, \infty}}
\big) \big( \varepsilon(1 + \| \partial_{zz} v \|_{L^\infty}) |h|_{m}+ \varepsilon |v^b|_{m} \big) | (V^\alpha)^b |_{L^2} \\
&\leq \Lambda_{\infty}
\big( \varepsilon(1 + \| \partial_{zz} v \|_{L^\infty}) |h|_{m}+ \varepsilon |v^b|_{m} \big) | (V^\alpha)^b |_{L^2}.
\end{align*}
Note that in the proof, when we do not specify the time variable, this means that all the quantities that appear
are evaluated at time $t$.
To estimate the second term in the right hand side of \eqref{enbor}, we first note that
$$ \int_{z= 0} \big(2 \varepsilon S^\varphi v - \big( q-gh){\mbox{Id}} \big) Z^\alpha {\bf N} \cdot V^\alpha \, dy
= \int_{z= 0} \big(2 \varepsilon S^\varphi v - q^{NS}{\mbox{Id}} \big) Z^\alpha {\bf N} \cdot V^\alpha \, dy$$
since $q= q^E+ q^{NS}$ and $q^E= gh$ on the boundary. This yields
\begin{align*}\Big| \int_{z= 0} \big(2 \varepsilon S^\varphi v - q^{NS}{\mbox{Id}} \big) Z^\alpha {\bf N} \cdot V^\alpha \, dy\Big|
& \leq | Z^\alpha \nabla h |_{-{1 \over 2 }} \, | \big(2 \varepsilon S^\varphi v - q^{NS}{\mbox{Id}} \big)(V^\alpha)^b|_{{1 \over 2}}.
\end{align*}
Since $q^{NS}= 2 \varepsilon S^\varphi v\, {\bf n} \cdot {\bf n}$ on the boundary, we obtain by using \eqref{cont2D} that
$$\Big| \int_{z= 0} \big(2 \varepsilon S^\varphi v - q^{NS}{\mbox{Id}} \big) Z^\alpha {\bf N} \cdot V^\alpha \, dy\Big|
\leq \Lambda_{\infty}\, \varepsilon |h|_{m+{1 \over 2}} |(V^\alpha)^b |_{1\over 2}. $$
To express the first term in the right hand side of \eqref{enbor}, we first use the decomposition $q= q^E+ q^{NS}$
and we write
$$ \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q\, Z^\alpha h \big) {\bf N}\cdot
V^\alpha = \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big) {\bf N}\cdot
V^\alpha \, dy + \int_{z=0} \partial^{\varphi}_{z}q^{NS} Z^\alpha h V^\alpha \cdot {\bf N}.$$ For the second term, we get thanks to \eqref{qNSLinfty} that
\begin{align*} \Big| \int_{z=0} \partial^{\varphi}_{z}q^{NS} Z^\alpha h V^\alpha \cdot {\bf N} \Big| \leq \|\partial_{z}^\varphi q^{NS}\|_{L^\infty}
|h|_{m} |(V^\alpha)^b| & \lesssim \Lambda_{\infty}\, \varepsilon |h|_{m}|(V^\alpha)^b|
\end{align*}
For the first term, we use the kinematic boundary condition under the form given by Lemma \ref{lembordh}:
\begin{align*} \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big) {\bf N}\cdot
V^\alpha \, dy & = \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big)\partial_{t} Z^\alpha h
\\
& - \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big)v^b \cdot (Z^\alpha \nabla_{y}h - \mathcal{C}^\alpha(h))\, dy.
\end{align*}
Thanks to an integration by parts, Proposition \ref{proppE} and \eqref{bordhC}, we obtain
\begin{align*} \Big| \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big)v^b \cdot (Z^\alpha \nabla_{y}h - \mathcal{C}^\alpha(h))\, dy\Big|
& \lesssim \|v\|_{1, \infty}( 1 + \|\partial^{\varphi}_{z} q^E\|_{1, \infty}) \big( |Z^\alpha h| + | \mathcal{C}^\alpha(h) | \big) \\
& \leq \Lambda_{\infty}\big( |h|_{m} + \|v\|_{E^m})|h|_{m}
\end{align*}
and we finally write
$$ \int_{z=0}\big( -g Z^\alpha h + \partial^{\varphi}_{z} q^E\, Z^\alpha h \big)\partial_{t} Z^\alpha h
= - {1 \over 2} {d \over dt} \int_{z=0} ( g- \partial^{\varphi}_{z} q^E) |Z^\alpha h |^2 - {1 \over 2} \int_{z=0} \partial_{t}\big( \partial^{\varphi}_{z} q^E \big) |Z^\alpha h|^2.$$
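For completeness, this last identity is simply the Leibniz rule (the measure $dy$ on $\{z=0\}$ does not depend on time):
$$ {d \over dt} \int_{z=0} \big( g - \partial^{\varphi}_{z} q^E \big) |Z^\alpha h|^2\, dy = - \int_{z=0} \partial_{t}\big( \partial^{\varphi}_{z} q^E \big) |Z^\alpha h|^2\, dy + 2 \int_{z=0} \big( g - \partial^{\varphi}_{z} q^E\big) Z^\alpha h\, \partial_{t} Z^\alpha h\, dy.$$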
Consequently, gathering all the previous estimates, we have proven from \eqref{enbor} that
\begin{equation}
\label{enbor2} \int_{z= 0} \big( 2 \varepsilon S^\varphi V^\alpha - Q^\alpha \mbox{Id} \big) {\bf N} \cdot V^\alpha
= - {1 \over 2} {d \over dt} \int_{z=0} ( g- \partial^{\varphi}_{z} q^E) |Z^\alpha h |^2 + \tilde{ \mathcal{R}}_{B}\end{equation}
where
\begin{align*}
| \tilde{\mathcal{R}}_{B}| & \leq \Lambda_{\infty} \Big( \big( \varepsilon \big(1+ \|\partial_{zz} v \|_{L^\infty} \big)|h|_{m} + \varepsilon |v^b |_{m}\big) |(V^\alpha)^b| \\
& \quad + \varepsilon |h|_{m+{1 \over 2}} |(V^\alpha)^b|_{{1 \over 2}} + (1 + |(\partial_{z} \partial_{t} q^E)^b |_{L^\infty})|h|_{m}^2 + \|v\|_{E^m} |h|_{m}\Big)
and we can use successively, \eqref{equiv1}, that
\begin{equation}
\label{vbm}|v^b|_{m} \leq \Lambda_{\infty}\big( |(V^m)(\cdot, 0)| +|h|_{m}\big), \end{equation}
and the trace inequality \eqref{trace} which yields
$$ |(V^\alpha)^b|_{L^2}^2\lesssim \| \partial_{z} V^\alpha \| \|V^\alpha \|, \quad |(V^\alpha)^b|_{1\over 2 } \lesssim \|\nabla V^\alpha\| + \|V^\alpha \|$$
to get that
\begin{align}
\label{tildeRB}
| \tilde{\mathcal{R}}_{B}| & \leq \varepsilon \| \nabla V^m\|\, \|V^m\| + \Lambda_{\infty} \|\partial_{z}v\|_{m-1}|h|_{m} \\
\nonumber & \quad \quad +
\Lambda_{\infty} \big( 1 + |(\partial_{z} \partial_{t} q^E)^b |_{L^\infty}\big) \big( |h|_{m}^2
+ \varepsilon |h|_{m+{1 \over 2}}^2 + \|V^m \|^2 \big)
\end{align}
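Let us also recall, as a side remark, where the first trace inequality used above comes from: for a function $f$ that decays as $z \to -\infty$,
$$ |f(y, 0)|^2 = \int_{-\infty}^0 \partial_{z}\big( |f(y,z)|^2 \big)\, dz = 2 \int_{-\infty}^0 f\, \partial_{z} f\, dz,$$
so that an integration in $y$ and the Cauchy-Schwarz inequality give $|f^b|_{L^2(\mathbb{R}^2)}^2 \leq 2 \|f\|\, \|\partial_{z} f\|$.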
By plugging \eqref{enbor2} into \eqref{en1}, we get that
\begin{equation}
\label{en2}
{d \over dt }{1 \over 2 }\Big(
\int_{\mathcal{S}} |V^\alpha |^2 d\mathcal{V}_{t} + \int_{z=0} ( g- \partial^{\varphi}_{z} q^E) |Z^\alpha h |^2 \, dy \Big)
+ 4 \varepsilon \int_{\mathcal{S}} |S^\varphi V^\alpha |^2\, d\mathcal{V}_{t}= \mathcal{R}_{S} + \mathcal{R}_{C}+ \tilde{\mathcal R}_{B}
\end{equation}
and it remains to estimate the commutators $ \mathcal R_{S}$, $\mathcal{R}_{C}$.
Let us begin with the estimate of $\mathcal{R}_{C}$.
By using \eqref{Cd}, \eqref{CT} and the Sobolev embedding, we immediately get that
\begin{equation}
\label{RC1}|\mathcal{R}_{C}|\leq \Lambda_{\infty} \big( \|v\|_{m} +\|\partial_{z} v\|_{m-1} + |h|_{m} \big) \|V^m\| +\Big| \int \mathcal{C}^\alpha(q) \cdot V^m
d\mathcal{V}_{t} \Big|.
\end{equation}
To estimate the last term, we could use directly \eqref{Cq}. Nevertheless, the estimate \eqref{Cq} implies that we need to control
$\| \nabla q \|_{1, \infty}$ and the control of $\| \nabla q^{NS} \|_{1, \infty}$ through Sobolev embedding
and \eqref{estqNS} would involve a dependence in $\varepsilon | v^b|_{s}$ with $s>4$ in the definition of $\Lambda_{\infty}$.
We can actually easily get a better estimate by using that
$\mathcal{C}^\alpha (q) = \mathcal{C}^\alpha(q^E) + \mathcal{C}^\alpha(q^{NS})$ and by handling the two terms in two different ways.
For the Euler pressure, we can use directly \eqref{Cq} and Proposition \ref{proppE} to get that
\begin{equation}
\label{CqE}
\| \mathcal{C}^\alpha(q^E) \| \, \| V^m\| \lesssim \Lambda_{\infty} \big( \|v\|_{E^{m-1}}+ |h|_{m} \big) \| V^m \|.
\end{equation}
To estimate $\mathcal{C}^\alpha(q^{NS})$ we can have a closer look at the structure of the commutator.
By using the decomposition \eqref{Cialpha}, and \eqref{Calpha2}, \eqref{Calpha3} combined with Proposition \ref{propPNS}, we
have that
$$ \| \mathcal{C}^\alpha_{i, 2}(q^{NS})\| +\| \mathcal{C}^\alpha_{i,3}(q^{NS} )\| \leq \Lambda_{\infty} \big( \|v\|_{E^{m-1}}+ |h|_{m} \big).$$
Consequently, it only remains to study $\mathcal{C}^\alpha_{i,1}(q^{NS})= - [Z^\alpha, {\partial_{i} \varphi \over \partial_{z} \varphi}, \partial_{z} q^{NS}]$
(the case $i=3$ can be handled by similar arguments). For this term, it is actually better to write it under the form
$$ \mathcal{C}_{i, 1}^{\alpha}(q^{NS})= - \Big( [Z^\alpha, {\partial_{i}\varphi \over \partial_{z} \varphi } ]\partial_{z} q^{NS} - Z^\alpha \big( {\partial_{i} \varphi
\over \partial_{z}\varphi}\big) \partial_{z} q^{NS}\Big).$$
We can use the commutator estimate \eqref{com} for the first term and Proposition \ref{propeta} and Proposition \ref{propPNS}
to get that
\begin{align*} \| [Z^\alpha, {\partial_{i}\varphi \over \partial_{z} \varphi } ]\partial_{z} q^{NS}\| + \big\| Z^\alpha \big( {\partial_{i} \varphi
\over \partial_{z}\varphi}\big) \partial_{z} q^{NS}\big\|
& \leq \Lambda\big({1 \over c_{0}}, \varepsilon^{-1}\|\partial_{z} q^{NS} \|_{L^\infty}\big)\big( \|\partial_{z} q^{NS}\|_{m-1} +\varepsilon |h|_{m+{1 \over 2 }} \big)
\\ & \leq \Lambda_{\infty} \big( \varepsilon |h|_{m+{1 \over 2 }} + \varepsilon |v^b|_{m+{1 \over 2 } }\big).
\end{align*}
This yields
$$ |\mathcal{R}_{C}|\leq \Lambda_{\infty} \big( \|v\|_{m} + \|\partial_{z}v \|_{m-1} + |h|_{m} + \varepsilon^{1 \over 2 } | h|_{m+{1 \over 2 }}
+ \varepsilon |v^b|_{m+{1 \over 2 }} \big) \|V^m\|.$$
To estimate the last term $ \varepsilon |v^b|_{m+{1 \over 2 }} \|V^m\|$ in the above estimate, we can write that
$$ |v^b|_{m+{1 \over 2 }} \leq \sum_{|\alpha| \leq m} |V^\alpha|_{1\over 2} + |\partial_{z}^\varphi v Z^\alpha h|_{1\over 2}$$
and we use \eqref {cont2D} and the trace estimate \eqref{trace} to get that
\begin{equation}
\label{vbm+} |v^b|_{m+{1 \over 2 }} \leq \| \nabla V^m\| + \|V^m\| + \Lambda_{\infty} |h|_{m+{1 \over 2 }}.\end{equation}
Hence, we find
\begin{equation}
\label{RCfinal}
|\mathcal{R}_{C}|\leq \varepsilon \|\nabla V^m\|\, \|V^m\| +\Lambda_{\infty} \|\partial_{z}v \|_{m-1} \|V\|_{m} + \Lambda_{\infty}\big( \|V^m\|^2 + |h|_{m}^2
+ \varepsilon |h|_{m+{1 \over 2}}^2\big).
\end{equation}
It remains to estimate $\mathcal{R}_{S}$ which is given by \eqref{RS}. For the second term, we use an integration by parts:
$$ \int_{\mathcal{S}} \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha(v) \big) \cdot V^\alpha d \mathcal{V}_{t}=
- \int_{\mathcal{S}} \varepsilon\, \mathcal{E}^\alpha(v) \cdot \nabla V^\alpha d \mathcal{V}_{t}+ \int_{z=0}\varepsilon\, \mathcal{E}^\alpha(v) {\bf N} \cdot V^\alpha \, dy.$$
The estimate of the first term comes directly from \eqref{CT}, we get
$$ \Big| \int_{\mathcal{S}} \varepsilon\, \mathcal{E}^\alpha(v) \cdot \nabla V^\alpha d \mathcal{V}_{t} \Big| \leq \Lambda_{\infty}
\big( \|v\|_{E^m}+ |h|_{m} \big) \varepsilon\, \|\nabla V^\alpha \|.$$
To estimate the boundary term, we need an estimate of $\mathcal{E}^\alpha(v)$ on the boundary. The same arguments
yielding the proof of \eqref{CT} can be used by using the commutator estimates on the boundary to get that
$$ | \mathcal{E}^\alpha(v)(\cdot, 0)|_{L^2} \leq \Lambda_{\infty}\big( |h|_{m}+ |v^b|_{m} +|\partial_{z} v^b|_{m-1} \big)$$
and hence by using \eqref{dzvb}, we find
$$ | \mathcal{E}^\alpha(v)(\cdot, 0)|_{L^2} \leq \Lambda_{\infty}\big( |h|_{m}+ |v^b|_{m} \big).$$
Consequently, we have proven that
$$ \Big| \int_{\mathcal{S}} \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha(v) \big) \cdot V^\alpha d \mathcal{V}_{t} \Big|
\leq \Lambda_{\infty} \Big( \big( \|v\|_{E^m}+ |h|_{m} \big) \varepsilon\, \|\nabla V^\alpha \| + \big( |h|_{m}+ |v^b|_{m} \big) \varepsilon\, |(V^\alpha)^b| \Big).
$$
and by using again the trace inequality, we find that
\begin{equation}
\label{RS1}
\Big| \int_{\mathcal{S}} \varepsilon \nabla^\varphi \cdot \big( \mathcal{E}^\alpha(v) \big) \cdot V^\alpha d \mathcal{V}_{t} \Big|
\leq \Lambda_{\infty} \varepsilon \| \nabla V^m \|\big( \|V^m \| + \|\partial_{z}v\|_{m-1}+ |h|_{m} \big). \end{equation}
It remains to estimate $\varepsilon \int_{\mathcal{S}} \mathcal D^\alpha(S^\varphi v) \cdot V^\alpha\, d \mathcal{V}_{t}$.
This is done in the following lemma:
\begin{lem}
\label{comDS}
We have the estimate:
\begin{align*}
\varepsilon\Big| \int_{\mathcal{S}} \mathcal D^\alpha(S^\varphi v) \cdot V^\alpha d \mathcal{V}_{t}\Big| \leq
\Lambda_{\infty} \Big( \varepsilon \|\nabla V^m\| \big(\|V^m \| + \|\partial_{z}v \|_{m-1}+ |h|_{m+{1 \over2}}\big) + \varepsilon\big( \|v\|_{E^m}^2 + |h|_{m+{1 \over 2}}^2 \big) \\
+ \varepsilon \|\partial_{zz} v\|_{L^\infty}\big( |h|_{m}^2 + \| V^m\|^2\big)\Big).
\end{align*}
\end{lem}
We shall first finish the proof of Proposition \ref{conormv} and then prove Lemma \ref{comDS}.
From \eqref{en2}, we can sum over $\alpha$ (for $\alpha=0$, we use the basic estimate \ref{corL2}), integrate in time, use our a priori assumption \eqref{apriori} on the Taylor condition and use
\eqref{RCfinal}, \eqref{tildeRB} and Lemma \ref{comDS} to get that
\begin{align}
\label{presquefini}
& \|V^m(t) \|^2 + |h(t)|_{m}^2 + \varepsilon \int_{0}^t \| S^\varphi V^m \|^2
\leq \Lambda_{0}\big(
\|V^m(0)\|^2 + |h(0)|_{m}^2\big) \\
\nonumber & \quad + \int_{0}^t \Big(
\varepsilon \, \Lambda_{\infty} \|\nabla V^m\| \big(\|V^m \| + \|\partial_{z}v \|_{m-1}+ |h|_{m+{1 \over2}}\big) \\
\nonumber & \quad \quad \quad + \Lambda_{\infty}\big(1 + |(\partial_{z}\partial_{t}q^E)^b |_{L^\infty}\big) \big( |h|_{m}^2 + \| V^m\|^2
+ \varepsilon |h|_{m+{1 \over 2}}^2\big) + \Lambda_{\infty} \|\partial_{z}v \|_{m-1}^2
\Big).
\end{align}
Now, we can use Proposition \ref{mingrad} and the Korn inequality of Proposition \ref{Korn} to get that
$$\|\nabla V^m\| \leq \Lambda\big({1\over c_{0}}\big) \big( \| S^\varphi V^m\| + \|v\|_{m}\big)$$
and the estimate of Proposition \ref{conormv} follows from \eqref{presquefini} and the Young inequality \eqref{young}.
This ends the proof of Proposition \ref{conormv}.
\end{proof}
It remains to prove Lemma \ref{comDS}.
\subsubsection*{Proof of Lemma \ref{comDS}}
By using \eqref{Ddef}, we actually have to estimate
\begin{align}
\nonumber \mathcal{R}_{Si} & = \varepsilon \int_{\mathcal{S}} \mathcal C^\alpha_{j}(S^\varphi v)_{ij} V^\alpha_{j} d \mathcal{V}_{t} \\
\label{RSi} & =
\varepsilon \int_{\mathcal{S}} \mathcal C^\alpha_{j, 1}(S^\varphi v)_{ij} V^\alpha_{j} d \mathcal{V}_{t}
+ \varepsilon \int_{\mathcal{S}} \mathcal C^\alpha_{j, 2}(S^\varphi v)_{ij} V^\alpha_{j} d \mathcal{V}_{t}
+ \varepsilon \int_{\mathcal{S}} \mathcal C^\alpha_{j, 3}(S^\varphi v)_{ij} V^\alpha_{j} d \mathcal{V}_{t} \\
&: = \mathcal{R}_{Si}^1 + \mathcal{R}_{Si}^2 + \mathcal{R}_{Si}^3
\end{align}
where we have used the decomposition \eqref{Cialpha}. For the first term, by using the definition of the symmetric commutator
we see that we need to estimate terms like
$$\varepsilon \int_{\mathcal{S}} Z^\beta \big( {\partial_{j} \varphi \over \partial_{z} \varphi}\big) \big(Z^{\tilde{\gamma}} \partial_{z}(S^\varphi v)_{ij} \big)V^\alpha_{j} d \mathcal{V}_{t}$$
where $\beta $ and $\tilde\gamma$ are such that $\beta \neq 0, \, \tilde \gamma \neq 0$ and $|\beta | + |\tilde \gamma|=m.$
By using \eqref{idcom}, we can reduce the problem to the estimate of
$$ \varepsilon \int_{\mathcal{S}} c_{\gamma} Z^\beta \big( {\partial_{j} \varphi \over \partial_{z} \varphi}\big) \partial_{z}\big( Z^\gamma (S^\varphi v)_{ij} \big)V^\alpha_{j} d \mathcal{V}_{t}$$
with $\beta$ as before (thus $| \beta | \leq m-1$) and $| \gamma | \leq | \tilde \gamma|\leq m-1.$ By using an integration by parts, we are led to the estimate
of three types of terms:
$$ \mathcal{I}_{1}= \varepsilon\int_{\mathcal{S}} Z^\beta \big( {\partial_{j} \varphi \over \partial_{z} \varphi } \big)\,Z^\gamma (S^\varphi v)_{ij}\,
\partial_{z} V_{j}^\alpha d \mathcal{V}_{t}, \quad \mathcal{I}_{2}=
\varepsilon\int_{\mathcal{S}} \Big( \partial_{z}Z^\beta \big( {\partial_{j} \varphi \over \partial_{z} \varphi } \big) \Big)Z^\gamma (S^\varphi v)_{ij} V_{j}^\alpha d \mathcal{V}_{t}, \quad $$
and the term on the boundary
$$ \mathcal{I}_{3}= \varepsilon \int_{z=0} Z^\beta \big( {\partial_{j} \varphi \over \partial_{z} \varphi } \big)\, Z^\gamma (S^\varphi v)_{ij} V_{j}^\alpha dy.$$
For the first term, since $\beta \neq 0$, we get by using again \eqref{gues}, \eqref{quot} and Proposition \ref{propeta} that
$$| \mathcal{I}_{1}| \leq \varepsilon \| \nabla V^m\|\, \Lambda_{\infty}\Big( \| v\|_{m} + \| \partial_{z}v\|_{m-1}
+ |h|_{m+{1 \over 2}} \Big).$$
To estimate $\mathcal{I}_{2}$, we can first use \eqref{idcom} to expand it as a sum of terms under the form
$$ \tilde{\mathcal{I}}_{2} = \varepsilon\int_{\mathcal{S}} \Big( c_{\tilde \beta }Z^{\tilde \beta} \partial_{z} \big( {\partial_{j} \varphi \over \partial_{z} \varphi } \big) \Big)Z^\gamma (S^\varphi v)_{ij} V_{j}^\alpha d \mathcal{V}_{t}$$
with $|\tilde{\beta}|\leq |\beta|.$ If $\gamma = 0$, since $|\tilde \beta | \leq m-1$, we just write
$$|\tilde{\mathcal{I}}_{2}| \leq \varepsilon \big\| \partial_{z} \big( {\partial_{j} \varphi \over \partial_{z} \varphi }\big)\big\|_{m-1} \|S^\varphi v\|_{L^\infty}
\, \| V_{j}^\alpha \|$$
while for $\gamma \neq 0$, we use \eqref{gues} to get
$$ |\tilde{\mathcal{I}}_{2}| \leq \varepsilon \big\| \partial_{z} \big( {\partial_{j} \varphi \over \partial_{z} \varphi }\big)\big\|_{L^\infty} \|S^\varphi v\|_{m}
\, \| V_{j}^\alpha \| + \varepsilon \| S^\varphi v \|_{1, \infty} \big\| \partial_{z} \big( {\partial_{j} \varphi \over \partial_{z} \varphi }\big)\big\|_{m-1}\, \| V_{j}^\alpha \|.$$
Consequently, by using \eqref{quot} and Proposition \ref{propeta}, we obtain
$$ | \mathcal{I}_{2}| \leq \Lambda_{\infty} \, \|V^m\| \big( \varepsilon \| S^\varphi v\|_{m} + \varepsilon |h|_{m+{1 \over 2}} \big).$$
To conclude, we need to relate $\| S^\varphi v\|_{m}$ to the energy dissipation term. By using \eqref{com1} and
Lemma \ref{comi} combined with the identity \eqref{al11}, we get that
$$ \| S^\varphi v\|_{m} \leq \|S^\varphi V^m \| + \Lambda_{\infty}\big( 1 + \| \partial_{zz}v\|_{L^\infty}\big) |h|_{m} +
\Lambda_{\infty}\big(\| \nabla v\|_{m-1} + |h|_{m-{1\over 2}}\big) $$
and hence we finally obtain that
$$ | \mathcal{I}_{2}| \leq \Lambda_{\infty} \, \|V^m\| \big( \varepsilon \| S^\varphi V^m\| + \varepsilon |h|_{m+{1 \over 2}} + \varepsilon(1+ \|\partial_{zz}v \|_{L^\infty})
|h|_{m} + \|v\|_{E^m} \big).$$
It remains to estimate $\mathcal{I}_{3}$. By using product estimates on the boundary we first find by using that $\beta \neq 0$ that
$$ | \mathcal{I}_{3}| \leq \Lambda_{\infty}\, \varepsilon\big( |h|_{m+{1 \over 2 }} + |(S^\varphi v)^b|_{m-1} \big)|(V^\alpha)^b|\leq
\Lambda_{\infty}\, \varepsilon\big( |h|_{m+{1 \over 2 }} + |v^b|_{m} \big)|(V^\alpha)^b| $$
where we have used \eqref{dzvb} in the second estimate. By using again \eqref{vbm} and the trace inequality, this yields
$$ | \mathcal{I}_{3}| \leq \Lambda_{\infty} \Big( \varepsilon \| \nabla V^m\|\, \|V^m\| + \varepsilon |h|_{m+{1 \over 2}}^2 + \varepsilon \|v\|_{E^m}^2\Big).$$
Consequently, we get from the previous estimates that
\begin{equation}
\label{RSi1}
| \mathcal{R}_{Si}^1 | \leq \Lambda_{\infty} \Big( \varepsilon \| \nabla V^m \| \big(\|V^m\|+ |h|_{m+{1 \over2}}\big) + \varepsilon\big( \|v\|_{E^m}^2 + |h|_{m+{1 \over 2}}^2 \big)
+ \varepsilon \|\partial_{zz} v\|_{L^\infty}\big( |h|_{m}^2+ \|V^m\|^2\big)\Big).
\end{equation}
The estimate of $\mathcal{R}_{Si}^2$ is straightforward: from the definition following \eqref{Cialpha}, we immediately get that
\begin{equation}
\label{RSi2}
| \mathcal{R}_{Si}^2 | \leq \varepsilon \Lambda_{\infty}(1 + \| \partial_{zz} v \|_{L^\infty}) \|V^m\| \, |h|_{m-{1 \over 2}}.
\end{equation}
It remains to estimate $\mathcal{R}_{Si}^3$. By using the definition of $\mathcal{C}_{i, 3}^{\alpha}(S^\varphi v)$
and again \eqref{idcom}, we get that
$$
\varepsilon \Big| \int_{\mathcal{S}} {\partial_{i}\varphi \over (\partial_{z}\varphi)^2} \partial_{z}\big( S^\varphi v) [Z^\alpha, \partial_{z}]\varphi d\mathcal{V}_{t}\Big| \leq \varepsilon \Lambda_{\infty} ( 1 + \|\partial_{zz}v\|_{L^\infty}) |h|_{m-{1 \over 2}} \|V^m\|.
$$
For the term
$$\varepsilon \int_{\mathcal{S}}{\partial_{i}\varphi \over \partial_{z} \varphi} V^\alpha \, [Z^\alpha, \partial_{z}](S^\varphi v) d\mathcal{V}_{t}, $$
we perform an integration by parts as in the estimate of $\mathcal{R}_{Si}^{1}$. We obtain by similar arguments that
$$ \Big| \int_{\mathcal{S}}{\partial_{i}\varphi \over \partial_{z} \varphi} V^\alpha \, [Z^\alpha, \partial_{z}](S^\varphi v) d\mathcal{V}_{t}\Big| \leq
\Lambda_{\infty}\, \varepsilon \big( \|V^m\| + \| \nabla V^m \|\big) \|v\|_{E^m}.$$
From the two last estimates and \eqref{RSi1}, \eqref{RSi2}, we finally obtain that
\begin{align*} |\mathcal{R}_{Si}| \leq
\Lambda_{\infty} \Big( \varepsilon \|\nabla V^m\| \big(\|V^m \| + \|\partial_{z}v \|_{m-1}+ |h|_{m+{1 \over2}}\big) + \varepsilon\big( \|v\|_{E^m}^2 + |h|_{m+{1 \over 2}}^2 \big) \\
+ \varepsilon \|\partial_{zz} v\|_{L^\infty}\big( |h|_{m}^2 + \| V^m\|^2\big)\Big).
\end{align*}
This ends the proof of Lemma \ref{comDS}.
\section{Normal derivative estimates part I}
\label{sectionnorm1}
In order to close the estimate of Proposition \ref{conormv}, we need to estimate $\|\partial_{z}v \|_{m-1}^2$.
For the normal component of $v$, this is given for free, we have
\begin{lem}
\label{lemvnorm}
For every $m \geq 1$, we have
\begin{equation}
\label{vnorm}
\| \partial_{z}v \cdot {\bf n} \|_{m-1} \leq \Lambda\big( {1 \over c_{0}}, \|\nabla v\|_{L^\infty}\big)\big( \|V^m\| + |h|_{m-{1 \over 2}}\big)
\end{equation}
where ${\bf N}= (- \partial_{1} \varphi, -\partial_{2} \varphi, 1)^t$, ${\bf n}= {\bf N}/|{\bf N}|$.
\end{lem}
Note that in the above definition ${\bf N}$ is defined in the whole $\mathcal{S}$ via $\varphi$.
\begin{proof}
From \eqref{NSv}, the divergence free condition yields
\begin{equation}
\label{dzVid}
\partial_{z}v \cdot {\bf N}= \partial_{z}\varphi \big( \partial_{1} v_{1} + \partial_{2} v_{2} \big)
\end{equation}
and hence the estimate follows from \eqref{gues} and Proposition \ref{propeta}.
\end{proof}
The next step is to estimate the tangential components of $\partial_{z} v $.
We shall proceed in two steps. As a first step, we shall estimate $\sup_{[0, T]} \| \partial_{z} v \|_{m-2}$.
This estimate
will be important in order to control the $L^\infty$ norms that occur in the definition of $\Lambda_{\infty}$
in Proposition \ref{conormv} in terms of known quantities via Sobolev embeddings and $L^\infty$ estimates. We shall prove at the end
of the paper the more delicate estimate which allows us to control
$\int_{0}^T \| \partial_{z} v \|_{m-1}^2.$
We begin with the following remark:
\begin{lem}
\label{lemdzS} For every $k \geq 0$, we have
\begin{eqnarray}
\label{dzvk} \| \partial_{z} v\|_{k} \leq \Lambda\big( {1 \over c_{0}}, \|\nabla v\|_{L^\infty}\big) \big( \| S_{{\bf n}}\|_{k} + |h|_{k+{1 \over 2 }} + \| v\|_{k+1}\big), \\
\label{dzzvk} \|\partial_{zz} v \|_{k} \leq \Lambda\big( {1 \over c_{0}}, |v|_{E^{2, \infty}}\big)\big( \|\nabla S_{{\bf n}} \|_{k} + |h|_{k+{ 3 \over 2} } + \|v\|_{k+2}\big)
\end{eqnarray}
where
\begin{equation}
\label{Sn} S_{{\bf n}}= \Pi S^\varphi v \, {\bf n}, \quad \Pi = Id- {\bf n}\otimes{\bf n}\end{equation}
and ${\bf n}$ is defined by ${\bf n}= {\bf N}/|{\bf N}|.$
\end{lem}
The main consequence of \eqref{dzvk} is that for $k \leq m-1 $, since $\|v\|_{k+1}$ is estimated in Proposition \ref{conormv}, we can look for an estimate of $\|S_{{\bf n}}\|_{k}$
instead of $\| \partial_{z}v \|_{k}$.
\begin{proof}
For the proof of Lemma \ref{lemdzS}, we can combine the identity \eqref{Su} which can be written in the equivalent form
\begin{equation}
\label{Subis}
2 S^\varphi v \, {\bf n}= \partial_{{\bf n}}v + g^{ij} \big(\partial_{j} v \cdot {\bf n}\big) \partial_{y^i}
\end{equation}
and the fact that
\begin{equation}
\label{Subis1} \partial_{{\bf N}} v = {1 + |\partial_{1}\varphi|^2 + |\partial_{2} \varphi |^2 \over \partial_{z}\varphi} \partial_{z} v - \partial_{1}\varphi \partial_{1}v
- \partial_{2} \varphi \partial_{2} v\end{equation}
to obtain
$$ \| \partial_{z} v\|_{k} \leq \Lambda\big( {1\over c_{0}}, \|\nabla v\|_{L^\infty}\big)\big( \|V^{k+1}\| + |h|_{k+{1 \over 2 }} + \| \partial_{z}v \cdot {\bf n} \|_{k} + \|S^\varphi v\, {\bf n} \|_{k}\big). $$
Recall that we have the estimate $|h|_{2,\infty} \leq 1/c_{0}$ thanks to \eqref{apriori}.
To estimate $ \| \partial_{z}v \cdot {\bf n} \|_{k}$, we use Lemma \ref{lemvnorm} and to estimate the last term, we note that
\begin{equation}
\label{Pisu} S^\varphi v \, {\bf n}= S_{{\bf n}} + \big( S^\varphi v \, {\bf n} \cdot {\bf n} \big)\, {\bf n}= S_{{\bf n}}+ D_{{\bf n}}^\varphi v\, \cdot {\bf n} \, {\bf n}\end{equation}
and we use Lemma \ref{lemvnorm}. This yields \eqref{dzvk}.
To obtain \eqref{dzzvk}, we first observe that by using again \eqref{dzVid}, we obtain that
$$ \| \partial_{zz}v \cdot {\bf n} \|_{k} \leq \Lambda\big( {1 \over c_{0}}, |v|_{E^{2, \infty}}\big)\big( |h|_{k+{3 \over 2}} + \|v\|_{k+1} + \|\partial_{z}v\|_{k+1}\big)$$
and hence, we find by using \eqref{dzvk} that
\begin{equation}
\label{dzzvk1}
\| \partial_{zz}v \cdot {\bf n} \|_{k} \leq \Lambda\big( {1 \over c_{0}}, |v|_{E^{2, \infty}}\big)\big( |h|_{k+{3 \over 2}} + \|v\|_{k+2} + \| S_{{\bf n}}\|_{k+1}\big).
\end{equation}
To get the estimate of the other components of $\partial_{zz}v$, it suffices to use again \eqref{Subis}, \eqref{Subis1}
combined with the previous estimates. This ends the proof of Lemma \ref{lemdzS}.
\end{proof}
The aim of the following Proposition is to prove the estimate on $S_{{\bf n}}$.
Let us recall that $\Lambda_{\infty}$ is defined in \eqref{deflambdainfty1} in Proposition \ref{conormv} and that $\Lambda_{0}= \Lambda(1/c_{0}).$
\begin{prop}
\label{propdzvm-2}
For every $m \geq 2$, the solution of \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2} satisfies,
for every $t \in [0, T^\varepsilon]$, the estimate
\begin{align}
\nonumber
& \|S_{{\bf n}}\|_{m-2}^2 + \varepsilon \int_{0}^t \| \nabla S_{{\bf n}}\|_{m-2}^2 \\
\label{estSnm-2} & \leq \Lambda_{0}\|S_{{\bf n}}(0)\|_{m-2}^2 + \int_{0}^t\Lambda_{\infty}\big( \|V^m\|^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2 }}^2
+ \|S_{{\bf n}}\|_{m-2}^2\big) \\
\nonumber & + \int_{0}^t \Lambda_{\infty} \|\partial_{z} v\|_{m-1}^2 + \varepsilon \int_{0}^t \|\nabla V^m\|^2.
\end{align}
\end{prop}
Note that the last term on the right-hand side of \eqref{estSnm-2}
can be estimated by using Proposition \ref{conormv}.
We also point out that in the above estimate, because of the dependence in $|h|_{m}$ in the right-hand side, we cannot change $m-2$ into $m-1$.
The occurrence of this term is mainly due to the Euler part of the pressure when we study the equation for $S_{{\bf n}}$.
Indeed, when we perform energy estimates on the equation for $S_{{\bf n}}$, the Hessian $D^2q^E$ of the pressure
is handled as a source term. Since $q^E= gh$ on the boundary, we get that the estimate of $\|D^2 q^E\|_{k}$ necessarily involves
$ |h|_{k+{3 \over 2}}$. This is why we are restricted to $k \leq m-{3/2}<m-1$.
\begin{proof}
The first step is to find an equation for $S_{{\bf n}}$. From \eqref{NSv}, we find the equation
$$ \partial^{\varphi}_{t} \nabla^\varphi v+ \big(v \cdot \nabla^\varphi\big) \nabla^\varphi v + (\nabla^\varphi v)^2 + (D^\varphi)^2 q -\varepsilon \Delta^\varphi \nabla^\varphi v = 0$$
where $(D^\varphi)^2q$ stands for the Hessian matrix of the pressure, $((D^\varphi)^2q)_{ij}= \partial^{\varphi}_{i}\partial^{\varphi}_{j}q$ and hence
by taking the symmetric part of the equation, we get
\begin{equation}
\label{eqS}
\partial^{\varphi}_{t} S^\varphi v+ \big(v \cdot \nabla^\varphi\big) S^\varphi v + { 1\over 2}\big((\nabla^\varphi v)^2+ ((\nabla^\varphi v)^t)^2\big) + (D^\varphi)^2 q -\varepsilon \Delta^\varphi (S^\varphi v)= 0.
\end{equation}
This allows us to derive an evolution equation for $S_{{\bf n}}= \Pi \big( S^\varphi v\, {\bf n}\big)$:
\begin{equation}
\label{eqPiS}
\partial^{\varphi}_{t} S_{{\bf n}}+ \big(v \cdot \nabla^\varphi\big) S_{{\bf n}} -\varepsilon \Delta^\varphi (S_{{\bf n}})= F_{S}
\end{equation}
where the source term $F_{S}$ is given by
\begin{equation}
\label{FSdef} F_{S}= F_{S}^1 + F_{S}^2,
\end{equation}
with
\begin{align}
\label{FS1} & F_{S}^1 = - {1 \over 2} \Pi\big( (\nabla^\varphi v)^2+ ((\nabla^\varphi v)^t)^2\big)\, {\bf n} + \big( \partial_{t} \Pi + v \cdot \nabla^\varphi \Pi \big)
S^\varphi v \, {\bf n} + \Pi S^\varphi v \big( \partial_{t} {\bf n} + v \cdot \nabla^\varphi {\bf n}\big) \\
\label{FS2} & F_{S}^2 = - 2 \varepsilon \, \partial^{\varphi}_{i} \Pi\, \partial^{\varphi}_{i} \big( S^\varphi v \, {\bf n} \big) - 2 \varepsilon \Pi \big( \partial^{\varphi}_{i}\big( S^\varphi v \big) \, \partial^{\varphi}_{i} {\bf n} \big) - \varepsilon \big(\Delta^\varphi \Pi \big)S^\varphi v \, {\bf n} - \varepsilon \Pi S^\varphi v \Delta^\varphi {\bf n} \\ \nonumber & \quad \quad \quad - \Pi\big( (D^\varphi)^2 q \big) {\bf n}.
\end{align}
Note that we have used the summation convention over the repeated index $i$ in \eqref{FS2}. By using Proposition \ref{propeta} and
Propositions \ref{proppE}, \ref{propPNS}, and the product estimates \eqref{gues}, we have that the source term $F_{S}^1$
is bounded by
$$
\| F_{S}^1 \|_{m-2} \leq \Lambda_{\infty}\big( \|\nabla^\varphi v \|_{m-2} + |h|_{m-{1 \over 2}} + \|v\|_{m-2} \big).
$$
We recall that the above notation implicitly means that all the quantities are evaluated at time $t$.
Note that by using \eqref{dzvk}, we can rewrite this estimate as
\begin{equation}
\label{estFS}
\| F_{S}^1 \|_{m-2} \leq \Lambda_{\infty}\big( \|S_{{\bf n}} \|_{m-2} + |h|_{m- {1 \over 2}} + \|v\|_{m-1} \big).
\end{equation}
In a similar way, we get for $F_{S}^2$ that
$$
\| F_{S}^2 \|_{m-2} \leq \Lambda_{\infty}\, \varepsilon \big( \|\partial_{zz}v\|_{m-2} + \|\partial_{z}v\|_{m-1} + \|v\|_{m} + |h|_{m+{1 \over 2}}
\big) + \Lambda_{\infty}\|\nabla q\|_{E^{m-1}}.
$$
Indeed, let us give more details on one example:
$$ \| \varepsilon \, \partial^{\varphi}_{i} \Pi\, \partial^{\varphi}_{i} \big( S^\varphi v \, {\bf n} \big) \|_{m-2} \lesssim
\varepsilon \|\partial^{\varphi}_{i} \Pi\|_{L^\infty}\, \| \partial^{\varphi}_{i}\big(S^\varphi v \, {\bf n} \big) \|_{m-2} + \varepsilon \|\partial_{i}^\varphi\big(S^\varphi v \, {\bf n} \big) \|_{L^\infty} \, \|\partial^{\varphi}_{i} \Pi\|_{m-2}$$
and the result follows by using \eqref{gues} again (note that $\Lambda_{\infty}$ involves
$\varepsilon^{1 \over 2} |h|_{3, \infty}$ and $\varepsilon^{1 \over 2} \|\partial_{zz}v\|_{L^\infty}.$) Actually, we get an estimate of $F_{S}^2$
that involves only $\varepsilon \|\partial_{zz}v\|_{L^\infty}$, but this improvement does not seem useful.
Next, we can use Proposition \ref{propPNS} and Proposition \ref{proppE} to estimate the pressure.
This yields
$$ \| F_{S}^2 \|_{m-2} \leq \Lambda_{\infty}\, \varepsilon \big( \|\partial_{zz}v\|_{m-2}
+ |h|_{m+{1 \over 2}} +|v^b|_{m+{1 \over 2}} \big) + \Lambda_{\infty}\big( |v|_{E^m} + |h|_{m - {{1 \over 2}}}\big)$$
and by using Lemma \ref{lemdzS} and Lemma \ref{mingrad}, we can rewrite this last estimate in the alternative form
\begin{equation}
\label{estFS2}
\| F_{S}^2 \|_{m-2} \leq \Lambda_{\infty}\, \varepsilon \big( \|\nabla^\varphi S_{{\bf n}}\|_{m-2}
+ |h|_{m+{1 \over 2}} +|v^b|_{m+{1 \over 2}} \big) + \Lambda_{\infty}\big( |v|_{E^m} + |h|_{m - {{1 \over 2}}}\big).
\end{equation}
Note that, thanks to the boundary condition \eqref{bordv2}, $S_{{\bf n}}$ satisfies a homogeneous Dirichlet boundary condition
on $z=0$,
\begin{equation}
\label{bordSN}
(S_{{\bf n}})_{/z=0}= 0
\end{equation}
hence we shall be able to estimate $S_{{\bf n}}$ through standard energy estimates.
We shall first prove for $m \geq 2$ the following estimate by induction:
\begin{align}
\label{estSnm-2pf}
& \|S_{{\bf n}}(t)\|_{m-2}^2 + \varepsilon \int_{0}^t \| \nabla^\varphi S_{{\bf n}}\|_{m-2}^2 \\
\nonumber & \leq \Lambda_{0}\|S_{{\bf n}}(0)\|_{m-2}^2 + \int_{0}^t\Lambda_{\infty}\big( \|v\|_{E^m}^2 + |h|_{m-{1 \over 2}}^2 + \varepsilon|h|_{m+{1 \over 2 }}^2 + \|S_{{\bf n}}\|_{m-2}^2\big)+ \varepsilon\int_{0}^t |v^b|_{m+{1 \over 2} } \|S_{{\bf n}}\|_{m-2}.
\end{align}
For the $L^2$ estimate (which corresponds to $m=2$), by using the boundary condition \eqref{bordSN} and Lemma \ref{lemipp}, we obtain
$$ {d \over dt} {1 \over 2} \int_{\mathcal{S}} | S_{{\bf n}}|^2 \, d\mathcal{V}_{t} + \varepsilon \int_{\mathcal{S}} | \nabla^\varphi S_{{\bf n}}|^2 \, d\mathcal{V}_{t}
= \int_{\mathcal{S}} F_{S} \cdot S_{{\bf n}}\, d\mathcal{V}_{t}.$$
To conclude, we integrate in time and we use
\eqref{estFS}, \eqref{estFS2} and the Young inequality to absorb the term $\| \nabla^\varphi S_{{\bf n}}\|$ in the left hand side. Note that
thanks to \eqref{apriori}, we can again use that
\begin{equation}
\label{remS0} \| \nabla^\varphi S_{{\bf n}}(t) \|^2 \leq \Lambda_{0} \int_{\mathcal{S}} |\nabla^\varphi S_{{\bf n}} |^2 \, d\mathcal{V}_{t}.
\end{equation}
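For instance, the contribution of the first term of \eqref{estFS2} to the right hand side is of the form $\Lambda_{\infty}\, \varepsilon\, \| \nabla^\varphi S_{{\bf n}}\|\, \|S_{{\bf n}}\|$ (up to a harmless Jacobian factor coming from $d\mathcal{V}_{t}$), and it is handled by the Young inequality: for every $\delta>0$,
$$ \Lambda_{\infty}\, \varepsilon\, \| \nabla^\varphi S_{{\bf n}}\|\, \|S_{{\bf n}}\| \leq \delta\, \varepsilon\, \| \nabla^\varphi S_{{\bf n}}\|^2 + {\Lambda_{\infty}^2 \over 4 \delta}\, \varepsilon\, \|S_{{\bf n}}\|^2, $$
and, choosing $\delta$ small enough (depending only on $\Lambda_{0}$), the first term is absorbed by the dissipation thanks to \eqref{remS0}, while the second one is of the type already present in the right hand side of \eqref{estSnm-2pf}.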
Assuming that the estimate \eqref{estSnm-2pf} holds with $m-2$ replaced by any $k \leq m-3$, we shall prove that it holds at the order $m-2$.
We first need to compute the equation
satisfied by $Z^\alpha S_{{\bf n}}$ for $| \alpha | \leq m-2.$ By using the expression \eqref{transportW} for the transport part
of the equation, we get that
\begin{equation}
\label{SNalpha}
\partial^{\varphi}_{t} Z^\alpha S_{{\bf n}}+ \big(v \cdot \nabla^\varphi\big)Z^\alpha S_{{\bf n}} -\varepsilon \Delta^\varphi Z^\alpha S_{{\bf n}}= Z^\alpha F_{S} + \mathcal{C}_{S}
\end{equation}
where the commutator is given by
\begin{equation}
\label{CS}
\mathcal{C}_{S}= \mathcal C_{S}^1 + \mathcal C_{S}^2\end{equation}
with
$$\mathcal{C}_{S}^1= [Z^\alpha, v_{y}]\cdot \nabla_{y} S_{{\bf n}} + [Z^\alpha, V_{z}] \partial_{z} S_{{\bf n}}:=
C_{Sy}+ C_{Sz}, \quad \mathcal{C}_{S}^2 = - \varepsilon [Z^\alpha, \Delta^\varphi] S_{{\bf n}}.
$$
Since $Z^\alpha S_{{\bf n}}$ vanishes on the boundary, we get by using Corollary \ref{coripp} and a standard energy estimate
that
\begin{equation}
\label{m-21}
{d \over dt} {1 \over 2} \int_{\mathcal{S}} |Z^\alpha S_{{\bf n}}|^2 \, d\mathcal{V}_{t} + \varepsilon \int_{\mathcal{S}} | \nabla^\varphi Z^\alpha S_{{\bf n}}|^2 \, d\mathcal{V}_{t}
= \int_{\mathcal{S}} \big( Z^\alpha F_{S} + \mathcal{C}_{S} \big) \cdot Z^\alpha S_{{\bf n}}\, d\mathcal{V}_{t}
\end{equation}
and we need to estimate the right hand-side. We shall begin with the part involving $\mathcal{C}_{S}$.
In view of the decomposition \eqref{CS}, we first observe that, thanks to \eqref{com},
\begin{equation}
\label{CSy} \|C_{Sy}\|
\leq \Lambda_{\infty}\big( \|S_{{\bf n}}\|_{m-2} + \|v\|_{E^{m-2}} \big).\end{equation}
To estimate $C_{Sz}$, we need to study the commutator more carefully in order to avoid the appearance of $\| \partial_{z} S_{{\bf n}}\|_{k}$,
which is not controlled (and cannot be controlled uniformly in $\varepsilon$ due to the presence of boundary layers).
By expanding the commutator and by using \eqref{idcom}, we see that we have to estimate terms like
$$ \| Z^\beta V_{z} \, \partial_{z} Z^\gamma S_{{\bf n}} \|$$
with $ | \beta | + | \gamma | \leq m-2$, $ | \gamma | \leq m-3$. Next, we shall use that
$$ Z^\beta V_{z} \, \partial_{z} Z^\gamma S_{{\bf n}}= {1 - z \over z } Z^\beta V_{z} \, Z_{3} Z^\gamma S_{{\bf n}}$$
and we can finally rewrite the above expression as a sum of terms under the form:
\begin{equation}
\label{CSz0} c_{\tilde \beta }Z^{\tilde \beta }\big( {1- z \over {z }} V_{z} \big) \, Z_{3} Z^\gamma S_{{\bf n}}\end{equation}
where $c_{\tilde \beta}$ are harmless bounded functions and $|\tilde \beta | \leq | \beta |$. Indeed, this comes from
the fact that $ Z_{3} ( (1-z)/z ) = \tilde{c} (1-z)/z$ for $\tilde c$ a smooth bounded function.
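Indeed, recalling that by the identity above $\partial_{z}= {1-z \over z}\, Z_{3}$, that is $Z_{3}= {z \over 1-z}\, \partial_{z}$, a direct computation gives
$$ Z_{3}\Big( {1-z \over z}\Big)= {z \over 1-z}\, \partial_{z}\Big( {1 \over z}- 1\Big)= - {1 \over z (1-z)}= - {1 \over (1-z)^2}\, {1-z \over z}, $$
so that one can take $\tilde{c}(z)= - (1-z)^{-2}$, which is indeed smooth and bounded on $\{ z \leq 0\}$.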
If $\tilde \beta = 0$, we get
that
$$ \left\| c_{\tilde \beta }Z^{\tilde \beta }\big( {1- z \over {z }} V_{z} \big) \, Z_{3} Z^\gamma S_{{\bf n}} \right\|
\lesssim \Big\| {1- z \over {z }} V_{z} \Big\|_{L^\infty}\, \|S_{{\bf n}}\|_{m-2}.$$
When $\tilde \beta \neq 0$ we can use \eqref{gues} to obtain that
$$ \left\| c_{\tilde \beta }Z^{\tilde \beta }\big( {1- z \over {z }} V_{z} \big) \, Z_{3} Z^\gamma S_{{\bf n}} \right\|
\lesssim \big\| Z \big( {1- z \over {z }} V_{z} \big) \big\|_{L^\infty} \|S_{{\bf n}} \|_{m-2} + \|S_{{\bf n}}\|_{L^\infty}
\big\| Z \big( {1- z \over {z }} V_{z} \big) \big\|_{m-3}.$$
Next, we observe that $$ \big\| Z \big( {1- z \over {z }} V_{z} \big) \big\|_{L^\infty} \lesssim \|V_{z} \|_{W^{1, \infty}} + \|ZV_{z}\|_{W^{1, \infty}}\lesssim \|V_{z}\|_{E^{2, \infty}}$$
since thanks to \eqref{Wdef} and \eqref{bordv1}, we have that $V_{z}$ vanishes on the boundary. Moreover,
again since $V_{z}$ vanishes on the boundary, we shall prove that
\begin{equation}
\label{hardyfin} \big\| Z \big( {1- z \over {z }} V_{z} \big) \big\|_{m-3} \lesssim \|ZV_{z}\|_{m-3} + \|\partial_{z} Z V_{z} \|_{m-3} + \|\partial_{z} V_{z} \|_{m-3}.\end{equation}
Since
$$ \big\| Z \big( {1- z \over {z }} V_{z} \big) \big\|_{m-3} \lesssim \big\| {1- z \over {z }} Z V_{z} \big\|_{m-3} + \big\| {1 \over z (1-z)}V_{z} \big\|_{m-3},
$$
we have to estimate
$$ \| {1 - z \over z} Z^\beta ZV_{z}\|, \quad \| {1 \over z(1-z)} Z^\beta V_{z}\|, \quad |\beta| \leq m-3.$$
We shall use the following variants of the Hardy inequality:
\begin{lem}
\label{hardybis} If $f(0)= 0$, we have the inequalities
\begin{align*}
& \int_{-\infty}^0 {1\over z^2(1-z)^2} |f(z)|^2 dz \lesssim \int_{-\infty}^0 |\partial_{z}f(z)|^2 \, dz, \\
& \quad \int_{-\infty}^0 \big({ 1- z \over z}\big)^2|f(z)|^2\, dz
\lesssim \int_{-\infty}^0\big( |f(z)|^2 + |\partial_{z}f(z)|^2 \big) dz.
\end{align*}
\end{lem}
The estimate \eqref{hardyfin} follows. Note that we had to be careful in these estimates since $V_{z}$ is not in $L^2$ while
its derivatives are. The proof of this Lemma, which relies only on an integration by parts, is given in section \ref{sectiontech}.
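Let us nevertheless sketch how the first inequality can be obtained (say for $f$ bounded): since $(1-z)^2 \geq 1$ for $z \leq 0$, it suffices to bound $\int_{-\infty}^0 z^{-2} |f|^2\, dz$, and writing $z^{-2}= \partial_{z}\big( - z^{-1} \big)$ and integrating by parts (the boundary term at $z=0$ vanishes because $f(0)=0$), we get
$$ \int_{-\infty}^0 { |f(z)|^2 \over z^2}\, dz = \int_{-\infty}^0 { 2 f(z)\, \partial_{z} f(z) \over z}\, dz \leq 2 \Big( \int_{-\infty}^0 { |f(z)|^2 \over z^2}\, dz \Big)^{1 \over 2} \Big( \int_{-\infty}^0 | \partial_{z} f(z)|^2\, dz \Big)^{1 \over 2}, $$
and the first inequality follows after absorbing the left hand side. The second inequality is obtained in the same way, after splitting the integral between $\{ -1 < z < 0 \}$, where $\big( { 1-z \over z} \big)^2 \leq 4 z^{-2}$, and $\{ z \leq -1 \}$, where $\big( {1- z \over z} \big)^2 \leq 4$.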
Looking at the previous estimates,
we have thus proven that
$$ \| [ Z^\alpha, V_{z}\partial_{z}] S_{{\bf n}}\| \leq \|V_{z}\|_{E^{2, \infty}} \|S_{{\bf n}}\|_{m-2} + \|S_{{\bf n}}\|_{L^\infty}\big( \|ZV_{z}\|_{E^{m-2}}
+ \|\partial_{z}V_{z}\|_{m-3}\big).
$$
Thanks to Proposition \ref{propeta}, we have that $\|V_{z}\|_{E^{2, \infty}}\leq \Lambda_{\infty}$.
Moreover, by using Lemma \ref{lemestW} and Remark \ref{remdzVz} we also find that
$$ \|ZV_{z}\|_{E^{m-2}} \leq \Lambda_{\infty}\big( \|v\|_{E^{m-1}} + |h|_{m-{1 \over 2}}\big).$$
We have thus proven the commutator estimate
\begin{equation}
\label{CSz}
\| [ Z^\alpha, V_{z}\partial_{z}] S_{{\bf n}}\| \leq \Lambda_{\infty} \big( \|v\|_{E^{m-1}} + |h|_{m-{1 \over 2}} + \|S_{{\bf n}}\|_{m-2}\big)
\end{equation}
and hence, combined with \eqref{CSy}, this yields
\begin{equation}
\label{CS1}
\|\mathcal{C}_{S}^1 \|
\leq \Lambda_{\infty}\big( \|S_{{\bf n}}\|_{m-2} + \|v\|_{E^{m-1}} + |h|_{m-{1 \over 2}} \big).
\end{equation}
Now, we shall estimate the term involving $\mathcal{C}_{S}^2$.
To expand this commutator, we can use the expression \eqref{deltaphi} of $\Delta^\varphi$. This yields
$$ \mathcal{C}_{S}^2 = \mathcal{C}_{S1}^2 + \mathcal{C}_{S2}^2 + \mathcal{C}_{S3}^2$$
where
$$
\mathcal{C}_{S1}^2= \varepsilon [Z^\alpha , {1 \over \partial_{z} \varphi}] \nabla \cdot \big( E \nabla S_{{\bf n}} \big), \quad
\mathcal{C}_{S2}^2=\varepsilon {1 \over \partial_{z} \varphi} [Z^\alpha, \nabla] \cdot \big( E \nabla S_{{\bf n}}\big), \quad
\mathcal{C}_{S3}^2 =\varepsilon {1 \over \partial_{z}\varphi} \nabla\cdot \big( [Z^\alpha, E \nabla ] S_{{\bf n}}\big).$$
By expanding the first commutator $\mathcal{C}_{S1}^2$, we get that we need to estimate terms under the form
$$\varepsilon \int_{\mathcal{S}} Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) Z^{\tilde\gamma} \big(\nabla \cdot ( E\nabla S_{{\bf n}})\big)\, \cdot Z^\alpha S_{{\bf n}} \, d\mathcal{V}_{t}$$
with $\beta + \tilde \gamma = \alpha$, $\beta \neq 0$ and next we can commute $Z^{\tilde \gamma}$ and $\nabla$
(we recall that $Z_{3}$ and $\partial_{z}$ do not commute)
to reduce the problem to the estimate of
$$ \varepsilon\int_{\mathcal{S}} Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) \partial_{i} Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j}\big)\, \cdot Z^\alpha S_{{\bf n}} \, d\mathcal{V}_{t} $$
with $ |\gamma| \leq| \tilde \gamma|$. Since $S_{{\bf n}}$ vanishes on the boundary, we can integrate by parts
to get
\begin{align*} & \Big| \varepsilon\int_{\mathcal{S}} Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) \partial_{i} Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j}\big)\, \cdot Z^\alpha S_{{\bf n}} \, d\mathcal{V}_{t}\Big| \leq
\\
& \Lambda_{0} \Big( \varepsilon \big\| Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j} \big)\big\|\,
\big( \| \nabla^\varphi Z^\alpha S_{{\bf n}}\| + \|S_{{\bf n}}\|_{m-2}\big) + \varepsilon \big\|\partial_{i} Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j} \big) \big\|\,
\| S_{{\bf n}}\|_{m-2}\Big).
\end{align*}
Next, we can use \eqref{gues}, Proposition \ref{propeta} and \eqref{ELinfty}, \eqref{EHs} to obtain
\begin{align*}
\varepsilon^{1 \over 2 } \big\| Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j} \big)\big\|
& \leq \varepsilon^{1 \over 2} \Big( \big\| Z\big({1 \over \partial_{z} \varphi}\big)\big\|_{L^\infty}\, \| E\nabla S_{{\bf n}}\|_{m-3} +
\| E \nabla S_{{\bf n}}\|_{L^\infty} \big\|Z\big( {1 \over \partial_{z} \varphi}\big) \big\|_{m-3}\Big) \\
& \leq \varepsilon^{1 \over 2} \Lambda_{0} \|\nabla S_{{\bf n}}\|_{m-3} + \Lambda_{\infty} \, |h|_{m-{3 \over 2}}.
\end{align*}
Note that $\Lambda_{\infty}$ contains a control of $\sqrt{\varepsilon}\|\partial_{zz}v\|_{L^\infty}$ that we have used to get the last line.
From the same arguments, we also get that
\begin{align*}
\varepsilon \big\|\partial_{i} Z^{\beta}\big( {1 \over \partial_{z}\varphi} \big) Z^\gamma\big( ( E\nabla S_{{\bf n}})_{j} \big) \big\|
\leq \varepsilon^{1 \over 2} \Lambda_{0} \|\nabla S_{{\bf n}}\|_{m-3} + \Lambda_{\infty} \, \varepsilon^{1 \over 2}\, |h|_{m-{1 \over 2}}
\end{align*}
We have thus proven the estimate
\begin{multline}
\label{CS21}
\Big| \int_{\mathcal{S}} \mathcal{C}_{S1}^2 \cdot Z^\alpha S_{{\bf n}} d\mathcal{V}_{t}\Big|
\leq \Lambda_{0}\big( \varepsilon^{1 \over 2} \| \nabla^\varphi Z^\alpha S_{{\bf n}} \| + \|S_{{\bf n}}\|_{m-2}\big) \\ \big( \varepsilon^{1\over 2 }\|\nabla S_{{\bf n}}\|_{m-3}+ \|S_{{\bf n}}\|_{m-2} + \Lambda_{\infty}( |h|_{m-{3 \over 2 }} +\varepsilon^{1 \over 2} |h|_{m-{1 \over 2}}) \big).
\end{multline}
By using \eqref{idcom} we see that to handle the term involving $\mathcal{C}_{S2}^2$, we have to estimate $$ \varepsilon \int_{\mathcal{S}}\partial_{z} Z^\beta\big( E\nabla S_{{\bf n}}\big) \cdot Z^\alpha S_{{\bf n}} \,dy dz$$
with $|\beta | \leq m-3$,
hence we perform an integration by parts as above to get that
\begin{equation}
\label{CS22}
\Big| \int_{\mathcal{S}} \mathcal{C}_{S2}^2 \cdot Z^\alpha S_{{\bf n}}\, d\mathcal{V}_{t} \Big|
\leq \Lambda_{0}\, \varepsilon^{1 \over 2} \| \nabla^\varphi Z^\alpha S_{{\bf n}} \| \big( \varepsilon^{1\over 2 }\|\nabla S_{{\bf n}}\|_{m-3}+ \Lambda_{\infty} |h|_{m-{3 \over 2 }}\big).
\end{equation} In a similar way, by integrating by parts, we get that \begin{align}
\label{CS23} \Big| \int_{\mathcal{S}} \mathcal{C}_{S3}^2 \cdot Z^\alpha S_{{\bf n}} \, d\mathcal{V}_{t}\Big| & \leq \varepsilon \| [Z^\alpha, E \nabla ] S_{{\bf n}} \| \, \| \nabla Z^\alpha S_{{\bf n}}\| \\ \nonumber & \leq \Lambda_{0} \varepsilon^{1 \over 2} \| \nabla^\varphi Z^\alpha S_{{\bf n}} \| \big( \varepsilon^{1\over 2 }\|\nabla S_{{\bf n}}\|_{m-3}+ \Lambda_{\infty} |h|_{m-{3 \over 2 }}\big).
\end{align}
In view of \eqref{CS21}, \eqref{CS22}, \eqref{CS23}, we have actually proven that
\begin{multline}
\label{CS2}
\Big| \int_{\mathcal{S}} \mathcal{C}_{S}^2 \cdot Z^\alpha S_{{\bf n}} d\mathcal{V}_{t}\Big|
\leq \Lambda_{0}\big( \varepsilon^{1 \over 2} \| \nabla Z^\alpha S_{{\bf n}} \| + \|S_{{\bf n}}\|_{m-2}\big) \\ \big( \varepsilon^{1\over 2 }\|\nabla S_{{\bf n}}\|_{m-3}+ \|S_{{\bf n}}\|_{m-2} + \Lambda_{\infty}( |h|_{m-{3 \over 2 }}+ \varepsilon^{1 \over 2} |h|_{m-{1 \over 2}})\big).
\end{multline}
To conclude our energy estimate, we can use the identity \eqref{m-21}, the estimates \eqref{estFS}, \eqref{estFS2} and \eqref{CS1}, \eqref{CS2},
to obtain
\begin{align*}
& {d \over dt} {1 \over 2} \int_{\mathcal{S}} |Z^\alpha S_{{\bf n}}|^2 \, d\mathcal{V}_{t} +{ \varepsilon\over 2} \int_{\mathcal{S}} | \nabla^\varphi Z^\alpha S_{{\bf n}}|^2 \, d\mathcal{V}_{t} \\
& \leq \Lambda_{\infty}\big( \|v\|_{E^m} + |h|_{m-{1 \over 2}} + \varepsilon^{1\over 2}|h|_{m+{1 \over 2 }} \big)\big( \|S_{{\bf n}}\|_{m-2} + |h|_{m-{1 \over 2 }}\big)
+ \varepsilon |v^b|_{m+{1 \over 2} } \|S_{{\bf n}}\|_{m-2} + \Lambda_{0}\, \varepsilon \|\nabla S_{{\bf n}}\|_{m-3}^2.
\end{align*}
Note that we have used the Young inequality and \eqref{remS0} for $Z^\alpha S_{{\bf n}}$ to absorb the term $\| \nabla^\varphi Z^\alpha S_{{\bf n}}\|$ by the energy dissipation term
in the left hand side. Next, we can integrate in time to obtain
\begin{align*}
& \|S_{{\bf n}}(t)\|_{m-2}^2 + \varepsilon \int_{0}^t \| \nabla^\varphi S_{{\bf n}}\|_{m-2}^2 \\
& \leq \Lambda_{0}\|S_{{\bf n}}(0)\|_{m-2}^2 + \int_{0}^t\Lambda_{\infty}\big( \|v\|_{E^m} + |h|_{m-{1 \over 2}} + \varepsilon^{1\over 2}|h|_{m+{1 \over 2 }} \big)\big( \|S_{{\bf n}}\|_{m-2} + |h|_{m-{1 \over 2 }}\big) \\
& \quad + \varepsilon\int_{0}^t |v^b|_{m+{1 \over 2} } \|S_{{\bf n}}\|_{m-2} + \Lambda_{0}\varepsilon \int_{0}^t \|\nabla S_{{\bf n}}\|_{m-3}^2
\end{align*}
and we finally get \eqref{estSnm-2pf} by using the induction assumption to control $\Lambda_{0}\varepsilon \int_{0}^t \|\nabla S_{{\bf n}}\|_{m-3}^2$.
To end the proof of Proposition \ref{propdzvm-2}, we can use again \eqref{vbm+}
to get
$$ \varepsilon\int_{0}^t |v^b|_{m+{1 \over 2} } \|S_{{\bf n}}\|_{m-2} \leq \varepsilon \int_{0}^t \|\nabla V^m\|^2 + \int_{0}^t \Lambda_{\infty}\big(\|V^m\|^2
+ \|S_{{\bf n}}\|_{m-2}^2 + \varepsilon |h|_{m+{1 \over 2}}^2 \big)$$
and we can use Lemma \ref{mingrad} to replace $ \|\nabla^\varphi S_{{\bf n}}\|_{m-2}$ by $\|\nabla S_{{\bf n}}\|_{m-2}$ in the left hand side.
This ends the proof of Proposition \ref{propdzvm-2}.
\end{proof}
\section{$L^\infty$ estimates}
\label{sectionLinfty}
In view of the estimate of Proposition \ref{propdzvm-2}, to close the argument, we need to estimate the $L^\infty$ norms contained in $\Lambda_{\infty}$
and $\int_{0}^t \|\partial_{z}v\|_{m-1}^2$. We shall first estimate the $L^\infty$ norms in terms of the quantities
in the left hand side of the estimate of Proposition \ref{propdzvm-2} and Proposition \ref{conormv}.
We shall begin with the estimates which can be easily obtained through Sobolev embeddings.
\begin{prop}
\label{propLinfty1}
We have the following estimates:
\begin{itemize}
\item for $ k\in \mathbb{N}$,
\begin{equation}
\label{hinfty} |h|_{k, \infty} + \sqrt{\varepsilon}|h|_{k+1, \infty} \lesssim |h|_{2+k}+ \varepsilon^{1 \over 2} |h|_{2+k+{1 \over 2}},
\end{equation}
\item for the velocity, we have
\begin{equation}
\label{u2infty}
\|v(t)\|_{2, \infty} \leq \Lambda\big({ 1 \over c_{0}}, |h|_{4, \infty} + \|V^4\| + \|S_{{\bf n}}\|_{3}\big)
\end{equation}
\item and also
\begin{equation}
\label{dzv1infty1}
\|\partial_{z}v \|_{1, \infty} \leq \Lambda_{0} \big( \|S_{{\bf n}}\|_{1, \infty} + \|v\|_{2, \infty}\big), \quad \sqrt{\varepsilon} \|\partial_{zz}v\|_{L^\infty}
\leq \Lambda_{0} \big(\sqrt{\varepsilon} \|\partial_{z}S_{{\bf n}}\|_{L^\infty} + \|S_{{\bf n}}\|_{1, \infty} + \|v\|_{2, \infty}\big ).
\end{equation}
\end{itemize}
\end{prop}
\begin{proof}
The estimate \eqref{hinfty} is a direct consequence of the Sobolev embedding in dimension $2$.
To get the estimate \eqref{u2infty}, we first use the anisotropic Sobolev embedding \eqref{emb} to obtain that
\begin{equation}
\label{dzv3inf} \| Z^\alpha v\|_{\infty}^2 \lesssim \|\partial_{z} v\|_{3} \, \|v\|_{4}, \quad |\alpha| \leq 2\end{equation}
and next we use \eqref{dzVid}, \eqref{Subis} and \eqref{Subis1} to estimate $ \|\partial_{z} v\|_{3} $
in terms of $\|S_{{\bf n}}\|_{3}$. Note that here we estimate the products by always putting
$v$ in $L^2$ and the various coefficients in $L^\infty$ (and thus, we use Proposition \ref{propeta} to control them in $L^\infty$).
This yields
\begin{equation}
\label{dzv3fin} \|\partial_{z}v\|_{3} \leq \Lambda\big({1\over c_{0}}, |h|_{4, \infty}+ \|S_{{\bf n}}\|_{3} + \|v\|_{4}\big).\end{equation}
Finally, by using the definition of $V^\alpha= Z^\alpha v -\partial_{z}^\varphi v Z^\alpha h$, we also get that
$$ \|v\|_{4} \lesssim \|V^4\| + \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty}\big) \|\partial_{z}v \| \leq \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty}\big)
\big( \|V^4\| + \|S_{n}\| + \|v\|_{1}\big).$$
Next, by crude interpolation, we can write
$$ \|v\|_{1} \lesssim \|v\|_{4}^{1 \over 2} \|v\|^{1\over 2}$$ and
hence the Young inequality yields
$$ \|v\|_{4} \lesssim \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty} + \|V^4\| + \|S_{{\bf n}}\| + \|v\|\big)$$
and since $V^0=v$, we have actually proven that
$$ \|v\|_{4} \lesssim \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty} + \|V^4\| + \|S_{{\bf n}}\|\big).$$
By plugging this estimate in \eqref{dzv3fin} and \eqref{dzv3inf}, we finally obtain \eqref{u2infty}.
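In the chain of estimates above, the absorption can be made explicit as follows (absorbing harmless constants in $\Lambda$): by the Young inequality,
$$ \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty}\big)\, \|v\|_{1} \lesssim \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty}\big)\, \|v\|_{4}^{1 \over 2}\, \|v\|^{1 \over 2} \leq {1 \over 2}\, \|v\|_{4} + \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty}\big)\, \|v\|, $$
and the term ${1 \over 2} \|v\|_{4}$ is absorbed in the left hand side.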
The estimates in \eqref{dzv1infty1} are also a consequence of \eqref{dzVid} and \eqref{Subis}, \eqref{Subis1}.
\end{proof}
As a consequence of Proposition \ref{propLinfty1}, we see that the only $L^\infty$ norms that
appear in the estimates of Proposition \ref{conormv} and Proposition \ref{propdzvm-2} that remain to be estimated
are $\|S_{{\bf n}}\|_{1, \infty}$ and $\varepsilon^{1 \over 2} \| \partial_{z}S_{{\bf n}}\|_{L^\infty}$.
Moreover, by using the following Lemma, we see that actually
we only have to prove estimates for
$ \| \chi S_{{\bf n}}\|_{1, \infty}$ and $\varepsilon^{1 \over 2} \| \chi \partial_{z} S_{{\bf n}}\|_{L^\infty}$
for $\chi(z)$ compactly supported and such that $\chi = 1$ in a vicinity of $z=0$.
\begin{lem}
For any smooth cut-off function $\chi$
such that $\chi= 0$ in a vicinity of $z=0$, we have for $m > k+3/2$:
\begin{equation}
\label{inftyint}
\| \chi f\|_{ W^{k, \infty}} \lesssim \|f\|_{m}.
\end{equation}
\end{lem}
To get this Lemma, it suffices to
use the Sobolev embedding of $H^k(\mathcal{S})$ in $L^\infty(\mathcal{S})$
for $k>3/2$ and then to use the fact that away from the boundary the conormal
Sobolev norm $\| \cdot \|_{k}$ is equivalent to the standard $H^k$ norm.
To summarize the estimates of Proposition \ref{propLinfty1} and the last remark, it is convenient to introduce the notation
\begin{equation}
\label{defQm}
\mathcal{Q}_{m}(t)= |h(t)|_{m}^2 + \varepsilon |h(t)|_{m+{1\over 2}}^2 + \|V^m \|^2 + \|S_{{\bf n}}(t)\|_{m-2}^2 + \| S_{{\bf n}}(t)\|_{1, \infty}^2 +
\varepsilon \|\partial_{z} S_{{\bf n}}(t) \|_{L^\infty}^2
\end{equation}
and use it to state
\begin{cor}
\label{corLinfty}
For $m\geq 6$, we have the following estimates for the $L^\infty$ norms:
$$ |h(t)|_{4, \infty} + \|v(t)\|_{2, \infty} + \|\partial_{z}v(t) \|_{1, \infty} + \sqrt{\varepsilon}\|\partial_{zz}v(t)\|_{L^\infty}
\leq \Lambda\big({1 \over c_{0}}, \mathcal Q_{m}(t)\big).$$
\end{cor}
The next step will be to estimate
$ \| S_{{\bf n}}\|_{1, \infty}$ and $\varepsilon^{1 \over 2} \| \partial_{z} S_{{\bf n}}\|_{L^\infty}$.
Let us set
\begin{equation}
\label{deflambdainfty} \Lambda_{\infty, m}(t)= \Lambda\big({1 \over c_{0}},
|h|_{4, \infty} + \|v\|_{E^{2, \infty}} + \varepsilon^{1 \over 2} \|\partial_{zz}v\|_{L^\infty} +|h|_{m} + \|v\|_{m} + \|\partial_{z}v\|_{m-2} +
\sqrt{\varepsilon}|h|_{m+{1 \over 2}} \big).
\end{equation}
Note that thanks to Corollary \ref{corLinfty} and \eqref{dzvk}, we have
\begin{equation}
\label{suplambdainfty}
\Lambda_{\infty, m}(t) \leq \Lambda\big({1 \over c_{0}}, \mathcal{Q}_{m}(t)\big), \quad m \geq 6.\end{equation}
\begin{prop}
\label{propdzvinfty}
For $t \in [0, T^\varepsilon], $ and $m \geq 6$, we have the estimate
\begin{align*}
\| S_{{\bf n}} (t) \|_{1, \infty}^2 & \leq \Lambda_{0}\Big( \| S_{{\bf n}}(0)\|_{1, \infty }^2 + \Lambda\big( {1 \over c_{0}}, |h(t)|_{m}+ \|V^m(t) \|+ \|S_{{\bf n}}(t)\|_{m-2}\big) \\
& \quad \quad \quad
+ \int_{0}^t\big( \varepsilon \|\nabla V^m \|^2 + \varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^2\big) +(1+t) \int_{0}^t \Lambda_{\infty, m}(t)\Big).
\end{align*}
\end{prop}
Note that in the above estimate, the terms
$\Lambda\big( {1 \over c_{0}}, |h(t)|_{m}+ \|V^m(t) \|+ \|S_{{\bf n}}(t)\|_{m-2}\big)$ and
$ \int_{0}^t\big( \varepsilon \|\nabla V^m \|^2 + \varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^2\big)$ can be estimated by using the estimates of Proposition \ref{conormv}
and Proposition \ref{propdzvm-2}.
\begin{proof}
To estimate $\| S_{{\bf n}}\|_{1, \infty}$, we shall perform
directly $L^\infty$ estimates on the convection diffusion equation \eqref{eqPiS} solved by $ S_{{\bf n}}.$ As before,
the main interest in the use of $S_{{\bf n}}$ is the fact that it verifies the homogeneous Dirichlet boundary condition
\eqref{bordSN} on the boundary.
The estimate of $\| S_{{\bf n}}\|_{L^\infty}$ is a direct consequence of the maximum principle for the convection diffusion
equation \eqref{eqPiS} with homogeneous Dirichlet boundary condition. We find
$$ \|S_{{\bf n}}\|_{L^\infty} \leq \|S_{{\bf n}}(0)\|_{L^\infty} + \int_{0}^t \|F_{S}\|_{L^\infty}$$
and from a crude estimate, we get, in view of the expressions \eqref{FS1}, \eqref{FS2}, that
$$ \| F_{S}\|_{L^\infty} \leq \Lambda_{\infty, m} + \Lambda_{0} \| \nabla q \|_{E^{1, \infty}}.$$
To estimate the pressure, we use \eqref{estqElinfty} to get
$$ \| \nabla q^E\|_{E^{1, \infty}} \leq \Lambda_{\infty, m}$$
and \eqref{estqNS} and the Sobolev embedding in $\mathcal{S}$ to get
\begin{equation}
\label{qNS1infty} \| \nabla q^{NS} \|_{E^{1, \infty}} \lesssim \| \nabla q^{NS}\|_{H^{3}} \leq \Lambda_{\infty}\big( \|\varepsilon v^b\|_{9\over 2} + \|\varepsilon h\|_{9 \over 2}).\end{equation}
To control $ \|\varepsilon v^b\|_{9\over 2}$, we use again \eqref{vbm+}
to get
\begin{equation}
\label{SnqNS}
\| \nabla q^{NS} \|_{E^{1, \infty}} \lesssim \Lambda_{\infty, m}\, \varepsilon \big( \| \nabla V^k \| + \|V^k \|+ |h|_{k+{1 \over 2 }}\big),
\quad k \geq 4.
\end{equation}
Consequently, we obtain by using the Cauchy-Schwarz and Young inequalities that
\begin{equation}
\label{Sninfty1} \|S_{{\bf n}}(t)\|_{L^\infty} \leq \|S_{{\bf n}}(0)\|_{L^\infty} + \varepsilon\int_{0}^t \| S^\varphi V^m \|^2 + (1+ t) \int_{0}^t \Lambda_{\infty, m}.
\end{equation}
The next step is to estimate $\|\chi Z S_{{\bf n}}\|_{L^\infty}$. Indeed, from the definition of Sobolev conormal spaces, we get that
$$ \| Z S_{{\bf n}}\|_{L^\infty} \lesssim \| \chi Z S_{{\bf n}}\|_{L^\infty} + \|v\|_{2, \infty} $$
and hence thanks to \eqref{u2infty}, we find
\begin{equation}
\label{Slocalise} \| Z S_{{\bf n}}\|_{L^\infty} \lesssim \| \chi Z S_{{\bf n}}\|_{L^\infty} + \Lambda\big( {1 \over c_{0}}, |h|_{m} +\|V^m\| + \|S_{{\bf n}}\|_{m-2}
\big), \quad m \geq 6
.\end{equation}
The main difficulty is to handle the commutator between the fields $Z_{i}$ and
the Laplacian $\Delta^\varphi$. As in our previous work \cite{MR11prep},
it is convenient for this estimate to use a coordinate system where the Laplacian has the simplest expression. We shall
thus use a normal geodesic coordinate system in the vicinity of the boundary. Note that we have not used
this coordinate system before because it requires more regularity for the boundary: to get an
$H^m$ (or $\mathcal{C}^m$) coordinate system, we need the boundary to be $H^{m+1}$ (or $\mathcal{C}^{m+1}$).
Nevertheless, at this step, this is not a problem since we want to estimate a fixed low number of derivatives
of the velocity while we can assume that the boundary is $H^m$ for $m$ as large as we need. We shall choose the cut-off
function $\chi$ in order to get a well defined coordinate system in the vicinity of the boundary.
We define a different parametrization of the vicinity of the boundary of $\Omega_{t}$ by
\begin{eqnarray}
\label{diffgeo}
\Psi(t, \cdot) : \, \mathcal{S}= \mathbb{R}^2 \times (-\infty, 0) & \rightarrow & \Omega_{t} \\
\nonumber x= (y,z) &\mapsto & \left( \begin{array}{ll} y \\ h(t,y) \end{array} \right) + z {\bf n}^b(t,y)
\end{eqnarray}
where ${\bf n}^b$ is the unit exterior normal ${\bf n}^b(t,y)= (-\partial_{1}h, - \partial_{2} h, 1)/ |{\bf N}|$.
Note that $D\Psi(t, \cdot)$ is of the form $M_{0}+ R$ where
$ |R|_{\infty} \lesssim z |h|_{2, \infty}$ and
$$M_0= \left( \begin{array}{ccc} 1 & 0 & -\partial_{1}h \\ 0 & 1 & - \partial_{2} h \\ \partial_{1} h & \partial_{2} h & 1 \end{array}\right)$$
is invertible. This yields that $\Psi(t, \cdot) $ is a diffeomorphism from $\mathbb{R}^2\times( - \delta, 0)$ to a vicinity of
$\partial \Omega_{t}$ for some $\delta$ which depends only on $c_{0}$, and for every $t \in [0, T^\varepsilon]$
thanks to \eqref{apriori}. By this parametrization, the scalar product in $\Omega_{t}$ induces a Riemannian
metric on $T\mathcal{S}$ which has the block structure
\begin{equation}
\label{blockg} g(y,z)= \left( \begin{array}{cc} \tilde{g}(y,z) & 0 \\ 0 & 1 \end{array} \right)\end{equation}
and hence, the Laplacian in this coordinate system reads:
\begin{equation}
\label{laplacien} \Delta_{g}f= \partial_{zz}f + {1 \over 2 } \partial_{z} \big( \ln |g| \big) \partial_{z} f + \Delta_{\tilde g} f
\end{equation}
where $|g|$ denotes the determinant of the matrix $g$ and where $\Delta_{\tilde{g}}$, defined by
$$ \Delta_{\tilde{g}} f= { 1 \over |\tilde{g}|^{1 \over 2} } \sum_{1 \leq i, \, j \leq 2} \partial_{y^i}\big(
\tilde{g}^{ij} |\tilde{g}|^{1 \over 2} \partial_{y^j} f \big),$$
involves only tangential derivatives.
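Let us recall where \eqref{laplacien} comes from: in the coordinates given by $\Psi$, the Laplacian becomes the Laplace-Beltrami operator of the induced metric $g$, namely
$$ \Delta_{g} f= {1 \over |g|^{1 \over 2}} \sum_{1 \leq i, \, j \leq 3} \partial_{i}\big( g^{ij} |g|^{1 \over 2} \partial_{j} f \big), $$
and the block structure \eqref{blockg} yields $g^{33}= 1$, $g^{3j}= 0$ for $j \neq 3$ and $|g|= |\tilde{g}|$, so that the term with $i=j=3$ produces exactly $\partial_{zz} f + {1 \over 2} \partial_{z}\big( \ln |g| \big) \partial_{z} f$, while the remaining terms give $\Delta_{\tilde{g}} f$.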
Note that the drawback of this system is that it depends on $h$ and $\nabla h$ via ${\bf n}$, and thus it loses one degree of regularity.
To use this coordinate system, we shall first localize the equation for $S^\varphi v$ in a vicinity of the boundary.
Let us set
\begin{equation}
\label{Schidef}
S^\chi= \chi(z) S^\varphi v
\end{equation}
where $\chi(z) = \kappa({z \over \delta(c_{0}) })\in [0, 1]$ and $\kappa$ is smooth compactly supported and takes the value $1$ in a vicinity of $z=0$. Note that this choice implies that
\begin{equation}
\label{chideriv}| \chi^{(k)}(z)| \leq \Lambda_{0}\end{equation}
for every $k \geq 1$.
Thanks to \eqref{eqS}, we get for $S^\chi$ the equation
\begin{equation}
\label{Schieq}
\partial_{t}^\varphi S^\chi + (v \cdot \nabla^\varphi)S^\chi - \varepsilon \Delta^\varphi S^\chi= F_{S^\chi}
\end{equation}
where the source term $F_{S^\chi}$ can be split into
\begin{equation}
\label{Fchidef}
F_{S^\chi}= F^\chi + F_{v}
\end{equation}
with
\begin{eqnarray*}
& & F^\chi= \big(V_{z} \partial_{z} \chi\big) S^\varphi v - \varepsilon \nabla^\varphi \chi \cdot \nabla^\varphi S^\varphi v - \varepsilon \Delta^\varphi \chi \, S^\varphi v , \\
& & F_{v} = - \chi \big( D^\varphi)^2 q -{\chi \over 2}\big( (\nabla ^\varphi v)^2 + ((\nabla^\varphi v)^t)^2\big).
\end{eqnarray*}
Note that thanks to Lemma \ref{lemestW} and \eqref{inftyint}, since all the terms in $F^\chi$ are supported away
from the boundary, we get that
\begin{equation}
\label{Fchiest}
\|F^\chi\|_{1, \infty} \leq \Lambda\big( {1 \over c_{0}}, \|v\|_{1, \infty} + |h|_{2, \infty}\big) \|v\|_{5} \leq \Lambda_{\infty, m}.
\end{equation}
Next, we implicitly define $\tilde{S}$ in $\Omega_{t}$ by $\tilde S (t, \Phi(t,y,z))= S^\chi(t,y,z)$
and then $S^\Psi$ in $\mathcal{S}$ by $S^\Psi(t,y,z)= \tilde S (t,\Psi(t,y,z))= S^\chi(t, \Phi(t, \cdot)^{-1} \circ \Psi).$ The change of variable is well-defined since we can
choose $\tilde S$ to be supported in the domain where $\Psi^{-1}$ is well defined by taking $\delta$ sufficiently small.
Since $S^\chi$ solves \eqref{Schieq}, we get that $\tilde{S}$ solves
$$ \partial_{t} \tilde S + u \cdot \nabla \tilde S - \varepsilon \Delta \tilde S = F_{S^\chi}(t, \Phi(t, \cdot)^{-1})$$
in $\Omega_{t}$
and hence by using \eqref{laplacien}, we get that $S^\Psi$ solves in $\mathcal{S}$ the convection diffusion equation
\begin{equation}
\label{eqSPsi}
\partial_{t} S^\Psi + w \cdot \nabla S^\Psi - \varepsilon\big( \partial_{zz} S^\Psi + {1 \over 2 } \partial_{z} \big( \ln |g| \big) \partial_{z} S^\Psi + \Delta_{\tilde g} S^\Psi \big)= F_{S^\chi} (t, \Phi^{-1} \circ \Psi)
\end{equation}
where the vector field $w$ is given by
\begin{equation}
\label{defW}
w= \overline{\chi} (D \Psi)^{-1}\big( v(t, \Phi^{-1} \circ \Psi ) - \partial_{t} \Psi \big)
\end{equation}
with $D \Psi$ the Jacobian matrix of $\Psi$ (with respect to the space variables). Note that
$S^\Psi$ is compactly supported in $z$ in a vicinity of $z=0$. The function $\overline{\chi}(z)$ is
a cut-off with a slightly larger support such that $\overline \chi S^\Psi= S^\Psi$;
its introduction ensures that $w$ is also supported in a vicinity of the boundary.
We finally set
\begin{equation}
\label{etadef}
S_{{\bf n}}^\Psi(t,y,z)=\Pi^b(t,y) S^\Psi {\bf n}^b(t,y)
\end{equation}
with $\Pi^b= \mbox{Id}- {\bf n}^b \otimes {\bf n}^b$. Note that $\Pi^b$ and
${\bf n}^b$ are independent of $z$.
This yields that $S_{{\bf n}}^\Psi$ solves
\begin{equation}
\label{eqSnPsi} \partial_{t} S_{{\bf n}}^\Psi + w \cdot \nabla S_{{\bf n}}^\Psi - \varepsilon\big( \partial_{zz} + {1 \over 2 } \partial_{z} \big( \ln |g| \big) \partial_{z} \big) S_{{\bf n}}^\Psi = F_{{\bf n}}^\Psi\end{equation}
where
\begin{equation}
\label{Fetadef}
F_{{\bf n}}^\Psi=\Big( \Pi^b F_{S^\chi} {\bf n}^b + F_{{\bf n}}^{\Psi, 1} + F_{{\bf n}}^{\Psi, 2} \Big).
\end{equation}
with
\begin{eqnarray}
& & \label{Feta1}
F_{{\bf n}}^{\Psi, 1}= \big(( \partial_{t} + w_{y} \cdot \nabla_{y}) \Pi^b\big)S^\Psi {\bf n}^b + \Pi^b S^\Psi\big( \partial_{t}+ w_{y}\cdot \nabla_{y}) n^b, \\
& & \label{Feta2} F_{{\bf n}}^{\Psi, 2}= - \varepsilon \Pi^b\big( \Delta_{\tilde{g}} S^\Psi\big) {\bf n}^b.
\end{eqnarray}
We shall thus estimate $S_{{\bf n}}^\Psi$ which solves the equation \eqref{eqSnPsi} in $\mathcal{S}$ with the
homogeneous Dirichlet boundary
condition $(S_{{\bf n}}^\Psi)_{/z=0}= 0$ (indeed, on the boundary $z=0$, we have that $S_{{\bf n}}^\Psi=\Pi S^\varphi v {\bf n}= S_{{\bf n}}$.)
In the following, we shall use very often the following observation:
\begin{lem}
\label{leminv}
Consider $\mathcal{T}: \mathcal{S} \rightarrow \mathcal{S}$ such that $\mathcal{T}(y,0)= y, \quad \forall y \in \mathbb{R}^2$
and let $g(x)= f(\mathcal{T} x)$. Then, we have for every $k \geq 1$, the estimate:
$$ \|g\|_{k, \infty} \leq \Lambda\big( \| \nabla \mathcal{T}\|_{k-1,\infty} \big) \|f\|_{k,\infty}.$$
\end{lem}
This is just a statement of the fact that Sobolev conormal spaces are invariant by diffeomorphisms
which preserve the boundary. The proof is just a consequence of the chain rule and the fact that the family $(Z_{1}, \, Z_{2}, \, Z_{3})$ generates
the set of vector fields tangent to the boundary. A similar statement holds for the $\| \cdot \|_{m}$ norm.
Note that it is equivalent to estimate $S_{{\bf n}}^\Psi$ or $S_{{\bf n}}$. Indeed, since $S_{{\bf n}}^\Psi = \Pi^b S^\chi(t, \Phi^{-1}\circ \Psi) n^b$,
the above lemma yields
$$ \| S_{{\bf n}}^\Psi \|_{1, \infty} \leq \Lambda \big( |h|_{2, \infty} \big) \| \Pi^b S^\varphi v \, {\bf n}^b \|_{1, \infty}$$
and we observe that $|\Pi- \Pi^b| + | {\bf n}- {\bf n}^b|= \mathcal{O}(z)$ in the vicinity of the boundary to get
\begin{equation}
\label{etaSn}
\| S_{{\bf n}}^\Psi \|_{1, \infty} \leq \Lambda_{0}\big( \|S_{{\bf n}}\|_{1, \infty} + \|v\|_{2, \infty}\big).
\end{equation}
From the same arguments, we also obtain that
\begin{align}
\nonumber \| S_{{\bf n}}\|_{1, \infty} & \leq \Lambda_{0} \big( \|S_{{\bf n}}^\Psi \|_{1, \infty}+ \|v\|_{2, \infty}\big) \\
\label{Sneta} & \leq \Lambda_{0}
\Big( \|S_{{\bf n}}^\Psi \|_{1, \infty} + \Lambda\big( {1 \over c_{0}}, |h|_{m} + \|V^m\| + \|S_{{\bf n}}\|_{m-2}\big)\Big), \quad m \geq 6
\end{align}
where the last estimate comes from \eqref{u2infty}.
By using these arguments repeatedly, we also obtain
\begin{equation}
\label{w1infty} \|w\|_{1, \infty} \leq \Lambda\big( {1 \over c_{0}}, |h|_{3, \infty} + \|v\|_{1, \infty}+ \|\partial_{t}\Psi\|_{1, \infty} \big)
\leq \Lambda\big( {1 \over c_{0}}, |h|_{3, \infty} + \|v\|_{2, \infty}+ |\partial_{t}h|_{2, \infty} \big) \leq \Lambda_{\infty, m}
\end{equation}
where $\Lambda_{\infty, m}$ is defined by \eqref{deflambdainfty}.
Note that we have used the boundary condition \eqref{bord1} to get the last part of the estimate.
To estimate $Z_{i} S_{{\bf n}}^\Psi= \partial_{i} S_{{\bf n}}^\Psi$, $i=1, \, 2$, we can proceed as in the proof
of
estimate \eqref{Sninfty1}. We first apply $\partial_{i}, \, i=1, \, 2$ to \eqref{eqSnPsi} to get
\begin{equation}
\label{diSnPsi}
\partial_{t} \partial_{i }S_{{\bf n}}^\Psi + w \cdot \nabla \partial_{i }S_{{\bf n}}^\Psi - \varepsilon\big( \partial_{zz} + {1 \over 2 } \partial_{z} \big( \ln |g| \big) \partial_{z} \big) \partial_{i }S_{{\bf n}}^\Psi =\partial_{i}F_{{\bf n}}^\Psi - \partial_{i} w \cdot \nabla S_{{\bf n}}^\Psi -{\varepsilon \over 2} \partial_{z} S_{{\bf n}}^\Psi\, \partial_{iz}^2 \big( \ln |g| \big).\end{equation}
From the maximum principle, we thus get that
\begin{eqnarray}
\nonumber \| \partial_{i }S_{{\bf n}}^\Psi (t) \|_{L^\infty} & \leq & \| \partial_{i }S_{{\bf n}}^\Psi (0)\|_{L^\infty}
+ \int_{0}^t \Big( \| \partial_{i} F_{{\bf n}}^\Psi\|_{L^\infty}+ \|\partial_{i}w \cdot \nabla S_{{\bf n}}^\Psi \|_{L^\infty} + \Lambda\big( {1 \over c_{0}}, |h|_{3, \infty}\big)
\varepsilon \| \partial_{z} S_{{\bf n}}^\Psi\|_{L^\infty}\Big)\\
\label{diSnPSi1} &\leq & \| \partial_{i }S_{{\bf n}}^\Psi (0)\|_{L^\infty}
+ \int_{0}^t \Big( \| \partial_{i} F_{{\bf n}}^{\Psi}\|_{L^\infty}+ \|\partial_{i}w \cdot \nabla S_{{\bf n}}^\Psi \|_{L^\infty} + \Lambda_{\infty, m}\Big) .
\end{eqnarray}
Note that for the last term, we have used that $g$ involves two derivatives of $h$ but is $\mathcal{C}^\infty $ in $z$ in view
of the definition \eqref{diffgeo} and hence that $\partial_{iz}^2 |g|$ has the regularity of $\partial_{i}|g|$.
It remains to estimate the right hand side in this last estimate. We first note that
$$ \|\partial_{i}w \cdot \nabla S_{{\bf n}}^\Psi \|_{L^\infty} \leq \| w\|_{1, \infty} \| S_{{\bf n}}^\Psi \|_{1, \infty} + \|\partial_{i}w_{3} \partial_{z} S_{{\bf n}}^\Psi\|_{L^\infty}
\leq \Lambda_{\infty, m} + \|\partial_{i}w_{3} \partial_{z} S_{{\bf n}}^\Psi\|_{L^\infty}.$$
Next, thanks to the definition \eqref{defW} of $w$, we note that on the boundary, we have
\begin{equation}
\label{wbord}
w^b=( D \Psi(t,y,0))^{-1}\Big(v^b - \left( \begin{array}{cc} 0 \\ \partial_{t}h \end{array} \right) \Big).
\end{equation}
Since on the boundary, we have
$$ D\Psi(t,y,0)= \left( \begin{array}{ccc}
1 & 0 & {\bf n}_{1}^b \\ 0 & 1 & {\bf n}_{2}^b \\ \partial_{1} h & \partial_{2} h & {\bf n}_{3}^b \end{array} \right), $$
we get, since the third row of $(D\Psi(t,y,0))^{-1}$ is $|{\bf N}|^{-1}\, (-\partial_{1} h, - \partial_{2} h, 1)= {\bf n}^b$, that
\begin{equation}
\label{cdv} ((D\Psi(t,y,0))^{-1} Y)_{3}= Y \cdot {\bf n}^b
\end{equation} for every $Y \in \mathbb{R}^3$ and hence in particular that on the boundary
\begin{equation}
\label{w3bord}
w_{3}^b= v^b \cdot {\bf n}^b - \partial_{t} h {\bf n}_{3}^b= {1 \over |{\bf N}|} \big( v^b \cdot {\bf N} - \partial_{t}h)= 0
\end{equation}
thanks to the boundary condition \eqref{bordv1}. Consequently, since $\partial_{i} w_{3}$ also vanishes on the boundary we get
that
\begin{equation}
\label{diwS} \|\partial_{i}w_{3} \partial_{z} S_{{\bf n}}^\Psi\|_{L^\infty} \leq \| \partial_{z} \partial_{i} w_{3}\|_{L^\infty} \|S_{{\bf n}}^\Psi \|_{1, \infty} \leq
\Lambda\big({1 \over c_{0}}, |h|_{3, \infty} + \|v\|_{E^{2, \infty}}\big) \leq \Lambda_{\infty, m}.
\end{equation}
Note that for the last estimate, we have used again that $\Psi$ is $\mathcal{C}^\infty$ in $z$.
It remains to estimate $ \|F_{{\bf n}}^\Psi\|_{1, \infty}$ in the right-hand side of \eqref{diSnPSi1}. By using \eqref{Fetadef} and
the expressions \eqref{Feta1}, \eqref{Feta2}, we get
$$ \|F_{\bf n}^{\Psi, 1} \|_{1, \infty} \leq \Lambda\big( {1 \over c_{0}}, | \partial_{t} h|_{2, \infty}+ |h|_{3, \infty}
+ \|w\|_{1, \infty}\big)\big( \| v\|_{E^{2, \infty}} + \varepsilon \| \partial_{z}v \|_{2, \infty}\big) \leq \Lambda_{\infty, m}$$
and by using again the fact that $|\Pi-\Pi^b |+ |{\bf n}-{\bf n}^b| =\mathcal{O}(z)$, we get that
$$ \|F_{{\bf n}}^{\Psi, 2} \|_{1, \infty} \leq \Lambda_{\infty, m}\big( \varepsilon \| S_{{\bf n}}\|_{3, \infty}+ \varepsilon \|v\|_{4, \infty} \big).$$
Next, from the definitions \eqref{Fetadef}, \eqref{Fchidef}, we get thanks to \eqref{Fchiest} that
$$ \| F_{{\bf n}}^\Psi \|_{1, \infty} \leq \Lambda_{\infty, m} \big( 1 + \varepsilon \| S_{{\bf n}}\|_{3, \infty}+ \varepsilon \|v\|_{4, \infty} + \|
\Pi^b \big( (D^\varphi)^2 q \big){\bf n}^b \|_{1, \infty} \big).$$
To estimate the pressure term, we use that $\Pi \nabla^\varphi$ involves only conormal derivatives
and the decomposition \eqref{pressuredec} to obtain
$$ \| \Pi^b \big( (D^\varphi)^2 q \big){\bf n}^b \|_{1, \infty} \leq \Lambda_{0} \big( \|\nabla q^E \|_{2, \infty} + \|\nabla q^{NS}\|_{2, \infty}\big).$$
For the Euler part $q^E$ of the pressure, we use \eqref{estqElinfty} to get
$ \|\nabla q^E \|_{2, \infty} \leq \Lambda_{\infty}$ and for $q^{NS}$, we can use again
Proposition \ref{propPNS} to get that an estimate analogous to \eqref{SnqNS} holds for $ \| \nabla q^{NS}\|_{{2, \infty}}$
but with the restriction $k \geq 5$ for the right hand side.
This yields
$$ \| \Pi^b \big( (D^\varphi)^2 q \big){\bf n}^b \|_{1, \infty} \leq \Lambda_{0} \big( 1 + \varepsilon \|S^\varphi V^m\|\big).$$
We have thus proven that
\begin{equation}
\label{Fetainfty}
\| F_{{\bf n}}^\Psi\|_{1, \infty} \leq \Lambda_{\infty, m}
\big( 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty}\big).
\end{equation} Consequently, by combining \eqref{diSnPSi1}, \eqref{diwS} and \eqref{Fetainfty}, we obtain \begin{equation}
\label{diSnPsi1} \| \partial_{i }S_{{\bf n}}^\Psi (t) \|_{L^\infty} \leq \| \partial_{i }S_{{\bf n}}^\Psi (0)\|_{L^\infty}
+ \int_{0}^t \Lambda_{\infty, m}\big( 1 + \varepsilon \|S^\varphi V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty}\big).
\end{equation}
The next step is to estimate $\|Z_{3} S_{{\bf n}}^\Psi \|_{L^\infty}$. This is more delicate due to the bad commutation
between $Z_{3}$ and $\varepsilon \partial_{zz}$. We shall use a more precise description of the solution of \eqref{eqSnPsi}.
First, it is convenient to eliminate the term $\varepsilon \partial_{z} \ln |g| \partial_{z}$ in the equation \eqref{eqSnPsi}.
We set
\begin{equation}
\label{etadefbis}
\rho(t,y,z)= |g|^{1 \over 4} S_{{\bf n}}^\Psi= |g|^{1 \over 4} \Pi^b(t,y) S^\Psi {\bf n}^b(t,y).
\end{equation}
This yields that $\rho$ solves
\begin{equation}
\label{eqrhobis} \partial_{t} \rho + w \cdot \nabla \rho - \varepsilon \partial_{zz} \rho = |g|^{ 1 \over 4 } \big(F_{{\bf n}}^\Psi + F_{g}\big)
:= \mathcal{H}\end{equation}
where
\begin{eqnarray}
& & \label{Fg} F_{g}= {\rho \over |g|^{1 \over 2}} \big( \partial_{t} + w \cdot \nabla - \varepsilon \partial_{zz} \big) |g|^{1 \over 4}.
\end{eqnarray}
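Let us briefly check \eqref{eqrhobis}: since $2\, \partial_{z} |g|^{1 \over 4}= {1 \over 2}\, |g|^{1 \over 4}\, \partial_{z}\big( \ln |g| \big)$, the product rule gives
$$ \big( \partial_{t} + w \cdot \nabla - \varepsilon \partial_{zz} \big) \rho = |g|^{1 \over 4} \Big( \partial_{t} + w \cdot \nabla - \varepsilon \partial_{zz} - {\varepsilon \over 2}\, \partial_{z}\big( \ln |g|\big) \partial_{z} \Big) S_{{\bf n}}^\Psi + S_{{\bf n}}^\Psi \big( \partial_{t} + w \cdot \nabla - \varepsilon \partial_{zz} \big) |g|^{1 \over 4}, $$
so that the first order term ${\varepsilon \over 2} \partial_{z}\big( \ln |g| \big) \partial_{z}$ of \eqref{eqSnPsi} is exactly reproduced and, in view of \eqref{etadefbis}, the last term above is $|g|^{1 \over 4} F_{g}$ with $F_{g}$ given by \eqref{Fg}.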
Since
\begin{equation}
\label{etaequiv} \|Z_{3} S_{{\bf n}}^\Psi \|_{L^\infty} \leq \Lambda_{0} \| \rho \|_{1, \infty}, \quad \| \rho \|_{1, \infty} \leq \Lambda_{0} \|S_{{\bf n}}^\Psi \|_{1, \infty}\end{equation}
it is equivalent to estimate $\|Z_{3} S_{{\bf n}}^\Psi\|_{L^\infty}$ or $\|\rho \|_{1, \infty}.$
We shall thus estimate $\rho$ which solves the equation \eqref{eqrhobis} in $\mathcal{S}$ with the
homogeneous Dirichlet boundary
condition $\rho_{/z=0}= 0$.
This estimate will be a consequence of the following Lemma:
\begin{lem}
\label{lemFP0}
Consider $\rho$ a smooth solution of
\begin{equation}
\label{eqetaFP0}
\partial_{t} \rho + w \cdot \nabla \rho = \varepsilon \partial_{zz} \rho + \mathcal{H}, \quad z<0, \quad
\rho(t,y,0)= 0, \quad \rho(t=0)= \rho_0
\end{equation}
for some smooth vector field $w$ such that
$ w_{3}$ vanishes on the boundary. Assume that
$ \rho$ and $\mathcal{H}$ are compactly supported in $z$. Then, we have the estimate:
$$ \| Z_{i} \rho (t) \|_{\infty}
\lesssim \| Z_{i}\rho_{0} \|_{\infty} + \|\rho_{0}\|_{\infty}+
\int_{0}^t \Big( \big( \|w \|_{E^{2, \infty}} + \| \partial_{zz} w_{3} \|_{L^\infty} \big) \big( \| \rho \|_{1, \infty}
+ \| \rho \|_{4} \big) + \| \mathcal{H} \|_{1, \infty} \Big), \quad i=1, \, 2, \, 3.$$ \end{lem}
We have already used an estimate of the same type in our previous work \cite{MR11prep}.
The proof of the Lemma is given in section \ref{sectionFP}.
From Lemma \ref{lemFP0}, we get that
\begin{equation}
\label{eta1infty1}
\| Z_{3}\rho (t) \|_{\infty}
\lesssim \| Z_{3} \rho_{0} \|_{\infty} + \| \rho_{0}\|_{\infty} +
\int_{0}^t \Big( \big( \|w \|_{E^{2, \infty}} + \| \partial_{zz} w_{3} \|_{L^\infty} \big) \big( \| \rho \|_{1, \infty}
+ \| \rho \|_{4} \big) + \| \mathcal{H} \|_{1, \infty} \Big).
\end{equation}
Next, by using \eqref{Fetainfty} and \eqref{Fg}, we obtain \begin{eqnarray}
\nonumber\|\mathcal{H}\|_{1, \infty} &\leq & \Lambda_{\infty, m} \big( 1 + \varepsilon \|S^\varphi V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty}\big)
+ \Lambda\big( {1 \over c_{0}}, |h|_{4, \infty} + \|v\|_{1, \infty} \big) \\
\label{estH1infty} & \leq & \Lambda_{\infty, m} \big( 1 + \varepsilon \|S^\varphi V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty}\big).
\end{eqnarray}
Hence, from \eqref{etadefbis}, we get that
$$ \| \rho \|_{4} \leq \Lambda\big({ 1 \over c_{0}}, |h|_{6} + \|S_{{\bf n}}^\Psi\|_{4} \big)\leq \Lambda_{\infty, m}$$
since we assume that $m\geq 6$
and from the definition \eqref{defW}, we also have $\|w\|_{E^{2, \infty}} \leq \Lambda_{\infty, m}$.
We only need to be more careful with the term $\| \partial_{zz} w_{3}\|_{L^\infty}$ in the right hand side of \eqref{eta1infty1} since it contains
two normal derivatives. Looking at the expression \eqref{defW} of $w$, we first note that
since $\Psi$ is $\mathcal{C}^\infty$ in $z$, we have
$$\Big\| \partial_{zz}\big( \overline \chi \big( D\Psi^{-1}\partial_{t} \Psi \big) \big)\Big\|_{L^\infty} \leq \Lambda\big( {1 \over c_{0}}, |h|_{2, \infty} +
|\partial_{t}h|_{1, \infty} \big) \leq \Lambda_{\infty, m}$$
therefore, we only need to estimate $\partial_{zz}\big( \overline \chi D\Psi^{-1} v(t, \Phi^{-1}\circ \Psi)\big)_{3}.$ We can first use that
\begin{align*}\big\| \partial_{zz}\big( \overline \chi D\Psi^{-1} v(t, \Phi^{-1}\circ \Psi)\big)_{3}\big\|_{L^\infty}
& \leq \| \overline \chi \partial_{zz} \big( (D\Psi(t,y,0))^{-1} v(t, \Phi^{-1} \circ \Psi)\big)_{3} \big\|_{L^\infty} + \Lambda_{\infty, m} \|v\|_{E^{2, \infty}} \\
& \leq \| \overline \chi \partial_{zz} \big( (D\Psi(t,y,0))^{-1} v(t, \Phi^{-1}\circ \Psi)\big)_{3} \big\|_{L^\infty} + \Lambda_{\infty, m}
\end{align*}
and then use the observation \eqref{cdv} to get that
$$ \big\| \partial_{zz}\big( \overline \chi D\Psi^{-1} v(t, \Phi^{-1} \circ \Psi)\big)_{3}\big\|_{L^\infty}
\leq \big\| \overline \chi \partial_{zz}\big( v(t, \Phi^{-1}\circ \Psi) \cdot {\bf n}^b) \big\|_{L^\infty} + \Lambda_{\infty,m}.$$
We thus need to compute $$\overline{\chi} \partial_{zz} \big( v(t, \Phi^{-1} \circ \Psi) \cdot {\bf n}^b \big).$$
Note that $v(t, \Phi^{-1} \circ \Psi)= u(t, \Psi)$ where $u$ is defined
in $\Omega_{t}$ and let us set $u^{\Psi}(t,y,z)= u(t, \Psi)$. Since $u$ is divergence free in
$\Omega_{t}$, the expression in local coordinates of the divergence yields
$$ \partial_{z} u^{\Psi} \cdot {\bf n}^b= -{ 1 \over 2} \partial_{z}\big( \ln |g|\big) u^\Psi \cdot {\bf n}^b
- \nabla_{\tilde{g}} \cdot u_{y}^\Psi $$
where $u_{y}^\Psi= \Pi^b u^\Psi$ and $\nabla_{\tilde{g}}\cdot $ is the divergence operator associated
to the metric $\tilde{g}$ in the definition \eqref{blockg} and hence involves only tangential
derivatives of $u^\Psi$. This means as before that for $u^\Psi \cdot {\bf n}^b= v(t, \Phi^{-1}\circ \Psi) \cdot {\bf n}^b$, we can replace
one normal derivative by tangential derivatives. Consequently, we get from \eqref{defW} that
\begin{equation}
\label{dzzw3} \| \partial_{zz} w_{3}\|_{L^\infty} \leq \Lambda\big( {1 \over c_{0}}, \|u^\Psi \|_{E^{1, \infty} } +|h|_{3, \infty} + | \partial_{t} h|_{L^\infty}\big)
\leq \Lambda_{\infty, m}.
\end{equation}
Note that we have again used the fact that $\Psi$ is $\mathcal{C}^\infty$ in $z$.
Finally, we note that
$$ \| \rho \|_{4} \leq \Lambda\big({1 \over c_{0}}, |h|_{6} + \|S_{{\bf n}}\|_{4} + \|v\|_{5} \big) \leq \Lambda_{\infty, m}, \quad \|Z_{3} \rho_{0} \|_{\infty}
+ \| \rho_{0} \|_{\infty} \leq \Lambda_{0} \| v(0) \|_{E^{2, \infty}},$$
and hence we obtain from \eqref{eta1infty1} that
\begin{equation}
\label{eta1infty} \| \rho (t) \|_{1, \infty}
\leq \Lambda_{0} \|v(0)\|_{E^{2, \infty}} +
\int_{0}^t \Lambda_{\infty, m} \Big( 1 + \varepsilon \|S^\varphi V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty} \Big), \quad m\geq 5.
\end{equation}
Consequently, by combining \eqref{eta1infty}, \eqref{diSnPsi1} and \eqref{etaequiv}, \eqref{Sneta}, we get that
\begin{multline}
\label{dzvinfty2}
\|S_{{\bf n}} \|_{1, \infty} \leq \Lambda_{0} \|v(0)\|_{E^{2, \infty}} + \Lambda\big( {1 \over c_{0}}, \|V^m(t)\|+ \|S_{{\bf n}}(t) \|_{m-2} + |h(t)|_{m}\big)
\\+ \int_{0}^t \Lambda_{\infty, m}
\Big( 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty} \Big).
\end{multline}
To conclude, we use Sobolev estimates to control the last two terms.
At first, thanks to \eqref{emb}, we get that
$$\sqrt{\varepsilon}\|S_{{\bf n}}\|_{3, \infty} \lesssim\big( \sqrt{\varepsilon} \| \partial_{z} S_{{\bf n}}\|_{4}\big)^{1 \over 2} \big( \sqrt{\varepsilon}\|S_{{\bf n}}\|_{5}\big)^{1 \over 2} \leq \Lambda_{\infty, m}\big( \sqrt{\varepsilon}\|\nabla S_{{\bf n}}\|_{4}\big)^{1 \over 2} \big(
\sqrt{\varepsilon} \| \nabla v\|_{4}\big)^{1 \over 2}
$$
and we use that for $\alpha \neq 0$, $|\alpha| \leq 4$,
$$ \sqrt{\varepsilon}\| \nabla Z^\alpha v \| \leq \sqrt{\varepsilon} \| \nabla V^\alpha \| + \Lambda_{\infty, m}.$$
Indeed, note in particular that the term $\sqrt{\varepsilon}\| \partial_{zz}v \|_{L^\infty}$ is one of the quantities controlled by $\Lambda_{\infty, m}.$
Hence, we have proven that
\begin{equation}
\label{Sn3infty} \sqrt{\varepsilon}\|S_{{\bf n}}\|_{3, \infty} \leq \Lambda_{\infty, m} + \sqrt{\varepsilon} \| \nabla V^m \| + \sqrt{\varepsilon}\|\nabla S_{{\bf n}}\|_{m-2}.\end{equation}
From the same arguments, we also obtain
\begin{equation} \label{v4infty} \sqrt{\varepsilon}\|v\|_{4, \infty} \leq\sqrt{\varepsilon }\big( \|\nabla v \|_{5}+ \|v\|_{6}\big) \leq
\Lambda_{\infty, m} + \sqrt{\varepsilon }\| \nabla V^m\|\end{equation}
for $m\geq 6$.
Consequently, we get from \eqref{dzvinfty2} and the Cauchy-Schwarz inequality that
for $m \geq 6$
\begin{multline}
\|S_{{\bf n}} \|_{1, \infty} \leq \Lambda_{0} \|v(0)\|_{E^{2, \infty}} + \Lambda\big( {1 \over c_{0}}, \|V^m(t)\|+ \|S_{{\bf n}}(t) \|_{m-2} + |h(t)|_{m}\big)
\\+ (1+ t ) \int_{0}^t \Lambda_{\infty, m} + \int_{0}^t\big(
\varepsilon \|\nabla V^m\|^2 + \varepsilon \|\nabla S_{{\bf n}}\|_{m-2}^2 \big).
\end{multline}
This ends the proof of Proposition \ref{propdzvinfty}.
\end{proof}
It will be useful later on to keep in mind the estimates \eqref{Sn3infty}, \eqref{v4infty} that we have just established:
\begin{lem}
\label{Linfty0t}
For $m \geq 6$, we have the estimate
$$ \sqrt{\varepsilon} \int_{0}^t \|\nabla v\|_{3, \infty} \leq \int_{0}^t \Lambda_{\infty, m} + \int_{0}^t\big( \varepsilon \| \nabla V^m \|^2 +\varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^2 \big).
$$
\end{lem}
We recall that $\Lambda_{\infty,m}$ was defined in \eqref{deflambdainfty}.
To get the proof of this lemma, it suffices to use that
$$ \| \nabla v\|_{3, \infty} \leq \Lambda_{\infty, m} \big( \|S_{{\bf n}}\|_{3, \infty} + \|v\|_{4, \infty}\big)$$
thanks to \eqref{Subis}, \eqref{Subis1} and \eqref{dzVid} and then to use \eqref{Sn3infty}, \eqref{v4infty}
and the Young inequality.
Our next $L^\infty$ estimate will be the one of $\sqrt{\varepsilon} \|\partial_{z} S_{{\bf n}} \|_{L^\infty}$ which is still missing.
\begin{prop}
\label{dzzvLinfty}
Assuming that the initial data satisfy the boundary condition \eqref{bordv2}, we have for $m \geq 6$ the estimate
\begin{align*}
\varepsilon\|\partial_{zz} v(t) \|_{L^\infty}^2
& \leq \Lambda_{0}\big( \|v(0)\|_{E^{2, \infty}}^2 + \varepsilon \|\partial_{zz}v_{0}\|^2 \big) + \Lambda\big({ 1 \over c_{0}}, |h(t)|_{m} + \|V^m(t)\| + \|S_{{\bf n}}(t)\|_{m-2} \big)
\\ & \quad + \Lambda_{0} \int_{0}^t \varepsilon \big( \| \nabla S_{{\bf n}}\|_{m-2}^2 + \| \nabla V^m\|^2\big) + (1 + t ) \int_{0}^t \big( 1 + {1 \over\sqrt { t - \tau}}\big)
\Lambda_{\infty, m}\, d\tau
\end{align*}
for $t \in [0, T^\varepsilon]$.
\end{prop}
Note that in our estimates, this is the only place where we use the compatibility condition on the initial data.
We also again point out that the terms
$ \Lambda\big({ 1 \over c_{0}}, |h(t)|_{m} + \|V^m(t)\| + \|S_{{\bf n}}(t)\|_{m-2} \big)$ and
$\int_{0}^t \varepsilon \big( \| \nabla S_{{\bf n}}\|_{m-2}^2 + \| \nabla V^m\|^2\big) $
can be estimated by using Proposition \ref{conormv} and Proposition \ref{propdzvm-2}.
\begin{proof}
As in the proof of Proposition \ref{propdzvinfty}, we shall first reduce the problem to the estimate of
$\sqrt{\varepsilon} \|\partial_{z} \rho \|_{L^\infty}.$
By using \eqref{etadef},
and \eqref{Sneta},
we obtain
$$
\sqrt{\varepsilon} \|\partial_{z} S_{{\bf n}}\|_{L^\infty} \leq \Lambda_{0} \big( \|v\|_{E^{2, \infty}} +\sqrt{\varepsilon} \|\partial_{z} \rho \|_{L^\infty}\big)$$
and hence, by using again Proposition \ref{propLinfty1} (and in particular \eqref{dzv1infty1} and \eqref{u2infty}), we obtain that
\begin{align}
\nonumber \sqrt{\varepsilon} \|\partial_{z} S_{{\bf n}}\|_{L^\infty} & \leq \Lambda_{0} \big( \|S_{{\bf n}}\|_{1, \infty} + \|v\|_{2, \infty} + \sqrt{\varepsilon} \|\partial_{z} \rho \|_{L^\infty}\big)
\\
\label{dzSneta} & \leq \Lambda_{0}\Big( \|S_{{\bf n}}\|_{1, \infty} + \Lambda\big({ 1 \over c_{0}}, |h|_{m} + \|V^m\| + \|S_{{\bf n}}\|_{m-2} \big)
+ \sqrt{\varepsilon} \|\partial_{z} \rho \|_{L^\infty}\Big)
\end{align}
so that it only remains to estimate $\sqrt{\varepsilon}\|\partial_{z} \rho \|_{L^\infty}$.
Again note that we use the fact that $g$ is $\mathcal{C}^\infty$ in $z$ to write the first estimate.
Since $ \rho $ solves the convection-diffusion equation \eqref{eqrhobis} in $z<0$ with zero Dirichlet boundary condition on
the boundary, we can use the one-dimensional heat kernel of the half-line $\{z<0\}$,
$$G(t,z,z')= {1 \over \sqrt{4\pi t}}\big( e^{- { (z-z')^2 \over 4t }} - e^{- { (z+z')^2 \over 4t }} \big),
$$
to write that
\begin{equation}
\label{duhamel} \varepsilon^{1 \over 2} \partial_{z} \rho(t,y,z) =
\sqrt{\varepsilon} \int_{-\infty}^{0} \partial_{z}G(\varepsilon t,z, z') \rho_{0}(y, z') dz'
+ \sqrt{\varepsilon}\int_{0}^t \int_{-\infty}^{0} \partial_{z} G\big(\varepsilon(t-\tau),z,z'\big)\big( \mathcal{H}(\tau,y,z') - w \cdot \nabla \rho\big)\, dz' d\tau.\end{equation}
Since $\rho_{0}$ vanishes on the boundary thanks to the compatibility condition, we can
integrate by parts the first term to obtain
\begin{equation}
\label{dzzeta1}\varepsilon^{1 \over 2} \|\partial_{z} \rho (t) \|_{L^\infty}
\leq \varepsilon^{1 \over 2 } \|\partial_{z} \rho_{0}\|_{L^\infty} + \int_{0}^t {1 \over \sqrt{t- \tau}} \big(\|\mathcal{H}\|_{L^\infty}+ \| w \cdot \nabla \rho \|_{L^\infty}
\big).
\end{equation}
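The factor $(t- \tau)^{-{1 \over 2}}$ comes from the elementary kernel bound (with the kernel evaluated at the rescaled times, as in \eqref{duhamel})
$$ \sqrt{\varepsilon} \int_{-\infty}^0 \big| \partial_{z} G\big( \varepsilon s, z, z'\big) \big|\, dz' \lesssim {\sqrt{\varepsilon} \over \sqrt{\varepsilon s}}= {1 \over \sqrt{s}}, \quad \forall z <0, $$
while, for the term involving the initial data, we have simply used after the integration by parts that $\| G(\varepsilon t, z, \cdot) \|_{L^1} \lesssim 1$.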
Next, we use the definition of the source term $\mathcal{H}$ in \eqref{eqrhobis} to get
$$ \| \mathcal{H}\|_{L^\infty} \leq \Lambda_{\infty, m}\big( 1+ \|\nabla q \|_{E^{1, \infty}} + \varepsilon(\|S_{{\bf n}}\|_{2, \infty}+ \|v\|_{3, \infty}) \big)$$
and hence, thanks to Proposition \ref{proppE} and \eqref{qNS1infty}, we get
$$ \| \mathcal{H}\|_{L^\infty} \leq \Lambda_{\infty, m}\big( 1+ \varepsilon \|v^b\|_{9\over 2}+ \varepsilon(\|S_{{\bf n}}\|_{2, \infty}+ \|v\|_{3, \infty}) \big).$$
By using the trace inequality \eqref{trace} and \eqref{emb}, we find for $m \geq 6$
$$\varepsilon \|v^b\|_{9\over 2} \lesssim \varepsilon \|\partial_{z}v\|_{4}^{1 \over 2} \|v\|_{5}^{1 \over 2} \leq \Lambda_{\infty, m}\, \varepsilon \|\nabla v\|_{4}^{1 \over 2}
\leq \Lambda_{\infty, m} \big( 1+ \varepsilon \| \nabla V^m\|^{1 \over 2 }\big)$$
and
\begin{align*}
& \varepsilon \|S_{{\bf n}}\|_{2, \infty} \leq \varepsilon \| \nabla S_{{\bf n}}\|_{3}^{1 \over 2} \|S_{{\bf n}}\|_{4}^{1 \over 2} \leq \Lambda_{\infty, m}\, \varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^{1 \over 2}, \\
& \varepsilon \|v\|_{3, \infty} \leq \varepsilon \|\nabla v\|_{4}^{1\over 2} \|v\|_{5}^{1\over 2} \leq \Lambda_{\infty, m}\big( 1 + \|\nabla V^m \|\big).
\end{align*}
Consequently, we have proven that
$$ \| \mathcal{H}\|_{L^\infty} \leq \Lambda_{\infty, m}\big( 1+ \varepsilon\|\nabla V^m\|^{1\over 2} + \varepsilon\|\nabla S_{{\bf n}}\|_{m-2}^{1 \over 2}\big).$$
Finally, by using \eqref{w3bord} and arguing as for \eqref{diwS}, we can write
$$ \|w \cdot \nabla \rho \|_{L^\infty} \leq \| w\|_{E^{1, \infty}} \| \rho \|_{1, \infty} \leq \Lambda_{\infty, m}.$$
Consequently, by combining the two last estimates and \eqref{dzzeta1}, we get that
$$ \varepsilon^{1 \over 2} \|\partial_{z} \rho (t) \|_{L^\infty}
\leq \Lambda_{0} \big( \varepsilon^{1 \over 2 } \|\partial_{zz}v (0)\|_{L^\infty} + \|v(0)\|_{E^{2, \infty}}\big) + \int_{0}^t {\Lambda_{\infty, m} \over \sqrt{t- \tau}}
\big( 1 + \varepsilon \| \nabla S_{{\bf n}}\|^{1 \over 2}_{m-2} +\varepsilon \|\nabla V^m\|^{1 \over 2} \big).$$
To conclude, we use the H\"older inequality and in particular that
$$ \Big( \int_{0}^t { \Lambda_{\infty, m} \over (t- \tau)^{1 \over 8} } \| \nabla S_{{\bf n}}\|_{m-2}^{1 \over 2} {1 \over (t- \tau)^{ 3\over 8}}\Big)^2
\leq \Big( \int_{0}^t \| \nabla S_{{\bf n}}\|_{m-2}^2 \Big)^{1 \over 2}\Big( \int_{0}^t {\Lambda_{\infty, m}^4 \over (t- \tau)^{1 \over 2} } \Big)^{1\over 2}
t^{1 \over 4}$$
and a similar estimate for the term involving $\|\nabla V^m\|^{1 \over 2} $
to get the estimate
\begin{align*}
\big(\varepsilon^{1 \over 2} \|\partial_{z} \rho (t) \|_{L^\infty}\big)^2
\leq \Lambda_{0} \big( \big(\varepsilon^{1 \over 2 } \|\partial_{zz} v(0)\|_{L^\infty}\big)^2 + \|v(0)\|_{E^{2, \infty}}^2\big)
+ \Lambda_{0}\|v(t)\|_{E^{2, \infty}}^2 \\+\Lambda_{0} \int_{0}^t\big( \varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^2 +\varepsilon \| \nabla V^m\|^2 \big) + (1 + t ) \int_{0}^t { \Lambda_{\infty, m} \over (t - \tau)^{1 \over 2}}\, d\tau.
\end{align*} To conclude the proof of Proposition \ref{dzzvLinfty}, we can combine the last estimate and \eqref{dzSneta} with the
estimate of Proposition \ref{propdzvinfty}.
\end{proof}
By using arguments close to the ones above, we can also establish the following:
\begin{lem}
For $m \geq 6$, assume that
$\sup_{[0, T]} \Lambda_{\infty, m}(t) \leq M$.
Then, there exists $\Lambda(M)$ such that
we have the estimate
\begin{equation}
\label{Linfty0t1}
\int_{0}^t \sqrt{\varepsilon} \| \partial_{zz}v\|_{1, \infty} \leq \Lambda(M) ( 1 + t) \sqrt{t}\Big( 1 +
\int_{0}^t \varepsilon \big( \|\nabla V^m \|^2 + \|\nabla S_{{\bf n}}\|_{m-2}^2 \big)\Big).
\end{equation}
\end{lem}
Note that, by combining the previous lemma and Lemma \ref{Linfty0t}, we obtain
\begin{cor}
\label{corLinfty0t}
For $m \geq 6$, assume that
$\sup_{[0, T]} \Lambda_{\infty, m}(t) \leq M$.
Then, there exists $\Lambda(M)$ such that, we have the estimate
$$
\int_{0}^t \sqrt{\varepsilon} \| \nabla^2v\|_{1, \infty} \leq \Lambda(M) ( 1 + t)^2\Big( 1 +
\int_{0}^t \varepsilon \big( \|\nabla V^m \|^2 + \|\nabla S_{{\bf n}}\|_{m-2}^2 \big)\Big).
$$
\end{cor}
\begin{proof}
We first note by using \eqref{etadef}, \eqref{Sneta} and again \eqref{dzVid}, \eqref{Subis} \eqref{Subis1} that
$$ \int_{0}^t \sqrt{\varepsilon} \| \partial_{zz}v\|_{1, \infty} \leq \Lambda(M)\int_{0}^t \sqrt{\varepsilon} \big(\| \partial_{z} \rho \|_{1,\infty} + \sqrt{\varepsilon} \| \nabla v\|_{2, \infty}\big)$$
and hence, thanks to Lemma \ref{Linfty0t}, we obtain
\begin{equation}
\label{Linfty+1} \int_{0}^t \sqrt{\varepsilon} \| \partial_{zz}v\|_{1, \infty} \leq \Lambda(M)(1+t)\Big( \int_{0}^t \sqrt{\varepsilon} \| \partial_{z} \rho \|_{1,\infty}
+ \int_{0}^t \varepsilon \big( \|\nabla V^m \|^2 + \|\nabla S_{{\bf n}}\|_{m-2}^2 \big)\Big).\end{equation}
To estimate $\|\partial_{z}\rho\|_{1, \infty}$, note that a crude use of the Duhamel formula \eqref{duhamel} is not sufficient.
Indeed, even for the fields $Z_{i}, \, i=1, \, 2$ which commute with $G$, we get
that
$$ \sqrt{\varepsilon }\| \partial_{z} Z_{i} \rho \|_{L^\infty} \leq {1 \over \sqrt{t}} \| \rho _{0}\|_{1, \infty}
+ \int_{0}^t{1 \over \sqrt{t- \tau}}\big( \| \mathcal{H}\|_{1, \infty} + \| w \cdot \nabla \rho \|_{1, \infty} \big)$$
and the problem is that
$$ \| w \cdot \nabla \rho \|_{1, \infty}$$ cannot be estimated in terms of controlled quantities ($ \|\rho \|_{2, \infty} \sim \| \partial_{z}v\|_{2, \infty}$
is not controlled). Consequently, we need to be more precise and incorporate the transport term in the argument.
We shall use the following lemma:
\begin{lem}
\label{lemFP1}
Consider $\rho$ a smooth solution of
\begin{equation}
\label{eqetaFP1}
\partial_{t} \rho + w \cdot \nabla \rho = \varepsilon \partial_{zz} \rho + \mathcal{H}, \quad z<0, \quad
\rho(t,y,0)= 0
\end{equation}
for some smooth vector field $w$ such that
$ w_{3}$ vanishes on the boundary. Assume that
$ \rho$ and $\mathcal{H}$ are compactly supported in $z$ and that
$ \sup_{[0, T]}\big(\|w\|_{E^{2, \infty}} + \|\partial_{zz} w_{3}\|_{L^\infty}\big) \leq M.$
Then, there exists $\Lambda(M)$ such that we have the estimate:
$$\sqrt{\varepsilon } \| \partial_{z} \rho (t) \|_{1, \infty}
\leq \Lambda(M) \Big( { 1 \over \sqrt{t}} \| \rho (0) \|_{1, \infty} +
\int_{0}^t {1 \over \sqrt{t - \tau}} \big( \| \rho \|_{1, \infty}
+ \| \rho \|_{4} + \| \mathcal{H} \|_{1, \infty} \big)\Big), \quad \forall t \in [0, T].$$
\end{lem}
The proof of this Lemma is given in section \ref{sectionFP1}. By using the previous Lemma, for the equation \eqref{eqrhobis}
and the estimates \eqref{estH1infty}, \eqref{dzzw3}, we find that
$$\sqrt{\varepsilon } \| \partial_{z} \rho (t) \|_{1, \infty}
\leq \Lambda(M) \Big( { 1 \over \sqrt{t}} \| \rho (0) \|_{1, \infty} +
\int_{0}^t {1 \over \sqrt{t - \tau}} \big( 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|S_{{\bf n}}\|_{3, \infty} + \varepsilon \|v\|_{4, \infty}\big)\Big) $$
and hence by using \eqref{Sn3infty} and \eqref{v4infty}, we obtain
$$\sqrt{\varepsilon } \| \partial_{z} \rho (t) \|_{1, \infty}
\leq \Lambda(M) \Big( { 1 \over \sqrt{t}} \| \rho (0) \|_{1, \infty} +
\int_{0}^t {1 \over \sqrt{t - \tau}} \big( 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|\nabla S_{{\bf n}}\|_{m-2}\big)\Big)$$
for $m \geq 6$. By integrating in time, we obtain
$$ \int_{0}^t \sqrt{\varepsilon } \| \partial_{z} \rho (t) \|_{1, \infty}
\leq \Lambda(M) \Big( \sqrt{t} \| \rho (0) \|_{1, \infty} + \sqrt{t}
\int_{0}^t \big( 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|\nabla S_{{\bf n}}\|_{m-2}\big)\Big).$$
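Here the passage to the time-integrated bound is the standard Fubini computation, which we record for convenience (the harmless numerical constants it produces are absorbed in $\Lambda(M)$): writing $g(\tau)= 1 + \varepsilon \|\nabla V^m\| + \varepsilon \|\nabla S_{{\bf n}}\|_{m-2}$, we have
$$ \int_{0}^t {ds \over \sqrt{s}} = 2 \sqrt{t}, \qquad \int_{0}^t \int_{0}^s {g(\tau) \over \sqrt{s-\tau}}\, d\tau \, ds = \int_{0}^t g(\tau) \Big( \int_{\tau}^t {ds \over \sqrt{s-\tau}}\Big)\, d\tau = 2 \int_{0}^t \sqrt{t-\tau}\, g(\tau)\, d\tau \leq 2 \sqrt{t} \int_{0}^t g(\tau)\, d\tau.$$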
We finally get \eqref{Linfty0t1} by combining the time-integrated estimate above with \eqref{Linfty+1}. Note that the terms involving the initial data
can be estimated by $\Lambda(M)$.
\end{proof}
\section{Normal derivative estimates part II}
\label{sectionnorm2}
In the estimate of Proposition \ref{propdzvm-2}, we see that the left hand side is still insufficient to control the
term $\int_{0}^t \|\partial_{z} v\|_{m-1}^2.$
It does not seem possible to estimate it by estimating
$\|S_{{\bf n}}\|_{m-1}$. Indeed, in the right hand side of \eqref{SNalpha}, the term involving the pressure
in $F_{S}^2$ does not enjoy any additional regularity. The main obstruction comes from the Euler
part $q^E$ of the pressure. Since $q^E= gh$ on the boundary the estimate given by Proposition \ref{estqE}
is optimal in terms of the regularity of $h$: the estimate of $\|D^2 q^E \|_{m-1}$ necessarily involves
$|h|_{m+{1\over 2}}$ which cannot be controlled uniformly in $\varepsilon$. The solution will be to use
the vorticity instead of $S_{{\bf n}}$ to perform this estimate. The main difficulty will be that
the vorticity does not vanish on the boundary. Note that this difficulty is not a problem in the inviscid
case since there is always a good energy estimate for the transport equation solved by the vorticity even
if it does not vanish on the boundary.
Let us set $\omega = \nabla^\varphi \times v$ (note that we have in an equivalent way that $\omega= (\nabla \times u)(t, \Phi)$).
Since
\begin{equation}
\label{omega1} \omega \times {\bf n}= {1 \over 2}\big( D^\varphi v\, {\bf n} - (D^\varphi v)^t{\bf n})= S^\varphi v\, {\bf n} - (D^\varphi v)^t{\bf n},
\end{equation}
we get as in \eqref{Subis} that
$$ \omega \times {\bf n} = {1 \over 2} \partial_{{\bf n}} u - g^{ij}\big( \partial_{j}v \cdot {\bf n} \big) \partial_{y^i} .$$
Consequently, we get by using also \eqref{Subis1} that
\begin{equation}
\label{dzvm-1O} \|Z^{m-1}\partial_{z} v\| \leq \Lambda_{\infty, 6}\big(\|v\|_{m} + |h|_{m-{1 \over 2}} + \|\omega\|_{m-1}\big).
\end{equation}
Hence, we shall estimate $\|\omega\|_{m-1}$ in place of $\|\partial_{z}v\|_{m-1}$. By applying $\nabla^\varphi \times \cdot$ to the equation \eqref{NSv}, we get
the vorticity equation in $\mathcal{S}$
\begin{equation}
\label{eqomega}
\partial^{\varphi}_{t} \omega + v \cdot \nabla^\varphi \omega - \omega \cdot \nabla^\varphi v = \varepsilon \Delta^\varphi \omega.
\end{equation}
By using \eqref{omega1} and the boundary condition \eqref{bordv2}, we note that on the boundary, we have
$$ \omega \times {\bf n} = \Pi \big( \omega \times {\bf n}) = - \Pi\big( g^{ij}\big( \partial_{j}v \cdot {\bf n} \big) \partial_{y^i}\big)$$
and thus $\omega\times {\bf n}$ does not vanish on the boundary. Consequently, there is no gain to consider
$\omega \times {\bf n}$ in place of $\omega$ since the equation for $\omega \times {\bf n}$ is more complicated.
In this subsection, we shall thus estimate $Z^{m-1} \omega$.
Thanks to \eqref{eqomega}, we get that $Z^\alpha \omega $ for $|\alpha| \leq m-1$ solves in $\mathcal{S}$ the equation
\begin{equation}
\label{eqomegaalpha}\partial^{\varphi}_{t} Z^\alpha \omega + v \cdot \nabla^\varphi \, Z^\alpha \omega - \varepsilon \Delta Z^\alpha \omega= F\end{equation}
where the source term $F$ is given by
\begin{equation}
\label{Fomega}
F= Z^\alpha\big(\omega \cdot \nabla^\varphi v) + \mathcal{C}_{S}
\end{equation}
where $\mathcal{C}_{S}$ is given as in \eqref{CS} by
\begin{equation}
\label{CSomega}
\mathcal{C}_{S}= \mathcal C_{S}^1 + \mathcal C_{S}^2\end{equation}
with
$$\mathcal{C}_{S}^1= [Z^\alpha, v_{y}]\cdot \nabla_{y} \omega + [Z^\alpha, V_{z}] \partial_{z} \omega:=
C_{Sy}+ C_{Sz}, \quad \mathcal{C}_{S}^2 = - \varepsilon [Z^\alpha, \Delta^\varphi] \omega.
$$
In addition, by using Lemma \ref{lembord}, we get that on the boundary
\begin{equation}
\label{omegab1} |(Z^\alpha \omega)^b| \leq \Lambda_{\infty, 6}\big( |v^b|_{m} + |h|_{m} \big).
\end{equation}
Note that by using the trace Theorem, we get that
$$ |(Z^\alpha \omega)^b| \leq \Lambda_{\infty, 6} \big( \|\nabla V^{m}\|^{1 \over 2} \|V^m\|^{1 \over 2} + |V^m| +|h|_{m} \big).$$
Consequently, the only way we can control this boundary value is through the estimate
$$
\varepsilon^{1 \over 2 } \int_{0}^t |(Z^\alpha \omega)^b|^2 \leq \varepsilon \int_{0}^t \| \nabla V^m\|^2 + \int_{0}^t \Lambda_{\infty, 6}\big(
\|V^m\|^2 + |h|_{m}^2 \big)
$$
and thus the left hand side can be estimated by using Proposition \ref{conormv}.
Nevertheless, it will be useful to keep the slightly sharper form of the above inequality which reads
\begin{equation}
\label{omegab2}
\varepsilon^{1 \over 2 } \int_{0}^t |(Z^\alpha \omega)^b|^2 \leq \Lambda_{\infty, 6}\Big( \int_{0}^t \sqrt{\varepsilon} \| \nabla V^m\|\, \|V^m\| +
\|V^m\|^2 + |h|_{m}^2 \Big).
\end{equation}
The main difficulty will be to handle this non-homogeneous boundary value problem for the convection-diffusion
equation \eqref{eqomegaalpha}: because of \eqref{omegab2}, the boundary value is only controlled at a low level of regularity (it is merely $L^2$ in $t, y$).
To split the difficulty, we set
\begin{equation}
\label{splitomega}
Z^\alpha \omega= \omega^\alpha_{h}+ \omega^{\alpha}_{nh}
\end{equation}
where $\omega^{\alpha}_{nh}$ solves in $\mathcal{S}$ the equation \eqref{eqomegaalpha}, that is to say
\begin{equation}
\label{eqomeganh}
\partial^{\varphi}_{t} \omega^\alpha_{nh} + v \cdot \nabla^\varphi \omega^\alpha_{nh} - \varepsilon \Delta^\varphi \omega^\alpha_{nh}= F\end{equation}
with the initial and boundary conditions
\begin{equation}
\label{omeganhc}
(\omega^\alpha_{nh})^b= 0, \quad (\omega^\alpha_{nh})_{t=0}= \omega_{0}.\end{equation}
while $\omega^{\alpha}_{h}$ solves in $\mathcal{S}$ the homogeneous equation
\begin{equation}
\label{eqomegah}
\partial^{\varphi}_{t} \omega^\alpha_{h} + v \cdot \nabla^\varphi \omega^\alpha_{h} - \varepsilon \Delta^\varphi \omega^\alpha_{h}= 0
\end{equation}
with the initial and boundary conditions
\begin{equation}
\label{omegahc}
( \omega^\alpha_{h})^b= (Z^\alpha \omega)^b, \quad (\omega^\alpha_{h})_{t=0}= 0.
\end{equation}
The main result of this section is the following proposition:
\begin{prop}
\label{propomega}
For $T \in [0, T^\varepsilon]$, $T^\varepsilon \leq 1$, assume that for $M>0$, the estimate
\begin{equation}
\label{hypomegahfin}
\sup_{[0, T]} \Lambda_{\infty, 6}(t) + \varepsilon \int_{0}^T \big( \varepsilon \|\nabla V^6 \|^2 + \varepsilon \| \nabla S_{{\bf n}}\|_{4}^2 \big) \leq M
\end{equation}
holds. Then, there exists $\Lambda(M)$ such that
\begin{align*}
\| \omega \|_{L^4([0, T], H^{m-1}_{co}(\mathcal{S}))}^2
& \leq \Lambda_{0} \| \omega(0)\|_{m-1}^2
+ \Lambda(M)\int_{0}^t ( \|V^m\| ^2 + \| \omega\|_{m-1}^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2}}^2 \big) \\
& \quad + \Lambda_{0} \big( \int_{0}^t \varepsilon \| \nabla S_{{\bf n}} \|_{m-2}^2 +
\varepsilon \| \nabla V^m\|^2\big).
\end{align*}
\end{prop}
The proof follows from the splitting \eqref{splitomega} and the estimates of Propositions \ref{propomeganh} and \ref{propomegah} below.
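Indeed, since $T^\varepsilon \leq 1$, the splitting \eqref{splitomega} gives for every $|\alpha| \leq m-1$
$$ \| Z^\alpha \omega \|_{L^4([0, T], L^2(\mathcal{S}))} \leq \| \omega^\alpha_{nh}\|_{L^4([0, T], L^2(\mathcal{S}))} + \| \omega^\alpha_{h}\|_{L^4([0, T], L^2(\mathcal{S}))} \leq T^{1\over 4} \sup_{[0, T]} \| \omega^{m-1}_{nh} \| + \| \omega^\alpha_{h}\|_{L^4([0, T], L^2(\mathcal{S}))},$$
and it suffices to take squares and to sum over $\alpha$ (which controls the $L^4([0, T], H^{m-1}_{co}(\mathcal{S}))$ norm since $4 \geq 2$) and to use the estimates of these two propositions.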
\subsection{Estimate of $\omega_{nh}^\alpha$}
Let us first estimate $\omega^\alpha_{nh}$. We shall use the notation
$$\|\omega_{nh}^{m-1}(t) \|^2= \sum_{|\alpha| \leq m-1} \| \omega_{nh}^\alpha \|^2, \quad \int_{0}^t\| \nabla \omega_{nh}^{m-1} \|^2
= \sum_{| \alpha| \leq m-1} \| \nabla \omega_{nh}^\alpha \|^2.$$
\begin{prop}
For $t \in [0, T^\varepsilon]$ , the solution $\omega_{nh}^\alpha$ of \eqref{eqomeganh}, \eqref{omeganhc} satisfies the estimate:
\label{propomeganh}
\begin{align*}
& \|\omega_{nh}^{m-1}(t) \|^2 + \varepsilon \int_{0}^t \| \nabla \omega_{nh}^{m-1} \|^2 \\
& \leq \Lambda_{0}\| \omega(0)\|_{m-1}^2
+ \int_{0}^t \Lambda_{\infty, 6}\big( \|V^m\| ^2 + \| \omega\|_{m-1}^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2}}^2 \big)
+ \Lambda_{0} \int_{0}^t \varepsilon \| \nabla S_{{\bf n}} \|_{m-2}^2.
\end{align*}
\end{prop}
Note that the term $ \Lambda_{0} \int_{0}^t \varepsilon \| \nabla^\varphi S_{{\bf n}}\|_{m-2}^2$
in the right-hand side of the above estimate
can be estimated by using Proposition \ref{propdzvm-2}.
\begin{proof}
Since we have for $\omega^\alpha_{nh}$ an homogeneous Dirichlet boundary condition, we deduce from \eqref{eqomeganh}
and a standard energy estimate that \begin{equation}
\label{omeganh1} {d \over dt} {1 \over 2} \int_{\mathcal{S}} |\omega^\alpha_{nh}|^2 \, d\mathcal{V}_{t} + \varepsilon \int_{\mathcal{S}} | \nabla^\varphi \omega^\alpha_{nh}|^2 \, d\mathcal{V}_{t}
= \int_{\mathcal{S}} F \cdot \omega^\alpha_{nh}\, d\mathcal{V}_{t}
\end{equation}
and we need to estimate the right hand side with $F$ given by \eqref{Fomega}. By using \eqref{gues}, we get
that
$$ \|Z^\alpha (\omega \cdot \nabla^\varphi v \big) \|
\leq \Lambda_{\infty, 6} \big( \|\omega \|_{m-1}+ \| \nabla^\varphi v\|_{m-1}\big)
\leq \Lambda_{\infty, 6}\big( \|\omega\|_{m-1} + \|v\|_{m} + |h|_{m-{1 \over 2}}\big).$$
Next, to estimate $\mathcal{C}_{S}$,
we observe that we can use \eqref{CS2} to estimate the part involving $\mathcal{C}_{S}^2$.
Indeed, it suffices to change $S_{{\bf n}}$ into $\omega$ and $m$ into $m+1$ to obtain
\begin{multline}
\label{CS2omega}
\Big| \int_{\mathcal{S}} \mathcal{C}_{S}^2 \cdot \omega^\alpha_{nh} d\mathcal{V}_{t}\Big|
\leq \Lambda_{0}\big( \varepsilon^{1 \over 2} \| \nabla^\varphi \omega^\alpha_{nh} \| + \|\omega \|_{m-1}\big) \\ \big( \varepsilon^{1\over 2 }\|\nabla \omega\|_{m-2}+ \|\omega\|_{m-1} + \Lambda_{\infty, 6}( |h|_{m-{1 \over 2 }}+ \varepsilon^{1 \over 2} |h|_{m+{1 \over 2}})\big).
\end{multline}
To estimate $\mathcal{C}_{S}^1$, we can use the same decomposition of $\mathcal{C}_{S}^1$ as the one given after \eqref{CS}. For $\mathcal{C}_{Sy}$,
we can also use \eqref{CSy} with $S_{{\bf n}}$ changed into $\omega$ and $m$ into $m+1$. This yields
\begin{equation}
\label{CSyomega}
\|C_{Sy}\|\leq \Lambda_{\infty, 6}\big( \|\omega\|_{m-1} + \|v\|_{E^{m-1}}\big) \leq \Lambda_{\infty, 6} \big( \|\omega\|_{m-1}
+ \|v\|_{m} + |h|_{m} \big)
\end{equation}
The commutator $\mathcal{C}_{Sz}$ requires more care. Indeed, in \eqref{CSz}, we remark that we cannot change $m$ into $m+1$
since $h$ is not smooth enough uniformly in $\varepsilon$: such an estimate would involve $|h|_{m+{1 \over 2}}$ (and without a gain of $\varepsilon^{1 \over 2}$).
By using as in \eqref{CSz0} that this commutator can be expanded into a sum of terms under the form
$$ c_{\tilde \beta }Z^{\tilde \beta }\big( {1- z \over {z }} V_{z} \big) \, Z_{3} Z^\gamma \omega$$
where the constraints are now $|\tilde \beta| + |\tilde \gamma| \leq m-1$, $|\tilde\gamma | \leq m-2$.
Since $V_{z}= v_{z} / \partial_{z}\varphi= (v\cdot {\bf N} - \partial_{t} \eta )/ \partial_{z} \varphi$,
we use as before \eqref{gues} and that $V_{z}$ vanishes on the boundary to write
\begin{align*}
\|c_{\tilde \beta }Z^{\tilde \beta }\big( {1- z \over {z }} V_{z} \big) \, Z_{3} Z^\gamma \omega\|
& \leq \Lambda_{\infty, 6} \| \omega \|_{m-1}+ \| \omega \|_{L^\infty} + \big\| Z\big( {1 \over \partial_{z} \varphi} {1 - z \over z} v_{z}\big) \big\|_{m-2} \\
 & \leq \Lambda_{\infty, 6} \big( \| \omega \|_{m-1} + |h|_{m-{1 \over 2}} + \big\|{ 1-z \over z} Zv_{z}\big\|_{m-2} \big).
\end{align*}
Next, by using Lemma \ref{hardybis}, we obtain that
$$ \big\|{ 1-z \over z} Zv_{z}\big\|_{m-2} \lesssim \|Zv_{z}\|_{m-2} + \|\partial_{z} Zv_{z}\|_{m-2}
\lesssim \Lambda_{\infty, 6}\big( \|v\|_{E^{m}} + \sum_{| \alpha |= m- 1} \| v \cdot \partial_{z}Z^\alpha {\bf N} - \partial_{z} Z^\alpha \partial_{t} \eta \|\big)$$
and the last term requires some care.
Since $\eta$ is given by \eqref{eqeta}, we first note that
$$ |Z_{3} \hat \eta| \lesssim |\tilde \chi(|\xi| z ) \hat{h}|$$
where $\tilde \chi$ has a slightly bigger support than $\chi$
and thus we get that $Z_{3}$ acts as a zero order operator:
\begin{equation}
\label{Z3mieux}
\| \nabla Z_{3} \eta \| \lesssim |h|_{1\over 2}.
\end{equation}
This yields that if $\alpha_{3} \neq 0$, we gain at least one derivative and thus, have
$$ \| v \cdot \partial_{z}Z^\alpha {\bf N} - \partial_{z} Z^\alpha \partial_{t} \eta \| \leq \Lambda_{\infty, 6}\big( |h|_{m-{1 \over 2}} + |\partial_{t} h|_{m-
{3 \over 2}}\big) \leq \Lambda_{\infty, 6}\big( |h|_{m-{1 \over 2} } + |v^b|_{m-{3 \over 2}} \big)$$
where the last estimate comes from the boundary condition \eqref{bord2}. Hence by using \eqref{trace}, we get
$$ \| v \cdot \partial_{z}Z^\alpha {\bf N} - \partial_{z} Z^\alpha \partial_{t} \eta \| \leq \Lambda_{\infty, 6}\big( |h|_{m-{1 \over 2}}
+ \|v\|_{E^{m-1}}\big).$$
Consequently, we only need to estimate
$$ \| v \cdot \partial_{z}Z^\alpha {\bf N} - \partial_{z} Z^\alpha \partial_{t} \eta \| $$
when $| \alpha |=m-1$, $\alpha_{3}= 0$. By using the expression \eqref{etaconv}, we note that
$$ v \cdot \partial_{z}Z^\alpha {\bf N} - \partial_{z} Z^\alpha \partial_{t} \eta=
v_{1} \,\partial_{z} \big( \psi_{z} \star_{y} \partial_{1} Z^{\alpha} h \big)
+ v_{2} \, \partial_{z} \big( \psi_{z} \star_{y} \partial_{2} Z^{\alpha} h \big)
- \partial_{z}\big( \psi_{z} \star_{y} \partial_{t} Z^{\alpha} h \big) := \mathcal{T}_{\alpha}.$$
For $z \leq -1$ , we can use the smoothing effect of the convolution to get that
$$ \|\mathcal{T}_{\alpha}\|_{L^2(\mathcal{S} \cap |z| \geq 1)} \leq \Lambda_{\infty, 6} \big( \big\| \partial_{z}\big( {1 \over z^{m-1}}\tilde{ \psi}_{z}
\star_{y}\nabla h\big)\big\|_{L^2(\mathcal{S} \cap |z| \geq 1)}
+ \big\| \partial_{z}\big( {1 \over z^{m-1}} \tilde{\psi}_{z}
\star_{y}\partial_{t} h\big)\big\|_{L^2(\mathcal{S} \cap |z| \geq 1)} \big) $$
where $\tilde{\psi}_{z}$ has the same properties as $\psi_{z}$. This yields
\begin{equation}
\label{Talpha0} \|\mathcal{T}_{\alpha}\|_{L^2(\mathcal{S} \cap |z| \geq 1)} \leq \Lambda_{\infty, 6} \big(|h|_{1 \over 2} + \|v\|_{E^1}\big).\end{equation}
For $|z| \leq 1$, we shall rewrite $\mathcal{T}_{\alpha}$ as
\begin{equation}
\label{Talphadecfin} \mathcal{T}_{\alpha}= v_{1}^b \,\partial_{z} \big( \psi_{z} \star_{y} \partial_{1} Z^{\alpha} h \big)
+ v_{2}^b \, \partial_{z} \big( \psi_{z} \star_{y} \partial_{2} Z^{\alpha} h \big)
- \partial_{z}\big( \psi_{z} \star_{y} \partial_{t} Z^{\alpha} h \big) + \mathcal{R}\end{equation}
where
$$ | \mathcal{R}| \leq \Lambda_{\infty, 6}|z| \big| \partial_{z} \big( \psi_{z} \star Z^\alpha \nabla h\big)\big|
\leq \Lambda_{\infty, 6} \big| Z_{3} \big( \psi_{z} \star Z^\alpha \nabla h \big)\big|.$$
Consequently, by using again the observation \eqref{Z3mieux}, we get that
\begin{equation}
\label{Talphadecfin1} \| \mathcal{R}\| _{L^2(\mathcal{S} \cap |z| \leq 1)}
\leq \Lambda_{\infty, 6} |h|_{m-{1 \over 2}}.\end{equation}
To estimate the first three terms in \eqref{Talphadecfin}, we write
\begin{align*} & v_{1}^b \,\partial_{z} \big( \psi_{z} \star_{y} \partial_{1} Z^{\alpha} h \big)
 + v_{2}^b \, \partial_{z} \big( \psi_{z} \star_{y} \partial_{2} Z^{\alpha} h \big)
 - \partial_{z}\big( \psi_{z} \star_{y} \partial_{t} Z^{\alpha} h \big) \\
 & = \partial_{z} \big( \psi_{z} \star_{y}\big( v_{1}^b\partial_{1} Z^\alpha h + v_{2}^b \partial_{2} Z^\alpha h - \partial_{t} Z^\alpha h \big)\big) \\
 & \quad +
 \partial_{z} \Big(\int_{\mathbb{R}^2}\Big( \big(v_{1}(t,y,0) - v_{1}(t,y',0) \big) \psi_{z}(y-y') \partial_{1} Z^\alpha h(t,y')
\\ &
 \quad \quad \quad \quad + \big(v_{2}(t,y,0) - v_{2}(t,y',0)\big) \psi_{z}(y-y') \partial_{2} Z^\alpha h(t,y')\Big)\, dy' \Big).
\end{align*}
For the second term, we can use the Taylor formula for $v_{i}$ to get that
$$ | (v_{i}(t,y,0) - v_{i}(t,y', 0))\partial_{z} \psi_{z}(y-y') | \leq \Lambda_{\infty, 6} {1 \over z^2} \tilde{\psi}({y-y' \over z}) $$
where $\tilde{\psi} $ is still an $L^1$ function. This yields that
\begin{align}
\label{Talphadecfin2} &\sup_{z \in (-1, 0)} \Big\| \partial_{z} \Big(\int_{\mathbb{R}^2}\Big( \big(v_{1}(t,y,0) - v_{1}(t,y',0) \big) \psi_{z}(y-y') \partial_{1} Z^\alpha h(t,y')
\\ &
\nonumber \quad \quad \quad \quad \quad \quad + \big(v_{2}(t,y,0) - v_{2}(t,y',0)\big) \psi_{z}(y-y') \partial_{2} Z^\alpha h(t,y') \Big)\, dy' \Big)\Big\|_{L^2(\mathbb{R}^2)}
\leq \Lambda_{\infty, 6} |h|_{m}.
\end{align}
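Let us also record the pointwise computation used in the previous step; this is only a sketch, written under the normalization (consistent with \eqref{etaconv}) that $\psi_{z}(y)= z^{-2}\psi(y/z)$ for a fixed rapidly decreasing function $\psi$. By the Taylor formula, $|v_{i}(t,y,0)- v_{i}(t,y',0)| \leq \| \nabla_{y} v \|_{L^\infty}\, |y-y'|$, while
$$ \partial_{z} \psi_{z}(y-y')= - {1 \over z^3}\Big( 2 \psi\big({y-y' \over z}\big) + {y - y' \over z}\cdot \nabla \psi \big( {y-y'\over z}\big)\Big),$$
so that the product is indeed bounded by $\Lambda_{\infty, 6}\, z^{-2}\, \tilde{\psi}\big((y-y')/z\big)$ with $\tilde{\psi}(s)= |s| \big( 2 |\psi(s)| + |s|\, |\nabla \psi(s)|\big) \in L^1(\mathbb{R}^2)$. The bound \eqref{Talphadecfin2} then follows from the Young inequality for convolutions, since $\| z^{-2} \tilde{\psi}(\cdot / z) \star_{y} |\nabla Z^\alpha h| \|_{L^2(\mathbb{R}^2)} \leq \| \tilde{\psi}\|_{L^1}\, | \nabla Z^\alpha h|_{L^2} \lesssim |h|_{m}$ for $|\alpha|= m-1$.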
For the first term in the right hand side, we shall use Lemma \ref{lembordh}. For $|\alpha|= m-1$, we get from Lemma \ref{lembordh} that
$$ v_{1}^b\partial_{1} Z^\alpha h + v_{2}^b \partial_{2} Z^\alpha h - \partial_{t} Z^\alpha h = - \mathcal{C}^\alpha(h) - (V^\alpha)^b \cdot {\bf N}
- v_{3}^b$$
and hence also that
$$ \| v_{1}^b\partial_{1} Z^\alpha h + v_{2}^b \partial_{2} Z^\alpha h - \partial_{t} Z^\alpha h \|_{H^{1 \over 2}(\mathbb{R}^2)}
\leq \Lambda_{\infty, 6} \big( |v^b|_{m-{1 \over 2}} + |h|_{m- {1 \over 2}}\big).$$
Consequently, we obtain that
\begin{align*}
\|\partial_{z} \big( \psi_{z} \star_{y}\big( v_{1}^b\partial_{1} Z^\alpha h + v_{2}^b \partial_{2} Z^\alpha h - \partial_{t} Z^\alpha h \big)\big)\|_{L^2(\mathcal{S}) } & \leq |v_{1}^b\partial_{1} Z^\alpha h + v_{2}^b \partial_{2} Z^\alpha h - \partial_{t} Z^\alpha h |_{1\over 2} \\
&\leq \Lambda_{\infty, 6} \big( |v^b|_{m-{1 \over 2}} + |h|_{m- {1 \over 2}}\big).
\end{align*}
By combining \eqref{Talphadecfin}, \eqref{Talphadecfin1}, \eqref{Talphadecfin2} and the last estimate with the trace Theorem,
we finally obtain that
$$ \| \mathcal{T}_{\alpha}\|_{L^2(\mathcal{S} \cap |z| \leq 1)} \leq \Lambda_{\infty, 6}\big( \|v\|_{E^m} + |h|_{m} \big).$$
Since we had already proven \eqref{Talpha0}, we finally obtain that
\begin{equation}
\label{CSzomega}
\| \mathcal{C}_{Sz}\| \leq \Lambda_{\infty, 6}\big( \|v\|_{E^m} + |h|_{m}\big).
\end{equation}
To conclude the proof of Proposition \ref{propomeganh}, we use the energy estimate \eqref{omeganh1},
the commutator estimates \eqref{CS2omega}, \eqref{CSyomega}, \eqref{CSzomega}, Lemma \ref{mingrad}
and the Young inequality to obtain
\begin{align*}
& \|\omega_{nh}^{m-1}(t) \|^2 + \varepsilon \int_{0}^t \| \nabla \omega_{nh}^{m-1} \|^2 \\
& \leq \Lambda_{0}\| \omega(0)\|_{m-1}^2
+ \int_{0}^t \Lambda_{\infty, 6}\big( \|v\|_{E^m}^2 + \| \omega\|_{m-1}^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2}}^2 \big)
+ \Lambda_{0} \int_{0}^t \varepsilon \| \nabla \omega \|_{m-2}^2.
\end{align*}
To end the proof, we note that thanks to \eqref{gues}, we have
$$ \sqrt{\varepsilon} \| \nabla \omega \|_{m-2} \leq \Lambda_{0}\big( \sqrt{\varepsilon}\| \partial_{zz}v \|_{m-2} + \sqrt{\varepsilon} \|\partial_{z}v\|_{m-1} \big) + \Lambda_{\infty}|h|_{m-{1\over 2}} $$
and we use again Lemma \ref{lemdzS} and \eqref{equiv1} to obtain that
\begin{align*}
& \|\omega_{nh}^{m-1}(t) \|^2 + \varepsilon \int_{0}^t \| \nabla \omega_{nh}^{m-1} \|^2 \\
& \leq \Lambda_{0}\| \omega(0)\|_{m-1}^2
+ \int_{0}^t \Lambda_{\infty, 6}\big( \|V^m\| ^2 + \| \omega\|_{m-1}^2 + |h|_{m}^2 + \varepsilon |h|_{m+{1 \over 2}}^2 \big)
+ \Lambda_{0} \int_{0}^t \varepsilon \| \nabla S_{{\bf n}} \|_{m-2}^2.
\end{align*}
This ends the proof of Proposition \ref{propomeganh}.
\end{proof}
\subsection{Estimate of $\omega_{h}^\alpha$}
\label{sectionheat}
It remains to estimate $\| \omega^{m-1}_{h}\|_{L^4([0, T], L^2(\mathcal{S}))}$,
where $\omega^\alpha_{h}$ solves the homogeneous equation \eqref{eqomegah} with the non-homogeneous boundary condition \eqref{omegahc}.
Note that the only estimate on $ (Z^\alpha \omega)^b$ that we have at our disposal is the $L^2$ in time
estimate \eqref{omegab2}. This creates two difficulties: the first one is that we cannot easily lift
this boundary condition and perform a standard energy estimate; the second one is that, due to this poor estimate on the
boundary value, we cannot hope for an estimate of
$\sup_{[0, T]} \| \omega^{m-1}_{h}\|$ independent of $\varepsilon$.
\subsubsection{A simple computation on the heat equation}
To understand the difficulty, let us consider the heat equation
\begin{equation}
\label{exheat}
\partial_{t} f - \varepsilon \Delta f= 0, \quad x= (y,z) \in \mathcal{S}
\end{equation}
with zero initial data and the boundary condition $f(t,y,0) =f^b(t,y)$.
\begin{lem}
\label{lemheat}
The solution of the above equation satisfies the estimate:
$$ \int_{0}^{+ \infty} e^{-2\gamma t } \big\|( \gamma + |\partial_{t}|)^{1\over 4} f \big\|^2 \, dt
\leq \sqrt{\varepsilon} \int_{0}^{+ \infty} e^{-2 \gamma t}|f^b|_{L^2(\mathbb{R}^2)}^2 \, dt.$$
\end{lem}
Consequently, we get a control of $f$ in $H^{1\over 4}_{t}(L^2(\mathcal{S}))$.
Note that by Sobolev embedding, this implies that we can expect a control of
$ \| f\|_{L^4_{t}(L^2(\mathcal{S}))}$ (and not of $\| f\|_{L^\infty_{t}(L^2(\mathcal{S}))}$).
\begin{proof}
We can use a Laplace Fourier transform in time and $y$ to get the equation
$$ - \varepsilon \partial_{zz} \hat f + (\gamma + i \tau + \varepsilon | \xi |^2 ) \hat f= 0, \quad \hat{f}_{/z=0} = \hat{f}^b$$
where
$$\hat{f}(\gamma, \tau, \xi, z)= \int_{0}^{+ \infty}\int_{\mathbb{R}^2} e^{-\gamma t - i \tau t - i \xi \cdot y} f(t,y,z) \, dt dy $$
with $\gamma >0$.
We can solve this equation explicitly to get
$$\hat{f}= e^{ \big( \gamma + i \tau + \varepsilon |\xi|^2\big)^{1 \over 2} {z\over \sqrt{\varepsilon} } }\hat{f}^b, \quad z<0$$
and hence, we find that
$$ | \hat{f}(\gamma, \tau, \xi, \cdot ) |_{L^2_{z}}^2 \leq { \sqrt{\varepsilon} \over (\gamma + |\tau | + \varepsilon |\xi|^2)^{1 \over 2} }
|\hat{f}^b|^2.$$ This yields
$$ (\gamma + |\tau|)^{1 \over 2} | \hat{f}(\gamma, \tau, \xi, \cdot ) |_{L^2_{z}}^2 \leq \sqrt{\varepsilon} |\hat{f}^b|^2$$
and the result follows from the Bessel-Parseval identity.
\end{proof}
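Note that the last two estimates of this proof rest only on the following elementary computation, which we record for convenience. Writing $w= \gamma + i \tau + \varepsilon |\xi|^2$ and choosing the square root with positive real part, we have
$$ \int_{-\infty}^0 e^{ 2\, \mbox{Re } \sqrt{w}\, {z \over \sqrt{\varepsilon}}}\, dz = {\sqrt{\varepsilon} \over 2\, \mbox{Re } \sqrt{w}}, \qquad 2\, \mbox{Re } \sqrt{w} = \sqrt{2\big( |w| + \mbox{Re } w \big)} \geq \sqrt{ 2 |w|} \geq \big( \gamma + |\tau| + \varepsilon |\xi|^2\big)^{1 \over 2},$$
where the last inequality uses that $\sqrt{2}\, |w| \geq \mbox{Re } w + |\mbox{Im } w| = \gamma + \varepsilon |\xi|^2 + |\tau|$.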
\subsubsection{Statement of the estimate of $\omega_{h}^\alpha$}
\begin{prop}
\label{propomegah}
For $T \in [0, T^\varepsilon]$, $T^\varepsilon \leq 1$, assume that for $M>0$, the estimate
\begin{equation}
\label{hypomegah}
\sup_{[0, T]} \Lambda_{\infty, 6}(t) + \int_{0}^T \big( \varepsilon \|\nabla V^6 \|^2 + \varepsilon \| \nabla S_{{\bf n}}\|_{4}^2 \big) \leq M
\end{equation}
holds. Then, there exists $\Lambda(M)$ such that for $|\alpha| \leq m-1$, we have
\begin{align*}
\| \omega^\alpha_{h} \|_{L^4([0, T], L^2(\mathcal{S}))}^2 \leq \Lambda(M) \int_{0}^T \big( \|V^m \|^2 + \|h\|_{m}^2\big) +
\varepsilon \int_{0}^t \| \nabla V^m\|^2.
\end{align*}
\end{prop}
Again, note that the last term in the right hand side of the above inequality can be estimated by using
Proposition \ref{conormv}.
We will prove this estimate by using a microlocal symmetrizer.
In the convection diffusion equation \eqref{eqomegah}, the convection term creates some
difficulties. Indeed,
the convection operator
$\partial_{t}^\varphi + v \cdot \nabla^\varphi= \partial_{t} + v_{y}\partial_{y}+ V_{z}\partial_{z}$ dominates in the low frequency regime.
For this operator, the boundary is characteristic since
$V_{z}$ vanishes on the boundary and the fact that $V_{z}$ is not uniformly zero in a vicinity of the boundary
is not convenient when performing microlocal energy estimates
(see \cite{Majda-Osher}).
More importantly, if we add a convection term to \eqref{exheat}, even constant, that is to say that we study
$$ \partial_{t} f + c \cdot \nabla_{y} f - \varepsilon \Delta f=0$$
for $c \in \mathbb{R}^2$, then the result of Lemma \ref{lemheat} becomes
$$ \int_{0}^{+ \infty} e^{-2\gamma t } \big\|( \gamma + |\partial_{t} + c \cdot \nabla_{y}|)^{1\over 4} f \big\|^2 \, dt
\leq \sqrt{\varepsilon} \int_{0}^{+ \infty} e^{-2 \gamma t}|f^b|_{L^2(\mathbb{R}^2)}^2 \, dt.$$
In other words, the smoothing effect does not occur in the direction $\partial_{t}$ but in the direction $\partial_{t}+ c\cdot \nabla_{y} $.
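For a constant vector $c$, this modified estimate can be checked by a simple change of frame; we sketch the reduction. If $f$ solves the above equation with boundary data $f^b$, then $g(t,y,z):= f(t, y+ ct, z)$ solves
$$ \partial_{t} g - \varepsilon \Delta g = 0, \quad z <0, \qquad g(t,y,0)= f^b(t, y + ct),$$
so that Lemma \ref{lemheat} applies to $g$; since the change of variables $y \mapsto y + ct$ preserves all the $L^2(\mathbb{R}^2)$ norms involved and intertwines $\partial_{t}$ acting on $g$ with $\partial_{t} + c \cdot \nabla_{y}$ acting on $f$, we recover the stated inequality.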
This will be much more difficult to detect when $c$ is a variable coefficient. Consequently, in order to overcome this difficulty, we shall use Lagrangian coordinates to eliminate the convection term and study a purely parabolic problem.
\subsubsection{Lagrangian coordinates}
Let us define a parametrization of $\Omega_{t}$ by
\begin{equation}
\label{lagrange1}
\partial_{t} X(t,x)= u(t, X(t,x))= v(t,\Phi(t, \cdot)^{-1}\circ X), \quad X(0, x)= \Phi(0,x)
\end{equation}
where $\Phi(t, \cdot)^{-1}$ stands for the inverse of the map $\Phi(t, \cdot)$ defined by \eqref{diff}.
Note that the meaning of the initial condition is that we choose the parametrization of $\Omega_{0}$
which is given by \eqref{diff}.
Let us also define $J(t,x)= | \mbox{det }\nabla X(t,x)|$ the Jacobian of the change of variable.
We have the following estimates for $X:$
\begin{lem}
\label{lemlagrange}
Under the assumption of Proposition \ref{propomegah}, we have for $t \in[0, T]$ the estimates
\begin{eqnarray}
\label{J0} & &|J(t,x)|_{W^{1, \infty}}+ |1/J(t,x)|_{W^{1, \infty}} \leq \Lambda_{0}, \\
\label{nablaX} & & \| \nabla X(t) \|_{L^\infty} + \| \partial_{t} \nabla X \|_{L^\infty} \leq \Lambda_{0} e^{ t\Lambda(M)}, \\
\label{dtnablaX} & & \| \nabla X\|_{1, \infty} + \| \partial_{t} \nabla X\|_{1, \infty} \leq \Lambda(M)e^{ t\Lambda(M)}, \\
\label{ZnablaX} & &
\sqrt{\varepsilon}\| \nabla^2 X \|_{1, \infty}
+ \sqrt{\varepsilon}\| \partial_{t} \nabla^2 X \|_{L^\infty} \leq \Lambda(M)(1+t^2) e^{t \Lambda(M)}.
\end{eqnarray}
\end{lem}
\begin{proof}
Since $u$ is divergence free we get that $J(t,x)= J_{0}(x)$ and the first estimate follows from Proposition \ref{propeta}.
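Let us recall the classical computation behind this first claim: along the flow \eqref{lagrange1},
$$ \partial_{t} \big( \mbox{det }\nabla X(t,x) \big) = \big( \mbox{div } u \big)\big(t, X(t,x)\big)\, \mbox{det }\nabla X(t,x) = 0, \qquad \mbox{ so that } J(t,x)= J(0,x)= J_{0}(x).$$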
Next, by using the ordinary differential equation \eqref{lagrange1}, we get that
\begin{equation}
\label{eqDX}\partial_{t} DX= Dv D\Phi^{-1} DX\end{equation}
where $D$ stands for the differential with respect to the space variable and hence we find
$$ |\nabla X(t,\cdot )|_{L^\infty} \leq \Lambda_{0}+ \Lambda_{0} \int_{0}^t \|v\|_{E^{1, \infty}} | \nabla X(t)|_{L^\infty}
\leq \Lambda_{0} + \Lambda(M) \int_{0}^t | \nabla X(t)|_{L^\infty}.$$
This yields the first part of \eqref{nablaX} by using the Gronwall inequality. Moreover, by using again the equation \eqref{eqDX}, we get the second part of \eqref{nablaX}.
Next, by applying one conormal derivative to \eqref{eqDX}, we also get that
$$ \| \nabla X \|_{1, \infty} \leq \Lambda_{0}+ \Lambda_{0} \int_{0}^t \|v\|_{E^{2, \infty}} \| \nabla X \|_{1, \infty}$$
and hence we get \eqref{dtnablaX} from the Gronwall inequality. The estimate for the time derivative
follows again from the equation. The estimate \eqref{ZnablaX} can be obtained in the same way. By applying
$\sqrt{\varepsilon}\nabla$ to \eqref{eqDX}, we find
$$ \sqrt{\varepsilon } \| \nabla^2 X\|_{1, \infty} \leq \Lambda(M) + \Lambda(M) \int_{0}^t \sqrt{\varepsilon} \| \nabla^2 X\|_{1, \infty}
+ \Lambda(M) e^{\Lambda(M) t} \int_{0}^t \sqrt{\varepsilon} \| \nabla^2 v\|_{1, \infty}$$
and hence, by using Corollary \ref{corLinfty0t} and the assumption \eqref{hypomegah}, we find
$$ \sqrt{\varepsilon } \| \nabla^2 X\|_{1, \infty} \leq \Lambda(M) ( 1+t)^2 e^{\Lambda(M) t} + \Lambda(M) \int_{0}^t \sqrt{\varepsilon} \| \nabla^2 X\|_{1, \infty}$$
and the first part of \eqref{ZnablaX} follows by using the Gronwall inequality. For the second part of \eqref{ZnablaX},
it suffices to apply $\sqrt{\varepsilon}\nabla$ to \eqref{eqDX} and to use \eqref{hypomegah} and that
$ \sqrt{\varepsilon}\| \nabla ^2 v\|_{L^\infty} \leq \Lambda_{\infty, 6}$.
\end{proof}
\subsubsection{Equation in Lagrangian coordinates}
Let us set
\begin{equation}
\label{Omegaalphadef}
\Omega^\alpha=e^{-\gamma t} \omega_{h}^\alpha (t, \Phi^{-1}\circ X)\end{equation}
where $\gamma >0$ is a large parameter to be chosen. Then $\Omega^\alpha$ solves in $\mathcal{S}$ the equation
\begin{equation}
\label{omegalagrange} a_{0}\big( \partial_{t}\Omega^\alpha+ \gamma \Omega^\alpha\big) - \varepsilon \partial_{i}\big( a_{ij} \partial_{j}\Omega^\alpha \big) = 0
\end{equation}
where we have used the summation convention. Note that $ a_{0}= |J_{0}|^{1\over 2}$ and that the matrix $ (a_{ij})_{i,j}$ is defined by
$$ (a_{ij})_{ij} = |J_{0}|^{1 \over 2} P^{-1}, \quad P_{ij}= \partial_{i}X \cdot \partial_{j} X.$$
Note that the coefficients of the matrix $(a_{ij})$ will therefore match the estimates of Lemma \ref{lemlagrange}.
Moreover, we also obtain thanks to Lemma \ref{lemlagrange} that
\begin{equation}
\label{adefp}
a(t,x)_{ij} \xi_{i} \xi_{j} \geq {1 \over \Lambda_{0}} e^{-t \Lambda(M)} |\xi|^2 \geq c_{0}|\xi|^2
\end{equation}
where $c_{0}$ depends only on $M$ for $t \in [0, 1]$.
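For the reader's convenience, here is the short computation behind \eqref{adefp} (a sketch, in which the constants are not tracked): since $P= (\nabla X)^t \nabla X$, we have pointwise
$$ a_{ij}\, \xi_{i}\xi_{j} = |J_{0}|^{1 \over 2}\, \xi \cdot P^{-1} \xi \geq {|J_{0}|^{1 \over 2} \over \lambda_{\max}(P)}\, |\xi|^2 \geq { |J_{0}|^{1\over 2} \over \|\nabla X(t)\|_{L^\infty}^2}\, |\xi|^2,$$
and the lower bound \eqref{adefp} then follows from \eqref{J0} and \eqref{nablaX}.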
On the boundary, we get that
\begin{equation}
\label{bordOmega} \Omega^\alpha_{/z=0}= (\Omega^\alpha)^b:= e^{-\gamma t} \omega^\alpha_{h}(t,\Phi^{-1}\circ X(t, y,0)).
\end{equation}
We shall prove that:
\begin{theoreme}
\label{theomicrolocal}
There exists $\gamma_{0}$ which depends only on $M$ defined by \eqref{hypomegah} such that for $\gamma \geq \gamma_{0}$,
the solution of \eqref{omegalagrange} with the boundary condition \eqref{bordOmega} satisfies the estimate
$$ \big\| \Omega^{m-1}\big\|_{H^{1 \over 4}(0, T, L^2)}^2 \leq \Lambda(M)
\sqrt{\varepsilon} \int_{0}^T | (\Omega^{m-1})^b|_{L^2(\mathbb{R}^2)}^2$$ where $\Lambda(M)$ is uniformly bounded for $T\in [0, 1]$.
\end{theoreme}
Note that we define the norm $H^{1 \over 4}((0,T), L^2)$ as
\begin{equation}
\label{H1/4}
\big\| f \|_{H^{1 \over 4}(0, T, L^2)}^2 = \inf \big\{ \|Pf\|_{H^{1 \over 4}(\mathbb{R}, L^2(\mathcal{S}))}, \quad Pf= f \mbox{ on } [0, T]
\times \mathcal{S}\big\}\end{equation}
and we define the norm on the whole space by Fourier transform in time.
Let us first explain how to deduce the proof of Proposition \ref{propomegah} from the estimate of Theorem \ref{theomicrolocal}.
We first observe that
$$ \| \Omega^\alpha \|_{L^4([0, T], L^2(\mathcal{S}))}^2
\leq C \| \Omega^\alpha \|_{L^2(\mathcal{S}, L^4(0, T))}^2
\leq C \| \Omega^{m-1}\|_{H^{1 \over 4}(0, T, L^2)}^2.$$
where the last estimate comes from the one-dimensional Sobolev embedding $H^{1 \over 4} \subset L^4$.
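For completeness, let us record the two elementary facts behind this chain of inequalities: since $4 \geq 2$, the first bound is Minkowski's integral inequality, while the second one follows by applying the one-dimensional embedding on each vertical line and then using Fubini and Plancherel in time, namely
$$ \| \Omega^\alpha \|_{L^4([0, T], L^2(\mathcal{S}))} \leq \| \Omega^\alpha \|_{L^2(\mathcal{S}, L^4(0, T))}, \qquad \int_{\mathcal{S}} \| P\Omega^\alpha(\cdot, x) \|_{H^{1 \over 4}(\mathbb{R})}^2\, dx = \| P \Omega^\alpha \|_{H^{1 \over 4}(\mathbb{R}, L^2(\mathcal{S}))}^2$$
for any extension $P\Omega^\alpha$ as in \eqref{H1/4}; one concludes by taking the infimum over the extensions, since each $\|\Omega^\alpha\|_{H^{1\over 4}(0,T, L^2)}$ is obviously bounded by $\|\Omega^{m-1}\|_{H^{1\over 4}(0,T, L^2)}$.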
Note that with the definition \eqref{H1/4}, $C$ does not depend on $T$ since $C$ is given by the Sobolev
embedding in the whole space. Next, we use Theorem \ref{theomicrolocal} to get
$$ \| \Omega^\alpha \|_{L^4([0, T], L^2(\mathcal{S}))}^2 \leq \Lambda(M) \sqrt{\varepsilon} \int_{0}^T | (\Omega^{m-1})^b|_{L^2(\mathbb{R}^2)}^2$$
and thanks to \eqref{bordOmega} and \eqref{Omegaalphadef}, we obtain by change of variable
(let us recall that $J$ satisfies the estimate \eqref{J0}) that
$$ \| \omega^\alpha_{h} \|_{L^4([0, T], L^2(\mathcal{S}))}^2 \leq \Lambda(M) \sqrt{\varepsilon} \int_{0}^T | (\omega_{h})^b|_{m-1}^2.$$
We finally end the proof of Proposition \ref{propomegah} by using \eqref{omegab2} and the Young inequality.
It remains to prove Theorem \ref{theomicrolocal}. In the next subsection, we shall
study a constant coefficient model. In the analysis of this model, we will
construct symbols which will allow us to study the original problem
with the help of the paradifferential calculus described in section \ref{sectionparadiff}.
\subsubsection{Symbolic Analysis}
In this section, we shall perform a symbolic analysis. This will allow to get our energy estimate for the equation
\eqref{omegalagrange}, by using the semi-classical paradifferential calculus with parabolic homogeneity described
in section \ref{sectionparadiff}.
Let us consider $a=(a_{0}(z), a_{ij}(z))$ smooth in $z$ where $a_{ij}$ is a symmetric matrix
and $a_{0}$ a function with values in the compact set $\mathcal{K}$ defined by
\begin{equation}
\label{compact} a_{0}\geq m ,\, a_{3,3} \geq m, \quad |a| + \sqrt{\varepsilon}| \partial_{z} a | \leq M, \quad (a_{ij}) \geq c_{0} \mbox{Id}.
\end{equation}
where $m$, $c_{0}$ and $M$ are positive numbers.
This means that we neglect the dependence in $t, \,y$ in the coefficients of \eqref{omegalagrange}.
The symbolic version of \eqref{omegalagrange} becomes
\begin{equation}
\label{omegasymbolic}
\big( {a_{0}\over a_{3,3}} (\gamma + i \tau ) + A_{y}(a, \sqrt{\varepsilon} \xi )\big) \Omega + A_{z}(a, \sqrt{\varepsilon}\xi)\sqrt{\varepsilon} \partial_{z} \Omega= \varepsilon \partial_{zz} \Omega +
R + F, \quad z<0, \quad \Omega(0)= \Omega^b
\end{equation}
where $A_{y}$ is homogeneous of degree two in $\xi$, $A_{z}$ is homogeneous of degree one in $\xi$ and
\begin{equation}
\label{defArond}
A_{y}(a, \xi)= \sum_{1\leq i,j \leq 2} {a_{ij} \over a_{3,3}}\xi_{i} \xi_{j}, \quad A_{z}(a,\xi)= - 2 \sum_{1 \leq k \leq 2}{a_{k3}\over a_{3,3}}\, i \xi_{k},
\end{equation}
and
$$
R= i \sum_{1 \leq k \leq 2} \sqrt{\varepsilon} {\partial_{z} a_{3k}\over a_{3,3}} \sqrt{\varepsilon}\xi_{k} \Omega+ \sqrt{\varepsilon}
{\partial_{z}a_{33}
\over a_{3,3}}\, \sqrt{\varepsilon}\partial_{z}\Omega.
$$
Note that we have also incorporated in \eqref{omegasymbolic} a given source term $F$.
Since in the system \eqref{omegalagrange}, the equations for the components of $\Omega$ are not coupled, we can study
separately each component and hence we shall assume that $\Omega \in \mathbb{R}$ in this subsection.
Thanks to \eqref{compact}, we have that
\begin{equation}
\label{rdexi}
|R| \lesssim \sqrt{\varepsilon} | \xi| | \Omega|+ \sqrt{\varepsilon}|\partial_{z}\Omega|.
\end{equation}
Let us set $\zeta^\varepsilon= (\gamma, \tau, \varepsilon \xi)$ and $\langle \zeta^\varepsilon \rangle= \big( \gamma^2 + \tau^2+ | \sqrt{\varepsilon} \xi |^4 \big)^{1 \over 4}$.
We rewrite \eqref{omegasymbolic} as a system by setting
\begin{equation}
\label{Udef}
U=\big(\Omega, \sqrt{\varepsilon} \partial_{z}\Omega/ \langle \zeta^\varepsilon \rangle \big)^t.
\end{equation}
This yields the system
\begin{equation}
\label{systemOmega}
\sqrt{\varepsilon} \partial_{z} U= \langle \zeta^\varepsilon \rangle \mathcal{A}\big( a, \tilde{\zeta}^\varepsilon\big) U + \mathcal F
\end{equation}
where
\begin{equation}
\label{matrixOmega}
\mathcal{A}\big( a, \tilde{\zeta}\big)= \left(\begin{array}{cc} 0 & 1 \\ \ {a_{0}\over a_{3,3}}(\tilde \gamma + i \tilde\tau)
+ A_{y}(a, \tilde{\xi}) & A_{z}(a, \tilde{\xi})\end{array} \right)
\end{equation}
and where we set $\tilde \zeta = ( \tilde{\gamma}, \tilde{\tau}, \tilde{\xi})$
with
$$ \tilde \gamma = \gamma / \langle \zeta \rangle^2, \quad \tilde \tau= \tau / \langle \zeta \rangle^2, \quad
\tilde \xi= \xi/ \langle \zeta \rangle$$
and for $\tilde{\zeta}^\varepsilon$, we replace $(\gamma, \tau, \xi)$ by $(\gamma, \tau, \varepsilon \xi)$.
Moreover, the source term $\mathcal{F}$ is defined by
\begin{equation}
\label{Fdef}
\mathcal F={1 \over \langle \zeta^\varepsilon \rangle} (0, R + F).
\end{equation}
The boundary condition at $z=0$ becomes
\begin{equation}
\label{Gammadef}
\Gamma U= \Omega^b, \quad \Gamma (U_{1}, U_{2})^t= U_{1}
\end{equation}
where the writing $U=(U_{1}, U_{2})^t$ is related to the block structure of \eqref{matrixOmega}.
Note that $\tilde{\zeta}$ is in the compact set $\tilde\gamma^2 + \tilde\tau^2 + |\tilde \xi|^4 = 1, \, \tilde\gamma \geq 0.$
\begin{prop}
\label{propmodele}
There exists $\gamma_{0}>0$ such that for every $\gamma \geq \gamma _{0}$,
we have for the solution of \eqref{systemOmega}, \eqref{Gammadef} the estimate
$$ \langle \zeta^\varepsilon \rangle |U|_{L^2_{z}}^2 + \sqrt{\varepsilon}|U(0)|^2 \leq C \sqrt{\varepsilon} |\Omega^b|^2 + \Big( {| F|_{L^2_{z}} \over \langle \zeta^\varepsilon \rangle^{3\over 2}}\Big)^2. $$
\end{prop}
Note that $\gamma_{0}$ only depends on the estimates \eqref{compact}. In terms of $\Omega$, the previous
proposition gives the estimate
\begin{equation}
\label{estmodele} \langle \zeta^\varepsilon \rangle |U|_{L^2_{z}}^2 = \langle \zeta^\varepsilon\rangle | \Omega|_{L^2_{z}}^2 + { \varepsilon \over \langle \zeta^\varepsilon \rangle} |
\partial_{z} \Omega|_{L^2_{z}}^2 \leq C \sqrt{\varepsilon} |\Omega^b|^2 + \Big( {| F|_{L^2_{z}} \over \langle \zeta^\varepsilon \rangle^{3\over 2}}\Big)^2.
\end{equation}
We shall prove this Proposition by using the symmetrizers method.
\begin{lem}
\label{lemsym}
There exists $\mathcal{S}\big( a, \tilde{\zeta}\big) $ symmetric, smooth in its argument and $\kappa>0$ such that
$$ \mathcal{S}\mathcal{A} + (\mathcal{S} \mathcal{A})^* \geq \kappa Id, \quad \mathcal{S} + \Gamma^* \Gamma \geq \kappa Id $$
for every $(a,\tilde \zeta) \in \mathcal{K}\times S_{+}$ where $\mathcal{K}$ is the compact set defined by \eqref{compact} and
$$S_{+}= \{\tilde{\zeta}, \, \langle \tilde \zeta \rangle = 1,\,
\tilde{\gamma} \geq 0\}.$$
\end{lem}
The proof of the Proposition can be easily obtained from the Lemma.
We multiply the equation by $\mathcal{S}\big( a(z), \tilde{\zeta}\big) U$ and we integrate in $z$. This yields by using Lemma \ref{lemsym}
$$ \kappa \big( \langle \zeta^\varepsilon \rangle |U|_{L^2_{z}}^2 + \sqrt{\varepsilon}|U(0)|^2 \big)
\leq C\Big( |\mathcal F|_{L^2_{z}} |U|_{L^2_{z}}+ \sqrt{\varepsilon}| \partial_{z} \mathcal{S}|_{L^\infty} |U|_{L^2_{z}}^2 + \sqrt{\varepsilon} |\Gamma U(0)|^2\Big).$$
Note that $ \sqrt{\varepsilon}| \partial_{z} \mathcal{S}|_{L^\infty}$ is uniformly bounded thanks to \eqref{compact}
and hence we get from Cauchy-Schwarz that
$$ \langle \zeta^\varepsilon \rangle |U|_{L^2_{z}}^2 + \sqrt{\varepsilon}|U(0)|^2 \leq C \Big( { |\mathcal F|_{L^2_{z}}^2 \over \langle \zeta^\varepsilon \rangle }
+ |U|_{L^2_{z}}^2 + \sqrt{\varepsilon} |\Gamma U(0)|^2 \Big).$$
By using \eqref{Fdef}, we get that $|\mathcal F| \lesssim |U|$
and hence the result follows by choosing $\gamma$ sufficiently large.
It remains to prove Lemma \ref{lemsym}; it can be obtained from classical arguments.
We first prove that the eigenvalues of $ \mathcal{A}$ have nonzero real parts. If $X=(X_{1}, X_{2})$ is an eigenvector of $\mathcal{A}(a, \tilde{\zeta})$ associated to the eigenvalue $\mu$, we get
that $X_{2}= \mu X_{1}$ and that
$$ a_{33}\mu^2 X_{1} =\Big( a_{0}(\tilde \gamma +i \tilde \tau) +a_{3,3} A_{y}(\tilde{\xi}) + \mu a_{3,3}A_{z}(\tilde{\xi}) \Big)X_{1}.$$
If we assume that $\mu = i \lambda$, this yields
$$a_{0}(\tilde \gamma + i \tilde \tau) + \eta_{i}\eta_{j} a_{ij}= 0$$
where $\eta=(\tilde{\xi}_{1}, \tilde{\xi}_{2}, \lambda)$. By using \eqref{compact}, we find by taking the real part
that $\eta=0$ and $\tilde \gamma = 0$ and thus also $\tilde \tau=0$. This is impossible for
$\langle \tilde \zeta \rangle = 1$. Consequently, there is no eigenvalue on the imaginary axis. Moreover, we easily see
that there is one eigenvalue with positive real part $\mu_{+}$ and one with negative real part $\mu_{-}$.
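One elementary way to see this splitting is the following: the eigenvalues of \eqref{matrixOmega} are the two roots of the quadratic equation
$$ \mu^2 - A_{z}(a, \tilde{\xi})\, \mu - \Big( {a_{0} \over a_{3,3}} (\tilde \gamma + i \tilde \tau) + A_{y}(a, \tilde{\xi})\Big) = 0,$$
and by \eqref{defArond} their sum $A_{z}(a, \tilde \xi)$ is purely imaginary, so that $\mbox{Re }\mu_{+} = - \mbox{Re }\mu_{-}$. Since no eigenvalue lies on the imaginary axis, exactly one root has positive real part and the other one has negative real part; in particular, $\mu_{+} \neq \mu_{-}$ everywhere on $\mathcal{K} \times S_{+}$.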
This yields that we can diagonalize $\mathcal{A}$ in a smooth way: there exists a smooth invertible matrix
$\mathcal{P}(a, \tilde{\zeta})$ such that
$$ \mathcal{A}= \mathcal{P}\left(\begin{array}{cc} \mu_{+} & 0 \\ 0 & \mu_{-} \end{array} \right)\mathcal{P}^{-1}.$$
We thus choose $\mathcal{S}$ in a classical way under the form
$$ \mathcal{S}= (\mathcal{P}^{-1})^*\left( \begin{array}{cc} 1 & 0 \\ 0 & -\delta \end{array} \right)\mathcal{P}^{-1}$$
with $\delta>0$ to be chosen. The first property on $\mathcal{S}$ in Lemma \ref{lemsym} is therefore met.
Note that smooth projections on the subspace associated to the positive and respectively negative eigenvalue of $\mathcal{A}$
are given by
$$ \Pi_{+}= \mathcal{P} \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right)\mathcal{P}^{-1}, \quad
\Pi_{-}= \mathcal{P} \left( \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right)\mathcal{P}^{-1}$$
To get the second one, we note that if $X$ is an eigenvector of $\mathcal{A}(a, \zeta)$ associated with the eigenvalue of positive real part (we recall
that we study the equation for $z<0$)
we necessarily have $\Gamma X \neq 0$.
In other words, we have $\mbox{Ker }\Gamma \cap \mbox{Ker }\Pi_{-}= \{ 0 \}$. This yields
that the map $X \mapsto (\Gamma X, \Pi_{-} X)$ is injective and hence by compactness, there exists $ \alpha >0$ such that for
every $X \in \mathbb{R}^2$, every $a$ in the compact set \eqref{compact} and $\tilde{\zeta}$, $\langle \tilde \zeta \rangle= 1$, $\tilde \gamma \geq 0$, we have
$$ | \Gamma X|^2 + | \Pi_{-}(a(0), \zeta )X |^2 \geq \alpha |X|^2.$$
This allows us to choose $\delta$ sufficiently small in order to get the second property in Lemma \ref{lemsym}.
\subsection{Energy estimate via microlocal symmetrizer}
We shall now give the proof of Theorem \ref{theomicrolocal}.
Let us take $T\in [0, T^\varepsilon]$. We consider the solution of \eqref{omegalagrange}, \eqref{bordOmega} on $[0,T]$. Since
the initial value is zero, we can assume that $\Omega^\alpha$ is zero for $t \leq 0$.
Thanks to Lemma \ref{lemlagrange}, we can assume that $a_{0}$ and $a_{ij}$ verify the estimates \eqref{compact}
on $[0, T] \times \mathcal{S}$ by a suitable choice of the numbers $m$, $M$ and $c_{0}$
(thus these numbers depend on $\Lambda_{\infty, T}$).
Note that thanks to Lemma \ref{lemlagrange}, we can also assume that
\begin{equation}
\label{compact2} \|\partial_{t,y}( a_{0}, a_{ij}) \|_{L^\infty} + \sqrt{\varepsilon} \|\partial_{t,y}\nabla a_{ij}\|_{L^\infty} \leq M.
\end{equation}
We can choose extensions of these coefficients on
$\mathbb{R} \times \mathcal{S}$ such that the new coefficients still satisfy \eqref{compact} and the above estimate.
We shall not use a different notation for the extensions. We also extend the boundary value $(\Omega^\alpha)^b$
in \eqref{bordOmega} by $0$ for $t\geq T$ and for $t \leq 0$ and denote by $g$ this new function. We shall consider
the solution of
\begin{equation}
\label{etalagrange}
a_{0}\big( \partial_{t}+ \gamma\big)\rho - \varepsilon \partial_{i}\big( a_{ij} \partial_{j}\rho \big) = 0, \quad (t,x) \in \mathbb{R}\times \mathcal{S}
, \quad \rho(t,y,0)=g
\end{equation}
which vanishes for $t \leq 0$. By using a standard uniqueness result for this parabolic equation, we get
that $\rho = \Omega^\alpha$ on $[0, T] \times \mathcal{S}$. Consequently, by using the notations of section \ref{sectionparadiff}, it suffices to prove the
estimate:
\begin{prop}
The solution of \eqref{etalagrange} satisfies the estimate
\label{propparaeta}
\begin{equation}
\label{aprouver} \| \rho \|_{\mathcal{H}^{{1\over 2}, \gamma, \varepsilon}}^2 \leq C \sqrt{\varepsilon} \|g\|_{L^2(\mathbb{R}^3)}^2.
\end{equation}
for $\gamma$ sufficiently large, where $C$ depends only on the parameters in \eqref{compact}, \eqref{compact2}
(in particular, it is independent of $\varepsilon$).
\end{prop}
Indeed, let us assume that this last proposition is proven. Then, since
$$ \|\rho\|_{H^{1\over 4}(\mathbb{R}, L^2(\mathcal{S}))} \leq \| \rho \|_{\mathcal{H}^{{1\over 2}, \gamma, \varepsilon}} $$
we find that
\begin{equation}
\label{prouve} \| \Omega^\alpha \|_{H^{1 \over 4}([0, T], L^2(\mathcal{S}))}^2 \leq C \sqrt{\varepsilon} \|(\Omega^\alpha)^b\|_{L^2([0, T] \times \mathbb{R}^2)}^2.\end{equation}
We shall thus focus on the proof of \eqref{aprouver}. Again, we shall consider that $\rho$ is a scalar function.
By using the notations of the previous subsection, we can define two symbols (with $z$ as parameter) $a_{y}$ and $a_{z}$ by
$$ a_{y}(X, \zeta,z)= A_{y}(a(t,y,z), \xi), \quad a_{z}(X, \zeta,z)= A_{z}(a(t,y,z), \xi),$$ in such a way that
we can rewrite \eqref{etalagrange} under the form
\begin{equation}
\label{etalagrange2}\varepsilon \partial_{zz} \rho= {a_{0} \over a_{33}}\big( \partial_{t} + \gamma ) \rho + a_{y}(X, \sqrt{\varepsilon} \partial_{y}, z) \rho
+a_{z}(X, \sqrt{\varepsilon} \partial_{y}, z) \sqrt{\varepsilon} \partial_{z} \rho + R^1
\end{equation}
where
$$ R^1= {\sqrt{\varepsilon}\partial_{i} a_{ij}\over a_{3,3}} \sqrt{\varepsilon} \partial_{j} \rho.$$
In view of the model estimate of Proposition \ref{propmodele}, we need to control $\| R^1 \|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}.$
Let us set
$$b_{ij}= {\sqrt{\varepsilon}\partial_{i} a_{ij}\over a_{3,3}}$$
Let us start with the estimates of the terms where $j\neq 3$. We first note that
$$ \| b_{ij} \sqrt{\varepsilon} \partial_{j} \rho\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}
\leq \| \sqrt{\varepsilon} \partial_{j}\big( b_{ij} \rho\big)\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}
+ \| \sqrt{\varepsilon}\partial_j b_{ij} \rho\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}.$$
For the second term, thanks to the uniform estimate \eqref{compact} and \eqref{compact2}, we can write
$$ \| \sqrt{\varepsilon}\partial_{j} b_{ij} \rho\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}
\leq {1 \over \gamma^{3\over 4}}\| \sqrt{\varepsilon}\partial_{j} b_{ij} \rho\|_{L^2} \leq {C \over \gamma^{3\over 4}} \|\rho\|_{L^2}.$$
For the first term thanks to the definition of the weighted norm, we get
$$ \| \sqrt{\varepsilon} \partial_{j}\big( b_{ij} \rho\big)\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}}
\leq \| b_{ij} \rho\big\|_{\mathcal H^{-{1 \over 2}, \gamma, \varepsilon}}
\leq {C \over \gamma^{1\over 4}} \| \rho \|_{L^2}.$$
Consequently, we have proven that
$$ \| b_{ij} \sqrt{\varepsilon} \partial_{j} \rho\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}} \leq {C \over \gamma^{1\over 4} } \| \rho \|_{L^2}, \quad j \neq 3.$$
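Note that the powers of $\gamma$ gained in these manipulations (and in the ones below) all come from two elementary properties of the parabolic weight, which follow directly from the definition $\langle \zeta^\varepsilon \rangle = \big( \gamma^2 + \tau^2 + |\sqrt{\varepsilon} \xi|^4\big)^{1 \over 4}$:
$$ \langle \zeta^\varepsilon \rangle \geq \gamma^{1 \over 2}, \qquad |\sqrt{\varepsilon}\, \xi_{j}| \leq \langle \zeta^\varepsilon \rangle, \quad j= 1, \, 2.$$
In the weighted spaces $\mathcal{H}^{s, \gamma, \varepsilon}$ of section \ref{sectionparadiff}, the first inequality allows one to trade one unit of weight for a factor $\gamma^{-{1 \over 2}}$, while the second one shows that $\sqrt{\varepsilon}\, \partial_{j}$, $j=1, \, 2$, costs at most one unit of weight.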
It remains to handle the case $j=3$, which is more complicated. We shall use a duality argument and
the paradifferential calculus of section \ref{sectionparadiff}. We first write
$$ \| b_{ij} \sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal H^{-{3 \over 2}, \gamma, \varepsilon}} \leq {1 \over \gamma^{1 \over 2}}
\| b_{ij} \sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal H^{-{1 \over 2}, \gamma, \varepsilon}}.$$
Next for any test function $f$ in the Schwartz class, we write
$$ |\big( b_{ij} \sqrt{\varepsilon} \partial_{z} \rho, f \big)_{L^2}| \leq | \big( \sqrt{\varepsilon} \partial_{z} \rho, b_{ij} f \big)_{L^2}|
\leq \|\sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal{H}^{-{1 \over 2}, \gamma, \varepsilon}} \| b_{ij} f \|_{\mathcal{H}^{{1 \over 2}, \gamma, \varepsilon}}.$$
Next, thanks to the estimate \eqref{compact}, \eqref{compact2}, we get in particular that
$\|b_{ij}\|_{L^\infty}$ and $\|\nabla_{t,y} b_{ij}\|_{L^\infty}$ are uniformly bounded, thus we find
$$ \| b_{ij} f \|_{\mathcal{H}^{{1 \over 2}, \gamma, \varepsilon}}
\leq \|T^{\varepsilon, \gamma}_{b_{ij}} f\|_{\mathcal{H}^{{1 \over 2}, \gamma, \varepsilon}} + \| (b_{ij}- T^{\varepsilon, \gamma}_{b_{ij}}) f\|_{\mathcal{H}^{{1}, \gamma, \varepsilon}} \leq C\|f\|_{\mathcal{H}^{{1 \over 2}, \gamma, \varepsilon}} $$
by using in Theorem \ref{symbolic2} the estimate (1) and the first estimate in (5).
This proves by duality that
$$ \| b_{ij} \sqrt{\varepsilon} \partial_{z} \rho \|_{\mathcal{H}^{ -{1 \over 2}, \gamma, \varepsilon}} \leq C \|\sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal{H}^{-{1 \over 2}, \gamma, \varepsilon}}$$
and hence that
\begin{equation}
\label{bij1} \| b_{ij} \sqrt{\varepsilon} \partial_{z} \rho \|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} \leq {C\over \gamma^{1\over 2}} \|\sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal{H}^{-{1 \over 2}, \gamma, \varepsilon}}.\end{equation}
Consequently, we have proven that \begin{equation} \label{R1para}
\|R^1 \|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}
\leq {C\over \gamma^{1\over 4}} \big( \| \rho \|_{L^2} + \|\sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal{H}^{-{1 \over 2}, \gamma, \varepsilon}}\big).
\end{equation}
Note that in view of the left hand side in the model estimate of Proposition \ref{propmodele}, the above estimate is a good estimate
since for $\gamma$ sufficiently large, it will be possible to absorb this term by the principal term of our estimate.
Next, we can replace products by paraproducts in the equation \eqref{etalagrange2} to rewrite it under the form
\begin{equation}
\label{etalagrange3}
\varepsilon \partial_{zz}\rho= T^{\varepsilon, \gamma}_{a_{0}/a_{33}}\big( \partial_{t} + \gamma \big)\rho
+ T^{\varepsilon, \gamma}_{a_{y}} \rho + T^{\varepsilon, \gamma}_{a_{z}}\sqrt{\varepsilon} \partial_{z}\rho + R
\end{equation}
where
\begin{eqnarray}
\label{Rdefpara}
& & R= R^1 + R^2, \\
\label{R2defpara} & & R^2 = \big( {a_{0}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{0}/a_{33}}\big) \big( \partial_{t}+ \gamma\big) \rho
- \sum_{1 \leq i, j \leq 2} \big( {a_{ij}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{ij}/a_{33}} \big) \varepsilon\, \partial_{i} \partial_{j} \rho
\\
\nonumber & & \quad \quad \quad - 2 \sum_{1 \leq k \leq 2} \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}} \big) \varepsilon \partial_{k}
\partial_{z} \rho.
\end{eqnarray}
As before, we need to estimate $ \|R^2 \|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}$. For the first term in the definition of $R^2$,
we can use \eqref{compact} and \eqref{compact2} to write
\begin{align*}
& \| \big( {a_{0}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{0}/a_{33}}\big)\big( \partial_{t}+ \gamma) \rho \|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}
\\
& \leq \big\| \big( \partial_{t} + \gamma \big) \big( {a_{0}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{0}/a_{33}}\big)\rho \big\|_{\mathcal{H}^{ -{3 \over 2},\gamma, \varepsilon}} + \big\| \partial_{t}\big({a_{0}\over a_{33}}\big) \rho \big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} +\big\|
T^{\varepsilon, \gamma}_{\partial_{t}(a_{0}/a_{33})}\rho \big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} \\
& \leq \big\| \big( {a_{0}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{0}/a_{33}}\big)\rho \big\|_{\mathcal{H}^{ {1 \over 2}, \gamma, \varepsilon}}
+ {C \over \gamma^{3 \over 4}} \| \rho \|_{L^2} \\
&\leq {C \over \gamma^{1 \over 4}} \| \rho \|_{L^2} \end{align*}
where we have used the $L^2$ continuity of paraproducts (i.e. (1) with $\mu=0$ in Theorem \ref{symbolic2})
and the first estimate in (5) of Theorem \ref{symbolic2}
to get the two last lines. For the second type of terms in $R^2$, we can proceed in the same way
(note that $\varepsilon \partial_{ij}$ is an operator of order $2$ and hence has the same order as $\partial_{t}$ in our calculus)
to obtain that
$$ \big\| \big( {a_{ij}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{ij}/a_{33}} \big) \varepsilon\, \partial_{i} \partial_{j} \rho\big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}
\leq {C \over \gamma^{1 \over 4}} \| \rho \|_{L^2}.$$ For the last type of terms in $R^2$, we first proceed in the same way to write
\begin{align*}
& \| \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big)
\varepsilon \partial_{k}\partial_{z} \rho\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} \\
& \leq \big\| \sqrt{\varepsilon}\partial_{k} \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) \sqrt{\varepsilon}\partial_{z}\rho \big\|_{\mathcal{H}^{ -{3 \over 2},\gamma, \varepsilon}} + \big\|\sqrt{\varepsilon} \partial_{k}\big({a_{k3}\over a_{33}}\big) \sqrt{\varepsilon}\partial_{z} \rho \big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} +\big\| T^{\varepsilon, \gamma}_{\sqrt{\varepsilon}\partial_{k}(a_{k3}/a_{33})} \sqrt{\varepsilon}\partial_{z}\rho \big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} \\
& \leq {1 \over \gamma^{1 \over 4}} \big\| \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) \sqrt{\varepsilon}\partial_{z}\rho \big\|_{L^2}
+ {C \over \gamma^{1 \over 2}} \| \sqrt{\varepsilon} \partial_{z}\rho \|_{\mathcal{H}^{- {1 \over 2}, \gamma, \varepsilon}}. \end{align*} Indeed, to get the last line, we have used the continuity of the paraproduct and that the
term $ \big\|\sqrt{\varepsilon} \partial_{k}\big({a_{k3}\over a_{33}}\big) \sqrt{\varepsilon}\partial_{z} \rho \big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}} $
can be estimated in a similar way as it was done to get \eqref{bij1}.
To estimate the first term in the right hand side of the above inequality, we can again use a duality argument.
For any test function $f$, we have
$$\Big( \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) \sqrt{\varepsilon}\partial_{z}\rho, f \Big)_{L^2}
= \Big( \sqrt{\varepsilon}\partial_{z}\rho, \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) f \Big) -
\Big( \sqrt{\varepsilon}\partial_{z}\rho, \big( (T^{\varepsilon, \gamma}_{a_{k3}/a_{33}})^* - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) f \Big)$$
where we have used that $a_{k3}/a_{33}$ is a scalar real valued function. Consequently, by using again Theorem \ref{symbolic2},
we obtain that
$$ \Big| \Big( \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) \sqrt{\varepsilon}\partial_{z}\rho, f \Big)_{L^2}\Big|
\leq C \| \sqrt{\varepsilon}\partial_{z}\rho\|_{\mathcal{H}^{- {1}, \gamma, \varepsilon}} \, \|f\|_{L^2}$$
and hence that
$$ \big\| \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}}\big) \sqrt{\varepsilon}\partial_{z}\rho \big\|_{L^2}
\leq C \| \sqrt{\varepsilon}\partial_{z}\rho\|_{\mathcal{H}^{- {1}, \gamma, \varepsilon}}.$$
Consequently, we finally obtain that
$$\big\| \big( {a_{k3}\over a_{33}} - T^{\varepsilon, \gamma}_{a_{k3}/a_{33}} \big) \varepsilon\, \partial_{k} \partial_{z} \rho\big\|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}
\leq {C \over \gamma^{1 \over 2}} \| \sqrt{\varepsilon} \partial_{z}\rho \|_{\mathcal{H}^{- {1 \over 2}, \gamma, \varepsilon}}$$ and hence by collecting the previous estimates and \eqref{R1para}, we get that \begin{equation} \label{Rpara}
\|R \|_{\mathcal{H}^{ -{3 \over 2}, \gamma, \varepsilon}}
\leq {C\over \gamma^{1\over 4}} \big( \| \rho \|_{L^2} + \|\sqrt{\varepsilon} \partial_{z} \rho\|_{\mathcal{H}^{-{1 \over 2}, \gamma, \varepsilon}}\big). \end{equation} Next, we can rewrite \eqref{etalagrange3} as a first order system by setting \begin{equation} \label{Udefpara} U=\big ( \rho, T^{\varepsilon, \gamma}_{ 1/\langle \zeta \rangle} \sqrt{\varepsilon}\partial_{z} \rho \big)^t.\end{equation}
Note that $T^{\varepsilon, \gamma}_{ 1/\langle \zeta \rangle}$ is just the Fourier multiplier by $1/\langle \zeta^\varepsilon \rangle$. This yields the system
\begin{equation}
\label{systpara}
\sqrt{\varepsilon} \partial_{z} U = T^{\varepsilon, \gamma}_{M} U + \mathcal{F}, \quad z <0
\end{equation}
where the symbol $M\in \Gamma_{1}^1$ is given by
$$ M(X, \zeta, z)= \langle \zeta \rangle \mathcal{A}(a(X,z), \tilde{\zeta}), \quad \tilde{\zeta}=(\tilde \gamma, \tilde \tau, \tilde \xi)=
(\gamma/\langle \zeta \rangle^2, \tau/\langle \zeta \rangle^2, \xi/\langle \zeta \rangle)$$
where $\mathcal{A}$ is defined in \eqref{matrixOmega} and the source term $\mathcal{F}$ is defined by
$$ \mathcal{F}= (0, T^{\varepsilon, \gamma}_{{1\over \langle \zeta \rangle}} R + \mathcal{C}\big)^t$$
where $\mathcal{C}$ is a lower order commutator. By using \eqref{Rpara} and (2) in Theorem \ref{symbolic2} to estimate $\mathcal C$, we get that
\begin{equation}
\label{Fpara}
\|\mathcal F\|_{\mathcal H^{ -{1 \over 2}, \gamma, \varepsilon}} \leq {C \over \gamma^{1 \over 4}} \|U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}.
\end{equation}
The boundary condition for \eqref{systpara} becomes
\begin{equation}
\label{bordpara}
\Gamma U_{/z=0}=g
\end{equation}
where $\Gamma$ is still defined by \eqref{Gammadef}. We can then perform an energy estimate for the system
\eqref{systpara}, \eqref{bordpara} by using the symmetrizer $\mathcal{S}$ constructed in Lemma \ref{lemsym}.
Indeed let us define a symbol $S(X, \zeta, z) \in \Gamma_{1}^0$ by
$$ S(X, \zeta, z) = \mathcal S ( a(X,z), \tilde \zeta, z).$$
From Lemma \ref{lemsym}, we get that at the level of symbols, we have
$$ \mbox{Re } SM \geq \kappa \langle \zeta \rangle, \quad \mbox{Re }S_{/z=0}+ \Gamma^* \Gamma \geq \kappa.$$
Consequently, we can perform an energy estimate in a standard way by taking the scalar product of \eqref{systpara} with
$T^{\varepsilon, \gamma}_{S} U$ and by using Theorem \ref{symbolic2} (in particular the estimates (2), (3)
and the Garding inequality (4)). This yields
$$ \|U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2 + \sqrt{\varepsilon}\|U(0)\|_{L^2(\mathbb{R}^3)}^2
\leq {C \over \gamma^{1 \over 4}}\big( \|U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2 + \sqrt{\varepsilon}\|U(0)\|_{L^2(\mathbb{R}^3)}^2\big)
+ \sqrt{\varepsilon} C\|g\|_{L^2(\mathbb{R}^3)}^2 + \big|\big(\mathcal{F}, T^{\varepsilon, \gamma}_{S} U \big)_{L^2} \big|.$$
Moreover, by using \eqref{Fpara}, we get
$$ \big|\big(\mathcal{F}, T^{\varepsilon, \gamma}_{S} U \big)_{L^2} | \leq \| \mathcal{F}\|_{\mathcal H^{ -{1\over 2}, \gamma, \varepsilon} } \,
\| U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}} \leq {C\over \gamma^{1 \over 4}}
\| U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2.$$
Consequently for $\gamma$ sufficiently large (with respect to $C$), we obtain the estimate
$$ \|U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2 \leq \sqrt{\varepsilon} C\|g\|_{L^2(\mathbb{R}^3)}^2.$$
To conclude, we note that
$$ \|U\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2= \|\rho\|_{\mathcal H^{ {1 \over 2}, \gamma, \varepsilon}}^2
+ \|\sqrt{\varepsilon}\partial_{z}\rho\|_{\mathcal H^{- {1 \over 2}, \gamma, \varepsilon}}^2.$$
This ends the proof of Proposition \ref{propparaeta}.
\section{Proof of Theorem \ref{main}: uniform existence}
\label{sectionexist}
In this section, we shall prove how we can combine all our energy estimates to get our uniform
existence result. Let us fix $m \geq 6$.
We consider initial data such that
$$\mathcal{I}_{m}(0) = \| v_{0}\|_{E^m} + |h_{0}|_{m} + \sqrt{\varepsilon}|h_{0}|_{m+{1 \over 2}} + \|v_{0}\|_{E^{2,\infty}}+
\varepsilon \| \partial_{zz} v(0)\|_{L^\infty}<+\infty.$$
For such data, we are not aware of a local existence result.
We could prove it by using our energy estimates and a classical iteration scheme. Nevertheless, we can also avoid this by
using the available classical existence results in Sobolev spaces (for example \cite{Beale81}, \cite{Tani96}). Indeed, we can first smooth
the initial velocity and consider a sequence $v_{0}^\delta \in H^{r-1}$, $3<r<7/2$ ($\delta$ being a regularization parameter)
to meet the assumption of \cite{Beale81}, \cite{Tani96}.
This allows us to get a positive time $T^{\varepsilon, \delta}$ for which a solution $v$
associated with this initial data exists
in the space $\mathcal{K}^r ([0, T^{\varepsilon, \delta}]\times \mathcal{S})= H^{r\over 2} ([0, T^{\varepsilon, \delta}], L^2) \cap L^2 ([0, T^{\varepsilon, \delta}], H^r)$.
Next,
we can get by standard parabolic energy estimates
that additional regularity propagates from the initial data, that is to say that on $[0, T^{\varepsilon, \delta}], $ we have
\begin{align}
\label{defNm}
\mathcal{N}_{m}(T) & = \sup_{[0, T]}\big( \|v(t) \|_{m}^2 + \|\partial_{z}v\|_{m-2}^2 + |h(t)|_{m}^2 + \varepsilon |h(t)|_{m+{1 \over 2}}^2
+ \varepsilon \|\partial_{zz}v(t)\|_{L^\infty}^2 + \|v(t)\|_{E^{2, \infty}}^2 \big) \\
\nonumber & \quad \quad \quad + \|\partial_{z}v\|_{L^4([0,T], H^{m-1}_{co})}^2
+ \varepsilon \int_{0}^T \|\nabla v\|_{m}^2 + \varepsilon \int_{0}^T \|\nabla \partial_{z}v\|_{m-2}^2<+ \infty.
\end{align}
Moreover, we can also get from the initial condition that \eqref{apriori} is valid on $[0, T^{\varepsilon, \delta}]$ (possibly by taking $T^{\varepsilon, \delta}$ smaller).
Note that since we do not propagate any additional normal regularity, we do not need additional compatibility conditions.
We shall not detail this step since it can be done by classical energy estimates (much simpler than the ones we have proven, since at this stage we are not interested
in estimates independent of $\varepsilon$).
An important remark is that if $\mathcal{N}_{m}(T_{0}) <+\infty$, then the solution can be continued on
$[0, T_{1}]$, $T_{1}>T_{0}$ with $\mathcal{N}_{m}(T_{1})<+\infty$. Indeed if $\mathcal{N}_{m}(T_{0})<+\infty$,
we can use the parabolic regularity for the Stokes problem on $[T_{0}/2, T_{0}]$ to get that the solution actually
enjoys much more standard Sobolev regularity on $[T_{0}/2, T_{0}]$ (note that we assume that
the surface $h$ is $H^6$) and in particular, we find that $u(T_{0})\in H^{r-1}$, $3<r<7/2$.
This allows us to use again the result of Beale \cite{Beale81} to continue the solution, and the previous argument about the propagation of additional
regularity then gives our claim.
Next we want to use this remark to prove that the solution can be continued on an interval of time independent of $\varepsilon$ and $\delta$.
Towards this, we first note that it is equivalent to control $\mathcal{N}_{m}(T)$ and $\mathcal{E}_{m}(T)$
where
$$
\mathcal{E}_{m}(T)= \sup_{[0, T]} \mathcal{Q}_{m}(t) + \mathcal{D}_{m}(T) + \| \omega \|_{L^4([0, T], H^{m-1}_{co})}^2 $$
with $\mathcal{Q}_{m}$ defined by \eqref{defQm} and
$$ \mathcal{D}_{m}(T) = \varepsilon \int_{0}^T \big( \varepsilon \|\nabla V^m\|^2 + \varepsilon \| \nabla S_{{\bf n}}\|_{m-2}^2 \big).$$
Indeed, the fact that $\mathcal{N}_{m}(T) \leq \Lambda\big( {1 \over c_{0}}, \mathcal{E}_{m}(T)\big)$ is a consequence of \eqref{equiv1}, \eqref{vnorm},
\eqref{dzvm-1O}, Lemma \ref{lemdzS} and Corollary \ref{corLinfty} while the reverse inequality is just a consequence of product estimates.
For two parameters $R$ and $c_{0}$ to be chosen such that $1/c_{0} \ll R$, we can thus define a time $T^{\varepsilon, \delta}_{*}$ by
\begin{multline*}
T^{\varepsilon, \delta}_{*}= \sup\big\{T\in [0, 1], \quad \mathcal{E}_{m}(t) \leq R, \quad
|h(t)|_{2, \infty} \leq 1/c_{0}, \quad \partial_{z}\varphi(t) \geq c_{0}, \\
\quad g- \partial_{z}^\varphi q^E(t) \geq {c_{0} \over 2}, \quad \forall t \in [0, T]
\big\}.
\end{multline*}
At first, let us notice that thanks to Corollary \ref{corLinfty}, we have that for $T \leq T^{\varepsilon, \delta}_{*}$,
$$ \Lambda_{\infty, 6}(T) \leq \Lambda( R)$$
where $\Lambda_{\infty, m}$ is defined by \eqref{deflambdainfty}.
Thanks to Corollary \ref{corLinfty0t}, we also have
$$ \int_{0}^T \sqrt{\varepsilon} \| \nabla^2 v\|_{1, \infty} \leq \Lambda(R).$$
This allows us to use Proposition \ref{conormv}, Proposition \ref{propdzvinfty}, Proposition \ref{dzzvLinfty} and Proposition \ref{propomega} to get that
$$
\mathcal{E}_{m}(T)
\leq \Lambda\big( {1 \over c_{0}}, \mathcal{I}_{m}(0)\big) + \Lambda(R)\Big( T^{1 \over 2} + \Lambda(R)\int_{0}^T | (\partial_{z}\partial_{t} q^E)^b |_{L^\infty}
+ \Lambda(R)\int_{0}^T \| \omega \|_{m-1}^2\Big).
$$
Consequently, from the Cauchy-Schwarz inequality, we find that
\begin{equation}
\label{Em1}
\mathcal{E}_{m}(T)
\leq \Lambda\big( {1 \over c_{0}}, \mathcal{I}_{m}(0)\big) + \Lambda(R)\Big( T^{1 \over 2} + \Lambda(R)\int_{0}^T | (\partial_{z}\partial_{t} q^E)^b |_{L^\infty}\Big). \end{equation}
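Indeed, the term involving $\omega$ above has been handled by the Cauchy-Schwarz inequality in time, using that $\|\omega\|_{L^4([0,T], H^{m-1}_{co})}^2$ is part of $\mathcal{E}_{m}(T) \leq R$:
$$ \int_{0}^T \| \omega \|_{m-1}^2 \leq T^{1\over 2} \Big( \int_{0}^T \| \omega\|_{m-1}^4 \Big)^{1 \over 2} = T^{1 \over 2}\, \|\omega\|_{L^4([0,T], H^{m-1}_{co})}^2 \leq T^{1 \over 2}\, R,$$
which is then absorbed in the $\Lambda(R)\, T^{1 \over 2}$ term.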
To estimate the last term in the right hand side, we can use Proposition \ref{proptaylor} to find
$$ \int_{0}^T | (\partial_{z}\partial_{t} q^E)^b |_{L^\infty} \leq \Lambda(R)\big( T + \int_{0}^T \big( \varepsilon \|\partial_{zz}v \|_{L^\infty} +
\varepsilon \| \partial_{zz} v\|_{3} \big)\big)$$
and hence thanks to \eqref{dzzvk}, we find
\begin{equation}
\label{taylorfin} \int_{0}^T | (\partial_{z}\partial_{t} q^E)^b |_{L^\infty} \leq \Lambda(R)\big( T + \int_{0}^T \varepsilon \| \partial_{z} S_{{\bf n}}\|_{3}\big)
\leq \Lambda(R)\sqrt{T}
\end{equation}
where the last estimate comes again from the Cauchy-Schwarz inequality. Consequently, we obtain from \eqref{Em1} that
\begin{equation}
\label{Em2}
\mathcal{E}_{m}(T)
\leq \Lambda\big( {1 \over c_{0}}, \mathcal{I}_{m}(0)\big) + \Lambda(R) T^{1 \over 2}.
\end{equation}
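For completeness, the Cauchy-Schwarz step used to obtain \eqref{taylorfin} simply reads
$$ \int_{0}^T \varepsilon \| \partial_{z} S_{{\bf n}}\|_{3} \leq T^{1 \over 2} \Big( \int_{0}^T \varepsilon^2 \| \nabla S_{{\bf n}}\|_{m-2}^2 \Big)^{1 \over 2} \leq T^{1 \over 2}\, \mathcal{D}_{m}(T)^{1 \over 2} \leq T^{1 \over 2}\, \Lambda(R),$$
where we have used that $\| \partial_{z} S_{{\bf n}}\|_{3} \leq \| \nabla S_{{\bf n}}\|_{m-2}$ since $m \geq 6$, and that $\mathcal{D}_{m}(T)$ is part of $\mathcal{E}_{m}(T) \leq R$.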
Moreover, thanks to the equation \eqref{bordv2}, we get
that
\begin{equation}
\label{choixc01} |h(t)|_{2, \infty} \leq |h(0)|_{2, \infty}+ \Lambda(R) T, \quad \forall t\in [0, T]\end{equation}
and also
\begin{equation}
\label{choixc02} \partial_{z} \varphi(t) \geq 1 - \int_{0}^t \| \partial_{t} \nabla \eta \|_{L^\infty} \geq 1 - \Lambda(R) T, \quad
\forall t \in [0,T]\end{equation}
since we have chosen $A$ so that \eqref{Adeb} is verified. Finally, since
$$ g- (\partial_{z}^\varphi q^E)^b(t) \geq \big(g- (\partial_{z}^\varphi q^E)^b\big)_{/t=0} -\Lambda(R) \int_{0}^t\big( 1 + | (\partial_{t} \partial_{z}q^E)^b |_{L^\infty}
\big), $$
we get from \eqref{taylorfin} that
\begin{equation}
\label{taylorcon} g- (\partial_{z}^\varphi q^E)^b \geq \big(g- (\partial_{z}^\varphi q^E)^b\big)_{/t=0} - \sqrt{t}\, \Lambda(R), \quad \forall
t \in [0,T].\end{equation}
In view of
\eqref{taylorcon}, \eqref{choixc01}, \eqref{choixc02} and \eqref{Em2}, we can take
$R= 2 \Lambda( |h(0)|_{2, \infty}, \mathcal{I}_{m}(0))$ to get that there exists $T_{*}$ which depends only on
$ \mathcal{I}_{m}(0)$ (and hence does not depend on $\varepsilon$ and $\delta$) so that for $T \leq \mbox{Min }(T_{*}, T^{\varepsilon, \delta}_{*})$,
we have
$$ \mathcal{E}_{m}(T) \leq R/2, \quad
|h(t)|_{2, \infty} \leq {1\over 2 c_{0}}, \quad \partial_{z}\varphi(t) \geq 2c_{0}, \quad g- \partial_{z}^\varphi q^E(t) \geq {3 \over 4} c_{0}, \quad \forall t \in [0,T].$$
This yields $T^{\varepsilon, \delta }_{*} \geq T_{*}$. Indeed otherwise, our criterion about the continuation of the solution
would contradict the definition of $T^{\varepsilon, \delta}_{*}$.
We have thus proven that for the smoothed initial data $v_{0}^\delta$, there exists an interval of time $[0, T_{*}] $ independent of $\varepsilon$ and
$\delta$
for which the solution exists and such that $\mathcal{N}_{m}(T_{*}) <+\infty$. To get the existence of a solution
without the additional regularity for the initial data, it suffices to pass to the limit. The fact that
$ \mathcal{N}_{m}(T_{*})$ is uniformly bounded in $\delta$ allows us to pass to the limit easily by using strong compactness arguments.
We shall not give more details since the arguments are very close to the ones used for the inviscid
limit, which we shall detail below.
\section{Uniqueness}
\label{sectionunique}
\subsection{Uniqueness for Navier-Stokes}
In this section, we shall prove the uniqueness part of Theorem \ref{main}.
We consider two solutions $(v^i, \varphi^i, q^i)$ with the same initial data of \eqref{NSv}, \eqref{bordv1}, \eqref{bordv2}
which satisfy on $[0, T^\varepsilon]$, the estimate
\begin{equation}
\label{apriorifinal}
\mathcal{N}_{m}^i(T^\varepsilon) \leq R, \quad i=1, \, 2
\end{equation}
where $\mathcal{N}_{m}$ is defined above in \eqref{defNm} and the superscript $i$ refers to one of the two solutions.
We also assume that for each solution \eqref{apriori} is verified. We set
$v= v^1- v^2$, $h=h^1- h^2$, $q= q^1- q^2$.
We shall first provide a simple uniqueness proof for the Navier-Stokes equation.
We will explain how the proof has to be modified in order to get uniqueness
for the Euler equation in a second step.
At first, by using \eqref{transportW} and \eqref{deltaphi}, we get that $v$ solves the equation
\begin{equation}
\label{eqdiff}
\big( \partial_{t} + v_{y}^1\cdot \nabla_{y}+ V_{z}^1 \partial_z \big)v + \nabla^{\varphi} q - \varepsilon \Delta^{\varphi} v = \mathcal{F}
\end{equation}
with $\mathcal{F}$ given by
\begin{align}
\label{sourcediff}
\mathcal{F} & = ( v_{y}^1 - v_{y}^2) \cdot \nabla_{y}v^{2} + ( V_{z}^1 - V_{z}^2 ) \partial_{z}v^2 -
\varepsilon \big( {1 \over \partial_{z} \varphi^1}- {1 \over \partial_{z} \varphi^2}\big) \big( (P^1)^* \nabla q^2 \big) \\
\nonumber & \quad +\varepsilon {1 \over \partial_{z} \varphi^{2}} \big( (P^1- P^2)^* \nabla q^2\big)
+\varepsilon \big( {1 \over \partial_{z} \varphi^1}- {1 \over \partial_{z} \varphi^2}\big) \nabla \cdot \big( E^1 \nabla v^2 \big)
+\varepsilon {1 \over \partial_{z} \varphi^{2}} \nabla \cdot \big( (E^1- E^2) \nabla v^2\big).
\end{align}
In a similar way, thanks to \eqref{graddiv}, we can write the divergence free condition under the form
\begin{equation}
\label{divdiff}
\nabla^\varphi \cdot v = - \varepsilon \big( {1 \over \partial_{z} \varphi^1}- {1 \over \partial_{z} \varphi^2}\big) \nabla \cdot \big( P^1 v^2 \big)
-\varepsilon {1 \over \partial_{z} \varphi^{2}} \nabla \cdot \big( (P^1- P^2) v^2\big).
\end{equation}
On the boundary, we obtain from \eqref{bordv1} and \eqref{bordv2} that
\begin{equation}
\label{diffbord1}
\partial_{t} h + (v^b)^1_{y} \cdot \nabla h - ((v^1)^b_{3}- (v^2)^b_{3})=
- ((v^1)^b - (v^2)^b)_{y}\cdot \nabla h^2
\end{equation}
and
\begin{align}
\label{diffbord2}
& q {\bf n}^1 - 2 \varepsilon S^\varphi v {\bf n}^1=
g h {\bf n}^1 + 2 \varepsilon (S^{\varphi^1}- S^{\varphi^2})v^1 {\bf n}^1
+ 2 \varepsilon S^{\varphi^{2}} v^2 ({\bf n}^1 - {\bf n}^2).
\end{align}
At first, as in the proof of Proposition \ref{heps} (see \eqref{h1}), we can show from \eqref{diffbord1} that
\begin{equation}
\label{diffest1} \varepsilon\, |h(t )|_{{1 \over 2}}^2
\leq
\varepsilon \int_{0}^t \| \nabla v \|_{L^2(\mathcal{S})}^2 + \Lambda(R) \int_{0}^t
\big( \| v \|_{L^2(\mathcal{S})}^2 + \varepsilon\,|h|_{{1 \over 2}}^2 \big)\, d\tau.
\end{equation}
Next, we can perform again a standard energy estimate for \eqref{eqdiff}. By using Proposition \ref{propeta},
Proposition \ref{proppE} and Proposition \ref{propPNS}, we obtain from a lengthy but easy computation that, for
$v= v^1-v^2$ and $h=h^1-h^2$,
$$ \|v(t)\|_{L^2(\mathcal{S})}^2 +
\|h(t)\|_{L^2(\mathcal{S})}^2 + \varepsilon \int_{0}^t \|\nabla v \|_{L^2(\mathcal{S})}^2
\leq \Lambda(R)\int_{0}^t \big( |h|_{H^{1\over 2}(\mathbb{R}^2)}^2 + \|v\|_{L^2(\mathcal{S})}^2+ \| \nabla (q^1- q^2) \|_{L^2(\mathcal{S})}
\, \|v\|_{L^2(\mathcal{S})}\big).$$
We shall not detail this estimate since most of the arguments have already been used; the only term for
which one has to be careful is the boundary term that involves one of the last two terms of the right-hand side of
\eqref{diffbord2}.
For example, we have to estimate the boundary term
$$ \int_{z=0} 2 \varepsilon (S^{\varphi^1}- S^{\varphi^2}) v^1 n^1 \cdot v \, dy$$
and by using the definition of $S^{\varphi^i} v^1$, $i=1,\, 2$, we obtain integrals like
$$ \varepsilon \int_{z=0} \partial_{z}v_{j}^1 \, \partial_{i} h \, v_{j}\, dy$$
where $1\leq i \leq 2$.
We can estimate it by
$$ \Big| \varepsilon \int_{z=0} \partial_{z}v_{j}^1 \, \partial_{i} h \, v_{j}\, dy\Big|
\leq \varepsilon\, |\partial_{z}v_{j}^1 v_{j}|_{H^{1\over 2}(\mathbb{R}^2)} \, | \partial_{i}
h|_{H^{-{1 \over 2} }(\mathbb{R}^2)}
\leq \Lambda(R)\, \varepsilon \,\|v\|_{H^1(\mathcal{S})}\, |h|_{H^{1 \over 2 }(\mathbb{R}^2)} $$
where the last estimate follows from \eqref{cont2D} and the trace Theorem. We can then use
the Young inequality.
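For instance, by the Young inequality,
$$ \Lambda(R)\, \varepsilon\, \|v\|_{H^1(\mathcal{S})}\, |h|_{H^{1 \over 2}(\mathbb{R}^2)} \leq {\varepsilon \over 4}\, \|v\|_{H^1(\mathcal{S})}^2 + \Lambda(R)\, \varepsilon\, |h|_{H^{1 \over 2}(\mathbb{R}^2)}^2,$$
where, after integration in time, the first term is absorbed by the terms $\varepsilon \int_{0}^t \|\nabla v\|_{L^2(\mathcal{S})}^2$ and $\Lambda(R) \int_{0}^t \|v\|_{L^2(\mathcal{S})}^2$, while the second one is of the same type as those appearing in \eqref{diffest1}.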
From the equation for the pressure, we can also obtain, thanks to the estimates of Section \ref{sectionpressure}, that
$$ \| \nabla (q^1- q^2) \|_{L^2(\mathcal{S})} \leq \Lambda(R)\big( |h|_{H^{1 \over 2}} + \|v\|_{H^1(\mathcal{S})}\big).$$
Note that there is no $\varepsilon$ in front of $ |h|_{H^{1 \over 2}}$ in this estimate because of the Euler part of the pressure.
This yields
$$ \|v(t)\|_{L^2(\mathcal{S})}^2 +
|h(t)|_{L^2(\mathbb{R}^2)}^2 + \varepsilon |h(t)|_{1\over 2}^2 + \varepsilon \int_{0}^t \|\nabla v \|_{L^2(\mathcal{S})}^2
\leq \Lambda(R)\int_{0}^t \big( |h|_{H^{1\over 2}(\mathbb{R}^2)}^2 + \|v\|_{L^2(\mathcal{S})}^2\big).$$
Consequently, we get the uniqueness for Navier-Stokes by combining the last estimate and \eqref{diffbord1}.
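For instance, for fixed $\varepsilon>0$, writing $|h|_{H^{1\over 2}(\mathbb{R}^2)}^2 \leq \varepsilon^{-1}\, \varepsilon |h|_{1 \over 2}^2$, the quantity $y(t)= \|v(t)\|_{L^2(\mathcal{S})}^2 + |h(t)|_{L^2(\mathbb{R}^2)}^2 + \varepsilon |h(t)|_{1\over 2}^2$ satisfies $y(t) \leq {\Lambda(R) \over \varepsilon} \int_{0}^t y(\tau)\, d\tau$ with $y(0)=0$, and the Gronwall inequality yields $y \equiv 0$ on $[0, T^\varepsilon]$, that is $v^1= v^2$ and $h^1= h^2$.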
Note that the above estimate is not uniform in $\varepsilon$ and thus does not allow us to recover the uniqueness for Euler.
\subsection{Uniqueness for Euler}
\begin{prop}
\label{propuniqueeuler}
Consider $(v^i, h^ i)$, $i=1, \, 2$ two solutions of \eqref{eulerint}, \eqref{eulerb} defined on $[0, T]$ with the same initial data and
the regularity stated in Theorem \ref{theoinviscid}. Then $v^1= v^2$ and $h^1= h^2$.
\end{prop}
\begin{proof}
We assume that
\begin{equation}
\label{borneunique1}
\sup_{i, \, [0,T]}\big( \|v^i\|_{m} +\| \partial_{z} v^i\|_{m-2} + |h^i|_{m} +\| \partial_{z}v^i \|_{1, \infty}\big)\leq R.
\end{equation}
The proof relies almost only on arguments that have been used previously; consequently, we shall only give the main steps.
Let us also set $v= v^1- v^2$, $h= h^1 - h^2$. By using at first the same crude estimate as we have used above, we first obtain that
\begin{equation}
\label{uniqE1}
\|v(t)\|_{L^2(\mathcal{S})}^2 +
\|h(t)\|_{L^2(\mathcal{S})}^2
\leq \Lambda(R)\int_{0}^t \big( |h|_{1/2}^2 + \|v\|_{1}^2 + \| \partial_{z}v \|_{L^2}^2\big).
\end{equation}
Note that we have used the estimates of Proposition \ref{proppE} to estimate the pressure.
Next, se shall estimate $\|v\|_{1}$ and $|h|_{1}$. Let us set
$$ \mathcal{E}(v,q, \varphi)= \big(\partial^{\varphi}_{t}+ v\cdot \nabla^\varphi \big) v + \nabla^\varphi q.$$
We first apply $Z_{j}$ for $j=1, \, 2,\, 3$ to obtain:
$$ D\mathcal{E}(v^i, q^i, \varphi^i) \cdot (Z_{j}v^i, Z_{j}q^i, Z_{j}\varphi^i)=0$$
for $i=1, \, 2$.
We thus obtain that
$$ D\mathcal{E}(v^2, q^2, \varphi^2) \cdot (Z_{j}v, Z_{j}q, Z_{j}\varphi) + \big(D\mathcal{E}(v^1, q^1, \varphi^1)- D\mathcal{E}(v^2, q^2, \varphi^2) \big)
\cdot( Z_j v^1, Z_{j}q^1, Z_{j}\varphi^1)=0$$
where we also set $\varphi = \varphi^1 - \varphi^2$, $q= q^1 - q^2$. Consequently, by using Lemma \ref{lemal}, we can introduce the good unknowns
$$ V_{j}= Z_{j}v - \partial_{z}^{\varphi^2}v^{2} Z_{j}\varphi, \quad Q_{j}= Z_{j} q - \partial_{z}^{\varphi^2} q^2 Z_{j}\varphi$$ to obtain that
$$\big( \partial^{\varphi^{2}}_{t}+ v^2 \cdot \nabla^{\varphi^{2}}\big) V_{j}+ \nabla^{\varphi^{2}} Q_{j}= \mathcal{R}$$
where
$$ \mathcal{R}= -\big(V_{j}\cdot \nabla^{\varphi^{2}}v^2 + Z_{j}\varphi \, ( \partial_{z}^{\varphi^2} v^2 \cdot \nabla^{\varphi^2}) v^2 \big)
- \big(D\mathcal{E}(v^1, q^1, \varphi^1)- D\mathcal{E}(v^2, q^2, \varphi^2) \big)
\cdot( Z_j v^1, Z_{j}q^1, Z_{j}\varphi^1).$$
Consequently, by using that $q^2$ verifies the Taylor sign condition on $[0,T]$, we can proceed as in the proof of Proposition
\ref{conormv} to get the estimate:
\begin{equation}
\label{uniqE2} \|V_{j}(t)\|_{L^2(\mathcal{S})}^2 + |Z_{j}h|_{L^2(\mathbb{R}^2)}^2
\leq \Lambda(R)\int_{0}^t \big( |h|_{1}^2 + \|v\|_{1}^2 + \| \partial_{z}v \|_{L^2}^2\big).
\end{equation}
In view of the estimates \eqref{uniqE1}, \eqref{uniqE2}, we still need to estimate $\|\partial_{z}v\|_{L^2}$ in order to conclude from the Gronwall lemma.
Let us set $\omega^i= \nabla^{\varphi^i}\times v^i$, $i=1, \, 2$ and $\omega= \omega^1- \omega^2.$
By using an estimate like \eqref{dzvm-1O}, we first obtain that
\begin{equation}
\label{uniqE3} \| \partial_{z}v \|_{L^2} \leq \Lambda(R) \big( \|\omega \|_{L^2(\mathcal{S})}+ |h|_{1} + \|v\|_{1} \big)
\end{equation}
and hence we see that it only remains to estimate $ \|\omega \|_{L^2(\mathcal{S})}$.
Since $\omega^i$ solves the equation
$$\big( \partial_{t}^{\varphi^i}+ v^i \cdot \nabla^{\varphi^i}\big)\omega^i- \omega^i \cdot \nabla^{\varphi^i} v^i= 0,$$
a standard estimate on the difference yields
\begin{equation}
\label{uniqE4}
\|\omega(t)\|_{L^2(\mathcal{S})}^2 \leq \Lambda(R)\int_{0}^t \big( |h|_{1}^2 + \|v\|_{1}^2 +
\| \partial_{z}v \|_{L^2}^2 + \|\omega\|_{L^2(\mathcal{S})}^2\big).
\end{equation}
It suffices to combine \eqref{uniqE1}, \eqref{uniqE2}, \eqref{uniqE3}, \eqref{uniqE4} to end the proof of
Proposition \ref{propuniqueeuler}.
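More precisely, summing \eqref{uniqE1} and \eqref{uniqE2} (over $j=1,\,2,\,3$), adding \eqref{uniqE4} and using \eqref{uniqE3} to control $\|\partial_{z}v\|_{L^2}$ in the right-hand sides, and finally controlling $\|v\|_{1}$ and $|h|_{1}$ by the left-hand sides through the definition of the good unknowns $V_{j}$, one obtains an inequality of the form $y(t) \leq \Lambda(R) \int_{0}^t y(\tau)\, d\tau$ with $y(0)=0$; the Gronwall inequality then gives $y \equiv 0$ and hence $v^1= v^2$, $h^1= h^2$.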
\end{proof}
\section{Proof of Theorem \ref{theoinviscid}: inviscid limit}
\label{sectioninviscid}
From the uniform estimates of Theorem \ref{main}, we have that for $\varepsilon \in (0, 1]$
\begin{align}
\label{uniformfin}
\mathcal{N}_{m}(T) & = \sup_{[0, T]}\big( \|v^\varepsilon(t) \|_{m}^2 + \|\partial_{z}v^\varepsilon\|_{m-2}^2 + |h^\varepsilon(t)|_{m}^2 + \varepsilon |h^\varepsilon(t)|_{m+{1 \over 2}}^2
+ \varepsilon \|\partial_{zz}v^\varepsilon(t)\|_{L^\infty}^2 + \|v^\varepsilon(t)\|_{E^{2, \infty}}^2 \big) \\
\nonumber & \quad \quad \quad + \|\partial_{z}v^\varepsilon\|_{L^4([0,T], H^{m-1}_{co})}^2
+ \varepsilon \int_{0}^T \|\nabla v^\varepsilon\|_{m}^2 + \varepsilon \int_{0}^T \|\nabla \partial_{z}v^\varepsilon\|_{m-2}^2\leq R.
\end{align}
In particular, we have that
$h^\varepsilon$ is bounded in $L^\infty([0,T], H^m)$,
that $v^\varepsilon$ is bounded in $L^\infty ([0,T], H^m_{co})$ and $\nabla v^\varepsilon$ is bounded in $L^\infty ([0,T], H^{m-2}_{co})$.
Thus, for every $t$, the family $(v^\varepsilon(t))_{\varepsilon \in (0,1]}$ is relatively compact in $H^{m-1}_{co, loc}$.
Next, by using the equation, we also get that $\partial_{t} v^\varepsilon$ is bounded in $L^2([0, T], L^2)$
and that $\partial_{t}h^\varepsilon$ is bounded in $L^2([0, T], L^2)$. Moreover, from Proposition \ref{propPNS} and Proposition
\ref{proppE}, we also get that $\nabla q^\varepsilon$ is bounded in $L^2([0, T], L^2).$
By classical arguments, we deduce that there exists a sequence $\varepsilon_{n}$ which tends to zero and
$(v, h, q)$ such that $v^{\varepsilon_{n}}$ tends to $v$ strongly in $\mathcal{C}([0, T], H^{m-1}_{co, loc})$,
$h^{\varepsilon_{n}}$ tends to $h$ strongly in $\mathcal{C}([0, T], H^{m-1}_{loc})$,
$\nabla q^{\varepsilon_{n}}$ tends to $\nabla q$ weakly in $L^2([0, T] \times \mathcal{S})$,
and
\begin{equation}
\label{uniflim}
\sup_{[0, T]}\big( \|v(t) \|_{m}^2 + \|\partial_{z}v(t)\|_{m-2}^2 + |h(t)|_{m}^2 + \|v(t)\|_{E^{2, \infty}}^2 \big) + \|\partial_{z}v\|_{L^4([0,T], H^{m-1}_{co})}^2 \leq R.
\end{equation}
These convergences allow us to pass to the limit in the equations by classical arguments and hence we find that
$(v,h, \nabla q)$ solves weakly the Euler equation \eqref{eulerint} in $\mathcal{S}$. Since we can also assume
that the trace $v^{\varepsilon_{n}}_{/z=0}$ converges weakly in $L^2([0,T] \times \mathbb{R}^2)$, we also get that the boundary condition
$\partial_{t}h= v \cdot N$ is verified in the weak sense. To pass to the limit in the boundary condition \eqref{bordv2},
we note that, since the Lipschitz norm of $v^\varepsilon$ is uniformly bounded, only the condition $q= gh$ remains in the limit.
For the pressure, we note that because of Proposition \ref{propPNS},
we have that $ \|\nabla (q^\varepsilon)^{NS} \|_{L^2(\mathcal{S})} \leq \varepsilon \Lambda(R)$ and hence tends to zero strongly.
We have thus proven that $(v, h)$ is a solution of the free surface Euler equation that satisfies the estimate \eqref{uniflim}.
From the uniqueness for the Euler equation in this class, we obtain that the whole family $(v^\varepsilon, h^\varepsilon)$
converges to $(v,h)$ as above. Note that this proves only local $L^2$ convergence of $v^\varepsilon (t)$ and $h^\varepsilon (t)$.
To get strong convergence, we can use the energy identities. We shall first use the one for \eqref{NSv}:
thanks to Proposition \ref{basicL2} and the estimates \eqref{uniformfin}, we get that for $t \in [0,T]$,
$$ \|v^\varepsilon J^\varepsilon (t) \|_{L^2(\mathcal{S})}^2 + g |h^\varepsilon (t)|_{L^2(\mathbb{R}^2)}^2 - \big( \|v^\varepsilon_{0} J^\varepsilon_{0}\|_{L^2(\mathcal{S})}^2
+ |h^\varepsilon_{0}|_{L^2(\mathbb{R}^2)}^2\big) \leq \varepsilon \Lambda(R)$$
where we have set $J^\varepsilon (t)=( \partial_{z}\varphi^\varepsilon )^{1 \over 2}.$
This yields
$$ \lim_{\varepsilon \rightarrow 0} \big( \|v^\varepsilon J^\varepsilon (t) \|_{L^2(\mathcal{S})}^2 + g |h^\varepsilon (t)|_{L^2(\mathbb{R}^2)}^2 \big)
= \lim_{\varepsilon \rightarrow 0} \big( \|v^\varepsilon_{0} J^\varepsilon_{0}\|_{L^2(\mathcal{S})}^2
+ g|h^\varepsilon_{0}|_{L^2(\mathbb{R}^2)}^2\big)= \|v_{0} J_{0}\|_{L^2(\mathcal{S})}^2
+ g |h_{0}|_{L^2(\mathbb{R}^2)}^2.$$
Note that the last equality is a consequence of \eqref{hypinviscid} and the estimates \eqref{uniformfin}
since
$$\| \partial_{z}\varphi^\varepsilon_{/t=0} - \partial_{z} \varphi_{/t=0} \|_{L^2(\mathcal{S})} \leq C |h^\varepsilon_{0} - h_{0}|_{H^{1 \over 2}(\mathbb{R}^2)}
\leq C \Lambda(R) |h^\varepsilon_{0} - h_{0}|_{L^2(\mathbb{R}^2)}^{1 \over 2}
.$$
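In the last inequality, we have used the interpolation estimate
$$ |h^\varepsilon_{0} - h_{0}|_{H^{1 \over 2}(\mathbb{R}^2)} \leq |h^\varepsilon_{0} - h_{0}|_{L^2(\mathbb{R}^2)}^{1 \over 2}\, |h^\varepsilon_{0} - h_{0}|_{H^{1}(\mathbb{R}^2)}^{1 \over 2} \leq \Lambda(R)\, |h^\varepsilon_{0} - h_{0}|_{L^2(\mathbb{R}^2)}^{1 \over 2},$$
the $H^1$ norm being uniformly bounded thanks to \eqref{uniformfin} and \eqref{uniflim}.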
Next, we can use the conservation of energy for the solution $(v,h)$ of the free surface Euler equation (that we formally get by taking
$\varepsilon=0$ in Proposition \ref{basicL2}) to get
$$ \lim_{\varepsilon \rightarrow 0} \big( \|v^\varepsilon J^\varepsilon (t) \|_{L^2(\mathcal{S})}^2 + g |h^\varepsilon (t)|_{L^2(\mathbb{R}^2)}^2 \big)
= \|vJ (t)\|_{L^2(\mathcal{S})}^2
+ g|h(t) |_{L^2(\mathbb{R}^2)}^2.$$
This yields that $(v^\varepsilon J^\varepsilon (t), h^\varepsilon(t))$ (for which we already had weak convergence) converges strongly in $L^2$
to $(vJ, h)$. Since the strong convergence of $h^\varepsilon$ gives that of $J^\varepsilon$, we finally obtain
by combining with the uniform bounds \eqref{uniformfin} that
$(v^\varepsilon(t), h^\varepsilon(t))$ converge strongly towards $(v, h)$ in $L^2(\mathcal{S}) \times L^2(\mathbb{R}^2)$.
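Here we have used the classical fact that weak convergence in $L^2$ combined with convergence of the $L^2$ norms implies strong convergence: if $u_{n} \rightharpoonup u$ and $\|u_{n}\|_{L^2} \rightarrow \|u\|_{L^2}$, then
$$ \|u_{n} - u\|_{L^2}^2 = \|u_{n}\|_{L^2}^2 - 2\, \mbox{Re}\, (u_{n}, u)_{L^2} + \|u\|_{L^2}^2 \longrightarrow \|u\|_{L^2}^2 - 2\|u\|_{L^2}^2 + \|u\|_{L^2}^2 = 0.$$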
The $L^\infty$ convergence can be finally obtained thanks to the $L^2$ convergence,
the uniform bounds \eqref{uniformfin} and the inequality \eqref{emb} (with $s_{2}=0$). This ends the proof
of Theorem \ref{theoinviscid}.
\section{Proof of the technical Lemmas}
\label{sectiontech}
\subsection{Proof of Lemma \ref{lemFP0}}
\label{sectionFP}
The estimate of $ \| \rho \|_{L^\infty}$ and $\|\partial_{i} \rho\|_{L^\infty}= \|Z_{i} \rho\|_{L^\infty} $, $i=1, \, 2$ can be easily obtained
from the maximum principle as in \eqref{Sninfty1}.
Indeed, we get that $\partial_{i} \rho$ solves the equation
$$ \partial_{t} \partial_{i} \rho + w \cdot \nabla \partial_{ i } \rho = \varepsilon \partial_{zz}
\partial_{ i } \rho + \partial_{i} \mathcal{H}
- \partial_{i} w\cdot \nabla \rho$$
still with a homogeneous Dirichlet boundary condition. Consequently, by using again the
maximum principle, we find
\begin{equation}
\label{hetab} \|\nabla_y \rho \|_{L^\infty} \leq \|\rho_{0}\|_{1, \infty}
+ \int_{0}^t \Big( \| \mathcal{H} \|_{1, \infty}
+ \| \partial_{i} w\cdot \nabla \rho \|_{L^\infty}\Big).\end{equation}
To estimate the last term in the above expression, we use that $w_{3}$ vanishes on the boundary to get
\begin{equation}
\label{hetab1} \| \partial_{i} w\cdot \nabla \rho \|_{L^\infty}
\lesssim \| w \|_{{1, \infty} } \| \rho\|_{1, \infty}
+ \| \partial_{z} \partial_{i}w_{3}\|_{L^\infty} \| Z_{3} \rho \|_{L^\infty}
\lesssim \|w \|_{ E^{2, \infty}} \| \rho \|_{1, \infty}.\end{equation}
It remains to estimate $\|Z_{3} \rho \|_{L^\infty} $ which is the most difficult term. We cannot
use the same method as previously due to the bad commutator between $Z_{3}$ and
the Laplacian. We shall use a more precise description of the solution of \eqref{eqetaFP0}.
We shall first rewrite the equation \eqref{eqetaFP0} as
$$ \partial_{t} \rho + z \partial_{z}w_{3}(t,y,0) \partial_{z}
\rho + w_{y}(t,y,0) \cdot \nabla_{y} \rho - \varepsilon \partial_{zz} \rho =
\mathcal{H} - R =: G$$
where
$$ R= \big(w_{y}(t,x)- w_{y}(t,y,0)\big) \cdot \nabla_{y} \rho +\big( w_{3}(t,x) - z \partial_{z}w_{3}(t,y,0)
\big) \partial_{z} \rho.$$
The idea will be to use an exact representation of the Green's function of the operator
in the left-hand side to perform the estimate.
Let $S(t, \tau)$ be the $C^0$ evolution operator generated by the left hand side of the above equation.
This means that $f(t,y,z)= S(t, \tau) f_{0}(y,z)$ solves the equation
$$
\partial_{t} f + z \partial_{z}w_{3}(t,y,0) \partial_{z} f
+ w_{y}(t,y,0) \cdot \nabla_{y} f - \varepsilon \partial_{zz} f= 0, \quad z>0, \, t>\tau, \quad f(t, y, 0) = 0,$$ with the initial condition $f(\tau, y, z) = f_{0}(y,z)$. Then we have the following estimates:
\begin{lem}
\label{FP}
There exists $C>0$ independent of $ \varepsilon $ such that
\begin{eqnarray}
\label{FP1} & & \big\| z\partial_{z} S(t, \tau) f_{0} \big\|_{L^\infty}
\leq C \big( \|f_{0}\|_{L^\infty} + \| z \partial_{z} f_{0} \|_{L^\infty} \big), \quad \forall t \geq \tau \geq 0.
\end{eqnarray}
\end{lem}
We shall postpone the proof of the Lemma until the end of the section.
By using Duhamel formula, we deduce that
\begin{equation}
\label{Duh}
\rho(t) = S(t, 0) \rho_{0} + \int_{0}^t S(t, \tau) G(\tau) \, d\tau.
\end{equation}
Consequently, by using \eqref{FP1} in Lemma \ref{FP}, we obtain
$$ \| Z_{3} \rho \|_{L^\infty} \lesssim
\Big( \|\rho_{0}\|_{L^\infty}
+ \| z\partial_{z} \rho_{0} \|_{L^\infty}
+ \int_{0}^t \big( \|G \|_{L^\infty}
+ \| z\partial_{z} G \|_{L^\infty}\big) \Big).$$
Since $\rho$ and $G$ are compactly supported, we obtain
\begin{equation}
\label{etab1}
\| Z_{3} \rho \|_{L^\infty} \lesssim
\Big( \|\rho_{0}\|_{1, \infty}
+ \int_{0}^t \|G \|_{1, \infty} \Big).
\end{equation}
It remains to estimate the right hand side.
First, let us estimate the term involving $R$.
Since $w_{3}(t,y,0)= 0$, we have
$$ \| R \|_{L^\infty} \lesssim \|w_y\|_{L^\infty} \|\nabla_y \rho \|_{L^\infty} + \| \partial_{z} w_{3}\|_{L^\infty}
\| Z_{3} \rho \|_{L^\infty} \lesssim \|w\|_{E^{1, \infty}} \, \| \rho \|_{1, \infty}.$$
Next, in a similar way, we get that
$$ \| Z R\|_{L^\infty} \lesssim \| w \|_{2, \infty} \| \rho \|_{1, \infty}
+ \Big\| \big(w_y(t,x)- w_y(t,y,0)\big) \cdot Z \nabla_y \rho \Big\|_{L^\infty}
+ \Big\| \big( w_{3}(t,x) - z \partial_{z}w_{3}(t,y,0)
\big) Z \partial_{z} \rho\Big\|_{L^\infty}$$
By using the Taylor formula and
the fact that $\rho$ is compactly supported in $z$, this yields
$$ \| ZR \|_{L^\infty} \lesssim
\| w \|_{2, \infty} \| \rho \|_{1, \infty} +
\| \partial_{z}w_y\|_{L^\infty} \| \varphi(z) Z \nabla_y \rho \|_{L^\infty}+
\|\partial_{zz} w_{3}\|_{L^\infty} \| \varphi^2(z) Z \partial_{z} \rho \|_{L^\infty} $$
with $\varphi(z)= z/(1-z)$
and hence, we obtain
$$ \| R \|_{1, \infty} \lesssim \big( \|w\|_{E^{2, \infty}} + \| \partial_{zz} w_{3} \|_{1, \infty}\big) \big( \| \rho\|_{1, \infty}
+ \| \varphi(z) \rho \|_{2, \infty}\big).$$
The additional factor $\varphi$ in the last term is crucial to close our estimate.
Indeed, by the Sobolev embedding \eqref{sob}, we have that for $|\alpha |=2$
$$ \| \varphi Z^\alpha \eta \|_{L^\infty}
\lesssim \| Z^\alpha \eta \|_{s_{2}} + \| \partial_{z} \big ( \varphi Z^\alpha \eta\big) \|_{s_{1}} $$
with $s_{1}+ s_{2}>2$, thus, with $s_{1}= 1,$ $s_{2}= 2$,
we obtain from definition of $Z_{3}$ that
\begin{equation}
\label{trick} \| \varphi Z^\alpha \eta \|_{L^\infty} \lesssim \|\eta \|_{4},
\quad | \alpha | = 2.\end{equation}
Consequently, we get
that
\begin{equation}
\label{Rest}
\| R(t) \|_{1, \infty} \lesssim
\big( \|w\|_{E^{2, \infty}} + \| \partial_{zz} w_{3} \|_{L^\infty}\big) \big( \| \rho\|_{1, \infty}
+ \| \rho \|_{4}\big).
\end{equation}
Finally, the proof of
Lemma \ref{lemFP0} follows from the last estimate and \eqref{etab1}.
It remains to prove Lemma \ref{FP}.
\subsubsection*{Proof of Lemma \ref{FP}}
Let us set $f(t,y,z)= S(t, \tau) f_{0}(y,z)$, then $f$ solves the equation
$$
\partial_{t} f + z \partial_{z}w_{3}(t,y,0) \partial_{z} f
+ w_{y}(t,y,0) \cdot \nabla_y f - \varepsilon \partial_{zz} f= 0, \quad z>0, \quad f(t, y, 0) = 0. $$
We can first transform the problem into a problem in the whole space.
Let us define $\tilde{f}$ by
\begin{equation}
\label{tildef} \tilde{f}(t,y,z)= f(t,y,z), \, z>0, \quad \tilde{f}(t,y,z) = - f(t,y,-z), \, z<0\end{equation}
then $\tilde{f}$ solves
\begin{equation}
\label{FP2}
\partial_{t} \tilde{f} + z \partial_{z}w_{3}(t,y,0) \partial_{z} \tilde{f}
+ w_{y}(t,y,0) \cdot \nabla_y \tilde{f} - \varepsilon \partial_{zz} \tilde{f}= 0, \quad z \in \mathbb{R}
\end{equation} with the initial condition $\tilde{f}(\tau, y, z)= \tilde{f}_{0}(y,z)$.
We shall get the estimate by using an exact
representation of the solution.
To solve \eqref{FP2}, we can first define \begin{equation} \label{tildeg} g(t,y,z)= \tilde{f}(t, \Phi(t, \tau, y), z)\end{equation}
where $\Phi$ is the solution of $$ \partial_{t} \Phi = w_{y}(t,\Phi, 0), \quad \Phi(\tau, \tau, y)= y. $$ Then, $g$ solves the equation $$ \partial_{t}g + z \gamma(t,y) \partial_{z}g - \varepsilon \partial_{zz} g = 0, \quad z \in \mathbb{R}, \quad
g(\tau, y, z)= \tilde{f}_{0}(y,z)$$
where
\begin{equation}
\label{Gamma}
\gamma(t,y)= \partial_{z} w_{3}(t, \Phi(t, \tau, y), 0)
\end{equation}
which is a one-dimensional Fokker-Planck type equation (note that now $y$ is only a parameter
in the problem). By a simple computation in Fourier space, we find the explicit representation
\begin{eqnarray} \nonumber g(t,x) & = & \int_{\mathbb{R}}
{1 \over \sqrt{ 4 \pi \varepsilon \int_{\tau}^t e^{2 ( \Gamma(t) - \Gamma(s) ) }\, ds}}
\exp \Big( - { (z- z')^2 \over 4 \varepsilon \int_{\tau}^t e^{2 ( \Gamma(t) - \Gamma(s) ) }\, ds} \Big)
\tilde{f}_{0}(y, e^{- \Gamma(t) }z')\, dz' \\
\label{duhamel2} & = & \int_{\mathbb{R}} k(t, \tau, y, z-z') \tilde{f}_{0} (y, e^{- \Gamma(t) }z')\, dz'
\end{eqnarray}
where $\Gamma(t)= \int_{\tau}^t \gamma(s,y)\, ds$ (note that $\Gamma$ depends
on $y$ and $\tau$; we do not write this dependence down explicitly, for notational convenience).
Note that $k$ is non-negative and that $\int_{\mathbb{R}} k(t, \tau, y, z) \, dz= 1$, thus,
we immediately recover that
$$ \|g \|_{L^\infty} \leq \|\tilde{f}_{0}\|_{L^\infty}.$$
Next, we observe that we can write
\begin{equation}
\label{trickconorm} z \partial_{z}k (t,\tau, z-z')=\big( z - z' \big) \partial_{z} k - z' \partial_{z'}k
(t, \tau, z-z')\end{equation}
with
$$ \int_{\mathbb{R} } \big| \big( z - z' \big) \partial_{z} k \big| dz' \lesssim 1$$
and thus by using an integration by parts, we find
$$ \|z \partial_{z} g\|_{L^\infty} \lesssim
\| \tilde{f}_{0} \|_{L^\infty} + \Big\| e^{- \Gamma(t) } \int_{\mathbb{R}} k(t,\tau, y, z') z' \partial_{z} \tilde{f}_{0}(y, e^{- \Gamma(t)}
z') dz' \Big\|_{L^\infty}.$$
By using \eqref{Gamma}, this yields
$$ \| z \partial_{z} g\|_{L^\infty} \lesssim
\| \tilde{f}_{0} \|_{L^\infty} + \| z \partial_{z} \tilde{f}_{0} \|_{L^\infty}.$$
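Let us record the elementary computation behind the bound on $(z-z')\partial_{z}k$ used above. Denoting by $4\varepsilon A(t,\tau,y)$ the quantity appearing in the exponential of \eqref{duhamel2} (a shorthand used only here), the kernel $k$ is the centered Gaussian density of variance $2\varepsilon A$, so that
$$ \partial_{z}k = - {z-z' \over 2 \varepsilon A}\, k, \qquad \int_{\mathbb{R}} \big| (z-z')\partial_{z}k \big|\, dz' = {1 \over 2 \varepsilon A}\int_{\mathbb{R}} (z-z')^2\, k\, dz' = 1.$$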
By using \eqref{tildef} and \eqref{tildeg}, we obtain
$$ \| z \partial_{z} f \|_{L^\infty} \lesssim \| z \partial_{z} \tilde{f} \|_{L^\infty}
\lesssim \| \tilde{f}_{0} \|_{L^\infty} + \| z \partial_{z} \tilde{f}_{0} \|_{L^\infty}
\lesssim \| f_{0} \|_{L^\infty} + \| z \partial_{z} f_{0} \|_{L^\infty} .$$
This ends the proof of Lemma \ref{FP}.
\subsection{Proof of Lemma \ref{lemFP1}}
\label{sectionFP1}
We use the same idea as in the proof of Lemma \ref{lemFP0}. We first estimate
$ \sqrt{\varepsilon } \| \partial_{z} Z_{i} \rho\|_{L^\infty}, $ $i=1, \, 2$. We get for $\partial_{i} \rho$ the equation
$$ \partial_{t} \partial_{i} \rho + w \cdot \nabla \partial_{i} \rho = \varepsilon \partial_{zz} \partial_{i} \rho + \partial_{i}\mathcal{H}
- \partial_{i} w \cdot \nabla \rho$$
that we rewrite as
$$ \partial_{t} \partial_{i}\rho + z \partial_{z}w_{3}(t,y,0) \partial_{z}
\partial_{i} \rho + w_y(t,y,0) \cdot \nabla_y \partial_{i }\rho - \varepsilon \partial_{zz}\partial_{i} \rho =
\partial_{i} \mathcal{H} - \partial_{i}w \cdot \nabla \rho- R^1 =: G$$
where
$$ R^1= \big(w_y(t,x)- w_y(t,y,0)\big) \cdot \nabla_y\partial_{i} \rho +\big( w_{3}(t,x) - z \partial_{z}w_{3}(t,y,0)
\big) \partial_{z}\partial_{i} \rho.$$
By using the notations before Lemma \ref{FP}, we obtain
$$ \partial_{i}\rho= S(t, 0) \partial_{i}\rho_{0} + \int_{0}^t S(t, \tau) G(\tau) \, d\tau$$
and we shall use the following semigroup estimates for $S$:
\begin{lem}
\label{lemFP1bis}
Under the assumption of Lemma \ref{lemFP1} on $w$, we have that for $0 \leq \tau \leq t \leq T$:
$$ \sqrt{\varepsilon}\| \partial_{z} S (t,\tau) f_{0}\|_{L^\infty} \leq {\Lambda(M) \over \sqrt{t-\tau}} \|f_{0}\|_{L^\infty}, \quad
\sqrt{\varepsilon} \| \partial_{z}\big( z \partial_{z} S(t,\tau) f_{0}\big) \|_{L^\infty} \leq
{\Lambda(M) \over \sqrt{t-\tau} }\big( \|f_{0}\|_{L^\infty} + \|z\partial_{z} f_{0} \|_{L^\infty}\big)$$
where $\Lambda(M)$ does not depend on $\varepsilon$.
\end{lem}
Let us postpone the proof of this Lemma until the end of the section.
By using Lemma \ref{lemFP1bis}, we thus get
that
$$ \sqrt{\varepsilon} \| \partial_{z} \partial_{i} \rho(t) \|_{L^\infty}
\leq \Lambda(M)\Big( {1 \over \sqrt{t} } \|\partial_{i} \rho (0) \|_{L^\infty} + \int_{0}^t {1 \over \sqrt{ t- \tau} }
\big( \| \mathcal{H}\|_{1,\infty} + \|\partial_{i} w \cdot \nabla \rho \|_{L^\infty} + \|R^1\|_{L^\infty}\big)\Big).$$
Next, we can use \eqref{hetab1} and \eqref{Rest} to get that
$$ \| R^1\|_{L^\infty} \leq \Lambda(M) \big( \|\rho\|_{1, \infty} + \|\rho\|_{4}\big).$$
This yields
$$ \sqrt{\varepsilon} \| \partial_{z} \partial_{i} \rho(t) \|_{L^\infty}
\leq \Lambda(M)\Big( {1 \over \sqrt{t} } \|\partial_{i} \rho (0) \|_{L^\infty} + \int_{0}^t {1 \over \sqrt{ t- \tau} }
\big( \| \mathcal{H}\|_{1,\infty} + \|\rho\|_{1, \infty} + \|\rho\|_{4}\big)\Big).$$
For the estimate of $\sqrt{\varepsilon}\| \partial_{z} \big( Z_{3} \rho\big)\|_{L^\infty}$, we directly use the Duhamel formula \eqref{Duh}
and the second estimate given in Lemma \ref{lemFP1bis}.
This ends the proof of Lemma \ref{lemFP1}. It only remains to prove Lemma \ref{lemFP1bis}:
\subsubsection*{Proof of Lemma \ref{lemFP1bis}}
We can use again that the solution of the equation is given by the representation \eqref{tildeg}, \eqref{duhamel2}
and it suffices to notice that the kernel $k$ has the property
$$ | \partial_{z} k | \leq { \Lambda(M) \over \sqrt{\varepsilon(t- \tau)}}\, | k| $$
and the result follows from standard convolution estimates.
\subsection{Proof of Proposition \ref{Korn}}
\label{sectionKorn}
Thanks to Lemma \ref{mingrad}, the estimate \eqref{estKorn} for $v$ is actually equivalent to the standard Korn inequality
in $\Omega_{t}$ for $ u= v \circ \Phi^{-1}$. For the sake of completeness, we shall
sketch the argument.
We first note that
$$ \int_{\mathcal{S}} |S^\varphi v|^2 \, d\mathcal{V}_{t} \geq c_{0} \|S^\varphi v\|_{L^2(\mathcal{S})}^2.$$
Next, we shall reduce the problem to the classical Korn inequality in $\mathcal{S}$ for an auxiliary
vector field. Let us set
$$ w_{i }= v_{i}+ \partial_{i}\varphi \, v_{3},\, i= 1, \, 2, \quad w_{3}=\partial_{z}\varphi v_{3}.$$
We note that for $ 1 \leq i , \, j \leq 2$, we have
\begin{align*}
2 (Sw)_{ij}= & \partial^{\varphi}_{i}v_{j} + \partial^{\varphi}_{j} v_{i} + \partial_{j}\varphi\big( \partial^{\varphi}_{z}v_{i} + \partial^{\varphi}_{i} v_{3}\big)
+ \partial_{i}\varphi\big( \partial^{\varphi}_{z}v_{j} + \partial^{\varphi}_{j} v_{3}\big) \\
& + 2 \partial_{i} \varphi \partial_{j} \varphi \, \partial^{\varphi}_{z} v_{3} +2 \partial^2_{ij}\varphi \, v_{3}
\end{align*}
and hence that
$$ (Sw)_{ij}= (S^\varphi v)_{ij}+ \partial_{j}\varphi (S^\varphi v)_{i,3}+ \partial_{i}\varphi(S^\varphi v)_{j,3}
+ \partial_{i}\varphi \partial_{j}\varphi (S^\varphi v)_{3, 3}+ \partial^2_{ij} \varphi \, v_{3}.$$
In a similar way, we have that
\begin{eqnarray*} (Sw)_{i,3} & = & \partial_{z}\varphi \, (S^\varphi v)_{i,3} + \partial_{i}\varphi \, \partial_{z}\varphi
(S^\varphi v)_{3, 3}+ \partial^2_{i,3} \varphi\, v_{3}, \quad i= 1, \, 2, \\
(Sw)_{3, 3}& = & ( \partial_{z}\varphi)^2 (S^\varphi v)_{3, 3}+ \partial^2_{zz} \varphi\, v_{3}.
\end{eqnarray*}
This yields
\begin{equation}
\label{korn1}
\| S w \|_{L^2(\mathcal{S})} \leq \Lambda_{0}\big( \|S^\varphi v \|_{L^2(\mathcal{S})} + \|v\|_{L^2(\mathcal{S})} \big).
\end{equation}
Next, the Korn inequality in $\mathcal{S}$ for $w$, yields for some $C>0$:
$$ \|\nabla w \|_{L^2(\mathcal{S})} \leq C\big( \| S w \|_{L^2(\mathcal{S})} + \|w\|_{L^2(\mathcal{S})}\big).$$
Consequently, we obtain that
$$ \|\nabla w \|_{L^2(\mathcal{S})} \leq \Lambda_{0}\big( \|S^\varphi v \|_{L^2(\mathcal{S})} + \|v\|_{L^2(\mathcal{S})}\big) + C \| w \|_{L^2(\mathcal{S})}.$$
Moreover, since $\partial_{z} \varphi \geq \eta$, we have from the definition of $w$ that
$$ \|w \|_{L^2(\mathcal{S})} \leq \Lambda_{0} \| v\|_{L^2(\mathcal{S})}, \quad \|\nabla v \|_{L^2(\mathcal{S})}
\leq \Lambda_{0}\big( \|\nabla w \|_{L^2(\mathcal{S})} + \|v \|_{L^2(\mathcal{S})} \big),$$
the result follows by combining these inequalities.
Finally, let us recall the proof of the classical Korn inequality in $\mathcal{S}$ used above.
We can define an extension of $w$ in $\mathbb{R}^3$ by $\tilde{w}= w, \, z<0$ and
$$ \tilde{w}_{i}(y,z)= 2 w_{i}(y, -z) - w_{i}(y,-3z), \quad \tilde{w}_{3}(y,z)= -2w_{3} (y, -z) + 3 w_{3}(y, -3z), \quad z>0.$$
Since
$$ \|S \tilde w \|_{L^2(\mathbb{R}^3)} \leq 5 \| S w \|_{L^2(\mathcal{S})}$$
and since in $\mathbb{R}^3$ we obviously have that
$$ \|S \tilde w \|_{L^2(\mathbb{R}^3)}^2= {1 \over 2 } \| \nabla \tilde{w}\|_{L^2(\mathbb{R}^3)}^2 + {1 \over 2}
\| \nabla \cdot \tilde{w} \|_{L^2(\mathbb{R}^3)}^2 \geq {1 \over 2 } \| \nabla \tilde{w}\|_{L^2(\mathbb{R}^3)}^2$$
the conclusion follows from the remark that
$$ \|\nabla w \|_{L^2(\mathcal{S})} \leq \| \nabla \tilde w \|_{L^2(\mathbb{R}^3)}.$$
\subsection{Proof of Lemma \ref{hardybis}}
\label{prooflemmahardy}
Let us start with the first inequality. For any test function $f$, we get by an integration by parts
$$ \int_{-\infty}^0 {1 \over z(1-z)} f \partial_{z}f\, dz={1 \over 2} \int_{-\infty}^0 { 1 - 2z \over z^2 (1-z)^2} |f|^2\, dz \geq {1 \over 2}
\int_{-\infty}^0 { 1 \over z^2 (1-z)^2} |f|^2\, dz$$
and we get the result from the Cauchy-Schwarz inequality.
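Explicitly, setting $I= \int_{-\infty}^0 { |f|^2 \over z^2(1-z)^2}\, dz$, the above identity and the Cauchy-Schwarz inequality give
$$ {1 \over 2}\, I \leq \int_{-\infty}^0 {1 \over z(1-z)} f \partial_{z}f\, dz \leq \int_{-\infty}^0 {|f| \over |z|(1-z)}\, |\partial_{z}f|\, dz \leq I^{1 \over 2}\, \|\partial_{z}f\|_{L^2},$$
so that $I^{1 \over 2} \leq 2\, \|\partial_{z}f\|_{L^2}$, which is the desired bound.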
For the second inequality, we just write
$$ \int_{-\infty}^0\big( {1 -z \over z}\big)^ 2 f(z)^2 \, dz \leq 4 \int_{-1}^0 {1 \over z^2} f(z)^2 \, dz + 4 \int_{-\infty}^{- 1} |f(z)|^2\, dz$$
and we use the standard Hardy inequality on $(-1, 0)$ to estimate the first term.
\section{Results on paradifferential calculus}
\label{sectionparadiff}
In this section, we shall state the results on paradifferential calculus with parabolic homogeneity that we need for our estimates
of section \ref{sectionnorm2}. We shall first define a calculus as in
the Appendix B of \cite{Metivier-Zumbrun} without the semiclassical parameter
$\varepsilon$.
In a second step, we shall deduce the result for the "partially" semiclassical
version (that is to say with weight $\langle \zeta^\varepsilon \rangle= \big( \gamma^2 + \tau^2 + \varepsilon^2 | \xi |^4 \big)^{1 \over 4}$) that we need.
We first define operators acting on functions defined on $\mathbb{R}^3_{(t,y)}$. The calculus for functions defined on $\mathbb{R}_{t}\times \mathcal{S}$ will immediately follow since the $z$ variable will only be a parameter.
For notational convenience, we use in this section the notation $X=(t,y)=(x_{0}, x_{1}, x_{2})$ for the "space" variable and $(\tau, \xi)$
for the corresponding Fourier variables, we also use the notation $\zeta=(\gamma, \tau, \xi)= (\gamma, \eta)\in \mathbb{R}^4_{+}= [1, + \infty[ \times \mathbb{R}^3$
where $\gamma \geq 1$ will be a parameter. We define the weight:
$$ \langle \zeta \rangle=\big(\gamma^2 + \tau^2 + |\xi|^4\big)^{1\over 4}$$
with $|\cdot|$ the Euclidean norm of $\mathbb{R}^2$.
Note that this weight corresponds to the quasihomogeneous weight with $p=2$, $p_{0}= 1$, $p_{1}= p_{2}= 2$ in the general framework
of \cite{Metivier-Zumbrun}. We also point out that since we shall only consider
$\gamma \geq 1$, we do not need to make any difference
between $\langle \zeta\rangle$ and $\Lambda(\zeta)= \big( 1+ \langle \zeta \rangle^{2p}\big)^{1 \over 2p}$ in the notation of \cite{Metivier-Zumbrun}.
By using this weight, we define a scale of modified Sobolev type spaces by
$$ \mathcal{H}^{s, \gamma}(\mathbb{R}^3)= \{ u \in \mathcal{S}'(\mathbb{R}^3), \quad \|u\|_{\mathcal H^{s, \gamma}}^2 <+\infty\}$$
where
$$ \|u\|_{\mathcal{H}^{s, \gamma}}^2 = \int_{\mathbb{R}^3} \langle \zeta \rangle^{2s} |\hat u (\tau, \xi)|^2\, d\tau d\xi,$$
$\hat u$ being the Fourier transform of $u$.
Next, we define our class of symbols:
\begin{defi}
For $\mu \in \mathbb{R}$
\begin{itemize}
\item the class $\Gamma^\mu_{0}$ denotes the space of locally bounded matrices $a(X, \zeta)$ on $\mathbb{R}^3\times \mathbb{R}^4_{+}$
which are $\mathcal{C}^\infty$ with respect to $(\tau, \xi)= \eta$ and such that for all $\alpha \in \mathbb{N}^d$, there is a constant
$C_{\alpha}$ such that
$$ \forall (X, \zeta), \quad | \partial_{\eta}^\alpha a(X, \zeta)| \leq C_{\alpha} \langle \zeta \rangle^{\mu - \langle \alpha \rangle}$$
where $\langle \alpha \rangle = 2 \alpha_{0}+ \alpha_{1}+ \alpha_{2}$.
\item $\Gamma^\mu_{1}$ denotes the set of symbols $a \in \Gamma^\mu_{0}$ such that $\partial_{i}a \in \Gamma^{\mu}_{0}$, for $
i=0, \, 1, \, 2.$
\end{itemize}
\end{defi}
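For instance, the weight itself (which does not depend on $X$) gives a basic example of such a symbol: $\langle \zeta \rangle^\mu \in \Gamma^\mu_{1}$. Indeed, in the case $\mu=1$, the first order derivatives are
$$ \partial_{\tau} \langle \zeta \rangle = {\tau \over 2 \langle \zeta \rangle^{3}}, \qquad \partial_{\xi_{i}} \langle \zeta \rangle = {\xi_{i} |\xi|^2 \over \langle \zeta \rangle^{3}},$$
so that $|\partial_{\tau} \langle \zeta \rangle| \leq {1 \over 2} \langle \zeta \rangle^{1- \langle \alpha \rangle}$ (here $\langle \alpha \rangle=2$) and $|\partial_{\xi_{i}} \langle \zeta \rangle| \leq \langle \zeta \rangle^{1- \langle \alpha \rangle}$ (here $\langle \alpha \rangle=1$); the higher order derivatives and the general case $\mu \in \mathbb{R}$ are handled in the same way.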
The spaces $\Gamma_{0}^\mu$ are equipped with the seminorms
$$ \| a \|_{(\mu, N)}= \sup_{\langle \alpha \rangle \leq N} \sup_{\mathbb{R}^3 \times \mathbb{R}^4_{+}} \langle \zeta \rangle^{\langle \alpha \rangle - \mu} |\partial_{\eta}^\alpha a(X, \zeta)|.$$
Paradifferential operators are defined as pseudodifferential operators associated to a suitable regularization of the above symbols.
For a symbol $a$ in the above class, we define a smooth symbol $\sigma_{a}( X, \zeta)$ defined by
$$ \mathcal{F}_{X}\sigma_{a}(\hat X, \zeta)= \psi(\hat X, \zeta) \mathcal{F}_{X}a(\hat X, \zeta)$$
where $\mathcal{F}_{X}$ stands for the partial Fourier transform with respect to the $X$ variable and $\psi$ is an admissible cut-off
function. We say that a smooth function $\psi$ is admissible if there exists $0<\delta_{1}<\delta_{2}<1$ such that
$$ \psi(\hat X, \zeta)= 1,\mbox{ for } \langle \hat X \rangle \leq \delta_{1} \langle \zeta\rangle, \quad
\psi(\hat X, \zeta)= 0,\mbox{ for } \langle \hat X \rangle \geq \delta_{2} \langle \zeta\rangle$$
where $\langle \hat X \rangle= \big( \gamma^2 + \hat{X}_{0}^2 + |(\hat{X}_{1}, \hat{X}_{2})|^4 \big)^{1\over 4}$ and that
for every $\alpha, \, \beta \in \mathbb{N}^d$, there exists $C_{\alpha, \beta}$ such that
$$ \forall \hat X,\, \zeta, \quad |\partial_{\eta}^\alpha \partial_{\hat X}^\beta \psi(\hat X, \zeta)| \leq C_{\alpha, \beta} \langle \zeta\rangle^{-
\langle \alpha \rangle - \langle \beta \rangle}.$$
For a given admissible cut-off function, we then define an operator associated to the symbol $a$, $T_{a}^\gamma$ by
$$ T_{a}^\gamma u= Op(\sigma_{a}) u$$
where $Op(\sigma_{a})$ is the pseudodifferential operator defined by
\begin{equation}
\label{defpseudo} Op(\sigma_{a}) u(X) = (2 \pi)^{-3} \int_{\mathbb{R}^3} e^{i X \cdot \eta} \sigma_{a}(X, \zeta) \hat{u}(\eta) \, d\eta.
\end{equation}
In this definition, the operator $T_{a}^\gamma$ depends on the choice of the admissible cut-off function. Nevertheless, it can be
shown that the difference between two operators defined with the same symbol but with two different admissible cut-off functions
is a lower order operator. We refer to \cite{Metivier-Zumbrun} for more details.
Note that viewing $a(X) \in L^\infty$ as a symbol in $\Gamma^0_{0}$, the operator $T_{a}^\gamma$ can be related to Bony's
paraproduct (\cite{Bony}).
The main interest of this class of operators is that it enjoys a nice symbolic calculus.
\begin{theoreme}
\label{symbolic}
We have the following results:
\begin{enumerate}
\item {\bf Boundedness} For $a \in \Gamma^\mu_{0}$, and every $s\in \mathbb{R}$, there exists $C>0$ (which depends only on semi norms of $a$) such that for every $\gamma \geq 1$
and $u \in \mathcal H^{s+\mu, \gamma}$, we have
$$ \|T_{a}^\gamma u\|_{\mathcal H^{s, \gamma}} \leq C \| u \|_{\mathcal{H}^{s+\mu, \gamma}}.$$
\item {\bf Product} For $a \in \Gamma^\mu_{1}$, $b \in \Gamma^{\mu'}_{1}$ and $s \in \mathbb{R}$, we have
$$ \|T_{a}^\gamma T_{b}^\gamma u - T_{ab}^\gamma u \|_{\mathcal H^{s, \gamma}} \leq
C\Big( \|a\|_{(\mu, N)}\, \|\nabla_{X}b\|_{(\mu', N)} + \|\nabla_{X} a \|_{(\mu, N)} \|b\|_{(\mu', N)}\Big) \|u\|_{\mathcal{H}^{s+\mu+\mu'-1, \gamma}}$$
where $C$ and $N$ depend only on $s$, $\mu$, $\mu'$.
\item {\bf Adjoint} For $a \in \Gamma^\mu_{1}$ (recall that we allow $a$ to be matrix valued), denote by $(T_{a}^\gamma)^*$ the adjoint of $T_{a}^\gamma$
and by $a^*(X, \zeta)$ the adjoint matrix of $a$; then we have
$$ \|\big((T_{a}^\gamma)^* - T_{a^*}^\gamma\big)u\|_{\mathcal{H}^{s, \gamma}} \leq C \|\nabla_{X}a\|_{(\mu, N)} \, \|u\|_{\mathcal H^{s+\mu-1, \gamma}}$$
where $C$ and $N$ only depend on $\mu$ and $s$.
\item {\bf Garding inequality} For $a \in \Gamma_{1}^\mu$, assume that there exists $c>0$
such that
$$ \forall X, \, \zeta, \quad \mbox{Re }a(X, \zeta) \geq c \langle \zeta \rangle^\mu.$$
Then there exists $C>0$ (depending only on $\|a, \nabla_{X} a \|_{(\mu, N)}$) such that for $\gamma \geq C$, we have
$$ {c \over 2} \| u\|_{\mathcal H^{{\mu \over 2}, \gamma}}^2 \leq \mbox{Re } \big(T_{a}^\gamma u,
u \big)_{L^2}$$
\item {\bf Paraproduct} If $a(X) \in L^\infty$, $\nabla_{X}a \in L^\infty$, there exists $C>0$ such that for every $u$, we have the estimates
\begin{eqnarray*}
\| au - T^\gamma_{a} u \|_{\mathcal H^{1, \gamma}} \leq C \|\nabla_{X} a \|_{L^\infty} \|u\|_{\mathcal{H}^{0, \gamma}}, \\
\gamma \| au - T_{a}^\gamma u \|_{\mathcal{H}^{0, \gamma}} \leq C \| \nabla_{X} a \|_{L^\infty} \| u\|_{\mathcal{H}^{1, \gamma}}, \\
\| a \partial_{j}u - T_{a}^\gamma\partial_{j}u \|_{\mathcal H^{0, \gamma}} \leq C \| \nabla_{X} a \|_{L^\infty} \|u\|_{\mathcal{H}^{ {2 \over p_{j} } - 1, \gamma}}
\end{eqnarray*}
where $\partial_{0}= \partial_{t}$, $p_{0}=1$ and $p_{j}= 2$, $j=1, \, 2$.
\end{enumerate}
\end{theoreme}
For the proof of these results, we refer to \cite{Metivier-Zumbrun} Appendix B.
For a matrix-valued symbol $a$, the assumption in the Garding inequality has to be understood as a lower bound, in the sense of Hermitian
matrices, for $\mbox{Re} \,a := {1 \over 2}(a+ a^*)$.
When dealing with paraproducts, it is useful to recall that,
thanks to the definition of the operators, when $a = a(X)$ we have
$$ \partial_{j}\big( T^\gamma_{a} u \big) = T^\gamma_{a} \partial_{j} u + T^\gamma _{\partial_{j} a } u.$$
Another useful remark about products is that if $a= a(X, \zeta)$ but $b=b(\zeta)$ does not depend on $X$, then we have
\begin{equation}
\label{remarkproduit}
T_{ab}^\gamma = T_{a}^\gamma T_{b}^\gamma.
\end{equation}
To conclude the description of the calculus, let us notice that if $a(X, z, \zeta)$ and $u(X,z)$ also depend on $z \in (-\infty,0)$, which plays
the role of a parameter, all the results
of Theorem \ref{symbolic} remain true by adding a supremum in $z$ in the definition of the $L^\infty$ norms and by defining
the $\|\cdot\|_{\mathcal{H}^{s, \gamma}}$ norm of a function defined on $\mathbb{R}\times \mathcal{S}$ by
$$ \|u\|_{\mathcal{H}^{s, \gamma}}^2 = \int_{-\infty}^0 \int_{\mathbb{R}^3} \langle \zeta \rangle^{2s}| \mathcal{F}_{t,y}u(\tau, \xi, z)|^2 \, d\tau
d\xi dz.$$
Let us turn to the semiclassical version of the calculus that we need. We shall use the semiclassical type weight
$\langle \zeta^\varepsilon \rangle= \big( \gamma^2 + \tau^2 +| \sqrt{\varepsilon} \xi|^4 \big)^{1\over 4}$
where $\varepsilon \in (0,1)$.
We thus define a new weighted norm by
$$ \|u \|_{\mathcal{H}^{s, \gamma, \varepsilon}}^2 = \int_{\mathbb{R}^3} \langle \zeta^\varepsilon \rangle^{2s} |\hat u (\tau, \xi)|^2\, d\tau d\xi.$$
For pseudodifferential operators, the semiclassical quantization corresponding to \eqref{defpseudo} for a symbol $\sigma(X, \zeta)$ is
$$ Op^\varepsilon(\sigma) u(X) = (2 \pi)^{-3} \int_{\mathbb{R}^3} e^{i X \cdot \eta} \sigma(X, \gamma, \tau, \sqrt{\varepsilon}\, \xi) \hat{u}(\eta) \, d\eta, \quad
\eta=(\tau, \xi)$$
Note that this definition is different from the quantization used in \cite{Metivier-Zumbrun}; this is due to the fact that
we shall study a problem of purely parabolic nature.
Let us define the "scaling map" $H^\varepsilon$ by
$$ H^\varepsilon u= \sqrt{\varepsilon} u (t, \sqrt{\varepsilon}y).$$
Then, we note that
$$ Op^\varepsilon(\sigma) = (H^{\varepsilon})^{-1} Op(\sigma^\varepsilon) H^\varepsilon $$
where $\sigma^\varepsilon$ is given by $\sigma^\varepsilon(X, \zeta)= \sigma(t, \sqrt{\varepsilon} y, \zeta)$.
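For the reader's convenience, let us check this relation. Since $\widehat{H^\varepsilon u}(\tau, \xi)= \varepsilon^{-{1 \over 2}}\, \hat{u}(\tau, \xi/\sqrt{\varepsilon})$, the change of variables $\xi = \sqrt{\varepsilon}\, \xi'$ in \eqref{defpseudo} gives
$$ Op(\sigma^\varepsilon) H^\varepsilon u\, (t,y) = \sqrt{\varepsilon}\, (2\pi)^{-3} \int_{\mathbb{R}^3} e^{i t \tau + i (\sqrt{\varepsilon}\,y)\cdot \xi'}\, \sigma(t, \sqrt{\varepsilon}\,y, \gamma, \tau, \sqrt{\varepsilon}\, \xi')\, \hat{u}(\tau, \xi')\, d\tau\, d\xi' = H^\varepsilon \big( Op^\varepsilon(\sigma) u \big)(t,y),$$
and it remains to apply $(H^{\varepsilon})^{-1}$.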
This motivates the following definition: for a symbol $a(X, \zeta) \in \Gamma^\mu_{0}$, we define a semiclassical paradifferential calculus by
$$ T^{\varepsilon, \gamma}_{a} u = (H^{\varepsilon})^{-1} T^\gamma_{a^\varepsilon} H^{\varepsilon} u$$
with $a^\varepsilon(X, \zeta)= a(t, \sqrt{\varepsilon} y, \zeta)$.
Next, we observe that
$$ \|H^\varepsilon u \|_{\mathcal H^{s, \gamma}}= \|u\|_{\mathcal{H}^{s, \gamma, \varepsilon}}$$
and that if $a$ is in $\Gamma^\mu_{0}$, then the family $(a^\varepsilon)_{\varepsilon \in (0, 1)}$ is uniformly bounded in $\Gamma^\mu_{0}$.
Moreover, if $a \in \Gamma^\mu_{1}$, then $a^\varepsilon$ and $\nabla_{X} a^\varepsilon$ are uniformly bounded in $\Gamma^\mu_{0}$.
This allows us to deduce from Theorem \ref{symbolic} the following properties of the semiclassical calculus:
\begin{theoreme}
\label{symbolic2}
We have the following results:
\begin{enumerate}
\item {\bf Boundedness} For $a \in \Gamma^\mu_{0}$, and every $s\in \mathbb{R}$, we have
$$ \|T_{a}^{\varepsilon, \gamma} u\|_{\mathcal H^{s, \gamma, \varepsilon}} \leq C \| u \|_{\mathcal{H}^{s+\mu, \gamma, \varepsilon}}.$$
\item {\bf Product} For $a \in \Gamma^\mu_{1}$, $b \in \Gamma^{\mu'}_{1}$ and $s \in \mathbb{R}$, we have
$$ \|T_{a}^{\varepsilon, \gamma} T_{b}^{\varepsilon, \gamma} u - T_{ab}^{\varepsilon, \gamma} u \|_{\mathcal H^{s, \gamma, \varepsilon}} \leq
C \|u\|_{\mathcal{H}^{s+\mu+\mu'-1, \gamma, \varepsilon}}$$
\item {\bf Adjoint} For $a \in \Gamma^\mu_{1}$, we have
$$ \|\big((T_{a}^{\varepsilon, \gamma})^* - T_{a^*}^{\varepsilon, \gamma}\big)u\|_{\mathcal{H}^{s, \gamma, \varepsilon}} \leq C \|u\|_{\mathcal H^{s+\mu-1, \gamma, \varepsilon}}$$
\item {\bf Garding inequality} For $a \in \Gamma_{1}^\mu$, assume that there exists $c>0$
such that
$$ \forall X, \, \zeta, \quad \mbox{Re }a(X, \zeta) \geq c \langle \zeta \rangle^\mu.$$
Then there exists $C>0$ such that for $\gamma \geq C$, we have
$$ {c \over 2} \| u\|_{\mathcal H^{{\mu \over 2}, \gamma, \varepsilon}}^2 \leq \mbox{Re } \big(T_{a}^{\varepsilon, \gamma} u,
u \big)_{L^2}.$$
\item {\bf Paraproduct} If $a(X) \in L^\infty$, $\nabla_{X}a \in L^\infty$, then we have
\begin{eqnarray*}
\| au - T^{\varepsilon, \gamma}_{a} u \|_{\mathcal H^{1, \gamma, \varepsilon}} \leq C \|u\|_{\mathcal{H}^{0, \gamma, \varepsilon}}, \\
\gamma \| au - T_{a}^{\varepsilon, \gamma} u \|_{\mathcal{H}^{0, \gamma, \varepsilon}} \leq C\| u\|_{\mathcal{H}^{1, \gamma, \varepsilon}}, \\
\| a \sqrt{\varepsilon}^{\alpha_{j}}\partial_{j}u - T_{a}^{\varepsilon, \gamma}\sqrt{\varepsilon}^{\alpha_{j}}\partial_{j}u \|_{\mathcal H^{0, \gamma, \varepsilon}} \leq C \|u\|_{\mathcal{H}^{{2 \over p_{j}}- 1 , \gamma, \varepsilon}}
\end{eqnarray*}
where $\partial_{0}= \partial_{t}$, $p_{0}=1$ and $p_{j}= 2$, $j=1, \, 2$ and $\alpha_{0}= 0$, $\alpha_{j}= 1$, $j \geq 1$.
\end{enumerate}
As before, in all the above estimates $C$ only depends on seminorms of the symbols and in particular, it is independent of $\varepsilon$ for $\varepsilon \in (0,1)$.
\end{theoreme}
Note that in the above result, due to the anisotropy in our definition of the semiclassical calculus we do not gain powers of $\varepsilon$ in
the product and adjoint estimates.
Finally, if the symbols and $u$ depend on the parameter $z$ all the above results remain true as before with the definition
$$ \|u\|_{\mathcal{H}^{s, \gamma, \varepsilon}}^2 = \int_{-\infty}^0 \int_{\mathbb{R}^3} \langle \zeta^\varepsilon \rangle^{2s}| \mathcal{F}_{t,y}u(\tau, \xi, z)|^2 \, d\tau
d\xi dz.$$
\end{document} | arXiv |
Hypatia
Hypatia[lower-alpha 1] (born c. 350–370; died 415 AD)[1][4] was a neoplatonist philosopher, astronomer, and mathematician who lived in Alexandria, Egypt, then part of the Eastern Roman Empire. She was a prominent thinker in Alexandria where she taught philosophy and astronomy.[5] Although preceded by Pandrosion, another Alexandrine female mathematician,[6] she is the first female mathematician whose life is reasonably well recorded.[7] Hypatia was renowned in her own lifetime as a great teacher and a wise counselor. She wrote a commentary on Diophantus's thirteen-volume Arithmetica, which may survive in part, having been interpolated into Diophantus's original text, and another commentary on Apollonius of Perga's treatise on conic sections, which has not survived. Many modern scholars also believe that Hypatia may have edited the surviving text of Ptolemy's Almagest, based on the title of her father Theon's commentary on Book III of the Almagest.
Hypatia constructed astrolabes and hydrometers, but did not invent either of these, which were both in use long before she was born. She was tolerant towards Christians and taught many Christian students, including Synesius, the future bishop of Ptolemais. Ancient sources record that Hypatia was widely beloved by pagans and Christians alike and that she established great influence with the political elite in Alexandria. Towards the end of her life, Hypatia advised Orestes, the Roman prefect of Alexandria, who was in the midst of a political feud with Cyril, the bishop of Alexandria. Rumors spread accusing her of preventing Orestes from reconciling with Cyril and, in March 415 AD, she was murdered by a mob of Christians led by a lector named Peter.[8][9]
Hypatia's murder shocked the empire and transformed her into a "martyr for philosophy", leading future Neoplatonists such as the historian Damascius (c. 458 – c. 538) to become increasingly fervent in their opposition to Christianity. During the Middle Ages, Hypatia was co-opted as a symbol of Christian virtue and scholars believe she was part of the basis for the legend of Saint Catherine of Alexandria. During the Age of Enlightenment, she became a symbol of opposition to Catholicism. In the nineteenth century, European literature, especially Charles Kingsley's 1853 novel Hypatia, romanticized her as "the last of the Hellenes". In the twentieth century, Hypatia became seen as an icon for women's rights and a precursor to the feminist movement. Since the late twentieth century, some portrayals have associated Hypatia's death with the destruction of the Library of Alexandria, despite the historical fact that the library no longer existed during Hypatia's lifetime.[10]
Life
Upbringing
Hypatia was the daughter of the mathematician Theon of Alexandria (c. 335 – c. 405 AD).[14][15][16] According to classical historian Edward J. Watts, Theon was the head of a school called the "Mouseion", which was named in emulation of the Hellenistic Mouseion,[15] whose membership had ceased in the 260s AD.[17] Theon's school was exclusive, highly prestigious, and doctrinally conservative. Theon rejected the teachings of Iamblichus and may have taken pride in teaching a pure, Plotinian Neoplatonism.[18] Although he was widely seen as a great mathematician at the time,[11][13][19] Theon's mathematical work has been deemed by modern standards as essentially "minor",[11] "trivial",[13] and "completely unoriginal".[19] His primary achievement was the production of a new edition of Euclid's Elements, in which he corrected scribal errors that had been made over the course of nearly 700 years of copying.[11][12][13] Theon's edition of Euclid's Elements became the most widely used edition of the textbook for centuries[12][20] and almost totally supplanted all other editions.[20]
Nothing is known about Hypatia's mother, who is never mentioned in any of the extant sources.[21][22][23] Theon dedicates his commentary on Book IV of Ptolemy's Almagest to an individual named Epiphanius, addressing him as "my dear son",[24][25] indicating that he may have been Hypatia's brother,[24] but the Greek word Theon uses (teknon) does not always mean "son" in the biological sense and was often used merely to signal strong feelings of paternal connection.[24][25] Hypatia's exact year of birth is still under debate, with suggested dates ranging from 350 to 370 AD.[26][27][28] Many scholars have followed Richard Hoche in inferring that Hypatia was born around 370. According to Damascius's lost work Life of Isidore, preserved in the entry for Hypatia in the Suda, a tenth-century Byzantine encyclopedia, Hypatia flourished during the reign of Arcadius. Hoche reasoned that Damascius's description of her physical beauty would imply that she was at most 30 at that time, and the year 370 was 30 years prior to the midpoint of Arcadius's reign.[29][30] In contrast, theories that she was born as early as 350 are based on the wording of the chronicler John Malalas (c. 491 – 578), who calls her old at the time of her death in 415.[28][31] Robert Penella argues that both theories are weakly based, and that her birth date should be left unspecified.[29]
Career
Hypatia was a Neoplatonist, but, like her father, she rejected the teachings of Iamblichus and instead embraced the original Neoplatonism formulated by Plotinus.[18] The Alexandrian school was renowned at the time for its philosophy, and Alexandria was regarded as second only to Athens as the philosophical capital of the Greco-Roman world.[26] Hypatia taught students from all over the Mediterranean.[32] According to Damascius, she lectured on the writings of Plato and Aristotle.[33][34][35][36] He also states that she walked through Alexandria in a tribon, a kind of cloak associated with philosophers, giving impromptu public lectures.[37][38][39]
According to Watts, two main varieties of Neoplatonism were taught in Alexandria during the late fourth century. The first was the overtly pagan religious Neoplatonism taught at the Serapeum, which was greatly influenced by the teachings of Iamblichus.[40] The second variety was the more moderate and less polemical variety championed by Hypatia and her father Theon, which was based on the teachings of Plotinus.[41] Although Hypatia herself was a pagan, she was tolerant of Christians.[42][43] In fact, every one of her known students was Christian.[44] One of her most prominent pupils was Synesius of Cyrene,[26][45][46][47] who went on to become a bishop of Ptolemais (now in eastern Libya) in 410.[47][48] Afterward, he continued to exchange letters with Hypatia[46][47][49] and his extant letters are the main sources of information about her career.[46][47][50][51][52] Seven letters by Synesius to Hypatia have survived,[46][47] but none from her addressed to him are extant.[47] In a letter written in around 395 to his friend Herculianus, Synesius describes Hypatia as "... a person so renowned, her reputation seemed literally incredible. We have seen and heard for ourselves she who honorably presides over the mysteries of philosophy."[46] Synesius preserves the legacy of Hypatia's opinions and teachings, such as the pursuit of "the philosophical state of apatheia—complete liberation from emotions and affections".[53]
The Christian historian Socrates of Constantinople, a contemporary of Hypatia, describes her in his Ecclesiastical History:[21]
There was a woman at Alexandria named Hypatia, daughter of the philosopher Theon, who made such attainments in literature and science, as to far surpass all the philosophers of her own time. Having succeeded to the school of Plato and Plotinus, she explained the principles of philosophy to her auditors, many of whom came from a distance to receive her instructions. On account of the self-possession and ease of manner which she had acquired in consequence of the cultivation of her mind, she not infrequently appeared in public in the presence of the magistrates. Neither did she feel abashed in going to an assembly of men. For all men on account of her extraordinary dignity and virtue admired her the more.[33]
Philostorgius, another Christian historian, who was also a contemporary of Hypatia, states that she excelled her father in mathematics[46] and the lexicographer Hesychius of Alexandria records that, like her father, she was also an extraordinarily talented astronomer.[46][54] Damascius writes that Hypatia was "exceedingly beautiful and fair of form",[55][56] but nothing else is known regarding her physical appearance[57] and no ancient depictions of her have survived.[58] Damascius states that Hypatia remained a lifelong virgin[59][60] and that, when one of the men who came to her lectures tried to court her, she tried to soothe his lust by playing the lyre.[56][61][lower-alpha 2] When he refused to abandon his pursuit, she rejected him outright,[56][61][63] displaying her bloody menstrual rags and declaring "This is what you really love, my young man, but you do not love beauty for its own sake."[34][56][61][63] Damascius further relates that the young man was so traumatized that he abandoned his desires for her immediately.[56][61][63]
Death
Background
From 382 to 412, the bishop of Alexandria was Theophilus.[65] Theophilus was militantly opposed to Iamblichean Neoplatonism[65] and, in 391, he demolished the Serapeum.[66][67] Despite this, Theophilus tolerated Hypatia's school and seems to have regarded Hypatia as his ally.[21][65][68] Theophilus supported the bishopric of Hypatia's pupil Synesius,[21][69] who describes Theophilus in his letters with love and admiration.[68][70] Theophilus also permitted Hypatia herself to establish close relationships with the Roman prefects and other prominent political leaders.[65] Partly as a result of Theophilus's tolerance, Hypatia became extremely popular with the people of Alexandria and exerted profound political influence.[71]
Theophilus died unexpectedly in 412.[65] He had been training his nephew Cyril, but had not officially named him as his successor.[72] A violent power struggle over the diocese broke out between Cyril and his rival Timothy. Cyril won and immediately began to punish those who had supported Timothy; he closed the churches of the Novatianists, who had been among Timothy's supporters, and confiscated their property.[73] Hypatia's school seems to have immediately developed a strong distrust of the new bishop,[68][70] as evidenced by the fact that, in all his vast correspondence, Synesius only ever wrote one letter to Cyril, in which he treats the younger bishop as inexperienced and misguided.[70] In a letter written to Hypatia in 413, Synesius asks her to intercede on behalf of two individuals impacted by the ongoing civil strife in Alexandria,[74][75][76] insisting, "You always have power, and you can bring about good by using that power."[74] He also reminds her that she had taught him that a Neoplatonic philosopher must introduce the highest moral standards to political life and act for the benefit of their fellow citizens.[74]
According to Socrates Scholasticus, in 414, following an exchange of hostilities and a Jewish-led massacre, Cyril closed all the synagogues in Alexandria, confiscated all the property belonging to the Jews, and expelled a number of Jews from the city; Scholasticus suggests all the Jews were expelled, while John of Nikiu notes it was only those involved in the massacre.[77][78][73] Orestes, the Roman prefect of Alexandria, who was also a close friend of Hypatia[21] and a recent convert to Christianity,[21][79][80] was outraged by Cyril's actions and sent a scathing report to the emperor.[21][73][81] The conflict escalated and a riot broke out in which the parabalani, a group of Christian clerics under Cyril's authority, nearly killed Orestes.[73] As punishment, Orestes had Ammonius, the monk who had started the riot, publicly tortured to death.[73][82][83] Cyril tried to proclaim Ammonius a martyr,[73][82][84] but Christians in Alexandria were disgusted,[82][85] since Ammonius had been killed for inciting a riot and attempting to murder the governor, not for his faith.[82] Prominent Alexandrian Christians intervened and forced Cyril to drop the matter.[73][82][85] Nonetheless, Cyril's feud with Orestes continued.[86] Orestes frequently consulted Hypatia for advice[87][88] because she was well-liked among both pagans and Christians alike, she had not been involved in any previous stages of the conflict, and she had an impeccable reputation as a wise counselor.[89]
Despite Hypatia's popularity, Cyril and his allies attempted to discredit her and undermine her reputation.[90][91] Socrates Scholasticus mentions rumors accusing Hypatia of preventing Orestes from reconciling with Cyril.[88][91] Traces of other rumors that spread among the Christian populace of Alexandria may be found in the writings of the seventh-century Egyptian Coptic bishop John of Nikiû,[40][91] who alleges in his Chronicle that Hypatia had engaged in satanic practices and had intentionally hampered the church's influence over Orestes:[91][92][93][94]
And in those days there appeared in Alexandria a female philosopher, a pagan named Hypatia, and she was devoted at all times to magic, astrolabes and instruments of music, and she beguiled many people through her Satanic wiles. And the governor of the city honoured her exceedingly; for she had beguiled him through her magic. And he ceased attending church as had been his custom... And he not only did this, but he drew many believers to her, and he himself received the unbelievers at his house.[92]
Murder
According to Socrates Scholasticus, during the Christian season of Lent in March 415, a mob of Christians under the leadership of a lector named Peter, raided Hypatia's carriage as she was travelling home.[95][96][97] They dragged her into a building known as the Kaisarion, a former pagan temple and center of the Roman imperial cult in Alexandria that had been converted into a Christian church.[89][95][97] There, the mob stripped Hypatia naked and murdered her using ostraka,[95][98][99][100] which can either be translated as "roof tiles" or "oyster shells".[95] Damascius adds that they also cut out her eyeballs.[101] They tore her body into pieces and dragged her limbs through the town to a place called Cinarion, where they set them on fire.[95][101][100] According to Watts, this was in line with the traditional manner in which Alexandrians carried the bodies of the "vilest criminals" outside the city limits to cremate them as a way of symbolically purifying the city.[101][102] Although Socrates Scholasticus never explicitly identifies Hypatia's murderers, they are commonly assumed to have been members of the parabalani.[103] Christopher Haas disputes this identification, arguing that the murderers were more likely "a crowd of Alexandrian laymen".[104]
Socrates Scholasticus presents Hypatia's murder as entirely politically motivated and makes no mention of any role that Hypatia's paganism might have played in her death.[105] Instead, he reasons that "she fell a victim to the political jealousy which at that time prevailed. For as she had frequent interviews with Orestes, it was calumniously reported among the Christian populace that it was she who prevented Orestes from being reconciled to the bishop."[95][106] Socrates Scholasticus unequivocally condemns the actions of the mob, declaring, "Surely nothing can be farther from the spirit of Christianity than the allowance of massacres, fights, and transactions of that sort."[95][102][107]
The Canadian mathematician Ari Belenkiy has argued that Hypatia may have been involved in a controversy over the date of the Christian holiday of Easter 417 and that she was killed on the vernal equinox while making astronomical observations.[108] Classical scholars Alan Cameron and Edward J. Watts both dismiss this hypothesis, noting that there is absolutely no evidence in any ancient text to support any part of the hypothesis.[109][110]
Aftermath
Hypatia's death sent shockwaves throughout the empire;[40][111] for centuries, philosophers had been seen as effectively untouchable during the displays of public violence that sometimes occurred in Roman cities and the murder of a female philosopher at the hand of a mob was seen as "profoundly dangerous and destabilizing".[111] Although no concrete evidence was ever discovered definitively linking Cyril to the murder of Hypatia,[40] it was widely believed that he had ordered it.[40][88] Even if Cyril had not directly ordered the murder himself, his smear campaign against Hypatia had inspired it. The Alexandrian council was alarmed at Cyril's conduct and sent an embassy to Constantinople.[40] The advisors of Theodosius II launched an investigation to determine Cyril's role in the murder.[107]
The investigation resulted in the emperors Honorius and Theodosius II issuing an edict in autumn of 416, which attempted to remove the parabalani from Cyril's power and instead place them under the authority of Orestes.[40][107][112][113] The edict restricted the parabalani from attending "any public spectacle whatever" or entering "the meeting place of a municipal council or a courtroom."[114] It also severely restricted their recruitment by limiting the total number of parabalani to no more than five hundred.[113] According to Damascius, Cyril himself allegedly only managed to escape even more serious punishment by bribing one of Theodosius's officials.[107] Watts argues that Hypatia's murder was the turning point in Cyril's fight to gain political control of Alexandria.[115] Hypatia had been the linchpin holding Orestes's opposition against Cyril together, and, without her, the opposition quickly collapsed.[40] Two years later, Cyril overturned the law placing the parabalani under Orestes's control and, by the early 420s, Cyril had come to dominate the Alexandrian council.[115]
Works
Hypatia has been described as a universal genius,[116] but she was probably more of a teacher and commentator than an innovator.[117][118][21][119] No evidence has been found that Hypatia ever published any independent works on philosophy[120] and she does not appear to have made any groundbreaking mathematical discoveries.[117][118][21][119] During Hypatia's time period, scholars preserved classical mathematical works and commented on them to develop their arguments, rather than publishing original works.[117][121][122] It has also been suggested that the closure of the Mouseion and the destruction of the Serapeum may have led Hypatia and her father to focus their efforts on preserving seminal mathematical books and making them accessible to their students.[120] The Suda mistakenly states that all of Hypatia's writings have been lost,[123] but modern scholarship has identified several works by her as extant.[123] This kind of authorial uncertainty is typical of female philosophers from antiquity.[124] Hypatia wrote in Greek, which was the language spoken by most educated people in the Eastern Mediterranean at the time.[26] In classical antiquity, astronomy was seen as being essentially mathematical in character.[125] Furthermore, no distinction was made between mathematics and numerology or astronomy and astrology.[125]
Edition of the Almagest
Hypatia is now known to have edited the existing text of Book III of Ptolemy's Almagest.[126][127][128] It was once thought that Hypatia had merely revised Theon's commentary on the Almagest,[130] based on the title of Theon's commentary on the third book of Almagest, which reads "Commentary by Theon of Alexandria on Book III of Ptolemy's Almagest, edition revised by my daughter Hypatia, the philosopher",[130][131] but, based on analysis of the titles of Theon's other commentaries and similar titles from the time period, scholars have concluded that Hypatia corrected, not her father's commentary, but the text of Almagest itself.[130][132] Her contribution is thought to be an improved method for the long division algorithms needed for astronomical computation. The Ptolemaic model of the universe was geocentric, meaning it taught that the Sun revolved around the Earth. In the Almagest, Ptolemy proposed a division problem for calculating the number of degrees swept out by the Sun in a single day as it orbits the Earth. In his early commentary, Theon had tried to improve upon Ptolemy's division calculation. In the text edited by Hypatia, a tabular method is detailed.[129] This tabular method might be the "astronomical table" which historic sources attribute to Hypatia.[129] Classicist Alan Cameron additionally states that it is possible Hypatia may have edited, not only Book III, but all nine extant books of the Almagest.[127]
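The kind of computation involved is easy to illustrate. The Python sketch below is purely illustrative, not a reconstruction of Hypatia's tabular method: it divides 360° by Ptolemy's value for the length of the tropical year and expresses the quotient in the sexagesimal (base-60) notation in which Theon and Hypatia worked. The function name and the number of fractional places shown are choices made for the example.

```python
from fractions import Fraction

def to_sexagesimal(value: Fraction, places: int = 5) -> list[int]:
    """Return [integer part, d1, d2, ...], where d1, d2, ... are successive
    base-60 fractional digits of a non-negative rational number."""
    digits = [int(value)]
    frac = value - int(value)
    for _ in range(places):
        frac *= 60
        digits.append(int(frac))
        frac -= int(frac)
    return digits

# Ptolemy's tropical year: 365 + 1/4 - 1/300 days (365;14,48 in sexagesimal notation).
year_length = Fraction(365) + Fraction(1, 4) - Fraction(1, 300)

# Mean daily motion of the Sun: 360 degrees divided by the length of the year.
mean_daily_motion = Fraction(360) / year_length

print(to_sexagesimal(mean_daily_motion))
# [0, 59, 8, 17, 13, 12] -- i.e. about 0;59,8,17,13,12 degrees per day, the
# mean solar motion tabulated in the Almagest.
```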
Independent writings
Hypatia wrote a commentary on Diophantus's thirteen-volume Arithmetica, which had been written sometime around the year 250 AD.[19][34][135][136] It set out more than 100 mathematical problems, for which solutions are proposed using algebra.[137] For centuries, scholars believed that this commentary had been lost.[123] Only volumes one through six of the Arithmetica have survived in the original Greek,[19][138][134] but at least four additional volumes have been preserved in an Arabic translation produced around the year 860.[19][136] The Arabic text contains numerous expansions not found in the Greek text,[19][136] including verifications of Diophantus's examples and additional problems.[19]
Cameron states that the most likely source of the additional material is Hypatia herself, since Hypatia is the only ancient writer known to have written a commentary on the Arithmetica and the additions appear to follow the same methods used by her father Theon.[19] The first person to deduce that the additional material in the Arabic manuscripts came from Hypatia was the nineteenth-century scholar Paul Tannery.[133][139] In 1885, Sir Thomas Heath published the first English translation of the surviving portion of the Arithmetica. Heath argued that the surviving text of the Arithmetica is actually a school edition produced by Hypatia to aid her students.[138] According to Mary Ellen Waithe, Hypatia used an unusual algorithm for division (in the then-standard sexagesimal numeral system), making it easy for scholars to pick out which parts of the text she had written.[133]
The consensus that Hypatia's commentary is the source of the additional material in the Arabic manuscripts of the Arithmetica has been challenged by Wilbur Knorr, a historian of mathematics, who argues that the interpolations are "of such low level as not to require any real mathematical insight" and that the author of the interpolations can only have been "an essentially trivial mind... in direct conflict with ancient testimonies of Hypatia's high caliber as a philosopher and mathematician."[19] Cameron rejects this argument, noting that "Theon too enjoyed a high reputation, yet his surviving work has been judged 'completely unoriginal.'"[19] Cameron also insists that "Hypatia's work on Diophantus was what we today might call a school edition, designed for the use of students rather than professional mathematicians."[19]
Hypatia also wrote a commentary on Apollonius of Perga's work on conic sections,[34][133][134] but this commentary is no longer extant.[133][134] She also created an "Astronomical Canon";[34] this is believed to have been either a new edition of the Handy Tables by the Alexandrian Ptolemy or the aforementioned commentary on his Almagest.[140][141][142] Based on a close reading in comparison with her supposed contributions to the work of Diophantus, Knorr suggests that Hypatia may also have edited Archimedes' Measurement of a Circle, an anonymous text on isometric figures, and a text later used by John of Tynemouth in his work on Archimedes' measurement of the sphere.[143] A high degree of mathematical accomplishment would have been needed to comment on Apollonius's advanced mathematics or the astronomical Canon. Because of this, most scholars today recognize that Hypatia must have been among the leading mathematicians of her day.[117]
Reputed inventions
One of Synesius's letters describes Hypatia as having taught him how to construct a silver plane astrolabe as a gift for an official.[52][144][145][146] An astrolabe is a device used to calculate date and time based on the positions of the stars and planets. It can also be used to predict where the stars and planets will be on any given date.[144][147][148] A "little astrolabe", or "plane astrolabe", is a kind of astrolabe that used stereographic projection of the celestial sphere to represent the heavens on a plane surface, as opposed to an armillary sphere, which was globe-shaped.[129][147] Armillary spheres were large and normally used for display, whereas a plane astrolabe was portable and could be used for practical measurements.[147]
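As a purely illustrative aside (none of the surviving sources on Hypatia gives formulas), the projection underlying such a plate can be sketched in a few lines: points of the celestial sphere are projected from the south celestial pole onto the plane of the equator, so a circle of constant declination δ appears on the plate as a circle of radius proportional to tan((90° − δ)/2). The function name and the sample declinations below are assumptions made for the example.

```python
import math

def plate_radius(declination_deg: float, equator_radius: float = 1.0) -> float:
    """Radius on the astrolabe plate of the circle of constant declination,
    under stereographic projection from the south celestial pole onto the
    plane of the celestial equator."""
    return equator_radius * math.tan(math.radians(90.0 - declination_deg) / 2.0)

# The celestial equator maps onto the reference circle itself; the Tropic of
# Cancer falls inside it and the Tropic of Capricorn outside it, giving the
# concentric circles engraved on a traditional astrolabe plate.
for name, dec in [("Tropic of Capricorn", -23.44),
                  ("Celestial equator", 0.0),
                  ("Tropic of Cancer", 23.44)]:
    print(f"{name:20s} r = {plate_radius(dec):.3f}")
```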
The statement from Synesius's letter has sometimes been wrongly interpreted to mean that Hypatia invented the plane astrolabe herself,[37][149] but the plane astrolabe was in use at least 500 years before Hypatia was born.[52][144][149][150] Hypatia may have learned how to construct a plane astrolabe from her father Theon,[129][145][147] who had written two related works: a treatise entitled Memoirs on the Little Astrolabe and a study of the armillary sphere in Ptolemy's Almagest.[147] Theon's treatise on the little astrolabe is now lost, but it was well known to the Syrian bishop Severus Sebokht (575–667), who describes its contents in his own treatise on astrolabes.[147][151] Hypatia and Theon may have also studied Ptolemy's Planisphaerium, which describes the calculations necessary in order to construct an astrolabe.[152] Synesius's wording indicates that Hypatia did not design or construct the astrolabe herself, but merely acted as a guide and mentor during the process of constructing it.[13]
In another letter, Synesius requests Hypatia to construct him a "hydroscope", a device now known as a hydrometer, to determine the density or specific gravity of liquids.[145][149][153][154] Based on this request, it has been claimed that Hypatia invented the hydrometer herself.[149][155] The minute detail in which Synesius describes the instrument, however, indicates that he assumes she has never heard of the device,[156][157] but trusts she will be able to replicate it based on a verbal description. Hydrometers were based on Archimedes' 3rd century BC principles, may have been invented by him, and were being described by the 2nd century AD in a poem by the Roman author Remnius.[158][159][160] Although modern authors frequently credit Hypatia with having developed a variety of other inventions, these other attributions may all be discounted as spurious.[156] Booth concludes, "The modern day reputation held by Hypatia as a philosopher, mathematician, astronomer, and mechanical inventor, is disproportionate to the amount of surviving evidence of her life's work. This reputation is either built on myth or hearsay as opposed to evidence. Either that or we are missing all of the evidence that would support it."[155]
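Returning briefly to the instrument itself: what a hydroscope measures follows directly from Archimedes' principle, since a floating tube displaces its own weight of liquid and therefore sinks less deeply in a denser liquid. A minimal sketch, assuming an idealized uniform cylindrical float (a simplification made for this illustration, not a description of Synesius's device), is:

```python
def specific_gravity(depth_in_water: float, depth_in_liquid: float) -> float:
    """Specific gravity of a liquid from the submerged depths of an idealized
    uniform cylindrical float: the float displaces its own weight, so the
    submerged depth is inversely proportional to the liquid's density."""
    return depth_in_water / depth_in_liquid

# A float that sinks 10 cm in water but only 8 cm in brine indicates the brine
# is about 1.25 times as dense as water.
print(specific_gravity(10.0, 8.0))  # 1.25
```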
Legacy
Antiquity
Neoplatonism and paganism both survived for centuries after Hypatia's death,[161][162] and new academic lecture halls continued to be built in Alexandria after her death.[163] Over the next 200 years, Neoplatonist philosophers such as Hierocles of Alexandria, John Philoponus, Simplicius of Cilicia, and Olympiodorus the Younger made astronomical observations, taught mathematics, and wrote lengthy commentaries on the works of Plato and Aristotle.[161][162] Hypatia was not the last female Neoplatonist philosopher; later ones include Aedesia, Asclepigenia, and Theodora of Emesa.[163]
According to Watts, however, Hypatia had no appointed successor, no spouse, and no offspring[107][164] and her sudden death not only left her legacy unprotected, but also triggered a backlash against her entire ideology.[165] Hypatia, with her tolerance towards Christian students and her willingness to cooperate with Christian leaders, had hoped to establish a precedent that Neoplatonism and Christianity could coexist peacefully and cooperatively. Instead, her death and the subsequent failure by the Christian government to impose justice on her killers destroyed that notion entirely and led future Neoplatonists such as Damascius to regard Christian bishops as "dangerous, jealous figures who were also utterly unphilosophical."[166] Hypatia became seen as a "martyr for philosophy",[166] and her murder led philosophers to adopt attitudes that increasingly emphasized the pagan aspects of their belief system[167] and helped create a sense of identity for philosophers as pagan traditionalists set apart from the Christian masses.[168] Thus, while Hypatia's death did not bring an end to Neoplatonist philosophy as a whole, Watts argues that it did bring an end to her particular variety of it.[169]
Shortly after Hypatia's murder, a forged anti-Christian letter appeared under her name.[170] Damascius was "anxious to exploit the scandal of Hypatia's death", and attributed responsibility for her murder to Bishop Cyril and his Christian followers.[171][172] A passage from Damascius's Life of Isidore, preserved in the Suda, concludes that Hypatia's murder was due to Cyril's envy over "her wisdom exceeding all bounds and especially in the things concerning astronomy".[173][174] Damascius's account of the Christian murder of Hypatia is the sole historical source attributing direct responsibility to Bishop Cyril.[174] At the same time, Damascius was not entirely kind to Hypatia either; he characterizes her as nothing more than a wandering Cynic,[175][176] and compares her unfavorably with his own teacher Isidore of Alexandria,[175][176][177] remarking that "Isidorus greatly outshone Hypatia, not just as a man does over a woman, but in the way a genuine philosopher will over a mere geometer."[178]
Middle Ages
Hypatia's death was similar to those of Christian martyrs in Alexandria, who had been dragged through the streets during the Decian persecution in 250.[182][183][184] Other aspects of Hypatia's life also fit the mold for a Christian martyr, especially her lifelong virginity.[179][185] In the Early Middle Ages, Christians conflated Hypatia's death with stories of the Decian martyrs[179][185] and she became part of the basis for the legend of Saint Catherine of Alexandria, a virgin martyr said to have been exceedingly wise and well-educated.[179][180][181] The earliest attestation for the cult of Saint Catherine comes from the eighth century, around three hundred years after Hypatia's death.[186] One story tells of Saint Catherine being confronted by fifty pagan philosophers seeking to convert her,[181][187] but instead converting all of them to Christianity through her eloquence.[179][181] Another legend claimed that Saint Catherine had been a student of Athanasius of Alexandria.[183] In Laodikeia in Asia Minor (today Denizli in Turkey), Hypatia was venerated as identical to Saint Catherine until the late nineteenth century.[188][189]
The Byzantine Suda encyclopedia contains a very long entry about Hypatia, which summarizes two different accounts of her life.[190] The first eleven lines come from one source and the rest of the entry comes from Damascius's Life of Isidore. Most of the first eleven lines of the entry probably come from Hesychius's Onomatologos,[191] but some parts are of unknown origin, including a claim that she was "the wife of Isidore the Philosopher" (apparently Isidore of Alexandria).[34][191][192] Watts describes this as a very puzzling claim, not only because Isidore of Alexandria was not born until long after Hypatia's death, and no other philosopher of that name contemporary with Hypatia is known,[193][194][195] but also because it contradicts Damascius's own statement quoted in the same entry about Hypatia being a lifelong virgin.[193] Watts suggests that someone probably misunderstood the meaning of the word gynē used by Damascius to describe Hypatia in his Life of Isidore, since the same word can mean either "woman" or "wife".[196]
The Byzantine and Christian intellectual Photios (c. 810/820–893) includes both Damascius's account of Hypatia and Socrates Scholasticus's in his Bibliotheke.[196] In his own comments, Photios remarks on Hypatia's great fame as a scholar, but does not mention her death, perhaps indicating that he saw her scholarly work as more significant.[197] The intellectual Eudokia Makrembolitissa (1021–1096), the second wife of Byzantine emperor Constantine X Doukas, was described by the historian Nicephorus Gregoras as a "second Hypatia".[198]
Early modern period
Early eighteenth-century Deist scholar John Toland used the murder of Hypatia as the basis for an anti-Catholic tract,[199][200][201] portraying Hypatia's death in the worst possible light by changing the story and inventing elements not found in any of the ancient sources.[199][200] A 1721 response by Thomas Lewis defended Cyril,[199][202] rejected Damascius's account as unreliable because its author was "a heathen"[202] and argued that Socrates Scholasticus was "a Puritan", who was consistently biased against Cyril.[202]
Voltaire, in his Examen important de Milord Bolingbroke ou le tombeau de fanatisme (1736) interpreted Hypatia as a believer in "the laws of rational Nature" and "the capacities of the human mind free of dogmas"[117][199] and described her death as "a bestial murder perpetrated by Cyril's tonsured hounds, with a fanatical gang at their heels".[199] Later, in an entry for his Dictionnaire philosophique (1772), Voltaire again portrayed Hypatia as a freethinking deistic genius brutally murdered by ignorant and misunderstanding Christians.[117][203][204] Most of the entry ignores Hypatia herself altogether and instead deals with the controversy over whether or not Cyril was responsible for her death.[204] Voltaire concludes with the snide remark that "When one strips beautiful women naked, it is not to massacre them."[203][204]
In his monumental work The History of the Decline and Fall of the Roman Empire, the English historian Edward Gibbon expanded on Toland and Voltaire's misleading portrayals by declaring Cyril as the sole cause of all evil in Alexandria at the beginning of the fifth century[203] and construing Hypatia's murder as evidence to support his thesis that the rise of Christianity hastened the decline of the Roman Empire.[205] He remarks on Cyril's continued veneration as a Christian saint, commenting that "superstition [Christianity] perhaps would more gently expiate the blood of a virgin, than the banishment of a saint."[206] In response to these accusations, Catholic authors, as well as some French Protestants, insisted with increased vehemence that Cyril had absolutely no involvement in Hypatia's murder and that Peter the Lector was solely responsible. In the course of these heated debates, Hypatia herself tended to be cast aside and ignored, while the debates focused far more intently on the question of whether Peter the Lector had acted alone or under Cyril's orders.[204]
Nineteenth century
The play Hypatia, performed at the Haymarket Theatre in January 1893, was based on the novel by Charles Kingsley.[207]
Julia Margaret Cameron's 1867 photograph Hypatia, also inspired by Charles Kingsley's novel[207]
In the nineteenth century European literary authors spun the legend of Hypatia as part of neo-Hellenism, a movement that romanticised ancient Greeks and their values.[117] Interest in the "literary legend of Hypatia" began to rise.[203] Diodata Saluzzo Roero's 1827 Ipazia ovvero delle Filosofie suggested that Cyril had actually converted Hypatia to Christianity, and that she had been killed by a "treacherous" priest.[208]
In his 1852 Hypatie and 1857 Hypathie et Cyrille, French poet Charles Leconte de Lisle portrayed Hypatia as the epitome of "vulnerable truth and beauty".[211] Leconte de Lisle's first poem portrayed Hypatia as a woman born after her time, a victim of the laws of history.[206][212] His second poem reverted to the eighteenth-century Deistic portrayal of Hypatia as the victim of Christian brutality,[210][213] but with the twist that Hypatia tries and fails to convince Cyril that Neoplatonism and Christianity are actually fundamentally the same.[210][214] Charles Kingsley's 1853 novel Hypatia; Or, New Foes with an Old Face was originally intended as a historical treatise, but instead became a typical mid-Victorian romance with a militantly anti-Catholic message,[215][216] portraying Hypatia as a "helpless, pretentious, and erotic heroine"[217] with the "spirit of Plato and the body of Aphrodite."[218]
Kingsley's novel was tremendously popular;[219][220] it was translated into several European languages[220][221] and remained continuously in print for the rest of the century.[221] It promoted the romantic vision of Hypatia as "the last of the Hellenes"[220] and was quickly adapted into a broad variety of stage productions, the first of which was a play written by Elizabeth Bowers, performed in Philadelphia in 1859, starring the writer herself in the titular role.[221] On 2 January 1893, a much higher-profile stage play adaptation Hypatia, written by G. Stuart Ogilvie and produced by Herbert Beerbohm Tree, opened at the Haymarket Theatre in London. The title role was initially played by Julia Neilson, and it featured an elaborate musical score written by the composer Hubert Parry.[222][223] The novel also spawned works of visual art,[207] including an 1867 image portraying Hypatia as a young woman by the early photographer Julia Margaret Cameron[207][224] and an 1885 painting by Charles William Mitchell showing a nude Hypatia standing before an altar in a church.[207]
At the same time, European philosophers and scientists described Hypatia as the last representative of science and free inquiry before a "long medieval decline".[117] In 1843, German authors Soldan and Heppe argued in their highly influential History of the Witchcraft Trials that Hypatia may have been, in effect, the first famous "witch" punished under Christian authority (see witch-hunt).[225]
Hypatia was honored as an astronomer when 238 Hypatia, a main belt asteroid discovered in 1884, was named for her. The lunar crater Hypatia was also named for her, in addition to craters named for her father Theon. The 180 km Rimae Hypatia are located north of the crater, one degree south of the equator, along the Mare Tranquillitatis.[226]
Twentieth century
An actress, possibly Mary Anderson, in the title role of the play Hypatia, c. 1900. Similarities between this image and the Gaspard portrait below indicate that it may have served as a model for the Gaspard.[227]
This fictional portrait of Hypatia by Jules Maurice Gaspard, originally the illustration for Elbert Hubbard's 1908 fictional biography, has now become the most iconic and widely reproduced image of her.[228][229][230]
In 1908, American writer Elbert Hubbard published a putative biography of Hypatia in his series Little Journeys to the Homes of Great Teachers. The book is almost entirely a work of fiction.[228][231] In it, Hubbard relates a completely made-up physical exercise program which he claims Theon established for his daughter, involving "fishing, horseback-riding, and rowing".[232] He claims that Theon taught Hypatia to "Reserve your right to think, for even to think wrongly is better than to never think at all."[232] Hubbard claims that, as a young woman, Hypatia traveled to Athens, where she studied under Plutarch of Athens. All of this supposed biographical information, however, is completely fictional and is not found in any ancient source. Hubbard even attributes to Hypatia numerous completely fabricated quotations in which she presents modern, rationalist views.[232] The cover illustration for the book, a drawing of Hypatia by artist Jules Maurice Gaspard showing her as a beautiful young woman with her wavy hair tied back in the classical style, has now become the most iconic and widely reproduced image of her.[228][229][230]
Around the same time, Hypatia was adopted by feminists, and her life and death began to be viewed in the light of the women's rights movement.[233] The author Carlo Pascal claimed in 1908 that her murder was an anti-feminist act and brought about a change in the treatment of women, as well as the decline of the Mediterranean civilization in general.[234] Dora Russell published a book on the inadequate education of women and inequality with the title Hypatia or Woman and Knowledge in 1925.[235] The prologue explains why she chose the title:[235] "Hypatia was a university lecturer denounced by Church dignitaries and torn to pieces by Christians. Such will probably be the fate of this book."[226] Hypatia's death became symbolic for some historians. For example, Kathleen Wider proposes that the murder of Hypatia marked the end of Classical antiquity,[236] and Stephen Greenblatt writes that her murder "effectively marked the downfall of Alexandrian intellectual life".[237] On the other hand, Christian Wildberg notes that Hellenistic philosophy continued to flourish in the 5th and 6th centuries, and perhaps until the age of Justinian I.[238][239]
Fables should be taught as fables, myths as myths, and miracles as poetic fantasies. To teach superstitions as truths is a most terrible thing. The child mind accepts and believes them, and only through great pain and perhaps tragedy can he be in after years relieved of them. In fact, men will fight for a superstition quite as quickly as for a living truth–often more so, since a superstition is so intangible you can not get at it to refute it, but truth is a point of view, and so is changeable.
— Made-up quote attributed to Hypatia in Elbert Hubbard's 1908 fictional biography of her, along with several other similarly spurious quotations[232]
Falsehoods and misconceptions about Hypatia continued to proliferate throughout the late twentieth century.[231] Though Hubbard's fictional biography may have been intended for children,[229] Lynn M. Osen relied on it as the main source for her influential article on Hypatia in her 1974 book Women in Mathematics.[231] Fordham University used Hubbard's biography as the main source of information about Hypatia in a medieval history course.[228][231] Carl Sagan's 1980 PBS series Cosmos: A Personal Voyage relates a heavily fictionalized retelling of Hypatia's death, which results in the "Great Library of Alexandria" being burned by militant Christians.[149] In actuality, though Christians led by Theophilus did destroy the Serapeum in 391 AD, the Library of Alexandria had already ceased to exist in any recognizable form centuries prior to Hypatia's birth.[10] As a female intellectual, Hypatia became a role model for modern intelligent women and two feminist journals were named after her: the Greek journal Hypatia: Feminist Studies was launched in Athens in 1984, and Hypatia: A Journal of Feminist Philosophy in the United States in 1986.[233] In the United Kingdom, the Hypatia Trust maintains a library and archive of feminine literary, artistic and scientific work and sponsors the Hypatia-in-the-Woods women's retreat in Washington, United States.[226]
Judy Chicago's large-scale art piece The Dinner Party awards Hypatia a table setting.[240][241] The table runner depicts Hellenistic goddesses weeping over her death.[234] Chicago states that the social unrest leading to Hypatia's murder resulted from Roman patriarchy and mistreatment of women and that this ongoing unrest can only be brought to an end through the restoration of an original, primeval matriarchy.[242] She (anachronistically and incorrectly) concludes that Hypatia's writings were burned in the Library of Alexandria when it was destroyed.[234] Major works of twentieth century literature contain references to Hypatia,[243] including Marcel Proust's volume "Within a Budding Grove" from In Search of Lost Time, and Iain Pears's The Dream of Scipio.[216]
Twenty-first century
Hypatia has continued to be a popular subject in both fiction and nonfiction by authors in many countries and languages.[244] In 2015, the planet designated Iota Draconis b was named after Hypatia.[245]
In Umberto Eco's 2002 novel Baudolino, the hero's love interest is a half-satyr, half-woman descendant of a female-only community of Hypatia's disciples, collectively known as "hypatias".[246] Charlotte Kramer's 2006 novel Holy Murder: the Death of Hypatia of Alexandria portrays Cyril as an archetypal villain, while Hypatia is described as brilliant, beloved, and more knowledgeable about scripture than Cyril.[247] Ki Longfellow's novel Flow Down Like Silver (2009) invents an elaborate backstory for why Hypatia first started teaching.[248] Youssef Ziedan's novel Azazeel (2012) describes Hypatia's murder through the eyes of a witness.[249] Bruce MacLennan's 2013 book The Wisdom of Hypatia presents Hypatia as a guide who introduces Neoplatonic philosophy and exercises for modern life.[250] In The Plot to Save Socrates (2006) by Paul Levinson and its sequels, Hypatia is a time-traveler from the twenty-first century United States.[251][252][253] In the TV series The Good Place, Hypatia is played by Lisa Kudrow as one of the few ancient philosophers eligible for heaven, having never defended slavery.[254]
The 2009 film Agora, directed by Alejandro Amenábar and starring Rachel Weisz as Hypatia, is a heavily fictionalized dramatization of Hypatia's final years.[10][255][256] The film, which was intended to criticize contemporary Christian fundamentalism,[257] has had wide-ranging impact on the popular conception of Hypatia.[255] It emphasizes Hypatia's astronomical and mechanical studies rather than her philosophy, portraying her as "less Plato than Copernicus",[255] and emphasizes the restrictions imposed on women by the early Christian church,[258] including depictions of Hypatia being sexually assaulted by one of her father's Christian slaves,[259] and of Cyril reading from 1 Timothy 2:8–12 forbidding women from teaching.[259][260] The film contains numerous historical inaccuracies:[10][259][261] It inflates Hypatia's achievements[149][261] and incorrectly portrays her as finding a proof of Aristarchus of Samos's heliocentric model of the universe, which there is no evidence that Hypatia ever studied.[149] It also contains a scene based on Carl Sagan's Cosmos in which Christians raid the Serapeum and burn all of its scrolls, leaving the building itself largely intact. In reality, the Serapeum probably did not have any scrolls in it at that time,[lower-alpha 3] and the building was demolished in 391 AD.[10] The film also implies that Hypatia is an atheist, directly contradictory to the surviving sources, which all portray her as following the teachings of Plotinus that the goal of philosophy was "a mystical union with the divine."[149]
See also
• Timeline of women in science
Notes
1. /haɪˈpeɪʃə, -ʃiə/ hy-PAY-shə, -shee-ə;[2][3] Greek: Ὑπατία, Koine pronunciation [y.pa.ˈti.a]
2. Using music to relieve lustful urges was a Pythagorean remedy[61] stemming from an anecdote from the life of Pythagoras claiming that, when he encountered some drunken youths trying to break into the home of a virtuous woman, he sang a solemn tune with long spondees and the boys' "raging willfulness" was quelled.[62]
3. The Roman historian Ammianus Marcellinus, writing before the Serapeum's destruction in 391 AD, refers to the Serapeum's libraries in the past tense, indicating that the libraries no longer existed by the time of the Serapeum's destruction.
References
1. O'Connor, John J.; Robertson, Edmund F., "Hypatia of Alexandria", MacTutor History of Mathematics Archive, University of St Andrews
2. Jones, Daniel (2011), Roach, Peter; Setter, Jane; Esling, John (eds.), Cambridge English Pronouncing Dictionary (18th ed.), Cambridge University Press, ISBN 978-0-521-15255-6
3. Wells, John C. (2008), Longman Pronunciation Dictionary (3rd ed.), Longman, ISBN 978-1-4058-8118-0
4. Benedetto, Canio; Isola, Stefano; Russo, Lucio (31 January 2017), "Dating Hypatia's birth : a probabilistic model", Mathematics and Mechanics of Complex Systems, 5 (1): 19–40, doi:10.2140/memocs.2017.5.19, ISSN 2325-3444
5. Krebs, Groundbreaking Scientific Experiments, Inventions, and Discoveries; The Cambridge Dictionary of Philosophy, 2nd edition, Cambridge University Press, 1999: "Greek Neoplatonist philosopher who lived and taught in Alexandria."
6. O'Connor, John J.; Robertson, Edmund F., "Pandrosion of Alexandria", MacTutor History of Mathematics Archive, University of St Andrews
7. Deakin 2012.
8. Edward Jay Watts, (2006), City and School in Late Antique Athens and Alexandria. "Hypatia and pagan philosophical culture in the later fourth century", pp. 197–198. University of California Press
9. Deakin 1994, p. 147.
10. Theodore 2016, pp. 182–183.
11. Deakin 2007, p. 107.
12. Bradley 2006, p. 60.
13. Booth 2017, p. 112.
14. Deakin, Michael (3 August 1997), Ockham's Razor: Hypatia of Alexandria, ABC Radio, retrieved 10 July 2014
15. Watts 2008, pp. 191–192.
16. Dzielska 1996, pp. 66–70.
17. Watts 2008, p. 150.
18. Watts 2008, p. 192.
19. Cameron 2016, p. 194.
20. Cameron, Long & Sherry 1993, p. 47.
21. Booth 2017.
22. Watts 2017, p. 21.
23. Deakin 2007, p. 52.
24. Deakin 2007, p. 53.
25. Dzielska 1996, p. 70.
26. Castner 2010, p. 49.
27. Deakin 2007, pp. 51–52.
28. Dzielska 1996, p. 68.
29. Penella 1984, pp. 126–128.
30. Hoche 1860, pp. 435–474.
31. J. C. Wensdorf (1747–1748) and S. Wolf (1879), as cited by Penella (1984).
32. Castner 2010, p. 20.
33. Socrates of Constantinople, Ecclesiastical History
34. "Suda online, Upsilon 166", www.stoa.org
35. Bregman 1982, p. 55.
36. Cameron, Long & Sherry 1993, pp. 49–50.
37. Oakes 2007, p. 364.
38. Dzielska 1996, p. 56.
39. Haas 1997, p. 311.
40. Watts 2008, p. 200.
41. Watts 2008, pp. 200–201.
42. Bregman 1982, pp. 38–39.
43. Cameron, Long & Sherry 1993, pp. 58–59.
44. Cameron, Long & Sherry 1993, p. 58.
45. Watts 2017, pp. 67–70.
46. Waithe 1987, p. 173.
47. Curta & Holt 2017, p. 283.
48. Watts 2017, p. 88.
49. Dzielska 1996, p. 28.
50. Banev 2015, p. 100.
51. Watts 2017, pp. 88–90.
52. Bradley 2006, p. 63.
53. Dzielska 1996, p. 53.
54. Booth 2017, p. 141.
55. Booth 2017, p. 117.
56. Deakin 2007, p. 62.
57. Booth 2017, pp. 116–117.
58. Booth 2017, p. 116.
59. Booth 2017, pp. 128–130.
60. Watts 2017, pp. 74–75.
61. Watts 2017, p. 75.
62. Riedweg 2005, p. 30.
63. Booth 2017, p. 128.
64. Watts 2017, p. 60.
65. Watts 2008, p. 196.
66. Wessel 2004, p. 49.
67. Watts 2017, pp. 57–61.
68. Deakin 2007, p. 82.
69. Watts 2017, p. 196.
70. Dzielska 1996, p. 95.
71. Watts 2008, pp. 195–196.
72. Watts 2008, pp. 196–197.
73. Watts 2008, p. 197.
74. Dzielska 2008, p. 139.
75. Deakin 2007, p. 83.
76. Haas 1997, pp. 310–311.
77. Seaver, James Everett (1952), Persecution of the Jews in the Roman Empire (300-438), Lawrence, University of Kansas Publications, 1952.
78. "John of Nikiu: The Life of Hypatia", www.faculty.umb.edu, retrieved 19 July 2020
79. Wessel 2004, pp. 36–37.
80. Haas 1997, p. 312.
81. Wessel 2004, p. 36.
82. Wessel 2004, p. 37.
83. Haas 1997, p. 306.
84. Haas 1997, pp. 306–307.
85. Haas 1997, pp. 307, 313.
86. Haas 1997, p. 307.
87. Watts 2008, pp. 197–198.
88. Novak 2010, pp. 239–240.
89. Watts 2008, p. 198.
90. Watts 2008, pp. 199–200.
91. Haas 1997, pp. 312–313.
92. Chronicle 84.87–103, archived from the original on 31 July 2010
93. John, Bishop of Nikiû, Chronicle 84.87–103
94. Grout, James, "Hypatia", Penelope, University of Chicago
95. Novak 2010, p. 240.
96. Watts 2017, pp. 114–115.
97. Haas 1997, p. 313.
98. Dzielska 1996, p. 93.
99. Watts 2017, pp. 115–116.
100. Watts 2008, pp. 198–199.
101. Watts 2017, p. 116.
102. Watts 2008, p. 199.
103. Haas 1997, pp. 235–236, 314.
104. Haas 1997, p. 314.
105. Cameron, Long & Sherry 1993, p. 59.
106. Ecclesiastical History, Bk VII: Chap. 15 (miscited as VI:15).
107. Watts 2017, p. 117.
108. Belenkiy 2010, pp. 9–13.
109. Cameron 2016, p. 190.
110. Watts 2017, p. 157.
111. Watts 2017, p. 121.
112. Dzielska 1996, pp. 95–96.
113. Haas 1997, p. 436.
114. Haas 1997, pp. 67, 436.
115. Watts 2008, pp. 197–200.
116. MacDonald, Beverley and Weldon, Andrew. (2003). Written in Blood: A Brief History of Civilization (pg. 173). Allen & Unwin.
117. Castner 2010, p. 50.
118. Deakin 2007, p. 111.
119. Cameron 2016, pp. 194–195.
120. Dzielska 2008, p. 132.
121. Bradley 2006, pp. 59–60.
122. Booth 2017, p. 106.
123. Waithe 1987, pp. 174–175.
124. Engels 2009, pp. 97–124.
125. Emmer 2012, p. 74.
126. Dzielska 1996, pp. 71–2.
127. Cameron 2016, pp. 193–194.
128. Booth 2017, pp. 108–111.
129. Emmer 2012, p. 76.
130. Dzielska 1996, pp. 71–72.
131. Cameron, Long & Sherry 1993, p. 45.
132. Cameron, Long & Sherry 1993, pp. 45–47.
133. Waithe 1987, p. 175.
134. Booth 2017, p. 110.
135. Deakin 1992, pp. 20–22.
136. Booth 2017, p. 109.
137. Bradley 2006, p. 61.
138. Deakin 1992, p. 21.
139. Sir Thomas Little Heath (1910), Diophantus of Alexandria; A Study in the History of Greek Algebra (2nd ed.), Cambridge University Press, republished 2017, pp. 14 & 18
140. Dzielska 1996, p. 72.
141. Deakin 1994.
142. Dixon, Don, COSMOGRAPHICA Space Art and Science Illustration
143. Knorr 1989.
144. Deakin 2007, pp. 102–104.
145. Deakin 1992, p. 22.
146. Booth 2017, pp. 111–113.
147. Booth 2017, p. 111.
148. Pasachoff & Pasachoff 2007, p. 226.
149. Theodore 2016, p. 183.
150. Booth 2017, pp. 112–113.
151. Virginia Trimble; Thomas R. Williams; Katherine Bracher; Richard Jarrell; Jordan D. Marché; F. Jamil Ragep, eds. (2007), Biographical Encyclopedia of Astronomers, Springer, pp. 1134, ISBN 978-0387304007
152. Booth 2017, pp. 111–112.
153. Deakin 2007, pp. 104–105.
154. Booth 2017, pp. 113–114.
155. Booth 2017, p. 115.
156. Deakin 2007, p. 105.
157. Booth 2017, pp. 114–115.
158. Bensaude-Vincent, Bernadette (2002), Holmes, Frederic L.; Levere, Trevor H. (eds.), Instruments and Experimentation in the History of Chemistry, Massachusetts Institute of Technology Press, p. 153
159. Hornsey, Ian Spencer (2003), A History of Beer and Brewing, Royal Society of Chemistry, p. 429
160. Bendick, Jeanne (2011), Archimedes and the Door of Science, Literary Licensing, LLC, pp. 63–64
161. Booth 2017, pp. 151–152.
162. Watts 2017, pp. 154–155.
163. Booth 2017, p. 151.
164. Watts 2008, p. 201.
165. Watts 2017, pp. 117–119.
166. Watts 2017, p. 119.
167. Watts 2017, pp. 119–120.
168. Watts 2017, p. 120.
169. Watts 2017, p. 155.
170. Synodicon, c. 216, in iv. tom. Concil. p. 484, as detailed in The History of the Decline and Fall of the Roman Empire, vol. 8, chapter XLVII
171. Whitfield 1995, p. 14.
172. Wessel 2004, p. 51.
173. Rosser 2008, p. 12.
174. Dzielska 1996, p. 18.
175. Wessel 2004, pp. 52–53.
176. Cameron, Long & Sherry 1993, pp. 41–44.
177. Dzielska 1996, p. 55.
178. Deakin 2007, p. 54.
179. Deakin 2007, pp. 135–136.
180. Walsh 2007, p. 10.
181. Booth 2017, p. 152.
182. Dzielska 2008, p. 141.
183. Walsh 2007, p. 11.
184. Booth 2017, p. 150.
185. Walsh 2007, pp. 10–11.
186. Walsh 2007, p. 34.
187. Deakin 2007, p. 135.
188. Espetsieris K., "Icons of Greek philosophs in Churches", ("Εσπετσιέρης Κ., "Εικόνες Ελλήνων φιλοσόφων εις Εκκλησίας”), Επιστ. Επετηρίς Φιλοσοφ. Σχολ. Παν/μίου Αθηνών 14 (1963-64), pp. 391, 441 – 443 In Greek.
189. Espetsieris K., "Icons of Greek philosophs in Churches. Complementary information." (Εσπετσιέρης Κ., "Εικόνες Ελλήνων φιλοσόφων εις Εκκλησίας). Συμπληρωματικά στοιχεία”, Επιστ. Επετηρίς Φιλοσοφ. Σχολ. Παν/μίου Αθηνών 24 (1973-74), pp. 418-421 In Greek.
190. Watts 2017, pp. 128–129.
191. Watts 2017, p. 129.
192. Booth 2017, p. 130.
193. Watts 2017, pp. 129–130.
194. Booth 2017, pp. 130–131.
195. "Isidorus 1" entry in John Robert Martindale, (1980), The Prosopography of the Later Roman Empire. Cambridge University Press
196. Watts 2017, p. 130.
197. Watts 2017, pp. 130–131.
198. Dzielska 1996, p. 67.
199. Dzielska 1996, p. 2.
200. Watts 2017, pp. 135–136.
201. Ogilvie, M. B. (1986). Women in science: Antiquity through the 19th century. Cambridge, MA: The MIT Press.
202. Watts 2017, pp. 136–137.
203. Dzielska 1996, p. 3.
204. Watts 2017, p. 139.
205. Dzielska 1996, pp. 3–4.
206. Dzielska 1996, p. 4.
207. Watts 2017, p. 142.
208. Saluzzo Roero, Diodata (1827), Ipazia ovvero Delle filosofie poema di Diodata Saluzzo Roero.
209. Grout, James, "The Death of Hypatia", Penelope, University of Chicago
210. Booth 2017, pp. 21–22.
211. Edwards 1999, p. 112.
212. Booth 2017, pp. 20–21.
213. Dzielska 1996, pp. 4–5.
214. Dzielska 1996, pp. 5–6.
215. Dzielska 1996, p. 8.
216. Booth 2017, p. 15.
217. Snyder, J.M. (1989), The woman and the lyre: Women writers in classical Greece and Rome, Carbondale, IL: Southern Illinois University Press
218. Dzielska 1996, p. 9.
219. Watts 2017, pp. 141–142.
220. Dzielska 1996, p. 11.
221. Watts 2017, p. 141.
222. Macqueen-Pope 1948, p. 337.
223. Archer 2013, p. 9.
224. Marsh, Jan; Nunn, Pamela Gerrish (1997), Pre-Raphaelite women artists: Barbara Leigh Smith Bodichon …, Manchester City Art Galleries, ISBN 978-0-901673-55-8
225. Soldan, Wilhelm Gottlieb (1843), Geschichte der Hexenprozesse: aus dem Qvellen Dargestellt, Cotta, p.82.
226. Booth 2017, p. 27.
227. Booth 2017, pp. 25–26, 28.
228. Deakin 2007, p. 163.
229. Cohen 2008, p. 47.
230. Booth 2017, pp. 25–26.
231. Cohen 2008, pp. 47–48.
232. Cohen 2008, p. 48.
233. Dzielska 1996, p. 16.
234. Booth 2017, p. 25.
235. Booth 2017, pp. 26–27.
236. Wider, Kathleen (1986), "Women Philosophers in the Ancient Greek World: Donning the Mantle", Hypatia, 1 (1): 21–62, doi:10.1111/j.1527-2001.1986.tb00521.x, JSTOR 3810062, S2CID 144952549
237. Greenblatt, The Swerve: how the world became modern 2011:93.
238. Christian Wildberg, in Hypatia of Alexandria – a philosophical martyr, The Philosopher's Zone, ABC Radio National (4 April 2009)
239. Dzielska 1996, p. 105.
240. Snyder, Carol (1980–1981), "Reading the Language of "The Dinner Party"", Woman's Art Journal, 1 (2): 30–34, doi:10.2307/1358081, JSTOR 1358081, Among the raised images distributed on the first two wings of the table are two with broken edges—the Hypatia and Petronilla da Meath plates. Chicago confirmed my reading of the broken edge as a reference to the violent deaths both women suffered".
241. Booth 2017, p. 22.
242. Booth 2017, pp. 22–23.
243. Booth 2017, pp. 14–30.
244. Booth 2017, pp. 13–20.
245. Pasachoff, Jay M.; Filippenko, Alex (11 July 2019), The Cosmos: Astronomy in the New Millennium, Cambridge University Press, p. 658, ISBN 978-1-108-43138-5
246. Booth 2017, p. 16.
247. Booth 2017, pp. 16–18.
248. Booth 2017, pp. 18–19.
249. Booth 2017, pp. 19–20.
250. Majumdar, Deepa (September 2015), "Review of The Wisdom of Hypatia", The International Journal of the Platonic Tradition, 9 (2): 261–265, doi:10.1163/18725473-12341327
251. Levinson, Paul (November 2008), "Unburning Alexandria", Analog Science Fiction and Fact, archived from the original on 14 March 2013, retrieved 26 March 2013
252. Clark, Brian Charles (2006), The Plot to Save Socrates – book review, Curled Up With A Good Book, retrieved 26 March 2013
253. Inteview [sic] with Paul Levinson, Author of Unburning Alexandria, The Morton Report, 2013, archived from the original on 28 December 2019, retrieved 3 November 2013
254. Turchiano, Danielle (24 January 2020), "'The Good Place' Boss on Reaching the Titular Location, Finding It's a 'Bummer'", Variety, retrieved 24 January 2020
255. Watts 2017, p. 145.
256. Booth 2017, pp. 13–14.
257. Theodore 2016, p. 182.
258. Watts 2017, pp. 145–146.
259. Watts 2017, p. 146.
260. Booth 2017, p. 14.
261. Mark 2014.
Bibliography
• Archer, William (2013), The Theatrical World For 1893–1897, HardPress, ISBN 978-1-314-50827-7
• Banev, Krastu (2015), Theophilus of Alexandria and the First Origenist Controversy: Rhetoric and Power, Oxford, England: Oxford University Press, ISBN 978-0-19-872754-5
• Belenkiy, Ari (1 April 2010), "An astronomical murder?", Astronomy & Geophysics, 51 (2): 2.9–2.13, Bibcode:2010A&G....51b...9B, doi:10.1111/j.1468-4004.2010.51209.x
• Booth, Charlotte (2017), Hypatia: Mathematician, Philosopher, Myth, London: Fonthill Media, ISBN 978-1-78155-546-0
• Bradley, Michael John (2006), The Birth of Mathematics: Ancient Times to 1300, New York City: Infobase Publishing, ISBN 978-0-8160-5423-7
• Bregman, Jay (1982), Brown, Peter (ed.), Synesius of Cyrene: Philosopher-Bishop, The Transformation of the Classical Heritage, Berkeley: University of California Press, ISBN 978-0-520-04192-9
• Cameron, Alan; Long, Jacqueline; Sherry, Lee (1993), Barbarians and Politics at the Court of Arcadius, Berkeley and Los Angeles: University of California Press, ISBN 978-0-520-06550-5
• Cameron, Alan (2016), "Hypatia: Life, Death, and Works", Wandering Poets and Other Essays on Late Greek Literature and Philosophy, Oxford, England: Oxford University Press, ISBN 978-0-19-026894-7
• Castner, Catherine J. (2010), "Hypatia", in Gagarin, Michael; Fantham, Elaine (eds.), The Oxford Encyclopedia of Ancient Greece and Rome, vol. 1: Academy-Bible, Oxford, England: Oxford University Press, pp. 49–51, ISBN 978-0-19-538839-8
• Cerqueiro, Daniel (2006), Hipatia de Alejandría, la filósofa (in Spanish), Buenos Aires: Pequeña Venecia, ISBN 978-987-9239-16-2
• Cohen, Martin (2008), Philosophical Tales: Being an Alternative History Revealing the Characters, the Plots, and the Hidden Scenes that Make Up the True Story of Philosophy, New York City and London: Wiley-Blackwell, ISBN 978-1405140379
• Curta, Florin; Holt, Andrew (2017), Great Events in Religion: An Encyclopedia of Pivotal Events in Religious History, vol. 1: Prehistory to AD 600, Santa Barbara, CA: ABC-CLIO, ISBN 978-1-4408-4598-7
• Deakin, M. A. B. (1992), "Hypatia of Alexandria" (PDF), History of Mathematics Section, Function, 16 (1): 17–22, archived (PDF) from the original on 9 October 2022
• Deakin, Michael A. B. (1994), "Hypatia and her mathematics", American Mathematical Monthly, 101 (3): 234–243, doi:10.2307/2975600, JSTOR 2975600, MR 1264003
• Deakin, Michael A. B. (2007), Hypatia of Alexandria: Mathematician and Martyr, Amherst, NY: Prometheus Books, ISBN 978-1-59102-520-7
• Deakin, Michael (2012), "Hypatia", Encyclopædia Britannica
• Dzielska, Maria (1996), Hypatia of Alexandria, Cambridge, MA: Harvard University Press, ISBN 978-0-674-43776-0
• Dzielska, Maria (2008), "Learned women in the Alexandrian scholarship and society of late Hellenism", in el-Abbadi, Mostafa; Fathallah, Omnia Mounir (eds.), What Happened to the Ancient Library of Alexandria?, Leiden, The Netherlands: Brill, pp. 129–148, ISBN 978-9004165458
• Edwards, Catharine (1999), Roman Presences: Receptions of Rome in European Culture, 1789-1945, Cambridge, England: Cambridge University Press, ISBN 978-0-521-59197-3
• Emmer, Michele (2012), Imagine Math: Between Culture and Mathematics, New York City: Springer, ISBN 978-8847024274
• Engels, David (2009), "Zwischen Philosophie und Religion: Weibliche Intellektuelle in Spätantike und Islam", in Groß, Dominik (ed.), Gender schafft Wissen, Wissenschaft Gender? Geschlechtsspezifische Unterscheidungen Rollenzuschreibungen im Wandel der Zeit (PDF), Kassel University Press, pp. 97–124, ISBN 978-3-89958-449-3, archived (PDF) from the original on 9 October 2022
• Haas, Christopher (1997), Alexandria in Late Antiquity: Topography and Social Conflict, Baltimore, MS and London: The Johns Hopkins University Press, ISBN 978-0-8018-5377-7
• Hoche, Richard (January 1860), "XV. Hypatia, die tochter Theons", Philologus (in German), 15 (1–3): 435–474, doi:10.1524/phil.1860.15.13.435, S2CID 165101471
• Knorr, Wilbur (1989), "III.11: On Hypatia of Alexandria", Studies in Ancient and Medieval Geometry, Boston: Birkhäuser, pp. 753–804, doi:10.1007/978-1-4612-3690-0_27, ISBN 978-0-8176-3387-5, S2CID 160209956
• Macqueen-Pope, Walter (1948), Haymarket: theatre of perfection, Allen
• Mark, Joshua J. (17 February 2014), "Historical Accuracy in the Film Agora", World History Encyclopedia
• Novak, Ralph Martin Jr. (2010), Christianity and the Roman Empire: Background Texts, Harrisburg, PA: Bloomsbury Publishing, pp. 239–240, ISBN 978-1-56338-347-2
• Oakes, Elizabeth H. (2007), "Hypatia", Encyclopedia of World Scientists, New York City: Infobase Publishing, p. 364, ISBN 9781438118826
• Pasachoff, Naomi; Pasachoff, Jay M. (2007), "Hypatia", in Trimble, Virginia; Williams, Thomas R.; Bracher, Katherine; Jarrell, Richard; Marché, Jordan D.; Ragep, F. Jamil (eds.), Biographical Encyclopedia of Astronomers, New York City: Springer, ISBN 978-0387304007
• Penella, Robert J. (1984), "When was Hypatia born?", Historia: Zeitschrift für Alte Geschichte, 33 (1): 126–128, JSTOR 4435877
• Riedweg, Christoph (2005) [2002], Pythagoras: His Life, Teachings, and Influence, Ithaca, NY: Cornell University Press, ISBN 978-0-8014-7452-1
• Rosser, Sue Vilhauser (2008), Women, Science, and Myth: Gender Beliefs from Antiquity to the Present, Santa Barbara, California: ABC-CLIO, ISBN 978-1598840957
• Theodore, Jonathan (2016), The Modern Cultural Myth of the Decline and Fall of the Roman Empire, Manchester, England: Palgrave Macmillan, ISBN 978-1-137-56997-4
• Waithe, Mary Ellen (1987), Ancient Women Philosophers: 600 B.C.–500 A.D., vol. 1, Dordrecht, The Netherlands: Martinus Nijhoff Publishers, ISBN 978-90-247-3368-2
• Walsh, Christine (2007), The Cult of St Katherine of Alexandria in Early Medieval Europe, Ashgate Publishing, Ltd., ISBN 978-0-7546-5861-0
• Watts, Edward J. (2008) [2006], City and School in Late Antique Athens and Alexandria, Berkeley and Los Angeles: University of California Press, ISBN 978-0520258167
• Watts, Edward J. (2017), Hypatia: The Life and Legend of an Ancient Philosopher, Oxford, England: Oxford University Press, ISBN 978-0190659141
• Wessel, Susan (2004), Cyril of Alexandria and the Nestorian Controversy: The Making of a Saint and of a Heretic, Oxford Early Christian Studies, Oxford, England: Oxford University Press, ISBN 978-0-19-926846-7
• Whitfield, Bryan J. (Summer 1995), "The Beauty of Reasoning: A Reexamination of Hypatia and Alexandria" (PDF), The Mathematics Educator, 6 (1): 14–21, archived from the original (PDF) on 2 September 2006, retrieved 17 April 2010
Further reading
• Berggren, J. L. (February 2009), "The life and death of Hypatia", Metascience, 18 (1): 93–97, doi:10.1007/s11016-009-9256-z, S2CID 170359849
• Bernardi, Gabriella (2016), "Hypatia of Alexandria (355 or 370 c. to 415)", The Unforgotten Sisters: Female Astronomers and Scientists before Caroline Herschel, Springer Praxis Books, pp. 27–36, doi:10.1007/978-3-319-26127-0_5, ISBN 978-3319261270
• Brakke, David (2018), "Hypatia", in Torjesen, Karen; Gabra, Gawdat (eds.), Claremont Coptic Encyclopedia, Claremont Graduate University
• Cain, Kathleen (Spring 1986), "Hypatia, the Alexandrian Library, and M.L.S. (Martyr-Librarian Syndrome)", Community & Junior College Libraries, 4 (3): 35–39, doi:10.1300/J107V04N03_05
• Cameron, Alan (1990), "Isadore of Miletus and Hypatia: On the editing of mathematical texts", Greek, Roman and Byzantine Studies, 31 (1): 103–127
• Donovan, Sandy (2008), Hypatia: Mathematician, Inventor, and Philosopher (Signature Lives: Ancient World), Compass Point Books, ISBN 978-0756537609
• Molinaro, Ursule (1990), "A Christian Martyr in Reverse: Hypatia", A Full Moon of Women: 29 Word Portraits of Notable Women From Different Times and Places + 1 Void of Course, New York City: Dutton, ISBN 978-0-525-24848-4
• Nietupski, Nancy (1993), "Hypatia of Alexandria: Mathematician, astronomer and philosopher", Alexandria, Phanes Press, 2: 45–56, ISBN 978-0933999978. See also The Life of Hypatia from The Suda (Jeremiah Reedy, trans.), pp. 57–58, The Life of Hypatia by Socrates Scholasticus from his Ecclesiastical History 7.13, pp. 59–60, and The Life of Hypatia by John, Bishop of Nikiu, from his Chronicle 84.87–103, pp. 61–63.
• Parsons, Reuben (1892), "St. Cyril of Alexandria and the Murder of Hypatia", Some Lies and Errors of History, Notre Dame, IN: Office of the "Ave Maria", pp. 44–53
• Richeson, A. W. (1940), "Hypatia of Alexandria" (PDF), National Mathematics Magazine, 15 (2): 74–82, doi:10.2307/3028426, JSTOR 3028426
• Rist, J. M. (1965), "Hypatia", Phoenix, 19 (3): 214–225, doi:10.2307/1086284, JSTOR 1086284
• Ronchey, Silvia (2021) [2011], Hypatia: The True Story, Berlin-New York: DeGruyter, ISBN 978-3-1107-1757-0
• Schaefer, Francis (1902), "St. Cyril of Alexandria and the Murder of Hypatia", The Catholic University Bulletin, 8: 441–453.
• Teruel, Pedro Jesús (2011), Filosofía y Ciencia en Hipatia (in Spanish), Madrid: Gredos, ISBN 978-84-249-1939-9
• Vogt, Kari (1993), "'The Hierophant of Philosophy' – Hypatia of Alexandria", in Børresen, Kari Elisabeth; Vogt, Kari (eds.), Women's Studies of the Christian and Islamic Traditions: Ancient, Medieval and Renaissance Foremothers, Dordrecht: Kluwer Academic Publishers, pp. 155–175, doi:10.1007/978-94-011-1664-0_3, ISBN 978-9401116640
• Zielinski, Sarah (14 March 2010), "Hypatia, Alexandria's Great Female Scholar", Smithsonian, archived from the original on 4 January 2014, retrieved 28 May 2010
External links
• International Society for Neoplatonic Studies
• Socrates of Constantinople, Ecclesiastical History, VII.15, at the Internet Archive
• (in Greek and Latin) Socrates of Constantinople, Ecclesiastical History, VII.15 (pp. 760–761), at the Documenta Catholica Omnia
| Wikipedia |
Hille–Yosida theorem
In functional analysis, the Hille–Yosida theorem characterizes the generators of strongly continuous one-parameter semigroups of linear operators on Banach spaces. It is sometimes stated for the special case of contraction semigroups, with the general case being called the Feller–Miyadera–Phillips theorem (after William Feller, Isao Miyadera, and Ralph Phillips). The contraction semigroup case is widely used in the theory of Markov processes. In other scenarios, the closely related Lumer–Phillips theorem is often more useful in determining whether a given operator generates a strongly continuous contraction semigroup. The theorem is named after the mathematicians Einar Hille and Kōsaku Yosida who independently discovered the result around 1948.
Formal definitions
Main article: C0-semigroup
If X is a Banach space, a one-parameter semigroup of operators on X is a family of operators $\{T(t)\}_{t\in [0,\infty )}$ indexed by the non-negative real numbers such that
• $T(0)=I\quad $
• $T(s+t)=T(s)\circ T(t),\quad \forall t,s\geq 0.$
The semigroup is said to be strongly continuous, also called a (C0) semigroup, if and only if the mapping
$t\mapsto T(t)x$
is continuous for all x ∈ X, where [0, ∞) has the usual topology and X has the norm topology.
The infinitesimal generator of a one-parameter semigroup T is an operator A defined on a possibly proper subspace of X as follows:
• The domain of A is the set of x ∈ X such that
$h^{-1}{\bigg (}T(h)x-x{\bigg )}$
has a limit as h approaches 0 from the right.
• The value of Ax is the value of the above limit. In other words, Ax is the right-derivative at 0 of the function
$t\mapsto T(t)x.$
The infinitesimal generator of a strongly continuous one-parameter semigroup is a closed linear operator defined on a dense linear subspace of X.
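A standard concrete example (added here purely as an illustration; it is not part of the original article) is the left-translation semigroup on $X=L^{2}(\mathbb {R} )$, defined by
$(T(t)f)(x)=f(x+t),\quad t\geq 0.$
Each $T(t)$ is an isometry, the semigroup law is immediate, and strong continuity follows from the continuity of translation in $L^{2}$. Its infinitesimal generator is the weak derivative,
$Af=f',\quad D(A)=H^{1}(\mathbb {R} ),$
which is closed and densely defined but unbounded.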
The Hille–Yosida theorem provides a necessary and sufficient condition for a closed linear operator A on a Banach space to be the infinitesimal generator of a strongly continuous one-parameter semigroup.
Statement of the theorem
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X, ω a real number, and M > 0. Then A generates a strongly continuous semigroup T that satisfies $\|T(t)\|\leq M{\rm {e}}^{\omega t}$ if and only if[1]
1. A is closed and D(A) is dense in X,
2. every real λ > ω belongs to the resolvent set of A and for such λ and for all positive integers n,
$\|(\lambda I-A)^{-n}\|\leq {\frac {M}{(\lambda -\omega )^{n}}}.$
Hille–Yosida theorem for contraction semigroups
In the general case the Hille–Yosida theorem is mainly of theoretical importance, since the estimates on the powers of the resolvent operator that appear in the statement of the theorem usually cannot be checked in concrete examples. In the special case of contraction semigroups (M = 1 and ω = 0 in the above theorem) only the case n = 1 has to be checked, and the theorem then also becomes of practical importance. The explicit statement of the Hille–Yosida theorem for contraction semigroups is:
Let A be a linear operator defined on a linear subspace D(A) of the Banach space X. Then A generates a contraction semigroup if and only if[2]
1. A is closed and D(A) is dense in X,
2. every real λ > 0 belongs to the resolvent set of A and for such λ,
$\|(\lambda I-A)^{-1}\|\leq {\frac {1}{\lambda }}.$
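As an added worked check (an illustration, not part of the original article), both conditions can be verified directly for the translation semigroup introduced above. Its generator $A=d/dx$ is closed with dense domain $H^{1}(\mathbb {R} )$, and for every $\lambda >0$ the resolvent is the Laplace transform of the semigroup,
$((\lambda I-A)^{-1}f)(x)=\int _{0}^{\infty }e^{-\lambda s}f(x+s)\,ds,$
so that Minkowski's integral inequality gives
$\|(\lambda I-A)^{-1}f\|\leq \int _{0}^{\infty }e^{-\lambda s}\|f\|\,ds={\frac {1}{\lambda }}\|f\|,$
which is exactly the required estimate; the theorem then recovers the fact that $A$ generates a contraction semigroup, namely the translations themselves.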
See also
• C0 semigroup
• Lumer–Phillips theorem
• Stone's theorem on one-parameter unitary groups
Notes
1. Engel and Nagel Theorem II.3.8, Arendt et al. Theorem 3.3.4, Staffans Theorem 3.4.1
2. Engel and Nagel Theorem II.3.5, Arendt et al. Corollary 3.3.5, Staffans Corollary 3.4.5
References
• Riesz, F.; Sz.-Nagy, B. (1995), Functional analysis. Reprint of the 1955 original, Dover Books on Advanced Mathematics, Dover, ISBN 0-486-66289-6
• Reed, Michael; Simon, Barry (1975), Methods of modern mathematical physics. II. Fourier analysis, self-adjointness., Academic Press, ISBN 0-12-585050-6
• Engel, Klaus-Jochen; Nagel, Rainer (2000), One-parameter semigroups for linear evolution equations, Springer, ISBN 0-387-98463-1
• Arendt, Wolfgang; Batty, Charles; Hieber, Matthias; Neubrander, Frank (2001), Vector-valued Laplace Transforms and Cauchy Problems, Birkhauser, ISBN 0-8176-6549-8
• Staffans, Olof (2005), Well-posed linear systems, Cambridge University Press, ISBN 0-521-82584-9
• Feller, William (1971), An introduction to probability theory and its applications, vol. II (Second ed.), New York: John Wiley & Sons, ISBN 0-471-25709-5
• Vrabie, Ioan I. (2003), C0-semigroups and applications, North-Holland Mathematics Studies, vol. 191, Amsterdam: North-Holland Publishing, ISBN 0-444-51288-8
| Wikipedia |
February 2020, 40(2): 1191-1231. doi: 10.3934/dcds.2020075
Measure solutions to a system of continuity equations driven by Newtonian nonlocal interactions
José Antonio Carrillo 1,* , Marco Di Francesco 2, Antonio Esposito 2, Simone Fagioli 2 and Markus Schmidtchen 1
Department of Mathematics, Imperial College London, London SW7 2AZ, United Kingdom
DISIM - Department of Information Engineering, Computer Science and Mathematics, University of L'Aquila, Via Vetoio 1 (Coppito) 67100 L'Aquila (AQ), Italy
* Corresponding author: José A. Carrillo
Received June 2019 Revised September 2019 Published November 2019
Figure(4)
We prove global-in-time existence and uniqueness of measure solutions of a nonlocal interaction system of two species in one spatial dimension. For initial data including atomic parts we provide a notion of gradient-flow solutions in terms of the pseudo-inverses of the corresponding cumulative distribution functions, for which the system can be stated as a gradient flow on the Hilbert space $ L^2(0,1)^2 $ according to the classical theory by Brézis. For absolutely continuous initial data we construct solutions using a minimising movement scheme in the set of probability measures. In addition we show that the scheme preserves finiteness of the $ L^m $-norms for all $ m\in [1,+\infty] $ and of the second moments. We then provide a characterisation of equilibria and prove that they are achieved (up to time subsequences) in the large time asymptotics. We conclude the paper constructing two examples of non-uniqueness of measure solutions emanating from the same (atomic) initial datum, showing that the notion of gradient flow solution is necessary to single out a unique measure solution.
Keywords: Systems of aggregation equations, Newtonian potentials, uniqueness of solutions, gradient flows, long time asymptotics.
Mathematics Subject Classification: Primary: 45K05, 35A15; Secondary: 92D25, 35L45, 35L80.
Citation: José Antonio Carrillo, Marco Di Francesco, Antonio Esposito, Simone Fagioli, Markus Schmidtchen. Measure solutions to a system of continuity equations driven by Newtonian nonlocal interactions. Discrete & Continuous Dynamical Systems, 2020, 40 (2) : 1191-1231. doi: 10.3934/dcds.2020075
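To make the pseudo-inverse formulation used above more concrete, the short Python sketch below is added as an editorial illustration only (it is not code from the paper; the function names, the midpoint quadrature and the toy data are hypothetical choices). It computes the pseudo-inverse $X_\mu(s)=\inf\{x: F_\mu(x)\ge s\}$ of the cumulative distribution function of a discrete one-dimensional measure and evaluates the 2-Wasserstein distance through the identity $W_2(\mu,\nu)^2=\int_0^1 |X_\mu(s)-X_\nu(s)|^2\,ds$, the same change of variables that allows the system to be posed on the Hilbert space $L^2(0,1)^2$.
import numpy as np

def pseudo_inverse(atoms, weights, s):
    # Pseudo-inverse (quantile function) X(s) = inf{ x : F(x) >= s } of the CDF of
    # the discrete measure sum_i weights[i] * delta_{atoms[i]}, with weights summing to 1.
    order = np.argsort(atoms)
    x = np.asarray(atoms, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cdf = np.cumsum(w)
    idx = np.minimum(np.searchsorted(cdf, s, side="left"), len(x) - 1)
    return x[idx]

def wasserstein2(atoms1, w1, atoms2, w2, n=10000):
    # W_2 distance evaluated as the L^2(0,1) distance between the two pseudo-inverses,
    # approximated by a midpoint rule on (0,1).
    s = (np.arange(n) + 0.5) / n
    X1 = pseudo_inverse(atoms1, w1, s)
    X2 = pseudo_inverse(atoms2, w2, s)
    return np.sqrt(np.mean((X1 - X2) ** 2))

# Toy example: mu = (delta_0 + delta_1)/2 and nu = delta_{1/2}; the exact distance is 1/2.
print(wasserstein2([0.0, 1.0], [0.5, 0.5], [0.5], [1.0]))
For atomic measures such as those in this toy example the pseudo-inverse is a step function in $L^2(0,1)$, which is the setting in which the gradient-flow solutions of the paper are formulated.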
L. Ambrosio, N. Gigli and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics ETH Zürich. Birkhäuser Verlag, Basel, second edition, 2008. Google Scholar
L. Ambrosio and G. Savaré, Gradient flows of probability measures, Handbook of Differential Equations: Evolutionary Equations, 3 (2007), 1-136. doi: 10.1016/S1874-5717(07)80004-1. Google Scholar
A. Arnold, P. Markowich and G. Toscani, On large time asymptotics for drift-diffusion-Poisson systems, Proceedings of the Fifth International Workshop on Mathematical Aspects of Fluid and Plasma Dynamics (Maui, HI, 1998), 29 (2000), 571-581. doi: 10.1080/00411450008205893. Google Scholar
D. Balagué, J. A. Carrillo, T. Laurent and G. Raoul, Dimensionality of local minimizers of the interaction energy, Arch. Ration. Mech. Anal., 209 (2013), 1055-1088. doi: 10.1007/s00205-013-0644-6. Google Scholar
D. Benedetto, E. Caglioti and M. Pulvirenti, A kinetic equation for granular media, RAIRO-Modélisation Mathématique et Analyse Numérique, 31 (1997), 615–641. doi: 10.1051/m2an/1997310506151. Google Scholar
A. L. Bertozzi, J. A. Carrillo and T. Laurent, Blow-up in multidimensional aggregation equations with mildly singular interaction kernels, Nonlinearity, 22 (2009), 683-710. doi: 10.1088/0951-7715/22/3/009. Google Scholar
A. L. Bertozzi, T. Kolokolnikov, H. Sun, D. Uminsky and J. von Brecht, Ring patterns and their bifurcations in a nonlocal model of biological swarms, Commun. Math. Sci., 13 (2015), 955-985. doi: 10.4310/CMS.2015.v13.n4.a6. Google Scholar
A. L. Bertozzi, T. Laurent and F. Léger, Aggregation and spreading via the newtonian potential: The dynamics of patch solutions, Mathematical Models and Methods in Applied Sciences, 22 (2012), 1140005, 39pp. doi: 10.1142/S0218202511400057. Google Scholar
A. L. Bertozzi, T. Laurent and J. Rosado, Lp theory for the multidimensional aggregation equation, Communications on Pure and Applied Mathematics, 64 (2011), 45-83. doi: 10.1002/cpa.20334. Google Scholar
F. Bolley, Y. Brenier and G. Loeper, Contractive metrics for scalar conservation laws, J. Hyperbolic Differ. Equ., 2 (2005), 91-107. doi: 10.1142/S0219891605000397. Google Scholar
G. A. Bonaschi, Gradient flows driven by a non-smooth repulsive interaction potential, Master's thesis, University of Pavia, Italy, arXiv: 1310.3677, 2011. Google Scholar
G. A. Bonaschi, J. A. Carrillo, M. Di Francesco and M. A. Peletier, Equivalence of gradient flows and entropy solutions for singular nonlocal interaction equations in 1D, ESAIM Control, Optimisation and Calculus of Variations, 21 (2015), 414-441. doi: 10.1051/cocv/2014032. Google Scholar
Y. Brenier, L$^2$ formulation of multidimensional scalar conservation laws, Arch. Ration. Mech. Anal., 193 (2009), 1-19. doi: 10.1007/s00205-009-0214-0. Google Scholar
[14] A. Bressan, Hyperbolic Systems of Conservation Laws. The one-dimensional Cauchy Problem, Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, Oxford, 20 edition, 2000. Google Scholar
H. Brézis, Opérateurs Maximaux Monotones et Semi-groupes de Contractions Dans Les Espaces de Hilbert, North-Holland Publishing Co., Amsterdam, 1973. Google Scholar
M. Burger, M. Di Francesco, S. Fagioli and A. Stevens, Sorting phenomena in a mathematical model for two mutually attracting/repelling species, SIAM Journal on Mathematical Analysis, 50 (2018), 3210-3250. doi: 10.1137/17M1125716. Google Scholar
M. Burger, R. Fetecau and Y. Huang, Stationary states and asymptotic behavior of aggregation models with nonlinear local repulsion, SIAM J. Appl. Dyn. Syst., 13 (2014), 397-424. doi: 10.1137/130923786. Google Scholar
V. Calvez, J. A. Carrillo and F. Hoffmann, Equilibria of homogeneous functionals in the fair-competition regime, Nonlinear Anal., 159 (2017), 85-128. doi: 10.1016/j.na.2017.03.008. Google Scholar
V. Calvez, J. A. Carrillo and F. Hoffmann, The geometry of diffusing and self-attracting particles in a one-dimensional fair-competition regime, Nonlocal and Nonlinear Diffusions and Interactions: New Methods and Directions, 2186 (2017), 1-71. Google Scholar
J. Carrillo, Y. Huang and M. Schmidtchen, Zoology of a nonlocal cross-diffusion model for two species, SIAM Journal on Applied Mathematics, 78 (2018), 1078-1104. doi: 10.1137/17M1128782. Google Scholar
J. A. Carrillo, A. Chertock and Y. Huang, A finite-volume method for nonlinear nonlocal equations with a gradient flow structure, Communications in Computational Physics, 17 (2015), 233-258. doi: 10.4208/cicp.160214.010814a. Google Scholar
J. A. Carrillo, K. Craig and Y. Yao, Aggregation-diffusion equations: Dynamics, asymptotics, and singular limits, Active Particles, 2 (2019), 65–108, arXiv: 1810.03634. doi: 10.1007/978-3-030-20297-2_3. Google Scholar
J. A. Carrillo, M. G. Delgadino and A. Mellet, Regularity of local minimizers of the interaction energy via obstacle problems, Comm. Math. Phys., 343 (2016), 747-781. doi: 10.1007/s00220-016-2598-7. Google Scholar
J. A. Carrillo, M. Di Francesco, A. Figalli, T. Laurent and D. Slepcev, Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations, Duke Math. J., 156 (2011), 229-271. doi: 10.1215/00127094-2010-211. Google Scholar
J. A. Carrillo, L. C. Ferreira and J. C. Precioso, A mass-transportation approach to a one dimensional fluid mechanics model with nonlocal velocity, Advances in Mathematics, 231 (2012), 306-327. doi: 10.1016/j.aim.2012.03.036. Google Scholar
J. A. Carrillo, F. Filbet and M. Schmidtchen, Convergence of a Finite Volume Scheme for a System of Interacting Species with Cross-Diffusion, arXiv e-prints, Apr. 2018. Google Scholar
J. A. Carrillo, Y. Huang and S. Martin, Explicit flock solutions for Quasi-Morse potentials, European J. Appl. Math., 25 (2014), 553-578. doi: 10.1017/S0956792514000126. Google Scholar
J. A. Carrillo, S. Martin and V. Panferov, A new interaction potential for swarming models, Phys. D, 260 (2013), 112-126. doi: 10.1016/j.physd.2013.02.004. Google Scholar
J. A. Carrillo and G. Toscani, Wasserstein metric and large–time asymptotics of nonlinear diffusion equations, New Trends in Mathematical Physics, (In Honour of the Salvatore Rionero 70th Birthday), 2005, 234–244. Google Scholar
[30] T. Cazenave and A. Haraux, An Introduction to Semilinear Evolution Equations, Oxford Lecture Series in Mathematics and its Applications, 13. Oxford University Press, New York, 1998. Google Scholar
C. M. Dafermos, Hyperbolic Conservation Laws in Continuum Physics. Fourth Edition, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 325 edition, 2016. doi: 10.1007/978-3-662-49451-6. Google Scholar
S. Daneri and G. Savaré, Eulerian calculus for the displacement convexity in the wasserstein distance, SIAM J. Math. Anal., 40 (2008), 1104-1122. doi: 10.1137/08071346X. Google Scholar
E. De Giorgi, New problems on minimizing movements, Boundary Value Problems for PDE and Applications, C. Baiocchi and J. L. Lions eds., Masson, 29 (1993), 81-98. Google Scholar
M. Di Francesco, A. Esposito and S. Fagioli, Nonlinear degenerate cross-diffusion systems with nonlocal interaction, Nonlinear Analysis, 169 (2018), 94-117. doi: 10.1016/j.na.2017.12.003. Google Scholar
M. Di Francesco and S. Fagioli, Measure solutions for nonlocal interaction PDEs with two species, Nonlinearity, 26 (2013), 2777-2808. doi: 10.1088/0951-7715/26/10/2777. Google Scholar
M. Di Francesco and D. Matthes, Curves of steepest descent are entropy solutions for a class of degenerate convection-diffusion equations, Calc. Var. Partial Differential Equations, 50 (2014), 199-230. doi: 10.1007/s00526-013-0633-5. Google Scholar
M. R. D'Orsogna, Y.-L. Chuang, A. L. Bertozzi and L. S. Chayes, Self-propelled particles with soft-core interactions: Patterns, stability, and collapse, Physical Review Letters, 96 (2006), 104302. Google Scholar
L. C. Evans, Partial Differential Equations, volume 19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, Rhode Island, 1998. Google Scholar
J. H. M. Evers and T. Kolokolnikov, Metastable states for an aggregation model with noise, SIAM J. Appl. Dyn. Syst., 15 (2016), 2213-2226. doi: 10.1137/16M1069006. Google Scholar
R. C. Fetecau and Y. Huang, Equilibria of biological aggregations with nonlocal repulsive-attractive interactions, Phys. D, 260 (2013), 49-64. doi: 10.1016/j.physd.2012.11.004. Google Scholar
R. C. Fetecau, Y. Huang and T. Kolokolnikov, Swarm dynamics and equilibria for a nonlocal aggregation model, Nonlinearity, 24 (2011), 2681-2716. doi: 10.1088/0951-7715/24/10/002. Google Scholar
R. Jordan, D. Kinderlehrer and F. Otto, The variational formulation of the fokker–planck equation, SIAM Journal on Mathematical Analysis, 29 (1998), 1-17. doi: 10.1137/S0036141096303359. Google Scholar
T. Kolokolnikov, H. Sun, D. Uminsky and A. L. Bertozzi, Stability of ring patterns arising from two-dimensional particle interactions, Phys. Rev. E, 84 (2011), 015203. doi: 10.1103/PhysRevE.84.015203. Google Scholar
S. Kružhkov, First order quasilinear equations with several independent variables, Mat. Sb. (N.S.), 81 (1970), 228-255. Google Scholar
O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type, Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 23. American Mathematical Society, Providence, R.I., 1968. Google Scholar
H. Li and G. Toscani, Long-time asymptotics of kinetic models of granular flows, Arch. Ration. Mech. Anal., 172 (2004), 407-428. doi: 10.1007/s00205-004-0307-8. Google Scholar
T.-P. Liu and M. Pierre, Source solutions and asymptotic behaviour in conservations laws, J. Differential Equations, 51 (1984), 419-441. doi: 10.1016/0022-0396(84)90096-2. Google Scholar
D. Matthes, R. McCann and G. Savaré, A family of fourth order equations of gradient flow type, Comm. P.D.E., 34 (2009), 1352-1397. doi: 10.1080/03605300903296256. Google Scholar
R. J. McCann, A convexity principle for interacting gases, Advances in Mathematics, 128 (1997), 153-179. doi: 10.1006/aima.1997.1634. Google Scholar
A. Mogilner and L. Edelstein-Keshet, A non-local model for a swarm, Journal of Mathematical Biology, 38 (1999), 534-570. doi: 10.1007/s002850050158. Google Scholar
F. Otto, The geometry of dissipative evolution equations: The porous medium equation, Comm. Partial Differential Equations, 26 (2001), 101-174. doi: 10.1081/PDE-100002243. Google Scholar
S. T. Rachev and L. Rüschendorf, Mass Transportation Problems, volume I of Probability and Its Applications, Springer, New York, 1998. Google Scholar
F. Santambrogio, Optimal Transport for Applied Mathematicians, volume 86 of Progress in Nonlinear Differential Equations and Their Applications., Birkhäuser Verlag, Basel, 2015. doi: 10.1007/978-3-319-20828-2. Google Scholar
C. M. Topaz, A. L. Bertozzi and M. A. Lewis, A nonlocal continuum model for biological aggregation, Bull. Math. Biol., 68 (2006), 1601-1623. doi: 10.1007/s11538-006-9088-6. Google Scholar
G. Toscani, One-dimensional kinetic models of granular flows, ESAIM: Modélisation Mathématique et Analyse Numérique, 34 (2000), 1277–1291. doi: 10.1051/m2an:2000127. Google Scholar
C. Villani, Topics in Optimal Transportation, volume 58 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2003. doi: 10.1007/b12016. Google Scholar
C. Villani, Optimal Transport: Old and New, Grundlehren der mathematischen Wissenschaften. Springer, Berlin, 2009. doi: 10.1007/978-3-540-71050-9. Google Scholar
Figure 1. This example has two separated indicator functions as initial data. In the left graph we see the evolution of system (6) to the stationary state (right graph). In the middle we see the energy (black) of the solution and its dissipation (red). The dotted line is the numerical time derivative of the energy. It matches well with the analytically obtained dissipation
Figure 2. We choose partially overlapping initial data and observe, as before, that mixing occurs. The graph on the left displays the evolution of both densities at different time instances, while the rightmost graph displays the stationary state with identical densities. The graph in the middle shows the energy decay along the solution; the numerical and the analytical dissipation agree well
Figure 3. Initial (left) and exact solution (right) at time $ t = 0.5 $ for the case of two distinct Dirac deltas at the level of distribution functions
Figure 4. Initial (left) and exact solution (right) at time $ t = 1/(4(1-m)) $ with $ m = 0.4 $ for the case of two partially overlapping deltas at the level of distribution functions
Paul B. Kantor
Paul B. Kantor is an American information scientist. He is Distinguished Professor Emeritus of Information Science at Rutgers University in New Jersey, and an Honorary Research Associate in Industrial and Systems Engineering at the University of Wisconsin, Madison.
Biography
Kantor was educated in Physics and Mathematics at Columbia University,[1] where he took courses taught by Samuel Eilenberg, Tsung-Dao Lee, Jack Steinberger, Charles Townes, Polykarp Kusch and Melvin Schwartz. He earned a Ph.D. degree in theoretical physics at Princeton University, working with Sam Treiman. He has been a Fulbright Fellow, has received the ASIST Research award, and is a Fellow of the AAAS.
His research centers on the role of information systems for storage and retrieval in a wide range of applications, with particular emphasis on rigorous evaluation of the effectiveness of such systems. Since the 9/11 attacks he has worked in several areas related to Homeland and National Security, dealing with optimal detection of threats and allocation of resources. While this research continues, he is currently investigating the relationships between human and machine learning, as a collaborator in several projects.[2][3]
At Rutgers he was a member of the Department of Library and Information Science, the Center for Operations Research (RUTCOR), the Center for Discrete Mathematics and Computer Sciences (DIMACS) and the Graduate Faculty of the Department of Computer Science, and served as Research Director for the CCICADA DHS Center.
Since 2015 he has been Professor Emeritus (Dist.) at Rutgers and an Honorary Associate of the Department of Industrial and Systems Engineering at the University of Wisconsin, Madison,[4] where he is involved in research addressing the similarities and differences between human learning and machine learning.
Membership
• American Society for Information Science and Technology (ASIST)
• American Association for the Advancement of Science (AAAS)
• IEEE
• American Physical Society
• American Statistical Association
• SIAM, the Society for Industrial and Applied Mathematics.
Research grants
• NSF
• DARPA
• ARDA
• US Department of Education
Publications
• Kantor, Paul B. Objective performance measures for academic and research libraries / by Paul B. Kantor. Washington, D.C. : Association of Research Libraries, 1984. viii, 76 p., [13] leaves : ill.; 29 cm. ISBN 0-918006-09-0 (pbk.)
• Kantor, Paul B. Studying the cost and value of library services : final report / by Paul B. Kantor, project director and principal investigator, Tefko Saracevic, co-principal investigator, Joann D’Esposito-Wachtmann, project manager. [New Brunswick, N.J.] : Alexandria Project Laboratory, School of Communication, Information, and Library Studies, Rutgers, c1995. 1 v. (various pagings) : ill.; 28 cm.
• Bulletin of the American Society for Information Science & Technology, Aug/Sep 2002. Mathematical models in information science by Paul B. Kantor "It has been said that mathematicians are basically puzzle solvers, and that they don't so much care what the puzzles are about. That may be a very good ..."
• Intelligence and security informatics : IEEE International Conference on Intelligence and Security Informatics, ISI 2005, Atlanta, GA, USA, May 19–20, 2005 : proceedings / Paul Kantor ... [et al.] (eds.). Berlin; New York : Springer, c2005. xviii, 674 p. : ill.; 24 cm. ISBN 3-540-25999-6
• Ying Sun, Paul B Kantor. Cross-Evaluation: A new model for information system evaluation. Journal of the American Society for Information Science and Technology. Hoboken: Mar 2006. Vol. 57, Iss. 5; p. 614
• Yuval Elovici, Bracha Shapira, Paul B Kantor. A Decision Theoretic Approach to Combining Information Filters: An Analytical and Empirical Evaluation. Journal of the American Society for Information Science and Technology. Hoboken: Feb 1, 2006. Vol. 57, Iss. 3; p. 306
• Yuval Elovici, Bracha Shapira, Paul B. Kantor. Using the Information Structure Model to Compare Profile-Based Information Filtering Systems. Information Retrieval. Boston: Jan 2003. Vol. 6, Iss. 1; p. 75.
• Paul B Kantor. Mathematical models in information science. Bulletin of the American Society for Information Science and Technology. Silver Spring: Aug/Sep 2002. Vol. 28, Iss. 6; p. 22
• Bracha Shapira, Paul B Kantor, Benjamin Melamed. The effect of extrinsic motivation on user behavior in a collaborative information finding system. Journal of the American Society for Information Science and Technology. Hoboken: Sep 2001. Vol. 52, Iss. 11; p. 879.
• Strategic Appraisal: The Changing Role of Information Warfare. Paul B Kantor. The Library Quarterly. Chicago: Jul 2001. Vol. 71, Iss. 3; p. 425.
• Kwong Bor Ng, Paul B Kantor. Predicting the effectiveness of naive data fusion on the basis of system characteristics. Journal of the American Society for Information Science. Nov 2000. Vol. 51, Iss. 13; p. 1177.
• Paul B Kantor, Endre Boros, Benjamin Melamed, Vladimir Menkov, et al. Capturing human intelligence in the Net. Association for Computing Machinery. Communications of the ACM. New York: Aug 2000. Vol. 43, Iss. 8; p. 112.
• Paul B. Kantor, Ellen M. Voorhees. The TREC-5 Confusion Track: Comparing Retrieval Methods for Scanned Text. Information Retrieval. Boston: May 2000. Vol. 2, Iss. 2-3; p. 165.
• Kwong Bor Ng, Paul B Kantor. Predicting the effectiveness of naive data fusion on the basis of system characteristics. Journal of the American Society for Information Science. 2000. Vol. 51, Iss. 13; p. 1177
• Willard I Zangwill, Paul B Kantor. Toward a theory of continuous improvement and the learning curve. Management Science. Linthicum: Jul 1998. Vol. 44, Iss. 7; p. 910.
• Paul B Kantor, Jung Jin Lee. Testing the Maximum Entropy Principle for Information Retrieval. Journal of the American Society for Information Science (1986–1998). New York: May 1998. Vol. 49, Iss. 6; p. 557
• Tefko Saracevic, Paul B Kantor. Studying the Value of Library and Information Services. Part II. Methodology and Taxonomy. Journal of the American Society for Information Science (1986–1998). New York: Jun 1997. Vol. 48, Iss. 6; p. 543
• Tefko Saracevic, Paul B Kantor. Studying the Value of Library and Information Services. Part I. Establishing a Theoretical Framework. Journal of the American Society for Information Science (1986–1998). New York: Jun 1997. Vol. 48, Iss. 6; p. 527.
• Studying the Cost and Value of Library and Information Services: Applying Functional Cost Analysis to the Library in Transition
• Eileen G Abels, Paul B Kantor, Tefko Saracevic. Journal of the American Society for Information Science (1986–1998). New York: Mar 1996. Vol. 47, Iss. 3; p. 217.
• Jung Jin Lee, Paul B Kantor. A Study of Probabilistic Information Retrieval Systems in the Case of Inconsistent Expert Judgments. Journal of the American Society for Information Science (1986–1998). New York: Apr 1991. Vol. 42, Iss. 3; p. 166.
• Paul B Kantor. Brief Communication A Model for the Stopping Behavior of Users of Online Systems. Journal of the American Society for Information Science (1986–1998). New York: May 1987. Vol. 38, Iss. 3; p. 211.
• Paul B Kantor. A Note on Cumulative Advantage Distributions. Journal of the American Society for Information Science (pre-1986). New York: Jul 1978. Vol. 29, Iss. 4; p. 202.
• Paul B Kantor. Availability Analysis. Journal of the American Society for Information Science (pre-1986). New York: Sep/Oct 1976. Vol. 27, Iss. 5; p. 311.
References
1. Columbia College (Columbia University). Office of Alumni Affairs and Development; Columbia College (Columbia University) (1955). Columbia College today. Columbia University Libraries. New York, N.Y. : Columbia College, Office of Alumni Affairs and Development.
2. "Human and machine learning: The search for anomalies | Research | UW–Madison".
3. "NSF Award Search: Award # 2041428 - EAGER: Rule Induction Games to Explore Differences between Human and Machine Intelligence".
4. "Kantor, Paul - UW-Engineering Directory | College of Engineering @ The University of Wisconsin-Madison". directory.engr.wisc.edu. Retrieved 2022-07-22.
• Wired Magazine, Issue 6.03, March 1998: "Ant Wisdom for the Web" by James Glave "Professor Paul Kantor's digital-information pheromones sniff out the good stuff on the Web. But keep your antennae up for intellectual fads and poisoned bait."
• Field, Tom. Follow the ants. CIO. Framingham: 1 April 1998. Vol. 11, Iss. 12; p. 25. (interview with Kantor)
External links
• Page at Rutgers
| Wikipedia |
Ethnobotanical investigation on medicinal plants in Algoz area (South Kordofan), Sudan
Tahani Osman Issa1,
Yahya Sulieman Mohamed2,
Sakina Yagi3,
Reem Hassan Ahmed1,
Telal Mohammed Najeeb1,
Abdelrafie Mohamed Makhawi1 &
Tarig Osman Khider1
The inhabitants of western Sudan use traditional medicine for the treatment of various ailments owing to a shortage of medical doctors and the unaffordable prices of pharmaceutical products. The present study is the first documentation of traditional knowledge on the medicinal uses of plants by healers in the Algoz area (South Kordofan), Sudan.
Ethnobotanical data were collected from March to November 2015 through semi-structured interviews with 30 healers (24 male and 6 female) living in the investigated area. Quantitative indices, namely use categories, use value (UV) and informant consensus factor (ICF), were calculated to evaluate the importance of medicinal plant species.
A total of 94 medicinal plants belonging to 45 families and 81 genera were recorded in the study area. The most represented family was Leguminosae with 20 species, followed by Combretaceae (6 species), Rubiaceae (5 species) and Asteraceae (4 species). The reported species comprised herbs (43%), trees (28%), shrubs (22%), climbers (4%) and parasites (3%). The root and stem (21% each) were the most frequently used plant parts. The majority of remedies were administered orally (67%), with infusion (36%) and maceration (32%) being the most common methods of preparation. High ICF values were recorded for urinary system diseases (0.89), blood system disorders (0.88), and poisonous animal bites and gynaecological diseases (0.87 each). Anastatica hierochuntica, Ctenolepis cerasiformis, Echinops longifolius, Cleome gynandra, Maerua pseudopetalosa, Martynia annua, Oldenlandia uniflora, Opuntia ficus-indica, Solanum dubium, Sonchus cornutus, Tribulus terrestris and Drimia maritima were reported for the first time in this study.
The number of medicinal plants reported in this paper reflects the high diversity of medicinal plants in the Algoz area, which will continue to play an important role in the healthcare system of the study area.
In 2011, Sudan split into two countries, with one third of the country being proclaimed a new state named the "Republic of South Sudan" and the remaining area retaining the older name, "the Republic of Sudan" [1]. In its former integral state, Sudan was the largest country in Africa and the tenth largest in the world, with an area of 2.5 million square kilometers spanning diverse terrains and climatic zones [1]. This bore directly on the wide diversity of vegetation, ranging from desert and semi-desert in the north through the equatorial in the central part to the extreme of the humid equatorial in the south. Such conditions favoured a diverse vegetation consisting of 3137 documented species of flowering plants belonging to 170 families and 1280 genera, 15% of which are endemic [2]. A large number of these plants make a vital contribution to human healthcare needs throughout the country. Medicinal and aromatic plants and their derivatives are an integral part of life in Sudan. Communities in different regions of Sudan use traditional medicine for the treatment of various ailments because of the shortage of medical doctors and the unaffordable prices of pharmaceutical products, besides their faith in the medicinal value of traditional medicine [3]. It has been estimated that only 11% of the population has access to formal health care [1].
The geographical position of Sudan has made it a multicultural melting pot in which diverse traditional knowledge has travelled over large distances, and it has facilitated the exchange of knowledge about medicinal plants with other countries, from Africa to the Middle East and Asia [4].
Despite the varied flora and socio-cultural diversity of Sudan, there is a far-reaching lack of written information on the traditional use of medicinal plants [4]; documentation of plants used in traditional medicine in Sudan is therefore warranted. The aim of this study was to investigate the traditional knowledge on medicinal uses of plants held by local healers in the Algoz area (South Kordofan), Sudan.
Algoz area is situated in the northern part of South Kordofan state, and its borders are Northern Kordofan state from the north and northeast, West Kordofan state from the northwest, Dellang locality from the south and Habella locality from the southeast (Fig. 1). It is located between latitudes 12° and 12° 30′ N and longitudes 29° 48′ and 30° E, at 622 m above sea level, with a total area of 35,000 km². Short grass and short scattered trees prevail. The area is associated with exposed rocks crossing the central Sudan and forming a surface water divide. The White Nile, which is the main tributary of the River Nile, bounds the hydrologic system to the east, while the highlands of the Kordofan Plateau and the Nuba Mountains bound it to the west and the south respectively. Khor Abu Habil is a major seasonal wadi that crosses the study area and flows from the west to the east. The wadi disappears into the sand dunes a few kilometers before reaching the White Nile. The climate in the area is semi-arid with long hot summers (March–September) and short mild winters (December–February). Seasonal rainfall occurs only during summer (June–September) and varies between 200 mm/year in the north and 450 mm/year in the south [5].
a Sudan map showing the South Kordofan State (red) and b Algoz locality (red)
The Algoz area has a mixed population, with tribes such as Dar Shungool, Gaboosh, Dar Bati, Albargo, Albarno and Flata as well as some Arab nomads. The inhabitants work mainly in agriculture, animal grazing and trade [6].
Data collection and plant identification
Ethnobotanical data were collected from March to November 2015. Information about the medicinal use of plants was collected by carrying out semi-structured interviews with 30 healers (24 male and 6 female) living in the investigated area. The questionnaire was designed to collect data on (i) local names of the plants, (ii) ailments treated by the plant, (iii) plant parts used, (iv) condition of the plant material (dried or fresh) and (v) modes of preparation and administration. Some social factors like the name, age, occupation and education level of the interviewed person were also recorded. Also, the geographic locality and date of the interview were recorded. Plant specimens were collected for taxonomic identification using keys of written floras such as Broun and Massey [7], Andrews [8,9,10,11], Ross [12], Hutchinson and Dalziel [13], Maydell [14] and Elamin [15]. Voucher specimens were deposited at the Herbarium of Institute of Medicinal and Aromatic Plants, National Centre for Research, Sudan (MAPTMR-H). The botanical names and plant families are given according to the standards of the plant list (www.ipni.org/).
Ethnobotanical data analysis
Data analysis was carried out using both a classical systematic ethnobotanical approach and a quantitative numerical approach in order to evaluate the importance of the cited plant species in the investigated area. The quantitative analysis was based on the following ethnobotanical indices:
Use categories
The medicinal plant uses were classified into categories following the standard developed by Cook [16]. Each mention of a plant as "used" was counted as one "use report". If one informant used a plant to treat more than one disease in the same category, this was considered a single use report [17].
Use value (UV)
The relative importance of each species was assessed using the use value [18], a quantitative measure of how widely a species is known locally:
$$ \mathrm{UV}=\frac{\sum {U}_i}{n} $$
where $U_i$ is the number of use reports cited by each informant for a given species and $n$ is the total number of informants.
Use values are high when there are many use reports for a plant, implying that the plant is important, and approach zero (0) when there are few reports related to its use. The use value, however, does not distinguish whether a plant is used for single or multiple purposes.
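As a rough illustration of the calculation (the informant names and counts below are hypothetical, not data from this survey), the use value can be computed directly from the per-informant use-report counts:

```python
# Minimal sketch of the UV calculation; informant names and counts are
# illustrative assumptions, not data from this study.
def use_value(reports_per_informant):
    """UV = (sum of use reports over all informants) / (total number of informants)."""
    n = len(reports_per_informant)          # total informants interviewed
    return sum(reports_per_informant.values()) / n

use_reports = {"informant_1": 3, "informant_2": 0, "informant_3": 2}
print(round(use_value(use_reports), 2))     # (3 + 0 + 2) / 3 = 1.67
```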
Informant consensus factor
To test homogeneity of knowledge, the informant consensus factor was used [19]:
$$ \mathrm{ICF}=\frac{N_{\mathrm{ur}}-{N}_{\mathrm{t}}}{\left({N}_{\mathrm{ur}}-1\right)} $$
where Nur refers to the number of use reports for a particular use category and Nt refers to the number of taxa used for a particular use category by all informants. Informant consensus factor (ICF) values are low (near 0) if plants are chosen randomly or if there is no exchange of information about their use among informants and approach one (1) when there is a well-defined selection criterion in the community and/or if information is exchanged between informants [20].
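As a quick worked example, the sketch below applies the formula to the counts reported later in this study for the poisonous animal bites category (77 use reports, 8 taxa); the function is only a direct transcription of the equation above:

```python
# Sketch of the ICF formula; the example counts are those reported in this
# study for poisonous animal bites (77 use reports, 8 species).
def informant_consensus_factor(n_use_reports, n_taxa):
    """ICF = (Nur - Nt) / (Nur - 1)."""
    return (n_use_reports - n_taxa) / (n_use_reports - 1)

print(round(informant_consensus_factor(77, 8), 2))   # 0.91
```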
Medicinal plant diversity
A total of 94 medicinal plants, belonging to 45 families and 81 genera, were recorded in the study area. The results provide the following information for each species: scientific name, botanical family, local common name, plant habitat, plant part used, disease treated, route of administration and use value (Table 1). The best-represented family is Leguminosae with 20 species, followed by Combretaceae (6 species), Rubiaceae (5 species), Asteraceae (4 species), Lamiaceae, Poaceae, Tiliaceae and Zygophyllaceae (3 species each) and Apocynaceae, Asclepiadaceae, Brassicaceae, Burseraceae, Cleomaceae, Capparaceae, Malvaceae and Meliaceae (2 species each); the other families were represented by one species each. This dominance of Leguminosae is characteristic of the Sudanese flora. The most commonly used species is Sarcocephalus latifolius with a UV of 2.07, followed by Guiera senegalensis (UV 1.87), Hydnora abyssinica (UV 1.83) and Geigeria alata (UV 1.67). Plants used to treat three or more ailments represent the majority (86%), followed by plants used to treat a single ailment (8%) and those used to treat two ailments (6%).
Table 1 Ethnomedicinal plants used in the Algoz region (South Kordofan)/western Sudan
Habitat of the plants
Analysis of the data by habitat showed that the reported species comprise herbs (43%), trees (28%), shrubs (22%), climbers (4%) and parasites (3%) (Fig. 2). The majority of the medicinal plants are collected from the wild; only 11% are cultivated and a very small proportion (0.01%) are purchased from the market (Table 1).
Habitat of medicinal plants in the study area
Parts of medicinal plants used
Data on the different plant parts used in traditional medicine are shown in Fig. 3. The most frequently used parts were the root and stem (21% each), followed by the fruit (15%), whole plant (14%), seed (12%), leaf (11%), gum/latex, bulb/corm and heartwood (0.02%) and flower (0.01%). In some cases, different parts of the same plant are used for the treatment of different diseases.
Percentage of plant parts used
Method of preparation
A majority of remedies are administered orally (67%), with infusion (36%) and maceration (32%) being the most used methods. Some prescriptions can be prepared by either method (infusion or maceration), and these represented 13% of preparations, while decoction represented 11%. Dried powder or freshly collected plant parts are also used. Other prescriptions are used externally (33%) and applied as dry powder (29%), rub (23%), smoke (23%), poultices (20%) or as a wash (6%) (Table 2). Most of these preparations use water as the extraction solvent. Some herbalists used other adjuvants like honey, sugar, salt, milk, sour milk, yoghurt, ajeen (fermented dough), nisha (light porridge), atroon (sodium bicarbonate), bee wax, wax of goat, and olive and sesame oil.
Table 2 Mode of preparations of medicinal plants in the study area
Medicinal plants used in combination
For the treatment of a particular ailment, herbalists sometimes use more than one plant. For example, Allium sativum bulb is mixed with Zingiber officinale rhizome and applied to the anus for the treatment of haemorrhoids. A potion is prepared from the seeds of Trigonella foenum-graecum, curcuma, Nigella sativa and bee honey for the treatment of uterus inflammation. The root of Tinospora bakis is mixed with Syzygium aromaticum (clove) for the treatment of malaria. Atroon is added to some preparations, like those of Ziziphus spina-christi and Acacia oerfota, for the treatment of dysentery and toothache respectively.
Quantitative analyses of ethnomedicinal data
Fifteen ailment categories were identified. The ICF was calculated for each ailment category and ranged from 0.50 to 0.91 (Table 3). The highest ICF (0.91) was recorded for poisonous animal bites, with 8 species and 77 use reports, followed by urinary system diseases (0.89) with 17 species and 156 use reports, blood system disorders (0.88) with 14 species and 116 use reports and gynaecological diseases (0.87) with 12 species and 86 use reports. The high ICF for poisonous animal bites can probably be related to the hard and dangerous environmental conditions. The category of plants used for the treatment of eye diseases had the lowest degree of consensus (0.50), with only three informants mentioning ailments in this category.
Table 3 Diseases based on categories and informant consensus factor (ICF)
Most frequently cited plant species and medicinal uses
In this study, the most cited plants, those with at least 20 citations for a specific ailment, were Guiera senegalensis (57 citations), mainly used for the treatment of malaria (22 citations) and kidney disorders (20 citations). This is followed by Hydnora abyssinica (55 citations), used in the treatment of gastrointestinal system diseases, mainly diarrhoea and dysentery (40 citations); Geigeria alata (50 citations), used mainly for the treatment of diabetes (20 citations) and hypertension (17 citations); Kigelia africana (32 citations), with 28 citations for the treatment of breast swellings; and Carissa spinarum (28 citations) for envy eye.
Medicinal plants and the associated knowledge
Thirty healers (24 male and 6 female) were interviewed and divided into five age groups (20–30, 31–40, 41–50, 51–60 and > 60). Analysis of the healers' ages revealed that the dominant age group among men was 41–50, while among the women, who were few in number, it was > 60 (Figs. 3 and 4).
Age group distribution of the traditional healers interviewed
In this study, the most cited plants, Guiera senegalensis, Hydnora abyssinica, Geigeria alata, Kigelia africana and Carissa spinarum, were previously reported with the same traditional uses in ethnobotanical studies from other regions of Sudan. For example, Guiera senegalensis was reported by EL-Kamali [3] and Suleiman [21] for the treatment of malaria. Hydnora abyssinica (H. johannis) for the treatment of diarrhoea and dysentery and Kigelia africana for the treatment of breast swellings were also reported by Musa et al. [22]. Geigeria alata for the treatment of diabetes was reported by EL-Kamali [3] and Suleiman [21]. Carissa spinarum (C. edulis) was reported by EL-Kamali [3] for charm and the treatment of madness. Kigelia africana was reported by Doka and Yagi [23] for swollen mastitis.
The high frequency of citations of these medicinal plants can be explained by the fact that they are the best known and have long been used by the majority of informants, which makes the information about them more reliable. In fact, many biological and phytochemical evaluations have been carried out on these plants. For example, Traore-Keita et al. [24] reported that the chloroform extract of Guiera senegalensis roots exhibited pronounced antimalarial activity. They isolated two alkaloids, harman and tetrahydroharman, that displayed high antimalarial activity (IC50 (50% inhibition) lower than 4 μg/mL) and low toxicity against the human leukemia monocytic cell line THP1. Yagi et al. [25] found that Hydnora johannis roots have no activity against the bacterial species mainly responsible for diarrhoea but are rich in phenols. They suggested that the curative potency of H. johannis roots is not mainly associated with antibacterial agents acting against the species responsible for dysentery or diarrhoea, but may instead be attributed to tannins: by denaturing proteins to form protein tannate, tannins make the intestinal mucosa more resistant, reduce intestinal transit and act as a barrier against bacterial toxins. The antidiabetic potential of Geigeria alata root was also evaluated: diabetic rats dosed with 250 mg/kg of an aqueous methanolic extract showed a significantly (p < 0.05) decreased blood glucose level, closer to that of non-diabetic rats, and improved β-cell function and antioxidant status [26]. Kigelia africana was found to suppress the proliferation of MCF7 breast cancer cells [27], human colon adenocarcinoma (Caco-2) and human embryonic kidney (HEK-293) cells [28] and HeLa cervical cancer cells [29].
Comparative review of traditional usages of reported species with previous studies from Sudan
A comparative review with previous reports [3, 21,22,23, 30,31,32,33] from different parts of Sudan was performed to identify the new medicinal plants and new uses reported in this study (Table 4). Suleiman [21] reported a total of 44 plant species used traditionally by communities of the Northern Kordofan region; 22 of these were recorded here with the same traditional uses, while 2 species, Blepharis linariifolia and Catunaregam nilotica (Xeromphis nilotica, Randia nilotica), were recorded with different uses. EL-Kamali [3] reported 48 plant species used traditionally in North Kordofan; 15 were recorded here with the same uses, and 5 species, Acacia nilotica subsp. adstringens, Aristolochia bracteolata, Cissus quadrangularis, Dichrostachys cinerea and Sarcocephalus latifolius (Nauclea latifolia), with different uses. Doka and Yagi [23] reported 49 plant species used traditionally in West Kordofan; 16 were recorded here with the same uses, and 9 species were recorded with different uses: Acacia senegal, Acacia seyal, Arachis hypogaea, Balanites aegyptiaca, Cissus quadrangularis, Combretum aculeatum, Grewia flavescens, Tamarindus indica and Catunaregam nilotica. Musa et al. [22] reported 53 plant species used traditionally in the Blue Nile State, southeastern Sudan; 18 were recorded here with the same uses and 13 with different uses: Acacia senegal, Acacia seyal, Anogeissus leiocarpus, Carissa spinarum (C. edulis), Cissus quadrangularis, Grewia villosa, Lannea fruticosa, Piliostigma reticulatum, Senna occidentalis, Strychnos spinosa, Tephrosia uniflora, Terminalia laxiflora and Ximenia americana. Moreover, El Ghazali et al. [30,31,32,33], in their books on Sudanese medicinal plants, documented some of these plants for the same or very similar usages. In all, 99 new traditional uses are reported here for previously documented medicinal plants. For example, the whole plant of Striga hermonthica was previously reported to treat diabetes, but in this study it is also used for menstrual cramps. The fruit of Senna occidentalis is reported to treat eczema besides its common use as a laxative. Plicosepalus acaciae is commonly used to enhance wound healing and as a lactagogue, but in this study the smoke of the seeds is reported as a fumigant to repel insects from the ear.
Table 4 Comparative review of traditional usages of reported species with previous studies from Sudan
New species and new uses for known species are reported for the first time in this study. For example, Anastatica hierochuntica, Ctenolepis cerasiformis, Echinops longifolius, Cleome gynandra, Maerua pseudopetalosa, Martynia annua, Oldenlandia uniflora, Opuntia ficus-indica, Solanum dubium, Sonchus cornutus, Tribulus terrestris and Drimia maritima had not been mentioned in any previous study of traditional Sudanese medicine. Acanthorrhinum ramosissimum, Cleome viscosa and Setaria acromelaena, which were used against the evil eye, are also reported for the first time.
The majority of the healers declared that they had learned about medicinal plants from their parents or grandparents. The lack of systematic documentation of medicinal plant knowledge, which appears to occur in many parts of the world, may contribute to the loss of this knowledge, particularly for plants that are neglected or non-preferred [34,35,36].
The number of medicinal plants reported in this paper shows that the Algoz area harbours a high diversity of medicinal plants, which will continue to play an important role in the healthcare system of the study area. Evaluation of their claimed pharmacological efficacy and their toxicity profiles is essential. Moreover, the present study could contribute to conserving this rich heritage and provide valuable information for the writing of a Sudanese pharmacopoeia.
Conservation of this traditional knowledge is very important. The progressive mass destruction of wild vegetation for various purposes may accelerate the disappearance of medicinal plants, which in turn may have profound consequences for the role of traditional medicine in human health. Furthermore, the drop in the availability of raw materials due to the depletion of natural resources affects the discovery of potential drugs [37]. Thus, raising community awareness about conservation and sustainable utilization of traditional medicinal plants is vital for the whole of plant biodiversity [22]. Modern biotechnological approaches like genetic engineering, micropropagation via tissue encapsulation of propagules, tissue culture and fermentation should be applied to improve yield and modify the potency of medicinal plants [38].
ICF: Informant consensus factor
UV: Use value
Mohammed AMA. Research advances in Sudanese traditional medicine: opportunities, constrains and challenges. Altern Integ Med. 2013;2:10.
Khalid H, Abdalla WE, Abdelgadir H. Gems from traditional north-African medicine: medicinal and aromatic plants from Sudan. Nat Prod Bioprospect. 2012;2:92–103.
EL-Kamali HH. Ethnopharmacology of medicinal plants used in north Kordofan (western Sudan). Ethnobot Leaflets. 2009;13:89–97.
Saeed MEM, Abdelgadir H, Sugimoto Y, Khalid HE, Efferth T. Cytotoxicity of 35 medicinal plants from Sudan towards sensitive and multidrug-resistant cancer cells. J Ethnopharmacol. 2015;174:644–58.
Abdalla OAE. Aquifer systems in Kordofan, Sudan: subsurface lithological model. S Afr J Geol. 2006;109:585–98.
Anonym. South Kordofan State, Sudan Ministry of the Cabinet Affairs, 2016 (In Arabic).
Broun AF, Massey RE. Flora of the Sudan. London: Thomas Murby and Co; 1929.
Andrews FW. The vegetation of the Sudan. In: Tothill JD, editor. Agriculture in the Sudan. UK: Oxford University Press; 1948.
Andrews FW. The flowering plants of the Anglo-Egyptian Sudan, vol. 1. Arbroath: Buncle Co. Ltd.; 1950.
Ross JH. Flora of South Africa, Part I, vol. 16. Pretoria: The Government Printer; 1975.
Hutchinson J, Dalziel JM. Flora of west tropical Africa. 1st ed. Millbank: Crown Agents for Overseas Governments and Administration; 1968.
Maydell HJV. Trees and shrubs of the Sahel, their characteristics and uses. Germany: GTZ; 1990.
Elamin HM. Trees and shrubs of the Sudan. U.K: Ithaca Press Exeter; 1990.
Cook FEM. Economic botany data collection standard. Kew: Royal Botanic Gardens; 1995.
Treyvaud AV, Arnason JT, Maquin P, Cal V, Vindas PS, Poveda L. A consensus ethnobotany of the Q'eqchi' Maya of southern Belize. Econ Bot. 2005;59:29–42.
Phillips O, Gentry AH, Reynel C, Wilkin P, Galvez DBC. Quantitative ethnobotany and Amazonian conservation. Conserv Biol. 1994;8:225–48.
Trotter RT, Logan MH. Informant consensus: a new approach for identifying potentially effective medicinal plants. In: Etkin NL, editor. Plants in indigenous medicine and diet. Bedford Hills: Redgrave Publishing Company; 1986.
Gazzaneo LRS, Lucena RFP, Albuquerque UP. Knowledge and use of medicinal plants by local specialists in a region of Atlantic Forest in the state of Pernambuco (Northeastern Brazil). J Ethnobiol Ethnomed. 2005;1:9.
Suleiman MHA. An ethnobotanical survey of medicinal plants used by communities of Northern Kordofan region, Sudan. J Ethnopharmacol. 2015;176:232–42.
Musa MS, Abdelrasoo FE, Elsheikh EA, Ahmed LAMN, Mahmoud AE, Yagi SM. Ethnobotanical study of medicinal plants in the Blue Nile State, south-eastern Sudan. J Med Plant Res. 2011;5(17):4287–97.
Doka IG, Yagi SM. Ethnobotanical survey of medicinal plants in west Kordofan (western Sudan). Ethnobot Leaflets. 2009;13:1409–16.
Traore-Keita F, Gasquet M, Di Giorgio C, Ollivier E, Delmas F, Keita A, Doumbo O, Balansard G, Timon-David P. Antimalarial activity of extracts and alkaloids isolated from six plants used in traditional medicine in Mali and Sao Tome. Phytother Res. 2002;16(7):646–9.
Yagi S, Chrétien F, Duval RE, Fontanay S, Maldini M, Henry M, Chapleur Y, Laurain-Mattar D. Antibacterial activity, cytotoxicity property and chemical constituents of Hydnora johannis roots. South Afr J Bot. 2012;78:228–34.
Hafizur RM, Babiker R, Yagi S, Chishti S, Kabir N, Choudhary MI. The antidiabetic effect of Geigeria alata is mediated by enhanced insulin secretion, modulation of β-cell function, and improvement of antioxidant activity in streptozotocin-induced diabetic rats. J Endocrinol. 2012;214:329–35.
Fouche G, Cragg GM, Pillay P, Kolesnikova N, Maharaj VJ, Senabe J. In vitro anticancer screening of South African plants. J Ethnopharmacol. 2008;119(3):455–61.
Chivandi E, Cave E, Davidson BC, Eriwanger KH, Mayo D, Madziva MT. Suppression of Caco-2 and HEK-293 cell proliferation by Kigelia africana, Mimusops zeyheri and Ximenia caffra seed oils. In Vivo. 2012;26(1):99–105.
Arkhipov A, Sirdaarta J, Rayan P, McDonnell PA, Cock IE. An examination of the antibacterial, antifungal, antigiardial and anticancer properties of Kigelia africana fruit extracts. Pharmacognosy Commun. 2014;4(3):62–76.
El Ghazali GB. Medicinal plants of the Sudan. Part I. Medicinal plants of Arkawit. Sudan: Khartoum University Press; 1987.
El Ghazali GB, El Tohami MS, El Egami AB. Medicinal plants of the Sudan. Part III. Medicinal plants of the White Nile Province. Sudan: Khartoum University Press; 1994.
El Ghazali GB, El Tohami MS, El Egami AB, Abdalla WS, Mohamed MG. Medicinal plants of the Sudan. Part IV. Medicinal plants of Northern Kordofan. Khartoum: Omdurman Islamic University Press; 1997.
El Ghazali GE, Aballa WE, Khalid HE, Khalafalla MM, Hamad AD. Medicinal plants of the Sudan, Part V. Medicinal plants of Ingessana. Khartoum: Sudan Currency Printing Press; 2003.
Fekadu F. Ethiopian traditional medicine, common medicinal plants in perspective, Sioux City, IA, (2001).
Brouwer N, Liu Q, Harrington D, Kohen J, Vemulpad S, Jamie J, Randall M, Randall D. An ethnopharmacological study of medicinal plants in New South Wales. Molecules. 2005;10:1252–62.
Bussmann RW, Sharon D. Traditional medicinal plant use in Loja province, southern Ecuador. J Ethnobiol Ethnomed. 2006;2:44.
Chivian E. Biodiversity: its importance to human health center for health and the global environment. USA: Harvard Medical School; 2002.
Chen S-L, Yu H, Luo H-M, Wu Q, Li C-F, Steinmetz A. Conservation and sustainable use of medicinal plants: problems, progress, and prospects. Chin Med. 2016;11:37.
We would like to thank all the traditional healers and local people of the study area for sharing their knowledge, cooperation and hospitality. The authors are grateful to Dr. Migdad Elsir Shuaib (Department of Geology, Faculty of Science, University of Khartoum) for the geographical and geological information.
This study was financed by the University of Bahri, Sudan, Code No: U of B-1-2015.
All data collected during the field surveys are included in the manuscript.
College of Applied and Industrial Sciences, University of Bahri, P.O. Box 1606, Khartoum, Sudan: Tahani Osman Issa, Reem Hassan Ahmed, Telal Mohammed Najeeb, Abdelrafie Mohamed Makhawi & Tarig Osman Khider
Institute of Medicinal and Aromatic Plants, National Centre for Research, Khartoum, Sudan: Yahya Sulieman Mohamed
Department of Botany, Faculty of Science, University of Khartoum, P.O. Box 11115, Khartoum, Sudan: Sakina Yagi
TOI and YS conducted the field survey and collected the data, SY did the analysis and wrote the first draft of the manuscript, RHA and TMN provided support in sampling and plant species identification, AMM provided technical support and helped in the write-up and revision and TOK designed the study and supervised the project. All authors read and approved the final manuscript.
Correspondence to Tarig Osman Khider.
The present study is based purely on a field survey and did not involve human or animal trials.
Ethical guidelines of the International Society of Ethnobiology (http://www.ethnobiology.net/) were strictly followed.
Issa, T.O., Mohamed, Y.S., Yagi, S. et al. Ethnobotanical investigation on medicinal plants in Algoz area (South Kordofan), Sudan. J Ethnobiology Ethnomedicine 14, 31 (2018) doi:10.1186/s13002-018-0230-y
Also known by the names Arcalion, Bisbuthiamine and Enerion, sulbutiamine is a sulfur-containing analog of vitamin B1 (thiamine) that is known to cross the blood-brain barrier easily. Sulbutiamine is reported to move from blood to brain faster than thiamine. It is recommended for patients suffering from mental fatigue caused by emotional and psychological stress. A further selling point of this compound is that it is claimed to lack most of the common side effects linked with other nootropics.
Noopept is a Russian stimulant sometimes suggested for nootropics use as it may be more effective than piracetam or other -racetams, and its smaller doses make it more convenient & possibly safer. Following up on a pilot study, I ran a well-powered blind randomized self-experiment between September 2013 and August 2014 using doses of 12-60mg Noopept & pairs of 3-day blocks to investigate the impact of Noopept on self-ratings of daily functioning in addition to my existing supplementation regimen involving small-to-moderate doses of piracetam. A linear regression, which included other concurrent experiments as covariates & used multiple imputation for missing data, indicates a small benefit to the lower dose levels and harm from the highest 60mg dose level, but no dose nor Noopept as a whole was statistically-significant. It seems Noopept's effects are too subtle to easily notice if they exist, but if one uses it, one should probably avoid 60mg+.
I took the first pill at 12:48 pm. 1:18, still nothing really - head is a little foggy if anything. later noticed a steady sort of mental energy lasting for hours (got a good deal of reading and programming done) until my midnight walk, when I still felt alert, and had trouble sleeping. (Zeo reported a ZQ of 100, but a full 18 minutes awake, 2 or 3 times the usual amount.)
I largely ignored this since the discussions were of sub-RDA doses, and my experience has usually been that RDAs are a poor benchmark and frequently far too low (consider the RDA for vitamin D). This time, I checked the actual RDA - and was immediately shocked and sure I was looking at a bad reference: there was no way the RDA for potassium was seriously 3700-4700mg or 4-5 grams daily, was there? Just as an American, that implied that I was getting less than half my RDA. (How would I get 4g of potassium in the first place? Eat a dozen bananas a day⸮) I am not a vegetarian, nor is my diet that fantastic: I figured I was getting some potassium from the ~2 fresh tomatoes I was eating daily, but otherwise my diet was not rich in potassium sources. I have no blood tests demonstrating deficiency, but given the figures, I cannot see how I could not be deficient.
That first night, I had severe trouble sleeping, falling asleep in 30 minutes rather than my usual 19.6±11.9, waking up 12 times (5.9±3.4), and spending ~90 minutes awake (18.1±16.2), and naturally I felt unrested the next day; I initially assumed it was because I had left a fan on (moving air keeps me awake) but the new potassium is also a possible culprit. When I asked, Kevin said:
Discussions of PEA mention that it's almost useless without a MAOI to pave the way; hence, when I decided to get deprenyl and noticed that deprenyl is a MAOI, I decided to also give PEA a second chance in conjunction with deprenyl. Unfortunately, in part due to my own shenanigans, Nubrain canceled the deprenyl order and so I have 20g of PEA sitting around. Well, it'll keep until such time as I do get a MAOI.
My first impression of ~1g around 12:30PM was that while I do not feel like running around, within an hour I did feel like the brain fog was lighter than before. The effect wasn't dramatic, so I can't be very confident. Operationalizing brain fog for an experiment might be hard: it doesn't necessarily feel like I would do better on dual n-back. I took 2 smaller doses 3 and 6 hours later, to no further effect. Over the following weeks and months, I continued to randomly alternate between potassium & non-potassium days. I noticed no effects other than sleep problems.
The use of cognitive enhancers by healthy individuals sparked debate about ethics and safety. Cognitive enhancement by pharmaceutical means was considered a form of illicit drug use in some places, even while other cognitive enhancers, such as caffeine and nicotine, were freely available. The conflict therein raised the possibility for further acceptance of smart drugs in the future. However, the long-term effects of smart drugs on otherwise healthy brains were unknown, delaying safety assessments.
Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area "…is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence…" (p. 185).
But there would also be significant downsides. Amphetamines are structurally similar to crystal meth – a potent, highly addictive recreational drug which has ruined countless lives and can be fatal. Both Adderall and Ritalin are known to be addictive, and there are already numerous reports of workers who struggled to give them up. There are also side effects, such as nervousness, anxiety, insomnia, stomach pains, and even hair loss, among others.
The hormone testosterone (Examine.com; FDA adverse events) needs no introduction. This is one of the scariest substances I have considered using: it affects so many bodily systems in so many ways that it seems almost impossible to come up with a net summary, either positive or negative. With testosterone, the problem is not the usual nootropics problem that there is a lack of human research; the problem is that the summary constitutes a textbook - or two. That said, the 2011 review The role of testosterone in social interaction (excerpts) gives me the impression that testosterone does indeed play into risk-taking, motivation, and social status-seeking; some useful links and a representative anecdote:
But how, exactly, does he do it? Sure, Cruz typically eats well, exercises regularly and tries to get sufficient sleep, and he's no stranger to coffee. But he has another tool in his toolkit that he finds makes a noticeable difference in his ability to efficiently and effectively conquer all manner of tasks: Alpha Brain, a supplement marketed to improve memory, focus and mental quickness.
Two studies investigated the effects of MPH on reversal learning in simple two-choice tasks (Clatworthy et al., 2009; Dodds et al., 2008). In these tasks, participants begin by choosing one of two stimuli and, after repeated trials with these stimuli, learn that one is usually rewarded and the other is usually not. The rewarded and nonrewarded stimuli are then reversed, and participants must then learn to choose the new rewarded stimulus. Although each of these studies found functional neuroimaging correlates of the effects of MPH on task-related brain activity (increased blood oxygenation level-dependent signal in frontal and striatal regions associated with task performance found by Dodds et al., 2008, using fMRI and increased dopamine release in the striatum as measured by increased raclopride displacement by Clatworthy et al., 2009, using PET), neither found reliable effects on behavioral performance in these tasks. The one significant result concerning purely behavioral measures was Clatworthy et al.'s (2009) finding that participants who scored higher on a self-report personality measure of impulsivity showed more performance enhancement with MPH. MPH's effect on performance in individuals was also related to its effects on individuals' dopamine activity in specific regions of the caudate nucleus.
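As a purely illustrative sketch (not the procedure or analysis of the cited studies), the structure of such a two-choice reversal-learning task can be mimicked with a simple reward-tracking learner; all parameters below are arbitrary assumptions:

```python
# Illustrative simulation of a two-choice reversal-learning task with an
# epsilon-greedy value learner. Parameters are arbitrary, for illustration only.
import random

def run_task(n_trials=200, reversal_at=100, p_reward=0.8, lr=0.2, epsilon=0.1):
    values = [0.0, 0.0]                   # learner's value estimate for each stimulus
    rewarded = 0                          # stimulus 0 is usually rewarded at first
    correct = 0
    for t in range(n_trials):
        if t == reversal_at:
            rewarded = 1 - rewarded       # reversal: the other stimulus now pays off
        if random.random() < epsilon:     # occasional exploration lets the learner
            choice = random.randrange(2)  # detect the reversal
        else:
            choice = 0 if values[0] >= values[1] else 1
        reward = 1 if (choice == rewarded and random.random() < p_reward) else 0
        values[choice] += lr * (reward - values[choice])
        correct += (choice == rewarded)
    return correct / n_trials

print(run_task())  # fraction of trials on which the currently rewarded stimulus was chosen
```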
"Smart Drugs" are chemical substances that enhance cognition and memory or facilitate learning. However, within this general umbrella of "things you can eat that make you smarter," there are many variations as far as methods of action within the body, perceptible (and measurable) effects, potential for use and abuse, and the spillover impact on the body's non-cognitive processes.
When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit.
Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below.
…researchers have added a new layer to the smart pill conversation. Adderall, they've found, makes you think you're doing better than you actually are….Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job….But the results of the new University of Pennsylvania study, funded by the U.S. Navy and not yet published but presented at the annual Society for Neuroscience conference last month, are consistent with much of the existing research. As a group, no overall statistically-significant improvement or impairment was seen as a result of taking Adderall. The research team tested 47 subjects, all in their 20s, all without a diagnosis of ADHD, on a variety of cognitive functions, from working memory-how much information they could keep in mind and manipulate-to raw intelligence, to memories for specific events and faces….The last question they asked their subjects was: How and how much did the pill influence your performance on today's tests? Those subjects who had been given Adderall were significantly more likely to report that the pill had caused them to do a better job on the tasks they'd been given, even though their performance did not show an improvement over that of those who had taken the placebo. According to Irena Ilieva…it's the first time since the 1960s that a study on the effects of amphetamine, a close cousin of Adderall, has asked how subjects perceive the effect of the drug on their performance.
A week later: Golden Sumatran, 3 spoonfuls, a more yellowish powder. (I combined it with some tea dregs to hopefully cut the flavor a bit.) Had a paper to review that night. No (subjectively noticeable) effect on energy or productivity. I tried 4 spoonfuls at noon the next day; nothing except a little mental tension, for lack of a better word. I think that was just the harbinger of what my runny nose that day and the day before was, a head cold that laid me low during the evening.
If you have spent any time shopping for memory enhancer pills, you have noticed dozens of products on the market. Each product is advertised to improve memory, concentration, and focus. However, choosing the first product promising results may not produce the desired improvements. Taking the time to research your options and compare products will improve your chances of finding a supplement that works.
Most people I talk to about modafinil seem to use it for daytime usage; for me that has not ever worked out well, but I had nothing in particular to show against it. So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalerts pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We'll see if I notice anything on DNB or possibly gwern.net edits.
I took 1.5mg of melatonin, and went to bed at ~1:30AM; I woke up around 6:30, took a modafinil pill/200mg, and felt pretty reasonable. By noon my mind started to feel a bit fuzzy, and lunch didn't make much of it go away. I've been looking at studies, and users seem to degrade after 30 hours; I started on mid-Thursday, so call that 10 hours, then 24 (Friday), 24 (Saturday), and 14 (Sunday), totaling 72hrs with <20hrs sleep; this might be equivalent to 52hrs with no sleep, and Wikipedia writes:
ADHD medication sales are growing rapidly, with annual revenues of $12.9 billion in 2015. These drugs can be obtained legally by those who have a prescription, which also includes those who have deliberately faked the symptoms in order to acquire the desired medication. (According to an experiment published in 2010, it is difficult for medical practitioners to separate those who feign the symptoms from those who actually have them.) That said, faking might not be necessary if a doctor deems your desired productivity level or your stress around a big project as reason enough to prescribe medication.
A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes."
Many of the food-derived ingredients that are often included in nootropics—omega-3s in particular, but also flavonoids—do seem to improve brain health and function. But while eating fatty fish, berries and other healthy foods that are high in these nutrients appears to be good for your brain, the evidence backing the cognitive benefits of OTC supplements that contain these and other nutrients is weak.
White, Becker-Blease, & Grace-Bishop (2006): data from 2002; sample of large university undergraduates and graduates (N = 1,025); prevalence 16.2% (lifetime); reasons given: 68.9% improve attention, 65.2% partying, 54.3% improve study habits, 20% improve grades, 9.1% reduce hyperactivity; frequency of use: 15.5% 2–3 times per week, 33.9% 2–3 times per month, 50.6% 2–3 times per year; availability: 58% easy or somewhat easy to obtain; write-in comments indicated many obtained stimulants from friends with prescriptions.
These are quite abstract concepts, though. There is a large gap, a grey area in between these concepts and our knowledge of how the brain functions physiologically – and it's in this grey area that cognitive enhancer development has to operate. Amy Arnsten, Professor of Neurobiology at Yale Medical School, is investigating how the cells in the brain work together to produce our higher cognition and executive function, which she describes as "being able to think about things that aren't currently stimulating your senses, the fundamentals of abstraction. This involves mental representations of our goals for the future, even if it's the future in just a few seconds."
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings.
"The author's story alone is a remarkable account of not just survival, but transcendence of a near-death experience. Cavin went on to become an advocate for survival and survivors of traumatic brain injuries, discovering along the way the key role played by nutrition. But this book is not just for injury survivors. It is for anyone who wants to live (and eat) well."
The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reported moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10. Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So (200 − 0) / ln(1.05) × 0.50 × 0.25 ≈ 512! The experiment probably used up no more than an hour or two total.
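For what it's worth, the arithmetic behind that figure can be reproduced directly; the inputs below are just the ones stated above ($200/year at stake, a 5% annual discount rate, a 50% prior, and a 25% information-quality penalty):

```python
# Reproducing the value-of-information estimate quoted above; a sketch only.
import math

annual_value = 200 - 0         # dollars per year at stake
discount = math.log(1.05)      # continuous discounting at 5% per year
p_useful = 0.50                # prior probability that Adderall is worth using
info_quality = 0.25            # penalty for a small, informal experiment

print(round(annual_value / discount * p_useful * info_quality))   # ~512
```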
Government restrictions and difficulty getting approval for various medical devices is expected to impede market growth. The stringency of approval by regulatory authorities is accompanied by the high cost of smart pills to challenge the growth of the smart pills market. However, the demand for speedy diagnosis, and improving reimbursement policies are likely to reveal market opportunities.
Fortunately for me, the FDA decided Smart Powder's advertising was too explicit and ordered its piracetam sales stopped; I was equivocal at the previous price point, but then I saw that between the bulk discount and the fire-sale coupon, 3kg was only $99.99 (shipping was amortized over that, the choline, caffeine, and tryptophan). So I ordered in September 2010. As well, I had decided to cap my own pills, eliminating the inconvenience and bad taste. 3kg goes a very long way so I am nowhere close to running out of my pills; there is nothing to report since, as the pills are simply part of my daily routine.
Some data suggest that cognitive enhancers do improve some types of learning and memory, but many other data say these substances have no effect. The strongest evidence for these substances is for the improvement of cognitive function in people with brain injury or disease (for example, Alzheimer's disease and traumatic brain injury). Although "popular" books and companies that sell smart drugs will try to convince you that these drugs work, the evidence for any significant effects of these substances in normal people is weak. There are also important side-effects that must be considered. Many of these substances affect neurotransmitter systems in the central nervous system. The effects of these chemicals on neurological function and behavior is unknown. Moreover, the long-term safety of these substances has not been adequately tested. Also, some substances will interact with other substances. A substance such as the herb ma-huang may be dangerous if a person stops taking it suddenly; it can also cause heart attacks, stroke, and sudden death. Finally, it is important to remember that products labeled as "natural" do not make them "safe."
And as before, around 9 AM I began to feel the peculiar feeling that I was mentally able and apathetic (in a sort of aboulia way); so I decided to try what helped last time, a short nap. But this time, though I took a full hour, I slept not a wink and my Zeo recorded only 2 transient episodes of light sleep! A back-handed sort of proof of alertness, I suppose. I didn't bother trying again. The rest of the day was mediocre, and I wound up spending much of it on chores and whatnot out of my control. Mentally, I felt better past 3 PM.
The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously.
Not all drug users are searching for a chemical escape hatch. A newer and increasingly normalized drug culture is all about heightening one's current relationship to reality—whether at work or school—by boosting the brain's ability to think under stress, stay alert and productive for long hours, and keep track of large amounts of information. In the name of becoming sharper traders, medical interns, or coders, people are taking pills typically prescribed for conditions including ADHD, narcolepsy, and Alzheimer's. Others down "stacks" of special "nootropic" supplements.
2 break days later, I took the quarter-pill at 11:22 PM. I had discovered I had for years physically possessed a very long interview not available online, and transcribing that seemed like a good way to use up a few hours. I did some reading, some Mnemosyne, and started it around midnight, finishing around 2:30 AM. There seemed a mental dip around 30 minutes after the armodafinil, but then things really picked up and I made very good progress transcribing the final draft of 9000 words in that period. (In comparison, The Conscience of the Otaking parts 2 & 4 were much easier to read than the tiny font of the RahXephon booklet, took perhaps 3 hours, and totaled only 6500 words. The nicotine is probably also to thank.) By 3:40 AM, my writing seems to be clumsier and my mind fogged. Began DNB at 3:50: 61/53/44. Went to bed at 4:05, fell asleep in 16 minutes, slept for 3:56. Waking up was easier and I felt better, so the extra hour seemed to help.
Before you try nootropics, I suggest you start with the basics: get rid of the things in your diet and life that reduce cognitive performance first. That is easiest. Then, add in energizers like Brain Octane and clean up your diet. Then, go for the herbals and the natural nootropics. Use the pharmaceuticals selectively only after you've figured out your basics. | CommonCrawl |
July 2011, 5(3): 593-608. doi: 10.3934/jmd.2011.5.593
Bernoulli equilibrium states for surface diffeomorphisms
Omri M. Sarig
Faculty of Mathematics and Computer Science, The Weizmann Institute of Science, POB 26, Rehovot, Israel
Received: May 2011; Revised: July 2011; Published: November 2011
Suppose $f\colon M\to M$ is a $C^{1+\alpha}$ $(\alpha>0)$ diffeomorphism on a compact smooth orientable manifold $M$ of dimension 2, and let $\mu_\Psi$ be an equilibrium measure for a Hölder-continuous potential $\Psi\colon M\to \mathbb R$. We show that if $\mu_\Psi$ has positive measure-theoretic entropy, then $f$ is measure-theoretically isomorphic mod $\mu_\Psi$ to the product of a Bernoulli scheme and a finite rotation.
Keywords: countable Markov partitions, surface diffeomorphisms, Bernoulli, equilibrium measures.
Mathematics Subject Classification: Primary: 37D35; Secondary: 37D2.
Citation: Omri M. Sarig. Bernoulli equilibrium states for surface diffeomorphisms. Journal of Modern Dynamics, 2011, 5 (3) : 593-608. doi: 10.3934/jmd.2011.5.593
Commitment scheme
A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later.[1] Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
1. the commit phase during which a value is chosen and committed to
2. the reveal phase during which the value is revealed by the sender, then the receiver verifies its authenticity
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is called the commitment. It is essential that the specific value chosen cannot be known by the receiver at that time (this is called the hiding property). A simple reveal phase would consist of a single message, the opening, from the sender to the receiver, followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called the binding property).
The concept of commitment schemes was perhaps first formalized by Gilles Brassard, David Chaum, and Claude Crépeau in 1988,[2] as part of various zero-knowledge protocols for NP, based on various types of commitment schemes.[3][4] But the concept was used prior to that without being treated formally.[5][6] The notion of commitments appeared earliest in works by Manuel Blum,[7] Shimon Even,[8] and Adi Shamir et al.[9] The terminology seems to have originated with Blum,[6] although commitment schemes can be interchangeably called bit commitment schemes, a name sometimes reserved for the special case where the committed value is a single bit. Earlier still, commitment via one-way hash functions had been considered, for example as part of the Lamport signature, the original one-time one-bit signature scheme.
Applications
Coin flipping
Suppose Alice and Bob want to resolve some dispute via coin flipping. If they are physically in the same place, a typical procedure might be:
1. Alice "calls" the coin flip
2. Bob flips the coin
3. If Alice's call is correct, she wins, otherwise Bob wins
If Alice and Bob are not in the same place a problem arises. Once Alice has "called" the coin flip, Bob can stipulate the flip "results" to be whatever is most desirable for him. Similarly, if Alice doesn't announce her "call" to Bob, after Bob flips the coin and announces the result, Alice can report that she called whatever result is most desirable for her. Alice and Bob can use commitments in a procedure that will allow both to trust the outcome:
1. Alice "calls" the coin flip but only tells Bob a commitment to her call,
2. Bob flips the coin and reports the result,
3. Alice reveals what she committed to,
4. Bob verifies that Alice's call matches her commitment,
5. If Alice's revelation matches the coin result Bob reported, Alice wins
For Bob to be able to skew the result in his favor, he must be able to determine the call hidden in Alice's commitment. If the commitment scheme is a good one, Bob cannot skew the results. Similarly, Alice cannot affect the outcome if she cannot change the value she commits to.
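The exchange can be sketched in a few lines of Python. The hash-based commitment below is only an illustration: a salted SHA-256 digest serves as the commitment and the random nonce as the opening; the function and variable names are arbitrary choices for this sketch, not part of any standard protocol.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit phase: return (commitment, opening nonce)."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def check(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Reveal phase: the receiver recomputes the hash and compares."""
    return hashlib.sha256(nonce + value).digest() == commitment

# 1. Alice "calls" the coin flip but only sends Bob the commitment.
alice_call = b"heads"
c, opening = commit(alice_call)
# 2. Bob flips the coin and reports the result.
bob_flip = secrets.choice([b"heads", b"tails"])
# 3.-4. Alice reveals her call and opening; Bob verifies the commitment.
assert check(c, alice_call, opening)
# 5. Alice wins exactly when her verified call matches Bob's reported flip.
alice_wins = alice_call == bob_flip
```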
A real-life application of this problem exists, when people (often in media) commit to a decision or give an answer in a "sealed envelope", which is then opened later. "Let's find out if that's what the candidate answered", for example on a game show, can serve as a model of this system.
Zero-knowledge proofs
One particular motivating example is the use of commitment schemes in zero-knowledge proofs. Commitments are used in zero-knowledge proofs for two main purposes: first, to allow the prover to participate in "cut and choose" proofs where the verifier will be presented with a choice of what to learn, and the prover will reveal only what corresponds to the verifier's choice. Commitment schemes allow the prover to specify all the information in advance, and only reveal what should be revealed later in the proof.[10] Second, commitments are also used in zero-knowledge proofs by the verifier, who will often specify their choices ahead of time in a commitment. This allows zero-knowledge proofs to be composed in parallel without revealing additional information to the prover.[11]
Signature schemes
The Lamport signature scheme is a digital signature system that relies on maintaining two sets of secret data packets, publishing verifiable hashes of the data packets, and then selectively revealing partial secret data packets in a manner that conforms specifically to the data to be signed. In this way, the prior public commitment to the secret values becomes a critical part of the functioning of the system.
Because the Lamport signature system cannot be used more than once, a system to combine many Lamport key-sets under a single public value that can be tied to a person and verified by others was developed. This system uses trees of hashes to compress many published Lamport-key-commitment sets into a single hash value that can be associated with the prospective author of later-verified data.
Verifiable secret sharing
Another important application of commitments is in verifiable secret sharing, a critical building block of secure multiparty computation. In a secret sharing scheme, each of several parties receive "shares" of a value that is meant to be hidden from everyone. If enough parties get together, their shares can be used to reconstruct the secret, but even a malicious cabal of insufficient size should learn nothing. Secret sharing is at the root of many protocols for secure computation: in order to securely compute a function of some shared input, the secret shares are manipulated instead. However, if shares are to be generated by malicious parties, it may be important that those shares can be checked for correctness. In a verifiable secret sharing scheme, the distribution of a secret is accompanied by commitments to the individual shares. The commitments reveal nothing that can help a dishonest cabal, but the shares allow each individual party to check to see if their shares are correct.[12]
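As a concrete, if toy, illustration of shares accompanied by commitments, the Python sketch below implements a Feldman-style verifiable secret sharing over a small prime-order subgroup. The parameters q, p, g are illustrative assumptions and far too small to be secure, and, unlike the Pedersen variant, this flavour publishes g raised to the secret, so it hides the secret only computationally.

```python
import secrets

# Toy parameters (NOT secure): g generates the subgroup of prime order q in Z_p^*.
q, p, g = 1019, 2039, 4

def deal(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it, and
    publish commitments to the coefficients of the sharing polynomial."""
    coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(k - 1)]
    commitments = [pow(g, a, p) for a in coeffs]                 # broadcast
    shares = [(i, sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]                          # sent privately
    return shares, commitments

def verify_share(i: int, s: int, commitments) -> bool:
    """Each party checks its own share against the public commitments."""
    expected = 1
    for j, c_j in enumerate(commitments):
        expected = expected * pow(c_j, pow(i, j, q), p) % p
    return pow(g, s, p) == expected

shares, com = deal(secret=777, k=3, n=5)
assert all(verify_share(i, s, com) for i, s in shares)
```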
Defining the security
Formal definitions of commitment schemes vary strongly in notation and in flavour. The first such flavour is whether the commitment scheme provides perfect or computational security with respect to the hiding or binding properties. Another such flavour is whether the commitment is interactive, i.e. whether both the commit phase and the reveal phase can be seen as being executed by a cryptographic protocol or whether they are non-interactive, consisting of two algorithms Commit and CheckReveal. In the latter case CheckReveal can often be seen as a derandomised version of Commit, with the randomness used by Commit constituting the opening information.
If the commitment C to a value x is computed as C:=Commit(x,open) with open being the randomness used for computing the commitment, then CheckReveal (C,x,open) reduces to simply verifying the equation C=Commit (x,open).
Using this notation and some knowledge about mathematical functions and probability theory we formalise different versions of the binding and hiding properties of commitments. The two most important combinations of these properties are perfectly binding and computationally hiding commitment schemes and computationally binding and perfectly hiding commitment schemes. Note that no commitment scheme can be at the same time perfectly binding and perfectly hiding – a computationally unbounded adversary can simply generate Commit(x,open) for every value of x and open until finding a pair that outputs C, and in a perfectly binding scheme this uniquely identifies x.
Computational binding
Let open be chosen from a set of size $2^{k}$, i.e., it can be represented as a k bit string, and let ${\text{Commit}}_{k}$ be the corresponding commitment scheme. As the size of k determines the security of the commitment scheme it is called the security parameter.
Then for all non-uniform probabilistic polynomial time algorithms that output $x,x'$ and $open,open'$ of increasing length k, the probability that $x\neq x'$ and ${\text{Commit}}_{k}(x,open)={\text{Commit}}_{k}(x',open')$ is a negligible function in k.
This is a form of asymptotic analysis. It is also possible to state the same requirement using concrete security: A commitment scheme Commit is $(t,\epsilon )$ secure, if for all algorithms that run in time t and output $x,x',open,open'$ the probability that $x\neq x'$ and ${\text{Commit}}(x,open)={\text{Commit}}(x',open')$ is at most $\epsilon $.
Perfect, statistical, and computational hiding
Let $U_{k}$ be the uniform distribution over the $2^{k}$ opening values for security parameter k. A commitment scheme is respectively perfect, statistical, or computational hiding, if for all $x\neq x'$ the probability ensembles $\{{\text{Commit}}_{k}(x,U_{k})\}_{k\in \mathbb {N} }$ and $\{{\text{Commit}}_{k}(x',U_{k})\}_{k\in \mathbb {N} }$ are equal, statistically close, or computationally indistinguishable.
Impossibility of universally composable commitment schemes
It is impossible to realize commitment schemes in the universal composability (UC) framework. The reason is that UC commitment has to be extractable, as shown by Canetti and Fischlin[13] and explained below.
The ideal commitment functionality, denoted here by F, works roughly as follows. Committer C sends value m to F, which stores it and sends "receipt" to receiver R. Later, C sends "open" to F, which sends m to R.
Now, assume we have a protocol π that realizes this functionality. Suppose that the committer C is corrupted. In the UC framework, that essentially means that C is now controlled by the environment, which attempts to distinguish protocol execution from the ideal process. Consider an environment that chooses a message m and then tells C to act as prescribed by π, as if it has committed to m. Note here that in order to realize F, the receiver must, after receiving a commitment, output a message "receipt". After the environment sees this message, it tells C to open the commitment.
The protocol is only secure if this scenario is indistinguishable from the ideal case, where the functionality interacts with a simulator S. Here, S has control of C. In particular, whenever R outputs "receipt", F has to do likewise. The only way to do that is for S to tell C to send a value to F. However, note that by this point, m is not known to S. Hence, when the commitment is opened during protocol execution, it is unlikely that F will open to m, unless S can extract m from the messages it received from the environment before R outputs the receipt.
However a protocol that is extractable in this sense cannot be statistically hiding. Suppose such a simulator S exists. Now consider an environment that, instead of corrupting C, corrupts R instead. Additionally it runs a copy of S. Messages received from C are fed into S, and replies from S are forwarded to C.
The environment initially tells C to commit to a message m. At some point in the interaction, S will commit to a value m′. This message is handed to R, who outputs m′. Note that by assumption we have m' = m with high probability. Now in the ideal process the simulator has to come up with m. But this is impossible, because at this point the commitment has not been opened yet, so the only message R can have received in the ideal process is a "receipt" message. We thus have a contradiction.
Construction
A commitment scheme can either be perfectly binding (it is impossible for Alice to alter her commitment after she has made it, even if she has unbounded computational resources); or perfectly concealing (it is impossible for Bob to find out the commitment without Alice revealing it, even if he has unbounded computational resources); or formulated as an instance-dependent commitment scheme, which is either hiding or binding depending on the solution to another problem.[14][15] A commitment scheme can not be both perfectly hiding and perfectly binding at the same time.
Bit-commitment in the random oracle model
Bit-commitment schemes are trivial to construct in the random oracle model. Given a hash function H with a 3k bit output, to commit the k-bit message m, Alice generates a random k bit string R and sends Bob H(R||m). The probability that any R′, m′ exist where m′ ≠ m such that H(R′||m′) = H(R||m) is ≈ $2^{-k}$, but to test any guess at the message m Bob will need to make $2^{k}$ (for an incorrect guess) or $2^{k-1}$ (on average, for a correct guess) queries to the random oracle.[16] Note that earlier schemes based on hash functions can essentially be thought of as schemes based on idealizing these hash functions as a random oracle.
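A direct transcription of this construction, with SHAKE-256 playing the role of the random oracle H with a 3k-bit output (an assumption of this sketch, chosen only because it gives an output of the required length):

```python
import hashlib
import secrets

k = 128                                    # security parameter, in bits

def commit(m: bytes) -> tuple[bytes, bytes]:
    """Commit to a k-bit message m: pick a random k-bit R and send H(R||m)."""
    assert 8 * len(m) == k
    R = secrets.token_bytes(k // 8)
    return hashlib.shake_256(R + m).digest(3 * k // 8), R   # 3k-bit output

def check(c: bytes, m: bytes, R: bytes) -> bool:
    return hashlib.shake_256(R + m).digest(3 * k // 8) == c

m = secrets.token_bytes(k // 8)
c, R = commit(m)
assert check(c, m, R)
```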
Bit-commitment from any one-way permutation
One can create a bit-commitment scheme from any one-way function that is injective. The scheme relies on the fact that every one-way function can be modified (via the Goldreich-Levin theorem) to possess a computationally hard-core predicate (while retaining the injective property).
Let f be an injective one-way function, with h a hard-core predicate. Then to commit to a bit b Alice picks a random input x and sends the triple
$(h,f(x),b\oplus h(x))$
to Bob, where $\oplus $ denotes XOR, i.e., bitwise addition modulo 2. To decommit, Alice simply sends x to Bob. Bob verifies by computing f(x) and comparing to the committed value. This scheme is concealing because for Bob to recover b he must recover h(x). Since h is a computationally hard-core predicate, recovering h(x) from f(x) with probability greater than one-half is as hard as inverting f. Perfect binding follows from the fact that f is injective and thus f(x) has exactly one preimage.
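A toy Python rendering of this construction. The RSA map x to x^e mod N with tiny primes stands in for the injective one-way function f (an assumption, and nowhere near secure at this size), and the Goldreich-Levin inner-product bit plays the role of the hard-core predicate h, described by the random string r that is sent along with the commitment.

```python
import secrets

# Toy stand-in for an injective one-way function: RSA map on Z_N (NOT secure).
p, q = 1000003, 1000033
N, e = p * q, 65537            # gcd(e, (p-1)(q-1)) = 1, so x -> x^e is injective

def f(x: int) -> int:
    return pow(x, e, N)

def hardcore(x: int, r: int) -> int:
    """Goldreich-Levin predicate: inner product of the bit strings of x and r."""
    return bin(x & r).count("1") % 2

def commit_bit(b: int):
    x = secrets.randbelow(N - 2) + 1
    r = secrets.randbelow(N)                       # selects the predicate h = h_r
    return (r, f(x), b ^ hardcore(x, r)), x        # commitment triple, opening x

def reveal_bit(com, x: int) -> int:
    r, fx, masked = com
    assert f(x) == fx, "opening does not match the committed f(x)"
    return masked ^ hardcore(x, r)

com, opening = commit_bit(1)
assert reveal_bit(com, opening) == 1
```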
Bit-commitment from a pseudo-random generator
Note that since we do not know how to construct a one-way permutation from any one-way function, this section reduces the strength of the cryptographic assumption necessary to construct a bit-commitment protocol.
In 1991 Moni Naor showed how to create a bit-commitment scheme from a cryptographically secure pseudorandom number generator.[17] The construction is as follows. If G is a pseudo-random generator taking n bits to 3n bits, then if Alice wants to commit to a bit b:
• Bob selects a random 3n-bit vector R and sends R to Alice.
• Alice selects a random n-bit vector Y and computes the 3n-bit vector G(Y).
• If b=1 Alice sends G(Y) to Bob, otherwise she sends the bitwise exclusive-or of G(Y) and R to Bob.
To decommit Alice sends Y to Bob, who can then check whether he initially received G(Y) or G(Y) $\oplus $ R.
This scheme is statistically binding, meaning that even if Alice is computationally unbounded she cannot cheat with probability greater than $2^{-n}$. For Alice to cheat, she would need to find a Y', such that G(Y') = G(Y) $\oplus $ R. If she could find such a value, she could decommit by sending the truth and Y, or send the opposite answer and Y'. However, G(Y) and G(Y') are only able to produce $2^{n}$ possible values each (that's $2^{2n}$) while R is picked out of $2^{3n}$ values. She does not pick R, so there is a $2^{2n}/2^{3n} = 2^{-n}$ probability that a Y' satisfying the equation required to cheat will exist.
The concealing property follows from a standard reduction, if Bob can tell whether Alice committed to a zero or one, he can also distinguish the output of the pseudo-random generator G from true-random, which contradicts the cryptographic security of G.
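A compact Python sketch of Naor's protocol, with SHAKE-256 used as a stand-in for the pseudo-random generator G stretching n bits to 3n bits (an illustrative assumption; any secure PRG would do):

```python
import hashlib
import secrets

n = 128                                    # seed length in bits

def G(seed: bytes) -> int:
    """Stand-in PRG stretching n bits to 3n bits."""
    return int.from_bytes(hashlib.shake_256(seed).digest(3 * n // 8), "big")

# Bob selects and sends a random 3n-bit vector R.
R = secrets.randbits(3 * n)

def commit(b: int, R: int) -> tuple[int, bytes]:
    """Alice: send G(Y) for b = 1, and G(Y) xor R for b = 0."""
    Y = secrets.token_bytes(n // 8)
    return (G(Y) if b == 1 else G(Y) ^ R), Y

def decommit(c: int, Y: bytes, R: int) -> int:
    """Bob: decide which bit the revealed seed Y corresponds to."""
    if c == G(Y):
        return 1
    if c == G(Y) ^ R:
        return 0
    raise ValueError("invalid opening")

c, Y = commit(0, R)
assert decommit(c, Y, R) == 0
```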
A perfectly binding scheme based on the discrete log problem and beyond
Alice chooses a cyclic group of prime order p, with generator g.
Alice randomly picks a secret value x from 0 to p − 1 to commit to, calculates $c = g^{x}$ and publishes c. The discrete logarithm problem dictates that from c, it is computationally infeasible to compute x, so under this assumption, Bob cannot compute x. On the other hand, Alice cannot compute an $x' \neq x$ such that $g^{x'} = c$, so the scheme is binding.
This scheme isn't perfectly concealing as someone could find the commitment if he manages to solve the discrete logarithm problem. In fact, this scheme isn't hiding at all with respect to the standard hiding game, where an adversary should be unable to guess which of two messages he chose were committed to - similar to the IND-CPA game. One consequence of this is that if the space of possible values of x is small, then an attacker could simply try them all and the commitment would not be hiding.
A better example of a perfectly binding commitment scheme is one where the commitment is the encryption of x under a semantically secure, public-key encryption scheme with perfect completeness, and the decommitment is the string of random bits used to encrypt x. An example of an information-theoretically hiding commitment scheme is the Pedersen commitment scheme,[18] which is computationally binding under the discrete logarithm assumption.[19] In addition to the scheme above, it uses another generator h of the prime-order group and a random number r. The commitment is set to $c=g^{x}h^{r}$.[20]
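A minimal Python sketch of a Pedersen commitment over a toy subgroup. The parameters below (q = 1019, p = 2q + 1, generators g = 4 and h = 9) are illustrative assumptions, far too small to be secure, and in practice h must be generated so that nobody knows the discrete logarithm of h base g, otherwise binding fails. The last lines also check the additive homomorphic property discussed in a later section.

```python
import secrets

# Toy subgroup of squares mod p, of prime order q (NOT secure parameters).
q, p = 1019, 2039
g, h = 4, 9        # in practice, h chosen so that log_g(h) is unknown

def commit(x, r=None):
    r = secrets.randbelow(q) if r is None else r
    return pow(g, x, p) * pow(h, r, p) % p, r

def check(c, x, r):
    return c == pow(g, x, p) * pow(h, r, p) % p

c, r = commit(42)
assert check(c, 42, r)

# Additive homomorphism: C(x1, r1) * C(x2, r2) = C(x1 + x2, r1 + r2).
c1, r1 = commit(10)
c2, r2 = commit(20)
assert c1 * c2 % p == commit(30, (r1 + r2) % q)[0]
```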
These constructions are tightly related to and based on the algebraic properties of the underlying groups, and the notion originally seemed to be very much related to the algebra. However, it was shown that basing statistically binding commitment schemes on general unstructured assumption is possible, via the notion of interactive hashing for commitments from general complexity assumptions (specifically and originally, based on any one way permutation) as in.[21]
A perfectly hiding commitment scheme based on RSA
Alice selects $N$ such that $N=p\cdot q$, where $p$ and $q$ are large secret prime numbers. Additionally, she selects a prime $e$ such that $e>N^{2}$ and $gcd(e,\phi (N^{2}))=1$. Alice then computes a public number $g_{m}$ as an element of maximum order in the $\mathbb {Z} _{N^{2}}^{*}$ group.[22] Finally, Alice commits to her secret $m$ by first generating a random number $r$ from $\mathbb {Z} _{N^{2}}^{*}$ and then by computing $c=m^{e}g_{m}^{r}$.
The security of the above commitment relies on the hardness of the RSA problem and has perfect hiding and computational binding.[23]
Additive and Multiplicative Homomorphic properties of commitments
The Pedersen commitment scheme introduces an interesting homomorphic property that allows performing addition between two commitments. More specifically, given two messages $m_{1}$ and $m_{2}$ and randomness $r_{1}$ and $r_{2}$, respectively, it is possible to generate a new commitment such that: $C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}+m_{2},r_{1}+r_{2})$. Formally:
$C(m_{1},r_{1})\cdot C(m_{2},r_{2})=g^{m_{1}}h^{r_{1}}\cdot g^{m_{2}}h^{r_{2}}=g^{m_{1}+m_{2}}h^{r_{1}+r_{2}}=C(m_{1}+m_{2},r_{1}+r_{2})$
To open the above Pedersen commitment to the new message $m_{1}+m_{2}$, the randomness values $r_{1}$ and $r_{2}$ have to be added.
Similarly, the RSA-based commitment mentioned above has a homomorphic property with respect to the multiplication operation. Given two messages $m_{1}$ and $m_{2}$ with randomness $r_{1}$ and $r_{2}$, respectively, one can compute: $C(m_{1},r_{1})\cdot C(m_{2},r_{2})=C(m_{1}\cdot m_{2},r_{1}+r_{2})$. Formally: $C(m_{1},r_{1})\cdot C(m_{2},r_{2})=m_{1}^{e}g_{m}^{r_{1}}\cdot m_{2}^{e}g_{m}^{r_{2}}=(m_{1}\cdot m_{2})^{e}g_{m}^{r_{1}+r_{2}}=C(m_{1}\cdot m_{2},r_{1}+r_{2})$.
To open the above commitment to the new message $m_{1}\cdot m_{2}$, the randomness values $r_{1}$ and $r_{2}$ have to be added. This newly generated commitment is distributed similarly to a fresh commitment to $m_{1}\cdot m_{2}$.
Partial reveal
Some commitment schemes permit a proof to be given of only a portion of the committed value. In these schemes, the secret value $X$ is a vector of many individually separable values.
$X=(x_{1},x_{2},...,x_{n})$
The commitment $C$ is computed from $X$ in the commit phase. Normally, in the reveal phase, the prover would reveal all of $X$ and some additional proof data (such as $R$ in simple bit-commitment). Instead, the prover is able to reveal any single value from the $X$ vector, and create an efficient proof that it is the authentic $i$th element of the original vector that created the commitment $C$. The proof does not require any values of $X$ other than $x_{i}$ to be revealed, and it is impossible to create valid proofs that reveal different values for any of the $x_{i}$ than the true one.[24]
Vector hashing
Vector hashing is a naive vector commitment partial reveal scheme based on bit-commitment. Values $m_{1},m_{2},...m_{n}$ are chosen randomly. Individual commitments are created by hashing $y_{1}=H(x_{1}||m_{1}),y_{2}=H(x_{2}||m_{2}),...$. The overall commitment is computed as
$C=H(y_{1}||y_{2}||...||y_{n})$
In order to prove one element of the vector $X$, the prover reveals the values
$(i,y_{1},y_{2},...,y_{i-1},x_{i},m_{i},y_{i+1},...,y_{n})$
The verifier is able to compute $y_{i}$ from $x_{i}$ and $m_{i}$, and then is able to verify that the hash of all $y$ values is the commitment $C$. Unfortunately, the proof is $O(n)$ in size and verification time. Alternatively, if $C$ is the set of all $y$ values, then the commitment is $O(n)$ in size, and the proof is $O(1)$ in size and verification time. Either way, the commitment or the proof scales with $O(n)$, which is not optimal.
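A small Python sketch of this naive vector commitment, with SHA-256 as the hash and 128-bit blinding values $m_{i}$ (both illustrative assumptions). The proof carries all the other $y$ values, which is what makes it $O(n)$ in size.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit_vector(xs: list[bytes]):
    ms = [secrets.token_bytes(16) for _ in xs]        # random blinding values
    ys = [H(x + m) for x, m in zip(xs, ms)]
    return H(b"".join(ys)), ys, ms                    # commitment C, prover state

def prove(i: int, xs, ys, ms):
    # Reveal (i, y_1..y_{i-1}, x_i, m_i, y_{i+1}..y_n): O(n) data.
    return i, ys[:i], xs[i], ms[i], ys[i + 1:]

def verify(C: bytes, proof) -> bool:
    i, left, x_i, m_i, right = proof
    return H(b"".join(left + [H(x_i + m_i)] + right)) == C

xs = [b"alpha", b"beta", b"gamma", b"delta"]
C, ys, ms = commit_vector(xs)
assert verify(C, prove(2, xs, ys, ms))
```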
Merkle tree
A common example of a practical partial reveal scheme is a Merkle tree, in which a binary hash tree is created of the elements of $X$. This scheme creates commitments that are $O(1)$ in size, and proofs that are $O(\log _{2}{n})$ in size and verification time. The root hash of the tree is the commitment $C$. To prove that a revealed $x_{i}$ is part of the original tree, only $\log _{2}{n}$ hash values from the tree, one from each level, must be revealed as the proof. The verifier is able to follow the path from the claimed leaf node all the way up to the root, hashing in the sibling nodes at each level, and eventually arriving at a root node value that must equal $C$.[25]
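A bare-bones Merkle commitment in Python, assuming for simplicity that the number of elements is a power of two and using SHA-256 (both assumptions of this sketch, not requirements of the scheme). The proof consists of one sibling hash per level, hence its $O(\log _{2}{n})$ size.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [H(x) for x in leaves]
    while len(level) > 1:
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
    return level[0]                                   # the commitment C

def merkle_proof(leaves, i):
    level, path = [H(x) for x in leaves], []
    while len(level) > 1:
        path.append(level[i ^ 1])                     # sibling at this level
        level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def merkle_verify(root, x_i, i, path):
    node = H(x_i)
    for sibling in path:
        node = H(node + sibling) if i % 2 == 0 else H(sibling + node)
        i //= 2
    return node == root

X = [b"a", b"b", b"c", b"d", b"e", b"f", b"g", b"h"]  # power-of-two length
C = merkle_root(X)
assert merkle_verify(C, b"f", 5, merkle_proof(X, 5))
```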
KZG commitment
A Kate-Zaverucha-Goldberg commitment uses pairing-based cryptography to build a partial reveal scheme with $O(1)$ commitment sizes, proof sizes, and proof verification time. In other words, as $n$, the number of values in $X$, increases, the commitments and proofs do not get larger, and the proofs do not take any more effort to verify.
A KZG commitment requires a predetermined set of parameters to create a pairing, and a trusted trapdoor element. For example, a Tate pairing can be used. Assume that $\mathbb {G} _{1},\mathbb {G} _{2}$ are the additive groups, and $\mathbb {G} _{T}$ is the multiplicative group of the pairing. In other words, the pairing is the map $e:\mathbb {G} _{1}\times \mathbb {G} _{2}\rightarrow \mathbb {G} _{T}$. Let $t\in \mathbb {F} _{p}$ be the trapdoor element (if $p$ is the prime order of $\mathbb {G} _{1}$ and $\mathbb {G} _{2}$), and let $G$ and $H$ be the generators of $\mathbb {G} _{1}$ and $\mathbb {G} _{2}$ respectively. As part of the parameter setup, we assume that $G\cdot t^{i}$ and $H\cdot t^{i}$ are known and shared values for arbitrarily many positive integer values of $i$, while the trapdoor value $t$ itself is discarded and known to no one.
Commit
A KZG commitment reformulates the vector of values to be committed as a polynomial. First, we calculate a polynomial such that $p(i)=x_{i}$ for all values of $x_{i}$ in our vector. Lagrange interpolation allows us to compute that polynomial
$p(x)=\sum _{i=0}^{n-1}x_{i}\prod _{0\leq j<n,j\neq i}{\frac {x-j}{i-j}}$
Under this formulation, the polynomial now encodes the vector, where $p(0)=x_{0},p(1)=x_{1},...$. Let $p_{0},p_{1},...,p_{n-1}$ be the coefficients of $p$, such that $ p(x)=\sum _{i=0}^{n-1}p_{i}x^{i}$. The commitment is calculated as
$C=\sum _{i=0}^{n-1}p_{i}Gt^{i}$
This is computed simply as a dot product between the predetermined values $G\cdot t^{i}$ and the polynomial coefficients $p_{i}$. Since $\mathbb {G} _{1}$ is an additive group with associativity and commutativity, $C$ is equal to simply $G\cdot p(t)$, since all the additions and multiplications with $G$ can be distributed out of the evaluation. Since the trapdoor value $t$ is unknown, the commitment $C$ is essentially the polynomial evaluated at a number known to no one, with the outcome obfuscated into an opaque element of $\mathbb {G} _{1}$.
Reveal
A KZG proof must demonstrate that the revealed data is the authentic value of $x_{i}$ when $C$ was computed. Let $y=x_{i}$, the revealed value we must prove. Since the vector of $x_{i}$ was reformulated into a polynomial, we really need to prove that the polynomial $p$, when evaluated at $i$ takes on the value $y$. Simply, we just need to prove that $p(i)=y$. We will do this by demonstrating that subtracting $y$ from $p$ yields a root at $i$. Define the polynomial $q$ as
$q(x)={\frac {p(x)-y}{x-i}}$
This polynomial is itself the proof that $p(i)=y$, because if $q$ exists, then $p(x)-y$ is divisible by $x-i$, meaning it has a root at $i$, so $p(i)-y=0$ (or, in other words, $p(i)=y$). The KZG proof will demonstrate that $q$ exists and has this property.
The prover computes $q$ through the above polynomial division, then calculates the KZG proof value $\pi $
$\pi =\sum _{i=0}^{n-1}q_{i}Gt^{i}$
This is equal to $G\cdot q(t)$, as above. In other words, the proof value is the polynomial $q$ again evaluated at the trapdoor value $t$, hidden in the generator $G$ of $\mathbb {G} _{1}$.
This computation is only possible if the above polynomials were evenly divisible, because in that case the quotient $q$ is a polynomial, not a rational function. Due to the construction of the trapdoor, it is not possible to evaluate a rational function at the trapdoor value, only to evaluate a polynomial using linear combinations of the precomputed known constants of $G\cdot t^{i}$. This is why it is impossible to create a proof for an incorrect value of $x_{i}$.
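The polynomial bookkeeping behind the commit and reveal steps can be checked without any pairing or elliptic-curve machinery by working over a plain prime field. In the Python sketch below the modulus and the "trapdoor" value t are ordinary numbers chosen only for illustration (a real setup never exposes t, and the commitment and proof would be the group elements $G\cdot p(t)$ and $G\cdot q(t)$, not field elements); the point is simply that the quotient q exists exactly when $p(i)=y$, and that $q(t)\cdot (t-i)=p(t)-y$.

```python
P = 2**61 - 1                      # toy prime modulus (assumption)

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % P for x, y in zip(a, b)]

def poly_eval(a, x):
    acc = 0
    for c in reversed(a):
        acc = (acc * x + c) % P
    return acc

def interpolate(values):
    """Coefficients of the polynomial p with p(i) = values[i], via Lagrange."""
    coeffs = [0]
    for i, y in enumerate(values):
        num, den = [1], 1
        for j in range(len(values)):
            if j != i:
                num = poly_mul(num, [(-j) % P, 1])    # multiply by (x - j)
                den = den * ((i - j) % P) % P
        scale = y * pow(den, -1, P) % P
        coeffs = poly_add(coeffs, [c * scale % P for c in num])
    return coeffs

def divide_by_linear(a, c):
    """Synthetic division of a(x) by (x - c): returns (quotient, remainder)."""
    n = len(a) - 1
    quot = [0] * n
    quot[n - 1] = a[n]
    for k in range(n - 1, 0, -1):
        quot[k - 1] = (a[k] + c * quot[k]) % P
    return quot, (a[0] + c * quot[0]) % P

x_vec = [7, 11, 13, 42]                    # the committed vector
p = interpolate(x_vec)
assert all(poly_eval(p, i) == v for i, v in enumerate(x_vec))

i, y = 2, x_vec[2]
shifted = p[:]
shifted[0] = (shifted[0] - y) % P          # p(x) - y
q, rem = divide_by_linear(shifted, i)
assert rem == 0                            # exact division  <=>  p(i) = y

t = 123456789                              # stand-in for the secret trapdoor
assert poly_eval(q, t) * ((t - i) % P) % P == (poly_eval(p, t) - y) % P
```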
Verify
To verify the proof, the bilinear map of the pairing is used to show that the proof value $\pi $ summarizes a real polynomial $q$ that demonstrates the desired property, which is that $p(x)-y$ was evenly divided by $x-i$. The verification computation checks the equality
$e(\pi ,H\cdot t-H\cdot i)\ {\stackrel {?}{=}}\ e(C-G\cdot y,H)$
where $e$ is the bilinear map function as above. $H\cdot t$ is a precomputed constant, $H\cdot i$ is computed based on $i$.
By rewriting the computation in the pairing group $\mathbb {G} _{T}$, substituting in $\pi =q(t)\cdot G$ and $C=p(t)\cdot G$, and letting $\tau (x)=e(G,H)^{x}$ be a helper function for lifting into the pairing group, the proof verification is more clear
$e(\pi ,H\cdot t-H\cdot i)=e(C-G\cdot y,H)$
$e(G\cdot q(t),H\cdot t-H\cdot i)=e(G\cdot p(t)-G\cdot y,H)$
$e(G\cdot q(t),H\cdot (t-i))=e(G\cdot (p(t)-y),H)$
$e(G,H)^{q(t)\cdot (t-i)}=e(G,H)^{p(t)-y}$
$\tau (q(t)\cdot (t-i))=\tau (p(t)-y)$
Assuming that the bilinear map is validly constructed, this demonstrates that $q(x)(x-i)=p(x)-y$, without the validator knowing what $p$ or $q$ are. The validator can be assured of this because if $\tau (q(t)\cdot (t-i))=\tau (p(t)-y)$, then the polynomials evaluate to the same output at the trapdoor value $x=t$. This demonstrates the polynomials are identical, because, if the parameters were validly constructed, the trapdoor value is known to no one, meaning that engineering a polynomial to have a specific value at the trapdoor is impossible (according to the Schwartz–Zippel lemma). If $q(x)(x-i)=p(x)-y$ is now verified to be true, then $q$ is verified to exist, therefore $p(x)-y$ must be polynomial-divisible by $(x-i)$, so $p(i)-y=0$ due to the factor theorem. This proves that the $i$th value of the committed vector must have equaled $y$, since that is the output of evaluating the committed polynomial at $i$.
Why the bilinear map pairing is used
The utility of the bilinear map pairing is to allow the multiplication of $q(x)$ by $x-i$ to happen securely. These values truly lie in $\mathbb {G} _{1}$, where division is assumed to be computationally hard. For example, $\mathbb {G} _{1}$ might be an elliptic curve over a finite field, as is common in elliptic-curve cryptography. Then, the division assumption is called the elliptic curve discrete logarithm problem, and this assumption is also what guards the trapdoor value from being computed, making it also a foundation of KZG commitments. In that case, we want to check if $q(x)(x-i)=p(x)-y$. This cannot be done without a pairing, because with values on the curve of $G\cdot q(x)$ and $G\cdot (x-i)$, we cannot compute $G\cdot (q(x)(x-i))$. That would violate the computational Diffie–Hellman assumption, a foundational assumption in elliptic-curve cryptography. We instead use a pairing to sidestep this problem. $q(x)$ is still multiplied by $G$ to get $G\cdot q(x)$, but the other side of the multiplication is done in the paired group $\mathbb {G} _{2}$, so, $H\cdot (t-i)$. We compute $e(G\cdot q(t),H\cdot (t-i))$, which, due to the bilinearity of the map, is equal to $e(G,H)^{q(t)\cdot (t-i)}$. In this output group $\mathbb {G} _{T}$ we still have the discrete logarithm problem, so even though we know that value and $e(G,H)$, we cannot extract the exponent $q(t)\cdot (t-i)$, preventing any contradiction with discrete logarithm earlier. This value can be compared to $e(G\cdot (p(t)-y),H)=e(G,H)^{p(t)-y}$ though, and if $e(G,H)^{q(t)\cdot (t-i)}=e(G,H)^{p(t)-y}$ we are able to conclude that $q(t)\cdot (t-i)=p(t)-y$, without ever knowing what the actual value of $t$ is, let alone $q(t)(t-i)$.
Additionally, a KZG commitment can be extended to prove the values of any arbitrary $k$ values of $X$ (not just one value), with the proof size remaining $O(1)$, but the proof verification time scales with $O(k)$. The proof is the same, but instead of subtracting a constant $y$, we subtract a polynomial that causes multiple roots, at all the locations we want to prove, and instead of dividing by $x-i$ we divide by $ \prod _{i}x-i$ for those same locations.[26]
Quantum bit commitment
It is an interesting question in quantum cryptography if unconditionally secure bit commitment protocols exist on the quantum level, that is, protocols which are (at least asymptotically) binding and concealing even if there are no restrictions on the computational resources. One could hope that there might be a way to exploit the intrinsic properties of quantum mechanics, as in the protocols for unconditionally secure key distribution.
However, this is impossible, as Dominic Mayers showed in 1996 (see[27] for the original proof). Any such protocol can be reduced to a protocol where the system is in one of two pure states after the commitment phase, depending on the bit Alice wants to commit. If the protocol is unconditionally concealing, then Alice can unitarily transform these states into each other using the properties of the Schmidt decomposition, effectively defeating the binding property.
One subtle assumption of the proof is that the commit phase must be finished at some point in time. This leaves room for protocols that require a continuing information flow until the bit is unveiled or the protocol is cancelled, in which case it is not binding anymore.[28] More generally, Mayers' proof applies only to protocols that exploit quantum physics but not special relativity. Kent has shown that there exist unconditionally secure protocols for bit commitment that exploit the principle of special relativity stating that information cannot travel faster than light.[29]
Commitments based on physical unclonable functions
Physical unclonable functions (PUFs) rely on the use of a physical key with internal randomness, which is hard to clone or to emulate. Electronic, optical and other types of PUFs[30] have been discussed extensively in the literature, in connection with their potential cryptographic applications including commitment schemes.[31][32]
See also
• Oblivious transfer
• Accumulator (cryptography)
• Key signing party
• Web of trust
• Zerocoin
• Anagrams — used by 17th-century natural philosophers to establish priority of a discovery without revealing it to others
References
1. Oded Goldreich (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. ISBN 0-521-79172-3.: 224
2. Gilles Brassard, David Chaum, and Claude Crépeau, Minimum Disclosure Proofs of Knowledge, Journal of Computer and System Sciences, vol. 37, pp. 156–189, 1988.
3. Goldreich, Oded; Micali, Silvio; Wigderson, Avi (1991). "Proofs that yield nothing but their validity". Journal of the ACM. 38 (3): 690–728. CiteSeerX 10.1.1.420.1478. doi:10.1145/116825.116852. S2CID 2389804.
4. Russell Impagliazzo, Moti Yung: Direct Minimum-Knowledge Computations. CRYPTO 1987: 40-51
5. Moni Naor, Bit Commitment Using Pseudorandomness, Journal of Cryptology 4: 2 pp. 151–158, 1991, doi:10.1007/BF00196774.
6. Claude Crépeau, Commitment, Cryptography and Quantum Information Lab, McGill University School of Computer Science, accessed April 11, 2008
7. Manuel Blum, Coin Flipping by Telephone, Proceedings of CRYPTO 1981, pp. 11–15, 1981, reprinted in SIGACT News vol. 15, pp. 23–27, 1983, Carnegie Mellon School of Computer Science.
8. Shimon Even. Protocol for signing contracts. In Allen Gersho, ed., Advances in Cryptography (proceedings of CRYPTO '82), pp. 148–153, Santa Barbara, CA, US, 1982.
9. A. Shamir, R. L. Rivest, and L. Adleman, "Mental Poker". In David A. Klarner, ed., The Mathematical Gardner (ISBN 978-1-4684-6686-7), pp. 37–43. Wadsworth, Belmont, California, 1981.
10. Oded Goldreich, Silvio Micali, and Avi Wigderson, Proofs that yield nothing but their validity, or all languages in NP have zero-knowledge proof systems, Journal of the ACM, 38: 3, pp. 690–728, 1991
11. Oded Goldreich and Hugo Krawczyk, On the Composition of Zero-Knowledge Proof Systems, SIAM Journal on Computing, 25: 1, pp. 169–192, 1996
12. Gennaro; Rosario; Rabin, Michael O.; Rabin, Tal. "Simplified VSS and fast-track multiparty computations with applications to threshold cryptography". Proceedings of the Seventeenth Annual ACM Symposium on Principles of Distributed Computing. 1998, June.
13. R. Canetti and M. Fischlin. Universally Composable Commitments.
14. Shien Hin Ong and Salil Vadhan (1990). Perfect zero knowledge in constant round, In Proc. STOC, p. 482-493, cited in Shien Hin Ong and Salil Vadhan (2008). An Equivalence between Zero Knowledge and Commitments, Theory of Cryptography.
15. Toshiya Itoh, Yiji Ohta, Hiroki Shizuya (1997). A language dependent cryptographic primitive, In J. Cryptol., 10(1):37-49, cited in Shien Hin Ong and Salil Vadhan (2008). An Equivalence between Zero Knowledge and Commitments, Theory of Cryptography.
16. Wagner, David (2006), Midterm Solution, p. 2, retrieved 26 October 2015
17. "Citations: Bit Commitment using Pseudorandom Generators - Naor (ResearchIndex)". Citeseer.ist.psu.edu. Retrieved 2014-06-07.
18. Pedersen, Torben Pryds. "Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing". Advances in Cryptology – CRYPTO ’91. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 129–140. doi:10.1007/3-540-46766-1_9. ISBN 978-3-540-55188-1.
19. Metere, Roberto; Dong, Changyu (2017). "Automated cryptographic analysis of the pedersen commitment scheme". International Conference on Mathematical Methods, Models, and Architectures for Computer Network Security. Springer. pp. 275–287.
20. Tang, Chunming; Pei, Dingyi; Liu, Zhuojun; He, Yong (16 August 2004). "Pedersen: Non-interactive and information-theoretic secure verifiable secret sharing" (PDF). Cryptology ePrint Archive. Advances in Cryptology CRYPTO 1991 Springer. Archived from the original (PDF) on 11 August 2017. Retrieved 2 February 2019.
21. Moni Naor, Rafail Ostrovsky, Ramarathnam Venkatesan, Moti Yung: Perfect Zero-Knowledge Arguments for NP Using Any One-Way Permutation. J. Cryptology 11(2): 87-108 (1998)
22. Menezes, Alfred J; Van Oorschot, Paul C; Vanstone, Scott A (2018). Handbook of applied cryptography. CRC press.
23. Mouris, Dimitris; Tsoutsos, Nektarios Georgios (26 January 2022). "Masquerade: Verifiable Multi-Party Aggregation with Secure Multiplicative Commitments" (PDF). Cryptology ePrint Archive.
24. Catalano, Dario; Fiore, Dario (2013). "Vector Commitments and Their Applications". Public-Key Cryptography -- PKC 2013. Lecture Notes in Computer Science. Springer Berlin Heidelberg. 7778: 55–72. doi:10.1007/978-3-642-36362-7_5. ISBN 978-3-642-36362-7. Catalano, Dario; Fiore, Dario (2013). "Vector Commitments and Their Applications" (PDF). International Association for Cryptologic Research.
25. Becker, Georg (2008-07-18). "Merkle Signature Schemes, Merkle Trees and Their Cryptanalysis" (PDF). Ruhr-Universität Bochum. p. 16. Retrieved 2013-11-20.
26. Kate, Aniket; Zaverucha, Gregory; Goldberg, Ian (2010). "Constant-size commitments to polynomials and their applications" (PDF). International Conference on the Theory and Application of Cryptology and Information Security.
27. Brassard, Crépeau, Mayers, Salvail: A brief review on the impossibility of quantum bit commitment
28. A. Kent: Secure classical Bit Commitment using Fixed Capacity Communication Channels
29. Kent, A. (1999). "Unconditionally Secure Bit Commitment". Phys. Rev. Lett. 83 (7): 1447–1450. arXiv:quant-ph/9810068. Bibcode:1999PhRvL..83.1447K. doi:10.1103/PhysRevLett.83.1447. S2CID 8823466.
30. McGrath, Thomas; Bagci, Ibrahim E.; Wang, Zhiming M.; Roedig, Utz; Young, Robert J. (2019-02-12). "A PUF taxonomy". Applied Physics Reviews. 6 (1): 011303. Bibcode:2019ApPRv...6a1303M. doi:10.1063/1.5079407.
31. Rührmair, Ulrich; van Dijk, Marten (2013-04-01). "On the practical use of physical unclonable functions in oblivious transfer and bit commitment protocols". Journal of Cryptographic Engineering. 3 (1): 17–28. doi:10.1007/s13389-013-0052-8. hdl:1721.1/103985. ISSN 2190-8516. S2CID 15713318.
32. Nikolopoulos, Georgios M. (2019-09-30). "Optical scheme for cryptographic commitments with physical unclonable keys". Optics Express. 27 (20): 29367–29379. arXiv:1909.13094. Bibcode:2019OExpr..2729367N. doi:10.1364/OE.27.029367. ISSN 1094-4087. PMID 31684673. S2CID 203593129.
External links
• Quantum bit commitment on arxiv.org
• Kate-Zaverucha-Goldberg (KZG) Constant-Sized Polynomial Commitments - Alin Tomescu
• Kate polynomial commitments
| Wikipedia |
May 2016, 36(5): 2711-2727. doi: 10.3934/dcds.2016.36.2711
Conformal Markov systems, Patterson-Sullivan measure on limit sets and spectral triples
Richard Sharp 1,
Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
Received: November 2014. Revised: September 2015. Published: October 2015.
For conformal graph directed Markov systems, we construct a spectral triple from which one can recover the associated conformal measure via a Dixmier trace. As a particular case, we can recover the Patterson-Sullivan measure for a class of Kleinian groups.
Keywords: Dixmier trace, spectral triple, conformal graph directed Markov system, conformal measure, Patterson-Sullivan measure.
Mathematics Subject Classification: Primary: 28A80, 37C30, 37D35, 58B34; Secondary: 37A55, 37D20, 37F30, 37F3.
Citation: Richard Sharp. Conformal Markov systems, Patterson-Sullivan measure on limit sets and spectral triples. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2711-2727. doi: 10.3934/dcds.2016.36.2711
Mario Roy, Mariusz Urbański. Multifractal analysis for conformal graph directed Markov systems. Discrete & Continuous Dynamical Systems - A, 2009, 25 (2) : 627-650. doi: 10.3934/dcds.2009.25.627
Tomasz Szarek, Mariusz Urbański, Anna Zdunik. Continuity of Hausdorff measure for conformal dynamical systems. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4647-4692. doi: 10.3934/dcds.2013.33.4647
Nuno Luzia. On the uniqueness of an ergodic measure of full dimension for non-conformal repellers. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5763-5780. doi: 10.3934/dcds.2017250
Mario Roy, Mariusz Urbański. Random graph directed Markov systems. Discrete & Continuous Dynamical Systems - A, 2011, 30 (1) : 261-298. doi: 10.3934/dcds.2011.30.261
Mario Roy. A new variation of Bowen's formula for graph directed Markov systems. Discrete & Continuous Dynamical Systems - A, 2012, 32 (7) : 2533-2551. doi: 10.3934/dcds.2012.32.2533
Lok Ming Lui, Chengfeng Wen, Xianfeng Gu. A conformal approach for surface inpainting. Inverse Problems & Imaging, 2013, 7 (3) : 863-884. doi: 10.3934/ipi.2013.7.863
Peter Haïssinsky, Kevin M. Pilgrim. Examples of coarse expanding conformal maps. Discrete & Continuous Dynamical Systems - A, 2012, 32 (7) : 2403-2416. doi: 10.3934/dcds.2012.32.2403
Zuxing Xuan. On conformal measures of parabolic meromorphic functions. Discrete & Continuous Dynamical Systems - B, 2015, 20 (1) : 249-257. doi: 10.3934/dcdsb.2015.20.249
Nicholas Hoell, Guillaume Bal. Ray transforms on a conformal class of curves. Inverse Problems & Imaging, 2014, 8 (1) : 103-125. doi: 10.3934/ipi.2014.8.103
Yunping Jiang, Yuan-Ling Ye. Convergence speed of a Ruelle operator associated with a non-uniformly expanding conformal dynamical system and a Dini potential. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4693-4713. doi: 10.3934/dcds.2018206
Hans Henrik Rugh. On dimensions of conformal repellers. Randomness and parameter dependency. Discrete & Continuous Dynamical Systems - A, 2012, 32 (7) : 2553-2564. doi: 10.3934/dcds.2012.32.2553
Domenico Mucci. Maps into projective spaces: Liquid crystal and conformal energies. Discrete & Continuous Dynamical Systems - B, 2012, 17 (2) : 597-635. doi: 10.3934/dcdsb.2012.17.597
Rossen I. Ivanov. Conformal and Geometric Properties of the Camassa-Holm Hierarchy. Discrete & Continuous Dynamical Systems - A, 2007, 19 (3) : 545-554. doi: 10.3934/dcds.2007.19.545
Juan Wang, Yongluo Cao, Yun Zhao. Dimension estimates in non-conformal setting. Discrete & Continuous Dynamical Systems - A, 2014, 34 (9) : 3847-3873. doi: 10.3934/dcds.2014.34.3847
Marcelo M. Disconzi. On the existence of solutions and causality for relativistic viscous conformal fluids. Communications on Pure & Applied Analysis, 2019, 18 (4) : 1567-1599. doi: 10.3934/cpaa.2019075
Robert Eymard, Angela Handlovičová, Karol Mikula. Approximation of nonlinear parabolic equations using a family of conformal and non-conformal schemes. Communications on Pure & Applied Analysis, 2012, 11 (1) : 147-172. doi: 10.3934/cpaa.2012.11.147
Welington Cordeiro, Manfred Denker, Xuan Zhang. On specification and measure expansiveness. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4) : 1941-1957. doi: 10.3934/dcds.2017082
Welington Cordeiro, Manfred Denker, Xuan Zhang. Corrigendum to: On specification and measure expansiveness. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7) : 3705-3706. doi: 10.3934/dcds.2018160
Petr Kůrka. On the measure attractor of a cellular automaton. Conference Publications, 2005, 2005 (Special) : 524-535. doi: 10.3934/proc.2005.2005.524
Tomasz Downarowicz, Yonatan Gutman, Dawid Huczek. Rank as a function of measure. Discrete & Continuous Dynamical Systems - A, 2014, 34 (7) : 2741-2750. doi: 10.3934/dcds.2014.34.2741
Richard Sharp | CommonCrawl |
Twin Prime Conjecture Reference
Asked 11 years, 1 month ago
I'm looking for a reference which has the first statement of the twin prime conjecture. According to wikipedia, nova, and several other quasi-reputable resources it is Euclid who first stated it, but according to Goldston
http://www.math.sjsu.edu/~goldston/twinprimes.pdf
it was stated nowhere until de Polignac. I'm hoping to resolve this issue by accessing either primary historical documents, or other reputable secondary sources (Goldston being one such example). I have looked at de Polignac's work, and he does indeed make a conjecture, but have been unable to find anything definitive (besides Goldston's statements) that there was no conjecture earlier. If this is too specific for MO, I'll remove the question. Thank you.
reference-request nt.number-theory ho.history-overview prime-numbers
Ben Weiss
I don't have it to hand right at this moment, but Narkiewicz' The Development of Prime Number Theory is excellent on just this kind of question. It is a historiomathematical survey of prime number theory up to 1910, and also has discussions of later developments directly related to work done before 1910. It is historical, with very many references, and mathematical, in that it sketches many old proofs. It even has exercises.
engelbrekt
$\begingroup$ Before there was Narkiewicz, there was Dickson. In his History, he mentions de Polignac, and doesn't mention anyone earlier. $\endgroup$ – Gerry Myerson May 13 '10 at 3:06
Euclid never made a conjecture about the infinitude of twin primes.
It is possible to guess that he was making a conjecture on the basis of his text but it requires wishful thinking.
Here is the paper where de Polignac makes his general conjecture (which if true also implies the twin prime conjecture).
Regarding the NOVA show, Goldston makes a comment to those behind the NOVA segment (with a response) here:
http://discussions.pbs.org/viewtopic.pbs?t=45116
No one really knows if Euclid made the twin prime conjecture. He does have a proof that there are infinitely many primes, and he or other Greeks could easily have thought of this problem, but the first published statement seems to be due to de Polignac in 1849. Strangely enough, the Goldbach conjecture that every even number is a sum of two primes seems less natural but was conjectured about 100 years before this.
Jason Dyer
$\begingroup$ Thanks for the link. It's good to see he responded to that, I too was a bit surprised they didn't mention Y or P in the song. $\endgroup$ – Ben Weiss Dec 3 '09 at 14:33
$\begingroup$ The PBS link seems to have gone away. $\endgroup$ – Gerry Myerson Feb 14 '14 at 21:46
$\begingroup$ @GerryMyerson: I've added the relevant text via the Internet Archive. $\endgroup$ – Charles Feb 18 '14 at 14:44
Hardy & Littlewood give (in 1923) what I believe to be the first quantitative version of the twin prime conjecture (actually, generalized to all even differences) as Conjecture B in their famous "Some problems of 'Partitio numerorum'; III: On the expression of a number as a sum of primes" on page 42. In particular, they conjecture that the twin-prime counting function $P_2(n)$ is
$P_2(n)=2C_2\frac{n}{(\log n)^2}(1+o(1))$ where $C_2=\prod\left(1-\frac{1}{(\varpi-1)^2}\right)$ with $\varpi$ running over the odd primes.
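Not part of Hardy & Littlewood's paper, of course, but the conjectured asymptotic is easy to sanity-check numerically. The short Python script below (a plain sieve, nothing clever) computes the partial product for $C_2$ (about $0.66016$) and compares the count of twin primes below $10^6$ with $2C_2 n/(\log n)^2$; as with $n/\log n$ for $\pi(n)$, the logarithmic-integral form of the same conjecture fits the count noticeably better.

```python
from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

N = 10**6
ps = primes_up_to(N)

C2 = 1.0                      # twin prime constant, partial product over odd primes
for p in ps:
    if p > 2:
        C2 *= 1 - 1 / (p - 1) ** 2

twins = sum(1 for a, b in zip(ps, ps[1:]) if b - a == 2)
prediction = 2 * C2 * N / log(N) ** 2
print(C2, twins, round(prediction))   # ~0.66016, 8169, ~6917
```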
Sylvain JULIEN
Charles
Conjecture on prime numbers
About factorization in Zhang's proof of weak Twin Prime conjecture
A surprising conjecture about twin primes | CommonCrawl |
\begin{document}
\title{ Four-dimensional graph-manifolds with fundamental groups quasi-isometric to fundamental groups of orthogonal graph-manifolds} \author{ Aleksandr Smirnov\footnote{This work is supported by the Program of the Presidium of the Russian Academy of Sciences 01 "Fundamental Mathematics, and its Applications" under grant PRAS-18-01, and by RFBR Grant 17-01-00128a } } \date{} \maketitle
\begin{abstract} We introduce a topological invariant, {\it a type} of a graph-manifold, which takes natural values. For a 4-dimensional graph-manifold, whose type does not exceed two, it is proved that its universal cover is bi-Lipschitz equivalent to a universal cover of an orthogonal graph-manifold (for any Riemannian metrics on graph-manifolds). \end{abstract}
\section{Introduction}\label{sec:intr}
The main result of this paper (see Theorem~\ref{thm:type2}) is a bi-Lipschitz equivalence of universal covers for some classes of 4-dimensional graph-manifolds. The motivation for this result is the problem of finding asymptotic invariants of graph-manifolds, in particular, the asymptotic ($\mathrm{asdim}\, \pi_1(M)$) and linearly-controlled asymptotic ($\mathrm{\ell\hbox{-}asdim}\, \pi_1(M)$) dimensions of their fundamental groups. Theorem~\ref{thm:type2} allows us to reduce the computation of these dimensions for a wide class of graph-manifolds to the results of~\cite{Smir1}.
In the 3-dimensional case, $\dim M = 3$, the problem of finding the asymptotic dimensions was solved in~\cite{HS}. In the case $\dim M\geq 4$, the asymptotic dimensions $\mathrm{asdim}\, \pi_1(M)$ and $\mathrm{\ell\hbox{-}asdim}\,\pi_1(M)$ were found only for graph-manifolds of a special type, called orthogonal graph-manifolds. Namely, for orthogonal graph-manifolds it is proved in~\cite{Smir1} that $$ \mathrm{asdim}\, \pi_1(M)=\mathrm{\ell\hbox{-}asdim}\, \pi_1(M)=\dim M. $$ The definition of these invariants can be found, for example, in~\cite{BS},~\cite{Gro},~\cite{Smir1}.
Orthogonal graph-manifolds in the 3-dimensional case are the so-called flip graph-manifolds, for which the gluings between blocks are especially simple. According to~\cite{KL}, the fundamental group of any closed 3-dimensional graph-manifold is quasi-isometric to the fundamental group of a flip graph-manifold; therefore, the fundamental groups of all closed 3-dimensional graph-manifolds are pairwise quasi-isometric. In higher dimensions this is not true. In this paper we introduce a topological invariant, $\operatorname{type} M$, of the graph-manifold $M$, which takes natural values. In any dimension greater than~3 it is not difficult to construct a graph-manifold of any type. However, for 4-dimensional orthogonal graph-manifolds the type never exceeds~2. The main result of this paper is as follows.
\begin{theorem}\label{thm:type2} If the type of a 4-dimensional graph-manifold $M$ does not exceed two, $\operatorname{type} M \leq 2$, then its universal cover is bi-Lipschitz equivalent to the universal cover of some orthogonal graph-manifold (for any Riemannian metrics on graph-manifolds). \end{theorem}
\begin{corollary}\label{cor:intrees} For the fundamental group of any 4-dimensional graph-manifold $M$, with $\operatorname{type} M \leq 2$, there is a quasi-isometric embedding into the product of 4 metric trees, and, consequently, $\mathrm{asdim}\, \pi_1(M) = \mathrm{\ell\hbox{-}asdim}\, \pi_1(M) = 4$, where $\mathrm{asdim}\,$ and $\mathrm{\ell\hbox{-}asdim}\,$ are asymptotic and linearly-controlled asymptotic dimensions. \end{corollary}
This corollary gives us a simple and easily verifiable sufficient condition that allows us to calculate $\mathrm{asdim}\, \pi_1(M)$ and $\mathrm{\ell\hbox{-}asdim}\, \pi_1(M)$.
In addition, the class $\mathcal{GM}_2$ of graph-manifolds with $\operatorname{type} M \leq 2$ is much wider (see Section~\ref{sec:exmpl}) than the class of orthogonal graph-manifolds. Moreover, it is highly doubtful that every graph-manifold in the class $\mathcal{GM}_2$ has a finite cover by an orthogonal graph-manifold.
As an important additional result, we give a criterion for the orthogonality of 4-dimensional graph-manifolds whose blocks have type~2, see Theorem~\ref{thm:crit}. As a consequence, we obtain a wide class of 4-dimensional graph-manifolds whose type is equal to~2 but which are not orthogonal (see Corollary~\ref{cor:notort}).
The proof of Theorem~\ref{thm:type2} consists of two steps. An important role is played by the intersection numbers and the secondary intersection numbers, which in general can be arbitrary positive integers (see Section~\ref{subsec:ind1and2}). For orthogonal graph-manifolds these numbers are equal to~1. In the first step, in Section~\ref{sec:virtual}, passing to a finite cover of the graph-manifold $M$, we construct a 4-dimensional graph-manifold $N$ whose intersection numbers and secondary intersection numbers are all equal to~1. Since the universal covers of the graph-manifolds $M$ and $N$ coincide, it suffices to prove Theorem~\ref{thm:type2} for the graph-manifold $N$.
In the second step, we ``re-glue'' the graph-manifold $N$ into an orthogonal graph-manifold without changing the bi-Lipschitz type of its universal cover. The re-gluing procedure is described in Section~\ref{sec:maintheorem}. It is a generalization of the procedure used in~\cite{KL} for 3-dimensional graph-manifolds.
Corollary~\ref{cor:intrees} follows from a result of~\cite{Smir1}, which asserts that the fundamental group of any $n$-dimensional orthogonal graph-manifold $M$ can be quasi-isometrically embedded into the product of $n$ metric trees, and, consequently, $$ \mathrm{asdim}\, \pi_1(M) = \mathrm{\ell\hbox{-}asdim}\, \pi_1(M) = n. $$
\section{Preliminaries}\label{sec:pre}
\subsection{Graph-manifolds}\label{subsec:grm}
\begin{definition}\label{def:gm} Let $n\geq 3$. {\em A higher-dimensional graph-manifold} is a closed, oriented, $n$-dimensional manifold $M$ that is glued from a finite number of blocks $M_v$, $M = \bigcup_{v\in V}M_v$, satisfying the following conditions (1)--(3). \begin{itemize}
\item[(1)] Each block $M_v$ is a trivial bundle with fiber the $(n-2)$-dimensional torus $T^{n-2}$ over a compact, oriented surface $\Phi_v$ with boundary (the surface must be different from the disk and the annulus);
\item[(2)] the manifold $M$ is glued from the blocks $M_v$, $v \in V$, by diffeomorphisms between the boundary components (we do not exclude the case of gluing boundary components of a single block);
\item[(3)] the gluing diffeomorphisms do not identify the homotopy classes of the fiber tori. \end{itemize} \end{definition}
These graph-manifolds for $n\geq 4$ were introduced in~\cite{BK}.
To each graph-manifold $M$ there corresponds a graph $G$ dual to its block decomposition: the set of blocks of the graph-manifold coincides with the set of vertices $\mathrm V$ of the graph $G$, and the set of pairs of glued blocks coincides with the set of edges $\mathrm E$ of the graph $G$. We denote the set of all directed edges of the graph $G$ by $\mathrm W$.
Orthogonal graph-manifolds defined in~\cite{Smir1} are distinguished in the class of graph-manifolds only by the condition for gluing diffeomorphisms. They are obtained as follows.
For each vertex $v\in \mathrm V$ we fix a trivialization of the fibration $M_v\to \Phi_v$, that is, we represent the block $M_v=\Phi_v\times S^1\times\ldots\times S^1$ as the product, where $S^1$ occurs $n-2$ times. Thus, for each edge $w=\{vv'\}$, adjacent to the vertex $v$, we have a trivialization of a boundary torus $T_{w}=S^1\times S^1\times\ldots\times S^1$, $(n-1)$ times, of the block $M_v$, that corresponds to the edge $w$.
In the same way, for each edge $-w$, going in the opposite direction, we have a trivialization of the boundary torus $T_{-w}=S^1\times S^1\times\ldots\times S^1$ of the block $M_{v'}$.
We fix an order on the set of all factors of the trivialization and define a gluing diffeomorphism of the tori $T_w$ and $T_{-w}$ by some permutation $\mathfrak s_w$ of the factors of the trivialization that does not identify the boundary components of $\Phi_v$ and $\Phi_{v'}$.
For the edges $w$ and $-w$ going in opposite directions, the permutations $\mathfrak s_w$ and $\mathfrak s_{-w}$ are chosen to be mutually inverse, i.e.\ $\mathfrak s_{-w}\circ \mathfrak s_{w}=\operatorname{id}$, so that the gluing is well defined. Moreover, such a gluing does not identify the homotopy classes of the fiber tori.
In other words, a graph-manifold is orthogonal iff there exists a trivialization of all its blocks such that the gluing maps are defined by permutations of the factors as described above. The disadvantage of this definition is that it does not allow one to verify directly whether a given graph-manifold is orthogonal: it depends on the choice of trivializations of the blocks, which is not unique, and for another choice of trivializations of the glued blocks the graph-manifold may cease to be orthogonal in the above sense. In Section~\ref{sec:exmpl} we present a criterion of orthogonality for a certain class of 4-dimensional graph-manifolds. This criterion does not depend on the choice of trivializations.
\subsection{W-structure}\label{subsec:wstr}
The main tool for working with graph-manifolds is the so-called $W$-structure, first described in the 3-dimensional case in works of Waldhausen~\cite{W1},~\cite{W2}. For the $n$-dimensional case the definition of $W$-structure is given in~\cite{BK}. For the convenience of the reader, we give these definitions here.
Let $G$ be the graph of a graph-manifold $M$. For a vertex $v\in \mathrm V$, we denote by $\partial v$ the set of all directed edges adjacent to $v$.
Note that to each directed edge $w\in \mathrm W$ there corresponds the homology group $L_w = H_1(T_w;\mathbb Z)\simeq \mathbb Z^{n-1}$\label{resh} of the gluing torus $T_w$. Furthermore, to each vertex $v\in V$ there corresponds the homology group $F_v = H_1(T_v;\mathbb Z) \simeq \mathbb Z^{n-2}$ of the fiber $T_v$ of the block $M_v$.
Moreover, if $w\in \partial v$, then there exists an embedding of the group $F_v$ in the group $L_w$ as a maximum subgroup~$F_w$.
We call the group $F_v \simeq F_w$ {\it a fiber group}. An orientation of the graph-manifold $M$ fixes corresponding orientations of each block of $M$ and corresponding orientations of the groups $L_w$, $w\in \mathrm W$; the orientations of the groups $L_w$ and $L_{-w}$ are opposite.
A gluing of blocks is described by an isomorphism $\widehat{g}_w\colon L_{-w}\to L_w$ that satisfies the conditions \begin{align}
\widehat{g}_{-w}&=\widehat{g}^{-1}_w;\label{w1}\\
\widehat{g}_w(F_{-w})&\neq F_w. \label{w2} \end{align}
The choice of a trivialization of each block $M_v$, as well as of a trivialization of the fiber, fixes for each edge $w\in\partial v$ a basis of the group $L_w$ (up to the choice of the signs of its elements) such that a corresponding subset of its elements forms a basis of the group $F_w$.
Such bases are called \textit{selected.}
Let us describe the set of selected bases of the groups $L_w$, $w\in \mathrm W$, in terms of their transformation group. Let $f_v = (f^1_v,\ldots,f^{n-2}_v)$ be a basis of the group $F_v$.
We choose a basis $(z_w, f_w)$ of the group $L_w$ so that $f_w=f_v$ and there exists a trivialization $M_v =\Phi_v\times T^{n-2}$ such that the set $\{z_w\mid w\in \partial v\}$ corresponds to the boundary of the surface $\Phi_v$.
In this case, the basis $f_v$ defines some orientation of the fiber $F_v$, and the basis $(z_w,f_w)$ defines some orientation of the group $L_w$.
The group of transformations of these bases consists of matrices of the form $$h_w=\left(\begin{array}{cc}
\varepsilon_v & 0 \\
n_w & \sigma_v
\end{array}\right), $$ where $\varepsilon_v = \pm 1$, $n_w \in \mathbb Z^{n-2}$, $\sigma_v \in GL(n-2,\mathbb Z)$, which acts on bases by $$ (z_w,f_w)\cdot h_w = (z_w\cdot \varepsilon_v +f_w \cdot n_w, f_w\cdot \sigma_v). $$ We require that for each vertex $v\in V$ the following conditions are fulfilled:
\begin{align}
\varepsilon_v\cdot \det \sigma_v &= 1; \label{w3}\\
\sum\limits_{w\in \partial v} n_w &= 0.\label{w4} \end{align} It is easy to see that the set $\mathcal H$ of matrices of the form $$ h=\bigoplus\limits_{w\in \mathrm W} h_w, $$ satisfying the conditions~\eqref{w3},~\eqref{w4}, is a group. The condition~\eqref{w3} means that each basis $(z_w, f_w)$ is compatible with the fixed orientation of the group $L_w$, and the condition~\eqref{w4} means that these bases correspond to some trivialization of the block $M_v$.
A $W$-\textit{structure} associated with a graph-manifold $M$ is a collection of groups $\{L_w\mid w\in \mathrm W\}$ satisfying the conditions~\eqref{w1},~\eqref{w2}, together with a set of their bases of the form $\Theta = (z, f)\cdot \mathcal H$, where $(z,f)$ is the set of bases mentioned above and $\det g^{z,f}_w = -1$ for each directed edge $w\in \mathrm W$.
The last condition means that the isomorphism $\widehat{g}_w:L_{-w}\to L_w$ reverses the orientation. An element $(z,f)\in \Theta$ is called \textit{a Waldhausen basis}.
For a fixed Waldhausen basis $(z, f)$ the gluing isomorphism is described by the matrix $$g_w=g_w^{z,f}=\left(\begin{array}{cc}
a_w & b_w \\
c_w & d_w
\end{array}\right), $$ where
\begin{align}
(z_{-w}, f_{-w}) = (z_w, f_w)\cdot g_w\label{w5} \end{align} (it is assumed that the groups $L_{-w}$ and $L_w$ are identified by the isomorphism $\widehat{g}_w$). Here $a_w \in \mathbb Z$, the row $b_w$ and the column $c_w$ consist of $n-2$ integers, and $d_w$ is an integer matrix of size $(n-2)\times (n-2)$.
\begin{remark}\label{rem:ort} A graph-manifold $M$ is orthogonal iff on each block of the graph-manifold $M$ there is a trivialization such that, for each directed edge $w\in\mathrm W$, the induced bases $(z_w,f_w)$ and $(z_{-w},f_{-w})$ of the groups $L_w$ and $L_{-w}$ differ only by a permutation of the elements and, possibly, by the signs of the vectors. \end{remark}
\subsection{Fiber subspaces, and intersection of lattices}\label{subsec:fs}
In what follows, we will use subgroups of groups isomorphic to~$\mathbb Z^n$. In this case, the maximum subgroups will play an important role. For brevity, we will call them {\it lattices}.
\textit{A lattice in a group} $G$ isomorphic to $\mathbb Z^n$ is a maximum subgroup $H$ isomorphic to $\mathbb Z^k$ for some $k\leq n$, that is, a subgroup for which there is no other subgroup $H'<G$ isomorphic to $\mathbb Z^k$ such that $H < H'$. The number $k$ is called {\it the dimension} of the lattice.
\begin{remark}\label{rem:resh} The intersection $G_1\cap G_2$ of two lattices $G_1$ and $G_2$ is a lattice. It follows from the fact that if $\gamma^m\in G_i$, $m\neq 0$, then $\gamma\in G_i$, $i=1,2$. \end{remark}
For each edge $|w|\in \mathrm E$ we denote the intersection of the lattices $F_w$ and $F_{-w}$ by $P_{|w|}$. We call such a lattice in $L_{|w|}$ \textit{the intersection lattice} for the edge~$w$.
\begin{definition} For any edges $w$, $w'\in\partial v$, the lattices $P_{|w|}$ and $P_{|w'|}$ are called {\em parallel} iff they coincide as subgroups of the group $F_v$. In this case, for brevity, we also say that the edges $w$ and $w'$ {\em are parallel}. \end{definition}
\begin{definition} The lattices $P_{|w|}$, $w\in \partial v$, considered as subgroups of the group $F_v$, are called \textit{intersection lattices} for the vertex~$v$. \end{definition}
\subsection{Type of a block, and graph-manifolds}\label{subsec:type}
Let $M_v$ be a block of a graph-manifold $M$ that corresponds to a vertex $v$.
\begin{definition} {\em The type} of the vertex $v$ (or of the block $M_v$ ) is the maximal number of pairwise non-parallel edges $w\in \partial v$.
We denote the type of the vertex (of the block) by $\operatorname{type}{v}$ ($\operatorname{type}{M_v}$). {\em The type of a graph-manifold} $M$ is the maximal type of its blocks $\operatorname{type}{M}:=\max\limits_{v\in \mathrm V}\operatorname{type}{v}$. \end{definition}
\begin{remark}\label{rem:type} The type of a block, and consequently the type of the graph-manifold, do not depend on the choice of the Waldhausen basis. It means that they are topological invariants of the graph-manifold. \end{remark}
In this paper we consider only graph-manifolds of dimension 4, and we are interested in blocks of type 1 and 2. For each block $M_v$ of type 1 we denote the unique intersection lattice of the vertex $v$ by $P^1_v$. For each block $M_v$ of type 2 we denote the corresponding intersection lattices by $P^1_v$ and $P^2_v$.
In the rest of the paper, by a graph-manifold we mean a 4-dimensional graph-manifold, unless otherwise stated.
\subsection{Intersection number and secondary intersection number}\label{subsec:ind1and2}
Recall the definitions of some invariants of $W$-structures, described in~\cite{BK}.
By the condition~\eqref{w2}, the integer row $b_w$ is non-zero. Hence the greatest common divisor $i_w\geq 1$ of its entries is defined.
\begin{definition}\label{def:ind} The number $i_w$ is called the \textit{intersection number} of the $W$-structure on the edge $w$. \end{definition}
Geometrically, $i_w$ is the number of components of the intersection $T_w\cap T_{-w} \subset T_{|w|}$. Since $i_w=i_{-w}$, the intersection number is independent of the direction of the edge, so we may speak of the intersection number of the undirected edge $e=|w|$ and set $i_e=i_{w}=i_{-w}$.
Let $F_e$ be the smallest subgroup of the group $L_e$ containing $F_w$ and $F_{-w}$, $F_e=\langle F_w,\ F_{-w}\rangle$.
\begin{lemma}\label{lemm:ind1} The subgroup index $(L_e:F_e)$ is equal to the intersection number $i_e$ of the edge $e$. \end{lemma} \begin{proof} The group $F_e$ is generated by the elements $f_w$, $f_{-w}$, $F_e=\langle f_w,\ f_{-w} \rangle$, while $L_e=\langle z_w,\ f_w \rangle$.
It follows from the condition~(\ref{w5}) that elements $b^1_w\cdot z_w$, $b^2_w\cdot z_w$, \ldots, $b^{n-2}_w\cdot z_w$ belong to the group $F_e$, where $b_w=(b^1_w,\ldots, b^{n-2}_w)$.
Since the intersection number $i_e$ is equal to the greatest common divisor of numbers $b^1_w$, $b^2_w$, \ldots, $b^{n-2}_w$, then $\alpha\cdot z_w\in F_e$, iff $\alpha$ is divisible by $i_e$. Therefore $(L_e:F_e)=i_e$. \end{proof}
For each block $M_v$ of the type 2, the intersection lattices $P^1_v, P^2_v\subset F_v$ generate a subgroup $P_v\simeq \mathbb Z^2$, $P_v=\langle P^1_v, P^2_v \rangle$ (the least subgroup in $F_v$, containing $P^1_v$ and $P^2_v$) in the group $F_v$.
\begin{definition}\label{def:slper} We call the group $P_v$ \textit{the group of fiber intersections} of the block $M_v$. \end{definition}
The subgroup $P_v$ is not necessarily maximum, so it may not be a lattice in $F_v\simeq \mathbb Z^2$.
\begin{definition}\label{def:in2} The index $j_v$ of the subgroup $P_v$ in the group $F_v$ is called \textit{secondary intersection number} at the vertex $v$. \end{definition}
For each block $M_v$ of type 1, we set $P_v=F_v$ and $j_v=1$ (since $P^1_v\simeq \mathbb Z$ while $P_v\simeq \mathbb Z^2$, we have $P_v\neq P^1_v$).
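In the 4-dimensional case both invariants can be read off from integer data: $i_w$ is the greatest common divisor of the entries of the row $b_w$, and $j_v$ is the index in $F_v\simeq\mathbb Z^2$ of the subgroup generated by integer vectors spanning $P^1_v$ and $P^2_v$, which for linearly independent vectors equals the absolute value of their determinant. The following minimal Python sketch only illustrates this; the function names are ours, and the vectors are assumed to be written in a fixed Waldhausen basis.
\begin{verbatim}
# Minimal illustration: the intersection number and the secondary
# intersection number from integer data (4-dimensional case).
from math import gcd
from functools import reduce

def intersection_number(b_w):
    # i_w = gcd of the entries of the (nonzero) integer row b_w
    return reduce(gcd, (abs(b) for b in b_w))

def secondary_intersection_number(p1, p2):
    # j_v = index in F_v = Z^2 of the subgroup generated by the
    # integer vectors p1, p2 spanning P^1_v and P^2_v; for linearly
    # independent vectors this index equals |det(p1, p2)|.
    return abs(p1[0] * p2[1] - p1[1] * p2[0])

print(intersection_number((0, 1)))                    # 1
print(secondary_intersection_number((1, 0), (1, 2)))  # 2
\end{verbatim}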
\section{Reducing the intersection numbers to~1}\label{sec:virtual}
In this section we prove that any graph-manifold admits a finite-sheeted cover by a graph-manifold in which all intersection numbers, as well as all secondary intersection numbers, are equal to 1.
\begin{lemma}\label{lem:virtual2} For any graph-manifold $M$ there is a graph-manifold $N$ that finitely covers $M$ and for which all intersection numbers and all secondary intersection numbers are equal to~1. \end{lemma}
\begin{proof} For each vertex $v\in \mathrm V$ we consider a cover $r_v\colon T^2\to T^2$, corresponding to a subgroup $P_v<F_v=\pi_1(T^2)$ of the fundamental group of the fiber torus (for a vertex of the type 1 such a cover is trivial). The degree of this cover is equal to the secondary intersection number $j_v$ at the vertex~$v$.
We consider the surface $\Phi_v$ and construct an orbifold $\Phi'_v$ as follows. For each edge $w\in \partial v$ we glue a disk $D_w$ with a conical point of angle $2\pi/j_{u}$, where $u$ is the other end of the edge $w$, to the corresponding boundary component of the surface $\Phi_v$.
Since the surface with boundary $\Phi_v$ is different from the disk and the annulus, the orbifold $\Phi'_v$ is a good, compact 2-dimensional orbifold without boundary, and therefore (see~\cite[Theorem 2.5]{Sc}) there are a closed surface $\Psi'_v$ and a finite cover $p'_v\colon \Psi'_v\to \Phi'_v$.
Let $n_v$ be the degree of this cover. We denote the product $\prod\limits_{v\in \mathrm V}{n_v}$ by~$N$ and the product $\prod\limits_{v\in \mathrm V}{j_v}$ by~$J$.
Cutting out from the surface $\Psi'_v$ the preimage $(p')^{-1}_v(\bigcup\limits_{w\in \partial v}{D_w})$ of the glued disks, we obtain a surface $\Psi_v$ with boundary, which covers the surface $\Phi_v$ with finite degree. We denote the corresponding cover by $p_v\colon \Psi_v\to \Phi_v$.
Let $N_v=\Psi_v\times T^2$. Such manifolds we also call blocks. We define the cover $$ q_v\colon N_v\to M_v=\Phi_v\times T^2, $$ as the product of covers $p_v\colon \Psi_v\to \Phi_v$ and $r_v\colon T^2\to T^2$.
Note that in the block $N_v$ the group $P_v$ plays the role of a fiber group.
Let $w$ be an edge from a vertex $v$ to a vertex $u$, and let $\gamma_w$ be the boundary component of the surface $\Phi_v$ corresponding to the edge $w$. On each component of the preimage of the torus $T_{w}=\gamma_w\times T^2$ the cover $q_v$ is a product of covers and is determined by the subgroup $A_{w}=\langle P_v,\ P_u\rangle=B_w\times P_v$ of the group $L_{w}$, where $B_w$ is a subgroup of the group $\pi_1(\gamma_w)\simeq \mathbb Z$ of index $j_u$.
Thus, the group $A_w$ has the subgroup index $(L_w:A_w)=j_u\cdot j_v$ in the group $L_w$.
Now we describe the gluings of the blocks. Let $w$ be an edge from a vertex $v$ to a vertex $u$. Let $T_v$ be a boundary component of the block $N_v$ and let $T_u$ be a boundary component of the block $N_u$ such that $T_u$ and $T_v$ cover the torus $T_{|w|}$. A gluing $g'_w\colon T_v\to T_u$ is defined by an isomorphism between the groups $H_1(T_v;\mathbb Z)$ and $H_1(T_u;\mathbb Z)$.
Each of these groups is identified with $A_{w}$ via the covers $q_v$ and $q_u$ respectively; this determines the required gluing. Since the lattices $P_u, P_{v}<A_{w}$ are different, such a gluing satisfies condition~(3) of Definition~\ref{def:gm}.
Note that for the edge $w$, from a vertex $u$ to a vertex $v$ of the graph $G$, we have
$P_{|w|}=F_w\cap F_{-w}=P_u\cap P_v<L_w$.
Therefore, for the edge $w'$ that corresponds to the gluing of the blocks $N_u$ and $N_v$, the intersection lattice satisfies $P_{|w'|}=P_{|w|}$. Consequently, the group $P_v$ plays the role of the fiber intersection group for the block $N_v$, which means that the secondary intersection number of the block $N_v$ is equal to $(P_v:P_v)=1$.
By Lemma~\ref{lemm:ind1} the intersection number on the edge $w'$ is equal to the subgroup index $(A_w:\langle P_u, P_v \rangle)$ of the subgroup $\langle P_u, P_v \rangle$ in the group $A_w$, i.e it is equal to~1.
For each vertex $v\in \mathrm V$ we consider $N/n_v\cdot J/j_v$ copies of the block $N_v$.
For each edge $w\in \mathrm W$
($e=|w|$), between $u,v\in \mathrm V$, we have $N/n_u\cdot J/j_u$ copies of the block $N_u$ and $N/n_v\cdot J/j_v$ copies of the block $N_v$.
The block $N_u$ has $(n_u\cdot j_u)/(j_u\cdot j_v)=n_u/j_{v}$ boundary components that cover the torus $T_e$, and the block $N_v$ has $n_v/j_{u}$ boundary components that cover the torus $T_e$.
Then all copies of the block $N_u$ together have $N/n_u\cdot J/j_u\cdot n_u/j_v=(N\cdot J)/(j_u\cdot j_v)$ boundary components covering the torus $T_e$.
All copies of the block $N_v$ together have the same number of boundary components covering the torus $T_e$.
We fix a one-to-one correspondence between these boundary components and glue each boundary component of a copy of the block $N_u$ to the corresponding boundary component of a copy of the block $N_v$ by some gluing homeomorphism $g'_w$.
We obtain a graph-manifold $M'$ for which all intersection numbers and all secondary intersection numbers are equal to~1, and which covers the graph-manifold~$M$ with degree $N\cdot J$. \end{proof}
Applying Lemma~\ref{lem:virtual2} to the graph-manifold $M$, we obtain a graph-manifold $N$, for which all intersection numbers are equal to~1, and also all secondary intersection numbers are equal to~1. Moreover, the fundamental groups $\pi_1(N)$ and $\pi_1(M)$ are quasi-isometric.
\section{The proof of Theorem~\ref{thm:type2}}\label{sec:maintheorem}
For the reader's convenience, we present here Lemma~2.4 from~\cite{KL}. This lemma plays an important role in the proof of Theorem~\ref{thm:type2}.
\begin{lemma}\label{lemm:KL} Let $S$ be a smooth compact manifold with strictly negative curvature, and totally-geodesic boundary. Denote by $\widetilde{S}$ the universal cover of $S$. Let $\alpha$ be a closed smooth 1-form on $\partial S$. Denote by $\alpha'$ the pull-back of $\alpha$ to $\partial \widetilde{S}$.
Then there exists a smooth Lipschitz function $h\colon \widetilde{S}\to \mathbb R$ satisfying
$d h|_{\partial \widetilde{S}}=\alpha'$. \end{lemma}
Let $M$ be a 4-dimensional graph-manifold with $\operatorname{type} M \leq 2$.
Passing to a finite cover, we may assume that all intersection numbers of $M$ are equal to~1 and that all secondary intersection numbers are equal to~1.
Since all secondary intersection numbers of $M$ are equal to~1, we can choose a Waldhausen basis $\{(z_w,f_w)\mid w\in \partial v, v\in \mathrm V\}$ such that for each block $M_v$ of type~2 we have $f^1_v\in P^1_v$ and $f^2_v\in P^2_v$.
Moreover, for each block of type~1 we can choose $f^1_v\in P^1_v$.
For each edge $w\in \mathrm W$, from $v$ to $u$, the gluing $\widehat{g}_{-w}$ of blocks $M_v$ and $M_u$ is given by bases $(z_w,f_w)$ and $(z_{-w},f_{-w})$ of the lattice
$L_{|w|}$.
In other words, the matrix $g_{-w}$ is obtained by expanding the basis $(z_w,f_w)$ in the basis $(z_{-w},f_{-w})$.
We can assume that $P^1_u=P_w=P^1_v$, where $P_w=F_w\cap F_{-w}$. Then, in this notation, $f^1_w=\pm f^1_{-w}$. Moreover, since the intersection numbers are equal to 1 and $f^1_w=\pm f^1_{-w}$, it follows from formula~(\ref{w5}) that $f^2_{-w}-z_{w}\in F_{w}$.
We denote the vector $f^2_{-w}-z_w$ by $\delta_w$.
Step by step, changing the gluings on the edges, we construct an orthogonal graph-manifold $N$, whose universal cover $\widetilde{N}$ is bi-Lipschitz equivalent to the universal cover $\widetilde{M}$ of the graph-manifold $M$.
We fix an edge $w\in \mathrm W$; let it connect the vertices $v$ and $u$. We define the new gluing $\widehat{g}'_{-w}$ of the blocks $M_v$ and $M_u$ by the basis $z'_w=z_w+\delta_w$, $f'_w=f_w$.
Thus, the isomorphism $\widehat{g}'_{-w}$ is obtained from the isomorphism $\widehat{g}_{-w}$ by the translation by the vector $\delta_w$ in the first coordinate. That is, the new gluing identifies the vectors $f^1_w$ and $f^1_{-w}$, as well as the vectors $z_{w}$ and $f^2_{-w}$.
Since under such a modification of the gluing the lattices $P_v$, $v\in \mathrm V$, and $F_e$, $e\in \mathrm E$ (see Definitions~\ref{def:ind} and \ref{def:in2}) do not change, neither the intersection numbers nor the secondary intersection numbers change.
Cutting the graph-manifold $M$ along the torus
$T_{|w|}$, and then gluing along it with the gluing $\widehat{g}'_{-w}$, we obtain the graph-manifold $N$.
\begin{lemma}\label{lem:perestr} The universal covers of the graph-manifolds $M$ and $N$ are bi-Lipschitz homeomorphic. \end{lemma} \begin{proof} The graph-manifolds $M$ and $N$ have a common graph $G$, and hence the Bass-Serre tree of the graph-manifold $M$ coincides with that of the graph-manifold $N$.
Moreover, for each vertex $v'\in \mathrm V$ blocks $M_{v'}$ and $N_{v'}$ are isomorphic.
The universal cover $\widetilde{M}$ of the graph-manifold $M$ is divided into blocks, dual to the Bass-Serre tree $T_M$, each of which is the universal cover of some block of the graph-manifold $M$.
We call the blocks that cover the block $M_v$ {\it distinguished} blocks.
The universal cover $\widetilde{N}$ of the graph-manifold $N$ is also divided into blocks. Since the Bass-Serre trees of these graph-manifolds coincide, the blocks of the manifold $\widetilde{N}$ are copies of the blocks of the manifold $\widetilde{M}$.
The manifold $\widetilde{M}$ differs from the manifold $\widetilde{N}$ only by the gluings along the boundary components of the distinguished blocks. The blocks of $\widetilde{N}$ that correspond to the distinguished blocks of $\widetilde{M}$ are also called distinguished.
Now we prove that the universal covers $\widetilde{M}$ and $\widetilde{N}$ are bi-Lipschitz homeomorphic. We construct a map $\widetilde{M}\to \widetilde{N}$ in the following way: we map each non-distinguished block of the manifold $\widetilde{M}$ identically onto the corresponding non-distinguished block of the manifold $\widetilde{N}$. For each distinguished block $\widetilde{M}_v=\widetilde{\Phi_v}\times \mathbb R^2$ our map induces a map from the boundary of this block to the boundary of the corresponding distinguished block $\widetilde{N}_v=\widetilde{\Phi_v}\times \mathbb R^2$. This map is the identity on each boundary component that does not correspond to the edge $w$. On a boundary component $\ell_w\times \mathbb R^2$ that corresponds to the edge $w$, this map is an affine map $A_w\colon \ell_w\times \mathbb R^2\to \ell_w\times \mathbb R^2$ that corresponds to the map $h_w=(g'_{-w})^{-1}\circ g_{-w}\colon
H_1(T_{|w|};\mathbb Z)\to H_1(T_{|w|};\mathbb Z)$. The map $A_w$ is determined up to an integer shift in the second factor.
We expand the vector $\delta_w$ in the basis $(f^1_w,f^2_w)$ of the space $F_w$, $\delta_w=\gamma_1 f^1_w+\gamma_2 f^2_w$.
The map $h_w$ is given in the basis $(z_w,f^1_w,f^2_w)$ by formulas $h_w(z_w)=z_w-\delta_w=z_w-\gamma_1 f^1_w- \gamma_2 f^2_w$, $h_w(f^1_w)=f^1_w$, $h_w(f^2_w)=f^2_w$.
Consider a coordinate system $(x,y,z)$ on the boundary component $\ell_w\times \mathbb R^2$, so that the line $y=z=0$ corresponds to the direction $z_w$, the line $x=z=0$ corresponds to the direction $f^1_w$, and the line $x=y=0$ corresponds to the direction $f^2_w$.
In this coordinate system, the map $h_w$ corresponds to the class $\mathcal{A}_w$ of maps $A\colon \ell_w\times \mathbb R^2\to \ell_w\times \mathbb R^2$ looking like $$ A(x,y,z)=(x,y-\gamma_1\cdot x + c_1, z-\gamma_2\cdot x + c_2),\quad (c_1,c_2)\in \mathbb Z^2. $$
Consider a function $\varphi_1\colon \partial\widetilde{\Phi_v}\to \mathbb R$ which equals $-\gamma_1$ on the components that correspond to the edge $w$ and equals $0$ on the other components. This function defines a closed 1-form on the boundary of the compact surface $\Phi_v$ with boundary.
Similarly, a function $\varphi_2\colon \partial\widetilde{\Phi_v}\to \mathbb R$ that equals $-\gamma_2$ on the components that correspond to the edge $w$ and equals $0$ on the other components defines a closed 1-form on the boundary of the compact surface $\Phi_v$ with boundary.
By Lemma~\ref{lemm:KL}, there exists a smooth Lipschitz function $h_1\colon \widetilde{\Phi_v}\to \mathbb R$ satisfying $d h_1|_{\partial \widetilde{\Phi_v}}=\varphi_1$. Similarly, there exists a smooth Lipschitz function $h_2\colon \widetilde{\Phi_v}\to \mathbb R$ satisfying $d h_2|_{\partial \widetilde{\Phi_v}}=\varphi_2$.
In other words, the restrictions of functions $h_1$ and $h_2$ on the boundary components of the surface $\widetilde{\Phi}_v$ are affine functions.
By construction, on each boundary component $\sigma$ of the block $M_v$ the homeomorphism $\widehat{h}\colon \widetilde{M}_v\to \widetilde{N}_v$ given by the formula $\widehat{h}(x,y,z)=(x,y+h_1(x),z+h_2(x))$ differs from some map of the class $\mathcal{A}_w$ by a bounded vector $(c^\sigma_1,c^\sigma_2)\in \mathbb R^2$. We consider Lipschitz functions $h'_1,h'_2\colon \widetilde{\Phi_v}\to \mathbb R$ with support in a sufficiently small neighborhood of the boundary $\partial \widetilde{\Phi}_v$ for which on each boundary component $\sigma$ of the block $M_v$ we have $h'_1=c^\sigma_1$ and $h'_2=c^\sigma_2$.
Then the difference $h(x,y,z)=\widehat{h}(x,y,z)-(0,h'_1(x),h'_2(x))$ is a required bi-Lipschitz homeomorphism. \end{proof}
For the graph-manifold $N$ and the edge $-w$ of the graph $G$, opposite to the edge $w$, we construct, similarly to the construction described above, a gluing $\widehat{g}'_w\colon L_{-w}\to L_{w}$ that identifies the vectors $f^1_w$ and $f^1_{-w}$, as well as the vectors $z_{-w}$ and $f^2_{w}$.
This gluing does not change the vectors $z_{w}$ and $f^2_{-w}$.
Cutting the graph-manifold $N$ along the torus $T_{|w|}$ and then gluing along it by the gluing $\widehat{g}'_{w}$, we obtain a graph-manifold $N'$ whose gluing along the edge $|w|$ is orthogonal.
It follows from Lemma~\ref{lem:perestr} that the universal covers of graph-manifolds $M$ and $N'$ are bi-Lipschitz homeomorphic.
Applying the above operation successively to all pairs of opposite edges $(w,-w)$ of the graph $G$, we obtain from the graph-manifold $M$ an orthogonal graph-manifold $N$ whose universal cover is bi-Lipschitz homeomorphic to the universal cover of the graph-manifold $M$.
This proves Theorem~\ref{thm:type2}.\qed
\section{A criterion of orthogonality}\label{sec:exmpl}
In this section, we present a criterion of orthogonality for 4-dimensional graph-manifolds for which the type of each vertex is equal to~2. As a consequence, we construct an example of a 4-dimensional graph-manifold that is not orthogonal, all blocks of which have type~2 and all intersection numbers and secondary intersection numbers of which are equal to~1.
We recall the definition of the charge map of a graph-manifold from~\cite{BK} (for the case of graph-manifolds $M$ of arbitrary dimension, $\dim M=n$).
Below, we pass to homology groups with real coefficients, keeping the same notation. In particular, we denote $F_w\otimes_\mathbb Z\mathbb R$ by $F_w$ and $L_{|w|}\otimes_\mathbb Z\mathbb R$ by $L_{|w|}$.
For each directed edge $w$ of the graph $G$ of the graph-manifold $M$, the gluing matrix $g_w$ defines a map $D_w:F_{-w}\to F_w$ such that $D_w(f_{-w}p_w)=f_wd_wp_w$, where $p_w\in \mathbb R^{n-2}$ is a column of real numbers. In other words, the map $D_w$ is defined in the bases $f_{-w}$ and $f_w$ by the submatrix $d_w$ of the matrix $g_w$. This map can be interpreted as the projection of the space $F_{-w}$ onto the space $F_w$ along the vector $z_w$.
In particular, the map $D_w$ is the identity map at the intersection $F_{-w}\cap F_w$.
For each directed edge $w$ of the graph $G$ of the graph-manifold $M$, we fix an orientation of the space $F_w$.
Let $u_w=f^1_w\wedge f^2_w\wedge\ldots\wedge f^{n-2}_w$. Identifying the spaces $L_{-w}$ and $L_w$ via the map $g_w$, we obtain a space $L_{|w|}$ with a pair of oriented subspaces $F_w$ and $F_{-w}$. Under these conditions, we have the canonical intersection orientation $u_{w\cap {-w}}$ on the subspace $F_w\cap F_{-w}$ (see~\cite{BK}).
\begin{definition} \textit{The charge map} of the vertex $v\in V$ is the restriction $$K_v\colon Q_v\to F_v$$ of the map $$ \bigoplus\limits_{w\in \partial v} \frac{1}{i_w}D_w\colon \bigoplus\limits_{w\in \partial v} F_{-w}\to F_v $$ to the subspace $Q_v$, where $Q_v\subset \bigoplus\limits_{w\in \partial v} F_{-w}$ consists of all vectors $q_v=\bigoplus\limits_{w\in \partial v}q_{-w}$ for which there exists a number $\alpha\in \mathbb R$ such that $q_{-w}\wedge u_{w\cap -w} =\alpha\cdot u_{-w}$ for each $w\in\partial v$. \end{definition}
This subspace does not depend on the choice of the Waldhausen basis $(z,f)$, and for its dimension we have $\dim Q_v=(n-3)|\partial v|+1$ (for details see~\cite{BK}). Note that the subspace $$A_v=\{q_v=\bigoplus\limits_{w\in \partial v}q_{-w}\mid q_{-w}\wedge u_{w\cap -w} =0\}\subset Q_v$$ is a hyperplane in $Q_v$, $\dim A_v=(n-3)|\partial v|$.
In the 3-dimensional case, $n=3$, the map $K_v\colon Q_v\to F_v$ is a linear map of 1-dimensional spaces, and therefore it is uniquely determined by a rational number $k_v$, the charge of the vertex $v$.
Although in higher dimensions the charge map is not a number, we can nevertheless speak of the vanishing of the charge of a vertex.
\begin{definition} {\em The charge of a vertex $v\in V$ vanishes} iff the kernel of the charge map is not contained in the subspace $A_v\subset Q_v$, i.e.\ $\ker K_v\not\subset A_v$.
In this case we write $k_v=0$. \end{definition}
\begin{remark} In the 3-dimensional case $\dim F_v=\dim Q_v=1$ and $A_v=\{0\}$; consequently, the condition $\ker K_v\not\subset A_v$ is equivalent to $\ker K_v=Q_v$, i.e.\ to $k_v=0$. Hence our definition coincides with the usual definition of $k_v=0$. \end{remark}
Let $M$ be a 4-dimensional graph-manifold all blocks of which have type~2. For each vertex $v$ of the graph $G$ of the manifold $M$ and for each edge $w$ from $v$ to $u$, there are two intersection lattices in the fiber lattice $F_u$ of the block $M_u$. We denote the one that is not an intersection lattice for the edge $w$ by $\bar{P}_u$.
Also denote $\bar{P}_u\otimes_{\mathbb Z} \mathbb R$ by $J_{-w}$.
\begin{definition} We define {\it the subspace of intersection vectors} in the space $Q_v$ by the next formula $$ B_v:=Q_v\cap \bigoplus\limits_{w\in \partial v} J_{-w}. $$ \end{definition}
\begin{remark} It follows from the definition that the subspace of intersection vectors for the vertex $v$ does not depend on the Waldhausen basis and, consequently, is a topological invariant of the graph-manifold $M$. \end{remark}
\begin{lemma}\label{lem:ortcharge} For any orthogonal graph-manifold $M$ and for any vertex $v$ of its graph $G$ we have $B_v\subset \ker{K_v}$.
Moreover, the charge of any vertex of $M$ is vanishing. \end{lemma}
\begin{proof} According to Remark~\ref{rem:ort}, we can choose on each block of the graph-manifold $M$ a Waldhausen basis so that for each edge $w\in \mathrm W$ the bases $(z_w,f_w)$ and $(z_{-w},f_{-w})$ of the groups $L_w$ and $L_{-w}$ differ only by a permutation of the elements and, possibly, by the signs of the vectors. Fix a vertex $v$.
By orthogonality, for each edge $w\in \partial v$, the subspace $J_{-w}$ is generated by the vector $z_w$. Therefore, $B_v\subset \ker{K_v}$. Moreover, we can choose a sign $\varepsilon_w=\pm 1$ of the vector $z_{w}$ such that $\varepsilon_w\cdot z_{w}\wedge u_{w\cap -w} =1\cdot u_{-w}$.
In this way, $q_v=\bigoplus\limits_{w\in \partial v}\varepsilon_w\cdot z_{w}\in Q_v$, and, at the same time, $q_v\notin A_v$.
We have that $K_v(q_v)=0$, and consequently $k_v=0$. \end{proof}
\begin{remark} It follows from Lemma~\ref{lemm:ind1} and Definition~\ref{def:in2} that the intersection number of any edge and the secondary intersection number of any vertex of an orthogonal graph-manifold are equal to~1. \end{remark}
Thus one obstruction to the orthogonality of a graph-manifold is an intersection number or a secondary intersection number different from~1. However, even in the class of graph-manifolds all blocks of which have type~2 and all intersection numbers and secondary intersection numbers of which are equal to~1, there exist non-orthogonal graph-manifolds. Below (see Corollary~\ref{cor:notort}) we give an example of such a graph-manifold.
\begin{theorem}\label{thm:crit} Let $M$ be a graph-manifold, all blocks of which have type~2. The graph-manifold $M$ is orthogonal, iff the following three conditions are satisfied \begin{itemize}
\item[(1)] the intersection number of each edge is equal to 1;~\label{i1}
\item[(2)] the secondary intersection number of each vertex is equal to 1;~\label{i2}
\item[(3)] the subspace of the intersection vectors of each vertex is contained in
the kernel of the charge map
$B_v\subset \ker{K_v}$.~\label{i3} \end{itemize} \end{theorem}
\begin{proof} If $M$ is orthogonal, then from Lemma~\ref{lemm:ind1}, Definition~\ref{def:in2}, and Lemma~\ref{lem:ortcharge} we have that conditions~(1)--(3) are satisfied.
Conversely, since the type of every block is equal to~2 and the secondary intersection numbers are equal to~1, we can choose a Waldhausen basis $\{(z_w,f_w)\mid w\in \partial v, v\in \mathrm V\}$ such that for every block $M_v$ we have $f^1_v\in P^1_v$ and $f^2_v\in P^2_v$.
For each edge $w\in \mathrm W$, from the vertex $v$ to the vertex $u$, the gluing $\widehat{g}_{-w}$ of the blocks $M_v$ and $M_u$ is given by bases $(z_w,f_w)$ and $(z_{-w},f_{-w})$ of the space
$L_{|w|}$.
In other words, the matrix $g_{-w}$ of this gluing is obtained by expanding the basis $(z_w,f_w)$ in the basis $(z_{-w},f_{-w})$. We can assume that $P^1_u=P_w=P^1_v$, where $P_w=F_w\cap F_{-w}$. Then, in this notation, $f^1_w=\pm f^1_{-w}$. Moreover, since the intersection numbers are equal to~1 and $f^1_w=\pm f^1_{-w}$, it follows from formula~(\ref{w5}) that $f^2_{-w}-z_{w}\in F_{w}$.
It follows from condition~(3) that $q=\bigoplus\limits_{w\in \partial v} f^2_{-w}\in \ker{K_v}$. This means that $\sum\limits_{w\in \partial v} D_w(f^2_{-w})=0$. On the other hand, since $f^2_{-w}-z_{w}\in F_{w}$, we have $D_w(f^2_{-w})=f^2_{-w}-z_{w}$.
Denote $f^2_{-w}-z_{w}$ by $n_w$. From the properties~\eqref{w3} and \eqref{w4} and the equality $\sum\limits_{w\in \partial v}{n_w}=0$ it follows that there exists a Waldhausen basis $\{(\bar{z}_w,\bar{f}_w)\mid w\in \partial v, v\in \mathrm V\}$ such that $\bar{z}_w=z_w+n_w$ and $\bar{f}_w=f_w$ for any directed edge of the graph $G$.
For such a basis the following conditions are satisfied $\bar{f}^1_w=\bar{f}^1_{-w}$, $\bar{f}^2_{-w}=\bar{z}_w$ and $\bar{f}^2_{w}=\bar{z}_{-w}$.
It means that the manifold $M$ is orthogonal. \end{proof}
\begin{corollary}\label{cor:notort} There exists a 4-dimensional non-orthogonal graph-manifold, all blocks of which have type~2, all intersection numbers are equal to~1, and all secondary intersection numbers are equal to~1. \end{corollary}
\begin{proof} As a graph $G$ of a graph-manifold $M$ we take a cycle of the length $k\geq 3$.
We number its vertices $v_1$, \ldots, $v_k$.
For each vertex $v_i$, $i=1,\ldots, k$, we consider a block $M_i=\Phi\times T^2$, where $\Phi$ is a torus with 2 boundary components. We glue the graph-manifold from blocks so that for each edge $w\in \mathrm W$ the corresponding gluing matrix is equal to $$g_w=g_w^{z,f}=\left(\begin{array}{ccc}
0 & 0 & 1\\
0 & 1 & 0\\
1 & 0 & 0\\
\end{array}\right). $$ It follows from the definition (see subsection~\ref{subsec:grm}) that the resulting graph-manifold is orthogonal. Consequently, by Theorem~\ref{thm:crit}, for each vertex $v\in \mathrm V$ the subspace of intersection vectors is contained in the kernel of the charge map, $B_v\subset \ker{K_v}$.
Consider the edge $w$ from the vertex $v_2$ to the vertex $v_3$.
Replacing the gluing on this edge by the gluing $$\bar{g}_w=\bar{g}_w^{z,f}=\left(\begin{array}{ccc}
0 & 0 & 1\\
0 & 1 & 1\\
1 & 0 & 0\\
\end{array}\right), $$ we obtain a graph-manifold $M'$ all blocks of which have type~2. It follows from Lemma~\ref{lemm:ind1} and Definition~\ref{def:in2} that all intersection numbers of $M'$ are equal to~1 and all secondary intersection numbers of $M'$ are equal to~1. On the other hand, the charge map of the vertex $v_1$ does not change. At the same time, the subspaces of intersection vectors are different. Indeed, consider the edges $w_k$ and $w_2$ from $v_1$ to $v_k$ and $v_2$ respectively. Then the new subspace of intersection vectors $B'_{v_1}$ is obtained from the old one by the translation by the vector $0+f^1_{-w_2}\in F_{-w_k}\oplus F_{-w_2}$. This means that $K_{v_1}(q)=f^1_{v_1}$, where $q=f^2_{-w_k}\oplus f^2_{-w_2}\neq 0$.
That is, $B'_{v_1}$ is not contained in the kernel $\ker{K_{v_1}}$. It follows from Theorem~\ref{thm:crit} that the graph-manifold $M'$ is not orthogonal. \end{proof}
\end{document}
\begin{document}
\title{All unitary perfect polynomials over $\mathbb{F}_2$ with less than five distinct prime factors}
\author{Luis H. Gallardo - Olivier Rahavandrainy \\ Department of Mathematics, University of Brest,\\ 6, Avenue Le Gorgeu, C.S. 93837, 29238 Brest Cedex 3, France.\\
e-mail : [email protected] - [email protected]} \maketitle \begin{itemize} \item[a)] Running head: binary unitary perfect polynomials. \item[b)] Keywords: Sum of divisors, unitary divisors, polynomials, finite fields,\\ characteristic $2.$ \item[c)] Mathematics Subject Classification (2000): 11T55, 11T06. \item[d)] Corresponding author: Luis H. Gallardo. \end{itemize}
{\bf{Abstract}} We find all unitary perfect polynomials over the prime field $\mathbb{F}_2$ with less than five distinct prime factors.
\section{Introduction} Let $p$ be a prime number and let $\mathbb{F}_q$ be a finite field of characteristic $p$ and order $q.$ Let $A \in \mathbb{F}_q[x]$ be a monic polynomial. We say that a divisor $d$ of $A$ is unitary if $d$ is monic and $\displaystyle{\gcd(d,\frac{A}{d}) = 1}$. Let $\omega(A)$ denote the number of distinct monic irreducible factors of $A$ over $\mathbb{F}_q$ and let $\sigma(A)$ (resp. $\sigma^*(A)$) denote the sum of all monic divisors (resp. unitary divisors) of $A$ ($\sigma$ and $\sigma^*$ are multiplicative functions).\\
The analogous notion over the positive integers is that of unitary perfect numbers. Only a few results are known about them (see \cite{Goto, Graham, Wall}): all of them are even numbers, and we know only five of them. Graham \cite{Graham} characterized three of them, namely $6,60,87360.$ Goto \cite{Goto} proved an explicit exponential upper bound in $k=\omega(n)$ for $n$ unitary perfect. Wall \cite{Wall} improved a previous result of Subbarao by proving that $\omega(n) \geq 9$ for any unitary perfect number $n.$
We call \emph{even} a polynomial $A$ with some zero in $\mathbb{F}_q,$ and \emph{odd} a polynomial that is not even. We assume that $A \notin \mathbb{F}_q.$
Since $A$ and $\sigma(A)$ have the same degree, the condition that $A$ divides $\sigma(A)$ is equivalent to $\sigma(A)=A.$ If $\sigma(A) = A$ (resp. $\sigma^*(A) = A$), then we say that $A$ is a perfect (resp. unitary perfect) polynomial. We may consider perfect polynomials as a polynomial analogue of the multiperfect numbers. E. F. Canaday, the first doctoral student of Leonard Carlitz, began the study of perfect polynomials in $1941$ \cite{Canaday}, working over the prime field $\mathbb{F}_2.$
Later, in the seventies, J. T. B. Beard Jr. et al. extended this work in several directions (see e.g. \cite{BeardU}, \cite{Beard2}, \cite{Beard}) including the study of unitary perfect polynomials.
We became interested in this subject a few years ago and obtained some results (\cite{Gall-Rahav}, \cite{Gall-Rahav3}, \cite{Gall-Rahav4}, \cite{Gall-Rahav2}, \cite{Gall-Rahav7}, \cite{Gall-Rahav5}, \cite{Gall-Rahav6} and \cite{Gall-Rahav8}), including, for $q \in \{2,4\}$, a complete classification of the perfect polynomials $A$ for which $\omega(A)$ is small.
We began the study of unitary perfect polynomials by considering the splitting case when $q=p^2$ (see \cite{Gall-Rahav9}). In this paper we study more general unitary perfect polynomials $A$ improving on previous results of Beard et al. \cite{BeardU2} and Beard \cite{BeardU}. In particular we prove that $A$ must be even, contrary to perfect polynomials for which we do not know whether or not there exist odd perfect polynomials. More precisely, we determine here all unitary perfect polynomials $A$, over $\mathbb{F}_2$, such that $\omega(A) \leq 4$. As usual $\mathbb{N}$ denotes the nonnegative integers and $\mathbb{N}\sp{*}$ the positive integers. \\
Our main results are the following:\\
Let $A$ be a nonconstant polynomial over $\mathbb{F}_2$ such that $\omega(A) \leq 4$, then $A$ is unitary perfect if and only if either $A$ or $A(x+1)$ is of the form $B^{2^n}$ for some $n \in \mathbb{N}$ where: $$\begin{array}{l} - \text{ if } \omega(A) \leq 3: \\ \\ B = x(x+1),\\ B = x^3(x+1)^3(x^2+x+1)^2,\\ B(x) \in \{x^3(x+1)^2(x^2+x+1), \ x^5(x+1)^4(x^4+\cdots+x+1)\}\\ \\ - \text{ if $\omega(A)=4$:} \\ \\ i) \ B = x^6 (x+1)^4 (1+x+x^2)^3(1+x+x^4),\\ ii) \ B = x^{13} (x+1)^{8} (1+x+x^2)^4(1+x+\cdots+x^{12}),\\ iii) \ B = x^{11} (x+1)^{8} (1+x+\cdots+x^4)^2(1+x+\cdots+x^{10}),\\ iv) \ B = x^9 (x+1)^4 (1+x+x^2)^2(1+x^3+x^6),\\ v) \ B = x^{25} (x+1)^{16} (1+x+\cdots+x^4)^4(1+x^5+x^{10}+ x^{15}+ x^{20}),\\ vi) \ B = x^7 (x+1)^4 (1+x^2+x^3)(1+x+x^3),\\ vii) \ B = x^3 (x+1)^3 (1+x+x^2)^3(1+x+x^4),\\ viii) \ B = x^5 (x+1)^6 (1+x+x^2)^2(1+x +\cdots+x^4),\\ ix) \ B = x^5 (x+1)^5 (1+x^3+x^4)(1+x +\cdots+x^4),\\ x) \ B = x^{13} (x+1)^{12}(1+x+x^2)^8(1+x+\cdots+x^{12}),\\ xi) \ B = x^9 (x+1)^6 (1+x+x^2)^4(1+x^3+x^6),\\ xii) \ B = x^7 (x+1)^7 (1+x+x^3)^2(1+x^2+x^3)^2. \end{array}$$ We may consider the family $\{x^{2^n}(x+1)^{2^n}: n \in \mathbb{N}\}$ as an analogue of the family $\{x^{2^n+1}(x+1)^{2^n+1}\}$ of trivial even perfect polynomials over $\mathbb{F}_2.$ \\ Note that Beard \cite{BeardU} and Beard et al. \cite{BeardU2} computed the above list with the exception of v), x), and xi) that are new.
Moreover, compared to the list of all perfect polynomials $A$ over $\mathbb{F}_2$ with $\omega(A) <5$ given in \cite{Gall-Rahav5}, we obtain an additional family of irreducible divisors of unitary perfect polynomials: $$\begin{array}{l} S_1(x) = 1+x^3+x^6, \ S_1(x+1),\\ S_2(x) = 1 + x^5 + x^{10} + x^{15} + x^{20}, \ S_2(x+1)\\ S_3(x) = 1+x+\cdots+ x^{10}, \ S_3(x+1),\\ S_4(x) = 1+x+\cdots+ x^{12}, \ S_4(x+1). \end{array}$$
It is clear from the above results that the classification of all perfect or unitary perfect polynomials $A$ with a moderately large number $\omega(A)$ of distinct prime factors may become very complicated. New tools need to be discovered to make more progress in this area.
\section{Preliminary}\label{preliminaire0} We need the following results. Some of them are obvious, so we omit their proofs. Our first result gives information on the sizes of the primary parts of unitary perfect polynomials.
\begin{lemma} \emph{(see also \cite[Theorem 1]{BeardU})} \label{nombreminimal} If $A = \displaystyle{P_1^{h_1} \cdots P_r^{h_r} Q_1^{k_1} \cdots Q_s^{k_s}}$ is a nonconstant unitary perfect polynomial over $\mathbb{F}_q$ such that: $$\left\{\begin{array}{l} P_1,\ldots, P_r, Q_1, \ldots, Q_s \text{ are irreducible}\\ h_1\deg(P_1)= \cdots=h_r \deg(P_r) < k_1 \deg(Q_1) \leq \cdots \leq k_s \deg(Q_s). \end{array}\right.$$ Then: $$r \equiv 0 \ ({\rm{mod}}\ p).$$ \end{lemma}
\begin{proof} By definition, one has $\displaystyle{0 = \sigma^*(A)-A = \frac{A}{P_1^{h_1}} + \cdots + \frac{A}{P_r^{h_r}}+ \cdots}$, where the omitted terms have strictly smaller degree. \\ In particular, $r = 1+\cdots+1$, which is the leading coefficient of $\displaystyle{\frac{A}{P_1^{h_1}} + \cdots + \frac{A}{P_r^{h_r}}}$,
equals $0$ in $\mathbb{F}_p$. \end{proof}
\begin{lemma} \label{multiplicativity} If $A = A_1A_2$ is unitary perfect over $\mathbb{F}_2$ and $\gcd(A_1,A_2) = 1$, then $A_1$ is unitary perfect if and only if $A_2$ is unitary perfect. \end{lemma} \begin{lemma} \label{translation} If $A(x)$ is unitary perfect over $\mathbb{F}_2$, then the polynomials $A(x+1)$ and $A^{2^n}$ are also unitary perfect over~$\mathbb{F}_2$, for any $n \in \mathbb{N}$. \end{lemma}
We recall here some useful notation and results from Canaday's paper \cite{Canaday}:
\begin{itemize} \item We define the inverse of a polynomial $P(x)$ of degree $m$ as the polynomial $\displaystyle{P^*(x) = x^m P(\frac{1}{x})}$. \item We say that $P$ inverts into itself if $P = P^*$. \item A polynomial $P$ is complete if $P = 1 + x + \cdots + x^h$, for some $h\in \mathbb{N}$. \end{itemize}
Part iii) of the following lemma is essentially a result of Dickson (see \cite[Lemma 2]{Canaday})
\begin{lemma} [\rm{see \cite[lemma 7]{Canaday}, \cite[Lemma 2.1]{Gall-Rahav5}}] \label{complete1} \emph{i)} Any complete polynomial inverts into itself. \emph{ii)} If $1 + x + \cdots + x^h = PQ$, where
$P, Q$ are irreducible, then either $(P = P^*, Q = Q^*)$ or $(P = Q^*, Q = P^*)$.\\ \emph{iii)} If $P = P^*$, $P$ irreducible and if $P = x^a(x+1)^b+1$, then: $$P \in \{1+x+x^2, 1 + x + \cdots + x^4\}.$$ \end{lemma}
\begin{lemma} \emph{(see \cite[Lemmata 4, 5, 6 and Theorem 8]{Canaday})}
\label{complete2} Let $P, Q \in \mathbb{F}_2[x]$ such that $P$ is irreducible and let $n,m \in \mathbb{N}$.\\ \emph{i)}If $1 + P + \cdots + P^{2n} = Q^m$, then $m \in \{0,1\}$.\\ \emph{ii)} If $1 + P + \cdots + P^{2n} = Q^m A$, with $m > 1$ and $A \in \mathbb{F}_2[x]$ is nonconstant, then ${\rm{deg}}(P) > {\rm{deg}}(Q)$.\\ \emph{iii)} If $1 + x + \cdots + x^{2n} = PQ$ and $P = 1 + (x+1) + \cdots + (x+1)^{2m}$, then $n=4$, $P= 1+x+x^2$ and $Q = P(x^3) = 1+x^3+x^6$.\\ \emph{iv)} If any irreducible factor of $1+x+\cdots + x^{2n}$ is of the form $x^a(x+1)^b+1$, then $n \in \{1,2,3\}$.\\ \emph{v)} If $1 + x + \cdots + x^{h} = 1 + (x+1) + \cdots + (x+1)^{h}$, then $h= 2^n-2$, for some $n \in \mathbb{N}$. \end{lemma}
\begin{lemma} \label{complete3} If $1+x+x^2$ divides $1+x+\cdots + x^{h}$, then $h \equiv 2 \mod 3$.\\ If $1+x+\cdots+x^4$ divides $1+x+\cdots + x^{h}$, then $h \equiv 4 \mod 5$. \end{lemma}
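Both divisibility conditions are easy to check by machine. The following small Python/SymPy sketch is only an illustration (the function names are ours); it verifies the two conditions for small values of $h$.
\begin{verbatim}
# Illustrative check of the divisibility conditions above.
from sympy import symbols, Poly

x = symbols('x')

def complete(h):
    # the complete polynomial 1 + x + ... + x^h over GF(2)
    return Poly(sum(x**i for i in range(h + 1)), x, modulus=2)

def divides(d, h):
    # does 1 + x + ... + x^d divide 1 + x + ... + x^h over GF(2)?
    return complete(h).rem(complete(d)).is_zero

print(all(divides(2, h) == (h % 3 == 2) for h in range(2, 40)))  # True
print(all(divides(4, h) == (h % 5 == 4) for h in range(4, 40)))  # True
\end{verbatim}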
As a special case of \cite[Theorem 2.47]{Rudolf}, we have \begin{lemma} \label{irreduc0} The polynomial $1+x+\cdots+x^m$ is irreducible over $\mathbb{F}_2$ if and only if: $$\text{$m+1$ is a prime number and $2$ is a primitive root in $\mathbb{F}_{m+1}$.}$$ \end{lemma}
Consequently one gets
\begin{lemma} \label{irreduc} \emph{i)} The polynomial $Q(x)=1+x^5+\cdots+(x^5)^l$ is irreducible over $\mathbb{F}_2$ if and only if $l = 4$.\\ \emph{ii)} The polynomial $Q(x)=1+x+\cdots+x^{3\cdot 2^r}$ is irreducible over $\mathbb{F}_2$ if and only if $r = 2$.\\ \emph{iii)} The polynomial $Q(x)=1+x+\cdots+x^{5\cdot 2^r}$ is irreducible over $\mathbb{F}_2$ if and only if $r = 1$. \end{lemma}
\begin{proof} We prove only necessity. Sufficiency is obtained by direct computations.\\ i): For $k \in \mathbb{N}^*$, let $\Phi_k$ be the $k$-th cyclotomic polynomial over $\mathbb{F}_2$. Recall that if $k$ is a prime number, then $\Phi_k(x) = 1+x+\cdots+x^{k-1}$.\\ If $Q(x)$ is irreducible, then $1+x+\cdots+x^l$ is also irreducible.\\ Thus, by Lemma \ref{irreduc0}, $l+1$ is a prime number and $Q(x) = \Phi_{l+1}(x^5)$.\\ It remains to observe that if $5 \not= l+1$, then $$\Phi_{l+1}(x^5) = \Phi_{l+1}(x) \ \Phi_{5(l+1)}(x),$$ so that $Q$ is not irreducible in that case. We conclude that $l=4$.\\ ii): If $Q(x)$ is irreducible, then by Lemma \ref{irreduc0}, $p=3 \cdot 2^r + 1$ is a prime number and $2$ is a primitive root in $\mathbb{F}_{p}$. So, $2$ is not a square in $\mathbb{F}_{p}$. By considering the Legendre symbol $\displaystyle{(\frac{2}{p}) = (-1)^{\frac{p^2-1}{8}}}$, we see that we must have $r \in \{1,2\}$.\\ The case $r=1$ does not happen since $Q(x)=1+x+\cdots+x^{6}=(1+x+x^3)(1+x^2+x^3)$ is not irreducible.\\ iii): As above, we obtain $r \in \{1,2\}$. The case $r =2$ does not happen since $5\cdot 2^2 + 1=21$ is not prime. \end{proof}
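The criterion of Lemma \ref{irreduc0} is also easy to test numerically; the following elementary Python sketch (the function names are ours) lists the small values of $m$ for which $1+x+\cdots+x^m$ is irreducible over $\mathbb{F}_2$.
\begin{verbatim}
# Illustrative test of the criterion: 1 + x + ... + x^m is
# irreducible over F_2 iff m+1 is prime and 2 is a primitive
# root modulo m+1.
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def mult_order(a, n):
    # multiplicative order of a modulo n; 0 if a is not a unit mod n
    if gcd(a, n) != 1:
        return 0
    k, p = 1, a % n
    while p != 1:
        p = (p * a) % n
        k += 1
    return k

def complete_is_irreducible(m):
    return is_prime(m + 1) and mult_order(2, m + 1) == m

print([m for m in range(2, 14) if complete_is_irreducible(m)])
# [2, 4, 10, 12]: the complete factors 1+x+x^2, 1+x+...+x^4,
# S_3 and S_4 appearing in the main results.
\end{verbatim}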
We prove now the non-existence of odd unitary perfect polynomials:
\begin{lemma} \label{noodd} Any nonconstant unitary perfect polynomial over $\mathbb{F}_2$ is divisible by $x$ and by $x+1$. In particular, there is no odd unitary perfect polynomial over $\mathbb{F}_2$. \end{lemma}
\begin{proof} If $P$ is an odd prime polynomial over $\mathbb{F}_2$, then $P(0) = P(1) = 1$, so that for any positive integer $h$, $1+P(0)^h = 1+P(1)^h = 0$. Thus, the polynomials $x$ and $x+1$ divide $1+P^h$. Now, let $A$ be a unitary perfect polynomial. We have $\omega(A) \geq 2$. If both $x$ and $x+1$ divide $A$, then we are done. If there exists an odd polynomial $P \in \mathbb{F}_2[x]$ such that $P^h \ | \ A$ and $P^{h+1} \nmid A$, then $\sigma^*(P^h) = 1+P^h$ divides $\sigma^*(A) = A$. So $x$ and $x+1$ divide $A$. \end{proof}
\begin{remark} \begin{itemize} \item In the rest of the paper, we put $\overline{S}(x) = S(x+1)$ for $S \in \mathbb{F}_2[x]$. \item For Theorems \ref{casomegainf3} and \ref{casomega4}, we shall prove only necessity, since sufficiency is always obtained by direct computations. \end{itemize} \end{remark}
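The direct computations mentioned in the remark can be carried out with any computer algebra system. As an illustration only (the function name is ours), the following Python/SymPy sketch checks the identity $\sigma^*(A)=A$ over $\mathbb{F}_2$, using the multiplicativity of $\sigma^*$.
\begin{verbatim}
# Illustrative check of sigma*(A) = A over GF(2):
# sigma*(A) = prod (1 + P^h) over the primary parts P^h of A.
from sympy import symbols, Poly

x = symbols('x')

def is_unitary_perfect_gf2(expr):
    A = Poly(expr, x, modulus=2)
    _, factors = A.factor_list()          # factorization over GF(2)
    sigma_star = Poly(1, x, modulus=2)
    for P, h in factors:
        sigma_star = sigma_star * (Poly(1, x, modulus=2) + P**h)
    return sigma_star == A

# B = x^3 (x+1)^2 (x^2+x+1) is one of the polynomials listed above.
print(is_unitary_perfect_gf2(x**3 * (x + 1)**2 * (x**2 + x + 1)))  # True
print(is_unitary_perfect_gf2(x**2 * (x + 1)))                      # False
\end{verbatim}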
\section{Case $\omega(A) \leq 3$} We prove the following result: \begin{theorem} \label{casomegainf3} Let $A \in \mathbb{F}_2[x]$ be a polynomial such that $\omega(A) \leq 3$, then $A$ is unitary perfect over $\mathbb{F}_2$ if~and~only~if either $A$ or $\overline{A}$ is of the form $B^{2^n} \text{ for some } n \in \mathbb{N}, \text {where:}$ $$\left\{\begin{array}{l} i) \ B = x^2+x,\\ ii) \ B \in \{x^3(x+1)^2(x^2+x+1), \ x^5(x+1)^4(x^4+\cdots+x+1)\},\\ iii) \ B = x^3(x+1)^3(x^2+x+1)^2. \end{array} \right.$$ \end{theorem}
\subsection{Case $\omega(A) = 2$} The following proposition gives the first part of Theorem \ref{casomegainf3}.
\begin{proposition} Let $A \in \mathbb{F}_2[x]$ such that $\omega(A) = 2$, then $A$ is unitary perfect over $\mathbb{F}_2$ if and only if $A$ is of the form $(x^2+x)^{2^n}$, for some $n \in \mathbb{N}$. \end{proposition}
\begin{proof} It remains to prove necessity since sufficiency is obvious.\\ The case where $A \in \{ x^hP^k, (x+1)^h P^k\}$, with $P$ odd, is impossible by Lemma \ref{noodd}. So $A$ splits: $A =x^h(x+1)^k$. We must have: $1+x^h = (x+1)^h, \ 1+(x+1)^k = x^k$. Hence, $h=k=2^n$, for some $n\in \mathbb{N}$. \end{proof}
Consequently the unitary perfect polynomials $A$ with $\omega(A)=2$ are exactly the perfect polynomials with $\omega(A)=2.$
\subsection{Case $\omega(A) = 3$} In this case, $A$ is of the form $x^{h_1}(x+1)^{k_1} P^l$, with $P$ odd.
\begin{lemma} \label{exponentodd} If $A = x^{h_1}(x+1)^{k_1} P^l$ is a unitary perfect polynomial over $\mathbb{F}_2$, then $l = 2^n$, for some nonnegative integer $n$. \end{lemma}
\begin{proof} Put: $l = 2^n u$, where $u$ is odd and $n \in \mathbb{N}$. Since the only prime divisors of $A = \sigma^*(A)$ are $x,x+1$ and $P$, and since $P$ does not divide $1+P^l$, the polynomial $1+P^l = \sigma^*(P^l)$ must be of the form $x^a(x+1)^b$. Thus, $$(1+P)(1+P+ \cdots + P^{u-1}) = 1+P^u =x^c(x+1)^d.$$ Since $x, x+1$ divide $1+P$ and since $\gcd(1+P, 1+P+ \cdots + P^{u-1}) = 1$, we conclude that $u-1 = 0$. \end{proof}
Put $h_1 = 2^h c, \ k_1 = 2^k d$ with $c,d$ odd. Since $A$ is unitary perfect, we have \begin{equation} \label{starzero}
\left\{\begin{array}{l} 1+x^{h_1} = (x+1)^{2^h}(1+x+\cdots +x^{c-1})^{2^h},\\ 1+(x+1)^{k_1} = x^{2^k}(1+(x+1)+\cdots +(x+1)^{d-1})^{2^k},\\ 1+P^{2^n} = (1+P)^{2^n} = (x^{a_3}(x+1)^{b_3})^{2^n}. \end{array} \right. \end{equation}
Lemma \ref{complete2}-i) implies that: $$1+x+\cdots +x^{c-1}, \ 1+(x+1)+\cdots +(x+1)^{d-1} \in \{1, P\}.$$ Since $h_1$ and $k_1$ play symmetric roles and since $P$ must appear in the right hand side of \eqref{starzero}, we may reduce the study to the two cases: $$\begin{array}{l} {\rm{(I)}}: 1+x+\cdots +x^{c-1}=P, \ d = 1, \\ {\rm{(II)}}: 1+x+\cdots +x^{c-1} = P = 1+(x+1)+\cdots +(x+1)^{d-1}. \end{array}$$
\subsubsection{Case (I)} According to Lemma \ref{complete1}-iii), we have: $P \in \{1+x+x^2, 1+x+\cdots+x^4\}$ and $c \in \{3,5\}$.\\ By considering exponents and degrees, System \eqref{starzero} implies $$\begin{array}{l} k=h+1, n=h \ \text{ if } c = 3,\\ k=h+2, n=h \ \text{ if } c = 5. \end{array}$$ We obtain part ii) of Theorem \ref{casomegainf3}.
\subsubsection{Case (II)} We have $c=d$ and $P = \overline{P}$. So, by Lemma \ref{complete1}, $P = 1+x+x^2$, and hence $c=d=3$. System \eqref{starzero} implies: $k=h, \ n=h+1$, and we obtain part iii) of Theorem \ref{casomegainf3}. This completes the proof of Theorem \ref{casomegainf3}.\\
It turns out that we can also get Theorem \ref{casomegainf3} as a consequence of a nice result of Swan:
\subsubsection{Another proof using Swan's Lemma} We would like to give here another proof of parts ii) and iii) of Theorem~\ref{casomegainf3}, by using Lemma \ref{nombreminimal} and the following result about the reducibility of binary polynomials in $\mathbb{F}_2[x]$: \begin{lemma} [see \cite{Swan}, p. 1103, line 3] \label{swan} Let $n, k \in \mathbb{N}$ be such that $8n > k$. Then the polynomial $x^{8n} + x^k +1$ is reducible over $\mathbb{F}_2$. \end{lemma}
From that, we obviously obtain the
\begin{corollary} \label{corollaryswan} Let $r$ be a positive integer, then the polynomial $$P=x^{2^r} + x^{2^r-1}+1$$ is irreducible over $\mathbb{F}_2$ if and only if $r \in\{1,2\}$. \end{corollary}
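For small values of $r$ the corollary can also be checked directly by factoring over $\mathbb{F}_2$; the short Python/SymPy sketch below (the function name is ours) does this.
\begin{verbatim}
# Illustrative check of the corollary for small r.
from sympy import symbols, Poly

x = symbols('x')

def trinomial_is_irreducible(r):
    n = 2**r
    P = Poly(x**n + x**(n - 1) + 1, x, modulus=2)
    _, factors = P.factor_list()
    return len(factors) == 1 and factors[0][1] == 1

print([r for r in range(1, 7) if trinomial_is_irreducible(r)])  # [1, 2]
\end{verbatim}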
We recall that $A$ is of the form $x^{h_1}(x+1)^{k_1} P^l$, with $P$ odd and $l = 2^n$ for some $n \in \mathbb{N}$. Put $p = \deg(P)$. By Lemma \ref{nombreminimal}, we have either $({h_1} = {k_1} \leq lp)$ or $({h_1}=lp \leq {k_1})$ or $({k_1}=lp \leq {h_1})$. The third case is similar to the second since ${h_1}$ and ${k_1}$ play symmetric roles. \\ \\ \underline{Case ${h_1} = {k_1} \leq lp$}\\ \\ We obtain $A = x^{h_1}(x+1)^{h_1}P^{2^n}$, $h_1 \leq 2^n p$. Since $A$ is unitary perfect, we have $$\begin{array}{l} 1+x^{h_1} = (x+1)^{b_1}P^{c_1},\\ 1+(x+1)^{h_1} = x^{a_2}P^{c_2},\\ 1+P^{2^n} = (1+P)^{2^n} = (x^{a_3}(x+1)^{b_3})^{2^n}. \end{array}$$ Hence: $$\begin{array}{l} P = x^{a_3}(x+1)^{b_3} + 1,\\ (x+1)^{b_1}P^{c_1} = 1+x^{h_1} = 1+(x+1+1)^{h_1} = (x+1)^{a_2}{(P(x+1))}^{c_2}. \end{array}$$ It follows that: $$a_2 = b_1, \ c_2 = c_1 \geq 1, \ P(x) = P(x+1).$$ Thus, $c_2 = c_1=2^{n-1}$ and $a_3 = b_3$. The irreducibility of $P$ implies $a_3=b_3=1$. So, $P = x^2+x+1$. Put ${h_1} = 2^h c$, where $c$ is odd. We have now: $$(1+x)^{2^h}(1+x+\cdots+ x^{c-1})^{2^h} = 1+x^{h_1} = (x+1)^{b_1}(x^2+x+1)^{2^{n-1}}.$$ Thus $c=3$ and $h=n-1$. We get $A = B^{2^{n-1}}$, where $B=x^3(x+1)^3(x^2+x+1)^2$. So we obtain part iii) of Theorem \ref{casomegainf3}.\\ \\ \underline{Case ${h_1}=lp \leq {k_1}$}\\ \\ We obtain now: $A = x^{h_1}(x+1)^{k_1}P^{2^n}$, $h_1 = 2^n p \leq k_1$. Since $A$ is unitary perfect, we have $$\begin{array}{l} 1+x^{h_1} = (1+x^p)^{2^n}= ((x+1)^{b_1}P^{c_1})^{2^n},\\ 1+(x+1)^{k_1} = x^{a_2}P^{c_2},\\ 1+P^{2^n} = (1+P)^{2^n} = (x^{a_3}(x+1)^{b_3})^{2^n}. \end{array}$$ Hence: $$a_2+c_2p = k_1,\ b_1+c_1p = p, \ 2^nc_1+c_2 = 2^n.$$ It follows that $c_1 \in \{0,1\}$. If $c_1 = 0$, then $b_1 = p$ and $1+x^p = (x+1)^p$, so $p = 2^r$, for some $r \in \mathbb{N}^*$. Thus, $a_3 + b_3 = 2^r$. Since $P = x^{a_3}(x+1)^{b_3} +1$ is irreducible, $a_3$ and $b_3$ must be both odd. Moreover, $c_2 = 2^n$ and $$a_2+2^n \ 2^r = a_2+c_2p = k_1 = 2^n(b_1+b_3) = 2^n(2^r + b_3).$$ Hence $$a_2 = 2^nb_3,$$ and $$(1+ (x+1)^{2^r+b_3})^{2^n} = 1+(x+1)^{k_1} = x^{a_2}P^{c_2} = (x^{b_3}P)^{2^n}.$$ It follows that: $$1+ (x+1)^{2^r+b_3} = x^{b_3}P = x^{b_3}(x^{a_3}(x+1)^{b_3} +1).$$ Thus, $$b_3 = 1,\ a_3 = 2^r - 1, \ k_1 = 2^n(2^r+1), \ P = x^{2^r - 1}(x+1)+1,$$ and $$A = (x^{2^r}(x+1)^{2^r+1}P)^{2^n}.$$ So by Corollary \ref{corollaryswan}, we get $r \in\{1,2\}$ and $\overline{A}$ satisfies part ii) of Theorem~\ref{casomegainf3}.\\ If $c_1 = 1$, then $c_2 = b_1 = 0$. It follows that $1+x^p = P$, with $p \geq 2$. This contradicts the fact that $P$ is irreducible.
\section{Case $\omega(A) = 4$} \label{case4factors} We prove the following result: \begin{theorem} \label{casomega4} Let $A \in \mathbb{F}_2[x]$ be a polynomial such that $\omega(A) = 4$, then $A$ is unitary perfect over $\mathbb{F}_2$ if~and~only~if either $A$ or $\overline{A}$ is of the form $B^{2^n} \text{ for some } n \in \mathbb{N}, \text {where:}$ $$\left\{\begin{array}{l} i) \ B = x^6 (x+1)^4 (1+x+x^2)^3(1+x+x^4),\\ ii) \ B = x^{13} (x+1)^{8} (1+x+x^2)^4(1+x+\cdots+x^{12}),\\ iii) \ B = x^{11} (x+1)^{8} (1+x+\cdots+x^4)^2(1+x+\cdots+x^{10}),\\ iv) \ B = x^9 (x+1)^4 (1+x+x^2)^2(1+x^3+x^6),\\ v) \ B = x^{25} (x+1)^{16} (1+x+\cdots+x^4)^4(1+x^5+x^{10}+ x^{15}+ x^{20}),\\ vi) \ B = x^7 (x+1)^4 (1+x^2+x^3)(1+x+x^3),\\ vii) \ B = x^3 (x+1)^3 (1+x+x^2)^3(1+x+x^4),\\ viii) \ B = x^5 (x+1)^6 (1+x+x^2)^2(1+x +\cdots+x^4),\\ ix) \ B = x^5 (x+1)^5 (1+x^3+x^4)(1+x +\cdots+x^4),\\ x) \ B = x^{13} (x+1)^{12}(1+x+x^2)^8(1+x+\cdots+x^{12}),\\ xi) \ B = x^9 (x+1)^6 (1+x+x^2)^4(1+x^3+x^6),\\ xii) \ B = x^7 (x+1)^7 (1+x+x^3)^2(1+x^2+x^3)^2. \end{array} \right.$$ \end{theorem}
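Although the proof given in the remainder of this section proceeds purely algebraically, the listed forms can also be confirmed mechanically. The following sketch (an illustration only, not a replacement for the proof) encodes some of the polynomials $B$ by their prime factorizations, as Python integers whose bit $i$ carries the coefficient of $x^i$, and checks the defining identity $\sigma^*(B)=B$, where $\sigma^*(B)$ is the product of $P^e+1$ over the unitary prime power divisors $P^e$ of $B$; the remaining forms can be checked in the same way.
\begin{verbatim}
def pmul(a, b):
    # product in F_2[x] (carry-less multiplication of the encoding integers)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def ppow(a, e):
    r = 1
    for _ in range(e):
        r = pmul(r, a)
    return r

X, X1 = 0b10, 0b11                    # x and x+1
forms = {                             # (irreducible factor, exponent) lists
    "i":   [(X, 6), (X1, 4), (0b111, 3), (0b10011, 1)],
    "iv":  [(X, 9), (X1, 4), (0b111, 2), (0b1001001, 1)],
    "vii": [(X, 3), (X1, 3), (0b111, 3), (0b10011, 1)],
    "xii": [(X, 7), (X1, 7), (0b1011, 2), (0b1101, 2)],
}
for name, factorization in forms.items():
    B, sigma = 1, 1
    for P, e in factorization:
        Pe = ppow(P, e)
        B = pmul(B, Pe)
        sigma = pmul(sigma, Pe ^ 1)   # P^e + 1
    print(name, sigma == B)           # expected output: True for every form
\end{verbatim}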
The following proposition gives more details about the form of a unitary perfect polynomial.
\begin{proposition} \label{exponentodd2} Every unitary perfect polynomial $A$ over $\mathbb{F}_2$, with $\omega(A) = 4$, is of the form $x^{h_1}(x+1)^{k_1} P^{2^l u}Q^{2^m}$, where: $$\begin{array}{l} \text{i) $P, Q,u$ are odd, ${\rm{deg}}(P) \leq {\rm{deg}}(Q)$},\\ \text{ii) $h_1,k_1 \in \mathbb{N}^*$, $l,m \in \mathbb{N}$ and either $(u =1)$ or $(u=3, \ Q = 1+P+ P^2)$,}\\ \text{iii) $P \in \{1+x+x^2, 1+x+\cdots+x^4\}$ if $P$ is complete},\\ \text{iv) ${\rm{deg}}(Q) \geq 4$ if $\ Q$ is complete}. \end{array}$$ \end{proposition}
\begin{proof}
First of all, $x$ and $x+1$ divide $A$ by Lemma \ref{noodd}. So $$A = x^{h_1}(x+1)^{k_1} P^{r}Q^{s},$$ for some $h_1,k_1,r,s \in \mathbb{N}^*$. Put $r = 2^l u, \ s = 2^m v$, where $u,v$ are odd and $l,m \in \mathbb{N}$. Consider $$\sigma^*(Q^s) = 1+Q^s = (1+Q)^{2^m}(1+Q+\cdots + Q^{v-1})^{2^m}.$$ Since $x$ and $x+1$ divide $1+Q$, they do not divide $1+Q+\cdots + Q^{v-1}$. Hence, $1+Q+\cdots + Q^{v-1} \in \{1,P\}$, by Lemma \ref{complete2}-i). If $v-1 \geq 2$, then $1+Q+\cdots + Q^{v-1} = P$. This is impossible because ${\rm{deg}}(P) \leq {\rm{deg}}(Q)$. Thus, $v-1 = 0$ and $s = 2^m$. Now, by considering degrees, we see that the irreducible odd polynomial $Q$ does not divide $1+P$. It follows that $(1+P)^{2^l}(1+P+\cdots + P^{u-1})^{2^l} = 1+P^r = \sigma^*(P^r)$ must be of the form $x^a(x+1)^bQ^c$. Thus, by Lemma \ref{complete2}-i): $$1+P+\cdots + P^{u-1} \in \{1, Q\}.$$ We conclude that either $(u = 1)$ or $(1+P+\cdots + P^{u-1} = Q)$.\\ If $u > 1$, then put $u=2w+1$. We get $$1+Q^{2^m} = (1+Q)^{2^m} = \text{\Large{(}}P(1+P+\cdots+P^{u-2})\text{\Large{)}}^{2^m} =$$ $$ \text{\LARGE{(}}P(1+P)\text{\Large{(}}1+P+\cdots+P^{w-1}\text{\Large{)}}^2 \text{\LARGE{)}}^{2^m}.$$ Since $x, x+1$ and $P$ divide $1+Q$ and since $x,x+1$ divide $1+P$, none of the irreducible divisors of $A$ does divide $1+P+\cdots+P^{w-1}$. Hence $w=1$, $u = 3$ and $Q = 1+P+P^2$. Since ${\rm{deg}}(P) \leq {\rm{deg}}(Q)$, the irreducible polynomial $Q$ does not divide $1+P$. So $P$ is always of the form $x^a(x+1)^b + 1$. If $P$ is complete, then by parts i) and iii) of Lemma \ref{complete1}, we have $P \in \{1+x+x^2, 1+x+\cdots+x^4\}$. Finally, if $Q$ is complete, since $1+x+x^2$ is the only degree $2$ odd irreducible polynomial over $\mathbb{F}_2$, we must have ${\rm{deg}}(Q) \geq 4$. \end{proof}
Put
$$p={\rm{deg}}(P), \ q = {\rm{deg}}(Q), \ h_1 = 2^hc, \ k_1 = 2^kd, \text{ with $c, d$ odd}.$$ Since $A$ is unitary perfect and since $Q$ does not divide $1+P$, we have: \begin{equation} \label{star}
\ \left\{\begin{array}{l} 1+x^{h_1} = (1+x^c)^{2^h}= (1+x)^{2^h}(1+x+\cdots +x^{c-1})^{2^h} = (1+x)^{2^h}P^{2^hc_1}Q^{2^hd_1},\\ 1+(x+1)^{k_1} = x^{2^k}(1+(1+x)+\cdots +(1+x)^{d-1})^{2^k} = x^{2^k}P^{2^kc_2}Q^{2^kd_2},\\ 1+P^{2^lu} = (1+P)^{2^l}(1+P+\cdots+P^{u-1})^{2^l} = (x^{a_3}(1+x)^{b_3})^{2^l}Q^{2^ld_3},\\ 1+Q^{2^m} = (1+Q)^{2^m} = (x^{a_4}(1+x)^{b_4}P^{c_4})^{2^m}. \end{array} \right. \end{equation}
By considering degrees and exponents of $x, x+1, P$ and $Q$, \eqref{star} implies: \begin{equation} \label{2star}
\ \left\{\begin{array}{l} 2^hc = 2^h(1+pc_1+qd_1) = 2^k + 2^l a_3 + 2^m a_4,\\ 2^kd = 2^k(1+pc_2+qd_2) = 2^h + 2^l b_3 + 2^m b_4,\\ 2^lup = 2^l(a_3 + b_3+qd_3) = (2^hc_1 + 2^kc_2+2^mc_4)p,\\ 2^mq = 2^m(a_4+b_4+pc_4) = (2^hd_1 + 2^kd_2+2^ld_3)q. \end{array} \right. \end{equation}
By Lemma \ref{complete2}, $c_1,d_1,c_2,d_2,d_3 \in \{0,1\}$ so that: $$1+x+\cdots +x^{c-1}, \ 1+(1+x)+\cdots +(1+x)^{d-1} \in \{1, P, Q, PQ\}.$$ Since $h_1$ and $k_1$ play symmetric roles, and since $x, x+1, P$ and $Q$ must divide $A=\sigma^*(A)$, it is sufficient to consider the following ten cases: $$\begin{array}{l} {\rm{(I)}}: c=d=1, \\ {\rm{(II)}}: 1+x+\cdots +x^{c-1} = P, \ d = 1, \\ {\rm{(III)}}: 1+x+\cdots +x^{c-1} = Q, \ d = 1, \\ {\rm{(IV)}}: 1+x+\cdots +x^{c-1} = PQ, \ d = 1,\\ {\rm{(V)}}: 1+x+\cdots +x^{c-1} = P = 1+(x+1)+\cdots +(x+1)^{d-1},\\ {\rm{(VI)}}: 1+x+\cdots +x^{c-1} = Q, \ 1+(x+1)+\cdots +(x+1)^{d-1} =P,\\ {\rm{(VII)}}: 1+x+\cdots +x^{c-1} = PQ, \ 1+(x+1)+\cdots +(x+1)^{d-1} =P,\\ {\rm{(VIII)}}: 1+x+\cdots +x^{c-1} = Q = 1+(x+1)+\cdots +(x+1)^{d-1},\\ {\rm{(IX)}}: 1+x+\cdots +x^{c-1} = PQ, \ 1+(x+1)+\cdots +(x+1)^{d-1} =Q,\\ {\rm{(X)}}: 1+x+\cdots +x^{c-1} = PQ= 1+(x+1)+\cdots +(x+1)^{d-1}. \end{array}$$
\subsection{Case (I)} In this case, if $u = 1$, then since $Q$ must appear in the right hand side of System \eqref{star}, $Q$ must divide $1+P$, which is impossible. So, $u = 3$ and $1+Q = P(P+1)$. Thus, System \eqref{star} implies that $c_4 = 1$ and $3\cdot2^l = c_4 \cdot 2^m = 2^m$ so that $3$ divides $2^m$. This is impossible.
\subsection{Case (II)} As above, $u = 3$ and $Q = 1+P+P^2$. By Proposition \ref{exponentodd2}, we get $$P \in \{1+x+x^2, 1+x+\cdots+x^4\} \text{ and } c \in \{3, 5\}.$$ If $P = 1+x+\cdots+x^4$, then: $$Q=1+P+P^2=1+x+x^3+x^6+x^8=(1+x+x^2)(1+x^2+x^4+x^5+x^6),$$ which is reducible.\\ So we must have: $P = 1+x+x^2$. Thus, $c=3$ and $Q = 1+x+x^4$. System \eqref{2star} implies that: $$l = m, \ h = m+1, \ k = m+2.$$ We obtain part i) of Theorem \ref{casomega4}.
\subsection{Case (III)} $P$ must divide $1+Q$ since it must appear in the right hand side~of~ \eqref{star}.\\ Put: $c-1 = 2^r s$, with $s$ odd. We get $$x^{a_4}(1+x)^{b_4+1}P^{c_4} = (1+x)(1+Q) = x(x+1)(1+x+\cdots+x^{c-2}).$$ Thus, $a_4 = 1$ and $$(x+1)^{b_4+1}P^{c_4} = (1+x)(1+x+\cdots+x^{c-2}) = 1+x^{c-1} = (1+x)^{2^r}(1+x+\cdots+x^{s-1})^{2^r}.$$ We conclude that: $$b_4 = 2^r - 1, \ c_4 = 2^r, \ P = 1+x+\cdots+x^{s-1}.$$ By Proposition \ref{exponentodd2}, we get $$P \in \{1+x+x^2, 1+x+\cdots+x^4\}.$$ Thus, $c \in \{3 \cdot 2^r + 1, 5 \cdot 2^r + 1\}$, and by Lemma \ref{irreduc}, $c \in \{11,13\}$. It follows that we must have $$\begin{array}{l} u=1, \ d_3=0, \\ P = 1+x+x^2, \ Q = 1+x+\cdots+x^{12} \ \text{ if $c = 13$},\\ P = 1+x+\cdots+x^4, \ Q = 1+x+\cdots+x^{10} \ \text{ if $c = 11$}. \end{array}$$
System \eqref{2star} implies $$\begin{array}{l} m=h, \ l=h+2, \ k = h+3 \ \text{ if $c = 13$},\\ m=h, \ l=h+1, \ k = h+3 \ \text{ if $c = 11$}. \end{array}$$ We obtain parts ii) and iii) of Theorem \ref{casomega4}.
\subsection{Case (IV)} We get $1+x+\cdots+x^{c-1} = PQ,$ and by Lemma \ref{complete1}: $P \in \{P^*, Q^*\}$.
\subsubsection{Case $P = P^*$} In this case, by Lemma \ref{complete1}-iii), we have: $P \in \{1+x+x^2,1+x+\cdots + x^4\}$.\\ $\bullet$ If $P = 1+x+x^2$, then by Lemma \ref{complete2}-iii), the only possibility is $$c = 9, \ Q = 1+x^3+x^6.$$ So, we must have $$u=1.$$ System \eqref{2star} implies the following: $$m=h, \ l=h+1, \ k = h+2.$$ We obtain then part iv) of Theorem \ref{casomega4}.\\
$\bullet$ If $P = 1+x+\cdots+x^4$, then $1+x+\cdots+x^4$ divides $1+x+\cdots+x^{c-1}$.\\ So, by Lemma \ref{complete3}, $c$ is divisible by $5$. Put $c=5 w$. We get $Q = 1+x^5+x^{10}+ \cdots + (x^5)^{w-1} \not= 1+P+P^2$. Thus, by Lemma \ref{irreduc}-i) and by Proposition \ref{exponentodd2}, we have $$c=5w=25, \ u=1, \ P = 1+x+\cdots+x^4, \ Q = 1+x^5+ x^{10}+x^{15}+ x^{20}.$$ System \eqref{2star} implies $$m=h, \ l=h+2, \ k = h+4.$$ So we obtain part v) of Theorem \ref{casomega4}.
\subsubsection{Case $P =Q^*$} We get $p=q$. So both $P$ and $Q$ are of the form $x^a(x+1)^b+1$. We conclude by Lemma \ref{complete2}-iv) that: $$c = 7, \ P, Q \in \{1+x^2+x^3, 1+x+x^3\}.$$ It follows that $Q \not= 1+P+P^2$ and $u =1$. System \eqref{2star} implies $$l=m=h, \ k=h+2.$$ We obtain then part vi) of Theorem \ref{casomega4}.
\subsection{Case (V)} In this case, by Lemma \ref{complete1}-iii), $P = 1+x+x^2$ and $c=d=3$. Moreover, $u$ must be equal to $3$. So, $Q = 1+P+P^2 = 1+x+x^4$. System \eqref{2star} implies now: $$l=m=k=h.$$ Consequently we obtain part vii) of Theorem \ref{casomega4}.
\subsection{Case (VI)} In this case, $\overline{P} \in \{1+x+x^2, 1+x+\cdots+x^4\}$ by Lemma \ref{complete2}-iv). So $Q \not= 1+P+P^2$ and hence $u = 1$.
\subsubsection{Case where $P$ does not divide $1+Q$} In this case, both $P$ and $Q$ are of the form $x^a(x+1)^b+1$. By Lemma \ref{complete2}-iv) and Proposition \ref{exponentodd2}-iii)-iv), we have two possibilities: $$\begin{array}{l} \overline{P} = P = 1+x+x^2, \ Q = 1+x+\cdots+x^4,\\ \overline{P} = 1+x+\cdots+x^4 = Q. \end{array}$$ Thus $(c,d) \in \{(5,3), (5,5)\}$. System \eqref{2star} implies $$\begin{array}{l} m=h, \ l=k=h+1 \text{ if $c=5, \ d=3$},\\ l=m=k=h \text{ if $c=d=5$}. \end{array}$$ We obtain parts viii) and ix) of Theorem \ref{casomega4}.
\subsubsection{Case where $P$ divides $1+Q$} In this case, $P$ must divide $\displaystyle{\frac{1+Q}{x} = 1+x+\cdots+ x^{c-2}}$. Moreover, according to System \eqref{star}, we have $$a_4 = 1, \ 1+x+\cdots+x^{c-2} = (x+1)^{b_4}P^{c_4}.$$ Thus, if we put $c-1 = 2^r s$, with $s$ odd, we obtain $$(1+x)^{2^r}(1+x+\cdots+x^{s-1})^{2^r} = (1+x^s)^{2^r} = 1+x^{c-1} = (x+1)^{b_4+1}P^{c_4}.$$ We conclude that: $$b_4 = 2^r - 1,$$ and by Lemma \ref{complete2}-i): $$P = 1+x+\cdots+x^{s-1}, \ c_4 = 2^r.$$ Hence, by Lemmata \ref{complete2}-v) and \ref{complete1}-iii), the only possibility that remains is $$\overline{P} = 1+x+x^2 = P, \ s=3, \ c = 3 \cdot 2^r+1.$$ It follows that $r = 2$ by Lemma \ref{irreduc}. System \eqref{2star} implies that: $$m=h, \ k=h+2,\ l=h+3.$$ We obtain part x) of Theorem \ref{casomega4}.
\subsection{Case (VII)} In this case, $P$ divides $1+x+\cdots+ x^{c-1}$. By Lemma \ref{complete2}-iii), we get $$c=9, \ P = 1+x+x^2, \ d=3, \ Q = 1+x^3+x^6.$$ Moreover, $u=1$ since $Q \not= 1+P+P^2$. \\ System \eqref{2star} implies that: $$m=h, \ k=h+1,\ l=h+2.$$ We obtain part xi) of Theorem \ref{casomega4}.
\subsection{Case (VIII)} In this case, by Lemma \ref{complete2}-v) and by Proposition \ref{exponentodd2}-iv), $c=d=2^w-1 \geq 5$.\\ Since $P$ must appear in the right hand side of \eqref{star} , it must divide $1+Q = x(1+x+\cdots+x^{c-2})$. Hence $P$ divides $1+x+\cdots+x^{c-2}$. Thus, $$a_4 = 1 \text{ and } (x+1)^{b_4}P^{c_4}= 1+x+\cdots+x^{c-2} = (1+x)(1+x+\cdots+x^{2^{w-1}-2})^2.$$ We deduce that: $$b_4 = 1, \ c_4 = 2, \ P = 1+x+\cdots+x^{2^{w-1}-2}.$$ By Proposition \ref{exponentodd2}-iii), we must have $$2^{w-1}-2 \in \{2,4\}.$$ So $w=3$ and $Q = 1+x+\cdots+x^7=(1+x)^7$ which is not irreducible.
\subsection{Case (IX)} In this case, $Q$ divides $1+x+\cdots+ x^{c-1}$. By Lemma \ref{complete2}-iii), we get $$Q = 1+x+x^2, \ P = 1+x^3+x^6.$$ This contradicts the fact: $\deg(P) \leq \deg(Q)$.
\subsection{Case (X)} In this case, by Lemma \ref{complete2}-v), by Proposition \ref{exponentodd2}-iv) and by Lemma \ref{complete1}-ii), we get $$c=d=2^w-1 \geq 5, \text{ and either $(P=P^*, Q = Q^*)$
or $(P = Q^*)$}.$$
\subsubsection{Case where $P=P^*,\ Q = Q^*$} We have by Lemma \ref{complete1}-iii): $P \in \{1+x+x^2,1+x+\cdots+x^4\}$.\\ $\bullet$ If $P = 1+x+x^2$, by Lemma \ref{complete2}-iii), $Q = 1+x^3+x^6$. Thus, $c = 9 =2^w-1$. This is impossible.\\ $\bullet$ If $P = 1+x+\cdots+x^4$, then $\overline{P}$ divides $1+x+\cdots +x^{d-1}$. So, by Lemma \ref{complete2}, $d-1 =8$. This is impossible.
\subsubsection{Case where $P=Q^*$} We have $p = q$ and both $P,Q$ are of the form $x^a(x+1)^b+1$. By Lemma \ref{complete2}-iv), $$c=d=7 \text{ and } P,Q \in \{1+x+x^3, 1+x^2+x^3\}.$$ Moreover $u = 1$, by Proposition \ref{exponentodd2}-ii). System \eqref{2star} implies that: $$l=m=h+1, \ k = h.$$ We obtain finally part xii) of Theorem \ref{casomega4}. This completes the proof of the Theorem.
\section{Acknowledgments} We are grateful to the referee of a first version of this paper for careful reading and for suggestions that improved the presentation of the paper. We are including in the next section his report (but excluding the detailed technical suggestions to authors).
\section{Report on preliminary version and conclusion} Referee report on the paper "All unitary perfect polynomials over F2 with less than five distinct prime factors" by Luis H. Gallardo and Olivier Rahavandrainy.\\
The authors are studying the problem of finding all the unitary perfect polynomials over finite fields. The present paper contains the full classification of all the perfect unitary polynomials over F2 and serves as a continuation of a series of their publication devoted to the same topic. Previously the problem was studied by E.F. Canaday, J.T.B. Beard Jr, A.T. Bulloc, M.S. Harbin, J.R. Oconnel Jr, K.I. West. The latest publication on study of the perfect unitary polynomials was published in 1991, and this makes papers of the mentioned authors hardly available. Moreover, publications [2]-[4] in the reference list is unavailable since the journal Rend. Acad. Lincei they published in has status "no longer indexed" in database of the AMS and the journal's webpage containing the mentioned volumes was not found. Happily the authors are citing the papers [2]-[4] only in the history of the question. The general idea of the proofs of the results in the paper is rather elementary. But it requires a great scope of computations and applies more deep results on irreducibility of the polynomials. Some of these irreducibility results was proved by the authors in their previous papers. In general the paper makes good impression by numerous tricks used by the authors to simplify computations. The paper worth to be published in the Journal, it presents a new research which devoted to an interesting problem. The authors gives several interesting ideas, combination of which solves a problem. I found several misprints and places where arguments of proofs are not clear. I'd like to recommend the authors to correct misprints and clarify unclear arguments in proofs.
\section{Conclusion} From the (seemingly favorable?) report above, it was deduced that the preliminary version of this paper was not suitable for publication in the IJNT.
\def\thebibliography#1{\section*{\titrebibliographie} \addcontentsline{toc} {section}{\titrebibliographie}\list{[\arabic{enumi}]}{\settowidth
\labelwidth{[
#1]}\leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumi}} \def\hskip .11em plus .33em minus -.07em{\hskip .11em plus .33em minus -.07em} \sloppy \sfcode`\.=1000\relax} \let\endthebibliography=\endlist
\def\def\titrebibliographie{References}\thebibliography{\def\titrebibliographie{References}\thebibliography} \let\endbiblio=\endthebibliography
\newbox\auteurbox \newbox\titrebox \newbox\titrelbox \newbox\editeurbox \newbox\anneebox \newbox\anneelbox \newbox\journalbox \newbox\volumebox \newbox\pagesbox \newbox\diversbox \newbox\collectionbox
\def\fabriquebox#1#2{\par\egroup \setbox#1=\vbox\bgroup \leftskip=0pt \hsize=\maxdimen \noindent#2}
\def\bibref#1{\bibitem{#1}
\setbox0=\vbox\bgroup}
\def\fabriquebox\auteurbox\styleauteur{\fabriquebox\auteurbox\styleauteur} \def\fabriquebox\titrebox\styletitre{\fabriquebox\titrebox\styletitre} \def\fabriquebox\titrelbox\styletitrelivre{\fabriquebox\titrelbox\styletitrelivre} \def\fabriquebox\editeurbox\styleediteur{\fabriquebox\editeurbox\styleediteur}
\def\fabriquebox\journalbox\stylejournal{\fabriquebox\journalbox\stylejournal}
\def\fabriquebox\volumebox\stylevolume{\fabriquebox\volumebox\stylevolume} \def\fabriquebox\collectionbox\stylecollection{\fabriquebox\collectionbox\stylecollection}
{\catcode`\- =\active\gdef\annee{\fabriquebox\anneebox\catcode`\- =\active\def -{\hbox{\rm \string-\string-}}\styleannee\ignorespaces}}
{\catcode`\- =\active\gdef\anneelivre{\fabriquebox\anneelbox\catcode`\-= \active\def-{\hbox{\rm \string-\string-}}\styleanneelivre}}
{\catcode`\-=\active\gdef\pages{\fabriquebox\pagesbox\catcode`\- =\active\def -{\hbox{\rm\string-\string-}}\stylepages}}
{\catcode`\- =\active\gdef\divers{\fabriquebox\diversbox\catcode`\-=\active \def-{\hbox{\rm\string-\string-}}\rm}}
\def\ajoutref#1{\setbox0=\vbox{\unvbox#1\global\setbox1= \lastbox}\unhbox1 \unskip\unskip\unpenalty}
\newif\ifpreviousitem \global\previousitemfalse \def\ifpreviousitem {,\ }\fi{\ifpreviousitem {,\ }\fi}
\def\voidallboxes {\setbox0=\box\auteurbox \setbox0=\box\titrebox \setbox0=\box\titrelbox \setbox0=\box\editeurbox \setbox0=\box\anneebox \setbox0=\box\anneelbox \setbox0=\box\journalbox \setbox0=\box\volumebox \setbox0=\box\pagesbox \setbox0=\box\diversbox \setbox0=\box\collectionbox \setbox0=\null}
\def\fabriquelivre {\ifdim\ht\auteurbox>0pt \ajoutref\auteurbox\global\previousitemtrue\fi \ifdim\ht\titrelbox>0pt \ifpreviousitem {,\ }\fi\ajoutref\titrelbox\global\previousitemtrue\fi \ifdim\ht\collectionbox>0pt \ifpreviousitem {,\ }\fi\ajoutref\collectionbox\global\previousitemtrue\fi \ifdim\ht\editeurbox>0pt \ifpreviousitem {,\ }\fi\ajoutref\editeurbox\global\previousitemtrue\fi \ifdim\ht\anneelbox>0pt \ifpreviousitem {,\ }\fi \ajoutref\anneelbox \fi\global\previousitemfalse}
\def\fabriquearticle {\ifdim\ht\auteurbox>0pt \ajoutref\auteurbox \global\previousitemtrue\fi \ifdim\ht\titrebox>0pt \ifpreviousitem {,\ }\fi\ajoutref\titrebox\global\previousitemtrue\fi \ifdim\ht\titrelbox>0pt \ifpreviousitem {,\ }\fi{\rm in}\ \ajoutref\titrelbox\global \previousitemtrue\fi \ifdim\ht\journalbox>0pt \ifpreviousitem {,\ }\fi \ajoutref\journalbox\global\previousitemtrue\fi \ifdim\ht\volumebox>0pt \ \ajoutref\volumebox\fi \ifdim\ht\anneebox>0pt \ {\rm(}\ajoutref\anneebox \rm)\fi \ifdim\ht\pagesbox>0pt \ifpreviousitem {,\ }\fi\ajoutref\pagesbox\fi\global\previousitemfalse}
\def\fabriquedivers {\ifdim\ht\auteurbox>0pt \ajoutref\auteurbox\global\previousitemtrue\fi \ifdim\ht\diversbox>0pt \ifpreviousitem {,\ }\fi\ajoutref\diversbox\fi}
\def\endbibref {\egroup \ifdim\ht\journalbox>0pt \fabriquearticle \else\ifdim\ht\editeurbox>0pt \fabriquelivre \else\ifdim\ht\diversbox>0pt \fabriquedivers \fi\fi\fi .\voidallboxes}
\let\styleauteur=\sc \let\styletitre=\it \let\styletitrelivre=\sl \let\stylejournal=\rm \let\stylevolume=\bf \let\styleannee=\rm \let\stylepages=\rm \let\stylecollection=\rm \let\styleediteur=\rm \let\styleanneelivre=\rm
\begin{biblio}{99}
\begin{bibref}{Beard2} \fabriquebox\auteurbox\styleauteur{J. T. B. Beard Jr} \fabriquebox\titrebox\styletitre{Perfect polynomials Revisited} \fabriquebox\journalbox\stylejournal{Publ. Math. Debrecen} \fabriquebox\volumebox\stylevolume{38/1-2} \pages 5-12 \annee 1991 \end{bibref}
\begin{bibref}{BeardU} \fabriquebox\auteurbox\styleauteur{J. T. B. Beard Jr} \fabriquebox\titrebox\styletitre{Unitary perfect polynomials over $GF(q)$} \fabriquebox\journalbox\stylejournal{Rend. Accad. Lincei} \fabriquebox\volumebox\stylevolume{62} \pages 417-422 \annee 1977 \end{bibref}
\begin{bibref}{BeardU2} \fabriquebox\auteurbox\styleauteur{J. T. B. Beard Jr, A. T. Bullock, M. S. Harbin} \fabriquebox\titrebox\styletitre{Infinitely many perfect and unitary perfect polynomials} \fabriquebox\journalbox\stylejournal{Rend. Accad. Lincei} \fabriquebox\volumebox\stylevolume{63} \pages 294-303 \annee 1977 \end{bibref}
\begin{bibref}{Beard} \fabriquebox\auteurbox\styleauteur{J. T. B. Beard Jr, J. R. Oconnell Jr, K. I. West} \fabriquebox\titrebox\styletitre{Perfect polynomials over $GF(q)$} \fabriquebox\journalbox\stylejournal{Rend. Accad. Lincei} \fabriquebox\volumebox\stylevolume{62} \pages 283-291 \annee 1977 \end{bibref}
\begin{bibref}{Canaday} \fabriquebox\auteurbox\styleauteur{E. F. Canaday} \fabriquebox\titrebox\styletitre{The sum of the divisors of a polynomial} \fabriquebox\journalbox\stylejournal{Duke Math. J.} \fabriquebox\volumebox\stylevolume{8} \pages 721-737 \annee 1941 \end{bibref}
\begin{bibref}{Gall-Rahav} \fabriquebox\auteurbox\styleauteur{L. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{On perfect polynomials over $\mathbb{F}_4$} \fabriquebox\journalbox\stylejournal{Port. Math. (N.S.)} \fabriquebox\volumebox\stylevolume{62(1)} \pages 109-122 \annee 2005 \end{bibref}
\begin{bibref}{Gall-Rahav3} \fabriquebox\auteurbox\styleauteur{L. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{Perfect polynomials over $\mathbb{F}_4$ with less than five prime factors} \fabriquebox\journalbox\stylejournal{Port. Math. (N.S.) } \fabriquebox\volumebox\stylevolume{64(1)} \pages 21-38 \annee 2007 \end{bibref}
\begin{bibref}{Gall-Rahav4} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{Odd perfect polynomials over $\mathbb{F}_2$} \fabriquebox\journalbox\stylejournal{J. Th\'eor. Nombres Bordeaux} \fabriquebox\volumebox\stylevolume{19} \pages 165-174 \annee 2007 \end{bibref}
\begin{bibref}{Gall-Rahav2} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{On splitting perfect polynomials over $\mathbb{F}_{p^p}$} \fabriquebox\journalbox\stylejournal{Preprint (2007)} \end{bibref}
\begin{bibref}{Gall-Rahav7} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{There is no odd perfect polynomial over $\mathbb{F}_2$ with four prime factors} \fabriquebox\journalbox\stylejournal{Port. Math. (N.S.)} \fabriquebox\volumebox\stylevolume{66(2)} \pages 131-145 \annee 2009 \end{bibref}
\begin{bibref}{Gall-Rahav5} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{Even perfect polynomials over $\mathbb{F}_2$ with four prime factors} \fabriquebox\journalbox\stylejournal{Intern. J. of Pure and Applied Math.} \fabriquebox\volumebox\stylevolume{52(2)} \pages 301-314 \annee 2009 \end{bibref}
\begin{bibref}{Gall-Rahav6} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{On splitting perfect polynomials over $\mathbb{F}_{p^2}$} \fabriquebox\journalbox\stylejournal{Port. Math. (N.S.) } \fabriquebox\volumebox\stylevolume{66(3)} \pages 261-273 \annee 2009 \end{bibref}
\begin{bibref}{Gall-Rahav8} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{All perfect polynomials with up to four prime factors over $\mathbb{F}_4$} \fabriquebox\journalbox\stylejournal{Math. Commun.} \fabriquebox\volumebox\stylevolume{14(1)} \pages 47-65 \annee 2009 \end{bibref}
\begin{bibref}{Gall-Rahav9} \fabriquebox\auteurbox\styleauteur{L. H. Gallardo, O. Rahavandrainy} \fabriquebox\titrebox\styletitre{On unitary splitting perfect polynomials over $\mathbb{F}_{p^2}$} \fabriquebox\journalbox\stylejournal{Preprint (2009)} \end{bibref}
\begin{bibref}{Goto} \fabriquebox\auteurbox\styleauteur{T. Goto} \fabriquebox\titrebox\styletitre{Upper bounds for unitary perfect numbers and unitary harmonic numbers} \fabriquebox\journalbox\stylejournal{Rocky Mountain J. Math.} \fabriquebox\volumebox\stylevolume{37(5)} \pages 1557-1576 \annee 2007 \end{bibref}
\begin{bibref}{Graham} \fabriquebox\auteurbox\styleauteur{S. W. Graham} \fabriquebox\titrebox\styletitre{Unitary perfect numbers with squarefree odd part} \fabriquebox\journalbox\stylejournal{Fibonacci Quart.} \fabriquebox\volumebox\stylevolume{26(4)} \pages 312-317 \annee 1989 \end{bibref}
\begin{bibref}{Rudolf} \fabriquebox\auteurbox\styleauteur{R. Lidl, H. Niederreiter} \fabriquebox\titrelbox\styletitrelivre{Finite Fields, Encyclopedia of Mathematics and its applications} \fabriquebox\editeurbox\styleediteur{Cambridge University Press} \anneelivre 1983 (Reprinted 1987) \end{bibref}
\begin{bibref}{Swan} \fabriquebox\auteurbox\styleauteur{R. G. Swan} \fabriquebox\titrebox\styletitre{Factorization of polynomials over finite fields} \fabriquebox\journalbox\stylejournal{Pacific J. Math.} \fabriquebox\volumebox\stylevolume{12} \pages 1099-1106 \annee 1962 \end{bibref}
\begin{bibref}{Wall} \fabriquebox\auteurbox\styleauteur{C. R. Wall} \fabriquebox\titrebox\styletitre{New unitary perfect numbers have at least nine odd components} \fabriquebox\journalbox\stylejournal{Fibonacci Quart.} \fabriquebox\volumebox\stylevolume{26(4)} \pages 312-317 \annee 1988 \end{bibref}
\end{biblio}
\end{document} | arXiv |
Boundary value problems of elliptic and parabolic type with boundary data of negative regularity
Felix Hummel, ORCID: orcid.org/0000-0002-2374-7030
Journal of Evolution Equations, volume 21, pages 1945–2007 (2021)
We study elliptic and parabolic boundary value problems in spaces of mixed scales with mixed smoothness on the half-space. The aim is to solve boundary value problems with boundary data of negative regularity and to describe the singularities of solutions at the boundary. To this end, we derive mapping properties of Poisson operators in mixed scales with mixed smoothness. We also derive \(\mathcal {R}\)-sectoriality results for homogeneous boundary data in the case that the smoothness in normal direction is not too large.
Introduction

In recent years, there have been some efforts to generalize classical results on the bounded \(\mathcal {H}^\infty \)-calculus ([7, 8, 13, 14]) and maximal regularity ([8, 9, 11, 12, 21]) of elliptic and parabolic equations to cases in which rougher boundary data can be considered. The main tool used to derive these generalizations is spatial weights, especially power weights of the form
$$\begin{aligned} w_r^{\partial \mathcal {O}}(x):={\text {dist}}(x,\partial \mathcal {O})^r \quad (x\in \mathcal {O}), \end{aligned}$$
which measure the distance to the boundary of the domain \(\mathcal {O}\subset \mathbb {R}^n\). Including weights which fall outside the \(A_p\)-range, i.e., weights with \(r\notin (-1,p-1)\), provides a huge flexibility concerning the smoothness of the boundary data which can be considered. We refer the reader to [32] in which the bounded \(\mathcal {H}^\infty \)-calculus for the shifted Dirichlet Laplacian in \(L_p(\mathcal {O},w_r^{\partial \mathcal {O}})\) with \(r\in (-1,2p-1){\setminus }\{p-1\}\) has been obtained and applications to equations with rough boundary data are given. One even obtains more flexibility if one studies boundary value problems in weighted Besov and Triebel–Lizorkin spaces. Maximal regularity results for the heat equation with inhomogeneous boundary data have been obtained in [30]. In [22], similar results were shown for general elliptic and parabolic boundary value problems.
The elliptic and parabolic equations we are interested in are of the form
$$\begin{aligned} \lambda u -A(D)u&=f\quad \;\text {in }\quad \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad \mathbb {R}^{n-1}\quad (j=1,\ldots ,m), \end{aligned}$$
$$\begin{aligned} \partial _t u -A(D)u&=f\quad \;\text {in }\quad \mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad \mathbb {R}\times \mathbb {R}^{n-1}\quad (j=1,\ldots ,m), \end{aligned}$$
where \(A,B_1,\ldots ,B_m\) is a homogeneous constant-coefficient parameter-elliptic boundary system, f is a given inhomogeneity and the \(g_j\) \((j=1,\ldots ,m)\) are given boundary data. Of course, f and the \(g_j\) in (1-2) may depend on time. We will also study the case in which (1-2) is supplemented by initial conditions, i.e.,
$$\begin{aligned} \partial _t u -A(D)u&=f\quad \;\text {in }\quad (0,T]\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\quad (0,T]\times \mathbb {R}^{n-1}\quad (j=1,\ldots ,m),\nonumber \\ u(0,\,\cdot \,)&=u_0 \end{aligned}$$
for some \(T\in \mathbb {R}_+\). The focus will lie on the systematic treatment of boundary conditions \(g_j\) which are only assumed to be tempered distributions. In particular, boundary data of negative regularity will be included. However, we still have some restrictions on the smoothness in time of the boundary data for (1-3).
One reason for the interest in the treatment of rougher boundary data is that they naturally appear in problems with boundary noise. The fact that white noise terms have negative pathwise regularity (see for example [4, 16, 47]) was one of the main motivations for this work. It was already observed in [6] that even in one dimension solutions to equations with Gaussian boundary noise only have negative regularity in an unweighted setting. By introducing weights, this issue was resolved for example in [1]. We also refer to [5] in which the singularities at the boundary of solutions of Poisson and heat equation with different kinds of noise are analyzed.
One drawback of the methods in [1, 5, 6] is that solutions are constructed in a space which is too large for traces to exist, i.e., the operators
$$\begin{aligned} {\text {tr}}_{\partial \mathcal {O}}B_j(D)\quad (j=1,\ldots ,m) \end{aligned}$$
are not well defined as operators from the space in which the solution is constructed to the space of boundary data. This problem is avoided by using a mild solution concept, which is a valid approach in the classical setting and therefore, it seems reasonable to accept mild solutions as good enough, even though \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\) does not make sense on its own.
In this paper, we propose a point of view which helps us to give a meaning to \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\) in a classical sense. We will exploit that solutions to (1-1), (1-2) and (1-3) are very smooth in normal directions so that taking traces will easily be possible, even if the boundary data is just given by tempered distributions. This can be seen by studying these equations in spaces of the form \(\mathscr {B}^k(\mathbb {R}_{+},\mathscr {A}^s(\mathbb {R}^{n-1}))\), where \(\mathscr {A}\) and \(\mathscr {B}\) denote certain scales of function spaces with smoothness parameters s and k, respectively. The parameter k corresponds to the smoothness in normal direction and will be taken large enough so that we can take traces and the parameter s corresponds to smoothness in tangential directions and will be taken small enough so that \(\mathscr {A}^s\) contains the desired boundary data. This way, we will not only be able to give a meaning to \({\text {tr}}_{\partial \mathcal {O}}B_j(D)u\), but we will also obtain tools which help us to analyze the singularities of solutions at the boundary. This supplements the quantitative analysis in [22, 30, 32].
The idea to use spaces with mixed smoothness is quite essential in this paper, even if one refrains from using mixed scales. We refer to [42, Chapter 2] for an introduction to spaces with dominating mixed smoothness. It seems like these spaces have not been used in the theory of partial differential equations so far. Nonetheless, we should mention that they are frequently studied in the theory of function spaces and have various applications. In particular, they are a classical tool in approximation theory in a certain parameter range, see for example [46, Chapter 11].
This paper is structured in the following way:
Section 2 briefly introduces the tools and concepts we use throughout the paper. This includes some notions and results from the geometry of Banach spaces, \(\mathcal {R}\)-boundedness and weighted function spaces.
In Sect. 3, we study pseudo-differential operators in mixed scales with mixed smoothness. This will be important for the treatment of Poisson operators, as we will view them as functions in normal direction with values in the space of pseudo-differential operators of certain order in tangential directions.
Section 4 is the central part of this paper and the basis for the results in the subsequent sections. Therein, we derive various mapping properties of Poisson operators with values in spaces of mixed scales and mixed smoothness.
In Sect. 5, we study Eq. (1-1) in spaces of mixed scales and mixed smoothness with homogeneous boundary data, i.e., with \(g_j=0\). We derive \(\mathcal {R}\)-sectoriality of the corresponding operator under the assumption that the smoothness in normal direction is not too high. As a consequence, we also obtain maximal regularity for (1-3) with \(g_j=0\) in the UMD case.
Finally, we apply our techniques to Eqs. (1-1), (1-2) and (1-3) in Sect. 6. We will be able to treat (1-1) and (1-2) for arbitrary regularity in space and time. However, for the initial boundary value problem (1-3) we still have some restrictions concerning the regularity in time of the boundary data.
Comments on localizations
We should emphasize that we do not address questions of localization or perturbation in this work. Thus, we do not yet study what kind of variable coefficients or lower-order perturbations of the operators we can allow. We also do not yet study how our results can be transferred to more general geometries than just the half-space. Nonetheless, we want to give some ideas on how one can proceed.
The usual approach to transfer results for boundary value problems from the model problem on the half-space with constant coefficients to more general domains with compact smooth boundary and operators with non-constant coefficients is quite technical but standard. One takes a cover of the boundary which is fine enough such that in each chart the equation almost looks like the model problem with just a small perturbation on the coefficients and the geometry. In order to formally treat the local problem in a chart as the model problem, one has to find suitable extensions of the coefficients and the geometry such that the parameter-ellipticity is preserved and such that the coefficients are constant up to a small perturbation. One also carries out similar steps in the interior of the domain, where the equation locally looks like an elliptic or parabolic equation in \(\mathbb {R}^n\). The essential step is then to derive perturbation results which justify that these small perturbations do not affect the property which one wants to transfer to the more general situation. Such a localization procedure has been carried out in full detail in [36]. We also refer the reader to [8, Chapter 8].
The localization approach also seems to be reasonable in our situation. However, there is an additional difficulty: As described above, we work in spaces of the form \(\mathscr {B}^k(\mathbb {R}_{+},\mathscr {A}^s(\mathbb {R}^{n-1}))\) which splits the half-space in tangential and normal directions. Since this splitting uses the geometric structure of the half-space, one might wonder what the right generalization of these spaces to a smooth bounded domain would be. In order to answer this question, we think that the notion of a collar of a manifold with boundary should be useful. More precisely, due to Milnor's collar neighborhood theorem (see for example [38, Corollary 3.5]) there exists an open neighborhood U of the boundary \(\partial M\) of a smooth manifold M which is diffeomorphic to \(\partial M\times [0,1)\). This neighborhood U is a so-called collar neighborhood. On \(\partial M\times [0,1)\) it is straightforward how to generalize our spaces with mixed smoothness: One could just take \(\mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))\). Now one could define the space
$$\begin{aligned} \mathscr {B}^k(\mathscr {A}^s(U)):=\big \{u:U\rightarrow \mathbb {C}\;\vert \; u\circ \Phi ^{-1}\in \mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))\big \}, \end{aligned}$$
where \(\Phi :U\rightarrow \partial M\times [0,1)\) denotes the diffeomorphism provided by the collar neighborhood theorem. It seems natural to endow \(\mathscr {B}^k(\mathscr {A}^s(U))\) with the norm
$$\begin{aligned} \Vert u\Vert _{\mathscr {B}^k(\mathscr {A}^s(U))}:=\Vert u\circ \Phi ^{-1} \Vert _{\mathscr {B}^k([0,1),\mathscr {A}^s(\partial M))}. \end{aligned}$$
This definition allows us to give a meaning to the splitting in normal and tangential directions for a neighborhood of the boundary of general domains. It is less clear how to extend this splitting to the interior of the domain. But fortunately, this is also not important in our analysis. Indeed, the solution operators for inhomogeneous boundary data have a strong smoothing effect so that solutions are arbitrarily smooth in the interior of the domain with continuous dependence on the boundary data. Therefore, in the case of smooth coefficients one may work with smooth functions in the interior. To this end, we can take another open set \(V\subset {\text {int}} M\), where \({\text {int}} M\) denotes the interior of M, such that \(M=U \cup V\) and \(\Phi (U\cap V)\subset \partial M\times [\tfrac{1}{2},1)\). Moreover, we take \(\phi ,\psi :C^{\infty }(M)\) such that \({\text {supp}}\phi \subset U\), \({\text {supp}}\psi \subset V\) and \(\phi +\psi \equiv 1\). Finally, we think that on bounded domains the right spaces should be given by
$$\begin{aligned} \{u:M\rightarrow \mathbb {C}\;\vert \; \phi \cdot u\in \mathscr {B}^k(\mathscr {A}^s(U)),\; \psi \cdot u\in C^{\infty }(V) \}, \end{aligned}$$
since they preserve the splitting in normal and tangential directions close to the boundary and since they use the smoothing effect of the solution operators where the splitting cannot be preserved anymore.
Comparison to other works
There are several other works which study boundary value problems with rough boundary data. The approach which seems to be able to treat the most boundary data is the one by Lions and Magenes, see [33,34,35]. It also allows for arbitrary regularity at the boundary. One of the main points is that the trace operator is extended to the space \(D_A^{-r}\) (see [33, Theorem 6.5]) where it maps into a boundary space with negative regularity. The space \(D_A^{-r}\) contains those distributions \(u\in H^{-r}(\Omega )\) such that Au is in the dual of the space
$$\begin{aligned} \Xi ^{r+2m}:=\big \{u\;\vert \;{\text {dist}}(\,\cdot \,,\partial \Omega )^{|\alpha |}\partial ^{\alpha }u\in L_2(\Omega ),\;|\alpha |\le r+2m\bigg \}, \end{aligned}$$
i.e., \(\Xi ^{r+2m}\) contains functions whose derivatives may have singularities at the boundary with a certain order. This extension of the trace operator was generalized to the scale of Hörmander spaces in [3], where the authors used suitable interpolation techniques. One advantage of our approach compared to [3, 33,34,35] is that we can give a detailed quantitative analysis of the smoothness and singularities of solutions at the boundary. For example, we show in Theorem 4.16 that solutions of (1-1) are arbitrarily smooth in normal direction if the smoothness in tangential direction is chosen low enough. Moreover, we describe the singularities at the boundary if the smoothness in tangential direction is too high.
Another technique to treat rougher boundary data is to systematically study boundary value problems in weighted spaces. This has for example been carried out in [22, 30, 32]. By using power weights in the \(A_{\infty }\) range, one can push the regularity on the boundary down to almost 0. However, if one works with \(A_{\infty }\) weights, then many Fourier analytic tools are not available anymore in the \(L_p\) scale. The situation is better in Besov and Triebel–Lizorkin spaces, where Fourier multiplier techniques can still be applied. This has been used for second-order operators with Dirichlet boundary conditions in [30] and for general parameter-elliptic and parabolic boundary value problems in [22]. In both references, maximal regularity with inhomogeneous boundary data has been derived. Lindemulder and Veraar [32] derive a bounded \(\mathcal {H}^\infty \)-calculus for the Dirichlet Laplacian in weighted \(L_p\)-spaces even for some weights which fall outside the \(A_p\) range. The main tool therein to replace Fourier multiplier techniques are variants of Hardy's inequality. The results derived in [22, 30, 32] are stronger than the ones we derive here in the sense that we do not derive maximal regularity with inhomogeneous boundary data or a bounded \(\mathcal {H}^\infty \)-calculus. As in our work, the singularities at the boundary are described by the strength of the weights one has to introduce. However, we can treat much more boundary data since [22, 30, 32] are restricted to positive regularity on the boundary.
There are also works dealing with rough boundary data for nonlinear equations such as the Navier–Stokes equation, see for example [2, 19] and references therein. The former reference uses the notion of very weak solutions as well as semigroup and interpolation–extrapolation methods. The latter reference studies the problem in the context of the Boutet de Monvel calculus. Our methods would still have to be extended to nonlinear problems. However, in both of the cited works there are restrictions on the regularity of the boundary data which can be considered.
Notations and assumptions
We write \(\mathbb {N}=\{1,2,3,\ldots \}\) for the natural numbers starting from 1 and \(\mathbb {N}_0=\{0,1,2,\ldots \}\) for the natural numbers starting from 0. Throughout the paper, we take \(n\in \mathbb {N}\) to be the space dimension and write
$$\begin{aligned} \mathbb {R}^n_+:=\{x=(x_1,\ldots ,x_n)\in \mathbb {R}^n: x_n>0\}. \end{aligned}$$
If \(n=1\), we also just write \(\mathbb {R}_+:=\mathbb {R}^1_+\). Given a real number \(x\in \mathbb {R}\), we write
$$\begin{aligned} x_+:=[x]_+:=\max \{0,x\}. \end{aligned}$$
We will frequently use the notation with the brackets for sums or differences of real numbers. Oftentimes, we split \(x=(x',x_n)\in \mathbb {R}^{n-1}\times \mathbb {R}\) or in the Fourier image \(\xi =(\xi ',\xi _n)\) where \(x',\xi '\in \mathbb {R}^{n-1}\) refer to the directions tangential to the boundary \(\mathbb {R}^{n-1}=\partial \mathbb {R}^{n}_+\) and \(x_n,\xi _n\in \mathbb {R}\) refer to the normal directions. Given \(x\in \mathbb {C}^n\) or a multi-index \(\alpha \in \mathbb {N}_0^n\), we write
$$\begin{aligned} |x|:=\bigg (\sum _{j=1}^n|x_j|^2\bigg )^{1/2}\quad \text {or}\quad |\alpha |=\sum _{j=1}^n|\alpha _j| \end{aligned}$$
for the Euclidean length of x or the \(\ell ^1\)-norm of the multi-index \(\alpha \), respectively. Even though this notation is ambiguous, it is convention in the literature and we therefore stick to it. We write
$$\begin{aligned} xy:=x\cdot y:=\sum _{j=1}^n x_j\overline{y}_j\quad (x,y\in \mathbb {C}^n) \end{aligned}$$
for the usual scalar product. The Bessel potential will be denoted by
$$\begin{aligned} \langle x\rangle :=(1+|x|^2)^{1/2}\quad (x\in \mathbb {C}^n). \end{aligned}$$
Given an angle \(\phi \in (0,\pi ]\), we write
$$\begin{aligned} \Sigma _{\phi }:=\{z\in \mathbb {C}:|{\text {arg}} z|<\phi \}. \end{aligned}$$
If M is a set, then we use the notation
$$\begin{aligned} {\text {pr}}_j:M^n\rightarrow M,\; (a_1,\ldots ,a_n)\rightarrow a_j\quad (j=1,\ldots ,n) \end{aligned}$$
for the canonical projection of \(M^n\) to the j-th component.
Throughout the paper, E will denote a complex Banach space on which we impose additional conditions at certain places. The topological dual of a Banach space \(E_0\) will be denoted by \(E_0'\). By \(\mathscr {S}(\mathbb {R}^n;E)\) and \(\mathscr {S}'(\mathbb {R}^n;E)\) we denote the spaces of E-valued Schwartz functions and E-valued tempered distributions, respectively. Given a domain \(\mathcal {O}\subset \mathbb {R}^n\), we write \(\mathscr {D}(\mathcal {O};E)\) and \(\mathscr {D}'(\mathcal {O};E)\) for the spaces of E-valued test functions and E-valued distributions, respectively. If \(E=\mathbb {C}\) in some function space, then we will omit it in the notation. On \(\mathscr {S}(\mathbb {R}^n;E)\), we define the Fourier transform
$$\begin{aligned} (\mathscr {F}f)(\xi ):=\frac{1}{(2\pi )^{n/2}}\int _{\mathbb {R}^n} e^{-ix\xi } f(x)\,\mathrm{d}x\quad (f\in \mathscr {S}(\mathbb {R}^n;E)). \end{aligned}$$
As usual, we extend it to \(\mathscr {S}'(\mathbb {R}^n;E)\) by \([\mathscr {F}u](f):=u(\mathscr {F}f)\) for \(u\in \mathscr {S}'(\mathbb {R}^n;E)\) and \(f\in \mathscr {S}(\mathbb {R}^n)\). Sometimes, we also use the Fourier transform \(\mathscr {F}'\) which only acts on the tangential directions, i.e.,
$$\begin{aligned} (\mathscr {F}'f)(\xi ',x_n):=\frac{1}{(2\pi )^{(n-1)/2}}\int _{\mathbb {R}^n} e^{-ix'\xi '} f(x',x_n)\,\mathrm{d}x'\quad (f\in \mathscr {S}(\mathbb {R}^n;E)). \end{aligned}$$
By \(\sigma (T)\) and \(\rho (T)\), we denote the spectrum and the resolvent set, respectively, of a linear operator \(T:E\supset D(T)\rightarrow E\) defined on the domain D(T). We write \(\mathcal {B}(E_0,E_1)\) for the set of all bounded linear operators from the Banach space \(E_0\) to the Banach space \(E_1\) and we set \(\mathcal {B}(E):=\mathcal {B}(E,E)\).
If \(f,g:M\rightarrow \mathbb {R}\) map some parameter set M to the reals, then we occasionally write \(f\lesssim g\) if there is a constant \(C>0\) such that \(f(x)\le C g(x)\) for all \(x\in M\). If \(f\lesssim g\) and \(g\lesssim f\), we also write \(f\eqsim g\). We mainly use this notation in longer computations.
Now we formulate our assumptions on the operators \(A(D),B_1(D),\ldots ,B_m(D)\): Let
$$\begin{aligned} A(D)=\sum _{|\alpha |=2m}a_{\alpha }D^{\alpha },\quad B_j(D)=\sum _{|\beta |=m_j} b^j_{\beta }D^{\beta }\quad (j=1,\ldots ,m) \end{aligned}$$
for some \(m,m_1,\ldots ,m_m\in \mathbb {N}\) with \(m_j<2m\) \((j=1,\ldots ,m)\) and \(a_{\alpha },b^j_{\beta }\in \mathcal {B}(E)\).
Assumption 1.1
(Ellipticity and Lopatinskii–Shapiro condition) There is a \(\phi '\in (0,\pi ]\) such that
\(\Sigma _{\phi '}\subset \rho (A(\xi ))\) for all \(\xi \in \mathbb {R}^n{\setminus }\{0\}\).
The equation
$$\begin{aligned} \lambda u(x_n)-A(\xi ',D_n)u(x_n)&=0\quad \,\,(x_n>0),\\ B_j(\xi ',D_n)u(0)&=g_j\quad (j=1,\ldots ,m) \end{aligned}$$
has a unique continuous solution u with \(\lim _{x\rightarrow \infty } u(x)=0\) for all \((\lambda ,\xi ')\in \Sigma _{\phi '}\times \mathbb {R}^{n-1}\) and all \(g=(g_1,\ldots ,g_m)\in E^m\).
We take \(\phi \in (0,\phi ')\). If time-dependent equations are considered, we assume that \(\phi >\pi /2\).
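A classical example satisfying Assumption 1.1, spelled out here only for orientation and not taken from the original assumptions, is the Dirichlet Laplacian: take \(m=1\), \(A(D)=\Delta \) and \(B_1(D)=1\), so that \(A(\xi )=-|\xi |^2\) and \(\Sigma _{\phi '}\subset \rho (A(\xi ))=\mathbb {C}{\setminus }\{-|\xi |^2\}\) for every \(\phi '\in (0,\pi ]\). The boundary symbol problem then reads

$$\begin{aligned} \lambda u(x_n)+|\xi '|^2u(x_n)-u''(x_n)&=0\quad (x_n>0),\\ u(0)&=g_1. \end{aligned}$$

For \((\lambda ,\xi ')\in \Sigma _{\phi '}\times \mathbb {R}^{n-1}\), the number \(\lambda +|\xi '|^2\) lies in \(\mathbb {C}{\setminus }(-\infty ,0]\), so the decaying solutions of the differential equation are spanned by \(x_n\mapsto e^{-(\lambda +|\xi '|^2)^{1/2}x_n}\) (principal branch of the square root), and the boundary condition \(u(0)=g_1\) determines the solution uniquely. Hence, the Lopatinskii–Shapiro condition holds for every angle \(\phi '\in (0,\pi ]\).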
Assumption 1.1 will be a global assumption which we assume to hold true without explicitly mentioning this every time. As we also consider mixed scales in this paper, there will be a lot of different choices of the precise spaces. Moreover, for the Bessel potential scale we will need different assumptions on the weights and the Banach space E than for the Besov scale, Triebel–Lizorkin scale, or their dual scales. Thus, it will be convenient to introduce a notation which covers all these different cases. Some of the notation and notions in the following assumption will be introduced later in Sect. 2. For the moment, we just mention that H denotes the Bessel potential scale, B the Besov scale, \(\mathcal {B}\) its dual scale, F the Triebel–Lizorkin scale and \(\mathcal {F}\) its dual scale.
Let E be a Banach space, \(s_0,s_1,s_2\in \mathbb {R}\), \(p_0,p_1,p_2\in [1,\infty )\) and \(q_0,q_1,q_2\in [1,\infty ]\). Let further \(w_0,w_1,w_2\) be weights and \(I_{x_n},J_{t}\subset \mathbb {R}\) intervals. In the following, \(\bullet \) is a placeholder for any suitable choice of parameters. Moreover, by writing \(J_{t}\), \(I_{x_n}\) and \(\mathbb {R}^{n-1}_{x'}\) we indicate with respect to which variable the spaces should be understood. Here, t denotes the time, \(x_n\) the normal direction and \(x'\) the tangential directions.
We take
$$\begin{aligned} \mathscr {A}^\bullet \in \{H^\bullet _{p_0}(\mathbb {R}^{n-1}_{x'},&w_0;E), B^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),F^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),\\&\mathcal {B}^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E),\mathcal {F}^\bullet _{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E)\}. \end{aligned}$$
If \(\mathscr {A}^\bullet \) belongs to the Bessel potential scale, we assume that \(p_0\in (1,\infty )\), that E is a UMD space and that \(w_0\) is an \(A_p(\mathbb {R}^{n-1})\) weight. If \(\mathscr {A}^\bullet \) belongs to the Besov or Triebel–Lizorkin scale, we assume that \(w_0\) is an \(A_{\infty }(\mathbb {R}^{n-1})\) weight. If \(\mathscr {A}^\bullet \) belongs to the dual scale of Besov or Triebel–Lizorkin scale, we assume that \(w_0\) is an \([A_{\infty }(\mathbb {R}^{n-1})]_p'\) weight, \(p_0,q_0\in (1,\infty )\) and that E is a UMD space.
$$\begin{aligned}&\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )\in \{H^\bullet _{p_1}(I_{x_n},w_1;\mathscr {A}^\bullet ), B^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),F^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),\\&\qquad \mathcal {B}^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet ),\mathcal {F}^\bullet _{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^\bullet )\}. \end{aligned}$$
We impose conditions on \(w_1,p_1,q_1\) and E which are analogous to the ones for \(w_0,p_0,q_0\) and E in part (a).
$$\begin{aligned}&\mathscr {C}^{\bullet }(J_t;\mathscr {A}^\bullet )\in \{H^\bullet _{p_2}(J_t,w_2;\mathscr {A}^\bullet ), B^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),F^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),\\&\qquad \mathcal {B}^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet ),\mathcal {F}^\bullet _{p_2,q_2}(J_t,w_2;\mathscr {A}^\bullet )\}. \end{aligned}$$
$$\begin{aligned}&\mathscr {C}^{\bullet }(J_{t};\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet ))\in \{H^\bullet _{p_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )), B^\bullet _{p_2,q_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )),\\&\qquad F^\bullet _{p_2,q_2}(J_{t},w_2;\mathscr {B}^{\bullet }(I_{x_n};\mathscr {A}^\bullet )), \}. \end{aligned}$$
Most of the time, we just write \(\mathscr {A}^s\), \(\mathscr {B}^k(\mathscr {A}^s)\) and \(\mathscr {C}^l(\mathscr {B}^k(\mathscr {A}^s))\) instead of \(\mathscr {A}^s_{p_0,q_0}(\mathbb {R}^{n-1}_{x'},w_0;E)\), \(\mathscr {B}^k_{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^s(\mathbb {R}^{n-1}_{x'},w_0;E))\) and \(\mathscr {C}^l_{p_2,q_2}(J_t,w_2;\mathscr {B}^k_{p_1,q_1}(I_{x_n},w_1;\mathscr {A}^s(\mathbb {R}^{n-1}_{x'},w_0;E))\). We mainly do this in order to keep notations shorter. Moreover, most of the time we only work with the smoothness parameter so that adding the other parameters to the notation would be distracting. However, at some places we will still add some of the other parameters if more clarity is needed.
Also Assumption 1.2 will be global and we use this notation throughout the paper.
Assumption 1.2 is formulated in a way such that we can always apply Mikhlin's theorem, Theorem 2.15, and its iterated versions Proposition 3.7 and Proposition 3.8. If E has to satisfy Pisier's property \((\alpha )\) for some results, we will explicitly mention it.
Note that every \(f\in \mathscr {S}'(\mathbb {R}^{n-1})\) is contained in one of the spaces \(\mathscr {A}^{\bullet }\) with certain parameters, see for example [26, Proposition 1].
Some notions from the geometry of Banach spaces
If one wants to transfer theorems from a scalar-valued to a vector-valued situation, then this is oftentimes only possible if one imposes additional geometric assumptions on the Banach space. And since the iterated spaces we introduced in Assumption 1.2 are vector-valued even if we take \(E=\mathbb {R}\) or \(E=\mathbb {C}\), it should not come as a surprise that we have to introduce some of these geometric notions. We refer the reader to [23, 24] for an extensive treatment of the notions in this subsection.
UMD spaces
The importance of UMD spaces lies in the fact that Mikhlin's Fourier multiplier theorem has only been generalized for operator-valued symbols if the underlying Banach spaces are UMD spaces. Therefore, if one wants to work with Fourier multipliers on vector-valued \(L_p\)-spaces, one is forced to impose this geometric condition.
A Banach space E is called UMD space if for all \(p\in (1,\infty )\) there is a constant \(C>0\) such that for all probability spaces \((\Omega ,\mathcal {F},\mathbb {P})\), all \(N\in \mathbb {N}\), all \(\varepsilon _1,\ldots ,\varepsilon _N\in \mathbb {C}\) with \(|\varepsilon _1|=\ldots =|\varepsilon _N|=1\), all filtrations \((\mathcal {F}_k)_{k=0}^N\) and all martingales \((f_k)_{k=0}^N\) in \(L_p(\Omega ;E)\) it holds that
$$\begin{aligned} \bigg \Vert \sum _{k=1}^N \varepsilon _k (f_k-f_{k-1})\bigg \Vert _{L_p(\Omega ;E)}\le C \bigg \Vert \sum _{k=1}^N f_k-f_{k-1}\bigg \Vert _{L_p(\Omega ;E)}. \end{aligned}$$
This is equivalent to E being a Banach space of class \(\mathcal {HT}\), which is defined by the boundedness of the Hilbert transform on \(L_p(\mathbb {R};E)\). UMD spaces are always reflexive. Some important examples of UMD spaces are:
Hilbert spaces, in particular the scalar fields \(\mathbb {R},\mathbb {C}\),
the space \(L_p(S;E)\) for \(p\in (1,\infty )\), a \(\sigma \)-finite measure space \((S,\mathcal {A},\mu )\) and a UMD space E,
the classical function spaces such as Bessel potential spaces \(H^{s}_p\), Besov spaces \(B^{s}_{p,q}\) and Triebel–Lizorkin spaces \(F^{s}_{p,q}\) in the reflexive range as well as their E-valued versions if E is a UMD space.
Cotype
In this work, Banach spaces satisfying a finite cotype assumption could be considered as merely a technical notion that is needed to derive Proposition 4.11 which is a sharper version of Proposition 4.9. The latter does not need a finite cotype assumption, while we show that it seems to be necessary to derive the former in Proposition 4.13. The main reason why we need finite cotype assumptions is that they allow us to use a version of Kahane's contraction principle with function coefficients, see Proposition 2.1.
Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a probability space. A sequence of random variables \((\varepsilon _k)_{k\in \mathbb {N}}\) is called Rademacher sequence if it is an i.i.d. sequence with \(\mathbb {P}(\varepsilon _k=1)=\mathbb {P}(\varepsilon _k=-1)=\frac{1}{2}\) for \(k\in \mathbb {N}\). A Banach space E is said to have cotype \(q\in [2,\infty ]\) if there is a constant \(C>0\) such that for all choices of \(N\in \mathbb {N}\) and \(x_1,\ldots ,x_N\in E\) the estimate
$$\begin{aligned} \bigg (\sum _{k=1}^N \Vert x_k\Vert ^q\bigg )^{1/q}\le C\bigg (\mathbb {E}\big \Vert \sum _{k=1}^N\varepsilon _kx_k\big \Vert ^q\bigg )^{1/q} \end{aligned}$$
holds with the usual modification for \(q=\infty \). We want to remark the following
Every Banach space has cotype \(\infty \).
If a Banach space has cotype \(q\in [2,\infty )\), then it also has cotype \(\widetilde{q}\in [q,\infty ]\).
No nontrivial Banach space can have cotype \(q\in [1,2)\) since even the scalar fields \(\mathbb {R}\) and \(\mathbb {C}\) do not satisfy this; a short computation illustrating this is given after this list.
If the Banach space E has cotype \(q_E\), then \(L_p(S;E)\) has cotype \(\max \{p,q_E\}\) for every measure space \((S,\mathcal {A},\mu )\).
If the Banach space E has cotype \(q_E\), then \(H^{s}_p(\mathbb {R}^n;E)\) has cotype \(\max \{p,q_E\}\) and \(B^{s}_{p,q}(\mathbb {R}^n;E)\) and \(F^{s}_{p,q}(\mathbb {R}^n;E)\) have cotype \(\max \{p,q,q_E\}\). The same also holds for the weighted variants we introduce later.
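To illustrate the third item in the list above (this short computation is added for illustration and is not taken from the original text), let \(E=\mathbb {C}\) and \(x_1=\cdots =x_N=1\). Orthonormality of the Rademacher variables in \(L_2(\Omega )\) gives

$$\begin{aligned} \bigg (\mathbb {E}\Big \vert \sum _{k=1}^N\varepsilon _k\Big \vert ^2\bigg )^{1/2}=N^{1/2}, \end{aligned}$$

while \(\big (\sum _{k=1}^N|x_k|^q\big )^{1/q}=N^{1/q}\). By the Kahane–Khintchine inequalities, a cotype \(q\) estimate would therefore force \(N^{1/q}\le CN^{1/2}\) for all \(N\in \mathbb {N}\), which is only possible if \(q\ge 2\).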
Pisier's property \((\alpha )\)
Finally, we also need Pisier's property \((\alpha )\) at some places in this paper. This condition is usually needed if one wants to derive \(\mathcal {R}\)-boundedness from Mikhlin's multiplier theorem. If one has a set of \(\mathcal {R}\)-bounded operator-valued symbols, then one needs Pisier's property \((\alpha )\) in order to obtain the \(\mathcal {R}\)-boundedness of the resulting operator family.
A Banach space E has Pisier's property \((\alpha )\) if Kahane's contraction principle also holds for double random sums, i.e., if for two Rademacher sequences \((\varepsilon '_i)_{i\in \mathbb {N}}\), \((\varepsilon ''_j)_{j\in \mathbb {N}}\) on the probability spaces \((\Omega ',\mathcal {F}',\mathbb {P}')\) and \((\Omega '',\mathcal {F}'',\mathbb {P}'')\), respectively, there is a constant \(C>0\) such that for all \(M,N\in \mathbb {N}\), all \((a_{ij})_{1\le i\le M,1\le j\le N}\subset \mathbb {C}\) with \(|a_{ij}|\le 1\) and all \((x_{ij})_{1\le i\le M,1\le j\le N}\subset E\) the estimate
$$\begin{aligned} \mathbb {E}_{\mathbb {P'}}\mathbb {E}_{\mathbb {P''}}\bigg \Vert \sum _{i=1}^M\sum _{j=1}^N a_{ij}\varepsilon _{i}\varepsilon _{j}x_{ij} \bigg \Vert ^2\le C^2 \mathbb {E}_{\mathbb {P'}}\mathbb {E}_{\mathbb {P''}}\bigg \Vert \sum _{i=1}^M\sum _{j=1}^N \varepsilon _{i}\varepsilon _{j}x_{ij} \bigg \Vert ^2 \end{aligned}$$
holds. Even though Pisier's property \((\alpha )\) is independent of the UMD property, the examples of spaces with Pisier's property \((\alpha )\) we have in mind are similar:
the space \(L_p(S;E)\) for \(p\in [1,\infty )\), a measure space \((S,\mathcal {A},\mu )\) and a Banach space E with Pisier's property \((\alpha )\),
the classical function spaces such as Bessel potential spaces \(H^{s}_p\), Besov spaces \(B^{s}_{p,q}\) and Triebel–Lizorkin spaces \(F^{s}_{p,q}\) in the reflexive range as well as their E-valued versions if E has Pisier's property \((\alpha )\).
\(\mathcal {R}\)-bounded Operator Families
We refer the reader to [8, 24] for introductions to \(\mathcal {R}\)-bounded operator families. The notion of \(\mathcal {R}\)-boundedness is frequently needed if one works with vector-valued function spaces. Like the UMD property, it is essential for vector-valued generalizations of Mikhlin's multiplier theorem. But perhaps more importantly, it can be used to derive a necessary and sufficient condition for a closed linear operator \(A:E\supset D(A)\rightarrow E\) on the UMD space E to have the property of maximal regularity. This is the case if and only if A is \(\mathcal {R}\)-sectorial, i.e., if and only if the set
$$\begin{aligned} \{\lambda (\lambda -A)^{-1}:\lambda \in \mathbb {C},\,|\arg \lambda | < \phi \} \end{aligned}$$
for some \(\phi >\pi /2\) is \(\mathcal {R}\)–bounded, see [49, Theorem 4.2]. Here, an operator A is said to have the property of maximal regularity on [0, T), \(0<T<\infty \), if the mapping
$$\begin{aligned} W^1_p([0,T);E)\cap L_p([0,T);D(A))\rightarrow L_p([0,T);E)\times I_p(A),\;u\mapsto \begin{pmatrix}\partial _t u-Au \\ \gamma _0 u \end{pmatrix} \end{aligned}$$
is an isomorphism of Banach spaces, where \(\gamma _0u:= u(0)\) denotes the temporal trace and \(I_p(A)\) is the space of admissible initial conditions. It can be described as the real interpolation space \(I_p(A):=(E,D(A))_{1-1/p,p}\). This isomorphism is very useful for the treatment of nonlinear parabolic equations, as it allows for the efficient use of fixed point iterations. This approach to nonlinear equations has already been applied many times in the literature.
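To indicate how this isomorphism enters fixed point iterations, here is a brief sketch in a simplified semilinear setting (only meant as an illustration; the precise functional analytic setting depends on the problem at hand): consider \(\partial _t u-Au=F(u)\), \(u(0)=u_0\), with a nonlinearity F which is Lipschitz on bounded sets, and write \(\mathcal {L}u:=(\partial _t u-Au,\gamma _0 u)\). Then the problem is equivalent to the fixed point equation
$$\begin{aligned} u=\Phi (u):=\mathcal {L}^{-1}\begin{pmatrix}F(u)\\ u_0\end{pmatrix} \end{aligned}$$
in the space \(W^1_p([0,T);E)\cap L_p([0,T);D(A))\). Since \(\mathcal {L}^{-1}\) is bounded, one has \(\Vert \Phi (u)-\Phi (v)\Vert \le \Vert \mathcal {L}^{-1}\Vert \,\Vert F(u)-F(v)\Vert _{L_p([0,T);E)}\), and for small T or small data the Lipschitz estimate for F turns \(\Phi \) into a contraction on a suitable closed ball, so that Banach's fixed point theorem yields a unique solution.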
Let us now define \(\mathcal {R}\)–boundedness: Let \(E_0,E_1\) be Banach spaces. A family of operators \(\mathcal {T}\subset \mathcal {B}(E_0,E_1)\) is called \(\mathcal {R}\)-bounded if there is a constant \(C>0\) and \(p\in [1,\infty )\) such that for a Rademacher sequence \((\varepsilon _k)_{k\in \mathbb {N}}\) on a probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and all \(N\in \mathbb {N}\), \(x_1,\ldots ,x_N\in E_0\) and \(T_1,\ldots ,T_N\in \mathcal {T}\) the estimate
$$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k T_k x_k \right\| _{L_p(\Omega ;E_1)}\le C\left\| \sum _{k=1}^N \varepsilon _k x_k \right\| _{L_p(\Omega ;E_0)} \end{aligned}$$
holds. The least admissible constant such that this estimate holds will be denoted by \(\mathcal {R}(\mathcal {T})\) or, if we want to emphasize the dependence on the Banach spaces, by \(\mathcal {R}_{\mathcal {B}(E_0,E_1)}(\mathcal {T})\). By the Kahane–Khintchine inequalities, the notion of \(\mathcal {R}\)-boundedness does not depend on p. \(\mathcal {R}\)-boundedness trivially implies uniform boundedness, but the converse does not hold true in general. For Hilbert spaces however, both notions coincide.
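The Hilbert space case can be seen by a direct computation, which we include for illustration: if \(E_0=E_1=H\) is a Hilbert space and \(p=2\), then
$$\begin{aligned} \mathbb {E}\bigg \Vert \sum _{k=1}^N\varepsilon _kT_kx_k\bigg \Vert _H^2=\sum _{k=1}^N\Vert T_kx_k\Vert _H^2\le \sup _{T\in \mathcal {T}}\Vert T\Vert ^2\sum _{k=1}^N\Vert x_k\Vert _H^2=\sup _{T\in \mathcal {T}}\Vert T\Vert ^2\,\mathbb {E}\bigg \Vert \sum _{k=1}^N\varepsilon _kx_k\bigg \Vert _H^2, \end{aligned}$$
so that \(\mathcal {R}(\mathcal {T})=\sup _{T\in \mathcal {T}}\Vert T\Vert \) in this situation.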
An equivalent characterization of \(\mathcal {R}\)-boundedness can be given by using the \({\text {Rad}}_p(E)\)-spaces. They are defined as the space of all sequences \((x_k)_{k\in \mathbb {N}}\subset E\) such that \(\sum _{k=1}^{\infty } \varepsilon _k x_k\) converges in \(L_p(\Omega ;E)\). \({\text {Rad}}_p(E)\)-spaces are endowed with the norm
$$\begin{aligned} \Vert (x_k)_{k\in \mathbb {N}}\Vert _{{\text {Rad}}_p(E)}=\sup _{N\in \mathbb {N}}\left\| \sum _{k=1}^N \varepsilon _k x_k \right\| _{L_p(\Omega ;E)}. \end{aligned}$$
Given \(T_1,\ldots ,T_N\in \mathcal {B}(E_0,E_1)\) we define
$$\begin{aligned} {\text {diag}}(T_1,\ldots ,T_N):{\text {Rad}}_p(E_0)\rightarrow {\text {Rad}}_p(E_1),\,(x_k)_{k\in \mathbb {N}}\mapsto (T_kx_k)_{k\in \mathbb {N}} \end{aligned}$$
where \(T_k:=0\) for \(k>N\). Then, a family of operators \(\mathcal {T}\subset \mathcal {B}(E_0,E_1)\) is \(\mathcal {R}\)-bounded if and only if
$$\begin{aligned} \{{\text {diag}}(T_1,\ldots ,T_N): N\in \mathbb {N}, T_1,\ldots ,T_N\in \mathcal {T}\}\subset \mathcal {B}({\text {Rad}}_p(E_0),{\text {Rad}}_p(E_1)) \end{aligned}$$
is uniformly bounded.
Let us now collect some results concerning \(\mathcal {R}\)-boundedness which will be useful in this paper.
Proposition 2.1
Let E be a Banach space with cotype \(q\in [2,\infty )\), \((\varepsilon _k)_{k\in \mathbb {N}}\) a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \((S,\mathcal {A},\mu )\) be a \(\sigma \)-finite measure space. For all \(\widetilde{q}\in (q,\infty ]\) there exists a constant \(C>0\) such that for all \(N\in \mathbb {N}\), \(f_1,\ldots ,f_N\in L_{\widetilde{q}}(S)\) and \(x_1,\ldots ,x_N\in E\) it holds that
$$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k f_k x_k \right\| _{L_{\widetilde{q}}(S;L_2(\Omega ;E))}\le C \sup _{1\le k\le N} \Vert f_k\Vert _{L_{\widetilde{q}}(S)}\left\| \sum _{k=1}^N \varepsilon _kx_k\right\| _{L_2(\Omega ;E)}. \end{aligned}$$
If \(q\in \{2,\infty \}\), then we can also take \(\widetilde{q}=q\).
This is one of the statements in [25, Lemma 3.1]. \(\square \)
It was already observed in [25, Remark 3.3] that if \(\widetilde{q}<\infty \), then Proposition 2.1 can also be formulated as follows: The image of the unit ball \(B_{L_{\widetilde{q}}(S)}(0,1)\) in \(L_{\widetilde{q}}(S)\) under the embedding \(L_{\widetilde{q}}(S)\hookrightarrow \mathcal {B}(E,L_{\widetilde{q}}(S;E)),\,f\mapsto f\otimes (\,\cdot \,)\) is an \(\mathcal {R}\)-bounded subset of \(\mathcal {B}(E,L_{\widetilde{q}}(S;E))\).
If \((S,\mathcal {A},\mu )\) is nonatomic and if (2-1) holds for all \(N\in \mathbb {N}\), \(f_1,\ldots ,f_N\in L_{\widetilde{q}}(S)\) and \(x_1,\ldots ,x_N\in E\), then E has cotype \(\widetilde{q}\). This follows from the statements in [25, Lemma 3.1].
Let \((A,\Sigma ,\nu )\) be a \(\sigma \)-finite measure space. Let further \(2\le \overline{q}<q<\infty \) and let \(\overline{E}\) be a Banach space with cotype \(\overline{q}\). If \(E=L_q(A;\overline{E})\), then (2-1) also holds with \(\widetilde{q}=q\). This was shown in [25, Remark 3.4].
Proposition 2.3
Let \((E_0,E_1)\) and \((F_0,F_1)\) be interpolation couples of UMD-spaces, \(\Sigma \subset \mathbb {C}\) and \(f:\Sigma \rightarrow \mathbb {C}\). Let further \((T(\lambda ))_{\lambda \in \Sigma }\subset \mathcal {B}(E_0+E_1,F_0+F_1)\) be a collection of operators such that
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,F_0)}(\{T(\lambda ):\lambda \in \Sigma \})<M_0,\quad \mathcal {R}_{\mathcal {B}(E_1,F_1)}(\{f(\lambda )T(\lambda ):\lambda \in \Sigma \})<M_1 \end{aligned}$$
for some \(M_0,M_1>0\). We write \(E_{\theta }=[E_0,E_1]_{\theta }\) and \(F_{\theta }=[F_0,F_1]_{\theta }\) with \(\theta \in (0,1)\) for the complex interpolation spaces. Then, there is a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_\theta ,F_\theta )}(\{f(\lambda )^{\theta }T(\lambda ):\lambda \in \Sigma \})<CM_0^{1-\theta }M_1^{\theta }. \end{aligned}$$
In order to avoid possible ambiguities with complex exponentials, we assume that f takes values in \((0,\infty )\). As a consequence of Kahane's contraction principle ([24, Theorem 6.1.13]), we may do this without loss of generality. It suffices to show that
$$\begin{aligned} \{{\text {diag}}(f(\lambda _1)^{\theta }T(\lambda _1),\ldots ,f(\lambda _N)^{\theta }T(\lambda _N)):N\in \mathbb {N},\lambda _1,\ldots ,\lambda _N\in \Sigma \} \end{aligned}$$
is a bounded family in \(\mathcal {B}({\text {Rad}}_p(E_{\theta }),{\text {Rad}}_p(F_{\theta }))\). Let
$$\begin{aligned} S:=\{z\in \mathbb {C}: 0\le {\text {{Re}}}z\le 1\}. \end{aligned}$$
For fixed \(N\in \mathbb {N}\) and \(\lambda _1,\ldots ,\lambda _N\in \Sigma \), we define
$$\begin{aligned} \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}:S \rightarrow \mathcal {B}({\text {Rad}}_p(E_{0})&\cap {\text {Rad}}_p(E_{1}), {\text {Rad}}_p(F_{0})+ {\text {Rad}}_p(F_{1})),\\&\,z\mapsto {\text {diag}}( f(\lambda _1)^z T(\lambda _1),\ldots ,f(\lambda _N)^z T(\lambda _N)). \end{aligned}$$
For fixed \((x_k)_{k\in \mathbb {N}}\in {\text {Rad}}_p(E_{0})\cap {\text {Rad}}_p(E_{1})\), the mapping
$$\begin{aligned} \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\,\cdot \,)(x_k)_{k\in \mathbb {N}}:S \rightarrow {\text {Rad}}_p(F_{0})+ {\text {Rad}}_p(F_{1}),&\,z\mapsto (f(\lambda _k)^zT(\lambda _k)x_k)_{k\in \mathbb {N}}, \end{aligned}$$
is continuous, bounded and analytic in the interior of S. Again, we used the convention \(T(\lambda _k)=0\) for \(k>N\). Moreover, by assumption we have that
$$\begin{aligned} \sup _{t\in \mathbb {R}}\Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(j+it)\Vert _{\mathcal {B}({\text{ Rad }}_p(E_{j}),{\text{ Rad }}_p(F_{j}))}<M_j\quad (j\in \{0,1\}). \end{aligned}$$
Thus, it follows from abstract Stein interpolation (see [48, Theorem 2.1]) that
$$\begin{aligned} \Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\theta )\Vert _{\mathcal {B}({\text {Rad}}_p^{\theta }(E_{0},E_1),{\text {Rad}}_p^{\theta }(F_{0},F_1))}<M_0^{1-\theta }M_1^{\theta }, \end{aligned}$$
where we used the shorter notation \({\text {Rad}}_p^{\theta }(E_{0},E_1)=[{\text {Rad}}_p(E_{0}),{\text {Rad}}_p(E_{1})]_{\theta }\) in the subscript. But it was shown in [27, Corollary 3.16] that
$$\begin{aligned}{}[{\text {Rad}}_p(E_0),{\text {Rad}}_p(E_1)]_{\theta }={\text {Rad}}_p(E_{\theta }) \end{aligned}$$
with equivalence of norms so that there is a constant \(C>0\) such that
$$\begin{aligned} \Vert \mathscr {T}_{\lambda _1,\ldots ,\lambda _N}(\theta )\Vert _{\mathcal {B}({\text {Rad}}_p(E_{\theta }),{\text {Rad}}_p(F_{\theta }))}<CM_0^{1-\theta }M_1^{\theta }. \end{aligned}$$
Since \(N\in \mathbb {N}\) and \(\lambda _1,\ldots ,\lambda _N\in \Sigma \) were arbitrary, we obtain the assertion. \(\square \)
The proof of Proposition 2.3 was inspired by the proof of [20, Lemma 6.9]. Note that [20, Example 6.13] shows that Proposition 2.3 does not hold true if the complex interpolation functor is replaced by the real one.
In Proposition 2.3, we only use the UMD assumption in order to show that the interpolation space of two Rademacher spaces coincides with the Rademacher space of the interpolation space of the underlying Banach spaces. This holds more generally for K-convex Banach spaces (see [24, Theorem 7.4.16]). We refrain from introducing K-convexity in order not to overload this paper with geometric notions. Note however that UMD spaces are K-convex, see [24, Example 7.4.8].
Let \(\psi \in (0,\pi )\) and let \(E_0,E_1\) be Banach spaces. Let further \(N:\overline{\Sigma }_{\psi }\rightarrow \mathcal {B}(E_0,E_1)\) be holomorphic and bounded on \(\Sigma _{\psi }\) and suppose that \(N|_{\partial \Sigma _\psi }\) has \(\mathcal {R}\)-bounded range. Then, the set
$$\begin{aligned} \{\lambda ^k\big (\tfrac{d}{d\lambda }\big )^kN(\lambda ):\lambda \in \overline{\Sigma }_{\psi '}\} \end{aligned}$$
is \(\mathcal {R}\)-bounded for all \(\psi '<\psi \) and all \(k\in \mathbb {N}_0\).
For \(k=0\) and \(k=1\), the proof is contained in [29, Example 2.16]. Other values of k can then be obtained by iteration. Note that the boundedness of N is necessary since Poisson's formula, which is used for \(k=0\), only holds for bounded functions.
\(\square \)
Definition 2.6
Let (X, d) be a metric space and \(E_0,E_1\) be Banach spaces. Let further \(U\subset \mathbb {R}^n\) be open and \(k\in \mathbb {N}_0\).
We say that a function \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) is \(\mathcal {R}\)-continuous if for all \(x\in X\) and all \(\varepsilon >0\) there is a \(\delta >0\) such that we have
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,E_1)}(\{f(y)-f(x): y\in B(x,\delta )\})<\varepsilon . \end{aligned}$$
We write \(C_{\mathcal {R}B}(X,\mathcal {B}(E_0,E_1))\) for the space of all \(\mathcal {R}\)-continuous functions \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) with \(\mathcal {R}\)-bounded range.
We say that a function \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) is uniformly \(\mathcal {R}\)-continuous if for all \(\varepsilon >0\) there is a \(\delta >0\) such that for all \(x\in X\) we have
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,E_1)}(\{f(y)-f(x): y\in B(x,\delta )\})<\varepsilon . \end{aligned}$$
We write \(BUC_{\mathcal {R}}(X,\mathcal {B}(E_0,E_1))\) for the space of all uniformly \(\mathcal {R}\)-continuous functions \(f:X\rightarrow \mathcal {B}(E_0,E_1)\) with \(\mathcal {R}\)-bounded range.
We write \(C_{\mathcal {R}B}^k(U,\mathcal {B}(E_0,E_1))\) for the space of all \(f\in C^k(U,\mathcal {B}(E_0,E_1))\) such that \(\partial ^{\alpha }f\in C_{\mathcal {R}B}(U,\mathcal {B}(E_0,E_1))\) for all \(\alpha \in \mathbb {N}_0^n\), \(|\alpha |\le k\). Analogously, we write \(BUC_{\mathcal {R}}^k(U,\mathcal {B}(E_0,E_1))\) for the space of all \(f\in C^k(U,\mathcal {B}(E_0,E_1))\) such that \(\partial ^{\alpha }f\in BUC_{\mathcal {R}}(U,\mathcal {B}(E_0,E_1))\) for all \(\alpha \in \mathbb {N}_0^n\), \(|\alpha |\le k\).
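A simple class of examples is given by scalar functions: if \(a:X\rightarrow \mathbb {C}\) is bounded and uniformly continuous, then \(f:=a\,\mathrm{id}_E\) belongs to \(BUC_{\mathcal {R}}(X,\mathcal {B}(E))\). Indeed, Kahane's contraction principle gives
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E)}(\{(a(y)-a(x))\,\mathrm{id}_E: y\in B(x,\delta )\})\le 2\sup _{y\in B(x,\delta )}|a(y)-a(x)|, \end{aligned}$$
which becomes small uniformly in x as \(\delta \rightarrow 0\), and the \(\mathcal {R}\)-boundedness of the range follows in the same way from the boundedness of a.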
Let \(U\subset \mathbb {R}^n\) be open, \(E_0,E_1\) Banach spaces and \(k\in \mathbb {N}_0\). Let further \(\mathcal {T}\subset C_{\mathcal {R}B}^k(U,\mathcal {B}(E_0,E_1))\) or \(\mathcal {T}\subset BUC_{\mathcal {R}}^k(U,\mathcal {B}(E_0,E_1))\).
We say that \(\mathcal {T}\) is bounded if
$$\begin{aligned} \sup _{f\in \mathcal {T}}\mathcal {R}(\{f^{(j)}(x):x\in U,\, j\in \{0,\ldots ,k\}\})<\infty . \end{aligned}$$
We say that \(\mathcal {T}\) is \(\mathcal {R}\)-bounded if
$$\begin{aligned} \mathcal {R}(\{f^{(j)}(x):x\in U, f\in \mathcal {T}, j\in \{0,\ldots ,k\}\})<\infty . \end{aligned}$$
Weighted function spaces
Weights are an important tool to weaken the regularity assumptions on the data which are needed in order to derive well-posedness and a priori estimates for elliptic and parabolic boundary value problems, see for example [1, 5, 22, 30, 32]. Power weights, i.e., weights of the form \(w_\gamma (x):={\text {dist}}(x,\partial \mathcal {O})^\gamma \) which measure the distance to the boundary of the domain \(\mathcal {O}\subset \mathbb {R}^n\), are particularly useful for this purpose. Roughly speaking, the larger the value of \(\gamma \), the larger the difference between the regularity in the interior and the regularity at the boundary may be. This way, one can obtain arbitrary regularity in the interior while the regularity of the boundary data may be close to 0. However, there is an important borderline: If \(\gamma \in (-1,p-1)\), where p denotes the integrability parameter of the underlying function space, then \(w_\gamma \) is a so-called \(A_p\) weight; if \(\gamma \ge p-1\), then it is only an \(A_{\infty }\) weight. The \(A_p\) weights form an important class: they are exactly the weights w for which the Hardy–Littlewood maximal operator is bounded on \(L_p(\mathcal {O},w)\). Consequently, the whole Fourier analytic toolbox can still be used and many results can be transferred directly to the weighted setting. In the \(A_{\infty }\) range, however, this is no longer the case. But in order to obtain more flexibility for the regularity of the boundary data, one would like to go beyond this borderline and also work with \(A_{\infty }\) weights. This is possible if one works with weighted Besov or Triebel–Lizorkin spaces. As we will explain later, these scales of function spaces allow for a combination of \(A_{\infty }\) weights and Fourier multiplier methods. In our analysis, we want to include both cases: We treat the more classical situation of the Bessel potential scale with \(A_p\) weights, which includes the classical Sobolev spaces, as well as the more flexible situation of the Besov and Triebel–Lizorkin scales with \(A_{\infty }\) weights.
Let us now give the precise definitions: Let \(\mathcal {O}\subset \mathbb {R}^n\) be a domain. A weight w on \(\mathcal {O}\) is a function \(w:\mathcal {O}\rightarrow [0,\infty ]\) which takes values in \((0,\infty )\) almost everywhere with respect to the Lebesgue measure. We mainly work with the classes \(A_p\) \((p\in (1,\infty ])\). A weight w on \(\mathbb {R}^n\) is an element of \(A_p\) for \(p\in (1,\infty )\) if and only if
$$\begin{aligned}{}[w]_{A_p}:= \sup _{Q\text { cube in }\mathbb {R}^n}\bigg (\frac{1}{\lambda (Q)}\int _Q w(x)\,\mathrm{d}x\bigg )\bigg (\frac{1}{\lambda (Q)}\int _Q w(x)^{-\frac{1}{p-1}}\,\mathrm{d}x\bigg )^{p-1}<\infty . \end{aligned}$$
The quantity \([w]_{A_p}\) is called \(A_p\) Muckenhoupt characteristic constant of w. We define \(A_{\infty }:=\bigcup _{1<p<\infty } A_p\). Moreover, we write \([A_{\infty }]_p'\) for the space of all weights w such that the p-dual weight \(w^{-\frac{1}{p-1}}\) is in \(A_{\infty }\). We refer to [18, Chapter 9] for an introduction to these classes of weights.
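As an example relevant for the power weights discussed above (a standard computation, included for orientation): the weight \(w_\gamma (x)=|x_n|^{\gamma }\) on \(\mathbb {R}^n\) belongs to \(A_p\) if and only if \(\gamma \in (-1,p-1)\). The one-dimensional heart of the matter is already visible on intervals \((0,r)\):
$$\begin{aligned} \frac{1}{r}\int _0^r t^{\gamma }\,\mathrm{d}t<\infty \;\Leftrightarrow \;\gamma >-1,\qquad \frac{1}{r}\int _0^r t^{-\frac{\gamma }{p-1}}\,\mathrm{d}t<\infty \;\Leftrightarrow \;\gamma <p-1, \end{aligned}$$
and for \(\gamma \in (-1,p-1)\) these averages equal \(\tfrac{r^{\gamma }}{\gamma +1}\) and \(\tfrac{r^{-\gamma /(p-1)}}{1-\gamma /(p-1)}\), respectively, so that the powers of r cancel in the product appearing in \([w_\gamma ]_{A_p}\); the verification over arbitrary cubes is standard and yields the same range of exponents.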
For \(p\in [1,\infty )\), a domain \(\mathcal {O}\subset \mathbb {R}^n\), a weight w and a Banach space E the weighted Lebesgue–Bochner space \(L_p(\mathcal {O},w;E)\) is defined as the space of all strongly measurable functions \(f:\mathcal {O}\rightarrow E\) such that
$$\begin{aligned} \Vert f\Vert _{L_p(\mathcal {O},w;E)}:=\bigg (\int _{\mathcal {O}} \Vert f(x)\Vert _{E}^p w(x)\,\mathrm{d}x\bigg )^{1/p}<\infty . \end{aligned}$$
We further set \(L_{\infty }(\mathcal {O},w;E):=L_{\infty }(\mathcal {O};E)\). In addition, let \( L_1^{loc}(\mathcal {O};E)\) be the space of all locally integrable functions, i.e., strongly measurable functions \(f:\mathcal {O}\rightarrow E\) such that
$$\begin{aligned} \int _K \Vert f(x)\Vert _E\,\mathrm{d}x<\infty \end{aligned}$$
for all compact \(K\subset \mathcal {O}\). As usual, functions which coincide on a set of measure 0 are considered as equal in these spaces.
One has to be cautious with the definition of weighted Sobolev spaces. One would like to define them as spaces of distributions such that derivatives up to a certain order can be represented by functions in \(L_p(\mathcal {O},w;E)\). But for some weights, the elements of \(L_p(\mathcal {O},w;E)\) might not be locally integrable and thus, taking distributional derivatives might not be possible. Hölder's inequality shows that \(L_p(\mathcal {O},w;E)\subset L_1^{loc}(\mathcal {O},E)\) if \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathcal {O})\). We refer to [28] for further thoughts in this direction.
Let \(\mathcal {O}\subset \mathbb {R}^n\) be a domain, E a Banach space, \(m\in \mathbb {N}_0\), \(p\in [1,\infty )\) and w a weight on \(\mathcal {O}\) such that \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathcal {O})\).
We define the weighted Sobolev space \(W^m_p(\mathcal {O},w;E)\) by
$$\begin{aligned} W^m_p(\mathcal {O},w;E):=\{f\in L_p(\mathcal {O},w;E)\,|\,\partial ^\alpha f\in L_p(\mathcal {O},w;E)\text { for all }\alpha \in \mathbb {N}_0^n,\,|\alpha |\le m\} \end{aligned}$$
and endow it with the norm \(\Vert f\Vert _{W^m_p(\mathcal {O},w;E)}:=\big (\sum _{|\alpha |\le m} \Vert \partial ^{\alpha } f\Vert _{L_p(\mathcal {O},w;E)}^p\big )^{1/p}\). With the usual modifications, we can also define \(W^m_\infty (\mathcal {O},w;E)\).
As usual, we define \(W^m_{p,0}(\mathcal {O},w;E)\) to be the closure of the space of test functions \(\mathscr {D}(\mathcal {O};E)\) in \(W^m_{p}(\mathcal {O},w;E)\).
Let E be reflexive, \(w\in A_p\) and \(p,p'\in (1,\infty )\) conjugated Hölder indices, i.e., they satisfy \(1=\frac{1}{p}+\frac{1}{p'}\). Then, we define the dual scale \(W^{-m}_p(\mathcal {O},w;E):=(W^m_{p',0}(\mathcal {O},w^{-\frac{1}{p-1}};E'))'\).
We further define weighted Bessel potential, Besov and Triebel–Lizorkin spaces. Since we use the Fourier analytic approach, we already define them as subsets of tempered distributions.
Let E be a Banach space, \(s\in \mathbb {R}\), \(p\in [1,\infty ]\) and w a weight on \(\mathbb {R}^n\) such that \(w^{-\frac{1}{p-1}}\in L_1^{loc}(\mathbb {R}^n)\). Then, we define the weighted Bessel potential space \(H^s_p(\mathbb {R}^n,w;E)\) by
$$\begin{aligned} H^s_p(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n;E)\,|\,\langle D \rangle ^s f\in L_p(\mathbb {R}^n,w;E)\} \end{aligned}$$
and endow it with the norm \(\Vert f\Vert _{H^s_p(\mathbb {R}^n,w;E)}:=\Vert \langle D\rangle ^s f\Vert _{L_p(\mathbb {R}^n,w;E)}\).
Definition 2.10
Let \(\varphi _0\in \mathscr {D}(\mathbb {R}^n)\) be a smooth function with compact support such that \(0\le \varphi _0\le 1\) and
$$\begin{aligned} \varphi _0(\xi )=1\quad \text {if } |\xi |\le 1,\qquad \varphi _0(\xi )=0\quad \text {if }|\xi |\ge 3/2. \end{aligned}$$
For \(\xi \in \mathbb {R}^n\) and \(k\in \mathbb {N}\), let further
$$\begin{aligned} \varphi (\xi )&:=\varphi _0(\xi )-\varphi _0(2\xi ),\\ \varphi _k(\xi )&:=\varphi (2^{-k}\xi ). \end{aligned}$$
We call such a sequence \((\varphi _k)_{k\in \mathbb {N}_0}\) smooth dyadic resolution of unity.
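Let us note an elementary consequence of this construction, included for orientation: since \(\varphi _0=1\) on \(\{|\xi |\le 1\}\) and \(\varphi _0=0\) on \(\{|\xi |\ge 3/2\}\), we have
$$\begin{aligned} {\text {supp}}\,\varphi \subset \{\tfrac{1}{2}\le |\xi |\le \tfrac{3}{2}\},\qquad {\text {supp}}\,\varphi _k\subset \{2^{k-1}\le |\xi |\le 3\cdot 2^{k-1}\}\quad (k\in \mathbb {N}), \end{aligned}$$
so that at every \(\xi \in \mathbb {R}^n\) at most two of the functions \(\varphi _k\) are nonzero.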
Let E be a Banach space and let \((\varphi _k)_{k\in \mathbb {N}_0}\) be a smooth dyadic resolution of unity. On the space of E-valued tempered distributions \(\mathscr {S}'(\mathbb {R}^n;E)\), we define the sequence of operators \((S_k)_{k\in \mathbb {N}_0}\) by means of
$$\begin{aligned} S_kf:=\mathscr {F}^{-1}\varphi _k\mathscr {F} f\quad (f\in \mathscr {S}'(\mathbb {R}^n;E)). \end{aligned}$$
The sequence \((S_k f)_{k\in \mathbb {N}_0}\) is called dyadic decomposition of f.
By construction, we have that \(\mathscr {F}(S_k f)\) has compact support so that \(S_k f\) is an analytic function by the Paley–Wiener theorem, see [17, Theorem 2.3.21]. Moreover, it holds that \(\sum _{k\in \mathbb {N}_0} \varphi _k=1\) so that we have \(f=\sum _{k\in \mathbb {N}_0} S_kf\), i.e., f is the limit of a sequence of analytic functions where the limit is taken in the space of tempered distributions. Elements of Besov and Triebel–Lizorkin spaces even have convergence in a stronger sense, as their definition shows:
Let \((\varphi _k)_{k\in \mathbb {N}_0}\) be a smooth dyadic resolution of unity. Let further E be a Banach space, w a weight, \(s\in \mathbb {R}\), \(p\in [1,\infty )\) and \(q\in [1,\infty ]\).
We define the weighted Besov space \(B^s_{p,q}(\mathbb {R}^n,w;E)\) by
$$\begin{aligned} B^s_{p,q}(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n;E): \Vert f\Vert _{B^s_{p,q}(\mathbb {R}^n,w;E)}<\infty \} \end{aligned}$$
with
$$\begin{aligned} \Vert f\Vert _{B^s_{p,q}(\mathbb {R}^n,w;E)}:=\Vert (2^{sk}\mathscr {F}^{-1}\varphi _k\mathscr {F}f)_{k\in \mathbb {N}_0}\Vert _{\ell ^q(L_p(\mathbb {R}^n,w;E))}. \end{aligned}$$
We define the weighted Triebel–Lizorkin space \(F^s_{p,q}(\mathbb {R}^n,w;E)\) by
$$\begin{aligned} F^s_{p,q}(\mathbb {R}^n,w;E):=\{f\in \mathscr {S}'(\mathbb {R}^n;E): \Vert f\Vert _{F^s_{p,q}(\mathbb {R}^n,w;E)}<\infty \} \end{aligned}$$
with
$$\begin{aligned} \Vert f\Vert _{F^s_{p,q}(\mathbb {R}^n,w;E)}:=\Vert (2^{sk}\mathscr {F}^{-1}\varphi _k\mathscr {F}f)_{k\in \mathbb {N}_0}\Vert _{L_p(\mathbb {R}^n,w;\ell ^q(E))}. \end{aligned}$$
It is well known that these spaces do not depend on the choice of the dyadic resolution of unity if w is an \(A_{\infty }\) weight. In this case, different choices lead to equivalent norms, see for example [37, Proposition 3.4]. In fact, the condition on the weight can be weakened: In [41], it was shown that one also obtains independence of the dyadic resolution of unity in the case of so-called \(A_{\infty }^{loc}\) weights.
Let E be a reflexive Banach space, \(w\in [A_{\infty }]_p'\), \(s\in \mathbb {R}\) and \(p,q\in (1,\infty )\). We define the dual scales of Besov and Triebel–Lizorkin scale by
$$\begin{aligned}&\mathcal {B}^{s}_{p,q}(\mathbb {R}^n,w;E):=(B^{-s}_{p',q'}(\mathbb {R}^n,w^{-\frac{1}{p-1}};E'))',\\&\quad \mathcal {F}^{s}_{p,q}(\mathbb {R}^n,w;E):=(F^{-s}_{p',q'}(\mathbb {R}^n,w^{-\frac{1}{p-1}};E'))', \end{aligned}$$
where \(p',q'\) denote the conjugated Hölder indices.
Remark 2.13
The main reason for us to include the dual scales in our considerations is the following: If w is additionally an admissible weight in the sense of [42, Section 1.4.1.], then we have \(\mathcal {B}^{s}_{p,q}(\mathbb {R}^n,w)=B^{s}_{p,q}(\mathbb {R}^n,w)\) and \(\mathcal {F}^{s}_{p,q}(\mathbb {R}^n,w)=F^{s}_{p,q}(\mathbb {R}^n,w)\). Therefore, we can also treat weighted Besov and Triebel–Lizorkin spaces with weights that are outside the \(A_{\infty }\) range. Formulating this in terms of dual scales allows us to transfer Fourier multiplier theorems without any additional effort just by duality. The main example we have in mind will be \(w(x)=\langle x\rangle ^{d}\) with arbitrary \(d\in \mathbb {R}\). We will make use of this in a forthcoming paper on equations with boundary noise.
Proposition 2.14
Recall that Assumption 1.2 holds true and suppose that E has cotype \(q_E\in [2,\infty )\). Let further \((S,\Sigma ,\mu )\) be a \(\sigma \)-finite measure space, \((\varepsilon _k)_{k\in \mathbb {N}}\) a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(s\in \mathbb {R}\) and \(p_0,q_0\in (1,\infty )\). Consider one of the following cases:
\(\mathscr {A}^\bullet \) stands for the Bessel potential scale and \(p\in (\max \{q_E,p_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0\}\) if \(q_E<p_0\) or if E is a Hilbert space and \(p_0=2\).
\(\mathscr {A}^\bullet \) stands for the Besov scale and \(p\in (\max \{q_E,p_0,q_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0,q_0\}\) if \(q_E< p_0\le q_0\) or if E is a Hilbert space and \(p_0=q_0=2\).
\(\mathscr {A}^\bullet \) stands for the Triebel–Lizorkin scale and \(p\in (\max \{q_E,p_0,q_0\},\infty )\). Moreover, we allow \(p=\max \{q_E,p_0,q_0\}\) if \(q_E< q_0\le p_0\) or if E is a Hilbert space and \(p_0=q_0=2\).
Then, the images of balls with finite radius in \(L_p(S)\) under the embedding
$$\begin{aligned} L_p(S)\hookrightarrow \mathcal {B}(\mathscr {A}^s,L_p(S;\mathscr {A}^s)), f\mapsto f\otimes (\,\cdot \,) \end{aligned}$$
are \(\mathcal {R}\)-bounded. More precisely, there is a constant \(C>0\) such that for all \(N\in \mathbb {N}\), \(g_1,\ldots ,g_N\in \mathscr {A}^s\) and all \(f_1,\ldots ,f_N\in L_p(S)\) we have the estimate
$$\begin{aligned} \left\| \sum _{k=1}^N \varepsilon _k f_k \otimes g_k\right\| _{L_p(\Omega ;L_p(S;\mathscr {A}^s))}\le C\sup _{k=1,\ldots ,N}\Vert f_k\Vert _{L_p(S)}\left\| \sum _{k=1}^N \varepsilon _k g_k\right\| _{L_p(\Omega ;\mathscr {A}^s)}. \end{aligned}$$
The cases \(p\in (\max \{q_E,p_0\},\infty )\) in the Bessel potential case and \(p\in (\max \{q_E,p_0,q_0\},\infty )\) in the Besov and Triebel–Lizorkin case follow from the result by Hytönen and Veraar, Proposition 2.1, as in these cases \(\mathscr {A}^s\) has cotype \(\max \{q_E,p_0\}\) and \(\max \{q_E,p_0,q_0\}\), respectively, see for example [24, Proposition 7.1.4]. The Hilbert space cases follow directly since uniform boundedness and \(\mathcal {R}\)-boundedness coincide. The other cases in which \(p=\max \{q_E,p_0\}\) or \(p=\max \{q_E,p_0,q_0\}\) are allowed follow by Fubini's theorem together with the Kahane–Khintchine inequalities as in [25, Remark 3.4]. \(\square \)
For the mapping properties we derive later on, it is essential that we can use Mikhlin's multiplier theorem. There are many versions of this theorem available. For our purposes, the following will be sufficient.
Theorem 2.15
Let E be a UMD space, \(p\in (1,\infty )\), \(s\in \mathbb {R}\) and let w be an \(A_p\) weight. Let \(m\in C^n(\mathbb {R}^n{\setminus }\{0\};\mathcal {B}(E))\) such that
$$\begin{aligned} \kappa _m:=\mathcal {R}\big (\{|\xi |^{|\alpha |}D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n{\setminus }\{0\},|\alpha |\le n\}\big )<\infty . \end{aligned}$$
Then, we have that
$$\begin{aligned} \Vert \mathscr {F}^{-1} m \mathscr {F} \Vert _{\mathcal {B}(H^s_p(\mathbb {R}^n,w;E))}\le C\kappa _m \end{aligned}$$
with a constant \(C>0\) only depending on n, p and E.
Suppose that E is a UMD space with Pisier's property \((\alpha )\). Let \(p\in (1,\infty )\), \(s\in \mathbb {R}\) and \(w\in A_p(\mathbb {R}^n)\). Let further \(\mathcal {T}\subset C^{n}(\mathbb {R}^n{\setminus }\{0\},\mathcal {B}(E))\). Then, there is a constant \(C>0\) independent of \(\mathcal {T}\) such that
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(H^s_p(\mathbb {R}^n,w;E))}(\{\mathscr {F}^{-1}m\mathscr {F}:m\in \mathcal {T}\})\le C\kappa _{\mathcal {T}} \end{aligned}$$
where
$$\begin{aligned} \kappa _{\mathcal {T}}:=\mathcal {R}_{\mathcal {B}(E)}(\{|\xi |^{|\alpha |} D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n{\setminus }\{0\},\alpha \in \mathbb {N}_0^n,|\alpha |\le n, m\in \mathcal {T}\}). \end{aligned}$$
Let E be a Banach space, \(p\in (1,\infty )\), \(q\in [1,\infty ]\) and \(s\in \mathbb {R}\). Let further w be an \(A_{\infty }\) weight, \(m\in C^{\infty }(\mathbb {R}^n,\mathcal {B}(E))\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{B^s_{p,q}(\mathbb {R}^n,w;E),F^s_{p,q}(\mathbb {R}^n,w;E)\}\). Then, there is a natural number \(N\in \mathbb {N}\) and a constant \(C>0\) not depending on m such that
$$\begin{aligned} \Vert \mathscr {F}^{-1} m \mathscr {F} \Vert _{\mathcal {B}(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E))}\le C\kappa _m \end{aligned}$$
where
$$\begin{aligned} \kappa _m:=\sup _{|\alpha |\le N}\sup _{\xi \in \mathbb {R}^n} \Vert \langle \xi \rangle ^{|\alpha |}D^{\alpha }m(\xi )\Vert _{\mathcal {B}(E)}. \end{aligned}$$
The same holds if E is reflexive, \(p,q\in (1,\infty )\), \(w\in [A_\infty ]_p'\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{\mathcal {B}^s_{p,q}(\mathbb {R}^n,w;E),\mathcal {F}^s_{p,q}(\mathbb {R}^n,w;E)\}\).
Let E be a Banach space, \(p\in (1,\infty )\), \(q\in [1,\infty ]\) and \(s\in \mathbb {R}\). Let further w be an \(A_{\infty }\) weight, \(\mathcal {T}\subset C^{\infty }(\mathbb {R}^n,\mathcal {B}(E))\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{B^s_{p,q}(\mathbb {R}^n,w;E),F^s_{p,q}(\mathbb {R}^n,w;E)\}\). Then, there is an \(N\in \mathbb {N}\) and a constant \(C>0\) independent of \(\mathcal {T}\) such that
$$\begin{aligned} \mathcal {R}(\{\mathscr {F}^{-1} m \mathscr {F} : m\in \mathcal {T}\})\le C \kappa _{\mathcal {T}} \end{aligned}$$
where
$$\begin{aligned} \kappa _{\mathcal {T}}:=\sup _{|\alpha |\le N}\mathcal {R}(\{\langle \xi \rangle ^{|\alpha |}D^{\alpha }m(\xi ):\xi \in \mathbb {R}^n,m\in \mathcal {T}\}). \end{aligned}$$
The same holds if E is a UMD space, \(p,q\in (1,\infty )\), \(w\in [A_\infty ]_p'\) and \(\mathscr {A}^s_{p,q}(\mathbb {R}^n,w;E)\in \{\mathcal {B}^s_{p,q}(\mathbb {R}^n,w;E),\mathcal {F}^s_{p,q}(\mathbb {R}^n,w;E)\}\).
Part (a) with \(s=0\) is contained in [15, Theorem 1.2]. The general case \(s\in \mathbb {R}\) follows from \(s=0\) by decomposing \(m(\xi )=\langle \xi \rangle ^{-s}m(\xi )\langle \xi \rangle ^{s}\) and by using the definition of Bessel potential spaces. Part (b) can be derived as [29, 5.2 (b)]. The scalar-valued, unweighted version of part (c) is contained in [45, Paragraph 2.3.7]. But the proof therein can be transferred to our situation by using [37, Proposition 2.4]. Part (d) is the isotropic version of [22, Lemma 2.4]. The statements concerning the dual scales follow by duality. In the \(\mathcal {R}\)-bounded case, we refer the reader to [24, Proposition 8.4.1]. \(\square \)
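For orientation, we mention a standard scalar example for part (a): the symbol \(m(\xi )=\tfrac{\xi _j\xi _k}{|\xi |^2}\,\mathrm{id}_E\) (a second-order Riesz transform) is smooth on \(\mathbb {R}^n{\setminus }\{0\}\) and homogeneous of degree 0, so that \(D^{\alpha }m\) is homogeneous of degree \(-|\alpha |\) and
$$\begin{aligned} |\xi |^{|\alpha |}\Vert D^{\alpha }m(\xi )\Vert _{\mathcal {B}(E)}=\Vert D^{\alpha }m(\xi /|\xi |)\Vert _{\mathcal {B}(E)}\le \sup _{|\omega |=1}\Vert D^{\alpha }m(\omega )\Vert _{\mathcal {B}(E)}<\infty . \end{aligned}$$
By Kahane's contraction principle, the set \(\{|\xi |^{|\alpha |}D^{\alpha }m(\xi ):\xi \ne 0,|\alpha |\le n\}\) is even \(\mathcal {R}\)-bounded, so that \(\kappa _m<\infty \) and \(\mathscr {F}^{-1}m\mathscr {F}\) is bounded on \(H^s_p(\mathbb {R}^n,w;E)\).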
For the dual scales in Theorem 2.15(d), it is actually not necessary to assume that E is a UMD space. Instead [24, Proposition 8.4.1] shows that K-convexity is good enough. But since we did not introduce K-convexity, we only stated the less general version here.
Later on, we sometimes want to apply Mikhlin's theorem for m taking values in \(\mathcal {B}(E^N,E^M)\) with certain \(N,M\in \mathbb {N}\) instead of \(\mathcal {B}(E)\). Note however that we can identify \(\mathcal {B}(E^N,E^M)\simeq \mathcal {B}(E)^{M\times N}\). Hence, one can apply Mikhlin's theorem for each component and the statements of Theorem 2.15 transfer to the case in which m takes values in \(\mathcal {B}(E^N,E^M)\).
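Concretely, if \(m=(m_{ij})_{i\le M,j\le N}\) with \(\mathcal {B}(E)\)-valued entries \(m_{ij}\) and \(f=(f_1,\ldots ,f_N)\), then
$$\begin{aligned} \big (\mathscr {F}^{-1}m\mathscr {F}f\big )_i=\sum _{j=1}^N\mathscr {F}^{-1}m_{ij}\mathscr {F}f_j\quad (i=1,\ldots ,M), \end{aligned}$$
so that the estimates of Theorem 2.15 for the entries add up to an estimate for \(\mathscr {F}^{-1}m\mathscr {F}\), with constants that additionally depend on N and M.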
Later on, we will also use parameter-dependent versions of our function spaces. They are natural to work with in the context of the parameter-dependent Boutet de Monvel calculus. And since we use elements of this parameter-dependent calculus, these spaces are also useful in our setting.
Recall that Assumption 1.2 holds true. Let \(\mu \in \mathbb {C}\) and \(s,s_0\in \mathbb {R}\). Then, we define the parameter-dependent weighted spaces
$$\begin{aligned}&\mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^n,w;E):=\langle D,\mu \rangle ^{s_0-s} \mathscr {A}^{s_0}(\mathbb {R}^n,w;E),\\&\quad \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^n,w;E)}:=\Vert \langle D,\mu \rangle ^{s-s_0}\cdot \Vert _{\mathscr {A}^{s_0}(\mathbb {R}^n,w;E)}, \end{aligned}$$
where \(\langle D,\mu \rangle :=\mathscr {F}^{-1}\langle \xi ,\mu \rangle \mathscr {F}=\mathscr {F}^{-1}(1+|\xi |^2+|\mu |^2)^{1/2}\mathscr {F}\).
Lemma 2.19
Let \(\mu \in \mathbb {C}\) and \(s,s_0\in \mathbb {R}\). We have the estimates
$$\begin{aligned} \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}}\eqsim \Vert \cdot \Vert _{\mathscr {A}^{s}}+\langle \mu \rangle ^{s-s_0} \Vert \cdot \Vert _{\mathscr {A}^{s_0}}\qquad&\text {if }\quad s-s_0\ge 0,\\ \Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}} \lesssim \Vert \cdot \Vert _{\mathscr {A}^{s}} \lesssim \langle \mu \rangle ^{s_0-s}\Vert \cdot \Vert _{\mathscr {A}^{s,\mu ,s_0}}\qquad&\text {if }\quad s-s_0\le 0. \end{aligned}$$
Assumption 1.2 is formulated in a way such that we can apply our versions of the Mikhlin multiplier theorem, Theorem 2.15(a) and (c). Let first \(s\ge s_0\). Note that the function
$$\begin{aligned} m:\mathbb {R}^n\times \mathbb {C}\rightarrow \mathbb {R},\,(\xi ,\mu )\mapsto \frac{\langle \xi ,\mu \rangle ^{s-s_0}}{\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0}} \end{aligned}$$
satisfies the condition from Theorem 2.15 uniformly in \(\mu \). Indeed, by induction it follows that \(\partial ^{\alpha }_{\xi }m(\xi ,\mu )\) \((\alpha \in \mathbb {N}_0^n)\) is a linear combination of terms of the form
$$\begin{aligned} p_{\beta ,i,j,k}(\xi ,\mu )=\xi ^\beta \langle \xi ,\mu \rangle ^{s-s_0-i}\langle \xi \rangle ^{(s-s_0-2)j-k}(\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})^{-1-j} \end{aligned}$$
for some \(\beta \in \mathbb {N}_0^n\) and \(i,k,j\in \mathbb {N}_0\) such that \(|\alpha |=i+2j+k-|\beta |\). But each of these terms satisfies
$$\begin{aligned} \langle \xi \rangle ^{|\alpha |}|p_{\beta ,i,j,k}(\xi ,\mu )|&=m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |}|\xi ^\beta |\langle \xi ,\mu \rangle ^{-i}\langle \xi \rangle ^{(s-s_0-2)j-k}(\langle \xi \rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})^{-j}\\&\lesssim m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |+|\beta |-i-2j-k+(s-s_0)j-(s-s_0)j}\\&\lesssim m(\xi ,\mu )\\&\lesssim 1. \end{aligned}$$
Hence, \((m(\cdot ,\mu ))_{\mu \in \mathbb {C}}\) is a bounded family of Fourier multipliers. Using this, we obtain
$$\begin{aligned} \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}}&=\Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}=\big \Vert m(D,\mu )(\langle D\rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})u\big \Vert _{\mathscr {A}^{s_0}}\\&\lesssim \big \Vert (\langle D\rangle ^{s-s_0}+\langle \mu \rangle ^{s-s_0})u\big \Vert _{\mathscr {A}^{s_0}}\\&\le \Vert u\Vert _{\mathscr {A}^{s}}+\langle \mu \rangle ^{s-s_0}\Vert u\Vert _{\mathscr {A}^{s_0}}=\bigg \Vert \frac{\langle D\rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\\&\quad +\bigg \Vert \frac{\langle \mu \rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\\&\lesssim \Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}= \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}} \end{aligned}$$
for \(s-s_0\ge 0\) and
$$\begin{aligned} \Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}}&=\Vert \langle D,\mu \rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}=\bigg \Vert \frac{\langle D,\mu \rangle ^{s-s_0}}{\langle D\rangle ^{s-s_0}}\langle D\rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\lesssim \Vert \langle D\rangle ^{s-s_0}u\Vert _{\mathscr {A}^{s_0}}\\&=\Vert u\Vert _{\mathscr {A}^{s}}=\bigg \Vert \frac{\langle D\rangle ^{s-s_0}}{\langle D,\mu \rangle ^{s-s_0}}\langle D,\mu \rangle ^{s-s_0}u\bigg \Vert _{\mathscr {A}^{s_0}}\lesssim \langle \mu \rangle ^{s_0-s}\Vert u\Vert _{\mathscr {A}^{s,\mu ,s_0}} \end{aligned}$$
for \(s-s_0\le 0\). \(\square \)
In this paper, we also consider function spaces on open intervals I. In this case, we can just define them by restriction.
Let \(I\subset \mathbb {R}\) be an open interval. Then, we define the space \((\mathscr {A}^{\bullet }(I,w;E),\Vert \cdot \Vert _{\mathscr {A}^{\bullet }(I,w;E)})\) by
$$\begin{aligned}&\mathscr {A}^{\bullet }(I,w;E)=\{f|_I:f\in \mathscr {A}^{\bullet }(\mathbb {R},w;E)\},\\&\quad \Vert g\Vert _{\mathscr {A}^{\bullet }(I,w;E)}:=\inf _{f\in \mathscr {A}^{\bullet }(\mathbb {R},w;E), f|_I=g} \Vert f\Vert _{\mathscr {A}^{\bullet }(\mathbb {R},w;E)}. \end{aligned}$$
We use the same definition for \(\mathscr {B}^{\bullet }\) and \(\mathscr {C}^{\bullet }\).
Recall that we defined \(W^{-m}_p(\mathcal {O},w;E)\) as the dual of \(W^m_{p'}(\mathcal {O},w^{-\frac{1}{p-1}};E')\) and not by restriction. In the scalar-valued unweighted setting both definitions coincide, see [44, Section 2.10.2]. We believe that the same should hold true under suitable assumptions in the weighted vector-valued setting. But since this is not important for this work, we do not investigate this any further.
Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(l\in \mathbb {N}\). Suppose that \(\mathscr {A}^{s}\) is reflexive. Then, we have the continuous embedding
$$\begin{aligned} L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r+lp};\mathscr {A}^s)\hookrightarrow W^{-l}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r}; \mathscr {A}^s). \end{aligned}$$
We should note that almost the same proof was given in [30]. By duality, it suffices to prove
$$\begin{aligned} W^{l}_{p',0}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};(\mathscr {A}^{s})')\hookrightarrow L_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'-lp'};(\mathscr {A}^{s})') \end{aligned}$$
where \(r'=-\frac{r}{p-1}\) and \(p'=\frac{p}{p-1}\). But this is a special case of [32, Corollary 3.4]. \(\square \)
Pseudo-differential operators in mixed scales
Now we briefly introduce some notions and notations concerning pseudo-differential operators. Since we only use the x-independent case in the following, we could also formulate our results in terms of Fourier multipliers. However, parameter-dependent Hörmander symbol classes provide a suitable framework for the formulation of our results. In the case of parameter-dependent symbols, we oftentimes consider spaces of smooth functions on an open set \(U\subset \mathbb {R}^n\times \mathbb {C}\). In this case, we identify \(\mathbb {C}\simeq \mathbb {R}^2\) and understand the differentiability in the real sense. If we want to understand it in the complex sense, we say holomorphic instead of smooth.
Let Z be a Banach space, \(d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function.
The space of parameter-independent Hörmander symbols \(S^d(\mathbb {R}^n;Z)\) of order d is the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n;Z)\) such that
$$\begin{aligned} \Vert p\Vert ^{(d)}_k:=\sup _{\xi \in \mathbb {R}^n, \atop \alpha \in \mathbb {N}_0^n, |\alpha |\le k} \langle \xi \rangle ^{-(d-|\alpha |)} \Vert D^{\alpha }_{\xi }p(\xi )\Vert _{Z}<\infty \end{aligned}$$
for all \(k\in \mathbb {N}_0\).
The space of parameter-dependent Hörmander symbols \(S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;Z)\) of order d is the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n\times \Sigma ;Z)\) such that
$$\begin{aligned} \Vert p\Vert ^{(d,\vartheta )}_k:=\sup _{\alpha \in \mathbb {N}_0^n,\,\gamma \in \mathbb {N}_0^2\atop |\alpha |+|\gamma |\le k}\sup _{\xi \in \mathbb {R}^n,\mu \in \Sigma } \vartheta (\mu )^{-1}\langle \xi ,\mu \rangle ^{-(d-|\alpha |_1-|\gamma |_1)} \Vert D^{\alpha }_{\xi }D_{\mu }^{\gamma }p(\xi ,\mu )\Vert _{Z}<\infty \end{aligned}$$
for all \(k\in \mathbb {N}_0\). If \(\vartheta =1\), then we also omit it in the notation.
Actually, if one omits the weight function \(\vartheta \), then the latter symbol class is the special case of parameter-dependent Hörmander symbols with regularity \(\infty \). Usually, one also includes the regularity parameter \(\nu \) in the notation of the symbol class, so that the notation \(S^{d,\infty }(\mathbb {R}^n\times \Sigma ;Z)\) is more common in the literature. But since the symbols in this paper always have infinite regularity, we omit \(\infty \) in the notation.
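As a basic example, consider the scalar symbol \(p(\xi ,\mu ):=\langle \xi ,\mu \rangle ^{d}\) for \(d\in \mathbb {R}\). Identifying \(\mu \in \mathbb {C}\) with a point of \(\mathbb {R}^2\) and writing \(z:=(\xi ,{\text {Re}}\,\mu ,{\text {Im}}\,\mu )\), we have \(\langle \xi ,\mu \rangle =\langle z\rangle \), and by induction every derivative \(D^{\alpha }_{\xi }D^{\gamma }_{\mu }\langle z\rangle ^{d}\) is a linear combination of terms of the form
$$\begin{aligned} z^{\beta }\langle z\rangle ^{d-2j}\qquad \text {with }\beta \in \mathbb {N}_0^{n+2},\,j\in \mathbb {N}_0,\,2j-|\beta |=|\alpha |+|\gamma |, \end{aligned}$$
each of which is bounded by \(\langle z\rangle ^{|\beta |+d-2j}=\langle \xi ,\mu \rangle ^{d-|\alpha |-|\gamma |}\). Hence, \(p\in S^{d}(\mathbb {R}^n\times \mathbb {C};\mathbb {C})\), and the operators \(\langle D,\mu \rangle ^{d}\) used for the parameter-dependent spaces above are the pseudo-differential operators associated with these symbols.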
For the Bessel potential case, \(\mathcal {R}\)-bounded versions of these symbol classes are useful.
Let E be a Banach space, \(N,M\in \mathbb {N}\), \(d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function.
By \(S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\), we denote the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\) such that
$$\begin{aligned} \Vert p\Vert ^{(d)}_{k,\mathcal {R}}:=\mathcal {R}\big \{\langle \xi \rangle ^{-(d-|\alpha |_1)}D^{\alpha }_{\xi }p(\xi ):\xi \in \mathbb {R}^n, \alpha \in \mathbb {N}_0^n, |\alpha |\le k \big \}<\infty \end{aligned}$$
for all \(k\in \mathbb {N}_0\).
By \(S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\), we denote the space of all smooth functions \(p\in C^{\infty }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\) such that
$$\begin{aligned}&\Vert p\Vert ^{(d,\vartheta )}_{k,\mathcal {R}}:=\mathcal {R}\big \{\vartheta (\mu )^{-1}\langle \xi ,\mu \rangle ^{-(d-|\alpha |-|\gamma |)} D^{\alpha }_{\xi }D_{\mu }^{\gamma }p(\xi ,\mu ):\\&\xi \in \mathbb {R}^n,\mu \in \Sigma ,\alpha \in \mathbb {N}_0^n,\gamma \in \mathbb {N}_0^2,|\alpha |+|\gamma |\le k\big \}<\infty \end{aligned}$$
for all \(k\in \mathbb {N}_0\).
It seems that \(\mathcal {R}\)-bounded versions of the usual Hörmander symbol classes were first considered in the Ph.D. thesis of Štrkalj. We also refer to [40].
It was observed in [10] that also the \(\mathcal {R}\)-bounded symbol classes are Fréchet spaces.
Since uniform bounds can be estimated by \(\mathcal {R}\)-bounds, we have the continuous embeddings
$$\begin{aligned}&S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\hookrightarrow S^d(\mathbb {R}^n;\mathcal {B}(E^N,E^M)),\\&\quad S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\hookrightarrow S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M)). \end{aligned}$$
Since uniform boundedness and \(\mathcal {R}\)-boundedness for a set of scalars are equivalent, we have that
$$\begin{aligned} S^d(\mathbb {R}^n;\mathbb {C})\hookrightarrow S^d_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N)),\quad S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathbb {C})\hookrightarrow S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N)). \end{aligned}$$
Given \(d_1,d_2\in \mathbb {R}\), \(\vartheta _1,\vartheta _2:\Sigma \rightarrow (0,\infty )\) and \(N_1,N_2,N_3\in \mathbb {N}\) we have the continuous bilinear mappings
$$\begin{aligned}&S^{d_2}(\mathbb {R}^n;\mathcal {B}(E^{N_2},E^{N_3}))\times S^{d_1}(\mathbb {R}^n;\mathcal {B}(E^{N_1},E^{N_2}))\\&\rightarrow S^{d_1+d_2}(\mathbb {R}^n;\mathcal {B}(E^{N_1},E^{N_3})),\,(p_2,p_1)\mapsto p_2p_1,\\&S^{d_2,\vartheta _2}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_2},E^{N_3}))\times S^{d_1,\vartheta _1}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_1},E^{N_2}))\\&\qquad \rightarrow S^{d_1+d_2,\vartheta _1\cdot \vartheta _2}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N_1},E^{N_3})),\,(p_2,p_1)\mapsto p_2p_1. \end{aligned}$$
The same properties also hold for the \(\mathcal {R}\)-bounded versions.
The differential operator \(\partial ^{\alpha }\) with \(\alpha \in \mathbb {N}_0^n\) is a continuous linear operator
$$\begin{aligned} S^{d}(\mathbb {R}^n;\mathcal {B}(E^{N},E^{M}))\rightarrow S^{d-|\alpha |}(\mathbb {R}^n;\mathcal {B}(E^{N},E^{M})),\,p\mapsto \partial ^{\alpha }p,\\ S^{d,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M}))\rightarrow S^{d-|\alpha |,\vartheta }(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M})),\,p\mapsto \partial ^{\alpha }p. \end{aligned}$$
One could also view parameter-independent symbol classes as a subset of parameter-dependent symbol classes with bounded \(\Sigma \subset \mathbb {C}\) which consists of those symbols which do not depend on the parameter \(\mu \). Hence, the statements we formulate for parameter-dependent symbol classes in the following also hold in a similar way in the parameter-independent case.
Let E be a Banach space, \(d\in \mathbb {R}\) and \(\Sigma \subset \mathbb {C}\) open. Let further \(p\in S^d(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^{N},E^{M}))\). Then, we define the corresponding pseudo-differential operator by
$$\begin{aligned} (P_{\mu }f)(x):=({\text {op}}[p(\,\cdot \,,\mu )]f)(x):=[\mathscr {F}^{-1}p(\,\cdot \,,\mu )\mathscr {F}f](x)=\frac{1}{(2\pi )^{n/2}}\int _{\mathbb {R}^n}e^{ix\xi }p(\xi ,\mu )\widehat{f}(\xi )\,d\xi \end{aligned}$$
for \(f\in \mathscr {S}(\mathbb {R}^n;E^N)\).
Since we only consider x-independent symbols, the mapping properties of such pseudo-differential operators are an easy consequence of Mikhlin's theorem.
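For instance, a constant-coefficient differential operator \(P=\sum _{|\alpha |\le m}a_{\alpha }D^{\alpha }\) with \(a_{\alpha }\in \mathcal {B}(E^N,E^M)\) is the pseudo-differential operator \({\text {op}}[p]\) with symbol
$$\begin{aligned} p(\xi )=\sum _{|\alpha |\le m}a_{\alpha }\xi ^{\alpha }\in S^{m}(\mathbb {R}^n;\mathcal {B}(E^N,E^M)), \end{aligned}$$
since \(\Vert D^{\beta }_{\xi }p(\xi )\Vert \lesssim \langle \xi \rangle ^{m-|\beta |}\) for all \(\beta \in \mathbb {N}_0^n\); as finite sets of operators are always \(\mathcal {R}\)-bounded, Kahane's contraction principle even gives \(p\in S^m_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E^N,E^M))\).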
Let \(N,M\in \mathbb {N}\), \(s,s_0,d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. Consider one of the following two cases
\(\mathscr {A}^{\bullet }\) belongs to the Bessel potential scale and \(S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))=S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\).
\(\mathscr {A}^{\bullet }\) belongs to the Besov or the Triebel–Lizorkin scale and \(S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))=S^{d,\vartheta }(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\).
Then, the mapping
$$\begin{aligned}&S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\times \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)\\&\rightarrow \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M),\; (p,f)\mapsto {\text {op}}[p(\,\cdot \,,\mu )]f \end{aligned}$$
defined by extension from \(\mathscr {S}(\mathbb {R}^{n},E^N)\) to \(\mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)\) is bilinear and continuous. Moreover, there is a constant \(C>0\) independent of \(\vartheta \) such that
$$\begin{aligned} \Vert {\text {op}}[p(\,\cdot \,,\mu )]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)} \le C \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_N\Vert f\Vert _{ \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)} \end{aligned}$$
for all \(\mu \in \Sigma \).
It is obvious that the mapping is bilinear. For the continuity, we note that
$$\begin{aligned}&S^{d,\vartheta }_{\mathscr {A}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M))\\&\rightarrow S^{0,\vartheta }_{\mathscr {A}}(\mathbb {R}^n\times \Sigma ;\mathcal {B}(E^N,E^M)), p\mapsto [(\xi ,\mu )\mapsto p(\xi ,\mu )\langle \xi ,\mu \rangle ^{-d}] \end{aligned}$$
is continuous. Hence, by Mikhlin's theorem there is an \(N'\in \mathbb {N}\) such that
$$\begin{aligned} \Vert {\text {op}}[p(\,\cdot \,,\mu )]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)}&=\Vert {\text {op}}[\langle \cdot ,\mu \rangle ^{s_0-s}]\\&{\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s,\mu ,s_0}(\mathbb {R}^{n},w_0,E^M)}\\&= \Vert {\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s_0}(\mathbb {R}^{n},w_0,E^M)}\\&\lesssim \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_{N',\mathscr {A}}\Vert {\text {op}}[\langle \cdot ,\mu \rangle ^{d+s-s_0}]f\Vert _{ \mathscr {A}^{s_0}(\mathbb {R}^{n},w_0,E^N)}\\&= \vartheta (\mu )\Vert p\Vert ^{(d,\vartheta )}_{N',\mathscr {A}}\Vert f\Vert _{ \mathscr {A}^{s+d,\mu ,s_0}(\mathbb {R}^{n},w_0,E^N)}. \end{aligned}$$
This also shows the asserted estimate. \(\square \)
We can also formulate an \(\mathcal {R}\)-bounded version of Proposition 3.5 without the parameter-dependence of the function spaces.
Let \(N,M\in \mathbb {N}\), \(s,d\in \mathbb {R}\), \(\Sigma \subset \mathbb {C}\) open and \(\vartheta :\Sigma \rightarrow (0,\infty )\) a function. Consider one of the following two cases
\(\mathscr {A}^{\bullet }\) belongs to the Bessel potential scale and E satisfies Pisier's property \((\alpha )\) in addition to Assumption 1.2.
\(\mathscr {A}^{\bullet }\) belongs to the Besov or the Triebel–Lizorkin scale.
Then, the mapping
$$\begin{aligned}&S^{d,\vartheta }_{\mathcal {R}}(\mathbb {R}^{n}\times \Sigma ;\mathcal {B}(E^N,E^M))\times \mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N)\\&\rightarrow \mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M),\; (p,f)\mapsto {\text {op}}[p(\,\cdot \,,\mu )]f \end{aligned}$$
defined by extension from \(\mathscr {S}(\mathbb {R}^n,E^N)\) to \(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N)\) is bilinear and continuous. Moreover, there is a constant \(C>0\) independent of \(\vartheta \) such that
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}\langle \mu \rangle ^{-d_+}{\text {op}}[p(\,\cdot \,,\mu )]:\mu \in \Sigma \}\big )\le C \Vert p\Vert ^{(d,\vartheta )}_N. \end{aligned}$$
Note that \(m(\,\cdot \,,\mu ):=[\xi \mapsto \langle \mu \rangle ^{-d_+}\langle \xi ,\mu \rangle ^d\langle \xi \rangle ^{-d}]\) satisfies Mikhlin's condition uniformly in \(\mu \). Indeed, by induction on \(|\alpha |\) one gets that \(\partial ^{\alpha }\langle \mu \rangle ^{-d_+}\langle \xi ,\mu \rangle ^d\langle \xi \rangle ^{-d}\) is a linear combination of terms of the form
$$\begin{aligned} p_{j,k}(\xi ,\mu )=\xi ^{\beta }\langle \xi ,\mu \rangle ^{d-2j}\langle \xi \rangle ^{-d-2k}\langle \mu \rangle ^{-d_+} \end{aligned}$$
for some \(\beta \in \mathbb {N}_0^{n}\), \(j,k\in \mathbb {N}_0\) with \(|\alpha |=2j+2k-|\beta |\). For such a term, we obtain
$$\begin{aligned} \langle \xi \rangle ^{|\alpha |}|p_{j,k}(\xi ,\mu )|&=m(\xi ,\mu )|\xi ^{\beta }|\langle \xi ,\mu \rangle ^{-2j}\langle \xi \rangle ^{|\alpha |-2k}\\&\le m(\xi ,\mu )\langle \xi \rangle ^{|\alpha |+|\beta |-2j-2k}\\&\lesssim 1. \end{aligned}$$
Hence, by Mikhlin's theorem there is an \(N'\in \mathbb {N}\) such that
$$\begin{aligned}&\mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}\langle \mu \rangle ^{-d_+}{\text {op}}[p(\,\cdot \,,\mu )]:\mu \in \Sigma \}\big )\\&\quad = \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s+d}(\mathbb {R}^{n},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}{\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]{\text {op}}[\langle \mu \rangle ^{-d_+} \langle \cdot ,\mu \rangle ^{d}]:\mu \in \Sigma \}\big )\\&\quad \lesssim \mathcal {R}_{\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^N),\mathscr {A}^{s}(\mathbb {R}^{n},w_0;E^M))}\big (\{\vartheta (\mu )^{-1}{\text {op}}[p(\,\cdot \,,\mu )\langle \cdot ,\mu \rangle ^{-d}]:\mu \in \Sigma \}\big )\\&\quad \lesssim \Vert p\Vert ^{(d,\vartheta )}_{N'}. \end{aligned}$$
Proposition 3.7
(Iterated version of Mikhlin's theorem) Let \(s,k\in \mathbb {R}\) and let E be a Banach space. Consider one of the following cases with Assumption 1.2 in mind, with \(\mathscr {B}\) being defined on \(I_{x_n}=\mathbb {R}\) and with \(m\in L_{\infty }(\mathbb {R}^n;\mathcal {B}(E))\) being smooth enough:
Neither \(\mathscr {A}\) nor \(\mathscr {B}\) stands for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{m,N}:=\sup \big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\Vert \partial _{\xi }^{\alpha }m(\xi ',\xi _n)\Vert _{\mathcal {B}(E)}: \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
\(\mathscr {A}\) stands for the Bessel potential scale and \(\mathscr {B}\) does not stand for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{m,N}:=\sup _{\xi _n\in \mathbb {R},\,\alpha _n\in \mathbb {N}_0,\,\alpha _n\le N}\mathcal {R}\big \{|\xi '|^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N, \xi '\in \mathbb {R}^{n-1}\big \}. \end{aligned}$$
\(\mathscr {B}\) stands for the Bessel potential scale and \(\mathscr {A}\) does not stand for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{m,N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
Both \(\mathscr {A}\) and \(\mathscr {B}\) stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{m,N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n): \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
There is an \(N\in \mathbb {N}_0\) and a constant \(C>0\) independent of m such that
$$\begin{aligned} \Vert {\text {op}}[m]\Vert _{\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s))}\le C\kappa _{m,N}. \end{aligned}$$
First, we note that \({\text {op}}[\partial _{\xi _n}^{\alpha _n}m(\,\cdot ,\xi _n)]=\partial _{\xi _n}^{\alpha _n}{\text {op}}[m(\,\cdot ,\xi _n)]\), \(\alpha _n\in \mathbb {N}\), if m is smooth enough. Indeed, let \(\varepsilon >0\) be small enough and \(h\in (-\varepsilon ,\varepsilon )\). Then, we have
$$\begin{aligned}&\big \Vert {\text {op}}\big [\tfrac{1}{h}(m(\,\cdot ,\xi _n+h)-m(\,\cdot ,\xi _n))-\partial _nm(\,\cdot ,\xi _n) \big ]\big \Vert _{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\\&\quad \le C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\Vert \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}[\tfrac{1}{h}(m(\xi ',\xi _n+h)-m(\xi ',\xi _n))-\partial _nm(\xi ',\xi _n) ]\Vert _{\mathcal {B}(E)}\\&\quad = C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\bigg \Vert \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}\bigg [\int _0^1\partial _n m(\xi ',\xi _n+sh)-\partial _n m(\xi ',\xi _n)\,ds\bigg ]\bigg \Vert _{\mathcal {B}(E)}\\&\quad \le C\sup _{\alpha '\in \mathbb {N}_0^{n-1},|\alpha '|\le N'}\sup _{\xi '\in \mathbb {R}^{n-1}}\sup _{s\in [0,1]}\langle \xi '\rangle ^{|\alpha '|}\Vert \partial _{\xi '}^{\alpha '}\partial _n[m(\xi ',\xi _n+s h)-m(\xi ',\xi _n)]\Vert _{\mathcal {B}(E)} \end{aligned}$$
Now we can use the uniform continuity of
$$\begin{aligned} \mathbb {R}^{n-1}\times (-\varepsilon ,\varepsilon )\rightarrow \mathcal {B}(E),\,(\xi ',h)\mapsto \langle \xi '\rangle ^{|\alpha '|}\partial _{\xi '}^{\alpha '}\partial _n m(\xi ',\xi _n+h) \end{aligned}$$
to see that we have convergence to 0 as \(h\rightarrow 0\) in the above estimate. The uniform continuity follows from the boundedness of the derivatives (if m is smooth enough). For derivatives of order \(\alpha _n\ge 2\) we can apply the same argument to \(\partial _{\xi _n}^{\alpha _n-1}m\).
The idea is now to apply Mikhlin's theorem twice. For example, in case (d) one obtains
$$\begin{aligned} \Vert {\text {op}}[m]\Vert _{\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s))}&\lesssim \mathcal {R}_{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\big (\{|\xi _n|^{\alpha _n}\partial _{\xi _n}^{\alpha _n}{\text {op}}[m(\,\cdot ,\xi _n)]:\alpha _n\in \mathbb {N}_0,\alpha _n\le N_n,\xi _n\in \mathbb {R}\}\big )\\&=\mathcal {R}_{\mathcal {B}(\mathscr {A}^s,\mathscr {A}^s)}\big (\{{\text {op}}[|\xi _n|^{\alpha _n}\partial _{\xi _n}^{\alpha _n}m(\,\cdot ,\xi _n)]:\alpha _n\in \mathbb {N}_0,\alpha _n\le N_n,\xi _n\in \mathbb {R}\}\big )\\&\lesssim \kappa _{m,N} \end{aligned}$$
by Theorem 2.15(b) for \(N_n,N\in \mathbb {N}_0\) large enough. The other cases are obtained analogously. \(\square \)
There is also an \(\mathcal {R}\)-bounded version of Proposition 3.7.
(Iterated \(\mathcal {R}\)-bounded version of Mikhlin's theorem) Let \(s,k\in \mathbb {R}\) and let E be a Banach space. Consider one of the following cases with Assumption 1.2 in mind and with \(\mathcal {M}\subset C^{\widetilde{N}}(\mathbb {R}^n{\setminus }\{0\};\mathcal {B}(E))\) with \(\widetilde{N}\in \mathbb {N}_0\) being large enough:
Neither \(\mathscr {A}\) nor \(\mathscr {B}\) stands for the Bessel potential scale. For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
\(\mathscr {A}\) stands for the Bessel potential scale, \(\mathscr {B}\) does not stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
\(\mathscr {B}\) stands for the Bessel potential scale, \(\mathscr {A}\) does not stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ): m\in \mathcal {M},\alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
Both \(\mathscr {A}\) and \(\mathscr {B}\) stand for the Bessel potential scale and E satisfies Pisier's property \((\alpha )\). For \(N\in \mathbb {N}_0\), we define
$$\begin{aligned} \kappa _{\mathcal {M},N}:=\mathcal {R}\big \{|\xi '|^{|\alpha '|}|\xi _n|^{\alpha _n}\partial _{\xi }^{\alpha }m(\xi ',\xi _n):m\in \mathcal {M}, \alpha \in \mathbb {N}_0^{n},|\alpha |\le N, \xi \in \mathbb {R}^{n}\big \}. \end{aligned}$$
There is an \(N\in \mathbb {N}_0\) and a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}(\{{\text {op}}[m]:m\in \mathcal {M}\})\le C\kappa _{\mathcal {M},N}\quad \text {in }\mathcal {B}(\mathscr {B}^{k}(\mathscr {A}^s)). \end{aligned}$$
This follows by the same proof as that of Proposition 3.7. One just has to use the \(\mathcal {R}\)-bounded versions of Mikhlin's theorem. \(\square \)
(Lifting Property for Mixed Scales). Let \(s,k,t_0,t_1\in \mathbb {R}\). Then,
$$\begin{aligned} \langle D_n\rangle ^{t_0}\langle D'\rangle ^{t_1}:\mathscr {B}^{k+t_0}(\mathscr {A}^{s+t_1}){\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {B}^{k}(\mathscr {A}^{s}) \end{aligned}$$
is an isomorphism of Banach spaces.
If \(\mathscr {A}^\bullet \) or \(\mathscr {B}^{\bullet }\) belongs to the Bessel potential scale, then it follows from the definition of Bessel potential spaces that
$$\begin{aligned} \langle D'\rangle ^{t_1} :\mathscr {A}^{s+t_1}{\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {A}^{s}\quad \text {or} \quad \langle D_n\rangle ^{t_0}:\mathscr {B}^{k+t_0}(\mathscr {A}^{s}){\mathop {\rightarrow }\limits ^{\simeq }} \mathscr {B}^{k}(\mathscr {A}^{s}), \end{aligned}$$
respectively. In the other cases, this is the statement of [37, Proposition 3.9]. Composing the two mappings yields the assertion. \(\square \)
Let \(s,k\in \mathbb {R}\) and \(t\ge 0\). Suppose that E has Pisier's property \((\alpha )\) if both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale. Then,
$$\begin{aligned} \langle D\rangle ^{t}:\mathscr {B}^{k+t}(\mathscr {A}^{s})\cap \mathscr {B}^{k}(\mathscr {A}^{s+t}){\mathop {\rightarrow }\limits ^{\simeq }}\mathscr {B}^k(\mathscr {A}^s) \end{aligned}$$
is an isomorphism of Banach spaces.
By the assumptions we imposed on \(\mathscr {B}^{\bullet }(\mathscr {A}^\bullet )\), we can apply Mikhlin's theorem. We define
$$\begin{aligned} f:\mathbb {R}^{n}\rightarrow \mathbb {R},\;\xi \mapsto \frac{\langle \xi \rangle ^{t}}{\langle \xi '\rangle ^{t}+\langle \xi _n\rangle ^{t}} \end{aligned}$$
which satisfies the assumptions of Proposition 3.7. Indeed, by induction we have that \(\partial _{\xi }^{\alpha }f\) is a linear combination of terms of the form
$$\begin{aligned} \xi ^{\beta }\langle \xi \rangle ^{t-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-1-j'-j_n} \end{aligned}$$
for some \(\beta \in \mathbb {N}_0^n\), \(i',i_n,j',j_n,k',k_n\in \mathbb {N}_0\) such that \(\alpha _n=i_n+2j_n+k_n-\beta _n\) and \(|\alpha '|=i'+2j'+k'-|\beta '|\). But for such a term we have that
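As a quick illustration (a sanity check only, not needed for the argument), consider the first-order case \(\alpha =e_n\), i.e., \(\alpha '=0\) and \(\alpha _n=1\):
$$\begin{aligned} \partial _{\xi _n}f(\xi )=\frac{t\xi _n\langle \xi \rangle ^{t-2}}{\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t}-\frac{t\xi _n\langle \xi _n\rangle ^{t-2}\langle \xi \rangle ^{t}}{(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{2}}. \end{aligned}$$
Up to the constant factor t, the first term is of the stated form with \(\beta =e_n\), \(i_n=2\) and all other indices equal to 0, and the second one with \(\beta =e_n\), \(j_n=1\) and all other indices equal to 0; in both cases \(\alpha _n=i_n+2j_n+k_n-\beta _n=1\) and \(|\alpha '|=0\), as required.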
$$\begin{aligned}&|\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\xi ^{\beta }\langle \xi \rangle ^{t-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-1-j'-j_n}|\\&\quad = |f(\xi )\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{\alpha _n}\xi ^{\beta }\langle \xi \rangle ^{-i'-i_n}\langle \xi '\rangle ^{(t-2)j'-k'}\langle \xi _n\rangle ^{(t-2)j_n-k_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-j'-j_n}|\\&\quad \le f(\xi )[\langle \xi '\rangle ^{|\alpha '|+|\beta '|-i'-2j'-k'}\langle \xi _n\rangle ^{\alpha _n+\beta _n-i_n-2j_n-k_n}][\langle \xi '\rangle ^{tj'}\langle \xi _n\rangle ^{tj_n}(\langle \xi '\rangle ^t+\langle \xi _n\rangle ^t)^{-j'-j_n}]\\&\quad \le f(\xi )\\&\quad \le 1. \end{aligned}$$
This shows that f satisfies the assumptions of Proposition 3.7. Therefore, we obtain
$$\begin{aligned} \Vert \langle D\rangle ^{t}u\Vert _{\mathscr {B}^k(\mathscr {A}^s)}&=\Vert f(D)(\langle D'\rangle ^{t}+\langle D_n\rangle ^{t})u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\lesssim \Vert (\langle D'\rangle ^{t}+\langle D_n\rangle ^{t})u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\\&\lesssim \max \{\Vert u \Vert _{\mathscr {B}^{k+t}(\mathscr {A}^{s})},\Vert u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s+t})}\} \end{aligned}$$
Conversely,
$$\begin{aligned} \max \{\Vert u \Vert _{\mathscr {B}^{k+t}(\mathscr {A}^{s})},\Vert u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s+t})}\}&\le \Vert \langle D_n\rangle ^t u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s})}+\Vert \langle D'\rangle ^t u \Vert _{\mathscr {B}^{k}(\mathscr {A}^{s})}\\&= \bigg \Vert \frac{\langle D'\rangle ^{t}}{\langle D\rangle ^{t}}\langle D\rangle ^{t}u \bigg \Vert _{\mathscr {B}^k(\mathscr {A}^s)}+\bigg \Vert \frac{\langle D_n\rangle ^{t}}{\langle D\rangle ^{t}}\langle D\rangle ^{t}u \bigg \Vert _{\mathscr {B}^k(\mathscr {A}^s)}\\&\lesssim \Vert \langle D\rangle ^{t}u \Vert _{\mathscr {B}^k(\mathscr {A}^s)}. \end{aligned}$$
This proves the assertion. \(\square \)
Let \(s,k,d\in \mathbb {R}\). Let
$$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)):={\left\{ \begin{array}{ll} S^d(\mathbb {R}^n,\mathcal {B}(E))&{}\quad \text { if neither }\mathscr {A}\text { nor }\mathscr {B}\\ &{} \qquad \qquad \text { stands for the Bessel potential scale},\\ S^d_{\mathcal {R}}(\mathbb {R}^n,\mathcal {B}(E))&{}\quad \text { otherwise}. \end{array}\right. } \end{aligned}$$
Suppose that E has Pisier's property \((\alpha )\) if both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale.
If \(d\le 0\), then
$$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)) \times \mathscr {B}^k(\mathscr {A}^s) \rightarrow \mathscr {B}^{k-d}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s-d}),\;(p,u)\mapsto {\text {op}}[p] u \end{aligned}$$
is bilinear and continuous.
If \(d\ge 0\), then
$$\begin{aligned} S^d_{\mathscr {B},\mathscr {A}}(\mathbb {R}^n,\mathcal {B}(E)) \times (\mathscr {B}^{k+d}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+d})) \rightarrow \mathscr {B}^k(\mathscr {A}^s),\;(p,u)\mapsto {\text {op}}[p] u \end{aligned}$$
is bilinear and continuous.
By writing \(p(\xi )=\frac{p(\xi )}{\langle \xi \rangle ^d}\langle \xi \rangle ^d\) and using Proposition 3.10, we only have to treat the case \(d=0\). But this case is included in the iterated version of Mikhlin's theorem, Proposition 3.7. Indeed, for a symbol \(p\in S^0(\mathbb {R}^n,\mathcal {B}(E))\) we have
$$\begin{aligned} \sup _{\xi \in \mathbb {R}^n\atop \alpha \in \mathbb {N}_0^n,|\alpha |\le k} \langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{|\alpha _n|}\Vert \partial _{\xi }^{\alpha }p(\xi )\Vert _{\mathcal {B}(E)}\le \sup _{\xi \in \mathbb {R}^n\atop \alpha \in \mathbb {N}_0^n,|\alpha |\le k} \langle \xi \rangle ^{|\alpha |}\Vert \partial _{\xi }^{\alpha }p(\xi )\Vert _{\mathcal {B}(E)}<\infty \end{aligned}$$
for all \(k\in \mathbb {N}_0\). If \(p\in S^0_{\mathcal {R}}(\mathbb {R}^n,\mathcal {B}(E))\), we can use Kahane's contraction principle in order to obtain
$$\begin{aligned}&\mathcal {R}\big \{\langle \xi '\rangle ^{|\alpha '|}\langle \xi _n\rangle ^{|\alpha _n|}\partial _{\xi }^{\alpha }p(\xi ): \alpha \in \mathbb {N}_0^n,|\alpha |\le k,\xi \in \mathbb {R}^n\big \}\\&\quad \le \mathcal {R}\big \{\langle \xi \rangle ^{|\alpha |}\partial _{\xi }^{\alpha }p(\xi ): \alpha \in \mathbb {N}_0^n,|\alpha |\le k,\xi \in \mathbb {R}^n\big \}. \end{aligned}$$
Poisson operators in mixed scales
Consider Eq. (1-1) with \(f=0\), i.e.,
$$\begin{aligned} \lambda u -A(D)u&=0\quad \;\text {in }\mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1}. \end{aligned}$$
Recall that we always assume that the ellipticity condition and the Lopatinskii–Shapiro condition are satisfied in the sector \(\Sigma _{\phi '}{\setminus }\{0\}\) with \(\phi '\in (0,\pi )\) and that \(\phi \in (0,\phi ')\). The solution operators of (4-1) which map the boundary data \(g=(g_1,\ldots ,g_m)\) to the solution u are called Poisson operators. This notion comes from the Boutet de Monvel calculus, where Poisson operators are part of the so-called singular Green operator matrices. These matrices were introduced to extend the idea of pseudo-differential operators to boundary value problems. They allow for a unified treatment of boundary value problems and their solution operators, since both of them are contained in the algebra of singular Green operator matrices. In this work however, we do not need this theory in full generality. Instead, we just focus on Poisson operators.
We will use a solution formula for the Poisson operator corresponding to (4-1) which was derived in the classical work [8, Proposition 6.2] by Denk, Hieber and Prüss. In order to derive this formula, a Fourier transform in the tangential directions of (4-1) is applied. This yields a linear ordinary differential equation of order 2m at each point in the frequency space. This ordinary differential equation is then transformed to a linear first-order system which can easily be solved by an exponential function if one knows the values of
$$\begin{aligned} U(\xi ',0):=(\mathscr {F}'u(\xi ',0),\partial _n\mathscr {F}'u(\xi ',0),\ldots ,\partial _n^{2m-1}\mathscr {F}'u(\xi ',0)). \end{aligned}$$
The Lopatinskii–Shapiro condition ensures that those vectors \(U(\xi ',0)\) which yield a stable solution can be uniquely determined from \((\mathscr {F}'g_1,\ldots ,\mathscr {F}'g_m)\). The operator which gives this solution is denoted by
$$\begin{aligned} M(\xi ',\lambda ):E^m\rightarrow E^{2m},\,(\mathscr {F}'g_1(\xi '),\ldots ,\mathscr {F}'g_m(\xi '))\mapsto U(\xi ',0). \end{aligned}$$
Now one just has to take the inverse Fourier transform of the solution and a projection to the first component. The latter is necessary to come back from the solution of the first-order system to the solution of the higher-order equation.
This would already be enough to derive a good solution formula. However, in [8] an additional rescaling was introduced so that compactness arguments can be applied. More precisely, the variables \(\rho (\xi ',\lambda )=\langle \xi ',|\lambda |^{1/2m}\rangle =(1+|\xi '|^2+|\lambda |^{1/m})^{1/2}\), \(b=\xi '/\rho \) and \(\sigma =\lambda /\rho ^{2m}\) are introduced in the Fourier image. The solution formula is then written in terms of \((\rho ,b,\sigma )\) instead of \((\xi ',\lambda )\). For this reformulation, it is crucial that the operators \(A,B_1,\ldots ,B_m\) are homogeneous, i.e., that there are no lower-order terms. And even though this rescaling makes the formulas more involved, the compactness arguments which can be used as a consequence are very useful.
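To indicate why homogeneity is crucial here, note the following short side computation (it uses only that the symbol \(A(\xi ',\xi _n)\) is positively homogeneous of degree 2m; the precise bookkeeping in [8] is more involved):
$$\begin{aligned} \lambda -A(\xi ',\xi _n)=\sigma \rho ^{2m}-A(\rho b,\xi _n)=\rho ^{2m}\big (\sigma -A(b,\xi _n/\rho )\big ). \end{aligned}$$
Hence, after rescaling the normal covariable by \(\rho \), the equation only depends on \((b,\sigma )\) and on the single scalar parameter \(\rho \); lower-order terms would destroy this structure.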
After carrying out all these steps, the solution can be represented by
$$\begin{aligned} u(x)={\text {pr}}_1[{\text {Poi}}(\lambda )g](x) \end{aligned}$$
where \({\text {pr}}_1:E^{2m}\rightarrow E,\,w\mapsto w_1\) is the projection onto the first component,
\(g=(g_1,\ldots ,g_m)^T\) and the operator \({\text {Poi}}(\lambda ):E^m\rightarrow E^{2m}\) is given by
$$\begin{aligned} \left[ {\text {Poi}}(\lambda )g\right] (x):=\big [(\mathscr {F}')^{-1} e^{i\rho A_0(b,\sigma )x_n}M(b,\sigma )\hat{g}_{\rho }\big ](x'). \end{aligned}$$
Here, \(\mathscr {F}'\) is the Fourier transform along \(\mathbb {R}^{n-1}\), i.e., in tangential direction,
\(A_0\) is a smooth function with values in \(\mathcal {B}(E^{2m},E^{2m})\) which one obtains from \(\lambda -A(\xi ',D_n)\) after a reduction to a first-order system,
M is a smooth function with values in \(\mathcal {B}(E^{m},E^{2m})\) which maps the values of the boundary operators applied to the stable solution v to the vector containing its derivatives at \(x_n=0\) up to the order \(2m-1\), i.e.,
$$\begin{aligned} (B_1(D)v(0),\ldots , B_m(D)v(0))^T\mapsto (v(0),\partial _n v(0),\ldots , \partial _n^{2m-1} v(0))^T, \end{aligned}$$
\(\rho \) is a positive parameter that can be chosen in different ways depending on \(\xi '\) and \(\lambda \). In our case, it will be given by \(\rho (\xi ',\lambda )=\langle \xi ',|\lambda |^{1/2m}\rangle =(1+|\xi '|^2+|\lambda |^{1/m})^{1/2}\),
\(b=\xi '/\rho \), \(\sigma =\lambda /\rho ^{2m}\) and \(\hat{g}_{\rho }=((\mathscr {F}'g_1)/\rho ^{m_1},\ldots ,(\mathscr {F}'g_m)/\rho ^{m_m})^T\).
Again, we want to emphasize that b, \(\sigma \), and \(\rho \) depend on \(\xi '\) and \(\lambda \). We only neglect this dependence in the notation for the sake of readability.
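Let us also record the following elementary computation (implicit in the compactness arguments below): since \(\rho ^2=1+|\xi '|^2+|\lambda |^{1/m}\), \(|\xi '|^2=\rho ^2|b|^2\) and \(|\lambda |^{1/m}=|\sigma |^{1/m}\rho ^2\), we have
$$\begin{aligned} \rho ^2=1+\rho ^2\big (|b|^2+|\sigma |^{1/m}\big ),\qquad \text {i.e.,}\qquad |b|^2+|\sigma |^{1/m}=1-\rho ^{-2}\in [0,1). \end{aligned}$$
In particular, the range of \((b,\sigma )\) is bounded, which is what makes the compactness arguments mentioned above possible.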
Another operator that we will use later is the spectral projection \(\mathscr {P}_{-}\) of the matrix \(A_0\) to the part of the spectrum that lies above the real line. This spectral projection has the property that \(\mathscr {P}_-(b,\sigma )M(b,\sigma )=M(b,\sigma )\).
For our purposes, we will rewrite the above representation in the following way: For \(j=1,\ldots ,m\) we write
$$\begin{aligned} M_{\rho ,j}(b,\sigma )\hat{g}_j := M(b,\sigma )\frac{\hat{g}_j\otimes e_j}{\rho ^{m_j}}, \end{aligned}$$
where \(\hat{g}_j\otimes e_j\) denotes the m-tuple whose j-th component equals \(\hat{g}_j\) and whose other components are all equal to 0, as well as
$$\begin{aligned}{}[{\text {Poi}}_j(\lambda )g_j](x):=\big [(\mathscr {F}')^{-1} e^{i\rho A_0(b,\sigma )x_n}M_{\rho ,j}(b,\sigma )\hat{g}_j\big ](x') \end{aligned}$$
so that we obtain
$$\begin{aligned} u= {\text {pr}}_1{\text {Poi}}(\lambda )g={\text {pr}}_1\sum _{j=1}^m{\text {Poi}}_j(\lambda )g_j. \end{aligned}$$
If we look at Formula (4-2), we can already see that the solution operator is actually just an exponential function in normal direction. As such, it should be arbitrarily smooth. Of course, one has to specify the topology in tangential direction with respect to which this smoothness is understood. It is the aim of this section to analyze this carefully. We treat (4-2) as a function of \(x_n\) with values in the space of pseudo-differential operators in tangential direction. Since (4-2) is exponentially decaying in \(\xi '\) if \(x_n>0\), the pseudo-differential operators will have order \(-\infty \), i.e., they are smoothing. Hence, the solutions will also be arbitrarily smooth, no matter how rough the boundary data is. However, the exponential decay becomes slower as one approaches \(x_n=0\). This will lead to singularities if (4-2) is considered as a function of \(x_n\) with values in the space of pseudo-differential operators of a fixed order. In the following, we study how strong these singularities are, depending on the regularity in normal and tangential direction with respect to which they are measured. The answer will be given in Theorem 4.16. Therein, one may choose the regularity k and the integrability p in normal direction, the regularity t in tangential direction of the solution and the regularity s of the boundary data. Then, the parameter r in the relation \(r-p[t+k-m_j-s]_+>-1\) describes the singularity at the boundary: it is the power of the power weight which one has to add to the solution space so that the Poisson operator becomes a well-defined continuous operator between the chosen spaces.
In the following, we oftentimes substitute \(\mu =\lambda ^{1/2m}\) for homogeneity reasons; here \(\lambda ^{1/2m}\) denotes one of the 2m complex roots of \(\lambda \). If \(\lambda \) is above the real line, then we take \(\mu \) to be the first of these roots, and if \(\lambda \) is below the real line, we take \(\mu \) to be the last of these roots. If \(\lambda >0\), then we just take the ordinary positive root. In all cases, \(\arg \mu =\frac{1}{2m}\arg \lambda \) with \(\arg \lambda \in (-\pi ,\pi )\), so that \(\lambda \in \Sigma _{\phi }\) corresponds to \(\mu \in \Sigma _{\phi /2m}\).
A domain \(\mathcal {O}\) is called plump with parameters \(R>0\) and \(\delta \in (0,1]\) if for all \(x\in \mathcal {O}\) and all \(r\in (0,R]\) there exists a \(z\in \mathcal {O}\) such that
$$\begin{aligned} B(z,\delta r)\subset B(x,r)\cap \mathcal {O}. \end{aligned}$$
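As an elementary example (not taken from the text, but perhaps helpful for orientation): every open ball \(B(x_0,R)\) is plump with parameters R and \(\delta =\tfrac{1}{4}\). Given \(x\in B(x_0,R)\) and \(r\in (0,R]\), one may choose
$$\begin{aligned} z:={\left\{ \begin{array}{ll} x+\tfrac{r}{2}\,\tfrac{x_0-x}{|x_0-x|}&{}\quad \text {if }|x_0-x|\ge \tfrac{r}{2},\\ x_0&{}\quad \text {if }|x_0-x|<\tfrac{r}{2}, \end{array}\right. } \end{aligned}$$
and one verifies directly that \(B(z,\tfrac{r}{4})\subset B(x,r)\cap B(x_0,R)\).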
Let \(E_0,E_1\) be Banach spaces. As described above, we take \(\mu =\lambda ^{1/2m}\) so that \(\rho (\xi ',\mu )=\langle \xi ',\mu \rangle \), \(b=\xi '/\rho \) and \(\sigma =\mu ^{2m}/\rho ^{2m}\). Let \(U\subset \mathbb {R}^{n-1}\times \Sigma _{\phi }\) be a plump and bounded neighborhood of the range of \((b,\sigma )\). Then, the mapping
$$\begin{aligned} BUC^{\infty }(U,\mathcal {B}(E_0,E_1))\rightarrow S^0_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E_0,E_1)), A \mapsto A\circ (b,\sigma ) \end{aligned}$$
is well defined and continuous.
A similar proof was carried out in [22, Proposition 4.21]. We combine this proof with [24, Theorem 8.5.21] in order to obtain the \(\mathcal {R}\)-bounded version.
Let \(A\in BUC^{\infty }(U,\mathcal {B}(E_0,E_1))\). By induction on \(|\alpha '|+|\gamma |\), we show that \(D_{\xi '}^{\alpha '}D_{\mu }^\gamma (A\circ (b,\sigma ))\) is a linear combination of terms of the form \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) with \(f\in S^{-|\alpha '|-|\gamma |}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m})\), \(\widetilde{\alpha }'\in \mathbb {N}_0^{n-1}\) and \(\widetilde{\gamma }\in \mathbb {N}_0\). It follows from [24, Theorem 8.5.21] that this is true for \(|\alpha '|+|\gamma |=0\). So let \(j\in \{1,\ldots ,n-1\}\). By the induction hypothesis, we have that \(D_{\xi '}^{\alpha '}D_{\mu }^\gamma (A\circ (b,\sigma ))\) is a linear combination of terms of the form \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) with \(f\in S^{-|\alpha '|-|\gamma |}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m})\), \(\widetilde{\alpha }'\in \mathbb {N}_0^{n-1}\) and \(\widetilde{\gamma }\in \mathbb {N}_0\). Hence, for \(D_{\xi _j}D_{\xi '}^{\alpha '}D_{\mu }^{\gamma }\) it suffices to treat the summands separately, i.e., we consider \(D_{\xi _j}((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f)\). By the product rule and the chain rule, we have
$$\begin{aligned}&D_{\xi _j}((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f)\\&\quad =\big ((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\big )(D_{\xi _j}f)+\bigg (\sum _{l=1}^{n-1}D_{\xi _j}\big ( \tfrac{\xi '_l}{\rho }\big )\cdot f \cdot [(D_l D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }} A)\circ (b,\sigma )]\bigg )\\&\qquad +D_{\xi _j}\big ( \tfrac{\mu ^{2m}}{\rho ^{2m}}\big )\cdot f \cdot [(D_{\mu } D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }} A)\circ (b,\sigma )]. \end{aligned}$$
By the induction hypothesis and Remark 3.3 (e) and (f) we have that
$$\begin{aligned} (D_{\xi _j}f),(D_{\xi _j}\frac{\xi _1'}{\rho })f,\ldots ,(D_{\xi _j} \frac{\xi '_{n-1}}{\rho })f,(D_{\xi _j} \tfrac{\mu ^{2m}}{\rho ^{2m}}) f\in S^{-|\alpha '|-|\gamma |-1}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m}). \end{aligned}$$
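For instance, since \(\rho =(1+|\xi '|^2+|\mu |^2)^{1/2}\) satisfies \(\partial _{\xi _j}\rho =\xi _j/\rho \), a direct computation (a routine check, included only for illustration) gives
$$\begin{aligned} \partial _{\xi _j}\Big (\frac{\xi '_l}{\rho }\Big )=\frac{\delta _{jl}}{\rho }-\frac{\xi '_l\xi _j}{\rho ^{3}}, \end{aligned}$$
which is indeed a scalar symbol of order \(-1\) on \(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\).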
The same computation for \(D_{\mu _1}\) and \(D_{\mu _2}\) instead of \(D_{\xi _j}\) also shows the desired behavior and hence, the induction is finished.
Now we use [24, Theorem 8.5.21] again: Since U is plump we have that A and all its derivatives have an \(\mathcal {R}\)-bounded range on U. Therefore, the terms \((D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f\) from above satisfy
$$\begin{aligned} \mathcal {R}_{\mathcal {B}(E_0,E_1)}\{\langle \xi ',\mu \rangle ^{|\alpha '|+|\gamma |}(D_{\xi '}^{\widetilde{\alpha }'}D_{\mu }^{\widetilde{\gamma }}A)\circ (b,\sigma )\cdot f(\xi ',\mu ):(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}<\infty \end{aligned}$$
which shows the assertion. \(\square \)
Corollary 4.4
(a) There is a constant \(c>0\) such that the mapping
$$\begin{aligned} \mathbb {R}_+\rightarrow S^{0}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E^{2m})), y\mapsto [(\xi ',\mu )\mapsto e^{cy}e^{iA_0(b,\sigma )y}\mathscr {P}_{-}(b,\sigma )] \end{aligned}$$
is bounded and uniformly continuous.
(b) There are constants \(C,c>0\) such that
$$\begin{aligned} \mathcal {R}(\{e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma ):(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\})<C \end{aligned}$$
for all \(x_n\ge 0\).
(a) By Lemma 4.3, it suffices to show that there is a plump neighborhood U of the range of \((b,\sigma )\) such that
$$\begin{aligned} \mathbb {R}_+\rightarrow BUC^{\infty }(U,\mathcal {B}(E^{2m})), y\mapsto [(\xi ',\mu )\mapsto e^{cy}e^{iA_0(\xi ,\mu )y}\mathscr {P}_{-}(\xi ,\mu )] \end{aligned}$$
is bounded and continuous. We can for example take
$$\begin{aligned} U=\big \{\big (\tfrac{\theta \xi '}{\rho },\tfrac{\theta \mu ^{2m}}{\rho ^{2m}}\big ): \xi '\in \mathbb {R}^{n-1},\,\mu \in \Sigma _{\phi /2m},\,\theta \in (\tfrac{1}{2},2)\big \}. \end{aligned}$$
Obviously, this set contains the range of \((b,\sigma )\) and it is smooth and relatively compact. By this compactness, it follows as in [8, Section 6] (mainly because of the spectral gap (6.11)) that there is a constant \(c>0\) such that
$$\begin{aligned} \sup _{y\ge 0,\, (\xi ',\mu )\in U}\Vert e^{2cy}e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu ) \Vert _{\mathcal {B}(E^{2m})}<\infty . \end{aligned}$$
Now, we show by induction on \(|\alpha '|+|\gamma |\) that \(\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) is a linear combination of terms of the form
$$\begin{aligned} f(\xi ',\mu ) e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu ) g(\xi ',\mu )y^p \end{aligned}$$
where \(f,g:\mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\rightarrow \mathcal {B}(E^{2m})\) are holomorphic and \(p\in \mathbb {N}_0\). Obviously this is true for \(|\alpha '|+|\gamma |=0\). For the induction step, we can directly use the induction hypothesis and consider a term of the form (4-4). Since
$$\begin{aligned}&e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )=e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )^2\\&\quad =\mathscr {P}_-(\xi ',\mu )e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )=\mathscr {P}_-(\xi ',\mu )e^{i A_0(\xi ',\mu )y} \end{aligned}$$
one can directly verify that the derivatives \(\partial _{\xi _j}\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) \((j=1,\ldots ,n-1)\) and \(\partial _{\mu _i}\partial _{\xi '}^{\alpha '}\partial _{\mu }^{\gamma }e^{i A_0(\xi ',\mu )y}\mathscr {P}_-(\xi ',\mu )\) \((i\in \{1,2\})\) are again a linear combination of terms of the form (4-4). But for such a term, we have
$$\begin{aligned} \Vert f e^{i A_0y}\mathscr {P}_- gy^p \Vert _{BUC(U,\mathcal {B}(E^{2m}))}\le C e^{-cy} \end{aligned}$$
for some constant \(C>0\) and all \(y\ge 0\). This shows that
$$\begin{aligned} \big ([(\xi ',\mu )\mapsto e^{cy}e^{iA_0(\xi ,\mu )y}\mathscr {P}_{-}(\xi ,\mu )]\big )_{y\ge 0} \end{aligned}$$
satisfies the assumptions of Lemma 4.3 uniformly in y so that the boundedness follows. The continuity follows from applying the same argument to
$$\begin{aligned} (e^{iA_0(\xi ,\mu )h}-{\text {id}}_{E^{2m}})e^{iA_0(\xi ,\mu )y}\mathscr {P}_-(\xi ',\mu ) \end{aligned}$$
for small |h|, \(h\in \mathbb {R}\). Note that \((\xi ,\mu )\) only runs through a relatively compact set again so that \(e^{iA_0(\xi ,\mu )h}-{\text {id}}_{E^{2m}}\rightarrow 0\) in \(BUC^{\infty }(U)\) as \(h\rightarrow 0\).
(b) This follows from the first part by substituting \(y=\rho x_n\). \(\square \)
Let \(n_1,n_2\in \mathbb {R}\) and \(c>0\). Moreover, let \(f_0\in S^{n_1}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E^{2m}))\) and \(g\in S_{\mathcal {R}}^{n_2}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E,E^{2m}))\). Then, for all \(\alpha \in \mathbb {N}_0^{n+1}\) we have that \(\partial _{\xi ',\mu }^{\alpha }f_0e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_0\) is a linear combination of terms of the form
$$\begin{aligned} f_{\alpha }e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha }x_n^k \end{aligned}$$
where \(f_{\alpha }\in S_{\mathcal {R}}^{n_1-d_1}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E^{2m}))\), \(g_{\alpha }\in S^{n_2-d_2}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m},\mathcal {B}(E,E^{2m}))\) and \(k+d_1+d_2=|\alpha |\).
This can be shown by induction on \(|\alpha |\). Using Lemma 4.3, the proof of [22, Lemma 4.22] carries over to our setting. \(\square \)
Let \(\zeta ,\delta \ge 0\), \(k\in \mathbb {N}_0\) and \(\vartheta (c,x_n,\mu )=x_n^{-\delta }|\mu |^{-\zeta } e^{-c|\mu |x_n}\) for \(c,x_n,\mu \in \mathbb {R}_+\).
(a) For all \(l\in \mathbb {N}_0\), there are constants \(C,c>0\) such that
$$\begin{aligned} \Vert (\xi ',\mu )\mapsto D_{x_n}^ke^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}\Vert ^{(k+\zeta -m_j-\delta ,\vartheta (c/2,x_n,\cdot \,))}_{l,\mathcal {R}}<C \end{aligned}$$
for all \(x_n\in \mathbb {R}_+\).
(b) The mapping
$$\begin{aligned}&\mathbb {R}_+\rightarrow S^{k+\zeta -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})), x_n\mapsto [(\xi ',\mu )\\&\mapsto x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$
is continuous.
(c) If \(f\in BUC(\mathbb {R}_+,\mathbb {C})\) with \(f(0)=0\), then
$$\begin{aligned}&\overline{\mathbb {R}_+}\rightarrow S^{k+\zeta -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})),\\&x_n\mapsto [(\xi ',\mu )\mapsto f(x_n)x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$
is uniformly continuous.
(d) If \(\varepsilon \in (0,\delta ]\), then
$$\begin{aligned}&\overline{\mathbb {R}_+}\rightarrow S^{k+\zeta +\varepsilon -m_j-\delta }_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})),\\&x_n\mapsto [(\xi ',\mu )\mapsto x_n^{\delta } e^{c\rho x_n}D_{x_n}^ke^{i\rho A_0(b,\sigma )x_n}M_j(b,\sigma )\tfrac{1}{\rho ^{m_j}}] \end{aligned}$$
is uniformly continuous.
(a) By Lemma 4.5, we have that \(D_{\xi '}^{\alpha '}D_{\mu }^{\gamma } e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}M_{j}(b,\sigma )\frac{1}{\rho ^{m_j}}\) is a linear combination of terms of the form
$$\begin{aligned} f_{\alpha ',\gamma }e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha ',\gamma }x_n^{p} \end{aligned}$$
where \(f_{\alpha ',\gamma }\in S^{-d_1}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B }( E^{2m},E^{2m}))\), \(g_{\alpha ',\gamma }\in S^{-m_{j}-d_2}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B}(E,E^{2m}))\) and \(p+d_1+d_2=|\alpha '|+|\gamma |\). But for such a term, we have that
$$\begin{aligned}&\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}D_{x_n}^kf_{\alpha ',l}(\xi ',\mu )e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\\&\qquad \mathscr {P}_-(b,\sigma )g_{\alpha ',l}(\xi ',\mu )x_n^p : (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big ) \\&\quad \le C \sum _{\widetilde{k}=0}^k \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}f_{\alpha ',l}(\xi ',\mu )e^{c\rho x_n}e^{i\rho A_0(b,\sigma )x_n}\\&\qquad \mathscr {P}_-(b,\sigma )[c\rho +i\rho A_0(b,\sigma )]^{k-\widetilde{k}}g_{\alpha ',l}(\xi ',\mu )x_n^{[p-\widetilde{k}]_+}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \sum _{\widetilde{k}=0}^k\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1} \rho ^{-\zeta +\delta -\widetilde{k}-d_1-d_2+|\alpha '|+|\gamma |}e^{-c\rho x_n}x_n^{[p-\widetilde{k}]_+}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}x_n^{-\delta }|\mu |^{-\zeta }\rho ^{-d_1-d_2-p+|\alpha '|+|\gamma |}e^{-\tfrac{c}{2}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C. \end{aligned}$$
From the second to the third line, we used Lemma 4.3 and Corollary 4.4(b). This gives (4-5).
(b) Again, we consider a term of the form
$$\begin{aligned} f_{\alpha ',\gamma }e^{i\rho A_0(b,\sigma )x_n}\mathscr {P}_-(b,\sigma )g_{\alpha ',\gamma }x_n^{p} \end{aligned}$$
where \(f_{\alpha ',\gamma }\in S^{-d_1}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B }( E^{2m},E^{2m}))\), \(g_{\alpha ',\gamma }\in S^{-m_{j}-d_2}(\mathbb {R}^{n-1}\times \mathbb {R}_+,\mathcal {B}(E,E^{2m}))\) and \(p+d_1+d_2=|\alpha '|+|\gamma |\). By the same computation as in part (a), we obtain
$$\begin{aligned}&\mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}\rho ^{-k-\zeta +m_j+\delta +|\alpha '|+|\gamma |}D_{x_n}^kf_{\alpha ',l}\\&(\xi ',\mu )[e^{c\rho (x_n+h)+i\rho A_0(b,\sigma )(x_n+h)}-e^{c\rho x_n+i\rho A_0(b,\sigma )x_n}]:\\&\qquad (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\})\\&\quad \le C \mathcal {R}\big (\{\vartheta (\tfrac{c}{2},x_n,\mu )^{-1}x_n^{-\delta }|\mu |^{-\zeta }\rho ^{-d_1-d_2-p+|\alpha '|+|\gamma |}[e^{c\rho h+i\rho A_0(b,\sigma )h}\\&\qquad -{\text {id}}_{E^{2m}}] e^{-\tfrac{3c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad = C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}\}\big )\\&\quad \le C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{c}{4}\rho x_n}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \le \tfrac{1}{\sqrt{h}}\}\big ) \\&\qquad +C \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{cx_n}{4\sqrt{h}}}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\}\big ). \end{aligned}$$
From Corollary 4.4 (a), it follows that the first \(\mathcal {R}\)-bound tends to 0 as \(h\rightarrow 0\). By Corollary 4.4 (b), it holds that
$$\begin{aligned} \mathcal {R}(\{e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}:(\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\})<\infty \end{aligned}$$
and since \(x_n>0\) also the second \(\mathcal {R}\)-bound tends to 0 as \(h\rightarrow 0\). This shows the desired continuity.
(c) This follows by the same computation as in part (b). However, without f there would be no continuity at \(x_n=0\) as the second \(\mathcal {R}\)-bound
$$\begin{aligned} \mathcal {R}\big (\{[e^{c\rho h+i\rho A_0(b,\sigma )h}-{\text {id}}_{E^{2m}}] e^{-\tfrac{cx_n}{2\sqrt{h}}}: (\xi ',\mu )\in \mathbb {R}^{n-1}\times \Sigma _{\phi /2m}, \rho \ge \tfrac{1}{\sqrt{h}}\}\big ) \end{aligned}$$
does not tend to 0 as \(h\rightarrow 0\) for \(x_n=0\). By adding f though, we obtain the desired continuity.
(d) This follows from part (c) with \(f(x_n)=x_n^{\varepsilon }\) for \(x_n\) close to 0. \(\square \)
Given topological spaces \(Z_0,Z_1\) and \(z\in Z_0\), we now write
$$\begin{aligned} {\text {ev}}_{z}:C(Z_0;Z_1)\rightarrow Z_1, f\mapsto f(z) \end{aligned}$$
for the evaluation map at z.
Let \(k\in \mathbb {N}_0\), \(p_0\in (1,\infty )\), \(q_0\in [1,\infty ]\), \(\zeta \ge 0\) and \(s_0,s,\widetilde{t}\in \mathbb {R}\).
(a) There are constants \(C,c>0\) such that for all \(x_n>0\) and all \(\lambda \in \Sigma _{\phi }\) we have the parameter-dependent estimate
$$\begin{aligned} \Vert [D_{x_n}^k{\text {Poi}}_j(\lambda )&f](\cdot ,x_n)\Vert _{\mathscr {A}^{\widetilde{t}+m_j-k-\zeta ,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E^{2m})}\\ {}&\le Cx_n^{-[\widetilde{t}-s]_+}|\lambda |^{-\zeta /2m}e^{-c|\lambda |^{1/2m}x_n}\Vert f\Vert _{\mathscr {A}^{s,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E)}\\&\quad (f\in \mathscr {S}(\mathbb {R}^{n-1},E)). \end{aligned}$$
(b) There is a constant \(c>0\) such that for all \(\lambda \in \Sigma _{\phi }\) we have that
$$\begin{aligned} K(\lambda ):=[x_n\mapsto x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m} x_n}{\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )] \end{aligned}$$
is an element of
$$\begin{aligned} C_{\mathcal {R}B}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ). \end{aligned}$$
Moreover, for all \(\sigma >0\) we have that the set \(\{K(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(C_{\mathcal {R}B}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\).
(c) Let \(f\in BUC([0,\infty ),\mathbb {C})\) such that \(f(0)=0\). There is a constant \(c>0\) such that for all \(\lambda \in \Sigma _{\phi }\) we have that
$$\begin{aligned} K_f(\lambda ):=[x_n\mapsto f(x_n)x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m} x_n}{\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )] \end{aligned}$$
$$\begin{aligned} BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ). \end{aligned}$$
Moreover, for all \(\sigma >0\) we have that the set \(\{K_f(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\).
(d) Let \(\varepsilon >0\) and let \(K(\lambda )\) be defined as in Part (b). Then, \(K(\lambda )\) is an element of
$$\begin{aligned} BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-\varepsilon -k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big ). \end{aligned}$$
Moreover, for all \(\sigma >0\) we have that the set \(\{K(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) is \(\mathcal {R}\)-bounded in \(BUC_{\mathcal {R}}\big (\mathbb {R}_+;\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-\varepsilon -k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\big )\).
(a) By Proposition 4.6, we have that
$$\begin{aligned} \big (D_{x_n}^k e^{i\rho A_0(b,\sigma )x_n}\frac{M_{j}(b,\sigma )}{\rho ^{m_j}}\big )_{x_n>0}&\subset S^{\zeta +k-m_{j}-[\widetilde{t}-s]_+}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m}))\\ {}&\subset S^{\zeta +k-m_{j}-\widetilde{t}+s}_{\mathcal {R}}(\mathbb {R}^{n-1}\times \Sigma _{\phi /2m};\mathcal {B}(E,E^{2m})). \end{aligned}$$
Therefore, it follows from (4-5) together with the mapping properties for parameter-dependent pseudo-differential operators, Proposition 3.5, that \({\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda )\) maps \(\mathscr {A}^{s,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E)\) into \(\mathscr {A}^{\widetilde{t}+m_j-k-\zeta ,|\lambda |^{1/2m},s_0}(\mathbb {R}^{n-1},w,E^{2m})\) with a bound on the operator norms which is given by \(Cx_n^{-[\widetilde{t}-s]_+}|\lambda |^{-\zeta /2m} e^{-c|\lambda |^{1/2m}x_n}\) for all \(\widetilde{t},s\in \mathbb {R}\), \(x_n>0\) and all \(\zeta \ge 0\).
(b) We use Proposition 4.6(a) together with Proposition 3.6. Then, we obtain
$$\begin{aligned}&\mathcal {R}_{\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})}\big (\{x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m}x_n}\\&\qquad {\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\\&\quad \le \mathcal {R}_{\mathcal {B}(\mathscr {A}^{\widetilde{t}-[\widetilde{t}-s]_+}(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}\big (\{x_n^{[\widetilde{t}-s]_+}|\lambda |^{\frac{\zeta -[-[\widetilde{t}-s]_+-m_j+k+\zeta ]_+}{2m}}e^{c|\lambda |^{1/2m}x_n}\\&\qquad {\text {ev}}_{x_n}D_{x_n}^k{\text {Poi}}_j(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\\&\quad \le C_{\sigma }. \end{aligned}$$
This shows that
$$\begin{aligned} \mathcal {R}(\{[K(\lambda )](x_n):x_n\ge 0,\,\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \})<\infty \end{aligned}$$
in \(\mathcal {B}(\mathscr {A}^s(\mathbb {R}^{n-1},w;E^{2m}),\mathscr {A}^{\widetilde{t}+m_j-k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))\). It remains to show that the \(K(\lambda )\) are \(\mathcal {R}\)-continuous. But this follows from the continuity statement in Proposition 4.6(b) together with Proposition 3.6.
(c) This follows as Part (b) but with Proposition 4.6(c) instead of Proposition 4.6(b).
(d) This follows as Part (b) but with Proposition 4.6(d) instead of Proposition 4.6(b). \(\square \)
Consider the situation of Corollary 4.7 and let \(p\in [1,\infty )\), \(r\in \mathbb {R}\). In order to shorten the formulas, we write \(\gamma _1=r-p[\widetilde{t}-s]_+\) and \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) and all \(f\in \mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)\) it holds that
$$\begin{aligned} \Vert {\text{ Poi }}_{j}(\lambda )f\Vert _{W^{k}_{p}(\mathbb {R}_+,|{\text{ pr }}_n|^r,\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}\le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2mp}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}. \end{aligned}$$
We use Corollary 4.7 and obtain
$$\begin{aligned}&\Vert {\text{ Poi }}_{j}(\lambda )f\Vert _{W^{k}_{p}(\mathbb {R}_+,|{\text{ pr }}_n|^r,\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))}^p \\&\quad = \sum _{l=0}^{k}\int _0^{\infty } \Vert [D_{x_n}^l{\text{ Poi }}_{j}(\lambda )f](\,\cdot \,,x_n) \Vert _{\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})}^px_n^r\,\mathrm {d}x_n\\ {}&\quad \le C \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\sum _{l=0}^{k}|\lambda |^{\frac{p[-[\widetilde{t}-s]_+-m_j+ l-\zeta ]_+-p\zeta }{2m}}\int _0^{\infty } x_n^{\gamma _1} e^{-cp|\lambda |^{1/2m}x_n}\,\mathrm {d}x_n\\ {}&\quad \le C |\lambda |^{\frac{-\gamma _2}{2m}}\Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\int _0^{\infty } x_n^{\gamma _1} e^{-c|\lambda |^{1/2m}x_n}\,\mathrm {d}x_n\\ {}&\quad \le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2m}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p\int _{0}^{\infty } y_n^{\gamma _1} e^{-cy_n}\,dy_n\\ {}&\quad \le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2m}} \Vert f\Vert _{\mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)}^p \end{aligned}$$
for all \(f\in \mathscr {A}^{s}_p(\mathbb {R}^{n-1},w;E)\). \(\square \)
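For the reader's convenience, the weighted integral used in the last step can be evaluated in closed form (an elementary computation; recall \(\gamma _1>-1\)): substituting \(y=c|\lambda |^{1/2m}x_n\),
$$\begin{aligned} \int _0^{\infty } x_n^{\gamma _1}e^{-c|\lambda |^{1/2m}x_n}\,\mathrm {d}x_n=\big (c|\lambda |^{1/2m}\big )^{-1-\gamma _1}\Gamma (\gamma _1+1), \end{aligned}$$
which is exactly the source of the factor \(|\lambda |^{\frac{-1-\gamma _1}{2m}}\) in the estimate above.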
Consider the situation of Corollary 4.7 and let \(p\in [1,\infty )\), \(r\in \mathbb {R}\). Again we write \(\gamma _1=r-p[\widetilde{t}-s]_+\) as well as \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\) and take \(\varepsilon \in (0,1+\gamma _1)\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C, \end{aligned}$$
where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))).\)
Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). Using Corollary 4.7 and Kahane's contraction principle, we obtain
$$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\Omega ;W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r},\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}D_{x_n}^{\widetilde{k}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \lesssim _{\sigma }\left\| x_n\mapsto \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1-\varepsilon }{2mp}}x_n^{-[\widetilde{t}-s]_+} e^{-\frac{c}{2}|\lambda _l|^{1/2m}x_n}f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E)))}\\&\quad \lesssim \left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{1+\gamma _1-\varepsilon }{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \lesssim \left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{e^{-p\frac{c}{3}|\lambda _l|^{1/2m}x_n} x_n^{-1+\varepsilon }\}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le \left( \int _{0}^{\infty } e^{-p\frac{c}{3}\sigma ^{1/2m}x_n} x_n^{-1+\varepsilon }\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le C\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))} \end{aligned}$$
for all \(N\in \mathbb {N}\), all \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and all \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). This is the desired estimate. \(\square \)
Comparing Proposition 4.8 and Proposition 4.9, one might wonder if one can omit the \(\varepsilon \) in Proposition 4.9. After having applied Kahane's contraction principle in the proof of Proposition 4.9, it seems like the \(\varepsilon \) is necessary. Roughly speaking, taking \((\lambda _l)_{l\in \mathbb {N}}\) such that this sequence is dense in \(\Sigma _{\phi }{\setminus }\overline{B(0,\sigma )}\) will cause \(\max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\) to have a singularity of the form \(x_n^{-1}\) at \(x_n=0\) if \(N\rightarrow \infty \). Indeed, taking \(|\lambda _l|^{1/2m}\) close to \(x_n^{-1}\) yields that \(|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\) is close to \(x_n^{-1} e^{-pc/2}\). Hence, the integral \(\left( \int _{0}^{\infty } \max _{l=1,\ldots ,N}\{|\lambda _l|^{\frac{\gamma _1+1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\}\,\mathrm{d}x_n\right) ^{1/p}\) will tend to \(\infty \) as \(N\rightarrow \infty \). Thus, if one wants to remove the \(\varepsilon \), it seems like one should not apply Kahane's contraction principle as it is applied in the proof of Proposition 4.9. This can for example be avoided under a cotype assumption on E together with a restriction on p, as Proposition 4.11 shows. However, there are some cases in which the \(\varepsilon \) cannot be removed. We will show this in Proposition 4.13.
Consider the situation of Corollary 4.7 and let \(r\in \mathbb {R}\). Suppose that E has finite cotype \(q_E\). Suppose that the assumptions of Proposition 2.14 hold true. Again, we define \(\gamma _1=r-p[\widetilde{t}-s]_+\) as well as \(\gamma _2=p\zeta -p[-[\widetilde{t}-s]_+-m_j+ k+\zeta ]_+\). Suppose that \(\gamma _1>-1\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C, \end{aligned}$$
where \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m}))).\)
Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\) and let \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). Using Corollary 4.7 and Proposition 2.14, we obtain
$$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\Omega ;W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r},\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \eqsim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1+\gamma _2}{2mp}}D_{x_n}^{\widetilde{k}}{\text {Poi}}_{j}(\lambda _l) f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{\widetilde{t}+m_j- k-\zeta }(\mathbb {R}^{n-1},w;E^{2m})))}\\&\quad \lesssim _{\sigma }\left\| x_n\mapsto \sum _{l=1}^N \varepsilon _l|\lambda _l|^{\frac{1+\gamma _1}{2mp}}x_n^{-[\widetilde{t}-s]_+} e^{-\frac{c}{2}|\lambda _l|^{1/2m}x_n}f_l \right\| _{L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E)))}\\&\quad \lesssim \max _{l=1,\ldots ,N}\left( \int _{0}^{\infty } |\lambda _l|^{\frac{1+\gamma _1}{2m}}e^{-p\frac{c}{2}|\lambda _l|^{1/2m}x_n} x_n^{\gamma _1}\,\mathrm{d}x_n\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad =\left( \int _{0}^{\infty } e^{-p\frac{c}{2}y} y^{\gamma _1}\,dy\right) ^{1/p}\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))}\\&\quad \le C\left\| \sum _{l=1}^N \varepsilon _lf_l \right\| _{L_p(\Omega ;\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E))} \end{aligned}$$
for all \(N\in \mathbb {N}\), all \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) and all \(f_1,\ldots ,f_N\in \mathscr {A}^s(\mathbb {R}^{n-1},w;E)\). This is the desired estimate. \(\square \)
Let us now see what can happen if the cotype assumption is not satisfied.
Let \(\widetilde{\mathscr {A}}^s\) be defined by
$$\begin{aligned} \widetilde{\mathscr {A}}^s:=\{u\in \mathscr {A}^s: {\text {supp}}\mathscr {F}u\subset \overline{B(0,1)} \} \end{aligned}$$
where B(0, 1) denotes the ball with center 0 and radius 1. We endow \(\widetilde{\mathscr {A}}^s\) with the norm \(\Vert \cdot \Vert _{\mathscr {A}^s}\). Then, \(\widetilde{\mathscr {A}}^s\) is a Banach space.
Let \((u_n)_{n\in \mathbb {N}}\subset \widetilde{\mathscr {A}}^s\) be a Cauchy sequence. Since \(\mathscr {A}^s\) is a Banach space, we only have to prove that the limit \(u:=\lim _{n\rightarrow \infty }u_n\) satisfies \({\text {supp}}\mathscr {F}u\subset \overline{B(0,1)}\). But since
$$\begin{aligned} \mathscr {F}:\mathscr {A}^s\rightarrow \mathscr {S}'(\mathbb {R}^n;E) \end{aligned}$$
is continuous, it follows that
$$\begin{aligned}{}[\mathscr {F}u](f)=\lim _{n\rightarrow \infty }[\mathscr {F}u_n](f)=0 \end{aligned}$$
for all \(f\in \mathscr {S}(\mathbb {R}^n)\) such that \({\text {supp}} f\subset \overline{B(0,1)}^c\). This shows the assertion. \(\square \)
Let \(\sigma >0\), \(r\in \mathbb {R}\) and \(p\in [1,2)\). For \(\lambda \ge \sigma \) and \(g\in \mathscr {A}^s\) let \(u_\lambda :={\text {Poi}}_{\Delta }(\lambda )g\) be the solution of
$$\begin{aligned} \lambda u_{\lambda }(x)-\Delta u_{\lambda }(x)&=0\quad \quad (x\in \mathbb {R}^n_+),\nonumber \\ u_{\lambda }(x',0)&=g(x')\quad (x'\in \mathbb {R}^{n-1}) \end{aligned}$$
which is decaying in normal direction. Then, the set of operators
$$\begin{aligned} \{|\lambda |^{\frac{1+r}{2p}}{\text {Poi}}_{\Delta }(\lambda ):\lambda \ge \sigma \}\subset \mathcal {B}(\mathscr {A}^s, L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)) \end{aligned}$$
is not \(\mathcal {R}\)-bounded.
Applying Fourier transform in tangential direction to (4-6), we obtain
$$\begin{aligned} \partial _n^2 \hat{u}(\xi ',x_n)&=(\lambda +|\xi '|^2)\hat{u}(\xi ',x_n),\\&\hat{u}(\xi ',0)=\hat{g}(\xi '). \end{aligned}$$
The stable solution of this equation is given by \(e^{-(\lambda +|\xi '|^2)^{1/2} x_n}\hat{g}(\xi ')\) so that the decaying solution of (4-6) is given by
$$\begin{aligned} u_{\lambda }(x',x_n)={\text {Poi}}_{\Delta }(\lambda )g= [\mathscr {F}_{x'\rightarrow \xi '}^{-1}e^{-(\lambda +|\xi '|^2)^{1/2} x_n}\mathscr {F}_{x'\rightarrow \xi '}g](x'). \end{aligned}$$
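As a quick check (elementary and only included for completeness), the function \(\hat{u}_{\lambda }(\xi ',x_n)=e^{-(\lambda +|\xi '|^2)^{1/2} x_n}\hat{g}(\xi ')\) indeed satisfies
$$\begin{aligned} \partial _n^2\hat{u}_{\lambda }(\xi ',x_n)=(\lambda +|\xi '|^2)\hat{u}_{\lambda }(\xi ',x_n),\qquad \hat{u}_{\lambda }(\xi ',0)=\hat{g}(\xi '), \end{aligned}$$
and it decays as \(x_n\rightarrow \infty \) since \((\lambda +|\xi '|^2)^{1/2}\ge \lambda ^{1/2}>0\) for \(\lambda \ge \sigma \).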
Let \(\chi \in \mathscr {D}(\mathbb {R}^{n-1})\) be a test function with \(\chi (\xi ')=1\) for \(\xi '\in B(0,1)\) and \({\text {supp}}\chi \subset B(0,2)\). It holds that \(\chi (\xi ')e^{((\lambda +|\xi '|^2)^{1/2}-|\lambda |^{1/2})x_n}\) satisfies the Mikhlin condition uniformly in \(\lambda \ge \sigma \) and \(x_n\le 1\). Hence, we have that
$$\begin{aligned} \{{\text {op}}[\chi (\xi ')e^{((\lambda +|\xi '|^2)^{1/2}-|\lambda |^{1/2})x_n}]:\lambda \ge \sigma ,x_n\in [0,1]\}\subset \mathcal {B}(\widetilde{\mathscr {A}}^s) \end{aligned}$$
is \(\mathcal {R}\)-bounded, where \(\widetilde{\mathscr {A}}^s\) is defined as in Lemma 4.12. Using these observations together with the \(\mathcal {R}\)-boundedness of \(\{|\lambda |^{\frac{1+r}{2p}}{\text {Poi}}_{\Delta }(\lambda ):\lambda \ge \sigma \}\), which we assume in order to derive a contradiction, we can carry out the following calculation: Let \((\varepsilon _l)_{l\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(\lambda _l=(\sigma 2^l)^2\) \((l\in \mathbb {N})\), \(N\in \mathbb {N}\) and \(g_1,\ldots ,g_N\in \widetilde{\mathscr {A}}^s\). Then, we obtain
$$\begin{aligned} \left\| \sum _{l=1}^N \varepsilon _l g_l\right\| _{L_p(\Omega ;\widetilde{\mathscr {A}}^s)}&\gtrsim \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}[{\text {Poi}}_{\Delta }(\lambda _l)g_l](\,\cdot ,x_n)\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&= \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}{\text {op}}[\chi (\xi ')e^{-(\lambda _l+|\xi '|^2)^{1/2} x_n}]g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\gtrsim \left( \int _{\Omega }\int _{0}^{\sigma ^{-1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}e^{-|\lambda _l|^{1/2}x_n}g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\gtrsim \left( \int _{\Omega }\sum _{m=1}^N\int _{\sigma ^{-1}2^{-m}}^{\sigma ^{-1}2^{-m+1}} \bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\frac{1+r}{2p}}e^{-|\lambda _l|^{1/2}x_n}g_l\bigg \Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\gtrsim \left( \int _{\Omega }\sum _{m=1}^N\int _{\sigma ^{-1}2^{-m}}^{\sigma ^{-1}2^{-m+1}} \lambda _m^{\frac{1+r}{2}}e^{-p|\lambda _m|^{1/2}x_n}\Vert g_m\Vert ^p_{\mathscr {A}^s}x_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\gtrsim \left( \sum _{m=1}^N\Vert g_m\Vert ^p_{\mathscr {A}^s} \right) ^{1/p}. \end{aligned}$$
This shows that \(\widetilde{\mathscr {A}}^s\) has cotype p. However, \(\widetilde{\mathscr {A}}^s\) is a nontrivial Banach space by Lemma 4.12 and therefore its cotype must satisfy \(p\ge 2\). This contradicts \(p\in [1,2)\) and hence, \(\{|\lambda |^{\frac{1+r}{2p}}{\text{ Poi }}_{\Delta }(\lambda ):\lambda \ge \sigma \}\) cannot be \(\mathcal {R}\)-bounded. \(\square \)
Proposition 4.13 shows that it is not possible in general to remove the \(\varepsilon \) in Proposition 4.9. Even though we only treat the Laplacian with Dirichlet boundary conditions in Proposition 4.13, it seems like the integrability parameter in normal direction may not be smaller than the cotype of the space in tangential directions in order to obtain the sharp estimate of Proposition 4.11.
Depending on what one aims for, it can also be better to substitute \(t=\widetilde{t}+m_j-k-\zeta \) in Proposition 4.8, Proposition 4.9 or Proposition 4.11. In this case, we obtain the estimates
$$\begin{aligned} \Vert {\text {Poi}}_{j}(\lambda )\Vert&\le C|\lambda |^{\frac{-1-\gamma _1-\gamma _2}{2mp}},\quad (\text {Proposition } 4.8),\\ \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2-\varepsilon }{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )&\le C,\quad (\text {Proposition } 4.9),\\ \mathcal {R}\big (\{|\lambda |^{\frac{1+\gamma _1+\gamma _2}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )&\le C,\quad (\text {Proposition } 4.11), \end{aligned}$$
$$\begin{aligned}&\gamma _1=r-p[t+k+\zeta -m_j-s]_+,\\&\quad \gamma _2=p\zeta -p[-[t+k+\zeta -m_j-s]_+-m_j+k+\zeta ]_+, \end{aligned}$$
and where the operator norms and the \(\mathcal {R}\)-bounds are taken in
$$\begin{aligned} \mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m}))). \end{aligned}$$
If we now choose \(\zeta :=[m_j+s-k-t]_+\), then we obtain
$$\begin{aligned} \gamma _1=r-p[t+k-m_j-s]_+,\quad \gamma _2=p[m_j+s-k-t]_+-p[s-t]_+. \end{aligned}$$
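To spell out this computation (a routine verification; we abbreviate \(a:=t+k-m_j-s\), so that \(\zeta =[-a]_+\)): using \([a]_+-[-a]_+=a\), one gets \(t+k+\zeta -m_j-s=a+[-a]_+=[a]_+\), which gives the stated formula for \(\gamma _1\), and moreover
$$\begin{aligned} -[t+k+\zeta -m_j-s]_+-m_j+k+\zeta =-[a]_++[-a]_++k-m_j=-a+k-m_j=s-t, \end{aligned}$$
so that \(\gamma _2=p\zeta -p[s-t]_+=p[m_j+s-k-t]_+-p[s-t]_+\), as claimed.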
From this, it follows that
$$\begin{aligned} -\gamma _1-\gamma _2=-r+p(k-m_j)+p([s-t]_++t-s)=-r+p(k-m_j)+p[t-s]_+ \end{aligned}$$
This yields the following result:
Recall Assumptions 1.1 and 1.2. Let \(k\in \mathbb {N}_0\), \(r,s,t\in \mathbb {R}\) and \(p\in [1,\infty )\). Suppose that \(r-p[t+k-m_j-s]_+>-1\).
For all \(\sigma >0\), there is a constant \(C>0\) such that
$$\begin{aligned} \Vert {\text {Poi}}_{j}(\lambda )\Vert \le C|\lambda |^{\frac{-1-r+p(k-m_j)+p[t-s]_+}{2mp}} \end{aligned}$$
for all \(\lambda \in \Sigma _{\phi }\) such that \(|\lambda |\ge \sigma \) where the operator norms are taken in the space \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\).
Let \(\varepsilon \in (0,\gamma _1+1)\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+r-\varepsilon -p(k-m_j)-p[t-s]_+}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C \end{aligned}$$
where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+, |{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\).
Suppose that the assumptions of Proposition 2.14 hold true. Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
$$\begin{aligned} \mathcal {R}\big (\{|\lambda |^{\frac{1+r-p(k-m_j)-p[t-s]_+}{2mp}}{\text {Poi}}_{j}(\lambda ):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\big )\le C \end{aligned}$$
where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\).
This follows from Proposition 4.8, Proposition 4.9 and Proposition 4.11 together with the observations in Remark 4.15. \(\square \)
Corollary 4.17
Let \(k\in \mathbb {Z}\), \(s,t\in \mathbb {R}\), \(p\in (1,\infty )\) and \(r\in (-1,p-1)\). Suppose that \(r-p[t+k-m_j-s]_+>-1\) and that \(\mathscr {A}^s\) is reflexive.
The case \(k\in \mathbb {N}_0\) is already contained in Theorem 4.16. Hence, we only treat the case \(k<0\). In this case, it holds that
$$\begin{aligned} (r-pk)-p[t-m_j-s]_+\ge r-p[t+k-m_j-s]_+>-1. \end{aligned}$$
Hence, Theorem 4.16 holds with a weight of the power \(r-pk\) and smoothness 0 in normal direction. Combining this with Lemma 2.22 yields the assertion. \(\square \)
Corollary 4.18
Let \(s,t\in \mathbb {R}\), \(k\in (0,\infty ){\setminus }\mathbb {N}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(q\in [1,\infty ]\). We write \(k=\overline{k}-\theta \) with \(\overline{k}\in \mathbb {N}_0\) and \(\theta \in [0,1)\). Suppose that \(r-p[t+\overline{k}-m_j-s]_+>-1\).
For all \(\sigma >0\) there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) we have the estimate
where the norm is taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),H^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\) or in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),B^{k}_{p,q}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m}))).\)
Let \(\varepsilon \in (0,\gamma _1+1)\) and let E be a UMD space. Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
where the \(\mathcal {R}\)-bounds are taken in \(\mathcal {B}(\mathscr {A}^{s}(\mathbb {R}^{n-1},w;E),H^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t}(\mathbb {R}^{n-1},w;E^{2m})))\).
Suppose that the assumptions of Proposition 2.14 hold true and let E be a UMD space. Then, for all \(\sigma >0\) there is a constant \(C>0\) such that
This follows from Theorem 4.16 together with real and complex interpolation; see [37, Proposition 6.1, (6.4)] combined with a retraction–coretraction argument, [31, Proposition 5.6] and Proposition 2.3. Note that the power weight \(|{\text {pr}}_n|^{r}\) is an \(A_p\) weight, since \(r\in (-1,p-1)\), see [18, Example 9.1.7]. \(\square \)
Lemma 4.19
Let \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(w_r(x):=x^r\) for \(x\in \mathbb {R}_+\). Then, the linear operator
$$\begin{aligned} T:L_p(\mathbb {R}_+,w_r;\mathbb {R})\rightarrow L_p(\mathbb {R}_+,w_r;\mathbb {R}),\;f\mapsto \int _{0}^{\infty } \frac{f(y)}{x+y}\,dy \end{aligned}$$
is bounded.
In [17, Appendix I.3], this was shown for \(r=0\) using Schur's Lemma. We adapt that proof to the weighted setting.
Let \(K(x,y):=\frac{1}{y^r(x+y)}\). Then, we may write
$$\begin{aligned} (Tf)(x)=\int _0^\infty K(x,y)f(y) y^r\,dy. \end{aligned}$$
We further define the transpose operator
$$\begin{aligned} (T^tf)(y)=\int _0^\infty K(x,y)f(x) x^r\,\mathrm{d}x=\frac{1}{y^r}\int _0^\infty \frac{f(x)}{x+y}x^r\,\mathrm{d}x. \end{aligned}$$
By the lemma in [17, Appendix I.2], it is sufficient to find \(C>0\) and \(u,v:\mathbb {R}_+\rightarrow (0,\infty )\) such that
$$\begin{aligned} T(u^{p'})\le Cv^{p'}\quad \text {and}\quad T^t(v^{p})\le Cu^{p}, \end{aligned}$$
where \(1=\frac{1}{p}+\frac{1}{p'}\). Similar to [17, Appendix I.3], we choose
$$\begin{aligned} u(x):=v(x):=x^{-\frac{1+r}{pp'}} \end{aligned}$$
and
$$\begin{aligned} C:=\max \left\{ \int _0^\infty \frac{t^{-\frac{1+r}{p}}}{1+t}\,dt,\int _0^\infty \frac{t^{r-\frac{1+r}{p'}}}{1+t}\,dt \right\} . \end{aligned}$$
Note that \(r\in (-1,p-1)\) ensures that both integrals are finite since
$$\begin{aligned} -\frac{1+r}{p}\in (-1,0)\Longleftrightarrow r\in (-1,p-1) \end{aligned}$$
and
$$\begin{aligned} r-\frac{1+r}{p'}\in (-1,0)\Longleftrightarrow r\in (-1,p-1). \end{aligned}$$
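Both integrals are classical Beta-type integrals and can even be evaluated in closed form (the explicit value is not needed below): applying \(\int _0^\infty \frac{t^{a-1}}{1+t}\,dt=\frac{\pi }{\sin (\pi a)}\) for \(a\in (0,1)\) with \(a=1-\frac{1+r}{p}\) and \(a=\frac{1+r}{p}\), respectively, one obtains
$$\begin{aligned} \int _0^\infty \frac{t^{-\frac{1+r}{p}}}{1+t}\,dt=\int _0^\infty \frac{t^{r-\frac{1+r}{p'}}}{1+t}\,dt=\frac{\pi }{\sin \big (\pi \tfrac{1+r}{p}\big )},\qquad \text {so that }C=\frac{\pi }{\sin \big (\pi \tfrac{1+r}{p}\big )}. \end{aligned}$$
In particular, the constant blows up as \(r\) approaches one of the endpoints \(-1\) or \(p-1\), in accordance with the restriction \(r\in (-1,p-1)\).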
With this choice, we obtain
$$\begin{aligned} (T u^{p'})(x)=\int _0^\infty \frac{y^{-\frac{1+r}{p}}}{x+y}\,dy=x^{-\frac{1+r}{p}}\int _0^\infty \frac{t^{-\frac{1+r}{p}}}{1+t}\,dt\le C v(x)^{p'} \end{aligned}$$
and
$$\begin{aligned} (T^t v^{p})(y)=\frac{1}{y^r}\int _0^\infty \frac{x^{r-\frac{1+r}{p'}}}{x+y}\,\mathrm{d}x=y^{-\frac{1+r}{p'}}\int _0^\infty \frac{t^{r-\frac{1+r}{p'}}}{1+t}\,dt\le Cu(y)^p. \end{aligned}$$
This shows the assertion. \(\square \)
From now on, we use the notation
$$\begin{aligned} D^{k,\widetilde{k},s}_{r}(I):= H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+\widetilde{k}})&\cap H^{k+\widetilde{k}}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s}),\nonumber \\ D^{k,2m,s}_{r,B}(I):= \{u\in H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+2m})&\cap H^{k+2m}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s}):\nonumber \\&{\text {tr}}_{x_n=0} B_j(D)u=0\text { for all }j=1,\ldots ,m\} \end{aligned}$$
for \(p\in (1,\infty )\), \(k,\widetilde{k}\in [0,\infty )\), \(s\in \mathbb {R}\), \(r\in (-1,p-1)\) and \(I\in \{\mathbb {R}_+,\mathbb {R}\}\). Moreover, we endow both spaces with the norms
$$\begin{aligned} \Vert u\Vert _{D^{k,\widetilde{k},s}_{r}(I)}&=\max \{\Vert u\Vert _{H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+\widetilde{k}})}, \Vert u\Vert _{H^{k+\widetilde{k}}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s})} \},\\ \Vert u\Vert _{D^{k,2m,s}_{r,B}(I)}&=\max \{\Vert u\Vert _{H_p^{k}(I,|{\text {pr}}_n|^r,\mathscr {A}^{s+2m})}, \Vert u\Vert _{H^{k+2m}_p(I,|{\text {pr}}_n|^r,\mathscr {A}^{s})} \}, \end{aligned}$$
respectively, so that \((D^{k,\widetilde{k},s}_{r}(I),\Vert \cdot \Vert _{D^{k,\widetilde{k},s}_{r}(I)})\) and \((D^{k,2m,s}_{r,B}(I),\Vert \cdot \Vert _{D^{k,2m,s}_{r,B}(I)})\) are Banach spaces.
Proposition 4.20
Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(k\in \mathbb {N}_0\) such that \(k\le \min \{\beta _n:\beta \in \mathbb {N}_0^n,|\beta |=m_j,b_\beta ^j\ne 0\}\). Let further \(u\in D^{k,2m,s}_{r}(\mathbb {R}_+)\) and \(\theta \in [0,1]\) such that \(2m\theta \in \mathbb {N}_0\). Then, for all \(\sigma >0\) there is a constant \(C>0\) such that we have the estimate
$$\begin{aligned} \mathcal {R}(\{\lambda ^{\theta }{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n}B_j(D):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \})\le C \end{aligned}$$
where the \(\mathcal {R}\)-bound is taken in
$$\begin{aligned} \mathcal {B}(D^{k,2m,s}_{r}(\mathbb {R}_+),D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+)). \end{aligned}$$
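In the two borderline cases for \(\theta \), the target space takes a particularly simple form, which is how the proposition is applied in the resolvent estimates of the next section: by the definition of \(D^{k,\widetilde{k},s}_{r}(I)\) above,
$$\begin{aligned} D^{k,0,s}_{r}(\mathbb {R}_+)=H^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s}),\qquad D^{k,2m,s}_{r}(\mathbb {R}_+)=H_p^{k}(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})\cap H^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s}), \end{aligned}$$
so \(\theta =1\) gives \(\mathcal {R}\)-boundedness of \(\{\lambda {\text {Poi}}_j(\lambda ){\text {tr}}_{x_n}B_j(D):\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma \}\) in \(\mathcal {B}(D^{k,2m,s}_{r}(\mathbb {R}_+),H^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s}))\), while \(\theta =0\) gives it in \(\mathcal {B}(D^{k,2m,s}_{r}(\mathbb {R}_+))\).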
The proof uses an approach which is sometimes referred to as the Volevich trick. This approach is already standard in the treatment of parameter-elliptic and parabolic boundary value problems in classical Sobolev spaces; see for example Lemma 7.1 in [8] and how it is used to obtain the results therein. The idea is to use the fundamental theorem of calculus in the normal direction and to apply the boundedness of the integral operator
from Lemma 4.19. Using these ideas in connection with Corollary 4.7, we can carry out the following computation: Let \((\varepsilon _n)_{n\in \mathbb {N}}\) be a Rademacher sequence on the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), \(N\in \mathbb {N}\), \(\lambda _1,\ldots ,\lambda _N\in \Sigma _{\phi }\) with \(|\lambda _l|\ge \sigma \) and \(u_1,\ldots ,u_N\in H^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})\cap H^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})\). Then, we obtain
$$\begin{aligned}&\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+))}\\&\quad \lesssim \sum _{\widetilde{k}=0}^k\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }D_{x_n}^{\widetilde{k}}{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m(1-\theta )})}\\&\qquad + \sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }D_{x_n}^{\widetilde{k}}{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})}\\&\quad \le \sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad +\sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][\partial _{y_n}B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad +\sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\qquad + \sum _{\widetilde{k}=0}^{k+(1-\theta )2m}\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][\partial _{y_n}B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}. \end{aligned}$$
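The second step in this chain is where the Volevich trick enters: schematically, for fixed \(x_n>0\) and sufficiently decaying integrands, the fundamental theorem of calculus in the normal variable together with the product rule gives
$$\begin{aligned} D_{x_n}^{\widetilde{k}}\big [{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l\big ](x_n) =&-\int _{\mathbb {R}_+}[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\\&-\int _{\mathbb {R}_+}[D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][\partial _{y_n}B_j(D) u_l](\,\cdot \,,y_n)\,dy_n, \end{aligned}$$
since the boundary term at \(y_n=\infty \) vanishes while the one at \(y_n=0\) reproduces the left-hand side; the factors \(\varepsilon _l\lambda _l^{\theta }\) and the sums over \(l\) and \(\widetilde{k}\) are carried along unchanged, which produces the four families of terms that are now estimated separately.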
In order to keep the notation shorter, we continue the computation with just the first of the four terms. The steps we would have to carry out for the other three terms are almost exactly the same, with just minor changes to the parameters. We obtain
$$\begin{aligned}&\sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \int _{\mathbb {R}_+}\sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\,dy_n\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \le \sum _{\widetilde{k}=0}^k\left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg ( \int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }[\partial _{y_n}D_{x_n}^{\widetilde{k}}{\text {ev}}_{x_n+y_n}{\text {Poi}}_j][B_j(D) u_l](\,\cdot \,,y_n)\bigg \Vert _{\mathscr {A}^{s+2m(1-\theta )}}\,dy_n\bigg )^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \lesssim \left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg ( \int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l\frac{1}{x_n+y_n}[B_j(D) u_l](\,\cdot \,,y_n)\bigg \Vert _{\mathscr {A}^{s+k+2m-m_j}}\,dy_n\bigg )^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \lesssim \left( \int _{\Omega }\int _{\mathbb {R}_+}\bigg \Vert \sum _{l=1}^N \varepsilon _l[B_j(D) u_l](\,\cdot \,,x_n)\bigg \Vert _{\mathscr {A}^{s+k+2m-m_j}}^px_n^r\,\mathrm{d}x_n\,d\mathbb {P}\right) ^{1/p}\\&\quad \le \sum _{|\beta |=m_j}\bigg \Vert b_{\beta }^j\partial _n^{\beta _n}\partial _{x'}^{\beta '}\sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;L_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-m_j}))}\\&\quad \lesssim \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n}))}. \end{aligned}$$
From the second to the third line, we used Corollary 4.7, from the third to the fourth line we used Lemma 4.19 and in the last step we used that \(k\le \min \{\beta _n:\beta \in \mathbb {N}_0^n,|\beta |=m_j,b_\beta ^j\ne 0\}\). The other three terms above can either also be estimated by
$$\begin{aligned} \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n}))} \end{aligned}$$
or by
$$\begin{aligned} \sum _{\beta _n=k}^{m_j}\bigg \Vert \sum _{l=1}^N \varepsilon _l u_l\bigg \Vert _{L_p(\Omega ;H^{\beta _n+1}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+k+2m-\beta _n-1}))} \end{aligned}$$
if the derivative \(\partial _{y_n}\) falls on \(B_j(D)u_l\) instead of on \({\text {Poi}}_j\). Since \(m_j< 2m\), we obtain the estimate
$$\begin{aligned} \left\| \sum _{l=1}^N \varepsilon _l\lambda _l^{\theta }{\text {Poi}}_j{\text {tr}}_{x_n=0}B_j(D) u_l \right\| _{L_p(\Omega ;D^{k,2m(1-\theta ),s}_{r}(\mathbb {R}_+))}\lesssim \left\| \sum _{l=1}^N \varepsilon _l u_l \right\| _{L_p(\Omega ;D^{k,2m,s}_{r}(\mathbb {R}_+))}. \end{aligned}$$
Resolvent estimates
Now we study the resolvent problem, i.e., (1-1) with \(g_j=0\). We show that the corresponding operator is \(\mathcal {R}\)-sectorial and thus has the property of maximal regularity in the UMD case. But first, we prove the \(\mathcal {R}\)-sectoriality in \(\mathbb {R}^n\).
Theorem 5.1
Let \(k,s\in \mathbb {R}\). Suppose that E satisfies Pisier's property \((\alpha )\) if one of the scales \(\mathscr {A}, \mathscr {B}\) belongs to the Bessel potential scale. Then, for all \(\sigma >0\) the realization of \(A(D)-\sigma \) in \(\mathscr {B}^k(\mathscr {A}^s)\) given by
$$\begin{aligned} A(D)-\sigma :\mathscr {B}^k(\mathscr {A}^s) \supset \mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})\rightarrow \mathscr {B}^k(\mathscr {A}^s),\, u\mapsto A(D)u-\sigma u \end{aligned}$$
is \(\mathcal {R}\)-sectorial in \(\Sigma _{\phi }\) and there is a constant \(C>0\) such that the estimate
$$\begin{aligned} \Vert u\Vert _{\mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})}\le C\Vert (\lambda +\sigma -A(D))u\Vert _{\mathscr {B}^k(\mathscr {A}^s)} \end{aligned}$$
holds for all \(\lambda \in \Sigma _{\phi }\).
It was shown in [22, Lemma 5.10] that
$$\begin{aligned}&\mathcal {R}(\{\langle \xi \rangle ^{|\alpha |}D^{\alpha }_{\xi } (s_1+s_2\lambda +s_3|\xi |^{2m})(\lambda +1-A(\xi ))^{-1}\nonumber \\&:\lambda \in \Sigma _{\phi },\,\xi \in \mathbb {R}^n\})<\infty \quad \text {in }\mathcal {B}(E) \end{aligned}$$
holds for all multi-indices \(\alpha \in \mathbb {N}_0^n\) and all \((s_1,s_2,s_3)\in \mathbb {R}^3\). Note that the authors of [22] use a different convention concerning the sign of A. Taking \((s_1,s_2,s_3)=(0,1,0)\) together with the iterated \(\mathcal {R}\)-bounded version of Mikhlin's theorem, Proposition 3.8, shows that
$$\begin{aligned} \mathcal {R}(\{\lambda (\lambda +1-A(D))^{-1}:\lambda \in \Sigma _{\phi }\})<\infty . \end{aligned}$$
Thus, it only remains to prove that \(\mathscr {B}^{k+2m}(\mathscr {A}^s)\cap \mathscr {B}^k(\mathscr {A}^{s+2m})\) is the right domain and that (5-1) holds. But (5-2) with \((s_1,s_2,s_3)=(1,0,1)\) shows that
$$\begin{aligned}{}[\xi \mapsto (1+|\xi |^{2m})(\lambda +1-A(\xi ))^{-1}]\in S^0_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E)) \end{aligned}$$
and hence
$$\begin{aligned}{}[\xi \mapsto (\lambda +1-A(\xi ))^{-1}]\in S^{-2m}_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(E)) \end{aligned}$$
uniformly in \(\lambda \in \Sigma _{\phi }\). Now the assertion follows from Proposition 3.11. \(\square \)
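To illustrate the symbol estimates that enter here, consider, purely as an example with \(E=\mathbb {C}\), the Laplacian \(A(D)=\Delta \), so that \(2m=2\) and \(A(\xi )=-|\xi |^2\). For every \(\phi \in (0,\pi )\) there is a constant \(C_{\phi }>0\) with
$$\begin{aligned} |(\lambda +1-A(\xi ))^{-1}|=|(\lambda +1+|\xi |^{2})^{-1}|\le \frac{C_{\phi }}{1+|\lambda |+|\xi |^{2}}\qquad (\lambda \in \Sigma _{\phi },\,\xi \in \mathbb {R}^n), \end{aligned}$$
and analogous bounds hold for the \(\xi \)-derivatives, so that \([\xi \mapsto (\lambda +1-A(\xi ))^{-1}]\) lies in \(S^{-2}_{\mathcal {R}}(\mathbb {R}^n;\mathcal {B}(\mathbb {C}))\) uniformly in \(\lambda \in \Sigma _{\phi }\); in this scalar situation the \(\mathcal {R}\)-bounds reduce to uniform bounds up to a constant.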
Remark 5.2
If both \(\mathscr {A}\) and \(\mathscr {B}\) belong to the Bessel potential scale, then Theorem 5.1 can be improved in the following way: Lemma 3.9 together with Fubini's theorem yields that
$$\begin{aligned} \langle D' \rangle ^{-s}\langle D_n\rangle ^{-k} L_p(\mathbb {R}^n_x,w_0\otimes w_1;E){\mathop {\rightarrow }\limits ^{\eqsim }} H^k_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E)). \end{aligned}$$
Moreover, we have
$$\begin{aligned}&\langle D' \rangle ^{-s}\langle D_n\rangle ^{-k} H^{2m}_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\\&\quad =H^{k+2m}_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\cap H^{k}_p(\mathbb {R}_{x_n},w_1;H^{s+2m}_p(\mathbb {R}^{n-1}_{x'},w_0,E)). \end{aligned}$$
But it is well known that the realization of A(D) even admits a bounded \(\mathcal {H}^{\infty }\)-calculus in \( L_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\) with domain \(H^{2m}_p(\mathbb {R}^n_x,w_0\otimes w_1;E)\) no matter whether Pisier's property \((\alpha )\) is satisfied or not (recall that the weights in the Bessel potential case are in \(A_p\)). This can be derived by using the weighted versions of Mikhlin's theorem in the proof of [8, Theorem 5.5]. Since \(\langle D' \rangle ^{-s}\langle D_n\rangle ^{-k}\) is an isomorphism, A(D) also admits a bounded \(\mathcal {H}^{\infty }\)-calculus in \(H^k_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\) with domain
$$\begin{aligned} H^{k+2m}_p(\mathbb {R}_{x_n},w_1;H^{s}_p(\mathbb {R}^{n-1}_{x'},w_0,E))\cap H^{k}_p(\mathbb {R}_{x_n},w_1;H^{s+2m}_p(\mathbb {R}^{n-1}_{x'},w_0,E)), \end{aligned}$$
even if Pisier's property \((\alpha )\) is not satisfied.
In the proof of Theorem 5.1, one can also use Proposition 3.7 instead of Proposition 3.8 if one only needs sectoriality. In this case, we can again drop the assumption that E has to satisfy Pisier's property \((\alpha )\).
Remark 5.3
For the \(\mathcal {R}\)-sectoriality of the boundary value problem, which we are going to derive in Theorem 5.4, we have a restriction on the regularity in normal direction. It may not be larger than \(k_{\max }\in \mathbb {N}_0\), which we define by
$$\begin{aligned} k_{\max }:=\min \{\beta _n|\,\exists j\in \{1,\ldots ,m\}\exists \beta \in \mathbb {N}_0^n,|\beta |=m_j: b^j_{\beta }\ne 0\}, \end{aligned}$$
i.e., \(k_{\max }\) is the minimal order in normal direction of all nonzero differential operators which appear in any of the boundary operators
$$\begin{aligned} B_j(D)=\sum _{|\beta |=m_j} b^j_{\beta }D^{\beta }\quad (j=1,\ldots ,m). \end{aligned}$$
Therefore, if there is a nonzero term with no normal derivatives in one of the \(B_1,\ldots , B_m\), then \(k_{\max }=0\). In particular, it holds that \(k_{\max }=0\) if one of the operators \(B_1,\ldots , B_m\) corresponds to the Dirichlet trace at the boundary. This includes the case of the Dirichlet Laplacian. On the other hand, for the Neumann Laplacian we have \(k_{\max }=1\). In this sense, our results will be analogous to the usual isotropic case: We will be able to derive \(\mathcal {R}\)-sectoriality of the Neumann Laplacian in \(L_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\) and \(H_{p}^1(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\), but for the Dirichlet Laplacian we can only derive it in \(L_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\).
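A case strictly between these two extremes, given here only as an illustration and assuming that the choice is compatible with Assumptions 1.1 and 1.2, is an oblique derivative condition for a second-order operator, say \(2m=2\), \(m=1\) and
$$\begin{aligned} B_1(D)=D_n+\sum _{i=1}^{n-1}b_i D_i\quad \text {with }b_1,\ldots ,b_{n-1}\in \mathbb {C}. \end{aligned}$$
If all \(b_i\) vanish, this is the Neumann condition and \(k_{\max }=1\); as soon as some \(b_i\ne 0\), there is a nonzero coefficient with multi-index \(\beta =e_i\), i.e., \(\beta _n=0\), and hence \(k_{\max }=0\).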
Theorem 5.4
Recall Assumptions 1.1 and 1.2. Suppose that E satisfies Pisier's property \((\alpha )\). Let \(k\in [0,k_{\max }]\cap \mathbb {N}_0\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(s\in \mathbb {R}\). We define the operator
$$\begin{aligned} A_B:H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\supset D(A_B) \rightarrow H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s),\,u\mapsto A(D)u \end{aligned}$$
on the domain
$$\begin{aligned} D(A_B):=\{ u\in H_{p}^{k}(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s):&A(D) u\in H_{p}^{k}(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s),\\&{\text {tr}}_{x_n=0} B_j(D)u=0\text { for all }j=1,\ldots ,m\}. \end{aligned}$$
Then, for all \(\sigma >0\) we have that \(A_B-\sigma \) is \(\mathcal {R}\)-sectorial in \(\Sigma _{\phi }\). Moreover, there is a constant C such that for all \(\lambda \in \Sigma _\phi \) with \(|\lambda |\ge \sigma \) we have the estimate
$$\begin{aligned} \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \le C \Vert (\lambda -A(D))u\Vert _{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}. \end{aligned}$$
In particular, it holds that \(D^{k,2m,s}_{r,B}(\mathbb {R}_+)=D(A_B)\).
We define
$$\begin{aligned} R(\lambda )f=r_+(\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} f - \sum _{j=1}^m{\text {pr}}_{1}{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n=0}B_j(D)(\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} f, \end{aligned}$$
where \(\lambda \in \Sigma _{\phi }\), \(f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\), \((\lambda -A(D))_{\mathbb {R}^n}^{-1}\) denotes the resolvent on \(\mathbb {R}^n\) as in Theorem 5.1, \(r_+\) denotes the restriction of a distribution on \(\mathbb {R}^n\) to \(\mathbb {R}^n_+\) and \(\mathscr {E}\) denotes an extension operator mapping \(H_{p}^t(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\) into \(H_{p}^t(\mathbb {R},|{\text {pr}}_n|^r;\mathscr {A}^s)\) for arbitrary \(t\in \mathbb {R}\). \(\mathscr {E}\) can for example be chosen to be Seeley's extension, see [43].
Combining Proposition 4.20 with \(\theta =1\) and Theorem 5.1 yields that the set
$$\begin{aligned} \{\lambda R(\lambda ):\lambda \in \Sigma _{\phi },|\lambda |\ge \sigma \}\subset \mathcal {B}(H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)) \end{aligned}$$
is \(\mathcal {R}\)-bounded. Next, we show that \(R(\lambda )\) is indeed the resolvent so that we obtain \(\mathcal {R}\)-sectoriality. To this end, we show that
$$\begin{aligned} R(\lambda ):H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \rightarrow D(A_B) \end{aligned}$$
is a bijection with inverse \(\lambda -A_B\). Let \(f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). Since
$$\begin{aligned} {\text {tr}}_{x_n=0}B_i(D){\text {pr}}_1{\text {Poi}}_j(\lambda )=\delta _{i,j}{\text {id}}_{\mathscr {A}} \end{aligned}$$
by construction, it follows from applying \(B_j(D)\) to (5-4) that \({\text {tr}}_{x_n=0}B_j(D)R(\lambda )f=0\) for all \(j=1,\ldots ,m\). Moreover, we have \((\lambda -A(D)){\text {pr}}_1{\text {Poi}}_j(\lambda )=0\) by the definition of \({\text {Poi}}_j(\lambda )\). This shows that
$$\begin{aligned} (\lambda -A(D))R(\lambda )={\text {id}}_{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)} \end{aligned}$$
and therefore
$$\begin{aligned} A(D)R(\lambda )f = \lambda R(\lambda )f - (\lambda -A(D))R(\lambda )f=\lambda R(\lambda )f-f. \end{aligned}$$
But it is already contained in (5-5) that \( \lambda R(\lambda )f\in H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). This shows that \(R(\lambda )\) maps \(H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\) into \(D(A_B)\). In addition, (5-6) shows the injectivity of \(R(\lambda )\). But also
$$\begin{aligned} (\lambda -A(D)):D(A_B)\rightarrow H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \end{aligned}$$
is injective as a consequence of the Lopatinskii–Shapiro condition. Hence, there is a mapping
$$\begin{aligned} T(\lambda ):H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\rightarrow D(A_B) \end{aligned}$$
such that \(T(\lambda )(\lambda -A(D))={\text {id}}_{D(A_B)}\). But from this, we obtain
$$\begin{aligned} T(\lambda )=T(\lambda )(\lambda -A_B)R(\lambda )=R(\lambda ) \end{aligned}$$
and thus
$$\begin{aligned} R(\lambda )(\lambda -A_B)={\text {id}}_{D(A_B)},\quad (\lambda -A_B)R(\lambda )={\text {id}}_{H_{p}^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}, \end{aligned}$$
i.e., \(R(\lambda )=(\lambda -A_B)^{-1}\) is indeed the resolvent and we obtain the \(\mathcal {R}\)-sectoriality.
It remains to show that the estimate (5-3) holds. To this end, we can again use the formula for the resolvent (5-4) in connection with Proposition 4.20 (\(\theta =0\)) and Theorem 5.1. Then, we obtain for \(u\in D(A_B)\) that
$$\begin{aligned} \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}&\le \Vert r_+ (\lambda -A(D))_{\mathbb {R}^n}^{-1}\mathscr {E}(\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \\&\quad +\sum _{j=1}^m\Vert {\text {pr}}_{1}{\text {Poi}}_j(\lambda ){\text {tr}}_{x_n=0}B_j(D)\\&\quad (\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} (\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}\\&\lesssim \Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r\mathscr {A}^s)}\\&\quad +\sum _{j=1}^m\Vert r_+ (\lambda -A(D))_{\mathbb {R}^n}^{-1} \mathscr {E} (\lambda -A_B)u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)}\\&\lesssim \Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r\mathscr {A}^s)}. \end{aligned}$$
This also implies that \(D(A_B)=D^{k,2m,s}_{r,B}(\mathbb {R}_+)\). Indeed, it follows from Proposition 3.11 that
$$\begin{aligned}&\Vert (\lambda -A_B)u\Vert _{H^k(\mathbb {R}_+,|{\text {pr}}_n|^r\mathscr {A}^s)}\lesssim \Vert (\lambda -A(D))\mathscr {E}u\Vert _{H^k(\mathbb {R},|{\text {pr}}_n|^r\mathscr {A}^s)}\\&\lesssim \Vert \mathscr {E}u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R})}\lesssim \Vert u\Vert _{D^{k,2m,s}_{r,B}(\mathbb {R}_+)} \end{aligned}$$
for \(u\in D^{k,2m,s}_{r,B}(\mathbb {R}_+)\). Hence, we have
$$\begin{aligned} D^{k,2m,s}_{r,B}(\mathbb {R}_+)\hookrightarrow D(A_B) \hookrightarrow D^{k,2m,s}_{r,B}(\mathbb {R}_+). \end{aligned}$$
If E is a UMD space, then the results of Theorem 5.4 also hold for \(k\in [0,k_{\max }]\), i.e., k does not have to be an integer. This follows from complex interpolation, see Proposition 2.3 and [31, Proposition 5.6]. Note that unlike in Proposition 2.3 we can not replace the UMD space E by a K-convex Banach space, since the UMD property is needed for the complex interpolation of Bessel potential spaces in [31, Proposition 5.6]. Moreover, in Assumption 1.2 we require E to be a UMD space if one of the spaces in tangential or normal direction belongs to the Bessel potential scale.
Two canonical applications of Theorem 5.4 are the Dirichlet and the Neumann Laplacian.
Let \(E=\mathbb {C}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\) and \(s\in \mathbb {R}\).
We consider the Laplacian with Dirichlet boundary conditions
$$\begin{aligned} \Delta _D:L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\supset D(\Delta _D)\rightarrow L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s) \end{aligned}$$
on the domain \(D(\Delta _D)\) given by
$$\begin{aligned} D(\Delta _D):=\{u\in H^{2}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\cap L_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s+2m}): {\text {tr}}_{x_n=0}u=0\}. \end{aligned}$$
For all \(\sigma >0\), it holds that \(\Delta _D-\sigma \) is \(\mathcal {R}\)-sectorial in any sector \(\Sigma _{\psi }\) with \(\psi \in (0,\pi )\).
Let \(k\in \{0,1\}\). We consider the Laplacian with Neumann boundary conditions
$$\begin{aligned} \Delta _N:H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\supset D(\Delta _N)\rightarrow H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s) \end{aligned}$$
on the domain \(D(\Delta _N)\) given by
$$\begin{aligned} D(\Delta _N):=\{u\in H^{k+2}_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^s)\cap H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{s+2m}): {\text {tr}}_{x_n=0}\partial _n u=0\}. \end{aligned}$$
For all \(\sigma >0\), it holds that \(\Delta _N-\sigma \) is \(\mathcal {R}\)-sectorial in any sector \(\Sigma _{\psi }\) with \(\psi \in (0,\pi )\).
Both statements follow directly from Theorem 5.4. \(\square \)
Application to boundary value problems
Theorem 6.1
Let \(s_1,\ldots ,s_m\in \mathbb {R}\) and \(g_j\in \mathscr {A}^{s_j}\) \((j=1,\ldots ,m)\). Then, the equation
$$\begin{aligned} \lambda u-A(D)u&=0\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$
has a unique solution \(u\in \mathscr {S}'(\mathbb {R}^n_+;E)\) for all \(\lambda \in \Sigma _{\phi }\). This solution satisfies
$$\begin{aligned} u\in \sum _{j=1}^m\bigcap _{r,t\in \mathbb {R},\,k\in \mathbb {N}_0,\,p\in [1,\infty )\atop r-p[t+k-m_j-s_j]_+>-1} W_p^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t). \end{aligned}$$
Moreover, for all \(\sigma >0\), \(t,r\in \mathbb {R}\), \(p\in [1,\infty )\) and \(k\in \mathbb {N}_0\) such that \(r-p[t+k-m_j-s_j]_+>-1\) for all \(j=1,\ldots ,m\) there is a constant \(C>0\) such that
$$\begin{aligned} \Vert u\Vert _{W_p^k(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t)}\le C\sum _{j=1}^m |\lambda |^{\frac{-1-r+p(k-m_j)+p[t-s_j]_+}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}} \end{aligned}$$
for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \).
All the assertions follow directly from Theorem 4.16 (a). \(\square \)
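To make the decay rate concrete, consider, purely as an illustration, the Dirichlet problem for the Laplacian, i.e., \(2m=2\), \(m=1\), \(m_1=0\) and boundary datum \(g\in \mathscr {A}^{s_1}\). Choosing \(k=0\), \(r=0\) and \(t=s_1\) in the estimate above gives
$$\begin{aligned} \Vert u\Vert _{L_p(\mathbb {R}_+;\mathscr {A}^{s_1})}\le C|\lambda |^{-\frac{1}{2p}}\Vert g\Vert _{\mathscr {A}^{s_1}}\qquad (\lambda \in \Sigma _{\phi },\,|\lambda |\ge \sigma ), \end{aligned}$$
so the solution decays in \(\lambda \), but with a rate that deteriorates as \(p\rightarrow \infty \).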
Note that the smoothness parameters k and t of the solution in Theorem 6.1 can be chosen arbitrarily large if one accepts a strong singularity at the boundary. On the other hand, if t is chosen small enough, then the singularity can be removed.
In Theorem 6.1, we can take \(k=1+\max _{j=1,\ldots ,m}m_j\), \(r=0\) and t such that \(r-p[t+k-m_j-s_j]_+>-1\) for all \(j=1,\ldots ,m\). This means that the boundary conditions \(B_j(D)u=g_j\) can be understood in a classical sense. Indeed, [37, Proposition 7.4] in connection with [37, Proposition 3.12] shows that
$$\begin{aligned} W^k_p(\mathbb {R}_+;\mathscr {A}^t)\hookrightarrow BUC^{k-1}(\mathbb {R}_+;\mathscr {A}^t). \end{aligned}$$
Hence, \({\text {tr}}_{x_n=0}B_j(D)u\) can be defined in the classical sense.
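Concretely, with \(k=1+\max _{j=1,\ldots ,m}m_j\) and \(r=0\), the condition \(r-p[t+k-m_j-s_j]_+>-1\) holds for all \(j\) as soon as
$$\begin{aligned} t\le \min _{j=1,\ldots ,m}(s_j+m_j)-1-\max _{j=1,\ldots ,m}m_j, \end{aligned}$$
since then \(t+k-m_j-s_j\le 0\) and hence \([t+k-m_j-s_j]_+=0\) for every \(j\); smaller values of \(t\) work as well.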
One can again use interpolation techniques or one can directly work with Corollary 4.18 in order to obtain results for the Bessel potential or the Besov scale in normal direction. Note however that this comes with some restrictions on the weight \(|{\text {pr}}_n|^r\).
As defined in Remark 5.3 we set
$$\begin{aligned} k_{\max }:=\min \{\beta _n|\,\exists j\in \{1,\ldots ,m\}\exists \beta \in \mathbb {N}_0^n,|\beta |= m_j: b^j_{\beta }\ne 0\}. \end{aligned}$$
Let \(s\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\), \(k\in [0,k_{\max }]\cap \mathbb {N}_0\) and \(f\in W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)\). Let further \(s_j\in (s+2m+k-m_j-\frac{1+r}{p},\infty )\) and \(g_j\in \mathscr {A}^{s_j}\) \((j=1,\ldots ,m)\). Then, the equation
$$\begin{aligned} \lambda u-A(D)u&=f\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$
has a unique solution
$$\begin{aligned} u\in W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \cap W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m}) \end{aligned}$$
and for all \(\sigma >0\) there is a constant \(C>0\) such that for all \(\lambda \in \Sigma _{\phi }\) with \(|\lambda |\ge \sigma \) we have the estimate
$$\begin{aligned}&\Vert u\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\Vert u\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}+|\lambda |\,\Vert u\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\\&\quad \le C\left( \Vert f\Vert _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k+2m-m_j)}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}}\right) . \end{aligned}$$
By Theorem 5.4, we have a unique solution
$$\begin{aligned} u_1\in W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s) \cap W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m}) \end{aligned}$$
to the equation
$$\begin{aligned} \lambda u_1-A(D)u_1&=f\quad \text {in }\mathbb {R}^n_+,\\ B_j(D)u_1&=0\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$
which satisfies the estimate
$$\begin{aligned}&\Vert u_1\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}+\Vert u_1\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}\\&+|\lambda |\,\Vert u_1\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}\le C\Vert f\Vert _{W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}. \end{aligned}$$
By Remark 5.2(b), we do not need Pisier's property \((\alpha )\) for this. Moreover, by Theorem 6.1 the unique solution \(u_2\) to the equation
$$\begin{aligned} \lambda u_2-A(D)u_2&=0\quad \;\text {in }\mathbb {R}^n_+,\\ B_j(D)u_2&=g_j\quad \text {on }\mathbb {R}^{n-1} \end{aligned}$$
satisfies the estimates
$$\begin{aligned} \Vert u_2\Vert _{W^{k+2m}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^s)}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k+2m-m_j)}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}},\\ \Vert u_2\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s+2m})}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k-m_j)}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}},\\ \Vert u_2\Vert _{W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{s})}&\le C \sum _{j=1}^m|\lambda |^{\frac{-1-r+p(k-m_j)}{2mp}}\Vert g_j\Vert _{\mathscr {A}^{s_j}}. \end{aligned}$$
Note that by our choice of \(s_j\), we have
$$\begin{aligned}&r-p[s+2m+k-m_j-s_j]_+>r\\&-p(s+2m+k-m_j-s-2m-k+m_j+\tfrac{1+r}{p})=-1 \end{aligned}$$
for \(s+2m+k-m_j-\frac{1+r}{p}<s_j\le s+2m+k-m_j\) and
$$\begin{aligned} r-p[s+2m+k-m_j-s_j]_+=r>-1 \end{aligned}$$
for \(s_j\ge s+2m+k-m_j\). The unique solution u of the full system is given by \(u=u_1+u_2\) and therefore summing up yields the assertion. \(\square \)
Theorem 6.4
Recall from Assumption 1.2 that \(\mathscr {C}\) stands for the Bessel potential, Besov, Triebel–Lizorkin or one of their dual scales and that we impose some conditions on the corresponding parameters. Let \(\sigma >0\), \(s_1,\ldots ,s_m,l_1,\ldots ,l_m\in \mathbb {R}\) and \(g_j\in \mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})\). Let further
$$\begin{aligned} P_j=\{(r,t_0,l,k,p): t_0,l\in \mathbb {R},&r\in (-1,\infty ),\,k\in \mathbb {N}_0,\,p\in [1,\infty ),\\&r-p[t_0+k-m_j-s_j]_+>-1,\\&r-2mp(l-l_j)-p(k-m_j)-p[t_0-s_j]_+>-1\} \end{aligned}$$
the set of admissible parameters. Then, the equation
$$\begin{aligned} \partial _t u +\sigma u- A(D) u&= 0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }\mathbb {R}\times \mathbb {R}^{n-1}, \end{aligned}$$
has a unique solution \(u\in \mathscr {S}'(\mathbb {R}\times \mathbb {R}^n_+;E)\). This solution satisfies
$$\begin{aligned} u\in \sum _{j=1}^m\bigcap _{(r,t_0,l,k,p)\in P_j}\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$
and for all \((r,t_0,l,k,p)\in \bigcap _{j=1}^mP_j\) there is a constant \(C>0\) independent of \(g_1,\ldots ,g_m\) such that
$$\begin{aligned} \Vert u\Vert _{\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))}\le C\sum _{j=1}^m\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$
We apply the Fourier transform \(\mathscr {F}_{t\mapsto \tau }\) in time to (6-1) and obtain
$$\begin{aligned} (\sigma +i\tau ) \hat{u}- A(D) \hat{u}&= 0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^n_+,\nonumber \\ B_j(D)\hat{u}&=\hat{g}_j\quad \text {on }\mathbb {R}\times \mathbb {R}^{n-1}. \end{aligned}$$
Hence, the solution of (6-1) is given by
$$\begin{aligned} u(t,x)=\sum _{j=1}^m\mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j. \end{aligned}$$
From Theorem 4.16 together with Lemma 2.5, it follows that
$$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}+\varepsilon }_{\mathcal {R}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$
for arbitrary \(\varepsilon >0\) if the parameters satisfy \(r-p[{t_0}+k-m_j-s_j]_+>-1\). Hence, the parameter-independent version of Proposition 3.6 (as in Remark 3.3 (g)) yields
$$\begin{aligned}&\mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\in \mathscr {C}^{l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}}\\&\quad (\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$
as well as the estimate
$$\begin{aligned}&\Vert \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\Vert _{\mathscr {C}^{l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})}\\&\quad \le C\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$
But the condition
$$\begin{aligned} r-2mp(l-l_j)-p(k-m_j)-p[{t_0}-s_j]_+>-1 \end{aligned}$$
implies
$$\begin{aligned} l\le l_j-\varepsilon +\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}, \end{aligned}$$
if \(\varepsilon >0\) is chosen small enough. Therefore, we obtain
$$\begin{aligned} \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\in \mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})) \end{aligned}$$
and the estimate
$$\begin{aligned} \Vert \mathscr {F}_{t\rightarrow \tau }^{-1}{\text {Poi}}_j(\sigma +i\tau )\mathscr {F}_{t\rightarrow \tau }g_j\Vert _{\mathscr {C}^{l}(\mathbb {R},w_2;W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})}\le C\Vert g_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},w_2;\mathscr {A}^{s_j})}. \end{aligned}$$
if \((r,{t_0},l,k,p)\in P_j\). Taking the sum over all \(j=1,\ldots ,m\) yields the assertion. \(\square \)
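For the step from (6-3) to (6-4) in the proof above, note that (6-3) is equivalent to a strict inequality for \(l\):
$$\begin{aligned} r-2mp(l-l_j)-p(k-m_j)-p[{t_0}-s_j]_+>-1\;\Longleftrightarrow \;l<l_j+\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}, \end{aligned}$$
so that every \(\varepsilon \in \big (0,\,l_j+\frac{1+r-p(k-m_j)-p[{t_0}-s_j]_+}{2mp}-l\big ]\) yields (6-4).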
If \(\mathscr {C}\) does not stand for the Bessel potential scale or if \(p>\max \{p_0,q_0,q_E\}\) where \(q_E\) denotes the cotype of E, then the parameter set \(P_j\) in Theorem 6.4 can potentially be chosen slightly larger, namely
$$\begin{aligned} P_j=\{(r,{t_0},l,k,p): {t_0},l\in \mathbb {R},&r\in (-1,\infty ),\,k\in \mathbb {N}_0,\,p\in [1,\infty ),\\&r-p[{t_0}+k-m_j-s_j]_+>-1,\\&r-2mp(l-l_j)-p(k-m_j)-p[{t_0}-s_j]_+\ge -1\}. \end{aligned}$$
Indeed, if \(p>\max \{p_0,q_0,q_E\}\), then
$$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}}_{\mathcal {R}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$
by Theorem 4.16. If one continues the proof of Theorem 6.4 with this information, then one will find that the \(\varepsilon \) in (6-4) can be removed so that the inequality (6-3) does not have to be strict. The same holds for the Besov and Triebel–Lizorkin scales, as in this case
$$\begin{aligned}{}[\tau \mapsto {\text {Poi}}_j(\sigma +i\tau )]\in S^{\frac{-1-r+p(k-m_j)+p[{t_0}-s_j]_+}{2mp}}(\mathbb {R},\mathcal {B}(\mathscr {A}^{s_j},W^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}))) \end{aligned}$$
is good enough and holds without restriction on p.
As in Remark 6.2, we can take the trace \({\text {tr}}_{x_n=0}B_j(D)u\) in the classical sense if k is large enough and if l and \({t_0}\) are small enough.
Again, we can use interpolation techniques to extend the result in Theorem 6.4 to the case in which the Bessel potential or Besov scale are taken in normal direction. However, this can only be done for \(r\in (-1,p-1)\).
Theorem 6.6
Let \(\alpha \in (0,1)\), \(T>0\), \(s,t_0\in \mathbb {R}\), \(p\in (1,\infty )\), \(r\in (-1,p-1)\), \(\mu \in (-1,\infty )\), \(v_{\mu }(t)=t^{\mu }\) \((t\in (0,T])\) and \(s_1,\ldots ,s_m,l_1,\ldots ,l_m\in \mathbb {R}\). Assume that \(\mu \in (-1,q_2)\) if \(\mathscr {C}\) belongs to the Bessel potential scale. Let again
$$\begin{aligned} k_{\max }:=\min \{\beta _n|\,\exists j\in \{1,\ldots ,m\}\exists \beta \in \mathbb {N}_0^n,|\beta |= m_j: b^j_{\beta }\ne 0\} \end{aligned}$$
and \(k\in [0,k_{\max }]\cap \mathbb {N}_0\). We further assume that
$$\begin{aligned} l_j>\frac{1+\mu }{q_2}-\frac{1+r}{2mp}+\frac{k-m_j+[t_0-s_j]_+}{2m}\quad \text {and}\quad s_j>t_0+k-m_j-\frac{1+r}{p} \end{aligned}$$
for all \(j=1,\ldots ,m\). Suppose that E satisfies Pisier's property \((\alpha )\).
Then, for all \(u_0\in H_p^k(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\), all \(\alpha \)-Hölder continuous \(f\in C^{\alpha }((0,T);H_p^k(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\) with \(\alpha \in (0,1)\) and \(g_j\in \mathscr {C}^{l_j}([0,T],v_{\mu };\mathscr {A}^{s_j})\) there is a unique solution u of the equation
$$\begin{aligned} \partial _t u- A(D) u&=f\quad \;\text {in }(0,T]\times \mathbb {R}^{n}_+,\nonumber \\ B_j(D)u&=g_j\quad \text {on }(0,T]\times \mathbb {R}^{n-1},\nonumber \\ u(0,\,\cdot \,)&=u_0 \end{aligned}$$
which satisfies
$$\begin{aligned} u&\in C([0,T];H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in \mathscr {C}^{l^*}((0,T],v_{\mu };H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0-2m})),\\ u&\in C^1((0,T];H^{k}_{p}([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in C((0,T];H^{k+2m}_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0+2m})) \end{aligned}$$
for all \(\delta >0\) and some \(l^*\in \mathbb {R}\).
First, we substitute \(v(t,\,\cdot \,)=e^{-\sigma t}u(t,\,\cdot \,)\) for some \(\sigma >0\). Since we work on a bounded time interval [0, T], this multiplication is an automorphism of all the spaces we consider in this theorem. Hence, it suffices to look for a solution of the equation
$$\begin{aligned} \partial _t v+\sigma v- A(D) v&=\widetilde{f}\quad \;\text {in }[0,T]\times \mathbb {R}^{n}_+,\\ B_j(D)v&=\widetilde{g}_j\quad \text {on }[0,T]\times \mathbb {R}^{n-1},\\ v(0,\,\cdot \,)&=u_0, \end{aligned}$$
where \(\widetilde{f}(t)=e^{-\sigma t} f(t)\) and \(\widetilde{g}_j(t)=e^{-\sigma t} g_j(t)\). We split v into two parts \(v=r_{[0,T]}v_1+v_2\) which are defined as follows: \(v_1\) solves the equation
$$\begin{aligned} \partial _t v_1+\sigma v_1- A(D) v_1&=0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^{n}_+,\\ B_j(D)v_1&=\mathscr {E}\widetilde{g}_j\quad \text {on }\mathbb {R}\times \mathbb {R}^{n-1}, \end{aligned}$$
where \(\mathscr {E}\) is a suitable extension operator and \(r_{[0,T]}\) is the restriction to [0, T]. Moreover, \(v_2\) is the solution of
$$\begin{aligned} \partial _t v_2+\sigma v_2- A(D) v_2&=\widetilde{f}\quad \;\text {in }[0,T]\times \mathbb {R}^{n}_+,\nonumber \\ B_j(D)v_2&=0\quad \text {on }[0,T]\times \mathbb {R}^{n-1},\nonumber \\ v_2(0,\,\cdot \,)&=u_0-v_1(0,\,\cdot \,). \end{aligned}$$
For \(v_1\), it follows from Theorem 6.4 that
$$\begin{aligned} v_1\in \sum _{j=1}^m\bigcap _{(r',t',l',k',p')\in P_j}\mathscr {C}^{l'}(\mathbb {R},v_{\mu };W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'})) \end{aligned}$$
and for all \((r',t',l',k',p')\in \bigcap _{j=1}^mP_j\) there is a constant \(C>0\) independent of \(g_1,\ldots ,g_m\) such that
$$\begin{aligned} \Vert v_1\Vert _{\mathscr {C}^{l'}(\mathbb {R},v_{\mu };W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'}))}\le C\sum _{j=1}^m\Vert \mathscr {E}\widetilde{g}_j\Vert _{\mathscr {C}^{l_j}(\mathbb {R},v_{\mu };\mathscr {A}^{s_j})}. \end{aligned}$$
In particular, if \(l'>\frac{1+\mu }{q_2}\) we have \(v_1\in BUC([0,T];W^{k'}_{p'}(\mathbb {R}_+,|{\text {pr}}_n|^{r'};\mathscr {A}^{t'}))\) so that we can take the time trace \(v_1(0)\), see [37, Proposition 7.4]. If condition (6-5) is satisfied, then we can choose \(l>\frac{1+\mu }{q_2}\) small enough such that \((r,t_0,l,k,p)\in \bigcap _{j=1}^mP_j\). Hence, under this condition, the time trace
$$\begin{aligned} v_1(0)\in W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0}) \end{aligned}$$
is well defined. But since we have \(r\in (-1,p-1)\), it follows from Theorem 5.4 that \(A_B-\sigma \) generates a holomorphic \(C_0\)-semigroup \((T(t))_{t\ge 0}\) in \(W^{k}_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^{t_0})\). In addition, \(v_2\) is given by
$$\begin{aligned} v_2(t)=T(t)[u_0-v_1(0,\,\cdot \,)]+\int _{0}^t T(t-s) \widetilde{f}(s)\,ds. \end{aligned}$$
It follows from standard semigroup theory that
$$\begin{aligned} v_2\in C((0,T];D(A_B))\cap C^1((0,T];X)\cap C([0,T];X), \end{aligned}$$
where
$$\begin{aligned}&X=H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}),\quad D(A_B)=D^{k,2m,t_0}_{r,B}(\mathbb {R}_+)\\&\hookrightarrow H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{{t_0}+2m}), \end{aligned}$$
see for example [39, Chapter 4, Corollary 3.3]. Since also \(v_1\in BUC([0,T];W^{k}_{p}(\mathbb {R}_+,|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\), we obtain
$$\begin{aligned} u\in C([0,T];H^{k}_{p}(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})), \end{aligned}$$
and, since \(v_1\) is arbitrarily smooth away from the boundary, also
$$\begin{aligned} u&\in C^1((0,T];H^{k}_{p}([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})),\\ u&\in C((0,T];H^{k+2m}_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0})\cap H^k_p([\delta ,\infty )_{x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0+2m})) \end{aligned}$$
for all \(\delta >0\). Concerning the value of \(l^*\), we note that if \((r,{t_0},l,k,p)\in \bigcap _{j=1}^mP_j\), then also \((r,{t_0}-2m,l-1,k+2m,p)\in \bigcap _{j=1}^mP_j\). Hence, we just have to take \(l^*\le l-1\) such that
$$\begin{aligned}&C((0,T];H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0}))\\&\hookrightarrow \mathscr {C}^{l^*}((0,T],v_{\mu };H^{k+2m}_p(\mathbb {R}_{+,x_n},|{\text {pr}}_n|^{r};\mathscr {A}^{t_0-2m})). \end{aligned}$$
Altogether, this finishes the proof. \(\square \)
While we can treat arbitrary space regularity of the boundary data in Theorem 6.6, it is important to note that (6-5) poses a restriction on the time regularity of the boundary data. Even if we take \({t_0}\le \min _{j=1,\ldots ,m} s_j\), \(k=0\), r very close to \(p-1\) and \(q_2\) very large, we still have the restriction
$$\begin{aligned} l_j>-\frac{1+m_j}{2m}. \end{aligned}$$
In the case of the heat equation with Dirichlet boundary conditions, this would mean that the boundary data needs to have a time regularity strictly larger than \(-\frac{1}{2}\). Having boundary noise in mind, it would be interesting to go beyond this border. It would need further investigation whether this is possible or not. In fact, (6-5) gives a restriction on the time regularity only because we do not allow \(r\ge p-1\), otherwise we could just take r very large and allow arbitrary regularity in time. The reason why we have to restrict to \(r<p-1\) is that we want to apply the semigroup to the time trace \(v_1(0)\). However, until now we can only do this for \(r\in (-1,p-1)\). Hence, if one wants to improve Theorem 6.6 to the case of less time regularity, there are at least two possible directions:
One could try to generalize Theorem 5.4 to the case in which \(r>p-1\). In fact, in [32] Lindemulder and Veraar derive a bounded \(\mathcal {H}^{\infty }\)-calculus for the Dirichlet Laplacian in weighted \(L_p\)-spaces with power weights of order \(r\in (-1,2p-1){\setminus }\{p-1\}\). It would be interesting to see whether their methods also work for \(L_p(\mathbb {R}_+,|{\text {pr}}|^r;\mathscr {A}^s)\) with \(r\in (p-1,2p-1)\).
One could try to determine all initial data \(u_0\) which is given by \(u_0=\widetilde{u}_0+v_1(0)\) where \(\widetilde{u}_0\in H^k_p(\mathbb {R}_+,|{\text {pr}}_n|^r;\mathscr {A}^t)\) and \(v_1\) is the solution to
$$\begin{aligned} \partial _t v_1+\sigma v_1- A(D) v_1&=0\quad \;\text {in }\mathbb {R}\times \mathbb {R}^{n}_+,\\ B_j(D)v_1&=\widetilde{g}_j\quad \text {on }\mathbb {R}\times \mathbb {R}^{n-1}, \end{aligned}$$
for some \(\widetilde{g}_j\in \mathscr {C}^{l_j}(\mathbb {R},v_{\mu };\mathscr {A}^{s_j})\) satisfying \(\widetilde{g}_j\vert _{[0,T]}=g_j\). For such initial data, the initial boundary value problem can be solved with our methods for arbitrary time regularity of the boundary data. Indeed, in this case we just have to take the right extension of \(g_j\) so that \(u_0-v_1(0)\in H^k_p(\mathbb {R}_+,|{\text {pr}}|^r;\mathscr {A}^t)\). Then, we can just apply the semigroup in order to obtain the solution of (6-7).
E. Alòs and S. Bonaccorsi. Stochastic partial differential equations with Dirichlet white-noise boundary conditions. Ann. Inst. H. Poincaré Probab. Statist., 38(2):125–154, 2002.
H. Amann. Navier-Stokes equations with nonhomogeneous Dirichlet data. J. Nonlinear Math. Phys., 10(suppl. 1):1–11, 2003.
A. Anop, R. Denk, and A. Murach. Elliptic problems with rough boundary data in generalized Sobolev spaces. arXiv preprint arXiv:2003.05360, 2020.
S. Aziznejad and J. Fageot. Wavelet Analysis of the Besov Regularity of Lévy White Noises. arXiv preprint arXiv:1801.09245v2, 2020.
Z. Brzeźniak, B. Goldys, S. Peszat, and F. Russo. Second order PDEs with Dirichlet white noise boundary conditions. J. Evol. Equ., 15(1):1–26, 2015.
G. Da Prato and J. Zabczyk. Evolution equations with white-noise boundary conditions. Stochastics Stochastics Rep., 42(3-4):167–182, 1993.
R. Denk, G. Dore, M. Hieber, J. Prüss, and A. Venni. New thoughts on old results of R. T. Seeley. Math. Ann., 328(4):545–583, 2004.
R. Denk, M. Hieber, and J. Prüss. \(\mathscr {R}\)-boundedness, Fourier multipliers and problems of elliptic and parabolic type. Mem. Amer. Math. Soc., 166(788):viii+114, 2003.
R. Denk, M. Hieber, and J. Prüss. Optimal \(L^p\)-\(L^q\)-estimates for parabolic boundary value problems with inhomogeneous data. Math. Z., 257(1):193–224, 2007.
R. Denk and T. Krainer. \(\mathscr {R}\)-boundedness, pseudodifferential operators, and maximal regularity for some classes of partial differential operators. Manuscripta Math., 124(3):319–342, 2007.
R. Denk, J. Prüss, and R. Zacher. Maximal \(L_p\)-regularity of parabolic problems with boundary dynamics of relaxation type. J. Funct. Anal., 255(11):3149–3187, 2008.
G. Dore and A. Venni. On the closedness of the sum of two closed operators. Math. Z., 196(2):189–201, 1987.
G. Dore and A. Venni. \(H^\infty \) functional calculus for an elliptic operator on a half-space with general boundary conditions. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), 1(3):487–543, 2002.
X. T. Duong. \(H_\infty \) functional calculus of elliptic operators with \(C^\infty \) coefficients on \(L^p\) spaces of smooth domains. J. Austral. Math. Soc. Ser. A, 48(1):113–123, 1990.
S. Fackler, T. P. Hytönen, and N. Lindemulder. Weighted estimates for operator-valued Fourier multipliers. Collect. Math., 71(3):511–548, 2020.
J. Fageot, A. Fallah, and M. Unser. Multidimensional Lévy white noise in weighted Besov spaces. Stochastic Process. Appl., 127(5):1599–1621, 2017.
L. Grafakos. Classical Fourier analysis, volume 249 of Graduate Texts in Mathematics. Springer, New York, second edition, 2008.
L. Grafakos. Modern Fourier analysis, volume 250 of Graduate Texts in Mathematics. Springer, New York, second edition, 2009.
G. Grubb. Nonhomogeneous Dirichlet Navier-Stokes problems in low regularity \(L_p\) Sobolev spaces. J. Math. Fluid Mech., 3(1):57–81, 2001.
B. H. Haak, M. Haase, and P. C. Kunstmann. Perturbation, interpolation, and maximal regularity. Adv. Differential Equations, 11(2):201–240, 2006.
M. Hieber and J. Prüss. Heat kernels and maximal \(L^p\)-\(L^q\) estimates for parabolic evolution equations. Comm. Partial Differential Equations, 22(9-10):1647–1669, 1997.
F. Hummel and N. Lindemulder. Elliptic and parabolic boundary value problems in weighted function spaces. arXiv preprint arXiv:1911.04884v1, 2019.
T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Cham, 2016.
T. Hytönen, J. van Neerven, M. Veraar, and L. Weis. Analysis in Banach spaces. Vol. II, volume 67 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]. Springer, Cham, 2017. Probabilistic methods and operator theory.
T. Hytönen and M. Veraar. \(R\)-boundedness of smooth operator-valued functions. Integral Equations Operator Theory, 63(3):373–402, 2009.
M. Kabanava. Tempered Radon measures. Rev. Mat. Complut., 21(2):553–564, 2008.
M. Kaip and J. Saal. The permanence of \(\mathscr {R}\)-boundedness and property \((\alpha )\) under interpolation and applications to parabolic systems. J. Math. Sci. Univ. Tokyo, 19(3):359–407, 2012.
A. Kufner and B. Opic. How to define reasonably weighted Sobolev spaces. Comment. Math. Univ. Carolin., 25(3):537–554, 1984.
P. C. Kunstmann and L. Weis. Maximal \(L_p\)-regularity for parabolic equations, Fourier multiplier theorems and \(H^\infty \)-functional calculus. In Functional analytic methods for evolution equations, volume 1855 of Lecture Notes in Math., pages 65–311. Springer, Berlin, 2004.
N. Lindemulder. Second order operators subject to Dirichlet boundary conditions in weighted Triebel–Lizorkin spaces: Parabolic problems. arXiv preprint arXiv:1812.05462, 2018.
N. Lindemulder, M. Meyries, and M. Veraar. Complex interpolation with Dirichlet boundary conditions on the half line. Math. Nachr., 291(16):2435–2456, 2018.
N. Lindemulder and M. Veraar. The heat equation with rough boundary conditions and holomorphic functional calculus. J. Differ. Equ. 269(7):5832–5899, 2020.
J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. I. Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 181.
J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. II. Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 182.
J.-L. Lions and E. Magenes. Non-homogeneous boundary value problems and applications. Vol. III. Springer-Verlag, New York-Heidelberg, 1973. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 183.
M. Meyries. Maximal regularity in weighted spaces, nonlinear boundary conditions, and global attractors. PhD thesis, Karlsruhe Institute of Technology, 2010.
M. Meyries and M. Veraar. Sharp embedding results for spaces of smooth functions with power weights. Studia Math., 208(3):257–293, 2012.
J. Milnor. Lectures on the \(h\)-cobordism theorem. Notes by L. Siebenmann and J. Sondow. Princeton University Press, Princeton, N.J., 1965.
A. Pazy. Semigroups of linear operators and applications to partial differential equations, volume 44 of Applied Mathematical Sciences. Springer-Verlag, New York, 1983.
P. Portal and v. Štrkalj. Pseudodifferential operators on Bochner spaces and an application. Math. Z., 253(4):805–819, 2006.
V. S. Rychkov. Littlewood-Paley theory and function spaces with \(A^{\rm loc}_p\) weights. Math. Nachr., 224:145–180, 2001.
H.-J. Schmeisser and H. Triebel. Topics in Fourier analysis and function spaces. A Wiley-Interscience Publication. John Wiley & Sons, Ltd., Chichester, 1987.
R. T. Seeley. Extension of \(C^{\infty }\) functions defined in a half space. Proc. Amer. Math. Soc., 15:625–626, 1964.
H. Triebel. Interpolation theory, function spaces, differential operators, volume 18 of North-Holland Mathematical Library. North-Holland Publishing Co., Amsterdam-New York, 1978.
H. Triebel. Theory of function spaces, volume 78 of Monographs in Mathematics. Birkhäuser Verlag, Basel, 1983.
R. M. Trigub and E. S. Bellinsky. Fourier analysis and approximation of functions. Kluwer Academic Publishers, Dordrecht, 2004. [Belinsky on front and back cover].
M. C. Veraar. Regularity of Gaussian white noise on the \(d\)-dimensional torus. In Marcinkiewicz centenary volume, volume 95 of Banach Center Publ., pages 385–398. Polish Acad. Sci. Inst. Math., Warsaw, 2011.
J. Voigt. Abstract Stein interpolation. Math. Nachr., 157:197–199, 1992.
L. Weis. Operator-valued Fourier multiplier theorems and maximal \(L_p\)-regularity. Math. Ann., 319(4):735–758, 2001.
Since most of the material in this work is based on some results of my Ph.D. thesis and just contains generalizations, simplifications and corrections of mistakes, I would like to thank my Ph.D. supervisor Robert Denk again for his outstanding supervision.
I thank Mark Veraar for the instructive discussion on the necessity of finite cotype in certain estimates, which helped me to prove Proposition 4.13.
I also thank the Studienstiftung des deutschen Volkes for the scholarship during my doctorate and the EU for the partial support within the TiPES project funded by the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 820970. Moreover, I acknowledge partial support of the SFB/TR109 "Discretization in Geometry and Dynamics."
Open Access funding enabled and organized by Projekt DEAL.
Faculty of Mathematics, Technical University of Munich, Boltzmannstraße 3, 85748, Garching bei München, Germany
Felix Hummel
Correspondence to Felix Hummel.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hummel, F. Boundary value problems of elliptic and parabolic type with boundary data of negative regularity. J. Evol. Equ. 21, 1945–2007 (2021). https://doi.org/10.1007/s00028-020-00664-0
Issue Date: June 2021
Mathematics Subject Classification
Primary: 35B65
Secondary: 35K52
Boundary data of negative regularity
Mixed scales
Mixed smoothness
Poisson operators
Singularities at the boundary
GMATAdvanced-PS-19
If C is the temperature in degrees Celsius and F is the temperature in degrees Fahrenheit, then the relationship between temperatures on the two scales is expressed by the equation 9C=5(F-32). On a day when the temperature extremes recorded at a certain weather station differed by 45 degrees on the Fahrenheit scale, by how many degrees did the temperature extremes differ on the Celsius scale?
GMATAdvanced-PS-18 | BiChu21-PS-115
There were 36,000 hardback copies of a certain novel sold before the paperback version was issued. From the time the first paperback copy was sold until the last copy of the novel was sold, 9 times as many paperback copies as hardback copies were sold. If a total of 441,000 copies of the novel were sold in all, how many paperback copies were sold?
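One way to work this: let h be the number of hardback copies sold after the paperback was issued, so 9h paperback copies were sold. Then 36,000 + h + 9h = 441,000, so 10h = 405,000, h = 40,500, and the number of paperback copies sold is 9h = 364,500.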
A certain manufacturer uses the function C(x) = 0.04x^2 − 8.5x + 25,000 to calculate the cost, in dollars, of producing x thousand units of its product. The table above gives values of this cost function for values of x between 0 and 50 in increments of 10. For which of the following intervals is the average rate of decrease in cost less than the average rate of decrease in cost for each of the other intervals?
A car traveled 462 miles per tankful of gasoline on the highway and 336 miles per tankful of gasoline in the city. If the car traveled 6 fewer miles per gallon in the city than on the highway, how many miles per gallon did the car travel in the city?
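One way to work this, assuming the same tank size in both settings: if the tank holds t gallons, then 462/t − 336/t = 6, so 126 = 6t and t = 21 gallons, giving 336/21 = 16 miles per gallon in the city.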
A school supply store sells only one kind of desk and one kind of chair, at a uniform cost per desk or per chair. If the total cost of 3 desks and 1 chair is twice that of 1 desk and 3 chairs, then the total cost of 4 desks and 1 chair is how many times that of 1 desk and 4 chairs?
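Sketch of the algebra: with desk price d and chair price c, 3d + c = 2(d + 3c) gives d = 5c, so (4d + c)/(d + 4c) = 21c/9c = 7/3.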
Let n and k be positive integers with k ≤ n. From an n × n array of dots, a k × k array of dots is selected. The figure above shows two examples where the selected k × k array is enclosed in a square. How many pairs (n, k) are possible so that exactly 48 of the dots in the n × n array are NOT in the selected k × k array?
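The figure is not reproduced here, but the count follows from the condition alone: n^2 − k^2 = (n − k)(n + k) = 48, and since n − k and n + k have the same parity, both must be even. That gives (n − k, n + k) = (2, 24), (4, 12), or (6, 8), i.e., (n, k) = (13, 11), (8, 4), (7, 1) — three pairs, assuming k ≥ 1.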
The figure above represents a network of one-way streets. The arrows indicate the direction of traffic flow and the numbers indicate the amount of traffic flow into or out of each of the four intersections during a certain hour. During that hour, what was the amount of traffic flow along the street from R to S if the total amount of traffic flow into P was 1,200?
(Assume that none of the traffic originates or terminates in the network)
Each of the integers from 0 to 9, inclusive, is written on a separate slip of blank paper and the ten slips are dropped into a hat. If the slips are then drawn one at a time without replacement, how many must be drawn to ensure that the numbers on two of the slips drawn will have a sum of 10?
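Reasoning sketch: the pairs summing to 10 are {1, 9}, {2, 8}, {3, 7}, {4, 6}; the slips 0 and 5 belong to no such pair (there is only one 5). In the worst case one can draw 0, 5, and one slip from each pair — six slips — without obtaining a sum of 10, so 7 slips must be drawn to ensure it.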
Rita and Sam play the following game with n sticks on a table. Each must remove 1, 2, 3, 4 or 5 sticks at a time on alternate turns, and no stick that is removed is put back on the table. The one who removes the last stick (or sticks) from the table wins. If Rita goes first, which of the following is a value of n such that Sam can always win no matter how Rita plays?
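The answer choices are not reproduced here, but the winning condition follows from standard subtraction-game reasoning: whatever number Rita removes, Sam can respond so that each full round removes exactly 6 sticks, so Sam (the second player) can always win precisely when n is a multiple of 6.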
The figures above show a hexagonal nut that has a width of 1~$\frac{5}{16}$~ inches and a wrench that, in order to fit the nut, must have a width of at least 1~$\frac{5}{16}$~ inches. Of all the wrenches that fit the nut and have widths that are whole numbers of millimeters, the wrench that fits the nut most closely has a width of how many millimeters?
(Note: 1 inch ≈ 25.4 millimeters)
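Working from the conversion: 1 5/16 inches = 1.3125 × 25.4 ≈ 33.34 mm, and the smallest whole number of millimeters that is at least this value is 34, so the closest-fitting wrench is 34 millimeters wide.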
JUMP Math - Seventh Grade
Teacher Resource, Part 1, Sample Quizzes and Tests, Unit 2 Test, Item 3, "Subtract. Then find the distance apart. a. +8 − (−5) = ______, so +8 and −5 are _____ units apart. b. +4 − (+9) = ______, so +4 and +9 are _____ units apart. c. −6 − (+2) = ______, so −6 and +2 are _____ units apart. d. −3 − (−10) = ______, so −3 and −10 are _____ units apart." Students are subtracting rational numbers and finding their distance apart. (7.NS.1c)
Teacher Resource, Part 1, Sample Quizzes and Tests, Unit 3 Quiz, Item 5, "Factor the expression. Use the GCF of the numbers. a. 6x + 8 b. 12x − 15 c. 3x + 12 d. 8x − 4." Students factor an expression using the Greatest Common Factor. (7.EE.1)
Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 5 Test, Item 4, "a. Use long division to write $$\frac{7}{8}$$ as a decimal. b. How do you know that the division is finished at this point?" Students use long division to change a fraction to a decimal and explain that the decimal form of a rational number terminates in 0s. (7.NS.2d)
Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 6 Quiz, Item 2, "What shape is the cross-section?" Students name the 2-dimensional figure that results from slicing a cross-section of a 3-dimensional figure. (7.G.3)
Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 7 Quiz, Item 2, "Box plots A and B represent sets that have 500 data values. a. Which set has the greater median? __________ b. Which set has the greater IQR? __________" Students compare two data sets and assess the degree of overlap. (7.SP.3)
Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 3, Item Bonus, "2(x + 3) = 3x - 7" Students expand an expression and collect terms. (8.EE.8)
Teacher Resource, Part 2, Sample Quizzes and Tests, Unit 4, Item 2, "Evaluate the variables. Include the units." Students find exterior angles based on a picture. (8.G.5)
The materials for Grade 7 include 15 units. In the materials, there are 168 lessons, and of those, 29 are Bridging lessons. According to the materials, Bridging lessons should not be "counted as part of the work of the year" (page A-56), so the number of lessons examined for this indicator is 139 lessons. The supporting clusters were also reviewed to determine if they could be factored in due to how strongly they support major work of the grade. There were connections found between supporting clusters and major clusters, and due to the strength of the connections found, the number of lessons addressing major work was increased from the approximately 96 lessons addressing major work as indicated by the materials themselves to 102 lessons.
Teacher Resource, Part 1, Unit 6, Lessons G7-8, G7-9, and G7-10 connect 7.RP.2 with 7.G.1 as students are expected to recognize and represent proportional relationships between quantities in order to solve problems involving scale drawings of geometric figures.
Teacher Resource, Part 2, Unit 4, Lessons G7-11, G7-12, and G7-13 connect 7.EE.4a with 7.G.5 as students are expected to solve word problems leading to equations of the form px + q = r and p(x + q) = r, where p, q, and r are specific rational numbers, that arise from using facts about supplementary, complementary, vertical, and adjacent angles in a figure.
Teacher Resource, Part 2, Unit 4, Lessons G7-14, G7-15, and G7-16 connect 7.EE.4 with 7.G.6 as students are expected to construct simple equations and inequalities to solve problems by reasoning about the quantities that arise from real-world and mathematical problems involving area of two-dimensional objects composed of triangles, quadrilaterals, and polygons.
Teacher Resource, Part 2, Unit 7, Lesson SP7-16 connects 7.RP.2 with 7.SP.2 as students are expected to recognize and represent proportional relationships between quantities in order to draw inferences about a population with an unknown characteristic of interest.
At the beginning of each unit, "This Unit in Context" provides a description of prior concepts and standards students have encountered during the grade levels before this one. The end of this section also makes connections to concepts that will occur in future grade levels. For example, "This Unit in Context" from Unit 8, Statistics and Probability: Probability Models, of Teacher Resource, Part 1 describes the topics from Measurement and Data that students encountered in Grades K through 5, specifically organizing and representing data with scaled picture and bar graphs and line plots with measurements in fractions of a unit, and from Statistics and Probability in Grade 6, specifically developing an understanding of statistical variability and summarizing and describing distributions. The description then includes the topic of probability, specifically referring to using different tools to find probabilities, and it concludes with how the work of this unit builds to the statistical topic of bivariate data in Grade 8.
There are some lessons that are not labeled Bridging lessons that contain off-grade-level material, but these lessons are labeled as "preparation for" and can be connected to grade-level work. For example, Teacher Resource, Part 2, Unit 2, Lesson RP7-33 addresses solving addition and multiplication equations including negative addends and coefficients, and the lesson is labeled as "preparation for 7.EE.4."
In the Advanced Lessons, students get the opportunity to engage with more difficult problems, but the problems are still aligned to grade-level standards. For example, the problems in Teacher Resource, Part 2, Unit 3, Lesson EE7-27 engage students in solving inequalities where the coefficient of the variable is negative, which is more difficult than when the coefficient is positive, but these problems still align to 7.EE.4b. Also, the problems in Teacher Resource, Part 2, Unit 5, Lesson NS7-52 that have students simplifying numerical expressions that include repeating decimals align to standards from 7.NS.
Every lesson identifies "Prior Knowledge Required" even though the prior knowledge identified is not aligned to any grade-level standards. For example, Teacher Resource, Part 2, Unit 2, Lesson RP7-28 states that its goal is to solve problems involving ratios with fractional terms, and the prior knowledge required is that students can divide fractions, can multiply fractions by whole numbers, can multiply whole numbers by fractions, can find equivalent ratios, and can understands ratio tables.
There are 29 lessons identified as Bridging lessons, but none of these lessons are explicitly aligned to standards from prior grades even though they do state for which grade-level standards they are preparation. For example, in Teacher Resource, Part 1, Unit 4, four of the seven lessons are Bridging lessons labeled as "preparation for 7.NS.1," and two of the seven are Bridging lessons labeled as "preparation for 7.NS.2." However, none of these six Bridging lessons are explicitly aligned to standards prior to Grade 7. Also, Teacher Resource, Part 2, Unit 3, Lesson EE7-1 is a Bridging lesson labeled as "preparation for 7.EE.4" that has students substituting values for a variable into an expression, but the lesson is not explicitly aligned to standards prior to Grade 7.
In the materials, the units are organized by domains and are clearly labeled. For example, Teacher Resource, Part 1, Unit 3 is titled Expressions and Equations: Equivalent Expressions, and Teacher Resource, Part 2, Unit 6 is titled Geometry: Volume, Surface Area, and Cross Sections. Within the units, there are goals for each lesson, and the language of the goals is visibly shaped by the CCSSM cluster headings. For example, in Teacher Resource, Part 1, Unit 8, the goal for Lesson SP7-9 states "Students will design and use a simulation to determine probabilities of compound events." The language of this goal is visibly shaped by 7.SP.C, "Investigate chance processes and develop, use, and evaluate probability models."
In Teacher Resource, Part 2, Unit 4, Lessons G7-22 and G7-23, the materials connect 7.G.A with 7.G.B as students draw, construct, and describe geometrical figures; describe the relationships between them; and solve problems involving angle measure and area.
In Teacher Resource, Part 2, Unit 7, Lesson SP7-13, the materials connect 7.SP.A with 7.SP.B as students use random sampling to draw inferences about a population and informal comparative inferences about two populations.
In Teacher Resource, Part 2, Unit 3, Lesson EE7-19, the materials connect 7.RP with 7.EE as students are expected to recognize and represent proportional relationships between quantities and rewrite expressions in different forms in a problem context to shed light on the problem and how the quantities in it are related.
Teacher Resource, Part 1, Unit 5, Lesson RP7-22, Extensions, Item 1, "Jake makes a 2.5% commission on the sale of a $30 item. How much money does he make in commission? Justify your answer." The extensions for this lesson include problems where students use different forms.
Teacher Resource, Part 1, Unit 5, Lesson RP7-16, Exercises, Item 2, "a. Check that $$\frac{1}{20}$$ = 0.05 b. Use the fact that $$\frac{1}{20}$$ = 0.05 to write the fraction as a decimal. i. $$\frac{2}{20}$$ ii. $$\frac{13}{20}$$ iii. $$\frac{7}{20}$$"
Teacher Resource, Part 1, Unit 3, Lesson EE7-2, Exercises, Item 1, "Use the commutative property of multiplication to complete the equation. a. (9-7) x (3+4) = ____ b. (8-5) x (8÷4)= ____" Conceptual understanding is built with this lesson.
Teacher Resource, Part 2, Unit 3, Lesson EE7-15, Exercises, Item 1, "Move all the variable terms to the left side and all the constant terms to the right side. a. 3x + 3 - x = 5."
Cluster 7.NS.A develops procedural skill in completing addition, subtraction, multiplication, and division with rational numbers. Examples include:
Teacher Resource, Part 1, Unit 2, Lesson NS7-5, Exercise, Item e, "(+2) + (-3) Do the previous exercises without using a number line. Make sure you get all the same answers." Students first use the number line to add integers and then apply noticed patterns to addition problems without a number line.
Teacher Resource, Part 1, Lesson NS7-14, Extensions, Item 5, "Lynn says that -4$$\frac{1}{5}$$+ 3$$\frac{2}{5}$$= -1$$\frac{3}{5}$$ because -4 + 3 = -1 and 1 + 2 = 3. Do you agree with Lynn? Why or why not?" Students apply the patterns associated with adding and subtracting integers to add and subtract decimals and fractions that contain negatives.
Student Resource, Assessment & Practice Book, Part 1, Lessons NS7-27, Item 2a, "-8 x 5 = 0 - ____=_____." Students use the distributive property to further understand multiplying integers.
Standard 7.EE.1 expects students to use procedural skills in developing equivalent, linear expressions with rational coefficients through addition, subtraction, factoring, and multiplication. Examples include:
Teacher Resource, Part 1, Unit 3, Lesson EE7-9, Exercises,"Write an equivalent expression without brackets. Then simplify your expression. a. 3x − (5 + 6x) b. 5x + 4 − (2x + 9)." Students simplify expressions by combining like terms using properties of operations.
Teacher Resource, Part 1, Unit 3, Lesson EE7-11, Exercises, "Simplify. a. 3x − 2(x + 5)." Students use pictures and area models to write equivalent expressions that involve multiplication and factoring.
Standard 7.EE.4 expects students to develop procedural skill in constructing and solving linear equations in the form px+q=r or p(x + q)=r, and inequalities in the form px+q>r and px+q<r. Examples include:
Teacher Resource, Part 2, Unit 3, Lesson EE7-14, Exercises, "Solve the equation in two steps. a. 3x + (-5) =13 b. (-4)y - (-2) = 34 c. (-4) + 9z =14" Students undo operations to solve equations with rational numbers. Students solve many problems including, one-step equations, two-step equations, equations using the distributive property, and equations with complex fractions involving cross multiplying.
Teacher Resource, Part 2, Unit 3, Lesson EE7-23, Exercises, Item 1, "Write the description using symbols. a. x is 16 or more b. x is 25 or less c. x is -0.5 or less." Students use symbols to write inequalities to represent conditions and show solutions on a number line.
Teacher Resource, Part 2, Unit 3, Lesson EE7-25, Exercises, "Write an inequality to represent the weights on a balance." Pictures of balances are shown with different weights. Students use a balance model to solve inequalities.
Book 2 Unit 2 has limited real-world problems for students to solve. The focus of the unit is on learning specific algorithms to solve ratio problems.
Teacher Resource, Part 2, Unit 2, Lesson RP7-27, Exercises, "Write a ratio with whole-number terms. a. $$\frac{3}{8}$$ of a pizza for every 3 people." However, none of the independent student work includes real-world scenarios. (7.RP.A) Students use fractional ratios and write equivalent whole number ratios.
Teacher Resource, Part 2, Unit 2, Lesson RP7-28, Extensions, Item 2, "John skates $$\frac{7}{2}$$ km in 10 minutes and bikes $$\frac{19}{4}$$ km in 12 minutes. Does he skate or bike faster?" (7.RP.A) Students are using proportional relationships to solve the problem.
Teacher Resource, Part 1, Unit 7, Lesson NS7-28, Exercises, "Write a multiplication equation to show the amount of change. a. Ted gained $10 every hour for 5 hours." (7.NS.3) Students are given a variety of real-world contexts and are asked to write expressions and equations for each context. Students are also asked to solve equations using multiplication and addition.
Teacher Resource, Part 2, Unit 1, Lesson NS7-32, Exercises, "Will the recipe turn out? a. I'm making 5 $$\frac{1}{2}$$ batches of gravy, and each batch needs $$\frac{3}{8}$$ cup of flour. I use 2 cups of flour." (7.NS.3) Students are using the four operations with rational numbers to solve problems. However, students are presented with few opportunities to solve real-world problems involving the four operations of rational numbers. When real-world problems are given, students are encouraged to follow the given examples and the problems do not have room for multiple strategies.
Teacher Resource, Part 1, Unit 5, Lesson RP7-22, Exercises, "The amount of tax is 5%. Multiply the original price by 1.05 to calculate the price after taxes. a. a $30 sweater b. a $12 CD." (7.EE.3) The application questions follow given examples closely. For example, students solve percentage increase problems by being shown the structure of the problems before this set of exercises.
Teacher Resource, Part 1, Unit 8, Lesson PS7-9, Exercises, "Which part in the exercises above has the same answer as the given problem? a. 40% of students in the class are boys. Students are picked at random once a week for five weeks. Estimate the probability that a boy will be chosen in at least two consecutive weeks. b. 40% of blood donors have Type O blood. What is the probability that none of the first six donors asked have Type O blood?" (7.SP.8) The application questions follow given examples closely.
Non-routine problems are occasionally found in the materials. For example:
In Book 1, Unit 1 Lesson RP7-11, Extensions, Item 5, "Raj mixes 3 cups of white paint with 1 cup of blue paint. He meant to mix 1 cup of white paint with 3 cups of blue paint. How much blue paint does he need to add to get the color he originally wanted?" (7.RP.A) Students are shown how to use ratio tables to help them solve problems with proportional relationships.
Conceptual Understanding: Teacher Resource, Part 1, Unit 2, Lesson NS7-5, Exercises, "Add using a number line. a. (-4) + (+1)." Students are introduced to adding integers using a number line. In the guided practice of the teachers edition, cut out arrows are moved around a number line drawn on the board to show students how adding negative numbers is done on a number line.
Application: Teacher Resource, Part 2, Unit 2, Lesson RP7-35, Exercise, "A shirt cost $25. After taxes, it costs $30. What percent of the original price are the taxes?" Students use contexts to learn to cross multiply to arrive at an equation and then solve the equation.
Procedural Skill and Fluency: In Part 2, Unit 5, Lesson NS7-47, Extensions, Item 1, "Investigate if the estimate is more likely to be correct when the divisor is closer to the rounded number you used to make your estimate. For example, when the divisor is 31 rounded to 30, is your estimate more likely to be correct than when the divisor is 34 rounded to 30? Try these examples: 31⟌243, 31⟌249, 31⟌257, 31⟌265, 31⟌274, 34⟌243, 34⟌249, 34⟌257, 34⟌265, 34⟌274." Students are given opportunities to develop fluency with division with rational numbers.
Student Resource, Student Assessment and Practice Book, Part 2, Lesson G7-24, Item 9, "The dimensions of a cereal box are 7 $$\frac{7}{8}$$ inches by 3 $$\frac{1}{3}$$ inches by 11 $$\frac{4}{5}$$ inches. What is the volume of the cereal box in cubic inches?" This problem has students using application and procedural skill and fluency using the formula to solve the word problem.
Teacher Resource, Part 1, Lesson RP7-16, Exercises, "Draw models to multiply. a. 2 × 4.01 b. 3 × 3.12" develops conceptual understanding of multiplying decimals by modeling the multiplication while using procedural fluency.
Teacher Resource, Part 2, Lesson G7-18, Exercises, Item a, "The area of an Olympic ice rink is 1,800 m$$^2$$. A school builds an ice rink to the scale (Olympic rink) : (school rink) = 5 : 4. What is the area of the school rink?" Students develop procedural fluency when they practice calculating areas given scales while solving application problems.
"Mathematical Practices in this Unit" gives suggestions on how students can show they have met a Mathematical Practice. For example, in Teacher Resource, Part 1, Unit 8, Mathematical Practices in this Unit, "MP.1: SP7-2 Extension 2, SP7-9 Extension 2."
"Mathematical Practices in this Unit" gives the Mathematical Practices that can be assessed in the unit. For example, in Teacher Resources, Part 2, Unit 7, Mathematical Practices in this Unit, "In this unit, you will have the opportunity to assess MP.1 to MP.4 and MP.6 and MP.8."
MP2: Teacher Resource, Part 1, Unit 3, Lesson EE7-6, Extensions, Item 3, "a. Sketch a circle divided into the following fractions. i. thirds, ii. fourths, iii. fifths. b. Evaluate the expression 360x for i. x = $$\frac{1}{3}$$, ii. x = $$\frac{1}{4}$$, iii. x = $$\frac{1}{5}$$, c. Use your answers to b and a protractor to check the accuracy of your sketches in a." In this question, students take the quantitative work with the sketches of circles and connect it to the abstract work of evaluating expressions.
MP2: Teacher Resource, Part 2, Unit 5, Lesson NS7-46, Extensions, Item 3, "Bev made a grape drink by mixing $$\frac{1}{3}$$ cup of ginger ale with $$\frac{1}{2}$$ cup of grape juice. She used all her ginger ale, but she still has lots of grape juice. She wants to make 30 cups of the grape drink for a party. How many 355 mL cans of ginger ale does she need to buy?" In the solution, students get a remainder in their division, so they must interpret that remainder as needing to buy more cans.
MP6: Teacher Resource, Part 2, Unit 3, Lesson EE7-15, Extensions, Item 6, "Clara's Computer Company is making a new type of computer and Clara wants to advertise it. A 30-second commercial costs $1,500,000. Clara plans to sell the computer at a profit of $45.00. Clara determines that 8,600,000 people watched the commercial. a. What percentage of people who watched the commercial would have to buy the product to pay for the price of the commercial? Show your work using equations. Say what each equation means in the situation. b. What facts did you need to use to do part a? c. What place value did you round your answer in part a? Explain your choice. d. Do you think the commercial was a good idea for Clara? Explain." Students attend to precision throughout the problem to determine if the commercial was a good idea.
MP6: Teacher Resource, Part 1, Unit 2, Lesson NS7-2, Extensions, Item 5, "Liz has red, blue, and white paint in the ratio 3:2:1. She mixes equal parts of all three colors to make light purple paint. If she uses all her white paint, what is the ratio of red to blue paint that she has leftover? Use a T-table or a tape diagram with clear labels." MP6 is developed as students are encouraged to use clear labels in models to ensure they can understand their calculations. This would help students be precise with their ratio calculations.
MP7: Teacher Resource, Part 1, Unit 2, Lesson NS7-3, Extensions, Item 2, "Look for shortcut ways to add the gains and losses. a. -4 - 5 - 6 +7 +8 + 9." Students are shown how to group numbers together to make the addition easier, looking for addends that combine to make 10, and looking for opposites to cancel out. Students use structure to complete the problem.
MP7: Teacher Resource, Part 2, Unit 1, Lesson NS7-37, Extensions 2, "Without doing the division, which do you expect to be greater? -21,317.613 ÷ $$\frac{1}{2}$$ or -21,317.613 ÷ $$\frac{3}{5}$$? Explain." Students use the structure of dividing by fractions to help them reason about which answer would be greater.
MP4: Teacher Resource, Part 1, Unit 5, Lesson RP7-19, Extensions, Item 4, "Ethan bought a house for $80,000. He spent $5,000 renovating it. Two years after he bought the house, the value increased by 20%. If he sells the house, what would his annual profit be, per year?" Because students are working very similar problems before this set of problems, students do not model with mathematics.
MP4: Teacher Resource, Part 2, Unit 6, Lesson G7-25, Exercise, Item 1, "The base of a free-standing punching bag is an octagon. The area of the base is 3.5 ft$$^2$$ and the height is 3 ft. a. What is the volume of the punching bag? b. A 30 kg bag of sand fills $$\frac{2}{3}$$ ft$$^2$$. How many bags of sand do you need to fill the punching bag?" Because students are working very similar problems before this set of problems, students do not model with mathematics.
MP5: Teacher Resource, Part 1, Unit 2, Lesson NS7-2, Extensions, Item 5, "Use a T-table or a tape diagram with clear labels. Which was faster?...What does the tape diagram show you that the T-table does not?" Students are told which tools to use.
MP5: Teacher Resource, Part 2, Unit 6, Lesson G7-24, Exercises, "a. Find the volume of the prism in three different ways." A picture of a rectangular prism with sides of 13 cm, 2 cm, and 5 cm is shown. "b. Which way is the easiest to calculate mentally? Solutions: a. 13 x 2 x 5 = 26 x 5 = 130, so V = 130 cm$$^3$$, 13 x 5 x 2 = 65 x 2 = 130, so V = 130 cm$$^3$$, 5 x 2 x 13 = 130, so V = 130 cm$$^3$$; b) 5 cm x 2 cm x 13 cm is the easiest to calculate because 5 x 2 = 10 and it is easy to multiply by 10." This problem has students deciding which way to multiply the numbers is easiest, which does not require the use of tools.
Teacher Resource, Part 2, Unit 1, Lesson NS7-37, Extensions, Item 2, "a. Without doing the division, which do you expect to be greater, −21,317,613 ÷ $$\frac{1}{2}$$ or −21,317,613 ÷ $$\frac{3}{5}$$? Explain. b. In pairs, explain your answers to part a. Do you agree with each other? Discuss why or why not. Use math words."
In Teacher Resource, Part 2, Unit 3, Lesson EE7-17, Extensions, Item 4, students complete the following problem: "a. After the first two numbers in a sequence, each number is the sum of all previous numbers in the sequence. If the 20th term is 393,216, what is the 18th term? Look for a fast way to solve the problem. b. In pairs, explain why the way you chose in part a works. Do you agree with each other? Discuss why or why not."
In Teacher Resources, Part 2, Unit 5, Lesson NS7-49, Extensions, Item 3, students are told, "Eddy painted a square wall. Randi is painting a square wall that is twice as wide as the wall that Eddy painted. Eddy used 1.3 gallons of paint. Randi says she will need 2.6 gallons of paint because that is twice as much as Eddy needed and the wall she is painting is twice as big. Do you agree with Randi? Why or why not?"
Teacher Resource, Part 1, Unit 3, Lesson EE7-2, Extensions, Item 4, "Which value for w makes the equation true? Justify your answers. a. 2 x 5 = w x 2 b. w x 6 = 6 x 3"
Teacher Resource, Part 1, Unit 8, Lesson SP7-1, Extensions, Item 4, "Sam randomly picks a marble from a bag. The probability of picking a red marble is $$\frac{2}{5}$$. What is the probability of not picking red? Explain."
Teacher Resource, Part 2, Unit 3, Lesson EE7-28, Extensions, Item 3, "a. The side lengths of a triangle are x, 2x + 1, and 10. What can x be? Justify your answer. b. In pairs, explain your answers to part a. Do you agree with each other? Discuss why or why not."
Many MP3 problems in the extension sections follow a similar structure. Students are given a problem and "explain." Then, students compare their answers with a partner and discuss if they agree or not. This one dimensional approach does not offer guidance to students on how to construct an argument or critique the reasoning of others. For example, Teacher Resource, Part 2, Unit 7, Lesson SP7-14, Extensions, Item 4, "a. What shape is the cross section of the cube? Explain how you know using math words. b. In pairs, discuss your answers to part a. Do you agree with each other? Discuss why or why not."
Students are given extension questions when they are asked to analyze the math completed by a fictional person. For example, Teacher Resource, Part 1, Unit 1, Lesson RP7-7, Extensions, Item 4, students are asked to determine if another student is correct with their reasoning. "Two whole numbers are in the ratio 1 : 3. Rob says they cannot add to an odd number. Is he right? Explain." These problems begin to develop students' ability to analyze the mathematical reasoning of others but do not fully develop this skill. Students analyze an answer given by another, but do not develop an argument or present counterexamples.
A rubric for the Mathematical Practices is provided for teachers on page L-71. For MP3, a Level 3 is stated as, "Is able to use objects, drawings, diagrams, and actions to construct an argument" and "Justifies conclusions, communicates them to others, and responds to the arguments of others." This rubric would provide some guidance to teachers about what to look for in student answers but no further direction is provided about how to use it to coach students to improve their arguments or critiques.
In the Math Practices in this Unit Sections, MP3 is listed multiple times. The explanation of MP3 in the unit often consists of a general statement. For example, in Teacher Resources, Part 1, Unit 3, the MP3 portion of the section states, "In EE7-7 Extension 3, students construct and critique arguments when they discuss in pairs the reasons why they agree or disagree with the statement 0 ÷ 0 = 1, and when they ask questions to understand and challenge each other's thinking." These explanations do not provide guidance to teachers to get students constructing arguments or critiquing the reasoning of others.
Some guidance is provided to teachers for constructing a viable argument when teachers are provided solutions to questions labeled as MP3 in the extension questions. Some of these questions include wording that could be used as an exemplar response about what a viable argument is. For example, Teacher Resource, Part 1, Unit 1, Lesson RP7-10, Extensions, Item 6 students are asked to tell if the given quantities are in a proportional relationship. Teachers are provided with the sample solutions, "Sample solutions: a. The quantities are proportional. We made a table with headings "side length," "area of square," and "square of perimeter." The ratio for area to square of perimeter was always 1 to 16, so the two quantities are proportional…"
In Teacher Resource, Part 2, Unit 3, Lesson EE7-17, Extensions, Item 4 students are asked in pairs to explain why the way they chose in part a) works. Students are asked if they agree with each other and to discuss why or why not. Answers and a teacher NOTE are provided: "NOTE: In part b), encourage partners to ask questions to understand and challenge each other's thinking (MP.3)—see page A-49 for sample sentence and question stems."
In Teacher Resource, Part 1, Unit 4, Lesson NS7-22, Extensions, Item 3, students are given the following problem and asked to explain: "b. Len placed a table 1.23 m long along a wall 3 m long. If his bed is 2.13 m long, will it fit along the same wall? Explain." The answer is provided but no guidance is provided to teachers to help students explain.
In Teacher Resource, Part 2, Unit 3, Lesson RP7-15, Extensions, Item 5, students are asked, "How would you shift the decimal point to divide by 10,000,000? Explain." Teachers are given the sample response, "Move the decimal 7 places (because there are 7 zeros in 10,000,000) to the left (because I am dividing)." This is not facilitating the development of mathematical arguments. | CommonCrawl |
\begin{definition}[Definition:Decision Theory]
'''Decision theory''' is the study of the reasoning underlying an agent's choices.
\end{definition}
A Stochastic Binary Model for Regulation of Gene Expression to Investigate Treatment Effects Targeting RKIP
Guilherme Giovanini, Luciana Rodrigues Carvalho Barros, Leonardo dos Reis Gama, Tharcisio Citrangulo Tortelli Junior, Alexandre Ferreira Ramos
Subject: Life Sciences, Biophysics Keywords: RKIP expression regulation; Stochastic binary regulation of gene expression; Treatment targeting RKIP levels increase; Reduction of heterogeneity of treatment response
In this manuscript we use an exactly solvable stochastic binary model for regulation of gene expression to analyse the dynamics of response to a treatment aiming to modulate the number of transcripts of the RKIP gene. We demonstrate the usefulness of our method by simulating three treatment scenarios aiming to reestablish RKIP gene expression dynamics towards the pre-cancerous state: i. to increase the promoter's ON state duration; ii. to increase the mRNAs' synthesis rate; iii. to increase both rates. We show that the pre-treatment kinetic rates of ON and OFF promoter switching speeds and mRNA synthesis and degradation will affect the heterogeneity and time for treatment response. Hence, we present a strategy for reducing drug dosage by simultaneously targeting multiple kinetic rates. That enables a reduction of treatment response time and heterogeneity, which in principle diminishes the chances of emergence of resistance to treatment. This approach may be useful for inferring kinetic constants related to the expression of antimetastatic genes or oncogenes and for the design of multi-drug therapeutic strategies targeting master regulatory genes.
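For readers unfamiliar with this class of models, the following is a minimal illustrative Gillespie-type simulation of a two-state (ON/OFF) promoter with stochastic mRNA synthesis and degradation. The rate names k_on, k_off, k_syn and k_deg are placeholder assumptions; this is a sketch of the general model class, not the authors' exactly solvable model or their code.

import random

def telegraph_gene_ssa(k_on, k_off, k_syn, k_deg, t_end, seed=1):
    # Gillespie simulation of a binary (telegraph) gene: the promoter switches
    # between OFF and ON, mRNA is synthesized only while ON, and each transcript
    # degrades independently. Returns the transcript count at time t_end.
    rng = random.Random(seed)
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = [k_on * (1 - on),   # promoter OFF -> ON
                 k_off * on,        # promoter ON -> OFF
                 k_syn * on,        # synthesize one mRNA
                 k_deg * m]         # degrade one mRNA
        total = sum(rates)
        if total == 0.0:
            break
        t += rng.expovariate(total)
        r, acc = rng.uniform(0.0, total), 0.0
        for event, rate in enumerate(rates):
            acc += rate
            if r <= acc:
                on = [1, 0, on, on][event]
                m += [0, 0, 1, -1][event]
                break
    return m

# Average transcript number over 50 runs (illustrative parameter values only).
runs = [telegraph_gene_ssa(0.5, 0.5, 10.0, 1.0, 200.0, seed=s) for s in range(50)]
print(sum(runs) / len(runs))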
Self-similarity in Touch with Stochastic Process
Krzysztof Z. Sokalski
Subject: Physical Sciences, Condensed Matter Physics Keywords: scaling; self-similarity; stochastic processes
A partial differential representation of the self-similarity feature is derived from the notion of a homogeneous function in the general sense. This representation allows consideration of stochastic self-similar systems, as well as of stochastic partial differential equations.
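For context, one standard way to obtain such a partial differential representation (a textbook identity, not necessarily the author's exact formulation) starts from a generalized homogeneous function: if $f(\lambda^{a_1} x_1, \dots, \lambda^{a_n} x_n) = \lambda^{b} f(x_1, \dots, x_n)$ for all $\lambda > 0$, then differentiating with respect to $\lambda$ and setting $\lambda = 1$ yields the partial differential equation $\sum_{i=1}^{n} a_i x_i \, \partial f / \partial x_i = b f$.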
Lobatto-Milstein Numerical Method in Application of Uncertainty Investment of Solar Power Projects
Mahmoud A. Eissa, Boping Tian
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: stochastic differential equation; numerical simulation; real option; renewable energy; Egypt
Recently, there has been growing interest in the production of electricity from renewable energy sources (RES). RES investment is characterized by uncertainty: it is long-term, costly, and dependent on feed-in tariffs and support schemes. In this paper, we address the real option valuation (ROV) of a solar power plant investment. The real option framework is investigated; it considers the renewable certificate price as well as the cost of delay between establishing and operating the solar power plant. The optimal time to launch the project and the value of the deferral option are discussed. New three-stage numerical methods, the Lobatto3C-Milstein (L3CM) methods, are constructed. These methods are integrated with the concept of Black-Scholes option pricing theory and applied to option valuation for solar energy investment under uncertainty. The numerical results of the L3CM, finite difference and Monte Carlo methods are compared to show the efficiency of our methods. Our data set refers to the Arab Republic of Egypt.
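To make the kind of SDE discretization involved concrete, here is a sketch of the classical Milstein step for geometric Brownian motion together with a crude Monte Carlo payoff estimate. This is not the L3CM scheme itself, and all parameter values are illustrative assumptions rather than calibrated project data.

import math, random

def milstein_gbm(x0, mu, sigma, T, n_steps, seed=0):
    # Classical Milstein discretization of dX = mu*X dt + sigma*X dW.
    rng = random.Random(seed)
    h = T / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))
        x += mu * x * h + sigma * x * dw + 0.5 * sigma**2 * x * (dw**2 - h)
    return x

# Crude Monte Carlo estimate of a discounted call-style payoff max(X_T - K, 0).
K, r, T = 100.0, 0.03, 1.0
paths = [milstein_gbm(100.0, r, 0.2, T, 250, seed=s) for s in range(2000)]
print(math.exp(-r * T) * sum(max(x - K, 0.0) for x in paths) / len(paths))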
An Analysis of Technical Efficiency for Household's Rice Production in Cambodia: A Case Study of Three Districts in Battambang Province
Sokvibol Kea, Hua Li, Linvolak Pich
Subject: Social Sciences, Economics Keywords: agricultural productivity; Battambang; Cambodia; rice production; stochastic frontier production function (SFA model); technical efficiency
The aims of this study are to measure the technical efficiency (TE) of Cambodian households' rice production and to determine its main influencing factors using the stochastic frontier production function. The study utilized primary data collected from 301 rice farmers in three selected districts of Battambang by structured questionnaires. The empirical results indicated that the level of household rice output varied according to differences in the efficiency of production processes. The mean TE is 0.34, which means that farmers produce 34% of the best-practice rice output at the current level of production inputs and technology, indicating that rice output could be increased further by 66% at the same level of inputs if farmers were technically efficient. Furthermore, between 2013 and 2015 the TE of household rice production recorded a -14.3% decline rate because production was highly affected by drought during the 2015 dry season. Moreover, the evidence reveals that land, fertilizer, and pesticide are the major influencing input factors of household rice production, while disaster, education of the household head, family size, and other crops' cultivated area are the core factors decreasing TE. Conversely, the main factors increasing TE are irrigated area, number of plots, and sex of the household head.
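For readers unfamiliar with the method, the generic stochastic frontier specification behind this kind of TE estimate (a standard textbook form, not necessarily the authors' exact model) is $\ln y_i = \beta_0 + \sum_j \beta_j \ln x_{ij} + v_i - u_i$, with $v_i \sim N(0, \sigma_v^2)$ a random noise term and $u_i \ge 0$ the inefficiency term; technical efficiency is $TE_i = \exp(-u_i)$, and the ratio $\gamma = \sigma_u^2 / (\sigma_u^2 + \sigma_v^2)$ gives the share of total variation from the frontier attributable to inefficiency.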
On a Coupled System of Random and Stochastic Differential Equations with Nonlocal Stochastic Integral Conditions
A. M. A. El-Sayed, Hoda A. Foued
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Stochastic processes; stochastic differential equation; coupled system; nonlocal stochastic integral conditions.
Here we are concerned with two problems for a coupled system of random and stochastic nonlinear differential equations with two coupled systems of nonlinear nonlocal random and stochastic integral conditions. The existence of solutions will be studied. A sufficient condition for the uniqueness of the solution will be given. The continuous dependence of the unique solution on the nonlocal conditions will be proved.
The Relevance of Foreshocks in Earthquake Triggering: A Statistical Study
Eugenio Lippiello, Cataldo Godano, Lucilla De Arcangelis
Subject: Physical Sciences, Applied Physics Keywords: seismic forecasting; foreshocks; stochastic model
An increase of seismic activity is often observed before large earthquakes. The events responsible for this increase are usually named foreshocks, and their occurrence probably represents the most reliable precursory pattern. Many statistical features of foreshocks can be interpreted in terms of the standard mainshock-to-aftershock triggering process and are recovered in the Epidemic Type Aftershock Sequence (ETAS) model. Here we present a statistical study of instrumental seismic catalogs from four different geographic regions. We focus on some common features of foreshocks in the four catalogs which cannot be reproduced by the ETAS model. In particular, we find in instrumental catalogs a significantly larger number of foreshocks than the one predicted by the ETAS model. We show that this foreshock excess cannot be attributed to catalog incompleteness. We therefore propose a generalized formulation of the ETAS model, the ETAFS model, which explicitly includes foreshock occurrence. Statistical features of aftershocks and foreshocks in the ETAFS model are in very good agreement with instrumental results.
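For context, the conditional intensity of the standard ETAS model referred to above is usually written (in a generic parameterization, not necessarily the one used by the authors) as $\lambda(t) = \mu + \sum_{t_i < t} K e^{\alpha (m_i - m_0)} / (t - t_i + c)^{p}$, where $\mu$ is the background rate, the sum runs over past events of magnitude $m_i$ above the completeness threshold $m_0$, and $K$, $\alpha$, $c$, $p$ are the productivity and Omori-law parameters.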
Stochastic Differential Equations in Infinite Dimensional Hilbert Space and its Optimal Control Problem with Lévy Processes
Meijiao Wang, Qiuhong Shi, Qingxin Meng, Maoning Tang
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Stochastic Evolution Equation; Teugels Martingales; Optimal Control; Stochastic Maximum Principle; Verification Theorem
The paper is concerned with a class of stochastic differential equations in infinite dimensional Hilbert space with random coefficients driven by Teugels martingales, which are more general processes, and with its optimal control problem. Here Teugels martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see Nualart and Schoutens). There are three major ingredients. The first is to prove the existence and uniqueness of the solutions by a continuous dependence theorem for solutions combined with the parameter extension method. The second is to establish the stochastic maximum principle and verification theorem for our optimal control problem by the classic convex variation method and dual technique. The third is to present an example of a Cauchy problem for a controlled stochastic partial differential equation driven by Teugels martingales which our theoretical results can solve.
Battery Life Estimation of a Battery Under Different Stress Conditions
Natascia Andrenacci, Francesco Vellucci, Vincenzo Sglavo
Subject: Engineering, Energy & Fuel Technology Keywords: cycle aging; Lithium battery; stochastic algorithm
The prediction of capacity degradation, and more generally of the behaviors related to battery aging, is useful in the design and use phases of a battery to help improve the efficiency and reliability of energy systems. In this paper, a stochastic model for the prediction of battery cell degradation is presented. The proposed model takes its cue from an approach based on Markov chains, although it is not comparable to a Markov process, as the transition probabilities vary as the number of cycles that the cell has performed varies. The proposed model can reproduce the abrupt decrease in the capacity that occurs near the end of life condition (80% of the nominal value of the capacity) for the cells analyzed. Furthermore, we illustrate the ability of this model to predict the capacity trend for a lithium-ion cell with nickel-manganese-cobalt (NMC) at the cathode and graphite at the anode subjected to a life cycle in which there are different aging factors, using the results obtained for cells subjected to single aging factors.
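The essential mechanism described above — transition probabilities that change as the cell accumulates cycles — can be illustrated with a deliberately simple toy simulation. The structure, parameter names, and values below are assumptions chosen for illustration only and are not the model or data from the paper.

import random

def cycles_to_end_of_life(c0=1.0, eol_fraction=0.8, p0=0.3, growth=2e-4, step=5e-4, seed=0):
    # Toy Markov-chain-style fade: on each cycle the cell loses a small capacity
    # increment with a probability that grows with the cycle count, mimicking
    # transition probabilities that vary as the cell ages.
    rng = random.Random(seed)
    capacity, cycle = c0, 0
    while capacity > eol_fraction * c0:
        cycle += 1
        p_loss = min(1.0, p0 + growth * cycle)
        if rng.random() < p_loss:
            capacity -= step
    return cycle

print(cycles_to_end_of_life())  # number of cycles until capacity drops to 80% of nominal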
Event-Based Clustering with Energy Data
Kieran Greer, Yaxin Bi
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: stochastic clustering; energy prediction; disaggregation
This paper describes a stochastic clustering architecture that is used in the paper for making predictions over energy data. The design is discrete, localised optimisations based on similarity, followed by a global aggregating layer, which can be compared with the recent random neural network designs, for example. The topic relates to the IDEAS Smart Home Energy Project, where a client-side Artificial Intelligence component can predict energy consumption for appliances. The proposed data model is essentially a look-up table of the key energy bands that each appliance would use. Each band represents a level of consumption by the appliance. This table can replace disaggregation from more complicated methods, usually constructed from probability theory, for example. Results show that the table can accurately disaggregate a single source to a set of appliances, because each appliance has quite a unique energy footprint. As part of predicting energy consumption, the model could possibly reduce costs by 50% and more than that if the proposed schedules are also included. The hyper-grid has been changed to consider rows as single units, making it more tractable. A second case study considers wind power patterns, where the grid optimises over the dataset columns in a self-similar way to the rows, allowing for some level of feature analysis.
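A minimal sketch of the band look-up idea described above: each appliance is summarized by a few typical consumption bands, and a total reading is explained by the combination of one band per appliance that best matches it. The appliances and wattages are invented for illustration and are not taken from the paper or the IDEAS project data.

from itertools import product

# Hypothetical appliances and consumption bands in watts (illustrative only).
BANDS = {
    "fridge": [0, 80, 120],
    "kettle": [0, 1800, 2200],
    "tv": [0, 60, 100],
}

def disaggregate(total_watts):
    # Pick one band per appliance so that the summed bands best explain the reading.
    best, best_err = None, float("inf")
    for combo in product(*BANDS.values()):
        err = abs(total_watts - sum(combo))
        if err < best_err:
            best, best_err = dict(zip(BANDS, combo)), err
    return best

print(disaggregate(1980))  # -> {'fridge': 80, 'kettle': 1800, 'tv': 100}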
Valuing the Future, Discounting in Random Environments: A Review
Jaume Masoliver, Miquel Montero, Josep Perelló, J. Doyne Farmer, John Geanakoplos
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: stochastic processes; finance; interest rates
We address the process of discounting in random environments, which allows one to value the far future in economic terms. We review several approaches to the problem regarding different well-established stochastic market dynamics in the continuous-time context and include the Feynman-Kac approach. We also review the relation between bond pricing theory and discounting and introduce the market price of risk and the risk-neutral measures from an intuitive point of view devoid of excessive formalism. We provide the discount for each economic model and discuss their key results. We finally present a summary of our previous empirical studies of the long-run discount problem in several countries.
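The central object discussed above can be summarized by the standard expression for the stochastic discount function, written here in a generic form: $D(t) = \mathrm{E}\left[\exp\left(-\int_0^{t} r(s)\, ds\right)\right]$, where $r(s)$ is the stochastic short-term interest rate; the long-run decay of $D(t)$ determines the effective rate at which the far future is discounted.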
Analytical and Numerical Evaluation of Co-Scheduling Strategies and Their Application
Ruslan Kuchumov, Vladimir Korkhov
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: Co-scheduling; HPC; scheduling theory; stochastic optimization
Applications in high-performance computing (HPC) may not use all available computational resources, leaving some of them underutilized. By co-scheduling, i.e. running more than one application on the same computational node, it is possible to improve resource utilization and overall throughput. Some applications may have conflicting requirements on resources and co-scheduling may cause performance degradation, so it is important to take this into account in scheduling decisions. In this paper, we formalized the co-scheduling problem and proposed multiple scheduling strategies to solve it: an optimal strategy, an online strategy and heuristic strategies. These strategies vary in terms of the optimality of the solution they produce and the a priori information about the system they require. We showed theoretically that the online strategy provides schedules with a competitive ratio that has a constant upper limit. This allowed us to solve the co-scheduling problem using heuristic strategies that approximate this online strategy. Numerical simulations showed how heuristic strategies compare to the optimal strategy for different input systems. We proposed a method for measuring input parameters of the model in practice and evaluated this method on HPC benchmark applications. We showed high accuracy of the measurement method, which allows the proposed scheduling strategies to be applied in a scheduler implementation.
Reduced Numerical Modeling of Turbulent Flow with Fully Resolved Time Advancement. Part 1. Theory and Physical Interpretation
Alan R Kerstein
Subject: Engineering, Mechanical Engineering Keywords: turbulence; numerical simulation; multiscale modeling; stochastic processes
A multiscale modeling concept for numerical simulation of multiphysics turbulent flow utilizing map-based advection is described. The approach is outlined with emphasis on its theoretical foundations and physical interpretations in order to establish the context for subsequent presentation of the associated numerical algorithms and the results of validation studies. The model formulation is a synthesis of existing methods, modified and extended in order to obtain a qualitatively new capability. The salient feature of the approach is that time advancement of the flow is fully resolved both spatially and temporally, albeit with modeled advancement processes restricted to one spatial dimension. This one-dimensional advancement is the basis of a bottom-up modeling approach in which three-dimensional space is discretized into under-resolved mesh cells, each of which contains an instantiation of the modeled one-dimensional advancement. Filtering is done only to provide inputs to a pressure correction that enforces continuity and to obtain mesh-scale-filtered outputs if desired. The one-dimensional advancement, the pressure correction, and coupling of one-dimensional instantiations using a Lagrangian implementation of mesh-resolved volume fluxes is sufficient to advance the three-dimensional flow without time advancing coarse-grained equations, a feature that motivates the designation of the approach as autonomous microscale evolution (AME). In this sense, the one-dimensional treatment is not a closure because there are no unclosed terms to evaluate. However, the approach is additionally suitable for use as a subgrid-scale closure of existing large-eddy-simulation methods. The potential capabilities and limitations of both of these implementations of the approach are assessed conceptually and with reference to demonstrated capabilities of related methods.
Superintelligent Deep Learning Artificial Neural Networks
Jamilu Adamu
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Stochastic Superintelligent; Deterministic Superintelligent; Neuron; Stochastic Activation Function; Deterministic Activation Function; Probabilistic; Jameel's ANNAF Criterion
Activation Functions are crucial parts of Deep Learning Artificial Neural Networks. From the biological point of view, a neuron is just a node with many inputs and one output. A neural network consists of many interconnected neurons. It is a "simple" device that receives data at the input and provides a response. The function of neurons is to process and transmit information; the neuron is the basic unit in the nervous system. Carly Vandergriendt (2018) stated that the human brain at birth consists of an estimated 100 billion neurons. The ability of a machine to mimic human intelligence is called Machine Learning. Deep Learning Artificial Neural Networks were designed to work like a human brain with the aid of an arbitrary choice of non-linear Activation Functions. Currently, there is no rule of thumb on the choice of Activation Functions beyond "try out different things and see what combinations lead to the best performance"; however, the choice of Activation Functions should not be trial and error. Jamilu (2019) proposed that Activation Functions should emanate from an AI-ML-purified data set and that their choice should satisfy Jameel's ANNAF Stochastic and/or Deterministic Criterion. The objectives of this paper are to propose instances where Deep Learning Artificial Neural Networks are SUPERINTELLIGENT. Using Jameel's ANNAF Stochastic and/or Deterministic Criterion, the paper proposes four classes where Deep Learning Artificial Neural Networks are Superintelligent, namely Stochastic Superintelligent, Deterministic Superintelligent, and Stochastic-Deterministic 1st and 2nd Levels Superintelligence. Also, a Normal Probabilistic-Deterministic case is proposed.
Stochastic Thermodynamics of Oscillators' Networks
Simone Borlenghi, Anna Delin
Subject: Physical Sciences, General & Theoretical Physics Keywords: stochastic thermodynamics, heat transfer, oscillators networks, entropy production
We apply the stochastic thermodynamics formalism to describe the dynamics of systems of complex Langevin and Fokker-Planck equations. We provide in particular a simple and general recipe to calculate thermodynamical currents, dissipated and propagating heat for networks of nonlinear oscillators. By using the Hodge decomposition of thermodynamical forces and fluxes, we derive a formula for entropy production that generalises the notion of non-potential forces and makes transparent the breaking of detailed balance and of time reversal symmetry for states arbitrarily far from equilibrium. Our formalism is then applied to describe the off-equilibrium thermodynamics of a few examples, notably a continuum ferromagnet, a network of classical spin-oscillators and the Frenkel-Kontorova model of nano friction.
Inflation of Universe by Nonlinear Electrodynamics
Sergey Kruglov
Subject: Physical Sciences, Astronomy & Astrophysics Keywords: Nonlinear electrodynamics; inflation; causality; unitarity; stochastic magnetic fields
Nonlinear electrodynamics with two dimensional parameters is studied. The range of electromagnetic fields when principles of causality, unitarity and the classical stability hold are obtained. A singularity of the electric field at the center of charges is absent within our model and there are corrections to the Coulomb law as $r\rightarrow\infty$. The universe inflation takes place in the background of stochastic magnetic fields. The second stage of the universe evolution is the radiation era so that the graceful exit exists. We estimated the spectral index, the tensor-to-scalar ratio, and the running of the spectral index, which are in approximate agreement with the Planck and WMAP data.
Technical Efficiency of Barley Production: The Case of Smallholder Farmers in Meket District, Amhara National Regional State, Ethiopia
Getachew Wollie
Subject: Social Sciences, Economics Keywords: technical efficiency; stochastic frontier; trans-log; Meket; Barley
This study analyzed the technical efficiency of barley production by smallholder farmers in Meket district, Amhara National Regional State, Ethiopia. Cross-sectional data from a sample of 123 barley producers during the 2016/17 production season were collected by applying two-stage random sampling. To address the objective of the study, both descriptive statistics and econometric models were used to analyze the data. The trans-log functional form of the production function, together with a single-stage estimation approach, was used to estimate barley output and technical inefficiency factors simultaneously. The estimated stochastic production frontier model indicated that input variables such as fertilizer, human labor, and oxen power were significant in increasing the quantity of barley output, while barley seed had a negative effect. The estimated mean level of technical efficiency of the sample farmers was about 70.9%, which reveals that there is room to increase their technical efficiency level on average by 29.1% with the existing resources. The discrepancy ratio gamma indicated that 63% of the total variation from the frontier is due to technical inefficiency, while the remaining 37% is due to factors outside the control of farmers. Among the hypothesized factors that affect technical inefficiency, education level, extension contact, and number of barley plots significantly and negatively affected the technical inefficiency score. In addition, practice of crop rotation, distance of residence from the nearest main market, total expenditure, and soil fertility were found to have a positive and significant effect. Hence, emphasis should be given to decreasing the inefficiency level of the more inefficient farm households via experience sharing among farmers and the use of improved or certified barley seed. Besides this, policies and strategies of the government should be directed towards increasing farmers' education and improving the system of input distribution and institutional facilities.
On the Statistical Mechanics of Alien Species Distribution
michael G. bowler, Colleen K. kelly
Subject: Biology, Ecology Keywords: statistical mechanics; resource partitioning; stochastic processes; population dynamics
Many species of plants are found in regions to which they are alien. Their global distributions are characterised by a family of exponential functions of the kind that arise in elementary statistical mechanics (an example in ecology is MacArthur's broken stick). We show here that all these functions are quantitatively reproduced by a model containing a single parameter – some global resource partitioned at random on the two axes of species number and site number. A dynamical model generating this equilibrium is a two fold stochastic process and suggests a curious and interesting biological interpretation in terms of niche structures fluctuating with time and productivity; with sites and species highly idiosyncratic. Idiosyncrasy implies that attempts to identify a priori those species likely to become naturalized are unlikely to be successful. Although this paper is primarily concerned with a particular problem in population biology, the two fold stochastic process may be of more general interest.
Preprint SHORT NOTE | doi:10.20944/preprints202009.0401.v2
The Hunger Games as the Key to Happily Ever After?
Jacques Deere, Clarice Xu, Celestine Adelmant, Aziz Aboobaker, Roberto Salguero-Gómez
Subject: Biology, Ecology Keywords: life history; longevity; senescence; stochastic environments
The world's human population is reaching record longevities. Consequently, our societies are experiencing the impacts of prolonged longevity, such as increased retirement age. A major hypothesised influence on ageing patterns is resource limitation, formalised under calorie restriction theory. This theory predicts extended organismal longevity due to reduced calorie intake without malnutrition. However, several challenges face current calorie restriction (CR) research and, although several attempts have been made to overcome these challenges, there is still a lack of holistic understanding of how CR shapes organismal vitality. Here, we conduct a literature review of 222 CR peer-reviewed publications to summarise the state-of-the-art in the field. We use this summary to highlight challenges of CR research in our understanding of its impacts on longevity. Our review demonstrates that experimental research in this field is biased towards short-lived species (98.2% of studies examine species with <5 years of mean life expectancy) and lacks realism in key areas, such as stochastic environments or interactions with other environmental drivers such as temperature. We argue that only by considering a range of short- and long-lived species and by taking more realistic approaches can the impacts of CR on longevity be examined and validated in natural settings. We conclude by proposing experimental designs and study species that will allow the discipline to gain a much-needed understanding of how restricting caloric intake affects long-lived species in realistic settings. Through incorporating more experimental realism, we anticipate crucial insights that will ultimately shape the myriad of socio-bio-economic impacts of senescence in humans and other species across the Tree of Life.
Preprint BRIEF REPORT | doi:10.20944/preprints202007.0324.v1
A Stochastic Model Depending on the Infection Rate: COVID-19 Case in Djibouti
Liban Ismail
Subject: Keywords: Stochastic model epidemic; COVID-19; Simulation
The novel coronavirus (COVID-19) has been spreading and has caused a large-scale infection in China since December 2019. The first infected person in Djibouti was declared on March 18, 2020. This has had a significant impact on lives and the economy in Djibouti and other countries. In this study, we propose a two-compartment stochastic model that describes the evolution of the infection rate and of the number of infected individuals in the period from May 20 to June 23, 2020. We also describe the evolution of infected people into two states, recovered and deceased.
Medium-Term Hydropower Scheduling with Variable Head under Inflow, Energy and Reserve Capacity Price Uncertainty
Martin N. Hjelmeland, Arild Helseth, Magnus Korpås
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: hydropower scheduling; stochastic programming; integer programming
We propose a model for medium-term hydropower scheduling (MTHS) with variable head and uncertainty in inflow, reserve capacity, and energy price. With an increase of intermittent energy sources in the generation mix, it is expected that a flexible hydropower producer can obtain added profits by participating in markets other than just the energy market. To capture this added potential, the hydropower system should be modeled with a higher level of detail. In this context, we apply an algorithm based on stochastic dual dynamic programming (SDDP) to solve the nonconvex MTHS problem, and show that the use of Strengthened Benders (SB) cuts to represent the expected future profit (EFP) function provides accurate scheduling results for slightly nonconvex problems. A method to visualize the EFP function in a dynamic programming setting is provided, serving as a useful tool for a priori inspection of the EFP shape and its nonconvexity.
Dynkin Game under G-Expectation in Continuous Time
Helin Wu, Yong Ren, Feng Hu
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: Dynkin game; Ambiguity; Backward stochastic differential equation (BSDE); Reflected backward stochastic differential equation (Reflected BSDE); Constraint
In this paper, we investigate a class of Dynkin games under $g$-expectation induced by backward stochastic differential equations (BSDEs for short). We define the lower and upper value functions $\underline{V}_t=ess\sup\limits_{\tau\in{\mathcal{T}_t}} ess\inf\limits_{\sigma\in{\mathcal{T}_t}}\mathcal{E}^g_t[R(\tau,\sigma)]$ and $\overline{V}_t=ess\inf\limits_{\sigma\in{\mathcal{T}_t}}ess\sup\limits_{\tau\in{\mathcal{T}_t}}\mathcal{E}^g_t[R(\tau,\sigma)]$, respectively. Under some regularity assumptions, a saddle point (a pair of optimal stopping times) is obtained and the value function of the Dynkin game $V(t)=\underline{V}_t=\overline{V}_t$ follows. Furthermore, the constrained case of the Dynkin game is also considered.
Computational Simulations of Similar Probabilistic Distributions to the Binomial and Poisson Distributions
Terman Frometa-Castillo, Anil Pyakuryal, Amadeo Wals-Zurita, Asghar Mesbahi
Subject: Mathematics & Computer Science, Probability And Statistics Keywords: simulation; binomial distribution; Poisson distribution; stochastic process; modelling
This study has developed a Matlab application for simulating statistical models project (SMp) probabilistic distributions that are similar to the binomial and Poisson distributions, which were created by mathematical procedures. The simulated distributions are graphically compared with these popular distributions. The application allows one to obtain many probabilistic distributions, and shows the trend (τ) for n trials with success probability p, i.e. the maximum probability at τ=np. While the Poisson distribution PD(x;µ) is a unique probabilistic distribution, with PD=0 at x=+∞, the application simulates many SMp(x;µ,Xmax) distributions, where µ is the Poisson parameter and generally the value of x with the maximum probability, and Xmax is the upper limit of x with SMp(x;µ,Xmax) ≥ 0 and the limit of the stochastic region of a random discrete variable. It is shown that, via simulation, one can obtain more and better probabilistic distributions than by the mathematical route.
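The SMp distributions themselves are defined by the authors' Matlab application and are not reproduced here; the following minimal Python sketch only illustrates the baseline comparison the abstract refers to, using scipy to evaluate the binomial and Poisson pmfs and to check that the most probable value lies near the trend τ = np. All numerical values are illustrative.

```python
# Minimal sketch (not the authors' SMp model): compare the binomial and
# Poisson probability mass functions and check that the most probable
# value sits near the trend tau = n*p quoted in the abstract.
import numpy as np
from scipy.stats import binom, poisson

n, p = 40, 0.15          # illustrative values, not from the paper
mu = n * p               # Poisson parameter matched to the binomial mean
x = np.arange(0, n + 1)

pmf_binom = binom.pmf(x, n, p)
pmf_pois = poisson.pmf(x, mu)

print("tau = n*p =", mu)
print("binomial mode :", x[np.argmax(pmf_binom)])
print("Poisson mode  :", x[np.argmax(pmf_pois)])
print("max |difference| between the two pmfs:", np.abs(pmf_binom - pmf_pois).max())
```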
Dynamics of a Stochastic Epidemic Model Based on the Association Between Susceptible and Recovered Individuals
Luyao Xin, Yingxin Guo, Quanxin Zhu
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: extinction; permanence in mean; stability; stochastic epidemic model
In this paper, we propose a new mathematical model based on the association between susceptible and recovered individuals, where this association is disturbed by white noise. The model incorporates demographic changes and is used to study long-term behavior. We study the stability of the equilibria of the deterministic model and prove conditions for the extinction of the disease. Then, we investigate and obtain the critical conditions of the stochastic epidemic model for the extinction and the permanence in mean of the disease under white noise. To verify our results, we present some numerical simulations for real data related to the disease.
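The abstract does not give the model equations, so the sketch below is only a generic illustration of the kind of object studied: an SIR-type system whose transmission term is perturbed by white noise, integrated with the Euler-Maruyama scheme. The parameter values and the placement of the noise are assumptions, not the paper's model.

```python
# Generic sketch (not the paper's exact model): an SIR system whose
# transmission term is perturbed by white noise, integrated with the
# Euler-Maruyama scheme to illustrate extinction vs. persistence.
import numpy as np

def simulate(beta=0.3, gamma=0.1, sigma=0.05, N=1.0, T=400, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    S, I, R = 0.99 * N, 0.01 * N, 0.0
    traj = np.empty((steps, 3))
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        infect = beta * S * I / N * dt + sigma * S * I / N * dW
        recover = gamma * I * dt
        S, I, R = S - infect, I + infect - recover, R + recover
        I = max(I, 0.0)                            # keep the state physical
        traj[k] = (S, I, R)
    return traj

path = simulate()
print("final infected fraction:", path[-1, 1])
```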
Applications of Generating Functions to Stochastic Processes and to the Complexity of the Knapsack Problem
Jorma Jormakka, Sourangshu Ghosh
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: stochastic processes; generating functions; polynomial time algorithms; partitions; knapsacks
The paper describes a method of solving some stochastic processes using generating functions. A general theorem of generating functions of a particular type is derived. A generating function of this type is applied to a stochastic process yielding polynomial time algorithms for certain partitions. The method is generalized to a stochastic process describing a rather general linear transform. Finally, the main idea of the method is used in deriving a theoretical polynomial time algorithm to the knapsack problem.
Implementation of the Hindmarsh–Rose Model Using Stochastic Computing
Oscar Camps, Stavros Stavrinides, Carol De Benito, Rodrigo Picos
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: stochastic logic; chaotic systems; approximate computing; Hindmarsh Rose system
In this paper we present a successful implementation of the Hindmarsh–Rose model within a stochastic computing (SC) environment. The merits of the proposed approach are design simplicity, due to stochastic computing, and ease of implementation. Simulation results showed that the approximation achieved is equivalent to introducing a noise source into the original model. A study of the level of noise introduced, as a function of the number of bits in the stochastic sequence, has been performed. Additionally, we demonstrate that such an approach, even though it is noisy, reproduces the behaviour of biological systems, which are intrinsically noisy. It is also demonstrated that a speedup of 2x compared to biological systems is easily achievable with a very small number of gates, thus paving the road for the in silico implementation of large neuron networks.
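The stochastic-computing hardware itself is not reproduced here; the sketch below only integrates the standard Hindmarsh–Rose equations (textbook parameter values, not necessarily those used in the paper) and adds a small noise term to mimic the effect that, according to the abstract, the SC approximation has on the original model.

```python
# Sketch of the Hindmarsh-Rose neuron equations with a small additive
# noise term standing in for the stochasticity that an SC realisation
# introduces.  Parameter values are the standard textbook ones.
import numpy as np

a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, xR, I = 0.006, 4.0, -1.6, 3.25

def simulate(T=2000.0, dt=0.01, noise=0.01, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y, z = -1.6, -10.0, 2.0
    xs = np.empty(n)
    for k in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - xR) - z)
        x += dx * dt + noise * np.sqrt(dt) * rng.normal()
        y += dy * dt
        z += dz * dt
        xs[k] = x
    return xs

membrane = simulate()
print("membrane potential range:", membrane.min(), membrane.max())
```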
Preprint TECHNICAL NOTE | doi:10.20944/preprints202101.0027.v1
Introducing Uncertainty in Risk Calculation Along Roads Using a Simple Stochastic Approach
Michel Jaboyedoff, Tiggi Choanji, Marc-Henri Derron, Li Fei, Amalia Gutierrez, Lidia Loiotine, François Noel, Chunwei Sun, Emmanuel Wyser, Charlotte Wolff
Subject: Earth Sciences, Atmospheric Science Keywords: landslide; rockfall; risk; stochastic; uncertainty; transportation corridors
Based on a previous risk calculation study along a road corridor, risk is recalculated using stochastic simulation by introducing variability for most of the parameters in the risk equation. This leads to an exceedance curve comparable to that of catastrophe models. This approach introduces uncertainty into the risk calculation in a simple way, which can be used for poorly documented cases to compensate for the lack of data. The approach seems to tend to lower risk estimates or to call risk calculations into question.
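The paper's risk equation and parameter distributions are not given in the abstract, so the following toy Monte Carlo only illustrates the general idea of propagating parameter variability into a loss exceedance curve; all distributions and values are invented for illustration.

```python
# Toy sketch of the idea: sample each factor of a multiplicative risk
# equation (hazard frequency, probability of impact, vulnerability,
# exposed value) from a distribution and build an annual-loss
# exceedance curve.  Distributions and values are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
frequency     = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # events/yr
p_impact      = rng.beta(2, 8, size=n)                               # hit probability
vulnerability = rng.uniform(0.1, 1.0, size=n)                        # damage fraction
exposed_value = rng.lognormal(mean=np.log(1e6), sigma=0.3, size=n)   # currency units

annual_loss = frequency * p_impact * vulnerability * exposed_value
losses = np.sort(annual_loss)[::-1]
exceed_prob = np.arange(1, n + 1) / n      # P(loss >= sorted value)

for p in (0.5, 0.1, 0.01):
    print(f"loss exceeded with prob {p:.2f}: {losses[int(p * n) - 1]:,.0f}")
```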
Fine-Tune Robust Optimization
Alexandre Cesar Balbino Barbosa Filho, Sergio Mauro da Silva Neiro
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: Robust Optimization; Optimization Under Uncertainty; Robustness; Stochastic
A Robust Optimization framework with original concepts and fundamentals, also admitting a fusion of ideas from relative regret models and static robust optimization and containing conservatism concepts, is disclosed. The algorithm uses a fine-tune strategy to tune the model so that robustness and a target ideality can be mutually achieved with a specified risk. The framework comprises original concepts, a mathematical approach and an algorithm. The statistical treatment of the data, together with the original concepts of the framework, makes it suitable for short-, medium- or long-term decision-making settings. The framework has high tractability since the algorithm forces the creation of a setting that yields a robust optimization with the specified risk. The framework can be applied to linear and nonlinear mathematical models, provided that the objective function is monotonic in the domain of the active convex region. Several examples are solved to illustrate the framework, and all results demonstrated high tractability and performance. There is a wide range of applications. Throughout the text, there is a thorough discussion of its philosophy, objective, original concepts, fields of application, and statistical and probabilistic fundamentals.
Characterization of Pathogen Air-Borne Inoculum Density by Information Theoretic Analysis of Spore Trap Time Series Data
Robin A Choudhury, Neil McRoberts
Subject: Biology, Plant Sciences Keywords: time series; entropy; average mutual information; stochastic process; deterministic dynamics
Air sampling using vortex air samplers, combined with species-specific amplification of pathogen DNA, was carried out over two years at four or five locations in the Salinas Valley of California. The resulting time series data for the abundance of pathogen DNA trapped per day displayed complex dynamics with features of both deterministic (chaotic) and stochastic dynamics. Methods of nonlinear time series analysis developed for the reconstruction of low-dimensional attractors provided new insights into the complexity of the pathogen abundance data, but also indicated that practicality may limit the capacity for definitively classifying the dynamics of airborne plant pathogen inoculum. Over the two years of the study, five location/year combinations were classified as having stochastic linear dynamics and four were not. Calculation of entropy values, either for the number of pathogen DNA copies or for a binary string indicating whether the pathogen abundance data were increasing or not, revealed (1) some robust differences in the dynamics between seasons that were not obvious in the time series data themselves, and (2) that the series were almost all at their theoretical maximum entropy value when considered from the simple perspective of whether instantaneous change along the sequence is positive or not.
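As a small illustration of the binary-string entropy computation mentioned in the abstract, the sketch below converts a synthetic daily abundance series into an increase/no-increase string and evaluates its Shannon entropy, whose theoretical maximum for a binary alphabet is 1 bit. The data are simulated, not the spore trap series.

```python
# Sketch of the binary-entropy calculation described in the abstract:
# turn a daily abundance series into a 0/1 string (increase or not)
# and compute its Shannon entropy; the theoretical maximum for a
# binary alphabet is 1 bit.  The series below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
abundance = rng.poisson(lam=20, size=200)          # stand-in for DNA copies/day

increments = np.diff(abundance) > 0                # True if the count rose
p1 = increments.mean()
p0 = 1.0 - p1
entropy = -sum(p * np.log2(p) for p in (p0, p1) if p > 0)

print(f"P(increase) = {p1:.2f}, Shannon entropy = {entropy:.3f} bits (max 1 bit)")
```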
Towards Generic Simulation for Demanding Stochastic Processes
Demetris Koutsoyiannis, Panayiotis Dimitriadis
Subject: Keywords: stochastics; stochastic processes; stochastic simulation; Monte Carlo simulation; long range dependence; persistence; Hurst-Kolmogorov dynamics; climacogram; cumulants; intermittence
We outline and test a new methodology for genuine simulation of stochastic processes with any dependence and any marginal distribution. We reproduce time dependence with a generalized, time-symmetric or asymmetric, moving-average scheme. This implements linear filtering of non-Gaussian white noise, with the weights of the filter determined by analytical equations in terms of the autocovariance of the process. We approximate the marginal distribution of the process, irrespective of its type, using a number of its cumulants, which in turn determine the cumulants of the white noise in a manner that readily supports the generation of random numbers from that approximation, so that it is applicable for stochastic simulation. The simulation method is genuine as it uses the process of interest directly without any transformation (e.g. normalization). We illustrate the method in a number of synthetic and real-world applications with either persistence or antipersistence, and with non-Gaussian marginal distributions that are bounded, thus making the problem more demanding. These include distributions bounded from both sides, such as the uniform, and bounded from below, such as the exponential and Pareto, possibly having a discontinuity at the origin (intermittence). All examples studied show the satisfactory performance of the method.
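A minimal sketch of the core mechanism only, under simplifying assumptions: the filter weights below are an arbitrary decaying sequence rather than the analytically derived ones, and exponential white noise stands in for the cumulant-matched noise of the paper. The check compares the empirical autocovariance of the filtered series with the value implied by the weights, γ(k) = var(w) Σ_j a_j a_{j+k}.

```python
# Minimal sketch of the core idea only: generate a correlated process by
# linearly filtering non-Gaussian (here exponential) white noise with a
# set of moving-average weights, and check the empirical autocovariance
# against the theoretical one gamma(k) = var(w) * sum_j a_j a_{j+k}.
# The weights here are simply a decaying sequence chosen for illustration.
import numpy as np

rng = np.random.default_rng(3)
weights = 0.7 ** np.arange(30)                   # illustrative MA weights
noise = rng.exponential(scale=1.0, size=200_000)
noise -= noise.mean()                            # zero-mean white noise
x = np.convolve(noise, weights, mode="valid")

def empirical_autocov(series, k):
    return np.mean((series[:-k] - series.mean()) * (series[k:] - series.mean()))

for k in (1, 2, 5):
    theo = np.var(noise) * np.sum(weights[:-k] * weights[k:])
    print(f"lag {k}: empirical {empirical_autocov(x, k):.3f}  theoretical {theo:.3f}")
```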
Understanding the Impact of Different Landscape-Level Fuel Management Strategies on Wildfire Hazard
Akli Benali, Ana C.L. Sá, João Pinho, Paulo Fernandes, José M.C. Pereira
Subject: Earth Sciences, Environmental Sciences Keywords: wildfire; hazard; modelling; stochastic; fuel treatment; fuel breaks; forest management
The disastrous 2017 fire season in Portugal led to widespread recognition of the need for a paradigm shift in forest and fire management. We focused our study on Alvares, a parish in central Portugal which had 60% of its area burned in 2017 and which has a long record of historical fires. We evaluated how different fuel treatment strategies can reduce wildfire hazard in Alvares, through i) a fuel break network with different priorities and ii) random fuel treatments resulting from stand-level management intensification. To assess this, we developed a stochastic fire simulation system (FUNC-SIM) that integrates uncertainties in fuel distribution over the landscape. If the landscape remains unchanged, Alvares will have large burn probabilities in the north, northeast, and center-east areas of the parish, very often associated with high fire line intensities. The different fuel treatment scenarios decreased burned area by 12.1-31.2%, resulting from 1%-4.6% increases in annual treatment area, and reduced the likelihood of wildfires larger than 5000 ha by 10%-40%. On average, simulated burned area decreased by 0.22% per hectare treated, and effectiveness decreased with increasing treated area. Overall, both fuel treatment strategies effectively reduced wildfire hazard and should be part of a larger, holistic and integrated plan to reduce the vulnerability of the Alvares parish to wildfires.
Stochastic Modeling of Forces on Jacket-Type Offshore Structures Colonized by Marine Growth
Hamed Ameryoun, Franck Schoefs, Laurent Barille, Yoann Thomas
Subject: Engineering, Marine Engineering Keywords: marine growth; biofouling; wave loading; stochastic modeling; reliability; jacket structures
The present paper deals with the stochastic modeling of bio-colonization for the computation of stochastic hydrodynamic loading on jacket-type offshore structures. It relies on a multidisciplinary study gathering biological and physical research fields and accounting for uncertainties at all levels. Indeed, bio-colonization of offshore structures is a complex phenomenon with two major but distinct domains: (i) marine biology, whose processes are modeled with biomathematical methods, and (ii) hydrodynamic processes. This paper aims to connect these two domains. It proposes a stochastic model for the marine organisms' growth and then continues with transfers for the assessment of drag coefficient and force probability density functions that account for marine growth evolution. A case study relies on the characteristics (growth and shape) of the blue mussel (Mytilus edulis) in the northeastern Atlantic.
Further Robust Dissipativity Analysis of Uncertain Stochastic Generalized Neural Networks With Markovian Jump Parameters
Pharunyou Chanthorn, Grienggrai Rajchakit, Jenjira Thipcha, Chanikan Emharuethai, Ramalingam Sriraman
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: dissipativity analysis; generalized neural networks; Markovian jump parameters; stochastic disturbance
This paper analyzes the robust dissipativity of uncertain stochastic generalized neural networks (USGNNs) with Markovian jumping parameters and time-varying delays. Since most systems in practical applications are subject to uncertainties, norm-bounded parameter uncertainties and stochastic disturbance are considered. Then, by constructing an appropriate Lyapunov-Krasovskii functional (LKF) and by employing integral inequalities, LMI-based sufficient conditions for the considered systems are established. Numerical simulations are given to show the merit of the presented results.
Active Control of Edgewise Vibrations in Wind Turbine Blades Using Stochastic Disturbance Accommodating Control
Cong Cong
Subject: Engineering, Control & Systems Engineering Keywords: stochastic disturbance accommodating control; edgewise vibrations; minimum-variance unbiased estimator
Vibrations of blades and tower have an important impact on wind turbines. This paper presents an active controller design to suppress blade edgewise vibrations under aerodynamic and gravitational loads. Treating the sum of the aerodynamic load input in the edgewise direction and the gravitational load as an unknown disturbance input, a stochastic disturbance accommodating control (SDAC) approach is proposed to design a controller that utilizes a minimum-variance unbiased estimator (MVUE) to estimate both the state and the unknown input. The stability analysis proves that the proposed SDAC is bounded in mean square. In order to verify the performance of the minimum-variance unbiased estimator and the proposed SDAC, numerical simulations using Matlab/Simulink have been carried out for the National Renewable Energy Laboratory 5-MW wind turbine, under different circumstances with and without random process and measurement noise. It is shown that the MVUE estimates track the real state and unknown input. The results are also compared to the traditional linear quadratic regulator (LQR) and show that the proposed stochastic disturbance accommodating control scheme can further reduce displacement in the edgewise direction and that the control strategy is more effective than the LQR.
Company Value with Ruin Constraint in a Discrete Model
Christian Hipp
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: stochastic control; optimal dividend payment; ruin probability constraint
Optimal dividend payment under a ruin constraint is a two-objective control problem which, in simple models, can be solved numerically by three essentially different methods. One is based on a modified Bellman equation and the policy improvement method (see Hipp C., 2003). In this paper we use explicit formulas for running allowed ruin probabilities which avoid a complete search and speed up and simplify the computation. The second is also a policy improvement method, but without the use of a dynamic equation (see Hipp C., 2003). It is based on closed formulas for first entry probabilities and discount factors for the time until first entry (see Hipp C., 2016). The third is a new, faster and more intuitive method which uses appropriately chosen barrier levels and a closed formula for the corresponding dividend value. Using the running allowed ruin probabilities, a simple test for admissibility with respect to the ruin constraint is given. All these methods work for the discrete De Finetti model and are applied in a numerical example. The non-stationary Lagrange multiplier method suggested in Hipp C. (2016), Section 2.2.2, also yields optimal dividend strategies which differ from those of all other methods, and Lagrange gaps are present here. These gaps always exist in De Finetti models (see Hipp C., 2017).
Technical Efficiency and Its Determinants on Maize Seed Production in Palpa District, Nepal
Mahesh Sapkota, Niraj Prakash Joshi, Rishi Ram Kattel, Mahima Bajracharya
Subject: Social Sciences, Economics Keywords: cost effective; stochastic frontier; maize; technical efficiency; tobit
This paper aimed to assess the technical efficiency of maize seed production and the major factors contributing to technical efficiency. Maize is the second most important staple crop in Nepal, but the average yield of maize is very low compared to other countries with similar agro-climatic conditions. Inefficient use of resources has led to low maize yields. The software Raosoft was used to determine the required sample size, and a total of 182 samples were selected using a simple random technique in June 2016. A stochastic production frontier model and a Tobit model were used to derive the results. The technical efficiency of maize seed production ranged from 0.25 to 0.92 with an average of 0.71, which reveals scope for increasing technical efficiency by 29 percent. The largest share of farmers (29.1%) was at the higher technical efficiency level of 0.8-0.9, followed by 28.6 percent at 0.7-0.8 and 23.1 percent at 0.6-0.7. Age and schooling of the household head, experience in maize seed production, area share of maize, and dummy variables such as livestock holding, source of seed and access to extension services were found to significantly affect the technical efficiency level. For a least developed country like Nepal it would be better to use the available resources wisely, and improving existing technologies would be more cost effective than discovering new technologies. The study recommends that the concerned organizations focus on mixed agricultural farming systems, access to better quality seed, and provision of technical knowledge, which would help improve technical efficiency.
Tensor Rules in the Stochastic Organization of Genomes and Genetic Stochastic Resonance in Algebraic Biology
Sergey V. Petoukhov
Subject: Life Sciences, Genetics Keywords: genome; DNA; alphabet; matrices; tensor product; quantum informatics; stochastic resonance
The article is devoted to new results of the author, which extend his previously published ones, on hidden rules and symmetries in the structures of long single-stranded DNA sequences in eukaryotic and prokaryotic genomes. The author uses the existence of different alphabets of n-plets in DNA: the alphabet of 4 nucleotides, the alphabet of 16 doublets, the alphabet of 64 triplets, etc. Each such DNA alphabet of n-plets can serve for constructing a text as a chain of these n-plets. Using this possibility, the author represents any long DNA nucleotide sequence as a bundle of so-called n-texts, each of which is written on the basis of one of these alphabets of n-plets. Each such n-text has its individual percentages of different n-plets in its genomic DNA. It turns out that, in such a multi-alphabetical or multilayer presentation of each of the many genomic DNAs analyzed by the author, universal rules of probability and symmetry exist in the interrelations of its different n-texts regarding their percentages of n-plets. In this study, the tensor product of matrices and vectors is used as an effective analytical tool borrowed from the arsenal of quantum mechanics. Some additions to the topic of algebra-holographic principles in genetics are also presented. Taking into account the described genomic rules of probability, the author also puts forward a concept of the important role of stochastic resonance in genetic informatics.
Stochastic Computing Emulation of Memristor Cellular Nonlinear Networks
Oscar Camps, Mohamed-Moner al Chawa, Stavros G. Stavrinides, Rodrigo Picos
Subject: Engineering, Electrical & Electronic Engineering Keywords: Cellular Nonlinear Networks; Stochastic Logic; real time processing; image processing; memristors.
Cellular Nonlinear Networks (CNN) are a concept introduced in 1988 by Leon Chua and Lin Yang as a bio-inspired architecture capable of massively parallel computation. Later on, CNNs have been enhanced with designs that incorporate memristors to profit from their processing and memory capabilities. In addition, Stochastic Computing (SC) can be used to optimize the quantity of required processing elements; it thus provides a lightweight yet quite accurate and effective approximate computing framework. In this work, we propose the utilization of SC in designing and implementing a memristor-based CNN. As a proof of the proposed concept, an example application is presented, combining Matlab and an FPGA to create the CNN. The implemented CNN has then been used to perform three different real-time applications on a 512x512 gray-scale and a 768x512 color image: storage of the image, edge detection, and image sharpening. It has to be pointed out that the same CNN has been used for the three different tasks, with the sole change of some programmable parameters. Results show excellent capability with significant accompanying advantages, like the low number of needed elements, further allowing for a low-cost FPGA-based system implementation and confirming the system's ability for real-time operation.
SEIR Modeling of the Italian Epidemic of SARS-CoV-2 Using Computational Swarm Intelligence
Alberto Godio, Francesca Pace, Andrea Vergnano
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: SARS-CoV-2; COVID-19; SEIR modeling; Italy; stochastic modeling; swarm intelligence; Google COVID 19 Community Mobility Reports
We applied a generalized SEIR epidemiological model to the recent SARS-CoV-2 outbreak in the world, with a focus on Italy and its Lombardia, Piemonte, and Veneto regions. We focus on the application of a stochastic approach to fitting the model's numerous parameters using a Particle Swarm Optimization (PSO) solver, to improve the reliability of predictions in the medium term (30 days). We analyze the official data and the predicted evolution of the epidemic in the Italian regions, and we compare the results with data and predictions for Spain and South Korea. We link the model equations to the changes in people's mobility, with reference to Google's COVID-19 Community Mobility Reports. We discuss the effectiveness of the policies adopted by different regions and countries and how they affect past and future infection scenarios.
Comparison of Cambodian Rice Production Technical Efficiency at National and Household Level
Subject: Social Sciences, Economics Keywords: Agricultural productivity; Cambodia; Rice production; Stochastic Frontier Analysis (SFA); Technical Efficiency
Rice is the most important food crop in Cambodia and its production is the most organized food production system in the country. The main objective of this study is to measure the technical efficiency (TE) of Cambodian rice production and to identify the core factors influencing rice TE at both the national and household level, in order to explain the possibilities for increasing the productivity and profitability of rice, using a translog production function estimated through a Stochastic Frontier Analysis (SFA) model. A four-year dataset (2012-2015) generated from government documents was utilized for the national analysis, while at the household level, three years of primary data (2013-2015) collected from 301 rice farmers in three selected districts of Battambang province using structured questionnaires were applied. The results indicate that the level of rice output varied according to the different levels of capital investment in agricultural machinery, total actual harvested area, and technical fertilizer application across provinces, while household rice output varied according to differences in the efficiency of production processes, techniques, total annual harvested land, and farmers' technical application of fertilizers and pesticides. The overall mean TE was estimated at 78.4% (national level) and 34% (household level), indicating that rice output could be increased further by 21.6% (national production) and 66% (household) at the same level of inputs and technology if farmers were technically efficient. TE also declined at a rate of 7% at the national level and 14.3% at the household level, owing to the strong impact of natural disasters and various environmental and social factors during the study period.
Resource Concentration and Clustering in Replicator Dynamics with Stochastic Reset Events
Ignacio T. Gómez Garay, Damián H. Zanette
Subject: Physical Sciences, Other Keywords: replicator population; stochastic resetting; resource distribution; anomalous fluctuations; clustering
As a model for economic and ecological systems, replicator dynamics represents a basic form of agent competition for finite resources. Here, we investigate the effects of stochastic resetting in this kind of process. Random reset events abruptly bring individual resources to a small value, from which the dynamics must start anew. Numerical results show that the resource distribution over the population of competing agents develops highly nonuniform profiles, exhibiting clustering and fluctuations with anomalous dependence on the population size. This non-standard statistical behavior jeopardizes an analytical treatment based on mean-field assumptions. We propose alternative simplified analytical approaches which provide a stylized description of entropy evolution for the clustered distribution of resources and explain the unusually slow decrease of fluctuations.
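The abstract does not spell out the governing equations, so the sketch below assumes a standard replicator drift for shares of a unit resource plus Poissonian reset events that send one agent's share to a small value, followed by renormalization; fitness values, rates and sizes are illustrative.

```python
# Sketch under assumed dynamics (the abstract does not give the exact
# equations): replicator competition for a unit resource among N agents
# with fixed fitnesses, plus random reset events that knock one agent's
# share down to a small value, after which shares are renormalised.
import numpy as np

rng = np.random.default_rng(7)
N, T, dt = 50, 200.0, 0.01
reset_rate, reset_value = 0.5, 1e-3      # resets per unit time (whole system)

fitness = rng.normal(1.0, 0.1, size=N)
x = np.full(N, 1.0 / N)                  # resource shares, sum to one

steps = int(T / dt)
for _ in range(steps):
    mean_fit = np.dot(fitness, x)
    x += dt * x * (fitness - mean_fit)   # replicator drift
    if rng.random() < reset_rate * dt:   # Poissonian reset event
        x[rng.integers(N)] = reset_value
    x /= x.sum()                         # keep shares on the simplex

print("largest share:", x.max(), " effective #agents:", 1.0 / np.sum(x**2))
```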
A Novel Hybrid Monte Carlo Algorithm for Sampling Path Space
Francis J. Pinski
Subject: Physical Sciences, Mathematical Physics Keywords: Brownian Dynamics; Stochastic Processes; Sampling path space, transition paths
To sample from complex, high-dimensional distributions, one may choose algorithms based on the Hybrid Monte Carlo (HMC) method. HMC-based algorithms generate nonlocal moves, alleviating diffusive behavior. Here, I build on an already defined HMC framework, Hybrid Monte Carlo on Hilbert spaces [A. Beskos, F.J. Pinski, J.-M. Sanz-Serna and A.M. Stuart, Stoch. Proc. Applic. 121, 2201 - 2230 (2011); doi:10.1016/j.spa.2011.06.003], that provides finite-dimensional approximations of measures π which have density with respect to a Gaussian measure on an infinite-dimensional Hilbert (path) space. In all HMC algorithms, one has some freedom to choose the mass operator. The novel feature of the algorithm described in this article lies in the choice of this operator. This new choice defines a Markov Chain Monte Carlo (MCMC) method which is well defined on the Hilbert space itself. As before, the algorithm described herein uses an enlarged phase space Π having the target π as a marginal, together with a Hamiltonian flow that preserves Π. In the previous method, the phase space was augmented with Brownian bridges. With the new choice for the mass operator, π is augmented with Ornstein-Uhlenbeck (OU) bridges. The covariance of Brownian bridges grows with their length, which has negative effects on the Metropolis-Hastings acceptance rate. This contrasts with the covariance of OU bridges, which is independent of the path length. The ingredients of the new algorithm include the definition of the mass operator, the equations for the Hamiltonian flow, the (approximate) numerical integration of the evolution equations, and finally the Metropolis-Hastings acceptance rule. Taken together, these constitute a robust method for sampling the target distribution in an almost dimension-free manner. The behavior of this novel algorithm is demonstrated by computer experiments for a particle moving in two dimensions, between two free-energy basins separated by an entropic barrier.
A Periodic Potential Underdamped Stochastic Resonance Method and Its Application for Gear Fault Diagnosis
Zhixing Li, Xiandong Liu, Tian He, Yingchun Shan
Subject: Engineering, Mechanical Engineering Keywords: fault diagnosis, stochastic resonance, periodic potential, underdamped, weak signal
The vibration features of weak gear faults are often buried in strong background noise, which makes it necessary to establish weak-feature enhancement methods. Among these methods, stochastic resonance (SR) has the unique advantage of transferring noise energy to weak signals and has great application prospects in weak signal extraction. However, the traditional SR potential model cannot form a richer potential structure and may lead to system instability when the noise is too great. To overcome these shortcomings, this article presents a periodic potential underdamped stochastic resonance (PPUSR) method after investigating the potential function and the system signal-to-noise ratio (SNR). In addition, system parameters are further optimized using an ant colony algorithm. The effectiveness of the proposed method was verified through simulation and gear experiments. We conclude that, compared with the traditional underdamped stochastic resonance (TUSR) method, the PPUSR method has a higher recognition degree and better frequency response capability.
Stable Value Funds Performance
David F. Babbel, Miguel A. Herce
Subject: Social Sciences, Finance Keywords: stable value; defined contribution; optimal asset allocation; stochastic dominance
Little in the scholarly economics literature is directed specifically to the performance of stable value funds, although they occupy a leading place among retirement investment vehicles. They are currently offered in more than one-third of all defined contribution plans in the USA, with more than $800 billion of assets under management. This paper rigorously examines their performance throughout the entire period since their inception in 1973. We produce a composite index of stable value returns. We next conduct mean-variance analysis, Sharpe and Sortino ratio analysis, stochastic dominance analysis, and optimal multi-period portfolio composition analysis. Our evidence suggests that stable value funds dominate (on average) two major asset classes based on a historical analysis, and that they often occupy a significant position in optimized portfolios across a broad range of risk aversion levels. We discuss factors that contributed to stable value funds' past performance and whether they can continue to perform well into the future. We also discuss considerations regarding whether or not to include stable value as an element in target date funds within defined contribution pension plans.
Incremental Learning for Large Scale Churn Prediction
Sergio Hernandez, Diego Vergara, Felipe Jorquera
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: churn prediction; incremental principal component analysis; stochastic gradient descent
Modern companies accumulate a vast amount of customer data that can be used for creating a personalized experience. Analyzing this data is difficult and most business intelligence tools cannot cope with its volume. One example is churn prediction, where the cost of retaining existing customers is less than that of acquiring new ones. Several data mining and machine learning approaches can be used, but there is still little information about the algorithm settings to use when the dataset does not fit into a single computer's memory. Because of the difficulties of applying feature selection techniques at a large scale, Incremental Principal Component Analysis (IPCA) is proposed as a data preprocessing technique. Also, we present a new approach to large-scale churn prediction problems based on the mini-batch Stochastic Gradient Descent (SGD) algorithm. Compared to other techniques, the new method facilitates training with large data volumes using a small memory footprint while achieving good prediction results.
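A sketch of this kind of out-of-core pipeline using scikit-learn's IncrementalPCA and SGDClassifier with partial_fit on mini-batches; the synthetic data, feature counts and hyperparameters are placeholders, not the paper's setup.

```python
# Sketch of the kind of out-of-core pipeline the abstract describes:
# IncrementalPCA as a streaming preprocessing step and an SGD-trained
# linear classifier updated with partial_fit on mini-batches.  Data and
# hyperparameters below are synthetic/illustrative, not the paper's.
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_batches, batch_size, n_features, n_components = 20, 1_000, 50, 10

ipca = IncrementalPCA(n_components=n_components)
clf = SGDClassifier(loss="log_loss")     # "log" in older scikit-learn releases

def make_batch():
    X = rng.normal(size=(batch_size, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=batch_size) > 0)
    return X, y.astype(int)              # y = 1 marks a churned customer

for _ in range(n_batches):
    X, y = make_batch()
    ipca.partial_fit(X)                  # update the projection incrementally
    Xr = ipca.transform(X)
    clf.partial_fit(Xr, y, classes=np.array([0, 1]))

X_test, y_test = make_batch()
print("held-out accuracy:", clf.score(ipca.transform(X_test), y_test))
```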
Study of Variational Inference for Flexible Distributed Probabilistic Robotics
Malte Rørmose Damgaard, Rasmus Pedersen, Thomas Bak
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Distributed Robotics; Probabilistic Robotics; Variational Inference; Message-Passing Algorithm; Stochastic Variational Inference
By combining stochastic variational inference with message-passing algorithms, we show how to solve the highly complex problem of navigation and avoidance in distributed multi-robot systems in a computationally tractable manner, allowing online implementation. The proposed variational method lends itself to more flexible solutions than prior methodologies. Furthermore, the derived method is verified both through simulations with multiple mobile robots and through a real-world experiment with two mobile robots. In both cases the robots share the operating space and need to cross each other's paths multiple times without colliding.
On Consensus-based Distributed Blind Calibration of Sensor Networks
Milos S. Stankovic, Srdjan S. Stankovic, Karl Henrik Johansson, Marko Beko, Luis M. Camarinha-Matos
Subject: Engineering, Control & Systems Engineering Keywords: Blind Calibration, Macro Calibration, Distributed Estimation, Sensor Networks, Consensus, Synchronization, Stochastic Approximation
This review paper deals with recently proposed algorithms for distributed blind macro-calibration of sensor networks based on consensus (synchronization), not requiring any fusion center. The basic algorithm, performing the estimation of the local calibration parameters, is derived by starting from appropriate local criteria and developing the corresponding gradient descent scheme. It is shown that the estimated parameters of the calibration functions asymptotically converge, in the mean-square sense and with probability one (w.p.1), to such values that ensure consensus on calibrated sensor gains and calibrated sensor offsets. For the more realistic case in which additive measurement noise, communication dropouts and additive communication noise are present, two algorithm modifications are introduced: one using a simple compensation term, and a more robust one based on an instrumental variable. By utilizing stochastic approximation arguments it is shown that the modified algorithms also achieve asymptotic agreement for calibrated sensor gains and offsets, in the mean-square sense and w.p.1. Convergence rate is analyzed in terms of an upper bound of the mean-square error. It is also shown that the communications between nodes can be completely asynchronous, which is of substantial importance for real-world applications. Suggestions for the design of a priori adjustable weights are given. Finally, it is shown that, if there is a subset of (precalibrated) reference sensors with fixed calibration parameters, the calibrated sensor gains and offsets of the rest of the sensors do not achieve consensus - they converge to different points dictated by the reference sensors and the network characteristics. Wide applicability and efficacy of these algorithms are illustrated on several simulation examples.
Mechanisms of Protein Search for Targets on DNA: Theoretical Insights
Alexey A. Shvets, Maria P. Kochugaeva, Anatoly B. Kolomeisky
Subject: Chemistry, General & Theoretical Chemistry Keywords: protein-DNA interactions; facilitated diffusion; protein target search; discrete-state stochastic models
Protein-DNA interactions are critical for the successful functioning of all natural systems. The key role in these interactions is played by the processes of protein search for specific sites on DNA. Although these processes have been studied for many years, only recently have their microscopic aspects become clearer. In this work, we present a review of the current theoretical understanding of the molecular mechanisms of the protein target search. A comprehensive discrete-state stochastic method to explain the dynamics of the protein search phenomena is introduced and explained. Our theoretical approach utilizes a first-passage analysis and takes into account the most relevant physical-chemical processes. It is able to describe many fascinating features of the protein search, including unusually high effective association rates, high selectivity and specificity, and robustness in the presence of crowders and sequence heterogeneity.
Biased Stochastic Process of Randomly Moving Particles with Constant Average Velocities
Tao Guo
Subject: Physical Sciences, Mathematical Physics Keywords: Biased Stochastic Process; Randomly Moving Particles; Special Relativity Effect; Lorentz-like factor
In a randomly moving particle swarm with fixed kinetic energy, the particle speeds follow the Maxwell distribution. In a certain period, the moving directions of particles in a sub-swarm may aggregate, so that the movements of the particles have the character of a biased stochastic movement. For a biased particle swarm formed by randomly moving particles (with a uniform average speed c) that have a greater probability of moving in a certain direction and equal probabilities of moving in the other directions, there is a certain group velocity u in that direction, while the diffusion rate in the other directions is slower than that of unbiased moving particles with the same average speed c. Moreover, the degree of slowing follows a Lorentz-like factor. In this article, the characteristics of this kind of biased random process are deduced from a biased random walk using probability theory, and the expression of the corresponding Itô equation is provided. This article is expected to provide a reference for understanding the nature of special relativistic effects.
Integer Versus Fractional Order SEIR Deterministic and Stochastic Models of Measles
Md Rafiul Islam, Angela Peace, Daniel Medina, Tamer Oraby
Subject: Life Sciences, Other Keywords: fractional SEIR stochastic model; Caputo fractional order differential equations; measles; parameter estimation
In this paper, we compare the performance of systems of ordinary and (Caputo) fractional differential equations depicting the susceptible-exposed-infectious-recovered (SEIR) models of diseases. In order to understand the origins of both approaches as mean-field approximations of integer and fractional stochastic processes, we introduce the fractional differential equations as approximations of a type of fractional nonlinear birth-death process. Then, we examine the validity of the two approaches against empirical courses of epidemics; we fit both of them to case counts of three measles epidemics that occurred during the pre-vaccination era in three different locations. While FDEs appear more flexible in fitting empirical data, the ODEs offered better fits to two out of the three data sets. Important differences in transient dynamics between these modeling approaches are discussed.
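As a reference point, the integer-order SEIR system used as the baseline in such comparisons can be integrated directly with scipy; the Caputo fractional variant requires a dedicated fractional-order solver and is not shown. Parameter values below are illustrative, not the fitted measles values.

```python
# Integer-order SEIR baseline (the Caputo fractional variant needs a
# dedicated solver and is not shown).  Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

beta, sigma, gamma, N = 1.2, 1/8, 1/7, 1e5   # transmission, incubation, recovery

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

y0 = [N - 10, 0, 10, 0]
sol = solve_ivp(seir, (0, 200), y0, dense_output=True, max_step=1.0)
print("peak infectious count:", sol.y[2].max())
```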
Stochastic Computing Implementation of Chaotic Systems
Oscar Camps, Stavros G. Stavrinides, Rodrigo Picos
Subject: Engineering, Electrical & Electronic Engineering Keywords: Stochastic Logic; Chaotic Systems; Approximate Computing; Shimizu-Morioka System; Chaotic Circuits; FPGA Implementation
An exploding demand for processing capabilities, related to the emergence of the IoT, AI and big data, has led to the quest for increasingly efficient ways to expeditiously process the rapidly increasing amount of data. These ways include different approaches, such as improved devices capable of going further along the More Moore path, but also new devices and architectures capable of going Beyond Moore and getting More than Moore. Among the solutions being proposed, Stochastic Computing has positioned itself as a very reasonable alternative for low-power, low-area, low-speed, and adjustable-precision calculations; four key points beneficial to edge computing. On the other hand, chaotic circuits and systems appear to be an attractive solution for (low-power, green) secure data transmission in the frame of edge computing and IoT in general. Classical implementations of this class of circuits require intensive and precise calculations. This paper discusses the use of the SC framework for the implementation of nonlinear systems, showing that it can provide results comparable to those of classical integration, with much simpler hardware, paving the way for relevant applications.
Stochastic Thermal Load Dispatch Employing Opposition-based Greedy Heuristic Search
Manmohan Singh, JS Dhillon
Subject: Engineering, Electrical & Electronic Engineering Keywords: fuzzy theory; heuristic search; stochastic economic load dispatch; risk analysis
A thermal load dispatch problem minimizes several objectives, viz. operating cost and emission of gaseous pollutants, together while allocating the power demand among the committed generating units subject to physical and technological system constraints. A stochastic thermal load dispatch problem is undertaken that takes into consideration uncertainties, errors in data measurements, and the random nature of the load demand. Owing to the uncertain load demand, the variance due to the mismatch of power demand, termed risk, is considered as another conflicting objective to be minimized. Generally, multiobjective problems generate a set of non-inferior solutions that is supplied to a decision maker to select the best solution from. This paper proposes an opposition-based greedy heuristic search (OGHS) method to generate the set of non-inferior solutions. Opposition-based learning is applied to generate the initial population so as to select good candidates. Migration to maintain diversity in the set of feasible solutions is also based on opposition-based learning. The mutation strategy is implemented by perturbing the genes heuristically in parallel, and a better solution is sought for each member. Feasible solutions are achieved heuristically by modifying the generation schedules in such a manner that violations of operating generation limits are avoided. The OGHS method is simple to implement and provides global solutions derived from the randomness of the generated population without parameter tuning. The decision maker exploits fuzzy membership functions to make the final decision. The validity of the method has been demonstrated by analysing systems in different scenarios consisting of six generators and forty generators.
A Switched Capacitor Memristor Emulator using Stochastic Computing
Carola de Benito, Oscar Camps, Mohamad Moner Al Chawa, Stavros G. Stavrinides, Rodrigo Picos
Subject: Engineering, Electrical & Electronic Engineering Keywords: memristor; emulator; analog design; switched capacitor; stochastic computing; mixed signal
Due to the increased use of memristors and their many applications, the use of emulators has grown in parallel to avoid some of the difficulties presented by real devices, such as variability and reliability. In this paper, we present a memristive emulator designed using a Switched Capacitor (SC), that is, an analog component/block, and a control part or block implemented using stochastic computing (SCo) and therefore fully digital. Our design is thus a mixed-signal circuit. The memristor equations are implemented using stochastic computing to generate the control signals necessary to operate the controllable resistor implemented as a switched capacitor.
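The emulator circuit itself is not reproduced here; the sketch below only demonstrates the stochastic-computing arithmetic that such a digital control block typically relies on, with values in [0, 1] encoded as random bitstreams, multiplication realized by a bitwise AND and scaled addition by a multiplexer. Stream length and operand values are illustrative.

```python
# Sketch of basic stochastic-computing arithmetic: values in [0, 1] are
# encoded as random bitstreams, multiplication is a bitwise AND, and
# scaled addition is a multiplexer driven by a 0.5-probability select
# stream.  Stream length is illustrative.
import numpy as np

rng = np.random.default_rng(5)
L = 1 << 16                                  # bitstream length

def encode(p):                               # unipolar encoding of p in [0, 1]
    return rng.random(L) < p

def decode(bits):
    return bits.mean()

a, b = encode(0.6), encode(0.3)
product = a & b                              # AND gate -> multiplication
select = encode(0.5)
scaled_sum = np.where(select, a, b)          # MUX -> (a + b) / 2

print("0.6 * 0.3       ~", round(decode(product), 3))
print("(0.6 + 0.3) / 2 ~", round(decode(scaled_sum), 3))
```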
The Optimal Harvest Decisions for Natural and Artificial Maturation Mangoes Under Uncertain Demand, Yields, and Prices
Sheng-I Chen, Wei-Fu Chen
Subject: Engineering, Automotive Engineering Keywords: fresh agricultural products; harvest schedule; stochastic programming; sample-average approximation
This study focuses on the decisions of picking, inventory, ripening, delivering, and selling mangoes in a harvesting season. Demand, supply, and prices are uncertain, and their probability density functions are fitted based on actual trading data collected from the largest spot market in Taiwan. A stochastic programming model is formulated to minimize the expected cost under the considerations of labor, storage space, shelf life, and transportation restrictions. We implement the sample-average approximation to obtain a high-quality solution of the stochastic program. The analysis compares deterministic and stochastic solutions to assess the uncertain effect on the harvest decisions. Finally, the optimal harvest schedule of each mango type is suggested based on the stochastic program solution.
Data-driven Model Reduction for Stochastic Burgers Equations
Fei Lu
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: data-driven modeling; stochastic Burgers equation; closure model; CFL number
We present a class of efficient parametric closure models for 1D stochastic Burgers equations. Casting it as statistical learning of the flow map, we derive the parametric form by representing the unresolved high wavenumber Fourier modes as functionals of the resolved variables' trajectory. The reduced models are nonlinear autoregression (NAR) time series models, with coefficients estimated from data by least squares. The NAR models can accurately reproduce the energy spectrum, the invariant densities, and the autocorrelations. Taking advantage of the simplicity of the NAR models, we investigate maximal and optimal space-time reduction. Reduction in space dimension is unlimited, and NAR models with two Fourier modes can perform well. The NAR model's stability limits time reduction, with a maximal time step smaller than that of the K-mode Galerkin system. We report a potential criterion for optimal space-time reduction: the NAR models achieve minimal relative error in the energy spectrum at the time step where the K-mode Galerkin system's mean CFL number agrees with the full model's.
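The sketch below isolates the estimation device the abstract describes, fitting autoregression coefficients by ordinary least squares, on a toy scalar AR(3) series rather than on the resolved Fourier modes of the Burgers equation; order, coefficients and noise level are illustrative.

```python
# Sketch of the core estimation step only: fit an autoregressive model
# to a time series by ordinary least squares, the same device the NAR
# closure uses (here on a toy scalar series, not the Burgers modes).
import numpy as np

rng = np.random.default_rng(11)
n, p = 5_000, 3                                   # series length, AR order
true_coefs = np.array([0.5, -0.2, 0.1])

# generate a toy AR(3) series driven by small Gaussian noise
x = np.zeros(n)
for t in range(p, n):
    x[t] = sum(true_coefs[k] * x[t - 1 - k] for k in range(p)) + 0.1 * rng.normal()

# regression matrix of lagged values; coefficients by ordinary least squares
X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
y = x[p:]
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated AR coefficients:", np.round(coefs, 3))
```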
Mean-Field Type Games between Two Players Driven by Backward Stochastic Differential Equations
Alexander Aurell
Subject: Mathematics & Computer Science, Numerical Analysis & Optimization Keywords: mean-field type game; non-zero-sum differential game; cooperative game; backward stochastic differential equations; linear-quadratic stochastic control; social cost; price of anarchy
In this paper, mean-field type games between two players with backward stochastic dynamics are defined and studied. They make up a class of non-zero-sum differential games where the players' state dynamics solve backward stochastic differential equations (BSDEs) that depend on the marginal distributions of player states. Players try to minimize their individual cost functionals, also depending on the marginal state distributions. Under some regularity conditions, we derive necessary and sufficient conditions for existence of Nash equilibria. Player behavior is illustrated by numerical examples, and is compared to a centrally planned solution where the social cost, the sum of player costs, is minimized. The inefficiency of a Nash equilibrium, compared to socially optimal behavior, is quantified by the so-called price of anarchy. Numerical simulations of the price of anarchy indicate how the improvement in social cost achievable by a central planner depends on problem parameters.
Associating Stochastic Modelling of Flow Sequences With Climatic Trends
Sandhya Patidar, Eleanor Tanner, Bankaru-Swamy Soundharajan, Bhaskar Sen Gupta
Subject: Engineering, Automotive Engineering Keywords: Stochastic modelling; Climate change; Streamflow; El Nino/Southern Oscillation (ENSO), Extreme events modelling
Water is essential to all life-forms and to various ecological, geological, hydrological, and climatic processes/activities. With a changing climate, associated El Nino/Southern Oscillation (ENSO) events appear to stimulate highly uncertain patterns of precipitation (P) and evapotranspiration (EV) processes across the globe. Changes in P and EV patterns are highly sensitive to temperature variation and thus also affect natural streamflow processes. This paper presents a novel suite of stochastic modelling approaches for associating streamflow sequences with climatic trends. The present work builds upon a stochastic modelling framework, HMM_GP, that integrates a Hidden Markov Model with a Generalised Pareto distribution for simulating synthetic flow sequences. The GP distribution within the HMM_GP model aims to improve the model's efficiency in simulating extreme events. This paper further investigates the potential of the Generalised Extreme Value distribution (EVD) coupled with an HMM model within a regression-based scheme for associating the impacts of precipitation and evapotranspiration processes on streamflow. The statistical characteristics of the new modelling scheme have been thoroughly assessed for their suitability to generate and predict synthetic river flow sequences under a set of future climatic projections. The new modelling scheme can be adapted for a range of applications in the areas of hydrology, agriculture and climate change.
Formulation of a Rational Option Pricing Model Using Artificial Neural Networks
Kaustubh Yadav, Anubhuti Yadav
Subject: Mathematics & Computer Science, Artificial Intelligence & Robotics Keywords: Black Scholes Equation; Heston Model Calibration; Option Pricing; Stochastic Processes; Artificial Neural Networks
This paper investigates option pricing modeling using Artificial Neural Networks to price Apple (AAPL) European call options. Our model is based on the premise that Artificial Neural Networks can be used as functional approximators and, to some extent, as an alternative to numerical methods, for a faster and more efficient solution. This paper provides a neural network solution for two financial models, the Black-Scholes-Merton model and the calibrated Heston stochastic volatility model, and we evaluate our predictions against the existing solutions: the analytic solution of the Black-Scholes equation, the COS model for Heston's stochastic volatility model, and the standard Heston quasi-analytic formula. The aim of this study is to find a viable, time-efficient alternative to existing quantitative models for option pricing.
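As a point of reference, the closed-form Black-Scholes-Merton call price that such a network is trained to approximate can be evaluated directly; the Heston calibration and COS benchmark are not reproduced. The sketch below also builds a small synthetic table of (inputs, price) pairs of the kind used as training data; all parameter ranges are illustrative, not calibrated AAPL values.

```python
# Sketch of the closed-form Black-Scholes-Merton call price and a small
# synthetic training table of (inputs, price) pairs for a network.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """European call price under the Black-Scholes-Merton model."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(2)
S = rng.uniform(80, 120, size=1_000)
K = rng.uniform(80, 120, size=1_000)
T = rng.uniform(0.05, 2.0, size=1_000)
prices = bs_call(S, K, T, r=0.02, sigma=0.25)     # illustrative r and sigma

print("at-the-money example:", round(float(bs_call(100.0, 100.0, 1.0, 0.02, 0.25)), 4))
print("training set size:", prices.shape[0])
```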
A Performance Study of Massive MIMO Heterogeneous Networks with Ricean/Rayleigh Fading
James Kweku Nkrumah Nyarko, Christian Ango Mbom
Subject: Engineering, Electrical & Electronic Engineering Keywords: Massive MIMO; Ricean Fading; Beamforming; Stochastic Process; Heterogeneous Network; Interference Coordination; Spectral Efficiency
The unprecedented demand for services and applications, coupled with large network architectures, calls for a radical improvement in evolving wireless networks. Massive multiple-input multiple-output (massive MIMO) systems, small cell networks (SCN) and heterogeneous networks (HetNet) are envisioned to meet the new quality of service (QoS) objectives of evolving networks. In this paper, we investigate massive MIMO with HetNet, where the intended macro BS signal follows Ricean fading and interfering femto BS signals follow Rayleigh fading. Under the Ricean assumption, a strong line-of-sight (LOS) channel exists in the coverage area when the high-power macro BS is mounted at a high altitude. By exploiting matrix and stochastic geometric tools, we evaluate the signal-to-interference ratio (SIR) and obtain the QoS objectives: coverage and outage probabilities, and area spectral efficiency (ASE). Further, we investigate the role of the multiple-antenna system in improving the SIR; this involves massive MIMO beamforming coordination of the macro BSs through power control with max-min optimization for users at the cell edge. Numerical results show that the coverage and outage performance converge for different user locations, pathloss values and Ricean factors. A monotonic increase in the Ricean factor improves the SIR of a user located within the coverage region; that is, a user is in outage at a distance where the Ricean factor is very small. Optimal macro BS antennas and Ricean factors that achieve the ASE performance guarantee rate-fairness between the cooperating macrocells and avoid strong Ricean channel correlation. The performance gain depends on the Ricean factor and user location, but is independent of cell size.
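A rough Monte Carlo sketch of the kind of SIR coverage analysis described above (illustrative only; the path-loss exponent, densities and distances are assumed values, and no beamforming coordination is modelled): the desired link experiences Ricean fading while PPP-distributed interferers experience Rayleigh fading.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, K, r0 = 3.5, 5.0, 100.0     # path-loss exponent, Ricean K-factor, serving distance (m)
lam, R = 1e-5, 2000.0              # interferer density (per m^2) and simulation radius (m)
theta_db = 0.0                     # SIR threshold in dB
trials, cover = 20000, 0

for _ in range(trials):
    # Desired signal: Ricean fading power at distance r0.
    h = (np.sqrt(K / (K + 1))
         + np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal() + 1j * rng.standard_normal()))
    S = np.abs(h) ** 2 * r0 ** (-alpha)

    # Interferers: homogeneous PPP in a disc of radius R, Rayleigh fading.
    n = rng.poisson(lam * np.pi * R ** 2)
    r = R * np.sqrt(rng.random(n))              # radii of uniformly scattered points in the disc
    g = rng.exponential(1.0, n)                 # Rayleigh fading -> exponentially distributed power
    I = np.sum(g * r ** (-alpha)) if n > 0 else 1e-30

    cover += (S / I) > 10 ** (theta_db / 10)

print(f"estimated coverage probability: {cover / trials:.3f}")
```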
Total Factor Energy Efficiency of China's Industrial Sector: A Stochastic Frontier Analysis
Xiaobo Shen, Boqiang Lin
Subject: Social Sciences, Economics Keywords: malmquist productivity index; total factor energy efficiency; stochastic input distance function; China's industry
Based on stochastic frontier analysis and a translog input distance function, this paper examines the total factor energy efficiency of China's industry using input-output data of 30 sub-industries from 2002 to 2014, and decomposes the changes in estimated total factor energy efficiency into the effects of technical change, technical efficiency change, scale efficiency change and an input-mix effect. The results show that during this period total factor energy efficiency in China's industry grew at an annual rate of 3.63%; technical change, technical efficiency change and the input-mix effect contributed positively to the change in total factor energy efficiency, while scale efficiency change contributed negatively to it.
The Role of Inflation-Indexed Bond in Optimal Management of Defined Contribution Pension Plan During the Decumulation Phase
Xiaoyi Zhang, Junyi Guo
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: inflation-indexed bond; DC pension plan; stochastic optimal control; dynamic programming approach; HJB equation.
In this paper we investigate the optimal investment strategy for a risk-averse defined contribution (DC) pension plan during the decumulation phase that pays close attention to inflation risk. The plan aims to maximize the expected constant relative risk aversion (CRRA) utility of terminal wealth by investing the wealth in a financial market consisting of an inflation-indexed bond, an ordinary zero-coupon bond and a risk-free asset. We derive the optimal investment strategy in closed form using the dynamic programming approach, by solving the corresponding Hamilton-Jacobi-Bellman (HJB) equation. Our theoretical and numerical results reveal that, under some rational assumptions, an inflation-indexed bond does have a significant advantage in hedging inflation risk.
Stochastic Resonance and Safe Basin of Single-Walled Carbon Nanotubes with Strongly Nonlinear Stiffness under Random Magnetic Field
Jia Xu, Chao Li, Yiran Li, CW Lim, Zhiwen Zhu
Subject: Materials Science, Nanotechnology Keywords: random magnetic field; safe basin; single-walled carbon nanotubes; stochastic resonance; strong nonlinearity
In this paper, a nonlinear model of single-walled carbon nanotubes is developed, and the strongly nonlinear dynamic characteristics of such carbon nanotubes subjected to a random magnetic field are studied. The nonlocal effect of the microstructure is considered based on the theory of nonlocal elasticity. The natural frequency of the strongly nonlinear dynamic system is obtained by the energy function method, and the drift coefficient and the diffusion coefficient are verified. The stationary probability density function of the system's dynamic response is given, and the fractal boundary of the safe basin is provided. Theoretical analysis and numerical simulation show that stochastic resonance occurs as the random magnetic field intensity varies. The boundary of the safe basin has fractal characteristics, and the area of the safe basin decreases as the intensity of the magnetic field permeability increases.
Stochastic Cosmology and the Vacuum Energy Parameter
Ervin Goldfain
Subject: Physical Sciences, General & Theoretical Physics Keywords: FRW model; accelerated expansion; Vacuum Energy parameter; stochastic cosmology; Langevin equation
The Vacuum Energy Parameter (VEP) of standard cosmology denotes the fraction of the critical density attributed to the accelerated expansion of the Universe. Astrophysical evidence sets the numerical range of the VEP at 0.692 ± 0.012, yet the root cause of this value is currently unknown. Drawing from the stochastic interpretation of early-Universe cosmology, we develop here a derivation of the VEP based on classical diffusion theory and the Langevin equation. Predictions are shown to be in reasonable agreement with observations.
Stochastic Capsule Endoscopy Image Enhancement
Ahmed Mohammed, Ivar Farup, Marius Pedersen, Øistein Hovde, Sule Yildirim
Subject: Mathematics & Computer Science, Other Keywords: capsule video endoscopy; stochastic sampling; random walks; color gradient; image decomposition
Capsule endoscopy, which uses a wireless camera to take images of the digestive tract, is emerging as an alternative to traditional colonoscopy. The diagnostic value of these images depends on the quality of the revealed underlying tissue surfaces. In this paper, we consider the problem of enhancing the visibility of detail and of shadowed tissue surfaces in capsule endoscopy images. Using concentric circles at each pixel for random walks, combined with stochastic sampling, the proposed method enhances the details of vessels and tissue surfaces. The framework decomposes the image into two detail layers that contain shadowed tissue surfaces and detail features. The target pixel value is recalculated for the smooth layer using the similarity of the target pixel to neighboring pixels, weighted against the total gradient variation and intensity differences. In order to evaluate the diagnostic image quality of the proposed method, we used clinical subjective evaluation with rank ordering on a selected KID image database and compared against state-of-the-art enhancement methods. The results showed that the proposed method performs better in terms of diagnostic image quality, objective contrast metrics and the structural similarity index.
Tumbling-Snake Model for Polymeric Liquids Subjected to Biaxial Elongational Flows with a Focus on Planar Elongation
Pavlos S. Stephanou, Martin Kröger
Subject: Engineering, Other Keywords: polymer melt; stochastic differential equation; link tension coefficient; entanglements; biaxial flow
We have recently solved the tumbling-snake model for concentrated polymer solutions and entangled melts in the presence of both steady-state and transient shear and uniaxial elongational flows, supplemented by a variable link tension coefficient. Here, we provide the transient and stationary solutions of the tumbling-snake model under biaxial elongation, both analytically, for small and large elongation rates, and via Brownian dynamics simulations for the case of planar elongational flow, over a wide range of rates, times, and model parameters. We show that both the steady-state and transient first planar viscosity predictions are similar to their uniaxial counterparts, in accord with recent experimental data. The second planar viscosity seems to behave in all respects similarly to the shear viscosity, if the shear rate is replaced by the elongation rate.
Rules of Stochastics of Genomic DNAs, Biological Dualism "Stochastics-Determinism", and Tensor-Unitary Transformations
Sergey Petoukhov
Subject: Life Sciences, Genetics Keywords: genomic DNAs; stochastics; tensor-unitary transformation; quantum informatics; fractal; projection operators; gestalt phenomena; stochastic determinism
The article is devoted to algebraic modeling of universal rules of the stochastic organization of genomic DNA of higher and lower organisms, previously published by the author. The proposed algebraic apparatus uses formalisms of quantum mechanics and quantum informatics and is based on so-called tensor-unitary transformations of vectors, which generate families of interrelated stochastic-deterministic vectors of increased dimensions. The features of the vectors' interconnections in these families model the stochastic-deterministic properties of the named phenomenological rules. New approaches are presented to the modeling of developing multi-parameter biosystems whose number of parameters increases in the course of step-by-step development. In the light of the presented materials, issues of fractal-like organization in genetically inherited biosystems are considered. The development of the theory of stochastic determinism as an antipode of deterministic chaos is discussed.
To Freeze or Not to Freeze? Epidemic Prevention and Control in the DSGE Model Using an Agent-Based Epidemic Component
Jagoda Kaszowska, Przemysław Włodarczyk
Subject: Social Sciences, Accounting Keywords: COVID-19; agent-based modelling; dynamic stochastic general equilibrium models; scenario analyses
The ongoing COVID-19 pandemic has raised numerous questions concerning the shape and range of state interventions whose goals are to reduce the number of infections and deaths. Lockdowns, which have become the most popular response worldwide, are assessed as an outdated and economically inefficient way to fight the disease; however, in the absence of efficient cures and vaccines, there is a lack of viable alternatives. In this paper we assess the economic consequences of the epidemic prevention and control schemes that were introduced in response to the COVID-19 pandemic. The analyses report the results of epidemic simulations obtained with agent-based modelling methods under the different response schemes, and their use in providing conditional forecasts of standard economic variables. The forecasts were obtained using a DSGE model with a labour market component.
StEMORS: A Stochastic Eco-Hydrological Model for Optimal Reservoir Sizing
Aristotelis Koskinas, Pinelopi Tsira, Aristoteles Tegos
Subject: Engineering, Civil Engineering Keywords: environmental flow assessment; water balance model; stochastic analysis; reliability; dynamic flow regime
Dam design and operation cause strong environmental alteration, and a long-term debate is therefore ongoing over the scale of such projects. At the same time, the concept of Environmental Flow Assessment (EFA) is a crucial element of modified ecosystems featuring large infrastructure, such as dams and reservoirs, for mitigating potential environmental degradation while they operate. Nowadays, integrated scientific frameworks are required to quantify the risks caused by large infrastructure. Through the use of stochastic analysis, it is possible to quantify these uncertainties and present a solution that incorporates long-term persistence and environmental sustainability into a balanced reservoir simulation model. In this work, an attempt is made to determine a benchmark reservoir size incorporating hydrological and ecological criteria through stochastic analysis. The primary goal is to ensure the best possible conditions for the ecosystem, and secondarily to allow a steady supply of water for other uses. Using a synthetic time series based on historical inputs, it is possible to determine and preserve essential statistical characteristics of a river's streamflow, and to use these to detect the optimal reservoir capacity that maximizes environmental and local water-demand reliability.
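A toy sketch of the capacity-versus-reliability reasoning above (illustrative only; the AR(1) lognormal inflow generator, the fixed environmental flow and the demand level are assumptions, not the StEMORS model): simulate synthetic inflows, run a reservoir mass balance, and report the smallest capacity meeting a reliability target.

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_inflows(n=1200, mu=2.0, sigma=0.5, rho=0.7):
    """AR(1) process in log-space as a stand-in for a stochastic streamflow generator."""
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return np.exp(mu + sigma * z)          # monthly inflow volumes

def reliability(inflows, capacity, demand, env_flow):
    """Fraction of months in which both environmental flow and demand are fully supplied."""
    storage, ok = capacity / 2, 0
    for q in inflows:
        available = storage + q
        release = min(available, env_flow + demand)
        ok += release >= env_flow + demand
        storage = min(capacity, available - release)
    return ok / len(inflows)

q = synthetic_inflows()
demand, env_flow = 0.5 * q.mean(), 0.2 * q.mean()
for cap in np.linspace(1, 20, 20) * q.mean():
    r = reliability(q, cap, demand, env_flow)
    if r >= 0.95:
        print(f"smallest capacity meeting 95% reliability: {cap:.1f} (reliability {r:.3f})")
        break
```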
Optimal Noise Level for Imperceptible Vibrotactile Stimulation during a Force Stability Task
Courtney A. Haynes, Matthew S. Tenan, Antony D. Passaro, Andrew J. Tweedell
Subject: Behavioral Sciences, Behavioral Neuroscience Keywords: imperceptible; stimulation; vibrotactile; Gaussian noise; stochastic resonance; somatosensory system; sub-sensory threshold
Imperceptible vibratory noise stimulation has been shown to be an effective means of improving stability for both whole-body postural control and simple motor control tasks. While the physiological mechanism affording this improvement is uncertain, it is suspected that sensory noise stimulation may elicit a stochastic resonance-like effect within the somatosensory system. A stochastic resonance effect describes the phenomenon in which noise added to a non-linear system improves signal detection rather than degrading it. One hallmark of stochastic resonance is the existence of an optimal noise level which elicits the best system performance. There is disagreement in the literature regarding the presence of an optimal stimulation level for motor stability in humans. The goals of this study were: 1) to determine the optimal stimulation level as a function of an individual's sub-sensory threshold level, and 2) to determine whether performance of a force stability task was significantly better when subjects received stimulation at this identified optimal level compared to other sub-sensory threshold stimulation levels. Eighteen (18) participants completed an isometric finger flexion task with visual feedback while receiving noise stimulation scaled to varying percentages of their individual sub-sensory threshold level. Performance for this force stabilization task was quantified as the root-mean-square (RMS) error between the target force and the actual generated force values. Despite controlling all other signal properties and varying only amplitude, optimal noise stimulation values still varied widely across participants (10-100% of the sub-sensory threshold level). Statistical modeling revealed a significant improvement in task performance with optimal noise stimulation compared to other sub-sensory stimulation levels (p ≤ 0.019), with estimated marginal mean differences in force errors ranging from 0.13 to 0.23 N. Moderate significant Spearman correlations (rs = 0.49 and rs = 0.56, respectively) were found between finger flexion maximal voluntary contraction (MVC) and sub-sensory threshold level, and between MVC and optimal stimulation level. A strong, significant Spearman correlation (rs = 0.65) was observed between sub-sensory threshold level and optimal stimulation level. Although these correlations do not provide a means to predict the optimal stimulation level as a function of these other measures, the optimal stimulation level appears to increase with sub-sensory threshold and MVC.
From Stochastic Geometry to Structural Access Point Deployment for Wireless Networks: A Lloyd Algorithm Approach
Ali Riza Ekti
Subject: Engineering, Electrical & Electronic Engineering Keywords: Expectation-maximization algorithm; Lloyd's algorithm; stochastic geometry; Poisson point process; Voronoi diagram
In a wireless network, the locations of base stations (BSs)/access points (APs)/sensor nodes can be modeled by stochastic processes, e.g., a Poisson point process (PPP), or by a deterministic pattern planned ahead by providers. While deterministic deployment does not in general provide tractable interference analysis, a PPP yields tractable analysis for interference. However, a PPP allows APs to be deployed very close to each other and gives pessimistic results compared to field measurements. In this study, in order to address this issue, Lloyd's algorithm, which functions as a bridge between random and structured AP deployments, is investigated for analyzing the coverage probability in a network. The link distance distribution is modeled as a mixture of Weibull distributions and its parameters are obtained using the expectation-maximization (EM) algorithm for each iteration of Lloyd's algorithm. The link distance distribution is further utilized to calculate the coverage probability approximately by exploiting the tractability of the PPP.
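A small sketch of the random-to-structured transition idea (illustrative only; the window size, AP density and iteration count are assumptions, and the Weibull-mixture EM fit is not reproduced here): start from PPP-distributed AP locations and apply Lloyd iterations, moving each AP to the centroid of the user points it serves.

```python
import numpy as np

rng = np.random.default_rng(4)
L = 1000.0                                   # square region side (m)
users = rng.uniform(0, L, size=(5000, 2))    # dense user sample stands in for area integration

# Initial AP locations: a realization of a homogeneous PPP.
n_ap = rng.poisson(30)
aps = rng.uniform(0, L, size=(n_ap, 2))

for it in range(20):
    # Assignment step: each user attaches to the nearest AP (Voronoi cells).
    d2 = ((users[:, None, :] - aps[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    # Update step: move each AP to the centroid of its cell (Lloyd's algorithm).
    for k in range(n_ap):
        cell = users[nearest == k]
        if len(cell) > 0:
            aps[k] = cell.mean(axis=0)

# Link distances after the structured re-arrangement of the APs.
d2 = ((users[:, None, :] - aps[None, :, :]) ** 2).sum(axis=2)
link_dist = np.sqrt(d2.min(axis=1))
print(f"mean user-AP link distance after Lloyd iterations: {link_dist.mean():.1f} m")
```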
Accelerated Benders' Decomposition for Integrated Forward/Reverse Logistics Network Design under Uncertainty
Vahab Vahdat, Mohammad Ali Vahdatzad
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: integrated forward/reverse logistics network; accelerated benders' decomposition; two-stage stochastic programming
In this paper, a two-stage stochastic programming model is proposed to design a multi-period, multi-stage, single-commodity integrated forward/reverse logistics network under uncertainty. The problem involves both strategic and tactical decision levels. The first stage deals with strategic decisions, namely the number, capacity, and location of forward and reverse facilities. At the second stage, tactical decisions such as the base stock level of the inventory policy are determined. The generic model consists of suppliers, manufacturers, and distribution centers in the forward logistics network, and collection centers, remanufacturers, redistribution centers, and disposal centers in the reverse logistics network. The strength of the proposed model is its applicability to various industries. The problem is formulated as a mixed-integer linear program and is solved using a Benders' decomposition (BD) approach. In order to accelerate the Benders' decomposition, a number of valid inequalities are added to the master problem. The proposed accelerated BD is evaluated on small-, medium-, and large-sized test problems. Numerical results reveal that the proposed solution algorithm accelerates the convergence of the lower and upper bounds of BD and reaches an acceptable optimality gap in a convenient CPU time.
Comparing Markov Chain Samplers for Molecular Simulation
Robert D. Skeel, Youhan Fang
Subject: Physical Sciences, Atomic & Molecular Physics Keywords: Markov chain Monte Carlo; stochastic dynamics integrators; decorrelation time; integrated autocorrelation time
Markov chain Monte Carlo sampling propagators, including numerical integrators for stochastic dynamics, are central to the calculation of thermodynamic quantities and the determination of structure for molecular systems. Efficiency is paramount, and, to a great extent, it is determined by the integrated autocorrelation time (IAcT). This quantity varies depending on the observable that is being estimated. It is suggested that the maximum of the IAcT over all observables is the relevant metric. Reviewed here is a method for estimating this quantity. For reversible propagators (those that satisfy detailed balance), the maximum IAcT is determined by the spectral gap in the forward transfer operator, but for irreversible propagators, the maximum IAcT can be far less than or greater than what might be inferred from the spectral gap. This is consistent with recent theoretical results (not to mention past practical experience) suggesting that irreversible propagators generally perform better, if not much better, than reversible ones. Typical irreversible propagators involve a parameter controlling the mix of ballistic and diffusive movement. To gain insight into the effect of the damping parameter for Langevin dynamics, its optimal value is obtained here for a multidimensional quadratic potential energy function.
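A small numerical sketch of the integrated autocorrelation time (IAcT) for a scalar observable (illustrative only; the AR(1) test chain and the simple self-consistent window rule are assumptions, not the estimator reviewed in the preprint):

```python
import numpy as np

rng = np.random.default_rng(5)

# Test series: AR(1) chain with known IAcT = (1 + phi) / (1 - phi) = 19 for phi = 0.9.
phi, n = 0.9, 100000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def integrated_act(y, c=6.0):
    """IAcT = 1 + 2 * sum of normalized autocorrelations, truncated with a self-consistent window."""
    y = y - y.mean()
    f = np.fft.rfft(y, 2 * len(y))
    acov = np.fft.irfft(f * np.conj(f))[:len(y)] / len(y)   # FFT-based autocovariance
    rho = acov / acov[0]
    tau = 1.0
    for m in range(1, len(rho)):
        tau += 2.0 * rho[m]
        if m >= c * tau:        # stop once the lag window exceeds c times the running estimate
            break
    return tau

print(f"estimated IAcT: {integrated_act(x):.1f}   exact: {(1 + phi) / (1 - phi):.1f}")
```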
Personalized Biometrics of Physical Pain Agree with Its Psychological Perception by Participants with Sensory Over Responsivity
Jihye Ryu, Tami Bar-Shalita, Yelena Granovsky, Irit Weissman-Fogel, Elizabeth B. Torres
Subject: Medicine & Pharmacology, Allergology Keywords: EEG; pain biometrics; stochastic analyses; micro-movements spikes; sensory over responsivity; standardized scale; personalized pain
The study of pain requires a balance between subjective methods that rely on self-reports and complementary objective biometrics that ascertain physical signals associated with subjective accounts. There are at present no objective scales that enable the personalized assessment of pain, as most work involving electrophysiology relies on summary statistics from a priori theoretical population assumptions. Along these lines, recent work has provided evidence of differences in pain sensations between participants with Sensory Over Responsivity (SOR) and controls. While these analyses are useful to understand pain across groups, there remains a need to quantify individual differences more precisely in a personalized manner. Here we offer new methods to characterize pain using the moment-by-moment standardized fluctuations in EEG brain activity that centrally reflect the person's experience of temperature-based stimulation at the periphery. This type of gross data is often disregarded as noise, yet here we show its utility to characterize the lingering sensation of discomfort rising to the level of pain, individually, for each participant. We show fundamental differences between the SOR group and controls and provide an objective account of pain congruent with the subjective self-reported data. This offers the potential to build a standardized scale useful to profile pain levels in a personalized manner across the general population.
A Novel T-S Fuzzy Robust Control for Part Transportation of Aircraft Carrier Considering Transportation Time and Stochastic Demand
Tiantian Luan, Mingxiao Sun, Zhanyong Hu, Qiang Fu, Hao Wang
Subject: Engineering, Marine Engineering Keywords: part transportation; Takagi-Sugeno fuzzy control; carrier aircraft; transportation time; stochastic demand; cross rule group
Part transportation efficiency is a main factor in the aircraft sortie generation rate. Part transportation is used to move spare parts from the base to the carrier. The transportation strategy depends on both the demand on the carrier and the inventory at the transportation base. The transportation time and stochastic demand induce fluctuations in cost and inventory. Thus, a Takagi-Sugeno fuzzy system of dynamic part transportation is established considering transportation time and stochastic demand, and a novel Takagi-Sugeno fuzzy robust control is designed for dynamic part transportation, which keeps the transportation cost and part inventory stable. First, a fuzzy model with stochastic demand and transportation time is proposed. Then, a novel robust control with cross rule groups is constructed according to the production and transportation strategies, which reduces the fluctuations induced by strategy switching. Moreover, robust stability is guaranteed and parts can be supplied in time at low cost. Finally, simulations illustrate the usefulness and responsiveness of the novel Takagi-Sugeno fuzzy robust control. The proposed method can also be useful in other transportation electrification systems with delay time and uncertainty.
A Linear Process Approach to Short-term Trading Using the VIX Index as a Sentiment Indicator
Yawo Mamoua Kobara, Cemre Pehlivanoglu, Okechukwu Joshua Okigbo
Subject: Social Sciences, Accounting Keywords: Short-term trading; mean reversion; VIX; SPY; linear stochastic process; MACD; Bollinger Bands
One of the key challenges of stock trading is that stock prices follow a random walk, a special case of a stochastic process, and are highly sensitive to new information; a random walk is difficult to predict in the short term. Many linear process models used to predict financial time series are structural models that provide an important decision boundary, albeit without adequately considering the correlation or causal effect of market sentiment on stock prices. This research seeks to increase the predictive capability of linear process models using the SPDR S&P 500 ETF (SPY) and the CBOE Volatility Index (VIX) as a proxy for market sentiment. Three econometric models are considered to forecast SPY prices: (i) Auto-Regressive Integrated Moving Average (ARIMA), (ii) Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH), and (iii) Vector Autoregression (VAR). These models are integrated into two technical indicators, Bollinger Bands and Moving Average Convergence Divergence (MACD), focusing on forecast performance. The profitability of various algorithmic trading strategies is compared based on a combination of these two indicators. This research finds that linear process models that incorporate the VIX Index do not improve the performance of the algorithmic trading strategies.
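A compact sketch of adding a sentiment proxy as an exogenous regressor to a linear process model (illustrative only; the synthetic price and VIX-like series and the (1,1,1) order are assumptions, not the authors' exact specification):

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
n = 500

# Synthetic stand-ins: a mean-reverting "VIX-like" sentiment series and a random-walk "price".
vix = np.empty(n)
vix[0] = 20.0
for t in range(1, n):
    vix[t] = 20.0 + 0.95 * (vix[t - 1] - 20.0) + rng.standard_normal()
price = 100 + np.cumsum(rng.standard_normal(n) - 0.02 * (vix - vix.mean()))

train, test = slice(0, 450), slice(450, 500)

# ARIMA(1,1,1) with and without the exogenous sentiment proxy.
m_exog = SARIMAX(price[train], exog=vix[train].reshape(-1, 1), order=(1, 1, 1)).fit(disp=False)
m_base = SARIMAX(price[train], order=(1, 1, 1)).fit(disp=False)

f_exog = m_exog.forecast(steps=50, exog=vix[test].reshape(-1, 1))
f_base = m_base.forecast(steps=50)

for name, f in [("with VIX", f_exog), ("without VIX", f_base)]:
    rmse = np.sqrt(np.mean((f - price[test]) ** 2))
    print(f"{name:12s} RMSE over 50-step horizon: {rmse:.2f}")
```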
Time Optimal Control for Semilinear Stochastic Functional Differential Equations With Delays
Yong Han Kang, Jin-Mun Jeong
Subject: Mathematics & Computer Science, Algebra & Number Theory Keywords: stochastic differential equation; retarded control system; time optimal control; admissible set; analytic semigroup.
The purpose of this paper is to find the time optimal control to a target set for semilinear stochastic functional differential equations involving time delays or memories, under general conditions on the target set and the nonlinear terms, even though the equations contain unbounded principal operators. Our approach is to construct a fundamental solution for the corresponding linear systems and to establish a variation of constants formula for solutions of the given stochastic equations. The existence of time optimal controls for a one-point target set governed by the given semilinear stochastic equation is also established.
Supplier Replacement Model in a One-Level Assembly System under Lead-Time Uncertainty
Hasan Murat Afsar, Oussama Ben-Ammar, Alexandre Dolgui, Faicel Hnaien
Subject: Engineering, Industrial & Manufacturing Engineering Keywords: assembly systems; replenishment; stochastic lead times; holding cost; backlogging cost; purchase cost; optimization
Supplier selection/replacement strategies and optimized purchasing policies play a key role in efficient supply chain management in today's dynamic market. Here we study supplier replacement in a one-level assembly system (OLAS) producing one type of finished product. Assembling the product requires several types of components; assembly is interrupted if any single component is missing, and incoming units accumulate until the missing component arrives. The assembly process can be interrupted by various sources of uncertainty, including delays in component deliveries, so there is a non-negligible risk that the assembly process may stop at any moment. This brings inventory-related costs, which should be minimized. We consider discrete lead-time distributions to mimic industrial reality. We present a model that takes into account not only the optimal assignment of component order release dates but also the replacement of a critical supplier. For a given unit, we model several alternative suppliers with alternative pricing and lead-time uncertainties, and we evaluate the impact on the total assembly system. For the more general case where several suppliers may be replaced, we propose a genetic algorithm.
Fluctuation Theorem of Information Exchange within an Ensemble of Paths Conditioned at a Coupled-Microstates
Lee Jinwoo
Subject: Physical Sciences, Condensed Matter Physics Keywords: local non-equilibrium thermodynamics; fluctuation theorem; mutual information; entropy production; local mutual information; thermodynamics of information; stochastic thermodynamics
Fluctuation theorems are a class of equalities, each of which links a thermodynamic path functional, such as heat or work, to a state function, such as entropy or free energy. Jinwoo and Tanaka [L. Jinwoo and H. Tanaka, Sci. Rep. 5, 7832 (2015)] have shown that each microstate of a fluctuating system can be regarded as an ensemble (or a 'macrostate') if we consider trajectories that reach each microstate. They have revealed that local forms of entropy and free energy are true thermodynamic potentials of each microstate, encoding heat and work, respectively, within an ensemble of paths that reach each state. Here we show that information, characterized by the local form of mutual information between two subsystems in a heat bath, is also a true thermodynamic potential of each coupled state and encodes the entropy production of the subsystems and heat bath during a coupling process. To this end, we extend the fluctuation theorem of information exchange [T. Sagawa and M. Ueda, Phys. Rev. Lett. 109, 180602 (2012)] by showing that the fluctuation theorem holds even within an ensemble of paths that reach a coupled state during dynamic co-evolution of the two subsystems.
Comparison of Stochastic and Machine Learning Methods for Multi-Step Ahead Forecasting of Hydrological Processes
Georgia Papacharalampous, Hristos Tyralis, Demetris Koutsoyiannis
Subject: Engineering, Civil Engineering Keywords: multi-step ahead forecasting; neural networks; random forests; stochastic vs machine learning models; support vector machines; time series
Research within the field of hydrology often focuses on comparing stochastic to machine learning (ML) forecasting methods. The comparisons performed are all based on case studies, while an extensive study aiming to provide generalized results on the subject is missing. Herein, we compare 11 stochastic and 9 ML methods regarding their multi-step ahead forecasting properties by conducting 12 large-scale computational experiments based on simulations. Each of these experiments uses 2 000 time series generated by linear stationary stochastic processes. We conduct each simulation experiment twice; the first time using time series of 100 values and the second time using time series of 300 values. Additionally, we conduct a real-world experiment using 405 mean annual river discharge time series of 100 values. We quantify the performance of the methods using 18 metrics. The results indicate that stochastic and ML methods perform equally well.
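A minimal sketch of the kind of simulation-based comparison described above (illustrative only; the AR(1) generator, the single ML method and the error metric are assumptions): generate a series from a linear stationary process and compare multi-step forecasts from an ARIMA model and a random forest applied recursively.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Simulate an AR(1) series (a linear stationary stochastic process).
n, phi = 130, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
train, test, h = x[:100], x[100:112], 12       # 12-step-ahead forecast

# Stochastic method: ARIMA(1,0,0).
arima_fc = ARIMA(train, order=(1, 0, 0)).fit().forecast(steps=h)

# ML method: random forest on lagged values, applied recursively for multi-step forecasts.
p = 3
X = np.column_stack([train[i:len(train) - p + i] for i in range(p)])
y = train[p:]
rf = RandomForestRegressor(n_estimators=200, random_state=7).fit(X, y)
hist, rf_fc = list(train[-p:]), []
for _ in range(h):
    pred = rf.predict(np.array(hist[-p:]).reshape(1, -1))[0]
    rf_fc.append(pred)
    hist.append(pred)

for name, fc in [("ARIMA", arima_fc), ("random forest", np.array(rf_fc))]:
    print(f"{name:14s} RMSE: {np.sqrt(np.mean((fc - test) ** 2)):.3f}")
```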
Recognition of Handwritten Digit using Convolutional Neural Network in Python with Tensorflow and Comparison of Performance for Various Hidden Layers
Fathma Siddique, Shadman Sakib, Md. Abu Bakr Siddique
Subject: Engineering, Control & Systems Engineering Keywords: Handwritten digit recognition; Convolutional Neural Network (CNN); Deep learning; MNIST dataset; Epochs; Hidden Layers; Stochastic Gradient Descent; Backpropagation
In recent times, with the rise of the Artificial Neural Network (ANN), deep learning has brought a dramatic twist to the field of machine learning by making it more artificially intelligent. Deep learning is used remarkably widely because of its diverse range of applications, such as surveillance, health, medicine, sports, robotics, drones, etc. In deep learning, the Convolutional Neural Network (CNN) is at the center of spectacular advances that mix Artificial Neural Networks (ANN) and up-to-date deep learning strategies. It has been used broadly in pattern recognition, sentence classification, speech recognition, face recognition, text categorization, document analysis, scene and handwritten digit recognition. The goal of this paper is to observe the variation in accuracy of a CNN classifying handwritten digits for various numbers of hidden layers and epochs, and to compare the resulting accuracies. For this performance evaluation of the CNN, we performed our experiment using the Modified National Institute of Standards and Technology (MNIST) dataset. The network is trained using stochastic gradient descent and the backpropagation algorithm.
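A compact sketch of this setup (illustrative only; the specific layer sizes, epoch count and use of Keras are assumptions, not the authors' exact architecture): a small CNN trained on MNIST with stochastic gradient descent.

```python
import tensorflow as tf

# Load and normalize MNIST digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Small CNN: one convolutional block followed by a dense hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer; vary depth/epochs to compare accuracy
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Stochastic gradient descent; backpropagation is handled automatically by the framework.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_split=0.1, verbose=2)

print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```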
Fourier Transform on the Homogeneous Space of 3D Positions and Orientations for Exact Solutions to PDEs
Remco Duits, Erik J. Bekkers, Alexey Mashtakov
Subject: Mathematics & Computer Science, Analysis Keywords: Fourier transform; rigid body motions; partial differential equations; Lévy processes; lie groups; homogeneous spaces; stochastic differential equations
Fokker-Planck PDEs (incl. diffusions) for stable Lévy processes (incl. Wiener processes) on the joint space of positions and orientations play a major role in mechanics, robotics, image analysis, directional statistics and probability theory. Exact analytic designs and solutions are known in the 2D case, where they have been obtained using Fourier transform on $SE(2)$. Here we extend these approaches to 3D using Fourier transform on the Lie group $SE(3)$ of rigid body motions. More precisely, we define the homogeneous space of 3D positions and orientations $\mathbb{R}^{3}\rtimes S^{2}:=SE(3)/(\{\mathbf{0}\} \times SO(2))$ as the quotient in $SE(3)$. In our construction, two group elements are equivalent if they are equal up to a rotation around the reference axis. On this quotient we design a specific Fourier transform. We apply this Fourier transform to derive new exact solutions to Fokker-Planck PDEs of $\alpha$-stable Lévy processes on $\mathbb{R}^{3}\rtimes S^{2}$. This reduces classical analysis computations and provides an explicit algebraic spectral decomposition of the solutions. We compare the exact probability kernel for $\alpha = 1$ (the diffusion kernel) to the kernel for $\alpha=\frac12$ (the Poisson kernel). We set up SDEs for the Lévy processes on the quotient and derive corresponding Monte-Carlo methods. We verify that the exact probability kernels arise as the limit of the Monte-Carlo approximations.
Intelligent Passenger Frequency Prediction for Transportation Sustainability using Kalman Filter Algorithm and Convolutional Neural Network
Onemanyin David Jimoh, Lukman Adewale Ajao, Oluwafemi Oyetunde Adeleke, Stephen Sunday Kolo, Oyedeji Abdulwaheed Olarinoye
Subject: Engineering, Electrical & Electronic Engineering Keywords: ARIMA; convolutional neural network; Kalman filter; passenger flow; transportation; short-term prediction; stochastic model
Passenger flow prediction is very significant for transportation sustainability, due to the traffic congestion encountered by road users travelling to offices, schools, or markets early in the day and during closing periods. This problem is peculiar to the transportation system of the Federal University of Technology Minna, Nigeria. However, the prevailing technique of passenger flow estimation is non-parametric, depends on fixed planning, and is easily affected by noise. In this research, we propose a hybrid intelligent passenger frequency prediction model using the Auto-Regressive Integrated Moving Average (ARIMA) linear model, a Convolutional Neural Network (CNN), and the Kalman Filter Algorithm (KFA). The passengers' frequency of arrival at the bus terminals is obtained and enumerated through closed-circuit television (CCTV) and demonstrated using a Markovian Queueing Systems Model (MQSM). The ARIMA model was used for learning and prediction, and its results were compared with those of the combined CNN-KFA technique. The autocorrelation function (ACF) and partial autocorrelation function (PACF) are used to examine the stationarity of data with different features. The performance of the models in describing the short-term passenger flow frequency at each terminal was analyzed and evaluated using the Mean Absolute Percentage Error (MAPE) and Mean Squared Error (MSE). The CNN-Kalman filter model was fitted to the short-term series and its MAPE values are below 10%. The MSE shows that the CNN-Kalman filter model has the overall best performance, outperforming the ARIMA model 83.33% of the time, and provides high accuracy in forecasting.
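A minimal sketch of Kalman filtering applied to noisy short-term counts (illustrative only; the random-walk state model and the noise variances are assumptions, and the CNN component is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "true" hourly passenger arrival rate and noisy CCTV counts.
T = 48
true_rate = 50 + 20 * np.sin(np.arange(T) * 2 * np.pi / 24)
counts = rng.poisson(true_rate)

# Scalar Kalman filter with a random-walk state model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
q, r = 4.0, 50.0            # process and measurement noise variances (assumed)
x_hat, P = float(counts[0]), 100.0
estimates = [x_hat]
for y in counts[1:]:
    # Predict step.
    P = P + q
    # Update step.
    K = P / (P + r)
    x_hat = x_hat + K * (y - x_hat)
    P = (1 - K) * P
    estimates.append(x_hat)

rmse_raw = np.sqrt(np.mean((counts - true_rate) ** 2))
rmse_kf = np.sqrt(np.mean((np.array(estimates) - true_rate) ** 2))
print(f"RMSE of raw counts: {rmse_raw:.1f}, RMSE of Kalman estimates: {rmse_kf:.1f}")
```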
Uncertainty Relations in Hydrodynamics
Gyell Gonçalves de Matos, Takeshi Kodama, Tomoi Koide
Subject: Physical Sciences, Acoustics Keywords: Navier-Stokes-Fourier equation; Navier-Stokes-Korteweg equation; uncertainty relations; stochastic calculus; variational principle
Uncertainty relations in hydrodynamics are numerically studied. We first review the formulation of the generalized uncertainty relations in the stochastic variational method (SVM), following the work by two of the present authors [Phys. Lett. A 382, 1472 (2018)]. In this approach, the origin of the finite minimum value of uncertainty is attributed to the non-differentiable (virtual) trajectory of a quantum particle, and both the Kennard and Robertson-Schrödinger inequalities of quantum mechanics are reproduced. The same non-differentiable trajectory is applied to the motion of fluid elements in the Navier-Stokes-Fourier equation or the Navier-Stokes-Korteweg equation. By introducing the standard deviations of position and momentum for fluid elements, the uncertainty relations in hydrodynamics are derived. These are applicable even to the Gross-Pitaevskii equation, and the field-theoretical uncertainty relation is then reproduced. We further investigate the derived relations numerically and find that the behaviors of the uncertainty relations for liquid and gas are qualitatively different, suggesting that the uncertainty relations in hydrodynamics can be used as a criterion to classify the liquid and gas phases of a fluid.
Including Arbitrary Geometric Correlations into One-Dimensional Time-Dependent Schrödinger Equations
Devashish Pandey, Xavier Oriols, Guillermo Albareda
Subject: Keywords: nanojunction; constriction; quantum electron transport; quantum confinement; dimensionality reduction, stochastic Schrödinger equations; geometric correlations
The so-called Born-Huang ansatz is a fundamental tool in ab-initio molecular dynamics: it allows one to effectively separate fast and slow degrees of freedom and thus treat electrons and nuclei on different mathematical footings. Here we consider the use of a Born-Huang-like expansion of the three-dimensional time-dependent Schrödinger equation to separate transport and confinement degrees of freedom in electron transport problems that involve geometrical constrictions. The resulting scheme consists of an eigenstate problem for the confinement degrees of freedom (in the transverse direction) whose solution constitutes the input for the propagation of a set of coupled one-dimensional equations of motion for the transport degree of freedom (in the longitudinal direction). This technique achieves quantitative accuracy using an order of magnitude fewer computational resources than the full-dimensional simulation for a prototypical two-dimensional constriction.
Analytic Expressions for Radar Sea Clutter WSSUS Scattering Functions
Corey Cooke
Subject: Engineering, Electrical & Electronic Engineering Keywords: airborne radar; radar clutter; radar signal processing; stochastic systems; time-varying systems; maximum entropy
Bello's stochastic linear time-varying system theory has been widely used in the wireless communications literature to characterize multipath fading channel statistics. In the context of radar backscatter, this formulation allows for statistical characterization of distributed radar targets in range and Doppler using wide-sense stationary uncorrelated scattering (WSSUS) models. WSSUS models separate the channel from the effect of the waveform and receive filter, making it an ideal formulation for waveform design problems. Of particular interest in the radar waveform design community is the ability to suppress unwanted backscatter from the earth's surface, known as clutter. Various methods for estimating WSSUS system functions have been studied in the literature, but to date, no analytic expressions for radar surface clutter range-Doppler scattering functions exist. In this work we derive a wideband generalization of the Jakes Doppler spectrum model, which is widely used in the wireless communications literature, adapt it for use in radar problems, and show how the maximum entropy method can be used to extend this model to account for internal clutter motion. Validation of the spectral and stationarity properties of the proposed model against a subset of the Australian Ingara sea clutter database is performed, and good agreement is shown.
Evolve Then Filter Regularization for Stochastic Reduced Order Modeling
Xuping Xie, Feng Bao, Clayton G. Webster
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: reduced order modeling; regularization; fluid dynamics; stochastic Burgers Equation; proper orthogonal decomposition; spatial filter
In this paper, we introduce the evolve-then-filter (EF) regularization method for reduced order modeling of convection-dominated stochastic systems. The standard Galerkin projection reduced order model (G-ROM) yields numerical oscillations in the convection-dominated regime. The evolve-then-filter reduced order model (EF-ROM) aims at numerically stabilizing the standard G-ROM by using an explicit ROM spatial filter to regularize various terms in the reduced order model (ROM). Our numerical results are based on a stochastic Burgers equation with linear multiplicative noise; they show that the EF-ROM yields significantly better results than the G-ROM.
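A short sketch of the proper orthogonal decomposition step that underlies such ROMs (illustrative only; the synthetic snapshot data and the energy criterion are assumptions, and the evolve-then-filter time stepping itself is not reproduced): POD modes are obtained from the SVD of a snapshot matrix and solutions are projected onto the leading modes.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic snapshot matrix: each column is a solution field at one time instant.
nx, nt = 200, 80
x = np.linspace(0, 1, nx)
snapshots = np.column_stack([
    np.sin(np.pi * x) * np.exp(-0.05 * t) + 0.1 * np.sin(3 * np.pi * x + 0.1 * t)
    + 0.01 * rng.standard_normal(nx)               # small "stochastic" perturbation
    for t in range(nt)
])

# POD modes = left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # modes capturing 99.9% of the energy

# Galerkin-type projection: represent snapshots in the reduced basis and measure the error.
Ur = U[:, :r]
recon = Ur @ (Ur.T @ snapshots)
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"retained modes: {r}, relative projection error: {err:.2e}")
```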
Information Geometry in Classical and Quantum Systems
Eun-jin Kim
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: stochastic processes; Langevin equation; Fokker-Planck equation; information length; Fisher information; relaxation; chaos; attractor
A probabilistic description is essential for understanding the dynamics of stochastic systems far from equilibrium. To compare different Probability Density Functions (PDFs), it is extremely useful to assign an appropriate metric to probability, such that the distance increases with the difference between two PDFs. This metric structure then provides a key link between stochastic processes and geometry. For a non-equilibrium process, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart, and sum these distances in time. The total distance along the trajectory of the system quantifies the total number of different states that the system undergoes in time and is called the information length. Using this concept, we investigate classical and quantum systems and demonstrate the utility of the information length as a unique Lagrangian diagnostic to quantify the information change as a system continuously evolves in time and to map out attractor structure. We further elucidate quantum effects (the uncertainty relation) and the dual role of the width of the PDF in quantum systems.
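A numerical sketch of the information-length construction (illustrative only; the relaxing Gaussian PDF of an Ornstein-Uhlenbeck process and all parameter values are assumed): accumulate the square root of the integral of (dp/dt)^2 / p over x as the PDF evolves in time.

```python
import numpy as np

# Relaxation of an Ornstein-Uhlenbeck process: Gaussian PDF with decaying mean, fixed variance.
gamma, D = 1.0, 0.5
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
t = np.linspace(0.0, 8.0, 4001)
dt = t[1] - t[0]

def pdf(ti):
    mean = 5.0 * np.exp(-gamma * ti)            # mean relaxes towards the equilibrium value 0
    var = D / gamma
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Information length: L = sum over time of sqrt( integral (dp/dt)^2 / p dx ) * dt.
L = 0.0
p_prev = pdf(t[0])
for ti in t[1:]:
    p = pdf(ti)
    dpdt = (p - p_prev) / dt
    E = np.sum(dpdt**2 / np.maximum(p, 1e-300)) * dx
    L += np.sqrt(E) * dt
    p_prev = p

# For a fixed-variance Gaussian this should approach |delta mean| / sigma = 5 / sqrt(0.5) ~ 7.07.
print(f"information length of the relaxation: {L:.2f}")
```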
Far-from-Equilibrium Time Evolution between two Gamma Distributions
Eun-jin Kim, Lucille-Marie Tenkès, Rainer Hollerbach
Subject: Mathematics & Computer Science, Applied Mathematics Keywords: non-equlibrium; stochastic systems; langevin equation; fokker-planck equation; time-dependent PDFs; gamma distribution
Many systems in nature and laboratories are far from equilibrium and exhibit significant fluctuations, invalidating the key assumptions of small fluctuations and short memory time made in or near equilibrium. A full knowledge of Probability Distribution Functions (PDFs), especially time-dependent PDFs, becomes essential in understanding far-from-equilibrium processes. We consider a stochastic logistic model with multiplicative noise, which has gamma distributions as stationary PDFs. We numerically solve the transient relaxation problem and show that, as the strength of the stochastic noise increases, the time-dependent PDFs increasingly deviate from gamma distributions. For sufficiently strong noise a transition occurs whereby the PDF never reaches a stationary state, but instead forms a peak that becomes ever more narrowly concentrated at the origin. The addition of an arbitrarily small amount of additive noise regularizes these solutions and re-establishes the existence of stationary solutions. In addition to diagnostic quantities such as the mean value, standard deviation, skewness and kurtosis, the transitions between different solutions are analyzed in terms of entropy and the information length, the total number of statistically distinguishable states that a system passes through in time.
Choice Under Uncertainty and Ambiguity: An Empirical Inquiry of a Behavioral Economic Experiment Applied to COVID-19.
Kathleen Shea Rodenburg, Anit Bhattacharyya, Erin Rodenburg, Andrew Papadopoulos
Subject: Behavioral Sciences, Applied Psychology Keywords: public health decision making; COVID-19; behavioral economic; experimental economics; first-order stochastic dominance; bounded rationality; decision trees
Results from a behavioral economics laboratory experiment are used to enhance our understanding of public health decisions made during the COVID-19 pandemic. The identification of systematic deviations from optimal decision theory found in controlled experiments could help inform public policy design for future public health crises. Both the laboratory decisions and the shelter-in-place decisions made during COVID-19 included elements of risk, uncertainty and ambiguity. The laboratory findings show that individuals adopt different decision rules depending both on personal attributes and on the context and environment in which the decision task is conducted. Key observations to consider in the context of the COVID-19 decision environment include the importance of past experience, the ability to understand and calculate the odds of each action, the size of and differences in economic payoffs given the choice, the value of information received, and how past statistically independent outcomes influence future decisions. The academic space encompassing both public health and behavioral economics is small, yet important, particularly in the current crisis. The objective of continued research in this area would be to develop a more representative model of decision-making processes, particularly during crises, that would serve to enhance future public health policy design.
Multi-fidelity Model Calibration in Structural Dynamics using Stochastic Variational Inference on Manifolds
Panagiotis Tsilifis, Piyush Pandita, Sayan Ghosh, Liping Wang
Subject: Mathematics & Computer Science, Computational Mathematics Keywords: Gaussian Processes; Stochastic Variational Inference; Manifold Gradient Ascent; Multi-fidelity modeling; Structural Dynamics; Vibration Torsion
Bayesian techniques for engineering problems that rely on Gaussian process (GP) regression are known for their ability to quantify epistemic and aleatory uncertainties and for being data efficient. The mathematical elegance of applying these methods usually comes at a high computational cost compared to deterministic and empirical Bayesian methods. Furthermore, using these methods becomes practically infeasible in scenarios characterized by a large number of inputs and thousands of training data points. The focus of this work is on enhancing Gaussian process-based metamodeling and model calibration tasks when the training datasets are significantly large. To achieve this goal, we employ a stochastic variational inference algorithm that enables rapid statistical learning of the calibration parameters and hyperparameter tuning, while retaining the rigor of Bayesian inference. The numerical performance of the algorithm is demonstrated on multiple metamodeling and model calibration problems with thousands of training data.
Modeling and Analyzing Multi-tier Millimeter/Micro Wave Hybrid Caching Networks
Jihong Zhao, Shuyuan Zhao, Hua Qu, Gongye Ren
Subject: Engineering, Other Keywords: Multi-tier hybrid caching networks; stochastic geometry; millimeter wave; average successful delivery probability; performance analysis
In fifth-generation communication systems, millimeter wave (mmWave) networks will coexist with traditional micro wave (μWave) networks, allowing for higher data transmission rates and a better user experience. In this paper, we give a comprehensive framework of mathematical models and analytical methods for multi-tier mmWave and μWave hybrid caching networks based on stochastic geometry. We propose an association strategy assuming average biased-received-power association and Rayleigh fading. Accordingly, using a D-ball approximation of the blockage model for mmWave networks, expressions for the cell association probability and the coverage probability of the hybrid networks are derived. We also use the average successful delivery probability as the performance metric to analyze existing caching placement strategies. Simulation results validate the accuracy of our analytical conclusions.
Jump-Diffusion Models for Valuing the Future: Discounting Under Extreme Situations
Jaume Masoliver, Miquel Montero, Josep Perelló
Subject: Social Sciences, Accounting Keywords: stochastic processes; finance; climate; discount function; environmental economics; Poissonian jumps; Ornstein-Uhlenbeck process; interest rates; asymptotics
We develop the process of discounting when the underlying rates follow a jump-diffusion process, that is, when, in addition to diffusive behavior, rates suffer a series of finite discontinuities located at random Poissonian times. Jump amplitudes are also random and governed by an arbitrary density. Such a model may describe economic evolution especially when extreme situations occur (pandemics, global wars, etc.). When, between jumps, the dynamical evolution is governed by an Ornstein-Uhlenbeck diffusion process, we obtain exact and explicit expressions for the discount function and the long-run discount rate, and show that the presence of discontinuities may drastically reduce the discount rate, a fact that has significant consequences for environmental planning. We also discuss, as a specific example, the case when rates are described by the continuous-time random walk.
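A Monte Carlo sketch of discounting under a jump-diffusion rate (illustrative only; all parameter values, the normal jump-size density and the Euler discretization are assumptions, whereas the preprint derives exact expressions rather than simulating): estimate D(T) = E[exp(-integral of r(s) ds)] for an Ornstein-Uhlenbeck rate with Poissonian jumps.

```python
import numpy as np

rng = np.random.default_rng(10)

# Ornstein-Uhlenbeck rate with Poissonian jumps:
# dr = -k (r - m) dt + sigma dW + J dN,  J ~ Normal(mu_J, s_J), N a Poisson process of rate lam.
k, m, sigma = 0.3, 0.03, 0.01
lam, mu_J, s_J = 0.1, -0.02, 0.01          # rare downward jumps (e.g., crisis episodes)
T, dt, paths = 50.0, 0.02, 5000
n = int(T / dt)

r = np.full(paths, m)
integral = np.zeros(paths)
for _ in range(n):
    integral += r * dt
    dW = np.sqrt(dt) * rng.standard_normal(paths)
    dN = rng.poisson(lam * dt, paths)
    J = rng.normal(mu_J, s_J, paths) * dN
    r += -k * (r - m) * dt + sigma * dW + J

D = np.exp(-integral).mean()               # discount function at horizon T
long_run_rate = -np.log(D) / T
print(f"D({T:.0f}) = {D:.3f}, effective long-run discount rate = {100 * long_run_rate:.2f}%")
```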
Extending Quantum Probability from Real Axis to Complex Plane
Ciann-Dong Yang, Shiang-Yi Han
Subject: Physical Sciences, General & Theoretical Physics Keywords: complex stochastic differential equation; complex Fokker-Planck equation; quantum trajectory; complex probability; optimal quantum guidance law.
Probability is an open question in the ontological interpretation of quantum mechanics. It has been discussed in some trajectory interpretations such as Bohmian mechanics and stochastic mechanics. New questions arise when the domain of probability extends to the complex space, including the generation of complex trajectories, the definition of the complex probability, the relation of the complex probability to the real quantum probability, and so on. The complex treatment proposed here applies the optimal quantum guidance law to derive the stochastic differential (SD) equation governing the particle's random motions in the complex plane. The ensemble of the complex quantum random trajectories (CQRTs) solved from the complex SD equation is used to construct the probability distribution ρc(t,x,y) of the particle's position over the complex plane z=x+iy. The correctness of the obtained complex probability is confirmed by the solution of the complex Fokker-Planck equation. The significant contribution of the complex probability is that it can be used to reconstruct both quantum probability and classical probability, and to clarify their relationship. Although quantum probability and classical probability are both defined on the real axis, they are obtained by projecting complex probability onto the real axis in different ways. This difference explains why the quantum probability cannot exactly converge to the classical probability when the quantum number is large.
Bouncing oil droplets, de Broglie's quantum thermostat and convergence to equilibrium
Mohamed Hatifi, Ralph Willox, Samuel Colin, Thomas Durt
Subject: Physical Sciences, General & Theoretical Physics Keywords: Bouncing oil droplets; Stochastic quantum dynamics; de Broglie–Bohm theory; Quantum non-equilibrium; H-theorem; Ergodicity
Recently, the properties of bouncing oil droplets, also known as `walkers', have attracted much attention because they are thought to offer a gateway to a better understanding of quantum behaviour. They indeed constitute a macroscopic realization of wave-particle duality, in the sense that their trajectories are guided by a self-generated surrounding wave. The aim of this paper is to try to describe walker phenomenology in terms of de Broglie-Bohm dynamics and of a stochastic version thereof. In particular, we first study how a stochastic modification of the de Broglie pilot-wave theory, à la Nelson, affects the process of relaxation to quantum equilibrium, and we prove an H-theorem for the relaxation to quantum equilibrium under Nelson-type dynamics. We then compare the onset of equilibrium in the stochastic and the de Broglie-Bohm approaches and we propose some simple experiments by which one can test the applicability of our theory to the context of bouncing oil droplets. Finally, we compare our theory to actual observations of walker behavior in a 2D harmonic potential well.
Life Cycle Modeling of Structural Defects via Computational Geometry and Time Series Forecasting
Sara Mohamadi, David Lattanzi
Subject: Engineering, Civil Engineering Keywords: Remote sensing; Photogrammetry; Life cycle modeling; Time series forecasting; Structural damage; Stochastic modeling; Convex Hull; ARIMA; VAR; Fatigue crack prediction
The evaluation of geometric defects is necessary in order to maintain the integrity of structures over time. These assessments are designed to detect damage to structures and, ideally, to help inspectors estimate the remaining life of structures. Current methodologies for monitoring structural systems, while providing useful information about the current state of a structure, are limited in monitoring defects over time and in linking them to predictive simulation. This paper presents a new approach to the predictive modeling of geometric defects. Defect regions segmented from point clouds are parameterized using the convex hull algorithm to extract features from detected defects, and a stochastic dynamic model is then fitted to these features to model the evolution of the hull over time. Describing a defect in terms of its parameterized hull enables consistent temporal tracking for predictive purposes, while implicitly reducing data dimensionality and complexity as well. In this study, 2D point clouds analogous to information derived from point clouds were first generated over simulated life cycles. The evolution of the point cloud hull parameterizations was modeled as a stochastic dynamical process via autoregressive integrated moving average (ARIMA) and vector autoregression (VAR) models and compared against ground truth. The results indicate that this convex hull approach provides consistent and accurate representations of defect evolution across a range of defect topologies and is reasonably robust to noisy measurements; however, assumptions regarding the underlying dynamical process play a significant role in predictive accuracy. The results were then validated on experimental data from fatigue testing with high accuracy. Longer term, the results of this work will support finite element model updating for predictive analysis of structural capacity.
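A reduced sketch of the hull-parameterization-plus-forecasting pipeline (illustrative only; the simulated growing point cloud, the choice of hull feature and the ARIMA order are assumptions): track the convex hull area of a simulated growing defect and forecast it with ARIMA.

```python
import numpy as np
from scipy.spatial import ConvexHull
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)

# Simulate a growing 2D defect: a noisy point cloud whose extent increases each inspection cycle.
areas = []
for cycle in range(40):
    size = 1.0 + 0.1 * cycle
    pts = rng.normal(scale=size, size=(200, 2))
    hull = ConvexHull(pts)
    areas.append(hull.volume)        # in 2D, ConvexHull.volume is the enclosed area

areas = np.array(areas)
train, test = areas[:32], areas[32:]

# Forecast the hull-area feature with an ARIMA(1,1,0) model.
fc = ARIMA(train, order=(1, 1, 0)).fit().forecast(steps=len(test))
mape = 100 * np.mean(np.abs((fc - test) / test))
print(f"8-step hull-area forecast MAPE: {mape:.1f}%")
```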
How Single-Molecule Localization Microscopy Expanded Our Mechanistic Understanding of RNA Polymerase II Transcription
Peter Hoboth, Ondřej Šebesta, Pavel Hozak
Subject: Life Sciences, Biochemistry Keywords: cell nucleus; gene expression; transcription foci; transcription factors; super-resolution microscopy; structured illumination; stimulated emission depletion; stochastic optical reconstruction; photoactivation
Classical models of gene expression were built using genetics and biochemistry. Although these approaches are powerful, they give very limited consideration to the spatial and temporal organization of gene expression. Although the spatial organization and dynamics of the RNA polymerase II (RNAPII) transcription machinery have fundamental functional consequences for gene expression, detailed studies of them were for a long time hampered by the limits of classical light microscopy. The advent of super-resolution microscopy (SRM) techniques has allowed the visualization of the RNAPII transcription machinery with nanometer resolution and millisecond precision. In this review, we summarize the recent methodological advances in SRM, focus on its application to studies of the nanoscale organization in space and time of RNAPII transcription, and discuss its consequences for the mechanistic understanding of gene expression.
Hyperbolic Numbers in Modeling Genetic Phenomena
Subject: Life Sciences, Genetics Keywords: hyperbolic numbers; matrix; eigenvectors; genetics; Punnett squares; Fibonacci numbers; phyllotaxis; music harmony; literary texts; doubly stochastic matrices
The article is devoted to applications of 2-dimensional hyperbolic numbers and their algebraic 2n-dimensional extensions in modeling some genetic and cultural phenomena. Mathematical properties of hyperbolic numbers and their bisymmetric matrix representations are described in connection with their application to the analysis of the following structures: alphabets of DNA nucleobases; inherited phyllotaxis phenomena; Punnett squares in Mendelian genetics; the psychophysical Weber-Fechner law; long literary Russian texts (in their special binary representations). New methods of algebraic analysis of the harmony of musical works are proposed, taking into account the innate predisposition of people to music. The hypothesis is put forward that sets of eigenvectors of matrix representations of basis units of 2n-dimensional hyperbolic numbers play an important role in transmitting biological information. A general hyperbolic rule regarding the oligomer cooperative organization of different genomes is described jointly with its quantum-information model. Besides, the hypothesis of an analog of the Weber-Fechner law for sequences of spikes in single nerve fibers is formulated. The proposed algebraic approach is connected with the theme of the grammar of biology and applications of bisymmetric doubly stochastic matrices. Applications of hyperbolic numbers reveal hidden interrelations between structures of different biological and physical phenomena. They lead to new approaches in the mathematical modeling of genetic phenomena and innate biological structures.
Prevalence and Economic Costs of Absenteeism in an Aging Population – A Quasi-Stochastic Projection for Germany
Patrizio Vanella, Christina Benita Wilke, Doris Söhnlein
Subject: Social Sciences, Econometrics & Statistics Keywords: Cohort-Component Method; Multivariate Methods; Time Series Analysis; Monte Carlo Methods; Stochastic Forecasting; Demography; Statistical Epidemiology; Labor Market Research; Health Economics
Demographic change is leading to the aging of German society. As long as the baby boom cohorts are still of working age, the working population will also age, and it will decline as soon as this baby boom generation gradually reaches retirement age. At the same time, there has been a trend towards increasing absenteeism (times of inability to work) in companies since the 2000s, with the number of days of absence increasing with age. We present a novel stochastic forecast approach that combines population forecasting with forecasts of labor force participation trends, considering epidemiological aspects. For this, we combine a stochastic Monte Carlo-based cohort-component forecast of the population with projections of labor force participation rates and morbidity rates. This article examines the purely demographic effect on the economic costs associated with such absenteeism due to the inability to work. Under expected future employment patterns and constant morbidity patterns, absenteeism is expected to increase by close to 5 percent by 2050 relative to 2020, associated with increasing economic costs of almost 3 percent. Our results illustrate how strongly the pronounced baby boom/baby bust phenomenon determines demographic development in Germany in the midterm.
Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint
Hea-Jung Kim
Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Bayesian reliability analysis; Bayesian hierarchical model; MCMC method; scale mixtures of log-normal failure time model; stochastic constraint; two-stage MaxEnt prior.
This paper develops Bayesian inference in reliability for a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraints on their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using this prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
Journal of NeuroEngineering and Rehabilitation
User-centered practicability analysis of two identification strategies in electrode arrays for FES induced hand motion in early stroke rehabilitation
Christina Salchow-Hömmen (ORCID: orcid.org/0000-0001-5527-9895)1,
Natalie Jankowski2,
Markus Valtin1,
Laura Schönijahn2,
Sebastian Böttcher3,
Frank Dähne3 &
Thomas Schauer1
Journal of NeuroEngineering and Rehabilitation volume 15, Article number: 123 (2018)
Surface electrode arrays have become popular in the application of functional electrical stimulation (FES) on the forearm. Arrays consist of multiple, small elements, which can be activated separately or in groups, forming virtual electrodes (VEs). As technological progress yields rising numbers of possible elements, an effective search strategy for suitable VEs in electrode arrays is of increasing importance. Current methods can be time-consuming, lack user integration, and have not been evaluated regarding clinical acceptance and practicability.
Two array identification procedures with different levels of user integration—a semi-automatic and a fully automatic approach—are evaluated. The semi-automatic method allows health professionals to continuously modify VEs via a touchscreen while the stimulation intensities are automatically controlled to maintain sufficient wrist extension. The automatic approach evaluates stimulation responses of various VEs for different intensities using a cost function and joint-angle recordings. Both procedures are compared in a clinical setup with five sub-acute stroke patients with moderate hand disabilities. The task was to find suitable VEs in two arrays with 59 elements in total to generate hand opening and closing for a grasp-and-release task. Practicability and acceptance by patients and health professionals were investigated using questionnaires and interviews.
Both identification methods yielded suitable VEs for hand opening and closing in patients who could tolerate the stimulation. However, the resulting VEs differed between the two approaches. On average, a complete search was 25% faster with the semi-automatic approach (semi-automatic: 7.3 min, automatic: 10.5 min). User acceptance was high for both methods, and no clear preference could be identified.
The semi-automatic approach should be preferred as the search strategy in arrays on the forearm. The observed search duration will decrease further when the system is applied repeatedly to the same patient, as only small position adjustments of the VEs are then required. However, the setup time will increase significantly for the generation of various grasp types and the adaptation to different arm postures. We recommend different levels of user integration in FES systems so that the search strategy can be chosen based on the users' preferences and the application scenario.
Functional electrical stimulation (FES) is a common technique in physical rehabilitation to facilitate the motor recovery of disabled limbs after stroke [1] or spinal cord injury [2]. In therapy, sequences of electrical stimulation pulses are usually applied via surface electrodes to evoke contractions in the paralyzed muscles. The usage of standard hydro-gel surface electrodes has several disadvantages such as lacking selectivity of the stimulation, long placement times, and static electrode positions during therapy sessions [3]. The listed drawbacks are especially relevant for the application of FES on body parts with high muscle density, such as the human forearm, where a selective stimulation is mandatory to generate complex movements (e.g. grasping) [4]. Inaccurate stimulation results and elaborate setup times combined with non-adaptable, open-loop stimulation patterns result in a lack of acceptance and little application of this technology in clinical practice.
Electrode arrays (or multi-pad electrodes) were introduced to overcome these problems and have become popular in FES research within the last two decades [5]. Electrode arrays consist of multiple, small elements, which can be activated separately. By activating multiple elements in a defined temporal pattern (synchronously/asynchronously), so-called virtual electrodes (VEs) can be formed [6, 7]. VEs can dynamically change their position, shape, and size. This facilitates repositioning of the stimulation electrode in real-time by choosing different subsets of active elements.
The application of electrode arrays yields new challenges regarding setup time, VE identification strategies, and user integration. The standard, intuitive manual approach for finding suitable VEs in electrode arrays consists of testing single elements or element combinations iteratively [8]. An element or an element combination is selected, the stimulation intensity is increased until a certain degree of motion is achieved, and the evoked movement is observed and judged by the caregiver. This procedure is repeated until a satisfying VE is found for each desired motion. Nowadays, electrode arrays with up to 78 elements are used on the forearm [9]. A manual, brute-force search for suitable VEs within such arrays is therefore laborious and time-consuming, and may additionally lead to muscle fatigue.
Many approaches have been introduced to automatically find the optimal stimulation point(s) for defined movements within an array. Automatic search algorithms usually combine a stimulation and element testing strategy with a predefined selection criterion, or cost function. In most approaches, the motion evoked by twitch or step stimulation is registered and evaluated in a cost function for each tested element or element combination. Such a function can be the fit with a reference trajectory derived from the movement of healthy people [10], the achievement of predefined constraints for the joint angles [11], or the maximum registered joint angle amplitude [12] together with additional restrictions [13]. The existing methods often evaluate only a small number of stimulation intensities [12, 14] and interpolate the outcome for higher stimulation intensities. In this way, the search space is reduced, but higher intensities may also induce movement in underlying and neighboring muscles, which is not considered.
Recently, an electrophysiologically based identification approach was suggested that analyzes the FES-induced electromyogram of the target muscles to determine suitable VEs [15]. Although the authors presented reliable results, difficulties with this approach are the long setup and search times due to the use of multiple electrodes and devices, which will increase even further when complex hand movements are to be generated. Other methods suggest the use of feedback-controlled strategies for the optimization of VEs for hand postures to satisfy the complexity of the problem [16, 17]. However, the current automatic approaches disregard the existing expertise of the users, as the individual opinion of the treating health professional and the patient's perception of stimulation comfort are often not reflected sufficiently in the decision process. Together with extensive setup procedures, a lack of customization, and non-user-evaluated interfaces, this may lead to poor acceptance of electrode-array-based neuroprostheses in clinical practice. The involvement of users in development processes has been shown to increase the usability and acceptance of health care systems such as rehabilitation technologies and is recommended already at early stages [18]. However, systematic usability analyses are missing in current research in the field of array-based FES. One way to overcome these problems is to establish new user interfaces for electrode arrays and dynamic stimulation adaptation via feedback from integrated sensors, as suggested in [19]. There, individual elements in the electrode array can be activated and deactivated via an overlying touch layer. This allows individual adaptation by the caregiver but can be quite time-consuming.
The scope of this paper is to evaluate array identification methods with different levels of user integration and support, and to analyze the usability, practicability, and acceptance of such methods in clinical early stroke rehabilitation. We present two array identification methods that aim to assist the therapist in conveniently finding individual stimulation areas and stimulation parameters according to a patient's needs and personal training strategy. The first approach is our recently introduced semi-automatic identification procedure, which allows the caregiver to continuously modify VEs to find a good stimulation area [20]. The purpose of the semi-automatic search was to provide an identification framework that a) is faster and more convenient than a manual search and b) overcomes the lack of user integration and acceptance of fully automatic identification procedures for electrode arrays. In the presented framework, the center of a VE can be moved by the therapist to arbitrary positions within the arrays, and the individual stimulation intensities of the involved elements are determined automatically via feedback control.
The second approach is an automatic identification procedure, which identifies stimulation positions as well as parameters and offers an interactive framework for health professionals and patients. The fully automatic approach might be more appropriate for home use because it provides more assistance in the decision process. This support is especially important for independent usage by patients. The presented automatic search strategy is examined as an example of the practicability of such approaches in everyday clinical practice. It is an extension and combination of algorithm features from previously published methods by Hoffmann et al. [12] and Schill et al. [14] for hand movements. A cost function is defined on joint angle constraints and calculated for the stimulation of single elements and element combinations.
Our goal was to validate the two methods for finding suitable VEs and to assess the practicability, effectiveness, and acceptance of the different approaches concerning their varying degree of user integration in a clinical environment. Therefore, we evaluated and compared both methods—the semi-automatic and the automatic approach—in a clinical setup with sub-acute stroke patients for the identification of suitable VEs in a hand neuroprosthesis (HNP). The HNP consists of an electrical stimulator, two array electrodes, two single counter electrodes, and an inertial sensor network for tracking hand motion. The task was to find suitable VEs for hand opening in an array placed above the extensor muscle group in the forearm, and for hand closing in an array covering the hand flexors. Physicians and therapists were instructed to find up to three suitable VEs with each approach for a grasp-and-release motion. After the VE identification, a predefined stimulation pattern was repeatedly applied and the generated hand movement was assessed. Both methods were tested one after another on the same patient with a short break in between, to allow a direct comparison of the results.
For the first time, the application of array search techniques was accompanied by user-centered methods. We conducted face-to-face user acceptance and satisfaction surveys using standardized and system-specific questionnaires and interviews. In the context of rehabilitation systems, 'users' refers to both patients and health professionals such as physicians and therapists [21]. The interests of both user groups need to be considered for the successful integration of new technologies. The following research questions regarding the functionality and acceptance of the tested array identification strategies were at the center of our investigation: i) Were both array identification approaches appropriate for finding suitable VEs? ii) Which approach needed more time? iii) Which approach was favored by clinicians and patients regarding practicability, outcome, comfort, and fun? iv) What are the essential key factors for future HNPs to gain high impact and acceptance in clinical practice?
Both identification methods are outlined in detail in the following section. Subsequently, the clinical experimental setup and the procedure of the user-centered evaluation are presented. We show results of five sub-acute stroke patients, compare the suggested identification methods, and discuss the relevance of our results for future developments.
Semi-automatic search strategy
Common approaches for finding suitable VEs in electrode arrays assess the motion or force that is caused by applying stimulation to discrete positions. Single array elements are either deactivated or stimulated at the same (global) stimulation intensity. Our recently introduced semi-automatic search strategy [20] aims to overcome the restriction to discrete VE positions by providing a smooth interpolation function over the area of the array. The interpolation function determines whether an element should be activated and which individual intensity is applied, depending on the position and dimensions of a virtual electrode model within the given array layout. For the model, three different shapes have been realized so far: circle, ellipse, and rectangle. The position of the center of the VE model as well as its dimensions can be changed in real-time by the user, as illustrated in Fig. 1. A graphical user interface (GUI) was developed for devices with a touch display. This enables the user to modify the VE model position via finger input (see Fig. 1, left) and conveniently test different VE configurations within the array.
Overview of the closed-loop semi-automatic search. A controller adjusts the global intensity u based on the error between the recorded movement of one degree of freedom (DoF) y and the reference angle r (loop one). The interpolation function assigns an individual intensity to the array elements, which is then applied to the patient. In this picture, a circular VE model is used. Additionally, patient and health professional perceive the other DoF (e.g. individual finger movements) and control the VE model parameters shape, size, and position (loop two)
At the same time, the system supports the user in choosing the applied stimulation intensities via closed-loop control. Our system constantly controls the global stimulation intensity u such that a predefined reference joint angle r is achieved in a major degree of freedom y. The individual stimulation intensities of the elements are derived from the global intensity u and can be equal to or less than u. The closed loop is depicted in Fig. 1 for the search of a suitable VE for hand extension, as applied in the following experiments. The stimulation is adjusted such that a desired wrist extension is achieved. In this way, the automatic adaptation of the stimulation intensity enables the treating health professional to search manually for a sufficient stimulation area while completely focusing on the current hand posture. The level of applied intensity also serves as information on the current VE parameters. The currently applied intensity is displayed to the caregiver such that he/she can choose the position that yields the desired hand posture with the lowest intensity u. However, the closed-loop control is optional and can be deactivated if desired. In this case, the global stimulation intensity u has to be tuned by the health professionals themselves. For further details on the semi-automatic search strategy, including the interpolation function and controller design, please refer to [20].
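As a rough illustration of this idea, the following Python sketch shows how a circular VE model could be mapped to per-element intensity weights and how a simple discrete PI law could adjust the global intensity u from the wrist-angle error. The function names, gains, and the linear fall-off at the VE edge are our own assumptions; the sketch does not reproduce the interpolation function or the PID tuning of [20].

import numpy as np

def element_weights(element_centers_mm, ve_center_mm, ve_radius_mm, edge_mm=7.0):
    # Weight in [0, 1] for every array element: 1 inside the circular VE model,
    # linearly decaying to 0 over an assumed soft edge of width edge_mm.
    d = np.linalg.norm(np.asarray(element_centers_mm) - np.asarray(ve_center_mm), axis=1)
    return np.clip(1.0 - (d - ve_radius_mm) / edge_mm, 0.0, 1.0)

class GlobalIntensityPI:
    # Minimal discrete PI controller for the global intensity u in [0, 1];
    # the real system uses a PID law with patient-specific parameters [20].
    def __init__(self, kp=0.002, ki=0.02, dt=1.0 / 33.0, u_max=1.0):
        self.kp, self.ki, self.dt, self.u_max = kp, ki, dt, u_max
        self.i_term = 0.0

    def update(self, r_deg, y_deg):
        e = r_deg - y_deg                                    # wrist-angle error
        self.i_term = min(max(self.i_term + self.ki * e * self.dt, 0.0), self.u_max)
        return min(max(self.kp * e + self.i_term, 0.0), self.u_max)

# Per-element intensities are the global intensity scaled by the weights:
# u_elements = u * element_weights(centers, ve_center, ve_radius)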
The whole procedure of VE identification with the semi-automatic approach, as performed in the following experiments, is illustrated in Fig. 2. First, the motor (um) and pain (up) thresholds have to be identified for the individual patient. This was done by stimulating an arbitrary element, which was taken as representative of the pain sensation across the forearm [22]. Afterward, step responses of 2 s at three predefined elements, which are distributed across the array, are recorded. The PID controller parameters are calculated based on these measurements according to [20]. The user, in our case the caregiver (physician or physical therapist), can then manipulate the shape, size, and position of the VE model and observe the evoked motion in the patient for DoFs that are not under feedback control. Any position within an array can be tested and saved as a suitable VE for a desired motion. It is possible to combine VEs with different positions into one active VE. In this way, active VEs with branched patterns can be realized, which might be necessary for generating a uniform movement of all fingers [8]. The search with the semi-automatic approach was performed twice, first in the extensor array to find VEs for hand opening and wrist stabilization with feedback control (cf. Fig. 1), and then in the flexor array to find a VE for finger flexion (grasping; see details in the following sections). For the latter, the stimulation intensity applied to the VE in the flexor array was modified manually (open-loop), whereas the already identified VE for wrist stabilization in the extensor array could be stimulated simultaneously in closed-loop mode to guarantee sufficient wrist extension.
Course of action for the VE identification with the semi-automatic search. After the initialization steps, the user can manipulate the active VE model and observe the resulting motion, while the system automatically adapts and distributes the stimulation intensity (gray box)
Automatic search strategy
Parallel to the semi-automatic search, we developed and tested an automatic search strategy as an alternative, which aims to identify suitable VEs and matching stimulation parameters (stimulation current and pulse width). This approach explores the array(s) automatically and suggests suitable VEs for different reference postures. This intelligent procedure might be the right choice when it comes to FES systems for home use, as it assists the patient in the decision making. The presented automatic search strategy combines algorithm features of previous methods by Hoffmann et al. [12] and Schill et al. [14]. Our algorithm consists of two phases, as illustrated in Fig. 3: In phase I, all single elements of an array are sequentially stimulated in a random order with a staircase-like intensity profile (single element mode). The movement induced by the electrical stimulation strongly depends on the stimulation parameters (frequency, current, and pulse width). We chose to automatically increase the stimulation intensity (current and pulse width) step-wise for each element up to a threshold, such that the induced motion is recorded for varying intensities. The stimulation frequency is set to a fixed value. A cost function J(i,n) based on the observed steady-state joint angle recordings is calculated online after each stimulated element i and for each applied intensity level \(n \in \mathbb {Z}_{+}^{\ast }\). For each element i, the algorithm determines the minimal value
$$ L(i) = \min_{n}{J(i,n)} $$
of the cost function over all applied stimulation intensities. The result of phase I is the set of the four best elements, i.e., those with the lowest L. In phase II, element combinations with those elements are formed according to two heuristics and are stimulated in the same manner in an arbitrary order (combined element mode; cf. Fig. 3). Finally, the five VEs with the lowest cost function values over all applied stimulation intensities are suggested as suitable VEs, with corresponding stimulation intensities, for a given reference movement.
Course of action for the VE identification with the automatic search. First, the maximum tolerated stimulation intensity up of the individual patient is determined exemplarily by stimulating one array element manually. Then each element and element combination is stimulated sequentially with a staircase-like profile from zero to the pain threshold up (default step size for the applied normalized charge: 0.01; default step duration: 120 ms). A push-button was given to the patient so that he/she could stop the stimulation at any time. The next element (phase I)/element combination (phase II) is stimulated automatically after a break of 1.5 s (adjustable), or the stimulation is started by the patient pushing the button. During phase I, uncomfortable elements can be marked and are then excluded from phase II
The developed cost function J(i,n) for each element or element combination i is defined as follows:
$$ J(i,n) = \frac{100}{\sum_{j=1}^{M} g_{j}} \cdot \sum_{j=1}^{M} g_{j} \cdot \|\bar{a}_{j}(i,n) - a_{j,ref}\|. $$
The function J(i,n) is calculated separately for each of the applied stimulation intensity levels n (n=1…N). With a delay of 60 ms, the recorded joint angles are averaged over the remaining steady-state time interval after each increase of the stimulation intensity yielding \(\bar {a}_{j}(i,n)\). The measured and reference joint angles are normalized to the anatomical range of motion of the specific hand and finger joints. The resulting averages \(\bar {a}_{j}(i,n)\) are compared with the corresponding reference joint angles aj,ref for each considered joint j=1…M. The difference of each joint can be weighted individually with the weight gj.
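As a concrete illustration, a minimal Python rendering of the two displayed quantities J(i,n) and L(i) could look as follows; the function and variable names are our own, and all joint angles are assumed to be already normalized to the anatomical range of motion as described above.

import numpy as np

def cost_J(a_mean_norm, a_ref_norm, g):
    # Weighted absolute deviation of the averaged steady-state joint angles
    # from the reference posture, scaled to a 0-100 range (second equation above).
    a_mean_norm, a_ref_norm, g = map(np.asarray, (a_mean_norm, a_ref_norm, g))
    return 100.0 / g.sum() * np.sum(g * np.abs(a_mean_norm - a_ref_norm))

def best_value_L(J_over_levels):
    # L(i) = minimum of J(i, n) over the tested intensity levels n (first equation above).
    return min(J_over_levels)

# Phase I selection (sketch): costs[i] holds the list of J(i, n) recorded while
# element i was driven through the staircase intensity profile.
# best_four = sorted(costs, key=lambda i: best_value_L(costs[i]))[:4]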
For each desired motion, different reference joint angles and weights can be chosen. It is possible to adapt these values to individual patients. For the experiments conducted in this paper, reference angles have been extracted from recorded hand movements of five healthy volunteers during a grasp-and-release task. The desired movements and matching reference angles can be found in the "Procedure" section.
Two different heuristics were established to build candidate element combinations for phase II. In heuristic a), the element combinations for testing consisted of all eleven possible combinations of the four best single elements (cf. [12]). The maximum number of elements in a combination selected with heuristic a) is thus four. In heuristic b), the three best single elements are combined with their neighboring elements. Combinations of two elements (good element plus direct neighbor), three elements (good element plus two direct neighbors in a row), and four elements are considered. Combinations with four elements consist of one of the four best elements plus two directly neighboring elements and one direct neighbor of those elements. To limit the number of combinations generated with heuristic b), and thereby the required time of phase II, an additional selection criterion is applied. Combinations that contain neighboring elements with comparatively high cost function values are excluded. The cost function value of an element is counted as comparatively high if it is larger than the mean cost function value of all elements tested in phase I. The number of all tested combinations of phase II for one cost function is limited to 22, so eleven combinations are selected by each heuristic. For heuristic b), those element combinations are selected which have the lowest mean cost function value over the included elements.
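To make the candidate generation of phase II more tangible, the following Python sketch implements heuristic a) directly from the description above and a deliberately simplified version of heuristic b). The adjacency map neighbours, the dictionary cost of phase-I values, and the exact way neighbour groups are grown are our own assumptions and do not reproduce the implementation used in the experiments.

from itertools import combinations

def heuristic_a(best_four):
    # All 2-, 3-, and 4-element combinations of the four best single elements
    # of phase I: 6 + 4 + 1 = 11 candidate VEs.
    return [c for k in (2, 3, 4) for c in combinations(best_four, k)]

def heuristic_b(best_three, neighbours, cost, n_keep=11):
    # Simplified sketch: combine each of the three best elements (integer IDs)
    # with one to three of its direct neighbours, excluding neighbours whose
    # phase-I cost lies above the mean over all tested elements, and keep the
    # n_keep groups with the lowest mean element cost.
    mean_cost = sum(cost.values()) / len(cost)
    candidates = set()
    for e in best_three:
        good = [n for n in neighbours[e] if cost[n] <= mean_cost]
        for k in (1, 2, 3):
            for extra in combinations(good, k):
                candidates.add(tuple(sorted((e,) + extra)))
    ranked = sorted(candidates, key=lambda c: sum(cost[e] for e in c) / len(c))
    return ranked[:n_keep]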
At the end of phase II, the five best VEs of phase I and II are presented to the user (see Fig. 3). Additionally, an array map is displayed showing the distribution of the cost function of the single elements, in other words, the results of phase I. The user is allowed to reexamine the suggested VEs regarding their evoked movement. This manual phase can be necessary because we did not measure all DoF, and to guarantee that the evoked movement matches the expectations of the patient and the treating health professional. However, the users can also decide to trust the system's decision and simply accept the suggested VEs. Furthermore, new VEs can be built manually by combining elements if necessary. If more than one cost function is investigated, the last three steps of Fig. 3 are repeated for each function/reference posture.
Experimental setup
For the clinical validation, we utilized our hand neuroprosthesis consisting of five components, as shown in Fig. 4: The RehaMovePro stimulator with science adapter and demultiplexer (Hasomed GmbH, Germany) [23], two customized electrode arrays with separate counter electrodes, a modular inertial hand sensor system (HSS) for the paralyzed hand [24], a laptop with touchscreen, and an external push-button.
Experimental setup of the hand neuroprosthesis. The setup is exemplarily shown on the left arm
The demultiplexer supports up to 59 active elements and two counter elements. Therefore, one electrode array with 35 elements was designed and placed above the wrist and finger extensor muscles (array E), and one with 24 elements was placed above the finger flexors in the middle of the ventral side of the forearm (array F; cf. Fig. 4). The element size was 12x12 mm2 with a spacing of 2 mm. The elements themselves consisted of nine connected sub-elements with a size of 3.5x3.5 mm2 (see Fig. 5). In this way, we increased the flexibility of the elements and thereby the flexibility and comfort of the whole electrode array. A single hydro-gel layer (AG702 Stimulating Gel, Axelgaard Manufacturing Co., Ltd., USA) was used. The array electrodes were manufactured as flexible printed circuit boards (Würth Elektronik, Germany). In the experiments, the arrays were attached via the gel layers and fixed with a custom-made cuff, as seen in Fig. 5. Two oval counter electrodes (4x6.4 cm, ValuTrode, Axelgaard Manufacturing Co., Ltd., USA) were each placed at a distance of approximately 1 cm in the distal direction. Active element configurations could be changed at the stimulation frequency, which was necessary for the interpolation in the semi-automatic approach.
Picture of the utilized hand neuroprosthesis. a The HNP is displayed in action on a patient, showing the hand sensor system with four sensor strips on the fingers, base unit, and wireless sensor. The array and counter electrodes are placed beneath the arm cuff. Array E is displayed in detail in b
Stimulation was applied at 33 Hz as charge-balanced biphasic pulses of asymmetric shape (pulse width, current). Elements were activated asynchronously, one after another. Asynchronous stimulation in electrode arrays has been shown to be stable regarding discomfort [22] and to provide benefits in terms of fatigue and selectivity compared to synchronous stimulation [25]. The stimulation pulse intensity u equaled the normalized charge of the stimulation pulses. The charge itself is defined as the product of the current amplitude I and the pulse width pw (u = 0: I = 0 mA, pw = 10 μs; u = 1: I = 80 mA, pw = 500 μs). In our setup, I and pw were increased or decreased simultaneously while maintaining a constant ratio (please see [26] for details).
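For illustration only, the relation between the normalized intensity u and the pulse parameters can be sketched as a linear interpolation between the two end points given above; the exact charge-based parametrization of [26] may differ, so the sketch merely shows that current and pulse width are raised together rather than tuned independently.

def intensity_to_pulse(u):
    # Clamp u to [0, 1] and interpolate between (0 mA, 10 us) and (80 mA, 500 us);
    # this linear mapping is an assumption for illustration.
    u = min(max(u, 0.0), 1.0)
    current_mA = u * 80.0
    pulse_width_us = 10.0 + u * (500.0 - 10.0)
    charge_nC = current_mA * pulse_width_us   # mA * us = nC
    return current_mA, pulse_width_us, charge_nC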
To track the resulting motion of the paralyzed hand and fingers, we utilized our recently introduced inertial sensor network [24]. The HSS consisted of a base unit with USB connection on the hand back, a wireless inertial measurement unit (IMU) located on the dorsal side of the forearm, and up to five sensor strips for the five fingers. Each strip comprised three 9-D inertial sensors, one IMU placed on each finger segment. We refrained from embedding the HSS in a glove to allow for easy installation on paralyzed hands and to maintain the full sense of touch for the user. Instead, we attached individual sensor strips adhesively to the finger segments with skin-friendly adhesive tape and used a custom-made silicon mount for the base unit. The mounting of the HSS by another person takes approximately 2 min.
In the experiments, we used four sensor strips measuring the thumb (F1), index (F2), middle (F3), and ring finger (F4) (see Figs. 4 and 5). Joint angles were defined in line with the ISB recommendations [27] and estimated via sensor-fusion-based orientation estimation for each IMU and the calculation of relative quaternions between the connected segments (please see [24] and [28] for details). In total, we measured 19 joint angles at a sampling rate of 100 Hz: the extension (negative) and flexion (positive) angles of the wrist (α), the metacarpal-phalangeal joints (MCPα), the proximal interphalangeal joints (PIP), and the distal interphalangeal joints (DIP) of the fingers F2–F4, as well as the abduction angles of the wrist (β) and MCP joints (MCPβ), and the five joint angles of the thumb (F1). At the beginning of each measurement, the hand with the mounted HSS had to remain in a neutral pose for a few seconds, during which the headings of all sensor units were aligned. In patients, this pose could be taken up with the help of the health professional.
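A minimal sketch of how a single flexion angle can be obtained from two neighbouring segment orientations is given below; it assumes unit quaternions in [w, x, y, z] convention and a fixed flexion axis expressed in the proximal segment frame, and it does not reproduce the sensor fusion, calibration, and heading alignment steps of [24, 28].

import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion [w, x, y, z].
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mult(p, q):
    # Hamilton product p * q of two quaternions [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def flexion_angle_deg(q_proximal, q_distal, axis=(0.0, 1.0, 0.0)):
    # Relative quaternion of the distal segment w.r.t. the proximal one and its
    # twist angle about the assumed flexion axis (swing-twist decomposition).
    q_rel = quat_mult(quat_conj(np.asarray(q_proximal)), np.asarray(q_distal))
    w, v = q_rel[0], q_rel[1:]
    return np.degrees(2.0 * np.arctan2(np.dot(v, np.asarray(axis)), w))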
The (control) algorithms were initially developed in 'Matlab/Simulink' (The Mathworks Inc., USA) on a regular PC using a modified Linux ERT target [29]. The GUI was realized in Python and presented on a computer with touch display. During the measurements, the treating health professional was instructed to operate the hand neuroprosthesis via the GUI. An external push-button (PowerMate, Griffin Technology, USA) was given to the patient, such that she/he could interrupt the stimulation at any time.
Five sub-acute stroke patients (female=1, male=4, age 53 to 69 (59±6.5), 6–14 days after stroke (8.4±3.2), right-side paralyzed=3) with moderate to moderately severe hemiparesis of the upper extremity were included in the pilot study. Exclusion criteria were severe communication limitations, cognitive dysfunction, and no response to electrical stimulation at a comfortable stimulation level. The included patients had a modified Rankin Scale (mRS) score of 3–4 (3.2±0.4) and a muscle strength in the hand and forearm of 0–3 out of 5 (2.2±1.3) according to Janda [30]. All patients needed support for performing grasp-and-release tasks with the paralyzed hand.
In addition to the stroke patients as one user group, the second user group, the health professionals, was also included in the user-centered evaluation. This second user group consisted of n=5 (female=1) health professionals (physician=3, ergotherapist=1, medical student=1). Three of the five health professionals stated their age, which ranged between 33 and 45 years (39.3±4.9). None of the health professionals had previously used FES in treatment.
User-centered evaluation
Both quantitative and qualitative research methods were used to evaluate the functionality and acceptance of the tested array identification strategies as well as our hardware setup from the user's perspective. Therefore, interviews, questionnaires, and the thinking-aloud technique were utilized to gain a broad insight into the users' perception of the system and the identification methods. A new method can only be called successful if the technology is accepted by its users, here stroke patients and health professionals (therapists, physicians). A questionnaire for the patients was developed, tailored to our system and research questions. The questionnaire contained open-ended and closed questions to gain qualitative and quantitative data. Closed questions were mainly rated on five-point Likert scales from 1 (fully disagree) to 5 (fully agree). The questionnaire for the patients was conducted as an interview and covered sociodemographic data, the personal attitude to technology (measure of technology commitment, [31]), experience with technology in general, usage experience with the system (e.g. problems, understanding, motivation, safety, pain in dealing with the system), and the acceptance of the system via the Technology Acceptance Model (TAM) by Davis [32]. The TAM is one of the most widely used acceptance models. The acceptance and actual use of a technology can be explained in terms of the internal beliefs, attitudes, and intentions of the user, which are decisively influenced by the perceived usefulness and perceived ease of use of the technology.
The concurrent thinking-aloud method was used to gain direct information from the health professionals about their interaction with the system. With this method, the participating physicians and therapists are encouraged to verbalize their thoughts continuously while handling tasks with the system (cf. [33]). This should provide access to their thoughts, feelings, intentions, and expectations and reveal their perception of the actual system use [34].
Measurements were performed at the clinic for neurology with stroke unit and early rehabilitation at the Unfallkrankenhaus Berlin (Germany). If possible, the experiments were conducted with the patient sitting in a chair at a table. Otherwise, the measurements were performed with the patient in a comfortable upright position in their bed. During the identification procedures, the paralyzed forearm was positioned in an arm mount and patients were instructed to refrain from voluntary hand movements.
At the beginning of each experiment, the health professionals were familiarized with the thinking-aloud method. In order to gain information about the underlying reasons for their preferred identification method, the health professionals were instructed to express all their thoughts on the system and especially the identification method during the whole session. All sessions were recorded with an audio device. The health professionals had been familiarized with the usage of the HNP in previous workshops and experiments.
Both array identification methods, (A) the semi-automatic and (B) the automatic, were used one after another to find suitable stimulation positions for a grasp-and-release task. A suitable stimulation position was defined in accordance with [35] ("functional point"): the position/combination of elements of the VE in the array at which a sufficient strength of contraction can be generated in the target muscles with minimum overflow to non-synergistic muscles. The first method applied was always the semi-automatic search, as knowledge of the stimulation responses gained by the health professional during a preceding automatic search would have distorted the results regarding search time and positions.
In accordance with [13], three VEs needed to be identified for the grasp-and-release task evoking the following movements: (1) Hand and finger extension for a hand opening (VE1), (2) wrist stabilization (wrist extended, fingers in rest or flexed; VE2), and (3) functional finger flexion (VE3). The corresponding reference joint angles and weights for the cost function of the automatic search strategy are listed in Table 1. VE1 and VE2 were searched for in array E above the extensors. After a successful identification of these VEs, array F was tested to evoke finger flexion. It was possible to simultaneously stimulate array E for wrist stabilization, even with feedback control for the semi-automatic search. The identified positions, applied stimulation intensities, recorded joint angles, and the duration of each step of the identification procedures were saved for the subsequent analysis.
Table 1 Default joint-angle references for the cost function of three VEs
If successful, each identification procedure was followed by a grasping routine, which was performed with and without an object (wooden cube, 5 × 5 cm). The predefined stimulation sequence consisted of 4 s of stimulating a hand opening via VE1, then 4 s of stimulating the finger flexors (VE3) and the extensors for stabilizing the wrist (VE2; if identified), and ended with 4 s of hand opening (VE1). The patient was able to initialize and pause the stimulation sequence via the push-button. It was also possible for the patient to control the onset and offset of each stimulation phase via the push-button to synchronize it with his/her own voluntary effort, which was desired at this stage of the experiment. Thereby, different timings within the stimulation sequence (>4 s or <4 s) were possible. If the patient was unable to reach the object due to severe arm palsy, the therapist gave the object to the patient or the object was placed on the table next to the patient's hand. After each identification approach, the patients were asked about their experiences in a short interview, as outlined in Fig. 6, which summarizes the whole experimental procedure.
Overview of the experiment and user-centered evaluation. Methods A and B refer to the semi-automatic and the automatic search strategy, respectively
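The timing logic of the predefined stimulation sequence described above can be summarized in a few lines of Python; the sketch below is only our own illustration of the 4 s phases and the optional early advance via the push-button (the stimulate and button_pressed callbacks are hypothetical), not the implementation used in the experiments.

import time

# Each phase: (label, active VEs, default duration in seconds).
GRASP_SEQUENCE = [
    ("hand opening", ["VE1"], 4.0),
    ("grasp", ["VE3", "VE2"], 4.0),   # finger flexion plus wrist stabilization
    ("release", ["VE1"], 4.0),
]

def run_sequence(stimulate, button_pressed):
    # `stimulate(ves)` applies the given VEs for one stimulation period;
    # `button_pressed()` lets the patient end a phase early to synchronize
    # the pattern with his/her own voluntary effort.
    for label, ves, duration in GRASP_SEQUENCE:
        t0 = time.time()
        while time.time() - t0 < duration and not button_pressed():
            stimulate(ves)
            time.sleep(1.0 / 33.0)    # stimulation period at 33 Hz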
Quantitative: The subsequent data analysis of recorded joint angles, system parameters, and applied stimulation parameters was conducted with 'Matlab' (The Mathworks Inc., USA). Identified VEs were compared between methods and patients. The quantitative data from the interviews of the patients were analyzed with the statistics software 'SPSS Statistics' (Ver 22, IBM, USA) using methods of descriptive and nonparametric statistics. To examine whether the identification methods cause significant differences in the users' experience of the system and the stimulation effect, Wilcoxon signed-rank tests were performed. Furthermore, nonparametric correlation analyses according to Spearman were carried out to test whether the user experience is related to age as well as the date and severity of the stroke.
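The statistical tests named above can be sketched in Python with SciPy as follows; the rating and age values are invented for illustration only and are not the study data.

from scipy import stats

# Paired five-point ratings of the same five patients for the two methods
# (illustrative values only).
ratings_semi_auto = [4, 5, 3, 4, 5]
ratings_automatic = [3, 5, 4, 2, 4]

# Wilcoxon signed-rank test for a systematic difference between the methods.
w_stat, p_paired = stats.wilcoxon(ratings_semi_auto, ratings_automatic)

# Spearman rank correlation between the ratings and patient age (illustrative ages).
ages = [53, 56, 58, 64, 69]
rho, p_corr = stats.spearmanr(ratings_semi_auto, ages)

print(f"Wilcoxon p = {p_paired:.3f}, Spearman rho = {rho:.2f} (p = {p_corr:.3f})")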
Qualitative: To examine the feedback from the health professionals on the identification methods, a qualitative data analysis was performed using strategies of qualitative content analysis according to Mayring [36]. The audio material was transcribed according to Dresing and Pehl [37]. In order to answer the research questions, positive and negative feedback on the system and the identification methods, as well as positive and negative feedback on the stimulation outcome, was analyzed with the software 'MAXQDA' (Ver 12, VERBI GmbH, Germany).
Identified VEs
The identified VEs for all three desired movements in each patient are summarized for both search methods in Fig. 7. Details on the corresponding identified stimulation parameters are given in Table 2. In all five stroke patients, suitable VEs were identified to evoke the movement hand opening (VE1) with both the semi-automatic and the automatic approach. The identified VEs varied in the number and position of the active elements between the two methods. VE1 consisted on average of 4.2 elements for the semi-automatic search and of 1.6 elements for the automatic search (cf. Table 2). In general, the VEs identified with the semi-automatic search consisted of more active elements than the VEs identified with the automatic search for all three desired movements. For patients 2 and 4, in whom the locations of the chosen VEs differed between the two methods, the measured hand postures are illustrated in Fig. 7 for stimulating VE1 found with the automatic approach (top pictogram) and with the semi-automatic search (bottom pictogram). In both cases, the observed hand posture was similar, revealing only minor differences in the joint angles of the index finger. It should be noted that the VE for hand opening identified with the semi-automatic approach always utilized a VE model of circular shape.
Identified virtual electrodes with the two search methods for each patient. Each row represents the results of one patient for all desired movements, as indicated in the headline. Electrode array layouts are displayed in top view, as positioned on the forearm with the gel layer at the bottom. Different array layout orientations are due to the treatment of different arms: right arm for patients 1, 2, and 5; left arm for patients 4 and 5. Search results are marked in yellow for the semi-automatic search and in blue for the automatic search (top five VEs). The finally chosen VE for the automatic search is marked by a black frame. Elements that were identified with both search methods are colored in yellow and blue. For patients and movements where the results of the semi-automatic and the automatic search are located quite differently in the array, the evoked hand motion is depicted (see patients 2 and 3), with the measured hand segments colored in red. If an array layout is not given for a defined motion, it means that either no suitable VE was found for that motion (patients 2, 4, and 5) or that the patient could perform the movement on his/her own with the remaining hand function (patient 3)
Table 2 Details on the identified VEs and corresponding stimulation parameters for both search methods
In two patients (2 and 5), suitable VEs were identified to generate wrist stabilization (VE2) with the semi-automatic and the automatic approach. In patient 1, VE2 was identified with the semi-automatic search. During its stimulation in the subsequent grasp-and-release pattern, it turned out to hinder precise finger flexion and was turned off. Hence, a VE for wrist stabilization was not considered in the automatic search. Besides the extension of the wrist, the stimulation in the extensor array (E) often led to a small degree of finger extension, which hindered the grasping function.
The identification of a position for finger flexion (VE3) was successful in two patients (1 and 4, see Fig. 7). Patient 3 showed sufficient remaining finger flexion such that no FES support via the flexor array was necessary. The main reason for not finding a suitable VE3 in patients 2 and 5 was their low tolerance of the electrical stimulation in the flexor array. All patients reported the stimulation to be more unpleasant in the flexor array than in the extensor array. This resulted in a lower maximum stimulation intensity, as seen when comparing the intensities for VE1 and VE3 in Table 2, which was sometimes insufficient to evoke a strong finger flexion. Furthermore, the wrist flexion induced in parallel when stimulating the finger flexors was a problem that could not always be compensated for by stimulation of VE2. In line with the findings for VE1, the identified VEs (VE3) in patients 1 and 4 varied in the number and positions of the active elements between the two identification methods. For patient 4, the resulting hand posture is depicted in Fig. 7 when stimulating VE3 found with the automatic approach (top) and with the semi-automatic search (bottom). VE3 of the semi-automatic search led to less flexion in the wrist. It should be noted that the VE for finger flexion identified with the semi-automatic approach always utilized a VE model of rectangular shape (cf. Table 2).
The difference between the VEs identified with the two methods was further analyzed by considering the minimal cost function value L. For this, the cost function value was also calculated offline for the VEs of the semi-automatic approach. The time frames during which the saved VEs were stimulated in the identification process were determined and used for the calculation. The resulting cost function values of both methods are given in Table 3. To increase the interpretability of the results, illustrated scales for the references of VE1 and VE3 are provided in Fig. 8. For VE1 and VE2 in array E, there is no clear tendency that one method outperforms the other in terms of the cost function values. The differences were sometimes minor (patients 3 and 5). In patient 1, VE1 of the semi-automatic search was also stimulated during the automatic search, resulting in a different movement with a larger cost function value (10.4 vs. 1.8; cf. Table 3), which is why this combination was not chosen in that approach.
Cost function scale for the references for hand opening (VE1) and hand closing (VE3). Exemplary hand postures are depicted with corresponding cost function values J. For the cost function value "0", the defined reference postures are depicted for both scales, because a value of "0" means that the generated hand posture equals the reference exactly. The little finger is depicted in gray because it was not measured in the experiments and was therefore not included in the cost function. For better illustration, it was assigned the same joint angles as the ring finger
Table 3 Cost function values for each identified VE with each method for each patient
Grasp-and-release task
Three out of five patients were able to successfully perform the grasp-and-release task with a wooden cube at the end of the experiment (patients 1, 2, and 3). Figure 9 exemplarily shows the resulting joint angles and hand postures of patient 1 with the VEs from the semi-automatic search. The patient used the push-button to control the timing of the stimulation pattern. This patient was not able to hold the object without stimulation of the finger flexors (VE3). Patients 2 and 3 were sometimes able to grasp the object without FES support.
One grasp-and-release cycle with patient 1 using the VEs of the semi-automatic search. The applied stimulation intensities for hand opening (VE1) and grasping (VE3) are displayed in the first graph. Flexion/extension (α) and abduction (β) of the wrist are shown in the second graph in blue colors. Flexion/extension angles of the finger joints (MCPα, PIP, DIP) for fingers F2-F4 are plotted in green in the last three graphs. The measured hand posture including the thumb is visualized at discrete times during the three phases of the grasp-and-release cycle: hand opening, grasp of wooden cube, release. The little finger is depicted in gray, because it was not measured in the experiments. In the interest of a good visualization, the little finger was assigned the same joint angles as the ring finger
In the analysis, we also considered the search process itself. Exemplary results of the feedback control of the wrist angle during the semi-automatic search in the extensor array are depicted in Fig. 10. While the VE model position was changed, the global stimulation intensity u was adjusted automatically by the controller. As can be seen in this example, there were positions in the array where the same degree of wrist extension was achieved with a lower stimulation intensity. For the depicted patient, a position for hand opening (VE1) was chosen that needed a lower stimulation intensity than the other positions and led to a strong finger extension. Details on the search settings used in each patient are summarized in Table 4. All provided VE model shapes (circle, ellipse, and rectangle) were used during the search process, although not every shape was tested in every patient or every array. For the interpretation, it should be noted that the circular shape was selected by default when starting the identification procedure. The option of combining VEs at different locations into one active VE was not utilized. The feedback control was not applied in patient 4, because the tolerated level of stimulation was too low to allow closed-loop adaptation. The wrist stabilization via VE2 was only used in patient 5 during the search for VE3.
Fig. 10
Semi-automatic search in the extensor array with feedback control for patient 1. The feedback-controlled wrist extension/flexion angle α (blue) is displayed over time together with the applied global stimulation intensity u (black; actuating variable). The reference angle αref was set to 15∘ (black dotted line) and the tolerated error bound αbound (gray dotted line) was ±5∘. Errors smaller than ±αbound were set to zero at the input of the controller. In the displayed time frame, the location of the VE model was changed by the user. The resulting position of the VE model in the array (red circle) and the corresponding active elements, marked in yellow, are exemplarily shown for four points in time, together with the measured hand posture. The little finger is depicted in gray, because it was not measured in the experiments. In the interest of a good visualization, the little finger was assigned the same joint angles as the ring finger
Table 4 Used options in the semi-automatic search and automatic search for each patient
Regarding the experimental results of the automatic search, we noted that single elements were chosen more frequently than element combinations as the final VE. However, the average cost function value L of phase I of the algorithm was always larger than or almost equal to the average value of phase II, indicating that the applied heuristics worked sufficiently well. The option of defining customized VEs, which were not rated in the top five by the algorithm for a desired posture, was used once, as seen in Table 4.
Search duration
The donning of the HNP, including the stimulation electrodes and the inertial sensor network, took between 2 and 4 min. The average time needed for the VE search with each identification method is summarized in Fig. 11. The total duration of a search method included the initialization of the method (adjustment of parameters, ...), the search for VE1 and VE2 in the extensor array, and the search for VE3 in the flexor array. In our experiments, the total time of the semi-automatic method (7.5 min) was shorter than the time needed with the automatic method (10.3 min). The search in the extensor array took longer than the other periods of the search procedure, which was due to the larger number of elements in that array and the search for two different induced movements (VE1 and VE2).
Average duration for the VE search with both identification methods. The time required for the identification procedure is displayed for each period: (i) initialization of the sensors, stimulation thresholds, and for the semi-automatic search the estimation of the controller parameters (Init), (ii) search in the extensor array for VE1 and VE2, and (iii) search in the flexor array for VE3. The sum of all periods leads to the total time needed for each approach (last column)
Evaluation by patients
The survey of the patients after each identification method yielded similar answers for both methods. Patients were asked to rate the perceived pain for the identified VEs on a scale from 0 (= no pain) to 10 (= highest conceivable pain). When stimulating in the extensor array (E), average values of 1.2±0.45 were reported by the patients for both methods, indicating that only slight discomfort was perceived. For the stimulation of the flexor array (F), the semi-automatic approach led to higher pain values (3.8±3) compared to the automatic approach (2.8±3.5). However, this difference was not significant. In addition, after each identification method the patients were asked about their personal perception, such as anxiety and fun. They answered the questions on a five-stage Likert scale, as seen in the results in Fig. 12. Again, no significant differences were found between the two methods. When interpreting the results, it has to be kept in mind that the semi-automatic approach was the first method applied and often constituted the patients' very first FES treatment.
Patients' perception of the two search strategies. After each identification method, the patients ranked their personal perception in the experiment on a five-stage Likert scale: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree
The HNP was also assessed in general on a five-stage Likert scale. On average, the patients agreed that the HNP is comfortable (4.25) and to some extent enjoyable (3.5). The answers regarding technology acceptance were summarized in the dimensions intention of use (4.6), ease of use (4.9), and perceived benefit (5) according to the TAM.
Evaluation by health professionals
The qualitative analysis of the audio data from the health professionals revealed positive and negative comments regarding the identification methods and the HNP in general. One physician described the duration of both identification methods as relatively fast but rated the automatic method as faster than the semi-automatic method. For the automatic identification method, one physician also indicated liking the random order of the stimulation locations because of its relieving effect on the muscles. The health professionals made further positive comments that relate to both identification methods. In this context, the graphic visualization of the array electrodes and their attachment, the visualization of the current stimulation location in the GUI, and the system itself received positive feedback. The negative feedback only pertained to the usability of the GUI in both identification methods.
The health professionals gave both positive and negative feedback regarding the outcome of the FES. For the semi-automatic approach, nine statements with positive feedback on the stimulation outcome were recorded. The physicians were especially pleased with the stimulated wrist movement. In five instances overall, they rated the stimulation effect between "good" and "nearly perfect". Furthermore, in one test session the outcome for the index finger was assessed as effective, and in another test session the physician valued the stimulation outcome positively because of the holistic movement of all fingers. For the automatic approach, 20 positive comments on the stimulation outcome were counted. Four positive statements related to the overall stimulation outcome and three comments were explicitly about efficient electrode positions. Furthermore, the physicians and therapists noted in three statements each a good movement effect of (1) the thumb, (2) the index finger, and (3) the wrist, and commented positively on the stimulated hand opening. In two cases, they noted that the stimulated single electrodes were more efficient than the electrode combinations.
For the semi-automatic approach, four statements with negative feedback on the stimulation outcome were counted. The physicians described the movement of the fingers, and especially the thumb, as quite weak and stated that they saw only a small stimulation effect. For the automatic approach, seven statements with negative feedback on the stimulation outcome were found. In two patients, the physicians complained, as in the semi-automatic condition, about the missing stimulation effect on the thumb. In contrast, one participating physician observed the opposite effect in another patient and criticized that only the thumb and the middle finger showed stimulation effects. Further negative feedback related to the fact that the stimulation produced a non-physiological rotation of the wrist.
The participating health professionals gave further clinical indications, which can be used to optimize the system and its handling. Two statements were recorded on the importance and the difficulty of stimulating thumb movement. In this context, one physician suggested moving or enlarging the stimulation field (the coverage of the array) towards the hand.
We evaluated two array identification procedures with different degrees of user integration, which both aim to assist in finding suitable stimulation areas and stimulation parameters in a hand neuroprosthesis for the individual patient. The results in five sub-acute stroke patients showed that both identification methods (semi-automatic and fully automatic) yield suitable VEs for hand opening and, with limitations, for hand closing in patients who could tolerate the stimulation. To the best of our knowledge, this was the first time that array identification procedures were directly compared in a clinical setup and the user's perspective was considered systematically to improve the usability of future FES systems.
Comparison of VEs
The preferred VEs for hand opening, wrist stabilization, and finger flexion differed among patients, probably due to inter-individual physiological variability and slightly varying array locations on the individual forearm. This finding is in line with other studies (e.g. [10, 13]) and motivates the application of electrode arrays. It also indicates that identification methods need to be applied at least once for each array placed on the forearm of each patient. Furthermore, in half the cases of our measurements, the preferred VEs of the semi-automatic and automatic approaches were found at different areas of the arrays within one patient. We assume that several factors might contribute to this observation. When considering the hand postures generated with both methods (Fig. 7) and the corresponding cost function values of each VE (Table 3), similar values and postures can often be observed. This indicates that there exist several activation areas covered by the array which evoke similar functional movements. Popović-Maneski et al. [35] observed similar phenomena when they identified functional points in hemiplegic patients for a grasp-and-release task. One reason for the divergent results of the two identification methods is therefore the existence of multiple, equally good solutions in the search space. During the semi-automatic search, the clinician simply chose a different solution than the automatic algorithm. The term "optimal stimulation point", as used in other studies on VE identification on the forearm (e.g. [10, 16]), can thus be misleading.
Another reason for the diverging results of the two methods might be the patient-specific time-variance of the response to electrical stimulation. This assumption is supported by the cost function values of the VEs identified with the semi-automatic approach. For example, in patient 1, the identified VE1 of the semi-automatic approach was tested within both search procedures, so its cost function values can be compared. This revealed that VE1 matched the reference nearly perfectly (1.8) during the semi-automatic search, but during the automatic search the same VE yielded a higher cost function value (10.4), indicating a change in the patient's muscular responsiveness. The duration of the experiment with one identification procedure, including explanations, search, grasp-and-release routine, and the interview/questionnaire regarding the applied method, was approximately 25 min. After this time, the characteristics of the HNP and the patient's forearm, considered as one system, might have changed due to factors such as changes in the electrode-skin interface impedance, increased muscle tone, or muscle fatigue. This might have led to different VEs being selected in the second approach, the automatic search.
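To make the comparison of such values concrete, the sketch below shows one minimal way a posture cost could be computed, assuming it is a weighted sum of absolute deviations between measured and reference joint angles; the actual cost function, its weights, and the sensor interface of our system are defined in the Methods and are not reproduced here.

```python
# Illustrative sketch only (not the implementation used in the study):
# score a virtual electrode (VE) by comparing the stimulated hand posture,
# measured as joint angles, against a reference posture.

def posture_cost(measured_angles, reference_angles, weights=None):
    """Weighted sum of absolute joint-angle deviations (in degrees).

    measured_angles, reference_angles: dict joint name -> angle in degrees.
    weights: optional dict joint name -> relative importance (default 1.0).
    """
    if weights is None:
        weights = {joint: 1.0 for joint in reference_angles}
    return sum(
        weights[joint] * abs(measured_angles[joint] - reference_angles[joint])
        for joint in reference_angles
    )

# Example: a low cost value indicates a good match to the reference posture.
reference = {"wrist_ext": 20.0, "MCP2": 10.0, "PIP2": 5.0}
measured = {"wrist_ext": 18.0, "MCP2": 12.0, "PIP2": 4.0}
print(posture_cost(measured, reference))  # -> 5.0
```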
Besides the differences in the location of the identified VEs with both methods, the selected VEs also varied in the number of active elements, the shape those elements formed, and the applied stimulation intensity, as seen in Table 2. We assume that this is due to the design of the two methods. In the automatic approach, all single elements were tested and were thereby also available as VEs to choose from. During the semi-automatic search, the users preferred a VE model of larger size that activated several elements at the same time. In this way, possible activation zones in the array could be explored manually in a shorter time. This led to a higher number of elements included in the VEs of the semi-automatic search, 4.2 elements on average, compared to the automatic search with 1.6 elements. It remains an open question whether a smaller or larger number of active elements, and thereby a smaller or larger VE, is more beneficial in therapeutic treatment. The different array layouts used in research prevent a direct comparison with findings from other studies. Nevertheless, others identified VEs with a branched pattern [8, 35], which our results do not reflect. Both our methods allow building a branched active pattern, but the automatic approach requires less effort to do so.
Regarding the stimulation intensities, the values were similar for both identification methods, even though the number of activated elements differed considerably for hand opening (VE1; cf. Table 2). A reason for this might be the asynchronous stimulation of VEs with multiple elements. The applied intensities varied between arrays. Stimulation on the ventral side of the forearm was generally perceived as more painful, which was one of the reasons for not finding suitable VEs for grasping in all patients. Sometimes the tolerated intensity was not sufficient to evoke finger flexion strong enough for manipulating objects. Another reason was the simultaneously induced but unintended wrist flexion. Stabilizing the wrist by stimulating VE2 was often not possible because no stimulation point could be found in the extensor array that exclusively activated the wrist extensors. The finger extensors were excited as well, which would hinder a successful grasp. Closed-loop control of both VEs (VE2 and VE3) might be a future solution here, balancing the intensities and thereby the induced motions of both VEs [17, 38]. Yet, closed-loop control requires that patients tolerate sufficiently large stimulation intensities.
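To illustrate the idea of balancing two antagonistic VEs, the following sketch applies a simple proportional update to the two intensities based on a measured wrist angle. It is a conceptual example with assumed variable names, limits, and gains, not the control scheme of [17] or [38].

```python
# Conceptual sketch: proportional balancing of extensor (VE2) and flexor (VE3)
# stimulation intensities from wrist-angle feedback. All names, limits, and
# gains are illustrative assumptions.

def balance_intensities(wrist_angle_deg, target_deg, i_ext, i_flex,
                        gain=0.05, i_min=5.0, i_max=40.0):
    """Return updated (extensor, flexor) intensities in mA.

    With extension counted as positive, a positive error (wrist more flexed
    than the target) raises the extensor intensity and lowers the flexor
    intensity, and vice versa; both values are clamped to [i_min, i_max].
    """
    error = target_deg - wrist_angle_deg
    i_ext = min(max(i_ext + gain * error, i_min), i_max)
    i_flex = min(max(i_flex - gain * error, i_min), i_max)
    return i_ext, i_flex

# Example: the wrist is 15 degrees short of the neutral target, so the
# extensor intensity is raised slightly and the flexor intensity lowered.
print(balance_intensities(wrist_angle_deg=-15.0, target_deg=0.0,
                          i_ext=20.0, i_flex=20.0))  # -> (20.75, 19.25)
```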
Practicability analysis
The results regarding induced motion, as well as the clinicians' and patients' feedback, indicated no clear preference for either of the two methods. Neither of the identification methods was perceived as painful by the patients, with the fully automatic method rated slightly, but not significantly, better. This might be explained by the fact that the semi-automatic search was always the first method performed and the patients were not yet used to the sensation of the stimulation at that point. However, the patients felt safe, comfortable, and appropriately challenged during the experiments (cf. Fig. 12). Regarding the HNP, the comparatively poorer rating of the item "enjoyable" relative to the item "comfortable" at the end of the experiment could indicate that wearing comfort should be improved for prolonged use of the prosthesis.
The automatic approach received more positive but also more negative feedback than the semi-automatic approach from the health professionals. The higher number of comments on the automatic approach could be related to the longer duration of the procedure, during which the health professional was less involved. The evaluation of the health professionals' statements on the procedure further revealed that the visualization in the GUI played an important role in the acceptance of the methods. Uncertainties were identified regarding the current state of the system, its visualization, and its operation. A user-friendly GUI, tailored to the individual method and system, turned out to be essential for how the users perceived the methods. We therefore conclude that future algorithms should always be evaluated together with their operating interface for clinical use, even in early development stages, in order to assess their usability and duration in clinical practice [39].
The average time needed for the VE search was lower for the semi-automatic search than for the automatic strategy. With the donning of the HNP taking between 2 and 4 min and an average total identification time of 7.5 min for the semi-automatic search, a total setup time of about 10 min could be achieved for the FES-supported grasp-and-release task. Most published studies on VE identification did not provide detailed information on the time required for the search procedure in the conducted experiments (e.g. [9, 11, 13, 14]), although it is of high practical relevance. Furthermore, many studies have evaluated only the methodology itself under specific test conditions and not the actual clinical course of action, making it impossible to estimate the required search time. Bijelić et al. [8] with a manual search (push-button control box) and Popović and Popović [10] with an automatic approach reported a search time of approximately 5 min per 24-element electrode array, which would sum up to >12 min in our setup with 59 elements. Freeman [17] achieved approximately 8 min per posture when using a 40-element array and an iterative learning control approach. Counting hand opening and hand closing as one posture each, this would sum up to 16 min in our setup. According to [10], a duration of <10 min for the phase of electrode determination is within the level of tolerance for clinical applications. Compared to the mentioned methods, our results with the semi-automatic search for two movements (hand opening and grasping) are faster. However, additional time is needed if different grasp types (e.g. tip grasp) have to be identified and if an automatic adaptation to varying forearm postures (pronation/supination) is to be realized.
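For transparency, these extrapolations simply assume that the reported search times scale linearly with the number of elements or postures: $5\,\text{min}/24\ \text{elements} \times 59\ \text{elements} \approx 12.3\,\text{min}$ for the per-array estimates of [8, 10], and $8\,\text{min/posture} \times 2\ \text{postures} = 16\,\text{min}$ for the per-posture estimate of [17].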
In rehabilitation therapy, repetitive training with FES is best practice [40, 41]. Previous publications observed that the size and shape of individually identified VEs remained the same from day to day in the same patient if the electrode array was placed at the same forearm position (e.g. [10, 13]). In a recent study by Malešević et al. [7], which analyzed the temporal and spatial variability of surface motor activation zones in electrode arrays over 20 FES sessions in stroke patients, it was reported that changes in the VE configuration for wrist, finger, and thumb extension were required in each session for all patients. The authors concluded that an experimental (re-)calibration procedure is necessary for each therapy session. They suggested using the results of the previous session as a priori knowledge to reduce the search space in the following session(s). For this application scenario, we conclude that our semi-automatic identification approach would be a suitable tool to gradually modify stored VEs from previous sessions if necessary, which is an advantage over the suggested and other fully automatic approaches. For the future, we could also imagine a combination of both methods as a suitable approach for clinical practice: in the first session, the whole array is scanned with the automatic approach and the information is saved for the following sessions; then, the semi-automatic approach is used to individually modify the VEs in the regions of interest.
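A minimal sketch of this combined, session-based workflow is given below. All function names and the storage format are hypothetical placeholders; a real implementation would depend on the stimulation hardware and the GUI.

```python
# Hypothetical sketch of the proposed combined workflow across therapy sessions.
# The two search routines are placeholder stubs; in a real system they would
# drive the stimulator, read the sensor feedback, and involve the clinician.

def automatic_scan(patient_id):
    """Placeholder: exhaustive automatic scan of all array elements (session 1)."""
    return {"VE1": [12, 13], "VE2": [27], "VE3": [51]}  # made-up element indices

def semi_automatic_refine(patient_id, prior):
    """Placeholder: clinician-guided local adjustment of previously stored VEs."""
    return prior  # e.g. shift or resize VEs only where the response has changed

stored_ves = {}  # patient id -> VE configuration from the previous session

def identify_ves(patient_id):
    if patient_id not in stored_ves:
        ves = automatic_scan(patient_id)            # first session: full scan
    else:
        ves = semi_automatic_refine(patient_id, stored_ves[patient_id])
    stored_ves[patient_id] = ves                    # a priori knowledge for next time
    return ves

print(identify_ves("P1"))  # first session  -> automatic scan
print(identify_ves("P1"))  # later session  -> semi-automatic refinement
```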
Day-to-day changes in VEs could not be tested in the presented study, as the sub-acute stroke patients were measured only once; this is a major limitation of our results. Follow-up experiments would have been desirable but could not be performed due to the limited time the patients stayed in the neurology department. As most patients used FES for the first time, they were cautious regarding the stimulation intensity and unprepared for the pricking sensation of the stimulation. Nevertheless, the methods were tested at this early stage of the rehabilitation process because it has been suggested that the treatment of stroke patients should start as early as possible [42]. Array identification methods therefore need to prove suitable under these conditions.
Another limitation is that the identification methods were tested under restricted conditions, with the forearm lying in pronation on an arm mount. The grasp-and-release task following the identification phase included only one object and did not support the upper arm via FES. For patients with insufficient remaining voluntary activity in the upper arm, reaching for the object had to be supported manually by the caregiver. These restrictions were partly necessary to limit the length of the experiment and allow a direct comparison of the two methods. As a result, some important features and aspects of electrode array-based FES could not be investigated. Multiple publications have mentioned the need for dynamic VE relocation in the array during forearm movements, which occur in many activities of daily living (e.g. [8, 35, 43]). The rotation of the forearm between pronation and supination yields a relative shift between the active electrode position on the skin and the underlying tissue, changing which muscles or motor units are recruited. A real-time adaptation of the active VE in electrode arrays can compensate for the resulting changes in the generated hand movement, as suggested in [19] or as we presented in [44]. A clinical validation of this feature is an important aspect of future studies. In particular, the setup procedures and the identification duration required for VE identification in different forearm postures need to be assessed, as the latter will increase.
A further drawback of our assessment is that the suitability of the identified VEs was evaluated only in a simplified grasp-and-release task with one object. The applicability of the methods for identifying different grasp types and grasp strengths can therefore not be estimated. The results in patients 2 and 5 suggest that a re-design of the array electrodes may be needed.
We conclude that both presented array identification methods, semi-automatic and fully automatic, enable finding suitable VEs in our proposed hand neuroprosthesis. However, the resulting VEs differed between the two approaches in three of five patients. The semi-automatic approach should be preferred as the search strategy in arrays on the forearm. The observed search duration will decrease further when the system is applied repeatedly to the same patient, as only small position adjustments of the VEs are then required. Nevertheless, the search duration will increase significantly when different grasp types are to be generated or an adaptation to varying forearm conditions is to be realized. It remains to be seen whether this constraint precludes use in short exercise sessions or whether repeated day-to-day use of the system will speed up the identification significantly due to a priori knowledge.
Neither of the two methods was preferred over the other by the interviewed clinicians and patients regarding practicability, outcome, comfort, and fun. We therefore conclude that both levels of user integration should be provided in future FES systems, such that the applied method can be chosen individually based on the users' preferences and the application scenario. Our results underline the need for personalization of the search procedure as, for example, different VE model shapes were utilized during the semi-automatic search and closed-loop support was applied in some patients but not in all.
We found that the design of the GUI influences the acceptance of the methods in general. Furthermore, our results from the patient surveys regarding acceptance and engagement indicate that the motivation of patients at this stage of rehabilitation is particularly high. This observation encourages the application of FES-based neuroprostheses in early rehabilitation interventions. It follows that the hardware and software must also be evaluated clinically for this type of application. Based on these findings, we recommend incorporating end users in research and product development processes. Future studies in this area should include more detailed questions regarding setup time, handling of the equipment, and desired options for personalization in the algorithms.
For our future system, patient measurements over multiple sessions are necessary to review and possibly redesign the electrode array for generating finger flexion. An additional single surface electrode for stimulating the thumb (to support grasping) will be added for future measurements, as the health professionals complained about the lack of induced thumb movement in some patients. Furthermore, for patients without volitional muscle contractions in the upper arm, it is necessary to support the reaching motion as well. The system presented in the project RETRAINER [45] and the GO-SAIL system [46] are two examples in which an integration of lower and upper arm support was realized. As both presented array identification methods turned out to be suitable for finding VEs for grasp-and-release tasks, both methods, together with the forearm movement compensation presented in [44], could be integrated into a holistic system for a hand neuroprosthesis including a user-evaluated interface [21].
DIP:
Distal interphalangeal joint/angle
DoF(s):
Degree(s) of freedom
FES:
Functional electrical stimulation
F1:
Index finger
Ring finger
HNP:
Hand neuroprosthesis
HSS:
Hand sensor system
IMU:
Inertial measurement unit
MCP:
Metacarpal-phalangeal joint/angle
PIP:
Proximal interphalangeal joint/angle
TAM:
Technology acceptance model
VE(s):
Virtual electrode(s)
De Kroon JR, Van der Lee JH, IJzerman MJ, Lankhorst GJ. Therapeutic electrical stimulation to improve motor control and functional abilities of the upper extremity after stroke: a systematic review. Clin Rehabil. 2002; 16(4):350–60.
Ragnarsson KT. Functional electrical stimulation after spinal cord injury: current use, therapeutic effects and future directions. Spinal Cord. 2008; 46(4):255–74.
Micera S, Keller T, Lawrence M, Morari M, Popovic DB. Wearable neural prostheses. IEEE Eng Med Biol. 2010; 29(3):64–9.
Westerveld AJ, Schouten AC, Veltink PH, van der Kooij H. Selectivity and resolution of surface electrical stimulation for grasp and release. IEEE Trans Neural Syst Rehabil Eng. 2012; 20(1):94–101.
Koutsou AD, Moreno JC, del Ama AJ, Rocon E, Pons JL. Advances in selective activation of muscles for non-invasive motor neuroprostheses. J NeuroEng Rehabil. 2016; 13(1):56.
Lawrence M, Kirstein T, Keller T. Electrical simulation of the finger flexors using 'virtual electrodes' In: Bijak M, Mayr W, Pichler M, editors. Proc 8th Int Workshop on Functional Electrical Stimulation: 10-13 September 2004. Vienna: 2004. p. 191–4.
Malešević J, Štrbac M, Isaković M, Kojić V, Konstantinović L, Vidaković A, Dedijer Dujović S, Kostić M, Keller T. Temporal and spatial variability of surface motor activation zones in hemiplegic patients during functional electrical stimulation therapy sessions. Artif Organs. 2017; 41(11):E166–E177. https://doi.org/10.1111/aor.13057.
Bijelić G, Popović-Bijelić A, Jorgovanović N, Bojanić D, Popović DB. E actitrode: The new selective stimulation interface for functional movements in hemiplegics patients. Serb J Electr Eng. 2004; 1(3):21–8.
Crema A, Malešević N, Furfaro I, Raschellà F, Pedrocchi A, Micera S. A wearable multi-site system for nmes-based hand function restoration. IEEE Trans Neural Syst Rehabil Eng. 2018; 26:428–40.
Popović DB, Popović MB. Automatic determination of the optimal shape of a surface electrode: selective stimulation. J Neurosci Meth. 2009; 178:174–81.
O'Dwyer SB, O'Keeffe DT, Coote S, Lyons GM. An electrode configuration technique using an electrode matrix arrangement for fes-based upper arm rehabilitation systems. Med Eng Phys. 2006; 28(2):166–76.
Hoffmann U, Deinhofer M, Keller T. Automatic determination of parameters for multipad functional electrical stimulation: Application to hand opening and closing. In: Proc 34th IEEE EMBS Annu Int Conf: 28 August-1 September 2012. San Diego. New York: IEEE: 2012. p. 1859–63.
Malešević NM, Maneski LZ, Ilić V, Jorgovanović N, Bijelić G, Keller T, Popović DB. A multi-pad electrode based functional electrical stimulation system for restoration of grasp. J NeuroEng Rehabil. 2012; 9(1):66.
Schill O, Rupp R, Pylatiuk C, Schulz S, Reischl M. Automatic adaptation of a self-adhesive multi-electrode array for active wrist joint stabilization in tetraplegic sci individuals. In: Proc IEEE TIC-STH Int Conf: 26-27 September 2009. Toronto: 2009. p. 708–13.
De Marchis C, Monteiro TS, Simon-Martinez C, Conforto S, Gharabaghi A. Multi-contact functional electrical stimulation for hand opening: electrophysiologically driven identification of the optimal stimulation site. J NeuroEng Rehabil. 2016; 13(1):22.
Exell T, Freeman C, Meadmore K, Hughes A-M, Hallewell E, Burridge J. Optimisation of hand posture stimulation using an electrode array and iterative learning control. J Automatic Control. 2013; 21(1):1–5.
Freeman CT. Iterative Learning Control with Restricted Input Subspace for Electrode Array-based FES. In: P Amer Contr Conf (ACC). Portland: IEEE: 2014. p. 4243–8.
Jackson AE, Holt RJ, Culmer PR, Makower SG, Levesley MC, Richardson RC, Cozens JA, Williams MM, Bhakta BB. Dual robot system for upper limb rehabilitation after stroke: the design process. P I Mech Eng C-J Mec. 2007; 221(7):845–57.
Popović D, Malešević N, Keller T. Apparatus for External Activation of Paralyzed Body Parts by Stimulation of Peripheral Nerves. 2011;WO 2011/079866 A1 Fundación Fatronik (ES).
Salchow C, Valtin M, Seel T, Schauer T. A new semi-automatic approach to find suitable virtual electrodes in arrays using an interpolation strategy. Eur J Transl Myol. 2016; 26(2):6029. https://doi.org/10.4081/ejtm.2016.6029.
Jankowski N, Schönijahn L, Salchow C, Ivanova E, Wahl M. User-centred design as an important component of technological development. CDBME. 2017; 3(1):69–73.
Imatz Ojanguren E. Neuro-fuzzy modeling of multi-field surface neuroprostheses for hand grasp. Spain; 2016. http://hdl.handle.net/10810/19541.
Valtin M, Kociemba K, Behling C, Kuberski B, Becker S, Schauer T. RehaMovePro: A versatile mobile stimulation system for transcutaneous FES applications. Eur J Transl Myol. 2016; 26(3):6076. https://doi.org/10.4081/ejtm.2016.6076.
Valtin M, Salchow C, Seel T, Laidig D, Schauer T. Modular finger and hand motion capturing system based on inertial and magnetic sensors. CDBME. 2017; 3(1):19–23.
Popović-Maneski L, Malešević NM, Savić AM, Keller T, Popović DB. Surface-distributed low-frequency asynchronous stimulation delays fatigue of stimulated muscles. Muscle Nerve. 2013; 48(6):930–7.
Shalaby R. Development of an electromyography detection system for the control of functional electrical stimulation in neurological rehabilitation. 2011. PhD thesis, Technische Universität Berlin, Control Systems Group. https://doi.org/10.14279/depositonce-2904.
Wu G, van der Helm FC, Veeger HE, Makhsous M, Van Roy P, Anglin C, et al. ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion—part II: shoulder, elbow, wrist and hand. J Biomech. 2005; 38(5):981–92.
Seel T, Ruppin S. Eliminating the effect of magnetic disturbances on the inclination estimates of inertial sensors. IFAC-PapersOnLine. 2017; 50(1):8798–803. 20th IFAC World Congress.
Sojka M, Píša P. Usable Simulink embedded coder target for Linux. In: Proc 16th Real Time Linux Workshop: 12-13 October 2014. Dusseldorf: 2014.
Janda V. Manuelle Muskelfunktionsdiagnostik, 4th. München: Urban & Fischer; 2000.
Neyer FJ, Felber J, Gebhardt C. Development and validation of a brief measure of technology commitment. Diagnostica. 2012; 58(2):87–99. https://doi.org/10.1026/0012-1924/a000067.
Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quart. 1989; 13:319–40.
Nielsen J. Thinking Aloud: The #1 Usability Tool. https://www.nngroup.com/articles/thinking-aloud-the-1-usability-tool/. Accessed 29 Aug 2017.
Van Someren MW, Barnard YF, Sandberg J. The Think Aloud Method. A Practical Guide to Modelling Cognitive Processes. London: Academic Press; 1994.
Popović-Maneski L, Kostić M, Bijelić G, Keller T, Mitrović S, Konstantinović L, Popović DB. Multi-pad electrode for effective grasping: design. IEEE T Neur Sys Reh. 2013; 21(4):648–54.
Mayring P. Qualitative Inhaltsanalyse. Grundlagen und Techniken, 11th. Weinheim: Beltz; 2010.
Dresing T, Pehl T. Transkription In: Mey G, Mruck K, editors. Handbuch Qualitative Forschung in der Psychologie. Wiesbaden, Germany: Springer: 2010. p. 723–33.
Westerveld AJ, Kuck A, Schouten AC, Veltink PH, van der Kooij H. Grasp and release with surface functional electrical stimulation using a model predictive control approach. In: Proc IEEE 34th Annu Int Conf on Engineering in Medicine and Biology Society (EMBS): 28 August - 1 September 2012. San Diego, CA, USA. New York: IEEE: 2012. p. 333–6.
Forshaug AK, Sigurjónsson JB. User involvement in the design of health care devices In: Bingham G, Southee D, McCardle J, Kovacevic A, Bohemia E, Parkinson B, editors. Proc 17th Int Conf on Engineering and Product Design Education: 3–4 September. Glasgow: The Design Society: 2015. p. 226–31.
Conti GE, Schepens SL. Changes in hemiplegic grasp following distributed repetitive intervention: a case series. Occup Ther Int. 2009; 16(3-4):204–17.
Knecht S, Hesse S, Oster P. Rehabilitation nach Schlaganfall (ae: rehabilitation after stroke). Dtsch Arztebl. 2011; 108(36):600–6.
Popovic MR, Popovic DB, Keller T. Neuroprostheses for grasping. Neurol Res. 2002; 24(5):443–52.
Chen S-C, Yu C-H, Liu C-L, Kuo C-H, Hsu S-T. Design of surface electrode array applied for hand functional electrical stimulation in the variation of forearm gesture. In: 12th Annu Conf Int FES Society (IFESS). Philadelphia: 2007.
Salchow-Hömmen C, Thomas T, Valtin M, Schauer T. Automatic control of grasping strength for functional electrical stimulation in forearm movements via electrode arrays. AT-Autom. 2018. In press.
Ambrosini E, Ferrante S, Zajc J, Bulgheroni M, Baccinelli W, d'Amico E, et al. The combined action of a passive exoskeleton and an EMG-controlled neuroprosthesis for upper limb stroke rehabilitation: First results of the RETRAINER project. In: Proc IEEE Int Conf on Rehabilitation Robotics (ICORR): 17-20 July 2017. London, UK, vol. 2017. New York: IEEE: 2017. p. 56–61.
Kutlu M, Freeman C, Hughes AM, Spraggs M. A home-based FES system for upper-limb stroke rehabilitation with iterative learning control. IFAC-PapersOnLine. 2017; 50(1):12089–94. 20th IFAC World Congress.
We thank all participants for volunteering in the experiments, Würth Elektronik (Germany) for manufacturing the electrode arrays, and Axelgaard Manufacturing Co. (Ltd., USA) for contributing the surface electrodes and gel layers.
As part of the research project BeMobil, this work was supported by the German Federal Ministry of Education and Research (FKZ16SV7069K). We acknowledge support by the Open Access Publication Funds of the Technische Universität Berlin.
The datasets are available from the corresponding author on reasonable request.
Control Systems Group, Technische Universität Berlin, Einsteinufer 17, Berlin, 10587, Germany
Christina Salchow-Hömmen, Markus Valtin & Thomas Schauer
Institut für Rehabilitationswissenschaften, Humboldt Universität zu Berlin, Unter den Linden 6, Berlin, 10099, Germany
Natalie Jankowski & Laura Schönijahn
Klinik für Neurologie mit Stroke Unit und Frührehabilitation, Unfallkrankenhaus Berlin, Warener Str. 7, Berlin, 12683, Germany
Sebastian Böttcher & Frank Dähne
Christina Salchow-Hömmen
Natalie Jankowski
Markus Valtin
Laura Schönijahn
Sebastian Böttcher
Frank Dähne
Thomas Schauer
CS, MV and TS developed the identification methods. CS, NJ, LS, SB and FD participated in the design of the user-centered evaluation. CS and MV provided the experimental setup. CS, NJ, LS and SB conducted the experiments and user surveys. CS, NJ and LS processed and analyzed the experimental results. CS, NJ and TS were responsible for writing the manuscript. All authors read and approved the final manuscript.
Correspondence to Christina Salchow-Hömmen.
The experimental study has been carried out after the formal approval of the local ethical committee (Berlin Chamber of Physicians, Eth-25/15). Written informed consent was obtained from each participant before the session.
Salchow-Hömmen, C., Jankowski, N., Valtin, M. et al. User-centered practicability analysis of two identification strategies in electrode arrays for FES induced hand motion in early stroke rehabilitation. J NeuroEngineering Rehabil 15, 123 (2018). https://doi.org/10.1186/s12984-018-0460-1
Electrode array
Virtual electrodes
Hand rehabilitation
User-centered design | CommonCrawl |
Curvature of Space and Time, with an Introduction to Geometric Analysis
Curvature of Space and Time, with an Introduction to Geometric Analysis is an undergraduate-level textbook for mathematics and physics students on differential geometry, focusing on applications to general relativity. It was written by Iva Stavrov, based on a course she taught at the 2013 Park City Mathematics Institute and subsequently at Lewis & Clark College,[1][2] and was published in 2020 by the American Mathematical Society, as part of their Student Mathematical Library book series.[1]
Topics
Curvature of Space and Time is arranged into five chapters with 14 sections in total, with each section covering a single lecture's worth of material.[1] Its topics are covered both mathematically and historically, with reference to the original source material of Bernhard Riemann and others.[3] However, it deliberately avoids some topics from differential topology that have traditionally been covered in differential geometry courses, including abstract manifolds and tangent vectors.[2] Instead, it approaches the subject through coordinate-based geometry, emphasizing quantities that are invariant under changes of coordinates. Its goals include both providing a shortened path for students to reach an understanding of Einstein's mathematics, and promoting curvature as a central way of describing shape and geometry.[4]
The first chapter defines Riemannian manifolds as embedded subsets of Euclidean spaces rather than as abstract spaces. It uses Christoffel symbols to formulate differential equations having the geodesics as their solutions,[1] and describes the Koszul formula and the energy functional.[3] Examples include the Euclidean metric, spherical geometry, projective geometry, and the Poincaré half-plane model of the hyperbolic plane.[1][2] Chapter 2 includes vector fields, gradients, divergence,[2] directional derivatives, tensor calculus,[1] Lie brackets,[3] Green's identities, the maximum principle, and the Levi-Civita connection.[2] It begins a discussion of curvature and the Riemann curvature tensor that is continued into Chapter 3,[1][3] "the heart of the book",[4] whose topics include Jacobi fields, Ricci curvature, scalar curvature,[2] Myers's theorem, the Bishop–Gromov inequality, and parallel transport.[4]
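For context, the geodesic equations referred to here are, in their standard coordinate form (not necessarily the book's exact notation), the second-order differential equations $\ddot{x}^k + \Gamma^k_{ij}\,\dot{x}^i \dot{x}^j = 0$, where the Christoffel symbols $\Gamma^k_{ij}$ are determined by the metric and the dots denote derivatives with respect to the curve parameter.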
After these mathematical preliminaries, the final two chapters are more physical, with the fourth chapter concerning special relativity, general relativity, the Schwarzschild metric,[1] and Kruskal–Szekeres coordinates.[3] Topics in the final chapter include geometric analysis, Poisson's equation for the potential fields of charge distributions, and mass in general relativity.[1]
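In its standard form (again, not necessarily the book's notation), Poisson's equation for the potential $\varphi$ of a charge density $\rho$ reads $\Delta\varphi = -\rho/\varepsilon_0$ in SI units, or $\Delta\varphi = -4\pi\rho$ in Gaussian units; the analogous Newtonian equation $\Delta\varphi = 4\pi G\rho$ relates a mass density to its gravitational potential.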
Audience and reception
As is usual for a textbook, Curvature of Space and Time has exercises that extend the coverage of its topics and make it suitable as the text for undergraduate courses. Although there are multiple undergraduate-level textbooks on differential geometry, they have generally taken an abstract mathematical view of the subject, and at the time of publication of Curvature of Space and Time, courses based on this material had somewhat fallen out of fashion. This book is unusual in taking a more direct approach to the parts of the subject that are most relevant to physics. However, although it attempts to cover this material in a self-contained way, reviewer Mark Hunacek warns that it may be too advanced for typical mathematics students, and perhaps better reserved for honors students as well as "mathematically sophisticated physics majors". He also suggests the book as an introduction to the area for researchers in other topics.[1]
Reviewer Hans-Bert Rademacher calls this a "remarkable book", with "excellent motivations and insights", but suggests it as a supplement to standard texts and courses rather than as the main basis for teaching this material.[2] And although finding fault with a few details, reviewer Justin Corvino suggests that, with faculty guidance over these rough spots, the book would be suitable both for independent study or an advanced topics course, and "required reading" for students enthusiastic about learning the mathematics behind Einstein's theories.[4]
References
1. Hunacek, Mark (October 2021), "Review of Curvature of Space and Time", MAA Reviews, Mathematical Association of America
2. Rademacher, Hans-Bert, "Review of Curvature of Space and Time", zbMATH, Zbl 1472.83001
3. Suceavă, Bogdan D. (July 2021), "Review of Curvature of Space and Time", The Mathematical Intelligencer, doi:10.1007/s00283-021-10108-3, S2CID 253818213
4. Corvino, Justin (September 2021), "Review of Curvature of Space and Time", The American Mathematical Monthly, 128 (8): 764–768, doi:10.1080/00029890.2021.1945378, S2CID 237609917
Snub trioctagonal tiling
In geometry, the order-3 snub octagonal tiling is a semiregular tiling of the hyperbolic plane. There are four triangles and one octagon on each vertex. It has Schläfli symbol sr{8,3}.
Poincaré disk model of the hyperbolic plane
Type: Hyperbolic uniform tiling
Vertex configuration: 3.3.3.3.8
Schläfli symbol: sr{8,3} or $s{\begin{Bmatrix}8\\3\end{Bmatrix}}$
Wythoff symbol: | 8 3 2
Symmetry group: [8,3]+, (832)
Dual: Order-8-3 floret pentagonal tiling
Properties: Vertex-transitive, chiral
Images
Drawn in chiral pairs, with edges missing between black triangles.
Related polyhedra and tilings
This semiregular tiling is a member of a sequence of snubbed polyhedra and tilings with vertex figure (3.3.3.3.n) and the corresponding Coxeter–Dynkin diagram. These figures and their duals have (n32) rotational symmetry, lying in the Euclidean plane for n = 6 and in the hyperbolic plane for any higher n. The series can be considered to begin with n = 2, with one set of faces degenerated into digons.
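The trichotomy follows from the standard triangle-group criterion: the (n32) symmetry is spherical, Euclidean, or hyperbolic according to whether $\tfrac{1}{n} + \tfrac{1}{3} + \tfrac{1}{2}$ is greater than, equal to, or less than 1. For n = 6 the sum equals 1 (Euclidean plane), while for n > 6 it is less than 1 (hyperbolic plane).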
n32 symmetry mutations of snub tilings: 3.3.3.3.n
Symmetry (n32): 232, 332, 432, 532 (spherical); 632 (Euclidean); 732, 832 (compact hyperbolic); ∞32 (paracompact)
Snub figure configurations: 3.3.3.3.2, 3.3.3.3.3, 3.3.3.3.4, 3.3.3.3.5, 3.3.3.3.6, 3.3.3.3.7, 3.3.3.3.8, 3.3.3.3.∞
Gyro figure configurations (duals): V3.3.3.3.2, V3.3.3.3.3, V3.3.3.3.4, V3.3.3.3.5, V3.3.3.3.6, V3.3.3.3.7, V3.3.3.3.8, V3.3.3.3.∞
From a Wythoff construction there are ten hyperbolic uniform tilings that can be based on the regular octagonal tiling.
Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 10 forms.
Uniform octagonal/triangular tilings
Symmetry groups: [8,3], (*832); [8,3]+, (832); [1+,8,3], (*443); [8,3+], (3*4)
Uniform tilings: {8,3}, t{8,3}, r{8,3}, t{3,8}, {3,8}, rr{8,3}, s2{3,8}, tr{8,3}, sr{8,3}, h{8,3}, h2{8,3}, s{3,8}
Uniform duals: V83, V3.16.16, V3.8.3.8, V6.6.8, V38, V3.4.8.4, V4.6.16, V34.8, V(3.4)3, V8.6.6, V35.4
References
• John H. Conway, Heidi Burgiel, Chaim Goodman-Strass, The Symmetries of Things 2008, ISBN 978-1-56881-220-5 (Chapter 19, The Hyperbolic Archimedean Tessellations)
• "Chapter 10: Regular honeycombs in hyperbolic space". The Beauty of Geometry: Twelve Essays. Dover Publications. 1999. ISBN 0-486-40919-8. LCCN 99035678.
See also
• Snub hexagonal tiling
• Floret pentagonal tiling
• Order-3 heptagonal tiling
• Tilings of regular polygons
• List of uniform planar tilings
• Kagome lattice
External links
• Weisstein, Eric W. "Hyperbolic tiling". MathWorld.
• Weisstein, Eric W. "Poincaré hyperbolic disk". MathWorld.
• Hyperbolic and Spherical Tiling Gallery
• KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
• Hyperbolic Planar Tessellations, Don Hatch
PAFway: pairwise associations between functional annotations in biological networks and pathways
Mahjoub, M. & Ezer, D., 6 Dec 2019, (Submitted).
Research output: Working paper
Situation Coverage Testing for a Simulated Autonomous Car -- an Initial Case Study
Hawkins, H. R. & Alexander, R., 15 Nov 2019, arXiv.
Applying the three core concepts of economic evaluation in health to education in the UK
Hinde, S., Walker, S. M. & Lortie-Forgues, H., 12 Nov 2019, York, UK: Centre for Health Economics, University of York, 16 p. (CHE Research Paper; no. 170).
Research output: Working paper › Discussion paper
The economic and energy impacts of a UK export shock: comparing alternative modelling approaches
Allan, G., Barrett, J., Brockway, P., Sakai, M., Hardt, L., McGregor, P. G., Ross, A. G., Roy, G., Swales, J. K. & Turner, K., 4 Sep 2019, UK: UKERC, 89 p.
Drivers of Health Care Expenditure
Mason, A., Rodriguez Santana, I. D. L. N., Aragon Aragon, M. J. M., Rice, N., Chalkley, M. J., Wittenberg, R. & Fernandez, J-L., Aug 2019, York, UK: Centre for Health Economics, University of York, p. 1-56, 57 p. (CHE Research Paper; no. 169).
The causal effect of hospital volume on health gains from hip replacement surgery
Rachet Jacquet, L., Gutacker, N. & Siciliani, L., Aug 2019, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 168).
Is an ounce of prevention worth a pound of cure? Estimates of the impact of English public health grant on mortality and morbidity
Martin, S., Lomas, J. R. S. & Grant, U. O. Y., Jul 2019, York, UK: Centre for Health Economics, University of York, 35 p. (CHE Research Paper; no. 166).
The effect of government contracting with faith-based health care providers in Malawi
Tafesse, W., Manthalu, G. & Chalkley, M. J., Jul 2019, York, UK: Centre for Health Economics, University of York, 54 p. (CHE Research Paper; no. 167).
Fitness differences suppress the number of mating types in evolving isogamous species
Krumbeck, Y., Constable, G. W. A. & Rogers, T., 17 Jun 2019, arXiv.
Economic Analysis for Health Benefits Package Design
Love-Koh, J., Walker, S. M., Kataika, E., Sibandze, S., Arnold, M., Ochalek, J. M., Griffin, S., Revill, P. & Sculpher, M. J., 11 Jun 2019, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 165).
Unifying Semantic Foundations for Automated Verification Tools in Isabelle/UTP
Foster, S., Baxter, J., Cavalcanti, A., Woodcock, J. & Zeyda, F., 14 May 2019, (arXiv).
The impact of primary care incentive schemes on care home placements for people with dementia
Kasteridis, P., Liu, D., Mason, A., Goddard, M. K., Jacobs, R., Wittenberg, R. & Howdon, D. D. H., May 2019, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 164).
Plasma Wakefield Accelerator Research 2019 - 2040: A community-driven UK roadmap compiled by the Plasma Wakefield Accelerator Steering Committee (PWASC)
Hidding, B., Hooker, S., Jamison, S., Muratori, B., Murphy, C., Najmudin, Z., Pattathil, R., Sarri, G., Streeter, M., Welsch, C., Wing, M. & Xia, G., 19 Apr 2019, 38 p.
Productivity of the English National Health Service: 2016/17 update
Castelli, A., Chalkley, M. J., Gaughan, J. M., Pace, M. L. & Rodriguez Santana, I. D. L. N., Apr 2019, York, UK: Centre for Health Economics, University of York, 76 p. (CHE Research Paper; no. 163).
Effects of market structure and patient choice on hospital quality for planned patients
Moscelli, G., Gravelle, H. S. E. & Siciliani, L., Mar 2019, York, UK: Centre for Health Economics, University of York, 56 p. (CHE Research paper; no. 162).
Assessing health opportunity costs for the Indian health care systems
Ochalek, J. M., Asaria, M., Chuar, P. F., Lomas, J. R. S., Mazumdar, S. & Claxton, K. P., Feb 2019, UK: Centre for Health Economics, University of York, p. 1-29, 29 p. (CHE Research Paper; no. 161).
Incorporating concerns for equity into health resource allocation: A guide for practitioners
Love-Koh, J., Griffin, S., Kataika, E., Revill, P., Sibandze, S. & Walker, S. M., 21 Jan 2019, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 160).
Formal Methods in Dependable Systems Engineering: A Survey of Professionals from Europe and North America
Gleirscher, M. & Marmsoler, D., 2019, (Submitted) (arXiv).
Optimal investment and contingent claim valuation with exponential disutility under proportional transaction costs
Roux, A., 2019, (Unpublished).
System Safety Practice: An Interrogation of Practitioners about Their Activities, Challenges, and Views with a Focus on the European Region
Gleirscher, M. & Nyokabi, A., 2019, (Submitted) (arXiv).
Recommendations for the development of a health sector resource allocation formula in Malawi
McGuire, F., Revill, P., Twea, P., Mohan, S., Manthalu, G. & Smith, P. C., Dec 2018, York, UK: Centre for Health Economics, University of York, 40 p. (CHE Research Paper; no. 159).
A Substrate-Independent Framework to Characterise Reservoir Computers
Dale, M., Miller, J. F., Stepney, S. & Trefzer, M. A., 16 Oct 2018, arXiv, 19 p.
Estimating the marginal productivity of the English National Health Service from 2003/04 to 2012/13
Lomas, J. R. S., Martin, S. & Claxton, K. P., Oct 2018, York, UK: Centre for Health Economics, University of York, 127 p. (CHE Research Paper; no. 158).
Quasitriangular coideal subalgebras of $U_q(\mathfrak{g})$ in terms of generalized Satake diagrams
Regelskis, V. & Vlaar, B., 6 Jul 2018.
The determinants of health care expenditure growth
Rice, N. & Aragon Aragon, M. J. M., Jul 2018, York, UK: Centre for Health Economics, University of York, 48 p. (CHE Research Paper; no. 156).
Who benefits from universal child care? Estimating marginal returns to early child care attendance
Cornelissen, T., Dustmann, C., Raute, A. & Schönberg, U., 1 Jun 2018, CReaM (Centre for Research & Analysis of Migration).
Cost, context and decisions in Health Economics and cost-effectiveness analysis
Culyer, A. J., Jun 2018, York, UK: Centre for Health Economics, University of York, p. 1-17, 17 p. (CHE Research Paper; no. 154).
Setting research priorities in Global Health: appraising the value of evidence generation activities to support decision-making in health care
Woods, B. S., Rothery, C., Revill, P., Hallett, T. & Phillips, A., Jun 2018, York: Centre for Health Economics, University of York, p. 1-54, 55 p. (CHE Research Paper; no. 155).
An Evaluation of Classification and Outlier Detection Algorithms
Hodge, V. J. & Austin, J., 2 May 2018, 4 p.
The genome of Ectocarpus subulatus highlights unique mechanisms for stress tolerance in brown algae
Dittami et al., 25 Apr 2018.
Stateful-Failure Reactive Designs in Isabelle/UTP
Foster, S. D., Baxter, J. E., Cavalcanti, A. L. C. & Woodcock, JAMES. C. P., 17 Apr 2018, (Unpublished).
Vertex Representations for Yangians of Kac-Moody algebras
Guay, N., Regelskis, V. & Wendlandt, C., 11 Apr 2018.
Generalised Reactive Processes in Isabelle/UTP
Foster, S. D. & Canham, S. J., 6 Apr 2018, (Unpublished) 56 p.
Reactive Designs in Isabelle/UTP
Foster, S. D., Baxter, J. E., Cavalcanti, A. L. C., Woodcock, JAMES. C. P. & Canham, S. J., 6 Apr 2018, (Unpublished) 108 p.
Theory of Designs in Isabelle/UTP
Foster, S. D., Nemouchi, Y. & Zeyda, F., 6 Apr 2018, (Unpublished) 47 p.
Kleene Algebra in Unifying Theories of Programming
Foster, S. D., 5 Apr 2018, (Unpublished) 6 p.
Isabelle/UTP: Mechanised Theory Engineering for the UTP
Foster, S. D., Zeyda, F., Nemouchi, Y., De Oliveira Salazar Ribeiro, P. F. & Wolff, B., 4 Apr 2018, (Unpublished) 162 p.
Accounting for the quality of NHS output
Bojke, C., Castelli, A., Grasic, K., Mason, A. & Street, A. D., Apr 2018, York, UK: Centre for Health Economics, University of York, p. 1-62, 62 p. (CHE Research Paper; no. 153).
Compositional Assume-Guarantee Reasoning of Control Law Diagrams using UTP
Ye, K., Foster, S. D. & Woodcock, JAMES. C. P., Apr 2018, (Unpublished) 202 p.
Castelli, A., Chalkley, M. J. & Rodriguez Santana, I. D. L. N., Apr 2018, York, UK: Centre for Health Economics, University of York, p. 1-78, 78 p. (CHE Research Paper; no. 152).
Bounded orbits of Diagonalizable Flows on finite volume quotients of products of SL2(R)
An, J., Ghosh, A., Guan, L. & Ly, T., 20 Mar 2018.
An optimal bound for the ratio between ordinary and uniform exponents of Diophantine approximation
Marnat, A. & Moshchevitin, N., 8 Feb 2018.
Smoking Inequality across Genders and Socioeconomic Classes. Evidence from Longitudinal Italian Data
Di Novi, C., Jacobs, R. & Migheli, M., 1 Feb 2018, Pavia, Italy: Università di Pavia, 24 p. ( DEM Working Paper Series; vol. 02-18, no. 152).
Spatial Competition and Quality: Evidence from the English Family Doctor Market
Gravelle, H. S. E., Liu, D., Propper, C. & Santos, R., Feb 2018, York, UK: Centre for Health Economics, University of York, 38 p. (CHE Research Paper; no. 151).
Poverty and Wellbeing Impacts of Microfinance: What Do We Know?
Maitrot, M. R. L. & Niño-Zarazúa, M., 29 Nov 2017, 39 p.
Win Prediction in Esports: Mixed-Rank Match Prediction in Multi-player Online Battle Arena Games
Hodge, V. J., Devlin, S. M., Sephton, N. J., Block, F. O., Drachen, A. & Cowling, P. I., 17 Nov 2017, 1711.06498 ed., arXiv, 7 p.
Does hospital competition improve efficiency? The effect of the patient choice reform in England
Longo, F., Siciliani, L., Moscelli, G. & Gravelle, H. S. E., Nov 2017, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 149).
Pricing implications of non-marginal budgetary impacts in health technology assessment: a conceptual model
Howdon, D. D. H. & Lomas, J. R. S., Nov 2017, York, UK: Centre for Health Economics, University of York, p. 1-17, 17 p. (CHE Research Paper; no. 148).
Scoping review on social care economic evaluation methods
Weatherly, H. L. A., Neves De Faria, R. I., van den Berg, B., Sculpher, M. J., O'Neill, P., Nolan, K., Glanville, J., Isojarvi, J., Baragula, E. & Edwards, M., Nov 2017, York UK: Centre for Health Economics, p. 1-53, 53 p. (CHE Research Paper; no. 150).
Nested algebraic Bethe ansatz for open spin chains with even twisted Yangian symmetry
Gerrard, A., MacKay, N. & Regelskis, V., 23 Oct 2017, 29 p.
Reference Pulse Attack on Continuous-Variable Quantum Key Distribution with Local Local Oscillator
Ren, S., Kumar, R., Wonfor, A., Tang, X., Penty, R. & White, I., 29 Sep 2017.
Modelling the Dynamics of a Public Health Care System: Evidence from Time-Series Data
Iacone, F., Martin, S., Siciliani, L. & Smith, P. C., Sep 2017, York, UK: Centre for Health Economics, University of York, 31 p. (CHE Research Paper; no. 29).
Towards atomically precise manipulation of 2D nanostructures in the electron microscope
Susi, T., Kepaptsoglou, D., Lin, Y-C., Ramasse, Q. M., Meyer, J. C., Suenaga, K. & Kotakoski, J., 29 Aug 2017, (arXiv).
Health care costs in the English NHS: reference tables for average annual NHS spend by age, sex and deprivation group
Asaria, M., Apr 2017, York, UK: Centre for Health Economics, University of York, p. 1-25, 25 p. (CHE Research Paper; no. 147).
Productivity of the English NHS: 2014/15 Update
Bojke, C., Castelli, A., Grasic, K., Howdon, D. D. H., Street, A. D. & Rodriguez Santana, I. D. L. N., Apr 2017, York, UK: Centre for Health Economics, University of York, p. 1-81, 81 p. (CHE Research Paper; no. 146).
Public financial management and health service delivery: A literature review
Goryakin, Y., Revill, P., Mirelman, A., Sweeney, R., Ochalek, J. M. & Suhrcke, M. E., Apr 2017, London, 43 p. (Overseas Development Institute Report).
The Hausdorff and dynamical dimensions of self-affine sponges: a dimension gap result
Simmons, D. S. & Das, T., 25 Mar 2017, (Accepted/In press) 42 p.
Generalised additive mixed models for dynamic analysis in linguistics: a practical introduction
Sóskuthy, M., 15 Mar 2017.
Do hospitals respond to rivals' quality and efficiency? A spatial econometrics approach
Longo, F., Siciliani, L., Gravelle, H. S. E. & Santos, R., Mar 2017, York, UK: Centre for Health Economics, University of York, p. 1-45, 45 p. (CHE Research Paper; no. 144).
The effect of hospital ownership on quality of care: evidence from England.
Moscelli, G., Gravelle, H. S. E., Siciliani, L. & Gutacker, N., Mar 2017, Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 145).
First degree cohomology of Specht modules and extensions of symmetric powers
Donkin, S. & Geranios, H., 28 Feb 2017, (Submitted) 94 p.
A measure theoretic result for approximation by Delone sets
Baake, M. & Haynes, A., 16 Feb 2017, 6 p.
The economics of health inequality in the English NHS: the long view
Asaria, M., 6 Feb 2017, Centre for Health Economics, University of York: Centre for Health Economics, University of York, p. 1-13, 14 p. (CHE Research Paper; no. 142).
First do no harm –: The impact of financial incentives on dental x-rays
Chalkley, M. J. & Listl, S., Feb 2017, York, UK: Centre for Health Economics, University of York, p. 1-22, 22 p. (CHE Research Papers; no. 143).
Defining and measuring unmet need to guide healthcare funding: identifying and filling the gaps.
Aragon Aragon, M. J. M., Chalkley, M. J. & Goddard, M. K., Jan 2017, York: Centre for Health Economics, University of York, p. 1-46, 46 p. (CHE Research Paper; no. 141).
Into their land and labours : a comparative and global analysis of trajectories of peasant transformation
Vanhaute, E. & Cottyn, H. D. G. J., 2017, p. 1, 21 p. (ICAS Review Paper Series; no. 8).
Sums of reciprocals and the three distance theorem
Beresnevich, V. & Leong, N. E. C., 2017, 15 p.
Paying for performance for health care in low- and middle-income countries: an economic perspective
Chalkley, M. J., Mirelman, A., Siciliani, L. & Suhrcke, M. E., Dec 2016, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 140).
Market structure, patient choice and hospital quality for elective patients
Moscelli, G., Gravelle, H. S. E. & Siciliani, L., Nov 2016, York, UK: Centre for Health Economics, University of York, p. 1-33, 33 p. (CHE Research Paper; no. 139).
Which local authorities are most unequal?
Bradshaw, J. R. & Bloor, K. E., 26 Oct 2016, 4 p.
A randomized version of the Littlewood Conjecture
Haynes, A. & Koivusalo, H., 25 Oct 2016.
Funding of mental health services: Do available data support episodic payment?
Jacobs, R., Chalkley, M. J., Aragon Aragon, M. J. M., Boehnke, J. R., Clark, M., Moran, V. & Gilbody, S. M., Oct 2016, York, UK: Centre for Health Economics, p. 1-82, 82 p. (CHE Research Paper; no. 137).
Hospital productivity growth in the English NHS 2008/09 to 2013/14
Aragon Aragon, M. J. M., Castelli, A., Chalkley, M. J. & Gaughan, J. M., Oct 2016, Centre for Health Economics, University of York, p. 1-44, 44 p. (CHE Research Paper; no. 138).
Using MinION nanopore sequencing to generate a de novo eukaryotic draft genome: preliminary physiological and genomic description of the extremophilic red alga Galdieria sulphuraria strain SAG 107.79
Davis, S. J., 20 Sep 2016, p. 1-17, 17 p.
Fairer decisions, better health for all: Health equity and cost-effectiveness analysis
Cookson, R. A., Mirelman, A., Asaria, M., Dawkins, B. & Griffin, S., Sep 2016, Centre for Health Economics, University of York, p. 1-43, 43 p. (CHE Research Paper; no. 135).
Supporting the development of an essential health package: principles and initial assessment for Malawi
Ochalek, J. M., Claxton, K. P., Revill, P., Sculpher, M. J. & Rollinger, A., Sep 2016, York, UK: Centre for Health Economics, University of York, p. 1-90, 90 p. (CHE Research Paper; no. 136).
Single-world theory of the extended Wigner's friend experiment
Sudbery, A., 20 Aug 2016.
On Spin Calogero-Moser system at infinity
Sklyanin, E., Khoroshkin, S. & Matushko, M. G., 1 Aug 2016, arXiv, 28 p.
The impact of diabetes on labour market outcomes in Mexico: a panel data and biomarker analysis.
Seuring, T., Serneels, P. & Suhrcke, M. E., Aug 2016, York, UK: Centre for Health Economics, University of York, p. 1-37, 37 p. (CHE Research Paper; no. 134).
On the Minimum of a Positive Definite Quadratic Form over Non-Zero Lattice Points. Theory and Applications
Adiceam, F. & Zorin, E., 15 Jul 2016, 46 p.
Delayed discharges and hospital type: evidence from the English NHS
Gaughan, J. M., Gravelle, H. S. E. & Siciliani, L., Jul 2016, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 133).
Years of good life based on income and health: re-engineering cost-benefit analysis to examine policy impacts on wellbeing and distributive justice
Cookson, R. A., Cotton-Barrett, O., Adler, M., Asaria, M. & Ord, T., Jul 2016, York UK: Centre for Health Economics, University of York, p. 1-25, 25 p. (CHE Research Paper; no. 132).
The Ecology of Fringe Science and its Bearing on Policy
Collins, H. M., Bartlett, A. & Reyes-Galindo, L. I., 18 Jun 2016.
Parents' Experiences of Administering Distressing Nursing and Healthcare Procedures as part of Supporting Children with Complex or Long-Term Conditions at Home: Final Report
Spiers, G. F., Beresford, B. A. & Clarke, S. E., Jun 2016, York: Social Policy Research Unit, University of York, 77 p.
Pathways to nuclear disarmament: delegitimising nuclear violence
Ritchie, N. E., 11 May 2016, (Unpublished) 14 p.
Optimal hospital payment rules under rationing by random waiting
Gravelle, H. S. E. & Schroyen, F., May 2016, York, UK: Centre for Health Economics, University of York, p. 1-56, 56 p. (CHE Research Paper; no. 130).
The Family Peer Effect on Mothers' Labour Supply
Tominey, E., Nicoletti, C. & Salvanes, K., 20 Apr 2016.
Assessing the impact of health care expenditures on mortality using cross-country data
Nakamura, R., Lomas, J. R. S., Claxton, K. P., Bokhari, F., Moreno Serra, R. A. & Suhrcke, M. E., Apr 2016, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 128).
Socioeconomic inequalities in health care in England
Cookson, R. A., Propper, C., Asaria, M. & Raine, R., Apr 2016, York, UK: Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 129).
Pharmaceutical Pricing: Early Access, The Cancer Drugs Fund and the Role of NICE
Claxton, K. P., Mar 2016, Centre for Health Economics, University of York, 4 p. (Policy & Research Briefing).
Sibling spillover effects in school achievement
Nicoletti, C. & Rabe, B., 16 Jan 2016, (Discussion Papers).
Eliciting the level of health inequality aversion in England
Robson, M., Asaria, M., Tsuchiya, A., Ali, S. & Cookson, R. A., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-19, 19 p. (CHE Research Paper; no. 125).
Health equity indicators for the English NHS
Cookson, R. A., Asaria, M., Ali, S., Ferguson, B., Fleetcroft, R., Goddard, M. K., Goldblatt, P., Laudicella, M. & Raine, R., Jan 2016, York: Centre for Health Economics, University of York, 265 p. (CHE Research Paper; no. 124).
Location, quality and choice of hospital: Evidence from England 2002/3-2012/13
Moscelli, G., Siciliani, L., Gutacker, N. & Gravelle, H. S. E., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-31, 31 p. (CHE Research Paper; no. 123).
Bojke, C., Castelli, A., Grasic, K., Howdon, D. & Street, A. D., Jan 2016, York, UK: Centre for Health Economics, University of York, p. 1-69, 69 p. (CHE Research Paper; no. 126).
Making a drama out of learning
McCotter, S., 2016, York: The University of York, p. 20-21, (The Forum Magazine; no. 40).
Nonlinear Stochastic Partial Differential Equations of hyperbolic type driven by Lévy-type noises
Brzezniak, Z., 2016, (Unpublished) 30 p.
Cost per DALY averted thresholds for low- and middle-income countries: evidence from cross country data
Ochalek, J. M., Lomas, J. & Claxton, K. P., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-50, 50 p. (CHE Research Paper; no. 122).
Cost-effectiveness thresholds in health care: a bookshelf guide to their meaning and use
Culyer, T., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-22, 22 p. (CHE Research Paper; no. 121).
Efficiency, equity and equality in health and health care
The socioeconomic and demographic characteristics of United Kingdom junior doctors in training across specialities.
Rodríguez Santana, I. & Chalkley, M. J., Dec 2015, York, UK: Centre for Health Economics, University of York, p. 1-15, 15 p. (CHE Research Paper; no. 119).
Choosing and booking – and attending? Impact of an electronic booking system on outpatient referrals and non-attendances
Dusheiko, M. A. & Gravelle, H. S. E., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-51, 51 p. (CHE Research Paper; no. 116).
Hospital trusts productivity in the English NHS: uncovering possible drivers of productivity variations
Aragon Aragon, M. J. M., Castelli, A. & Gaughan, J. M., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-27, 27 p. (CHE Research Paper; no. 117).
How much should be paid for Prescribed Specialised Services?
Bojke, C., Grasic, K. & Street, A. D., Oct 2015, York, UK: Centre for Health Economics, University of York, p. 1-56, 56 p. (CHE Research Paper; no. 118).
'An Unduly Moralistic Approach to Disputes?' Monetary Remedies After Coventry v Lawrence
Steele, J., 15 Sep 2015, (Unpublished).
Comparing predictive accuracy in small samples
Coroneo, L. & Iacone, F., Sep 2015, Department of Economics and Related Studies, University of York, 26 p. (Discussion Paper in Economics; vol. 15, no. 15).
Multidimensional performance assessment using dominance criteria
Gutacker, N. & Street, A. D., Sep 2015, York, UK: Centre for Health Economics, University of York, p. 1-34, 34 p. (CHE Research Paper; no. 115).
Waiting time prioritisation: evidence from England
Cookson, R. A., Gutacker, N. & Siciliani, L., Sep 2015, York, UK: Centre for Health Economics, University of York, p. 1-26, 26 p. (CHE Research Paper; no. 114).
Extremality and dynamically defined measures, part II: Measures from conformal dynamical systems
Das, T., Fishman, L., Simmons, D. & Urbański, M., 23 Aug 2015, (Submitted) 28 p.
The impact of primary care quality on inpatient length of stay for people with dementia: An analysis by discharge destination
Kasteridis, P., Goddard, M. K., Jacobs, R., Santos, R. & Mason, A., Jul 2015, York, UK: Centre for Health Economics, University of York, p. 1-41, 41 p. (CHE Research Paper; no. 113).
Socioeconomic inequality of access to healthcare: Does patients' choice explain the gradient? Evidence from the English NHS
Moscelli, G., Siciliani, L., Gutacker, N. & Cookson, R. A., Jun 2015, York, UK: Centre for Health Economics, University of York, p. 1-43, 43 p. (CHE Research Paper; no. 112).
Do patients choose hospitals that improve their health?
Gutacker, N., Siciliani, L., Moscelli, G. & Gravelle, H. S. E., May 2015, York, UK: Centre for Health Economics, University of York, p. 1-37, 37 p. (CHE Research Paper; no. 111).
Country-level cost-effectiveness thresholds: initial estimates and the need for further research.
Woods, B., Revill, P., Sculpher, M. & Claxton, K. P., Mar 2015, York, UK: Centre for Health Economics, University of York, p. 1-24, 24 p. (CHE Research Paper; no. 109).
Bojke, C., Castelli, A., Grasic, K. & Street, A. D., Mar 2015, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 110).
The UK Naval Nuclear Propulsion Programme and Highly Enriched Uranium
Ritchie, N., Mar 2015, Washington, D.C.: Federation of American Scientists, 26 p.
Hausdorff dimensions of very well intrinsically approximable subsets of quadratic hypersurfaces
Fishman, L., Merrill, K. & Simmons, D., 26 Feb 2015, (Unpublished) 9 p.
Rational curves on smooth cubic hypersurfaces over finite fields
Browning, T. & Vishe, P., 17 Feb 2015, (Unpublished) 12 p.
Cost analysis of the legal declaratory relief requirement for withdrawing Clinically Assisted Nutrition and Hydration (CANH) from patients in the Permanent Vegetative State (PVS) in England and Wales
Formby, A. P., Cookson, R. A. & Halliday, S., Feb 2015, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper; no. 108).
Health care expenditures, age, proximity to death and morbidity: implications for an ageing population
Howdon, D. D. H. & Rice, N., 28 Jan 2015, York, UK: Centre for Health Economics, University of York, p. 1-42, 42 p. (CHE Research Paper; no. 107).
Patient choice and the effects of hospital market structure on mortality for AMI, hip fracture and stroke patients
Gravelle, H., Moscelli, G., Santos, R. & Siciliani, L., Dec 2014, York, UK: Centre for Health Economics, University of York, p. 1-57, 57 p. (CHE Research Paper; no. 106).
Some questions about devolving "welfare"
Bradshaw, J. R., 1 Oct 2014, 4 p.
The impact of hospital financing on the quality of inpatient care in England
Martin, S., Street, A. D., Han, L. & Hutton, J., Oct 2014, York, UK: Centre for Health Economics, University of York, p. 1-68, 68 p. (CHE Research Paper; no. 105).
Understanding the differences in in-hospital mortality between Scotland and England
Aragon Aragon, M. J. M. & Chalkley, M. J., Oct 2014, York, UK: Centre for Health Economics, University of York, p. 1-20, 20 p. (CHE Research Paper; no. 104).
The costs of specialised care
Bojke, C., Grasic, K. & Street, A. D., Sep 2014, Centre for Health Economics, University of York, p. 1-54, 54 p. (CHE Research Papers; no. 103).
Testing the bed-blocking hypothesis: does higher supply of nursing and care homes reduce delayed hospital discharges?
Gaughan, J. M., Gravelle, H. S. E. & Siciliani, L., Aug 2014, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 102).
How to share it out: the value of information in teams
Gershkov, A., Li, J. & Schweinzer, P., 9 Jul 2014, York: Department of Economics and Related Studies, University of York, 43 p. (DERS Discussion Papers in Economics; vol. 14/08).
Concentric Symmetry
Silva, F. N., Comin, C. H., Peron, T. K. D., Rodrigues, F. A., Ye, C., Wilson, R. C., Hancock, E. & Costa, L. F., 1 Jul 2014.
Addressing missing data in patient-reported outcome measures (PROMs): implications for comparing provider performance
Gomes, M., Gutacker, N., Bojke, C. & Street, A. D., Jul 2014, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 101).
The impact of diabetes on employment in Mexico
Seuring, T., Goryakin, Y. & Suhrcke, M. E., Jul 2014, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 100).
Financial mechanisms for integrating funds for health and social care: an evidence review
Mason, A., Goddard, M. K. & Weatherly, H. L. A., 20 Mar 2014, York: Centre for Health Economics, University of York, 77 p. (CHE Research Paper; no. 97).
Network meta-analysis of (individual patient) time to event data alongside (aggregate) count data
Saramago Goncalves, P. R., Chuang, L-H. & Soares, M. F. O., Jan 2014, York, UK: Centre for Health Economics, University of York, 26 p. (CHE Research Paper; no. 95).
Productivity of the English National Health Service from 2004/5: updated to 2011/12
Bojke, C., Castelli, A., Grasic, K. & Street, A. D., Jan 2014, York, UK: Centre for Health Economics, University of York, p. 1-40, 40 p. (CHE Research Paper; no. 94).
Public Values for Energy Futures: Framing, Indeterminacy and Policy Making
Butler, C., Demski, C., Parkhill, K. A., Pidgeon, N. & Spence, A., 2014, UKERC.
The importance of multimorbidity in explaining utilisation and costs across health and social care settings: evidence from South Somerset's Symphony Project
Kasteridis, P., Street, A. D., Dolman, M., Gallier, L., Hudson, K., Martin, J. & Wyer, I., 2014, York, UK: Centre for Health Economics, University of York, 60 p. (CHE Research Paper; no. 96).
Unspanned macroeconomic factors in the yield curve
Coroneo, L., Giannone, D. & Modugno, M., 2014, Brussels: Federal Reserve Board, Washington, D.C., 35 p. (Finance and Economics Discussion Series ; vol. 2014, no. 57).
Using cost-effectiveness thresholds to determine value for money in low-and middle-income country healthcare systems: Are current international norms fit for purpose?
Revill, P., Walker, S. M., Madan, J., Ciaranello, A., Mwase, T., Gibb, D. M., Claxton, K. P. & Sculpher, M. J., 2014, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper ; no. 98).
WHO decides what is fair? International HIV treatment guidelines, social value judgements and equitable provision of lifesaving antiretroviral therapy
Revill, P., Asaria, M., Phillips, A., Gibb, D. M. & Gilks, C., 2014, York UK: Centre for Health Economics, University of York, 17 p. (CHE Research Paper; no. 99).
Distributional cost-effectiveness analysis of health care programmes
Asaria, M., Griffin, S., Cookson, R. A., Whyte, S. & Tappenden, P., Nov 2013, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 91).
Distributional cost-effectiveness analysis: a tutorial
Asaria, M., Griffin, S. & Cookson, R. A., Nov 2013, York, UK: Centre for Health Economics, University of York, 24 p. (CHE Research Paper; no. 92).
Methods for the estimation of the NICE cost effectiveness threshold
Claxton, K. P., Martin, S., Soares, M. O., Rice, N., Spackman, E., Hinde, S., Devlin, N., Smith, P. C. & Sculpher, M., Nov 2013, York, UK: Centre for Health Economics, University of York, 436 p. (CHE Research Paper; no. 81).
The influence of cost-effectiveness and other factors on NICE decisions
Dakin, H., Devlin, N., Feng, Y., O'Neill, P., Rice, N. & Parkin, D., Nov 2013, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 93).
Who becomes the winner? Effects of venture capital on firms' innovative incentives
Beacham, M. I. & Datta, B., Oct 2013.
Attributing a monetary value to patients' time: A contingent valuation approach
Van Den Berg, B., Gafni, A. & Portrait, F., Sep 2013, York, UK: Centre for Health Economics, University of York, 44 p. (CHE Research Paper; no. 90).
Competition, prices, and quality in the market for physician consultations
Gravelle, H. S. E., Scott, A., Sivey, P. & Yong, J., Jul 2013, York, UK: Centre for Health Economics, University of York, 34 p. (CHE Research Paper; no. 89).
Does quality affect patients' choice of doctor? Evidence from the UK
Santos, R., Gravelle, H. S. E. & Propper, C., Jul 2013, York, UK: Centre for Health Economics, University of York, 49 p. (CHE Research Paper; no. 88).
NHS productivity from 2004/5 to 2010/11
Bojke, C., Castelli, A., Grasic, K., Street, A. D. & Ward, P., Jul 2013, York, UK: Centre for Health Economics, University of York, p. 1-38, 38 p. (CHE Research Paper; no. 87).
To block or not to block: Network Competition when Skype enters the mobile market
Datta, B. & Lo, Y-S., Jul 2013.
Choice of contracts for quality in health care: Evidence from the British NHS
Fichera, E., Gravelle, H. S. E., Pezzino, M. & Sutton, M., Jun 2013, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 85).
Electromagnetic transition from the 4$^+$ to 2$^+$ resonance in $^8$Be measured via the radiative capture in $^4$He+$^4$He
Datar, V. M., Chakrabarty, D. R., Kumar, S., Nanal, V., Pastore, S., Wiringa, R. B., Behera, S. P., Chatterjee, A., Jenkins, D., Lister, C. J., Mirgule, E. T., Mitra, A., Pillay, R. G., Ramachandran, K., Roberts, O. J., Rout, P. C., Shrivastava, A. & Sugathan, P., 6 May 2013, Arxiv (Cornell University).
Long term care provision, hospital length of stay and discharge destination for hip fracture and stroke patients: ESCHRU Report to Department of Health, March 2013
Gaughan, J., Gravelle, H., Santos, R. & Siciliani, L., 1 May 2013, York, UK: Centre for Health Economics, p. 1-44, 44 p. (CHE Research Paper; no. 86).
The quality of life of female informal caregivers: from Scandinavia to the Mediterranean Sea
Di Novi, C., Jacobs, R. & Migheli, M., May 2013, York, UK: Centre for Health Economics, University of York, 29 p. (CHE Research Paper; no. 84).
Co-evolution of networks and quantum dynamics: a generalization of the Barabási-Albert model of preferential attachment
Hancock, E., Konno, N., Latora, V., Machida, T., Nicosia, V., Severini, S. & Wilson, R., 4 Feb 2013, Arxiv (Cornell University), 10 p.
Does a hospital's quality depend on the quality of other hospitals? A spatial econometrics approach to investigating hospital quality competition
Gravelle, H. S. E., Santos, R. & Siciliani, L., Jan 2013, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 82).
Testing for optimal monetary policy via moment inequalities
Coroneo, L., Corradi, V. & Santos Monteiro, P., 2013, York: Department of Economics and Related Studies, University of York, 39 p. (Discussion Paper in Economics; no. 13/07).
Transforming the UK Energy System: Public Values, Attitudes and Acceptability - Synthesis Report
Parkhill, K. A., Demski, C., Butler, C., Spence, A. & Pidgeon, N., 2013, UKERC.
Transforming the UK Energy System: Public Values, Attitudes and Acceptability - Deliberating Energy System Transitions in the UK
Butler, C., Parkhill, K. A. & Pidgeon, N., 2013, UKERC, 72 p.
Hospital quality competition under fixed prices
Gravelle, H., Santos, R., Siciliani, L. & Goudie, R., 1 Nov 2012, York, UK: Centre for Health Economics, University of York, 53 p. (CHE Research Paper; no. 80).
Cubic hypersurfaces and a version of the circle method for number fields
Browning, T. & Vishe, P., 10 Jul 2012, (Accepted/In press) Arxiv (Cornell University), 45 p.
Incentives in the public sector: some preliminary evidence from a government agency
Burgess, S., Propper, C., Ratto, M. & Tominey, E., 1 Jul 2012, Bonn: Institute for the Study of Labor (IZA), 40 p. (IZA Discussion Papers; vol. 6738).
English hospitals can improve their use of resources: an analysis of costs and length of stay for ten treatments
Gaughan, J. M., Mason, A., Street, A. D. & Ward, P., Jul 2012, York, UK: Centre for Health Economics, University of York, 71 p. (CHE Research paper; no. 78).
Well-Being and psychological consequences of temporary contracts: the case of younger Italian employees
Carrieri, V., Di Novi, C., Jacobs, R. & Robone, S., Jul 2012, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 79).
Exotic dense matter states pumped by relativistic laser plasma in the radiation dominant regime
Colgan, J., Abdallah, Jr., J., Faenov, A. Ya., Pikuz, S., Wagenaars, E., Booth, N., Brown, C., Culfa, O., Dance, R. J., Evans, R., Gray, R., Hoarty, D., Kaempfer, T., Lancaster, K., McKenna, P., Rossall, A. K., Skobelev, I. Yu., Schulze, K., Uschmann, I., Zhidkov, A. & Woolsey, N. C., 27 Jun 2012, Arxiv (Cornell University).
Productivity of the English National Health Service 2003/4-2009/10 - Report for the Department of Health
Bojke, C., Castelli, A., Goudie, R., Street, A. & Ward, P., Mar 2012, York, UK: Centre for Health Economics, University of York, 53 p. (CHE Research Paper; no. 76).
Twenty Years of Using Economic Evaluations for Reimbursement Decisions: What Have We Achieved?
Drummond, M. F., Feb 2012, York, UK: Centre for Health Economics, University of York, 22 p. (CHE Research Paper; no. 75).
Analysing hospital variation in health outcome at the level of EQ-5D dimensions
Gutacker, N., Bojke, C., Daidone, S., Devlin, N. & Street, A., Jan 2012, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 74).
Estimating the costs of specialised care: updated analysis using data for 2009/10. Report to the Department of Health
Daidone, S. & Street, A. D., Dec 2011, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 71).
Keep it Simple? Predicting Primary Health Care Costs with Measures of Morbidity and Multimorbidity
Brilleman, S. L., Gravelle, H. S. E., Hollinghurst, S., Purdy, S., Salisbury, C. & Windmeijer, F., Dec 2011, York, UK: Centre for Health Economics, University of York, 29 p. (CHE Research Paper; no. 72).
Modelling Individual Patient Hospital Expenditure for General Practice Budgets
Gravelle, H. S. E., Dusheiko, M. A., Martin, S., Smith, P. C., Rice, N. & Dixon, J., Dec 2011, York, UK: Centre for Health Economics, University of York, 43 p. (CHE Research Paper; no. 73).
NICE's social value judgements about equity in health and health care
Shah, K., Cookson, R. A., Culyer, A. J. & Littlejohns, P., Nov 2011, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 70).
Does hospital competition harm equity? Evidence from the English National Health Service
Cookson, R. A., Laudicella, M. & Li Donni, P., Oct 2011, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 66).
Measuring Change in Health Care Equity Using Small Area Administrative Data: Evidence from the English NHS 2001-8
Truly Inefficient or Providing Better Quality of Care? Analysing the Relationship Between Risk-Adjusted Hospital Costs and Patients' Health Outcomes
Gutacker, N., Bojke, C., Daidone, S., Devlin, N., Parkin, D. & Street, A., Oct 2011, York, UK: Centre for Health Economics, University of York, 23 p. (CHE Research Paper; no. 68).
Uncertainty, evidence and irrecoverable costs: informing approval, pricing and research decisions for health technologies
Claxton, K. P., Palmer, S. J., Longworth, L., Bojke, L., Griffin, S., McKenna, C., Soares, M. O., Spackman, E. & Youn, J., Oct 2011, York, UK: Centre for Health Economics, University of York, 81 p. (CHE Research Paper; no. 69).
Does Better Disease Management in Primary Care Reduce Hospital Costs?
Dusheiko, M. A., Gravelle, H. S. E., Martin, S., Rice, N. & Smith, P. C., Aug 2011, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 65).
Voting and the macroeconomy: separating trend from cycle
Maloney, J. & Pickering, A. C., 12 Jul 2011, York, UK: Department of Economics and Related Studies, University of York, p. 1-45, 45 p. (University of York Discussion Papers in Economics).
Do Hospitals Respond to Greater Autonomy? Evidence from the English NHS
Verzulli, R., Jacobs, R. & Goddard, M., Jul 2011, York, UK: Centre for Health Economics, University of York, 32 p. (CHE Research Paper; no. 64).
Avoidable mortality: what it means and how it is measured
Castelli, A. & Nizalova, O. Y., Jun 2011, York, UK: Centre for Health Economics, University of York, 44 p. (CHE Research Paper; no. 63).
An equity checklist: a framework for health technology assessments
Culyer, A. J. & Bombard, Y., May 2011, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 62).
Study of Electromagnetically Induced Transparency using long-lived Singlet States
Roy, S. S. & Mahesh, T. S., 17 Mar 2011.
Interventions to Promote Improved Access to Higher Education: Exploratory Paper: Evidence resource for the Bridge Group
Rudd, P., Mar 2011, London: Bridge Group, 8 p. (Evidence Resources for the Bridge Group).
Estimating the costs of specialised care
Daidone, S. & Street, A. D., Feb 2011, York, UK: Centre for Health Economics, University of York, (CHE Research Paper; no. 61).
Growing up in social housing in the new millennium: Housing, neighbourhoods and early outcomes for children born in 2000
Tunstall, B., Lupton, R., Kneale, D. & Jenkins, A., Feb 2011, London: London School of Economics and Political Science, 45 p. (CASE Papers; no. 143).
'Experiment Earth?' Reflections on a public dialogue on geoengineering
Corner, A., Parkhill, K. A. & Pidgeon, N., 2011, Cardiff University, 33 p.
Building the Big Society
Tunstall, R., Lupton, R., Power, A. & Richardson, L., 2011, CASE, LSE, 48 p. (CASE Reports; no. 67).
Examining instruments aimed at promoting waste reduction and recycling in achieving sustainability of the food supply chain
Thankappan, S., 2011, York, UK: University of York, 11 p.
Social housing and social exclusion 2000-2011
Tunstall, R., 2011, CASE, LSE, (CASE Paper; no. 153).
Value-based pricing for pharmaceuticals: its role, specification and prospects in a newly devolved NHS
Claxton, K., Sculpher, M. & Carroll, S., 2011, York,UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 60).
Cause and effect relationship between post-merger operating performance changes and workforce adjustments
Kuvandikov, A., Aug 2010, Department of Management Studies, University of York.
Causes of post-merger workforce adjustments
Labour demand and wage effects of takeovers that involve employee layoffs
Shareholders and employees: rent transfer and rent sharing in corporate takeovers
Back to Life: Leadership from a Process Perspective
Wood, M., May 2010, Department of Management Studies, University of York.
Reluctant Bedfellows or Model Marriage? Postmodern Thinking Applied to Mainstream Public Sector Health Services Research Settings
Accounting and Labour Control at Boulton and Watt, c. 1775-1810
Toms, S., Mar 2010, Department of Management Studies, University of York.
How Can SMEs Become More Competitive on the Graduate Labour Market?
Muenzinger, I., Mar 2010, Department of Management Studies, University of York.
Using history to help refine international business theory: ownership advantages and the eclectic paradigm.
da Silva Lopes, T., Mar 2010, Department of Management Studies, University of York.
Appropriate perspectives for health care decisions.
Claxton, K., Palmer, S., Sculpher, M. & Walker, S., 2010, York, UK: Centre for Health Economics, University of York, 86 p. (CHE Research Paper; no. 54).
Does cost-effectiveness analysis discriminate against patients with short life expectancy? Matters of logic and matters of context
Paulden, M. & Culyer, A. J., 2010, York, UK: Centre for Health Economics, University of York, 15 p. (CHE Research Paper; no. 55).
Efficient emissions reduction
Roussillon, B. & Schweinzer, P., 2010, 27 p. (University of Manchester Discussion Paper Series; vol. EDP-1004).
Foundation trusts: a retrospective review
Bojke, C. & Goddard, M., 2010, York, UK: Centre for Health Economics, University of York, 21 p. (CHE Research Paper; no. 58).
Hospital car parking: the impact of access costs
Mason, A., 2010, York, UK: Centre for Health Economics, University of York, 65 p. (CHE Research Paper; no. 59).
Nordic welfare financiers made global portfolio investors: institutional change in pension fund governance in Sweden and Finland
Sorsa, V-P. & Roumpakis, A., 2010, Oxford: Centre for Employment Work and Finance, University of Oxford, 88 p. (Oxford University Working Papers in Employment, Work and Finance; vol. WPG10-01 ).
PSE Measures Review Paper: Children's Deprivation Items
Bradshaw, J. & Main, G., 2010, [s.l.]: Poverty and Social Exclusion, 48 p. (Poverty and Social Exclusion in the UK: The 2011 Survey, Working Paper Series; no. 7).
Regional variation in the productivity of the English National Health Service
Bojke, C., Castelli, A., Laudicella, M., Street, A. & Ward, P., 2010, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 57).
Simulation or cohort models? Continuous time simulation and discretized Markov models to estimate cost-effectiveness
Soares, M. & Canto e Castro, L., 2010, Centre for Health Economics, (CHE Research Paper; vol. 56).
Building Cross Cultural Competencies.
Richardson, H. & Warwick, P., Nov 2009, Department of Management Studies, University of York.
Risk Disclosure and Re-establishing Legitimacy in the Event of a Crisis - Did Northern Rock Use Risk Disclosure to Repair Legitimacy after their 2007 Collapse?
Edkins, A., Nov 2009, Department of Management Studies, University of York.
An economic framework for analysing the social determinants of health and health inequalities
Epstein, D., Jimenez-Rubio, D., Smith, P. C. & Suhrcke, M., Oct 2009, York, UK: Centre for Health Economics, University of York, p. 1-68, 68 p. (CHE Research Paper; no. 52).
Budget Allocation and the Revealed Social Rate of Time Preference for Health
Paulden, M. & Claxton, K. P., Oct 2009, York, UK: Centre for Health Economics, University of York, 20 p. (CHE Research Paper; no. 53).
Does Community and Environmental Responsibility Affect Firm Risk? Evidence from UK Panel Data 1994-2006
Toms, S., Anderson, K. & Salama, A., Aug 2009, Department of Management Studies, University of York.
Risk and value in labour and capital markets: The UK corporate economy, 1980-2005.
Toms, S. & Salama, A., Aug 2009, Department of Management Studies, University of York.
Lines of Flight: Everyday Resistance along England's Backbone
Wood, M. & Brown, S., Jul 2009, University of York, The York Management School, 29 p.
The status of planning processes in family-owned businesses: A study of transformational economy and its relationship to the financial performance of family-owned Ukrainian firms
Lehkman, O., Jul 2009, University of York, The York Management School, 53 p.
The status of planning processes in family-owned businesses: A study of transformational economy and its relationship to the financial performance of family-owned Ukrainian firms.
Lehkman, O., Jul 2009, Department of Management Studies, University of York.
Charismatic Leadership and its emergence under crisis conditions: A case study from the airline industry
Kakavogianni, D., Mar 2009, University of York, The York Management School, 71 p.
Employee Share Ownership Plans: A Review
Kaarsemaker, E., Pendleton, A. & Poutsma, E., Feb 2009, University of York, The York Management School, 30 p.
'The Wife's Administration of the Earnings'? Working-Class Women and Savings in the Mid-Nineteenth Century
Maltby, J., Feb 2009, University of York, 30 p.
A Political Theory of Decentralization Dynamics.
Jurado, I., 2009, Madrid, (Juan March Institute Working Paper Series).
Exploring the impact of public services on quality of life indicators
Castelli, A., Jacobs, R., Goddard, M. & Smith, P. C., 2009, York UK: Centre for Health Economics, University of York, 148 p. (CHE Research Paper; no. 46).
Investigating Patient Outcome Measures in Mental Health
Jacobs, R., 2009, York, UK: Centre for Health Economics, University of York, 94 p. (CHE Research Paper; no. 48).
MRC-NICE scoping project: identifying the National Institute for Health and Clinical Excellence's methodological research priorities and an initial set of priorities
Longworth, L., Bojke, L., Tosh, J. & Sculpher, M., 2009, York, UK: Centre for Health Economics, University of York, 120 p. (CHE Research Paper; no. 51).
NHS Input and Productivity Growth 2003/4 - 2007/8
Street, A. & Ward, P., 2009, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 47).
Payment by results in mental health: A review of the international literature and an economic assessment of the approach in the English NHS
Mason, A. & Goddard, M., 2009, York, UK: Centre for Health Economics, University of York, 71 p. (CHE Research Paper; no. 50).
What explains variation in the costs of treating patients in English obstetrics specialties?
Laudicella, M., Olsen, K. R. & Street, A., 2009, York, UK: Centre for Health Economics, University of York, 25 p. (CHE Research Paper ; no. 49).
Asymmetric Response: Explaining Corporate Social Disclosure by Multi-National Firms in Environmentally Sensitive Industries
Toms, S., Jul 2008, University of York, 28 p.
'Strangers and Brothers': The Secret History of Profit, Value and Risk. An inaugural lecture.
Toms, S., Jun 2008, University of York, 26 p.
Dimensions of Design Space: A Decision-Theoretic Approach to Optimal Research Design
Conti, S. & Claxton, K. P., Jun 2008, York, UK: Centre for Health Economics, University of York, 28 p. (CHE Research Paper; no. 38).
Budgetary policies and available actions: a generalisation of decision rules for allocation and research decisions
McKenna, C., Chalabi, Z., Epstein, D. & Claxton, K., 2008, York, UK: Centre for Health Economics, University of York, 28 p. (CHE Research Paper; no. 44).
Determinants of general practitioners' wages in England
Morris, S., Sutton, A., Gravelle, H., Elliott, B., Hole, A., Ma, A., Sibbald, B. & Skatun, D., 2008, York, UK: Centre for Health Economics, University of York, 21 p. (CHE Research Paper; no. 36).
Doctor behaviour under a pay for performance contract: Further evidence from the quality and outcomes framework
Gravelle, H., Sutton, M. & Ma, A., 2008, York, UK: Centre for Health Economics, University of York, 39 p. (CHE Research Paper; no. 34).
Economic analysis of cost-effectiveness of community engagement to improve health
Carr-Hill, R. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 33).
Establishing a fair playing field for payment by results
Mason, A., Miraldo, M., Siciliani, L., Sivey, P. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 88 p. (CHE Research Paper; no. 39).
Fairness in primary care procurement measures of under-doctoredness: sensitivity analysis and trends
Hole, A., Marini, G., Goddard, M. & Gravelle, H., 2008, York, UK: Centre for Health Economics, University of York, 41 p. (CHE Research Paper; no. 35).
Living with Nuclear Power in Britain: A Mixed-methods Study
Pidgeon, N. F., Henwood, K., Parkhill, K. A., Venables, D. & Simmons, P., 2008, Cardiff University.
Measuring NHS output growth
Castelli, A., Laudicella, M. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 68 p. (CHE Research paper; no. 43).
Optimal contracts and contractual arrangements within the hospital: bargaining vs. take-it-or-leave-it offers
Galizzi, M. M. & Miraldo, M., 2008, York, UK: Centre for Health Economics, University of York, 37 p. (CHE Research Paper; no. 37).
Price adjustment in the hospital sector
Miraldo, M., Siciliani, L. & Street, A. D., 2008, York, UK: Centre for Health Economics, University of York, 33 p. (CHE Research Paper; no. 41).
Price regulation of pluralistic markets subject to provider collusion
Longo, R., Miraldo, M. & Street, A., 2008, York, UK: Centre for Health Economics, University of York, 25 p. (CHE Research Paper; no. 45).
Public health care resource allocation and the rule of rescue
Cookson, R., Tsuchiya, A. & McCabe, C., 2008, Journal of Medical Ethics.
Quality in and Equality of Access to Healthcare Services in England
Goddard, M., 2008, York, UK: Centre for Health Economics, University of York, 72 p. (CHE Research Paper; no. 40).
The link between health care spending and health outcomes for the new English Primary Care Trusts
Martin, S., Rice, N. & Smith, P. C., 2008, York, UK: Centre for Health Economics, University of York, 51 p. (CHE Research Paper; no. 42).
The limits of market-based governance and accountability - PFI refinancing and the resurgence of the regulatory state
Beck, M., Toms, S. & Asenova, D., Jul 2007, Department of Management Studies, University of York.
The private finance initiative (PFI) and finance capital: A note on gaps in the "accountability" debate
Asenova, D. & Beck, M., Jul 2007, Department of Management Studies, University of York.
BSE crisis and food safety regulation: a comparison of the UK and Germany
Beck, M., Kewell, B. & Asenova, D., Mar 2007, Department of Management Studies, University of York, 36 p.
Business strategy and firm performance: the British corporate economy, 1949-1984
Antcliff, V., Higgins, D., Toms, S. & Wilson, J. F., 2007, Department of Management Studies, University of York.
Doctor behaviour under a pay for performance contract: Evidence from the quality and outcomes framework
Further evidence on the link between health care spending and health outcomes in England
Hospital financing and the development and adoption of new technologies
Miraldo, M., 2007, York, UK: Centre for Health Economics, University of York, 42 p. (CHE Research Paper; no. 26).
Introducing activity-based financing: a review of experience in Australia, Denmark, Norway and Sweden
Street, A., Vitikainen, K., Bjorvatn, A. & Hvenegaard, A., 2007, York, UK: Centre for Health Economics, University of York, 56 p. (CHE Research Paper; no. 30).
Keynes and the cotton industry: a reappraisal
Higgins, D., Toms, S. & Filatotchev, I., 2007, Department of Management Studies, University of York.
Leadership then at all events
Wood, M. & Ladkin, D., 2007, Department of Management Studies, University of York, 46 p.
Mark versus Luke? Appropriate methods for the evaluation of public health interventions
Claxton, K., Sculpher, M. & Culyer, A., 2007, York, UK: Centre for Health Economics, University of York, 22 p. (CHE Research Paper; no. 31).
Measurement of non-market output in education and health
Smith, P. C. & Street, A., 2007, York, UK: Centre for Health Economics, University of York, 27 p. (CHE Research Paper; no. 23).
Moving the gender agenda or stirring chicken's entrails? Where next for feminist methodologies in accounting?
Haynes, K., 2007, Department of Management Studies, University of York, 33 p.
Oldham capitalism and the rise of the Lancashire textile industry
Toms, S., 2007, Department of Management Studies, University of York.
Political, social and economic determinants of corporate social disclosure by multi-national firms in environmentally sensitive industries
Toms, S., Hasseldine, J. & Massoud, H., 2007, York: The York Management School.
Reference pricing versus co-payment in the pharmaceutical industry: firms' pricing strategies
Reference pricing versus co-payment in the pharmaceutical industry: price, quality and market coverage
The failed promise of foreign direct investment: some remarks on 'malign' investment and political instability in former Soviet states
Beck, M. & Acc-Nikmehr, N., 2007, Department of Management Studies, University of York.
The link between health care spending and health outcomes: Evidence from English programme budgeting data
Martin, S., Rice, N. & Smith, P., 2007, York, UK: Centre for Health Economics, University of York, 30 p. (CHE Research Paper; no. 24).
There is no such thing as an audit society
Maltby, J., 2007, Department of Management Studies, University of York, 21 p.
'Real business'? Gendered identities in accounting and management academia
Haynes, K. & Fearful, A., 2007, Department of Management Studies, University of York.
"We do not share the troubles of our trans-Atlantic cousins": The statutory framework for accounting in the UK and the US in the interwar period
Maltby, J., 2007, Department of Management Studies, University of York.
Development of company law in India: the case of the Companies Act 1956
Verma, S. & Gray, S. J., Feb 2006, York: Department of Management Studies, University of York, 44 p.
The establishment of the Institute of Chartered Accountants of India (ICAI): the first step in the development of an accounting profession in post-independence India
Valuing human resources: perceptions and practices in UK organisations.
Verma, S. & Dewe, P., Feb 2006, York: Department of Management Studies, University of York, 42 p.
(Re)figuring accounting and maternal bodies: the gendered embodiment of accounting professionals
Haynes, K., Jan 2006, York: Department of Management Studies, University of York, 42 p.
Drugs for exceptionally rare diseases: a commentary on Hughes et al
Claxton, K., McCabe, C., Tsuchiya, A. & Raftery, J., 2006, QJM.
Industrial districts as organizational environments: resources, networks and structures
Popp, A., Toms, S. & Wilson, J., 2006, York: Department of Management Studies, University of York, 35 p.
International Students in the UK: how can we give them a better experience?
Warwick, P., 2006, York: Department of Management Studies, University of York, 26 p.
Other lives in accounting: critical reflections on oral history methodology in action
Haynes, K., 2006, York: Department of Management Studies, University of York, 32 p.
Quantum conductance of homogeneous and inhomogeneous interacting electron systems
Godby, R. W., Bokes, P. & Jung, J., 2006.
Reputation in organizational settings: a research agenda
Kewell, B., 2006, York: Department of Management Studies, University of York, 14 p.
Sustained Competitive Advantage and the Modern Labour Theory of Value
Toms, S., 2006, York Management School: University of York, 46 p.
The rise and fall of the patient forum
Warwick, P., 2006, York: Department of Management Studies, University of York, 8 p.
Who invests too much in employer stock, and why do they do it? Some evidence from UK stock ownership plans
Pendleton, A., 2006, York: Department of Management Studies, University of York, 32 p.
A project management module for virtual teaching
Ward, A., Dec 2005, Department of Management Studies: University of York, 18 p.
Making sense of tragedy: the 'reputational' antecedents of a hospital disaster
Kewell, B., Dec 2005, York: Department of Management Studies, University of York, 37 p.
The association between accounting and market-based risk measures
Toms, S., Nguyen, D. T. & Salama, A., Dec 2005, York: Department of Management Studies, University of York, 26 p.
The labour theory of value, risk and the rate of profit
Toms, S., Dec 2005, York: Department of Management Studies, University of York, 20 p.
Back to the future in NHS reform
Interactive situation modelling in knowledge intensive domains
Fernandes, K., 2005, York: Department of Management Studies, University of York, 27 p.
Self as social practice: rewriting the feminine in qualitative organizational research
Linstead, A., 2005, York: Department of Management Studies, University of York, 43 p.
The resource-based view of the firm and the labour theory of value
Toms, S., 2005, York: Department of Management Studies, York, 29 p.
Asset pricing models, the labour theory of value and their implications for accounting
Toms, S., Mar 2004, York: Department of Management Studies, University of York, 31 p.
Derrida reappraised: deconstruction, critique and emancipation in management studies
Learmonth, M., Mar 2004, York: Department of Management Studies, University of York, 31 p.
Forgotten feminists: The Federation of British Professional and Business Women, 1933-1969
Perriton, L., 2004, York: Department of Management Studies, University of York, 25 p.
Modelling time-constrained software development
Powell, A., 2004, York: Department of Management Studies, University of York, 21 p.
The reform of the NHS in Portugal
Diogo, C., 2004, York: Department of Management Studies, University of York, 39 p.
Theorising path dependence: how does history come to matter in organisations, and what can we do about it?
Greener, I., 2004, York: Department of Management Studies, University of York, 30 p.
Transforming identities: accounting professionals and the transition to motherhood
Haynes, K., 2004, York: Department of Management Studies, University of York, 46 p. (The York Management School Working Paper Series; no. 6).
Determining the parameters in a social welfare function using stated preference data: an application to health
Shaw, R., Dolan, P., Tsuchiya, A., Smith, P. & Williams, A., 2002, Applied Economics.
Influence of dynamics on magic numbers for silicon clusters
Porter, A. R. & Godby, R. W., 1997, p. 1-12, 11 p.
Journal of Systems Science and Complexity (JSSC), founded in 1988, is a bimonthly journal.
Supervised by: Chinese Academy of Sciences
Edited by: Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Covered by: SCI Expanded, Ei Compendex, etc.
JCR IF: 1.272 (2021)
Selected by: The Excellence Action Plan of China STM Journals
Volume 35 Issue 6, 25 November 2022
Global Practical Exponential Stabilization for One-Sided Lipschitz Systems with Time Delay
AKROUTI Imen, ECHI Nadhem
2022, 35(6): 2029-2045. DOI: 10.1007/s11424-022-1061-4
This paper addresses the practical stabilization problem for a class of one-sided Lipschitz nonlinear time-delay systems with external disturbances. In the disturbance-free case, exponential convergence of the observer is established. When external disturbances act on the system, a separation principle is established, and the authors show that the closed-loop system is exponentially practically stable. By choosing a suitable Lyapunov-Krasovskii functional, the authors derive new sufficient conditions that guarantee the exponential stability of the systems. Finally, a physical example is worked out to demonstrate the efficiency and applicability of the suggested approach.
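As a rough numerical illustration of what exponential practical stability means for a delayed system under bounded disturbances (trajectories settle into a small ball around the origin rather than converging to it exactly), here is a minimal simulation sketch. The system matrices, the cubic one-sided Lipschitz nonlinearity, the feedback gain, and the disturbance bound are all invented for illustration; this is not the observer-based design of the paper.

```python
import numpy as np

# Toy illustration of practical stability: a delayed system with a one-sided
# Lipschitz nonlinearity, linear state feedback and a bounded disturbance.
# All matrices, gains and bounds are invented; this is not the paper's design.

A   = np.array([[0.0, 1.0], [-1.0, 0.5]])   # nominal dynamics (assumed)
A_d = np.array([[0.1, 0.0], [0.0, 0.1]])    # delayed-state coupling (assumed)
K   = np.array([[0.0, 0.0], [0.0, -3.0]])   # stabilizing feedback gain (assumed)
tau, dt, T = 0.5, 1e-3, 20.0                # delay, step size, horizon
d_bar = 0.05                                # disturbance bound

n_delay = int(tau / dt)
hist = [np.array([1.0, -1.0])] * (n_delay + 1)   # constant initial history

rng = np.random.default_rng(0)
for k in range(int(T / dt)):
    x, x_del = hist[-1], hist[-1 - n_delay]
    d = d_bar * (2.0 * rng.random(2) - 1.0)      # bounded disturbance
    f = -x**3                                    # one-sided Lipschitz term
    hist.append(x + dt * ((A + K) @ x + A_d @ x_del + f + d))

# the norm settles near a small ball around the origin, not at zero
print("final ||x|| =", np.linalg.norm(hist[-1]))
```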
Fully Actuated System Approach for Linear Systems Control: A Frequency-Domain Solution
DUAN Guang-Ren, ZHOU Bin
This note studies fully actuated linear systems in the frequency domain in terms of the polynomial matrix description (PMD). For a controllable first-order linear state-space model, the right coprime factorization of its transfer function matrix is used: under the condition that the denominator matrix in the factorization is column reduced, the model is equivalently transformed into a fully actuated PMD model, whose time-domain expression is exactly a high-order fully actuated (HOFA) system model. This frequency-domain method complements the earlier time-domain one and reveals a connection between the controllability of the first-order linear state-space model and the full actuation of its PMD model. Both continuous-time and discrete-time linear systems are considered. Some numerical examples are worked out to illustrate the effectiveness of the proposed approaches.
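For readers less familiar with polynomial matrix descriptions, the following is a generic textbook-style summary of the construction the abstract refers to: a right coprime factorization with a column-reduced denominator, read as a high-order model. The output matrix C and the identification of column degrees with controllability indices are standard facts assumed here, not details taken from the note itself.

```latex
% Generic right coprime factorization and the induced PMD (standard theory,
% shown for orientation; C, N(s), D(s) here are not taken from the note).
\[
  G(s) \;=\; C\,(sI-A)^{-1}B \;=\; N(s)\,D(s)^{-1},
\]
% with D(s) column reduced; its column degrees equal the controllability
% indices of (A, B). The corresponding PMD model, with s read as the
% differentiation (or shift) operator, is
\[
  D(s)\,\xi(t) \;=\; u(t), \qquad y(t) \;=\; N(s)\,\xi(t),
\]
% a high-order model whose leading (column-highest-degree) coefficient matrix
% is nonsingular, i.e., the model is fully actuated.
```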
Trust-Region Based Stochastic Variational Inference for Distributed and Asynchronous Networks
FU Weiming, QIN Jiahu, LING Qing, KANG Yu, YE Baijia
Stochastic variational inference is an efficient Bayesian inference technique for massive datasets, which approximates posteriors by using noisy gradient estimates. Traditional stochastic variational inference can only be performed in a centralized manner, which limits its application in the many situations where data are held by multiple nodes. Therefore, this paper develops a novel trust-region based stochastic variational inference algorithm for a general class of conjugate-exponential models over distributed and asynchronous networks, in which the global parameters are diffused over the network by using the Metropolis rule and the local parameters are updated by using the trust-region method. In addition, a simple rule is introduced to balance the transmission frequencies between neighboring nodes so that the proposed distributed algorithm can be performed in an asynchronous manner. The utility of the proposed algorithm is tested by fitting the Bernoulli model and the Gaussian model to different datasets on a synthetic network, and experimental results demonstrate its effectiveness and advantages over existing works.
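The following toy sketch illustrates the flavour of the approach on a Beta-Bernoulli model: each node takes damped (trust-region-like) natural-gradient SVI steps on minibatches and then diffuses its global variational parameters to its neighbours with Metropolis weights. The ring network, the damping factor `lr`, and the synchronous sweep are simplifying assumptions for illustration; the paper's algorithm is asynchronous and covers general conjugate-exponential models.

```python
import numpy as np

# Toy distributed SVI sketch on a Beta-Bernoulli model (not the paper's algorithm):
# each node takes damped natural-gradient steps on minibatches and diffuses its
# global variational parameters to neighbours with Metropolis weights.

rng = np.random.default_rng(1)
true_p, n_nodes, n_data = 0.7, 4, 200
data = [rng.binomial(1, true_p, n_data) for _ in range(n_nodes)]

adj = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}  # ring
def w(i, j):                                   # Metropolis weight
    return 1.0 / (1 + max(len(adj[i]), len(adj[j])))

prior = np.array([1.0, 1.0])                   # Beta(1, 1) prior
params = np.tile(prior, (n_nodes, 1))          # each node's Beta(a, b) parameters
batch, lr = 20, 0.3                            # lr < 1 damps the step (crude trust region)

for it in range(200):
    new = params.copy()
    for i in range(n_nodes):
        mb = rng.choice(data[i], batch, replace=False)
        scale = n_data / batch                 # rescale the minibatch statistics
        target = prior + scale * np.array([mb.sum(), batch - mb.sum()])
        new[i] = (1 - lr) * params[i] + lr * target
    mixed = np.zeros_like(new)                 # Metropolis diffusion step
    for i in range(n_nodes):
        w_self = 1 - sum(w(i, j) for j in adj[i])
        mixed[i] = w_self * new[i] + sum(w(i, j) * new[j] for j in adj[i])
    params = mixed

print("posterior mean per node:", np.round(params[:, 0] / params.sum(axis=1), 3))
```

With these settings every node's posterior mean ends up close to the true bias 0.7, which is the point of the diffusion step: all nodes agree on the global parameters without pooling the raw data.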
Stability of a Variable Coefficient Star-Shaped Network with Distributed Delay
ZHANG Hai-E, XU Gen-Qi, CHEN Hao, LI Min
2022, 35(6): 2077-2106. DOI: 10.1007/s11424-022-1157-x
The paper deals with the exponential stability problem of a variable coefficient star-shaped network, whose strings are coupled at a common end in a star-shaped configuration, where the common connection of all strings is allowed to move. Each string carries two kinds of material: one part viscous and the other purely elastic. Under suitable hypotheses on the coefficient functions $\mu_j(x)$ of the damping terms and the kernels $\eta_j(s)$ of the distributed delay terms, the well-posedness of the system is obtained by means of resolvent family theory. In addition, the allocation proportion of the two parts and the properties of the material characteristic functions are discussed for the case when the star-shaped network is exponentially stable, and a sufficient condition for exponential stability is established. Numerical simulations are also included to verify the main results.
Controllability of General Linear Discrete Multi-Agent Systems with Directed and Weighted Signed Network
ZHAO Lanhao, JI Zhijian, LIU Yungang, LIN Chong
This paper investigates the controllability of general linear discrete-time multi-agent systems over directed and weighted signed networks by using graph-theoretic and algebraic methods. The delay-free and delayed cases are considered separately. In the delay-free case, an upper bound on the controllable subspace is given by using the equitable partition method, and the influence of the choice of the coefficient matrices of the individual dynamics is illustrated. For the cases of a single delay and of multiple delays, the equitable partition method is extended to time-delay systems, and several conclusions are obtained. In particular, simplified algebraic criteria for the controllability of systems with time delay are derived by combining the augmented system method with the classical algebraic controllability criteria.
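A concrete way to check controllability of one particular networked realization is the classical Kalman rank test, sketched below for a signed, weighted three-agent example. The matrices `L`, `A`, `B`, `K` and the leader selection `delta` are made up for illustration, and the rank test is only a numerical stand-in for the graph-theoretic and algebraic criteria derived in the paper.

```python
import numpy as np

# Classical Kalman rank test for a networked discrete-time system
# x(k+1) = (I_N (x) A - L (x) B K) x(k) + (delta (x) B) u(k),
# where (x) denotes the Kronecker product. All data below are assumed examples.

def controllable(F, G, tol=1e-9):
    n = F.shape[0]
    blocks = [G]
    for _ in range(n - 1):
        blocks.append(F @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks), tol) == n

L = np.array([[ 1.0, -1.0,  0.0],        # signed, weighted digraph Laplacian
              [ 0.5,  1.5, -2.0],
              [-1.0,  0.0,  1.0]])
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # individual agent dynamics
B = np.array([[0.0], [1.0]])
K = np.array([[0.4, 0.2]])               # coupling gain
delta = np.array([[1.0], [0.0], [0.0]])  # agent 1 receives the external input

F = np.kron(np.eye(3), A) - np.kron(L, B @ K)
G = np.kron(delta, B)
print("controllable:", controllable(F, G))
```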
Algebraic Verification of Finite Group-Based Potential Games with Vector Payoffs
WANG Yuanhua, LI Haitao
This paper studies a class of strategic games in which players often collaborate with other players to form a group when making decisions, and the payoff functions of the players are given as vector functions. First, using the semi-tensor product (STP) method, it is proved that a finite game with vector payoffs is potential if and only if its potential equation has a solution. By adding a suitable weight vector to the vector payoffs of each player, a finite game with vector payoffs that is not potential can be converted into a potential game. Second, as a natural generalization, the authors consider the verification problem for group-based potential games with vector payoffs. By solving a linear potential equation, a simple formula is obtained to calculate the corresponding potential function. Finally, some examples are presented and discussed in detail to illustrate the theoretical results.
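To make the potential equation concrete, the sketch below tests whether a two-player game with scalar payoffs is an exact potential game by solving the equation in least squares and checking the residual. The scalar payoffs and the brute-force linear system are simplifications; the paper works with vector payoffs and the semi-tensor product formulation.

```python
import numpy as np
from itertools import product

# Scalar special case of the potential equation: a two-player finite game is an
# exact potential game iff the linear system below has a solution. The paper's
# setting (vector payoffs, semi-tensor product form) is not reproduced here.

def is_potential_game(U1, U2, tol=1e-8):
    m, n = U1.shape
    rows, rhs = [], []
    for i, ip, j in product(range(m), range(m), range(n)):   # player 1 deviations
        if i != ip:
            r = np.zeros(m * n)
            r[i * n + j], r[ip * n + j] = 1.0, -1.0
            rows.append(r); rhs.append(U1[i, j] - U1[ip, j])
    for i, j, jp in product(range(m), range(n), range(n)):   # player 2 deviations
        if j != jp:
            r = np.zeros(m * n)
            r[i * n + j], r[i * n + jp] = 1.0, -1.0
            rows.append(r); rhs.append(U2[i, j] - U2[i, jp])
    A, b = np.array(rows), np.array(rhs)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.linalg.norm(A @ p - b) < tol, p.reshape(m, n)

U1 = np.array([[2.0, 0.0], [0.0, 1.0]])   # 2x2 coordination game
U2 = np.array([[2.0, 0.0], [0.0, 1.0]])
ok, P = is_potential_game(U1, U2)
print("potential game:", ok)
print("potential function (up to a constant):\n", P)
```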
Access Control Method for EV Charging Stations Based on State Aggregation and Q-Learning
TANG Ziyu, LUO Yonglong, FANG Daohong, ZHAO Chuanxin
2022, 35(6): 2145-2165. DOI: 10.1007/s11424-022-1155-z
This paper presents intelligent access control for a charging station, that is, a framework for dynamically and adaptively managing charging requests from randomly arriving electric vehicles (EVs) so as to increase the revenue of the station. First, the charging service requests from random EV arrivals are described as an event-driven sequential decision process, where the decision-making relies on an event-extended state composed of the real-time electricity price, the real-time charging station state, and the EV arrival event. Second, a state aggregation method is introduced to reduce the state space by first aggregating the charging station state in the form of the remaining charging time and then further aggregating it via sort coding. Explicit calculations of the code values are provided, and their uniqueness and consecutive-integer property are proved. Then, a corresponding Q-learning method is proposed to derive an optimal or suboptimal access control policy. The results of a case study demonstrate that the proposed learning optimisation method based on the event-extended state aggregation performs better than flat Q-learning: the space and time complexity are significantly reduced, which substantially improves the learning efficiency and optimisation performance.
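The following is a toy tabular Q-learning sketch for admission control at a charging station, meant only to convey the structure of the decision problem (price level and occupancy as the aggregated state, accept/reject as the action). The price process, service margin, departure model, and all parameters are invented and are not the event-extended state aggregation or MDP of the paper.

```python
import numpy as np

# Toy tabular Q-learning for admission control at a charging station.
# State = (price level, number of busy chargers); action = 0 (reject) / 1 (accept).
# Prices, margins and the departure model are invented for illustration only.

rng = np.random.default_rng(0)
n_chargers, prices = 4, np.array([0.5, 1.0, 2.0])
Q = np.zeros((len(prices), n_chargers + 1, 2))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(price_idx, busy, action):
    reward = 0.0
    if action == 1 and busy < n_chargers:
        reward = prices[price_idx] - 0.6            # margin from serving one EV
        busy += 1
    busy -= rng.binomial(busy, 0.3)                 # some EVs finish charging
    return rng.integers(len(prices)), busy, reward  # i.i.d. price process (assumed)

p, b = rng.integers(len(prices)), 0
for t in range(100_000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[p, b]))
    p2, b2, r = step(p, b, a)
    Q[p, b, a] += alpha * (r + gamma * Q[p2, b2].max() - Q[p, b, a])
    p, b = p2, b2

print(np.argmax(Q, axis=2))   # learned accept/reject policy per (price, occupancy)
```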
Event-Triggered Adaptive Fuzzy Finite Time Control of Fractional-Order Non-Strict Feedback Nonlinear Systems
XIN Chun, LI Yuanxin, NIU Ben
In this article, the problem of event-triggered adaptive fuzzy finite-time control of non-strict-feedback fractional-order nonlinear systems is investigated. By using the properties of the fuzzy basis functions, the obstacle caused by the algebraic loop problem is successfully circumvented. Moreover, a new adaptive event-triggered scheme is designed within the unified framework of the backstepping control method, which can greatly reduce the amount of communication. The stability of the closed-loop system is ensured through fractional-order Lyapunov stability analysis. Finally, the effectiveness of the proposed scheme is verified by simulation examples.
Positivity and Stability of Fractional-Order Linear Time-Delay Systems
HAO Yilin, HUANG Chengdai, CAO Jinde, LIU Heng
This article focuses on the positivity and the asymptotic stability of fractional-order linear time-delay systems (FOLTDSs) composed of $N$ $(N\geq2)$ subsystems. Firstly, a necessary and sufficient condition is given to ensure the positivity of FOLTDSs. The solutions of the studied systems are obtained by using the Laplace transform method, and it can be observed that the positivity of FOLTDSs is completely determined by the system matrices and is independent of the magnitude of the time delays. Secondly, a theorem is given to prove the asymptotic stability of positive FOLTDSs. By considering the monotonicity and asymptotic properties of systems with constant time delay, it is further shown that the asymptotic stability of positive FOLTDSs is independent of the time delay. Next, a state-feedback controller, whose gain matrix is derived by solving a linear programming problem, is designed such that the state variables of the systems are nonnegative and asymptotically convergent. When the order of the FOLTDSs is greater than one, a sufficient condition for positivity is presented by utilizing a proposed property of the Caputo derivative. Finally, simulation examples are presented to verify the validity and practicability of the theoretical analysis.
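For intuition, the snippet below checks the standard positivity ingredients for an integer-order linear delay system: a Metzler state matrix, a nonnegative delay matrix, and a Hurwitz sum for delay-independent stability. The example matrices are arbitrary, and these conditions are shown only as the familiar integer-order analogue of the fractional-order (Caputo) criterion established in the paper.

```python
import numpy as np

# Standard positivity ingredients for an integer-order delay system
# x'(t) = A x(t) + A_d x(t - tau): A Metzler, A_d entrywise nonnegative,
# and A + A_d Hurwitz for delay-independent asymptotic stability.
# Example matrices are arbitrary; the paper treats the fractional (Caputo) case.

def is_metzler(A):
    off_diag = A - np.diag(np.diag(A))
    return bool(np.all(off_diag >= 0))

A   = np.array([[-2.0, 0.5], [0.3, -1.5]])
A_d = np.array([[ 0.2, 0.0], [0.1,  0.4]])

print("A is Metzler:       ", is_metzler(A))
print("A_d is nonnegative: ", bool(np.all(A_d >= 0)))
print("A + A_d is Hurwitz: ", bool(np.all(np.real(np.linalg.eigvals(A + A_d)) < 0)))
```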
Orthogonal Decomposition of Incomplete-Profile Finite Game Space
DAI Xiaoyan, WANG Jinhuan, XU Yong
This work studies the orthogonal decomposition of the incomplete-profile normal finite game (IPNFG) space using the method of semi-tensor product (STP) of matrices. Firstly, by calculating the rank of the incomplete-profile potential matrix, the bases of incomplete-profile potential game subspace ($\mathcal{G}_P^{\it\Omega}$) and incomplete-profile non-strategic game subspace ($\mathcal{N}^{\it\Omega}$) are obtained. Then the bases of incomplete-profile pure potential game subspace ($\mathcal{P}^{\it\Omega}$) and incomplete-profile pure harmonic game subspace ($\mathcal{H}^{\it\Omega}$) are also revealed. These bases offer an expression for the orthogonal decomposition. Finally, an example is provided to verify the theoretical results.
Finite-Time Stability for Interval Type-2 Fuzzy Nonlinear Systems via an Observer-Based Sliding Mode Control
LIU Yu'an, XIA Jianwei, WANG Jing, SHEN Hao
This work focuses on the design of a sliding mode controller for a class of continuous-time interval type-2 fuzzy-model-based nonlinear systems with unmeasurable states over a finite-time interval. To describe the nonlinearities with parameter uncertainties that inevitably appear in practice, interval type-2 fuzzy sets are employed to model the studied system. To improve design flexibility, a fuzzy observer based on a non-parallel distributed compensation scheme is designed to estimate the state of the plant; that is, the observer is allowed to have a premise structure that does not match that of the system. On this basis, an appropriate fuzzy sliding surface and fuzzy controller are constructed using the same premise variables as the designed fuzzy observer. Then, by means of sliding mode control theory and the Lyapunov function method, novel sufficient criteria are established to ensure finite-time boundedness of the studied systems via a partitioning strategy covering the reaching phase, the sliding motion phase, and the whole time interval. Furthermore, the designed gains are obtained by solving a convex matrix optimization problem. Finally, the effectiveness of the developed method is demonstrated by two simulation examples.
Task-Space Tracking Control of Robotic Manipulator Via Intermittent Controller
MA Mihua, CAI Jianping
An intermittent controller for a robotic manipulator in task space is developed in this paper. For a given desired time-varying trajectory in task space, the robot end-effector can track the desired target under the designed intermittent controller. Different from most of the existing works on the control of robotic manipulators, the intermittent control is discussed in task space instead of joint space. Besides, the desired trajectory can be time-varying and is not limited to a constant. As a direct application, the authors implement the proposed controller on the tracking of a two-link robotic manipulator in task space. Numerical simulations demonstrate the effectiveness and feasibility of the proposed intermittent control strategy.
Comparison of Covariate Balance Weighting Methods in Estimating Treatment Effects
ZHAN Mingfeng, FANG Ying, LIN Ming
Different covariate balance weighting methods have been proposed by researchers from different perspectives to estimate the treatment effects. This paper gives a brief review of the covariate balancing propensity score method by Imai and Ratkovic (2014), the stable balance weighting procedure by Zubizarreta (2015), the calibration balance weighting approach by Chan et al. (2016), and the integrated propensity score technique by Sant'Anna et al. (2020). Simulations are conducted to illustrate the finite sample performance of both the average treatment effect and quantile treatment effect estimators based on different weighting methods. Simulation results show that, in general, the covariate balance weighting methods outperform the conventional maximum likelihood estimation method, while the performance of the four covariate balance weighting methods varies with the data generating processes. Finally, the four covariate balance weighting methods are applied to estimate the treatment effects of college graduation on personal annual income.
Multiple Change Points Detection in High-Dimensional Multivariate Regression
MA Xiaoyan, ZHOU Qin, ZI Xuemin
This paper considers the problem of detecting structural changes in a high-dimensional regression setting. The structural parameters are subject to abrupt changes of unknown magnitudes at unknown locations. The authors propose a new procedure that minimizes a penalized least-squares loss function via a dynamic programming algorithm for estimating the locations of change points. To alleviate the computational burden, the authors adopt a prescreening procedure by eliminating a large number of irrelevant points before implementing the estimation procedure. The number of change points is determined via Schwarz's information criterion. Under mild assumptions, the authors establish the consistency of the proposed estimators, and further provide error bounds for the estimated parameters, which achieve an almost-optimal rate. Simulation studies show that the proposed method performs reasonably well in terms of estimation accuracy, and a real data example is used for illustration.
Optimal Pricing and Return-Freight Insurance: Strategic Analysis of E-Sellers in the Presence of Reputation Differentiation
YANG Ying, CHAI Rui, SUN Xinyu, LI Yiming
Motivated by the practice that e-sellers cooperate with insurance companies to offer consumers return-freight insurance (RI), this paper investigates the competing e-sellers' RI strategies. To address the information asymmetry in the online context, reputation systems are widely applied by e-platforms. In an online market with two competing e-sellers that sell the same product but are differentiated in their reputation, this paper builds an analytical model to explore the e-sellers' optimal pricing and RI strategies. Combining the sellers' reputation with their RI strategies, the equilibrium outcomes under four cases are discussed. This paper reveals the conditions under which e-sellers are willing to offer RI. Specifically, the findings demonstrate that the low-reputation e-seller is more likely to offer RI. Moreover, when the sellers are more divergent, they are more likely to co-exist in the market. The insurance premium and the RI compensation play critical roles in their decisions. RI introduction tends to increase the price, thus offsetting the benefits of RI, but it does not affect the total consumer surplus.
Opinion Dynamics Induced by Agents with Particular Goal
LI Zhenpeng, TANG Xijin, HONG Zhenjie
The authors investigate opinion dynamics in a setting where some special agents with a particular goal (PG agents for short) manipulate beliefs and steer public opinion towards their desired direction. Based on the bounded confidence model, the authors find that PG agents can significantly improve the level of consensus. The authors also study how the opinion pattern is influenced by varying the model in terms of the network structure, different parameters, and the PG agents' choosing strategy. The authors compare the model results with empirical data from online social networks. The authors hope the study may shed light on public opinion control and regulation.
B-Spline Method for Spatio-Temporal Inverse Model
WANG Hongxia, ZHAO Zihan, WU Yuehua, LUO Xuehong
Inverse models can be used to estimate surface fluxes in terms of the observed atmospheric concentration measurement data. This paper proposes a new nonparametric spatio-temporal inverse model and provides the global expressions for the estimates by employing the B-spline method. The authors establish the asymptotic normality of the estimators under mild conditions. The authors also conduct numerical studies to evaluate the finite sample performance of the proposed methodologies. Finally, the authors apply the method to anthropogenic carbon dioxide (CO${}_{2}$) emission data from different provinces of Canada to illustrate the validity of the proposed techniques.
Partially Linear Single-Index Model in the Presence of Measurement Error
LIN Hongmei, SHI Jianhong, TONG Tiejun, ZHANG Riquan
The partially linear single-index model (PLSIM) is a flexible and powerful model for analyzing the relationship between the response and the multivariate covariates. This paper considers the PLSIM with measurement error possibly in all the variables. The authors propose a new efficient estimation procedure based on the local linear smoothing and the simulation-extrapolation method, and further establish the asymptotic normality of the proposed estimators for both the index parameter and nonparametric link function. The authors also carry out extensive Monte Carlo simulation studies to evaluate the finite sample performance of the new method, and apply it to analyze the osteoporosis prevention data.
Incorporating Variation and Quality of the Underlying Effects in Meta-Analysis
FU Jinyu, LIN Jinguan
This paper proposes a model to further explore the effects of quality information and of the variation of the underlying effects on the summary effect measure in meta-analysis. A shape parameter is used in this model to quantify the asymmetry of the effect sizes of the included studies. Estimation of the proposed model parameters is carried out by the Bayesian MCMC method. The performance of the resulting estimates is examined in simulations and in an empirical case with data obtained from a total of 22 meta-analyses taken from three different designs. The conclusion is that it is advisable to adopt the proposed model when quality information is available, in particular in situations where the underlying effects approximately follow a normal distribution. If, however, the quality information is absent, the skew-normal random-effects model should be adopted.
Nonparametric Two-Step Estimation of Drift Function in the Jump-Diffusion Model with Noisy Data
YE Xuguo, ZHAO Yanyong, LIN Jinguan, LONG Weifang
This paper considers a diffusion process whose drift and diffusion coefficients are nonparametric functions of the state variable. A two-step approach to estimate the drift function of a jump-diffusion model in noisy settings is proposed. The proposed estimator is shown to be consistent and asymptotically normal in the presence of finite-activity jumps. Simulated experiments and a real data application are undertaken to assess the finite sample performance of the newly proposed method.
The Invertibility of Rational Univariate Representations
XIAO Shuijing, ZENG Guangxing
In this paper, the so-called invertibility is introduced for rational univariate representations, and a characterization of the invertibility is given. It is shown that the rational univariate representations, obtained by both Rouillier's approach and Wu's method, are invertible. Moreover, the ideal created by a given rational univariate representation is defined. Some results on invertible rational univariate representations and created ideals are established. Based on these results, a new approach is presented for decomposing the radical of a zero-dimensional polynomial ideal into an intersection of maximal ideals.
Heilbronn's Problem of Eight Points in the Square
DEHBI Lydia, ZENG Zhenbing
In this work the authors consider the problem of optimally distributing 8 points inside a unit square so that the smallest area of the ${8\choose 3}$ triangles formed by them is maximal. Symbolic computations are employed to reduce the problem to a nonlinear programming problem and find its optimal solution. All computations are done using Maple.
\begin{document}
\begin{frontmatter}
\title{Asymptotic limits for a non-linear integro-differential equation modelling leukocytes' rolling on arterial walls}
\author[1]{Vuk Mili\v{s}i\'c\fnref{fn1}} \address[1]{CNRS/Laboratoire de Mathématiques de Bretagne Atlantique, UMR 6205, {France}} \fntext[fn1]{{\tt [email protected]}}
\author[2]{Christian Schmeiser\fnref{fn2}} \address[2]{Fakult\"at f\"ur Mathematik, Universit\"at Wien,
1090 Wien, Austria} \fntext[fn2]{{\tt [email protected]}}
\begin{keyword}
{{Leukocyte rolling,}}
{Lipschitz mechanical energy,}
{delayed gradient flow,}
{Volterra integral equations,}
{asymptotic limits} \end{keyword}
\begin{abstract}{
We consider a non-linear integro-differential model, presented in \cite{pmid29571710}, describing $z$, the position of the cell center on the real line.
We introduce a
new $\varepsilon$-scaling and we prove rigorously the asymptotics when $\varepsilon$ goes to zero.
We show that this scaling characterizes the long-time behavior of the solutions
of our problem in the kinematic regime ({\em i.e.} the velocity $\dot{z}$ tends to a limit).
The convergence results are first given when $\psi$, the elastic energy associated to linkages,
is convex and regular (the second order derivative of $\psi$
is bounded).
In the absence of blood flow, when $\psi$ is quadratic, we compute the final position $z_\infty$ to which we prove that $z$ tends.
We then build a rigorous mathematical framework for $\psi$ being convex but only Lipschitz.
We extend convergence results with respect to $\varepsilon$ to the case when
$\psi'$ admits a finite number of jumps.
In the last part, we show that in the constant force case (see Model 3 in \cite{pmid29571710}, {\em i.e.} $\psi$ is the absolute value) the problem can be solved explicitly, and we recover
the above asymptotic results.} \end{abstract} \end{frontmatter}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} Neutrophils are the first line of defense against bacteria and fungi and help fighting parasites and viruses. They are necessary for mammalian life, and their failure to recover after myeloablation is fatal. Neutrophils are short-lived, effective killing machines. They take their cues directly from the infectious organism, from tissue macrophages and other elements of the immune system. Neutrophils get close to their destination through the blood system. {When receiving chemical signals, they express adhesion molecules \cite{pmid23112187}, responsible for their rolling, slowing down, and eventual sticking to vessel walls \cite{pmid1572393} (see Fig. \ref{fig.neutro}), followed by extravasation and crawling through tissue towards their final destination.} \begin{figure}
\caption{A schematic view of the interactions between a neutrophil and the arterial wall in the blood flow. (illustration taken from \cite{pmid30530726})}
\label{fig.neutro}
\end{figure}
In this article we analyze a class of models for the process of rolling and slowing down along the vessel wall by transient elastic linkages. The model has the nondimensionalized form \begin{equation}\label{eq.z.nl}
\begin{aligned}
& \dot{z}_\e(t) + \int_{\RR_+} \psi'\left( \frac{{ z}_\e(t)-{ z}_\e(t-\varepsilon a)}{\varepsilon} \right) \varrho(a,t) da= v(t) \,, & t \in (0,T] \,,\\
& { z}_\e(t) = z_p(t) \,, &t\le 0 \,. \end{aligned} \end{equation} Here ${ z}_\e(t)\in\mathbb R$ is the position of the cell at time $t$ with the given past positions $z_p(t)$, $t\le 0$. The integro-differential equation describes a force balance between the friction force $f(t) = v(t) - \dot{z}_\e(t)$ with the blood flow velocity $v(t)$, and the elastic linkage forces between the cell and the vessel wall, described by the integral. These forces are parametrized by the age $a$ of the linkages, and the density of linkages with respect to their age is given by $\rho(a,t)\ge 0$. The function $\psi$ describes the potential energy of a linkage, dependent on the distance ${ z}_\e(t)-{ z}_\e(t-\varepsilon a)$ between the present position of the cell and its position when the linkage has been established (see Fig. \ref{fig.roll.cell.scheme}). The dimensionless parameter $\varepsilon>0$ results from scaling and represents the ratio between the typical age of a linkage and a characteristic time for the cell movement. Small values of $\varepsilon$ correspond to a rapid turnover of linkages. The occurrence of the factor $1/\varepsilon$ is a scaling assumption, needed to obtain an effect of the linkages in the limit $\varepsilon\to 0$. However, \eqref{eq.z.nl} can also be seen as a macroscopic rescaling of the model for the microscopic unknown $Z(\tau)={ z}_\e(\varepsilon\tau)/\varepsilon$. Note that in this interpretation we assume the data $v$ and $\rho$ to vary in terms of the macroscopic time $t$.
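As a purely illustrative aside, the following minimal Python sketch discretizes \eqref{eq.z.nl} by an explicit Euler scheme in time and a simple quadrature in the age variable; the quadratic potential, the exponential kernel, the constant velocity and the constant past data used in it are assumptions made only for this sketch and are not part of the modelling in \cite{pmid29571710}.
\begin{verbatim}
# Illustrative only: explicit discretization of the delay model with
# psi(u) = u^2/2, rho(a,t) = exp(-a), v = 1, z_p = 0 (all assumed here).
import numpy as np

eps, dt, T = 0.1, 1e-3, 5.0        # scaling parameter, time step, horizon
A, da = 20.0, 1e-2                 # truncation and step of the age variable
ages = np.arange(da, A, da)
rho = np.exp(-ages)                # assumed linkage-age density
psi_prime = lambda u: u            # psi(u) = u^2/2  =>  psi'(u) = u

n_steps = int(T / dt)
z = np.zeros(n_steps + 1)          # past positions z_p taken constant = 0
times = np.linspace(0.0, T, n_steps + 1)

for n in range(n_steps):
    t = times[n]
    # approximate z(t - eps*a), using the constant past data for t - eps*a < 0
    idx = np.clip(np.round((t - eps * ages) / dt).astype(int), 0, n)
    delayed = np.where(t - eps * ages >= 0.0, z[idx], 0.0)
    force = np.sum(psi_prime((z[n] - delayed) / eps) * rho) * da
    z[n + 1] = z[n] + dt * (1.0 - force)    # force balance with v(t) = 1

print("z(T) ~", z[-1], "  dz/dt(T) ~", (z[-1] - z[-2]) / dt)
\end{verbatim}
With these (assumed) data the computed velocity settles near $1/2$: the linkages act like an additional friction coefficient $\int_{\RR_+} a\,\varrho(a)\,da = 1$.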
\begin{figure}
\caption{The position of the moving binding site at time $t$ and time $t-{a_1}$ with some of the respective linkages.}
\label{fig.roll.cell.scheme}
\end{figure}
{Models of the form \eqref{eq.z.nl} with various choices of $\psi$ have been derived in \cite{pmid29571710}, passing from a probabilistic description to an averaged version. The simplest example is a linear model with quadratic potential energy $\psi(u) = u^2/2$. This has already been formulated in 1960s together with the formal macroscopic limit as a derivation of rubber friction \cite{Schallamach}. It has also been used in the context of the Filament Based Lamellipodium Model \cite{OeSch,MR3385931} for the crosslinking between cytoskeleton filaments and cell-substrate adhesion. There it is usually coupled with an age structured population model for the density $\rho$. Its mathematical analysis has been developed in \cite{MiOel.1,MiOel.2,MiOel.3,MiOel.4,Mi.5,mi.proc}. \\ Nonlinear models may contain the effects of material or of geometric nonlinearities. An example for the latter is a model with linkages in the form of membrane tethers \cite{pmid30530726} connecting the anchoring point with the closest boundary point of a circular cell with (microscopic) radius $r$, see Fig. \ref{fig.pyth}. This gives a tether length $\ell(u) = \sqrt{u^2+r^2}-r$ and, with linear material properties, $\psi(u) = \ell(u)^2/2 = O(u^4)$ as $u\to 0$. Concerning material nonlinearities we also allow models with nondifferentiable potentials such as constant force
$\psi(u)=|u|$. Our main structural assumption is convexity of $\psi$.}
\tikzstyle{vecArrow} = [thick, decoration={markings,mark=at position
1 with {\arrow[semithick]{open triangle 60}}},
preaction = {decorate},
postaction = {draw,line width=0.4pt}]
\begin{figure}
\caption{The actual length of filaments of a cell of radius $r$ is the dashed (green) segment whose length is $\ell=\sqrt{(z(t)-z(t-a))^2+r^2}-r$.}
\label{fig.pyth}
\end{figure}
{The formal macroscopic limit of \eqref{eq.z.nl} as $\varepsilon\to 0$ reads \begin{equation}\label{eq.zz.nl}
\begin{aligned}
& \dot{z}_0(t) + \int_{\RR_+} \psi'(a \dot{z}_0(t))\varrho(a,t) da = v(t) \,, & t>0 \,,\\
& z_0(t) =z_p(t) \,, & t\le0 \,. \end{aligned} \end{equation} Convexity of $\psi$ implies that the left hand side of the implicit ODE is a strictly increasing function of $\dot{z}_0(t)$. A rigorous justification of the macroscopic limit has been given in \cite{MiOel.1} for the linear problem without the additional friction force due to the blood flow. Generalizations of this result belong to the main goals of the present work.\\ Strongly related is the long time asymptotics. Assuming the data $(\rho(a,t),v(t))$ to converge to $(\rho_\infty(a),v_\infty)$ as $t\to\infty$, we expect convergence of the velocity $\dot{z}_\e(t)$ to a constant $\dot{z}_\infty$ satisfying \begin{equation}\label{eq.lim.t.large}
\dot{z}_\infty + \int_{\RR_+} \psi'(a \dot{z}_\infty) \varrho_\infty(a) da = v_\infty \,, \end{equation} essentially the same equation as in \eqref{eq.zz.nl}. Again we shall be interested in making this limit rigorous. }
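{For orientation, consider the quadratic potential $\psi(u)=u^2/2$ (this special case is not needed for the analysis below, but it makes the structure of the limit transparent): \eqref{eq.zz.nl} and \eqref{eq.lim.t.large} then reduce to
$$
\dot{z}_0(t)\left(1 + \int_{\RR_+} a\,\varrho(a,t)\,da\right) = v(t) \,,\qquad
\dot{z}_\infty\left(1 + \int_{\RR_+} a\,\varrho_\infty(a)\,da\right) = v_\infty \,,
$$
i.e. in the limit the linkages act as an additional effective friction coefficient $\int_{\RR_+} a\varrho\,da$.}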
Another concern of this article, motivated by the formal computations made in \cite{pmid29571710} is to give a rigorous mathematical meaning {to \eqref{eq.z.nl} in the case, when $\psi$ is only Lipschitz (as a consequence of convexity), and to justify the asymptotic limits also in this situation.}
\noindent The main results of this paper can be summarized as follows : \begin{enumerate}[i)]
\item {For $\psi$ convex and additionally with Lipschitz continuous derivative, a comparison principle for a class of integro-differential equations including \eqref{eq.z.nl} (proved in Section 2) is used in Section 3 to obtain an a priori estimate allowing to show global existence of a unique solution of \eqref{eq.z.nl}. The comparison principle is also used for an error estimate in the rigorous justification of the limit $\varepsilon\to 0$. Under weak convergence assumptions on the data as $t\to\infty$ we prove $z(t) = \dot{z}_\infty t + O(1)$ for the solution $z$ of \eqref{eq.z.nl} with $\varepsilon=1$, where $\dot{z}_\infty$ is the unique solution of \eqref{eq.lim.t.large}. The asymptotic behaviour of the $O(1)$-term remains open in general, except for a simple linear model problem with $\dot{z}_\infty=0$, where the limit of $z(t)$ can be computed explicitly.}
\item {In Section 4 the case of convex (and therefore locally Lipschitz) $\psi$ without any further smoothness assumptions is treated, except global Lipschitz continuity. In this situation a new notion of solution is needed. We take inspiration from gradient flows for nonsmooth energy functionals \cite{Evans.Book} and rewrite the problem with a smoothed potential as a variational inequality, where we can pass to the nonsmooth limit. The limiting variational inequality
\begin{equation}\label{eq.diff.inclusion.z.intro}
\begin{aligned}
(v(t)-\dot{z}_\e(t))(w-&{ z}_\e(t)) + \varepsilon \int_{\RR_+} \psi\left(\frac{{ z}_\e(t)-{ z}_\e(t-\varepsilon a)}{\varepsilon}\right) \varrho(a,t) da \\
& \leq \varepsilon \int_{\RR_+} \psi\left(\frac{w-{ z}_\e(t-\varepsilon a)}{\varepsilon}\right) \varrho(a,t) da \,, \quad \forall w \in \mathbb R \,,
\end{aligned}
\end{equation} is then equivalent to the differential inclusion $$
v(t) - \dot{z}_\e(t) \in \partial \int_{\RR_+} \varepsilon\, \psi\left( \frac{{ z}_\e(t)-{ z}_\e(t-\varepsilon a)}{\varepsilon} \right) \varrho(a,t) da \,, $$ where the right hand side is the subdifferential of the integral interpreted as a function of ${ z}_\e(t)$. We prove global existence of a solution in this sense. With $w={ z}_\e(t) + \varepsilon \hat w$, the variational inequality is written in a form where we can pass to the limit $\varepsilon\to 0$, giving
\begin{equation}\label{eq.zz.nl.lip}
\begin{aligned}
(v-\dot{z}_0(t)) & \hat w + \int_{{\RR_+}} \psi( a \dot{z}_0(t)) \varrho(a,t) da
\leq \int_{{\RR_+}} \psi( a \dot{z}_0(t)+\hat w ) \varrho(a,t) da , \quad \forall \hat w \in \mathbb R \,.
\end{aligned}
\end{equation} The linearization approach of Section 3 for the rigorous limit does not work in the nonsmooth case. However, convergence can be proved under the additional assumptions of time-independent data $(\rho,v)$, finitely many discontinuities of $\psi'$, and a nonvanishing limiting velocity. The proof relies on the fact that, by the nonvanishing velocity, the argument of $\psi'$ is close to the discontinuities only for a small set of values of $a$. We then extend this result to data $(\varrho_\varepsilon,v_\varepsilon)$ non-constant in time but whose $\varepsilon$-limit pair $(\varrho_0,v_0)$ is constant. Finally the convergence as $t\to\infty$ is transformed to the convergence as $\varepsilon\to 0$ by a rescaling, allowing to apply the previous result. This gives essentially that $z(t) = \dot{z}_\infty t + o(t)$, i.e. a weaker result than for smooth potentials, where $\dot{z}_\infty$ is equal to the solution $\gamma$ of
\begin{equation}\label{eq.zz.nl.lip.cst}
(v- \gamma)w + \int_{\RR_+} \psi(a \gamma ) \varrho(a) da \leq \int_{\RR_+} \psi(w + a\gamma ) \varrho(a)da \,,\quad \forall w \in \mathbb R \,.
\end{equation} }
\item In order to illustrate our results, we consider {in Section 5} the case when $\psi (u)= |u|$,
and study solutions of \eqref{eq.zz.nl.lip}. We show a {\em plastic} asymptotic behavior of the model~:
if $v_\infty \notin (-\mu_\infty,\mu_\infty)$ where $\mu_\infty := \int_{\RR_+} \varrho_\infty (a) da$, then $\gamma +\mu_\infty \,{\rm sgn}(\gamma) = v_\infty$ and $z \sim \gamma t$ when $t$ is large.
If $v_\infty \in [-\mu_\infty,\mu_\infty]$, the unique solution of \eqref{eq.zz.nl.lip} is $\gamma=0$ : the neutrophil should stop.
In this latter case, the previous asymptotic results do not prove that actually $\dot{z}$ vanishes for
$t$ growing large.
Assuming that $\varrho(a,t):=\varrho_\infty(a) \chiu{\{a<t\}}(a,t)$ with $\varrho_\infty$ being a decreasing integrable function and $\chiu{\{a<t\}}(a,t)$ the characteristic function of the set $\{a<t\}$, we show that
$$
z(t) = \begin{cases}
z^0+ \int_0^t[v_\infty-\mu_\infty(\tau)]_+ d\tau, & \text{if} \; v_\infty \geq 0,\\
z^0+ \int_0^t[v_\infty+\mu_\infty(\tau)]_- d\tau, & \text{if} \; v_\infty \leq 0.
\end{cases}
$$
where $\mu_\infty(t):=\int_{0}^{t} \varrho_\infty(a)\, da$ and $[\cdot]_\pm$ denotes the positive/negative part (a worked example with an exponential kernel is given right after this enumeration).
The same approach gives an explicit profile of $z(t)$ in the case when $v_\infty \notin [-\mu_\infty,\mu_\infty]$.
All these arguments provide rigorous mathematical justifications
of numerical observations and formal computation in \cite[Section 3.3.2]{pmid29571710}. \end{enumerate}
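{To make the stopping scenario concrete, consider for instance the exponential kernel $\varrho_\infty(a)=\beta e^{-\zeta a}$ with constants $\beta,\zeta>0$ (an assumption made only for this illustration) and $0\le v_\infty<\mu_\infty=\beta/\zeta$. Then $\mu_\infty(t)=\frac{\beta}{\zeta}\left(1-e^{-\zeta t}\right)$, the integrand $[v_\infty-\mu_\infty(\tau)]_+$ vanishes for $\tau\ge \tau_*:=-\frac{1}{\zeta}\ln\left(1-\frac{\zeta v_\infty}{\beta}\right)$, and the formula above yields
$$
\lim_{t\to\infty} z(t) = z^0 + \left(v_\infty - \frac{\beta}{\zeta}\right)\tau_* + \frac{v_\infty}{\zeta} \,,
$$
i.e. the cell stops at a finite position, in accordance with the case $v_\infty\in[-\mu_\infty,\mu_\infty]$ discussed above.}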
\section{{Notations, generic hypotheses, and a comparison principle}}
{We introduce some notation for the rest of this article. For the final time $T\in (0,\infty]$ we set $I_T:=[0,T]$ for $T<\infty$ and $I_T:=[0,\infty)$ for $T=\infty$}. {For function spaces we write} $L^p_t L^q_a := L^p(I_T;L^q({\RR_+}))$ for $(p,q) \in [1,\infty]^2$, and similarly $L^\infty_{a,t} := L^\infty({\RR_+}\times I_T)$. The weighted $L^p$ space of functions of $a\in{\RR_+}$ with non-negative weight $\omega(a)$ is denoted by $L^p(\omega(a)da)$, $1\le p\le \infty$.
We state the basic hypotheses that are common to results presented hereafter. Extra hypotheses will be assumed locally in the claims.
\begin{assum}\label{hypo.data} {For $0<T\le\infty$ we assume that }
\begin{compactenum}[i)]
\item {The potential $\psi$ is even, convex, and $\psi(0)=0\le\psi(u)$, $u\in\mathbb R$.}
\item The past {data $z_p$ is bounded and} Lipschitz on $\mathbb R_-$, {\em i.e.},
$$
| z_p(a_1) - z_p(a_2) | \leq L_p |a_1 -a_2|, \qquad (a_1,a_2) \in \mathbb R_-^2 \,.
$$
\item The source term {satisfies $v\in C^1(I_T)$}.
\item The {nonnegative} kernel {satisfies $\varrho\in C_B(I_T;L^1((1+a^2)da))$}.
\end{compactenum} \end{assum}
{For later use we prove a comparison principle and a stability estimate for a class of integro-differential equations including \eqref{eq.z.nl}. \begin{lemm}\label{lemm:comp} Let $\varepsilon,T>0$ and let $\phi(a,t,u)$ be measurable with respect to $(a,t)$, and let it be odd and nondecreasing as a function of $u$. Let the operator ${\mathcal H}$ be defined by $$
{\mathcal H}[z](t) := \dot z(t) + \int_0^\infty \phi\left(a,t,\frac{z(t)-z(t-\varepsilon a)}{\varepsilon}\right)da \,,\qquad 0<t\le T\,, $$ acting on functions $z$, whose values on $(-\infty,0]$ are prescribed. Then ${\mathcal H}$ satisfies the comparison principle $$
\Bigl({\mathcal H}[z](t)\ge 0 \mbox{ for } t>0 \Bigr)\quad\mbox{and}\quad \Bigl(z(t)\ge 0 \mbox{ for } t\le 0\Bigr) \qquad\Longrightarrow\qquad z\ge 0 \,. $$ Any solution $z\in C([0,T])$ of the problem $$
{\mathcal H}[z](t) = f(t) \,,\quad t>0 \,;\qquad z(t) = z_p(t) \,,\quad t\le 0 \,, $$ satisfies $$
|z(t)| \le \sup_{(-\infty,0)} |z_p| + \int_0^t |f(\tau)|d\tau \,,\qquad 0\le t\le T \,. $$ \end{lemm} \begin{proof} The comparison principle is, as usual, first shown for the case of strict inequalities. Thus, we assume ${\mathcal H}[z](t)>0$, $t>0$, and $z(t)>0$, $t\le 0$. Let $t_0>0$ denote the smallest zero of $z$. Then we arrive at the contradiction $$
\dot z(t_0) > -\int_0^\infty \phi\left(a,t_0,\frac{-z(t_0-\varepsilon a)}{\varepsilon}\right) da \ge 0 \,, $$ implying $z>0$. The statement with non-strict inequalities is obtained in the standard way by an approximation argument: For $\delta>0$ let $z_\delta := z + \delta(1+t_+)$. This implies $$
{\mathcal H}[z_\delta](t) \ge \delta + {\mathcal H}[z](t) \ge \delta > 0 \,,\quad t>0 \,,\qquad z_\delta(t) \ge \delta > 0 \,,\quad t\le 0 \,, $$ giving $z \ge -\delta (1+t_+)$ by the argument with strict inequalities and $z\ge 0$ in the limit $\delta\to 0$.\\
Finally we define $Z(t) := \sup_{\tau\le 0}|z_p(\tau)| + \int_0^t |f(\tau)|d\tau$, $t>0$, and $Z(t) := \sup_{\tau\le t}|z_p(\tau)|$, $t\le 0$. This implies \begin{align*}
&{\mathcal H}[Z-|z|](t) = |f(t)| - \,{\rm sgn}(z(t)) f(t) \\
&+ \int_0^\infty \left( \,{\rm sgn}(z(t)) \phi\left(a,t,\frac{z(t) - z(t-\varepsilon a)}{\varepsilon}\right) + \phi\left(a,t,\frac{Z(t)-Z(t-\varepsilon a) - |z(t)| + |z(t-\varepsilon a)|}{\varepsilon}\right)\right)da \\
&\ge \int_0^\infty \left( \,{\rm sgn}(z(t)) \phi\left(a,t,\frac{z(t) - z(t-\varepsilon a)}{\varepsilon}\right) + \phi\left(a,t,\frac{- |z(t)| + |z(t-\varepsilon a)|}{\varepsilon}\right)\right)da \,, \end{align*} where we have used the monotonicities of $Z$ and of $\phi$. For $z(t)>0$ we use the oddness of $\phi$ and write the integrand on the right hand side as $$
\phi\left(a,t,\frac{z(t) - z(t-\varepsilon a)}{\varepsilon}\right) - \phi\left(a,t,\frac{z(t) - |z(t-\varepsilon a)|}{\varepsilon}\right) \ge 0 \,, $$ by the monotonicity of $\phi$. For $z(t)<0$ the integrand reads $$
\phi\left(a,t,\frac{z(t) + |z(t-\varepsilon a)|}{\varepsilon}\right) - \phi\left(a,t,\frac{z(t) - z(t-\varepsilon a)}{\varepsilon}\right) \ge 0 \,, $$
showing ${\mathcal H}[Z-|z|](t)\ge 0$, $t>0$. Since obviously $Z(t) - |z(t)| \ge 0$ for $t\le 0$, an application of the comparison principle completes the proof. \end{proof} }
\section{The regular convex potential} \label{sec.reg}
{In this section the additional assumption $\psi\in C^{1,1}(\mathbb R)$ on the potential is used. We start with existence results for \eqref{eq.z.nl} and for the formal limit \eqref{eq.zz.nl} as $\varepsilon\to 0$.}
\begin{theorem}\label{thm.exist.uniq} Let Assumptions \ref{hypo.data} hold and {let furthermore $\psi'$ be Lipschitz on ${\RR_+}$.} Then there exists a unique solution {${ z}_\e\in C^1(I_T)$} of problem \eqref{eq.z.nl}. \end{theorem}
\begin{proof} {Local existence will be proven by Picard iteration as for ODEs in the space $C([0,\tau])$ with $\tau>0$ small enough. Since this is completely standard, we only prove the contraction property of the fixed point map $$
F[z](t) = z_p(0) + \int_0^t v(s)ds - \int_0^t\int_0^\infty \rho(a,s)\psi'\left(\frac{z(s)-z(s-\varepsilon a)}{\varepsilon}\right)da\,ds \,. $$ Let $z_1,z_2\in C([0,\tau])$ with $z_1(t)=z_2(t)=z_p(t)$, $t\le 0$. Then we estimate $$
\left|F[z_1](t) - F[z_2](t)\right| \le \frac{2L'\tau}{\varepsilon} \sup_{s\in(0,T)}\int_0^\infty \rho(a,s)da \sup_{s\in (0,\tau)}|z_1(s)-z_2(s)| \,, $$ with the Lipschitz constant $L'$ of $\psi'$, showing that $F$ is a contraction for $\tau$ small enough. Existence on $[0,T]$ follows from the a priori estimate $$
|{ z}_\e(t)| \le \sup_{(-\infty,0)}|z_p| + \int_0^t |v(s)|ds \,, $$ obtained by an application of Lemma \ref{lemm:comp} with $\phi(a,t,u)= \rho(a,t)\psi'(u)$ and $f=v$. Continuous differentiability of ${ z}_\e$ follows from the continuity of $v$ and $\rho$ with respect to $t$. } \end{proof}
\begin{lemm}\label{lem.exist.zz}
{Let the assumptions of Theorem \ref{thm.exist.uniq} hold.
Then there exists a unique solution $z_0\in C^1(I_T)$ of \eqref{eq.zz.nl}.} \end{lemm}
\begin{proof} This is an initial value problem for an implicit ODE. The monotonicity of $\psi'$ and $\psi'(0)=0$ imply existence and uniqueness of $\dot{z}_0(t)$ as well as
the stability estimate $|\dot{z}_0(t)|\le |v(t)|$. By the Lipschitz continuity of $\psi'$ and by $\rho\in C(I_T; L^1(a\,da))$, the left hand side of \eqref{eq.zz.nl} is continuous as a function of $\dot{z}_0(t)$ and $t$. This and the stability estimate imply continuity of $\dot{z}_0$, completing the proof. \end{proof}
\newcommand{\ti{u}_{0,\e}}{\ti{u}_{0,\varepsilon}}
Now we are in the position to prove a convergence result. \begin{theorem}\label{thm:eps20} Let the assumptions of Theorem \ref{thm.exist.uniq} hold. Then $\lim_{\varepsilon\to 0} { z}_\e = z_0$ uniformly in bounded subsets of $I_T$. \end{theorem} \begin{proof} A straightforward computation shows that the difference between \eqref{eq.z.nl} and \eqref{eq.zz.nl} can be written as a linearized problem for the error ${\hat{z}_\e} := { z}_\e -z_0$: $$
\cL_\e [{\hat{z}_\e}](t) = {\mathcal R}_\varepsilon(t) \,,\quad t>0 \,;\qquad {\hat{z}_\e}(t) = 0 \,,\quad t\le 0 \,, $$ with \begin{eqnarray*}
\cL_\e[z](t) &:=& \dot{z}(t) + \int_{\RR_+} k_\varepsilon(a,t) \frac{z(t) - z(t-\varepsilon a)}{\varepsilon}da \,,\\
k_\varepsilon(a,t) &:=& \varrho(a,t)\int_0^1 \psi''\left(s{ u}_\e(a,t) + (1-s)u_0(a,t)\right) ds \,, \end{eqnarray*} and with \begin{equation}\label{R.eps}
{\mathcal R}_\varepsilon(t) = \int_0^\infty k_\varepsilon(a,t)a\left( \dot{z}_0(t) - \frac{z_0(t) - z_0(t-\varepsilon a)}{\varepsilon a}\right)da \,. \end{equation}
Since $\psi''\ge 0$, Lemma \ref{lemm:comp} (with $\phi(a,t,u) = k_\varepsilon(a,t)u$) can be applied to the linearized problem, giving $|{\hat{z}_\e}(t)| \le \int_0^t {\mathcal R}_\varepsilon(\tau)d\tau$. It remains to estimate \eqref{R.eps}. We start with \begin{eqnarray*}
\left| \dot{z}_0(t) - \frac{z_0(t) - z_0(t-\varepsilon a)}{\varepsilon a}\right| &\le& \frac{1}{\varepsilon a} \int_{t-\varepsilon a}^t \left| \dot{z}_0(t) - \dot{z}_0(s)\right| ds \\
&\le& \frac{1}{\varepsilon a} \int_{t-\varepsilon a}^t \left| \dot{z}_0(t) - \dot{z}_0(s_+)\right| ds + \frac{1}{\varepsilon a} \int_{t-\varepsilon a}^t \left| \dot{z}_0(s_+) - \dot{z}_0(s)\right| ds \,. \end{eqnarray*} In the first term on the right hand side we use the modulus of continuity $\omega_t$ of $\dot{z}_0$ on the interval $[0,t]$. In the second term the integrand is bounded by Assumption \ref{hypo.data} (ii). Thus, $$
\left| \dot{z}_0(t) - \frac{z_0(t) - z_0(t-\varepsilon a)}{\varepsilon a}\right| \le \omega_t(\varepsilon a) + \left(|\dot{z}_0(0+)| + L_p\right) {\bf 1}_{t-\varepsilon a < 0} \,. $$ Since $k_\varepsilon \le L'\rho$ (with the Lipschitz constant $L'$ of $\psi'$ already used above), \begin{eqnarray*}
|{\mathcal R}_\varepsilon(t)| &\le& L' \int_0^\infty a\rho(a,t) \omega_t(\varepsilon a)da + L' \left(|\dot{z}_0(0+)| + L_p\right) \int_{t/\varepsilon}^\infty a\rho(a,t)da \\
&\le& L' \int_0^\infty a\rho(a,t) \omega_t(\varepsilon a)da + L' \left(|\dot{z}_0(0+)| + L_p\right) \int_0^\infty (1+a)a\rho(a,t)da \frac{\varepsilon}{\varepsilon + t}\,. \end{eqnarray*} The result follows by integration with respect to $t$ and by using the dominated convergence theorem for the first term on the right hand side. \end{proof} \renewcommand{\varrho_\infty}{\varrho_\infty} \newcommand{\dot{z}_\infty}{\dot{z}_\infty}
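{Although Theorem \ref{thm:eps20} only asserts convergence, a rate can be read off from the proof under an additional regularity assumption, which is not required above: if $T<\infty$ and $\dot{z}_0$ is Lipschitz on $[0,T]$ with constant $C_0$, then $\omega_t(\varepsilon a)\le C_0\varepsilon a$ and the above bounds give
$$
\int_0^T |{\mathcal R}_\varepsilon(t)|\,dt \le L' C_0\,\varepsilon \int_0^T\!\!\int_0^\infty a^2\rho(a,t)\,da\,dt
+ L' \left(|\dot{z}_0(0+)| + L_p\right) \sup_{t\in[0,T]}\int_0^\infty (1+a)a\rho(a,t)\,da \;\varepsilon\ln\left(1+\frac{T}{\varepsilon}\right) \,,
$$
so that $\|{ z}_\e - z_0\|_{C([0,T])} = O(\varepsilon|\ln\varepsilon|)$ as $\varepsilon\to 0$.}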
\renewcommand{\hat{v}}{\hat{v}} The rest of this section is concerned with large time asymptotics. For notational simplicity the parameter $\varepsilon$ is set to 1, whence \eqref{eq.z.nl} reads \begin{equation}\label{eq.z1.nl}
\begin{aligned}
& \dot z(t) + \int_{\RR_+} \psi'\left( z(t)-z(t-a)\right) \varrho(a,t) da= v(t) \,, & t >0 \,,\\
& z(t) = z_p(t) \,, &t\le 0 \,. \end{aligned} \end{equation} First we prove that for large time the velocity becomes approximately constant. For the time dependent data, a weak convergence assumption is sufficient, in the sense that the difference between the data and its asymptotic limit is integrable up to $t=\infty$.
\begin{theorem}\label{thm:t.inf.vinf.ne.0} Let the assumptions of Theorem \ref{thm.exist.uniq} hold with $T=\infty$ and let $v_\infty\in\mathbb R$ and $\varrho_\infty\in L^1((1+a)da)$ satisfy $$
\int_0^\infty |v(t)-v_\infty| dt < \infty \,,\qquad \int_0^\infty \int_0^\infty a|\varrho(a,t)-\varrho_\infty(a)| \; da \;dt < \infty \,. $$ Then there exists a unique solution $\dot{z}_\infty\in\mathbb R$ of \eqref{eq.lim.t.large}, such that the solution $z$ of \eqref{eq.z1.nl} satisfies $$
z(t) = \dot{z}_\infty t + O(1) \,,\qquad \mbox{as } t\to\infty \,. $$ \end{theorem} \newcommand{\hat{\varrho}}{\hat{\varrho}} \newcommand{{\mathcal L}}{{\mathcal L}} \begin{proof} Existence and uniqueness for \eqref{eq.lim.t.large} follows as in the proof of Lemma \ref{lem.exist.zz}, and we denote the solution by $\dot{z}_\infty:= \gamma$. A straightforward computation shows that $\hat z$, defined by $\hat z(t) := z(t) - \dot{z}_\infty t - z_p(0)$, $t>0$, and $\hat z(t) = 0$, $t\le 0$, satisfies the linearized equation ${\mathcal L}[\hat z](t) = {\mathcal R}(t)$, $t>0$, with \begin{eqnarray*}
{\mathcal L}[z](t) &=& \dot z(t) + \int_0^\infty k(a,t)(z(t)-z(t-a)) da \,,\quad k(a,t) = \rho(a,t) \int_0^1 \psi''(su(a,t)+(1-s)a\dot{z}_\infty)ds \,,\\
{\mathcal R}(t) &=& v(t)-v_\infty - \int_0^\infty \psi'(a\dot{z}_\infty)(\rho(a,t) - \rho_\infty(a))da \,. \end{eqnarray*} Lemma \ref{lemm:comp} can be applied with $\varepsilon=1$, $\phi(a,t,u)=k(a,t)u$, and $f = {\mathcal R}$, to show that, for any $t>0$, \begin{eqnarray*}
|z(t) - \dot{z}_\infty t| &\le& |z_p(0)| + |\hat z(t)| \le |z_p(0)| + \int_0^\infty |{\mathcal R}(\tau)|d\tau \\
&\le& |z_p(0)| + \int_0^\infty |v(\tau)-v_\infty| d\tau + L'|\dot{z}_\infty| \int_0^\infty \int_0^\infty a|\rho(a,\tau)-\rho_\infty(a)|da\,d\tau \,, \end{eqnarray*} completing the proof. \end{proof}
\renewcommand{\varepsilon}{\varepsilon} \renewcommand{\dot{z}}{\dot{z}} An improvement of this result, i.e. convergence of $\dot z(t)$ and of $z(t)- \dot{z}_\infty t$, can be achieved under additional assumptions, in particular for vanishing flow velocity $v$.
\begin{propm}\label{prop.dec.caract} Let the assumptions of Theorem \ref{thm.exist.uniq} hold with $T=\infty$ and let $v\equiv 0$. Let $\varrho$ satisfy $0 \ge (\partial_t + \partial_a) \varrho\in (L^\infty \cap L^1)({\RR_+}\times{\RR_+})$ and $0\le \varrho(0,t)\in L^\infty({\RR_+})$. Furthermore let there exist $\varrho_\infty \in L^1({\RR_+},(1+a))$ such that $\varrho(\cdot,t) \to \varrho_\infty$ in $L^1({\RR_+},(1+a))$. Then the solution of \eqref{eq.z1.nl} satisfies { $$
\int_0^\infty | \dot{z} (t) |^2 dt \leq \int_{\RR_+} \varrho (a,0)\psi(z_p(0)-z_p(-a)) da $$} and $ \lim_{t\to\infty} \dot{z}(t) = 0. $ \end{propm}
\begin{proof} Setting $u(a,t):=z(t)-z(t-a)$, the function $\psi(u(a,t))$ solves the transport problem $$ (\partial_t + \partial_a) \psi(u) = \psi'(u(a,t)) \dot{z},\quad \psi(u(0,t))=0 \quad \text{and}\quad \psi(u(a,0))=\psi({ u}_{I}(a)) \,. $$ {with ${ u}_{I}(a):=z_p(0)-z_p(-a)$. This connection between the delay equation and age structured population models has already been used in \cite{MiOel.1}, see also \cite{Diekmann}.} Considering $\varrho(a,t) \psi(u(a,t))$, it solves in the sense of characteristics (cf \cite[Theorem 2.1 and Lemma 2.1]{MiOel.1})~: $$
(\partial_t + \partial_a) \varrho \psi(u) - ((\partial_t + \partial_a )\varrho) \psi(u) = \varrho \psi'(u(a,t)) \dot{z} , $$ integrated in age this gives : $$ \ddt{} \int_{\RR_+} \varrho(a,t) \psi(u(a,t) ) da \leq \int_{\RR_+} \varrho \psi'(u(a,t)) da \dot{z} = - \dot{z}^2, $$ which then leads to~: $$ \left[ \int_{\RR_+} \varrho(a,t) \psi(u(a,t)) da \right]_{s=0}^{s=t} + \int_0^t \dot{z}^2 ds \leq 0. $$ This shows that $\dot{z}$ belongs to $L^2({\RR_+})$ since $$ \nrm{\dot{z}}{L^2({\RR_+})}^2 \leq \int_{\RR_+} \psi({ u}_{I}) \varrho(a,0) da < \infty.
$$ {With the formula $u(a_0,t) = \int_{t-a_0}^t \dot{z} (\tau) d\tau$, $a_0<t$, the Cauchy-Schwarz inequality implies} $$
| u(a_0,t) | \leq \sqrt{a_0} \nrm{\dot{z}}{L^2(t-a_0,t)}. $$ Using Lebesgue's Theorem, it is easy to show that $\lim_{t\to \infty} \nrm{\dot{z}}{L^2(t-a_0,t)} =0$. Thanks to Lebesgue's Theorem again, one shows that $$
\int_0^t \varrho_\infty(a) | u(a,t) | da \to 0 $$ when $t$ grows large. By hypothesis, $\psi'(0)=0$, so that $$
\left| \int_0^t \psi'(u(a,t)) \varrho_\infty(a)da \right| \leq \nrm{\psi''}{L^\infty(\mathbb R)} \int_0^t | u(a,t) |\varrho_\infty(a) da, $$ which shows that the left hand side also tends to zero as $t$ tends to infinity.
In order to study the convergence of $\int_{\RR_+} \varrho(a,t) \psi'(u(a,t)) da$ when $t$ goes to infinity, we split the integral in two parts : $$ \int_{\RR_+} \psi'(u) \varrho(a,t) da = \left( \int_0^t + \int_t^\infty \right) \psi'(u) \varrho(a,t) da =: I_1 + I_2. $$ For the first part one has : $$ I_1 = \int_0^t \psi'(u) \left(\varrho(a,t) - \varrho_\infty(a) \right)da + \int_0^t \psi'(u) \varrho_\infty (a) da $$ The last term is already estimated above and tends to zero when $t$ goes large. For the first one, as $\psi'(0)=0$, one has $$
\begin{aligned} \int_0^t & \psi'(u) (\varrho(a,t) -\varrho_\infty(a)) da
\leq \nrm{\psi''}{L^\infty} \nrm{\frac{u}{\sqrt{1+a}}}{L^\infty(0,t)} \nrm{(1+a)(\varrho(\cdot,t)-\varrho_\infty)}{L^1({\RR_+})}\\ \end{aligned}
$$ the latter term vanishing when $t$ grows by hypothesis. It remains to consider $I_2$. By {the definition of $u$ we have} $$ u(a,t) = { u}_{I}(a-t) + \int_0^t \dot{z}(\tau) d\tau \,,\qquad a\ge t \,, $$ and thus $$
| u(a,t) | \leq |{ u}_{I}(a-t)| + \sqrt{t} \nrm{\dot{z}}{L^2_t}, $$ which finally provides : $$
\left| \frac{u(a,t)}{(1+a)} \right| \leq \nrm{\frac{{ u}_{I}}{(1+a)}}{L^\infty_a} + \nrm{\dot{z}}{L^2_t}. $$ By Lebesgue's Theorem, this gives that $I_2$ tends to zero as $t$ goes to infinity. These arguments show that $\dot{z}$ vanishes at infinity since $\dot{z}(t) = - \int_{\RR_+} \varrho(a,t) \psi' (u(a,t) )da$. \end{proof}
{Finally we are able to identify the limit of $z(t)$ under the further assumptions that $\varrho$ is time independent and nonincreasing, and the problem is linear.} We assume that $\psi(u)=u^2/2$ {and $\partial_a \varrho(a) \leq 0$, and set $p(a,t) := \int_0^t u(a,\tau ) d\tau= \int_0^t (z(\tau)-z(\tau-a))d\tau$, which solves} \begin{equation}\label{eq.p} \left\{ \begin{aligned} & (\partial_t + \partial_a ) p = - \int_{\RR_+} \varrho(a) p(a,t) da + u_I(a) \,,\quad\text{ a.e. } (a,t) \in ({\RR_+})^2 \\ & p(0,t) =0, \quad
p(a,0) = 0. \end{aligned} \right. \end{equation} If $p$ reaches a steady state $p_\infty$, it should satisfy {$$
\partial_a p_\infty(a) = - \int_{\RR_+} \varrho({\ti{a}}) p_\infty({\ti{a}}) d{\ti{a}} + u_I(a) \,,\qquad p_\infty(0) =0 \,, $$ with the explicit solution $$
p_\infty(a) = \int_0^a { u}_{I}({\ti{a}})d{\ti{a}} - a \int_0^\infty \varrho({\ti{a}}) \int_0^{{\ti{a}}} { u}_{I}(\hat a)d\hat a\, d{\ti{a}} \left( 1 + \int_0^\infty \varrho({\ti{a}}){\ti{a}}\,d{\ti{a}}\right)^{-1} \,. $$} \newcommand{{\hat{p}}}{{\hat{p}}}
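{For the reader's convenience we indicate how the prefactor in this formula is obtained (the shorthand $K$ is introduced only here): writing $p_\infty(a) = \int_0^a { u}_{I}({\ti{a}})d{\ti{a}} - aK$ with $K := \int_{\RR_+} \varrho({\ti{a}}) p_\infty({\ti{a}}) d{\ti{a}}$ and inserting this expression into the definition of $K$ gives
$$
K \left( 1 + \int_0^\infty \varrho({\ti{a}}){\ti{a}}\,d{\ti{a}}\right) = \int_0^\infty \varrho({\ti{a}}) \int_0^{{\ti{a}}} { u}_{I}(\hat a)d\hat a\, d{\ti{a}} \,,
$$
which is exactly the coefficient of $-a$ above.}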
Then, setting ${\hat{p}}(a,t) := p(a,t)-p_\infty(a)$, it solves the homogeneous problem associated with \eqref{eq.p}, with the initial condition ${\hat{p}}(a,0)=-p_\infty(a)$. {Multiplication by $\rho{\hat{p}}$ and integration with respect to $a$ and $t$ gives $$
\int_0^\infty \rho {\hat{p}}^2 da - 2\int_0^t \int_0^\infty {\hat{p}}^2 \partial_a\rho \,da\,ds + 2\int_0^t \left(\int_0^\infty \rho{\hat{p}} \,da\right)^2 ds = \int_0^\infty \rho p_\infty^2 da \,. $$ We use the monotonicity of $\rho$ for the second term and the Cauchy-Schwarz inequality for the first to obtain $$
\left(\int_0^\infty \rho{\hat{p}} \,da\right)^2 + 2 \int_0^\infty \rho\,da \int_0^t \left(\int_0^\infty \rho{\hat{p}} \,da\right)^2 ds \le \int_0^\infty \rho\,da \int_0^\infty \rho p_\infty^2 da \,, $$ which implies $\int_0^\infty \rho(a){\hat{p}}(a,t)da \to 0$ as $t\to\infty$ using the same arguments as for $\dot{z}$ and $\int_{\RR_+} \rho (a) u(a,t) da$ in Proposition \ref{prop.dec.caract}. The simple computation $$
z(t) - z_p(0) + \int_{\RR_+} \varrho(a) p_\infty(a) da = -\int_{\RR_+} \varrho(a){\hat{p}}(a,t) da $$ completes the proof of the following result.}
\begin{propm}
{
Let the assumptions of Proposition \ref{prop.dec.caract} hold, let $\varrho$ be independent of $t$ and nonincreasing, and let $\psi(u) = u^2/2$.
Then the solution of \eqref{eq.z1.nl} satisfies
$$
\lim_{t\to\infty} z(t) = z_p(0) - \int_0^\infty \rho(a) p_\infty(a)da = \left( z_p(0) + \int_0^\infty \varrho(a) \int_{-a}^0 z_p(\tau)d\tau\right)
\left( 1 + \int_0^\infty \varrho({\ti{a}}){\ti{a}}\,d{\ti{a}}\right)^{-1} \,.
$$}
\end{propm}
For instance if
$\varrho(a):=\beta \exp(- \zeta a)$, where $\zeta$ and $\beta$ are constants, {$$
\lim_{t\to\infty} z(t) = \frac{\zeta^2 z_p(0) + \beta\zeta \int_{-\infty}^0 \exp(\zeta\tau)z_p(\tau) d\tau }{\zeta^2 + \beta} \,. $$ }
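{This expression follows from the preceding proposition by a direct computation, recorded here for convenience: with $\varrho(a)=\beta \exp(-\zeta a)$ one has
$$
\int_0^\infty \varrho(a) \int_{-a}^0 z_p(\tau)d\tau\,da = \int_{-\infty}^0 z_p(\tau) \int_{|\tau|}^\infty \beta e^{-\zeta a}da\,d\tau = \frac{\beta}{\zeta}\int_{-\infty}^0 e^{\zeta\tau} z_p(\tau)d\tau \,,\qquad
1+\int_0^\infty a\,\varrho(a)\,da = \frac{\zeta^2+\beta}{\zeta^2} \,,
$$
and inserting these two identities into the limit formula of the proposition gives the quotient above.}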
\section{Discontinuous stretching force -- differential inclusions} \label{sec.diff.inclusion} \renewcommand{\varepsilon}{\varepsilon}
{In this section we allow the elastic response function $\psi'$ to be discontinuous. However, different from the preceding section, we assume its boundedness. Note that in terms of the potential $\psi$ this means that the convexity assumption, which implies local Lipschitz continuity, is strengthened to global Lipschitz continuity. We start by smoothing $\psi$, to be able to apply results from the preceding section.} \newcommand{\psi_\delta}{\psi_\delta} \begin{lemm}\label{lem.reg} {Let Assumptions \ref{hypo.data} hold and furthermore $\psi\in C^{0,1}(\mathbb R)$ with Lipschitz constant $L$.
Let $\omega_1$ denote a smooth, even probability density and $\omega_\delta := \delta^{-1}\omega_1(\cdot/\delta)$.
Then, for $\delta>0$, $\psi_\delta := \omega_\delta\star\psi - \omega_\delta\star\psi(0)$ is smooth, even, convex, and Lipschitz continuous with Lipschitz constant $L$.
Furthermore $\psi_\delta'$ is Lipschitz continuous on $\mathbb R$ and $\lim_{\delta\to 0+}\psi_\delta = \psi$, uniformly on bounded subsets of $\mathbb R$.}
\end{lemm} \begin{proof}
Since $\psi$ is convex we have
$$
\psi(\theta u + (1-\theta) v- y )= \psi(\theta (u - y) + (1-\theta)(v-y)) \leq \theta \psi(u-y) + (1-\theta) \psi(v-y) \,.
$$
Integrating against $\omega_\delta(y)dy$ gives the convexity of $\psi_\delta$. {The estimate
$$
|\psi_\delta''(u)| = \left| \int_{\mathbb R} \psi'(u-v)\omega_\delta'(v)dv\right| \le \frac{L}{\delta} \int_{\mathbb R} |\omega_1'(\eta)|d\eta
$$
shows the Lipschitz continuity of $\psi_\delta'$.} The remaining results are
standard and can be found in basic textbooks (cf. \cite[Appendix C, Theorem 6]{Evans.Book}). \end{proof} \renewcommand{u_\e^\delta}{u_\varepsilon^\delta} \renewcommand{z_\e^\delta}{z_\varepsilon^\delta} \newcommand{\dot{z}^\delta_\e}{\dot{z}^\delta_\varepsilon}
{\begin{lemm}\label{lem:reg.exist} Let the assumptions of Lemma \ref{lem.reg} hold. Then problem \eqref{eq.z.nl} with $\psi$ replaced by $\psi_\delta$ has a unique solution $z_\e^\delta\in C^1(I_T)$, which is, for every compact $\tilde I\subset I_T$, bounded in $C^1(\tilde I)$ uniformly in $\delta$ and $\varepsilon$. \end{lemm} \begin{proof} The data with $\psi$ replaced by $\psi_\delta$ satisfy the assumptions of Theorem \ref{thm.exist.uniq}, implying the existence and uniqueness statement. The obvious estimates $$
|\dot{z}^\delta_\e(t)| \le \|v\|_{L^\infty(I_T)} + L\left\|\int_0^\infty \varrho(a,\cdot)da \right\|_{L^\infty(I_T)} \,,\qquad
|z_\e^\delta(t)| \le |z_p(0)| + t \|\dot{z}^\delta_\e\|_{L^\infty(I_T)} \,, $$ complete the proof. \end{proof} We shall deal with the lack of smoothness of the potential by passing to a variational formulation analogous to the treatment of gradient flows with nonsmooth convex potentials (see, e.g., \cite{Evans.Book}). For $t\in I_T$, $z:\, (-\infty,t)\to \mathbb R$, and $w\in\mathbb R$, we define $$
\mathcal{I}_\delta [z,t](w) := \varepsilon \int_{\RR_+} \psi_\delta\left(\frac{w-z(t-\varepsilon a)}{\varepsilon}\right) \varrho(a,t) da \,, $$ which is (for each $\delta>0$) a smooth function of $w$. With the notation from Lemma \ref{lem:reg.exist} we have by the convexity and smoothness of $\psi_\delta$ that for each $t\in I_T$ $$
z_\e^\delta(t) = \argmin_{w\in\mathbb R} \left( \mathcal{I}_\delta [z_\e^\delta,t](w) + w(\dot{z}^\delta_\e(t) - v(t))\right) \,, $$ or, equivalently, \begin{equation}\label{var-inequal-d}
\mathcal{I}_\delta [z_\e^\delta,t](w) \geq \mathcal{I}_\delta [z_\e^\delta,t](z_\e^\delta(t)) +(v(t)-\dot{z}^\delta_\e(t)) (w-z_\e^\delta(t)) \,,\qquad \forall\,w\in\mathbb R \,. \end{equation} The formal limit $$
\mathcal{I}[z,t](w) := \varepsilon \int_{\RR_+} \psi\left(\frac{w-z(t-\varepsilon a)}{\varepsilon}\right) \varrho(a,t) da \,, $$ of $\mathcal{I}_\delta$ is still a convex, but not necessarily a smooth function of $w$. \newcommand{\partial \mathcal{I}}{\partial \mathcal{I}} We define its set valued subdifferential by $$ \partial \mathcal{I} [z,t](w) := \left\{ q \in \mathbb R:\, \mathcal{I}[z,t](\hat w) \geq \mathcal{I}[z,t](w) + q (\hat w-w),\quad \forall \hat w \in \mathbb R \right\} \,. $$ For each $w\in\mathbb R$ it is a nonempty closed interval. An existence result, where \eqref{eq.z.nl} is replaced by a differential inclusion can now be proven by passing to the limit $\delta\to 0$ in \eqref{var-inequal-d}. \begin{theorem}\label{thm.z.eps.exist.inclusion} Let the assumptions of Lemma \ref{lem.reg} hold. Then there exists
${ z}_\e \in C^{0,1}_{loc}(I_T)$ such that, for almost every $t\in I_T$,
\begin{equation}\label{eq.z.eps.diff.incl1}
v(t) -\dot{z}_\e(t) \in \partial \mathcal{I}[{ z}_\e,t]({ z}_\e(t)) \,.
\end{equation} \end{theorem} \begin{proof} By Lemma \ref{lem:reg.exist} and the Arzela-Ascoli theorem, there exists ${ z}_\e \in C^{0,1}_{loc}(I_T)$, such that, as $\delta\to 0$, $z_\e^\delta$ converges (up to the choice of an appropriate subsequence) to ${ z}_\e$ uniformly on bounded subintervals of $I_T$. Also $\dot{z}^\delta_\e$ converges to $\dot{z}_\e$ in $L^\infty(I_T)$ weak star, where the notation is justified, since it is equal to the derivative of ${ z}_\e$ almost everywhere in $I_T$. By Lemma \ref{lem.reg}, ii) and iii), the integrands in $\mathcal{I}_\delta [z_\e^\delta,t](w)$ and $\mathcal{I}_\delta [z_\e^\delta,t](z_\e^\delta(t))$ converge pointwise in $a\in{\RR_+}$. By the uniform Lipschitz continuities of $\psi_\delta$ and $z_\e^\delta$ the integrands can be bound by $C(1+a)\rho\in L^1({\RR_+})$. Therefore we can pass to the limit in $\mathcal{I}_\delta [z_\e^\delta,t](w)$ and $\mathcal{I}_\delta [z_\e^\delta,t](z_\e^\delta(t))$ by dominated convergence.\\ The last term in \eqref{var-inequal-d} converges in $L^\infty(I_T)$ weak star, as a consequence of the strong convergence of $z_\e^\delta$ and of the weak star convergence of $\dot{z}^\delta_\e$. Therefore the limiting variational inequality $$
\mathcal{I} [{ z}_\e,t](w) \geq \mathcal{I}[{ z}_\e,t]({ z}_\e(t)) +(v(t)-\dot{z}_\e(t)) (w-{ z}_\e(t)) \,,\qquad \forall\,w\in\mathbb R \,, $$ holds for all Lebesgue points $t$ of $\dot{z}_\e$, and this is equivalent to \eqref{eq.z.eps.diff.incl1}. \end{proof}
The formal limit problem \eqref{eq.zz.nl.lip} is equivalent to $$
0 \in \partial \mathcal{J}_t(\dot z_0(t)) \qquad\mbox{with } \mathcal{J}_t(w) = \frac{w^2}{2} - v(t)w + \int_0^\infty \frac{\psi(aw)}{a}\rho(a,t)da \,, $$ which means that we are looking for a minimizer of $\mathcal{J}_t$. Since this a strictly convex, coercive function, a unique minimizer exists, showing the existence of a unique solution of \eqref{eq.zz.nl.lip}.
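As an illustration of this minimization problem, anticipating the constant force case of Section 5, take $\psi(u)=|u|$: then $\int_0^\infty \psi(aw)\rho(a,t)/a\,da = |w|\,\mu(t)$ with $\mu(t):=\int_0^\infty\rho(a,t)\,da$, so that
$$
\mathcal{J}_t(w) = \frac{w^2}{2} - v(t)w + \mu(t)|w| \,,\qquad
\dot{z}_0(t) = \argmin_{w\in\mathbb R} \mathcal{J}_t(w) = \,{\rm sgn}(v(t))\,\bigl[\,|v(t)|-\mu(t)\,\bigr]_+ \,,
$$
the classical soft-thresholding formula; in particular $\dot{z}_0(t)=0$ whenever $|v(t)|\le\mu(t)$, consistent with the stopping behaviour described in the introduction.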
In the following proof we shall need a result on the representation of subdifferentials \cite{clarke.book}.
With the definition
$$
{ u}_\e(a,t) :=
\begin{cases}
\frac{{ z}_\e(t)-{ z}_\e(t-\varepsilon a)}{\varepsilon}& \text{ if } t >\varepsilon a \\
\frac{{ z}_\e(t)-z_p(t-\varepsilon a)}{\varepsilon} & \text{ otherwise}
\end{cases},\quad u_0(a,t) := a \dot{z}_0(t) , \quad \text{ for a.e. } (a,t) \in {\RR_+} \times (0,T).
$$
we define the function
$$
f(w) := \int_{\RR_+} \psi ({ u}_\e(a,t) +w) \varrho(a,t) da \,.
$$
As a consequence of $\psi$ being convex and Lipschitz, the subdifferentials of $\psi$ and $f$ coincide with their
generalized gradients, as defined in \cite[Prop. 2.2.7]{clarke.book}. This allows to use \cite[Theorem 2.7.2]{clarke.book} implying
$$
\partial f(w) \subset \int_{\RR_+} \partial\psi(w+{ u}_\e(a,t)) \varrho(a,t) da \,.
$$
As a consequence there exist measurable selections $\zeta_{{ u}_\e}(a,t) \in \partial\psi({ u}_\e(a,t))$ and $\zeta_{u_0}(a,t) \in \partial \psi (u_0(a,t))$
such that
$$
\dot{z}_\e(t) + \int_{\RR_+} \zeta_{{ u}_\e} (a,t)\varrho(a,t) da = v(t) \,,\qquad \dot{z}_0(t) + \int_{\RR_+} \zeta_{u_0} (a,t)\varrho(a,t) da = v(t) \,.
$$
\renewcommand{\overline{u}}{\overline{u}} \begin{theorem}\label{thm.cvg.pcwz.cuu}
Assume that ${ z}_\e$ solves the differential inclusion \eqref{eq.diff.inclusion.z.intro}, with
\begin{itemize}
\item $\varrho$ is constant in time, and $\varrho \in L^1 ({\RR_+},(1+a)^2)\cap L^\infty({\RR_+})$
\item $v$ is constant,
\item $\psi$ is convex, $L_\psi$-Lipschitz and there
exists a finite set $U:= \{\overline{u}_{i}, \; i\in \{1,\dots,N\}\}$ such that
$\overline{u}_1 < \overline{u}_2 < \dots < \overline{u}_N$,
$\psi \in C^{1,1}(\mathbb R \setminus U )$
and there exists $L_{\psi'}$ such that
{$$
| \psi'(w_1) -\psi'(w_2) | \leq L_{\psi'} | w_2 - w_1 |.
$$}
for all {$(w_1,w_2)$} $ \in (-\infty,\overline{u}_1)^2 \cup\bigcup_{i=1}^{N-1} (\overline{u}_i,\overline{u}_{i+1})^2 \cup (\overline{u}_N,\infty)^2$.
\end{itemize}
Then there exists a unique $\gamma \in \mathbb R$ solving \eqref{eq.zz.nl.lip}. Moreover, if $\gamma\neq0$, then
{\begin{equation}
\label{eq.err.est}
\nrm{{ z}_\e- z_0}{C([0,T])} \leq C(v,\rho,L_{\psi'},z_p) \varepsilon \left| \ln \varepsilon \right|
\end{equation}
where $z_0(t) = \gamma t + z_p(0)$.} \end{theorem}
\begin{proof}
We prove the result for $N=1$; the general case $N>1$ works in the same way.
{
First, if $\gamma$ solves \eqref{eq.zz.nl.lip} with a kernel $\varrho(a)$ and a source term $v$ both constant in time,
then $\gamma$ is constant.
For the rest of the proof, we set $u_0 (a,t):= a \gamma$ and we assume that $\gamma \neq 0$.
Then, one defines $A_{\eta,t} := \{ a \in {\RR_+} \; \text{ s.t. } |{ u}_\e(a,t)-u_0(a,t)|\leq \eta\}$.
Since, for fixed $t$, the function $a \mapsto { u}_\e(a,t)-u_0(a,t)$
is continuous, $A_{\eta,t}$ is a closed set; in particular it is Lebesgue-measurable.
By hypothesis, there exists $\overline{u}\in\mathbb R$ such that $\psi \in C^{1,1}(\mathbb R\setminus\{\overline{u}\})$
and there exists a constant $L_{\psi'}$ such that
$$
\left| \psi'(u)-\psi'(v)\right| \leq L_{\psi'} |u-v| ,\quad \forall (u,v) \in (-\infty,\overline{u})^2 \cup (\overline{u},+\infty)^2.
$$
In this context, we consider four cases depending on whether $\gamma>0$ (resp $\gamma <0$) and
$\overline{u} \geq 0$ (resp. $\overline{u}<0$) :
\begin{enumerate}[i)]
\item If $\gamma>0$ and $\overline{u}<0$,
we assume that $\eta < |\overline{u}|$.
For every $a \in A_{\eta,t}$,
one has :
$$
u_0(a,t) = \gamma a \geq 0 > \overline{u}
$$
and
$$
-\eta < { u}_\e(a,t) - \gamma a < \eta
$$
which implies :
$$
\overline{u} < \gamma a + \overline{u} < \gamma a- \eta < { u}_\e(a,t).
$$
This means that for every $a \in A_{\eta,t}$,
$$
(u_0(a,t),{ u}_\e(a,t)) \in (\overline{u},\infty)^2.
$$
Both solutions lie in the domain where $\psi'$ is Lipschitz. Thus
$\zeta_{{ u}_\e}(a,t)=\psi'({ u}_\e(a,t))$ and $\zeta_{u_0}(a,t)=\psi'(u_0(a,t))$,
and thus setting
$$
{\mathcal R}_\eta(t):=\int_{A_{\eta,t}}\left( \zeta_{{ u}_\e}(a,t) - \zeta_{u_0}(a,t)\right) \varrho(a ) da
$$
one has that $|{\mathcal R}_\eta(t)| \leq \eta L_{\psi'} \nrm{\varrho}{L^1_a}$.
The symmetric case when $\gamma <0$ and $\overline{u} >0$ works the same provided again that $\eta < \overline{u}$.
\item If instead, $\gamma >0$ and $\overline{u} \geq 0$, there exists $a_0 \geq 0$ such that $\overline{u} = \gamma a_0$.
We split the previous integral in two parts~:
$$
{\mathcal R}_{\eta}(t) = \left(\int_{A_{\eta,t}\cap B(a_0,\omega)}+\int_{A_{\eta,t} \setminus B(a_0,\omega)} \right) \left( \zeta_{{ u}_\e}(a,t) - \zeta_{u_0}(a,t)\right) \varrho(a)da =: I_1(t) + I_2(t)
$$
where $\omega$ is a small positive parameter yet to be fixed.\\
The first term can be bounded by the measure of $B(a_0,\omega)$, indeed :
\begin{equation}\label{eq.I1}
|I_1(t)| \leq 2 L_\psi \int_{B(a_0,\omega)} \varrho(a) da \leq C \omega
\end{equation}
the latter bound being possible since $\varrho$ is also a bounded function. \\
Next, if $a \in A_{\eta,t} \setminus B(a_0,\omega)$ we start by choosing
$a \leq a_0 - \omega$. Moreover, we assume that
\begin{equation}\label{eq.cond.omega}
\boxed{\omega > \frac{\eta}{\gamma}}
\end{equation}
These two latter inequalities allow to write :
$$
0 < \eta < \omega \gamma \leq \gamma (a_0-a) = u_0(a_0,t)-u_0(a,t) = \overline{u} - u_0(a,t)
$$
which implies obviously that $u_0(a,t) < \overline{u}-\eta < \overline{u}$. Since $a\in A_{\eta,t}$,
$$
{ u}_\e(a,t) < \eta + u_0(a,t) < \eta + u_0(a_0,t) - \eta = \overline{u}
$$
so that ${ u}_\e(a,t) < \overline{u}$ as well.
This implies that : for $a \in A_{\eta,t}$ and $a\leq a_0-\omega$,
$({ u}_\e(a,t),u_0(a,t)) \in (-\infty,\overline{u})^2$. \\
If $a\geq a_0+\omega$ and $a \in A_{\eta,t}$, then
one shows in the same way that : $({ u}_\e(a,t),u_0(a,t)) \in (\overline{u},\infty)^2$. \\
The case when $\gamma <0$ and $\overline{u} \leq 0$ follows exactly the same lines and leads to the same
conclusion : when $a\in A_{\eta,t}\setminus B(a_0,\omega)$, provided that \eqref{eq.cond.omega} holds :
$$
({ u}_\e(a,t),u_0(a,t)) \in (-\infty,\overline{u})^2 \cup (\overline{u},\infty)^2.
$$
Thus $\zeta_{{ u}_\e}(a,t) = \psi'({ u}_\e(a,t))$
and $\zeta_{u_0}(a,t) = \psi'(u_0(a,t))$ and again
$$
\forall a \in A_{\eta,t} \setminus B(a_0,\omega), \; \; | \zeta_{{ u}_\e}(a,t) - \zeta_{u_0}(a,t) | \leq L_{\psi'} \eta
$$
which shows that
\begin{equation}\label{eq.I2}
\left|I_2(t)\right| \leq L_{\psi'} \eta \nrm{\varrho}{L^1_a}
\end{equation}
So, if for instance $\omega = 2 \eta / |\gamma|$, combining \eqref{eq.I1} and \eqref{eq.I2}, we have proved that :
$$
\left|{\mathcal R}_\eta(t) \right| \leq \frac{C \eta }{|\gamma|}.
$$
Note first that $\eta$ can be made arbitrarily small and
that the latter bound is uniform with respect to $\varepsilon$.
\end{enumerate}
Setting again ${\hat{z}_\e}(t) := { z}_\e(t) - z_0(t)$, we write the equation satisfied by the difference ${\hat{z}_\e}$~:
$$
\partial_t {\hat{z}_\e} + \int_{{\RR_+}}\left( {\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}}\right) \varrho(a) da = 0.
$$
We rewrite the last integral term on the left hand side as
$$
\begin{aligned}
\int_{{\RR_+}} \left(\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}\right) \varrho(a) da
= &
\int_{{\RR_+} \setminus A_{\eta,t}} \frac{\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}}{{ u}_\e(a,t)-u_0(a,t)}
({ u}_\e(a,t)-u_0(a,t))\varrho(a) da \\
& + \int_{ A_{\eta,t} } {\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}}
\varrho(a) da
\end{aligned}
$$
that becomes :
$$
\partial_t {\hat{z}_\e} + \int_{{\RR_+} \setminus A_{\eta,t}} \frac{\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}}{{ u}_\e(a,t)-u_0(a,t)}
({ u}_\e(a,t)-u_0(a,t))\varrho(a) da = - {\mathcal R}_\eta,
$$
and we denote
\begin{equation}
\label{eq.def.k.eps}
k_\varepsilon(a,t) := \frac{\zeta_{{ u}_\e(a,t)} - \zeta_{u_0(a,t)}}{{ u}_\e(a,t)-u_0(a,t)} \varrho(a) \chiu{{\RR_+} \setminus A_{\eta, t}}{(a)}.
\end{equation}
Since the subdifferential of $\psi$ is monotone, $k_\varepsilon$ is nonnegative; moreover it is a function in $L^1({\RR_+},(1+a)^2)$. Indeed
\begin{equation}\label{eq.borne.keps}
0 \leq k_\varepsilon(a,t) \leq {2 L_\psi} \varrho(a)/{\eta}.
\end{equation}
Our problem can thus be rephrased as
\begin{equation}\label{eq.dz.psip.disc}
\partial_t {\hat{z}_\e} + \int_{{\RR_+}} k_\varepsilon (a,t) \left\{ { u}_\e(a,t) - u_0(a,t) \right\} da = -{\mathcal R}_\eta,
\end{equation}
that becomes :
$$
\partial_t {\hat{z}_\e} + \int_{{\RR_+}} k_\varepsilon (a,t) \left\{ { u}_\e(a,t) - \ti{u}_{0,\e}(a,t) \right\} da =
-\int_{\RR_+} k_\varepsilon(a,t) (\ti{u}_{0,\e}(a,t) - u_0(a,t) ) da
-{\mathcal R}_\eta,
$$
where
$$
\ti{u}_{0,\e} (a,t) :=
\begin{cases}
\frac{z_0(t)-z_0(t-\varepsilon a)}{\varepsilon} = \gamma a & \text{if} \; t\geq \varepsilon a \\
\frac{z_0(t)-z_p(0)}{\varepsilon} = \frac{\gamma t}{\varepsilon} & \text{ otherwise}.
\end{cases}
$$
Thanks to this latter definition the first term in the right hand side above can be reduced to
$$
\int_{\RR_+} k_\varepsilon(a,t) (\ti{u}_{0,\e}(a,t) - u_0(a,t) ) da = \ue \int_\tse^\infty \left(\tse -a \right) k_\varepsilon(a,t) da
$$
Then we rewrite \eqref{eq.dz.psip.disc} as :
\begin{equation}\label{eq.hat.zz.lip}
\begin{aligned}
{\mathcal T}_\varepsilon [{\hat{z}_\e}](t) & =
\ue \int_{\tse}^{+\infty} k_\varepsilon(a,t) \left(a -\tse + {\hat{z}_\e}(t-\varepsilon a)\right) da - {\mathcal R}_\eta,
\end{aligned}
\end{equation}
where ${\mathcal T}_\varepsilon$ is defined as
$$
{\mathcal T}_\varepsilon [{\hat{z}_\e}](t) := \partial_t {\hat{z}_\e}(t) + \ue \left(\int_{{\RR_+}} k_\varepsilon(a,t) da\right) {\hat{z}_\e}(t) - \ue \int_0^\tse k_\varepsilon(a,t) {\hat{z}_\e}(t-\varepsilon a) da .
$$
The first term in the right hand side of \eqref{eq.hat.zz.lip} can be estimated
thanks to \eqref{eq.borne.keps}~:
\begin{equation}\label{eq.tail.estimates}
\left| \ue \int_{\tse}^{+\infty} k_\varepsilon(a,t) {\hat{z}_\e}(t-\varepsilon a) da\right| \leq \frac{4 L_\psi (1+L_{z_p}) \nrm{(1+a)^2\varrho}{L^1_a}}{\eta (1+\tse)}.
\end{equation}
At this step, we have proved that
$$
{\mathcal T}_\varepsilon [{\hat{z}_\e} ](t) \leq m(t) := C \left( \frac{\eta}{|\gamma|} + \frac{1}{\eta (1+t/\varepsilon)}\right).
$$
An easy computation shows that
$$
{\mathcal T}_\varepsilon \left[\left|{\hat{z}_\e}\right|\right] (t) \leq \,{\rm sgn}({\hat{z}_\e}(t)) {\mathcal T}_\varepsilon [{\hat{z}_\e} ](t) \leq m(t)
$$
and since $\int_0^t m(s)ds$ is non-decreasing and non-negative, one has
$$
{\mathcal T}_\varepsilon \left[ \int_0^t m(\tau) d\tau \right] \geq m(t)
$$
leading to the inequality :
$$
{\mathcal T}_\varepsilon \left[\left|{\hat{z}_\e}\right|\right] (t) \leq {\mathcal T}_\varepsilon \left[ \int_0^t m(\tau) d\tau \right]
$$
We are in the framework of \cite[Generalized Gronwall Lemma 3.10, p. 298]{Gripenberg_ea} and we write :
$$
| {\hat{z}_\e}(t) | \leq \int_0^t m(\tau) d\tau
= C \left(\frac{ \varepsilon \ln|\varepsilon|}{\eta } + t \frac{\eta}{|\gamma|}\right).
$$
Then, setting $\eta = \sqrt{\varepsilon\ln|\varepsilon|}$, one obtains the error estimates \eqref{eq.err.est} which ends the proof.} \end{proof}
{\begin{theorem}\label{thm.cvg.eps.nl.sing}
Let ${ z}_\e$ solve the differential inclusion \eqref{eq.diff.inclusion.z.intro}, with
\begin{compactenum}[i)]
\item The kernels $\varrho_\varepsilon$ and $\varrho_0$ are such that :
\begin{itemize}
\item $\varrho_\varepsilon \in L^1\cap L^\infty({\RR_+}\times(0,T))$
\item $\varrho_0 \in L^1({\RR_+},(1+a)^2)\cap L^\infty({\RR_+})$ is constant in time.
\end{itemize}
with $\varrho_\varepsilon - \varrho_0$ tending to zero
in $L^1({\RR_+}\times(0,T))$.
\item the source term $v_\varepsilon \in W^{1,\infty}(0,T)$ and $v_0 \in \mathbb R$ such that
$v_\varepsilon \to v_0 \in \mathbb R^*$ in $L^1(0,T)$,
\item $\psi$ satisfies the hypotheses of Theorem \ref{thm.cvg.pcwz.cuu}.
\end{compactenum}
Then the same conclusions as in Theorem \ref{thm.cvg.pcwz.cuu} hold.
\end{theorem}} {\begin{proof}
As this is a minor extension of Theorem \ref{thm.cvg.pcwz.cuu}, we only
point out the necessary extra arguments.
The difference ${\hat{z}_\e}$ satisfies now :
$$
\partial_t {{\hat{z}_\e}} + \int_{\RR_+} (\zeta_{{ u}_\e}-\zeta_{u_0}) \rho_0(a) da = \int_{\RR_+} \zeta_{{ u}_\e}(\rho_0-\rho_\e) da + v_\varepsilon(t) -v_0
$$
which following the same arguments as above becomes :
$$
\partial_t {{\hat{z}_\e}} + \int_{\RR_+} ({ u}_\e(a,t)-u_0(a,t))k_\varepsilon(a,t) da = - {\mathcal R}_\eta +
\int_{\RR_+} \zeta_{{ u}_\e}(\rho_0-\rho_\e) da + v_\varepsilon(t) -v_0
$$
where $k_\varepsilon$ is defined in \eqref{eq.def.k.eps}. One obtains, as above:
$$
{\mathcal T}_\varepsilon [|{\hat{z}_\e}| ](t) \leq m(t) := C \left( \frac{\eta}{|\gamma|} + \frac{1}{\eta (1+t/\varepsilon)}
+ L_\psi \int_{\RR_+} |\rho_\e(a,t)-\rho_0(a)| da + |v_\varepsilon(t) -v_0|\right).
$$
The same comparison principle as in Theorem \ref{thm.cvg.pcwz.cuu} then provides the claim
after integrating $m$ in time. \end{proof}}
\begin{rmkm}
If $\psi$ is only Lipschitz and convex, then its derivative has at most a countable set of discontinuity points in $\mathbb R$, whereas the hypotheses above on $\psi$ assume a finite number of isolated jumps of $\psi'$ on the real line.
To our knowledge it is not possible to extend the previous proof to the general countable case.
Nevertheless, the finite-jump setting seems sufficient for practical applications (cf., for instance, the examples in \cite{pmid29571710} and Section \ref{sec.abs.val}).
\end{rmkm}
Here we present a new way to recover large time asymptotics thanks to the $\varepsilon$ scaling above. \begin{theorem}\label{thm.t.large.Lipschitz}
Under Assumptions \ref{hypo.data}, and assuming that
\begin{compactenum}[1)]
\item $v_\infty \in \mathbb R$ { and $v \in W^{1,\infty} ({\RR_+})$ is such that $$
\int_{\RR_+} \left| v(t)-v_\infty\right| dt < \infty. $$ }
\item $\varrho_\infty\in L^1({\RR_+},(1+a))$ such that {
$$
\int_{\RR_+} \int_{\RR_+} \left| \varrho(a,t)-\varrho_\infty(a)\right| da dt < \infty.
$$
}
\item if $\psi$ satisfies assumptions of Theorem \ref{thm.cvg.pcwz.cuu},
\end{compactenum}
then if $z$ solves
\begin{equation}\label{eq.z.inclusion.eps.one}
\begin{aligned}
(v(t)-\dot{z}(t))(w-&z(t)) + \varepsilon \int_{\RR_+} \psi\left({z(t)-z(t-\varepsilon a)}\right) \varrho(a,t) da \\
& \leq \int_{\RR_+} \psi\left({w-{ z}_\e(t-\varepsilon a)}\right) \varrho(a,t) da, \quad \forall w \in \mathbb R.
\end{aligned}
\end{equation}
with $z_0(\tilde{t}):= \gamma\, \tilde{t}$, one has
\begin{equation}\label{eq.asymptotic.lim.time}
\lim_{t \to \infty} \left| \frac{z(t)}{t}-z_0(1) \right| =0
\end{equation}
where $\gamma$ solves \eqref{eq.zz.nl.lip.cst}.
\end{theorem}
\begin{proof}
We consider the solution $z$ of the problem \eqref{eq.z.nl} on the time interval $(0,1/\varepsilon)$,
where $\varepsilon>0$ is an arbitrarily small parameter.
We set ${ z}_\e(\tilde{t}):=\varepsilon z(\tilde{t}/\varepsilon)$ and $z_{p,\e}(\tilde{t}) := \varepsilon z_p(\tilde{t}/\varepsilon)$, then one has :
\begin{equation}\label{eq.chg.var}
\begin{aligned}
\partial_{\tit} { z}_\e (\tilde{t}) & = \partial_t z(\tilde{t}/ \varepsilon), \\
{ u}_\e(a,\tilde{t})& := \frac{{ z}_\e(\tilde{t})-{ z}_\e(\tilde{t}-\varepsilon a)}{\varepsilon}= z(\tilde{t}/\varepsilon )-z(\tilde{t}/\varepsilon -a)=:u(a,\tilde{t}/\varepsilon)
\end{aligned}
\end{equation}
So, if $z$ solves \eqref{eq.z.inclusion.eps.one}, then
${ z}_\e$ solves \eqref{eq.diff.inclusion.z.intro}.
By Theorem \ref{thm.cvg.eps.nl.sing},
${ z}_\e(\tilde{t})$ converges to $z_0(\tilde{t}):=\int_0^{\tilde{t}} \gamma(\tau) d\tau$
in $C([0,1])$. This gives for instance that
$$
\lim_{\varepsilon\to 0} | { z}_\e(1) - z_0(1) | =0.
$$
Returning to $z$ through the change of unknowns, with $t=1/\varepsilon$,
the latter limit is exactly \eqref{eq.asymptotic.lim.time},
which completes the claim.
\end{proof}
\section{An example from the literature}\label{sec.abs.val}
Here we consider the elastic response $\psi(u)=|u|$. In a first step, assuming that the data $(\varrho,v)$ are constant in time, we study the asymptotic limit \eqref{eq.zz.nl.lip.cst} and solve it explicitly (cf. Section \ref{sec.asymptotic}).
Then, assuming a specific form of the linkages' distribution in which no past positions are accounted for at time $t=0$, we show that it is possible to solve \eqref{eq.zz.nl.lip} explicitly in Section \ref{sec.exact}, and we illustrate this fact numerically in the last part. \subsection{Study of the limit equation \eqref{eq.zz.nl.lip.cst}}\label{sec.asymptotic}
\begin{propm}
We suppose that the kernel $\varrho$ is non-negative and satisfies
$\varrho(a,t)=\varrho_\infty(a) \in L^1({\RR_+})$.
If $\gamma(t)$ solves \eqref{eq.zz.nl.lip}, then it is constant and
\begin{enumerate}[i)]
\item if $\gamma>0$ then $v_\infty = \gamma + \mu_\infty$,
\item if $\gamma<0$ then $v_\infty = \gamma - \mu_\infty$,
\item if $\gamma=0$ then $v_\infty \in [-\mu_\infty,\mu_\infty]$,
\item if $v_\infty \in [-\mu_\infty,\mu_\infty]$ then $\gamma=0$.
\end{enumerate} \end{propm}
\begin{proof}
As in the proof of Theorem \ref{thm.cvg.pcwz.cuu}, if $\gamma$ solves \eqref{eq.zz.nl.lip}
with constant data, it is constant.
In the first case, if $\gamma>0$, then choosing $w<0$ implies that
$$
\begin{aligned}
w (v_\infty-\gamma) & + \gamma \int_{\RR_+} a \varrho_\infty da \\
& \leq \gamma \left( \int_{-\frac{w}{\gamma}}^\infty a \varrho_\infty da - \int_0^{-\frac{w}{\gamma}} a \varrho_\infty da \right)
+ w \left( \int_{-\frac{w}{\gamma}}^\infty \varrho_\infty da - \int_0^{-\frac{w}{\gamma}} \varrho_\infty da \right) .
\end{aligned}
$$
Using Lebesgue's Theorem and taking the limit when $w$ goes to $0^-$ gives
that $v_\infty - \mu_\infty \geq \gamma > 0$.
In the same way, if $\gamma <0$, expressing \eqref{eq.zz.nl.lip} for positive values of $w$ and taking the limit when $w\to 0^+$ provides that
$v_\infty+\mu_\infty \leq \gamma < 0$.
On the other hand if $\gamma >0$ (resp. $\gamma <0$) then choosing $w>0$
(resp. $w<0$) gives straightforwardly that $v_\infty - \mu_\infty \leq \gamma$
(resp. $v_\infty + \mu_\infty \geq \gamma$), which concludes the proof of i) and ii).
Taking $\gamma=0$ in \eqref{eq.zz.nl.lip} provides that
$$
v_\infty w \leq \mu_\infty |w|
$$
which ends the third claim.
For the last part, suppose there exist two distinct solutions $\gamma_i$, $i\in\{1,2\}$, both non-zero.
If they have the same sign, they are equal, since then i) or ii) holds.
If their signs are opposite, we end up with a contradiction, since then
$v_\infty - \mu_\infty >0$ and $v_\infty + \mu_\infty <0$ at the same time.
There remains the case when only one of the two solutions is zero (for instance $\gamma_1=0$).
In this case again we have a contradiction since then
$
v_\infty \notin [-\mu_\infty; \mu_\infty]
$ (since $\gamma_2 \neq 0$)
and $v_\infty \in [-\mu_\infty; \mu_\infty] $.
If $v_\infty \in (-\mu_\infty,\mu_\infty)$, then $\gamma=0$ is a solution of \eqref{eq.zz.nl.lip} since
$$
v_\infty w \leq \mu_\infty |w|, \quad \forall w \in \mathbb R
$$
which is \eqref{eq.zz.nl.lip} for $\gamma=0$. By uniqueness, it is the only one. \end{proof}
In fig. \ref{fig.gamma}, we plot the solution $\gamma$ of \eqref{eq.zz.nl.lip} in the case when $\varrho(a,t)=\varrho_\infty(a)$ and $v=v_\infty$. \begin{figure}
\caption{The velocity-force diagram when $\psi(u)=|u|$ and $\int_{\RR_+} \varrho_\infty (a)da =1$}
\label{fig.gamma}
\end{figure}
\subsection{The exact solution of \eqref{eq.diff.inclusion.z.intro}} \label{sec.exact}
We assume here in \eqref{eq.diff.inclusion.z.intro} that the kernel is such that $\varrho(a,t) = \varrho_\infty(a) \chiu{\{a<t\}}(a,t)$. Thus, we solve the problem: find $z \in {\hbox{\rm Lip}} ({\RR_+})$ solving
\begin{equation}\label{eq.sec.333}
(v_\infty - \dot{z}(t)) w + \int_0^t \varrho_\infty(a) | u(a,t) | da \leq
\int_0^t \varrho_\infty(a) | u(a,t) + w| da, \quad \forall t > 0, \end{equation}
together with the initial condition $z(0)=z^0$.
\begin{theorem}\label{thm.plastic}
Assume that $\varrho_\infty$ is a positive monotone non-increasing function in $L^1({\RR_+})$.
We set $\mu_\infty(t) = \int_0^t \varrho_\infty(a) da$, which tends to $\mu_\infty$ when $t$ goes to infinity.
Assume moreover that $v_\infty \in[-\mu_\infty, \mu_\infty]$; then
the only solution of \eqref{eq.sec.333} is
\begin{equation}\label{eq.zinf}
z(t) = \begin{cases}
z^0+ \int_0^t[v_\infty-\mu_\infty(\tau)]_+ d\tau, & \text{if} \; v_\infty \geq 0,\\
z^0+ \int_0^t[v_\infty+\mu_\infty(\tau)]_- d\tau, & \text{if} \; v_\infty \leq 0.
\end{cases}
\end{equation}
which tends as $t$ grows large to $z_\infty = z(t_1)$ where $t_1$ is such that $\mu_\infty(t_1)=v_\infty$.
\end{theorem}
\begin{proof} We assume hereafter that $\mu_\infty > v_\infty \geq 0$, since the opposite case works the same.
A simple computation gives that
$$
| v_\infty - \dot{z} | \leq \int_0^t \varrho_\infty (a )da =: \mu_\infty(t),
$$
which shows that
$0< v_\infty -\mu_\infty(t) \leq \dot{z}(t)$ on $[0,t_1)$, where $t_1$
is the time for which $\mu_\infty(t_1) = v_\infty$.
In this case setting $u(a,t):= \int_{t-a}^t \dot{z}(\tau) d\tau$, shows that
$u(a,t) \geq 0$, for $(a,t) \in \{ (a,t) \in [0,t_1]^2$ such that $ \; a\leq t\}=:\Gamma(t_1)$. For $t$ fixed one has that $u(a,t)$ is increasing with respect to $a \in [0,t]$ and absolutely continuous. Thus
there exists $a_0(w) \in [0,t]$ such that $u(a,t) \leq w$ for all $a \in [0,a_0(w)]$
and $u(a,t)\geq w$ for $a\in [a_0(w),t]$. This gives
$$
(v_\infty-\dot{z}(t))(-w) \leq w \left( \int_0^{a_0(w)} \varrho_\infty (a)da - \int_{a_0(w)}^{t} \varrho_\infty(a)da \right)
- 2 \int_0^{a_0(w)} \varrho_\infty(a)u(a,t) da,
$$
for all $w \in [0,u(t,t)]$. Passing to the limit as $w\to 0^+$, thanks to the
integrability of $\varrho_\infty(a) u(a,t)$ close to $a=0$ and since $a_0(w) \to 0$
when $w\to 0$, this gives $\dot{z}(t) \leq v_\infty - \mu_\infty(t)$.
So on $[0,t_1]$,
\begin{equation}\label{eq.dz}
\dot{z}(t) = v_\infty-\mu_\infty(t)
\end{equation}
Thus $u(a,t)= \int_{t-a}^{t} v_\infty - \mu_\infty(\tau) d \tau$ for every $(a,t)\in \Gamma(t_1)$.
We assume that on $(t_1,t_1+\delta)$, with $\delta$ a small positive parameter, $\dot{z}$ is negative definite. We fix $t\in (t_1,t_1+\delta)$. As $z$ is monotone increasing on $(0,t_1)$, there exists $\tau_1$ such that for all $\tau \leq \tau_1$, $z(\tau) \leq z(t)$, while for $\tau \in (\tau_1,t)$, $z(\tau )\geq z(t)$. We set $\eta >0$ a small parameter such that $t-\eta$ still belongs to $(t_1,t_1+\delta)$, there exists $\tau_2$ depending on $\eta$ such that $z(\tau)$ is in $(z(t-\eta),z(t_1))$ for $\tau \in (\tau_2,t-\eta)$, while $z(t-\eta) > z(\tau) $ for $\tau$ in $(0,\tau_2) \cup (t-\eta,t)$ (see fig. \ref{fig.tau}).
\begin{figure}
\caption{When we assume that $\dot{z}(t)<0$ on $(t_1,t_1+\delta)$}
\label{fig.tau}
\end{figure} One recovers from \eqref{eq.sec.333}, that $$ \begin{aligned} (v_\infty-\dot{z}(t)) & (z(t-\eta)-z(t)) \\ & + \underbrace{\int_0^{\tau_1} (z(t)-z(\tau) ) \varrho_\infty(t-\tau) d\tau
+ \int_{\tau_1}^{t} (z(\tau) -z(t)) \varrho_\infty(t-\tau) d\tau}_{I_1} \\ &\quad \leq
\underbrace{\int_0^{t}|(z(t-\eta)-z(\tau) | \varrho_\infty(t-\tau) d\tau}_{I_2}. \end{aligned} $$ We analyze the terms $I_1$ and $I_2$ : $$ I_1 = z(t) \left( \int_0^{\tau_1} - \int_{\tau_1}^t \right) \varrho_\infty(t-\tau) d\tau + \left( \int_{\tau_1}^t-\int_0^{\tau_1} \right) z(\tau) \varrho_\infty(t-\tau) d\tau ,
$$ while $$ \begin{aligned} I_2 & = z(t-\eta )\left(\int_{t-\eta}^{t} + \int_{0}^{\tau_2} - \int_{\tau_2}^{t-\eta}\right) \varrho_\infty(t-\tau) d \tau \\ & - \left(\int_{t-\eta}^{t} + \int_{0}^{\tau_2} - \int_{\tau_2}^{t-\eta}\right) z(\tau )\varrho_\infty(t-\tau) d \tau. \end{aligned} $$ This leads to write : $$ \begin{aligned} & (v_\infty-\dot{z}(t)) (z(t-\eta)-z(t)) + (z(t) - z(t-\eta))\left\{ \left( \int_{0}^{\tau_1} - \int_{ \tau_1}^{t}\right) \varrho_\infty(t-\tau) d\tau \right\}\\ & + 2 \left( \int_{\tau_1}^{\tau_2} + \int_{t-\eta}^{t}\right) (z(\tau )- z(t-\eta ))\varrho_\infty(t-\tau) d\tau \leq 0. \end{aligned} $$ Factorizing the difference $z(t-\eta)-z(t)$ and dividing by $\eta$ leads to write : $$ \begin{aligned} (v_\infty -\dot{z}(t) -&\mu_\infty(t)+ 2 \mu_\infty(t-\tau_1)) \frac{(z(t-\eta)-z(t))}{\eta} \\ &+ \underbrace{\frac{2}{\eta}\left( \int_{\tau_1}^{\tau_2} + \int_{t-\eta}^{t}\right) (z(\tau )- z(t-\eta ))\varrho_\infty(t-\tau) d\tau }_{I_3} \leq 0 \end{aligned} $$ As $z(\tau)$ is monotone either on $(\tau_1,\tau_2)$ or on $(t-\eta,t)$, the latter term can be estimated as $$
\left|I_3\right| \leq \left| \frac{z(t-\eta)-z(t)}{\eta}\right| \left(\int_{ \tau_1}^{\tau_2} + \int_{t-\eta}^t \varrho_\infty(t-\tau) d\tau \right) \leq C \nrm{\dot{z}}{L^\infty({\RR_+})} o_\eta(1) $$ since $\tau_2$ tends to $\tau_1$ as $\eta$ tends to zero. One concludes making $\eta$ tend to zero that $$ (v_\infty -\dot{z}(t) -\mu_\infty(t)+ 2 \mu_\infty(t-\tau_1)) ( - \dot{z}(t)) \leq 0 $$ which we divide by $-\dot{z}(t)$, since it is a positive definite quantity by hypothesis. This leads to $$ \underbrace{v_\infty - \int_{t-\tau_1}^t \varrho_\infty(a) da }_{I_4(t)} + \mu_\infty(t-\tau_1) \leq \dot{z}. $$ Then, assuming that $\varrho_\infty$ is a monotone non-increasing function, shows that $I_5(t) := \int_{t-\tau_1}^t \varrho_\infty(a) da$ is decreasing as well, thus $$ I_4(t)=v_\infty - I_5(t) \geq v_\infty - I_5(\tau_1) = v_\infty - \mu_\infty(\tau_1) \geq 0 $$ the latter estimate being true since $\tau_1 < t_1$, which finally gives that $$ \mu_\infty(t-\tau_1) \leq \dot{z}. $$ The latter quantity is strictly positive since $t>t_1>\tau_1$, this leads to a contradiction. Indeed, because $\mu_\infty(t_1)=v_\infty$ and $\lim_{t \to \infty} \mu_\infty(t) = \mu_\infty > v_\infty$,
there exists an open set $M \subset (t_1,\infty)$ of positive measure on which $\varrho_\infty(a)>0$ for a.e. $a\in M$. Since $\varrho_\infty(a)$ is decreasing there exist $a_0 \in M$ such that $\sup_M \varrho_\infty \geq \varrho_\infty(a_0) >0$. Take $\delta < a_0-t_1$ which implies that $t \in (t_1,a_0)$ then $$ \mu_\infty(t-\tau_1 ) := \int_0^{t-\tau_1} \varrho_\infty(a) da \geq \varrho_\infty(a_0)\int_0^{t-\tau_1} da = (t-\tau_1) \varrho_\infty(a_0) > 0. $$
Thus $\dot{z}$ cannot be negative definite.
We assume now that for $t\in(t_1,t_1+\delta)$, $\dot{z}(t)>0$. We fix $t$ as above. Again using \eqref{eq.sec.333}, one obtains~: $$ \left( v_\infty - \dot{z}(t) \right)(z(t_1)-z(t)) + \int_{0}^{t} (z(t)-z(t-a)) \varrho_\infty(a ) da \leq \int_{0}^{t} (z(t_1)-z(t-a)) \varrho_\infty(a ) da $$ which transforms into : $$ (v_\infty - \dot{z}(t) -\mu_\infty(t) ) (z(t_1) - z(t )) \leq 0 $$ which leads to $$ \dot{z} \leq v_\infty - \mu_\infty(t) < 0 $$ which again is a contradiction. Thus $\dot{z}$ must be zero on a positive neighborhood of $t_1$.
Since both arguments extend to any interval $I \in (t_1,\infty)$ the claim is proved when $v_\infty \in (-\mu_\infty,\mu_\infty)$. For the particular case when $v_\infty = \pm \mu_\infty$, the time $t_1$ such that $\mu_\infty(t)=\pm v_\infty$ is infinite. Thus \eqref{eq.dz} remains true on ${\RR_+}$ if $v_\infty=\mu_\infty$ and $\dot{z}(t)=v_\infty+\mu_\infty(t)$ if $v_\infty=-\mu_\infty$. This can be rewritten as $$ \begin{aligned} z(t) & = z^0 + \,{\rm sgn}(v_\infty) \int_0^t \int_{\tau}^{\infty} \varrho_\infty(a)da d\tau \\ & = z^0 + \,{\rm sgn}(v_\infty) \left\{ \int_0^t a \varrho_\infty(a)da + t \int_t^\infty \varrho_\infty(a)da\right\} . \end{aligned} $$ \end{proof}
\begin{corom}
Under the same hypotheses as above, but if $v_\infty \notin [-\mu_\infty,\mu_\infty]$, then
$$
z(t) = z^0 +
\int_{0}^{t} \left( v_\infty - \,{\rm sgn}(v_\infty ) \mu_\infty(\tau) \right)d\tau
= z^0 + \gamma t + \,{\rm sgn}(v_\infty ) \int_0^t \int_{\tau}^{\infty} \varrho_\infty (a) da d \tau
$$ \end{corom}
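As an added illustration of formula \eqref{eq.zinf}, consider the kernel $\varrho_\infty(a)=e^{-a}$ used in the numerical tests below, so that $\mu_\infty(t)=1-e^{-t}$ and $\mu_\infty=1$. For $0\leq v_\infty <1$ one has $t_1=-\ln(1-v_\infty)$ and
$$
z(t) = z^0 + \int_0^t \left[v_\infty-1+e^{-\tau}\right]_+ d\tau = z^0 + (v_\infty-1)\min(t,t_1) + 1 - e^{-\min(t,t_1)},
$$
so that the large time limit reads
$$
z_\infty = z^0 + v_\infty + (1-v_\infty)\ln(1-v_\infty) ;
$$
for the value $v_\infty=0.1$ used below this gives $z_\infty - z^0 \approx 5.2\cdot 10^{-3}$.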
\subsection{A numerical illustration} We discretize the previous problem using the minimizing movements scheme \cite{AmGiSa}. We denote $R_j := \exp(- j \Delta a)$, for $j\in \mathbb N$, and we approximate the functional $I[w,t] := \int_0^t | w - z(t-a) | \varrho_\infty(a) da$ by setting
$$
I_n[w] := \Delta a \sum_{j=0}^{n-1} | w - Z^{n-1-j} | R_j,
$$
and the total energy minimized at each time step $n$ reads
\begin{equation}\label{eq.minimiz.mvts}
{\mathcal E}_n(w):= \frac{(w-Z^{n-1})^2}{2 \Delta t} + I_n[w] - v_\infty w
\end{equation}
It is a convex functional with respect to $w$, and it admits a unique minimizer at each step $n$. So at each time step $t^{n}=n \Delta t$, we define $Z^n$ as
$$
Z^n = \argmin\limits_{w \in \mathbb R} {\mathcal E}_n(w).
$$
One can compare the displacement $z$ computed by this minimization scheme with the theoretical formula \eqref{eq.zinf} above. We plot in fig. \ref{fig.minimis} the result of this computation, where $v_\infty$ is set to $v_\infty=0.1$ in the plastic regime (cf. fig. \ref{fig.plastic}), and to $v_\infty=1.5$ in the kinematic regime (cf. fig. \ref{fig.elastic}), with $\mu_\infty=\int_{\RR_+} \exp(-a)da=1$.
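For the reader's convenience, we also sketch one possible implementation of this scheme in Python; it is only an illustration of the discretization above, and the step sizes, the search interval and the use of \texttt{scipy.optimize.minimize\_scalar} are choices of ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative data: rho_inf(a) = exp(-a) (so mu_inf = 1), v_inf = 0.1
# (plastic regime), z(0) = 0, and Delta a = Delta t.
v_inf, z0 = 0.1, 0.0
dt = da = 0.05
n_steps = 400

R = np.exp(-da * np.arange(n_steps))   # R_j = exp(-j * Delta a)
Z = [z0]                               # Z^0

for n in range(1, n_steps):
    hist = np.array(Z[::-1])           # Z^{n-1}, Z^{n-2}, ..., Z^0
    zprev = Z[-1]

    def energy(w):
        # E_n(w) = (w - Z^{n-1})^2/(2 dt) + da * sum_j |w - Z^{n-1-j}| R_j - v_inf * w
        return ((w - zprev) ** 2 / (2.0 * dt)
                + da * np.sum(np.abs(w - hist) * R[:n])
                - v_inf * w)

    res = minimize_scalar(energy, bounds=(zprev - 1.0, zprev + 1.0),
                          method="bounded")
    Z.append(res.x)

# The final value Z[-1] can be compared with the plateau predicted by the
# explicit formula above.
print(Z[-1])
\end{verbatim}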
\begin{figure}
\caption{Displacement $z(t)$ in the plastic case ($v_\infty=0.1<1=\mu_\infty$)}
\label{fig.plastic}
\caption{Velocity $\dot{z}(t)$ in the kinematic case ($v_\infty=1.5>1=\mu_\infty$)}
\label{fig.elastic}
\caption{Numerical simulation using a gradient flow scheme \eqref{eq.minimiz.mvts} associated to \eqref{eq.diff.inclusion.z.intro}}
\label{fig.minimis}
\end{figure}
\def$'$} \def\cprime{$'${$'$} \def$'$} \def\cprime{$'${$'$}
\end{document} | arXiv |
65537-gon
In geometry, a 65537-gon is a polygon with 65,537 ($2^{16}+1$) sides. The sum of the interior angles of any non–self-intersecting 65537-gon is 11796300°.
Regular 65537-gon
Type: Regular polygon
Edges and vertices: 65537
Schläfli symbol: {65537}
Symmetry group: Dihedral (D65537), order 2×65537
Internal angle (degrees): ≈179.994507°
Properties: Convex, cyclic, equilateral, isogonal, isotoxal
Dual polygon: Self
Regular 65537-gon
The area of a regular 65537-gon is (with t = edge length)
$A={\frac {65537}{4}}t^{2}\cot {\frac {\pi }{65537}}$
A whole regular 65537-gon is not visually discernible from a circle, and its perimeter differs from that of the circumscribed circle by less than one part per billion (roughly 4 parts in 10 billion).
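These figures can be checked with a short computation; the Python sketch below is added here purely as an illustration.

```python
import math

n = 65537
interior_angle = (n - 2) * 180 / n             # ≈ 179.994507 degrees
angle_sum = (n - 2) * 180                      # 11796300 degrees

t = 1.0                                        # edge length
area = n / 4 * t ** 2 / math.tan(math.pi / n)  # A = (n/4) t^2 cot(pi/n)

# Relative gap between the perimeter of the regular n-gon and the
# circumference of its circumscribed circle: 1 - sin(pi/n) / (pi/n).
perimeter_gap = 1 - math.sin(math.pi / n) / (math.pi / n)

print(interior_angle, angle_sum, area, perimeter_gap)
```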
Construction
The regular 65537-gon (one with all sides equal and all angles equal) is of interest for being a constructible polygon: that is, it can be constructed using a compass and an unmarked straightedge. This is because 65,537 is a Fermat prime, being of the form $2^{2^{n}}+1$ (in this case n = 4). Thus, the values $\cos {\frac {\pi }{65537}}$ and $\cos {\frac {2\pi }{65537}}$ are 32768-degree algebraic numbers, and like any constructible numbers, they can be written in terms of square roots and no higher-order roots.
Although it was known to Gauss by 1801 that the regular 65537-gon was constructible, the first explicit construction of a regular 65537-gon was given by Johann Gustav Hermes (1894). The construction is very complex; Hermes spent 10 years completing the 200-page manuscript.[1] Another method involves the use of at most 1332 Carlyle circles, and the first stages of this method are pictured below. This method faces practical problems, as one of these Carlyle circles solves the quadratic equation x2 + x − 16384 = 0 (16384 being $2^{14}$).[2]
Symmetry
The regular 65537-gon has Dih65537 symmetry, order 131074. Since 65,537 is a prime number there is one subgroup with dihedral symmetry: Dih1, and 2 cyclic group symmetries: Z65537, and Z1.
65537-gram
A 65537-gram is a 65,537-sided star polygon. As 65,537 is prime, there are 32,767 regular forms generated by Schläfli symbols {65537/n} for all integers 2 ≤ n ≤ 32768 as $\left\lfloor {\frac {65537}{2}}\right\rfloor =32768$.
See also
• Circle
• Equilateral triangle
• Pentagon
• Heptadecagon (17-sides)
• 257-gon
References
1. Johann Gustav Hermes (1894). "Über die Teilung des Kreises in 65537 gleiche Teile". Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (in German). Göttingen. 3: 170–186.
2. DeTemple, Duane W. (Feb 1991). "Carlyle circles and Lemoine simplicity of polygon constructions" (PDF). The American Mathematical Monthly. 98 (2): 97–208. doi:10.2307/2323939. JSTOR 2323939. Archived from the original (PDF) on 2015-12-21. Retrieved 6 November 2011.
Bibliography
• Weisstein, Eric W. "65537-gon". MathWorld.
• Robert Dixon Mathographics. New York: Dover, p. 53, 1991.
• Benjamin Bold, Famous Problems of Geometry and How to Solve Them New York: Dover, p. 70, 1982. ISBN 978-0486242972
• H. S. M. Coxeter Introduction to Geometry, 2nd ed. New York: Wiley, 1969. Chapter 2, Regular polygons
• Leonard Eugene Dickson Constructions with Ruler and Compasses; Regular Polygons Ch. 8 in Monographs on Topics of Modern Mathematics
• Relevant to the Elementary Field (Ed. J. W. A. Young). New York: Dover, pp. 352–386, 1955.
External links
• 65537-gon mathematik-olympiaden.de (German), with images of the documentation HERMES; retrieved on July 9, 2018
• Wikibooks 65537-Eck (German) Approximate construction of the first side in two main steps
• 65537-gon, exact construction for the 1st side, using the Quadratrix of Hippias and GeoGebra as additional aids, with brief description (German)
| Wikipedia |
Tagged: determinant of a matrix
Find All Values of $x$ such that the Matrix is Invertible
Given any constants $a,b,c$ where $a\neq 0$, find all values of $x$ such that the matrix $A$ is invertible if
\[
A=\begin{bmatrix}
1 & 0 & c \\
0 & a & -b \\
-1/a & x & x^{2}
\end{bmatrix}.
\]
Find All Eigenvalues and Corresponding Eigenvectors for the $3\times 3$ matrix
Find all eigenvalues and corresponding eigenvectors for the matrix $A$ if
\[
A=\begin{bmatrix}
2 & -3 & 0 \\
0 & 0 & 3
\end{bmatrix}.
\]
Find All Values of $a$ which Will Guarantee that $A$ Has Eigenvalues 0, 3, and -3.
Let $A$ be the matrix given by
\[
A=\begin{bmatrix}
-2 & 0 & 1 \\
-5 & 3 & a \\
4 & -2 & -1
\end{bmatrix}
\] for some variable $a$. Find all values of $a$ which will guarantee that $A$ has eigenvalues $0$, $3$, and $-3$.
Compute the Determinant of a Magic Square
\[
A=\begin{bmatrix}
8 & 1 & 6 \\
3 & 5 & 7 \\
4 & 9 & 2
\end{bmatrix}.
\] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Compute the determinant of $A$.
Given the Data of Eigenvalues, Determine if the Matrix is Invertible
In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not.
(a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$.
(b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$.
Express the Eigenvalues of a 2 by 2 Matrix in Terms of the Trace and Determinant
Let $A=\begin{bmatrix}
a & b\\
c& d
\end{bmatrix}$ be a $2\times 2$ matrix.
Express the eigenvalues of $A$ in terms of the trace and the determinant of $A$.
Linear Transformation $T:\R^2 \to \R^2$ Given in Figure
Let $T:\R^2\to \R^2$ be a linear transformation such that it maps the vectors $\mathbf{v}_1, \mathbf{v}_2$ as indicated in the figure below.
Find the matrix representation $A$ of the linear transformation $T$.
Is the Sum of a Nilpotent Matrix and an Invertible Matrix Invertible?
A square matrix $A$ is called nilpotent if some power of $A$ is the zero matrix.
Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample.
Linear Algebra Midterm 1 at the Ohio State University (2/3)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).
The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.
Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let
\[\mathbf{a}_1=\begin{bmatrix}
1 \\
\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}
-1 \\
\end{bmatrix}, \mathbf{b}=\begin{bmatrix}
a \\
\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5.
Find the inverse matrix of
\[A=\begin{bmatrix}
0 & 0 & 2 & 0 \\
0 &1 & 0 & 0 \\
1 & 0 & 0 & 1
\end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Consider the system of linear equations
\begin{align*}
3x_1+2x_2&=1\\
5x_1+3x_2&=2.
\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
An Example of a Matrix that Cannot Be a Commutator
Let $I$ be the $2\times 2$ identity matrix.
Then prove that $-I$ cannot be a commutator $[A, B]:=ABA^{-1}B^{-1}$ for any $2\times 2$ matrices $A$ and $B$ with determinant $1$.
Using the Wronskian for Exponential Functions, Determine Whether the Set is Linearly Independent
By calculating the Wronskian, determine whether the set of exponential functions
\[\{e^x, e^{2x}, e^{3x}\}\] is linearly independent on the interval $[-1, 1]$.
Find Inverse Matrices Using Adjoint Matrices
Let $A$ be an $n\times n$ matrix.
The $(i, j)$ cofactor $C_{ij}$ of $A$ is defined to be
\[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, then its inverse can be obtained by the formula
\[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
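As a quick numerical illustration of this formula (added here as an example; the sample matrix below is an arbitrary invertible one and is not part of the posted problems):

```python
import numpy as np

def adjugate(A):
    # Transpose of the cofactor matrix of a square matrix A.
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # (i, j) minor: delete the i-th row and j-th column.
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])            # arbitrary invertible example
A_inv = adjugate(A) / np.linalg.det(A)     # A^{-1} = Adj(A) / det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```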
For each of the following matrices, determine whether it is invertible, and if so, then find the invertible matrix using the above formula.
(a) $A=\begin{bmatrix}
0 &-1 &2 \\
\end{bmatrix}$.
(b) $B=\begin{bmatrix}
0 &1 &4 \\
True or False: If $A, B$ are 2 by 2 Matrices such that $(AB)^2=O$, then $(BA)^2=O$
Let $A$ and $B$ be $2\times 2$ matrices such that $(AB)^2=O$, where $O$ is the $2\times 2$ zero matrix.
Determine whether $(BA)^2$ must be $O$ as well. If so, prove it. If not, give a counter example.
How to Prove a Matrix is Nonsingular in 10 Seconds
Using the numbers appearing in
\[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix}
3 & 14 &1592& 65358\\
97932& 38462643& 38& 32\\
7950& 2& 8841& 9716\\
939937510& 5820& 974& 9
\end{bmatrix}.\]
Prove that the matrix $A$ is nonsingular.
Eigenvalues of a Matrix and its Transpose are the Same
Let $A$ be a square matrix.
Prove that the eigenvalues of the transpose $A^{\trans}$ are the same as the eigenvalues of $A$.
The Formula for the Inverse Matrix of $I+A$ for a $2\times 2$ Singular Matrix $A$
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula:
\[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
Using the formula, calculate the inverse matrix of $\begin{bmatrix}
2 & 1\\
1& 2
Determine Whether There Exists a Nonsingular Matrix Satisfying $A^4=ABA^2+2A^3$
Determine whether there exists a nonsingular matrix $A$ if
\[A^4=ABA^2+2A^3,\] where $B$ is the following matrix.
\[B=\begin{bmatrix}
-1 & 1 & -1 \\
2 & 1 & -4
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$.
(The Ohio State University, Linear Algebra Final Exam Problem)
The Product of Two Nonsingular Matrices is Nonsingular
Prove that if $n\times n$ matrices $A$ and $B$ are nonsingular, then the product $AB$ is also a nonsingular matrix.
The Determinant of a Skew-Symmetric Matrix is Zero
Prove that the determinant of an $n\times n$ skew-symmetric matrix is zero if $n$ is odd.
Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue
(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
# 1. Graph Theory Fundamentals
A graph consists of two main components: vertices (also called nodes) and edges. Vertices represent the objects or entities being modeled, and edges represent the relationships or connections between them.
There are two types of graphs: directed and undirected. In a directed graph, the edges have a specific direction, indicating a one-way relationship between vertices. In an undirected graph, the edges have no direction, indicating a two-way relationship between vertices.
For example, let's consider a social network. Each person in the network can be represented by a vertex, and the friendships between people can be represented by edges. If the graph is directed, an edge from vertex A to vertex B indicates that person A is friends with person B. If the graph is undirected, the edge between A and B indicates that A and B are friends with each other.
Graphs can be represented in different ways. One common representation is an adjacency matrix, which is a square matrix where the rows and columns represent the vertices, and the entries represent the presence or absence of edges between vertices. Another representation is an adjacency list, which is a list of vertices, where each vertex has a list of its adjacent vertices.
For example, let's consider the following graph:
```
A --- B
/ \ /
C D
```
This graph can be represented by the following adjacency matrix:
```
A B C D
A 0 1 1 0
B 1 0 0 1
C 1 0 0 0
D 0 1 0 0
```
And it can also be represented by the following adjacency list:
```
A: [B, C]
B: [A, D]
C: [A]
D: [B]
```
## Exercise
Consider the following graph:
```
A --- B
/ \ /
C D
```
Represent this graph using an adjacency matrix and an adjacency list.
### Solution
Adjacency matrix:
```
A B C D
A 0 1 1 0
B 1 0 0 1
C 1 0 0 0
D 0 1 0 0
```
Adjacency list:
```
A: [B, C]
B: [A, D]
C: [A]
D: [B]
```
# 1.1. Basic Terminology and Definitions
In graph theory, there are several basic terms and definitions that are important to understand. These terms help us describe and analyze the properties of graphs.
A path in a graph is a sequence of vertices connected by edges. The length of a path is the number of edges it contains. A simple path is a path that does not contain any repeated vertices.
A cycle in a graph is a path that starts and ends at the same vertex. A graph with no cycles is called acyclic.
For example, consider the following graph:
```
A --- B
/ \ /
C D
```
The path A-B-D is a simple path of length 2, while A-B-D-A is a cycle (it uses the edges A-B, B-D, and D-A shown in the drawing).
A connected graph is a graph in which there is a path between every pair of vertices. A disconnected graph is a graph that is not connected.
For example, consider the following graph:
```
A --- B E
/ \ /
C D
```
This graph has two connected components: the vertices A, B, C, and D form one component, since there is a path between every pair of them (A-B, A-C, A-D, B-C, B-D, C-D), while E has no incident edges and forms a component on its own. The graph as a whole is therefore disconnected; restricted to the vertices A, B, C, and D, however, it is connected.
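The component structure can also be computed programmatically. The sketch below (an illustrative addition) runs a breadth-first search over an adjacency list; the dictionary encodes the drawing above in the same way as the adjacency list given for it in the exercise of Section 1.3.

```python
from collections import deque

def connected_components(graph):
    # Return the connected components of an undirected graph (adjacency list).
    seen = set()
    components = []
    for start in graph:
        if start in seen:
            continue
        component = []
        queue = deque([start])
        seen.add(start)
        while queue:
            vertex = queue.popleft()
            component.append(vertex)
            for neighbour in graph[vertex]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append(neighbour)
        components.append(component)
    return components

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"], "E": []}

print(connected_components(graph))             # [['A', 'B', 'C', 'D'], ['E']]
print(len(connected_components(graph)) == 1)   # False: the graph is disconnected
```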
## Exercise
Consider the following graph:
```
A --- B E
/ \ /
C D
```
Is this graph connected or disconnected? If it is connected, provide an example of a path between two vertices. If it is disconnected, explain which vertices are in different components.
### Solution
This graph is disconnected. The vertices A, B, C, and D all lie in one connected component (for example, A-B-D is a path from A to D), while E has no edges and forms a separate component.
# 1.2. Types of Graphs
There are several different types of graphs that are commonly studied in graph theory. Each type of graph has its own unique properties and characteristics.
A directed graph, also known as a digraph, is a graph in which each edge has a direction. This means that the edges have an arrow indicating the direction of the relationship between the vertices. In a directed graph, the edges are ordered pairs of vertices.
For example, consider the following directed graph:
```
A ---> B
/ \ /
C D
```
In this graph, the edge from A to B indicates that there is a directed relationship from A to B. The edge from B to C indicates a directed relationship from B to C.
An undirected graph is a graph in which the edges have no direction. This means that the edges are unordered pairs of vertices. In an undirected graph, the relationship between the vertices is symmetric.
For example, consider the following undirected graph:
```
A --- B
/ \ /
C D
```
In this graph, the edges have no direction, so the relationship between the vertices is symmetric. The edge between A and B indicates a relationship between A and B, but it does not specify a direction.
## Exercise
Consider the following graph:
```
A --- B E
/ \ /
C D
```
Is this graph directed or undirected? If it is directed, provide an example of an edge with a direction. If it is undirected, explain why the edges have no direction.
### Solution
This graph is undirected. The edges have no direction because the relationship between the vertices is symmetric. For example, the edge between A and B indicates a relationship between A and B, but it does not specify a direction.
# 1.3. Representing Graphs
There are several different ways to represent graphs in computer science. Each representation has its own advantages and disadvantages, depending on the specific problem being solved.
One common way to represent a graph is using an adjacency matrix. An adjacency matrix is a square matrix where the rows and columns represent the vertices of the graph. The value in each cell of the matrix represents whether there is an edge between the corresponding vertices.
For example, consider the following graph:
```
A --- B
/ \ /
C D
```
The adjacency matrix for this graph would be:
```
A B C D
A 0 1 1 0
B 1 0 0 1
C 1 0 0 0
D 0 1 0 0
```
In this matrix, a value of 1 indicates that there is an edge between the corresponding vertices, while a value of 0 indicates that there is no edge.
Another way to represent a graph is using an adjacency list. An adjacency list is a collection of lists, where each list represents the neighbors of a particular vertex. Each vertex is associated with a list of its adjacent vertices.
For example, consider the following graph:
```
A --- B
/ \ /
C D
```
The adjacency list for this graph would be:
```
A: [B, C]
B: [A, D]
C: [A]
D: [B]
```
In this list, each vertex is associated with a list of its adjacent vertices. For example, vertex A is associated with the list [B, C], indicating that A is adjacent to B and C.
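The two representations carry the same information, so it is easy to convert between them. The short sketch below (an illustrative addition) rebuilds the adjacency matrix from the adjacency list of the example above, and then recovers the list from the matrix.

```python
vertices = ["A", "B", "C", "D"]
adj_list = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}

index = {v: i for i, v in enumerate(vertices)}

# Adjacency list -> adjacency matrix
matrix = [[0] * len(vertices) for _ in vertices]
for v, neighbours in adj_list.items():
    for w in neighbours:
        matrix[index[v]][index[w]] = 1

# Adjacency matrix -> adjacency list
rebuilt = {v: [vertices[j] for j, bit in enumerate(matrix[index[v]]) if bit]
           for v in vertices}

print(matrix)    # [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
print(rebuilt)   # {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A'], 'D': ['B']}
```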
## Exercise
Consider the following graph:
```
A --- B E
/ \ /
C D
```
Represent this graph using an adjacency matrix and an adjacency list.
### Solution
The adjacency matrix for this graph would be:
```
A B C D E
A 0 1 1 0 0
B 1 0 0 1 0
C 1 0 0 0 0
D 0 1 0 0 0
E 0 0 0 0 0
```
The adjacency list for this graph would be:
```
A: [B, C]
B: [A, D]
C: [A]
D: [B]
E: []
```
# 2. Bipartite Graphs and Matching
In graph theory, a bipartite graph is a graph whose vertices can be divided into two disjoint sets such that no two vertices within the same set are adjacent. In other words, a bipartite graph is a graph that can be colored using only two colors, such that no two adjacent vertices have the same color.
Bipartite graphs have many interesting properties and applications. One important concept in bipartite graphs is matching. A matching in a bipartite graph is a set of edges such that no two edges share a common vertex. In other words, a matching is a set of edges that do not intersect.
Matching problems arise in many real-world scenarios, such as assigning tasks to workers, pairing students for projects, or matching patients with organ donors. The Hungarian algorithm is a well-known algorithm for finding a maximum matching in a bipartite graph.
# 2.1. Definition and Properties of Bipartite Graphs
A bipartite graph is a graph G = (V, E) where V can be partitioned into two disjoint sets V1 and V2 such that every edge in E connects a vertex from V1 to a vertex from V2. In other words, there are no edges that connect vertices within the same set.
Bipartite graphs have several important properties. One property is that a bipartite graph can be properly colored with two colors: assigning one color to each of the two vertex sets gives a coloring in which no two adjacent vertices share a color, so the chromatic number of a bipartite graph with at least one edge is 2.
Another property of bipartite graphs is that they do not contain any odd cycles. An odd cycle is a cycle with an odd number of vertices. In a bipartite graph, all cycles have an even number of vertices.
Consider the following bipartite graph:
```
A --- B
/ /
C --- D
```
This graph can be partitioned into two sets: V1 = {A, C} and V2 = {B, D}. Every edge in the graph connects a vertex from V1 to a vertex from V2.
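Bipartiteness can be tested by trying to 2-color the graph with a breadth-first search. The sketch below is an illustrative addition; the concrete edge set used in the example dictionary (A-B, A-D, C-B, C-D) is an assumed one, chosen to be consistent with the partition V1 = {A, C}, V2 = {B, D} described above.

```python
from collections import deque

def two_colouring(graph):
    # Try to 2-colour an undirected graph; return the colouring, or None if impossible.
    colour = {}
    for start in graph:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            vertex = queue.popleft()
            for neighbour in graph[vertex]:
                if neighbour not in colour:
                    colour[neighbour] = 1 - colour[vertex]
                    queue.append(neighbour)
                elif colour[neighbour] == colour[vertex]:
                    return None   # two adjacent vertices would need the same colour
    return colour

bipartite_example = {"A": ["B", "D"], "C": ["B", "D"], "B": ["A", "C"], "D": ["A", "C"]}
triangle = {"X": ["Y", "Z"], "Y": ["X", "Z"], "Z": ["X", "Y"]}

print(two_colouring(bipartite_example))   # {'A': 0, 'B': 1, 'D': 1, 'C': 0}
print(two_colouring(triangle))            # None: a triangle contains an odd cycle
```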
## Exercise
Identify whether the following graphs are bipartite or not:
1.
```
A --- B
/ |
C --- D
```
2.
```
A --- B
/ \ |
C D - E
```
3.
```
A --- B
/ \ / \
C D - E - F
```
### Solution
1. This graph is bipartite. It can be partitioned into two sets: V1 = {A, D} and V2 = {B, C}. Every edge in the graph (A-B, A-C, B-D, C-D) connects a vertex from V1 to a vertex from V2.

2. This graph is bipartite. It can be partitioned into two sets: V1 = {A, E} and V2 = {B, C, D}. Every edge in the graph (A-B, A-C, A-D, B-E, D-E) connects a vertex from V1 to a vertex from V2.

3. This graph is not bipartite. The vertices A, B, and D are pairwise adjacent, so they form a cycle of length 3 (an odd cycle), and no partition into two sets can keep all of these edges between the sets.
# 2.2. Matching in Bipartite Graphs
Matching in bipartite graphs is the problem of finding a subset of edges in a bipartite graph such that no two edges share a common endpoint. In other words, a matching is a set of edges where each vertex is incident to at most one edge in the matching.
A matching in a bipartite graph is called a perfect matching if it covers all the vertices in the graph. That is, every vertex in the graph is incident to exactly one edge in the matching.
Finding a maximum matching in a bipartite graph is a well-studied problem in graph theory. There are several algorithms that can be used to find a maximum matching, such as the Hopcroft-Karp algorithm and the Hungarian algorithm.
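The simplest of these approaches grows a matching one augmenting path at a time (this is often called Kuhn's algorithm; the Hopcroft-Karp algorithm is a faster refinement of the same idea). The sketch below is an illustrative addition, and the small example at the end uses one possible reading of the graph drawn next.

```python
def max_bipartite_matching(left, adj):
    # left: vertices on the left side; adj: left vertex -> list of right neighbours.
    match_right = {}                      # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if v not in match_right or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    for u in left:
        try_augment(u, set())
    return {u: v for v, u in match_right.items()}

left_vertices = ["A", "D"]
adj = {"A": ["B", "C"], "D": ["B", "C"]}
print(max_bipartite_matching(left_vertices, adj))   # {'D': 'B', 'A': 'C'}, a perfect matching
```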
Consider the following bipartite graph:
```
A --- B
/ |
C --- D
```
A possible matching in this graph is {(A, B), (C, D)}. This matching covers all the vertices in the graph, so it is a perfect matching.
## Exercise
Find a maximum matching in the following bipartite graphs:
1.
```
A --- B
/ |
C --- D
```
2.
```
A --- B
/ \ |
C D - E
```
### Solution
1. The maximum matching in this graph is {(A, B), (C, D)}. This matching covers all the vertices in the graph, so it is a perfect matching.
2. A maximum matching in this graph is {(A, C), (D, E)}, which has size 2. Since the graph has five vertices, at least one vertex (here B) must remain unmatched, so this matching is not perfect and no perfect matching exists for this graph.
# 2.3. Maximum Matching and Minimum Vertex Cover
In a bipartite graph, the size of a maximum matching is equal to the size of a minimum vertex cover. A vertex cover is a subset of vertices in a graph such that every edge in the graph is incident to at least one vertex in the subset.
The size of a maximum matching can be found using the concept of augmenting paths. An augmenting path is a path in the graph that starts and ends with unmatched vertices and alternates between matched and unmatched edges. By finding augmenting paths and updating the matching accordingly, we can increase the size of the matching until no more augmenting paths can be found.
On the other hand, a minimum vertex cover can be found using the concept of alternating paths. An alternating path is a path in the graph that starts and ends with unmatched vertices and alternates between matched and unmatched edges. By finding alternating paths and updating the vertex cover accordingly, we can decrease the size of the vertex cover until no more alternating paths can be found.
Consider the following bipartite graph:
```
A --- B
/ |
C --- D
```
A maximum matching in this graph is {(A, B), (C, D)}. The size of this matching is 2.
A minimum vertex cover in this graph is {A, D}. The size of this vertex cover is also 2.
## Exercise
Find a maximum matching and a minimum vertex cover in the following bipartite graphs:
1.
```
A --- B
/ |
C --- D
```
2.
```
A --- B
/ \ |
C D - E
```
### Solution
1.
- Maximum matching: {(A, B), (C, D)}
- Minimum vertex cover: {A, D}
2.
- Maximum matching: {(A, C), (D, E)} (size 2)
- Minimum vertex cover: {A, E} (size 2, matching the size of the maximum matching)
# 3. Linear Programming Basics
Linear programming is a mathematical optimization technique used to find the best possible solution to a problem with linear constraints. It involves maximizing or minimizing an objective function while satisfying a set of linear equality or inequality constraints.
The basic idea behind linear programming is to represent the problem as a system of linear equations or inequalities and then find the values of the decision variables that optimize the objective function. The decision variables represent the quantities or values that we want to determine.
Linear programming can be used to solve a wide range of real-world problems, such as resource allocation, production planning, transportation, and scheduling. It provides a systematic and efficient approach to decision-making, allowing us to make the best use of available resources and achieve optimal outcomes.
Consider a company that produces two products, A and B. The company has limited resources, such as labor and raw materials, and wants to maximize its profit. The profit per unit of product A is $10, and the profit per unit of product B is $15. The company's production capacity for product A is 100 units, and for product B is 150 units. The company also has constraints on the amount of labor and raw materials available.
Let's define the decision variables as follows:
- x: number of units of product A to produce
- y: number of units of product B to produce
The objective function is to maximize the profit, which can be expressed as:
$$10x + 15y$$
Subject to the following constraints:
- Labor constraint: $2x + 3y \leq 240$
- Raw materials constraint: $4x + 2y \leq 200$
- Production capacity constraint for product A: $x \leq 100$
- Production capacity constraint for product B: $y \leq 150$
The solution to this linear programming problem will give us the optimal values of x and y that maximize the profit while satisfying all the constraints.
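Small linear programs like this one can be solved directly with an off-the-shelf solver. The sketch below (an illustrative addition) uses SciPy's `linprog`, which minimizes, so the profit is negated; note that in this particular example the optimum is attained along a whole edge of the feasible region, so the solver may return any point on it.

```python
from scipy.optimize import linprog

# Maximize 10x + 15y  <=>  minimize -(10x + 15y)
c = [-10, -15]
A_ub = [[2, 3],    # labor:          2x + 3y <= 240
        [4, 2],    # raw materials:  4x + 2y <= 200
        [1, 0],    # capacity of A:  x <= 100
        [0, 1]]    # capacity of B:  y <= 150
b_ub = [240, 200, 100, 150]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print(res.x)       # an optimal production plan (x, y)
print(-res.fun)    # the maximal profit, 1200.0 for these data
```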
## Exercise
Consider a company that produces three products, A, B, and C. The profit per unit of product A is $8, the profit per unit of product B is $10, and the profit per unit of product C is $12. The company's production capacity for product A is 200 units, for product B is 150 units, and for product C is 100 units. The company also has constraints on the amount of labor and raw materials available.
Define the decision variables and write the objective function and constraints for this linear programming problem.
### Solution
Let's define the decision variables as follows:
- x: number of units of product A to produce
- y: number of units of product B to produce
- z: number of units of product C to produce
The objective function is to maximize the profit, which can be expressed as:
$$8x + 10y + 12z$$
Subject to the following constraints:
- Labor constraint: $2x + 3y + 4z \leq 400$
- Raw materials constraint: $3x + 2y + z \leq 300$
- Production capacity constraint for product A: $x \leq 200$
- Production capacity constraint for product B: $y \leq 150$
- Production capacity constraint for product C: $z \leq 100$
# 3.1. What is Linear Programming?
Linear programming is a mathematical optimization technique that allows us to find the best possible solution to a problem with linear constraints. It involves maximizing or minimizing an objective function while satisfying a set of linear equality or inequality constraints.
In simple terms, linear programming helps us make decisions by finding the optimal values of decision variables that will give us the best outcome. It is widely used in various fields such as economics, operations research, and engineering.
The word "linear" in linear programming refers to the fact that the objective function and constraints are linear equations or inequalities. This means that the variables in these equations are raised to the power of 1 and do not appear in any other non-linear form, such as squares or square roots.
Linear programming problems can be solved using various algorithms, such as the simplex method or the interior point method. These algorithms iteratively improve the solution until the optimal values of the decision variables are found.
Linear programming is a powerful tool for decision-making because it allows us to consider multiple constraints and objectives simultaneously. By formulating a problem as a linear programming model, we can analyze different scenarios, allocate resources efficiently, and make informed decisions.
# 3.2. Formulating Problems as Linear Programs
To solve a problem using linear programming, we first need to formulate it as a linear program. This involves identifying the decision variables, the objective function, and the constraints.
The decision variables represent the quantities or values that we want to determine. They are usually denoted by symbols such as x, y, or z. For example, in a production planning problem, the decision variables could represent the number of units of different products to produce.
The objective function defines the goal or objective that we want to maximize or minimize. It is a linear combination of the decision variables and represents the quantity that we want to optimize. For example, in a profit maximization problem, the objective function could be the total profit, which is a sum of the profits from each product.
The constraints represent the limitations or restrictions that we need to satisfy. They are linear equations or inequalities involving the decision variables. For example, in a resource allocation problem, the constraints could represent the availability of different resources, such as labor or raw materials.
Once we have identified the decision variables, objective function, and constraints, we can write the linear program in mathematical notation. This involves expressing the objective function and constraints as linear equations or inequalities.
By solving the linear program, we can find the values of the decision variables that optimize the objective function while satisfying all the constraints. This provides us with the best possible solution to the problem.
# 3.3. Solving Linear Programs with the Simplex Method
The simplex method is one of the most widely used algorithms for solving linear programming problems. It is an iterative algorithm that starts with an initial feasible solution and improves it in each iteration until the optimal solution is found.
The simplex method works by moving from one feasible solution to another that improves the objective function. It does this by moving along the edges of the feasible region, which is defined by the constraints.
At each iteration, the simplex method selects a variable to enter the basis and a variable to leave the basis. The basis is a set of variables that are currently in the solution and satisfy certain conditions. The entering variable is chosen to improve the objective function, while the leaving variable is chosen to maintain feasibility.
The simplex method continues iterating until no further improvement is possible, which indicates that the optimal solution has been reached. The optimal solution is characterized by the fact that there are no more improving moves that can be made.
The simplex method is a powerful algorithm that can solve linear programming problems with thousands or even millions of variables and constraints. It has been widely implemented in software packages and can handle a wide range of real-world problems.
However, the simplex method does have some limitations. It may not be efficient for certain types of problems, such as those with a large number of constraints or those that require integer solutions. In such cases, other algorithms or techniques, such as integer programming or dynamic programming, may be more appropriate.
## Exercise
Consider the following linear programming problem:
Maximize: $2x + 3y$
Subject to:
- $x + y \leq 10$
- $2x + y \leq 15$
- $x, y \geq 0$
Use the simplex method to solve this problem and find the optimal solution.
### Solution
Step 1: Convert the problem to standard form by introducing slack variables $s_1$ and $s_2$:

Maximize: $2x + 3y + 0s_1 + 0s_2$

Subject to:
- $x + y + s_1 = 10$
- $2x + y + s_2 = 15$
- $x, y, s_1, s_2 \geq 0$

Step 2: Write the initial tableau, with one row per constraint and the objective row (with negated coefficients) at the bottom:

```
      x    y   s1   s2 | RHS
s1    1    1    1    0 |  10
s2    2    1    0    1 |  15
z    -2   -3    0    0 |   0
```

Step 3: Select the entering variable and the leaving variable:
- Entering variable: $y$ (its entry $-3$ in the objective row is the most negative)
- Leaving variable: $s_1$ (the ratio test gives $10/1 = 10$ for the first row and $15/1 = 15$ for the second row, and $10$ is smaller)

Step 4: Perform the pivot operation on the $y$-column of the first row:

```
      x    y   s1   s2 | RHS
y     1    1    1    0 |  10
s2    1    0   -1    1 |   5
z     1    0    3    0 |  30
```

Step 5: Every entry in the objective row is now non-negative, so no further improving pivot exists and the algorithm terminates.

Step 6: Read off the solution: the basic variables are $y = 10$ and $s_2 = 5$, while $x = 0$ and $s_1 = 0$. The maximum objective value is $2(0) + 3(10) = 30$.

Therefore, the optimal solution to the linear programming problem is $x = 0$ and $y = 10$, with a maximum objective value of 30.
# 4. Solving the Assignment Problem with Linear Programming
The assignment problem is a special case of the linear programming problem where the objective is to minimize the total cost of assigning a set of tasks to a set of agents. Each task must be assigned to exactly one agent, and each agent can only be assigned to one task.
The assignment problem can be formulated as a linear program by introducing binary variables that indicate whether a task is assigned to an agent. Let $x_{ij}$ be a binary variable that takes the value 1 if task $i$ is assigned to agent $j$, and 0 otherwise. Let $c_{ij}$ be the cost of assigning task $i$ to agent $j$.
The objective is to minimize the total cost:
$$\text{Minimize: } \sum_{i=1}^{n} \sum_{j=1}^{m} c_{ij}x_{ij}$$
subject to the following constraints:
- Each task must be assigned to exactly one agent:
$$\sum_{j=1}^{m} x_{ij} = 1 \text{ for } i = 1, 2, ..., n$$
- Each agent can only be assigned to one task:
$$\sum_{i=1}^{n} x_{ij} = 1 \text{ for } j = 1, 2, ..., m$$
- The binary variables must be non-negative:
$$x_{ij} \geq 0 \text{ for } i = 1, 2, ..., n \text{ and } j = 1, 2, ..., m$$
The assignment problem can be solved using the simplex method or other linear programming algorithms. However, a more efficient algorithm called the Hungarian algorithm can be used to solve the assignment problem in polynomial time.
The Hungarian algorithm is an efficient algorithm for solving the assignment problem. It is based on the concept of augmenting paths in bipartite graphs. A bipartite graph is a graph whose vertices can be divided into two disjoint sets such that every edge connects a vertex from one set to a vertex from the other set.
The Hungarian algorithm works by iteratively finding augmenting paths in the bipartite graph. An augmenting path is a path in the graph that starts and ends with unmatched vertices and alternates between matched and unmatched edges. By flipping the matching along the augmenting path, the algorithm can improve the current matching.
The Hungarian algorithm starts with an initial matching that may not be optimal. It then repeatedly finds augmenting paths and updates the matching until no further improvement is possible. The algorithm terminates when a maximum matching is found, which is a matching that cannot be further improved.
The Hungarian algorithm has a time complexity of O(n^3), where n is the number of vertices in the bipartite graph. This makes it very efficient for solving assignment problems with a large number of tasks and agents.
## Exercise
Consider the following assignment problem:
Task 1: Agent 1 - Cost: 2
Task 2: Agent 2 - Cost: 4
Task 3: Agent 3 - Cost: 3
Use the Hungarian algorithm to find the optimal assignment and the minimum total cost.
### Solution
Step 1: Create the cost matrix for the assignment problem. Since each task has exactly one eligible agent, the matrix is effectively diagonal; the unlisted pairings can be treated as forbidden by giving them a prohibitively large cost M:
```
| 2  M  M |
| M  4  M |
| M  M  3 |
```
Step 2: Apply the Hungarian algorithm to the cost matrix:
- Step 1: Subtract the minimum value in each row from all the values in that row. This zeroes the diagonal:
```
| 0    M-2  M-2 |
| M-4  0    M-4 |
| M-3  M-3  0   |
```
- Step 2: The column minima are now all zero, so the column-reduction step changes nothing.
- Step 3: The diagonal zeros at (1, 1), (2, 2), and (3, 3) are independent (no two share a row or column), so they form a complete assignment.
- Step 4: Since the number of independent zeros equals the number of tasks, a maximum matching has been found and the algorithm terminates.
Step 3: The optimal assignment is Task 1 to Agent 1, Task 2 to Agent 2, and Task 3 to Agent 3. The minimum total cost is 2 + 4 + 3 = 9.
Therefore, the optimal assignment for the given assignment problem is Task 1 to Agent 1, Task 2 to Agent 2, and Task 3 to Agent 3, with a minimum total cost of 9.
# 4.1. Formulating the Assignment Problem as a Linear Program
To formulate the assignment problem as a linear program, we need to define the decision variables, the objective function, and the constraints.
Let's assume we have $n$ tasks and $m$ agents. For the equality constraints below to be satisfiable, $n$ and $m$ must be equal; unbalanced problems are handled later by adding dummy tasks or agents.
The decision variables are binary variables $x_{ij}$, where $x_{ij}$ takes the value 1 if task i is assigned to agent j, and 0 otherwise.
The objective function is to minimize the total cost of the assignment. The cost of assigning task i to agent j is denoted as $c_{ij}$. The objective function can be written as:
$$\text{Minimize: } \sum_{i=1}^{n} \sum_{j=1}^{m} c_{ij}x_{ij}$$
The constraints ensure that each task is assigned to exactly one agent, and each agent is assigned to exactly one task. These constraints can be written as:
- Each task must be assigned to exactly one agent:
$$\sum_{j=1}^{m} x_{ij} = 1 \text{ for } i = 1, 2, ..., n$$
- Each agent can only be assigned to one task:
$$\sum_{i=1}^{n} x_{ij} = 1 \text{ for } j = 1, 2, ..., m$$
- In the linear programming relaxation, the binary requirement $x_{ij} \in \{0, 1\}$ is relaxed to non-negativity:
$$x_{ij} \geq 0 \text{ for } i = 1, 2, ..., n \text{ and } j = 1, 2, ..., m$$
(Because the constraint matrix is totally unimodular, this relaxation still has an integral optimal solution.)
By formulating the assignment problem as a linear program, we can use linear programming algorithms to find the optimal assignment that minimizes the total cost.
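To make the formulation concrete, the sketch below builds the equality constraints for a small 3x3 instance and solves the linear programming relaxation with SciPy's `linprog` (an assumption: SciPy is installed). Because of the integrality property noted above, the relaxed solution comes out as a 0-1 matrix.
```python
import numpy as np
from scipy.optimize import linprog

# Small illustrative 3x3 cost matrix: C[i][j] = cost of assigning task i to agent j
C = np.array([[2, 3, 1],
              [4, 2, 5],
              [3, 1, 4]])
n = C.shape[0]

c = C.flatten()                      # variables x_ij flattened row by row

A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1   # each task i assigned exactly once
for j in range(n):
    A_eq[n + j, j::n] = 1            # each agent j assigned exactly once
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
x = res.x.reshape(n, n).round().astype(int)
print(x)          # optimal assignment matrix (a permutation matrix)
print(res.fun)    # minimum total cost: 6.0 for this matrix
```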
# 4.2. Solving the Assignment Problem with the Hungarian Algorithm
The Hungarian algorithm is a popular method for solving the assignment problem. It is an efficient algorithm that guarantees an optimal solution.
The Hungarian algorithm works by iteratively finding a series of augmenting paths in a bipartite graph. An augmenting path is a path that starts and ends at unmatched vertices and alternates between matched and unmatched edges.
The algorithm begins with an initial feasible solution and iteratively improves it until an optimal solution is found. At each iteration, the algorithm constructs a set of independent zeros in the cost matrix, which corresponds to a set of edges that can be added to the current matching without violating the constraints.
The Hungarian algorithm consists of the following steps:
1. Initialize the cost matrix and the initial feasible solution.
2. While the current feasible solution is not optimal:
- Construct the equality subgraph, which consists of all edges whose reduced cost is zero (the zero entries of the current reduced cost matrix).
- Find a maximum set of independent zeros (zeros that share no row or column) in the equality subgraph.
- If the number of independent zeros equals the number of rows, the current feasible solution is optimal and the algorithm terminates.
- If it is smaller, modify the dual variables (equivalently, the reduced cost matrix) to create new zeros and update the current feasible solution.
3. Return the optimal solution.
The Hungarian algorithm has a time complexity of O(n^3), where n is the number of vertices in the bipartite graph. This makes it a very efficient algorithm for solving the assignment problem.
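In practice, these stages rarely need to be coded from scratch; SciPy ships a solver for the same minimum-cost assignment problem (its internal method is a closely related algorithm from the same family). A short sketch, assuming SciPy is available:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative 3x3 cost matrix: cost[i][j] = cost of assigning task i to agent j
cost = np.array([[2, 3, 1],
                 [4, 2, 5],
                 [3, 1, 4]])

row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(row_ind, col_ind)))    # optimal task -> agent pairs
print(cost[row_ind, col_ind].sum())   # minimum total cost: 6
```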
# 4.3. Time Complexity and Optimality of the Algorithm
The Hungarian algorithm is not only efficient, but it is also guaranteed to find an optimal solution to the assignment problem.
The time complexity of the Hungarian algorithm is O(n^3), where n is the number of vertices in the bipartite graph. This time complexity is achieved by iteratively finding augmenting paths in the graph.
The optimality of the Hungarian algorithm is proven by the following theorem:
**Theorem:** The Hungarian algorithm finds an optimal solution to the assignment problem.
**Proof:** Throughout its execution the algorithm maintains dual variables $\alpha_i$ and $\beta_j$ that are feasible, meaning $\alpha_i + \beta_j \leq c_{ij}$ for every edge $(i, j)$, and it only ever matches along tight edges, those with $\alpha_i + \beta_j = c_{ij}$.
When the algorithm terminates with a perfect matching $M$, every edge of $M$ is tight, so
$$\text{cost}(M) = \sum_{(i,j) \in M} c_{ij} = \sum_{i} \alpha_i + \sum_{j} \beta_j.$$
For any other perfect matching $M'$, dual feasibility gives
$$\text{cost}(M') = \sum_{(i,j) \in M'} c_{ij} \geq \sum_{i} \alpha_i + \sum_{j} \beta_j = \text{cost}(M).$$
Hence no perfect matching costs less than $M$, and the matching returned by the algorithm is optimal.
This theorem proves that the Hungarian algorithm always finds an optimal solution to the assignment problem.
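On small instances the optimality claim is also easy to check empirically by comparing a solver's answer against an exhaustive search over all permutations. A minimal sketch of such a check, with SciPy assumed for the solver side:
```python
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cost = rng.integers(1, 10, size=(5, 5))

# Solver result
rows, cols = linear_sum_assignment(cost)
solver_cost = cost[rows, cols].sum()

# Exhaustive search over all 5! = 120 complete assignments
brute_cost = min(sum(cost[i, p[i]] for i in range(5))
                 for p in itertools.permutations(range(5)))

assert solver_cost == brute_cost
print(solver_cost, brute_cost)
```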
## Exercise
Consider the following cost matrix:
$$
\begin{bmatrix}
2 & 3 & 1 \\
4 & 2 & 5 \\
3 & 1 & 4 \\
\end{bmatrix}
$$
Use the Hungarian algorithm to find the optimal assignment.
### Solution
The steps of the Hungarian algorithm are as follows:
1. Row reduction: subtract the minimum value in each row from all elements in that row (the row minima are 1, 2, and 1):
$$
\begin{bmatrix}
1 & 2 & 0 \\
2 & 0 & 3 \\
2 & 0 & 3 \\
\end{bmatrix}
$$
2. Column reduction: subtract the minimum value in each column from all elements in that column (the column minima are 1, 0, and 0):
$$
\begin{bmatrix}
0 & 2 & 0 \\
1 & 0 & 3 \\
1 & 0 & 3 \\
\end{bmatrix}
$$
3. Cover all zeros with the minimum number of lines. One line through row 1 and one line through column 2 cover every zero. Since 2 lines are fewer than 3, the current zeros do not yet contain a complete assignment.
4. Adjust the matrix: the smallest uncovered value is 1. Subtract it from every uncovered element and add it to every element covered by two lines:
$$
\begin{bmatrix}
0 & 3 & 0 \\
0 & 0 & 2 \\
0 & 0 & 2 \\
\end{bmatrix}
$$
5. Now three independent zeros can be chosen, for example at positions (1, 3), (2, 2), and (3, 1), so a complete assignment exists and the algorithm terminates.
The optimal assignment is {(1, 3), (2, 2), (3, 1)} with a total cost of $1 + 2 + 3 = 6$. (The assignment {(1, 3), (2, 1), (3, 2)} is an equally good alternative, also with total cost 6.)
# 5. Applications of the Hungarian Algorithm
The Hungarian algorithm has a wide range of applications in various fields. It can be used to solve assignment and scheduling problems, resource allocation and network flow problems, and matching problems in social networks.
**5.1. Assignment and Scheduling Problems**
The Hungarian algorithm can be used to solve assignment and scheduling problems, where a set of tasks needs to be assigned to a set of resources or time slots. For example, it can be used to assign workers to shifts in a company, allocate classrooms to courses in a school, or schedule jobs on machines in a manufacturing plant.
Suppose we have a company with 4 workers and 4 shifts. Each worker has a different skill level, and each shift requires a specific skill level. We want to find the optimal assignment of workers to shifts that minimizes the total skill mismatch.
The cost matrix representing the skill mismatch between workers and shifts is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Using the Hungarian algorithm, we can find the optimal assignment of workers to shifts that minimizes the total skill mismatch.
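Solving this 4x4 mismatch matrix with an off-the-shelf assignment solver (SciPy is assumed here) gives worker 1 to shift 3, worker 2 to shift 4, worker 3 to shift 2, and worker 4 to shift 1, each with mismatch 1, for a total of 4:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

mismatch = np.array([[2, 3, 1, 4],
                     [4, 2, 5, 1],
                     [3, 1, 4, 2],
                     [1, 3, 2, 4]])

workers, shifts = linear_sum_assignment(mismatch)
for w, s in zip(workers, shifts):
    print(f"Worker {w + 1} -> Shift {s + 1}")
print("Total mismatch:", mismatch[workers, shifts].sum())   # 4
```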
**5.2. Resource Allocation and Network Flow**
The Hungarian algorithm can also be used to solve resource allocation and network flow problems. These problems involve allocating limited resources to satisfy certain demands or flows. For example, it can be used to allocate resources in a supply chain, optimize transportation routes, or balance workloads in a distributed system.
Suppose we have a transportation network with 4 sources and 4 destinations. Each source has a limited supply of goods, and each destination has a specific demand for goods. We want to find the optimal allocation of goods from sources to destinations that minimizes the total transportation cost.
The cost matrix representing the transportation cost between sources and destinations is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Using the Hungarian algorithm, we can find the optimal allocation of goods from sources to destinations that minimizes the total transportation cost.
**5.3. Matching in Social Networks**
The Hungarian algorithm can also be used to solve matching problems in social networks. These problems involve finding the best possible matches between individuals based on certain criteria. For example, it can be used to match students to schools, employees to job positions, or romantic partners based on compatibility.
Suppose we have a dating app with 4 male users and 4 female users. Each user has a set of preferences for potential matches. We want to find the optimal matching of male and female users that maximizes the overall compatibility.
The compatibility matrix representing the compatibility between male and female users is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Using the Hungarian algorithm, we can find the optimal matching of male and female users that maximizes the overall compatibility.
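One detail worth noting: the Hungarian algorithm minimizes total cost, so a maximization problem like this is usually converted first, for example by negating the compatibility scores or subtracting them from the largest score. A small sketch using that trick, with SciPy assumed:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

compatibility = np.array([[2, 3, 1, 4],
                          [4, 2, 5, 1],
                          [3, 1, 4, 2],
                          [1, 3, 2, 4]])

# Minimizing the negated matrix is the same as maximizing compatibility.
men, women = linear_sum_assignment(-compatibility)
print(list(zip(men, women)))
print("Total compatibility:", compatibility[men, women].sum())   # 15 for this matrix
```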
## Exercise
Consider the following cost matrix representing the transportation cost between sources and destinations:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Use the Hungarian algorithm to find the optimal allocation of goods from sources to destinations that minimizes the total transportation cost.
### Solution
The steps of the Hungarian algorithm are as follows:
1. Initialize the cost matrix and the initial feasible solution:
- Subtract the minimum value in each row from all elements in that row.
- Subtract the minimum value in each column from all elements in that column.
The updated cost matrix is:
$$
\begin{bmatrix}
1 & 2 & 0 & 3 \\
3 & 1 & 4 & 0 \\
2 & 0 & 3 & 1 \\
0 & 2 & 1 & 3 \\
\end{bmatrix}
$$
The initial feasible solution is:
$$
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}
$$
2. While the current feasible solution is not optimal:
- Construct the equality subgraph:
$$
\begin{bmatrix}
1 & 2 & 0 & 3 \\
3 & 1 & 4 & 0 \\
2 & 0 & 3 & 1 \\
0 & 2 & 1 & 3 \\
\end{bmatrix}
$$
- Find a set of independent zeros in the equality subgraph:
The set of independent zeros is {(1, 3), (2, 4), (3, 2), (4, 1)}.
- Since the number of independent zeros (4) equals the number of sources, these zeros already form a complete assignment; no further modification of the cost matrix is needed.
- Add the corresponding edges to the current feasible solution without violating the constraints.
The updated feasible solution is:
$$
\begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
\end{bmatrix}
$$
3. Return the optimal solution:
The optimal allocation of goods from sources to destinations is {(1, 3), (2, 4), (3, 2), (4, 1)} with a total transportation cost of 1 + 1 + 1 + 1 = 4.
# 6. Extensions and Variations of the Hungarian Algorithm
The Hungarian algorithm can be extended and adapted to solve various extensions and variations of the assignment problem. These extensions and variations include generalizing the algorithm to non-bipartite graphs, handling weighted and unbalanced assignment problems, and addressing multi-objective optimization and sensitivity analysis.
**6.1. Generalization to Non-Bipartite Graphs**
The Hungarian algorithm is originally designed for balanced bipartite graphs, in which the two partitions contain the same number of vertices. It can, however, be extended to unbalanced problems, where the partitions differ in size, and with a suitable reformulation to more general matching problems. The standard construction is to add dummy nodes with zero-weight edges so that the graph becomes balanced.
Suppose we have a graph with 4 workers and 5 shifts. Each worker has a different skill level, and each shift requires a specific skill level. We want to find the optimal assignment of workers to shifts that minimizes the total skill mismatch.
The cost matrix representing the skill mismatch between workers and shifts is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 & 2 \\
4 & 2 & 5 & 1 & 3 \\
3 & 1 & 4 & 2 & 1 \\
1 & 3 & 2 & 4 & 4 \\
\end{bmatrix}
$$
Using the generalized Hungarian algorithm, we can find the optimal assignment of workers to shifts that minimizes the total skill mismatch.
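The dummy-node construction is easy to carry out explicitly. The sketch below (SciPy assumed) pads the 4x5 mismatch matrix with a dummy worker whose edges all cost zero; the shift matched to the dummy worker is simply the one left unstaffed. SciPy's solver also accepts rectangular matrices directly, but the padding mirrors the balancing step described above.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# 4 workers x 5 shifts: more shifts than workers
mismatch = np.array([[2, 3, 1, 4, 2],
                     [4, 2, 5, 1, 3],
                     [3, 1, 4, 2, 1],
                     [1, 3, 2, 4, 4]])

# Balance the problem with a dummy worker connected by zero-weight edges.
balanced = np.vstack([mismatch, np.zeros((1, 5), dtype=int)])

workers, shifts = linear_sum_assignment(balanced)
for w, s in zip(workers, shifts):
    label = f"Worker {w + 1}" if w < 4 else "Dummy worker"
    print(f"{label} -> Shift {s + 1}")
print("Total mismatch:", balanced[workers, shifts].sum())   # 4
```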
**6.2. Weighted and Unbalanced Assignment Problems**
The Hungarian algorithm can also handle weighted and unbalanced assignment problems, where the edges have different weights or the number of vertices in each partition is different. This involves modifying the cost matrix and adjusting the initialization and augmentation steps of the algorithm.
Suppose we have a graph with 4 workers and 4 shifts. Each worker has a different skill level, and each shift requires a specific skill level. Additionally, each shift has a different cost associated with it. We want to find the optimal assignment of workers to shifts that minimizes the total skill mismatch and cost.
The aggregated cost matrix, combining the skill mismatch and the shift cost into a single value for each worker-shift pair, is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Using the weighted and unbalanced Hungarian algorithm, we can find the optimal assignment of workers to shifts that minimizes the total skill mismatch and cost.
**6.3. Multi-Objective Optimization and Sensitivity Analysis**
The Hungarian algorithm can be extended to handle multi-objective optimization and sensitivity analysis in assignment problems. This involves considering multiple criteria or objectives for the assignment and analyzing the impact of changes in the cost matrix on the optimal solution.
Suppose we have a graph with 4 workers and 4 shifts. Each worker has a different skill level, and each shift requires a specific skill level. Additionally, each worker has a preference for certain shifts. We want to find the optimal assignment of workers to shifts that minimizes the total skill mismatch, cost, and preference deviation.
The aggregated cost matrix, combining the skill mismatch, cost, and preference deviation into a single value for each worker-shift pair (for example, as a weighted sum), is:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Using the multi-objective Hungarian algorithm, we can find the optimal assignment of workers to shifts that minimizes the total skill mismatch, cost, and preference deviation.
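One simple way to handle several criteria at once is to scalarize them into a single cost matrix, for instance as a weighted sum, and then run the ordinary algorithm on the combined matrix. The sketch below does exactly that; the second criterion matrix and the weights are hypothetical values chosen only for illustration, and SciPy is assumed for the solver.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

skill_mismatch = np.array([[2, 3, 1, 4],
                           [4, 2, 5, 1],
                           [3, 1, 4, 2],
                           [1, 3, 2, 4]])

# Hypothetical preference-deviation matrix, for illustration only.
preference_dev = np.array([[1, 0, 2, 1],
                           [0, 2, 1, 3],
                           [2, 1, 0, 1],
                           [1, 2, 1, 0]])

# Weighted-sum scalarization: the weights express the relative importance of the criteria.
w_skill, w_pref = 0.7, 0.3
combined = w_skill * skill_mismatch + w_pref * preference_dev

workers, shifts = linear_sum_assignment(combined)
print(list(zip(workers, shifts)))
print("Combined objective:", combined[workers, shifts].sum())
```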
## Exercise
Consider the following cost matrix representing the skill mismatch between workers and shifts:
$$
\begin{bmatrix}
2 & 3 & 1 & 4 \\
4 & 2 & 5 & 1 \\
3 & 1 & 4 & 2 \\
1 & 3 & 2 & 4 \\
\end{bmatrix}
$$
Use the generalized Hungarian algorithm to find the optimal assignment of workers to shifts that minimizes the total skill mismatch.
### Solution
The steps of the generalized Hungarian algorithm are as follows:
1. Initialize the cost matrix and the initial feasible solution:
- Subtract the minimum value in each row from all elements in that row.
- Subtract the minimum value in each column from all elements in that column.
- If the size of the two partitions is not equal, add dummy nodes with zero-weight edges to balance the graph.
The updated cost matrix is:
$$
\begin{bmatrix}
1 & 2 & 0 & 3 \\
3 & 1 & 4 & 0 \\
2 & 0 & 3 & 1 \\
0 & 2 & 1 & 3 \\
\end{bmatrix}
$$
The initial feasible solution is:
$$
\begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}
$$
2. While the current feasible solution is not optimal:
- Construct the equality subgraph:
$$
\begin{bmatrix}
1 & 2 & 0 & 3 \\
3 & 1 & 4 & 0 \\
2 & 0 & 3 & 1 \\
0 & 2 & 1 & 3 \\
\end{bmatrix}
$$
- Find a set of independent zeros in the equality subgraph:
The set of independent zeros is {(1, 3), (2, 4), (3, 2), (4, 1)}.
- Since the number of independent zeros (4) equals the number of workers, these zeros already form a complete assignment; no further modification of the cost matrix is needed.
- Add the corresponding edges to the current feasible solution without violating the constraints.
The updated feasible solution is:
$$
\begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
\end{bmatrix}
$$
3. Return the optimal solution:
The optimal assignment of workers to shifts is {(1, 3), (2, 4), (3, 2), (4, 1)} with a total skill mismatch of 1 + 1 + 1 + 1 = 4.
# 7. Real-World Examples and Case Studies
**7.1. Solving Real-World Problems with the Hungarian Algorithm**
The Hungarian algorithm has been used to solve real-world problems in diverse fields such as transportation, economics, and computer science.
One example is the assignment of taxi drivers to passengers in a ride-sharing service. The Hungarian algorithm can be used to find the optimal assignment of drivers to passengers that minimizes the total travel time or cost.
Another example is the allocation of resources in a manufacturing plant. The Hungarian algorithm can be used to assign tasks to machines or workers in a way that minimizes the total production time or cost.
**7.2. Comparing the Hungarian Algorithm to Other Methods**
The Hungarian algorithm has been compared to other methods for solving assignment problems, such as the auction algorithm and the shortest path algorithm.
In a study comparing the Hungarian algorithm to the auction algorithm for the assignment of workers to shifts in a call center, it was found that the Hungarian algorithm outperformed the auction algorithm in terms of solution quality and computational efficiency.
In another study comparing the Hungarian algorithm to the shortest path algorithm for the assignment of tasks to workers in a project management system, it was found that the Hungarian algorithm provided more accurate and consistent results.
**7.3. Case Studies: Applications in Transportation, Economics, and Computer Science**
In this section, we will explore specific case studies where the Hungarian algorithm has been applied in transportation, economics, and computer science.
One case study is the optimization of transportation routes in a logistics network. The Hungarian algorithm can be used to assign shipments to vehicles in a way that minimizes the total transportation cost or time.
Another case study is the allocation of goods in a supply chain. The Hungarian algorithm can be used to assign goods to warehouses or distribution centers in a way that minimizes the total storage or transportation cost.
In computer science, the Hungarian algorithm has been used for tasks such as image matching, data association, and network flow optimization.
## Exercise
Think of a real-world problem that can be solved using the Hungarian algorithm. Describe the problem and explain how the Hungarian algorithm can be applied to solve it.
### Solution
One real-world problem that can be solved using the Hungarian algorithm is the assignment of nurses to patients in a hospital. The goal is to find the optimal assignment of nurses to patients that minimizes the total workload or cost.
The Hungarian algorithm can be applied to solve this problem by representing the nurses and patients as vertices in a bipartite graph. The edges between nurses and patients represent the compatibility or suitability of each nurse for each patient. The cost associated with each edge represents the workload or cost of assigning a nurse to a patient.
By applying the Hungarian algorithm to this bipartite graph, we can find the optimal assignment of nurses to patients that minimizes the total workload or cost. This assignment can help optimize the allocation of nursing resources in the hospital and ensure that each patient receives the appropriate level of care.
# 5.3. Matching in Social Networks
The Hungarian algorithm can also be applied to solve matching problems in social networks. In social networks, matching refers to the process of connecting individuals or entities based on certain criteria or preferences.
**5.3.1. Dating and Matchmaking**
One example of matching in social networks is dating and matchmaking. Dating apps and websites often use matching algorithms to connect individuals based on their interests, preferences, and compatibility.
For instance, consider a dating app that matches users based on their hobbies, age, location, and other criteria. The app can use the Hungarian algorithm to find the optimal matches between users, taking into account their preferences and compatibility scores.
**5.3.2. Job Matching**
Matching algorithms can also be used in job recruitment and hiring processes. Job matching involves connecting job seekers with suitable job opportunities based on their skills, qualifications, and preferences.
For example, consider a job portal that matches job seekers with job postings based on their skills, experience, and location. The portal can use the Hungarian algorithm to find the best matches between job seekers and job postings, considering factors such as job requirements, salary expectations, and commute distance.
**5.3.3. Collaborative Filtering**
Matching algorithms are also used in collaborative filtering systems, which recommend items or content to users based on their preferences and behavior.
For instance, consider a movie recommendation system that suggests movies to users based on their past movie ratings and preferences. The system can use the Hungarian algorithm to find the best matches between users and movies, taking into account their similarity scores and preferences.
## Exercise
Think of another example of matching in social networks. Describe the problem and explain how the Hungarian algorithm can be applied to solve it.
### Solution
Another example of matching in social networks is roommate matching. In college dormitories or shared housing, matching algorithms can be used to connect individuals looking for roommates based on their lifestyle, habits, and preferences.
The Hungarian algorithm can be applied to solve this problem by representing the individuals as vertices in a bipartite graph. The edges between individuals represent their compatibility or suitability as roommates. The cost associated with each edge represents the level of compatibility or agreement between potential roommates.
By applying the Hungarian algorithm to this bipartite graph, we can find the optimal roommate assignments that maximize compatibility and minimize conflicts. This can help ensure a harmonious living environment and improve the overall roommate experience.
# 6. Extensions and Variations of the Hungarian Algorithm
The Hungarian algorithm can be extended and modified to solve various other optimization problems. These extensions and variations allow the algorithm to be applied to a wider range of scenarios and problem domains.
**6.1. Generalization to Non-Bipartite Graphs**
While the Hungarian algorithm is originally designed for bipartite graphs, it can be generalized to solve matching problems on non-bipartite graphs as well. This generalization involves transforming the non-bipartite graph into a bipartite graph by adding dummy nodes and edges.
For example, consider a graph representing a transportation network with nodes representing cities and edges representing transportation routes. The Hungarian algorithm can be used to find the optimal assignment of routes to vehicles, taking into account factors such as distance, cost, and capacity.
**6.2. Weighted and Unbalanced Assignment Problems**
The Hungarian algorithm can also handle weighted and unbalanced assignment problems. Weighted assignment problems involve assigning weights or costs to the edges of the graph, while unbalanced assignment problems involve having a different number of nodes in each partition of the graph.
For instance, consider a task scheduling problem where tasks have different durations and priorities. The Hungarian algorithm can be used to assign tasks to workers, taking into account the duration and priority of each task, as well as the availability and skills of the workers.
**6.3. Multi-Objective Optimization and Sensitivity Analysis**
The Hungarian algorithm can be extended to handle multi-objective optimization problems, where multiple conflicting objectives need to be considered. This extension involves assigning weights or priorities to the different objectives and finding the optimal assignment that balances these objectives.
Sensitivity analysis can also be performed using the Hungarian algorithm to evaluate the impact of changes in the input parameters on the optimal assignment. This analysis helps understand the robustness and flexibility of the assignment solution.
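A basic form of sensitivity analysis is to perturb individual cost entries and re-solve, observing at what point the optimal assignment changes. A small sketch of this idea, using an illustrative 4x4 cost matrix and SciPy's assignment solver:
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[2, 3, 1, 4],
                 [4, 2, 5, 1],
                 [3, 1, 4, 2],
                 [1, 3, 2, 4]], dtype=float)

def solve(matrix):
    rows, cols = linear_sum_assignment(matrix)
    pairs = [(int(r), int(c)) for r, c in zip(rows, cols)]
    return pairs, matrix[rows, cols].sum()

base_assignment, base_cost = solve(cost)

# Make one pairing gradually more expensive and watch when the assignment changes.
for delta in (0.5, 1.0, 1.5, 2.5):
    perturbed = cost.copy()
    perturbed[0, 2] += delta        # worker 1 / task 3 becomes more expensive
    assignment, total = solve(perturbed)
    print(f"delta={delta}: total cost={total}, assignment changed: {assignment != base_assignment}")
```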
## Exercise
Think of another extension or variation of the Hungarian algorithm. Describe the problem it can solve and explain how the algorithm can be adapted to handle it.
### Solution
Another variation of the Hungarian algorithm is the capacitated assignment problem. In this problem, the edges of the graph have capacity constraints, representing the maximum amount of resources that can be assigned to each task or job.
The Hungarian algorithm can be adapted to handle the capacitated assignment problem by incorporating the capacity constraints into the assignment process. The algorithm ensures that the assigned resources do not exceed the capacity of the corresponding edges, while still finding the optimal assignment that minimizes the total cost or maximizes the total benefit.
This adaptation of the Hungarian algorithm is useful in various scenarios, such as resource allocation in project management, task assignment in workforce scheduling, and equipment assignment in production planning. It allows organizations to optimize their resource utilization and improve operational efficiency.
# 6.1. Generalization to Non-Bipartite Graphs
The Hungarian algorithm is originally designed for bipartite graphs, where the graph can be divided into two disjoint sets of nodes. However, it can be generalized to solve matching problems on non-bipartite graphs as well. This generalization involves transforming the non-bipartite graph into a bipartite graph by adding dummy nodes and edges.
To generalize the Hungarian algorithm to non-bipartite graphs, we first identify the nodes that need to be matched. We then add dummy nodes to create a bipartite graph. The dummy nodes represent the unmatched nodes in the original graph.
For example, consider a graph representing a social network, where nodes represent individuals and edges represent relationships. We want to find the optimal matching of individuals for a social event. The Hungarian algorithm can be used to solve this problem by transforming the social network graph into a bipartite graph.
In the bipartite graph, one set of nodes represents the individuals who are available for the event, and the other set of nodes represents the available spots at the event. We add dummy nodes to represent the unmatched individuals and unmatched spots.
For instance, if there are more individuals than available spots, we add dummy spots to balance the graph. Similarly, if there are more available spots than individuals, we add dummy individuals.
Once the bipartite graph is created, we can apply the Hungarian algorithm to find the optimal matching. The algorithm will assign individuals to spots in a way that maximizes compatibility or minimizes conflicts, depending on the specific problem.
## Exercise
Think of a non-bipartite graph problem that can be solved using the Hungarian algorithm. Describe the problem and explain how the graph can be transformed into a bipartite graph.
### Solution
One example of a non-bipartite graph problem that can be solved using the Hungarian algorithm is the task assignment problem in project management. In this problem, we have a set of tasks that need to be assigned to a set of team members.
To solve this problem using the Hungarian algorithm, we can transform the task assignment graph into a bipartite graph. One set of nodes represents the tasks, and the other set of nodes represents the team members. We add dummy nodes to balance the graph if necessary.
The edges of the bipartite graph represent the compatibility or suitability of each team member for each task. The weights or costs of the edges represent factors such as skill level, availability, and workload. The Hungarian algorithm can then be applied to find the optimal assignment of tasks to team members, taking into account these factors.
By using the Hungarian algorithm, project managers can efficiently assign tasks to team members, considering their skills and availability, and optimize the overall project performance.
# 6.2. Weighted and Unbalanced Assignment Problems
In the previous sections, we discussed the Hungarian algorithm for solving matching problems on bipartite graphs, treating every edge alike, as if each had the same unit cost. In many applications, however, the edges have different weights, and we need to take these weights into account when finding the optimal matching.
Weighted assignment problems involve finding the matching that minimizes the total weight of the edges. The weights can represent factors such as cost, distance, or time. The Hungarian algorithm can be modified to handle weighted assignment problems by incorporating the edge weights into the algorithm.
For example, consider a transportation problem where we want to assign shipments to trucks. Each shipment has a weight, and each truck has a capacity. We want to minimize the total weight of the shipments while ensuring that the total weight does not exceed the capacity of each truck.
To solve this weighted assignment problem using the Hungarian algorithm, we modify the algorithm to consider the weights of the edges. Instead of simply checking for the existence of an edge, we compare the weights of the edges and select the edge with the minimum weight.
In the transportation problem, the weights of the edges represent the weights of the shipments. The algorithm will assign shipments to trucks in a way that minimizes the total weight of the shipments while respecting the capacity constraints of the trucks.
Unbalanced assignment problems involve cases where the number of nodes in the two sets of the bipartite graph is not equal. In these cases, the Hungarian algorithm can still be applied by adding dummy nodes to balance the graph, as we discussed in the previous section.
For instance, in the transportation problem, if there are more shipments than trucks, we can add dummy trucks with infinite capacity to balance the graph. Similarly, if there are more trucks than shipments, we can add dummy shipments with zero weight to balance the graph.
## Exercise
Think of a weighted or unbalanced assignment problem that can be solved using the Hungarian algorithm. Describe the problem and explain how the algorithm can be modified to handle the weights or unbalanced nature of the problem.
### Solution
One example of a weighted assignment problem that can be solved using the Hungarian algorithm is the task scheduling problem in project management. In this problem, we have a set of tasks that need to be scheduled on a set of machines, and each task has a processing time.
To solve this problem using the Hungarian algorithm, we modify the algorithm to consider the processing times of the tasks. Instead of simply checking for the existence of an edge, we compare the processing times of the tasks and select the task with the minimum processing time.
In the case of an unbalanced assignment problem, if there are more tasks than machines, we can add dummy machines with infinite processing times to balance the graph. Similarly, if there are more machines than tasks, we can add dummy tasks with zero processing times to balance the graph.
By using the modified Hungarian algorithm, project managers can efficiently schedule tasks on machines, considering their processing times, and optimize the overall project completion time.
# 6.3. Multi-Objective Optimization and Sensitivity Analysis
In some matching problems, there may be multiple objectives or criteria that need to be considered when finding the optimal matching. These objectives can represent different factors such as cost, time, quality, or efficiency. The Hungarian algorithm can be extended to handle multi-objective optimization problems by incorporating these objectives into the algorithm.
Multi-objective optimization involves finding the set of optimal solutions that satisfy multiple objectives simultaneously. In the context of matching problems, this means finding the set of matchings that optimize multiple criteria or objectives.
For example, consider a job assignment problem where we want to assign workers to jobs. Each worker has different skills and preferences, and each job has different requirements and priorities. We want to find the set of assignments that maximizes the overall skill level, minimizes the overall cost, and satisfies the requirements and priorities of the jobs.
To solve this multi-objective assignment problem using the Hungarian algorithm, we modify the algorithm to consider multiple objectives. Instead of simply selecting the edge with the minimum weight, we compare the weights of the edges based on the multiple objectives and select the edge that satisfies the objectives.
In the job assignment problem, the weights of the edges can represent factors such as skill level, cost, and priority. The algorithm will assign workers to jobs in a way that maximizes the overall skill level, minimizes the overall cost, and satisfies the requirements and priorities of the jobs.
Sensitivity analysis is another important aspect of optimization problems. It involves analyzing the impact of changes in the input data or parameters on the optimal solution. In the context of matching problems, sensitivity analysis can help us understand how changes in the weights of the edges or the objectives affect the optimal matching.
For instance, in the job assignment problem, sensitivity analysis can help us understand how changes in the skill levels of the workers or the requirements of the jobs affect the optimal assignments. This information can be valuable for making informed decisions and adjusting the assignments based on changing conditions or priorities.
## Exercise
Think of a multi-objective matching problem that can be solved using the Hungarian algorithm. Describe the problem and explain how the algorithm can be modified to handle multiple objectives. Also, describe how sensitivity analysis can be applied to this problem.
### Solution
One example of a multi-objective matching problem that can be solved using the Hungarian algorithm is the supplier selection problem in supply chain management. In this problem, we have a set of suppliers and a set of products, and each supplier has different prices, lead times, and quality levels. We want to find the set of assignments that minimizes the overall cost, minimizes the overall lead time, and maximizes the overall quality.
To solve this problem using the Hungarian algorithm, we modify the algorithm to consider multiple objectives. Instead of simply selecting the edge with the minimum weight, we compare the weights of the edges based on the multiple objectives and select the edge that satisfies the objectives.
In the supplier selection problem, the weights of the edges can represent factors such as price, lead time, and quality. The algorithm will assign suppliers to products in a way that minimizes the overall cost, minimizes the overall lead time, and maximizes the overall quality.
Sensitivity analysis can be applied to this problem by analyzing the impact of changes in the prices, lead times, and quality levels of the suppliers on the optimal assignments. This analysis can help supply chain managers understand how changes in the input data or parameters affect the optimal assignments and make informed decisions based on these insights.
# 7. Real-World Examples and Case Studies
7.1. Solving Real-World Problems with the Hungarian Algorithm
One application of the Hungarian algorithm is in solving assignment and scheduling problems. For example, in the field of transportation and logistics, the algorithm can be used to optimize the assignment of drivers to deliveries or the scheduling of tasks to workers. By finding the optimal assignments, companies can improve efficiency, reduce costs, and ensure timely deliveries.
Consider a delivery company that needs to assign a set of drivers to a set of delivery routes. Each driver has different skills, availability, and preferences, and each route has different requirements and priorities. The company can use the Hungarian algorithm to find the optimal assignments that maximize the overall skill level, minimize the overall travel time, and satisfy the requirements of the routes.
7.2. Comparing the Hungarian Algorithm to Other Methods
One alternative method for solving assignment problems is the auction algorithm. The auction algorithm is based on the concept of a bidding process, where each worker or resource bids for the tasks or jobs based on their preferences and capabilities. The algorithm iteratively adjusts the bids and assigns the tasks to the highest bidder. While the auction algorithm can handle complex assignment problems, it may not always guarantee an optimal solution.
In the job assignment problem mentioned earlier, the auction algorithm could be used to assign workers to jobs based on their preferences and capabilities. Each worker would bid for the jobs they are interested in, and the algorithm would assign the jobs to the highest bidders. However, this approach may not always result in the optimal assignments, as it does not consider the overall skill level or the requirements of the jobs.
7.3. Case Studies: Applications in Transportation, Economics, and Computer Science
One case study involves the use of the Hungarian algorithm in resource allocation and network flow optimization. In the field of telecommunications, the algorithm can be used to optimize the routing of data packets through a network, ensuring efficient and reliable communication. By finding the optimal paths for data transmission, companies can reduce congestion, improve network performance, and minimize costs.
Another case study involves the use of the Hungarian algorithm in matching algorithms for social networks. In online dating platforms or professional networking sites, the algorithm can be used to match individuals based on their preferences, interests, and compatibility. By finding the optimal matches, these platforms can enhance user experience, increase engagement, and facilitate meaningful connections.
The Hungarian algorithm has also been applied in computer vision and image processing. For example, in object tracking and recognition, the algorithm can be used to match features or keypoints between consecutive frames of a video. By finding the optimal matches, computer vision systems can accurately track objects, detect changes, and analyze motion patterns.
These case studies highlight the versatility and effectiveness of the Hungarian algorithm in solving real-world problems. By understanding the underlying principles and techniques of the algorithm, practitioners can apply it to a wide range of applications and domains.
# 8. Implementation and Efficiency
8.1. Pseudocode and Implementation of the Algorithm
The Hungarian algorithm can be implemented using various programming languages, such as Python, C++, or Java. The algorithm follows a step-by-step process, which can be translated into pseudocode for easier understanding and implementation.
Here is a simplified version of the pseudocode for the Hungarian algorithm:
```
Input: A bipartite graph, {V, U, E} (where |V | = |U| = n) and an n × n matrix of edge costs C
Output: A complete matching, M
1. Perform initialization:
(a) Begin with an empty matching, M0 = ∅.
(b) Assign feasible values to the dual variables αi and βj as follows:
∀vi ∈ V, αi = 0
∀uj ∈ U, βj = min_i(cij)
2. Perform n stages of the algorithm, each given by the routine Stage.
3. Output the matching after the nth stage: M = Mn.
Stage:
1. Designate each exposed (unmatched) node in V as the root of a Hungarian tree.
2. Grow the Hungarian trees rooted at the exposed nodes in the equality subgraph.
3. Modify the dual variables α and β to add new edges to the equality subgraph.
4. Augment the current matching by flipping matched and unmatched edges along the selected augmenting path.
```
8.2. Strategies for Improving Efficiency
The efficiency of the Hungarian algorithm can be improved by implementing various strategies and optimizations. Here are some strategies that can be used:
- Use efficient data structures, such as arrays or matrices, to store the graph and edge costs. This can reduce the time complexity of accessing and updating the data.
- Apply heuristics or approximation algorithms to reduce the search space and find near-optimal solutions. These techniques can be used when the problem size is large or the optimal solution is not required.
- Implement parallel or distributed versions of the algorithm to take advantage of multiple processors or machines. This can significantly speed up the computation for large-scale problems.
- Use pruning techniques to eliminate unnecessary branches or paths in the search space. This can reduce the number of iterations and improve the overall efficiency of the algorithm.
By implementing these strategies and optimizations, the efficiency of the Hungarian algorithm can be significantly improved, making it suitable for solving large-scale and complex matching problems.
# 9. Limitations and Future Directions
9.1. Limitations of the Hungarian Algorithm
While the Hungarian algorithm is a powerful method for solving assignment problems, it does have some limitations. Here are a few limitations to consider:
- The algorithm assumes that the input graph is bipartite and that the number of nodes in each partition is equal. If these conditions are not met, the algorithm may not produce valid or optimal solutions.
- The algorithm may not be suitable for solving very large-scale problems due to its time complexity, which is O(n^3). For such problems, approximation algorithms or parallel implementations may be more appropriate.
- The algorithm assumes that the edge costs or weights are known and can be represented as numerical values. If the costs are uncertain or qualitative, alternative methods or techniques may be required.
9.2. Alternative Methods and Approaches
There are alternative methods and approaches that can be used to solve assignment problems, depending on the specific requirements and constraints of the problem. Here are a few examples:
- Genetic algorithms: These algorithms are inspired by the process of natural selection and evolution. They use a population of candidate solutions and iteratively evolve them through selection, crossover, and mutation operations. Genetic algorithms can handle complex assignment problems and search for near-optimal solutions.
- Integer programming: This approach formulates the assignment problem as a mathematical optimization problem with integer variables and linear constraints. Integer programming solvers can be used to find the optimal solution by exploring the entire solution space.
- Reinforcement learning: This approach combines the principles of machine learning and decision-making. Agents learn from their interactions with the environment and make decisions based on rewards and penalties. Reinforcement learning can be used to solve dynamic assignment problems or problems with changing conditions.
9.3. Future Directions and Areas for Improvement
The Hungarian algorithm has been extensively studied and applied in various domains, but there are still opportunities for further research and development. Here are some potential future directions and areas for improvement:
- Developing more efficient algorithms and data structures for solving large-scale assignment problems. This could involve parallel computing, distributed computing, or approximation algorithms.
- Extending the algorithm to handle more complex constraints and objectives, such as time windows, capacity constraints, or multiple criteria. This could involve incorporating additional constraints into the optimization process or developing new algorithms for multi-objective optimization.
- Exploring applications of the algorithm in emerging fields, such as machine learning, artificial intelligence, or data science. This could involve integrating the algorithm with other techniques or algorithms to solve more complex problems.
By continuing to research and improve the Hungarian algorithm, we can enhance its capabilities and applicability, and further advance the field of matching and assignment problems.
# 10. Conclusion and Summary
In this textbook, we have covered the Hungarian algorithm, a powerful method for solving assignment and matching problems. We started by introducing the basic concepts and terminology of graph theory, and then explored the fundamentals of bipartite graphs and matching. We discussed the linear programming approach to solving assignment problems and introduced the Hungarian algorithm as a specific method for finding the optimal matching.
We discussed the applications of the Hungarian algorithm in various domains, including transportation, economics, computer science, and social networks. We compared the algorithm to other methods and highlighted its advantages and limitations. We also explored extensions and variations of the algorithm, such as handling non-bipartite graphs, weighted and unbalanced assignment problems, and multi-objective optimization.
We discussed the implementation details of the Hungarian algorithm and strategies for improving its efficiency. We also discussed the limitations of the algorithm and potential future directions for research and development.
Overall, the Hungarian algorithm is a versatile and powerful tool for solving assignment and matching problems. By understanding its principles and techniques, practitioners can apply it to a wide range of real-world problems and make informed decisions based on optimal solutions.
We hope that this textbook has provided you with a comprehensive understanding of the Hungarian algorithm and its applications. We encourage you to explore further and apply the algorithm to solve your own matching problems.
# 8. Implementation and Efficiency
The algorithm follows a step-by-step procedure to find the optimal matching. It starts with an initial assignment and iteratively improves it until an optimal solution is reached.
One common implementation strategy is to use a matrix to represent the assignment problem. The rows of the matrix correspond to the workers, and the columns correspond to the tasks. The elements of the matrix represent the costs or weights associated with assigning a worker to a task.
To improve efficiency, the implementation can use various refinements. A well-known one is the Munkres (Kuhn-Munkres) formulation of the method, which organizes the row and column reductions and the zero-covering step so that fewer operations are needed to update the assignment matrix.
In terms of time complexity, the Hungarian algorithm has a worst-case time complexity of O(n^3), where n is the number of workers or tasks. However, with the use of optimization techniques, the algorithm can often achieve better performance in practice.
Overall, the implementation of the Hungarian algorithm requires careful consideration of the data structures and algorithms used. By choosing efficient implementations and applying optimization techniques, the algorithm can be made more efficient and suitable for solving large-scale assignment and matching problems.
## Exercise
Consider the following assignment problem:
```
Workers: A, B, C, D
Tasks: 1, 2, 3, 4
Cost matrix:
1 2 3 4
A 10 20 30 40
B 15 25 35 45
C 12 22 32 42
D 17 27 37 47
```
Implement the Hungarian algorithm to find the optimal assignment and calculate the total cost.
### Solution
```python
from itertools import permutations

# Step 1: Subtract the minimum value from each row
cost_matrix = [[10, 20, 30, 40],
               [15, 25, 35, 45],
               [12, 22, 32, 42],
               [17, 27, 37, 47]]
min_row_values = [min(row) for row in cost_matrix]
reduced = [[value - min_row_values[i] for value in row]
           for i, row in enumerate(cost_matrix)]

# Step 2: Subtract the minimum value from each column
min_col_values = [min(col) for col in zip(*reduced)]
reduced = [[value - min_col_values[j] for j, value in enumerate(row)]
           for row in reduced]
# `reduced` now has a zero in every row and column; a full Hungarian run
# would carry out steps 3 and 4 on this matrix.

# Steps 3-4: cover the zeros and read off the optimal assignment.
# For this small 4x4 instance a brute-force search over all permutations
# stands in for the covering/augmenting logic; a full implementation of
# these steps is given in the next section.
best = min(permutations(range(4)),
           key=lambda p: sum(cost_matrix[i][p[i]] for i in range(4)))
total_cost = sum(cost_matrix[i][best[i]] for i in range(4))
print("Assignment:", {"ABCD"[i]: best[i] + 1 for i in range(4)})
print("Total cost:", total_cost)  # 114 (every assignment ties for this matrix)
```
Note: Steps 3 and 4 of the Hungarian algorithm (covering the zeros and extracting the optimal assignment) are replaced above by a brute-force check over all permutations, which is only practical for very small instances. A complete implementation of these steps is given in the next section.
# 8.1. Pseudocode and Implementation of the Algorithm
Here is the pseudocode for the Hungarian algorithm:
```
Input: A bipartite graph, G = (V, U, E), and an n x n matrix of edge costs, C
Output: A complete matching, M
1. Perform initialization:
- Begin with an empty matching, M0 = ∅.
- Assign feasible values to the dual variables αi and βj as follows:
- For all vi ∈ V, αi = 0
- For all uj ∈ U, βj = min_i(cij)
2. Perform n stages of the algorithm, each given by the routine Stage.
Stage:
1. Designate each exposed (unmatched) node in V as the root of a Hungarian tree.
2. Grow the Hungarian trees rooted at the exposed nodes in the equality subgraph.
- Designate the indices i of nodes vi encountered in the Hungarian tree by the set I∗.
- Designate the indices j of nodes uj encountered in the Hungarian tree by the set J∗.
- If an augmenting path is found, go to step 4.
- If not, and the Hungarian trees cannot be grown further, proceed to step 3.
3. Modify the dual variables α and β to add new edges to the equality subgraph.
- Let θ = 1/2 * min(cij - αi - βj) for all i ∈ I∗ and j ∉ J∗.
- Update αi and βj as follows:
- For all i, αi = αi + θ if i ∈ I∗, and αi = αi - θ if i ∉ I∗.
- For all j, βj = βj - θ if j ∈ J∗, and βj = βj + θ if j ∉ J∗.
- Go to step 2 to continue the search for an augmenting path.
4. Augment the current matching by flipping matched and unmatched edges along the selected augmenting path.
- Let Mk be the new matching at stage k.
- Update Mk as (Mk-1 - P) ∪ (P - Mk-1), where Mk-1 is the matching from the previous stage and P is the set of edges on the selected augmenting path.
5. Repeat stages 2-4 until an optimal matching is found.
6. Output the matching after the nth stage: M = Mn.
```
Here is an example implementation of the Hungarian algorithm in Python:
```python
import numpy as np

def hungarian_algorithm(cost_matrix):
    """Solve the square minimum-cost assignment problem.

    Returns (assignment, total_cost), where assignment[i] is the column
    (task) assigned to row (worker) i.
    """
    a = np.asarray(cost_matrix, dtype=float)
    n = a.shape[0]
    INF = float("inf")

    # Dual variables (potentials): u for rows (alpha), v for columns (beta).
    # p[j] is the row currently matched to column j (0 means unmatched);
    # rows and columns are 1-indexed here, with index 0 used as a sentinel.
    u = [0.0] * (n + 1)
    v = [0.0] * (n + 1)
    p = [0] * (n + 1)
    way = [0] * (n + 1)

    for i in range(1, n + 1):
        # Grow a Hungarian tree from row i: search for a shortest augmenting
        # path in the equality subgraph, adjusting the duals as needed.
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)   # smallest reduced cost seen for each column
        used = [False] * (n + 1)
        while True:
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = a[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j] = cur
                        way[j] = j0
                    if minv[j] < delta:
                        delta = minv[j]
                        j1 = j
            # Update the dual variables so that a new edge becomes tight.
            for j in range(n + 1):
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:       # reached an unmatched column: augmenting path found
                break
        # Augment: flip matched and unmatched edges along the path.
        while j0:
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1

    assignment = [0] * n
    for j in range(1, n + 1):
        assignment[p[j] - 1] = j - 1
    total_cost = sum(a[i][assignment[i]] for i in range(n))
    return assignment, total_cost

# Example usage
cost_matrix = np.array([[10, 20, 30, 40],
                        [15, 25, 35, 45],
                        [12, 22, 32, 42],
                        [17, 27, 37, 47]])
assignment, total_cost = hungarian_algorithm(cost_matrix)
print(assignment)    # assignment[i] = task assigned to worker i
print(total_cost)    # 114.0 (for this matrix every complete assignment ties)
```
This implementation uses the NumPy library for the cost matrix and maintains the dual variables (row and column potentials) explicitly. For each row, the `hungarian_algorithm` function grows a Hungarian tree by searching for a shortest augmenting path in the equality subgraph, adjusting the dual variables whenever the tree can no longer be grown, and then augments the matching by flipping the matched and unmatched edges along that path. It returns the optimal assignment together with its total cost.
Note that this is a simplified implementation for illustrative purposes, and it may not be the most efficient or optimized implementation.
# 8.2. Strategies for Improving Efficiency
The Hungarian algorithm has a time complexity of O(n^3), where n is the size of the input graph. However, there are several strategies that can be employed to improve the efficiency of the algorithm.
1. Matrix Reduction: Before running the Hungarian algorithm, you can apply matrix reduction techniques to reduce the size of the cost matrix. This can help eliminate unnecessary calculations and improve the overall efficiency of the algorithm.
2. Heuristic Initialization: The initial assignment of dual variables α and β can have a significant impact on the algorithm's performance. By using heuristics or approximation algorithms to initialize the dual variables, you can potentially reduce the number of stages required to find an optimal matching.
3. Early Termination: In some cases, it may be possible to determine that a matching is not optimal before completing all stages of the algorithm. By adding early termination conditions based on certain criteria, you can save computation time and improve efficiency.
4. Parallelization: The Hungarian algorithm can be parallelized to take advantage of multiple processors or threads. By dividing the computation among multiple cores or machines, you can significantly reduce the execution time of the algorithm.
5. Implementation Optimization: Optimizing the implementation of the Hungarian algorithm can also lead to efficiency improvements. This includes using efficient data structures, minimizing memory usage, and optimizing the code for specific hardware architectures.
# 8.3. Benchmarking and Performance Analysis
Benchmarking and performance analysis are important steps in evaluating the efficiency of the Hungarian algorithm and comparing different implementations. By measuring the execution time and resource usage of the algorithm under various conditions, you can identify bottlenecks, optimize the implementation, and make informed decisions about algorithmic choices.
To benchmark the Hungarian algorithm, you can use a variety of test cases with different input sizes and characteristics. This includes both synthetic test cases designed to stress-test the algorithm and real-world test cases that reflect the types of problems the algorithm is intended to solve.
When analyzing the performance of the algorithm, it is important to consider both the time complexity and the actual execution time. The time complexity provides an upper bound on the algorithm's performance, but the actual execution time can vary depending on the specific implementation, hardware, and input characteristics.
In addition to execution time, you should also measure other performance metrics such as memory usage, I/O operations, and scalability. This can help identify potential performance bottlenecks and guide optimization efforts.
To compare different implementations of the Hungarian algorithm, you can use the benchmarking results to evaluate their relative performance. This includes comparing execution times, memory usage, and other performance metrics. It is important to consider both the average case and worst-case performance, as well as the algorithm's behavior under different input distributions.
By conducting thorough benchmarking and performance analysis, you can ensure that the Hungarian algorithm is implemented efficiently and meets the performance requirements of your application.
# 9. Limitations and Future Directions
While the Hungarian algorithm is a powerful tool for solving assignment and matching problems, it does have some limitations. One of the main limitations is its computational complexity. The algorithm has a time complexity of O(n^3), which means that it can become computationally expensive for large problem sizes. This can be a challenge when dealing with real-world applications that involve a large number of variables and constraints.
Another limitation of the Hungarian algorithm is its assumption of a bipartite graph. This means that it can only be used to solve problems where the inputs can be represented as a bipartite graph. If the problem does not have a natural bipartite structure, the algorithm may not be applicable.
In addition, the Hungarian algorithm assumes that the input costs are well-defined, finite numbers. When several assignments share the same minimum cost, the algorithm still terminates, but it returns only one of the optimal solutions, which can complicate interpretation if ties need to be broken by a secondary criterion. Likewise, if some costs are unknown or ill-defined, they must be modeled explicitly (for example, by assigning large penalty values to forbidden pairings) before the algorithm can be applied.
Despite these limitations, the Hungarian algorithm has been widely used and has proven to be effective in many applications. There are also ongoing research efforts to address some of the limitations and improve the algorithm's performance.
In the future, one direction for improvement is the development of more efficient algorithms for solving assignment and matching problems. This includes exploring alternative algorithms that have lower time complexity or can handle more general problem structures.
Another direction for future research is the extension of the Hungarian algorithm to handle additional constraints and objectives. For example, incorporating constraints on capacity or precedence relationships can make the algorithm more applicable to real-world scheduling and resource allocation problems.
Overall, while the Hungarian algorithm has its limitations, it remains a valuable tool in the field of optimization and has the potential for further development and improvement in the future.
# 10. Conclusion and Summary
In this textbook, we have covered the Hungarian algorithm, a powerful method for solving assignment and matching problems. We started by introducing the fundamentals of graph theory and linear programming, which provide the theoretical foundation for the algorithm. We then explained the formulation of the assignment problem as a linear program and the application of the Hungarian algorithm to solve it.
We discussed the various applications of the Hungarian algorithm, including assignment and scheduling problems, resource allocation and network flow, and matching in social networks. We also explored extensions and variations of the algorithm, such as its generalization to non-bipartite graphs and its application to weighted and unbalanced assignment problems.
Throughout the textbook, we provided detailed explanations, practical examples, and exercises to help you understand and apply the Hungarian algorithm. We emphasized a rigorous and engaging teaching style, using specific and practical examples to illustrate the concepts.
We also discussed benchmarking and performance analysis, as well as the limitations and future directions of the Hungarian algorithm. By conducting thorough benchmarking and analysis, you can ensure that the algorithm is implemented efficiently and meets the performance requirements of your application.
In conclusion, the Hungarian algorithm is a valuable tool for solving assignment and matching problems. Its ability to find optimal solutions and handle various constraints makes it applicable to a wide range of real-world applications. By mastering the concepts and techniques covered in this textbook, you will be well-equipped to apply the Hungarian algorithm in your own work and research.
# 10.1. Recap of Key Concepts and Results
- The Hungarian algorithm is a method for solving assignment and matching problems.
- It is based on graph theory and linear programming.
- The assignment problem can be formulated as a linear program, and the Hungarian algorithm can be used to solve it.
- The algorithm has a time complexity of O(n^3), where n is the dimension of the cost matrix (the number of agents or tasks).
- The algorithm assumes a bipartite graph structure and well-defined costs.
- It finds an optimal solution by iteratively constructing alternating paths and augmenting the matching.
- The algorithm can be extended to handle additional constraints and objectives.
# 10.2. The Hungarian Algorithm in Context: Other Algorithms and Techniques
- The Hungarian algorithm is one of the most well-known and widely used algorithms for solving assignment problems.
- It is an exact algorithm: for a square cost matrix it always terminates with a complete (perfect) matching of minimum total cost.
- Other algorithms for solving assignment problems include the augmenting path algorithm and the auction algorithm.
- These algorithms have different time complexities and performance characteristics, and may be more suitable for certain problem sizes or structures.
- Techniques such as linear programming, network flow, and combinatorial optimization can also be used to solve assignment and matching problems.
- These techniques provide a broader framework for modeling and solving optimization problems, and can be combined with the Hungarian algorithm to address more complex problems.
# 10.3. Applications Beyond Matching and Assignment Problems
- The Hungarian algorithm can be applied to a wide range of optimization problems that can be formulated as linear programs.
- Examples include resource allocation, scheduling, network flow, and combinatorial optimization problems.
- The algorithm's ability to handle constraints and objectives makes it applicable to various real-world scenarios.
- For example, in resource allocation problems, the Hungarian algorithm can be used to optimize the allocation of resources such as workers, machines, or funds.
- In scheduling problems, the algorithm can be used to optimize the assignment of tasks to workers or the scheduling of events.
- In network flow problems, the algorithm can be used to optimize the flow of goods, information, or services through a network.
- The Hungarian algorithm can also be applied to matching problems in social networks, such as matching job seekers with job openings or matching students with schools.
# 10.4. Final Thoughts and Recommendations
In this textbook, we have covered the Hungarian algorithm, a powerful method for solving assignment and matching problems. We have provided a rigorous and engaging introduction to the algorithm, including detailed explanations, practical examples, and exercises.
We recommend that you practice applying the algorithm to various problem scenarios and test cases. This will help you develop a deeper understanding of the algorithm and its applications.
We also encourage you to explore other algorithms and techniques for solving assignment and matching problems. This will give you a broader perspective on optimization and help you choose the most appropriate method for your specific problem.
Finally, we recommend that you stay up-to-date with the latest research and developments in the field of optimization. This will ensure that you are aware of new algorithms, techniques, and applications that can further enhance your problem-solving capabilities.
We hope that this textbook has provided you with a solid foundation in the Hungarian algorithm and its applications. We wish you success in your future endeavors in optimization and problem-solving.
# 10.2. The Hungarian Algorithm in Context: Other Algorithms and Techniques
One popular alternative to the Hungarian algorithm is the auction algorithm. It is based on the concept of bidding: unassigned agents on one side of the graph bid for their preferred objects, raising the objects' prices until an equilibrium assignment is reached. The auction algorithm has been shown to be efficient in practice and scales well to large problems.
The Hungarian algorithm itself is also widely known as the Kuhn-Munkres algorithm, after Harold Kuhn, who published the method, and James Munkres, who refined its analysis. The Munkres formulation works directly on the cost matrix and is the variant most commonly implemented for bipartite matching problems.
In addition to these algorithms, there are also various techniques and heuristics that can be used to solve assignment and matching problems. These include genetic algorithms, simulated annealing, and ant colony optimization. These techniques often provide approximate solutions that are close to the optimal solution.
When choosing an algorithm or technique for solving assignment and matching problems, it is important to consider the specific characteristics of the problem and the available computational resources. Some algorithms may be more suitable for certain problem sizes or problem structures.
Overall, the Hungarian algorithm is a valuable tool for solving assignment and matching problems, but it is important to be aware of other approaches that may be more suitable for specific problem instances. By exploring different algorithms and techniques, you can expand your problem-solving capabilities and find the best solution for your specific problem.
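When comparing implementations, it can also help to check results and timings against an off-the-shelf solver. For example, SciPy's `scipy.optimize.linear_sum_assignment` solves the same minimum-cost assignment problem; the usage sketch below is offered only as a point of comparison, not as the implementation developed in this textbook.
```python
import numpy as np
from scipy.optimize import linear_sum_assignment

cost_matrix = np.array([[10, 20, 30, 40],
                        [15, 25, 35, 45],
                        [12, 22, 32, 42],
                        [17, 27, 37, 47]])

row_ind, col_ind = linear_sum_assignment(cost_matrix)  # minimizes total cost by default
print(list(zip(row_ind, col_ind)), cost_matrix[row_ind, col_ind].sum())
```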
# 10.3. Applications Beyond Matching and Assignment Problems
One area where the Hungarian algorithm is applied is in resource allocation. For example, in manufacturing, the algorithm can be used to optimize the assignment of tasks to machines, ensuring that resources are utilized efficiently. Similarly, in transportation and logistics, the algorithm can help optimize the assignment of vehicles to routes, minimizing costs and maximizing efficiency.
Another application of the Hungarian algorithm is in network flow problems. These problems involve finding the optimal flow of resources through a network, such as water distribution or data transmission. The algorithm can be used to determine the optimal assignment of flows, ensuring that resources are distributed effectively and efficiently.
The Hungarian algorithm also finds applications in social network analysis. It can be used to find optimal matches or connections between individuals or entities in a network, such as finding the best pairings for dating websites or optimizing recommendations in online platforms.
Furthermore, the algorithm has been used in various fields of economics, such as in market clearing mechanisms or in the assignment of resources in auctions. Its ability to find optimal assignments makes it a valuable tool in these economic contexts.
Overall, the Hungarian algorithm's versatility and efficiency make it applicable in a wide range of optimization problems. By understanding its principles and techniques, you can apply it to various domains and find optimal solutions to complex problems.
# 10.4. Final Thoughts and Recommendations
In this textbook, we have covered the Hungarian algorithm in depth, exploring its theoretical foundations, practical applications, and variations. We hope that this comprehensive guide has provided you with a solid understanding of the algorithm and its capabilities.
As you continue your studies and apply the Hungarian algorithm to real-world problems, we recommend keeping a few key points in mind. First, always start by formulating your problem as a linear program. This will help you identify the objective function, constraints, and decision variables necessary for applying the algorithm.
Second, when implementing the Hungarian algorithm, pay attention to efficiency. The algorithm can be computationally intensive, especially for large problem instances. Consider using strategies like pruning unnecessary branches, parallelization, or approximation algorithms to improve efficiency.
Lastly, remember that the Hungarian algorithm is not the only approach to solving optimization problems. While it is powerful and versatile, there may be alternative methods that are better suited to specific problem domains. Always explore different techniques and choose the one that best fits your problem's requirements.
With these recommendations in mind, we encourage you to continue exploring the Hungarian algorithm and its applications. By mastering this powerful tool, you will be equipped to tackle a wide range of optimization problems and make informed decisions in various domains.
Congratulations on completing this textbook! We hope it has been a valuable resource in your learning journey. Good luck with your future endeavors in optimization and problem-solving.
Would the control of invasive alien plants reduce malaria transmission? A review
Christopher M. Stone, Arne B.R. Witt, Guillermo Cabrera Walsh, Woodbridge A. Foster & Sean T. Murphy
Parasites & Vectors volume 11, Article number: 76 (2018)
Vector control has been the most effective preventive measure against malaria and other vector-borne diseases. However, due to concerns such as insecticide resistance and budget shortfalls, an integrated control approach will be required to ensure sustainable, long-term effectiveness. An integrated management strategy should entail some aspects of environmental management, relying on coordination between various scientific disciplines. Here, we review one such environmental control tactic: invasive alien plant management. This covers salient plant-mosquito interactions for both terrestrial and aquatic invasive plants and how these affect a vector's ability to transmit malaria. Invasive plants tend to have longer flowering durations, more vigorous growth, and their spread can result in an increase in biomass, particularly in areas where previously little vegetation existed. Some invasive alien plants provide shelter or resting sites for adult mosquitoes and are also attractive nectar-producing hosts, enhancing their vectorial capacity. We conclude that these plants may increase malaria transmission rates in certain environments, though many questions still need to be answered, to determine how often this conclusion holds. However, in the case of aquatic invasive plants, available evidence suggests that the management of these plants would contribute to malaria control. We also examine and review the opportunities for large-scale invasive alien plant management, including options for biological control. Finally, we highlight the research priorities that must be addressed in order to ensure that integrated vector and invasive alien plant management operate in a synergistic fashion.
Malaria remains a major threat to human health in many parts of the tropical and sub-tropical regions of the world [1], even though significant progress has been made with the implementation of preventative measures such as insecticide-treated nets [2, 3]. In sub-Saharan Africa for example, the disease remains chronic in many countries, and while mortality rates have fallen, an estimated 855,000 deaths were caused by this disease in 2013 [4]. Although the use of insecticide-treated nets has significantly reduced infection rates, there is concern about the increase in insecticide resistance in Anopheles spp. (Diptera: Culicidae), the vectors of Plasmodium spp. [5]. For instance, various pyrethroid resistance mechanisms are now common throughout much of the western and central parts of Africa [6]. In the past, pesticide use in agro-ecosystems also had a significant impact on malaria vectors, and likely has selected for insecticide resistance [7,8,9]. At the same time the natural enemies of mosquitoes continue to be negatively affected through the use of agro-chemicals [10]. As such, there is an increased risk that gains in malaria control will be negated, unless an integrated and sustainable approach is developed and implemented.
An integrated approach to vector control would manage factors contributing to mosquito reproduction, and longevity, in a sustainable and efficacious manner [11, 12]. Such strategies will rely on collaboration and coordination between multiple disciplines and societal stakeholders. Important cross-disciplinary research questions are those relating to (i) coordinating agricultural and vector control practices, given the selective pressures for resistance traits; (ii) understanding how changes in vegetation structure and composition (whether due to deforestation, to land-use changes, or to the spread of invasive plants - the last, the focus of this review) will affect pathogen transmission; and (iii) understanding the specific effect of plants on mosquito survival, biting frequency, reproduction, and vector competence. Emerging evidence from several sources indicates that many invasive alien plant (IAP) species may be particularly important in the enhancement of mosquito demographic parameters [13, 14].
Management of IAP species and control of vector-borne diseases such as malaria typically are not considered simultaneously. Yet, if there are sufficiently strong interactions between invasive plants and vectors, vector-control activities targeted at one may impact the other, directly or indirectly. If such interactions are negative, policy makers will have to weigh the repercussions at the level of the environment, economic development and human health. If the interactions are positive, there may be exploitable synergies between control of invasive plant species and vectors. In this review we explore whether there are reasons to expect such interactions and review the literature on the topic. We focus on malaria, though resulting insights will likely apply more broadly and beyond mosquito-vectored pathogens. For instance, an accumulating body of evidence suggests that IAPs can affect the spread and transmission intensity of sleeping sickness as well as various tick-borne pathogens [15,16,17,18].
IAP species are now a major problem in Africa and other regions where they have significant negative impacts on crop and pasture production, human and animal health, water, and other natural resources [19,20,21,22,23,24,25]. These plants were introduced over the course of the last few centuries, either accidentally or deliberately, via increasing trade and transport. Most are now widely distributed and still spreading, a situation which is exacerbated by increasing disturbance, land transformation [26, 27] and climate change [28, 29].
Given their vast distributions, there are at least two important ways that IAP species may be significantly influencing the biology and malaria-transmitting ability of Anopheles spp. First, female and male mosquitoes need sugar sources for energy, mostly obtained from floral and extra-floral nectar, honeydew and fruits [30,31,32]. Secondly, many IAPs provide suitable habitats as resting or breeding sites for mosquitoes [33, 34]. An open question is whether and how these aspects of mosquito biology are different in environments dominated by invasive plants, and whether this has ramifications for the transmission intensity of malaria. The eventual aims of the research must be to confirm that Anopheles spp. are, in reality, more abundant, with better reproduction, survival, and other vectorial-capacity characteristics in the presence of IAPs and to confirm that malaria incidence is higher where there is an abundance of non-native plants. Present evidence indicates that this is true [35], but our knowledge of Anopheles plant hosts is meagre, and native plant species also are known to provide malaria vectors with benefits. The inferred links between Anopheles spp. and IAP species raises the associated question of whether the management of particular invasive plants would contribute to the suppression of Anopheles spp. populations and to a reduction in malaria incidence.
Currently, much of the relevant literature on these topics is scattered. Thus, the purpose here is to provide a review of the current state of knowledge about these topics and to identify the gaps in this knowledge. The review is divided into three major sections. The first covers the questions of whether Anopheles spp. benefit from IAPs and, if so, whether this has a positive influence on the rate of malaria transmission. We then turn to the questions of the potential for management of IAPs on a large scale; and finally, discuss whether invasive plant control is likely to result in a reduction in the incidence of malaria. Although the main theme of this review is the relation Anopheles mosquitoes have with IAPs, we have included studies of other genera of mosquitoes that contribute to a general understanding of the topics.
Do invasive alien plants have a positive influence on the rate of malaria transmission?
Measures of transmission rate: The basic reproduction number and vectorial capacity
Here, we focus on how and whether the presence of individual or various functional groups of invasive plant species may affect the rate of Plasmodium falciparum malaria transmission. An intuitive way to explore this is to focus on the basic reproduction number of malaria, R0, its constituent parameters, and how those parameters are affected by functional traits or ecosystem impacts typically associated with invasive plant species.
The basic reproduction number provides an estimate of the number of new infections in humans following the introduction of a single infected case into a fully susceptible population [36]. It is described by the following equation [37]:
$$ R_0=\frac{m a^2 b c\, e^{-\mu \tau}}{\gamma \mu} $$
This can be understood as the product of the number of mosquito bites per person per day (\(ma\)), the duration a typical human remains infective (\(\frac{1}{\gamma}\)), the probability of a mosquito becoming infected upon biting (\(c\)), the probability of a mosquito surviving the extrinsic incubation period (\(e^{-\mu\tau}\)), the number of bites per mosquito over its expected lifespan (\(\frac{a}{\mu}\)), and the probability that an infectious bite will establish an infection in a human (\(b\)). Certain of these vector-related properties can be broken down further. For instance the biting rate of mosquitoes on humans, \(a\), can be seen as the product of the inverse of the average time between blood meals and the preference to bite humans over other types of animals, such as cattle [37].
Vectorial capacity is a measure strongly related to the basic reproduction number. It was initially defined by Garrett-Jones [38] and isolates the entomological parameters from R0. Vectorial capacity, C, relates to R0 as \(R_0=\frac{b}{\gamma}C\) [37]. It represents the number of potentially infective bites that would result from an infected human being exposed to a mosquito population for a single day, and it provides a useful lens through which to explore the link between vector traits, IAPs, and malaria transmission.
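As a purely illustrative aid (not part of the original analysis, and with parameter values chosen arbitrarily rather than estimated from field data), these definitions can be evaluated numerically, for example in Python:
```python
import math

def vectorial_capacity(m, a, c, mu, tau):
    # Following the text, C = m * a^2 * c * exp(-mu * tau) / mu, so that R0 = (b / gamma) * C.
    return m * a ** 2 * c * math.exp(-mu * tau) / mu

def basic_reproduction_number(m, a, b, c, mu, tau, gamma):
    # R0 = (b / gamma) * C
    return (b / gamma) * vectorial_capacity(m, a, c, mu, tau)

# Arbitrary illustrative values: m mosquitoes per human, a bites per mosquito per day,
# b and c transmission probabilities, mu mosquito mortality rate per day,
# tau extrinsic incubation period in days, gamma human recovery rate per day.
print(basic_reproduction_number(m=10, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10, gamma=0.01))
```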
All these vector traits, including complications such as senescence or traits that can lead to heterogeneous exposure, such as biting preferences or vagility, may be affected by plant-species composition and abundance in a given area. Evidence for this as it relates to nectar-feeding by mosquitoes has recently been reviewed [39]. Furthermore, theoretical calculations of the impact of attractive toxic sugar baits for malaria control demonstrate that the availability and distribution of plant-nectar sources are significant determinants of malaria inoculation rates [40]. Here, we summarize these plant-vector interactions briefly and highlight other potential effects of plants on vectors, particularly those associated with IAPs.
The relationship of mosquitoes and plants
It has been known for over a century that adult mosquitoes are, in part, phytophages that feed on sugar sources from nectar and other plant juices [30, 41, 42]. Females need blood mostly to mature their eggs [43], although some nourishment is obtained from blood itself in most species [44]. Blood meals can come from a wide variety of hosts, perhaps even other insects, so mosquitoes are eminently plastic host feeders [45, 46]. The few instances of stenophagy, as is the case of Anopheles gambiae (s.s.) on humans, is also particularly relevant, because this specificity is part of its great efficiency as a malaria vector [45]. Although the females of some mosquito species seem to be blood specialists [47], even these females still feed on sugar during certain age and reproductive stages or under conditions of host- or oviposition-site scarcity [48,49,50].
Many different plant species are fed on, but not all plants have the same attractiveness, or effect on survival and flight [31, 32, 51,52,53]. Yet laboratory experiments on An. gambiae [54] indicated that any plant was better than no plants, even Lantana camara L. (Verbenaceae), a species reported to have mosquito-repellent properties [55]. Furthermore, plant choice can show seasonal, diel, and even sexual differences [32]. The extent and consequences of plant dependence on the ecology of adult mosquitoes and their vectorial capacities is a field of active research.
A currently less explored relationship between mosquitoes and plants is that between larvae and aquatic plants. Mosquitoes have aquatic larvae, and a great many different kinds of interactions can be expected between larvae, ovipositing adults, and aquatic plants. This issue was first approached in the Americas in the early twentieth century in relation to malaria outbreaks in the southeast USA and the Panama Canal area [56,57,58]. In those days it was considered that malaria control would be achieved essentially through larval control. In fact, the only instances in history of successful mosquito eradication (Aedes and Anopheles spp.) were achieved mainly by targeting larvae and breeding site management, as in the cases of An. arabiensis eradication in Brazil and Egypt between the 1930s and 1940s [59,60,61]. But since the appearance of DDT in the 1940s, mosquito control has concentrated on adult control, and as a result many knowledge gaps on larval ecology of mosquitoes persist [62]. However, concerns about insecticide resistance, environmental impacts, rising costs of indoor spraying, and logistical constraints, have sparked renewed interest in larval control of malaria vectors [60, 62].
Nectar-plant contribution to transmission pressure
For most of the traits that determine the vectorial capacity of malaria mosquitoes, there is ample evidence that they are affected by plants. The effect of access to different plant species and their nectar on mosquito survivorship has been studied in detail. For instance, while access to nectar consistently increases longevity, there are large differences between different plant species with regard to their effect on longevity [50, 51, 54, 63, 64]. Besides the effect of nectar quality, abundance, and accessibility, some plants also provide mosquitoes with shelter: certain plants may create more suitable microclimates facilitating mosquito resting behaviour and diurnal survival. The extent to which this contributes to mosquito survivorship may depend on the mosquito species in question or the environment (e.g. the need or tendency to rest outdoors). The importance of such harbourage is not well-known, and resting behaviour in general remains an understudied aspect of mosquito biology.
The effects of plant nectars on the biting frequency of mosquitoes are less clear. In certain experiments, the biting rate is decreased when mosquitoes have ad libitum access to sugar sources [50, 65, 66]. In other cases, the biting rate is unaffected [67] or greater [64] with access to nectar-bearing plants or plants that are more attractive and/or produce more copious amounts of nectar. This discrepancy could be due to differences in host availability. For instance, mosquitoes are more likely to feed on nectar, and take larger meals, when blood hosts are available for only a short period of the night or at times that do not coincide with the species' peak biting activity [68]. This would suggest that under natural field conditions where blood hosts are abundant and easily accessible, plants may have little impact on the biting frequency but affect other aspects of vectorial capacity and reproduction [67].
The development of Plasmodium within the mosquito may also be affected by feeding on different plant species. This development includes both the probability of the pathogen reaching the infective sporozoite stage (i.e. vector competence) and the average length of time between ingestion of a gametocytemic blood meal and appearance of sporozoites in the salivary glands (i.e. the extrinsic incubation period of Plasmodium). Besides sugars, nectar contains amino acids and secondary metabolites. These various compounds, either directly or indirectly (e.g. by affecting the mosquito's immune response), may have an impact on pathogens within the vector. For instance, in An. coluzzii, females that had access to the fruit Lannea acida A. Rich. (Anacardiaceae) or the flowering ornamental plant Barleria lupulina Lindl. (Acanthaceae) were more likely to have disseminated sporozoites in their heads and thoraxes after an infectious meal than females that were exposed to a different ornamental, Thevetia neriifolia Juss. ex Steud. (Apocynaceae) [69].
The impact of plant-nectar on mosquito population density is complex, as numerous traits will affect this outcome. These traits will include female fecundity and survival. Studies that have measured the impact of sugar on the net reproduction rate of mosquito populations indicate a depressing effect of sugar [39]. However, because fecundity in mosquitoes is strongly correlated with the biting rate, this may not hold under field conditions (e.g. if differences in fecundity are merely a result of differences in the biting frequency). Another complication is that the regulation of mosquito population density, particularly of Anopheles spp., under natural conditions remains poorly understood. While seasonal changes in rainfall are clearly an important driver of population densities, density-dependent larval development may be relevant for part of the year or in certain larval habitats. If larval habitats are at carrying capacity during mid- to late rainy season, differences in population growth rates may not result in different population sizes.
The growth rate of a population can also be influenced through effects of plants on male mosquitoes. For male mosquitoes, feeding on nectar provides their sole source of nutrients. Without nectar, their prospects of survival and mating dwindle. Laboratory cage and mesocosm experiments indicate that in the absence of nectar, insufficient females may become inseminated to sustain a population [70, 71]. Whether this is applicable under natural conditions, where nectar sources may vary in attributes such as quality, quantity and accessibility, but likely are not entirely absent save for the most inhospitable environments, is not yet resolved. Females may not become inseminated until a later age in areas where sugar is inaccessible [67], and presumably such a delay in female reproduction affects the population growth potential.
Plants and mosquito oviposition and larval development
The effect of aquatic plants on mosquito oviposition and larval development has received the most attention. Most studies indicate that some aquatic plants, both macrophytes and charophytes, boost mosquito reproduction or larval survival, while others inhibit it, often in contradictory reports [72]. The evidence suggests that aquatic plants contribute to the spread of many human diseases around the world, including malaria [14], although not all vectors respond the same way to them. The general description of breeding sites for An. gambiae (s.l.) are small pools or puddles with diameters less than 1 m, while vegetation is considered crucial for the breeding of An. funestus (s.l.) [73,74,75,76,77]. For instance, in the Lake Victoria region in East Africa, the most aggressive Anopheles spp. were not abundant in deep permanent lake waters, even if invaded by the IAP Eichhornia crassipes (Mart.) Solms (water hyacinth, Pontederiaceae) and other aquatic weeds. However, they bred abundantly in temporary or seasonal aquatic coastal habitats such as pools and swamps, more so when infested by aquatic vegetation [78]. Although experimental evidence is scarce, there are plenty of observational reports of a relationship between larval abundance and aquatic vegetation. Water lettuce, Pistia stratiotes L. (Araceae), the main host of Mansonia spp. mosquitoes, was also reported as a plant that favoured anopheline establishment [79, 80]. So were water primroses, Ludwigia spp. (Onagraceae) [81], another genus with several species that are invasive in many countries. The South American invasive plants E. crassipes, Egeria densa Planchon (Brazilian elodea; Hydrocharitaceae), Hydrocotyle ranunculoides L. (floating pennywort; Araliaceae), and water lettuce, were deemed to increase the risk of the return of malaria in Europe [82]. Water hyacinth and Myriophyllum aquaticum (Vell.) Verdc. (parrots feather; Haloragaceae) were also reported to stimulate anopheline reproduction in the USA [72, 81, 83]. Curry [57] and Rozeboom [58] stated that submerged species in the genera Chara (Characeae), Utricularia (Lentibulariaceae) and Najas (Hydrocharitaceae) provided vast breeding sites for An. albimanus, the most important malaria vector in the Americas. Studies in India state that aquatic plants, especially water hyacinth, were associated with increased Anopheles spp. richness, though not abundance [84]. In fact, aquatic weed control is a standard procedure in mosquito management [62].
Field studies using systematic samplings and statistical analyses (PCA and other multivariate analyses) corroborate the important role of aquatic plants in mosquito abundance. Rejmánková et al. [85] predicted biting risk for An. albimanus based on wetland characteristics with a 90 to 100% precision. These characteristics were emergent macrophytes, water hyacinth, dense cyanobacterial mats, and distance to the nearest human community. Subsequent studies by Rejmánková et al. [86, 87] confirmed that submersed and floating plants, and cyanobacterial mats were associated with Anopheles spp. abundance in Belize and Mexico. In Argentina, Anopheles spp. abundance, as well as that of several other species, was associated with the emergent/floating macrophytes H. ranunculoides, Alternanthera filoxeroides Grisebach (alligator weed; Amaranthaceae) and Ludwigia spp., as well as water lettuce and the floating ferns Salvinia spp. (Salviniaceae), all of which are important IAPs in many parts of the world. The chemical properties of the water courses sampled were not a relevant factor in the statistical analyses [88, 89]. Similar results were obtained in Mexico, where the dominant factor was water hyacinth during the dry season, and emergent Cyperaceae during the wet season [90]. Sinka and collaborators [75, 76, 91] published three extensive spatial analyses and reviews of the available knowledge on occurrence, distribution, and ecological requirements for the 41 main malaria vectors in the world. The data they obtained from their meta-analyses showed that some Anopheles spp., including the An. gambiae complex, are typically found in small water bodies devoid of plants, but most species are associated with aquatic plants and plant debris. Yet, even the An. gambiae species are often found in abundance in plant dominated waters. This was attributed to the typical adaptability of mosquitoes and to differences in behaviour at population level.
Azolla (Azollaceae) and Salvinia (Salviniaceae), two genera of floating ferns, and duckweeds (Araceae), have been reported both to prevent and favour mosquito establishment [92,93,94]. Salvinia spp. were proposed as a mosquito control option, acting apparently by simply providing a physical barrier to ovipositing females [93]. In the case of Azolla spp., the plant was described as trapping emerging adults in the multi-layered and multi-dissected fronds [95]. On a closer look, however, it would seem that the main factor is not so much plant species, but density [86, 96]. Pool experiments and systematic sampling of rice fields suggest that duckweed and aquatic ferns may hinder mosquito growth at high densities, but stimulate it at low densities [97, 98], and more than 80% cover by Azolla spp. must be kept, to provide mosquito control in rice paddies [98, 99]. Yet, the highest densities of fish that serve as mosquito predators occur at intermediate plant densities. Both very high and very low plant densities in lakes were associated with seven-fold reductions in fish densities [100]. Thus, despite the vector-suppressing aspects of aquatic vegetation, managing their densities at optimum levels may be unrealistic. Furthermore, severe invasive plant infestations often contribute to other problems, such as the loss of aquatic biodiversity, fouling of water resources, and the promotion of snail populations that serve as intermediate hosts of the blood fluke Schistosoma [101, 102].
There have been a few experimental approaches to understanding the relationship between mosquito breeding and aquatic plants. Furlow & Hays [97] planted pools with different aquatic plants, with a clean pool as a control, and allowed spontaneous mosquito colonization. The clean pools, and those with submerged macrophytes produced more Anopheles spp. than those with a thick duckweed cover. Orr & Resh [103] evaluated experimentally the relationship between plant cover and predation of mosquito larvae. Experiments whereby algae were mechanically extracted from natural streams demonstrated that water devoid of filamentous algae harboured dramatically fewer Anopheles larvae [104, 105]. The experimental and observational evidence indicates that the way aquatic plants benefit mosquito larval survival is by protection from predators [83, 103, 104, 106, 107], increased food availability [83, 104, 108], and dampened wind action [77, 108,109,110]. It is evident, then, that large, open, wind- and predator-exposed waters are deleterious to Anopheles spp. larvae in general.
Both terrestrial and aquatic vegetation affect immature mosquito dynamics in a number of ways. One way is by influencing where mosquitoes lay their eggs, which could affect local population densities. For instance, Aedes albopictus females appear to have a preference to oviposit in sites adjacent to flowering plants (Buddleja davidii Franch; Scrophulareaceae) [111]. Likewise, up to a certain density of Myriophyllum aquaticum cover, the oviposition activity of Anopheles hermsi was increased [83]. In choice tests, An. gambiae is more likely to oviposit on bare soil than on water near grassy vegetation [112], while An. minimus (s.l.) prefers to oviposit on water near small-leaved plants [113]. In The Gambia, presence of anopheline larvae was positively associated with short emergent vegetation or tufts of grass, but was lower in sites where taller vegetation shaded more than 25% of a potential larval site [114]. The latter insight is well established for certain sun-loving vectors and has led to the planting of species providing dense shade to control vectors. For instance, in Java, An. maculatus populations on tea plantations were managed by planting the invasive alien shrub Tithonia diversifolia (Hemsl.) A. Gray (Asteraceae) [115].
Terrestrial plants also can impact larval development through input of plant materials into water bodies. A well-known example is the deposition of maize pollen into larval habitats. In the vicinity of maize fields, larvae develop into adults with greater probability, do so more rapidly, and produce larger-bodied adults [116]. Larvae also were found to develop into adults despite conditions of intense crowding, if they were near areas where maize pollen was abundant [117]. The importance of the input of plant material in the form of leaf litter, fruits, or flowers is well established for the larvae of container- or treehole-breeding mosquitoes [13]. Additionally, certain IAPs can serve as harbourages for adults (e.g. Amur honeysuckle, Lonicera maackii Rupr; Caprifoliaceae) and support greater mosquito densities [118, 119]. The importance of such inputs for malaria mosquitoes in particular is less clear.
Do mosquito interactions with invasive alien plants cause differences in transmission?
Does malaria transmission pressure differ between pristine and invaded landscapes? There are at least two ways by which vector-borne pathogen transmission might increase when invasive plants become established. The first would occur where an IAP establishes itself in a barren or early successional environment, or reaches a greater biomass (and therefore provides more adult refugia and nectar) than the plant species it replaced. The second scenario would occur when an IAP possesses traits that change the functioning of an ecosystem in a manner that enhances a vector's vectorial capacity by altering its components.
A wide variety of ecological and evolutionary hypotheses have been posited as potential explanations for what makes particular species successful invaders, many of which allude to particular traits that allow for invasiveness [120,121,122,123,124]. These include the ideal-weed hypothesis, which suggests that a weed possessing life history traits such as early and high fertility, small seed size, and rapid growth would tend to have a competitive advantage. IAPs may have an advantage due to biotic release from enemies (enemy release hypothesis), whereby herbivores or pathogens that limit population growth in the native habitat are absent in invaded habitats. This would potentially allow for resources initially allocated to defences to be diverted to increased fertility or growth (evolution of increased competitive ability) [125,126,127]. Alternatively, invasive plants may release allelopathic chemicals to which native species are not adapted (the novel weapons hypothesis). Invasion success can also be related to increased resource availability, disturbances, or the presence of empty niches [121]. The diversity of these hypotheses (e.g. Catford et al. [121] describe 29 different ones) highlights the complexity involved and perhaps bodes caution when trying to link interactions of vectors with a group of organisms as broadly defined as "invasive alien plants". However, many of these hypotheses do relate at least in part to biotic traits of the invasive organism. Further, invasives are often identified as such by their ecological or economic impact. To the extent that such broader ecosystem impacts are comparable among different invasive species, these characteristics may apply to mosquitoes as well. The questions of interest are then, which traits and impacts are well supported, and how do those traits intersect with the manner in which mosquitoes rely on local plant communities?
A number of such traits that could affect mosquitoes have been identified through large-scale comparative studies. These have shown that for a variety of traits related to plant physiology, including allocation of resources, growth rate, size, and fitness, invasive plants had greater trait values than native species [128]. Traits linked to invasiveness include a fast growth rate and vigorous spatial growth, particularly for IAP species in tropical areas [128], as well as greater photosynthetic rate and efficiency of water and N and P usage [129]. Potentially more salient findings, with regard to mosquitoes, are that IAP species have been found to be taller in their native range [130,131,132] (potentially affecting shading of habitats), and they have a different flowering phenology, either flowering longer than native plants [132,133,134,135,136] or tending to flower earlier or later [130, 137]. The majority of these studies were undertaken in temperate or Mediterranean ecosystems and therefore must be extrapolated with care to Afro-tropical regions. Such extended or earlier flowering periods, could, if associated with an extended or earlier production of nectar, enhance mosquito survival and population growth during periods when native plant communities might not support mosquito reproductive success.
A body of work also exists on ecosystem impacts associated with invasive species. For instance, water usage of plants, considered at an ecosystem scale (rather than at the level of an individual plant or even leaf), when evaluated within each growth form (grass/sedge, forb/fern, or tree/shrub), was found not to differ significantly between native and invasive plants. But a comparison among all growth forms pooled together showed that on average ecosystems dominated by IAPs had a 50% higher rate of evapotranspiration than those dominated by native plants [138]. Whether this might affect the microclimates of mosquito immatures (e.g. abundance of standing water) or of adults (e.g. humidity) is an open question. Further, IAPs have been found to decrease local plant abundance and species richness, but increase total community productivity (as measured in plant biomass or net primary productivity) [27]. The fitness as well as the abundance of animals also was lower, the latter by approximately 17% [27]. Likewise, an increase in biomass and reduction in plant diversity, causing a reduction in vertebrate animal abundance, may affect mosquito populations as well. This can cause a shift in host utilization of generalist blood-feeders such as An. arabiensis, resulting in a higher biting rate on humans. For instance, many IAPs are toxic and reduce pasture carrying capacities [23,24,25]. If this leads to a severe reduction in cattle near human habitations, this would result in higher biting rates on humans, due to reduced availability of domestic animals or to a genetic shift to a greater preference for human hosts. Malaria rates could be further exacerbated by any economic impacts of a loss of pasture. If invasive plants also affect non-domestic animals that do not serve as blood sources, including predators, competitors, pollinators, and honeydew producers, outcomes for the mosquito population become difficult to predict. Invaded ecosystems do tend to have a depauperate insect fauna, including fewer parasitoids and predators, and fewer phytophagous insects [139,140,141,142,143,144,145], which can result in a reduction in insectivorous birds [146,147,148,149]. It seems plausible, then, that such shifts in invertebrate diversity and abundance could favour mosquitoes, but this remains to be confirmed.
Evidence is scant that IAPs differ from non-invasive species in ways related directly to the fitness or vectorial capacity of vectors, and only a few studies have tested this explicitly. Traits having a direct effect would most likely relate to harbourages and to nectar, both of which can increase survival of adults. Features of favourable daytime harbourages may include plants that provide greater protection from wind, low humidity, direct sunlight, and excessive heat. Features of nectar that favour mosquitoes are its quantity and concentration, and the ease with which it can be accessed. The amount of accessible nectar is determined by physical characteristics of the inflorescence, interactions with invertebrates such as spiders or ants, and competition with pollinators, parasitoids, or other nectar feeders. Such tritrophic interactions have received little attention to date, and outcomes could conceivably go either way. For instance, if IAPs tend to escape from pathogens and herbivores, they may have less of a need to invest in the production of extra-floral nectar to encourage ants. On the other hand, if invasive plants are able to invest more energy into reproductive output, for instance by flowering and producing nectar for a longer period, and invertebrate communities tend to be diminished, this could result in a greater availability of nectar for mosquitoes.
A number of studies have assessed the survival of An. gambiae when provided with access to a variety of native and alien plants (though typically that distinction was accidental). In those studies, the plants that allowed the greatest longevity were those invasive in parts of Africa such as Ricinus communis L. (Euphorbiaceae) [51], Manihot esculenta Crantz (Euphorbiaceae) [62], and Tecoma stans (L.) Juss ex Kunth (Bignoniaceae) [54] (Fig. 1). Other IAPs however, such as Lantana camara, appear to provide only very little nectar and support longevities of only a few days [50, 51, 54, 150]. The same appears true of the weed Parthenium hysterophorus L. (Asteraceae) [54] (Fig. 1), although in one study this weed provided ample sugar and supported mean lifespans much closer to that of R. communis [151]. Whether such discrepancies are due to differences in experimental set-up (e.g. the condition of plants or cuttings that were used) or underlying genetic or environmental differences between plant populations, is an important open question. If there are extremely large differences in nectar production between populations of the same species of plant, recommendations for management will become that much more complicated.
Fig. 1 Invasive plant species in Africa known to be attractive to malaria vectors include Prosopis juliflora (a), Parthenium hysterophorus (b), Senna didymobotrya (c), and Tecoma stans (d)
Equally important is whether mosquitoes will locate and attempt to feed on nectar from non-native plants. A priori, one might assume that given the importance of nectar-feeding for mosquito reproductive success, mosquitoes will have evolved to respond most strongly to the volatiles of nectariferous plants native to their region. An alternative hypothesis is that the volatile organic compound blends released by plants provide a clue to their nectar productivity, and mosquitoes are able to detect such state-dependent cues. There is some support for the latter notion. For instance, when given access to a panel of plants, those that the mosquitoes perched on most often (i.e. were attracted to, or retained on the longest, in a cage) were also the plants that resulted in the greatest proportion of sugar-fed mosquitoes after exposure in a no-choice test, and these included a mix of native and alien plants [31]. Olfactometer experiments likewise have shown that both native [e.g. Senna didymobotrya (Fresen) H.S. Irwin & Barneby; Fabaceae,] and alien (P. hysterophorus) species to tropical Africa to be among the most attractive plants to An. gambiae (Fig. 1). In Mali, the most attractive flowering plants were Acacia macrostachya DC (Fabaceae), Faidherbia albida (Delile) A.Chev. (Fabaceae), Boscia angustifolia A. Rich (Capparaceae) and Ziziphus jujube Mill. (Rhamnaceae) [32], only the latter of which is an alien species and can become invasive in certain regions. In Burkina Faso, the most attractive plants were reported to include Mangifera indica L. (Anacardiaceae), Delonix regia (Hook.) Raf. (Fabaceae), Thevetia neriifolia, Senna siamea (Lam.) H.S. Irwin & Barneby (Fabaceae) and Cassia sieberiania DC (Fabaceae) [52], of which only the last is native to equatorial mainland Africa. This is further supported by a recent study finding that An. gambiae benefitted from the presence of the introduced invasive shrub Prosopis juliflora (Sw.) DC (Fabaceae) in Mali [35] (Fig. 1). Thus, there is little support for the notion that mosquitoes would favour plants with which they have co-occurred for longer (evolutionary) periods. Attraction studies that account for the state or condition of various plants could shed more light on this issue.
Finally, very little is known of the importance of biomass or abundance of different plants on the foraging decisions made by mosquitoes. Presumably (given that preferences for different plant species are not absolute), the plants that are fed on in a given locale will be a function both of their attractiveness or acceptability to mosquitoes, and of their abundance or how frequently plant species are encountered. This is particularly important for malaria mosquitoes such as An. gambiae, because sugar feeding and blood feeding are, to an extent, energetically interchangeable in this species. Thus, changes in plant abundance or biomass might affect not only which plants are used, but the rate at which humans are bitten.
To summarize, while traits and ecosystem impacts of IAPs appear highly variable and context dependent, those with the broadest support appear to be longer flowering durations, more vigorous growth (which may result in more flowers), and potentially an increase in biomass that provides more daytime shelter, sometimes in areas where previously little vegetation existed. Additionally, many IAPs may provide mosquitoes with ample nectar, increasing their longevity in the field. While one might expect mosquitoes to favour plants with which they have interacted over evolutionary time, alien plants are often among the most attractive plant hosts. So although many questions remain, it is at least plausible that invasive plants will increase malaria transmission rates.
The potential for invasive alien plants to be managed on a large scale
Invasive alien plant management
IAP management strategies need to include activities related to prevention, to early detection and rapid response, and to control. As most IAP species were intentionally introduced, the most effective way to preclude introductions into other regions is to prevent them. As such, a risk assessment of a plant's potential for invasiveness should be undertaken prior to its introduction in these regions. Evaluated species that are deemed to pose a risk to natural resources, agriculture, or human and animal health should never be imported intentionally. The risk of unintentionally introducing invasive or potentially invasive species, especially in contaminated imports, can be reduced by imposing better controls at all ports of entry. If the authorities or designated officials have failed to prevent the introduction of an invasive or potentially invasive species, and it has established in the field, it is critical that it be detected early and eradicated, if possible, before it becomes widespread and abundant. To that end it is important that a surveillance strategy be developed and implemented so that small and isolated invasions can be contained and eradicated. If surveillance did not result in the early detection of a potentially problematic plant, and eradication is no longer feasible because it is already widespread and abundant, it is essential to implement a control strategy.
Control strategies can include the use of cultural, physical, chemical or biological methods or a combination of some or all of these measures, followed by rehabilitation or restoration. Cultural control (e.g. the use of fire, flooding or grazing, to reduce the abundance of invasive plants) can be effective on its own, or when used in conjunction with other control methods. Fire is especially effective for controlling succulents such as species in the Cactaceae and Crassulaceae, and can also be used to reduce the abundance of young seedlings or saplings of other IAP plants, even grasses. Total inundation of semi-aquatic plants by water, through controlled flooding, can also be used to manage semi-aquatic IAP species. Livestock, such as goats, are sometimes also used to control palatable terrestrial weeds, although results have been mixed - livestock can often contribute to the further spread of invasive plants.
Manual control involves the direct removal of the above-ground parts of a plant with an axe, saw, chainsaw or slasher, or the uprooting of plants using a hoe, garden fork or spade, or by hand pulling. Removal of the above-ground parts of a plant is suitable only for those weeds that do not coppice or regrow from the rootstock. Manual control may also include ring- and strip-barking of large shrubs or trees. Mechanical control may involve the use of heavy machinery or equipment (e.g. bulldozers or tractors and can, among others, involve pushing, stick-raking, blade-ploughing and/or chaining of larger plants or medium density infestations). There are a number of advantages of manual control in that practitioners require little training or supervision; tools are simple, cheap and easily obtainable. In most cases, little or no harm is caused to the environment and manual control can be used in countries where no herbicides are registered for use against a particular IAP species. However, there are disadvantages, as it often includes procedures that are labour intensive, and as such can be expensive in countries with high labour costs; it is physically demanding and slow, and it usually requires repeated follow-up operations; where machinery is used, manual control can be expensive - incurring fuel and maintenance costs; soil disturbance may stimulate seed germination among weeds, and on steep slopes or on riverbanks this may also exacerbate soil erosion; and in dense infestations, native species are often inadvertently damaged or removed.
Chemical control involves the use of herbicides, alone or in combination with other methods, and herbicides can be applied in several ways. Foliar spraying is the application of a diluted herbicide over the foliage (leaves and stems) of seedlings, shrubs, grasses or dense vine infestations to the 'point of runoff'. Basal stem applications are usually used on thin-barked woody weeds, tree saplings, regrowth and multi-stemmed shrubs and trees; the entire circumference of the trunk or stem, from ground level to a height of 30–100 cm, is sprayed or painted. Total frill involves using a hand-axe, panga or machete to make horizontal cuts into the sapwood tissue of the stems or trunks of trees, vines or woody weeds, and then inserting herbicide into the cuts. Stem injection, sometimes also called drill-and-frill, involves the use of a battery-powered drill or similar tool: holes are drilled (at a 45° downward angle) into the stems or trunks of trees, cacti, vines or woody weeds, and herbicide is injected into each drill hole using a squeeze bottle or plastic syringe. Stump applications involve cutting a plant down at the base of the stem and then immediately applying herbicide to the stump. Cut stump, sometimes also referred to as "cut and spray" or "lopping/pruning", involves felling a plant completely at its base (no higher than 15 cm above the ground), preferably horizontally, with a chainsaw, brush-cutter or similar tool, and then applying herbicide (with a paint brush, a squeeze bottle, a sponge-tipped bottle or a spray bottle) to the cut stump. Scrape and paint, used mainly on vines, involves scraping a very thin layer of bark from a 10–30 cm section of stem (taking care not to cut through the vine) and then applying herbicide to the exposed green underlying soft tissue before the plant can seal the wound.
The main advantage of chemical control is that it is more cost-effective than other methods, especially manual control. Other advantages include the fact that results are quicker than with manual control, especially when compared with ring-barking or stripping, and that use of the correct herbicides, applied according to label recommendations, can have little to no negative impacts on the environment. However, there are also disadvantages, including the purchase of specialized equipment and the training of applicators, which can add to costs; herbicides can be expensive, especially if incorrect formulations are used, resulting in poor control and requiring repeated applications; target species must be 'healthy', and weather conditions suitable, at the time of a herbicide's application; foliar applications can affect non-target species; herbicide misuse may cause environmental damage; and manual control of plants may be necessary before herbicide application. Also, widespread misuse of herbicides can produce severe environmental and health impacts [152, 153].
Biological control, that is, the use of host-specific natural enemies (pathogens, mites, and insects) to control invasive plants, has been practiced for many decades by a host of countries, especially the USA, Australia, South Africa, Canada and New Zealand. Over a period of 150 years, until the end of 1996, more than 350 species of invertebrates and pathogens were deliberately released in 75 countries for the control of at least 133 weed species [154]. It was estimated [155] that by the end of 2012, there were 1555 separate and intentional releases of 469 species of invasive-plant biological control agents against 175 species of invasive plants (when related taxa of unidentified plant species, such as some Opuntia spp. (Cactaceae), are counted as single target plants). These so-called 'classical' biocontrol projects have been conducted in a total of 90 countries [156]. At the national level, biocontrol programmes have achieved success rates of 83%, 80%, 61%, 51% and 50%, respectively, in New Zealand [156], Mauritius [157], South Africa [158], Australia [159] and Hawaii [160]. Analyses undertaken in South Africa more than 20 years ago revealed that six invasive alien plants out of 23 targeted were under complete control, and a further 13 under substantial control [161]. In Hawaii, more than 25 years ago, seven introduced weeds out of 21 were already under complete control, and substantial control had been achieved for three more [160]. In Australia, of 15 completed programmes, 12 resulted in complete control [159]. In South Africa, without biological control, the area occupied by the invasive cactus Opuntia aurantiaca Lindl. (Cactaceae) could have been 15 times greater than it is today [158]. Thanks to biological control Opuntia ficus-indica (L.) Mill (Cactaceae) invasions in South Africa have been reduced by approximately 90% [162, 163]. In fact the introduction of host specific and damaging agents has probably contributed 75% to the control of species in the family Cactaceae [164].
Some examples of biological control programmes against IAPs that have achieved some degree of control are given in Table 1. In general, programmes against aquatic species have, to date, achieved a high level of success.
Table 1 Examples of biological control programmes against aquatic and terrestrial invasive alien plants
The main aim of biological control is to suppress plant vigour, reduce seed production, slow plant growth, and reduce the density of the plant infestations. The main benefits of biological control according to Greathead [176] are these: agents establish self-perpetuating populations, often throughout the range of a target weed, including areas that are not accessible using chemical or mechanical control methods; the control is permanent; there are no negative impacts on the environment; the cost of biological control programmes is low, relative to other approaches, and requires only a one-off investment; benefits can be reaped by many stakeholders, irrespective of their financial status or contribution to the initial research process. Biological control also can be used to resolve "conflicts". Many woody invasive species are widely promoted and disseminated for fuel wood in developing countries but also have negative impacts on livelihoods. Host-specific and damaging agents that attack only the reproductive parts of the plant can reduce spread and densification without reducing the beneficial attributes of the target species and in this way resolve the "conflict". However, it should be noted that biological control agents are not available for every IAP species; some released agents have had negligible impacts on the target species; and there are situations where an agent has a significant impact on the target weed only in a small part of its adventive range. This is why control requires an integrated strategy where various options are used in combination in order to enhance suppression or elimination.
An issue that has not been addressed adequately is the cost-effectiveness of various control options. This need has led to a number of recent studies, mainly undertaken in South Africa, to determine whether the benefits of invasive plant control outweigh the costs. For example, a dynamic simulation of an ecological-economic model of IAP control in a mountain fynbos ecosystem in South Africa found that the cost of proactive clearing would range from 0.6% to 4.76% of the economic value of ecosystem services, while increasing the value of these services by between 138% and 149% [177]. Also in South Africa, De Lange & van Wilgen [164] estimated the value of ecosystem services at ZAR 152 billion (presently, about US$ 11.551 billion) annually, of which an estimated ZAR 6.5 billion (US$ 490 million) is lost every year due to IAPs. However, the loss would have been an estimated additional ZAR 41.7 billion (US$ 3169 million) had no invasive plant control been carried out. Costs of aquatic weed control in Florida in the late 1960s were estimated to be US$ 6 million annually and benefits were reported as US$ 82 million, with the largest benefits coming from increased land use (due to drainage) and prevented flood damage [178].
Studies on the benefits of targeting individual species have also provided evidence of cost-effectiveness. An analysis of the costs and benefits of the invasive Australian tree Acacia mearnsii De Wild. (Fabaceae) in South Africa suggests that a 'do nothing' scenario (with no attempts to control the spread of the species beyond the limits of commercial plantations) is not cost-effective, as the benefit: cost ratio is around 0.4 [179]. The most attractive control option would be a combination of biological control of the whole plant (flowers, seed pods, leaves and stems) and physical clearing (benefit: cost ratio of 7.5) [179]. Brown & Daigneault [180] found that an integrated approach to the control of the invasive tree Spathodea campanulata Beauv. (Bignoniaceae) in Fiji derived monetized benefits of US$ 3.7 for each US$ 1 spent, even without explicitly considering biodiversity, culture, and other non-monetized benefits of control. It is estimated that tamarisk (Tamarix spp.; Tamaricaceae) invasions in the western United States cost about US$ 280–450 per hectare [181]. Eradicating these invasive species and restoring native riparian communities throughout the region would cost about US$ 7400 per hectare, considerably more than the current costs of impacts. However, these intervention costs would be fully recovered in as few as 17 years, after which the societal, ecological, and economic benefits of restoration would continue to accrue indefinitely [182].
Although there have been comparatively few studies to evaluate the overall costs and benefits of an integrated control strategy, the benefits of classical biological control programmes have been well documented. An analysis of some biocontrol research programmes in South Africa found that benefit: cost ratios ranged from 34:1 for Lantana camara L. (Verbenaceae) to 4331:1 for golden wattle, Acacia pycnantha Benth. (Fabaceae) [183]. In fact, the benefit: cost ratios for biocontrol projects in South Africa range from 50:1 for invasive sub-tropical shrubs to 3726:1 for invasive Australian trees (de Lange and van Wilgen, 2010 [164]). It is also estimated that biological control agents present in South Africa have reduced the financial costs of mechanical and chemical control by more than 19.8%, or ZAR 1.38 billion (presently, about US$ 104.8 million) [184]. It is further estimated that biological control programmes, if fully implemented in the future, may reduce control costs by an additional 41.4%, or ZAR 2.89 billion (presently, about US$ 219.5 million) [184]. These findings are supported by studies in Australia, which have found that every dollar invested in the invasive plant biological control effort yielded a return of A$ 23.10 [185]. There, the benefit: cost ratio for agriculture alone (in terms of both cost savings on control and increased production) was 17.4. If current annual expenditures on biological control research continue into the future, it is expected that projects targeting invasive plants with biological control agents in Australia may provide, on average, an annual net benefit of A$ 95.3 million, of which A$ 71.8 million is expected to flow into the agriculture sector [185]. A good example of the benefits of biological control is that of water hyacinth in southern Benin, where the reduction of this aquatic plant by biological control has been credited with an increase in income of US$ 30.5 million per year to a community of about 200,000 people [186]. If one assumes that the benefits stay constant over the next 20 years, the accumulated present value will be US$ 260 million - a benefit: cost ratio of 124:1 [186].
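For readers who wish to reproduce the headline figures above, the Benin water hyacinth numbers are consistent with a standard discounted cash-flow calculation. The short sketch below is illustrative only: the 10% annual discount rate is an assumption (it is what makes a constant US$ 30.5 million annual benefit over 20 years accumulate to roughly US$ 260 million), and the implied programme cost of about US$ 2.1 million is back-calculated from the 124:1 ratio rather than taken from the source.

```python
# Illustrative back-of-the-envelope check of the Benin water hyacinth figures [186].
# Assumptions (not stated explicitly above): 10% annual discount rate and a
# constant benefit stream of US$ 30.5 million/year over 20 years.

def present_value(annual_benefit, years, rate):
    """Discounted present value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

annual_benefit = 30.5e6                                  # US$ per year, from the text
pv = present_value(annual_benefit, years=20, rate=0.10)
print(f"Accumulated present value: US$ {pv / 1e6:.0f} million")   # ~US$ 260 million

# The 124:1 benefit:cost ratio then implies a one-off programme cost of roughly
# pv / 124, i.e. about US$ 2.1 million (a back-calculated, hypothetical figure).
print(f"Implied programme cost: US$ {pv / 124 / 1e6:.2f} million")
```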
In summary, the most cost-effective way of controlling invasive plants is by combining two or more of the methods mentioned above. For example, manual control used in combination with chemical and/or biological control, commonly known as integrated pest management (IPM), should be implemented wherever possible in order to reduce costs and improve the efficacy of control across a landscape. Invasive plants can be effectively controlled over large areas by developing and implementing an integrated management strategy.
The potential effect of invasive alien plant management on the incidence of malaria
If in certain regions IAPs are exacerbating malaria transmission, it would be useful to know what will occur if the distribution and densities of these weeds are reduced. For certain scenarios this appears straightforward. If the invasion of alien plants leads to increased poverty by reducing crop yields and pasture productivity, then improving people's socio-economic conditions should, to an extent, help them escape the poverty trap of malaria by providing increased access to medication and preventive measures. In arid areas where an IAP might have expanded the range of a malaria vector, removal of the weed would likely reverse this expansion [35]. A similar argument would hold for cases where an invasive plant might prolong the seasonal population peaks of vectors. Support for the latter notion comes from a recent field trial in Mali, where the impact of the invasive shrub Prosopis juliflora on malaria mosquitoes was investigated during the dry season [35]. Villages without the plant were compared to those where P. juliflora had become established. In half of the latter villages, at a certain point all flowering branches of this plant were removed. Anopheles spp. were monitored throughout, to examine whether removal of this invasive putative nectar source affected mosquito species composition, age structure, population size, and sugar-feeding status. The average number of female Anopheles caught per trap declined more than two-fold in the villages where inflorescences had been removed, while numbers stayed stable in the positive and negative control villages. Likewise, the proportion of females that survived for at least 3 gonotrophic cycles dropped from 35% to 11%. Sugar-feeding rates also dropped dramatically following removal of the flowering branches. These results are similar to those of a study that indicated that in areas with more abundant or richer nectar sources, Anopheles spp. mosquitoes would live longer and greater populations would be sustained [64]. Another notable result was that the species composition shifted from a mix of An. coluzzii, An. gambiae and An. arabiensis to one dominated by An. coluzzii. Whether this species shift toward An. coluzzii reflects its lower dependence on nectar, or perhaps a tendency to make use of other sources available in arid environments (for instance, by piercing plant tissues), remains to be investigated. It also will be important to investigate the impacts of (i) invasive plants on mosquitoes throughout the year, not just in the dry season; (ii) other IAP species (both terrestrial and aquatic), to determine whether these effects are mosquito-plant species-specific or instead applicable to invasive species and mosquitoes in general; and (iii) invasive plants in a wider range of habitats, particularly in more verdant areas where mosquito-plant interactions will be far more complex. Another recent invasive plant-removal experiment performed in North America suggests that at least some of these aspects may be common. A 2-year study using a Before-After/Control-Impact design found that the abundance of Culex spp. declined following removal of invasive Amur honeysuckle (Lonicera maackii) [187]. Although the authors did not measure vector survival rates directly, they did find a more favourable microclimate for mosquitoes in areas with honeysuckle. Neither study evaluated the impact on pathogen transmission, but it could be considerable.
A quick calculation shows that in the case of P. juliflora inflorescence removal, assuming that only vector mortality and abundance would change by the levels indicated in the paper [35], and assuming a gonotrophic cycle length of 3 days and an extrinsic incubation period of 12 days, R0 would be reduced by a factor of approximately 28. However, as reviewed above, many of the other behavioural and life history traits of mosquitoes also may change with changes in vegetation, and it will be important to study such impacts in a comprehensive manner in future experiments.
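The order of magnitude of this reduction can be reproduced with the classical vectorial-capacity expression C ∝ m a² pⁿ / (−ln p), to which R0 is proportional when the human-related terms are held constant. The sketch below is illustrative and rests on assumptions beyond those stated in the text: daily survival p is inferred from the 35% and 11% figures by treating three gonotrophic cycles as nine days, the biting rate a is arbitrary because it cancels in the ratio, and the abundance drop is set to an illustrative 2.8-fold (a strictly two-fold drop gives a roughly 20-fold reduction instead).

```python
import math

def vectorial_capacity(m, a, p, n):
    """Ross-Macdonald-style vectorial capacity (up to a constant factor):
    m = vector:human ratio, a = human-biting rate, p = daily survival,
    n = extrinsic incubation period in days."""
    return m * a**2 * p**n / (-math.log(p))

# Daily survival inferred from the proportion surviving >= 3 gonotrophic
# cycles (~9 days at 3 days/cycle): p**9 = 0.35 with the plant, 0.11 after removal.
p_before = 0.35 ** (1 / 9)
p_after = 0.11 ** (1 / 9)

a, n = 0.3, 12                  # a cancels in the ratio; n = 12-day EIP (assumed)
m_before, m_after = 2.8, 1.0    # illustrative ~2.8-fold drop in abundance

reduction = (vectorial_capacity(m_before, a, p_before, n)
             / vectorial_capacity(m_after, a, p_after, n))
print(f"Vectorial capacity (and hence R0) reduced ~{reduction:.0f}-fold")   # ~28-fold
```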
Less clear is what happens if an invasive plant merely replaces part of the local plant community and (perhaps) increases the available plant biomass and nectar in that region. Several important questions cannot be answered in any obvious way by extrapolating from the laboratory and semi-field experiments that have been done to date. The initial questions are these: Which plants tend to be replaced in invaded landscapes, those that already provide nectar to mosquitoes, or those that are poor nectar-hosts or relatively unattractive to mosquitoes? And by how much does the availability of nectar change in the landscape, whether as a result of different nectar production rates, or changes in the invertebrate community within the landscape? If there is generally an increase in the availability of nectar, how does this affect mosquito nectar-feeding, human-biting behaviour, reproductive success, and population size throughout the year? It is worth noting that in most studies there were only two levels of nectar abundance, high and low, or present and absent. To understand how invasive plants might affect vectorial capacity, we would require insights into mosquito traits as a function of nectar availability. For instance, it is likely that there is a lower limit of nectar availability at which mosquito populations can be sustained [73], but whether that lower level is ever relevant in nature, i.e. whether nectar ever becomes limiting in nature, remains unknown [30]. Likewise, above a certain level of nectar abundance, further increases may have limited impact on mosquitoes. An additional complication is that nectar feeding appears to be more relevant to energetically-deprived or smaller mosquitoes, and when blood hosts are inaccessible or absent [68, 188]. It is possible that this interaction with blood-host accessibility or presence explains why, in some circumstances where sugar access is greater, the biting rate and resulting vectorial capacity of the mosquito population are lower [50, 66]. Thus, ideally we would need to measure mosquito traits as a function of nectar abundance at different levels of blood-host presence.
In practical terms, another point of uncertainty relates to the intensity of transmission that occurs in a given region. In areas where vectorial capacity or R0 (or alternatively, the proportion of humans that are parasitemic) is very high, a strong reduction in vectorial capacity and therefore R0, may still have little impact on malaria prevalence. It is only in areas with a lower transmission pressure that reductions in vectorial capacity will have a more pronounced impact on malaria incidence. This suggests that if management of nectar sources of mosquitoes has a strong impact on vectorial capacity, it could potentially contribute in a significant way to malaria control in areas of low transmission. Alternatively, plant removal could be considered as one of many components of an integrated control strategy in high transmission areas. This might be particularly relevant in areas where use of long-lasting insecticidal nets and human-case management are insufficient to interrupt transmission, or areas where insecticide resistance is climbing.
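One way to visualize why the same proportional reduction matters more at low transmission is the crude relationship between R0 and equilibrium prevalence in a minimal SIS-type approximation, in which prevalence is roughly 1 − 1/R0 for R0 > 1. This ignores superinfection, immunity and heterogeneity and is not part of the cited analyses; the sketch below is only meant to illustrate the saturation effect described above.

```python
# Minimal illustration (SIS-type approximation, ignoring immunity and
# heterogeneity): equilibrium prevalence ~ 1 - 1/R0 for R0 > 1, otherwise 0.

def equilibrium_prevalence(r0):
    return max(0.0, 1.0 - 1.0 / r0)

for r0 in (200, 20, 5, 0.5):    # from holoendemic to low-transmission settings
    print(f"R0 = {r0:>5}: prevalence ~ {equilibrium_prevalence(r0):.2f}")

# A 10-fold drop from R0 = 200 to 20 barely changes prevalence (0.995 -> 0.95),
# whereas the same proportional drop from R0 = 5 takes transmission below the
# R0 = 1 threshold and interrupts it altogether.
```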
The relationship between many Anopheles species and aquatic IAPs is often strong enough to warrant invasive plant control as an additional malaria management tool. The same can be said for many mosquito species and other disease vectors. We must ask ourselves if manipulating the environment to control any given malaria vector includes the risk of creating suitable environments for other malaria vectors or vectors of other diseases. The current evidence suggests this would not be the case: removal of emergent vegetation to control An. funestus, for instance, would not necessarily create more good habitats for An. gambiae (s.l.) and other vectors, inasmuch as large bodies of exposed, deep, clear water are unsuitable for oviposition and larval development of mosquitoes in general [75, 76, 78, 88,89,90]. Added to the economic and environmental benefits of applying biological control for aquatic weeds, it is apparent that malaria suppression also could profit from aquatic invasive plant management, which already has had successes.
This review highlights the complexity of the Anopheles-plant relationship and the necessity of understanding it in order to anticipate how and when IAPs may increase malaria incidence. By using our knowledge of the interplay of factors influencing this relationship from the pathogen's perspective, it appears we can judiciously apply invasive-plant interventions to suppress malaria transmission, or even to interrupt it altogether in some instances. Field experiments focused on the unknown features of the mosquito-plant interface will yield the information needed to determine how best to approach the invasive-plant problem. Initial investigations should use the entomological inoculation rate (EIR) to compare malaria exposure in areas with similar housing conditions, human density, socioeconomics, and bed-net usage, leaving only alien-plant establishment as the variable. If the comparison indicates a strong impact of these plants on Plasmodium exposure, further studies on mosquito foraging behaviour and its implications for population dynamics and vectorial capacity will provide further insights. These will inform how IAP management can contribute to malaria control and ensure that programmes targeting different aspects of environmental and human health are coordinated in a mutually beneficial manner.
Abbreviations
VC: Vectorial capacity
EIR: Entomological inoculation rate
Nh: Population size of humans
Nv: Population size of vectors
R0: Basic reproduction number of malaria
ZAR: South African Rand
World Health Organisation. World Malaria Report. http://www.who.int/malaria/publications/world_malaria_report_2014/report/en/. Accessed 13 Dec 2016.
Raghavendra K, Barik TK, Niranjan Reddy BP, Sharma P, Dash AP. Malaria vector control: from past to future. Parasitol Res. 2011;108:757–79.
Bhatt S, Weiss DJ, Cameron E, Bisanzio D, Mappin B, Dalrymple U, et al. The effect of malaria control on Plasmodium falciparum in Africa between 2000 and 2015. Nature. 2015;526:207–11.
Murray CJL, Ortblad KF, Guinovart C, Lim SS, Wolock TM, Roberts DA, et al. Global, regional and national incidence and mortality for HIV, tuberculosis, and malaria during 1990-2013: a systematic analysis for the global burden of disease study 2013. Lancet. 2014;384(9947):1005–70.
Curtis C, Maxwell C, Lemnge M, Kilama WL, Steketee RW, Hawley WA, et al. Scaling-up coverage with insecticide-treated nets against malaria in Africa: who should pay? Lancet Infect Dis. 2003;3:304–7.
Ranson H, N'Guessan R, Lines J, Moiroux N, Nkuni Z, Corbel V. Pyrethroid resistance in African anopheline mosquitoes: what are the implications for malaria control? Trends Parasitol. 2011;27:91–8.
Roberts DR, Andre RG. Insecticide resistance issues in vector-borne disease control. Am J Trop Med Hyg. 1994;50:21–34.
Overgaard HJ, Sandve SR, Suwonkerd W. Evidence of anopheline mosquito resistance to agrochemicals in northern Thailand. SEA J Trop Med Pub Health. 2005;36(Suppl. 4):152–7.
Nkya TE, Akhouayri I, Poupardin R, Batengana B, Mosha F, Magesa S, et al. Insecticide resistance mechanisms associated with different environments in the malaria vector Anopheles gambiae: a case study in Tanzania. Malaria J. 2014;13:28.
Relyea RA. The impact of insecticides and herbicides on the biodiversity and productivity of aquatic communities. Ecol Appl. 2005;15:618–27.
Fillinger U, Ndenga B, Githeko A, Lindsay SW. Integrated malaria vector control with microbial larvicides and insecticide-treated nets in western Kenya: a controlled trial. Bull World Health Organ. 2009;87:655–65.
Kroeger I, Liess M, Dziock F, Duquesne S. Sustainable control of mosquito larvae in the field by the combined actions of the biological insecticide Bti and natural competitors. J Vector Ecol. 2013;38:82–9.
Reiskind M, Zarrabi A, Lounibos L. Invasive leaf resources alleviate density dependence in the invasive mosquito, Aedes albopictus. Biol Inv. 2010;12:2319–28.
Mack R, Smith M. Invasive plants as catalysts for the spread of human parasites. NeoBiota. 2011;9:13–29.
Nash TAM. Africa's bane: the tsetse fly. London: Collins; 1969.
Syed Z, Guerin PM. Tsetse flies are attracted to the invasive plant Lantana camara. J Insect Physiol. 2004;50:43–50.
Allan BF, Dutra HP, Goessling LS, Barnett K, Chase JM, Marquis RJ, et al. Invasive honeysuckle eradication reduces tick-borne disease risk by altering host dynamics. Proc Natl Acad Sci USA. 2010;107:18523–7.
Civitello DJ, Cohen J, Fatima H, Halstead N, Liriano J, McMahon TA, et al. Biodiversity inhibits parasites: broad evidence for the dilution effect. Proc Natl Acad Sci USA. 2015;112:8667–71.
Mwangi E, Swallow B. Prosopis juliflora invasion and rural livelihoods in the Lake Baringo area of Kenya. Conserv Soc. 2008;6:130–40.
Van Wilgen BW, Reyers B, Le Maitre DC, Richardson DM, Schonegevel L. A biome-scale assessment of the impact of invasive alien plants on ecosystem services in South Africa. J Environ Manag. 2008;89:336–49.
Maundu P, Kibet S, Morimoto Y, Imbumi M, Adekar R. Impact of Prosopis juliflora on Kenya's semi-arid and arid ecosystems and local livelihoods. Biodiversity. 2009;10:33–50.
Pratt CF, Constantine KL, Murphy ST. Economic impacts of invasive alien species on African smallholder livelihoods. Glob Food Secur. 2017;14:31–7.
Shackleton RT, Witt ABR, Nunda W, Richardson DM. Chromolaena odorata (Siam weed) in eastern Africa: distribution and socio-ecological impacts. Biol Inv. 2017;19:1285–98.
Shackleton RT, Witt ABR, Aool W, Pratt C. Distribution of the invasive alien weed, Lantana camara, and its ecological and livelihood impacts in eastern Africa. Afr J Range Forage Sci. 2017;34:1–11.
Shackleton RT, Witt ABR, Piroris FM, van Wilgen BW. A survey of the distribution and perceptions of the socio-economic and ecological impacts of the invasive alien cactus Opuntia stricta in East Africa. Biol Inv. 2017;19(8):2427–41.
Vila M, Pujadas J. Land-use and socio-economic correlates of plant invasions in European and north African countries. Biol Conserv. 2001;100:397–401.
Vila M, Espinar J, Hejda M, Hulme P, Jarosik V, Maron J, et al. Ecological impacts of invasive alien plants: a meta-analysis of their effects on species, communities and ecosystems. Ecol Lett. 2011;14:702–8.
Hellmann JJ, Byers JE, Bierwagen BG, Dukes JS. Five potential consequences of climate change for invasive species. Conserv Biol. 2008;22:534–43.
Bezeng BS, Morales-Castilla I, van der Bank M, Yessoufou K, Daru BH, Davies TJ. Climate change may reduce the spread of non-native species. Ecosphere. 2017:e01694.
Foster WA. Mosquito sugar feeding and reproductive energetics. Annu Rev Entomol. 1995;40:443–74.
Manda H, Gouagna LC, Nyandat E, Kabiru EW, Jackson RR, Foster WA, et al. Discriminative feeding behaviour of Anopheles gambiae s.s. on endemic plants in western Kenya. Med Vet Entomol. 2007;21:103–11.
Müller GC, Beier JC, Traore SF, Toure MB, Traore MM, Bah S, et al. Field experiments of Anopheles gambiae attraction to local fruits/seedpods and flowering plants in Mali to optimize strategies for malaria vector control in Africa using attractive toxic sugar bait methods. Malaria J. 2010;9(1):262.
Rajnikant BRM, Sharma RC, Gupta DK, Gautam AS. Anopheline breeding in ponds of central Gujarat with reference to water hyacinth infestation. Ind J Malariol. 1992;29:57–61.
Webb CE, Ironside A, Mansfield S. A comparison of oviposition preference in the presence of three aquatic plants by the mosquitoes Culex annulirostris Skuse and Culex quinquefasciatus (Culicidae: Diptera) in laboratory tests. Gen App Entomol. 2012;41:21–6.
Müller GC, Junnila A, Traore M, Traore SF, Doumbia S, Sissoko F, et al. The invasive shrub Prosopis juliflora enhances the malaria parasite transmission capacity of Anopheles mosquitoes: a habitat manipulation experiment. Malaria J. 2017;16:237.
Diekmann O, Heesterbeek JAP. Mathematical epidemiology of infectious diseases: model building, analysis and interpretation (Vol. 5). Chichester: John Wiley & Sons; 2000.
Smith D, McKenzie E. Statics and dynamics of malaria infection in Anopheles mosquitoes. Malaria J. 2004;3:13.
Garrett-Jones C. Prognosis for interruption of malaria transmission through assessment of the mosquito's vectorial capacity. Nature. 1964;204:1173–5.
Stone CM, Foster WA. Plant-sugar feeding and vectorial capacity. In: Takken W, Koenraadt CJM, editors. Ecology of parasite-vector interactions. Wageningen: Wageningen Academic Publishers; 2013. p. 35–79.
Zhu L, Marshall JM, Qualls WA, Schlein Y, McManus JW, Arheart KL, et al. Modelling optimum use of attractive toxic sugar bait stations for effective malaria vector control in Africa. Malaria J. 2015;14:492.
Spielman A. Bionomics of autogenous mosquitoes. Annu Rev Entomol. 1971;16:231–48.
Nyasembe VO, Teal PEA, Mukabana WR, Tumlinson JH, Torto B. Behavioural response of the malaria vector Anopheles gambiae to host plant volatiles and synthetic blends. Parasit Vectors. 2012;5:234.
Nayar JK, Sauerman DM. Physiological effects of carbohydrates on survival, metabolism, and flight potential of female Aedes taeniorhynchus. J Insect Physiol. 1971;17:2221–33.
Van Handel E. Metabolism of nutrients in the adult mosquito. Mosq News. 1984;44:573–9.
Takken W, Verhulst NO. Host preferences of blood-feeding mosquitoes. Annu Rev Entomol. 2013;58:433–53.
George J, Blanford H, Thomas MB, Baker TC. Malaria mosquitoes host-locate and feed upon caterpillars. PLoS One. 2014;9(11):e108894.
Harrington LC, Edman JD, Scott TW. Why do female Aedes aegypti (Diptera: Culicidae) feed preferentially and frequently on human blood? J Med Entomol. 2001;38:411–22.
Foster WA, Takken W. Nectar-related vs. human-related volatiles: behavioural response and choice by female and male Anopheles gambiae (Diptera: Culicidae) between emergence and first feeding. Bull Entomol Res. 2004;94:145–57.
Gary RE, Foster WA. Diel timing and frequency of sugar feeding in the mosquito Anopheles gambiae, depending on sex, gonotrophic state and resource availability. Med Vet Entomol. 2006;20:308–16.
Stone CM, Jackson BT, Foster WA. Effects of bed net use, female size, and plant abundance on the first meal choice (blood vs sugar) of the malaria mosquito Anopheles gambiae. Malaria J. 2012;11:3.
Impoinvil DE, Kongere JO, Foster WA, Njiru BN, Killeen GF, Githure JI, et al. Feeding and survival of the malaria vector Anopheles gambiae on plants growing in Kenya. Med Vet Entomol. 2004;18:108–15.
Gouagna LC, Poueme RS, Dabiré KR, Ouédraogo JB, Fontenille D, Simard F. Patterns of sugar feeding and host plant preferences in adult males of An. gambiae (Diptera: Culicidae). J Vector Ecol. 2010;35:267–76.
Nikbakhtzadeh MR, Terbot JW II, Otienoburu PE, Foster WA. Olfactory basis of floral preference of the malaria vector Anopheles gambiae (Diptera: Culicidae) among common African plants. J Vector Ecol. 2014;39:372–83.
Manda H, Gouagna LC, Foster WA, Jackson RR, Beier JC, Githure I, Hassanali A. Effect of discriminative plant-sugar feeding on the survival and fecundity of Anopheles gambiae. Malaria J. 2007;6:113.
Mng'ong'o FC, Sambali JJ, Sabas E, Rubanga J, Magoma J, Ntamatungiro AJ, et al. Repellent plants provide affordable natural screening to prevent mosquito house entry in tropical rural settings-results from a pilot efficacy study. PLoS One. 2011;6(10):e25927.
Curry DP. Breeding of Anopheline mosquitoes among aquatic vegetation of Gatun Lake, accompanied by periodic long flights of A. albimanus Wied. Sth Med J. 1934;27:644–51.
Rozeboom LE. A symposium on human malaria: with special reference to North America and the Caribbean region. In: Moulton FR, editor. Distribution and ecology of the Anopheles mosquitoes of the Caribbean region. USA: American association for the Advancement of Science; 1941. p. 98–107.
Meyer SL. Plants of importance in the breeding of Anopheles albimanus Wied. in Panama. Bull Torrey Bot Club. 1947;74:257–61.
Soper FL, Wilson DB. Anopheles gambiae in Brazil: 1930 to 1940. New York: Rockefeller Foundation; 1943.
Killeen GF, Fillinger U, Kiche I, Gouagna LC, Knols BGJ. Eradication of Anopheles gambiae from Brazil: lessons for malaria control in Africa? Lancet Infect Dis. 2002;2:618–27.
Morrison AC, Zielinski-Gutierrez E, Scott TW, Rosenberg R. Defining challenges and proposing solutions for control of the virus vector Aedes aegypti. PLoS Med. 2008;5(3):e68.
Walker K, Lynch M. Contributions of Anopheles larval control to malaria suppression in tropical Africa: review of achievements and potential. Med Vet Entomol. 2007;21:2–21.
Gary RE, Foster WA. Anopheles gambiae feeding and survival on honeydew and extra-floral nectar of peridomestic plants. Med Vet Entomol. 2004;18:102–7.
Gu W, Müller G, Schlein Y, Novak RJ, Beier JC. Natural plant sugar sources of Anopheles mosquitoes strongly impact malaria transmission potential. PLoS One. 2011;6(1):e15996.
Straif SC, Beier JC. Effects of sugar availability on the blood feeding behaviour of Anopheles gambiae (Diptera: Culicidae). J Med Entomol. 1996;33:608–12.
Gary RE, Foster WA. Effects of available sugar on the reproductive fitness and vectorial capacity of the malaria vector Anopheles gambiae (Diptera: Culicidae). J Med Entomol. 2001;38:22–8.
Ebrahimi B, Jackson BT, Guseman JL, Przybylowicz CM, Stone CM, Foster WA. Alteration of plant species assemblages can decrease the transmission potential of malaria mosquitoes. J Appl Ecol. 2017; https://doi.org/10.1111/1365-2664.13001.
Stone CM, Jackson BT, Foster WA. Effects of plant-community composition on the vectorial capacity and fitness of the malaria mosquito Anopheles gambiae. Am J Trop Med Hygiene. 2012;87:727–36.
Hien D, Dabiré K, Roche B, Diabaté A, Yerbanga RS, Cohuet A, et al. Plant-mediated effects on mosquito capacity to transmit human malaria. PLoS Pathog. 2016;12(8):e1005773.
Gary RE, Cannon JW, Foster WA. Effect of sugar on male Anopheles gambiae Giles (Diptera: Culicidae) mating performance, as modified by temperature, space, and body size. Parasit Vectors. 2009;2:19.
Stone CM, Taylor RM, Roitberg BD, Foster WA. Sugar deprivation reduces insemination of Anopheles gambiae (Diptera: Culicidae), despite daily recruitment of adults, and predicts decline in model populations. J Med Entomol. 2009;46:1327–37.
Barber MA, Hayne TE. Water hyacinth and the breeding of Anopheles. Pub Health Rep (1896–1970). 1925;40:2557–62.
Chinery WA. Effects of ecological changes on the malaria vectors Anopheles funestus and the Anopheles gambiae complex of mosquitoes in Accra, Ghana. J Trop Med Hygiene. 1984;87:75–81.
Sattler MA, Mtasiwa D, Kiama M, Premji Z, Tanner M, Killeen GF, et al. Habitat characterization and spatial distribution of Anopheles sp. mosquito larvae in Dar es Salaam (Tanzania) during an extended dry period. Malaria J. 2005;4:4.
Sinka ME, Rubio-Palis Y, Manguin S, Patil AP, Temperley WH, Gething PW, et al. The dominant Anopheles vectors of human malaria in the Americas: occurrence data, distribution maps and bionomic précis. Parasit Vectors. 2010;3:72.
Sinka ME, Bangs MJ, Manguin S, Coetzee M, Mbogo CM, Hemingway J, et al. The dominant Anopheles vectors of human malaria in Africa, Europe and the Middle East: occurrence data, distribution maps and bionomic précis. Parasit Vectors. 2010;3:117.
Dia I, Guelbeogo MW, Ayala D. Chapter 7. Advances and perspectives in the study of the malaria mosquito Anopheles funestus. In: Manguin S, editor. Anopheles mosquitoes - New insights into malaria vectors. InTech; 2013. doi:https://doi.org/10.5772/55389.
Adoka SO, Dida G, Anyona DN, Matano AS, Othero DA, Kanangire CK. Spatial distribution and habitat characterisation of mosquito species in the lake and land habitats of western Kenya. East Afr Med J. 2016;93:117–26.
Zetek J. Anopheles breeding among water lettuce - a new habitat. Bull Entomol Res. 1920;11:73–5.
Curry DP. Some observations on the Nyssorhynchus group of the Anopheles (Culicidae) of Panama. Am J Hygiene. 1932;15:566–72.
Eyles DE, Robertson JL Jr. A Guide and key to the aquatic plants of the southeastern United States. Pub Health Bull. 1944;286:1–141.
Alarcón-Elbal PM. Plantas invasoras acuáticas y culícidos: un binomio peligroso. Invasive aquatic plants and culicids: a dangerous duo. Bol Real Soc Española Hist Nat. Sección Biología. 2013;107:5–15.
Orr B, Resh V. Influence of Myriophyllum aquaticum cover on Anopheles mosquito abundance, oviposition, and larval microhabitat. Oecologia. 1992;90:474–82.
Srivastava RK, Srivastava H. Observations on anopheline breeding in relation to aquatic plants in different breeding habitats of Kheda (Gujarat). J Comm Dis. 2004;36:187–94.
Rejmánková E, Roberts D, Pawley A, Manguin S, Polanco J. Predictions of adult Anopheles albimanus densities in villages based on distances to remotely sensed larval habitats. Am J Trop Med Hygiene. 1995;53:482–8.
Rejmánková E, Savage HM, Rejmánek M, Arredondo-Jiménez JI, Roberts DR. Multivariate analysis of relationships between habitats, environmental factors and occurrence of Anopheline mosquito larvae Anopheles albimanus and A. pseudopunctipennis in southern Chiapas, Mexico. J Appl Ecol. 1991;28:827–41.
Rejmánková E, Roberts DR, Harbach RE, Pecor J, Peyton EL, Manguin S, et al. Environmental and regional determinants of Anopheles (Diptera: Culicidae) larval distribution in Belize, central America. Environ Entomol. 1993;22:978–92.
Almirón WR, Brewer ME. Classification of immature stage habitats of Culicidae (Diptera) collected in Córdoba, Argentina. Mem Instituto Oswaldo Cruz. 1996;91:1–9.
Stein M, Ludueña-Almeida F, Willener JA, Almirón WR. Classification of immature mosquito species according to characteristics of the larval habitat in the subtropical province of Chaco, Argentina. Mem Instit Oswaldo Cruz. 2011;106:400–7.
Savage HM, Rejmánková E, Arredondo-Jiménez JI, Roberts DR, Rodríguez MH. Limnological and botanical characterization of larval habitats for two primary malarial vectors, Anopheles albimanus and Anopheles pseudopunctipennis, in coastal areas of Chiapas state, Mexico. J Am Mosq Cont Assoc. 1990;6:612–20.
Sinka ME, Bangs MJ, Manguin S, Chareonviriyaphap T, Patil AP, Temperley WH, et al. The dominant Anopheles vectors of human malaria in the Asia-Pacific region: occurrence data, distribution maps and bionomic précis. Parasit Vectors. 2011;4:89.
Matheson R. The utilization of aquatic plants as aids in mosquito control. Am Nat. 1930;64:56–86.
Hobbs JH, Molina PA. The influence of the aquatic fern Salvinia auriculata on the breeding of Anopheles albimanus in coastal Guatemala. Mosq News. 1983;43:456–9.
Imbahale SS, Mweresa CK, Takken W, Mukabana WR. Development of environmental tools for anopheline larval control. Parasit Vectors. 2011;10(1):184.
Amerasinghe FP, Kulasooriya SA. Azolla vs mosquitoes: some experiments with Culex quinquefasciatus. MIRCEN J App Microbiol Biotechnol. 1985;1:355–63.
Greenway M, Dale P, Chapman H. An Assessment of mosquito breeding and control in 4 surface flow wetlands in tropical-subtropical Australia. Water Sci Technol. 2003;48:249–56.
Furlow BM, Hays KL. Some influences of aquatic vegetation on the species and number of Culicidae (Diptera) in small pools of water. Mosq News. 1972;32:595–9.
Mwingira VS, Mayala BK, Senkoro KP, Rumisha SF, Shayo EH, Mlozi MR, et al. Mosquito larval productivity in rice-fields infested with Azolla in Mvomero District, Tanzania. Tanzania J Health Res. 2009;11:17–22.
Rajendran R, Reuben R. Evaluation of the water fern Azolla microphylla for mosquito population management in the rice-land agro-ecosystem of south India. Med Vet Entomol. 1991;5:299–310.
Hoyer MV, Canfield DE. Aquatic macrophytes and their relation to fish populations of Florida lakes. Charleston, South Carolina: Abstracts, Thirty-third Annual Meeting, The Aquatic Plant Management Society, Inc., and The Fifteenth Annual Meeting, The South Carolina Aquatic Plant Management Society, Inc; 1993.
Bennett FD. Investigations On the insects attacking the aquatic ferns Salvinia spp. in Trinidad and northern South America. In: Proceedings of the Nineteenth Annual Meeting Southern Weed Conference. USA: Southern Weed Society. 1966;497–504.
Thomas PA, Room PMT. Control of Salvinia molesta. Nature. 1986;320:581–4.
Orr BK, Resh VH. Experimental Test of the influence of aquatic macrophyte cover on the survival of Anopheles larvae. J Am Mosq Control Assoc. 1989;5:579–85.
Bond JG, Rojas JC, Arredondo-Jiménez JI, Quiroz-Martínez H, Valle J, Williams T. Population control of the malaria vector Anopheles pseudopunctipennis by habitat manipulation. Proc Royal Soc London B. 2004;271:2161–9.
Bond JG, Quiroz-Martínez H, Rojas JC, Valle J, Ulloa A, Williams T. Impact of environmental manipulation for Anopheles pseudopunctipennis Theobald control on aquatic insect communities in southern Mexico. J Vector Ecol. 2007;32:41–53.
Bradley GH. Some factors associated with the breeding of Anopheles mosquitoes. J Agric Res. 1932;44:381–99.
Orr BK, Resh VH. Interactions among mosquitofish (Gambusia affinis), sago pondweed (Potamogeton pectinatus), and the survivorship of Anopheles mosquito larvae. In: Proceedings and Papers of the 55th Annual Conference California Mosquito and Vector Control Association. Sacramento; 1987. p. 94–7.
Minakawa N, Seda P, Yan G. Influence of host and larval habitat distribution on the abundance of African malaria vectors in western Kenya. Am J Trop Med Hygiene. 2002;67:32–8.
Rubio-Palis Y, Menare C, Quinto A, Magris M, Amarista M. Caracterización de criaderos de anofelinos (Diptera: Culicidae) vectores de malaria del Alto Orinoco, Amazonas, Venezuela. Entomotropica. 2005;20:29–38.
Dukeen M, Omer S. Ecology of the malaria vector Anopheles arabiensis Patton (Diptera: Culicidae) by the Nile in northern Sudan. Bull Entomol Res. 1986;76:451–67.
Davis T, Kline D, Kaufman P. Aedes albopictus (Diptera: Culicidae) oviposition preference as influenced by container size and Buddleja davidii plants. J Med Entomol. 2016;53:273–8.
Huang J, Walker E, Otienoburu P, Amimo F, Vulule J, Miller J. Laboratory tests of oviposition by the African malaria mosquito, Anopheles gambiae, on dark soil as influenced by presence or absence of vegetation. Malaria J. 2006;5:88.
Overgaard H. Effect of plant structure on oviposition behavior of Anopheles minimus s.l. J Vector Ecol. 2007;32:193–7.
Fillinger U, Sombroek H, Majambere S, van Loon E, Takken W, Lindsay S. Identifying the most productive breeding sites for malaria mosquitoes in Gambia. Malaria J. 2009;8:62.
Hackett LW, Russell PF, Scharff JW, Senior WR. The present use of naturalistic measures in the control of malaria. Bull Health Organ. 1938;7:1016–64.
Ye-Ebiyo Y, Pollack R, Spielman A. Enhanced development in nature of larval Anopheles arabiensis mosquitoes feeding on maize pollen. Am J Trop Med Hygiene. 2000;63:90–3.
Ye-Ebiyo Y, Pollack R, Kiszewski A, Spielman A. Enhancement of development of larval Anopheles arabiensis by proximity to flowering maize (Zea mays) in turbid water and when crowded. Am J Trop Med Hygiene. 2003;68:748–52.
Muturi EJ, Gardner AM, Bara JJ. Impact of an alien invasive shrub on ecology of native and alien invasive mosquito species (Diptera: Culicidae). Environ Entomol. 2015;44:1308–15.
Gardner AM, Allan BF, Frisbie LA, Muturi EJ. Asymmetric effects of native and exotic invasive shrubs on ecology of the West Nile virus vector Culex pipiens (Diptera: Culicidae). Parasit Vectors. 2015;8:329.
Baker HG. Characteristics and modes of origin of weeds. In: Baker HG, Stebbins GL, editors. The genetics of colonizing species. New York, USA: Academic Press; 1965. p. 147–68.
Catford J, Jannson R, Nillson C. Reducing redundancy in invasion ecology by integrating hypotheses into a single theoretical framework. Divers Distrib. 2009;15:22–40.
Kolar CS, Lodge DM. Progress in invasion biology: predicting invaders. Trends Ecol Evol. 2001;16:199–204.
Rejmánek M, Richardson DM. What attributes make some plant species more invasive? Ecology. 1996;77:1655–61.
Richardson DM, Cowling RM. Why is mountain fynbos invasible and which species invade? In: van Wilgen BW, Richardson DM, Kruger FJ, van Hensbergen HJ, editors. Fire in South African Mountain Fynbos. Berlin, Germany: Springer-Verlag; 1992. p. 161–89.
Colautti RI, Ricciardi A, Grigorovich IA, Macisaac HJ. Is invasion success explained by the enemy release hypothesis? Ecol Lett. 2004;7:721–33.
Keane RM, Crawley MJ. Exotic plant invasions and the enemy release hypothesis. Trends Ecol Evol. 2002;17:164–70.
Joshi J, Vrieling K. The enemy release and EICA hypothesis revisited: incorporating the fundamental difference between specialist and generalist herbivores. Ecol Lett. 2005;8:704–14.
van Kleunen M, Weber E, Fischer M. A meta-analysis of trait differences between invasive and non-invasive plant species. Ecol Lett. 2010;13:235–45.
Pysek P, Richardson D. Traits associated with invasiveness in alien plants: where do we stand? In: Nentwig W, editor. Biological Invasions. Berlin Heidelberg: Springer; 2008. p. 97–125.
Crawley M, Harvey P, Purvis A. Comparative ecology of the native and alien floras of the British Isles. Phil Trans Royal Soc London B: Biol Sci. 1996;351:1251–9.
Williamson M, Fitter A. The characters of successful invaders. Biol Conserv. 1996;78:163–70.
Goodwin B, McAllister A, Fahrig L. Predicting invasiveness of plant species based on biological information. Conserv Biol. 1999;13:422–6.
Cadotte M, Lovett-Doust J. Ecological and taxonomic differences between native and introduced plants of southwestern Ontario. Ecoscience. 2001;8:230–8.
Lake JC, Leishman MR. Invasion success of exotic plants in natural ecosystems: the role of disturbance, plant attributes and freedom from herbivores. Biol Conserv. 2004;117:215–26.
Lloret F, Medail F, Brundu G, Camarda I, Moragues E, Rita J, Lambdon P, Hulme P. Species attributes and invasion success by alien plants on Mediterranean islands. J Ecol. 2005;93:512–20.
Cadotte M, Murray B, Lovett-Doust J. Evolutionary and ecological influences of plant invader success in the flora of Ontario. Ecoscience. 2006;13:388–95.
Pysek P, Sádlo J, Mandák B, Jarosík V. Czech alien flora and the historical pattern of its formation: what came first to central Europe? Oecologia. 2003;135:122–30.
Cavaleri M, Sack L. Comparative water use of native and invasive plants at multiple scales: a global meta-analysis. Ecology. 2010;91:2705–15.
Wilson CG, Flanagan CJ, Gillet JD. The phytophagous insect fauna of the introduced shrub Mimosa pigra in northern Australia and its relevance to biological control. Environ Entomol. 1990;19:776–84.
McClay AS, Palmer WA, Bennett FD, Pullen RK. Phytophagous arthropods associated with Parthenium hysterophorus (Asteraceae) in North America. Environ Entomol. 1995;24:796–809.
Liu H, Stiling P. Testing the enemy release hypothesis: a review and meta-analysis. Biol Inv. 2006;8:1535–45.
Gerber E, Krebs C, Murrell C, Moretti M, Rocklin R, Schaffner U. Exotic invasive knotweeds (Fallopia spp.) negatively affect native plant and invertebrate assemblages in European riparian habitats. Biol Conserv. 2008;141:646–54.
Perre P, Loyola RD, Lewinsohn TM. Insects in urban plants: contrasting the flower head feeding assemblages on native and exotic hosts. Urban Ecosyst. 2011;14:711–22.
Tanner RA, Varia S, Eschen R, Wood S, Murphy S, Gange AC. Impacts of an invasive non-native annual weed, Impatiens glandulifera, on above- and below-ground invertebrate communities in the United Kingdom. PLoS One. 2013;8(6):e67271.
Hengstum T, Hooftman DA, Oostermeijer JGB, Tienderen PH. Impact of plant invasions on local arthropod communities: a meta-analysis. J Ecol. 2014;102:4–11.
Dean WRJ, Anderson DM, Milton SJ, Anderson TA. Avian assemblages in Acacia and Prosopis drainage line woodland in the Kalahari, South Africa. J Arid Environ. 2002;51:1–19.
Scheiman DM, Bollinger EK, Johnson DH. Effects of leafy spurge infestation on grassland birds. J Wildlife Manage. 2003;67:115–21.
Lloyd JD, Martin TE. Reproductive success of chestnut-collared longspurs in native and exotic grassland. Condor. 2005;107:363–74.
Shanungu GK. Management of the invasive Mimosa pigra L. in Lochinvar National Park, Zambia. Biodiversity. 2009;10:56–60.
Nikbakhtzadeh M, Terbot J, Foster W. Survival value and sugar access of four east African plant species attractive to a laboratory strain of sympatric Anopheles gambiae (Diptera: Culicidae). J Med Entomol. 2016;53:1105–11.
Nyasembe VO, Cheseto X, Kaplan F, Foster WA, Teal PEA, Tumlinson JH, Borgemeister C, Torto B. The invasive American weed Parthenium hysterophorus can negatively impact malaria control in Africa. PLoS One. 2015;10(9):e0137836.
Pimentel D. Environmental and economic costs of the application of pesticides primarily in the United States. Environ Dev Sustain. 2005;7:229–52.
Annett R, Habibi HR, Hontela A. Impact of glyphosate and glyphosate-based herbicides on the freshwater environment. J App Toxic. 2014;34:458–79.
Julien MH, Griffiths MW, editors. Biological control of weeds. A world catalogue of agents and their target weeds, 4th ed. Wallingford, UK: CABI Publishing; 1998.
Winston RL, Schwarzländer M, Hinz HL, Day MD, Cock MJW, Julien MH, editors. Biological Control of Weeds: A World Catalogue of Agents and their Target Weeds, 5th ed. Morgantown, West Virginia, USA: USDA Forest Service, Forest Health Technology Enterprise Team, FHTET-2014-04; 2014. 838 pp.
Fowler SV. Trivial and political reasons for the failure of classical biological control of weeds: A personal view: In: Spencer NR, editor. Proceedings of the X International Symposium on Biological Control of Weeds. Bozeman, Montana, USA: Montana State University; 2000. p. 169–172.
Fowler SV, Syrett P, Hill RL. Success and safety in the biological control of environmental weeds in New Zealand. Austral Ecol. 2000;25:553–62.
Zimmermann HG, Moran VC, Hoffmann JH. Biological control in the management of invasive alien plants in South Africa, and the role of the Working for Water programme. S Afr J Sci. 2004;100:34–40.
McFadyen REC. Successes in biological control of weeds. In: Spencer NR, editor. Proceedings of the Xth International Symposium on Biological Control of Weeds. Bozeman: Montana State University; 2000. p. 3–14.
Markin GP, Lai PY, Funasaki GY. Status of biological control of weeds in Hawaii and implications for managing native ecosystems. In: Stone CP, Smith CW, Tunison JT, editors. Alien plant invasions in native ecosystems of Hawai'i: management and research. Honolulu, Hawaii: University of Hawaii Cooperative National Park Resources Studies Unit; 1992. p. 446–82.
Hoffmann JH. Biological control of weeds: the way forward, a South African perspective. In: Stirton CH, editor. Proceedings of the BCPC Symposium: Weeds in a Changing World. Farnham, UK: British Crop Protection Council Book no. 64; 1995. p. 77–89.
Annecke DP, Moran VC. Critical reviews of biological pest control in South Africa. 2. The prickly pear, Opuntia ficus-indica (L.) Miller. J Entomol Soc SA. 1978;41:161–88.
Moran VC, Zimmermann HG. Biological control of cactus weeds of minor importance in South Africa. Agric Ecosyst Environ. 1991;37:37–55.
De Lange WJ, van Wilgen BW. An economic assessment of the contribution of weed biological control to the management of invasive alien plants and to the protection of ecosystem services in South Africa. Biol Inv. 2010;12:4113–24.
Neuenschwander P, Julien MH, Center TD, Hill MP. Pistia stratiotes L. (Araceae). In: Muniappan R, Reddy GVP, Raman A, editors. Biological Control of Tropical Weeds Using Arthropods. Cambridge: Cambridge University Press; 2009. p. 332–352.
Coetzee JA, Hill MP, Byrne MJ, Bownes A. A review of the biological control programmes on Eichhornia crassipes (C. Mart.) Solms (Pontederiaceae), Salvinia molesta D. S. Mitch. (Salviniaceae), Pistia stratiotes L. (Araceae), Myriophyllum aquaticum (Vell.) Verdc. (Haloragaceae) and Azolla filiculoides Lam. (Azollaceae) in South Africa. Afr Entomol. 2011;19:451–68.
McConnachie AJ, de Wit MP, Hill MP, Byrne MJ. Economic evaluation of the successful biological control of Azolla filiculoides in South Africa. Biol Control. 2003;28:25–32.
Fernández Carrillo JL, Fernández Carrillo E, Alonso-Zarazaga MA. Primera cita de Stenopelmus rufinasus Gyllenhal, 1835 en la Península Ibérica (Coleoptera, Erirhinidae). Graellsia. 2005;61:139–40.
Oliver JD. A review of the biology of giant salvinia (Salvinia molesta Mitchell). J Aquat Plant Manage. 1993;31:227–31.
Cilliers CJ. Biological control of parrot's feather, Myriophyllum aquaticum (Vell.) Verdc. (Haloragaceae), in South Africa. Afri Entomol Mem. 1999;1:113–8.
Julien M, Sosa AJ, Chan R, Schooler S, Traversa G. Alternanthera philoxeroides (Martius) Grisebach-alligator weed. In: Julien M, McFadyen R, Cullen J, editors. Biological control of weeds in Australia. Melbourne, Australia: CSIRO Publishing; 2012. p. 43–51.
Van Klinken RD, Fichera G, Cordo H. Targeting biological control across diverse landscapes: the release, establishment and early success of two insects on mesquite (Prosopis) in rangeland Australia. Biol Control. 2003;26:8–20.
Van Klinken R. Prosopis spp. - mesquite. In: Julien M, McFadyen R, Cullen J, editors. Biological Control of Weeds in Australia. Melbourne, Australia: CISRO Publishing; 2012. p. 477–85.
Impson FAC, Kleinjan CA, Hoffmann JH, Post JA, Wood AR. Biological control of Australian Acacia species and Paraserianthes lophantha (Willd.) Nielsen (Mimosaceae) in South Africa. Afr Entomol. 2011;19:186–207.
Dhileepan K. Biological control of parthenium (Parthenium hysterophorus) in Australian rangeland translates to improved grass production. Weed Sci. 2007;55:497–501.
Greathead DJ. Benefits and risks of classical biological control. In: Hokkanen HMT, Lynch JM, editors. Biological Control: Benefits and Risks. Cambridge: Cambridge University Press; 1995. p. 55–63.
Higgins SI, Turpie JK, Costanza R, Cowling RM, Le Maitre DC, Marais C, et al. An ecological economic simulation model of mountain fynbos ecosystems: dynamics, valuation and management. Ecol Econ. 1997;22:155–69.
Lovell SJ, Stone SF, Fernandez L. The economic impacts of aquatic invasive species: a review of the literature. Agric Res Econ Rev. 2006;35:195–208.
De Wit M, Crookes D, van Wilgen BW. Conflicts of interest in environmental management: estimating the costs and benefits of a tree invasion. Biol Inv. 2001;3:167–78.
Brown P, Daigneault A. Cost-benefit analysis of managing the invasive African tulip tree (Spathodea campanulata) in the Pacific. Environ Sci Pol. 2014;39:65–76.
Zavaleta E. Valuing ecosystem services lost to Tamarix invasion in the United States. In: Mooney HA, Hobbs RJ, editors. Invasive species in a changing world. Washington, DC: Island Press; 2000. p. 261–300.
Zavaleta E. The economic value of controlling an invasive shrub. Ambio. 2000;29:462–7.
Van Wilgen BW, de Witt MP, Anderson HJ, le Maitre DC, Kotze IM, Ndala S, et al. Costs and benefits of biological control of invasive alien plants. Case studies from South Africa. S Afr J Sci. 2004;100:113–22.
Versfeld DB, Le Maitre DC, Chapman RA. Alien invading plants and water resources in South Africa: a preliminary assessment. WRC Report No. TT 99/98 and CSIR No. ENV/S-C 97154; 1998.
Page AR, Lacey KL. Economic Impact Assessment of Australian Weed Biological Control. Australia: CRC for Australian Weed Management Technical Series. 10; 2006.
De Groote H, Ajuonu O, Attignon S, Djessou R, Neuenschwander P. Economic impact of biological control of water hyacinth in southern Benin. Ecol Econ. 2003;45:105–17.
Gardner AM, Muturi EJ, Overmier LD, Allan BF. Large-scale removal of invasive honeysuckle decreases mosquito and avian abundance. EcoHealth. 2017;14(4):750–61.
Ma BO, Roitberg BD. The role of resource availability and state-dependence in the foraging strategy of blood-feeding mosquitoes. Evol Ecol Res. 2008;10:1111–30.
Many of the ideas elaborated on in this review were informed by and initially developed during group discussions at a workshop held in Naivasha, Kenya, in December of 2015. We are grateful to all the participants for sharing their expertise and insights: Christian Borgemeister, Alexandra Chaskopoulou, Susan Imbahale, Samson Kiware, Dhileepan Kunjithapatham, Theo Linders, Charles Mbogo, Günter Müller, James Mutunga, Ephantus Muturi, Patricia Neenan, Vincent Nyasembe, Erik Ochomo, Bernie Roitberg, Dan Strickman, Zain Syed, David Tchouassi, Baldwyn Torto, James Tumlinson, Holly Tuten, Winnie Nunda, Andrew Wannenburgh, and Rue-De Xue. Funding for the workshop was provided by the Bill & Melinda Gates Foundation and CABI.
This work was supported by the Bill and Melinda Gates Foundation (grant number OPP1135160).
Illinois Natural History Survey, University of Illinois, Urbana, Champaign, IL, 61820, USA
Christopher M. Stone
CABI Africa, 673 Limuru Road, Muthaiga, PO Box 633-00621, Nairobi, Kenya
Arne B.R. Witt
Fundación para el Estudio de Especies Invasivas (FuEDEI), Bolivar 1559, Hurlingham, Buenos Aires, Argentina
Guillermo Cabrera Walsh
Department of Evolution, Ecology and Organismal Biology, Ohio State University, Columbus, OH, 43210, USA
Woodbridge A. Foster
CABI, Bakeham Lane, Egham, Surrey, TW20 9TY, UK
Sean T. Murphy
STM and CS led on Introduction. CS, GCW and WAF led on section "Do invasive alien plants have a positive influence on the rate of malaria transmission?". AW, STM and GCW led on "The potential for invasive alien plants to be managed on a large scale". All authors contributed to the synthesis of the last sections. All authors read and approved the final manuscript.
Correspondence to Arne B.R. Witt.
Stone, C.M., Witt, A.B., Walsh, G.C. et al. Would the control of invasive alien plants reduce malaria transmission? A review. Parasites Vectors 11, 76 (2018) doi:10.1186/s13071-018-2644-8
Invasive alien plants
Plant-vector interactions
Nectar feeding
Larval habitat
Vector-borne disease
Mirsky's theorem
In mathematics, in the areas of order theory and combinatorics, Mirsky's theorem characterizes the height of any finite partially ordered set in terms of a partition of the order into a minimum number of antichains. It is named for Leon Mirsky (1971) and is closely related to Dilworth's theorem on the widths of partial orders, to the perfection of comparability graphs, to the Gallai–Hasse–Roy–Vitaver theorem relating longest paths and colorings in graphs, and to the Erdős–Szekeres theorem on monotonic subsequences.
The theorem
The height of a partially ordered set is defined to be the maximum cardinality of a chain, a totally ordered subset of the given partial order. For instance, in the set of positive integers from 1 to N, ordered by divisibility, one of the largest chains consists of the powers of two that lie within that range, from which it follows that the height of this partial order is $1+\lfloor \log _{2}N\rfloor $.
Mirsky's theorem states that, for every finite partially ordered set, the height also equals the minimum number of antichains (subsets in which no pair of elements are ordered) into which the set may be partitioned. In such a partition, every two elements of the longest chain must go into two different antichains, so the number of antichains is always greater than or equal to the height; another formulation of Mirsky's theorem is that there always exists a partition for which the number of antichains equals the height. Again, in the example of positive integers ordered by divisibility, the numbers can be partitioned into the antichains {1}, {2,3}, {4,5,6,7}, etc. There are $1+\lfloor \log _{2}N\rfloor $ sets in this partition, and within each of these sets, every pair of numbers forms a ratio less than two, so no number in one of these sets can be divisible by another number in the same set.
To prove the existence of a partition into a small number of antichains for an arbitrary finite partially ordered set, consider for every element x the chains that have x as their largest element, and let N(x) denote the size of the largest of these x-maximal chains. Then each set $N^{-1}(i)$, consisting of elements that have equal values of N, is an antichain, and these antichains partition the partial order into a number of antichains equal to the size of the largest chain. In his original proof, Mirsky constructs the same partition inductively, by choosing an antichain of the maximal elements of longest chains, and showing that the length of the longest chain among the remaining elements is reduced by one.
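The construction above translates directly into a short dynamic program. The following Python sketch (illustrative only, not part of the article; the function and variable names are ours) computes N(x) for a finite partial order given as a strictly-greater-than relation and groups the elements into antichains by their value of N:

from functools import lru_cache

def mirsky_partition(elements, greater_than):
    """Group elements by N(x), the size of the largest chain having x as its
    largest element; elements sharing a value of N form an antichain."""
    @lru_cache(maxsize=None)
    def longest_chain_ending_at(x):
        # predecessors of x: elements strictly below x in the partial order
        preds = [u for u in elements if x in greater_than[u]]
        return 1 + max((longest_chain_ending_at(u) for u in preds), default=0)

    levels = {}
    for x in elements:
        levels.setdefault(longest_chain_ending_at(x), []).append(x)
    return levels

# Divisibility order on {1, ..., 7}: greater_than[u] = proper multiples of u
elems = tuple(range(1, 8))
order = {u: tuple(v for v in elems if v != u and v % u == 0) for u in elems}
print(mirsky_partition(elems, order))
# {1: [1], 2: [2, 3, 5, 7], 3: [4, 6]} -- three antichains, matching the
# height 1 + floor(log2(7)) = 3, though a different (equally valid)
# partition than the {1}, {2,3}, {4,...,7} example given above.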
Related results
Dilworth's theorem
Mirsky was inspired by Dilworth's theorem, stating that, for every partially ordered set, the maximum size of an antichain equals the minimum number of chains in a partition of the set into chains. For sets of order dimension two, the two theorems coincide (a chain in the majorization ordering of points in general position in the plane is an antichain in the set of points formed by a 90° rotation from the original set, and vice versa) but for more general partial orders the two theorems differ, and (as Mirsky observes) Dilworth's theorem is more difficult to prove.
Mirsky's theorem and Dilworth's theorem are also related to each other through the theory of perfect graphs. An undirected graph is perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. In the comparability graph of a partially ordered set, a clique represents a chain and a coloring represents a partition into antichains, and induced subgraphs of comparability graphs are themselves comparability graphs, so Mirsky's theorem states that comparability graphs are perfect. Analogously, Dilworth's theorem states that every complement graph of a comparability graph is perfect. The perfect graph theorem of Lovász (1972) states that the complements of perfect graphs are always perfect, and can be used to deduce Dilworth's theorem from Mirsky's theorem and vice versa (Golumbic 1980).
Gallai–Hasse–Roy–Vitaver theorem
Mirsky's theorem can be restated in terms of directed acyclic graphs (representing a partially ordered set by reachability of their vertices), as the statement that there exists a graph homomorphism from a given directed acyclic graph G to a k-vertex transitive tournament if and only if there does not exist a homomorphism from a (k + 1)-vertex path graph to G. For, the largest path graph that has a homomorphism to G gives the longest chain in the reachability ordering, and the sets of vertices with the same image in a homomorphism to a transitive tournament form a partition into antichains. This statement generalizes to the case that G is not acyclic, and is a form of the Gallai–Hasse–Roy–Vitaver theorem on graph colorings and orientations (Nešetřil & Ossona de Mendez 2012).
Erdős–Szekeres theorem
It follows from either Dilworth's theorem or Mirsky's theorem that, in every partially ordered set of rs + 1 elements, there must exist either a chain of r + 1 elements or an antichain of s + 1 elements. Mirsky (1971) uses this observation, applied to a partial order of order dimension two, to prove the Erdős–Szekeres theorem that in every sequence of rs + 1 totally ordered elements there must exist either a monotonically increasing subsequence of r + 1 elements or a monotonically decreasing subsequence of s + 1 elements.
Extensions
Mirsky's theorem extends immediately to infinite partially ordered sets with finite height. However, the relation between the length of a chain and the number of antichains in a partition into antichains does not extend to infinite cardinalities: for every infinite cardinal number κ, there exist partially ordered sets that have no infinite chain and that do not have an antichain partition with κ or fewer antichains (Schmerl 2002).
References
• Dilworth, Robert P. (1950), "A Decomposition Theorem for Partially Ordered Sets", Annals of Mathematics, 51 (1): 161–166, doi:10.2307/1969503, JSTOR 1969503.
• Golumbic, Martin Charles (1980), "5.7. Coloring and other problems on comparability graphs", Algorithmic Graph Theory and Perfect Graphs, New York: Academic Press, pp. 132–135, ISBN 0-12-289260-7, MR 0562306.
• Lovász, László (1972), "Normal hypergraphs and the perfect graph conjecture", Discrete Mathematics, 2 (3): 253–267, doi:10.1016/0012-365X(72)90006-4.
• Mirsky, Leon (1971), "A dual of Dilworth's decomposition theorem", American Mathematical Monthly, 78 (8): 876–877, doi:10.2307/2316481, JSTOR 2316481.
• Nešetřil, Jaroslav; Ossona de Mendez, Patrice (2012), "Theorem 3.13", Sparsity: Graphs, Structures, and Algorithms, Algorithms and Combinatorics, vol. 28, Heidelberg: Springer, p. 42, doi:10.1007/978-3-642-27875-4, ISBN 978-3-642-27874-7, MR 2920058.
• Schmerl, James H. (2002), "Obstacles to extending Mirsky's theorem", Order, 19 (2): 209–211, doi:10.1023/A:1016541101728, MR 1922918, S2CID 26514679.
Quantifying Popularity in Real-Time for High-Volume Websites, Part 1
June 4, 2015 11:02 pm by P.I.E. Staff
This is the first half of a two-part blog post series. Part one covers the theory, part two is an implementation.
Because there is no accepted industry standard algorithm for determining popularity, content publishers can afford to get creative in their assessments. Sometimes, however, these algorithms can be trivially exploited by spammers to deliver low-quality content to high-traffic areas of a website (e.g. the front page of the website).
What follows is Paragon Initiative Enterprises' user-driven popularity algorithm that is resilient against fraudulent voting. There is no patent for this algorithm; instead, we release it to the public domain. We hope that, after being refined and studied, it can be put to use for the public good.
What is Popularity Anyway?
Although your definition might differ, when we're discussing popularity we mean two things:
Popularity means it is well-liked. This means that on any given scale (0 to 5 stars, 0 to 10 out of 10, 0 to 100 points, etc.), popular items are ranked higher.
Popularity also means recency. Something that was well-liked 10 years ago might not be relevant compared to something scoring high this very minute.
In practice, this means any attempt to rank popularity must have two qualities:
The algorithm must, within reason, allow a community to judge what content is better than other content. From a security perspective, this means that an army of fake accounts should not be able to easily tilt the scale of apparent public opinion.
The algorithm must prioritize the new over the old.
Let's begin by designing an abuse-resistant algorithm for ranking content by a community's perception of its quality, then let's make one adjustment to fairly prioritize the latest and greatest over the tried and true.
To Judge Popularity, First Assess Quality
We'll start with an algorithm used by IMDB for their Top 250 movies, and it looks like this:
$$W = \dfrac{Rv + Cm}{v+m}$$
$R$ = the average score for a particular item
$v$ = the number of users who voted for a particular item
$C$ = the average of all the votes in the database
$m$ = the minimum number of votes to qualify
$W$ = the weighted rating (what we're solving for)
This has an interesting property: The Weighted rating ($W$) for a particular item is dragged closer to mediocrity ($C$) until it achieves a lot of votes that push it further away from the average.
In mathematics terms:
As $v$ approaches infinity, $W$ approaches $R$.
As $v$ approaches zero, $W$ approaches $C$.
The IMDB algorithm is quite powerful: In IMDB's case, it prevents new movies with 30,000 votes exclusively for 10 out of 10 from scoring higher than a movie with over 500,000 votes (most of which are 10/10).
However, in absence of outside mitigations or manual intervention, this algorithm invites the possibility of automated fraudulent voting. All votes are treated equally, whether or not they are legitimate (although in IMDB's case, they only count active users' votes).
What if, instead, we allowed the community to self-select the users whose votes should count more?
The Karmatic Quality Formula
Popular news websites such as Hacker News and Reddit employ a karma system, which in simple terms tallies all of the upvotes and downvotes a user has received from their peers.
Since the karma for any given user is known, and the average karma for all active users is knowable, we can therefore weigh each user's votes by a simple ratio of the two:
$$ r_i = \dfrac{k_i}{\bar k} $$
Now that we have a weight for each user's votes, let's modify the IMDB Algorithm above to include karma ratios instead of a simple average.
$$ K = \frac{\sum_0^{v-1}{{s_i}{r_i}}}{\sum_0^{v-1}{r_i}} $$
$$ W = \dfrac{Kv + Cm}{v+m} $$
$s_i$ = a particular score from a particular vote
$r_i$ = the user's karma ratio
$K$ = weighted average
Quick Example
Let's say we have 10,000 active users with an average karma of 100. An attacker, who wants to push something terrible to the front page (e.g. to profit from ad impressions), controls 1,000 fake accounts (karma == 1) and has them all vote 10/10 on their spam submission.
50 legitimate average users (karma == 100) rate the spam submission 0/10.
What's the result of $K$? About 0.197 out of 10.
What is the result of $W$ if $ m = 100 $ and $ C = 6 $? About 5.178 out of 10.
Despite controlling 10% of the population, a handful of high-karma users (as decided by their peers) can effectively demote spam submissions just by giving it a low score.
Combined with CAPTCHAs and other spam-fighting solutions to make automated account registration more difficult, we can greatly reduce the potential impact of a successful spam campaign. But we still haven't quantified popularity.
An Algorithm for Popularity
The Karmatic Quality Formula can provide a reasonable estimate of the quality of some piece of content, but we're interested in what's good right this very second.
Our solution is straightforward: $K_p$ is similar to our calculation for $K$ above, except we also multiply each $ s_i k_i $ by one more term to add an exponential decay:
$$ K_p = \frac{\sum_0^{v-1}{{s_i}{r_i}{e^{D(t_i - t_{now})}}}}{\sum_0^{v-1}{r_i}} $$
$D$ = the magnitude of the exponential decay (a constant)
$t_{now}$ = the current moment in time (e.g. UNIX timestamp)
$t_i$ = the moment in time a particular vote was cast
$K_p$ = the karmatic rating with a decay, for the purpose of calculating popularity
Since $t_i$ will always be less than $t_{now}$ and the decay constant $D$ is positive, the result of $e^{D(t_i - t_{now})}$ will always be less than or equal to 1.0.
Finally we can determine the popularity of a particular article, $P$:
$$ P = \dfrac{{K_p}{v} + Cm}{v+m} $$
$K_p$ = karma-weighted average, decayed for popularity
$P$ = the popularity score
Now we have a concise mathematical definition for popularity. In part two, we will implement this algorithm in PostgreSQL. | CommonCrawl |
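In the meantime, here is a minimal Python sketch of the formulas above (the function and variable names are ours and purely illustrative; this is not the PostgreSQL implementation that part two will cover):

import math
import time

def karma_ratio(user_karma, mean_karma):
    # r_i = k_i / k-bar
    return user_karma / mean_karma

def decayed_karmatic_average(votes, mean_karma, decay, now=None):
    # votes: list of (score, user_karma, timestamp) tuples
    now = time.time() if now is None else now
    num = sum(s * karma_ratio(k, mean_karma) * math.exp(decay * (t - now))
              for s, k, t in votes)
    den = sum(karma_ratio(k, mean_karma) for _, k, _ in votes)
    return num / den if den else 0.0

def popularity(votes, mean_karma, decay, site_average, min_votes, now=None):
    # P = (K_p * v + C * m) / (v + m)
    k_p = decayed_karmatic_average(votes, mean_karma, decay, now)
    v = len(votes)
    return (k_p * v + site_average * min_votes) / (v + min_votes)

# Three weighted votes on a 0-10 scale, with C = 6 and m = 100
now = time.time()
votes = [(10, 1, now - 60), (0, 100, now - 30), (8, 250, now - 5)]
print(popularity(votes, mean_karma=100, decay=0.0001, site_average=6,
                 min_votes=100, now=now))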
How do I choose between different logistic regression models?
In multiple linear regressions we have R-squared to summarise the fit of a model and thereby inform on choice.
SPSS produces two R-squared measures (Nagelkerke and Cox-Snell) for binary logistic regressions but the use of these is controversial (see below). They compare the fit of the model with the predictors to one without them (just like fit indices in structural equation models, for those familiar with this area).
What follows is an extract of an e-mail on the choice of R-squared in logistic regression from Dietrich Alte. The recommended referenced journal is available from the University library.
Menard (2000) referred to below suggests using
$$R^2 = \frac{\mbox{Difference between -2 log likelihoods of null model and model with covariates of interest}}{\mbox{-2 log likelihood of null model}} $$
where the null model is a binary logistic regression with a single predictor (covariate) consisting of a column of 1's. To fit this particular model you first need to click the options button and ask for no constant to be in the regression. SPSS and other packages routinely output -2 log likelihoods which are indices of particular model goodness-of-fits.
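Outside SPSS the same quantity can be computed directly from the two log-likelihoods. For example, in Python with statsmodels (illustrative code on made-up data; the result attributes llf and prsquared are statsmodels', everything else is our own naming):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                            # two predictors
y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)     # binary outcome

fitted = sm.Logit(y, sm.add_constant(X)).fit(disp=0)     # model of interest
null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0)     # column of 1's only

# Menard's R^2: difference in -2 log likelihoods over -2LL of the null model
r2_l = ((-2 * null.llf) - (-2 * fitted.llf)) / (-2 * null.llf)
print(r2_l)
print(fitted.prsquared)   # statsmodels' McFadden pseudo R^2, the same ratio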
The first measure (Cox-Snell) is used to assess models where all the independent variables are continuous and the second (Nagelkerke) is used where there are one or more binary independent variables in the model.
These statistics have at their core the ratio of the likelihood function of the fitted model to the likelihood function of an intercept-only model. What they are actually measuring is the proportion of change in the likelihood function of the specified model vs. no model at all.
I don't like these statistics very much, and like them even less because their names suggest they are analogous to the "variance explained" measures used in linear models, but they are actually measuring something else.
There was a very good article in the Feb 2000 issue of The American Statistician by Scott Menard called "Coefficients of Determination for Multiple Logistic Regression Models," which may be of use.
Note that R^2 cannot be used to compare two logistic models if one or more suffers from overdispersion. In this case information criteria should be used (see here).
Scott Menard, "Coefficients of Determination for Multiple Logistic Regression Analysis," The American Statistician 54:17-24, 2000.
Mixed convection boundary-layer flow past a vertical flat plate with a convective boundary condition
M. M. Rahman, J. H. Merkin*, I. Pop
The mixed convection boundary-layer flow on a vertical surface with an applied convective boundary condition is considered. Specific forms for the outer flow and surface heat transfer parameter are taken to reduce the problem to a similarity system, which is seen to involve three parameters: m, the exponent of the outer flow; λ, the mixed convection parameter; and B, the Biot number, as well as the Prandtl number. The value $m = \frac{1}{5}$ is found to be a transitional case, with different behaviour depending on whether $m > \frac{1}{5}$ or $m < \frac{1}{5}$. For $m > \frac{1}{5}$, there is a critical value $\lambda_c$ with solutions only for $\lambda \ge \lambda_c$, a range of values of λ where there are dual solutions, and the upper solution branch continuing into aiding flow. For $0 < m < \frac{1}{5}$, there is a value of B where there is a change in behaviour from that seen for $m > \frac{1}{5}$ to solutions terminating at a finite value of λ > 0 and continuing to large values of |λ|. For m = 0 (uniform outer flow) only this latter behaviour is seen to arise. When m < 0, though still in the range for which there is a solution to the Falkner–Skan (λ = 0) problem, the behaviour is similar to that seen for $m > \frac{1}{5}$.
Acta Mechanica
Rahman, M. M., Merkin, J. H., & Pop, I. (2015). Mixed convection boundary-layer flow past a vertical flat plate with a convective boundary condition. Acta Mechanica, 226(8), 2441-2460. https://doi.org/10.1007/s00707-015-1334-2
\begin{document}
\title{Rare event asymptotics for exploration processes for random graphs} \author{Shankar Bhamidi, Amarjit Budhiraja, Paul Dupuis, Ruoyu Wu} \maketitle
\begin{abstract} \noindent Large deviations for random graph models has been a topic of significant recent research activity. \textcolor{black}{Much work} in this area is focused on the class of \emph{dense} random graph models (number of edges in the graph scale as $n^{2}$, where $n$ is the number of vertices) where the theory of graphons has emerged as a principal tool in the study of large deviation properties. \textcolor{black}{These tools do not give a good approach to
large deviation problems for random graph models in the sparse regime.} The aim of this paper is to \textcolor{black}{study} an approach for large deviation problems in this regime by establishing Large Deviation Principles (LDP) on suitable path spaces for certain exploration processes of the associated random graph sequence. Exploration processes are an important tool in the study of sparse random graph models and have been used to understand detailed asymptotics of many functionals of sparse random graphs, such as component sizes, surplus, deviations from trees, etc. In the context of rare event asymptotics of interest here, the point of view of exploration process transforms a large deviation analysis of a static random combinatorial structure to the study of a small noise LDP for certain stochastic dynamical systems with jumps.
Our work focuses on one particular class of random graph models, namely the configuration model; however the general approach of using exploration processes for studying large deviation properties of sparse random graph models has broader applicability. The goal is to study asymptotics of probabilities of non-typical behavior in the large network limit. The first key step for this is to establish a LDP for an exploration process associated with the configuration model. A suitable exploration process here turns out to be an infinite dimensional Markov process with transition probability rates that diminish to zero in certain parts of the state space. Large deviation properties of such Markovian models is challenging due to poor regularity behavior of the associated local rate functions. Our proof of the LDP relies on a representation of the exploration process in terms of a system of stochastic differential equations driven by Poisson random measures and variational formulas for moments of nonnegative functionals of Poisson random measures. Uniqueness results for certain controlled systems of deterministic equations play a key role in the analysis. Next, using the rate function in the LDP for the exploration process we formulate a calculus of variations problem associated with the asymptotics of component degree distributions. The second key ingredient in our study is a careful analysis of the infinite dimensional Euler-Lagrange equations associated with this calculus of variations problem. Exact solutions of these systems of nonlinear differential equations are identified which then provide explicit formulas for decay rates of probabilities of non-typical component degree distributions and related quantities. \newline
\noindent
\noindent \textbf{AMS 2010 subject classifications:} 60F10, 60C05, 05C80, 90B15.\newline
\noindent \textbf{Keywords:} large deviation principle, random graphs, sparse regime, diminishing rates, Euler-Lagrange equations, calculus of variations problems, configuration model, branching processes, variational representations, Poisson random measures, exploration process, singular dynamics, giant component. \end{abstract}
\section{Introduction} \label{sec:intro}
Large deviations for random graph models has been a topic of significant recent research activity (see, e.g., \cite{chatterjee2011large,chatterjee2016introduction,bordenave2015large,puhalskii2005stochastic,o1998some,choi2013large, DhaSen}). \textcolor{black}{Much work} in this area is focused on the class of \emph{dense} random graph models (number of edges in the graph scale like $n^{2}$, where $ n $ is the number of vertices). In this regime, the theory of graphons obtained under dense graph limits \cite{borgs2008convergent,borgs2012convergent,lovasz2012large, DhaSen} has emerged as a key tool in the study of large deviation asymptotics. In contrast to the above papers, the focus in the current work is on a \emph{sparse} random graph setting where the average degree of a typical vertex is $O(1)$ so that the number of edges in the graph are $O(n)$ as $n\rightarrow \infty $. \textcolor{black}{In this regime tools based on the theory of graphons do not give a good approach to
the study of large deviation problems.}
The goal of this work is to \textcolor{black}{study an approach for large deviation problems} in the sparse regime by establishing large deviation principles for a class of stochastic dynamical systems, known as the \emph{exploration processes}, that play a central role in the study of sparse random graphs. \textcolor{black}{The idea of using stochastic processes to study large deviation problems for static combinatorial objects has been used previously in several works, e.g.\ in \cite{dupnuzwhi} for studying urn models, in \cite{puhalskii2005stochastic} for studying Erd\H{o}s-R\'enyi random graphs, in \cite{choset} in the study of preferential attachment model, and in \cite{puhalskii2013number} for another type of attachment model.} Our work focuses on one particular class of random graph models, namely the configuration model; however similar techniques are expected to be useful for other sparse random graph models as well where tractable dynamic constructions via exploration processes are available.
The \emph{configuration model} refers to a sequence of random graphs with number of vertices approaching infinity and the degree distribution converging to a pre-specified probability distribution $\boldsymbol{p}= \{p_k\}_{k\in {\mathbb{N}}}$ on the set of non-negative integers. \textcolor{black}{This random graph model is a basic object in probabilistic combinatorics; see \cite{molloy1995critical} where sufficient conditions for the existence of a large connected component in a configuration model were given, which then lead to these types of random graphs being used as models for various real world systems, see e.g.\ \cite{newman2001random} and \cite{Hofstad2016} and references therein for a comprehensive survey of rigorous results on this model (see also \cite{bender1978asymptotic,bollobas1980probabilistic} where constructions similar to the configuration model were first used to count graphs with a prescribed degree sequence). This model has become one of the standard workhorses in the study of networks in areas such as epidemiology (see e.g.\ \cite{newman2002spread} where epidemics on graphs with prescribed degree distribution are considered) and community detection (where the configuration model forms the basis of one of the most well known techniques called modularity optimization \cite{newman2006modularity}, \cite[Section 6]{fortunato2010community}). In such applications, after observing a real world system, the configuration model with the same degree distribution is used as a ``baseline'' model to compare against the real world system to judge the existence of atypical events. Thus an important question in such random graph models is to estimate probabilities of atypical structural behaviors, particularly when the system size is large.
}
In this paper, we are interested in probabilities of events $E^{n,{\varepsilon}}(\boldsymbol{q})$ associated with the configuration model random graph $G_n$ on $n$ vertices, described as \begin{align}
E^{n,{\varepsilon}}(\boldsymbol{q}) &= \{\mbox{there exists a component in } G_n \mbox{ with } m_k \mbox{ degree $k$ vertices, where } \nonumber\\
&\quad \quad\quad\quad\quad \quad m_k \in [n(q_k-{\varepsilon}), n(q_k+{\varepsilon})],\; k \in {\mathbb{N}}\},\label{mainevent} \end{align} and where $\boldsymbol{q} = (q_k)_{k\in {\mathbb{N}}}$ is such that $0 \le q_k \le p_k$ for every $k$. One of our main results (see Theorem \ref{thm:ldg_degree_distribution}) shows that, under conditions, for large $n$ and small ${\varepsilon}$ \begin{equation}\label{eq:mainasymp}
P\left\{ E^{n,\varepsilon}({\boldsymbol{q}})\right\} \approx \exp\left\{-n\left[H(\boldsymbol{q}) + H(\boldsymbol{p}-\boldsymbol{q}) - H(\boldsymbol{p})\right]\right\},
\end{equation} where for a nonnegative sequence $\boldsymbol{r} = (r_k)_{k\in {\mathbb{N}}}$, \begin{equation}\label{eq:hdefn} H(\boldsymbol{r})\doteq \sum_{k=1}^{\infty }{r}_{k}\log { r}_{k}-\left( \frac{1}{2}\sum_{k=1}^{\infty }k{r}_{k}\right) \log \left( \frac{1}{2}\sum_{k=1}^{\infty }k{r}_{k}\right). \end{equation} This result in particular gives asymptotics for probabilities of observing a component of a given size (see Remark \ref{rem:conjsize}) and explicit formulas for rates of decay of probabilities of observing a $D$-regular component of a given size in $G_n$ (see Corollaries \ref{cordreg} and \ref{cordregsubgraph}); see also Conjectures \ref{conj:Dreg-multi} and \ref{conj:Dreg-max} on large deviation asymptotics for the size of the largest component in a $D$-regular graph.
\textcolor{black}{In order to prove Theorem \ref{thm:ldg_degree_distribution} we first study a more general and abstract problem of large deviations for a certain class of stochastic dynamical systems in Theorem \ref{thm:main-ldp}.} The starting point is a dynamical construction of the configuration model given through a discrete time infinite dimensional Markov chain referred to as the {\em exploration process} (cf. \cite{MolloyReed1998size,Janson2009new}). As the name suggests, the exploration process is constructed by first appropriately selecting a vertex in the graph and then exploring the neighborhood of the chosen vertex until the component of that vertex is exhausted. After this one moves on to another `unexplored' vertex resulting in successive exploration of components of the random graph until the entire graph has been explored. The stochastic process corresponding to one particular coordinate of this infinite dimensional Markov chain encodes the number of edges in any given component through the length of its excursions away from zero. The remaining coordinates of this Markov chain can be used to read off the number of vertices of a given degree in any given component of the random graph. See Section \ref{sec:eea} for a precise description of the state space of this Markov chain. The exploration process can be viewed as a small noise stochastic dynamical system in which the transition steps are of size $O(1/n)$ with $n$ denoting the number of vertices in the random graph. A key ingredient in the proof of Theorem \ref{thm:ldg_degree_distribution}, is a Large Deviation Principle (LDP) for an infinite dimensional jump-Markov process that can be viewed as a continuous time analogue of the exploration process. This result, given in Theorem \ref{thm:main-ldp}, is our second main result. As other applications of this theorem, we recover a well known result on the asymptotics of the largest component in the configuration model due to Molloy and Reed \cite{MolloyReed1998size} and Janson and Luczak \cite{Janson2009new}, and also present a result (whose proof is omitted) on asymptotics of scaled number of components in a configuration model (see Remark \ref{rem:remnumcopm}). The rate function in the LDP given in Theorem \ref{thm:main-ldp} can be used to formulate a calculus of variations problem associated with the event $E^{n,{\varepsilon}}(\boldsymbol{q})$ described in \eqref{mainevent}. This problem is at the heart of our analysis and by studying the corresponding infinite dimensional system of coupled Euler-Lagrange equations we construct an explicit minimizer in this optimization problem (see Lemma \ref{lem:minimizer-verify-general}). The cost associated with the minimizer is the exponent on the right side of \eqref{eq:mainasymp} and provides the exact expression for the decay rate for the probability of interest.
\subsection{Proof techniques and overview of contributions} In addition to the study of the asymptotics of the configuration model, one of the main motivations for working on these sets of problems was the development of new techniques for handling large deviations for processes with ``degeneracies.'' We will give an overview of these contributions in this section.
The exploration process associated with the $n$-th random graph (with $n$ vertices) in the configuration model is described as an $\mathbb{R}^{\infty }$-valued `small noise' Markov chain $\{\boldsymbol{X}^{n}(j)\}_{j\in \mathbb{N}_{0}}$. Under our assumptions, there exists a $N\in \mathbb{N}$ such that for all $ j\geq nN$, $\boldsymbol{X}^{n}(j)=\boldsymbol{0}$ for all $n\in \mathbb{N}$. In order to study large deviations for such a sequence, one usually considers a sequence of continuous times processes, or equivalently $\mathbb{C}([0,N]: \mathbb{R}^{\infty })$-valued random variables, obtained by a linear interpolation of $\{\boldsymbol{X}^{n}(j)\}_{j\in \mathbb{N}_{0}}$ over intervals of length $1/n$. A large deviations analysis of such a sequence in the current setting is challenging due to `diminishing rates' feature of the transition kernel (see \eqref{eq:aj-dynamics}) which in turn leads to poor regularity of the associated local rate function. By diminishing rates we mean the property that probabilities of certain transitions, although non-zero, can get arbitrarily close to $0$ as the system becomes large. In the model we consider, the system will go through phases where some state transitions have very low probabilities, that are separated by phases of `regular behavior,' many times. In terms of the underlying random graphs the first type of phase corresponds to time periods in the dynamic construction that are close to the completion of exploration of one component and beginning of exploration of a new component. The poor regularity of the local rate function makes standard approximations of the near optimal trajectory that are used in proofs of large deviation principles for such small noise systems hard to implement. In order to overcome these difficulties we instead consider a different continuous time process associated with the exploration of the configuration model. This continuous time process is obtained by introducing i.i.d.\ exponential random times before each step in the edge exploration Markov chain. A precise description of this process is given in terms of stochastic differential equations (SDE) driven by a countable collection of Poisson random measures (PRM), where different PRMs are used to describe the different types of transitions (see Section \ref {sec:model}). Although the coefficients in this SDE are discontinuous functions, their dependence on the state variable is much more tractable than the state dependence in the transition kernel of the discrete time model.
Large deviations for small noise SDE driven by Brownian motions have been studied extensively both in finite and infinite dimensions. An approach based on certain variational representations for moments of nonnegative functionals of Brownian motions and weak convergence methods \cite {BoueDupuis1998variational, BudhirajaDupuis2000variational} has been quite effective in studying a broad range of such systems (cf. references in \cite{BudhirajaDupuisMaroulas2011variational}). A similar variational representation for functionals of a Poisson random measure has been obtained in \cite{BudhirajaDupuisMaroulas2011variational}. There have been several recent papers that have used this representation for studying large deviation problems (see, e.g., \cite{BudhirajaChenDupuis2013large,BudhirajaDupuisGanguly2015moderate, BudhirajaWu2017moderate}). This representation is the starting point of the analysis in the current work as well, however the application of the representation to the setting considered here leads to new challenges. One key challenge that arises in the proof of the large deviations lower bound can be described as follows. The proof of the lower bound based on variational representations and weak convergence methods, for systems driven by Brownian motions, requires establishing unique solvability of controlled deterministic equations of the form \begin{equation} dx(t)=b(x(t))dt+\sigma (x(t))u(t)dt,\;x(0)=x_{0}, \label{eq:condifdet} \end{equation} where $u\in L^{2}([0,T]:\mathbb{R}^{d})$ (space of square integrable functions from $[0,T]$ to $\mathbb{R}^{d}$) is a given control. It turns out that the conditions that are typically introduced for the well-posedness of the original small noise stochastic dynamical system of interest (e.g.\ Lipschitz properties of the coefficients $b$ and $\sigma $) are enough to give the wellposedness of \eqref{eq:condifdet}. For example when the coefficients are Lipschitz, one can use a standard argument based on Gronwall's lemma and an application of the Cauchy-Schwarz inequality to establish the desired uniqueness property. In contrast, when studying systems driven by a PRM one instead needs to establish wellposedness of controlled equations of the form \begin{equation} x(t)=x(0)+\int_{[0,t]\times S}1_{[0,g(x(s))]}(y)\varphi (s,y)ds\,m(dy),\;0\leq t\leq T, \label{eq:contdetpoi} \end{equation} where $S$ is a locally compact metric space, $m$ a locally finite measure on $S$, $g \colon \mathbb{R}\rightarrow \mathbb{R}_+$ is a measurable map and the control $\varphi $ is a nonnegative measurable map on $[0,T]\times S$ which satisfies the integrability property \begin{equation*} \int_{\lbrack 0,T]\times S}\ell (\varphi (s,y))ds\,m(dy)<\infty , \end{equation*} where $\ell (x)=x\log x-x+1$. If $\varphi $ were uniformly bounded and $g$ sufficiently regular (e.g., Lipschitz) uniqueness follows once more by a standard Gronwall argument. However, in general if $g$ is not Lipschitz or $ \varphi $ is not bounded (both situations arise in the problem considered here, see e.g.\ \eqref{eq:psi}-\eqref{eq:phi_k}) the problem of uniqueness becomes a challenging obstacle. One of the novel contributions of this work is to obtain uniqueness results for equations of the form \eqref{eq:contdetpoi} when certain structural properties are satisfied. 
The setting we need to consider is more complex than the one described above in that there is an infinite collection of coupled equations (one of which corresponds to the Skorokhod problem for one dimensional reflected trajectories) that describe the controlled system. However the basic difficulties can already be seen for the simpler setting in \eqref{eq:contdetpoi}. Although for a general $\varphi $ the unique solvability of equations of the form \eqref{eq:contdetpoi} may indeed be intractable, the main idea in our approach is to argue that one can perturb the original $\varphi $ slightly so that the solution $x(\cdot )$ stays the same and moreover this $x(\cdot)$ is the unique solution of the corresponding equation with the perturbed $\varphi $. Furthermore the cost difference between the original and perturbed $\varphi $ is appropriately small. The uniqueness result given in Lemma \ref {lem:uniqueness} is a key ingredient in the proof of the lower bound given in Section \ref{sec:lower}. The proof of the upper bound, via the weak convergence based approach to large deviations relies on establishing suitable tightness and limit characterization results for certain controlled versions of the original small noise system. This proof is given in Section \ref{sec:upper}.
The rate function in the LDP for the exploration process in Theorem \ref{thm:main-ldp} is given as a variational formula on an infinite dimensional path space (see \eqref{eq:rate_function}). Getting useful information from such an abstract formula in general seems hopeless, however, as we show in this work, for the event considered in \eqref{mainevent}, the variational formula can be used to extract much more explicit information. We begin by observing (see \eqref{eq:enverps}) that the event $E^{n,{\varepsilon}}(\boldsymbol{q})$ of interest can be written explicitly in terms of the exploration process. Using this and the LDP in Theorem \ref{thm:main-ldp} one can provide an upper bound for the probability of the event in terms of a quantity $I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q}))$ which can be interpreted (see Section \ref{sec:cal} for a precise definition) as the minimal cost for certain controlled analogues of the exploration process to move from the state $(0,\boldsymbol{p})$ to $(0,\boldsymbol{p} - \boldsymbol{q})$ in $\tau$ units of time, where $\tau = \frac{1}{2} \sum_{k=1}^\infty kq_k$ (see Lemmas \ref{lem:puhalskii-upper-bound} and \ref{lem:puhalskii-upper-bound-improvement}). We then show that this deterministic control problem, which can be reformulated as a calculus of variations problem, admits an explicit solution. This solution is given in Construction \ref{cons:cont} and its optimality is studied in Lemma \ref{lem:minimizer-verify-general}. Using this optimality property, the complementary lower bound for the probability of interest is given in Lemma \ref{lem:puhalskii-lower-bound}. Lemmas \ref{lem:minimizer-verify-general} and \ref{lem:puhalskii-lower-bound} form the technical heart of the proof of Theorem \ref{thm:ldg_degree_distribution} and rely on a detailed and careful analysis of the infinite dimensional Euler-Lagrange equations associated with the calculus of variations problem.
\subsection{Organization of the paper}
The paper is organized as follows. In Section \ref{sec:assuandres} we introduce the configuration model, our main assumptions, and our first main result, Theorem \ref{thm:ldg_degree_distribution}, on asymptotics of probabilities of $E^{n,{\varepsilon}}(\boldsymbol{q})$. We record some consequences of these results for $D$-regular graphs and subgraphs in Corollaries \ref{cordreg} and \ref{cordregsubgraph}. Remark \ref{rem:conjsize} discusses another application of this result to the study of asymptotics of probabilities of components of a given size. In Section \ref{sec:eea} we review the edge-exploration algorithm (EEA) from \cite{MolloyReed1998size,Janson2009new} that gives a dynamical construction of the configuration model. For reasons discussed previously, the large deviation analysis of the discrete time EEA presents several technical obstacles and thus in Section \ref{sec:model} we introduce a closely related continuous time jump-Markov process $(\boldsymbol{X}^n, Y^n)$ with values in $({\mathbb{R}}\times {\mathbb{R}}_+^{\infty})\times {\mathbb{R}}$ which is mathematically more tractable. Sections \ref{sec:rate_def} and \ref{sec:main-result} present our second main result, Theorem \ref{thm:main-ldp}, that gives a large deviation principle for the sequence $(\boldsymbol{X}^n, Y^n)_{n \in {\mathbb{N}}}$ in a suitable infinite dimensional path space. In Section \ref{sec:main-result} we also note two side consequences of Theorem \ref{thm:main-ldp}. The first, given in Section \ref{sec:LLN} is a law of large numbers (LLN) result that recovers well known results of Janson and Luczak (2009) on the asymptotics of the largest component in the configuration model. The second, discussed in Remark \ref{rem:remnumcopm}, gives a LDP for the scaled number of components in $G_n$ as $n\to \infty$.
Section \ref{sec:repnWCCP} presents the variational representation from \cite {BudhirajaDupuisMaroulas2011variational} for functionals of PRM that is the starting point of our proofs. Some tightness and characterization results that are used both in the upper and lower bound proofs are also given in this section. Next, Section \ref{sec:upper} gives the proof of the large deviation upper bound whereas the proof of the lower bound is given in Section \ref{sec:lower}. Finally, Section \ref{sec:rate_function} establishes the compactness of level sets of the function $I_{T}$ defined in Section \ref{sec:main-result}, thus proving that $I_{T}$ is a rate function. Together, results of Sections \ref{sec:upper}, \ref{sec:lower} and \ref{sec:rate_function} complete the proof of Theorem \ref{thm:main-ldp}.
We next turn to the proof of Theorem \ref{thm:ldg_degree_distribution} which is given in Sections \ref{sec:cal}-\ref{sec:pfsect4}. First in Section \ref{sec:cal} we introduce a calculus of variations problem that is central to the proof of Theorem \ref{thm:ldg_degree_distribution}. We also introduce (see Construction \ref{cons:cont}) a candidate minimizer in this optimization problem and present several technical results (Lemmas \ref{lem:I1I2L}--\ref{lem:minimizer-verify-general}) that are needed for the proof of the optimality property of the candidate minimizer. Using results of Section \ref{sec:cal} the proof of Theorem \ref{thm:ldg_degree_distribution} is completed in Section \ref{sec:pf_LDP_degree}. Finally, Section \ref{sec:pfsect4} contains the proofs of technical lemmas from Section \ref{sec:cal} whereas Section \ref{sec:examples} presents the proof of the LLN results from Section \ref{sec:LLN}.
\subsection{Notation}
The following notation will be used. For a Polish space $\mathbb{S}$, denote the corresponding Borel $\sigma$-field by $\mathcal{B}(\mathbb{S})$.
Denote by $\mathcal{P}(\mathbb{S})$ (resp.\ $\mathcal{M}(\mathbb{S})$) the space of probability measures (resp. finite measures) on $\mathbb{S}$, equipped with the topology of weak convergence. Denote by $\mathbb{C}_b( \mathbb{S})$ (resp.\ $\mathbb{M}_b(\mathbb{S})$) the space of real bounded and continuous functions (resp.\ bounded and measurable functions). For $f
\colon \mathbb{S} \to \mathbb{R}$, let $\|f\|_\infty \doteq \sup_{x \in
\mathbb{S}} |f(x)|$.
For a Polish space $\mathbb{S}$ and $T>0$, denote by $\mathbb{C}([0,T]:\mathbb{S})$ (resp.\ $\mathbb{D}([0,T]:\mathbb{S})$) the space of continuous functions (resp.\ right continuous functions with left limits) from $[0,T]$ to $ \mathbb{S}$, endowed with the uniform topology (resp.\ Skorokhod topology).
\textcolor{black}{We recall that a collection $\{ X^n \}$ of $\mathbb{S}$-valued random variables on some probability space $(\Omega, \mathcal{F}, P)$ is said to be tight, if for each $\varepsilon>0$ there is a compact set $K \subset \mathbb{S}$ such that $\sup_{n}P(X^n \in K^c)\le \varepsilon.$} A sequence of $\mathbb{D}([0,T]:\mathbb{S})$-valued random variables is said to be $\mathcal{C}$-tight if it is tight in $\mathbb{D}([0,T]:\mathbb{S})$ and every weak limit point takes values in $\mathbb{C}([0,T]:\mathbb{S})$ a.s. We use the symbol `$\Rightarrow$' to denote convergence in distribution.
We denote by $\mathbb{R}^{\infty}$ the space of all real sequences which is identified with the countable product of copies of $\mathbb{R}$. This space is equipped with the usual product topology. For ${\boldsymbol{x}}=(x_k)_{k \in \mathbb{N}}, {\boldsymbol{y}}=(y_k)_{k \in \mathbb{N}}$, we write ${ \boldsymbol{x}} \le {\boldsymbol{y}}$ if $x_k \le y_k$ for each $k \in \mathbb{N}$. We will use the notation $a\doteq b$ to signify that the definition of $a$ is given by the quantity $b$.
Let $\mathcal{C} \doteq \mathbb{C}([0,T]:\mathbb{R})$, $ \mathcal{C}_\infty \doteq \mathbb{C}([0,T]:\mathbb{R}^\infty)$, $\mathcal{D} \doteq \mathbb{D}([0,T]:\mathbb{R})$, $\mathcal{D}_\infty \doteq \mathbb{D} ([0,T]:\mathbb{R}^\infty)$. Let $x^+ \doteq \max \{x,0\}$ for $x \in \mathbb{R}$. Denote by $\mathbb{R}_+$ the set of all non-negative real numbers.
Let $\mathbb{N}_0 \doteq \mathbb{N} \cup \{0\}$. Cardinality of a set $A$ is denoted by $|A|$. For $n \in \mathbb{N}$, let $ [n] \doteq \{1,2,\dotsc,n\}$. We use the following conventions: $0\log 0=0$, $0 \log (x/0) =0$ for $x\ge 0$, and $x \log (x/0) = \infty$ for $x>0$.
\section{Assumptions and Results}
\label{sec:assuandres} Fix $n \in \mathbb{N}$. We start by describing the construction of the configuration model of random graphs with vertex set $[n]$. Detailed description and further references for the configuration model can be found in \cite[Chapter 7]{Hofstad2016}.
\subsection{The configuration model and assumptions}
Let ${\boldsymbol{d}}(n)=\{d_{i}^{(n)}\}_{i\in \lbrack n]}$ be a degree sequence, namely a sequence of non-negative integers such that $ \sum_{i=1}^{n}d_{i}^{(n)}$ is even. Let $2m^{(n)}\doteq \sum_{i=1}^{n}d_{i}^{(n)}$. We will usually suppress the dependence of $ d_{i}^{(n)}$ and $m^{(n)}$ on $n$ in the notation. Using the sequence $ \{d_{i}\}$ we construct a random graph on $n$ labelled vertices $[n]$ as follows: (i) Associate with each vertex $i\in \lbrack n]$ $d_{i}$ \emph{half-edges}. (ii) Perform a uniform random matching on the $2m$ half-edges to form $m$ edges so that every edge is composed of two half-edges. This procedure creates a random multigraph $G([n],{\boldsymbol{d}}(n))$ with $m$ edges, allowing for multiple edges between two vertices and self-loops, and is called the \emph{configuration model} with degree sequence ${\boldsymbol{d}}(n)$. Since we are concerned with connectivity properties of the resulting graph, vertices with degree zero play no role in our analysis, and therefore we assume that $d_{i}>0$ for all $i\in \lbrack n],~n\geq 1$. We make the following additional assumptions.
\begin{Assumption} \label{asp:convgN} There exists a probability distribution ${\boldsymbol{p}}
\doteq \left\{ p_{k}\right\} _{k\in \mathbb{N}}$ on $\mathbb{N}$ such that, writing $n_{k}^{\scriptscriptstyle(n)}\doteq |\left\{ i\in \lbrack n]:d_{i}=k\right\} |$ for the number of vertices with degree $ k $, $ {n_{k}^{\scriptscriptstyle(n)}}/{n}\rightarrow p_{k}\mbox{ as } n\rightarrow \infty ,\mbox{ for all }k\in \mathbb{N}. $ \end{Assumption}
We will also usually suppress the dependence of $n_{k}^{\scriptscriptstyle(n)}$ on $n$ in the notation. We make the following assumption on moments of the degree distribution.
\begin{Assumption} \label{asp:exponential-boundN} There exists some $\varepsilon_{\boldsymbol{p} } \in (0,\infty)$ such that $\sup_{n \in \mathbb{N}} \sum_{k=1}^{\infty} \frac{n_k}{n}k^{1+\varepsilon_{\boldsymbol{p}}} < \infty$.
\end{Assumption}
The above two assumptions will be made throughout this work.
\begin{Remark} \label{rem1.1} \begin{enumerate}[\upshape (i)] \item Note that Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN} , along with Fatou's lemma, imply that $\sum_{k=1}^{\infty }p_{k}k^{1+\varepsilon _{\boldsymbol{p}}}<\infty $. Conversely, if $ \sum_{k=1}^{\infty }p_{k}k^{\lambda }<\infty $ for some $\lambda \in (4,\infty )$ and $\{D_{i}\}_{i\in \mathbb{N}}$ is a sequence of i.i.d.\ $ \mathbb{N}$-valued random variables with common distribution $ \{p_{k}\}_{k\in \mathbb{N}}$, then using a Borel--Cantelli argument it can be shown that for a.e.\ $\omega $, Assumptions \ref{asp:convgN} and \ref {asp:exponential-boundN} are satisfied with $d_{i}=D_{i}(\omega )$, $i\in \lbrack n]$, $n\in \mathbb{N}$, and $\varepsilon _{\boldsymbol{p}}=\frac{ \lambda }{4}-1$. \item Under Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN}, $ \mu\doteq\sum_{k=1}^\infty kp_k < \infty$ and the total number of edges $m = \frac{1}{2} \sum_{i=1}^n d_i$ satisfies $\frac{m}{n} \to \frac{1}{2} \sum_{k=1}^\infty kp_k$ as $n \to \infty$. \end{enumerate} \end{Remark}
\subsection{Large Deviation Asymptotics for Component Degree Distributions}
\label{sec:degree_distribution}
We will say that a component of $G([n],{\boldsymbol{d}}(n))$ has degree configuration $\{\bar{n}_{k}\}$ if the component has $\bar{n}_{k}$ vertices with degree $k$, for $k \in {\mathbb{N}}$. Given $\boldsymbol{0} \le {\boldsymbol{q}}=(q_{k},k\in \mathbb{N})\leq {\boldsymbol{p}}$,
we are interested in the asymptotic exponential rate of decay of the probability of the event $E^{n,\varepsilon }({\boldsymbol{q}})$ introduced in \eqref{mainevent} that corresponds to the existence of a component in $G([n],{\boldsymbol{d}}(n))$ with degree configuration $\{\bar{n}_{k}\}$
satisfying $(q_{k}-\varepsilon )n\leq \bar{n}_{k}\leq (q_{k}+\varepsilon )n$, $k\in \mathbb{N}$,
namely, we want to characterize $\lim_{\varepsilon \rightarrow 0}\lim_{n\rightarrow \infty }\frac{1}{n}\log {P}\left\{ E^{n,\varepsilon }({\boldsymbol{q}})\right\} $. Note that for there to exist a component with degree configuration $\{nq_k\}$ we must have $\sum_{k=1}^{\infty }kq_{k}\ge 2\left(\sum_{k=1}^{\infty }q_{k}-\frac{1}{n}\right).$ We will in fact assume a slightly stronger condition: \begin{equation}\label{eq:slightstr} \sum_{k=1}^{\infty }kq_{k}>2\sum_{k=1}^{\infty }q_{k}. \end{equation} This condition says that there are strictly more edges than vertices in the component. Define $\beta \doteq \beta ({\boldsymbol{q}})$ as follows: $\beta =0$ when $q_{1}=0$, and when $q_{1}>0$, $\beta \in (0,1)$ is the unique solution (see Remark \ref{rmk:uniqueness_beta} below) of the equation \begin{equation*} \sum_{k=1}^{\infty }kq_{k}=(1-\beta ^{2})\sum_{k=1}^{\infty }\frac{kq_{k}}{ 1-\beta ^{k}}. \end{equation*} Define the function $K({\boldsymbol{q}})$ by \begin{equation} K({\boldsymbol{q}})\doteq \left( \frac{1}{2}\sum_{k=1}^{\infty }kq_{k}\right) \log (1-\beta ({\boldsymbol{q}})^{2})-\sum_{k=1}^{\infty }q_{k}\log (1-\beta ({\boldsymbol{q}})^{k})\label{eq:kdefnn} \end{equation} and with $H(\cdot)$ as in \eqref{eq:hdefn}
define \begin{equation}
\label{eq:Itil_1}
{\tilde{I}}_{1}({\boldsymbol{q}})\doteq H({\boldsymbol{q}})+H({ \boldsymbol{p}}-{\boldsymbol{q}})-H({\boldsymbol{p}})+K({\boldsymbol{q}}). \end{equation}
\begin{Remark} \label{rmk:uniqueness_beta} The existence and uniqueness of $\beta ({ \boldsymbol{q}})$ can be seen as follows. For $\alpha \in (0,1)$ consider \begin{equation*} \alpha F(\alpha )\doteq \sum_{k=1}^{\infty }kq_{k}-(1-\alpha ^{2})\sum_{k=1}^{\infty }\frac{kq_{k}}{1-\alpha ^{k}}=\alpha \left( \sum_{k=3}^{\infty }\frac{\alpha -\alpha ^{k-1}}{1-\alpha ^{k}} kq_{k}-q_{1}\right). \end{equation*} For $k\geq 3$ and $\alpha \in (0,1)$ let $ F_{k}(\alpha )\doteq (\alpha -\alpha ^{k-1})/(1-\alpha ^{k}). $ It is easily verified that $F_{k}(\cdot )$ is strictly increasing on $(0,1)$. Thus for $\alpha \in (0,1)$, $0=F_{k}(0+)<F_{k}(\alpha )<F_{k}(1-)=\frac{k-2}{k}$, and so \begin{equation*} -q_{1}=F(0+)<F(\alpha )<F(1-)=\sum_{k=3}^{\infty }(k-2)q_{k}-q_{1}. \end{equation*} Since $F$ is continuous on $(0,1)$, $-q_{1}<0$ and $\sum_{k=3}^{\infty }(k-2)q_{k}-q_{1}=\sum_{k=1}^{\infty }kq_{k}-2\sum_{k=1}^{\infty }q_{k}>0$, we have the existence and uniqueness of $\beta ({\boldsymbol{q}})$. \end{Remark}
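The monotonicity used in the remark above also gives a simple numerical recipe: $\beta ({\boldsymbol{q}})$ can be located by bisection on the strictly increasing function $F$, after which $K({\boldsymbol{q}})$ and ${\tilde{I}}_{1}({\boldsymbol{q}})$ are evaluated from \eqref{eq:kdefnn} and \eqref{eq:Itil_1}. The Python sketch below (for finitely supported ${\boldsymbol{p}}$ and ${\boldsymbol{q}}$, given as dictionaries $k\mapsto p_k$) is purely illustrative; in particular, the helper \texttt{H} implements the expression $\sum_k x_k\log x_k-\big(\tfrac12\sum_k kx_k\big)\log\big(\tfrac12\sum_k kx_k\big)$, which is the form of $H$ appearing in the displayed computations in the proofs of Corollaries \ref{cordreg} and \ref{cordregsubgraph}; the reader should substitute the definition from \eqref{eq:hdefn} if it differs.
\begin{verbatim}
import math

def beta(q, tol=1e-12):
    """Solve sum_k k q_k = (1 - b^2) sum_k k q_k / (1 - b^k) for b in (0, 1).

    Returns 0.0 when q_1 = 0, following the convention in the text.
    """
    if q.get(1, 0.0) == 0.0:
        return 0.0
    def F(b):  # strictly increasing on (0, 1); F(0+) = -q_1 < 0
        return sum(k * qk * (b - b ** (k - 1)) / (1 - b ** k)
                   for k, qk in q.items() if k >= 3) - q[1]
    lo, hi = tol, 1.0 - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def xlogx(x):
    return 0.0 if x == 0.0 else x * math.log(x)

def H(x):  # assumed form of H; see the lead-in text above
    return sum(xlogx(xk) for xk in x.values()) \
        - xlogx(0.5 * sum(k * xk for k, xk in x.items()))

def K(q):
    b = beta(q)
    if b == 0.0:
        return 0.0
    return 0.5 * sum(k * qk for k, qk in q.items()) * math.log(1.0 - b * b) \
        - sum(qk * math.log(1.0 - b ** k) for k, qk in q.items())

def I1(p, q):
    pm = {k: pk - q.get(k, 0.0) for k, pk in p.items()}
    return H(q) + H(pm) - H(p) + K(q)

# D-regular check: p_3 = 1, q_3 = 0.4 gives (1 - 3/2)(q log q + (1-q) log(1-q)).
print(I1({3: 1.0}, {3: 0.4}))
\end{verbatim}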
\begin{Remark}
\label{rem:finhk}
We note that for every $\boldsymbol{0} \le {\boldsymbol{q}}=(q_{k},k\in \mathbb{N})\leq {\boldsymbol{p}}$, $K(\boldsymbol{q})$ and $H(\boldsymbol{q})$ are finite. Indeed, the finiteness of $K(\boldsymbol{q})$ is immediate from
Assumption \ref{asp:exponential-boundN}. To see the finiteness of $H(\boldsymbol{q})$, note that on the one hand $\sum_{k=1}^{\infty }{q}_{k}\log {q}_{k} \le 0$ while on the other hand
\begin{align*}
\sum_{k=1}^{\infty} q_k \log q_k & = \sum_{k=1}^{\infty} q_k \log \frac{q_k}{2^{-(k+1)}} - (\log 2) \sum_{k=1}^{\infty} (k+1) q_k \\
& \ge -\left(1-\sum_{k=1}^\infty q_k\right) \log \frac{1-\sum_{k=1}^\infty q_k}{2^{-1}}- (\log 2) \sum_{k=1}^{\infty} (k+1) q_k > -\infty,
\end{align*}
where the first inequality follows from non-negativity of relative entropy and putting mass $1-\sum_{k=1}^\infty q_k$ on $k=0$, and the last inequality once more uses Assumption \ref{asp:exponential-boundN}. \end{Remark}
The following result gives large deviation upper and lower bounds on the probability of the event $E^{n,\varepsilon }({\boldsymbol{q}})$. The proof of the theorem, which is based on Theorem \ref{thm:main-ldp}, is given in Section \ref{sec:pf_LDP_degree}.
\begin{Theorem} \label{thm:ldg_degree_distribution}
Suppose $\boldsymbol{0} \le {\boldsymbol{q}} \le { \boldsymbol{p}}$ and that \eqref{eq:slightstr} is satisfied. Then \begin{enumerate}[(i)]
\item (Upper bound) when $p_1=0$, we have $\beta({\boldsymbol{q}})=0$, $K({ \boldsymbol{q}})=0$ and \begin{equation*} \limsup_{\varepsilon \to 0} \limsup_{n \to \infty } \frac{1}{n} \log { P}\left\{ E^{n,\varepsilon}({\boldsymbol{q}})\right\} \le -{ \tilde{I}}_1({\boldsymbol{q}}). \end{equation*}
\item (Lower bound) \begin{equation*} \liminf_{\varepsilon \to 0} \liminf_{n \to \infty } \frac{1}{n} \log { P}\left\{ E^{n,\varepsilon}({\boldsymbol{q}})\right\} \ge -{ \tilde{I}}_1({\boldsymbol{q}}). \end{equation*} \end{enumerate} \end{Theorem} \begin{Remark} The proof of Theorem \ref{thm:ldg_degree_distribution} relies on a large deviation principle for a certain exploration process (see Section \ref{sec:model}) that is given in Theorem \ref{thm:main-ldp}. The latter result does not require the condition $p_1 = 0$. Also note that the lower bound in Theorem \ref{thm:ldg_degree_distribution} does not require the condition $p_1=0$ either. One can also give an upper bound (without requiring $p_1=0$) in terms of a variational formula given by the right side of \eqref{eq:upperbd_to_improve}. When $p_1=0$, this variational expression can be simplified and is seen to be equal to $-{\tilde{I}}_1({\boldsymbol{q}})$. This is shown in Lemma \ref{lem:puhalskii-upper-bound-improvement} whose proof crucially relies on the property $p_1=0$. Whether the two expressions are equal in general when $p_1\neq 0$ remains an open problem. \end{Remark} As an immediate corollary of Theorem \ref{thm:ldg_degree_distribution} we have the following result for $D$-regular graphs, i.e., graphs such that each vertex is of degree $D$. In the following $\lim\star$ represents either $\limsup$ or $\liminf$. \begin{Corollary}
\label{cordreg}
{\bf ($D$-regular graphs)}
Suppose that
there exists some $D \in \mathbb{N}$ with $D \ge 3$, such that $p_k = 0$, $n_k = 0$ for $k \ne D$ and $p_D = 1$, $n_D=n$.
Fix $q_D \in (0,1]$ and
denote by $E^{n,\varepsilon}_D(\boldsymbol{q})$ the event that there is a component of size $N_D \in [n(q_D-{\varepsilon}), n(q_D+{\varepsilon})]$. Then
\begin{equation}
\label{eq:Dreg}
\lim\star_{\varepsilon \to 0} \lim\star_{n \to \infty } \frac{1}{n} \log P \left\{ E^{n,\varepsilon}_D({\boldsymbol{q}})\right\} =
\left(\frac{D}{2}-1\right) \left( q_D \log q_D + (1-q_D) \log (1-q_D) \right).
\end{equation}
\end{Corollary}
\begin{proof}
Let $q_k=0$ for $k \in \mathbb{N}\setminus\{D\}$ and let ${\boldsymbol{q}}=\{q_{k},k\in \mathbb{N}\}$.
Then
since $q_{1}=p_{1}=0$, we have $\beta(\boldsymbol{q})=0$ and $K({\boldsymbol{q}})=0$.
Using \eqref{eq:Itil_1} we have
\begin{align*}
{\tilde{I}}_{1}({\boldsymbol{q}}) & = H({\boldsymbol{q}})+H({ \boldsymbol{p}}-{\boldsymbol{q}})-H({\boldsymbol{p}})+K({\boldsymbol{q}}) \\
& = q_D \log q_D - \frac{Dq_D}{2} \log \left( \frac{Dq_D}{2} \right) + (1-q_D) \log (1-q_D) - \frac{D-Dq_D}{2} \log \left( \frac{D-Dq_D}{2} \right) \\
& \qquad + \frac{D}{2} \log \left( \frac{D}{2} \right) \\
& = \left(1-\frac{D}{2}\right) \left( q_D \log q_D + (1-q_D) \log (1-q_D) \right).
\end{align*}
The result then follows from Theorem \ref{thm:ldg_degree_distribution}, which gives the limit $-{\tilde{I}}_{1}({\boldsymbol{q}})$. \end{proof}
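For a quick numerical sense of the corollary (a throwaway illustration, not part of the proof), the decay rate in \eqref{eq:Dreg} can be tabulated for a few values of $D$; for instance, for $D=3$ and $q_D=1/2$ the exponent equals $-\tfrac{1}{2}\log 2$, so the probability of seeing a component containing about half of the vertices decays roughly like $2^{-n/2}$.
\begin{verbatim}
import math

def dreg_rate(D, q):
    """Right side of eq. (Dreg): (D/2 - 1) (q log q + (1 - q) log(1 - q))."""
    return (D / 2.0 - 1.0) * (q * math.log(q) + (1.0 - q) * math.log(1.0 - q))

for D in (3, 4, 5):
    print(D, dreg_rate(D, 0.5))   # approximately -0.3466, -0.6931, -1.0397
\end{verbatim}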
We note that the expression \eqref{eq:Dreg} has the same form when $q_D$ is replaced by $1-q_D$. This suggests that the most likely way of having a component of size around $nq_D$ in $D$-regular graphs is to let almost all of the remaining $n(1-q_D)$ vertices be in one component. Indeed, conditioning on having a component of size around $nq_D$, the remaining vertices can be viewed as a smaller configuration model of $D$-regular graphs with about $n(1-q_D)$ vertices. It then follows from the well known results for the asymptotics of the largest component in the configuration model \cite{MolloyReed1998size,Janson2009new} (and Theorem \ref{thm:LLN}) that these remaining vertices are in one component with high probability.
Based on these observations we make the following conjecture.
\begin{Conjecture}
\label{conj:Dreg-multi}
{\bf ($D$-regular graphs, multiple components)}
Suppose that there exists some $D \in \mathbb{N}$ with $D \ge 3$, such that $p_k = 0$, $n_k = 0$ for $k \ne D$ and $p_D = 1$, $n_D=n$.
Fix $M \in {\mathbb{N}}$ and $q_D^{(i)} \in (0,1]$ for each $i=1,\dotsc,M$, such that $\sum_{i=1}^M q_D^{(i)} \le 1$.
Let $q_k^{(i)}=0$ for $k \in \mathbb{N}\setminus\{D\}$ and let ${\boldsymbol{q}^{(i)}}=\{q_{k}^{(i)},k\in \mathbb{N}\}$, for each $i=1,\dotsc,M$.
Let $\boldsymbol{q}^{(M+1)} = \boldsymbol{p}-\sum_{i=1}^M \boldsymbol{q}^{(i)}$.
Denote by $E^{n,\varepsilon,M}_D$ the event that there are components of sizes $N_D^{(i)} \in [n(q_D^{(i)}-{\varepsilon}), n(q_D^{(i)}+{\varepsilon})]$, $i=1,\dotsc,M$. Then
\begin{align*}
\lim\star_{\varepsilon \to 0} \lim\star_{n \to \infty } \frac{1}{n} \log P \left\{ E^{n,\varepsilon,M}_D\right\} & = H(\boldsymbol{p}) - \sum_{i=1}^{M+1} H(\boldsymbol{q}^{(i)}) =
\left(\frac{D}{2}-1\right) \sum_{i=1}^{M+1} q_D^{(i)} \log q_D^{(i)}.
\end{align*} \end{Conjecture}
We also note that for each fixed $a \in [0,1]$, the function $[0,a] \ni x \mapsto x\log x + (a-x)\log(a-x) \in (-\infty,0]$ is maximized at $x=0$ and $x=a$. This suggests that the most likely way for the largest component to be of a certain size is to let as many of the remaining components as possible have that same size. Based on this we make the following conjecture on the large deviation behavior of the largest component size for $D$-regular graphs.
\begin{Conjecture}
\label{conj:Dreg-max}
{\bf ($D$-regular graphs, largest component)}
Suppose that there exists some $D \in \mathbb{N}$ with $D \ge 3$, such that $p_k = 0$, $n_k = 0$ for $k \ne D$ and $p_D = 1$, $n_D=n$.
For each $x \in [0,1]$, let $q_D^{(x)} = x$, $q_k^{(x)}=0$ for $k \in \mathbb{N}\setminus\{D\}$, and ${\boldsymbol{q}^{(x)}}=\{q_{k}^{(x)},k\in \mathbb{N}\}$.
Denote by $M^n$ the size of the largest component.
Then $\frac{M^n}{n}$ satisfies a large deviation principle in ${\mathbb{R}}_+$ with rate function $I_{max}$ defined by
\begin{align*}
I_{max}(x) & = k(x)H(\boldsymbol{q}^{(x)}) + H(\boldsymbol{q}^{(1-xk(x))}) - H(\boldsymbol{p}) =
\left(1-\frac{D}{2}\right) \left( xk(x) \log x + (1-xk(x)) \log \left(1-xk(x)\right) \right)
\end{align*}
for $x \in [0,1]$ and $I_{max}(x)=\infty$ otherwise, where $k(x)=\lfloor \frac{1}{x} \rfloor$ is the largest integer such that $xk(x) \le 1$.
\end{Conjecture}
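Although Conjecture \ref{conj:Dreg-max} is not proved here, the conjectured rate function is explicit and easy to evaluate; the following illustrative snippet tabulates $I_{max}$ on a grid (the small additive constant guards against floating point error in computing $\lfloor 1/x\rfloor$).
\begin{verbatim}
import math

def I_max(x, D):
    """Conjectured rate function for the largest component of a D-regular graph."""
    if not 0.0 < x <= 1.0:
        return math.inf
    k = math.floor(1.0 / x + 1e-12)      # largest integer with x * k <= 1
    r = 1.0 - x * k                      # leftover mass after k pieces of size x
    val = x * k * math.log(x) + (r * math.log(r) if r > 0.0 else 0.0)
    return (1.0 - D / 2.0) * val

for x in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(x, I_max(x, D=3))
\end{verbatim}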
Recall that $\mu\doteq\sum_{k=1}^\infty kp_k < \infty$. The following result gives the exponential decay rate of the probability of observing a $D$-regular component of macroscopic size in a configuration model whose limiting degree distribution ${\boldsymbol{p}}$ is not necessarily concentrated on a single degree.
\begin{Corollary}
\label{cordregsubgraph}
Suppose that Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN} hold.
Also suppose that $p_1=0$.
Fix $D \in {\mathbb{N}}$ with $D \ge 3$ such that $p_D > 0$.
Fix $q_D \in (0, p_D]$. Denote by $E^{n,\varepsilon}({\boldsymbol{q}})$ the event that the graph has a component that is $D$-regular and has size
$N_D \in [n(q_D-{\varepsilon}), n(q_D+{\varepsilon})]$.
Then
\begin{align*}
&\lim\star_{\varepsilon \to 0} \lim\star_{n \to \infty } \frac{1}{n} \log P \left\{ E^{n,\varepsilon}({\boldsymbol{q}})\right\}\\
&\quad = \left( \frac{Dq_D}{2} \log \left( \frac{Dq_D}{2} \right) + \frac{\mu-Dq_D}{2} \log \left( \frac{\mu-Dq_D}{2} \right) - \frac{\mu}{2} \log \left( \frac{\mu}{2} \right) \right) \\
& \quad\quad - \left(q_D \log q_D + (p_D-q_D) \log (p_D-q_D) - p_D \log p_D\right).
\end{align*}
\end{Corollary}
\begin{proof}
Let $q_k=0$ for $k \in \mathbb{N}\setminus\{D\}$ and let ${\boldsymbol{q}}=(q_{k},k\in \mathbb{N})$.
As before, since $q_1=0$, we have $\beta(\boldsymbol{q})=0$ and $K({\boldsymbol{q}})=0$.
Using \eqref{eq:Itil_1} we have
\begin{align*}
{\tilde{I}}_{1}({\boldsymbol{q}}) & = H({\boldsymbol{q}})+H({ \boldsymbol{p}}-{\boldsymbol{q}})-H({\boldsymbol{p}})+K({\boldsymbol{q}}) \\
& = q_D \log q_D - \frac{Dq_D}{2} \log \left( \frac{Dq_D}{2} \right) \\
& \qquad + (p_D-q_D) \log (p_D-q_D) + \sum_{k \ne D} p_k \log p_k - \frac{\mu-Dq_D}{2} \log \left( \frac{\mu-Dq_D}{2} \right) \\
& \qquad - \sum_{k=1}^\infty p_k \log p_k + \frac{\mu}{2} \log \left( \frac{\mu}{2} \right) \\
& = \left(q_D \log q_D + (p_D-q_D) \log (p_D-q_D) - p_D \log p_D\right) \\
& \qquad - \left( \frac{Dq_D}{2} \log \left( \frac{Dq_D}{2} \right) + \frac{\mu-Dq_D}{2} \log \left( \frac{\mu-Dq_D}{2} \right) - \frac{\mu}{2} \log \left( \frac{\mu}{2} \right) \right).
\end{align*}
The result then follows from Theorem \ref{thm:ldg_degree_distribution}, since the limit in that theorem equals $-{\tilde{I}}_{1}({\boldsymbol{q}})$. \end{proof}
\begin{Remark}
\label{rem:conjsize} Theorem \ref{thm:ldg_degree_distribution} can be used to extract other asymptotic results. We give below one example without proof.
Suppose that Assumptions \ref{asp:convgN}
and \ref{asp:exponential-boundN} hold. Also suppose that $p_1=p_2=0$.
Let $r\in (0,1]$ and denote by $E^{n,\varepsilon}_r$ the event that the graph has a component that has size
$N_r \in [n(r-{\varepsilon}), n(r+{\varepsilon})]$.
Then
\begin{align*}
\lim\star_{\varepsilon \to 0} \lim\star_{n \to \infty } \frac{1}{n} \log P \left\{ E^{n,\varepsilon}_r\right\} & =
\sup_{0\le \boldsymbol{q} \le \boldsymbol{p}:\; \boldsymbol{q}\cdot \boldsymbol{1} =r }\left\{H({\boldsymbol{p}})-H({\boldsymbol{q}}) - H({\boldsymbol{p}}-{\boldsymbol{q}})\right\}.
\end{align*}
\end{Remark}
\begin{Remark}
\color{black}
There is an important connection between the configuration model and the uniform distribution on the space of all simple graphs (namely graphs which have no multiple edges and self-loops) with a prescribed degree distribution which we now describe.
Given a degree sequence $\mathbf{d}(n)$, let ${\mathbb{G}}([n], \mathbf{d}(n))$ be the set of all (simple) graphs on vertex set $[n]$ with degree sequence $\mathbf{d}(n)$. Let ${\mathbb{U}}{\mathbb{M}}_n(\mathbf{d}(n))$ denote the uniform measure on ${\mathbb{G}}([n], \mathbf{d}(n))$. Then as is well known (see e.g.\ \cite[Proposition 7.15]{Hofstad2016}), the configuration model satisfies the property that the
conditional distribution of $G([n], \mathbf{d}(n))$, given the event that $G([n], \mathbf{d}(n))$ is simple, is
${\mathbb{U}}{\mathbb{M}}_n(\mathbf{d}(n))$.
Further by \cite{janson2009probability}, under the assumptions of the current paper $P(G([n], \mathbf{d}(n)) \text{ is simple}) \to e^{-(\nu/2 +\nu^2/4)}$ where $\nu = \sum_{k} k(k-1)p_k/\sum_k kp_k$.
These observations suggest a natural approach to asymptotic questions of the form studied in the current work for
(simple) graphs with a prescribed degree distribution. In particular by an elementary Bayes formula calculation it follows that
if
\begin{equation}\label{eq:bayes}
\frac{\log P(G([n], \mathbf{d}(n)) \text{ is simple} \,\big|\, E^{n,\varepsilon}({\boldsymbol{q}}))}{n}\to 0,\end{equation}
then Theorem \ref{thm:ldg_degree_distribution} will continue to hold with the configuration model replaced with the uniform distribution on the space of simple graphs with prescribed degree sequence. In general, characterizing the asymptotics of quantities as in \eqref{eq:bayes} is key to the large deviation analysis of ${\mathbb{U}}{\mathbb{M}}_n(\mathbf{d}(n))$. Study of these questions is deferred to future work.
\end{Remark}
\subsection{Edge-exploration algorithm (EEA)}
\label{sec:eea}
\textcolor{black}{Given a degree sequence ${\boldsymbol{d}}(n)$, we now describe a well-known dynamic construction of the configuration model $G([n],{\boldsymbol{d}}(n))$, given in \cite{MolloyReed1998size,Janson2009new}, that sequentially matches half-edges. Tracking functionals of this dynamic construction, in particular the hitting times of zero of the number of so-called active half-edges (see below), reveals component size information of $G([n],{\boldsymbol{d}}(n))$. The construction given below closely follows \cite{Janson2009new}.} This algorithm traverses the graph by exploring all its edges, unlike typical graph exploration algorithms, which sequentially explore vertices. At each stage of the algorithm, every vertex in $[n]$ is in one of two possible states, sleeping or awake, while each half-edge is in one of three states: sleeping (unexplored), active or dead (removed). \textcolor{black}{The exploration process sequentially visits vertices, {\bf awakening} vertices whilst {\bf activating} or {\bf killing} half-edges.
}
Write $\mathcal{S}_{\mathbb{V}}(j)$ for the set of sleeping vertices at step $j$ and similarly let $\mathcal{S}_{\mathbb{E}}(j),\mathcal{A}_{\mathbb{E}}(j)$ be the sets of sleeping and active half-edges at step $j$. We call a half-edge
\textquotedblleft living\textquotedblright\ if it is either sleeping or active. Initialize by setting all vertices and half-edges to be in the sleeping state. For step $j\geq 0$, write $A(j)\doteq |\mathcal{A}_{\mathbb{E
}}(j)|$ for the number of active half-edges and $V_{k}(j)$ for the number of sleeping vertices $v\in \mathcal{S}_{\mathbb{V}}(j)$ with degree $k$. Write $\boldsymbol{V}(j)\doteq(V_{k}(j), k\in \mathbb{N})$ for the corresponding vector in $\mathbb{R}_{+}^{\infty }$.
At step $j=0$, all vertices and half-edges are asleep hence $A(0)=0$ and $ V_k(0)=n_k$ for $k \ge 1$. The exploration process proceeds as follows:
\begin{enumerate}[\upshape(1)]
\item If the number of active half-edges and sleeping vertices is zero, i.e.\ $A(j) = 0$ and $\boldsymbol{V}(j)={\boldsymbol{0}}$, all vertices and half-edges have been explored and we terminate the algorithm.
\item If $A(j) = 0$ and $\boldsymbol{V}(j) \neq {\boldsymbol{0}}$, so there exist sleeping vertices, pick one such vertex with probability proportional to its degree and mark the vertex as awake and all its half-edges as active.
Thus the transition $(A(j),\boldsymbol{V}(j))$ to $(A(j+1),\boldsymbol{V}(j+1))$ at step $j+1$ takes the form \begin{equation*} (0, {\boldsymbol{v}}) \mapsto (k, {\boldsymbol{v}} - {\boldsymbol{e}}_k) \mbox{ with probability } \frac{kv_k}{\sum_{i=1}^\infty i v_i}, \: k \in \mathbb{N}, \end{equation*} where ${\boldsymbol{e}}_k$ is the $k$-th unit vector.
\item If $A(j) > 0$, pick an active half-edge uniformly at random, pair it with another uniformly chosen living half-edge (either active or sleeping), say $e^*$, merge both half-edges to form a full edge and kill both half-edges. If $e^*$ was sleeping when picked, wake the vertex corresponding to the half-edge $e^*$, and mark all its other half-edges active. Thus in this case the transition takes the form \begin{align*} (a, {\boldsymbol{v}}) &\mapsto (a-2, {\boldsymbol{v}}) \mbox{ with probability } \frac{a-1}{\sum_{i=1}^\infty i v_i + a-1}, \\ (a, {\boldsymbol{v}}) &\mapsto (a+k-2, {\boldsymbol{v}} - {\boldsymbol{e}} _k) \mbox{ with probability } \frac{kv_k}{\sum_{i=1}^\infty i v_i + a-1}, \: k \in \mathbb{N}. \end{align*} \end{enumerate}
The statements in (2) and (3) can be combined as follows: If $A(j) \neq 0$ or $\boldsymbol{V}(j) \neq {\boldsymbol{0}}$, then the transition $(A(j),\boldsymbol{V}(j))$ to $ (A(j+1),\boldsymbol{V}(j+1))$ takes the form \begin{align} \label{eq:aj-dynamics} \begin{aligned} (a, {\boldsymbol{v}}) &\mapsto (a-2 \textcolor{black}{\cdot 1_{\{a > 0\}}}, {\boldsymbol{v}}) \mbox{ with probability } \frac{(a-1)^+}{\sum_{i=1}^\infty i v_i + (a-1)^+}, \\ (a, {\boldsymbol{v}}) &\mapsto (a+k-2 \textcolor{black}{\cdot 1_{\{a > 0\}}}, {\boldsymbol{v}} - {\boldsymbol{e}}_k) \mbox{ with probability } \frac{kv_k}{\sum_{i=1}^\infty i v_i + (a-1)^+}, \: k \in \mathbb{N}. \end{aligned} \end{align}
The random graph $G([n],{\boldsymbol{d}}(n))$ formed at the termination of the above algorithm has the same distribution as the configuration model with degree sequence ${\boldsymbol{d}}(n)$ \cite {molloy1995critical,Janson2009new}.
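The following Python sketch (an illustrative implementation with our own naming; it is not used in the analysis) runs the EEA on a given degree sequence and records the number of edges in each explored component by tracking the excursions of $A(j)$ away from $0$, in the spirit of Remark \ref{rmk:track-component} below.
\begin{verbatim}
import random
from collections import Counter

def eea_component_edge_counts(degrees, rng=None):
    """Run the edge-exploration algorithm and return the edge count of
    each component of the sampled configuration-model multigraph."""
    if sum(degrees) % 2 != 0:
        raise ValueError("the sum of the degrees must be even")
    rng = rng or random.Random()
    v = Counter(d for d in degrees if d > 0)   # sleeping vertices, by degree
    a, current, edge_counts = 0, 0, []

    def wake_vertex(threshold):
        # pick a sleeping degree class k with probability proportional to k * v[k]
        for k in sorted(v):
            if v[k] > 0:
                threshold -= k * v[k]
                if threshold < 0.0:
                    v[k] -= 1
                    return k
        raise RuntimeError("no sleeping vertex left to wake")

    while a > 0 or sum(v.values()) > 0:
        sleeping_mass = float(sum(k * vk for k, vk in v.items()))
        if a == 0:                               # start exploring a new component
            a = wake_vertex(rng.uniform(0.0, sleeping_mass))
            current = 0
        else:                                    # pairing step: one edge is formed
            current += 1
            u = rng.uniform(0.0, sleeping_mass + a - 1)
            if u < a - 1:
                a -= 2                           # paired with another active half-edge
            else:
                a += wake_vertex(u - (a - 1)) - 2  # paired with a sleeping half-edge
            if a == 0:
                edge_counts.append(current)      # an excursion of A(j) has ended
    return edge_counts

# Example: edge counts of the components of a random 3-regular multigraph.
print(sorted(eea_component_edge_counts([3] * 200, random.Random(0)), reverse=True))
\end{verbatim}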
\begin{Remark}
\label{rmk:track-component}
We note that for $j> 0$, $A(j)=0$ if and only if the exploration of a
component in the random graph $G([n],{\boldsymbol{d}}(n))$ is completed at
step $j$. Thus the number of edges in a component equals the length of an
excursion of $\{A(j)\}$ away from $0$ and the largest excursion length gives
the size of the largest component, namely the number of edges in the
component with maximal number of edges.
The vertices in each component are those that are awakened during
corresponding excursions. \end{Remark}
Note that at each step in the EEA, either a new vertex is woken up or two half-edges are killed. Since there are a total of $n$ vertices and $2m$ half-edges, we have from Assumptions \ref{asp:convgN} and \ref {asp:exponential-boundN} that the algorithm terminates in at most $m+n\le n L $ steps where $L \doteq 1 + \lfloor \sup_n \frac{1}{2}\sum_{k=1}^\infty k \frac{n_k}{n} \rfloor < \infty$. We define $A(j) \equiv 0$ and $\boldsymbol{V}(j) \equiv {\boldsymbol{0}}$ for all $j \ge j_0$ where $j_0$ is the step at which the algorithm terminates.
\subsection{An equivalent continuous time exploration process}
\label{sec:model} A natural way to study large deviation properties of the configuration model is through the discrete time sequence $ \{A(j),\boldsymbol{V}(j)\}_{j\in \mathbb{N}_0}$ in EEA which can be viewed as a discrete time \textquotedblleft small noise" Markov process. In order to study large deviations for such a sequence, a standard approach is to consider the sequence of $\mathbb{C}([0,L]:\mathbb{R}^{\infty})$-valued random variables obtained by a linear interpolation of $\{A(j),\boldsymbol{V}(j)\}_{j\in \mathbb{N}_0}$ over intervals of length $1/n$. As was noted in the Introduction, the `diminishing rates' feature of the transition kernel \eqref{eq:aj-dynamics} makes the large deviations analysis of this sequence challenging. \textcolor{black}{An alternative approach is to consider a continuous time stochastic process that provides a tractable construction of the configuration model. We briefly recall one such construction that was introduced in \cite[Section 4]{Janson2009new}. }
\color{black} \subsubsection{A simple continuous time construction} \label{sec:cts-constr-eea} In
\cite[Section 4]{Janson2009new} it was observed that the configuration model can be explored using a continuous time process constructed using exponential random variables as follows. \begin{enumerate}
\item Every half-edge $e$ is given an independent exponential life-time (call this a clock). Initially, all half-edges and vertices are taken to be sleeping.
\item Whenever the clock of a half-edge rings, this half-edge becomes awake and connects to an existing awake half-edge if one exists; otherwise it waits for the next half-edge clock to ring and connects to that half-edge, completing a full edge. Both such half-edges are then called dead. If at any point a half-edge of a sleeping vertex awakes, that vertex is said to be awake.
\item The process continues until all half-edges are dead at which point the exploration ends. \end{enumerate}
It is observed in \cite[Section 4]{Janson2009new} that the random graph constructed at the end of the exploration is a realization from the desired configuration model.
Although the above continuous time construction gives a simple method to produce a sample from the configuration model, it turns out to be hard to directly use it for the study of large deviation problems of interest here. In view of this we present
below a different continuous time process for the exploration of the configuration model that is obtained by a more direct Poissonization of the Markov chain
$(A(\cdot), \boldsymbol{V}(\cdot))$ in Section \ref{sec:eea}.
\subsubsection{A continuous time construction via Poissonization} Let $N(t)$ be a rate-$n$ Poisson process independent of the processes $(A, \boldsymbol{V})$ of Section \ref{sec:eea} and define $(\tilde A(t), \tilde{\boldsymbol{V}}(t)) \doteq (A(N(t)), \boldsymbol{V}(N(t)))$. Then $(\tilde A, \tilde{\boldsymbol{V}})$ gives a natural continuous time process associated with the exploration of the configuration model. We now give a distributionally equivalent representation of this process which is more tractable for a large deviation analysis. The construction given below ensures that $\{(nX^n_0(\cdot)+1, nX^n_k(\cdot)), k\in \mathbb{N}\}$, where $X^n_j$ are processes defined below, has the same distribution as $\{\tilde A(\cdot), \tilde V_k(\cdot), k\in \mathbb{N}\}$.
\color{black}
We begin with some notation that will be needed to formulate the continuous time model. For a locally compact Polish space $\mathbb{S}$, let $ \mathcal{M}_{FC}(\mathbb{S})$ be the space of all measures $\nu$ on $( \mathbb{S},\mathcal{B}(\mathbb{S}))$ such that $\nu(K)<\infty$ for every compact $K \subset \mathbb{S}$. We equip $\mathcal{M}_{FC}(\mathbb{S})$ with the usual vague topology. This topology can be metrized such that $\mathcal{M }_{FC}(\mathbb{S})$ is a Polish space (see \cite {BudhirajaDupuisMaroulas2011variational} for one convenient metric). A Poisson random measure (PRM) $ N$ on a locally compact Polish space $\mathbb{S}$ with intensity measure $\nu \in \mathcal{M}_{FC}(\mathbb{S})$ is an $ \mathcal{M}_{FC}(\mathbb{S})$-valued random variable such that for each $A \in \mathcal{B}(\mathbb{S})$ with $\nu(A)<\infty$, $N(A)$ is Poisson distributed with mean $\nu(A)$ and for disjoint $A_1,\dotsc,A_k \in \mathcal{B}(\mathbb{S})$, $N(A_1),\dotsc,N(A_k)$ are mutually independent random variables (cf.\ \cite{IkedaWatanabe1990SDE}).
Let $(\Omega ,\mathcal{F},{P})$ be a complete probability space on which are given
i.i.d.\ PRM $\{N _{k}(ds\,dy\,dz)\}_{k\in \mathbb{N}_{0}}$ on $\mathbb{R}_{+}\times \lbrack 0,1]\times \mathbb{R}_{+}$ with intensity measure $ds\times dy\times dz$. Let \begin{equation*} \hat{\mathcal{F}}_{t}\doteq \sigma \{N_{k}((0,s]\times A\times B),0\leq s\leq t,A\in \mathcal{B}([0,1]),B\in \mathcal{B}(\mathbb{R} _{+}),k\in \mathbb{N}_{0}\},\;t\geq 0 \end{equation*} and let $\{\mathcal{F}_{t}\}$ be the ${P}$-completion of this filtration. Fix $T\in (0,\infty )$. Let $\mathcal{\bar{P}}$ be the $\{ \mathcal{F}_{t}\}_{0\leq t\leq T}$-predictable $\sigma $-field on $\Omega \times \lbrack 0,T]$. Let $\bar{\mathcal{A}}_{+}$ be all $( \mathcal{\bar{P}}\otimes \mathcal{B}([0,1]))/\mathcal{B}(\mathbb{R}_{+})$ -measurable maps from $\Omega \times \lbrack 0,T]\times \lbrack 0,1]$ to $ \mathbb{R}_{+}$. For $\varphi \in \bar{\mathcal{A}}_{+}$, define a counting process $N_{k}^{\varphi }$ on $[0,T]\times \lbrack 0,1]$ by \begin{equation*} N_{k}^{\varphi }([0,t]\times A)\doteq \int_{\lbrack 0,t]\times A\times \mathbb{R}_{+}}{{1}}_{[0,\varphi (s,y)]}(z)\,N _{k}(ds\,dy\,dz),\:t\in \lbrack 0,T],A\in \mathcal{B}([0,1]),k\in \mathbb{N} _{0}. \end{equation*} We think of $N_{k}^{\varphi }$ as a controlled random measure, where $ \varphi $ is the control process that produces a thinning of the point process $N_{k}$ in a random but non-anticipative manner to produce a desired intensity. We will write $N_{k}^{\varphi }$ as $N_{k}^{\theta }$ if $\varphi \equiv \theta $ for some constant $\theta \in \mathbb{R}_{+}$. Note that $N_{k}^{\theta }$ is a PRM on $[0,T]\times [0,1]$ with intensity $\theta ds \times dy$. For ${\boldsymbol{x}}=(x_{0},x_{1},x_{2},\dotsc )\in \mathbb{R}\times \mathbb{R}_{+}^{\infty }$, let \begin{equation} r({\boldsymbol{x}}) \doteq (x_{0})^{+} + \sum_{k=1}^{\infty }kx_{k}, \quad r_{0}({\boldsymbol{x}}) \doteq \frac{(x_{0})^{+}}{r({\boldsymbol{x}})} {{1}}_{\{\textcolor{black}{r({\boldsymbol{x}}) \in (0,\infty)}\}}, \quad r_{k}({\boldsymbol{x}}) \doteq \frac{kx_{k}}{r({\boldsymbol{x}})} {{1}}_{\{\textcolor{black}{r({\boldsymbol{x}}) \in (0,\infty)}\}}, \quad k\in \mathbb{N}. \label{eq:r_k} \end{equation} Note that $\sum_{k \in \mathbb{N}_{0}} r_k(\boldsymbol{x}) =1$ whenever $\textcolor{black}{r({\boldsymbol{x}}) \in (0,\infty)}$. Recall that ${\boldsymbol{e}}_{k}$ is the $k$-th unit vector in $\mathbb{R} ^{\infty }$, $k\in \mathbb{N}_{0}$. Define the state process $ \boldsymbol{X}^{n}(t)=(X_{0}^{n}(t),X_{1}^{n}(t),X_{2}^{n}(t),\dotsc )$ with values in $ \mathbb{R}\times \mathbb{R}_{+}^{\infty }$ as the solution to the following SDE:
\begin{equation}\label{eq:eq903e} \begin{aligned} \boldsymbol{X}^{n}(t)=\boldsymbol{X}^{n}(0)& +\frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{{1} }_{\{X_{0}^{n}(s-)\geq 0\}}\left[ -2{\boldsymbol{e}}_{0}\right] {{ 1}}_{[0,r_{0}(\boldsymbol{X}^{n}(s-)))}(y)\,N_{0}^{n}(ds\,dy) \\ \quad & +\sum_{k=1}^{\infty }\frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{\{X_{0}^{n}(s-)\geq 0\}}\left[ (k-2){\boldsymbol{e}}_{0}-{ \boldsymbol{e}}_{k}\right] {{1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y) \,N_{k}^{n}(ds\,dy) \\ \quad & +\sum_{k=1}^{\infty }\frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{\{X_{0}^{n}(s-)<0\}}\left[ k{\boldsymbol{e}}_{0}-{ \boldsymbol{e}}_{k}\right] {{1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y) \,N_{k}^{n}(ds\,dy), \end{aligned} \end{equation} where $\boldsymbol{X}^{n}(0)\doteq \frac{1}{n}(-1,n_{1},n_{2},\dotsc )$. The existence and uniqueness of solutions to this SDE follows from the summability of $r_k(\cdot)$. \textcolor{black}{ Indeed, for each $\boldsymbol{z} \in \mathbb{R}\times \mathbb{R}_{+}^{\infty }$ and $u \in [0,T]$, the process
$$Z^n(u,\boldsymbol{z}, t) \doteq \frac{1}{n} \int_{(u,t]\times [0,1]} N^n_0(ds\, dy) + \sum_{k=1}^{\infty }\frac{1}{n}\int_{(u,t]\times \lbrack 0,1]}
{{1}}_{[0,r_{k}(\boldsymbol{z}))}(y) N^n_k(ds\, dy),\; u<t\le T$$ satisfies $Z^n(u,\boldsymbol{z}, T)<\infty$ since $\sum_{k \in \mathbb{N}_{0}} r_k(\boldsymbol{z}) \le1$.
Together with the mutual independence of the PRM $\{N _{k}(ds\,dy\,dz)\}_{k\in \mathbb{N}_{0}}$ this says that the jump instants of the point process $\{Z^n(u,\boldsymbol{z}, t)\}_{u< t \le T}$ can be enumerated as $$u <\tau^n_1(\boldsymbol{z})< \cdots < \tau^n_{k_n}(\boldsymbol{z})<T$$ where $k_n = nZ^n(u,\boldsymbol{z}, T)$. Thus having constructed the solution of \eqref{eq:eq903e} on $[0,u]$, the solution can be extended to $[0, \tau^n_1(\boldsymbol{z})]$, where $\boldsymbol{z} = \boldsymbol{X}^{n}(u)$, and the unique solution of \eqref{eq:eq903e} is now obtained by a standard recursive construction from one jump instant to the next. The solution can be written in an explicit form in terms of the atoms of the PRM $\{N^n_k\}$ which also shows that the solution is a measurable function of the driving PRM. }
It is not difficult to see that $\frac{1}{n} (A(j)-1,V_{1}(j),V_{2}(j),\dotsc )$ in the discrete time EEA can be viewed as the embedded Markov chain associated with $\boldsymbol{X}^{n}$. \textcolor{black}{Namely, denoting the jump instants of the process $\boldsymbol{X}^{n}$ as $\{\sigma^n_j\}$, the collection $\{(nX^n_0(\sigma_j^n)+1, nX^n_k(\sigma_j^n)), k,j \in \mathbb{N}\}$ has the same distribution as $\{A(j), V_k(j), k,j \in \mathbb{N}\}$. In particular, for $k \in \mathbb{N}$, $nX^n_k(\sigma_j^n)$ can be interpreted as the number of sleeping vertices with degree $k$ at the $j$-th step of the exploration in the discrete EEA and in view of Remark \ref{rmk:track-component}, the excursions of $X^n_0$ away from $-1/n$ track the components in the configuration model.} In defining the state process, one could replace $X^n_0(0)$ with the asymptotically equivalent process $X^n_0(0)+1/n$ which starts from $0$ and is more directly comparable with the sequence $A(j)/n$. However some of the expressions are simplified (see, e.g., the formulas for rates in \eqref{eq:r_k} and the transition probabilities in \eqref{eq:aj-dynamics})
when describing the state in terms of $X^n_0(0)$ instead of $X^n_0(0)+1/n$. We now rewrite the evolution of $\boldsymbol{X}^{n}$ as follows: \begin{align*} \boldsymbol{X}^{n}(t)& =\boldsymbol{X}^{n}(0)+{\boldsymbol{e}}_{0}\sum_{k=0}^{\infty }\frac{(k-2)}{n} \int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y) \,N_{k}^{n}(ds\,dy) \\ & \quad -\sum_{k=1}^{\infty }{\boldsymbol{e}}_{k}\frac{1}{n} \int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y) \,N_{k}^{n}(ds\,dy) \\ & \quad +{\boldsymbol{e}}_{0}\sum_{k=0}^{\infty }\frac{2}{n} \int_{[0,t]\times \lbrack 0,1]}{{1}}_{\{X_{0}^{n}(s-)<0\}}{ {1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y)\,N_{k}^{n}(ds\,dy). \end{align*} Here the first two integrands do not depend on the sign of $X_{0}^{n}$ and are interpreted as the main contribution to the evolution. The last sum is a `reflection' term in the ${\boldsymbol{e}}_{0}$ direction and makes a contribution of $\frac{2}{n}{\boldsymbol{e}}_{0}$ only when $X_{0}^{n}(s-)<0$ . For $t\geq 0$ define \begin{align} Y^{n}(t)& \doteq X_{0}^{n}(0)+\sum_{k=0}^{\infty }\frac{k-2}{n} \int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y) \,N_{k}^{n}(ds\,dy), \label{eq:Y_n} \\ \eta ^{n}(t)& \doteq \sum_{k=0}^{\infty }\frac{2}{n}\int_{[0,t]\times \lbrack 0,1]}{{1}}_{\{X_{0}^{n}(s-)<0\}}{{1}} _{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y)\,N_{k}^{n}(ds\,dy). \label{eq:eta_n} \end{align} Using these we can write \begin{align} X_{0}^{n}(t)& =Y^{n}(t)+\eta ^{n}(t), \label{eq:X_n_0} \\ X_{k}^{n}(t)& =X_{k}^{n}(0)-\frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\boldsymbol{X}^{n}(s-)))}(y)\,N_{k}^{n}(ds\,dy),\:k\in \mathbb{N} . \label{eq:X_n_k} \end{align} Here $\eta ^{n}$ is viewed as the regulator function which ensures that $ X_{0}^{n}(t)\geq -\frac{1}{n}$. Note that for $k \in \mathbb{N}$, $X_{k}^{n}(t)$ is non-increasing and non-negative. Also, from \eqref{eq:eq903e} we see that $r(\boldsymbol{X}^{n}(t))$ is non-increasing.
\subsection{Rate Function}
\label{sec:rate_def}
The main result of this work gives a large deviation principle for $\{(\boldsymbol{X}^n, Y^n)\}_{n \in \mathbb{N}}$ in the path space $\mathcal{D}_\infty\times \mathcal{D}$. In this section we define the associated rate function $I_T$, \textcolor{black}{where the subscript $T$ makes explicit the fact that the processes $\{(\boldsymbol{X}^n, Y^n)\}_{n \in \mathbb{N}}$ are considered on the time horizon $[0,T]$}. Including the process $Y^n$ in the LDP is convenient for obtaining large deviation results, for the degree distribution in giant components, of the form given in Section \ref{sec:examples}.
Recall the probability distribution ${\boldsymbol{p}}\doteq \{p_{k}\}_{k\in \mathbb{N}}$ introduced in Assumption \ref{asp:convgN}. \textcolor{black}{In order to describe the rate function it will be convenient to introduce the Skorohod map. The use of Skorohod reflection mechanism to describe exploration processes for random graphs goes back to the work of Aldous \cite{aldous1997brownian}. In the context of large deviation problems for Erd\H{o}s-R\'enyi random graph models it has also been used in \cite{puhalskii2005stochastic}.} Let $\Gamma \colon \mathcal{C}\rightarrow \mathcal{C}$ denote the one-dimensional Skorokhod map defined by \begin{equation*} \Gamma (\psi )(t)\doteq \psi (t)-\inf_{0\leq s\leq t}\psi (s)\wedge 0,\;t\in \lbrack 0,T],\psi \in \mathcal{C}. \end{equation*}
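For intuition, on a discrete time grid the Skorokhod map has a one-line implementation: the amount of reflection added by time $t$ is the running minimum of $\psi$ truncated at $0$. The snippet below (using NumPy, purely for illustration) evaluates $\Gamma$ on a sampled path.
\begin{verbatim}
import numpy as np

def skorokhod_map(psi):
    """Gamma(psi)(t) = psi(t) - min(inf_{s <= t} psi(s), 0), evaluated on a grid."""
    psi = np.asarray(psi, dtype=float)
    return psi - np.minimum(np.minimum.accumulate(psi), 0.0)

# The reflected path is nonnegative and is pushed up only when psi reaches a new minimum.
print(skorokhod_map([0.0, -0.5, 0.3, -1.0, 0.2]))   # [0.  0.  0.8 0.  1.2]
\end{verbatim}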
Let $\mathcal{C}_T$ be the subset of $\mathcal{C}_\infty \times \mathcal{C}$, consisting of those functions $(\boldsymbol{\zeta},\psi)$ such that
\begin{enumerate}[\upshape(a)]
\item $\psi(0)=0$, and $\psi$ is absolutely continuous on $[0,T]$.
\item $\zeta_0(t) = \Gamma(\psi)(t)$ for $t \in [0,T]$.
\item For each $k \in \mathbb{N}$, $\zeta_k(0) = p_k$, $\zeta_k$ is non-increasing and absolutely continuous and $\zeta_k(t) \ge 0$ for $t \in [0,T]$. \end{enumerate}
For $(\boldsymbol{\zeta} ,\psi )\in (\mathcal{D}_{\infty }\times \mathcal{D})\setminus \mathcal{C}_{T}$, define $I_{T}(\boldsymbol{\zeta} ,\psi )\doteq \infty $. For $(\boldsymbol{\zeta} ,\psi )\in \mathcal{C}_{T}$, define \begin{equation} I_{T}(\boldsymbol{\zeta} ,\psi )\doteq \inf_{\boldsymbol{\varphi} \in \mathcal{S}_{T}(\boldsymbol{\zeta} ,\psi )}\left\{ \sum_{k=0}^{\infty }\int_{[0,T]\times \lbrack 0,1]}\ell (\varphi _{k}(s,y))\,ds\,dy\right\} . \label{eq:rate_function} \end{equation}
Here for $x\geq 0$, \begin{equation} \ell (x)\doteq x\log x-x+1, \label{eq:ell} \end{equation} and $\mathcal{S}_{T}(\boldsymbol{\zeta} ,\psi )$ is the set of all sequences of functions $ \boldsymbol{\varphi} =(\varphi _{k})_{k\in \mathbb{N}_{0}}$, $\varphi _{k}:[0,T]\times \lbrack 0,1]\rightarrow \mathbb{R}_{+}$, such that
\begin{align} \psi (t)& =\sum_{k=0}^{\infty }(k-2)\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\boldsymbol{\zeta} (s)))}(y)\,\varphi _{k}(s,y)ds\,dy, \; t \in [0,T] \label{eq:psi} \\ \zeta _{k}(t)& =p_{k}-\int_{[0,t]\times \lbrack 0,1]}{{1}} _{[0,r_{k}(\boldsymbol{\zeta} (s)))}(y)\,\varphi _{k}(s,y)ds\,dy,k\in \mathbb{N}, \; t \in [0,T]. \label{eq:phi_k} \end{align}
\begin{Remark} \label{rmk:property_ODE} Suppose $(\boldsymbol{\zeta} ,\psi )\in \mathcal{C}_{T}$ satisfies \eqref{eq:psi} and \eqref{eq:phi_k} for some $\boldsymbol{\varphi} \in \mathcal{S}_{T}(\boldsymbol{\zeta} ,\psi )$.
\begin{enumerate}[\upshape(a)]
\item From Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN} it follows that the following uniform integrability holds: As $K \to \infty$, \begin{equation*} \sup_{0 \le t \le T}\sum_{k=K}^\infty k \zeta_k(t) \le \sum_{k=K}^\infty k \sup_{0 \le t \le T} \zeta_k(t) = \sum_{k=K}^\infty k p_k \to 0. \end{equation*} This in particular says that $r(\boldsymbol{\zeta}(\cdot)) \in \mathcal{C}$, where $ r(\cdot)$ is defined in \eqref{eq:r_k}.
\item For any $k \in \mathbb{N}$, whenever $\zeta_k(t_k)=0$ for some $t_k \in [0,T]$, we must have $\zeta_k(t) = 0$ for all $t \in [t_k,T]$. This follows since $\zeta_k$ is non-increasing and non-negative for every $k$.
\item Whenever $r(\boldsymbol{\zeta}(t^*))=0$ for some $t^* \in [0,T]$, we must have from part (b) that $\zeta_k(t) = 0$ for all $t \in [t^*,T]$ and $k \in \mathbb{N}$ . This, together with \eqref{eq:psi}, implies that $\psi(\cdot)$ is non-increasing on the interval $[t^*,T]$. Hence by property (b) of $\mathcal{ C}_T$, $\zeta_0(t)$ is non-increasing and non-negative for $t \in [t^*,T]$. Since $\zeta_0(t^*)=0$, we must then have $\zeta_0(t) = 0$ for $t \in [t^*,T]$ , which means that $\boldsymbol{\zeta}(t) = \boldsymbol{0}$ for $t \in [t^*,T]$. Thus whenever such a $t^*$ exists, $\boldsymbol{\zeta}(t)= \boldsymbol{0}$ after the time instant \begin{equation} \tau_{\boldsymbol{\zeta}} \doteq \inf \{t \in [0,T]:r(\boldsymbol{\zeta}(t))=0\} \wedge T. \label{eq:defntauphi} \end{equation} \end{enumerate} \end{Remark}
\subsection{LDP and LLN for the Exploration Process}
\label{sec:main-result}
The following LDP is one of our main results and is key to the proof of Theorem \ref{thm:ldg_degree_distribution}.
\begin{Theorem} \label{thm:main-ldp} The function $I_T$ in \eqref{eq:rate_function} is a rate function on $\mathcal{D}_\infty \times \mathcal{D}$ and the sequence $ \{(\boldsymbol{X}^n,Y^n)\}_{n \in \mathbb{N}}$ satisfies a large deviation principle in $ \mathcal{D}_\infty \times \mathcal{D}$ with rate function $I_T$. \end{Theorem}
\textbf{Outline of the proof:} Due to the equivalence between a large deviation principle and a Laplace principle, it suffices to show the following three statements (cf.\ \cite[Section 1.2]{DupuisEllis2011weak} or \cite[Section 1.2]{buddupbook}).
\begin{enumerate}[\upshape(1)]
\item Laplace principle upper bound: For all $h\in \mathbb{C}_{b}(\mathcal{D} _{\infty }\times \mathcal{D})$, \begin{equation} \limsup_{n\rightarrow \infty} \frac{1}{n}\log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})}\le - \inf_{(\boldsymbol{\zeta} ,\psi )\in \mathcal{C}_{\infty }\times \mathcal{C}}\{I_{T}(\boldsymbol{\zeta} ,\psi )+h(\boldsymbol{\zeta} ,\psi )\}. \label{eq:lappriupp} \end{equation}
\item Laplace principle lower bound: For all $h \in \mathbb{C}_b(\mathcal{D} _\infty \times \mathcal{D})$, \begin{equation} \label{eq:lapprilow} \liminf_{n \to \infty} \frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} \ge - \inf_{(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_\infty\times \mathcal{C}} \{I_T(\boldsymbol{\zeta} ,\psi )+h(\boldsymbol{\zeta} ,\psi )\}. \end{equation}
\item $I_T$ is a rate function on $\mathcal{D}_\infty\times \mathcal{D}$: For each $M \in [0,\infty)$, $\{(\boldsymbol{\zeta} ,\psi ) \in \mathcal{D}_\infty \times \mathcal{D} : I_T(\boldsymbol{\zeta} ,\psi ) \le M \}$ is a compact subset of $\mathcal{D} _\infty \times \mathcal{D}$. \end{enumerate}
Statements (1), (2) and (3) will be shown in Sections \ref{sec:upper},
\ref{sec:lower} and \ref{sec:rate_function}, respectively. \begin{Remark}
\label{rem:remnumcopm}
As noted above, the LDP in Theorem \ref{thm:main-ldp} is a key to the proof of Theorem \ref{thm:ldg_degree_distribution}. In the next subsection we will show how this LDP can be used to easily give a LLN result. The LDP can be used to establish other asymptotic results as well. We give one such example without proof below.
Denote by $C^n$ the number of components in $G([n],{\boldsymbol{d}}(n))$.
Then $\eta^n$ defined in \eqref{eq:eta_n} can be used to represent $\frac{C^n}{n}$.
\textcolor{black}{Such an observation in the context of Erd\H{o}s-R\'enyi random graphs was first made in \cite{aldous1997brownian} (see also \cite{puhalskii2005stochastic}).}
Note that whenever the EEA starts to explore a new component, $X^n_0$ will jump from $-\frac{1}{n}$ and as a result, $\eta^n$ will increase by $\frac{2}{n}$.
Therefore
\begin{equation*}
\frac{C^n}{n} = \sup_{t >0} \frac{\eta^n(t)}{2} = \lim_{T \to \infty} \frac{\eta^n(T)}{2}.
\end{equation*}
Observe from \eqref{eq:X_n_0} that $\eta^n = X^n_0 - Y^n$, and that for large deviation asymptotics one can assume
that the EEA terminates by time $N \doteq 1 + \lfloor \sup_n \frac{1}{2}\sum_{k=1}^\infty k \frac{n_k^{(n)}}{n} \rfloor < \infty$ (see Lemma \ref{lem:choosing-T} and its proof for precise details).
Using this fact, Theorem \ref{thm:main-ldp}, and the contraction principle one can establish that
$\frac{C^n}{n}$ satisfies a large deviation principle in $\mathbb{R}_+$ with rate function $\hat{I}$ defined by
\begin{equation*}
\hat{I}(x) = \lim_{T \to \infty} \inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{C}_T : \zeta_0(T)-\psi(T) = 2x} I_T(\boldsymbol{\zeta},\psi).
\end{equation*}
The rate function $\hat{I}(x)$ has the following alternative representation.
\begin{equation*}
\Scale[0.9]{\hat{I}(x) = \inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{C}_N : \zeta_0(N)-\psi(N) = 2x, \boldsymbol{\zeta}(N)={\boldsymbol{0}}} \int_0^N \left[ r_0(\boldsymbol{\zeta}(t)) \ell\left( -\frac{\psi'(t)+\sum_{k=1}^\infty (k-2) \zeta'_k(t)}{2r_0(\boldsymbol{\zeta}(t))} \right)
+ \sum_{k=1}^\infty r_k(\boldsymbol{\zeta}(t)) \ell\left(-\frac{\zeta'_k(t)}{r_k(\boldsymbol{\zeta}(t))}\right) \right] dt.}
\end{equation*}
\end{Remark} \subsubsection{Law of large numbers limits}
\label{sec:LLN} The LDP in Theorem \ref{thm:main-ldp} can be used to identify the LLN limit $(\boldsymbol{\zeta} ,\psi )$ of the exploration process $(\boldsymbol{X}^{n},Y^{n})$, which corresponds to the \textit{unique} pair satisfying $I_{T}(\boldsymbol{\zeta} ,\psi )=0$. In particular we recover well known results for the asymptotics of the largest component in the configuration model \cite{MolloyReed1998size,Janson2009new}. We assume the following strengthened version of Assumption \ref{asp:exponential-boundN}.
\begin{Assumption} \label{asp:exponential-boundN-S} $\sup_{n \in \mathbb{N}} \sum_{k=1}^{\infty} \frac{n_k}{n}k^{2} < \infty$.
\end{Assumption} \begin{Remark}
Under our standing assumptions, namely Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN}, one can show by following the arguments in Section \ref{sec:repnWCCP} that $\{(\boldsymbol{X}^n, Y^n)\}_{n \in \mathbb{N}}$ is tight and any weak limit point $(\boldsymbol{\zeta} ,\psi )$ of this sequence is in $\mathcal{C}_{T}$ and satisfies \eqref{eq:psi} and \eqref{eq:phi_k} with $\varphi_k =1$ for $k \in \mathbb{N}_0$. However it seems hard to argue uniqueness of solutions to this limiting system of equations without additional conditions. Instead we show that if Assumption \ref{asp:exponential-boundN} is replaced with the stronger condition in Assumption \ref{asp:exponential-boundN-S}, then there is an explicit trajectory $(\boldsymbol{\zeta} ,\psi )$ for which the rate function vanishes and in fact it is the unique such trajectory. This is the content of Theorem \ref{thm:LLN} and Proposition \ref{prop:uniqueness_LLN}. From these results the LLN follows immediately. Whether the LLN holds under the weaker Assumption \ref{asp:exponential-boundN} is an open problem. \end{Remark}
Recall $\mu \doteq \sum_{k=1}^{\infty }kp_{k}$ and note that $\mu <\infty $.
Define, for $z\in \lbrack 0,1]$, \begin{equation*} G_{0}(z)\doteq \sum_{k=1}^{\infty }p_{k}z^{k}\;\;\mbox{ and } \;\;G_{1}(z)\doteq \sum_{k=1}^{\infty }\frac{kp_{k}}{\mu }z^{k-1}. \end{equation*} Define $F_{s}(t)\doteq G_{0}(s)-G_{0}(st)$ for $s\in (0,1]$ and $t\in \lbrack 0,1]$. Then $F_{s}\colon \lbrack 0,1]\rightarrow \lbrack 0,G_{0}(s)]$ is strictly decreasing and continuous. Let $F_{s}^{-1}(\cdot )$ denote the inverse of $F_{s}$. Define \begin{equation*} f_{s}(t)\doteq \left\{ \begin{array}{ll} F_{s}^{-1}(t) & \mbox{ when }0\leq t\leq G_{0}(s), \\ 0 & \mbox{ when }t>G_{0}(s). \end{array} \right. \end{equation*} Then $f_{s}(t)$ is strictly decreasing until it hits zero. Note that in particular, $f_{1}(t)=F_{1}^{-1}(t){{1}}_{[0,1]}(t)$. Define $ f_{0}(t)\doteq 0$ for $t\geq 0$.
Fix $T\geq \frac{\mu }{2}$. The following theorem together with Proposition \ref{prop:uniqueness_LLN} characterizes the unique $(\boldsymbol{\zeta} ,\psi )\in \mathcal{C}_{T}$ that minimizes the rate function $I_{T}(\boldsymbol{\zeta} ,\psi )$. Letting \begin{equation*} \nu \doteq \frac{\sum_{k=1}^{\infty }k(k-1)p_{k}}{\sum_{k=1}^{\infty }kp_{k}} , \end{equation*} part 1 of the theorem considers the subcritical and critical cases $\nu \leq 1$, where the size of the largest component is $o(n)$, while part 2 considers the supercritical case $\nu >1$, where the size of the largest component is $O(n)$. Proofs of Theorem \ref{thm:LLN} and Proposition \ref{prop:uniqueness_LLN} are provided in Section \ref{sec:examples}.
\begin{Theorem} \phantomsection \label{thm:LLN}
Suppose that Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN-S} hold.
\begin{enumerate}[\upshape(1)]
\item Suppose $\sum_{k=1}^\infty k(k-2)p_k \le 0$. Define $\boldsymbol{\zeta}(t) = (\zeta_k(t))_{k \in \mathbb{N}_0}$ and $\psi(t)$ by \begin{align*} \zeta_0(t) & \doteq 0, \zeta_k(t) \doteq p_k(f_1(t))^k, k \in \mathbb{N}, \\ \psi(t) & \doteq -2 \int_0^t r_0(\boldsymbol{\zeta}(s))\,ds + \sum_{k=1}^\infty (k-2) (p_k - \zeta_k(t)). \end{align*} Then $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$ and $I_T(\boldsymbol{\zeta} ,\psi )=0$.
\item Suppose $\sum_{k=1}^\infty k(k-2)p_k > 0$. If $p_1 > 0$, then there exists a unique $\rho \in (0,1)$ such that $G_1(\rho) = \rho$. If $p_1 = 0$, $G_1(\rho) = \rho$ with $\rho\doteq 0$. Define $\tau = \frac{\mu}{2} (1-\rho^2)>0$ and define $\boldsymbol{\zeta}(t) = (\zeta_k(t))_{k \in \mathbb{N}_0}$ and $ \psi(t)$ by \begin{align*} \zeta_0(t) & \doteq \left[ \mu-2t -\mu \sqrt{1-2t/\mu}G_1 ( \sqrt{1-2t/\mu}) \right] {{1}}_{[0,\tau]}(t), \\ \zeta_k(t) & \doteq \left\{ \begin{array}{ll} p_k(1-{2t}/{\mu})^{k/2} & \mbox{ when } 0 \le t \le \tau, \\ p_k \rho^k (f_\rho(t - \tau))^k & \mbox{ when } t > \tau, \end{array} \right. k \in \mathbb{N}, \\ \psi(t) & \doteq -2 \int_0^t r_0(\boldsymbol{\zeta}(s))\,ds + \sum_{k=1}^\infty (k-2) (p_k - \zeta_k(t)). \end{align*} Then $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$ and $I_T(\boldsymbol{\zeta} ,\psi )=0$. \end{enumerate} \end{Theorem}
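For a concrete instance of part 2, the quantities $\rho$ and $\tau$ (and the limiting fraction $1-G_{0}(\rho )$ of vertices in the largest component, cf.\ \cite{MolloyReed1998size,Janson2009new}) can be computed numerically. The sketch below, for an illustrative finitely supported ${\boldsymbol{p}}$ of our own choosing, locates $\rho$ by bisection on $G_{1}(z)-z$.
\begin{verbatim}
import math

p = {1: 0.2, 3: 0.5, 4: 0.3}        # an illustrative supercritical degree distribution
mu = sum(k * pk for k, pk in p.items())

def G0(z):
    return sum(pk * z ** k for k, pk in p.items())

def G1(z):
    return sum(k * pk * z ** (k - 1) for k, pk in p.items()) / mu

def solve_rho(tol=1e-12):
    """Unique solution of G1(rho) = rho in (0, 1) when p_1 > 0; 0 otherwise."""
    if p.get(1, 0.0) == 0.0:
        return 0.0
    lo, hi = 0.0, 1.0 - 1e-9        # G1(0) > 0 and G1(z) < z just below 1 (supercritical)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if G1(mid) - mid > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

rho = solve_rho()
tau = 0.5 * mu * (1.0 - rho ** 2)   # the time at which zeta_0 returns to 0
print("rho =", rho, "tau =", tau, "largest component fraction ~", 1.0 - G0(rho))
\end{verbatim}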
The following proposition says that there is a unique $(\boldsymbol{\zeta} ,\psi )$ satisfying $I_T(\boldsymbol{\zeta} ,\psi ) = 0$, so that this pair is the law of large numbers limit.
\begin{Proposition} \label{prop:uniqueness_LLN} Suppose Assumptions \ref{asp:convgN} and \ref {asp:exponential-boundN-S} hold. Then the pair $(\boldsymbol{\zeta} ,\psi )$ defined in Theorem \ref{thm:LLN} is the unique element of $\mathcal{D}_\infty\times \mathcal{D}$ such that $I_T(\boldsymbol{\zeta} ,\psi ) = 0$. \end{Proposition}
\section{Representation and Weak Convergence of Controlled Processes}
\label{sec:repnWCCP} We will use the following useful representation formula proved in \cite{BudhirajaDupuisMaroulas2011variational}. For the second equality in the theorem see the proof of Theorem $2.4$ in \cite {BudhirajaChenDupuis2013large}. The representation in the cited papers is given in terms of a single Poisson random measure with points in a locally compact Polish space. However for the current work it is convenient to formulate the representation in terms of a countable sequence of independent Poisson random measures on $[0,T]\times \lbrack 0,1]$. This representation is immediate from the results in \cite {BudhirajaDupuisMaroulas2011variational} and \cite {BudhirajaChenDupuis2013large} by viewing the countable sequence of Poisson random measures with points in $[0,T]\times \lbrack 0,1]$ and intensity the Lebesgue measure $\lambda _{T}$ on $[0,T]\times \lbrack 0,1]$ as a single PRM with points in the augmented space $[0,T]\times \lbrack 0,1]\times \mathbb{N}_{0}$ with intensity $\lambda _{T}\otimes \varrho $, where $ \varrho $ is the counting measure on $\mathbb{N}$. Recall that $\bar{ \mathcal{A}}_{+}$ denotes the class of $(\mathcal{\bar{P}}\times \mathcal{B} ([0,1]))/\mathcal{B}(\mathbb{R}_{+})$-measurable maps from $\Omega \times \lbrack 0,T]\times \lbrack 0,1]$ to $\mathbb{R}_{+}$. For each $m\in \mathbb{ N}$ let \begin{align*} & \bar{\mathcal{A}}_{b,m}\doteq \{(\varphi _{k})_{k\in \mathbb{N} _{0}}:\varphi _{k}\in \bar{\mathcal{A}}_{+}\mbox{ for each }k\in \mathbb{N} _{0}\mbox{ such that for all }(\omega ,t,y)\in \Omega \times \lbrack 0,T]\times \lbrack 0,1], \\ & \quad \qquad \qquad \qquad \qquad {1}/{m}\leq \varphi _{k}(\omega ,t,y)\leq m\mbox{ for }k\leq m\mbox{ and }\varphi _{k}(\omega ,t,y)=1\mbox{ for }k>m\} \end{align*} and let $\bar{\mathcal{A}}_{b}\doteq \cup _{m=1}^{\infty }\bar{\mathcal{A}} _{b,m}$. Recall the function $\ell $ defined in \eqref{eq:ell}.
\begin{Theorem} \label{thm:var_repn} Let $F \in \mathbb{M}_b([\mathcal{M}_{FC}([0,T]\times[ 0,1])]^\infty)$. Then for $\theta > 0$, \begin{align*} -\log {{E}} e^{-F((N_k^\theta)_{k \in \mathbb{N}_0})} & = \inf_{\varphi_k\in \bar{\mathcal{A}}_+, k \in \mathbb{N}_0} {{E}} \left[ \theta \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y))\,ds\,dy + F((N_k^{\theta\varphi_k})_{k \in \mathbb{N} _0}) \right] \\ & = \inf_{\boldsymbol{\varphi} = (\varphi_k)_{k \in \mathbb{N}_0} \in \bar{\mathcal{A}} _b} {{E}} \left[ \theta \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y))\,ds\,dy + F((N_k^{\theta\varphi_k})_{k \in \mathbb{N} _0}) \right]. \end{align*} \end{Theorem}
Fix $h\in \mathbb{C}_{b}(\mathcal{D}_{\infty }\times \mathcal{D})$. Since $ (\boldsymbol{X}^{n},Y^{n})$ can be written as $\Psi ((N_{k}^{n})_{k\in \mathbb{N}_{0}})$ for some measurable function $\Psi $ from $[\mathcal{M}_{FC}([0,T]\times \lbrack 0,1])]^{\infty }$ to $\mathcal{D}_{\infty }\times \mathcal{D}$, we have from the second equality in Theorem \ref{thm:var_repn} that with $ (\theta ,F)=(n,nh\circ \Psi )$, \begin{equation} -\frac{1}{n}\log {{E}}e^{-nh(\boldsymbol{X}^{n},Y^{n})}=\inf_{\boldsymbol{\varphi} ^{n}=(\varphi _{k}^{n})_{k\in \mathbb{N}_{0}}\in \bar{\mathcal{A}}_{b}}{ {E}}\left\{ \sum_{k=0}^{\infty }\int_{[0,T]\times \lbrack 0,1]}\ell (\varphi _{k}^{n}(s,y))\,ds\,dy+h(\bar{\boldsymbol{X}}^{n},{\bar{Y}} ^{n})\right\} . \label{eq:mainrepn17} \end{equation} Here $(\bar{\boldsymbol{X}}^{n},{\bar{Y}}^{n})=\Psi ((N_{k}^{n\varphi _{k}^{n}})_{k\in \mathbb{N}_{0}})$, which solves the controlled analogue of \eqref{eq:Y_n}--\eqref{eq:X_n_k}, namely $\bar{\boldsymbol{X}} ^{n}(0)\doteq \frac{1}{n}(-1,n_{1},n_{2},\dotsc )$, and for $ t\in \lbrack 0,T]$,
\begin{align} {\bar{Y}}^{n}(t)& ={\bar{X}}_{0}^{n}(0)+\sum_{k=0}^{\infty }\frac{k-2}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy) \label{eq:Ybar_n_upper_temp} \\ {\bar{X}}_{0}^{n}(t)& ={\bar{Y}}^{n}(t) +\frac{2}{n}\sum_{k=0}^{\infty }\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{\{{\bar{X}}_{0}^{n}(s-)<0\}}{{1}} _{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy), \label{eq:Xbar_n_0_upper_temp} \\ {\bar{X}}_{k}^{n}(t)& ={\bar{X}}_{k}^{n}(0)-\frac{1 }{n}\int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\bar{\boldsymbol{X}} ^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy),\; k\in \mathbb{ N}. \label{eq:Xbar_n_k_upper_temp} \end{align} There is a bar in the notation $\bar{\boldsymbol{X}}^n, \bar{Y}^n$ (and $\bar{\nu}^n$ defined in \eqref{eq:nu_n_upper} below) to indicate that these are `controlled' processes, given in terms of the control sequence $\boldsymbol{\varphi}^{n}\doteq (\varphi _{k}^{n})_{k\in \mathbb{N}_{0}}$. We will occasionally suppress the dependence on $\boldsymbol{\varphi}^{n}$ in the notation and will make this dependence explicit if there are multiple controls (e.g.\ as in Section \ref{sec:upper})
In the proof of both the upper and lower bound we will show it is sufficient to consider a sequence $\{\varphi _{k}^{n}\in \bar{\mathcal{A}}_{+},k\in \mathbb{N}_{0}\}$ that satisfies the following uniform bound for some $ M_{0}<\infty$: \begin{equation} \sup_{n\in \mathbb{N}}\sum_{k=0}^{\infty }\int_{[0,T]\times \lbrack 0,1]}\ell (\varphi _{k}^{n}(s,y))\,ds\,dy\leq M_{0},\mbox{ a.s. }{ P}. \label{eq:cost_bd_upper} \end{equation} In the rest of this section we study tightness and convergence properties of controlled processes $(\bar{\boldsymbol{X}}^{n},{\bar{Y}}^{n})$ that are driven by controls $\{\varphi_k^n\}$ that satisfy the above a.s. bound.
From \eqref{eq:Ybar_n_upper_temp}--\eqref{eq:Xbar_n_k_upper_temp} we can rewrite \begin{align} {\bar{Y}}^{n}(t)& ={\bar{X}}_{0}^{n}(0)+\sum_{k=0}^{\infty }(k-2){\bar{B}} _{k}^{n}(t), \label{eq:Ybar_n_upper} \\ {\bar{X}}_{0}^{n}(t)& ={\bar{Y}}^{n}(t)+{\bar{\eta}}^{n}(t), \label{eq:Xbar_n_0_upper} \\ {\bar{X}}_{k}^{n}(t)& ={\bar{X}}_{k}^{n}(0)-{\bar{B}}_{k}^{n}(t),k\in \mathbb{N}, \label{eq:Xbar_n_k_upper} \end{align} where \begin{align} {\bar{B}}_{k}^{n}(t)& \doteq \frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy),k\in \mathbb{N}_{0}, \label{eq:Bbar_n_k_upper} \\ {\bar{\eta}}^{n}(t)& \doteq \sum_{k=0}^{\infty }\frac{2}{n}\int_{[0,t]\times \lbrack 0,1]}{{1}}_{\{{\bar{X}}_{0}^{n}(s-)<0\}}{{1}} _{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy) \notag \\ & =\sum_{k=1}^{\infty }\frac{2}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{\{{\bar{X}}_{0}^{n}(s-)<0\}}{{1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,N_{k}^{n\varphi _{k}^{n}}(ds\,dy). \label{eq:etabar_n_upper} \end{align} Here the last line follows on observing that ${{1}}_{\{{\bar{X}}_{0}^{n}(s-)<0\}}{{1}}_{[0,r_{0}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\equiv 0$.
Since $m_1 \doteq \sup_{n \in \mathbb{N}} \sum_{k=1}^\infty k \frac{n_k}{n} < \infty$ by Assumption \ref{asp:exponential-boundN}, \textcolor{black}{using \eqref{eq:r_k}} we have $-\frac{1}{n} \le {\bar{X}}^n_0(t) \le m_1$, $0 \le r(\bar{\boldsymbol{X}}^n(t)) \le m_1$ and $0 \le {\bar{X}}^n_k(t) \le \frac{n_k}{n}$ for $t \in [0,T]$. \textcolor{black}{In particular, the nonnegativity of $\bar{X}_{k}^{n}(t)$ is an immediate consequence of the evolution equation \eqref{eq:Xbar_n_k_upper_temp} on observing that $r_k(\bar{\boldsymbol{X}}^{n}(s-))=0$ if $\bar{X}_{k}^{n}(s-)=0$ and that the jumps of $\bar{X}_{k}^{n}$ are of size $1/n$.} Also note that both $r(\bar{\boldsymbol{X}}^n(\cdot))$ and ${\bar{X}}^n_k(\cdot)$ for $k \in \mathbb{N}$ are non-increasing.
The following lemma summarizes some elementary properties of $\ell$. For part (a) we refer to \cite[Lemma 3.1]{BudhirajaDupuisGanguly2015moderate}, and part (b) is an easy calculation that is omitted.
\begin{Lemma} \phantomsection
\label{lem:property_ell}
\begin{enumerate}[\upshape(a)]
\item For each $\beta > 1$, there exists $\gamma(\beta) \in (0,\infty)$ such that $\gamma(\beta) \to 0$ as $\beta \to \infty$ and $x \le \gamma(\beta) \ell(x)$ for all $x \ge \beta$.
\item For $x \ge 0$, $x \le \ell(x) + 2$. \end{enumerate} \end{Lemma}
The next lemma gives some uniform integrability properties for the control sequence $\boldsymbol{\varphi}^n$.
\begin{Lemma} \label{lem:UI_upper}
For $K\in \mathbb{N}$ define \begin{equation} {\bar{U}}_{K}\doteq \sup_{n\in \mathbb{N}}{{E}}\left\{ \sum_{k=K}^{\infty }\int_{[0,T]\times \lbrack 0,1]}k\varphi _{k}^{n}(s,y){ {1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s)))}(y)\,ds\,dy\right\} . \label{eq:Ubar_K} \end{equation}
Then ${\bar{U}}_{K}<\infty $ for each $K\in \mathbb{N}$ and $ \lim_{K\rightarrow \infty }{\bar{U}}_{K}=0$. \end{Lemma}
\begin{proof}
From \eqref{eq:Bbar_n_k_upper} and \eqref{eq:Xbar_n_k_upper} it follows that
\begin{equation*}
{\bar{U}}_K = \sup_{n \in \mathbb{N}} {{E}} \left\{ \sum_{k=K}^\infty k {\bar{B}}^n_k(T) \right\} = \sup_{n \in \mathbb{N}} {{E}} \left\{ \sum_{k=K}^\infty k \left[ {\bar{X}}^n_k(0) - {\bar{X}}^n_k(T) \right] \right\} \le \sup_{n \in \mathbb{N}} \sum_{k=K}^\infty \frac{kn_k}{n}.
\end{equation*}
Recalling $\varepsilon_{\boldsymbol{p}} \in (0,\infty)$ introduced in Assumption \ref{asp:exponential-boundN}, we have
\begin{equation*}
\sup_{n \in \mathbb{N}} \sum_{k=K}^\infty \frac{kn_k}{n} \le
K^{-\varepsilon_{\boldsymbol{p}}} \sup_{n \in \mathbb{N}} \sum_{k=1}^\infty \frac{n_k}{n} k^{1+ \varepsilon_{\boldsymbol{p}}} \to 0
\end{equation*}
as $K \to \infty$.
The result follows.
\end{proof}
The following lemma proves some key tightness properties. Write $\bar{\boldsymbol{B}}^n \doteq \{{\bar{B}}^n_k\}_{k \in \mathbb{N}_0}$. Define $\bar{\boldsymbol{\nu}}^{n}\doteq \{\bar{\nu}_{k}^{n}\}_{k\in \mathbb{N}_{0}}$, where for $k\in \mathbb{N}_{0}$, \begin{equation} \bar{\nu} _{k}^{n}([0,t]\times A)\doteq \int_{\lbrack 0,t]\times A}\varphi _{k}^{n}(s,y)\,ds\,dy,\quad t\in \lbrack 0,T],A\in \mathcal{B} ([0,1]). \label{eq:nu_n_upper} \end{equation}
\begin{Lemma} \label{lem:tightness} Suppose that the bound in \eqref{eq:cost_bd_upper} is satisfied. Then the sequence of random variables $\{(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^n, {\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n)\}$ is tight in $[ \mathcal{M}([0,T]\times[0,1])]^\infty \times \mathcal{D}_\infty \times \mathcal{D} \times \mathcal{D}_\infty \times \mathcal{D}$. \end{Lemma}
\begin{proof}
We will argue the tightness of $\{\bar{\boldsymbol{\nu}}^{n}\}$ in $[\mathcal{M}([0,T]\times[0,1])]^\infty$ and the $\mathcal{C}$-tightness of $\{{\bar{\boldsymbol{X}}}^n\}$, $\{{\bar{Y}}^n\}$, $\{{\bar{\boldsymbol{B}}}^n\}$, and
$\{{\bar{\eta}}^n\}$ in $\mathcal{D}_\infty$, $\mathcal{D}$, $\mathcal{D}_\infty$, and $\mathcal{D}$ respectively.
This will complete the proof.
Consider first $\{\bar{\boldsymbol{\nu}}^{n}\}$.
Note that $[0,T] \times [0,1]$ is a compact metric space.
Also from Lemma \ref{lem:property_ell}(b) and \eqref{eq:cost_bd_upper} we have a.s.\ for each $k \in \mathbb{N}_0$,
\begin{equation*}
\bar{\nu}^{n}_k([0,T]\times[0,1]) = \int_{[0,T] \times [0,1]} \varphi_k^n(s,y) \,ds\,dy \le \int_{[0,T] \times [0,1]} \left( \ell(\varphi_k^n(s,y))+2 \right) ds\,dy \le M_0 + 2T.
\end{equation*}
Hence $\{\bar{\nu}^{n}_k\}$ is tight in $\mathcal{M}([0,T]\times[0,1])$.
Next, since ${\bar{X}}^n_k(0) \in [0,1]$ for $k \in \mathbb{N}$ a.s., we see from \eqref{eq:Xbar_n_0_upper} and \eqref{eq:Xbar_n_k_upper} that $\mathcal{C}$-tightness of $\{{\bar{\boldsymbol{X}}}^n\}$ in $\mathcal{D}_\infty$ follows once we show $\mathcal{C}$-tightness of $\{{\bar{Y}}^n\}$, $\{{\bar{\boldsymbol{B}}}^n\}$ and $\{{\bar{\eta}}^n\}$.
We now show that $\{({\bar{Y}}^n(t), {\bar{\boldsymbol{B}}}^n(t), {\bar{\eta}}^n(t))\}$ is tight for each $t$.
From \eqref{eq:Ybar_n_upper}, \eqref{eq:Bbar_n_k_upper} and \eqref{eq:etabar_n_upper},
\begin{align*}
& {{E}} \left[ |{\bar{Y}}^n(t)| + \sum_{k=0}^\infty |{\bar{B}}^n_k(t)| + |{\bar{\eta}}^n(t)| \right] \\
& \le \frac{1}{n}+ \sum_{k=0}^\infty [|k-2|+1] {{E}}|{\bar{B}}^n_k(t)| + {{E}}|{\bar{\eta}}^n(t)| \\
& \le \frac{1}{n}+{{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} [|k-2| + 1 + 2 \cdot {{1}}_{\{k\ge 1\}}] \varphi^n_k(s,y) {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) \,ds\,dy \\
& \le \frac{1}{n}+ 3 {{E}} \int_{[0,T] \times [0,1]} \varphi^n_0(s,y) \,ds\,dy + 4{\bar{U}}_1,
\end{align*}
where the last line uses the definition of ${\bar{U}}_1$ in \eqref{eq:Ubar_K}.
From Lemma \ref{lem:property_ell}(b) and \eqref{eq:cost_bd_upper}, we have
\begin{equation*}
{{E}} \int_{[0,T] \times [0,1]} \varphi^n_0(s,y) \,ds\,dy \le {{E}} \int_{[0,T] \times [0,1]} \left[ \ell(\varphi^n_0(s,y))+2 \right] ds\,dy \le M_0 + 2T.
\end{equation*}
Therefore $\sup_{n \in \mathbb{N}} {{E}} \left[ |{\bar{Y}}^n(t)| + \sum_{k=0}^\infty |{\bar{B}}^n_k(t)| + |{\bar{\eta}}^n(t)| \right] < \infty$ and we have tightness of $\{({\bar{Y}}^n(t), \bar{\boldsymbol{B}}^n(t), {\bar{\eta}}^n(t))\}$ in $\mathbb{R} \times \mathbb{R}^\infty \times \mathbb{R}$ for each $t \in [0,T]$.
We now consider fluctuations of $({\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n)$.
Recall the filtration $\{\mathcal{F}_t\}_{0 \le t \le T}$.
For $\delta \in [0,T]$, let $\mathcal{T}^\delta$ be the collection of all $[0,T-\delta]$-valued stopping times $\tau$.
Note that for $\tau \in \mathcal{T}^\delta$,
\begin{equation*}
{{E}} |{\bar{Y}}^n(\tau+\delta) - {\bar{Y}}^n(\tau)| \le {{E}} \left[ \sum_{k=0}^\infty (k+2) \left| {\bar{B}}^n_k(\tau+\delta) - {\bar{B}}^n_k(\tau) \right| \right].
\end{equation*}
Thus in order to argue tightness of $\{({\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n)\}$, by the Aldous--Kurtz tightness criterion (cf.\ \cite[Theorem 2.7]{Kurtz1981approximation}) it suffices to show that
\begin{equation}
\label{eq:tightness_upper}
\limsup_{\delta \to 0} \limsup_{n \to \infty} \sup_{\tau \in \mathcal{T}^\delta} {{E}} \left[ \sum_{k=0}^\infty (k+2) \left| {\bar{B}}^n_k(\tau+\delta) - {\bar{B}}^n_k(\tau) \right| + \left| {\bar{\eta}}^n(\tau+\delta) - {\bar{\eta}}^n(\tau) \right| \right] = 0.
\end{equation}
From \eqref{eq:Bbar_n_k_upper} and \eqref{eq:etabar_n_upper} it follows that for every $K \in \mathbb{N}$ and $M \in (0,\infty)$,
\begin{align*}
& {{E}} \left[ \sum_{k=0}^\infty (k+2) \left| {\bar{B}}^n_k(\tau+\delta) - {\bar{B}}^n_k(\tau) \right| + \left| {\bar{\eta}}^n(\tau+\delta) - {\bar{\eta}}^n(\tau) \right| \right] \\
& \le {{E}} \sum_{k=0}^\infty \int_{(\tau,\tau+\delta] \times [0,1]} (k+4) \varphi^n_k(s,y) {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) \,ds\,dy \\
& \le {{E}} \sum_{k=0}^{K-1} \left[ \int_{(\tau,\tau+\delta] \times [0,1]} (k+4) \varphi^n_k(s,y) {{1}}_{\{ \varphi^n_k(s,y) > M\}} \,ds\,dy \right. \\
& \qquad \left. + \int_{(\tau,\tau+\delta] \times [0,1]} (k+4) \varphi^n_k(s,y) {{1}}_{\{ \varphi^n_k(s,y) \le M\}} \,ds\,dy \right] + 5 {\bar{U}}_K.
\end{align*}
Using Lemma \ref{lem:property_ell}(a) and \eqref{eq:cost_bd_upper}, we can bound the last display by
\begin{align*}
& {{E}} \sum_{k=0}^{K-1} \int_{(\tau,\tau+\delta] \times [0,1]} (K+3)\gamma(M) \ell(\varphi^n_k(s,y)) \,ds\,dy + K(K+3)M\delta + 5 {\bar{U}}_K \\
& \quad \le (K+3)\gamma(M) M_0 + K(K+3)M\delta + 5 {\bar{U}}_K.
\end{align*}
Therefore
\begin{align*}
& \limsup_{\delta \to 0} \limsup_{n \to \infty} \sup_{\tau \in \mathcal{T}^\delta} {{E}} \left[ \sum_{k=0}^\infty (k+2) \left| {\bar{B}}^n_k(\tau+\delta) - {\bar{B}}^n_k(\tau) \right| + \left| {\bar{\eta}}^n(\tau+\delta) - {\bar{\eta}}^n(\tau) \right| \right] \\
& \quad \le (K+3)\gamma(M) M_0 + 5 {\bar{U}}_K.
\end{align*}
Taking $M \to \infty$ and then $K \to \infty$, we have from Lemmas \ref{lem:property_ell}(a) and \ref{lem:UI_upper} that \eqref{eq:tightness_upper} holds.
Finally $\mathcal{C}$-tightness is immediate from the following a.s.\ bounds, Assumption \ref{asp:exponential-boundN}, \textcolor{black}{and \cite[Theorem 13.4]{Billingsley1999}}: for any $k \in \mathbb{N}_0$, $K \in \mathbb{N}$ and $t \in (0,T]$,
\begin{align*}
|{\bar{B}}^n_k(t)- {\bar{B}}^n_k(t-)| \le \frac{1}{n}, \;
|{\bar{\eta}}^n(t)- {\bar{\eta}}^n(t-)| \le \frac{2}{n}, \;
|{\bar{Y}}^n(t)- {\bar{Y}}^n(t-)| \le \frac{K}{n} + \sum_{j=K+1}^\infty \frac{jn_j}{n}.
\end{align*}
This completes the proof. \end{proof}
Next we will characterize weak limit points of $\{(\bar{\boldsymbol{\nu}} ^{n},\bar{\boldsymbol{X}}^{n},{\bar{Y}}^{n},\bar{\boldsymbol{B}}^{n},{\bar{\eta}}^{n})\}$. For that, we need the following notation. For $k\in \mathbb{N}_{0}$ define the compensated process \begin{equation*} {\tilde{N}}_{k}^{n\varphi _{k}^{n}}(ds\,dy)\doteq N_{k}^{n\varphi _{k}^{n}}(ds\,dy)-n\varphi _{k}^{n}(s,y)\,ds\,dy. \end{equation*} Then ${\tilde{N}}_{k}^{n\varphi _{k}^{n}}([0,t]\times A)$ is an $\{\mathcal{F }_{t}\}$-martingale for $A\in \mathcal{B}([0,1])$ and $k\in \mathbb{N}_{0}$. Let \begin{equation} {\bar{B}}_{k}^{n}(t)={\tilde{B}}_{k}^{n}(t)+{\hat{B}}_{k}^{n}(t), \; t \in [0,T], \; k \in \mathbb{N}_{0}. \label{eq:Bbar_n_k_upper_decomp} \end{equation}
where \begin{equation*} {\tilde{B}}_{k}^{n}(t)\doteq \frac{1}{n}\int_{[0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\,{\tilde{N}}_{k}^{n\varphi _{k}^{n}}(ds\,dy) \end{equation*} is an $\{\mathcal{F}_{t}\}$-martingale and \begin{equation*} {\hat{B}}_{k}^{n}(t)\doteq \int_{\lbrack 0,t]\times \lbrack 0,1]}{ {1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s)))}(y)\varphi _{k}^{n}(s,y)\,ds\,dy. \end{equation*} Write $\tilde{\boldsymbol{B}}^n \doteq ({\tilde{B}}^n_k)_{k \in \mathbb{N}_0}$ and $\hat{\boldsymbol{B}}^n \doteq ({\hat{B}}^n_k)_{k \in \mathbb{N}_0}$. Let $\lambda _{t}$ be Lebesgue measure on $ [0,t]\times \lbrack 0,1]$.
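For later use we record that, since ${\tilde{N}}_{k}^{n\varphi _{k}^{n}}$ is the compensated measure associated with the intensity $n\varphi _{k}^{n}(s,y)\,ds\,dy$, the martingale ${\tilde{B}}_{k}^{n}$ has predictable quadratic variation \begin{equation*} \langle {\tilde{B}}_{k}^{n} \rangle(t) = \frac{1}{n^{2}}\int_{[0,t]\times \lbrack 0,1]} {{1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s-)))}(y)\, n\varphi _{k}^{n}(s,y)\,ds\,dy = \frac{1}{n}\int_{[0,t]\times \lbrack 0,1]} {{1}}_{[0,r_{k}(\bar{\boldsymbol{X}}^{n}(s)))}(y)\, \varphi _{k}^{n}(s,y)\,ds\,dy, \end{equation*} where replacing $s-$ by $s$ is immaterial since the paths of $\bar{\boldsymbol{X}}^{n}$ have at most countably many jumps. This is the source of the factor $4/n$ in the Doob estimate in the proof of Lemma \ref{lem:char_limit} below.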
We have the following characterization of the weak limit points. Recall $\mathcal{S}_{T}(\boldsymbol{\zeta} ,\psi )$ defined in \eqref{eq:psi} and \eqref{eq:phi_k}.
\begin{Lemma} \label{lem:char_limit} Suppose Assumptions \ref{asp:convgN} and \ref {asp:exponential-boundN} hold. Also assume that the bound \eqref{eq:cost_bd_upper} is satisfied and suppose that $(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^n, {\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n)$ converges along a subsequence, in distribution, to $(\bar{\boldsymbol{\nu}}, \bar{\boldsymbol{X}}, {\bar{Y}}, \bar{\boldsymbol{B}}, { \bar{\eta}}) \in [\mathcal{M}([0,T]\times[0,1])]^\infty \times \mathcal{D} _\infty \times \mathcal{D} \times \mathcal{D}_\infty \times \mathcal{D}$ given on some probability space $(\Omega^*,\mathcal{F}^*,{P}^*)$ . Then the following holds ${P}^*$-a.s.
\begin{enumerate}[\upshape(a)]
\item For each $k \in \mathbb{N}_0$, $\bar{\nu}_k \ll \lambda_T$.
\item $(\bar{\boldsymbol{X}},{\bar{Y}},\bar{\boldsymbol{B}},{\bar{\eta}})\in \mathcal{C}_{\infty }\times \mathcal{C}\times \mathcal{C}_{\infty }\times \mathcal{C}$, and for $ t\in \lbrack 0,T]$ \begin{align} {\bar{X}}_{k}(t)& =p_{k}-{\bar{B}}_{k}(t)\geq 0,k\in \mathbb{N}, \label{eq:Xbar_k_upper} \\ {\bar{Y}}(t)& =\sum_{k=0}^{\infty }(k-2){\bar{B}}_{k}(t), \label{eq:Ybar_upper} \\ {\bar{X}}_{0}(t)& ={\bar{Y}}(t)+{\bar{\eta}}(t)\geq 0. \label{eq:Xbar_0_upper} \end{align}
\item For $k\in \mathbb{N}_{0}$ let $ \varphi _{k}(s,y)\doteq \frac{d\bar{\nu} _{k}}{d\lambda _{T}}(s,y),\;(s,y)\in \lbrack 0,T]\times \lbrack 0,1]. $ Then for $t\in \lbrack 0,T]$ and $k\in \mathbb{N}_{0}$ \begin{equation} \label{eq:Bbar_k_upper} {\bar{B}}_{k}(t)=\int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\bar{\boldsymbol{X}}(s)))}(y)\,\varphi _{k}(s,y)ds\,dy. \end{equation}
\item ${\bar{X}}_{0}=\Gamma ({\bar{Y}})$. In particular, $(\bar{\boldsymbol{X}},{\bar{Y} })\in \mathcal{C}_{T}$ and $\boldsymbol{\varphi} \in \mathcal{S}_{T}(\bar{\boldsymbol{X}},{\bar{Y}})$ . \end{enumerate} \end{Lemma}
\begin{proof}
Assume without loss of generality that $(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^n, {\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n) \Rightarrow (\bar{\boldsymbol{\nu}}, \bar{\boldsymbol{X}}, {\bar{Y}}, \bar{\boldsymbol{B}}, {\bar{\eta}})$ along the whole sequence as $n \to \infty$.
(a)
This is an immediate consequence of the bound in \eqref{eq:cost_bd_upper} and Lemma A.1 of \cite{BudhirajaChenDupuis2013large}.
(b)
The first statement is an immediate consequence of the $\mathcal{C}$-tightness argued in the proof of Lemma \ref{lem:tightness}.
Then using \eqref{eq:Xbar_n_k_upper}, Assumption \ref{asp:convgN} and the fact that ${\bar{X}}^n_k(t) \ge 0$ a.s., we have \eqref{eq:Xbar_k_upper}.
Next, note that
by Assumption \ref{asp:exponential-boundN}, as $K \to \infty$
\begin{equation}
\label{eq:UI_Xbar_n}
\sup_{n \in \mathbb{N}} \sup_{0 \le t \le T} \left| \sum_{k=K}^\infty (k-2){\bar{B}}^n_k(t) \right| \le \sup_{n \in \mathbb{N}} \sum_{k=K}^\infty \frac{kn_k}{n} \le K^{-\varepsilon_{\boldsymbol{p}}} \sup_{n \in \mathbb{N}} \sum_{k=K}^\infty \frac{n_k}{n} k^{1+\varepsilon_{\boldsymbol{p}}} \to 0,
\end{equation}
\textcolor{black}{where in obtaining the first inequality we have used the fact that
due to \eqref{eq:Xbar_n_k_upper} and the nonnegativity of $\bar{X}_{k}^{n}(t)$, $\bar B^n_k(t) \le \bar X^n_k(0)$.}
Hence $\sum_{k=0}^\infty (k-2) {\bar{B}}^n_k \Rightarrow \sum_{k=0}^\infty (k-2) {\bar{B}}_k \in \mathcal{C}$.
From this and \eqref{eq:Ybar_n_upper} we see that \eqref{eq:Ybar_upper} holds.
Next, since $({\bar{Y}}^n,{\bar{\eta}}^n) \Rightarrow ({\bar{Y}},{\bar{\eta}}) \in \mathcal{C}^2$ and ${\bar{X}}^n_0(t) \ge -\frac{1}{n}$ a.s., we have from \eqref{eq:Xbar_n_0_upper} that \eqref{eq:Xbar_0_upper} holds.
(c)
By Doob's inequality, as $n \to \infty$
\begin{align*}
{{E}} \sum_{k=0}^\infty \sup_{0 \le t \le T} |{\tilde{B}}^n_k(t)|^2
& \le \frac{4}{n} {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \varphi^n_k(s,y) {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) \,ds\,dy \\
& \le \frac{4}{n} {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \left[ \ell(\varphi^n_k(s,y)) + 2 \right] {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) ds\,dy\\
& \le \frac{4}{n} \left( M_0 + 2 T \right) \to 0,
\end{align*}
where the second inequality follows from Lemma \ref{lem:property_ell}(b) and the third inequality follows from \eqref{eq:cost_bd_upper}.
Therefore as $n \to \infty$
\begin{equation}
\label{eq:cvg_Atil_Btil_upper}
{\tilde{\boldsymbol{B}}}^n \Rightarrow \boldsymbol{0}.
\end{equation}
By appealing to the Skorokhod representation theorem, we can assume without loss of generality that $(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^n, {\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n, {\tilde{\boldsymbol{B}}}^n) \to (\bar{\boldsymbol{\nu}}, \bar{\boldsymbol{X}}, {\bar{Y}}, \bar{\boldsymbol{B}}, {\bar{\eta}}, \boldsymbol{0})$ a.s.\ on $(\Omega^*,\mathcal{F}^*,{P}^*)$, namely there exists some event $F \in \mathcal{F}^*$ such that ${P}^*(F^c) = 0$ and
\begin{equation*}
(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^n, {\bar{Y}}^n, \bar{\boldsymbol{B}}^n, {\bar{\eta}}^n, {\tilde{\boldsymbol{B}}}^n) \to (\bar{\boldsymbol{\nu}}, \bar{\boldsymbol{X}}, {\bar{Y}}, \bar{\boldsymbol{B}}, {\bar{\eta}}, \boldsymbol{0}) \mbox{ on } F.
\end{equation*}
Fix ${\bar{\omega}} \in F$. The rest of the argument will be made for such an ${\bar{\omega}}$ which will be suppressed from the notation.
From \eqref{eq:UI_Xbar_n} we have that as $n \to \infty$ $$r(\bar{\boldsymbol{X}}^n(t)) = ({\bar{X}}^n_0(t))^+ + \sum_{k=1}^\infty k {\bar{X}}^n_k(t) \to ({\bar{X}}_0(t))^+ + \sum_{k=1}^\infty k {\bar{X}}_k(t) = r(\bar{\boldsymbol{X}}(t))$$
uniformly in $t \in [0,T]$, and $r(\bar{\boldsymbol{X}}(\cdot))$ is continuous \textcolor{black}{and hence bounded}.
Let ${\bar{\tau}} \doteq \tau_{\bar{\boldsymbol{X}}}$, where $\tau_{\bar{\boldsymbol{X}}}$ is defined through \eqref{eq:defntauphi}, namely
${\bar{\tau}} = \inf \{ t \in [0,T] : r(\bar{\boldsymbol{X}}(t)) = 0\} \wedge T$.
We will argue that \eqref{eq:Bbar_k_upper} holds for all $t < {\bar{\tau}}$, $t = {\bar{\tau}}$ and $t >{\bar{\tau}}$.
For $t < {\bar{\tau}}$, we have $r(\bar{\boldsymbol{X}}(t)) > 0$.
Hence for each $k \in \mathbb{N}_0$,
\begin{equation}\label{eq:cgceofindic}
{{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) \to {{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y)
\end{equation}
as $n \to \infty$ for $\lambda_t$-a.e.\ $(s,y) \in [0,t] \times [0,1]$
since $\lambda_t\{(y,s): y = r_k(\bar{\boldsymbol{X}}(s))\}=0$.
From \eqref{eq:cgceofindic} and the uniform integrability of $(s,y) \mapsto ({{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(s)))}(y) -{{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y)) \varphi^n_k(s,y)$ (with respect to the normalized Lebesgue measure on
$[0,T]\times [0,1]$) which follows from \eqref{eq:cost_bd_upper} and the superlinearity of $\ell$, we have that
$${\hat{B}}^n_k(t) - \int_{[0,t] \times [0,1]} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y) \, \varphi_k^n(s,y) ds\,dy \to 0.$$
Also, since $\bar{\nu}^{n}_k \to \bar{\nu}_k$ on $F$ and, by part (a), $\bar{\nu}_k$ does not charge the $\lambda_T$-null discontinuity set of the (bounded) integrand, it follows that
$$\int_{[0,t] \times [0,1]} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y) \, \varphi_k^n(s,y) ds\,dy \to \int_{[0,t] \times [0,1]} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y) \, \varphi_k(s,y) ds\,dy.$$
Combining the two convergence statements we have
\begin{equation}
{\hat{B}}^n_k(t) \to \int_{[0,t] \times [0,1]} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}(s)))}(y) \, \varphi_k(s,y) ds\,dy.\label{eq:cgcebhatkt}
\end{equation}
The above convergence along with \eqref{eq:Bbar_n_k_upper_decomp} and \eqref{eq:cvg_Atil_Btil_upper} gives \eqref{eq:Bbar_k_upper} for $t < {\bar{\tau}}$. Since \eqref{eq:Bbar_k_upper} holds for $t < {\bar{\tau}}$, it also holds for $t = {\bar{\tau}}$ by continuity of $\bar{\boldsymbol{B}}$ and of the right side in \eqref{eq:Bbar_k_upper}.
Now suppose $T\ge t > {\bar{\tau}}$.
Since $r(\bar{\boldsymbol{X}}(\cdot))$ is continuous, we see from the definition of ${\bar{\tau}}$ that $r(\bar{\boldsymbol{X}}({\bar{\tau}})) = 0$.
Since each $r(\bar{\boldsymbol{X}}^n(\cdot))$ is non-negative and non-increasing, so is $r(\bar{\boldsymbol{X}}(\cdot))$.
Therefore $r(\bar{\boldsymbol{X}}(t)) = 0$ and $\bar{\boldsymbol{X}}(t) = \boldsymbol{0}$ for ${\bar{\tau}} \le t \le T$.
From this we see that the right hand side of \eqref{eq:Bbar_k_upper} remains constant for ${\bar{\tau}} \le t \le T$ and it suffices to show that $\bar{\boldsymbol{B}}(t) = \bar{\boldsymbol{B}}({\bar{\tau}})$ for ${\bar{\tau}} < t \le T$.
From \eqref{eq:Bbar_n_k_upper} it follows that, for $k \in \mathbb{N}$,
\begin{equation}
\label{eq:cvg_upper_Bbar_k}
\sup_{{\bar{\tau}} < t \le T} |{\bar{B}}^n_k(t) - {\bar{B}}^n_k({\bar{\tau}})| = {\bar{B}}^n_k(T) - {\bar{B}}^n_k({\bar{\tau}}) = {\bar{X}}^n_k({\bar{\tau}}) - {\bar{X}}^n_k(T) \le {\bar{X}}^n_k({\bar{\tau}}),
\end{equation}
which converges to ${\bar{X}}_k({\bar{\tau}}) = 0$ as $n \to \infty$.
Hence ${\bar{B}}_k(t) = {\bar{B}}_k({\bar{\tau}})$ for ${\bar{\tau}} < t \le T$ and this gives \eqref{eq:Bbar_k_upper} for each $k \in \mathbb{N}$.
Next we show ${\bar{B}}_0(t) = {\bar{B}}_0({\bar{\tau}})$ for ${\bar{\tau}} < t \le T$.
From \eqref{eq:Ybar_n_upper} and \eqref{eq:Xbar_n_0_upper},
\begin{align}
& \sup_{{\bar{\tau}} < t \le T} |{\bar{B}}^n_0(t) - {\bar{B}}^n_0({\bar{\tau}})| \notag \\
& \le \sup_{{\bar{\tau}} < t \le T} |{\bar{X}}^n_0(t) - {\bar{X}}^n_0({\bar{\tau}})| + \sup_{{\bar{\tau}} < t \le T} |{\bar{\eta}}^n(t) -{\bar{\eta}}^n({\bar{\tau}})| + \sum_{k=1}^\infty |k-2| \sup_{{\bar{\tau}} < t \le T} |{\bar{B}}^n_k(t) - {\bar{B}}^n_k({\bar{\tau}})|. \label{eq:cvg_upper_Bbar_0}
\end{align}
Since ${\bar{X}}^n_0(t) \ge -\frac{1}{n}$, we have
\begin{align*}
\sup_{{\bar{\tau}} < t \le T} |{\bar{X}}^n_0(t) - {\bar{X}}^n_0({\bar{\tau}})| & \le \sup_{{\bar{\tau}} < t \le T} |{\bar{X}}^n_0(t)| + |{\bar{X}}^n_0({\bar{\tau}})|
\le \sup_{{\bar{\tau}} < t \le T} ({\bar{X}}^n_0(t))^+ + \frac{1}{n} + ({\bar{X}}^n_0({\bar{\tau}}))^+ + \frac{1}{n} \\
& \le \sup_{{\bar{\tau}} < t \le T} r(\bar{\boldsymbol{X}}^n(t)) + r(\bar{\boldsymbol{X}}^n({\bar{\tau}})) + \frac{2}{n}
\le 2r(\bar{\boldsymbol{X}}^n({\bar{\tau}})) + \frac{2}{n},
\end{align*}
where the last line follows from the fact that $r(\bar{\boldsymbol{X}}^n(t))$ is non-increasing for $t \in [0,T]$.
From \eqref{eq:etabar_n_upper} and \eqref{eq:Bbar_n_k_upper} it follows that
\begin{align*}
& \sup_{{\bar{\tau}} < t \le T} |{\bar{\eta}}^n(t) -{\bar{\eta}}^n({\bar{\tau}})| \\
& = \sup_{{\bar{\tau}} < t \le T} 2 \sum_{k=1}^\infty \frac{1}{n} \int_{({\bar{\tau}},t] \times [0,1]} {{1}}_{\{{\bar{X}}^n_0(u-) < 0\}} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(u-)))}(y) \, N^{n\varphi^n_k}_k(du\,dy) \\
& \le \sup_{{\bar{\tau}} < t \le T} 2 \sum_{k=1}^\infty \frac{1}{n} \int_{({\bar{\tau}},t] \times [0,1]} {{1}}_{[0,r_k(\bar{\boldsymbol{X}}^n(u-)))}(y) \, N^{n\varphi^n_k}_k(du\,dy) \\
& = \sup_{{\bar{\tau}} < t \le T} 2 \sum_{k=1}^\infty |{\bar{B}}^n_k(t) - {\bar{B}}^n_k({\bar{\tau}})|.
\end{align*}
Combining above two estimates with \eqref{eq:cvg_upper_Bbar_0}, we see that as $n \to \infty$,
\begin{align}
\sup_{{\bar{\tau}} < t \le T} |{\bar{B}}^n_0(t) - {\bar{B}}^n_0({\bar{\tau}})| & \le 2r(\bar{\boldsymbol{X}}^n({\bar{\tau}})) + \frac{2}{n} + \sup_{{\bar{\tau}} < t \le T} \sum_{k=1}^\infty (k+4) |{\bar{B}}^n_k(t) - {\bar{B}}^n_k({\bar{\tau}})| \nonumber\\
& \le 2r(\bar{\boldsymbol{X}}^n({\bar{\tau}})) + \frac{2}{n} + \sum_{k=1}^\infty (k+4) {\bar{X}}^n_k({\bar{\tau}}) \le 7 r(\bar{\boldsymbol{X}}^n({\bar{\tau}})) + \frac{2}{n} \label{eq:middl}\\
& \to 7 r(\bar{\boldsymbol{X}}({\bar{\tau}})) = 0,\nonumber
\end{align}
where the second inequality follows from \eqref{eq:cvg_upper_Bbar_k}. Since we have proved \eqref{eq:Bbar_k_upper} for all $t < {\bar{\tau}}$, $t = {\bar{\tau}}$ and $t >{\bar{\tau}}$, part (c) follows.
(d)
From \eqref{eq:Xbar_0_upper} and a well-known characterization of the solution of the Skorokhod problem (see, e.g., \cite[Section 3.6.C]{KaratzasShreve1991brownian}), it suffices to show that ${\bar{\eta}}(0) = 0$, ${\bar{\eta}}(t) \ge 0$, ${\bar{\eta}}(t)$ is non-decreasing for $t \in [0,T]$ and $\int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) = 0$.
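For the reader's convenience we recall the facts behind this characterization, with $\Gamma$ the one-dimensional Skorokhod reflection map at the origin used here: for $\psi$ with $\psi(0) \ge 0$, \begin{equation*} \Gamma(\psi)(t) = \psi(t) + \sup_{0 \le s \le t} \big( -\psi(s) \big)^{+}, \qquad t \in [0,T], \end{equation*} and $\Gamma(\psi)$ is the unique non-negative path of the form $\psi + \eta$ with $\eta$ non-decreasing, $\eta(0)=0$, and $\eta$ increasing only at times when $\Gamma(\psi) = 0$ (see, e.g., \cite[Section 3.6.C]{KaratzasShreve1991brownian}). Note that ${\bar{Y}}(0) = 0$ by \eqref{eq:Ybar_upper}, so these facts apply with $\psi = {\bar{Y}}$.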
Since ${\bar{\eta}}^n(0) = 0$, ${\bar{\eta}}^n(t) \ge 0$ and ${\bar{\eta}}^n(t)$ is non-decreasing for $t \in [0,T]$, the same properties hold for ${\bar{\eta}}$.
It remains to show $\int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) = 0$.
Note that ${\bar{\eta}}^n(t)$ increases only when ${\bar{X}}^n_0(t-)<0$, namely ${\bar{X}}^n_0(t-) = - \frac{1}{n}$.
Therefore
\begin{equation*}
\int_0^T \left( {\bar{X}}^n_0(t-)+\frac{1}{n} \right) {\bar{\eta}}^n(dt) = 0.
\end{equation*}
From this we have
\begin{equation}\label{eq:cvg_etabar_upper}
\Scale[0.9]{\begin{aligned}
\left| \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) \right|
& = \left| \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) - \int_0^T \left( {\bar{X}}^n_0(t-)+\frac{1}{n} \right) {\bar{\eta}}^n(dt) \right| \\
& \le \left| \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) - \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}^n(dt) \right| + \int_0^T |{\bar{X}}_0(t) - {\bar{X}}^n_0(t-)| \, {\bar{\eta}}^n(dt) + \frac{{\bar{\eta}}^n(T)}{n}.
\end{aligned}}
\end{equation}
Since both ${\bar{\eta}}^n$ and ${\bar{\eta}}$ are non-decreasing, we see that ${\bar{\eta}}^n \to {\bar{\eta}}$ as finite measures on $[0,T]$.
Combining this with the fact that ${\bar{X}}_0 \in \mathbb{C}_b([0,T]:\mathbb{R})$, we get $$\left| \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) - \int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}^n(dt) \right| \to 0$$ as $n \to \infty$.
Also from continuity of ${\bar{X}}_0$, we have uniform convergence of ${\bar{X}}^n_0$ to ${\bar{X}}_0$ and hence
\begin{equation*}
\int_0^T |{\bar{X}}_0(t) - {\bar{X}}^n_0(t-)| \, {\bar{\eta}}^n(dt) + \frac{{\bar{\eta}}^n(T)}{n} \le \left( \sup_{0 \le t \le T} |{\bar{X}}^n_0(t-)-{\bar{X}}_0(t)| + \frac{1}{n} \right) {\bar{\eta}}^n(T) \to 0
\end{equation*}
as $n \to \infty$.
Combining these two convergence results with \eqref{eq:cvg_etabar_upper},
we see that
$
\int_0^T {\bar{X}}_0(t) \, {\bar{\eta}}(dt) = 0.
$
This proves part (d) and completes the proof. \end{proof}
\section{Laplace upper bound}
\label{sec:upper}
In this section we prove the Laplace upper bound \eqref{eq:lappriupp}.
From \eqref{eq:mainrepn17}, for every $n \in \mathbb{N}$, we can choose ${\tilde{\boldsymbol{\varphi}}}^n \doteq ({\tilde{\varphi}}^n_k)_{k \in \mathbb{N}_0} \in \bar{\mathcal{A}}_b$ such that \begin{equation*} -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} \ge {{E}} \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}_k^n(s,y)) \, ds\,dy + h({\bar{\boldsymbol{X}}}^{n,{\tilde{\boldsymbol{\varphi}}}^n}, {\bar{Y}}^{n,{\tilde{\boldsymbol{\varphi}}}^n}) \right\} - \frac{1}{n}, \end{equation*} where $(\bar{\boldsymbol{X}}^{n,{\tilde{\boldsymbol{\varphi}}}^n}, {\bar{Y}}^{n,{\tilde{\boldsymbol{\varphi}}} ^n}) $ are defined by \eqref{eq:Ybar_n_upper_temp}-- \eqref{eq:Xbar_n_k_upper_temp} by replacing $\boldsymbol{\varphi}^n$ with ${\tilde{\boldsymbol{\varphi}}}
^n$. Since $\|h\|_\infty < \infty$, \begin{align*} \sup_{n \in \mathbb{N}} {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}_k^n(s,y)) \, ds\,dy & \color{black} \le \sup_{n \in \mathbb{N}} \left[ -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} - E h({\bar{\boldsymbol{X}}}^{n,{\tilde{\boldsymbol{\varphi}}}^n}, {\bar{Y}}^{n,{\tilde{\boldsymbol{\varphi}}}^n}) + \frac{1}{n} \right] \\
& \le 2\|h\|_\infty + 1 \doteq M_h. \end{align*} Now we modify ${\tilde{\boldsymbol{\varphi}}}^n$ so that the last inequality holds not in the sense of expectation, but rather almost surely, for a possibly larger constant [see \eqref{eq:cost_bd_upper}]. Fix $\sigma \in (0,1)$ and define \begin{equation*} {\tilde{\tau}}^n \doteq \inf \left\{t \in [0,T] : \sum_{k=0}^\infty \int_{[0,t] \times [0,1]} \ell({\tilde{\varphi}}_k^n(s,y)) \, ds\,dy > 2 M_h
\|h\|_\infty / \sigma \right\} \wedge T. \end{equation*} For $k \in \mathbb{N}_0$, letting $ \varphi^n_k(s,y) \doteq {\tilde{\varphi}}^n_k(s,y) {{1}}_{\{s \le {\tilde{\tau}}^n\}} + {{1}}_{\{s > {\tilde{\tau}}^n\}}$, $(s,y) \in [0,T]\times[0,1]$, we have ${\boldsymbol{\varphi}}^n \doteq ({\varphi}^n_k)_{k \in \mathbb{N}_0} \in \bar{\mathcal{A}}_b$ since ${\tilde{\tau}}^n$ is an $\{\mathcal{F}_t\}$-stopping time. Also \begin{equation*} {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy \le {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}_k^n(s,y)) \, ds\,dy \end{equation*} and \begin{align*} {P}(\boldsymbol{\varphi}^n \ne{\tilde{\boldsymbol{\varphi}}}^n) & \le {P} \left( \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}
_k^n(s,y)) \, ds\,dy > 2 M_h \|h\|_\infty / \sigma \right) \\
& \le \frac{\sigma}{2 M_h \|h\|_\infty} {{E}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}_k^n(s,y)) \, ds\,dy
\le \frac{\sigma}{2 \|h\|_\infty}. \end{align*} Letting $(\bar{\boldsymbol{X}}^{n,{\boldsymbol{\varphi}}^n}, {\bar{Y}}^{n,{\boldsymbol{\varphi}} ^n}) $ be defined through \eqref{eq:Ybar_n_upper_temp}-- \eqref{eq:Xbar_n_k_upper_temp} using $\boldsymbol{\varphi}^n$, we have \begin{equation*}
\left| {{E}} h(\bar{\boldsymbol{X}}^{n,\boldsymbol{\varphi}^n},{\bar{Y}}^{n,\boldsymbol{\varphi}^n}) - { {E}} h(\bar{\boldsymbol{X}}^{n,{\tilde{\boldsymbol{\varphi}}}^n},{\bar{Y}}^{n,{\tilde{\boldsymbol{\varphi}}
}^n}) \right| \le 2 \|h\|_\infty {P}(\boldsymbol{\varphi}^n \ne{\tilde{ \boldsymbol{\varphi}}}^n) \le \sigma. \end{equation*} Hence we have \begin{equation*} -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} \ge {{E}} \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy + h(\bar{\boldsymbol{X}}^{n,\boldsymbol{\varphi}^n},{\bar{Y}}^{n,\boldsymbol{\varphi}^n}) \right\} - \frac{1}{n} - \sigma \end{equation*} and \begin{equation} \label{eq:cost_bd_upper_B} \sup_{n \in \mathbb{N}} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]}
\ell(\varphi_k^n(s,y)) \, ds\,dy \le 2M_h \|h\|_{\infty}/\sigma \doteq K_0, \mbox{ a.s. } {P}. \end{equation} Here \eqref{eq:cost_bd_upper_B} holds since $\ell(1)=0$, so the cost of $\boldsymbol{\varphi}^n$ accrues only on $[0,{\tilde{\tau}}^n]$, where by the definition of ${\tilde{\tau}}^n$ it is at most $2M_h\|h\|_\infty/\sigma$ (the running cost, being an integral with respect to $ds\,dy$, has no jump at ${\tilde{\tau}}^n$).
Now we can complete the proof of the Laplace upper bound. Recall that $h \in \mathbb{C}_b(\mathcal{D}_\infty \times \mathcal{D})$. Write $(\bar{\boldsymbol{\nu}}^{n},\bar{\boldsymbol{X}}^{n},{\bar{Y}}^{n}) \doteq (\bar{\boldsymbol{\nu}}^{n,\boldsymbol{\varphi}^n}, \bar{\boldsymbol{X}}^{n,{\boldsymbol{\varphi}}^n}, {\bar{Y}}^{n,{\boldsymbol{\varphi}} ^n}) $, where $\bar{\boldsymbol{\nu}}^{n,\boldsymbol{\varphi}^n}$ is as defined in \eqref{eq:nu_n_upper} using $\boldsymbol{\varphi}^n$. Noting from \eqref{eq:cost_bd_upper_B} that \eqref{eq:cost_bd_upper} is satisfied with $ M_0= K_0$, we have from Lemma \ref{lem:tightness} that $\{(\bar{\boldsymbol{\nu}}^{n}, \bar{\boldsymbol{X}}^{n},{\bar{Y}}^{n})\}$ is tight. Assume without loss of generality that $(\bar{\boldsymbol{\nu}}^{n},\bar{\boldsymbol{X}}^{n},{\bar{Y}} ^{n})$ converges along the whole sequence weakly to $(\bar{\boldsymbol{\nu}},\bar{\boldsymbol{X}} ,{\bar{Y}})$, given on some probability space $(\Omega^*,\mathcal{F}^*,{ P}^*)$. By Lemma \ref{lem:char_limit} we have $(\bar{\boldsymbol{X}},{\bar{ Y}}) \in \mathcal{C}_T$ and $\bar{\boldsymbol{\nu}} = \bar{\boldsymbol{\nu}}^{\boldsymbol{\varphi}}$ for some $\boldsymbol{\varphi} \in \mathcal{S}_T(\bar{\boldsymbol{X}},{\bar{Y}})$ a.s.\ ${P}^*$, where $\bar{\boldsymbol{\nu}}^{\boldsymbol{\varphi}}$ is as defined in \eqref{eq:nu_n_upper} using $\boldsymbol{\varphi}$. Owing to the topology used for the measure component and the relation \eqref{eq:nu_n_upper},
Lemma A.1 in \cite {BudhirajaChenDupuis2013large} (see also \cite[Appendix A.4.3, Lemma A.11]{buddupbook}) implies the lower semicontinuity of the cost that is needed for the second inequality below. Using Fatou's lemma and the definition of $I_T$ in \eqref{eq:rate_function} \begin{align*} \liminf_{n \to \infty} -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} & \ge \liminf_{n \to \infty} {{E}} \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy + h(\bar{\boldsymbol{X}}^n,{\bar{Y}}^n) - \frac{1}{n} - \sigma \right\} \\ & \ge {{E}}^* \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy + h(\bar{\boldsymbol{X}},{\bar{Y}}) \right\} - \sigma \\ & \ge \inf_{(\boldsymbol{\zeta} ,\psi ) \in \mathcal{D}_\infty \times \mathcal{D}} \{I_T(\boldsymbol{\zeta} ,\psi ) + h(\boldsymbol{\zeta} ,\psi )\} - \sigma. \end{align*} Since $\sigma \in (0,1)$ is arbitrary, this completes the proof of the Laplace upper bound.
\section{Laplace lower bound}
\label{sec:lower}
In this section we prove the Laplace lower bound \eqref{eq:lapprilow}.
The following lemma, which shows unique solvability of the ODE \eqref{eq:psi} and \eqref{eq:phi_k} for controls $\boldsymbol{\varphi}$ in a suitable class, is key in the proof.
\begin{Lemma} \label{lem:uniqueness} Fix $\sigma \in (0,1)$. Given $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$ with $I_T(\boldsymbol{\zeta} ,\psi ) < \infty$, there exists $\boldsymbol{\varphi}^* \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$ such that
\begin{enumerate}[\upshape(a)]
\item $\sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^*(s,y)) \, ds\,dy \le I_T(\boldsymbol{\zeta} ,\psi ) + \sigma$.
\item If $(\tilde{\boldsymbol{\zeta}},\tilde\psi)$ is another pair in $\mathcal{C}_T$ such that $\boldsymbol{\varphi}^* \in \mathcal{S}_T(\tilde{\boldsymbol{\zeta}},\tilde\psi)$, then $ (\tilde{\boldsymbol{\zeta}},\tilde\psi) = (\boldsymbol{\zeta} ,\psi )$. \end{enumerate}
\end{Lemma}
\begin{proof}
Since $I_T(\boldsymbol{\zeta} ,\psi ) < \infty$, we can choose some $\boldsymbol{\varphi} \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$ such that
\begin{equation*}
\sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy \le I_T(\boldsymbol{\zeta} ,\psi ) + \frac{\sigma}{2}.
\end{equation*}
Next we will modify $\boldsymbol{\varphi}$ to get the desired $\boldsymbol{\varphi}^*$.
\textcolor{black}{
For $k \in {\mathbb{N}}_0$, let
\begin{align*}
\rho_k(t) & \doteq 1_{\{r_k(\boldsymbol{\zeta}(t)) = 0\}} + \frac{\int_0^1 {{1}}_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) \varphi_k(t,y)\,dy}{r_k(\boldsymbol{\zeta}(t))} 1_{\{r_k(\boldsymbol{\zeta}(t)) \ne 0\}}, \\
{\tilde{\varphi}}_k(t,y) & \doteq \rho_k(t) {{1}}_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + {{1}}_{[r_k(\boldsymbol{\zeta}(t)),1]}(y).
\end{align*}
Then $$\int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\boldsymbol{\zeta} (s)))}(y)\,{\tilde{\varphi}}_{k}(s,y)\,ds\,dy = \int_{[0,t]\times \lbrack 0,1]}{{1}}_{[0,r_{k}(\boldsymbol{\zeta} (s)))}(y)\,\varphi_{k}(s,y)\,ds\,dy$$ and hence $({\tilde{\varphi}}_k)_{k \in {\mathbb{N}}_0} \in {\mathcal{S}}_T(\boldsymbol{\zeta},\psi)$.
}
Since $\ell$ is convex and nonnegative and $\ell(1)=0$,
\textcolor{black}{
we have
\begin{align*}
\int_{[0,T] \times [0,1]} \ell({\tilde{\varphi}}_k(s,y)) \, ds\,dy & = \int_0^T 1_{\{r_k(\boldsymbol{\zeta}(s)) \ne 0\}} r_k(\boldsymbol{\zeta}(s)) \ell(\rho_k(s)) \, ds \le \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy
\end{align*}
for each $k \in {\mathbb{N}}_0$.
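The equality in the last display uses that ${\tilde{\varphi}}_k(s,y)=1$ and $\ell(1)=0$ for $y \ge r_k(\boldsymbol{\zeta}(s))$, while the inequality is Jensen's inequality: on $\{r_k(\boldsymbol{\zeta}(s)) \ne 0\}$ the quantity $\rho_k(s)$ is the average of $\varphi_k(s,\cdot)$ over $[0,r_k(\boldsymbol{\zeta}(s)))$, so by convexity of $\ell$, \begin{equation*} r_k(\boldsymbol{\zeta}(s))\, \ell(\rho_k(s)) \le \int_0^{r_k(\boldsymbol{\zeta}(s))} \ell(\varphi_k(s,y))\,dy \le \int_0^1 \ell(\varphi_k(s,y))\,dy, \end{equation*} the second inequality using $\ell \ge 0$.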
Therefore we can assume without loss of generality (and abusing notation) that
$
\varphi_k(t,y) = \rho_k(t) {{1}}_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + {{1}}_{[r_k(\boldsymbol{\zeta}(t)),1]}(y)
$
for some $\rho_k(t) \in [0,\infty)$, for each $k \in \mathbb{N}_0$ and $(t,y) \in [0,T]\times[0,1]$.
}
Fix $\varepsilon \in (0,1)$. We will shrink the support of $\boldsymbol{\varphi}$ to get the desired $\boldsymbol{\varphi}^*$ for sufficiently small $\varepsilon$.
For $t \in [0,T]$, let
\begin{equation*}
\varphi_k^\varepsilon(t,y) = \frac{\rho_k(t)}{1-\varepsilon} {{1}}_{[0,(1-\varepsilon)r_k(\boldsymbol{\zeta}(t)))}(y) + {{1}}_{[(1+\varepsilon)r_k(\boldsymbol{\zeta}(t)),1]}(y).
\end{equation*}
Clearly $\boldsymbol{\varphi}^\varepsilon \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$.
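To see this, note that for every $t \in [0,T]$ and $k \in \mathbb{N}_0$ the second term in the definition of $\varphi_k^\varepsilon$ is supported on $[(1+\varepsilon)r_k(\boldsymbol{\zeta}(t)),1]$, which is disjoint from $[0,r_k(\boldsymbol{\zeta}(t)))$, and hence (recalling the normalized form of $\varphi_k$ fixed above) \begin{equation*} \int_0^1 {{1}}_{[0,r_k(\boldsymbol{\zeta}(t)))}(y)\, \varphi_k^\varepsilon(t,y)\,dy = \frac{\rho_k(t)}{1-\varepsilon}\,(1-\varepsilon)\,r_k(\boldsymbol{\zeta}(t)) = \rho_k(t)\, r_k(\boldsymbol{\zeta}(t)) = \int_0^1 {{1}}_{[0,r_k(\boldsymbol{\zeta}(t)))}(y)\, \varphi_k(t,y)\,dy, \end{equation*} so, exactly as in the argument for ${\tilde{\boldsymbol{\varphi}}}$ above, membership in $\mathcal{S}_T(\boldsymbol{\zeta},\psi)$ is unaffected.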
Note that $\varphi_k^\varepsilon(t,y) = 0$ for $(1-\varepsilon)r_k(\boldsymbol{\zeta}(t)) < y < (1+\varepsilon)r_k(\boldsymbol{\zeta}(t))$, which will be key when we prove uniqueness in part (b).
Recall $\tau_{\boldsymbol{\zeta}}$ introduced in \eqref{eq:defntauphi}.
Then
\begin{align*}
& \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^\varepsilon(t,y)) \, dt\,dy - \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(t,y)) \, dt\,dy \\
& = \sum_{k=0}^\infty \int_0^{\tau_{\boldsymbol{\zeta}}} \left[ (1-\varepsilon)r_k(\boldsymbol{\zeta}(t)) \ell\left(\frac{\rho_k(t)}{1-\varepsilon}\right) + 2\varepsilon r_k(\boldsymbol{\zeta}(t)) \ell(0) - r_k(\boldsymbol{\zeta}(t)) \ell(\rho_k(t)) \right] dt \\
& = \sum_{k=0}^\infty \int_0^{\tau_{\boldsymbol{\zeta}}} r_k(\boldsymbol{\zeta}(t)) \bigg[ \left( \rho_k(t) \log \left(\frac{\rho_k(t)}{1-\varepsilon}\right) - \rho_k(t) + 1-\varepsilon \right) + 2\varepsilon \\
& \qquad - \left( \rho_k(t) \log \rho_k(t) - \rho_k(t) + 1 \right) \bigg] dt \\
& = \sum_{k=0}^\infty \int_0^{\tau_{\boldsymbol{\zeta}}} r_k(\boldsymbol{\zeta}(t)) \left[ \rho_k(t) \log \left(\frac{1}{1-\varepsilon}\right) + \varepsilon \right] dt.
\end{align*}
From Lemma \ref{lem:property_ell}(b) we have
\begin{align*}
& \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^\varepsilon(t,y)) \, dt\,dy - \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(t,y)) \, dt\,dy \\
& \le \sum_{k=0}^\infty \int_0^{\tau_{\boldsymbol{\zeta}}} r_k(\boldsymbol{\zeta}(t)) \left[ \left( \ell(\rho_k(t)) + 2 \right) \log \left(\frac{1}{1-\varepsilon}\right) + \varepsilon \right] dt \\
& = \log \left(\frac{1}{1-\varepsilon}\right)\sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(t,y)) \, dt\,dy + 2 \tau_{\boldsymbol{\zeta}} \log \left(\frac{1}{1-\varepsilon}\right) + \tau_{\boldsymbol{\zeta}} \varepsilon \\
& \le \left(I_T(\boldsymbol{\zeta} ,\psi ) + \frac{\sigma}{2}\right)\log \left(\frac{1}{1-\varepsilon}\right) + 2T\log \left(\frac{1}{1-\varepsilon}\right) + T \varepsilon .
\end{align*}
Choosing $\varepsilon$ small enough so that the last display is no larger than $\frac{\sigma}{2}$, we have
\begin{align*}
\sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^\varepsilon(s,y)) \, ds\,dy \le \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy + \frac{\sigma}{2} \le I_T(\boldsymbol{\zeta} ,\psi ) + \sigma.
\end{align*}
Part (a) then holds with $\boldsymbol{\varphi}^* = \boldsymbol{\varphi}^\varepsilon$ for such an $\varepsilon$.
We now show that part (b) is satisfied with such a $\boldsymbol{\varphi}^*$. Suppose that, in addition to $(\boldsymbol{\zeta} ,\psi )$, there is another pair $({\tilde{\boldsymbol{\zeta}}},{\tilde{\psi}})$ such that $({\tilde{\boldsymbol{\zeta}}},{\tilde{\psi}}) \in \mathcal{C}_T$ and $\boldsymbol{\varphi}^* \in \mathcal{S}_T({\tilde{\boldsymbol{\zeta}}},{\tilde{\psi}})$.
Let
$\tau \doteq \inf \{ t \in [0,T] : \boldsymbol{\zeta}(t) \ne {\tilde{\boldsymbol{\zeta}}}(t) \} \wedge T.$
We claim that $\tau = T$.
Once the claim is verified, it follows from continuity of $\boldsymbol{\zeta}$ and ${\tilde{\boldsymbol{\zeta}}}$ that $\boldsymbol{\zeta}(t) = {\tilde{\boldsymbol{\zeta}}}(t)$ for all $t \in [0,T]$.
Then from \eqref{eq:psi}, $\psi = {\tilde{\psi}}$ proving part (b).
Now we prove the claim that $\tau = T$. We will argue via contradiction.
Suppose that $\tau < T$.
To complete the proof, it suffices to show the following, which contradicts the definition of $\tau$:
\begin{equation}
\label{eq:uniqueness_contradiction}
\boldsymbol{\zeta}(t) = {\tilde{\boldsymbol{\zeta}}}(t), t \in [\tau,\tau+\delta] \mbox{ for some } \delta > 0.
\end{equation}
From the definition of $\tau$ and \eqref{eq:psi} it follows that $(\boldsymbol{\zeta}(t),r(\boldsymbol{\zeta}(t)),\psi(t)) = ({\tilde{\boldsymbol{\zeta}}}(t),r({\tilde{\boldsymbol{\zeta}}}(t)),{\tilde{\psi}}(t))$ for all $t < \tau$.
From Remark \ref{rmk:property_ODE}(a) we have that $r(\boldsymbol{\zeta}(\cdot)), r({\tilde{\boldsymbol{\zeta}}}(\cdot)) \in \mathcal{C}$.
Then by continuity, $(\boldsymbol{\zeta}(t),r(\boldsymbol{\zeta}(t)),\psi(t)) = ({\tilde{\boldsymbol{\zeta}}}(t),r({\tilde{\boldsymbol{\zeta}}}(t)),{\tilde{\psi}}(t))$ for all $t \le \tau$.
If $r(\boldsymbol{\zeta}(\tau)) = r({\tilde{\boldsymbol{\zeta}}}(\tau)) = 0$, then from Remark \ref{rmk:property_ODE}(c) we have $\boldsymbol{\zeta}(t) = {\tilde{\boldsymbol{\zeta}}}(t) = \boldsymbol{0}$ for all $t \ge \tau$, which gives \eqref{eq:uniqueness_contradiction}.
Now we show \eqref{eq:uniqueness_contradiction} for the remaining case: $r(\boldsymbol{\zeta}(\tau)) = r({\tilde{\boldsymbol{\zeta}}}(\tau)) > 0$.
For this, note that by continuity of $r(\boldsymbol{\zeta})$ and $r({\tilde{\boldsymbol{\zeta}}})$, there exists some $\delta > 0$ such that for all $t \in [\tau,\tau+\delta]$,
\begin{equation}
\label{eq:uniqueness_key_fraction}
r(\boldsymbol{\zeta}(t)) > 0, r({\tilde{\boldsymbol{\zeta}}}(t)) > 0, \left| \frac{r(\boldsymbol{\zeta}(t))}{r({\tilde{\boldsymbol{\zeta}}}(t))} - 1 \right| < \varepsilon,
\end{equation}
where $\varepsilon$ is as in part (a) and recall that $\boldsymbol{\varphi}^*=\boldsymbol{\varphi}^\varepsilon$.
We will argue in two steps.\\
Step $1$: We will prove that
\begin{equation}
\label{eq:step_1_claim}
\zeta_k(t) = {\tilde{\zeta}}_k(t) \mbox{ for all } t \in [\tau,\tau+\delta], k \in \mathbb{N}.
\end{equation}
Suppose not, namely there exists $k\in \mathbb{N}$ such that
$\tau_k \doteq \inf \{t \in [\tau,\tau+\delta] : \zeta_k(t) \ne {\tilde{\zeta}}_k(t) \}\wedge T$
satisfies $\tau \le \tau_k < \tau+\delta$.
By continuity, we have $\zeta_k(t) = {\tilde{\zeta}}_k(t)$ for $t \le \tau_k$.
We must have $\zeta_k(\tau_k) = {\tilde{\zeta}}_k(\tau_k) > 0$, since otherwise $\zeta_k(\tau_k) = {\tilde{\zeta}}_k(\tau_k) = 0$ and so from Remark \ref{rmk:property_ODE}(b) $\zeta_k(t) = {\tilde{\zeta}}_k(t) = 0$ for all $t \ge \tau_k$, which contradicts the definition of $\tau_k$.
From \eqref{eq:uniqueness_key_fraction} it then follows that
\begin{align*}
r_k(\boldsymbol{\zeta}(\tau_k)) & = \frac{k\zeta_k(\tau_k)}{r(\boldsymbol{\zeta}(\tau_k))} > 0, \\
| r_k(\boldsymbol{\zeta}(\tau_k)) - r_k({\tilde{\boldsymbol{\zeta}}}(\tau_k)) | & = \left| \frac{k\zeta_k(\tau_k)}{r(\boldsymbol{\zeta}(\tau_k))} - \frac{k{\tilde{\zeta}}_k(\tau_k)}{r({\tilde{\boldsymbol{\zeta}}}(\tau_k))} \right| = \frac{k\zeta_k(\tau_k)}{r(\boldsymbol{\zeta}(\tau_k))} \left| 1 - \frac{r(\boldsymbol{\zeta}(\tau_k))}{r({\tilde{\boldsymbol{\zeta}}}(\tau_k))} \right| < \varepsilon r_k(\boldsymbol{\zeta}(\tau_k)).
\end{align*}
Once more by continuity, there exists some $\delta_k>0$ such that the last two inequalities hold for $t \in [\tau_k,\tau_k+\delta_k]$, namely $$r_k(\boldsymbol{\zeta}(t)) > 0, \: (1-\varepsilon)r_k(\boldsymbol{\zeta}(t)) < r_k({\tilde{\boldsymbol{\zeta}}}(t)) < (1+\varepsilon)r_k(\boldsymbol{\zeta}(t)).$$
From construction of $\varphi^\varepsilon$, we see that for $t \in [\tau_k,\tau_k+\delta_k]$,
\begin{equation*}
\int_{(\tau_k,t] \times [0,1]} {{1}}_{[0,r_k({\tilde{\boldsymbol{\zeta}}}(s)))}(y) \varphi_k^\varepsilon(s,y) \, ds\,dy = \int_{(\tau_k,t] \times [0,1]} {{1}}_{[0,r_k(\boldsymbol{\zeta}(s)))}(y) \varphi_k^\varepsilon(s,y) \, ds\,dy.
\end{equation*}
It then follows from \eqref{eq:phi_k} that $\zeta_k(t) = {\tilde{\zeta}}_k(t)$ for all $t \le \tau_k+\delta_k$.
This contradicts the definition of $\tau_k$.
Therefore \eqref{eq:step_1_claim} must hold.\\
Step $2$: We will prove that
\begin{equation}
\label{eq:step_2_claim}
\zeta_0(t) = {\tilde{\zeta}}_0(t) \mbox{ for all } t \in [\tau,\tau+\delta].
\end{equation}
Let $\eta(t) \doteq \zeta_0(t) - \psi(t)$ and ${\tilde{\eta}}(t) \doteq {\tilde{\zeta}}_0(t) - {\tilde{\psi}}(t)$.
From properties of the Skorokhod map $\Gamma$ (see, e.g., \cite[Section 3.6.C]{KaratzasShreve1991brownian}), we have that
\begin{align}
& \eta(0) = 0, \eta(t) \mbox{ is non-decreasing and } \int_0^T \zeta_0(t) \, \eta(dt) = 0, \label{eq:step_2_eta} \\
& {\tilde{\eta}}(0) = 0, {\tilde{\eta}}(t) \mbox{ is non-decreasing and } \int_0^T {\tilde{\zeta}}_0(t) \, {\tilde{\eta}}(dt) = 0 \label{eq:step_2_etatil}.
\end{align}
Consider $[\zeta_0(t) - {\tilde{\zeta}}_0(t)]^2$.
Since $\zeta_0,\psi,{\tilde{\zeta}}_0,{\tilde{\psi}}$ are absolutely continuous, we have for $t \in [\tau,\tau+\delta]$,
\begin{equation}
\Scale[0.9]
{\begin{aligned}
(\zeta_0(t) - {\tilde{\zeta}}_0(t))^2
& = (\zeta_0(\tau) - {\tilde{\zeta}}_0(\tau))^2 + 2 \int_\tau^t (\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\zeta_0'(s) - {\tilde{\zeta}}_0'(s)) \, ds \\
& = 2 \int_\tau^t (\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\psi'(s) - {\tilde{\psi}}'(s)) \, ds + 2 \int_\tau^t (\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\eta'(s) - {\tilde{\eta}}'(s)) \, ds.
\end{aligned}}
\label{eq:step_2_phi}
\end{equation}
From \eqref{eq:psi} and \eqref{eq:phi_k} we see that for $t \in [\tau,\tau+\delta]$,
\begin{align*}
\psi(t) & = \sum_{k=1}^\infty (k-2) (p_k-\zeta_k(t)) - 2\int_{[0,t] \times [0,1]} {{1}}_{[0,r_0(\boldsymbol{\zeta}(s)))}(y) \, \varphi_0^\varepsilon(s,y) ds\,dy, \\
{\tilde{\psi}}(t) & = \sum_{k=1}^\infty (k-2) (p_k-{\tilde{\zeta}}_k(t)) - 2\int_{[0,t] \times [0,1]} {{1}}_{[0,r_0({\tilde{\boldsymbol{\zeta}}}(s)))}(y) \, \varphi_0^\varepsilon(s,y) ds\,dy.
\end{align*}
Taking the difference of these two displays and using \eqref{eq:step_1_claim}, we have that for $t \in [\tau,\tau+\delta]$,
\begin{equation}
\psi(t) - {\tilde{\psi}}(t) = - 2 \int_{[0,t] \times [0,1]} \left( {{1}}_{[0,r_0(\boldsymbol{\zeta}(s)))}(y) - {{1}}_{[0,r_0({\tilde{\boldsymbol{\zeta}}}(s)))}(y) \right) \varphi_0^\varepsilon(s,y) \, ds\,dy. \label{eq:step_2_psi}
\end{equation}
Since for each fixed $y \ge 0$ the function $x\mapsto \frac{x}{x+y}$ is non-decreasing on $(-y,\infty)$, we have from \eqref{eq:step_1_claim} and \eqref{eq:uniqueness_key_fraction} that if for some
$t \in [\tau,\tau+\delta]$, $\zeta_0(t) \ge {\tilde{\zeta}}_0(t)$, then
\begin{align*}
\Scale[0.95]{r_0(\boldsymbol{\zeta}(t)) = \frac{\zeta_0(t)}{\zeta_0(t) + \sum_{k=1}^\infty k \zeta_k(t)}
= \frac{\zeta_0(t)}{\zeta_0(t) + \sum_{k=1}^\infty k {\tilde{\zeta}}_k(t)} \ge \frac{{\tilde{\zeta}}_0(t)}{{\tilde{\zeta}}_0(t) + \sum_{k=1}^\infty k {\tilde{\zeta}}_k(t)} = r_0({\tilde{\boldsymbol{\zeta}}}(t)).}
\end{align*}
Therefore for $t \in [\tau,\tau+\delta]$,
\begin{equation*}
{{1}}_{[0,r_0(\boldsymbol{\zeta}(t)))}(y) \ge {{1}}_{[0,r_0({\tilde{\boldsymbol{\zeta}}}(t)))}(y) \mbox{ when } \zeta_0(t) \ge {\tilde{\zeta}}_0(t)
\end{equation*}
and similarly
\begin{equation*}
{{1}}_{[0,r_0(\boldsymbol{\zeta}(t)))}(y) \le {{1}}_{[0,r_0({\tilde{\boldsymbol{\zeta}}}(t)))}(y) \mbox{ when } \zeta_0(t) \le {\tilde{\zeta}}_0(t).
\end{equation*}
Combining these two inequalities with \eqref{eq:step_2_psi}, we see that
\begin{equation}
\label{eq:step_2_mono_1}
(\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\psi'(s) - {\tilde{\psi}}'(s)) \le 0, \mbox{ a.e. } s \in [\tau,\tau+\delta].
\end{equation}
Next from \eqref{eq:step_2_eta} and \eqref{eq:step_2_etatil} we see that for $t \in [\tau,\tau+\delta]$,
\begin{align*}
\int_\tau^t {{1}}_{\{\zeta_0(s) > {\tilde{\zeta}}_0(s)\}} (\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\eta'(s) - {\tilde{\eta}}'(s)) \, ds & \le \int_\tau^t {{1}}_{\{\zeta_0(s) > {\tilde{\zeta}}_0(s)\}} (\zeta_0(s) - {\tilde{\zeta}}_0(s)) \eta'(s) \, ds \\
& \le \int_\tau^t {{1}}_{\{\zeta_0(s) > 0\}} \zeta_0(s) \, \eta(ds)
= 0,
\end{align*}
and similarly
\begin{equation*}
\int_\tau^t {{1}}_{\{\zeta_0(s) < {\tilde{\zeta}}_0(s)\}} (\zeta_0(s) - {\tilde{\zeta}}_0(s)) (\eta'(s) - {\tilde{\eta}}'(s)) \, ds \le 0.
\end{equation*}
Combining these two inequalities with \eqref{eq:step_2_mono_1} and \eqref{eq:step_2_phi}, we have for $t \in [\tau,\tau+\delta]$,
$
[\zeta_0(t) - {\tilde{\zeta}}_0(t)]^2 \le 0,
$
proving \eqref{eq:step_2_claim}. Combining \eqref{eq:step_1_claim} and \eqref{eq:step_2_claim} gives \eqref{eq:uniqueness_contradiction} and completes the proof. \end{proof}
We can now complete the proof of the Laplace lower bound. Fix $h \in \mathbb{ C}_b(\mathcal{D}_\infty \times \mathcal{D})$ and $\sigma \in (0,1)$. Fix some $\sigma$-optimal $(\boldsymbol{\zeta}^*,\psi^*) \in \mathcal{C}_T$ with $ I_T(\boldsymbol{\zeta}^*,\psi^*) < \infty$, namely \begin{equation*} I_T(\boldsymbol{\zeta}^*,\psi^*) + h(\boldsymbol{\zeta}^*,\psi^*) \le \inf_{(\boldsymbol{\zeta} ,\psi ) \in \mathcal{D} _\infty \times \mathcal{D}} \left\{ I_T(\boldsymbol{\zeta} ,\psi ) + h(\boldsymbol{\zeta} ,\psi ) \right\} + \sigma. \end{equation*}
Let $\boldsymbol{\varphi}^* \in \mathcal{S}_T(\boldsymbol{\zeta}^*,\psi^*)$ be as in Lemma \ref {lem:uniqueness} (with $(\boldsymbol{\zeta},\psi)$ there replaced by $(\boldsymbol{\zeta}^*,\psi^*)$).
For each $n \in \mathbb{N}$ and $(s,y) \in [0,T] \times [0,1]$, consider the deterministic control \begin{align*} \varphi^n_k(s,y) & \doteq \frac{1}{n} {{1}}_{\{\varphi^*_k(s,y) \le \frac{1}{n}\}} + \varphi^*_k(s,y) {{1}}_{\{\frac{1}{n} < \varphi^*_k(s,y) < n\}} + n {{1}}_{\{\varphi^*_k(s,y) \ge n\}}, k \le n, \\ \varphi^n_k(s,y) & \doteq 1, k > n. \end{align*} Then $\boldsymbol{\varphi}^n \doteq (\varphi^n_k) \in \bar{\mathcal{A}}_b$ and from \eqref{eq:mainrepn17} we have \begin{equation*} -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} \le {{E}} \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy + h({\bar{\boldsymbol{X}}}^{n},{\bar{Y}}^{n}) \right\}, \end{equation*} where $({\bar{\boldsymbol{X}}}^{n}, {\bar{Y}}^{n})$ are given as in \eqref{eq:Ybar_n_upper_temp}--\eqref{eq:Xbar_n_k_upper_temp}. Noting that for all $n \in \mathbb{N}$, $k \in \mathbb{N}_0$ and $(s,y) \in [0,T] \times [0,1]$, $\ell(\varphi^n_k(s,y)) \le \ell(\varphi^*_k(s,y))$, we have from Lemma \ref{lem:uniqueness}(a) that \eqref{eq:cost_bd_upper} holds with $M_0$ replaced by $I_T(\boldsymbol{\zeta}^*,\psi^*) + 1$. Define $\{\bar{\boldsymbol{\nu}}^{n}\}$ as in \eqref{eq:nu_n_upper} with controls $\boldsymbol{\varphi}^n$. From Lemma \ref{lem:tightness} it follows that $ \{(\bar{\boldsymbol{\nu}}^{n},{\bar{\boldsymbol{X}}}^{n},{\bar{Y}}^{n})\}$ is tight. Assume without loss of generality that $(\bar{\boldsymbol{\nu}}^{n},{\bar{\boldsymbol{X}}}^{n},{\bar{Y}}^{n})$ converges along the whole sequence weakly to $(\bar{\boldsymbol{\nu}},{\bar{\boldsymbol{X}}},{\bar{Y}})$, given on some probability space $ (\Omega^*,\mathcal{F}^*,{P}^*)$. From the construction of $ \boldsymbol{\varphi}^n$ we must have $\bar{\boldsymbol{\nu}} = \bar{\boldsymbol{\nu}}^{\boldsymbol{\varphi}^*}$ a.s.\ ${P}^*$, where $\bar{\boldsymbol{\nu}}^{\boldsymbol{\varphi}^*}$ is as defined in \eqref{eq:nu_n_upper} using $\boldsymbol{\varphi}^*$. By Lemma \ref{lem:char_limit} we have $({\bar{\boldsymbol{X}}},{\bar{Y}}) \in \mathcal{C} _T$ and $\boldsymbol{\varphi}^* \in \mathcal{S}_T({\bar{\boldsymbol{X}}},{\bar{Y}})$ a.s.\ ${ P}^*$. From Lemma \ref{lem:uniqueness}(b) it now follows that $( {\bar{\boldsymbol{X}}},{\bar{Y}}) = (\boldsymbol{\zeta}^*,\psi^*)$ a.s.\ ${P}^*$. Finally, from Lemma \ref{lem:uniqueness}(a), \begin{align*} \limsup_{n \to \infty} -\frac{1}{n} \log {{E}} e^{-nh(\boldsymbol{X}^{n},Y^{n})} & \le \limsup_{n \to \infty} {{E}} \left\{ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy + h({\bar{\boldsymbol{X}}}^n,{\bar{Y}}^n) \right\} \\ & \le \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^*(s,y)) \, ds\,dy + {{E}}^* h({\bar{\boldsymbol{X}}}, \bar Y) \\ & = \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^*(s,y)) \, ds\,dy + h(\boldsymbol{\zeta}^*,\psi^*) \\ & \le I_T(\boldsymbol{\zeta}^*,\psi^*) + h(\boldsymbol{\zeta}^*,\psi^*) + \sigma \\ & \le \inf_{(\boldsymbol{\zeta} ,\psi ) \in \mathcal{D}_\infty \times \mathcal{D}} \left\{ I_T(\boldsymbol{\zeta} ,\psi ) + h(\boldsymbol{\zeta} ,\psi ) \right\} + 2\sigma. \end{align*} Since $\sigma \in (0,1)$ is arbitrary, this completes the proof of the Laplace lower bound.
\section{Compact Sub-level Sets}
\label{sec:rate_function} In this section we prove that the function $I_T$ defined in \eqref{eq:rate_function} is a rate function, namely the set $\Gamma_N \doteq \{ (\boldsymbol{\zeta} ,\psi ) \in \mathcal{D}_\infty \times \mathcal{D} : I_T(\boldsymbol{\zeta} ,\psi ) \le N \}$ is compact for each fixed $N \in [0,\infty)$. Since the proof (as is usual) is very similar to the proof of the Laplace upper bound we will only provide details on steps that are significantly different.
Take any sequence $\{(\boldsymbol{\zeta}^n,\psi^n)_{n \in \mathbb{N}}\} \subset \Gamma_N$. Then $(\boldsymbol{\zeta}^n,\psi^n) \in \mathcal{C}_T$ and there exists some $\frac{1}{n}$ -optimal $\boldsymbol{\varphi}^n \in \mathcal{S}_T(\boldsymbol{\zeta}^n,\psi^n)$, namely \begin{equation} \label{eq:cost_bd_rate} \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^n(s,y)) \, ds\,dy \le I_T(\boldsymbol{\zeta}^n,\psi^n) + \frac{1}{n} \le N + \frac{1}{n}. \end{equation} Recalling \eqref{eq:psi} and \eqref{eq:phi_k} and letting $\eta^n(t) \doteq \zeta^n_0(t) - \psi^n(t)$, we can write for $t \in [0,T]$, \begin{equation} \zeta^n_0(t) = \Gamma(\psi^n)(t) = \psi^n(t) + \eta^n(t) = \sum_{k=0}^\infty (k-2) B_k^n(t) + \eta^n(t), \label{eq:phi_n_0_rate} \end{equation} where \begin{equation} B_k^n(t) \doteq \int_{[0,t] \times [0,1]} {{1}} _{[0,r_k(\boldsymbol{\zeta}^n(s)))}(y) \, \varphi_k^n(s,y) \,ds\,dy, k \in \mathbb{N}_0. \label{eq:B_n_k_rate} \end{equation} From standard properties of the one-dimensional Skorokhod Problem we have \begin{equation} \eta^n(0) = 0, \eta^n(t) \mbox{ is non-decreasing and } \int_0^T { {1}}_{\{\zeta^n_0(t)>0\}} \, \eta^n(dt) = 0. \label{eq:eta_n_rate} \end{equation}
Write $\boldsymbol{B}^n = (B^n_k)_{n \in \mathbb{N}_0}$ and let $\boldsymbol{\nu}^n$ be defined as in \eqref{eq:nu_n_upper} with deterministic controls $\boldsymbol{\varphi}^n$. The following lemma shows that $ \{(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n,\boldsymbol{B}^n,\eta^n)\}$ is pre-compact. The proof is similar to that of Lemma \ref{lem:tightness} and is therefore omitted.
\begin{Lemma} \label{lem:pre_compact_rate} $\{(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n,\boldsymbol{B}^n,\eta^n)\} $ is pre-compact in $[\mathcal{M}([0,T]\times[0,1])]^\infty \times \mathcal{C}_\infty \times \mathcal{C} \times \mathcal{C}_\infty \times \mathcal{C}$. \end{Lemma}
The following lemma characterizes limit points of $(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n, \psi^n,\boldsymbol{B}^n,\eta^n)$.
\begin{Lemma} \label{lem:cvg_rate} Suppose $(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n,\boldsymbol{B}^n,\eta^n)$ converges along a subsequence to $(\boldsymbol{\nu},\boldsymbol{\zeta},\psi,\boldsymbol{B},\eta) \in [\mathcal{M} ([0,T]\times[0,1])]^\infty \times \mathcal{C}_\infty \times \mathcal{C} \times \mathcal{C}_\infty \times \mathcal{C}$. Then the following hold.
\begin{enumerate}[\upshape(a)]
\item For each $k \in \mathbb{N}_0$, $\nu_k \ll \lambda_T$, and letting $ \varphi_k \doteq \frac{d\nu_k}{d\lambda_T}$, $ \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy \le N. $
\item For each $t\in[0,T]$, \begin{align*} \label{eq:phi_k_rate} \zeta_0(t) & = \Gamma(\psi)(t) = \psi(t) + \eta(t), \;\; \psi(t) = \sum_{k=0}^\infty (k-2) B_k(t)\\ \zeta_k(t) & = p_k - B_k(t), \; k \in \mathbb{N}. \end{align*}
\item For each $t\in[0,T]$, \begin{equation} B_k(t) = \int_{[0,t] \times [0,1]} {{1}}_{[0,r_k(\boldsymbol{\zeta}(s)))}(y) \, \varphi_k(s,y) ds\,dy, k \in \mathbb{N}_0, \label{eq:B_k_rate} \end{equation} and in particular $(\boldsymbol{\zeta}, \psi) \in \mathcal{C}_T$ and $\boldsymbol{\varphi} \in \mathcal{S} _T(\boldsymbol{\zeta}, \psi)$. \end{enumerate} \end{Lemma}
\begin{proof}
Assume without loss of generality that
\begin{equation}
\label{eq:cvg_rate_joint}
(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n,\boldsymbol{B}^n,\eta^n) \to (\boldsymbol{\nu},\boldsymbol{\zeta},\psi,\boldsymbol{B},\eta)
\end{equation}
as $n \to \infty$ along the whole sequence.
Much of the proof is similar to that of Lemma \ref{lem:char_limit} except the proof of \eqref{eq:B_k_rate} for $k=0$. Thus we only give details for the latter statement.
From \eqref{eq:cvg_rate_joint} and arguments similar to Lemma \ref{lem:UI_upper} it follows that
\begin{equation*}
r(\boldsymbol{\zeta}^n(t)) = (\zeta^n_0(t))^+ + \sum_{k=1}^\infty k \zeta^n_k(t) \to (\zeta_0(t))^+ + \sum_{k=1}^\infty k \zeta_k(t) = r(\boldsymbol{\zeta}(t))
\end{equation*}
uniformly in $t \in [0,T]$ as $n \to \infty$.
Therefore $r(\boldsymbol{\zeta}(\cdot))$ is continuous.
Let $\tau \doteq \inf \{ t \in [0,T] : r(\boldsymbol{\zeta}(t)) = 0\} \wedge T$.
We will argue that \eqref{eq:B_k_rate}, for $k=0$, holds for all $t < \tau$, $t = \tau$ and $t > \tau$. The proof of the cases
$t < \tau$ and $t = \tau$ is similar to that of \eqref{eq:Bbar_k_upper} and is omitted.
Now consider $T\ge t > \tau$.
From \eqref{eq:eta_n_rate} and \eqref{eq:phi_n_0_rate}, for $\tau < t \le T$,
\begin{equation*}
|\eta^n(t) - \eta^n(\tau)| = \int_\tau^t \, d\eta^n(s) = \int_\tau^t {{1}}_{\{\zeta^n_0(s) = 0\}} \, d\eta^n(s) = \int_\tau^t {{1}}_{\{\zeta^n_0(s) = 0\}} \, \left (d\zeta^n_0(s) - \sum_{k=0}^\infty (k-2)dB^n_k(s)\right).
\end{equation*}
From \eqref{eq:B_n_k_rate} we see that $\int_\tau^t {{1}}_{\{\zeta^n_0(s) = 0\}} \, dB^n_0(s) = 0$.
Also since $\zeta_0^n$ is non-negative and absolutely continuous, we have ${{1}}_{\{\zeta^n_0(s) = 0\}} (\zeta^n_0)'(s) = 0$ for a.e.\ $s \in [0,T]$.
Therefore
\begin{equation*}
|\eta^n(t) - \eta^n(\tau)|
\le \sum_{k=1}^\infty |k-2| |B^n_k(t) - B^n_k(\tau)|.
\end{equation*}
Applying the triangle inequality to \eqref{eq:phi_n_0_rate} and using this estimate, we see that
\begin{align*}
2\sup_{\tau < t \le T} |B^n_0(t) - B^n_0(\tau)|
\le \sup_{\tau < t \le T} |\zeta^n_0(t) - \zeta^n_0(\tau)| + 2 \sum_{k=1}^\infty |k-2|\sup_{\tau < t \le T} |B^n_k(t) - B^n_k(\tau)|.
\end{align*}
Now as in the proof of \eqref{eq:middl} we have
$\sup_{\tau < t \le T} |B^n_0(t) - B^n_0(\tau)| \le
4 r(\boldsymbol{\zeta}^n(\tau))$, which converges to $4 r(\boldsymbol{\zeta}(\tau))=0$ as $n \to \infty$.
Hence $B_0(t) = B_0(\tau)$ for $\tau < t \le T$ and this gives \eqref{eq:B_k_rate} for $k=0$.
Since we have proved \eqref{eq:B_k_rate} for $k=0$ and all $t < \tau$, $t = \tau$ and $t > \tau$, the proof is complete. \end{proof}
\noindent\textbf{Proof of compact sub-level sets $\Gamma_N$:} Now we are ready to prove that $\Gamma_N$ is compact for each fixed $N \in [0,\infty)$. Recall $(\boldsymbol{\zeta}^n,\psi^n)$ introduced above \eqref{eq:cost_bd_rate} and $\boldsymbol{\nu}^n$ introduced above Lemma \ref{lem:pre_compact_rate}. From Lemma \ref{lem:pre_compact_rate} we have pre-compactness of $\{(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n)\}$ in $[\mathcal{M} ([0,T]\times[0,1])]^\infty \times \mathcal{C}_\infty \times \mathcal{C}$. Assume without loss of generality that $(\boldsymbol{\nu}^n,\boldsymbol{\zeta}^n,\psi^n)$ converges along the whole sequence to some $(\boldsymbol{\nu},\boldsymbol{\zeta},\psi)$. By Lemma \ref{lem:cvg_rate}, $(\boldsymbol{\zeta},\psi) \in \mathcal{C}_T$ and $\boldsymbol{\nu}= \boldsymbol{\nu}^{\boldsymbol{\varphi}}$, where for $k \in \mathbb{N}_0$, ${\nu}_k^{\boldsymbol{\varphi}}$ is as defined by the right side of \eqref{eq:nu_n_upper} replacing ${\varphi}_k^n$ with ${\varphi}_k$,
and \begin{equation*} I_T(\boldsymbol{\zeta} ,\psi ) \le \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy \le N. \end{equation*} Therefore $(\boldsymbol{\zeta} ,\psi ) \in \Gamma_N$, which proves that $\Gamma_N$ is compact.
\begin{Remark} \label{rmk:unique_varphi} Suppose that for all $n \in \mathbb{N}$, $(\boldsymbol{\zeta}^n,\psi^n) = (\boldsymbol{\zeta} ,\psi )$ for some $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$ with $I_T(\boldsymbol{\zeta} ,\psi ) < \infty$ and $N = I_T(\boldsymbol{\zeta} ,\psi )$. Then taking $\boldsymbol{\varphi}^n$ satisfying \eqref{eq:cost_bd_rate} (with $(\boldsymbol{\zeta}^n,\psi^n)$ replaced with $(\boldsymbol{\zeta} ,\psi )$), we see from the above argument that there exists some $\boldsymbol{\varphi} \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$ such that \begin{equation*} I_T(\boldsymbol{\zeta} ,\psi ) \le \sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k(s,y)) \, ds\,dy \le I_T(\boldsymbol{\zeta} ,\psi ), \end{equation*} namely $I_T(\boldsymbol{\zeta} ,\psi )$ is achieved at some $\boldsymbol{\varphi} \in \mathcal{S} _T(\boldsymbol{\zeta} ,\psi )$. \end{Remark}
\section{Calculus of Variations Problem} \label{sec:cal}
In this section we study a calculus of variations problem that is key to the proof of Theorem \ref{thm:ldg_degree_distribution}. We begin by giving an overview of the proof strategy. Let $0 \le \boldsymbol{q} \le \boldsymbol{p}$. First note that, \textcolor{black}{in view of Remark \ref{rmk:track-component} and since, as noted in Section \ref{sec:model}, $\{(nX^n_0(\sigma_j^n)+1, nX^n_k(\sigma_j^n)), k,j \in \mathbb{N}\}$ has the same distribution as $\{A(j), V_k(j), k,j \in \mathbb{N}\}$, where $\{\sigma^n_j\}$ denote the jump instants of the process $\boldsymbol{X}^{n}$, the set $E^{n,\varepsilon}(\boldsymbol{q})$ can be written, in distributionally equivalent form (namely the probabilities of the events on the left and the right of the display below are the same), as} \begin{align}
E^{n,\varepsilon}(\boldsymbol{q})
& = \{ \exists\, t_1, t_2 \in [0,\infty) \text{ such that } X^n_0(t_1-) = X^n_0(t_2) = - 1/n, X^n_0(t) > - 1/n \mbox{ for } t\in [t_1, t_2), \nonumber\\
& \qquad |X_k^n(t_1-) - X_k^n(t_2) - q_k| \le \varepsilon \mbox{ for all } k \in \mathbb{N} \}. \label{eq:enverps} \end{align} Here $t_1$ (resp.\ $t_2$) corresponds to the time instant the first vertex (resp.\ the last edge) in a component is woken up (resp.\ is formed).
For $t_2 \ge t_1 \ge 0$ and $(\boldsymbol{\zeta},\psi) \in \mathcal{C}_{t_2}$, define \begin{equation}
\label{eq:I_t1_t2}
I_{t_1,t_2}(\boldsymbol{\zeta},\psi) \doteq \inf_{\boldsymbol{\varphi} \in \mathcal{S}_{t_2}(\boldsymbol{\zeta},\psi)} \sum_{k=0}^\infty \int_{[t_1,t_2] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy. \end{equation} Further for $ \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)} \in \mathbb{R}_+^\infty$, define \begin{align*}
& \mathcal{J}^0_{t_1,t_2} (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \{ (\boldsymbol{\zeta},\psi) \in \mathcal{C}_{t_2}: \boldsymbol{\zeta}(t_1) = \boldsymbol{x}^{(1)}, \boldsymbol{\zeta}(t_2) = \boldsymbol{x}^{(2)} \}, \\
& \mathcal{J}^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \{ (\boldsymbol{\zeta},\psi) \in \mathcal{J}^0_{t_1,t_2} (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}): \psi(t) \ge \psi(t_1)-x^{(1)}_0 \mbox{ for } t \in (t_1,t_2) \}, \\
& \mathcal{J}^2_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \{ (\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{t_1,t_2} (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}): d r(\boldsymbol{\zeta}(t))/dt = -2 \mbox{ for a.e. } t \in (t_1,t_2) \}, \end{align*} and \begin{equation}
\label{eq:def-I-j}
I_{t_1,t_2}^j(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{J}^j_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})} I_{t_1,t_2}(\boldsymbol{\zeta},\psi), \quad j=0,1,2.
\end{equation} Here as usual, the infimum over an empty set is infinity.
The proof of Theorem \ref{thm:ldg_degree_distribution} proceeds through the following steps. Let $\tau \doteq \frac{1}{2} \sum_{k=1}^\infty kq_k$
and assume $\sum_{k=1}^\infty k q_k > 2\sum_{k=1}^\infty q_k$.
\textcolor{black}{Note that the limit as $\varepsilon \to 0$ in fact exists because the set $E^{n,\varepsilon}(\boldsymbol{q})$ is decreasing as $\varepsilon$ decreases.}
\begin{itemize}
\item Lemma \ref{lem:puhalskii-lower-bound} shows the lower bound
\begin{equation}\label{star1227}
\textcolor{black}{\liminf_{\varepsilon \to 0}} \liminf_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon}(\boldsymbol{q})) \ge - I^2_{0,\tau}( (0,\boldsymbol{p}), (0 ,\boldsymbol{p} - \boldsymbol{q})).
\end{equation} \item In Lemma \ref{lem:puhalskii-upper-bound}
we show the upper bound
\begin{equation}
\label{eq:upperbd_to_improve*}
\textcolor{black}{\limsup_{\varepsilon \to 0}} \limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon}(\boldsymbol{q})) \le - \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))].
\end{equation} \item Lemma \ref{lem:puhalskii-upper-bound-improvement} shows that when $p_1=0$ the upper and lower bounds coincide. \item Finally Proposition \ref{prop:minimizer-summary} shows that $$I^2_{0,\tau}( (0,\boldsymbol{p}), (0 ,\boldsymbol{p} - \boldsymbol{q})) = H({\boldsymbol{q}}) + H(\boldsymbol{p}-{\boldsymbol{q}}) - H({\boldsymbol{p}}) + K({\boldsymbol{q}})$$ completing the proof of Theorem \ref{thm:ldg_degree_distribution}. \end{itemize} Note that for $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$, $\zeta_0(t) = {x}_0^{(1)} + \psi(t)- \psi(t_1)$ for $t \in [t_1, t_2]$. Intuitively, on the event $\{(\boldsymbol{X}^n, Y^n) \in \mathcal{J}^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})\}$ the exploration remains in the same component over $[t_1,t_2]$, and on the smaller event $\{(\boldsymbol{X}^n, Y^n) \in \mathcal{J}^2_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})\}$
the exploration pace matches that for the discrete-time exploration process (with time steps of length $1/n$), in which
at each step $2$ half-edges are killed. The main idea in the proof of the theorem is that in characterizing the asymptotics of
the probability of interest one can restrict to $\mathcal{J}^2_{0,\tau}( (0,\boldsymbol{p}), (0 ,\boldsymbol{p} - \boldsymbol{q}))$,
which roughly means that one can restrict to trajectories that avoid the boundary and whose evolution matches that of the original discrete-time process of interest, removing the artificial ``continuous time'' aspect of the evolution.
Define for $\boldsymbol{x} = (x_k)_{k \in \mathbb{N}_0} \in \mathbb{R}_+^\infty$ and $\boldsymbol{\beta} = (\beta_k)_{k \in \mathbb{N}_0} \in \mathbb{R} \times [-1,0]^\infty$ with $\sum_{k=1}^\infty \beta_k \ge -1$, \begin{equation}
\label{eq:L}
\textcolor{black}{L(\boldsymbol{x},\boldsymbol{\beta}) \doteq
\sum_{k=0}^\infty \nu(k|\boldsymbol{\beta}) \log \left( \frac{\nu(k|\boldsymbol{\beta})}{\mu(k|\boldsymbol{x})} \right), \;
L_k(\boldsymbol{x},\boldsymbol{\beta}) \doteq \nu(k|\boldsymbol{\beta}) \log \left( \frac{\nu(k|\boldsymbol{\beta})}{\mu(k|\boldsymbol{x})} \right),} \end{equation} where \begin{align}
\nu(0|\boldsymbol{\beta}) & \doteq 1+\sum_{k=1}^\infty \beta_k, \quad \nu(k|\boldsymbol{\beta}) \doteq -\beta_k, \quad k \in \mathbb{N}, \label{eq:nu} \\
\mu(k|\boldsymbol{x}) & \doteq r_k(\boldsymbol{x}), \boldsymbol{x} \ne \boldsymbol{0}, \quad \mu(k|\boldsymbol{x}) \doteq 1_{\{k=0\}}, \boldsymbol{x} = \boldsymbol{0}, \quad k \in \mathbb{N}_0. \label{eq:mu} \end{align}
We set $L(\boldsymbol{x},\boldsymbol{\beta}) = \infty$ if $\boldsymbol{\beta} \in \mathbb{R} \times [-1,0]^\infty$ and $\sum_{k=1}^\infty \beta_k < -1$. Note that $\beta_0$ actually does not play a role in the definition of $L(\boldsymbol{x},\boldsymbol{\beta})$ or $\nu(\cdot|\boldsymbol{\beta})$. Later on $(\boldsymbol{x},\boldsymbol{\beta})$ will usually be replaced by $(\boldsymbol{\zeta}(t),\boldsymbol{\zeta}'(t))$ for some absolutely continuous path \textcolor{black}{$\boldsymbol{\zeta}=(\zeta_k)_{k \in {\mathbb{N}}_0}$ and $t \ge 0$, where $\boldsymbol{\zeta}'(t) \doteq (\zeta_k'(t))_{k \in {\mathbb{N}}_0}$}.
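Although it is not needed for the proofs, the local cost \eqref{eq:L} is easy to evaluate numerically. The following Python sketch does so for a finite truncation of the degree range; the reference probability vector, which in our setting is $\mu(\cdot|\boldsymbol{x})=r_\cdot(\boldsymbol{x})$ from \eqref{eq:mu}, is supplied by the caller, and the function names and the convention $0\log 0=0$ are only illustrative choices.
\begin{verbatim}
import math

def nu(beta):
    # Truncated version of the pmf nu(.|beta) in (eq:nu): the input is
    # beta = [beta_1, ..., beta_K] with each beta_k in [-1, 0] and
    # sum(beta) >= -1; the output is [nu(0|beta), ..., nu(K|beta)].
    return [1.0 + sum(beta)] + [-b for b in beta]

def local_cost(beta, mu):
    # Relative entropy sum_k nu(k|beta) log(nu(k|beta)/mu(k)) as in (eq:L).
    # `mu` is the reference probability vector, which in the paper is
    # (mu(k|x))_k = (r_k(x))_k; it must have the same length as nu(beta).
    # Conventions: 0 log 0 = 0, and a log(a/0) = +infinity for a > 0.
    total = 0.0
    for nk, mk in zip(nu(beta), mu):
        if nk < 0.0:
            return math.inf      # beta outside the admissible set
        if nk == 0.0:
            continue
        if mk <= 0.0:
            return math.inf
        total += nk * math.log(nk / mk)
    return total
\end{verbatim}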
In the next six lemmas $\boldsymbol{x}^{(1)} \doteq (x^{(1)}_0,\boldsymbol{p}^{(1)})$ and $\boldsymbol{x}^{(2)} \doteq (x^{(2)}_0,\boldsymbol{p}^{(2)})$ where $x^{(1)}_0,x^{(2)}_0 \in \mathbb{R}_+$ and ${\boldsymbol{0}} \le \boldsymbol{p}^{(2)} \le \boldsymbol{p}^{(1)} \le \boldsymbol{p}$. Let $\zbd \doteq \boldsymbol{x}^{(1)} - \boldsymbol{x}^{(2)}$. Define \begin{equation}
\label{eq:tau}
\varsigma(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)}) \doteq \frac{1}{2} (r(\boldsymbol{x}^{(1)}) - r(\boldsymbol{x}^{(2)})) = \frac{1}{2} \left((x^{(1)}_0-x^{(2)}_0) + \sum_{k=1}^\infty k (p_k^{(1)}-p_k^{(2)})\right). \end{equation} \textcolor{black}{We write $\varsigma \equiv \varsigma(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)})$ for short in the next six lemmas.} The following lemma relates $I^1,I^2$ and $L$.
\begin{Lemma}
\label{lem:I1I2L}
Fix $t_1 \in [0,\infty)$.
Suppose $\varsigma \ge 0$.
Let $\boldsymbol{x}^{(0)} \doteq (0,\boldsymbol{p})$.
Suppose there exists some $(\boldsymbol{\zeta}^*,\psi^*) \in \mathcal{J}^0_{0,t_1}(\boldsymbol{x}^{(0)}, \boldsymbol{x}^{(1)})$ such that $I_{0,t_1}(\boldsymbol{\zeta}^*,\psi^*) < \infty$.
Then
\begin{equation}
\label{eq:I1I2}
\inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}).
\end{equation}
Furthermore, for $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$,
\begin{equation}
\label{eq:IL}
I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) = \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds,
\end{equation}
and if $I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi)<\infty$, then $\sum_{k=1}^{\infty} \zeta'_k(t) \ge -1$ for a.e.\ $t \in [t_1, t_1+\varsigma]$.
In particular,
\begin{equation}
\label{eq:I1I2L}
\inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = \inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})} \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds.
\end{equation} \end{Lemma} \begin{Lemma}
\label{lem:uniqbeta}
Suppose that $\sum_{k=1}^\infty k z_k + z_0 > 2 \sum_{k=1}^\infty z_k$ and that either $x_0^{(2)} > 0$ or $z_1 > 0$.
Then there is a unique $\beta \equiv \beta (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in (0,1)$ such that
\begin{equation}
\label{eq:def-beta}
\sum_{k=1}^\infty k z_k = (1-\beta^2)\sum_{k=1}^\infty \frac{k z_k}{1-\beta^k} + x_0^{(2)} - \beta^2 x_0^{(1)}.
\end{equation} \end{Lemma}
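Equation \eqref{eq:def-beta} does not admit a closed-form solution in general, but its root is easy to locate numerically. The following Python sketch (purely illustrative; the function name, bracketing endpoints and tolerance are our choices) computes $\beta$ by bisection under the hypotheses of Lemma \ref{lem:uniqbeta}.
\begin{verbatim}
import math

def solve_beta(z, x1_0, x2_0, tol=1e-12):
    # Locates the root beta in (0,1) of (eq:def-beta) by bisection, writing
    # the equation as F(beta) = 0 with
    #   F(beta) = sum_k k z_k [ (1-beta^2)/(1-beta^k) - 1 ] + x2_0 - beta^2 x1_0,
    # where z = [z_1, z_2, ...] collects the coordinates k >= 1 of
    # x^(1) - x^(2) and x1_0, x2_0 are the 0-th coordinates of x^(1), x^(2).
    # Assumes the hypotheses of Lemma `lem:uniqbeta`, under which F is
    # positive near 0, negative near 1, and has a unique root.
    def F(beta):
        s = sum(k * zk * ((1.0 - beta ** 2) / (1.0 - beta ** k) - 1.0)
                for k, zk in enumerate(z, start=1) if zk != 0.0)
        return s + x2_0 - beta ** 2 * x1_0

    lo, hi = 1e-9, 1.0 - 1e-9
    f_lo = F(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = F(mid)
        if f_mid * f_lo > 0.0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}
For instance, \texttt{solve\_beta([0.1, 0.2, 0.3], 0.0, 0.0)} approximates $\beta(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)})$ when $x^{(1)}_0=x^{(2)}_0=0$ and $\boldsymbol{z}=(0,0.1,0.2,0.3,0,0,\dotsc)$.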
The construction below will be used to give an explicit representation of the minimizer of the right side of \eqref{eq:I1I2L}.
\begin{Construction}
\label{cons:cont} Suppose that either (i) or (ii) holds, where \begin{enumerate}[\upshape(i)] \item
$x_0^{(2)}=0$ and $z_1=0$. \item $\sum_{k=1}^\infty k z_k + z_0 > 2 \sum_{k=1}^\infty z_k$ and either $x_0^{(2)} > 0$ or $z_1 > 0$. \end{enumerate}
Let $\beta \equiv \beta (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in [0,1)$ be $0$ in case (i) and
be the unique solution in $(0,1)$ of \eqref{eq:def-beta} in case (ii) (as ensured by Lemma \ref{lem:uniqbeta}). Note that $\beta$ satisfies \eqref{eq:def-beta} in both cases (i) and (ii).
Define $\varsigma$ as in \eqref{eq:tau} and suppose that $\varsigma \ge 0$. Let $\tilde \varsigma \doteq \varsigma/(1-\beta^2)$ and $\tilde z_k \doteq z_k/(1-\beta^k)$ for $k \in \mathbb{N}$. Fix $t_1\ge 0$ and let $\boldsymbol{x}^{(0)}$,
$(\boldsymbol{\zeta}^*,\psi^*)$ be as in Lemma \ref{lem:I1I2L}. Define $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ by $({\boldsymbol{\tilde\zeta}}(t),\tilde{\psi}(t)) = (\boldsymbol{\zeta}^*(t),\psi^*(t))$ for $t \in [0,t_1]$ and for $t \in [t_1,t_1+\varsigma]$
\begin{align}
\tilde{\zeta}_k(t) & \doteq p_k^{(1)} - \tilde z_k \left[ 1- \left( 1-\frac{t-t_1}{\tilde \varsigma}\right)^{k/2} \right], \quad k\in \mathbb{N}, \label{eq:def-minimizer} \\
\tilde{\zeta}_0(t) & \doteq x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-\tilde{\zeta}_k(t)) - 2(t-t_1), \label{eq:def-minimizer-0} \\
\tilde{\psi}(t) & \doteq \tilde{\psi}(t_1) + \sum_{k=1}^\infty k(p_k^{(1)}-\tilde{\zeta}_k(t)) - 2(t-t_1). \label{eq:def-minimizer-psi} \end{align} \end{Construction}
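Since the path in Construction \ref{cons:cont} is fully explicit, it can be tabulated directly; the following Python sketch (illustrative only, with finitely many degree coordinates, the degenerate case $\varsigma=0$ excluded, and $\beta$ supplied by the caller, e.g.\ via the bisection sketched after Lemma \ref{lem:uniqbeta}) evaluates $\tilde{\zeta}_0(t)$ and $(\tilde{\zeta}_k(t))_{k \ge 1}$ from \eqref{eq:def-minimizer} and \eqref{eq:def-minimizer-0} at a single time point.
\begin{verbatim}
import math

def minimizer(t, t1, x1_0, x2_0, p1, z, beta):
    # Evaluates (tilde zeta_0(t), tilde zeta_1(t), tilde zeta_2(t), ...)
    # from (eq:def-minimizer)--(eq:def-minimizer-0) at a time t in
    # [t1, t1 + varsigma].  Argument names are ours: x1_0, x2_0 are the
    # 0-th coordinates of x^(1), x^(2); p1 = [p^(1)_1, p^(1)_2, ...] and
    # z = [z_1, z_2, ...] (same length as p1) are the coordinates k >= 1
    # of x^(1) and of x^(1) - x^(2); beta is as in Construction `cons:cont`
    # (beta = 0 in case (i), the root of (eq:def-beta) in case (ii)).
    varsigma = 0.5 * ((x1_0 - x2_0) + sum(k * zk for k, zk in enumerate(z, 1)))
    assert 0.0 < varsigma and t1 <= t <= t1 + varsigma
    tilde_varsigma = varsigma / (1.0 - beta ** 2)
    u = 1.0 - (t - t1) / tilde_varsigma      # lies in [beta^2, 1]
    zeta = [pk - (zk / (1.0 - beta ** k)) * (1.0 - u ** (k / 2.0))
            for k, (pk, zk) in enumerate(zip(p1, z), start=1)]
    zeta0 = (x1_0
             + sum(k * (pk - zk_t)
                   for k, (pk, zk_t) in enumerate(zip(p1, zeta), 1))
             - 2.0 * (t - t1))
    return zeta0, zeta
\end{verbatim}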
The next two lemmas give some properties of the various quantities in the above construction. Let \begin{align*}
\Xi &\doteq \Big\{(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}): \mbox{ for } i= 1,2,\; \boldsymbol{x}^{(i)} \doteq (x^{(i)}_0,\boldsymbol{p}^{(i)}), x^{(i)}_0 \in \mathbb{R}_+,\\ &\quad \quad {\boldsymbol{0}} \le \boldsymbol{p}^{(2)} \le \boldsymbol{p}^{(1)} \le \boldsymbol{p} \mbox{ and } \sum_{k=1}^\infty k (p^{(1)}_k - p^{(2)}_k) + (x^{(1)}_0- x^{(2)}_0)> 2 \sum_{k=1}^\infty (p^{(1)}_k - p^{(2)}_k)\Big\}. \end{align*} We will equip $\Xi$ with the topology corresponding to coordinatewise convergence. \begin{Lemma}
\label{lem:lemctybeta}
Both $\beta$ and $x_0^{(2)} \log \beta$ are continuous
on $\Xi$: for $(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \in \Xi$ with
$(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \to (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$,
$\beta^n \doteq \beta (\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \to \beta (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \beta$ and $x_0^{(2),n} \log \beta^n \to x_0^{(2)} \log \beta$. \end{Lemma}
\begin{Lemma}
\label{lem:minimizer-def}
Suppose that $\varsigma \ge 0$.
Also suppose that $\sum_{k=1}^\infty k z_k + z_0 > 2 \sum_{k=1}^\infty z_k$. Fix $t_1\ge 0$.
Let
$(\boldsymbol{\zeta}^*,\psi^*)$ be as in Lemma \ref{lem:I1I2L} and $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ be as introduced in Construction \ref{cons:cont}.
Then
\begin{enumerate}[\upshape(a)]
\item
$\varsigma \le \tilde \varsigma = \frac{1}{2} \left( x^{(1)}_0 + \sum_{k=1}^\infty k \tilde{z}_k \right)$.
\item
$({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
\item
$\tilde{\zeta}_0(t) > 0$ for $t \in (t_1,t_1+\varsigma)$.
\end{enumerate} \end{Lemma}
The next lemma calculates $\int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds$ for $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ introduced in Construction \ref{cons:cont}. Recall that $\boldsymbol{z}= \boldsymbol{x}^{(1)}- \boldsymbol{x}^{(2)}$.
\begin{Lemma}
\label{lem:minimizer-cost}
Suppose that $\varsigma \ge 0$. Suppose that either (i) or (ii) in Construction \ref{cons:cont} is satisfied. Also, let
$(\boldsymbol{\zeta}^*,\psi^*)$ be as in Lemma \ref{lem:I1I2L} with some $t_1\ge 0$ and
let $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ be given as in Construction \ref{cons:cont}.
Define the function $\tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ by
\begin{equation*}
\tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \doteq \frac{z_0+\sum_{k=1}^\infty k z_k}{2} \log(1-\beta^2) - \sum_{k=1}^\infty z_k \log(1-\beta^k) + x_0^{(2)} \log \beta.
\end{equation*}
For $\boldsymbol{x} \in \mathbb{R} \times \mathbb{R}_+^\infty$ such that $x_0+ \sum_{k=1}^{\infty} k x_k \ge 0$, define $\tilde{H}(\boldsymbol{x})$ by
\begin{equation*}
\tilde{H}(\boldsymbol{x}) \doteq \sum_{k=1}^\infty x_k \log x_k - \frac{x_0+\sum_{k=1}^\infty k x_k}{2} \log \frac{x_0+\sum_{k=1}^\infty k x_k}{2}.
\end{equation*}
Then
\begin{equation*}
\int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds = \tilde{H}(\boldsymbol{z}) + \tilde{H}(\boldsymbol{x}^{(2)}) - \tilde{H}(\boldsymbol{x}^{(1)}) + \tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) < \infty.
\end{equation*}
Moreover, the right hand side is lower semicontinuous in $(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$, namely
for $(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \in \Xi$ with
$(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \to (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$,
\begin{align*}
&\liminf_{n\to \infty} \left(\tilde{H}(\boldsymbol{z}^n) + \tilde{H}(\boldsymbol{x}^{(2),n}) - \tilde{H}(\boldsymbol{x}^{(1),n}) + \tilde{K}(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n})\right)\\
&\quad \ge \tilde{H}(\boldsymbol{z}) + \tilde{H}(\boldsymbol{x}^{(2)}) - \tilde{H}(\boldsymbol{x}^{(1)}) + \tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}),
\end{align*}
where $\boldsymbol{z}^n = \boldsymbol{x}^{(1),n}-\boldsymbol{x}^{(2),n}$, $\boldsymbol{z}=\boldsymbol{x}^{(1)}-\boldsymbol{x}^{(2)}$. \end{Lemma} Recall the functions $H$ and $K$ from \eqref{eq:hdefn} and \eqref{eq:kdefnn} respectively. We note that with $\tilde K$ and $\tilde H$ as introduced in the above lemma, for ${\boldsymbol{0}} \le \boldsymbol{q} \le \bar{\boldsymbol{q}} \le \boldsymbol{p}$ \begin{equation}
\label{eq:hktil}
H(\boldsymbol{q}) = \tilde H(0, \boldsymbol{q}), \; K(\boldsymbol{q}) = \tilde{K}((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}}-\boldsymbol{q})). \end{equation} The next lemma shows that $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ in Construction \ref{cons:cont} is a minimizer for the right side of \eqref{eq:I1I2L}.
\begin{Lemma}
\label{lem:minimizer-verify-general}
Suppose that $\varsigma \ge 0$. Suppose that $\sum_{k=1}^\infty k z_k + z_0 > 2 \sum_{k=1}^\infty z_k$.
Fix $t_1\ge 0$ and let
$(\boldsymbol{\zeta}^*,\psi^*)$ be as in Lemma \ref{lem:I1I2L} and $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ as introduced in Construction \ref{cons:cont}.
Then
\begin{equation}
I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = \inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})} \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds = \int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds.
\label{eq:i2ttau}
\end{equation} \end{Lemma}
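The closed-form expression in Lemma \ref{lem:minimizer-cost}, and hence via \eqref{eq:hktil} the quantity $H({\boldsymbol{q}}) + H(\bar{\boldsymbol{q}}-{\boldsymbol{q}}) - H(\bar{\boldsymbol{q}}) + K({\boldsymbol{q}})$ appearing in Proposition \ref{prop:minimizer-summary} below, can be evaluated directly. A minimal Python sketch, with our choice of names and with the convention $0\log 0=0$ made explicit, is as follows; as before, $\beta$ must be the constant associated with $(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)})$ in Construction \ref{cons:cont}.
\begin{verbatim}
import math

def xlogx(x):
    # Convention 0 log 0 = 0.
    return 0.0 if x == 0.0 else x * math.log(x)

def H_tilde(x0, x):
    # tilde H from Lemma `lem:minimizer-cost`, with x0 = x_0 and
    # x = [x_1, x_2, ...] a finitely supported truncation.
    m = 0.5 * (x0 + sum(k * xk for k, xk in enumerate(x, 1)))
    return sum(xlogx(xk) for xk in x) - xlogx(m)

def K_tilde(x1_0, x2_0, z, beta):
    # tilde K from Lemma `lem:minimizer-cost`, with z = [z_1, z_2, ...] the
    # coordinates k >= 1 of x^(1) - x^(2), z_0 = x1_0 - x2_0, and beta as
    # in Construction `cons:cont`.  We read x2_0 log(beta) as 0 if x2_0 = 0.
    z0 = x1_0 - x2_0
    first = (0.5 * (z0 + sum(k * zk for k, zk in enumerate(z, 1)))
             * math.log(1.0 - beta ** 2))
    second = sum(zk * math.log(1.0 - beta ** k)
                 for k, zk in enumerate(z, 1) if zk != 0.0)
    third = 0.0 if x2_0 == 0.0 else x2_0 * math.log(beta)
    return first - second + third

def minimal_cost(x1_0, x2_0, p1, p2, beta):
    # tilde H(z) + tilde H(x^(2)) - tilde H(x^(1)) + tilde K(x^(1), x^(2)),
    # i.e. the value computed in Lemma `lem:minimizer-cost`.
    z = [a - b for a, b in zip(p1, p2)]
    return (H_tilde(x1_0 - x2_0, z) + H_tilde(x2_0, p2)
            - H_tilde(x1_0, p1) + K_tilde(x1_0, x2_0, z, beta))
\end{verbatim}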
Proofs of Lemmas \ref{lem:I1I2L}--\ref{lem:minimizer-verify-general} are given in Section \ref{sec:pfsect4}. The following proposition summarizes an important consequence of the above lemmas for the case when $x_0^{(1)}=x_0^{(2)}=0$.
\begin{Proposition}
\label{prop:minimizer-summary}
$\,$
\phantomsection
\begin{enumerate}[(a)]
\item
Suppose ${\boldsymbol{0}} \le \boldsymbol{q} \le \bar{\boldsymbol{q}} \le \boldsymbol{p}$ and that either $\sum_{k=1}^\infty kq_k > 2\sum_{k=1}^\infty q_k$
or
$\sum_{k=1}^\infty kq_k = 2\sum_{k=1}^\infty q_k$ but $p_1=0$.
Given $t_1 \ge 0$, and with $\boldsymbol{x}^{(0)} \doteq (0, \boldsymbol{p})$, $\boldsymbol{x}^{(1)} \doteq (0, \bar{\boldsymbol{q}})$,
suppose there exists some $(\boldsymbol{\zeta}^*,\psi^*) \in \mathcal{J}^0_{0,t_1}(\boldsymbol{x}^{(0)}, \boldsymbol{x}^{(1)})$ such that $I_{0,t_1}(\boldsymbol{\zeta}^*,\psi^*) < \infty$.
Then
\[\Scale[0.9]{\begin{aligned}
\inf_{t_2 \ge t_1} I^1_{t_1,t_2}((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}}-\boldsymbol{q})) = I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}}-\boldsymbol{q}))
= H({\boldsymbol{q}}) + H(\bar{\boldsymbol{q}}-{\boldsymbol{q}}) - H(\bar{\boldsymbol{q}}) + K({\boldsymbol{q}}),
\end{aligned}}\]
where \textcolor{black}{$\tau \doteq \varsigma((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}}-\boldsymbol{q})) = \frac{1}{2} \sum_{k=1}^\infty kq_k$}.
\item
Suppose $p_1=0$, $\boldsymbol{q} \ge {\boldsymbol{0}}$, $\bar{\boldsymbol{q}} \ge {\boldsymbol{0}}$, $\boldsymbol{q} + \bar{\boldsymbol{q}} \le \boldsymbol{p}$, $\sum_{k=1}^\infty kq_k \ge 2\sum_{k=1}^\infty q_k$, and $\sum_{k=1}^\infty k\bar{q}_k \ge 2\sum_{k=1}^\infty \bar{q}_k$.
Let \textcolor{black}{$\tau \doteq \varsigma((0,\boldsymbol{p}), (0,\boldsymbol{p}-\boldsymbol{q})) = \frac{1}{2} \sum_{k=1}^\infty kq_k$} and $\bar{\tau} \doteq \frac{1}{2} \sum_{k=1}^\infty k \bar{q}_k$.
Then
\[\begin{aligned}
&I^2_{0,\bar \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}})) + I^2_{\bar \tau, \bar \tau + \tau}((0,\boldsymbol{p} - \bar{\boldsymbol{q}}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}} - \boldsymbol{q}))\\
&\quad= I^2_{0, \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})) + I^2_{ \tau, \tau +\bar \tau}((0,\boldsymbol{p} - \boldsymbol{q}), (0,\boldsymbol{p} - \boldsymbol{q} - \bar{\boldsymbol{q}})).
\end{aligned}\]
\end{enumerate} \end{Proposition}
\begin{proof}
(a) The first equality in part (a) is a consequence of Lemma \ref{lem:I1I2L}.
For the second equality, consider first the case $\sum_{k=1}^\infty kq_k > 2\sum_{k=1}^\infty q_k$.
From \eqref{eq:hktil} we have
\[\Scale[0.9]{\begin{aligned}
H({\boldsymbol{q}}) + H(\bar{\boldsymbol{q}}-{\boldsymbol{q}}) - H(\bar{\boldsymbol{q}}) + K({\boldsymbol{q}})
= \tilde{H}(0,{\boldsymbol{q}}) + \tilde{H}(0,\bar{\boldsymbol{q}}-{\boldsymbol{q}}) - \tilde{H}(0,\bar{\boldsymbol{q}}) + \tilde{K}((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}}-\boldsymbol{q})).
\end{aligned}}\]
Applying Lemma \ref{lem:minimizer-cost} with $\boldsymbol{x}^{(1)} = (0, \bar{\boldsymbol{q}})$, $\boldsymbol{x}^{(2)}
= (0, \bar{\boldsymbol{q}}- \boldsymbol{q})$, the above expression equals
$ \int_{t_1}^{t_1+\tau} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds$
where ${\boldsymbol{\tilde\zeta}}$ is defined by \eqref{eq:def-minimizer} -- \eqref{eq:def-minimizer-psi}.
Now from Lemma \ref{lem:minimizer-verify-general}
$$I^2_{t_1,t_1+\tau}((0, \bar{\boldsymbol{q}}), (0, \bar{\boldsymbol{q}}- \boldsymbol{q})) = H({\boldsymbol{q}}) + H(\bar{\boldsymbol{q}}-{\boldsymbol{q}}) - H(\bar{\boldsymbol{q}}) + K({\boldsymbol{q}})$$
which proves the second equality in part (a) for the considered case.
Now we consider the case $\sum_{k=1}^\infty kq_k = 2\sum_{k=1}^\infty q_k$ and $p_1=0$.
Since $p_1=0$ and $\boldsymbol{q} \le \boldsymbol{p}$, we have $q_1=0$; combined with $\sum_{k=1}^\infty (k-2)q_k = 0$ and $q_k \ge 0$ for all $k$, this forces $q_k=0$ for each $k \ne 2$.
Then for any $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\tau}((0, \bar{\boldsymbol{q}}), (0, \bar{\boldsymbol{q}}- \boldsymbol{q}))$ with $I_{t_1,t_1+\tau}(\boldsymbol{\zeta},\psi)<\infty$, we must have (see \eqref{eq:psi} and the definition of $\mathcal{J}^2_{t_1,t_2}$) $\zeta_2'(t)=-1$ and $\zeta_k'(t)=\psi'(t)=0$, $k \ne 2$ for $t \in [t_1,t_1+\tau]$.
Also, in this case $q_1=0$ and so we are in case (i) of Construction \ref{cons:cont} with $\boldsymbol{x}^{(1)} = (0, \bar{\boldsymbol{q}})$ and
$\boldsymbol{x}^{(2)} = (0, \bar{\boldsymbol{q}}- \boldsymbol{q})$.
It is easily checked that
any $(\boldsymbol{\zeta},\psi)$ with the above properties
is the same as the minimizer $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ over
$[t_1, t_1+\tau]$.
Thus using Lemma \ref{lem:I1I2L} and Lemma \ref{lem:minimizer-cost} we get
\begin{align*}
I^2_{t_1, t_1+\tau}((0,\bar{\boldsymbol{q}}), (0,\bar{\boldsymbol{q}} - \boldsymbol{q})) & =
\inf_{(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\tau}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})} \int_{t_1}^{t_1+\tau} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds\\
&=\int_{t_1}^{t_1+\tau} L({\boldsymbol{\tilde\zeta}}(s),{\boldsymbol{\tilde\zeta}}'(s)) \, ds \\
& = \tilde{H}(0,{\boldsymbol{q}}) + \tilde{H}(0,{\bar{\boldsymbol{q}}}-{\boldsymbol{q}}) - \tilde{H}(0,{\bar{\boldsymbol{q}}}) + \tilde{K}((0,{\bar{\boldsymbol{q}}}), (0,{\bar{\boldsymbol{q}}}-\boldsymbol{q})) \\
& = H({\boldsymbol{q}}) + H({\bar{\boldsymbol{q}}}-{\boldsymbol{q}}) - H({\bar{\boldsymbol{q}}}) + K({\boldsymbol{q}}).
\end{align*}
This proves part (a) in this case.
(b)
From part (a),
$$I^2_{0,\bar \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}})) = H({\bar{\boldsymbol{q}}}) + H({\boldsymbol{p}}-{\bar{\boldsymbol{q}}}) - H({\boldsymbol{p}}) + K(\bar{\boldsymbol{q}})$$
and since the right side is finite, again from part (a),
$$I^2_{\bar \tau, \bar \tau + \tau}((0,\boldsymbol{p} - \bar{\boldsymbol{q}}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}} - \boldsymbol{q})) = H({\boldsymbol{q}}) + H({\boldsymbol{p}}-\bar{\boldsymbol{q}}-\boldsymbol{q}) - H({\boldsymbol{p}}-\bar{\boldsymbol{q}}) + K({\boldsymbol{q}}).$$
Therefore,
\begin{align}
& I^2_{0,\bar \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}})) + I^2_{\bar \tau, \bar \tau + \tau}((0,\boldsymbol{p} - \bar{\boldsymbol{q}}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}} - \boldsymbol{q})) \notag \\
& = \left[ H({\bar{\boldsymbol{q}}}) + H({\boldsymbol{p}}-{\bar{\boldsymbol{q}}}) - H({\boldsymbol{p}}) + K(\bar{\boldsymbol{q}}) \right] + \left[ H({\boldsymbol{q}}) + H({\boldsymbol{p}}-\bar{\boldsymbol{q}}-\boldsymbol{q}) - H({\boldsymbol{p}}-\bar{\boldsymbol{q}}) + K({\boldsymbol{q}}) \right] \notag \\
& = \left[ H({\boldsymbol{q}}) + H({\boldsymbol{p}}-{\boldsymbol{q}}) - H({\boldsymbol{p}}) + K({\boldsymbol{q}}) \right] + \left[ H({\bar{\boldsymbol{q}}}) + H({\boldsymbol{p}}-\boldsymbol{q}-\bar{\boldsymbol{q}}) - H({\boldsymbol{p}}-\boldsymbol{q}) + K(\bar{\boldsymbol{q}}) \right] \notag \\
& = I^2_{0, \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})) + I^2_{ \tau, \tau +\bar \tau}((0,\boldsymbol{p} - \boldsymbol{q}), (0,\boldsymbol{p} - \boldsymbol{q} - \bar{\boldsymbol{q}})), \label{eq:minimizer-summary-pf}
\end{align}
where the last line follows, once more, from (a). This proves (b) and completes the proof. \end{proof}
\section{Proof of Theorem \ref{thm:ldg_degree_distribution}} \label{sec:pf_LDP_degree}
In this section we will use Theorem \ref{thm:main-ldp} and results in Section \ref{sec:cal} to prove Theorem \ref{thm:ldg_degree_distribution}. Let $0 \le \boldsymbol{q} \le \boldsymbol{p}$. Recall the \textcolor{black}{(distributionally equivalent)} representation of the event $E^{n,\varepsilon}(\boldsymbol{q})$ given in \eqref{eq:enverps}, \textcolor{black}{in terms of $\Xbd^n$}. Define \begin{align}
E^{n,\varepsilon,T}(\boldsymbol{q}) & \doteq \{ \exists\, t_1, t_2 \in [0,T] \text{ such that } X^n_0(t_1-) = X^n_0(t_2) = - 1/n, X^n_0(t) > - 1/n \mbox{ for } t\in [t_1, t_2), \notag \\
& \qquad |X_k^n(t_1-) - X_k^n(t_2) - q_k| \le \varepsilon \mbox{ for all } k \in \mathbb{N} \} \notag \\
& = \{ \exists\, t_1, t_2 \in [0,T] \text{ such that } X^n_0(t_1-) = X^n_0(t_2) = - 1/n, \notag \\
& \qquad Y^n(t) > Y^n(t_1-) - 2/n \mbox{ for } t\in [t_1, t_2), |X_k^n(t_1-) - X_k^n(t_2) - q_k| \le \varepsilon \mbox{ for all } k \in \mathbb{N} \}. \label{eq:app_E_n_eps} \end{align} Note that $E^{n,\varepsilon,T}(\boldsymbol{q}) \subset E^{n,\varepsilon}(\boldsymbol{q})$ but they are not equal, since the continuous-time EEA may not terminate by time $T$. Consider the event that the continuous-time EEA terminates before time $T$, namely the event $F^{n,T}$ defined as \begin{equation}
\label{eq:F-n-T}
F^{n,T} \doteq \{X^n(T)=(-{1}/{n},\boldsymbol{0})\}. \end{equation} Then \begin{equation}
\label{eq:E-Etil}
E^{n,\varepsilon}(\boldsymbol{q}) \cap F^{n,T} \subset E^{n,\varepsilon,T}(\boldsymbol{q}) \subset E^{n,\varepsilon}(\boldsymbol{q}). \end{equation} The following lemma guarantees that in order to study the exponential rate of decay of $P(E^{n,\varepsilon}(\boldsymbol{q}))$, it suffices to study that of $P(E^{n,\varepsilon,T}(\boldsymbol{q}))$.
\begin{Lemma}
\label{lem:choosing-T}
$\limsup_{n \to \infty} \frac{1}{n} \log P((F^{n,T})^c) \to -\infty$ as $T \to \infty$. \end{Lemma}
\begin{proof}
Recall from Section \ref{sec:eea} that the discrete-time EEA terminates in at most $nN$ steps where $N \doteq \lfloor \sup_n \frac{1}{2} \sum_{k=1}^\infty k \frac{n_k}{n} \rfloor + 1 < \infty$.
So since the discrete time EEA is the embedded chain associated with the continuous time EEA (see Section \ref{sec:model}), $\boldsymbol{X}^n$ will have at most $nN$ jumps before arriving at the absorbing state $(-\frac{1}{n},\boldsymbol{0})$.
Since the total jump rate for $\boldsymbol{X}^n(t)$ at any instant before getting absorbed is $n\sum_{k=0}^\infty r_k(\boldsymbol{X}^n(t)) = n$, we have
\begin{equation*}
P(F^{n,T}) \ge P \left( \sum_{i=1}^{nN} \xi_i \le T \right) = P \left( \frac{1}{n} \sum_{i=1}^{nN} \tilde{\xi}_i \le T \right),
\end{equation*}
where $\xi_i$ are i.i.d. $\exp(n)$ and $\tilde{\xi}_i$ are i.i.d. $\exp(1)$.
Therefore
\begin{align*}
\limsup_{n \to \infty} \frac{1}{n} \log P((F^{n,T})^c) & \le
N \limsup_{n \to \infty} \frac{1}{nN} \log P \left( \frac{1}{nN} \sum_{i=1}^{nN} \tilde{\xi}_i > \frac{T}{N} \right) \\
& = -N L_1 \left( \frac{T}{N} \right) \to -\infty
\end{align*}
as $T \to \infty$, where the equality follows from Cram\'{e}r's theorem (applied with $T/N > 1$, which holds for all sufficiently large $T$) and where
$L_1(x) \doteq x - 1 - \log x$ for $x > 0$ is the Legendre transform $L_1(x) = \sup_{\theta < 1}\,[\theta x + \log(1-\theta)]$ of the log-moment generating function $\theta \mapsto -\log(1-\theta)$ of $\tilde{\xi}_1$. \end{proof}
The following lemma gives an upper bound for the exponential rate of decay of
$P( E^{n,\varepsilon}(\boldsymbol{q}) )$.
\begin{Lemma}
\label{lem:puhalskii-upper-bound}
Suppose $\sum_{k=1}^\infty k q_k > 2\sum_{k=1}^\infty q_k$.
Then the upper bound in \eqref{eq:upperbd_to_improve*} holds, namely
\begin{equation}
\label{eq:upperbd_to_improve}
\textcolor{black}{\limsup_{\varepsilon \to 0}} \limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon}(\boldsymbol{q})) \le - \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))],
\end{equation}
\textcolor{black}{where $\tau \doteq \varsigma((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q})) = \frac{1}{2} \sum_{k=1}^\infty kq_k$ for each $\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}$.}
\end{Lemma}
\begin{proof}
From \eqref{eq:E-Etil} we have
\begin{equation*}
P( E^{n,\varepsilon}(\boldsymbol{q})) \le P (E^{n,\varepsilon,T}(\boldsymbol{q})) + P ( (F^{n,T})^c)
\end{equation*}
and hence
\begin{align*}
\limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon}(\boldsymbol{q}))
\le \max\left\{ \limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})), \limsup_{n \to \infty} \frac{1}{n} \log P ( (F^{n,T})^c)\right\}.
\end{align*}
In view of Lemma \ref{lem:choosing-T}, it suffices to show that for all sufficiently large $T$
\begin{equation*}
\textcolor{black}{\limsup_{\varepsilon \to 0}} \limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})) \le - \inf_{\boldsymbol{q}\le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))].
\end{equation*}
Let $\mathfrak{P}_T \doteq \mathbb{D}([0,T]:\mathbb{R} \times \mathbb{R}_+^\infty \times \mathbb{R})$ and consider
\begin{align*}
\tilde{E}^{\varepsilon,T}(\boldsymbol{q}) &
\doteq \{ (\boldsymbol{\zeta},\psi) \in \mathfrak{P}_T : \exists\, t_1, t_2 \in [0,T] \text{ such that } \zeta_0(t_1-) = \zeta_0(t_2) \le 0, \\
& \qquad \psi(t) \ge \psi(t_1-) - \varepsilon \mbox{ for } t \in [t_1, t_2), |\zeta_k(t_1-) - \zeta_k(t_2) -q_k| \le \varepsilon \mbox{ for all } k \in \mathbb{N}\}.
\end{align*}
Denote the closure of $\tilde{E}^{\varepsilon,T}(\boldsymbol{q})$ by $cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q})$.
From the definition in \eqref{eq:app_E_n_eps}, when $n > 2 \varepsilon^{-1}$
\begin{equation*}
E^{n,\varepsilon,T}(\boldsymbol{q}) \subset \{ (\boldsymbol{X}^n,Y^n) \in \tilde{E}^{\varepsilon,T}(\boldsymbol{q})\} \subset \{ (\boldsymbol{X}^n,Y^n) \in cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q})\}.
\end{equation*}
From this and Theorem \ref{thm:main-ldp} we have
\begin{equation*}
\limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})) \le \limsup_{n \to \infty } \frac{1}{n} \log P( (\boldsymbol{X}^n,Y^n) \in cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q})) \le - \inf_{(\boldsymbol{\zeta},\psi) \in cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q})} I_T(\boldsymbol{\zeta},\psi).
\end{equation*}
Since $I_T(\boldsymbol{\zeta},\psi) < \infty$ only when $(\boldsymbol{\zeta},\psi) \in \mathcal{C}_T$, we have
\begin{align*}
\limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})) \le - \inf_{(\boldsymbol{\zeta},\psi) \in cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T} I_T(\boldsymbol{\zeta},\psi).
\end{align*}
It is easy to see that $cl\tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T = \tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T$.
Thus we have
\begin{align*}
\limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})) \le - \inf_{(\boldsymbol{\zeta},\psi) \in \tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T} I_T(\boldsymbol{\zeta},\psi).
\end{align*}
Letting
\begin{align*}
\tilde{E}^T(\boldsymbol{q}) &
\doteq\{ (\boldsymbol{\zeta},\psi) \in \mathcal{C}_T : \exists\, t_1, t_2 \in [0,T] \text{ such that } \zeta_0(t_1) = \zeta_0(t_2) \le 0, \\
& \qquad \psi(t) \ge \psi(t_1) \mbox{ for } t \in [t_1, t_2), \zeta_k(t_1) - \zeta_k(t_2) = q_k \mbox{ for all } k \in \mathbb{N}\},
\end{align*}
we have $\tilde{E}^T(\boldsymbol{q}) = \bigcap_{\varepsilon > 0} \left( \tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T \right)$.
From this, the lower semi-continuity and compactness of level sets of $I_T(\boldsymbol{\zeta},\psi)$ (since $I_T$ is a rate function; see Theorem \ref{thm:main-ldp}), it follows
\begin{equation*}
\textcolor{black}{\limsup_{\varepsilon \to 0}} \limsup_{n \to \infty } \frac{1}{n} \log P( E^{n,\varepsilon,T}(\boldsymbol{q})) \le - \textcolor{black}{\liminf_{\varepsilon \to 0}} \inf_{(\boldsymbol{\zeta},\psi) \in \tilde{E}^{\varepsilon,T}(\boldsymbol{q}) \cap \mathcal{C}_T} I_T(\boldsymbol{\zeta},\psi) = - \inf_{(\boldsymbol{\zeta},\psi) \in \tilde{E}^T(\boldsymbol{q})} I_T(\boldsymbol{\zeta},\psi).
\end{equation*}
Splitting the time interval $[0,T]$ of a path $(\boldsymbol{\zeta},\psi) \in \tilde{E}^T(\boldsymbol{q})$ according to $t \le t_1$, $t_1 \le t \le t_2$ and $t \ge t_2$,
\begin{align*}
\inf_{(\boldsymbol{\zeta},\psi) \in \tilde{E}^T(\boldsymbol{q})} I_T(\boldsymbol{\zeta},\psi)
& = \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, 0 \le t_1 < t_2 \le T} [I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^1_{t_1,t_2}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] \\
& \ge \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, 0 \le t_1 < t_2<\infty} [I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^1_{t_1,t_2}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] \\
& = \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0} [I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1, t_1 + \tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))],
\end{align*}
where the last line follows from Lemma \ref{lem:I1I2L}.
The result follows. \end{proof}
The following lemma improves the upper bound \eqref{eq:upperbd_to_improve} in Lemma \ref{lem:puhalskii-upper-bound} when $p_1=0$.
\begin{Lemma}
\label{lem:puhalskii-upper-bound-improvement}
Suppose $p_1=0$ and ${\boldsymbol{0}} \le \boldsymbol{q} \le \boldsymbol{p}$.
Then
\begin{enumerate}[\upshape(a)]
\item
$I^0_{0,t_1}((0,\boldsymbol{p}),(0,{\boldsymbol{q}})) = I^1_{0,t_1}((0,\boldsymbol{p}),(0,{\boldsymbol{q}}))$ for each $t_1 \ge 0$.
\item
Let $\tau \doteq \frac{1}{2} \sum_{k=1}^\infty kq_k$ as in Lemma \ref{lem:puhalskii-upper-bound}.
The infimum on the right side of \eqref{eq:upperbd_to_improve} is achieved at $t_1=0$:
\begin{equation*}
\inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] = I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})).
\end{equation*}
\end{enumerate} \end{Lemma}
\begin{proof}
(a)
Fix $t_1 \ge 0$.
It suffices to show that if $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^0_{0,t_1}((0,\boldsymbol{p}),(0,{\boldsymbol{q}}))$ satisfies
$I_{0,t_1}(\boldsymbol{\zeta},\psi)<\infty$ then $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{0,t_1}((0,\boldsymbol{p}),(0,{\boldsymbol{q}}))$.
For such a pair $(\boldsymbol{\zeta},\psi)$, let $\boldsymbol{\varphi} \in \mathcal{S}_{t_1}(\boldsymbol{\zeta},\psi)$ be such that the associated cost is finite.
In particular $\psi$, and consequently $\zeta_0$, is absolutely continuous.
Since $\zeta_0(t) = \Gamma(\psi)(t) \ge 0$ for $t \in [0,t_1]$, we have
\begin{equation}
\label{eq:puhalskii-upper-bound-improvement-1}
1_{\{\zeta_0(t)>0\}}\zeta_0'(t) = 1_{\{\zeta_0(t)>0\}}\psi'(t), \quad 1_{\{\zeta_0(t)=0\}}\zeta_0'(t) = 0, \mbox{ a.e. } t \in [0,t_1].
\end{equation}
Since $p_1=0$, we see from \eqref{eq:psi} that $1_{\{\zeta_0(t)=0\}}\psi'(t) \ge 0$ for a.e.\ $t \in [0,t_1]$. Indeed, when $p_1=0$ the term for $k=1$ in the sum on the right side of \eqref{eq:psi} is zero. Also, the term for $k=2$ is always zero and the integrand for $k=0$ is zero on the set $\{\zeta_0(t)=0\}$. This shows that,
on this set, the derivative of the sum on the right side of \eqref{eq:psi} is nonnegative. Combining this with \eqref{eq:puhalskii-upper-bound-improvement-1}, we have for $t \in [0,t_1]$,
\begin{align*}
\psi(t) & = \int_0^t 1_{\{\zeta_0(s)>0\}}\psi'(s)\,ds + \int_0^t 1_{\{\zeta_0(s)=0\}}\psi'(s)\,ds \ge \int_0^t 1_{\{\zeta_0(s)>0\}}\zeta_0'(s)\,ds \\
& = \int_0^t 1_{\{\zeta_0(s)>0\}}\zeta_0'(s)\,ds + \int_0^t 1_{\{\zeta_0(s)=0\}}\zeta_0'(s)\,ds = \zeta_0(t) \ge 0.
\end{align*}
This implies $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{0,t_1}((0,\boldsymbol{p}),(0,\boldsymbol{q}))$
and part (a) follows.
(b)
For $\boldsymbol{0} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}$, let $\bar{\boldsymbol{q}} \doteq \boldsymbol{p} - \bar{\boldsymbol{p}}$ and $\bar{\tau} \doteq \frac{1}{2} \sum_{k=1}^\infty k \bar{q}_k$.
Since $p_1=0$, we always have $\sum_{k=1}^\infty k q_k \ge 2\sum_{k=1}^\infty q_k$ and $\sum_{k=1}^\infty k \bar{q}_k \ge 2\sum_{k=1}^\infty \bar{q}_k$.
Therefore
\begin{align*}
& \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^0_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] \\
& = \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}, t_1 \ge 0}[ I^1_{0,t_1}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{t_1,t_1+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] \\
& = \inf_{\boldsymbol{q} \le \bar{\boldsymbol{p}} \le \boldsymbol{p}}[ I^2_{0,{\bar{\tau}}}((0,\boldsymbol{p}), (0,\bar{\boldsymbol{p}})) + I^2_{{\bar{\tau}},{\bar{\tau}}+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))] \\
& = \inf_{\boldsymbol{0} \le \bar{\boldsymbol{q}} \le \boldsymbol{p} - \boldsymbol{q}} [I^2_{0,\bar \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}})) + I^2_{\bar \tau, \bar \tau + \tau}((0,\boldsymbol{p} - \bar{\boldsymbol{q}}), (0,\boldsymbol{p} - \bar{\boldsymbol{q}} - \boldsymbol{q}))]
\end{align*}
where the first equality uses part (a) with ${\boldsymbol{q}} = \bar{\boldsymbol{p}}$ and the second equality follows from Lemma \ref{lem:I1I2L} and the observation that $I^2_{t,t+\tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q})) = I^2_{t', t' + \tau}((0,\bar{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}} - \boldsymbol{q}))$ for all $t, t'$ as long as
$I^0_{0, s}((0,{\boldsymbol{p}}), (0,\bar{\boldsymbol{p}}))<\infty$ for $s= t, t'$.
Using Proposition \ref{prop:minimizer-summary}(b), the right side on the last line equals
\begin{align*}
& \inf_{\boldsymbol{0} \le \bar{\boldsymbol{q}} \le \boldsymbol{p} - \boldsymbol{q}} [I^2_{0, \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})) + I^2_{ \tau, \tau +\bar \tau}((0,\boldsymbol{p} - \boldsymbol{q}), (0,\boldsymbol{p} - \boldsymbol{q} - \bar{\boldsymbol{q}}))]\\
& = I^2_{0, \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})) + \inf_{\boldsymbol{0} \le \bar{\boldsymbol{q}} \le \boldsymbol{p} - \boldsymbol{q}} I^2_{ \tau, \tau +\bar \tau}((0,\boldsymbol{p} - \boldsymbol{q}), (0,\boldsymbol{p} - \boldsymbol{q} - \bar{\boldsymbol{q}}))\\
& = I^2_{0, \tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})),
\end{align*}
where the last equality follows since each term in the infimum is nonnegative and the choice $\bar\qbd=\boldsymbol{0}$ (for which $\bar\tau=0$) makes the second term vanish.
This completes the proof. \end{proof}
Next we will prove the lower bound.
\begin{Lemma}
\label{lem:puhalskii-lower-bound}
Suppose ${\boldsymbol{0}} \le \boldsymbol{q} \le \boldsymbol{p}$ and $\sum_{k=1}^\infty k q_k > 2\sum_{k=1}^\infty q_k$.
Let $\tau \doteq \frac{1}{2} \sum_{k=1}^\infty kq_k$.
Then the lower bound in \eqref{star1227} holds. \end{Lemma}
\begin{proof}
\textcolor{black}{Let $({\boldsymbol{\tilde\zeta}}(t),\tilde{\psi}(t))$ be as introduced in Construction \ref{cons:cont} for $t \le \tau$}, with $t_1=0$, $\boldsymbol{x}^{(1)} = (0,\boldsymbol{p})$ and $\boldsymbol{x}^{(2)} = (0,\boldsymbol{p}-\boldsymbol{q})$.
We define \textcolor{black}{$({\boldsymbol{\tilde\zeta}}(t),\tilde{\psi}(t))$} for $t > \tau$ through \eqref{eq:psi}-\eqref{eq:phi_k} by setting $\varphi_k(t,y)=1$ for all $k,y$ and $t>\tau$. Then $I_t({\boldsymbol{\tilde\zeta}},\tilde{\psi}) = I_\tau({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ for all $t > \tau$.
So by Lemmas \ref{lem:I1I2L} and \ref{lem:minimizer-verify-general}, for $t\ge \tau$
\begin{equation}
\label{eq:puhalskii-lower-bound-minimizer}
I_t({\boldsymbol{\tilde\zeta}},\tilde{\psi}) = I_\tau({\boldsymbol{\tilde\zeta}},\tilde{\psi}) = \int_0^\tau L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds = I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})).
\end{equation}
For $\delta \in (0,1)$ consider the set
\begin{equation}
\label{eq:Gtil-delta}
\Scale[0.9]{\tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \doteq \{ (\boldsymbol{\zeta},\psi) \in \mathbb{D}([0,\infty):\mathbb{R} \times \mathbb{R}_+^\infty \times \mathbb{R}) : \sup_{t \in [0,\tau]}|\zeta_k(t) - \tilde{\zeta}_k(t)| < \delta, \mbox{ for all } k = 0, 1 ,2,\dotsc, \lfloor\delta^{-1}\rfloor\}.}
\end{equation}
Let $\tau^n \doteq \inf\{t \ge \tau : X^n_0(t) = -\frac{1}{n}\}$.
Then $\tau^n < \infty$ a.s.
Define for odd integer $j \ge -1$,
\begin{equation*}
G^n_j \doteq \left\{ X^n_0(\tau) = \frac{j}{n}, X^n_k(\tau^n) = X^n_k(\tau), k \in \mathbb{N} \right\},
\end{equation*}
and for even integer $j \ge -1$,
\begin{equation*}
G^n_j \doteq \left\{ X^n_0(\tau) = \frac{j}{n}, \sum_{k=1}^\infty (X^n_k(\tau^n) - X^n_k(\tau)) = - \frac{1}{n} \right\}.
\end{equation*}
Intuitively, $G^n_j$ describes the event that from time instant $\tau$ to the time $\tau^n$ at which the current component is fully explored, the continuous-time EEA does not wake up any sleeping vertices, with the exception that, if the number of active half-edges at time $\tau$ is odd (namely $X^n_0(\tau) = \frac{j}{n}$ for some even integer $j \ge -1$), exactly one sleeping vertex (necessarily with odd degree) is woken up.
Consider the event
\begin{equation*}
A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \doteq \left\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\right\} \bigcap \left( \bigcup_{j=-1}^\infty G^n_j \right).
\end{equation*}
Fix $\varepsilon \in (0,1)$.
We claim that there exist $\delta_0>0$ and $n_0 >0$ such that
\begin{equation}
\label{eq:puhalskii-lower-bound-claim}
A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \subset E^{n,\varepsilon}(\boldsymbol{q}) \mbox{ for all } \delta < \delta_0 \mbox{ and } n > n_0.
\end{equation}
To see this, first note that by Assumption \ref{asp:exponential-boundN}, there exists $M \in \mathbb{N}$ such that
\begin{equation}
\label{eq:puhalskii-lower-bound-tail}
\sup_{n \in \mathbb{N}} \sum_{k=M}^\infty k \frac{n_k}{n} < \frac{\varepsilon}{2},\quad \sum_{k=M}^\infty k p_k < \frac{\varepsilon}{2}.
\end{equation}
By continuity of ${\boldsymbol{\tilde\zeta}}$, there exists $\varepsilon_0 > 0$ such that
\begin{align}
&|\tilde{\zeta}_k(t) - \tilde{\zeta}_k(0)| < \frac{\varepsilon}{4} \mbox{ for all } t \in [0,\varepsilon_0], k=0,1,\dotsc,M, \label{eq:puhalskii-lower-bound-0} \\
&|\tilde{\zeta}_k(t) - \tilde{\zeta}_k(\tau)| < \frac{\varepsilon}{4} \mbox{ for all } t \in [\tau - \varepsilon_0, \tau], k=0,1,\dotsc,M. \label{eq:puhalskii-lower-bound-tau}
\end{align}
From Lemma \ref{lem:minimizer-def}(c) we have $\tilde{\zeta}_0(t)>0$ for all $t \in (0,\tau)$.
Since $\tilde{\zeta}_0(t)$ is continuous,
\begin{equation*}
\delta_0 \doteq \left(\inf_{t \in [\varepsilon_0, \tau - \varepsilon_0]}\tilde{\zeta}_0(t)\right) \wedge \frac{\varepsilon}{4} \wedge \frac{1}{M} > 0.
\end{equation*}
Take $n_0 > \frac{4}{\varepsilon}$. We now show \eqref{eq:puhalskii-lower-bound-claim} with this choice of $n_0$ and $\delta_0$.
Fix $\delta<\delta_0$ and $n>n_0$ and
consider $\omega \in A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi})$.
For $t \in [\varepsilon_0, \tau - \varepsilon_0]$, since $|X^n_0(t)-\tilde{\zeta}_0(t)| < \delta < \delta_0 \le \tilde{\zeta}_0(t)$, we have $\inf_{t \in [\varepsilon_0, \tau - \varepsilon_0]}X^n_0(t) > 0$.
So there exist $t_1^n \in [0,\varepsilon_0]$ and $t_2^n \in [\tau-\varepsilon_0, \tau^n]$ such that
\begin{equation}
\label{eq:puhalskii-lower-bound-degree-0}
X^n_0(t_1^n-)=X^n_0(t_2^n)=-\frac{1}{n}, \quad X^n_0(t) > -\frac{1}{n} \mbox{ for } t \in [t_1^n,t_2^n),
\end{equation}
where by convention $X^n_0(0-)=X^n_0(0) = -1/n$.
For $k \ge M$, it follows from \eqref{eq:puhalskii-lower-bound-tail} that
\begin{equation}
\label{eq:puhalskii-lower-bound-degree-1}
|X_k^n(t_1^n-) - X_k^n(t_2^n) - q_k| \le |X_k^n(t_1^n-) - X_k^n(t_2^n)| + q_k \le \frac{n_k}{n} + p_k < \varepsilon.
\end{equation}
For $1 \le k \le M \le \lfloor \delta^{-1} \rfloor$,
\begin{align}
| X_k^n(t_1^n-) - X_k^n(t_2^n) - q_k|
& = |(X_k^n(t_1^n-) - X_k^n(t_2^n)) - ( \tilde{\zeta}_k(0) - \tilde{\zeta}_k(\tau))| \notag \\
& \le |X_k^n(t_1^n-)-\tilde{\zeta}_k(0)| + |X_k^n(t_2^n)-\tilde{\zeta}_k(\tau)|. \label{eq:puhalskii-lower-bound-degree-2}
\end{align}
From \eqref{eq:Gtil-delta} and \eqref{eq:puhalskii-lower-bound-0} we have the following bound for the first term
in \eqref{eq:puhalskii-lower-bound-degree-2}.
\begin{equation*}
|X_k^n(t_1^n-)-\tilde{\zeta}_k(0)| \le |X_k^n(t_1^n-)-\tilde{\zeta}_k(t_1^n)| + |\tilde{\zeta}_k(t_1^n)-\tilde{\zeta}_k(0)| < \delta + \frac{\varepsilon}{4}.
\end{equation*}
For the second term in \eqref{eq:puhalskii-lower-bound-degree-2}, if $t_2^n \le \tau$, then using \eqref{eq:Gtil-delta} and \eqref{eq:puhalskii-lower-bound-tau} we have
\begin{equation*}
|X^n_k(t_2^n) - \tilde{\zeta}_k(\tau)| \le |X^n_k(t_2^n) - \tilde{\zeta}_k(t_2^n)| + |\tilde{\zeta}_k(t_2^n) - \tilde{\zeta}_k(\tau)| < \delta + \frac{\varepsilon}{4}.
\end{equation*}
If $t_2^n > \tau$, then $t_2^n=\tau^n$ and from the definition of $G_j^n$ and \eqref{eq:Gtil-delta} we have
\begin{equation*}
|X^n_k(t_2^n) - \tilde{\zeta}_k(\tau)| \le |X^n_k(\tau^n) - X^n_k(\tau)| + |X^n_k(\tau) - \tilde{\zeta}_k(\tau)| \le \frac{1}{n} + \delta \le \frac{\varepsilon}{4} + \delta.
\end{equation*}
Combining these three displays with \eqref{eq:puhalskii-lower-bound-degree-2} gives
\begin{equation*}
| X_k^n(t_1^n-) - X_k^n(t_2^n) - q_k| < 2 \left( \delta + \frac{\varepsilon}{4} \right) < \varepsilon, \quad k \in \mathbb{N}.
\end{equation*}
From this, and \eqref{eq:enverps}, \eqref{eq:puhalskii-lower-bound-degree-0}, \eqref{eq:puhalskii-lower-bound-degree-1} we have $\omega \in E^{n,\varepsilon}(\boldsymbol{q})$. Since $\delta < \delta_0$ and $n > n_0$ is arbitrary, the
claim \eqref{eq:puhalskii-lower-bound-claim} holds.
For fixed $\delta < \delta_0$ and $n > n_0$ consider the following two probabilities
\begin{equation*}
P(A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi})), \quad P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})).
\end{equation*}
Write
\begin{align}
P(A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi})) & = \sum_{j=-1}^\infty P\left(\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\} \cap G^n_j\right)
= \sum_{j=-1}^{\lfloor \delta n \rfloor} \boldsymbol{E}\left[ 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} P\left( G^n_j | \mathcal{F}_\tau \right) \right] \label{eq:puhalskii-lower-bound-key}
\end{align}
where we only have to sum up to $\lfloor \delta n \rfloor$ in the last line when $(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ since $\tilde{\zeta}_0(\tau)=0$.
Since $\tau^n = \tau$ on $\{X_0^n(\tau)=-\frac{1}{n}\}$, we have
\begin{equation}
\label{eq:puhalskii-lower-bound-G1}
P\left( G^n_{-1} | \mathcal{F}_\tau \right) = 1_{\{X^n_0(\tau) = -\frac{1}{n}\}}.
\end{equation}
From Assumption \ref{asp:exponential-boundN} and the fact that $r(\boldsymbol{X}^n(t))$ is non-increasing, it follows that
\begin{equation*}
\sup_{n \in \mathbb{N}} \sup_{t \ge 0} r(\boldsymbol{X}^n(t)) \le \sup_{n \in \mathbb{N}} \sum_{k=1}^\infty \frac{kn_k}{n} \doteq C_0 < \infty.
\end{equation*}
Hence for odd integer $1 \le j \le \lfloor \delta n \rfloor$ and $\delta < \frac{C_0}{2}$,
\begin{align}
& 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} P\left( G^n_j | \mathcal{F}_\tau \right) \notag \\
& = 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{j}{nr(\boldsymbol{X}^n(\tau))} \cdot \frac{j-2}{nr(\boldsymbol{X}^n(\tau))-2} \dotsm \frac{1}{nr(\boldsymbol{X}^n(\tau))-(j-1)} 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{j}{C_0n} \cdot \frac{j-2}{C_0n} \dotsm \frac{1}{C_0n} 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} 1_{\{X^n_0(\tau) = \frac{j}{n}\}}, \label{eq:puhalskii-lower-bound-G2}
\end{align}
where the last inequality follows since the product on the last line consists of the factors on the previous line together with additional factors, each of which is less than $1$. For the first equality we have used the fact that on the event $G^n_j$ all $j+1$ active half-edges (an even number) at time instant $\tau$ must merge among themselves (without waking any sleeping vertices) by the time instant $\tau^n$, whereas the total number of available half-edges (either awake or sleeping) at time instant $\tau$ equals $nr(\boldsymbol{X}^n(\tau))+1$.
For even integer $0\le j \le \lfloor \delta n \rfloor$, we consider three different cases for values of $\boldsymbol{p}$ and $\boldsymbol{q}$.
{\bf Case 1:} There exists some odd $m \in \mathbb{N}$ such that $p_m > q_m \ge 0$.
Let $C_m \doteq \frac{1}{2}(p_m - q_m) > 0$.
For $\delta < \frac{1}{m} \wedge \delta_0 \wedge C_m$ and $(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$, we have from \eqref{eq:Gtil-delta},
\begin{equation*}
X^n_{m}(\tau) = \tilde{\zeta}_{m}(\tau) - (\tilde{\zeta}_{m}(\tau) - X^n_{m}(\tau)) > (p_{m} - q_{m}) - \delta > C_m,
\end{equation*}
which implies $X^n_{m}(\tau) \ge 1/n$ for $n \ge \delta^{-1}$.
So for even integer $0 \le j \le \lfloor \delta n \rfloor$ and $n > \frac{m}{\delta} \vee n_0$,
\begin{align}
& 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} P\left( G^n_j | \mathcal{F}_\tau \right) \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{mnX^n_{m}(\tau)}{nr(\boldsymbol{X}^n(\tau))} \cdot \frac{j+m-2}{nr(\boldsymbol{X}^n(\tau))-2} \cdot \frac{j+m-4}{nr(\boldsymbol{X}^n(\tau))-4} \notag \\
& \quad \dotsm \frac{1}{nr(\boldsymbol{X}^n(\tau))-(j+m-1)} 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{m}{C_0n} \cdot \frac{j+m-2}{C_0n} \cdot \frac{j+m-4}{C_0n} \dotsm \frac{1}{C_0n} 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} 1_{\{X^n_0(\tau) = \frac{j}{n}\}}, \label{eq:puhalskii-lower-bound-G3}
\end{align}
where the last inequality follows once again as in \eqref{eq:puhalskii-lower-bound-G2}.
Combining \eqref{eq:puhalskii-lower-bound-key}--\eqref{eq:puhalskii-lower-bound-G3} implies that, for $\delta < \frac{C_0}{2} \wedge \frac{1}{m} \wedge \delta_0 \wedge C_m$ and $n > \frac{m}{\delta} \vee n_0$,
\begin{align*}
P(A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi})) & \ge \sum_{j=-1}^{\lfloor \delta n \rfloor} \boldsymbol{E}\left[ 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \right] \\
& = \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})) \\
& \ge \sqrt{2\pi\lfloor 2\delta n \rfloor} \left(\frac{\lfloor 2\delta n \rfloor}{C_0en}\right)^{\lfloor 2\delta n \rfloor} P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})),
\end{align*}
where the last line uses Stirling's approximation $n! \ge \sqrt{2\pi n} (\frac{n}{e})^n$.
From this and \eqref{eq:puhalskii-lower-bound-claim} we have
\begin{align}
& \liminf_{n \to \infty} \frac{1}{n}\log P(E^{n,\varepsilon}(\boldsymbol{q})) \notag \\
& \ge \liminf_{n \to \infty} \frac{1}{n}\log P( A^n_\delta({\boldsymbol{\tilde\zeta}},\tilde{\psi}) ) \notag \\
& \ge \liminf_{n \to \infty} \left[ \frac{1}{2n} \log \left( 2\pi\lfloor 2\delta n \rfloor \right) + \frac{\lfloor 2\delta n \rfloor}{n} \log \left(\frac{\lfloor 2\delta n \rfloor}{C_0en}\right) + \frac{1}{n} \log P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})) \right] \notag \\
& = 2\delta \log\left(\frac{ 2\delta }{C_0e}\right) + \liminf_{n \to \infty} \frac{1}{n} \log P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})). \label{eq:puhalskii-lower-bound-last}
\end{align}
Define the open set
\begin{equation*}
\Scale[0.9]{G_{\delta,\tau}({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \doteq \{ (\boldsymbol{\zeta},\psi) \in \mathbb{D}([0,\tau]:\mathbb{R} \times \mathbb{R}_+^\infty \times \mathbb{R}): \sup_{t \in [0,\tau]}|\zeta_k(t) - \tilde{\zeta}_k(t)| < \delta, \mbox{ for all } k = 0, 1 ,2,\dotsc, \lfloor\delta^{-1}\rfloor\}.}
\end{equation*}
It follows from Theorem \ref{thm:main-ldp} that
\begin{align*}
\liminf_{n \to \infty} \frac{1}{n} \log P((\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})) & = \liminf_{n \to \infty} \frac{1}{n} \log P((\boldsymbol{X}^n,Y^n) \in G_{\delta,\tau}({\boldsymbol{\tilde\zeta}},\tilde{\psi})) \\
& \ge - \inf_{(\boldsymbol{\zeta},\psi) \in G_{\delta,\tau}({\boldsymbol{\tilde\zeta}},\tilde{\psi})} I_\tau((\boldsymbol{\zeta},\psi)) \ge - I_\tau({\boldsymbol{\tilde\zeta}},\tilde{\psi}).
\end{align*}
Combining this with \eqref{eq:puhalskii-lower-bound-last},
\eqref{eq:puhalskii-lower-bound-minimizer} and sending $\delta \to 0$ gives
\begin{equation}
\label{eq:puhalskii-lower-bound-pre-key}
\liminf_{n \to \infty} \frac{1}{n}\log P(E^{n,\varepsilon}(\boldsymbol{q})) \ge - I_\tau({\boldsymbol{\tilde\zeta}},\tilde{\psi}) = -I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})).
\end{equation}
The lower bound in Case 1 now follows on sending $\varepsilon \to 0$.
{\bf Case 2:} $p_m = 0$ for all odd $m \in \mathbb{N}$.
It suffices to establish an estimate similar to \eqref{eq:puhalskii-lower-bound-G3}; the lower bound in Case 2 will then follow as in Case 1.
From Assumptions \ref{asp:convgN} and \ref{asp:exponential-boundN} $$\sum_{k=0}^\infty (2k+1) \frac{n_{2k+1}}{n} \to \sum_{k = 0}^\infty (2k+1) p_{2k+1} = 0.$$
Therefore for each $\kappa \in (0,1)$, there exists some $\ntil_\kappa \in \mathbb{N}$ such that $0 \le \sum_{k=0}^\infty (2k+1) \frac{n_{2k+1}}{n} < \kappa$ for $n > \ntil_\kappa$, which implies $n_m=0$ for all odd $m \ge \kappa n$. Consider now an even integer $0 \le j \le \lfloor \delta n \rfloor$ and $n > \ntil_\kappa$. Denote by $M^n$ the largest odd degree for which there is a sleeping vertex at time instant $\tau$ in the continuous time EEA. Note that $M^n \le \kappa n$ a.s.
Therefore for $\kappa < \delta<\frac{C_0}{2} \wedge \delta_0$ and $n > n_0 \vee \ntil_\kappa$,
\begin{align*}
& 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} P\left( G^n_j | \mathcal{F}_\tau \right) \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \sum_{1 \le m \le \lfloor \kappa n \rfloor, m \mbox{ is odd}} \left[ 1_{\{M^n=m\}} \frac{mnX^n_{m}(\tau)}{nr(\boldsymbol{X}^n(\tau))} \cdot \frac{j+m-2}{nr(\boldsymbol{X}^n(\tau))-2} \right. \notag \\
& \quad \left. \cdot \frac{j+m-4}{nr(\boldsymbol{X}^n(\tau))-4} \dotsm \frac{1}{nr(\boldsymbol{X}^n(\tau))-(j+m-1)} \right] 1_{\{X^n_0(\tau) = \frac{j}{n}\}}
\end{align*} Since $nX_m^n(\tau) \ge 1$ on the set $\{M^n=m\}$, the right side can be bounded below by \begin{align*}
&1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \sum_{1 \le m \le \lfloor \kappa n \rfloor, m \mbox{ is odd}} \left[ 1_{\{M^n=m\}} \frac{(j+m-2)!!m}{(C_0n)^{(j+m+1)/2}} \right] 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& \ge 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \sum_{1 \le m \le \lfloor \kappa n \rfloor, m \mbox{ is odd}} \left[ 1_{\{M^n=m\}} \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} \right] 1_{\{X^n_0(\tau) = \frac{j}{n}\}} \notag \\
& = 1_{\{(\boldsymbol{X}^n,Y^n) \in \tilde{G}_{\delta}({\boldsymbol{\tilde\zeta}},\tilde{\psi})\}} \frac{\lfloor 2\delta n \rfloor!}{(C_0n)^{\lfloor 2\delta n \rfloor}} 1_{\{X^n_0(\tau) = \frac{j}{n}\}}, \notag
\end{align*}
where the last inequality follows once again
as in \eqref{eq:puhalskii-lower-bound-G2}.
Therefore we have the same inequality as in \eqref{eq:puhalskii-lower-bound-G3} for $n > n_0 \vee \ntil_\kappa$ and $\delta<\frac{C_0}{2} \wedge \delta_0$, and so the lower bound in Case 2 follows.
{\bf Case 3:} There exists an odd $m \in \mathbb{N}$ such that $p_m > 0$ but $p_m=q_m$.
For $i \in \mathbb{N}$, consider the vector $\boldsymbol{q}^i \doteq (q_k^i)_{k \in \mathbb{N}}$, where $q_k^i \doteq q_k$ for $k \ne m$ and $q_m^i \doteq q_m - \frac{1}{i}$.
Fix $\varepsilon \in (0,1)$.
Choose $i$ so that $p_m > q_m^i > 0$, $\varepsilon > \frac{1}{i} = q_m - q_m^i$ and $\sum_{k=1}^\infty kq_k^i > 2\sum_{k=1}^\infty q_k^i$.
When $\boldsymbol{q}$ is replaced by $\boldsymbol{q}^i$ we are in Case 1 and
thus for $\varepsilon^i < (\varepsilon - \frac{1}{i})$, from the lower bound \eqref{eq:puhalskii-lower-bound-pre-key} for Case 1,
\begin{equation}
\label{eq:puhalskii-lower-bound-case3}
\liminf_{n \to \infty} \frac{1}{n}\log P(E^{n,\varepsilon}(\boldsymbol{q})) \ge \liminf_{n \to \infty} \frac{1}{n}\log P(E^{n,\varepsilon^i}(\boldsymbol{q}^i)) \ge -I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q}^i)).
\end{equation}
From Proposition \ref{prop:minimizer-summary}(a) we have
\begin{equation}\label{eq:i20t}
I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q}^i)) = H({\boldsymbol{q}^i})+H({ \boldsymbol{p}}-{\boldsymbol{q}^i})-H({\boldsymbol{p}})+K({\boldsymbol{q}^i}).
\end{equation}
Since $((0,\boldsymbol{p}), (0,\boldsymbol{p}-\boldsymbol{q}^i))$ and $((0,\boldsymbol{p}), (0,\boldsymbol{p}-\boldsymbol{q}))$
are in $\Xi$ and $\boldsymbol{q}^i\to \boldsymbol{q}$, from Lemma \ref{lem:lemctybeta} we have $K(\boldsymbol{q}^i)\to K(\boldsymbol{q})$. Also, clearly
$$
H({\boldsymbol{q}^i})+H({ \boldsymbol{p}}-{\boldsymbol{q}^i})-H({\boldsymbol{p}})
\to H({\boldsymbol{q}})+H({ \boldsymbol{p}}-{\boldsymbol{q}})-H({\boldsymbol{p}}).$$
Thus, as $i\to \infty$, the right side of \eqref{eq:i20t} converges to
$$H({\boldsymbol{q}})+H({ \boldsymbol{p}}-{\boldsymbol{q}})-H({\boldsymbol{p}})+K({\boldsymbol{q}})
= I^2_{0,\tau}((0,\boldsymbol{p}), (0,\boldsymbol{p} - \boldsymbol{q})),$$
where the equality follows again by Proposition \ref{prop:minimizer-summary}(a).
The desired result now follows on sending $i \to \infty$ and then $\varepsilon \to 0$
in \eqref{eq:puhalskii-lower-bound-case3}.
The above three cases cover all possible values of $\boldsymbol{p}$ and $\boldsymbol{q}$.
This completes the proof. \end{proof}
\subsection{Completing the proof of Theorem \ref{thm:ldg_degree_distribution}}
The upper bound of Theorem \ref{thm:ldg_degree_distribution} follows from Lemma \ref{lem:puhalskii-upper-bound}, Lemma \ref{lem:puhalskii-upper-bound-improvement}(b) and Proposition \ref{prop:minimizer-summary}(a). The lower bound of Theorem \ref{thm:ldg_degree_distribution} follows from Lemma \ref{lem:puhalskii-lower-bound} and Proposition \ref{prop:minimizer-summary}(a). \qed
\section{Proofs of Auxiliary Lemmas} \label{sec:pfsect4} In this section we prove the lemmas in Section \ref{sec:cal}. Specifically, in Section \ref{subsec:6.1} we prove Lemma \ref{lem:I1I2L}, in Section \ref{subsec:6.2} we prove Lemmas \ref{lem:uniqbeta} and \ref{lem:lemctybeta}, in Section \ref{subsec:6.3} we prove Lemma \ref{lem:minimizer-def}, in Section \ref{subsec:6.4} we prove Lemma \ref{lem:minimizer-cost}, and finally in Section \ref{subsec:6.5} we prove Lemma \ref{lem:minimizer-verify-general}.
We start with the following remark.
\begin{Remark}
\label{rmk:prep_evolution}
Fix $0 \le t_1 < t_2 < \infty$, $\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)} \in {\mathbb{R}}_+^\infty$ and $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ such that $I_{t_1,t_2}(\boldsymbol{\zeta},\psi) < \infty$.
Fix $\varepsilon \in (0,1)$.
Then there exists $\boldsymbol{\varphi} \in \mathcal{S}_{t_2}(\boldsymbol{\zeta},\psi)$ such that
\begin{equation*}
\sum_{k=0}^\infty \int_{[t_1,t_2] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy \le I_{t_1,t_2}(\boldsymbol{\zeta},\psi) + \varepsilon.
\end{equation*}
Using convexity of $\ell$, we can assume without loss of generality that $\varphi_k(t,y) = \rho_k(t) 1_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + 1_{[r_k(\boldsymbol{\zeta}(t)),1]}(y)$ for $t \in [t_1,t_2]$, where
$\rho_k$ is some nonnegative function.
From \eqref{eq:psi} and \eqref{eq:phi_k} we see that for a.e.\ $t \in [t_1,t_2]$,
\begin{align}
\zeta_k'(t) & = -\rho_k(t)r_k(\boldsymbol{\zeta}(t)), k \in \mathbb{N}, \label{eq:I1I2L_phi_k} \\
\psi'(t) & = \sum_{k=0}^\infty (k-2)\rho_k(t)r_k(\boldsymbol{\zeta}(t)). \label{eq:I1I2L_psi}
\end{align}
Since $(\boldsymbol{\zeta},\psi) \in \mathcal{J}_{t_1,t_2}^1(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$,
$\zeta_0(t) = \zeta_0(t_1) + \psi(t) - \psi(t_1)$ over $(t_1,t_2)$, namely
there is no reflection over this interval.
Therefore for a.e.\ $t \in [t_1,t_2]$, we have $\zeta_0'(t) = \psi'(t)$ and
\begin{equation}
\label{eq:rmk_prep}
- \frac{1}{2} \frac{d}{dt} r(\boldsymbol{\zeta}(t)) = - \frac{1}{2} \left( \psi'(t) + \sum_{k=1}^\infty k \zeta'_k(t) \right) = \sum_{k=0}^\infty \rho_k(t) r_k(\boldsymbol{\zeta}(t)), t \in [t_1,t_2].
\end{equation} \end{Remark}
Now we prove the lemmas in Section \ref{sec:cal}.
\subsection{Proof of Lemma \ref{lem:I1I2L}} \label{subsec:6.1}
We first prove \eqref{eq:I1I2}.
Since $$\inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \le I^1_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \le I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}),$$
it suffices to show
\begin{equation}
\inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \ge I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \label{eq:25.2}
\end{equation}
when $\inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) < \infty$.
Fix $\varepsilon \in (0,1)$.
There exist $t_2^\varepsilon \ge t_1$, $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^1_{t_1,t_2^\varepsilon}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ and $\boldsymbol{\varphi} \in \mathcal{S}_{t_2^\varepsilon}(\boldsymbol{\zeta},\psi)$ such that
\begin{equation}
\label{eq:I1I2L_first}
\sum_{k=0}^\infty \int_{[t_1,t_2^\varepsilon] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy \le I_{t_1,t_2^\varepsilon}(\boldsymbol{\zeta},\psi) + \varepsilon \le \inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) + 2\varepsilon.
\end{equation}
Recall that $t \mapsto r(\boldsymbol{\zeta}(t))$ is a non-increasing function (see \eqref{eq:rmk_prep}). We claim that in fact we can assume without loss of generality that $t \mapsto r(\boldsymbol{\zeta}(t))$ is strictly decreasing for $t \in [t_1,t_2^\varepsilon]$. Indeed, if this function is not strictly decreasing, we can modify $(\boldsymbol{\zeta},\psi)$ such that for the modified trajectory strict monotonicity holds and the associated cost is not any higher. Such a modification can be constructed via a limiting argument as follows. Consider $(\boldsymbol{\zeta}^n,\psi^n)$ defined recursively as:
$(\boldsymbol{\zeta}^0,\psi^0,\boldsymbol{\varphi}^0) \doteq (\boldsymbol{\zeta},\psi,\boldsymbol{\varphi})$ on $[0,t_2^0]$ where $t_2^0 \doteq t_2^\varepsilon$.
For $n \in \mathbb{N}_0$, having defined $(\boldsymbol{\zeta}^n,\psi^n)$ and $t_2^n \le t_2^\varepsilon$, where
$(\boldsymbol{\zeta}^n,\psi^n) \in \mathcal{J}^1_{t_1,t_2^n}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$, and
$\boldsymbol{\varphi}^n \in \mathcal{S}_{t_2^n}(\boldsymbol{\zeta}^n,\psi^n)$ such that
\begin{equation}\sum_{k=0}^\infty \int_{[t_1,t_2^n] \times [0,1]} \ell(\varphi_k^n(s,y)) \,ds\,dy
\le \sum_{k=0}^\infty \int_{[t_1,t_2^\varepsilon] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy,
\label{eq:cosdec} \end{equation}
we modify $(\boldsymbol{\zeta}^n,\psi^n)$, in case $r(\boldsymbol{\zeta}^n(\cdot))$ is not strictly decreasing on
$[t_1,t_2^n]$ as follows. Let $[s_1^n,s_2^n] \subset [t_1,t_2^n]$ be the largest constant piece of $r(\boldsymbol{\zeta}^n(\cdot))$, namely $r(\boldsymbol{\zeta}^n(t))$ is constant on $[s_1^n,s_2^n]$ and $s_2^n-s_1^n$ is maximized among all such possible pieces.
Let $t_2^{n+1} \doteq t_2^n-(s_2^n-s_1^n)$ and define $(\boldsymbol{\zeta}^{n+1},\psi^{n+1})$ by shrinking $(\boldsymbol{\zeta}^n,\psi^n)$ over $[s_1^n,s_2^n]$, namely let $(\boldsymbol{\zeta}^{n+1}(t),\psi^{n+1}(t)) \doteq (\boldsymbol{\zeta}^n(t),\psi^n(t))$ for $t \le s_1^n$ and $(\boldsymbol{\zeta}^{n+1}(t),\psi^{n+1}(t)) \doteq (\boldsymbol{\zeta}^n(t+s_2^n-s_1^n),\psi^n(t+s_2^n-s_1^n))$ for $s_1^n < t \le t_2^{n+1}$.
Clearly $(\boldsymbol{\zeta}^{n+1},\psi^{n+1}) \in \mathcal{J}^1_{t_1,t_2^{n+1}}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ and the associated control $\boldsymbol{\varphi}^{n+1}$ satisfies \eqref{eq:cosdec}
with $n$ replaced with $n+1$.
If $r(\boldsymbol{\zeta}(\cdot))$ has only finitely many, say $N$, constant pieces over $[t_1,t_2^\varepsilon]$, then $(\boldsymbol{\zeta}^N,\psi^N)$ is the desired modification of $(\boldsymbol{\zeta},\psi)$.
If $r(\boldsymbol{\zeta}(t))$ has countably many constant pieces over $[t_1,t_2^\varepsilon]$, then the sequence $(\boldsymbol{\zeta}^n,\psi^n)$ is well defined and \eqref{eq:cosdec} holds for every $n$. Since the sequence $t_2^n$
is non-increasing, it converges to some point $\bar{t}_2$.
Since $I_{\bar{t}_2}$ has compact sub-level sets, this sequence (of paths over the time interval $[t_1,\bar t_2]$) has a limit point $(\bar{\boldsymbol{\zeta}},\bar{\psi})$. It is easy to check that this limit point must belong to
$\mathcal{J}^1_{t_1,\bar{t}_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ and
$I_{t_1,\bar{t}_2}(\bar{\boldsymbol{\zeta}},\bar{\psi}) \le \liminf_{n \to \infty} I_{t_1,\bar{t}_2}(\boldsymbol{\zeta}^n,\psi^n) \le I_{t_1,t_2^\varepsilon}(\boldsymbol{\zeta},\psi)$.
From the construction one can show that for fixed $\delta \in (0,1)$, $\inf_{s \in [t_1,t_2^n-\delta]} |r(\boldsymbol{\zeta}^n(s)) - r(\boldsymbol{\zeta}^n(s+\delta))|$ is nondecreasing in $n$ and eventually positive.
Therefore $|r(\bar{\boldsymbol{\zeta}}(s)) - r(\bar{\boldsymbol{\zeta}}(s+\delta))|>0$ for each $s \in [t_1,\tbar_2-\delta]$.
As $\delta$ is arbitrary, $(\bar{\boldsymbol{\zeta}},\bar{\psi})$ is the desired modification of $(\boldsymbol{\zeta},\psi)$ verifying
the claim.
From Remark \ref{rmk:prep_evolution} we can further assume without loss of generality that $\varphi_k(t,y) = \rho_k(t) 1_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + 1_{[r_k(\boldsymbol{\zeta}(t)),1]}(y)$ for $t \in [t_1,t_2^\varepsilon]$ and \eqref{eq:I1I2L_phi_k}--\eqref{eq:rmk_prep} hold for $t \in [t_1,t_2^\varepsilon]$.
We now introduce a time transformation. Consider the non-decreasing function $f$ defined as: $f(0)=0$,
\begin{equation*}
f'(t) \doteq \left\{
\begin{array}{ll}
1, & t \in [0,t_1), \\
- \frac{1}{2} \frac{d}{dt} r(\boldsymbol{\zeta}(t)) = \sum_{k=0}^\infty \rho_k(t) r_k(\boldsymbol{\zeta}(t)), & t \in [t_1,t_2^\varepsilon],
\end{array} \right.
\end{equation*}
where the equality in the second line follows from \eqref{eq:rmk_prep}. Since $r(\boldsymbol{\zeta}(t))$ is strictly decreasing for $t \in [t_1,t_2^\varepsilon]$, $f(t)$ must be strictly increasing for $t \in [t_1,t_2^\varepsilon]$.
So $g \doteq f^{-1}$ is well-defined and absolutely continuous on $[0,f(t_2^\varepsilon)]$.
Note that $f(t_2^\varepsilon) = f(t_1) + \int_{t_1}^{t_2^\varepsilon} f'(t)\,dt = t_1 - \frac{1}{2} (r(\boldsymbol{\zeta}(t_2^\varepsilon)) - r(\boldsymbol{\zeta}(t_1))) = t_1 + \varsigma$, where the last equality is from
\eqref{eq:tau}.
Define $({\boldsymbol{\tilde\zeta}}(t),\tilde{\psi}(t)) \doteq (\boldsymbol{\zeta}(g(t)),\psi(g(t)))$ for $t \in [0,t_1 + \varsigma]$.
Then it is easy to see that $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}_{t_1,t_1+\varsigma}^1(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
Since $f(g(t))=t$, we have $f'(g(t)) g'(t)=1$; moreover, by the definition of $f'$ and \eqref{eq:rmk_prep}, $\frac{d}{ds} r(\boldsymbol{\zeta}(s)) = -2f'(s)$ for a.e.\ $s \in [t_1,t_2^\varepsilon]$, and so
\begin{equation*}
\frac{d}{dt} r({\boldsymbol{\tilde\zeta}}(t)) = -2f'(g(t))g'(t) = -2 \mbox{ for a.e. } t \in [t_1,t_1+\varsigma].
\end{equation*}
Therefore $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}_{t_1,t_1+\varsigma}^2(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
Define
\begin{equation*}
\tilde{\varphi}_k(t,y) \doteq \left\{
\begin{array}{ll}
\varphi_k(t,y), & t \in [0,t_1), \\
\tilde{\rho}_k(t) 1_{[0,r_k({\boldsymbol{\tilde\zeta}}(t)))}(y) + 1_{[r_k({\boldsymbol{\tilde\zeta}}(t)),1]}(y), & t \in [t_1, t_1+\varsigma],
\end{array} \right.
\end{equation*}
where $\tilde{\rho}_k(t) \doteq \rho_k(g(t)) g'(t)$ for $t \in [t_1, t_1+\varsigma]$.
From \eqref{eq:I1I2L_phi_k} and \eqref{eq:I1I2L_psi}, for $t \in [t_1, t_1+\varsigma]$,
\begin{align*}
\tilde{\zeta}_k'(t) & = \zeta_k'(g(t)) g'(t) = -\rho_k(g(t)) r_k(\boldsymbol{\zeta}(g(t))) g'(t) = -\tilde{\rho}_k(t) r_k({\boldsymbol{\tilde\zeta}}(t)), k \in \mathbb{N}, \\
\tilde{\psi}'(t) & = \psi'(g(t)) g'(t) = \sum_{k=0}^\infty (k-2)\rho_k(g(t))r_k(\boldsymbol{\zeta}(g(t))) g'(t) = \sum_{k=0}^\infty (k-2)\tilde{\rho}_k(t)r_k({\boldsymbol{\tilde\zeta}}(t)).
\end{align*}
So $\tilde{\boldsymbol{\varphi}} \in \mathcal{S}_{t_1+\varsigma}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$.
We claim that
\begin{equation}
\label{eq:I1I2L_cost}
\sum_{k=0}^\infty \int_{[t_1,t_2^\varepsilon] \times [0,1]} \ell(\varphi_k(t,y)) \, dt\,dy \ge \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\tilde{\varphi}_k(t,y)) \, dt\,dy.
\end{equation}
To see the claim, first note that the left hand side of \eqref{eq:I1I2L_cost} equals
\begin{equation*}
\sum_{k=0}^\infty \int_{t_1}^{t_2^\varepsilon} r_k(\boldsymbol{\zeta}(t)) \ell(\rho_k(t)) \, dt.
\end{equation*}
Since $g(f(t))=t$, we have $g'(f(t)) f'(t)=1$ and hence the right hand side of \eqref{eq:I1I2L_cost} is
\begin{align*}
\sum_{k=0}^\infty \int_{t_1}^{t_1+\varsigma} r_k({\boldsymbol{\tilde\zeta}}(t)) \ell(\tilde{\rho}_k(t)) \, dt
& = \sum_{k=0}^\infty \int_{t_1}^{t_2^\varepsilon} r_k({\boldsymbol{\tilde\zeta}}(f(t))) \ell(\tilde{\rho}_k(f(t))) f'(t) \, dt \\
& = \sum_{k=0}^\infty \int_{t_1}^{t_2^\varepsilon} r_k(\boldsymbol{\zeta}(t)) \ell\left(\frac{\rho_k(t)}{f'(t)}\right) f'(t) \, dt.
\end{align*}
Combining the above two facts, we have
\begin{align*}
& \sum_{k=0}^\infty \int_{[t_1,t_2^\varepsilon] \times [0,1]} \ell(\varphi_k(t,y)) \, dt\,dy - \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\tilde{\varphi}_k(t,y)) \, dt\,dy \\
& = \sum_{k=0}^\infty \int_{t_1}^{t_2^\varepsilon} r_k(\boldsymbol{\zeta}(t)) \left[ \rho_k(t) \log f'(t) - f'(t) + 1 \right] dt \\
& = \int_{t_1}^{t_2^\varepsilon} \left[ \left( \sum_{k=0}^\infty r_k(\boldsymbol{\zeta}(t)) \rho_k(t) \right) \log f'(t) - f'(t) + 1 \right] dt \\
& = \int_{t_1}^{t_2^\varepsilon} \ell(f'(t)) \, dt \; \ge 0,
\end{align*}
where the next to last equality uses the fact that $\sum_{k=0}^\infty r_k(\boldsymbol{\zeta}(t)) = 1$
for all $t \in [t_1,t_2^\varepsilon)$ and the last equality uses the definition of $f'(t)$.
This proves the claim in \eqref{eq:I1I2L_cost}.
Combining \eqref{eq:I1I2L_first} and \eqref{eq:I1I2L_cost} with the fact that $\tilde{\boldsymbol{\varphi}} \in \mathcal{S}_{t_1+\varsigma}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ and $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}_{t_1,t_1+\varsigma}^2(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ gives
\begin{equation*}
I^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \le \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\tilde{\varphi}_k(t,y)) \, dt\,dy \le \inf_{t_2 \ge t_1} I^1_{t_1,t_2}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) + 2\varepsilon.
\end{equation*}
Since $\varepsilon \in (0,1)$ is arbitrary, \eqref{eq:25.2} follows, which, as argued previously, gives
\eqref{eq:I1I2}.
Next consider \eqref{eq:IL} and the third statement in the lemma for fixed $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
We first show that
\begin{equation}
\label{eq:I1I2L_second_1}
I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) \ge \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds.
\end{equation}
Assume without loss of generality that $I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) < \infty$.
Fix $\varepsilon \in (0,1)$.
From Remark \ref{rmk:prep_evolution} we can find some $\boldsymbol{\varphi} \in \mathcal{S}_{t_1+\varsigma}(\boldsymbol{\zeta},\psi)$ such that
\begin{align*}
& \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy \le I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) + \varepsilon, \\
& \varphi_k(t,y) = \rho_k(t) 1_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + 1_{[r_k(\boldsymbol{\zeta}(t)),1]}(y), \quad t \in [t_1,t_1+\varsigma], k \in \mathbb{N}_0,
\end{align*}
for a suitable sequence of non-negative functions $\rho_k$,
and \eqref{eq:I1I2L_phi_k}--\eqref{eq:rmk_prep} hold for $t \in [t_1,t_1+\varsigma]$.
Using \eqref{eq:I1I2L_phi_k}, \eqref{eq:I1I2L_psi} and the fact that $\frac{d}{dt} r(\boldsymbol{\zeta}(t))=-2$ for a.e.\ $t \in [t_1,t_1+\varsigma]$ we have
\begin{align*}
\rho_0(t)r_0(\boldsymbol{\zeta}(t)) & = -\frac{\psi'(t)+\sum_{k=1}^\infty (k-2) \zeta'_k(t)}{2} = 1+\sum_{k=1}^\infty \zeta'_k(t),
\end{align*}
which also implies $\sum_{k=1}^\infty \zeta_k'(t) \ge -1$ for a.e.\ $t \in [t_1,t_1+\varsigma]$, proving the third statement in the lemma.
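In more detail, the two equalities in the above display can be read off from \eqref{eq:I1I2L_phi_k}, \eqref{eq:I1I2L_psi} and \eqref{eq:rmk_prep}. Since $\zeta_k'(t) = -\rho_k(t)r_k(\boldsymbol{\zeta}(t))$ for $k \in \mathbb{N}$,
\begin{equation*}
\psi'(t)+\sum_{k=1}^\infty (k-2) \zeta'_k(t) = \sum_{k=0}^\infty (k-2)\rho_k(t)r_k(\boldsymbol{\zeta}(t)) - \sum_{k=1}^\infty (k-2)\rho_k(t)r_k(\boldsymbol{\zeta}(t)) = -2\rho_0(t)r_0(\boldsymbol{\zeta}(t)),
\end{equation*}
which gives the first equality; for the second, write $\sum_{k=1}^\infty (k-2) \zeta'_k(t) = \sum_{k=1}^\infty k \zeta'_k(t) - 2 \sum_{k=1}^\infty \zeta'_k(t)$ and use $\psi'(t) + \sum_{k=1}^\infty k \zeta'_k(t) = \frac{d}{dt} r(\boldsymbol{\zeta}(t)) = -2$.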
Furthermore we have
\begin{align}
& \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy \notag \\
& = \sum_{k=0}^\infty \int_{t_1}^{t_1+\varsigma} r_k(\boldsymbol{\zeta}(t)) \ell(\rho_k(t)) \, dt \notag \\
& = \int_{t_1}^{t_1+\varsigma} \left[ r_0(\boldsymbol{\zeta}(t)) \ell\left( \frac{1+\sum_{k=1}^\infty \zeta'_k(t)}{r_0(\boldsymbol{\zeta}(t))} \right) + \sum_{k=1}^\infty r_k(\boldsymbol{\zeta}(t)) \ell\left(\frac{-\zeta'_k(t)}{r_k(\boldsymbol{\zeta}(t))}\right) \right] dt \notag \\
& = \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(t), \boldsymbol{\zeta}'(t)) \, dt,
\label{eq:I1I2L_second_1_2}
\end{align}
where the last equality uses the definition of $\ell$ in \eqref{eq:ell} and $L$ in \eqref{eq:L} and we use the convention that $0 \ell(x/0)=0$ for $x\ge 0$.
Therefore
\begin{equation}
\label{eq:IL_1}
\int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(t), \boldsymbol{\zeta}'(t)) \, dt = \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy \le I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) + \varepsilon.
\end{equation}
Since $\varepsilon \in (0,1)$ is arbitrary, we have \eqref{eq:I1I2L_second_1}.
Next we show that
\begin{equation}
\label{eq:I1I2L_second_2}
I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) \le \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds.
\end{equation}
Assume without loss of generality that $\int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds < \infty$.
Since there exists some $(\boldsymbol{\zeta}^*,\psi^*) \in \mathcal{J}^0_{0,t_1}(\boldsymbol{x}^{(0)}, \boldsymbol{x}^{(1)})$ such that $I_{0,t_1}(\boldsymbol{\zeta}^*,\psi^*) < \infty$, we can further assume without loss of generality that $I_{0,t_1}(\boldsymbol{\zeta},\psi) < \infty$.
Then there exists some $\boldsymbol{\varphi}^* \in \mathcal{S}_{t_1}(\boldsymbol{\zeta},\psi)$.
Let $\boldsymbol{\varphi}(t,y) \doteq \boldsymbol{\varphi}^*(t,y)$ for $t \in [0,t_1)$, and for $t \in [t_1,t_1+\varsigma]$ define
\begin{align*}
\rho_k(t) & \doteq - \frac{\zeta'_k(t)}{r_k(\boldsymbol{\zeta}(t))} {\boldsymbol{1}}_{\{r_k(\boldsymbol{\zeta}(t)) \ne 0\}}, k \in \mathbb{N}, \\
\rho_0(t) & \doteq \frac{\sum_{k=1}^\infty (k-2)\rho_k(t)r_k(\boldsymbol{\zeta}(t)) - \psi'(t)}{2r_0(\boldsymbol{\zeta}(t))} {\boldsymbol{1}}_{\{r_0(\boldsymbol{\zeta}(t)) \ne 0\}}, \\
\varphi_k(t,y) & \doteq \rho_k(t) 1_{[0,r_k(\boldsymbol{\zeta}(t)))}(y) + 1_{[r_k(\boldsymbol{\zeta}(t)),1]}(y), y \in [0,1], k \in \mathbb{N}_0.
\end{align*}
Clearly \eqref{eq:I1I2L_phi_k} and \eqref{eq:I1I2L_psi} hold for $t \in [t_1,t_1+\varsigma]$ and hence $\boldsymbol{\varphi} \in \mathcal{S}_{t_1+\varsigma}(\boldsymbol{\zeta},\psi)$.
Also one can check that \eqref{eq:I1I2L_second_1_2} still holds.
Therefore
\begin{equation*}
I_{t_1,t_1+\varsigma}(\boldsymbol{\zeta},\psi) \le \sum_{k=0}^\infty \int_{[t_1,t_1+\varsigma] \times [0,1]} \ell(\varphi_k(s,y)) \,ds\,dy = \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(t), \boldsymbol{\zeta}'(t)) \, dt.
\end{equation*}
This gives \eqref{eq:I1I2L_second_2} and completes the proof of \eqref{eq:IL}.
Finally, \eqref{eq:I1I2L} follows on combining \eqref{eq:I1I2}, \eqref{eq:def-I-j} and \eqref{eq:IL}.
This completes the proof of the lemma.
\qed
\subsection{Proofs of Lemmas \ref{lem:uniqbeta} and \ref{lem:lemctybeta}} \label{subsec:6.2}
{\em Proof of Lemma \ref{lem:uniqbeta}.} Consider for $(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$, the function $\alpha \mapsto B(\alpha)$ on $(0,1)$, defined by \begin{align}
B(\alpha) \equiv B(\alpha;\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) & \doteq \frac{1}{\alpha} \left( (1-\alpha^2)\sum_{k=1}^\infty \frac{k z_k}{1-\alpha^k} - \sum_{k=1}^\infty k z_k + x_0^{(2)} - \alpha^2 x_0^{(1)} \right) \nonumber\\
& = z_1 - \sum_{k=3}^\infty k z_k \frac{\alpha - \alpha^{k-1}}{1-\alpha^k} + \frac{x_0^{(2)}}{\alpha} - \alpha x_0^{(1)} \nonumber\\
& = z_1 - \sum_{k=3}^\infty k z_k B_k(\alpha) + \frac{x_0^{(2)}}{\alpha} - \alpha x_0^{(1)}, \label{eq:longbalph} \end{align} where $B_k(\alpha) \doteq (\alpha- \alpha^{k-1})/(1 - \alpha^k)$ for $k \ge 3$ and $\boldsymbol{z} = \boldsymbol{x}^{(1)} -\boldsymbol{x}^{(2)}$ as before. For each $k \ge 3$ and $\alpha \in (0,1)$, using the inequality of arithmetic and geometric means one can verify that \begin{equation}
B_k'(\alpha) = \frac{(1-\alpha^2)(k-1)}{(1-\alpha^k)^2} \left(\frac{1+\alpha^2+\alpha^4+\dotsb+\alpha^{2k-4}}{k-1} - \alpha^{k-2}\right) > 0, \label{eq:B_k_monotone} \end{equation} and \begin{equation}
0 = B_k(0+) \le B_k(\alpha) \le B_k(1-) = (k-2)/k. \label{eq:B_k_bd} \end{equation} So $B(1-) = z_1 - \sum_{k=3}^\infty (k-2)z_k + x_0^{(2)} - x_0^{(1)} = - \left( \sum_{k=1}^\infty (k-2) z_k + z_0 \right) < 0$ by assumption and $B(\alpha)$ is decreasing in $\alpha \in (0,1)$. Also note that the assumption $\sum_{k=1}^\infty k z_k + z_0> 2 \sum_{k=1}^\infty z_k$ can be rewritten as \begin{equation*}
\sum_{k=3}^\infty (k-2) z_k + x_0^{(1)} > z_1 + x_0^{(2)}, \end{equation*} which implies \begin{equation}
\label{eq:cor_assump}
\mbox{ either } x_0^{(1)}>0 \mbox{ or } z_k > 0 \mbox{ for some } k \ge 3. \end{equation} From this and \eqref{eq:B_k_monotone} we see that $B(\alpha)$ is actually strictly decreasing in $\alpha \in (0,1)$. Since each $B_k(\alpha)$ is continuous on $(0,1)$, $B(\alpha)$ is also continuous by \eqref{eq:B_k_bd} and the dominated convergence theorem.
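For completeness, the arithmetic--geometric mean step behind \eqref{eq:B_k_monotone} reads: for $\alpha \in (0,1)$ and $k \ge 3$,
\begin{equation*}
\frac{1+\alpha^2+\alpha^4+\dotsb+\alpha^{2k-4}}{k-1} > \left( \prod_{j=0}^{k-2} \alpha^{2j} \right)^{1/(k-1)} = \alpha^{k-2},
\end{equation*}
the inequality being strict since the $k-1$ terms $1,\alpha^2,\dotsc,\alpha^{2k-4}$ are not all equal.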
Finally, since $B(0+) = z_1 + \infty \cdot 1_{\{x_0^{(2)} > 0\}} > 0$ and $B(1-) <0$, there must exist a unique $\beta \in(0,1)$ such that $B(\beta)=0$. This completes the proof of the lemma.
\qed\\
\noindent {\em Proof of Lemma \ref{lem:lemctybeta}.} Suppose $(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \to (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ as $n \to \infty$, where $(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}), (\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$. Recall the function $B(\cdot)$ defined above \eqref{eq:B_k_monotone} and the definition of $\beta(\cdot)$ from Section \ref{sec:degree_distribution}. We consider two possible cases for the values of $x_0^{(2)}$ and $z_1$.
{\bf Case 1:} $x_0^{(2)}=0$ and $z_1=0$. In this case $\beta = \beta(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})=0$ by definition and $x_0^{(2)} \log \beta=0$ by our convention. Since $B(0+) = z_1 + \infty \cdot 1_{\{x_0^{(2)} > 0\}} = 0$ and $B(\alpha)$ is strictly decreasing in $\alpha \in (0,1)$, we have $B(\alpha)<0$ for every $\alpha \in (0,1)$. Fixing $\alpha \in (0,1)$, from \eqref{eq:B_k_bd} and the dominated convergence theorem one has \begin{equation}
\label{eq:B_n_cvg}
B^{(n)}(\alpha) \doteq B(\alpha;\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \to B(\alpha)
\doteq B(\alpha;\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \end{equation} as $n \to \infty$. Therefore $B^{(n)}(\alpha)<0$ for sufficiently large $n$. Since $B^{(n)}$ is decreasing, we must have $\beta^{(n)} \doteq \beta(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n}) \le \alpha$ for all such $n$. Since $\alpha \in (0,1)$ is arbitrary, this implies that as $n \to \infty$, $\beta^{(n)} \to 0 = \beta$. Next note that the convergence of $x_0^{(2),n} \log \beta^{(n)} \to x_0^{(2)} \log \beta =0$ holds trivially if $x_0^{(2),n} =0$ for all sufficiently large $n$. Suppose now that $x_0^{(2),n} >0$ for every $n$. Also take $n$ to be sufficiently large, so that $x_0^{(2),n}<1$. From \eqref{eq:longbalph} and since $kB^{(n)}_k(\alpha) \le (k-2)$ from \eqref{eq:B_k_bd} [applied with $(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ replaced by $(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n})$] we have \begin{equation*}
B^{(n)}((x_0^{(2),n})^2) \ge 0 - \sum_{k=3}^\infty (k-2)z_k^n + \frac{1}{x_0^{(2),n}} - (x_0^{(2),n})^2x_0^{(1),n} > 0 \end{equation*} for $x_0^{(2),n}$ sufficiently small. So $\beta^{(n)} \ge (x_0^{(2),n})^2$ for such $n$ and so $x_0^{(2),n} \log \beta^{(n)} \to 0 = x_0^{(2)} \log \beta$.
{\bf Case 2:} $x_0^{(2)} > 0$ or $z_1 > 0$. In this case, for $n$ sufficiently large, we must have $x_0^{(2),n} > 0$ or $z_1^n > 0$. So $\beta^{(n)}$ satisfies $B^{(n)}(\beta^{(n)})=0$ for all such $n$. Since $\beta>0$ and, by the proof of Lemma \ref{lem:uniqbeta}, $B(\beta)=0$, $B(0+) >0$ and $B(\cdot)$ is strictly decreasing, we have $B(\beta/2)>0$. As in the proof of \eqref{eq:B_n_cvg}, we see that $B^{(n)}(\beta/2) \to B(\beta/2)$ as $n\to \infty$, and so
$B^{(n)}(\beta/2) > 0$ for all sufficiently large $n$. Since $B^{(n)}$ is decreasing, we must have $\beta^{(n)} \ge \beta/2>0$ for all sufficiently large $n$. From this, \eqref{eq:B_k_bd} and the dominated convergence theorem one can show that along any convergent subsequence of $\beta^{(n)}$, $B^{(n)}(\beta^{(n)}) \to B(\lim \beta^{(n)})$. So any limit point of $\beta^{(n)}$ is a solution to $B(\alpha)=0$ defined on $(0,1)$. But $\beta$ is the unique solution to this equation. So $\beta^{(n)} \to \beta$ and also $x_0^{(2),n} \log \beta^{(n)} \to x_0^{(2)} \log \beta$. This completes the proof of the lemma.
\qed
\subsection{Proof of Lemma \ref{lem:minimizer-def}} \label{subsec:6.3}
(a) Recall the definition of $\varsigma$ in \eqref{eq:tau} and $\tilde{\varsigma},\tilde{z}_k$ from Construction \ref{cons:cont}. From \eqref{eq:def-beta} we have \begin{equation}\label{eq:taub}
\tilde \varsigma = \frac{\varsigma}{1-\beta^2} = \frac{x^{(1)}_0-x^{(2)}_0 + \sum_{k=1}^\infty k z_k}{2(1-\beta^2)} = \frac{1}{2} \left( x^{(1)}_0 + \sum_{k=1}^\infty k \tilde{z}_k \right). \end{equation} Since $\beta \in [0,1)$, we have $\varsigma \le \varsigma/(1-\beta^2) = \tilde{\varsigma}$. This proves part (a).
(b) We first show that $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}^1_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
For this, it suffices to check
\begin{align}
\tilde{\zeta}_k(t_1+\varsigma) & = p_k^{(2)} \mbox{ for } k \in \mathbb{N}, \;
\tilde{\zeta}_0(t_1+\varsigma) = x^{(2)}_0, \label{eq:minimizer-condition-0} \\
\tilde{\psi}(t) - \tilde{\psi}(t_1) & \ge -x_0^{(1)}
\mbox{ for } t \in [t_1,t_1+\varsigma]. \label{eq:minimizer-condition-2}
\end{align}
From \eqref{eq:def-minimizer} we have
$$\tilde{\zeta}_k(t_1+\varsigma) = p_k^{(1)} - \tilde z_k( 1 - (1- \varsigma/\tilde \varsigma)^{k/2})= p_k^{(1)} - \tilde z_k( 1 - \beta^k) = p_k^{(1)} - z_k= p_k^{(2)},$$
which gives the first statement in \eqref{eq:minimizer-condition-0}.
From this, \eqref{eq:def-minimizer-0} and \eqref{eq:tau} it follows that
\begin{equation*}
\tilde{\zeta}_0(t_1+\varsigma) = x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-p_k^{(2)}) - 2\varsigma = x_0^{(2)},
\end{equation*}
which gives the second statement in \eqref{eq:minimizer-condition-0}.
For \eqref{eq:minimizer-condition-2}, applying the change of variable $t - t_1 = \tilde \varsigma (1- \alpha_t^2)$, namely $\alpha_t = \left( 1-\frac{t-t_1}{\tilde \varsigma}\right)^{1/2}$ for $t \in [t_1,t_1+\varsigma]$, we have
\begin{align*}
\sum_{k=1}^\infty k(p_k^{(1)}-\tilde{\zeta}_k(t)) - 2(t-t_1) & = \sum_{k=1}^\infty k \tilde z_k \left[ 1- \left( 1-\frac{t-t_1}{\tilde \varsigma}\right)^{k/2} \right] -2(t-t_1) \\
& = \sum_{k=1}^\infty k \tilde z_k ( 1- \alpha_t^k) - 2\tilde \varsigma (1- \alpha_t^2) \\
& = \sum_{k=1}^\infty k \tilde z_k ( 1- \alpha_t^k) - \sum_{k=1}^\infty k \tilde{z}_k (1-\alpha_t^2) - x^{(1)}_0 (1-\alpha_t^2) \doteq F(\alpha_t),
\end{align*}
where the third equality follows from part (a).
Using this we can write, for $t \in [t_1,t_1+\varsigma]$,
\begin{equation}
\label{eq:minimizer-condition-alpha}
\tilde{\zeta}_0(t) = x_0^{(1)} + F(\alpha_t) = \tilde{\psi}(t) - \tilde{\psi}(t_1) + x_0^{(1)}.
\end{equation}
Note that, for $t \in [t_1,t_1+\varsigma]$,
\begin{align*}
x_0^{(1)} + F(\alpha_t) & = \alpha_t^2 x_0^{(1)} + \sum_{k=1}^\infty k \tilde z_k(\alpha_t^2 -\alpha_t^k) = \alpha_t(1 - \alpha_t) \left( \frac{\alpha_t x_0^{(1)}}{1-\alpha_t} -\tilde z_1 + \sum_{k=3}^\infty k \tilde z_k \frac{\alpha_t- \alpha_t^{k-1}}{1-\alpha_t} \right) \\
& = \alpha_t(1 - \alpha_t) \left( \frac{\alpha_t x_0^{(1)}}{1-\alpha_t} -\tilde z_1 + \sum_{k=3}^\infty k \tilde z_k \tilde{B}_k(\alpha_t) \right) = \alpha_t (1-\alpha_t) \tilde{B}(\alpha_t),
\end{align*}
where $\tilde{B}_k(\alpha) \doteq (\alpha- \alpha^{k-1})/(1-\alpha)$ for $k \ge 3$ and $\Btil(\alpha) \doteq \frac{\alpha x_0^{(1)}}{1-\alpha} -\tilde z_1 + \sum_{k=3}^\infty k \tilde z_k \tilde{B}_k(\alpha)$.
One can verify (e.g.\ using Young's inequality) that
\begin{equation*}
\tilde{B}_k'(\alpha) = \frac{(k-2)\alpha^{k-1}-(k-1)\alpha^{k-2}+1}{(\alpha-1)^2} > 0, \quad k \ge 3, \: \alpha\in[0,1).
\end{equation*}
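Indeed, the weighted arithmetic--geometric mean (Young's) inequality gives, for $\alpha \in [0,1)$ and $k \ge 3$,
\begin{equation*}
(k-1)\alpha^{k-2} = (k-1)\left(\alpha^{k-1}\right)^{\frac{k-2}{k-1}} 1^{\frac{1}{k-1}} < (k-2)\alpha^{k-1} + 1,
\end{equation*}
the inequality being strict since $\alpha^{k-1} \ne 1$; this is exactly the positivity of the numerator above.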
So $\tilde{B}(\alpha)$ is increasing.
Using \eqref{eq:def-beta} one can verify that $\tilde{B}(\beta)=\frac{x_0^{(2)}}{\beta(1-\beta)} 1_{\{\beta>0\}} \ge 0$.
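To verify this when $\beta > 0$, note that in that case $B(\beta; \boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) = 0$ (see \eqref{eq:def-beta} and Lemma \ref{lem:uniqbeta}), namely, by \eqref{eq:longbalph},
\begin{equation*}
\sum_{k=3}^\infty k z_k \frac{\beta - \beta^{k-1}}{1-\beta^k} = z_1 + \frac{x_0^{(2)}}{\beta} - \beta x_0^{(1)},
\end{equation*}
and therefore, since $\tilde z_k = z_k/(1-\beta^k)$ (see Construction \ref{cons:cont}),
\begin{equation*}
\tilde{B}(\beta) = \frac{1}{1-\beta} \left( \beta x_0^{(1)} - z_1 + \sum_{k=3}^\infty k z_k \frac{\beta - \beta^{k-1}}{1-\beta^k} \right) = \frac{x_0^{(2)}}{\beta(1-\beta)}.
\end{equation*}
When $\beta = 0$ (which, by the definition of $\beta$ in Section \ref{sec:degree_distribution}, corresponds to $x_0^{(2)} = 0$ and $z_1 = 0$), one has $\tilde z_1 = z_1 = 0$ and $\tilde{B}_k(0)=0$, so $\tilde{B}(0) = 0$.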
Since $\alpha_t \in [\beta, 1)$ for $t \in (t_1, t_1+\varsigma]$, it follows that $x_0^{(1)} + F(\alpha_t) \ge 0$ for all such $t$.
This along with \eqref{eq:minimizer-condition-alpha} gives \eqref{eq:minimizer-condition-2}.
So far we have verified that $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}^1_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
From \eqref{eq:def-minimizer-0} we also have $\frac{d}{dt} r({\boldsymbol{\tilde\zeta}}(t)) = -2$ for $t \in [t_1,t_1+\varsigma]$.
Thus actually $({\boldsymbol{\tilde\zeta}},\tilde{\psi}) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$, completing the proof of (b).
(c) Since for $t \in (t_1, t_1+\varsigma)$, $\tilde{\zeta}_0(t) = x_0^{(1)} + F(\alpha_t) = \alpha_t (1-\alpha_t) \tilde{B}(\alpha_t)$ and $\tilde{B}(\beta) \ge 0$, it suffices to show that $\tilde{B}(\alpha)$ is strictly increasing in $\alpha \in [\beta,1)$.
But thanks to \eqref{eq:cor_assump}, this is immediate from the fact that $\tilde{B}_k'(\alpha)>0$ and $\frac{\alpha x_0^{(1)}}{1-\alpha}$ is strictly increasing when $x_0^{(1)}>0$.
This gives part (c) and completes the proof of the lemma.
\qed
\subsection{Proof of Lemma \ref{lem:minimizer-cost}} \label{subsec:6.4}
Let $\mu_0 \doteq \frac{1}{2} \left( x_0^{(1)} + \sum_{k=1}^\infty k p_k^{(1)} \right)$.
Recall $\tilde z_k = \frac{z_k}{1-\beta^k}$.
It then follows from \eqref{eq:def-minimizer}, \eqref{eq:def-minimizer-0} and Lemma \ref{lem:minimizer-def}(a) that for $t \in [t_1,t_1+\varsigma]$,
\begin{align}
\tilde \zeta'_k(t) & = -\frac{k \tilde z_k}{2 \tilde \varsigma-2(t-t_1)} \left( 1- \frac{t-t_1}{\tilde \varsigma} \right)^{k/2} = -\frac{k}{2\tilde \varsigma-2(t-t_1)} [\tilde \zeta_k(t)-p_k^{(1)} + \tilde z_k], \label{eq:Euler-Lagrange-key-1} \\
1 + \sum_{k=1}^\infty \tilde \zeta'_k(t) & = \frac{x_0^{(1)} - 2(t-t_1) + \sum_{k=1}^\infty k [p_k^{(1)} - \tilde \zeta_k(t)]}{2\tilde \varsigma -2(t-t_1)} = \frac{\tilde \zeta_0(t)}{2\tilde \varsigma - 2(t-t_1)}, \label{eq:Euler-Lagrange-key-2} \\
r(\boldsymbol{\tilde \zeta}(t)) & = 2\mu_0 -2(t-t_1) \notag.
\end{align}
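The first of these identities follows, for instance, by differentiating the formula for $\tilde{\zeta}_k$ in \eqref{eq:def-minimizer}, namely $p_k^{(1)} - \tilde{\zeta}_k(t) = \tilde z_k \big[ 1- \big( 1-\frac{t-t_1}{\tilde \varsigma}\big)^{k/2} \big]$ for $t \in [t_1,t_1+\varsigma]$ (as used in the proof of Lemma \ref{lem:minimizer-def}(b)):
\begin{equation*}
\tilde \zeta'_k(t) = -\tilde z_k \cdot \frac{k}{2\tilde \varsigma} \left( 1- \frac{t-t_1}{\tilde \varsigma} \right)^{k/2-1} = -\frac{k \tilde z_k}{2 \tilde \varsigma-2(t-t_1)} \left( 1- \frac{t-t_1}{\tilde \varsigma} \right)^{k/2};
\end{equation*}
the remaining two identities are obtained similarly, using \eqref{eq:def-minimizer-0} and Lemma \ref{lem:minimizer-def}(a).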
From these we have
\begin{align}
& \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\tilde\zeta}(s), \boldsymbol{\tilde\zeta}'(s))\,ds \notag \\
& = \int_{t_1}^{t_1+\varsigma} \left[ \left(1 + \sum_{k=1}^\infty \tilde\zeta'_k(t)\right) \log \left(\frac{1 + \sum_{k=1}^\infty\tilde\zeta'_k(t) }{\tilde\zeta_0(t)/r(\boldsymbol{\tilde\zeta}(t))}\right) + \sum_{k=1}^\infty(- \tilde\zeta'_k(t)) \log \left(\frac{-\tilde\zeta'_k(t)}{k\tilde\zeta_k(t)/r(\boldsymbol{\tilde\zeta}(t))}\right) \right] dt \notag \\
& = \int_{t_1}^{t_1+\varsigma} \left[ \log(2 \mu_0-2(t-t_1)) - \log (2 \tilde \varsigma - 2(t-t_1)) - \sum_{k=1}^\infty \tilde\zeta'_k(t) \log \left(\frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)}\right) \right] dt. \label{eq:H-K-1}
\end{align}
We claim that we can interchange the integration and summation in the last line of \eqref{eq:H-K-1}.
To see this, first note that there exists some $M \in \mathbb{N}$ such that $z_k \le \tilde{z}_k \le 2 z_k \le 2 p_k^{(1)} < 1$ for $k \ge M$.
Since $\tilde\zeta_k(t)$ is non-increasing, we have
\begin{align*}
& \sum_{k=M}^\infty \int_{t_1}^{t_1+\varsigma} \left| \tilde\zeta'_k(t) \log \left( \frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)} \right) \right| dt \\
& = \sum_{k=M}^\infty \int_{t_1}^{t_1+\varsigma} \left| \log \left( \frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)} \right) \right| d (p_k^{(1)}-\tilde\zeta_k(t))
= \sum_{k=M}^\infty \int_0^{z_k} \left| \log \left(\frac{ \tilde z_k - u}{p_k^{(1)} - u}\right)\right| du \\
& \le \sum_{k=M}^\infty \int_0^{z_k} \left( -\log (\tilde z_k - u) - \log (p_k^{(1)} - u) \right) du
\le - 2\sum_{k=M}^\infty \int_0^{2p_k^{(1)}} \log u \,du.
\end{align*}
Using the antiderivative $\tilde{\ell}(x) \doteq x \log x - x$ of $\log x$, the last expression equals
\begin{equation*}
- 2\sum_{k=M}^\infty (\tilde{\ell}(2p_k^{(1)}) - \tilde{\ell}(0)) = 4\sum_{k=M}^\infty p_k^{(1)} \log \left( \frac{1}{p_k^{(1)}} \right) -4(\log 2 -1)\sum_{k=M}^\infty p_k^{(1)}.
\end{equation*}
Here the last term is clearly finite.
Letting $\tilde{M} \doteq \sum_{k=M}^\infty p_k^{(1)} \in (0,1]$, we have
\begin{align*}
\sum_{k=M}^\infty p_k^{(1)} \log \left( \frac{1}{p_k^{(1)}} \right) & = \tilde{M} \sum_{k=M}^\infty \frac{p_k^{(1)}}{\tilde{M}} \log \left( \frac{1}{k^2 p_k^{(1)}} \right) + \sum_{k=M}^\infty p_k^{(1)} \log k^2 \\
& \le \tilde{M} \log \left( \sum_{k=M}^\infty \frac{1}{\tilde{M}k^2} \right) + 2 \sum_{k=M}^\infty kp_k^{(1)} < \infty,
\end{align*}
where the inequality follows from Jensen's inequality applied to the concave function $\log$ and the probability weights $\{p_k^{(1)}/\tilde{M}\}_{k \ge M}$, together with the bound $\log k^2 \le 2k$.
Therefore
\begin{equation*}
\sum_{k=M}^\infty \int_{t_1}^{t_1+\varsigma} \left| \tilde\zeta'_k(t) \log \left( \frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)} \right) \right| dt < \infty.
\end{equation*}
One can easily verify that for $1 \le k \le M$,
\begin{equation*}
\int_{t_1}^{t_1+\varsigma} \left| \tilde\zeta'_k(t) \log \left( \frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)} \right) \right| dt = \int_0^{z_k} \left| \log \left(\frac{ \tilde z_k - u}{p_k^{(1)} - u}\right)\right| du < \infty.
\end{equation*}
So the claim holds.
Actually we have also shown that $\int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds < \infty$.
From \eqref{eq:H-K-1} it then follows that
\begin{align}
& \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\tilde\zeta}(s), \boldsymbol{\tilde\zeta}'(s))\,ds \notag \\
& = \int_{t_1}^{t_1+\varsigma} \left[ \log(\mu_0 - (t-t_1)) - \log (\tilde \varsigma - (t-t_1)) \right] dt - \sum_{k=1}^\infty \int_{t_1}^{t_1+\varsigma} \log \left( \frac{\tilde\zeta_k(t)-p_k^{(1)}+ \tilde z_k}{\tilde\zeta_k(t)} \right) d\tilde\zeta_k(t) \notag \\
& = \left. \left[ -\tilde{\ell}(\mu_0 - (t-t_1)) + \tilde{\ell} (\tilde \varsigma - (t-t_1)) - \sum_{k=1}^\infty \tilde{\ell}(\tilde\zeta_k(t)-p_k^{(1)} + \tilde z_k) + \sum_{k=1}^\infty \tilde{\ell}(\tilde\zeta_k(t)) \right] \right|_{t=t_1}^{t_1+\varsigma} \notag \\
& = -(\mu_0 -\varsigma)\log(\mu_0 - \varsigma) + ( \tilde \varsigma-\varsigma) \log (\tilde \varsigma- \varsigma) + \mu_0 \log \mu_0 - \tilde \varsigma \log \tilde \varsigma \notag \\
& \quad + \sum_{k=1}^\infty \left[ -(\tilde z_k - z_k) \log (\tilde z_k - z_k) + p_k^{(2)}\log p_k^{(2)} + \tilde z_k \log \tilde z_k - p_k^{(1)} \log p_k^{(1)} \right], \label{eq:H-K-2}
\end{align}
where the last line follows from $\boldsymbol{\tilde\zeta}(t_1) = \boldsymbol{x}^{(1)}$ and $\boldsymbol{\tilde\zeta}(t_1+\varsigma) = \boldsymbol{x}^{(2)}$.
Using $\tilde \varsigma = \varsigma/(1-\beta^2)$,
\begin{align*}
( \tilde \varsigma-\varsigma) \log (\tilde \varsigma- \varsigma) - \tilde \varsigma \log \tilde \varsigma
& = \frac{\beta^2 \varsigma}{1- \beta^2} \log \frac{\beta^2 \varsigma}{1- \beta^2} - \frac{\varsigma}{1-\beta^2} \log \frac{\varsigma}{1-\beta^2}\\
& = - \varsigma \log \varsigma + \varsigma \log (1-\beta^2) + \frac{2\beta^2 \varsigma}{1- \beta^2} \log \beta.
\end{align*}
Since $\tilde z_k = z_k / (1 - \beta^k)$, we have
\begin{align*}
&\sum_{k=1}^\infty \left[ -(\tilde z_k - z_k) \log (\tilde z_k - z_k) + \tilde z_k \log \tilde z_k \right] \\
& \quad= \sum_{k=1}^\infty \left[ z_k \log z_k - z_k \log(1-\beta^k) - \frac{k\beta^k z_k}{1-\beta^k} \log \beta \right]\\
& \quad= \sum_{k=1}^\infty \left[ z_k \log z_k - z_k \log(1-\beta^k) \right] + \sum_{k=1}^\infty \left( kz_k- \frac{kz_k}{1-\beta^k} \right) \log \beta\\
& \quad= \sum_{k=1}^\infty \left[ z_k \log z_k - z_k \log(1-\beta^k) \right] + \left( x_0^{(2)} -\frac{2\beta^2 \varsigma}{1- \beta^2} \right) \log \beta,
\end{align*}
where the last line is from \eqref{eq:def-beta} and \eqref{eq:taub}.
The last two displays along with \eqref{eq:H-K-2} give
\begin{align}
& \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\tilde\zeta}(s), \boldsymbol{\tilde\zeta}'(s))\,ds \notag \\
& \quad= -(\mu_0 -\varsigma)\log(\mu_0 - \varsigma) - \varsigma \log \varsigma + \mu_0 \log \mu_0 + \varsigma \log (1-\beta^2) \notag \\
& \quad\quad + \sum_{k=1}^\infty \left[ z_k \log z_k + p_k^{(2)} \log p_k^{(2)} - p_k^{(1)} \log p_k^{(1)} \right] - \sum_{k=1}^\infty z_k \log(1-\beta^k) + x_0^{(2)} \log \beta \label{eq:semi-cont}\\
& \quad= \tilde{H}(\boldsymbol{z}) + \tilde{H}(\boldsymbol{x}^{(2)}) - \tilde{H}(\boldsymbol{x}^{(1)}) + \tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}). \notag
\end{align}
Finiteness of the above follows as in Remark \ref{rem:finhk}.
This gives the first statement in the lemma.
For the lower semicontinuity, first note that $-(\mu_0 -\varsigma)\log(\mu_0 - \varsigma) - \varsigma \log \varsigma + \mu_0 \log \mu_0 + \varsigma \log (1-\beta^2) - \sum_{k=1}^\infty z_k \log(1-\beta^k) + x_0^{(2)} \log \beta$ is continuous from Lemma \ref{lem:lemctybeta} and Assumption \ref{asp:exponential-boundN}.
The remaining terms in \eqref{eq:semi-cont} can be written as
\begin{align*}
&\sum_{k=1}^\infty \left[ z_k \log z_k + p_k^{(2)} \log p_k^{(2)} - p_k^{(1)} \log p_k^{(1)} \right]\\
&=
\sum_{k=0}^\infty \left[ z_k \log z_k + p_k^{(2)} \log p_k^{(2)} - p_k^{(1)} \log p_k^{(1)} \right]
- \left[ z_0 \log z_0 + p_0^{(2)} \log p_0^{(2)} - p_0^{(1)} \log p_0^{(1)} \right]\\
&=\sum_{k=0}^\infty \left[ z_k \log \frac{z_k}{p_k^{(1)}} \right] + \sum_{k=0}^\infty \left[ p_k^{(2)} \log \frac{p_k^{(2)}}{p_k^{(1)}} \right] - \left[ z_0 \log z_0 + p_0^{(2)} \log p_0^{(2)} - p_0^{(1)} \log p_0^{(1)} \right],
\end{align*}
where $z_0 \doteq 1- \sum_{k=1}^\infty z_k$, and $p_0^{(i)} \doteq 1- \sum_{k=1}^\infty p_k^{(i)}$ for $i=1,2$.
The last term in the above display is clearly a lower semicontinuous function of $(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \in \Xi$.
The lemma follows.
\qed
\subsection{Proof of Lemma \ref{lem:minimizer-verify-general}} \label{subsec:6.5}
We begin with a lemma that gives the statement in Lemma \ref{lem:minimizer-verify-general} under a stronger assumption.
\begin{Lemma}
\label{lem:minimizer-verify-special}
Assume the setting of Lemma \ref{lem:minimizer-verify-general}.
Suppose in addition that: (i) $x^{(1)}_0,x^{(2)}_0>0$, and (ii) for every $k \in \mathbb{N}$, if $p^{(1)}_k > 0$ then $p^{(2)}_k > 0$.
Then \eqref{eq:i2ttau} is satisfied.
\end{Lemma}
\begin{proof}
The first equality in \eqref{eq:i2ttau} is proved in Lemma \ref{lem:I1I2L}.
For the second equality, we need to show that $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ is the minimizer of the function
\begin{equation*}
\tilde{G}(\boldsymbol{\zeta},\psi) \doteq \int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds, \quad (\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}).
\end{equation*}
We will prove this via contradiction.
First note that $\mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ is a convex set.
Also using the definition of $L$, one can verify that $\tilde{G}(\boldsymbol{\zeta},\psi)$ is a convex function in $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$.
Now suppose there exists some $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ such that $\tilde{G}(\boldsymbol{\zeta},\psi) < \tilde{G}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$.
From Lemma \ref{lem:minimizer-cost}
we have $\tilde{G}({\boldsymbol{\tilde\zeta}},\tilde{\psi}) < \infty$.
For $\varepsilon \in [0,1]$, construct the family of paths $(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon) \doteq (1-\varepsilon) ({\boldsymbol{\tilde\zeta}},\tilde{\psi}) + \varepsilon (\boldsymbol{\zeta},\psi)$.
Letting $g(\varepsilon) \doteq \tilde{G}(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)$, we have $g(1) = \tilde{G}(\boldsymbol{\zeta},\psi) < \tilde{G}({\boldsymbol{\tilde\zeta}},\tilde{\psi}) = g(0)$.
It follows from the convexity that $g$ is left and right differentiable wherever it is finite.
We will show that $g'_+(0)=0$, where $g'_+(\cdot)$ is the right derivative of $g$.
The convexity of $g$ will then give the desired contradiction.
By convexity of $g$ and $g(1) < g(0)$, we have $g(\varepsilon) \le (1-\varepsilon)g(0) + \varepsilon g(1) < g(0)$ for every $\varepsilon \in (0,1]$, whereas $g'_+(0)=0$ together with convexity would force $g(\varepsilon) \ge g(0) + \varepsilon g'_+(0) = g(0)$.
From Lemma \ref{lem:minimizer-def}(c), assumption (i) and continuity of $\tilde{\zeta}_0$ we have
\begin{equation}
\label{eq:deltadefn}
\delta \doteq \inf_{t \in [t_1,t_1+\varsigma]} \tilde{\zeta}_0(t) > 0.
\end{equation}
From \eqref{eq:Euler-Lagrange-key-2} we see
\begin{equation}
\label{eq:minimizer-verify-special-temp}
1 + \sum_{k=1}^\infty \tilde\zeta'_k(t) = \frac{\tilde\zeta_0(t)}{2\tilde \varsigma - 2(t-t_1)} \ge \frac{\delta}{2\tilde{\varsigma}} > 0, \quad t \in [t_1,t_1+\varsigma].
\end{equation}
Now fix $0 < \varepsilon < \frac{1}{4} \wedge \delta \wedge \frac{\delta}{2\tilde{\varsigma}}$.
Then $\zeta^\varepsilon_0(t) > \frac{\delta}{2}$ for all $t \in [t_1,t_1+\varsigma]$.
We next argue that one can assume without loss of generality that
\begin{equation}
\label{eq:minimizer-finite-dim}
\zeta_k(t) = \tilde{\zeta}_k(t) \mbox{ for all } t \in [t_1,t_1+\varsigma] \mbox{ and } k \ge n_0
\end{equation}
for some large enough $n_0 \in {\mathbb{N}}$.
To show this, we define $(\boldsymbol{\zeta}^n,\psi^n)$ for $n \in \mathbb{N}$ as follows: For $t \in [0,t_1)$, $(\boldsymbol{\zeta}^n(t),\psi^n(t)) \doteq (\boldsymbol{\zeta}^\varepsilon(t),\psi^\varepsilon(t))$, and for $t \in [t_1,t_1+\varsigma]$,
\begin{align*}
\zeta^n_k(t) & \doteq \tilde{\zeta}_k(t), \quad k \ge n, \\
\zeta^n_k(t) & \doteq \zeta^\varepsilon_k(t), \quad 1 \le k < n, \\
\zeta^n_0(t) & \doteq x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-\zeta^n_k(t)) - 2(t-t_1), \\
\psi^n(t) & \doteq \psi^\varepsilon(t_1) + \sum_{k=1}^\infty k(p_k^{(1)}-\zeta^n_k(t)) - 2(t-t_1).
\end{align*}
From this definition we have $(\zeta_k^n)_{k \in \mathbb{N}} \to (\zeta^\varepsilon_k)_{k \in \mathbb{N}}$ in $\mathbb{C}([0,t_1+\varsigma]:\mathbb{R}_+^\infty)$ as $n \to \infty$.
So $(\zeta_0^n,\psi^n) \to (\zeta^\varepsilon_0,\psi^\varepsilon)$ in $\mathbb{C}([0,t_1+\varsigma]:\mathbb{R}^2)$ as $n \to \infty$.
From this we see $\psi^n(t) - \psi^n(t_1) + x_0^{(1)} = \zeta_0^n(t)$ is uniformly bounded away from $0$ in $t \in[t_1,t_1+\varsigma]$ for sufficiently large $n$.
So $\boldsymbol{\zeta}^n \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ for all such $n$.
Recall $L_k$ and $L$ defined in \eqref{eq:L}.
Using the definition of $\zeta_k^n$ for $1 \le k < n$
\begin{align}
\tilde{G}(\boldsymbol{\zeta}^n,\psi^n) - \tilde{G}(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)
& = \int_{t_1}^{t_1+\varsigma} [L(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))') - L(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')] \,ds \nonumber\\
& = \int_{t_1}^{t_1+\varsigma} [L_0(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))') - L_0(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')] \,ds \nonumber\\
& \quad + \int_{t_1}^{t_1+\varsigma} \sum_{k=n}^\infty [L_k(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))') - L_k(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')] \,ds. \label{eq:eqsun11}
\end{align}
We claim that both terms on the right side converge to $0$ as $n \to \infty$.
To see this, note that
$$\Scale[0.92]{L_0(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))') = \left(1+\sum_{k=1}^\infty (\zeta^n_k)'(s)\right) \log \left[ \left(1+\sum_{k=1}^\infty (\zeta^n_k)'(s)\right) \Big/ \left(\frac{\zeta^n_0(s)}{r(\boldsymbol{\zeta}^n(s))}\right) \right] \to L_0(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')}$$
as $n \to \infty$, for each $s \in [t_1,t_1+\varsigma]$.
From \eqref{eq:minimizer-verify-special-temp} and the choice of $\varepsilon$ we have that
$$1 \ge 1+\sum_{k=1}^\infty (\zeta^n_k)'(s) \ge \frac{\delta}{2\tilde\varsigma} - \varepsilon > 0.$$
Since $\zeta^\varepsilon_0(s)$ and ${\tilde{\zeta}}_0(s)$ are both bounded from above and away from $0$ for all $s \in [t_1,t_1+\varsigma]$,
$$\sup_{n \in {\mathbb{N}}} \sup_{s \in [t_1,t_1+\varsigma]} |L_0(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))')| < \infty.$$
The first term on the right side of \eqref{eq:eqsun11} then converges to $0$ as $n \to \infty$ by the dominated convergence theorem.
For the second term, note that
$$L_k(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))') = -(\zeta^n_k)'(s) \log \left[ - (\zeta^n_k)'(s) \Big/ \left(\frac{k\zeta_k^n(s)}{r(\boldsymbol{\zeta}^n(s))}\right) \right] \to L_k(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')$$
as $n \to \infty$, for each $s \in [t_1,t_1+\varsigma]$.
Since $\boldsymbol{\zeta}^n \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$, we have $r(\boldsymbol{\zeta}^n(s)) = r(\boldsymbol{{\tilde{\zeta}}}(s)) = r(\boldsymbol{\zeta}^\varepsilon(s))$ for each $s \in [t_1,t_1+\varsigma]$, and hence
$$|L_k(\boldsymbol{\zeta}^n(s),(\boldsymbol{\zeta}^n(s))')| \le |L_k(\boldsymbol{{\tilde{\zeta}}}(s),\boldsymbol{{\tilde{\zeta}}}'(s))| + |L_k(\boldsymbol{\zeta}^\varepsilon(s),(\boldsymbol{\zeta}^\varepsilon(s))')|.$$
Since $\tilde{G}(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)<\infty$ and $\tilde{G}({\boldsymbol{\tilde\zeta}},\tilde{\psi})<\infty$, we see that the last expression is summable over $k \in {\mathbb{N}}$ and integrable over $s \in [t_1,t_1+\varsigma]$.
Therefore the second term on the right side of \eqref{eq:eqsun11} also converges to $0$ as $n \to \infty$ by the dominated convergence theorem.
From the above claim we then have that $\tilde{G}(\boldsymbol{\zeta}^{n_0},\psi^{n_0}) < \tilde{G}({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ for sufficiently large $n_0$.
We now fix such an $n_0$ and, abusing notation, denote
$(\boldsymbol{\zeta},\psi) = (\boldsymbol{\zeta}^{n_0},\psi^{n_0})$
and define $(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)$ as before, by using the new definition of
$(\boldsymbol{\zeta},\psi)$, so that \eqref{eq:minimizer-finite-dim} holds.
Since $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$, we have $r(\boldsymbol{\zeta}(t)) = x_0^{(1)} + \sum_{k=1}^\infty kp_k^{(1)} - 2(t-t_1)$ and $\zeta_0(t) = r(\boldsymbol{\zeta}(t)) - \sum_{k=1}^\infty k\zeta_k(t) = x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-\zeta_k(t)) - 2(t-t_1)$ for $t \in [t_1,t_1+\varsigma]$.
Using the definition of $L$, one can write
\begin{align*}
\tilde{G}(\boldsymbol{\zeta},\psi)
& = \int_{t_1}^{t_1+\varsigma} \left\{ \left(1+\sum_{k=1}^\infty \zeta'_k(t)\right) \log \left[ \left(1+\sum_{k=1}^\infty \zeta'_k(t)\right) \Big/ \left(\frac{\zeta_0(t)}{r(\boldsymbol{\zeta}(t))}\right) \right] \right. \\
& \qquad \left. - \sum_{k=1}^\infty \zeta'_k(t) \log \left[ \left( -\zeta'_k(t) \right) \Big/ \left(\frac{k\zeta_k(t)}{r(\boldsymbol{\zeta}(t))}\right) \right] \right\} dt \\
& = \int_{t_1}^{t_1+\varsigma} \left\{ \left(1+\sum_{k=1}^\infty \zeta'_k(t)\right) \log \left(\frac{1+\sum_{k=1}^\infty \zeta'_k(t)}{x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-\zeta_k(t)) - 2(t-t_1)}\right) \right. \\
& \qquad \left. - \sum_{k=1}^\infty \zeta'_k(t) \log \left(\frac{-\zeta'_k(t)}{k\zeta_k(t)}\right) \right\} dt + \int_{t_1}^{t_1+\varsigma} \log \left( x_0^{(1)} + \sum_{k=1}^\infty kp_k^{(1)} - 2(t-t_1) \right) dt,
\end{align*}
and the analogous expression holds for $\tilde{G}(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)$.
Let $\boldsymbol{\theta} \doteq \boldsymbol{\zeta} - {\boldsymbol{\tilde\zeta}}$.
From \eqref{eq:minimizer-finite-dim} we have $\theta_k=0$ for $k > n_0$ and hence
\begin{align*}
g(\varepsilon) & = \int_{t_1}^{t_1+\varsigma} \left\{ \left(1+\sum_{k=1}^\infty (\zeta^\varepsilon_k)'(t)\right) \log \left(\frac{1+\sum_{k=1}^\infty (\zeta^\varepsilon_k)'(t)}{x_0^{(1)} + \sum_{k=1}^\infty k(p_k^{(1)}-\zeta_k^\varepsilon(t)) - 2(t-t_1)}\right) \right. \\
& \qquad \left. - \sum_{k=1}^{n_0} (\zeta^\varepsilon_k)'(t) \log \left(\frac{-(\zeta^\varepsilon_k)'(t)}{k\zeta_k^\varepsilon(t)}\right) \right\} dt + C_0 \\
& = \int_{t_1}^{t_1+\varsigma} \eta(t,(\tilde{\zeta}_k(t) + \varepsilon \theta_k(t),\tilde{\zeta}_k'(t) + \varepsilon \theta_k'(t))_{k=1}^{n_0}) \, dt + C_0, \\
& \doteq \int_{t_1}^{t_1+\varsigma} \tilde{\eta}(t,\varepsilon) \, dt + C_0
\end{align*}
for some constant $C_0$, where
\begin{align*}
&\eta(t,(u_k,v_k)_{k=1}^{n_0})\\
&\quad = \left(1+\sum_{k=1}^{n_0} v_k +\alpha_t\right) \log \left(\frac{1+\sum_{k=1}^{n_0} v_k + \alpha_t}{x_0^{(1)} + \sum_{k=1}^{n_0} k(p_k^{(1)}-u_k) + \gamma_t - 2(t-t_1)}\right) - \sum_{k=1}^{n_0} v_k \log \left(\frac{-v_k}{ku_k}\right),
\end{align*}
with $\alpha_t \doteq \sum_{k=n_0+1}^{\infty} \tilde{\zeta}_k'(t)$ and
$\gamma_t \doteq \sum_{k=n_0+1}^{\infty} k(p_k^{(1)}-\tilde{\zeta}_k(t))$.
We wish to show that differentiation under the integral over $t$ with respect to $\varepsilon$ is valid in a neighborhood of $0$.
For this, we now establish an integrable bound on the partial derivative of $\tilde{\eta}$ with respect to $\varepsilon$.
To obtain such a bound, note that we only need to consider the contribution from $\varepsilon \theta_k(t)$ for $1 \le k \le n_0$ such that $p_k^{(2)}>0$, since when $p_k^{(2)} = 0$, one has that $p_k^{(1)} = 0$ by assumption (ii), which implies $\theta_k(t) \equiv 0$.
Therefore assume without loss of generality that $p_k^{(2)}>0$ for every $1 \le k \le n_0$.
Further note that we can assume $p_k^{(1)} > p_k^{(2)}$, since otherwise, once more, $\theta_k(t) \equiv 0$.
Therefore we assume without loss of generality that
\begin{equation}
\label{eq:p1p2strict}
p_k^{(1)} > p_k^{(2)} > 0, \quad 1 \le k \le n_0.
\end{equation}
Denote by $\frac{\partial \eta}{\partial u_k}$ and $\frac{\partial \eta}{\partial v_k}$ the corresponding partial derivatives for the function $\eta(t,(u_k,v_k)_{k=1}^{n_0})$.
Then one can verify that
\begin{equation*}
\frac{\partial \tilde{\eta}(t,\varepsilon)}{\partial \varepsilon} = \sum_{k=1}^{n_0} \frac{\partial \eta}{\partial u_k} \vert_{(t,(\zeta_k^\varepsilon(t), (\zeta_k^\varepsilon)'(t))_{k=1}^{n_0})} \theta_k(t) + \sum_{k=1}^{n_0} \frac{\partial \eta}{\partial v_k} \vert_{(t,(\zeta_k^\varepsilon(t), (\zeta_k^\varepsilon)'(t))_{k=1}^{n_0})} \theta_k'(t).
\end{equation*}
The partial derivatives of $\eta$ are
\small{\begin{align}
& \frac{\partial \eta(t,(u_k,v_k)_{k=1}^{n_0})}{\partial u_k} = \frac{k(1+\sum_{j=1}^{n_0} v_j + \sum_{j=n_0+1}^\infty \tilde{\zeta}_j'(t))}{x_0^{(1)} + \sum_{j=1}^{n_0} j(p_j^{(1)}-u_j) + \sum_{j=n_0+1}^\infty j(p_j^{(1)}-\tilde{\zeta}_j(t)) - 2(t-t_1)} + \frac{v_k}{u_k},\label{eq:37.a} \\
& \frac{\partial \eta(t,(u_k,v_k)_{k=1}^{n_0})}{\partial v_k} = \log \left(\frac{1+\sum_{j=1}^{n_0} v_j + \sum_{j=n_0+1}^\infty \tilde{\zeta}_j'(t)}{x_0^{(1)} + \sum_{j=1}^{n_0} j(p_j^{(1)}-u_j) + \sum_{j=n_0+1}^\infty j(p_j^{(1)}-\tilde{\zeta}_j(t)) - 2(t-t_1)}\right) - \log \frac{-v_k}{ku_k}, \label{eq:37.b}
\end{align}}
for $1 \le k \le n_0$.
For all $0 \le \varepsilon < \frac{1}{4} \wedge \delta \wedge \frac{\delta}{2 \tilde{\varsigma}}$ and $t \in [t_1,t_1+\varsigma]$, from \eqref{eq:p1p2strict} and \eqref{eq:deltadefn},
\begin{align*}
& 0 < \frac{\delta}{2} \le (1-\varepsilon) {\tilde{\zeta}}_0(t) \le \zeta_0^\varepsilon(t) \le x_0^{(1)} + \sum_{k=1}^\infty kp_k^{(1)} < \infty, \\
& \zeta_0^\varepsilon(t) = x_0^{(1)} + \sum_{j=1}^{n_0} j(p_j^{(1)}-\zeta^\varepsilon_j(t)) + \sum_{j=n_0+1}^\infty j(p_j^{(1)}-\tilde{\zeta}_j(t)) - 2(t-t_1), \\
& 0 < p_k^{(2)} \le \zeta_k^\varepsilon(t) \le p_k^{(1)} < \infty, \:\: -1 \le (\zeta_k^\varepsilon)'(t) \le 0, \:\: |\theta_k(t)| \le p_k^{(1)}, \:\: |\theta_k'(t)| \le 2, \quad 1 \le k \le n_0, \\
& 0 < \frac{\delta}{4\tilde{\varsigma}} \le (1-\varepsilon)\left(1+\sum_{k=1}^\infty {\tilde{\zeta}}_k'(t)\right) \le 1+\sum_{k=1}^\infty (\zeta^\varepsilon_k)'(t) \le 1,
\end{align*}
where the last line uses \eqref{eq:minimizer-verify-special-temp} and Lemma \ref{lem:I1I2L}. Furthermore, using \eqref{eq:def-minimizer} we get
\begin{align*}
(\zeta_k^\varepsilon)'(t) \le (1-\varepsilon) {\tilde{\zeta}}_k'(t) \le -\frac{3k\ztil_k}{8\tilde \varsigma} \left( 1-\frac{t-t_1}{\tilde \varsigma}\right)^{k/2-1} = -\frac{3k\ztil_k}{8\tilde \varsigma^{k/2}} \left( \tilde\varsigma-(t-t_1)\right)^{k/2-1}.
\end{align*}
Combining these bounds we have
\begin{align*}
\left| \frac{\partial \eta}{\partial u_k} \vert_{(t,(\zeta_k^\varepsilon(t), (\zeta_k^\varepsilon)'(t))_{k=1}^{n_0})} \right| & \le \frac{k}{\delta/2} + \frac{1}{p_k^{(2)}}, \\
\left| \frac{\partial \eta}{\partial v_k} \vert_{(t,(\zeta_k^\varepsilon(t), (\zeta_k^\varepsilon)'(t))_{k=1}^{n_0})} \right| & \le \max\left\{\left|\log \frac{1}{\delta/2}\right|, \left|\log \frac{\delta/4\tilde\varsigma}{x_0^{(1)}+\sum_{j=1}^\infty jp_j^{(1)}}\right| \right\} \\
& \quad + \max\left\{\left|\log \frac{1}{kp_k^{(2)}}\right|, \left|\log \frac{\frac{3k\ztil_k}{8\tilde \varsigma^{k/2}} \left( \tilde \varsigma-(t-t_1)\right)^{k/2-1}}{kp_k^{(1)}}\right| \right\}
\end{align*}
for all $\varepsilon \in [0,1/4]$, $t \in [t_1,t_1+\varsigma]$, and $k=1,\dotsc,n_0$.
Therefore
one can find some $\tilde C_0 \in (0,\infty)$ such that
\begin{equation*}
\left| \frac{\partial \tilde{\eta}(t,\varepsilon)}{\partial \varepsilon} \right| \le \Ctil_0 + \Ctil_0 |\log \left( \tilde\varsigma-(t-t_1)\right)|, \quad \varepsilon \in [0,1/4], \quad t \in [t_1,t_1+\varsigma].
\end{equation*}
Since $|\log(\tilde\varsigma - (t-t_1))|$ is integrable in $t \in [t_1,t_1+\varsigma]$, we have obtained an integrable bound on $\left| \frac{\partial \tilde{\eta}(t,\varepsilon)}{\partial \varepsilon} \right|$ that is uniform in $\varepsilon \in [0, 1/4]$. Thus
we can differentiate under the integral sign to get
\begin{equation*}
g'(\varepsilon) = \int_{t_1}^{t_1+\varsigma} \frac{\partial \tilde{\eta}(t,\varepsilon)}{\partial \varepsilon} \, dt
\end{equation*}
for all $0 \le \varepsilon < \frac{1}{4} \wedge \delta \wedge \frac{\delta}{2 \tilde{\varsigma}}$.
Next we claim that the following Euler-Lagrange equations are satisfied.
\begin{equation}
\label{eq:Euler-Lagrange}
\frac{\partial \eta}{\partial u_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) = \frac{d}{dt} \frac{\partial \eta}{\partial v_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) \mbox{ for } 1 \le n \le n_0, t \in [t_1,t_1+\varsigma].
\end{equation}
Once this claim is verified, we have
\begin{align*}
g'_+(0) & = \sum_{k=1}^{n_0} \int_{t_1}^{t_1+\varsigma} \left[ \frac{\partial \eta}{\partial u_k} \vert_{(t,(\tilde{\zeta}_k(t), \tilde{\zeta}_k'(t))_{k=1}^{n_0})} \theta_k(t) + \frac{\partial \eta}{\partial v_k} \vert_{(t,(\tilde{\zeta}_k(t), \tilde{\zeta}_k'(t))_{k=1}^{n_0})} \theta_k'(t) \right] dt \\
& = \sum_{k=1}^{n_0} \int_{t_1}^{t_1+\varsigma} \theta_k'(t) \left[ - \int_{t_1}^t \frac{\partial \eta}{\partial u_k} \vert_{(s,(\tilde{\zeta}_k(s), \tilde{\zeta}_k'(s))_{k=1}^{n_0})} \, ds + \frac{\partial \eta}{\partial v_k} \vert_{(t,(\tilde{\zeta}_k(t), \tilde{\zeta}_k'(t))_{k=1}^{n_0})} \right] dt \\
& = \sum_{k=1}^{n_0} \int_{t_1}^{t_1+\varsigma} \tilde{c}_k \theta_k'(t) \, dt
= \sum_{k=1}^{n_0} \tilde{c}_k (\theta_k(t_1+\varsigma) - \theta_k(t_1)) = 0,
\end{align*}
where the second equality follows from integration by parts, the third is a consequence of \eqref{eq:Euler-Lagrange} with some suitable constants $\tilde c_k$ and the last equality holds since $\theta_k(t_1)=0=\theta_k(t_1+\varsigma)$.
This gives the desired contradiction and shows that $({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ is the minimizer.
Finally we prove the claim \eqref{eq:Euler-Lagrange}. Fix $1 \le n \le n_0$.
Using \eqref{eq:37.a} and \eqref{eq:Euler-Lagrange-key-2} one can verify that
\begin{align*}
\frac{\partial \eta}{\partial u_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) & = \frac{n(1 + \sum_{k=1}^\infty \tilde{\zeta}'_k(t))}{\tilde{\zeta}_0(t)} + \frac{\tilde{\zeta}_n'(t)}{\tilde{\zeta}_n(t)} \\
& = \frac{n}{2\tilde{\varsigma} -2(t-t_1)} + \frac{\tilde{\zeta}_n'(t)}{\tilde{\zeta}_n(t)} \\
& = \frac{d}{dt} \left( -\frac{n}{2} \log(\tilde{\varsigma}-(t-t_1)) + \log(\tilde{\zeta}_n(t)) \right).
\end{align*}
Therefore it suffices to show
\begin{equation}
\label{eq:Euler-Lagrange-simple}
-\frac{n}{2} \log(\tilde{\varsigma}-(t-t_1)) + \log(\tilde{\zeta}_n(t)) = \frac{\partial \eta}{\partial v_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) + \bar{c}_n
\end{equation}
for some constant $\bar{c}_n$.
From \eqref{eq:37.b} one has that
\begin{align*}
\frac{\partial \eta}{\partial v_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) & = \log(n\tilde{\zeta}_n(t)) - \log(-\tilde{\zeta}_n'(t)) + \log\left(\frac{1 + \sum_{k=1}^\infty \tilde{\zeta}'_k(t)}{\tilde{\zeta}_0(t)}\right) \\
& = \log(n\tilde{\zeta}_n(t)) - \log(-\tilde{\zeta}_n'(t)) - \log(2\tilde{\varsigma} -2(t-t_1))
\end{align*}
where the last line follows from \eqref{eq:Euler-Lagrange-key-2}.
From this we have
\begin{align*}
& -\frac{n}{2} \log(\tilde{\varsigma}-(t-t_1)) + \log(\tilde{\zeta}_n(t)) - \frac{\partial \eta}{\partial v_n} (t,(\tilde{\zeta}_k(t),\tilde{\zeta}'_k(t))_{k =1}^{n_0}) \\
& = - \left(\frac{n}{2}-1\right) \log(\tilde{\varsigma}-(t-t_1)) - \log\left(\frac{n}{2}\right) + \log(-\tilde{\zeta}_n'(t)) \\
& = \log \tilde{z}_n - \frac{n}{2} \log \tilde{\varsigma},
\end{align*}
where the last line follows from \eqref{eq:Euler-Lagrange-key-1} and \eqref{eq:def-minimizer}.
Therefore \eqref{eq:Euler-Lagrange-simple} holds with $\bar{c}_n=\log \tilde{z}_n - \frac{n}{2} \log \tilde{\varsigma}$
which proves \eqref{eq:Euler-Lagrange}.
This completes the proof. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:minimizer-verify-general}]
The first equality in \eqref{eq:i2ttau} follows as before from Lemma \ref{lem:I1I2L}.
Lemma \ref{lem:minimizer-verify-special} shows that the second equality holds if the two additional assumptions in Lemma \ref{lem:minimizer-verify-special} are satisfied.
Let $(\boldsymbol{\zeta},\psi) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ be a trajectory such that $\int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds \le \int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds$.
It suffices to show
\begin{equation}\int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds \ge \int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds.\label{eq:showineq}
\end{equation}
We claim that we can assume
\begin{itemize}
\item
$\zeta_0(t) > 0$ for all $t \in (t_1,t_1+\varsigma)$,
\item
if $z_k>0$ for some $k \in \mathbb{N}$, then $\zeta_k(t) > 0$ for all $t \in (t_1,t_1+\varsigma)$.
\end{itemize}
For this, note that ${\boldsymbol{\tilde\zeta}}$ satisfies these two properties.
Letting $(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon) \doteq \varepsilon (\boldsymbol{\zeta},\psi) + (1-\varepsilon) ({\boldsymbol{\tilde\zeta}},\tilde{\psi})$ for $\varepsilon \in (0,1)$ we have that $(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon) \in \mathcal{J}^2_{t_1,t_1+\varsigma}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)})$ and it satisfies the two claimed properties.
Also, from the convexity of $L$ we see that it suffices to prove \eqref{eq:showineq} with $(\boldsymbol{\zeta},\psi)$
replaced with $(\boldsymbol{\zeta}^\varepsilon,\psi^\varepsilon)$.
Therefore the claim holds.
Fix two sequences of time instants $t_1^{(n)} \doteq t_1 + \frac{1}{n}$ and $t_2^{(n)} \doteq t_1+\varsigma - \frac{1}{n}$.
Note that $t_2^{(n)} = t_1^{(n)} + \varsigma^{(n)}$ where $\varsigma^{(n)}$ is defined by \eqref{eq:tau} by replacing
$(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)})$ with $(\boldsymbol{x}^{(1),n},\boldsymbol{x}^{(2),n}) =
(\boldsymbol{\zeta}(t_1^{(n)}), \boldsymbol{\zeta}(t_2^{(n)}))$.
Consider now the optimization problem in \eqref{eq:def-I-j} associated with
$I^2_{t_1^{(n)},t_1^{(n)} + \varsigma^{(n)}}(\boldsymbol{x}^{(1),n},\boldsymbol{x}^{(2),n})$.
Note that for this problem the two additional assumptions in Lemma \ref{lem:minimizer-verify-special} are satisfied.
Furthermore, the assumption $\sum_{k=1}^\infty k z_k + z_0 > 2 \sum_{k=1}^\infty z_k$ in Lemma \ref{lem:minimizer-verify-general} also holds with $\boldsymbol{z}$ replaced by $\boldsymbol{z}^{(n)} = \boldsymbol{x}^{(1),n}-\boldsymbol{x}^{(2),n}$, for sufficiently large $n$.
Therefore Lemma \ref{lem:minimizer-verify-special} can be applied with $(\boldsymbol{x}^{(1)},\boldsymbol{x}^{(2)})$ replaced with $(\boldsymbol{x}^{(1),n},\boldsymbol{x}^{(2),n})$.
Let $({\boldsymbol{\tilde\zeta}}^{(n)},\tilde{\psi}^{(n)}) \in \mathcal{J}^2_{t_1^{(n)},t_1^{(n)} + \varsigma^{(n)}}(\boldsymbol{x}^{(1),n},\boldsymbol{x}^{(2),n})$
be the corresponding minimizer and $\beta^{(n)} \doteq \beta(\boldsymbol{x}^{(1),n},\boldsymbol{x}^{(2),n})$.
Then
\begin{align*}
\int_{t_1}^{t_1+\varsigma} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds
& = \lim_{n \to \infty} \int_{t_1^{(n)}}^{t_2^{(n)}} L(\boldsymbol{\zeta}(s), \boldsymbol{\zeta}'(s))\,ds \\
& \ge \liminf_{n \to \infty} \int_{t_1^{(n)}}^{t_2^{(n)}} L({\boldsymbol{\tilde\zeta}}^{(n)}(s), ({\boldsymbol{\tilde\zeta}}^{(n)})'(s))\,ds \\
& = \liminf_{n \to \infty} [\tilde{H}(\boldsymbol{z}^{(n)}) + \tilde{H}(\boldsymbol{x}^{(2),n}) - \tilde{H}(\boldsymbol{x}^{(1),n}) + \tilde{K}(\boldsymbol{x}^{(1),n}, \boldsymbol{x}^{(2),n})] \\
& \ge \tilde{H}(\boldsymbol{z}) + \tilde{H}(\boldsymbol{x}^{(2)}) - \tilde{H}(\boldsymbol{x}^{(1)}) + \tilde{K}(\boldsymbol{x}^{(1)}, \boldsymbol{x}^{(2)}) \\
& = \int_{t_1}^{t_1+\varsigma} L({\boldsymbol{\tilde\zeta}}(s), {\boldsymbol{\tilde\zeta}}'(s))\,ds.
\end{align*}
Here the first inequality follows from Lemma \ref{lem:minimizer-verify-special} and the last three lines use Lemma \ref{lem:minimizer-cost}. \end{proof}
\section{Proof of LLN}
\label{sec:examples} In this section we give the proofs of Theorem \ref{thm:LLN} and Proposition \ref{prop:uniqueness_LLN}.
\noindent {\em Proof of Theorem \ref{thm:LLN}.}
(1)
Assume without loss of generality that $T\ge 1$. Since $f_1(t) \le 1$, we see from Assumption \ref{asp:exponential-boundN-S} that $r(\boldsymbol{\zeta}(\cdot))$ with $r$ from (\ref{eq:r_k}) and $\psi$ are well-defined.
Let $\varphi_k(s,y) = 1$ for all $k \in \mathbb{N}_0$ and $(s,y) \in [0,T] \times [0,1]$.
It suffices to show $\boldsymbol{\varphi} \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$ and $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$.
Since $f_1(t)=F^{-1}_1(t) {{1}}_{[0,1]}(t)$, we have $\tau_{\boldsymbol{\zeta}}=1$, where $\tau_{\boldsymbol{\zeta}}$ was defined in \eqref{eq:defntauphi}.
Since $F_1(f_1(t))=t$ for $t\in [0,1]$,
\begin{equation*}
f'_1(t) = - \frac{1}{\sum_{k=1}^\infty k p_k (f_1(t))^{k-1}} \mbox{ for } 0 < t < \tau_{\boldsymbol{\zeta}} \mbox{ and } f'_1(t)=0 \mbox{ for } \tau_{\boldsymbol{\zeta}} < t <T.
\end{equation*}
Using this it follows that for $k \in \mathbb{N}$,
\begin{equation*}
\zeta'_k(t) = - \frac{k \zeta_k(t)}{\sum_{j=1}^\infty j\zeta_j(t)} = - r_k(\boldsymbol{\zeta}(t)) \mbox{ for } 0 < t < \tau_{\boldsymbol{\zeta}} \mbox{ and } \zeta'_k(t)=0 \mbox{ for } \tau_{\boldsymbol{\zeta}} < t <T.
\end{equation*}
From this we see that \eqref{eq:phi_k} holds and we can write
\begin{equation*}
\psi(t) = \sum_{k=0}^\infty (k-2) \int_0^t r_k(\boldsymbol{\zeta}(s))\,ds.
\end{equation*}
This gives \eqref{eq:psi} and verifies that $\boldsymbol{\varphi} \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$.
Next we argue that $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$.
From Assumption \ref{asp:exponential-boundN-S}, for $t < \tau_{\boldsymbol{\zeta}}$, as $K \to \infty$,
\begin{equation*}
\sum_{k=K}^\infty |k-2| |\zeta'_k(t)| \le \sum_{k=K}^\infty k r_k(\boldsymbol{\zeta}(t)) \le \frac{\sum_{k=K}^\infty k^2 p_k}{r(\boldsymbol{\zeta}(t))} \to 0.
\end{equation*}
In particular, $\psi$ is absolutely continuous and thus property (a) of $\mathcal{C}_T$ holds.
Also, for $t < \tau_{\boldsymbol{\zeta}}$,
\begin{equation*}
\psi'(t) = \sum_{k=1}^\infty (k-2) r_k(\boldsymbol{\zeta}(t)) = \frac{\sum_{k=1}^\infty k(k-2)p_k (f_1(t))^k}{r(\boldsymbol{\zeta}(t))} \le \frac{f_1(t) \sum_{k=1}^\infty k(k-2)p_k}{r(\boldsymbol{\zeta}(t))} \le 0.
\end{equation*}
Therefore $\Gamma(\psi)(t)=0=\zeta_0(t)$ for $t < \tau_{\boldsymbol{\zeta}}$.
For $\tau_{\boldsymbol{\zeta}} \le t \le T$, clearly $\Gamma(\psi)(t)=0=\zeta_0(t)$.
So we have checked property (b) of $\mathcal{C}_T$.
Property (c) of $\mathcal{C}_T$ follows from the definition of $\zeta_k$, $k \in \mathbb{N}$.
Therefore $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$ and part (1) follows.
(2) The fact that when $p_1>0$ there is a unique $\rho \in (0,1)$ such that $G_1(\rho)=\rho$ is proved in \cite{molloy1995critical}.
Since $f_\rho(t) \le 1$, we see from Assumption \ref{asp:exponential-boundN-S} that $r(\boldsymbol{\zeta}(\cdot))$ and $\psi$ are well-defined.
Let $\varphi_k(s,y) = 1$ for all $k \in \mathbb{N}_0$ and $(s,y) \in [0,T] \times [0,1]$.
It suffices to show $\boldsymbol{\varphi} \in \mathcal{S}_T(\boldsymbol{\zeta} ,\psi )$ and $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_T$.
First consider times $t < \tau$.
Using the definitions of $r$, $G_1$ and $\tau$, for $t < \tau$
\begin{equation*}
r(\boldsymbol{\zeta}(t)) = \mu-2t -\mu \sqrt{1-2t/\mu}G_1 ( \sqrt{1-2t/\mu}) + \sum_{k=1}^\infty k p_k (1 - 2t/\mu)^{k/2} = \mu-2t > \mu \rho^2 \ge 0.
\end{equation*}
From this one can verify that for $t < \tau$,
\begin{align*}
\zeta'_k(t) & = - \frac{k \zeta_k(t)}{\mu-2t} = -r_k(\boldsymbol{\zeta}(t)).
\end{align*}
Using this we see that \eqref{eq:phi_k} holds for $t < \tau$ and hence as before \eqref{eq:psi} holds as well.
To show that $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_t$ for $t<\tau$, it suffices to show that $\psi(t)$ is absolutely continuous and $\zeta_0(t) = \psi(t)$ for $t \in [0,\tau)$.
Note that for $t < \tau$, $\sum_{k=1}^\infty |k-2| |r_k(\boldsymbol{\zeta}(t))| \le \frac{\sum_{k=1}^\infty k^2 p_k}{\mu-2t}$.
So from Assumption \ref{asp:exponential-boundN-S}, $\psi$ is absolutely continuous over $[0,\tau]$.
Also, one can verify that for $t < \tau$,
\begin{equation*}
\zeta'_0(t) = \frac{d}{dt} r(\boldsymbol{\zeta}(t)) - \sum_{k=1}^\infty k \zeta'_k(t) = - 2 + \sum_{k=1}^\infty k r_k(\boldsymbol{\zeta}(t)) = \psi'(t).
\end{equation*}
So $\zeta_0(t) = \psi(t)$ for $t < \tau$.
Thus we have that $\boldsymbol{\varphi} \in \mathcal{S}_t(\boldsymbol{\zeta} ,\psi )$ and $(\boldsymbol{\zeta} ,\psi ) \in \mathcal{C}_t$ for each $t<\tau$.
We now consider $t \in [\tau, \tau_{\boldsymbol{\zeta}}]$.
Since $\rho \in [0,1)$ and $G_1(\rho)=\rho$, we have
\begin{align*}
0 & = \frac{\mu(G_1(\rho)-\rho)}{\rho-1} = \frac{1}{\rho-1} \sum_{k=1}^\infty kp_k (\rho^{k-1}-\rho) = -p_1 + \rho \sum_{k=3}^\infty kp_k \frac{\rho^{k-2}-1}{\rho-1} \\
& = -p_1 + \rho \sum_{k=3}^\infty kp_k (\rho^{k-3}+\rho^{k-4}+\dotsb+1) \\
& \ge -p_1 + \rho \sum_{k=3}^\infty kp_k (k-2)\rho^{k-3}
\ge \sum_{k=1}^\infty k(k-2)p_k\rho^{k-1}
\end{align*}
and therefore $0 \ge \sum_{k=1}^\infty k(k-2)p_k\rho^k = \sum_{k=1}^\infty k(k-2) \zeta_k(\tau)$.
Namely, the assumption in part (1) is satisfied with ${\boldsymbol{p}}$ replaced by $\boldsymbol{\zeta}(\tau)$.
Thus the proof for the case $t \in [\tau, \tau_{\boldsymbol{\zeta}}]$ is very similar to that in part (1), with $f_1(t)$ replaced by $f_{\rho}(t-\tau)$ and $p_k$ replaced with $\zeta_k(\tau)$, and we omit the details.
This completes the proof of (2).
\qed \\
\noindent {\em Proof of Proposition \ref{prop:uniqueness_LLN}.}
Suppose for $i=1,2$, $(\boldsymbol{\zeta}^{(i)},\psi^{(i)})$ are two pairs such that $I_T(\boldsymbol{\zeta}^{(i)},\psi^{(i)}) = 0$.
By the definition of $I_T(\cdot)$, $(\boldsymbol{\zeta}^{(i)},\psi^{(i)}) \in \mathcal{C}_T$.
From Remark \ref{rmk:unique_varphi} we see that there exists some $\boldsymbol{\varphi}^{(i)} \in \mathcal{S}_T(\boldsymbol{\zeta}^{(i)},\psi^{(i)})$ whose cost equals $I_T(\boldsymbol{\zeta}^{(i)},\psi^{(i)})$, namely
\begin{equation*}
\sum_{k=0}^\infty \int_{[0,T] \times [0,1]} \ell(\varphi_k^{(i)}(s,y)) \, ds\,dy =I_T(\boldsymbol{\zeta}^{(i)},\psi^{(i)})= 0.
\end{equation*}
Since $\ell(x) = 0$ if and only if $x=1$, we must have $\varphi_k^{(i)}(s,y) = 1$ for a.e.\ $(s,y) \in [0,T]\times[0,1]$ and $k \in \mathbb{N}_0$.
Using such $\varphi^{(i)}$ with \eqref{eq:psi} and \eqref{eq:phi_k}, we see that
\begin{align}
\zeta_k^{(i)}(t) & = p_k - \int_0^t r_k(\boldsymbol{\zeta}^{(i)}(s))\,ds, k \in \mathbb{N}, \label{eq:phi_unique_LLN} \\
\psi^{(i)}(t) & = \sum_{k=0}^\infty (k-2) \int_0^t r_k(\boldsymbol{\zeta}^{(i)}(s))\,ds. \label{eq:psi_unique_LLN}
\end{align}
Since $\zeta_0^{(i)} = \Gamma(\psi^{(i)})$, for a.e.\ $t$, $(\zeta_0^{(i)})'(t) \ge (\psi^{(i)})'(t) = \sum_{k=0}^\infty (k-2) r_k(\boldsymbol{\zeta}^{(i)}(t))$, and by (\ref{eq:r_k})
\begin{align*}
\frac{d}{dt} r(\boldsymbol{\zeta}^{(i)}(t)) & = (\zeta_0^{(i)})'(t) + \sum_{k=1}^\infty k (\zeta_k^{(i)})'(t) \ge \sum_{k=0}^\infty (k-2) r_k(\boldsymbol{\zeta}^{(i)}(t)) - \sum_{k=1}^\infty k r_k(\boldsymbol{\zeta}^{(i)}(t)) \\
& = -2\cdot{{1}}_{\{r(\boldsymbol{\zeta}^{(i)}(t))>0\}} \ge -2.
\end{align*}
Consider the strictly increasing function $g^{(i)}(t)$ defined by
\begin{equation}
\label{eq:g_unique_LLN}
g^{(i)}(0) = 0, \: (g^{(i)})'(t) = r(\boldsymbol{\zeta}^{(i)}(g^{(i)}(t))) {{1}}_{\{g^{(i)}(t) < \tau_{\boldsymbol{\zeta}^{(i)}}\}} + {{1}}_{\{g^{(i)}(t) \ge \tau_{\boldsymbol{\zeta}^{(i)}}\}},
\end{equation}
where $\tau_{\boldsymbol{\zeta}^{(i)}}$ is as in \eqref{eq:defntauphi}.
Since $\frac{d}{dt} r(\boldsymbol{\zeta}^{(i)}(t)) \in [-2,0]$ and $0 \le r(\boldsymbol{\zeta}^{(i)}(\cdot)) \le r(\boldsymbol{\zeta}^{(i)}(0)) = \sum_{k=1}^\infty kp_k < \infty$, we see that $r(\boldsymbol{\zeta}^{(i)}(\cdot))$ is bounded and Lipschitz.
Also $r(\boldsymbol{\zeta}^{(i)}(t)) > 0$ for $t < \tau_{\boldsymbol{\zeta}^{(i)}}$.
So we have existence and uniqueness of the strictly increasing function $g^{(i)}(t)$ before it reaches $\tau_{\boldsymbol{\zeta}^{(i)}}$.
The existence, uniqueness and monotonicity of $g^{(i)}(t)$ after $\tau_{\boldsymbol{\zeta}^{(i)}}$ are straightforward.
Define $({\tilde{\boldsymbol{\zeta}}}^{(i)}(t),{\tilde{\psi}}^{(i)}(t)) \doteq (\boldsymbol{\zeta}^{(i)}(g^{(i)}(t)),\psi^{(i)}(g^{(i)}(t)))$.
From \eqref{eq:phi_unique_LLN} and \eqref{eq:psi_unique_LLN} it follows that
\[
\Scale[0.9]{\begin{aligned}
{\tilde{\zeta}}_k^{(i)}(t) & = p_k - \int_0^t k{\tilde{\zeta}}_k^{(i)}(s)\,ds, k \in \mathbb{N}, \\
{\tilde{\psi}}^{(i)}(t) & = \sum_{k=1}^\infty (k-2) \int_0^t k{\tilde{\zeta}}_k^{(i)}(s)\,ds - 2 \int_0^t {\tilde{\zeta}}_0^{(i)}(s)\,ds
= \sum_{k=1}^\infty (k-2) \int_0^t k{\tilde{\zeta}}_k^{(i)}(s)\,ds - 2 \int_0^t \Gamma({\tilde{\psi}}^{(i)})(s)\,ds.
\end{aligned}}
\]
Clearly ${\tilde{\zeta}}_k^{(1)} = {\tilde{\zeta}}_k^{(2)}$ for each $k \in \mathbb{N}$.
Also, since $\Gamma$ is Lipschitz on path space, Gronwall's inequality implies ${\tilde{\psi}}^{(1)} = {\tilde{\psi}}^{(2)}$, and hence ${\tilde{\zeta}}_0^{(1)} = {\tilde{\zeta}}_0^{(2)}$.
Noting that \eqref{eq:g_unique_LLN} can be written as
\begin{equation*}
g^{(i)}(0) = 0, \: (g^{(i)})'(t) = r(\tilde{\boldsymbol{\zeta}}^{(i)}(t)) {{1}}_{\{r(\tilde{\boldsymbol{\zeta}}^{(i)}(t)) > 0\}} + {{1}}_{\{r(\tilde{\boldsymbol{\zeta}}^{(i)}(t)) = 0\}},
\end{equation*}
we have $g^{(1)} = g^{(2)}$.
Since $g^{(i)}$ is strictly increasing, its inverse function is well-defined and we must have that $(\boldsymbol{\zeta}^{(1)},\psi^{(1)})=(\boldsymbol{\zeta}^{(2)},\psi^{(2)})$.
This completes the proof.
\qed
\noindent \textbf{Acknowledgement:} We would like to thank the two referees for a careful review of our work and for the many helpful suggestions. The research of SB was supported in part by the NSF (DMS-1606839, DMS-1613072) and the Army Research Office (W911NF-17-1-0010). The research of AB was supported in part by the NSF (DMS-1305120, DMS-1814894, DMS-1853968). The research of PD was supported in part by the NSF (DMS-1904992) and DARPA (W911NF-15-2-0122). The research of RW was supported in part by DARPA (W911NF-15-2-0122).
\begin{bibdiv} \begin{biblist}
\bib{aldous1997brownian}{article}{
author={Aldous, David},
title={Brownian excursions, critical random graphs and the
multiplicative coalescent},
date={1997},
journal={The Annals of Probability},
pages={812\ndash 854}, }
\bib{bender1978asymptotic}{article}{
author={Bender, Edward~A},
author={Canfield, E~Rodney},
title={The asymptotic number of labeled graphs with given degree
sequences},
date={1978},
journal={Journal of Combinatorial Theory, Series A},
volume={24},
number={3},
pages={296\ndash 307}, }
\bib{Billingsley1999}{book}{
author={Billingsley, P.},
title={{Convergence of Probability Measures}},
series={Wiley series in probability and mathematical statistics:
Probability and statistics},
publisher={John Wiley \& Sons, New York},
date={1999}, }
\bib{bollobas1980probabilistic}{article}{
author={Bollob{\'a}s, B{\'e}la},
title={A probabilistic proof of an asymptotic formula for the number of
labelled regular graphs},
date={1980},
journal={European Journal of Combinatorics},
volume={1},
number={4},
pages={311\ndash 316}, }
\bib{bordenave2015large}{article}{
author={Bordenave, Charles},
author={Caputo, Pietro},
title={Large deviations of empirical neighborhood distribution in sparse
random graphs},
date={2015},
journal={Probability Theory and Related Fields},
volume={163},
number={1-2},
pages={149\ndash 222}, }
\bib{borgs2008convergent}{article}{
author={Borgs, Christian},
author={Chayes, Jennifer~T},
author={Lov{\'a}sz, L{\'a}szl{\'o}},
author={S{\'o}s, Vera~T},
author={Vesztergombi, Katalin},
title={Convergent sequences of dense graphs i: Subgraph frequencies,
metric properties and testing},
date={2008},
journal={Advances in Mathematics},
volume={219},
number={6},
pages={1801\ndash 1851}, }
\bib{borgs2012convergent}{article}{
author={Borgs, Christian},
author={Chayes, Jennifer~T},
author={Lov{\'a}sz, L{\'a}szl{\'o}},
author={S{\'o}s, Vera~T},
author={Vesztergombi, Katalin},
title={Convergent sequences of dense graphs {I}{I}. multiway cuts and
statistical physics},
date={2012},
journal={Annals of Mathematics},
volume={176},
number={1},
pages={151\ndash 219}, }
\bib{BoueDupuis1998variational}{article}{
author={Bou{\'e}, M.},
author={Dupuis, P.},
title={{A variational representation for certain functionals of Brownian
motion}},
date={1998},
journal={The Annals of Probability},
volume={26},
number={4},
pages={1641\ndash 1659}, }
\bib{BudhirajaChenDupuis2013large}{article}{
author={Budhiraja, A.},
author={Chen, J.},
author={Dupuis, P.},
title={{Large deviations for stochastic partial differential equations
driven by a Poisson random measure}},
date={2013},
journal={Stochastic Processes and their Applications},
volume={123},
number={2},
pages={523\ndash 560}, }
\bib{BudhirajaDupuis2000variational}{article}{
author={Budhiraja, A.},
author={Dupuis, P.},
title={{A variational representation for positive functionals of
infinite dimensional Brownian motion}},
date={2000},
journal={Probability and Mathematical Statistics},
volume={20},
number={1},
pages={39\ndash 61}, }
\bib{buddupbook}{book}{
author={Budhiraja, A.},
author={Dupuis, P.},
title={{Analysis and Approximation of Rare Events. Representations and
Weak Convergence Methods}},
publisher={Series Prob. Theory and Stoch. Modelling, Springer},
date={2019},
volume={94}, }
\bib{BudhirajaDupuisGanguly2015moderate}{article}{
author={Budhiraja, A.},
author={Dupuis, P.},
author={Ganguly, A.},
title={{Moderate deviation principles for stochastic differential
equations with jumps}},
date={2016},
journal={The Annals of Probability},
volume={44},
number={3},
pages={1723\ndash 1775}, }
\bib{BudhirajaDupuisMaroulas2011variational}{article}{
author={Budhiraja, A.},
author={Dupuis, P.},
author={Maroulas, V.},
title={{Variational representations for continuous time processes}},
date={2011},
journal={Annales de l'Institut Henri Poincar{\'e}(B), Probabilit{\'e}s et
Statistiques},
volume={47},
number={3},
pages={725\ndash 747}, }
\bib{BudhirajaWu2017moderate}{article}{
author={Budhiraja, A.},
author={Wu, R.},
title={{Moderate deviation principles for weakly interacting particle
systems}},
date={2017},
ISSN={1432-2064},
journal={Probability Theory and Related Fields},
volume={168},
number={3},
pages={721\ndash 771},
url={http://dx.doi.org/10.1007/s00440-016-0723-3}, }
\bib{chatterjee2016introduction}{article}{
author={Chatterjee, Sourav},
title={An introduction to large deviations for random graphs},
date={2016},
journal={Bulletin of the American Mathematical Society},
volume={53},
number={4},
pages={617\ndash 642}, }
\bib{chatterjee2011large}{article}{
author={Chatterjee, Sourav},
author={Varadhan, SR~Srinivasa},
title={The large deviation principle for the {E}rd{\H{o}}s-{R}{\'e}nyi
random graph},
date={2011},
journal={European Journal of Combinatorics},
volume={32},
number={7},
pages={1000\ndash 1017}, }
\bib{choset}{article}{
author={Choi, Jihyeok},
author={Sethuraman, Sunder},
title={Large deviations for the degree structure in preferential
attachment schemes},
date={2013},
journal={The Annals of Applied Probability},
volume={23},
number={2},
pages={722\ndash 763}, }
\bib{choi2013large}{article}{
author={Choi, Jihyeok},
author={Sethuraman, Sunder},
author={others},
title={Large deviations for the degree structure in preferential
attachment schemes},
date={2013},
journal={The Annals of Applied Probability},
volume={23},
number={2},
pages={722\ndash 763}, }
\bib{DhaSen}{article}{
author={Dhara, S.},
author={Sen, S.},
title={{Large deviation for uniform graphs with given degrees}},
date={2019},
journal={arXiv preprint arXiv:1904.07666}, }
\bib{DupuisEllis2011weak}{book}{
author={Dupuis, P.},
author={Ellis, R.~S.},
title={{A Weak Convergence Approach to the Theory of Large Deviations}},
series={Wiley series in probability and mathematical statistics:
Probability and statistics},
publisher={John Wiley \& Sons, New York},
date={1997},
volume={902}, }
\bib{dupnuzwhi}{article}{
author={Dupuis, Paul},
author={Nuzman, Carl},
author={Whiting, Phil},
title={Large deviation asymptotics for occupancy problems},
date={2004},
journal={The Annals of Probability},
volume={32},
number={3B},
pages={2765\ndash 2818}, }
\bib{fortunato2010community}{article}{
author={Fortunato, Santo},
title={Community detection in graphs},
date={2010},
journal={Physics Reports},
volume={486},
number={3},
pages={75\ndash 174}, }
\bib{IkedaWatanabe1990SDE}{book}{
author={Ikeda, N.},
author={Watanabe, S.},
title={{Stochastic Differential Equations and Diffusion Processes}},
series={North-Holland Mathematical Library},
publisher={Elsevier},
date={1981},
volume={24}, }
\bib{Janson2009new}{article}{
author={Janson, S.},
author={Luczak, M.~J},
title={{A new approach to the giant component problem}},
date={2009},
journal={Random Structures \& Algorithms},
volume={34},
number={2},
pages={197\ndash 216}, }
\bib{janson2009probability}{article}{
author={Janson, Svante},
title={The probability that a random multigraph is simple},
date={2009},
journal={Combinatorics, Probability \& Computing},
volume={18},
number={1-2},
pages={205}, }
\bib{KaratzasShreve1991brownian}{book}{
author={Karatzas, I.},
author={Shreve, S.~E.},
title={{Brownian Motion and Stochastic Calculus}},
series={Graduate Texts in Mathematics},
publisher={Springer New York},
date={1991},
volume={113},
ISBN={9780387976556}, }
\bib{Kurtz1981approximation}{book}{
author={Kurtz, T.~G.},
title={{Approximation of Population Processes}},
series={CBMS-NSF Regional Conference Series in Applied Mathematics},
publisher={SIAM},
date={1981},
volume={36}, }
\bib{lovasz2012large}{book}{
author={Lov{\'a}sz, L{\'a}szl{\'o}},
title={Large networks and graph limits},
publisher={American Mathematical Society Providence},
date={2012},
volume={60}, }
\bib{molloy1995critical}{article}{
author={Molloy, Michael},
author={Reed, Bruce},
title={A critical point for random graphs with a given degree sequence},
date={1995},
journal={Random Structures \& Algorithms},
volume={6},
number={2-3},
pages={161\ndash 180}, }
\bib{MolloyReed1998size}{article}{
author={Molloy, Michael},
author={Reed, Bruce},
title={The size of the giant component of a random graph with a given
degree sequence},
date={1998},
journal={Combinatorics, Probability and Computing},
volume={7},
number={3},
pages={295\ndash 305}, }
\bib{newman2002spread}{article}{
author={Newman, Mark~EJ},
title={Spread of epidemic disease on networks},
date={2002},
journal={Physical Review E},
volume={66},
number={1},
pages={016128}, }
\bib{newman2006modularity}{article}{
author={Newman, Mark~EJ},
title={Modularity and community structure in networks},
date={2006},
journal={Proceedings of the National Academy of Sciences},
volume={103},
number={23},
pages={8577\ndash 8582}, }
\bib{newman2001random}{article}{
author={Newman, Mark~EJ},
author={Strogatz, Steven~H},
author={Watts, Duncan~J},
title={Random graphs with arbitrary degree distributions and their
applications},
date={2001},
journal={Physical Review E},
volume={64},
number={2},
pages={026118}, }
\bib{o1998some}{article}{
author={O'Connell, Neil},
title={Some large deviation results for sparse random graphs},
date={1998},
journal={Probability Theory and Related Fields},
volume={110},
number={3},
pages={277\ndash 285}, }
\bib{puhalskii2005stochastic}{article}{
author={Puhalskii, Anatolii~A},
title={Stochastic processes in random graphs},
date={2005},
journal={The Annals of Probability},
volume={33},
number={1},
pages={337\ndash 412}, }
\bib{puhalskii2013number}{article}{
author={Puhalskii, Anatolii~A},
title={On the number of isolated vertices in a growing random graph},
date={2013},
journal={Rocky Mountain Journal of Mathematics},
volume={43},
number={6},
pages={1941\ndash 1989}, }
\bib{Hofstad2016}{book}{
author={Van Der~Hofstad, Remco},
title={Random {G}raphs and {C}omplex {N}etworks},
publisher={Cambridge University Press},
date={2016},
volume={1}, }
\end{biblist} \end{bibdiv}
\scriptsize{\textsc{\noindent S. Bhamidi and A. Budhiraja\newline Department of Statistics and Operations Research\newline University of North Carolina\newline Chapel Hill, NC 27599, USA\newline email: [email protected], [email protected]
}
\textsc{\noindent P. Dupuis \newline Division of Applied Mathematics\newline Brown University\newline Providence, RI 02912, USA\newline email: paul\[email protected]
}
\textsc{\noindent R. Wu\newline Department of Mathematics\newline University of Michigan\newline Ann Arbor, MI 48109, USA\newline email: [email protected] }}
\end{document} | arXiv |
\begin{definition}[Definition:Proportion/Perturbed]
Let $a, b, c$ and $A, B, C$ be magnitudes.
$a, b, c$ are '''in perturbed proportion''' to $A, B, C$ {{iff}}:
:$a : b = B : C$
:$b : c = A : B$
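A quick numerical illustration: the magnitudes $4, 6, 9$ are in perturbed proportion to $8, 12, 18$, since $4 : 6 = 12 : 18$ and $6 : 9 = 8 : 12$ (each ratio being $2 : 3$).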
{{EuclidSaid}}
:''{{:Definition:Euclid's Definitions - Book V/18 - Perturbed Proportion}}''
{{EuclidDefRefNocat|V|18|Perturbed Proportion}}
Category:Definitions/Proportion
\end{definition} | ProofWiki |
Uncrossed Knight's Tour
A well-known puzzle is to "tour" all the squares of an $8 \times 8$ chessboard using a knight, which is a piece that can move only by jumping one square in one direction and two squares in an orthogonal direction. The knight must visit every square of the chessboard, without repeats, and then return to its starting square. There are many ways to do this, and the chessboard size is manageable, so it is a reasonable puzzle for a human to solve.
However, you have access to a computer, and some coding skills! So, we will give you a harder version of this problem on a rectangular $m \times n$ chessboard with an additional constraint: the knight may never cross its own path. If you imagine its path consisting of straight line segments connecting the centers of squares it jumps between, these segments must form a simple polygon; that is, no two segments intersect or touch, except that consecutive segments touch at their common end point. This constraint makes it impossible to visit every square, so instead you must maximize the number of squares the knight visits. We keep the constraint that the knight must return to its starting square. Figure 1 shows an optimal solution for the first sample input, a $6 \times 6$ board.
Figure 1: An optimal solution for a $6 \times 6$ board.
The input consists of a single line containing two integers $m$ ($1 \le m \le 8$) and $n$ ($1 \le n \le 10^{15}$), giving the dimensions of the rectangular chessboard.
Display the largest number of squares that a knight can visit in a tour on an $m \times n$ chessboard that does not cross its path. If no such tour exists, display $0$.
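A note on the geometry (a clarification, not part of the official problem statement): taking square centers as integer lattice points, the non-crossing rule above means that two move segments may share a point only if they are consecutive moves meeting at their common endpoint; since squares may not be revisited, consecutive knight moves can never overlap along a line, so it suffices to check that no pair of non-consecutive segments shares any point. A minimal Python sketch of that exact integer test is shown below; the function names are illustrative only.

def orient(a, b, c):
    # Twice the signed area of triangle abc: >0 left turn, <0 right turn, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def on_segment(a, b, p):
    # Assumes p is collinear with a and b; checks that p lies inside the bounding box of ab.
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0])
            and min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_touch(p1, p2, q1, q2):
    # True if the closed segments p1p2 and q1q2 share at least one point.
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    if d1 * d2 < 0 and d3 * d4 < 0:
        return True  # proper crossing
    if d1 == 0 and on_segment(q1, q2, p1):
        return True
    if d2 == 0 and on_segment(q1, q2, p2):
        return True
    if d3 == 0 and on_segment(p1, p2, q1):
        return True
    if d4 == 0 and on_segment(p1, p2, q2):
        return True
    return False

A candidate tour is geometrically valid exactly when segments_touch returns False for every pair of non-consecutive segments, treating the closing move and the first move as consecutive.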
Sample Input 1
Sample Output 1
Problem ID: uncrossedknights
CPU Time limit: 1 second
Memory limit: 2048 MB
Difficulty: 6.7
Sample data files
Source: International Collegiate Programming Contest (ACM-ICPC) World Finals 2018
License: Restricted, used with permission | CommonCrawl |
May 2013, 7(2): 523-544. doi: 10.3934/ipi.2013.7.523
A three-dimensional inverse gravimetry problem for ice with snow caps
Victor Isakov 1, , Shingyu Leung 2, and Jianliang Qian 2,
Wichita State University, 1845 Fairmount, Wichita, KS 67260-0033
Department of Mathematics, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China, China
Received September 2012 Revised February 2013 Published May 2013
We propose a model for the gravitational field of a floating iceberg $D$ with snow on its top. The inverse problem of interest in geophysics is to find $D$ and snow thickness $g$ on its known (visible) top from remote measurements of derivatives of the gravitational potential. By modifying Novikov's orthogonality method, we prove uniqueness of recovering $D$ and $g$ for the inverse problem. We design and test two algorithms for finding $D$ and $g$. One is based on a standard regularized minimization of a misfit functional. The second one applies the level set method to our problem. Numerical examples validate the theory and demonstrate the effectiveness of the proposed algorithms.
Keywords: alternating minimization, inverse gravimetry, Green function, weighted essentially non-oscillatory schemes, level set method.
Mathematics Subject Classification: 65N21, 65N06, 65N8.
Citation: Victor Isakov, Shingyu Leung, Jianliang Qian. A three-dimensional inverse gravimetry problem for ice with snow caps. Inverse Problems & Imaging, 2013, 7 (2) : 523-544. doi: 10.3934/ipi.2013.7.523 | CommonCrawl
\begin{document}
\title{Poisson equation on Wasserstein space and diffusion approximations for McKean-Vlasov equation}
\date{}
\author{Yun Li, Fuke Wu and Longjie Xie}
\address{Yun Li:
Institute of Systems Science,
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, and School of Mathematics Sciences, University of Chinese Academy of Sciences, Beijing 100149, P.R.China\\
Email: [email protected] }
\address{Longjie Xie:
School of Mathematics and Statistics, Jiangsu Normal University,
Xuzhou, Jiangsu 221000, P.R.China\\
Email: [email protected] }
\address{Fuke Wu:
School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, P.R.China\\
Email: [email protected] }
\thanks{
This work is supported by NNSF of China (No. 12090011, 12071186, 11931004, 61873320). }
\begin{abstract} We consider the fully-coupled McKean-Vlasov equation with multi-time-scale potentials, and all the coefficients depend on the distributions of both the slow component and the fast motion. By studying the smoothness of the solution of the non-linear Poisson equation on Wasserstein space, we derive the asymptotic limit as well as the quantitative error estimate of the convergence for the slow process.
An extra homogenized drift term containing a derivative in the measure argument of the solution of the Poisson equation appears in the limit; this term seems to be new and is unique to systems involving the fast distribution.
\noindent {{\bf AMS 2010 Mathematics Subject Classification:} 60J60, 60F05, 35J60, 70K70. }
\noindent{{\bf Keywords:} Poisson equation; diffusion approximation; McKean-Vlasov equation; multi-scale processes.} \end{abstract}
\maketitle
\section{Introduction and main result}
For $d\geqslant 1$, let ${\mathscr P}_2({\mathbb R}^{d})$ be the space of all square integrable probability measures over ${\mathbb R}^d$ equipped with the Wasserstein metric, i.e., $$
{\mathcal W}_2(\mu_1,\mu_2):=\inf_{\pi\in{\mathcal P}(\mu_1,\mu_2)}\left(\int_{{\mathbb R}^d\times{\mathbb R}^d}|x-y|^2\pi({\mathord{{\rm d}}} x,{\mathord{{\rm d}}} y)\right)^{\frac{1}{2}},\quad \forall \mu_1,\mu_2\in {\mathscr P}_2({\mathbb R}^{d}), $$ where ${\mathcal P}(\mu_1,\mu_2)$ denotes the class of measures on ${\mathbb R}^d\times{\mathbb R}^d$ with marginal $\mu_1$ and $\mu_2$. Consider the following multi-scale McKean-Vlasov stochastic differential equations (SDEs for short) in ${\mathbb R}^{d_1}\times{\mathbb R}^{d_2}$: \begin{equation} \label{sde} \left\{ \begin{aligned} &{\mathord{{\rm d}}} X^{\varepsilon}_t =F(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t+\frac{1}{\varepsilon}H(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t\\ &\qquad\qquad\qquad\qquad\qquad\quad\qquad+G(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} W^1_t,\quad\,\,\,\, X^{\varepsilon}_0=\xi,\\ &{\mathord{{\rm d}}} Y^{\varepsilon}_t =\frac{1}{\varepsilon}c(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t+\frac{1}{\varepsilon^2}b({\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t\\ &\qquad\qquad\qquad\qquad\qquad\quad\qquad\,+\frac{1}{\varepsilon}\sigma({\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} W_t^2,\quad\qquad\!\! Y^{\varepsilon}_0=\eta, \end{aligned} \right. \end{equation} where $d_1, d_2\geqslant 1$, ${\mathcal L}_\vartheta$ denotes the law of a random variable $\vartheta$, the coefficients $F, H: {\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\to{\mathbb R}^{d_1}$, $G: {\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\to{\mathbb R}^{d_1}\otimes{\mathbb R}^{d_1}$, $c: {\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\to{\mathbb R}^{d_2}$, $b: {\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\to{\mathbb R}^{d_2}$ and $\sigma: {\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\to{\mathbb R}^{d_2}\otimes{\mathbb R}^{d_2}$ are measurable functions, $W^1_t$, $W^2_t$ are $d_1$, $d_2$-dimensional independent standard Brownian motions defined on some probability space $(\Omega,{\mathscr F},{\mathbb P})$, $\xi$, $\eta$ are $d_1$, $d_2$-dimensional random variables with finite moments, respectively, and the small parameter $0<\varepsilon\ll 1$ represents the separation of time scales between the slow component $X_t^\varepsilon$ (often being called the driving process) and the fast motion $Y_t^\varepsilon$ (also being called the driven process).
The McKean-Vlasov equation, also known as the distribution dependent SDE or the mean-field SDE, describes the limiting behaviour of an individual particle evolving within a system of particles interacting through their empirical measure, as the size of the population grows to infinity (the so-called propagation of chaos). The pioneering work on such systems was initiated by Kac \cite{K} in kinetic theory and McKean \cite{M} in the study of non-linear partial differential equations (PDEs for short). So far, McKean-Vlasov SDEs have been investigated in various aspects such as well-posedness, connection with non-linear Fokker-Planck equations, large deviations and numerical approximation, etc.; we refer the readers to \cite{BR,BRR,CGT,CD,CC2,CC,CGPS,CCD,CF,CM,DEGS,EGZ,FG,GP,GS,HW,K2,MV} among others. Meanwhile, the presence of multiple scales arises naturally in many applications ranging from climate modeling to chemical physics, and has been a central topic of study in science and engineering (see \cite{PS}). In particular, multiple scales can lead to hysteresis loops in the bifurcation diagram and induce phase transitions of certain McKean-Vlasov equations, as studied in \cite{CGPS,DGPS,DP,GP}. Existing averaging results for slow-fast McKean-Vlasov SDEs can be found in \cite{BS2,BS,BS3,HLL2,RSX}. However, the coefficients of the systems considered in all the previous works are not allowed to depend on the distribution of the fast motion. Of particular relevance to us is the system of weakly interacting diffusions in a two-scale potential relying on the fast empirical measure considered in \cite{GGP}, where the combined mean-field and diffusive limit was investigated.
Our aim in this paper is to derive rigorously the homogenized limit of the non-linear system (\ref{sde}) as $\varepsilon\to0$. One novelty of the McKean-Vlasov equation (\ref{sde}) is that even the slow component $X_t^\varepsilon$ contains a fast varying term. This is known to be closely related to the homogenization of non-linear PDEs (see e.g. \cite{CC2,CCKW,DGO,HP}). In particular, when $F\equiv G\equiv0$ and $H(x,\mu,y,\nu)=y$, system (\ref{sde}) reduces to the overdamped non-linear kinetic equation (see e.g. \cite{CC2,J} and the references therein). More importantly, all the coefficients in system (\ref{sde}) rely on the distribution of the fast motion $Y_t^\varepsilon$, which makes the situation more complicated. Such a feature captures the system explored in \cite{GGP}, and identifying its asymptotic limit serves to give insight into the nature of phase transitions for some McKean-Vlasov systems as originally studied in \cite{D}. The key difficulties caused by these features are that, compared with the previous results (e.g. \cite{BS2,BS,BS3,CGPS,GP,HLL2,RSX}), we need to control fluctuations of functional central limit type, seek a totally non-linear SDE as the frozen equation, and handle the corresponding non-linear PDEs to serve as a corrector.
It turns out that the asymptotic limit for the system (\ref{sde}) will be given in terms of the solution of a non-linear Poisson equation. Namely, we need to consider the following Poisson equation on ${\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$: \begin{align}\label{po1} {\mathscr L}_0\Phi(x,\mu,y,\nu)=-H(x,\mu,y,\nu), \end{align} where $(x,\mu)\in{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})$ are regarded as parameters, and for a test function $\varphi$, the operator ${\mathscr L}_0$ is defined by \begin{align}\label{l0} {\mathscr L}_0\varphi(y,\nu)&:={\mathscr L}_0(\mu,y,\nu)\varphi(y,\nu)\nonumber\\ &:=\frac{1}{2} a(\mu,y,\nu)\partial^2_y\varphi(y,\nu)+b(\mu,y,\nu)\cdot\partial_y\varphi(y,\nu)\nonumber\\ &+\int_{{\mathbb R}^{d_2}}\Big[b(\mu,\tilde y,\nu)\cdot\partial_{\nu}\varphi(y,\nu)(\tilde y)+\frac{1}{2} a(\mu,\tilde y,\nu)\cdot\partial_{\tilde y}\big[\partial_{\nu}\varphi(y,\nu)(\tilde y)\big]\Big]\nu({\mathord{{\rm d}}} \tilde y), \end{align} with $a(\mu,y,\nu):=\sigma\sigma^*(\mu,y,\nu)$. Though the first part of the operator involving the usual derivatives in the space variable is quite standard, the integral part contains the Lions derivative (see \cite[Section 6]{C} or \cite{L}) of the test function with respect to the measure argument. Unlike previous works, equation (\ref{po1}) is totally non-linear. This is exactly due to the dependence on the distribution of the fast motion in system (\ref{sde}). In fact, the operator ${\mathscr L}_0$ can be viewed as the infinitesimal generator of the frozen process $Y_t^{\mu,\eta}$ obtained from $Y_t^\varepsilon$ in (\ref{sde}) by freezing the slow component at fixed $\mu\in{\mathscr P}_2({\mathbb R}^{d_1})$, i.e., $Y_t^{\mu,\eta}$ satisfies the following McKean-Vlasov equation: \begin{align}\label{sde1} {\mathord{{\rm d}}} Y_t^{\mu,\eta}=b(\mu,Y_t^{\mu,\eta},{\mathcal L}_{Y^{\mu,\eta}_t}){\mathord{{\rm d}}} t+\sigma(\mu,Y_t^{\mu,\eta},{\mathcal L}_{Y^{\mu,\eta}_t}){\mathord{{\rm d}}} W_t^2,\quad Y_0^{\mu,\eta}=\eta, \end{align} where $\mu\in{\mathscr P}_2({\mathbb R}^{d_1})$ is regarded as a parameter. Furthermore, note that the Poisson equation (\ref{po1}) is formulated on the whole space but not on compact sets (without boundary conditions). We shall show that a necessary condition for (\ref{po1}) to be well-posed is to assume that $H$ satisfies the following centering condition (see (\ref{cen3}) below): \begin{align}\label{cen} \int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y)=0, \end{align} where $\zeta^{\mu}({\mathord{{\rm d}}} y)$ is the unique invariant measure for the non-linear SDE (\ref{sde1}). We point out that we have frozen the $\nu$-measure variable of $H$ in (\ref{cen}) at the invariant measure $\zeta^\mu$. Under some mild assumptions, we prove that equation (\ref{po1}) has a unique solution and admits the probabilistic representation (see {\bf Theorem \ref{po}} below) $$ \Phi(x,\mu,y,\nu)={\mathbb E}\left(\int_0^\infty H(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}}){\mathord{{\rm d}}} t\right), $$ where $Y_t^{\mu,y,\nu}$ satisfies the following decoupled equation associated with the McKean-Vlasov SDE (\ref{sde1}): \begin{align}\label{sde2} Y_t^{\mu,y,\nu}=y+\int_0^tb\big(\mu,Y_s^{\mu,y,\nu},{\mathcal L}_{Y_s^{\mu,\eta}}\big){\mathord{{\rm d}}} s+\int_0^t\sigma\big(\mu,Y_s^{\mu,y,\nu},{\mathcal L}_{Y_s^{\mu,\eta}}\big){\mathord{{\rm d}}} W^2_s \end{align} with ${\mathcal L}_\eta=\nu$.
We remark that the equation (\ref{sde2}) is not a McKean-Vlasov type SDE since the law appearing in the coefficients is not ${\mathcal L}_{Y_t^{\mu,y,\nu}}$ but rather ${\mathcal L}_{Y_t^{\mu,\eta}}$, i.e., the law of the solution to the McKean-Vlasov SDE (\ref{sde1}). In fact, equation (\ref{sde2}) can be viewed as a time-inhomogeneous It\^o type SDE.
The smoothness of the solution $\Phi$ to equation (\ref{po1}) with respect to both the $y$ and $\nu$ variables as well as the parameters $x$ and $\mu$ is also investigated. In particular, regularity with respect to the measure $\mu$ is more difficult since $\mu$ appears not only in the function $H$ but also in the McKean-Vlasov process $Y_t^{\mu,\eta}$ and its decoupled process $Y_t^{\mu,y,\nu}$. In addition, the verification that the averaged coefficients are smooth usually constitutes a separate problem connected with the smoothness of the invariant measure $\zeta^\mu$ with respect to the parameter $\mu$ (see e.g. \cite{BSV}). Here, we provide explicit formulas for the derivatives in the $\mu$-variable of the solution $\Phi$ as well as of the averaged functions with respect to the invariant measure $\zeta^\mu$ (see Corollary \ref{avef}), which quantify the $\mu$-derivatives in terms of the $(y,\nu)$-derivatives. The Poisson equation with parameters on the whole space ${\mathbb R}^d$ involving the classical linear second order differential operator was studied in a series of papers by Pardoux and Veretennikov \cite{P-V,P-V2,P-V3} (see also \cite{RX1}) and has been shown to be a powerful tool in the theory of numerical approximation, stochastic averaging, homogenization and other functional limit theorems in probability (see e.g. \cite{APV,HMP,MST,PP}). Thus, our results for the non-linear Poisson equation (\ref{po1}) are of independent interest.
To formulate the asymptotic limit of system (\ref{sde}), let us define the averaged drift and diffusion coefficients by \begin{align} \bar F(x,\mu)&:=\int_{{\mathbb R}^{d_2}}F(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{bF}\\ \overline{GG^*}(x,\mu)&:=\int_{{\mathbb R}^{d_2}}GG^*(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y).\label{sigmaG} \end{align} Again, note that the $\nu$-variable of the coefficients in (\ref{bF}) and (\ref{sigmaG}) is frozen at the invariant measure $\zeta^\mu$. Several extra homogenized drift and diffusion coefficients will appear. We denote by \begin{align} \overline{H\cdot\partial_x\Phi}(x,\mu)&:=\int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^{\mu})\cdot\partial_x\Phi(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{bH}\\ \overline{c\cdot\partial_y\Phi}(x,\mu)&:=\int_{{\mathbb R}^{d_2}}c(x,\mu,y,\zeta^{\mu})\cdot\partial_y\Phi(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{bc1}\\ \overline{H\cdot\Phi}(x,\mu)&:=\int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^{\mu})\cdot\Phi(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{sigma} \end{align} and \begin{align} \overline{c\cdot\partial_\nu\Phi}(x,\mu,y)(\tilde x)&:=\int_{{\mathbb R}^{d_2}}c(\tilde x,\mu,\tilde y,\zeta^{\mu})\cdot \partial_\nu\Phi(x,\mu,y,\zeta^{\mu})(\tilde y)\zeta^{\mu}({\mathord{{\rm d}}} \tilde y),\label{bc2}\\ \overline{\overline{c\cdot\partial_\nu\Phi}}(x,\mu)(\tilde x)&:=\int_{{\mathbb R}^{d_2}}\overline{c\cdot\partial_\nu\Phi}(x,\mu,y)(\tilde x)\zeta^{\mu}({\mathord{{\rm d}}} y)\nonumber\\ &=\int_{{\mathbb R}^{d_2}}\int_{{\mathbb R}^{d_2}}c(\tilde x,\mu,\tilde y,\zeta^{\mu})\cdot \partial_\nu\Phi(x,\mu,y,\zeta^{\mu})(\tilde y)\zeta^{\mu}({\mathord{{\rm d}}} \tilde y)\zeta^{\mu}({\mathord{{\rm d}}} y).\label{bc3} \end{align} Then we shall show that, as $\varepsilon\to 0$, the slow component $X_t^\varepsilon$ will converge weakly to $\bar X_t$, which satisfies the following McKean-Vlasov equation: \begin{align}\label{ave} {\mathord{{\rm d}}} \bar X_t&=\bar F(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} t+\overline{H\cdot\partial_x\Phi}(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} t\nonumber\\ &\quad+\overline{c\cdot\partial_y\Phi}(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} t +\tilde{\mathbb E}\bigg(\overline{\overline{c\cdot\partial_\nu\Phi}}(\bar X_t,{\mathcal L}_{\bar X_t})(\tilde{\bar{X_t}})\bigg){\mathord{{\rm d}}} t\nonumber\\ &\quad+\sqrt{\overline{GG^*}+2\overline{H\cdot\Phi}}(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} W^1_t,\qquad \bar X_0=\xi, \end{align} where $\tilde{\bar{X_t}}$ is a copy of $\bar X_t$ defined on a copy $(\tilde\Omega,\tilde{\mathscr F}, \tilde{{\mathbb P}})$ of the original probability space $(\Omega,{\mathscr F}, {\mathbb P})$, and $\tilde{\mathbb E}$ is the expectation taken with respect to $\tilde{\mathbb P}$.
We remark that the expectation term involving derivative in the measure argument of the solution of the Poisson equation in (\ref{ave}) seems to be completely new, which is due to the effect of the fast distribution in the coefficients. Alternatively, this term can also be expressed as $$ \tilde{\mathbb E}\bigg(\overline{\overline{c\cdot\partial_\nu\Phi}}(\bar X_t,{\mathcal L}_{\bar X_t})(\tilde{\bar{X_t}})\bigg)=\overline{\overline{\overline{c\cdot\partial_\nu\Phi}}}(\bar X_t,{\mathcal L}_{\bar X_t}), $$ where $$ \overline{\overline{\overline{c\cdot\partial_\nu\Phi}}}(x,\mu):=\int_{{\mathbb R}^{d_1}}\overline{\overline{c\cdot\partial_\nu\Phi}}(x,\mu)(\tilde x)\mu({\mathord{{\rm d}}} \tilde x), $$ thus depending only on $\bar X_t$ and its distribution ${\mathcal L}_{\bar X_t}$.
To state our main result, we make the following assumption on the coefficients:
\noindent{\bf (H$^{\sigma,b}$):} there exist constants $c_2>c_1\geqslant 0$ such that for every $\mu\in{\mathscr P}_2({\mathbb R}^{d_1})$, $y_1, y_2\in{\mathbb R}^{d_2}$ and $\nu_1, \nu_2\in {\mathscr P}_2({\mathbb R}^{d_2})$, \begin{align*}
&\|\sigma(\mu,y_1,\nu_1)-\sigma(\mu,y_2,\nu_2)\|^2+2\<b(\mu,y_1,\nu_1)-b(\mu,y_2,\nu_2),y_1-y_2{\rangle}\\
&\leqslant c_1{\mathcal W}_2(\nu_1,\nu_2)^2-c_2|y_1-y_2|^2. \end{align*}
To keep the statement concise, the function spaces used in this paper are introduced in the Notations part at the end of this section. The following is the main result of this paper.
\begin{theorem}\label{main} Let $T>0$, {\bf (H$^{\sigma,b}$)} hold and $H$ satisfy the centering condition (\ref{cen}). Assume that $F,G,H,c\in\big(\textbf{C}_b^{4,(2,2),4,(2,2)}\cap {\mathcal C}_b^{4,6,(3,3)}\big)({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $b,\sigma\in \big(\textbf{C}_b^{(2,2),4,(2,2)}\cap {\mathcal C}_b^{4,6,(3,3)}\big)({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then for every $\varphi\in C_b^{(3,1)}({\mathscr P}_2({\mathbb R}^{d_1}))$ and $t\in[0,T]$, we have \begin{align*}
\big|\varphi({\mathcal L}_{X_t^\varepsilon})-\varphi({\mathcal L}_{\bar X_t})\big|\leqslant C_T\,\varepsilon, \end{align*} where $X_t^\varepsilon$ and $\bar X_t$ satisfy the McKean-Vlasov equation (\ref{sde}) and (\ref{ave}), respectively, and $C_T>0$ is a constant independent of $\varepsilon$. \end{theorem}
Interacting diffusions moving in a two-scale potential depending on the empirical measure of the process $X_t^\varepsilon$ were considered in \cite{GP}, and systems whose potential depends on the fast empirical measure $X_t^\varepsilon/\varepsilon$ were studied in \cite{GGP}. As a counterpart, we provide the following example.
\begin{example} Consider the following McKean-Vlasov type stochastic Langevin equations with two-time-scale potentials in ${\mathbb R}^d$: \begin{align}\label{ex2} {\mathord{{\rm d}}} X_t^\varepsilon=F(X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon}){\mathord{{\rm d}}} t+\frac{1}{\varepsilon}H(X_t^\varepsilon/\varepsilon,{\mathcal L}_{X_t^\varepsilon/\varepsilon}){\mathord{{\rm d}}} t+\sqrt{2\beta^{-1}}{\mathord{{\rm d}}} W_t, \end{align} where $\beta^{-1}:=\kappa_B T$ is the temperature ($\kappa_B$ denotes the Boltzmann constant and $T$ the absolute temperature). Different choices of the potential functions yield numerous models. Setting $$ Y_t^\varepsilon:=X_t^\varepsilon/\varepsilon, $$ we can rewrite the system (\ref{ex2}) as \begin{equation} \label{ex} \left\{ \begin{aligned} &{\mathord{{\rm d}}} X^{\varepsilon}_t =F(X^{\varepsilon}_t,{\mathcal L}_{X^{\varepsilon}_t}){\mathord{{\rm d}}} t+\frac{1}{\varepsilon}H(Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t +\sqrt{2\beta^{-1}}{\mathord{{\rm d}}} W_t,\\ &{\mathord{{\rm d}}} Y^{\varepsilon}_t =\frac{1}{\varepsilon}F(X^{\varepsilon}_t,{\mathcal L}_{X^{\varepsilon}_t}){\mathord{{\rm d}}} t+\frac{1}{\varepsilon^2}H(Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}){\mathord{{\rm d}}} t+\frac{\sqrt{2\beta^{-1}}}{\varepsilon}{\mathord{{\rm d}}} W_t. \end{aligned} \right. \end{equation} System (\ref{ex}) is a particular case of (\ref{sde}). Thus, according to Theorem \ref{main}, the homogenized limit for (\ref{ex2}) is given by \begin{align}\label{bex} {\mathord{{\rm d}}} \bar X_t=(1+c_1)\, F(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} t+ c_2\, {\mathbb E} F(\bar X_t,{\mathcal L}_{\bar X_t}){\mathord{{\rm d}}} t+\sqrt{2\beta^{-1}+2 c_3}\,{\mathord{{\rm d}}} W_t, \end{align} where \begin{align*} &c_1:=\int_{{\mathbb R}^d}\partial_y\Phi(y,\zeta)\zeta({\mathord{{\rm d}}} y),\quad c_2:=\int_{{\mathbb R}^{d}}\int_{{\mathbb R}^{d}} \partial_\nu\Phi(y,\zeta)(\tilde y)\zeta({\mathord{{\rm d}}} y)\zeta({\mathord{{\rm d}}} \tilde y),\\ &c_3:=\int_{{\mathbb R}^d}H(y,\zeta)\cdot\Phi(y,\zeta)\zeta({\mathord{{\rm d}}} y), \end{align*} and $\Phi(y,\nu)$ is the unique solution to the Poisson equation $$ {\mathscr L}_0(y,\nu)\Phi(y,\nu)=-H(y,\nu). $$ Studying the qualitative properties of the original system (\ref{ex2}) and its homogenized limit (\ref{bex}), such as the bifurcation diagram and phase transitions, would be an interesting problem, which we plan to carry out in future work. \end{example}
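For readers who wish to experiment numerically, the following minimal Python sketch discretizes a one-dimensional instance of (\ref{ex2}) by an Euler--Maruyama interacting-particle approximation, in which the laws ${\mathcal L}_{X_t^\varepsilon}$ and ${\mathcal L}_{X_t^\varepsilon/\varepsilon}$ are replaced by empirical measures. The specific choices of $F$ and $H$ (interaction through the empirical mean only) are illustrative placeholders and are not claimed to satisfy all assumptions of Theorem \ref{main}, in particular the centering condition (\ref{cen}).
\begin{verbatim}
import numpy as np

# Illustrative placeholders (not from the paper): F and H interact with the
# empirical measure only through its mean; they need not satisfy all the
# assumptions of the main theorem (e.g. the centering condition).
def F(x, mean_mu):
    return -(x - mean_mu)

def H(y, mean_nu):
    return np.sin(2.0 * np.pi * y) - mean_nu

def simulate(eps=0.05, beta=1.0, N=2000, T=1.0, dt=1e-4, seed=0):
    """Euler--Maruyama particle approximation of the 1d version of (ex2):
    dX = F(X, L_X) dt + (1/eps) H(X/eps, L_{X/eps}) dt + sqrt(2/beta) dW."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(N)                  # slow particles X^{i,eps}
    for _ in range(int(round(T / dt))):
        Y = X / eps                             # fast particles Y^{i,eps} = X^{i,eps}/eps
        mean_mu, mean_nu = X.mean(), Y.mean()   # empirical means replacing the laws
        dW = np.sqrt(dt) * rng.standard_normal(N)
        X = X + F(X, mean_mu) * dt + H(Y, mean_nu) * dt / eps \
            + np.sqrt(2.0 / beta) * dW
    return X

if __name__ == "__main__":
    X_T = simulate()
    print("empirical mean/variance at time T:", X_T.mean(), X_T.var())
\end{verbatim}
One may compare the output for decreasing $\varepsilon$ with a direct Euler--Maruyama simulation of the limit equation (\ref{bex}), once approximations of the constants $c_1,c_2,c_3$ are available.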
The rest of this paper is organized as follows. Section 2 is devoted to the study of the non-linear Poisson equation on the whole space, where regularity properties of the solution are obtained. In Section 3 we prepare two fluctuation lemmas, and we then give the proof of Theorem \ref{main} in Section 4. Finally, an Appendix containing the proofs of some auxiliary lemmas is provided at the end of the paper.
\noindent{\bf Notations.} Throughout this paper, we use the following notations.
Given a function $f$ on ${\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$, we define \begin{align}\label{op1} {\mathscr L}_1f(x,\mu,y,\nu)&:={\mathscr L}_1(x,\mu,y,\nu)f(x,\mu,y,\nu)\nonumber\\ &:=F(x,\mu,y,\nu)\cdot\partial_xf(x,\mu,y,\nu)\nonumber\\ &\quad+\frac{1}{2}\mathrm{Tr}\big(GG^*(x,\mu,y,\nu)\cdot\partial^2_xf(x,\mu,y,\nu)\big),\\ {\mathscr L}_2f(x,\mu,y,\nu)&:={\mathscr L}_2(x,\mu,y,\nu)f(x,\mu,y,\nu):=H(x,\mu,y,\nu)\cdot\partial_xf(x,\mu,y,\nu),\label{op2} \end{align} and \begin{align} {\mathscr L}_3f(x,\mu,y,\nu)&:={\mathscr L}_3(x,\mu,y,\nu)f(x,\mu,y,\nu):=c(x,\mu,y,\nu)\cdot\partial_yf(x,\mu,y,\nu).\label{op3} \end{align} We shall write $$ {\langle}\phi,\nu{\rangle}:=\int_{{\mathbb R}^{d_2}}\phi(y)\nu({\mathord{{\rm d}}} y). $$
Let us briefly recall the derivatives with respect to the measure variable introduced by Lions. The idea is to consider the canonical lift of a real-valued function $f: {\mathscr P}_2({\mathbb R}^d)\to{\mathbb R}$ into a function ${\mathbb F}: L^2(\Omega)\ni X\mapsto f({\mathcal L}_X)\in{\mathbb R}$. Using the Hilbert structure of the space $L^2(\Omega)$, the function $f$ is said to be differentiable at $\mu\in {\mathscr P}_2({\mathbb R}^d)$ if its canonical lift ${\mathbb F}$ is Fr\'echet differentiable at some point $X$ with ${\mathcal L}_X=\mu$. By the Riesz representation theorem, the Fr\'echet derivative $D{\mathbb F}(X)$, viewed as an element of $L^2(\Omega)$, can be represented by a function $\partial_\mu f(\mu)(\cdot): {\mathbb R}^d\to{\mathbb R}^d$ such that $$ D{\mathbb F}(X)=\partial_\mu f({\mathcal L}_X)(X). $$ The function $\partial_\mu f(\mu)(x)$ is then called the Lions derivative ($L$-derivative for short) of $f$ at $\mu$. Similarly, we can define the higher order derivatives of $f$ at $\mu$.
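To fix ideas, we recall two standard examples (stated here only for illustration). If $f(\mu)=\int_{{\mathbb R}^d}\phi(x)\mu({\mathord{{\rm d}}} x)$ with $\phi\in C_b^1({\mathbb R}^d)$, then the lift is ${\mathbb F}(X)={\mathbb E}\phi(X)$ and $$ \partial_\mu f(\mu)(x)=\nabla\phi(x); $$ if $f(\mu)=\big(\int_{{\mathbb R}^d}\phi(x)\mu({\mathord{{\rm d}}} x)\big)^2$, then $$ \partial_\mu f(\mu)(x)=2\Big(\int_{{\mathbb R}^d}\phi\,{\mathord{{\rm d}}}\mu\Big)\nabla\phi(x). $$ In the first case the $L$-derivative does not depend on $\mu$, while in the second it does.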
Let $d,d_1,d_2\geqslant 1$, $T>0$ and $k,\ell,m\in{\mathbb N}=\{0,1,2,\cdots\}$. We introduce the following spaces of functions.
\begin{itemize}
\item The space $C_b^k({\mathbb R}^{d})$. A function $f(y)$ is in $C_b^k({\mathbb R}^{d})$ if $f$ is $k$-times continuously differentiable and all its derivatives up to order $k$ are bounded.
\item The space $C_b^{(k,\ell)}({\mathscr P}_2({\mathbb R}^{d}))$. A function $f(\nu)$ is in $C_b^{(k,\ell)}({\mathscr P}_2({\mathbb R}^{d}))$ if the mapping $\nu\mapsto f(\nu)$ is $k$-times continuously $L$-differentiable, and we can find a version of $\partial^k_\nu f(\nu)(\tilde y_1,\cdots,\tilde y_k)$ such that for every $\nu\in{\mathscr P}_2({\mathbb R}^{d})$, the mapping $(\tilde y_1,\cdots,\tilde y_k)\mapsto \partial^k_\nu f(\nu)(\tilde y_1,\cdots,\tilde y_k)$ is in $C_b^\ell({\mathbb R}^{d}\times\cdots\times{\mathbb R}^d)$.
\item The space $C_b^{2k,(k,k)}({\mathbb R}^{d}\times{\mathscr P}_2({\mathbb R}^{d}))$. A function $f(y,\nu)$ is in $C_b^{2k,(k,k)}({\mathbb R}^{d}\times{\mathscr P}_2({\mathbb R}^{d}))$ if for any $\nu\in {\mathscr P}_2({\mathbb R}^{d})$, the mapping $y\mapsto f(y,\nu)$ is in $C_b^{2k}({\mathbb R}^{d})$, and for any $y\in {\mathbb R}^{d}$, the mapping $\nu\mapsto f(y,\nu)$ is in $C_b^{(k,k)}({\mathscr P}_2({\mathbb R}^{d}))$, and for every $1\leqslant i\leqslant k$, we can find a version of $\partial^i_\nu f(y,\nu)(\tilde y_1,\cdots,\tilde y_i)$ such that the mapping $(y,\tilde y_1,\cdots,\tilde y_i)\mapsto \partial^i_\nu f(y,\nu)(\tilde y_1,\cdots,\tilde y_i)$ is in $C_b^{2k-i}({\mathbb R}^d\times\cdots\times{\mathbb R}^d)$.
\item The space $C_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. A function $f(\mu,y,\nu)$ is in $C_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ if for any fixed $(y,\nu)\in {\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$, the mapping $\mu\mapsto f(\mu,y,\nu)$ is in $C_b^{(k,\ell)}({\mathscr P}_2({\mathbb R}^{d_1}))$, and for every $\mu\in{\mathscr P}_2({\mathbb R}^{d_1})$, the mapping $(y,\nu)\mapsto f(\mu,y,\nu)$ is in $C_b^{2k,(k,k)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\item The space $C_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. A function $f(x,\mu,y,\nu)$ is in $C_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ if for any $(\mu,y,\nu)$, the mapping $x\mapsto f(x,\mu,y,\nu)$ is in $C_b^m({\mathbb R}^{d_1})$, and for fixed $x\in {\mathbb R}^{d_1}$, the mapping $(\mu,y,\nu)\mapsto f(x,\mu,y,\nu)$ is in $C_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\item The space ${\mathbb C}_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. A function $f(\mu,y,\nu)$ is in ${\mathbb C}_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ if $f\in C_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, and we can find a version of $\partial^k_\mu f(\mu,y,\nu)(\tilde x_1,\cdots,\tilde x_k)$ such that the mapping $(y,\nu)\mapsto\partial^\ell_{(\tilde x_1,\cdots,\tilde x_k)}\partial^k_\mu f(\mu,y,\nu)(\tilde x_1,\cdots,\tilde x_k)$ is in $C_b^{2k,(k,k)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\item The space ${\mathbb C}_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. A function $f(x,\mu,y,\nu)$ is in ${\mathbb C}_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ if $f\in C_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, and for every $x\in{\mathbb R}^{d_1}$, the mapping $(\mu,y,\nu)\mapsto f(x,\mu,y,\nu)$ is in ${\mathbb C}_b^{(k,\ell),2k,(k,k)}
({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, and the mapping $(\mu,y,\nu)\mapsto\partial_x^m f(x,\mu,y,\nu)$ is in ${\mathbb C}_b^{(k,\ell),2k,(k,k)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\item The space $C_b^{1,m,(k,\ell),2k,(k,k)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. A function $f(t,x,\mu,y,\nu)$ is in $C_b^{1,m,(k,\ell),2k,(k,k)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ if for any $(x,\mu,y,\nu)$, the mapping $t\mapsto f(t,x,\mu,y,\nu)$ is in $C_b^1({\mathbb R}_+)$, and for fixed $t\in {\mathbb R}_+$, the mapping $(x,\mu,y,\nu)\mapsto f(t,x,\mu,y,\nu)$ is in $C_b^{m,(k,\ell),2k,(k,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Similarly, we define the space ${\mathbb C}_b^{1,m,(k,\ell),2k,(k,k)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\end{itemize}
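As a simple illustration of these definitions (given here only as an example and not used elsewhere), take $d=1$ and $$ f(y,\nu):=\sin(y)\int_{{\mathbb R}}\cos(\tilde y)\nu({\mathord{{\rm d}}}\tilde y),\qquad y\in{\mathbb R},\ \nu\in{\mathscr P}_2({\mathbb R}). $$ Then $f$ belongs to $C_b^{2k,(k,k)}({\mathbb R}\times{\mathscr P}_2({\mathbb R}))$ for every $k\geqslant 1$: all derivatives in $y$ are bounded, the first $L$-derivative is $\partial_\nu f(y,\nu)(\tilde y)=-\sin(y)\sin(\tilde y)$, which no longer depends on $\nu$, and hence all higher order $L$-derivatives vanish.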
We shall mainly use the above spaces with $k=1,2,3$ and $\ell=1,2$. For simplicity, we denote \begin{align*} &\textbf{C}_b^{4,(2,2),4,(2,2)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))\\ &:=\Big({\mathbb C}_b^{4,(2,2),2,(1,1)}\cap{\mathbb C}_b^{4,(1,1),4,(2,2)}\Big)({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})), \end{align*} and \begin{align*} &{\mathcal C}_b^{4,6,(3,3)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))\\ &:=\Big(C_b^{4,(1,3),6,(3,3)}\cap C_b^{4,(3,1),6,(3,3)}\Big)({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})). \end{align*} When a function $f(\mu,y,\nu)$ does not depend on the $x$-variable, we simply view it as $f(x,\mu,y,\nu)\equiv f(\mu,y,\nu)$.
\section{Poisson equations on Wasserstein space}
\subsection{McKean-Vlasov equations and associated elliptic equations} Let us first consider the following McKean-Vlasov equation ({\it without parameters}) in ${\mathbb R}^{d_2}$: \begin{align}\label{eqy} {\mathord{{\rm d}}} Y^\eta_t=b(Y^\eta_t,{\mathcal L}_{Y^\eta_t}){\mathord{{\rm d}}} t+\sigma(Y^\eta_t,{\mathcal L}_{Y^\eta_t}){\mathord{{\rm d}}} W_t,\quad Y_0^\eta=\eta\in L^2(\Omega). \end{align} With a slight abuse of notation, we use the same symbols $b$ and $\sigma$ to denote the drift and diffusion coefficients as in the parameterized equation (\ref{sde1}), and for every $\varphi\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times {\mathscr P}_2({\mathbb R}^{d_2}))$, we still use ${\mathscr L}_0$ to denote the operator \begin{align}\label{l00} {\mathscr L}_0\varphi(y,\nu)&:={\mathscr L}_0(y,\nu)\varphi(y,\nu)\nonumber\\ &:=\frac{1}{2} a(y,\nu)\partial^2_y\varphi(y,\nu)+b(y,\nu)\cdot\partial_y\varphi(y,\nu)\nonumber\\ &\quad+\int_{{\mathbb R}^{d_2}}\Big[b(\tilde y,\nu)\cdot\partial_{\nu}\varphi(y,\nu)(\tilde y)+\frac{1}{2} a(\tilde y,\nu)\cdot\partial_{\tilde y}\big[\partial_{\nu}\varphi(y,\nu)(\tilde y)\big]\Big]\nu({\mathord{{\rm d}}} \tilde y), \end{align} where $a(y,\nu):=\sigma\sigma^*(y,\nu)$. In this situation, the assumption {\bf ($H^{\sigma,b}$)} on the coefficients becomes
\noindent{\bf ($\hat H^{\sigma,b}$):} there exist constants $c_2>c_1\geqslant 0$ such that for every $y_1, y_2\in{\mathbb R}^{d_2}$ and $\nu_1, \nu_2\in {\mathscr P}_2({\mathbb R}^{d_2})$, \begin{align*}
\|\sigma(y_1,\nu_1)-\sigma(y_2,\nu_2)\|^2+2\<b(y_1,\nu_1)-b(y_2,\nu_2),y_1-y_2{\rangle}\leqslant c_1{\mathcal W}_2(\nu_1,\nu_2)^2-c_2|y_1-y_2|^2. \end{align*}
Let $Y_t^\eta$ be the unique strong solution of the equation (\ref{eqy}). It turns out that the law of $Y_t^{\eta}$ only depends on $\eta$ through its distribution ${\mathcal L}_\eta=\nu$. Thus, given a measure $\nu\in{\mathscr P}_2({\mathbb R}^{d_2})$, it makes sense to consider ${\mathcal L}_{Y_t^{\eta}}$ as a function of $\nu$ without specifying the choice of the lifted random variable $\eta$, and we can define a (non-linear) semigroup $\{P_t^*\}_{t\geqslant 0}$ on ${\mathscr P}_2({\mathbb R}^{d_2})$ by letting $$ P_{t}^*\nu:={\mathcal L}_{Y_{t}^{\eta}}\quad {\text {with}}\quad {\mathcal L}_\eta=\nu. $$ We say that a probability measure $\zeta$ is an invariant measure of the McKean-Vlasov equation (\ref{eqy}) or the process $Y_t^\eta$ if $$ P_t^*\zeta=\zeta,\quad \forall t\geqslant 0. $$ Under the assumption {\bf ($\hat H^{\sigma,b}$)}, it is known that (see e.g. \cite[Theorem 3.1]{W1}) there exists a unique invariant measure $\zeta$ for the equation (\ref{eqy}). Moreover, there exist constants $C_0, \lambda_0>0$ such that for every $\nu\in{\mathscr P}_2({\mathbb R}^{d_2})$, \begin{align}\label{exp}
\|P_t^*\nu-\zeta\|_{TV}+{\mathcal W}_2(P_t^*\nu,\zeta)\leqslant C_0\,{\mathrm{e}}^{-\lambda_0t}\,{\mathcal W}_2(\nu,\zeta). \end{align}
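To illustrate the exponential ergodicity (\ref{exp}) numerically, the following minimal Python sketch approximates $P_t^*\nu$ by the empirical measure of an interacting particle system. The concrete choice $d_2=1$, $\sigma\equiv\sqrt 2$ and $b(y,\nu)=-y+\kappa\int_{\mathbb R}\tilde y\,\nu({\mathord{{\rm d}}}\tilde y)$ with $\kappa\in(0,1)$ is an illustrative example only; a short computation shows that it satisfies {\bf ($\hat H^{\sigma,b}$)} with $c_1=\kappa$ and $c_2=2-\kappa$. The long-run empirical law is used as a proxy for $\zeta$.
\begin{verbatim}
import numpy as np

def drift(Y, kappa=0.5):
    # b(y, nu) = -y + kappa * mean(nu), with nu replaced by the empirical measure
    return -Y + kappa * Y.mean()

def simulate_particles(N=5000, T=10.0, dt=1e-3, y0=3.0, seed=1):
    """Euler--Maruyama particle approximation of (eqy) started from nu = delta_{y0};
    returns the particle cloud at integer times t = 1, ..., T."""
    rng = np.random.default_rng(seed)
    Y = np.full(N, float(y0))
    steps_per_unit = int(round(1.0 / dt))
    snapshots = {}
    for k in range(1, int(round(T / dt)) + 1):
        Y = Y + drift(Y) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(N)
        if k % steps_per_unit == 0:
            snapshots[k // steps_per_unit] = Y.copy()
    return snapshots

def w2_1d(a, b):
    # Wasserstein-2 distance between two 1d empirical measures of equal size
    return np.sqrt(np.mean((np.sort(a) - np.sort(b)) ** 2))

if __name__ == "__main__":
    snaps = simulate_particles()
    zeta_proxy = snaps[10]                      # long-run empirical law as a proxy for zeta
    for t in range(1, 6):
        print(t, w2_1d(snaps[t], zeta_proxy))   # decays roughly like exp(-lambda_0 t)
\end{verbatim}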
We then introduce, for fixed $y\in{\mathbb R}^{d_2}$, the following decoupled equation of the McKean-Vlasov equation (\ref{eqy}): \begin{align}\label{eqyy} {\mathord{{\rm d}}} Y_t^{y,\nu}=b(Y_t^{y,\nu},{\mathcal L}_{Y_t^{\eta}}){\mathord{{\rm d}}} t+\sigma(Y_t^{y,\nu},{\mathcal L}_{Y_t^{\eta}}){\mathord{{\rm d}}} W_t,\quad Y_0^{y,\nu}=y\in{\mathbb R}^{d_2}, \end{align} where ${\mathcal L}_\eta=\nu$. Given a function $f(y,\nu)$ on ${\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$, let us define \begin{align}\label{Ttf} {\mathcal T}_tf(y,\nu):={\mathbb E} f(Y_t^{y,\nu},{\mathcal L}_{Y_t^{\eta}}),\quad \forall t>0, y\in {\mathbb R}^{d_2}, \nu\in{\mathscr P}_2({\mathbb R}^{d_2}). \end{align} We have the following result.
\begin{theorem}\label{nonpde} Assume that $\sigma, b, f\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then for every $T>0$, the function ${\mathcal T}_tf(y,\nu)$ is the unique solution in $C_b^{1,2,(1,1)}([0,T]\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ of the non-linear PDE \begin{equation} \label{kez} \left\{ \begin{aligned} &\partial_t U(t,y,\nu)-{\mathscr L}_0(y,\nu)U(t,y,\nu)=0,\quad \forall(t,y,\nu)\in(0,T]\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}),\\ & U(0,y,\nu)=f(y,\nu),\\ \end{aligned} \right. \end{equation} where ${\mathscr L}_0(y,\nu)$ is defined by (\ref{l00}). If we further assume that {\bf ($\hat H^{\sigma,b}$)} holds, then we have that for every $y\in{\mathbb R}^{d_2}$ and $\nu\in{\mathscr P}_2({\mathbb R}^{d_2})$, \begin{align}\label{expy}
{\mathcal W}_2({\mathcal L}_{Y_t^{y,\nu}},\zeta)\leqslant C_0(1+|y|+{\mathcal W}_2(\nu,\delta_0)){\mathrm{e}}^{-\lambda_0 t}, \end{align} where $C_0, \lambda_0>0$ are positive constants independent of $t$.
In particular, $\zeta$ is also the unique invariant measure for the decoupled equation (\ref{eqyy}), and we have \begin{align}\label{expty}
|{\mathcal T}_tf(y,\nu)-\<f(\cdot,\zeta),\zeta{\rangle}|\leqslant C_0(1+|y|+{\mathcal W}_2(\nu,\delta_0)){\mathrm{e}}^{-\lambda_0 t}. \end{align}
\end{theorem}
\begin{proof} The assertion that ${\mathcal T}_tf(y,\nu)$ defined by (\ref{Ttf}) is in $C_b^{1,2,(1,1)}([0,T]\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and is the unique solution of the equation (\ref{kez}) follows by \cite[Theorem 7.2]{BLPR}. To prove (\ref{expy}), by It\^o's formula we have \begin{align*}
{\mathord{{\rm d}}} {\mathbb E}|Y_t^\eta-Y_t^{y,\nu}|^2&={\mathbb E}\big[2\<Y_t^\eta-Y_t^{y,\nu}, b(Y_t^{\eta},{\mathcal L}_{Y^{\eta}_t})-b(Y_t^{y,\nu},{\mathcal L}_{Y^{\eta}_t}){\rangle}\\
&\quad+\|\sigma(Y_t^{\eta},{\mathcal L}_{Y^{\eta}_t})-\sigma(Y_t^{y,\nu},{\mathcal L}_{Y^{\eta}_t})\|^2\big]{\mathord{{\rm d}}} t\\
&\leqslant -c_2{\mathbb E}|Y_t^\eta-Y_t^{y,\nu}|^2{\mathord{{\rm d}}} t. \end{align*} By the comparison theorem we get $$
{\mathcal W}_2({\mathcal L}_{Y_t^{y,\nu}},{\mathcal L}_{Y_t^\eta})\leqslant C_0(1+|y|+{\mathcal W}_2(\nu,\delta_0)){\mathrm{e}}^{-\lambda_0 t}. $$ This together with estimate (\ref{exp}) implies (\ref{expy}). Moreover, we deduce that \begin{align*} {\mathcal T}_tf(y,\nu)-\<f(\cdot,\zeta),\zeta{\rangle}&={\mathbb E} f(Y_t^{y,\nu},{\mathcal L}_{Y_t^{\eta}})-\<f(\cdot,\zeta),\zeta{\rangle}\\ &={\mathbb E} f(Y_t^{y,\nu},{\mathcal L}_{Y_t^{\eta}})-{\mathbb E} f(Y_t^{y,\nu},\zeta)+{\mathbb E} f(Y_t^{y,\nu},\zeta)-\<f(\cdot,\zeta),\zeta{\rangle}\\ &=:I_1+I_2. \end{align*} For the first term, we have by (\ref{exp}) that $$
|I_1|\leqslant {\mathcal W}_2({\mathcal L}_{Y_t^{\eta}},\zeta)\leqslant C_0\,{\mathrm{e}}^{-\lambda_0 t}{\mathcal W}_2(\nu,\zeta). $$ For the second term, we have by (\ref{expy}) that \begin{align*}
|I_2|\leqslant C_0{\mathcal W}_2({\mathcal L}_{Y_t^{y,\nu}},\zeta)\leqslant C_0{\mathrm{e}}^{-\lambda_0 t}(1+|y|+{\mathcal W}_2(\nu,\delta_0)). \end{align*} The proof is finished. \end{proof}
Fix $\lambda>0$ below. Consider the following non-linear elliptic equation: \begin{align}\label{ellip} \lambda U_\lambda(y,\nu)-{\mathscr L}_0(y,\nu)U_\lambda(y,\nu)=f(y,\nu), \end{align} where ${\mathscr L}_0(y,\nu)$ is defined by (\ref{l00}). We have:
\begin{theorem}\label{22} Assume that $\sigma, b, f\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then the function \begin{align}\label{u1} U_\lambda(y,\nu):=\int_0^\infty{\mathrm{e}}^{-\lambda t}{\mathcal T}_tf(y,\nu){\mathord{{\rm d}}} t \end{align} is the unique solution in $C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ to the equation (\ref{ellip}). \end{theorem} \begin{proof} The assertion that $U_\lambda(y,\nu)$ defined by $(\ref{u1})$ is in $C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and solves equation (\ref{ellip}) follows by Theorem \ref{nonpde} and the integration by parts formula. In fact, we have \begin{align*} {\mathscr L}_0U_\lambda(y,\nu)&=\int_0^\infty{\mathrm{e}}^{-\lambda t}{\mathscr L}_0{\mathcal T}_tf(y,\nu){\mathord{{\rm d}}} t\\
&=\int_0^\infty{\mathrm{e}}^{-\lambda t}\partial_t{\mathcal T}_tf(y,\nu){\mathord{{\rm d}}} t={\mathrm{e}}^{-\lambda t}{\mathcal T}_tf(y,\nu)\Big|_{0}^{\infty}+\lambda\int_0^\infty{\mathrm{e}}^{-\lambda t}{\mathcal T}_tf(y,\nu){\mathord{{\rm d}}} t\\ &=-f(y,\nu)+\lambda U_\lambda(y,\nu), \end{align*} which implies the desired result. To prove the uniqueness, let $U_\lambda(y,\nu)\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ be a solution to equation (\ref{ellip}). Then by It\^o's formula (see e.g. \cite[Proposition 2.1]{CF}) we have \begin{align*} U_\lambda(Y_t^{y,\nu},{\mathcal L}_{Y_t^\eta})=U_\lambda(y,\nu)+\int_0^t{\mathscr L}_0U_\lambda(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} s+M_t, \end{align*} where $M_t$ is a martingale given by $$ M_t:=\int_0^t(\partial_yU_\lambda)(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta})\cdot\sigma(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} W_s. $$ By the product formula, we further have \begin{align*} {\mathrm{e}}^{-\lambda t}U_\lambda(Y_t^{y,\nu},{\mathcal L}_{Y_t^\eta})&=U_\lambda(y,\nu)+\int_0^t{\mathrm{e}}^{-\lambda s}{\mathscr L}_0U_\lambda(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} s+\int_0^t{\mathrm{e}}^{-\lambda s}{\mathord{{\rm d}}} M_s\\ &\quad-\lambda\int_0^t{\mathrm{e}}^{-\lambda s}U_\lambda(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} s\\ &=U_\lambda(y,\nu)-\int_0^t{\mathrm{e}}^{-\lambda s}f(Y_s^{y,\nu},{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} s+\int_0^t{\mathrm{e}}^{-\lambda s}{\mathord{{\rm d}}} M_s, \end{align*} where in the last equality we have used (\ref{ellip}). Taking expectation from both sides and letting $t\to\infty$, we obtain $$ U_\lambda(y,\nu)={\mathbb E}\left(\int_0^\infty{\mathrm{e}}^{-\lambda s}f(Y_s^{y,\nu},{\mathcal L}_{Y_s^{\eta}}){\mathord{{\rm d}}} s\right), $$ which implies the desired result. \end{proof}
Now, we consider the following non-linear Poisson equation in the whole space ${\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$: \begin{align}\label{ellip2} {\mathscr L}_0(y,\nu)U(y,\nu)=-f(y,\nu). \end{align} This equation can be viewed as the limit $\lambda\to0$ of equation (\ref{ellip}). However, in general the integral in (\ref{u1}) is not well-defined for a general measurable function $f$ when $\lambda=0$. It turns out that a necessary condition for equation (\ref{ellip2}) to be solvable is that $f$ satisfies the following centering condition: \begin{align}\label{cen3} \int_{{\mathbb R}^{d_2}}f(y,\zeta)\zeta({\mathord{{\rm d}}} y)=0, \end{align} where $\zeta$ is the unique invariant measure of the McKean-Vlasov SDE (\ref{eqy}). In fact, if $U\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ satisfies the equation (\ref{ellip2}), then by It\^o's formula we can deduce that \begin{align*} U(Y_t^\eta,{\mathcal L}_{Y_t^\eta})=U(\eta,{\mathcal L}_{\eta})+\int_0^t{\mathscr L}_0 U(Y_s^\eta,{\mathcal L}_{Y_s^\eta}){\mathord{{\rm d}}} s +\int_0^t\partial_yU(Y_s^\eta,{\mathcal L}_{Y_s^\eta})\sigma(Y^\eta_s,{\mathcal L}_{Y^\eta_s}){\mathord{{\rm d}}} W_s, \end{align*} where ${\mathscr L}_0$ is defined by (\ref{l00}). Taking the initial distribution ${\mathcal L}_\eta$ to be the invariant measure $\zeta$ of the equation (\ref{eqy}), we have ${\mathcal L}_{Y_t^\eta}=\zeta$ for every $t>0$. Taking expectations on both sides of the above equality, we arrive at \begin{align*} \int_{{\mathbb R}^{d_2}}U(y,\zeta)\zeta({\mathord{{\rm d}}} y)=\int_{{\mathbb R}^{d_2}}U(y,\zeta)\zeta({\mathord{{\rm d}}} y)+\int_0^t\int_{{\mathbb R}^{d_2}}{\mathscr L}_0 U(y, \zeta)\zeta({\mathord{{\rm d}}} y){\mathord{{\rm d}}} s, \end{align*} which together with equation (\ref{ellip2}) implies (\ref{cen3}). Note that the $\nu$-variable of the function $f(y,\nu)$ in (\ref{cen3}) is fixed at the invariant measure $\zeta$.
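As a simple illustration (not used in the sequel), take $d_2=1$, $b(y,\nu)=-y$ and $\sigma\equiv\sqrt 2$; then (\ref{eqy}) is a classical Ornstein--Uhlenbeck equation, {\bf ($\hat H^{\sigma,b}$)} holds with $c_1=0$, $c_2=2$, and $\zeta$ is the standard Gaussian measure. The function $f(y,\nu)=\sin(y)$ satisfies (\ref{cen3}) by symmetry, whereas $f(y,\nu)=\cos(y)$ does not, since $\int_{\mathbb R}\cos(y)\zeta({\mathord{{\rm d}}} y)={\mathrm{e}}^{-1/2}\neq0$; subtracting this constant, $f(y,\nu)=\cos(y)-{\mathrm{e}}^{-1/2}$ again satisfies (\ref{cen3}).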
We have the following result.
\begin{theorem}\label{pot} Let {\bf ($\hat H^{\sigma,b}$)} hold and $f$ satisfy the centering condition (\ref{cen3}).
\noindent (i) If a function $U\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ satisfies the equation (\ref{ellip2}), then $U$ admits the probabilistic representation \begin{align}\label{u2} U(y,\nu)=\int_0^\infty{\mathcal T}_tf(y,\nu){\mathord{{\rm d}}} t, \end{align} where ${\mathcal T}_tf$ is defined by (\ref{Ttf}).
\noindent (ii) Assume that $\sigma, b, f\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then the function $U$ defined by (\ref{u2}) is the unique solution in $C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ to the non-linear Poisson equation (\ref{ellip2}), which also satisfies the centering condition.
\noindent (iii) If we further assume that $\sigma, b, f\in C_b^{2k,(k,k)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ with $k\geqslant 1$, then we have $U(y,\nu)\in C_b^{2k,(k,k)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$.
\end{theorem}
\begin{proof} We divide the proof into four steps.
\noindent (i) Let us first show that the integral in (\ref{u2}) is well-defined. In fact, since $f$ satisfies the centering condition (\ref{cen3}), we have by (\ref{expty}) that $$
|{\mathcal T}_tf(y,\nu)|\leqslant C_0(1+|y|+{\mathcal W}_2(\nu,\delta_0)){\mathrm{e}}^{-\lambda_0 t}. $$ The conclusion in (i) follows by It\^o's formula and the same argument as in the proof of Theorem \ref{22}. Taking the limit $\lambda\to0$ in (\ref{u1}) we obtain $$
\lim_{\lambda\to0}|U_\lambda(y,\nu)-U(y,\nu)|=0. $$ Meanwhile, by Fubini's theorem and the fact that $\zeta$ is also the unique invariant measure for $Y_t^{y,\nu}$ (see (\ref{expy})), we have \begin{align*} \int_{{\mathbb R}^{d_2}}U(y,\zeta)\zeta({\mathord{{\rm d}}} y)&=\int_{{\mathbb R}^{d_2}}\int_0^\infty{\mathbb E} f(Y_t^{y,\zeta},\zeta){\mathord{{\rm d}}} t\zeta({\mathord{{\rm d}}} y)\\ &=\int_0^\infty\int_{{\mathbb R}^{d_2}}{\mathbb E} f(Y_t^{y,\zeta},\zeta)\zeta({\mathord{{\rm d}}} y){\mathord{{\rm d}}} t=\int_0^\infty\int_{{\mathbb R}^{d_2}} f(y,\zeta)\zeta({\mathord{{\rm d}}} y){\mathord{{\rm d}}} t=0. \end{align*} Thus $U$ satisfies the centering condition (\ref{cen3}).
\noindent (ii) Next, let $U$ be defined by (\ref{u2}). We first consider the regularity of $U$ with respect to the $y$-variable. In view of (\ref{Ttf}) and (\ref{u2}), we deduce that \begin{align*} \partial_y U(y,\nu)=\int_0^\infty{\mathbb E}\left[\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\right]{\mathord{{\rm d}}} t, \end{align*} where $\partial_y Y^{y,\nu}_t$ satisfies \begin{align*} {\mathord{{\rm d}}} \partial_y Y^{y,\nu}_t=\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t{\mathord{{\rm d}}} t+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t {\mathord{{\rm d}}} W_t. \end{align*} According to Lemma \ref{Y1} in the Appendix, we have \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}})}{\mathbb E}\|\partial_y Y^{y,\nu}_t\|^2\leqslant C_{0}\,{\mathrm{e}}^{-c_2t}, \end{align*}
which together with the boundedness of $\|\partial_yf\|$ yields \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}})}\|\partial_y U(y,\nu)\|\leqslant C_{0}<\infty. \end{align*} Similarly, we have \begin{align*} \partial^2_y U(y,\nu)=\int_0^\infty{\mathbb E}\left[\partial^2_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y Y^{y,\nu}_t+\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_y Y^{y,\nu}_t\right]{\mathord{{\rm d}}} t, \end{align*} where $\partial^2_y Y^{y,\nu}_t$ satisfies \begin{align*} {\mathord{{\rm d}}} \partial^2_y Y^{y,\nu}_t&=\partial^2_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y Y^{y,\nu}_t{\mathord{{\rm d}}} t+\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_y Y^{y,\nu}_t{\mathord{{\rm d}}} t\\ &\quad+\partial^2_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y Y^{y,\nu}_t {\mathord{{\rm d}}} W_t+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_y Y^{y,\nu}_t {\mathord{{\rm d}}} W_t. \end{align*} Applying Lemma \ref{Y1} in the Appendix again we have \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}})}\|\partial^2_y U(y,\nu)\|\leqslant C_{0}<\infty. \end{align*} As a result, for every $\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}})$ we obtain that $U(\cdot,\nu)\in C_b^2({\mathbb R}^{d_2})$.
\noindent (iii) We proceed to study the regularity of $U$ with respect to the $\nu$-variable. By the definition of the $L$-derivative, we have (see e.g. \cite[Lemma 6.1]{BLPR}) \begin{align} \partial_\nu U(y,\nu)(\tilde y)&=\int_0^\infty{\mathbb E}\Big[\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big] \nonumber\\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]\Big]{\mathord{{\rm d}}} t,\label{nuU} \end{align} and thus \begin{align*} \partial_{\tilde y}\partial_\nu U(y,\nu)(\tilde y)&=\int_0^\infty{\mathbb E}\Big[\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\\ &\qquad\qquad+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial^2_y \tilde Y^{\tilde y,\nu}_t\big] \\ &\qquad\qquad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big] \\ &\qquad\qquad+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot\partial_{\tilde y} \tilde Z^{\tilde \eta}_t(\tilde y)\big]\Big]{\mathord{{\rm d}}} t, \end{align*}
where $Z^\eta_t(\tilde y):=\partial_\nu Y^{y,\nu}_t(\tilde y)|_{y=\eta}$ with $\partial_\nu Y^{y,\nu}_t(\tilde y)$ satisfying \begin{align*} {\mathord{{\rm d}}} \partial_\nu Y^{y,\nu}_t(\tilde y)&=\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y){\mathord{{\rm d}}} t+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} t+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y) {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t, \end{align*}
and $\partial_{\tilde y} Z^\eta_t(\tilde y):=\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)|_{y=\eta}$ with $\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)$ satisfying \begin{align*} {\mathord{{\rm d}}} \partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)&=\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y){\mathord{{\rm d}}} t+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial^2_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \partial_{\tilde y}\tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} t\\ &\quad+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y) {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial^2_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot \partial_{\tilde y}\tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t. \end{align*} By Lemma \ref{Y2} in the Appendix, we have \begin{align*}
&\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}, \end{align*} and \begin{align*}
&\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}, \end{align*} which in turn imply that \begin{align*}
{\mathbb E}\|Z^{\eta}_t(\tilde y)\|^2&={\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)|_{y=\eta}\|^2={\mathbb E}{\mathbb E}\big[\|\partial_\nu Y^{y,\nu}_t(\tilde y)|_{y=\eta}\|^2|{\mathcal F}_0\big]\\
&\leqslant\sup_{y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}, \end{align*} and \begin{align*}
{\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}. \end{align*} Thus we arrive at \begin{align*}
&\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}}}\Big(\|\partial_\nu U(y,\nu)(\tilde y)\|+\|\partial_{\tilde y}\partial_\nu U(y,\nu)(\tilde y)\|\Big)\leqslant C_{0}<\infty. \end{align*} As a result, for every fixed $y\in{\mathbb R}^{d_2}$ we obtain $U(y,\cdot)\in C_b^{(1,1)}({\mathscr P}_{2}({\mathbb R}^{d_{2}}))$.
Similarly, we have \begin{align*} &\partial_y\partial_\nu U(y,\nu)(\tilde y)=\partial_\nu\partial_y U(y,\nu)(\tilde y)\\ &=\int_0^\infty{\mathbb E}\Big[\partial^2_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)+\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu\partial_y Y^{y,\nu}_t(\tilde y)\\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]\\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot\partial_y Y^{y,\nu}_t \cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]\Big]{\mathord{{\rm d}}} t, \end{align*} where \begin{align*} {\mathord{{\rm d}}} \partial_\nu\partial_y Y^{y,\nu}_t(\tilde y)&=\partial^2_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\tilde y){\mathord{{\rm d}}} t+\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu\partial_y Y^{y,\nu}_t(\tilde y){\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y Y_t^{y,\nu}\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\partial_y Y_t^{y,\nu}\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} t\\ &\quad+\partial^2_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\tilde y) {\mathord{{\rm d}}} W_t\\ &\quad+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu\partial_y Y^{y,\nu}_t(\tilde y) {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y Y_t^{y,\nu}\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\partial_y Y_t^{y,\nu}\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t. \end{align*} By Lemma \ref{Y2} in the Appendix, we have \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|\partial_\nu\partial_y Y^{y,\nu}_t(\tilde y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}, \end{align*} which in turn implies \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}}}\|\partial_y\partial_\nu U(y,\nu)(\tilde y)\|\leqslant C_{0}<\infty. \end{align*} Combining the above results, we obtain $U\in C_b^{2,(1,1)}({\mathbb R}^{d_{2}}\times{\mathscr P}_{2}({\mathbb R}^{d_{2}}))$. The derivation that $U$ is the unique solution of equation (\ref{ellip2}) follows by It\^o's formula.
\noindent (iv) We proceed to prove the conclusions in (iii). The proof of $U(\cdot,\nu)\in C_b^{2k}({\mathbb R}^{d_2})$ when $\sigma, b, f\in C_b^{2k,(k,k)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ is entirely similar as in step (ii). Let us focus on the higher derivatives with respect to the $\nu$-variable. In view of (\ref{nuU}) and by the definition of the $L$-derivative, we have \begin{align*} &\partial^2_\nu U(y,\nu)(\tilde y,\breve y)\\ &=\int_0^\infty{\mathbb E}\Big[\partial^2_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y)+\partial_yf(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y)\\ &\qquad\quad+\breve{\mathbb E}\big[\partial_\nu\partial_y f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve y,\nu}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big] \\ &\qquad\quad+\breve{\mathbb E}\big[\partial_\nu\partial_y f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve\eta}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y)\big] \\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_\nu\partial_y \tilde Y^{\tilde y,\nu}_t(\breve y)\big] \\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big] \\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu \tilde Y^{y,\nu}_t(\breve y)\big] \\ &\qquad\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \breve Y^{\breve y,\nu}_t\big] \\ &\qquad\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve\eta}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\breve Z^{\breve\eta}_t(\breve y)\big]\\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\partial_y\partial_\nu\tilde Y^{\breve y,\nu}_t(\tilde y)\big]+\tilde{\mathbb E}\big[\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y,\breve y)\big]\\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big] \\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_y \tilde Y^{\breve y,\nu}_t\big] \\ &\qquad\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\tilde Z^{\tilde\eta}_t(\breve y)\big]\\ &\qquad\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve y,\nu}_t)\cdot \tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big] \\ &\qquad\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu 
f(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve\eta}_t)\cdot\tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y)\big]\Big]dt, \end{align*}
where $Z^\eta_t(\tilde y,\breve y ):=\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y)|_{y=\eta}$ with $\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y)$ satisfying \begin{align*} {\mathord{{\rm d}}}\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y)&=\partial^2_yb(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y){\mathord{{\rm d}}} t\\ &\quad+\partial_yb(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y){\mathord{{\rm d}}} t\\ &\quad+\partial^2_y\sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y){\mathord{{\rm d}}} W_t\\ &\quad+\partial_y\sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y){\mathord{{\rm d}}} W_t+{\mathscr E}_1(\tilde y, \breve y)+{\mathscr E}_2(\tilde y, \breve y), \end{align*} and ${\mathscr E}_1(\tilde y, \breve y)$ contains the drift part given by \begin{align*} {\mathscr E}_1(\tilde y, \breve y):\!&=\breve{\mathbb E}\big[\partial_\nu\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve y,\nu}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\breve{\mathbb E}\big[\partial_\nu\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve\eta}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y)\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_\nu\partial_y \tilde Y^{\tilde y,\nu}_t(\breve y)\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu \tilde Y^{y,\nu}_t(\breve y)\big] {\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \breve Y^{\breve y,\nu}_t\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve\eta}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\breve Z^{\breve\eta}_t(\breve y)\big]{\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\partial_y\partial_\nu\tilde Y^{\breve y,\nu}_t(\tilde y)\big]{\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y,\breve y)\big]{\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big] {\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_y \tilde Y^{\breve y,\nu}_t\big] {\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde 
y}\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\tilde Z^{\tilde\eta}_t(\breve y)\big]{\mathord{{\rm d}}} t \\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve y,\nu}_t)\cdot \tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big] {\mathord{{\rm d}}} t\\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve\eta}_t)\cdot\tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y){\mathord{{\rm d}}} t, \end{align*} and ${\mathscr E}_2(\tilde y, \breve y)$ involves the martingale terms given by \begin{align*} {\mathscr E}_2(\tilde y, \breve y):\!&=\breve{\mathbb E}\big[\partial_\nu\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve y,\nu}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\breve{\mathbb E}\big[\partial_\nu\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\breve Y^{\breve\eta}_t)\cdot\partial_\nu Y^{y,\nu}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y)\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_\nu\partial_y \tilde Y^{\tilde y,\nu}_t(\breve y)\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_\nu \tilde Y^{y,\nu}_t(\breve y)\big] {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\partial_y \breve Y^{\breve y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t,\breve Y^{\breve\eta}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\cdot\breve Z^{\breve\eta}_t(\breve y)\big]{\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\partial_y\partial_\nu\tilde Y^{\breve y,\nu}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y,\breve y)\big]{\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_{y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_\nu Y^{y,\nu}_t(\breve y)\big] {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\breve y,\nu}_t)\cdot\tilde Z^{\tilde\eta}_t(\tilde y)\cdot\partial_y \tilde Y^{\breve y,\nu}_t\big] {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\big[\partial_{\tilde y}\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t)\cdot\tilde 
Z^{\tilde\eta}_t(\tilde y)\cdot\tilde Z^{\tilde\eta}_t(\breve y)\big]{\mathord{{\rm d}}} W_t \\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve y,\nu}_t)\cdot \tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\partial_y \breve Y^{\breve y,\nu}_t\big] {\mathord{{\rm d}}} W_t\\ &\quad+\tilde{\mathbb E}\breve{\mathbb E}\big[\partial^2_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde\eta}_t,\breve Y^{\breve\eta}_t)\cdot\tilde Z^{ \tilde\eta}_t(\tilde y)\cdot\breve Z^{\breve\eta}_t(\breve y){\mathord{{\rm d}}} W_t. \end{align*} Applying Lemma \ref{Y2} in the Appendix, we have \begin{align*}
&\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}},\breve y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t},\\
&\sup_{\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}},\breve y\in{\mathbb R}^{d_{2}}}{\mathbb E}\|Z^{\eta}_t(\tilde y,\breve y)\|^2\leqslant C_{0}\,{\mathrm{e}}^{-(c_2-c_1-\gamma)t}, \end{align*} which in turn implies that \begin{align*}
\sup_{y\in{\mathbb R}^{d_{2}},\nu\in{\mathscr P}_{2}({\mathbb R}^{d_{2}}),\tilde y\in{\mathbb R}^{d_{2}},\breve y\in{\mathbb R}^{d_{2}}}\|\partial^2_\nu U(y,\nu)(\tilde y,\breve y)\|\leqslant C_{0}<\infty. \end{align*} The derivation that $(y,\tilde y,\breve y)\mapsto\partial^2_\nu U(y,\nu)(\tilde y,\breve y)$ is in $C_b^2({\mathbb R}^{d_2}\times{\mathbb R}^{d_2}\times{\mathbb R}^{d_2})$ and $(y,\tilde y)\mapsto\partial_\nu U(y,\nu)(\tilde y)$ is in $C_b^3({\mathbb R}^{d_2}\times{\mathbb R}^{d_2})$ follows by the same argument as in steps (ii) and (iii). Thus we obtain $U\in C_b^{4,(2,2)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ when $\sigma, b, f\in C_b^{4,(2,2)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. For general $k>2$, the proof follows by the same arguments; we omit the details here. \end{proof}
\subsection{Poisson equations with parameters}
Now, we consider the following parameterized Poisson equation on the whole space ${\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$: \begin{align}\label{pof} {\mathscr L}_0U(x,\mu,y,\nu)=-f(x,\mu,y,\nu), \end{align} where $(x,\mu)\in{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})$ are parameters, and the operator ${\mathscr L}_0$ is defined by (\ref{l0}). Recall that $\zeta^\mu$ is the unique invariant measure for the frozen McKean-Vlasov equation (\ref{sde1}). As in the previous subsection, we assume that $f$ satisfies the following centering condition: \begin{align}\label{cenf} \int_{{\mathbb R}^{d_2}}f(x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y)=0,\quad\forall (x,\mu)\in{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1}). \end{align} By regarding $(x,\mu)$ as parameters and according to Theorem \ref{pot}, the unique solution $U$ to equation (\ref{pof}) should satisfy the centering condition (\ref{cenf}) and admit the probabilistic representation \begin{align}\label{Pequ1} U(x,\mu,y,\nu)&={\mathbb E}\left(\int_0^\infty f\big(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}}\big){\mathord{{\rm d}}} t\right), \end{align} where $Y_t^{\mu,y,\nu}$ and $Y_t^{\mu,\eta}$ satisfy equation (\ref{sde1}) and (\ref{sde2}) with ${\mathcal L}_\eta=\nu$, respectively. The main problem addressed here is the smoothness of the solution $U$ with respect to the parameters $(x,\mu)$. We have the following result, which will play a central role in our analysis below.
\begin{theorem}\label{po} Let {\bf ($ H^{\sigma,b}$)} hold, let $\ell, j, k, m, n\in{\mathbb N}$, let the function $f$ satisfy the centering condition (\ref{cenf}), and let $U$ be defined by (\ref{Pequ1}).
(i) Assume that $a, b\in {\mathbb C}_b^{(n,k),2(m-n),(m-n,m-n)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $f\in {\mathbb C}_b^{j,(n,k),2(m-n),(m-n,m-n)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ with $0\leqslant n\leqslant \ell<m$. Then we have \begin{align}\label{U1} U\in {\mathbb C}_b^{j,(\ell,k),2(m-\ell),(m-\ell,m-\ell)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})). \end{align}
(ii) Assume further that $a, b\in C_b^{(m,k),2m,(m,m)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $f\in C_b^{j,(m,k),2m,(m,m)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then we have \begin{align}\label{U2} U\in C_b^{j,(m,k),2m,(m,m)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})). \end{align}
\end{theorem} \begin{proof}
We divide the proof into three steps.
\noindent (i) The regularity of $U$ with respect to the $(y,\nu)$-variables follows from Theorem \ref{pot}. Our task is to prove the regularity of $U$ with respect to $x$ and $\mu$.
The derivatives of $U$ with respect to the $x$-variable are easy to handle. Since $f$ satisfies the centering condition (\ref{cenf}), it is obvious that $\partial^j_xf$ satisfies (\ref{cenf}), too. Thus, we can differentiate both sides of (\ref{Pequ1}) directly to get that for any $j\in{\mathbb N}$, \begin{align}\label{Ux} \partial^j_xU(x,\mu,y,\nu)={\mathbb E}\left(\int_0^\infty \partial^j_xf\big(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}}\big){\mathord{{\rm d}}} t\right), \end{align} which in turn implies the desired conclusions for $U$ with respect to the $x$-variable.
(ii) The regularity of $U$ with respect to the $\mu$-variable is more delicate, since $\mu$ appears both in the process $Y_t^{\mu,y,\nu}$ and in the distribution ${\mathcal L}_{Y_t^{\mu,\eta}}$. Taking derivatives directly would involve complicated computations, so we shall use the equation itself. Note that when $m=1$, the conclusion in (\ref{U1}) is obvious; we proceed to prove (\ref{U2}) with $m=1$. Since $U$ is a classical solution to $$ {\mathscr L}_0(\mu,y,\nu)U(x,\mu,y,\nu)=-f(x,\mu,y,\nu), $$ we have for every $\mu\in {\mathscr P}_2({\mathbb R}^{d_1})$, $\phi\in L^2({\mathbb R}^{d_1})$ and $\rho>0$ that \begin{align*} &{\mathscr L}_0(\mu,y,\nu)\Big(\frac{U(x,\mu,y,\nu)-U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)}{\rho}\Big)\\ &=\frac{f(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)-f(x,\mu,y,\nu)}{\rho}\\ &\quad+\frac{1}{\rho}\Big({\mathscr L}_0(\mu\circ(I+\rho\phi)^{-1},y,\nu)-{\mathscr L}_0(\mu,y,\nu)\Big)U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)\\ &=:h_1^{\phi}(x,\mu,y,\nu)(\rho), \end{align*} where \begin{align*} &\Big({\mathscr L}_0(\mu\circ(I+\rho\phi)^{-1},y,\nu)-{\mathscr L}_0(\mu,y,\nu)\Big)U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)\\ &=\Big(b(\mu\circ(I+\rho\phi)^{-1},y,\nu)-b(\mu,y,\nu)\Big) \cdot\partial_yU(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)\\ &\quad+\frac{1}{2}\Big(a(\mu\circ(I+\rho\phi)^{-1},y,\nu)-a(\mu,y,\nu)\Big) \cdot\partial^2_yU(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)\\ &\quad+\int_{{\mathbb R}^{d_2}}\Big[\big(b(\mu\circ(I+\rho\phi)^{-1},\tilde y,\nu)-b\big(\mu,\tilde y,\nu)\big) \cdot\partial_{\nu}U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)(\tilde y)\\ &\quad+\frac{1}{2}\big(a(\mu\circ(I+\rho\phi)^{-1},\tilde y,\nu)-a(\mu,\tilde y,\nu)\big) \cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)(\tilde y)\big]\Big]\nu({\mathord{{\rm d}}} \tilde y). \end{align*} This implies that for every $\rho>0$, $h_1^{\phi}(x,\mu,y,\nu)(\rho)$ satisfies $$ \int_{{\mathbb R}^{d_2}}h_1^{\phi}(x,\mu,y,\zeta^\mu)(\rho)\zeta^\mu({\mathord{{\rm d}}} y)=0, $$ and by Theorem \ref{pot} (i) we have the representation \begin{align}\label{d1} &\frac{U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)-U(x,\mu,y,\nu)}{\rho}\nonumber\\ &={\mathbb E}\left(\int_0^\infty h_1^{\phi}(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\rho){\mathord{{\rm d}}} t\right). \end{align} Note that by the definition of the $L$-derivative, \begin{align*} &\lim_{\rho\to0}\frac{f(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)-f(x,\mu,y,\nu)}{\rho}=\int_{{\mathbb R}^{d_1}}\partial_\mu f(x,\mu, y,\nu)(\tilde x)\cdot \phi(\tilde x)\mu({\mathord{{\rm d}}} \tilde x),\\ &\lim_{\rho\to0}\frac{b(\mu\circ(I+\rho\phi)^{-1},y,\nu)-b(\mu,y,\nu)}{\rho}=\int_{{\mathbb R}^{d_1}}\partial_\mu b(\mu,y,\nu)(\tilde x)\cdot \phi(\tilde x)\mu({\mathord{{\rm d}}} \tilde x), \end{align*} and \begin{align*} \lim_{\rho\to0}\frac{a(\mu\circ(I+\rho\phi)^{-1},y,\nu)-a(\mu,y,\nu)}{\rho}=\int_{{\mathbb R}^{d_1}}\partial_\mu a(\mu, y,\nu)(\tilde x)\cdot \phi(\tilde x)\mu({\mathord{{\rm d}}} \tilde x).
\end{align*} By the continuity of $\partial_y^2 U$ and $\partial_{\tilde y}\partial_\nu U$ we have \begin{align*} &\lim_{\rho\to0}h_1^\phi(x,\mu,y,\nu)(\rho)\\ &=\int_{{\mathbb R}^{d_1}}\Big[\partial_\mu f(x,\mu,y,\nu)(\tilde x)+\partial_\mu b(\mu,y,\nu)(\tilde x)\cdot\partial_yU(x,\mu,y,\nu)\nonumber\\ &\qquad\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\nu)(\tilde x)\cdot\partial^2_yU(x,\mu,y,\nu)\nonumber\\ &\qquad\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\nonumber\\ &\qquad\qquad\qquad\quad+\frac{1}{2}\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\big]\Big]\nu({\mathord{{\rm d}}} \tilde y)\Big]\cdot\phi(\tilde x)\mu({\mathord{{\rm d}}}\tilde x)\nonumber\\ &=:\int_{{\mathbb R}^{d_1}}h_1(x,\mu,y,\nu)(\tilde x)\cdot\phi(\tilde x)\mu({\mathord{{\rm d}}}\tilde x). \end{align*} This further implies that $h_1$ satisfies the centering condition (\ref{cenf}), i.e., \begin{align}\label{h1c} \int_{{\mathbb R}^{d_2}}h_1(x,\mu,y,\zeta^\mu)(\tilde x)\zeta^\mu({\mathord{{\rm d}}} y)=0. \end{align} Thus, taking the limit $\rho\to0$ in (\ref{d1}), we obtain
\begin{align*} &\lim_{\rho\to0}\frac{U(x,\mu\circ(I+\rho\phi)^{-1},y,\nu)-U(x,\mu,y,\nu)}{\rho}\nonumber\\ &={\mathbb E}\left(\int_0^\infty\int_{{\mathbb R}^{d_1}}h_1(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\tilde x)\cdot\phi(\tilde x)\mu({\mathord{{\rm d}}}\tilde x){\mathord{{\rm d}}} t\right). \end{align*} As a result, $U(x,\mu,y,\nu)$ is $L$-differentiable at $\mu$ and by regarding $(x,\mu)$ as parameters, we arrive at $$ \partial_\mu U(x,\mu,y,\nu)(\tilde x)={\mathbb E}\left(\int_0^\infty h_1(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\tilde x){\mathord{{\rm d}}} t\right). $$ Meanwhile, note that for every $k\in{\mathbb N}$, $\partial^k_{\tilde x} h_1$ also satisfies the centering condition, thus we have \begin{align*} \partial^k_{\tilde x}\partial_\mu U(x,\mu,y,\nu)(\tilde x)={\mathbb E}\left(\int_0^\infty \partial^k_{\tilde x}h_1(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\tilde x){\mathord{{\rm d}}} t\right), \end{align*} where \begin{align*} \partial^k_{\tilde x}h_1(x,\mu,y,\nu)(\tilde x)&=\partial^k_{\tilde x}\partial_\mu f(x,\mu,y,\nu)(\tilde x)+\partial^k_{\tilde x}\partial_\mu b(\mu,y,\nu)(\tilde x)\cdot\partial_yU(x,\mu,y,\nu)\nonumber\\ &\quad+\frac{1}{2}\partial^k_{\tilde x}\partial_\mu a(\mu,y,\nu)(\tilde x)\cdot\partial^2_yU(x,\mu,y,\nu)\nonumber\\ &\quad+\int_{{\mathbb R}^{d_2}}\Big[\partial^k_{\tilde x}\partial_\mu b(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\nonumber\\ &\qquad\quad+\frac{1}{2}\partial^k_{\tilde x}\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\big]\Big]\nu({\mathord{{\rm d}}} \tilde y). \end{align*} Consequently, we have $U(x,\cdot,y,\nu)\in C_b^{(1,k)}({\mathscr P}_2({\mathbb R}^{d_1}))$.
(iii) Now we prove (\ref{U1}) and (\ref{U2}) with $m=2$. By the assumptions that $a,b\in {\mathbb C}_b^{(1,k),2,(1,1)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $f\in {\mathbb C}_b^{j,(1,k),2,(1,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, we have $$ \partial^k_{\tilde x}\partial_\mu a(\mu,\cdot,\cdot)(\tilde x), \partial^k_{\tilde x}\partial_\mu b(\mu,\cdot,\cdot)(\tilde x),\partial^k_{\tilde x}\partial_\mu f(x,\mu,\cdot,\cdot)(\tilde x)\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})).
$$
Combining this with the fact that $\partial_y^2U(x,\mu,\cdot,\cdot), \partial_{\tilde y}\partial_\nu U(x,\mu,\cdot,\cdot)(\tilde y)\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, one can check that for every fixed $x,\tilde x\in{\mathbb R}^{d_1}$ and $\mu\in{\mathscr P}_2({\mathbb R}^{d_1})$, we have $$ \partial^k_{\tilde x}h_1(x,\mu,\cdot,\cdot)(\tilde x)\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})). $$ As a result of Theorem \ref{pot}, we conclude that \begin{align*} {\mathscr L}_0(\mu,y,\nu)\partial_\mu U(x,\mu,y,\nu)(\tilde x)=-h_1(x,\mu,y,\nu)(\tilde x) \end{align*} and \begin{align*} {\mathscr L}_0(\mu,y,\nu)\partial^k_{\tilde x}\partial_\mu U(x,\mu,y,\nu)(\tilde x)=-\partial^k_{\tilde x}h_1(x,\mu,y,\nu)(\tilde x), \end{align*} which in turn yields that $\partial^k_{\tilde x}\partial_\mu U(x,\mu,\cdot,\cdot)(\tilde x)\in C_b^{2,(1,1)}({\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Consequently, we get $U(x,\cdot,\cdot,\cdot)\in{\mathbb C}_b^{(1,k),2,(1,1)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. In view of (\ref{Ux}), by $\partial_x^j f(x,\cdot,\cdot,\cdot)\in {\mathbb C}_b^{(1,k),2,(1,1)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and Theorem \ref{pot}, we obtain \begin{align*} {\mathscr L}_0(\mu,y,\nu)\partial_x^j U(x,\mu,y,\nu)=-\partial_x^jf(x,\mu,y,\nu), \end{align*} which in turn implies that $\partial_x^j U(x,\cdot,\cdot,\cdot)\in {\mathbb C}_b^{(1,k),2,(1,1)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Thus we have
$$
U\in {\mathbb C}_b^{j,(1,k),2,(1,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})).
$$ It remains to treat the second-order derivatives of $U(x,\mu,y,\nu)$ with respect to the $\mu$-variable. By the same argument as above, we have $$ \partial^2_\mu U(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x})={\mathbb E}\left(\int_0^\infty h_2(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\tilde x,\tilde{\tilde x}){\mathord{{\rm d}}} t\right), $$ where $h_2$ satisfies the centering condition (\ref{cenf}) and is given by \begin{align*} &h_2(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x}):=\partial^2_\mu f(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x})+\partial^2_\mu b(\mu,y,\nu)(\tilde x,\tilde{\tilde x})\cdot\partial_y U(x,\mu,y,\nu)\\ &\quad+\partial_\mu b(\mu,y,\nu)(\tilde x)\cdot\partial_\mu\partial_y U(x,\mu,y,\nu)(\tilde{\tilde x})+\partial_\mu b(\mu,y,\nu)(\tilde{\tilde x})\cdot\partial_y [\partial_\mu U(x,\mu,y,\nu)(\tilde x)]\\ &\quad+\frac{1}{2}\partial_\mu a(\mu,y,\nu)(\tilde{\tilde x})\cdot\partial^2_y[\partial_\mu U(x,\mu,y,\nu)(\tilde x)]\\ &\quad+\frac{1}{2}\partial^2_\mu a(\mu,y,\nu)(\tilde x,\tilde{\tilde x})\cdot\partial^2_y U(x,\mu,y,\nu)+\frac{1}{2}\partial_\mu a(\mu,y,\nu)(\tilde x)\cdot\partial_\mu\partial^2_y U(x,\mu,y,\nu)(\tilde{\tilde x})\\ &\quad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\nu)(\tilde{\tilde x})\cdot\partial_{\nu}[\partial_\mu U(x,\mu,y,\nu)(\tilde x)](\tilde y)\\ &\qquad\qquad+\partial^2_\mu b(\mu,\tilde y,\nu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\\ &\qquad\qquad+\partial_\mu b(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\mu}[\partial_\nu U(x,\mu,y,\nu)(\tilde y)](\tilde{\tilde x})\Big]\nu({\mathord{{\rm d}}} \tilde y)\\ &\quad+\frac{1}{2}\int_{{\mathbb R}^{d_2}}\!\!\Big[\partial_\mu a(\mu,\tilde y,\nu)(\tilde{\tilde x})\cdot\partial_{\tilde y}\big[\partial_{\nu}[\partial_\mu U(x,\mu,y,\nu)(\tilde x)](\tilde y)\big]\\ &\qquad\qquad\quad+\partial^2_\mu a(\mu,\tilde y,\nu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\tilde y}[\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)]\\ &\qquad\qquad\quad+\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\mu}\big[\partial_{\tilde y}[\partial_\nu U(x,\mu,y,\nu)(\tilde y)](\tilde{\tilde x})\big]\Big]\nu({\mathord{{\rm d}}} \tilde y). 
\end{align*} Consequently, we deduce that $$ \partial_{\tilde{\tilde x}}[\partial^2_\mu U(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x})]={\mathbb E}\left(\int_0^\infty \partial_{\tilde{\tilde x}}h_2(x,\mu,Y_t^{\mu,y,\nu},{\mathcal L}_{Y_t^{\mu,\eta}})(\tilde x,\tilde{\tilde x}){\mathord{{\rm d}}} t\right), $$ where \begin{align*} \partial_{\tilde{\tilde x}}h_2(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x})&=\partial_{\tilde{\tilde x}}[\partial^2_\mu f(x,\mu,y,\nu)(\tilde x,\tilde{\tilde x})]+\partial_{\tilde{\tilde x}}[\partial^2_\mu b(\mu,y,\nu)(\tilde x,\tilde{\tilde x})]\cdot\partial_y U(x,\mu,y,\nu)\\ &\quad+\partial_\mu b(\mu,y,\nu)(\tilde x)\cdot\partial_{\tilde{\tilde x}}[\partial_\mu\partial_y U(x,\mu,y,\nu)(\tilde{\tilde x})]\\ &\quad+\partial_{\tilde{\tilde x}}[\partial_\mu b(\mu,y,\nu)(\tilde{\tilde x})]\cdot\partial_y [\partial_\mu U(x,\mu,y,\nu)(\tilde x)]\\ &\quad+\frac{1}{2}\partial_{\tilde{\tilde x}}[\partial_\mu a(\mu,y,\nu)(\tilde{\tilde x})]\cdot\partial^2_y[\partial_\mu U(x,\mu,y,\nu)(\tilde x)]\\ &\quad+\frac{1}{2}\partial_{\tilde{\tilde x}}[\partial^2_\mu a(\mu,y,\nu)(\tilde x,\tilde{\tilde x})]\cdot\partial^2_y U(x,\mu,y,\nu)\\ &\quad+\frac{1}{2}\partial_\mu a(\mu,y,\nu)(\tilde x)\cdot\partial_{\tilde{\tilde x}}[\partial_\mu\partial^2_y U(x,\mu,y,\nu)(\tilde{\tilde x})]\\ &\quad+\int_{{\mathbb R}^{d_2}}\Big[\partial_{\tilde{\tilde x}}[\partial_\mu b(\mu,\tilde y,\nu)(\tilde{\tilde x})]\cdot\partial_{\nu}[\partial_\mu U(x,\mu,y,\nu)(\tilde x)](\tilde y)\\ &\qquad\qquad+\partial_{\tilde{\tilde x}}[\partial^2_\mu b(\mu,\tilde y,\nu)(\tilde x,\tilde{\tilde x})]\cdot\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)\\ &\qquad\qquad+\partial_\mu b(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde{\tilde x}}\big[\partial_{\mu}[\partial_\nu U(x,\mu,y,\nu)(\tilde y)](\tilde{\tilde x})\big]\Big]\nu({\mathord{{\rm d}}} \tilde y)\\ &\quad+\frac{1}{2}\int_{{\mathbb R}^{d_2}}\Big[\partial_{\tilde{\tilde x}}[\partial_\mu a(\mu,\tilde y,\nu)(\tilde{\tilde x})]\cdot\partial_{\tilde y}\big[\partial_{\nu}[\partial_\mu U(x,\mu,y,\nu)(\tilde x)](\tilde y)\big]\\ &\qquad\qquad\quad+\partial_{\tilde{\tilde x}}[\partial^2_\mu a(\mu,\tilde y,\nu)(\tilde x,\tilde{\tilde x})]\cdot\partial_{\tilde y}[\partial_{\nu}U(x,\mu,y,\nu)(\tilde y)]\\ &\qquad\qquad\quad+\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde{\tilde x}}\big[\partial_{\mu}\partial_{\tilde y}[\partial_\nu U(x,\mu,y,\nu)(\tilde y)](\tilde{\tilde x})\big]\Big]\nu({\mathord{{\rm d}}} \tilde y). \end{align*} As a result, we obtain $U(x,\cdot,y,\nu)\in C_b^{(2,1)}({\mathscr P}_2({\mathbb R}^{d_1}))$. In the same way, we get $U(x,\cdot,y,\nu)\in C_b^{(2,k)}({\mathscr P}_2({\mathbb R}^{d_1}))$. For general $m\geqslant 3$, the proof follows by similar arguments; we omit the details here. \end{proof}
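\medskip \noindent{\bf Example} (an elementary illustration, not used in the proofs){\bf.} The difference quotients along push-forwards $\mu\circ(I+\rho\phi)^{-1}$ employed in step (ii) above can be made explicit for linear functionals. Let $g\in C_b^1({\mathbb R}^{d_1})$ and set $F(\mu):=\int_{{\mathbb R}^{d_1}}g(\tilde x)\mu({\mathord{{\rm d}}}\tilde x)$. Then for every $\phi\in L^2({\mathbb R}^{d_1})$, \begin{align*} \frac{F(\mu\circ(I+\rho\phi)^{-1})-F(\mu)}{\rho} =\int_{{\mathbb R}^{d_1}}\frac{g(\tilde x+\rho\phi(\tilde x))-g(\tilde x)}{\rho}\,\mu({\mathord{{\rm d}}}\tilde x) \xrightarrow[\rho\to0]{}\int_{{\mathbb R}^{d_1}}\nabla g(\tilde x)\cdot\phi(\tilde x)\,\mu({\mathord{{\rm d}}}\tilde x), \end{align*} so that $\partial_\mu F(\mu)(\tilde x)=\nabla g(\tilde x)$. The limits used above for $f$, $b$ and $a$ are the analogues of this elementary computation.\medskip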
Given a function $f(x,\mu,y,\nu)$, we denote \begin{align}\label{barf} \bar f(x,\mu):=\int_{{\mathbb R}^{d_2}}f(x,\mu,y,\zeta^\mu)\zeta^\mu({\mathord{{\rm d}}} y), \end{align} where $\zeta^\mu$ is the unique invariant measure for the frozen McKean-Vlasov equation (\ref{sde1}). The verification that the averaged coefficient $\bar f(x,\mu)$ is smooth usually constitutes a separate problem, connected with the smoothness of the invariant measure $\zeta^\mu$ with respect to the parameter $\mu$. Here, as a direct application of Theorem \ref{po}, we have the following result.
\begin{corollary}\label{avef} Assume that {\bf ($ H^{\sigma,b}$)} holds, $\ell, m, k\in{\mathbb N}$, and that for every $1\leqslant n<m$, $a, b\in (C_b^{(m,k),2m,(m,m)}\cap {\mathbb C}_b^{(n,k),2(m-n),(m-n,m-n)})({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $f\in (C_b^{\ell,(m,k),2m,(m,m)}\cap {\mathbb C}_b^{\ell,(n,k),2(m-n),(m-n,m-n)})({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then we have $\bar f\in C_b^{\ell,(m,k)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1}))$.
In particular, we have \begin{align}\label{barf1} \partial_\mu \bar f(x,\mu)(\tilde x)&=\int_{{\mathbb R}^{d_2}}\bigg[\partial_\mu f(x,\mu,y,\zeta^\mu)(\tilde x)+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_yU(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial^2_yU(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\nonumber\\ &\qquad +\frac{1}{2}\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\big]\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y), \end{align} and \begin{align}\label{barf2} &\partial^2_\mu\bar f(x,\mu)(\tilde x,\tilde{\tilde x})\nonumber\\ &=\int_{{\mathbb R}^{d_2}}\bigg[\partial^2_\mu f(x,\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})+\partial^2_\mu b(\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_y U(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_\mu\partial_y U(x,\mu,y,\zeta^\mu)(\tilde{\tilde x})+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_y [\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)]\nonumber\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial^2_y[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)]+\frac{1}{2}\partial^2_\mu a(\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial^2_y U(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_\mu\partial^2_y U(x,\mu,y,\zeta^\mu)(\tilde{\tilde x})\nonumber\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_{\nu}[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)](\tilde y)\nonumber\\ &\qquad\qquad\quad+\partial^2_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\nonumber\\ &\qquad\qquad\quad+\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\mu}[\partial_\nu U(x,\mu,y,\zeta^\mu)(\tilde y)](\tilde{\tilde x})\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\nonumber\\ &\qquad+\frac{1}{2}\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_{\tilde y}\big[\partial_{\nu}[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)](\tilde y)\big]\nonumber\\ &\qquad\qquad\quad+\partial^2_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\tilde y}[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)]\nonumber\\ &\qquad\qquad\quad+\partial_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\mu}\partial_{\tilde y}[\partial_\nu U(x,\mu,y,\zeta^\mu)(\tilde y)](\tilde{\tilde x})\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y), \end{align} where $U(x,\mu,y,\nu)$ is the unique solution to the Poisson equation \begin{align}\label{Uf} {\mathscr L}_0U(x,\mu,y,\nu)=-\big[f(x,\mu,y,\nu)-\bar f(x,\mu)\big]. \end{align} \end{corollary} \begin{proof} Since $\zeta^\mu$ does not depend on the $x$-variable, we can take derivative directly with respect to $x$ from both sides of (\ref{barf}) to get \begin{align*} &\partial^\ell_x\bar f(x,\mu)=\int_{{\mathbb R}^{d_2}}\partial^\ell_xf(x,\mu,y,\zeta^\mu)\zeta^\mu({\mathord{{\rm d}}} y), \end{align*} which in turn implies that $\bar f(\cdot,\mu)\in C_b^\ell({\mathbb R}^{d_1})$. We proceed to show the regularities of $\bar f$ with respect to the $\mu$-variable. 
Note that the function $$ \delta f(x,\mu,y,\nu):=f(x,\mu,y,\nu)-\bar f(x,\mu) $$ always satisfies the centering condition. Thus, under our assumptions there exists a unique solution $U$ to the Poisson equation (\ref{Uf}). Following the same arguments as in (\ref{h1c}) we have \begin{align}\label{fc} &\int_{{\mathbb R}^{d_2}}\bigg[\partial_\mu \delta f(x,\mu,y,\zeta^\mu)(\tilde x)+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_yU(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial^2_yU(x,\mu,y,\zeta^\mu)\nonumber\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\nonumber\\ &\qquad\qquad+\frac{1}{2}\partial_\mu a(\mu,\tilde y,\nu)\cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\big]\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y)=0. \end{align} Since \begin{align*} &\int_{{\mathbb R}^{d_2}}\partial_\mu \delta f(x,\mu,y,\zeta^\mu)(\tilde x)\zeta^\mu({\mathord{{\rm d}}} y)\\ &=-\partial_\mu \bar f(x,\mu)(\tilde x)+\int_{{\mathbb R}^{d_2}}\partial_\mu f(x,\mu,y,\zeta^\mu)(\tilde x)\zeta^\mu({\mathord{{\rm d}}} y), \end{align*} this together with (\ref{fc}) yields (\ref{barf1}). Taking derivatives with respect to the $\tilde x$ and $x$ from both sides of (\ref{barf1}), we obtain \begin{align*} \partial_{\tilde x}[\partial_\mu \bar f(x,\mu)(\tilde x)]&=\int_{{\mathbb R}^{d_2}}\bigg[\partial_{\tilde x}[\partial_\mu f(x,\mu,y,\zeta^\mu)(\tilde x)]+\partial_{\tilde x}[\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)]\cdot\partial_yU(x,\mu,y,\zeta^\mu)\\ &\qquad+\frac{1}{2}\partial_{\tilde x}[\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)]\cdot\partial^2_yU(x,\mu,y,\zeta^\mu)\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_{\tilde x}[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)]\cdot\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\\ &\qquad\qquad+\frac{1}{2}\partial_{\tilde x}[\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)]\cdot\partial_{\tilde y}\big[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\big]\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y), \end{align*} and \begin{align*} \partial_x[\partial_\mu \bar f(x,\mu)(\tilde x)]&=\int_{{\mathbb R}^{d_2}}\bigg[\partial_x[\partial_\mu f(x,\mu,y,\zeta^\mu)(\tilde x)]+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_x\partial_yU(x,\mu,y,\zeta^\mu)\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_x\partial^2_yU(x,\mu,y,\zeta^\mu)\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_x[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)]\\ &\qquad\qquad+\frac{1}{2}\partial_\mu a(\mu,\tilde y,\nu)(\tilde x)\cdot\partial_{\tilde y}\big[\partial_x[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)]\big]\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y). \end{align*} Therefore, we deduce $\bar f\in C_b^{2,(1,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1}))$. 
Similarly, we have \begin{align*} &\int_{{\mathbb R}^{d_2}}\bigg[\partial^2_\mu \delta f(x,\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})+\partial^2_\mu b(\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_y U(x,\mu,y,\zeta^\mu)\\ &\qquad+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_\mu\partial_y U(x,\mu,y,\zeta^\mu)(\tilde{\tilde x})+\partial_\mu b(\mu,y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_y [\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)]\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial^2_y[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)]+\frac{1}{2}\partial^2_\mu a(\mu,y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial^2_y U(x,\mu,y,\zeta^\mu)\\ &\qquad+\frac{1}{2}\partial_\mu a(\mu,y,\zeta^\mu)(\tilde x)\cdot\partial_\mu\partial^2_y U(x,\mu,y,\zeta^\mu)(\tilde{\tilde x})\\ &\qquad+\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_{\nu}[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)](\tilde y)\\ &\qquad\qquad\quad+\partial^2_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)\\ &\qquad\qquad\quad+\partial_\mu b(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\mu}[\partial_\nu U(x,\mu,y,\zeta^\mu)(\tilde y)](\tilde{\tilde x})\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\\ &\qquad+\frac{1}{2}\int_{{\mathbb R}^{d_2}}\Big[\partial_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde{\tilde x})\cdot\partial_{\tilde y}\big[\partial_{\nu}[\partial_\mu U(x,\mu,y,\zeta^\mu)(\tilde x)](\tilde y)\big]\\ &\qquad\qquad\quad+\partial^2_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde x,\tilde{\tilde x})\cdot\partial_{\tilde y}[\partial_{\nu}U(x,\mu,y,\zeta^\mu)(\tilde y)]\\ &\qquad\qquad\quad+\partial_\mu a(\mu,\tilde y,\zeta^\mu)(\tilde x)\cdot\partial_{\mu}\partial_{\tilde y}[\partial_\nu U(x,\mu,y,\zeta^\mu)(\tilde y)](\tilde{\tilde x})\Big]\zeta^\mu({\mathord{{\rm d}}} \tilde y)\bigg]\zeta^\mu({\mathord{{\rm d}}} y)=0, \end{align*} which implies that (\ref{barf2}) holds. For general $m\in{\mathbb N}$, the proof follows by induction, we omit the details here. \end{proof}
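\medskip \noindent{\bf Remark} (a numerical illustration){\bf.} The averaged coefficient (\ref{barf}) can also be approximated numerically by a long-time ergodic average of the frozen fast equation. The short Python sketch below is an illustration only and is not used in the proofs; it treats the simplest situation in which $b$ and $\sigma$ do not depend on the measure variables, so that $\zeta^\mu$ is an explicit Gaussian law. All concrete choices made in the sketch (an Ornstein--Uhlenbeck fast process ${\mathord{{\rm d}}} Y_t=-(Y_t-m)\,{\mathord{{\rm d}}} t+\sqrt2\,{\mathord{{\rm d}}} W_t$ with $m$ denoting the mean of $\mu$, and the observable $f(x,\mu,y)=(x+m)\cos y$) are assumptions made for the illustration, not part of the setting above.
\begin{verbatim}
# Illustration only: approximate the averaged coefficient bar_f(x, mu) by a
# long-time ergodic average of the frozen fast equation, in a toy case where
# the fast coefficients do not depend on the measure variables.
#   fast dynamics:  dY = -(Y - m) dt + sqrt(2) dW   (invariant law N(m, 1))
#   observable:     f(x, mu, y) = (x + m) * cos(y),  m = mean of mu
import numpy as np

rng = np.random.default_rng(0)

def bar_f_ergodic(x, m, T=2000.0, dt=1e-2, burn_in=100.0):
    """Time average of f along an Euler-Maruyama path of the frozen equation."""
    n_steps, n_burn = int(T / dt), int(burn_in / dt)
    noise = np.sqrt(2.0 * dt) * rng.standard_normal(n_steps)
    y, acc = m, 0.0
    for k in range(n_steps):
        y += -(y - m) * dt + noise[k]
        if k >= n_burn:
            acc += (x + m) * np.cos(y)
    return acc / (n_steps - n_burn)

x, m = 0.7, 0.3
approx = bar_f_ergodic(x, m)
exact = (x + m) * np.cos(m) * np.exp(-0.5)  # E[cos Y] = cos(m) e^{-1/2}, Y ~ N(m,1)
print("ergodic average  :", round(float(approx), 4))
print("closed-form value:", round(float(exact), 4))
\end{verbatim}
For these choices $\zeta^\mu={\mathcal N}(m,1)$ and $\bar f(x,\mu)=(x+m)\cos(m)\,{\mathrm e}^{-1/2}$, so the ergodic average can be compared with the closed-form value; the accuracy is of course limited by the time step and the finite horizon, and no such approximation is needed anywhere in this paper.\medskip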
\section{Fluctuation estimates}
Let $(X_t^\varepsilon, Y_t^\varepsilon)$ and $\bar X_t$ satisfy the McKean-Vlasov equations (\ref{sde}) and (\ref{ave}), respectively. This section is devoted to establishing some integral estimates for the fluctuations between $X_t^\varepsilon$ and its homogenized limit $\bar X_t$. Given a function $f(t,x,\mu,y,\nu)$ on $[0,\infty)\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})$, we consider the Poisson equation: \begin{align}\label{pss} {\mathscr L}_0\psi(t,x,\mu,y,\nu)=-f(t,x,\mu,y,\nu), \end{align} where the operator ${\mathscr L}_0$ is defined by (\ref{l0}), and $(t,x,\mu)\in {\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})$ are regarded as parameters. We first give the following estimate of functional law of large numbers type.
\begin{lemma}\label{flu1} Assume that {\bf (H$^{\sigma,b}$)} holds, $F,H,G,c\in C_b^{2,(1,1),2,(1,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $b,\sigma\in C_b^{(1,1),2,(1,1)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then for every $f\in C_b^{1,2,(1,1),2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ satisfying (\ref{cen}), we have \begin{align*}
\left|{\mathbb E}\left(\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\right|\leqslant C_0\,\varepsilon, \end{align*} where $C_0>0$ is a constant independent of $\varepsilon$. \end{lemma} \begin{proof} Let $\psi$ be the solution of equation (\ref{pss}). According to the assumptions on the coefficients and Theorem \ref{po}, we have
$\psi\in C_b^{1,2,(1,1),2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Using It\^o's formula, we deduce that \begin{align} &\psi(t,X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon},Y_t^\varepsilon,{\mathcal L}_{Y_t^\varepsilon})\nonumber\\ &=\psi(0,\xi,{\mathcal L}_\xi,\eta,{\mathcal L}_\eta) +\int_0^t\Big[\partial_s\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\nonumber\\ &\quad+{\mathscr L}_1(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\nonumber\\ &\quad+\frac{1}{\varepsilon}{\mathscr L}_2(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\nonumber\\ &\quad+\frac{1}{\varepsilon}{\mathscr L}_3(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\nonumber\\ &\quad+\frac{1}{\varepsilon^2}{\mathscr L}_0({\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}) \psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\Big]{\mathord{{\rm d}}} s+M_t^1+\frac{1}{\varepsilon}M_t^2\nonumber\\ &\quad+\tilde{\mathbb E}\bigg(\int_0^tF(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\nonumber\\ &\qquad\quad+\frac{1}{\varepsilon}H(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\nonumber\\ &\qquad\quad+\frac{1}{2}\mathord{{\rm Tr}}\Big(GG^*(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_{\tilde x}\big[\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\big]\Big)\nonumber\\ &\qquad\quad+\frac{1}{\varepsilon}c(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\nu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y^{\varepsilon}_s){\mathord{{\rm d}}} s\bigg),\label{ff1} \end{align} where the operators ${\mathscr L}_1, {\mathscr L}_2$ and ${\mathscr L}_3$ are defined by (\ref{op1}), (\ref{op2}) and (\ref{op3}), respectively, the process ($\tilde X^{\varepsilon}_s,\tilde Y^{\varepsilon}_s$) is a copy of the original process $(X^{\varepsilon}_s,Y^{\varepsilon}_s)$ defined on a copy $(\tilde\Omega,\tilde{\mathscr F},\tilde{\mathbb P})$ of the original probability space $(\Omega,{\mathscr F},{\mathbb P})$, and $M_t^1$ and $M_t^2$ are two martingales defined by \begin{align*} M_t^1&:=\int_0^t\partial_x\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot G(X_s^\varepsilon,{\mathcal 
L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})dW^1_s,\\ M_t^2&:=\int_0^t\partial_y\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot\sigma({\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})dW^2_s. \end{align*}
Taking expectations and multiplying $\varepsilon^2$ from both sides of (\ref{ff1}), and in view of the equation (\ref{pss}), we obtain \begin{align*} &{\mathbb E}\left(\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &=\varepsilon^2\,{\mathbb E}\big[\psi(0,\xi,{\mathcal L}_\xi,\eta,{\mathcal L}_\eta)-\psi(t,X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon},Y_t^\varepsilon,{\mathcal L}_{Y_t^\varepsilon})\big]\\ &\quad+\varepsilon^2\,{\mathbb E}\left(\int_0^t\partial_s\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}) {\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon^2\,{\mathbb E}\left(\int_0^t {\mathscr L}_1(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon\,{\mathbb E}\left(\int_0^t{\mathscr L}_2(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}) \psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon\,{\mathbb E}\left(\int_0^t{\mathscr L}_3(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}) \psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon^2\,{\mathbb E}\tilde{\mathbb E}\bigg(\int_0^tF(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\nonumber\\ &\qquad\quad+\frac{1}{\varepsilon}H(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\nonumber\\ &\qquad\quad+\frac{1}{2}\mathord{{\rm Tr}}\Big(GG^*(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_{\tilde x}\big[\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\big]\Big)\nonumber\\ &\qquad\quad+\frac{1}{\varepsilon}c(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\nu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y^{\varepsilon}_s){\mathord{{\rm d}}} s\bigg). \end{align*} Using the assumptions on the coefficients and the regularity of $\psi$ again, the expectations on the right hand side of the above equality can be controlled. Thus we arrive at \begin{align*}
\left|{\mathbb E}\left(\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\right|\leqslant C_0\,\varepsilon. \end{align*} The proof is finished. \end{proof}
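\medskip \noindent{\bf Remark} (a numerical illustration of the rate){\bf.} The order $\varepsilon$ in Lemma \ref{flu1} can be observed on a toy linear slow--fast system. The sketch below is an illustration only: it assumes a model with no measure dependence and no noise in the slow variable, namely ${\mathord{{\rm d}}} X_t=\varepsilon^{-1}Y_t\,{\mathord{{\rm d}}} t$ and ${\mathord{{\rm d}}} Y_t=-\varepsilon^{-2}Y_t\,{\mathord{{\rm d}}} t+\sqrt2\,\varepsilon^{-1}{\mathord{{\rm d}}} W_t$ with $X_0=Y_0=0$, whose fast invariant measure is ${\mathcal N}(0,1)$, together with the centered observable $f(x,y)=xy$. Since this toy system is linear and Gaussian, the quantity ${\mathbb E}\int_0^TX_sY_s\,{\mathord{{\rm d}}} s$ can be computed from the second-moment equations $\tfrac{{\mathord{{\rm d}}}}{{\mathord{{\rm d}}} t}{\mathbb E}[Y_t^2]=-2\varepsilon^{-2}({\mathbb E}[Y_t^2]-1)$ and $\tfrac{{\mathord{{\rm d}}}}{{\mathord{{\rm d}}} t}{\mathbb E}[X_tY_t]=-\varepsilon^{-2}{\mathbb E}[X_tY_t]+\varepsilon^{-1}{\mathbb E}[Y_t^2]$, so that no Monte Carlo error enters the check.
\begin{verbatim}
# Illustration only: the O(eps) scaling of the law-of-large-numbers type
# estimate on a toy linear slow-fast system (no measure dependence).  The
# expectation E int_0^T X_s Y_s ds is obtained from the second-moment ODEs
# by an explicit Euler scheme, so there is no sampling error.

def centered_integral(eps, T=1.0, n_steps=200_000):
    dt = T / n_steps
    m_yy, m_xy, integral = 0.0, 0.0, 0.0  # E[Y^2], E[XY], running time integral
    for _ in range(n_steps):
        integral += m_xy * dt
        d_yy = -(2.0 / eps**2) * (m_yy - 1.0)
        d_xy = -(1.0 / eps**2) * m_xy + (1.0 / eps) * m_yy
        m_yy += d_yy * dt
        m_xy += d_xy * dt
    return integral

for eps in (0.2, 0.1, 0.05):
    val = centered_integral(eps)
    print(f"eps = {eps:4.2f}:  E int X Y ds = {val:9.6f}   ratio to eps = {val / eps:.3f}")
\end{verbatim}
The ratio to $\varepsilon$ stabilizes near $T$ as $\varepsilon$ decreases, in agreement with a bound of order $\varepsilon$; this computation only illustrates the scaling and says nothing about the constants appearing in the general setting.\medskip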
Next, we provide the fluctuation estimate of functional central limit type. Recall that $\psi$ is the solution of the Poisson equation (\ref{pss}), and $\zeta^\mu$ is the invariant measure for the SDE (\ref{sde1}). For simplicity, we define \begin{align} \overline{H\cdot\partial_x\psi}(t,x,\mu)&:=\int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^{\mu})\cdot \partial_x\psi(t,x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{hpsi}\\ \overline{c\cdot\partial_y\psi}(t,x,\mu)&:=\int_{{\mathbb R}^{d_2}}c(x,\mu,y,\zeta^{\mu})\cdot \partial_y\psi(t,x,\mu,y,\zeta^{\mu})\zeta^{\mu}({\mathord{{\rm d}}} y),\label{cpsi1} \end{align} and \begin{align} \overline{c\cdot\partial_\nu\psi}(t,x,\mu,y)(\tilde x)&:=\int_{{\mathbb R}^{d_2}}c(\tilde x,\mu,\tilde y,\zeta^{\mu})\cdot \partial_\nu\psi(t,x,\mu,y,\zeta^{\mu})(\tilde y)\zeta^{\mu}({\mathord{{\rm d}}} \tilde y),\label{cpsi2}\\ \overline{\overline{c\cdot\partial_\nu\psi}}(t,x,\mu)(\tilde x)&:=\int_{{\mathbb R}^{d_2}} \overline{c\cdot\partial_\nu\psi}(t,x,\mu,y)(\tilde x)\zeta^{\mu}({\mathord{{\rm d}}} y)\nonumber\\ &=\int_{{\mathbb R}^{d_2}}\int_{{\mathbb R}^{d_2}}c(\tilde x,\mu,\tilde y,\zeta^{\mu})\cdot \partial_\nu\psi(t,x,\mu,y,\zeta^{\mu})(\tilde y)\zeta^{\mu}({\mathord{{\rm d}}} \tilde y)\zeta^{\mu}({\mathord{{\rm d}}} y).\label{cpsi3} \end{align} The following result will play an important role below.
\begin{lemma}\label{flu2} Assume that {\bf (H$^{\sigma,b}$)} holds, $F,H,G,c\in C_b^{2,(1,1),2,(1,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and $\sigma, b\in {\mathbb C}_b^{(1,2),2,(1,1)}\cap C_b^{(2,1),4,(2,2)}({\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. Then for every $f\in {\mathbb C}_b^{1,3,(1,2),2,(1,1)}\cap C_b^{1,3,(2,1),4,(2,2)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ satisfying (\ref{cen}), we have \begin{align}\label{es2}
&\bigg|{\mathbb E}\left(\frac{1}{\varepsilon}\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\nonumber\\ &\qquad-{\mathbb E}\bigg(\int_0^t\overline{H\cdot\partial_x\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}) +\overline{c\cdot\partial_y\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}){\mathord{{\rm d}}} s\bigg)\nonumber\\
&\qquad-{\mathbb E}\tilde{{\mathbb E}}\bigg(\int_0^t\overline{\overline{c\cdot\partial_\nu\psi}}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon})(\tilde{X}_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|\leqslant C_0\,\varepsilon, \end{align} where $C_0>0$ is a constant independent of $\varepsilon$. \end{lemma} \begin{proof} Multiplying $\varepsilon$ from both sides of (\ref{ff1}) and using (\ref{pss}), we have \begin{align*} &{\mathbb E}\left(\frac{1}{\varepsilon}\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &=\varepsilon\,{\mathbb E}\big[\psi(0,\xi,{\mathcal L}_\xi,\eta,{\mathcal L}_\eta)-\psi(t,X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon},Y_t^\varepsilon,{\mathcal L}_{Y_t^\varepsilon})\big]\\ &\quad+\varepsilon\,{\mathbb E}\left(\int_0^t\partial_s\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon\,{\mathbb E}\left(\int_0^t {\mathscr L}_1(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+\varepsilon\,{\mathbb E}\tilde{\mathbb E}\bigg(\int_0^tF(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\nonumber\\ &\qquad\quad+\frac{1}{2}\mathord{{\rm Tr}}\Big(GG^*(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_{\tilde x}\big[\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s)\big]\Big){\mathord{{\rm d}}} s\bigg)\nonumber\\ &\quad+{\mathbb E}\left(\int_0^t H(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_x\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+{\mathbb E}\left(\int_0^t c(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_y\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\ &\quad+{\mathbb E}\tilde{\mathbb E}\left(\int_0^t H(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s){\mathord{{\rm d}}} s\right)\nonumber\\ &\quad+{\mathbb E}\tilde{\mathbb E}\left(\int_0^t c(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\nu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y^{\varepsilon}_s){\mathord{{\rm d}}} s\right). \end{align*} By the same argument as in the proof of Lemma \ref{flu1}, we obtain \begin{align*}
&\bigg|{\mathbb E}\left(\frac{1}{\varepsilon}\int_0^tf(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)-{\mathbb E}\bigg(\int_0^t\overline{H\cdot\partial_x\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}){\mathord{{\rm d}}} s\bigg)\\ &-{\mathbb E}\bigg(\int_0^t\overline{c\cdot\partial_y\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}){\mathord{{\rm d}}} s\bigg)
-{\mathbb E}\tilde{{\mathbb E}}\bigg(\int_0^t\overline{\overline{c\cdot\partial_\nu\psi}}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon})(\tilde{X}_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|\\
&\leqslant C_0\,\varepsilon+\bigg|{\mathbb E}\tilde{\mathbb E}\left(\int_0^tH(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s){\mathord{{\rm d}}} s\right)\bigg|\\
&\quad+\bigg|{\mathbb E}\left(\int_0^t H(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_x\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\
&\qquad-{\mathbb E}\bigg(\int_0^t\overline{H\cdot\partial_x\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}){\mathord{{\rm d}}} s\bigg)\bigg|\\
&\quad+\bigg|{\mathbb E}\left(\int_0^t c(X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_y\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon}){\mathord{{\rm d}}} s\right)\\
&\qquad-{\mathbb E}\bigg(\int_0^t\overline{c\cdot\partial_y\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon}){\mathord{{\rm d}}} s\bigg)\bigg|\\
&\quad+\bigg|{\mathbb E}\tilde {\mathbb E}\left(\int_0^t c(\tilde X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},\tilde Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_\nu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y_s^\varepsilon){\mathord{{\rm d}}} s\right)\\
&\qquad-{\mathbb E}\tilde {\mathbb E}\bigg(\int_0^t\overline{\overline{c\cdot\partial_\nu\psi}}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon})(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|\\ &=:C_0\,\varepsilon+{\mathcal I}_1(\varepsilon)+{\mathcal I}_2(\varepsilon)+{\mathcal I}_3(\varepsilon)+{\mathcal I}_4(\varepsilon). \end{align*} In what follows, we estimate the above four terms one by one. For the first term, we write \begin{align*}
{\mathcal I}_1(\varepsilon)=\bigg|{\mathbb E}\bigg[\tilde{\mathbb E}\left(\int_0^tH(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,x,{\mathcal L}_{X_s^\varepsilon},y,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s){\mathord{{\rm d}}} s\right)\bigg|_{(x,y)=(X^{\varepsilon}_s,Y^{\varepsilon}_s)}\bigg]\bigg|. \end{align*} Since $H(\tilde x,\mu,\tilde y,\nu)$ satisfies the centering condition (\ref{cen}), this in turn implies that for every fixed $(x,y)\in{\mathbb R}^{d_1}\times{\mathbb R}^{d_2}$, $$ (t,\tilde x,\mu,\tilde y,\nu)\mapsto H(\tilde x,\mu,\tilde y,\nu)\partial_\mu\psi(t,x,\mu,y,\nu)(\tilde x) $$ satisfies the centering condition, too. Moreover, by the assumptions on the coefficients and Theorem \ref{po}, we have $$ \partial_\mu\psi(\cdot,x,\cdot,y,\cdot)(\cdot)\in C_b^{1,(1,1),(1,1),2}({\mathbb R}_+\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathscr P}_2({\mathbb R}^{d_2})\times{\mathbb R}^{d_1}). $$ Thus using Lemma \ref{flu1} we get that for every fixed $(x,y)\in{\mathbb R}^{d_1}\times{\mathbb R}^{d_2}$, \begin{align*}
\bigg|\tilde{\mathbb E}\left(\int_0^tH(\tilde X^{\varepsilon}_s,{\mathcal L}_{X_s^\varepsilon},\tilde Y^{\varepsilon}_s,{\mathcal L}_{Y^{\varepsilon}_s})\cdot\partial_\mu\psi(s,x,{\mathcal L}_{X_s^\varepsilon},y,{\mathcal L}_{Y_s^\varepsilon})(\tilde X^{\varepsilon}_s){\mathord{{\rm d}}} s\right)\bigg|\leqslant C_1 \,\varepsilon, \end{align*} which in turn implies that $$ {\mathcal I}_1(\varepsilon)\leqslant C_1\,\varepsilon. $$ To control the second and third term, note that by the definition (\ref{hpsi}) and (\ref{cpsi1}) we have \begin{align*} \int_{\mathbb{R}^{d_2}} \big[H(x,\mu,y,\zeta^\mu)\cdot\partial_x\psi(t,x,\mu,y,\zeta^\mu)-\overline{H\cdot\partial_x\psi}(t,x,\mu)\big]\zeta^\mu({\mathord{{\rm d}}} y)=0 \end{align*} and \begin{align*} \int_{\mathbb{R}^{d_2}} \big[c(x,\mu,y,\zeta^\mu)\cdot\partial_y\psi(t,x,\mu,y,\zeta^\mu)-\overline{c\cdot\partial_y\psi}(t,x,\mu)\big]\zeta^\mu({\mathord{{\rm d}}} y)=0. \end{align*} Since $\partial_x\psi, \partial_y\psi \in C_b^{1,2,(1,1),2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, we have by Corollary \ref{avef} that $$ \overline{H\cdot\partial_x\psi}(t,x,\mu), \overline{c\cdot\partial_y\psi}(t,x,\mu)\in C_b^{1,2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})). $$ Using Lemma \ref{flu1} again we obtain $$ {\mathcal I}_2(\varepsilon)+{\mathcal I}_3(\varepsilon)\leqslant C_2\,\varepsilon. $$ As for ${\mathcal I}_4(\varepsilon)$, using (\ref{cpsi2}) we write \begin{align}
{\mathcal I}_4(\varepsilon)&\leqslant\bigg|{\mathbb E}\tilde {\mathbb E}\left(\int_0^t c(\tilde X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},\tilde Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_\nu\psi(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y_s^\varepsilon){\mathord{{\rm d}}} s\right)\nonumber\\
&\qquad-{\mathbb E}\tilde {\mathbb E}\left(\int_0^t \overline{c\cdot\partial_\nu\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon)(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\right)\bigg|\nonumber\\
&\quad+\bigg|{\mathbb E}\tilde {\mathbb E}\left(\int_0^t \overline{c\cdot\partial_\nu\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon)(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\right)\nonumber\\
&\qquad-{\mathbb E}\tilde {\mathbb E}\bigg(\int_0^t\overline{\overline{c\cdot\partial_\nu\psi}}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon})(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|\nonumber\\
&\leqslant\bigg|{\mathbb E}\bigg[\tilde {\mathbb E}\bigg(\int_0^t c(\tilde X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},\tilde Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_\nu\psi(s,x,{\mathcal L}_{X_s^\varepsilon},y,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y_s^\varepsilon)\nonumber\\
&\qquad-\overline{c\cdot\partial_\nu\psi}(s,x,{\mathcal L}_{X_s^\varepsilon},y)(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|_{(x,y)=(X_s^\varepsilon,Y_s^\varepsilon)}\bigg]\bigg|\nonumber\\
&\quad+\bigg|\tilde {\mathbb E}\bigg[{\mathbb E}\bigg(\int_0^t \overline{c\cdot\partial_\nu\psi}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},Y_s^\varepsilon)(\tilde x)
-\overline{\overline{c\cdot\partial_\nu\psi}}(s,X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon})(\tilde x){\mathord{{\rm d}}} s\bigg)\bigg|_{\tilde x=\tilde X_s^\varepsilon}\bigg]\bigg|\nonumber\\ &=:{\mathcal I}_{4,1}(\varepsilon)+{\mathcal I}_{4,2}(\varepsilon).\label{estc4} \end{align} By the definition of $\overline{c\cdot\partial_\nu\psi}(t,x,\mu,y)(\tilde x)$, for any fixed $(x,y)\in\mathbb{R}^{d_1}\times\mathbb{R}^{d_2}$, $$ (\tilde x,\mu,\tilde y,\nu)\mapsto c(\tilde x,\mu,\tilde y,\nu)\cdot\partial_\nu\psi(t,x,\mu,y,\nu)(\tilde y)-\overline{c\cdot\partial_\nu\psi}(t,x,\mu,y)(\tilde x) $$ satisfies the centering condition (\ref{cen}). By the assumptions on the coefficients and Theorem \ref{po}, we have $$ \partial_\nu\psi\in C_b^{1,2,(1,1),2,(1,1),2}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2})\times{\mathbb R}^{d_2}). $$
Therefore, using Corollary \ref{avef} and Lemma \ref{flu1} we obtain \begin{align*}
&\bigg|\tilde {\mathbb E}\bigg(\int_0^t c(\tilde X_s^\varepsilon,{\mathcal L}_{X_s^\varepsilon},\tilde Y_s^\varepsilon,{\mathcal L}_{Y_s^\varepsilon})\cdot \partial_\nu\psi(s,x,{\mathcal L}_{X_s^\varepsilon},y,{\mathcal L}_{Y_s^\varepsilon})(\tilde Y_s^\varepsilon)\nonumber\\
&\quad-\overline{c\cdot\partial_\nu\psi}(s,x,{\mathcal L}_{X_s^\varepsilon},y)(\tilde X_s^\varepsilon){\mathord{{\rm d}}} s\bigg)\bigg|\leqslant C_4 \,\varepsilon, \end{align*} which in turn implies \begin{align}\label{estc41} {\mathcal I}_{4,1}(\varepsilon)\leqslant C_4\,\varepsilon. \end{align} Similarly, in view of (\ref{cpsi3}) we have that for any fixed $\tilde x\in\mathbb{R}^{d_1}$, $$ (x,\mu,y)\mapsto \overline{c\cdot\partial_\nu\psi}(t,x,\mu,y)(\tilde x)-\overline{\overline{c\cdot\partial_\nu\psi}}(t,x,\mu)(\tilde x) $$ also satisfies the centering condition. By the same argument as for (\ref{estc41}), we have \begin{align}\label{estc42} {\mathcal I}_{4,2}(\varepsilon)\leqslant C_4\,\varepsilon. \end{align} Substituting (\ref{estc41}) and (\ref{estc42}) into (\ref{estc4}) yields $$ {\mathcal I}_{4}(\varepsilon)\leqslant C_4\,\varepsilon. $$ Combining the above estimates, the proof is finished. \end{proof}
\section{Proof of Theorem \ref{main}}
Throughout this section, we assume that the conditions in Theorem \ref{main} hold. Let $\bar X^{s,\xi}_t$ be the unique solution to the SDE (\ref{ave}) starting from the initial point $\xi\in L^2(\Omega)$ at time $s$. Namely, for $t\geqslant s$, \begin{align*} {\mathord{{\rm d}}} \bar X^{s,\xi}_t&=\bar F(\bar X^{s,\xi}_t,{\mathcal L}_{\bar X^{s,\xi}_t}){\mathord{{\rm d}}} t+\overline{H\cdot\partial_x\Phi}(\bar X^{s,\xi}_t,{\mathcal L}_{\bar X^{s,\xi}_t}){\mathord{{\rm d}}} t\\ &\quad+\overline{c\cdot\partial_y\Phi}(\bar X^{s,\xi}_t,{\mathcal L}_{\bar X^{s,\xi}_t}){\mathord{{\rm d}}} t+\int_{\mathbb{R}^{d_1}}\overline{\overline{c\cdot\partial_\nu\Phi}}(\bar X^{s,\xi}_t,{\mathcal L}_{\bar X^{s,\xi}_t})(\tilde x){\mathcal L}_{\bar X^{s,\xi}_t}({\mathord{{\rm d}}} \tilde x){\mathord{{\rm d}}} t\nonumber\\ &\quad+\sqrt{\overline{GG^*}+2\overline{H\cdot\Phi}}(\bar X^{s,\xi}_t,{\mathcal L}_{\bar X^{s,\xi}_t}){\mathord{{\rm d}}} W^1_t,\quad \bar X_s^{s,\xi}=\xi, \end{align*} where the coefficients are defined by (\ref{bF})-(\ref{bc3}), respectively. Fix $T>0$ and $\varphi: {\mathscr P}_2({\mathbb R}^{d_1})\to{\mathbb R}$. For $t\in[0,T]$, define \begin{align}\label{uu} u(t,{\mathcal L}_\xi):=\varphi({\mathcal L}_{\bar X^{t,\xi}_T}). \end{align} Then we have: \begin{lemma} Assume that $\varphi\in C_b^{(3,1)}({\mathscr P}_2({\mathbb R}^{d_1}))$. Then $u(t,{\mathcal L}_\xi)$ is the unique solution in $C_b^{1,(3,1)}([0,T]\times{\mathscr P}_2({\mathbb R}^{d_1}))$ of the equation \begin{equation}\label{equ} \left\{ \begin{aligned} &\partial_t u(t,{\mathcal L}_\xi)+{\mathbb E}\Big[\Big(\bar F(\xi,{\mathcal L}_{\xi})+\overline{H\cdot\partial_x\Phi}(\xi,{\mathcal L}_{\xi}) +\overline{c\cdot\partial_y\Phi}(\xi,{\mathcal L}_{\xi})\Big)\cdot\partial_\mu u(t,{\mathcal L}_{\xi})(\xi)\\ &\qquad\qquad+\int_{\mathbb{R}^{d_1}}\overline{\overline{c\cdot\partial_\nu\Phi}}(\xi,{\mathcal L}_{\xi})(\tilde x){\mathcal L}_{\xi}({\mathord{{\rm d}}} \tilde x)\cdot\partial_\mu u(t,{\mathcal L}_{\xi})(\xi)\\ &\qquad\qquad+\frac{1}{2}\mathord{{\rm Tr}}\Big(\big(\overline{GG^*}+2\overline{H\cdot\Phi}(\xi,{\mathcal L}_{\xi})\big)\cdot\partial_{x}\big[\partial_\mu u(t,{\mathcal L}_{\xi})\big](\xi)\Big)\Big]=0,\\ &u(T,{\mathcal L}_{\xi})=\varphi({\mathcal L}_\xi). \end{aligned} \right. \end{equation} \end{lemma} \begin{proof} Under the assumptions on the coefficients and using Corollary \ref{avef}, we have $\bar F,\overline{GG^*}\in C_b^{3,(3,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1}))$. Meanwhile, since $\Phi\in \textbf{C}_b^{4,(2,2),4,(2,2)}\cap {\mathcal C}_b^{4,6,(3,3)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$, using Corollary \ref{avef} again we have $\overline{H\cdot\partial_x\Phi}, \overline{c\cdot\partial_y\Phi}, \overline{H\cdot\Phi}\in C_b^{3,(3,1)}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1}))$ and $\overline{\overline{c\cdot\partial_\nu\Phi}}(x,\mu)(\tilde x)\in C_b^{3,(3,1),2}({\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_1})$. The statement follows from \cite[Theorem 7.2]{BLPR} and the same arguments as in the proof of Theorem \ref{pot}; we omit the details. \end{proof}
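\medskip \noindent{\bf Remark} (a special case, for orientation only){\bf.} To see how equation (\ref{equ}) generalizes the classical backward Kolmogorov equation, suppose (only for this remark) that the coefficients of the averaged equation do not depend on the measure variable and that $\varphi(\mu)=\int_{{\mathbb R}^{d_1}}g(x)\mu({\mathord{{\rm d}}} x)$ for some smooth bounded $g$. Then $u(t,{\mathcal L}_\xi)={\mathbb E}\, g(\bar X^{t,\xi}_T)=\int_{{\mathbb R}^{d_1}}v(t,x){\mathcal L}_\xi({\mathord{{\rm d}}} x)$ with $v(t,x):={\mathbb E}\, g(\bar X^{t,x}_T)$, and $\partial_\mu u(t,\mu)(x)=\partial_x v(t,x)$. Writing $\bar b$ and $\bar a$ for the total drift and the squared diffusion coefficient of the averaged equation, equation (\ref{equ}) then reads $\int_{{\mathbb R}^{d_1}}\big[\partial_t v+\bar b\cdot\partial_xv+\tfrac12\mathord{{\rm Tr}}(\bar a\,\partial^2_xv)\big](t,x)\,\mu({\mathord{{\rm d}}} x)=0$ for every $\mu$, which is equivalent to the classical backward Kolmogorov equation $\partial_t v+\bar b\cdot\partial_xv+\tfrac12\mathord{{\rm Tr}}(\bar a\,\partial^2_xv)=0$ with terminal condition $v(T,\cdot)=g$. This special case plays no role in the proof below.\medskip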
We are now in a position to give:
\begin{proof}[Proof of Theorem \ref{main}] Let $u(t,{\mathcal L}_\xi)$ be defined by (\ref{uu}). Then we have \begin{align*} {\mathscr J}(\varepsilon):=\varphi({\mathcal L}_{X_T^\varepsilon})-\varphi({\mathcal L}_{\bar X_T})=u(T,{\mathcal L}_{X_T^\varepsilon})-u(0,{\mathcal L}_\xi). \end{align*} Thus, by It\^o's formula, \begin{align*} {\mathscr J}(\varepsilon)&={\mathbb E}\bigg(\int_0^T\partial_tu(t,{\mathcal L}_{X_t^\varepsilon}) +F(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t})\cdot\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})(X^{\varepsilon}_t)\\ &\quad+\frac{1}{\varepsilon}H(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}) \cdot\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})(X^{\varepsilon}_t)\\ &\quad+\frac{1}{2}\mathord{{\rm Tr}}\Big(GG^*(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}) \cdot\partial_{x}\big[\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})\big](X^{\varepsilon}_t)\Big){\mathord{{\rm d}}} t\bigg). \end{align*} In view of equation (\ref{equ}), we further obtain that \begin{align*}
\big|{\mathscr J}(\varepsilon)\big|&\leqslant\bigg|{\mathbb E}\bigg(\int_0^T\delta F(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t})\cdot\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})(X^{\varepsilon}_t){\mathord{{\rm d}}} t\bigg)\bigg|\\
&\quad+\frac{1}{2}\bigg|{\mathbb E}\bigg(\int_0^T\mathord{{\rm Tr}}\Big(\delta (GG^*)(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t})\cdot\partial_{ x}\big[\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})\big](X^{\varepsilon}_t)\Big){\mathord{{\rm d}}} t\bigg)\bigg|\\
&\quad+\bigg|{\mathbb E}\bigg(\int_0^T\frac{1}{\varepsilon}H(X^{\varepsilon}_t,{\mathcal L}_{X_t^\varepsilon},Y^{\varepsilon}_t,{\mathcal L}_{Y^{\varepsilon}_t}) \cdot\partial_{\mu}u(t,{\mathcal L}_{X_t^\varepsilon})(X^{\varepsilon}_t)\\ &\quad-\Big(\overline{H\cdot\partial_x\Phi}(X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon}) +\overline{c\cdot\partial_y\Phi}(X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon})\Big)\cdot\partial_\mu u(t,{\mathcal L}_{X_t^\varepsilon})(X_t^\varepsilon)\\ &\quad-\int_{\mathbb{R}^{d_1}}\overline{\overline{c\cdot\partial_\nu\Phi}}(X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon})(\tilde x){\mathcal L}_{X_t^\varepsilon}({\mathord{{\rm d}}} \tilde x)\cdot\partial_\mu u(t,{\mathcal L}_{X_t^\varepsilon})(X_t^\varepsilon)\\
&\quad-\mathord{{\rm Tr}}\Big(\overline{H\cdot\Phi}(X_t^\varepsilon,{\mathcal L}_{X_t^\varepsilon})\cdot\partial_{x}\big[\partial_\mu u(t,{\mathcal L}_{X_t^\varepsilon})\big](X_t^\varepsilon)\Big){\mathord{{\rm d}}} t\bigg)\bigg|\\ &=:{\mathscr J}_1(\varepsilon)+{\mathscr J}_2(\varepsilon)+{\mathscr J}_3(\varepsilon), \end{align*} where $$ \delta F(x,\mu,y,\nu):=F(x,\mu,y,\nu)-\bar F(x,\mu) $$ and $$ \delta(GG^*)(x,\mu,y,\nu):=GG^*(x,\mu,y,\nu)-\overline{GG^*}(x,\mu). $$ By the definition of $\bar F(x,\mu)$, we have \begin{align*} &\int_{{\mathbb R}^{d_2}}\delta F(x,\mu,y,\zeta^\mu)\cdot\partial_{\mu}u(t,\mu)(x)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\int_{{\mathbb R}^{d_2}}\delta F(x,\mu,y,\zeta^\mu)\zeta^{\mu}({\mathord{{\rm d}}} y)\cdot\partial_{\mu}u(t,\mu)(x)=0. \end{align*} Meanwhile, $\delta F(x,\mu,y,\nu)\cdot\partial_{\mu}u(t,\mu)(x)\in C_b^{1,2,(1,1),2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$. As a result of Lemma \ref{flu1}, we have $$ {\mathscr J}_1(\varepsilon)\leqslant C_1\,\varepsilon. $$ Note that \begin{align*} &\int_{{\mathbb R}^{d_2}}\delta(GG^*)(x,\mu,y,\zeta^\mu)\cdot\partial_{x}[\partial_{\mu}u(t,\mu)(x)]\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\int_{{\mathbb R}^{d_2}}\delta(GG^*)(x,\mu,y,\zeta^\mu)\zeta^{\mu}({\mathord{{\rm d}}} y)\cdot\partial_{x}[\partial_{\mu}u(t,\mu)(x)]=0. \end{align*} Then, using Lemma \ref{flu1} again, one can get $$ {\mathscr J}_2(\varepsilon)\leqslant C_2\,\varepsilon. $$ To control the third term, note that since $H(x,\mu,y,\nu)$ satisfies the centering condition (\ref{cen}), we have \begin{align*} &\int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^\mu)\cdot\partial_{\mu}u(t,\mu)(x)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\int_{{\mathbb R}^{d_2}}H(x,\mu,y,\zeta^\mu)\zeta^{\mu}({\mathord{{\rm d}}} y)\cdot\partial_{\mu}u(t,\mu)(x)=0. \end{align*} On the other hand, recall that $\Phi(x,\mu,y,\nu)$ satisfies the Poisson equation (\ref{po1}), and define $$ \tilde\Phi(t,x,\mu,y,\nu):=\Phi(x,\mu,y,\nu)\cdot\partial_{\mu}u(t,\mu)(x). $$ Then, one can check that $$ {\mathscr L}_0\tilde\Phi(t,x,\mu,y,\nu)=-H(x,\mu,y,\nu)\cdot\partial_{\mu}u(t,\mu)(x). 
$$ Moreover, we have $\tilde\Phi\in C_b^{1,2,(1,1),2,(1,1)}({\mathbb R}_+\times{\mathbb R}^{d_1}\times{\mathscr P}_2({\mathbb R}^{d_1})\times{\mathbb R}^{d_2}\times{\mathscr P}_2({\mathbb R}^{d_2}))$ and it holds that \begin{align*} \overline{H\cdot\partial_x\tilde\Phi}(t,x,\mu)&=\int_{{\mathbb R}^{d_2}} H(x,\mu,y,\zeta^{\mu})\cdot\partial_x\Phi(x,\mu,y,\zeta^{\mu})\cdot\partial_{\mu}u(t,\mu)(x)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &\quad+\int_{{\mathbb R}^{d_2}}\mathord{{\rm Tr}} \Big(H(x,\mu,y,\zeta^{\mu})\cdot\Phi(x,\mu,y,\zeta^{\mu})\cdot\partial_x\big[\partial_{\mu}u(t,\mu)\big](x)\Big)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\overline{H\cdot\partial_x\Phi}(x,\mu)\cdot\partial_{\mu}u(t,\mu)(x) +\mathord{{\rm Tr}}\Big(\overline{H\cdot\Phi}(x,\mu)\cdot\partial_x\big[\partial_{\mu}u(t,\mu)\big](x)\Big),\\ \overline{c\cdot\partial_y\tilde\Phi}(t,x,\mu)&=\int_{{\mathbb R}^{d_2}} c(x,\mu,y,\zeta^{\mu})\cdot\partial_y\Phi(x,\mu,y,\zeta^{\mu})\cdot\partial_{\mu}u(t,\mu)(x)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\overline{c\cdot\partial_y\Phi}(x,\mu)\cdot\partial_{\mu}u(t,\mu)(x), \end{align*} and \begin{align*} &\overline{\overline{c\cdot\partial_\nu\tilde\Phi}}(t,x,\mu)(\tilde x)\\ &=\int_{{\mathbb R}^{d_2}}\int_{{\mathbb R}^{d_2}} c(\tilde x,\mu,\tilde y,\zeta^{\mu})\cdot\partial_\nu\Phi(x,\mu,y,\zeta^{\mu})(\tilde y)\cdot\partial_{\mu}u(t,\mu)(x)\zeta^{\mu}({\mathord{{\rm d}}} \tilde y)\zeta^{\mu}({\mathord{{\rm d}}} y)\\ &=\overline{\overline{c\cdot\partial_\nu\Phi}}(x,\mu)(\tilde x)\cdot\partial_{\mu}u(t,\mu)(x). \end{align*} Thus, by estimate (\ref{es2}) we have $$ {\mathscr J}_3(\varepsilon)\leqslant C_3\,\varepsilon. $$ The proof is completed. \end{proof}
\section{Appendix}
Recall that $Y_t^\eta$ and $Y_t^{y,\nu}$ satisfy the equations (\ref{eqy}) and (\ref{eqyy}), respectively. Throughout this section, we assume that {\bf ($\hat H^{\sigma,b}$)} holds and provide the following estimates for $Y_t^{y,\nu}$.
\begin{lemma}\label{Y1} Assume that {\bf ($\hat H^{\sigma,b}$)} holds. Then for any $p\geq2$, we have \begin{align*}
&{\mathbb E}\|\partial_yY^{y,\nu}_t\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}c_2t},\\
&{\mathbb E}\|\partial^2_yY^{y,\nu}_t\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}(c_2-\gamma)t}, \end{align*} where $C_0$ is a positive constant independent of $t$ and $\gamma\ll(c_2-c_1)$. \end{lemma} \begin{proof}
Recall that
\begin{align*}
{\mathord{{\rm d}}} \partial_y Y^{y,\nu}_t=\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t{\mathord{{\rm d}}} t+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_y Y^{y,\nu}_t {\mathord{{\rm d}}} W_t.
\end{align*}
Using It\^{o}'s formula, we compute that
\begin{align}\label{Yy}
{\mathord{{\rm d}}} {\mathbb E}\|\partial_y Y^{y,\nu}_t\|^p
&\leqslant\frac{p}{2}{\mathbb E}\big[\|\partial_y Y^{y,\nu}_t\|^{p-2}\cdot\big(2\langle \partial_y Y^{y,\nu}_t,\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_y Y^{y,\nu}_t \rangle\nonumber\\
&\quad+(p-1)\|\partial_y\sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_y Y^{y,\nu}_t\|^2\big)\big]{\mathord{{\rm d}}} t.
\end{align}
In view of the assumption {\bf ($\hat H^{\sigma,b}$)}, we have for any $h\in{\mathbb R}^{d_2}$,
\begin{align*}
2\langle h,\partial_yb(y,\nu)\cdot h\rangle+(p-1)\|\partial_y\sigma(y,\nu)\cdot h\|^2\leqslant-c_2|h|^2,
\end{align*}
which together with (\ref{Yy}) implies that
\begin{align*}
{\mathord{{\rm d}}} {\mathbb E}\|\partial_y Y^{y,\nu}_t\|^p
\leqslant-\frac{p}{2}c_2{\mathbb E}\|\partial_y Y^{y,\nu}_t\|^p{\mathord{{\rm d}}} t.
\end{align*}
Thus, by the comparison theorem, we get
\begin{align*}
{\mathbb E}\|\partial_y Y^{y,\nu}_t\|^p\leqslant C_0e^{-\frac{p}{2}c_2t}.
\end{align*}
Similarly, we have
\begin{align*}
{\mathord{{\rm d}}} {\mathbb E}\|\partial^2_y Y^{y,\nu}_t\|^p
&\leqslant\frac{p}{2}{\mathbb E}\big[\|\partial^2_y Y^{y,\nu}_t\|^{p-2}\cdot2\langle \partial^2_y Y^{y,\nu}_t,\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial^2_y Y^{y,\nu}_t\\
&\qquad+\partial^2_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y Y^{y,\nu}_t \rangle\big]{\mathord{{\rm d}}} t\nonumber\\
&\quad+\frac{p(p-1)}{2}{\mathbb E}\big[\|\partial^2_y Y^{y,\nu}_t\|^{p-2}\cdot\|\partial_y\sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial^2_y Y^{y,\nu}_t\\
&\qquad+\partial^2_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_y Y^{y,\nu}_t\cdot\partial_y Y^{y,\nu}_t\|^2\big]{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial^2_y Y^{y,\nu}_t\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial_y Y^{y,\nu}_t\|^{2p}{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial^2_y Y^{y,\nu}_t\|^p{\mathord{{\rm d}}} t+C_0e^{-pc_2t}{\mathord{{\rm d}}} t,
\end{align*}
which in turn yields \begin{align*}
{\mathbb E}\|\partial^2_y Y^{y,\nu}_t\|^p\leqslant C_0e^{-\frac{p}{2}(c_2-\gamma)t}.
\end{align*}
Thus the proof is completed. \end{proof}
\begin{lemma}\label{Y2} Assume that {\bf ($\hat H^{\sigma,b}$)} holds. Then we have for every $p\geq2$, \begin{align*}
&{\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}(c_2-c_1-\gamma)t},\\
&{\mathbb E}\|\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}(c_2-c_1-\gamma)t},\\
&{\mathbb E}\|\partial_y\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}(c_2-c_1-\gamma)t},\\
&{\mathbb E}\|\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve{y})\|^p\leqslant C_0\,{\mathrm{e}}^{-\frac{p}{2}(c_2-c_1-\gamma)t}, \end{align*} where $C_0$ is a positive constant independent of $t$ and $\gamma\ll(c_2-c_1)$. \end{lemma} \begin{proof} Recall that
\begin{align*}
{\mathord{{\rm d}}} \partial_\nu Y^{y,\nu}_t(\tilde y)&=\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y){\mathord{{\rm d}}} t+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\
&\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} t+\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y) {\mathord{{\rm d}}} W_t\\
&\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\
&\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t,
\end{align*}
and
\begin{align*}
{\mathord{{\rm d}}} Z^{\eta}_t(\tilde y)&=\partial_y b(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})\cdot Z^{\eta}_t(\tilde y){\mathord{{\rm d}}} t+\tilde{\mathbb E}\big[\partial_\nu b(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} t \\
&\quad+\tilde{\mathbb E}\big[\partial_\nu b(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} t+\partial_y \sigma(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})\cdot Z^{\eta}_t(\tilde y) {\mathord{{\rm d}}} W_t\\
&\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y \tilde Y^{\tilde y,\nu}_t\big]{\mathord{{\rm d}}} W_t \\
&\quad+\tilde{\mathbb E}\big[\partial_\nu \sigma(Y^{\eta}_t,{\mathcal L}_{Y^\eta_t})(\tilde Y^{\tilde \eta}_t)\cdot \tilde Z^{\tilde \eta}_t(\tilde y)\big]{\mathord{{\rm d}}} W_t.
\end{align*}
By It\^{o}'s formula, we have
\begin{align*}
{\mathord{{\rm d}}} {\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p
&\leqslant\frac{p}{2}{\mathbb E}\big[\|Z^{\eta}_t(\tilde y)\|^{p-2}\cdot2\langle Z^{\eta}_t(\tilde y),\partial_y b(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})\cdot Z^{\eta}_t(\tilde y) \\
&\quad+\tilde {\mathbb E}[\partial_\nu b(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde \eta}_t)\cdot\tilde Z^{\tilde \eta}_t(\tilde y)]
+\tilde {\mathbb E}[\partial_\nu b(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y\tilde Y^{\tilde y,\nu}_t]\rangle\big]{\mathord{{\rm d}}} t\\
&\quad+\frac{p(p-1)}{2}{\mathbb E}\big[\|Z^{\eta}_t(\tilde y)\|^{p-2}\cdot\|\partial_y \sigma(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})\cdot Z^{\eta}_t(\tilde y) \\
&\quad+\tilde {\mathbb E}[\partial_\nu \sigma(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde \eta}_t)\cdot\tilde Z^{\tilde \eta}_t(\tilde y)]
+\tilde {\mathbb E}[\partial_\nu \sigma(Y^{\eta}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y\tilde Y^{\tilde y,\nu}_t]\|^2\big]{\mathord{{\rm d}}} t.
\end{align*}
By the assumption {\bf ($\hat H^{\sigma,b}$)}, we have for any $h\in{\mathbb R}^{d_2}$ and $H\in L^2(\Omega)$,
\begin{align*}
&2\langle h,\partial_yb(y,\nu)\cdot h+{\mathbb E}[\partial_\nu b(y,\nu)(\eta)\cdot H]\rangle+(p-1)\|\partial_y\sigma(y,\nu)\cdot h+{\mathbb E}[\partial_\nu\sigma(y,\nu)(\eta)\cdot H]\|^2\\
&\leqslant c_1{\mathbb E}|H|^2-c_2|h|^2,
\end{align*}
with ${\mathcal L}_\eta=\nu$. Consequently, we arrive at
\begin{align*}
{\mathord{{\rm d}}} {\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p&\leqslant-\frac{p}{2}(c_2-c_1-\gamma){\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial_yY^{\tilde y,\nu}_t\|^p{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-c_1-\gamma){\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0e^{-\frac{p}{2}c_2t}{\mathord{{\rm d}}} t,
\end{align*}
which together with the comparison theorem yields
\begin{align*}
{\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p\leqslant C_0e^{-\frac{p}{2}(c_2-c_1-\gamma)t}.
\end{align*}
In the same way we deduce
\begin{align*}
&{\mathord{{\rm d}}} {\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p\\
&\leqslant\frac{p}{2}{\mathbb E}\big[\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^{p-2}\cdot2\langle \partial_\nu Y^{y,\nu}_t(\tilde y),\partial_y b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y) \\
&\quad+\tilde {\mathbb E}[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde \eta}_t)\cdot\tilde Z^{\tilde \eta}_t(\tilde y)]
+\tilde {\mathbb E}[\partial_\nu b(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y\tilde Y^{\tilde y,\nu}_t(\tilde y)]\rangle\big]{\mathord{{\rm d}}} t\\
&\quad+\frac{p(p-1)}{2}{\mathbb E}\big[\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^{p-2}\cdot\|\partial_y \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})\cdot\partial_\nu Y^{y,\nu}_t(\tilde y) \\
&\quad+\tilde {\mathbb E}[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde \eta}_t)\cdot\tilde Z^{\tilde \eta}_t(\tilde y)]
+\tilde {\mathbb E}[\partial_\nu \sigma(Y^{y,\nu}_t,{\mathcal L}_{Y^{\eta}_t})(\tilde Y^{\tilde y,\nu}_t)\cdot\partial_y\tilde Y^{\tilde y,\nu}_t(\tilde y)]\|^2\big]{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial_yY^{\tilde y,\nu}_t\|^p{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0e^{-\frac{p}{2}(c_2-c_1-\gamma)t}{\mathord{{\rm d}}} t,
\end{align*}
and thus
\begin{align*}
{\mathbb E}\|\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p\leqslant C_0e^{-\frac{p}{2}(c_2-c_1-\gamma)t}.
\end{align*}
Similarly, we have
\begin{align*}
&{\mathord{{\rm d}}} {\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^p\\
&\leqslant-\frac{p}{2}(c_2-c_1-\gamma){\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial_yY^{\tilde y,\nu}_t\|^{2p}{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial^2_yY^{\tilde y,\nu}_t\|^p{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-c_1-\gamma){\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0e^{-\frac{p}{2}(c_2-\gamma)t}{\mathord{{\rm d}}} t,
\end{align*}
which in turn yields
\begin{align*}
{\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^p\leqslant C_0e^{-\frac{p}{2}(c_2-c_1-\gamma)t}.
\end{align*}
This further implies
\begin{align*}
{\mathord{{\rm d}}} {\mathbb E}\|\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial_{\tilde y}Z^{\eta}_t(\tilde y)\|^p{\mathord{{\rm d}}} t\\
&\quad+C_0{\mathbb E}\|\partial_yY^{\tilde y,\nu}_t\|^{2p}{\mathord{{\rm d}}} t+C_0{\mathbb E}\|\partial^2_yY^{\tilde y,\nu}_t\|^p{\mathord{{\rm d}}} t\\
&\leqslant-\frac{p}{2}(c_2-\gamma){\mathbb E}\|\partial_{\tilde y}\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p{\mathord{{\rm d}}} t+C_0e^{-\frac{p}{2}(c_2-c_1-\gamma)t}{\mathord{{\rm d}}} t,
\end{align*}
and thus the desired result is obtained. In the same way, we can prove the estimates for ${\mathbb E}\|\partial_y\partial_\nu Y^{y,\nu}_t(\tilde y)\|^p$ and
${\mathbb E}\|\partial^2_\nu Y^{y,\nu}_t(\tilde y,\breve{y})\|^p$; we omit the details here. \end{proof}
\end{document}
Kelly criterion
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet is a formula for bet sizing that leads almost surely to higher wealth compared to any other strategy in the long run (i.e. the limit as the number of bets goes to infinity). The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. The Kelly criterion prescribes betting a fixed, predetermined fraction of current assets, and the resulting bet sizes can be counterintuitive. It was described by J. L. Kelly Jr., a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]
For an even money bet, the Kelly criterion computes the wager size percentage by multiplying the percent chance to win by two, then subtracting one. So, for a bet with a 70% chance to win (probability 0.7), doubling 0.7 gives 1.4, and subtracting 1 leaves 0.4 as the optimal wager size: 40% of available funds.
In recent years, Kelly-style analysis has become a part of mainstream investment theory[5] and the claim has been made that well-known successful investors including Warren Buffett[6] and Bill Gross[7] use Kelly methods. William Poundstone wrote an extensive popular account of the history of Kelly betting.[8]
The Kelly formalism is beneficial only in comparison with alternative bet-sizing formulas: successful betting formulas do not exist, and ruin is inevitable when betting persistently. A Kelly system may merely take longer to approach ruin, or shrink its bets exponentially toward triviality, compared with alternative systems.
In one study, each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. The behavior of the test subjects was far from optimal:
Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment.[9][10]
Using the Kelly criterion and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of the pot on each toss of the coin (see first example below). If losing, the size of the bet gets cut; if winning, the stake increases. If the bettors had followed this rule (assuming that bets have infinite granularity and there are up to 300 coin tosses per game and that a player who reaches the cap would stop betting after that), an average of 94% of them would have reached the cap, and the average payout would be $237.36. (In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have even better results.)
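A minimal Monte Carlo sketch of this game in Python (the helper name, random seed, and number of simulated sessions are arbitrary choices; the 60% win probability, $25 start, $250 cap, and 300 tosses follow the description above):

import random

def play(fraction, p=0.6, tosses=300, start=25.0, cap=250.0):
    # Simulate one session of betting a fixed fraction of the bankroll.
    wealth = start
    for _ in range(tosses):
        if wealth <= 0 or wealth >= cap:
            break
        bet = fraction * wealth
        wealth += bet if random.random() < p else -bet
    return min(wealth, cap)

random.seed(1)
for f in (0.2, 0.12, 0.5):  # Kelly fraction, the cap-aware fraction, and over-betting
    results = [play(f) for _ in range(2000)]
    print(f, sum(r >= 250 for r in results) / 2000, sum(results) / 2000)

The printed columns are the betting fraction, the share of sessions that reach the cap, and the average payout.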
Statement

For simple bets with two outcomes, one involving losing the entire amount bet, and the other involving winning the bet amount multiplied by the payoff odds, the Kelly bet is:

f* = (bp − q)/b = (bp − (1 − p))/b = (p(b + 1) − 1)/b

where:

f* is the fraction of the current bankroll to wager, i.e. how much to bet;
b is the net odds received on the wager ("b to 1"), that is, you could win $b (on top of getting back the wagered $1) for a $1 bet;
p is the probability of winning;
q is the probability of losing, which is 1 − p.

As an example, if a gamble has a 60% chance of winning (p = 0.60, q = 0.40), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then the gambler should bet 20% of the bankroll at each opportunity (f* = 0.20), in order to maximize the long-run growth rate of the bankroll.

If the gambler has zero edge, i.e. if b = q/p, then the criterion recommends for the gambler to bet nothing.

If the edge is negative (b < q/p) the formula gives a negative result, indicating that the gambler should take the other side of the bet. For example, in American roulette, the bettor is offered an even money payoff (b = 1) on red, when there are 18 red numbers and 20 non-red numbers on the wheel (p = 18/38). The Kelly bet is −1/19, meaning the gambler should bet one-nineteenth of their bankroll that red will not come up. There is no explicit anti-red bet offered with comparable odds in roulette, so the best a Kelly gambler can do is bet nothing.

The top of the first fraction is the expected net winnings from a $1 bet, since the two outcomes are that you either win $b with probability p, or lose the $1 wagered, i.e. win $−1, with probability q. Hence:

f* = (expected net winnings) / (net winnings if you win)

For even-money bets (i.e. when b = 1), the first formula can be simplified to:

f* = p − q.

Since q = 1 − p, this simplifies further to

f* = 2p − 1.
A more general problem relevant for investment decisions is the following:

The probability of success is p.
If you succeed, the value of your investment increases from 1 to 1 + b.
If you fail (for which the probability is q = 1 − p) the value of your investment decreases from 1 to 1 − a. (Note that the previous description above assumes that a is 1.)

In this case, as is proved in the next section, the Kelly criterion turns out to be the relatively simple expression

f* = p/a − q/b.

Note that this reduces to the original expression for the special case above (f* = p − q) for b = a = 1.

Clearly, in order to decide in favor of investing at least a small amount (f* > 0), you must have

pb > qa,

which obviously is nothing more than the fact that the expected profit must exceed the expected loss for the investment to make any sense.

The general result clarifies why leveraging (taking out a loan that requires paying interest in order to raise investment capital) decreases the optimal fraction to be invested, as in that case a > 1. Obviously, no matter how large the probability of success, p, is, if a is sufficiently large, the optimal fraction to invest is zero. Thus, using too much margin is not a good investment strategy when the cost of capital is high, even when the opportunity appears promising.
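The general fraction is simple to compute directly; a small sketch (the function name and the example numbers are hypothetical):

def kelly_fraction(p, b, a=1.0):
    # General Kelly fraction f* = p/a - q/b for a bet that multiplies the
    # stake by (1 + b) on success and by (1 - a) on failure.
    q = 1.0 - p
    return p / a - q / b

print(kelly_fraction(p=0.6, b=1.0))         # 0.2, the even-money example above
print(kelly_fraction(p=0.6, b=1.0, a=0.5))  # 0.8: a loss costs only half the stake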
Proof

Heuristic proofs of the Kelly criterion are straightforward.[11] The Kelly criterion maximizes the expected value of the logarithm of wealth (the expectation value of a function is given by the sum, over all possible outcomes, of the probability of each particular outcome multiplied by the value of the function in the event of that outcome). We start with 1 unit of wealth and bet a fraction f of that wealth on an outcome that occurs with probability p and offers odds of b. The probability of winning is p, and in that case the resulting wealth is equal to 1 + fb. The probability of losing is 1 − p, and in that case the resulting wealth is equal to 1 − f. Therefore, the expected value for log wealth (E) is given by:

E = p log(1 + fb) + (1 − p) log(1 − f)

To find the value of f for which the expectation value is maximized, denoted as f*, we differentiate the above expression and set this equal to zero. This gives:

dE/df* = pb/(1 + f*b) − (1 − p)/(1 − f*) = 0

Rearranging this equation to solve for the value of f* gives the Kelly criterion:

f* = (pb + p − 1)/b

For a rigorous and general proof, see Kelly's original paper[1] or some of the other references listed below. Some corrections have been published.[12]

We give the following non-rigorous argument for the case with b = 1 (a 50:50 "even money" bet) to show the general idea and provide some insights.[1]
When b = 1, a Kelly bettor bets 2p − 1 times their initial wealth W, as shown above. If they win, they have 2pW after one bet. If they lose, they have 2(1 − p)W. Suppose they make N bets like this, and win K times out of this series of N bets. The resulting wealth will be:

2^N p^K (1 − p)^(N − K) W.

Note that the ordering of the wins and losses does not affect the resulting wealth.

Suppose another bettor bets a different amount, (2p − 1 + Δ)W for some value of Δ (where Δ may be positive or negative). They will have (2p + Δ)W after a win and [2(1 − p) − Δ]W after a loss. After the same series of wins and losses as the Kelly bettor, they will have:

(2p + Δ)^K [2(1 − p) − Δ]^(N − K) W

Take the derivative of this with respect to Δ and get:

K(2p + Δ)^(K − 1) [2(1 − p) − Δ]^(N − K) W − (N − K)(2p + Δ)^K [2(1 − p) − Δ]^(N − K − 1) W

The function is maximized when this derivative is equal to zero, which occurs at:

K[2(1 − p) − Δ] = (N − K)(2p + Δ)

which implies that

Δ = 2(K/N − p)

but the proportion of winning bets will eventually converge to:

lim_{N→+∞} K/N = p

according to the weak law of large numbers.

So in the long run, final wealth is maximized by setting Δ to zero, which means following the Kelly strategy.

This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the Kth win knowing that the rest of the bets will lose), one will end up with the most money if one bets:

(2K/N − 1) W

each time. This is true whether N is small or large. The "long run" part of Kelly is necessary because K is not known in advance, just that as N gets large, K will approach pN. Someone who bets more than Kelly can do better if K > pN for a stretch; someone who bets less than Kelly can do better if K < pN for a stretch, but in the long run, Kelly always wins.
The heuristic proof for the general case proceeds as follows.[citation needed]
In a single trial, if you invest the fraction f of your capital, if your strategy succeeds, your capital at the end of the trial increases by the factor 1 − f + f(1 + b) = 1 + fb, and, likewise, if the strategy fails, you end up having your capital decreased by the factor 1 − fa. Thus at the end of N trials (with pN successes and qN failures), the starting capital of $1 yields

C_N = (1 + fb)^(pN) (1 − fa)^(qN).

Maximizing log(C_N)/N, and consequently C_N, with respect to f leads to the desired result

f* = p/a − q/b.

Edward O. Thorp provided a more detailed discussion of this formula for the general case.[13] There, it can be seen that the substitution of p for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since p is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting f* each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and p and b are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, pN successes and qN failures are highly likely only for very large N.
Using Python and SymPy

For a symbolic verification with Python and SymPy one would set the derivative y′(x) of the expected value of the logarithmic bankroll y(x) to 0 and solve for x:
>>> from sympy import *
>>> x, b, p = symbols('x b p')
>>> y = p * log(1 + b * x) + (1 - p) * log(1 - x)
>>> solve(diff(y, x), x)
[-(1 - p - b * p) / b]
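As a quick numerical check in the same SymPy session (the substitution values are just the even-money example from above):

>>> solve(diff(y, x), x)[0].subs({b: 1, p: Rational(6, 10)})
1/5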
Bernoulli
In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is entirely different (Bernoulli wanted to resolve the St. Petersburg paradox).
An English-language translation of the Bernoulli article was not published until 1954,[14] but the work was well-known among mathematicians and economists.
Multiple outcomes

Kelly's criterion may be generalized[15] to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes. The probability that the k-th horse wins the race is p_k, the total amount of bets placed on the k-th horse is B_k, and

β_k = B_k / Σ_i B_i = 1/(1 + Q_k),

where Q_k are the pay-off odds. D = 1 − tt is the dividend rate, where tt is the track take or tax, and D/β_k is the revenue rate after deduction of the track take when the k-th horse wins. The fraction of the bettor's funds to bet on the k-th horse is f_k. Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set S^o of outcomes on which it is reasonable to bet, and it gives an explicit formula for finding the optimal fractions f_k^o of the bettor's wealth to be bet on the outcomes included in the optimal set S^o. The algorithm for the optimal set of outcomes consists of four steps.[15]

Step 1: Calculate the expected revenue rate for all possible (or only for several of the most promising) outcomes:

er_k = (D/β_k) p_k = D(1 + Q_k) p_k.

Step 2: Reorder the outcomes so that the new sequence er_k is non-increasing. Thus er_1 will be the best bet.

Step 3: Set S = ∅ (the empty set), k = 1, R(S) = 1. Thus the best bet er_k = er_1 will be considered first.

Step 4: Repeat: If er_k = (D/β_k) p_k > R(S) then insert the k-th outcome into the set: S = S ∪ {k}, recalculate R(S) according to the formula

R(S) = (1 − Σ_{i∈S} p_i) / (1 − Σ_{i∈S} β_i/D)

and then set k = k + 1. Otherwise, set S^o = S and stop the repetition.

If the optimal set S^o is empty then do not bet at all. If the set S^o of optimal outcomes is not empty, then the optimal fraction f_k^o to bet on the k-th outcome may be calculated from this formula:

f_k^o = (er_k − R(S^o)) / (D/β_k) = p_k − R(S^o) / (D/β_k).

One may prove[15] that

R(S^o) = 1 − Σ_{i∈S^o} f_i^o,

where the right-hand side is the reserve rate[clarification needed]. Therefore the requirement er_k = (D/β_k) p_k > R(S) may be interpreted[15] as follows: the k-th outcome is included in the set S^o of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate. The formula for the optimal fraction f_k^o may be interpreted as the excess of the expected revenue rate of the k-th horse over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins, or as the excess of the probability of the k-th horse winning over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins. The binary growth exponent is

G^o = Σ_{i∈S} p_i log2(er_i) + (1 − Σ_{i∈S} p_i) log2(R(S^o)),

and the doubling time is

T_d = 1/G^o.

This method of selection of optimal bets may be applied also when probabilities p_k are known only for several of the most promising outcomes, while the remaining outcomes have no chance to win. In this case it must be that

Σ_i p_i < 1 and Σ_i β_i < 1.
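The four steps translate directly into code. A sketch of the procedure described above (the function name and the three-outcome example are hypothetical; no validation of the inputs is attempted):

def kelly_horse_fractions(p, beta, D=1.0):
    # p[k]: probability that outcome k wins; beta[k]: fraction of the pool on k;
    # D: dividend rate (1 minus the track take).
    er = {k: D / beta[k] * p[k] for k in p}                  # Step 1
    order = sorted(p, key=lambda k: er[k], reverse=True)     # Step 2
    S, R = [], 1.0                                           # Step 3
    for k in order:                                          # Step 4
        if er[k] > R:
            S.append(k)
            R = (1 - sum(p[i] for i in S)) / (1 - sum(beta[i] / D for i in S))
        else:
            break
    fractions = {k: p[k] - R * beta[k] / D for k in S}
    return S, fractions

p = {"A": 0.5, "B": 0.3, "C": 0.2}
beta = {"A": 0.6, "B": 0.25, "C": 0.15}
print(kelly_horse_fractions(p, beta, D=0.85))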
Application to the stock market
In mathematical finance, a portfolio is called growth optimal if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth).[citation needed]
Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems.[citation needed] For example, the cases below take as given the expected return and covariance structure of various assets, but these parameters are at best estimated or modeled with significant uncertainty. Ex-post performance of a supposed growth optimal portfolio may differ fantastically with the ex-ante prediction if portfolio weights are largely driven by estimation error. Dealing with parameter uncertainty and estimation error is a large topic in portfolio theory.[citation needed]
The second-order Taylor polynomial can be used as a good approximation of the main criterion. Primarily, it is useful for stock investment, where the fraction devoted to investment is based on simple characteristics that can be easily estimated from existing historical data – expected value and variance. This approximation leads to results that are robust and offer similar results as the original criterion.[16]
Single asset

Considering a single asset (stock, index fund, etc.) and a risk-free rate, it is easy to obtain the optimal fraction to invest through geometric Brownian motion. The value of a lognormally distributed asset S at time t (S_t) is

S_t = S_0 exp((μ − σ^2/2) t + σ W_t),

from the solution of the geometric Brownian motion, where W_t is a Wiener process, and μ (the percentage drift) and σ (the percentage volatility) are constants. Taking expectations of the logarithm:

E log(S_t) = log(S_0) + (μ − σ^2/2) t.

Then the expected log return R_s is

R_s = (μ − σ^2/2) t.

For a portfolio made of an asset S and a bond paying risk-free rate r, with fraction f invested in S and (1 − f) in the bond, the expected one-period return is given by

E(f (S_1/S_0 − 1) + (1 − f) r) = E(f exp((μ − σ^2/2) + σ W_1)) + (1 − f) r;

however, in the context of Kelly one deals instead with the expected one-period log return G(f):

G(f) = f μ − (f σ)^2/2 + (1 − f) r.

Solving max(G(f)) we obtain

f* = (μ − r)/σ^2,

which is the fraction that maximizes the expected logarithmic return, and so is the Kelly fraction.

Thorp[13] arrived at the same result but through a different derivation.

Remember that μ is different from the asset log return R_s. Confusing this is a common mistake made by websites and articles talking about the Kelly criterion.
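In code the continuous-time fraction is a one-liner (the drift, volatility, and risk-free rate below are hypothetical):

def kelly_gbm_fraction(mu, r, sigma):
    # Kelly fraction f* = (mu - r) / sigma**2 for a lognormal asset.
    return (mu - r) / sigma ** 2

print(kelly_gbm_fraction(mu=0.08, r=0.02, sigma=0.20))  # 1.5, i.e. a leveraged position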
Many assets

Consider a market with n correlated stocks S_k with stochastic returns r_k, k = 1, ..., n, and a riskless bond with return r. An investor puts a fraction u_k of their capital in S_k and the rest is invested in the bond. Without loss of generality, assume that the investor's starting capital is equal to 1. According to the Kelly criterion one should maximize

E[ln((1 + r) + Σ_{k=1}^n u_k (r_k − r))].

Expanding this with a Taylor series around u_0 = (0, ..., 0) gives

E[ln(1 + r) + Σ_{k=1}^n u_k (r_k − r)/(1 + r) − (1/2) Σ_{k=1}^n Σ_{j=1}^n u_k u_j (r_k − r)(r_j − r)/(1 + r)^2].

Thus we reduce the optimization problem to quadratic programming, and the unconstrained solution is

u* = (1 + r) (Σ̂)^(−1) (r̂ − r),

where r̂ and Σ̂ are the vector of means and the matrix of second mixed noncentral moments of the excess returns.

There is also a numerical algorithm for the fractional Kelly strategies and for the optimal solution under no leverage and no short selling constraints.[17]
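A minimal NumPy sketch of the unconstrained solution (the synthetic return sample and the plug-in moment estimates are illustrative assumptions, not part of the cited algorithm):

import numpy as np

def kelly_weights(excess_returns, r):
    # excess_returns: array of shape (T, n) holding samples of r_k - r.
    # r_hat is the mean excess return (the (r_hat - r) term in the formula above)
    # and sigma_hat the matrix of second mixed noncentral moments.
    x = np.asarray(excess_returns)
    r_hat = x.mean(axis=0)
    sigma_hat = x.T @ x / x.shape[0]
    return (1 + r) * np.linalg.solve(sigma_hat, r_hat)

rng = np.random.default_rng(0)
excess = rng.multivariate_normal([0.05, 0.03], [[0.04, 0.01], [0.01, 0.09]], size=1000)
print(kelly_weights(excess, r=0.01))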
Criticism
Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for optimal growth rate.[8] The conventional alternative is expected utility theory which says bets should be sized to maximize the expected utility of the outcome (to an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times[1]). Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations.[18]
Risk of ruin
Gambling and information theory
Proebsting's paradox
^ a b c d Kelly, J. L. (1956). "A New Interpretation of Information Rate" (PDF). Bell System Technical Journal. 35 (4): 917–926. doi:10.1002/j.1538-7305.1956.tb03809.x.
^ Thorp, E. O. (January 1961), "Fortune's Formula: The Game of Blackjack", American Mathematical Society
^ Thorp, E. O. (1962), Beat the dealer: a winning strategy for the game of twenty-one. A scientific analysis of the world-wide game known variously as blackjack, twenty-one, vingt-et-un, pontoon or Van John, Blaisdell Pub. Co
^ Thorp, Edward O.; Kassouf, Sheen T. (1967), Beat the Market: A Scientific Stock Market System (PDF), Random House, ISBN 0-394-42439-5, archived from the original (PDF) on 2009-10-07 [page needed]
^ Zenios, S. A.; Ziemba, W. T. (2006), Handbook of Asset and Liability Management, North Holland, ISBN 978-0-444-50875-1
^ Pabrai, Mohnish (2007), The Dhandho Investor: The Low-Risk Value Method to High Returns, Wiley, ISBN 978-0-470-04389-9
^ Thorp, E. O. (September 2008), "The Kelly Criterion: Part II", Wilmott Magazine
^ a b Poundstone, William (2005), Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street, New York: Hill and Wang, ISBN 0-8090-4637-7
^ https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2856963
^ "Buttonwood", "Irrational tossers", The Economist Newspaper Limited 2016, Nov 1st 2016.
^ Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007), "Section 14.7 (Example 2.)", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8
^ Thorp, E. O. (1969). "Optimal Gambling Systems for Favorable Games". Revue de l'Institut International de Statistique / Review of the International Statistical Institute. International Statistical Institute (ISI). 37 (3): 273–293. JSTOR 1402118. MR 0135630.
^ a b Thorp, Edward O. (June 1997). "The Kelly criterion in blackjack, sports betting, and the stock market" (PDF). 10th International Conference on Gambling and Risk Taking. Montreal. Archived from the original (PDF) on 2009-03-20. Retrieved 2009-03-20.
^ Bernoulli, Daniel (1954) [1738]. "Exposition of a New Theory on the Measurement of Risk". Econometrica. The Econometric Society. 22 (1): 22–36. JSTOR 1909829.
^ a b c d Smoczynski, Peter; Tomkins, Dave (2010), "An explicit solution to the problem of optimizing the allocations of a bettor's wealth when wagering on horse races", Mathematical Scientist, 35 (1), 10-17
^ Marek, Patrice; Ťoupal, Tomáš; Vávra, František (2016). "Efficient Distribution of Investment Capital". 34th International Conference Mathematical Methods in Economics, MME2016, Conference Proceedings: 540–545. Retrieved 24 January 2018.
^ Nekrasov, Vasily (2013). "Kelly Criterion for Multivariate Portfolios: A Model-Free Approach".
^ Thorp, E. O. (May 2008), "The Kelly Criterion: Part I", Wilmott Magazine
\begin{document}
\title{Geometric Control of a Quadrotor UAV Transporting a Payload Connected via Flexible Cable}
\author{Farhad A. Goodarzi, Daewon Lee, and Taeyoung Lee
\thanks{Farhad A. Goodarzi, Daewon Lee, and Taeyoung Lee are with the School of Mechanical and aerospace
Engineering, The George Washington University, Washington DC 20052
(e-mail: \{fgoodarzi,daewonlee,tylee\}@gwu.edu).}
\thanks{\textsuperscript{\footnotesize\ensuremath{*}}This research has been supported in part by NSF under the grants CMMI-1243000 (transferred from 1029551), CMMI-1335008, and CNS-1337722. } }
\begin{abstract} We derive a coordinate-free form of the equations of motion for a complete model of a quadrotor UAV with a payload which is connected via a flexible cable, according to Lagrangian mechanics on a manifold. The flexible cable is modeled as a system of serially-connected links and is incorporated in the full dynamic model. A geometric nonlinear control system is presented to exponentially stabilize the position of the quadrotor while aligning the links to the vertical direction below the quadrotor. Numerical simulation and experimental results are presented, and a rigorous stability analysis is provided to confirm the accuracy of our derivations. These results will be particularly useful for aggressive load transportation that involves large deformation of the cable.
\end{abstract}
\begin{keywords} Geometric Nonlinear Control, Flexible Cable, Load Stabilization, Stability Analysis \end{keywords}
\maketitle
\newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma}
\section{Introduction}
Unmanned aerial vehicles (UAVs) have been studied for various applications, such as surveillance and mobile sensor networks, as well as for educational purposes. Quadrotors are one class of UAV that is particularly popular due to its dynamic simplicity, maneuverability, and high performance.
Aerial transportation of a cable-suspended load has traditionally been studied for helicopters~\cite{CicKanJAHS95,BerPICRA09}. Recently, small-size single or multiple autonomous vehicles have been considered for load transportation and deployment~\cite{PalCruIRAM12,MicFinAR11,MazKonJIRS10,MelShoDARSSTAR13}, and trajectories with minimum swing and oscillation of the payload have been generated~\cite{ZamStaJDSMC08,SchMurIICRA12,PalFieIICRA12}.
Safe cooperative transportation of possibly large or bulky payloads is extremely important in various missions, such as military operations, search and rescue, Mars surface exploration, and personal assistance. However, these results are based on the common and restrictive assumption that the cable connecting the payload to the quadrotor UAV is always taut and rigid. Also, the dynamics of the cable and payload are ignored, and they are considered as bounded disturbances to the transporting vehicle. Therefore, these approaches cannot be applied to aggressive, rapid load transportation where the cable is deformed or the tension along the cable is low, thereby restricting their applicability. As such, it is impossible to guarantee safe operations.
It is challenging to incorporate the effects of a deformable cable, since the dimension of the configuration space becomes infinite. Finite element approximation of a cable often yields complicated equations of motion that make dynamic analysis and controller design extremely difficult.
Recently, a coordinate-free form of the equations of motion for a chain pendulum connected to a cart that moves on a horizontal plane was presented according to Lagrangian mechanics on a manifold~\cite{LeeLeoPICDC12}. This paper is an extension of the prior work of the authors in~\cite{FarhadDTLee}. Following a similar approach, in this paper, the cable is modeled as an arbitrary number of links with different sizes and masses that are serially-connected by spherical joints, as illustrated in Figure \ref{fig:Quad}. The resulting configuration manifold is the product of the special Euclidean group for the position and the attitude of the quadrotor, and a number of two-spheres that describe the direction of each link. We present the Euler-Lagrange equations of this quadrotor model, which are globally defined on the nonlinear configuration manifold.
\begin{figure}
\caption{Quadrotor UAV with a cable-suspended load. The cable is modeled as a serial connection of an arbitrary number of links (only 5 are illustrated).}
\label{fig:Quad}
\end{figure}
The second part of this paper deals with nonlinear control system development. A quadrotor UAV is under-actuated, as the direction of the total thrust is always fixed relative to its body. By utilizing geometric control systems for quadrotors~\cite{LeeLeoPICDC10,LeeLeoAJC13,GooLeePECC13}, we show that the hanging equilibrium of the links can be asymptotically stabilized while translating the quadrotor to a desired position. In contrast to existing papers, where the force and the moment exerted by the payload on the quadrotor are considered as disturbances, the control systems proposed in this paper explicitly consider the coupling effects between the cable/load dynamics and the quadrotor dynamics.
Another distinct feature is that the equations of motion and the control systems are developed directly on the nonlinear configuration manifold in a coordinate-free fashion. This yields remarkably compact expressions for the dynamic model and controllers, compared with local coordinates that often require symbolic computational tools due to complexity of multibody systems. Furthermore, singularities of local parameterization are completely avoided to generate agile maneuvers in a uniform way.
Compared with preliminary results in~\cite{GooLeePACC14}, this paper presents a rigorous Lyapunov stability analysis to establish stability properties without any timescale separation assumptions or singular perturbation, and a new nonlinear integral control term is designed to guarantee robustness against unstructured uncertainties in both rotational and translational dynamics. In short, the main contribution of this paper is presenting a nonlinear dynamic model and a control system for a quadrotor UAV with a cable-suspended load, that explicitly incorporate the effects of deformable cable.
This paper is organized as follows. A dynamic model is presented in Section 2, and control systems are developed in Sections 3 and 4. The desirable properties of the proposed control system are illustrated by a numerical example in Section 5, followed by experimental results in Section 6.
\section{Dynamic Model of a Quadrotor with a Flexible Cable}\label{sec:DM}
Consider a quadrotor UAV with a payload that is connected via a chain of $n$ links, as illustrated at Figure \ref{fig:Quad}. The inertial frame is defined by the unit vectors $e_{1}=[1;0;0]$, $e_{2}=[0;1;0]$, and $e_{3}=[0;0;1]\in \ensuremath{\mathbb{R}}^{3}$, and the third axis $e_{3}$ corresponds to the direction of gravity. Define a body-fixed frame $\{\vec{b}_{1},\vec{b}_{2},\vec{b}_{3}\}$ whose origin is located at the center of mass of the quadrotor, and its third axis $\vec b_3$ is aligned to the axis of symmetry.
The location of the mass center, and the attitude of the quadrotor are denoted by $x\in\ensuremath{\mathbb{R}}^3$ and $R\in\ensuremath{\mathsf{SO(3)}}$, respectively, where the special orthogonal group is $\ensuremath{\mathsf{SO(3)}}=\{R\in\ensuremath{\mathbb{R}}^{3\times 3}\,|\, R^T R=I_{3\times 3},\;\mathrm{det}[R]=1\}$. A rotation matrix represents the linear transformation of a representation of a vector from the body-fixed frame to the inertial frame.
The dynamic model of the quadrotor is identical to that of~\cite{LeeLeoPICDC10}. The mass and the inertia matrix of the quadrotor are denoted by $m\in\ensuremath{\mathbb{R}}$ and $J\in\ensuremath{\mathbb{R}}^{3\times 3}$, respectively. The quadrotor can generate a thrust $-fRe_3\in\ensuremath{\mathbb{R}}^3$ with respect to the inertial frame, where $f\in\ensuremath{\mathbb{R}}$ is the total thrust magnitude. It also generates a moment $M\in\ensuremath{\mathbb{R}}^3$ with respect to its body-fixed frame. The pair $(f,M)$ is considered as the control input of the quadrotor.
Let $q_i\in\ensuremath{\mathsf{S}}^2$ be the unit-vector representing the direction of the $i$-th link, measured outward from the quadrotor toward the payload, where the two-sphere is the manifold of unit-vectors in $\ensuremath{\mathbb{R}}^3$, i.e., $\ensuremath{\mathsf{S}}^2=\{q\in\ensuremath{\mathbb{R}}^3\,|\, \|q\|=1\}$. For simplicity, we assume that the mass of each link is concentrated at the outboard end of the link, and the point where the first link is attached to the quadrotor corresponds to the mass center of the quadrotor. The mass and the length of the $i$-th link are defined by $m_i$ and $l_i\in\ensuremath{\mathbb{R}}$, respectively. Thus, the mass of the payload corresponds to $m_n$. The corresponding configuration manifold of this system is given by $\ensuremath{\mathsf{SO(3)}}\times\ensuremath{\mathbb{R}}^3\times (\ensuremath{\mathsf{S}}^2)^n$.
Next, we show the kinematics equations. Let $\Omega\in\ensuremath{\mathbb{R}}^3$ be the angular velocity of the quadrotor represented with respect to the body fixed frame, and let $\omega_i\in\ensuremath{\mathbb{R}}^3$ be the angular velocity of the $i$-th link represented with respect to the inertial frame. The angular velocity is normal to the direction of the link, i.e., $q_i\cdot\omega_i=0$. The kinematics equations are given by \begin{align} \dot R & = R\hat\Omega,\label{eqn:Rdot}\\ \dot{q}_{i} & =\omega_{i}\times q_{i}=\hat{\omega}_{i}q_{i},\label{eqn:qidot} \end{align} where the hat map $\hat\cdot:\ensuremath{\mathbb{R}}^3\rightarrow\ensuremath{\mathfrak{so}(3)}$ is defined by the condition that $\hat x y =x\times y$ for any $x,y\in\ensuremath{\mathbb{R}}^3$, and it transforms a vector in $\ensuremath{\mathbb{R}}^{3}$ to a $3\times 3$ skew-symmetric matrix. More explicitly, it is given by \begin{align}\label{LinEOM} \hat{x}= \begin{bmatrix} 0&-x_{3}&x_{2}\\ x_{3}&0&-x_{1}\\ -x_{2}&x_{1}&0 \end{bmatrix}, \end{align} for $x=[x_{1},x_{2},x_{3}]^{T}\in \ensuremath{\mathbb{R}}^{3}$. The inverse of the hat map is denoted by the \textit{vee} map $\vee:\ensuremath{\mathfrak{so}(3)}\rightarrow\ensuremath{\mathbb{R}}^3$.
Throughout this paper, the 2-norm of a matrix $A$ is denoted by $\|A\|$, and the dot product is denoted by $x \cdot y = x^Ty$. Also, $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the minimum and maximum eigenvalues of a square matrix, respectively, and $\lambda_{m}$ and $\lambda_{M}$ are shorthand for $\lambda_{m}=\lambda_{\min}(J)$ and $\lambda_{M}=\lambda_{\max}(J)$.
\subsection{Lagrangian}
We derive the equations of motion according to Lagrangian mechanics. The kinetic energy of the quadrotor is given by \begin{align}
T_Q = \frac{1}{2}m\|\dot x\|^2 + \frac{1}{2} \Omega\cdot J\Omega.\label{eqn:TQ} \end{align} Let $x_i\in\ensuremath{\mathbb{R}}^3$ be the location of $m_i$ in the inertial frame. It can be written as \begin{align}\label{posvec33} x_{i}=x+\sum^{i}_{a=1}{l_{a}q_{a}}. \end{align} Then, the kinetic energy of the links is given by \begin{align}
T_L & = \frac{1}{2} \sum_{i=1}^n m_i \|\dot x+\sum^{i}_{a=1}{l_{a}\dot q_{a}}\|^2\nonumber\\
& = \frac{1}{2}\sum_{i=1}^n m_i \|\dot x\|^2 + \dot x\cdot \sum_{i=1}^n\sum_{a=i}^n m_a l_i \dot q_i
+\frac{1}{2}\sum_{i=1}^n m_i \|\sum_{a=1}^i l_a \dot q_a\|^2. \label{eqn:TL} \end{align} From \refeqn{TQ} and \refeqn{TL}, the total kinetic energy can be written as \begin{align}
T & =\frac{1}{2}M_{00}\|\dot{x}\|^{2}+\dot{x}\cdot\sum^{n}_{i=1}{M_{0i}\dot{q}_{i}}+\frac{1}{2}\sum^{n}_{i,j=1}{M_{ij}\dot{q}_{i}\cdot\dot{q}_{j}}\nonumber\\ &\quad +\frac{1}{2}\Omega^{T}J\Omega,\label{eqn:KE} \end{align} where the inertia values $M_{00},M_{0i},M_{ij}\in\ensuremath{\mathbb{R}}$ are given by \begin{gather} M_{00}=m+\sum_{i=1}^n m_i,\quad M_{0i}=\sum_{a=i}^n m_a l_i,\quad M_{i0}=M_{0i},\nonumber\\ M_{ij}=\braces{\sum_{a=\max\{i,j\}}^n m_a} l_i l_j,\label{eqn:Mij} \end{gather} for $1\leq i,j\leq n$. The gravitational potential energy is given by \begin{align} V & = -mgx\cdot e_3 - \sum_{i=1}^n m_i g x_i\cdot e_3\nonumber\\ & = -\sum^{n}_{i=1}\sum^{n}_{a=i}m_{a}gl_{i}e_{3}\cdot q_{i}-M_{00}ge_{3}\cdot x,\label{eqn:PE} \end{align} From \refeqn{KE} and \refeqn{PE}, the Lagrangian is $L=T-V$.
\subsection{Euler-Lagrange equations} Coordinate-free form of Lagrangian mechanics on the two-sphere $\ensuremath{\mathsf{S}}^2$ and the special orthogonal group $\ensuremath{\mathsf{SO(3)}}$ for various multibody systems has been studied in~\cite{Lee08,LeeLeoIJNME08}. The key idea is representing the infinitesimal variation of $R\in\ensuremath{\mathsf{SO(3)}}$ in terms of the exponential map \begin{align}
\delta R = \frac{d}{d\epsilon}\bigg|_{\epsilon = 0} \exp R(\epsilon \hat\eta) = R\hat\eta,\label{eqn:delR} \end{align} for $\eta\in\ensuremath{\mathbb{R}}^3$. The corresponding variation of the angular velocity is given by $\delta\Omega=\dot\eta+\Omega\times\eta$. Similarly, the infinitesimal variation of $q_i\in\ensuremath{\mathsf{S}}^2$ is given by \begin{align} \delta q_i = \xi_i\times q_i,\label{eqn:delqi} \end{align} for $\xi_i\in\ensuremath{\mathbb{R}}^3$ satisfying $\xi_i\cdot q_i=0$. This lies in the tangent space as it is perpendicular to $q_{i}$. Using these, we obtain the following Euler-Lagrange equations. \begin{prop}\label{prop:FDM} Consider a quadrotor with a cable suspended payload whose Lagrangian is given by \refeqn{KE} and \refeqn{PE}. The Euler-Lagrange equations on $\ensuremath{\mathbb{R}}^3\times\ensuremath{\mathsf{SO(3)}}\times(\ensuremath{\mathsf{S}}^2)^n$ are as follows \begin{gather} M_{00}\ddot{x}+\sum^{n}_{i=1}{M_{0i}\ddot{q}_{i}}=-fRe_{3}+M_{00}ge_{3}+\Delta_{x},\label{eqn:xddot}\\ M_{ii}\ddot q_i -\hat q_i^2 (M_{i0}\ddot x + \sum_{\substack{j=1\\j\neq i}}^n M_{ij}\ddot q_j)\nonumber\\
=- M_{ii}\|\dot q_i\|^2 q_i-\sum_{a=i}^n m_a gl_i\hat q_i^2 e_3,\label{eqn:qddot}\\ J\dot{\Omega}+\hat{\Omega}J\Omega=M+\Delta_{R},\label{eqn:Wdot} \end{gather} where $M_{ij}$ is defined at \refeqn{Mij}. Therefore $\Delta_{x}$ and $\Delta_{R}\in\ensuremath{\mathbb{R}}^3$ are fixed disturbances applied to the translational and rotational dynamics of the quadrotor respectively. Equations \refeqn{xddot} and \refeqn{qddot} can be rewritten in a matrix form as follows: \begin{align} &\begin{bmatrix}
M_{00} & M_{01} & M_{02} & \cdots & M_{0n} \\
-\hat q_1^2 M_{10} & M_{11}I_{3} & -M_{12} \hat q_1^2 & \cdots & -M_{1n}\hat q_1^2\\%
-\hat q_2^2 M_{20} & -M_{21} \hat q_2^2 & M_{22} I_{3} & \cdots & -M_{2n} \hat q_2^2\\%
\vdots & \vdots & \vdots & & \vdots\\
-\hat q_n^2 M_{n0} & -M_{n1} \hat q_n^2 & -M_{n2}\hat q_n^2 & \cdots & M_{nn} I_{3}
\end{bmatrix}
\begin{bmatrix}
\ddot x \\ \ddot q_1 \\ \ddot q_2 \\ \vdots \\ \ddot q_n
\end{bmatrix}\nonumber\\
&= \begin{bmatrix}
-fRe_3 +M_{00}ge_3+\Delta_{x}\\
-\|\dot q_1\|^2M_{11} q_1 -\sum_{a=1}^n m_a gl_1\hat q_1^2 e_3\\
-\|\dot q_2\|^2M_{22} q_2 -\sum_{a=2}^n m_a gl_2\hat q_2^2 e_3\\
\vdots\\
-\|\dot q_n\|^2M_{nn} q_n - m_n gl_n\hat q_n^2 e_3
\end{bmatrix}.\label{eqn:ELm} \end{align} Or equivalently, it can be written in terms of the angular velocities as \begin{gather} \begin{bmatrix}
M_{00} & -M_{01}\hat q_1 & -M_{02}\hat q_2 & \cdots & -M_{0n}\hat q_n\\
\hat q_1 M_{10} & M_{11}I_{3} & -M_{12} \hat q_1 \hat q_2 & \cdots & -M_{1n}\hat q_1 \hat q_n\\%
\hat q_2 M_{20} &-M_{21} \hat q_2\hat q_1 & M_{22} I_{3} & \cdots & -M_{2n} \hat q_2 \hat q_n\\%
\vdots & \vdots & \vdots & & \vdots\\
\hat q_n M_{n0} &-M_{n1} \hat q_n \hat q_1 & -M_{n2}\hat q_n \hat q_2 & \cdots & M_{nn} I_{3}
\end{bmatrix}
\begin{bmatrix}
\ddot x \\ \dot \omega_1 \\ \dot \omega_2 \\ \vdots \\ \dot \omega_n
\end{bmatrix}\nonumber\\
=
\begin{bmatrix}
\sum_{j=1}^n M_{0j} \|\omega_j\|^2 q_j-fRe_3+M_{00}ge_3+\Delta_{x}\\
\sum_{j=2}^n M_{1j}\|\omega_j\|^2\hat q_1 q_j +\sum_{a=1}^n m_a gl_1\hat q_1 e_3\\
\sum_{j=1,j\neq 2}^n M_{2j}\|\omega_j\|^2\hat q_2 q_j +\sum_{a=2}^n m_a gl_2\hat q_2 e_3\\
\vdots\\
\sum_{j=1}^{n-1} M_{nj}\|\omega_j\|^2\hat q_n q_j + m_n gl_n\hat q_n e_3\\
\end{bmatrix},\label{eqn:ELwm}\\ \dot q_i = \omega_i\times q_i.\label{eqn:ELwm2} \end{gather} \end{prop} \begin{proof} See Appendix \ref{sec:PfFDM} \end{proof} These provide a coordinate-free form of the equations of motion for the presented quadrotor UAV that is uniformly defined for any number of links $n$, and that is globally defined on the nonlinear configuration manifold. Compared with equations of motion derived in terms of local coordinates, such as Euler-angles, these avoid singularities completely, and they provide a compact form of equations that are suitable for control system design.
\section{Control System Design for a Simplified Dynamic Model}\label{sec:SDM}
\subsection{Control Problem Formulation}
Let $x_d\in\ensuremath{\mathbb{R}}^3$ be a fixed desired location of the quadrotor UAV. Assuming that all of the links are pointing downward, i.e., $q_i=e_3$, the resulting location of the payload is given by \begin{align} x_n=x_d +\sum_{i=1}^n l_i e_3. \end{align} We wish to design the control force $f$ and the control moment $M$ such that this hanging equilibrium configuration at the desired location becomes asymptotically stable.
\subsection{Simplified Dynamic Model}
For the given equations of motion \refeqn{xddot} for $x$, the control force is given by $-fRe_3$. This implies that the total thrust magnitude $f$ can be arbitrarily chosen, but the direction of the thrust vector is always along the third body-fixed axis. Also, the rotational attitude dynamics of the quadrotor is not affected by the translational dynamics of the quadrotor or the dynamics of links.
To overcome the under-actuated property of a quadrotor, in this section, we first replace the term $-fRe_3$ of \refeqn{xddot} by a fictitious control input $u\in\ensuremath{\mathbb{R}}^3$, and design an expression for $u$ to asymptotically stabilize the desired equilibrium. This is equivalent to assuming that the attitude $R$ of the quadrotor can be instantaneously controlled. The effects of the attitude dynamics are incorporated at the next section. Also $\Delta_{x}$ is ignored in the simplified dynamic model. In short, the equations of motion for the simplified dynamic model considered in the section are given by \begin{align} M_{00}\ddot{x}+\sum^{n}_{i=1}{M_{0i}\ddot{q}_{i}}=u+M_{00}ge_{3},\label{eqn:xddot_sim} \end{align} and \refeqn{qddot}.
\subsection{Linear Control System}\label{sec:LCS} The fictitious control input is designed from the linearized dynamics about the desired hanging equilibrium. The variation of $x$ and $u$ are given by \begin{align} \delta x = x - x_d,\quad \delta u = u - M_{00}g e_3.\label{eqn:delxLin} \end{align} From \refeqn{delqi}, the variation of $q_i$ from the equilibrium can be written as \begin{align} \delta q_i = \xi_i\times e_3,\label{eqn:delqLin} \end{align} where $\xi_i\in\ensuremath{\mathbb{R}}^3$ with $\xi_i\cdot e_3=0$. The variation of $\omega_i$ is given by $\delta\omega\in\ensuremath{\mathbb{R}}^3$ with $\delta\omega_i \cdot e_3=0$. Therefore, the third element of each of $\xi_i$ and $\delta\omega_i$ for any equilibrium configuration is zero, and they are omitted in the following linearized equation, i.e., the state vector of the linearized equation is composed of $C^T\xi_i\in\ensuremath{\mathbb{R}}^2$, where $C=[e_1,e_2]\in\ensuremath{\mathbb{R}}^{3\times 2}$.
\newcommand{\mathbf{M}}{\mathbf{M}} \newcommand{\mathbf{K}}{\mathbf{K}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{u}}{\mathbf{u}} \newcommand{\mathbf{v}}{\mathbf{v}} \newcommand{\mathbf{G}}{\mathbf{G}}
\begin{prop}\label{prop:FDMM} The linearized equations of the simplified dynamic model \refeqn{xddot_sim} and \refeqn{qddot} can be written as follows \begin{gather} \mathbf{M}\ddot \mathbf{x} + \mathbf{G}\mathbf{x} = \mathbf{B} \delta u+ \ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}),\label{eqn:Lin} \end{gather} where $\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})$ collects the higher order terms, $\mathbf{x}=[\delta x,\; \mathbf{x}_{q}]^{T}\in\ensuremath{\mathbb{R}}^{2n+3}$, $\mathbf{M},\,\mathbf{G}\in\ensuremath{\mathbb{R}}^{(2n+3)\times (2n+3)}$, and $\mathbf{B}\in\ensuremath{\mathbb{R}}^{(2n+3)\times 3}$. Equation \refeqn{Lin} can equivalently be written as \begin{align*} \begin{bmatrix} \mathbf{M}_{xx} & \mathbf{M}_{xq}\\ \mathbf{M}_{qx} & \mathbf{M}_{qq} \end{bmatrix} \begin{bmatrix} \delta\ddot x \\ \ddot \mathbf{x}_q\end{bmatrix} &+ \begin{bmatrix} 0_3 & 0_{3\times 2n}\\ 0_{2n\times 3} & \mathbf{G}_{qq}\end{bmatrix} \begin{bmatrix} \delta x \\ \mathbf{x}_q\end{bmatrix}\\ &= \begin{bmatrix} I_3 \\ 0_{2n\times 3}\end{bmatrix} \delta u+ \ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}), \end{align*} where the corresponding sub-matrices are defined as \begin{align*} \mathbf{x}_q & = [C^T \xi_1;\,\ldots\,;\,C^T \xi_n],\\ \mathbf{M}_{xx} &= M_{00}I_{3},\\ \mathbf{M}_{xq} &= \begin{bmatrix} -M_{01}\hat e_3C & -M_{02}\hat e_3C & \cdots & -M_{0n}\hat e_3C \end{bmatrix},\\ \mathbf{M}_{qx} & = \mathbf{M}_{xq}^T,\\ \mathbf{M}_{qq} &=
\begin{bmatrix} M_{11}I_{2} & M_{12} I_2 & \cdots & M_{1n}I_2\\% M_{21} I_2 & M_{22} I_{2} & \cdots & M_{2n}I_2\\% \vdots & \vdots & & \vdots\\ M_{n1}I_2 & M_{n2}I_2 & \cdots & M_{nn} I_{2}
\end{bmatrix},\\ \mathbf{G}_{qq} = \mathrm{diag}[&\sum_{a=1}^n {m_a gl_1 I_2},\cdots,m_ngl_nI_2]. \end{align*} \end{prop}
\begin{proof} See Appendix \ref{sec:PfFDMM}. \end{proof} For the linearized dynamics \refeqn{Lin}, the following control system is chosen \begin{align} \delta u & = -k_{x}\delta{x}-k_{\dot{x}}\delta\dot{x}-\sum_{i=1}^{n}\{k_{q_{i}}C^{T}(e_3\times q_{i})+k_{\omega_{i}}C^{T}\delta\omega_{i}\}\nonumber\\ & = -K_x \mathbf{x} - K_{\dot x} \dot \mathbf{x},\label{eqn:delu} \end{align} for controller gains $K_x =[k_xI_3,k_{q_1}I_{3\times 2},\ldots, k_{q_n}I_{3\times 2}]\in\ensuremath{\mathbb{R}}^{3\times (3+2n)}$ and $K_{\dot x} =[k_{\dot x}I_3,k_{\omega_1}I_{3\times 2},\ldots, k_{\omega_n}I_{3\times 2}]\in\ensuremath{\mathbb{R}}^{3\times (3+2n)}$. Provided that \refeqn{Lin} is controllable, we can choose the controller gains $K_x,K_{\dot x}$ such that the equilibrium is asymptotically stable for the linearized equation \refeqn{Lin}. Then, the equilibrium becomes asymptotically stable for the nonlinear Euler-Lagrange equations \refeqn{xddot_sim} and \refeqn{qddot}~\cite{Kha96}. The controlled linearized system can be written as \begin{align} \dot{z}_{1}=&\mathds{A} z_{1}+\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}), \end{align} where $z_{1}=[\mathbf{x},\; \dot{\mathbf{x}}]^{T}\in\ensuremath{\mathbb{R}}^{4n+6}$ and the matrices $\mathds{A}\in\ensuremath{\mathbb{R}}^{(4n+6)\times (4n+6)}$ and $\mathds{B}\in\ensuremath{\mathbb{R}}^{(4n+6)\times (2n+3)}$ are defined as \begin{align} \mathds{A}=\begin{bmatrix} 0&I\\ -\mathbf{M}^{-1}(\mathbf{G}+\mathbf{B} K_{x})&-{(\mathbf{M}^{-1}\mathbf{B} K_{\dot{x}})} \end{bmatrix}, \mathds{B}=\begin{bmatrix} 0\\ \mathbf{M}^{-1} \end{bmatrix}. \end{align} We can also choose $K_{\mathbf{x}}$ and $K_{\dot{\mathbf{x}}}$ such that $\mathds{A}$ is Hurwitz. Then, for any positive definite matrix $Q\in\ensuremath{\mathbb{R}}^{(4n+6)\times(4n+6)}$, there exists a positive definite and symmetric matrix $P\in\ensuremath{\mathbb{R}}^{(4n+6)\times(4n+6)}$ such that $\mathds{A}^{T}P+P\mathds{A}=-Q$ according to~\cite[Thm 3.6]{Kha96}.
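As an illustration of this gain-selection step, the following minimal Python sketch (not part of the original implementation; the matrices $\mathbf{M}$, $\mathbf{G}$, $\mathbf{B}$ and the scalar values below are placeholders) forms $\mathds{A}$, checks that it is Hurwitz, and solves the Lyapunov equation $\mathds{A}^{T}P+P\mathds{A}=-Q$ numerically.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def closed_loop_matrix(M, G, B, Kx, Kxdot):
    # A = [[0, I], [-M^{-1}(G + B Kx), -M^{-1} B Kxdot]]
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    return np.block([
        [np.zeros((n, n)), np.eye(n)],
        [-Minv @ (G + B @ Kx), -Minv @ (B @ Kxdot)],
    ])

def lyapunov_certificate(A, Q=None):
    # Check that A is Hurwitz and return P solving A^T P + P A = -Q.
    if Q is None:
        Q = np.eye(A.shape[0])
    assert np.all(np.linalg.eigvals(A).real < 0), "A is not Hurwitz"
    # SciPy solves a X + X a^H = q; pass a = A^T and q = -Q.
    return solve_continuous_lyapunov(A.T, -Q)

# Toy 1-DOF example with placeholder matrices (not the quadrotor-with-links model).
M = np.array([[1.0]]); G = np.array([[0.0]]); B = np.array([[1.0]])
A = closed_loop_matrix(M, G, B, Kx=np.array([[2.0]]), Kxdot=np.array([[3.0]]))
P = lyapunov_certificate(A)
print(np.linalg.eigvalsh(P))   # all positive => valid Lyapunov matrix
\end{verbatim}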
\section{Controller Design for a Quadrotor with a Flexible Cable}\label{sec:CS}
The control system designed in the previous section is generalized to the full dynamic model that includes the attitude dynamics. The central idea is that the attitude $R$ of the quadrotor is controlled such that its total thrust direction $-Re_3$, which corresponds to the third body-fixed axis, asymptotically follows the direction of the fictitious control input $u$. By choosing the total thrust magnitude properly, we can guarantee that the total thrust vector $-fRe_{3}$ asymptotically converges to the fictitious ideal force $u$, thereby yielding asymptotic stability of the full dynamic model. \subsection{Controller Design} Consider the full nonlinear equations of motion, and let $A\in\ensuremath{\mathbb{R}}^3$ be the ideal total thrust of the quadrotor system that asymptotically stabilizes the desired equilibrium. From \refeqn{delxLin}, we have \begin{align} A= M_{00}ge_3 + \delta u = -K_{x} \mathbf{x}-K_{\dot{x}}\dot\mathbf{x} -K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})+ M_{00}ge_3,\label{eqn:A} \end{align} where the following integral term $e_{\mathbf{x}}\in\ensuremath{\mathbb{R}}^{2n+3}$ is added to eliminate the effect of the disturbance $\Delta_x$ in the full dynamic model \begin{align}\label{eqn:exterm} e_{\mathbf{x}}=\int^{t}_{0}{(P\mathds{B})^{T}z_{1}(\tau)\;d\tau}, \end{align} and $K_z =[k_{z}I_3,k_{z_1}I_{3\times 2},\ldots, k_{z_n}I_{3\times 2}]\in\ensuremath{\mathbb{R}}^{3\times (3+2n)}$ is an integral gain matrix. For a positive constant $\sigma\in\ensuremath{\mathbb{R}}$, a saturation function $\mathop{\mathrm{sat}}_\sigma:\ensuremath{\mathbb{R}}\rightarrow [-\sigma,\sigma]$ is introduced as \begin{align*} \mathop{\mathrm{sat}}_{\sigma}(y) = \begin{cases} \sigma & \mbox{if } y >\sigma\\ y & \mbox{if } -\sigma \leq y \leq\sigma\\ -\sigma & \mbox{if } y <-\sigma\\ \end{cases}. \end{align*} If the input is a vector $y\in\ensuremath{\mathbb{R}}^n$, the saturation function is applied element-wise to define a saturation function $\mathop{\mathrm{sat}}_\sigma(y):\ensuremath{\mathbb{R}}^n\rightarrow [-\sigma,\sigma]^n$ for a vector. It is also assumed that an upper bound on the infinity norm of the uncertainty is known, \begin{align}\label{eqn:disturbancecond}
\|\Delta_{x}\|_{\infty}\leq \delta, \end{align} for a positive constant $\delta$. The desired direction of the third body-fixed axis $b_{3_c}\in\ensuremath{\mathsf{S}}^2$ is given by \begin{align}
b_{3_c} = - \frac{A}{\|A\|}.\label{eqn:b3c} \end{align} This provides a two-dimensional constraint for the desired attitude of the quadrotor, and there is an additional one-dimensional degree of freedom that corresponds to rotation about the third body-fixed axis, i.e., the yaw angle. A desired direction of the first body-fixed axis, namely $b_{1_d}\in\ensuremath{\mathsf{S}}^2$, is introduced to resolve this, and it is projected onto the plane normal to $b_{3_c}$. The desired direction of the second body-fixed axis is chosen to constitute an orthonormal frame. More explicitly, the desired attitude is given by \begin{align}
R_c = \bracket{-\frac{\hat b_{3_c}^2 b_{1_d}}{\|\hat b_{3_c}^2 b_{1_d}\|},\;
\frac{\hat b_{3_c}b_{1_d}}{\|\hat b_{3_c}b_{1_d}\|},\; b_{3_c}}, \end{align} which is guaranteed to lie in $\ensuremath{\mathsf{SO(3)}}$ by construction, assuming that $b_{1_d}$ is not parallel to $b_{3_c}$~\cite{LeeLeoAJC13}. The desired angular velocity $\Omega_{c}\in\ensuremath{\mathbb{R}}^{3}$ is obtained by the attitude kinematics equation \begin{align} \Omega_c = (R_c^T \dot R_c)^\vee. \end{align} Next, we introduce the tracking error variables for the attitude and the angular velocity $e_{R}$, $e_{\Omega}\in\ensuremath{\mathbb{R}}^{3}$ as follows~\cite{TFJCHTLeeHG} \begin{align} &e_{R}=\frac{1}{2}(R_{c}^{T}R-R^{T}R_{c})^{\vee},\\ &e_{\Omega}=\Omega-R^{T}R_{c}\Omega_{c}. \end{align} The thrust magnitude and the moment vector of quadrotor are chosen as \begin{align} f = -A\cdot Re_3,\label{eqn:fi} \end{align}
\begin{align} M =&-{k_R}e_{R} -{k_{\Omega}}e_{\Omega} -k_{I}e_{I}+\Omega\times J\Omega\nonumber\\ &-J(\hat\Omega R^T R_{c} \Omega_{c} - R^T R_{c}\dot\Omega_{c}),\label{eqn:Mi} \end{align} where $k_R,k_\Omega$, and $k_{I}$ are positive constants and the following integral term is introduced to eliminate the effect of fixed disturbance $\Delta_{R}$ \begin{align}\label{eqn:integralterm} e_{I}=\int^{t}_{0}{e_{\Omega}(\tau)+c_{2}e_{R}(\tau)d\tau}, \end{align} where $c_{2}$ is a positive constant.
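For illustration only, the following Python sketch evaluates the desired attitude \refeqn{b3c}, the attitude errors, the thrust \refeqn{fi} and the moment \refeqn{Mi} for given states. The function and variable names are assumptions introduced here, and $\Omega_{c}$, $\dot\Omega_{c}$ and $e_{I}$ are taken as inputs rather than computed from the attitude kinematics and the integral law.
\begin{verbatim}
import numpy as np

def hat(v):
    # hat map: hat(v) @ w == np.cross(v, w)
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(S):
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def desired_attitude(A, b1d):
    # R_c = [ -hat(b3c)^2 b1d / |.| ,  hat(b3c) b1d / |.| ,  b3c ],  b3c = -A/||A||
    b3c = -A / np.linalg.norm(A)
    b2c = hat(b3c) @ b1d
    b2c /= np.linalg.norm(b2c)
    b1c = -hat(b3c) @ (hat(b3c) @ b1d)
    b1c /= np.linalg.norm(b1c)
    return np.column_stack((b1c, b2c, b3c))

def thrust_and_moment(A, R, Rc, Omega, Omega_c, dOmega_c, eI, J, kR, kW, kI):
    e3 = np.array([0.0, 0.0, 1.0])
    eR = 0.5 * vee(Rc.T @ R - R.T @ Rc)          # attitude error
    eW = Omega - R.T @ Rc @ Omega_c              # angular velocity error
    f = -A @ (R @ e3)                            # thrust magnitude
    M = (-kR * eR - kW * eW - kI * eI
         + np.cross(Omega, J @ Omega)
         - J @ (hat(Omega) @ R.T @ Rc @ Omega_c - R.T @ Rc @ dOmega_c))
    return f, M
\end{verbatim}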
\subsection{Stability Analysis} \begin{prop}\label{prop:SA1} Consider control inputs $f$, $M$ defined in \refeqn{fi} and \refeqn{Mi}. There exist controller parameters and gains such that, (i) the zero equilibrium of tracking error is stable in the sense of Lyapunov; (ii) the tracking errors $e_{R}$, $e_{\Omega}$, $\mathbf{x}$, $\dot{\mathbf{x}}$ asymptotically converge to zero as $t\rightarrow\infty$; (iii) the integral terms $e_{I}$ and $e_{\mathbf{x}}$ are uniformly bounded.
\end{prop} \begin{proof} See Appendix \ref{sec:stability1}. \end{proof} By utilizing geometric control for the quadrotor, we show that the hanging equilibrium of the links can be asymptotically stabilized while translating the quadrotor to a desired position. The proposed control system explicitly considers the coupling effects between the cable/load dynamics and the quadrotor dynamics. A rigorous Lyapunov stability analysis is presented to establish the stability properties without any timescale separation assumptions or singular perturbation arguments, and a new nonlinear integral control term is designed to guarantee robustness against unstructured uncertainties in both the rotational and the translational dynamics.
\section{Numerical Example}\label{sec:NE}
The desirable properties of the proposed control system are illustrated by a numerical example. Properties of a quadrotor are chosen as \begin{align*} m=0.5\,\mathrm{kg},\quad J=\mathrm{diag}[0.557,\,0.557,\,1.05]\times 10^{-2}\,\mathrm{kgm^2}. \end{align*} Five identical links with $n=5$, $m_i=0.1\,\mathrm{kg}$, and $l_i=0.1\,\mathrm{m}$ are considered. Controller parameters are selected as follows: $k_x=12.8$, $k_v=4.22$, ${k_R}=0.65$, ${k_\Omega}= 0.11$, $k_{I}=1.5$, $c_{1}=c_{2}=0.7$. Also $k_{q}$ and $k_{\omega}$ are defined as \begin{align*} &k_q=[11.01,\,6.67,\,1.97,\,0.41,\,0.069],\\ &k_\omega=[0.93,\,0.24,\,0.032,\,0.030,\,0.025]. \end{align*}
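The scalar gains listed above enter the control law through the stacked gain matrices of Section \ref{sec:LCS}. The following sketch assembles them for $n=5$; it assumes that $I_{3\times 2}$ denotes the first two columns of the identity matrix, i.e., $C=[e_1,e_2]$, and that $k_v$ plays the role of the velocity gain $k_{\dot x}$.
\begin{verbatim}
import numpy as np

def stack_gains(kx, kv, kq, kw):
    # Kx = [kx I3, kq1 C, ..., kqn C], Kxdot = [kv I3, kw1 C, ..., kwn C],
    # where C = [e1, e2] is the 3x2 matrix interpreted here as I_{3x2}.
    C = np.eye(3)[:, :2]
    Kx = np.hstack([kx * np.eye(3)] + [k * C for k in kq])
    Kxdot = np.hstack([kv * np.eye(3)] + [k * C for k in kw])
    return Kx, Kxdot

kq = [11.01, 6.67, 1.97, 0.41, 0.069]
kw = [0.93, 0.24, 0.032, 0.030, 0.025]
Kx, Kxdot = stack_gains(12.8, 4.22, kq, kw)
assert Kx.shape == (3, 13) and Kxdot.shape == (3, 13)   # 3 + 2n with n = 5
\end{verbatim}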
\begin{figure}
\caption{Stabilization of a payload connected to a quadrotor with 5 links without Integral term}
\label{fig:simresultsWO}
\end{figure} The desired location of the quadrotor is selected as $x_d=0_{3\times 1}$. The initial conditions for the quadrotor are given by \begin{gather*} x(0)=[0.6;-0.7;0.2], \ \dot{x}(0)=0_{3\times 1},\\ R(0)=I_{3\times 3},\quad \Omega(0)=0_{3\times 1}. \end{gather*} The initial directions of the links are chosen such that the cable is curved along the horizontal direction, as illustrated in Figure \ref{fig:fisrt_sub11}, and the initial angular velocity of each link is chosen as zero.
The following two fixed disturbances are also considered in the equations of motion: \begin{align*} &\Delta_R=[0.03,-0.02,0.01]^T,\\ &\Delta_x=[-0.0125,0.0125,0.01]^T. \end{align*} We define the following two error functions to show the stabilizing performance for the links: \begin{align}
e_{q}=\sum_{i=1}^{n}{\|q_{i}-e_3\|},\quad e_{\omega}=\sum_{i=1}^{n}{\|\omega_{i}\|}. \end{align}
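A minimal sketch of how these error functions may be evaluated from simulation data is given below; the array shapes are assumptions, not part of the original implementation.
\begin{verbatim}
import numpy as np

def link_errors(q, omega):
    # q and omega have shape (n, 3); e_q = sum_i ||q_i - e3||, e_w = sum_i ||omega_i||.
    e3 = np.array([0.0, 0.0, 1.0])
    return (np.linalg.norm(q - e3, axis=1).sum(),
            np.linalg.norm(omega, axis=1).sum())
\end{verbatim}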
\begin{figure}
\caption{Stabilization of a payload connected to a quadrotor with 5 links with Integral term}
\label{fig:simresultsW}
\end{figure} We considered two cases for this numerical simulation, (i) with and (ii) without the proposed integral term, to illustrate its effect in the presence of disturbances. Simulation results are illustrated in Figures \ref{fig:simresultsWO} and \ref{fig:simresultsW}, where the quadrotor stabilizes the payload while reducing the direction error and the angular velocity error of the links. The corresponding maneuvers of the quadrotor and the links are illustrated by snapshots in Figure \ref{animationsim}. Comparison of Figures \ref{fig:simresultsWO} and \ref{fig:simresultsW} shows that the integral term eliminates the steady-state error in the presence of the fixed disturbances: the position $x$ of the quadrotor converges to the desired value $x_d$ while the payload and the links are stabilized below the quadrotor.
\begin{figure}
\caption{Snapshots of the controlled maneuver}
\label{fig:fisrt_sub11}
\label{animationsim}
\end{figure}
\section{Experimental Results}\label{sec:ER} Experimental results of the proposed controller are presented in this section. A quadrotor UAV is developed with the following configuration as illustrated at Figure \ref{fig:QuadHW}: \begin{itemize} \item Gumstix Overo computer-in-module (OMAP 600MHz processor), running a non-realtime Linux operating system. It is connected to a ground station via WIFI. \item Microstrain 3DM-GX3 IMU, connected to Gumstix via UART. \item BL-CTRL 2.0 motor speed controller, connected to Gumstix via I2C. \item Roxxy 2827-35 Brushless DC motors. \item XBee RF module, connected to Gumstix via UART. \end{itemize}
The weight of the entire UAV system is $0.791\,\mathrm{kg}$, including one battery. A payload with mass $m_1=0.036\ \mathrm{kg}$ is attached to the quadrotor via a cable of length $l_1=0.7\ \mathrm{m}$. The distance from the center of the quadrotor to each motor rotational axis is $d=0.169\,\mathrm{m}$, the thrust-to-torque coefficient is $c_{{\tau}_f}=0.1056\,\mathrm{m}$, and the moment of inertia is $J=\mathrm{diag}[0.56,\,0.56,\,1.05]\times 10^{-2}\,\mathrm{kgm^2}$. The angular velocity is measured by the inertial measurement unit (IMU), and the attitude is estimated from the IMU data. The position of the UAV is measured by a motion capture system (Vicon), and the velocity is estimated from this measurement. The ground computing system receives the Vicon data and sends it to the UAV via XBee. The Gumstix is adopted as the onboard computing unit of the UAV. It runs two main threads, a Vicon thread and an IMU thread. The Vicon thread receives the Vicon measurements and estimates the linear velocity of the quadrotor; it runs at 30Hz. The IMU thread receives the IMU measurements and estimates the angular velocity; the control outputs are also computed in this thread at 120Hz.
\begin{figure}
\caption{Hardware development for a quadrotor UAV}
\label{fig:QuadHW}
\end{figure}
Two cases are considered and compared. In the first case, a position control system developed in~\cite{GooLeePECC13} for a quadrotor UAV, which does not include the dynamics of the payload and the link, is applied to hover the quadrotor at the desired location; in the second case, the proposed control system is used.
\begin{figure}
\caption{Experimental results ($x_d$:black, $x$:red, $x+l_1q_1$:blue)}
\label{expresultsp}
\end{figure}
Experimental results are shown in Figures \ref{expresultsp} and \ref{expresultsq}. The position of the quadrotor and the payload is compared with the desired position of the quadrotor in Figure~\ref{expresultsp}, and the deflection angle of the link from the vertical direction is illustrated in Figure~\ref{expresultsq}. It is shown that the proposed control system effectively reduces the undesired oscillation of the link, compared with the quadrotor position control system.\footnote{A short video of the experiments is also available at the experiment section of \url{http://fdcl.seas.gwu.edu}.}
\begin{figure}
\caption{Experimental results: link deflection angles}
\label{expresultsq}
\end{figure}
\section{Conclusions}
Euler-Lagrange equations have been used for the quadrotor and the chain pendulum to model a flexible cable transporting a load in 3D space. These derivations are developed in a remarkably compact form that allows an arbitrary number of links in any configuration. We developed a geometric nonlinear controller to stabilize the links below the quadrotor at the hanging equilibrium from any chosen initial condition. The derivations and the controller avoid the use of local angle coordinates entirely, which is a distinguishing feature of this approach.
\appendix \subsection{Proof for Proposition \ref{prop:FDM}}\label{sec:PfFDM} From \refeqn{KE} and \refeqn{PE}, the Lagrangian is given by \begin{align}
L&=\frac{1}{2}M_{00}\|\dot{x}\|^{2}+\dot{x}\cdot\sum^{n}_{i=1}{M_{0i}\dot{q}_i}+\frac{1}{2}\sum^{n}_{i,j=1}{M_{ij}\dot{q}_{i}\cdot\dot{q}_{j}}\nonumber \\ &\quad+\sum^{n}_{i=1}\sum^{n}_{a=i}m_{a}gl_{i}e_{3}\cdot q_{i}+M_{00}ge_{3}\cdot x+\frac{1}{2}\Omega^{T}J\Omega. \end{align} The derivatives of the Lagrangian are given by \begin{align*} \ensuremath{\mathbf{D}}_x L & = M_{00} g e_3,\\ \ensuremath{\mathbf{D}}_{\dot x} L & = M_{00} \dot x + \sum_{i=1}^n M_{0i}\dot q_i, \end{align*} where $\ensuremath{\mathbf{D}}_x L$ represents the derivative of $L$ with respect to $x$. From the variation of the angular velocity given after \refeqn{delR}, we have \begin{align} \ensuremath{\mathbf{D}}_{\Omega}L\cdot\delta\Omega=J\Omega\cdot(\dot\eta+\Omega\times\eta) =J\Omega\cdot\dot\eta- \eta\cdot(\Omega\times J\Omega). \end{align} Similarly from \refeqn{delqi}, the derivative of the Lagrangian with respect to $q_i$ is given by \begin{align*} \ensuremath{\mathbf{D}}_{q_i} L \cdot \delta q_i = \sum_{a=i}^n m_a gl_ie_3\cdot (\xi_i\times q_i) = -\sum_{a=i}^n m_a gl_i\hat e_3 q_i\cdot \xi_i. \end{align*} The variation of $\dot q_i$ is given by \begin{align*} \delta\dot q_i = \dot \xi_i\times q_i + \xi_i\times \dot q_i. \end{align*} Using this, the derivative of the Lagrangian with respect to $\dot q_i$ is given by \begin{align*} &\ensuremath{\mathbf{D}}_{\dot q_i}L\cdot \delta \dot q_i = (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j) \cdot \delta \dot q_i \\ & = (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j) \cdot (\dot \xi_i \times q_i + \xi_i \times \dot q_i)\\ & = \hat q_i (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j)\cdot \dot\xi_i + \hat{\dot q}_i (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j)\cdot \xi_i. \end{align*} Let $\mathfrak{G}$ be the action integral, i.e., $\mathfrak{G}=\int_{t_0}^{t_f} L\,dt$. From the above expressions for the derivatives of the Lagrangian, the variation of the action integral can be written as \begin{align*} \delta \mathfrak{G} =& \int_{t_0}^{t_f} \{M_{00} \dot x + \sum_{i=1}^n M_{0i}\dot q_i\}\cdot \delta \dot x +M_{00}ge_3\cdot\delta x\\ & +\sum_{i=1}^n \{\hat q_i (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j)\}\cdot \dot \xi_i\\
&+ \sum_{i=1}^n\{\hat{\dot q}_i (M_{i0}\dot x + \sum_{j=1}^n M_{ij}\dot q_j)-\sum_{a=i}^n m_a gl_i\hat e_3 q_i \}\cdot \xi_i\\
&+ J\Omega\cdot\dot\eta- \eta\cdot(\Omega\times J\Omega)\,dt. \end{align*} Integrating by parts and using the fact that variations at the end points vanish, this reduces to \begin{align*} \delta \mathfrak{G} =& \int_{t_0}^{t_f} \{M_{00}ge_3 -M_{00} \ddot x - \sum_{i=1}^n M_{0i}\ddot q_i\}\cdot \delta x \\ &+ \sum_{i=1}^n\{-\hat q_i (M_{i0}\ddot x + \sum_{j=1}^n M_{ij}\ddot q_j)-\sum_{a=i}^n m_a gl_i\hat e_3 q_i \}\cdot \xi_i\\ & - \eta\cdot(J\dot\Omega+\Omega\times J\Omega) \,dt. \end{align*} According to the Lagrange-d'Alembert principle, the variation of the action integral is equal to the negative of the virtual work done by the external force and moment, namely \begin{align}\label{eqn:estef} -\int_{t_0}^{t_f} (-fRe_3+\Delta_{x})\cdot\delta x + (M+\Delta_R)\cdot \eta\; dt, \end{align}
and we obtain \refeqn{xddot} and \refeqn{Wdot}. As $\xi_i$ is perpendicular to $q_i$, we also have \begin{gather} -\hat q_i^2 (M_{i0}\ddot x + \sum_{j=1}^n M_{ij}\ddot q_j)+\sum_{a=i}^n m_a gl_i\hat q_i^2 e_3=0.\label{eqn:qddot0} \end{gather} Equation \refeqn{qddot0} is rewritten to obtain an explicit expression for $\ddot q_i$. As $q_i\cdot \dot q_i =0$, we have $\dot q_i \cdot \dot q_i +q_i\cdot \ddot q_i=0$. Using this, we have \begin{align*} -\hat q_i^2 \ddot q_i = -(q_i\cdot \ddot q_i )q_i + (q_i\cdot q_i)\ddot q_i =(\dot q_i \cdot \dot q_i) q_i + \ddot q_i. \end{align*} Substituting this equation into \refeqn{qddot0}, we obtain \refeqn{qddot}. This can be slightly rewritten in terms of the angular velocities. Since $\dot q_i = \omega_i\times q_i$ for the angular velocity $\omega_i$ satisfying $q_i\cdot\omega_i=0$, we have \begin{align*}
\ddot q_i & = \dot \omega_i \times q_i + \omega_i\times (\omega_i\times q_i) \\
&= \dot \omega_i \times q_i - \|\omega_i\|^2q_i=-\hat q_i \dot\omega_i - \|\omega_i\|^2q_i. \end{align*} Using this and the fact that $\dot\omega_i\cdot q_i=0$, we obtain \refeqn{ELwm}.
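The kinematic identity used in this last step can be verified numerically. The following short Python sketch (illustrative only, not part of the paper) checks $\ddot q_i=-\hat q_i\dot\omega_i-\|\omega_i\|^2q_i$ for randomly generated $q_i$, $\omega_i$ with $q_i\cdot\omega_i=0$ and an arbitrary $\dot\omega_i$.
\begin{verbatim}
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

rng = np.random.default_rng(0)
q = rng.normal(size=3); q /= np.linalg.norm(q)       # unit link direction
w = rng.normal(size=3); w -= (w @ q) * q             # angular velocity with w . q = 0
dw = rng.normal(size=3)                              # arbitrary angular acceleration

lhs = np.cross(dw, q) + np.cross(w, np.cross(w, q))  # ddot(q) from the product rule
rhs = -hat(q) @ dw - (w @ w) * q                     # claimed identity
assert np.allclose(lhs, rhs)
\end{verbatim}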
\subsection{Proof for Proposition \ref{prop:FDMM}}\label{sec:PfFDMM}
The variations of $x$, $u$ and $q_i$ are given by \refeqn{delxLin} and \refeqn{delqLin}. From the kinematics equation $\dot q_i=\omega_i\times q_i$, $\delta\dot q_i$ is given by \begin{align*} \delta \dot q_i = \dot\xi_i \times e_3 =\delta\omega_i \times e_3 + 0\times (\xi_i\times e_3)=\delta\omega_i \times e_3. \end{align*} Since both sides of the above equation are perpendicular to $e_3$, this is equivalent to $e_3\times(\dot\xi_i\times e_3) = e_3\times(\delta\omega_i\times e_3)$, which yields \begin{gather*} \dot \xi_i - (e_3\cdot\dot\xi_i) e_3 = \delta\omega_i -(e_3\cdot\delta\omega_i)e_3. \end{gather*} Since $\xi_i\cdot e_3 =0$, we have $\dot\xi_i\cdot e_3=0$. As $e_3\cdot\delta\omega_i=0$ from the constraint, we obtain the linearization of the kinematics equation: \begin{align} \dot\xi_i = \delta\omega_i.\label{eqn:dotxii} \end{align} Substituting these into \refeqn{ELwm}, and ignoring the higher order terms, we obtain \refeqn{Lin}. See the appendix subsection on the higher order term derivations for details.
\subsection{Proof for Proposition \ref{prop:SA1}}\label{sec:stability1} We first show stability of the rotational dynamics, and later it is combined with the stability analysis of the translational dynamics of the quadrotor and the rotational dynamics of the links. \subsubsection{a) Attitude Error Dynamics} Here, the attitude error dynamics for $e_{R}$, $e_{\Omega}$ are derived, and we find conditions on the control parameters that guarantee the boundedness of the attitude tracking errors. The time-derivative of $Je_{\Omega}$ can be written as \begin{align} J\dot e_\Omega & = \{Je_\Omega + d\}^\wedge e_\Omega - k_R e_R-k_\Omega e_\Omega- k_I e_I + \Delta_R,\label{eqn:JeWdot} \end{align} where $d=(2J-\mathop{\mathrm{tr}}{J}I)R^TR_{c}\Omega_{c}\in\ensuremath{\mathbb{R}}^3$~\cite{TFJCHTLeeHG}. The important property is that the first term on the right hand side is normal to $e_\Omega$, which simplifies the subsequent Lyapunov analysis.
\subsubsection{b) Stability of the Attitude Dynamics} Define a configuration error function on $\ensuremath{\mathsf{SO(3)}}$ as follows \begin{align} \Psi= \frac{1}{2}\mathop{\mathrm{tr}}[I- R_c^T R]. \end{align} We introduce the following Lyapunov function \begin{align} \begin{split} \mathcal{V}_{2}=&\frac{1}{2}e_{\Omega}\cdot J e_{\Omega}+k_{R}\Psi(R,R_{c})+c_{2}e_{R}\cdot e_{\Omega}\\
&+\frac{1}{2}k_{I}\|e_{I}-\frac{\Delta_R}{k_{I}}\|^{2}. \end{split} \end{align} Consider a domain $D_{2}$ given by \begin{align}
D_2 = \{ (R,\Omega)\in \ensuremath{\mathsf{SO(3)}}\times\ensuremath{\mathbb{R}}^3\,|\, \Psi(R,R_c)<\psi_2<2\}.\label{eqn:D2} \end{align} In this domain we can show that $\mathcal{V}_{2}$ is bounded as follows~\cite{TFJCHTLeeHG} \begin{align}\label{eqn:ffff1} \begin{split}
z_{2}^{T}M_{21}z_{2}&+\frac{k_{I}}{2}\|e_{I}-\frac{\Delta_R}{k_{I}}\|^{2}\leq\mathcal{V}_{2}\\
&\leq z_{2}^{T}M_{22}z_{2}+\frac{k_{I}}{2}\|e_{I}-\frac{\Delta_R}{k_{I}}\|^{2}, \end{split} \end{align}
where $z_{2}=[\|e_{R}\|,\|e_{\Omega}\|]^{T}\in \ensuremath{\mathbb{R}}^{2}$ and the matrices $M_{21}$, $M_{22}$ are given by \begin{align} M_{21}=\frac{1}{2}\begin{bmatrix} k_{R}&-c_{2}\lambda_{M}\\ -c_{2}\lambda_{M}&\lambda_{m} \end{bmatrix},M_{22}=\frac{1}{2}\begin{bmatrix} \frac{2k_{R}}{2-\psi_{2}}&c_{2}\lambda_{M}\\ c_{2}\lambda_{M}&\lambda_{M} \end{bmatrix}, \end{align} The time derivative of $\mathcal{V}_2$ along the solution of the controlled system is given by \begin{align*} \dot{\mathcal{V}}_2 =&
-k_\Omega\|e_\Omega\|^2 - e_\Omega\cdot(k_Ie_I-\Delta_R) \\ &+ c_2 \dot e_R \cdot Je_\Omega+ c_2 e_R \cdot J\dot e_\Omega + (k_Ie_I- \Delta_R) \dot e_I. \end{align*} We have $\dot e_I = c_2 e_R + e_\Omega$ from \refeqn{integralterm}. Substituting this and \refeqn{JeWdot}, the above equation becomes \begin{align*} \dot{\mathcal{V}}_2 =&
-k_\Omega\|e_\Omega\|^2 + c_2 \dot e_R \cdot Je_\Omega-c _2 k_R \|e_R\|^2 \\ &+ c_2 e_R \cdot ((Je_\Omega+d)^\wedge e_\Omega -k_\Omega e_\Omega). \end{align*}
We have $\|e_R\|\leq 1$, $\|\dot e_R\|\leq \|e_\Omega\|$~\cite{TFJCHTLeeHG}, and choose a constant $B_{2}$ such that $\|d\|\leq B_2$. Then we have \begin{align} \dot{\mathcal{V}}_2 \leq - z_2^T W_2 z_2,\label{eqn:dotV2} \end{align} where the matrix $W_2\in\ensuremath{\mathbb{R}}^{2\times 2}$ is given by \begin{align*} W_2 = \begin{bmatrix} c_2k_R & -\frac{c_2}{2}(k_\Omega+B_2) \\ -\frac{c_2}{2}(k_\Omega+B_2) & k_\Omega-2c_2\lambda_M \end{bmatrix}. \end{align*} The matrix $W_{2}$ is a positive definite matrix if \begin{align}\label{eqn:c2} c_{2}<\min\{\frac{\sqrt{k_{R}\lambda_{m}}}{\lambda_{M}},\frac{4k_{\Omega}}{8k_{R}\lambda_{M}+(k_{\Omega}+B_{2})^{2}} \}. \end{align} This implies that \begin{align}\label{eqn:eq2}
\dot{\mathcal{V}}_{2}\leq - \lambda_{m}(W_{2})\|z_{2}\|^{2}, \end{align} which shows stability of attitude dynamics.
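As an illustrative numerical check (not part of the proof), the following sketch evaluates the bound \refeqn{c2} and tests the positive definiteness of $W_{2}$ and $M_{21}$. Here $\lambda_m$ and $\lambda_M$ are taken to be the extreme eigenvalues of $J$, and the value of $B_{2}$ (a bound on $\|d\|$) is a placeholder assumption.
\begin{verbatim}
import numpy as np

def attitude_gain_check(J, kR, kW, c2, B2):
    # lambda_m, lambda_M: extreme eigenvalues of J; B2: assumed bound on ||d||.
    lm, lM = np.min(np.linalg.eigvalsh(J)), np.max(np.linalg.eigvalsh(J))
    c2_max = min(np.sqrt(kR * lm) / lM,
                 4.0 * kW / (8.0 * kR * lM + (kW + B2) ** 2))
    W2 = np.array([[c2 * kR, -0.5 * c2 * (kW + B2)],
                   [-0.5 * c2 * (kW + B2), kW - 2.0 * c2 * lM]])
    M21 = 0.5 * np.array([[kR, -c2 * lM], [-c2 * lM, lm]])
    pd = lambda S: np.all(np.linalg.eigvalsh(S) > 0)
    return c2 < c2_max, pd(W2), pd(M21)

J = np.diag([0.557e-2, 0.557e-2, 1.05e-2])
print(attitude_gain_check(J, kR=0.65, kW=0.11, c2=0.7, B2=0.1))  # (True, True, True)
\end{verbatim}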
\subsubsection{c) Translational Error Dynamics} We derive the tracking error dynamics and a Lyapunov function for the translational dynamics of the quadrotor UAV and the dynamics of the links; they are later combined with the stability analysis of the rotational dynamics. This proof is based on the Lyapunov method presented in Theorems 3.6 and 3.7 of~\cite{Kha96}. From \refeqn{delxLin}, \refeqn{xddot}, \refeqn{Lin}, and \refeqn{fi}, the linearized equation of motion for the controlled full dynamic model is given by \begin{align}\label{eqn:salam} \mathbf{M}\ddot \mathbf{x} + \mathbf{G}\mathbf{x}=\mathbf{B}(-fRe_{3}-M_{00}ge_{3})+\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})+\mathbf{B}\Delta_{x},
\end{align}
where $\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})$ denotes the higher order terms. The subsequent analyses are developed in the domain $D_{1}$ given by \begin{align}
D_1=\{&(\mathbf{x},\dot{\mathbf{x}},R,e_\Omega)\in\ensuremath{\mathbb{R}}^{2n+3}\times\ensuremath{\mathbb{R}}^{2n+3}\times \ensuremath{\mathsf{SO(3)}}\times\ensuremath{\mathbb{R}}^3\,|\,\nonumber\\ & \Psi< \psi_1 < 1\}.\label{eqn:D} \end{align} In the domain $D_{1}$, we can show that \begin{align} \frac{1}{2} \norm{e_R}^2 \leq \Psi(R,R_c) \leq \frac{1}{2-\psi_1} \norm{e_R}^2\label{eqn:eRPsi1}. \end{align} Consider the quantity $e_{3}^{T}R_{c}^{T}Re_{3}$, which represents the cosine of the angle between $b_{3}=Re_{3}$ and $b_{3_{c}}=R_{c}e_{3}$. Since $1-\Psi(R,R_{c})$ represents the cosine of the eigen-axis rotation angle between $R_{c}$ and $R$, we have $e_{3}^{T}R_{c}^{T}Re_{3}\geq 1-\Psi(R,R_{c})>0$ in $D_{1}$. Therefore, the quantity $\frac{1}{e_{3}^{T}R_{c}^{T}Re_{3}}$ is well-defined. We add and subtract $\frac{f}{e_{3}^{T}R_{c}^{T}Re_{3}}R_{c}e_{3}$ to the right hand side of \refeqn{salam} to obtain \begin{align}\label{eqn:taghall} \mathbf{M}\ddot \mathbf{x} + \mathbf{G}\mathbf{x}=&\mathbf{B}(\frac{-f}{e_{3}^{T}R_{c}^{T}Re_{3}}R_{c}e_{3}-X-M_{00}ge_{3}+\Delta_{x})+\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}), \end{align} where $X\in \ensuremath{\mathbb{R}}^{3}$ is defined by \begin{align}\label{eqn:Xdef} X=\frac{f}{e_{3}^{T}R_{c}^{T}Re_{3}}((e_{3}^{T}R_{c}^{T}Re_{3})Re_{3}-R_{c}e_{3}). \end{align} The first term on the right hand side of \refeqn{taghall} can be written as \begin{align}
-\frac{f}{e_{3}^{T}R_{c}^{T}Re_{3}}R_{c}e_{3}=-\frac{(\|A\|R_{c}e_{3})\cdot Re_{3}}{e_{3}^{T}R_{c}^{T}Re_{3}}\left(-\frac{A}{\|A\|}\right)=A. \end{align} Substituting this and \refeqn{A} into \refeqn{taghall} yields \begin{align} \mathbf{M}\ddot \mathbf{x} + \mathbf{G}\mathbf{x}=\mathbf{B}(-K_{x}\mathbf{x}-K_{\dot{x}}\dot{\mathbf{x}}-K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})-X+\Delta_{x})+\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}), \end{align} which can be rearranged as \begin{align} \begin{split} \ddot{\mathbf{x}}=&-(\mathbf{M}^{-1}\mathbf{G}+\mathbf{M}^{-1}\mathbf{B} K_{x})\mathbf{x}-(\mathbf{M}^{-1}\mathbf{B} K_{\dot{x}})\dot{\mathbf{x}}\\ &-\mathbf{M}^{-1}\mathbf{B} X-\mathbf{M}^{-1}\mathbf{B} K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})+\mathbf{M}^{-1}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})+\mathbf{M}^{-1}\mathbf{B}\Delta_{x}. \end{split} \end{align} Using the definitions for $\mathds{A}$, $\mathds{B}$, and $z_{1}$ presented before, the above expression can be rearranged as
\begin{align}\label{eqn:zdot1} \dot{z}_{1}=&\mathds{A} z_{1}+\mathds{B}(-\mathbf{B} X+\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})-\mathbf{B} K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})+\mathbf{B}\Delta_{x}). \end{align}
\subsubsection{d) Lyapunov Candidate for the Translational Dynamics} Using the matrix $P$ obtained from the linearized control system developed in Section~\ref{sec:LCS}, we introduce the following Lyapunov candidate for the translational dynamics \begin{align} \mathcal{V}_{1}=z_{1}^{T}Pz_{1}+2\int_{p_{eq}}^{e_{\mathbf{x}}}{(\mathbf{B} K_{z}\mathop{\mathrm{sat}}_{\sigma}(\mu)-\mathbf{B}\Delta_{x})}\cdot d \mu. \end{align}
The last integral term of the above equation is positive definite about the equilibrium point $e_{\mathbf{x}}=p_{eq}$ where \begin{align} p_{eq}=[\frac{\Delta_{x}}{k_{z}},0,0,\cdots], \end{align} if \begin{align} \delta< k_z\sigma, \end{align} using the fact that $\mathop{\mathrm{sat}}_{\sigma}(y)=y$ if $|y|\leq\sigma$.
The time derivative of the Lyapunov function using the Leibniz integral rule is given by \begin{align}\label{eqn:devrr} \dot{\mathcal{V}_{1}}=\dot{z}_{1}^{T}Pz_{1}+z_{1}^{T}P\dot{z}_{1}+2\dot{e}_{\mathbf{x}}\cdot(\mathbf{B} K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})-\mathbf{B}\Delta_{x}). \end{align} Since $\dot{e}_{\mathbf{x}}^{T}=((P\mathds{B})^{T}z_{1})^{T}=z_{1}^{T}P\mathds{B}$ from \refeqn{exterm}, the above expression can be written as \begin{align}\label{eqn:devvcv} \dot{\mathcal{V}_{1}}=\dot{z}_{1}^{T}Pz_{1}+z_{1}^{T}P\dot{z}_{1}+2z_{1}^{T}P\mathds{B}(\mathbf{B} K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})-\mathbf{B}\Delta_{x}). \end{align} Substituting \refeqn{zdot1} into \refeqn{devvcv}, it reduces to \begin{align}\label{eqn:beforsimp} \dot{\mathcal{V}}_{1}=z_{1}^{T}(\mathds{A}^{T}P+P\mathds{A})z_{1}+2z_{1}^{T}P\mathds{B}(-\mathbf{B} X+\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})). \end{align}
Letting $c_{3}=2\|P\mathds{B}\mathbf{B}\|_{2}\in\ensuremath{\mathbb{R}}$ and using $\mathds{A}^{T}P+P\mathds{A}=-Q$, we have \begin{align}\label{eqn:test}
\dot{\mathcal{V}}_{1}\leq-z_{1}^{T}Qz_{1}+c_{3}\|z_{1}\|\|X\|+2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}). \end{align} The second term on the right hand side corresponds to the effects of the attitude tracking error on the translational dynamics. We find a bound on $X$, defined in \refeqn{Xdef}, to show stability of the coupled translational and rotational dynamics in the subsequent Lyapunov analysis. Since \begin{align}
f=\|A\|(e_{3}^{T}R_{c}^{T}Re_{3}), \end{align} we have \begin{align}\label{eqn:ssstr}
\|X\|\leq\|A\|\|(e_{3}^{T}R_{c}^{T}Re_{3})Re_{3}-R_{c}e_{3}\|. \end{align}
The last term $\|(e_{3}^{T}R_{c}^{T}Re_{3})Re_{3}-R_{c}e_{3}\|$ represents the sine of the angle between $b_{3}=Re_{3}$ and $b_{3_{c}}=R_{c}e_{3}$, since $(b_{3_{c}}\cdot b_{3})b_{3}-b_{3_{c}}=b_{3}\times(b_{3}\times b_{3_{c}})$. The magnitude of the attitude error vector, $\|e_{R}\|$ represents the sine of the eigen-axis rotation angle between $R_{c}$ and $R$. Therefore, $\|(e_{3}^{T}R_{c}^{T}Re_{3})Re_{3}-R_{c}e_{3}\|\leq\|e_{R}\|$ in $D_{1}$. It follows that \begin{align} \begin{split}
\|(e_{3}^{T}R_{c}^{T}Re_{3})Re_{3}-R_{c}e_{3}\|&\leq\|e_{R}\|=\sqrt{\Psi(2-\Psi)}\\ &\leq\{\sqrt{\psi_1(2-\psi_1)}\triangleq\alpha\}<1, \end{split} \end{align} therefore \begin{align} \begin{split}
\|X\|&\leq \|A\|\|e_{R}\|\\
&\leq\|A\|\alpha. \end{split} \end{align} We also use the following properties \begin{align}
\lambda_{\min}(Q)\|z_{1}\|^{2}\leq z_{1}^{T}Qz_{1}. \end{align} Note that $\lambda_{\min}(Q)$ is real and positive since $Q$ is symmetric and positive definite. Then, we can simplify \refeqn{test} as given \begin{align}\label{eqn:ssddss}
\dot{\mathcal{V}}_{1} \leq -\lambda_{\min}(Q) \|z_{1}\| ^{2}+c_{3} \|z_{1}\| \|A\|\|e_{R}\|+2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}). \end{align} We find an upper bound for \begin{align}\label{eqn:AA} A=-K_{x} \mathbf{x}-K_{\dot{x}}\dot\mathbf{x} -K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})+ M_{00}ge_3, \end{align} by defining \begin{align}
\|M_{00}ge_{3}\|\leq B_{1}, \end{align} for a given positive constant $B_{1}$. We also use the following property of the matrix norm: for any matrix $A\in\ensuremath{\mathbb{R}}^{m\times n}$, \begin{align}
\|A\|_{2}\leq \sqrt{mn}\|A\|_{\max}, \end{align}
where $\|A\|_{\max}=\max_{i,j}|a_{ij}|$. The third term on the right hand side of \refeqn{AA} can be bounded as \begin{align}
\|-K_{z}\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})\|\leq \|K_{z}\|\|\mathop{\mathrm{sat}}_{\sigma}(e_{\mathbf{x}})\|\leq\|K_{z}\| \sqrt{2n+3}\sigma, \end{align} where \begin{align*}
\|K_{z}\|\leq \sqrt{3(2n+3)}\|K_{z}\|_{\max}, \end{align*} and similarly \begin{align*}
&\|K_{\mathbf{x}}\|\leq \sqrt{3(2n+3)}\|K_{\mathbf{x}}\|_{\max},\\
&\|K_{\dot{\mathbf{x}}}\|\leq \sqrt{3(2n+3)}\|K_{\dot{\mathbf{x}}}\|_{\max}. \end{align*}
We define $K_{max}, K_{z_{m}}\in\ensuremath{\mathbb{R}}$ \begin{align*}
&K_{\max}=\max\{\|K_{\mathbf{x}}\|_{\max},\|K_{\dot{\mathbf{x}}}\|_{\max}\}\sqrt{3(2n+3)}, \\
&K_{z_{m}}=\sqrt{3}(2n+3)\|K_{z}\|_{\max}, \end{align*} and then the upper bound of $A$ is given by \begin{align}
\|A\| & \leq K_{\max}(\|\mathbf{x}\|+\|\dot{\mathbf{x}}\|)+\sigma K_{z_{m}}+B_{1}\\
&\leq 2K_{\max}\|z_{1}\|+(B_{1}+\sigma K_{z_{m}}),\label{eqn:normA} \end{align} and substitute \refeqn{normA} into \refeqn{ssddss}
\begin{align}\label{eqn:eq1} \begin{split}
\dot{\mathcal{V}}_{1} \leq& -(\lambda_{\min}(Q)-2c_{3}K_{\max}\alpha) \|z_{1}\| ^{2}\\
&+c_{3}(B_{1}+\sigma K_{z_{m}})\|z_{1}\|\|e_{R}\|+2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}}). \end{split} \end{align}
\subsubsection{e) Lyapunov Candidate for the Complete System} Let $\mathcal{V}=\mathcal{V}_{1}+\mathcal{V}_{2}$ be the Lyapunov function for the complete system. The time derivative of $\mathcal{V}$ is given by \begin{align} \dot{\mathcal{V}}=\dot{\mathcal{V}}_{1}+\dot{\mathcal{V}}_{2}. \end{align} Substituting \refeqn{eq1} and \refeqn{eq2} into the above equation \begin{align} \begin{split}
\dot{\mathcal{V}}\leq& -(\lambda_{\min}(Q)-2c_{3}K_{\max}\alpha) \|z_{1}\| ^{2}+2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})\\
&+c_{3}(B_{1}+\sigma K_{z_{m}}) \|z_{1}\|\|e_{R}\|-\lambda_{m}(W_{2})\|z_{2}\|^{2}, \end{split} \end{align}
and using $\|e_{R}\|\leq \|z_{2}\|$, it can be written as \begin{align}\label{eqn:finalsimp} \begin{split}
\dot{\mathcal{V}}\leq& -(\lambda_{\min}(Q)-2c_{3}K_{\max}\alpha) \|z_{1}\| ^{2}+2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})\\
&+c_{3}(B_{1}+\sigma K_{z_{m}})\|z_{1}\|\|z_{2}\|-\lambda_{m}(W_{2})\|z_{2}\|^{2}. \end{split} \end{align} Also, the $2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})$ term in the above equation is indefinite. The function $\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})$ satisfies \begin{align}
\frac{\|\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})\|}{\|z_{1}\|}\rightarrow 0\quad \mbox{as}\quad \|z_{1}\|\rightarrow 0 \end{align} Then, for any $\gamma>0$ there exists $r>0$ such that \begin{align}
\|\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})\|<\gamma\|z_{1}\|,\quad \forall\|z_{1}\|<r \end{align} so, \begin{align}
2z_{1}^{T}P\mathds{B}\ensuremath{\mathfrak{g}}(\mathbf{x},\dot{\mathbf{x}})\leq 2\gamma\|P\|_{2}\|z_{1}\|^{2}. \end{align} Substituting the above equation into \refeqn{finalsimp} \begin{align} \begin{split}
\dot{\mathcal{V}}\leq &-(\lambda_{\min}(Q)-2c_{3}K_{\max}\alpha) \|z_{1}\| ^{2}+2\gamma\|P\|_{2}\|z_{1}\|^{2}\\
&+c_{3}(B_{1}+\sigma K_{z_{m}})\|z_{1}\|\|z_{2}\|-\lambda_{m}(W_{2})\|z_{2}\|^{2}, \end{split} \end{align} we obtain \begin{align}
\dot{\mathcal{V}}\leq-z^{T}Wz+2\gamma\|P\|_{2}\|z_{1}\|^{2}, \end{align} where $z=[\|z_{1}\|,\|z_{2}\|]^{T}\in\ensuremath{\mathbb{R}}^{2}$ and \begin{align} W=\begin{bmatrix} \lambda_{\min}(Q)-2c_{3}K_{\max}\alpha&-\frac{c_{3}(B_{1}+\sigma K_{z_{m}})}{2}\\ -\frac{c_{3}(B_{1}+\sigma K_{z_{m}})}{2}&\lambda_{m}(W_{2}) \end{bmatrix}. \end{align}
Also using $\|z_{1}\|\leq\|z\|$ \begin{align}
\dot{\mathcal{V}}\leq -(\lambda_{\min}(W)-2\gamma\|P\|_{2})\|z\|^{2}. \end{align}
Choosing $\gamma<\lambda_{\min}(W)/(2\|P\|_{2})$ ensures that $\dot{\mathcal{V}}$ is negative semi-definite. This implies that the zero equilibrium of the tracking errors is stable in the sense of Lyapunov and that $\mathcal{V}$ is non-increasing. Therefore, all of the error variables $z_{1}$, $z_{2}$ and the integral control terms $e_{I}$, $e_{\mathbf{x}}$ are uniformly bounded. Moreover, from the LaSalle-Yoshizawa theorem~\cite[Thm 3.4]{Kha96}, we have $z\rightarrow 0$ as $t\rightarrow \infty$.
\subsection{Proof for the Higher Order Term Derivations} We express $\xi\in\ensuremath{\mathbb{R}}^{3}$ in terms of $e_{3}\times q$ up to higher order terms. The following relations hold, as illustrated in Fig. \ref{appendixqxi}. \begin{figure}
\caption{$e_{3}$ and $q$, direction of each link}
\label{appendixqxi}
\end{figure} \begin{align}
\sin\|\xi\|=\|q\times e_{3}\|, \end{align} and \begin{align}
\frac{\xi}{\|\xi\|}=\frac{e_{3}\times q}{\|e_{3}\times q\|}, \end{align} so \begin{align}
\xi=\frac{e_{3}\times q}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|). \end{align}
Taking the time derivative, and using the fact that the derivative of $\|A\|$ is $\frac{A\cdot\dot{A}}{\|A\|}$ for any vector $A(t)$, we obtain \begin{align} \begin{split}
\dot{\xi}&=\Big[\frac{e_{3}\times \dot{q}}{\|e_{3}\times q\|}-\frac{(e_{3}\times q)[(e_{3}\times q)\cdot(e_{3}\times \dot{q})]}{\|e_{3}\times q\|^3}\Big]\sin^{-1}(\|e_{3}\times q\|)\\
&+\frac{e_{3}\times q}{\|e_{3}\times q\|}[\frac{(e_{3}\times q)\cdot(e_{3}\times\dot{q})}{\|e_{3}\times q\|}]\frac{1}{\sqrt{1-(\|e_{3}\times q\|)^2}}, \end{split} \end{align} and after simplification we obtain \begin{align}\label{eqn:derivative1} \begin{split}
\dot{\xi}&=\frac{e_{3}\times\dot{q}}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)\\ &+(e_{3}\times q)[(e_{3}\times q)\cdot(e_{3}\times\dot{q})]k, \end{split} \end{align} where $k\in\ensuremath{\mathbb{R}}$ is a scalar defined as follows \begin{align}
k=\frac{1}{\|e_{3}\times q\|^2\sqrt{(1-\|e_{3}\times q\|^2)}}-\frac{\sin^{-1}(\|e_{3}\times q\|)}{\|e_{3}\times q\|^3}. \end{align} Recall the kinematics relation \begin{align} \dot{q}=\omega\times q, \end{align} where $\omega\in\ensuremath{\mathbb{R}}^{3}$ is the angular velocity of each link. Using $a\times (b \times c)=b(a\cdot c)-c(a\cdot b)$, \begin{align}\label{eqn:q_dot} e_{3}\times\dot{q}=e_{3}\times(\omega\times q)=\omega(e_{3}\cdot q)-q(e_{3}\cdot\omega), \end{align} and substituting Eq. \refeqn{q_dot} into Eq. \refeqn{derivative1}, with $\omega$ written as $\delta\omega$ since the angular velocity vanishes at the equilibrium, we obtain \begin{align}\label{eqn:derivative22} \begin{split}
\dot{\xi}&=\frac{\delta\omega(e_{3}\cdot q)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)-\frac{q(e_{3}\cdot\delta\omega)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)\\ &+[(e_{3}\times q)\cdot(\delta\omega(e_{3}\cdot q))](e_{3}\times q)k\\ &-[(e_{3}\times q)\cdot(q(e_{3}\cdot \delta\omega))](e_{3}\times q)k, \end{split} \end{align} and using the fact that $a\cdot(a\times b)=0$ the last term on the right hand side vanishes, so \begin{align}\label{eqn:derivative222} \begin{split}
\dot{\xi}&=\frac{\delta\omega(e_{3}\cdot q)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)\\ &+[(e_{3}\times q)\cdot(\delta\omega(e_{3}\cdot q))](e_{3}\times q)k\\
&-\frac{q(e_{3}\cdot\delta\omega)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|). \end{split} \end{align} The first term on the right hand side of Eq. \refeqn{derivative222} can be simplified using the Taylor series \begin{align}\label{eqn:tq} q=\exp(\hat{\xi})e_{3}=(I+\hat{\xi}+\ensuremath{\mathfrak{g}}(\xi))e_{3}, \end{align} together with the expansion $\frac{\sin^{-1}(x)}{x}=1+\frac{1}{6}x^2+O(x^{4})$. Using the fact that $\hat{\xi}e_{3}\cdot e_{3}=0$, we have \begin{align}\label{eqn:e11} q\cdot e_{3}=(e_{3}+\hat{\xi}e_{3}+\ensuremath{\mathfrak{g}}(\xi))\cdot e_{3}=1+\ensuremath{\mathfrak{g}}(\xi), \end{align} and also \begin{align}\label{eqn:e22} e_{3}\times q=e_{3}\times(e_{3}+\hat{\xi}e_{3}+\ensuremath{\mathfrak{g}}(\xi))=-\hat{e}_{3}^{2}\xi+\ensuremath{\mathfrak{g}}(\xi), \end{align}
and $\|A\|^{2}=A^{T}A$, so \begin{align}\label{eqn:hote3q} \begin{split}
\|e_{3}\times q\|^{2}&=(-\hat{e}_{3}^{2}\xi+\ensuremath{\mathfrak{g}}(\xi))^{T}(-\hat{e}_{3}^{2}\xi+\ensuremath{\mathfrak{g}}(\xi))\\
&=\|\xi\|^{2}+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi) \end{split} \end{align} so \begin{align}\label{eqn:hqn} \begin{split}
&\frac{(e_{3}\cdot q)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)\\
&=(e_3\cdot q)(1+\frac{1}{6}(\|e_{3}\times q\|)^{2}+\cdots)\\ &=(1+\ensuremath{\mathfrak{g}}(\xi))(1+\ensuremath{\mathfrak{g}}(\xi))=1+\ensuremath{\mathfrak{g}}(\xi). \end{split} \end{align} The third term is simplified as follows, using the fact that $\delta\omega$ is normal to $q$, \begin{align}\label{eqn:fact} \delta\omega\cdot q=0. \end{align} Substituting $q=e_{3}+\hat{\xi}e_{3}+\ensuremath{\mathfrak{g}}(\xi)$, where $\ensuremath{\mathfrak{g}}(\xi)$ denotes the higher order terms, into the above equation gives \begin{align} (e_{3}+\hat{\xi}e_{3}+\ensuremath{\mathfrak{g}}(\xi))\cdot \delta\omega=0, \end{align} that is, \begin{align} e_{3}\cdot\delta\omega+\hat{\xi}e_{3}\cdot\delta\omega+\ensuremath{\mathfrak{g}}(\xi)=0, \end{align} and \begin{align} e_{3}\cdot\delta\omega=-\hat{\xi}e_{3}\cdot\delta\omega+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega)+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{align} so we can show that \begin{align}\label{eqn:set1} \delta\omega\cdot e_{3}=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{align} and therefore \begin{align}
\frac{q(e_{3}\cdot\delta\omega)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega). \end{align} Finding the Taylor series of $k$ \begin{align} \begin{split}
&\frac{1}{\|e_{3}\times q\|^2\sqrt{(1-\|e_{3}\times q\|^2)}}-\frac{\sin^{-1}(\|e_{3}\times q\|)}{\|e_{3}\times q\|^3}\\
&=\frac{1}{3}+\frac{3}{10}\|e_{3}\times q\|^2+\frac{15}{56}\|e_{3}\times q\|^4+\cdots, \end{split} \end{align} and using \refeqn{hote3q} \begin{align}
k=\frac{1}{3}+\frac{3}{10}\|e_{3}\times q\|^2+\frac{15}{56}\|e_{3}\times q\|^4+\cdots=\frac{1}{3}+\ensuremath{\mathfrak{g}}(\xi). \end{align} We use \refeqn{e22} to simplify the following term: \begin{align}\label{eqn:simple} \begin{split} (e_{3}\times q)\cdot\delta\omega&=(-\hat{e}_{3}^{2}\xi+\ensuremath{\mathfrak{g}}(\xi))\cdot\delta\omega\\ &=-\hat{e}_{3}^{2}\xi\cdot\delta\omega+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega)+\ensuremath{\mathfrak{g}}(\xi)\\ &=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{split} \end{align} so the second term on the right hand side of \refeqn{derivative222} becomes \begin{align} \begin{split} &[(e_{3}\times q)\cdot(\delta\omega(e_{3}\cdot q))](e_{3}\times q)k\\ &=(e_3\cdot q)(e_{3}\times q)(\ensuremath{\mathfrak{g}}(\xi,\delta\omega))(\frac{1}{3}+\ensuremath{\mathfrak{g}}(\xi))=\ensuremath{\mathfrak{g}}(\xi,\delta\omega). \end{split} \end{align} Finally, we can approximate $\dot{\xi}$ as \begin{align} \begin{split} \dot{\xi}&=\delta\omega(1+\ensuremath{\mathfrak{g}}(\xi))+\ensuremath{\mathfrak{g}}(\xi,\delta\omega)+\ensuremath{\mathfrak{g}}(\xi,\delta\omega)\\ &=\delta\omega+\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{split} \end{align} that is, Eq. \refeqn{derivative222} reduces to $\dot{\xi}=\delta\omega+\ensuremath{\mathfrak{g}}(\xi,\delta\omega)$. Next, we find an expression for $\delta\dot{\omega}$. Taking the derivative of \refeqn{fact} gives \begin{align}\label{eqn:set2} \dot{q}\cdot\delta\omega+q\cdot\delta\dot{\omega}=0, \end{align} so, with $\delta\omega=q\times\dot{q}$, \begin{align} \dot{q}\cdot(q\times\dot{q})+q\cdot\delta\dot{\omega}=0, \end{align} and since $\dot{q}\cdot(q\times\dot{q})=0$, \begin{align} q\cdot\delta\dot{\omega}=0. \end{align} Substituting $q=e_{3}+\hat{\xi}e_{3}+\ensuremath{\mathfrak{g}}(\xi)$ into the above expression and simplifying, \begin{align} e_{3}\cdot\delta\dot{\omega}=-\hat{\xi}e_{3}\cdot\delta\dot{\omega}+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{align} so finally \begin{align}\label{eqn:sss} e_{3}\cdot\delta\dot{\omega}=\ensuremath{\mathfrak{g}}(\xi,\delta\omega). \end{align} We now take the derivative of \refeqn{derivative222} to find an expression for $\delta\dot{\omega}$: \begin{align} \begin{split}
\ddot{\xi}&=\frac{\dot{q}(e_{3}\cdot\delta\omega)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)\\
&+\frac{q(e_{3}\cdot\delta\dot{\omega})}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)+q(e_{3}\cdot\delta\omega)k\\
&+\frac{\delta\dot{\omega}(e_{3}\cdot q)}{\|e_{3}\times q\|}\sin^{-1}(\|e_{3}\times q\|)+\delta\omega(e_{3}\cdot\dot{q})\frac{\sin^{-1}(\|e_{3}\times q\|)}{\|e_{3}\times q\|}\\
&+\delta\omega(e_{3}\cdot q)[\frac{1}{\|e_{3}\times q\|\sqrt{1-(\|e_{3}\times q\|)^2}}\\
&-\frac{\sin^{-1}(\|e_{3}\times q\|)}{\|e_{3}\times q\|^2}]\frac{(e_{3}\times q)(e_{3}\times\dot{q})}{\|e_{3}\times q\|}\\ &+[(e_{3}\times\dot{q})\cdot\delta\omega](e_{3}\cdot q)(e_{3}\times q)k\\ &+[(e_{3}\times q)\cdot\delta\dot{\omega}](e_{3}\cdot q)(e_{3}\times q)k\\ &+[(e_{3}\times q)\cdot\delta\omega][(e_{3}\cdot \dot{q})(e_{3}\times q)k\\ &+(e_{3}\cdot q)(e_{3}\times \dot{q})k+(e_{3}\cdot q)(e_{3}\times q)\dot{k}]. \end{split} \end{align} The first line consists of higher order terms based on the derivations in \refeqn{sss} and \refeqn{set1}, and the last line also consists of higher order terms based on the derivation in \refeqn{simple}, so $\ddot{\xi}$ becomes \begin{align} \begin{split} \ddot{\xi}&=\delta\dot{\omega}+\ensuremath{\mathfrak{g}}(\xi,\delta\omega)\\ &+[(e_{3}\cdot q)(e_{3}\times q)k][\delta\omega(e_{3}\times \dot{q})\\ &+(e_{3}\times\dot{q})\cdot\delta\omega+(e_{3}\times q)\cdot\delta\dot{\omega}]\\
&+\delta\omega(e_{3}\cdot\dot{q})\frac{\sin^{-1}(\|e_{3}\times q\|)}{\|e_{3}\times q\|}. \end{split} \end{align} We can show that the last line is a higher order term as follows: \begin{align} \begin{split} e_{3}\cdot\dot{q}&=e_{3}\cdot(\delta\omega\times q)=-\delta\omega\cdot(e_{3}\times q)\\ &=\hat{e}_{3}^{2}\xi\cdot\delta\omega+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{split} \end{align} and \begin{align} \delta\omega\cdot(e_{3}\times\dot{q})=\delta\omega\cdot(\delta\omega(e_{3}\cdot q)-q(e_{3}\cdot\delta\omega))=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{align} and \begin{align} \begin{split} (e_{3}\times q)\cdot\delta\dot{\omega}&=(-\hat{e}_{3}^{2}\xi+\ensuremath{\mathfrak{g}}(\xi))\cdot\delta\dot{\omega}\\ &=-\hat{e}_{3}^{2}\xi\cdot\delta\dot{\omega}+\ensuremath{\mathfrak{g}}(\xi)=\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{split} \end{align} so $\ddot{\xi}$ becomes \begin{align} \ddot{\xi}=\delta\dot{\omega}+\ensuremath{\mathfrak{g}}(\xi,\delta\omega), \end{align} or, equivalently, \begin{align}\label{eqn:sett} \delta\dot{\omega}=\ddot{\xi}-\ensuremath{\mathfrak{g}}(\xi,\delta\omega). \end{align}
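The relation between $\xi$ and $q$ used throughout this appendix can be checked numerically. The following Python sketch (illustrative only) generates $\xi$ orthogonal to $e_3$ with $\|\xi\|<\pi/2$, computes $q=\exp(\hat\xi)e_3$, and recovers $\xi$ from the arcsine formula.
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

e3 = np.array([0.0, 0.0, 1.0])
rng = np.random.default_rng(1)

xi = rng.normal(size=3); xi[2] = 0.0        # xi orthogonal to e3
xi *= 0.4 / np.linalg.norm(xi)              # keep the rotation angle below pi/2

q = Rotation.from_rotvec(xi).apply(e3)      # q = exp(hat(xi)) e3
s = np.cross(e3, q)
xi_rec = s / np.linalg.norm(s) * np.arcsin(np.linalg.norm(s))
assert np.allclose(xi, xi_rec)
\end{verbatim}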
\biography{farhad.jpg}{Farhad A. Goodarzi}
{received the B.S. and M.S. degrees in Mechanical Engineering from Sharif University and Santa Clara University, CA, in 2009 and 2011, respectively. He is currently a Ph.D. candidate in the ME department at The George Washington University. His research interests include control of complex systems and its applications, such as autonomous load transportation using multiple quadrotors.}
\biography{daewon_lee.jpg}{Daewon Lee}
{received the B.S., M.S. and Ph.D. degrees in Mechanical Engineering from Seoul National University. He is currently a postdoctoral fellow in the Department of Mechanical and Aerospace Engineering at The George Washington University. His research interests include control theory and its application to the control of quadrotor UAVs.}
\biography{lee.jpg}{Taeyoung Lee}
{is an assistant professor of the Department of Mechanical and Aerospace Engineering at the George Washington University. He received his doctoral degree in Aerospace Engineering and his master's degree in Mathematics at the University of Michigan in 2008. His research interests include computational geometric mechanics and control of complex systems.}
\clearafterbiography\relax
\end{document}
Buckmaster equation
In mathematics, the Buckmaster equation is a second-order nonlinear partial differential equation, named after John D. Buckmaster, who derived the equation in 1977.[1] The equation models the surface of a thin sheet of viscous liquid. The equation was derived earlier by S. H. Smith and by P. Smith,[2][3] but these earlier derivations focused on the steady version of the equation.
The Buckmaster equation is
$u_{t}=(u^{4})_{xx}+\lambda (u^{3})_{x}$
where $\lambda $ is a known parameter.
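A minimal explicit finite-difference sketch in Python (the scheme, grid, time step, parameter value and initial data are illustrative assumptions, not part of the article) is:

import numpy as np

# Periodic grid; u_t = (u^4)_xx + lambda (u^3)_x advanced with explicit Euler steps.
L, N, lam = 1.0, 200, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = 1.0 + 0.1 * np.sin(2.0 * np.pi * x)     # smooth, positive initial profile
dt = 1.0e-6                                  # small step for explicit stability

def step(u):
    u4, u3 = u ** 4, u ** 3
    u4_xx = (np.roll(u4, -1) - 2.0 * u4 + np.roll(u4, 1)) / dx ** 2
    u3_x = (np.roll(u3, -1) - np.roll(u3, 1)) / (2.0 * dx)
    return u + dt * (u4_xx + lam * u3_x)

for _ in range(1000):
    u = step(u)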
References
1. Buckmaster, J. (1977). Viscous sheets advancing over dry beds. Journal of Fluid Mechanics, 81(4), 735–756.
2. Smith, S. H. (1969). A non-linear analysis of steady surface waves on a thin sheet of viscous liquid flowing down an incline. Journal of Engineering Mathematics, 3(3), 173–179.
3. Smith, P. (1969). On steady long waves on a viscous liquid at small Reynolds number. Journal of Engineering Mathematics, 3(3), 181–187.
Results for 'Lluís Godo'
Francesc Esteva Lluís Godo Franco Montagna.Franco Montagna - 2004 - Studia Logica 76:155-194.
Logics in Logic and Philosophy of Logic
Los godos en De correptione donatistarum (Ep. 185).James S. Alexander & José Anoz - 1999 - Augustinus 44 (172-75):29-34.
Aristotle's Doctrine on Pleasure Godo Lieberg: Die Lehre von der Lust in den Ethiken des Aristoteles. (Zetemata, Heft 19.) Pp. 130. Munich: Beck, 1958. Paper, DM. 15. [REVIEW] G. B. Kerferd - 1960 - The Classical Review 10 (02):118-120.
Aristotle: Pleasure in Ancient Greek and Roman Philosophy
Waiting for Godo... and Godan: Completing Rowe's Critique of the Ontological Argument.Roslyn Weiss - 2017 - European Journal for Philosophy of Religion 9 (1):65--86.
In his critique of Anselm's ontological argument for God's existence, William Rowe introduces the concepts of "magico" and "magican" — defining "magicos" as magicians that do not exist, and "magicans" as magicians that do exist — to help diagnose what may have gone wrong in Anselm's argument. As I made my way through Rowe's intriguing article, I found myself waiting for "Godo" — and for "Godan." I expected Rowe to invoke these counterparts to his "magico" and "magican" — a (...) non-existing God to correspond to his non-existing magician, and an existing God to correspond to his existing magician — to complete his argument. Alas, like Vladimir and Estragon, I waited in vain: neither Godo — nor Godan — ever appeared. In what follows I shall argue that their inclusion in Rowe's argument would have settled the matter against Anselm far more decisively than do Rowe's forays into the murky waters of question-begging.
A proof of standard completeness for Esteva and Godo's logic MTL.Sándor Jenei & Franco Montagna - 2002 - Studia Logica 70 (2):183-192.
In the present paper we show that any at most countable linearly-ordered commutative residuated lattice can be embedded into a commutative residuated lattice on the real unit interval [0, 1]. We use this result to show that Esteva and Godo''s logic MTL is complete with respect to interpretations into commutative residuated lattices on [0, 1]. This solves an open problem raised in.
Nonclassical Logics in Logic and Philosophy of Logic
Semantics for Modal Logic in Logic and Philosophy of Logic
On the logical structure of de Finetti's notion of event.Tommaso Flaminio, Lluis Godo & Hykel Hosni - 2014 - Journal of Applied Logic 12 (3):279-301.
This paper sheds new light on the subtle relation between probability and logic by (i) providing a logical development of Bruno de Finetti's conception of events and (ii) suggesting that the subjective nature of de Finetti's interpretation of probability emerges in a clearer form against such a logical background. By making explicit the epistemic structure which underlies what we call Choice-based probability we show that whilst all rational degrees of belief must be probabilities, the converse doesn't hold: some probability values (...) don't represent decision-relevant quantifications of uncertainty.
Betting Interpretations and Dutch Books in Philosophy of Probability
Subjective Probability in Philosophy of Probability
Kripke semantics, undecidability and standard completeness for Esteva and Godo's logic MTL∀.Franco Montagna & Hiroakira Ono - 2002 - Studia Logica 71 (2):227-245.
The present paper deals with the predicate version MTL∀ of the logic MTL by Esteva and Godo. We introduce a Kripke semantics for it, along the lines of Ono's Kripke semantics for the predicate version of FLew (cf. [O85]), and we prove a completeness theorem. Then we prove that every predicate logic between MTL∀ and classical predicate logic is undecidable. Finally, we prove that MTL∀ is complete with respect to the standard semantics, i.e., with respect to Kripke frames on (...) the real interval [0,1], or equivalently, with respect to MTL-algebras whose lattice reduct is [0,1] with the usual order.
A complete many-valued logic with product-conjunction.Petr Hájek, Lluis Godo & Francesc Esteva - 1996 - Archive for Mathematical Logic 35 (3):191-208.
A simple complete axiomatic system is presented for the many-valued propositional logic based on the conjunction interpreted as product, the coresponding implication (Goguen's implication) and the corresponding negation (Gödel's negation). Algebraic proof methods are used. The meaning for fuzzy logic (in the narrow sense) is shortly discussed.
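As an illustration of the truth functions mentioned in this abstract (a sketch added here, not from the paper), product conjunction, its residuum (Goguen implication) and Gödel negation on [0, 1] can be written as:

# Sketch (not from the paper): truth functions of the three connectives on [0, 1].
def conj(x, y):           # product conjunction
    return x * y

def goguen_impl(x, y):    # residuum of the product t-norm (Goguen implication)
    return 1.0 if x <= y else y / x

def goedel_neg(x):        # Goedel negation, i.e. the implication x -> 0
    return 1.0 if x == 0.0 else 0.0

assert goguen_impl(0.8, 0.4) == 0.5
assert goedel_neg(0.0) == 1.0 and goedel_neg(0.3) == 0.0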
Fuzzy Logic in Logic and Philosophy of Logic
Fuzzy Inference as Deduction.Lluís Godo & Petr Hájek - 1999 - Journal of Applied Non-Classical Logics 9 (1):37-60.
ABSTRACT The term fuzzy logic has two different meanings - broad and narrow. In Zadeh's opinion, fuzzy logic is an extension of many-valued logic but having a different agenda - as generalized modus ponens, max-min inference, linguistic quantifiers etc. The question we address in this paper is whether there is something in Zadeh's specific agenda which cannot be grasped by "classical", "traditional" mathematical logic. We show that much of fuzzy logic can be understood as classical deduction in a many-sorted many-valued Pavelka-Lukasiewicz style rational quantification logic. This means that, besides the linguistic or approximation aspects, the logical aspect is present too and can be made explicit.
On the standard and rational completeness of some axiomatic extensions of the monoidal t-Norm logic.Francesc Esteva, Joan Gispert, Lluís Godo & Franco Montagna - 2002 - Studia Logica 71 (2):199 - 226.
The monoidal t-norm based logic MTL is obtained from Hájek's Basic Fuzzy logic BL by dropping the divisibility condition for the strong (or monoidal) conjunction. Recently, Jenei and Montagna have shown MTL to be standard complete, i.e. complete with respect to the class of residuated lattices in the real unit interval [0,1] defined by left-continuous t-norms and their residua. Its corresponding algebraic semantics is given by pre-linear residuated lattices. In this paper we address the issue of standard and rational completeness (...) (rational completeness meaning completeness with respect to a class of algebras in the rational unit interval [0,1]) of some important axiomatic extensions of MTL corresponding to well-known parallel extensions of BL. Moreover, we investigate varieties of MTL algebras whose linearly ordered countable algebras embed into algebras whose lattice reduct is the real and/or the rational interval [0,1]. These embedding properties are used to investigate finite strong standard and/or rational completeness of the corresponding logics.
The $L\Pi$ and $L\Pi\frac{1}{2}$ logics: two complete fuzzy systems joining Łukasiewicz and Product Logics. [REVIEW] Francesc Esteva, Lluís Godo & Franco Montagna - 2001 - Archive for Mathematical Logic 40 (1):39-67.
In this paper we provide a finite axiomatization (using two finitary rules only) for the propositional logic (called $L\Pi$) resulting from the combination of Lukasiewicz and Product Logics, together with the logic obtained from $L\Pi$ by adding a constant symbol and a defining axiom for $\frac{1}{2}$, called $L\Pi\frac{1}{2}$. We show that $L\Pi\frac{1}{2}$ contains all the most important propositional fuzzy logics: Lukasiewicz Logic, Product Logic, Gödel's Fuzzy Logic, Takeuti and Titani's (...) Propositional Logic, Pavelka's Rational Logic, Pavelka's Rational Product Logic, the Lukasiewicz Logic with $\Delta$, and the Product and Gödel's Logics with $\Delta$ and involution. Standard completeness results are proved by means of investigating the algebras corresponding to $L\Pi$ and $L\Pi\frac{1}{2}$. For these algebras, we prove a theorem of subdirect representation and we show that linearly ordered algebras can be represented as algebras on the unit interval of either a linearly ordered field, or of the ordered ring of integers, Z. (shrink)
Coherence in the aggregate: a betting method for belief functions on many-valued events.Tommaso Flaminio, Lluis Godo & Hykel Hosni - unknowndetails
Betting methods, of which de Finetti's Dutch Book is by far the most well-known, are uncertainty modelling devices which accomplish a twofold aim. Whilst providing an interpretation of the relevant measure of uncertainty, they also provide a formal definition of coherence. The main purpose of this paper is to put forward a betting method for belief functions on MV-algebras of many-valued events which allows us to isolate the corresponding coherence criterion, which we term coherence in the aggregate. Our framework generalises (...) the classical Dutch Book method. (shrink)
Axiomatization of non-associative generalisations of Hájek's BL and psBL.Yaroslav Petrukhin - 2020 - Journal of Applied Non-Classical Logics 30 (1):1-15.details
ABSTRACT In this paper, we consider non-associative generalisations of Hájek's logics BL and psBL. As it was shown by Cignoli, Esteva, Godo, and Torrens, the former is the logic of continuous t-norms and their residua. Botur introduced the logic naBL, which is the logic of non-associative continuous t-norms and their residua. Thus, naBL can be viewed as a non-associative generalisation of BL. However, Botur has not presented an axiomatization of naBL. We fill this gap by constructing an adequate Hilbert-style calculus for naBL. (...) Although, as was shown by Flondor, Georgescu, and Iorgulescu, there are no non-commutative continuous t-norms, Hájek's psBL can be viewed as BL's non-commutative generalisation. We present the logic psnaBL of psnaBL-algebras, which can be viewed as naBL's non-commutative generalisation as well as psBL's non-associative generalisation and BL's both non-commutative and non-associative generalisation. (shrink)
Using the Arts to Spread Health, Peace and Community Wellbeing in Rural Kenya.Araceli Alonso Rodriguez - 2021 - Araucaria 23 (48).details
This article tells the empirical story of women from seven villages of Kwale, the most southeastern county in the Coast Province of Kenya that borders with Tanzania—Lunga Lunga, Godo, Perani, Umoja, Maasailand, Mpakani and Jirani—as they searched for community health, equity, gender equality and peace on their own terms. This article shows that creative health initiatives can be successfully used as mechanisms for peace building. Since 2010, the Nikumbuke-Health by All Means projects from the University of Madison-Wisconsin in the (...) United States, have trained 57 health promoters and 32 female actors on disease prevention and health promotion that have outreached approximately 120,000 inhabitants around the county enhancing unity in diversity, and breaking down the walls of ethnic hostilities and prejudice. Because of its low cost and high effectivity, the United Nations awarded N-HbAM[1] the 2013 Public Service Award as a model of best practice in gender, community development and sustainable wellbeing. [1] In 2013, at the time of the UNPS Award the projects were still called Nikumbuke-Health by Motorbike. The name was changed in 2018 as the projects broadened in scope and geographical location. (shrink)
Advances in the ŁΠ and ŁΠ½ logics.Petr Cintula - 2003 - Archive for Mathematical Logic 42 (5):449-468.details
The ŁΠ and ŁΠ½ logics were introduced by Godo, Esteva and Montagna. These logics extend many other known propositional and predicate logics, including the three mainly investigated ones (Gödel, product and Łukasiewicz logic). The aim of this paper is to show some advances in this field. We will see further reduction of the axiomatic systems for both logics. Then we will see many other logics contained in the ŁΠ family of logics (namely logics induced by the continuous finitely constructed t-norms (...) and Takeuti and Titani's fuzzy predicate logic). (shrink)
Łukasiewicz Negation and Many-Valued Extensions of Constructive Logics.Thomas Macaulay Ferguson - 2014 - In Proc. 44th International Symposium on Multiple-Valued Logic. IEEE Computer Society Press. pp. 121-127.details
This paper examines the relationships between the many-valued logics G~ and Gn~ of Esteva, Godo, Hájek, and Navara, i.e., Gödel logic G enriched with Łukasiewicz negation, and neighbors of intuitionistic logic. The popular fragments of Rauszer's Heyting-Brouwer logic HB admit many-valued extensions similar to G which may likewise be enriched with Łukasiewicz negation; the fuzzy extensions of these logics, including HB, are equivalent to G~, as are their n-valued extensions equivalent to Gn~ for any n ≥ 2. These (...) enriched systems extend Wansing's logic I4C4, showing that Łukasiewicz negation is a species of Nelson's negation of constructible falsity and yielding a Kripke-style semantics for G~ and Gn~ to complement the many-valued semantics. (shrink)
Intuitionistic Logic in Logic and Philosophy of Logic
Many-Valued Logic in Logic and Philosophy of Logic
Paraconsistent Logic in Logic and Philosophy of Logic
Propositional Logic in Logic and Philosophy of Logic
Søren Kierkegaards Papirer.Søren Kierkegaard & Niels Thulstrup - 1968 - Det Danske Sprog- og Litteraturselskab og Søren Kierkegaard Selskabet.details
Weighted Logics for Artificial Intelligence – 2.Lluis Godo, Henri Prade & Guilin Qi - 2015 - Journal of Applied Logic 13 (4):395-396.details
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
How Important Are CEOs to CSR Practices? An Analysis of the Mediating Effect of the Perceived Role of Ethics and Social Responsibility.José-Luis Godos-Díez, Roberto Fernández-Gago & Almudena Martínez-Campillo - 2011 - Journal of Business Ethics 98 (4):531-548.details
Drawing on the Agency-Stewardship approach, which suggests that manager profile may range from the agent model to the steward model, this article aims to examine how important CEOs are to corporate social responsibility (CSR). Specifically, this exploratory study proposes the existence of a relationship between manager profile and CSR practices and that this relation is mediated by the perceived role of ethics and social responsibility. After applying a mediated regression analysis using survey information collected from 149 CEOs in Spain, results (...) show that those closer to the steward model are more inclined to attach great importance to ethics and social responsibility, and to implement CSR practices in their companies. Results also provide support for the suggested mediating effect. Thus, this article extends research in understanding top managers as drivers for CSR and suggests new ways to deal with this issue empirically. (shrink)
Haribhadra's Yoga Works and Psychosynthesis.Kenneth G. Zysk & S. M. Desai - 1984 - Journal of the American Oriental Society 104 (4):788.details
Yoga in Asian Philosophy
Business Education and Idealism as Determinants of Stakeholder Orientation.Jose-Luis Godos-Díez, Roberto Fernández-Gago & Laura Cabeza-García - 2015 - Journal of Business Ethics 131 (2):439-452.details
This paper based on the distinction between the instrumental and normative views of stakeholder management explores how business education and personal moral philosophies may influence the orientation adopted by an individual. A mediated regression analysis using survey information collected from 206 Spanish university students showed that those exposed to management theories were less willing to consider stakeholders when making business decisions if the consequent economic impacts on the firm were omitted. The results also provided support for a negative effect of (...) business education on idealism and a mediating effect of the latter on the relationship between education and stakeholder management orientation. This study thus raises awareness on the influence of business education on individuals' ethical decision-making processes and suggests some possible changes for business education. (shrink)
Martin Buber's Life and Work: The Early Years, 1878-1923.Martin Friedman & Maurice S. Friedman - 1981 - Dutton Adult.details
This comprehensive, critical account of Buber's life and work incorporates extensive research and draws on the author's experiences as Buber's coworker and friend.
Phenomenology in Continental Philosophy
Martin Buber's Life and Work: The Middle Years, 1923-1945.Maurice S. Friedman - 1983 - Dutton Adult.details
Traces the development of the famous theologian's philosophy as he faced the challenges of the Weimar Republic, Nazi Germany, and prewar Palestine.
On the relation between possibilistic logic and modal logics of belief and knowledge.Mohua Banerjee, Didier Dubois, Lluis Godo & Henri Prade - 2017 - Journal of Applied Non-Classical Logics 27 (3-4):206-224.details
Possibilistic logic and modal logic are knowledge representation frameworks sharing some common features, such as the duality between possibility and necessity, and the decomposability of necessity for conjunctions, as well as some obvious differences since possibility theory is graded. At the semantic level, possibilistic logic relies on possibility distributions and modal logic on accessibility relations. In the last 30 years, there have been a series of attempts for bridging the two frameworks in one way or another. In this paper, we (...) compare the relational semantics of epistemic logics with simpler possibilistic semantics of a fragment of such logics that only uses modal formulas of depth 1. This minimal epistemic logic handles both all-or-nothing beliefs and explicitly ignored facts. We also contrast epistemic logic with the S5-based rough set logic. Finally, this paper presents extensions of generalized possibilistic logic with objective and non-nested multimodal formulas, in the style of modal logics KD45 and S5. (shrink)
Victims' Rights and Distributive Justice: In Search of Actors.Jemima García-Godos - 2013 - Human Rights Review 14 (3):241-255.details
The aim of this article is to discuss the role that victim groups and organizations may have in framing and supporting an accountability agenda, as well as their potential for endorsing a distributive justice agenda. The article explores two empirical cases where victims' rights have been introduced and applied by victim organizations to promote accountability—Colombia and Peru. It will be argued that if transitional justice in general and victim reparations in particular are to embark in a quest for distributive justice, (...) it cannot do so without considering victims as political actors, and putting forward demands in terms of victims' rights. (shrink)
Human Rights in Social and Political Philosophy
Fuzzy inference.D. Boixader & L. Godo - 1998 - In Enrique H. Ruspini, Piero Patrone Bonissone & Witold Pedrycz (eds.), Handbook of Fuzzy Computation. Institute of Physics.details
L'oeuvre d'art contre la société du mépris: réinventer la vie intérieure.Emmanuel Godo - 2015 - Paris: Les éditions du Cerf.details
The Role of Innovation Regimes and Policy for Creating Radical Innovations: Comparing Some Aspects of Fuel Cells and Hydrogen Technology Development With the Development of Internet and GSM.Helge Godoe - 2006 - Bulletin of Science, Technology and Society 26 (4):328-338.details
Telegraphy, the distant ancestor of Internet and GSM, was invented by Samuel Morse in 1838. One year later, William Grove invented the fuel cell. Although numerous highly successful innovations stemming from telegraphy may be observed, the development of fuel cells has been insignificant, slow, and erratic and has not yet resulted in notable positive socioeconomic effects. By comparing the modern development of fuel cells and hydrogen technology, that is, a potential radical innovation in energy generation, with some aspects related to (...) the evolution of two highly successful radical innovations, Internet and GSM, the author focuses on the role of innovation regimes and policy in a sectorial system of innovations perspective. In the slow pace in fuel cells and hydrogen technology development, two factors seem to interact negatively: weak and fragmented innovation regimes in the energy sector and the current hegemony of market-oriented R&D policies. (shrink)
Die muse Des properz und seine dichterweihe.Godo Lieberg - 1963 - Philologus: Zeitschrift für Antike Literatur Und Ihre Rezeption 107 (1-2):263-270.details
Hat plautus die szene IV 8 der aulularia eingeschoben?Godo Lieberg - 1992 - Philologus: Zeitschrift für Antike Literatur Und Ihre Rezeption 136 (1):71-80.details
Logical approaches to fuzzy similarity-based reasoning: an overview.Lluís Godo & Ricardo O. Rodríguez - 2008 - In Giacomo Della Riccia, Didier Dubois & Hans-Joachim Lenz (eds.), Preferences and Similarities. Springer. pp. 75--128.details
Extending a temporal defeasible argumentation framework with possibilistic weights.Lluís Godo, Enrico Marchioni & Pere Pardo - 2012 - In Luis Farinas del Cerro, Andreas Herzig & Jerome Mengin (eds.), Logics in Artificial Intelligence. Springer. pp. 242--254.details
Philosophy of Artificial Intelligence in Philosophy of Cognitive Science
Godel's Proof.S. R. Peterson - 1961 - Philosophical Quarterly 11 (45):379.details
In 1931 the mathematical logician Kurt Gödel published a revolutionary paper that challenged certain basic assumptions underpinning mathematics and logic. A colleague of Albert Einstein, his theorem proved that mathematics was partly based on propositions not provable within the mathematical system and had radical implications that have echoed throughout many fields. A gripping combination of science and accessibility, Gödel's Proof by Nagel and Newman is for both mathematicians and the idly curious, offering those with a taste for logic and philosophy (...) the chance to satisfy their intellectual curiosity. (shrink)
Philosophy of Mathematics, Misc in Philosophy of Mathematics
Kierkegaard's Writings, Xxii: The Point of View.Søren Kierkegaard - 1978 - Princeton University Press.details
As a spiritual autobiography, Kierkegaard's The Point of View for My Work as an Author stands among such great works as Augustine's Confessions and Newman's Apologia pro Vita Sua. Yet Point of View is neither a confession nor a defense; it is an author's story of a lifetime of writing, his understanding of the maze of greatly varied works that make up his oeuvre. Upon the imminent publication of the second edition of Either/Or, Kierkegaard again intended to cease writing. Now (...) was the time for a direct "report to history" on the authorship as a whole. In addition to Point of View, which was published posthumously, the present volume also contains On My Work as an Author, a contemporary substitute, and the companion piece Armed Neutrality. (shrink)
Søren Kierkegaard in 19th Century Philosophy
Hegel's conception of nature.S. Alexander - 1886 - Mind 11 (44):495-523.details
Hegel: Philosophy of Nature in 19th Century Philosophy
Amor et Roma apud Propertium, Tibullum, Ovidium.Godo Lieberg - 2002 - Hermes 130 (4):433-448.details
Aristotle's metaphysics.S. Marc Cohen - 2016 - Stanford Encyclopedia of Philosophy.details
The first major work in the history of philosophy to bear the title "Metaphysics" was the treatise by Aristotle that we have come to know by that name. But Aristotle himself did not use that title or even describe his field of study as 'metaphysics'; the name was evidently coined by the first century C.E. editor who assembled the treatise we know as Aristotle's Metaphysics out of various smaller selections of Aristotle's works. The title 'metaphysics' -- literally, 'after the Physics' (...) -- very likely indicated the place the topics discussed therein were intended to occupy in the philosophical curriculum. They were to be studied after the treatises dealing with nature (ta phusika). In this entry, we discuss the ideas that are developed in Aristotle's treatise. (shrink)
Aristotle's Works: The Metaphysics in Ancient Greek and Roman Philosophy
Aristotle: Essence in Ancient Greek and Roman Philosophy
Aristotle: Form and Matter in Ancient Greek and Roman Philosophy
Aristotle: Substance in Ancient Greek and Roman Philosophy
Aristotle: Substantial Forms in Ancient Greek and Roman Philosophy
Aristotle: The Zeta Problem in Ancient Greek and Roman Philosophy
A defeasible reasoning model of inductive concept learning from examples and communication.Santiago Ontañón, Pilar Dellunde, Lluís Godo & Enric Plaza - 2012 - Artificial Intelligence 193 (C):129-148.details
Einstein's "Zur Elektrodynamik..." Revisited, With Some Consequences.S. D. Agashe - 2006 - Foundations of Physics 36 (7):955-1011.details
Einstein, in his "Zur Elektrodynamik bewegter Körper", gave a physical (operational) meaning to "time" of a remote event in describing "motion" by introducing the concept of "synchronous stationary clocks located at different places". But with regard to "place" in describing motion, he assumed without analysis the concept of a system of co-ordinates.In the present paper, we propose a way of giving physical (operational) meaning to the concepts of "place" and "co-ordinate system", and show how the observer can define both the (...) place and time of a remote event. Following Einstein, we consider another system "in uniform motion of translation relatively to the former". Without assuming "the properties of homogeneity which we attribute to space and time", we show that the definitions of space and time in the two systems are linearly related. We deduce some novel consequences of our approach regarding faster-than-light observers and particles, "one-way" and "two-way" velocities of light, symmetry, the "group property" of inertial reference frames, length contraction and time dilatation, and the "twin paradox". Finally, we point out a flaw in Einstein's argument in the "Electrodynamical Part" of his paper and show that the Lorentz force formula and Einstein's formula for transformation of field quantities are mutually consistent. We show that for faster-than-light bodies, a simple modification of Planck's formula for mass suffices. (Except for the reference to Planck's formula, we restrict ourselves to Physics of 1905.). (shrink)
Special Relativity in Philosophy of Physical Science
Newton's Principia for the Common Reader.S. Chandrasekhar - 1995 - Oxford University Press UK.details
Newton's Philosophiae Naturalis Principia Mathematica provides a coherent and deductive presentation of his discovery of the universal law of gravitation. It is very much more than a demonstration that 'to us it is enough that gravity really does exist and act according to the laws which we have explained and abundantly serves to account for all the motions of the celestial bodies and the sea'. It is important to us as a model of all mathematical physics. Representing a decade's work from (...) a distinguished physicist, this is the first comprehensive analysis of Newton's Principia without recourse to secondary sources. Professor Chandrasekhar analyses some 150 propositions which form a direct chain leading to Newton's formulation of his universal law of gravitation. In each case, Newton's proofs are arranged in a linear sequence of equations and arguments, avoiding the need to unravel the necessarily convoluted style of Newton's connected prose. In almost every case, a modern version of the proofs is given to bring into sharp focus the beauty, clarity, and breath-taking economy of Newton's methods. Subrahmanyan Chandrasekhar is one of the most renowned scientists of the twentieth century, whose career spanned over 60 years. Born in India, educated at the University of Cambridge in England, he served as Emeritus Morton D. Hull Distinguished Service Professor of Theoretical Astrophysics at the University of Chicago, where he was based from 1937 until his death in 1996. His early research into the evolution of stars is now a cornerstone of modern astrophysics, and earned him the Nobel Prize for Physics in 1983. Later work into gravitational interactions between stars, the properties of fluids, magnetic fields, equilibrium ellipsoids, and black holes has earned him awards throughout the world, including the Gold Medal from the Royal Astronomical Society in London, the National Medal of Science in the United States, and the Copley Medal from the Royal Society. His many publications include Radiative transfer, Hydrodynamic and hydromagnetic stability, and The mathematical theory of black holes, each being praised for its breadth and clarity. Newton's Principia for the common reader is the result of Professor Chandrasekhar's profound admiration for a scientist whose work he believed is unsurpassed, and unsurpassable. (shrink)
Isaac Newton in 17th/18th Century Philosophy
Bradley's Metaphysics and the Self. [REVIEW]S. C. A. - 1971 - Review of Metaphysics 25 (2):373-373.details
An able and clear defense of Bradley's principal theses and the underlying conception of the metaphysical enterprise. "This is a book about a metaphysician, about metaphysics, and, most importantly, it attempts to develop elements of a metaphysical position along the lines of what is called Absolute Idealism." The Introduction takes up the Verificationists [sic] argument and two recent accounts of metaphysics. Part I devotes ten Chapters to the elucidation and defense of Bradley's conception of reality. It culminates in examining three alternative (...) accounts of "Real". Part II considers "the major philosophical theories of the self in order to defend Bradley's Theory of the self within his metaphysical scheme."--A. S. C. (shrink)
Francis Herbert Bradley in 19th Century Philosophy
Residuated fuzzy logics with an involutive negation.Francesc Esteva, Lluís Godo, Petr Hájek & Mirko Navara - 2000 - Archive for Mathematical Logic 39 (2):103-124.details
Residuated fuzzy logic calculi are related to continuous t-norms, which are used as truth functions for conjunction, and their residua as truth functions for implication. In these logics, a negation is also definable from the implication and the truth constant $\overline{0}$ , namely $\neg \varphi$ is $\varphi \to \overline{0}$. However, this negation behaves quite differently depending on the t-norm. For a nilpotent t-norm (a t-norm which is isomorphic to Łukasiewicz t-norm), it turns out that $\neg$ is an involutive negation. However, (...) for t-norms without non-trivial zero divisors, $\neg$ is Gödel negation. In this paper we investigate the residuated fuzzy logics arising from continuous t-norms without non-trivial zero divisors and extended with an involutive negation. (shrink)
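The contrast drawn in this abstract between the two derived negations can be checked numerically. The sketch below is my own and assumes the standard Łukasiewicz and product t-norms on [0, 1]; it evaluates the negation x -> 0 under each residuum:

```python
def lukasiewicz_neg(x: float) -> float:
    """x -> 0 under the Lukasiewicz residuum min(1, 1 - x + y): gives 1 - x."""
    return 1.0 - x

def product_neg(x: float) -> float:
    """x -> 0 under the product (Goguen) residuum: Goedel negation."""
    return 1.0 if x == 0.0 else 0.0

x = 0.3
print(lukasiewicz_neg(lukasiewicz_neg(x)))  # approximately 0.3: involutive
print(product_neg(product_neg(x)))          # 1.0: not involutive
```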
Carnap's dream: Gödel, Wittgenstein, and Logical, Syntax.S. Awodey & A. W. Carus - 2007 - Synthese 159 (1):23-45.details
In Carnap's autobiography, he tells the story how one night in January 1931, "the whole theory of language structure" in all its ramifications "came to [him] like a vision". The shorthand manuscript he produced immediately thereafter, he says, "was the first version" of Logical Syntax of Language. This document, which has never been examined since Carnap's death, turns out not to resemble Logical Syntax at all, at least on the surface. Wherein, then, did the momentous insight of 21 January 1931 (...) consist? We seek to answer this question by placing Carnap's shorthand manuscript in the context of his previous efforts to accommodate scientific theories and metalinguistic claims within Wittgenstein's Tractatus theory of meaning. The breakthrough of January 1931 consists, from this viewpoint, in the rejection of the Tractatus theory in favor of the meta-mathematical perspective of Hilbert, Gödel, and Tarski. This was not yet the standpoint of the published Logical Syntax, as we show, but led naturally to the "principle of tolerance" and thus to Carnap's mature philosophy, in which the inconsistencies between this first view and the principle of tolerance, which survived into the published Syntax, were overcome. (shrink)
Carnap's Intellectual Context in 20th Century Philosophy
Carnap: Der Logische Aufbau Der Welt in 20th Century Philosophy
Carnap: Logical Syntax of Language in 20th Century Philosophy
Carnap: Philosophy of Logic in 20th Century Philosophy
Ludwig Wittgenstein in 20th Century Philosophy
Martin Buber's Life and Work: The Later Years, 1945-1965.Maurice S. Friedman - 1983 - Dutton Adult.details
A biography of the noted philosopher and Jewish theologian focuses on the years in which Buber became internationally acclaimed for his work as an author, philosopher, and peacemaker.
The Divine Mistress Godo Lieberg: Puella Divina. Die Gestalt der göttlichen Geliebten bei Catull im Zusammenhang der antiken Dichtung. Pp. 343. Amsterdam: Schippers, 1962. Cloth, fl. 38. [REVIEW]E. J. Kenney - 1964 - The Classical Review 14 (01):42-43.details
Sartre's "Being and nothingness".S. Gardner - unknowndetails
Sebastian Gardner competently tackles one of Sartre's more complex and challenging works in this new addition to the Reader's Guides series.
"Here's My Dilemma". Moral Case Deliberation as a Platform for Discussing Everyday Ethics in Elderly Care.S. Dam, T. A. Abma, M. J. M. Kardol & G. A. M. Widdershoven - 2012 - Health Care Analysis 20 (3):250-267.details
Our study presents an overview of the issues that were brought forward by participants of a moral case deliberation (MCD) project in two elderly care organizations. The overview was inductively derived from all case descriptions (N = 202) provided by participants of seven mixed MCD groups, consisting of care providers from various professional backgrounds, from nursing assistant to physician. The MCD groups were part of a larger MCD project within two care institutions (residential homes and nursing homes). Care providers are (...) confronted with a wide variety of largely everyday ethical issues. We distinguished three main categories: 'resident's behavior', 'divergent perspectives on good care' and 'organizational context'. The overview can be used for agendasetting when institutions wish to stimulate reflection and deliberation. It is important that an agenda is constructed from the bottom-up and open to a variety of issues. In addition, organizing reflection and deliberation requires effort to identify moral questions in practice whilst at the same time maintaining the connection with the organizational context and existing communication structures. Once care providers are used to dealing with divergent perspectives, inviting different perspectives (e.g. family members) to take part in the deliberation, might help to identify and address ethical 'blind spots'. (shrink)
Nursing Ethics in Applied Ethics
Freese's Pro Murena_- M. Tullii Ciceronis pro L. Murena oratio ad indices. Edited with introduction and notes by J. H. Freese, M.A. London, Macmillan & Co.: 1894. fp. 8vo. Price 2 _s._ 6 _d.. [REVIEW]S. W. A. - 1894 - The Classical Review 8 (10):467-.details
Hellenistic and Later Ancient Philosophy, Misc in Ancient Greek and Roman Philosophy
Nicolas Fuss
Nicolas Fuss (29 January 1755 – 4 January 1826), also known as Nikolai Fuss, was a Swiss mathematician, living most of his life in Imperial Russia.
Born: 29 January 1755, Basel, Switzerland
Died: 4 January 1826 (aged 70), Saint Petersburg, Russian Empire
Fields: Mathematics
Academic advisors: Leonhard Euler
Biography
Fuss was born in Basel, Switzerland. He moved to Saint Petersburg to serve as a mathematical assistant to Leonhard Euler from 1773 to 1783, and remained there until his death. He contributed to spherical trigonometry, differential equations, the optics of microscopes and telescopes, differential geometry, and actuarial science. He also contributed to Euclidean geometry, including the problem of Apollonius.
In 1797, he was elected a foreign member of the Royal Swedish Academy of Sciences. From 1800 to 1826, Fuss served as the permanent secretary to the Imperial Academy of Sciences in Saint Petersburg. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1812.[1] He died in Saint Petersburg.
Family
Nicolas Fuss was married to Albertine Benedikte Philippine Luise Euler (1766-1822). Albertine Euler was the daughter of Leonhard Euler's eldest son Johann Albrecht Euler (1734-1800) and his wife Anna Sophie Charlotte Hagemeister. Pauline Fuss, a daughter of Nicolas and Albertine, married the Russian chemist Genrikh Struve. Nicolas's son Paul Heinrich Fuss (1798-1855)[2] edited the first attempt at a collected works of Euler.[3] Paul Heinrich was a member of the Imperial Academy of Sciences in Saint Petersburg from 1823 and its secretary from 1826.[2] Nicolas's son Georg Albert (1806–54)[2] was an astronomer in Pulkovo from 1839 and then in Vilnius from 1848, and also published on magnetism.[4]
See also
• Catenary
• Fuss' theorem for bicentric quadrilaterals
• Fuss–Catalan number
References
1. "Book of Members, 1780–2010: Chapter F" (PDF). American Academy of Arts and Sciences. Retrieved 28 July 2014.
2. "Fuß".
3. "Historical and Biographical Resources".
4. "Geogr., magnet. u. hypsometr. Bestimmungen auf e. Reise nach Sibirien u. China in d. J. 1830-32". Mémoires de l'Académie Impériale des Sciences de St. Pétersbourg. Série VI, Tome III, 1838.
• Rudolf Mumenthaler: Fuss, Niklaus in German, French and Italian in the online Historical Dictionary of Switzerland., 2006
• Kurt-R. Biermann (1961), "Fuß, Nikolaus", Neue Deutsche Biographie (in German), vol. 5, Berlin: Duncker & Humblot, pp. 742–743; (full text online)
External links
• MacTutor History of Mathematics
• Nicolas Fuss at the Mathematics Genealogy Project
Quasianalytic Denjoy-Carleman classes and o-minimality
Authors: J.-P. Rolin, P. Speissegger and A. J. Wilkie
Journal: J. Amer. Math. Soc. 16 (2003), 751-777
MSC (2000): Primary 14P15, 03C64; Secondary 32S45
DOI: https://doi.org/10.1090/S0894-0347-03-00427-2
Abstract: We show that the expansion of the real field generated by the functions of a quasianalytic Denjoy-Carleman class is model complete and o-minimal, provided that the class satisfies certain closure conditions. Some of these structures do not admit analytic cell decomposition, and they show that there is no largest o-minimal expansion of the real field.
J.-P. Rolin
Affiliation: Laboratoire de Topologie, Université de Bourgogne, 9 Av. Alain Savary, B.P. 47870, 21078 Dijon Cedex, France
Email: [email protected]
P. Speissegger
Affiliation: Department of Mathematics, University of Wisconsin, 480 Lincoln Drive, Madison, Wisconsin 53706
MR Author ID: 361060
Email: [email protected]
A. J. Wilkie
Affiliation: Mathematical Institute, University of Oxford, 24-29 St. Giles', Oxford OX1 3LB, United Kingdom
Email: [email protected]
Keywords: Quasianalytic classes, o-minimal structures, resolution of singularities
Received by editor(s): February 19, 2001
Additional Notes: Supported in part by CNRS, NSERC grant OGP0009070 and NSF grant DMS-9988453 | CommonCrawl |
Significantly impaired shoulder function in the first years of rheumatoid arthritis: a controlled study
Annelie Bilberg1,
Tomas Bremell1,
Istvan Balogh2 &
Kaisa Mannerkorpi1,3
Patients with rheumatoid arthritis (RA) risk impaired shoulder function due to the inflammatory process. The knowledge of shoulder function in the early years of the disease is limited. The aim was to compare shoulder function and activity limitation related to the shoulder-arm-hand in women with RA in early disease course compared to age-matched healthy women.
This controlled cross-sectional study included 103 women with rheumatoid arthritis and a reference group of 103 age-matched healthy women. The mean age was 47.1 (SD 10.0) years, the mean disease duration was 20.3 (SD 8.5) months and the mean DAS28 score was 3.8 (SD 1.4) among the patients. Participants completed self-report questionnaires quantifying activity limitations. Shoulder function was assessed by isometric strength of the shoulder, shoulder–arm movement and shoulder pain. Hand-grip force was assessed, and tender and swollen joints were examined in the patients.
Patients showed significantly (p < 0.0001) impaired shoulder muscle strength and shoulder–arm movement, and more shoulder pain, compared with the reference group. Patients' shoulder muscle strength was approximately 65 % of that observed in the reference group. Activity limitations related to the shoulder–arm–hand (DASH) were significantly (p < 0.0001) higher in the patient group than in the reference group, indicating limitations in daily activities for the patients.
Patients with RA were found to have significantly impaired shoulder function already 1.5 years after disease onset compared to age-matched subjects. Reduced shoulder muscle strength was found to be associated with activity limitations (DASH) implying that screening of the shoulder function, emphasising the shoulder muscle strength, should be initiated from disease onset.
The shoulder is the third most common site of musculoskeletal pain in the general population [1] and the prevalence of shoulder symptoms has been reported to be somewhere between 7 and 27 % [2, 3]. Patients with rheumatoid arthritis (RA) have an additional risk of impaired shoulder function as a consequence of inflammation. Synovitis, bursitis and tendinitis causes decreased muscle strength, persistent pain, reduced range of motion and joint destruction, which may lead to functional loss and difficulties with daily activities. Shoulder function correlates with activity limitations in patients with RA [4–7] where pain, decreased muscle strength, a reduced active range of motion and the disease activity itself have been suggested to contribute to the limitation.
Traditionally, shoulder joint involvement in RA is considered to apply to patients with long-term disease [8–10] and to patients who are older at onset [11, 12]. Reduced range of shoulder motion and shoulder muscle strength is common among patients with established RA [5]. However, little is known about shoulder function in the early disease course. To our knowledge, shoulder function has only been sparsely studied with a focus on shoulder movement [13]. Because impaired shoulder function also occurs in the general adult population, we found it interesting to compare patients with RA in an early phase of the disease with a gender-matched and age-matched reference group of self-reported healthy individuals.
The aim of this study was to compare different aspects of shoulder function in women with RA during the first years of disease with that in an age-matched reference group of self-reported healthy women to show the impact of RA on the shoulder. Our hypothesis was that shoulder function in RA is reduced early in the disease course.
A multicentre, controlled cross-sectional study was conducted in the Region of Västra Götaland, West Sweden.
Patient group selection
Eligible patients were women aged 20–60 years who met the 1987 American College of Rheumatology criteria for RA, with a disease duration ranging from 6 months to 3 years. Exclusion criteria were other severe and chronic somatic or psychiatric diseases, shoulder arthroplasty, any unhealed fracture of the upper extremities, ongoing adhesive capsulitis or inability to read and speak Swedish. Patients were recruited from three rheumatology units at Sahlgrenska University Hospital, Skövde Hospital and Uddevalla Hospital following a search of the Swedish RA register and a review of the medical records of patients with RA from 2006 to 2008. One hundred and forty-three women were identified and invited to participate in the study; however, 13 patients did not meet the inclusion criteria due to other rheumatic diseases, not understanding the Swedish language or other severe concomitant disease. A further 27 patients could not be enrolled due to time restrictions or a lack of contact, or declined to participate, leaving a total of 103 patients. The study population has previously been included in another study of ours [14]. Because men have greater variability than women with regard to shoulder muscle strength, a reference group of men would require quite a large number of participants. Therefore no men were included in the study.
Information about demographic data and disease variables was obtained in interviews and from the patients' medical records. Examinations and administration of the questionnaires were carried out by four experienced physical therapists. Examinations of joints for assessment of tender and swollen joints in the patients were conducted under the supervision of an experienced rheumatologist. More than 70 % of the patients were rheumatoid factor (RF) positive and anti-cyclic citrullinated peptide (anti-CCP) positive and almost 40 % showed erosive changes within 2 years of disease duration. This indicates that the study population is representative of other RA populations in Scandinavia with a disease duration ranging from 6 months to 3 years [15, 16].
Reference group selection
A reference group was recruited through a newspaper advertisement and from the public sector in Gothenburg, selected according to age. Eligible participants were self-reported healthy women, aged 20–60 years. Exclusion criteria were the same as for the patient group, with an additional exclusion criterion of RA. Subjective shoulder symptoms such as pain did not lead to exclusion from participation in the reference group since 10.5–23.8 % of the general female population in Sweden report pain from the shoulder–upper arm [17]. Participants with shoulder arthroplasty, unhealed fracture of the upper extremities and ongoing adhesive capsulitis were excluded because of the inability to perform the physical performance-based tests correctly without risking the person's condition.
One hundred and twenty-one women who met the selection criteria and volunteered to participate were, after the assessments, individually matched for age with the patients by a computer program for randomisation, leaving a total of 103 participants in each group.
Information about demographic data, possible diseases and shoulder trauma was obtained by interview. Examinations and administration of the questionnaires were carried out by two experienced physical therapists.
Written and oral study information was provided to all study participants and written consent was obtained. The study was approved by the Regional Ethical Review Board in Gothenburg, Sweden.
Shoulder function was assessed using three variables: muscle strength of the shoulder, active shoulder–arm movement and shoulder pain during movement.
The isometric muscle strength of the shoulder abductor muscles was assessed with an ISOBEX 3.0 dynamometer (Cursor AG, Bern, Switzerland) [18], which measures isometric strength in kilograms. Strength was recorded in a seated position and the tested arm in lateral elevation of 90° in the scapular plane. A strap from the dynamometer, attached to the floor, was placed proximal to the wrist. The patient was instructed to elevate the arm from the original position as much as possible for 5 seconds. The best performance out of three was selected [19].
Active shoulder–arm movement was assessed with the shoulder–arm movement impairment instrument [4, 20], measuring hand raising, hand to opposite shoulder, hand behind back, hand to neck and hand to seat. The score ranges from 1 to 6 (full ability) and the total sum score ranges from 5 to 30.
Shoulder pain during shoulder and arm movements was assessed by the Borg symptom scale [21]. The score ranges from 0 (no pain) to 10, and the total sum score ranges from 0 to 50.
Hand-grip force was assessed by a digital electronic dynamometer, the Grippit (AB Detektor Gothenburg, Sweden) [22, 23], which measures the grip force in Newtons. The device displays the best, the mean and the end value for the hand grip force for each test round. The mean grip force was used for assessment.
Activity limitations related to the shoulder–arm–hand were assessed with the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire [24, 25], a self-administered questionnaire to assess upper extremity disability and symptoms, comprising 30 items (each scored 1–5) concerning the patient's health status during the preceding week. The total score ranges from 0 to 100 (severe disability):
$$ \mathrm{DASH\ score} = \frac{\left(\text{sum of scores for all items}\right) - 30}{1.2}. $$
The mean DASH score for norm values for a general US population aged 19–75+ is 10 (standard deviation (SD) 15) [26].
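A small sketch of the scoring rule quoted above may help. It is illustrative only and assumes all 30 items were answered with scores 1-5 (the published DASH scoring also tolerates a few missing items, which is not handled here):

```python
def dash_score(item_scores):
    """DASH disability score (0-100) from 30 items each scored 1-5."""
    assert len(item_scores) == 30
    assert all(1 <= s <= 5 for s in item_scores)
    return (sum(item_scores) - 30) / 1.2

# Answering every item with 2 ("mild difficulty") gives 25.0, close to the
# patient-group mean of 25.7 reported in this study.
print(dash_score([2] * 30))
```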
General activity limitations among the patients were assessed with the Health Assessment Questionnaire (HAQ) Index [27, 28], which is a RA disease-specific instrument that measures eight aspects of activity during the previous week rated from 0 to 3 (severe difficulties). The total mean score is calculated from the eight aspects.
Disease activity was recorded by the Disease Activity Score of 28 joints (DAS28) [29] and is based on a calculation of the erythrocyte sedimentation rate (mm/hour), the number of swollen and tender joints (28-joint index) and self-reported general health scored on a visual analogue scale (0–100). A higher value indicates more disease activity.
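For orientation, the sketch below shows how the four components listed above are typically combined. It uses the commonly published DAS28-ESR formula and is only an illustration; the exact calculator referenced in [29] may differ in detail.

```python
import math

def das28_esr(tender28, swollen28, esr_mm_h, general_health_vas):
    """DAS28 from 28-joint tender/swollen counts, ESR (mm/h) and patient VAS (0-100)."""
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.70 * math.log(esr_mm_h)
            + 0.014 * general_health_vas)

# Example: 6 tender joints, 4 swollen joints, ESR 28 mm/h, VAS 45 gives about 4.9,
# i.e. moderate disease activity (the patient-group mean here was 3.8).
print(round(das28_esr(6, 4, 28, 45), 1))
```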
RF and anti-CCP for the patients were assessed with standard laboratory tests at the accredited laboratories of Sahlgrenska University Hospital.
The occurrence of erosions was assessed by radiographs of the hands, wrists and feet. The presence of erosions is a marker of disease severity.
Physical workload was assessed by type of work categories using a classification system [30]. The categories were 'heavy material handling', 'heavy repetitive', 'medium-heavy load', 'light repetitive' and 'administration/computer work'.
Physical activity during leisure time was assessed with the Leisure Time Physical Activity Instrument (LTPAI), which assesses the amount of physical activity during a typical week [31].
Descriptive data are presented as mean (SD) or median (range) and data for categorical variables as number (percentage). For comparison between RA patients and the reference group, the Mann–Whitney U test was used for continuous variables, the Mantel–Haenszel chi-square test for ordered categorical variables and Fisher's exact test for dichotomous variables. Calculations were made of 95 % confidence intervals for differences between the means.
The Wilcoxon signed-rank test was used for comparison of dominant and non-dominant arms. The Spearman correlation coefficient was used for the correlation analysis. The Mann–Whitney U test was used for comparisons of continuous variables between patients and healthy subjects reporting shoulder symptoms and those reporting no symptoms. All significance tests were two-sided and conducted at the 5 % significance level. Power analysis demonstrated that, with the sample size of 103 in each group, we would achieve a power of 0.97 to detect a 20 % difference in shoulder strength between the patient and reference groups based on the Mann–Whitney U test with α = 0.05.
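As an illustration of the main group comparison described above, the following sketch (simulated data only, not the study data; the group means and SDs are borrowed from Table 3) runs a two-sided Mann–Whitney U test on two independent samples of n = 103:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
patients = rng.normal(loc=3.7, scale=1.6, size=103)    # e.g. shoulder strength, kg
references = rng.normal(loc=5.6, scale=1.2, size=103)

u_stat, p_value = mannwhitneyu(patients, references, alternative="two-sided")
print(u_stat, p_value)
```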
Characteristics of the patient group are presented in Table 1. The mean age of the patients was 47.1 (SD 10.0) years and of the participants in the reference group 47.1 (SD 10.1) years. The mean duration of disease was 20.3 (SD 8.5) months and the mean DAS28 was 3.8 (SD 1.4) for the patients (see Table 2). The large majority of the patients were RF positive (78.6 %) and anti-CCP positive (75.5 %). Radiographs were performed at an average of 21.1 (SD 10.1) months after diagnosis, showing erosive changes of the hands and/or feet in 38 % of the patients. Radiographs were not performed in four patients owing to administrative difficulties. In the patient group, 73 % worked compared with 97 % in the reference group, and the mean working hours per week was significantly (p <0.0001) lower in the patient group (24.3 (SD 16.8) hours) than in the reference group (36.8 (SD 6.9) hours). Twenty-eight (27 %) of the patients did not work at all due to sick-leave, disability pension or unemployment. There were no significant differences with regard to physical workload between the patient group and the reference group. Patients were significantly (p = 0.003) less physically active during leisure time compared with the reference group (see Table 1).
Table 1 Characteristics of the study population
Table 2 Disease characteristics of the patients
Shoulder symptoms
At the time of the assessment, 53.4 % of the patients and 20.4 % in the reference group reported present shoulder symptoms. Thirty-three (32.0 %) patients had unilateral symptoms and 22 (21.4 %) bilateral symptoms. In the reference group, unilateral shoulder symptoms were found to be more common compared with bilateral symptoms.
Shoulder function
The majority of the study population reported right-hand dominance, 90.3 % of the patients and 98.1 % of the healthy subjects. There was a significant (p = 0.033) difference between the groups regarding right-hand dominance, and analyses of the shoulder function and hand-grip force were therefore conducted according to the dominant arm and the non-dominant arm. No significant differences between the dominant and non-dominant arms for shoulder strength were found in the patient group (mean difference 0.11 (SD 0.82), p = 0.091) or in the reference group (mean difference 0.03 (SD 0.84), p = 0.56). Hence, only the dominant arm is presented in the Results.
Patients showed significantly (p <0.0001) impaired shoulder function with regard to shoulder strength, shoulder–arm movement, lateral shoulder elevation and shoulder pain compared with the reference group for the dominant arm (see Table 3).
Table 3 Assessments of shoulder muscle strength, shoulder movement shoulder pain, hand-grip force and the DASH questionnaire in the patient and reference groups
Shoulder muscle strength
The mean isometric shoulder strength for the dominant arm in the patient group (3.7 kg (SD 1.6)) was significantly (p <0.0001) lower compared with the reference group (5.6 kg (SD 1.2)) (see Table 3).
Active shoulder–arm movement
The mean lateral shoulder elevation for the dominant arm in the patient group (164.3° (SD 23.1)) was significantly (p <0.0001) lower than that in the reference group (178.7° (SD 4.1)). The mean active shoulder–arm movement for the dominant arm in the patient group (27.4 (SD 2.9)) was significantly (p <0.0001) lower than in the reference group (29.7 (SD 0.7)) (see Table 3).
Shoulder pain during movement
The mean shoulder pain was significantly (p <0.0001) higher for the patients' dominant arm (7.6 (SD 7.1)) compared with the reference group (0.9 (SD 2.14)) (see Table 3).
Hand-grip force
The mean hand-grip force in the dominant arm in the patient group (159 N (SD 78)) was significantly (p <0.0001) lower than in the reference group (288 N (SD 60)) (see Table 3).
Significant differences were found between the dominant and non-dominant hands for hand-grip force in the patient group (mean differences 10.9 N (SD 47.3), p = 0.008) and in the reference group (mean differences 22.0 N (SD 33.3), p <0.0001).
Activity limitations of the shoulder–arm–hand
Activity limitations related to the shoulder–arm–hand (DASH questionnaire) were significantly (p <0.0001) higher in the patient group (25.7 (SD 17.3)) than in the reference group (2.6 (SD 5.4)) (see Table 3).
The DASH score was significantly higher for the patients in all age groups compared with the reference group when the groups were divided into 10-year age intervals (see Fig. 1).
Fig. 1 Box plot of DASH score by age group for the patient and reference groups, and for the total study population. A significant difference between the groups was found in all age groups for the DASH score. Ctrl. control, DASH Disabilities of the Arm, Shoulder and Hand, Pat. patient
Associations between shoulder muscle strength and physical assessments and disease activity in the patient group
The shoulder muscle strength of the dominant arm was significantly associated with shoulder pain (rs = –0.48, p <0.001), hand-grip force (rs = 0.51, p <0.001), DAS28 (rs = –0.34, p = 0.001) and the DASH score (rs = –0.45, p <0.001).
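The rs values reported here are Spearman rank correlation coefficients. A minimal sketch of how such coefficients can be computed, using invented per-patient values (the variable names and numbers are placeholders, not the study data):

import numpy as np
from scipy.stats import spearmanr

# Hypothetical dominant-arm values for a handful of patients.
strength = np.array([3.1, 4.0, 2.5, 3.8, 5.2, 3.3, 4.1, 2.9, 3.6, 4.4])   # kg
pain = np.array([8.0, 3.0, 9.5, 4.0, 1.0, 6.5, 2.5, 9.0, 5.0, 2.0])       # pain score
grip = np.array([140, 180, 110, 170, 230, 150, 190, 120, 160, 210])       # N

for name, values in {"shoulder pain": pain, "hand-grip force": grip}.items():
    rs, p = spearmanr(strength, values)
    print(f"strength vs {name}: rs = {rs:.2f}, p = {p:.3f}")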
Group comparison between patients and healthy subjects reporting shoulder symptoms and those reporting no shoulder symptoms
Patients reporting shoulder symptoms showed significantly (p <0.0001) impaired shoulder function with regard to shoulder strength, shoulder–arm movement, lateral shoulder elevation and shoulder pain for the symptomatic shoulder when compared with healthy subjects reporting shoulder symptoms. Significantly impaired shoulder function for both the dominant and non-dominant arms was also found in patients reporting no shoulder symptoms when compared with healthy subjects reporting no shoulder symptoms (Additional file 1).
Discussion
The aim of this controlled, cross-sectional study was to compare shoulder function and activity limitations related to the shoulder–arm–hand in women with RA in the first years of disease with those of an age-matched reference group of self-reported healthy women.
In patients, the shoulder function was found to be significantly impaired for all of the shoulder variables studied—shoulder muscle strength, active shoulder–arm movement and shoulder pain during movement—when compared with the reference group. Moreover, patients reported significantly more activity limitations (DASH questionnaire) compared with the reference group, indicating limitations of daily activities.
Shoulder muscle strength in the patient group was approximately 65 % of that in the reference group (3.7 kg vs 5.6 kg for the dominant arm; Table 3). The impaired shoulder muscle strength in the patient group corresponds to previous findings of reduced general muscle strength in patients with longstanding RA [23, 32, 33]. However, our results indicate that shoulder muscle strength is reduced already at an early stage of the disease. A possible contributing factor to the reduced shoulder muscle strength might have been the increased shoulder pain during movement in the patient group, which correlated significantly with shoulder muscle strength. It has previously been suggested that functional ability is more influenced by disease activity than by joint destruction in early RA [34, 35]. The significant correlation found between shoulder muscle strength and DAS28 in the present study supports previous reports of associations between muscle strength and inflammatory disease parameters in RA [36, 37]. The anatomic origin of the reduced shoulder muscle strength was not addressed in our study. However, periarticular and soft tissue involvement of the shoulder has been reported previously in painful RA shoulders [8]. Moreover, early fatty degeneration of the rotator cuff [9], induced by periods of pain and inactivity, might be an additional contributing factor, as might asymptomatic rotator cuff tears [38]. Furthermore, the lower overall leisure-time physical activity level reported in the patient group might also have contributed to the reduced shoulder muscle strength.
An unexpected result was that shoulder muscle strength in patients reporting no shoulder symptoms was reduced to 73 % of that in healthy subjects reporting no shoulder symptoms. This impaired shoulder function in patients without reported symptoms indicates that patients are not always aware of their functional limitations. The finding underlines the importance of screening shoulder function at the time of disease onset in all patients, not just those reporting shoulder problems.
Although shoulder movement was found to be significantly reduced in the patient group as compared with the reference group, the majority of the patients appear to have sufficient shoulder movement for daily activities.
Compared with the reference group, the patients' hand-grip force was reduced to approximately 55 % (159 N vs 288 N for the dominant arm), corresponding to a previous study in early RA [39]. Hand-grip force has been suggested to correlate with muscle strength in the upper extremities [40], which our results support, since we found a moderate association (rs = 0.50) between the two outcome measures. However, we consider it important to assess both hand-grip force and shoulder muscle strength when screening for possible impairments of the upper extremities among patients with RA in the first years of disease.
The majority of the patients reported some degree of activity limitation, as assessed with the DASH questionnaire, compared with the reference group. In addition, when compared with the norm values suggested by Hunsaker et al. [26], our patients showed a higher DASH score in all age groups. On the other hand, the mean HAQ score was low in the patient group, indicating few activity limitations [41]. These results might seem inconsistent. However, the DASH questionnaire appears to contain questions concerning more physically strenuous activities than does the HAQ and may better reflect the demands of daily living in physically active patients with RA. We have previously validated the DASH questionnaire for Swedish patients with RA and found that the DASH score correlated well with the DAS28 and the HAQ score as well as with shoulder function variables and hand-grip force [6]. The DASH questionnaire seems appropriate for screening activity limitations of the upper extremity in RA and can be a complement to the HAQ.
For an accurate and objective evaluation of shoulder function in patients, it is important to compare with an age-matched and gender-matched healthy reference group. The patient group and the reference group did not differ in age or physical workload, both of which are strong predictors of shoulder symptoms in the general population [3, 42, 43]. However, significant differences were found for education, work status and leisure-time physical activity: the reference group had a higher education level, worked more hours per week and had a higher leisure-time physical activity level. These findings were expected, since a low education level [44], work disability [45, 46] and a low leisure-time physical activity level [47] are common in patients with RA. However, the reference group appears to have a slightly higher education level than the general population [48]. The prevalence of shoulder symptoms in the general population has been estimated at between 7 and 27 % [2, 3], in agreement with a previous Swedish study of shoulder–upper arm pain in the general female population [17]. The variation has been suggested to be explained partly by differences in how shoulder symptoms are defined and estimated [3]. Our findings agree with these previous studies, since 20 % of the healthy women in the reference group reported shoulder symptoms [2, 3, 17].
Furthermore, our reference values for shoulder muscle strength are consistent with those of a previous study of norm values for isometric shoulder muscle strength in healthy subjects [38].
The well-matched reference group in terms of age, gender and physical workload is a strength of the study. However, this was a controlled cross-sectional study, and the causality between impaired shoulder function, disease activity and activity limitations cannot be established. A prospective follow-up of patients with newly diagnosed RA with regard to shoulder function is therefore warranted in future studies. Such a study would provide an opportunity to identify patients at risk of shoulder dysfunction and help to explain the natural progression of the disease and its impact on shoulder function. Moreover, to improve the anatomical understanding of impaired shoulder function in the early disease course, radiographic and ultrasound imaging should also be assessed.
Conclusions
The overall results of this study indicate that patients with RA have reduced shoulder muscle strength and limited function already 1.5 years after disease onset, even if they do not complain of symptoms from the shoulder. Shoulder muscle strength is related to activity limitations (DASH questionnaire), grip strength, shoulder pain and measures of disease activity (DAS28). Patients would benefit from assessment of shoulder function early in the disease course because of implications for therapy.
Abbreviations
anti-CCP:
Anti-cyclic citrullinated peptide
DAS28:
Disease activity score of 28 joints
DASH:
Disabilities of the arm, shoulder and hand
HAQ:
Health assessment questionnaire
LTPAI:
Leisure time physical activity instrument
RF:
Rheumatoid factor

References
Urwin M, Symmons D, Allison T, Brammah T, Busby H, Roxby M, et al. Estimating the burden of musculoskeletal disorders in the community: the comparative prevalence of symptoms at different anatomical sites, and the relation to social deprivation. Ann Rheum Dis. 1998;57:649–55.
Rechardt M, Shiri R, Karppinen J, Jula A, Heliovaara M, Viikari-Juntura E. Lifestyle and metabolic factors in relation to shoulder pain and rotator cuff tendinitis: a population-based study. BMC Musculoskelet Disord. 2010;11:165.
Luime JJ, Koes BW, Hendriksen IJ, Burdorf A, Verhagen AP, Miedema HS, et al. Prevalence and incidence of shoulder pain in the general population; a systematic review. Scand J Rheumatol. 2004;33:73–81.
Bostrom C, Harms-Ringdahl K, Nordemar R. Relationships between measurements of impairment, disability, pain, and disease activity in rheumatoid arthritis patients with shoulder problems. Scand J Rheumatol. 1995;24:352–9.
Slungaard B, Mengshoel AM. Shoulder function and active motion deficit in patients with rheumatoid arthritis. Disabil Rehabil. 2013;35:1357–63. doi:10.3109/09638288.2012.732187
Bilberg A, Bremell T, Mannerkorpi K. Disability of the Arm, Shoulder and Hand questionnaire in Swedish patients with rheumatoid arthritis: A validity study. J Rehabil Med. 2012;44:7–11.
Bostrom C. Shoulder rotational strength, movement, pain and joint tenderness as indicators of upper-extremity activity limitation in moderate rheumatoid arthritis. Scand J Rehabil Med. 2000;32:134–9.
Stegbauer J, Rump LC, Weiner SM. Sites of inflammation in painful rheumatoid shoulder assessed by musculoskeletal ultrasound and power Doppler sonography. Rheumatol Int. 2008;28:459–65.
van de Sande MAJ, de Groot JH, Rozing PM. Clinical implications of rotator cuff degeneration in the rheumatic shoulder. Arthritis Rheum. 2008;59:317–24.
Lehtinen JT, Kaarela K, Belt EA, Kautiainen HJ, Kauppi MJ, Lehto MU. Incidence of glenohumeral joint involvement in seropositive rheumatoid arthritis. A 15 year endpoint study. J Rheumatol. 2000;27:347–50.
van Schaardenburg D. Rheumatoid arthritis in the elderly. Prevalence and optimal management. Drugs Aging. 1995;7:30–7.
Deal CL, Meenan RF, Goldenberg DL, Anderson JJ, Sack B, Pastan RS, et al. The clinical features of elderly-onset rheumatoid arthritis. A comparison with younger-onset disease of similar duration. Arthritis Rheum. 1985;28:987–94.
Olofsson Y, Book C, Jacobsson LT. Shoulder joint involvement in patients with newly diagnosed rheumatoid arthritis. Prevalence and associations. Scand J Rheumatol. 2003;32:25–32.
Bilberg A, Bremell T, Balogh I, Mannerkorpi K. Work status in patients with early rheumatoid arthritis: emphasis on shoulder function and mechanical exposure. Scand J Rheumatol. 2014;3:119–23.
Hallert E, Thyberg I, Hass U, Skargren E, Skogh T. Comparison between women and men with recent onset rheumatoid arthritis of disease activity and functional ability over two years (the TIRA project). Ann Rheum Dis. 2003;62:667–70.
Rantalaiho V, Kautiainen H, Virta L, Korpela M, Mottonen T, Puolakka K. Trends in treatment strategies and the usage of different disease-modifying anti-rheumatic drugs in early rheumatoid arthritis in Finland. Results from a nationwide register in 2000–2007. Scand J Rheumatol. 2011;40:16–21.
Bergman S, Herrstrom P, Hogstrom K, Petersson IF, Svensson B, Jacobsson LT. Chronic musculoskeletal pain, prevalence rates, and sociodemographic associations in a Swedish population study. J Rheumatol. 2001;28:1369–77.
Leggin BG, Neuman RM, Iannotti JP, Williams GR, Thompson EC. Intrarater and interrater reliability of three isometric dynamometers in assessing shoulder strength. J Shoulder Elbow Surg. 1996;5:18–24.
Klintberg IH, Svantesson U, Karlsson J. Long-term patient satisfaction and functional outcome 8-11 years after subacromial decompression. Knee Surg Sports Traumatol Arthrosc. 2010;18:394–403.
Bostrom C, Harms-Ringdahl K, Karreskog H, Nordemar R. Effects of static and dynamic shoulder rotator exercises in women with rheumatoid arthritis: a randomised comparison of impairment, disability, handicap, and health. Scand J Rheumatol. 1998;27:281–90.
Borg GA. Psychophysical bases of perceived exertion. Med Sci Sports Exerc. 1982;14:377–81.
Nordenskiöld U. Elastic wrist orthoses. Reduction of pain and increase in grip force for women with rheumatoid arthritis. Arthr Care Res. 1990;3:158–62.
Nordenskiöld UM, Grimby G. Grip force in patients with rheumatoid arthritis and fibromyalgia and in healthy subjects. A study with the Grippit instrument. Scand J Rheumatol. 1993;22:14–9.
Hudak PL, Amadio PC, Bombardier C. Development of an upper extremity outcome measure: the DASH (disabilities of the arm, shoulder and hand) [corrected]. The Upper Extremity Collaborative Group (UECG). Am J Ind Med. 1996;29:602–8.
Atroshi I, Gummesson C, Andersson B, Dahlgren E, Johansson A. The disabilities of the arm, shoulder and hand (DASH) outcome questionnaire: reliability and validity of the Swedish version evaluated in 176 patients. Acta Orthop Scand. 2000;71:613–8.
Hunsaker FG, Cioffi DA, Amadio PC, Wright JG, Caughlin B. The American academy of orthopaedic surgeons outcomes instruments: normative values from the general population. J Bone Joint Surg Am. 2002;84-A:208–15.
Fries JF, Spitz P, Kraines RG, Holman HR. Measurement of patient outcome in arthritis. Arthritis Rheum. 1980;23:137–45.
Ekdahl C, Eberhardt K, Andersson SI, Svensson B. Assessing disability in patients with rheumatoid arthritis. Use of a Swedish version of the Stanford Health Assessment Questionnaire. Scand J Rheumatol. 1988;17:263–71.
Prevoo ML, van't Hof MA, Kuper HH, van Leeuwen MA, van de Putte LB, van Riel PL. Modified disease activity scores that include twenty-eight-joint counts. Development and validation in a prospective longitudinal study of patients with rheumatoid arthritis. Arthritis Rheum. 1995;38:44–8.
Larsson B, Balogh I. Is there a relationship between fibromyalgia syndrome and work conditions? J Musculoskelet Pain. 2005;13:5–14.
Mannerkorpi K, Hernelid C. Leisure time physical activity instrument and physical activity at home and work instrument. Development, face validity, construct validity and test-retest reliability for subjects with fibromyalgia. Disabil Rehabil. 2005;27:695–701.
Ekdahl C, Broman G. Muscle strength, endurance, and aerobic capacity in rheumatoid arthritis: a comparative study with healthy subjects. Ann Rheum Dis. 1992;51:35–40.
Ekblom B, Lovgren O, Alderin M, Fridstrom M, Satterstrom G. Physical performance in patients with rheumatoid arthritis. Scand J Rheumatol. 1974;3:121–5.
Welsing PM, van Gestel AM, Swinkels HL, Kiemeney LA, van Riel PL. The relationship between disease activity, joint destruction, and functional capacity over the course of rheumatoid arthritis. Arthritis Rheum. 2001;44:2009–17.
Guillemin F, Briancon S, Pourel J. Functional disability in rheumatoid arthritis: two different models in early and established disease. J Rheumatol. 1992;19:366–9.
Schiottz-Christensen B, Lyngberg K, Keiding N, Ebling AH, Danneskiold-Samsoe B, Bartels EM. Use of isokinetic muscle strength as a measure of severity of rheumatoid arthritis: a comparison of this assessment method for RA with other assessment methods for the disease. Clin Rheumatol. 2001;20:423–7.
Stucki G, Bruhlmann P, Stucki S, Michel BA. Isometric muscle strength is an indicator of self-reported physical functional disability in patients with rheumatoid arthritis. Br J Rheumatol. 1998;37:643–8.
Kim HM, Teefey SA, Zelig A, Galatz LM, Keener JD, Yamaguchi K. Shoulder strength in asymptomatic individuals with intact compared with torn rotator cuffs. J Bone Joint Surg Am. 2009;91:289–96.
Hakkinen A, Hannonen P, Hakkinen K. Muscle strength in healthy people and in patients suffering from recent-onset inflammatory arthritis. Br J Rheumatol. 1995;34:355–60.
Bohannon RW. Hand-grip dynamometry provides a valid indication of upper extremity strength impairment in home care patients. J Hand Ther. 1998;11:258–60.
Bruce B, Fries JF. The Health Assessment Questionnaire (HAQ). Clin Exp Rheumatol. 2005;23:S14–8.
Larsson B, Sogaard K, Rosendal L. Work related neck-shoulder pain: a review on magnitude, risk factors, biochemical characteristics, clinical picture and preventive interventions. Best Pract Res Clin Rheumatol. 2007;21:447–63.
Ostergren PO, Hanson BS, Balogh I, Ektor-Andersen J, Isacsson A, Orbaek P, et al. Incidence of shoulder and neck pain in a working population: effect modification between mechanical and psychosocial exposures at work? Results from a one year follow up of the Malmo shoulder and neck study cohort. J Epidemiol Community Health. 2005;59:721–8.
Bergstrom U, Jacobsson LT, Nilsson JA, Wirfalt E, Turesson C. Smoking, low formal level of education, alcohol consumption, and the risk of rheumatoid arthritis. Scand J Rheumatol. 2013;42:123–30.
Neovius M, Simard JF, Askling J. How large are the productivity losses in contemporary patients with RA, and how soon in relation to diagnosis do they develop? Ann Rheum Dis. 2011;70:1010–5.
Barrett EM, Scott DG, Wiles NJ, Symmons DP. The impact of rheumatoid arthritis on employment status in the early years of disease: a UK community-based study. Rheumatology (Oxford). 2000;39:1403–9.
Munsterman T, Takken T, Wittink H. Are persons with rheumatoid arthritis deconditioned? A review of physical activity and aerobic capacity. BMC Musculoskelet Disord. 2012;13:202.
Statistics Sweden. http://www.scb.se/. Accessed 2 Jul 2015.
This work was supported by grants from the Norrbacka-Eugenia Foundation, the Swedish Rheumatism Association, Göteborg's Association against Rheumatism (RIG), the Health and Medical Care Executive Board of Västra Götalands Region (VGR) and the Medical Faculty of Göteborg University (LUA/ALF).
The statistical advisor was Nils-Gunnar Persson. Anette Tellander and Jeanette Ahlgren helped with the examinations.
Institute of Medicine, Department of Rheumatology and Inflammation Research, Sahlgrenska Academy, University of Gothenburg, Guldhedsgatan 10, Box 480, 40530, Göteborg, Sweden
Annelie Bilberg, Tomas Bremell & Kaisa Mannerkorpi
Institute of Laboratory Medicine, Department of Occupational and Environmental Medicine, University of Lund, 22185, Lund, Sweden
Istvan Balogh
Institute of Neuroscience and Physiology, Section of Health and Rehabilitation, Physiotherapy, Sahlgrenska Academy, University of Gothenburg, Box 455, 40530, Göteborg, Sweden
Kaisa Mannerkorpi
Correspondence to Annelie Bilberg.
AB participated in the design of the study, recruitment, enrolment and data collection, statistical analysis and drafting the manuscript. TB participated in the design of the study and drafting the manuscript. IB participated in the categorisation of the workload of the study participants and drafting the manuscript. KM participated in the design of the study, recruitment, enrolment, data collection, statistical analysis and drafted the manuscript. All authors read and approved the final manuscript.
Additional file 1: A table presenting assessments of shoulder muscle strength, shoulder movement, shoulder pain, hand-grip force and the DASH questionnaire in the patient and reference groups reporting and not reporting shoulder symptoms. For continuous variables, mean (SD) or median (minimum; maximum)/n is presented. The Mann–Whitney U test was used for between-group comparisons of continuous variables. Shoulder function and hand-grip force for patients and references reporting shoulder symptoms are presented for the symptomatic arm (n = 15) or, if symptoms were bilateral, for the dominant arm (n = 27). Shoulder function and hand-grip force for patients and references reporting no shoulder symptoms are presented for the dominant arm. (DOC 35 kb)
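The arm-selection rule described above (the symptomatic arm for unilateral symptoms, otherwise the dominant arm) can be written out explicitly. The sketch below is only an illustration of that rule; the function name and data representation are hypothetical.

def arm_to_report(symptomatic_arms, dominant_arm):
    # symptomatic_arms: set of arms with reported symptoms, e.g. set(), {"left"}
    # or {"left", "right"}; dominant_arm: "left" or "right".
    if len(symptomatic_arms) == 1:   # unilateral symptoms -> the symptomatic arm
        return next(iter(symptomatic_arms))
    return dominant_arm              # bilateral or no symptoms -> the dominant arm

print(arm_to_report({"left"}, "right"))           # left
print(arm_to_report({"left", "right"}, "right"))  # right
print(arm_to_report(set(), "right"))              # right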
Bilberg, A., Bremell, T., Balogh, I. et al. Significantly impaired shoulder function in the first years of rheumatoid arthritis: a controlled study. Arthritis Res Ther 17, 261 (2015). https://doi.org/10.1186/s13075-015-0777-0
Grip Force
Health Assessment Questionnaire Score
Shoulder Symptom