\begin{document}
\title{Towards Decision Support in Reciprocation\footnote{An extended abstract is published at~\cite{PolevoydeWeerdtJonker2016}.}}
\begin{abstract}
People often interact repeatedly: with relatives, through file sharing, in politics, etc. Many such interactions are reciprocal: each party reacts to the actions of the other. In order to facilitate decisions regarding reciprocal interactions, we analyze how reciprocation develops over time.
To this end, we propose a model for such interactions that is simple enough to enable formal analysis, but is sufficient to predict how such interactions will evolve. Inspired by existing models of international interactions and arguments between spouses, we suggest a model with two reciprocating attitudes where an agent's action is a weighted combination of the others' last actions (reacting) and either i) her innate kindness, or ii) her own last action (inertia).
We analyze a network of repeatedly interacting agents, each having one of these attitudes, and prove that their actions converge to specific limits. Convergence means that the interaction stabilizes, and the limits indicate the behavior after the stabilization. For two agents, we describe the interaction process and find the limit values. For a general connected network, we find these limit values if all the agents employ the second attitude, and show that the agents' actions then all become equal. In the other cases, we study the limit values using simulations.
We discuss how these results predict the development of the interaction and can be used to help agents decide on their behavior. \end{abstract}
\keywords{reciprocal interaction, agents, action, repeated reciprocation, fixed, floating, behavior, network, convergence, Perron-Frobenius, convex combination}
\section{Introduction}\label{Sec:introd}
Interaction is central to human behavior, e.g., at school, in file sharing, in business cooperation and in political struggle. We aim to facilitate decision support both for the interacting parties and for outside observers. To this end, we want to predict how interactions develop.
\begin{comment} People violate economic rationality in certain interactions~\cite{Bennett1995,Rubinstein97}, which is typically understood to be otherwise than directly optimizing a utility function.
A solution for the problem that real people behave emotionally is proposed in the form of drama theory (first presented in~\cite{HowardBennettBryantBradley1993}), which takes emotions into account. Irrationality here is not the same as the limited rationality of Simon~(\cite{Simon1955}), but rather can occur even when the character has enough information and information processing capacity. \end{comment} Instead of being economically rational, people tend to adopt other ways of behaving~\cite{RaghunandanSubramanian2012,Rubinstein97}, not necessarily maximizing some utility function. Furthermore, people tend to reciprocate, i.e., react to the past actions of others~\cite{FalkFischbacher2006,FehrGachter2000,GuthSchmittbergerSchwarze1982,Sobel2005}.
Since reciprocation is ubiquitous, predicting it will allow us to predict many real-life interactions and to advise on how to improve them. Therefore, we need a model for reciprocating agents that is simple enough for analytical study and precise enough to predict such interactions. Understanding such a model would also help us understand how to improve personal and public good. This is also important for engineering computer systems that fit the human intuition of reciprocity.
Extant models of (sometimes repeated) reciprocation can be classified as explaining existence or analyzing consequences.
The following models consider the reasons for the existence of reciprocal tendencies, often incorporating evolutionary arguments. The classical works of Axelrod~\cite{Axelrod1981,Axelrod1984} considered discrete reciprocity and showed that it is rational for egoists, so that species evolve to reciprocate. Evolutionary explanations appear also elsewhere, such as in~\cite{GuthYaariWitt1992,SethiSomanathan2001}, or~\cite[Chapter~$6$]{Bicchieri2006}, the latter also explicitly considering the psychological aspects of norm emergence. In~\cite{VanSegbroeckPachecoLenaertsSantos2012}, the pursuit of fairness is considered as a motivation for reciprocation. The authors of~\cite{AxelrodHamilton1981} and~\cite{fletcherZwick2006} attributed reciprocation both to genetic kinship theory (helping relatives) and to the utility of cooperating when the same pair of agents interacts multiple times. The famous work of Trivers~\cite{Trivers1971} showed, in much biological detail, that reciprocity is sometimes rational, and thus, people can evolve to reciprocate. Gintis~\cite[\chapt{$11$}]{gintis2000} considered discrete actions, discussing not only the rationally evolved tit-for-tat, but also reciprocity with no future interaction in sight, which he calls \defined{strong reciprocity}; he modeled the development of strong reciprocity. Several possible reasons for strong reciprocity, such as a social component in the utility of the agents, expressed in emotions, were considered in~\cite{FehrFischbacherGachter2002}. Berg et al.~\cite{BergDickhautMcCabe1995} showed that people tend to reciprocate and considered possible motivations, such as evolutionary stability. Reciprocal behavior was axiomatically motivated in~\cite{SegalSobel2007}, assuming agents care not only about outcomes, but also about strategies, and are thereby pushed to reciprocate.
On another research avenue, given that reciprocal tendencies exist, the following works analyzed in what ways they make interactions develop. Some models analyzed reciprocal interactions by defining and analyzing a game where the utility function of rational agents directly depends on showing reciprocation, such as in~\cite{CoxFriedmanGjerstad2007,DufwenbergKirchsteiger2004,FalkFischbacher2006,Rabin1993}. The importance of reward/punishment or of incomplete contracts for the flourishing of reciprocal individuals in society was shown in~\cite{FehrGachter2000}.
To summarize, reciprocity is seen as an inborn quality~\cite{FehrFischbacherGachter2002,Trivers1971}, which probably evolved from the rationality of agents, as was shown by Axelrod~\cite{Axelrod1981}. As we have already said, understanding how a reciprocal interaction between agents with various reciprocal inclinations unfolds over time will help explain and predict the dynamics of reciprocal interaction, such as arms races and personal relations. This would also be in the spirit of the call from~\cite{NowakSigmund2005} to consider various repercussions of reciprocity. Since no existing analysis considers non-discrete lengthy interaction caused by inborn reciprocation (unlike, say, the discrete one of Axelrod~\cite{Axelrod1981,Axelrod1984}), we model and study how reciprocity makes interaction evolve over time.
We represent actions by \emph{weight}, where a bigger value means a more desirable contribution or, in the interpersonal context, investment in the relationship. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ Agents reciprocate both to the agent they are acting on and to their whole neighborhood. } We model reciprocity
by two reciprocation attitudes, an action's weight being a convex combination of either i) one's own kindness or ii) one's own last action,
together with the other's and the neighborhood's last actions. Ideally, the whole past would be considered, but to facilitate analysis, we assume that the last actions sufficiently represent the history. Defining an action (or how much it changes) or a state by a linear combination of the other side's actions and one's own actions and qualities has also been used to analyze arms races~\cite{Dixon1986,Ward1984} and spouses' interaction~\cite{GottmanSwansonMurray1999} (piecewise linear in that case). Attitude i), depending on the (fixed) kindness, is called \name{fixed}, and attitude ii), depending on one's own last action, is called \name{floating}. Given this model, we study its behavioral repercussions.
There are several reminiscent but different models. The \name{floating} model resembles opinions that converge to a consensus~\cite{BlondelHendrickxOlshevskyTsitsiklis2005,Moreau2005,TsitsiklisBertsekasAthans1986,DeGroot1974}, while the \name{fixed} model resembles convergence to a general equilibrium of opinions~\cite{BindelKleinbergOren2011}. Of course, unlike the models of spreading opinions, we consider different actions on various neighbors, determined by direct reaction and by a reaction to the whole neighborhood. Still, because of a technical resemblance to some of our models, we do use those results in one of our proofs.
Another similar model is that of monotonic concession~\cite{RosenscheinZlotkin1994} and that of bargaining over dividing a pie~\cite{Rubinstein1982}. The main difference is that in those models, the agents decide what to do, while in our case, they follow the reciprocation formula.
\begin{example}\label{ex:colleages} Consider $n$ colleagues~$1, 2, \ldots, n$, who can help or harm each other. Let the possible actions be: giving bad work, showing much contempt, showing little contempt, supporting emotionally a little, supporting emotionally a lot, advising, and let their respective weights be a point in~$[-1, -0.5)$, $[-0.5, -0.2)$, $[-0.2, 0)$, $(0, 0.4)$, $[0.4, 0.7)$, $[0.7, 1]$. Assume that each person knows what the other did to him last time. The social climate, meaning what the whole group did, also influences behavior. However, we may just concentrate on a single pair of even-tempered colleagues who reciprocate regardless of the others.
\end{example}
To understand and predict reciprocal behavior, we look at the limit as time approaches infinity, since this describes what actions will take place from some time on. We first consider two agents in \sectnref{Sec:dynam_interact_pair}, assuming their interaction is independent of other agents, or that the total influence of the others on the pair is negligible. This assumption allows for a deeper theoretical analysis of the interaction than in the general case.
Unless both agents are \name{fixed}, the limit values for two agents are also implied by a general convergence result that is presented later. We still present them together with the other results for two agents, for the completeness of \sectnref{Sec:dynam_interact_pair}.
\sectnref{Sec:dynam_interact_interdep} studies the interaction of many agents, where the techniques we used for two agents are not applicable, and we show exponentially fast convergence. Exponential convergence means rapid stabilization, and it explains the acquisition of personal behavioral styles, which is often seen in practice~\cite{RobertsWaltonViechtbauer2006}. We find the limit when all the agents act synchronously and at most one has the \name{fixed} reciprocation attitude. Among other things, we prove that when at most one agent is \name{fixed}, the limits of the actions of all agents are the same, explaining the formation of organizational subcultures, known in the literature~\cite{Hofstede1980}. We also find that only the kindness values of the \name{fixed} agents influence the limits of the various actions, thereby explaining that persistence (i.e., being faithful to one's inner inclination) makes interaction go one's own way, which is reflected in daily life in the recommendations to reject undesired requests by firmly repeating the reasons for rejection~\cite[\chapt{1}]{BreitmanHatch2000} and~\cite[Chapter~$8$]{Ury2007}. Other cases are simulated in \sectnref{Sec:dynam_interact_interdep_sim}. These results describe the interaction process and lay the foundation for further analysis of interaction.
The major contributions are proving convergence and finding the limits in the cases of at most one \name{fixed} agent or of two agents. These results allow us to explain the above-mentioned phenomena and to predict reciprocation. The predictions can assist in deciding whether a given interaction will be profitable, and in engineering more efficient multi-agent systems that fit the reciprocal intuition of the users.
\section{Modeling Reciprocation}\label{Sec:formal_model}
\ifthenelse{\NOT \equal{IJCAI16}{AAMAS16}}{ \subsection{Basic}\label{Sec:formal_model:basic_facts} }{ }
Let $N = \set{1, 2, \ldots, n}$ be a set of $n \geq 2$ interacting agents.
We assume that possible actions are described by an undirected interaction graph $G = (N, E)$, such that agent $i$ acts on $j$ and vice versa if and only if $(i, j) \in E$. Denote the degree of agent $i \in N$ in $G$ by $d(i)$. This allows for various topologies, including heterogeneous ones, like those in~\cite{SantosPachecoLenaerts2006}.
To be able to mention directed edges, we shall treat this graph as a directed one, where for every $(i, j) \in E$, we have $(j, i) \in E$.
Time is modeled by the set of discrete moments $t \in T \defas \set{0, 1, 2, \ldots}$, where a time slot is defined whenever at least one agent acts.
Agent $i$ acts at times $T_i \defas \set{t_{i, 0} = 0, t_{i, 1}, t_{i, 2}, \ldots} \subseteq T$, and
$\cup_{i \in N} T_i = T$. We assume that all agents act at $t = 0$, since otherwise the last action of another agent would sometimes be undefined, which would force us to complicate the model and render it even harder for theoretical analysis.
When all agents always act at the same times~($T_1 = T_2 = \ldots = T_n = T$), we say they act \defined{synchronously}.
For the sake of asymptotic analysis, we assume that each agent gets to act an infinite number of times; that is, $T_i$ is infinite for every $i \in N$. Any real application will, of course, realize only a finite part of it, and infinity models the unboundedness of the process in time.
When $(i, j)$ is in $E$, we denote the weight of an action by agent $i \in N$ on another agent $j \in N$ at moment $t$ by $\imp_{i, j}(t) \colon T_i \to \mathbb{R}$. We extend $\imp_{i, j}$ to $T$ by assuming that at $t \in T \setminus T_i$, we have $\imp_{i, j}(t) = 0$. Since only the weight of an action is relevant, we usually write ``action'' while referring to its weight.
For example, when interacting by file sharing, sending a valid piece of a file, nothing, or a piece with a virus has a positive, zero, or a negative weight, respectively.
For $t \in T$, we define \defined{the last action time $s_i(t) \colon T \to T_i$ of agent $i$} as the largest $t' \in T_i$ that is at most $t$.
Since $0 \in T_i$, this is well defined.
The last action of agent $i$ on (another) agent $j$ is given by $x_{i, j}(t) \defas \imp_{i, j}(s_i(t))$. Thus, we have defined $x_{i, j}(t) \colon T \to \mathbb{R}$, and we use mainly this concept $x_{i, j}$ in the paper.
We denote the total received contribution from all the neighbors $\Neighb(i)$ at their last action times not later than $t$ by $\got_i(t) \colon T \to \mathbb{R}$; formally, $\got_i(t) \defas \sum_{j\in \Neighb(i)}{x_{j, i}(t)}$.
We now define two reciprocation attitudes, which define how an agent reciprocates. We need the following notions. The kindness of agent $i$ is denoted by $k_i \in \mathbb{R}$; w.l.o.g., $k_n \geq \ldots \geq k_2 \geq k_1$ throughout the paper. Kindness models inherent inclination to help others; in particular, it determines the first action of an agent, before others have acted. We model agent $i$'s inclination to mimic a neighboring agent's action and the actions of the whole neighborhood in $G$ by reciprocation coefficients $r_i \in \brackt{0, 1}$ and $r'_i \in \brackt{0, 1}$ respectively, such that $r_i + r'_i \leq 1$. Here, $r_i$ is the fraction of $x_{i, j}(t)$ that is determined by the last action of $j$ upon $i$, and $r_i'$ is the fraction that is determined by $\frac{1}{\abs{\Neighb(i)}}$th of the total contribution to $i$ from all the neighbors at the last time.
\ifthenelse{\NOT \equal{IJCAI16}{AAMAS16}}{ \subsection{Reciprocation}\label{Sec:formal_model:recip} }{ }
Intuitively, the \name{fixed} attitude depends on the agent's kindness at every action, while the \name{floating} one is loose, moving freely in the reciprocation process, and kindness directly influences such behavior only at $t = 0$. In both cases $x_{i,j}(0) \defas k_i$.
\begin{defn}\label{def:fix_recip} For the \defined{fixed reciprocation attitude}, agent $i$'s reaction on the other agent $j$ and on the neighborhood is determined by the agent's kindness weighted by~$1 - r_i - r'_i$, by the other agent's action weighted by~$r_i$ and by the total action of the neighbors weighted by~$r'_i$ and divided over all the neighbors: That is, for $t \in T_i$, $\imp_{i,j}(t) = x_{i,j}(t) \defas$ \begin{eqnarray*} (1 - r_i - r'_i) \cdot k_i + r_i \cdot x_{j, i}(t-1) + r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}}. \end{eqnarray*} \end{defn}
\begin{defn}\label{def:float_recip} In the \defined{floating reciprocation attitude}, agent $i$'s action is a weighted average of her own last action, of that of the other agent $j$ and of the total action of the neighbors divided over all the neighbors: To be precise, for $t \in T_i$, $\imp_{i,j}(t) = x_{i, j}(t) \defas$ \begin{eqnarray*} (1 - r_i - r'_i) \cdot x_{i, j}(t-1) + r_i \cdot x_{j, i}(t-1) + r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}}. \end{eqnarray*} \end{defn}
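To make the definitions concrete, the update rules can be sketched in code; the following is a minimal sketch of one synchronous round (our own illustration, not the authors' implementation; all names, such as \texttt{step} and \texttt{rp}, are made up):

```python
# One synchronous reciprocation round. x[i][j] is the last action of agent i
# on neighbor j; k[i] is kindness; r[i], rp[i] are the reciprocation
# coefficients; neigh[i] lists i's neighbors; fixed[i] picks the attitude.
def step(x, k, r, rp, neigh, fixed):
    new = {i: {} for i in x}
    for i in x:
        got = sum(x[j][i] for j in neigh[i])       # total received last time
        for j in neigh[i]:
            base = k[i] if fixed[i] else x[i][j]   # kindness vs. own last action
            new[i][j] = ((1 - r[i] - rp[i]) * base
                         + r[i] * x[j][i]
                         + rp[i] * got / len(neigh[i]))
    return new
```

Starting from $x_{i, j}(0) = k_i$ and applying \texttt{step} repeatedly simulates the synchronous interaction.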
The relations are (usually inhomogeneous) linear recurrences with constant coefficients. We could express the dependence of $x_{i, j}(t)$ only on $x_{i, j}(t')$ with $t' < t$, but then the coefficients would not be constant, except in the case of two \name{fixed} agents. We are not aware of a method to use the general recurrence theory to improve our results.
\ifthenelse{\NOT \equal{IJCAI16}{AAMAS16}}{ \subsection{Clarifications} }{ }
Compared to the other models, our model takes reciprocal actions as given and looks at the process, while other models either consider how reciprocation originates, such as the evolutionary model of Axelrod~\cite{Axelrod1981}, or take it as given and consider specific games, such as in \cite{CoxFriedmanGjerstad2007,DufwenbergKirchsteiger2004,FalkFischbacher2006,Rabin1993}.
In~Example~\ref{ex:colleages}, let (just here) $n = 3$ and the reciprocation coefficients be~$r_1 = r_2 = 0.5, r_1' = r_2' = 0.3, r_3 = 0.8, r_3' = 0.1$. Assume the kindness values to be~$k_1 = 0, k_2 = 0.5$ and $k_3 = 1$. Since this is a small group, all the colleagues may interact, so the graph is a clique\footnote{A clique is a fully connected graph.}. At $t = 0$, every agent's action on every other agent is equal to her kindness value, so agent $1$ does nothing, agent $2$ supports emotionally a lot, and $3$ provides advice. If all agents act synchronously, meaning $T_1 = T_2 = T_3 = \set{0, 1, \ldots}$, and all get carried away by the process, meaning that they forget the kindness in the sense of employing \name{floating} reciprocation, then, at $t = 1$ they act as follows: $x_{1, 2}(1) = (1 - 0.5 - 0.3) \cdot 0 + 0.5 \cdot 0.5 + 0.3 \cdot \frac{0.5 + 1}{2} = 0.475$ (supports emotionally a lot), $x_{1, 3}(1) = (1 - 0.5 - 0.3) \cdot 0 + 0.5 \cdot 1 + 0.3 \cdot \frac{0.5 + 1}{2} = 0.725$ (provides advice), $x_{2, 1}(1) = (1 - 0.5 - 0.3) \cdot 0.5 + 0.5 \cdot 0 + 0.3 \cdot \frac{0 + 1}{2} = 0.25$ (supports emotionally a little), and so on.
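The $t = 1$ computations can be reproduced numerically; a small sketch of ours (indices $0, 1, 2$ stand for colleagues $1, 2, 3$; the names are made up):

```python
# The three colleagues with floating reciprocation: recomputing the t = 1
# actions of the example on the 3-clique.
k  = [0.0, 0.5, 1.0]      # kindness values
r  = [0.5, 0.5, 0.8]      # direct reciprocation coefficients
rp = [0.3, 0.3, 0.1]      # neighborhood reciprocation coefficients

x = [[k[i]] * 3 for i in range(3)]   # x[i][j] = k_i at t = 0
got = [sum(x[j][i] for j in range(3) if j != i) for i in range(3)]

def act(i, j):
    # floating attitude; each agent has the 2 other colleagues as neighbors
    return (1 - r[i] - rp[i]) * x[i][j] + r[i] * x[j][i] + rp[i] * got[i] / 2

print(round(act(0, 1), 3))   # x_{1,2}(1) = 0.475
print(round(act(0, 2), 3))   # x_{1,3}(1)
print(round(act(1, 0), 3))   # x_{2,1}(1) = 0.25
```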
Consider modeling tit for tat~\cite{Axelrod1984}: \begin{example} In our model, tit for tat with two options (cooperate or defect) is easily modeled with $r_i = 1$, $k_i = 1$, meaning that the original action is cooperation ($1$) and each next action is the current action of the other player. Since we consider a mechanism, rather than a game, the agents will always cooperate. If one agent begins with cooperation ($k_1 = 1$) and the other one with defection ($k_2 = 0$), acting synchronously, then they will alternate. \end{example}
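This special case can be illustrated with a few lines of code (ours; the names are made up):

```python
# Tit for tat as a special case: with r_i = 1 and no neighborhood term,
# each agent simply repeats the other's last action at every step.
def tit_for_tat(k1, k2, steps):
    x, y = k1, k2                 # the first actions equal the kindness values
    history = [(x, y)]
    for _ in range(steps):
        x, y = y, x               # r = 1: each copies the other's last action
        history.append((x, y))
    return history

print(tit_for_tat(1, 1, 3))   # both start with cooperation and keep cooperating
print(tit_for_tat(1, 0, 3))   # opposite starts: the actions alternate
```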
\ifthenelse{\NOT\equal{IJCAI16}{AAMAS16}}{ The notation is summarized in \tablref{tbl:notation}.
\begin{table}[ht]
\begin{tabular}{|p{0.17\textwidth}|p{0.8\textwidth}|} \hline Term: & Meaning:\\ \hline $\imp_{i, j}(t) \colon T \to \mathbb{R}$ & The action of $i$ on another agent $j$ at time $t$.\\ \hline
$T_i$ & The time moments when agent $i$ acts.\\ \hline Synchronous & $T_1 = T_2 = \ldots = T_n$.\\ \hline
$s_i(t) \colon T \to T_i$ & $\max\set{t' \in T_i | t' \leq t}$.\\ \hline $x_{i, j}(t) \colon T \to \mathbb{R}$ & $\imp_{i, j}(s_i(t))$.\\ \hline $\got_i(t) \colon T \to \mathbb{R}$ & $\sum_{j\in \Neighb(i)}{x_{j, i}(t)}$.\\ \hline
$k_i$ & The kindness of agent $i$.\\ \hline $r_i, r'_i \in [0, 1], r_i + r'_i \leq 1$ & The reciprocation coefficients of agent $i$.\\ \hline Agent $i$ has the \name{fixed} reciprocation attitude, $j$ is another agent & At moment $t \in T_i$, \begin{eqnarray*} x_{i,j}(t) \defas \begin{cases} (1 - r_i - r'_i) \cdot k_i + r_i \cdot x_{j, i}(t-1)\\ + r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}} & t > t_{i, 0}\\ k_i & t = t_{i, 0} = 0. \end{cases} \end{eqnarray*}\\ \hline Agent $i$ has the \name{floating} reciprocation attitude, $j$ is another agent & At moment $t \in T_i$, \begin{eqnarray*} x_{i, j}(t) \defas \begin{cases} (1 - r_i - r'_i) \cdot x_{i, j}(t-1)\\ + r_i \cdot x_{j, i}(t-1) + r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}}
& t > t_{i, 0}\\ k_i & t = t_{i, 0} = 0. \end{cases} \end{eqnarray*}\\ \hline \end{tabular} \caption{The notation used throughout the paper.} \label{tbl:notation} \end{table} }{ }
\section{Pairwise Interaction}\label{Sec:dynam_interact_pair} \ifthenelse{\equal{Process}{Process} \OR \equal{Process}{All}}{
We now consider an interaction of two agents, $1$ and $2$, since this setting allows proving much more than in the general case. The model reduces to a pairwise interaction when $r'_i = 0$ or when there are no neighbors besides the other agent in the considered pair. W.l.o.g., we assume both.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ When $T_1$ contains precisely all the even numbered slots and $T_2$ zero and all the odd ones, we say they are \defined{alternating}. }{ } Since agent $1$ can only act on agent $2$ and vice versa, we write $\imp_i(t)$ for $\imp_{i, j}(t)$, $x(t)$ for $x_{1, 2}(t)$ and $y(t)$ for $x_{2, 1}(t)$.
}{ }
We analyze the case of both agents being \name{fixed}, then the case of both being \name{floating}, and then the case where one is \name{fixed} and the other one is \name{floating}.
To formally discuss the actions after the interaction has stabilized, we consider the limits (if they exist)\footnote{Agent $i$ acts at the times in $T_i = \set{t_{i, 0} = 0, t_{i, 1}, t_{i, 2}, \ldots}$.} $\lim_{p \to \infty}{\imp_1(t_{1, p})}$ and $\lim_{t \to \infty}{x(t)}$ for agent $1$, and $\lim_{p \to \infty}{\imp_2(t_{2, p})}$ and $\lim_{t \to \infty}{y(t)}$ for agent $2$. Since the sequence $\set{x(t)}$ is $\set{\imp_1(t_{1, p})}$ with finite repetitions, the limit $\lim_{t \to \infty}{x(t)}$ exists if and only if $\lim_{p \to \infty}{\imp_1(t_{1, p})}$ does. If they exist, they are equal; the same holds for $\lim_{t \to \infty}{y(t)}$ and $\lim_{p \to \infty}{\imp_2(t_{2, p})}$. Denote $L_x \defas \lim_{t \to \infty}{x(t)}$ and $L_y \defas \lim_{t \to \infty}{y(t)}$.
\subsection{\name{Fixed} Reciprocation}
Here we prove that both action sequences converge. \begin{theorem}\label{The:fixed_recip} If the reciprocation coefficients are not both $1$, which means $r_1 r_2 < 1$, then we have, for $i \in N$: $\lim_{p \to \infty} {\imp_i(t_{i, p})} = \frac{(1 - r_i) k_i + r_i (1 - r_j) k_j}{1 - r_i r_j}$.
\end{theorem} The assumption that not both reciprocation coefficients are $1$, and the similar assumptions in the following theorems (such as $1 > r_i > 0$), mean that the agent neither ignores the other's action nor merely copies it. Such assumptions are to be expected in real life.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ The limits of these actions are shown in Figures~\ref{fig:fixed_fixed_limit_agent1} and \ref{fig:fixed_fixed_limit_agent2}.
\begin{figure}
\caption{The limit of the actions of agent $1$ as a function of the reciprocity coefficients, for a \name{Fixed} - \name{fixed} reciprocation, $k_1 = 1, k_2 = 2$. Given $r_1$, agent $2$ receives most when $r_2 = 0$.}
\label{fig:fixed_fixed_limit_agent1}
\caption{The limit of the actions of agent $2$ as a function of the reciprocity coefficients, for a \name{Fixed} - \name{fixed} reciprocation, $k_1 = 1, k_2 = 2$. Given $r_2$, agent $1$ receives most when $r_1 = 1$.}
\label{fig:fixed_fixed_limit_agent2}
\end{figure} }{ } In Example~\ref{ex:colleages}, if agents $1$ and $2$ employ \name{fixed} reciprocation, $r_1 = r_2 = 0.5, r_1' = r_2' = 0.0$ and $k_1 = 0, k_2 = 0.5$, then we obtain $L_x = \frac{0.5 \cdot (1 - 0.5) 0.5}{1 - 0.5 \cdot 0.5} = 1/6$ and $L_y = \frac{(1 - 0.5) \cdot 0.5}{1 - 0.5 \cdot 0.5} = 1/3$.
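These limit values can be checked by iterating the recurrence directly; a quick numerical sketch of ours (the names are made up):

```python
# Iterating the synchronous fixed-fixed recurrence for the colleague example
# (k1 = 0, k2 = 0.5, r1 = r2 = 0.5) and comparing with L_x = 1/6, L_y = 1/3.
def iterate_fixed(k1, k2, r1, r2, steps=200):
    x, y = k1, k2                                  # actions at t = 0
    for _ in range(steps):
        # the fixed-attitude update: x(t+1) uses y(t), and y(t+1) uses x(t)
        x, y = (1 - r1) * k1 + r1 * y, (1 - r2) * k2 + r2 * x
    return x, y

x, y = iterate_fixed(0.0, 0.5, 0.5, 0.5)
print(abs(x - 1/6) < 1e-9, abs(y - 1/3) < 1e-9)   # True True
```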
In order to prove this theorem, we first show that it is sufficient to analyze the synchronous case, i.e.,~$T_1 = T_2 = T$. \begin{lemma}\label{lemma:fixed_recip:subseq_synch_case} Consider a pair of interacting agents. Denote the action sequences in the case where both agents act at the same times (i.e.,~$T_1 = T_2 = T$) by $\set{x'(t)}_{t \in T}$ and $\set{y'(t)}_{t \in T}$, respectively. Then the action sequences\footnote{Agent $i$ acts at the times in $T_i = \set{t_{i, 0} = 0, t_{i, 1}, t_{i, 2}, \ldots}$.} $\set{\imp_1(t_{1,p})}_{p \in \mathbb{N}}, \set{\imp_2(t_{2,p})}_{p \in \mathbb{N}}$ are subsequences of $\set{x'(t)}_{t \in T}$ and $\set{y'(t)}_{t \in T}$, respectively. \end{lemma} \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \begin{proof} We prove by induction that for each $p > 0$, the sequence $\imp_1(t_{1, 0}), \imp_1(t_{1, 1}), \ldots, \imp_1(t_{1, p})$ is a subsequence of $x'(0), x'(1), \ldots, x'(t_{1, p})$, and the sequence $\imp_2(t_{2, 0}), \imp_2(t_{2, 1}), \ldots, \imp_2(t_{2, p})$ is a subsequence of $y'(0), y'(1), \ldots, y'(t_{2, p})$.
For $p = 0$, this is immediate, since $t_{i, 0} = 0$.
For the induction step, assume that the lemma holds for $p - 1$ and prove it for $p > 0$. By definition, $\imp_1(t_{1, p}) = (1 - r_1) \cdot k_1 + r_1 \cdot \imp_2(s_2(t_{1, p}-1))$, and since by the induction hypothesis, $\imp_2(s_2(t_{1, p}-1))$ is an element in the sequence $\set{y'(t)}$, we conclude that $\imp_1(t_{1, p})$ is an element in the sequence $\set{x'(t)}$. Moreover, in $\set{x'(t)}$ this element comes after $\imp_1(t_{1, {p - 1}})$, because either $\imp_1(t_{1, {p - 1}}) = \imp_1(0)$ or $\imp_1(t_{1, {p - 1}}) = (1 - r_1) \cdot k_1 + r_1 \cdot \imp_2(s_2(t_{1, {p - 1}}-1))$ and $\imp_2(s_2(t_{1, {p - 1}}-1))$ precedes $\imp_2(s_2(t_{1, p}-1))$ in $\set{y'(t)}$ by the induction hypothesis. This proves the induction step for agent $1$, and it is proven by analogy for $2$. \end{proof} }{ The proof follows from Definition~\ref{def:fix_recip} by induction. (Straightforward proofs in this paper have been replaced by their general ideas due to lack of space). } Using this lemma, it is sufficient to further assume the synchronous case. \begin{lemma}\label{lemma:fixed_recip:per_seq_behav} In the synchronous case, for every $t > 0: x(2t - 1) \geq x(2t + 1)$, and for every $t \geq 0: x(2t) \leq x(2t + 2) \leq x(2t + 1)$. By analogy, $\forall t > 0: y(2t - 1) \leq y(2t + 1)$, and $\forall t \geq 0: y(2t) \geq y(2t + 2) \geq y(2t + 1)$.
All the inequalities are strict if and only if $0 < r_1, r_2 < 1$ and $k_2 > k_1$.\footnote{We always assume that $k_2 \geq k_1$.} \end{lemma}
Since we also have, for every $t \geq 0$, $x(2t) \leq x(2t + 1)$, we obtain, for every $t > 0$, $x(2t - 1) \geq x(2t + 1) \geq x(2t)$, and for every $t \geq 0$, $x(2t) \leq x(2t + 2) \leq x(2t + 1)$. By analogy, $\forall t > 0: y(2t - 1) \leq y(2t + 1) \leq y(2t)$, and $\forall t \geq 0: y(2t) \geq y(2t + 2) \geq y(2t + 1)$. Intuitively, this means that the sequence $\set{x(t)}$ alternates while its amplitude gets smaller, and the same holds for the sequence $\set{y(t)}$, with the opposite alternation direction. The intuitive reason is that first agent $1$ increases her action, while agent $2$ decreases hers. Then, since $2$ has decreased her action, so does $1$, and since $1$ has increased hers, so does $2$. \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ This alternating process continues, and the amplitude subsides, since the changing part is convexly combined with the constant one.
To prove the theorem we use only the monotonicity of the subsequences of the even and of the odd actions. }{ } We now prove the lemma. \begin{proof}
We employ induction. For $t = 0$, we need to show that $x(0) \leq x(2) \leq x(1)$ and $y(0) \geq y(2) \geq y(1)$.
We know that $x(0) = k_1$, $x(1) = (1 - r_1) \cdot k_1 + r_1 \cdot k_2$, and $y(0) = k_2$, $y(1) = (1 - r_2) \cdot k_2 + r_2 \cdot k_1$.
Since $y(1) \leq k_2$, we have $x(2) = (1 - r_1) \cdot k_1 + r_1 \cdot y(1) \leq x(1)$. Since $y(1) \geq k_1$, we also have $x(2) = (1 - r_1) \cdot k_1 + r_1 \cdot y(1) \geq x(0)$.
The proof for $y$s is analogous.
For the induction step, for any $t > 0$, assume that the lemma holds for $t - 1$, which means $x(2t - 3) \geq x(2t - 1)$ (for $t > 1$), $x(2t - 2) \leq x(2t) \leq x(2t - 1)$, and $y(2t - 3) \leq y(2t - 1)$ (for $t > 1$), $y(2t - 2) \geq y(2t) \geq y(2t - 1)$.
We now prove the lemma for $t$.
By Definition~\ref{def:fix_recip}, $x(2t - 1) = (1 - r_1)k_1 + r_1 y(2t - 2)$ and $x(2t + 1) = (1 - r_1)k_1 + r_1 y(2t)$. Since $y(2t - 2) \geq y(2t)$, we have $x(2t - 1) \geq x(2t + 1)$. By analogy, we can prove that $y(2t - 1) \leq y(2t + 1)$.
Also by definition, $x(2t) = (1 - r_1)k_1 + r_1 y(2t - 1)$ and $x(2t + 2) = (1 - r_1)k_1 + r_1 y(2t + 1)$. Since $y(2t - 1) \leq y(2t + 1)$, we have $x(2t) \leq x(2t + 2)$. By definition, $x(2t + 1) = (1 - r_1)k_1 + r_1 y(2t)$. Since $y(2t) \geq y(2t - 1)$, we conclude that $x(2t + 1) \geq x(2t)$. By analogy, we prove that $y(2t + 1) \leq y(2t)$. From this, we conclude that $x(2t + 2) \leq x(2t + 1)$, and we have shown that $x(2t) \leq x(2t + 2) \leq x(2t + 1)$. By analogy, we prove that $y(2t) \geq y(2t + 2) \geq y(2t + 1)$.
The equivalence of strictness in all the inequalities to $0 < r_1, r_2 < 1, k_2 > k_1$ is proven by repeating the proof with strict inequalities in one direction, and by noticing that the failure of one of the conditions $0 < r_1, r_2 < 1, k_2 > k_1$ implies equality in at least one of the statements of the lemma. \end{proof}
With these results we now prove Theorem~\ref{The:fixed_recip}. \begin{proof} Using Lemma~\ref{lemma:fixed_recip:subseq_synch_case}, we assume the synchronous case. We first prove convergence, and then find its limit. For each agent, Lemma~\ref{lemma:fixed_recip:per_seq_behav} implies that the even actions form a monotone sequence, and so do the odd ones. Both sequences are bounded, which can be easily proven by induction, and therefore each one converges. The whole sequence converges if and only if both limits are the same. We now show that they are indeed the same for the sequence $\set{x(t)}$; the proof for $\set{y(t)}$ is analogous. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ $ x(t + 1) - x(t)\\ = (1 - r_1)k_1 + r_1 y(t) - (1 - r_1)k_1 - r_1 y(t - 1)\\ = r_1 (y(t) - y(t - 1)) = r_1 r_2 (x(t - 1) - x(t - 2)) = \ldots\\ = (r_1 r_2)^{\floor{t / 2}} \begin{cases} x(1) - x(0) & t = 2 s, s \in \mathbb{N}\\ x(2) - x(1) & t = 2 s + 1, s \in \mathbb{N}. \end{cases}$ }{ \begin{align*}
&x(t + 1) - x(t)\\ &\quad= (1 - r_1)k_1 + r_1 y(t) - (1 - r_1)k_1 - r_1 y(t - 1)\\ &\quad= r_1 (y(t) - y(t - 1)) = r_1 r_2 (x(t - 1) - x(t - 2)) = \ldots\\ &\quad= (r_1 r_2)^{\floor{t / 2}} \begin{cases} x(1) - x(0) & t = 2 s, s \in \mathbb{N}\\ x(2) - x(1) & t = 2 s + 1, s \in \mathbb{N}. \end{cases}
\end{align*} } As $r_1 r_2 < 1$, this difference goes to 0 as $t$ goes to $\infty$. Thus, $x(t)$ converges (and so does $y(t)$). To find the limits $L_x = \lim_{t \to \infty}{x(t)}$ and $L_y = \lim_{t \to \infty}{y(t)}$, notice that in the limit we have
$(1 - r_1) k_1 + r_1 L_y = L_x$ and $(1 - r_2) k_2 + r_2 L_x = L_y$
with the unique solution:
$L_x = \frac{(1 - r_1) k_1 + r_1 (1 - r_2) k_2}{1 - r_1 r_2}$ and $L_y = \frac{(1 - r_2) k_2 + r_2 (1 - r_1) k_1}{1 - r_1 r_2}$.
\end{proof}
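The closed-form solution of the two limit equations can be double-checked numerically; a small sketch of ours:

```python
# Checking that the closed-form limits solve the two limit equations
# (1 - r1) k1 + r1 Ly = Lx and (1 - r2) k2 + r2 Lx = Ly for random parameters.
import random

random.seed(7)
for _ in range(100):
    k1, k2 = random.uniform(-1, 1), random.uniform(-1, 1)
    r1, r2 = random.uniform(0, 0.99), random.uniform(0, 0.99)
    Lx = ((1 - r1) * k1 + r1 * (1 - r2) * k2) / (1 - r1 * r2)
    Ly = ((1 - r2) * k2 + r2 * (1 - r1) * k1) / (1 - r1 * r2)
    assert abs((1 - r1) * k1 + r1 * Ly - Lx) < 1e-9
    assert abs((1 - r2) * k2 + r2 * Lx - Ly) < 1e-9
```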
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{
If, contrary to the theorem's assumption, $r_1 r_2 = 1$, then, since $r_1 r_2 = 1 \iff r_1 = r_2 = 1$, each agent simply repeats what the other one did last time, so the agents swap roles. In particular, unless $k_1 = k_2$, no convergence takes place.
}{ }
We see that $L_x \leq L_y$,
which is intuitive: since each agent always weighs in her own kindness, the kinder agent also acts more kindly in the limit.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ When this limit inequality is strict, we have $x(t) < y(t)$ for all $t \geq t_0$ for some $t_0$. To find when it is strict, consider the following: \begin{eqnarray*} \frac{(1 - r_1) k_1 + r_1 (1 - r_2) k_2}{1 - r_1 r_2} = \frac{(1 - r_2) k_2 + r_2 (1 - r_1) k_1}{1 - r_1 r_2}\\ \iff (1 - r_1)(1 - r_2) k_1 = (1 - r_1)(1 - r_2) k_2\\ \iff r_1 = 1 \lor r_2 = 1 \lor k_1 = k_2, \end{eqnarray*} and thus, it is strict if and only if $r_1 < 1 \land r_2 < 1 \land k_1 < k_2$. }{ } In the simulation of the actions over time in Figure~\ref{fig:fixed_fixed_03_05_09}, on the left, $y(t)$ is always larger than $x(t)$, and on the right, they alternate several times before $y(t)$ becomes larger.
\begin{figure}
\caption{Simulation of actions for the synchronous case, with $r_1 + r_2 < 1$, $r_2 = 0.5$ on the left, and $r_1 + r_2 > 1$, $r_2 = 0.9$ on the right. This is a \name{fixed} - \name{fixed} reciprocation, with $k_1 = 1, k_2 = 2, r_1 = 0.3$. Each agent's actions oscillate while converging to her own limit.
}
\label{fig:fixed_fixed_03_05_09}
\caption{The common limit of the actions as a function of the reciprocity coefficients, for a \name{Floating} - \name{floating} reciprocation, $k_1 = 1, k_2 = 2$. Given $r_2$, agent $1$ receives most when $r_1 = 1$, and given $r_1$, agent $2$ receives most when $r_2 = 0$.}
\label{fig:float_float_limit_agent12}
\end{figure}
\subsection{\name{Floating} Reciprocation}
If both agents have the \name{floating} reciprocation attitude, their action sequences converge to a common limit. \begin{theorem}\label{The:float_recip} If \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{the reciprocation coefficients are neither both $0$ nor both $1$, which means } $0 < r_1 + r_2 < 2$, then, as $t \to \infty$, $x(t)$ and $y(t)$ converge to the same limit.
In the synchronous case ($T_1 = T_2 = T$), they both approach \begin{equation*} \frac{1}{2} \left(k_1 + k_2 + (k_2 - k_1)\frac{r_1 - r_2}{r_1 + r_2}\right) = \frac{r_2}{r_1 + r_2} k_1 + \frac{r_1}{r_1 + r_2} k_2. \end{equation*} \end{theorem}
The common limit of the actions is shown in~\figref{fig:float_float_limit_agent12}.
In Example~\ref{ex:colleages}, if agents $1$ and $2$ employ \name{floating} reciprocation, $r_1 = r_2 = 0.5, r_1' = r_2' = 0.0$ and $k_1 = 0, k_2 = 0.5$, then we obtain $L_x = L_y = (1/2) \cdot 0 + (1/2) \cdot 0.5 = 0.25$.
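The common limit of Theorem~\ref{The:float_recip} is easy to check numerically; the Python sketch below (function name and step count are ours, for illustration) iterates the synchronous \name{floating} updates with the values $k_1 = 0, k_2 = 0.5, r_1 = r_2 = 0.5$.

```python
def floating_floating(k1, k2, r1, r2, steps=200):
    """Synchronous floating-floating reciprocation:
    each agent mixes her own last action with the other's last action."""
    x, y = k1, k2
    for _ in range(steps):
        x, y = (1 - r1) * x + r1 * y, (1 - r2) * y + r2 * x
    return x, y

k1, k2, r1, r2 = 0.0, 0.5, 0.5, 0.5          # the example's values
L = (r2 * k1 + r1 * k2) / (r1 + r2)          # common limit from the theorem
x, y = floating_floating(k1, k2, r1, r2)
print(L, abs(x - L) < 1e-9 and abs(y - L) < 1e-9)  # 0.25 True
```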
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{ The idea of the proof is to show that $\set{[\min\set{x(t), y(t)}, \max\set{x(t), y(t)}]}_{t = 1}^\infty$ is a nested sequence of segments whose lengths approach zero, and therefore $\set{x(t)}$ and $\set{y(t)}$ converge to the same limit. The limit is then found by computing $\lim_{t \to \infty}\paren{x(t) + y(t)}$. }{ Throughout the paper, whenever we need concrete $T_1, T_2$, we consider the synchronous case. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ The alternative case is omitted due to lack of space. }{ The alternative case is fully handled in~\sectnref{Sec:altern}. } \begin{proof} We first prove that the convergence takes place.
If both agents act at time $t > 0$, then $y(t) - x (t)$ \begin{align} &\quad= x(t - 1) (r_2 - 1 + r_1) + y(t - 1) (1 - r_2 - r_1)\nonumber\\ \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ &\quad= y(t - 1) (1 - r_1 - r_2) - x(t - 1) (1 - r_1 - r_2)\nonumber\\ }{ } &\quad= (1 - r_1 - r_2) (y(t - 1) - x(t - 1)). \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ }{ \nonumber } \label{eq:recur_diff_eq_both} \end{align} Since $0 < r_1 + r_2 < 2$, we have $\abs{(1 - r_1 - r_2)} < 1$.
If only agent $1$ acts at time $t > 0$, then $y(t) - x(t)$ \begin{eqnarray} = y(t - 1) (1 - r_1) - x(t - 1) (1 - r_1)\\ = (1 - r_1) (y(t - 1) - x(t - 1)). \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ }{ \nonumber } \label{eq:recur_diff_eq_one} \end{eqnarray} If $r_1 > 0$, then $\abs{(1 - r_1)} < 1$. Similarly, if only agent $2$ acts, then $y(t) - x(t) = (1 - r_2) (y(t - 1) - x(t - 1))$. Since $r_1 + r_2 > 0$, either $r_1$ or $r_2$ is greater than $0$, and since each agent acts an infinite number of times, we obtain $\lim_{t \to \infty} \abs{y(t) - x(t)} = 0$. Since $\forall t > 0: x(t), y(t) \in [\min\set{x(t - 1), y(t - 1)}, \max\set{x(t - 1), y(t - 1)}]$, we have a nested sequence of segments whose lengths approach zero, thus $x(t)$ and $y(t)$ both converge, and to the same limit.
Assume $T_1 = T_2 = T$ now, to find the common limit.
For all $t > 0$, \begin{eqnarray*} x(t) + y(t) = x(t - 1)(1 - r_1 + r_2) + y(t - 1) (r_1 + 1 - r_2)\nonumber\\ = x(t - 1) + y(t - 1) + (r_1 - r_2) (y(t - 1) - x(t - 1))\nonumber\\ \Rightarrow \lim_{t \to \infty}{x(t) + y(t)} = k_1 + k_2 + \sum_{t = 0}^{\infty}{(r_1 - r_2) (y(t) - x(t))}\\ \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \stackrel{\eqnsref{eq:recur_diff_eq_both}}{=} k_1 + k_2 + (r_1 - r_2)\sum_{t = 0}^{\infty}{(1 - r_1 - r_2)^t (k_2 - k_1)}\\ }{ } \underbrace{\stackrel{\text{geom. series}}{\to}}_{t \to \infty} k_1 + k_2 + (r_1 - r_2)\frac{k_2 - k_1}{r_1 + r_2}\\ = k_1 + k_2 + (k_2 - k_1)\frac{r_1 - r_2}{r_1 + r_2}. \end{eqnarray*}
Since we have shown that both limits exist and are equal, each is equal to half of $k_1 + k_2 + (k_2 - k_1)\frac{r_1 - r_2}{r_1 + r_2}$. \end{proof}
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{
If, contrary to the theorem's assumption, $r_1 + r_2 = 0$, then, since $r_1 + r_2 = 0 \iff r_1 = r_2 = 0$, each agent keeps doing the same thing all the time: agent $1$ does $k_1$ and agent $2$ does $k_2$.
If, contrary to the theorem's assumption, $r_1 + r_2 = 2$, then, since $r_1 + r_2 = 2 \iff r_1 = r_2 = 1$, each agent simply repeats what the other one did last time. In the synchronous case, this means that they play $k_1$ and $k_2$ interchangeably.
}{ }
The following gives the relation between $x$s and $y$s. \begin{proposition} If $r_1 + r_2 \leq 1$, then, for every $t \geq 0: y(t) \geq x(t)$.
If $r_1 + r_2 \geq 1$, then, $y(0) \geq x(0)$. For every $t>0, t \in T_1 \cap T_2$, we have $y(t - 1) \geq x(t - 1) \Rightarrow y(t) \leq x(t)$, and $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \geq x(t)$. For any other $t$, we have $y(t - 1) \geq x(t - 1) \Rightarrow y(t) \geq x(t)$, and $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \leq x(t)$. In words, $x$s and $y$s alter their relative positions if and only if both act. \end{proposition} \begin{proof} Consider the case $r_1 + r_2 \leq 1$ first.
We employ induction. The basis is $t = 0$, where $y(0) = k_2 \geq k_1 = x(0)$.
For the induction step, assume the proposition for all the times smaller than $t > 0$ and prove it for $t$. If only $1$ acts at $t$, then $y(t) = y(t - 1)$ and $x(t) = (1 - r_1) x(t - 1) + r_1 y(t - 1)$. Therefore,
$y(t) \geq x(t) \iff y(t - 1) \geq (1 - r_1) x(t - 1) + r_1 y(t - 1)$, which is equivalent to $(1 - r_1) y(t - 1) \geq (1 - r_1) x(t - 1)$,
which holds by the induction hypothesis. \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ If only agent $2$ acts at $t$, then $x(t) = x(t - 1)$ and $y(t) = (1 - r_2) y(t - 1) + r_2 x(t - 1)$. Therefore,
$y(t) \geq x(t)\\ \iff (1 - r_2) y(t - 1) + r_2 x(t - 1) \geq x(t - 1)\\ \iff (1 - r_2) y(t - 1) \geq (1 - r_2) x(t - 1),$ which is true by the induction hypothesis.
}{ The case where only $2$ acts at $t$ is similar. }
If both agents act at $t$, then $x(t) = (1 - r_1) x(t - 1) + r_1 y(t - 1)$ and $y(t) = (1 - r_2) y(t - 1) + r_2 x(t - 1)$. Therefore,
$y(t) \geq x(t) \iff (1 - r_2) y(t - 1) + r_2 x(t - 1) \geq (1 - r_1) x(t - 1) + r_1 y(t - 1) \iff (1 - r_1 - r_2) y(t - 1) \geq (1 - r_1 - r_2) x(t - 1),$
which is true by the induction hypothesis and using the assumption $r_1 + r_2 \leq 1$.
Consider the case $r_1 + r_2 \geq 1$ now.
We employ induction again. The basis is $t = 0$, where $y(0) = k_2 \geq k_1 = x(0)$.
For the induction step, assume the proposition for all values smaller than $t > 0$ and prove it for $t$. The cases where only agent $1$ acts at $t$ and where only $2$ acts at $t$ are shown analogously to how they are shown for the case $r_1 + r_2 \leq 1$. If both agents act at $t$, then we have shown that
$y(t) \geq x(t)\\ \iff (1 - r_1 - r_2) y(t - 1) \geq (1 - r_1 - r_2) x(t - 1),$
which means that $y(t - 1) \geq x(t - 1) \Rightarrow y(t) \leq x(t)$ and $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \geq x(t)$, assuming $r_1 + r_2 \geq 1$. \end{proof}
The proposition implies that if $r_1 + r_2 \leq 1$, then $\set{x(t)}$ is non-decreasing and $\set{y(t)}$ is non-increasing, since each next $x(t)$ (or $y(t)$) is either unchanged or a convex combination of the previous value with a larger one (a smaller one, for $y(t)$).
For $r_1 + r_2 > 1$, both $\set{x(t)}$ and $\set{y(t)}$ are non-monotonic, unless $T_1 \cap T_2 = \set{0}$, in which case they are monotonic for the reason above (in this case we always have $y(t) \geq x(t)$). For $T_1 \cap T_2 \neq \set{0}$, take any positive $t$ in $T_1 \cap T_2$. Then the value that was larger at $t - 1$ becomes the smaller one at $t$, thereby decreasing, and the smaller value analogously becomes the larger one. Afterwards, the now-smaller value only grows and the now-larger value only decreases, so both sequences behave non-monotonically. In particular, in the alternating case, each agent's actions alternate. {This discussion assumes $r_1 < 1, r_2 < 1$, to avoid getting $x(t) = y(t)$ when a single agent acts.}
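The alternation for $r_1 + r_2 > 1$ in the synchronous case follows from \eqnsref{eq:recur_diff_eq_both}: the difference $y(t) - x(t)$ is multiplied by the negative factor $1 - r_1 - r_2$ at every step. A short Python check (parameter values are ours, for illustration):

```python
k1, k2, r1, r2 = 1.0, 2.0, 0.3, 0.9   # r1 + r2 > 1, both below 1
x, y, diffs = k1, k2, []
for _ in range(8):
    x, y = (1 - r1) * x + r1 * y, (1 - r2) * y + r2 * x
    diffs.append(y - x)               # multiplied by (1 - r1 - r2) = -0.2 each step
alternates = all(diffs[t] * diffs[t + 1] < 0 for t in range(len(diffs) - 1))
print(alternates)  # True: the agents swap relative positions at every step
```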
Some examples are simulated in Figure~\ref{fig:float_float_03_05_09}.
\begin{figure}
\caption{Simulation results for the synchronous case, $r_1 + r_2 < 1$ (left) and $r_1 + r_2 > 1$ (right). We assume a \name{Floating} - \name{floating} reciprocation, $k_1 = 1, k_2 = 2, r_1 = 0.3$. In the left graph, $r_2 = 0.5$, while in the right one, $r_2 = 0.9$. In the left graph, agent $1$'s actions are smaller than those of $2$; agent $1$'s actions increase, while those of agent $2$ decrease. In the right graph, the actions of the agents alter their relative positions at each time step; each agent's actions go up and down.
}
\label{fig:float_float_03_05_09}
\end{figure}
}
\subsection{\name{Fixed} and \name{Floating} Reciprocation}\label{Sec:dynam_interact_pair:fix_float_recip}
Assume that agent $1$ employs the \name{fixed} reciprocation attitude, while $2$ acts by the \name{floating} reciprocation.
We can show Theorem~\ref{The:fixed_float_recip}
using the following lemma. \begin{lemma}\label{lemma:fixed_float_recip:monot} If $r_2 > 0$ and $r_1 + r_2 \leq 1$, then, for every $t \geq t_{1, 1}: x(t + 1) \leq x(t)$, and for every $t \geq 0: y(t + 1) \leq y(t)$. \end{lemma} \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ We now prove this lemma. \begin{proof}
We employ induction. The basis consists of the following subcases: $t = 0, 0 < t < t_{1, 1}$ and $t = t_{1, 1}$. For $t = 0$, we have either $y(1) = y(0)$ or $y(1) = (1 - r_2) k_2 + r_2 k_1 \leq k_2 = y(0)$.
For any $0 < t < t_{1, 1}$, we have either $y(t + 1) = y(t)$ or $y(t + 1) = (1 - r_2) y(t) + r_2 k_1 \stackrel{k_1 \leq y(t)}{\leq} (1 - r_2) y(t) + r_2 y(t) = y(t)$.
For $t = t_{1, 1}$, we either have $x(t_{1, 1} + 1) = x(t_{1, 1})$ or $x(t_{1, 1} + 1) = (1 - r_1) k_1 + r_1 y(t_{1, 1})$, and anyway $x(t_{1, 1}) = (1 - r_1) k_1 + r_1 y(t_{1, 1} - 1)$ by the definition of $t = t_{1, 1}$. Since $y(t_{1, 1}) \leq y(t_{1, 1} - 1)$ by the induction hypothesis, we have $x(t_{1, 1} + 1) \leq x(t_{1, 1})$.
As to $y$s, we have either $y(t_{1, 1} + 1) = y(t_{1, 1})$ or \begin{eqnarray*} y(t_{1, 1} + 1) = (1 - r_2) y(t_{1, 1}) + r_2 x(t_{1, 1})\\ \Rightarrow y(t_{1, 1} + 1) \leq y(t_{1, 1}) \stackrel{r_2 > 0}{\iff} x(t_{1, 1}) \leq y(t_{1, 1}). \end{eqnarray*} Either $y(t_{1, 1}) = y(t_{1, 1} - 1)$ or $y(t_{1, 1}) = (1 - r_2) y(t_{1, 1} - 1) + r_2 k_1$. In the first case, \begin{eqnarray*} x(t_{1, 1}) \leq y(t_{1, 1}) \iff (1 - r_1) k_1 + r_1 y(t_{1, 1} - 1) \leq y(t_{1, 1} - 1)\\ \iff (1 - r_1) k_1 \leq (1 - r_1) y(t_{1, 1} - 1), \end{eqnarray*} which always holds. In the second case, \begin{eqnarray*} x(t_{1, 1}) \leq y(t_{1, 1})\\ \iff (1 - r_1) k_1 + r_1 y(t_{1, 1} - 1) \leq (1 - r_2) y(t_{1, 1} - 1) + r_2 k_1\\ \iff (1 - r_1 - r_2) k_1 \leq (1 - r_1 - r_2) y(t_{1, 1} - 1), \end{eqnarray*} which is true, since $k_1 \leq y(t_{1, 1} - 1)$ and $(r_1 + r_2) \leq 1$. Thus, the basis is proven.
For the induction step, for any $t > t_{1, 1}$, assume that $ x(t) \leq x(t - 1) \leq \ldots \leq x(t_{1, 1})$, and $y(t) \leq y(t - 1) \leq \ldots \leq y(0)$.
We now prove the lemma for $t$. If $x(t + 1) = x(t)$, then trivially $x(t + 1) \leq x(t)$. Otherwise, $x(t + 1) = (1 - r_1) k_1 + r_1 y(t) \leq (1 - r_1) k_1 + r_1 y(s_1(t) - 1) = x(t)$, where the inequality stems from the induction hypothesis. For $y$s, if $y(t + 1) = y(t)$, then trivially $y(t + 1) \leq y(t)$. Otherwise, $y(t + 1) = (1 - r_2) y(t) + r_2 x(t) \leq (1 - r_2) y(s_2(t) - 1) + r_2 x(s_2(t) - 1) = y(t)$. The above inequality stems from the induction hypothesis, if $s_2(t) - 1 \geq t_{1, 1}$, so that the induction hypothesis for $x$s holds as well as the one for $y$s. Otherwise ($s_2(t) \leq t_{1, 1}$), the above inequality is proven as follows: \begin{eqnarray*} (1 - r_2) y(t) + r_2 x(t)\\ \leq (1 - r_2) y(s_2(t) - 1) + r_2 x(s_2(t) - 1)\\ \iff\\ (1 - r_2) (y(t) - y(s_2(t) - 1)) \leq r_2 (x(s_2(t) - 1) - x(t))\\ = r_2 (k_1 - x(t)) \end{eqnarray*} Notice that $(y(t) - y(s_2(t) - 1)) = y(s_2(t)) - y(s_2(t) - 1) \stackrel{s_2(t) \leq t_{1, 1}}{=} (1 - r_2) y(s_2(t) - 1) + r_2 k_1 - y(s_2(t) - 1) = r_2 (k_1 - y(s_2(t) - 1))$. Thus, \begin{eqnarray*} (1 - r_2) (y(t) - y(s_2(t) - 1)) \leq r_2 (k_1 - x(t))\\ \iff (1 - r_2) r_2 (k_1 - y(s_2(t) - 1)) \leq r_2 (k_1 - x(t))\\ \stackrel{r_2 > 0}{\iff} (1 - r_2) (k_1 - y(s_2(t) - 1)) \leq (k_1 - x(t))\\ \iff x(t) - r_2 k_1 \leq (1 - r_2) y(s_2(t) - 1). \end{eqnarray*} To show this, notice that $s_2(t) \leq t_{1, 1} < t \Rightarrow s_2(t) + 1 \leq t$. In addition, $2$ acts at time slot $t_{1, 1} - 1$, and therefore $s_2(t) \geq t_{1, 1} - 1$. Therefore, using the induction hypothesis for $x$s we obtain \begin{eqnarray*} x(t) \stackrel{t \geq s_2(t) + 1 \geq t_{1, 1}} \leq x(s_2(t) + 1) = (1 - r_1) k_1 + r_1 y(s_2(t)), \end{eqnarray*} where the equality stems from the fact that if $s_2(t) + 1 \notin T_1$, then it is in $T_2$, and therefore $t = s_2(t)$, a contradiction.
Therefore, \begin{eqnarray*} x(t) - r_2 k_1 \leq (1 - r_1 - r_2) k_1 + r_1 y(s_2(t))\\ \leq (1 - r_1 - r_2) y(s_2(t)) + r_1 y(s_2(t))\\ = (1 - r_2) y(s_2(t)) \leq (1 - r_2) y(s_2(t) - 1), \end{eqnarray*} and the step has been proven. \end{proof} }{
The proof is by induction on $t$, using the definitions of reciprocation. } With this lemma, we can prove the following. \begin{theorem}\label{The:fixed_float_recip} If $r_2 > 0$ and $r_1 + r_2 \leq 1$, then, $\lim_{t \to \infty}{x(t)} = \lim_{t \to \infty}{y(t)} = k_1$. \end{theorem}
\begin{proof} We first prove that the convergence takes place, and then find its limit. For each agent, Lemma~\ref{lemma:fixed_float_recip:monot} implies that her actions are monotonically non-increasing. Since the actions are bounded below by $k_1$, which can be easily proven by induction, they both converge.
To find the limits, notice that in the limit we have \begin{eqnarray} (1 - r_1) k_1 + r_1 L_y = L_x\label{eq:recur_x_eq}\\ (1 - r_2) L_y + r_2 L_x = L_y.\label{eq:recur_y_eq} \end{eqnarray} From \eqnsref{eq:recur_y_eq}, we conclude that $L_x = L_y$, since $r_2 > 0$. Substituting this to \eqnsref{eq:recur_x_eq} gives us $L_x = L_y = k_1$, since $r_2 > 0$ and $r_1 + r_2 \leq 1$ imply $r_1 < 1$. \end{proof}
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{
If, contrary to the theorem's assumption, $r_2 = 0$, then agent $2$ always acts $k_2$, and agent $1$ always acts $(1 - r_1) k_1 + r_1 k_2$ for all $t > 0$.
If, contrary to the theorem's assumption, $r_1 + r_2 > 1$ while the remaining conditions hold, then what happens remains an open question.
}{ }
The relation between the sequences of $x$s and $y$s is given by the following proposition (also covering the case $r_1 + r_2 \geq 1$).
\begin{proposition}\label{The:fix_float_recip:struct} If $r_1 + r_2 \leq 1$, then for every $t \geq 0: y(t) \geq x(t)$.
If $r_1 + r_2 \geq 1$, then $y(0) \geq x(0)$. For every $t > 0$ such that $t \in T_1 \cap T_2$, we have $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \geq x(t)$. For any $t \in T_1 \setminus T_2$, we have $y(t) \geq x(t)$, and for any $t \in T_2 \setminus T_1$, we have $y(t - 1) \geq x(t - 1) \Rightarrow y(t) \geq x(t)$, and $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \leq x(t)$. \end{proposition} \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \begin{proof} Consider the case $r_1 + r_2 \leq 1$ first.
We employ induction. The basis is $t = 0$, where $y(0) = k_2 \geq k_1 = x(0)$.
For the induction step, we assume the proposition for all values smaller than $t > 0$ and prove the proposition for $t$. If only $1$ acts at $t$, then $y(t) = y(t - 1)$ and $x(t) = (1 - r_1) k_1 + r_1 y(t - 1)$. Therefore, \begin{eqnarray*} y(t) \geq x(t) \iff y(t - 1) \geq (1 - r_1) k_1 + r_1 y(t - 1)\\ \iff (1 - r_1) y(t - 1) \geq (1 - r_1) k_1, \end{eqnarray*} which is true.
If only agent $2$ acts at $t$, then $x(t) = x(t - 1)$ and $y(t) = (1 - r_2) y(t - 1) + r_2 x(t - 1)$. Therefore, \begin{eqnarray*} y(t) \geq x(t)\\ \iff (1 - r_2) y(t - 1) + r_2 x(t - 1) \geq x(t - 1)\\ \iff (1 - r_2) y(t - 1) \geq (1 - r_2) x(t - 1), \end{eqnarray*} which is true by the induction hypothesis.
If both agents act at $t$, then $x(t) = (1 - r_1) k_1 + r_1 y(t - 1)$ and $y(t) = (1 - r_2) y(t - 1) + r_2 x(t - 1)$. Therefore, \begin{eqnarray*} y(t) \geq x(t) \iff (1 - r_2) y(t - 1) + r_2 x(t - 1)\\ \geq (1 - r_1) k_1 + r_1 y(t - 1)\\ \iff (1 - r_1 - r_2) y(t - 1) \geq (1 - r_1) k_1 - r_2 x(t - 1). \end{eqnarray*} Since $x(t - 1) \geq k_1$, it is enough to show that $(1 - r_1 - r_2) y(t - 1) \geq (1 - r_1) x(t - 1) - r_2 x(t - 1) = (1 - r_1 - r_2) x(t - 1)$, which is true by the induction hypothesis and using the assumption $r_1 + r_2 \leq 1$. Thus, the case $r_1 + r_2 \leq 1$ has been proven.
Consider the case $r_1 + r_2 \geq 1$ now.
We employ induction. The basis is $t = 0$, where $y(0) = k_2 \geq k_1 = x(0)$.
For the induction step, we assume the proposition for all values smaller than $t > 0$ and prove the proposition for $t$. The cases where only agent $1$ acts at $t$ and where only $2$ acts at $t$ are shown by analogy to how they are shown for the case $r_1 + r_2 \leq 1$. If both agents act at $t$, then we have shown that \begin{eqnarray} y(t) \geq x(t)\nonumber\\ \iff (1 - r_1 - r_2) y(t - 1) \geq (1 - r_1) k_1 - r_2 x(t - 1). \label{eq:Lemma:fix_float_recip:struct:both_act} \end{eqnarray} Now, if $y(t - 1) \leq x(t - 1)$, then $(1 - r_1 - r_2) y(t - 1) \geq (1 - r_1 - r_2) x(t - 1) \geq (1 - r_1) k_1 - r_2 x(t - 1)$, and from \eqnsref{eq:Lemma:fix_float_recip:struct:both_act} we have $y(t) \geq x(t)$.
\end{proof} }{
The proof employs induction on $t$. }
We note that although it is still open whether Theorem~\ref{The:fixed_float_recip} holds for $r_1 + r_2 > 1$, we know that in this case neither monotonicity (Lemma~\ref{lemma:fixed_float_recip:monot}) holds, nor is $y(t)$ always at least as large as $x(t)$, or the other way around.
As a counterexample for both of them, consider the case of $r_2 = 1, 0 < r_1 < 1, k_2 > k_1$.
One can readily prove by induction that for all $t$ we have $x(2t + 1) >x(2t) = x(2t + 2)$ and $y(2t) > y(2t - 1) = y(2t + 1)$, and thus both sequences are not monotonic. In addition, one can inductively prove that $x(2t + 1) > y(2t + 1), x(2t) < y(2t)$, and therefore no sequence is always larger than the other one.
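This counterexample is easy to verify by simulation (the variable names and the concrete values $k_1 = 1, k_2 = 2, r_1 = 0.3$ are ours, for illustration):

```python
k1, k2, r1 = 1.0, 2.0, 0.3            # agent 1 fixed; agent 2 floating with r2 = 1
xs, ys = [k1], [k2]
for _ in range(20):
    xs.append((1 - r1) * k1 + r1 * ys[-1])  # x(t+1) = (1-r1)*k1 + r1*y(t)
    ys.append(xs[-2])                       # y(t+1) = x(t): r2 = 1 copies agent 1
odd_above = all(xs[2 * t + 1] > ys[2 * t + 1] for t in range(9))
even_below = all(xs[2 * t] < ys[2 * t] for t in range(1, 10))
print(odd_above and even_below)  # True: neither sequence dominates the other
```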
Figure~\ref{fig:fixed_float_03_05_09} shows how the actions evolve over time. The actions seem to converge also in the unproven case $r_1 + r_2 > 1$.
\begin{figure}
\caption{Simulation of actions for the synchronous case, with $r_1 + r_2 < 1$, $r_2 = 0.5$ on the left, and $r_1 + r_2 > 1$, $r_2 = 0.9$ on the right. This is a \name{fixed} - \name{floating} reciprocation, with $k_1 = 1, k_2 = 2, r_1 = 0.3$. In the left graph, agent $1$'s actions are smaller than those of $2$; agent $1$'s actions decrease after $t = 1$, while those of agent $2$ decrease all the time. The common limit's value fits the theorem's prediction.
}
\label{fig:fixed_float_03_05_09}
\end{figure}
In the case of the mirroring assumption that agent $1$ acts according to the \name{floating} reciprocation attitude, while $2$ acts according to the \name{fixed} reciprocation, we can \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ obtain similar results, which are omitted due to lack of space. }{ obtain the following similar results by analogy.
\begin{theorem}\label{The:float_fixed_recip} If $r_1 > 0$ and $r_1 + r_2 \leq 1$, then, $\lim_{t \to \infty}{x(t)} =\lim_{t \to \infty}{y(t)} = k_2$. \end{theorem} \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{
The proof is analogous, with the lemma being \begin{lemma} If $r_1 > 0$ and $r_1 + r_2 \leq 1$, then, for every $t \geq t_{2, 1}: y(t + 1) \geq y(t)$, and for every $t \geq 0: x(t + 1) \geq x(t)$. \end{lemma}
If, contrary to the theorem's assumption, $r_1 = 0$, then agent $1$ always acts $k_1$, and agent $2$ always acts $(1 - r_2) k_2 + r_2 k_1$ for all $t > 0$.
If, contrary to the theorem's assumption, $r_1 + r_2 > 1$ while the remaining conditions hold, then what happens remains an open question.
}{ }
Regarding the relation between $x$s and $y$s, we prove the following, by analogy to how Proposition~\ref{The:fix_float_recip:struct} is proven: \begin{proposition} If $r_1 + r_2 \leq 1$, then for every $t \geq 0: y(t) \geq x(t)$.
If $r_1 + r_2 \geq 1$, then, $y(0) \geq x(0)$. For every $t > 0$, such that $t \in T_1 \cap T_2$, we have $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \geq x(t)$. For any $t \in T_2 \setminus T_1$, we have $y(t) \geq x(t)$, and for any $t \in T_1 \setminus T_2$, we have $y(t - 1) \geq x(t - 1) \Rightarrow y(t) \geq x(t)$ and $y(t - 1) \leq x(t - 1) \Rightarrow y(t) \leq x(t)$. \end{proposition} }
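By symmetry with the previous subsection, the limit $k_2$ of Theorem~\ref{The:float_fixed_recip} can be checked numerically (function name and parameter values are ours, for illustration):

```python
def floating_fixed(k1, k2, r1, r2, steps=300):
    """Agent 1 is floating, agent 2 is fixed (anchored at her kindness k2);
    synchronous updates."""
    x, y = k1, k2
    for _ in range(steps):
        x, y = (1 - r1) * x + r1 * y, (1 - r2) * k2 + r2 * x
    return x, y

k1, k2, r1, r2 = 1.0, 2.0, 0.5, 0.3   # r1 > 0 and r1 + r2 <= 1
x, y = floating_fixed(k1, k2, r1, r2)
print(abs(x - k2) < 1e-9 and abs(y - k2) < 1e-9)  # True: both approach k2
```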
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ Using Proposition~\ref{prop:gen_convergence_async_mixed}, we will prove the following corollary. \begin{corollary}\label{cor:fixed_float_recip_sync_gen} Consider pairwise interaction, where agent $i$ is \name{fixed}, and the other agent $j$ is \name{floating}, and every agent acts at least once every $r$ times. Assume that $r_i < 1$ and $r_j > 0$. Then, both actions sequences converge geometrically to $k_i$. \end{corollary} We omit the proof at this stage. }
For all the considered cases, we have the following \begin{proposition}\label{prop:kind_mono_limit} If both $L_x$ and $L_y$ exist, then $L_x \leq L_y$. \end{proposition}
\ifthenelse{\NOT \equal{IJCAI16}{AAMAS16}}{ \section{Alternating Case}\label{Sec:altern}
We consider the interaction of two agents, $1$ and $2$.
Some of the statements in the paper refer only to the synchronous case ($T_1 = T_2 = T$). All of them can be updated for the alternating case ($T_1$ contains precisely all the even times and $T_2$ contains zero and all the odd ones) \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{, and we show this in this section. }{. }
Theorem~\ref{The:float_recip} can be extended as follows:
\begin{theorem}\label{The:float_recip_altern} In the case where agents act alternately, which is when $T_1$ contains precisely all the even times and $T_2$ contains zero and all the odd ones, they both approach \begin{eqnarray*} \frac{1}{2} \left(k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1) \right)\\ = \frac{r_2}{r_1 + r_2 - r_1 r_2} k_1 + \frac{r_1 - r_1 r_2}{r_1 + r_2 - r_1 r_2} k_2. \end{eqnarray*} \end{theorem} \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \begin{proof} We now assume the alternating case. Consider the behavior of $x(t) + y(t)$. For an even $t > 0$, only agent $1$ acts and we have \begin{eqnarray*} x(t) + y(t) = x(t - 1) + y(t - 1) + (r_1) (y(t - 1) - x(t - 1)). \end{eqnarray*} For an odd $t$, only $2$ acts and we have \begin{eqnarray*} x(t) + y(t) = x(t - 1) + y(t - 1) + (- r_2) (y(t - 1) - x(t - 1)). \end{eqnarray*}
And therefore, we have \begin{eqnarray*} \Rightarrow \lim_{t \to \infty}{x(t) + y(t)} = k_1 + k_2 + \sum_{t = 0}^{\infty}{(- r_2) (y(2t) - x(2t))}\\ + \sum_{t = 0}^{\infty}{(r_1) (y(2t + 1) - x(2t + 1))}\\ \stackrel{\eqnsref{eq:recur_diff_eq_one}}{=} k_1 + k_2 - (r_2)\cdot \sum_{t = 0}^{\infty}{((1 - r_1)^{t} (1 - r_2)^t) (k_2 - k_1)}\\ + (r_1)\cdot \sum_{t = 0}^{\infty}{((1 - r_1)^t (1 - r_2)^{t + 1}) (k_2 - k_1)}\\ = k_1 + k_2\\ + \sum_{t = 0}^{\infty}{(1 - r_1)^t (1 - r_2)^t (r_1 - r_2 - r_1 r_2) (k_2 - k_1)}\\ \underbrace{\stackrel{\text{geom. series}}{\to}}_{t \to \infty} k_1 + k_2\\ + (r_1 - r_2 - r_1 r_2)(k_2 - k_1)\inv{1 - (1 - r_1)(1 - r_2)}\\ = k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1). \end{eqnarray*}
Since we have shown that both limits exist and are equal, each equals half of $k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1)$. \end{proof} }{
The idea of the proof is to show that $x(t) + y(t)$ approaches $k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1)$. }
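The alternating-case limit can also be checked by direct iteration; the following sketch (function name and parameter values are ours, for illustration) lets agent $1$ act at even times and agent $2$ at odd times, and compares the result with the theorem's formula.

```python
def floating_alternating(k1, k2, r1, r2, steps=400):
    """Floating-floating reciprocation in the alternating case:
    agent 1 acts at even times, agent 2 at odd times."""
    x, y = k1, k2
    for t in range(1, steps + 1):
        if t % 2 == 0:
            x = (1 - r1) * x + r1 * y   # only agent 1 acts
        else:
            y = (1 - r2) * y + r2 * x   # only agent 2 acts
    return x, y

k1, k2, r1, r2 = 1.0, 2.0, 0.5, 0.5
L = (r2 * k1 + (r1 - r1 * r2) * k2) / (r1 + r2 - r1 * r2)
x, y = floating_alternating(k1, k2, r1, r2)
print(abs(x - L) < 1e-9 and abs(y - L) < 1e-9)  # True
```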
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{
If, contrary to the theorem's assumption, $r_1 + r_2 = 2$, then, since $r_1 + r_2 = 2 \iff r_1 = r_2 = 1$, in the alternating case agent $2$ plays at time $1$ the action of agent $1$ at time $0$, namely $k_1$, and from then on each agent plays it.
}{ }
\ifthenelse{\equal{Process}{GT} \OR \equal{Process}{Two_agents} \OR \equal{Process}{Mult_agents} \OR \equal{Process}{All}}{ Proposition~\ref{The:float_recip_coeff:max_util_pair} can be generalized to the alternating case as well, such that the statement stays true for both the synchronous and the alternating cases. \ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ The proof for the alternating case follows. \begin{proof} Let us prove for agent $1$. We first express $1$'s utility and then maximize it. Assume first that $0 < r_2 < 1$, and therefore $0 < r_1 + r_2 < 2$. Therefore, from Theorem~\ref{The:float_recip_altern}, \begin{eqnarray*} \lim_{p \to \infty}{v(t_{1, p})} = \lim_{p \to \infty}{w(t_{1, p})}\\ = \frac{1}{2} \left(k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1) \right)\\ \Rightarrow u_{1} = (1 - \beta_1) \frac{1}{2} \left(k_1 + k_2 + \frac{(r_1 - r_2 - r_1 r_2)}{r_1 + r_2 - r_1 r_2} (k_2 - k_1) \right). \end{eqnarray*}
To find a maximum point of this utility as a function of $r_1$, we differentiate: \begin{eqnarray*} \frac{\partial (u_{1})}{\partial (r_1)} = \ldots = \frac{(1 - \beta_1) (k_2 - k_1) \paren{r_2 - (r_2)^2}}{(r_1 + r_2 - r_1 r_2)^2}. \end{eqnarray*} Therefore, the derivative is zero either for all $r_1$ or for none. In any case, the maximum is attained at an endpoint, so we check the utility at those points now.
At $r_1 = 0$, it is $(1 - \beta_1) k_1$. At $r_1 = 1$, it is $(1 - \beta_1) (r_2 k_1 + (1 - r_2) k_2)$. Since $r_2 k_1 + (1 - r_2) k_2$ is a convex combination of $k_1$ and $k_2$, this is at least $k_1$. Thus, agent $1$ prefers $r_1 = 1$ if and only if $1 - \beta_1 \geq 0$, and we have proven the proposition for agent $1$ choosing, when $r_2 > 0$.
If $r_2 = 0$, then we notice that $r_1 = r_2 = 0$ results in $u_{1} = k_2 - \beta_1 k_1$, which is the largest possible utility, so choosing $r_1 = 0$ is optimal here, and we have proven the proposition for agent $1$ choosing.
The case of agent $2$ choosing $r_2$ is proven analogously, except for the case $r_1 = 0$. In that case, for $r_2 = 1$, agent $2$'s utility is $k_1 - \beta_2 k_1$. Since for $r_1 = 0$ agent $1$ will act $k_1$ regardless of $r_2$, this is the best possible utility for agent $2$, since the performed action is minimized. \end{proof} }{
The idea of the proof is expressing the utility and maximizing it by differentiating. We obtain that the derivative is zero either always or never, so we check the endpoints. }
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ The discussion in \sectnref{Sec:sw_maxim:converg_opt} for the \name{floating} reciprocation remains true, when we employ the generalization of Proposition~\ref{The:float_recip_coeff:max_util_pair} for the alternating case. }{ } }{ }
}{ } \section{Multi-Agent Interaction}\label{Sec:dynam_interact_interdep} \ifthenelse{\equal{Process}{Process} \OR \equal{Process}{GT} \OR \equal{Process}{All}}{
We now analyze the general interdependent interaction, where agents interact with many other agents. }{
We analyze reciprocation in the above model. }
To formally discuss the actions after the interaction has settled down, we consider the limits (if they exist)\footnote{Agent $i$ acts at the times in $T_i = \set{t_{i, 0} = 0, t_{i, 1}, t_{i, 2}, \ldots}$.} $\lim_{p \to \infty}{\imp_{i, j}(t_{1, p})}$ and $\lim_{t \to \infty}{x_{i, j}(t)}$, for agents $i$ and $j$. Since the sequence $\set{x_{i,j}(t)}$ is $\set{\imp_{i, j}(t_{1, p})}$ with finite repetitions, the limit $\lim_{p \to \infty}{\imp_{i, j}(t_{1, p})}$ exists if and only if $\lim_{t \to \infty}{x_{i, j}(t)}$ does. If they exist, they are equal. Denote $L_{i, j} \defas \lim_{t \to \infty}{x_{i, j}(t)}$.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \subsection{Convergence} }{ }
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ We show that in the synchronous case, for every two agents $i, j$ such that $(i, j) \in E$, actions $x_{i, j}(t)$ converge to a strictly positive combination of all the kindness values. The rate of convergence is geometric. }
We first provide general convergence results; then, in Theorem~\ref{The:gen_convergence}, we find the common limit for the synchronous case when all agents are \name{floating} or all the \name{fixed} agents share the same kindness. Finally, we use simulations to analyze the limits in the other cases. In this section, the ambivalent case of $r_i + r_i' = 1$ is treated as \name{floating}.
First, we have convergence for the case of \name{floating} agents. \begin{proposition}\label{prop:gen_convergence_async_float} Consider a connected interaction graph, where all agents are \name{floating} and for every agent $i$, $r_i + r_i' < 1$. Then, for all pairs of agents $i \neq j$ such that $(i, j) \in E$, the limit $L_{i, j}$ exists; all these limits are equal to each other. \end{proposition} \begin{proof} Follows directly from \cite[Theorem~$2$]{BlondelHendrickxOlshevskyTsitsiklis2005}. This article and similar articles on multiagent coordination~\cite{Moreau2005,TsitsiklisBertsekasAthans1986} prove convergence when all agents are \name{floating}. \end{proof}
We now show convergence, when some agents are \name{fixed}. \begin{proposition}\label{prop:gen_convergence_async_mixed} Consider a connected interaction graph, where for all agents $i$, $r_i' > 0$. Assume that at least one agent employs the \name{fixed} attitude and every agent acts at least once every $q$ times, for a natural $q > 0$.
Then, for all pairs of agents $i \neq j$ such that $(i, j) \in E$, the limit $L_{i, j}$ exists. The convergence is geometrically fast. \end{proposition} \begin{proof} We express how each action depends on the actions in the previous time in matrix $A(t) \in \realsP^{\abs{E} \times \abs{E}}$, which, in the synchronous case, is defined as follows: \begin{equation} A(t)((i, j), (k, l)) \defas \begin{cases} (1 - r_i - r_i') & \text{if } k = i, l = j;\\ r_i + r_i' \frac{1}{\abs{\outNeighb(i)}} & \text{if } k = j, l = i;\\ r_i' \frac{1}{\abs{\outNeighb(i)}} & \text{if } k \neq j, l = i;\\ 0 & \text{otherwise}, \end{cases} \label{eq:dynam_mat_def} \end{equation} where the first line is missing
for the \name{fixed} agents, since for them, own behavior does not matter. If, for each time $t \in T$, the column vector $\vec{p(t)} \in \realsP^{\abs{E}}$ describes the actions at time $t$, in the sense that its $(i, j)$th coordinate contains $x_{i, j}(t)$ (for $(i,j)\in E$), then we have $\vec{p}(t + 1) = A(t + 1) \vec{p}(t) + \vec{k'}$, where $\vec{k'}$ is the relevant kindness vector, formally defined as \begin{equation*} k'(t)((i, j)) \defas \begin{cases} (1 - r_i - r_i') k_i & \text{if } i \text{ is \name{fixed}};\\ 0 & \text{otherwise}. \end{cases} \end{equation*}
In a not necessarily synchronous case, only a subset of agents act at a given time~$t$. For an acting agent $i$, every $A(t)((i, j), (k, l))$ is defined as in the synchronous case. For a non-acting agent~$i$, we define \begin{equation} A(t)((i, j), (k, l)) \defas \begin{cases} 1 & \text{if } k = i, l = j;\\ 0 & \text{otherwise}. \end{cases} \label{eq:dynam_mat_def_async} \end{equation} The kindness vector is defined as \begin{equation*} k'(t)((i, j)) \defas \begin{cases} (1 - r_i - r_i') k_i & \text{if } i \text{ is \name{fixed} and acting};\\ 0 & \text{otherwise}. \end{cases} \end{equation*}
By induction, we obtain $\vec{p}(t) = \prod_{t' = 1}^t{A(t')} \vec{p}(0) + \sum_{\vec{k'} \in K}{\set{ \paren{\sum_{l \in S_{\vec{k'}}(t)}\prod_{t' = l }^t{A(t')}} \vec{k'} }}$, where $K$ is the set of all possible kindness vectors and $S_{\vec{k'}}(t)$ is a set of the appearance times of $\vec{k'}$, which are at most $t$.
We aim to show that $\vec{p(t)}$ converges. First, defining $r_i(M)$ to be the sum of the $i$th row of $M$, recall \cite[\eqns{$3$}]{ButlerSiegel2013}, namely \begin{eqnarray} r_i(AB) = \sum_{j = 1}^n{\sum_{k = 1}^n{a_{i,j} b_{j, k}}} = \sum_{j = 1}^n{a_{i, j} r_j(B)}. \label{eq:sum_row_prod} \end{eqnarray} Since the sum of every row in any $A(t)$ is at most $1$, we conclude that if $B \leq \beta C$, where $C_{i, j} \equiv 1$, then also $A(t) B \leq \beta C$.
We now prove that an upper bound of the form $\beta C$ on the entries of $\prod_{t = p}^q{A(t)}$ converges to zero geometrically. We have just shown that this bound never increases. First, $A(p) \leq C$, yielding the initial bound. Now, let $i$ be a \name{fixed} agent, and assume he acts at time $t$. Then, each row of $A(t)$ that corresponds to an action of $i$ sums to less than $1$, and from \eqnsref{eq:sum_row_prod} we gather that the upper bound on the corresponding rows of $A(t) B$ decreases relative to the bound on $B$ by some constant ratio. Since the graph is connected, for all agents $i$, $r_i' > 0$, and every agent acts at least once every $q$ times, after sufficiently many multiplications the bound on all the entries will have decreased by a constant ratio.
Every agent acts at least once every $q$ times, so we gather that for some $q' > 0$, every $q'$ times, the bound on the product of matrices becomes at most a given fraction of the bound $q'$ steps before. This implies that $\prod_{t' = 1}^t{A(t')}$ converges geometrically.
As for $\sum_{l \in S_{\vec{k'}}(t)}\prod_{t' = l }^t{A(t')}$, we have proven an exponential upper bound, thus $\sum_{l \in S_{\vec{k'}}(t)}\prod_{t' = l }^t{A(t')} \leq \sum_{l \in S_{\vec{k'}}(t)}{ \alpha^{\brackt{\frac{t - l + 1}{q'}}} C } \leq \sum_{l \in S_{\vec{k'}}(t)}{ \alpha^{\frac{t - l}{q'}} C } = \alpha^{\frac{t}{q'}} \paren{\sum_{l \in S_{\vec{k'}}(t)}{ \alpha^{\frac{- l}{q'}} }} C \stackrel{\leq}{\text{\small geom. seq.}} \frac{\alpha^{\frac{t}{q'}} - 1}{\alpha^{\frac{1}{q'}} - 1} C$, proving a geometric convergence of the series $\sum_{l \in S_{\vec{k'}}(t)}\prod_{t' = l }^t{A(t')}$. Therefore, $\vec{p(t)}$ converges, and it does so geometrically fast. \end{proof}
As an immediate conclusion of this proposition, we can finally generalize \ifthenelse{\NOT\equal{Process}{Mult_agents}}{ Theorem~\ref{The:fixed_float_recip} }{ the convergence theorem for pairwise interaction of a \name{fixed} and a \name{floating} agent } to the case $r_1 + r_2 > 1$ as follows. \begin{corollary} Consider pairwise interaction, where one agent $i$ employs \name{fixed} reciprocation and the other agent $j$ employs the \name{floating} one, and every agent acts at least once every $q$ times. Assume that $0 < r_i < 1$ and $r_j > 0$. Then, both limits exist and are equal to $k_i$. The convergence is geometrically fast. \end{corollary} \begin{proof} Proposition~\ref{prop:gen_convergence_async_mixed} implies geometrically fast convergence. We find the limits as in the proof of \ifthenelse{\NOT\equal{Process}{Mult_agents}}{ Theorem~\ref{The:fixed_float_recip}. }{ the theorem for pairwise interaction that is being generalized. } \end{proof}
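This corollary can be illustrated numerically. The following sketch (with hypothetical parameter values, in Python rather than the paper's MATLAB code, and restricted to the synchronous case) iterates the pairwise updates of a \name{fixed} agent $1$ and a \name{floating} agent $2$ and checks that both action sequences approach $k_1$:

```python
# Hypothetical pairwise example: agent 1 is 'fixed', agent 2 is 'floating'.
k1, k2 = 1.0, 5.0      # kindness values (example choices)
r1, r2 = 0.4, 0.6      # reciprocation coefficients: 0 < r1 < 1, r2 > 0

x12, x21 = k1, k2      # initial actions x_{1,2}(0) = k_1, x_{2,1}(0) = k_2
for _ in range(200):   # synchronous rounds
    x12, x21 = ((1 - r1) * k1 + r1 * x21,    # fixed: weighs innate kindness
                (1 - r2) * x21 + r2 * x12)   # floating: weighs own last action
# Both actions converge geometrically to the fixed agent's kindness k_1.
print(round(x12, 6), round(x21, 6))  # 1.0 1.0
```

At the fixed point, the \name{floating} update forces $x_{2,1} = x_{1,2}$, and the \name{fixed} update then forces $x_{1,2} = k_1$, in agreement with the corollary.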
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ We now consider two cases which fall beyond this corollary's assumptions, while we do assume synchroneity, to simplify the details.
If, unlike the corollary assumes, $r_i = 1$, then, provided $r_j < 1$, we know from the \name{float}-\name{float} case that both action sequences approach $\frac{r_j}{r_i + r_j} k_i + \frac{r_i}{r_i + r_j} k_j$. If $r_i = r_j = 1$, then the agents keep exchanging their actions, each alternating between $k_i$ and $k_j$, so the actions do not converge unless $k_i = k_j$.
If, unlike the corollary assumes, $r_j = 0$, then agent $i$ constantly acts $(1 - r_i) k_i + r_i k_j$ and $j$ constantly acts $k_j$. }{ }
We now turn to finding the limit. We manage to do this only in the synchronous case, when all the agents are \name{floating} or all the \name{fixed} agents have the same kindness. For all reciprocation attitudes, the following theorem also provides an alternative proof of convergence in the synchronous case. \begin{theorem}\label{The:gen_convergence} Given a connected interaction graph, consider the synchronous case where for all agents $i$, $r_i' > 0$.
If there exists a cycle of an odd length in the graph
(or at least one agent $i$ employs \name{floating} reciprocation and has $r_i + r_i' < 1$), then, for all pairs of agents $i \neq j$ such that $(i, j) \in E$, the limit $L_{i, j}$ exists and is a positive combination of all the kindness values of the \name{fixed} agents, if at least one agent is \name{fixed}, and of all the kindness values $k_1, \ldots, k_n$, if all agents are \name{floating}. The convergence is geometrically fast. Moreover, if all agents employ \name{floating} reciprocation, then all these limits are equal to the same convex combination of the kindness values, namely \begin{eqnarray} L = \frac{\sum_{i \in N}{\paren{\frac{d(i)}{r_i + r_i'} \cdot k_i}}}{\sum_{i \in N}{\paren{\frac{d(i)}{r_i + r_i'}}}}.\label{eq:all_float_lim} \end{eqnarray} If, on the other hand, all the \name{fixed} agents have the same kindness $k$, then all these limits are equal to $k$. In any case, when not all the agents are \name{floating}, changing only the kindness of the \name{floating} agents leaves all the limits as before (this also follows from the limits being positive combinations of the kindness values of the \name{fixed} agents). \end{theorem} Let us say several words about the assumptions. If all agents are \name{fixed}, we can prove that the actions are subsequences of the actions in the synchronous case (a straightforward generalization of~ \ifthenelse{\NOT\equal{Process}{Mult_agents}}{ Lemma~\ref{lemma:fixed_recip:subseq_synch_case} }{ the appropriate lemma for pairwise interaction }). Thus, when all agents are \name{fixed}, the synchronous case represents all the cases in the limit.
The assumption of a cycle of an odd length virtually always holds, since three people influencing each other form such a cycle.
\begin{proof} We first prove the case where all agents use \name{floating} reciprocation.
We express how each action depends on the actions in the previous time in a matrix, and prove the theorem by applying the famous Perron--Frobenius theorem~ \cite[Theorem~$1.1$,~$1.2$]{seneta2006} to this matrix. We now define the dynamics matrix $A \in \realsP^{\abs{E} \times \abs{E}}$: \begin{equation} A((i, j), (k, l)) \defas \begin{cases} (1 - r_i - r_i') & \text{if } k = i, l = j;\\ r_i + r_i' \frac{1}{\abs{\outNeighb(i)}} & \text{if } k = j, l = i;\\ r_i' \frac{1}{\abs{\outNeighb(i)}} & \text{if } k \neq j, l = i;\\ 0 & \text{otherwise}. \end{cases} \label{eq:dynam_mat_def_float} \end{equation} According to the definition of \name{floating} reciprocation, if for each time $t \in T$ the column vector $\vec{p(t)} \in \realsP^{\abs{E}}$ describes the actions at time $t$, in the sense that its $(i, j)$th coordinate contains $x_{i, j}(t)$ (for $(i,j)\in E$), then $\vec p(t + 1) = A \vec p(t)$. We then call $\vec{p(t)}$ an action vector. Initially, $\vec p_{(i, j)}(0) = k_i$.
Further, we shall need the Perron--Frobenius theorem for primitive matrices, so we first show that $A$ is primitive. $A$ is irreducible, since we can move from any $(i, j) \in E$ to any $(k, l) \in E$ as follows. We can move from an action to its reverse, since if $k = j, l = i$, then $A((i, j), (k, l)) = r_i + r_i' \frac{1}{\abs{\outNeighb(i)}} > 0$. We can also move from an action to another action by the same agent, since we can move to any action on the same agent and then to its reverse. To move to an action on the same agent, notice that if $l = i$, then $A((i, j), (k, l)) \geq r_i' \frac{1}{\abs{\outNeighb(i)}} > 0$. Now, we can move from any action $(i, j)$ to any other action $(k, l)$ by moving to the reverse action $(j, i)$ (if $k = j, l = i$, we are done). Then, we follow a path from $j$ to $k$ in the graph $G$ by moving to the appropriate action by an agent and then to its reverse, as many times as needed until we are at the action $(k, j)$, and finally we move to the action $(k, l)$. Thus, $A$ is irreducible.
By definition, $A$ is non-negative. $A$ is aperiodic, since either at least one agent $i$ has $r_i + r_i' < 1$, and thus the diagonal contains non-zero elements, or there exists a cycle of an odd length in the interaction graph $G$. In the latter case, let the cycle be $i_1, i_2, \ldots, i_p$ for an odd $p$. Consider the following cycles on the index set of the matrix: $(i, j), (j, i), (i, j)$ for any $(i, j) \in E$ and $(i_2, i_1), (i_3, i_2), \ldots, (i_p, i_{p - 1}), (i_1, i_p), (i_2, i_1)$. Their lengths are $2$ and $p$, respectively, whose greatest common divisor is $1$, implying aperiodicity. Being irreducible and aperiodic, $A$ is primitive by~\cite[Theorem~$1.4$]{seneta2006}. Since the sum of every row is $1$, the spectral radius is $1$.
According to the Perron--Frobenius theorem for primitive matrices~ \cite[Theorem~$1.1$]{seneta2006}, the absolute values of all eigenvalues except one eigenvalue of $1$ are strictly less than $1$. The eigenvalue $1$ has unique right and left eigenvectors, up to a constant factor. Both these eigenvectors are strictly positive. Therefore,~\cite[Theorem~$1.2$]{seneta2006} implies that $\lim_{t \to \infty}{A^t} = \vec{1} \vec{v}'$, where $\vec{v}'$ is the left eigenvector of the value~$1$, normalized such that $\vec{v}' \vec{1} = 1$, and the approach rate is geometric. Therefore, we obtain $\lim_{t \to \infty}{\vec{p(t)}} = \lim_{t \to \infty}{A^t \vec{p}(0)} = \vec{1} \vec{v}' \vec{p}(0) = \vec{1} \sum_{(i, j) \in E}{v'((i, j)) k_i}$. Thus, actions converge to $\vec{1}$ times $\sum_{(i, j) \in E}{v'((i, j)) k_i}$.
To find this limit, consider the vector $v'$ defined by $v'((i, j)) = \frac{1}{r_i + r_i'}$. Substitution shows it is a left eigenvector of $A$. To normalize it such that $\vec{v}' \vec{1} = 1$, we divide this vector by the sum of its coordinates, which is $\sum_{i \in N}{\frac{d(i)}{r_i + r_i'}}$, obtaining $v'((i, j)) = \frac{1}{\sum_{i \in N}{\frac{d(i)}{r_i + r_i'}}} \cdot \frac{1}{r_i + r_i'}$. Therefore, the common limit is $\frac{\sum_{i \in N}{\paren{\frac{d(i)}{r_i + r_i'} \cdot k_i}}}{\sum_{i \in N}{\paren{\frac{d(i)}{r_i + r_i'}}}}$.
We now prove the case where at least one agent employs \name{fixed} reciprocation. We define the dynamics matrix $A$ analogously to the previous case, except that the first line of~\eqnsref{eq:dynam_mat_def_float} is missing
for the \name{fixed} agents, since for them, own behavior does not matter. In this case, we have $\vec{p}(t + 1) = A \vec{p}(t) + \vec{k'}$, where $\vec{k'}$ is the relevant kindness vector, formally defined as \begin{equation*} k'((i, j)) \defas \begin{cases} (1 - r_i - r_i') k_i & \text{if } i \text{ is \name{fixed}};\\ 0 & \text{otherwise}. \end{cases} \end{equation*} By induction, we obtain $\vec{p}(t) = A^t \vec{p}(0) + \paren{\sum_{l = 0}^{t - 1}{A^l}} \vec{k'}$.
Analogously to the previous case, $A$ is irreducible and non-negative. As shown above, $A$ is aperiodic, and therefore primitive. Since at least one agent employs \name{fixed} reciprocation, at least one row of $A$ sums to less than $1$, and therefore the spectral radius of $A$ is strictly less than $1$.
Now, the Perron--Frobenius theorem implies that all the eigenvalues are strictly smaller than $1$ in absolute value. Since we have $\lim_{t \to \infty}{\vec{p}(t)} = \lim_{t \to \infty}{A^t \vec{p}(0)} + \paren{\lim_{t \to \infty}{\sum_{l = 0}^{t - 1}{A^l}}} \vec{k'}$, ~\cite[Theorem~$1.2$]{seneta2006} implies that these limits exist (the first part converges to zero, while the second one is a series of geometrically decreasing elements). Since $A$ is primitive, $\paren{\lim_{t \to \infty}{\sum_{l = 0}^{t - 1}{A^l}}} > 0$.
We now find the limits when all the \name{fixed} agents have the same kindness $k$. Taking the limits in the equality $\vec{p}(t + 1) = A \vec{p}(t) + \vec{k'}$ yields $(I - A) \lim_{t \to \infty}\vec{p}(t) = \vec{k'}$. \cite[Lemma~$B.1$]{seneta2006} implies that $I - A$ is invertible; therefore, if we guess a vector $\vec{x}$ that fulfills $(I - A) \vec{x} = \vec{k'}$, it is the limit. Since the vector with all actions equal to $k$ satisfies this equation, we conclude that all the limits are equal to $k$. In any case, when there exists at least one \name{fixed} agent, changing only the kindness of the \name{floating} agents does not change the (unique) solution of $(I - A) \vec{x} = \vec{k'}$ and, therefore, does not change the limits. \end{proof}
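The equal-kindness case can be illustrated numerically. The following sketch (hypothetical parameter values, not the paper's MATLAB code) simulates a three-agent clique with two \name{fixed} agents of common kindness $2$ and one \name{floating} agent of a different kindness, and checks that all actions approach $2$:

```python
n = 3                                   # hypothetical 3-agent clique
k  = [2.0, 2.0, 7.0]                    # agents 0, 1: fixed, equal kindness
r  = [0.2, 0.4, 0.3]                    # r_i
rp = [0.3, 0.1, 0.2]                    # r_i'
fixed = [True, True, False]             # agent 2 is floating
E = [(i, j) for i in range(n) for j in range(n) if i != j]
d = n - 1                               # degree of each agent in a clique

x = {(i, j): k[i] for (i, j) in E}      # x_{i,j}(0) = k_i
for _ in range(2000):                   # synchronous rounds
    got = {i: sum(x[(j, i)] for j in range(n) if j != i) for i in range(n)}
    x = {(i, j): (1 - r[i] - rp[i]) * (k[i] if fixed[i] else x[(i, j)])
                 + r[i] * x[(j, i)] + rp[i] * got[i] / d
         for (i, j) in E}
# All limits equal the common kindness of the fixed agents, here 2.
print(all(abs(x[e] - 2.0) < 1e-9 for e in E))  # True
```

Note that the floating agent's own kindness ($7$ here) indeed leaves the limits unchanged, as the theorem asserts.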
Let us consider several examples of \eqnsref{eq:all_float_lim}.
\begin{example} If the interaction graph is regular, meaning that all the degrees are equal to each other, we have $L = \frac{\sum_{i \in N}{\paren{\frac{k_i}{r_i + r_i'}}}}{\sum_{i \in N}{\paren{\frac{1}{r_i + r_i'}}}}$. This holds for cliques, modeling small human collectives or groups of countries, and for cycles, modeling circular computer networks. \end{example}
\begin{example} For star networks, modeling networks of a supervisor of several people or entities, assume w.l.o.g.\ that agent $1$ is the center, and we have $L = \frac{\frac{n - 1}{r_1 + r_1'} \cdot k_1 + \sum_{i \in N \setminus \set{1}}{\paren{\frac{k_i}{r_i + r_i'} }}}{\frac{n - 1}{r_1 + r_1'} + \sum_{i \in N \setminus \set{1}}{\paren{\frac{1}{r_i + r_i'}}}}$. \end{example}
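The limit in \eqnsref{eq:all_float_lim} can also be checked by direct simulation. Below is a minimal sketch (hypothetical parameter values, not the paper's MATLAB code) for a three-agent clique with all agents \name{floating}, comparing the simulated common limit with the closed form:

```python
n = 3                               # clique of three floating agents
k  = [1.0, 2.0, 4.0]                # kindness values (example choices)
r  = [0.2, 0.3, 0.1]                # r_i
rp = [0.3, 0.2, 0.4]                # r_i'
E = [(i, j) for i in range(n) for j in range(n) if i != j]
d = n - 1                           # degree of each agent in a clique

x = {(i, j): k[i] for (i, j) in E}  # x_{i,j}(0) = k_i
for _ in range(5000):               # synchronous floating updates
    got = {i: sum(x[(j, i)] for j in range(n) if j != i) for i in range(n)}
    x = {(i, j): (1 - r[i] - rp[i]) * x[(i, j)]
                 + r[i] * x[(j, i)]
                 + rp[i] * got[i] / d
         for (i, j) in E}

# Closed-form common limit of the theorem:
L = (sum(d / (r[i] + rp[i]) * k[i] for i in range(n))
     / sum(d / (r[i] + rp[i]) for i in range(n)))
print(all(abs(x[e] - L) < 1e-9 for e in E))  # True
```

Every action of every agent approaches the same weighted mean of the kindness values, with weights $d(i)/(r_i + r_i')$.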
An obvious conclusion of the theorem is that the \name{fixed} agents are, intuitively speaking, more important than the \name{floating} ones; at least, their kindness is. We now draw conclusions about the optimal reciprocation, which goes back to providing decision support. \begin{proposition}\label{prop:float_max_L} If \eqnsref{eq:all_float_lim} holds, then agent~$i$ who wants to maximize the common value $L$, and who can choose either $r_i$ or $r_i'$ within given bounds $[a, b]$, for $a > 0$, should choose either the smallest or the largest possible coefficient, as follows. We assume she chooses $r_i$, but the same holds for $r_i'$ with the obvious adjustments. She should set $r_i$ to $b$ if $\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} -k_i \paren{\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'}}}}$ is positive, to $a$ if it is negative, and to an arbitrary value if it is zero. When this expression is not zero, only these choices are optimal. \end{proposition}
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{ The proof considers the sign of the derivative, and is omitted due to lack of space. }{ \begin{proof} Consider the derivative: \begin{eqnarray*} \frac{\partial L}{\partial (r_i)} = \frac{\frac{-d(i) k_i}{(r_i + r_i')^2} \paren{\sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'}}}} + \sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} \frac{d(i)}{(r_i + r_i')^2}} {\paren{\sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'}}}}^2}\\
= \frac{ \frac{d(i)}{(r_i + r_i')^2} \paren{ \sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} -k_i \paren{\sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'}}}} }} {\paren{\sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'}}}}^2}\\
= \frac{ \frac{d(i)}{(r_i + r_i')^2} \paren{ \sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} -k_i \paren{\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'}}}} }} {\paren{\sum_{j \in N}{\paren{\frac{d(j)}{r_j + r_j'}}}}^2}. \end{eqnarray*}
Therefore, the derivative is zero either for all $r_i$ or for none. In any case, the maximum is attained at an endpoint. To avoid a complicated substitution, we consider the sign of the derivative instead: \begin{eqnarray*} \frac{\partial L}{\partial r_i} \geq 0 \iff \sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} -k_i \paren{\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'}}}} \geq 0, \end{eqnarray*} and so when $\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'} \cdot k_j}} -k_i \paren{\sum_{j \in N \setminus \set{i}}{\paren{\frac{d(j)}{r_j + r_j'}}}}$ is nonnegative, $i$ should choose the largest $r_i$, which is $b$, and she should choose $r_i = a$ otherwise. When the derivative is not zero, these choices are the only optimal ones. \end{proof} }
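The sign rule of Proposition~\ref{prop:float_max_L} can be sanity-checked against brute force. The following sketch (hypothetical parameter values, assuming a three-agent clique) compares the endpoint selected by the sign test with a grid search over candidate values of $r_0$:

```python
def common_limit(r, rp, k, d):
    # L from the all-floating formula: a weighted mean of the kindness values
    w = [d[i] / (r[i] + rp[i]) for i in range(len(k))]
    return sum(wi * ki for wi, ki in zip(w, k)) / sum(w)

# Hypothetical 3-agent clique; agent 0 chooses r_0 within [a, b].
k  = [1.0, 2.0, 4.0]
rp = [0.3, 0.2, 0.4]
d  = [2, 2, 2]
r_others = [0.3, 0.1]          # r_1, r_2 of the other agents
a, b = 0.1, 0.6

# Sign test of the proposition (the sum runs over the other agents only):
s = sum(d[j] / (r_others[j - 1] + rp[j]) * (k[j] - k[0]) for j in (1, 2))
best = b if s > 0 else a

# Brute-force check over a grid of candidate r_0 values:
grid = [a + t * (b - a) / 100 for t in range(101)]
r0_star = max(grid, key=lambda r0: common_limit([r0] + r_others, rp, k, d))
print(abs(r0_star - best) < 1e-9)  # True
```

Here the sign expression is positive (the others are kinder on average than agent $0$), so the optimal choice is the largest coefficient, $r_0 = b$, which the grid search confirms.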
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{
}{ We can also prove a general convergence result, allowing agents to act in a more general way than modeled above. We need the following definition: \begin{defn} Given a metric space $(X, d)$, function $f \colon X \to X$ is called \defined{contraction}, if for any $x_1, x_2 \in X$, we have $d(f(x_1), f(x_2)) \leq q d(x_1, x_2)$, for a $q \in (0, 1)$. \end{defn}
\begin{theorem}\label{The:gen_convergence_contract} Given an interaction graph, assume the synchronous case, where every agent acts in the following way. Let $S \subseteq \mathbb{R}$ be a compact set. As in the proof of Theorem~\ref{The:gen_convergence}, assume that for each time $t \in T$, the column vector $\vec{p(t)} \in S^{\abs{E}}$ describes the actions at time $t$, in the sense that its $(i, j)$th\footnote{For $(i, j) \in E$.} coordinate contains $x_{i, j}(t)$, and that there exists a contraction $f \colon S^{\abs{E}} \to S^{\abs{E}}$ with respect to the Euclidean metric, such that $\vec p(t + 1) = f(\vec p(t))$. Initially, $\vec p_{(i, j)}(0) = k_i$. Then, for all pairs of agents $i \neq j$ such that $(i, j) \in E$, the limit $L_{i, j}$ exists. The convergence is geometrically fast. \end{theorem} This theorem is not a generalization of Theorem~\ref{The:gen_convergence}, since the matrix $A$ in the proof of Theorem~\ref{The:gen_convergence} need not be a contraction. \begin{proof} By the definition of an action, $\vec{p(t)} = f^t(\vec{p(0)})$, and by Banach's fixed point theorem~\cite[Exercise~$6.88$]{hewitt1975}, $f^t(\vec{p(0)})$ converges to the unique fixed point of $f$ in $S^{\abs{E}}$ at a geometric rate, thereby proving the theorem. \end{proof} }
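The mechanism behind Theorem~\ref{The:gen_convergence_contract} can be illustrated by a hypothetical contraction: an affine map $f(\vec{p}) = A\vec{p} + \vec{c}$ with $\norm{A} < 1$ is a contraction, so iterating it converges geometrically to the unique fixed point, which solves $(I - A)\vec{p} = \vec{c}$:

```python
# Hypothetical example of the theorem's setting on R^2: the matrix below
# has row sums 0.5 < 1, so f is a contraction, and iteration converges.
A = [[0.3, 0.2],
     [0.1, 0.4]]
c = [1.0, 2.0]

def f(p):
    # f(p) = A p + c
    return [sum(A[i][j] * p[j] for j in range(2)) + c[i] for i in range(2)]

p = [5.0, -3.0]            # an arbitrary initial action vector
for _ in range(200):
    p = f(p)
# The unique fixed point solves (I - A) p = c, i.e. p = (2.5, 3.75) here.
print([round(v, 6) for v in p])  # [2.5, 3.75]
```

Banach's theorem guarantees the same limit from any starting vector, which is why the initial kindness values do not affect the limit in this setting.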
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \subsection{The Process}\label{Sec:dynam_interact_interdep:process} }{ }
\section{Simulations}\label{Sec:dynam_interact_interdep_sim}
We now answer some theoretically unanswered questions from \sectnref{Sec:dynam_interact_interdep} using MATLAB simulations, running at least $100$ synchronous rounds to achieve practical convergence.
We first concentrate on the case of three agents who can influence each other, meaning that the interaction graph is a clique. We begin by corroborating the already proven result that when at least one \name{fixed} agent exists, the kindness of the \name{floating} agents does not influence the actions in the limit. We also corroborate the proven fact that when exactly one \name{fixed} agent exists, all the actions approach her kindness as time approaches infinity. When the actions are plotted as functions of time, we obtain graphs such as those in \figref{fig:act_time_fi_fl_fl_A_1_2_3_3_1_5}. The left graph in that figure demonstrates that exponential convergence may be quite slow, which is a new observation the theory did not provide. We also corroborate that the limiting values of the actions depend linearly on the kindness values of all the \name{fixed} agents, the proportionality coefficients being independent of the other kindness values.
In order to reasonably cover the sampling space, all the above-mentioned regularities have also been automatically checked for the combinations of kindness values of $1, 2, 3, 4, 5$, over $r_i$ and $r_i'$ values of $0.1, 0.3, 0.5, 0.7, 0.9$, and over all the relevant reciprocation attitudes. The checks were performed up to an absolute precision of $0.01$.
We do not know the exact limits when two or more \name{fixed} agents with distinct kindness values exist. We know at least that the dependencies on the kindness values are linear, but we lack theoretical knowledge about how the limits of the actions depend on the reciprocation coefficients, so we simulate the interaction for various reciprocation coefficients, obtaining graphs like those in \figref{fig:act_lim_r_1}, and analogous graphs for the dependency on $r_1'$. Note that we can have both increasing and decreasing graphs in the same scenario, as well as convex and concave ones. The observed monotonicity was automatically verified for all the above-mentioned combinations of parameters. This monotonicity means that if an agent wants to maximize the limit of the actions of some agent on some other agent, she can do so by choosing an extreme value of $r_i$ or $r_i'$.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ A natural question is whether Proposition~\ref{prop:kind_mono_limit} can be extended to more than two agents. Since the kindness of the \name{floating} agents does not affect the limits, this sort of monotonicity with respect to kindness would require all the limits of the actions to be the same. We therefore ask whether the monotonicity holds at least for the actions of the \name{fixed} agents. The answer is negative, as \figref{fig:act_time_fi_fi_fi_A_1_5_2} shows. }{ }
The next thing we study is a fourth agent, interacting with some of the other agents. We consider the limits of the actions as functions of the fourth agent's degree, but we find no regularity in these graphs; in particular, no monotonicity holds in the general case.
\begin{figure}
\caption{Simulation results for the synchronous case, with one \name{fixed} and two \name{floating} agents, for $r_1 = 0.1, r_2 = 0.1, r_3 = 0.1, r_1' = 0.5, r_2' = 0.1, r_3' = 0.1$. In the left graph, $k_1 = 1, k_2 = 1, k_3 = 2$, while in the right one, $k_1 = 3, k_2 = 1, k_3 = 5$. The common limits, which are equal to the kindness of agent $1$, fit the prediction of Theorem~\ref{The:gen_convergence}. }
\label{fig:act_time_fi_fl_fl_A_1_2_3_3_1_5}
\end{figure}
\begin{figure}
\caption{Simulation results for the synchronous case, where the limits of actions are plotted as functions of $r_1$, for $r_2 = 0.1, r_3 = 0.6, r_1' = 0.1, r_2' = 0.4, r_3' = 0.1$, $k_1 = 3, k_2 = 1, k_3 = 5$. In the left graph, agent $1$ and $2$ are the only \name{fixed} agents,
while in the right one, $1$ is the only \name{floating} agent. All the graphs exhibit monotonicity. }
\label{fig:act_lim_r_1}
\end{figure}
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ \begin{figure}\label{fig:act_time_fi_fi_fi_A_1_5_2}
\end{figure} }{ }
\ifthenelse{\NOT \equal{IJCAI16}{AAMAS16}}{ \section{Additional Notes}
When defining a reciprocating reaction, we used the last action of the other agent to model the opinion about the other agent. We can explicitly define the \defined{opinion} of agent $i$ about another agent $j$ at time $t$, \ifthenelse{\equal{Process}{Two_agents}}{ $\opin_{i} \colon \mathbb{R}^{t + 1} \to \mathbb{R}$, as $\opin_{i}(t) \defas \imp_{j}(s_j(t))$, }{ $\opin_{i, j} \colon \mathbb{R}^{t + 1} \to \mathbb{R}$, as $\opin_{i, j}(t) \defas \imp_{j, i}(s_j(t))$, } i.e., as the last action of $j$ upon $i$. Then, we obtain that in the \name{fixed} reciprocation attitude \ifthenelse{\equal{Process}{Two_agents}}{ $\imp_i(t) \defas \begin{cases} (1 - r_i) \cdot k_i + r_i \cdot \opin_i(t - 1) & t > t_{i, 0}\\ k_i & t = t_{i, 0} = 0. \end{cases}$ and in the \name{floating} reciprocation attitude $\imp_i(t) = \begin{cases} (1 - r_i) \cdot \imp_i(s_i(t-1)) + r_i \cdot \opin_i(t - 1).
& t > t_{i, 0}\\ k_i & t = t_{i, 0} = 0. \end{cases}$ }{ for $t > 0$, $\imp_{i,j}(t) \defas (1 - r_i - r'_i) \cdot k_i + r_i \cdot \opin_{i, j}(t-1) + r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}}$. and in the \name{floating} reciprocation attitude for $t > 0$, $\imp_{i,j}(t) \defas (1 - r_i - r'_i) \cdot \imp_{i, j}(s_i(t-1)) + r_i \cdot \opin_{i, j}(t-1) \\+ r'_i \cdot \frac{\got_i(t - 1)}{\abs{\Neighb(i)}}$. }
Naturally, a more general definition of opinion is possible. To this end, we define the temporal distance in $T_i$, for an $i \in N$, which designates how many times agent $i$ acted between two given times in $T_i$. Formally, for an $i \in N$ and two times $t_{i, l}, t_{i, m} \in T_i$, we define $d_{T_i} \colon T_i^2 \to \realsP$ by $d_{T_i}(t_{i, l}, t_{i, m}) \defas \abs{l - m}$. Now, define the cumulative opinion of $i$ about $j$ at time $t$ to be \ifthenelse{\equal{Process}{Two_agents}}{ $\opin_{i}(t) \defas \sum_{t' \in T_j, t' \leq t}{\delta_i( d_{T_j}(t', s_j(t)) + 1) \cdot \imp_{j}(t')}$, }{ $\opin_{i, j}(t) \defas \sum_{t' \in T_j, t' \leq t}{\delta_i( d_{T_j}(t', s_j(t)) + 1) \cdot \imp_{j, i}(t')}$, } where $\delta_i \colon \realsP \to \realsP$ is the discount function, expressing how much the elapsed time influences the importance of an action.
Our definition of opinion as \ifthenelse{\equal{Process}{Two_agents}}{ $\opin_{i}({t}) = \imp_{j}(s_j(t))$ }{ $\opin_{i, j}({t}) = \imp_{j, i}(s_j(t))$ } is a particular case of this model, where the discount function is $\delta_i(p) = \begin{cases} 1 & p = 1,\\ 0 & \text{otherwise}. \end{cases}$
}{ } \section{Related Work}
In addition to the direct motivation for our model, presented in Section~\ref{Sec:introd}, we were inspired by Trivers~\cite{Trivers1971} (a psychologist), who describes a balance between an inner quality (immutable kindness) and costs/benefits when determining an action. This idea of balancing the inner and the outer also appears in our model. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ Trivers also talks about a naturally selected, complicated balance between altruistic and cheating tendencies; we model the altruistic side as kindness, which represents the inherent inclination to contribute. The balance between complying and not complying is mentioned in the conclusion of~\cite{Ury2007}, motivating the convex combination between own kindness or action and others' actions. }
The idea of humans behaving according to a convex combination resembles another model, that of the altruistic extension, as in~\cite{ChenElkindKoutsoupias2011,HoeferSkopalik2013,RahnSchafer2013} and \chapt{iii.2} in~\cite{Ledyard1994}. In these papers, utility is often assumed to be a convex combination, whereas we consider a mechanism where the action itself is a convex combination.
\ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ Additional motivation stems from the bargaining and negotiation realm, where Pruitt~\cite{pruitt1981} mentions that in negotiation, cooperation often takes place in the form of reciprocation and that personal traits influence the way of cooperation, which corresponds in our model to the personal kindness and reciprocation coefficients. }
\section{Conclusions and Future Work}
In order to facilitate behavioral decisions regarding reciprocation, we need to predict what interaction a given setting will engender. To this end, we model two reciprocation attitudes where a reaction is a weighted combination of the action of the other player, the total action of the neighborhood and either one's own kindness or one's own last action. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ This combination's weights are defined by the reciprocation coefficients. } For a pairwise interaction, we show that the actions converge, find the exact limits, and show that if you consider your kindness while reciprocating (\name{fixed}), then, asymptotically, your action values stay closer to your kindness than if you consider it only at the outset.
For a general network, we prove convergence and find the common limit if all agents act synchronously and consider their last own action (\name{floating}), except for at most one agent. Dealing with the case when multiple agents consider their kindness (\name{fixed}) is mathematically hard, so we use simulations. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ We now elaborate on these insights from our results, beginning with the pairwise case.
For two agents with \name{fixed} reciprocation (i.e., when a reaction partly depends on one's own kindness), the kinder agent's actions are larger in the limit.
While interacting, each agent's actions go back and forth, monotonically narrowing towards her limit. This alternation may make the process confusing for an outsider.
For two agents with \name{floating} reciprocation (i.e., when a reaction partly depends on one's own last action), both agents' actions converge to a common limit, whose proximity to an agent's kindness is inversely proportional to her reciprocation coefficient. The commonality of the limit can intuitively be explained as follows: an agent's next action is a combination of her last action with the other agent's last action, which pulls the new action closer to the other's action; this new action is then taken into account in determining the subsequent action. \ifthenelse{\NOT\equal{IJCAI16}{IJCAI16}}{ This makes an agent aligned with the other agent. }{ }
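A minimal sketch of this claim, again for the synchronous special case and with names of our own choosing: the quantity $r_2 x_1 + r_1 x_2$ is preserved by the floating update, which immediately yields the common limit below.

```python
# Two-agent "floating" attitude, synchronous special case:
# each action mixes one's own last action with the other's last action.
def simulate_floating(k1, k2, r1, r2, steps=500):
    x1, x2 = k1, k2  # imp_i(0) = k_i
    for _ in range(steps):
        x1, x2 = (1 - r1) * x1 + r1 * x2, (1 - r2) * x2 + r2 * x1
    return x1, x2

# r2*x1 + r1*x2 is invariant under the update, so if both actions
# converge to a common limit L, then (r1 + r2) * L = r2*k1 + r1*k2:
def floating_limit(k1, k2, r1, r2):
    return (r2 * k1 + r1 * k2) / (r1 + r2)
```

The smaller an agent's reciprocation coefficient, the larger the weight of her kindness in the limit, matching the inverse proportionality stated above; the difference $x_1 - x_2$ shrinks by the factor $1 - r_1 - r_2$ each round, which is consistent with the alternating behavior described for coefficients summing to more than $1$.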
\ifthenelse{\NOT\equal{IJCAI16}{IJCAI16}}{ The behavior of the actions in the process depends on the sum of the reciprocation coefficients. If the sum is at most $1$, then both actions converge monotonically to the common limit, while otherwise, the action with the larger weight becomes smaller at a time slot when both agents act and the actions stay in the same relative order when only a single agent acts. So, when agents are not extremely cooperative, then the kinder agent acts stronger all the time, but if the agents strongly reciprocate to the other's behavior, then the agent whose actions' weights are bigger switches each time when both act. It is remarkable that when both agents act at all the time slots, then for any given pair of the parameters, the relative positions of the weights alternate at a given step if and only if the positions alternate at all the steps. }{ }
For two agents, when one agent acts according to the \name{fixed} reciprocation and the other one according to the \name{floating} one, both actions converge to the kindness of the agent who employs \name{fixed} reciprocation. This can be intuitively explained as a result of one agent always considering her kindness in determining the next action and thereby having a firm stance, while the other agent aligns himself with her.
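A sketch of the mixed case, synchronous and with hypothetical parameters: at a joint fixed point the floating equation forces $x_2 = x_1$, and the fixed equation then forces $x_1 = k_1$, so both actions converge to the kindness of the fixed agent.

```python
# Agent 1 uses the "fixed" attitude, agent 2 the "floating" one,
# both acting synchronously at every time slot.
def simulate_mixed(k1, k2, r1, r2, steps=500):
    x1, x2 = k1, k2  # imp_i(0) = k_i
    for _ in range(steps):
        x1, x2 = (1 - r1) * k1 + r1 * x2, (1 - r2) * x2 + r2 * x1
    return x1, x2
```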
\ifthenelse{\NOT\equal{IJCAI16}{IJCAI16}}{ During the process, we know what happens only if the sum of the reciprocation coefficients is at most $1$. In this case, both actions are monotonic from some time on, and agent $2$'s action is always at least as large as agent $1$'s corresponding action. }{ }
In~Example~\ref{ex:colleages} with two colleagues, the colleague who ignores her inherent inclination and remembers only the last moves will come to behave like the colleague who constantly considers her kindness. Another conclusion is that once the numerical parameters are set, total reciprocation is maximized when the kinder agent employs the \name{fixed} attitude and the other one employs the \name{floating} attitude.
When an agent may interact with any number of agents, we have proven convergence and shown that if all agents employ \name{floating} reciprocation, the limit is common. This limit is a weighted average of the kindness values, the weight of an agent's kindness being her degree in the interaction graph divided by the sum of her reciprocation coefficients. Intuitively, the agents align to each other, and the more connected and the less reciprocating an agent is, the more it influences the common limit. }
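The network claim can also be checked numerically. The sketch below is ours, not the paper's: it assumes all agents act synchronously, keeps one action $x_{i,j}$ per ordered pair of neighbors (initialized at $i$'s kindness), and takes $\got_i(t-1)$ to be the total action $i$ received in the previous round.

```python
# Synchronous network case with all agents "floating".
# x is a dict mapping the ordered pair (i, j) to i's action towards j;
# r[i] and rp[i] are i's two reciprocation coefficients (r_i and r'_i).
def network_floating_step(x, nbrs, r, rp):
    new = {}
    for i, js in nbrs.items():
        got = sum(x[j, i] for j in js) / len(js)  # got_i / |N(i)|
        for j in js:
            new[i, j] = ((1 - r[i] - rp[i]) * x[i, j]
                         + r[i] * x[j, i] + rp[i] * got)
    return new

def network_floating_limit(nbrs, k, r, rp):
    # Weight of k[i] is degree(i) / (r[i] + rp[i]), as stated above.
    w = {i: len(js) / (r[i] + rp[i]) for i, js in nbrs.items()}
    return sum(w[i] * k[i] for i in w) / sum(w.values())
```

The stated limit follows because $\sum_{(i,j)} x_{i,j}/(r_i + r'_i)$ is preserved by this update, so the initial value $\sum_i \deg(i)\, k_i/(r_i + r'_i)$ pins down the common limit.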
In Example~\ref{ex:colleages} with the parameters from the end of \sectnref{Sec:formal_model} (all the agents employ \name{floating} reciprocation), \eqnsref{eq:all_float_lim} implies that all the actions approach~$25/52$ in the limit, meaning that all the colleagues support each other emotionally a lot.
\begin{comment} Some of the above conclusions of our theoretical analysis are similar to results found by practitioners.
Breitman and Hatch~\cite[\chapt{1}]{BreitmanHatch2000} advise a person who wants to reject an undesired suggestion not to explain the reasons for the rejection at length, but rather to just rephrase what has been said.
Repeating your rejection consistently is also recommended by Ury~\cite[Chapter~$8$]{Ury2007}. In our model, actions model the suggestions of a person to the other one, and this advice is similar to our conclusion that to maximize her own utility, the kinder agent should reciprocate in the \name{fixed} attitude, if she may choose between \name{fixed} and \name{floating} reciprocation; and if she chooses the reciprocation coefficient, then she should set it to~$0$. The remaining question is whether one's own actions are indeed not too costly in this case.
The $\beta_i$ here is indeed quite low, since saying something is not costly, and the received action is the important part.
Breitman and Hatch also recommend to have formulated own principles of spending resources. This may be related to the above mentioned result that
a kinder agent should choose \name{fixed} reciprocation.
\end{comment}
In addition to predicting the development of reciprocal interactions, our results explain why persistent agents have more influence on the interaction. An expression of the converged behavior is that while growing up, people acquire their own style of reciprocating with acquaintances~\cite{RobertsWaltonViechtbauer2006}. In organizations, many styles are often very similar from person to person, forming organizational cultures~\cite{Hofstede1980}.
\begin{comment} They vary, because, probably, not all people employ \name{floating} reciprocation, and the model itself is not perfect for the real life. Using further experimental research to improve the model is an important direction for further research. \end{comment}
We saw in theory, and we know from everyday life, that the reciprocation process may seem confusing, but the exponential convergence promises that the confusion will be short-lived. Admittedly, the exponential convergence can sometimes be slow, as observed in the left graph in \figref{fig:act_time_fi_fl_fl_A_1_2_3_3_1_5}, but mostly the process converges quickly.
Another important conclusion is that employing \name{floating} reciprocation makes us achieve equality. In the synchronous case, to achieve a common limit it is also enough for all the \name{fixed} agents to have the same kindness.
We also show that if all agents employ \name{floating} reciprocation and act synchronously, then the influence of an agent is proportional to her number of neighbors and inversely proportional to her tendency to reciprocate, that is, proportional to her stability.
We prove that in the synchronous case, the limit is either a linear combination of the kindnesses of all the \name{fixed} agents or, if all the agents are \name{floating}, a linear combination of the kindnesses of all the agents. Thus, an agent's kindness either influences nothing or enters as a linear factor, thereby enabling a very eager agent to influence the limits arbitrarily, by adopting the \name{fixed} attitude and the appropriate kindness.
As we see in examples, real situations may require more complex modeling, motivating further research.
For instance, modeling interactions with a known finite time horizon would be interesting.
Since people may change while reciprocating, modeling changes in the reciprocity coefficients and/or reciprocation attitude is important. In addition, groups of colleagues and nations get and lose people, motivating modeling a dynamically changing set of reciprocating agents. Even with the same set of agents, the interaction graph may change as people move around.
\ifthenelse{\NOT \equal{IJCAI16}{IJCAI16}}{ We have already presented some similar ideas from the negotiation realm; therefore, we expect that bridging our work with negotiation can yield many more insights. }{ } We study interaction processes where agents reciprocate with some given parameters, and show that maximizing $L$ would require extreme values of reciprocation coefficients. To predict real situations better and to be able to give constructive advice about what parameters and attitudes of the agents are useful, we should define utility functions to the agents and consider the game where agents choose their own parameters before the interaction commences. This is hard, but people are able to change their behavior. \ifthenelse{\equal{IJCAI16}{IJCAI16}}{ }{ The agents' strategic behavior may come at cost with respect to the social welfare, so considering price of anarchy~\cite{KP99} and stability~\cite{AnshelevichDasGuptaKleinbergTardosWexlerRoughgarden04} of such a game is in order. } Considering how to influence agents to change their behavior is also relevant.
Although it seems extremely hard, it would be worthwhile to consider our model in the light of a game-theoretic model of an extensive-form game, such as that of~\cite{DufwenbergKirchsteiger2004}.
We used others' research, based on real data, as a basis for the model; actually evaluating the model on relevant data, such as arms-race actions, may be enlightening.
An agent could have different kindness values towards different agents, to represent her prejudgement. Another extension would be to allow the same action to be perceived differently by various agents. A system of agents who have both a \name{fixed} and a \name{floating} component would also be interesting to analyze.
\anote{Many more directions can be indicated, such as: who is important to influence in the network, evolutionary game theory with overtaking strategies, probabilistic reaction, etc.}
Analytical and simulation-based analysis of the reciprocation process allows estimating whether an interaction will be profitable to a given agent, and lays the foundation for further modeling and analysis of reciprocation, in order to anticipate and improve the individual utilities and the social welfare.
\subsubsection*{Acknowledgments.}
This work has been supported by the project SHINE, the flagship project of DIRECT (Delft Institute for Research on ICT at Delft University of Technology).
\end{document}
Difference between actual position of electron and Radial Distribution Probability
It's known that the radius of maximum probability of the 2s orbital is larger than that of the 2p orbital. It means that the maximum of the probability of finding an electron in a 2s orbital is further away from the nucleus than that of 2p. This means 2p electron are mostly found nearer to atom than the 2s atom. Well! thats totally wrong.
So I want to know the factors due to which the 2p orbital lies further from the nucleus than the 2s one.
physical-chemistry atoms quantum-chemistry orbitals
Siddharth Yadav
Ok, let's start out by looking at the hydrogenic atom. The wave function $\Psi(r, \theta, \varphi)$ can be split into a radial part $R_{n,\ell}(r)$ and an angular part $Y_{\ell,m} (\theta, \varphi )$, so that $\Psi(r, \theta, \varphi) = R_{n,\ell}(r) Y_{\ell,m} (\theta, \varphi )$, where $n$, $\ell$ and $m$ are the Principal, Azimuthal and Magnetic quantum number, respectively, and the functions $R_{n,\ell}(r)$ are the solutions of the radial Schroedinger equation
\begin{equation} \bigg( \frac{ - \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \mathrm{d}^{2} }{ \mathrm{d} r^{2} } + \frac{ \hbar^{2} }{ 2 m_{\mathrm{e}} } \frac{ \ell (\ell + 1) }{ r^{2} } - \frac{ Z e^{2} }{ r } - E \bigg) r R_{n,\ell}(r) = 0 \end{equation}
with the nuclear charge $Z$ and the mass of an electron $m_{\mathrm{e}}$.
The probability of finding the electron in the hydrogen-like atom, with the distance $r$ from the nucleus between $r$ and $r + \mathrm{d}r$, with angle $\theta$ between $\theta$ and $\theta + \mathrm{d}\theta$, and with the angle $\varphi$ between $\varphi$ and $\varphi + \mathrm{d}\varphi$ is
\begin{align} | \Psi(r, \theta, \varphi) |^{2} \, \mathrm{d} \tau = \left[R_{n,\ell}(r) \right]^{2} \left|Y_{\ell,m} (\theta, \varphi ) \right|^{2} r^{2} \sin \theta \, \mathrm{d}r \, \mathrm{d}\theta \, \mathrm{d}\varphi \ . \end{align}
To find the probability $D_{n, \ell}(r) \, \mathrm{d}r$ that the electron is between $r$ and $r + \mathrm{d}r$ regardless of the direction, you integrate over the angles $\theta$ and $\varphi$ to obtain
\begin{align} D_{n, \ell}(r) \, \mathrm{d}r &= r^{2} \left[R_{n,\ell}(r) \right]^{2} \, \mathrm{d}r \underbrace{\int_{0}^{\pi} \! \! \int_{0}^{2\pi} \left|Y_{\ell,m} (\theta, \varphi ) \right|^{2} \sin \theta \, \mathrm{d}\theta \, \mathrm{d}\varphi}_{= \, 1} \\ &= r^{2} \left[R_{n,\ell}(r) \right]^{2} \, \mathrm{d}r \ . \end{align}
Since the spherical harmonics are normalized, the value of the double integral is unity.
The radial distribution function $D_{n, \ell}(r)$ is the probability density for the electron being in a spherical shell with inner radius $r$ and outer radius $r + \mathrm{d}r$. For the $2\ce{s}$ and $2\ce{p}$ states, these functions are
\begin{align} D_{2, 0}(r) &= \frac{1}{8} \left( \frac{Z}{a_{0}} \right)^{3} r^{2} \left( 2 - \frac{Z r}{a_{0}} \right)^{2} \mathrm{e}^{-Zr/a_{0}} \\ D_{2, 1}(r) &= \frac{1}{24} \left( \frac{Z}{a_{0}} \right)^{5} r^{4} \mathrm{e}^{-Zr/a_{0}} \ , \end{align}
where $a_{0}$ is the Bohr radius. The graphs of these two functions can be seen here:
The most probable value $r_{\mathrm{mp}}$ of $r$ is found by setting the derivative of $D_{n, \ell}(r)$ with respect to $r$ equal to zero. From the graph above it can be seen that $r_{\mathrm{mp}}(2\ce{p})$ is smaller than $r_{\mathrm{mp}}(2\ce{s})$ (and if you calculate it you can check that for yourself).
But $r_{\mathrm{mp}}$ is not the quantity you are actually interested in, because although $r_{\mathrm{mp}}$ gives you the most probable distance of the electron from the nucleus, what you want to know is the average distance $\langle r \rangle$ between the electron and the nucleus, as that is the distance you will get when you measure $r$ repeatedly and average over your results. The radial distribution functions may be used to calculate the expectation values of functions of the radial variable $r$. You can get them via
\begin{align} \langle r \rangle_{n, \ell} &= \int_{0}^{\infty} r D_{n, \ell}(r) \, \mathrm{d} r \ . \end{align}
If you evaluate this integral for the $2\ce{s}$ and $2\ce{p}$ orbitals you get
\begin{align} \langle r \rangle_{2\ce{s}} = \langle r \rangle_{2, 0} &= \frac{6 a_{0}}{ Z } \\ \langle r \rangle_{2\ce{p}} = \langle r \rangle_{2, 1} &= \frac{5 a_{0}}{ Z } \ . \end{align}
You find that the $2\ce{s}$ electrons are on average further away from the nucleus than the $2\ce{p}$ electrons.
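As a numerical sanity check (not part of the original answer), the stated averages and the most probable radii can be recovered from the distribution functions above, assuming $Z = 1$ and $a_0 = 1$:

```python
# Hydrogen-like radial distribution functions for 2s and 2p (Z = a0 = 1):
#   D_2s = r^2/8 (2 - r)^2 e^{-r},   D_2p = r^4/24 e^{-r}.
import math

def D_2s(r):
    return r**2 / 8.0 * (2.0 - r)**2 * math.exp(-r)

def D_2p(r):
    return r**4 / 24.0 * math.exp(-r)

def trapz(f, a, b, n=60000):
    """Plain trapezoidal integration of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# <r> = ∫ r D(r) dr; the tail beyond r = 60 is negligible.
mean_r_2s = trapz(lambda r: r * D_2s(r), 0.0, 60.0)  # ≈ 6
mean_r_2p = trapz(lambda r: r * D_2p(r), 0.0, 60.0)  # ≈ 5

# Most probable radii, located by a coarse grid search:
grid = [i * 1e-3 for i in range(1, 15000)]
r_mp_2s = max(grid, key=D_2s)  # ≈ 3 + sqrt(5) ≈ 5.236
r_mp_2p = max(grid, key=D_2p)  # ≈ 4
```

So $r_{\mathrm{mp}}(2\ce{p}) < r_{\mathrm{mp}}(2\ce{s})$ while $\langle r \rangle_{2\ce{p}} < \langle r \rangle_{2\ce{s}}$, exactly as stated above.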
So, for the hydrogenic atom your claim that $2\ce{p}$ electrons are further away from the nucleus than the $2\ce{s}$ electrons is wrong. But in real atoms you have more than one electron and those electrons will influence each other. One important effect here is that the inner electrons (or electron density) screen nuclear charge from the outer electrons. Thus the outer electrons feel a lower effective nuclear charge $Z^{\mathrm{eff}}$ than the inner electrons. The extent of this screening is distance dependent, so $Z^{\mathrm{eff}} = Z^{\mathrm{eff}} (r)$. Now, if you look back at the formula for $\langle r \rangle_{n, \ell}$ things will certainly change since $D_{n, \ell}(r)$ will have a different form and the additional $r$-dependence of $Z^{\mathrm{eff}}$ will make things more difficult. Assuming that the general form of $D_{n, \ell}(r)$ will be similar in the real atom, it can be expected that the parts of $D_{n, \ell}(r)$ for small $r$ now have a higher weight in the integral of $\langle r \rangle_{n, \ell}$ than before, since these parts of the electron density suffer less from the screening effect. Since $2\ce{s}$ electrons have a small portion of the electron density that lies very close to the nucleus ("penetrates deep into the core region") and $2\ce{p}$ electrons don't, this will lead to the observation that $2\ce{p}$ electrons are on average further away from the nucleus than the $2\ce{s}$ electrons.
A more involved explanation of the effects at work in multi-electron atoms can be found at this answer of mine. The important bit starts at the paragraph "Why do states with the same $n$ but lower $\ell$ values have lower energy eigenvalues?"
Philipp
This means 2p electron are mostly found nearer to atom than the 2s atom. Well! thats totally wrong.
Why is this idea totally wrong?
Let's consult a radial probability graph. Here's one for hydrogen.
This graph shows that the 2p electrons have a good probability of getting closer to the nucleus than 2s electrons.
You might be wondering why then s-electrons are lower in energy than p-electrons of the same shell. Notice the little bump in the pink line on the left end of the graph? The 2s electrons can penetrate closer to the nucleus than the 2p electrons can.
Dissenter
Yes, you're right... but again, the maximum probability of finding a 2s electron lies further from the nucleus than for the 2p one... "Maximum probability" should have some importance. According to me, most of the time the electron would be found in the region of maximum RPD, and it spends only a small time near the nucleus. – Siddharth Yadav Aug 13 '14 at 13:45
Try integrating to find the area under the radial probability curves. In Physical Chemistry by Atkins there is an example of such integration, and the author finds that the area under the 3s curve is larger than the area under the 3p curve at a set point from the radius. – Dissenter Aug 13 '14 at 13:49
Topological Methods in Nonlinear Analysis
Existence of solutions for a Kirchhoff type fractional differential equations via minimal principle and Morse theory
Vol 46, No 2 (December 2015)
Nemat Nyamoradi
Yong Zhou
https://doi.org/10.12775/TMNA.2015.061
Fractional differential equations, minimal principle, Morse theory, solutions, Critical point theory
In this paper, by using the minimal principle and Morse theory, we prove the existence of solutions to the following Kirchhoff type fractional differential equation: \begin{equation*} \begin{cases} M \big(\int_{\mathbb{R}} (|{}_{- \infty} D_t^\alpha u (t)|^2 + b (t) |u(t)|^2 )\, d t\big) \cdot ({}_tD_\infty^{\alpha} ({}_{- \infty} D_t^\alpha u (t) ) + b(t) u (t)) = f (t, u (t)), & t \in \mathbb{R},\\ u \in H^\alpha (\mathbb{R}), \end{cases} \end{equation*} where $\alpha \in ({1}/{2},1)$, ${}_tD_\infty^{\alpha}$ and ${}_{- \infty} D_t^\alpha$ are the right and left inverse operators of the corresponding Liouville--Weyl fractional integrals of order $\alpha$ respectively, $H^\alpha$ is the classical fractional Sobolev space, $u \in \mathbb{R}$, $b \colon \mathbb{R} \to \mathbb{R}$ with $\inf\limits_{t \in \mathbb{R}} b (t) \ge 0$, $f \colon \mathbb{R}\times \mathbb{R} \to \mathbb{R}$ is a Carath\'eodory function, and $M\colon \mathbb{R}^+ \to \mathbb{R}^+$ is a function that satisfies some suitable conditions.
\begin{definition}[Definition:Monotone (Order Theory)/Sequence/Real Sequence]
Let $\sequence {x_n}$ be a sequence in $\R$.
Then $\sequence {x_n}$ is '''monotone''' {{iff}} it is either increasing or decreasing.
\end{definition}
\begin{document}
\title[Graph algebras, Exel-Laca algebras, and ultragraph algebras] {Graph algebras, Exel-Laca algebras, and ultragraph algebras coincide up to Morita equivalence}
\author{Takeshi Katsura} \address{Takeshi Katsura, Department of Mathematics\\ Keio University\\ Yokohama, 223-8522\\ JAPAN} \email{[email protected]}
\author{Paul S. Muhly} \address{Paul S.~Muhly, Department of Mathematics\\ University of Iowa\\ Iowa City\\ IA 52242-1419\\ USA} \email{[email protected]}
\author{Aidan Sims} \address{Aidan Sims, School of Mathematics and Applied Statistics\\ University of Wollongong\\ NSW 2522\\ AUSTRALIA} \email{[email protected]}
\author{Mark Tomforde} \address{Mark Tomforde \\ Department of Mathematics\\ University of Houston\\ Houston \\ TX 77204-3008\\ USA} \email{[email protected]}
\thanks{ The first author was supported by the Fields Institute, the second author was supported by NSF Grant DMS-0070405, the third author was supported by the Australian Research Council, and the fourth author was supported by an internal grant from the University of Houston Mathematics Department.}
\date{September 1, 2008; minor revisions December 7, 2008} \subjclass[2000]{Primary 46L55}
\keywords{$C^*$-algebras, graph algebras, Exel-Laca algebras, ultragraph algebras, Morita equivalence}
\begin{abstract} We prove that the classes of graph algebras, Exel-Laca algebras, and ultragraph algebras coincide up to Morita equivalence. This result answers the long-standing open question of whether every Exel-Laca algebra is Morita equivalent to a graph algebra. Given an ultragraph $\mathcal{G}$ we construct a directed graph $E$ such that $C^*(\mathcal{G})$ is isomorphic to a full corner of $C^*(E)$. As applications, we characterize real rank zero for ultragraph algebras and describe quotients of ultragraph algebras by gauge-invariant ideals. \end{abstract}
\maketitle
\section{Introduction}
In 1980 Cuntz and Krieger introduced a class of $C^*$-algebras associated to finite matrices \cite{CK}. Specifically, if $A$ is an $n \times n$ $\{0,1\}$-matrix with no zero rows, then the Cuntz-Krieger algebra $\mathcal{O}_A$ is generated by partial isometries $S_1, \dots , S_n$ such that $S_i^*S_i = \sum_{A(i,j) =1} S_jS_j^*$. Shortly thereafter Enomoto, Fujii, and Watatani \cite{EW, FW, Wat} observed that Cuntz and Krieger's algebras could be described very naturally in terms of finite directed graphs. Given a finite directed graph $E$ in which every vertex emits at least one edge, the corresponding $C^*$-algebra $C^*(E)$ is generated by mutually orthogonal projections $P_v$ associated to the vertices and partial isometries $S_e$ associated to the edges such that $S_e^*S_e = P_{r(e)}$ and $P_v = \sum_{s(e)=v} S_eS_e^*$, where $r(e)$ and $s(e)$ denote the range and source of an edge $e$.
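For instance — a standard illustration, not taken from this paper — the all-ones $2 \times 2$ matrix $A$ gives the Cuntz algebra $\mathcal{O}_2 = \mathcal{O}_A$, and the corresponding graph $E$ has a single vertex $v$ with two loop edges $e$ and $f$. The graph relations then read

```latex
S_e^* S_e = S_f^* S_f = P_v, \qquad P_v = S_e S_e^* + S_f S_f^*,
```

which, under the identification $S_e \mapsto S_1$ and $S_f \mapsto S_2$, recover exactly the Cuntz-Krieger relations $S_i^* S_i = S_1 S_1^* + S_2 S_2^*$ for $i = 1, 2$.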
Attempting to generalize the theory of Cuntz-Krieger algebras to countably infinite generating sets resulted in two very prominent classes of $C^*$-algebras: graph $C^*$-algebras and Exel-Laca algebras. To motivate our results we briefly describe each of these classes. The key issue for both generalizations is that infinite sums of projections, which a naive approach would suggest, cannot converge in norm.
To generalize graph $C^*$-algebras to infinite graphs, the key modification is to require the relation $P_v = \sum_{s(e)=v} S_eS_e^*$ to hold only when the sum is finite and nonempty. This theory has been explored extensively in many papers (see \cite{KPR, KPRR, BPRS2000, FLR} for seminal results, and \cite{Raeburn2005} for a survey). Graph $C^*$-algebras include many $C^*$-algebras besides the Cuntz-Krieger algebras; in particular, graph $C^*$-algebras include the Toeplitz algebra, continuous functions on the circle, all finite-dimensional $C^*$-algebras, many AF-algebras, many purely infinite simple $C^*$-algebras, and many Type I $C^*$-algebras. From a representation-theoretic point of view, the class of graph $C^*$-algebras is broader still: every AF-algebra is Morita equivalent to a graph $C^*$-algebra, and any Kirchberg algebra with free $K_1$-group is Morita equivalent to a graph $C^*$-algebra.
The approach taken for Exel-Laca algebras is to allow the matrix $A$ to be infinite. Here rows containing infinitely many nonzero entries lead to an infinite sum of projections, which does not give a sensible relation. However, Exel and Laca observed that even when rows of the matrix contain infinitely many nonzero entries, formal combinations of the Cuntz-Krieger relations can result in relations of the form $\prod_{i \in X} S^*_i S_i \prod_{j \in Y} (1 - S^*_j S_j) = \sum_{k \in Z} S_kS_k^*$, where $X$, $Y$, and $Z$ are all finite. It is precisely these finite relations that are imposed in the definition of the Exel-Laca algebra. As with the graph $C^*$-algebras, the Exel-Laca algebras include many classes of $C^*$-algebras in addition to the Cuntz-Krieger algebras, and numerous authors have studied their structure \cite{EL, EL2, RS, Szy4}.
Without too much effort, one can show that neither the class of graph $C^*$-algebras nor the class of Exel-Laca algebras is a subclass of the other. Specifically, there exist graph $C^*$-algebras that are not isomorphic to any Exel-Laca algebra \cite[Proposition A.16.2]{Tom-thesis}, and there exist Exel-Laca algebras that are not isomorphic to any graph $C^*$-algebra \cite[Example~4.2 and Remark~4.4]{RS}. This shows, in particular, that there is merit in studying both classes and that results for one class are not special cases of results for the other. It also begs the question, ``How different are the classes of graph $C^*$-algebras and Exel-Laca algebras?" Although each contains different isomorphism classes of $C^*$-algebras, a natural follow-up question is to ask about Morita equivalence. Specifically, \vskip2ex \noindent \textsc{Question 1:} Is every graph $C^*$-algebra Morita
equivalent to an Exel-Laca algebra? \vskip2ex \noindent \textsc{Question 2:} Is every Exel-Laca algebra Morita
equivalent to a graph $C^*$-algebra? \vskip2ex
While the question of isomorphism is easy to sort out, the Morita equivalence questions posed above are much more difficult. Question~1 was answered in the affirmative by Fowler, Laca, and Raeburn in \cite{FLR}. In particular, if $C^*(E)$ is a graph $C^*$-algebra, then one may form a graph $\tilde{E}$ with no sinks or sources, by adding tails to the sinks of $E$ and heads to the sources of $E$. A standard argument shows that $C^*(\tilde{E})$ is Morita equivalent to $C^*(E)$ (see \cite[Lemma~1.2]{BPRS2000}, for example), and Fowler, Laca, and Raeburn proved that the $C^*$-algebra of a graph with no sinks and no sources is isomorphic to an Exel-Laca algebra \cite[Theorem~10]{FLR}.
On the other hand, Question~2 has remained an open problem for nearly a decade. The various invariants calculated for graph $C^*$-algebras and Exel-Laca algebras have not been able to discern any Exel-Laca algebras that are not Morita equivalent to a graph $C^*$-algebra. For example, the attainable $K$-theories for both classes are the same: all countable free abelian groups arise as $K_1$-groups together with all countable abelian groups as $K_0$-groups (see \cite{Szy2} and \cite{EL2}). Nevertheless, up to this point there has been no method for constructing a graph $E$ from a matrix $A$ so that $C^*(E)$ is Morita equivalent to $\mathcal{O}_A$.
In this paper we provide an affirmative answer to Question~2 using a generalization of a graph known as an ultragraph. Ultragraphs and the associated $C^*$-algebras were introduced by the fourth author to unify the study of graph $C^*$-algebras and Exel-Laca algebras \cite{Tom, Tom2}. An ultragraph is a generalization of a graph in which the range of an edge is a (possibly infinite) set of vertices, rather than a single vertex. The ultragraph $C^*$-algebra is then determined by generators satisfying relations very similar to those for graph $C^*$-algebras (see Section~\ref{sec:prelims}). The fourth author has shown that every graph algebra is isomorphic to an ultragraph algebra \cite[Proposition~3.1]{Tom}, every Exel-Laca algebra is isomorphic to an ultragraph algebra \cite[Theorem~4.5, Remark~4.6]{Tom}, and moreover there are ultragraph algebras that are not isomorphic to any graph algebra and are not isomorphic to any Exel-Laca algebra \cite[\S 5]{Tom2}. Thus the class of ultragraph algebras is strictly larger than the union of the two classes of Exel-Laca algebras and of graph algebras.
In addition to providing a framework for studying graph algebras and Exel-Laca algebras simultaneously, ultragraph algebras also give an alternate viewpoint for studying Exel-Laca algebras. In particular, if $A$ is a (possibly infinite) $\{0,1\}$-matrix, and if we let $\mathcal{G}_A$ be the ultragraph with edge matrix $A$, then the Exel-Laca algebra $\mathcal{O}_A$ is isomorphic to the ultragraph algebra $C^*(\mathcal{G}_A)$. In much of the seminal work done on Exel-Laca algebras \cite{EL, EL2, Szy4}, the structure of the $C^*$-algebra $\mathcal{O}_A$ is related to properties of the infinite matrix $A$ (see \cite[\S 2--\S 10]{EL} and also \cite[Definition~4.1 and Theorem~4.5]{EL2}) as well as properties of an associated graph $\textrm{Gr} (A)$ with edge matrix $A$ (see \cite[Definition~10.5, Theorem~13.1, Theorem~14.1, and Theorem~16.2]{EL} and \cite[Theorem~8]{Szy4}). Unfortunately, these correspondences are often of limited use since the properties of the matrix can be difficult to visualize, and the graph $\textrm{Gr} (A)$ does not entirely reflect the structure of the Exel-Laca algebra $\mathcal{O}_A$ (see \cite[Example~3.14 and Example~3.15]{Tom2}). Another approach is to represent properties of the Exel-Laca algebra in terms of the ultragraph $\mathcal{G}_A$ \cite{Tom, Tom2}. This is a useful technique because it gives an additional way to look at properties of Exel-Laca algebras, the ultragraph $\mathcal{G}_A$ reflects much of the fine structure of the Exel-Laca algebra $\mathcal{O}_A$, and furthermore the interplay between the ultragraph and the associated $C^*$-algebra has a visual nature similar to what occurs with graphs and graph $C^*$-algebras.
In this paper we prove that the classes of graph algebras, Exel-Laca algebras, and ultragraph algebras coincide up to Morita equivalence. This provides an affirmative answer to Question~2 above, and additionally shows that no new Morita equivalence classes are obtained in the strictly larger class of ultragraph algebras. Given an ultragraph $\mathcal{G}$ we build a graph $E$ with the property that $C^*(\mathcal{G})$ is isomorphic to a full corner of $C^*(E)$. Combined with other known results, this shows our three classes of $C^*$-algebras coincide up to Morita equivalence. Since our construction is concrete, we are also able to use graph algebra results to analyze the structure of ultragraph algebras. In particular, we characterize real rank zero for ultragraph algebras, and describe the quotients of ultragraph algebras by gauge-invariant ideals. Of course, these structure results also give corresponding results for Exel-Laca algebras as special cases, and these results are new as well. In addition, our construction implicitly gives a method for taking a $\{0,1\}$-matrix $A$ and forming a graph $E$ with the property that the Exel-Laca algebra is isomorphic to a full corner of the graph $C^*$-algebra $C^*(E)$ (see Remark~\ref{rmk:E-L-full-corn}).
It is interesting to note that we have used ultragraphs to answer Question~2, even though this question is intrinsically only about graph $C^*$-algebras and Exel-Laca algebras. Indeed it is difficult to see how to answer Question~2 without at least implicit recourse to ultragraphs. This provides additional evidence that ultragraphs are a useful and natural tool for exploring the relationship between Exel-Laca algebras and graph $C^*$-algebras.
This paper is organized as follows. After some preliminaries in Section~\ref{sec:prelims}, we describe our construction in Section~\ref{graph-sec} and explain how to build a graph $E$ from an ultragraph $\mathcal{G}$. Since this construction is somewhat involved, we also provide a detailed example for a particular ultragraph at the end of this section. In Section~\ref{Path-sec} we analyze the path structure of the graph constructed by our method. In Section~\ref{corner-sec} we show that there is an isomorphism $\phi$ from the ultragraph algebra $C^*(\mathcal{G})$ to a full corner $P C^*(E) P$ of $C^*(E)$, and we use this result to show that an ultragraph algebra $C^*(\mathcal{G})$ has real rank zero if and only if $\mathcal{G}$ satisfies Condition~(K). In Section~\ref{ideal-sec}, we prove that the induced bijection $I \mapsto C^*(E) \phi(I) C^*(E)$ restricts to a bijection between gauge-invariant ideals of $C^*(\mathcal{G})$ and gauge-invariant ideals of $C^*(E)$. In Section~\ref{quotient-sec}, we complete the description of the gauge-invariant ideal structure of ultragraph algebras commenced in \cite{KMST} by describing the quotient of an ultragraph algebra by a gauge-invariant ideal.
\section{Preliminaries} \label{sec:prelims}
For a set $X$, let $\mathcal{P}(X)$ denote the collection of all subsets of $X$. We recall from \cite{Tom} the definitions of an ultragraph and of a Cuntz-Krieger family for an ultragraph.
\begin{definition}(\cite[Definition~2.1]{Tom}) An \emph{ultragraph} $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$ consists of a countable set of vertices $G^0$, a countable set of edges $\mathcal{G}^1$, and functions $s\colon \mathcal{G}^1\rightarrow G^0$ and $r\colon \mathcal{G}^1 \rightarrow \mathcal{P}(G^0)\setminus\{\emptyset\}$. \end{definition}
The original definition of a Cuntz-Krieger family for an ultragraph $\mathcal{G}$ appears as \cite[Definition~2.7]{Tom}. However, for our purposes, it will be more convenient to work with the Exel-Laca $\mathcal{G}$-families of \cite[Definition~3.3]{KMST}. To give this definition, we first recall that for finite subsets $\lambda$ and $\mu$ of $\mathcal{G}^1$, we define \[ r(\lambda,\mu) := \bigcap_{e \in \lambda} r(e) \setminus \bigcup_{f \in \mu} r(f)\in \mathcal{P}(G^0). \]
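As a small illustration of this notation (in a hypothetical ultragraph), if $e, f \in \mathcal{G}^1$ satisfy $r(e) = \{u,v\}$ and $r(f) = \{v,w\}$, then
\[
r(\{e\},\{f\}) = r(e) \setminus r(f) = \{u\}, \qquad r(\{e,f\},\emptyset) = r(e) \cap r(f) = \{v\}.
\]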
\begin{definition}\label{dfn:Cond(EL)} Let $\mathcal{G} =(G^0, \mathcal{G}^1, r,s)$ be an ultragraph. A collection of projections $\{p_v : v\in G^0\}$ and $\{q_e : e \in \mathcal{G}^1\}$ is said to satisfy {\em Condition (EL)} if the following hold: \begin{enumerate} \item the elements of $\{p_v : v\in G^0\}$ are pairwise
orthogonal, \item the elements of $\{q_e : e \in \mathcal{G}^1\}$ pairwise
commute, \item $p_v q_e= p_v$ if $v\in r(e)$, and $p_v q_e=0$ if $v
\notin r(e)$, \item $\prod_{e\in \lambda}q_e\prod_{f\in \mu}(1-q_f)
=\sum_{v\in r(\lambda,\mu)}p_v$ for all finite subsets
$\lambda,\mu$ of $\mathcal{G}^1$ such that $\lambda\cap
\mu=\emptyset$, $\lambda\neq\emptyset$ and
$r(\lambda,\mu)$ is finite. \end{enumerate} \end{definition}
Given an ultragraph $\mathcal{G}$, we write $G^0_{\textnormal{rg}}$ for the set $\{v \in G^0 : s^{-1}(v)\text{ is finite and nonempty}\}$ of \emph{regular vertices} of $\mathcal{G}$.
\begin{definition}\label{dfn:EL-G-fam} For an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$, an \emph{Exel-Laca $\mathcal{G}$-family} is a collection of projections $\{p_v : v\in G^0\}$ and partial isometries $\{s_e : e \in \mathcal{G}^1\}$ with mutually orthogonal final projections for which \begin{enumerate} \item the collection $\{p_v : v\in G^0\} \cup \{s_e^*s_e :
e\in \mathcal{G}^1\}$ satisfies Condition~(EL), \item $s_es_e^* \leq p_{s(e)}$ for all $e \in \mathcal{G}^1$, \item $p_v = \sum_{s(e) = v} s_es_e^*$ for $v\in
G^0_{\textnormal{rg}}$. \end{enumerate} \end{definition}
Our use of the notation $\mathcal{G}^0$ in what follows will also be in keeping with \cite{KMST} rather than with \cite{Tom, Tom2}.
By a \emph{lattice} in $\mathcal{P}(X)$, we mean a collection of subsets of $X$ which is closed under finite intersections and unions. By an \emph{algebra} in $\mathcal{P}(X)$, we mean a lattice in $\mathcal{P} (X)$ which is closed under taking relative complements. As in \cite{KMST}, we denote by $\mathcal{G}^0$ the smallest algebra in $\mathcal{P}(G^0)$ which contains both $\{\{v\} : v \in G^0\}$ and $\{r(e) : e \in \mathcal{G}^1\}$ (by contrast, in \cite{Tom, Tom2}, $\mathcal{G}^0$ denotes the smallest lattice in $\mathcal{P}(G^0)$ containing these sets). A \emph{representation} of an algebra $\mathfrak{A}$ in a $C^*$-algebra $B$ is a collection $\{p_A : A \in \mathfrak{A}\}$ of mutually commuting projections in $B$ such that $p_{A \cap B} = p_A p_B$, $p_{A \cup B} = p_A + p_B - p_{A \cap B}$ and $p_{A \setminus B} = p_A - p_{A \cap B}$ for all $A,B \in \mathfrak{A}$.
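We note two consequences that follow directly from these relations: taking $B = A$ gives $p_\emptyset = p_{A \setminus A} = p_A - p_A = 0$, and if $A \subset B$ then
\[
p_B - p_A = p_B - p_{B \cap A} = p_{B \setminus A} \geq 0,
\]
so that $A \mapsto p_A$ is monotone.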
Given an Exel-Laca $\mathcal{G}$-family $\{p_v : v\in G^0\}$, $\{s_e : e \in \mathcal{G}^1\}$ in a $C^*$-algebra $B$, \cite[Proposition~3.4]{KMST} shows that there is a unique representation $\{p_A : A \in \mathcal{G}^0\}$ of $\mathcal{G}^0$ such that $p_{r(e)} = s^*_e s_e$ for all $e \in \mathcal{G}^1$, and $p_{\{v\}} = p_v$ for all $v \in G^0$. In particular, given an Exel-Laca $\mathcal{G}$-family $\{p_v, s_e\}$, we will without comment denote the resulting representation of $\mathcal{G}^0$ by $\{p_A : A \in \mathcal{G}^0\}$; so $p_{r(e)}$ denotes $s^*_e s_e$, and $p_{\{v\}}$ and $p_v$ are one and the same.
\section{A directed graph constructed from an ultragraph} \label{graph-sec}
The purpose of this section is to construct a graph $E=(E^0,E^1,r_E,s_E)$ from an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$. Our construction involves a choice of a listing of $\mathcal{G}^1$ and of a function $\sigma$ with certain properties described in Lemma~\ref{lem:W_+ -> Delta^0}; in particular, different choices of listings and of $\sigma$ will yield different graphs. In Section~\ref{corner-sec}, we will prove that the ultragraph algebra $C^*(\mathcal{G})$ is isomorphic to a full corner of the graph algebra $C^*(E)$, regardless of the choices made.
\begin{notation} Fix $n\in\mathbb{N} = \{1,2,\ldots\}$, and $\omega\in\{0,1\}^n$. For $i=1,2,\ldots,n$, we denote by $\omega_i \in \{0,1\}$ the
$i$\textsuperscript{th} coordinate of $\omega$, and we denote $n$ by $|\omega|$.
We express $\omega$ as $(\omega_1, \omega_2, \dots, \omega_n)$. We define $(\omega, 0), (\omega, 1) \in \{0,1\}^{n+1}$ by $(\omega, 0) := (\omega_1, \omega_2, \dots, \omega_n, 0)$ and $(\omega, 1) := (\omega_1, \omega_2, \dots, \omega_n, 1)$. For $m\in\mathbb{N}$ with $m\leq n$, we define
$\omega|_m\in\{0,1\}^m$ by $\omega|_m=(\omega_1, \omega_2, \dots, \omega_m)$. The elements $(0,0,\dots, 0,0)$ and $(0,0,\dots, 0, 1)$ in $\{0,1\}^n$ are denoted by $0^n$ and $(0^{n-1},1)$, respectively. \end{notation}
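To illustrate the notation: for $\omega = (0,1,1) \in \{0,1\}^3$ we have $|\omega| = 3$, $\omega_2 = 1$, $\omega|_2 = (0,1)$, and $(\omega,0) = (0,1,1,0) \in \{0,1\}^4$.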
Let $\mathcal{G}$ be an ultragraph $(G^0, \mathcal{G}^1, r,s)$. Fix an ordering on $\mathcal{G}^1 = \{e_1, e_2, e_3, \ldots\}$. (This list may be finite or countably infinite.) Using the same notation as established in \cite[Section~2]{KMST}, we define $r(\omega):=\bigcap_{\omega_i=1}r(e_i)\setminus \bigcup_{\omega_j=0}r(e_j)\subset G^0$ for $\omega\in\{0,1\}^n\setminus \{0^n\}$, and $\Delta_n :=\big\{\omega\in \{0,1\}^n\setminus \{0^n\}:
|r(\omega)|=\infty\big\}.$ If $\omega \in \{0,1\}^n$ and $i \in \{0,1\}$ so that $\omega' := (\omega,i) \in \{0,1\}^{n+1}$, we somewhat inaccurately write $r(\omega,i)$, rather than $r((\omega,i))$, for $r(\omega')$.
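For instance, with $n = 3$ and $\omega = (1,0,1)$, the definition gives $r(\omega) = \big(r(e_1) \cap r(e_3)\big) \setminus r(e_2)$, the set of vertices lying in the ranges of $e_1$ and $e_3$ but not in the range of $e_2$; such an $\omega$ belongs to $\Delta_3$ precisely when this set is infinite.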
\begin{definition}\label{dfn:Delta0} We define $\Delta:=\bigsqcup^\infty_{n=1}\Delta_n$. So for
$\omega\in \Delta$, we have $\omega \in \Delta_{|\omega|}$. \end{definition}
\begin{remark}\label{rmk:omegai} Since $r(\omega) = r(\omega, 0) \sqcup r(\omega, 1)$ for each $\omega \in \Delta$, an element $\omega\in \{0,1\}^{n}\setminus\{0^n\}$ is in $\Delta$ if and only if at least one of the elements $(\omega, 0)$ and $(\omega, 1)$ is in $\Delta$. \end{remark}
\begin{definition}
We define $\Gamma_0 :=\{(0^n, 1) : n \ge 0, |r(0^n,1)| = \infty\} \subset \Delta$, and $\Gamma_+ := \Delta \setminus \Gamma_0$. \end{definition}
We point out that $(0^n, 1)$ means $(1)$ when $n=0$. Also, by Remark~\ref{rmk:omegai}, if $n \in \mathbb{N}$, $\omega \in \Delta_{n+1}$, and
$\omega|_n \not= 0^n$, then $\omega|_n \in \Delta_n$. Since $\omega \in \Delta_{n+1}$ satisfies $\omega|_n = 0^n$ if and only if $\omega = (0^n,1) \in \Gamma_0$, it follows that $\Gamma_+ =
\{ \omega \in \Delta: |\omega| > 1 \text{ and } \omega|_{|\omega|-1} \in \Delta \}$.
\begin{definition}\label{dfn:X} Let $W_+ := \bigcup_{\omega\in \Delta}r(\omega)\subset G^0$, and $W_0 := G^0 \setminus W_+$. \end{definition}
\begin{lemma}\label{lem:W_+=disjoint union} We have $W_+ = \bigsqcup_{\omega \in \Gamma_0} r(\omega)$. \end{lemma} \begin{proof}
For $\omega\in \Delta$, let $m(\omega) := \min\{k : \omega_k = 1\}$. Then $\omega|_{m(\omega)} \in \Gamma_0$ and $r(\omega) \subset r(\omega|_{m(\omega)})$. Thus $W_+ = \bigcup_{\omega \in \Gamma_0}r(\omega)$. Finally, the sets $r(\omega)$ and $r(\omega')$ are disjoint for distinct $\omega, \omega' \in \Gamma_0$ by definition. \end{proof}
\begin{lemma}\label{lem:W_+ -> Delta^0} There exists a function $\sigma\colon W_+\to \Delta$ such that $v\in r(\sigma(v))$ for each $v\in W_+$, and such that $\sigma^{-1}(\omega)$ is finite (possibly empty) for each $\omega\in \Delta$. \end{lemma} \begin{proof} Let \[ W_{\infty} := \{v \in W_+ : v \in r(\omega) \text{ for infinitely many $\omega \in \Delta$}\}. \] For $v \in W_+ \setminus W_\infty$, we define $\sigma(v)$ to be the element of $\Delta$ with $v \in r(\sigma(v))$ for which
$|\sigma(v)|$ is maximal. Fix an ordering $\{v_1, v_2, v_3, \dots\}$ of $W_{\infty}$, and fix $k \in \mathbb{N}$. The definition of $W_\infty$ implies that the set $N_k := \{n \ge k : v_k \in r(\omega)\text{ for some } \omega \in \Delta_n\}$ is infinite. Let $n$ denote the minimal element of $N_k$. By Lemma~\ref{lem:W_+=disjoint union}, there is a unique $\omega \in \Delta_n$ such that $v_k \in r(\omega)$. We define $\sigma(v_k) := \omega$.
By definition, we have $v\in r(\sigma(v))$ for each $v\in W_+$. Fix $\omega\in \Delta$. We must show that $\sigma^{-1}(\omega)$
is finite. The set $\sigma^{-1}(\omega)\cap W_\infty$ is finite because it is a subset of $\{v_1,v_2,\ldots,v_{|\omega|}\} \subset W_\infty$. The set $\sigma^{-1}(\omega)\cap (W_+\setminus W_\infty)$ is empty if both $(\omega, 0)$ and $(\omega, 1)$ are in $\Delta$. Otherwise the set $\sigma^{-1}(\omega)\cap (W_+\setminus W_\infty)$ coincides with the finite set $r(\omega, i)$ for $i=0$ or $1$. Thus $\sigma^{-1}(\omega)$ is finite. \end{proof}
\begin{definition} Fix a function $\sigma\colon W_+\to \Delta$ as in Lemma \ref{lem:W_+ -> Delta^0}. Extend $\sigma$ to a function $\sigma\colon G^0\to \Delta\cup\{\emptyset\}$ by setting $\sigma(v)=\emptyset$ for $v\in W_0$. \end{definition}
We take the convention that $|\emptyset| = 0$ so that $v
\mapsto |\sigma(v)|$ is a function from $G^0$ to the nonnegative integers. In particular, $v\in W_0$ if and only if
$|\sigma(v)|=0$, and $v\in W_+$ if and only if $|\sigma(v)|\geq 1$.
\begin{definition}\label{def:X_n} For each $n\in\mathbb{N}$, we define a subset $\Xset{n}$ of $G^0\sqcup \Delta$ by \[
\Xset{n}:=\big\{v\in r(e_n):|\sigma(v)|<n\big\}\sqcup \big\{\omega\in \Delta_n: \omega_n=1\big\}. \] \end{definition}
\begin{remark}\label{rmk:redundant e} The occurrence of the symbol $e$ in the notation $\Xset{n}$ is redundant; we might just as well label this set $X(n)$ or $X_n$. However, we feel that it is helpful to give some hint that the role of the $n$ in this notation is to pick out an edge $e_n$ from our chosen listing of $\mathcal{G}^1$. \end{remark}
\begin{lemma}\label{lem:Wnfin} For each $n\in\mathbb{N}$, the set $\Xset{n}$ is nonempty and finite. \end{lemma} \begin{proof}
Fix $n \in \mathbb{N}$. To see that $\Xset{n}$ is nonempty, suppose that $\big\{v\in r(e_n):|\sigma(v)|<n\big\} = \emptyset$. Since $r(e_n) \not=\emptyset$, there exists $v \in r(e_n)$ such that $|\sigma(v)|\ge n$. Set $\omega = \sigma(v)|_n$. Since $v \in r(\sigma(v))\subset r(\omega)$, we have $\omega_n = 1$ and $|r(\omega)|=\infty$, so $\omega \in \Xset{n}$. Thus $\Xset{n}$ is nonempty.
For $v \in r(e_n)$ with $|\sigma(v)| = 0$, the element $\omega \in \{0,1\}^{n}$ satisfying $v \in r(\omega)$
is not in $\Delta_n$ by definition. Hence the set $\{v\in r(e_n):|\sigma(v)| = 0\}$ is finite, because it is a subset of the union of finitely many finite sets $r(\omega)$ where $\omega \in \{0,1\}^n\setminus \Delta_n$. Since $\sigma^{-1}(\omega)$ is finite for all
$\omega \in \Delta$ and since $\{\omega \in \Delta : |\omega| <
n\}$ is finite, we have $|\{v\in r(e_n):0<|\sigma(v)|<n\}| < \infty$. Since $\Delta_n$ is finite, $\{\omega\in \Delta_n: \omega_n=1\}$ is also finite, and thus $\Xset{n}$ is finite. \end{proof}
\begin{definition}\label{dfn:E} We define a graph $E=(E^0,E^1,r_E,s_E)$ as follows: \begin{align*} E^0 &:=G^0\sqcup \Delta, \\ E^1 &:=\{\overline{x} : x\in W_+\sqcup\Gamma_+\} \sqcup\big\{\edge(n,x) : e_n \in \mathcal{G}^1,\ x\in \Xset{n}\big\}, \\ r_E(\overline{x})&:=x, \quad r_E(\edge(n,x)) := x, \\
s_E(\overline{v})&:=\sigma(v), \quad s_E(\overline{\omega}):=\omega|_{|\omega|-1},\quad s_E(\edge(n,x)) := s(e_{n}). \end{align*} \end{definition}
\begin{remark} Just as in Remark~\ref{rmk:redundant e}, the symbol $e$ here is redundant; we could simply have denoted the edge $\edge(n,x)$ by $(n,x)$. We have chosen notation which is suggestive of the fact that the $n$ is specifying an element of $\mathcal{G}^1$ via our chosen listing. \end{remark}
For the following proposition, recall $E^0_\textnormal{rg}$ denotes the set $\{v \in E^0 : s_E^{-1}(v)$ is finite and nonempty$\}$ of regular vertices of $E$. Also recall from Section~\ref{sec:prelims} that $G^0_\textnormal{rg}$ denotes the set of regular vertices of $\mathcal{G}$.
\begin{proposition}\label{prp:reg verts} We have $E^0_{\textnormal{rg}}=G^0_{\textnormal{rg}}\sqcup\Delta$. \end{proposition}
\begin{proof} For $v\in G^0$, we have $s_E^{-1}(v) = \bigsqcup_{s(e_n) = v} \{\edge(n,x): x\in \Xset{n}\}$. Hence Lemma \ref{lem:Wnfin} implies that $s_E^{-1}(v)$ is nonempty and finite if and only if $s^{-1}(v)\subset\mathcal{G}^1$ is nonempty and finite. Hence $v\in E^0_{\textnormal{rg}}$ if and only if $v\in G^0_{\textnormal{rg}}$. Thus $E^0_{\textnormal{rg}}\cap G^0=G^0_{\textnormal{rg}}$.
Fix $\omega\in \Delta$. By Remark~\ref{rmk:omegai}, we have $(\omega, i) \in \Delta$ for at least one of $i=0$ and $i=1$. Since $s_E\big(\overline{(\omega, i)}\big)=\omega$, the set $s_E^{-1}(\omega)$ is not empty. By definition of $\sigma$, the set $s_E^{-1}(\omega)$ is finite. Hence $\omega\in E^0_{\textnormal{rg}}$. Thus $E^0_{\textnormal{rg}}=G^0_{\textnormal{rg}}\sqcup\Delta$. \end{proof}
\subsection*{An example of the graph constructed from an ultragraph.}
We present an example of our construction, including an illustration. This concrete example will, we hope, help the reader to visualise the general construction and to keep track of notation. The definition of the ultragraph $\mathcal{G}$ in this example may seem involved, but has been chosen to illustrate as many of the features of our construction as possible with a diagram containing relatively few vertices.
\begin{example}\label{ex:illustration} For the duration of this example we will omit the parentheses and commas when describing elements of $\Delta$. For example, the element $(0,0,1) \in \{0,1\}^3$ will be denoted $001$.
We define an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r, s)$ as follows. Let $G^0 := \{v_n : n \in \mathbb{N}\}$ and $\mathcal{G}^1 := \{e_n : n \in \mathbb{N}\}$. For each $n \in \mathbb{N}$, let $s(e_n) := v_n$. For $k\in \mathbb{N}$, let \begin{align*} r(e_{2k-1}) &:= \{ v_m : (k+2) \text{ divides } m \},\\ r(e_{2k}) &:= \{ v_m : \text{$m \le k^2$ and $4$ does not divide $m$}\}. \end{align*}
We will construct a graph $E$ from $\mathcal{G}$ as described in the earlier part of this section. We have displayed $\mathcal{G}$ and $E$ in Figure~\ref{pic:G,E}. To construct $E$, we must first choose a function $\sigma : W_+ \to \Delta$ as in Lemma~\ref{lem:W_+ -> Delta^0}. To do this, we first describe $\Delta$ and $W_+$.
Fix $n \ge 1$ and $\omega \in \{0,1\}^n \setminus \{0^n\}$. If $\omega_{2k} = 1$ for some $k \in \mathbb{N}$, then $r(\omega) \subset \{v_1,v_2, \dots, v_{k^2}\}$ is finite, so $\omega \notin \Delta$. If $\omega_{2k-1} = 0$ and $\omega_{2l-1} = 1$ for $k,l\in \mathbb{N}$ with $(k+2) \mid (l+2)$, then $r(\omega) = \emptyset$, so $\omega \notin \Delta$. Indeed, we have $\omega \in \Delta$ if and only if: \begin{enumerate} \item $\omega_i = 1$ for some $i$; \item\label{it:odd only} $\omega_i = 0$ for all even $i$;
and \item\label{it:divisibility} whenever $k$ satisfies
$\omega_{2k-1} = 0$, we have $(k + 2) \nmid \operatorname{lcm}\{l + 2
: \omega_{2l-1} = 1\}$. \end{enumerate} For example, note that $\omega = 1010100$ and $\omega = 1010000$ are not in $\Delta$ (see Figure~\ref{pic:G,E}) because these have $\omega_1 = \omega_3 = 1$, but $\omega_7 = 0$. Similarly, no element of the form $0{*}{*}{*}{*}{*}1$ belongs to $\Delta$: such an element has $\omega_7 = 1$ but $\omega_1 = 0$, violating condition~(\ref{it:divisibility}).
To describe $\Gamma_0$, first observe that an element of the form $0^n1$ can belong to $\Delta$ only if $n$ is even. For an element $\omega$ of the form $0^{2n}1$, conditions (1)~and~(2) above are trivially satisfied, and $\{l + 2
: \omega_{2l-1} = 1\} = \{n + 3\}$, so condition~(\ref{it:divisibility}) holds if and only if $k+2 \nmid n+3$ for all $k$ such that $2k-1 < 2n + 1$; that is, if and only if $n+3$ is equal to $4$ or is an odd prime number. Thus \begin{align*} \Gamma_0 &= \big\{0^{2(p-3)}1 : \text{$p=4$ or $p$ is an odd prime number}\big\} \\ &= \{1, 001, 00001, 0^81, 0^{16}1, 0^{20}1, 0^{28}1, \ldots\}. \end{align*}
\begin{figure}
\caption{The ultragraph $\mathcal{G}$ (top) and graph $E$ (bottom) of Example~\ref{ex:illustration}}
\label{pic:G,E}
\end{figure}
We now describe $W_+$. By Lemma~\ref{lem:W_+=disjoint union}, $W_+$ is the disjoint union of the sets $r(0^{2(p-3)}1)$ where $p$ runs through $4$ and all odd prime numbers. We have \begin{align*} r(1)&=\{v_m : 3\mid m\}, \\ r(001)&= \{v_m : 3 \nmid m\text{ and } 4 \mid m\}, \\ r(00001)&= \{v_m : 3 \nmid m,\ 4 \nmid m, \text{ and } 5 \mid m\}, \intertext{and for an odd prime number $p$ greater than $5$,} r(0^{2(p-3)}1) &= \big\{v_m : 3 \nmid m,\ 4 \nmid m, \ldots, (p-1) \nmid m,\ p \mid m, \text{ and } m > (p-3)^2\big\}. \end{align*} This implies that $v_1, v_2 \not\in W_+$ and that $v_m \in W_+$ whenever $3\mid m$, $4\mid m$, or $5\mid m$. Fix $m\in \mathbb{N}\setminus (\{1,2\}\cup 3\mathbb{N} \cup 4\mathbb{N} \cup 5\mathbb{N})$. Let $p$ be the smallest odd prime divisor of $m$. Then $p$ is greater than $5$. Moreover $v_m \in W_+$ if and only if $v_m \in r(0^{2(p-3)}1)$, which is equivalent to $m > (p-3)^2$. Let $k = m/p \in \mathbb{N}$. Since $p$ is the smallest odd prime divisor of $m$, either $k=1$, $k=2$, or $k\geq p$. If $k=1$ or $k=2$, we have $m = kp \leq (p-3)^2$ and hence $v_m \not\in W_+$. If $k\geq p$, then $m = kp \geq p^2 > (p-3)^2$, so $v_m \in W_+$. Recall that $W_0 = G^0 \setminus W_+$. We have proved that $W_0$ may be described as \begin{align*} W_0 &= \big\{v_p,v_{2p} : \text{$p=1$ or $p$ is an odd prime number greater than $5$}\big\}\\ &= \{v_1,v_2, v_7, v_{11}, v_{13}, v_{14}, v_{17}, v_{19}, v_{22}, v_{23}, \ldots \}, \end{align*} and then $W_+$ is the complement of this set: \[ W_+ = G^0 \setminus W_0 = \{v_3, v_4, v_5, v_6, v_8, v_9, v_{10}, v_{12}, v_{15}, v_{16}, v_{18}, v_{20}, v_{21}, \ldots\}. \]
We now define a function $\sigma\colon W_+\to \Delta$ with the properties described in Lemma~\ref{lem:W_+ -> Delta^0}. Since each $r(e_{2k}) = \{v_n : n \le k^2, 4\nmid n\}$, the set $W_\infty \subset W_+$ described in the proof of Lemma~\ref{lem:W_+ -> Delta^0} is $\{v_m \in W_+: 4 \mid m\}$. Thus $\{v_{4}, v_8, v_{12}, v_{16}, \dots\}$ is an ordering of $W_\infty$. For $k \in \mathbb{N}$, let $n := \max\{3,k\}$. Then $n$ is the smallest integer such that $n \ge k$ and $v_{4k} \in \bigsqcup_{\omega \in \Delta_n} r(\omega)$. Define $\sigma(v_{4k})$ to be the unique element $\omega$ of $\Delta_n$ such that $v_{4k} \in r(\omega)$. So \[ \sigma(v_4) = \sigma(v_8) = 001, \quad \sigma(v_{12}) = 101, \quad \sigma(v_{16}) = 0010, \quad \sigma(v_{20}) = 00101, \quad \dots \] For $v_m \in W_+ \setminus W_{\infty}$, let $k$ be the minimal integer such that $m \le k^2$. Then $n := 2k - 1$ is the maximal integer such that $v_m \in \bigsqcup_{\omega \in \Delta_n} r(\omega)$. We define $\sigma(v_m)$ to be the unique element $\omega$ of $\Delta_n$ such that $v_m \in r(\omega)$. So \begin{align*} \sigma(v_3) &= 100, \quad \sigma(v_5) = 00001, \quad \sigma(v_6) = 10000,\quad \sigma(v_9) = 10000,\\ \sigma(v_{10}) &= 0000100,\quad \sigma(v_{15}) = 1000100,\quad \sigma(v_{18}) = 100000100,\quad \dots \end{align*} By our convention that $\sigma(v) = \emptyset$ whenever $v \in W_0$, we have \[ \emptyset = \sigma(v_1) = \sigma(v_2) = \sigma(v_7) = \sigma(v_{11}) = \sigma(v_{13}) = \sigma(v_{14}) = \sigma(v_{17}) = \sigma(v_{19}) = \cdots. \] We also have \begin{gather*} \Xset{1} = \{1\},\qquad \Xset{2} = \{v_1\},\qquad \Xset{3} = \{001,101\},\qquad \Xset{4} = \{v_1, v_2, v_3\}, \\ \Xset{5} = \{00001,00101,10001,10101\},\qquad \Xset{6} = \{v_1, v_2, v_3, v_5, v_6, v_7, v_9\}, \\ \Xset{7} = \{v_6, v_{12}, v_{24}, 1000001, 1000101, 1010001, 1010101\}, \\ \Xset{8} = \{v_1, v_2, v_3, v_5, v_6, v_7, v_9,
v_{10}, v_{11}, v_{13}, v_{14}, v_{15}\},\qquad \dots \end{gather*} This is all the information required to draw $E$, and we have done so in Figure~\ref{pic:G,E}. To distinguish the various special sets of vertices discussed above, we draw vertices using four different symbols as follows: vertices of the form $\odot$ belong to $W_0$; those of the form $\otimes$ belong to $W_+$; those of the form $\circledcirc$ belong to $\Gamma_0$; and those of the form $\oplus$ belong to $\Gamma_+$. The dashed arc separates $G^0$ on the left from $\Delta$ on the right.
Edges drawn as double-headed arrows are of the form $\overline{x}$ where $x \in W_+\sqcup\Gamma_+$, and edges drawn as single-headed arrows are of the form $\edge(n,x)$ where $e_n \in \mathcal{G}^1$ and $x \in \Xset{n}$. Since $s_E(\overline{x}) \in \Delta$ and $r_E(\overline{x}) = x$ for all $x \in W_+\sqcup\Gamma_+$, and since $s_E(\edge(n,x)) = s(e_n) = v_n$ and $r_E(\edge(n,x)) = x$ for all $n$ and $x \in \Xset{n}$, once we know the type of an edge, the edge is uniquely determined by its source and its range. Thus it is not necessary to label the edges in the figure. \end{example}
\section{Paths and Condition~(K)} \label{Path-sec}
For this section, we fix an ultragraph $\mathcal{G}$, and make a choice of an ordering $\{e_1, e_2, \dots\}$ of $\mathcal{G}^1$ and a function $\sigma$ as in Lemma~\ref{lem:W_+ -> Delta^0}. Let $E=(E^0,E^1,r_E,s_E)$ be the graph constructed from $\mathcal{G}$ as in Definition~\ref{dfn:E}. We relate the path structure of $E$ to that of $\mathcal{G}$. In particular, we show that $\mathcal{G}$ satisfies Condition~(K) as in \cite{KMST} if and only if $E$ satisfies Condition~(K) as in \cite{KPRR}. Condition~(K) was introduced in \cite{KPRR} to characterise those graphs in whose $C^*$-algebras every ideal is gauge-invariant. In Section~\ref{ideal-sec}, we will combine our results in this section with our main result Theorem~\ref{thm:fullcorner} to deduce from Kumjian, Pask, Raeburn and Renault's result the corresponding theorem for ultragraph $C^*$-algebras.
We recall some terminology for graphs (see, for example, \cite{KPR, BPRS2000}; note that our edge direction convention agrees with that used in these papers and in \cite{KMST}, which is opposite to that used in \cite{Raeburn2005}).
For each integer $n \geq 2$, we write \[ E^n := \{\alpha = \alpha_1 \alpha_2 \cdots \alpha_n : \text{$\alpha_i \in E^1$ and $s_E(\alpha_{i+1})=r_E(\alpha_{i})$ for all $i$}\}, \] and $E^* := \bigsqcup_{n=0}^\infty E^n$. The elements of $E^*$ are called \emph{paths}. The \emph{length} of a path $\alpha$
is the integer $|\alpha|$ such that $\alpha \in E^{|\alpha|}$. We extend the range and source maps to $E^*$ as follows. For $v \in E^0$, we write $r_E(v) = s_E(v) = v$. For $\alpha \in E^*\setminus E^0$ we write $r_E(\alpha) =
r_E(\alpha_{|\alpha|})$ and $s_E(\alpha) = s_E(\alpha_1)$.
For $\alpha,\beta \in E^*$ such that $r_E(\alpha) = s_E(\beta)$, we may form the path $\alpha\beta \in E^*$ by concatenation. Thus $E^*$ becomes a category whose unit space is $E^0\subset E^*$. For a vertex $v \in E^0$, a \emph{return path based at $v$} is a path $\alpha$ of nonzero length with $s_E(\alpha) = r_E(\alpha) = v$. A return path $\alpha = \alpha_1 \alpha_2 \cdots \alpha_n$ based at $v$ is called a \emph{first-return path} if $s_E(\alpha_i) \neq v$ for $i = 2, 3, \ldots, n$. We say that $E$ satisfies \emph{Condition~(K)} if no vertex is the base of exactly one first-return path (equivalently, each vertex is either the base of no return path, or is the base of at least two first-return paths).
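For instance, if $v$ is the base of a loop $f \in E^1$ and every return path based at $v$ is a power $f^k$ of this loop, then $f$ is the unique first-return path based at $v$ (each $f^k$ with $k \geq 2$ revisits $v$ before its end, so is not a first-return path), and $E$ fails Condition~(K); if instead $v$ is the base of two distinct loops $f$ and $g$, then $f$ and $g$ are two distinct first-return paths based at $v$.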
Consider the subgraph $F$ of $E$ with the same vertices as $E$, and edges $F^1=\{\overline{x} : x\in W_+\sqcup\Gamma_+\} \subset E^1$. Note that $F^1\subset E^1$ is the set of all edges in $E^1$ starting from $\Delta \subset E^0$. In Figure~\ref{pic:G,E} the elements of $F^1$ are the double-headed arrows. Then \[ F^*=E^0\sqcup\big\{\overline{x}_1\overline{x}_2\cdots \overline{x}_n\in E^*: n\in\mathbb{N}, x_i\in W_+\sqcup\Gamma_+\big\}\subset E^*. \] In particular $\alpha \in E^*$ belongs to $F^*$ if and only if it contains no edges of the form $\edge(n,x)$ where $e_n\in\mathcal{G}^1$ and $x\in \Xset{n}$.
\begin{lemma}\label{lem:unique expression} Every $\alpha\in E^*$ can be uniquely expressed as \[ \alpha=g_0\cdot \edge(n_1,x_1)\cdot g_1\cdot \edge(n_2,x_2)\cdot g_{2} \cdot \cdots \cdot \edge(n_k,x_k)\cdot g_{k} \] where each $e_{n_i}\in\mathcal{G}^1$, $x_i\in \Xset{n_i}$ and $g_i\in F^*$. \end{lemma}
\begin{proof} Let $\alpha = \alpha_1\alpha_2 \dots \alpha_n$. Whenever $\alpha_i$ and $\alpha_{i+1}$ are both of the form $\edge(n,x)$, rewrite $\alpha_i \alpha_{i+1} = \alpha_i r_E(\alpha_i) \alpha_{i+1}$ (recall that $r_E(\alpha_i) \in E^0$ belongs to $F^*$ by definition). Now by grouping sequences of consecutive edges from $F^1$, we obtain an expression for $\alpha$ of the desired form. This expression is clearly unique. \end{proof}
In the graph $F$, we can distinguish the sets $W_0$, $W_+$, $\Gamma_0$, and $\Gamma_+$ as follows. \begin{itemize} \item An element in $W_0$ emits no edges, and receives no edges. \item An element in $W_+$ emits no edges, and receives exactly one edge. \item An element in $\Gamma_0$ emits a finite and nonzero
number of edges, and receives no edges. \item An element in $\Gamma_+$ emits a finite and nonzero
number of edges, and receives exactly one edge. \end{itemize} Next we describe the paths of the graph $F$. To do so, the following notation is useful.
\begin{definition} For $n \in \mathbb{N}$ and $\omega \in \{0,1\}^n \setminus \{0^n\}$ we set \[ r'(\omega)
:= \{v \in r(\omega) : |\sigma(v)| \geq n\}
=\{ v \in G^0 : |\sigma(v)| \geq n,\ \sigma(v)|_{n} = \omega\}. \] \end{definition}
To see that the two sets in the definition above coincide, it suffices to see that for $v \in G^0$ with $|\sigma(v)| \ge n$, we have $v \in r(\omega)$ if and only if $\sigma(v)|_{n} =
\omega$. For this, observe that $v \in r(\sigma(v)) \subset r(\sigma(v)|_{n})$ and that there is at most one $\omega \in \{0,1\}^{n} \setminus \{0^{n}\}$ such that $v \in r(\omega)$.
\begin{lemma}\label{lem:r'empty} For $\omega \notin \Delta$ the set $r'(\omega)$ is empty. \end{lemma}
\begin{proof} Suppose that $\omega \in \{0,1\}^n \setminus \{0^n\}$
for some $n \in \mathbb{N}$ and that $r'(\omega)$ is nonempty. Then there exists $v\in G^0$ with $|\sigma(v)|\geq n$
and $\sigma(v)|_{n} = \omega$. Since $\sigma(v)\in\Delta$ and $\omega\neq 0^n$, we have $\omega \in \Delta$. This shows $r'(\omega) =\emptyset$ for $\omega \notin \Delta$. \end{proof}
\begin{lemma}\label{lem:r(e_n) and X_n} For each $n \in \mathbb{N}$, we have $r(e_n) = (\Xset{n} \cap G^0) \sqcup \big( \bigsqcup_{\omega \in \Xset{n} \cap \Delta}r'(\omega) \big)$. \end{lemma} \begin{proof} We have $r(e_n) = \bigsqcup_{\omega \in \{0,1\}^n,\,\omega_n=1} r(\omega)$. The definition of $\Xset{n}$ (see Definition~\ref{def:X_n}) guarantees that
$\Xset{n} \cap G^0 = \{v \in r(e_n) : |\sigma(v)| < n\}$. For $\omega \in \{0,1\}^n$ with $\omega_n=1$, we have
$r'(\omega)=r(\omega)\setminus\{v \in r(\omega) : |\sigma(v)| < n\}$ by definition. Hence \[\textstyle r(e_n) = (\Xset{n} \cap G^0) \sqcup \big(\bigsqcup_{\omega \in \{0,1\}^n,\,\omega_n=1} r'(\omega)\big). \] Finally $r'(\omega)=\emptyset$ for $\omega \notin \Delta$ by Lemma~\ref{lem:r'empty}. \end{proof}
\begin{remark}\label{rem:r'=r'} For $\omega \in \{0,1\}^n \setminus \{0^n\}$ one can show
$r'(\omega) = r'(\omega, 0) \sqcup r'(\omega, 1) \sqcup \sigma^{-1}(\omega)$, using the fact that $\sigma^{-1}(\omega)=\{v \in r(\omega) : |\sigma(v)|=n\}$. We omit the routine proof because we do not use this identity, but we record it here because it is related to Lemma~\ref{lem:Q'} (it can be proved using Lemma~\ref{lem:paths in F}~(3) below). \end{remark}
\begin{lemma}\label{lem:paths in F} The graph $F$ contains no return paths, and each $\alpha \in F^*$ is uniquely determined by $s_E(\alpha)$ and $r_E(\alpha)$. Moreover, \begin{enumerate} \item every path $\alpha \in F^*$ of nonzero length satisfies $s_E(\alpha) \in \Delta$; \item there is a path in $F$ from $\omega \in \Delta$ to $\omega' \in \Delta$ if and only if $|\omega| \le |\omega'|$ and $\omega = \omega'|_{|\omega|}$; and \item there is a path in $F$ from $\omega \in \Delta$ to $v \in G^0$ if and only if $v \in r'(\omega)$. \end{enumerate} \end{lemma} \begin{proof} Fix $e \in F^1$. Then either $r_E(e) \in G^0$ and hence is a sink in $F$, or else $s_E(e) \in \Delta_n$ and $r_E(e) \in \Delta_{n+1}$ for some $n \in \mathbb{N}$. Thus $F$ contains no return paths.
Now suppose that $\alpha, \alpha' \in F^*$ satisfy $r_E(\alpha) =
r_E(\alpha')$ and $s_E(\alpha) = s_E(\alpha')$. Without loss of generality, we may assume that $|\alpha| \ge |\alpha'|$. By definition of $F$, each vertex $v \in E^0$ receives at most one edge in $F^1$, so $\alpha = \beta\alpha'$ for some $\beta \in F^*$. This forces $s_E(\beta) = s_E(\alpha) = s_E(\alpha') = r_E(\beta)$, and then $\beta$ has length $0$ by the preceding paragraph, and $\alpha = \alpha'$.
By definition of $F$, we have $s_E(F^1) = \Delta$, which proves~(1). As explained in the first paragraph, a path
$\alpha$ from $\Delta_{n}$ to $\omega'\in \Delta$ must have the form $\alpha = \overline{\omega'|_{n+1}} \cdot
\overline{\omega'|_{n+2}} \cdots
\overline{\omega'|_{|\omega'|-1}} \cdot \overline{\omega'}$. This expression makes sense if and only if $n \le |\omega'|$
and $\omega := \omega'|_n$ is in $\Delta_{n}$, and then $\alpha$ has source $\omega$. This proves~(2). For~(3), fix $\omega \in \Delta_{n}$ and $v \in G^0$. There is a path from
$\omega$ to $v$ if and only if $v \in W_+$ and there is a path from $\omega$ to $\sigma(v)$. By~(2), this occurs if and only if $n \le |\sigma(v)|$ and $\sigma(v)|_{n} = \omega$ (in particular, $n=|\omega|$). Thus, there is a path from $\omega$ to $v$ if and only if $v \in r'(\omega)$. \end{proof}
\begin{definition}\label{dfn:f-paths} Lemma~\ref{lem:paths in F} implies that for each $x \in E^0$, there is a unique element $f_x \in F^*$ such that $r_E(f_x) = x$ and $s_E(f_x) \in W_0 \sqcup \Gamma_0$. Observe that \begin{itemize} \item For $x\in W_0\sqcup\Gamma_0$, we have $f_x = x$. \item For $x = \omega\in\Gamma_+$, we have \[
f_x=\overline{\omega|_{m+1}}\cdot\overline{\omega|_{m+2}}\cdots
\overline{\omega|_{|\omega|-1}}\cdot\overline{\omega} \] where $m=\min\{k:\omega_k=1\}$. \item For $x = v\in W_+$, we have $f_x = f_{\sigma(v)}
\overline{v}$. \end{itemize} \end{definition}
\begin{example} Consider the ultragraph of Example~\ref{ex:illustration}, and the corresponding graph $E$ illustrated there. \begin{itemize} \item We have $f_{v_1} = v_1$ and $f_{00001} = 00001$ since $v_1 \in W_0$ and $00001 \in \Gamma_0$. \item We have $f_{001000} =
\overline{0010}\cdot\overline{00100}\cdot\overline{001000}$. \item We have $f_{v_6} =
\overline{10}\cdot\overline{100}\cdot\overline{1000}\cdot\overline{10000}\cdot\overline{v_6}$. \end{itemize} In the last two instances, it is easy to see that $f_x$ is the unique path in double-headed arrows from $\Gamma_0$ (that is, from a vertex of the form $\circledcirc$) to $x$. \end{example}
\begin{lemma}\label{lem:path} For fixed $v,w \in G^0$, the map \begin{equation} g_0 \cdot \edge(n_1,x_1)\cdot g_1\cdot \edge(n_2,x_2)\cdot g_{2}\cdots \edge(n_k,x_k)\cdot g_{k} \mapsto \begin{cases} g_0 &\text{ if $k = 0$} \\ e_{n_1}e_{n_2}\cdots e_{n_k} &\text{ otherwise,} \end{cases} \end{equation} where each $e_{n_i}\in\mathcal{G}^1$, $x_i\in \Xset{n_i}$, and $g_i\in F^*$ as in Lemma~\ref{lem:unique expression}, is a bijection between paths in $E$ from $v$ to $w$ and paths in $\mathcal{G}$ beginning at $v$ whose ranges contain $w$. \end{lemma}
\begin{proof} First note that we have $g_0=v$ because $v\in G^0$ emits no edges in $F$. Since $s_E(\edge(n_i,x_i)) = s(e_{n_i}) \in G^0$, to show that the map is well defined and bijective, it suffices to show that for each $e_{n}\in\mathcal{G}^1$ and $w \in G^0$, there exists a path $\alpha = \edge(n,x)\cdot g$ with $x\in \Xset{n}$ and $g\in F^*$ satisfying $r_E(\alpha) = w$ if and only if $w \in r(e_n)$, and that in this case $x\in \Xset{n}$ and $g\in F^*$ are unique. This follows from Lemma~\ref{lem:r(e_n) and X_n} and Lemma~\ref{lem:paths in F}~(3). \end{proof}
We introduced notions of paths and Condition~(K) for graphs at the beginning of this section. We now recall the corresponding notions for ultragraphs. A \emph{path} in an ultragraph is a sequence $\alpha = \alpha_1 \alpha_2 \dots \alpha_{|\alpha|}$ of edges such that $s(\alpha_{i+1}) \in r(\alpha_i)$ for all $i$. We write $s(\alpha) = s(\alpha_1)$ and $r(\alpha) =
r(\alpha_{|\alpha|})$. A \emph{return path} is a path $\alpha$ such that $s(\alpha) \in r(\alpha)$. A \emph{first-return path} is a return path $\alpha$ such that $s(\alpha) \not= s(\alpha_i)$ for any $i \ge 2$. As in \cite[Section~7]{KMST}, we say an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r, s)$ satisfies \emph{Condition~(K)} if no vertex is the base of exactly one first-return path.
\begin{proposition}\label{prop:Cond(K)} The graph $E$ satisfies Condition~(K) if and only if the ultragraph $\mathcal{G}$ satisfies Condition~(K). \end{proposition} \begin{proof} Lemma~\ref{lem:paths in F} implies that every return path in $E$ passes through some vertex in $G^0$. Hence $E$ satisfies Condition~(K) if and only if no vertex in $G^0$ is the base of exactly one first-return path in $E$. This in turn happens if and only if $\mathcal{G}$ satisfies Condition~(K) by Lemma~\ref{lem:path}. \end{proof}
\section{Full corners of graph algebras} \label{corner-sec}
Once again, we fix an ultragraph $\mathcal{G}$ and a graph $E$ constructed from $\mathcal{G}$ as in Definition~\ref{dfn:E}. We will show that the ultragraph algebra $C^*(\mathcal{G})$ is isomorphic to a full corner of the graph algebra $C^*(E)$.
\begin{definition}\label{def:graph algebra} The \emph{graph algebra} $C^*(E)$ of the graph $E=(E^0,E^1,r_E,s_E)$ is the universal $C^*$-al\-ge\-bra generated by mutually orthogonal projections $\{q_x : x\in E^0\}$ and partial isometries $\{t_\alpha : \alpha\in E^1\}$ with mutually orthogonal ranges satisfying the Cuntz-Krieger relations: \begin{enumerate} \item $t_\alpha^*t_\alpha = q_{r_E(\alpha)}$ for all
$\alpha\in E^1$; \item $t_\alpha t_\alpha^* \leq q_{s_E(\alpha)}$ for all
$\alpha\in E^1$; and \item $q_{x} = \sum_{s_E(\alpha)=x} t_\alpha t_\alpha^*$ for $x \in E^0_{\textnormal{rg}}$. \end{enumerate} \end{definition}
As usual, for a path $\alpha=\alpha_1\alpha_2\cdots \alpha_n$ in $E$ we define $t_\alpha\in C^*(E)$ by $t_\alpha=t_{\alpha_1}t_{\alpha_2}\cdots t_{\alpha_n}$. For $x \in E^0 \subset E^*$, the notation $t_x$ is understood as $q_x$. The properties (1) and (2) in Definition~\ref{def:graph algebra} hold for all $\alpha \in E^*$.
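To see why properties (1) and (2) extend from edges to paths, here is a routine verification (added for the reader's convenience; it is not in the original text) of property~(1) for a path of length two; the general case follows by induction on $|\alpha|$:

```latex
% For a path \alpha = \alpha_1\alpha_2, so that s_E(\alpha_2) = r_E(\alpha_1):
\[
t_\alpha^* t_\alpha
  = t_{\alpha_2}^*\big(t_{\alpha_1}^* t_{\alpha_1}\big)t_{\alpha_2}
  = t_{\alpha_2}^*\, q_{r_E(\alpha_1)}\, t_{\alpha_2}
  = t_{\alpha_2}^*\, q_{s_E(\alpha_2)}\, t_{\alpha_2}
  = t_{\alpha_2}^* t_{\alpha_2}
  = q_{r_E(\alpha_2)}
  = q_{r_E(\alpha)},
\]
% using q_{s_E(\alpha_2)} t_{\alpha_2} = t_{\alpha_2}, which follows from
% relation (2) and the partial isometry identity t = t t^* t.
```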
\begin{definition}\label{dfn:Ux} For each $x\in E^0$ define a partial isometry $U_x := t_{f_x}\in C^*(E)$ where $f_x \in F^*$ is as in Definition~\ref{dfn:f-paths}. \end{definition}
By definition of $f_x$ and the Cuntz-Krieger relations, we have $U_x^*U_x=q_x$ and $U_xU_x^*\leq q_{s_E(f_x)}$ for $x\in E^0$.
\begin{lemma}\label{lem:uxux*uyuy*} For $x,y\in E^0$ with $x\neq y$, we have \[ (U_{x}U_{x}^*)(U_{y}U_{y}^*) =\begin{cases} U_{y}U_{y}^* & \text{if there exists a path in $F$ from $x$ to $y$,}\\ U_{x}U_{x}^* & \text{if there exists a path in $F$ from $y$ to $x$,}\\ 0 & \text{otherwise.} \end{cases} \] In particular, for $n\in\mathbb{N}$ and $x,y\in \Xset{n}$ with $x\neq y$, we have $U_x^*U_y=0$. \end{lemma} \begin{proof}
Without loss of generality, we may assume $|f_x| \le |f_y|$. Then $U_{x}^*U_{y}\neq 0$ if and only if $f_y$ extends $f_x$, and in this case $(U_{x}U_{x}^*)U_{y}=U_{y}$. By the uniqueness of $f_y$ in $F^*$ stated in Definition~\ref{dfn:f-paths}, $f_y$ extends $f_x$ exactly when there exists a path in $F$ from $x$ to $y$.
For the last statement, observe that by Lemma~\ref{lem:r(e_n) and X_n} and Lemma~\ref{lem:paths in F}, there exist no paths in $F$ among vertices in $\Xset{n}$. Hence $U_x^*U_y = U_x^*U_xU^*_x U_y U^*_y U_y = 0$. \end{proof}
\begin{definition}\label{dfn:G-family in E} For $v\in G^0$, we set $P_v := U_vU_v^*$. For $e_n\in\mathcal{G}^1$, we set \[ S_{e_n} := U_{s(e_n)}\sum_{x\in \Xset{n}}t_{\edge(n,x)}U_{x}^*. \] \end{definition}
It is clear that $P_v$ is a nonzero projection, and the last statement of Lemma~\ref{lem:uxux*uyuy*} implies that $S_{e_n}$ is a partial isometry. We will show in Proposition \ref{prop:ELGfam} that the collection $\{P_v : v\in G^0\}$ and $\{S_{e_n} : e_n\in\mathcal{G}^1\}$ is an Exel-Laca $\mathcal{G}$-family in $C^*(E)$.
\begin{definition}\label{dfn:Q'} For $e_n \in \mathcal{G}^1$, we define $Q_{e_n}:=S_{e_n}^*S_{e_n}\in C^*(E)$. For $\omega \in \bigsqcup_{n=1}^\infty (\{0,1\}^n \setminus \{0^n\})$, we define \[ Q'_\omega := \begin{cases} U_{\omega}U_{\omega}^* &\text{ if $\omega \in \Delta$} \\ 0&\text{ otherwise.} \end{cases} \] \end{definition}
The projections $Q'_\omega$ are related to the sets $r'(\omega)$ of the preceding section (see Proposition~\ref{prop:r'(o)}).
\begin{lemma}\label{lem:P_vQ'_o} The collections $\{P_v : v\in G^0\}$ and $\{Q_\omega' : \omega\in \bigsqcup_{n=1}^\infty (\{0,1\}^n \setminus \{0^n\})\}$ of projections satisfy the following: \begin{enumerate} \item $\{P_v : v\in G^0\}$ are pairwise orthogonal. \item $\{Q_\omega' : \omega\in \bigsqcup_{n=1}^\infty
(\{0,1\}^n \setminus \{0^n\})\}$ pairwise commute. \item $\{Q_\omega' : \omega\in \{0,1\}^n \setminus
\{0^n\}\}$ are pairwise orthogonal for each $n \in \mathbb{N}$. \item $P_v Q_\omega' = Q_\omega' P_v = P_v$ if $v \in r'(\omega)$, and $P_v Q_\omega' = Q_\omega' P_v = 0$ if $v \notin r'(\omega)$. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{lem:paths in F} paths in $F$ are uniquely determined by their ranges and sources, and Lemma~\ref{lem:uxux*uyuy*} shows how the $U_x U^*_x$ multiply. The four assertions follow immediately. \end{proof}
\begin{lemma}\label{lem:Q_e=} For $n\in\mathbb{N}$, we have \[ Q_{e_n}=\sum_{x\in \Xset{n}}U_xU_x^*= \sum_{\substack{\omega\in \{0,1\}^{n}\\ \omega_n=1}}Q'_\omega
+\sum_{\substack{v\in r(e_{n})\\ |\sigma(v)|<n}}P_v. \] \end{lemma} \begin{proof} We compute: \begin{align*} Q_{e_n} &= S_{e_n}^* S_{e_n} \\ &=\bigg(\sum_{x\in \Xset{n}}U_xt_{\edge(n,x)}^*\bigg)U_{s(e_n)}^*U_{s(e_n)} \bigg(\sum_{y\in \Xset{n}}t_{\edge(n,y)}U_y^*\bigg) \\ &=\sum_{x,y\in \Xset{n}}(U_xt_{\edge(n,x)}^*t_{\edge(n,y)}U_y^*). \end{align*} Since $t_{\edge(n,x)}^*t_{\edge(n,y)}=0$ for $x,y\in \Xset{n}$ with $x\neq y$, we deduce that $Q_{e_n}=\sum_{x\in \Xset{n}}U_xU_x^*$ as claimed.
By the definition of $\Xset{n}$, we have \[ \sum_{x\in \Xset{n}}U_xU_x^*= \sum_{\substack{\omega\in \{0,1\}^{n}\\ \omega_n=1}}Q'_\omega
+\sum_{\substack{v\in r(e_{n})\\ |\sigma(v)|<n}}P_v.\qedhere \] \end{proof}
\begin{lemma}\label{lem:Q_e} The collection of projections $\{Q_{e} : e\in\mathcal{G}^1\}$ satisfies the following: \begin{enumerate} \item $\{Q_{e} : e\in\mathcal{G}^1\}$ pairwise commute. \item $P_vQ_e=Q_eP_v=P_v$ if $v\in r(e)$, and $P_vQ_e=Q_eP_v=0$ if $v\notin r(e)$. \item For $n \in \mathbb{N}$ and $\omega \in \{0,1\}^n \setminus
\{0^n\}$, we have $Q_\omega'Q_{e_n} = Q_{e_n}Q_\omega'
= Q_\omega'$ if $\omega_n=1$, and $Q_\omega'Q_{e_n} =
Q_{e_n}Q_\omega' = 0$ if $\omega_n=0$. \end{enumerate} \end{lemma} \begin{proof} Assertions (1)~and~(2) follow from routine calculations using Lemma~\ref{lem:P_vQ'_o} and Lemma~\ref{lem:Q_e=}. Assertion~(3) follows from similar calculations using the decomposition of $r(e_n)$ from Lemma~\ref{lem:r(e_n) and X_n}. \end{proof}
\begin{lemma}\label{lem:Q'} For $\omega\in \{0,1\}^n\setminus\{0^n\}$, we have \[ Q'_\omega=Q'_{(\omega, 0)}+Q'_{(\omega, 1)}
+\sum_{\substack{v\in r(\omega)\\ |\sigma(v)|=n}}P_v. \] \end{lemma} \begin{proof} For $\omega\notin\Delta$ both sides of the equation are zero. For $\omega\in\Delta$, we have $\omega\in E^0_{\textnormal{rg}}$ by Proposition~\ref{prp:reg verts}. Hence by the Cuntz-Krieger relations, we have \begin{align*} q_\omega &=\sum_{\substack{i\in \{0,1\}\\ (\omega, i)\in\Delta}} t_{\overline{(\omega, i)}}t_{\overline{(\omega, i)}}^* +\sum_{\substack{v\in G^0\\ \sigma(v)=\omega}} t_{\overline{v}}t_{\overline{v}}^*\\ &=\sum_{\substack{i\in \{0,1\}\\ (\omega, i)\in\Delta}} t_{\overline{(\omega, i)}}t_{\overline{(\omega, i)}}^*
+\sum_{\substack{v\in r(\omega)\\ |\sigma(v)|=n}} t_{\overline{v}}t_{\overline{v}}^*. \end{align*} Multiplying by $U_\omega$ on the left and by $U_\omega^*$ on the right gives the desired equation. \end{proof}
\begin{definition}\label{dfn:Q_o} For $n\in\mathbb{N}$ and $\omega\in\{0,1\}^n\setminus\{0^n\}$, we define $Q_\omega\in C^*(E)$ by \[ Q_\omega := \prod_{\omega_i=1}Q_{e_i}\prod_{\omega_j=0}(1-Q_{e_j}). \] \end{definition}
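Instantiating Definition~\ref{dfn:Q_o} in a small case may be helpful (an illustrative computation added here, not part of the original development):

```latex
% For n = 3 and \omega = (1,0,1), the definition gives
\[
Q_{(1,0,1)} = Q_{e_1} Q_{e_3} (1 - Q_{e_2}),
\]
% where the order of the factors is irrelevant because the projections
% Q_{e_1}, Q_{e_2}, Q_{e_3} pairwise commute by Lemma~\ref{lem:Q_e}~(1).
```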
\begin{lemma}\label{lem:Q_o} For every $n \in \mathbb{N}$ and $\omega \in \{0,1\}^n\setminus\{0^n\}$, we have \begin{equation}\label{eq:Q(1-Q)} Q_\omega = Q'_\omega +
\sum_{\substack{v\in r(\omega)\\ |\sigma(v)|<|\omega|}}P_v. \end{equation} \end{lemma} \begin{proof} We proceed by induction on $n$. The case $n=1$ follows from Lemma \ref{lem:Q_e=} because $Q_{\omega}=Q_{e_1}$ and $r(\omega) = r(e_1)$ for the only element $\omega=(1)$ of $\{0,1\}^1\setminus\{0^1\}$.
Fix $n \in \mathbb{N}$, and suppose as an inductive hypothesis that Equation~\eqref{eq:Q(1-Q)} holds for all elements of $\{0,1\}^{n}\setminus\{0^{n}\}$. Then for each $\theta \in \{0,1\}^{n}\setminus\{0^{n}\}$, the inductive hypothesis and Lemma~\ref{lem:Q'} imply that \begin{align} Q_\theta
&= Q'_\theta + \sum_{\substack{v\in r(\theta)\\ |\sigma(v)|<n}}P_v \nonumber \\ &= \bigg(Q'_{(\theta, 0)}+Q'_{(\theta, 1)}
+\sum_{\substack{v\in r(\theta)\\ |\sigma(v)|=n}}P_v \bigg)
+\sum_{\substack{v\in r(\theta)\\ |\sigma(v)|<n}}P_v \nonumber \\ &= Q'_{(\theta, 0)} + Q'_{(\theta, 1)}
+ \sum_{\substack{v\in r(\theta)\\ |\sigma(v)|<n+1}}P_v.\label{eq:Q from Q'} \end{align}
Now fix $\omega \in \{0,1\}^{n+1} \setminus \{0^{n+1}\}$; we must establish Equation~\eqref{eq:Q(1-Q)} for this $\omega$. We consider three cases: $\omega = (\theta,1)$ for some $\theta \in \{0,1\}^n \setminus \{0^n\}$; $\omega = (\theta,0)$ for some $\theta \in \{0,1\}^n \setminus \{0^n\}$; or $\omega = (0^n,1)$.
First suppose that $\omega = (\theta,1)$. Then $Q_\omega = Q_{(\theta, 1)} =Q_\theta Q_{e_{n+1}}$. Combining this with Lemma~\ref{lem:Q_e} (2)~and~(3) and with~\eqref{eq:Q from Q'}, we obtain \[ Q_\omega =Q'_{(\theta, 1)} +
\sum_{\substack{v\in r(\theta)\cap r(e_{n+1})\\ |\sigma(v)|<n+1}}P_v
=Q'_{(\theta, 1)} + \sum_{\substack{v\in r(\theta, 1)\\ |\sigma(v)|<n+1}}P_v. \]
Now suppose that $\omega = (\theta,0)$. We may apply the conclusion of the preceding paragraph to $(\theta,1)$ to calculate \[ Q_\omega
= Q_{(\theta, 0)}
= Q_\theta -Q_{(\theta, 1)}
= Q'_{(\theta, 0)} + \sum_{\substack{v\in r(\theta)\setminus r(\theta, 1)\\ |\sigma(v)|<n+1}}P_v
=Q'_{(\theta, 0)}+\sum_{\substack{v\in r(\theta, 0)\\ |\sigma(v)|<n+1}}P_v. \]
Finally, suppose that $\omega = (0^n,1)$. Then we may apply the conclusion of the first case to each $Q_{(\theta,1)}$ where $\theta \in \{0,1\}^n \setminus \{0^n\}$ to calculate \begin{flalign*} &&Q_\omega
&= Q_{(0^n,1)} &\\
&&&=Q_{e_{n+1}}-\sum_{\theta \in \{0,1\}^n\setminus\{0^n\}}Q_{(\theta, 1)}&\\
&&&=\sum_{\substack{\delta \in \{0,1\}^{n+1}\\ \delta_{n+1}=1}}Q'_{\delta}
+\sum_{\substack{v\in r(e_{n+1})\\ |\sigma(v)|<n+1}}P_v
-\sum_{\theta \in \{0,1\}^n\setminus\{0^n\}}
\bigg(Q'_{(\theta, 1)}+\sum_{\substack{v\in r(\theta, 1)\\ |\sigma(v)|<n+1}}P_v \bigg)&\\
&&&=\sum_{\theta \in \{0,1\}^n}Q'_{(\theta, 1)}
+\sum_{\substack{v\in r(e_{n+1})\\ |\sigma(v)|<n+1}}P_v
-\sum_{\theta \in \{0,1\}^n\setminus\{0^n\}}Q'_{(\theta, 1)}
-\sum_{\substack{\theta \in \{0,1\}^n\setminus\{0^n\}\\ v\in r(\theta, 1)\\ |\sigma(v)|<n+1}}P_v&\\
&&&=Q'_{(0^n,1)}+\sum_{\substack{v\in r(0^n,1)\\
|\sigma(v)|<n+1}}P_v &\qedhere \end{flalign*} \end{proof}
\begin{corollary}\label{cor:cond(4)check}
For $\omega\in\{0,1\}^n\setminus\{0^n\}$ with $|r(\omega)|<\infty$, we have \[ \prod_{\omega_i=1}Q_{e_i}\prod_{\omega_j=0}(1-Q_{e_j}) =\sum_{v\in r(\omega)}P_v. \] \end{corollary} \begin{proof} Take $\omega\in\{0,1\}^n\setminus\{0^n\}$ with
$|r(\omega)|<\infty$. Then $\omega\notin \Delta$. Hence Lemma~\ref{lem:r'empty} and the definition of $r'(\omega)$ imply that $|\sigma(v)|<|\omega|$ for all $v\in r(\omega)$, and by definition, $Q'_\omega = 0$. Thus the conclusion follows from Lemma~\ref{lem:Q_o}. \end{proof}
\begin{lemma}\label{lem:SeSe*} For $e_n\in\mathcal{G}^1$, we have \[ S_{e_n}S_{e_n}^*=U_{s(e_n)}\bigg( \sum_{x\in \Xset{n}}t_{\edge(n,x)}t_{\edge(n,x)}^*\bigg)U_{s(e_n)}^*. \] \end{lemma} \begin{proof} Lemma~\ref{lem:uxux*uyuy*} shows that the $U_x$, $x \in \Xset{n}$, have mutually orthogonal range projections, and the result then follows from the definition of $S_{e_n}$. \end{proof}
\begin{lemma}\label{lem:SeSe*2} For each $v\in G^0$, \[ \bigg\{\sum_{x\in \Xset{n}}t_{\edge(n,x)}t_{\edge(n,x)}^* : n \in \mathbb{N}, s(e_n)=v\bigg\} \] is a collection of pairwise orthogonal projections dominated by $q_v$. Moreover, $G^0_{\textnormal{rg}} \subset E^0_{\textnormal{rg}}$, and if $v\in G^0_{\textnormal{rg}}$ then \[ \sum_{\{n \in \mathbb{N} \,:\, s(e_n) = v\}} \Big(\sum_{x\in \Xset{n}}t_{\edge(n,x)}t_{\edge(n,x)}^*\Big) = q_v. \] \end{lemma} \begin{proof} Proposition~\ref{prp:reg verts} shows that $G^0_{\textnormal{rg}} \subset E^0_\textnormal{rg}$ and both equations then follow from the Cuntz-Krieger relations in $C^*(E)$. \end{proof}
\begin{proposition}\label{prop:ELGfam} The collection $\{P_v : v\in G^0\}$ and $\{S_{e_n} : e_n\in\mathcal{G}^1\}$ is an Exel-Laca $\mathcal{G}$-family in $C^*(E)$. \end{proposition} \begin{proof} By Lemma \ref{lem:P_vQ'_o}~(1) and Lemma \ref{lem:Q_e}~(1)~and~(2), the collection $\{P_v : v\in G^0\}$ and $\{Q_{e_n} : e_n\in\mathcal{G}^1\}$ satisfies the conditions (1), (2), and (3) of Definition~\ref{dfn:Cond(EL)}. It follows from \cite[Corollary~2.18]{KMST} that to establish that the $\{P_v\}$ and the $\{Q_{e_n}\}$ satisfy Condition~(EL), it suffices to verify Condition~(4) of Definition~\ref{dfn:Cond(EL)} when $\lambda \cup \mu = \{e_1, \dots, e_n\}$ for some $n$, and this follows from Corollary~\ref{cor:cond(4)check}. The conditions (2) and (3) in Definition \ref{dfn:EL-G-fam} and the fact that the elements of $\{S_{e_n} : e_n\in\mathcal{G}^1\}$ have mutually orthogonal ranges follow from Lemma~\ref{lem:SeSe*} and Lemma~\ref{lem:SeSe*2}. \end{proof}
\begin{proposition}\label{prop:phi} There is a strongly continuous action $\beta$ of $\mathbb{T}$ on $C^*(E)$ satisfying \begin{itemize} \item $\beta_z(q_x)=q_x$ for $x\in E^0$, \item $\beta_z(t_{\overline{x}})=t_{\overline{x}}$ for
$x\in W_+\sqcup\Gamma_+$, and \item $\beta_z(t_{\edge(n,x)})=zt_{\edge(n,x)}$ for
$e_n\in\mathcal{G}^1, x\in \Xset{n}$. \end{itemize} Moreover, there is an injective homomorphism $\phi\colon C^*(\mathcal{G}) \to C^*(E)$ such that $\phi(p_v)=P_v$ and $\phi(s_e)=S_e$, and $\phi$ is equivariant for the gauge action on $C^*(\mathcal{G})$ and $\beta$. \end{proposition} \begin{proof} The existence of $\beta$ follows from a standard argument using the universal property of $C^*(E)$.
The first statement of \cite[Corollary~3.5]{KMST} implies that $C^*(\mathcal{G})$ is universal for Exel-Laca $\mathcal{G}$-families. Hence there is a homomorphism $\phi : C^*(\mathcal{G}) \to C^*(E)$ such that $\phi(p_v)=P_v$ and $\phi(s_e)=S_e$. To prove that $\phi$ is injective, we first show that $\phi$ is equivariant for the gauge action on $C^*(\mathcal{G})$ and $\beta$, and then apply the gauge-invariant uniqueness theorem for ultragraph algebras as stated in \cite[Corollary~3.5]{KMST}.
It suffices to show that $\beta_z(P_v)=P_v$ and $\beta_z(S_e)=zS_e$ for $v\in G^0$, $e\in \mathcal{G}^1$ and $z \in \mathbb{T}$. Each $\beta_z$ fixes $t_\alpha$ for every $\alpha \in F^*$, and hence fixes the partial isometries $U_x$ of Definition~\ref{dfn:Ux}. Hence $\beta$ has the desired properties by definition of the $S_e$ and $P_v$. \end{proof}
The defining properties of the homomorphism $\phi : C^*(\mathcal{G}) \to C^*(E)$ of the preceding proposition imply that \[ \phi(p_{r(e_n)}) = \phi(s^*_{e_n} s_{e_n}) = S_{e_n}^* S_{e_n} = Q_{e_n} \] for all $n$, so for all $\omega \in \{0,1\}^n \setminus \{0^n\}$ we have \[ \phi(p_{r(\omega)})
= \phi\Big(\prod_{\omega_i = 1} p_{r(e_i)} \prod_{\omega_j = 0} (1 - p_{r(e_j)})\Big)
= \prod_{\omega_i = 1} Q_{e_i} \prod_{\omega_j = 0} (1 - Q_{e_j})
= Q_{\omega}. \]
The following proposition shows that the sets $r'(\omega)$ of the preceding section and the projections $Q'_\omega$ discussed in this section satisfy a similar relationship (also compare Lemma~\ref{lem:r(e_n) and X_n} with Lemma~\ref{lem:Q_e=}, and Remark~\ref{rem:r'=r'} with Lemma~\ref{lem:Q'}).
\begin{proposition}\label{prop:r'(o)} For $\omega \in \bigsqcup_{n=1}^\infty (\{0,1\}^n \setminus \{0^n\})$, the set $r'(\omega)$ is in $\mathcal{G}^0$, and we have $\phi(p_{r'(\omega)}) = Q'_{\omega}$. \end{proposition} \begin{proof} The set $r'(\omega)$ belongs to $\mathcal{G}^0$ by the definitions of $r'(\omega)$ and the algebra $\mathcal{G}^0$. That $\phi(p_{r'(\omega)}) = Q'_{\omega}$ follows from the definition of $r'(\omega)$ and Lemma~\ref{lem:Q_o}. \end{proof}
We next determine the image of the injection $\phi$ of Proposition~\ref{prop:phi}.
\begin{lemma}\label{lem:UU^*inImage} For all $x\in E^0$, we have $U_xU_x^*\in\phi(C^*(\mathcal{G}))$. \end{lemma} \begin{proof} For $x = v\in G^0$, we have $U_vU_v^*=P_v\in\phi(C^*(\mathcal{G}))$. For $x = \omega\in \Delta$, we have \[ U_\omega U_\omega^* =\prod_{\omega_i=1}Q_{e_i}\prod_{\omega_j=0}(1-Q_{e_j})
-\sum_{\substack{v\in r(\omega)\\ |\sigma(v)|<|\omega|}}P_v \in \phi(C^*(\mathcal{G})) \] by Lemma \ref{lem:Q_o}. \end{proof}
\begin{lemma}\label{lem:tg} Let $\alpha \in E^*$, and suppose $s_E(\alpha)\in W_0\sqcup\Gamma_0$. Let \[ \alpha= g_0 \cdot \edge(n_1,x_1)\cdot g_1\cdot \edge(n_2,x_2)\cdot g_{2}\cdots \edge(n_k,x_k)\cdot g_{k} \] be the unique expression for $\alpha$ such that each $e_{n_i}\in\mathcal{G}^1$, $x_i\in \Xset{n_i}$, and $g_i\in F^*$ as in Lemma~\ref{lem:unique expression}. Then \[ t_\alpha=S_{e_{n_1}}S_{e_{n_2}}\cdots S_{e_{n_k}}U_{r_E(\alpha)}. \] \end{lemma} \begin{proof} The proof proceeds by induction on $k$. When $k=0$, the path $\alpha=g_0$ belongs to $F^*$ with $s_E(\alpha)\in W_0\sqcup\Gamma_0$. By Definition~\ref{dfn:f-paths}, we have $\alpha=f_{r_E(\alpha)}$. Hence $t_\alpha=U_{r_E(\alpha)}$.
Suppose as an inductive hypothesis that the result holds for $k-1$, and fix \[ \alpha=g_0\cdot \edge(n_1,x_1)\cdot g_1\cdot \edge(n_2,x_2)\cdot g_{2}\cdots \edge(n_k,x_k)\cdot g_{k}\in E^*. \] Let $\alpha'=g_0\cdot \edge(n_1,x_1)\cdot g_1\cdot \edge(n_2,x_2)\cdot g_{2}\cdots \edge(n_{k-1},x_{k-1})\cdot g_{k-1}$. Then $\alpha=\alpha'\cdot\edge(n_{k},x_{k})\cdot g_{k}$. By the inductive hypothesis, \[ t_\alpha=t_{\alpha'}t_{\edge(n_{k},x_{k})}t_{g_{k}} =S_{e_{n_1}}S_{e_{n_2}}\cdots S_{e_{n_{k-1}}}U_{r_E(\alpha')} t_{\edge(n_{k},x_{k})}t_{g_{k}}. \] The path $f_{x_{k}}g_{k}$ satisfies $s_E(f_{x_{k}}g_{k})=s_E(f_{x_{k}})\in W_0\sqcup\Gamma_0$ and $r_E(f_{x_{k}}g_{k})=r_E(g_{k})=r_E(\alpha)$. By Definition~\ref{dfn:f-paths} $f_{r_E(\alpha)}=f_{x_{k}}g_{k}$. Hence $U_{r_E(\alpha)}=U_{x_{k}}t_{g_{k}}$, and Lemma~\ref{lem:uxux*uyuy*} implies \[ S_{e_{n_{k}}}U_{r_E(\alpha)} =\bigg(U_{s(e_{n_{k}})}\sum_{x\in \Xset{n_{k}}}t_{\edge(n_{k},x)}U_{x}^*\bigg) \big(U_{x_{k}}t_{g_{k}}\big) =U_{s(e_{n_{k}})}t_{\edge(n_{k},x_{k})}t_{g_{k}}. \] Since $r_E(\alpha')=s_E(\edge(n_{k},x_{k}))=s(e_{n_{k}})$, \[ t_\alpha =S_{e_{n_1}}S_{e_{n_2}}\cdots S_{e_{n_{k-1}}}S_{e_{n_{k}}}U_{r_E(\alpha)}. \qedhere \] \end{proof}
The sum $\sum_{x\in W_0\sqcup\Gamma_0}q_x$ converges strictly to a projection $Q \in \mathcal{M}(C^*(E))$ such that \[ Q t_\alpha t^*_\beta = \begin{cases} t_\alpha t^*_\beta &\text{ if $s_E(\alpha) \in W_0\sqcup\Gamma_0$}\\ 0 &\text{ otherwise} \end{cases} \] (see \cite[Lemma~2.10]{Raeburn2005} or \cite[Lemma~2.1.13]{Tomforde2006} for details). We then have \begin{align*} QC^*(E)Q=\cspa\big\{t_{\alpha}t_{\alpha'}^*\in C^*(E): \alpha,\alpha'\in E^* & \text{ with $r_E(\alpha)=r_E(\alpha')$} \\ & \text{ and $s_E(\alpha),s_E(\alpha')\in W_0\sqcup\Gamma_0$}\big\}. \end{align*}
\begin{proposition}\label{prop:Image=QC^*(E)Q} We have $\phi(C^*(\mathcal{G})) = QC^*(E)Q$. \end{proposition} \begin{proof} Let $x \in E^0$. Since $s_E(f_x) \in W_0\sqcup\Gamma_0$, we have $U_x = Q U_x$. This and the definitions of $\{P_v : v\in G^0\}$ and $\{S_{e_n} : e_n\in\mathcal{G}^1\}$ imply that each $P_v$ and each $S_{e_n}$ is in $QC^*(E)Q$. Hence $\phi(C^*(\mathcal{G})) \subset QC^*(E)Q$. Now let $\alpha,\alpha'\in E^*$ be such that $r_E(\alpha)=r_E(\alpha')=x\in E^0$ and $s_E(\alpha),s_E(\alpha')\in W_0\sqcup\Gamma_0$. By Lemma \ref{lem:tg}, \[ t_{\alpha}t_{\alpha'}^*=S_{e_{n_1}}S_{e_{n_2}}\cdots S_{e_{n_k}}U_{x}U_{x}^* S_{e_{m_l}}^*\cdots S_{e_{m_2}}^*S_{e_{m_1}}^* \] for some $e_{n_i},e_{m_j} \in \mathcal{G}^1$. Lemma~\ref{lem:UU^*inImage} therefore implies that $t_{\alpha}t_{\alpha'}^*\in \phi(C^*(\mathcal{G}))$. Hence $QC^*(E)Q \subset \phi(C^*(\mathcal{G}))$. \end{proof}
\begin{lemma}\label{lem:fullproj} The projection $Q$ is full. \end{lemma} \begin{proof} For $x \in E^0$, we have $U_xU_x^* \in QC^*(E)Q$. Hence $q_x=U_x^*U_x$ is in the ideal generated by $QC^*(E)Q$. Since the ideal generated by $\{q_x:x\in E^0\}$ is $C^*(E)$, the ideal generated by $QC^*(E)Q$ is also $C^*(E)$. \end{proof}
\begin{theorem}\label{thm:fullcorner} The homomorphism $\phi$ of Proposition~\ref{prop:phi} is an isomorphism from $C^*(\mathcal{G})$ to the full corner $QC^*(E)Q$. Consequently $C^*(\mathcal{G})$ and $C^*(E)$ are Morita equivalent. \end{theorem} \begin{proof} This follows from Proposition \ref{prop:phi}, Proposition \ref{prop:Image=QC^*(E)Q}, and Lemma \ref{lem:fullproj}. \end{proof}
\begin{theorem} The three classes of graph algebras, of Exel-Laca algebras, and of ultragraph algebras coincide up to Morita equivalence. \end{theorem} \begin{proof} By \cite[Theorem~4.5 and Remark~4.6]{Tom}, every Exel-Laca algebra is isomorphic to an ultragraph algebra. Moreover, by \cite[Theorem~4.5 and Proposition~6.6]{Tom}, every ultragraph algebra is isomorphic to a full corner of an Exel-Laca algebra.
Proposition~3.1 of \cite{Tom} implies that every graph $C^*$-algebra is isomorphic to an ultragraph algebra. Finally, Theorem~\ref{thm:fullcorner} implies that every ultragraph algebra is Morita equivalent to a graph algebra. \end{proof}
\begin{remark} \label{rmk:E-L-full-corn} Note that Theorem~\ref{thm:fullcorner} also shows how to realize an Exel-Laca algebra as a full corner of a graph algebra. If $A$ is a countably indexed $\{0,1\}$-matrix with no zero rows, let $\mathcal{G}_A$ be the ultragraph of \cite[Definition~2.5]{Tom}, which has $A$ as its edge matrix. It follows from \cite[Theorem~4.5]{Tom} that the Exel-Laca algebra $\mathcal{O}_A$ is isomorphic to $C^*(\mathcal{G}_A)$. If we let $E$ be a graph constructed from $\mathcal{G}_A$ as in Section~\ref{graph-sec}, then $\mathcal{O}_A$ is isomorphic to a full corner of $C^*(E)$. It is noteworthy that it seems difficult to construct the graph $E$ directly from the infinite matrix $A$ without at least implicit reference to the ultragraph $\mathcal{G}_A$. \end{remark}
\begin{remark}\label{rmk:ultragraph=graph} With the notation as above, it is straightforward to see that the following conditions are equivalent: \begin{enumerate} \rom \item The homomorphism $\phi$ of Proposition~\ref{prop:phi}
is surjective. \item The projection $Q$ is the unit of $\mathcal{M}(C^*(E))$. \item $W_+\sqcup \Gamma_+ = \emptyset$. \item $\Delta=\emptyset$.
\item For all $e\in \mathcal{G}^1$, $|r(e)|<\infty$. \end{enumerate} In this case, the graph $E=(E^0,E^1,r_E,s_E)$ is obtained as $E^0=G^0$, $E^1=\{(e,x) : e\in\mathcal{G}^1, x\in r(e)\}$, $s_E(e,x)=s(e)$ and $r_E(e,x)=x$. In other words, the graph $E$ is obtained from the ultragraph $\mathcal{G}$ by changing each ultraedge $e\in \mathcal{G}^1$ to a set of (ordinary) edges $\{(e,x) : x\in r(e)\}$. \end{remark}
As a consequence of Theorem~\ref{thm:fullcorner}, we obtain the following characterization of real rank zero for ultragraph algebras.
\begin{proposition} Let $\mathcal{G}$ be an ultragraph. Then $C^*(\mathcal{G})$ has real rank zero if and only if $\mathcal{G}$ satisfies Condition~(K). \end{proposition} \begin{proof} Let $E$ be a graph constructed from $\mathcal{G}$ as in Section~\ref{graph-sec}. By Theorem~\ref{thm:fullcorner}, $C^*(E)$ is Morita equivalent to $C^*(\mathcal{G})$. Hence \cite[Theorem~3.8]{BroPed} implies that $C^*(\mathcal{G})$ has real rank zero if and only if $C^*(E)$ has real rank zero. By \cite[Theorem~3.5]{Jeong}, $C^*(E)$ has real rank zero if and only if $E$ satisfies Condition~(K). By Proposition~\ref{prop:Cond(K)}, $E$ satisfies Condition~(K) if and only if $\mathcal{G}$ satisfies Condition~(K). \end{proof}
\section{Gauge-invariant ideals} \label{ideal-sec}
We continue in this section with a fixed ultragraph $\mathcal{G}$, and let $E$ be a graph constructed from $\mathcal{G}$ as in Section~\ref{graph-sec}. We let $Q \in \mathcal{M}(C^*(E))$ and $\phi : C^*(\mathcal{G}) \to Q C^*(E) Q$ be as in Theorem~\ref{thm:fullcorner}.
By Theorem~\ref{thm:fullcorner}, the homomorphism $\phi$ induces a bijection from the set of ideals of $C^*(\mathcal{G})$ to the set of ideals of $C^*(E)$. We will show in Proposition~\ref{prop:action} that this bijection restricts to a bijection between gauge-invariant ideals of $C^*(\mathcal{G})$ and gauge-invariant ideals of $C^*(E)$.
Let $\beta$ be the action of $\mathbb{T}$ on $C^*(E)$ constructed in the proof of Proposition \ref{prop:phi}. Specifically, $\beta_z(q_x)=q_x$ for $x\in E^0$, $\beta_z(t_{\overline{x}})=t_{\overline{x}}$ for $x\in W_+\sqcup\Gamma_+$, and $\beta_z(t_{\edge(n,x)})=zt_{\edge(n,x)}$ for $e_n\in\mathcal{G}^1, x\in \Xset{n}$. Let $\alpha\in E^*$ and let \[ \alpha=g_0\cdot (n_1,x_1)\cdot g_1\cdot (n_2,x_2)\cdot g_{2}\cdots (n_k,x_k)\cdot g_{k} \] be the unique expression for $\alpha$ where each $n_i\in\mathbb{N}$, $x_i\in \Xset{n_i}$ and $g_i\in F^*$ as in Lemma~\ref{lem:unique expression}. We define $m(\alpha)=\max\{n_1,\ldots,n_k\}$ and $l(\alpha)=k$. Then one can verify that $\beta_z(t_{\alpha})=z^{l(\alpha)}t_{\alpha}$. It follows (see, for example, the argument of \cite[Corollary~3.3]{Raeburn2005}) that the fixed point algebra $C^*(E)^{\beta}$ of the action $\beta$ satisfies \[ C^*(E)^\beta = \cspa\{t_\alpha t_{\alpha'}^* : \text{$\alpha,\alpha'\in E^*$ with $l(\alpha)=l(\alpha')$}\}. \] We define \[ C^*(E)^{\circ}:=\cspa\{t_\alpha t_\alpha^* \in C^*(E) : \alpha \in E^*\} \subset C^*(E)^\beta. \] Then $C^*(E)^\circ$ is an abelian $C^*$-sub\-al\-ge\-bra of $C^*(E)$.
\begin{lemma}\label{lem:beta-inv1} For $k,n\in\mathbb{N}$ and $x\in E^0$, define \[ E^*_{k,n,x}:=\{\alpha\in E^* : l(\alpha)=k,\ m(\alpha)\leq n,\ r_E(\alpha)=x\}. \] Then $E^*_{k,n,x}$ is finite and $t_\alpha^*t_{\alpha'}=0$ for $\alpha,\alpha'\in E^*_{k,n,x}$ with $\alpha\neq \alpha'$. Hence \[ \mathfrak{A}_{k,n,x}:= \spa\big\{t_{\alpha} t_{\alpha'}^* : \alpha,\alpha'\in E^*_{k,n,x}\big\} \]
is a $C^*$-al\-ge\-bra isomorphic to $M_{|E^*_{k,n,x}|}(\mathbb{C})$. \end{lemma} \begin{proof} For each $x\in E^0$, only finitely many paths $g$ in $F^*$ satisfy $r_E(g)=x$. Hence the set $E^*_{k,n,x}$ is finite. It is easy to see that the $t_{\alpha} t_{\alpha'}^*$ are matrix units. \end{proof}
\begin{lemma}\label{lem:beta-inv2} For $k,n\in\mathbb{N}$ and $x,y\in E^0$ with $x \neq y$, we have $\mathfrak{A}_{k,n,x}\mathfrak{A}_{k,n,y}\subset \mathfrak{A}_{k,n,y}$ if there exists a path in $F^*$ from $x$ to $y$, $\mathfrak{A}_{k,n,x}\mathfrak{A}_{k,n,y}\subset \mathfrak{A}_{k,n,x}$ if there exists a path in $F^*$ from $y$ to $x$, and $\mathfrak{A}_{k,n,x}\mathfrak{A}_{k,n,y}=0$ otherwise. \end{lemma} \begin{proof} We begin by recalling that for any $\alpha$, $\alpha'$, $\beta$, and $\beta'$ in $E^*$, the Cuntz-Krieger relations imply that \begin{equation}\label{eq:monomial product} t_\alpha t^*_{\alpha'} t_\beta t^*_{\beta'} = \begin{cases} t_{\alpha\mu} t^*_{\beta'} &\text{ if $\beta = \alpha'\mu$ for some $\mu \in E^*$}\\ t_{\alpha} t^*_{\beta'\nu} &\text{ if $\alpha' = \beta\nu$ for some $\nu \in E^*$}\\ 0 &\text{ otherwise}. \end{cases} \end{equation} Suppose that $\alpha' \in E^*_{k,n,x}$ and $\beta \in E^*_{k,n,y}$ satisfy $\beta = \alpha'\mu$ for some $\mu \in E^*$. We claim that $\mu$ is a path in $F^*$ from $x$ to $y$, and that $\alpha\mu \in E^*_{k,n,y}$. Since $r(\alpha') = x$ and $r(\beta) = y$, $\mu$ is a path from $x$ to $y$. Since $l(\alpha') = l(\beta)$, Lemma~\ref{lem:unique expression} and the definition of $l$ imply that $\mu \in F^*$. Hence $l(\alpha\mu) = l(\alpha) = k$. Since every edge in $\alpha\mu$ is an edge in $\alpha$ or an edge in $\beta$, we also have $m(\alpha\mu) \le \max\{m(\alpha), m(\beta)\} \le n$. Since $r(\alpha\mu) = r(\mu) = r(\beta) = y$, it follows that $\alpha\mu \in E^*_{k,n,y}$ as claimed.
A symmetric argument now shows that if $\alpha' \in E^*_{k,n,x}$ and $\beta \in E^*_{k,n,y}$ satisfy $\alpha' = \beta\nu$ for some $\nu \in E^*$, then $\nu$ is a path in $F^*$ from $y$ to $x$ and $\beta'\nu \in E^*_{k,n,x}$.
By Lemma~\ref{lem:paths in F}, $F^*$ contains no return paths. Since $x \not= y$, it follows that there cannot exist paths $\mu,\nu \in F^*$ such that $\mu$ is a path from $x$ to $y$ and $\nu$ is a path from $y$ to $x$. Combining this with the preceding paragraphs and with~\eqref{eq:monomial product} proves the result. \end{proof}
\begin{lemma}\label{lem:center} Let $\mathfrak{A}_0$ and $\mathfrak{A}'$ be finite-dimensional $C^*$-sub\-al\-ge\-bras of a $C^*$-algebra such that $\mathfrak{A}_0 \mathfrak{A}' \subset \mathfrak{A}_0$. Then $\mathfrak{A}:=\mathfrak{A}_0+\mathfrak{A}'$ is a finite-dimensional $C^*$-al\-ge\-bra whose center is contained in the $C^*$-al\-ge\-bra generated by the center of $\mathfrak{A}_0$ and the center of $\mathfrak{A}'$. \end{lemma} \begin{proof} It is clear that $\mathfrak{A}:=\mathfrak{A}_0+\mathfrak{A}'$ is finite dimensional. It is easy to check that $\mathfrak{A}$ is a $C^*$-al\-ge\-bra and $\mathfrak{A}_0 \subset \mathfrak{A}$ is an ideal. Since $\mathfrak{A}_0$ has a unit $p_0$, $\mathfrak{A}$ is the direct sum of $\mathfrak{A}_0$ and the $C^*$-sub\-al\-ge\-bra $(1-p_0)\mathfrak{A} \subset \mathfrak{A}$ where $1$ is the unit of $\mathfrak{A}$. Since $\mathfrak{A}=\mathfrak{A}_0+\mathfrak{A}'$, the $*$-ho\-mo\-mor\-phism $\mathfrak{A}' \ni x \mapsto (1-p_0)x \in (1-p_0)\mathfrak{A}$ is a surjection between finite-dimensional $C^*$-al\-ge\-bra s. Thus its restriction to the center of $\mathfrak{A}'$ is a surjection onto the center of $(1-p_0)\mathfrak{A}$. This implies that the center of $(1-p_0)\mathfrak{A}$ is contained in the $C^*$-al\-ge\-bra generated by the center of $\mathfrak{A}_0$ and the center of $\mathfrak{A}'$ because $p_0$ is in the center of $\mathfrak{A}_0$. Since the center of $\mathfrak{A}$ is the direct sum of the center of $\mathfrak{A}_0$ and the center of $(1-p_0)\mathfrak{A}$, it is contained in the $C^*$-al\-ge\-bra generated by the center of $\mathfrak{A}_0$ and the center of $\mathfrak{A}'$. \end{proof}
\begin{lemma}\label{lem:beta-inv3} Let $k,n\in\mathbb{N}$ and let $\lambda$ be a finite subset of $E^0$. Then $\mathfrak{A}_{k,n,\lambda}:=\sum_{x\in \lambda}\mathfrak{A}_{k,n,x}$ is a finite-dimensional $C^*$-al\-ge\-bra whose center is contained in $C^*(E)^{\circ}$. \end{lemma} \begin{proof}
The proof proceeds by induction on $|\lambda|$. When
$|\lambda|=1$, this follows from Lemma \ref{lem:beta-inv1}. Suppose the statement holds whenever $|\lambda|=m$. Fix a finite subset $\lambda \subset E^0$ with $|\lambda|=m+1$. By Lemma~\ref{lem:paths in F}, $F^*$ contains no return paths, so there exists $x_0 \in \lambda$ such that there is no path in $F^*$ from $x_0$ to any other vertex in $\lambda$. Let $\lambda' = \lambda \setminus \{x_0\}$. Then Lemma~\ref{lem:beta-inv2} implies that $\mathfrak{A}_{k,n,x_0}\mathfrak{A}_{k,n,\lambda\setminus\{x_0\}}\subset \mathfrak{A}_{k,n,x_0}$. Hence $\mathfrak{A}_{k,n,\lambda}$ is a finite-dimensional $C^*$-al\-ge\-bra whose center is contained in $C^*(E)^{\circ}$ by the inductive hypothesis applied to $\lambda'$, and Lemma~\ref{lem:center}. \end{proof}
\begin{lemma}\label{lem:beta-inv4} Let $\lambda_1, \lambda_2, \dots$ be an increasing sequence of finite subsets of $E^0$ such that $\bigcup_{n=1}^\infty \lambda_n=E^0$. For $n\in\mathbb{N}$ let $\mathfrak{A}_n:=\sum_{k=1}^n \mathfrak{A}_{k,n,\lambda_n}$. Then $\mathfrak{A}_1, \mathfrak{A}_2, \dots$ is an increasing sequence of finite-dimensional $C^*$-al\-ge\-bra s whose centers are contained in $C^*(E)^{\circ}$, and the union $\bigcup_{n=1}^\infty \mathfrak{A}_n$ is dense in $C^*(E)^{\beta}$. \end{lemma} \begin{proof} Equation~\ref{eq:monomial product} implies that $\mathfrak{A}_{k',n,\lambda_n}\mathfrak{A}_{k,n,\lambda_n}\subset \mathfrak{A}_{k,n,\lambda_n}$ for $k'\leq k$. An argument similar to the proof of Lemma~\ref{lem:beta-inv3} therefore shows that $\mathfrak{A}_n$ is a finite-dimensional $C^*$-al\-ge\-bra whose center is contained in $C^*(E)^{\circ}$. By definition, $\{\mathfrak{A}_n : n \in \mathbb{N}\}$ is increasing. The union $\bigcup^\infty_{n=1} \mathfrak{A}_n$ is dense in $C^*(E)^\beta$ because it contains all the spanning elements. \end{proof}
\begin{lemma}\label{lem:ideal of C*(E)^b} Every ideal $I$ of $C^*(E)^{\beta}$ is generated as an ideal by $I\cap C^*(E)^{\circ}$. \end{lemma} \begin{proof} Let $\lambda_1, \lambda_2, \dots$ and $\mathfrak{A}_1, \mathfrak{A}_2, \dots$ be as in Lemma~\ref{lem:beta-inv4}. Then $I$ is generated as an ideal by $\bigcup_{n=1}^\infty I\cap \mathfrak{A}_n$. For each $n$, the algebra $C^*(E)^{\circ}$ contains the center of the finite-dimensional $C^*$-al\-ge\-bra $\mathfrak{A}_n$, so $I\cap \mathfrak{A}_n$ is generated as an ideal by $I\cap \mathfrak{A}_n\cap C^*(E)^{\circ}$. Hence $I$ is generated as an ideal by $I\cap C^*(E)^{\circ}$. \end{proof}
\begin{proposition}\label{prop:beta-inv} Let $I$ be an ideal of $C^*(E)$. Then $I$ is $\beta$-invariant if and only if $I$ is generated as an ideal by $I\cap C^*(E)^{\circ}$. \end{proposition}
To prove the proposition, we first present a well-known technical lemma, an exact statement of which we have found difficult to locate in the literature.
\begin{lemma}\label{lem:inv ideal generators} Let $A$ be a $C^*$-algebra and let $\beta$ be a strongly continuous action of $\mathbb{T}$ by automorphisms of $A$. An ideal $I$ of $A$ is $\beta$-invariant if and only if it is generated as an ideal by $I \cap A^\beta$. \end{lemma} \begin{proof} If $I$ is generated as an ideal by $I \cap A^\beta$, then it is clearly $\beta$-invariant.
Now suppose that $I$ is $\beta$-invariant. Then $I \cap A^\beta = I^\beta$. Let $J \subset I$ be the ideal of $A$ generated by $I^\beta$; we must show that $J = I$. Since $J$ is generated by $\beta$-fixed elements, $J$ is $\beta$-invariant, so $\beta$ descends to an action $\widetilde{\beta}$ of $\mathbb{T}$ on $A/J$, and averaging over $\beta$ and $\widetilde{\beta}$ gives faithful conditional expectations $\Phi : A \to A^\beta$ and $\widetilde{\Phi} : A/J \to (A/J)^{\widetilde{\beta}}$ such that $\widetilde{\Phi}(a + J) = \Phi(a) + J$.
Fix $a \in I$. Then $a^*a \in I$, and since $I$ is $\beta$-invariant, $\Phi(a^*a) \in I \cap A^\beta = I^\beta \subset J$. Thus $\widetilde{\Phi}(a^*a + J) = \Phi(a^*a) + J = 0_{A/J}$. Since $\widetilde{\Phi}$ is faithful, $a^*a + J = 0_{A/J}$, so the $C^*$-identity implies $a + J = 0_{A/J}$, and $a \in J$. Hence $I \subset J$, and so $J = I$. \end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:beta-inv}] Since elements in $C^*(E)^{\circ}$ are fixed by $\beta$, if $I$ is generated by $I \cap C^*(E)^{\circ}$, then $I$ is $\beta$-invariant. Conversely suppose that $I$ is $\beta$-invariant. Then Lemma~\ref{lem:inv ideal generators} shows that $I$ is generated as an ideal of $C^*(E)$ by $I\cap C^*(E)^{\beta}$, and Lemma~\ref{lem:ideal of C*(E)^b} implies that $I\cap C^*(E)^{\beta}$ is generated as an ideal of $C^*(E)^\beta$ by $I\cap C^*(E)^{\circ}$. \end{proof}
The following proposition holds for a general graph $E$.
\begin{proposition}\label{prop:gauge-inv} An ideal $I$ of $C^*(E)$ is gauge invariant if and only if $I$ is generated by $I\cap C^*(E)^{\circ}$. \end{proposition} \begin{proof} Since $C^*(E)^{\circ}$ is in the fixed point algebra of the gauge action, the ideal generated by a $C^*$-sub\-al\-ge\-bra of $C^*(E)^{\circ}$ is gauge invariant. Conversely, let $I$ be a gauge-invariant ideal of $C^*(E)$ and $J$ be the ideal generated by $I\cap C^*(E)^{\circ}$. Since $I\cap C^*(E)^{\circ}\subset J\subset I$, we have $J\cap C^*(E)^{\circ}=I\cap C^*(E)^{\circ}$. Theorem~3.6 of \cite{BHRS} implies that each gauge-invariant ideal of $C^*(E)$ is uniquely determined by its intersection with $C^*(E)^\circ$. Since both $I$ and $J$ are gauge invariant, it follows that $I=J$. \end{proof}
\begin{proposition}\label{prop:action} Let $I$ be an ideal of $C^*(\mathcal{G})$. Then $I$ is invariant under the gauge action on $C^*(\mathcal{G})$ if and only if the ideal generated by $\phi(I)$ is invariant under the gauge action on $C^*(E)$. \end{proposition} \begin{proof} Let $J$ be the ideal generated by $\phi(I)$ in $C^*(E)$. By Proposition~\ref{prop:phi}, $I$ is invariant under $\gamma$ if and only if $J$ is invariant under $\beta$. The latter condition is equivalent to the gauge invariance of $J$ by Proposition \ref{prop:beta-inv} and Proposition \ref{prop:gauge-inv}. \end{proof}
\section{Quotients by gauge-invariant ideals} \label{quotient-sec}
In this section, we give a more explicit description of the bijection between gauge-invariant ideals of $C^*(\mathcal{G})$ and gauge-invariant ideals of $C^*(E)$ stated in Proposition~\ref{prop:action}. To do this we use the classifications of gauge-invariant ideals in graph algebras \cite[Theorem~3.6]{BHRS} and in ultragraph algebras \cite[Theorem~6.12]{KMST}. We also describe quotients of ultragraph algebras by gauge-invariant ideals as full corners in graph algebras.
First recall from \cite[Section~6]{KMST} that an admissible pair for $\mathcal{G}$ consists of a subset $\mathcal{H}$ of $\mathcal{G}^0$ and a subset $V$ of $G^0$ such that: \begin{itemize} \item $\mathcal{H}$ is an ideal: if $U_1, U_2 \in \mathcal{H}$ then $U_1
\cup U_2 \in \mathcal{H}$, and if $U_1 \in \mathcal{G}^0$, $U_2 \in \mathcal{H}$
and $U_1 \subset U_2$, then $U_1 \in \mathcal{H}$; \item $\mathcal{H}$ is hereditary: if $e \in \mathcal{G}^1$ and $\{s(e)\}
\in \mathcal{H}$, then $r(e) \in \mathcal{H}$; \item $\mathcal{H}$ is saturated: if $v \in G^0_{\textnormal{rg}}$ and $r(e)
\in \mathcal{H}$ for all $e \in s^{-1}(v)$, then $\{v\} \in
\mathcal{H}$; and \item $V \subset \mathcal{H}^\textnormal{fin}_\infty$, where \[\textstyle \mathcal{H}^\textnormal{fin}_\infty := \{v \in G^0 :
|s^{-1}(v)| = \infty \text{ and }
0 < |s^{-1}(v) \cap \{e \in \mathcal{G}^1 : r(e) \notin \mathcal{H}\}| < \infty\}. \] \end{itemize}
Theorem~6.12 of \cite{KMST} shows that there is a bijection $I \mapsto (\mathcal{H}_I, V_I)$ between gauge-invariant ideals of $C^*(\mathcal{G})$ and admissible pairs for $\mathcal{G}$. Specifically, $\mathcal{H}_I = \{U \in \mathcal{G}^0 : p_U \in I\}$ and $V_I = \{v \in (\mathcal{H}_I)^\textnormal{fin}_\infty : p_v - \sum_{e \in s^{-1}(v), r(e) \notin \mathcal{H}_I} s_e s^*_e \in I\}$.
We must also recall from \cite{BPRS2000} some terminology for a directed graph $E=(E^0,E^1,r_E,s_E)$. A subset $H$ of $E^0$ is said to be \emph{hereditary} if $r_E(\alpha) \in H$ whenever $\alpha \in E^1$ and $s_E(\alpha) \in H$. A hereditary subset $H$ is said to be \emph{saturated} if $x \in H$ whenever $x \in E^0_{\textnormal{rg}}$ and $r_E(\alpha) \in H$ for all $\alpha \in s_E^{-1}(x)$. If $H \subset E^0$ is saturated hereditary, then we define \[
H^\textnormal{fin}_\infty := \{v \in E^0 : |s_E^{-1}(v)| = \infty \text{ and }
0 < |s_E^{-1}(v) \cap r_E^{-1}(E^0 \setminus H)| < \infty\}. \] Theorem~3.6 of \cite{BHRS} shows that there is a bijection $J \mapsto (H_J, B_J)$ between gauge-invariant ideals of $C^*(E)$ and pairs $(H,B)$ such that $H \subset E^0$ is saturated hereditary, and $B \subset H^\textnormal{fin}_\infty$. Specifically, \[\textstyle H_J = \{x \in E^0 : q_x \in J\} \quad\text{and}\quad B_J = \{v \in (H_J)^\textnormal{fin}_\infty : q_v - \sum_{\alpha \in s_E^{-1}(v), r_E(\alpha) \notin H_J} t_\alpha t_\alpha^* \in J\}. \]
\begin{definition} For a saturated hereditary ideal $\mathcal{H} \subset \mathcal{G}^0$, we define $\theta(\mathcal{H}) \subset E^0$ by \[ \theta(\mathcal{H}) := \{v \in G^0 : \{v\} \in \mathcal{H} \} \cup \{\omega \in \Delta : r'(\omega) \in \mathcal{H}\}. \] \end{definition}
\begin{proposition}\label{prop:ideals Me} If $I$ is a gauge-invariant ideal of $C^*(\mathcal{G})$, and $J$ is the ideal of $C^*(E)$ generated by $\phi(I)$, then $H_J = \theta(\mathcal{H}_I)$, $(H_J)^\textnormal{fin}_\infty = (\mathcal{H}_I)^\textnormal{fin}_\infty$, and $B_J = V_I$. \end{proposition} \begin{proof} We use the notation established in Section~\ref{graph-sec} and Section~\ref{corner-sec}. Let $x \in E^0$. Since $q_x = U_x^*U_x$, we have \[ x \in H_J \iff q_x \in J \iff U_x \in J
\iff U_xU_x^* \in J. \] For $v \in G^0$, we have $U_vU_v^* = \phi(p_v)$. Hence \[ U_vU_v^* \in J \iff p_v \in I \iff \{v\} \in \mathcal{H}_I . \] Thus $v \in H_J$ if and only if $\{v\} \in \mathcal{H}_I$. Similarly, for $\omega \in \Delta$, we have $Q'_\omega = U_\omega U^*_\omega$ by Definition~\ref{dfn:Q'}, and Proposition~\ref{prop:r'(o)} implies that $\phi(p_{r'(\omega)}) = Q'_\omega$, so \[ U_\omega U_\omega^* \in J \iff p_{r'(\omega)} \in I \iff r'(\omega) \in \mathcal{H}_I . \] Thus $\omega \in H_J$ if and only if $r'(\omega) \in \mathcal{H}_I$. This shows that $H_J = \theta(\mathcal{H}_I)$.
Next, we show $(H_J)^\textnormal{fin}_\infty = (\mathcal{H}_I)^\textnormal{fin}_\infty$. Since each $\omega \in \Delta$ satisfies $|s_E^{-1}(\omega)| < \infty$, we have $(H_J)^\textnormal{fin}_\infty \subset G^0$. Fix $v \in G^0$. We have $s_E^{-1}(v) = \bigsqcup_{s(e_n) = v}
\{\edge(n,x): x\in \Xset{n}\}$. Since each $\Xset{n}$ is finite, $|s_E^{-1}(v)| = \infty$ if and only if $|s^{-1}(v)| = \infty$. Lemma~\ref{lem:r(e_n) and X_n} and the conclusion of the preceding paragraph imply that $r(e_n) \in \mathcal{H}_I$ if and only if $\Xset{n} \subset H_J$. Hence \begin{equation}\label{eq:HJ<->HI}
0 < |s_E^{-1}(v) \cap r_E^{-1}(E^0 \setminus H_J)| < \infty \ \Longleftrightarrow\
0 < |s^{-1}(v) \cap \{e \in \mathcal{G}^1 : r(e) \notin \mathcal{H}_I\}| < \infty \end{equation} Thus $(H_J)^\textnormal{fin}_\infty = (\mathcal{H}_I)^\textnormal{fin}_\infty$.
Finally we show $B_J = V_I$. Fix $v \in (H_J)^\textnormal{fin}_\infty = (\mathcal{H}_I)^\textnormal{fin}_\infty$. Let $L := \{n : s(e_n) = v, r(e_n) \notin \mathcal{H}_I\}$. By~\eqref{eq:HJ<->HI}, we have \[ \{\alpha \in s_E^{-1}(v) : r_E(\alpha) \notin H_J\} = \{\edge(n,x) : n \in L, x \in \Xset{n} \setminus H_J\}. \] For $n \in L$ and $x \in \Xset{n} \cap H_J$, we have $t_{\edge(n,x)}^*t_{\edge(n,x)}= q_x \in J$, and hence $t_{\edge(n,x)} t_{\edge(n,x)}^* \in J$. Thus \begin{align*} q_v - \sum_{\alpha \in s_E^{-1}(v), r_E(\alpha) \notin H_J} t_\alpha t^*_\alpha
&= q_v - \sum_{\substack{n \in L\\ x \in \Xset{n} \setminus H_J}} t_{\edge(n,x)} t_{\edge(n,x)}^* \\
&= q_v - \sum_{\substack{n \in L\\ x \in \Xset{n}}} t_{\edge(n,x)} t_{\edge(n,x)}^* + \sum_{\substack{n \in L\\ x \in \Xset{n} \cap H_J}} t_{\edge(n,x)} t_{\edge(n,x)}^* \end{align*} belongs to $J$ if and only if \begin{equation}\label{eq:gap in J} q_v - \sum_{n \in L, x \in \Xset{n}} t_{\edge(n,x)} t_{\edge(n,x)}^* \in J. \end{equation} Moreover, \eqref{eq:gap in J} holds if and only if $p_v - \sum_{n \in L} s_{e_n} s_{e_n}^* \in I$ because \[ \phi\Big(p_v - \sum_{n \in L} s_{e_n} s_{e_n}^*\Big) =P_v - \sum_{n \in L} S_{e_n} S_{e_n}^* =U_v\Big(q_v - \sum_{n \in L, x \in \Xset{n}} t_{\edge(n,x)} t_{\edge(n,x)}^*\Big)U_v^* \] by Lemma~\ref{lem:SeSe*}. Hence $B_J = V_I$. \end{proof}
\begin{corollary} Let $I$ be a gauge-invariant ideal of $C^*(\mathcal{G})$. Then the isomorphism $\phi : C^*(\mathcal{G}) \to Q C^*(E) Q$ restricts to an isomorphism of $I$ onto $Q J Q$, where $J$ is the unique gauge-invariant ideal of $C^*(E)$ such that $H_J = \theta(\mathcal{H}_I)$ and $B_J = V_I$. \end{corollary} \begin{proof} We have $\phi(I) = Q J Q$ where $J$ is the ideal of $C^*(E)$ generated by $\phi(I)$. By Proposition~\ref{prop:action} and Proposition~\ref{prop:ideals Me}, $J$ is the gauge-invariant ideal of $C^*(E)$ such that $H_J = \theta(\mathcal{H}_I)$ and $B_J = V_I$. \end{proof}
Using Proposition~\ref{prop:ideals Me} and the results of \cite{BHRS}, we may now describe quotients of ultragraph algebras by gauge-invariant ideals as full corners in graph algebras.
\begin{definition} Let $I$ be a gauge-invariant ideal of $C^*(\mathcal{G})$, and let $\mathcal{H}_I$, $V_I$, and $\theta(\mathcal{H}_I)$ be as above. We define a directed graph $E_I = (E_I^0,E_I^1,r_{E_I},s_{E_I})$ as follows. The vertex and edge sets are defined by \begin{align*} E_I^0 &:= (E^0 \setminus \theta(\mathcal{H}_I)) \sqcup \{\wt{x} : x \in (\mathcal{H}_I)^\textnormal{fin}_\infty\setminus V_I\},\text{ and} \\ E_I^1 &:= r_E^{-1}(E^0 \setminus \theta(\mathcal{H}_I)) \sqcup \{\wt{\alpha} : \alpha \in r_E^{-1}((\mathcal{H}_I)^\textnormal{fin}_\infty\setminus V_I)\subset E^1\}. \end{align*} The range and source of $e \in r_E^{-1}(E^0 \setminus \theta(\mathcal{H}_I))$ in $E_I$ are the same as those in $E$. For $\alpha \in r_E^{-1}((\mathcal{H}_I)^\textnormal{fin}_\infty\setminus V_I)$ we define $s_{E_I}(\wt{\alpha}) := s_E(\alpha)$, and $r_{E_I}(\wt{\alpha}) := \wt{x}$ where $x = r_E(\alpha) \in (\mathcal{H}_I)^\textnormal{fin}_\infty\setminus V_I$. \end{definition}
\begin{corollary}\label{cor:Quotients Me} With the notation above, $C^*(\mathcal{G})/I$ is isomorphic to a full corner of $C^*(E_I)$. \end{corollary} \begin{proof} Let $J$ be the ideal of $C^*(E)$ generated by $\phi(I)$. By Theorem~\ref{thm:fullcorner} the homomorphism $\phi$ induces an isomorphism $\phi_I \colon C^*(\mathcal{G})/I \to \overline{Q} (C^*(E)/J) \overline{Q}$ where $\overline{Q} \in \mathcal{M}(C^*(E)/J)$ is the image of $Q \in \mathcal{M} (C^*(E))$ under the extension of the quotient map to multiplier algebras (see \cite[Corollary~2.51]{TFB}). In particular, the projection $\overline{Q}$ is full. By Proposition~\ref{prop:ideals Me} we obtain $H_J = \theta(\mathcal{H}_I)$ and $(H_J)^\textnormal{fin}_\infty \setminus B_J =(\mathcal{H}_I)^\textnormal{fin}_\infty \setminus V_I$. By \cite[Corollary~3.5]{BHRS} there is an isomorphism $\psi\colon C^*(E)/J \to C^*(E_I)$. Let $Q_I \in \mathcal{M} (C^*(E_I))$ be the image of $\overline{Q}$ under $\psi$. Then $Q_I$ is full, and $\psi \circ \phi_I \colon C^*(\mathcal{G})/I \to Q_I C^*(E_I) Q_I$ is the desired isomorphism. \end{proof}
\end{document}
Möbius–Kantor polygon
In geometry, the Möbius–Kantor polygon is a regular complex polygon 3{3}3 in $\mathbb {C} ^{2}$. 3{3}3 has 8 vertices and 8 edges. It is self-dual. Every vertex is shared by 3 triangular edges.[1] Coxeter named it the Möbius–Kantor polygon for sharing the same configuration structure as the Möbius–Kantor configuration, (8₃).[2]
Orthographic projection, shown here with 4 red and 4 blue 3-edge triangles.

Shephard symbol: 3(24)3
Schläfli symbol: 3{3}3
Coxeter diagram: (omitted)
Edges: 8 3{}
Vertices: 8
Petrie polygon: Octagon
Shephard group: 3[3]3, order 24
Dual polyhedron: Self-dual
Properties: Regular
Discovered by G. C. Shephard in 1952, who represented it as 3(24)3. Coxeter denoted its symmetry group 3[3]3, which is isomorphic to the binary tetrahedral group, of order 24.
Coordinates
The 8 vertex coordinates of this polygon can be given in $\mathbb {C} ^{3}$, as:
(ω, −1, 0), (0, ω, −ω²), (ω², −1, 0), (−1, 0, 1),
(−ω, 0, 1), (0, ω², −ω), (−ω², 0, 1), (1, −1, 0)
where $\omega ={\tfrac {-1+i{\sqrt {3}}}{2}}$.
As a configuration
The configuration matrix for 3{3}3 is:[3] $\left[{\begin{smallmatrix}8&3\\3&8\end{smallmatrix}}\right]$
Real representation
It has a real representation as the 16-cell in 4-dimensional space, sharing the same 8 vertices. The 24 edges in the 16-cell are seen in the Möbius–Kantor polygon when the 8 triangular edges are drawn as 3 separate edges. The triangles are represented as 2 sets of 4 red or blue outlines. The B4 projections are given in two different symmetry orientations between the two color sets.
Orthographic projections (graphs omitted):

Plane: B4 (symmetry [8]) and F4 (symmetry [12/3]).
Related polytopes
This graph shows the two alternated polygons as a compound in red and blue 3{3}3 in dual positions.
3{6}2, or , with 24 vertices in black, and 16 3-edges colored in 2 sets of 3-edges in red and blue.[4]
It can also be seen as an alternation of , represented as . has 16 vertices, and 24 edges. A compound of two, in dual positions, and , can be represented as , contains all 16 vertices of .
The truncation is the same as the regular polygon 3{6}2. Its edge-diagram is the Cayley diagram for 3[3]3.
The regular Hessian polyhedron 3{3}3{3}3, has this polygon as a facet and vertex figure.
Notes
1. Coxeter and Shephard, 1991, p.30 and p.47
2. Coxeter and Shephard, 1992
3. Coxeter, Complex Regular polytopes, p.117, 132
4. Coxeter, Regular Complex Polytopes, p. 109
References
• Shephard, G.C.; Regular complex polytopes, Proc. London math. Soc. Series 3, Vol 2, (1952), pp 82–97.
• Coxeter, H. S. M. and Moser, W. O. J.; Generators and Relations for Discrete Groups (1965), esp pp 67–80.
• Coxeter, H. S. M.; Regular Complex Polytopes, Cambridge University Press, (1974), second edition (1991).
• Coxeter, H. S. M. and Shephard, G.C.; Portraits of a family of complex polytopes, Leonardo Vol 25, No 3/4, (1992), pp 239–244
andrewarchi/bit_log.md
An accumulation of notes and references gathered from my reading
Bit Log
This is a sequence of notes on software and its theory. Frequent topics include programming languages, compiler design, algorithms, and optimization.
Rust stack usage analysis
(stub)
Jorge Aparicio writes about the process of building a static stack usage analyzer for Rust using LLVM IR. The style of his blog, Embedded in Rust, mimics Markdown with its headers and is a pleasing, simple design.
https://blog.japaric.io/stack-analysis/#function-pointers
Quadtrees
https://en.wikipedia.org/wiki/Quadtree
Robert Lee's fast implementations of k-opt for TSP use quadtrees.
https://github.com/rlee32/fast-k-opt
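A minimal sketch of the data structure itself, assuming a simple point quadtree with a fixed per-node capacity. The names (`Quadtree`, `Insert`, `QueryRange`) are illustrative and are not taken from rlee32/fast-k-opt.

```go
package main

import "fmt"

type Point struct{ X, Y float64 }

// Rect is an axis-aligned box: [MinX,MaxX) x [MinY,MaxY).
type Rect struct{ MinX, MinY, MaxX, MaxY float64 }

func (r Rect) contains(p Point) bool {
	return p.X >= r.MinX && p.X < r.MaxX && p.Y >= r.MinY && p.Y < r.MaxY
}

func (r Rect) intersects(o Rect) bool {
	return r.MinX < o.MaxX && o.MinX < r.MaxX && r.MinY < o.MaxY && o.MinY < r.MaxY
}

const capacityPerNode = 4

type Quadtree struct {
	bounds   Rect
	points   []Point
	children *[4]Quadtree // nil until the node splits
}

func New(bounds Rect) *Quadtree { return &Quadtree{bounds: bounds} }

// Insert places p into the tree, splitting a full leaf into quadrants.
func (q *Quadtree) Insert(p Point) bool {
	if !q.bounds.contains(p) {
		return false
	}
	if q.children == nil {
		if len(q.points) < capacityPerNode {
			q.points = append(q.points, p)
			return true
		}
		q.split()
	}
	for i := range q.children {
		if q.children[i].Insert(p) {
			return true
		}
	}
	return false
}

func (q *Quadtree) split() {
	b := q.bounds
	mx, my := (b.MinX+b.MaxX)/2, (b.MinY+b.MaxY)/2
	q.children = &[4]Quadtree{
		{bounds: Rect{b.MinX, b.MinY, mx, my}},
		{bounds: Rect{mx, b.MinY, b.MaxX, my}},
		{bounds: Rect{b.MinX, my, mx, b.MaxY}},
		{bounds: Rect{mx, my, b.MaxX, b.MaxY}},
	}
	old := q.points
	q.points = nil
	for _, p := range old {
		q.Insert(p)
	}
}

// QueryRange appends every stored point inside r to out,
// pruning subtrees whose bounds miss r entirely.
func (q *Quadtree) QueryRange(r Rect, out []Point) []Point {
	if !q.bounds.intersects(r) {
		return out
	}
	for _, p := range q.points {
		if r.contains(p) {
			out = append(out, p)
		}
	}
	if q.children != nil {
		for i := range q.children {
			out = q.children[i].QueryRange(r, out)
		}
	}
	return out
}

func main() {
	qt := New(Rect{0, 0, 100, 100})
	for _, p := range []Point{{10, 10}, {20, 20}, {80, 80}, {15, 12}, {11, 11}} {
		qt.Insert(p)
	}
	near := qt.QueryRange(Rect{0, 0, 30, 30}, nil)
	fmt.Println(len(near)) // points in the lower-left 30x30 box
}
```

The pruning in `QueryRange` is what makes quadtrees attractive for the neighbor lookups k-opt needs: subtrees outside the query box are never visited.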
Computed goto for efficient dispatch tables
https://eli.thegreenplace.net/2012/07/12/computed-goto-for-efficient-dispatch-tables
https://www.reddit.com/r/rust/comments/4irmoz/how_to_eliminate_goto_statements_in_order_to/
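Computed goto itself is a GNU C extension, and Go has nothing like it (the Reddit thread above is about exactly that), but the dispatch-table idea behind it can be sketched in Go with a table of function values indexed by opcode. The opcodes here are made up for the sketch.

```go
package main

import "fmt"

type vm struct {
	stack []int
	pc    int
	code  []int
}

const (
	opPush = iota // operand follows in the code stream
	opAdd
	opHalt
)

// handlers plays the role of the label table in a computed-goto
// interpreter: one indexed call per instruction instead of a switch.
var handlers = []func(*vm) bool{
	opPush: func(v *vm) bool {
		v.pc++
		v.stack = append(v.stack, v.code[v.pc])
		return true
	},
	opAdd: func(v *vm) bool {
		n := len(v.stack)
		v.stack[n-2] += v.stack[n-1]
		v.stack = v.stack[:n-1]
		return true
	},
	opHalt: func(v *vm) bool { return false },
}

func run(code []int) int {
	v := &vm{code: code}
	for handlers[v.code[v.pc]](v) {
		v.pc++
	}
	return v.stack[len(v.stack)-1]
}

func main() {
	// push 2; push 3; add; halt
	fmt.Println(run([]int{opPush, 2, opPush, 3, opAdd, opHalt}))
}
```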
Tech Interview Handbook
https://yangshun.github.io/tech-interview-handbook/
Static types and bug density
You want to reduce bugs? Use TDD. You want useful code intelligence tools? Use static types.
https://medium.com/javascript-scene/the-shocking-secret-about-static-types-514d39bf30a3
Rust profile-guided optimization
Michael Woerister reported on the llvm-dev mailing list that, as far as he's aware, profile-guided optimization now works exactly as expected with Rust.
https://lists.llvm.org/pipermail/llvm-dev/2019-December/137331.html
AI-based infinite text adventures
https://pcc.cs.byu.edu/2019/11/21/ai-dungeon-2-creating-infinitely-generated-text-adventures-with-deep-learning-language-models/
http://www.aidungeon.io/
Go cross-platform epoll
Epoller is a Go epoll interface for connections in Linux, MacOS and Windows.
Poller simplifies the cumbersome C API:
```go
type Poller interface {
	Add(conn net.Conn) error
	Remove(conn net.Conn) error
	Wait(count int) ([]net.Conn, error)
	WaitWithBuffer() ([]net.Conn, error)
	WaitChan(count int) <-chan []net.Conn
	Close() error
}
```
https://github.com/smallnest/epoller
miniKanren
µKanren - an implementation of miniKanren in 40 lines of Scheme, "the essence of miniKanren"
miniKanren - the language
http://webyrd.net/scheme-2013/papers/HemannMuKanren2013.pdf
Pointfree Haskell
https://wiki.haskell.org/Pointfree
https://hackage.haskell.org/package/pointful
http://pointfree.io/
Spotify statistics
https://www.spotify.com/us/wrapped/
https://github.com/sdimi/spotifav
https://www.reddit.com/r/spotify/comments/95ga27/list_of_spotify_stats_websites_and_not_just_stats/
https://obscurifymusic.com/
https://skiley.net/
https://medium.com/cuepoint/visualizing-hundreds-of-my-favorite-songs-on-spotify-fe50c94b8af3
Brodal queues
Efficient priority queue. Chris Okasaki was involved (the guy from functional red/black trees).
find-min: Θ(1)
delete-min: O(log n)
insert: Θ(1)
decrease-key: Θ(1)
meld: Θ(1)
https://en.wikipedia.org/wiki/Brodal_queue
Lua

https://en.m.wikipedia.org/wiki/Lua_(programming_language)
DeSmuME Lua scripting: http://tasvideos.org/LuaScripting.html
Wine

https://www.davidbaumgold.com/tutorials/wine-mac/
Did not work for me:
https://www.howtogeek.com/263211/how-to-run-windows-programs-on-a-mac-with-wine/
Alfred

https://www.alfredapp.com/
100 blocks a day
https://waitbutwhy.com/2016/10/100-blocks-day.html
https://waitbutwhy.com/2014/05/life-weeks.html
https://waitbutwhy.com/2015/12/the-tail-end.html
String interning
https://blog.mozilla.org/nnethercote/2019/07/17/how-to-speed-up-the-rust-compiler-in-2019/
https://en.wikipedia.org/wiki/String_interning
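A minimal interner along the lines the links above describe: a map from string contents to one canonical copy, so equal strings share storage and can be deduplicated. The API (`Interner`, `Intern`, `Len`) is made up for this sketch.

```go
package main

import (
	"fmt"
	"strings"
)

type Interner struct {
	pool map[string]string
}

func NewInterner() *Interner { return &Interner{pool: make(map[string]string)} }

// Intern returns the canonical copy of s, storing s itself on first sight.
func (in *Interner) Intern(s string) string {
	if canon, ok := in.pool[s]; ok {
		return canon
	}
	in.pool[s] = s
	return s
}

// Len reports how many distinct strings have been interned.
func (in *Interner) Len() int { return len(in.pool) }

func main() {
	in := NewInterner()
	a := in.Intern(strings.Repeat("ab", 2)) // "abab", built at runtime
	b := in.Intern("ab" + "ab")             // equal contents, separate construction
	fmt.Println(a == b, in.Len())
}
```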
JitFromScratch
Talk from LLVM Social Berlin
https://github.com/weliveindetail/JitFromScratch
One Weekend Compiler
https://github.com/MarkLeone/WeekendCompiler
https://www.meetup.com/Utah-Cpp-Programmers/events/xxrjflyxqbqb/
LLVM Stacker tutorial lang
https://releases.llvm.org/1.1/docs/Stacker.html https://releases.llvm.org/2.2/docs/FAQ.html
Related: https://stackoverflow.com/questions/55947098/how-to-use-llvm-to-convert-stack-based-virtual-machine-bytecode-to-ssa-form
rr-debugger
Replays recorded runs for debugging
Robert O'Callahan
https://rr-project.org/
Flang using MLIR and LLVM
https://github.com/flang-compiler/flang
MLIR Toy language
https://www.youtube.com/watch?v=cyICUIZ56wQ
MLIR keynote
Because location tracking is integral, we can also build the testsuite to depend on it, and use it to test analysis passes. ... Design for testability is a key part of our design, and we take it further than LLVM did.
Look into:
Polyhedral Compiler Techniques

Widely explored in compiler research:
Great success in HPC and image processing kernels
Tensor abstraction gives full control over memory layout
Our implementation is well underway, and we've built a number of passes using this representation. That said, we're still actively evolving and changing things here - this would be a great place to get involved if you are interested in contributing.
OpenMP & Other Parallelism Dialects
OpenMP is mostly orthogonal to host language:

- Common runtime + semantics
- Rich model, many optimizations are possible

Model OpenMP as a dialect in MLIR:

- Share across Clang and Fortran
- Region abstraction makes optimizations easy
  - Simple SSA intra-procedural optimizations
Sub communities within Clang would also benefit. For example, OpenMP is not very well served with the current design. Very simple optimizations are difficult on LLVM IR, because function outlining has already been performed. This turns even trivial optimizations (like constant folding into parallel for loops) into interprocedural problems. Having a first class region representation makes these things trivial, based on standard SSA techniques. This would also provide a path for Fortran to reuse the same code. Right now the Flang community either has to generate Clang ASTs to get reuse (egads!) or generate LLVM IR directly and reimplement OpenMP lowering.
"Building a Compiler with MLIR" Tutorial
Build a new frontend/AST with MLIR, show lowering to LLVM IR
Introduce mid-level array IR: use it to optimize, tile, and emit efficient code
Tomorrow @ 12:00
https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLattner-MLIR.pdf
LLForth
https://github.com/riywo/llforth
Pi nodes
PiNodes encode statically proven information that may be implicitly assumed in basic blocks dominated by a given pi node. They are conceptually equivalent to the technique introduced in the paper "ABCD: Eliminating Array Bounds Checks on Demand" or the predicate info nodes in LLVM.
https://docs.julialang.org/en/v1/devdocs/ssair/index.html
ABCD: Eliminating Array Bounds Checks on Demand:
https://www.classes.cs.uchicago.edu/archive/2006/spring/32630-1/papers/p321-bodik.pdf
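The idea pi nodes and ABCD formalize (a dominating check proving facts for everything it dominates) shows up concretely in Go's bounds-check elimination: a leading full-width access proves the slice is long enough, so the compiler can drop the checks on the later indexes. The function name here is made up.

```go
package main

import "fmt"

// sumFirst4 sums the first four elements of s.
func sumFirst4(s []int) int {
	_ = s[3] // dominating bounds check: panics early if len(s) < 4
	// Under the fact proven above, the checks on s[0]..s[3] can be
	// eliminated by the compiler.
	return s[0] + s[1] + s[2] + s[3]
}

func main() {
	fmt.Println(sumFirst4([]int{1, 2, 3, 4, 5}))
}
```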
LLVM pi blocks
https://reviews.llvm.org/rGf0af11d86f8
LLVM arbitrary precision integers
http://lists.llvm.org/pipermail/cfe-dev/2019-November/063838.html
You may find that doing this in MLIR works better. MLIR has an open type system, so you can add a new type for your integers quite easily, though then you won't benefit from any LLVM optimisations. It's not clear that you could necessarily use the generic LLVM backend infrastructure though, so a direct translation from MLIR to your bytecode may preferable.
Go 1.14
http://www.go-gazette.com/issues/what-s-coming-in-go-1-14-query-data-as-a-graph-in-go-a-go-pacman-clone-more-202864
https://docs.google.com/presentation/d/1HfIwlVTmVWQk94OLKfTGvXpQxyp0U4ywG1u5j2tjiuE/mobilepresent?slide=id.g550f852d27_228_0
Embedding overlapping interfaces: https://github.com/golang/go/issues/6977
Constant len and cap are untyped: https://github.com/golang/go/issues/31795
Low-cost defers: https://github.com/golang/go/issues/34481
Escape analysis rewrite: https://github.com/golang/go/issues/23109
Elixir anonymous function shorthand
iex> sum = fn (a, b) -> a + b end
iex> sum.(2, 3)
5
iex> sum = &(&1 + &2)
iex> sum.(2, 3)
5
Like Swift?
https://elixirschool.com/en/lessons/basics/functions/#the--shorthand
VSCode ligature stylistic sets and web UI
Stylistic sets:
There is now more fine grained control over the font features. When configuring "editor.fontLigatures": true, VS Code would turn on liga and calt. But some fonts have more settings, such as stylistic sets used by Fira Code.
We now allow these font features to be explicitly controlled, for example:
"editor.fontFamily": "Fira Code",
"editor.fontLigatures": true,
"[javascript]": {
  "editor.fontLigatures": "'ss02', 'ss19'",
},
Web UI:
There is now a minimal version of VS Code that can run in a browser that is available for development and testing. The Web version is still missing some features and is under active development.
In your local fork of the vscode repository, execute yarn web from the command line and access http://localhost:8080/. For more details about cloning and building the vscode repo, see the setup instructions.
https://code.visualstudio.com/updates/v1_40
CUDA C/C++
https://www.nvidia.com/docs/IO/116711/sc11-cuda-c-basics.pdf
Semmle CodeQL and GitHub
Semmle developed CodeQL, a query language with the vision of code as data. CodeQL can expressively query patterns of code and detect classes of vulnerabilities through variant analysis. It has extensive libraries to perform control and data flow analysis and taint tracking language agnostically.
https://semmle.com/codeql
GitHub recently acquired Semmle and its plans to integrate CodeQL to detect vulnerabilities in GitHub repos. Semmle is also hiring.
https://github.com/features/security
https://blog.semmle.com/secure-software-github-semmle/
{Semmle}{GitHub}{CodeQL}{static analysis}{security}{taint analysis}
Languages using LLVM
Diagram on LLVM IRs: https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLattner-MLIR.pdf
https://docs.julialang.org/en/v1/devdocs/llvm/index.html
LLVM has only one expansion and it is wrong/misleading. Solution: have lots of ambiguous expansions so we can change our mind later :-)
https://github.com/tensorflow/mlir
http://nondot.org/sabre/Resume.html#Google
{MLIR}{IR}{compiler}
Moore's Law and programming languages
Chris Lattner talks of a new era of compilers coming with the end of Moore's Law. Chris is currently developing MLIR for high performance machine learning which is unlike earlier IRs. I hope to contribute to this changing landscape.
Increased diversity in compilers and tools is a huge opportunity - this area is exploding in importance as we stand here, gazing over the precipice at the end of Moore's Law. At the same time there is so much untapped talent, and we *need* fresh perspectives and new ideas!
https://twitter.com/clattner_llvm/status/1180614183429132289
{Moore's Law}{compiler}
Lisp Flavored Erlang
Lisp Flavored Erlang (LFE) is a lisp dialect designed on BEAM (Erlang VM) that combines metaprogramming and Erlang's reliable concurrency.
https://en.wikipedia.org/wiki/LFE_%28programming_language%29
http://lfe.io/
{LFE}{BEAM}{Erlang}
Alternate OCaml front-ends
The OCaml compiler is written modularly and can accept an AST from an external front-end. For example, the m17n project is a front-end that introduces Unicode identifiers to enable multilingualization. m17n carefully handles special cases as none of the normalization forms (see entry from 2019-06-04) are closed under concatenation and file systems treat Unicode differently.
https://github.com/whitequark/ocaml-m17n
{OCaml}{m17n}{front-end}{compiler}
Reason programming language
Reason, also known as ReasonML, is a syntax extension and toolchain for OCaml created by Jordan Walke at Facebook. Reason offers a syntax familiar to JavaScript programmers, and transpiles to OCaml. Statically typed Reason (or OCaml) code may be compiled to dynamically typed JavaScript using the BuckleScript compiler.
https://en.wikipedia.org/wiki/Reason_(programming_language)
https://reasonml.github.io/docs/en/what-and-why
https://github.com/facebook/reason
Reason generalizes and cleans up the syntax of OCaml. Parenthesis are consistently used and the syntax is designed to be more approachable to JavaScript developers. Automated tooling can seamlessly convert codebases between OCaml and Reason syntaxes.
https://reasonml.github.io/docs/en/comparison-to-ocaml
https://reasonml.github.io/docs/en/convert-from-ocaml
As Reason is developed by Facebook, it has excellent interoperability with React by using its ReasonReact bindings.
https://github.com/reasonml/reason-react
Reason and Elm both compile to JavaScript, so it seems natural to compare the two. Reason is developed on the mature OCaml, so benefits from a large ecosystem and fast compilation. Reason also is highly interoperable with existing JavaScript libraries where Elm instead makes this more difficult in favor of pure Elm code. Elm uses Haskell-like syntax.
https://stackoverflow.com/questions/51015850/reasonml-vs-elm#51027309
In a Hacker News post on Reason, Jordan Walke notes that it may be possible to remove the need for semicolon as a delimiter and reformat existing code using refmt. The F# spec documents its semicolon elision and indentation, so may be useful to reference for such a change. Users analogize Reason with Elixir, LFE, Clojure, and TypeScript as they are all languages developed on existing compilers, VMs, or languages.
https://news.ycombinator.com/item?id=11716975
{Reason}{OCaml}{PL}
Google search engine prototype anatomy
Sergey Brin and Lawrence Page, in "The Anatomy of a Large-Scale Hypertextual Web Search Engine", detail the design of an early prototype of the Google search engine and its advantages over contemporary search engines. They describe their PageRank algorithm for ranking pages by links and their scalable system for indexing.
http://infolab.stanford.edu/~backrub/google.html
Lists of other CS papers accessible to undergrads:
https://wiki.nikitavoloboev.xyz/research-papers
{Google}{paper}
Garbage collection algorithms visualized
https://spin.atomicobject.com/2014/09/03/visualizing-garbage-collection-algorithms/
https://stackoverflow.com/questions/7823725/what-kind-of-garbage-collection-does-go-use
{garbage collection}
Rockstar programming language
Rockstar is a programming language with programs resembling song lyrics. It was designed by Dylan Beattie in response to the overuse of the "rockstar developer" phrase used by recruiters.
Mainly because if we make Rockstar a real (and completely pointless) programming language, then recruiters and hiring managers won't be able to talk about 'rockstar developers' any more.
There are two styles of Rockstar programs. Idiomatic Rockstar programs are written as plausible song lyrics (such as the fizz-buzz program below) and minimalist programs omit more expressive forms in favor of clarity.
Midnight takes your heart and your soul
While your heart is as high as your soul
Put your heart without your soul into your heart
Give back your heart
Desire is a lovestruck ladykiller
My world is nothing
Fire is ice
Hate is water
Until my world is Desire,
Build my world up
If Midnight taking my world, Fire is nothing and Midnight taking my world, Hate is nothing
Shout "FizzBuzz!"
If Midnight taking my world, Fire is nothing
Shout "Fizz!"
If Midnight taking my world, Hate is nothing
Say "Buzz!"
Whisper my world
https://github.com/RockstarLang/rockstar
https://codewithrockstar.com/
On its Hacker News posting, users speculate names for similar languages:
Maybe we should create more languages called Agile, Senior, Expert, Lead, Ninja, 1 Mio Dollar, Ivy League, Full Stack, etc
Full Stack, a language where the only data type is a stack.
{esolang}{Rockstar}
Single instruction compiler
https://stackoverflow.com/questions/9439001/what-is-the-minimum-instruction-set-required-for-any-assembly-language-to-be-con/19677755#19677755
https://github.com/xoreaxeaxeax/movfuscator
Pragmas in Go
https://dave.cheney.net/2018/01/08/gos-hidden-pragmas
OpenEmu multiple emulator engine for macOS
https://github.com/OpenEmu/OpenEmu
https://archive.org/details/MAME_0.149_ROMs
http://bamf2048.github.io/sdl_mame_tut/
http://bamf2048.github.io/sdl_mame_tut2/
Poor performance with macOS out of the box: https://github.com/TASVideos/desmume
Unable to compile for macOS: https://github.com/Arisotura/melonDS
Macros in Racket
https://docs.racket-lang.org/guide/macros.html
Actor model compared to CSP
https://en.wikipedia.org/wiki/Actor_model
https://en.wikipedia.org/wiki/Communicating_sequential_processes
https://en.wikipedia.org/wiki/Hackintosh
Set macOS default text editor
https://apple.stackexchange.com/questions/123833/replace-text-edit-as-the-default-text-editor
CS student falsehoods
https://www.netmeister.org/blog/cs-falsehoods.html
Prolog guide
David Matuszek gives an overview of Prolog:
Prolog variables are similar to "unknowns" in algebra: Prolog tries to find values for the variables such that the entire clause can succeed. Once a value has been chosen for a variable, it cannot be altered by subsequent code; however, if the remainder of the clause cannot be satisfied, Prolog may backtrack and try another value for that variable. ...
Unification can be performed on lists:
[a, b, c] = [Head | Tail]. /* a = Head, [b, c] = Tail. */
[a, b] = [A, B | T]. /* a = A, b = B, [] = T. */
[a, B | C] = [X | Y]. /* a = X, [B | C] = Y. */
In most (but not all) Prolog systems, the list notation is syntactic sugar for the '.' functor, with the equivalence: '.'(Head, Tail) = [Head | Tail].
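The bracket-to-`'.'` desugaring and the head/tail bindings above can be illustrated with a toy unifier in Python (a sketch of the concept, not how any Prolog system is implemented; `cons`/`from_list` and the uppercase-means-variable convention are my own):

```python
# Toy structural unification over cons cells, mirroring Prolog's
# '.'(Head, Tail) list representation. Variables are strings starting
# with an uppercase letter; atoms are lowercase strings.

NIL = ()

def cons(head, tail):
    return (".", head, tail)

def from_list(items):
    out = NIL
    for x in reversed(items):
        out = cons(x, out)
    return out

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: unification fails

# [a, b, c] = [Head | Tail]
s = unify(from_list(["a", "b", "c"]), cons("Head", "Tail"), {})
print(s["Head"])                           # a
print(s["Tail"] == from_list(["b", "c"]))  # True
```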
Solving arithmetic equations would introduce significant complexity to the language implementation, but would be very valuable. It would be interesting to see whether research has been done in this area. It may be feasible to integrate SMT solving logic into Prolog evaluation.
Arithmetic is performed only upon request. For example, 2+2=4 will fail, because 4 is a number but 2+2 is a structure with functor '+'. Prolog cannot work arithmetic backwards; the following definition of square root ought to work when called with sqrt(25, R), but it doesn't.
sqrt(X, Y) :- X is Y * Y. /* Requires that Y be instantiated. */
Arithmetic is procedural because Prolog isn't smart enough to solve equations, even simple ones. This is a research area.
https://www.cis.upenn.edu/~matuszek/Concise%20Guides/Concise%20Prolog.html
{PL}{Prolog}{SMT}
Solving Sudoku with Prolog
Markus Triska demonstrates solving Sudoku using constraint logic programming in Prolog. In this way, the solution is much more elegant than in a general purpose imperative language.
sudoku(Rows) :-
length(Rows, 9),
maplist(same_length(Rows), Rows),
append(Rows, Vs), Vs ins 1..9,
maplist(all_distinct, Rows),
transpose(Rows, Columns),
maplist(all_distinct, Columns),
Rows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],
blocks(As, Bs, Cs),
blocks(Ds, Es, Fs),
blocks(Gs, Hs, Is).
blocks([], [], []).
blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-
all_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),
blocks(Ns1, Ns2, Ns3).
https://www.metalevel.at/sudoku/
Prolog is extremely well-suited for solving combinatorial tasks like Sudoku puzzles, and also for tough practical challenges such as timetabling, scheduling and allocation tasks on an industrial scale.
The key feature that makes Prolog so efficient and frequently used for such tasks is constraint propagation, provided via libraries or as a built-in feature in many Prolog systems. Fast and efficient constraint propagation is often an important reason for buying a commercial Prolog system.
In this example, I am using CLP(FD/ℤ), constraint logic programming over finite domains/integers, the amalgamation of constraint programming (CP) and logic programming (LP), which blend especially seamlessly.
{Prolog}{Sudoku}{constraint logic programming}
Whitespace with Befunge syntax
Project idea: develop Befunge-like language that can compile to Whitespace.
Project idea: develop n-dimensional Befunge-like language.
{project}{Whitespace}{Befunge}
Programming paradigms
https://en.wikipedia.org/wiki/Language-oriented_programming
https://en.wikipedia.org/wiki/Intentional_programming
https://en.wikipedia.org/wiki/Natural-language_programming
http://doc.pypy.org/en/latest/architecture.html
https://en.wikipedia.org/wiki/PyPy
Futamura projections:
http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html
https://gist.github.com/tomykaira/3159910
https://en.wikipedia.org/wiki/Partial_evaluation#Futamura_projections
Minesweeper Turing-complete
Richard Kaye demonstrates that Minesweeper, with an infinite grid, can simulate computations in his paper "Infinite versions of minesweeper are Turing complete". Knowledge of a square gives partial information about its neighbors and determines possible continuations over the plane.
http://web.mat.bham.ac.uk/R.W.Kaye/minesw/infmsw.pdf
https://hackernoon.com/beyond-esolangs-6-things-that-are-unintentionally-turing-complete-60ab1e1b50f9
Detecting semaphore deadlock
I hypothesize that semaphore deadlock can be statically detected by creating a graph with edges for every semaphore wait (and post?). Any cycles indicate potential deadlock.
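A minimal sketch of this hypothesis, assuming the wait-for edges (thread holds `s1` while waiting on `s2`) have already been extracted from the program by some prior static analysis:

```python
# Build a wait-for graph between semaphores (edge s1 -> s2 if some
# thread holds s1 while waiting on s2) and report a potential deadlock
# when the graph contains a cycle.

def has_cycle(graph):
    """DFS-based cycle detection over an adjacency-list graph."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True  # back edge: cycle found
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Thread A: wait(s1); wait(s2)   Thread B: wait(s2); wait(s1)
print(has_cycle({"s1": ["s2"], "s2": ["s1"]}))  # True
print(has_cycle({"s1": ["s2"], "s2": []}))      # False
```

This only flags *potential* deadlock: a cycle in the static graph may never be realized at run time if the conflicting acquisitions are mutually exclusive.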
{project}{semaphore}{deadlock}
FALSE esolang
http://strlen.com/false-language/
https://esolangs.org/wiki/FALSE
{esolang}{FALSE}
Befunge and Whitespace interoperability
I designed my Nebula compiler for Whitespace, a stack-based language with a secondary heap. Befunge is similarly stack-based, so Nebula's stack analysis and basic blocks could be reused for Befunge.
If Befunge's self-modifying p instruction could be expanded into its possible combinations as I (falsely) posited below in the "Compiling Befunge" entry, then the Nebula IR could be shared between the two languages.
Sharing the same IR would open the way for foreign function interface calls between the languages. Any label in a Whitespace program can be called (or jumped to) as a procedure and a ret instruction transfers control back to the caller while end terminates the program. Befunge, on the other hand, has no formal concept of procedures. Befunge control flow maintains a cell position and direction, so a procedure for the purposes of FFI could be uniquely defined as a position-direction tuple. Whitespace could then transfer control to a position and direction in Befunge and the program would terminate with Befunge's @ end instruction (Befunge has no return construct).
To link mixed programs, additional metadata would need to be stored to connect a Whitespace label to a Befunge position and direction. A simple file format could be defined that defines the mappings. This would enable a Befunge cell to be defined as a call to a Whitespace procedure without needing to expand Befunge syntax.
The Befunge ? random path instruction poses problems when converting programs from Befunge into Whitespace. The only non-deterministic behavior in Whitespace is user input, so any random number generation would need to be seeded from initial user input. Thus pure Whitespace cannot equivalently represent any Befunge program containing the ? instruction. However, for hybrid programs, Whitespace can leverage Befunge's ? instruction to implement pseudorandom number generators that require no seed from the user.
{project}{Nebula}{esolang}{Befunge}{Whitespace}{IR}{FFI}
Compiling Befunge
Befunge is a stack-based esolang with a two-dimensional instruction pointer and self-modifying instructions. It was designed by Chris Pressey with the goal of being difficult to compile.
Computationally, any Befunge-93 program can be represented as a push-down automaton, but due to the 80x25 grid size restriction, not all push-down automata can be encoded in Befunge-93 making it not Turing-complete. Befunge-98 removes this size restriction.
Although difficult to compile, a handful of JIT compilers exist:
The Betty compiler, for example, treats every possible straight line of instructions as a subprogram, and if a p instruction alters that subprogram, that subprogram is recompiled. This is an interesting variation on just-in-time compilation, and it results in a much better advantage over an interpreter, since many instructions can be executed in native code without making intervening decisions on the 'direction' register.
The Befunjit and Bejit compilers, similarly to the Betty compiler, split the original code into subprograms which are lazily compiled and executed. They, however, divide the original playfield into "static paths" - code paths which do not contain instructions that conditionally change direction (i.e. |, _ or ?). The "static paths" may span on more cells than the "straight line paths" of Betty, which results in fewer and longer subprograms. Thus, there are fewer context jumps between the compiler and the compiled code and allows more optimisations.
https://esolangs.org/wiki/Befunge
https://en.wikipedia.org/wiki/Befunge
https://stackoverflow.com/questions/20935830/why-is-befunge-considered-hard-to-compile
Project idea: design a Befunge compiler that splits the program into static paths (basic blocks). As the program size and number of instructions are fixed, it should be possible to statically enumerate the possible combinations of instruction mutations and construct paths for each. Jump tables can be used to select the desired path at run time. By statically expressing every possible path, more aggressive optimizations and analyses would be enabled and there would be no need for the context switching used by JIT implementations. The compiler's name could be "Befudge".
Edit: I had overlooked that the p instruction may modify cells at non-constant coordinates, thus it is not feasible to statically compute all possible paths in the general case. It only works if all p instructions in a given program use constant coordinates.
{project}{esolang}{Befunge}{compiler}
Smalltalk and Simula languages
{PL}{Smalltalk}{Simula}
https://en.wikipedia.org/wiki/Paul_Graham_(programmer)
Blub programming language
Beating the Averages
You can see that machine language is very low level. But, at least as a kind of social convention, high-level languages are often all treated as equivalent. They're not. Technically the term "high-level language" doesn't mean anything very definite. There's no dividing line with machine languages on one side and all the high-level languages on the other. Languages fall along a continuum [4] of abstractness, from the most powerful all the way down to machine languages, which themselves vary in power.
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in. They're half technology and half religion.[6] And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg. Garbage collection, introduced by Lisp in about 1960, is now widely considered to be a good thing. Runtime typing, ditto, is growing in popularity. Lexical closures, introduced by Lisp in the early 1970s, are now, just barely, on the radar screen. Macros, introduced by Lisp in the mid 1960s, are still terra incognita.
[4] Note to nerds: or possibly a lattice, narrowing toward the top; it's not the shape that matters here but the idea that there is at least a partial order.
http://www.paulgraham.com/avg.html
M-expressions
https://en.wikipedia.org/wiki/M-expression
Customizing macOS menu clock format
The clock in the macOS menu bar does not follow the date format set in preferences. The date format can be manually overridden in the terminal, but the menu bar must be restarted for the changes to take effect. However, on my system running macOS Mojave, I couldn't get this to work.
$ defaults read com.apple.menuextra.clock DateFormat
EEE MMM d H:mm:ss
$ defaults write com.apple.menuextra.clock DateFormat -string 'EEE dd MMM HH:mm:ss'
$ killall -KILL SystemUIServer
https://www.tech-otaku.com/mac/setting-the-date-and-time-format-for-the-macos-menu-bar-clock-using-terminal/
https://superuser.com/questions/1111908/change-os-x-date-and-time-format-in-menu-bar
https://apple.stackexchange.com/questions/181490/how-to-change-date-format-on-menu-bar-without-extra-apps
https://apple.stackexchange.com/questions/180847/wrong-date-format-in-the-menu-bar
05AB1E esolang
https://github.com/Adriandmen/05AB1E
https://github.com/Adriandmen/05AB1E/wiki/Commands
{esolang}{05AB1E}
Full employment theorem
https://en.wikipedia.org/wiki/Full_employment_theorem
The Little Book of Semaphores
Kimball German recommended to me "The Little Book of Semaphores" by Allen B. Downey to expand on material covered in Computer Systems. It is a free textbook on synchronization and concurrency that has many puzzles for the reader to solve.
In the introduction, it suggests that information on atomic operations for specific platforms could be gathered, but dismisses it in favor of the more general approach of synchronization. However, compilers could implement install-time optimization to eliminate the need for more expensive synchronization techniques in some cases.
So how can we write concurrent programs if we don't know which operations are atomic? One possibility is to collect specific information about each operation on each hardware platform. The drawbacks of this approach are obvious.
The most common alternative is to make the conservative assumption that all updates and all writes are not atomic, and to use synchronization constraints to control concurrent access to shared variables.
https://greenteapress.com/wp/semaphores/
Quala type qualifiers for LLVM and Clang
Quala is an extension to LLVM and Clang by Adrian Sampson to make type systems pluggable in C and C++. Provided type systems are a taint tracker and a null value dereference checker. Type metadata is recorded in the resultant LLVM IR.
User-customizable type systems make it possible to add optional checks to a language without hacking the compiler. The world is full of great ideas for one-off type systems that help identify specific problems—SQL injection, say—but it's infeasible to expect all of these to be integrated into a language spec or a compiler. Who would want to deal with hundreds of type system extensions they're not really using?
https://github.com/sampsyo/quala
{PL}{LLVM}{Clang}{types}
John Regehr
John Regehr is a professor at the University of Utah who researches "embedded systems, sensor networks, static analysis, real-time systems, [and] operating systems". He teaches courses including advanced compilers, operating systems, and embedded systems. Several of his recent publications involve LLVM and optimizing compilers.
https://www.cs.utah.edu/~regehr/
https://www.cs.utah.edu/people/faculty/
https://blog.regehr.org
{academia}{University of Utah}
LLVM for grad students
Adrian Sampson provides an intro targeted towards grad students for writing a transformation pass using LLVM.
http://www.cs.cornell.edu/~asampson/blog/llvm.html
Alex Bradbury tracks the LLVM development and announcements in his LLVM Weekly newsletter. This makes following changes in LLVM internals easier.
http://llvmweekly.org/
{LLVM}{compiler}
Graphviz Go utilities
Project grvutils by Than McIntosh provides tools for working with Graphviz graph files. Features include lexing, parsing, manipulating, and pruning.
https://github.com/thanm/grvutils
{Graphviz}{Go}
Go LLVM compilers
GoLLVM is a Go compiler written in C++ using the LLVM backend. It is under active development by Google.
https://go.googlesource.com/gollvm/
llgo is an LLVM-based compiler for Go, written in Go. It leverages go/parser and go/types from the standard library. It has now been incorporated into the LLVM project along with its LLVM bindings for Go.
https://github.com/go-llvm/llgo
Ian Lance Taylor lists some of the benefits of these LLVM compilers:
It's always a good idea to have multiple compilers, as it helps ensure that the language is defined by a spec rather than an implementation.
And, yes, the LLVM compiler generates code that is clearly better for some cases, though the compilation process itself is longer.
https://groups.google.com/forum/#!topic/golang-nuts/Tf0BOTtEpOs
{Go}{LLVM}{compiler}
U Combinator
U Combinator is a research group at the University of Utah that formerly included Matt Might, Peter Aldous, and Kimball Germane. The lab researches and develops "advanced languages, compilers and tools to improve the performance, parallelism, security and correctness of software".
The name of the group comes from the U combinator function:
In the theory of programming languages, the U combinator, U, is the mathematical function that applies its argument to its argument; that is U(f) = f(f), or equivalently, U = λf.f(f).
Self-application permits the simulation of recursion in the λ-calculus, which means that the U combinator enables universal computation. (The U combinator is actually more primitive than the more well-known fixed-point Y combinator.)
The expression U(U), read U of U, is the smallest non-terminating program, and U of U is also the local short-hand for the University of Utah.
http://www.ucombinator.org
https://github.com/Ucombinator
{academia}{University of Utah}{PL}
Optimizing Conway's Game of Life
Michael Abrash describes several approaches to optimizing Conway's Game of Life implementations in chapters 17 and 18 of his book "Graphics Programmer's Black Book". One of the simpler optimizations is to store the state of the eight adjacent cells as a bit pattern along with each cell's own state to enable use of a lookup table for the next generation. Since the majority of cells are dead and remain dead, using a change list eliminates scanning over cells that will never change.
http://www.jagregory.com/abrash-black-book/#chapter-17-the-game-of-life
http://www.jagregory.com/abrash-black-book/#chapter-18-its-a-plain-wonderful-life
https://stackoverflow.com/questions/40485/optimizing-conways-game-of-life
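A simplified Python sketch of Abrash's two ideas (a set-based grid stands in for his packed byte array, and the candidate set is a crude change list): pack a cell and its eight neighbors into a 9-bit index, precompute the next state for all 512 neighborhoods, and visit only cells near live cells.

```python
# table[idx]: bit 8 is the center cell, bits 0-7 are its neighbors.
TABLE = [0] * 512
for idx in range(512):
    alive = (idx >> 8) & 1
    neighbors = bin(idx & 0xFF).count("1")
    TABLE[idx] = 1 if neighbors == 3 or (alive and neighbors == 2) else 0

def step(cells, width, height):
    """One generation on a set of live (x, y) cells, visiting only
    cells adjacent to a live cell (a crude change list)."""
    candidates = {(x + dx, y + dy)
                  for (x, y) in cells
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    nxt = set()
    for (x, y) in candidates:
        if not (0 <= x < width and 0 <= y < height):
            continue
        idx = ((x, y) in cells) << 8
        bit = 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    idx |= ((x + dx, y + dy) in cells) << bit
                    bit += 1
        if TABLE[idx]:
            nxt.add((x, y))
    return nxt

# Blinker oscillates between vertical and horizontal.
blinker = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(blinker, 5, 5)))  # [(0, 1), (1, 1), (2, 1)]
```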
Tetris in Conway's Game of Life
QFTASM is a RISC architecture with 11 of its 16 opcodes assigned. Notably, MNZ "move if not zero" and ANT "and-not" are used instead of MEZ "move if zero" and NOT because creating a TRUE signal from an empty signal is difficult with cellular automata.
0000 MNZ Move if not zero
0001 MLZ Move if less than zero
0010 ADD Addition
0011 SUB Subtraction
0100 AND Bitwise and
0101 OR Bitwise or
0110 XOR Bitwise exclusive or
0111 ANT Bitwise and-not
1000 SL Shift left
1001 SRL Shift right logical
1010 SRA Shift right arithmetic
https://codegolf.stackexchange.com/questions/11880/build-a-working-game-of-tetris-in-conways-game-of-life
Java shortcomings
https://en.wikipedia.org/wiki/Criticism_of_Java
Enforcing code feature requirements
https://www.artima.com/cppsource/codefeatures.html
Myopia µ-recursive language
Myopia is a programming language based on µ-recursive functions. Myopia is nearly identical to the μ6 language except for its more readable syntax and lack of integer shorthands. Programs that do not use the M operator are primitive recursive and guaranteed to halt.
Myopia deals with functions from tuples of natural numbers to natural numbers (N^n -> N). The functions are constructed by composing the following primitives:
Z, the zero function.
S, the successor function.
I[i,k], the family of identity functions.
C, the composition operator.
P, the primitive recursive operator.
M, the minimisation operator.
-- plus(x,y) = x + y
plus : N x N -> N
plus = P(id, C(S, I[2,3]))
mult = P(Z, C(plus, I[2,3], I[3,3]))
https://github.com/miikka/myopia
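The six primitives can be sketched as an interpretation in Python (my own sketch, not Myopia's implementation; I assume the `id` in the source's `plus` stands for the projection `I[1,1]`):

```python
# Sketch of the µ-recursive primitives over natural numbers.

def Z(*args):            # zero function
    return 0

def S(x):                # successor
    return x + 1

def I(i, k):             # identity (projection) family, 1-indexed
    return lambda *args: args[i - 1]

def C(f, *gs):           # composition: C(f, g1..gm)(xs) = f(g1(xs)..gm(xs))
    return lambda *args: f(*(g(*args) for g in gs))

def P(f, g):             # primitive recursion on the first argument
    def h(n, *rest):
        if n == 0:
            return f(*rest)
        return g(n - 1, h(n - 1, *rest), *rest)
    return h

def M(f):                # minimisation: least n with f(n, xs) == 0
    def h(*rest):
        n = 0
        while f(n, *rest) != 0:
            n += 1
        return n
    return h

# plus = P(I[1,1], C(S, I[2,3])) and mult = P(Z, C(plus, I[2,3], I[3,3]))
plus = P(I(1, 1), C(S, I(2, 3)))
mult = P(Z, C(plus, I(2, 3), I(3, 3)))
print(plus(2, 3))  # 5
print(mult(2, 3))  # 6
```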
Tink easy-to-use cryptographic APIs
Tink is a multi-language set of cryptographic APIs developed at Google that is designed to be easy to use for developers without a cryptography background. Currently, Java and Android, C++, Objective-C, and Go are production ready and Python and JavaScript are in active development.
https://github.com/google/tink
Building a WebAssembly Compiler
Colin Eberhardt describes the process of developing a compiler that targets WebAssembly, for a language that he designed called chasm.
(func $add (param f32) (param f32) (result f32)
  get_local 0
  get_local 1
  f32.add)
(export "add" (func 0))
If you just want to experiment with WAT you can use the wat2wasm tool from the WebAssembly Binary Toolkit to compile WAT files into wasm modules.
The above code reveals some interesting details around WebAssembly -
WebAssembly is a low-level language, with a small (approx 60) instruction set, where many of the instructions map quite closely to CPU instructions. This makes it easy to compile wasm modules to CPU-specific machine code.
It has no built-in I/O. There are no instructions for writing to the terminal, screen or network. In order for wasm modules to interact with the outside world they need to do so via their host environment, which in the case of the browser is JavaScript.
WebAssembly is a stack machine, in the above example get_local 0 gets the local variable (in this case the function param) at the zeroth index and pushes it onto the stack, as does the subsequent instruction. The f32.add instruction pops two values from the stack, adds them together, then pushes the value back on the stack.
WebAssembly has just four numeric types, two integer, two floats.
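The stack-machine execution of the `add` function above can be sketched in Python (a toy interpreter, not a real wasm runtime; Python floats stand in for f32):

```python
# get_local pushes a local onto the stack; f32.add pops two values
# and pushes their sum.

def run_add(a, b):
    locals_ = [a, b]
    stack = []
    program = [("get_local", 0), ("get_local", 1), ("f32.add",)]
    for instr in program:
        if instr[0] == "get_local":
            stack.append(locals_[instr[1]])
        elif instr[0] == "f32.add":
            y, x = stack.pop(), stack.pop()
            stack.append(x + y)
    return stack.pop()  # the function result is the remaining stack value

print(run_add(1.5, 2.25))  # 3.75
```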
TODO: finish reading
https://blog.scottlogic.com/2019/05/17/webassembly-compiler.html
https://github.com/ColinEberhardt/chasm
Pi spigot computation
https://math.stackexchange.com/questions/1585749/digits-of-pi-using-integer-arithmetic
http://www.cs.ox.ac.uk/jeremy.gibbons/publications/spigot.pdf
https://benchmarksgame-team.pages.debian.net/benchmarksgame/description/pidigits.html
http://mathworld.wolfram.com/PiDigits.html
https://salsa.debian.org/benchmarksgame-team/benchmarksgame/
https://en.wikipedia.org/wiki/Bailey–Borwein–Plouffe_formula
https://en.wikipedia.org/wiki/Chronology_of_computation_of_π
Macros with Elixir
Ashton Wiersdorf gives an overview on macros in Elixir. Macros in Lisp and its successors define transformations on the AST whereas C macros operate at the lexical level.
Ashton recommends Metaprogramming Elixir by Chris McCord and On Lisp by Paul Graham. Graham shows how to make your own DSL and implements Prolog in Lisp, using macros to take heavy advantage of compile-time optimizations.
https://ashton.wiersdorf.org/macros-with-elixir/
GHC runtime optimizations
Andreas Klebinger describes his experience optimizing GHC on the Well-Typed blog.
He introduced loop recognition by SCC and dominator analysis.
Loops are important for two reasons:
They are good predictors of runtime behavior.
Most execution time is spent in loops.
Combined, this means identifying loops allows for some big wins. Not only can we do a better job at optimizing the code involving them. The code in question will also be responsible for most of the instructions executed making this even better.
Last year I made the native backend "loop aware". In practice this meant GHC would perform strongly connected components (SCC) analysis on the control flow graph.
This allowed us to identify blocks and control flow edges which are part of a loop.
In turn this means we can optimize loops at the cost of non-looping code for a net performance benefit.
However SCC can not determine loop headers, back edges or the nesting level of nested loops which means we miss out on some potential optimizations.
This meant we sometimes ended up placing the loop header in the middle of the loop code. As in code blocks would be laid out in order 2->3->1. This isn't as horrible as it sounds. Loops tend to be repeated many times and it only adds two jumps overhead at worst. But sometimes we bail out of loops early and then the constant overhead matters. We also couldn't optimize for inner loops as SCC is not enough to determine nested loops.
Nevertheless, being aware of loops at all far outweighed the drawbacks of this approach. As a result, this optimization made it into GHC 8.8.
This year I fixed these remaining issues. Based on dominator analysis we can now not only determine if a block is part of a loop. We can also answer what loop it is, how deeply nested that loop is and determine the loop header.
As a consequence we can prioritize the inner most loops for optimizations, and can also estimate the frequency with which all control flow edges in a given function are taken with reasonable accuracy.
Andreas also reduced storage size of integers in interface files by using a variable length encoding that uses the eighth bit of every byte in an integer to denote whether the number continues.
https://www.well-typed.com/blog/2019/10/summer-of-runtime-performance/
Simple SMT solver for optimizing compiler
Edsko de Vries demonstrates the creation of a simple SMT solver in Haskell for use in an optimizing compiler.
It is able to transform the following
if a == 0 then
if !(a == 0) && b == 1 then
https://www.well-typed.com/blog/2014/12/simple-smt-solver/
Monads
(stub) https://stackoverflow.com/questions/2704652/monad-in-plain-english-for-the-oop-programmer-with-no-fp-background To read: https://ericlippert.com/2013/02/21/monads-part-one/
C# functional constructs
Cepheid variable esolang
Project idea: develop an esoteric programming language that would have "Cepheid variables" whose values oscillate predictably as a reference to Cepheid variable stars that pulsate in diameter and temperature with a stable period and amplitude. The mechanics could be like Malbolge, although less sadistic.
{project}{esolang}
OpenRocket model rocket simulator
OpenRocket is a model rocket simulator to test and design rockets before launching.
http://openrocket.info/
https://github.com/openrocket/openrocket
C++ xvalues, glvalues, and prvalues
(stub) https://stackoverflow.com/questions/3601602/what-are-rvalues-lvalues-xvalues-glvalues-and-prvalues
Building a compiler with ANTLR and Kotlin
https://tomassetti.me/parse-tree-abstract-syntax-tree/
Three-valued logic
In three-valued logic systems, there are three truth values indicating true, false, and indeterminate.
a b   a∧b  a∨b  a⊕b  a⇔b  a⇒b
F F    F    F    F    T    T
F ?    F    ?    ?    ?    T
F T    F    T    T    F    T
? F    F    ?    ?    ?    ?
? T    ?    T    ?    ?    T
T F    F    T    T    F    F
T ?    ?    T    ?    ?    ?
T T    T    T    F    T    T
Package tribool implements three-valued logic in Go: https://github.com/grignaak/tribool
https://en.wikipedia.org/wiki/Three-valued_logic
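A minimal Go sketch of Kleene three-valued logic, consistent with the table above (this is an illustration, not the API of the tribool package):

```go
package main

import "fmt"

// Tri is a Kleene three-valued truth value.
type Tri int

const (
	False Tri = iota
	Unknown
	True
)

// And is the minimum of the two values (Kleene conjunction).
func And(a, b Tri) Tri {
	if a < b {
		return a
	}
	return b
}

// Or is the maximum of the two values (Kleene disjunction).
func Or(a, b Tri) Tri {
	if a > b {
		return a
	}
	return b
}

// Not swaps True and False and leaves Unknown unchanged.
func Not(a Tri) Tri {
	return True - a
}

// Implies is defined as Or(Not(a), b) in Kleene logic.
func Implies(a, b Tri) Tri {
	return Or(Not(a), b)
}

func main() {
	fmt.Println(And(True, Unknown) == Unknown)  // true
	fmt.Println(Or(False, Unknown) == Unknown)  // true
	fmt.Println(Implies(Unknown, True) == True) // true
}
```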
https://github.com/sveltejs/svelte
https://svelte.dev
VSCodium
VSCodium is a collection of scripts to automatically build Visual Studio Code into freely-licensed binaries without telemetry or Microsoft-specific functionality or branding.
https://github.com/VSCodium/vscodium
RealWorld example apps
The RealWorld project is a specification of a Medium.com clone and implementations in many languages. The front-end and back-end languages can be swapped because each implementation follows the same API.
The current most popular front-ends are React / Redux, Angular, and Elm while the most popular back-ends are Node / Express, Go / Gin, and ASP.NET Core.
https://github.com/gothinkster/realworld
Elm front-end:
https://github.com/rtfeldman/elm-spa-example
https://dev.to/rtfeldman/tour-of-an-open-source-elm-spa
Project idea: there is not yet an implementation for a front-end written in Go and compiled to WebAssembly. I've also been interested in learning Elm, so this may be a good chance to do so.
Spectacle window organizer
Spectacle is a window organizer for macOS that adds keyboard shortcuts for moving and resizing windows.
https://github.com/eczarny/spectacle
https://www.spectacleapp.com
Go string to []byte conversion without allocation
Clarification on unsafe conversion between string <-> []byte
https://groups.google.com/forum/#!topic/golang-nuts/Zsfk-VMd_fU
https://github.com/fmstephe/unsafeutil
https://github.com/jfcg/sixb
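A sketch of the zero-copy conversions, assuming Go 1.20+ for unsafe.String, unsafe.StringData, and unsafe.Slice; the caveats in the comments are what the mailing list thread warns about:

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesToString reinterprets b as a string without copying. The caller
// must guarantee b is never mutated afterward, since the runtime
// assumes strings are immutable.
func bytesToString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(&b[0], len(b))
}

// stringToBytes reinterprets s as a read-only byte slice without
// copying. Writing to the result is undefined behavior (string data
// may live in read-only memory).
func stringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	fmt.Println(bytesToString([]byte("hello")))
	fmt.Println(string(stringToBytes("world")))
}
```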
History Trends Unlimited
Google Chrome history is limited to the past three months, but using the History Trends Unlimited extension, history can be archived and searched indefinitely. The data is stored in a local database and can easily be imported or exported.
https://chrome.google.com/webstore/detail/history-trends-unlimited/pnmchffiealhkdloeffcdnbgdnedheme
To manually import history from Chrome's History SQLite database, the data must first be fit into a tab-separated format.
sqlite> .open History
sqlite> .mode tabs
sqlite> .output /path/to/archived_history.tsv
sqlite> SELECT u.url, v.visit_time, v.transition
FROM urls u INNER JOIN visits v ON u.id=v.url
WHERE u.hidden=0 ORDER BY u.url;
https://www.addictivetips.com/windows-tips/analyze-chrome-history-trends-over-all-time/
FiraCode version 2
Version 2 of FiraCode adds the long-awaited stylistic sets along with 136 new glyphs and 55 updated glyphs. Included in the update is a fix for an issue that I reported with the division slash character, commonly used in Go assembly.
https://github.com/tonsky/FiraCode/releases/tag/2
https://github.com/tonsky/FiraCode/issues/805
Nikita Prokopov describes how to enable stylistic sets in various editors. In VS Code, this can be done by injecting custom CSS into the editor with an extension.
https://github.com/tonsky/FiraCode/issues/617#issuecomment-527469961
https://marketplace.visualstudio.com/items?itemName=be5invis.vscode-custom-css
Detect webcam and microphone usage
Oversight detects usage of the webcam and microphone on macOS. When a process accesses the built-in webcam or microphone, Oversight alerts the user and provides options to allow or block the access.
https://objective-see.com/products/oversight.html
http://osxdaily.com/2017/02/21/detect-camera-microphone-activity-mac-oversight/
The camera and microphone can be disabled entirely in system files, but that prevents legitimate usage.
http://osxdaily.com/2017/03/01/disable-mac-camera-completely/
Hindley–Milner type system
(stub) Used by ML and Haskell.
Types are a partial order, so with polymorphism, the type of an expression can be determined by finding the smallest value in the set of all possible types for that expression.
https://en.wikipedia.org/wiki/Hindley–Milner_type_system
Building Whitespace interpreter source
The official Whitespace interpreter written in Haskell is rather dated and needs some fixes to compile with modern GHC. Using the Haskell API search engine Hoogle, the updated module names can be found.
import System(getArgs)
{- becomes: -}
import System.IO
import System.Environment(getArgs)
Additionally, the compiler option -fvia-C is no longer supported and can be removed from the Makefile.
https://web.archive.org/web/20150717140342/http://compsoc.dur.ac.uk/whitespace/download.php
https://stackoverflow.com/questions/9555671/ghc-7-4-update-breaks-haskell98
https://hoogle.haskell.org
JetBrains MPS visual DSL IDE
JetBrains MPS (Meta Programming System) is an IDE that enables the creation and use of visual DSLs. Code is maintained as an AST rather than in a textual format so that graphical elements like tables, diagrams, matrices, or equations can be embedded in the code.
https://www.jetbrains.com/mps/concepts/
https://confluence.jetbrains.com/display/MPS/MPS+publications+page
Similar to MPS, DrRacket supports graphical elements such as images in program source. Languages such as Scratch or GameMaker that are composed entirely of graphical blocks could potentially be represented with MPS.
https://docs.racket-lang.org/drracket/Graphical_Syntax.html
{DSL}{IDE}
Gradual memory management
(stub) "A Framework for Gradual Memory Management" https://drive.google.com/file/d/0B_4wx_3dTGICWG1Ddk81Rnh0YzA/view
Gradual programming
Will Crichton writes that programming is a gradual process. The program evolves as it develops, but decisions made early, like whether to use static or dynamic typing, are difficult to change later. He believes that the largest problem in the field of programming languages research is that it fails to view programming languages through the lens of human-computer interaction.
I hold this fundamental belief: programming languages should be designed to match the human programming process. We should seek to understand how people think about programs and determine what programming processes come intuitively to the human mind. There are all sorts of fascinating questions here, like:
Is imperative programming more intuitive to people than functional programming? If so, is it because it matches the way our brains are configured, or because it's simply the most common form of programming?
How far should we go to match people's natural processes versus trying to change the way people think about programming?
How impactful are comments in understanding a program? Variable names? Types? Control flow?
Crichton lists six axes of evolution:
Concrete / abstract
Anonymous / named
Imperative / declarative
Dynamically typed / statically typed
Dynamically deallocated / statically deallocated
General-purpose / domain-specific
Imperative and declarative:
For a multitude of reasons, straight-line, sequential imperative code appears to come more naturally to programmers than functional/declarative code in their conceptual program model. For example, a simple list transformation will likely use for loops:
in_l = [1, 2, 3, 4]
out_l = []
for x in in_l:
if x % 2 == 0:
out_l.append(x * 2)
Whereas a more declarative version will abstract away the control flow into domain-specific primitives:
out_l = map(lambda x: x * 2, filter(lambda x: x % 2 == 0, in_l))
The distinction between the two is not just stylistic - declarative code is usually much more easily analyzed for structure, e.g. a map is trivially parallelizable whereas a general for loop less so. This transformation occurs most often in languages which support mixed imperative/functional code (at the very least closures).
Memory safety:
In 2018, all programming languages should be memory safe, with the only question being whether memory deallocation is determined at compile time (i.e. like Rust, with a borrow checker) or at run time (i.e. like every other language, with a garbage collector). Garbage collection is unquestionably a productivity boost for programmers, as it's natural that our initial program model shouldn't have to consider exactly how long each value should live before deallocation.
Gradual programming vision:
In that light, advancing gradual programming entails the following research process:
Identify parts of the programming process that change gradually over time, but currently require undue overhead or switching languages to adapt.
Develop language mechanisms that enable programmers to gradually move along a particular axis within a homogeneous programming environment.
Empirically validate against real programmers whether the proposed mechanisms match the hypothesized programming process in practice.
http://willcrichton.net/notes/gradual-programming/
JavaScript source map structure
JavaScript source maps provide a mapping from the compressed JavaScript served to the client to the source symbols and names. Languages that compile to JavaScript can also emit source maps.
The mappings are specified compactly using Base64 VLQ (Variable Length Quantity). Each segment consists of 1, 4, or 5 fields:
Generated column
Original file
Original line number
Original column
Original name, if available
To store many large numbers in a small space, each value is split into groups of bits with a continuation bit marking whether the value extends into the next group, a space-saving technique with its roots in the MIDI format.
https://www.html5rocks.com/en/tutorials/developertools/sourcemaps/#toc-anatomy
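A sketch of Base64 VLQ decoding in Go; decodeVLQ is a hypothetical helper name, and the layout (5 data bits plus a continuation bit per Base64 digit, sign in the low bit of the assembled value) follows the source map format:

```go
package main

import "fmt"

const b64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

// decodeVLQ decodes a Base64 VLQ segment string into signed values.
// Each Base64 digit carries 5 data bits; bit 0x20 is the continuation
// flag, and the low bit of the assembled value is the sign.
func decodeVLQ(s string) []int {
	rev := make(map[byte]int, len(b64))
	for i := 0; i < len(b64); i++ {
		rev[b64[i]] = i
	}
	var out []int
	value, shift := 0, 0
	for i := 0; i < len(s); i++ {
		d := rev[s[i]]
		value |= (d & 0x1F) << shift
		if d&0x20 != 0 {
			shift += 5
			continue
		}
		n := value >> 1
		if value&1 != 0 {
			n = -n
		}
		out = append(out, n)
		value, shift = 0, 0
	}
	return out
}

func main() {
	fmt.Println(decodeVLQ("AAgBC")) // [0 0 16 1]
}
```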
Rust as a compilation target
(stub) http://willcrichton.net/notes/rust-the-new-llvm/
Automatic reference counting
(stub) https://stackoverflow.com/questions/7874342/what-is-the-difference-between-objective-c-automatic-reference-counting-and-garb
Command-line weather
Querying wttr.in displays the weather for your current location in colored ASCII art.
curl wttr.in
https://github.com/chubin/wttr.in
Nim programming language
Nim (formerly named Nimrod) is an imperative, general-purpose, multi-paradigm, statically typed, systems, compiled programming language designed and developed by Andreas Rumpf. It is designed to be "efficient, expressive, and elegant", supporting metaprogramming, functional, message passing, procedural, and object-oriented programming styles by providing several features such as compile time code generation, algebraic data types, a foreign function interface (FFI) with C and C++, and compiling to C, C++, Objective-C, and JavaScript.
https://en.wikipedia.org/wiki/Nim_%28programming_language%29
Features listed on its website include:
Fast deferred reference counting memory management.
Iterators are compiled to inline loops.
Compile-time evaluation of user-defined functions.
Preference of value-based data types allocated on the stack.
Macro system allows direct manipulation of the AST.
Supports backends including compilation to C, C++, and JavaScript.
https://nim-lang.org
Choose from a deferred RC'ing [reference counting] garbage collector that is fast, incremental and pauseless; or a soft real-time garbage collector that is deterministic allowing you to specify its max pause time; and many others.
A chart shows Nim as having the best garbage collector pause times and memory usage compared to Go, Java, and Haskell.
https://nim-lang.org/features.html
{Nim}{PL}{garbage collection}
Unicode capacity
(stub) 21-bit width: https://www.infoq.com/presentations/unicode-history/
0xxxxxxx 0x00..0x7F Only byte of a 1-byte character encoding
10xxxxxx 0x80..0xBF Continuation byte: one of 1-3 bytes following the first
110xxxxx 0xC0..0xDF First byte of a 2-byte character encoding
1110xxxx 0xE0..0xEF First byte of a 3-byte character encoding
11110xxx 0xF0..0xF7 First byte of a 4-byte character encoding
https://stackoverflow.com/questions/5290182/how-many-bytes-does-one-unicode-character-take
Four column ASCII
The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. (Wikipedia)
NUL @ ` 00000
SOH ! A a 00001
STX " B b 00010
ETX # C c 00011
EOT $ D d 00100
ENQ % E e 00101
ACK & F f 00110
BEL ' G g 00111
BS ( H h 01000
TAB ) I i 01001
LF * J j 01010
VT + K k 01011
FF , L l 01100
CR - M m 01101
SO . N n 01110
SI / O o 01111
DLE 0 P p 10000
DC1 1 Q q 10001
DC2 2 R r 10010
DC3 3 S s 10011
DC4 4 T t 10100
NAK 5 U u 10101
SYN 6 V v 10110
ETB 7 W w 10111
CAN 8 X x 11000
EM 9 Y y 11001
SUB : Z z 11010
ESC ; [ { 11011
FS < \ | 11100
GS = ] } 11101
RS > ^ ~ 11110
US ? _ DEL 11111
https://garbagecollected.org/2017/01/31/four-column-ascii/
http://www.catb.org/esr/faqs/things-every-hacker-once-knew/
Ceiling division
Python has floor division using the // operator. The naïve approach for ceiling division is to use math.ceil, which converts the operands to floating point and would lose precision beyond 53 bits. Instead, flip the signs to do upside-down floor division.
def ceil_div(a, b):
return -(-a // b)
https://stackoverflow.com/questions/14822184/is-there-a-ceiling-equivalent-of-operator-in-python
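The sign-flipping trick relies on floor division; in Go, where / truncates toward zero, the usual integer-only idiom for positive operands biases the numerator instead. A sketch:

```go
package main

import "fmt"

// ceilDiv returns the ceiling of a/b for positive integers without
// floating point. Go's / truncates toward zero rather than flooring,
// so we add b-1 to the numerator before dividing.
func ceilDiv(a, b int) int {
	return (a + b - 1) / b
}

func main() {
	fmt.Println(ceilDiv(7, 2), ceilDiv(6, 2), ceilDiv(1, 8)) // 4 3 1
}
```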
Reverse Engineering for Beginners
"Reverse Engineering for Beginners", a free ebook written by Dennis Yurichev, alternatively known as "Understanding Assembly Language", covers reverse engineering techniques with many real world examples included.
https://beginners.re
Yurichev also has published several other ebooks including "Math for Programmers":
https://yurichev.com/writings/Math-for-programmers.pdf
SMT Solvers
https://en.wikipedia.org/wiki/Satisfiability_modulo_theories
https://yurichev.com/writings/SAT_SMT_by_example.pdf
https://github.com/DennisYurichev/SAT_SMT_by_example
Go bindings for Z3: https://github.com/mitchellh/go-z3
Abstract interpretation
(stub) https://en.wikipedia.org/wiki/Abstract_interpretation
Curry language
(stub) Created by the same guy as ALF.
Algebraic Logic Functional programming language
ALF is a language designed by Michael Hanus that combines functional and logic programming paradigms.
ALF is a language which combines functional and logic programming techniques. The foundation of ALF is Horn clause logic with equality which consists of predicates and Horn clauses for logic programming, and functions and equations for functional programming. Since ALF is a genuine integration of both programming paradigms, any functional expression can be used in a goal literal and arbitrary predicates can occur in conditions of equations. The operational semantics of ALF is based on the resolution rule to solve literals and narrowing to evaluate functional expressions. In order to reduce the number of possible narrowing steps, a leftmost-innermost basic narrowing strategy is used which can be efficiently implemented. Furthermore, terms are simplified by rewriting before a narrowing step is applied and also equations are rejected if the two sides have different constructors at the top. Rewriting and rejection can result in a large reduction of the search tree. Therefore this operational semantics is more efficient than Prolog's resolution strategy.
https://www.informatik.uni-kiel.de/~mh/systems/ALF/
https://en.wikipedia.org/wiki/Algebraic_Logic_Functional_programming_language
Semantic resolution trees
The Wikipedia page on semantic resolution trees is an empty stub.
https://en.wikipedia.org/wiki/Semantic_resolution_tree
LLVM zero division
(stub) http://llvm.1065342.n5.nabble.com/Integer-divide-by-zero-td56495.html
Turing machines or lambda calculus
Kimball Germane remarked that I seem like a Turing machine kind of guy as opposed to lambda calculus. I concern myself with the low-level details of a program and compiler optimizations in a similar manner to the mechanical-like Turing machines. He says that Turing machines are for machines and lambda calculus is for humans.
Parse tree and AST distinction
The parse tree (also known as "concrete syntax tree") is a concrete representation of the input, retaining all of the information of the input. The AST is an abstract representation of the input, so derivable associations like parentheses or whitespace need not be present.
https://stackoverflow.com/questions/5026517/whats-the-difference-between-parse-tree-and-ast
I have read section 1 and half of section 2: https://llvm.org/docs/tutorial/MyFirstLanguageFrontend/LangImpl02.html
Specifically, GCC suffers from layering problems and leaky abstractions: the back end walks front-end ASTs to generate debug info, the front ends generate back-end data structures, and the entire compiler depends on global data structures set up by the command line interface.
Install-time optimization is the idea of delaying code generation even later than link time, all the way to install time, as shown in Figure 11.7. Install time is a very interesting time (in cases when software is shipped in a box, downloaded, uploaded to a mobile device, etc.), because this is when you find out the specifics of the device you're targeting. In the x86 family for example, there are broad variety of chips and characteristics. By delaying instruction choice, scheduling, and other aspects of code generation, you can pick the best answers for the specific hardware an application ends up running on.
http://www.aosabook.org/en/llvm.html
https://github.com/tinygo-org/go-llvm
https://riptutorial.com/de/llvm
mentions vector ops: https://idea.popcount.org/2013-07-24-ir-is-better-than-assembly/
https://github.com/llir/grammar/blob/master/ll.tm
go LLVM IR https://blog.gopheracademy.com/advent-2018/llvm-ir-and-go/
Rob Pike and Robert Griesemer talk on the Go Time podcast's 100th episode about the history, influence, and future of Go.
Go is the first language to enforce formatting through an external tool like Gofmt. Gofmt enables refactoring and language change updates using Gofix.
https://changelog.com/gotime/100
Announced in golang-nuts mailing list: https://groups.google.com/forum/#!topic/golang-nuts/UWtBKoQp8wk
Immutability in JavaScript
Object.freeze freezes an object, preventing addition and removal of properties and prevents changes to existing properties or the prototype.
Object.seal seals an object, preventing addition of new properties and marking all existing properties as non-configurable. Values of present properties can still be changed as long as they are writable.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/seal
https://opensource.com/article/17/6/functional-javascript
macOS dynamic wallpapers
Marcin Czachurski, in a three-part article, describes the dynamic wallpaper feature added in macOS Mojave. The default dynamic wallpaper is a shot of Mojave that changes based on the time of day. Each frame stores the altitude and azimuth to accurately update the image according to the local position of the sun.
https://itnext.io/macos-mojave-dynamic-wallpaper-fd26b0698223
https://itnext.io/macos-mojave-dynamic-wallpapers-ii-f8b1e55c82f
https://itnext.io/macos-mojave-wallpaper-iii-c747c30935c4
Dependently typed
http://www.ats-lang.org
https://en.wikipedia.org/wiki/ATS_%28programming_language%29
Darcs version control system
Darcs is a distributed version control system developed by physicist David Roundy. It is based on a patch algebra with patches in a repository as a partially ordered set.
The patches of a repository are linearly ordered. Darcs automatically calculates whether patches can be reordered (an operation called commutation), and how to do it.
The notion of dependency between patches is defined syntactically. Intuitively, a patch B depends on another patch A if A provides the content that B modifies. This means that patches that modify different parts of the code are considered, by default, independent. To address cases when this is not desirable, Darcs enables the user to specify explicit dependencies between patches.
https://en.wikipedia.org/wiki/Darcs
http://darcs.net/Using/Model
Darcs merging is similar to rebasing in Git, though unlike Git, reordering patches does not change patch identities. Also, Darcs repositories can operate in lazy mode where history is retrieved on demand.
http://darcs.net/DifferencesFromGit
Bitwise set operations
Bit sets can be manipulated using bitwise operators in a manner similar to sets.
a | b a ∪ b Union
a & b a ∩ b Intersection
a ^ b a ⊕ b Exclusive or
a &^ b a - b Difference
^a U - a Complement
popcnt(a) |a| Cardinality
Bitwidth analyzing compiler
Mark Stephenson, Jonathan Babb, and Saman Amarasinghe. Bitwidth analysis with application to silicon compilation. In Proceedings of the ACM SIGPLAN 2000 Conference on Programming Language Design and Implementation, pages 108-120, 2000.
We ease loop identification in SSA form by converting all φ-functions that occur in loop headers to μ-functions.
Projects using OCaml
The reference WebAssembly interpreter is written in OCaml.
https://github.com/WebAssembly/spec/blob/master/interpreter/README.md
https://github.com/WebAssembly/spec/blob/master/interpreter/syntax/ast.ml
https://github.com/WebAssembly/design/blob/master/Semantics.md
The static JavaScript typechecker Flow is written in OCaml and compiled to JavaScript.
https://github.com/facebook/flow
XQuery XML query language
XQuery is a functional query language that queries and transforms structured and unstructured data, usually in the form of XML.
https://en.wikipedia.org/wiki/XQuery
https://www.w3schools.com/xml/xquery_intro.asp
WebSQL and WebOQL web query languages
WebSQL and WebOQL are declarative programming languages designed to support web querying and restructuring.
[WebSQL] aimed to combine the content-based queries of search engines with structure-based queries similar to what one would find in a database system. The language combines conditions on text patterns appearing within documents with graph patterns describing link structure.
WebSQL proposes to model the web as a relational database composed of two (virtual) relations: Document and Anchor. The Document relation has one tuple for each document in the web and the Anchor relation has one tuple for each anchor in each document in the web. This relational abstraction of the web allows us to use a query language similar to SQL to pose the queries.
If Document and Anchor were actual relations, we could simply use SQL to write queries on them. But since the Document and Anchor relations are completely virtual and there is no way to enumerate them, we cannot operate on them directly. The WebSQL semantics depends instead on materializing portions of them by specifying the documents of interest in the FROM clause of a query. The basic way of materializing a portion of the web is by navigating from known URL's. Path regular expressions are used to describe this navigation. An atom of such a regular expression can be of the form d1 => d2, meaning document d1 points to d2 and d2 is stored on a different server from d1; or d1 -> d2, meaning d1 points to d2 and d2 is stored on the same server as d1.
SELECT d.url,e.url,a.label
FROM Document d SUCH THAT
"www.mysite.start" ->* d,
Document e SUCH THAT d => e,
Anchor a SUCH THAT a.base = d.url
WHERE a.href = e.url AND a.label = "label";
[WebSQL] treats web pages as atomic objects with two properties: they contain or do not contain certain text patterns, and they point to other objects. ... [WebOQL] goes beyond WebSQL in two significant ways. First, it provides access to the internal structure of the web objects it manipulates. Second, it provides the ability to create new complex structures as a result of a query. Since the data on the web is commonly semistructured, the language emphasizes the ability to support semistructured features.
The main data structure provided by WebOQL is the hypertree. Hypertrees are ordered arc-labeled trees with two types of arcs, internal and external. Internal arcs are used to represent structured objects and external arcs are used to represent references (typically hyperlinks) among objects. Arcs are labeled with records.
Sets of related hypertrees are collected into webs. Both hypertrees and webs can be manipulated using WebOQL and created as the result of a query.
WebOQL is a functional language, but queries are couched in the familiar select-from-where form.
[Tag:"result" /
select y
from x in browse("file:pubs.xml") via ^*[tag = "publications"],
y in x',
z in y'
where z.tag = "author" and z.value ~ "Smith"
https://www.w3.org/TandS/QL/QL98/pp/wql.html
Uses for spare computers
Central backup server with redundancy
Penetration testing target
Virus-infected sandbox
Collaborative remote desktop
Idea lists:
https://news.ycombinator.com/item?id=430636
https://fossbytes.com/10-things-to-do-with-an-old-computer/
https://explainxkcd.com/wiki/index.php/350:_Network
CollabVM allows you to control remote machines through a website and perform actions by voting:
http://computernewb.com/collab-vm/
https://github.com/computernewb/collab-vm-server
https://web.archive.org/web/20190310182342/http://computernewb.com:80/collab-vm/download/
Fossil source code manager
Used by and designed for SQLite; relational.
https://www.fossil-scm.org/home/doc/trunk/www/index.wiki
https://sqlite.org/src/doc/trunk/README.md
https://en.wikipedia.org/wiki/Fossil_(software)
CRAPL academic-strength open-source licence
http://matt.might.net/articles/crapl/
https://www.software.ac.uk/news/2013-05-31-crapl-academic-strength-open-source-licence
Red-Black tree deletion
Marc Nieper-Wißkirchen talks (in German) at Curry Club Augsburg about Kimball Germane and Matthew Might's paper on deletions in red-black trees and implements the algorithm in Scheme.
https://www.youtube.com/watch?v=JOiURKrhnSo
http://matt.might.net/papers/germane2014deletion.pdf
http://matt.might.net/articles/red-black-delete/
https://gitlab.com/nieper/immutable-maps
As a joke, when he launches vi, it errors and running vim launches emacs instead.
/bin/bash: computer has not enough memory to run process: vi
Coq proof language
Kimball Germane provides an introduction to the Coq theorem prover.
Coq implements a dependently-typed strongly-normalizing programming language that allows users to express formal specifications of programs. Coq assists the user in finding artifacts that meet a specification and from which it can extract a certified implementation in Haskell, Racket, or OCaml automatically. This talk will iterate through a series of increasingly-precise specifications of a commonly-used function and the experience of a Coq user meeting these specifications.
Defining natural numbers in terms of successor:
Inductive nat : Set :=
| O : nat
| S : nat -> nat.
Eval compute in O.
Eval compute in (S O).
Eval compute in (S (S (S O))). (* 3 *)
Proving associativity of addition using induction:
Fixpoint add (a b : nat) : nat :=
match a with
| O => b
| S n => S (add n b)
end.
Theorem add_assoc : forall (a b c : nat),
(add a (add b c)) = (add (add a b) c).
Proof.
intros a b c.
induction a. simpl. reflexivity.
simpl. rewrite -> IHa. reflexivity.
Qed.
Projects using Coq and resources:
(* notable projects *)
Vellvm
Verified Software Toolchain
seL4 and CertiKOS - verified kernels
(* Resources *)
Coq Reference Manual
Coq'Art - Interactive Theorem Proving and Program Development
Certified Programming with Dependent Types - Adam Chlipala
Software Foundations
I wanted to talk a little about some cool projects that are being done with Coq. Vellvm is verified LLVM, so that's what it sounds like; Verified Software Toolchain and Ynot are frameworks to reason about C programs and imperative programs; Bedrock reasoning about assembly language; there are some verified kernels that are fully verified; and CompCert is a gigantic project which aimed to have a formalized C compiler and they succeeded.
One of the researchers at the University of Utah, John Regehr - they do hardening for compilers and fuzzing for C compilers, and CompCert had the fewest number of bugs and the only bugs it had were in the unverified parts, which was the front-end parser, and since then, that project has verified their front-end parser. So there's good reason to think that there are not mistakes in CompCert.
They [CompCert] define the semantics for C and the semantics for assembly and their high-level proof is that the semantics are preserved across compilation. The chain includes register allocation, so it has stuff about graph coloring - on their website there's a diagram that shows all of the phases that they've proven.
He mentions the Idris programming language as being dependently typed similar to Coq.
Idris is a dependently typed programming language that is getting more popular and I wonder if they would extract to Idris differently where you had to provide a proof.
https://www.youtube.com/watch?v=ngM2N98ppQE
Gemini Guidance Computer
(stub) https://en.m.wikipedia.org/wiki/Gemini_Guidance_Computer
Saturn Launch Vehicle Digital Computer
Memory was in the form of 13-bit syllables, each with a 14th parity bit. Instructions were one syllable in size, while data words were two syllables (26 bits).
The LVDC was highly redundant and reliable:
For reliability, the LVDC used triple-redundant logic and a voting system. The computer included three identical logic systems. Each logic system was split into a seven-stage pipeline. At each stage in the pipeline, a voting system would take a majority vote on the results, with the most popular result being passed on to the next stage in all pipelines. This meant that, for each of the seven stages, one module in any one of the three pipelines could fail, and the LVDC would still produce the correct results.
There are 18 simple instructions.
https://en.m.wikipedia.org/wiki/Saturn_Launch_Vehicle_Digital_Computer
Apparently, the LVDC was hand-compiled:
Young (American) programmers just out of college were then employed to manually compile the FORTRAN program into the assembly language of the embedded LVDC CPU, and presumably to make whatever other adjustments are needed when you pass from the computing environment of a large computer to a smaller embedded system.
http://apollo.josefsipek.net/LVDC.html
Backtracking regexp in Go
Doug Clark wrote a regular expression library for Go that allows backtracking but does not have the linear-time guarantees of the built-in regexp package.
https://github.com/dlclark/regexp2
https://groups.google.com/forum/#!topic/golang-nuts/MAW8Tj7KIfY
GMP pi computation
GMP can be used to compute pi and is the fastest implementation of those surveyed by Nick Craig-Wood.
curl https://gmplib.org/download/misc/gmp-chudnovsky.c --output gmp-chudnovsky.c
gcc -s -Wall -o gmp-chudnovsky gmp-chudnovsky.c -lgmp -lm
https://gmplib.org/pi-with-gmp.html
https://www.craig-wood.com/nick/articles/pi-chudnovsky/
https://ubuntuforums.org/showthread.php?t=2209074
Ubuntu on Raspberry Pi 4
The Raspberry Pi 4 changed to 64-bit, so most operating systems other than the default Raspbian distribution are not currently compatible. CloudKernels walks through their process of building a 64-bit bootable Ubuntu image for the Pi 4.
https://blog.cloudkernels.net/posts/rpi4-64bit-image/
C char array and pointer
(stub) https://stackoverflow.com/questions/10186765/what-is-the-difference-between-char-array-and-char-pointer-in-c
Go memory model
(stub) https://golang.org/ref/mem
Profiling Go programs
(stub) https://blog.golang.org/profiling-go-programs
Go as a compiler construction language
Had self-compilation been an early goal of Go, it would have been a more compiler-oriented language - a design I would greatly appreciate.
Go turned out to be a fine language in which to implement a Go compiler, although that was not its original goal. Not being self-hosting from the beginning allowed Go's design to concentrate on its original use case, which was networked servers. Had we decided Go should compile itself early on, we might have ended up with a language targeted more for compiler construction, which is a worthy goal but not the one we had initially.
https://golang.org/doc/faq#What_compiler_technology_is_used_to_build_the_compilers
Early Go development
The debugger was originally named ogle. Old versions of the FAQ mention that "'Ogle' would be a good name for a Go debugger." https://web.archive.org/web/20110902121904/http://golang.org:80/pkg/exp/ogle/
A vector container package used to exist for sequential storage with specialized versions for int and string. https://web.archive.org/web/20120326025602/http://golang.org:80/pkg/container/vector/
The first commit after the hello world programs contains an early annotated Go spec. https://github.com/golang/go/blob/18c5b488a3b2e218c0e0cf2a7d4820d9da93a554/doc/go_spec
In Rob Pike's Gophercon 2014 talk "Hello, Gophers!", he discusses the language inspiration and development. https://talks.golang.org/2014/hellogophers.slide
TODO: read original spec
Go compiler naming scheme
The Go compiler borrows from the Plan 9 naming scheme:
The 6g (and 8g and 5g) compiler is named in the tradition of the Plan 9 C compilers, described in http://plan9.bell-labs.com/sys/doc/compiler.html (see the table in section 2). 6 is the architecture letter for amd64 (or x86-64, if you prefer), while g stands for Go.
https://web.archive.org/web/20100813130556/http://golang.org/doc/go_faq.html
Plan 9 compilers:
0a, 1a, 2a, 5a, 7a, 8a, ka, qa, va - assemblers
0c, 1c, 2c, 5c, 7c, 8c, kc, qc, vc - C compilers
0l, 1l, 2l, 5l, 7l, 8l, kl, ql, vl - loaders
https://en.wikipedia.org/wiki/List_of_Plan_9_applications
The Go gopher formerly named Gordon
The well known mascot of Go is called simply "the Go gopher". https://blog.golang.org/gopher
However, in its early days, it was known as "Gordon the Go Gopher". This can be seen on the homepage of Glenda, the Plan 9 Bunny, in the Internet Archive, from about 2009-12-06 to 2013-04-01.
http://glenda.cat-v.org/friends/
https://web.archive.org/web/20091206213953/http://glenda.cat-v.org:80/friends/
SSA form bibliography
Annotated bibliography of 106 papers relating to SSA form: http://www.dcs.gla.ac.uk/~jsinger/ssa.html
Go SSA form tools
https://github.com/golang/tools/blob/master/cmd/ssadump/main.go
https://godoc.org/golang.org/x/tools/go/ssa
Plan 9 applications
Plan 9 even has a file system filter rot13fs.c to transform traffic with ROT13.
https://pdos.csail.mit.edu/~rsc/plan9.html
https://pdos.csail.mit.edu/~rsc/rot13fs.c
Functional higher-order functions in Go
Using higher-order functions borrowed from functional languages, such as apply, filter, and reduce, is considered an anti-pattern in Go; for loops are preferable instead.
https://github.com/robpike/filter
Redis persistence using fork
RDB maximizes Redis performances since the only work the Redis parent process needs to do in order to persist is forking a child that will do all the rest. The parent instance will never perform disk I/O or alike.
RDB needs to fork() often in order to persist on disk using a child process. Fork() can be time consuming if the dataset is big, and may result in Redis to stop serving clients for some millisecond or even for one second if the dataset is very big and the CPU performance not great. AOF also needs to fork() but you can tune how often you want to rewrite your logs without any trade-off on durability.
Whenever Redis needs to dump the dataset to disk, this is what happens:
Redis forks. We now have a child and a parent process.
The child starts to write the dataset to a temporary RDB file.
When the child is done writing the new RDB file, it replaces the old one.
This method allows Redis to benefit from copy-on-write semantics.
Log rewriting uses the same copy-on-write trick already in use for snapshotting. This is how it works:
Redis forks, so now we have a child and a parent process.
The child starts writing the new AOF in a temporary file.
The parent accumulates all the new changes in an in-memory buffer (but at the same time it writes the new changes in the old append-only file, so if the rewriting fails, we are safe).
When the child is done rewriting the file, the parent gets a signal, and appends the in-memory buffer at the end of the file generated by the child.
Profit! Now Redis atomically renames the old file into the new one, and starts appending new data into the new file.
https://redis.io/topics/persistence
Forking in threaded applications
https://stackoverflow.com/questions/28370646/how-do-i-fork-a-go-process
http://www.serpentine.com/blog/threads-faq/the-history-of-threads/
https://stackoverflow.com/questions/10027477/how-to-fork-a-process
Binary combinatory logic esoteric lang
Binary combinatory logic is a formulation of combinatory logic using only the symbols 0 and 1.
<term> ::= 00 | 01 | 1 <term> <term>
00 represents the K operator.
01 represents the S operator.
1 <term> <term> represents the application operator (<term1> <term2>).
https://esolangs.org/wiki/Binary_combinatory_logic
{esolang}{Binary Combinatory Logic}
GrammaTech CodeSonar and CodeSurfer
(stub) https://www.grammatech.com/products/codesonar
Local development certificate generation
(stub) https://github.com/FiloSottile/mkcert
GitHub public keys vulnerable
GitHub public keys are available for anyone to access at the URL https://github.com/username.keys. Unless configured otherwise, ssh sends all public keys until one works. By storing all GitHub keys, a server can identify the client by their key.
https://github.com/FiloSottile/whosthere
https://blog.filippo.io/hi/#whatyoumighthaveused
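A common mitigation (my assumption, not necessarily what the post recommends) is to stop ssh from offering every key by default, via ~/.ssh/config; the id_ed25519 filename here is an example:

```
Host *
    IdentitiesOnly yes

Host github.com
    IdentityFile ~/.ssh/id_ed25519
```

IdentitiesOnly restricts ssh to the identity files explicitly configured for each host instead of trying every loaded key.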
macOS Catalina default shell is zsh
Starting in macOS Catalina, the default shell will be zsh. The version of Bash used by macOS is stuck on 3.2 because newer versions are licensed under GPL v3.
Scripting OS X gives an in depth walkthrough on migrating to zsh: https://scriptingosx.com/2019/06/moving-to-zsh/
Google text adventure walkthrough
On the StrategyWiki, a simple walkthrough is presented for the simple text adventure game hidden in Google Search.
https://strategywiki.org/wiki/Google_Text_Adventure/Walkthrough
Stack-based language compilation
"Compilation of Stack-Based Languages (Abschlußbericht)" by M. Anton Ertl and Christian Pirker (1998) describes techniques for compiling stack-based languages.
RAFTS is a framework for applying state of the art compiler technology to the compilation of stack-based languages like Forth and Postscript. The special needs of stack-based languages are an efficient stack representation, fast procedure calls, and fast compilation. RAFTS addresses the stack representation problem by allocating stack items to registers such that most stack accesses in the source program are register accesses in the machine language program, and by eliminating most stack pointer updates. To achieve fast calls, RAFTS performs these optimizations interprocedurally and also performs procedure inlining and tail call optimization. Fast compilation is achieved by selecting fast algorithms and implementing them efficiently.
The basic block code generation reduces the number of stack pointer updates to at most one per stack and basic block. It is possible to reduce the number much more. E.g., in procedures where all stack items are allocated to registers, no stack pointer update is needed at all. Like register allocation, stack pointer update minimization has to be performed interprocedurally to achieve a significant effect.
https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.4352
http://www.complang.tuwien.ac.at/projects/rafts.html
Chrome offline dinosaur source
The source for the offline dinosaur game in Chrome is available in Chromium and written in JavaScript. https://github.com/chromium/chromium/blob/master/components/neterror/resources/offline.js
Project idea: extract the dinosaur game into a standalone page.
The T-Rex appears to be named "Stan the Offline Dino" as referenced in several test files:
<head><title>Welcome to Stan the Offline Dino's Homepage</title></head>
https://github.com/chromium/chromium/blob/master/chromecast/browser/test/data/dynamic_title.html
Wikipedia documents the evolution of the dinosaur game:
If the user tries to browse when offline, a message is shown that they are not connected to the Internet. An illustration of the "Lonely T-Rex" dinosaur is shown at the top, designed by Sebastien Gabriel. From September 2014, tapping the dinosaur (in Android or iOS) or pressing space or ↑ (on desktop) launches a browser game called "T-Rex Runner" in which the player controls a running dinosaur by tapping the screen (in Android or iOS) or pressing space, ↑ or ↓ (on desktop) to avoid obstacles, including cacti and, from June 2015, pterodactyls. In 2016, another feature was added to the game. When the player reaches 700 points the game begins to switch between day (white background, black lines and shapes) and night (black background, white lines and shapes). During September 2018, for Google Chrome's 10th birthday, a birthday cake causing the dinosaur to wear a birthday hat when collected was added. Reaching a score of 900 will switch the colour scheme back to day, and the switch back and forth will occur at further subsequent milestones. The game is also available at the chrome://network-error/-106 and chrome://dino pages. The game's code is available on the Chromium site.
https://en.wikipedia.org/wiki/List_of_Google_Easter_eggs#Chrome
Conway's Game of Life
Project idea: implement a Conway's Game of Life simulation in Go using bit packing.
Query language research
BYU computer science professor Kimball Germane specializes in programming languages; his current research utilizes the SQLite VM bytecode to construct DSL query languages that are more expressive than SQL allows.
SQLite bytecode engine: https://sqlite.org/opcode.html
Google search Easter eggs
Wikipedia lists many Easter eggs hidden in Google search. Included is a selection of those queries:
"<blink>", "blink tag", or "blink html" includes samples of the blink element in the results.
"conway's game of life" on a desktop browser generates a running configuration of the game to the right of the search results. The process can also be stopped and altered by the user.
"google in 1998" on a desktop browser will generate a layout similar to the one Google used for its search engine in 1998.
"is google down" returns with "No".
"kerning" will add spaces between the letters of the word "kerning" in the search results.
"keming" will remove spaces between the letters of the word "keming".
"<marquee>", "marquee tag", or "marquee html" will apply the marquee element to the results count at the top of the results.
"minesweeper" will have a playable game of minesweeper. Users can select between three modes: easy, medium and hard.
"pac-man", "google pacman" or "play pacman" will show the Pac-Man related interactive Google Doodle from 2010. Clicking Insert Coin twice will enable a second player, Ms. Pac-Man.
"pluto" describes Pluto as "Our favorite dwarf planet since 2006" in the Knowledge Graph.
"recursion" includes a "Did you mean: recursion" link back to the same page.
"text adventure" or "google easter eggs" using most popular modern browsers (except Safari) and opening the browser's developer console will trigger a text-based adventure game playable within the console.
"tic tac toe" will show a playable game of tic-tac-toe. Users can select to play against the browser at different levels - "easy", "medium" or "hard" (called "impossible") - or against a friend. An alternative way to find the game is to search "shall we play a game".
https://en.wikipedia.org/wiki/List_of_Google_Easter_eggs
C main signatures
A common extension to C supported by Unix systems adds a third parameter for environment information.
int main(int argc, char *argv[], char *envp[])
Alternatively, the environment is available in <unistd.h> with extern char **environ;.
In C, a function without parameters accepts any number of arguments, so int main() accepts any arguments, whereas int main(void) accepts none. C++ treats those two forms identically.
https://stackoverflow.com/questions/2108192/what-are-the-valid-signatures-for-cs-main-function
autoplay: a learning environment for interactive fiction
Daniel Ricks developed autoplay, an environment employing machine learning to play text-based games.
https://github.com/danielricks/autoplay
Git Bash completion with aliases
Using the built-in Git function __git_complete, Bash command completion can be enabled for aliases.
alias branch='git branch'
alias checkout='git checkout'
alias check='git checkout'
alias commit='git commit'
alias diff='git diff'
alias fetch='git fetch'
alias pull='git pull'
alias push='git push'
source "$HOME/git-completion.bash"
__git_complete branch _git_branch
__git_complete checkout _git_checkout
__git_complete check _git_checkout
__git_complete commit _git_commit
__git_complete diff _git_diff
__git_complete fetch _git_fetch
__git_complete pull _git_pull
__git_complete push _git_push
See entry from 2019-05-24 for installation of Git completion.
https://stackoverflow.com/questions/342969/how-do-i-get-bash-completion-to-work-with-aliases
Todo.txt
Todo.txt is a minimal task management format with a command-line interface and mobile apps.
http://todotxt.org/
https://github.com/todotxt/todo.txt
https://github.com/todotxt/todo.txt-cli
Aheui - esoteric language in Hangul
Aheui is a grid-based programming language reflecting the graphical design of Hangul, the Korean writing system. It is the first esolang designed for Hangul.
https://github.com/aheui
https://esolangs.org/wiki/Aheui
The language specification provides an introduction into Korean orthography and lists the function of each vowel and consonant. https://aheui.readthedocs.io/en/latest/specs.en.html
Interpreters are implemented in a dozen languages, including Go, and there is even a self-interpreter written in Aheui itself:
https://github.com/aheui/goaheui
https://github.com/aheui/aheui.aheui
AVIS is a cell based editor for Aheui: https://github.com/aheui/avis
Aheui in a polyglot: https://codegolf.stackexchange.com/questions/31506/99-bottles-of-beer-99-languages/#31539
{esolang}{Aheui}
μ6 esoteric language
μ6 is a low-level esolang based on μ-recursive functions with minimal syntax like Brainf*. Commands are encoded as nibbles and base-6 integers are used.
0000 0 Digit 0
0110 [ Begin of function composition
0111 ] End of function composition
1000 / Projection
1001 . Constant zero-function
1010 + Successor-function
1011 , Pairing-function
1100 < Left of pair or const
1101 > Right of pair or const
1110 # Primitive recursion
1111 @ μ-operator (minimization)
https://github.com/bforte/mu6
https://esolangs.org/wiki/Mu6
{esolang}{μ6}
Google Chrome history database
Google Chrome history is stored locally as a SQLite3 database and can be easily exported.
cd ~/Library/Application\ Support/Google/Chrome/Default/
sqlite3 History
sqlite> .headers on
sqlite> .mode csv
sqlite> .output my-chrome-output.csv
sqlite> SELECT DATETIME(last_visit_time/1000000-11644473600, 'unixepoch', 'localtime'), url
FROM urls
ORDER BY last_visit_time DESC;
https://yuji.wordpress.com/2014/03/10/export-chrome-history-as-csv-spreadsheet/
XQuartz - X11 for macOS
The XQuartz open source project is a version of the X11 windowing system for macOS.
https://www.xquartz.org
APOD Automator script
Automator in macOS can be used to automatically download the current NASA Astronomy Picture of the Day and set it as the desktop background.
https://macosxautomation.com/automator/apod/index.html
Karabiner - macOS keyboard customizer
Karabiner is a low level keyboard customization utility for macOS.
I use it for a few simple bindings:
Fn -> Left Ctrl
Left Ctrl -> Fn
Caps Lock -> Esc
https://pqrs.org/osx/karabiner/
Nikita Voloboev demos his extensive Karabiner customizations:
https://wiki.nikitavoloboev.xyz/macos/macos-apps/karabiner
https://medium.com/@nikitavoloboev/karabiner-god-mode-7407a5ddc8f6
Go sync.Map
Map is like a Go map[interface{}]interface{} but is safe for concurrent use by multiple goroutines without additional locking or coordination. Loads, stores, and deletes run in amortized constant-time.
The Map type is specialized. Most code should use a plain Go map instead, with separate locking or coordination, for better type safety and to make it easier to maintain other invariants along with the map content.
The Map type is optimized for two common use cases:
when the entry for a given key is only ever written once but read many times, as in caches that only grow, or
when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex.
https://golang.org/pkg/sync/#Map
Go sync.Once
Once is an object that will perform exactly one action.
Do calls the function f if and only if Do is being called for the first time for this instance of Once. In other words, given var once Once, if once.Do(f) is called multiple times, only the first call will invoke f, even if f has a different value in each invocation. A new instance of Once is required for each function to execute.
https://golang.org/pkg/sync/#Once
Used by crypto/elliptic: https://golang.org/src/crypto/elliptic/elliptic.go
Go types package
Package go/types declares the data types and implements the algorithms for type-checking of Go packages.
https://godoc.org/go/types
Alan Donovan provides a detailed tutorial on the use of the package: https://github.com/golang/example/tree/master/gotypes
Info has maps to store the relationship between identifiers and objects. Only non-nil maps in Info are populated, letting API clients control the information needed from the type checker. The field Defs records declaring identifiers and Uses records referring identifiers.
type Info struct {
Defs map[*ast.Ident]Object
Uses map[*ast.Ident]Object
Implicits map[ast.Node]Object
Selections map[*ast.SelectorExpr]*Selection
Scopes map[ast.Node]*Scope
}
TODO: Continue reading at https://github.com/golang/example/tree/master/gotypes#function-and-method-types
Referring page: https://stackoverflow.com/questions/40266003/how-to-assert-ast-typespec-to-int-type-in-golang
gosec - Go security checker
Gosec inspects source code for security problems by scanning the Go AST.
https://github.com/securego/gosec
Floating point math associativity
GCC optimizes pow(a, 2) into a*a, but does not optimize pow(a, 6) or a*a*a*a*a*a into (a*a*a)*(a*a*a) because floating point math is not associative, though associativity and other optimizations can be enabled with compiler flags.
https://stackoverflow.com/questions/6430448/why-doesnt-gcc-optimize-aaaaaa-to-aaaaaa
C++ compiler division by zero optimization
The C++ compiler does not throw a division by zero exception when d == 0.
int d = 0;
d /= d;
C++ does not have a "Division by Zero" Exception to catch. The behavior you're observing is the result of Compiler optimizations:
The compiler assumes Undefined Behavior doesn't happen
Division by Zero in C++ is undefined behavior
Therefore, code which can cause a Division by Zero is presumed to not do so.
And, code which must cause a Division by Zero is presumed to never happen
Therefore, the compiler deduces that because Undefined Behavior doesn't happen, then the conditions for Undefined Behavior in this code (d == 0) must not happen
Therefore, d / d must always equal 1.
https://stackoverflow.com/questions/57628986/why-doesnt-d-d-throw-a-division-by-zero-exception-when-d-0
Go language proverbs
Rob Pike philosophizes at Gopherfest SV 2015 and provides the following proverbs for teaching or understanding Go:
Don't communicate by sharing memory, share memory by communicating.
Concurrency is not parallelism.
Channels orchestrate; mutexes serialize.
The bigger the interface, the weaker the abstraction.
Make the zero value useful.
interface{} says nothing.
Gofmt's style is no one's favorite, yet gofmt is everyone's favorite.
A little copying is better than a little dependency.
Syscall must always be guarded with build tags.
Cgo must always be guarded with build tags.
Cgo is not Go.
With the unsafe package there are no guarantees.
Clear is better than clever.
Reflection is never clear.
Errors are values.
Don't just check errors, handle them gracefully.
Design the architecture, name the components, document the details.
Documentation is for users.
Don't panic.
https://www.youtube.com/watch?v=PAAkCSZUG1c https://go-proverbs.github.io
XKCD surveys
https://blog.xkcd.com/2010/05/03/color-survey-results/
https://www.explainxkcd.com/wiki/index.php/1572:_xkcd_Survey
Yorick experimentation
Matrices in Yorick are column-major, so to transpose a column vector to a row vector, we increase the dimensionality of the matrix using a "-" pseudo-index.
> u=[1,2,3,4]
> u
[1,2,3,4]
> u(-,)
[[1],[2],[3],[4]]
Outer product of column vectors u ⊗ v = uvᵀ:
> v=[1,10,100]
> v(-,)
[[1],[10],[100]]
> u*v(-,)
[[1,2,3,4],[10,20,30,40],[100,200,300,400]]
> transpose(u*v(-,))
[[1,10,100],[2,20,200],[3,30,300],[4,40,400]]
Inner product (dot product), defined as ⟨u, v⟩ = uᵀv, is represented in Yorick using the "+" sign. The plus sign selects "the dimension to be iterated over in the summation of the inner product."
> u=[1,2,3]
> v=[4,5,6]
> u(+)*v(+)
Matrix multiplication is composed of dot products at each position and is thus represented using the plus sign. The transpose matches "normal" matrix multiplication since Yorick is column-major.
> a=[[1,2],[3,4]]
> b=[[5,6],[7,8]]
> a(+,)*b(,+)
[[19,43],[22,50]]
> transpose(a(+,)*b(,+))
The original NumPy authors were familiar with Yorick and borrowed the concept of broadcasting from Yorick. https://stackoverflow.com/questions/26948776/where-did-the-term-broadcasting-come-from/26950256#26950256
JEH-Tech explains Yorick with fantastic diagrams. https://web.archive.org/web/20170102091157/http://www.jeh-tech.com/yorick.html
Go terminal package
Package crypto/ssh/terminal provides support functions for dealing with terminals, as commonly found on UNIX systems.
https://godoc.org/golang.org/x/crypto/ssh/terminal
Yorick syntax and building
Yorick makes semicolons optional so that statements are easier to type into the terminal. When they are omitted, the lexer must insert semicolons so that the parser works correctly, which complicates the grammar significantly. This context sensitivity is called a "lexical tie-in" and is discouraged.
https://github.com/dhmunro/yorick/issues/21
http://yorick.sourceforge.net/phpBB3/viewtopic.php?p=1235#p1235
Yorick can be built from source from its repository. https://github.com/dhmunro/yorick
Simple hello world:
> print, "Hello, World!"
"Hello, World!"
Go astutil package
Package astutil contains common utilities for working with the Go AST.
https://godoc.org/golang.org/x/tools/go/ast/astutil
Split a subdirectory into a separate repo
A subdirectory can be split into a separate repo, the inverse of a repo merge. Any history existing outside of that subdirectory will not appear in the split repo. This causes problems if that folder has been moved.
git filter-branch --prune-empty --subdirectory-filter FOLDER-NAME BRANCH-NAME
https://help.github.com/en/articles/splitting-a-subfolder-out-into-a-new-repository
VSCode TextMate grammar performance
https://github.com/Microsoft/vscode/issues?q=is:issue+milestone:%22July+2019+Recovery%22+is:closed
https://github.com/microsoft/vscode/issues/78769
https://github.com/jeff-hykin/cpp-textmate-grammar/issues/343
https://github.com/microsoft/vscode-textmate/issues/104
Go Native Client support
Russ Cox describes in detail the process of implementing support for Native Client (NaCl) in Go 1.3 and the architecture restrictions that added complexity.
https://www.reddit.com/r/golang/comments/1ssynh/go_13_native_client_support/
https://docs.google.com/document/d/1oA4rs0pfk5NzUyA0YX6QsUEErNIMXawoscw9t0NHafo/pub
Go 1.13 is the last release that will run on NaCl. https://tip.golang.org/doc/go1.13
Comparing binaries and source code
http://joxeankoret.com/blog/2018/08/12/histories-of-comparing-binaries-with-source-codes/
http://joxeankoret.com/blog/2015/03/13/diaphora-a-program-diffing-plugin-for-ida-pro/
Control flow graph function matching
Joxean Koret proposes a heuristic based on the idea that "different basic blocks and edges are different interesting pieces of information". The Koret-Karamitas algorithm "КОКА" collects features at the function, basic block, edge, and instruction level, assigns a distinct prime value to each different feature, and then generates a hash from their product.
Huku classifies basic blocks in 7 categories: normal, entry points, exit points, traps, self-loops, loop heads and loop tails. In the same way, he classifies 4 different kinds of edges: basis, forward, back edges and cross-links.
Huku uses instruction histograms to classify instructions in 4 categories based on their functionality: arithmetic, logic, data transfer, and redirection.
http://joxeankoret.com/blog/2018/11/04/new-cfg-based-heuristic-diaphora/
https://census-labs.com/media/efficient-features-bindiff.pdf
JCry - a ransomware written in Go
JCry is downloaded as a fake update to Adobe Flash Player through a compromised website. It drops encryption and decryption programs into Startup, then encrypts the first 1MB of every file with a significant extension. It then demands payment for a decryption key through an onion link in a Tor browser.
https://blogs.quickheal.com/jcry-ransomware-written-golang/
https://gbhackers.com/jcry-ransomware/
https://reverseengineering.stackexchange.com/questions/206/where-can-i-as-an-individual-get-malware-samples-to-analyze
Recently while in San Francisco, I stumbled upon a Raiden II arcade machine in Musée Mécanique. As a child, one of my favorite games was Raiden X, a Flash spinoff of the Raiden series, so it was fun to play the original game.
Project idea: create a dedicated arcade machine using a Raspberry Pi, monitor, joystick, and buttons to play arcade games like the Raiden series, Pac-Man, or Dig Dug, or Flash games like Raiden X. Software like MAME (Multiple Arcade Machine Emulator) exists for arcade games, so those would be simple, but the Flash format poses issues: support has been largely dropped due to security problems in the runtime, and the PC controls would need to be mapped to a joystick and buttons.
Origins of the Raspberry Pi
The Raspberry Pi was developed to introduce more people to programming at a low cost. The $35 price was a goal early on and drove many design decisions. Later, once produced in bulk, upgrades could be made while staying within the price range. https://www.techrepublic.com/article/inside-the-raspberry-pi-the-story-of-the-35-computer-that-changed-the-world/
Transposing an 8x8 bit matrix
"Hacker's Delight", Chapter 7-3
This procedure treats the 8×8-bit matrix as 16 2×2-bit matrices and transposes each of the 16 2×2-bit matrices. The matrix is then treated as four 2×2 sub-matrices whose elements are 2×2-bit matrices and each of the four 2×2 sub-matrices are transposed. Finally, the matrix is treated as a 2×2 matrix whose elements are 4×4-bit matrices and the 2×2 matrix is transposed.
unsigned long long x;
x = x & 0xAA55AA55AA55AA55LL |
(x & 0x00AA00AA00AA00AALL) << 7 |
(x >> 7) & 0x00AA00AA00AA00AALL;
x = x & 0xCCCC3333CCCC3333LL |
(x & 0x0000CCCC0000CCCCLL) << 14 |
(x >> 14) & 0x0000CCCC0000CCCCLL;
x = x & 0xF0F0F0F00F0F0F0FLL |
(x & 0x00000000F0F0F0F0LL) << 28 |
(x >> 28) & 0x00000000F0F0F0F0LL;
Geohash in Go assembly
(stub) https://mmcloughlin.com/posts/geohash-assembly
Find nth set bit
https://stackoverflow.com/questions/45482787/how-to-efficiently-find-the-n-th-set-bit
https://stackoverflow.com/questions/38938911/portable-efficient-alternative-to-pdep-without-using-bmi2
https://graphics.stanford.edu/~seander/bithacks.html#SelectPosFromMSBRank
Efficient integer square root algorithm
https://web.archive.org/web/20121207130016/http://www.embedded.com/electronics-blogs/programmer-s-toolbox/4219659/Integer-Square-Roots
https://stackoverflow.com/questions/1100090/looking-for-an-efficient-integer-square-root-algorithm-for-arm-thumb2
Constant-time bits
Go version 1.13 guarantees execution time of Add, Sub, Mul, RotateLeft, and ReverseBytes in package math/bits to be independent of the inputs.
https://tip.golang.org/doc/go1.13
https://github.com/golang/go/issues/31267
CL 170758:
// Variable time
func Add64(x, y, carry uint64) (sum, carryOut uint64) {
	yc := y + carry
	sum = x + yc
	if sum < x || yc < y {
		carryOut = 1
	}
	return
}

// Constant time
func Add64(x, y, carry uint64) (sum, carryOut uint64) {
	sum = x + y + carry
	carryOut = ((x & y) | ((x | y) &^ sum)) >> 63
	return
}
https://golang.org/cl/170758
Go crypto/subtle
Package subtle implements functions that are often useful in cryptographic code but require careful thought to use correctly such as constant-time comparisons or copies.
func ConstantTimeByteEq(x, y uint8) int
func ConstantTimeCompare(x, y []byte) int
func ConstantTimeCopy(v int, x, y []byte)
func ConstantTimeEq(x, y int32) int
func ConstantTimeLessOrEq(x, y int) int
func ConstantTimeSelect(v, x, y int) int
https://golang.org/pkg/crypto/subtle/
Go regression testing
Nearly every fixed bug or issue has an associated test created in the test/fixedbugs directory to prevent regressions. Each test is tagged with a comment on the first line indicating the mode of testing: run, compile, errorcheck, or build. If a test has an associated directory, it becomes rundir, compiledir, etc. The level of automation and thorough nature of these tests is impressive.
https://github.com/golang/go/tree/master/test/fixedbugs
Go objdump
objdump disassembles executable files in Go's Plan 9 assembly syntax.
https://golang.org/cmd/objdump/
Go context concurrency pattern
(stub) https://blog.golang.org/context
Communicating sequential processes
(stub) https://www.youtube.com/watch?v=hB05UFqOtFA
Less is exponentially more
(stub) https://commandcenter.blogspot.com/2012/06/less-is-exponentially-more.html
Retrospective on early Go development
(stub) https://commandcenter.blogspot.com/2017/09/go-ten-years-and-climbing.html
Go interface implementation
(stub) https://research.swtch.com/interfaces
Go design philosophy
"Go at Google: Language Design in the Service of Software Engineering"
Go grammar is mostly regular:
Compared to other languages in the C family, its grammar is modest in size, with only 25 keywords (C99 has 37; C++11 has 84; the numbers continue to grow). More important, the grammar is regular and therefore easy to parse (mostly; there are a couple of quirks we might have fixed but didn't discover early enough). Unlike C and Java and especially C++, Go can be parsed without type information or a symbol table; there is no type-specific context.
https://talks.golang.org/2012/splash.article
TODO: Expand on CSP and arena allocator.
Selected features to be added in Go 1.13:
More number literal prefixes are supported.
The restriction that a shift count must be unsigned is removed.
math/bits: The execution time of Add, Sub, Mul, RotateLeft, and ReverseBytes is now guaranteed to be independent of the inputs.
Semantics of unary plus and minus
(stub) https://groups.google.com/forum/#!topic/golang-nuts/aOIn6y_pX_U
Go proposal for 128 bit integers
https://github.com/golang/go/issues/9455
32-bit implementation of 64-bit division: https://cr.yp.to/2005-590/powerpc-cwg.pdf
Go runtime errors
Runtime errors are distinguished from ordinary errors by the no-op function RuntimeError() in the runtime.Error interface.
type Error interface {
	error
	// RuntimeError is a no-op function but
	// serves to distinguish runtime errors.
	RuntimeError()
}
https://golang.org/pkg/runtime/#Error
Google monorepos
(stub) https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext
Working on the Google Go team
(stub) https://medium.com/@ljrudberg/working-on-the-go-team-at-google-917b2c8d35ff
WebAssembly compiled languages list
https://github.com/appcypher/awesome-wasm-langs
Go experimental subpackages
utf8string provides an efficient way to index strings by rune rather than by byte. https://godoc.org/golang.org/x/exp/utf8string
apidiff determines whether two versions of the same package are compatible. https://godoc.org/golang.org/x/exp/apidiff
sumdb/gosumcheck checks a go.sum file against a go.sum database server. https://godoc.org/golang.org/x/exp/sumdb/gosumcheck
shiny/materialdesign provides named colors and icons specified by Material Design. https://godoc.org/golang.org/x/exp/shiny/materialdesign/colornames https://godoc.org/golang.org/x/exp/shiny/materialdesign/icons
shiny/screen and shiny/driver provide interfaces and drivers for accessing a screen. https://godoc.org/golang.org/x/exp/shiny/screen https://godoc.org/golang.org/x/exp/shiny/driver
Red-black trees in functional languages
Chris Okasaki demonstrated that red-black trees can be efficiently and elegantly implemented in functional languages. He simplifies insert to have four unbalanced cases and one balanced case. http://www.eecs.usma.edu/webs/people/okasaki/jfp99.ps
Red-black trees have become one of the most common persistent data structures in functional languages. https://en.wikipedia.org/wiki/Red–black_tree
Go reusable containers
The container package in the Go standard library provides a few reusable containers including a circular linked list, a doubly linked list, and a heap interface and functions to operate on the heap.
https://golang.org/pkg/container/
2019-08 (undated)
Git repository merging
Go hex dump
Dumper in encoding/hex writes a hex dump in the format of hexdump -C.
https://golang.org/pkg/encoding/hex/#Dumper
However, a potential bug is that, unlike hexdump -C, it does not output the offset after the final byte, so the output does not match exactly.
Go test helpers
t.Helper() can be called to mark the caller as a test helper function and skips printing file and line information for that function.
encoding/ascii85 has a clever Errorf and comparison wrapper:
testEqual(t, "Encode(%q) = %q, want %q", p.decoded, strip85(string(buf)), strip85(p.encoded))
func testEqual(t *testing.T, msg string, args ...interface{}) bool {
	t.Helper()
	if args[len(args)-2] != args[len(args)-1] {
		t.Errorf(msg, args...)
		return false
	}
	return true
}
Go binary.Varint
Unsigned integers are serialized 7 bits at a time, starting with the least significant bits. The most significant bit of each byte indicates whether a continuation byte follows. https://golang.org/src/encoding/binary/varint.go
Go bounds check elimination
Issue #14808 provides a list of bound check eliminations used or not used by Go compiler including the following:
var a []int
use a[0], a[1], a[2] // three bounds checks
// can be improved as
_ = a[2] // early bounds check
use a[0], a[1], a[2] // no bounds checks
// or
a = a[:3:len(a)] // early bounds check
use a[0], a[1], a[2] // no bounds checks
Bounds check hints in the wild in binary.LittleEndian and binary.BigEndian:
func (littleEndian) Uint16(b []byte) uint16 {
	_ = b[1] // bounds check hint to compiler; see golang.org/issue/14808
	return uint16(b[0]) | uint16(b[1])<<8
}
func (littleEndian) PutUint16(b []byte, v uint16) {
	_ = b[1] // early bounds check to guarantee safety of writes below
	b[0] = byte(v)
	b[1] = byte(v >> 8)
}
Grid processing algorithms
Matrices using image processing algorithms to group coordinates in grid: https://stackoverflow.com/questions/24985127/efficiently-grouping-a-list-of-coordinates-points-by-location-in-python
A max-heap can be used to order rectangles in grid by size: https://stackoverflow.com/questions/5810649/finding-rectangles-in-a-2d-block-grid
Go json.RawMessage
RawMessage is a raw encoded JSON value implementing Marshaler and Unmarshaler used to delay decoding or precompute an encoding.
type clientResponse struct {
	Id     uint64           `json:"id"`
	Result *json.RawMessage `json:"result"`
	Error  interface{}      `json:"error"`
}
https://golang.org/src/net/rpc/jsonrpc/client.go
Example from json docs:
// use a precomputed JSON during marshal
h := json.RawMessage(`{"precomputed": true}`)
c := struct {
	Header *json.RawMessage `json:"header"`
	Body   string           `json:"body"`
}{Header: &h, Body: "Hello Gophers!"}

// delay parsing part of a JSON message
type Color struct {
	Space string
	Point json.RawMessage // delay parsing until we know the color space
}
type RGB struct {
	R, G, B uint8
}
var c Color
err := json.Unmarshal([]byte(`{"Space": "RGB", "Point": {"R": 98, "G": 218, "B": 255}}`), &c)
var dst interface{}
switch c.Space {
case "RGB":
	dst = new(RGB)
}
err = json.Unmarshal(c.Point, dst)
https://golang.org/pkg/encoding/json/#RawMessage
Aesthetic new tab
Project idea: develop a web browser extension to replace the new tab page with a more aesthetically pleasing page including a QLOCKTWO-style clock and artistic backgrounds. The search bar is redundant with the address bar and need not be included. Depending on the level of minimalism desired, feeds of the user's favorite websites could be displayed.
Apollo mission streams
Apollo 11 in Real Time: https://apolloinrealtime.org/11/
Apollo 13 Real-time: http://apollo13realtime.org/
Radix 2^51 trick for fast addition and subtraction
Adding or subtracting 256-bit numbers requires a carry for each 64-bit limb and is slow because the additions cannot be parallelized. If 256-bit numbers are instead split into 51-bit limbs stored in 64-bit registers, then each limb can be added in parallel with the carries propagated later. The 13 bits of headroom allow 2^13 additions to be performed before the high bits overflow.
For subtraction, the carries are negative, so the limbs are treated as signed instead of unsigned. This reserves the most significant bit for the sign, which reduces the number of operations that can be performed between normalizations to 2^12.
| [-------------------- 52 bits --------------------] | limb after one addition
|  [-------------------- 51 bits -------------------] | normalized limb
https://www.chosenplaintext.ca/articles/radix-2-51-trick.html
Constant-time cryptography
When writing constant-time code, timing should not depend on secret information.
Secret information may only be used in an input to an instruction if that input has no impact on what resources will be used and for how long.
Today's languages and compilers weren't really built for this, so it's a challenge.
The compiler might decide that your code would be faster if it used variable-time instructions. There are even cases where an optimizing compiler will see that you are trying to, say, avoid using an if statement, and the compiler puts the if statement back in because it knows it will be faster.
https://www.chosenplaintext.ca/articles/beginners-guide-constant-time-cryptography.html
Code can be verified to be constant-time using a patch to Valgrind made by Adam Langley: https://github.com/agl/ctgrind
BigInt in ES2020
BigInt is a numeric primitive for arbitrary precision integers introduced in ES2020: https://v8.dev/features/bigint
BigInt has its own type and can be defined with an n suffix (typeof 42n === 'bigint').
A BigInt is not strictly equal to a Number (===), but is abstractly equal (==).
When coerced to a boolean, BigInt follows the same logic as Number.
Binary +, -, *, and ** all work. / and % work, rounding towards zero.
Bitwise operations |, &, <<, >>, and ^ assume a two's complement representation for negative values.
Unary - negates, though unary + is not supported because asm.js expects +x to produce either a Number or an exception.
Unsigned right shift >>> is unsupported because BigInt is always signed.
BigInt64Array and BigUint64Array make it easier to efficiently represent lists of 64-bit signed and unsigned integers.
Detecting signals in Go
src-d/go-git intercepts signals to exit cleanly from Git calls using os/signal and context.
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
https://github.com/src-d/go-git/blob/master/_examples/context/main.go
go-intervals
go-intervals is a library for performing set operations on 1-dimensional intervals, such as time ranges:
https://github.com/google/go-intervals
pprof profiler
pprof is a code profiler, primarily for C and C++.
https://github.com/google/pprof
Go integration with pprof is enabled in the runtime/pprof package. Russ Cox details the process of building support in Go for pprof.
https://research.swtch.com/pprof
Stellarium source
The digital planetarium software Stellarium is open source:
https://github.com/Stellarium/stellarium
Tabletop Whale
Tabletop Whale is a blog of open source charts and graphics visualizing large science datasets.
http://tabletopwhale.com/index.html
"The Western Constellations", a map of every star seen from Earth:
http://tabletopwhale.com/2019/07/15/the-western-constellations.html
https://github.com/eleanorlutz/western_constellations_atlas_of_space
"An Orbit Map of the Solar system", including every object over 10km diameter:
http://tabletopwhale.com/2019/06/10/the-solar-system.html
https://github.com/eleanorlutz/asteroids_atlas_of_space
Uber Go libraries
Wrapper types for sync/atomic which enforce atomic access: https://github.com/uber-go/atomic
Combine one or more Go errors together: https://github.com/uber-go/multierr
Goroutine leak detector: https://github.com/uber-go/goleak
LLVM IR in Go
LLIR is an unofficial library for interacting with LLVM IR in pure Go.
https://github.com/llir/llvm
Several projects use LLIR including a research project to decompile assembly into Go using LLVM IR and a transpiler to Bash:
https://github.com/decomp/decomp
https://github.com/NateGraff/blessedvirginmary
GNU Multiple Precision Arithmetic Library
Go interface for GMP compatible with math/big: https://github.com/ncw/gmp
GMP can compute up to 41 billion digits of π: https://gmplib.org/pi-with-gmp.html
Bit twiddling reference
(stub) http://graphics.stanford.edu/~seander/bithacks.html
Interstellar film script differences
IMSDb has a draft script from March 12, 2008 for Interstellar that is drastically different from the final film version. In it, Murph is a boy and the Chinese passed through the wormhole long before NASA and figured out how to manipulate gravity. https://www.imsdb.com/scripts/Interstellar.html
Project idea: make a tool to format film scripts to be more pleasant to read. Scripts on IMSDb are consistently formatted, albeit with some inaccuracies, so could be mapped to another format.
Send channels and receive channels in Go
There are three channel types: bidirectional chan, receive-only <-chan, and send-only chan<-. A bidirectional channel can be converted to either a receive-only or a send-only channel, but cannot be converted back.
https://stackoverflow.com/questions/13596186/whats-the-point-of-one-way-channels-in-go
First seen in the Go syntax definitions in GitHub's Semantic project: https://github.com/github/semantic/blob/master/src/Language/Go/Syntax.hs
Elm functional language
Elm is a pure functional UI design DSL with strong static type checking and "no runtime exceptions in practice".
https://en.wikipedia.org/wiki/Elm_(programming_language)
https://codeburst.io/8-javascript-alternatives-for-web-developers-to-consider-22f8d38bdfa9
XMLisp - Lisp with XML syntax
Project idea: implement a Lisp-like language with implicit returns, higher order functions, and expressions as values. This would be a more capable successor to XMLang that would introduce type safety and would parse with encoding/xml rather than JSX.
https://github.com/andrewarchi/xmlang
<func name="fib" params="n int">
    <if>
        <eq>n 0</eq>
        1
        <add>
            <fib><sub>n 1</sub></fib>
            <fib><sub>n 2</sub></fib>
        </add>
    </if>
</func>
<fib>5</fib>
Go crypto/rand
Functions in crypto/rand, such as rand.Int and rand.Prime, operate on *big.Int, unlike math/rand.
rand.Reader is a global, shared instance of a cryptographically secure random number generator that reads from OS-specific APIs.
Ahead-of-time and just-in-time compilation
AOT compilers compile before running and JIT compilers compile while running.
An interpreter executes a program written in one language and evaluates the results as prescribed by the specification.
A compiler translates a program from one language into a semantically equivalent program in another language such that the semantics of the program are preserved.
https://softwareengineering.stackexchange.com/questions/246094/understanding-the-differences-traditional-interpreter-jit-compiler-jit-interp
μ-recursive function
https://en.wikipedia.org/wiki/μ-recursive_function
Whitespace is supposedly based on μ-recursive functions: https://cs.stackexchange.com/questions/95790/do-any-programming-languages-use-general-recursive-functions-as-their-basis
HaPyLi programming language
HaPyLi is a programming language designed to compile to Whitespace, with syntax derived from Haskell, Python, and Lisp. HaPyLi uses the Whitespace heap to store strings and globals. It supports inline Whitespace, but requires that all arguments and local variables be popped and exactly one value be pushed. The standard library includes alloc, similar to malloc in C, but there is no corresponding free implementation.
import "stdlib/base.hpl"
def power(x y) =
    (if (== y 1)
        x
        (* x (power x (- y 1))))
def main() = (print-number (power 2 10))
Unfortunately, as the homepage is defunct, the compiler source is no longer available.
https://esolangs.org/wiki/HaPyLi
http://web.archive.org/web/20120905174811/http://hapyli.webs.com/
Marinus Oosters created a 99 bottles of beer program written in HaPyLi: http://www.99-bottles-of-beer.net/language-hapyli-2556.html
While developing HaPyLi, the author posted a question on Haskell monads during code generation: https://stackoverflow.com/questions/607830/use-of-haskell-state-monad-a-code-smell
{esolang}{HaPyLi}
Thue esolang
https://esolangs.org/wiki/Thue
{esolang}{Thue}
Astro programming language
https://github.com/astrolang/astro
Go math/bits proposal
All bit twiddling functions, except popcnt, are already implemented by runtime/internal/sys and receive special support from the compiler in order "to help get the very best performance". However, the compiler support is limited to the runtime package, so other Go users have to reimplement the slower variants of these functions.
https://golang.org/src/math/bits/bits_tables.go
Go 1.13 signed shift counts
In Go 1.13, shift counts (<< and >>) are no longer required to be unsigned; a negative shift count panics at run time.
This requires an estimated minimum of two extra instructions per non-constant shift: a test and a branch checked at run time, as done for make. The compiler can omit the check for unsigned and constant values, and when it is able to prove that the operand is non-negative.
As a last resort, an explicit uint conversion or mask in the source code will allow programmers to force the removal of the check, just as an explicit mask of the shift count today avoids the oversize shift check.
https://go.googlesource.com/proposal/+/master/design/19113-signed-shift-counts.md
https://groups.google.com/forum/#!topic/golang-dev/jln8MwFpATc
Assembly performing slower than high level programs
Compilers can often produce optimized code faster than hand coded assembly.
https://stackoverflow.com/questions/40354978/c-code-for-testing-the-collatz-conjecture-faster-than-hand-written-assembly
https://stackoverflow.com/questions/9601427/is-inline-assembly-language-slower-than-native-c-code
List of some compiler optimizations: https://en.wikipedia.org/wiki/Optimizing_compiler
Go blank identifier uses
Disable unused declaration error:
_ = unused
To import a package solely for its side-effects (initialization), use the blank identifier as explicit package name:
import _ "lib/math"
https://golang.org/ref/spec#Import_declarations
Static type assertion:
type T struct{}
var _ I = T{} // Verify that T implements I.
var _ I = (*T)(nil) // Verify that *T implements I.
https://golang.org/doc/faq#guarantee_satisfies_interface
Interspersing delimiters without branching
The Go Programming Language: gopl.io/ch1/echo1
var s, sep string
for i := 1; i < len(os.Args); i++ {
	s += sep + os.Args[i]
	sep = " "
}
fmt.Println(s)
Semantic source code library by GitHub
Parsing, analyzing, and comparing source code across many languages: https://github.com/github/semantic
Can perform AST semantic diffs: https://github.com/github/semantic/blob/master/docs/examples.md
Use by GitHub's beta code navigation features: https://help.github.com/en/articles/navigating-code-on-github
Yorick programming language
"Yorick is an interpreted programming language for scientific simulations or calculations, postprocessing or steering large simulation codes, interactive scientific graphics, and reading, writing, or translating large files of numbers." http://dhmunro.github.io/yorick-doc/
"Arrays are first-class objects that can be operated on with a single operation. Since the virtual machine understands arrays, it can apply optimized compiled subroutines to array operations, eliminating the speed penalty of the interpreter." https://www.linuxjournal.com/article/2184
"Yorick is good at manipulating elements in N-dimensional arrays conveniently with its powerful syntax." https://en.wikipedia.org/wiki/Yorick_(programming_language)
I was referred to Yorick by Matt Borthwick as it is his favorite programming language for physics. A trick to compute the product of the elements of an array and avoid overflows is to take the exponentiation of the sum of the natural logs of the elements: exp(sum(ln(arr))). https://github.com/matt6deg
Go unnamed method receiver
A method can have a receiver without a name.
func (CmdReceivePack) Usage() string
https://github.com/src-d/go-git/blob/master/cli/go-git/receive_pack.go
Go embedded struct fields
Embedded struct fields have no name and promote fields and methods to another struct. https://golang.org/ref/spec#Struct_types
Discovered in encoding/json/encode.go:
type jsonError struct{ error }
type A struct {
	foo int
}
type B struct {
	A
	bar int
}
b := B{A{10}, 3}
// b.bar, b.foo, b.A are all accessible
https://golangtutorials.blogspot.com/2011/06/anonymous-fields-in-structs-like-object.html
Git branch cleanup tool
Project idea: after a PR has been merged and the branch deleted on the remote, any local clones of this branch remain and should be deleted.
Scan repo(s) for all branches
Exclude dev, master, currently checked out branch, and branches with an open PR
Delete all merged branches
Go sync.Pool
sync.Pool saves some allocation and garbage collection overhead when frequently allocating many objects of the same type.
fmt uses sync.Pool for printer allocations: https://golang.org/src/fmt/print.go#L128
It may be useful to use in wspace for the repeated big.Int (de)allocations.
An alternative is to use buffered channels: https://www.reddit.com/r/golang/comments/2ap67l/when_to_use_syncpool_and_when_not_to/
https://golang.org/pkg/sync/#Pool
Password crossword
The 2013 Adobe breach exposed the credentials of millions of users, with the passwords encrypted insecurely using Triple DES.
In an XKCD comic, Randall Munroe turns the password blocks into a crossword puzzle to be solved using the given password hints. https://www.xkcd.com/1286/
Using statistics of the most common passwords, one could provide a word bank of common passwords to use while solving such a crossword puzzle.
Go reflection to view and set unexported fields
Reflection is designed to allow any field to be accessed, but outside of the definition package, only exported fields can be modified (The Go Programming Language, Donovan and Kernighan).
However, using unsafe.Pointer and (*reflect.Value).UnsafeAddr, unexported values can be assigned to, though doing so potentially interferes with garbage collection. https://stackoverflow.com/questions/17981651/in-go-is-there-any-way-to-access-private-fields-of-a-struct-from-another-packag
Deleting value in Go map
An element can be deleted from a map using delete(m, key) similar to delete m[key] in Javascript.
Go modules synchronizer
When working in a project using modules, each package and sub-package requires specific dependency versions. This protects a package from breaking changes in its dependencies, but makes changing code in multiple packages simultaneously difficult.
Project idea: a tool that watches locally changed packages and updates interdependencies would greatly simplify this process.
Go runtime package
Operations to interact with the runtime system and low-level reflect type information. https://golang.org/pkg/runtime/
Discovered from attempt to print line numbers in error messages. https://stackoverflow.com/questions/24809287/how-do-you-get-a-golang-program-to-print-the-line-number-of-the-error-it-just-ca
Text normalization
Text normalization in Go: https://blog.golang.org/normalization
Detailing on NFC, NFD, NFKC, and NFKD methods of transforming text: https://unicode.org/reports/tr15/
Normalizing to NFC compacts text, giving substantial savings to languages like Korean
Project idea: Make a keyboard with Hangul input that converts to NFC as you type, but allows for deletion by character rather than by block
Allows for normalization of look-alikes
Goto in Python
In Python, an April Fools joke added goto, label, and comefrom. http://entrian.com/goto/
Discovered from a comparison of throw to comefrom in favor of Go's error handling decisions. https://news.ycombinator.com/item?id=19778097
Git annotations for ls
Project idea: annotate ls command with Git branch and status information.
There are answers here, but they don't appear to be efficient: https://unix.stackexchange.com/questions/249363/git-bash-ls-show-git-repo-folders
Note the branch annotations at the end:
drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 14:33 MyProject/ (develop)
drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 14:17 Data/
drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 12:08 MyApp/ (master)
-rw-r--r-- 1 0018121 Domain Users 399K Aug 4 10:41 readme.txt
Git checkout shortcut
Project idea: scan for directories containing a Git repo and create Bash aliases for each branch.
https://stackoverflow.com/questions/11981716/how-to-quickly-find-all-git-repos-under-a-directory
See entry from 2019-05-24 for solution using Git shell completions.
Assembly in Go
Guide to the assembly used in Go: https://golang.org/doc/asm
The assembler's parser treats period and slash as punctuation, so identifiers instead use the middle dot character U+00B7 and the division slash U+2215, which the assembler rewrites to plain period and slash.
For example, fmt·Printf and math∕rand·Int in assembly refer to fmt.Printf and math/rand.Int.
RE2 regular expression engine
(stub) https://github.com/google/re2
Code Search
Google Code Search performs regular expression matching with a trigram index. https://github.com/google/codesearch
https://swtch.com/~rsc/regexp/regexp4.html
Binary RegExp
https://github.com/rsc/binaryregexp
Continuation-passing style
Functions in CPS take an extra argument, the continuation, a function of one argument. The result is returned by calling the continuation function with this value.
Procedure returns are calls to a continuation, intermediate values are all given names, argument evaluation order is made explicit, and tail calls call a procedure with the same continuation.
Functional and logic compilers often use CPS as an intermediate representation, whereas imperative or procedural compilers would use static single assignment form (SSA).
; Direct style
(define (pyth x y)
(sqrt (+ (* x x) (* y y))))
; Continuation-passing style
(define (pyth& x y k)
(*& x x (lambda (x2)
(*& y y (lambda (y2)
(+& x2 y2 (lambda (x2py2)
(sqrt& x2py2 k))))))))
https://en.wikipedia.org/wiki/Continuation-passing_style
Go math/bits package
Arithmetic functions: add/sub with carry and mul/div with remainder. Bit manipulation: leading/trailing zeros count, bit count, one count, reverse, and rotate.
https://golang.org/pkg/math/bits/
Git autocomplete
Git autocompletion can be installed by downloading the following file and sourcing in profile.
curl https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash -o ~/.git-completion.bash
https://apple.stackexchange.com/questions/55875/git-auto-complete-for-branches-at-the-command-line
Approxidate
Approxidate is the date parser in Git; it supports many date formats and was originally written by Linus Torvalds: https://github.com/git/git/blob/master/date.c
Approxidate has been converted to a C library and wrapped for Go:
https://github.com/simplereach/timeutils
https://github.com/thatguystone/approxidate
Git integration with Go
go-git is a highly extensible Git implementation library written in pure Go.
https://github.com/src-d/go-git
https://git-scm.com/book/en/v2/Appendix-B%3A-Embedding-Git-in-your-Applications-go-git
Datalog Disassembly
Datalog Disassembly is a disassembler by GrammaTech using the Datalog language.
A fast disassembler which is accurate enough for the resulting assembly code to be reassembled. The disassembler implemented using the datalog (souffle) declarative logic programming language to compile disassembly rules and heuristics. The disassembler first parses ELF file information and decodes a superset of possible instructions to create an initial set of datalog facts. These facts are analyzed to identify code location, symbolization, and function boundaries. The results of this analysis, a refined set of datalog facts, are then translated to the GTIRB intermediate representation for binary analysis and reverse engineering. The GTIRB pretty printer may then be used to pretty print the GTIRB to reassemblable assembly code.
The analysis contains two parts:
The C++ files take care of reading an elf file and generating facts that represent all the information contained in the binary.
src/datalog/*.dl contains the specification of the analyses in datalog. It takes the basic facts and computes likely EAs, chunks of code, etc. The results are represented in GTIRB or can be printed to assembler code using the gtirb-pprinter.
https://github.com/GrammaTech/ddisasm
Project idea: determine how my Datalog interpreter performs in comparison to industry implementations like Soufflé.
Static single assignment form
In compiler design, SSA is a property of an intermediate representation, which requires that each variable is assigned exactly once, and every variable is defined before it is used.
Variables in the original IR are split into versions so that every definition gets its own version.
y := 1
y := 2
x := y
Rewritten in SSA:
y₁ := 1
y₂ := 2
x₁ := y₂
https://en.wikipedia.org/wiki/Static_single_assignment_form
Go loop bound reevaluation
Loop bound expressions can be optimized in some cases to be evaluated once: https://stackoverflow.com/questions/41327984/does-go-runtime-evaluate-the-for-loop-condition-every-iteration
Go 1.7 switched to using SSA for the compiler which generates more compact, more efficient code and provides a better platform for optimizations such as bounds check elimination.
Unicode property trie lookup
(stub) https://github.com/foliojs/unicode-properties
W3C CSS Color Module Level 4 changes
EDI Parsing
Powerline
Powerline formats the shell prompt and vim status line into great-looking segments. It uses patched fonts like FiraCode to render custom Unicode glyphs. Powerline Gitstatus is a segment for showing the status of a Git working copy.
https://github.com/powerline/powerline
https://github.com/jaspernbrouwer/powerline-gitstatus
Signal release notes
The release notes for version 2.38.1 of the Signal encrypted messenger contain a humorous bug fix:
Users on iOS 9 will no longer wonder where the input toolbar wanders off to for a few moments every time they send a captioned attachment. What will they do with the time that they save? Hopefully upgrade to iOS 10.
AsciiDots and Ook! esoteric languages
AsciiDots executes using dots travelling along ascii art paths taking inspiration from electrical engineering.
https://esolangs.org/wiki/AsciiDots
Ook! is a simple mapping of Brainf* instructions to pairs of the three tokens Ook., Ook?, and Ook!.
https://esolangs.org/wiki/Ook!
{esolang}{AsciiDots}{Ook!}
TrumpScript
TrumpScript is a joke programming language developed for a hackathon. Its slogan is "Make Python great again".
No floating point numbers, only integers. America never does anything halfway.
All numbers must be strictly greater than 1 million. The small stuff is inconsequential to us.
There are no import statements allowed. All code has to be home-grown and American made.
Instead of True and False, we have the keywords fact and lie.
Only the most popular English words, Trump's favorite words, and current politician names can be used as variable names.
Error messages are mostly quotes directly taken from Trump himself.
All programs must end with America is great.
Our language will automatically correct Forbes' $4.5B to $10B.
In its raw form, TrumpScript is not compatible with Windows, because Trump isn't the type of guy to believe in PC.
TrumpScript boycotts OS X and all Apple products until such time as Apple gives cellphone info to authorities regarding radical Islamic terrorist couple from Cal.
The language is completely case insensitive.
If the running computer is from China, TrumpScript will not compile. We don't want them stealing our American technological secrets.
By constructing a wall (providing the -Wall flag), TrumpScript will refuse to run on machines with Mexican locales
Warns you if you have any Communists masquerading as legitimate "SSL Certificates" from China on your system.
Won't run in root mode because America doesn't need your help being great. Trump is all we need.
Easy to type with small hands
https://devpost.com/software/trumpscript
https://samshadwell.me/TrumpScript/
https://github.com/samshadwell/TrumpScript
{esolang}{TrumpScript}
Customizing Windows command prompt
https://github.com/microsoft/terminal
https://github.com/tallpants/vscode-theme-iterm2
GHIDRA - NSA reverse engineering and decompilation tool
StackBlitz online IDE and Turbo package manager
StackBlitz is an online IDE for web development powered using Visual Studio Code. It features live reloading, package management, deployment to Google Cloud, and GitHub integration.
https://github.com/stackblitz/core
https://medium.com/stackblitz-blog/stackblitz-online-vs-code-ide-for-angular-react-7d09348497f4
StackBlitz uses a custom JavaScript package manager, Turbo, that runs entirely in the browser and retrieves only the files you need on demand. It installs packages about five times faster than Yarn or NPM, reduces the size of node_modules by up to two orders of magnitude, and pulls from multiple redundant CDNs. Files not directly required by the main field are lazy loaded.
Peer dependencies can be easily installed through the IDE.
https://medium.com/stackblitz-blog/introducing-turbo-5x-faster-than-yarn-npm-and-runs-natively-in-browser-cc2c39715403
Flix is a functional programming language that includes first-class Datalog predicates. Flix synthesizes features from ML-style languages, logic languages, and Go-like concurrency.
algebraic data types
first-class functions
extensible records
parametric polymorphism
Hindley-Milner type inference
CSP-style concurrency
buffered & unbuffered channels
first-class datalog constraints
polymorphic datalog predicates
stratified negation
unboxed primitives
expression holes
full tail call elimination
compilation to JVM bytecode
core standard library
human friendly errors
https://flix.github.io/
http://lambda-the-ultimate.org/node/5557
https://en.wikipedia.org/wiki/Datalog#Systems_implementing_Datalog
https://souffle-lang.github.io
QLOCKTWO text-based clock
In Zürich, Switzerland, I saw a store selling text-based clocks made by QLOCKTWO. The clocks have a grid of letters that light up to spell out the time.
For example, 5:28 would be rounded to 5:30 and displayed in German as "ES IST HALB SECHS".
E S K I S T A F Ü N F
Z E H N Z W A N Z I G
D R E I V I E R T E L
V O R F U N K N A C H
H A L B A E L F Ü N F
E I N S X A M Z W E I
D R E I P M J V I E R
S E C H S N L A C H T
S I E B E N Z W Ö L F
Z E H N E U N K U H R
In English, it would be "IT IS HALF PAST FIVE".
I T L I S A S A M P M
A C Q U A R T E R D C
T W E N T Y F I V E X
H A L F S T E N F T O
P A S T E R U N I N E
O N E S I X T H R E E
F O U R F I V E T W O
E I G H T E L E V E N
S E V E N T W E L V E
T E N S E O'C L O C K
Trie data structure
"Efficient and scalable trie-based algorithms for computing set containment relations"
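As a refresher on the data structure the paper builds on, here is a minimal trie sketch in Python (class and method names are my own, not from the paper):

```python
class Trie:
    """Minimal prefix tree: each node maps one character to a child node."""

    def __init__(self):
        self.children = {}   # char -> Trie
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def contains(self, word):
        """True only if the exact word was inserted."""
        node = self
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return False
        return node.is_word

    def has_prefix(self, prefix):
        """True if any inserted word starts with prefix."""
        node = self
        for ch in prefix:
            node = node.children.get(ch)
            if node is None:
                return False
        return True
```

Set-containment queries like those in the paper exploit exactly this shared-prefix structure to skip whole families of candidate sets at once.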
Unlambda is a minimal functional language based on combinatory logic. It is the first functional Turing tarpit.
https://esolangs.org/wiki/Unlambda
https://en.wikipedia.org/wiki/Unlambda
{esolang}{Unlambda}
C compile-time assertions
The Linux kernel uses macros as compile-time assertions to break the build when conditions are true. The trick leverages bitfields with negative widths to produce compilation errors. This contrasts with assert which is a runtime test.
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
#define BUILD_BUG_ON_NULL(e) ((void *)sizeof(struct { int:-!!(e); }))
https://stackoverflow.com/questions/9229601/what-is-in-c-code
https://scala-lang.org
Peter Aldous' project.
Apollo transcendental function computation
https://space.stackexchange.com/questions/30952/how-did-the-apollo-computers-evaluate-transcendental-functions-like-sine-arctan
Countering Trusting Trust
The following was originally written as email on 2019-09-25 to Dr. Peter Aldous following up with conversation from 2019-09-24:
I did some more research into Reflections on Trusting Trust by Thompson and came across this dissertation "Fully Countering Trusting Trust through Diverse Double-Compiling" by David A. Wheeler that counters such an attack.
Essentially, he shows that a suspect compiler executable can be verified with the help of a second, independent trusted compiler: compile the suspect compiler's source with each of the two, then use each result to compile that same source again; if the suspect executable is clean, the two final executables match bit-for-bit. https://www.dwheeler.com/trusting-trust/
Below is a summary of his approach and a link to a more complete summary. Wheeler's site explains some details that the summary glosses over, but the dissertation itself is 199 pages long, so I haven't read that.
Suppose we have two completely independent compilers: A and T. More specifically, we have source code SA of compiler A, and executable code EA and ET. We want to determine if the binary of compiler A - EA - contains this trusting trust attack.
Here's Wheeler's trick:
Step 1: Compile SA with EA, yielding new executable X.
Step 2: Compile SA with ET, yielding new executable Y.
Since X and Y were generated by two different compilers, they should have different binary code but be functionally equivalent. So far, so good.
Step 3: Compile SA with X, yielding new executable V.
Step 4: Compile SA with Y, yielding new executable W.
Since X and Y are functionally equivalent, V and W should be bit-for-bit equivalent. https://www.schneier.com/blog/archives/2006/01/countering_trus.html
As linked from Wheeler's paper, University of Michigan researchers discovered and implemented a hardware backdoor that can be installed by a single employee at the processor's fabrication facility and triggered by a sequence of obscure commands that charge a capacitor, then eventually trigger and grant OS access. This is a scary prospect because of how monumentally difficult it would be to detect such a backdoor. https://www.wired.com/2016/06/demonically-clever-backdoor-hides-inside-computer-chip/
In the explain xkcd wiki, Wheeler's paper is mentioned: https://explainxkcd.com/wiki/index.php/1755:_Old_Days.
Reflections on Trusting Trust
Ken Thompson demonstrates in "Reflections on Trusting Trust" that we can't fully trust any software we did not write ourselves, including compilers.
He describes a scenario in which the C compiler could install a backdoor into the Unix login command without being detected. First, the C compiler source would be patched with the vulnerability, then built and distributed as a binary. If the bugged compiler compiles Unix, it identifies and installs this backdoor. Additionally, it recognizes when it compiles itself, so it plants the backdoor in future compiler versions. As the exploit exists only in the executable, it is undetectable while looking at the source.
https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
XKCD comic "Old Days" mentions "Reflections on Trusting Trust" in the title text.
https://xkcd.com/1755/
Windows 95 on an Apple Watch
Nick Lee installed Windows 95 on an Apple Watch, but as it is emulated, it takes around an hour to boot.
https://blog.tendigi.com/i-installed-windows-95-on-my-apple-watch-589fda5e36d
Webkit color implementation
Fira font for Firefox OS
Fira is a typeface designed by Mozilla for Firefox OS and includes many weights and monospaced variants. It is the base for FiraCode.
https://github.com/mozilla/Fira
FiraCode font with ligatures
FiraCode is a monospaced font with ligatures for common programming symbols. Ligatures include arrows, equalities, increment/decrement, hexadecimal positioning, and other operators.
https://github.com/tonsky/FiraCode
JSX usage outside of React
JSX can be used outside of React as Robert Prehn outlines in his article.
You could use this to do anything that lends itself to functional composition. Some ideas:
You could create DSLs with an XML-like syntax within your JavaScript.
You could abuse the JSX transform to compile XML configuration or seed data into JavaScript objects (please don't).
You could even make a whole XML-syntax functional programming language that compiles to JS (just stop).
<and>
  <or>
    <not>
      {true}
    </not>
  </or>
  {false}
</and>
https://revelry.co/using-jsx-with-other-frameworks-than-react/
I developed a simple language named XMLang based on this idea:
Using C preprocessor language agnostically
The C preprocessor is independent of the C language and thus can be used with languages that do not have compile-time evaluation.
-E Stop after the preprocessing stage; do not run the compiler
proper. The output is in the form of preprocessed source code,
which is sent to the standard output.
-P Inhibit generation of linemarkers in the output from the
preprocessor. This might be useful when running the preprocessor
on something that is not C code, and will be sent to a program
which might be confused by the linemarkers.
The source file must be named with a .c suffix and the output can be redirected to a file.
gcc -E -P hello.js.c > hello.js
#define HOWDY

function hello(name) {
  console.log('Hello, ' + name);
#ifdef HOWDY
  console.log('Howdy, ' + name + '!');
#endif
}
Apollo Guidance Computer source
https://github.com/chrislgarry/Apollo-11
https://github.com/virtualagc/virtualagc
Implementing arbitrary precision integers
Project idea: arbitrary precision integers could be stored using contiguous integers with carry and borrow used to implement addition and subtraction across boundaries. Multiplication and division would be more difficult.
This idea was sparked by something in a CS 224 Computer Systems lecture.
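A rough Python sketch of the limb idea, using 32-bit limbs stored little-endian (Python's own ints are already arbitrary precision, so this is purely illustrative; function names are mine):

```python
BASE = 1 << 32  # each "limb" is a 32-bit unsigned integer

def add(a, b):
    """Add two little-endian limb lists, propagating carry across limbs."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)
        carry = s // BASE
    if carry:
        out.append(carry)
    return out

def sub(a, b):
    """Subtract b from a (assumes a >= b), propagating borrow across limbs."""
    out, borrow = [], 0
    for i in range(len(a)):
        d = a[i] - (b[i] if i < len(b) else 0) - borrow
        borrow = 1 if d < 0 else 0
        out.append(d + BASE if borrow else d)
    while len(out) > 1 and out[-1] == 0:  # drop leading zero limbs
        out.pop()
    return out

def to_int(limbs):
    """Collapse a limb list back into a single integer, for checking."""
    return sum(l << (32 * i) for i, l in enumerate(limbs))
```

Multiplication would need the schoolbook double-loop with a widening intermediate product, and division is harder still, as the note suggests.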
Social security numbers insecure
https://arstechnica.com/science/2009/07/social-insecurity-numbers-open-to-hacking/
ArnoldC and LOLCODE quote-based esolangs
https://github.com/lhartikk/ArnoldC/wiki/ArnoldC https://esolangs.org/wiki/LOLCODE
{esolang}{ArnoldC}{LOLCODE}
Whitespace programming language
https://en.wikipedia.org/wiki/Whitespace_(programming_language)
{esolang}{Whitespace}
Piet graphical esolang
http://progopedia.com/language/piet/
{esolang}{Piet}
Uuencoding
https://en.wikipedia.org/wiki/Uuencoding
Malbolge
https://en.wikipedia.org/wiki/Malbolge http://www.lscheffer.com/malbolge.shtml https://en.wikipedia.org/wiki/Beam_search
{esolang}{Malbolge}
youtube-dl video downloader
youtube-dl is a highly configurable command-line program written in Python to download videos from YouTube.com and other video sites.
https://github.com/ytdl-org/youtube-dl
BitShift esoteric language
BitShift is an esolang with only two valid characters: 0 and 1. Instructions are denoted by the count of alternating 0 and 1 characters and are delimited by a matching pair (00 or 11).
1 Shift the value 1 bit to the left (0000 0001 > 0000 0010)
2 Shift the value 1 bit to the right (0000 0010 > 0000 0001)
3 XOR the value with 1 (0000 0000 > 0000 0001)
4 XOR the value with 128 (0000 0000 > 1000 0000)
5 Set the value to 0
6 Convert the value to a character and print it
7 Read a character from user input and set the value to it
https://esolangs.org/wiki/BitShift
https://codegolf.stackexchange.com/questions/64763/make-the-ppcg-favicon/64773#64773
{esolang}{BitShift}
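A small interpreter sketch in Python, under my reading of the delimiter rule (a doubled character marks the boundary between two alternating runs, and each run's length is the instruction number); the encode helper is my own addition for building test programs:

```python
def tokenize(program):
    """Split the source into alternating runs; a doubled character ends a run."""
    runs, start = [], 0
    for i in range(1, len(program)):
        if program[i] == program[i - 1]:
            runs.append(i - start)
            start = i
    runs.append(len(program) - start)
    return runs

def run(program, inputs=""):
    """Execute with a single 8-bit accumulator, per the instruction table above."""
    value, out, it = 0, [], iter(inputs)
    for instr in tokenize(program):
        if instr == 1:
            value = (value << 1) & 0xFF
        elif instr == 2:
            value >>= 1
        elif instr == 3:
            value ^= 1
        elif instr == 4:
            value ^= 128
        elif instr == 5:
            value = 0
        elif instr == 6:
            out.append(chr(value))
        elif instr == 7:
            value = ord(next(it))
    return "".join(out)

def encode(instrs):
    """Render instruction numbers as source: each run alternates 0/1 and
    begins with the previous run's last character, creating the doubled pair."""
    src, last = "", None
    for k in instrs:
        c = last if last is not None else "0"
        chars = []
        for _ in range(k):
            chars.append(c)
            c = "1" if c == "0" else "0"
        src += "".join(chars)
        last = chars[-1]
    return src
```

For example, the sequence XOR 1, three left shifts, XOR 1, three left shifts, print builds the value 72 and prints "H".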
Pi series
The sum of inverse squares is pi squared over six:
\frac{\pi^2}{6} = \sum_{n=1}^{\infty} \frac{1}{n^2}
The arctangent power series evaluated at 1 equals pi/4:
\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + ...
\arctan(1) = \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + ...
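Both series can be checked numerically with a quick Python sketch (both converge slowly, so many terms are needed; function names are mine):

```python
import math

def basel(n):
    """Partial sum of 1/k^2; sqrt(6 * sum) approximates pi."""
    return math.sqrt(6 * sum(1 / k**2 for k in range(1, n + 1)))

def leibniz(n):
    """Partial sum of the arctan series at x = 1, times 4 (Leibniz formula)."""
    return 4 * sum((-1)**k / (2 * k + 1) for k in range(n))
```

With 100,000 terms each is within about 1e-5 of pi, which illustrates why neither series is used for serious computation.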
Initial Go commits
The first four commits in the Go repository reference the evolution of C, a callback to when Rob Pike worked with Brian Kernighan in the 1980s at Bell Labs. https://stackoverflow.com/questions/21979690/whats-the-story-behind-the-revision-history-of-go/21981037#21981037
convert to Draft-Proposed ANSI C, Brian Kernighan committed on Apr 1, 1988
printf("hello, world\n");
https://github.com/golang/go/commit/0744ac969119db8a0ad3253951d375eb77cfce9e
convert to C, Brian Kernighan committed on Jan 19, 1974
printf("hello, world");
https://github.com/golang/go/commit/0bb0b61d6a85b2a1a33dcbc418089656f2754d32
hello, world, Brian Kernighan committed on Jul 18, 1972
main( ) {
extrn a, b, c;
putchar(a); putchar(b); putchar(c); putchar('!*n');
}
a 'hell';
b 'o, w';
c 'orld';
https://github.com/golang/go/commit/7d7c6a97f815e9279d08cfaea7d5efb5e90695a8
Navigating history on GitHub to earliest commit: https://stackoverflow.com/questions/28533602/how-do-i-navigate-to-the-earliest-commit-in-a-github-repository
Shamir's Secret Sharing
Shamir's Secret Sharing is an algorithm in cryptography created by Adi Shamir. It is a form of secret sharing, where a secret is divided into parts, giving each participant its own unique part.
To reconstruct the original secret, a minimum number of parts is required. In the threshold scheme this number is less than the total number of parts. Otherwise all participants are needed to reconstruct the original secret.
The essential idea of Adi Shamir's threshold scheme is that 2 points are sufficient to define a line, 3 points are sufficient to define a parabola, 4 points to define a cubic curve and so forth. That is, it takes k points to define a polynomial of degree k-1.
https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing
https://github.com/lian/shamir-secret-sharing
https://github.com/amper5and/secrets.js
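A minimal Python sketch of the threshold scheme over a prime field (the prime and function names are my own choices, not taken from the linked libraries):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is done mod P

def split(secret, n, k):
    """Create n shares with threshold k: a random degree-(k-1) polynomial
    with constant term = secret, evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any k of the n shares reconstruct the secret; fewer than k reveal nothing, because infinitely many degree-(k-1) polynomials pass through the smaller point set. (The modular inverse via three-argument pow requires Python 3.8+.)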
Logo graphical programming language
Douglas Clements and Dominic Gullo in "Effects of Computer Programming on Young Children's Cognition"
The study utilized the Logo programming language, which was designed to be intuitive for children. Over a 12-week period, six-year-old children learned Logo programming during two sessions a week. Students used Logo to draw graphics by directing the movements of a turtle. As the lessons progressed, the difficulty of the tasks increased and, by the end, they could use the full Logo language. Those students were compared with a control group that only participated in computer-assisted instruction; the Logo programming group significantly improved in fluency, originality, divergent thinking, and direction giving, and made fewer errors.
https://en.wikipedia.org/wiki/Logo_%28programming_language%29
Legacy HTML color parsing
Obsolete HTML color attributes parse colors differently and strings such as 'chucknorris' are rendered as '#c00000'. This behavior was inherited from Netscape and has persisted in modern browsers for compatibility. The HTML spec describes this algorithm.
https://stackoverflow.com/questions/12939234/why-do-weird-things-in-font-color-attribute-produce-real-colors/12939327#12939327
https://stackoverflow.com/questions/8318911/why-does-html-think-chucknorris-is-a-color/12630675#12630675
https://www.w3.org/TR/2011/WD-html5-20110525/common-microsyntaxes.html#rules-for-parsing-a-legacy-color-value
https://html.spec.whatwg.org/multipage/common-microsyntaxes.html#rules-for-parsing-a-legacy-colour-value
http://scrappy-do.blogspot.com/2004/08/little-rant-about-microsoft-internet.html
The code Netscape Classic used for parsing color strings is open source:
https://dxr.mozilla.org/classic/source/lib/layout/layimage.c#155
Colors for arbitrary strings can be previewed using a tool by Tim Pietrusky:
http://randomstringtocsscolor.com/
I extended a fork of TinyColor to parse legacy colors:
https://github.com/andrewarchi/TinyColor/tree/parse-legacy
https://codepen.io/andrewarchi/pen/VKyGvv
https://codepen.io/andrewarchi/pen/wzmGER
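The spec's algorithm can be sketched in Python as follows (simplified: named colors and the 4-character #rgb shortcut from the full WHATWG rules are skipped; the function name is mine):

```python
def legacy_color(s):
    """Parse a string per the WHATWG 'rules for parsing a legacy colour
    value', simplified. Every string becomes some #rrggbb color."""
    s = s.strip()[:128]
    if s.startswith("#"):
        s = s[1:]
    # every non-hex character becomes '0'
    s = "".join(c if c in "0123456789abcdefABCDEF" else "0" for c in s)
    # pad with zeros until the length is a non-zero multiple of 3
    while len(s) == 0 or len(s) % 3:
        s += "0"
    length = len(s) // 3
    comps = [s[0:length], s[length:2 * length], s[2 * length:]]
    if length > 8:  # keep only the trailing 8 characters of each component
        comps = [c[-8:] for c in comps]
        length = 8
    # strip shared leading zeros while components stay longer than 2 chars
    while length > 2 and all(c[0] == "0" for c in comps):
        comps = [c[1:] for c in comps]
        length -= 1
    if length > 2:  # finally truncate each component to its first 2 chars
        comps = [c[:2] for c in comps]
    return "#" + "".join(c.lower() for c in comps)
```

Tracing 'chucknorris': non-hex letters become zeros giving 'c0000000000', padding makes 12 characters, the three components 'c000'/'0000'/'0000' truncate to 'c0'/'00'/'00', hence #c00000.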
TinyColor
https://github.com/bgrins/TinyColor
GitHub calendar art
https://github.com/ZachSaucier/github-calendar-customizer
RoboKitty
RoboKitty is the codebase for an interactive robotic kitty using Arduino. It includes two modules - 8-bit tunes which plays ABC notation music stored on an SD card and battery monitor which displays charge in increments of 10% on LEDs.
https://github.com/bajuwa/RoboKitty
Flexbox Froggy
Flexbox Froggy is a game for learning CSS flexbox. It teaches flexbox by positioning frogs on lilypads and beating levels.
https://github.com/thomaspark/flexboxfroggy
http://flexboxfroggy.com/
Code poetry
Stanford holds yearly code poetry slams in which students present poems written in any programming language. These programs are poetic when read, yet still executable to a computer. Zak Kain wrote "Capsized" for Code Poetry Slam 1.1 using CSS properties.
.ocean {
color: cornflowerblue;
pitch: high;
overflow: visible;
.boat {
color: firebrick;
transform: rotate(94deg);
.rescue-team {
visibility: visible;
.crew {
widows: none;
https://web.archive.org/web/20150415031602/http://stanford.edu:80/~mkagen/codepoetryslam/#1.1_kain
The book "code {poems}" is selection of 55 poems compiled by Ishac Bertran. Contained within is the short C++ poem "Unhandled Love" by Daniel Bezerra.
class love {};
throw love();
http://code-poems.com/
ZeroSprites CSS sprite generator
ZeroSprites is a CSS sprites generator aimed at area minimization using algorithms used in the field of VLSI floorplanning.
https://github.com/clyfish/zerosprites
http://zerosprites.com/
Font Awesome is a free vector icon font for web development.
https://github.com/FortAwesome/Font-Awesome
A treatise on an old, very reliable communication method
— All content Copyright © 2017, P. Lutus — Message Page —
Morse code — dots and dashes — heralded the dawn of radio and of wireless communication. There were many reasons for the use of Morse code at that time, one being that the primitive technology imposed severe limitations on the kinds of information that could be impressed on a radio signal. Guglielmo Marconi achieved the first transatlantic radio communication with a crude spark gap transmitter, before vacuum tubes were commonly available. Because the signal was readable Morse code, the eager listening team in Newfoundland knew they were hearing something other than lightning.
Imagine that you're listening to faint and crude radio signals from across the Atlantic ocean in the time of Marconi's pioneering work — for a Morse code sample, click here:
Is my signal getting through?
Now imagine you're a radio operator on a ship in the North Atlantic on April 15th 1912, and you hear a faint radio signal from somewhere in the darkness — click this example to hear how it might have sounded:
This is Titanic. CQD. Engine room flooded.
As technology improved, so did long-distance radio communications. Once vacuum tube transmitters with quartz crystal signal sources replaced spark gap transmitters, much clearer transmissions over greater distances became possible. By the time I became a ham radio operator in the 1950s, radiotelegraphy had become more consistent and reliable — click this example:
CQ DX DE KE7ZZ K
The above Morse code means "Calling any distant (DX) stations, this is ham radio operator KE7ZZ calling and listening."
From a modern perspective Morse code has a number of drawbacks including a data rate much slower than voice or modern digital methods, but over long distances, in adverse conditions or with little transmitter power available, it has been the preferred mode for reliable point-to-point communications — until recently.
In 2007 the (U.S.) Federal Communications Commission dropped the requirement that ham radio operators be able to send and receive Morse code (similar changes have taken place in commercial and military radio operations). Many celebrated this change for a number of reasons — modern wireless communications no longer relies on Morse, even over long distances and critical applications, and the requirement was stopping many people from getting ham radio licenses and starting an interesting hobby.
From my perspective, now that Morse is a dead art, my interest has increased — sort of like wanting to learn Latin on realizing no one speaks it any more. This page is dedicated to the lost art of Morse code, which is able to communicate more information over greater distances, in the presence of poorer atmospheric conditions and interfering signals, than any other method.
Practice Area : Text to Code
In this section, users can translate sample text into Morse code, or paste their own text into the practice window and translate that. This editing feature lets you create custom code while learning to listen to and decode Morse.
To use this feature, just click the provided link to translate the default text sample, or if you prefer, click below to erase the default text sample, type or paste your own text into the practice window, then click the Start button to start/stop translation.
Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this. But, in a larger sense, we can not dedicate — we can not consecrate — we can not hallow — this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished work which they who fought here have thus far so nobly advanced. It is rather for us to be here dedicated to the great task remaining before us — that from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion — that we here highly resolve that these dead shall not have died in vain — that this nation, under God, shall have a new birth of freedom — and that government of the people, by the people, for the people, shall not perish from the earth. Abraham Lincoln November 19, 1863
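The text-to-code translation the page performs can be sketched in Python; the table is the standard International Morse subset for letters and digits (with the page's corrected -.-.-- for '!'), and the word-separator convention is mine:

```python
# International Morse code for letters, digits, and '!'
MORSE = {
    "a": ".-",   "b": "-...", "c": "-.-.", "d": "-..",  "e": ".",
    "f": "..-.", "g": "--.",  "h": "....", "i": "..",   "j": ".---",
    "k": "-.-",  "l": ".-..", "m": "--",   "n": "-.",   "o": "---",
    "p": ".--.", "q": "--.-", "r": ".-.",  "s": "...",  "t": "-",
    "u": "..-",  "v": "...-", "w": ".--",  "x": "-..-", "y": "-.--",
    "z": "--..", "0": "-----", "1": ".----", "2": "..---", "3": "...--",
    "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..",
    "9": "----.", "!": "-.-.--",
}

def to_morse(text):
    """Characters within a word are separated by a space, words by ' / '
    (a common plain-text convention); unknown characters are dropped."""
    words = text.lower().split()
    return " / ".join(
        " ".join(MORSE[c] for c in w if c in MORSE) for w in words
    )
```

For example, to_morse("CQ DX") yields "-.-. --.- / -.. -..-", the ham call from the introduction.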
Recording Utility
This section allows the user to make and save a recording of the Morse code generated above. To use this feature:
Click the "Enable Audio Recording" checkbox below.
Operate any of the Morse generator functions on this page.
Return here to play and/or save the result.
If this feature doesn't work on your browser (i.e. any Microsoft browser), well ... change browsers. At the time of writing, neither of Microsoft's browsers supports this feature, but both Google Chrome and Firefox do.
Enable Audio Recording
To save a recording, right-click the play arrow and choose "Save audio as ..." or another available feature.
Remember to disable the recording feature when you're done — it uses browser memory when enabled.
Practice Area : Code to Text
This section allows users to listen to a sequence of Morse code audio signals and type the corresponding characters on the computer keyboard. This is an efficient way to learn how to translate Morse code into text. The user may choose which subsets of Morse code are included — letters, numbers, and different punctuation sets:
Letters (a-z)
Common Punctuation ()
Exotic Punctuation ()
Press "Start" below, then type a guess about the Morse character you hear. If your guess is correct, a new character will be generated. If there's an error, the program will beep and repeat the missed character. Press "Stop" to end the practice session.
Use the Control Panel below to change the code practice transmission rate and other properties.
This panel allows fine tuning of the Morse code generator's values. Simply change a setting at the right and generate some code to hear the result. Your changes are saved in a browser cookie for later use. A full explanation of the values appears below the entry panel.
Note: For an immediate change, press Enter after making your entry.
Reset all values to defaults
Figure 1: Code timing diagram
Dot constant. We compute the time duration of a Morse code dot using this equation:
\begin{equation} dd = \frac{dc}{wpm} \end{equation}
dd = Duration for one dot, units seconds.
dc = Dot Constant.
wpm = Desired words per minute.
The value dc is set by convention to 1.2 (units seconds) as explained here. This value produces code timings that differ slightly from that of the ARRL code practice recordings, which many people consider an excellent resource. Users are free to choose a different value for this quantity, which will be saved between sessions.
NOTE: Each of the dot times listed below is added to the prior value. So an entered value of 2 for dot time between characters produces an effective dot time of 3 (2+1). In the same way, an entry of 4 for dot time between words produces an effective dot time of 7 (4+2+1) — see Figure 1.
Dot time between dots and dashes. Scaled by dd as in equation (1). This is the duration of the silent pause between any two dots or dashes within a Morse code element, and the value is proportional to a dot duration.
Dot time between characters. Scaled by dd as in equation (1). In much the same way, this entry determines the duration of the pause between complete Morse code elements.
This value also plays a part in the Farnsworth speed method described below.
Dot time between words. Scaled by dd as in equation (1). This entry determines the duration of the pause between entire words of code.
This value can be used to stretch the interval between words while maintaining a specific WPM rate for the individual Morse code elements. This entry (and the dot time between characters entry above) allows creation of Farnsworth-speed code, popular for teaching Morse. The Farnsworth scheme creates a difference between character speed and text speed, which allows the student to hear individual characters at a relatively high rate, but allows more time between characters and words for interpretation.
Speed WPM. Units words per minute. In connection with the Dot Constant and equation (1) above, this entry determines the overall speed of the code.
Frequency. Units Hertz. The pitch of the generated code. A frequency entry of 0 will produce a DC level at the computer's audio output, suitable for operating a relay or other device to be time-synchronized with the Morse code's dots and dashes.
Volume. A value of 1.0 represents full volume. Values greater than 1.0 will produce distortion in the computer's audio output.
Slope Constant. Units seconds. This value determines the waveform rise and fall time for the generated dots and dashes. To understand the reason for this setting, try setting it to zero and see how the code sounds. This rise and fall time issue is particularly important in radio communication because an improperly designed transmitter, one with rapid rise and fall times, creates sidebands much wider than necessary and interferes with other transmissions. The default value is 0.005 (5 milliseconds).
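Equation (1) and the additive gap entries can be folded into a small Python helper (parameter names are mine; the defaults follow the conventional 1/3/7 dot-time spacing described above):

```python
def element_times(wpm, dot_constant=1.2, char_gap_extra=2, word_gap_extra=4):
    """Durations in seconds, using dd = dc / wpm (equation 1).
    The gap entries are additive, as the panel notes: an entry of 2 between
    characters yields 3 dot times total, and 4 between words yields 7."""
    dd = dot_constant / wpm
    return {
        "dot": dd,
        "dash": 3 * dd,
        "intra_gap": dd,                        # between dots/dashes
        "char_gap": (1 + char_gap_extra) * dd,  # between characters
        "word_gap": (1 + char_gap_extra + word_gap_extra) * dd,  # between words
    }
```

At 12 WPM this gives a 0.1 s dot, a 0.3 s dash, and a 0.7 s inter-word pause, matching the timing diagram in Figure 1.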
Scope Timing Trace
Figure 2: Oscilloscope trace for repeated letter "i" ("..").
Figure 2 captures this page's Morse generator output with a frequency setting of 0 (which produces a DC level at the computer's audio output, convenient for oscilloscope measurements). The center of the display is the beginning of the letter "i" (".."). The first dot is followed by one dot-time, then another dot, then an intercharacter pause of 3 dot times. At the left center of the display is the inter-word pause of 7 dot times after the prior word.
Morse Code List
Here's a list of widely accepted Morse code sequences. Click the characters to hear them rendered as Morse code.
Other Programming Resources
All my programs are released under the GPL, which means you can use them in your own projects, adapt the code to your own purposes, as long as my copyright notices are preserved in the source listings.
This Web page uses JavaScript to generate Morse code — click here to see/download the source.
I have also written a Java Morse code generator — click here to download the JAR file. To use the Java program, install Java if you haven't already, then open a command prompt in the same directory as the JAR file and enter:
$ java -jar morse_sender.jar
Some helpful text will appear. To translate some code:
$ java -jar morse_sender.jar here is my text
There are a number of other ways to supply the text to be translated, as shown in the help screen.
Click here to view/download the Java source.
I have written a Python code generator also, which operates very much like the Java program — click here to view/download the program listing. Much like the Java program, make sure you have Python 3 installed on your system and open a command shell:
$ ./morse_sender.py (for help)
$ ./morse_sender.py here is my text
The above might not work so well on Windows unless Python has been installed in a way that makes it visible everywhere on your system.
Land Telegraphy
Although Morse code saw its greatest use in connection with radio, it came into existence before radio even existed. Samuel F. B. Morse co-created the first version of Morse Code in the early 19th century while developing and pioneering the use of land telegraph systems.
In those times, when the sender pressed a telegraph key, the "receiver" — an electromagnetic relay — would close: clack! Compared to the ease of understanding modern code tones transmitted by radio, I imagine it required some practice to distinguish dots from dashes and signals from silence, while listening to a chattering relay.
Required Bandwidth
Radiotelegraphy signals require very little bandwidth, and many more code transmitters can be crowded into a given band than when using other radio schemes. The explanation has to do with the relationship between information content and bandwidth: more information in a given time interval requires more bandwidth. This relationship is formalized by the Nyquist rate, which briefly says that a channel of bandwidth B can carry at most 2B symbols per second, so transmitting N bits per second of binary data requires a bandwidth of at least N/2.
Consider this relationship between different radio modulation methods:
Bandwidth Required
Morse code (CW) 100 Hz
Single sideband (SSB) 5 KHz
Amplitude Modulation (AM) (broadcast) 10 KHz
Frequency Modulation (FM) (broadcast) 100 KHz
HDTV (broadcast, compressed) 8-16 MHz
Another way to look at this is to compare the time required to transmit a book-length manuscript, by comparing Morse code with other wireless methods. A typical modern book has 70,000 words, each word an average of five characters, so 350,000 characters.
If we transmit the book's words by Morse code at 13 five-character words per minute, and disregarding data lost to missing punctuation and uppercase-only alphabetic characters, and assuming a continuous effort with no breaks for darkness or fatigue, we would need almost 90 hours or 3.74 days.
How much time would a book recital require? A professional speaker can produce a word rate of 155 words per minute, so — again without any pauses for fatigue or bathroom breaks — we would need 7.5 hours.
Skipping over digital television and other difficult-to-quantify transmission methods, let's see how much time would be required to transmit the book in digital form, using a modern wireless channel. I chose 802.11g for this example, to avoid the science fiction element inherent in the newer, faster protocols. The 802.11g maximum data rate is 54 Mbps (megabits per second), which can be converted to 6.75 MBps (megabytes per second) if we assume eight bits per byte. Using this communication method, the 350,000-byte book can be transmitted in ... wait for it ... about 52 milliseconds, roughly 1/20 of a second.
The takeaway? It requires roughly six million times longer to transmit the book by Morse code than with modern methods.
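The Morse and speech figures above can be reproduced in a few lines of Python (the wireless-channel step is left out):

```python
WORDS = 70_000            # typical modern book
CHARS_PER_WORD = 5        # so 350,000 characters total

morse_minutes = WORDS / 13    # 13 five-character words per minute
speech_minutes = WORDS / 155  # professional speaking rate

morse_hours = morse_minutes / 60    # about 90 hours, i.e. 3.74 days
speech_hours = speech_minutes / 60  # about 7.5 hours
```
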
QRP
The code phrase QRP means using the minimum amount of power required to communicate. Because Morse code requires such a small bandwidth, it follows that a receiver can be adjusted to accept only a small bandwidth that includes the desired signal. This adjustment greatly reduces interference from noise sources and other transmitters, which means a very small transmitter power can be used. Even though U.S. radio amateurs can use as much as 1000 watts of transmitter power, those who choose to use the smallest amount of power can easily accomplish their communications with five watts or less — by using Morse code.
Using specialized receiving methods, in experiments I've extracted a signal from a noise environment that was 100 times stronger (+40 dB). In my phase-locked-loop article I describe the method in detail and include example code. This means if I knew which frequency to monitor and using the phase-locked-loop method, I could receive a signal from a small transmitter on Mars. This idea would only work if the participants used Morse code.
2020.09.12 Version 1.8. Based on reader feedback, corrected the code sequence for the exclamation point (to -.-.--).
2020.03.03 Version 1.7. Added a function to overcome a new browser requirement that user interaction must precede the playing of audio.
2017.06.01 Version 1.6. Added a translation table for certain common Unicode characters that the original program would skip.
2017.05.19 Version 1.5. Made the control inputs more responsive to user changes.
2017.05.13 Version 1.4. Changed configuration of audio generator for better browser compatibility, changed assignments in code practice punctuation symbol groups.
2017.05.10 Version 1.3. Added a code practice section so readers can listen to code and type the corresponding characters, as a way to learn code reception.
2017.03.28 Version 1.2. Added a volume control setting, changed the generator configuration so it produces a DC output level when frequency is set to zero, which facilitates oscilloscope traces of code signals.
2017.03.28 Version 1.1. Added timing diagram, adjusted default code timings to agree with accepted conventions.
2017.03.26 Version 1.0. Initial public release.
Reader Feedback
Exclamation point Error
First, I really like your Morse code page. The quality of the sound is very good. The material is clearly presented with links for additional information. Thank you. You're most welcome! Now an observation of what looks to be a difference between the page and the Wiki page referenced. I found another page that agrees with the Wiki page: https://morsecode.world/international/morse.html
When working on the characters other than letters and numerals, I found that the ! is given as "Exclamation Point [!] KW digraph Not in ITU-R recommendation", or dah-di-dah-di-dah-dah, at the Wikipedia link, but it is dah-dah-dah-dit on your page, https://arachnoid.com/morse_code/index.html
The site https://lcwo.net has a third sequence for !. Thanks for your correction to my outdated code for '!'. I've changed the source in this Web page as well as in the three downloadable computer language versions I offer — Java, JavaScript and Python. I appreciate your feedback and your attention to detail.
I noticed one wrinkle when testing the result in a Linux command-line environment. Because of how the Bash command-line processor works, one must use single quotes, not double quotes, when submitting text examples that include '!'.
Thanks again, and feel free to offer any more corrections you care to!
November 2019, 18(6): 3035-3057. doi: 10.3934/cpaa.2019136
Asymptotic behavior of solutions to incompressible electron inertial Hall-MHD system in $ \mathbb{R}^3 $
Ning Duan 1,2, Yasuhide Fukumoto 3, and Xiaopeng Zhao 1,2,*
School of Science, Jiangnan University, Wuxi 214122, China
School of Science, Northeastern University, Shenyang 110819, China
Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan
* Corresponding author
Received: September 2018. Revised: September 2018. Published: May 2019.
In this paper, by using the Fourier splitting method and the properties of the decay character $ r^* $, we study the decay rates of higher-order derivatives of solutions to the 3D incompressible electron inertial Hall-MHD system in the Sobolev spaces $ H^s(\mathbb{R}^3)\times H^{s+1}(\mathbb{R}^3) $ for $ s\in\mathbb{N}^+ $. Moreover, based on a parabolic interpolation inequality, a bootstrap argument and some weighted estimates, we also address the space-time decay properties of strong solutions in $ \mathbb{R}^3 $.
Keywords: Electron inertial Hall-MHD system, decay character, decay rate, weighted decay.
Mathematics Subject Classification: Primary: 35Q35, 35B40; Secondary: 76W05.
Citation: Ning Duan, Yasuhide Fukumoto, Xiaopeng Zhao. Asymptotic behavior of solutions to incompressible electron inertial Hall-MHD system in $ \mathbb{R}^3 $. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3035-3057. doi: 10.3934/cpaa.2019136
\begin{document}
\title{Multi-Attribute Utility Preference Robust Optimization:
A Continuous Piecewise Linear Approximation Approach\thanks{This project is supported by RGC grant 14500620.}
}
\begin{abstract} In this paper, we consider a multi-attribute decision making problem where the decision maker's (DM's) objective is to maximize the expected utility of outcomes but the true utility function which captures the DM's risk preference is ambiguous. We propose a maximin multi-attribute utility preference robust optimization (UPRO) model where the optimal decision is based on the worst-case utility function in an ambiguity set of plausible utility functions constructed using partially available information such as the DM's specific preferences between some lotteries. Specifically, we consider a UPRO model with two attributes, where the DM's risk attitude is multivariate risk-averse and the ambiguity set is defined by a linear system of inequalities represented by the Lebesgue–Stieltjes (LS) integrals of the DM's utility functions. To solve the maximin problem, we propose an explicit piecewise linear approximation (EPLA) scheme to approximate the DM's true unknown utility so that the inner minimization problem reduces to a linear program, and we solve the approximate maximin problem by a derivative-free (Dfree) method. Moreover, by introducing binary variables to locate the position of the reward function in a family of simplices, we propose an implicit piecewise linear approximation (IPLA) representation of the approximate UPRO and solve it using the Dfree method. Such IPLA technique prompts us to reformulate the approximate UPRO as a single mixed-integer program (MIP) and extend the tractability of the approximate UPRO to the multi-attribute case. Under some moderate conditions, we derive error bounds between the UPRO and the approximate UPRO in terms of the ambiguity set, the optimal value and the optimal solution. Furthermore, we extend the model to the expected utility maximization problem with expected utility constraints where the worst-case utility functions in the objective and constraints are considered simultaneously. 
Finally, we report numerical results on the performance of the proposed models and computational schemes, showing that the schemes work efficiently and that the UPRO model is stable against data perturbation.
\end{abstract}
\textbf{Keywords}: Multi-attribute UPRO, Non-additive utility, Lebesgue–Stieltjes integral, Preference elicitation, Piecewise linear approximation, Tractability, MIP, Error bounds, Data perturbation
\section{Introduction}
{\color{black} The utility preference robust optimization (UPRO) model concerns optimal decision making where the decision maker (DM) aims to maximize the expected utility but the true utility function which captures the DM's preference is ambiguous. Instead of finding an approximate utility function using partially available information as in the literature of behavioural economics (see, e.g., \cite{clemen2013making} and \cite[Chapter 10]{gonzalez2018utility}),
the UPRO models construct a set of plausible utility functions and base the optimal decision on the worst-case utility function from the set to mitigate such ambiguity. This type of approach can be traced back to Maccheroni~\cite{maccheroni2002maxmin} who considers the worst-case utility evaluation among
a number of available utilities when a conservative DM faces uncertain outcomes of lotteries. He derives necessary and sufficient conditions for the existence of
a set of utility functions such that the {\color{black}worst-case} in the set can be used to characterize the conservative decision making
framework. Armbruster and Delage \cite{AmD15} give a comprehensive treatment of the problem from
an optimization perspective by formally proposing a maximin UPRO paradigm. Specifically, they consider a class of utility functions which are concave or S-shaped and discuss how a DM's preference may be elicited through pairwise comparisons. Moreover, they demonstrate that solving the UPRO
model
reduces to solving a linear program (LP) under some mild conditions. Over the past few years, research on UPRO-related models has received increasing attention; see, for instance, \cite{hu2015robust,haskell2016ambiguity,hu2018robust,GXZ21,DGX22,WuX22}.
The above UPRO models all concern single-attribute decision making problems. In practice, there is a large body of {\color{black}literature} focusing on the multi-attribute case.
For instance, in healthcare it is typical to use several metrics rather than just one to measure the quality of life (\cite{feeny2002multiattribute,torrance1982application}). Similar problems can be found in network management \cite{azaron2008multi,chen2010stochastic}, scheduling \cite{Zakariazadeh2014}, multiobjective design optimization problem \cite{Dino2017,tseng1990minimax}, and portfolio optimization \cite{fliege2014robust}. Indeed, over the past few decades, there has been significant research on multi-attribute expected utility \cite{fishburn1992multiattribute,miyamoto1996multiattribute,tsetlin2006equivalent,tsetlin2007decision,tsetlin2009multiattribute,von1988decomposition}.
Zhang et al.~\cite{zhang2020preference} seem to be the first to
propose a preference robust optimization (PRO) model for multi-attribute decision making. Specifically, they consider a multivariate shortfall risk minimization problem where there is an ambiguity in an investor's true disutility function of losses and they consider the worst-case disutility function
in an ambiguity set
to calculate the risk measure.
Wu et al.~\cite{wu2020preference}
propose a general PRO model
for
multi-attribute decision making. Instead of considering expected utility,
they consider a
quasi-concave choice function
to measure the DM's multi-attribute rewards which subsumes the expected utility model as a special case, and propose a support function-based approach to solve
the resulting preference robust choice problem. Since the model is very general, the computational scheme does not benefit from the specific structure that it would do in expected utility maximization problems.
For example, it is unclear whether we can use piecewise linear utility functions to approximate the true unknown utility function in the multi-attribute UPRO models as in the single-attribute case (\cite{GXZ21}). We are interested in the piecewise linear approximation (PLA) approach for several reasons. First, a DM's utility preference is usually elicited at some discrete points. Connecting the utility function values at these points to form a piecewise linear utility function is the easiest way to obtain an approximate utility function. Second, the PLA approach works for a broader class of UPRO models without specific requirements on convexity, S-shapedness or quasiconvexity of the true utility function. Third, although the PLA approach does not solve UPRO models as precisely as the support function-based approach, it allows us to derive an error bound under some moderate conditions.
In this paper, we endeavour to carry out a comprehensive study on the multi-attribute UPRO from modelling to computational schemes and underlying theory. Unlike single-attribute case, a conservative DM's utility function is not necessarily concave which means that Armbruster-Delage's support function-based approach is not applicable in this case. This prompts us to adopt the PLA approach. The extension of the PLA approach from single-attribute UPRO to multi-attribute UPRO would be trivial if the utility function is additive or concave.
However, when we consider
a general multi-attribute utility function without
specific independence condition,
the construction, representation of PLA and subsequent computation of the approximate UPRO require
a lot of new work.
One of the
challenges that we have to tackle is to find an appropriate
representation
of a piecewise linear utility function which is easy to construct, and
to embed
in the objective function and in the ambiguity set.
The main contributions of this paper can be summarized as follows.
First, we
propose a maximin robust optimization model for bi-attribute decision making
where the DM is
multivariate risk-averse,
there is incomplete information to identify the DM's true utility function, and the optimal decision is based on the worst-case utility function in an ambiguity set. We discuss in detail how the ambiguity set of bivariate utility functions may be constructed by standard preference elicitation
methods such as pairwise comparisons.
To solve
the maximin problem, we propose a two-dimensional continuous
PLA scheme to approximate the true unknown utility function
so that the inner minimization problem can be reduced to a finite-dimensional program.
Differing from the one-dimensional case, we divide the domain of the utility functions into a set of mutually exclusive triangles and define an approximate
utility function which is linear over each of the triangles.
Moreover,
we reformulate the ambiguity set defined by
the expected utility values of the DM's preferences
between
lotteries
into the one where the expected utility values are represented by the Lebesgue–Stieltjes (LS) integrals with respect to (w.r.t.) the utility function.
The PLA approach allows us to derive the approximate utility function explicitly using indicator functions {\color{black}and} to characterize the Lipschitz continuity of the utility function in each individual variable, and
{\color{black}enables} us to calculate the LS integrals conveniently.
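As a minimal numerical illustration of the two-dimensional continuous PLA idea (a sketch only: the rectangular grid, the diagonal split of each cell and the sample values are my own choices, not the paper's exact construction), nodal utility values can be interpolated linearly over the two triangles of each grid cell:

```python
from bisect import bisect_right

def pla_eval(xs, ys, U, x, y):
    """Evaluate the continuous PLA of u on T, where U[i][j] = u(xs[i], ys[j]).

    Each grid cell is split into two triangles along its diagonal, and u is
    interpolated linearly on each triangle."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect_right(ys, y) - 1, 0), len(ys) - 2)
    sx = (x - xs[i]) / (xs[i + 1] - xs[i])  # local coordinates in [0, 1]
    sy = (y - ys[j]) / (ys[j + 1] - ys[j])
    if sx >= sy:  # triangle with vertices (i, j), (i+1, j), (i+1, j+1)
        return U[i][j] + sx * (U[i + 1][j] - U[i][j]) + sy * (U[i + 1][j + 1] - U[i + 1][j])
    # triangle with vertices (i, j), (i, j+1), (i+1, j+1)
    return U[i][j] + sy * (U[i][j + 1] - U[i][j]) + sx * (U[i + 1][j + 1] - U[i][j + 1])
```

By construction the interpolant is continuous across the shared diagonal of each cell and reproduces affine functions exactly, which is the sense in which it approximates a smooth utility on a fine grid.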
Second, by exploiting the piecewise linearity of the approximate utility function, we use the well-known polyhedral method
(\cite{DLM10,KDN04,LeW01,VAN10,vielma2015mixed})
to represent the
multi-attribute reward functions
using a convex combination of the vertices of the simplex
containing the vector in the domain of
the multivariate utility function,
and subsequently reformulate the inner
approximate
utility minimization problem as a mixed-integer program (MIP). Differing from the PLA approach described above, the approximate utility function cannot be represented explicitly, rather it is determined by solving an MIP. We call this implicit PLA (IPLA) whereas the former is explicit PLA (EPLA). A clear benefit of IPLA is that
it works for multidimensional cases and also allows us to reformulate the whole maximin problem as a single MIP.
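In its generic form, the convex-combination representation behind the IPLA can be sketched as follows (a standard polyhedral formulation in the spirit of \cite{vielma2015mixed}; the paper's exact formulation may differ). Let $\{\bm{v}_k\}$ denote the vertices of the triangulation, $\{\Delta_l\}$ the simplices and $V(\Delta_l)$ the index set of the vertices of $\Delta_l$. Then
\begin{align*}
& \bm{f}(\bm{z},\bm{\xi})=\sum_{k} \lambda_k \bm{v}_k, \quad \sum_{k}\lambda_k=1, \quad \lambda_k\geq 0, \\
& \sum_{k\in V(\Delta_l)}\lambda_k \geq \delta_l, \quad \sum_{l}\delta_l=1, \quad \delta_l\in\{0,1\},
\end{align*}
so that exactly one simplex carries the whole weight and the piecewise linear utility evaluates as $u(\bm{f}(\bm{z},\bm{\xi}))=\sum_{k}\lambda_k u(\bm{v}_k)$.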
Third, we extend the preference robust approach to the expected utility maximization problem with expected utility constraints. Instead of considering the worst-case utility in the objective and the worst-case utility in the constraints separately, we propose a UPRO model where the optimal decision is based on the same worst-case utility function in both the objective and the constraints. We derive conditions under which the two robust formulations are equivalent and
carry out comparative analysis through numerical studies to identify the differences that the two models may render.
Fourth, to justify the PLA scheme, we derive {\color{black}error bounds} for the optimal value and the optimal solutions, which is built on a newly derived Hoffman's lemma for the linear system in the infinite-dimensional space under the pseudo-metric. We also quantify the difference between {\color{black}the ambiguity sets} before and after the PLA
and indicate the special cases when these two ambiguity sets coincide. Moreover, to facilitate the application of the UPRO model in a data-driven environment, we carry out stability analysis on the optimal value and the optimal solutions of the UPRO model against data perturbation/contamination.
Finally, we undertake extensive numerical tests on the proposed UPRO models and
computational schemes and obtain the following main findings. The EPLA scheme (see (\ref{eq:PRO-N-reformulate})) and the IPLA scheme (see (\ref{eq:PRO_MILP_eqi})) generate the same results in terms of the convergence of the worst-case utility functions and the optimal values, but the former works much faster because the IPLA requires solving a MILP as opposed to an LP in the EPLA and the number of integer variables increases rapidly with the increase of scenarios of the underlying exogenous uncertainty.
The
IPLA works also well in tri-attribute case although
the CPU time is long.
For the constrained expected utility maximization problem, the two robust models may coincide in some cases but differ in other cases depending on the constraints. The approximate maximin model is stable in the presence of small perturbations arising during the preference elicitation process and {\color{black}resulting from} exogenous uncertainty data.
The rest of the paper is organized as follows. Section~\ref{sec:2Bi-Attribute} introduces the multi-attribute UPRO model
and the definition of the ambiguity set. Section~\ref{sec:numer-methods}
gives the details of the EPLA
approach and
tractable formulation of approximate UPRO in bi-attribute case.
Section~\ref{sec:multi-atrribute} discusses the IPLA approach for the UPRO in multi-attribute case.
Section~\ref{sec-errorbound} investigates the error bound of the approximate ambiguity set as well as the impact on the optimal value and the optimal solutions to the UPRO model. Section~\ref{sec:constrained} extends the UPRO model to the constrained optimization problem.
Section~\ref{sec:numerical results} reports the numerical tests of the UPRO model. Concluding remarks are given in Section~\ref{sec:Concluding remarks}.
\section{The bi-attribute UPRO model} \label{sec:2Bi-Attribute}
We consider the following one-stage expected bi-attribute utility maximization problem \begin{eqnarray} \label{eq:UMP-bi} \max_{\bm{z}\in Z} \; {\mathbb{E}}_P[u(\bm{f}(\bm{z},\bm{\xi}))], \end{eqnarray} where $\bm{f}:{\rm I\!R}^{n}\times {\rm I\!R}^{m} \to {\rm I\!R}^2$ is a continuous vector-valued function representing the rewards from two attributes, $\bm{z}$ is a decision vector which is restricted to taking values over a specified feasible set $Z\subset {\rm I\!R}^n$, $\bm{\xi}$ is a random vector representing exogenous uncertainties in the decision making problem mapping from probability space $(\Omega,\mathcal{F},\mathbb{P})$ to ${\rm I\!R}^m$,
the expectation is taken w.r.t.~the probability of $\bm{\xi}$, i.e., $P:=\mathbb P\circ\bm{\xi}^{-1}$, and $u:{\rm I\!R}^2\to{\rm I\!R}$ is a real-valued
componentwise non-decreasing continuous utility function, which maps each value of $\bm{f}$ to a utility value of the DM's interest.
To facilitate our discussions, we make the following assumption throughout the paper.
\begin{assumption} \label{assu-original} $\bm{f}$ is a continuous function with its range covered by
$T:=X\times Y$ with $X:=[\underline{x},\bar{x}]$ and $Y:=[\underline{y},\bar{y}]$,
$Z$ is a compact convex subset of ${\rm I\!R}^n$ and the support set $\Xi$ of $\bm{\xi}$ is compact. \end{assumption}
Assumption~\ref{assu-original} allows us to restrict the domain of the unknown true utility function to a rectangle $T$. We follow \cite{hu2015robust} and the literature of behavioural economics to normalize the utility function with $u(\underline{x},\underline{y})=0$ and $u(\bar{x},\bar{y})=1$. In most of the existing research on multi-attribute decision making, utility functions are assumed to be known (\cite{greco2016multiple,liesio2021nonadditive}) or can be elicited and estimated through a tolerable amount of questions (\cite{andre2007non}). In practice, however, a DM's utility function is often unknown either from the DM's perspective or from the modeller's perspective (\cite{AmD15}).
In this paper, our focus is on the situation where the DM does not have complete information to identify the true utility function $u^*$, i.e.,
risk preference, but it is possible to elicit partial information to construct an ambiguity set of utility functions, denoted by $\mathcal{U}$, such that the true utility function which represents the DM's preference lies within $\mathcal{U}$ with high likelihood. Under this circumstance, it might be sensible to consider the following bi-attribute utility preference robust optimization model to mitigate the model risk arising from the ambiguity in the true utility function \begin{equation} \label{eq:MAUT-robust}
\inmat{(BUPRO)} \quad
{\vartheta}:=\max_{\bm{z}\in Z} \; \min_{u\in {\cal U}} \; {\mathbb{E}}_P[u(\bm{f}(\bm{z},\bm{\xi}))]. \end{equation} The structure of the BUPRO model is largely determined by the structure of the ambiguity set $\mathcal{U}$ as well as the nature of the utility functions in this set.
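To make the maximin structure of (BUPRO) concrete, the following toy sketch brute-forces (\ref{eq:MAUT-robust}) when the ambiguity set is replaced by a finite candidate set of utilities, $\bm{\xi}$ has finitely many scenarios, and $Z$ is discretized to a grid; all ingredients are hypothetical and serve only to illustrate the inner minimization and outer maximization.

```python
def robust_value(z_grid, utilities, scenarios, probs, f):
    """Brute-force toy maximin: maximize over z_grid the worst-case
    expected utility over the finite candidate set `utilities`."""
    best_z, best_val = None, float("-inf")
    for z in z_grid:
        rewards = [f(z, xi) for xi in scenarios]  # bi-attribute rewards f(z, xi)
        worst = min(
            sum(p * u(x, y) for p, (x, y) in zip(probs, rewards))
            for u in utilities
        )
        if worst > best_val:  # keep the decision with the best worst case
            best_z, best_val = z, worst
    return best_z, best_val
```

For instance, with two candidate utilities picking out each attribute separately and a reward that trades one attribute off against the other, the robust decision balances the two attributes.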
Various approaches have been proposed to construct
an ambiguity set of utility functions in the literature of PRO depending on the availability of information (see \cite{AmD15, liu2021multistage,guo2022robust}). They are usually based on two types of information about a DM's preference: generic information such as risk aversion or risk taking
and specific information such as preferring one prospect to another (see \cite{WaX23}).
In single-attribute decision making, a DM is risk-averse if and only if the DM's utility function is concave (see \cite{tsanakas2003risk}). Unfortunately, this equivalence does not hold in the multi-attribute case.
Let $x_0,x_1\in X$, $y_0,y_1\in Y$ with $x_0< x_1$ and $y_0<y_1$. Consider the following two lotteries:
Lottery one ($L_1$) gives the DM a 0.5 chance of receiving $(x_0,y_0)$ and a 0.5 chance of receiving $(x_1,y_1)$. Lottery two ($L_2$) gives the DM a 0.5 chance of receiving $(x_0,y_1)$ and a 0.5 chance of receiving $(x_1,y_0)$. The DM is said to be \emph{multivariate risk-averse} (MRA) if the DM prefers $L_2$ to $L_1$ for all $x_0,x_1,y_0$ and $y_1$ described above (see e.g.,\cite{richard1975multivariate}).
This type of behaviour means that the DM prefers a mix of the best and worst outcomes in the two respective attributes to receiving either the ``best'' or the ``worst'' in both attributes with equal probability.
Under expected utility theory, we can express the DM's preference mathematically as $0.5u(x_0,y_0)+0.5u(x_1,y_1) \leq 0.5u(x_0,y_1)+0.5u(x_1,y_0)$, which is equivalent to
\begin{equation} \label{eq:conservative} u(x_0,y_1)+u(x_1,y_0)\geq u(x_0,y_0)+u(x_1,y_1) \end{equation} for all $x_0,x_1,y_0$ and $y_1$. Inequality (\ref{eq:conservative}) is known as the \emph{conservative property}.
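To illustrate, the preference of an MRA decision maker can be checked numerically. The sketch below uses a utility function of our own choosing (not taken from the text), $u(x,y)=x+y-0.5xy$, which satisfies (\ref{eq:conservative}) since $u_{xy}=-0.5\leq 0$:

```python
import itertools

# A hypothetical conservative utility on [0,1]^2 (our own choice, for
# illustration only): u(x,y) = x + y - 0.5*x*y, so u_xy = -0.5 <= 0.
def u(x, y):
    return x + y - 0.5 * x * y

def expected_utils(x0, x1, y0, y1):
    # Lottery L1: 50/50 between (x0,y0) and (x1,y1) ("both worst" / "both best").
    EL1 = 0.5 * u(x0, y0) + 0.5 * u(x1, y1)
    # Lottery L2: 50/50 between (x0,y1) and (x1,y0) (mixed outcomes).
    EL2 = 0.5 * u(x0, y1) + 0.5 * u(x1, y0)
    return EL1, EL2

# An MRA decision maker weakly prefers L2 to L1 for every x0 < x1, y0 < y1.
grid = [0.0, 0.25, 0.5, 0.75, 1.0]
for x0, x1 in itertools.combinations(grid, 2):
    for y0, y1 in itertools.combinations(grid, 2):
        EL1, EL2 = expected_utils(x0, x1, y0, y1)
        assert EL2 >= EL1 - 1e-12
```

Here $E[u(L_2)]-E[u(L_1)]=0.25(x_1-x_0)(y_1-y_0)\geq 0$ for this particular $u$, so the preference for $L_2$ holds for every pair of attribute levels.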
In the case when the utility function is twice continuously differentiable,
the property is equivalent to $u_{x y}:=\frac{\partial^2 u}{\partial x \partial y}\leq 0$ for all $(x,y)\in T$,
see \cite[Theorem 1]{richard1975multivariate}.
This definition is given in \cite{richard1975multivariate}; there are other definitions of MRA, see e.g.~\cite{duncan1977matrix,karni1979multivariate,levy1991arrow} and references therein.
From the definition, we can see immediately that a multivariate
risk-averse DM's utility function is not necessarily concave (e.g.~
$u(x,y)=x+y-(x y)^{1/4}$ for $x>0$ and $y>0$, which satisfies $u_{xy}\leq 0$ but is convex in $x$).
This is a fundamental difference between the multi-attribute and single-attribute utility functions.
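This claim can be verified numerically by finite differences. The following sketch (our own, with a step size chosen for illustration) checks that $u_{xy}\leq 0$ while $u_{xx}>0$ on a sample grid:

```python
# Finite-difference check (a sketch, not from the text) that
# u(x,y) = x + y - (x*y)**0.25 has u_xy <= 0 (multivariate risk aversion)
# yet u_xx > 0, so u is not concave.
def u(x, y):
    return x + y - (x * y) ** 0.25

def u_xy(x, y, h=1e-3):
    # central mixed second difference approximating the cross derivative
    return (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4 * h * h)

def u_xx(x, y, h=1e-3):
    # central second difference in x
    return (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / (h * h)

pts = [(a / 4, b / 4) for a in range(1, 9) for b in range(1, 9)]
assert all(u_xy(x, y) <= 1e-8 for x, y in pts)   # conservative everywhere
assert any(u_xx(x, y) > 0 for x, y in pts)       # but not concave
```

Analytically, $u_{xy}=-\tfrac{1}{16}(xy)^{-3/4}\leq 0$ while $u_{xx}=\tfrac{3}{16}x^{-7/4}y^{1/4}>0$, which the finite differences reproduce.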
In the forthcoming discussions, we will consider utility functions satisfying (\ref{eq:conservative}) since
multivariate risk aversion is widely considered in the literature (\cite{abbas2005attribute,abbas2009multiattribute});
examples include $u(x,y)=\frac{1-e^{-\gamma(x+\beta y)}}{1-e^{-\gamma(1+\beta)}}$ with $\gamma>0$, $\beta>0$, and $u(x,y)=e^x-e^{-y}-e^{-x-2y}$.
Specific information about a DM's preference is often obtained by a modeller during a preference elicitation process. The most widely used elicitation method is pairwise comparison (\cite{AmD15}). For instance, a DM is given a pair of lotteries ${\bm A}$ and ${\bm B}$ defined over $(\Omega,{\cal F},\mathbb{P})$ with different outcomes and asked which one is preferred. If the DM prefers ${\bm A}$, then we can use expected utility theory to characterize the preference, i.e.,
\begin{equation*} {\mathbb{E}}_{\mathbb{P}}[u(\bm{B}(\omega))] = \int_{T} u(x,y) d F_{\bm{B}}(x,y)\leq \int_{T} u(x,y) d F_{\bm{A}}(x,y) = {\mathbb{E}}_{\mathbb{P}}[u(\bm{A}(\omega))] \end{equation*} or equivalently \begin{equation} \label{eq:ambi-U-ex} \int_{T} u(x,y) d \psi(x,y):=\int_{T} u(x,y) d (F_{\bm{B}}(x,y)-F_{\bm{A}}(x,y))\leq 0, \end{equation} where $F_{{\bm A}}$ and $F_{{\bm B}}$ are the cumulative distribution functions of ${\bm A}$ and $\bm{B}$, and $u$ is the true utility function which represents the DM's preference but is unknown. The outcomes of the pairwise comparisons enable us to narrow down the scope of the utility function via inequalities. As more questions are asked, we derive more such inequalities, which lead to a smaller ambiguity set.
To facilitate discussions, we give a formal definition of the ambiguity set constructed as such.
\begin{definition}
\label{defi:ambguity-set} Let $\mathscr{U}$ be the set of continuous, componentwise non-decreasing, and normalized utility functions mapping from $T$
to $[0,1]$
satisfying the conservative property (\ref{eq:conservative}). Define the ambiguity set of utility functions as \begin{equation} \label{eq:ambiguity_set}
\mathcal{U}:=\left\{ u\in \mathscr{U}
\,:\,\int_{T} u(x,y)d\psi_l(x,y)\leq c_l, l=1,\ldots,M \right\}, \end{equation} where $\psi_l:T\rightarrow {\rm I\!R}$ is a real-valued function and
$c_l$ is a given constant for $l=1,\ldots, M$, and the integrals are in the sense of Lebesgue-Stieltjes
integration.
\end{definition} In this definition, we make a blanket assumption that
the Lebesgue-Stieltjes (LS)
integrals are well-defined;
we refer readers to \cite{clarkson1933definitions}, \cite[page 129]{hildebrandt1963introduction} and \cite{Mcs47} for the concept and properties of such integrals.
${\cal U}$ in (\ref{eq:ambiguity_set}) is defined by a system of inequalities which are linear in both $u$ and $\psi_l$. Thus the ambiguity set ${\cal U}$ defined as such is a convex set. Moreover, we assume that the DM's preferences shown during the elicitation process are consistent, which means that ${\cal U}$ is non-empty.
In practice, preferences observed over an elicitation process may be inconsistent due to observation/measurement errors, noise in data,
or the DM's erroneous answers. We refer readers to \cite{AmD15} and
\cite{BeO13} for approaches to handle the inconsistency.
\section{Explicit piecewise linear approximation of BUPRO}
\label{sec:numer-methods}
We now move on to discuss how to solve the maximin problem (\ref{eq:MAUT-robust}).
Since the true utility function is not necessarily concave, we cannot adopt the
support function-based approach
used in single-attribute UPRO models (see \cite{AmD15}) and in multi-attribute UPRO models (see \cite{zhang2020preference}).
Instead, we use the PLA
approach considered in \cite{GXZ21}. The main challenge is that constructing a PLA of a bivariate utility function is much more complex than that of a univariate utility function. In this section, we discuss the details.
Let ${\cal X}:=\{x_i,i=1,\ldots,N_1\}\subset X$ and ${\cal Y}:=\{y_j,j=1,\ldots,N_2\}\subset Y$ with $\underline{x}=x_1<\ldots<x_{N_1}=\bar{x}$ and $\underline{y}=y_1<\ldots<y_{N_2}=\bar{y}$. We define $\mathcal{X}\times \mathcal{Y}:=\{(x_i,y_j), x_i\in {\cal X},y_j\in {\cal Y}\}$ as a set of $N_1N_2$ gridpoints.
Let $X_1:=[x_1,x_2]$, $X_i:=(x_i,x_{i+1}]$ for $i=2,\ldots,N_1-1$ and $Y_1:=[y_1,y_2]$, $Y_j:=(y_j,y_{j+1}]$ for $j=2,\cdots,N_2-1$. We divide $T$ into $(N_1-1)(N_2-1)$ mutually exclusive cells $T_{i,j}:=X_i\times Y_j$, $i=1,\cdots,N_1-1$, $j=1,\cdots,N_2-1$ and $T=\bigcup_{i=1}^{N_1-1}\bigcup_{j=1}^{N_2-1} T_{i,j}$.
There are two ways to define a continuous piecewise linear function over a cell $T_{i,j}$. One is to define two linear pieces over the two triangle areas separated using the {\em main diagonal} (Type-1 PLA) connecting $(x_i,y_j)$ and $(x_{i+1},y_{j+1})$ and the other is using the {\em counter diagonal} (Type-2 PLA) connecting $(x_i,y_{j+1})$ and $(x_{i+1},y_{j})$, see Figure~\ref{fig-division-all} for an illustration.
Consider Type-1. For any $(x,y)\in T_{i,j}$, if $\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\leq\frac{y-y_j}{x-x_i}$, then $(x,y)$ lies in the upper triangle and the upper linear piece of the utility function is defined as \begin{equation} \label{eq-up}
u^{1u}_{i,j}(x,y) := \frac{y_{j+1}-y}{y_{j+1}-y_j} u_{i,j}
+\left(\frac{y-y_j}{y_{j+1}-y_j}-\frac{x-x_i}{x_{i+1}-x_i}\right) u_{i,j+1}
+\frac{x-x_i}{x_{i+1}-x_i} u_{i+1,j+1}. \end{equation}
If $0\leq\frac{y-y_j}{x-x_i}\leq \frac{y_{j+1}-y_j}{x_{i+1}-x_i}$, then $(x,y)$ lies in the lower triangle and \begin{equation} \label{eq-lo}
u^{1l}_{i,j}(x,y) := \frac{x_{i+1}-x}{x_{i+1}-x_i} u_{i,j}
+\left( \frac{x-x_i}{x_{i+1}-x_i}-\frac{y-y_j}{y_{j+1}-y_j} \right) u_{i+1,j}
+\frac{y-y_j}{y_{j+1}-y_j} u_{i+1,j+1}, \end{equation} where $u_{i,j} := u(x_i,y_j)$, $i=1,\cdots,N_1,j=1,\cdots,N_2$.
Note that this definition is based on interpolation using the utility values at the three vertices of the triangles.
It differs significantly from Guo and Xu \cite{GXZ21} and Hu~et~al.~\cite{hu2022distributionally}, where each linear piece is defined in
slope-intercept form.
We do not
adopt their approaches because,
in the multi-attribute case, they require the utility values of two neighbouring active linear pieces to coincide on the boundary of each cell,
which would significantly complicate the representation of the PLA.
We now turn to discuss the construction of Type-2 PLA.
The upper
and lower linear pieces can be defined respectively as \begin{equation} \label{eq-up-2}
u_{i,j}^{2u}(x,y) := \frac{x_{i+1}-x}{x_{i+1}-x_i} u_{i,j+1}
+\left( \frac{x-x_i}{x_{i+1}-x_i}-\frac{y_{j+1}-y}{y_{j+1}-y_j} \right) u_{i+1,j+1}
+\frac{y_{j+1}-y}{y_{j+1}-y_j} u_{i+1,j} \end{equation} and \begin{equation} \label{eq-lo-2}
u_{i,j}^{2l}(x,y) := \frac{y-y_j}{y_{j+1}-y_j} u_{i,j+1}
+\left( \frac{x_{i+1}-x}{x_{i+1}-x_i}-\frac{y-y_j}{y_{j+1}-y_j} \right) u_{i,j}
+ \frac{x-x_i}{x_{i+1}-x_i} u_{i+1,j}. \end{equation}
The conservative property of the utility function plays an important role here, that is, \begin{equation} \label{eq:consevative-condition} u_{i,j+1}+u_{i+1,j}\geq u_{i,j}+u_{i+1,j+1} \quad \forall i=1,\cdots,N_1-1,j=1,\cdots,N_2-1. \end{equation} If the conservative property holds at each cell, then the graph of the Type-2 piecewise linear function majorizes that of the Type-1, see Figure~\ref{fig-diagall-b}. In this case, the diagonal line connecting points $1$ and $4$ looks like a ``valley'',
while the segment connecting points $2$ and $3$ looks like a ``ridge''.
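The two constructions and the majorization claim can be checked with a short script. The sketch below (our own code, implementing (\ref{eq-up})--(\ref{eq-lo-2}) over a single unit cell with illustrative vertex values) verifies that Type-2 lies above Type-1 whenever the conservative condition holds at the cell:

```python
# Type-1 and Type-2 piecewise linear interpolation over a single cell
# [x0, x1] x [y0, y1] with vertex values u00 = u(x0,y0), u10 = u(x1,y0),
# u01 = u(x0,y1), u11 = u(x1,y1).  A sketch following Eqs. (eq-up)-(eq-lo-2).
def type1(x, y, x0, x1, y0, y1, u00, u10, u01, u11):
    tx, ty = (x - x0) / (x1 - x0), (y - y0) / (y1 - y0)
    if ty >= tx:  # upper triangle: vertices (x0,y0), (x0,y1), (x1,y1)
        return (1 - ty) * u00 + (ty - tx) * u01 + tx * u11
    # lower triangle: vertices (x0,y0), (x1,y0), (x1,y1)
    return (1 - tx) * u00 + (tx - ty) * u10 + ty * u11

def type2(x, y, x0, x1, y0, y1, u00, u10, u01, u11):
    tx, ty = (x - x0) / (x1 - x0), (y - y0) / (y1 - y0)
    if ty >= 1 - tx:  # above the counter diagonal: (x0,y1), (x1,y1), (x1,y0)
        return (1 - tx) * u01 + (tx - (1 - ty)) * u11 + (1 - ty) * u10
    # below the counter diagonal: (x0,y1), (x0,y0), (x1,y0)
    return ty * u01 + (1 - tx - ty) * u00 + tx * u10

# Vertex values satisfying the conservative condition: 0.7 + 0.6 >= 0.0 + 1.0.
vals = dict(u00=0.0, u10=0.6, u01=0.7, u11=1.0)
pts = [(a / 10, b / 10) for a in range(11) for b in range(11)]
# Type-2 majorizes Type-1 over the whole cell.
assert all(type2(x, y, 0, 1, 0, 1, **vals)
           >= type1(x, y, 0, 1, 0, 1, **vals) - 1e-12 for x, y in pts)
```

At the cell centre, for instance, Type-1 returns $(u_{00}+u_{11})/2$ (the ``valley'') while Type-2 returns $(u_{01}+u_{10})/2$ (the ``ridge''), and the conservative condition makes the latter at least as large.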
\begin{figure}
\caption{
(a) The red line is the main diagonal and
the blue line is the counter diagonal.
(b) When the conservative condition holds,
the graph of the
Type-2 piecewise linear function (PLF) (blue and green planes) lies above
that of the Type-1 PLF (represented by dotted lines).
(c) When the conservative condition fails,
the graph of the
Type-1
PLF (orange and green planes) lies above
that of the Type-2 PLF (represented by dotted lines).
}
\label{fig-division-all}
\label{fig-diagall-b}
\label{fig-diagall-c}
\label{fig-division}
\end{figure}
\begin{definition}[Ambiguity set of piecewise linear utility functions] \label{def-ambi-N} Let $\mathscr{U}_N\subset\mathscr{U}$ be the set of all Type-1 (or Type-2) piecewise linear utility functions
over $T_{i,j}$ for $i=1,\cdots,N_1-1, j=1,\cdots,N_2-1$.
Define the ambiguity set of piecewise linear utility functions as
\begin{equation} \label{eq:U_N-PLA}
\mathcal{U}_N:=\left\{ u_N\in \mathscr{U}_N \,:\,
\int_{T} u_N(x,y) d\psi_l(x,y)
\leq c_l, \; l=1,\ldots,M \right\}. \end{equation} \end{definition}
We propose to use $\mathcal{U}_{N}$ to approximate $\mathcal{U}$. Since $\mathscr{U}_{N}\subset\mathscr{U}$, we have $\mathcal{U}_{N}\subset\mathcal{U}$. Conversely, for any $u\in\mathcal{U}$,
we can construct a piecewise linear utility function $u_{N}\in\mathscr{U}_{N}$ by interpolating the utility values of $u$ at the
gridpoints $(x_i,y_j)$, $i=1,\cdots,N_1$, $j=1,\cdots,N_2$.
In general, $u_{N}\notin\mathcal{U}_{N}$, but the inclusion may hold
in some special cases.
\begin{proposition} \label{prop-uti-N} Let $\psi_l(x,y)$ be a simple function over $T$ for $l=1,\cdots,M$, which takes constant values over cells $T_{i,j}$ for $i=1,\cdots,N_1-1, j=1,\cdots,N_2-1$.
Then for any $u\in\mathcal{U}$, there exists a function $u_{N}\in\mathscr{U}_{N}$ with $u_N(x_i,y_j)=u(x_i,y_j)$ for $i=1,\ldots,N_1$, $j=1,\ldots,N_2$ such that $u_{N}\in\mathcal{U}_{N}$. Specifically, for $(x,y)\in T$
such $u_{N}$ can be constructed as Type-1 or Type-2 piecewise linear functions defined as \begin{equation} \label{eq-utility-N-1} \begin{split}
& \inmat{(Type-1)\quad} u_{N}(x,y) = \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}} (x,y) \times \\
& \left[
u^{1u}_{i,j}(x,y) \mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty\right)} \left(\frac{y-y_j}{x-x_i}\right)
+ u^{1l}_{i,j}(x,y) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]} \left(\frac{y-y_j}{x-x_i}\right) \right] \end{split} \end{equation} or \begin{equation} \label{eq-utility-N-2} \begin{split}
& \inmat{(Type-2)\quad} u_{N}(x,y) = \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}} (x,y) \times \\
& \left[u^{2u}_{i,j}(x,y) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]} \left(\frac{y_{j+1}-y}{x-x_i}\right) +
u^{2l}_{i,j}(x,y)
\mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty\right)} \left(\frac{y_{j+1}-y}{x-x_i}\right) \right], \end{split} \end{equation} where $u^{1u}_{i,j}(x,y)$, $u^{1l}_{i,j}(x,y)$, $u^{2u}_{i,j}(x,y)$, and $u^{2l}_{i,j}(x,y)$ are defined as in (\ref{eq-up})-(\ref{eq-lo-2}), and $\mathds{1}_A(\cdot)$ denotes the indicator function of set $A$.
\end{proposition}
The proof is deferred to Appendix~\ref{app:proof-uN}. Using $\mathcal{U}_{N}$, we propose to solve the BUPRO problem (\ref{eq:MAUT-robust})
by solving the following approximate problem: \begin{equation} \label{eq:MAUT-robust-N} \inmat{(BUPRO-N)} \quad {\vartheta}_N:=\max_{\bm{z}\in Z}\min_{u_{N}\in\mathcal{U}_{N}} {\mathbb{E}}_P[u_{N}(\bm{f}(\bm{z},\bm{\xi}))]. \end{equation}
In the rest of the section, we discuss numerical schemes for solving the BUPRO-N problem.
To this end, we need to restrict our discussion to the case that $\bm{\xi}$ is discretely distributed. \begin{assumption} \label{assu-discrete}
$P$
is a discrete distribution with $P(\bm{\xi}=\bm{\xi}^k)=p_k$ for $k=1,\ldots,K$. \end{assumption}
Under Assumption~\ref{assu-discrete},
we can write the BUPRO-N model
as \begin{equation} \label{eq:MAUT-robust-N-dis}
\max_{\bm{z}\in Z}\min_{u_{N}\in\mathcal{U}_{N}} \sum_{k=1}^K p_k u_{N}(\bm{f}(\bm{z},\bm{\xi}^k)). \end{equation} The maximin problem can be decomposed into an inner minimization problem \begin{equation} \label{eq:MAUT-robust-N-dis-min}
v_{N}(\bm{z}):=\min_{u_{N}\in\mathcal{U}_{N}} \sum_{k=1}^K p_k u_{N}(\bm{f}(\bm{z},\bm{\xi}^k)) \end{equation} and an outer maximization problem $ {\vartheta}_N=\max_{\bm{z}\in Z} v_{N}(\bm{z})$.
Our strategy is to formulate (\ref{eq:MAUT-robust-N-dis-min}) as an LP and solve
the outer maximization problem by
derivative-free (Dfree) methods. We will discuss
the performance of PLA in Section~\ref{sec-errorbound}. Note that if $\bm{\xi}$ is continuously distributed, then we may regard (\ref{eq:MAUT-robust-N-dis}) as a discrete approximation to the BUPRO-N model.
We now move on to derive a tractable formulation of
(\ref{eq:MAUT-robust-N-dis-min})
when $\mathscr{U}$ is a class of componentwise non-decreasing and Lipschitz continuous utility functions which are concave in
each single variate.
\begin{assumption} \label{A:concave-in-x-and-y} For any $u\in\mathscr{U}$,
the single-variate
utility functions $u(x,\hat{y})$ and $u(\hat{x},y)$ are concave for any fixed instantiations $\hat{y}\in Y$ and $\hat{x}\in X$. \end{assumption}
The single-variate utility functions in Assumption~\ref{A:concave-in-x-and-y} can be regarded as non-normalized single-attribute utility functions.
The concavity condition is widely used in the literature of expected utility theory; it implies that the DM is risk-averse in each
individual attribute (\cite{tsanakas2003risk}).
\begin{assumption} \label{assu-lip} Each function $u\in\mathscr{U}$ is Lipschitz continuous over $T$ with the modulus being bounded by $L$ in the sense that \begin{equation} \label{eq-Lip-condition}
|u(x,y)-u(x',y')|\leq L \|(x-x',y-y')\|_1 \quad \forall \, (x,y),(x',y')\in T. \end{equation} \end{assumption}
The normalization condition and the Lipschitz condition imply that $L\geq 1/(\bar{x}-\underline{x}+\bar{y}-\underline{y})$, since $1=u(\bar{x},\bar{y})-u(\underline{x},\underline{y})\leq L(\bar{x}-\underline{x}+\bar{y}-\underline{y})$. This Lipschitz condition means that the DM's utility does not change drastically at any level of the attributes. It is satisfied when $u$ is locally Lipschitz continuous over an open set containing $T$.
Notice that in the case when $\psi_l$ is not a simple function for $l=1,\cdots,M$, the LS integrals in (\ref{eq:U_N-PLA}) cannot be calculated directly. Fortunately, we can tackle the issue by swapping the positions of $u_N$ and $\psi_l$.
Specifically,
using multivariate integration by parts for the LS integrals (see, e.g., \cite{young1917multiple} and \cite{Ans22}), we can rewrite
ambiguity set (\ref{eq:ambiguity_set}) as \begin{equation} \label{eq:u_N-int} \begin{split}
\mathcal{U}_N=\left\{u_N \in \mathscr{U}_N: \int_T \right.
\hat{\psi}_l(x,y) & d u_N(x,y) + \int_{X}\psi_{1,l}(x)d u_N(x,\underline{y})\\
&\left. +\int_{Y} \psi_{2,l}(y)d u_N(\underline{x},y) \leq c_l,\;l=1,\ldots,M \right\}, \end{split} \end{equation}
where $\hat{\psi}_l(x,y):=\psi_l(\bar{x},\bar{y})-\psi_l(x,\bar{y})-\psi_l(\bar{x},y)+\psi_l(x,y)$,
$\psi_{1,l}(x):=\psi_l(\bar{x},\bar{y})-\psi_l(x,\bar{y})-\psi_l(\bar{x},\underline{y})+\psi_l(x,\underline{y})$, and
$\psi_{2,l}(y):=\psi_l(\bar{x},\bar{y})-\psi_l(\underline{x},\bar{y})-\psi_l(\bar{x},y)+\psi_l(\underline{x},y)$ for $l=1,\ldots,M$.
Likewise, we can reformulate the ambiguity set ${\cal U}$ defined in (\ref{eq:ambiguity_set})
as
\begin{equation} \label{eq-equipres} \begin{split}
\mathcal{U}=\left\{u \in \mathscr{U}: \int_T \right.
\hat{\psi}_l(x,y) &d u(x,y) + \int_{X}\psi_{1,l}(x)d u(x,\underline{y})\\
&\left. +\int_{Y} \psi_{2,l}(y)d u(\underline{x},y) \leq c_l,\;l=1,\ldots,M \right\}. \end{split} \end{equation}
In the case that the decision-making problem has only one attribute (e.g., $y$ disappears), the first and third terms on the left-hand side of the inequalities vanish, and consequently the two-dimensional conditions defined in (\ref{eq:u_N-int}) reduce to the one-dimensional moment-type conditions in \cite{GXZ21}.
The next proposition states how
the two-dimensional LS integrals in (\ref{eq:u_N-int}) may be converted into one-dimensional Riemann integrals. The proof is deferred to Appendix~\ref{app:proof-LS}.
\begin{proposition} \label{prop-int-pl} Let $F: [\underline{a}, \overline{a}]\times [\underline{b}, \overline{b}] \rightarrow {\rm I\!R}$ be a
continuous
function.
Assume:
(a) $F$ is a piecewise linear function with two linear pieces divided by line segment connecting points $A(\underline{a}, \underline{b})$ and $B(\bar{a}, \bar{b})$; (b) $\psi$ is a real-valued measurable function
w.r.t.~a measure induced by $F$,
and is
Riemann integrable over the line segment connecting points $A$ and $B$.
Then \begin{equation} \label{eq-int-pl}
\int_{\underline{a}, \bar{a}}^{\underline{b}, \bar{b}}\psi(x,y)d F(x,y)=
\frac{F(\bar{a},\bar{b})-F(\underline{a},\bar{b})-F(\bar{a},\underline{b})+F(\underline{a},\underline{b})}{\bar{a}-\underline{a}}\int_{\underline{a}}^{\bar{a}} \psi(x,y(x))d x, \end{equation} where $y(x)$ is the linear function representing the segment $AB$. \end{proposition}
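As a numerical illustration of (\ref{eq-int-pl}) (a construction of our own, not part of the proof), take $F$ piecewise linear on $[0,1]\times[0,2]$ with two pieces split by the diagonal $y(x)=2x$, and $\psi(x,y)=xy$. A Riemann-Stieltjes sum over a fine grid, with the mixed second difference of $F$ as the measure of each cell, matches the right-hand side of (\ref{eq-int-pl}):

```python
# F has lower piece x + 0.5*y (for y <= 2x) and upper piece y; the two
# pieces agree on the diagonal y = 2x, so F is continuous.
def F(x, y):
    return x + 0.5 * y if y <= 2 * x else y

def psi(x, y):
    return x * y

a0, a1, b0, b1 = 0.0, 1.0, 0.0, 2.0
n = 200
hx, hy = (a1 - a0) / n, (b1 - b0) / n

# Left-hand side: Riemann-Stieltjes sum; the measure of each small cell is
# the mixed second difference of F, which vanishes off the diagonal.
ls_sum = 0.0
for i in range(n):
    for j in range(n):
        x, y = a0 + i * hx, b0 + j * hy
        dmu = F(x + hx, y + hy) - F(x, y + hy) - F(x + hx, y) + F(x, y)
        ls_sum += psi(x + hx / 2, y + hy / 2) * dmu

# Right-hand side of (eq-int-pl): mixed difference of F over the whole
# rectangle, divided by (a1 - a0), times the integral of psi along y(x) = 2x.
coef = (F(a1, b1) - F(a0, b1) - F(a1, b0) + F(a0, b0)) / (a1 - a0)
riemann = sum(psi(a0 + (i + 0.5) * hx, 2 * (a0 + (i + 0.5) * hx)) * hx
              for i in range(n))
assert abs(ls_sum - coef * riemann) < 1e-6
```

Here the coefficient evaluates to $-1$ and the line integral to $\int_0^1 2x^2\,dx = 2/3$, so both sides equal $-2/3$; the measure induced by $F$ is indeed concentrated on the diagonal segment.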
With this, we are ready to
reformulate the inner minimization problem (\ref{eq:MAUT-robust-N-dis-min})
as an LP.
\begin{proposition} Under Assumptions \ref{assu-original}-\ref{assu-lip}, the inner minimization problem (\ref{eq:MAUT-robust-N-dis-min}) with Type-1 PLA can be reformulated as the following LP:
\begin{subequations} \label{eq:PRO-N-reformulate} \begin{align}
\displaystyle{ \min_{{\bm u}}} \quad
& \sum_{k=1}^K p_k \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}} (\bm{f}^k)
\left[ u_{i,j}^{1l}(\bm{f}^k) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]}
\left( \frac{f_2^k-y_j}{f_1^k-x_i} \right) \right. \nonumber \\
& \left. + u_{i,j}^{1u}(\bm{f}^k) \mathds{1}_{\left( \frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty \right)} \left(\frac{f_2^k-y_j}{f_1^k-x_i}\right) \right] \label{eq-traform-obj} \\
{\rm s.t.} \quad
&
\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \frac{u_{i,j+1}-u_{i+1,j+1}-u_{i,j}+u_{i+1,j}}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} \hat{\psi}_l (x,y(x)) d x \nonumber \\
& +
\sum_{i=1}^{N_1-1} \frac{u_{i+1,1}-u_{i,1}}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} \psi_{1,l}(x) d x \nonumber \\
& +
\sum_{j=1}^{N_2-1} \frac{u_{1,j+1}-u_{1,j}}{y_{j+1}-y_j} \int_{y_j}^{y_{j+1}} \psi_{2,l}(y) d y \leq c_l, l=1,\ldots,M, \label{eq-traform-paircom}\\
& \frac{u_{i+1,j}-u_{i,j}}{x_{i+1}-x_i} \leq \frac{u_{i,j}-u_{i-1,j}}{x_i-x_{i-1}}, i=2,\ldots,N_1-1, j=1,\ldots,N_2, \label{eq-traform-concave1} \\
& \frac{u_{i,j+1}-u_{i,j}}{y_{j+1}-y_j} \leq \frac{u_{i,j}-u_{i,j-1}}{y_j-y_{j-1}}, i=1,\ldots,N_1, j=2,\ldots,N_2-1, \label{eq-traform-concave2} \\
& u_{i+1,j}-u_{i,j}\leq L(x_{i+1}-x_i), i=1,\ldots,N_1-1, j=1,\ldots,N_2, \label{eq-traform-lip1} \\
& u_{i,j+1}-u_{i,j}\leq L(y_{j+1}-y_j), i=1,\ldots,N_1,j=1,\ldots,N_2-1, \label{eq-traform-lip2} \\
& u_{i+1,j}\geq u_{i,j}, i=1,\ldots,N_1-1,j=1,\ldots,N_2, \label{eq-traform-mon1} \\
& u_{i,j+1}\geq u_{i,j}, i=1,\ldots,N_1,j=1,\ldots,N_2-1, \label{eq-traform-mon2} \\
& u_{i,j}+u_{i+1,j+1} \leq
u_{i,j+1}+u_{i+1,j}, i=1,\ldots,N_1-1,j=1,\ldots,N_2-1, \label{eq-traform-conservative} \\
& u_{1,1}=0, u_{N_1,N_2}=1, \label{eq-traform-norm1} \end{align} \end{subequations} where ${\bm u}:={\rm vec}\left((u_{i,j})_{1\leq i\leq N_1}^{1\leq j\leq N_2}\right)=(u_{1,1},\ldots,u_{N_1,1},\ldots,u_{1,N_2},\ldots,u_{N_1,N_2})^T\in {\rm I\!R}^{N_1N_2}$, $\bm{f}^k:=\bm{f}(\bm{z},\bm{\xi}^k)=(f_1^k,f_2^k)$ with $f_1^k:=f_1(\bm{z},\bm{\xi}^k)$, $f_2^k:=f_2(\bm{z},\bm{\xi}^k)$, $\hat{\psi}_l$, $\psi_{1,l}$, $\psi_{2,l}$ are defined as in (\ref{eq:u_N-int}), and $u_{i,j}^{1u}$ and $u_{i,j}^{1l}$ are defined as in (\ref{eq-up}) and (\ref{eq-lo}), respectively.
\end{proposition}
\noindent \textbf{Proof.} Using the Type-1 PLA as defined in (\ref{eq-utility-N-1}), we may reformulate the objective as (\ref{eq-traform-obj}). Moreover, constraint (\ref{eq-traform-paircom})
represents the integral inequality conditions defined in (\ref{eq:u_N-int}), via Proposition~\ref{prop-int-pl}. Constraints (\ref{eq-traform-concave1}) and (\ref{eq-traform-concave2})
characterize
concavity of single-variate utility functions assumed in Assumption~\ref{A:concave-in-x-and-y}. Constraints (\ref{eq-traform-lip1}) and (\ref{eq-traform-lip2})
capture the Lipschitz continuity for the utility function. Constraints (\ref{eq-traform-mon1}) and (\ref{eq-traform-mon2})
reflect componentwise monotonicity of utility functions. Constraint (\ref{eq-traform-conservative}) states the conservative property. Constraint (\ref{eq-traform-norm1})
is the normalization condition for the utility function.
\Box
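To illustrate, the following sketch solves a minimal instance of our own making: a $3\times 3$ uniform grid on $[0,1]^2$, $L=2$, two equally likely scenarios whose outcomes sit at gridpoints (so the objective is directly linear in ${\bm u}$), and one pairwise-comparison constraint stating that the DM prefers $(0.5,0.5)$ for sure over a 50/50 lottery between $(0,0)$ and $(1,1)$:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the inner LP (our own data, not from the text).
N = 3
L = 2.0
dx = 0.5                              # uniform grid spacing in x and y

def idx(i, j):                        # column index in vectorization order
    return j * N + i

A_ub, b_ub = [], []
def le(coeffs, rhs):                  # register constraint sum(c*u) <= rhs
    row = np.zeros(N * N)
    for (i, j), c in coeffs:
        row[idx(i, j)] += c
    A_ub.append(row); b_ub.append(rhs)

for j in range(N):
    for i in range(N - 1):
        le([((i, j), 1), ((i + 1, j), -1)], 0)        # monotone in x
        le([((i + 1, j), 1), ((i, j), -1)], L * dx)   # Lipschitz in x
for i in range(N):
    for j in range(N - 1):
        le([((i, j), 1), ((i, j + 1), -1)], 0)        # monotone in y
        le([((i, j + 1), 1), ((i, j), -1)], L * dx)   # Lipschitz in y
for j in range(N):                    # concavity in x (uniform spacing)
    le([((2, j), 1), ((1, j), -2), ((0, j), 1)], 0)
for i in range(N):                    # concavity in y (uniform spacing)
    le([((i, 2), 1), ((i, 1), -2), ((i, 0), 1)], 0)
for i in range(N - 1):                # conservative property per cell
    for j in range(N - 1):
        le([((i, j), 1), ((i + 1, j + 1), 1),
            ((i, j + 1), -1), ((i + 1, j), -1)], 0)
# Pairwise comparison: 0.5*u(0,0) + 0.5*u(1,1) <= u(0.5,0.5).
le([((0, 0), 0.5), ((2, 2), 0.5), ((1, 1), -1)], 0)

A_eq = np.zeros((2, N * N)); b_eq = [0.0, 1.0]        # normalization
A_eq[0, idx(0, 0)] = 1.0; A_eq[1, idx(2, 2)] = 1.0

c = np.zeros(N * N)                   # E_P[u_N] over the two scenarios
c[idx(1, 1)] = 0.5; c[idx(2, 1)] = 0.5

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (N * N), method="highs")
print(res.fun)                        # worst-case expected utility
```

For this instance the comparison forces $u(0.5,0.5)\geq 0.5$ and monotonicity forces $u(1,0.5)\geq u(0.5,0.5)$, so the worst-case expected utility is $0.5$, attained by the flattest utility consistent with the elicited answer.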
\begin{remark} \label{rem:BUPRO-DF} (i)
Note that (\ref{eq:PRO-N-reformulate}) is reformulated based on the Type-1 PLA.
A similar formulation can be obtained for Type-2 PLA.
By solving (\ref{eq:PRO-N-reformulate}), we can
obtain the worst-case utility function $u_N^{\rm worst}$.
The information on $u_N^{\rm worst}$ gives us
guidance as to how the inner minimization problem approximates the true expected utility.
The problem size depends on the number of gridpoints and is independent of the scenarios of $\bm{\xi}$.
Note also that the single-attribute utility functions are assumed to be concave in Assumption~\ref{A:concave-in-x-and-y}.
This is in accordance with single-attribute decision making in the risk-averse case.
Likewise, we can also assume that one (or both) of the single-attribute utility functions at any instantiation is (are) convex; in that case, it suffices to impose constraints (\ref{eq-traform-concave1}) or (and) (\ref{eq-traform-concave2}) with the inequalities reversed.
The Lipschitz continuity is also reflected by the Type-1 PLA (\ref{eq-utility-N-1}) which can be formulated as
\begin{equation*}
u^{1u}_{i,j}(x,y) = \left( \frac{u_{i+1,j+1}-u_{i,j+1}}{x_{i+1}-x_i}, \frac{u_{i,j+1}-u_{i,j}}{y_{j+1}-y_j} \right) (x,y)^T +b^{1u}_{i,j} \end{equation*} and \begin{equation*}
u^{1l}_{i,j}(x,y) = \left( \frac{u_{i+1,j}-u_{i,j}}{x_{i+1}-x_i}, \frac{u_{i+1,j+1}-u_{i+1,j}}{y_{j+1}-y_j} \right) (x,y)^T+b^{1l}_{i,j}, \end{equation*} where $b^{1u}_{i,j}$
and $b^{1l}_{i,j}$ are constants representing the
intercepts respectively.
The Lipschitz continuity defined as in Assumption~\ref{assu-lip} corresponds to $$
\max\left\{ \left\|\left( \frac{u_{i+1,j+1}-u_{i,j+1}}{x_{i+1}-x_i}, \frac{u_{i,j+1}-u_{i,j}}{y_{j+1}-y_j} \right)\right\|_{\infty},\left\|\left( \frac{u_{i+1,j}-u_{i,j}}{x_{i+1}-x_i}, \frac{u_{i+1,j+1}-u_{i+1,j}}{y_{j+1}-y_j} \right)\right\|_{\infty} \right\}\leq L, $$ over each cell $T_{i,j}$, which implies constraints (\ref{eq-traform-lip1}) and (\ref{eq-traform-lip2}).
(ii)
The aforementioned PLA utility function $u_N$ is constructed either in Type-1 or in Type-2.
It is possible to allow both.
Specifically, we can define
\begin{equation*} \begin{split}
& u_N (x,y) =\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}}(x,y) \times \\
& \left[
h_{i,j}
\left(
u^{1u}_{i,j}(x,y) \mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty\right)} \left(\frac{y-y_j}{x-x_i}\right)
+ u^{1l}_{i,j}(x,y) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]} \left(\frac{y-y_j}{x-x_i}\right) \right) \right.\\
& \left. + (1-h_{i,j})
\left(u^{2u}_{i,j}(x,y) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]} \left(\frac{y_{j+1}-y}{x-x_i}\right) +
u^{2l}_{i,j}(x,y)
\mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty\right)} \left(\frac{y_{j+1}-y}{x-x_i}\right) \right)
\right], \end{split} \end{equation*} where
$\{h_{i,j}, i=1,\ldots,N_1-1,j=1,\ldots,N_2-1\}$ is a set of binary variables
taking values $0$ or $1$. In the case that $h_{i,j}=1$, Type-1 PLA is invoked over $T_{i,j}$; otherwise, Type-2 PLA is active. Obviously, this approach significantly extends the class of piecewise linear utility functions, and consequently the optimal value of the inner minimization problem is no larger than that under either Type-1 or Type-2 alone.
With regard to the tractable formulation, we will have $(N_1-1)(N_2-1)$ additional binary variables
and the inner minimization becomes an MILP.
(iii) Type-1 PLA $u_N(\bm{f}(\bm{z},\bm{\xi}^k))$ can be alternatively represented in the following form:
\begin{equation*}
u_{N}(\bm{f}(\bm{z},\bm{\xi}^k)) = \sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \left[ \alpha_{i,j}^k(\bm{z}) u_{i,j}
+\alpha_{i,j+1}^k(\bm{z}) u_{i,j+1}
+\alpha_{i+1,j}^k(\bm{z}) u_{i+1,j}
+\alpha_{i+1,j+1}^k(\bm{z}) u_{i+1,j+1}\right], \end{equation*} where \begin{align*}
&\alpha_{i,j}^k(\bm{z}):=\mathds{1}_{T_{i,j}}(\bm{f}^k) \left[
\frac{y_{j+1}-f_2^k}{y_{j+1}-y_j} \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]}\left(\frac{f_2^k-y_j}{f_1^k-x_i} \right)
+ \frac{x_{i+1}-f_1^k}{x_{i+1}-x_i} \mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},\infty \right)} \left(\frac{f_2^k-y_j}{f_1^k-x_i}\right) \right],
\nonumber \\
&\alpha_{i,j+1}^k(\bm{z}):= \left(\frac{f_2^k-y_j}{y_{j+1}-y_j}-\frac{f_1^k-x_i}{x_{i+1}-x_i}\right) \mathds{1}_{T_{i,j}}(\bm{f}^k) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]} \left(\frac{f_2^k-y_j}{f_1^k-x_i}\right) , \\
& \alpha_{i+1,j}^k(\bm{z}):= \left( \frac{f_1^k-x_i}{x_{i+1}-x_i}-\frac{f_2^k-y_j}{y_{j+1}-y_j}\right) \mathds{1}_{T_{i,j}}(\bm{f}^k) \mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},\infty \right)}\left(\frac{f_2^k-y_j}{f_1^k-x_i}\right), \nonumber \\
& \alpha_{i+1,j+1}^k(\bm{z}):=\mathds{1}_{T_{i,j}} (\bm{f}^k) \left[ \frac{f_1^k-x_i}{x_{i+1}-x_i} \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]}\left(\frac{f_2^k-y_j}{f_1^k-x_i}\right)
+ \frac{f_2^k-y_j}{y_{j+1}-y_j} \mathds{1}_{\left(\frac{y_{j+1}-y_j}{x_{i+1}-x_i},\infty \right)}\left(\frac{f_2^k-y_j}{f_1^k-x_i}\right) \right]. \nonumber \end{align*}
For fixed $\bm{z}$,
the inner minimization problem (\ref{eq:MAUT-robust-N-dis-min})
is also an LP with this $u_N$. Let $L({\bm u},{\bm \lambda};\bm{z})$ be the Lagrange function of the inner problem
and ${\bm \lambda}$ be the vector of Lagrange multipliers. Then the inner problem can be reformulated as $\min_{{\bm u}}\max_{{\bm \lambda}} L({\bm u},{\bm \lambda};\bm{z})$.
In this way, we can reformulate the maximin problem (\ref{eq:PRO-N-reformulate}) as a single maximization problem $ \max_{\bm{z}\in Z,{\bm \lambda}} \{\min_{{\bm u}}L({\bm u},{\bm \lambda};\bm{z})\}. $
Unfortunately, this is not helpful,
since the coefficients of $u_{i,j}$, $u_{i,j+1}$, $u_{i+1,j}$, $u_{i+1,j+1}$ are composed of indicator functions of
$\bm{f}^k$, which makes $L({\bm u},{\bm \lambda};\bm{z})$ discontinuous in $\bm{z}$.
In the next section, we will propose a new approach to handle the issue
and extend the discussions to
the multi-attribute case.
\end{remark}
\section{Implicit piecewise linear approximation of UPRO -- from bi-attribute to multi-attribute case}
\label{sec:multi-atrribute}
In this section, we look into the PLA approach from a slightly different perspective: instead of deriving an explicit form of the piecewise linear function as in the previous section, we propose to use the well-known polyhedral method
(see e.g. \cite{LeW01,KDN04,DLM10,VAN10,vielma2015mixed}), where the PLA
function over each cell is implicitly determined by solving a minimization or a maximization program.
There are two advantages to doing this.
One is that the implicit approach allows us to extend the PLA for the UPRO problem from bi-attribute decision-making problems to the multi-attribute case, which would be extremely complex under the
explicit PLA framework.
The other
is that
the implicit approach enables us to reformulate
the approximate UPRO problem as a single MILP when ${\bm f}(\bm{z},\bm{\xi})$ is linear in $\bm{z}$.
\subsection{Bi-attribute case} \label{sec:two-dim-u}
Inspired by the polyhedral method, we
can obtain the coefficients $\alpha_{i,j}$ of $u_N(\bm{f}(\bm{z},\bm{\xi}^k))$ in terms of $u_{i,j}$ under Type-1 PLA in Remark~\ref{rem:BUPRO-DF}~(iii) by solving a system of linear equalities and inequalities:
\begin{subequations} \begin{align}
& \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k}=1,\; k=1,\cdots,K,
\label{eq:mixed-integer-R2-b}\\
& \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} x_{i}=f_1^k,\;\; \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} y_{j}= f_2^k,\;\; k=1,\cdots,K,\label{eq:mixed-integer-R2-c}\\
& \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \left(h_{i,j,k}^{u}+h_{i,j,k}^{l}\right)=1,\;\;
k=1,\cdots,K,\label{eq:mixed-integer-R2-d}\\
& {\bm h}^u_k,{\bm h}^l_k\in \{0,1\}^{(N_1-1)(N_2-1)},\;k=1,\cdots,K,
\label{eq:mixed-integer-R2-e}\\
&0\leq \alpha_{i,j}^{k}\leq
h_{i,j,k}^{u}
+h_{i,j,k}^{l}
+h_{i,j-1,k}^{u}
+h_{i-1,j-1,k}^{l}
+h_{i-1,j-1,k}^{u}
+h_{i-1,j,k}^{l},\nonumber \\
& \qquad \qquad \qquad \qquad \qquad \quad
i=1,\cdots,N_1,\; j=1,\cdots,N_2, \; k=1,\cdots,K, \label{eq:mixed-integer-R2-f} \end{align} \end{subequations}
where ${\bm h}^u_k:= {\rm vec}\left((h^u_{i,j,k})_{1\leq i\leq N_1}^{1\leq j\leq N_2}\right)$, ${\bm h}^l_k:= {\rm vec}\left((h^l_{i,j,k})_{1\leq i\leq N_1}^{1\leq j\leq N_2}\right)$ for $k=1,\ldots,K$,
${\bm f}(\bm{z},\bm{\xi}^k)=(f_1^k,f_2^k)^T$ with $f_i^k:=f_i(\bm{z},\bm{\xi}^k)$ for $i=1,2$, and
$h_{0,*,*}^*=h_{*,0,*}^*=h_{N_1,*,*}^*=h_{*,N_2,*}^*=0$, where $*$ represents all indexes possibly taken at the subscripts and superscripts. Constraint (\ref{eq:mixed-integer-R2-b}) and $\alpha_{i,j}^k\geq 0$ result from the coefficients of the convex combinations of $u_{i,j}$
for $u_N({\bm f}(\bm{z},\bm{\xi}^k))$.
Constraint (\ref{eq:mixed-integer-R2-c})
arises because
the linearity of $u_N$ over $T_{i,j}$ guarantees that the convex combination coefficients of ${\bm f}(\bm{z},\bm{\xi}^k)$ and $u_N({\bm f}(\bm{z},\bm{\xi}^k))$ are
identical. Since $h_{i,j,k}^u, h_{i,j,k}^l\in \{0,1\}$, constraint (\ref{eq:mixed-integer-R2-d}) imposes the restriction that exactly one triangle is used for the convex combination. Constraint (\ref{eq:mixed-integer-R2-f}) imposes that the only nonzero $\alpha_{i,j}^k$
can be those associated with the three vertices of such a triangle. For example, if $h_{i,j,k}^l=1$, then $\bm{f}(\bm{z},\bm{\xi}^k)$ lies in the lower triangle of the cell $T_{i,j}$. This is indicated by the fact that $\alpha_{i,j}^k\leq h_{i,j,k}^l=1$, $\alpha_{i+1,j+1}^k\leq h_{i,j,k}^l=1$, $\alpha_{i+1,j}^k\leq h_{i,j,k}^l=1$, and $\alpha_{i',j'}^k=0$ for $(i',j')\notin \{(i,j),(i+1,j+1),(i+1,j)\}$; see Figure~\ref{fig:Type1IPLA}, where the six triangles are related to point $(x_i,y_j)$ and we indicate the corresponding binary variables $h_{i,j,k}^u$ and $h_{i,j,k}^l$ in each triangle to facilitate readers' understanding.
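The following sketch is a toy instance of our own (a single cell $[0,1]^2$, one scenario $\bm{f}^k=(0.6,0.3)$, fixed vertex utilities; it uses SciPy's \texttt{milp}, available from SciPy 1.9). It shows how constraints (\ref{eq:mixed-integer-R2-b})-(\ref{eq:mixed-integer-R2-f}) pin down $\bm{\alpha}$: the only feasible triangle is the one containing the scenario, and the objective recovers the Type-1 interpolated value at that point.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: alpha = (a00, a10, a01, a11), then the binaries h_u, h_l.
u_vals = np.array([0.0, 0.6, 0.7, 1.0])        # u00, u10, u01, u11
f1, f2 = 0.6, 0.3                              # scenario in the lower triangle
xv = [0.0, 1.0, 0.0, 1.0]                      # x-coords of the 4 vertices
yv = [0.0, 0.0, 1.0, 1.0]                      # y-coords of the 4 vertices

A_eq = np.array([
    [1, 1, 1, 1, 0, 0],                        # (b): alphas sum to one
    xv + [0, 0],                               # (c): match f1
    yv + [0, 0],                               # (c): match f2
    [0, 0, 0, 0, 1, 1],                        # (d): exactly one triangle
])
eq = LinearConstraint(A_eq, [1, f1, f2, 1], [1, f1, f2, 1])

A_ub = np.array([
    [1, 0, 0, 0, -1, -1],                      # (f): a00 <= h_u + h_l
    [0, 1, 0, 0,  0, -1],                      # (f): a10 <= h_l
    [0, 0, 1, 0, -1,  0],                      # (f): a01 <= h_u
    [0, 0, 0, 1, -1, -1],                      # (f): a11 <= h_u + h_l
])
ub = LinearConstraint(A_ub, -np.inf, np.zeros(4))

c = -np.concatenate([u_vals, [0.0, 0.0]])      # maximize sum(alpha * u)
res = milp(c, constraints=[eq, ub],
           integrality=np.array([0, 0, 0, 0, 1, 1]),
           bounds=Bounds(0, 1))
print(-res.fun)    # the Type-1 interpolated value u_N(0.6, 0.3)
```

Choosing the upper triangle ($h^u=1$) is infeasible here because (\ref{eq:mixed-integer-R2-c}) cannot be satisfied with nonnegative $\alpha$ supported on its vertices, so the solver selects $h^l=1$ and $\bm{\alpha}=(0.4,0.3,0,0.3)$.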
Consequently, under Assumption~\ref{assu-discrete},
we can formulate the bi-attribute utility maximization problem $\max_{\bm{z}\in Z}\sum_{k=1}^Kp_k[u_N({\bm f(\bm{z},\bm{\xi}^k)})]$
as an MIP:
\begin{subequations} \label{eq:mixed-integer-R2-2} \begin{align}
\max\limits_{\bm{z}\in Z,{\bm \alpha},{\bm h^l},{\bm h}^u} \; & \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} u_{i,j}\\
{\rm s.t. }\quad\;\;\; & \inmat{constraints } (\ref{eq:mixed-integer-R2-b})-
(\ref{eq:mixed-integer-R2-f}),
\end{align} \end{subequations} where ${\bm \alpha}:=({\bm \alpha}^1,\cdots,{\bm \alpha}^K)\in {\rm I\!R}^{(N_1N_2)\times K}$, ${\bm \alpha}^{k}:={\rm vec}\left((\alpha_{i,j}^k)_{1\leq i\leq N_1}^{1\leq j\leq N_2}\right)$
for $k=1,\cdots,K$, ${\bm h}^l:=({\bm h}_1^l,\cdots,{\bm h}_K^l)\in {\rm I\!R}^{(N_1-1)(N_2-1)\times K}$, ${\bm h}^u:=({\bm h}_1^u,\cdots,{\bm h}_K^u)\in {\rm I\!R}^{(N_1-1)(N_2-1)\times K}$. If $f(\bm{z},\bm{\xi})$ is linear in $\bm{z}$, then the problem (\ref{eq:mixed-integer-R2-2}) is an MILP. This idea can be applied to the BUPRO-N model. To ease the exposition, we consider the case that the ambiguity set is constructed by pairwise comparisons, that is, $\psi_l=F_{{\bm B}_l}-F_{{\bm A}_l}$, and
\begin{equation*} \begin{split}
{\cal U}_N
&=\left\{u_N\in \mathscr{U}_N:
\int_{T} u_N(x,y) d (F_{\bm{B}_l}(x,y)-F_{\bm{A}_l}(x,y))\leq 0,\;l=1,\cdots,M\right\}.
\end{split} \end{equation*}
Under Assumption~\ref{assu-lip}, suppose that the set of gridpoints $\{(x_i,y_j):i=1,\cdots,N_1,j=1,\cdots,N_2\}$ contains all the outcomes of lotteries ${\bm A}_l$ and ${\bm B}_l$ for $l=1,\cdots,M$. Then we can reformulate the BUPRO-N problem
(\ref{eq:MAUT-robust-N-dis})
as:
\begin{subequations} \label{eq:PRO_MILP_eqi} \begin{align} \max\limits_{\bm{z}\in Z,{\bm \alpha}, {\bm h}^u, {\bm h}^l} \min\limits_{{\bm u}} \;\;& \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^k u_{i,j}\\ {\rm s.t.} \;\; &
\sum_{i=1}^{N_1} \sum_{j=1}^{N_2} (\mathbb{P}({\bm B}_l=(x_i,y_j))- \mathbb{P}({\bm A}_l=(x_i,y_j)))u_{i,j}
\leq 0, \nonumber \\
& \hspace{16em} l=1,\cdots,M,
\label{eq:PRO_MILP_mina2-c}\\
& \inmat{constraints } (\ref{eq-traform-lip1})-(\ref{eq-traform-norm1}), \\
& \inmat{constraints } (\ref{eq:mixed-integer-R2-b})
-(\ref{eq:mixed-integer-R2-f}),
\end{align} \end{subequations} where constraints (\ref{eq-traform-lip1})-(\ref{eq-traform-norm1}) characterize the restrictions on ${\bm u}$ and constraints (\ref{eq:mixed-integer-R2-b})
-(\ref{eq:mixed-integer-R2-f})
stipulate the coefficients of ${\bm u}$ implicitly as discussed earlier. {\color{black}Problem (\ref{eq:PRO_MILP_eqi}) is equivalent to problem (\ref{eq:PRO-N-reformulate}) without constraints (\ref{eq-traform-concave1})-(\ref{eq-traform-concave2}) and with $\psi_l=F_{{\bm B}_l}-F_{{\bm A}_l}$, $l=1,\cdots,M$.}
The maximization
w.r.t.\ ${\bm \alpha}$, ${\bm h}^l$ and ${\bm h}^u$ can be changed into a
minimization, as the next proposition shows.
\begin{proposition} The BUPRO-N problem (\ref{eq:PRO_MILP_eqi}) is equivalent to \begin{subequations} \label{eq:PRO_MILP_mina2} \begin{align}
\displaystyle \max_{\bm{z}\in Z} \displaystyle \min_{{\bm \alpha},{\bm h}^u,{\bm h}^l,
{\bm u}} \;\;& \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} u_{i,j}\\
\inmat{s.t.}\quad\;\; & \inmat{constraints } (\ref{eq-traform-lip1})-(\ref{eq-traform-norm1}),\;
(\ref{eq:PRO_MILP_mina2-c}), \label{eq:PRO_MILP_mina2-b}\\
& \inmat{constraints } (\ref{eq:mixed-integer-R2-b})
-(\ref{eq:mixed-integer-R2-f}).
\end{align} \end{subequations}
\end{proposition} \noindent{\bf Proof.}
We begin by writing part of the outer maximization (w.r.t. ${\bm \alpha},{\bm h}^u,{\bm h}^l$) and the inner minimization problem of
(\ref{eq:PRO_MILP_eqi}) as
\begin{subequations} \label{eq:PRO_MILP_mina} \begin{align}
\displaystyle \max_{{\bm \alpha},{\bm h}^u,{\bm h}^l} \; & \min_{{\bm u}}\left\{ \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} u_{i,j}: (\ref{eq-traform-lip1})-(\ref{eq-traform-norm1}) \right \}\\ {\rm s.t.} \quad & \inmat{\,constraints } (\ref{eq:mixed-integer-R2-b})
- (\ref{eq:mixed-integer-R2-f}),
(\ref{eq:PRO_MILP_mina2-c}). \label{eq:PRO_MILP_mina-b}
\end{align} \end{subequations} Since the representation of point $\bm{f}(\bm{z},\bm{\xi}^k)$ by the convex combination of the vertices of a simplex is unique, the feasible set of the outer
maximization problem (\ref{eq:PRO_MILP_mina}) (specified by (\ref{eq:PRO_MILP_mina-b}))
is a singleton for each fixed $\bm{z}$.
Thus we can replace the operation ``$\max_{{\bm \alpha},{\bm h}^u,{\bm h}^l}$'' with ``$\min_{{\bm \alpha},{\bm h}^u,{\bm h}^l}$'' without affecting the optimal value or the optimal solutions of (\ref{eq:PRO_MILP_mina}). This replacement reduces (\ref{eq:PRO_MILP_eqi}) to (\ref{eq:PRO_MILP_mina2}).
$\Box$
Note that problem (\ref{eq:PRO_MILP_mina2}) can be solved by a Dfree method applied to the outer maximization, where the inner problem can be seen as an MILP when ${\bm f}(\bm{z},\bm{\xi}^k)$ is linear in $\bm{z}$.
\begin{remark} \begin{itemize}
\item[(i)] Inequality (\ref{eq:mixed-integer-R2-f}) corresponds to Type-1 PLA.
For the Type-2 case
(see Figure~\ref{fig:Type2IPLA}), we can replace (\ref{eq:mixed-integer-R2-f}) by \begin{equation} \label{eq:constraint-alpha} \begin{split} 0\leq \alpha_{i,j}^k\leq h^u_{i-1,j,k} + h^l_{i-1,j,k} & + h^u_{i-1,j-1,k}+h^l_{i,j-1,k}+h^u_{i,j-1,k}+h^l_{i,j,k}, \\
& i=1,\cdots,N_1,j=1,\cdots,N_2,k=1,\cdots,K. \end{split} \end{equation}
\item[(ii)]
For the mixed-type PLA, we
can also obtain the coefficients $\alpha_{i,j}$ of $u_N(\bm{f}(\bm{z},\bm{\xi}^k))$ in terms of $u_{i,j}$
by solving a system of linear equalities and inequalities: \begin{subequations} \begin{align}
& \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k\tau}=1,\;\tau=1,2,\;k=1,\cdots,K,
\label{eq:mixed-integer-Type2-b}\\
& \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k\tau} x_{i}=f_1^k,\;\; \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k\tau} y_{j}= f_2^k,\;\; k=1,\cdots,K,\;\tau =1,2, \label{eq:mixed-integer-Type2-c}\\
& \sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \left(h_{i,j,k}^{1u}+h_{i,j,k}^{1l}+h_{i,j,k}^{2u}+h_{i,j,k}^{2l}\right)=1,\;k=1,\cdots,K,
\label{eq:mixed-integer-Type2-d}\\
& {\bm h}^{\tau u}_{k},{\bm h}^{\tau l}_{k}\in \{0,1\}^{(N_1-1)(N_2-1)},\;k=1,\cdots,K,\;\tau=1,2,
\label{eq:mixed-integer-Type2-e}\\
&0\leq \alpha_{i,j}^{k1}\leq
h_{i,j,k}^{1u}
+h_{i,j,k}^{1l}
+h_{i,j-1,k}^{1u}
+h_{i-1,j-1,k}^{1l }
+h_{i-1,j-1,k}^{1u }
+h_{i-1,j,k}^{1l},\nonumber \\
& \qquad \qquad \qquad \qquad i=1,\cdots,N_1,\; j=1,\cdots,N_2, \; k=1,\cdots,K,\\ &0\leq \alpha_{i,j}^{k2}\leq h^{2u}_{i-1,j,k} + h^{2l}_{i-1,j,k} + h^{2u}_{i-1,j-1,k}+h^{2l}_{i,j-1,k}+h^{2u}_{i,j-1,k}+h^{2l}_{i,j,k}, \nonumber \\ & \qquad \qquad \qquad \qquad i=1,\cdots,N_1,j=1,\cdots,N_2,k=1,\cdots,K, \label{eq:mixed-integer-Type2} \end{align} \end{subequations} where variables $\alpha_{i,j}^{k1}$, $h_{i,j,k}^{1u}$, $h_{i,j,k}^{1l}$ represent the Type-1 PLA case, and $\alpha_{i,j}^{k2}$, $h_{i,j,k}^{2u}$, $h_{i,j,k}^{2l}$
represent the Type-2 case.
Constraint (\ref{eq:mixed-integer-Type2-d}) indicates that only one partition type is used for each cell.
\end{itemize} \end{remark}
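By analogy with the Type-1 case, the Type-2 coefficients constrained in (\ref{eq:constraint-alpha}) can also be computed directly. The sketch below is illustrative code only; it assumes the Type-2 partition splits each cell along the anti-diagonal from $(x_{i+1},y_j)$ to $(x_i,y_{j+1})$ and works in local cell coordinates $(s,t)\in[0,1]^2$.

```python
def type2_coefficients(s, t):
    """Convex-combination coefficients for local coordinates (s, t) in a cell
    split along the anti-diagonal from (x_{i+1}, y_j) to (x_i, y_{j+1}).
    Returns {local vertex offset: coefficient} for the active triangle."""
    if s + t <= 1.0:
        # lower-left triangle with vertices (i,j), (i+1,j), (i,j+1)
        return {(0, 0): 1.0 - s - t, (1, 0): s, (0, 1): t}
    # upper-right triangle with vertices (i+1,j), (i,j+1), (i+1,j+1)
    return {(1, 0): 1.0 - t, (0, 1): 1.0 - s, (1, 1): s + t - 1.0}
```

In both branches the coefficients are nonnegative, sum to one, and reproduce $(s,t)$, mirroring constraints (\ref{eq:mixed-integer-R2-b})--(\ref{eq:mixed-integer-R2-c}).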
\begin{figure}
\caption{\footnotesize
(a) \& (b) represent the bi-attribute case
over $T_{i,j}$.
They show the six triangles related to point $(x_i,y_j)$.
}
\label{fig:Type1IPLA}
\label{fig:Type2IPLA}
\label{fig:alpha_ij}
\end{figure}
\subsubsection{Single mixed-integer reformulation of \texorpdfstring{(\ref{eq:PRO_MILP_eqi})}{}}
By deriving the Lagrange dual of the inner minimization problem of (\ref{eq:PRO_MILP_eqi}), which is established under the Type-1 PLA, we can recast the maximin problem as a
single MILP when ${\bm f}(\cdot,\bm{\xi})$ is linear.
\begin{proposition} [Reformulation of (\ref{eq:PRO_MILP_eqi})] \label{prop:single-MILP} Problem (\ref{eq:PRO_MILP_eqi})
can be reformulated as a single MILP when ${\bm f}(\bm{z},\bm{\xi})$ is linear in $\bm{z}$,
\begin{subequations} \label{eq:PRO_MILP_single} \hspace{-0.5cm} \begin{align} \displaystyle \max_{\substack{{\bm z}\in Z, {\bm \alpha}, {\bm h}^u, {\bm h}^l\\ {\bm \lambda}^1, {\bm \lambda}^2, {\bm \eta}^1\\ {\bm \eta}^2, {\bm \tau}, {\bm \zeta}}} \; & -\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2}\eta_{i,j}^1L(x_{i+1}-x_i) -\sum_{i=1}^{N_1}\sum_{j=1}^{N_2-1} \eta_{i,j}^2 L(y_{j+1}-y_j) + \sum_{k=1}^Kp_k \alpha_{N_1,N_2}^k \nonumber \\ & -\lambda_{N_1-1,N_2}^1 -\lambda_{N_1,N_2-1}^2 +\eta_{N_1-1,N_2}^1 +\eta_{N_1,N_2-1}^2 +\tau_{N_1-1,N_2-1}
+{\bm \zeta}^T {\bm Q}_{N_1,N_2} \\ {\rm s.t.} \quad\;\;\, & \sum_{k=1}^Kp_k\alpha_{i,j}^k
+\lambda_{i,j}^1-\lambda_{i-1,j}^1+\lambda_{i,j}^2-\lambda_{i,j-1}^{2}+\eta_{i-1,j}^1 -\eta_{i,j}^1+\eta_{i,j-1}^2-\eta_{i,j}^2
\nonumber\\
&
\quad + \tau_{i,j}+\tau_{i-1,j-1}
-\tau_{i,j-1} -\tau_{i-1,j} +{\bm \zeta}^T{\bm Q}_{i,j} \geq 0, i\in {\cal I}, j\in {\cal J},\\ & \sum_{k=1}^Kp_k\alpha_{N_1,j}^k -\lambda_{N_1-1,j}^1 +\lambda_{N_1,j}^2 -\lambda_{N_1,j-1}^2 +\eta_{N_1-1,j}^1
+\eta_{N_1,j-1}^2 -\eta_{N_1,j}^2 \nonumber \\ & \quad +\tau_{N_1-1,j-1} -\tau_{N_1-1,j} +{\bm \zeta}^T {\bm Q}_{N_1,j}\geq 0, j\in {\cal J},\\ & \sum_{k=1}^Kp_k\alpha_{1,j}^k +\lambda_{1,j}^1 +\lambda_{1,j}^2 -\lambda_{1,j-1}^2 -\eta_{1,j}^1 +\eta_{1,j-1}^2 -\eta_{1,j}^2 +\tau_{1,j} -\tau_{1,j-1} \nonumber \\ & \quad +{\bm \zeta}^T {\bm Q}_{1,j} \leq 0, j\in {\cal J}, \\ & \sum_{k=1}^Kp_k \alpha_{i,N_2}^k + \lambda_{i,N_2}^1 -\lambda_{i-1,N_2}^1 -\lambda_{i,N_2-1}^2 +\eta_{i-1,N_2}^1 -\eta_{i,N_2}^1 +\eta_{i,N_2-1}^2
\nonumber \\ & \quad +\tau_{i-1,N_2-1} -\tau_{i,N_2-1}
+{\bm \zeta}^T {\bm Q}_{i,{N_2}} \geq 0, i\in {\cal I},\\ & \sum_{k=1}^Kp_k \alpha_{i,1}^k + \lambda_{i,1}^1 -\lambda_{i-1,1}^1 +\lambda_{i,1}^2 +\eta_{i-1,1}^1 -\eta_{i,1}^1 -\eta_{i,1}^2 +\tau_{i,1} -\tau_{i-1,1}\nonumber \\ & \quad +{\bm \zeta}^T {\bm Q}_{i,1} \geq 0, i\in {\cal I},\\ & \sum_{k=1}^Kp_k \alpha_{1,1}^k + \lambda_{1,1}^1 +\lambda_{1,1}^2 -\eta_{1,1}^1 -\eta_{1,1}^2 +\tau_{1,1}
+{\bm \zeta}^T {\bm Q}_{1,1} \geq 0,\\ & \sum_{k=1}^Kp_k\alpha_{N_1,1}^k-\lambda_{N_1-1,1}^1+\lambda_{N_1,1}^2+\eta_{N_1-1,1}^1-\eta_{N_1,1}^2-\tau_{N_1-1,1}\geq 0,\\ & \sum_{k=1}^Kp_k\alpha_{1,N_2}^k+\lambda_{1,N_2}^1-\lambda_{1,N_2-1}^2-\eta_{1,N_2}^1+\eta_{1,N_2-1}^2-\tau_{1,N_2-1}\geq 0,\\
&\inmat{constraints } (\ref{eq:mixed-integer-R2-b}),
(\ref{eq:mixed-integer-R2-d})-
(\ref{eq:mixed-integer-R2-f}),\label{eq:PRO_MILP_single-j}\\ & \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} x_{i}=f_1(\bm{z},\bm{\xi}^k),\;\; \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^{k} y_{j}= f_2(\bm{z},\bm{\xi}^k), \nonumber \\ & \hspace{16em} k=1,\cdots,K,\\ & {\bm \lambda}^1 \geq 0, {\bm \lambda}^2 \geq 0, {\bm \eta}^1\geq 0, {\bm \eta}^2 \geq 0, {\bm \tau} \geq 0, \end{align} \end{subequations} where ${\cal I}:=\{2,\cdots,N_1-1\}$, ${\cal J}:=\{2,\cdots,N_2-1\}$, ${\bm Q}_{ij}:=(Q_{ij}^1,\cdots,Q_{ij}^M)^T\in {\rm I\!R}^M$, $Q_{ij}^l:= \mathbb{P}({\bm B}_l=(x_i,y_j))-\mathbb{P}({\bm A}_l=(x_i,y_j))$,
${\bm \lambda}^1\in {\rm I\!R}^{(N_1-1)\times N_2}_+$, ${\bm \lambda}^2\in {\rm I\!R}^{N_1\times (N_2-1)}_+$, ${\bm \eta}^1\in {\rm I\!R}^{(N_1-1)\times N_2}_+$, ${\bm \eta}^2 \in {\rm I\!R}^{N_1\times (N_2-1)}_+$, ${\bm \tau}\in {\rm I\!R}^{(N_1-1)\times (N_2-1)}_+$,
${\bm \zeta}\in {\rm I\!R}^M$. \end{proposition} We can also reformulate BUPRO-N with the Type-2 PLA as a single MIP. We only need to replace (\ref{eq:PRO_MILP_single-j}) with (\ref{eq:mixed-integer-R2-b}),
(\ref{eq:mixed-integer-R2-d})-(\ref{eq:mixed-integer-R2-e}) and (\ref{eq:constraint-alpha}).
Note that
Hu et al.~\cite{hu2022distributionally} consider a distributionally robust model for the random utility maximization problem in multi-attribute decision making and reformulate a maximin PRO as a single MILP. The main difference is that they consider a true utility function of additive form (a sum of single-attribute utility functions),
whereas here we consider a general multivariate true utility function. We therefore believe this is a step forward from a computational perspective in handling BUPRO-N. Note that in this formulation we have not incorporated Assumption~\ref{A:concave-in-x-and-y},
because the dual formulation of the problem with the convexity/concavity constraints would be very complex.
\subsection{Tri-attribute case} \label{sec:three-dim-u}
We now extend our discussions on the implicit PLA of UPRO to the tri-attribute case.
\subsubsection{Triangulation of a cube and interpolation}
We follow the well-known triangulation method (see e.g.
\cite{chien1977solving,meyer2005convex,misener2010piecewise}) to divide each cube into six non-overlapping simplices.
There are six ways to perform this division, {\color{black}and} here we use the second one (called Type B in the references).
Specifically, we consider $u:[\underline{x},\bar{x}]\times[\underline{y},\bar{y}]\times [\underline{z},\bar{z}]\rightarrow {\rm I\!R}$
with $\underline{x}=x_1< x_2 < \cdots< x_{N_1}=\bar{x}$, $\underline{y}=y_1< y_2 < \cdots< y_{N_2}=\bar{y}$ and $\underline{z}=z_1 < z_2< \cdots< z_{N_3}=\bar{z}$. Let $X_i:=(x_i,x_{i+1}]$, $Y_j:=(y_j,y_{j+1}]$ and $Z_l:=(z_l,z_{l+1}]$. For any given point $(x,y,z)\in X_i\times Y_j\times Z_l$,
consider the cube with vertices
$1$: $(x_i,y_j,z_l)$, $2$: $(x_{i+1},y_j,z_l)$, $3$: $(x_{i},y_{j+1},z_l)$, $4$: $(x_{i+1},y_{j+1},z_l)$, $5$: $(x_i,y_j,z_{l+1})$, $6$: $(x_{i+1},y_j,z_{l+1})$, $7$: $(x_{i},y_{j+1},z_{l+1})$, $8$: $(x_{i+1},y_{j+1},z_{l+1})$.
We first divide a cube $[\underline{x},\bar{x}]\times[\underline{y},\bar{y}]\times [\underline{z},\bar{z}]$ in ${\rm I\!R}^3$ into two parts, denoted by Part $1$-$2$-$4$-$5$-$6$-$8$ and Part $1$-$3$-$4$-$5$-$7$-$8$. Then we can produce six simplices by three planes, see
Figures~\ref{fig-divisioin-6-1} $\&$ \ref{fig-divisioin-6-2}.
\begin{figure}
\caption{\footnotesize
Divide the part $1$-$2$-$4$-$5$-$6$-$8$ into three simplices in ${\rm I\!R}^3$.
(a) cuts the part $1$-$2$-$4$-$5$-$6$-$8$ by the plane with vertices $1$-$2$-$8$,
yielding the first simplex $1$-$2$-$4$-$8$ (red). (b) \& (c) go on to cut the remaining part by the plane $1$-$6$-$8$,
yielding the second simplex $1$-$2$-$6$-$8$ (green) and the third simplex $1$-$5$-$6$-$8$ (purple).}
\label{fig:3a}
\label{fig:3b}
\label{fig:3c}
\label{fig-divisioin-6-1}
\end{figure}
\begin{figure}
\caption{\footnotesize Divide the part $1$-$3$-$4$-$5$-$7$-$8$ into three simplices in ${\rm I\!R}^3$. (a) cuts the part $1$-$3$-$4$-$5$-$7$-$8$ by the plane with vertices $1$-$7$-$8$,
yielding the first simplex $1$-$5$-$7$-$8$ (red). (b) \& (c) go on to cut the remaining part by the plane $1$-$3$-$8$,
yielding the second simplex $1$-$3$-$7$-$8$ (green) and the third simplex $1$-$3$-$4$-$8$ (purple).}
\label{fig:4a}
\label{fig:4b}
\label{fig:4c}
\label{fig-divisioin-6-2}
\end{figure}
Let $1$-$4$-$5$-$8$$\searrow$ denote the front half subspace of the plane constructed by points $1$-$4$-$5$-$8$, i.e., \begin{eqnarray*} \inmat{$1$-$4$-$5$-$8$$\searrow$}:=\{(x,y,z)^T\in {\rm I\!R}^3: (y_{j+1}-y_j)x-(x_{i+1}-x_i)y+x_{i+1}y_j-x_iy_{j+1}\geq 0\}, \end{eqnarray*} and let $1$-$2$-$7$-$8$ $\uparrow$ denote the upper subspace of the plane constructed by points $1$-$2$-$7$-$8$, that is, \begin{eqnarray*} && \inmat{ $1$-$2$-$7$-$8$} \uparrow:=\{(x,y,z)^T\in {\rm I\!R}^3: (z_{l+1}-z_l) y-(y_{j+1}-y_j)z +y_{j+1}z_l-y_jz_{l+1} \geq 0\},\\ && \inmat{ $1$-$3$-$6$-$8$} \uparrow:=\{(x,y,z)^T\in {\rm I\!R}^3: (z_{l+1}-z_l) x -(x_{i+1}-x_i) z +x_{i+1}z_l-x_iz_{l+1}\leq 0\}. \end{eqnarray*}
The function value $u({x},{y},{z})$ is approximated by a convex combination of the function values evaluated at the vertices of the simplex containing $({x},{y},{z})$, that is, $$ u_N({x},{y},{z})=\lambda u_{i,j,l} +\mu u_{{i+1},{j+1},{l+1}}+
\bar{u}, $$ where $\lambda, \mu\in [0,1]$ and
{\small \begin{eqnarray*} && \bar{u}=\left\{\begin{array}{ll} \eta u_{{i+1},j,l} +(1-\lambda-\mu-\eta)u_{{i+1},{j+1},l} & \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \searrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\downarrow$ (Fig.~\ref{fig:3a})},\\ \eta u_{{i+1},j,{l+1}} +(1-\lambda-\mu-\eta) u_{{i+1},j,l} & \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \searrow \bigcap \inmat{$1$-$3$-$6$-$8$} \downarrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\uparrow$ (Fig.~\ref{fig:3b})},\\ \eta u_{{i+1},j,{l+1}} +(1-\lambda-\mu-\eta) u_{i,j,{l+1}} & \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \searrow \bigcap \inmat{$1$-$3$-$6$-$8$} \uparrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\uparrow$ (Fig.~\ref{fig:3c})}, \end{array} \right.\\ && \bar{u}=\left\{\begin{array}{ll} \eta u_{i,j,{l+1}} +(1-\lambda-\mu-\eta)u_{i,{j+1},{l+1}} & \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \nearrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\uparrow$ (Fig.~\ref{fig:4a})},\\ \eta u_{{i},{j+1},l} +(1-\lambda-\mu-\eta) u_{{i},{j+1},{l+1}} & \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \nearrow \bigcap \inmat{$1$-$3$-$6$-$8$} \uparrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\downarrow$(Fig.~\ref{fig:4b})},\\ \eta u_{{i},{j+1},l} +(1-\lambda-\mu-\eta) u_{{i+1},{j+1},l}& \inmat{if }\; (x,y,z)\in \inmat{$1$-$4$-$5$-$8$} \nearrow \bigcap \inmat{$1$-$3$-$6$-$8$} \downarrow \bigcap \inmat{$1$-$2$-$7$-$8$ $\downarrow$(Fig.~\ref{fig:4c})}.
\end{array} \right. \end{eqnarray*} }
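In local cell coordinates, the half-space tests above amount to pairwise comparisons of the three coordinates, so the containing simplex and its coefficients can be obtained by sorting (the Kuhn triangulation viewpoint). The following sketch is illustrative code with names of our own choosing; it assumes the cell has been rescaled to the unit cube.

```python
import numpy as np

def kuhn_simplex(local):
    """For local coordinates (s, t, r) of a point in the unit cube, return
    the four 0/1 vertices of the containing simplex and the corresponding
    convex-combination coefficients."""
    local = np.asarray(local, dtype=float)
    order = np.argsort(-local)        # coordinate indices, largest value first
    v = local[order]
    # monotone vertex path: start at the origin, set one coordinate to 1 at a time
    path = [np.zeros(3, dtype=int)]
    for axis in order:
        nxt = path[-1].copy()
        nxt[axis] = 1
        path.append(nxt)
    # coefficients are the successive gaps: 1 - v1, v1 - v2, v2 - v3, v3
    coeffs = np.concatenate(([1.0], v)) - np.concatenate((v, [0.0]))
    return [tuple(map(int, p)) for p in path], coeffs

verts, coeffs = kuhn_simplex((0.6, 0.2, 0.5))
```

For example, coordinates ordered as $s\geq t\geq r$ trace the vertex path $1\to 2\to 4\to 8$, i.e., the simplex $1$-$2$-$4$-$8$ of Figure~\ref{fig:3a}.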
\subsubsection{Implicit PLA}
As in the two-dimensional case, since $u_N$ is linear over each simplex, a target point ${\bm f}(\bm{z},\bm{\xi}^k)\in {\rm I\!R}^3$ and its approximate utility value $u_N({\bm f}(\bm{z},\bm{\xi}^k))$ have the same
convex combination coefficients.
We use binary variables $h_{i,j,l,k}^{1u}$, $h_{i,j,l,k}^{1m}$, $h_{i,j,l,k}^{1l}$ to indicate whether point ${\bm f}(\bm{z},\bm{\xi}^k)$ lies in the upper, middle or lower simplex of Part $1\inmat{-}2\inmat{-}4\inmat{-}5\inmat{-}6\inmat{-}8$, or in none of them, see Figure~\ref{fig-divisioin-6-1}. Likewise, we use binary variables $h_{i,j,l,k}^{2u}$, $h_{i,j,l,k}^{2m}$, $h_{i,j,l,k}^{2l}$ to indicate whether ${\bm f}(\bm{z},\bm{\xi}^k)$ lies in the upper, middle or lower simplex of Part $1\inmat{-}3\inmat{-}4\inmat{-}5\inmat{-}7\inmat{-}8$, or in none of them, see Figure~\ref{fig-divisioin-6-2}.
As in the bi-attribute case, we can identify the coefficients of the convex combinations
by solving a system of linear equalities and inequalities:
\begin{subequations} \label{eq:mixed-integer-R3} \begin{align}
& \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \sum_{l=1}^{N_3} \alpha_{i,j,l}^k=1,\;\;k=1,\cdots,K, \label{eq:mixed-integer-R3-b}\\ & \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \sum_{l=1}^{N_3} \alpha_{i,j,l}^k x_{i}= f_1^k,\;
\sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \sum_{l=1}^{N_3}\alpha_{i,j,l}^k y_{j}=f_2^k,\;
\sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \sum_{l=1}^{N_3}\alpha_{i,j,l}^k z_{l}=f_3^k, \nonumber \\
& \hspace{20em} k=1,\cdots,K, \label{eq:mixed-integer-R3-d} \\ & \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \sum_{l=1}^{N_3-1} h_{i,j,l,k}^{1u}+h_{i,j,l,k}^{1m}+h_{i,j,l,k}^{1l} +h_{i,j,l,k}^{2u}+h_{i,j,l,k}^{2m}+h_{i,j,l,k}^{2l}=1, \nonumber \\ & \hspace{20em} k=1,\cdots,K, \label{eq:mixed-integer-R3-e} \\
& {\bm h}_k^{\tau u},
{\bm h}_k^{\tau m},
{\bm h}_k^{\tau l}
\in \{0,1\}^{(N_1-1) (N_2-1) (N_3-1)},\;
\tau=1,2,\;k=1,\cdots,K,
\label{eq:mixed-integer-R3-f}\\ & 0\leq \alpha_{i,j,l}^k \leq \sum_{\nu = {\rm I}}^{\rm VIII} H_{i,j,l,k}^{\nu},\; i=1,\cdots,N_1,\; j=1,\cdots,N_2, \; l=1,\cdots,N_3, \nonumber \\ & \hspace{20em} k=1,\cdots,K, \label{eq:mixed-integer-R3-g} \end{align} \end{subequations} where ${\bm h}_{k}^{\tau u}:= (h_{1,1,1,k}^{\tau u},\cdots, h_{N_1-1,N_2-1,N_3-1,k}^{\tau u})^T$, ${\bm h}_{k}^{\tau m}:=(h_{1,1,1,k}^{\tau m},\cdots, h_{N_1-1,N_2-1,N_3-1,k}^{\tau m})^T$, ${\bm h}_{k}^{\tau l}:=(h_{1,1,1,k}^{\tau l}$, $\cdots, h_{N_1-1,N_2-1,N_3-1,k}^{\tau l})^T$ for $\tau =1,2$,
$k=1,\cdots,K$, and \begin{eqnarray*} && H_{i,j,l,k}^{\rm I}:= h_{i,j,l,k}^{1u}+h_{i,j,l,k}^{1m}+h_{i,j,l,k}^{1l} +h_{i,j,l,k}^{2u}+h_{i,j,l,k}^{2m}+h_{i,j,l,k}^{2l},
\\ && H_{i,j,l,k}^{\rm II} :=h_{i-1,j,l,k}^{1m}+h_{i-1,j,l,k}^{1l}, \qquad~~~ H_{i,j,l,k}^{\rm III} :=h_{i-1,j-1,l,k}^{1l}+h_{i-1,j-1,l,k}^{2l},\\ && H_{i,j,l,k}^{\rm IV} :=h_{i,j-1,l,k}^{2m} +h_{i,j-1,l,k}^{2l}, \qquad~~~ H_{i,j,l,k}^{\rm V} :=h_{i,j,l-1,k}^{1u}+h_{i,j,l-1,k}^{2u},\\ && H_{i,j,l,k}^{\rm VI}: =h_{i-1,j,l-1,k}^{1u} +h_{i-1,j,l-1,k}^{1m}, \quad H_{i,j,l,k}^{\rm VIII}:= h_{i,j-1,l-1,k}^{2u}+h_{i,j-1,l-1,k}^{2m},\\ && H_{i,j,l,k}^{\rm VII}:= h_{i-1,j-1,l-1,k}^{1u} +h_{i-1,j-1,l-1,k}^{1m} +h_{i-1,j-1,l-1,k}^{1l} +h_{i-1,j-1,l-1,k}^{2u}\\ && \qquad \qquad +h_{i-1,j-1,l-1,k}^{2m} +h_{i-1,j-1,l-1,k}^{2l}, \end{eqnarray*}
${\bm f({\bm w},\bm{\xi}^k)}=(f_1^k,f_2^k,f_3^k)^T$, $f_i^k:=f_i({\bm w},\bm{\xi}^k)$ for $i=1,2,3$, $h_{0,*,*,*}^*=h_{*,0,*,*}^*=h_{*,*,0,*}^*=h_{N_1,*,*,*}^*=h_{*,N_2,*,*}^*=h_{*,*,N_3,*}^*=0$. Constraint (\ref{eq:mixed-integer-R3-e}) imposes the restriction that only one of the six simplices of a cube is active for the convex combination. Constraint (\ref{eq:mixed-integer-R3-g}) imposes that the only nonzero $\alpha_{i,j,l}^k$ can be those associated with the four vertices of such a simplex, see
Figure~\ref{fig:I-VIII}.
$H_{i,j,l,k}^\nu$ represents the sum of $h_{i,j,l,k}^*$ with $*\in \{1u,1m,1l,2u,2m,2l\}$ in Octant $\nu$ that are related to point $(x_i,y_j,z_l)$ for $\nu={\rm I},\ldots,{\rm VIII}$.
Specifically, there are $6$ simplices in Octant I that are related to point $(x_i,y_j,z_l)$, and the corresponding binary variables are $h_{i,j,l,k}^{1u}$, $h_{i,j,l,k}^{1m}$, $h_{i,j,l,k}^{1l}$, $h_{i,j,l,k}^{2u}$, $h_{i,j,l,k}^{2m}$, $h_{i,j,l,k}^{2l}$. There are two simplices in Octant II that are related to $(x_i,y_j,z_l)$, and the corresponding binary variables are $h_{i-1,j,l,k}^{1m}$ and $h_{i-1,j,l,k}^{1l}$. The related binary variables in Octants III--VIII can be identified similarly. Such $h_{i,j,l,k}^*$ can be used to identify which vertices are used to represent ${\bm f}(\bm{z},\bm{\xi}^k)$. For example, if $h_{i,j,l,k}^{1l}=1$, then $\bm{f}(\bm{z},\bm{\xi}^k)$ lies in the lower simplex of Part $1$-$2$-$4$-$5$-$6$-$8$ of the cube $X_i\times Y_j\times Z_l$. This is indicated
by the fact that $\alpha_{i,j,l}^k\leq h_{i,j,l,k}^{1l}=1$, $\alpha_{i+1,j+1,l+1}^k\leq h_{i,j,l,k}^{1l}=1$, $\alpha_{i+1,j,l}^k\leq h_{i,j,l,k}^{1l}=1$, $\alpha_{i+1,j+1,l}^k\leq h_{i,j,l,k}^{1l}=1$, and $\alpha_{i',j',l'}^k=0$ for $(i',j',l')\notin \{(i,j,l),(i+1,j+1,l+1),(i+1,j,l),(i+1,j+1,l)\}$, see Figure~\ref{fig:I-VIII} for the 24 simplices that are related to point $(x_i,y_j,z_l)$.
\begin{figure}
\caption{\footnotesize
(a) divides a cube into $8$ sub-cubes denoted by octants I, II, III, IV, V, VI, VII and VIII.
The red point in (a)-(c) is $(x_i,y_j,z_l)$.
(b) $\&$ (c) illustrate all the $24$ simplices related to the point $(x_i,y_j,z_l)$, which motivates the last constraint in problem (\ref{eq:mixed-integer-R3}).
(b) represents the cases in octants I-IV. In octant I, the vertex $1$ is $(x_i,y_j,z_l)$, and there are six simplices containing the red point $(x_i,y_j,z_l)$. In octant II, the vertex $1$ is $(x_{i-1},y_j,z_l)$, and there are two simplices containing the red point.
In octant III, the vertex $1$ is $(x_{i-1},y_{j-1},z_l)$, and there are two simplices containing the red point.
In octant IV, the vertex $1$ is $(x_{i},y_{j-1},z_l)$, and there are two simplices containing the red point. (c) represents the cases in octants V-VIII. In octant V, the vertex $1$ is $(x_i,y_j,z_{l-1})$, and there are two simplices containing the red point.
In octant VI, the vertex $1$ is $(x_{i-1},y_j,z_{l-1})$, and there are two simplices containing the red point.
In octant VII, the vertex $1$ is $(x_{i-1},y_{j-1},z_{l-1})$, and there are six simplices containing the red point. In octant VIII, the vertex $1$ is $(x_{i},y_{j-1},z_{l-1})$, and there are two simplices containing the red point.}
\label{fig:I-VIII}
\end{figure}
Consequently, we can reformulate the tri-attribute utility maximization problem \linebreak $\max_{{\bm w}\in Z}\sum_{k=1}^K p_k[u_N({\bm f({\bm w},\bm{\xi}^k)})]$ as: \begin{subequations} \label{eq:mixed-integer-R3-2} \begin{align}
\max\limits_{{\bm w}\in Z,{\bm \alpha},{\bm h^l},{\bm h}^m,{\bm h}^u} \; & \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2}\sum_{l=1}^{N_3}\alpha_{i,j,l}^k u_{i,j,l}\\
{\rm s.t.} \qquad\;\; & \inmat{constraints } (\ref{eq:mixed-integer-R3-b})- (\ref{eq:mixed-integer-R3-g}), \end{align} \end{subequations} where ${\bm \alpha}:=({\bm \alpha}^1,\cdots,{\bm \alpha}^K)\in {\rm I\!R}^{(N_1N_2N_3)\times K}$, ${\bm \alpha}^k:=(\alpha_{1,1,1}^k,\cdots,\alpha_{N_1,N_2,N_3}^k)^T$ for $k=1,\cdots,K$, ${\bm h}^u:=({\bm h}^{1u}_1,\cdots,{\bm h}^{1u}_K,{\bm h}^{2u}_1,\cdots,{\bm h}^{2u}_K)\in {\rm I\!R}^{(N_1-1)(N_2-1)(N_3-1)\times 2K}$,\; ${\bm h}^m:=({\bm h}^{1m}_1,\cdots,{\bm h}^{1m}_K,{\bm h}^{2m}_1,\cdots, \\ {\bm h}^{2m}_K)$,\; ${\bm h}^l:=({\bm h}^{1l}_1,\cdots,$ ${\bm h}^{1l}_K,{\bm h}^{2l}_1,\cdots,{\bm h}^{2l}_K)$,\; ${\bm u}:=(u_{1,1,1},\cdots,$ $u_{N_1,N_2,N_3})^T\in {\rm I\!R}^{N_1 N_2N_3}$. If $f({\bm w},\bm{\xi})$ is linear in ${\bm w}$, then (\ref{eq:mixed-integer-R3-2}) is an MILP.}
Extending this to the UPRO model, we consider the ambiguity set ${\cal U}_N$ constructed by pairwise comparison of questions $({\bm A}_m,{\bm B}_m)$.
Under Assumption~\ref{assu-lip}, suppose that the set of gridpoints $\{(x_i,y_j,z_l):i=1,\cdots,N_1,j=1,\cdots,N_2,l=1,\cdots,N_3\}$ contains all the outcomes of lotteries ${\bm A}_m$ and ${\bm B}_m$ for $m=1,\cdots,M$. Then we can solve the tri-attribute utility preference robust optimization (TUPRO) problem approximately via the TUPRO-N problem $\max_{\bm{z}\in Z} \min_{u_N\in {\cal U}_N} \sum_{k=1}^Kp_k[u_N({\bm f(\bm{z},\bm{\xi}^k)})]$, which can be written as: \begin{subequations} \label{eq:PRO_MILP_3m} \begin{align} \max\limits_{\bm{z}\in Z} \min\limits_{ \substack{{\bm \alpha}, {\bm h}^u,{\bm h}^m\\ {\bm h}^l,\bm u}}\;& \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \sum_{l=1}^{N_3} \alpha_{i,j,l}^k u_{i,j,l}\\ {\rm s.t.} \;\;\;\; &
u_{i+1,j,l}\geq u_{i,j,l}, i=1,\cdots,N_1-1, j=1,\cdots,N_2,l=1,\cdots,N_3, \label{eq:PRO_MILP_3m-f}\\ & u_{i,j+1,l}\geq u_{i,j,l}, i=1,\cdots,N_1, j=1,\cdots,N_2-1,l=1,\cdots,N_3, \label{eq:PRO_MILP_3m-h}\\ & u_{i,j,l+1}\geq u_{i,j,l}, i=1,\cdots,N_1, j=1,\cdots,N_2,l=1,\cdots,N_3-1, \label{eq:PRO_MILP_3m-i}\\ & u_{i+1,j,l}-u_{i,j,l}\leq L(x_{i+1}-x_i), \nonumber \\ & \qquad \qquad i=1,\cdots,N_1-1, j=1,\cdots,N_2,l=1,\cdots,N_3,\label{eq:PRO_MILP_3m-j}\\ & u_{i,j+1,l}-u_{i,j,l}\leq L(y_{j+1}-y_j), \nonumber \\ & \qquad \qquad i=1,\cdots,N_1, j=1,\cdots, N_2-1, l=1,\cdots,N_3, \label{eq:PRO_MILP_3m-k}\\ & u_{i,j,l+1}-u_{i,j,l}\leq L(z_{l+1}-z_{l}), \nonumber \\ & \qquad \qquad i=1,\cdots,N_1,j=1,\cdots,N_2, l=1,\cdots,N_3-1, \label{eq:PRO_MILP_3m-l} \\ & u_{1,1,1}=0, \; u_{N_1,N_2,N_3}=1, \label{eq:PRO_MILP_3m-m}\\ &\sum_{i=1}^{N_1} \sum_{j=1}^{N_2}\sum_{l=1}^{N_3} ( \mathbb{P}({\bm B}_m=(x_i,y_j,z_l))-\mathbb{P}({\bm A}_m=(x_i,y_j,z_l)))u_{i,j,l} \leq 0,
\nonumber \\ & \hspace{16em} m=1,\cdots,M, \label{eq:PRO_MILP_3m-n}\\ & \inmat{constraints }(\ref{eq:mixed-integer-R3-b})-(\ref{eq:mixed-integer-R3-g}),\label{eq:PRO_MILP_3m-b} \end{align} \end{subequations} where
${\bm u}:=(u_{1,1,1},\cdots,u_{N_1,N_2,N_3})^T\in {\rm I\!R}^{N_1 N_2N_3}$.
We can solve problem (\ref{eq:PRO_MILP_3m}) by a Dfree method, where the inner problem is an MILP when ${\bm f}(\bm{z},\bm{\xi})$ is linear in $\bm{z}$.
It is also possible to reformulate the problem further as a single MILP; we leave this to interested readers.
\subsection{Multi-attribute case} \label{sec:m-dim-u}
Since a large number of simplices are needed to partition hypercubes of dimension greater than three (see \cite{hughes1996simplexity}), we give a general framework
for the case of $m$ attributes.
For ${\bm x}\in {\rm I\!R}^m$, we can divide the domain of utility $\bigtimes_{i=1}^{m}[\underline{x}_i,\overline{x}_i]$ into $(N_1-1)\times (N_2-1)\times \cdots \times (N_m-1)$ subsets $\{\bigtimes_{i=1}^{m}[x_{i_j},x_{i_{j+1}}]:j=1,\cdots,N_{i}-1\}$. We denote the values of $u_N$ at $(x_{1_{j_1}},\cdots,x_{m_{j_m}})$ by $u_{1_{j_1},\cdots,m_{j_m}}$ for $j_i=1,\cdots,N_i$, $i=1,\cdots,m$. We {\color{black} reshape $(u_{1_{j_1},\cdots,m_{j_m}})_{N_1\times \cdots \times N_m}\in {\rm I\!R}^{N_1\times \cdots \times N_m}$ as a vector ${\bm u}=(u_1,\cdots,u_V)^T\in {\rm I\!R}^{V}$} with $V:=N_1\times\cdots \times N_m$, and label the corresponding vertices
by $1,\cdots,V$. {\color{black} We divide the domain $\bigtimes_{i=1}^{m}[\underline{x}_i,\overline{x}_i]$ into mutually exclusive simplices and label them
by $1,\cdots,S$.} The $v$-th vertex is ${\bm x}_v:=(x_{1_v},\cdots,x_{i_v},\cdots,x_{m_v})^T \in {\rm I\!R}^m$ for $v=1,\ldots,V$. Let ${\cal V}_s$ denote the set of vertices of the $s$-th simplex. As in the bi-attribute and tri-attribute cases, for given
${\bm f}(\bm{z},\bm{\xi}^k)=(f_1(\bm{z},\bm{\xi}^k),f_2(\bm{z},\bm{\xi}^k),\cdots,f_m(\bm{z},\bm{\xi}^k))^T$, we can identify
the simplex containing
${\bm f}(\bm{z},\bm{\xi}^k)$ and obtain the coefficients
of the representation of ${\bm u}$ at ${\bm f}(\bm{z},\bm{\xi}^k)$ in terms of the utility values at the vertices of the simplex
by solving a system of linear equalities and inequalities:
\begin{subequations} \label{eq:mixed-integer-Rm} \begin{align} & \sum_{v=1}^{V} \alpha_{v}^k=1,\;\;k=1,\cdots,K, \label{eq:mixed-integer-Rm-b}\\ & \sum_{v=1}^V \alpha_{v}^k x_{i_v}=f_i(\bm{z},\bm{\xi}^k),\;\;i=1,\cdots,m,\; k=1,\cdots, K, \label{eq:mixed-integer-Rm-c}\\
& \sum_{s=1}^S h_s^k=1,\;\;h_s^k\in \{0,1\}, \;s=1,\cdots,S, \;k=1,\cdots,K, \label{eq:mixed-integer-Rm-d}\\ & 0\leq \alpha_v^k\leq \sum_{s:{\bm x}_v\in {\cal V}_s} h_s^k, \; v=1,\cdots,V,\;k=1,\cdots,K, \label{eq:mixed-integer-Rm-e} \end{align} \end{subequations} where $s:{\bm x}_v\in {\cal V}_s$ means all $s\in \{1,\cdots,S\}$ such that the vertex ${\bm x}_v$ belongs to the set ${\cal V}_s$.
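For fixed $\bm{z}$, this system can also be solved by enumeration: restricted to the $m+1$ vertices of one candidate simplex, (\ref{eq:mixed-integer-Rm-b})--(\ref{eq:mixed-integer-Rm-c}) form a square linear system, and the point lies in that simplex iff all coefficients are nonnegative. The following is a minimal illustrative sketch (the function names are ours):

```python
import numpy as np

def barycentric(vertices, point):
    """Coefficients alpha with sum(alpha) = 1 and sum_v alpha_v x_v = point,
    for an (m+1) x m array of simplex vertices."""
    V = np.asarray(vertices, dtype=float)
    A = np.vstack([np.ones(len(V)), V.T])   # first row: sum-to-one constraint
    b = np.concatenate(([1.0], np.asarray(point, dtype=float)))
    return np.linalg.solve(A, b)

def find_simplex(simplices, point, tol=1e-9):
    """Scan a list of simplices (each an (m+1) x m vertex array) and return
    the index of one containing 'point' together with its coefficients."""
    for s, verts in enumerate(simplices):
        alpha = barycentric(verts, point)
        if np.all(alpha >= -tol):
            return s, alpha
    raise ValueError("point lies outside all simplices")
```

This enumeration is practical when $S$ is modest; the MIP below embeds the same logic via the binary variables $h_s^k$ so that it can be combined with the optimization over $\bm{z}$.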
Constraint (\ref{eq:mixed-integer-Rm-e})
implies that
the only nonzero $\alpha_{v}^k$
are those associated with the vertices of the selected simplex. Then we can reformulate the multi-attribute utility maximization problem $\max_{\bm{z}\in Z}\sum_{k=1}^K p_k[u_N({\bm f(\bm{z},\bm{\xi}^k)})]$ as an MIP
(see e.g. \cite{VAN10}), \begin{subequations} \label{eq:mixed-integer-Rm-2} \begin{align} \max\limits_{\bm{z} \in Z, {\bm \alpha}, {\bm h}} \; & \sum_{k=1}^K p_k \sum_{v=1}^{V}\alpha_{v}^k u_v\\ {\rm s.t.}\;\;\; & \inmat{constraints }(\ref{eq:mixed-integer-Rm-b})-(\ref{eq:mixed-integer-Rm-e}), \end{align} \end{subequations} where ${\bm \alpha}:=({\bm \alpha}^1,\cdots,{\bm \alpha}^K)\in {\rm I\!R}^{V\times K}$, ${\bm h}:=({\bm h}^1,\cdots,{\bm h}^K)\in {\rm I\!R}^{S \times K}$. If $f({\bm{z},\bm{\xi}})$ is linear in $\bm{z}$, then problem (\ref{eq:mixed-integer-Rm-2}) is an MILP.
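The MIP above can be assembled and passed to an off-the-shelf solver. The sketch below is illustrative only (the grid, utility values and target point are made-up toy data); it encodes (\ref{eq:mixed-integer-Rm-b})--(\ref{eq:mixed-integer-Rm-e}) for a single scenario on the unit square split into two triangles and solves the result with \texttt{scipy.optimize.milp}.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# toy data: vertices of the unit square and its two triangles
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
simplices = [[0, 1, 3], [0, 2, 3]]      # lower and upper triangle
u = np.array([0.0, 0.6, 0.5, 1.0])      # utility values at the vertices
f = np.array([0.7, 0.2])                # target point f(z, xi^k), z fixed

V, S = len(verts), len(simplices)
c = -np.concatenate([u, np.zeros(S)])   # variables (alpha, h); milp minimizes

# equality block: sum alpha = 1, sum alpha*x = f1, sum alpha*y = f2, sum h = 1
A_eq = np.zeros((4, V + S))
A_eq[0, :V] = 1.0
A_eq[1, :V] = verts[:, 0]
A_eq[2, :V] = verts[:, 1]
A_eq[3, V:] = 1.0
b_eq = np.array([1.0, f[0], f[1], 1.0])

# linking block: alpha_v <= sum of h_s over simplices s containing vertex v
A_ub = np.zeros((V, V + S))
for v in range(V):
    A_ub[v, v] = 1.0
    for s, vs in enumerate(simplices):
        if v in vs:
            A_ub[v, V + s] = -1.0

res = milp(c,
           constraints=[LinearConstraint(A_eq, b_eq, b_eq),
                        LinearConstraint(A_ub, -np.inf, np.zeros(V))],
           integrality=np.r_[np.zeros(V), np.ones(S)],
           bounds=Bounds(0.0, 1.0))
```

Here the lower triangle is selected and $\alpha = (0.3, 0.5, 0, 0.2)$, giving the approximate utility $0.5\cdot 0.6 + 0.2\cdot 1 = 0.5$.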
We continue to assume that the ambiguity set ${\cal U}_N$ is constructed by pairwise comparisons of lotteries ${\bm A}_l$ and ${\bm B}_l$ with $l=1,\cdots,M$. Let $ \tilde{\cal U}_N={\cal U}_N \bigcap \{u_N: u_N \inmat{ is Lipschitz continuous with modulus }L\}. $
Consequently, we can solve the multi-attribute utility preference robust problem
by solving the MUPRO-N problem $\max_{\bm{z}\in Z} \min_{u_N\in \tilde{\cal U}_N}\sum_{k=1}^K p_k[u_N({\bm f(\bm{z},\bm{\xi}^k)})]$, formulated as an MIP:
\begin{subequations} \begin{align} \label{eq:PRO_m-dim} \displaystyle{ \max_{\bm{z}\in Z} \min_{{\bm u}, {\bm \alpha},{\bm h} } }\; & \sum_{k=1}^K p_k \sum_{v=1}^{V}\alpha_{v}^k u_v \\ {\rm s.t.} \;\; & \inmat{constraints }(\ref{eq:mixed-integer-Rm-b})-(\ref{eq:mixed-integer-Rm-e}), \;\\ & u_{1_{j_1},\cdots,i_{j_{i}+1},\cdots,m_{j_m}} \geq u_{1_{j_1},\cdots,i_{j_{i}},\cdots,m_{j_m}}, \nonumber \\ & \hspace{12em} j_i=1,\cdots,N_i-1,\; i=1,\cdots,m, \label{eq:PRO_m-dim-c}\\
& u_{1_{j_1},\cdots,i_{j_{i}+1},\cdots,m_{j_m}} - u_{1_{j_1},\cdots,i_{j_{i}},\cdots,m_{j_m}} \leq L(x_{i_{j_i+1}}-x_{i_{j_i}}), \nonumber \\ & \hspace{12em} j_i=1,\cdots,N_i-1,\;i=1,\cdots,m, \qquad~~\; \label{eq:PRO_m-dim-e}\\ & u_1=0,\; u_V=1,\label{eq:PRO_m-dim-f}\\ &\sum_{v=1}^{V} \mathbb{P}({\bm B}_l=\bm{x}_v)u_v\leq \sum_{v=1}^{V} \mathbb{P}({\bm A}_l= \bm{x}_v)u_v, \;l=1,\cdots,M, \end{align} \end{subequations} where ${\bm u}\in {\rm I\!R}^{V}$, ${\bm \alpha}=({\bm \alpha}^1,\cdots,{\bm \alpha}^K)\in {\rm I\!R}^{V\times K}$, ${\bm h}=({\bm h}^1,\cdots,{\bm h}^K)\in {\rm I\!R}^{S \times K}$. Constraint (\ref{eq:PRO_m-dim-c}) represents the non-decreasing property of the utility function.
Constraint (\ref{eq:PRO_m-dim-e}) represents the Lipschitz continuity of $u_N$ and (\ref{eq:PRO_m-dim-f}) characterizes the normalization of $u_N$.
\section{Error bounds for the PLA}
\label{sec-errorbound}
In the previous section, we outlined computational schemes
for solving the BUPRO-N problem. In this section, we investigate error bounds for the optimal value and the optimal solutions obtained from solving the BUPRO-N problem when they are used to approximate the optimal value and optimal solutions of the BUPRO problem. Notice that
the only difference between the two maximin optimization problems
is the feasible set of the inner minimization problem. We therefore proceed by quantifying the difference between $\mathcal{U}_N$ and $\mathcal{U}$ and then applying classical stability results in parametric programming to derive the error bounds of the optimal value and optimal solutions. Proofs of all technical results are deferred to the appendix.
To ease the exposition, we
write $\langle u, \psi_l \rangle$ for $\int_T u(x,y) d \psi_l(x,y)$,
and subsequently (\ref{eq:ambiguity_set}) as \begin{equation} \label{eq-pseume}
\mathcal{U}=\{u\in\mathscr{U}: {\langle} u,\bm{\psi} {\rangle} \leq \bm{C}\}, \end{equation} where $\bm{\psi} := (\psi_1(x,y), \ldots, \psi_M(x,y))^T \in {\rm I\!R}^M$, $\bm{C}:=(c_1,\ldots,c_M)^T\in {\rm I\!R}^M$. Note that $\langle u, \psi_l \rangle$ should not be read as a kind of inner product as we cannot swap the positions between $u$ and $\psi_l$. We adopt the
notation since (\ref{eq-pseume}) clearly indicates ${\cal U}$ as the set of the solutions of the inequality system ${\langle} u,\bm{\psi} {\rangle} \leq \bm{C}$ relative to $\mathscr{U}$.
To quantify the difference between two utility functions, we define, for any $u,v\in\mathscr{U}$, the pseudo-metric between $u$ and $v$
under the function set $\mathscr{G}$ by \begin{equation*}
\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,v):=\sup_{g\in\mathscr{G}} |{\langle} g,u {\rangle} - {\langle} g,v {\rangle}|. \end{equation*}
It is easy to observe that $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,v)=0$ if and only if ${\langle} g,u {\rangle}={\langle} g,v {\rangle}$ for all $g\in\mathscr{G}$. In practice, we may regard $\mathscr{G}$ as a set of ``test functions'' associated with some prospects and interpret $u$ as a measure induced by utility. The pseudo-metric means that if $u$ and $v$ give the same average value for each $g\in\mathscr{G}$, then they are regarded as ``equal'' under $\mathsf {d\kern -0.07em l}_{\mathscr{G}}$
although they may not be identical. Thus $\mathsf {d\kern -0.07em l}_{\mathscr{G}}$ is a kind of pseudo-metric defined over the space of utility-induced measures $\mathscr{U}$. This definition parallels a similar definition in probability theory, where $u$ and $v$ play the role of probability measures and the corresponding pseudo-metric is known as the $\zeta$-metric, see \cite{Rom03}. Here we continue to adopt the terminology although the background is different.
\begin{example} \label{exm-g}
Recall that $T= [\underline{x},\bar{x}]\times[\underline{y},\bar{y}]$.
(a) Let $$
\mathscr{G}=\mathscr{G}_M:=\left\{ g: T \rightarrow {\rm I\!R} \,\left|\;
\inmat{$g$ is measurable, } \sup_{\bm{t}\in T}|g(\bm{t})| \leq 1 \right.\right\}. $$ Then $\mathsf {d\kern -0.07em l}_{\mathscr{G}_M}(u,v)$ corresponds to the total variation metric and $\mathsf {d\kern -0.07em l}_{\mathscr{G}_M}(u,v)\leq 1$.
(b) Let \begin{equation} \label{eq-kantorovich}
\mathscr{G}=\mathscr{G}_K:=\{g:T\to{\rm I\!R} \,|\; \inmat{g is Lipschitz continuous with the modulus bounded by 1}\}. \end{equation} Then $\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,v)$ corresponds to the Kantorovich metric in which case we have
$\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,v)=\inf_{\pi}\int_{T\times T} \|\bm{t}-\bm{t}'\| d\pi(\bm{t},\bm{t}') \leq \sqrt{(\bar{x}-\underline{x})^2+(\bar{y}-\underline{y})^2} $,
where the infimum is taken over all $\pi$ with marginals $\int_T \pi(\bm{t},\bm{t}') d \bm{t}'=u(\bm{t})$, $\int_T \pi(\bm{t},\bm{t}') d \bm{t}=v(\bm{t}')$
and $\|\bm{t}\|$ denotes the Euclidean norm.
(c) Let $\mathscr{G}:=\mathscr{G}_K\cap\mathscr{G}_M$. Then $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,v)$ corresponds to the bounded Lipschitz metric and $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,v)\leq\min\left\{1,\sqrt{(\bar{x}-\underline{x})^2+(\bar{y}-\underline{y})^2}\right\}$.
(d) Let \begin{equation*}
\mathscr{G}=\mathscr{G}_I:=\{g:T\to{\rm I\!R} \,|\; g=\mathds{1}_{[\underline{x},x]\times[\underline{y},y]}(\cdot), (x,y)\in T \}. \end{equation*} Then $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u,v)$ corresponds to the Kolmogorov metric in which case we have $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u,v)\leq~1$. \end{example}
For any two sets $U, V\subset \mathscr{U}$, let $ \mathbb{D}_{\mathscr{G}}(U,V) := \sup_{u\in U}\inf_{v\in V} \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,v), $ which quantifies the deviation of $U$ from $V$ and $ \mathbb{H}_{\mathscr{G}}(U,V) := \max\left\{\mathbb{D}_{\mathscr{G}}(U,V), \mathbb{D}_{\mathscr{G}}(V,U)\right\}, $ which denotes the Hausdorff distance between the two sets under the pseudo-metric. By convention, when $U=\{u\}$ is a singleton, we write the distance $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,V)$ from $u$ to set $V$
rather than $\mathbb{D}_{\mathscr{G}}(U,V)$.
Using the pseudo-metric, we
can derive an error bound for the deviation of any utility function $u\in\mathscr{U}$ from $\mathcal{U}$ in terms of the residual of the linear system defining ${\cal U}$. This
type of result is known as Hoffman's lemma, which we state next.
\begin{lemma}[Hoffman's lemma] \label{lem-hof} Consider (\ref{eq-pseume}).
Assume: (a)
$\mathscr{G}$ is chosen so that the resulting pseudo-distance between any two utility functions is finite-valued, and (b) there exist a positive constant $\alpha$ and a function $u^0\in\mathcal{U}$ such that \begin{equation} \label{eq-sla}
{\langle} u^0, \bm{\psi} {\rangle} -\bm{C}+\alpha \mathbb{B}^M \subset {\rm I\!R}_-^M. \end{equation} Then \begin{equation} \label{eq-hof}
\mathsf {d\kern -0.07em l}_{\mathscr{G}} (u,\mathcal{U}) \leq \frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u^0)}{\alpha}
\|({\langle} u, \bm{\psi} {\rangle} -\bm{C})_+\| \quad \forall u\in\mathscr{U}, \end{equation} where $({\bm a})_+:=\max\{0,{\bm a}\}$ which is taken componentwise.
\end{lemma}
Condition (\ref{eq-sla}) is known as Slater's condition. It implies that there is at least one utility function $u^0$ such that
${\langle} u^0,\bm{\psi} {\rangle} - \bm{C}$ lies in the interior of ${\rm I\!R}^M_-$.
This kind of condition
is widely used in the literature on
Hoffman's lemma for linear and convex systems, see \cite{Rob75} and references therein.
Since the proof of Hoffman's lemma for utility functions defined over ${\rm I\!R}^2$ is similar to that for utility functions over ${\rm I\!R}$ (\cite{GXZ21}), we omit the details.
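To illustrate the bound (\ref{eq-hof}), its right-hand side can be evaluated directly from the residuals of the inequality system. Below is a minimal Python sketch (the function name and inputs are ours; `dist_u_u0` stands for $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u^0)$):

```python
import math

def hoffman_bound(residuals, dist_u_u0, alpha):
    """Right-hand side of Hoffman's bound (eq-hof):
    (d(u, u0) / alpha) * ||(<u, psi> - C)_+||,
    where `residuals` holds the components of <u, psi> - C."""
    positive_part = [max(0.0, r) for r in residuals]
    return dist_u_u0 / alpha * math.sqrt(sum(r * r for r in positive_part))
```

In particular, when $u$ satisfies the inequality system (all residuals nonpositive), the bound is zero, consistent with $u\in\mathcal{U}$.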
\subsection{Error bound on the ambiguity set}
We move on to quantify the difference between $\mathcal{U}$ and $\mathcal{U}_N$. First, we give the following technical result.
\begin{proposition} \label{prop-d} Let $u\in\mathscr{U}$ and let $u_N$ be the PLA of $u$ defined as in Proposition~\ref{prop-uti-N}. Then the following assertions hold:
(i) If $\mathscr{G}=\mathscr{G}_K$,
then \begin{equation}
\mathsf {d\kern -0.07em l}_{\mathscr{G}_K} (u,u_N)\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}, \end{equation} where $\mathscr{G}_K$ is defined as in Example~\ref{exm-g}~(b)
and \begin{equation} \label{eq-be}
\beta_{N_1}:=\max_{i=2,\ldots,N_1}(x_i-x_{i-1}), \; \beta_{N_2}:=\max_{j=2,\ldots,N_2}(y_j-y_{j-1}). \end{equation}
(ii) If $\mathscr{G}=\mathscr{G}_I$ and
$u$ is Lipschitz continuous over
$T$ with the modulus $L$,
then \begin{equation} \label{eq-d}
\mathsf {d\kern -0.07em l}_{\mathscr{G}_I} (u,u_{N}) \leq
2L\left( \beta_{N_1}+\beta_{N_2} \right), \end{equation}
where $\mathscr{G}_I$ is defined as in Example~\ref{exm-g}~(d).
\end{proposition}
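The mesh quantities in (\ref{eq-be}) and the resulting bounds of Proposition~\ref{prop-d} are straightforward to compute. A small Python sketch (function names are ours):

```python
def mesh_sizes(xs, ys):
    """beta_{N1} and beta_{N2} of (eq-be): the largest spacings of the grids."""
    b1 = max(xs[i] - xs[i - 1] for i in range(1, len(xs)))
    b2 = max(ys[j] - ys[j - 1] for j in range(1, len(ys)))
    return b1, b2

def pla_bounds(xs, ys, L):
    """The two bounds of Proposition prop-d: the Kantorovich-type bound
    2*(b1^2 + b2^2)^(1/2) and the Kolmogorov-type bound 2*L*(b1 + b2)."""
    b1, b2 = mesh_sizes(xs, ys)
    return 2.0 * (b1 ** 2 + b2 ** 2) ** 0.5, 2.0 * L * (b1 + b2)
```

Both bounds vanish as the grids are refined, which is the driver of the convergence results below.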
With Lemma~\ref{lem-hof} and Proposition~\ref{prop-d}, we are ready to quantify the difference between $\mathcal{U}_N$ and $\mathcal{U}$.
\begin{theorem}[Error bound on $\mathbb{H}_{\mathscr{G}}(\mathcal{U}_N,\mathcal{U})$] \label{thm-erramb} Assume: (a) Slater's condition
in Lemma~\ref{lem-hof} is satisfied;
(b)
$\int_T d\psi_l(\bm{t})$ is well-defined; (c)
$u$ is Lipschitz continuous over
$T$ with the modulus $L$.
Then there exist a positive constant $\hat{\alpha}<\alpha$ and positive integers $N^0_1$ and $N^0_2$ such that the following assertions hold for the specific choices of $\mathscr{G}$ defined as in Example~\ref{exm-g}.
(i) If $\mathscr{G}=\mathscr{G}_K$,
then
\begin{equation} \label{eq-erram-L} \begin{split}
\mathbb{H}_{\mathscr{G}_K} (\mathcal{U},\mathcal{U}_{N}) \leq & 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2} \\
& + L(\beta_{N_1}+ \beta_{N_2})
\frac{\left((\bar{x}-\underline{x})^2+(\bar{y}-\underline{y})^2\right)^{1/2}}{\hat{\alpha}} \left(\sum_{l=1}^M \left| \int_T d \psi_l (\bm{t})\right|^2\right)^{1/2} \end{split} \end{equation} for all $N_1\geq N_1^0$ and $N_2\geq N_2^0$.
(ii) If $\mathscr{G}=\mathscr{G}_I$, then \begin{equation} \label{eq-erram-I}
\mathbb{H}_{\mathscr{G}_I}(\mathcal{U},\mathcal{U}_{N})\leq
L\left( \beta_{N_1}+\beta_{N_2} \right) \left( 2+\frac{1}{\hat{\alpha}} \left(\sum_{l=1}^M \left| \int_T d \psi_l (t)\right|^2\right)^{1/2} \right) \end{equation} for all $N_1\geq N_1^0$ and $N_2\geq N_2^0$, where $\beta_{N_1}$, $\beta_{N_2}$ are defined as in (\ref{eq-be}) and $\beta_{N_i}\to 0$ as $N_i\to\infty$ for $i=1,2$. \end{theorem}
The constant $\hat{\alpha}$ is related to Slater's condition for the linear system when the utility function is restricted to the space $\mathscr{U}_N$, see \cite[page 16]{GXZ21}. It is well known that the Kolmogorov metric $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}$ (Example~\ref{exm-g}~(d)) is tighter than the Kantorovich metric $\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}$ (Example~\ref{exm-g}~(b)) because the former is about the largest difference between two utility functions whereas the latter is about the area between the graphs of the two utility functions, see \cite{GiS02}. Consequently, $\mathbb{H}_{\mathscr{G}_I}(\mathcal{U}_N,\mathcal{U})$ is tighter than $\mathbb{H}_{\mathscr{G}_K}(\mathcal{U}_N,\mathcal{U})$.
The following corollary shows that the second term disappears in both cases
when $\psi_l$, $l=1,\ldots,M$ are simple functions.
\begin{corollary} \label{cor-err-optval-discrete} Let $u\in\mathscr{U}$. Assume that $u$ is Lipschitz continuous over
$T$ with the modulus $L$.
If $\psi_l$ is a simple function taking constant values
over each cell of $T$
for
$l=1,\ldots,M$, then $\mathbb{H}_{\mathscr{G}_K}(\mathcal{U}_{N},\mathcal{U})\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}$ and $\mathbb{H}_{\mathscr{G}_I}(\mathcal{U}_{N},\mathcal{U})\leq 2L\left( \beta_{N_1}+\beta_{N_2} \right)$. \end{corollary}
The corollary provides some useful insights: if $\psi_l$ is a simple function for $l=1,\cdots,M$ (which
corresponds to the case where the DM's preference is elicited via pairwise comparisons of lotteries), then we can construct the grid of $T$ in such a way that $\psi_l$ is constant over each $T_{i,j}$ (the vertices of the cells comprise all outcomes of the lotteries).
In this way, we may effectively reduce the modelling error arising from the PLA of the utility function. Note also that in this case Slater's condition is not
required, which means that the error bound holds for all $N_1$ and $N_2$ rather than only for sufficiently large ones.
\subsection{Error bound on the optimal value and the optimal solution}
We are now ready to quantify the difference between the BUPRO-N and BUPRO models. Let ${\vartheta}_N$ and ${\vartheta}$ denote the respective optimal values, and $Z_N^*$ and $Z^*$ denote the corresponding sets of optimal solutions.
\begin{theorem}[Error bound on the optimal value and the optimal solution] \label{thm-optval} Assume the settings and conditions of
Theorem~\ref{thm-erramb}.
Then the following assertions hold.
(i) \begin{equation} \label{eq-err-vt}
|{\vartheta}_{N}-{\vartheta}| \leq L\left( \beta_{N_1}+\beta_{N_2} \right) \left( 3+\frac{1}{\hat{\alpha}} \left(\sum_{l=1}^M \left| \int_T d \psi_l (t)\right|^2\right)^{1/2} \right) \end{equation} for all $N_1\geq N_1^0$ and $N_2\geq N^0_2$, where $L$, $\hat{\alpha}$, $\beta_{N_1}$, $\beta_{N_2}$, $N_1^0$ and $N^0_2$ are defined as in Theorem~\ref{thm-erramb}.
(ii) Let $v(\bm{z}):=\min_{u\in\mathcal{U}} {\mathbb{E}}_P[u(\bm{f}(\bm{z},\bm{\xi}))]$. Define the growth function $\Lambda(\tau):=\min\{v(\bm{z})-{\vartheta}^*: \bm{z}\in Z, \, d(\bm{z},Z^*)\geq \tau\}$
and $\Lambda^{-1}(\eta):=\sup\{\tau:\Lambda(\tau)\leq\eta\}$ where $d(\bm{z},Z^*)=\inf_{\bm{z}'\in Z^*} \|\bm{z}-\bm{z}'\|$. Then \begin{equation} \label{eq-err-so}
\mathbb{D}(Z_{N}^*,Z^*)\leq \Lambda^{-1} \left( 2L\left( \beta_{N_1}+\beta_{N_2} \right) \left( 3+\frac{1}{\hat{\alpha}} \left(\sum_{l=1}^M \left| \int_T d \psi_l (t)\right|^2\right)^{1/2} \right) \right), \end{equation}
where $\mathbb{D}(Z_{N}^*,Z^*):=\sup_{\bm{z}\in Z_{N}^*} \inf_{\bm{z}' \in Z^*} \|\bm{z}-\bm{z}'\|$. \end{theorem}
\begin{remark} \label{rem:distance}
(i) Note that ${\vartheta}$ is not computable whereas ${\vartheta}_N$ is. The error bound established in (\ref{eq-err-vt}) gives the DM an interval centred at ${\vartheta}_N$ which contains ${\vartheta}$. For a specified precision $\epsilon$, we can use the inequality to estimate the mesh sizes $\beta_{N_1}$ and $\beta_{N_2}$ such that $|{\vartheta}_N-{\vartheta}|\leq\epsilon$. In the case when $x_1,\ldots,x_{N_1}$ and $y_1,\ldots,y_{N_2}$ are evenly spread over $[\underline{x},\bar{x}]$ and $[\underline{y},\bar{y}]$, the specified precision is reached when
$L\left( \frac{\bar{x}-\underline{x}}{N_1}+\frac{\bar{y}-\underline{y}}{N_2} \right) \left( 3+\frac{1}{\hat{\alpha}} \left(\sum_{l=1}^M \left| \int_T d \psi_l (\bm{t})\right|^2\right)^{1/2} \right)\leq\epsilon$.
(ii) The error bound (\ref{eq-err-vt}) is established without restricting
the utility functions to being concave, and it is derived under the PLA scheme. We envisage that similar results may be obtained using spline approximation and leave this for interested readers to investigate. Note that these are mesh-dependent approximation schemes, which means that the quality of approximation depends on the number of gridpoints $N=N_1N_2$.
(iii) Let $u^{\rm worst}_N\in \arg\min_{u_N\in {\cal U}_N} \sum_{k=1}^K p_k u_N({\bm f}(\bm{z}^N,\bm{\xi}^k))$, where $\bm{z}^N$ denotes the optimal solution of (\ref{eq:MAUT-robust-N-dis}). Then
$\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u^*,u_N^{\rm worst})=\sup_{\bm{t}\in T} |u^*(\bm{t})-u_N^{\rm worst}(\bm{t})|$. Let $u_N^*$ denote the PLA of $u^*$ with identical values at the gridpoints. Then $$
\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u^*,u^*_N)=\sup_{\bm{t}\in T}|u^*(\bm{t})-u^*_N(\bm{t})|=\sup_{\substack{i=1,\cdots,N_1-1, \\ j=1,\cdots,N_2-1}} \sup_{\bm{t} \in T_{i,j}}|u^*(\bm{t})-u_N^*(\bm{t})|\leq L (\beta_{N_1}+\beta_{N_2}), $$ and
$\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u^*_N,u_N^{\rm worst})=\sup_{\bm{t}\in T} |u_N^*(\bm{t})-u_N^{\rm worst}(\bm{t})|=\max_{i=1,\cdots,N_1,j=1,\cdots,N_2}|u_N^*(\bm{t}_{i,j})-u_N^{\rm worst}(\bm{t}_{i,j})|$, where $\bm{t}_{i,j}:=(x_i,y_j)$. In Section~\ref{sec:numerical results}, we will examine how $u_{N}^{\rm worst}$ converges to $u^*$ as the number of queries increases.
(iv) The error bounds established under $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}$ and $\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}$ require the conservative property of the utility function. Specifically, the bound on the Hausdorff distance between $\mathcal{U}$ and $\mathcal{U}_N$ is related to the two terms $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_N)$ and $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_N,u_N^0)$ (see (\ref{eq-uU})), where $u\in\mathcal{U}$, $u_N$ is the PLA of $u$, and $u_N^0$ is defined in (\ref{eq-sla-0}). In the case where $\mathscr{G}=\mathscr{G}_K$, the bound on $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_N)$ relies on the conservative property as shown in (\ref{eq-u-u-N}), whereas in the case $\mathscr{G}=\mathscr{G}_I$, the bound on $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_N,u_N^0)$ relies on the conservative property in Example~\ref{exm-g}~(d). This makes it difficult to extend the theoretical results to the multivariate utility case. We leave this for future research.
\end{remark}
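The grid-size calculation in Remark~\ref{rem:distance}(i) can be sketched as follows, assuming even grids with a common $N=N_1=N_2$ and $\beta_{N_i}$ taken as the attribute range divided by $N_i$ as in the remark; `psi_norm` stands for $(\sum_{l=1}^M|\int_T d\psi_l(\bm{t})|^2)^{1/2}$, and the function name is ours:

```python
import math

def min_grid_size(L, alpha_hat, psi_norm, x_range, y_range, eps):
    """Smallest common N = N1 = N2 for evenly spread gridpoints such that the
    bound in Remark (i) guarantees |theta_N - theta| <= eps (a sketch)."""
    factor = 3.0 + psi_norm / alpha_hat
    return math.ceil(L * (x_range + y_range) * factor / eps)
```

For instance, with $L=1$, $\hat{\alpha}=1$, `psi_norm` $=2$ and unit attribute ranges, halving $\epsilon$ doubles the required $N$.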
\begin{example} Consider the ambiguity set
defined as in (\ref{eq:ambi-U-ex}). Since ${\bm A}$ is preferred, there exists some $u^0\in\mathscr{U}$ and a small positive number $\epsilon$ such that $\int_T u^0(x,y) d(F_{\bm A}(x,y)-F_{\bm B}(x,y))<-\epsilon$. Let $\alpha=-\epsilon-\int_T u^0(x,y) d(F_{\bm A}(x,y)-F_{\bm B}(x,y))>0$. Then Slater’s condition (\ref{eq-sla}) is satisfied. Let $\hat{\alpha}\in(0,\alpha)$ be such that $\int_T u^0_N(x,y) d(F_{\bm A}(x,y)-F_{\bm B}(x,y))+\hat{\alpha}
\in {\rm I\!R}_-$.
Observe that $\psi(x,y) :=F_{\bm A}(x,y)-F_{\bm B}(x,y)$
satisfies
$|\psi(x,y)|\leq2$ for all $(x,y)\in T$. By Theorem~\ref{thm-optval}, $
|{\vartheta}-{\vartheta}_N| \leq L\left( \beta_{N_1}+\beta_{N_2} \right) \left( 3+\frac{2}{\hat{\alpha}} \right). $ Moreover, if ${\bm A}$ and ${\bm B}$ follow discrete distributions, then $\psi$ is a step function. In that case, we may select the gridpoints in ${\cal X}\times {\cal Y}$ (in the PLA) from the gridpoints
of $\psi$ and subsequently it follows by Corollary~\ref{cor-err-optval-discrete} that $|{\vartheta}-{\vartheta}_N| \leq \mathbb{H}_{\mathscr{G}_I} (\mathcal{U},\mathcal{U}_{N})+ L(\beta_{N_1}+\beta_{N_2})\leq 3L(\beta_{N_1}+\beta_{N_2})$. \end{example}
\section{BUPRO models for constrained optimization problems} \label{sec:constrained}
In this section, we extend the UPRO model to the expected utility maximization problem with expected utility constraints. Specifically, we consider the following problem: \begin{equation} \label{eq:SPR-x} \begin{split}
{{\vartheta}}^*:=\max_{\bm{z}\in Z} \;\; & {\mathbb{E}}_P[ u(\bm{f}(\bm{z},\bm{\xi}))] \\
{\rm s.t.} \;\;\, &
{\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] \geq c, \end{split} \end{equation}
where $\bm{f}$ and $\bm{g}$ are continuous functions and $c$ is a constant. We may interpret $\bm{f}$ as the total return of a portfolio and $\bm{g}$ as an important part of it, or vice versa. Suppose that the true utility function is unknown but it is possible to construct an ambiguity set ${\cal U}$ using partially available information as we discussed earlier. Then we may consider the following maximin preference robust optimization problem \begin{equation} \label{eq:PRO-x} \begin{split}
\hat{{\vartheta}}:= \max_{\bm{z}\in Z} \;\;
\min_{u\in {\cal U}} \;\; & {\mathbb{E}}_P[ u(\bm{f}(\bm{z},\bm{\xi}))] \\
{\rm s.t.} \;\; &
{\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] \geq c. \end{split} \end{equation}
In this formulation, we consider the same worst-case utility function in the objective and constraint. There is an alternative way to develop a robust formulation of (\ref{eq:SPR-x}): \begin{equation} \label{eq:PRO-x-1} \begin{split}
\tilde{{\vartheta}}:= \max_{\bm{z}\in Z} \;\;
\min_{u\in {\cal U}} \;\; & {\mathbb{E}}_P[ u(\bm{f}(\bm{z},\bm{\xi}))] \\
{\rm s.t.} \;\;\,
\min_{u\in {\cal U}} \;\; &
{\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] \geq c. \end{split} \end{equation}
Formulation (\ref{eq:PRO-x-1}) allows the worst-case utility functions in the objective and in the constraint to differ. It is easy to observe that $\tilde{{\vartheta}} \leq \hat{{\vartheta}}$,
which means (\ref{eq:PRO-x-1}) is more conservative than (\ref{eq:PRO-x}). Moreover, if the true utility $u^*$ lies within ${\cal U}$, then $\tilde{{\vartheta}}\leq {\vartheta}^*$.
However, under some conditions, the two formulations are equivalent. The next proposition states this.
\begin{proposition} \label{Prop-equivalence} Let $\hat{\bm{z}}$ denote the optimal solution of problem (\ref{eq:PRO-x}) and \begin{equation}
\tilde{Z}:=
\left\{\bm{z}\in Z \,:\, \inf_{u\in {\cal U}} \; {\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] -c
\geq 0 \right\}. \label{eq:x*-PRO-U} \end{equation} If $\hat{\bm{z}}\in \tilde{Z}$, then $\hat{{\vartheta}}=\tilde{{\vartheta}}$.
\end{proposition}
\noindent \textbf{Proof.} Let $\hat{v}(\bm{z})$ denote the optimal value of the inner minimization problem of (\ref{eq:PRO-x}) and $$ \tilde{v}(\bm{z}) := \inf_{u\in {\cal U}} \; {\mathbb{E}}_P[ u(\bm{f}(\bm{z},\bm{\xi}))]. $$ Let $\hat{{\vartheta}}$ and $\tilde{{\vartheta}}$ be defined as in
(\ref{eq:PRO-x}) and (\ref{eq:PRO-x-1}). Define $$ {\cal U}(\bm{z}) := \{u\in {\cal U}: {\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] \geq c \}. $$ Since ${\cal U}(\bm{z})\subset {\cal U}$, then $\hat{v}(\bm{z})\geq \tilde{v}(\bm{z})$ for all $\bm{z}\in Z$. Moreover, since
$\tilde{Z}\subset Z$, then $$ \hat{{\vartheta}} = \max_{\bm{z}\in Z} \hat{v}(\bm{z}) \geq \max_{\bm{z}\in \tilde{Z}} \tilde{v}(\bm{z}) = \tilde{{\vartheta}}. $$ Conversely, for any $\bm{z}\in \tilde{Z}$, $ {\cal U}(\bm{z})={\cal U}. $
Thus, the assumption that $\hat{\bm{z}}\in \tilde{Z}$ implies that
$ {\cal U}(\hat{\bm{z}})={\cal U} $ and subsequently $\hat{v}(\hat{\bm{z}}) = \tilde{v}(\hat{\bm{z}})$.
This shows $ \hat{{\vartheta}} =\hat{v}(\hat{\bm{z}}) = \tilde{v}(\hat{\bm{z}}) \leq \tilde{{\vartheta}} $ because $\hat{\bm{z}}\in \tilde{Z}$.
$\Box$
From a practical point of view, Proposition~\ref{Prop-equivalence} is of limited use
in that we do not know the optimal solution $\hat{\bm{z}}$ and hence are unable to verify the condition $\hat{\bm{z}}\in \tilde{Z}$. Consequently,
it might be sensible to adopt (\ref{eq:PRO-x}) since (\ref{eq:PRO-x-1}) might be too conservative.
Using the definition of ${\cal U}(\bm{z})$, we
can write (\ref{eq:PRO-x}) succinctly as \begin{equation} \label{eq:PRO-x-DD}
\inmat{(BUPRO-D)\quad} \max_{\bm{z}\in Z} \; \min_{u\in {\cal U}(\bm{z})} \; {\mathbb{E}}_P[ u(\bm{f}(\bm{z},\bm{\xi}))]. \end{equation}
Problem (\ref{eq:PRO-x-DD}) looks as if the ambiguity set ${\cal U}(\bm{z})$ is decision-dependent.
We propose to use the PLA
approach
to solve problem (\ref{eq:PRO-x-DD}). In this case, \[
\mathcal{U}_N(\bm{z}):=\{u_N\in\mathcal{U}_N \,|\; {\mathbb{E}}_P[u_N(\boldsymbol{g}(\bm{z},\bm{\xi}))] \geq c\}, \] where $\mathcal{U}_N$ is defined as in (\ref{eq:U_N-PLA}). The approximate BUPRO
can be subsequently written as \begin{equation} \label{eq:SPR-x-approx} \inmat{(BUPRO-DN)} \quad \max_{\bm{z}\in Z}\;\min_{u\in {\cal U}_N(\bm{z})}\; {\mathbb{E}}_P[u(\bm{f}(\bm{z},\bm{\xi}))]. \end{equation}
The inner minimization problem based on EPLA
can be reformulated as an LP: \begin{align} \label{eq:PRO-x-inner}
\displaystyle{ \min_{{\bm u} }} \; & \sum_{k=1}^K p_k \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}} (\bm{f}^k)
\left[ u^{1l}_{i,j}(f_1^k,f_2^k) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]}
\left( \frac{f_2^k-y_j}{f_1^k-x_i} \right) \right. \nonumber \\
& \left. + u^{1u}_{i,j}(f_1^k,f_2^k) \mathds{1}_{\left( \frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty \right)} \left(\frac{f_2^k-y_j}{f_1^k-x_i}\right) \right] \nonumber \\
{\rm s.t.} \; &
\sum_{k=1}^K p_k \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \mathds{1}_{T_{i,j}} (\bm{g}^k)
\left[ u^{1l}_{i,j}(g_1^k,g_2^k) \mathds{1}_{\left[0,\frac{y_{j+1}-y_j}{x_{i+1}-x_i}\right]}
\left( \frac{g_2^k-y_j}{g_1^k-x_i} \right) \right. \nonumber \\
& \left. + u^{1u}_{i,j}(g_1^k,g_2^k) \mathds{1}_{\left( \frac{y_{j+1}-y_j}{x_{i+1}-x_i},+\infty \right)} \left(\frac{g_2^k-y_j}{g_1^k-x_i}\right) \right] \geq c, \\
& \inmat{constraints} \; (\ref{eq-traform-paircom})-(\ref{eq-traform-norm1}), \nonumber \end{align} where ${\bm u}={\rm vec}\left((u_{i,j})_{1\leq i\leq N_1}^{1\leq j\leq N_2}\right)\in {\rm I\!R}^{N_1N_2}$,
$\bm{g}^k:=\bm{g}(\bm{z},\bm{\xi}^k)=(g_1^k,g_2^k)^T$ with $g_1^k:=g_1(\bm{z},\bm{\xi}^k)$, $g_2^k:=g_2(\bm{z},\bm{\xi}^k)$. Since (\ref{eq:PRO-x-inner}) is an LP
for fixed $\bm{z}$, we can use a Dfree method to solve (\ref{eq:SPR-x-approx}). Similar formulations can be derived based on the IPLA approach.
\section{Numerical results} \label{sec:numerical results}
We have carried out numerical tests on the performance of the proposed models and computational schemes discussed in the previous sections
by applying them to a portfolio optimization problem. In this section, we report the test results.
\subsection{Setup} \label{eq:setup-1} As an example of a real-life portfolio selection problem with uncertain project outcomes, we consider an application of the UPRO models to a healthcare resource allocation problem studied by Airoldi~et~al.~\cite{airoldi2011healthcare}. In this application, public health officials (PHO) decide on a portfolio of projects that seek to improve the quality of life. Specifically, the health benefits of $n=8$ projects (access to dental, workforce development, primary prevention, obesity training, CAMHS school, early detection and diagnostics, palliative \& EOL, active treatment)
are evaluated
through two attributes: the commissioning areas of children and cancer.
Moreover, the outcomes of the projects are uncertain and
represented by discretely distributed random
vector $\bm{\xi}^k=(\xi^k_1,\ldots,\xi^k_8)^T$ supported by $\Xi\subset {\rm I\!R}^8$ with equal probabilities $p_k:=1/K$ for $k=1,\ldots, K$. Let $\bm{z}=(z_1,\ldots,z_8)^T$ be the proportions of a fixed fund.
For the convenience of calculation, we generate samples of $\bm{\xi}^k$
from the uniform distribution over $[0,1]^8$.
We consider a situation where the PHO's utility of the bi-attribute outcomes is ambiguous and the optimal allocation is based on the worst-case utility in the ambiguity set $\mathcal{U}$:
\begin{equation*}
\max_{\bm{z}\in Z} \min_{u\in\mathcal{U}} \; \sum_{k=1}^K p_ku(\bm{f}(\bm{z},\bm{\xi}^k)), \end{equation*} where $f_1(\bm{z},\bm{\xi}^k):=\sum_{i=1}^{5} z_i \xi_i^k \in [0,1]$, $f_2(\bm{z},\bm{\xi}^k):=\sum_{i=6}^{8} z_i \xi_i^k \in [0,1]$ and $Z:=\{\bm{z}\in{\rm I\!R}^8_+ : \sum_{i=1}^8 z_i=1\}$. To examine the performance of BUPRO-N, we carry out the tests with a specified true utility function and investigate how the optimal value and the worst-case utility function converge as information about the PHO's utility preference increases. We
consider the true utility $ u(x,y)=e^x-e^{-y}-e^{-x-2y} $ defined over $[0,1]\times [0,1]$ and normalize it by setting $u^*(x,y) := (u(x,y)-u(0,0))/(u(1,1)-u(0,0))$.
This function
satisfies the conservative
property
(\ref{eq:conservative}), and is convex
w.r.t.~$x$ and concave
w.r.t.~$y$. Although the PHO is unaware that the preference can be characterized by this function, we assume that the decisions of the PHO never contradict the preferences implied by such a function unless specified otherwise (we will remove this assumption in Section~\ref{subsec:preference-incon}); see a similar assumption in \cite{AmD15}.
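For reproducibility, the normalized true utility $u^*$ can be coded directly; the following is a minimal Python sketch (function names are ours):

```python
import math

def u_raw(x, y):
    # the true (unnormalized) utility used to emulate the PHO's answers
    return math.exp(x) - math.exp(-y) - math.exp(-x - 2.0 * y)

def u_star(x, y):
    # normalization so that u*(0,0) = 0 and u*(1,1) = 1
    lo, hi = u_raw(0.0, 0.0), u_raw(1.0, 1.0)
    return (u_raw(x, y) - lo) / (hi - lo)
```

This reproduces the values $u^*(0,0.3706)\approx 0.252$, $u^*(0,1)\approx 0.454$ and $u^*(1,0)\approx 0.712$ used in the lottery example of Section~\ref{sec:PC-design}.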
We may refine $\mathscr{U}$ to the set of normalized
non-decreasing utility functions mapping from $[0,1]^2$ to $[0,1]$, and let $\mathscr{U}_N$ denote the corresponding set of PLA functions.
All of the tests are carried out in MATLAB R2022a installed on a PC (16GB, CPU 2.3 GHz) with an Intel Core i7 processor. We use GUROBI and YALMIP \cite{lofberg2004yalmip} to solve the inner minimization problem (LP or MILP) and single MILP, and SURROGATEOPT to solve the outer maximization problem (unconstrained problem (\ref{eq:MAUT-robust-N-dis}) and constrained problem (\ref{eq:SPR-x-approx})).
\subsection{Design of the pairwise comparison lotteries} \label{sec:PC-design}
As we discussed earlier, the ambiguity set of utility functions $\mathcal{U}_N$ is characterized by available information about the DM's preferences.
We ask the PHO
questions, each eliciting a preference between
a risky lottery with two outcomes and a
lottery with a certain outcome (we call the latter a ``certain lottery'' following the terminology of \cite{AmD15}),
denoted respectively by \begin{equation} \label{eq:lottery}
\bm{Z}_1 = \left\{ \begin{array}{ll}
(\underline{x},\underline{y}) & \inmat{\;w.p.\;} 1-p, \\
(\bar{x},\bar{y}) & \inmat{\;w.p.\;} p, \end{array} \right. \inmat{ and } \bm{Z}_2=(x,y) \inmat{\;w.p.\;} 1, \end{equation}
where $\underline{x},\bar{x}, \underline{y}$ and $\bar{y}$ are fixed and
$(x,y)\in [\underline{x},\bar{x}]\times[\underline{y},\bar{y}]$ is randomly generated.
Since we assume that $u(\underline{x},\underline{y})=0$ and $u(\bar{x},\bar{y})=1$, the only parameters to be identified are $x$, $y$ and $p$, so that the question is properly posed.
Observe that $$ {\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_1(\omega))]=(1-p) u(\underline{x},\underline{y})+ p u(\bar{x},\bar{y})=p \inmat{\quad and\quad} {\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_2(\omega))]=u(x,y). $$
Thus the question comes down to checking
whether the inequality $u(x,y) \geq p$ holds.
Next, we discuss how to generate the $M$ lotteries, or more specifically how to
set the values of $x$, $y$ and $p$. We randomly generate $M_1$ points for the first attribute including $\underline{x}$ and $\bar{x}$, and $M_2$ points for the second attribute including $\underline{y}$ and $\bar{y}$. Thus
the number of
the certain
lotteries is at most $M=M_1M_2-2$. Let $$ S:=\{(x_{i_l},y_{j_l}), i_l\in\{1,\ldots,M_1\}, j_l\in\{1,\ldots,M_2\}, l=1,\ldots,M \} $$ be the set of all
certain lotteries except the points $(\underline{x},\underline{y})$, $(\bar{x},\bar{y})$, and let $\mathcal{U}_{N}^{l-1}$ be the set of all piecewise linear utility functions which are consistent with the answers to the
previously generated $l-1$ questions. Assume that the $l$th lottery with the certain outcome is $\bm{Z}_2^l=(x_{i_l},y_{j_l})$. Define \begin{equation} \label{eq:I1-I2}
I_1^l:=\min_{u\in\mathscr{U}_N\cap\mathcal{U}_{N}^{l-1}} u(\bm{Z}_2^l)
\inmat{\quad and\quad}
I_2^l:=\max_{u\in\mathscr{U}_N\cap\mathcal{U}_{N}^{l-1}} u(\bm{Z}_2^l). \end{equation} Since $u(x_{i_l},y_{j_l})\in[0,1]$, we have $I_1^l,I_2^l\in[0,1]$. We set $p^l:=\frac{I_1^l+I_2^l}{2}$, and use the true utility function $u^*$ to check whether the inequality \begin{equation} \label{eq:lottery-l}
u^*(x_{i_l},y_{j_l}) ={\mathbb{E}}_{\mathbb{P}}[u^*(\bm{Z}_2^l(\omega))] \geq {\mathbb{E}}_{\mathbb{P}}[u^*(\bm{Z}_1^l(\omega))] = p^l \end{equation} holds or not. If it holds, then $\bm{Z}_2^l$ is preferred
to $\bm{Z}_1^l$.
The following algorithm describes the procedures
for constructing $\mathcal{U}_{N}=\mathcal{U}_N^M$.
\begin{breakablealgorithm} \caption{Generation of pairwise comparison lotteries and construction of ${\cal U}_N$}
{ \noindent \textbf{Initialization.} Set $m_1:=1, m_2:=1, l:=1$,
$\mathcal{U}_N^0:=\mathscr{U}_N$ and $S:=\emptyset$. }
\begin{algorithmic}[1] \begin{small} \STATE
Choose two positive integers $M_1$ and $M_2$ as the numbers of the gridpoints of the two attributes. Generate $M_1-2$ points
within $[\underline{x},\bar{x}]$ and $M_2-2$ points
within $[\underline{y},\bar{y}]$ randomly using the uniform distribution; sort them in increasing order of their values and label them $x_i, i=1,\ldots,M_1-2$ and $y_j, j=1,\ldots,M_2-2$. Let ${\cal X}:= \{\underline{x},x_1,\ldots,x_{M_1-2},\bar{x}\}$ and ${\cal Y}:= \{\underline{y},y_1,\ldots,y_{M_2-2},\bar{y}\}$, and let ${\cal X}\times{\cal Y}:=\{(x_i,y_j),x_i\in{\cal X}, y_j\in{\cal Y}\}$ be the set of the gridpoints.
\STATE
Let the $l$th certain lottery be $\bm{Z}_2^l=(x_{i_l},y_{j_l})$,
solve the
problem (\ref{eq:I1-I2}) to obtain $I_1^l$ and $I_2^l$.
Let $I^l=[I_1^l,I_2^l]$, $p^l=\frac{I_1^l+I_2^l}{2}$ and $S=S\cup\{\bm{Z}_1^l,\bm{Z}_2^l\}$.
\STATE
If
$p^l\leq u^*(x_{i_l},y_{j_l})$, then
$$ {\cal U}_N^{l}:={\cal U}_N^{l-1} \bigcap \left\{u_N\in \mathscr{U}_N: p^l\leq u_N(x_{i_l},y_{j_l}) \right\}. $$ Otherwise,
$$ {\cal U}_N^{l} :={\cal U}_N^{l-1} \bigcap \left\{u_N\in \mathscr{U}_N: p^l\geq u_N(x_{i_l},y_{j_l})\right\}. $$ Set $l:=l+1$; if $l\leq M$, go
to Step 2, otherwise stop. \end{small} \end{algorithmic} \end{breakablealgorithm}
Steps 1 and 2 generate a lottery for pairwise comparison. Note that the minimization problem in (\ref{eq:I1-I2}) can be formulated as \begin{subequations} \label{eq-lottery-I_1} \begin{align}
I_1^l = \min_{{\bm u}}\;
\; & u_{i_l,j_l}
\nonumber \\
{\rm s.t.} \;\; &
h_{l'} (p_{l'}- u_{i_{l'},j_{l'}})\leq 0, l'=0,\ldots,l-1, \label{eq-lottery} \\
& \frac{u_{i+1,j}-u_{i,j}}{x_{i+1}-x_i} \geq \frac{u_{i,j}-u_{i-1,j}}{x_i-x_{i-1}}, i=2,\ldots,M_1-1, j=1,\ldots,M_2, \label{eq-single-concave} \\
& \frac{u_{i,j+1}-u_{i,j}}{y_{j+1}-y_j} \leq \frac{u_{i,j}-u_{i,j-1}}{y_{j}-y_{j-1}}, i=1,\ldots,M_1, j=2,\ldots,M_2-1, \label{eq-single-convex} \\
& \inmat{constraints\;} (\ref{eq-traform-mon1})- (\ref{eq-traform-norm1}), \notag \end{align} \end{subequations} where ${\bm u}:=(u_{1,1},\cdots,u_{N_1,1},\cdots,u_{1,N_2},\cdots,u_{N_1,N_2})^T$,
(\ref{eq-lottery}) requires the answer to the $l$th question to be consistent with the answers to the previous $l-1$ questions (if $\bm{Z}_2^{l'}$ is preferred, then (\ref{eq:lottery-l}) holds for $l=l'$ and we set $h_{l'}=1$, otherwise we set $h_{l'}=-1$), and (\ref{eq-single-concave}) and (\ref{eq-single-convex}) comply with the assumption that the single-attribute utility function $u(\cdot,\hat{y})$ is convex and $u(\hat{x},\cdot)$ is concave for any fixed $\hat{x}\in X$ and $\hat{y}\in Y$.
Step 3 asks the DM to choose between the risky lottery and the certain lottery.
Here the true utility function $u^*$ (defined in Section~\ref{eq:setup-1}) is used to ``act as the DM''. After the DM makes a choice, an expected utility inequality is created and added to the ambiguity set $\mathcal{U}_N$. Since $p^l$ is chosen as the midpoint of $I^l$, we deduce that the true utility function value at $(x_{i_l},y_{j_l})$ lies within the right or left half of the interval $I^l$ and the pairwise comparison effectively reduces the ambiguity set by ``half'' in the sense that those $u_N$ whose values (at point $(x_{i_l},y_{j_l})$) lie within the other half of the interval $I^l$ are excluded from the ambiguity set.
\begin{example} We use a simple example to explain the above steps where the true utility function $u^*$ (defined in Section~\ref{eq:setup-1})
is defined over $[0,1]^2$ and the piecewise linear utility functions have $N=M_1M_2=6$ gridpoints including $(0,0)$ and $(1,1)$. We randomly generate one point in $[0,1]$ for the second attribute as the non-end gridpoint. Then ${\cal X}=\{0,1\}$ and ${\cal Y}=\{0,0.3706,1\}$. The number of questions is $M=M_1 M_2-2=4$.
\noindent \textbf{Lottery 1} $(l=1)$. Set $i_1:=1, j_1:=2$ and $(x_{i_1},y_{j_1})=(0,0.3706)$. By solving (\ref{eq-lottery-I_1}) and the corresponding maximization problem, we obtain that $[I_1^1,I_2^1]=[0,1]$ and set $p^1=0.5$. By checking $u^*(0,0.3706)=0.252\leq p^1$, we set $h_1:=-1$.
\noindent \textbf{Lottery 2} $(l=2)$. Set $i_2:=1, j_2:=3$ and $(x_{i_2},y_{j_2})=(0,1)$. Solve (\ref{eq-lottery-I_1}) and the corresponding maximization problem to obtain $[I_1^2,I_2^2]=[0,0.5]$,
so $p^2=0.25$. By checking whether $u^*(0,1)=0.454\geq p^2$ or not, we set $h_2:=1$.
\noindent \textbf{Lottery 3} $(l=3)$. Set $i_3:=2, j_3:=1$ and $(x_{i_3},y_{j_3})=(1,0)$. We obtain $[I_1^3,I_2^3]=[0.5,1]$, and $p^3=0.75$. By checking $u^*(1,0)=0.712\leq p^3$, we set $h_3:=-1$.
\noindent \textbf{Lottery 4} $(l=4)$. Set $i_4:=2, j_4:=2$ and $(x_{i_4},y_{j_4})=(1,0.3706)$ to obtain $[I_1^4,I_2^4]=[0.75,1]$,
and $p^4=0.875$. By checking $u^*(1,0.3706)=0.864\leq p^4$, we set $h_4:=-1$. \end{example}
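The interval-halving mechanism above can be sketched as a bisection loop against a simulated DM. This is a minimal illustration with a made-up true utility function, not the paper's code; `u_true` plays the role of the oracle answering the pairwise comparisons.

```python
def elicit_interval(u_true, point, n_queries):
    """Bisect the plausible range of u_true(point): at each step the DM
    compares a certain lottery at `point` with a risky lottery whose
    expected utility is the midpoint p of the current interval."""
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        p = 0.5 * (lo + hi)
        if p <= u_true(*point):   # DM prefers the certain lottery
            lo = p                # true value lies in the upper half
        else:                     # DM prefers the risky lottery
            hi = p                # true value lies in the lower half
    return lo, hi

# Hypothetical true utility on [0,1]^2, for illustration only.
u_star = lambda x, y: 0.6 * x + 0.4 * y
lo, hi = elicit_interval(u_star, (0.5, 0.25), 10)
```

Each question halves the interval, so after $n$ questions the value $u^*(x,y)$ is located to within $2^{-n}$.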
\subsection{Convergence results}
In this subsection, we
investigate the convergence of the worst-case approximate utility functions of the unconstrained problem (\ref{eq:MAUT-robust-N}) and the constrained optimization problem (\ref{eq:SPR-x-approx}) under EPLA and IPLA schemes as $N$ increases.
\textbf{(i) EPLA and IPLA for unconstrained problem (\ref{eq:MAUT-robust-N}).}
\underline{EPLA approach.}
We begin by examining the performance of the EPLA approach
with different
types of partitions discussed in Section~\ref{sec:numer-methods}.
We set the number of scenarios of $\bm{\xi}$ to $K=1000$. The convergence results are displayed in Figures~\ref{fig-utility-main}-\ref{fig-utility-mixed} and Tables~\ref{tab-result-N}-\ref{tab:distance}. Figures~\ref{fig-utility-main}-\ref{fig-utility-mixed} depict the true utility function and the worst-case
utility functions for Type-1 PLA (Figure~\ref{fig-utility-main}), Type-2 PLA (Figure~\ref{fig-utility-counter}), and mixed-type PLA (see Remark~\ref{rem:BUPRO-DF}~(ii) for the definition) in Figure~\ref{fig-utility-mixed}. We can see that in all three cases the worst-case utility functions move closer to the true utility function as more questions are asked, in accordance with our anticipation in Remark~\ref{rem:distance}~(iii) and Table~\ref{tab:distance}.
Table~\ref{tab-result-N} displays the optimal solutions, the optimal values, the errors of the optimal values (defined as the difference between the true and the approximate optimal values), and the computation time (CPU time).
We find that the optimal values increase as the number of queries increases. This is because the ambiguity set $\mathcal{U}_N$ becomes smaller as the number of queries increases. Moreover, the errors decrease as the number of questions increases. The optimal values of the Type-1 PLA and mixed-type PLA are smaller than those of the Type-2 PLA because the conservative condition makes the utility values of the Type-2 PLA larger than in the other cases; see Figure~\ref{fig-division}.
\begin{figure}
\caption{{\bf Type-1 EPLA}: the convergence of the worst-case utility function of EPLA model (\ref{eq:PRO-N-reformulate}) to the
true utility function (in blue) as the number of questions
increases
from $5\times 5$ to $15\times 15$.}
\label{fig-utility-main}
\end{figure}
\begin{figure}
\caption{{\bf Type-2 EPLA}: the convergence of the worst-case utility function of Type-2 EPLA model
to the true utility function (in purple) as the number of questions
increases from $5\times 5$ to $15\times 15$.}
\label{subfig-utility-b-counter}
\label{subfig-utility-c-counter}
\label{subfig-utility-d-counter}
\label{fig-utility-counter}
\end{figure}
\begin{figure}
\caption{{\bf Mixed-type EPLA}:
the convergence of the worst-case utility functions of mixed-type EPLA model to the true utility function (in green).
Cells without a diagonal line indicate that the Type-1 and Type-2 PLAs coincide there because ${\bm f}(\bm{z},\bm{\xi}^k)$ does not fall into the cell for any $k=1,\cdots,K$.
}
\label{subfig-utility-b-mixed}
\label{subfig-utility-c-mixed}
\label{subfig-utility-d-mixed}
\label{fig-utility-mixed}
\end{figure}
\begin{table}[!ht]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{Computational results of BUPRO-N problem ($K=1000$, ${\vartheta}^*=0.3392$)}
\renewcommand\arraystretch{1.1}
\begin{threeparttable}
\resizebox{0.8\linewidth}{!}
{
\begin{tabular}{c|ccccc}
\hline
\textbf{EPLA} &
Lotteries & Optimal solutions & Optimal values & Error & CPU time (s) \\
\hline
\multirow{3}{*}{\makecell{Type-1 \\ }} & $5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0270 & 113.6 \\
& $10\times10$ & $[0,0,0,0.955,0,0,0.016,0.029]$ & 0.3321 & 0.0071 & 151.6 \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0043 & 223.3 \\
\hline
\multirow{3}{*}{\makecell{Type-2 \\ }} &
$5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0270 & 115.2 \\
& $10\times10$ & $[0,0,0,0.961,0,0,0.008,0.031]$ & 0.3324 & 0.0068 & 164.7 \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0043 & 220.5 \\
\hline
\multirow{3}{*}{\makecell{Mixed-type \\ }} &
$5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0270 & 886.2 \\
& $10\times10$ & $[0,0,0,0.955,0,0.003,0.009,0.033]$ & 0.3321 & 0.0071 & - \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0043 & - \\
\hline
\end{tabular}
}
\begin{tablenotes}
\raggedleft
\item `-' implies runtime $>$ 3600s.
\end{tablenotes}
\end{threeparttable}
\label{tab-result-N} \end{table}
\begin{table}[!htbp]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{{\bf EPLA:} upper bound for $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u^*,{u}_N^*)$ and distance $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}({u}_N^{*},u_{\rm worst}^N)$}
\renewcommand\arraystretch{1.2}
\begin{threeparttable}
\resizebox{0.9\linewidth}{!}
{
\begin{tabular}{ccccccc}
\hline
Lotteries &
$L(\beta_{N_1}+\beta_{N_2})$
& $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}({u}_N^*,u_{\rm worst}^N)$ (Type-1) & $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}({u}_N^*,u_{\rm worst}^N)$ (Type-2) & $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}({u}_N^*,u_{\rm worst}^N)$ (Mixed-type) \\
\hline
$5\times 5$ & $1.8541$ & $0.0763$ & $0.0763$ & $0.0763$\\
$10\times 10$ & $1.0611$ & $ 0.0233$ & $0.0306$ & $0.0226$\\
$15\times 15$ & $0.7682$ & $0.0141$ & $0.0230$ & $0.0141$ \\
\hline
\end{tabular}
}
\end{threeparttable}
\label{tab:distance} \end{table}
\underline{IPLA in bi-attribute case.}
Set $K=20$ (we take the first $20$ of the $1000$ samples because the sizes of problems (\ref{eq:PRO_MILP_mina2}) and (\ref{eq:PRO_MILP_single}) depend on $K$, whereas the size of problem (\ref{eq:PRO-N-reformulate}) under EPLA is independent of $K$); the true utility $u^*$ is the same as in the EPLA case. In this set of tests,
the convexity/concavity of the single-variate utility functions $u_N(\cdot,\hat{y})$ and $u_N(\hat{x},\cdot)$ for all $\hat{x}\in X$ and $\hat{y}\in Y$ is not imposed, to facilitate comparison of the three models (maximin EPLA, maximin IPLA and single MILP using IPLA), because problem (\ref{eq:PRO_MILP_single}) does not incorporate
these constraints (see our comments there).
In Table~\ref{tab-result-N-MILP}, we compare the three tractable reformulations of BUPRO-N, namely EPLA (\ref{eq:PRO-N-reformulate}), IPLA (\ref{eq:PRO_MILP_mina2}) and the single MILP (\ref{eq:PRO_MILP_single}) using IPLA, for both Type-1 PLA and Type-2 PLA, in terms of the optimal solution, the optimal value, the error between the optimal values of BUPRO-N and the utility maximization problem $\max_{\bm{z} \in Z} \sum_{k=1}^K p_k[u^*({\bm f}(\bm{z},\bm{\xi}^k))]$, and the CPU time. We find that the optimal values ${\vartheta}_N$ converge to the true optimal value ${\vartheta}^*$ in all cases. We also find that for both types, the EPLA model (\ref{eq:PRO-N-reformulate}), whose inner problem is an LP, is the most efficient, while the single MILP (\ref{eq:PRO_MILP_single}) attains the best approximate optimal values but takes the longest CPU time.
Note that although the three models are theoretically equivalent, the actual computational results differ slightly because of rounding errors.
Figures~\ref{fig:question-Utility_MILP}-\ref{fig:question-Utility-MILP2} display the worst-case utility functions of IPLA maximin model (\ref{eq:PRO_MILP_mina2}) for Type-1 and Type-2 respectively.
We can see that the worst-case utility function displays some ``oscillations'', although it converges to the true utility function. The phenomenon disappears when we confine $u_N(\cdot,\hat{y})$ and $u_N(\hat{x},\cdot)$
to convex and concave functions respectively.
\begin{figure}
\caption{{\bf Type-1 IPLA}: the convergence of the worst-case utility function solved by the Dfree method for IPLA model (\ref{eq:PRO_MILP_mina2}) without convex/concave constraints.
}
\label{fig:question-Utility_MILP}
\end{figure}
\begin{figure}
\caption{{\bf Type-2 IPLA}: the convergence of the worst-case utility function solved by the Dfree method for IPLA model (\ref{eq:PRO_MILP_mina2}) with (\ref{eq:mixed-integer-R2-f}) being replaced by (\ref{eq:constraint-alpha}). }
\label{fig:question-Utility-MILP2}
\end{figure}
\begin{table}[!ht]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{{\bf The bi-attribute case:} comparison of the results of the BUPRO-N problem ($K=20$, ${\vartheta}^*=0.3835$)}
\renewcommand\arraystretch{1.1}
\resizebox{0.9\linewidth}{!}
{
\begin{tabular}{c|ccccc}
\hline
&
Lotteries & Optimal solutions & Optimal values & Error & CPU time (s) \\
\hline
\multirow{3}{*}{\makecell{Type-1 \\ Maximin \\({\bf EPLA})}} & $5\times5$ & $[ 0.112, 0.037,0,0.439,0.024, 0,0.054,0.335]$ & 0.2835 & 0.1000 & 47.4 \\
& $10\times10$ & $[0, 0.599,0,0.316,0.008,0.037,0.012,0.027]$ & 0.3479 & 0.0356 & 82.8 \\
& $15\times15$ & $[0,1,0,0,0,0,0,0]$ & 0.3754 & 0.0081 & 146.2 \\
\hline
\multirow{3}{*}{\makecell{Type-1 \\ Maximin
\\ ({\bf IPLA}) } } &
$5\times5$ & $[0.0996,0.0313,0.0297,0.4525,0, 0,0.0469,0.3400]$ & 0.2824 & 0.1011 & 240.5 \\
& $10\times10$ & $[0, 0.9467,0,0, 0,0.0476,0,0.0057]$ & 0.3697 & 0.0138 & 916.0 \\
& $15\times15$ & $[0,0.9902,0,0.0038,0,0.0060,0,0]$ & 0.3748 & 0.0087 & 2442.2 \\
\hline
\multirow{3}{*}{\makecell{Type-1\\ Single MILP \\
({\bf IPLA})} } &
$5\times5$ & $[0,1,0,0,0,0,0,0]$ & 0.3232 & 0.0603 & 1103.2 \\
& $10\times10$ & $[0,0.946,0,0,0,0.043,0,0.011]$ & 0.3698 & 0.0137 & 4552.2 \\
& $15\times15$ & $[0,1,0,0,0,0,0,0]$ & 0.3754 & 0.0081 & 3421.1 \\
\hline
\hline
\multirow{3}{*}{\makecell{Type-2 \\ Maximin
\\({\bf EPLA})}
} & $5\times5$ & $[0,0.1542,0.0117,0.1210,0.3554,0.1679,0,0.1898]$ &0.3113 & 0.0722 & 43.7 \\
& $10\times10$ & $[0, 0.5301,0,0.1857,0.2030,0.0410,0.0308,0.0094]$ & 0.3475 & 0.0360 & 56.4 \\
& $15\times15$ & $[0,1,0,0,0,0,0,0]$ & 0.3754 & 0.0081 & 98.2 \\
\hline
\multirow{3}{*}{\makecell{Type-2 \\ Maximin
\\({\bf IPLA})}
} &
$5\times5$ & $[0, 0.8875,0.1125,0,0,0,0,0]$ & 0.3102 & 0.0733 & 213.6 \\
& $10\times10$ & $[0,0.975,0.025,0,0,0,0,0]$ & 0.3410 & 0.0425 & 828.2 \\
& $15\times15$ & $[0,0.7129,0.0986,0,0.1884,0,0,0]$ & 0.3474 & 0.0392 & 2126.9 \\
\hline
\multirow{3}{*}{\makecell{Type-2\\ Single MILP \\
({\bf IPLA})} } &
$5\times5$ & $[0,1,0,0,0,0,0,0]$ & 0.3232 & 0.0603 & 952.0 \\
& $10\times10$ &$[0,0.9470,0,0,0, 0.0467,0,0.0062]$ & 0.3704 & 0.0131 & 1359.1\\
& $15\times15$ & $[0,1,0,0,0,0,0,0]$ & 0.3754 & 0.0081 & 2680.9\\
\hline
\end{tabular}
}
\label{tab-result-N-MILP}
\end{table}
\underline{IPLA in tri-attribute case.} The sample is the same as in the bi-attribute case with $K=20$. The true utility function is
$u(x,y,z)=e^{x}-e^{-y}-e^{-z}-e^{-x-2y-z}$ defined over $[0,1]^3$, and we normalize it by setting $u^*(x,y,z)=(u(x,y,z)-u(0,0,0))/(u(1,1,1)-u(0,0,0))$.
We divide the eight projects into three groups in order of importance as the three attributes, that is, $f_1^k:=\sum_{i=1}^{3} w_i \xi_i^k$, $f_2^k:=\sum_{i=4}^{6} w_i \xi_i^k$, $f_3^k:=\sum_{i=7}^{8} w_i \xi_i^k$, and ${\bm w} \in Z:=\{{\bm w}\in{\rm I\!R}^8_+:\sum_{i=1}^8 w_i=1\}$.
Table~\ref{tab-result-N-MILP3} indicates that the IPLA model (\ref{eq:PRO_MILP_3m}) in tri-attribute case is effective and the optimal values ${\vartheta}_N$ of the TUPRO-N problem converge to the true optimal value ${\vartheta}^*$ as the number of lotteries increases.
\begin{table}[!ht]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{{\bf The tri-attribute case}: computational results of TUPRO-N problem ($K=20$, ${\vartheta}^*=0.3193$)}
\renewcommand\arraystretch{1.08}
\begin{threeparttable}
\resizebox{0.8\linewidth}{!}
{
\begin{tabular}{c|ccccc}
\hline
&
Lotteries & Optimal solutions & Optimal values & Error & CPU time (s) \\
\hline
\multirow{3}{*}{ {\bf IPLA}
} & $3\times3 \times 3$ & $[0,0,1,0,0,0,0,0]$ & 0.1994 & 0.1198 & 320.8 \\
& $4 \times 4 \times 4$ & $[ 0.4992,0.5008,0,0, 0,0,0,0]$ & 0.2076 & 0.1117 & 944.1 \\
& $5\times 5 \times 5$ & $[0,1,0, 0,0,0,0,0]$ & 0.2498 & 0.0694 & 2083.7 \\
& $6\times 6 \times 6$ & $[0,1,0, 0,0,0,0,0]$ & 0.2774 & 0.0418 & - \\
\hline
\end{tabular}
}
\begin{tablenotes}
\raggedleft
\item `-' implies runtime $>$ 3600s.
\end{tablenotes}
\end{threeparttable}
\label{tab-result-N-MILP3} \end{table}
\textbf{(ii) EPLA for the constrained optimization problems (\ref{eq:PRO-x}) and (\ref{eq:PRO-x-1}).} The second part of numerical tests is concerned with
problems (\ref{eq:PRO-x}) and (\ref{eq:PRO-x-1}). We set $g_1(\bm{z},\bm{\xi}^k):=\sum_{i=3}^5 z_i\xi_i^k$ and $g_2(\bm{z},\bm{\xi}^k):=\sum_{i=7}^8 z_i\xi_i^k$,
which represent the effects of a subset of the
projects on the
mental health and cancer
commissioning areas. The PHO expects these effects to reach at least level $c$. We consider two cases: (a)
$c=0.1$ and (b) $c=0.3$.
\underline{Case (a)}.
The optimal values of problem (\ref{eq:PRO-x}) and problem (\ref{eq:PRO-x-1}) coincide (see
Table~\ref{tab-result-N}) because the optimal solution of the former falls into set (\ref{eq:x*-PRO-U}),
which is consistent with our theoretical analysis in Proposition~\ref{Prop-equivalence}.
\underline{Case (b)}. We repeat the tests and obtain different observations.
Recall that the optimal values of problems (\ref{eq:SPR-x}), (\ref{eq:PRO-x}) and (\ref{eq:PRO-x-1}) are denoted by ${\vartheta}^*$, $\hat{{\vartheta}}$ and $\tilde{{\vartheta}}$, respectively.
Observation 1. For problem (\ref{eq:PRO-x-1}),
we can see from Table~\ref{tab-result-DN-3} that $\tilde{{\vartheta}}<{\vartheta}^*$ and $\tilde{{\vartheta}}$ increases as $M$ increases. This is consistent with our theoretical analysis. The increasing trend is underpinned by the fact that as $M$ increases,
$\mathcal{U}_N$ becomes smaller
and consequently both the objective function $\min_{u\in {\cal U}} {\mathbb{E}}_{P}[u({\bm f}(\bm{z},\bm{\xi}))]$ and the feasible set $\tilde{Z}$ (see (\ref{eq:x*-PRO-U})) become larger.
Observation 2. For problem (\ref{eq:PRO-x}),
we can see from Table~\ref{tab-result-DN-2} that ${\vartheta}^*<\hat{{\vartheta}}$
for the cases where $5\times 5$ and $10\times 10$ lotteries are used.
Note that by theory, $\tilde{{\vartheta}}\leq \hat{{\vartheta}}$ and $\tilde{{\vartheta}}\leq {\vartheta}^*$. Moreover,
when $\hat{\bm{z}}\in \tilde{Z}$, we are guaranteed that $\tilde{{\vartheta}}=\hat{{\vartheta}}\leq {\vartheta}^*$. The observed trend reflects the fact that $\hat{{\vartheta}}>{\vartheta}^*$ may occur when $\hat{\bm{z}}\notin \tilde{Z}$.
Moreover, ${\vartheta}^*>\hat{{\vartheta}}$
when $15\times 15$ lotteries are used since $\hat{\bm{z}}\in \tilde{Z}$.
Observation 3. Table~\ref{tab-result-DN-2} shows that the optimal value $\hat{{\vartheta}}$ decreases as the number of questions increases. This phenomenon is a bit difficult to explain: on the one hand, as the size of ${\cal U}_N$ decreases, $\hat{v}(\bm{z})$ increases; on the other hand, the size of $\hat{Z}:=\{\bm{z}:{\mathbb{E}}_P[u(\bm{g}(\bm{z},\bm{\xi}))] \geq c\}$ decreases. Since $\hat{{\vartheta}} = \max_{\bm{z}\in \hat{Z}}\hat{v}(\bm{z})$,
it seems that the reduction of the size of $\hat{Z}$ has a greater effect than the increase of $\hat{v}(\bm{z})$ in this test.
We have not tested IPLA as our focus here is on the difference between model (\ref{eq:PRO-x}) and model (\ref{eq:PRO-x-1}) rather than different performances of EPLA and IPLA.
\begin{table}[!ht]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{Computational results of
problem (\ref{eq:PRO-x-1}) ($K=1000$, $c=0.3$, ${\vartheta}^*=0.3387$)}
\renewcommand\arraystretch{1.1}
\begin{threeparttable}
\resizebox{0.8\linewidth}{!}
{
\begin{tabular}{c|ccccc}
\hline
\textbf{EPLA} &
Lotteries & Optimal solutions & $\tilde{{\vartheta}}$ & Error & CPU time (s) \\
\hline
\multirow{3}{*}{Type-1
} & $5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0265 & 237.0 \\
& $10\times10$ & $[0,0,0,0.955,0,0,0.011,0.034]$ & 0.3321 & 0.0066 & 379.9 \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0038 & 436.7 \\
\hline
\multirow{3}{*}{Type-2
} &
$5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0265 & 307.1 \\
& $10\times10$ & $[0,0,0,0.959,0,0,0,0.041]$ & 0.3323 & 0.0064 & 321.2 \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0038 & 486.0 \\
\hline
\multirow{3}{*}{ Mixed-type
} &
$5\times5$ & $[0,0,0,1,0,0,0,0]$ & 0.3122 & 0.0265 & 2476.2 \\
& $10\times10$ & - & - & - & - \\
& $15\times15$ & - & - & - & - \\
\hline
\end{tabular}
}
\begin{tablenotes}
\raggedleft
\item `-' implies runtime $>$ 3600s.
\end{tablenotes}
\end{threeparttable}
\label{tab-result-DN-3} \end{table}
\begin{table}[!ht]
\tiny
\centering
\captionsetup{font=scriptsize}
\caption{Computational results of
problem (\ref{eq:PRO-x}) ($K=1000$, $c=0.3$, ${\vartheta}^*=0.3387$)}
\renewcommand\arraystretch{1.1}
\begin{threeparttable}[b]
\resizebox{0.9\linewidth}{!}
{
\begin{tabular}{c|ccccc}
\hline
\textbf{EPLA} &
Lotteries & Optimal solutions & $\hat{{\vartheta}}$ & Error & CPU time (s)\\
\hline
\multirow{3}{*}{Type-1
} & $5\times5$ & $[0.118,0.115,0.178,0.179,0,0.130,0.112,0.169]$ & 0.3873 & -0.0486 & 216.0 \\
& $10\times10$ & $[0,0.111,0,0.883,0,0.006,0,0]$ & 0.3413 & -0.0026 & 255.5 \\
& $15\times15$ & $[0.007,0.098,0,0.875,0.020,0,0,0]$ & 0.3377 & 0.0010 & 363.4 \\
\hline
\multirow{3}{*}{Type-2
} &
$5\times5$ & $[0.176,0.129,0,0.077,0,0.178,0.084,0.357]$ & 0.4186 & -0.0799 & 242.6 \\
& $10\times10$ & $[0.027,0.082,0,0.891,0,0,0,0]$ & 0.3384 & 0.0003 & 261.7 \\
& $15\times15$ & $[0,0,0,1,0,0,0,0]$ & 0.3349 & 0.0038 & 335.2 \\
\hline
\multirow{3}{*}{Mixed-type
} &
$5\times5$ & $[0.073,0.131,0.118,0.022,0.151,0.194,0.107,0.205]$ & 0.4095 & -0.0708 & 2287.6 \\
& $10\times10$ & - & - & - & - \\
& $15\times15$ & - & - & - & - \\
\hline
\end{tabular}
}
\hspace{-0.1cm}
\begin{tablenotes}
\raggedleft
\item `-' implies runtime $>$ 3600s.
\end{tablenotes}
\end{threeparttable}
\label{tab-result-DN-2} \end{table}
\subsection{Perturbation analysis} This part of numerical tests is concerned with data perturbation including (i) elicitation data perturbation and (ii) sample average approximation (SAA) of the exogenous uncertainties. SAA is needed when the true probability distribution $P$ in (\ref{eq:MAUT-robust}) is continuously distributed. In this case, Assumption \ref{assu-discrete} and the subsequent UPRO models may be viewed as sample average approximations. We skip the theoretical analysis of the errors arising from SAA and refer interested readers to \cite{GXZ21} for the single-attribute case.
\textbf{(i) Perturbation in the data in the ambiguity set.}
In this set of experiments, we will test the performance of the PLA scheme
when the ambiguity sets $\mathcal{U}$ and $\tilde{\mathcal{U}}$ are replaced by $\mathcal{U}_N$ and $\tilde{\mathcal{U}}_N$ respectively. We begin by considering a situation where the underlying functions $\psi_l, l=1,\ldots,M$ in the ambiguity set are perturbed by the observation error of the random data in pairwise comparison questions, i.e., \begin{equation*}
\tilde{\psi}_l(x,y):= \mathds{1}_{[\hat{x}^l+\delta_1,1]\times [\hat{y}^l+\delta_2,1]}(x,y)-(1-p^l) \mathds{1}_{[0,1]\times[0,1]\setminus (1,1)}(x,y)- \mathds{1}_{(1,1)}(x,y), l=1,\ldots,\hat{M}, \end{equation*} where $\hat{M}$ is the number of perturbed functions $\psi_l$. Notice that some lotteries are on the boundary of rectangle $T$ which can only be perturbed inwards. Thus we assume that these lotteries are not perturbed for the convenience of discussion. Let $\mathcal{U}_N=\{u_N\in\mathscr{U}_N: {\langle} u_N,\psi_l {\rangle}\leq c_l, l=1,\ldots,M\} $
and \begin{equation*}
\tilde{\mathcal{U}}_N=\{u_N\in\mathscr{U}_N: {\langle} u_N,\tilde{\psi}_l {\rangle}\leq c_l, l=1,\ldots,M\}. \end{equation*} We can solve problem (\ref{eq:PRO-N-reformulate}) with $\psi_l$ being replaced by $\tilde{\psi}_l$ to obtain the optimal value and the corresponding worst-case utility function. Specifically, we assume $\delta_2=0$, which means we only consider the case where the first attribute is slightly perturbed but the second is not.
Figures~\ref{fig-ptb-ut-main}-\ref{fig-ptb-ut-counter} depict the convergence of the worst-case utility functions as the number of questions increases for fixed $\delta_1=0.1$ with Type-1 PLA and Type-2 PLA.
Figures~\ref{subfig-main-ov}-\ref{subfig-counter-ov} depict the changes of the optimal values as $\delta_1$ varies from $0.01$ to $0.1$ with different $M$.
\begin{figure}
\caption{{\bf Type-1 EPLA}: the worst-case utility function with $\delta_1=0.1$ }
\label{subfig-main-ut-ptb5}
\label{subfig-main-ut-ptb10}
\label{subfig-main-ut-ptb15}
\label{fig-ptb-ut-main}
\end{figure}
\begin{figure}
\caption{{\bf Type-2 EPLA}: the worst-case utility function with $\delta_1=0.1$ }
\label{subfig-counter-ut-ptb5}
\label{subfig-counter-ut-ptb10}
\label{subfig-counter-ut-ptb15}
\label{fig-ptb-ut-counter}
\end{figure}
\begin{figure}
\caption{{\bf EPLA:} the optimal values with $\delta_1=0.01$ and SAA problem as sample size increases}
\label{subfig-main-ov}
\label{subfig-counter-ov}
\label{subfig-SAA-main}
\label{subfig-SAA-counter}
\label{fig-ptb-ov}
\end{figure}
\textbf{(ii) SAA
of exogenous uncertainty. } In this set of experiments, we
use sample data to approximate the true probability distribution $P$ (of $\bm{\xi}$), which is also known as SAA.
We include this in the category of data perturbation in the sense that empirical distribution constructed with sample data may be regarded as a perturbation of $P$. We investigate how the variation of sample size affects the optimal values and the optimal solutions. We solve problem (\ref{eq:PRO-N-reformulate}) with different sample size $K$ and run $20$ simulations for each fixed sample size $K$. We plot a boxplot diagram to examine the convergence of the optimal values as $K$ increases in Figures~\ref{subfig-SAA-main}-\ref{subfig-SAA-counter}. We can see that as the sample size reaches 400, the optimal values of the SAA problem are close to the true optimal value in both Type-1 PLA and Type-2 PLA.
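The effect of the sample size can be illustrated with a toy one-dimensional sketch (ours, with a made-up utility $u(t)=\sqrt{t}$ and $\xi\sim U(0,1)$, so that ${\mathbb{E}}_P[u(\xi)]=2/3$): the spread of the SAA estimates across simulation runs shrinks as $K$ grows, which is the pattern the boxplots display.

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_estimate(K):
    """Sample-average approximation of E[u(xi)] with u(t) = sqrt(t)
    and xi ~ Uniform(0,1); the true value is 2/3."""
    xi = rng.random(K)
    return float(np.mean(np.sqrt(xi)))

# 20 simulation runs for each fixed sample size, as in the boxplots.
small = [saa_estimate(50) for _ in range(20)]
large = [saa_estimate(2000) for _ in range(20)]
```

The standard deviation of the estimates decays at the canonical Monte Carlo rate $O(1/\sqrt{K})$.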
\subsection{Preference inconsistency} \label{subsec:preference-incon}
In Section~\ref{sec:PC-design}, we consider pairwise comparisons to elicit the DM's preference. In practice, various errors may occur during the elicitation process, such as measurement errors and the DM's wrong responses, which may lead to preference inconsistency. In this part, we examine the effects of such inconsistencies on the worst-case utility functions and the optimal values under the following two types of inconsistency arising in the preference elicitation process.
\textbf{(i) Limitation on the total quantity of errors. } We allow the right-hand sides of the inequality constraints in the definition of ${\cal U}_N$ to be perturbed by positive constants $\gamma_l$, that is, ${\langle} u_N,\psi_l {\rangle}\leq c_l+\gamma_l, l=1,\ldots,M$. The perturbation is needed for problem (\ref{eq:PRO-N-reformulate}) to remain feasible when noise corrupts the expected utility evaluations underlying the comparisons. In other words, the perturbed inequalities accommodate potentially inconsistent responses. We restrict the total inconsistency by setting $\sum_{l=1}^M \gamma_l\leq \Gamma$, where $\Gamma$ is the total error to be tolerated. Figures~\ref{fig-incon-ut-main}-\ref{fig-incon-ut-counter} depict the worst-case utility functions and the true utility function. Figures~\ref{subfig-incon-main}-\ref{subfig-incon-counter} depict the optimal values as $\Gamma$ varies from $0$ to $1$. As $\Gamma$ increases, the optimal values decrease. From the figures, we find that our PLA approach works very well for this type of inconsistency.
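The monotone effect of the tolerance $\Gamma$ can be reproduced on a miniature worst-case LP (a hypothetical one-constraint instance, not the paper's model): relaxing a single elicitation constraint $v\geq 0.5-\gamma$ with budget $0\leq\gamma\leq\Gamma$ enlarges the ambiguity set, so the worst-case value decreases as $\Gamma$ grows.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_value(Gamma):
    """Minimize the utility value v subject to one relaxed elicitation
    constraint v >= 0.5 - gamma, with slack budget 0 <= gamma <= Gamma.
    Variables: [v, gamma]."""
    c = np.array([1.0, 0.0])          # objective: minimize v
    A_ub = np.array([[-1.0, -1.0]])   # -v - gamma <= -0.5
    b_ub = np.array([-0.5])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0), (0.0, Gamma)])
    return float(res.fun)
```

In this toy instance the worst-case value equals $\max(0, 0.5-\Gamma)$, mirroring the decrease of the optimal values as the tolerance budget grows.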
\begin{figure}
\caption{{\bf Type-1 EPLA}: worst-case utility with $\Gamma=0.5$}
\label{subfig-incon-main-ut1}
\label{subfig-incon-main-ut2}
\label{subfig-incon-main-ut3}
\label{fig-incon-ut-main}
\end{figure}
\begin{figure}
\caption{{\bf Type-2 EPLA}: worst-case utility with $\Gamma=0.5$}
\label{subfig-incon-counter-ut1}
\label{subfig-incon-counter-ut2}
\label{subfig-incon-counter-ut3}
\label{fig-incon-ut-counter}
\end{figure}
\begin{figure}
\caption{{\bf EPLA}: the optimal values with total errors and erroneous responses}
\label{subfig-incon-main}
\label{subfig-incon-counter}
\label{subfig-responserr-main}
\label{subfig-responserr-counter}
\label{fig-incon-ov}
\end{figure}
\textbf{(ii) Limitation on the number of erroneous responses. } We consider the case where the DM makes mistakes occasionally, that is, the DM is mistaken in at most $\epsilon M$ of the lottery comparisons.
We introduce a binary variable $\delta_l$, which takes value $1$ if the DM is mistaken about lottery $l$ and $0$ otherwise, and we add the constraint $\sum_{l=1}^M \delta_l\leq \epsilon M$ to limit the total number of mistakes. If the original comparison is ${\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_1^l(\omega))]\geq {\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_2^l(\omega))]$, then this condition is replaced by:
\begin{equation*}
\delta_l \hat{M}+{\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_1^l(\omega))]\geq {\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_2^l(\omega))]
\quad \inmat{and} \quad
(1-\delta_l) \hat{M}+{\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_2^l(\omega))]\geq {\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_1^l(\omega))], \end{equation*} where $\hat{M}$ is a large constant (``big $\hat{M}$''). These constraints turn the inner minimization problem into an MILP.
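The big-$\hat M$ switch above can be checked directly (our illustration; the scalars `eu1` and `eu2` stand in for the expected utilities ${\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_1^l)]$ and ${\mathbb{E}}_{\mathbb{P}}[u(\bm{Z}_2^l)]$): with $\delta_l=0$ the pair of inequalities enforces the recorded answer, and with $\delta_l=1$ it enforces the reversed one.

```python
def big_m_pair(delta, eu1, eu2, m_hat=10.0):
    """Evaluate the pair of big-M inequalities for one comparison whose
    recorded answer is EU1 >= EU2: delta = 0 keeps that answer active,
    while delta = 1 deactivates it and activates the reversed answer."""
    return (delta * m_hat + eu1 >= eu2) and ((1 - delta) * m_hat + eu2 >= eu1)
```

The constant $\hat M$ only needs to exceed the largest possible gap between the two expected utilities, which is at most $1$ here since utilities are normalized to $[0,1]$.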
Figures~\ref{fig-responserr-ut-main}-\ref{fig-responserr-ut-counter} depict the worst-case utility functions, and the gap between them and
the true utility function for Type-1 PLA and Type-2 PLA.
Figures~\ref{subfig-responserr-main}-\ref{subfig-responserr-counter} depict the optimal values for $\epsilon\in\{0.1,0.2,0.3\}$.
\begin{figure}
\caption{{\bf Type-1 EPLA}: worst-case utility with $10\times10$ lotteries}
\label{subfig-responserr-main-ut1}
\label{subfig-responserr-main-ut2}
\label{subfig-responserr-main-ut3}
\label{fig-responserr-ut-main}
\end{figure}
\begin{figure}
\caption{{\bf Type-2 EPLA:} worst-case utility with $10\times10$ lotteries}
\label{subfig-responserr-counter-ut1}
\label{subfig-responserr-counter-ut2}
\label{subfig-responserr-counter-ut3}
\label{fig-responserr-ut-counter}
\end{figure}
\section{Concluding remarks} \label{sec:Concluding remarks}
In this paper, we propose EPLA and IPLA approaches to approximate the true unknown utility function
in the multi-attribute UPRO models and demonstrate how the resulting approximate UPRO model can be solved. The EPLA approach works only for the two-attribute case as it stands, because it is complex to derive an explicit piecewise linear utility function when the utility function has three or more variables. The IPLA is not subject to this limitation on the dimension of the utility function, but our numerical test results show that the IPLA-based approach takes considerably longer
CPU time as the numbers of preference elicitation questions and of scenarios of the exogenous random vector increase. This indicates that the formulation is potentially computationally unscalable. It remains an open question how to improve the computational efficiency of the IPLA approach.
For instance, in the case where $m\geq 4$, in order to derive an IPLA of the utility function, we need to develop a proper triangulation of the hypercube $\bigtimes_{i=1}^{m}[a_{i},b_{i}]$ into simplices in $m$-dimensional space. It will be interesting to explore such triangulations and to efficiently identify the simplex in which the reward vector lies; see Hughes and Anderson \cite{hughes1996simplexity} and \cite{COTTLE198225,BROADIE198439} for further study. The design of questionnaires to elicit the DM's preference is another point for potential improvement, since our strategy is fundamentally based on the random utility split scheme in single-attribute PRO models \cite{AmD15}. It is worthwhile to explore optimal design strategies such as those in \cite{Vayanos2020} because, in practice, elicitation may be time-consuming or costly. Finally, it will be interesting to explore whether the proposed approaches work more efficiently when the true utility function has some copula structure \cite{abbas2009multiattribute,abbas2013utility}. We leave all these for future research.
\begin{appendices} \renewcommand\thefigure{\thesection.\arabic{figure}} \renewcommand\thetable{\thesection.\arabic{table}}
\setcounter{figure}{0} \setcounter{table}{0}
\section{Proofs}
\subsection{Proof of Proposition~\texorpdfstring{\ref{prop-uti-N}}{3.1}} \label{app:proof-uN}
Since $\psi_l$, $l=1,\cdots,M$, take constant values over $T_{i,j}$ for $i=1,\cdots, N_1-1$ and $j=1,\cdots, N_2-1$,
there exist constants $c_{i,j}^l$ such that $$ \psi_l(x,y)=\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} c^l_{i,j} \mathds{1}_{T_{i,j}}(x,y). $$
Next, we verify that $u_N(x,y)$
satisfies the following inequalities:
\begin{equation*}
\int_{T} u_N(x,y)d\psi_l(x,y)\leq c_l, l=1,\ldots,M. \end{equation*} By integration by parts (see, e.g., \cite{young1917multiple} and \cite{Ans22}), \begin{equation*} \label{eq:parts_integral} \begin{split} &\int_T u_N(x,y)d\psi_l(x,y) \\ & = u_N(\underline{x},\underline{y})[\psi_l]_{\underline{x},\underline{y}}^{\bar{x},\bar{y}} +\int_T [\psi_l]_{x,y}^{\bar{x},\bar{y}} d u_N(x,y) + \int_X [\psi_l]_{x,\underline{y}}^{\bar{x},\bar{y}}d u_N(x,\underline{y}) +\int_{Y} [\psi_l]_{\underline{x},y}^{\bar{x},\bar{y}} d u_N(\underline{x},y). \end{split} \end{equation*} Since $u_N(\underline{x},\underline{y})=0$, it suffices to calculate the remaining three terms on the right-hand side of the equation. Let $\hat{\psi}_l(x,y) :=[\psi_l]_{x,y}^{\bar{x},\bar{y}}$. Then by definition (see \cite{young1917multiple} and \cite{Ans22}) \begin{equation*} \begin{split}
\hat{\psi}_l(x,y)
&= \psi_l(\bar{x},\bar{y})-\psi_l(x,\bar{y})-\psi_l(\bar{x},y)+\psi_l(x,y) \\
&= \sum_{i=1}^{N_1-2}\sum_{j=1}^{N_2-2} c^l_{i,j} \mathds{1}_{T_{i,j}}(x,y) -
c^l_{N_1-1,N_2-1}\mathds{1}_{T_{N_1-1,N_2-1}}(x,y). \end{split} \end{equation*} Likewise, we have \begin{equation*} \begin{split}
\psi_{1,l}(x)
&: = [\psi_l]_{x,\underline{y}}^{\bar{x},\bar{y}} =
\psi_l(\bar{x},\bar{y})-\psi_l(x,\bar{y})-\psi_l(\bar{x},\underline{y})+\psi_l(x,\underline{y}) \\
&= \sum_{i=1}^{N_1-1} (c^l_{i,1}-c^l_{i,N_2-1}) \mathds{1}_{X_i}(x) - c^l_{N_1-1,1} \end{split} \end{equation*} and \begin{equation*} \begin{split}
\psi_{2,l}(y)
&:= [\psi_l]_{\underline{x},y}^{\bar{x},\bar{y}} =\psi_l(\bar{x},\bar{y})-\psi_l(\underline{x},\bar{y})-\psi_l(\bar{x},y)+\psi_l(\underline{x},y) \\
&= \sum_{j=1}^{N_2-1} (c^l_{1,j}-c^l_{N_1-1,j}) \mathds{1}_{Y_j}(y) - c^l_{1,N_2-1}. \end{split} \end{equation*}
Consequently, we have \begin{eqnarray} \label{eq:Int-by-part-1}
&& \int_T [\psi_l]_{x,y}^{\bar{x},\bar{y}} d u_{N}(x,y) = \int_T \hat{\psi}_l(x,y) d u_{N}(x,y) \nonumber \\
&&= \sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \int_{T_{i,j}} \left( \sum_{i=1}^{N_1-2}\sum_{j=1}^{N_2-2} c^l_{i,j} \mathds{1}_{T_{i,j}}(x,y) - c^l_{N_1-1,N_2-1}\mathds{1}_{T_{N_1-1,N_2-1}}(x,y) \right) d u_{N}(x,y) \nonumber \\
&& = \sum_{i=1}^{N_1-2}\sum_{j=1}^{N_2-2} \int_{T_{i,j}} c^l_{i,j} d u_{N}(x,y) - \int_{T_{N_1-1,N_2-1}} c^l_{N_1-1,N_2-1} d u_{N}(x,y) \nonumber \\
&& = \sum_{i=1}^{N_1-2}\sum_{j=1}^{N_2-2} c^l_{i,j}
\left( u_{N}(x_{i+1},y_{j+1})-u_{N}(x_i,y_{j+1})-u_{N}(x_{i+1},y_j)+u_{N}(x_i,y_j) \right) \nonumber \\
&& \quad+ c^l_{N_1-1,N_2-1}\left( u_{N}(x_{N_1},y_{N_2})-u_{N}(x_{N_1-1},y_{N_2-1})-u_{N}(x_{N_1},y_{N_2-1})+u_{N}(x_{N_1-1},y_{N_2}) \right) \nonumber \\
&& = \sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1}
c^l_{i,j} \left( u(x_{i+1},y_{j+1})-u(x_i,y_{j+1})-u(x_{i+1},y_j)+u(x_i,y_j) \right) \nonumber \\
&& \quad+ c^l_{N_1-1,N_2-1}\left( u(x_{N_1},y_{N_2})-u(x_{N_1-1},y_{N_2-1})-u(x_{N_1},y_{N_2-1})+u(x_{N_1-1},y_{N_2}) \right) \nonumber \\
&& = \int_{\underline{x},\underline{y}}^{\bar{x},\bar{y}} \psi_l(x,y) d u(x,y), \end{eqnarray} where the third equality follows from the definition of Lebesgue-Stieltjes integration given that $u_N$ is non-decreasing and bounded (see \cite{Mcs47,ash2000probability}). Likewise, we can show that
\begin{eqnarray} \label{eq:Int-by-part-2}
\int_{X}\psi_{1,l}(x) d u_N(x,\underline{y}) = \int_{X}\psi_{1,l}(x) d u(x,\underline{y}) \end{eqnarray} and \begin{eqnarray} \label{eq:Int-by-part-3}
\int_{Y} \psi_{2,l}(y) d u_N(\underline{x},y) = \int_{Y} \psi_{2,l}(y) d u(\underline{x},y). \end{eqnarray} Combining (\ref{eq:Int-by-part-1})-(\ref{eq:Int-by-part-3}), we obtain \begin{equation} \label{eq-int-u-u-N}
\int_{T} u_N(x,y)d\psi_l(x,y) = \int_{T} u(x,y)d\psi_l(x,y) \leq c_l, l=1,\ldots,M. \end{equation} The proof is complete.
$\Box$
\subsection{Proof of Proposition~\texorpdfstring{\ref{prop-int-pl}}{3.2}.} \label{app:proof-LS}
Since $F$ is a continuous piecewise linear function with two pieces, there are only two possibilities: either $F$ satisfies the conservative condition (\ref{eq:consevative-condition}) or it does not. Without loss of generality, we assume the conservative condition fails. According to the discussions in \cite{Mcs47,ash2000probability}, $F$ generates an LS (outer) measure $\mu_F^*$ defined as \begin{equation*}
\mu_F^*((\underline{a}, \bar{a}]\times(\underline{b}, \bar{b}])= F(\bar{a},\bar{b})-F(\underline{a},\bar{b})-F(\bar{a},\underline{b})+F(\underline{a},\underline{b}). \end{equation*} By the definition of the
LS integration, \begin{equation*}
\int_{\underline{a}, \underline{b}}^{\bar{a}, \bar{b}}\psi(x,y)d F(x,y) = \int_{(\underline{a}, \bar{a}]\times(\underline{b}, \bar{b}]}\psi(x,y)d \mu_F^*. \end{equation*}
Let $I$ and $II$ denote the triangle regions in $[\underline{a}, \overline{a}]\times [\underline{b}, \overline{b}]$ above (including) and below (including) $AB$ respectively. Let $R=(a,a']\times(b,b']$ be a subset of $I$ or $II$. Then
\[ F(a,b)+ F(a',b')=2F((a+a')/2,(b+b')/2)=F(a,b')+F(a',b), \] because of the linearity of $F$ over $R$. This implies $\mu_F^*(R)=0$.
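As a concrete illustration (not needed for the proof), write $F(x,y)=px+qy+r$ on the linear piece containing $R$, where $p,q,r$ are constants. Then
\begin{equation*}
\mu_F^*(R)=F(a',b')-F(a,b')-F(a',b)+F(a,b)=(pa'+qb'+r)-(pa+qb'+r)-(pa'+qb+r)+(pa+qb+r)=0,
\end{equation*}
confirming that every rectangle contained in a linear piece of $F$ is $\mu_F^*$-null.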
Now we turn to discuss the measure
over the boundary of $I\cup II$ (
denoted by $\partial (I\cup II)= ((\underline{a},\bar{a}]\times\bar{b})\cup(\bar{a}\times(\underline{b},\bar{b}])$). For any small constant $\epsilon>0$, $$ \mu_F^*((\underline{a},\bar{a}]\times\bar{b})\leq \mu_F^*((\underline{a},\bar{a}]\times(\bar{b}-\epsilon,\bar{b}])=F(\underline{a},\bar{b}-\epsilon)-F(\underline{a},\bar{b})-F(\bar{a},\bar{b}-\epsilon)+F(\bar{a},\bar{b}). $$ Letting $\epsilon$ tend to zero, we obtain $$ \mu_F^*((\underline{a},\bar{a}]\times\bar{b})\leq \lim_{\epsilon\to0} (F(\underline{a},\bar{b}-\epsilon)-F(\underline{a},\bar{b})-F(\bar{a},\bar{b}-\epsilon)+F(\bar{a},\bar{b}))=0, $$ which implies $\mu_F^*((\underline{a},\bar{a}]\times\bar{b})=0$. Likewise, we can also obtain $\mu_F^*(\bar{a}\times(\underline{b},\bar{b}])=0$ and hence $\mu_F^*(\partial (I\cup II))=0$.
Let $\{a_i\}$ and $\{b_i\}$ be two sequences of monotonically increasing numbers such that $R_i:=(a_i,a_{i+1}]\times(b_i,b_{i+1}]\subset \inmat{int\,} I$ and $\bigcup_i R_i = I$.
By the property of outer measure, $$ \mu_F^*(\inmat{int\,} I)\leq \sum_i\mu_F^*(R_i)=0. $$ This shows $\mu_F^*(\inmat{int\,} I)=0$. Likewise, $\mu_F^*(\inmat{int\,} II)=0$. Consequently, we have $\mu_F^*(I\cup II)=\mu_F^*(I\cap II)$. Next, let $t\in (\underline{a},\overline{a}]$ and consider the segment $L=(\underline{a},t]\times(\underline{b},y(t)]\cap(I\cap II)$, we have \begin{equation*}
\mu_F^*(L) = \frac{t-\underline{a}}{\bar{a}-\underline{a}} \, \mu_F^*(I\cap II), \end{equation*} where $y(t)$ is the linear function whose graph is the segment $AB$ representing $I\cap II$. Consequently,
\begin{equation*} \begin{split}
\int_{[\underline{a},\overline{a}]\times [\underline{b}, \overline{b}]}\psi(x,y) & d F(x,y) = \int_{I\cap II} \psi(x,y(x)) d\mu_F^*
\\
& = \frac{\mu_F^*(I\cap II)}{\bar{a}-\underline{a}} \lim_{t\to\bar{a}} \int_{\underline{a}}^t \psi(x,y(x)) dx
= \frac{\mu_F^*(I\cap II)}{\overline{a}-\underline{a}}\int_{\underline{a}}^{\overline{a}} \psi(x,y(x))d x, \end{split} \end{equation*} where the third equality holds since $\psi(x,y(x))$ is Riemann integrable.
$\Box$
\subsection{Proof of Proposition~\texorpdfstring{\ref{prop:single-MILP}}{4.2}}
By introducing dual variables, we can write down the Lagrange function of the inner minimization problem (\ref{eq:PRO_MILP_eqi}) w.r.t.~${\bm u}$ \begin{eqnarray*} && L({\bm u},{\bm \lambda}^1,{\bm \lambda}^2,{\bm \eta}^1, {\bm \eta}^2, {\bm \tau}, \sigma,{\bm \zeta})\\ && = \sum_{k=1}^K p_k \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} \alpha_{i,j}^k u_{i,j} +\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2} \lambda_{i,j}^1 (u_{i,j}-u_{i+1,j}) +\sum_{i=1}^{N_1}\sum_{j=1}^{N_2-1} \lambda_{i,j}^2 (u_{i,j}-u_{i,j+1})\\ && \quad +\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2} \eta_{i,j}^1 (u_{i+1,j}-u_{i,j}-L(x_{i+1}-x_i)) +\sum_{i=1}^{N_1}\sum_{j=1}^{N_2-1} \eta_{i,j}^2 (u_{i,j+1}-u_{i,j}-L(y_{j+1}-y_j))\\ && \quad +\sum_{i=1}^{N_1-1}\sum_{j=1}^{N_2-1} \tau_{i,j} (u_{i,j}+u_{i+1,j+1}-u_{i,j+1}-u_{i+1,j})
+\sigma(1-u_{N_1,N_2})+\sum_{l=1}^M \zeta_l\sum_{i=1}^{N_1}\sum_{j=1}^{N_2} Q_{i,j}^l,
\end{eqnarray*} where ${\bm \lambda}^1\in {\rm I\!R}^{(N_1-1)\times N_2}_+$, ${\bm \lambda}^2\in {\rm I\!R}^{N_1\times (N_2-1)}_+$, ${\bm \eta}^1\in {\rm I\!R}^{(N_1-1)\times N_2}_+$, ${\bm \eta}^2\in {\rm I\!R}^{N_1\times (N_2-1)}_+$, $\tau\in {\rm I\!R}^{(N_1-1)\times (N_2-1)}_+$, $\sigma\in {\rm I\!R}$ and $\zeta\in {\rm I\!R}^M_+$. We can then derive
the Lagrange dual formulation and merge it into the outer maximization problem to obtain (\ref{eq:PRO_MILP_single}).
$\Box$
\subsection{Proof of Proposition~\texorpdfstring{\ref{prop-d}}{5.1}.}
Case (i). $\mathscr{G}=\mathscr{G}_K$. We have
\begin{align}
& \mathsf {d\kern -0.07em l}_{\mathscr{G}_K} (u,u_{N}) \nonumber \\
& = \sup_{g\in\mathscr{G}_K} \left|\int_T g(x,y) d u(x,y) - \int_T g(x,y) d u_{N}(x,y)\right| \nonumber \\
& \leq \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \sup_{g\in\mathscr{G}_K}
\left|
\int_{T_{i,j}} g(x,y) d u(x,y) - \int_{T_{i,j}} g(x,y) d u_{N}(x,y)
\right| \nonumber \\
& \leq \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \sup_{g\in\mathscr{G}_K}
\left|
\int_{T_{i,j}} g(x,y) d u(x,y)
- \int_{T_{i,j}} g(x_{i},y_{j}) d u(x,y) \right. \nonumber \\
& \hspace{10em} \left.
+ \int_{T_{i,j}} g(x_{i},y_{j}) d u(x,y)
- \int_{T_{i,j}} g(x,y) d u_{N}(x,y)
\right| \nonumber \\
& \leq \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \sup_{g\in\mathscr{G}_K}
\left( \left|
\int_{T_{i,j}} |g(x,y)-g(x_{i},y_{j})|
d u(x,y) \right| \right. \nonumber \\
& \hspace{10em} \left. + \left | \int_{T_{i,j}} |g(x_{i},y_{j})-g(x,y)| d u_{N}(x,y) \right|
\right) \nonumber \\
& \leq (\beta_{N_1}^2+\beta_{N_2}^2)^{1/2} \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1}
\left(\left|
\int_{T_{i,j}} d u(x,y)\right|
+\left|\int_{T_{i,j}} d u_{N}(x,y) \right|
\right) \nonumber \\
& = 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2} |1-u_N(\underline{x},\bar{y})-u_N(\bar{x},\underline{y})| \leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}, \label{eq-u-u-N} \end{align} where the last equality holds because $u$ and $u_N$ satisfy the conservative conditions, which imply that $$ \int_{T_{i,j}} d u(x,y) = \int_{T_{i,j}} d u_N(x,y)
= u_{i+1,j+1}-u_{i+1,j}-u_{i,j+1}+u_{i,j}\leq 0. $$ Then $$
\sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \left| \int_{T_{i,j}} d u(x,y) \right| = \sum_{i=1}^{N_1-1} \sum_{j=1}^{N_2-1} \left| \int_{T_{i,j}} d u_N(x,y) \right|
=|1-u(\underline{x},\bar{y})-u(\bar{x},\underline{y})|. $$
Case (ii). $\mathscr{G}=\mathscr{G}_I$. We only consider Type 1 PLA. Similar arguments can be established for Type 2 PLA. Let $(x,y)\in T_{i,j}$. Consider the case that $(x,y)$ lies below the main diagonal, i.e., $0\leq\frac{y-y_j}{x-x_i}\leq\frac{y_{j+1}-y_j}{x_{i+1}-x_i}$. Thus
\begin{eqnarray*}
&& |u_{N}(x,y)-u(x,y)| \\
&& = \left| \left( 1-\frac{x-x_i}{x_{i+1}-x_i} \right) u_{i,j}
+\left( \frac{x-x_i}{x_{i+1}-x_i} -\frac{y-y_j}{y_{j+1}-y_j} \right) u_{i+1,j} +\frac{y-y_j}{y_{j+1}-y_j} u_{i+1,j+1}-u(x,y) \right| \\
&& \leq \left| \left(1-\frac{x-x_i}{x_{i+1}-x_i}\right) (u_{i,j}-u(x,y)) \right|
+\left| \left(\frac{x-x_i}{x_{i+1}-x_i}-\frac{y-y_j}{y_{j+1}-y_j}\right) (u_{i+1,j}-u(x,y)) \right| \\
&&
\quad+\left|\frac{y-y_j}{y_{j+1}-y_j} (u_{i+1,j+1}-u(x,y)) \right|. \end{eqnarray*}
Since $u_{i,j}=u(x_i,y_j)$, by the Lipschitz continuity
of $u$, we have \begin{equation*}
|u_{i,j}-u(x,y)| = |u(x_i,y_j)-u(x,y)|\leq L(\beta_{N_1}+\beta_{N_2}). \end{equation*} Likewise, we can obtain
$|u_{i+1,j}-u(x,y)|\leq L(\beta_{N_1}+\beta_{N_2})$ and
$|u_{i+1,j+1}-u(x,y)|\leq L(\beta_{N_1}+\beta_{N_2})$, which give rise to \begin{equation} \label{eq-u-uN}
|u_{N}(x,y)-u(x,y)|\leq L(\beta_{N_1}+\beta_{N_2}). \end{equation} We can obtain the same inequality when $(x,y)\in T_{i,j}$ lies above the main diagonal, i.e., when $\frac{y-y_j}{x-x_i}\geq\frac{y_{j+1}-y_j}{x_{i+1}-x_i}$.
Summarizing the discussions above, we have \begin{eqnarray*}
\mathsf {d\kern -0.07em l}_{\mathscr{G}_I} (u,u_{N}) &=& \sup_{g\in\mathscr{G}} \left| \int_{\underline{x},\underline{y}}^{x,y} d u(x,y) - \int_{\underline{x},\underline{y}}^{x,y} d u_{N}(x,y) \right| \\
&=& \sup_{(x,y)\in T}
|u(x,y)-u(x,\underline{y})-u(\underline{x},y)-u_{N}(x,y)+u_{N}(x,\underline{y})+u_{N}(\underline{x},y)| \\
&\leq& 2L\left(\beta_{N_1}+\beta_{N_2} \right), \end{eqnarray*}
which implies (\ref{eq-d}).
The proof is complete.
$\Box$
\subsection{Proof of Theorem~\texorpdfstring{\ref{thm-erramb}}{5.1}.}
Let $\hat{\alpha}<\alpha$ be a positive number. Under Slater's condition (\ref{eq-sla}), there exists a function $u^0_{N} \in\mathcal{U}_{N}$ and a positive number $N^0=N_1^0\times N_2^0$ such that \begin{equation} \label{eq-sla-0}
{\langle} u^0_{N},\bm{\psi} {\rangle} -\bm{C}+\hat{\alpha} \mathbb{B}^M \subset {\rm I\!R}_-^M \end{equation} for $N\geq N^0$. The existence follows from Proposition~\ref{prop-d}: there exists $u^0$ satisfying (\ref{eq-sla}), and by (\ref{eq-u-uN}) we can construct a piecewise linear approximation $u^0_{N}$ of $u^0$ such that $u^0_{N}\to u^0$ in $\|\cdot\|_\infty$ (i.e., uniformly) as $\beta_{N_i}\to0$, $i=1,2$.
By applying Lemma~\ref{lem-hof} to $\mathcal{U}$ under Slater's condition (\ref{eq-sla-0}), for any $\tilde{u}\in\mathscr{U}_N$, \begin{equation} \label{eq-bbd}
\mathbb{D}_{\mathscr{G}}(\tilde{u},\mathcal{U}_N)\leq \frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(\tilde{u},u^0_{N})}{\hat{\alpha}}
\|({\langle} \tilde{u},\bm{\psi} {\rangle}-\bm{C})_+\| \end{equation} for all $N\geq N^0$.
Let $u\in\mathcal{U}$ and $u_{N}$ be defined as in Proposition~\ref{prop-uti-N}. Then \begin{align}
& \|
{\langle} u_{N},\bm{\psi} {\rangle}
- {\langle} u,\bm{\psi} {\rangle}
\|^2 \nonumber \\
& = \sum_{l=1}^M \left|
\int_T u_{N}(x,y) d \psi_l(x,y)-\int_T u(x,y) d \psi_l(x,y)
\right|^2 \nonumber \\
& \leq \sum_{l=1}^M \left|
\int_T |u_{N}(x,y)-u(x,y)|d\psi_l(x,y)
\right|^2 \nonumber\\
& \leq
L^2 (\beta_{N_1}+\beta_{N_2})^2
\sum_{l=1}^M \left| \int_T d \psi_l(t)\right|^2.
\label{eq-inpro} \end{align} By the triangle inequality for the pseudo-metric and (\ref{eq-bbd}), we have \begin{align}
\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,\mathcal{U}_{N}) &\leq \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})+\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},\mathcal{U}_{N}) \nonumber \\
&\leq \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})+\frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},u_{N}^0)}{\hat{\alpha}}
\|({\langle} u_{N},\bm{\psi} {\rangle}-\bm{C})_+\| \nonumber \\
&= \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})+\frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},u_{N}^0)}{\hat{\alpha}}
[\|({\langle} u_{N},\bm{\psi} {\rangle}-\bm{C})_+\|-\|({\langle} u,\bm{\psi} {\rangle}-\bm{C})_+\|] \nonumber \\
&\leq \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})+\frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},u_{N}^0)}{\hat{\alpha}}
\|{\langle} u_{N},\bm{\psi} {\rangle}-{\langle} u,\bm{\psi} {\rangle}\| \nonumber \\
&\leq \mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})+\frac{\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},u_{N}^0)}{\hat{\alpha}}
L(\beta_{N_1}+\beta_{N_2}) \left(\sum_{l=1}^M \left| \int_T d \psi_l(t)\right|^2\right)^{1/2},
\label{eq-uU} \end{align} where the equality holds because $u\in\mathcal{U}$, i.e., $({\langle} u,\bm{\psi} {\rangle}-\bm{C})_+=0$, and the last inequality comes from (\ref{eq-inpro}). In what follows, we estimate $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u,u_{N})$ and $\mathsf {d\kern -0.07em l}_{\mathscr{G}}(u_{N},u_{N}^0)$ when $\mathscr{G}$ takes a specific form.
Case (i). If $\mathscr{G}=\mathscr{G}_K$, then $\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u_N,u^0_N)\leq \left((\bar{x}-\underline{x})^2+(\bar{y}-\underline{y})^2\right)^{1/2}$ and $\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,u_N)\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}$ by Proposition~\ref{prop-d}~(i).
Taking supremum w.r.t. $u$ over $\mathcal{U}$ on both sides of (\ref{eq-uU}), we obtain \begin{equation*}
\mathbb{D}_{\mathscr{G}_K}(\mathcal{U},\mathcal{U}_N)\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2} +
L(\beta_{N_1}+\beta_{N_2})
\frac{\left((\bar{x}-\underline{x})^2+(\bar{y}-\underline{y})^2\right)^{1/2}}{\hat{\alpha}}
\left(\sum_{l=1}^M \left| \int_T d \psi_l(t)\right|^2\right)^{1/2} \end{equation*} and hence (\ref{eq-erram-L}) holds since $\mathbb{D}_{\mathscr{G}_K}(\mathcal{U}_{N},\mathcal{U})=0$.
Case (ii). If $\mathscr{G}=\mathscr{G}_I$, then $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u_N,u^0_N)\leq1$ and $\mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u,u_N)\leq 2L\left( \beta_{N_1}+\beta_{N_2} \right)$ by Proposition~\ref{prop-d}~(ii). Following a similar analysis to Case (i), we obtain (\ref{eq-erram-I}).
$\Box$
\subsection{Proof of Corollary~\texorpdfstring{ \ref{cor-err-optval-discrete}}{5.1}.}
Since $\mathcal{U}_{N}\subset\mathcal{U}$ by definition, $ \mathbb{D}_{\mathscr{G}}(\mathcal{U}_{N},\mathcal{U})
=0 $ for any $\mathscr{G}$. Thus, it suffices to estimate $\mathbb{D}_{\mathscr{G}}(\mathcal{U},\mathcal{U}_{N})$. For any $u\in \mathcal{U}$, it follows by Proposition~\ref{prop-uti-N} that we can construct $u_N$ of Type-1 PLA or Type-2 PLA such that $u_N\in \mathcal{U}_N$. Consequently, in the case that $\mathscr{G}=\mathscr{G}_K$, we have $$ \mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,\mathcal{U}_{N}) \leq \mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,u_{N})+\mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u_{N},\mathcal{U}_{N}) = \mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,u_{N})\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}, $$ where the last inequality follows from Proposition~\ref{prop-d}~(i).
Hence $$ \mathbb{H}_{\mathscr{G}_K} (\mathcal{U},\mathcal{U}_{N})= \max\{0, \mathbb{D}_{\mathscr{G}_K} (\mathcal{U},\mathcal{U}_{N}) \}= \sup_{u\in\mathcal{U}} \mathsf {d\kern -0.07em l}_{\mathscr{G}_K}(u,\mathcal{U}_{N})\leq 2(\beta_{N_1}^2+\beta_{N_2}^2)^{1/2}. $$
In the case that $\mathscr{G}=\mathscr{G}_I$, we have $$ \mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u,\mathcal{U}_{N}) \leq \mathsf {d\kern -0.07em l}_{\mathscr{G}_I}(u,u_{N}) \leq 2L\left( \beta_{N_1}+\beta_{N_2} \right), $$ where the last inequality follows from Proposition~\ref{prop-d}~(ii) and hence $$ \mathbb{H}_{\mathscr{G}_I} (\mathcal{U},\mathcal{U}_{N})= \max\{0,\mathbb{D}_{\mathscr{G}_I} (\mathcal{U},\mathcal{U}_{N})\}\leq 2L\left( \beta_{N_1}+\beta_{N_2} \right).$$ The proof is complete.
$\Box$
\subsection{Proof of Theorem~\texorpdfstring{\ref{thm-optval}}{5.2}.}
Part (i). It is well-known that \begin{equation*}
|{\vartheta}_{N}-{\vartheta}|\leq \max_{\bm{z}\in Z} \Big{|} \min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]
- \min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))] \Big{|}. \end{equation*} Let $\delta$ be a small positive number. For any $\bm{z}\in Z$, we can find a $\delta$-optimal solution $u^{\bm{z}} \in \mathcal{U}$ and $u^{\bm{z}}_N \in \mathcal{U}_N$ such that $$ {\mathbb{E}}_P [u^{\bm{z}} (\bm{f}(\bm{z},\bm{\xi}))] \leq \min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]+\delta, \quad {\mathbb{E}}_P [u^{\bm{z}}_{N} (\bm{f}(\bm{z},\bm{\xi}))] \geq \min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]. $$ Combining the above inequalities, we have \begin{eqnarray}
\min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]-\min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))] &\leq& {\mathbb{E}}_P [u^{\bm{z}}_{N} (\bm{f}(\bm{z},\bm{\xi}))-u^{\bm{z}} (\bm{f}(\bm{z},\bm{\xi}))] +\delta \qquad \nonumber\\
&\leq& \sup_{(x,y)\in T} |u^{\bm{z}}_{N}(x,y)-u^{\bm{z}}(x,y)| +\delta.
\label{eq:thm2-proof}
\end{eqnarray} On the other hand, for any $u,v\in {\cal U}$ \begin{eqnarray*}
&& \sup_{(x,y)\in T} |u(x,y)-v(x,y)| \\
&& \leq \sup_{(x,y)\in T} ( |u(x,y)-v(x,y)-u(x,\underline{y})-u(\underline{x},y)+v(x,\underline{y})+v(\underline{x},y)| \\
&& \hspace{5em} +|v(x,\underline{y})+v(\underline{x},y)-u(x,\underline{y})-u(\underline{x},y)| ) \\
&& = \sup_{(x,y)\in T} \left( \left| \int_{\underline{x},\underline{y}}^{x,y} d u(x,y)-\int_{\underline{x},\underline{y}}^{x,y} d v(x,y) \right| + |v(x,\underline{y})+v(\underline{x},y)-u(x,\underline{y})-u(\underline{x},y)| \right) \\
&& \leq \sup_{g\in\mathscr{G}_I} \left| \int_T g(x,y) d u(x,y)-\int_T g(x,y) d v(t) \right| + \sup_{(x,y)\in T} |v(x,\underline{y})+v(\underline{x},y)-u(x,\underline{y})-u(\underline{x},y)|. \end{eqnarray*} Since ${\cal U}_N\subset {\cal U}$, by setting $u=u^{\bm{z}}_{N}$ and $v=u^{\bm{z}}$, we have \begin{eqnarray}
\sup_{(x,y)\in T} |u^{\bm{z}}_{N}(x,y)-u^{\bm{z}}(x,y)|
&\leq& \mathsf {d\kern -0.07em l}_{\mathscr{G}_I} (u_{N}^{\bm{z}},u^{\bm{z}})
+ \sup_{(x,y)\in T} |u_{N}^{\bm{z}}(x,\underline{y})-u^{\bm{z}}(x,\underline{y})+u_{N}^{\bm{z}}(\underline{x},y)-u^{\bm{z}}(\underline{x},y)| \notag \\
&\leq&
\mathbb{H}_{\mathscr{G}_I} (\mathcal{U}_{N},\mathcal{U})+ L(\beta_{N_1}+\beta_{N_2}).
\label{eq:thm2-proof-1} \end{eqnarray} Combining (\ref{eq:thm2-proof})-(\ref{eq:thm2-proof-1}), we obtain
\begin{eqnarray*}
\min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]-\min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]
\leq \mathbb{H}_{\mathscr{G}_I} (\mathcal{U}_{N},\mathcal{U})+ L(\beta_{N_1}+\beta_{N_2}) +\delta. \end{eqnarray*} By exchanging the position of $\mathcal{U}$ and $\mathcal{U}_{N}$, we can use the same argument to derive $$ \min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]-\min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))] \leq \mathbb{H}_{\mathscr{G}_I} (\mathcal{U},\mathcal{U}_{N})+ L(\beta_{N_1}+\beta_{N_2}) +\delta. $$ Since $\delta>0$ can be arbitrarily small, we obtain \begin{equation} \label{eq-dis-vt}
|{\vartheta}_{N}-{\vartheta}| \leq \max_{\bm{z}\in Z} \left| \min_{u\in \mathcal{U}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))]-\min_{u\in \mathcal{U}_{N}} {\mathbb{E}}_P [u(\bm{f}(\bm{z},\bm{\xi}))] \right| \leq \mathbb{H}_{\mathscr{G}_I} (\mathcal{U},\mathcal{U}_{N})+ L(\beta_{N_1}+\beta_{N_2}) \end{equation} and hence (\ref{eq-err-vt}) follows from (\ref{eq-erram-I}).
Part (ii). Observe that $\Lambda(\cdot)$ is a non-decreasing function; thus its generalized inverse is well-defined. For any $\bm{z}_{N}^*\in Z_{N}^*$ and $\bm{z}^*\in Z^*$, \begin{eqnarray*}
\Lambda(d(\bm{z}_{N}^*,Z^*)) &\leq& v({\bm{z}_N^*})-{\vartheta}^* =v(\bm{z}_{N}^*)-v(\bm{z}^*)\leq |v(\bm{z}_{N}^*)-v_N(\bm{z}_{N}^*)| + |v_N(\bm{z}_{N}^*)-v(\bm{z}^*)| \\
&\leq & 2\max_{\bm{z}\in Z} |v(\bm{z})-v_N(\bm{z})|. \end{eqnarray*} Combining the inequality above with (\ref{eq-err-vt}), we obtain $$
d(\bm{z}_{N}^*,Z^*) \leq \Lambda^{-1} \left( 2\max_{\bm{z}\in Z} |v(\bm{z})-v_N(\bm{z})| \right) \leq \Lambda^{-1}(2 \mathbb{H}_{\mathscr{G}_I}(\mathcal{U}_{N},\mathcal{U})+2L(\beta_{N_1}+\beta_{N_2})) $$ and hence (\ref{eq-err-so}) follows.
$\Box$ \end{appendices}
\end{document} | arXiv |
Latitude-based approach for detecting aberrations of hand, foot, and mouth disease epidemics
Jia-Hong Tang1,
Ta-Chien Chan2,
Mika Shigematsu3 &
Jing-Shiang Hwang4
Epidemics of hand, foot and mouth disease (HFMD) among children in East Asia have been a serious annual public health problem. Previous studies in China and island-type territories in East Asia showed that the onset of HFMD epidemics evolved with increased latitude. Based on the natural characteristics of the epidemics, we developed regression models for issuing aberration alerts and predictions.
HFMD sentinel surveillance data from 2008 to 2014 in Japan are used in this study, covering 365 weeks and 47 prefectures between 24 and 46° of north latitude. Average HFMD cases per sentinel are standardized as Z rates. We fit weekly Z rate differences between prefectures located in the south and north of a designated prefecture with linear regression models to detect the surging trend of the epidemic for the prefecture. We propose a rule for issuing an aberration alert determined by the strength of the upward trend of south–north Z rate differences in the previous few weeks. In addition to the warning, we predict a Z rate for the next week with a 95 % confidence interval.
We selected Tokyo and Kyoto for evaluating the proposed approach to aberration detection. Overall, the peaks of epidemics in Tokyo mostly occurred in weeks 28–31, later than in Kyoto, where the disease peaked in weeks 26–31. Positive south–north Z rate differences in both prefectures were clearly observed ahead of the HFMD epidemic cycles. Aberrations in the major epidemics of 2011 and 2013 were successfully detected weeks earlier. The prediction also provided accurate estimates of the epidemic's trends.
We have used only the latitude, one geographical feature affecting the spatiotemporal distribution of HFMD, to develop rules for early aberration detection and prediction. We have also demonstrated that the proposed rules performed well using real data in terms of accuracy and timeliness. Although our approach may provide helpful information for controlling epidemics and minimizing the impact of diseases, the performance could be further improved by including other influential meteorological factors in the proposed latitude-based approach, which is worth further investigation.
Epidemiological surveillance is a routine process of collection, analysis and dissemination of health data for public health purposes. One function of infectious disease surveillance is to detect aberrations at an early stage. Early warning of aberrations could improve the efficiency of control campaigns and facilitate preventative actions to halt the spread of infectious diseases, thus reducing their impact on the health system [1]. Furthermore, morbidity and mortality would be reduced through an earlier and more efficient public health response.
Hand, foot and mouth disease (HFMD), which often strikes children under five years old, is caused by multiple enterovirus serotypes, and usually leads to mild or moderate symptoms, with recovery in about three to six days without medication [2]. Treatment of HFMD is limited, as there is currently no effective antiviral drug or vaccine [3]. So far, preventive measures, such as avoiding direct contact with infectious patients, disinfection of contaminated environments, and good personal hygiene habits, represent the best options for controlling and preventing HFMD infection [4].
Historically, the occurrences of HFMD epidemics were sporadic and local, but this pattern changed in the late 1990's. Since then, medium- to large-scale epidemics have been continuously observed in the Asia-Pacific region, including Singapore [5], Malaysia [6], Hong Kong [7], Taiwan [8], Japan [9], and China [10, 11]. Severe or lethal complications, such as encephalitis, meningitis, pulmonary edema and myocarditis, in the course of enterovirus infections drew attention to these diseases. In Taiwan, the sentinel physicians reported 129,106 cases of HFMD in 1998 [12]. There were 405 patients with severe disease, most of whom were five years old or younger; severe disease was seen in all regions of the island. Complications included encephalitis, aseptic meningitis, pulmonary edema or hemorrhage, acute flaccid paralysis, and myocarditis. Seventy-eight patients died, 71 of whom (91 %) were five years of age or younger. Of the patients who died, 65 (83 %) had pulmonary edema or pulmonary hemorrhage. From 2000 to 2002, many cases with complications were reported in Japan. Cases with complications included 226 (0.10 %) of the total 216,154 reported HFMD cases which occurred in 2000, 32 (0.02 %) of a total of 134,927 reported HFMD cases in 2001, and 14 (0.01 %) of the total 97,870 reported HFMD cases in 2002 [13]. Although severe or lethal complications are rare, some of these HFMD epidemics had unusually high numbers of fatalities, and this generated much fear and anxiety in this region [14]. Therefore, controlling the HFMD epidemics has become an emerging public health problem in these countries.
Detecting infectious diseases' aberrations at an early stage is crucial for swift implementation of control measures. HFMD epidemics exhibit a significant seasonal pattern, with a rapid onset in the spring or summer, a gradual decline after the peak, and a mild second wave in the fall. This pattern has been observed not only in Asia, but also in European countries, such as Sweden, France and Hungary [15]. A bimodal seasonal pattern was reported in the United Kingdom, with peaks in summer, late autumn, and early winter [16]. In Finland, most HFMD cases were observed in autumn [17]. In an attempt to provide early warning for HFMD epidemics, a considerable amount of research has focused on developing statistical methods, including temporal, spatial, and spatiotemporal methods not only to contribute novel information but also to support aberration detection and management to identify aberrations in HFMD data accurately and quickly [18–21].
Meteorological factors have been recognized as spatial risk factors associated with HFMD occurrence. Weekly mean temperature and cumulated rainfall are significantly associated with HFMD incidence with a time lag of 1–2 weeks in Singapore [4]. A higher risk of transmission is associated with temperatures in the range of 70 °F to 80 °F, higher relative humidity, lower wind speed, more precipitation, and greater population density in China [22]. In Hong Kong, relative humidity, mean temperature, and difference in diurnal temperature were positively associated with HFMD consultation rates at a 2-week lag time [7]. In Japan, a study found that ambient temperature and relative humidity were associated with increased HFMD occurrence at a lag of 0–3 weeks [9]. In Taiwan, higher dew point, lower visibility, and lower wind speed were significantly associated with the rise of epidemics [23]. All these studies show that the dispersion of HFMD is sensitive to temperature variation.
In our previous study [23], we integrated the available surveillance and weather data in East Asia to elucidate possible spatiotemporal correlations between HFMD epidemics and the weather. The results revealed that latitude was the most important explanatory factor associated with the timing and amplitude of HFMD epidemics. In some population-based studies of HFMD in China, increasing amplitude of HFMD outbreaks was shown to accompany the increase of latitude in southern China [10, 24, 25]. Meteorological factors including higher dew point, lower visibility, and lower wind speed were significantly associated with the rise of epidemics. In addition, the temperature-related measurements also showed higher range in Japan than in other areas, which indicated the variations which occurred within Japan. Together with the decreasing trend of mean temperature from south to north, we inferred that latitude played an important role in change in temperature and would be associated with HFMD epidemics.
In this study, we propose a novel statistical approach based on linear regression models to detect the future trend of HFMD epidemics rather than detecting outbreaks of HFMD. The goals of this study were to characterize the influences of latitude variation on HFMD epidemics, to identify large epidemics of HFMD sufficiently early, allowing time for intervention, and to detect and predict HFMD epidemic trends with greater precision. The proposed approach would be used to facilitate efficient HFMD control.
Study area and surveillance data
Japan is an archipelago nation in East Asia comprising four major islands and many small islands extending along the Pacific coast of Asia. It lies between 24° to 46° north latitude and 123° to 146° east longitude (Fig. 1a). There are 47 prefectures (local government administrative divisions). Japan lies mainly in the temperate zone, and is characterized by four distinct seasons.
A map showing Japan's prefectures (a), and heat map of Z rate for HFMD by 47 prefectures of Japan (b). a A map showing Japan's prefectures. Japan is divided for administrative purposes into 47 prefectures stretching from Hokkaido in the north to Okinawa in the south. Tokyo is the capital of Japan, and is situated in the center of the Japanese archipelago. Kyoto, an ancient center of Japanese culture, is to the southwest of Tokyo. The original basemaps were downloaded from a publicly available website, the GADM database of Global Administrative Areas (http://www.gadm.org/), and further analyzed by the authors in this study. b The prefectures were ordered by latitude from southernmost (bottom) to northernmost (top). Note: The HFMD data of Fukushima in March of 2011 were not available due to the Great East Japan Earthquake, causing a white block on the heat map. The white blocks in Yamanashi, Tottori, Shimane, Kagawa and Tokushima are due to missing values
The prefecture-level HFMD surveillance data from Japan, combined with latitudes of all Japanese prefectures, were used in this study. In Japan, infectious disease surveillance is designated as one of the important components for disease control, and its sentinel surveillance program was revised in 1999 to combine with the national notifiable diseases program, and incorporated into the National Epidemiological Surveillance of Infectious Diseases (NESID). The NESID in Japan, which was started in July 1981, is organized by the Ministry of Health, Labour and Welfare (MHLW), and encompasses the sentinel surveillance system for HFMD. NESID guidelines specify the method for selecting sentinel medical institutions [26, 27]. According to the guidelines, prefectural governments select sentinels as randomly as possible, and the numbers of sentinels per district public health center coverage area are determined in proportion to the population of the area in order to adequately assess any HFMD epidemic. HFMD, one of the sentinel reporting diseases in Japan, should be reported weekly by designated sentinels rather than reported immediately by all physicians; data are displayed by weekly reported number per sentinel. The designated sentinels send weekly HFMD data to the district health center on Tuesday of the next week. The health centers tabulate the district data and send it to the local health department on Wednesday. The weekly data are forwarded to MHLW by the local health departments the next day [26, 27].
In this study, latitudes of all Japanese prefectures, which were determined by the geographical center of each prefecture, and prefecture-level data from HFMD cases in Japan were collected online during the period from the 1st week of 2008 to the 52nd week of 2014 (a total of 365 weeks), from the National Institute of Infectious Diseases (NIID). These data are available at http://idsc.nih.go.jp. The HFMD dataset comprises weekly reported cases and cases per sentinel to provide an understanding of the epidemic situation and disease trends in different prefectures. To reflect the relative amplitude and severity of HFMD epidemics for each prefecture, we standardized the reported cases per sentinel separately in each prefecture during the study period, which are called Z rates in this study. The formula for the Z rate calculation is as follows:
$$ {Z}_{kt}=\left({S}_{kt}-{\mu}_k\right)/{\sigma}_k, $$
where $Z_{kt}$ is the value of the Z rate in prefecture $k$ at week $t$, $S_{kt}$ is the cases per sentinel in prefecture $k$ at week $t$, and $\mu_k$ and $\sigma_k$ are the mean and standard deviation of cases per sentinel in prefecture $k$ during the study period. Thus, a positive Z rate indicates a datum above the mean of cases per sentinel, while a negative Z rate indicates a datum below the mean. The data we used were statistics publicly available online, and thus informed consent was not needed.
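The within-prefecture standardization above is a plain z-score over the study period. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def z_rates(cases_per_sentinel):
    """Standardize weekly cases per sentinel within one prefecture.

    cases_per_sentinel: 1-D array of S_kt over the study period.
    Returns Z_kt = (S_kt - mu_k) / sigma_k, as in the paper's formula.
    """
    s = np.asarray(cases_per_sentinel, dtype=float)
    mu = s.mean()
    sigma = s.std()  # population SD; use ddof=1 if a sample SD is intended
    return (s - mu) / sigma

# Toy example: values above the prefecture's mean get positive Z rates.
z = z_rates([1.0, 2.0, 3.0, 4.0])
print(np.round(z, 3))
```

By construction the Z rates of each prefecture have mean zero, so epidemics in prefectures with very different baseline incidence become comparable.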
Statistical method
With the assumption that HFMD epidemics spread from the south to the north, we propose three rules for estimating the trend of HFMD epidemics and predicting the cases per sentinel for the next week. First, we examine whether the differences between the means of Z rates in areas south of a designated area and those north of it are increasing. If an increasing trend is identified, we activate the surveillance system and move to the second step to determine whether an aberration of HFMD cases is likely to occur in this area over the coming month. Finally, we predict the HFMD epidemic in the area one week ahead.
For convenience, all areas under study were sorted from southernmost to northernmost; for example, the latitude of the kth area was the kth lowest among all areas. To detect an unusual signal of HFMD activities in the kth area, we calculate the difference between the means of Z rates in areas south of the kth area and in areas north of the kth area for the tth week. The south–north Z rate difference is defined as
$$ {D}_{kt}=\frac{1}{m}{\displaystyle \sum_{j=k-m}^{k-1}{Z}_{jt}}-\frac{1}{n}{\displaystyle \sum_{j=k+1}^{k+n}{Z}_{jt}}, $$
where $m$ and $n$ represent the numbers of areas under study located to the south and to the north of the $k$th area, respectively, with the constraints $m < k$ and $n \le J - k$; $J$ is the total number of areas under study, and $Z_{jt}$ is the Z rate of the $t$th week in the $j$th area for $t = 1, \ldots, T$, $j = 1, \ldots, J$.
To detect the future trend of HFMD epidemics, we limit our focus to positive values of these differences. Under the assumption that HFMD epidemics spread from the south to the north, positive values $D_{kt}, \ldots, D_{k,t-s}$ in consecutive weeks indicate that the Z rates may have increased in the areas south of the $k$th area during the past $s$ weeks before the $t$th week. When the $D_{kt}$ values have been increasing during the previous few weeks, we expect that the HFMD epidemic may spread from the south to the $k$th area; since the area may then be affected soon, the surveillance system should be activated. To determine whether an increasing Z rate will occur for the $k$th area in the coming weeks, we propose a rule as follows:
Rule 1: Sending an activation signal
If $D_{kt} > 0, \cdots, D_{k,t-s} > 0$ and $D_{k,t-s-1} \le 0$ for $s > 1$, we fit a linear regression model to these south–north Z rate differences, $D_{k,t-i} = \mu_t + \theta_t \times i + \varepsilon_{t-i}$, for $i = 0, 1, \ldots, s$ and $s \le 12$.
If autocorrelation is present in the residuals at week $t$, an autoregressive model of order 1 is considered for the error term. That is, we assume $\varepsilon_{t-i} = \varphi\varepsilon_{t-i-1} + w_{t-i}$ and \( {w}_{t-i}\overset{i.i.d.}{\sim }N\left(0,\ {\sigma}^2\right) \). Generalized least squares (GLS) regression analysis is used to estimate the regression coefficients and their confidence limits.
The slope, θ t , represents the trend of the Z rate differences during the past s weeks. The 95 % lower bound of each θ t was also calculated for judging whether the trend was significantly increasing during the few weeks before week t. Let \( \widehat{\theta}{}_t \) and \( {\widehat{\theta}}_t^L \) be, respectively, the estimate and the 95 % lower bound of the slope θ t from the fitted model. We found that there was a considerable lag between the south–north Z rate differences and Z rates of a designated area. For most prefectures in Japan, the correlation coefficients between the south–north Z rate differences and Z rates reached statistically significant maximum values with a three-week or four-week lag. In a statistical sense, short and sporadic signals did not form a large-scale epidemic. Therefore, we consider the likelihood of HFMD aberration in areas south of the designated area to be increasing if the trend estimates were significant in three consecutive weeks. The designated area will very likely be hit in the coming weeks based on the assumption of HFMD epidemics spreading from the south. Therefore, we propose the first rule: send a signal to activate the surveillance system at the tth week when we observe \( {\widehat{\theta}}_t^L>0 \), \( {\widehat{\theta}}_{t-1}^L>0 \) and \( {\widehat{\theta}}_{t-2}^L>0 \).
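Rule 1 reduces to fitting a slope to the recent difference series and checking its 95 % lower bound for three consecutive weeks. The sketch below uses ordinary least squares with a one-sided normal-approximation lower bound; the paper switches to GLS with AR(1) errors when residual autocorrelation is present, which this simplification omits. Function names and data are illustrative:

```python
import numpy as np

def slope_lower_bound(d_recent, z=1.645):
    """OLS slope and an approximate 95 % lower bound for recent D values.

    d_recent: D_{k,t-s}, ..., D_{k,t} in chronological order.
    Returns (theta_hat, lower_bound); an activation signal needs the
    lower bound to be positive in three consecutive weeks.
    """
    y = np.asarray(d_recent, dtype=float)
    x = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = max(len(y) - 2, 1)
    s2 = resid @ resid / dof
    var_slope = s2 / ((x - x.mean()) ** 2).sum()
    theta = beta[1]
    return theta, theta - z * np.sqrt(var_slope)

theta, lower = slope_lower_bound([0.1, 0.3, 0.5, 0.7, 0.9])
print(theta > 0 and lower > 0)  # significantly increasing trend
```

In a production implementation one would use a t-quantile and, as the paper does, test the residuals for autocorrelation before trusting the OLS bound.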
Rule 2: Issuing an aberration alert
When an activation signal appears in the $k$th area, we then determine whether an aberration of HFMD cases is likely to occur in this area over the coming month (4 weeks). We assume that the Z rates of a designated area will be increasing in the coming month when an epidemic has started in the area. Suppose the aberration started at the $j$th week; then we can use the slope estimate \( \widehat{\pi}{}_j \) from the fitted linear regression model, $Z_{k,j+u} = \alpha_j + \pi_j \times u + \varepsilon_{j+u}$, for $u = 0, 1, 2, 3$, as a measure of the intensity of the epidemic in the $k$th area.
The larger the estimate \( \widehat{\pi}{}_j \) is, the more sharply the Z rate will increase from the $j$th week. However, at the $t$th week, we have to wait three more weeks to obtain an estimate of $\pi_t$ from the above linear regression model. We propose to use the relationship between the two available trend estimates, \( \widehat{\theta}{}_j \) and \( \widehat{\pi}{}_j \), for $j < t$ to construct a model for estimating $\pi_t$ at the $t$th week. Let $S \subset \{1, 2, \ldots, t\}$ be the set of indexes in which element $s$ corresponds to a week when \( {\widehat{\theta}}_{s-u}^L>0 \) for $u = 0, 1, 2$. We have also obtained \( {\widehat{\pi}}_s \) for each $s \in S\backslash\{t-3, t-2, t-1, t\}$. We propose to first fit the linear regression model \( {\widehat{\pi}}_s={\beta}_0+{\beta}_1{\widehat{\theta}}_s+{\varepsilon}_s \) for $s \in S\backslash\{t-3, t-2, t-1, t\}$.
Then we use the model estimates \( {\widehat{\beta}}_0 \) and \( {\widehat{\beta}}_1 \) to obtain the estimate of Z rate trend at the tth week, i.e., \( {\widehat{\pi}}_t={\widehat{\beta}}_0+{\widehat{\beta}}_1{\widehat{\theta}}_t \).
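The calibration step above is a simple regression of past month-ahead slopes on past difference slopes, applied to the current week. A minimal sketch (names and toy numbers are illustrative; in the toy data the past relationship is exactly $\pi = 2\theta$):

```python
import numpy as np

def estimate_pi(theta_hist, pi_hist, theta_t):
    """Estimate the coming month's Z rate trend pi_t from theta_t.

    theta_hist, pi_hist: past pairs (theta_s, pi_s) from earlier signal
    weeks; theta_t: the current slope estimate. Fits pi = b0 + b1*theta
    by OLS and evaluates it at theta_t.
    """
    X = np.column_stack([np.ones(len(theta_hist)), theta_hist])
    b, *_ = np.linalg.lstsq(X, np.asarray(pi_hist, float), rcond=None)
    return b[0] + b[1] * theta_t

pi_t = estimate_pi([0.1, 0.2, 0.3], [0.2, 0.4, 0.6], theta_t=0.25)
print(round(pi_t, 3))  # 0.5 for this exactly linear toy history
```

This gives an immediate proxy for the month-ahead trend without waiting three weeks for the direct estimate.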
We then propose our second rule: issue an HFMD epidemic alert in the $k$th area at the $t$th week when both \( \widehat{\theta}{}_t \) > 0 and \( \widehat{\pi}{}_t \) > 0.
In order to forecast the possibility and intensity of future epidemics, an epidemic monitoring indicator was set up. The slope estimate, \( \widehat{\pi}{}_t \), which contains information about epidemic activity in the coming 4 weeks, is a suitable indicator for monitoring the trend of the HFMD epidemic. We categorized the epidemic trend of HFMD in the coming month as mild, moderate or strong based on the magnitude of this slope estimate. The slope can be interpreted as the percentage increase of Z rate per week. In this study, we chose 10 % and 30 % as cut-points to categorize the degree of severity of the designated area. The epidemic trend of HFMD in the coming month was categorized as mild if the percentage increase of Z rate in the coming month was below 10 %, moderate if it was between 10 % and 30 %, and strong if it was larger than 30 %.
Combined with Rule 1, we adopted a four-color gauge for visualizing the HFMD epidemic monitoring process, indicating the degree of severity of the epidemic, in which yellow represents an activation signal, while orange, red and purple stand for alerts of mild, moderate and strong epidemic trends in the coming month, respectively.
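The cut-points and the four-color gauge can be combined into a single lookup. A sketch, assuming the slope is expressed as a fraction (0.15 = 15 %) and that "yellow" means an activation signal without an alert yet (the function and its interface are illustrative):

```python
def severity_color(pi_hat, activation=False):
    """Map the month-ahead slope estimate to the four-color gauge.

    pi_hat: estimated percentage increase of Z rate per week, as a
    fraction; None when no alert criterion is met. Cut-points 10 % and
    30 % follow the paper, but may be tuned to local circumstances.
    """
    if pi_hat is None:
        return "yellow" if activation else "none"
    if pi_hat < 0.10:
        return "orange"   # mild epidemic trend
    if pi_hat <= 0.30:
        return "red"      # moderate epidemic trend
    return "purple"       # strong epidemic trend

print(severity_color(0.15))  # red
```

The placement of the boundary values (here, 0.10 counts as moderate and 0.30 as moderate rather than strong) is a design choice the paper leaves unspecified.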
Rule 3: Predicting future epidemics
Since we have observed the influence of latitude variation on the temporal feature of HFMD epidemics, in which the annual timing of HFMD epidemics was earlier in southern than in northern areas, this relationship can also be used for improving prediction accuracy. A linear regression model was used to predict the HFMD epidemic one week ahead. The choice of relevant data for constructing the predictive model is, however, critical. Pearson's correlation coefficient was used to identify areas in the south which are significantly associated with the designated area, using the HFMD data of the past year. The HFMD data of the current year for those identified southern areas were then used for estimating regression parameters. Specifically, the regression model for prediction is
$$ {Z}_{h,t}={\gamma}_{0,t}+{\gamma}_{1,t}\ {Z}_{h,t-1}+{\varepsilon}_t,\ \mathrm{f}\mathrm{o}\mathrm{r}\kern0.5em 1\le h\le k, $$
where h includes the identified southern areas and the designated area. With the estimates of model parameters, the Z rate of the kth area at week t + 1 could be predicted by
$$ {\widehat{Z}}_{k,\ t+1}={\widehat{\gamma}}_{0,t}+{\widehat{\gamma}}_{1,t}\ {Z}_{k,\ t}. $$
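Rule 3 pools the current-year Z rates of the designated area and its correlated southern areas, regresses each week's value on the previous week's, and applies the fitted coefficients to the latest observation. A minimal sketch (variable names and toy data are illustrative; the toy history follows $Z_t = Z_{t-1} + 0.1$ exactly):

```python
import numpy as np

def predict_next_week(z_now, z_lagged):
    """One-week-ahead Z rate predictor for the designated area.

    z_now: pooled Z_{h,t} values; z_lagged: the matching Z_{h,t-1}
    values from the designated area and its correlated southern areas.
    Returns a function applying the fitted gamma-hats to a current Z.
    """
    y = np.asarray(z_now, dtype=float)
    x = np.asarray(z_lagged, dtype=float)
    X = np.column_stack([np.ones_like(x), x])
    g, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda z: g[0] + g[1] * z

predict = predict_next_week([0.5, 1.0, 1.5], [0.4, 0.9, 1.4])
print(round(predict(2.0), 3))  # 2.1
```

The predicted Z rate can then be converted back to cases per sentinel via the prefecture's $\mu_k$ and $\sigma_k$, as done for Tables 3 and 4.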
In the next section, we use HFMD data of the two selected prefectures in Japan, Tokyo and Kyoto, to illustrate the proposed approach. Tokyo, the capital of Japan, is the largest city in terms of population and is located roughly in the middle of the Japanese archipelago. Kyoto prefecture, the cultural center of Japan, is located southwest of Tokyo.
Japan has experienced nationwide epidemics of HFMD since the first HFMD case was diagnosed in Tokyo in 1963 [28]. The peak of the HFMD epidemic is usually seen in summer (June to August). However, epidemics may also occur in autumn and winter. A summary of annual data from 2008 to 2014 is shown in Table 1. The number of sentinels in Japan during 2008–2014 was about 3,100 for HFMD surveillance. There have been two large-scale HFMD epidemics since 2008, the first in 2011 (total 347,407 cases; 110.89 per sentinel) and the second in 2013 (total 303,339; 96.54 per sentinel). The year 2011 experienced the largest HFMD epidemic since the establishment of NESID.
Table 1 Annual data summary of HFMD sentinel surveillance, Japan, 2008–2014
Table 2 provides the prefecture-level HFMD data summary during 2008–2014. In 2011 and 2013, the two large-scale HFMD epidemic years, the weekly averages of cases per sentinel were 1–4 cases. The maximum values of cases per sentinel were between 4 and 42 in 2011 and 2013; the largest, 42.26 cases per sentinel, occurred in Saga prefecture in 2011. The weekly averages of cases per sentinel were less than one case for most prefectures in other years. The number of sentinels is determined in proportion to the population of a prefecture.
Table 2 Weekly prefecture-level HFMD data summary, Japan, 2008–2014
To further explore the relationships of geographical locations of prefectures in Japan to features of HFMD epidemics, a heat map created using the gplot package in R software is provided in Fig. 1b. The heat map summarizes information on week of year in columns, and integrates prefecture-level HFMD Z rate data sets during the study period in rows. Larger values are represented by lighter color blocks and smaller values by darker color blocks. From bottom to top, prefectures in Japan were sorted by latitude from low to high. Lighter color blocks in each row indicated the timing of the HFMD peak period of each prefecture in Japan. The two brightest timing bands in Fig. 1b display two large-scale HFMD epidemics for 2011 and 2013, respectively. From bottom to top, the two brightest timing bands show that the HFMD peak time of each prefecture in Japan moved from left to right gradually. This phenomenon reveals the prefecture-level HFMD peak time in Japan moving in a south–north direction over time.
Figure 2a indicates that there have been three large HFMD epidemics in Tokyo during 2008–2014, the first in 2010, the second in 2011, and the last in 2013. Most HFMD epidemics in Tokyo have displayed a common trend of steady increase beginning in April or May, rapid increase during May or June, a peak from July to August, a quick decline in September, and finally, steady decrease until the next February. Figure 2b presents four large-scale epidemics in Kyoto during 2008–2014 which occurred in 2008, 2010, 2011 and 2013. The Z rates were low in January to March in Kyoto, then began to ascend starting in April, and a sharp increase appeared during June to July. A comparison between Figs. 2a and b reveals that the epidemic of 2013 was the largest one since 2008 in Tokyo, while the epidemic of 2011 was the largest one in Kyoto. Overall, the epidemic peaks in Tokyo mostly occurred in weeks 28–31, later than in Kyoto, where the peaks mostly occurred in weeks 26–31.
Weekly Z rates distribution of HFMD, Tokyo, 2008 to 2014 (a), and weekly Z rates distribution of HFMD, Kyoto, 2008 to 2014 (b). a The epidemic peaks in Tokyo mostly occurred in weeks 28–31 during the study period. b The epidemic peaks in Kyoto mostly occurred in weeks 26–31 during the study period
The south–north Z rate differences of Tokyo and Kyoto are shown in Figs. 3a and b, together with their weekly Z rates. It is clear that the two weekly series had similar patterns and that the cycles of the south–north Z rate differences are ahead of the HFMD epidemic cycles in both figures. The peak of the south–north Z rate differences is much earlier than the peak of Z rates.
The south–north Z rate differences together with weekly Z rates, Tokyo, 2008–2014 (a), and the south–north Z rate differences together with weekly Z rates, Kyoto, 2008–2014 (b). a Blue lines represent the difference between the means of Z rates in areas south of Tokyo and in areas north of Tokyo for each week in the study period. b Blue lines represent the difference between the means of Z rates in areas south of Kyoto and in areas north of Kyoto for each week in the study period
Figures 4a and b illustrate HFMD epidemic monitoring indicators in Tokyo and Kyoto in 2011 and 2013, respectively. The monitoring indicators gauge the epidemic trend of HFMD in the following weeks. In Fig. 4a, the monitoring indicators in Tokyo showed colors of activation or mild signals before week 20. Purple signals, indicating that the momentum of the epidemic was strong, started to flash from week 23 to week 28, and the peak was reached at week 31 in 2011. Figure 4b reveals that the monitoring indicators in Tokyo began to send an activation signal at week 11, then alerts turned from mild to moderate; finally the monitoring indicators also registered a 6th consecutive strong trend at week 27, and the peak was reached at week 30 in 2013. In Kyoto, the first alert, an activation signal, was issued at week 2, and the monitoring indicators flashed 6 consecutive purple signals starting from the 21st week; the peak was reached at week 28 in 2011. The epidemic in Kyoto in 2013 was smaller but more irregular than the epidemic in 2011. Figure 4d shows what appear to be two peaks in Kyoto in 2013. The monitoring indicators in Kyoto began to send an activation signal at week 11, and registered a third consecutive purple signal at the 25th week in 2013. There were no alerts issued for the second peak in Kyoto in 2013. In Fig. 4, we can also observe that no alerts were issued during the second half of 2011 and 2013 (the non-epidemic periods) except a total of 3 activation alerts in Tokyo in 2013.
HFMD epidemic monitoring indicators, Tokyo, 2011 (a) and 2013 (b) and HFMD epidemic monitoring indicators, Kyoto, 2011 (c) and 2013 (d). The epidemic trend of HFMD in the coming weeks were categorized as mild (orange), moderate (red) and strong (purple). The yellow represents an activation signal
Major epidemics during 2011 and 2013 in Tokyo and Kyoto were predicted and are shown for these two years separately in Tables 3 and 4. The predicted values of Z rates are converted to weekly cases per sentinel and listed in these two tables. Although most 95 % prediction intervals cover the true values, the predictive model slightly underestimates weekly cases per sentinel during the peak weeks. The average absolute errors of predicted values for 2011 and 2013 in the two cities were 0.14, 0.49, 0.23 and 0.17 cases per sentinel, respectively. Figures 4a and b also clearly demonstrate the relationship between the true values and the predicted values. Overall, the predictive model provides effective prediction of HFMD epidemic trends.
Table 3 Weekly reported and predicted cases per sentinel of major HFMD epidemics, Tokyo, 2011 and 2013
Table 4 Weekly reported and predicted cases per sentinel of major HFMD epidemics, Kyoto, 2011 and 2013
Our study of the influence of latitude on the spatiotemporal characteristics of HFMD epidemics has yielded several notable findings. The two brightest timing bands in Fig. 1b reveal the influence of latitude variation on the spatiotemporal features of the HFMD epidemic, with the peak time moving in a south–north direction over time. The influence of latitude variation on the spatial spreading of HFMD is clear and provides an important basis for detecting HFMD epidemic trends in this study. In other words, the annual epidemic of HFMD started in the south and then gradually spread to the north. We adopt the correlation coefficient as the evaluation indicator to identify the relationship between the south–north Z rate differences and Z rates of a designated area in Rule 1. For most prefectures in Japan, the correlation coefficients between the south–north Z rate differences and Z rates reached statistically significant maximum values with a three-week or four-week lag. These lag values indicate that the south–north Z rate differences are ahead of the HFMD epidemic cycles and provide an important basis for Rule 1. A four-color gauge for the HFMD epidemic monitoring process, indicating the degree of severity of the epidemic, is provided in Rule 2. The monitoring results show that the proposed statistical approach, which takes into consideration the impact of latitude variation on HFMD epidemics, performed well in early aberration detection and predicting the epidemic trend. On the basis of the temporal feature of HFMD epidemics, this study also developed models for prediction of the activity of HFMD epidemics one week ahead, with an alert issued by the proposed aberration detection rules. For HFMD epidemics exhibiting annual variation, the predictive model is used to calculate a predicted value for the next week based on current year data.
Weekly prefecture-level data from 2008 to 2014 were adequate for exploring spatiotemporal trends of HFMD epidemics in Japan. When many spatial regions are under surveillance, aberration detection methods that contain spatial information may be more powerful, but they require an understanding of the nature of the spatial pattern, including how it changes over time. Zhuang et al. [25] extracted the spatial distribution of HFMD infections in China and found that regions with a higher monthly incidence rate of HFMD periodically shifted, following the pattern of south–north–south from March to December. In this study, only the south–north Z rate differences were used in the regression model to detect the spatial spreading of HFMD. Spatial autocorrelation may be simultaneously taken into account in the future work so as to faithfully determine the influence of latitude variation [29, 30].
The spatiotemporal characteristic of latitude was identified in this study. For more comprehensively and objectively understanding the influences of surrounding factors on spatiotemporal trends of HFMD epidemics, more factors with potential impact, such as population density, population flow, medical level, etc., should be taken into consideration in the spatiotemporal modeling for further study [30–32].
The idea of categorizing the epidemic trend of HFMD into three groups (mild, moderate and strong) was to conveniently distinguish the different severity levels of future HFMD epidemic trends. However, it is crucial to choose the cut-off points that determine which kind of aberration alert should be issued. In this study, the value of the slope was expressed in terms of percentage increase to reflect the severity levels of the future epidemic trend in Japan. The relative amplitude and severity of HFMD epidemics vary according to different causative viruses and different geographical regions, so cut-off points of the slope estimates for determining these three categories may be chosen to suit local circumstances. In this study, we also tried to detect and predict HFMD epidemic trends by considering only the spatial differences and the timing of the aberrations. Although the predictive model fits the epidemic trend well, producing accurate predictions remains a challenge. To enable proper strategies for both prevention and timely control, more variables with potential impact, including environmental factors (e.g. climate variables) and socioeconomic factors, should be considered in further studies to increase the predictive power of the model, because HFMD is a complex communicable disease [30, 31].
One limitation of this study is due to the assumption that HFMD epidemics were influenced by latitude variation and followed a spatial spread pattern from the south to the north. It is possible that our results will not generalize to other infectious diseases without such a spatiotemporal characteristic. The objective of this study, however, was to explore the influence of the variation in latitude on HFMD epidemics. It is possible that using a large amount of surveillance data and taking into account more characteristics of the studied infectious disease could further improve detection performance.
The use of a regression approach may induce another limitation of this paper. Regression analysis is widely used for aberration detection and prediction, but regression modeling generally requires a considerable amount of data to provide stable parameter estimates. In some areas such as a designated area located at or near the southernmost or the northernmost part of a geographic range (e.g. Okinawa and Hokkaido in Japan), it may not be feasible to monitor HFMD epidemics by using the proposed approach. For such areas, time series models, such as SARIMA models, may be utilized for interpreting and applying the HFMD surveillance data for disease control and prevention.
Due to the reporting hierarchy of public health systems, there is an inherent reporting lag in sentinel surveillance data. The time lag between disease onset and the date of report publication was up to one week in Japan. Timeliness is a key performance measure of public health surveillance systems; however, it can vary by disease, intended use of the data, and public health system level. The incubation period of HFMD is 3 to 5 days (with a range from 2 days to 2 weeks). In Japan, the time lapse between onset date and the date of report was short enough to initiate preventive measures and provide early health warnings to the public. In Hong Kong, Malaysia, Japan, the Republic of Korea and Singapore, sentinel surveillance systems have been implemented to monitor HFMD epidemics on a weekly basis in order to allow the health authorities to issue early warning of seasonal activity, detect abnormal aberrations and assess the impact of public health control measures. Current sentinel reporting timeliness in Japan may be sufficient to support an immediate public health response in the event of an HFMD epidemic.
Finally, HFMD is caused by several enteroviruses. In this study, there was insufficient information on causal agents, for example, on whether enterovirus 71 (EV71) or coxsackievirus A16 (CVA16) was responsible for the epidemics in Japan during the study period. This may have affected the disease duration or epidemic peaks [33]. In addition, the HFMD case identification depended mainly on clinical presentation, without confirming the diagnosis by microbiological or serological tests, hence resulting in potential misdiagnosis. However, HFMD is considered to be an easily recognized disease by pediatricians [16, 34, 35].
The frequency and scale of HFMD outbreaks are expected to increase [3, 12], and threaten the health security of various nations due to continuing viral mutation [3], climate change [1, 23], and the lack of health resources and effective surveillance systems in some countries [3]. A reliable early warning model can help public health agencies to take preventive actions to control HFMD epidemics at an early stage, thus reducing their impact on the health system and society.
This paper first attempts to explore the influence of latitude variation on HFMD epidemics. We have used only the latitude, one spatiotemporal feature of HFMD, to develop rules for early aberration detection and prediction. We have also demonstrated that the proposed rules performed well on real data in terms of accuracy and timeliness. Although our approach may provide helpful information for controlling epidemics and minimizing the impact of diseases, the performance could be further improved by including other influential meteorological factors along with the proposed latitude-based approach, which is worth further investigation.
Feng H, Duan G, Zhang R, Zhang W. Time series analysis of hand-foot-mouth disease hospitalization in Zhengzhou: establishment of forecasting models using climate variables as predictors. PLoS One. 2014;9(1):e87916.
Hamaguchi T, Fujisawa H, Sakai K, Okino S, Kurosaki N, Nishimura Y, et al. Acute encephalitis caused by intrafamilial transmission of enterovirus 71 in adult. Emerg Infect Dis. 2008;14(5):828–30.
WHO. A guide to clinical management and public health response for hand, foot and mouth disease (HFMD). Geneva, Switzerland: World Health Organization; 2011.
Hii YL, Rocklov J, Ng N. Short term effects of weather on hand, foot and mouth disease. PLoS One. 2011;6(2):e16796.
Ang LW, Koh BK, Chan KP, Chua LT, James L, Goh KT. Epidemiology and control of hand, foot and mouth disease in Singapore, 2001–2007. Ann Acad Med Singapore. 2009;38(2):106–12.
Chua KB, Kasri AR. Hand foot and mouth disease due to enterovirus 71 in Malaysia. Virol Sin. 2011;26(4):221–8.
Ma E, Lam T, Chan KC, Wong C, Chuang SK. Changing epidemiology of hand, foot, and mouth disease in Hong Kong, 2001–2009. Jpn J Infect Dis. 2010;63(6):422–6.
Chan TC, Hwang JS, Chen RH, King CC, Chiang PH. Spatio-temporal analysis on enterovirus cases through integrated surveillance in Taiwan. BMC Public Health. 2014;14.
Onozuka D, Hashizume M. The influence of temperature and humidity on the incidence of hand, foot, and mouth disease in Japan. Sci Total Environ. 2011;410–411:119–25.
Xing W, Liao Q, Viboud C, Zhang J, Sun J, Wu JT, et al. Hand, foot, and mouth disease in China, 2008–12: an epidemiological study. Lancet Infect Dis. 2014;14(4):308–18.
Zhu Q, Hao Y, Ma J, Yu S, Wang Y. Surveillance of hand, foot, and mouth disease in mainland China (2008–2009). Biomed Environ Sci. 2011;24(4):349–56.
Ho M, Chen ER, Hsu KH, Twu SJ, Chen KT, Tsai SF, et al. An epidemic of enterovirus 71 infection in Taiwan. Taiwan enterovirus epidemic working group. N Engl J Med. 1999;341(13):929–35.
Suzuki Y, Taya K, Nakashima K, Ohyama T, Kobayashi JM, Ohkusa Y, et al. Risk factors for severe hand foot and mouth disease. Pediatr Int. 2010;52(2):203–7.
Tseng FC, Huang HC, Chi CY, Lin TL, Liu CC, Jian JW, et al. Epidemiological survey of enterovirus infections occurring in Taiwan between 2000 and 2005: analysis of sentinel physician surveillance data. J Med Virol. 2007;79(12):1850–60.
Samuda GM, Chang WK, Yeung CY, Tang PS. Monoplegia caused by Enterovirus 71: an outbreak in Hong Kong. Pediatr Infect Dis J. 1987;6(2):206–8.
Bendig JW, Fleming DM. Epidemiological, virological, and clinical features of an epidemic of hand, foot, and mouth disease in England and Wales. Commun Dis Rep CDR Rev. 1996;6(6):R81–86.
Blomqvist S, Klemola P, Kaijalainen S, Paananen A, Simonen ML, Vuorinen T, et al. Co-circulation of coxsackieviruses A6 and A10 in hand, foot and mouth disease outbreak in Finland. J Clin Virol. 2010;48(1):49–54.
Buckeridge DL, Musen MA, Switzer P, Crubezy M. An analytic framework for space-time aberrancy detection in public health surveillance data. Proc AMIA Symp. 2003;120–124.
Buckeridge DL, Burkom H, Campbell M, Hogan WR, Moore AW, Project B. Algorithms for rapid outbreak detection: a research synthesis. J Biomed Inform. 2005;38(2):99–113.
Stroup DF, Williamson GD, Herndon JL. Detection of aberrations in the occurrence of notifiable diseases surveillance data. Stat Med. 1989;8(3):323–9.
Li Z, Lai S, Buckeridge DL, Zhang H, Lan Y, Yang W. Adjusting outbreak detection algorithms for surveillance during epidemic and non-epidemic periods. J Am Med Inform Assoc. 2012;19(e1):e51–53.
Wang Y, Feng ZJ, Yang Y, Self S, Gao YJ, Longini IM, et al. Hand, foot, and mouth disease in China patterns of spread and transmissibility. Epidemiology. 2011;22(6):781–92.
Channel estimation via gradient pursuit for mmWave massive MIMO systems with one-bit ADCs
In-soo Kim and Junil Choi
In millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, 1 bit analog-to-digital converters (ADCs) are employed to reduce the impractically high power consumption, which is incurred by the wide bandwidth and large arrays. In practice, the mmWave band consists of a small number of paths, thereby rendering sparse virtual channels. Then, the resulting maximum a posteriori (MAP) channel estimation problem is a sparsity-constrained optimization problem, which is NP-hard to solve. In this paper, iterative approximate MAP channel estimators for mmWave massive MIMO systems with 1 bit ADCs are proposed, which are based on the gradient support pursuit (GraSP) and gradient hard thresholding pursuit (GraHTP) algorithms. The GraSP and GraHTP algorithms iteratively pursue the gradient of the objective function to approximately optimize convex objective functions with sparsity constraints, which are the generalizations of the compressive sampling matching pursuit (CoSaMP) and hard thresholding pursuit (HTP) algorithms, respectively, in compressive sensing (CS). However, the performance of the GraSP and GraHTP algorithms is not guaranteed when the objective function is ill-conditioned, which may be incurred by the highly coherent sensing matrix. In this paper, the band maximum selecting (BMS) hard thresholding technique is proposed to modify the GraSP and GraHTP algorithms, namely, the BMSGraSP and BMSGraHTP algorithms, respectively. The BMSGraSP and BMSGraHTP algorithms pursue the gradient of the objective function based on the band maximum criterion instead of the naive hard thresholding. In addition, a fast Fourier transform-based (FFT-based) fast implementation is developed to reduce the complexity. The BMSGraSP and BMSGraHTP algorithms are shown to be both accurate and efficient, whose performance is verified through extensive simulations.
In millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems, the wide bandwidth of the mmWave band in the range of 30–300 GHz offers a high data rate, which guarantees a significant performance gain [1–4]. However, the power consumption of analog-to-digital converters (ADCs) is scaled quadratically with the sampling rate and exponentially with the ADC resolution, which renders high-resolution ADCs impractical for mmWave systems [5]. To reduce the power consumption, low-resolution ADCs were suggested as a possible solution, which recently gained popularity [6–9]. Coarsely quantizing the received signal using low-resolution ADCs results in an irreversible loss of information, which might cause a significant performance degradation. In this paper, we consider the extreme scenario of using 1 bit ADCs for mmWave systems.
In practice, the mmWave band consists of a small number of propagation paths, which results in sparse virtual channels. From a channel estimation point of view, sparse channels are favorable because the required complexity and measurements can be reduced. Sparsity-constrained channel distributions, however, cannot be described in closed forms, which makes it difficult to exploit Bayesian channel estimation. In [10, 11], channel estimators for massive MIMO systems with 1 bit ADCs were proposed, which account for the effect of the coarse quantization. The near maximum likelihood (nML) channel estimator [10] selects the maximizer of the likelihood function subject to the L2-norm constraint as the estimate of the channel, which is solved using the projected gradient descent method [12]. However, the channel sparsity was not considered in [10]. In [11], the Bussgang linear minimum mean squared error (BLMMSE) channel estimator was derived by linearizing 1 bit ADCs based on the Bussgang decomposition [13]. The BLMMSE channel estimator is an LMMSE channel estimator for massive MIMO systems with 1 bit ADCs, which assumes that the channel is Gaussian. Therefore, the sparsity of the channel is not taken into account in [11] either.
To take the channel sparsity into account, iterative approximate MMSE estimators for mmWave massive MIMO systems with 1 bit ADCs were proposed in [14, 15]. The generalized expectation consistent signal recovery (GEC-SR) algorithm in [14] is an iterative approximate MMSE estimator based on the turbo principle [16], which can be applied to any nonlinear function of the linearly mapped signal to be estimated. Furthermore, the only constraint on the distribution of the signal to be estimated is that its elements must be independent and identically distributed (i.i.d.) random variables. Therefore, the GEC-SR algorithm can be used as an approximate MMSE channel estimator for any ADC resolution ranging from 1 bit to high-resolution ADCs. However, the inverse of the sensing matrix is required at each iteration, which is impractical in massive MIMO systems from a complexity point of view.
The generalized approximate message passing-based (GAMP-based) channel estimator for mmWave massive MIMO systems with low-resolution ADCs was proposed in [15], which is another iterative approximate MMSE channel estimator. In contrast to the GEC-SR algorithm, only matrix-vector multiplications are required at each iteration, which is favorable from a complexity point of view. As in the GEC-SR algorithm, the GAMP-based algorithm can be applied to any ADC resolution and any channel distribution as long as the elements of the channel are i.i.d. random variables. The performance of the GEC-SR and GAMP algorithms, however, cannot be guaranteed when the sensing matrix is ill-conditioned, which frequently occurs in the mmWave band. To prevent the sensing matrix from becoming ill-conditioned, the GAMP-based channel estimator constructs the virtual channel representation using discrete Fourier transform (DFT) matrices, whose columns are orthogonal. However, such a virtual channel representation results in a large gridding error, which leads to performance degradation.
Our goal is to propose an iterative approximate maximum a posteriori (MAP) channel estimator for mmWave massive MIMO systems with 1 bit ADCs. Due to the sparse nature, the MAP channel estimation problem is converted to a sparsity-constrained optimization problem, which is NP-hard to solve [17]. To approximately solve such problems iteratively, the gradient support pursuit (GraSP) and gradient hard thresholding pursuit (GraHTP) algorithms were proposed in [17] and [18], respectively. The GraSP and GraHTP algorithms pursue the gradient of the objective function at each iteration by hard thresholding. These algorithms are the generalizations of the compressive sampling matching pursuit (CoSaMP) [19] and hard thresholding pursuit (HTP) [20] algorithms, respectively, in compressive sensing (CS).
With a highly coherent sensing matrix, however, the GraSP and GraHTP algorithms do not perform well since the objective function becomes ill-conditioned. To remedy such a breakdown, we exploit the band maximum selecting (BMS) hard thresholding technique, which is then applied to the GraSP and GraHTP algorithms to propose the BMSGraSP and BMSGraHTP algorithms, respectively. The proposed BMS-based algorithms perform hard thresholding on the gradient of the objective function based on the proposed band maximum criterion, which tests whether an index is a ground truth index or the by-product of another index. To reduce the complexity of the BMS-based algorithms, a fast Fourier transform-based (FFT-based) fast implementation of the objective function and gradient is proposed. The BMS-based algorithms are shown to be both accurate and efficient, which is verified through extensive simulations.
The rest of this paper is organized as follows. In Section 2, mmWave massive MIMO systems with 1 bit ADCs are described. In Section 3, the MAP channel estimation framework is formulated. In Section 4, the BMS hard thresholding technique is proposed, which is applied to the GraSP and GraHTP algorithms. In addition, an FFT-based fast implementation is proposed. In Section 5, the results and discussion are presented, and conclusions are drawn in Section 6.
Notation: \(a\), \(\mathbf{a}\), and \(\mathbf{A}\) denote a scalar, vector, and matrix, respectively. ∥a∥0,∥a∥1, and ∥a∥ represent the L0-, L1-, and L2-norm of a, respectively. ∥A∥F is the Frobenius norm of A. The transpose, conjugate transpose, and conjugate of A are denoted as AT,AH, and \(\overline {\mathbf {A}}\), respectively. The element-wise vector multiplication and division of a and b are denoted as a⊙b and a⊘b, respectively. The sum of all of the elements of a is denoted as sum(a). The vectorization of A is denoted as vec(A), which is formed by stacking all of the columns of A. The unvectorization of a is denoted as unvec(a), which is the inverse of vec(A). The Kronecker product of A and B is denoted as A⊗B. The support of a is denoted as supp(a), which is the set of indices formed by collecting all of the indices of the non-zero elements of a. The best s-term approximation of a is denoted as a|s, which is formed by leaving only the s largest (in absolute value) elements of a and replacing the other elements with 0. Similarly, the vector obtained by leaving only the elements of a indexed by the set \(\mathcal {A}\) and replacing the other elements with 0 is denoted as \(\mathbf {a}|_{\mathcal {A}}\). The absolute value of a scalar a and cardinality of a set \(\mathcal {A}\) are denoted as |a| and \(|\mathcal {A}|\), respectively. The set difference between the sets \(\mathcal {A}\) and \(\mathcal {B}\), namely, \(\mathcal {A}\cap \mathcal {B}^{\mathrm {c}}\), is denoted as \(\mathcal {A}\setminus \mathcal {B}\). ϕ(a) and Φ(a) are element-wise standard normal PDF and CDF functions of a, whose ith elements are \(\frac {1}{\sqrt {2\pi }}e^{-\frac {a_{i}^{2}}{2}}\) and \(\int _{-\infty }^{a_{i}}\frac {1}{\sqrt {2\pi }}e^{-\frac {x^{2}}{2}}dx\), respectively. The m×1 zero vector and m×m identity matrix are denoted as 0m and Im, respectively.
mmWave massive MIMO systems with 1 bit ADCs
As shown in Fig. 1, consider a mmWave massive MIMO system with uniform linear arrays (ULAs) at the transmitter and receiver. The N-antenna transmitter transmits a training sequence of length T to the M-antenna receiver. Therefore, the received signal \(\mathbf {Y}=[\mathbf {y}[1]\quad \mathbf {y}[2]\quad \cdots \quad \mathbf {y}[T]]\in \mathbb {C}^{M\times T}\) is
$$ \mathbf{Y}=\sqrt{\rho}\mathbf{H}\mathbf{S}+\mathbf{N}, $$
Fig. 1 A mmWave massive MIMO system with an N-antenna transmitter and M-antenna receiver. The real and imaginary parts of the received signal are quantized by 1 bit ADCs
which is the collection of the t-th received signal \(\mathbf {y}[t]\in \mathbb {C}^{M}\) over t∈{1,…,T}. In the mmWave band, the channel \(\mathbf {H}\in \mathbb {C}^{M\times N}\) consists of a small number of paths, whose parameters are the path gains, angle-of-arrivals (AoAs), and angle-of-departures (AoDs) [21]. Therefore, H is
$$ \mathbf{H}=\sum_{\ell=1}^{L}\alpha_{\ell}\mathbf{a}_{\text{RX}}(\theta_{\text{RX}, \ell})\mathbf{a}_{\text{TX}}(\theta_{\text{TX}, \ell})^{\mathrm{H}} $$
where L is the number of paths, \(\alpha _{\ell }\in \mathbb {C}\) is the path gain of the ℓ-th path, and θRX,ℓ∈[−π/2,π/2] and θTX,ℓ∈[−π/2,π/2] are the AoA and AoD of the ℓth path, respectively. The steering vectors \(\mathbf {a}_{\text {RX}}(\theta _{\text {RX}, \ell })\in \mathbb {C}^{M}\) and \(\mathbf {a}_{\text {TX}}(\theta _{\text {TX}, \ell })\in \mathbb {C}^{N}\) are
$$\begin{array}{*{20}l} \mathbf{a}_{\text{RX}}(\theta_{\text{RX}, \ell})&=\frac{1}{\sqrt{M}}\left[\begin{array}{lll}1&\cdots&e^{-j\pi(M-1)\sin(\theta_{\text{RX}, \ell})}\end{array}\right]^{\mathrm{T}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{a}_{\text{TX}}(\theta_{\text{TX}, \ell})&=\frac{1}{\sqrt{N}}\left[\begin{array}{lll}1&\cdots&e^{-j\pi(N-1)\sin(\theta_{\text{TX}, \ell})}\end{array}\right]^{\mathrm{T}} \end{array} $$
where the inter-element spacing is half-wavelength. The training sequence \(\mathbf {S}=[\mathbf {s}[1]\quad \mathbf {s}[2]\quad \cdots \quad \mathbf {s}[T]]\in \mathbb {C}^{N\times T}\) is the collection of the tth training sequence \(\mathbf {s}[t]\in \mathbb {C}^{N}\) over t∈{1,…,T}, whose power constraint is ∥s[t]∥2=N. The additive white Gaussian noise (AWGN) \(\mathbf {N}=[\mathbf {n}[1]\quad \mathbf {n}[2]\quad \cdots \quad \mathbf {n}[T]]\in \mathbb {C}^{M\times T}\) is the collection of the tth AWGN \(\mathbf {n}[t]\in \mathbb {C}^{M}\) over t∈{1,…,T}, which is distributed as \(\text {vec}(\mathbf {N})\sim \mathcal {C}\mathcal {N}(\mathbf {0}_{MT}, \mathbf {I}_{MT})\). The signal-to-noise ratio (SNR) is defined as ρ.
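As a minimal numerical sketch of the channel model in (2)–(4), the steering vectors and the L-path channel H can be generated as follows (all function and variable names here are illustrative, not from the paper's code):

```python
import cmath, math, random

def steering(n, theta):
    # a(theta) = (1/sqrt(n)) [1, e^{-j*pi*sin(theta)}, ..., e^{-j*pi*(n-1)*sin(theta)}]^T
    # for a half-wavelength ULA, as in (3)-(4)
    return [cmath.exp(-1j * math.pi * k * math.sin(theta)) / math.sqrt(n)
            for k in range(n)]

def channel(M, N, L, rng=random):
    # H = sum_l alpha_l a_RX(theta_RX,l) a_TX(theta_TX,l)^H, an M x N matrix as in (2),
    # with alpha_l ~ CN(0, 1) and uniform AoAs/AoDs on [-pi/2, pi/2]
    H = [[0j] * N for _ in range(M)]
    for _ in range(L):
        alpha = complex(rng.gauss(0, math.sqrt(0.5)), rng.gauss(0, math.sqrt(0.5)))
        th_rx = rng.uniform(-math.pi / 2, math.pi / 2)
        th_tx = rng.uniform(-math.pi / 2, math.pi / 2)
        a_rx = steering(M, th_rx)
        a_tx = steering(N, th_tx)
        for m in range(M):
            for n in range(N):
                H[m][n] += alpha * a_rx[m] * a_tx[n].conjugate()
    return H
```

Note that each steering vector has unit norm by construction, which matches the 1/√M and 1/√N normalizations in (3)–(4).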
At the receiver, the real and imaginary parts of the received signal are quantized by 1 bit ADCs. The quantized received signal \(\hat {\mathbf {Y}}\) is
$$\begin{array}{*{20}l} \hat{\mathbf{Y}}&=\mathrm{Q}(\mathbf{Y})\\ &=\mathrm{Q}(\sqrt{\rho}\mathbf{H}\mathbf{S}+\mathbf{N}) \end{array} $$
where Q(·) is the 1 bit quantization function, whose threshold is 0. Therefore, Q(Y) is
$$ \mathrm{Q}(\mathbf{Y})=\text{sign}(\text{Re}(\mathbf{Y}))+j\text{sign}(\text{Im}(\mathbf{Y})) $$
where sign(·) is the element-wise sign function. The goal is to estimate H by estimating \(\{\alpha _{\ell }\}_{\ell =1}^{L}, \{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\), and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) from \(\hat {\mathbf {Y}}\).
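The 1 bit quantization in (5)–(6) can be sketched as follows (the handling of an exactly zero input is a convention of this sketch, not specified by the paper):

```python
def sign(x):
    # zero-threshold sign; sign(0) is mapped to +1 by convention here
    return 1.0 if x >= 0 else -1.0

def quantize(y):
    # Q(y) = sign(Re(y)) + j*sign(Im(y)), applied element-wise as in (6)
    return [complex(sign(v.real), sign(v.imag)) for v in y]
```

For example, `quantize([0.3 - 2j])` maps the sample to `1 - 1j`, discarding all amplitude information.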
Virtual channel representation
In the mmWave channel model in (2), \(\{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\) and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) are hidden in \(\{\mathbf {a}_{\text {RX}}(\theta _{\text {RX}, \ell })\}_{\ell =1}^{L}\) and \(\{\mathbf {a}_{\text {TX}}(\theta _{\text {TX}, \ell })\}_{\ell =1}^{L}\), respectively. The non-linear mapping of \(\{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\) and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) to Y renders a non-linear channel estimation problem. To convert the non-linear channel estimation problem to a linear channel estimation problem, we adopt the virtual channel representation [22].
The virtual channel representation of H is
$$ \mathbf{H}\approx\mathbf{A}_{\text{RX}}\mathbf{X}^{*}\mathbf{A}_{\text{TX}}^{\mathrm{H}} $$
where the dictionary pair \(\mathbf {A}_{\text {RX}}\in \mathbb {C}^{M\times B_{\text {RX}}}\) and \(\mathbf {A}_{\text {TX}}\in \mathbb {C}^{N\times B_{\text {TX}}}\) is the collection of BRX≥M steering vectors and BTX≥N steering vectors, respectively. Therefore, ARX and ATX are
$$\begin{array}{*{20}l} \mathbf{A}_{\text{RX}}&=\left[\begin{array}{lll}\mathbf{a}_{\text{RX}}(\omega_{\text{RX}, 1})&\cdots&\mathbf{a}_{\text{RX}}(\omega_{\text{RX}, B_{\text{RX}}})\end{array}\right], \end{array} $$
$$\begin{array}{*{20}l} \mathbf{A}_{\text{TX}}&=\left[\begin{array}{lll}\mathbf{a}_{\text{TX}}(\omega_{\text{TX}, 1})&\cdots&\mathbf{a}_{\text{TX}}(\omega_{\text{TX}, B_{\text{TX}}})\end{array}\right], \end{array} $$
whose gridding AoAs \(\{\omega _{\text {RX}, i}\}_{i=1}^{B_{\text {RX}}}\) and AoDs \(\{\omega _{\text {TX}, j}\}_{j=1}^{B_{\text {TX}}}\) are selected so as to form overcomplete DFT matrices. The gridding AoAs and AoDs are the BRX and BTX points from [−π/2,π/2], respectively, to discretize the AoAs and AoDs because the ground truth AoAs and AoDs are unknown. To make a dictionary pair of the overcomplete DFT matrix form, the gridding AoAs and AoDs are given as ωRX,i= sin−1(−1+2/BRX·(i−1)) and ωTX,j= sin−1(−1+2/BTX·(j−1)), respectively. We prefer overcomplete DFT matrices because they are relatively well-conditioned, and DFT matrices are friendly to the FFT-based implementation, which will be discussed in Section 4. The virtual channel \(\mathbf {X}^{*}\in \mathbb {C}^{B_{\text {RX}}\times B_{\text {TX}}}\) is the collection of \(\{\alpha _{\ell }\}_{\ell =1}^{L}\), whose (i,j)th element is αℓ whenever (ωRX,i,ωTX,j) is the nearest to (θRX,ℓ,θTX,ℓ) and 0 otherwise. In general, the error between H and \(\mathbf {A}_{\text {RX}}\mathbf {X}^{*}\mathbf {A}_{\text {TX}}^{\mathrm {H}}\) is inversely proportional to BRX and BTX. To approximate H using (7) with negligible error, the dictionary pair must be dense, namely, BRX≫M and BTX≫N.
Before we proceed, we provide a supplementary explanation on the approximation in (7). In this work, we attempt to estimate the L-sparse X∗ in (7) because the sparse assumption on X∗ is favorable when the goal is to formulate the channel estimation problem as a sparsity-constrained problem. The cost of assuming that X∗ is L-sparse is, as evident, the approximation error shown in (7). Alternatively, the approximation error can be perfectly removed by considering X∗ satisfying \(\mathbf {H}=\mathbf {A}_{\text {RX}}\mathbf {X}^{*}\mathbf {A}_{\text {TX}}^{\mathrm {H}}\), i.e., equality instead of approximation. One well-known X∗ satisfying the equality is the minimum Frobenius norm solution, i.e., \(\mathbf {X}^{*}=\mathbf {A}_{\text {RX}}^{\mathrm {H}}(\mathbf {A}_{\text {RX}}\mathbf {A}_{\text {RX}}^{\mathrm {H}})^{-1}\mathbf {H}(\mathbf {A}_{\text {TX}}\mathbf {A}_{\text {TX}}^{\mathrm {H}})^{-1}\mathbf {A}_{\text {TX}}\). Such X∗, however, has no evident structure to exploit in channel estimation, which is the reason why we assume that X∗ is L-sparse at the cost of the approximation error in (7).
In practice, the arrays at the transmitter and receiver are typically large to compensate the path loss in the mmWave band, whereas the number of line-of-sight (LOS) and near LOS paths is small [23]. Therefore, X∗ is sparse when the dictionary pair is dense because only L elements among BRXBTX elements are non-zero where L≪MN≪BRXBTX. In the sequel, we use the shorthand notation B=BRXBTX.
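A sketch of an overcomplete DFT dictionary as in (8), with the gridding rule ω_i = sin^{-1}(−1 + 2(i−1)/B) (illustrative names; columns are stored as lists):

```python
import cmath, math

def dictionary(n, b):
    # returns the b columns of an n x b dictionary of steering vectors on the grid;
    # cols[j][k] is the (k, j) element of A_RX (or A_TX)
    cols = []
    for i in range(b):
        omega = math.asin(-1.0 + 2.0 * i / b)  # gridding angle
        cols.append([cmath.exp(-1j * math.pi * k * math.sin(omega)) / math.sqrt(n)
                     for k in range(n)])
    return cols
```

Since sin(ω_i) = −1 + 2(i−1)/B, the column entries reduce to e^{−jπk(−1+2(i−1)/B)}/√n, i.e., an overcomplete DFT matrix; when b = n the columns are exactly orthonormal, and for b > n they become increasingly coherent.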
To facilitate the channel estimation framework, we vectorize (1) and (5) in conjunction with (7). First, note that
$$ \mathbf{Y}=\sqrt{\rho}\mathbf{A}_{\text{RX}}\mathbf{X}^{*}\mathbf{A}_{\text{TX}}^{\mathrm{H}} \mathbf{S}+\mathbf{N}+\mathbf{E} $$
where the gridding error \(\mathbf {E}\in \mathbb {C}^{M\times T}\) represents the mismatch in (7). Then, the vectorized received signal \(\mathbf {y}=\text {vec}(\mathbf {Y})\in \mathbb {C}^{MT}\) is
$$ \mathbf{y}=\sqrt{\rho}\mathbf{A}\mathbf{x}^{*}+\mathbf{n}+\mathbf{e} $$
$$\begin{array}{*{20}l} \mathbf{A}&=\mathbf{S}^{\mathrm{T}}\overline{\mathbf{A}}_{\text{TX}}\otimes\mathbf{A}_{\text{RX}}\\ &=\left[\begin{array}{llll}\mathbf{a}_{1}&\mathbf{a}_{2}&\cdots&\mathbf{a}_{B}\end{array}\right], \end{array} $$
$$\begin{array}{*{20}l} \mathbf{x}^{*}&=\text{vec}(\mathbf{X}^{*})\\ &=\left[\begin{array}{llll}x_{1}^{*}&x_{2}^{*}&\cdots&x_{B}^{*}\end{array}\right]^{\mathrm{T}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{n}&=\text{vec}(\mathbf{N}), \end{array} $$
$$\begin{array}{*{20}l} \mathbf{e}&=\text{vec}(\mathbf{E}). \end{array} $$
The vectorized quantized received signal \(\hat {\mathbf {y}}=\text {vec}(\hat {\mathbf {Y}})\in \mathbb {C}^{MT}\) is
$$\begin{array}{*{20}l} \hat{\mathbf{y}}&=\mathrm{Q}(\mathbf{y})\\ &=\mathrm{Q}(\sqrt{\rho}\mathbf{A}\mathbf{x}^{*}+\mathbf{n}+\mathbf{e}). \end{array} $$
The goal is to estimate the L-sparse x∗ from \(\hat {\mathbf {y}}\).
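The vectorization step behind (11)–(12) rests on the identity vec(AXC) = (C^T ⊗ A)vec(X), which is what yields A = (S^T conj(A_TX)) ⊗ A_RX in (12). The following generic helpers (not the paper's code) give a small numerical check of this identity:

```python
def matmul(A, B):
    # plain matrix product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def vec(A):
    # column-major vectorization: stack the columns of A
    return [A[i][j] for j in range(len(A[0])) for i in range(len(A))]

def kron(A, B):
    # Kronecker product: block (i, j) of A kron B is A[i][j] * B
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q] for j in range(n * q)]
            for i in range(m * p)]
```

With these, vec(matmul(matmul(A, X), C)) agrees entry-by-entry with kron(C^T, A) applied to vec(X).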
Problem formulation
In this section, we formulate the channel estimation problem using the MAP criterion. To facilitate the MAP channel estimation framework, the real counterparts of the complex forms in (16) are introduced. Then, the likelihood function of x∗ is derived.
The real counterparts are the collections of the real and imaginary parts of the complex forms. Therefore, the real counterparts \(\hat {\mathbf {y}}_{\mathrm {R}}\in \mathbb {R}^{2MT}, \mathbf {A}_{\mathrm {R}}\in \mathbb {R}^{2MT\times 2B}\), and \(\mathbf {x}_{\mathrm {R}}^{*}\in \mathbb {R}^{2B}\) are
$$\begin{array}{*{20}l} \hat{\mathbf{y}}_{\mathrm{R}}&=\left[\begin{array}{ll}\text{Re}(\hat{\mathbf{y}})^{\mathrm{T}}&\text{Im} (\hat{\mathbf{y}})^{\mathrm{T}}\end{array}\right]^{\mathrm{T}}\\ &=\left[\begin{array}{llll}\hat{y}_{\mathrm{R}, 1}&\hat{y}_{\mathrm{R}, 2}&\cdots&\hat{y}_{\mathrm{R}, 2MT}\end{array}\right]^{\mathrm{T}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{A}_{\mathrm{R}}&=\left[\begin{array}{cc}\text{Re}(\mathbf{A})&-\text{Im}(\mathbf{A})\\ \text{Im}(\mathbf{A})&\text{Re}(\mathbf{A})\end{array}\right]\\ &=\left[\begin{array}{llll}\mathbf{a}_{\mathrm{R}, 1}&\mathbf{a}_{\mathrm{R}, 2}&\cdots&\mathbf{a}_{\mathrm{R}, 2MT}\end{array}\right]^{\mathrm{T}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{x}_{\mathrm{R}}^{*}&=\left[\begin{array}{ll}\text{Re}(\mathbf{x}^{*}) ^{\mathrm{T}}&\text{Im}(\mathbf{x}^{*})^{\mathrm{T}}\end{array}\right]^{\mathrm{T}}\\ &=\left[\begin{array}{llll}x_{\mathrm{R}, 1}^{*}&x_{\mathrm{R}, 2}^{*}&\cdots&x_{\mathrm{R}, 2B}^{*}\end{array}\right]^{\mathrm{T}}, \end{array} $$
which are the collections of the real and imaginary parts of \(\hat {\mathbf {y}}, \mathbf {A}\), and x∗, respectively. In the sequel, we use the complex forms and the real counterparts interchangeably. For example, x∗ and \(\mathbf {x}_{\mathrm {R}}^{*}\) refer to the same entity.
Before we formulate the likelihood function of x∗, note that e is hard to analyze. However, e is negligible when the dictionary pair is dense. Therefore, we formulate the likelihood function of x∗ without e. The price of such oversimplification is negligible when BRX≫M and BTX≫N, which is to be shown in Section 5 where e≠0MT. To derive the likelihood function of x∗, note that
$$ \sqrt{\rho}\mathbf{A}\mathbf{x}^{*}+\mathbf{n}\sim\mathcal{C}\mathcal{N}(\sqrt{\rho}\mathbf{A}\mathbf{x}^{*}, \mathbf{I}_{MT}) $$
given x∗. Then, from (20) in conjunction with (16), the log-likelihood function f(x) is [10]
$$\begin{array}{*{20}l} f(\mathbf{x})&=\log\text{Pr}\left[\begin{array}{cc}\hat{\mathbf{y}}=\mathrm{Q}(\sqrt{\rho}\mathbf{A} \mathbf{x}+\mathbf{n})\mid\mathbf{x}\end{array}\right]\\ &=\sum_{i=1}^{2MT}\log\Phi\left(\sqrt{2\rho}\hat{y}_{\mathrm{R}, i}\mathbf{a}_{\mathrm{R}, i}^{\mathrm{T}}\mathbf{x}_{\mathrm{R}}\right). \end{array} $$
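A direct numerical sketch of the log-likelihood (21), using Φ(t) = (1 + erf(t/√2))/2 (the names below are illustrative):

```python
import math

def log_Phi(t):
    # log of the standard normal CDF via the error function
    return math.log(0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))

def log_likelihood(yhat_R, A_R, x_R, rho):
    # f(x) = sum_i log Phi(sqrt(2*rho) * yhat_R_i * a_R_i^T x_R), as in (21);
    # yhat_R: list of +/-1 observations, A_R: list of rows a_R_i, x_R: real vector
    f = 0.0
    for yi, ai in zip(yhat_R, A_R):
        t = math.sqrt(2.0 * rho) * yi * sum(a * x for a, x in zip(ai, x_R))
        f += log_Phi(t)
    return f
```

As a sanity check, at x_R = 0 every term equals log Φ(0) = log(1/2), so f(0) = 2MT·log(1/2).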
If the distribution of x∗ is known, the MAP estimate of x∗ is
$$ \underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ (f(\mathbf{x})+g_{\text{MAP}}(\mathbf{x})) $$
where gMAP(x) is the logarithm of the PDF of x∗. In practice, however, gMAP(x) is unknown. Therefore, we formulate the MAP channel estimation framework based on \(\{\alpha _{\ell }\}_{\ell =1}^{L}, \{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\), and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) where we assume the followings:
\(\alpha _{\ell }\sim \mathcal {C}\mathcal {N}(0, 1)\) for all ℓ
θRX,ℓ∼unif([−π/2,π/2]) for all ℓ
θTX,ℓ∼unif([−π/2,π/2]) for all ℓ
\(\{\alpha _{\ell }\}_{\ell =1}^{L}, \{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\), and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) are independent.
Then, the MAP estimate of x∗ considering the channel sparsity is
$$ \underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ (f(\mathbf{x})+g(\mathbf{x}))\enspace\text{s.t.}\enspace\|\mathbf{x}\|_{0}\leq L $$
where g(x)=−∥xR∥² is the logarithm of the PDF of \(\mathcal {C}\mathcal {N}(\mathbf {0}_{B}, \mathbf {I}_{B})\) ignoring the constant factor. Note, however, that it is only the optimization problems (22) and (23) that are equivalent in the sense that their solutions coincide, not gMAP(x) and g(x). In the ML channel estimation framework, the ML estimate of x∗ is
$$ \underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ f(\mathbf{x})\enspace\text{s.t.}\enspace\|\mathbf{x}\|_{0}\leq L. $$
In the sequel, we focus on solving (23) because (23) reduces to (24) when g(x)=0. In addition, we denote the objective function and the gradient in (23) as h(x) and ∇h(x), respectively. Therefore,
$$\begin{array}{*{20}l} h(\mathbf{x})&=f(\mathbf{x})+g(\mathbf{x}), \end{array} $$
$$\begin{array}{*{20}l} \nabla h(\mathbf{x})&=\nabla f(\mathbf{x})+\nabla g(\mathbf{x})\\ &=\left[\begin{array}{llll}\nabla h(x_{1})&\nabla h(x_{2})&\cdots&\nabla h(x_{B})\end{array}\right]^{\mathrm{T}} \end{array} $$
where the differentiation is with respect to x.
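As a sketch, ∇h can be evaluated numerically for the real counterpart using ∇logΦ(a_R^T x_R) = λ(a_R^T x_R)a_R, where λ(·) = ϕ(·)/Φ(·) is the inverse Mills ratio appearing in the gradient expression (30) derived below (illustrative names):

```python
import math

def inv_mills(t):
    # lambda(t) = phi(t) / Phi(t), the inverse Mills ratio
    phi = math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return phi / Phi

def grad_h(yhat_R, A_R, x_R, rho):
    # grad h(x_R) = sum_i lambda(sqrt(2*rho)*yhat_i*a_i^T x) * sqrt(2*rho)*yhat_i*a_i - 2*x_R,
    # combining grad f from the likelihood terms and grad g(x) = -2 x_R
    g = [-2.0 * x for x in x_R]
    c = math.sqrt(2.0 * rho)
    for yi, ai in zip(yhat_R, A_R):
        lam = inv_mills(c * yi * sum(a * x for a, x in zip(ai, x_R)))
        for k, a in enumerate(ai):
            g[k] += lam * c * yi * a
    return g
```

Note λ(0) = ϕ(0)/Φ(0) = √(2/π), which gives a quick consistency check of the implementation.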
Channel estimation via gradient pursuit
In this section, we propose the BMSGraSP and BMSGraHTP algorithms to solve (23), which are the variants of the GraSP [17] and GraHTP [18] algorithms, respectively. Then, an FFT-based fast implementation is proposed. In addition, we investigate the limit of the BMSGraSP and BMSGraHTP algorithms in the high SNR regime with 1 bit ADCs.
Proposed BMSGraSP and BMSGraHTP algorithms
Note that h(x) in (23) is concave because f(x) and g(x) are the sums of the logarithms of Φ(·) and ϕ(·), respectively, which are log-concave [24]. However, (23) is not a convex optimization problem because the sparsity constraint is not convex. Furthermore, solving (23) is NP-hard because of its combinatorial complexity. To approximately optimize convex objective functions with sparsity constraints iteratively by pursuing the gradient of the objective function, the GraSP and GraHTP algorithms were proposed in [17] and [18], respectively.
To solve (23), the GraSP and GraHTP algorithms roughly proceed as follows at each iteration when x is the current estimate of x∗ where the iteration index is omitted for simplicity. First, the best L-term approximation of ∇h(x) is computed, which is
$$ T_{L}(\nabla h(\mathbf{x}))=\nabla h(\mathbf{x})|_{L} $$
where TL(·) is the L-term hard thresholding function. Here, TL(·) leaves only the L largest elements (in absolute value) of ∇h(x), and sets all remaining elements to 0. Then, after the estimate of supp(x∗) is updated by selecting
$$ \mathcal{I}=\text{supp}(T_{L}(\nabla h(\mathbf{x}))), $$
i.e., \(\mathcal {I}\) is the set of indices formed by collecting the L indices of ∇h(x) corresponding to its L largest elements (in absolute value), the estimate of x∗ is updated by solving the following optimization problem
$$ \underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ h(\mathbf{x})\enspace\text{s.t.}\enspace\text{supp}(\mathbf{x})\subseteq\mathcal{I}, $$
which can be solved using convex optimization because the support constraint is convex [24]. The GraSP and GraHTP algorithms are the generalizations of the CoSaMP [19] and HTP [20] algorithms, respectively. This follows because the gradient of the squared error is the scaled proxy of the residual.
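The hard thresholding and support selection steps (27)–(28) can be sketched as follows (illustrative names; real-valued inputs for simplicity):

```python
def hard_threshold(v, L):
    # T_L(v): keep the L largest-magnitude entries of v and zero the rest,
    # and return supp(T_L(v)) as the support estimate, as in (27)-(28)
    idx = sorted(range(len(v)), key=lambda k: -abs(v[k]))[:L]
    out = [0.0] * len(v)
    for k in idx:
        out[k] = v[k]
    return out, set(idx)
```

For example, applying it to the gradient vector [0.1, −3.0, 2.0, 0.5] with L = 2 keeps indices {1, 2} and zeroes the remaining entries.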
To solve (23) using the GraSP and GraHTP algorithms, h(x) is required either to have a stable restricted Hessian [17] or to be strongly convex and smooth [18]. These conditions are the generalizations of the restricted isometry property (RIP) in CS [25], which means that h(x) is likely to satisfy these conditions when A is either a restricted isometry, well-conditioned, or incoherent. In practice, however, A is highly coherent because the dictionary pair is typically dense to reduce the mismatch in (7).
To illustrate how the GraSP and GraHTP algorithms fail to solve (23) when A is highly coherent, consider the real counterpart of ∇h(x). The real counterpart \(\nabla h(\mathbf {x}_{\mathrm {R}})\in \mathbb {R}^{2B}\) is
$$\begin{array}{*{20}l} &\nabla h(\mathbf{x}_{\mathrm{R}})\\ =&\left[\begin{array}{ll}\text{Re}(\nabla h(\mathbf{x}))^{\mathrm{T}}&\text{Im}(\nabla h(\mathbf{x}))^{\mathrm{T}}\end{array}\right]^{\mathrm{T}}\\ =&\sum_{i=1}^{2MT}\lambda\left(\sqrt{2\rho}\hat{y}_{\mathrm{R}, i}\mathbf{a}_{\mathrm{R}, i}^{\mathrm{T}}\mathbf{x}_{\mathrm{R}}\right)\sqrt{2\rho}\hat{y}_{\mathrm{R}, i}\mathbf{a}_{\mathrm{R}, i}-2\mathbf{x}_{\mathrm{R}}, \end{array} $$
which follows from \(\nabla \log \Phi (\mathbf {a}_{\mathrm {R}}^{\mathrm {T}}\mathbf {x}_{\mathrm {R}})=\lambda (\mathbf {a}_{\mathrm {R}}^{\mathrm {T}}\mathbf {x}_{\mathrm {R}})\mathbf {a}_{\mathrm {R}}\) and ∇∥xR∥²=2xR where λ(·)=ϕ(·)⊘Φ(·) is the inverse Mills ratio function. Then, the following observation holds from directly computing ∇h(xi), whose real and imaginary parts are the i-th and (i+B)-th elements of ∇h(xR), respectively.
Observation 1
∇h(xi)=∇h(xj) if ai=aj and xi=xj.
However, Observation 1 is meaningless because ai≠aj unless i=j. To establish a meaningful observation, consider the coherence between ai and aj, which reflects the proximity between ai and aj according to [26,27]
$$ \mu(i, j)=\frac{|\mathbf{a}_{i}^{\mathrm{H}}\mathbf{a}_{j}|}{\|\mathbf{a}_{i}\|\|\mathbf{a}_{j}\|}. $$
Then, using the η-coherence band, which is [26]
$$ B_{\eta}(i)=\{j\mid\mu(i, j)\geq\eta\} $$
where η∈(0,1), we establish the following conjecture when η is sufficiently large.
Conjecture 1
∇h(xi)≈∇h(xj) if j∈Bη(i) and xi=xj.
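The coherence (31) and η-coherence band (32) underlying the conjecture can be computed as in the following sketch (columns stored as lists; illustrative names):

```python
import math

def coherence(ai, aj):
    # mu(i, j) = |a_i^H a_j| / (||a_i|| * ||a_j||), as in (31)
    ip = sum(a.conjugate() * b for a, b in zip(ai, aj))
    na = math.sqrt(sum(abs(a) ** 2 for a in ai))
    nb = math.sqrt(sum(abs(b) ** 2 for b in aj))
    return abs(ip) / (na * nb)

def coherence_band(cols, i, eta):
    # B_eta(i) = { j : mu(i, j) >= eta }, as in (32)
    return {j for j in range(len(cols)) if coherence(cols[i], cols[j]) >= eta}
```

Note that i ∈ B_η(i) always holds since μ(i, i) = 1, and the band widens as η decreases toward 0.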
At this point, we use Conjecture 1 to illustrate how the GraSP and GraHTP algorithms fail to estimate supp(x∗) from (28) by naive hard thresholding when A is highly coherent. To proceed, consider the following example, which assumes that x∗ and \(\hat {\mathbf {Y}}\) are realized with x representing the current estimate of x∗ so as to satisfy
1) \(i=\underset {k\in \{1, 2, \dots, B\}}{\text {argmax}}\ |x_{k}^{*}|\)
2) \(i=\underset {k\in \{1, 2, \dots, B\}}{\text {argmax}}\ |\nabla h(x_{k})|\)
3) \(\mathcal {J}\cap \text {supp}(\mathbf {x}^{*})=\emptyset \)
where i is the index corresponding to the largest element of the ground truth virtual channel x∗, and
$$ \mathcal{J}=\{j\mid j\in B_{\eta}(i), x_{i}=x_{j}\}\setminus\{i\} $$
is the by-product of i. Here, \(\mathcal {J}\) is called the by-product of i because
$$\begin{array}{*{20}l} |\nabla h(x_{j})|&\approx|\nabla h(x_{i})|\\ &=\underset{k\in\{1, 2, \dots, B\}}{\text{max}}\ |\nabla h(x_{k})|, \end{array} $$
which follows from Conjecture 1, holds despite \(x_{j}^{*}=0\) for all \(j\in \mathcal {J}\). In other words, the by-product of i refers to the fact that ∇h(xi) and ∇h(xj) are indistinguishable for all \(j\in \mathcal {J}\) according to (34), but the elements of x∗ indexed by \(\mathcal {J}\) are 0 according to 3). Therefore, when we attempt to estimate supp(x∗) by hard thresholding ∇h(x), the indices in \(\mathcal {J}\) will likely be erroneously selected as the estimate of supp(x∗) because ∇h(xj) and the maximum element of ∇h(x), which is ∇h(xi) according to 2), are indistinguishable for all \(j\in \mathcal {J}\).
To illustrate how (28) cannot estimate supp(x∗) when A is highly coherent, consider another example where ∇h(x) and TL(∇h(x)) are shown in Figs. 2 and 3, respectively. In this example, supp(x∗) is widely spread, whereas most of supp(TL(∇h(x))) are in the coherence band of the index of the maximum element of ∇h(x). This shows that hard thresholding ∇h(x) is not sufficient to distinguish whether an index is the ground truth index or the by-product of another index. To solve this problem, we propose the BMS hard thresholding technique.
Fig. 2: The magnitude of \(\text {unvec}(\nabla h(\mathbf {x}))\in \mathbb {C}^{B_{\text {RX}}\times B_{\text {TX}}}\) at x=0B, namely, before hard thresholding.

Fig. 3: The magnitude of \(\text {unvec}(T_{L}(\nabla h(\mathbf {x})))\in \mathbb {C}^{B_{\text {RX}}\times B_{\text {TX}}}\) at x=0B, namely, after hard thresholding. This shows how hard thresholding ∇h(x) results in an incorrect estimate of supp(x∗) when A is highly coherent. In this example, M=N=16, BRX=BTX=64, T=20, L=4, SNR=20 dB, and supp(x∗) is widely spread.
The BMS hard thresholding function TBMS,L(·) is an L-term hard thresholding function, which is proposed based on Conjecture 1. The BMS hard thresholding technique is presented in Algorithm 1. Line 3 selects the index of the maximum element of ∇h(x) from the unchecked index set as the current index. Line 4 constructs the by-product testing set. Line 5 checks whether the magnitude of the current element of ∇h(x) is at least as large as that of every element in the by-product testing set. In this paper, Line 5 is referred to as the band maximum criterion. If the band maximum criterion is satisfied, the current index is selected as an estimate of supp(x∗) in Line 6. Otherwise, the current index is discarded because it is likely to be the by-product of another index rather than a ground truth index. Line 8 updates the unchecked index set.
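As an illustration, the logic of Algorithm 1 can be sketched in a few lines of Python. This is a sketch under our own naming conventions; the coherence bands B_η(i) are assumed to be precomputed and passed in.

```python
import numpy as np

def bms_hard_threshold(g, band, L):
    """Sketch of Algorithm 1 (BMS hard thresholding); names are ours.

    g    : vector to threshold (e.g., the gradient of h), shape (B,)
    band : band[i] = indices in the coherence band B_eta(i) of index i
    L    : target sparsity
    """
    unchecked = set(range(len(g)))
    support = []
    while unchecked and len(support) < L:
        # Line 3: current index = largest |g_k| among the unchecked indices
        i = max(unchecked, key=lambda k: abs(g[k]))
        # Line 4: by-product testing set = band of i, excluding i itself
        J = [j for j in band[i] if j != i]
        # Line 5: band maximum criterion -- keep i only if it dominates its band
        if all(abs(g[i]) >= abs(g[j]) for j in J):
            support.append(i)      # Line 6: i is kept as a ground truth index
        # otherwise i is likely the by-product of another index and is dropped
        unchecked.discard(i)       # Line 8: update the unchecked index set
    return support
```

For instance, with g = [0, 5, 4.9, 0, 3, 0] and bands of one-index radius, index 2 is rejected as the by-product of index 1, and the support estimate for L=2 is {1, 4}.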
Note that Algorithm 1 is a hard thresholding technique applied to ∇h(x). If the BMS hard thresholding technique is applied to x+κ∇h(x) where κ is the step size, ∇h(x) is replaced by x+κ∇h(x) in the input, output, and Lines 3, 5, and 10 of Algorithm 1. This can be derived using the same logic based on Conjecture 1. Now, we propose the BMSGraSP and BMSGraHTP algorithms to solve (23).
The BMSGraSP and BMSGraHTP algorithms are variants of the GraSP and GraHTP algorithms, respectively. The difference between the BMS-based and non-BMS-based algorithms is that the hard thresholding function is TBMS,L(·) instead of TL(·). The BMSGraSP and BMSGraHTP algorithms are presented in Algorithms 2 and 3, respectively. Lines 3, 4, and 5 of Algorithms 2 and 3 proceed based on roughly the same logic. Line 3 computes the gradient of the objective function. Line 4 selects \(\mathcal {I}\) from the support of the hard thresholded gradient of the objective function. Line 5 maximizes the objective function subject to the support constraint. This can be solved using convex optimization because the objective function is concave and the support constraint is convex. In addition, b is hard thresholded in Line 6 of Algorithm 2 because b is at most 3L-sparse. A natural halting condition of Algorithms 2 and 3 is to halt when the current and previous \(\text {supp}(\tilde {\mathbf {x}})\) are the same. Readers interested in more in-depth analyses of the GraSP and GraHTP algorithms are referred to [17] and [18], respectively.
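To make the structure of Lines 3–6 concrete, the following sketch instantiates a GraSP-style loop for the stand-in objective h(x) = −∥y − Ax∥², i.e., plain sparse least squares instead of the 1 bit likelihood, and with plain TL(·) instead of BMS hard thresholding so the example stays short; all names and dimensions are ours, not from the paper.

```python
import numpy as np

def top_L(v, L):
    """Plain L-term hard thresholding T_L: indices of the L largest |v_i|."""
    return np.sort(np.argsort(np.abs(v))[-L:])

def grasp(y, A, L, iters=10):
    """GraSP-style loop for the stand-in objective h(x) = -||y - A x||^2."""
    B = A.shape[1]
    x = np.zeros(B)
    for _ in range(iters):
        g = 2 * A.T @ (y - A @ x)                           # Line 3: gradient of h
        I = np.union1d(top_L(g, 2 * L), np.flatnonzero(x))  # Line 4: merge supports
        b = np.zeros(B)
        b[I] = np.linalg.lstsq(A[:, I], y, rcond=None)[0]   # Line 5: restricted argmax
        S = top_L(b, L)                                     # Line 6: keep L largest of b
        x = np.zeros(B)
        x[S] = b[S]
    return x

# Usage: exact recovery of a 3-sparse vector from incoherent measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = grasp(A @ x_true, A, 3)
```

Swapping top_L for the BMS hard thresholding of Algorithm 1 in Line 4 yields the BMSGraSP structure.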
Instead of hard thresholding b, we can solve
$$ \tilde{\mathbf{x}}=\underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ h(\mathbf{x})\enspace\text{s.t.}\enspace\text{supp}(\mathbf{x})\subseteq\text{supp}(T_{L}(\mathbf{b})), $$
which is a convex optimization problem, to obtain \(\tilde {\mathbf {x}}\) in Line 6 of Algorithm 2. This is the debiasing variant of Algorithm 2 [17]. The advantage of the debiasing variant of Algorithm 2 is a more accurate estimate of x∗. However, the complexity is increased, which is incurred by solving (35).
In this paper, we assume that only h(x) and ∇h(x) are required at each iteration to solve (23) using Algorithms 2 and 3, which can be accomplished when a first-order method is used to solve the convex optimization problems in Line 5 of Algorithms 2 and 3. An example of such a first-order method is the gradient descent method with the backtracking line search [24].
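For illustration, a minimal first-order maximizer of this kind, gradient ascent with backtracking (Armijo) line search, can be sketched as follows; the parameter names and defaults are ours, not prescribed by [24].

```python
import numpy as np

def gradient_ascent(h, grad_h, x0, alpha=0.3, beta=0.5, tol=1e-6, max_iter=200):
    """Maximize a concave h using only h and grad_h, with backtracking
    (Armijo) line search; a sketch with our own parameter defaults."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        g = grad_h(x)
        if np.linalg.norm(g) < tol:
            break                                  # (near-)stationary point reached
        t = 1.0
        # shrink the step until a sufficient increase of h is achieved
        while h(x + t * g) < h(x) + alpha * t * np.dot(g, g):
            t *= beta
        x = x + t * g
    return x

# Usage: maximize the concave h(x) = -||x - 1||^2, whose maximizer is all ones
h = lambda x: -np.sum((x - 1.0) ** 2)
grad_h = lambda x: -2.0 * (x - 1.0)
x_star = gradient_ascent(h, grad_h, np.zeros(3))
```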
Fast implementation via FFT
In practice, the complexity of Algorithms 2 and 3 is demanding because h(x) and ∇h(x) are required at each iteration, which are high-dimensional functions defined on \(\mathbb {C}^{B}\) where B≫MN. In recent works on channel estimation and data detection in the mmWave band [14,15,28], the FFT-based implementation is widely used because H can be approximated by (7) using overcomplete DFT matrices. In this paper, an FFT-based fast implementation of h(x) and ∇h(x) is proposed, which is motivated by [14,15,28].
To facilitate the analysis, we convert the summations in h(x) and ∇h(xR) to matrix-vector multiplications by algebraically manipulating (21) and (30). Then, we obtain
$$\begin{array}{*{20}l} &h(\mathbf{x})\\ =&\text{sum}(\log\Phi(\sqrt{2\rho}\hat{\mathbf{y}}_{\mathrm{R}}\odot\mathbf{A}_{\mathrm{R}}\mathbf{x}_{\mathrm{R}}))-\|\mathbf{x}_{\mathrm{R}}\|^{2}, \end{array} $$
$$\begin{array}{*{20}l} &\nabla h(\mathbf{x}_{\mathrm{R}})\\ =&\mathbf{A}_{\mathrm{R}}^{\mathrm{T}}(\lambda(\sqrt{2\rho}\hat{\mathbf{y}}_{\mathrm{R}}\odot\mathbf{A}_{\mathrm{R}}\mathbf{x}_{\mathrm{R}})\odot\sqrt{2\rho}\hat{\mathbf{y}}_{\mathrm{R}})-2\mathbf{x}_{\mathrm{R}} \end{array} $$
where we see that the bottlenecks of h(x) and ∇h(x) come from the matrix-vector multiplications involving AR and \(\mathbf {A}_{\mathrm {R}}^{\mathrm {T}}\) resulting from the large size of A. For example, the size of A is 5120×65536 in Section 5 where M=N=64,BRX=BTX=256, and T=80.
To develop an FFT-based fast implementation of the matrix-vector multiplications involving AR and \(\mathbf {A}_{\mathrm {R}}^{\mathrm {T}}\), define \(\mathbf {c}_{\mathrm {R}}\in \mathbb {R}^{2MT}\) as \(\mathbf {c}_{\mathrm {R}}=\lambda (\sqrt {2\rho }\hat {\mathbf {y}}_{\mathrm {R}}\odot \mathbf {A}_{\mathrm {R}}\mathbf {x}_{\mathrm {R}})\odot \sqrt {2\rho }\hat {\mathbf {y}}_{\mathrm {R}}\) from (37) with \(\mathbf {c}\in \mathbb {C}^{MT}\) being the complex form of cR. From the fact that
$$\begin{array}{*{20}l} \mathbf{A}_{\mathrm{R}}\mathbf{x}_{\mathrm{R}}&=\left[\begin{array}{ll}\text{Re}(\mathbf{A}\mathbf{x})^{\mathrm{T}} &\text{Im}(\mathbf{A}\mathbf{x})^{\mathrm{T}}\end{array}\right]^{\mathrm{T}}, \end{array} $$
$$\begin{array}{*{20}l} \mathbf{A}_{\mathrm{R}}^{\mathrm{T}}\mathbf{c}_{\mathrm{R}}&=\left[\begin{array}{ll}\text{Re}(\mathbf{A} ^{\mathrm{H}}\mathbf{c})^{\mathrm{T}}&\text{Im}(\mathbf{A}^{\mathrm{H}}\mathbf{c})^{\mathrm{T}} \end{array}\right]^{\mathrm{T}}, \end{array} $$
we now attempt to compute Ax and AHc via the FFT. Then, Ax and AHc are unvectorized according to
$$\begin{array}{*{20}l} \text{unvec}(\mathbf{A}\mathbf{x})&=\mathbf{A}_{\text{RX}}\mathbf{X}\mathbf{A}_{\text{TX}}^{\mathrm{H}}\mathbf{S}\\ &=\underbrace{\mathbf{A}_{\text{RX}}(\underbrace{\mathbf{S}^{\mathrm{H}}(\underbrace{\mathbf{A}_{\text{TX}}\mathbf{X}^{\mathrm{H}}}_{\text{FFT}})}_{\text{IFFT}})^{\mathrm{H}}}_{\text{FFT}}, \end{array} $$
$$\begin{array}{*{20}l} \text{unvec}(\mathbf{A}^{\mathrm{H}}\mathbf{c})&=\mathbf{A}_{\text{RX}}^{\mathrm{H}}\mathbf{C}\mathbf{S}^{\mathrm{H}}\mathbf{A}_{\text{TX}}\\ &=\underbrace{\mathbf{A}_{\text{RX}}^{\mathrm{H}}(\underbrace{\mathbf{A}_{\text{TX}}^{\mathrm{H}}(\underbrace{\mathbf{S}\mathbf{C}^{\mathrm{H}}}_{\text{FFT}})}_{\text{IFFT}})^{\mathrm{H}}}_{\text{IFFT}} \end{array} $$
where \(\mathbf {X}=\text {unvec}(\mathbf {x})\in \mathbb {C}^{B_{\text {RX}}\times B_{\text {TX}}}\) and \(\mathbf {C}=\text {unvec}(\mathbf {c})\in \mathbb {C}^{M\times T}\). If the matrix multiplication involving S can be implemented using the FFT, e.g., Zadoff-Chu (ZC) [29] or DFT [11] training sequence, (40) and (41) can be implemented using the FFT because ARX and ATX are overcomplete DFT matrices. For example, each column of ATXXH in (40) can be computed using the BTX-point FFT with pruned outputs, i.e., retaining only N outputs, because we constructed ATX as an overcomplete DFT matrix.
In particular, the matrix multiplications involving ATX, SH, and ARX in (40) can be implemented using the BTX-point FFT with pruned outputs repeated BRX times, the T-point IFFT with pruned inputs repeated BRX times, and the BRX-point FFT with pruned outputs repeated T times, respectively (Footnote 4). Using the same logic, the matrix multiplications involving \(\mathbf {S}, \mathbf {A}_{\text {TX}}^{\mathrm {H}}\), and \(\mathbf {A}_{\text {RX}}^{\mathrm {H}}\) in (41) can be implemented using the T-point FFT with pruned outputs repeated M times, the BTX-point IFFT with pruned inputs repeated M times, and the BRX-point IFFT with pruned inputs repeated BTX times, respectively. Therefore, the complexity of the FFT-based implementation of (40) and (41) is O(BRXBTX logBTX+BRXT logT+TBRX logBRX) and O(MT logT+MBTX logBTX+BTXBRX logBRX), respectively.
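As a minimal illustration of the pruned-FFT idea, the following sketch verifies that multiplying by an overcomplete DFT matrix equals an FFT with pruned outputs. Here ARX is built as the first M rows of a BRX-point DFT matrix as a stand-in for the overcomplete DFT dictionary; the scaling convention and dimensions are our assumptions.

```python
import numpy as np

M, B_RX, K = 8, 32, 16
A_RX = np.fft.fft(np.eye(B_RX))[:M, :]     # M x B_RX overcomplete DFT matrix
X = np.random.randn(B_RX, K) + 1j * np.random.randn(B_RX, K)

direct = A_RX @ X                          # direct matrix product
via_fft = np.fft.fft(X, axis=0)[:M, :]     # B_RX-point FFT, outputs pruned to M

assert np.allclose(direct, via_fft)        # identical up to floating-point error
```

The same identity, applied column by column and combined with the corresponding pruned IFFTs, is what drives the complexity reduction stated above.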
To illustrate the efficiency of the FFT-based implementation of (40) and (41), M/N,M/BRX,M/BTX, and M/T are assumed to be fixed. Then, the complexity of the FFT-based implementation of Ax and AHc is O(M2 logM), whereas the complexity of directly computing Ax and AHc is O(M4). Therefore, the complexity of Algorithms 2 and 3 is reduced when h(x) and ∇h(x) are implemented using the FFT operations.
Line 5 of Algorithms 2 and 3 is equivalent to solving
$$ \underset{\mathbf{x}_{\mathcal{I}}\in\mathbb{C}^{|\mathcal{I}|}}{\text{argmax}}\ h_{\mathcal{I}}(\mathbf{x}_{\mathcal{I}})=\underset{\mathbf{x}_{\mathcal{I}}\in\mathbb{C}^{|\mathcal{I}|}}{\text{argmax}}\ (f_{\mathcal{I}}(\mathbf{x}_{\mathcal{I}})+g_{\mathcal{I}}(\mathbf{x}_{\mathcal{I}})) $$
where
$$\begin{array}{*{20}l} f_{\mathcal{I}}(\mathbf{x}_{\mathcal{I}})&=\log\text{Pr}\left[\hat{\mathbf{y}}=\mathrm{Q}(\sqrt{\rho} \mathbf{A}_{\mathcal{I}}\mathbf{x}_{\mathcal{I}}+\mathbf{n})\mid\mathbf{x}_{\mathcal{I}}\right], \end{array} $$
$$\begin{array}{*{20}l} g_{\mathcal{I}}(\mathbf{x}_{\mathcal{I}})&=-\|\mathbf{x}_{\mathcal{I}}\|^{2}, \end{array} $$
and \(\mathbf {A}_{\mathcal {I}}\in \mathbb {C}^{MT\times |\mathcal {I}|}\) is the collection of ai with \(i\in \mathcal {I}\). Therefore, only \(h_{\mathcal {I}}(\mathbf {x}_{\mathcal {I}})\) and \(\nabla h_{\mathcal {I}}(\mathbf {x}_{\mathcal {I}})\) are required in Line 5 of Algorithms 2 and 3, which are low-dimensional functions defined on \(\mathbb {C}^{|\mathcal {I}|}\) where \(|\mathcal {I}|=O(L)\). If \(h_{\mathcal {I}}(\mathbf {x}_{\mathcal {I}})\) and \(\nabla h_{\mathcal {I}}(\mathbf {x}_{\mathcal {I}})\) are computed based on the same logic in (40) and (41) but A replaced by \(\mathbf {A}_{\mathcal {I}}\), the complexity of Algorithms 2 and 3 is reduced further because the size of the FFT is reduced in Line 5.
Simulation results

In this section, we evaluate the performance of Algorithms 2 and 3 from different aspects in terms of the accuracy, achievable rate, and complexity. Throughout this section, we consider a mmWave massive MIMO system with 1 bit ADCs whose parameters are M=N=64 and T=80; the remaining parameters, namely BRX, BTX, and L, vary from simulation to simulation. In addition, we consider S whose rows are the circular shifts of the ZC training sequence of length T, as in [15,33]. Furthermore, H is either random or deterministic. If H is random, \(\alpha _{\ell }\sim \mathcal {C}\mathcal {N}(0, 1), \theta _{\text {RX}, \ell }\sim \text {unif}([-\pi /2, \pi /2])\), and θTX,ℓ∼unif([−π/2,π/2]) are independent. Otherwise, we consider a different H from simulation to simulation.
The MSEs of \(\{\alpha _{\ell }\}_{\ell =1}^{L}, \{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\), and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) are
$$\begin{array}{*{20}l} \text{MSE}(\{\alpha_{\ell}\}_{\ell=1}^{L})&=\mathbb{E}\left\{\frac{1}{L}\sum_{\ell=1}^{L}|\tilde{\alpha}_{\ell}-\alpha_{\ell}|^{2}\right\}, \end{array} $$
$$\begin{array}{*{20}l} \text{MSE}(\{\theta_{\text{RX}, \ell}\}_{\ell=1}^{L})&=\mathbb{E}\left\{\frac{1}{L}\sum_{\ell=1}^{L}(\tilde{\theta}_{\text{RX}, \ell}-\theta_{\text{RX}, \ell})^{2}\right\}, \end{array} $$
$$\begin{array}{*{20}l} \text{MSE}(\{\theta_{\text{TX}, \ell}\}_{\ell=1}^{L})&=\mathbb{E}\left\{\frac{1}{L}\sum_{\ell=1}^{L}(\tilde{\theta}_{\text{TX}, \ell}-\theta_{\text{TX}, \ell})^{2}\right\} \end{array} $$
where \((\tilde {\alpha }_{\ell }, \tilde {\theta }_{\text {RX}, \ell }, \tilde {\theta }_{\text {TX}, \ell })\) corresponds to some non-zero element of \(\tilde {\mathbf {X}}=\text {unvec}(\tilde {\mathbf {x}})\in \mathbb {C}^{B_{\text {RX}}\times B_{\text {TX}}}\). The normalized MSE (NMSE) of H is
$$ \text{NMSE}(\mathbf{H})=\mathbb{E}\left\{\frac{\|\tilde{\mathbf{H}}-\mathbf{H}\|_{\mathrm{F}}^{2}}{\|\mathbf{H}\|_{\mathrm{F}}^{2}}\right\} $$
where \(\tilde {\mathbf {H}}=\mathbf {A}_{\text {RX}}\tilde {\mathbf {X}}\mathbf {A}_{\text {TX}}^{\mathrm {H}}\). In (45)–(48), the tilde is used to emphasize that the quantity is an estimate.
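In a simulation, the expectation in (48) is replaced by an empirical mean over Monte-Carlo trials; the following minimal sketch (function and variable names are ours) shows the computation:

```python
import numpy as np

def nmse(H_hat_list, H_list):
    """Empirical NMSE of (48): the expectation E{.} is replaced by a sample
    mean over Monte-Carlo channel realizations."""
    errs = [np.linalg.norm(H_hat - H, 'fro') ** 2 / np.linalg.norm(H, 'fro') ** 2
            for H_hat, H in zip(H_hat_list, H_list)]
    return float(np.mean(errs))
```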
Throughout this section, we consider the debiasing variant of Algorithm 2. The halting condition of Algorithms 2 and 3 is to halt when the current and previous \(\text {supp}(\tilde {\mathbf {x}})\) are the same. The gradient descent method is used to solve convex optimization problems, which consist of (35) and Line 5 of Algorithms 2 and 3. The backtracking line search is used to compute the step size in the gradient descent method and κ in Line 3 of Algorithm 3. In addition, η is selected so that Conjecture 1 is satisfied. In this paper, we select the maximum η satisfying
$$ \underset{i\in\{1, 2, \dots, B\}}{\text{min}}\ |B_{\eta}(i)|>1. $$
For example, the maximum η satisfying (49) is η=0.6367 when BRX=2M and BTX=2N. The channel estimation criterion of Algorithms 2 and 3 is either MAP or ML, depending on whether H is random or deterministic. To compare the BMS-based and non-BMS-based algorithms, the performance of the GraSP and GraHTP algorithms is shown as a reference in Figs. 4, 5, 6, and 7. For the GraSP and GraHTP algorithms, BRX≫M and BTX≫N are forbidden because these algorithms diverge when A is highly coherent. Therefore, the parameters are selected as BRX=M and BTX=N when the GraSP and GraHTP algorithms are implemented. Such BRX and BTX, however, are problematic because the mismatch in (7) is inversely proportional to BRX and BTX.
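Assuming the coherence band B_η(i) collects the indices j whose normalized column correlation with a_i is at least η, the maximum η satisfying (49) can be computed as follows. This is a sketch; the band definition and all names are our assumptions.

```python
import numpy as np

def max_eta(A):
    """Largest eta for which every index i keeps at least one other index j
    in its coherence band, i.e., min_i |B_eta(i)| > 1 as in (49)."""
    An = A / np.linalg.norm(A, axis=0)      # normalize the columns of A
    G = np.abs(An.conj().T @ An)            # pairwise column coherences
    np.fill_diagonal(G, 0.0)                # exclude j = i from the band count
    # Some j != i stays in the band of i iff eta <= max_j G[i, j];
    # taking the minimum over i enforces the condition for every index.
    return G.max(axis=1).min()

# Usage: the first M rows of a 2M-point DFT matrix play the role of an
# overcomplete DFT dictionary with B_RX = 2M (M = 64 here)
A = np.fft.fft(np.eye(128))[:64, :]
```

For this A, max_eta evaluates to about 0.6367, matching the value quoted above for BRX=2M.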
Fig. 4: MSEs of the BMS-based and BE-based GraSP algorithms for widely spread \(\theta _{\text {RX}, \ell }=\theta _{\text {TX}, \ell }=\frac {\pi }{18}(\ell -1)\) with M=N=64, T=80, L=8, and \(\alpha _{\ell }=(0.8+0.1(\ell -1))e^{j\frac {\pi }{4}(\ell -1)}\). The CRB, derived in [36], is provided as a reference.

Fig. 5: MSEs of the BMS-based and BE-based GraSP algorithms for closely spread \(\theta _{\text {RX}, \ell }=\theta _{\text {TX}, \ell }=\frac {\pi }{36}(\ell -1)\) with M=N=64, T=80, L=8, and \(\alpha _{\ell }=(0.8+0.1(\ell -1))e^{j\frac {\pi }{4}(\ell -1)}\). The CRB, derived in [36], is provided as a reference.

Fig. 6: NMSE vs. SNR where M=N=64, T=80, and L=4, with BRX and BTX varying from algorithm to algorithm.

Fig. 7: Achievable rate lower bound [15] vs. SNR where M=N=64, T=80, and L=4, with BRX and BTX varying from algorithm to algorithm.
In Figs. 4 and 5, we compare the accuracy of the BMS-based and band excluding-based (BE-based) algorithms at different SNRs using \(\text {MSE}(\{\alpha _{\ell }\}_{\ell =1}^{L}), \text {MSE}(\{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}), \text {MSE}(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L})\), and NMSE(H). The BE hard thresholding technique was proposed in [26], where it was applied to the orthogonal matching pursuit (OMP) algorithm [34]. In this paper, we apply the BE hard thresholding technique to the GraSP algorithm, which results in the BEGraSP algorithm. In this example, BRX=BTX=256 for the BMS-based and BE-based algorithms. We assume that L=8 and H is deterministic where \(\alpha _{\ell }=(0.8+0.1(\ell -1))e^{j\frac {\pi }{4}(\ell -1)}\). However, \(\{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L}\) and \(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L}\) vary from simulation to simulation and are either widely spread (Fig. 4) or closely spread (Fig. 5). In Figs. 4 and 5, the notion of widely and closely spread paths refers to the minimum 2-norm distance between the paths being relatively large or small: mini≠j∥(θRX,i−θRX,j,θTX,i−θTX,j)∥2 in Fig. 4, which is \(\sqrt {(\pi /18)^{2}+(\pi /18)^{2}}\), is greater than that in Fig. 5, which is \(\sqrt {(\pi /36)^{2}+(\pi /36)^{2}}\). The path gains, AoAs, and AoDs are assumed to be deterministic because the CRB is defined for deterministic parameters only [35]. A variant of the CRB for random parameters is known as the Bayesian CRB, but adding the Bayesian CRB to our work is left as future work because applying the Bayesian CRB to non-linear measurements, e.g., 1 bit ADCs, is not as straightforward.
According to Figs. 4 and 5, the BMS-based algorithms succeed in estimating both widely spread and closely spread paths, whereas the BE-based algorithms fail to estimate closely spread paths. This follows because the BE hard thresholding technique was derived under the assumption that supp(x∗) is widely spread. In contrast, the BMS hard thresholding technique is proposed based on Conjecture 1 without any assumption on supp(x∗). This means that when supp(x∗) is closely spread, the BE hard thresholding technique cannot properly estimate supp(x∗) because the BE hard thresholding technique, by its nature, excludes the elements near the maximum element of x∗ from its potential candidates. The BMS hard thresholding technique, in contrast, uses the elements near the maximum element of x∗ only to construct the by-product testing set, i.e., Line 4 of Algorithm 1. Therefore, the BMS-based algorithms are superior to the BE-based algorithms when the paths are closely spread. The Cramér-Rao bounds (CRBs) of \(\text {MSE}(\{\alpha _{\ell }\}_{\ell =1}^{L}), \text {MSE}(\{\theta _{\text {RX}, \ell }\}_{\ell =1}^{L})\), and \(\text {MSE}(\{\theta _{\text {TX}, \ell }\}_{\ell =1}^{L})\), derived in [36], are provided. The gaps between the MSEs and their corresponding CRBs can be interpreted as a performance limit incurred by the discretized AoAs and AoDs. To overcome this limit, the AoAs and AoDs must be estimated based on an off-grid method, which is beyond the scope of this paper.
In addition, note that \(\text {MSE}(\{\alpha _{\ell }\}_{\ell =1}^{L})\) and NMSE(H) worsen as the SNR enters the high SNR regime. To illustrate why x∗ cannot be estimated in the high SNR regime in 1 bit ADCs, note that
$$\begin{array}{*{20}l} \mathrm{Q}(\sqrt{\rho}\mathbf{A}\mathbf{x}^{*}+\mathbf{n})&\approx\mathrm{Q}(\sqrt{\rho}\mathbf{A}\mathbf{x}^{*})\\ &=\mathrm{Q}(\sqrt{\rho}\mathbf{A}c\mathbf{x}^{*}) \end{array} $$
in the high SNR regime with c>0, which means that x∗ and cx∗ are indistinguishable because the magnitude information is lost by 1 bit ADCs. The degradation of the recovery accuracy in the high SNR regime with 1 bit ADCs is an inevitable phenomenon, as observed from other previous works on low-resolution ADCs [11,14,15,33,37].
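This scale ambiguity is easy to verify numerically: taking Q(·) as the element-wise sign of the real and imaginary parts, the 1 bit outputs for x∗ and cx∗ coincide for any c>0. The sketch below is noise-free, with our own dimensions.

```python
import numpy as np

# Element-wise 1 bit quantizer: sign of the real and imaginary parts
Q = lambda z: np.sign(z.real) + 1j * np.sign(z.imag)

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8)) + 1j * rng.standard_normal((32, 8))
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Without noise, x and c*x produce the same quantized output for any c > 0,
# so the scale of the virtual channel cannot be recovered in the high SNR regime
for c in (0.5, 2.0, 10.0):
    assert np.array_equal(Q(A @ x), Q(A @ (c * x)))
```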
In Figs. 6 and 7, we compare the performance of Algorithms 2 and 3, and other estimators when H is random. The Bernoulli Gaussian-GAMP (BG-GAMP) algorithm [15] is an iterative approximate MMSE estimator of x∗, which was derived based on the assumption that \(x_{i}^{*}\) is distributed as \(\mathcal {C}\mathcal {N}(0, 1)\) with probability L/B but 0 otherwise, namely, the BG distribution. The fast iterative shrinkage-thresholding algorithm (FISTA) [38] is an iterative MAP estimator of x∗, which was derived based on the assumption that the logarithm of the PDF of x∗ is gFISTA(x)=−γ∥x∥1 ignoring the constant factor, namely, the Laplace distribution. Therefore, the estimate of x∗ is
$$ \underset{\mathbf{x}\in\mathbb{C}^{B}}{\text{argmax}}\ (f(\mathbf{x})+g_{\text{FISTA}}(\mathbf{x})), $$
which is solved using the accelerated proximal gradient descent method [38]. The regularization parameter γ is selected so that the expected sparsity of (51) is 3L for a fair comparison, which was suggested in [17]. In this example, L=4, whereas BRX and BTX vary from algorithm to algorithm. In particular, we select BRX=BTX=256 for Algorithms 2, 3, and the FISTA, whereas BRX=M and BTX=N for the BG-GAMP algorithm.
In Fig. 6, we compare the accuracy of Algorithms 2 and 3 and the other estimators at different SNRs using NMSE(H). According to Fig. 6, Algorithms 2 and 3 outperform the BG-GAMP algorithm and the FISTA as the SNR enters the medium SNR regime. The accuracy of the BG-GAMP algorithm is disappointing because the mismatch in (7) is inversely proportional to BRX and BTX; increasing BRX and BTX, however, is forbidden because the BG-GAMP algorithm diverges when A is highly coherent. The accuracy of the FISTA is disappointing because the Laplace distribution does not match the distribution of x∗. Note that (23), which is the basis of Algorithms 2 and 3, is indeed the MAP estimate of x∗, in contrast to the FISTA. According to Fig. 6, NMSE(H) worsens as the SNR enters the high SNR regime for the same reason as in Figs. 4 and 5.
In Fig. 7, we compare the achievable rate lower bound of Algorithms 2, 3, and other estimators at different SNRs when the precoders and combiners are selected based on \(\tilde {\mathbf {H}}\). The achievable rate lower bound shown in Fig. 7 is presented in [15], which was derived based on the Bussgang decomposition [13] in conjunction with the fact that the worst-case noise is Gaussian. According to Fig. 7, Algorithms 2 and 3 outperform the BG-GAMP algorithm and FISTA, which is consistent with the result in Fig. 6.
In Fig. 8, we compare the complexity of Algorithms 2, 3, and other estimators at different BRX and BTX when H is random. To analyze the complexity, note that Algorithms 2, 3, and the FISTA require h(x) and ∇h(x) at each iteration, whose bottlenecks are Ax and AHc, respectively, while the BG-GAMP algorithm requires Ax and AHc at each iteration. Therefore, the complexity is measured based on the number of complex multiplications performed to compute Ax and AHc, which are implemented based on the FFT. In this example, L=4, whereas SNR is either 0 dB or 10 dB.
Fig. 8: Normalized complexity vs. BRX=BTX where M=N=64, T=80, and L=4 at SNR=0 dB and SNR=10 dB.
In this paper, the complexity of the BG-GAMP algorithm is used as a baseline because the BG-GAMP algorithm is widely used. The normalized complexity is defined as the number of complex multiplications performed divided by the per-iteration complexity of the BG-GAMP algorithm. For example, the normalized complexity of the FISTA with BRX=BTX=256 is 160 when the complexity of the FISTA with BRX=BTX=256 is equivalent to the complexity of the 160-iteration BG-GAMP algorithm with BRX=BTX=256. In practice, the BG-GAMP algorithm converges in 15 iterations when A is incoherent [39]. In this paper, an algorithm is said to be as efficient as the BG-GAMP algorithm when its normalized complexity is below the target threshold, which is 15. As a side note, our algorithms, namely, the BMSGraSP and BMSGraHTP algorithms, require 2.1710 and 2.0043 iterations on average, respectively, across the entire SNR range.
According to Fig. 8, the complexity of the FISTA is impractical because the objective function of (51) is a high-dimensional function defined on \(\mathbb {C}^{B}\) where B≫MN. In contrast, the complexity of Algorithms 2 and 3 is dominated by (42), whose objective function is a low-dimensional function defined on \(\mathbb {C}^{|\mathcal {I}|}\) where \(|\mathcal {I}|=O(L)\). The normalized complexity of Algorithms 2 and 3 is below the target threshold when BRX≥192 and BTX≥192. Therefore, we conclude that Algorithms 2 and 3 are as efficient as the BG-GAMP algorithm when BRX≫M and BTX≫N.
Conclusion

In the mmWave band, the channel estimation problem is converted to a sparsity-constrained optimization problem, which is NP-hard to solve. To approximately solve sparsity-constrained optimization problems, the GraSP and GraHTP algorithms were proposed in CS, which pursue the gradient of the objective function. The GraSP and GraHTP algorithms, however, break down when the objective function is ill-conditioned, which is incurred by a highly coherent sensing matrix. To remedy this breakdown, we proposed the BMS hard thresholding technique, which is applied to the GraSP and GraHTP algorithms, yielding the BMSGraSP and BMSGraHTP algorithms, respectively. Instead of directly hard thresholding the gradient of the objective function, the BMS-based algorithms test whether an index is a ground truth index or the by-product of another index. We also proposed an FFT-based fast implementation of the BMS-based algorithms, whose complexity is reduced from O(M4) to O(M2 logM). In the simulations, we compared the performance of the BMS-based, BE-based, BG-GAMP, and FISTA algorithms from different aspects in terms of the accuracy, achievable rate, and complexity. The BMS-based algorithms were shown to outperform the other estimators, proving to be both accurate and efficient. Our algorithms, however, addressed only the flat fading scenario, so an interesting future work would be to propose a low-complexity channel estimator capable of dealing with the wideband scenario.
Methods/experimental
The aim of this study is to propose an accurate yet efficient channel estimator for mmWave massive MIMO systems with 1 bit ADCs. Our channel estimator was proposed based on theoretical analysis. To be specific, we adopted and modified CS algorithms to exploit the sparse nature of the mmWave virtual channels. In addition, we carefully analyzed the proposed channel estimator to reduce its complexity. To verify the accuracy and complexity of the proposed channel estimator, we conducted extensive Monte-Carlo simulations.
Footnote 1: In practice, X∗ may be defined as either approximately sparse or exactly sparse to formulate (10). If X∗ is approximately sparse, the leakage effect is taken into account, so the mismatch in (7) becomes 0, namely, vec(E)=0MT. In contrast, the mismatch in (7) must be taken into account with a non-zero E when X∗ is exactly sparse. Fortunately, E is inversely proportional to BRX and BTX. Therefore, we adopt the latter definition of X∗ and propose our algorithm ignoring E, assuming that BRX≫M and BTX≫N. The performance degradation from E becomes smaller as BRX and BTX become sufficiently large.

Footnote 2: The element-wise vector division in the inverse Mills ratio function is meaningless in (30) because the arguments of the inverse Mills ratio function there are scalars. The reason we use the element-wise vector division in the inverse Mills ratio function will become clear in (37), whose arguments are vectors.

Footnote 3: We use the term "ground truth" to emphasize that the ground truth x∗ is the true virtual channel which actually gives the quantized received signal \(\hat {\mathbf {Y}}\) from (16), whereas x merely represents the point where ∇h(x) is computed to estimate supp(x∗) via hard thresholding.

Footnote 4: The inputs and outputs are pruned because ARX, ATX, and S are rectangular, not square. The details of the pruned FFT are presented in [30–32].
Abbreviations

ADC: Analog-to-digital converter
AoA: Angle-of-arrival
AoD: Angle-of-departure
AWGN: Additive white Gaussian noise
BE: Band excluding
BG-GAMP: Bernoulli Gaussian-generalized approximate message passing
BLMMSE: Bussgang linear minimum mean squared error
BMS: Band maximum selecting
CoSaMP: Compressive sampling matching pursuit
CRB: Cramér-Rao bound
CS: Compressive sensing
DFT: Discrete Fourier transform
FISTA: Fast iterative shrinkage-thresholding algorithm
GAMP: Generalized approximate message passing
GEC-SR: Generalized expectation consistent signal recovery
GraHTP: Gradient hard thresholding pursuit
GraSP: Gradient support pursuit
HTP: Hard thresholding pursuit
i.i.d.: Independent and identically distributed
LOS: Line-of-sight
MAP: Maximum a posteriori
MIMO: Multiple-input multiple-output
mmWave: Millimeter wave
nML: Near maximum likelihood
NMSE: Normalized mean squared error
OMP: Orthogonal matching pursuit
RIP: Restricted isometry property
SNR: Signal-to-noise ratio
ULA: Uniform linear array
ZC: Zadoff-Chu
References

1. A. L. Swindlehurst, E. Ayanoglu, P. Heydari, F. Capolino, Millimeter-wave massive MIMO: the next wireless revolution? IEEE Commun. Mag. 52(9), 56–62 (2014). https://doi.org/10.1109/MCOM.2014.6894453
2. Z. Gao, L. Dai, D. Mi, Z. Wang, M. A. Imran, M. Z. Shakir, mmWave massive-MIMO-based wireless backhaul for the 5G ultra-dense network. IEEE Wirel. Commun. 22(5), 13–21 (2015). https://doi.org/10.1109/MWC.2015.7306533
3. T. E. Bogale, L. B. Le, Massive MIMO and mmWave for 5G wireless HetNet: potential benefits and challenges. IEEE Veh. Technol. Mag. 11(1), 64–75 (2016). https://doi.org/10.1109/MVT.2015.2496240
4. F. Boccardi, R. W. Heath, A. Lozano, T. L. Marzetta, P. Popovski, Five disruptive technology directions for 5G. IEEE Commun. Mag. 52(2), 74–80 (2014). https://doi.org/10.1109/MCOM.2014.6736746
5. B. Le, T. W. Rondeau, J. H. Reed, C. W. Bostian, Analog-to-digital converters. IEEE Sig. Proc. Mag. 22(6), 69–77 (2005). https://doi.org/10.1109/MSP.2005.1550190
6. C. Mollén, J. Choi, E. G. Larsson, R. W. Heath, Uplink performance of wideband massive MIMO with one-bit ADCs. IEEE Trans. Wirel. Commun. 16(1), 87–100 (2017). https://doi.org/10.1109/TWC.2016.2619343
7. L. Fan, S. Jin, C. Wen, H. Zhang, Uplink achievable rate for massive MIMO systems with low-resolution ADC. IEEE Commun. Lett. 19(12), 2186–2189 (2015). https://doi.org/10.1109/LCOMM.2015.2494600
8. J. Zhang, L. Dai, S. Sun, Z. Wang, On the spectral efficiency of massive MIMO systems with low-resolution ADCs. IEEE Commun. Lett. 20(5), 842–845 (2016). https://doi.org/10.1109/LCOMM.2016.2535132
9. S. Jacobsson, G. Durisi, M. Coldrey, U. Gustavsson, C. Studer, Throughput analysis of massive MIMO uplink with low-resolution ADCs. IEEE Trans. Wirel. Commun. 16(6), 4038–4051 (2017). https://doi.org/10.1109/TWC.2017.2691318
10. J. Choi, J. Mo, R. W. Heath, Near maximum-likelihood detector and channel estimator for uplink multiuser massive MIMO systems with one-bit ADCs. IEEE Trans. Commun. 64(5), 2005–2018 (2016). https://doi.org/10.1109/TCOMM.2016.2545666
11. Y. Li, C. Tao, G. Seco-Granados, A. Mezghani, A. L. Swindlehurst, L. Liu, Channel estimation and performance analysis of one-bit massive MIMO systems. IEEE Trans. Sign. Proc. 65(15), 4075–4089 (2017). https://doi.org/10.1109/TSP.2017.2706179
12. D. P. Bertsekas, Nonlinear Programming (Athena Scientific, Belmont, 1995)
13. J. J. Bussgang, Crosscorrelation functions of amplitude-distorted Gaussian signals. Research Laboratory of Electronics, Massachusetts Institute of Technology, Tech. Rep. 216, 1–14 (1952)
14. H. He, C. Wen, S. Jin, Bayesian optimal data detector for hybrid mmWave MIMO-OFDM systems with low-resolution ADCs. IEEE J. Sel. Top. Sig. Proc. 12(3), 469–483 (2018). https://doi.org/10.1109/JSTSP.2018.2818063
15. J. Mo, P. Schniter, R. W. Heath, Channel estimation in broadband millimeter wave MIMO systems with few-bit ADCs. IEEE Trans. Sign. Proc. 66(5), 1141–1154 (2018). https://doi.org/10.1109/TSP.2017.2781644
16. T. Liu, C. Wen, S. Jin, X. You, Generalized turbo signal recovery for nonlinear measurements and orthogonal sensing matrices, in 2016 IEEE International Symposium on Information Theory (ISIT) (2016), pp. 2883–2887. https://doi.org/10.1109/ISIT.2016.7541826
17. S. Bahmani, B. Raj, P. T. Boufounos, Greedy sparsity-constrained optimization. J. Mach. Learn. Res. 14(Mar), 807–841 (2013)
18. X.-T. Yuan, P. Li, T. Zhang, Gradient hard thresholding pursuit. J. Mach. Learn. Res. 18(166), 1–43 (2018)
19. D. Needell, J. A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–321 (2009). https://doi.org/10.1016/j.acha.2008.07.002
20. S. Foucart, Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011). https://doi.org/10.1137/100806278
21. M. R. Akdeniz, Y. Liu, M. K. Samimi, S. Sun, S. Rangan, T. S. Rappaport, E. Erkip, Millimeter wave channel modeling and cellular capacity evaluation. IEEE J. Sel. Areas Commun. 32(6), 1164–1179 (2014). https://doi.org/10.1109/JSAC.2014.2328154
22. A. M. Sayeed, Deconstructing multiantenna fading channels. IEEE Trans. Sign. Proc. 50(10), 2563–2579 (2002). https://doi.org/10.1109/TSP.2002.803324
23. W. Hong, K. Baek, Y. Lee, Y. Kim, S. Ko, Study and prototyping of practically large-scale mmWave antenna systems for 5G cellular devices. IEEE Commun. Mag. 52(9), 63–69 (2014). https://doi.org/10.1109/MCOM.2014.6894454
24. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)
25. Y. C. Eldar, G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University Press, Cambridge, 2012)
26. A. Fannjiang, W. Liao, Coherence pattern-guided compressive sensing with unresolved grids. SIAM J. Imaging Sci. 5(1), 179–202 (2012). https://doi.org/10.1137/110838509
27. N. Jindal, MIMO broadcast channels with finite-rate feedback. IEEE Trans. Inf. Theory 52(11), 5045–5060 (2006). https://doi.org/10.1109/TIT.2006.883550
28. Z. Marzi, D. Ramasamy, U. Madhow, Compressive channel estimation and tracking for large arrays in mm-wave picocells. IEEE J. Sel. Top. Sig. Proc. 10(3), 514–527 (2016). https://doi.org/10.1109/JSTSP.2016.2520899
29. D. Chu, Polyphase codes with good periodic correlation properties (Corresp.). IEEE Trans. Inf. Theory 18(4), 531–532 (1972). https://doi.org/10.1109/TIT.1972.1054840
30. J. Markel, FFT pruning. IEEE Trans. Audio Electroacoust. 19(4), 305–311 (1971). https://doi.org/10.1109/TAU.1971.1162205
31. D. Skinner, Pruning the decimation in-time FFT algorithm. IEEE Trans. Acoust. Speech Sig. Proc. 24(2), 193–194 (1976). https://doi.org/10.1109/TASSP.1976.1162782
32. T. Sreenivas, P. Rao, FFT algorithm for both input and output pruning. IEEE Trans. Acoust. Speech Sig. Proc. 27(3), 291–292 (1979). https://doi.org/10.1109/TASSP.1979.1163246
33. Y. Ding, S. Chiu, B. D. Rao, Bayesian channel estimation algorithms for massive MIMO systems with hybrid analog-digital processing and low-resolution ADCs. IEEE J. Sel. Top. Sig. Proc. 12(3), 499–513 (2018). https://doi.org/10.1109/JSTSP.2018.2814008
34. J. A. Tropp, A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 53(12), 4655–4666 (2007). https://doi.org/10.1109/TIT.2007.909108
H. V. Poor, An Introduction to Signal Detection and Estimation (Springer, Berlin, 2013).
P. Wang, J. Li, M. Pajovic, P. T. Boufounos, P. V. Orlik, in 2017 51st Asilomar Conference on Signals, Systems, and Computers. On Angular-Domain Channel Estimation for One-Bit Massive MIMO Systems with Fixed and Time-Varying Thresholds, (2017), pp. 1056–1060. https://doi.org/10.1109/ACSSC.2017.8335511.
R. P. David, J. Cal-Braz, in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Feedback-Controlled Channel Estimation with Low-Resolution ADCs in Multiuser MIMO Systems, (2019), pp. 4674–4678. https://doi.org/10.1109/ICASSP.2019.8683652.
A. Beck, M. Teboulle, A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci.2(1), 183–202 (2009). https://doi.org/10.1137/080716542.
J. P. Vila, P. Schniter, Expectation-maximization Gaussian-mixture approximate message passing. IEEE Trans. Sign. Proc.61(19), 4658–4672 (2013). https://doi.org/10.1109/TSP.2013.2272287.
A cooperative spectrum sensing method based on information geometry and fuzzy c-means clustering algorithm
Shunchao Zhang1,
Yonghua Wang1,2,
Jiangfan Li1,
Pin Wan1,3,
Yongwei Zhang1 &
Nan Li1
To improve spectrum sensing performance, a cooperative spectrum sensing method based on information geometry and the fuzzy c-means clustering algorithm is proposed in this paper. For signal feature extraction, we propose a method that combines decomposition and recombination with information geometry. First, to improve sensing performance when the number of cooperative secondary users is small, the signals collected by the secondary users are split and recombined, which logically increases the number of cooperative secondary users. Then, to analyze the signal detection problem intuitively, information geometry theory is used to map the split-and-recombined signals onto a manifold, transforming the signal detection problem into a geometric one, and geometric tools are used to extract the corresponding statistical features of the signal. Finally, based on the extracted features, a classifier is trained with the fuzzy c-means clustering algorithm and used for spectrum sensing, thus avoiding complex threshold derivation. The experimental results in the simulation and performance analysis section show that the proposed method effectively improves spectrum sensing performance.
With the development of wireless communication, spectrum resources have become increasingly scarce, but most of the existing spectrum resources have not been fully utilized. Cognitive radio (CR) technology allows secondary users (SUs) to access the spectrum when the authorized primary user (PU) is idle, thus effectively alleviating the spectrum scarcity problem [1]. In CR, spectrum sensing is a key step that is mainly used to sense the existence of the PU [2].
Classical spectrum sensing methods include energy detection, matched filter detection, and cyclostationary feature detection [3]. As the simplest spectrum sensing algorithm, energy detection has low computational complexity and requires no prior information about the PU, so it has been widely used; however, it is susceptible to noise uncertainty, which greatly degrades detection performance [4, 5]. Matched filter detection is the optimal signal detection algorithm when all the information about the PU is known, but its disadvantage is equally clear: it requires prior knowledge of the PU, including the packet format, sequence, and modulation type [6, 7]. Cyclostationary feature detection is computationally complex, so it cannot support real-time or rapid detection [8].
The application of random matrix theory to spectrum sensing has attracted the interest of many researchers [9, 10]. The sensing data of multiple SUs are collected into a sampled signal matrix, its covariance matrix is computed, and the eigenvalues of the covariance matrix serve as decision statistics. Many cooperative spectrum sensing algorithms based on random matrix theory have been proposed. Liu et al. proposed a maximum-to-minimum eigenvalue (MME) spectrum sensing method, which uses the MME value as the statistical feature of the signal and compares it with a preset threshold to decide whether the PU exists [11]; however, when the number of sampling points is insufficient, its detection performance degrades noticeably. Liu et al. also proposed a method based on the difference between the maximum eigenvalue and the average energy (DMEAE), which compares this feature with a preset threshold to achieve spectrum sensing [12]. Tulino et al. proposed a method based on the difference between the maximum and minimum eigenvalues (DMM) [13]; similarly, the DMM feature is compared with a preset threshold, but the method performs poorly when the number of cooperative SUs is small and the signal-to-noise ratio (SNR) is low. A statistical feature extraction method based on decomposition and recombination (DAR) was proposed in [14]; to logically increase the number of cooperative users, it splits and recombines the signal matrix, which effectively improves spectrum sensing performance. From the above analysis, traditional random-matrix-based spectrum sensing requires deriving and computing the decision threshold in advance: the whole process is complex, and problems such as inaccurate thresholds arise.
With the rapid development of information geometry theory, the concept of statistical manifolds can be used to transform signal detection problems into geometric problems on manifolds, which can then be analyzed intuitively with geometric tools. Liu et al. used information geometry theory to detect radar signals and proposed a matrix constant false alarm rate (CFAR) detector and a detector based on geodesic distance [15]. Chen applied information geometry to spectrum sensing, introduced a metric on the manifold, and obtained the decision threshold through simulation [16]. Lu et al. used a matching method to obtain a closed-form expression for the decision threshold, which has high computational complexity [17]. In spectrum sensing, however, deriving the threshold is not only complicated, but a fixed decision threshold always introduces some bias when deciding whether the PU exists.
In recent years, machine learning has developed rapidly, which provides a new approach to spectrum sensing. Spectrum sensing can be viewed as a binary classification problem: whether or not the PU exists [18–20]. Kumar proposed a spectrum sensing method based on the K-means clustering algorithm and an energy feature: the signal energy is taken as the feature, and K-means clustering classifies these features [21]. Zhang et al. proposed a spectrum sensing method based on K-means and signal features that adopts the feature extraction methods from random matrix theory, selecting MME, DMM, and DMEAE as the features for training and classification [22]. Thilina et al. studied spectrum sensing with K-means and the Gaussian mixture model (GMM) from unsupervised learning, and with neural networks (NN) and support vector machines (SVM) from supervised learning [23]. Xue et al. proposed a cooperative spectrum sensing algorithm based on unsupervised learning, using the dominant eigenvalue and the maximum and minimum eigenvalues as features, with K-means clustering and the GMM as the learning framework [24]. Compared with traditional spectrum sensing methods, machine-learning-based spectrum sensing avoids cumbersome threshold calculations and adapts better; likewise, it requires no prior information about the PU.
Based on the above research, this paper proposes a cooperative spectrum sensing method based on information geometry and the fuzzy c-means (FCM) clustering algorithm (IGFCM). In the feature extraction process, order-DAR (O-DAR) and interval-DAR (I-DAR) are introduced to obtain two new matrices; the covariance matrices of these two new matrices are computed and mapped onto a manifold using information geometry theory. The geodesic distance is then computed on the manifold and used as a feature. Finally, the FCM clustering algorithm is used to implement spectrum sensing. The proposed method requires no prior information about the communication system, and unlabeled training data are easy to obtain. In the experiments, the spectrum sensing performance of IGFCM is further analyzed, and the simulation results show that the method effectively improves spectrum sensing performance.
Methods or experimental
The remainder of this paper is organized as follows. Section 2 introduces the system model of cooperative spectrum sensing. Section 3 proposes a signal feature extraction method based on decomposition and recombination and information geometry, which improves sensing performance when the number of cooperative users is small and allows the sensing problem to be analyzed more intuitively. Section 4 uses the FCM clustering algorithm to implement spectrum sensing. Section 5 verifies the proposed method using an amplitude modulation signal.
Cooperative spectrum sensing system model
According to the perception of PU by a single SU in a cognitive radio network (CRN), the following binary hypothesis [25] about PU can be obtained. H0 indicates that the PU signal does not exist, and H1 indicates that the PU signal exists.
$$\begin{array}{@{}rcl@{}} x(n)=\left\{\begin{array}{ll}w(n), & H_{0}\\ s(n)+w(n), & H_{1} \end{array}\right. \end{array} $$
where s(n) represents the signal transmitted by the PU and w(n) is the ambient noise. Since CR is primarily used in relatively fixed networks, the channel can be modeled as an additive white Gaussian noise (AWGN) channel. The system's false alarm probability (Pf) and detection probability (Pd) are defined as:
$$\begin{array}{@{}rcl@{}} P_{f}=P\lbrack H_{1}\vert H_{0}\rbrack \end{array} $$
$$\begin{array}{@{}rcl@{}} P_{d}=P\lbrack H_{1}\vert H_{1}\rbrack \end{array} $$
In a CRN, spectrum sensing is often performed in complex environments, so the SU must contend with multipath fading, shadowing, and hidden terminals when sensing the PU [26, 27]. Cooperative spectrum sensing reduces the impact of these environmental factors by increasing the diversity of SUs. Therefore, to improve the performance of the spectrum sensing system, multi-SU cooperative spectrum sensing is adopted. First, the SUs collect information about the authorized channel and transmit it to a fusion center (FC) through the reporting channel; the FC then processes the information and makes the final decision. The cooperative spectrum sensing system model is shown in Fig. 1.
Assume that there are M SUs in a CRN. Let xi=[xi(1),xi(2),…,xi(N)] denote the signal samples collected by the ith SU; the signals collected by the M SUs then form the signal matrix X=[x1,x2,…,xM]T. Therefore, X is an M×N matrix.
$$\begin{array}{@{}rcl@{}} {\mathbf{X}} = {\left[ {{x_{1}},{x_{2}},\ldots,{x_{M}}} \right]^{T}} = \left[ {\begin{array}{*{20}{c}} {{x_{1}}(1)}&{{x_{1}}(2)}& \cdots &{{x_{1}}(N)}\\ {{x_{2}}(1)}&{{x_{2}}(2)}& \cdots &{{x_{2}}(N)}\\ \vdots & \vdots & \ddots & \vdots \\ {{x_{M}}(1)}&{{x_{M}}(2)}& \cdots &{{x_{M}}(N)} \end{array}} \right] \end{array} $$
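As a minimal illustration of this sampling model (not part of the paper's simulation code), the matrix X can be generated under each hypothesis as follows. The AM waveform parameters (sampling rate, carrier and baseband frequencies) are assumptions chosen only for the sketch:

```python
import numpy as np

def sample_matrix(M, N, snr_db=None, fs=8000, fc=1000, seed=0):
    """Simulate the M x N sample matrix X of Eq. 4.

    snr_db=None gives the noise-only hypothesis H0; otherwise an
    AM-modulated carrier (the PU signal assumed in Section 5) is
    added at the requested SNR. All waveform parameters are
    illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((M, N))          # unit-power AWGN, w(n)
    if snr_db is None:
        return noise                             # H0: x(n) = w(n)
    t = np.arange(N) / fs
    baseband = 1.0 + 0.5 * np.sin(2 * np.pi * 50 * t)
    s = baseband * np.cos(2 * np.pi * fc * t)    # AM signal s(n)
    s *= np.sqrt(10 ** (snr_db / 10) / np.mean(s ** 2))  # scale to SNR
    return s[None, :] + noise                    # H1: x(n) = s(n) + w(n)

X0 = sample_matrix(2, 1000)                # H0 sample matrix
X1 = sample_matrix(2, 1000, snr_db=-11)    # H1 sample matrix
```

The same noise seed is reused for both hypotheses so that the only difference between X0 and X1 is the added PU signal.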
For ease of reference, the symbols and notations used in this paper are summarized in Table 1.
Table 1 Summary symbols and notations
Feature extraction based on decomposition and recombination and information geometry
Feature extraction model
In the feature extraction process, the noise environment is estimated first, and two covariance matrices are obtained by splitting and recombining the noise signal matrix in sequence and at intervals. The model of feature extraction based on decomposition and recombination and information geometry is shown in Fig. 2. To estimate the noise environment accurately, enough noise signal matrices are collected and subjected to O-DAR, I-DAR, and the covariance transformation (as shown in the box in Fig. 2); the Riemann mean of these covariance matrices is then computed. Similarly, the signal matrix to be sensed undergoes the two kinds of splitting and recombination and is transformed into covariance matrices. Finally, the distances from the covariance matrices of the environment to be sensed to the Riemann means are calculated and used as statistical features of the signal.
Information geometry overview
According to the matrix X, the corresponding covariance matrix can be calculated as shown in Eq. 5.
$$\begin{array}{@{}rcl@{}} {\mathbf{R}} = \frac{1}{N}{\mathbf{X}}{{\mathbf{X}}^{T}} \end{array} $$
According to information geometry theory, consider a family of probability density functions p(x|θ), where x is an n-dimensional sample of a random variable with sample space Ω, x∈Ω⊆Cn, and θ is an m-dimensional parameter vector, θ∈Θ⊆Cm. The space of probability distributions can therefore be described by the parameter set Θ. The family S of probability distribution functions is given in Eq. 6.
$$\begin{array}{@{}rcl@{}} S = \left\{{p(x|{\theta})|{\theta} \in \Theta \subseteq {C^{m}}} \right\} \end{array} $$
Under a suitable topological structure, S forms a differentiable manifold, called a statistical manifold, with θ serving as its coordinates. From the perspective of information geometry, the probability density function can be parameterized by the corresponding covariance matrix. Under the two spectrum sensing hypotheses H0 and H1, the signal is mapped to a point on the manifold, Rw or Rs+Rw, where Rw and Rs+Rw are the covariance matrices calculated from the noise matrix and the signal matrix, respectively. In particular, both Rw and Rs+Rw are Toeplitz Hermitian positive definite matrices [17]. Therefore, the symmetric positive definite (SPD) matrix space composed of covariance matrices can be defined as an SPD manifold.
Decomposition and recombination
In this section, we first split and recombine the signal matrix of each SU to logically increase the number of cooperative SUs. DAR is divided into O-DAR and I-DAR, and both are used to process the signal vectors sensed by the SUs. The specific procedure is as follows [14]:
In O-DAR, xi is sequentially split into q (q>0) sub-signal vectors of length s=N/q. The result of splitting xi is as follows:
$$ {{} \begin{aligned} {x_{i}}\left\{\begin{array}{l} {x_{i1}} = \left[ {{x_{i}}(1),{x_{i}}(2),\ldots,{x_{i}}(s)} \right]\\ {x_{i2}} \,=\, \left[ {{x_{i}}(s + 1),{x_{i}}(s + 2),\ldots,{x_{i}}(2s)} \right]\\ \vdots \\ {x_{iq}} = \left[ {{x_{i}}((q - 1)s + 1),{x_{i}}((q - 1)s + 2),\ldots,{x_{i}}(qs)} \right] \end{array}\right. \end{aligned}} $$
The signal vector in Eq. 4 is split according to Eq. 7, and then, the split sub-signal vector is recombined to obtain a qM×s dimensional signal matrix YO−DAR.
$$ {\begin{aligned} {{\mathbf{Y}}_{O - DAR}} = \left[ \begin{array}{l} {x_{11}}\\ \vdots \\ {x_{1q}}\\ \vdots \\ {x_{im}}\\ \vdots \\ {x_{Mq}} \end{array} \right] = \left[ {\begin{array}{cccc} {{x_{1}}(1)}&{{x_{1}}(2)}& \cdots &{{x_{1}}(s)}\\ \vdots &{}&{}&{}\\ {{x_{1}}((q - 1)s + 1)}&{{x_{1}}((q - 1)s + 2)}& \cdots &{{x_{1}}(qs)}\\ \vdots &{}&{}&{}\\ {{x_{i}}((m - 1)s + 1)}&{{x_{i}}((m - 1)s + 2)}& \cdots &{{x_{i}}(ms)}\\ \vdots &{}&{}&{}\\ {{x_{M}}((q - 1)s + 1)}&{{x_{M}}((q - 1)s + 2)}& \cdots &{{x_{M}}(qs)} \end{array}} \right] \end{aligned}} $$
In I-DAR, sampling points are selected every q−1 units and the signal matrix X is recombined accordingly. By I-DAR, the sampled data are split into q (q>0) sub-signal vectors of length s=N/q. The result of splitting xi is as follows:
$$\begin{array}{@{}rcl@{}} {x_{i}}\left\{\begin{array}{l} {x_{i1}} = \left[ {{x_{i}}(1),{x_{i}}(q + 1),\ldots,{x_{i}}((s - 1)q + 1)} \right]\\ {x_{i2}} = \left[ {{x_{i}}(2),{x_{i}}(q + 2),\ldots,{x_{i}}((s - 1)q + 2)} \right]\\ \vdots \\ {x_{iq}} = \left[ {{x_{i}}(q),{x_{i}}(q + q),\ldots,{x_{i}}((s - 1)q + q)} \right] \end{array} \right. \end{array} $$
The signal vector in Eq. 4 is split according to Eq. 9, and then, the split sub-signal vector is recombined to obtain a qM×s dimensional signal matrix YI−DAR.
$$ {\begin{aligned} {{\mathbf{Y}}_{I - DAR}} = \left[ \begin{array}{l} {x_{11}}\\ \vdots \\ {x_{1q}}\\ \vdots \\ {x_{im}}\\ \vdots \\ {x_{Mq}} \end{array} \right] = \left[ {\begin{array}{llll} {{x_{1}}(1)}&{{x_{1}}(q + 1)}& \cdots &{{x_{1}}((s - 1)q + 1)}\\ \vdots &{}&{}&{}\\ {{x_{1}}(q)}&{{x_{1}}(q + q)}& \cdots &{{x_{1}}((s - 1)q + q)}\\ \vdots &{}&{}&{}\\ {{x_{i}}(m)}&{{x_{i}}(q + m)}& \cdots &{{x_{i}}((s - 1)q + m)}\\ \vdots &{}&{}&{}\\ {{x_{M}}(q)}&{{x_{M}}(q + q)}& \cdots &{{x_{M}}((s - 1)q + q)} \end{array}} \right] \end{aligned}} $$
According to YO−DAR and YI−DAR, the corresponding covariance matrices RO and RI can be calculated.
$$\begin{array}{@{}rcl@{}} {{\mathbf{R}}^{O}} = \frac{1}{s}{{\mathbf{Y}}_{O - DAR}}{{\mathbf{Y}}_{O - DAR}}^{T} \end{array} $$
$$\begin{array}{@{}rcl@{}} {{\mathbf{R}}^{I}} = \frac{1}{s}{{\mathbf{Y}}_{I - DAR}}{{\mathbf{Y}}_{I - DAR}}^{T} \end{array} $$
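The two split-and-recombine operations and the covariance transforms of Eqs. 8–12 can be sketched compactly with array reshapes. The toy 2×6 matrix below is only for illustration:

```python
import numpy as np

def odar(X, q):
    """Order split-and-recombine (Eq. 8): each row of the M x N matrix
    X is cut into q consecutive segments of length s = N // q,
    yielding a qM x s matrix."""
    M, N = X.shape
    s = N // q
    return X[:, :q * s].reshape(M * q, s)

def idar(X, q):
    """Interval split-and-recombine (Eq. 10): row i contributes the
    sub-vectors x_i(k), x_i(k+q), ... for each offset k = 1..q."""
    M, N = X.shape
    s = N // q
    # reshape to (M, s, q) so the last axis indexes the offset k,
    # then swap axes and stack the q offset rows per SU
    return X[:, :q * s].reshape(M, s, q).transpose(0, 2, 1).reshape(M * q, s)

def covariance(Y):
    """Sample covariance of Eqs. 11-12: R = Y Y^T / s."""
    return Y @ Y.T / Y.shape[1]

X = np.arange(12, dtype=float).reshape(2, 6)   # toy 2 x 6 signal matrix
RO = covariance(odar(X, q=2))                  # 4 x 4 covariance of Y_O-DAR
RI = covariance(idar(X, q=2))                  # 4 x 4 covariance of Y_I-DAR
```

With q=2, the first row [0,1,2,3,4,5] splits into [0,1,2] and [3,4,5] under O-DAR, and into [0,2,4] and [1,3,5] under I-DAR, matching Eqs. 7 and 9.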
Riemann mean
First, the SUs collect P environmental noise matrices. These noise matrices are processed with O-DAR and I-DAR, and the covariance matrices are calculated, giving \({\mathbf {R}}_{k}^{O}(k = 1,2,\ldots,P)\) and \({\mathbf {R}}_{k}^{I}(k = 1,2,\ldots,P)\). Their Riemann mean objective functions are shown in Eqs. 13 and 14, respectively.
$$\begin{array}{@{}rcl@{}} \Phi \left({\overline {\mathbf{R}}^{O}}\right) = \frac{1}{P}\sum\limits_{k = 1}^{P} {\mathrm{D}} \left({\mathbf{R}}_{k}^{O},{\overline {\mathbf{R}}^{O}}\right)\end{array} $$
$$\begin{array}{@{}rcl@{}} \Phi \left({\overline {\mathbf{R}}^{I}}\right) = \frac{1}{P}\sum\limits_{k = 1}^{P} {\mathrm{D}} \left({\mathbf{R}}_{k}^{I},{\overline {\mathbf{R}}^{I}}\right)\end{array} $$
\({\overline {\mathbf {R}}^{O}}\) and \({\overline {\mathbf {R}}^{I}}\) are the matrices that minimize Φ(∙), where D(∙,∙) is the geodesic distance between two points on the manifold, described below.
$$\begin{array}{@{}rcl@{}} {\overline {\mathbf{R}}^{O}} = {\arg\min}\ \Phi \left({\overline {\mathbf{R}}^{O}}\right) \end{array} $$
$$\begin{array}{@{}rcl@{}} {\overline {\mathbf{R}}^{I}} = {\arg\min}\ \Phi \left({\overline {\mathbf{R}}^{I}}\right) \end{array} $$
Suppose there are two points R1 and R2 on the matrix manifold. Their Riemann mean \(\overline {\mathbf {R}}\) lies at the midpoint of the geodesic connecting R1 and R2 on the manifold, with the expression shown in Eq. 17.
$$\begin{array}{@{}rcl@{}} \overline {\mathbf{R}} = {\mathbf{R}}_{1}^{1/2}{\left({\mathbf{R}}_{1}^{- 1/2}{{\mathbf{R}}_{2}}{\mathbf{R}}_{1}^{- 1/2}\right)^{1/2}}{\mathbf{R}}_{1}^{1/2} \end{array} $$
If P>2, the Riemann mean is difficult to compute in closed form. The literature [28, 29] gives an iterative method to compute \(\overline {\mathbf {R}}\) using the gradient descent algorithm, yielding the Riemann mean update formula shown in Eq. 18.
$$\begin{array}{@{}rcl@{}} {\overline {\mathbf{R}}_{l + 1}} \!= \overline {\mathbf{R}}_{l}^{1/2}{e^{-\frac{\tau}{P}{\sum}_{k = 1}^{P} {\log \left(\overline {\mathbf{R}}_{l}^{- 1/2}{{\mathbf{R}}_{k}}\overline {\mathbf{R}}_{l}^{- 1/2}\right)}}}\overline {\mathbf{R}}_{l}^{1/2},\;\;\;0 \!\le\! \tau \!\le\! 1 \end{array} $$
where τ is the iteration step size and l is the iteration index. The gradient descent algorithm is therefore used to calculate the Riemann means \({\overline {\mathbf {R}}^{O}}\) and \({\overline {\mathbf {R}}^{I}}\).
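A minimal sketch of this fixed-point iteration for SPD matrices is given below. Note two implementation choices not prescribed by the paper: the step moves \(\overline{\mathbf R}\) toward the mean of the matrix logarithms (the usual Karcher-mean convention), and the matrix square root, logarithm, and exponential are computed via eigendecomposition, which is valid because all matrices involved are symmetric positive definite:

```python
import numpy as np

def _eig_fun(A, f):
    """Apply the scalar function f to the eigenvalues of a
    symmetric positive definite matrix A (spectral calculus)."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.T

def riemann_mean(Rs, tau=0.5, iters=50):
    """Gradient-descent fixed point of Eq. 18 for a list of SPD
    matrices Rs, starting from the arithmetic mean."""
    Rbar = sum(Rs) / len(Rs)
    for _ in range(iters):
        Rh = _eig_fun(Rbar, np.sqrt)                      # Rbar^{1/2}
        Rih = _eig_fun(Rbar, lambda w: 1.0 / np.sqrt(w))  # Rbar^{-1/2}
        # mean of log(Rbar^{-1/2} R_k Rbar^{-1/2}) over k = 1..P
        S = sum(_eig_fun(Rih @ R @ Rih, np.log) for R in Rs) / len(Rs)
        S = (S + S.T) / 2          # symmetrize against round-off
        Rbar = Rh @ _eig_fun(S, lambda w: np.exp(tau * w)) @ Rh
    return Rbar
```

For two matrices the fixed point agrees with the geodesic midpoint of Eq. 17; e.g. the Riemann mean of I and 4I is 2I, the matrix geometric mean.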
Geodesic distance
The study of a geometric structure is mainly to study some properties such as distance, tangent, and curvature on the structure. There are many ways to measure the distance between two probability distributions on a statistical manifold. The most common is the geodesic distance.
Assuming θ is a point on the manifold, the metric on the statistical manifold can be defined by G(θ) of the following equation, called the Fisher information matrix.
$$\begin{array}{@{}rcl@{}} {\text{G}({\theta}) = \text{E}} \left[ {\frac{{\partial \ln p(x|{\theta})}}{{\partial {{\theta}_{i}}}} \cdot \frac{{\partial \ln p(x|{\theta})}}{{\partial {{\theta}_{j}}}}} \right] \end{array} $$
Because of the curvature of the manifold, the distance between two points is determined by the length of a curve connecting them. Consider an arbitrary curve θ(t) (t1≤t≤t2) between two points θ1 and θ2 on the manifold, with θ(t1)=θ1 and θ(t2)=θ2. The distance between θ1 and θ2 along the curve θ(t) is then [30]:
$$\begin{array}{@{}rcl@{}} {\text{D}({{\theta}_{1}}{{,}}{{\theta}_{2}})} \buildrel \Delta \over = \int_{{t_{1}}}^{{t_{2}}} {\sqrt {{{\left({\frac{{d{\theta}}}{{dt}}} \right)}^{T}}{\mathrm{G}({\theta})}\left({\frac{{d{\theta}}}{{dt}}} \right)}\, dt} \end{array} $$
It can be seen that the distance between θ1 and θ2 depends on the choice of the curve θ(t). The curve that minimizes the distance in Eq. 20 is called the geodesic, and the corresponding distance is the geodesic distance.
For an arbitrary probability distribution, computing the geodesic distance is complicated, which hinders its application. For a multivariate Gaussian family with the same mean but different covariance matrices, consider two members with covariance matrices R1 and R2; the geodesic distance between them is given by Eq. 21 [31].
$$ {\begin{aligned} {\mathrm{D}}({{\mathbf{R}}_{1}},{{\mathbf{R}}_{2}}) &\buildrel \Delta \over = \sqrt {\frac{1}{2}tr{{\log}^{2}}\left({{\mathbf{R}}_{1}^{- 1/2}{{\mathbf{R}}_{2}}{\mathbf{R}}_{1}^{- 1/2}} \right)} \\[-4pt]&= \sqrt {\frac{1}{2}\sum\limits_{i = 1}^{n} {{{\log}^{2}}{\eta_{i}}}} \end{aligned}} $$
where ηi denotes the ith eigenvalue of the matrix \({\mathbf {R}}_{1}^{- 1/2}{{\mathbf {R}}_{2}}{\mathbf {R}}_{1}^{- 1/2}\).
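Eq. 21 can be evaluated without explicitly forming the matrix square root, since R1^{-1/2} R2 R1^{-1/2} is similar to L^{-1} R2 L^{-T} for the Cholesky factor L of R1 (both are similar to R1^{-1} R2 and share its eigenvalues). A sketch:

```python
import numpy as np

def geodesic_distance(R1, R2):
    """Geodesic (affine-invariant) distance of Eq. 21 between two
    SPD matrices: sqrt(0.5 * sum_i log(eta_i)^2), where eta_i are
    the eigenvalues of R1^{-1/2} R2 R1^{-1/2}."""
    L = np.linalg.cholesky(R1)
    Linv = np.linalg.inv(L)
    Msym = Linv @ R2 @ Linv.T        # symmetric, similar to R1^{-1/2} R2 R1^{-1/2}
    eta = np.linalg.eigvalsh(Msym)
    return np.sqrt(0.5 * np.sum(np.log(eta) ** 2))

# toy SPD covariance matrices for illustration only
R1 = np.array([[1.2, 0.1], [0.1, 1.0]])
R2 = np.array([[2.0, 0.4], [0.4, 1.5]])
d = geodesic_distance(R1, R2)
```

The distance is zero exactly when R1 = R2 and is symmetric in its arguments, consistent with its use as a feature in Eqs. 22–23.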
According to the feature extraction process and the above analysis, the signal matrix to be sensed is split and recombined sequentially and at intervals and transformed into the covariance matrices RO and RI. Eq. 21 then gives the corresponding geodesic distances.
$$ {{} \begin{aligned} {d_{1}}={\mathrm{D}}\left({{\mathbf{R}}^{O}}{\mathrm{,}}{\overline {\mathbf{R}}^{O}}\right) &\buildrel \Delta \over = \sqrt {\frac{1}{2}tr{{\log}^{2}}\left({{{\left({{\mathbf{R}}^{O}}\right)}^{- 1/2}}{{\overline {\mathbf{R}}}^{O}}{{\left({{\mathbf{R}}^{O}}\right)}^{- 1/2}}} \right)} \\&= \sqrt {\frac{1}{2}\sum\limits_{i = 1}^{qM} {{{\log}^{2}}{\eta_{i}}}} \end{aligned}} $$
$$ {\begin{aligned} {d_{2}}&={\mathrm{D}}\left({{\mathbf{R}}^{I}}{\mathrm{,}}{\overline {\mathbf{R}}^{I}}\right) \buildrel \Delta \over = \sqrt {\frac{1}{2}tr{{\log}^{2}}\left({{{\left({{\mathbf{R}}^{I}}\right)}^{- 1/2}}{{\overline {\mathbf{R}}}^{I}}{{\left({{\mathbf{R}}^{I}}\right)}^{- 1/2}}} \right)} \\&= \sqrt {\frac{1}{2}\sum\limits_{i = 1}^{qM} {{{\log}^{2}}{\eta_{i}}}} \end{aligned}} $$
According to the geodesic distances d1 and d2, the two-dimensional feature vector D=[d1,d2] represents the signal sensed by the SUs. Finally, the feature vector D is used for spectrum sensing.
Cooperative spectrum sensing based on FCM clustering algorithm
The FCM clustering algorithm obtains clustering results through partitioning; the basic idea is to assign similar samples to the same class as far as possible. FCM is an improvement of the common K-means clustering algorithm: K-means partitions the data hard, whereas FCM performs a flexible fuzzy partitioning [32]. Compared with traditional spectrum sensing methods, cooperative spectrum sensing based on the FCM clustering algorithm not only eliminates complex threshold derivation but is also adaptive. The overall flow of the IGFCM method is shown in Fig. 3.
The overall flow of the IGFCM method
The IGFCM method consists of two parts: the training process (shown in the red box) and the spectrum sensing process (shown in the green box).
Training process based on FCM
Before training, we need to prepare a training set \(\overline {\mathrm {D}}\):
$$\begin{array}{@{}rcl@{}} \overline {\mathrm{D}} = \left[{{{\mathbf{D}}_{1}},{{\mathbf{D}}_{2}},\ldots,{{\mathbf{D}}_{J}}} \right] \end{array} $$
where Dj is a feature vector extracted as described in Section 3, and J is the number of training feature vectors. The clustering algorithm divides the unlabeled training feature vectors into C non-overlapping clusters. Let Zc denote the set of training feature vectors belonging to class c, where c=1,2,…,C; then
$$\begin{array}{@{}rcl@{}} {Z_{c}} = \left\{{{{\mathbf{D}}_{j}}|{{\mathbf{D}}_{j}} \in Cluster\;c\;\forall \;j} \right\} \end{array} $$
Each class Zc has a corresponding center Ψc, and each sample Dj belongs to class c with membership degree ucj, 0<ucj<1. The objective function Γ of the FCM clustering algorithm is given in Eq. 26, with the constraint in Eq. 27.
$$\begin{array}{@{}rcl@{}} \Gamma = {\sum}_{c = 1}^{C} {{\sum}_{j = 1}^{J} {u_{cj}^{m}}} {\left\| {{{\mathbf{D}}_{j}} - {\Psi_{c}}} \right\|^{2}} \end{array} $$
$$\begin{array}{@{}rcl@{}} {\sum}_{c = 1}^{C} {{u_{cj}} = 1},\;\;\;\;\forall \;j = 1,2,\ldots,J \end{array} $$
where ∥Dj−Ψc∥2 is the error metric and m>1 is the weighting exponent of the membership degree ucj, also called the smoothness index or fuzzy weighting index.
Incorporating the constraints into the objective function Γ with Lagrange multipliers yields the objective function shown in Eq. 28.
$$\begin{array}{@{}rcl@{}} \begin{array}{l} \Gamma \,=\, {\sum}_{c = 1}^{C} {{\sum}_{j = 1}^{J} {u_{cj}^{m}}} {\left\| {{{\mathbf{D}}_{j}} - {\Psi_{c}}} \right\|^{2}} \,+\, {\lambda_{1}}\left({{\sum}_{c = 1}^{C} {{u_{c1}} - 1}} \right) \\ \quad+ \ldots \,+\, {\lambda_{j}}\left({{\sum}_{c = 1}^{C} {{u_{cj}} - 1}} \right) \,+\, \ldots \,+\, {\lambda_{J}}\!\left({{\sum}_{c = 1}^{C} {{u_{cJ}} \,-\, 1}} \right) \end{array} \end{array} $$
Setting the derivatives with respect to the membership degrees ucj and the cluster centers Ψc to zero and substituting the constraint [33, 34] yields the update formulas for ucj and Ψc shown in Eqs. 29 and 30.
$$\begin{array}{@{}rcl@{}} {u_{cj}} = \frac{1}{{{\sum}_{k = 1}^{C} {{{\left({\frac{{\left\| {{{\mathbf{D}}_{j}} - {\Psi_{c}}} \right\|}}{{\left\| {{{\mathbf{D}}_{j}} - {\Psi_{k}}} \right\|}}} \right)}^{\frac{2}{{m - 1}}}}}}} \end{array} $$
$$\begin{array}{@{}rcl@{}} {\Psi_{c}} = \frac{{{\sum}_{j = 1}^{J} {\left({{{\mathbf{D}}_{j}}u_{cj}^{m}} \right)}}}{{{\sum}_{j = 1}^{J} {u_{cj}^{m}}}} \end{array} $$
The training process based on the FCM clustering algorithm is as follows:
Step 1 Input training data set \(\overline {\mathrm {D}}\), number of clusters C, smoothing index m, initialization membership ucj, and fault tolerance factor ε
Step 2 Calculate the class center Ψc by Eq. 30
Step 3 Calculate ν=∥Dj−Ψc∥2, if ν<ε, the algorithm stops; otherwise, continue to step 4
Step 4 Recalculate the membership degree ucj according to Eq. 29, return to step 2
Step 5 Output class center point Ψc
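The training steps above can be sketched as follows. The synthetic two-cluster data standing in for the noise and PU feature vectors, and the stopping rule on the change in memberships, are illustrative assumptions:

```python
import numpy as np

def fcm(D, C=2, m=2.0, eps=1e-6, max_iter=200, seed=0):
    """Fuzzy c-means on the J x d feature matrix D, iterating the
    update formulas of Eqs. 29-30. Returns the C x d class centers
    and the C x J membership matrix."""
    rng = np.random.default_rng(seed)
    J = D.shape[0]
    U = rng.random((C, J))
    U /= U.sum(axis=0)                                    # enforce Eq. 27
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um @ D) / Um.sum(axis=1, keepdims=True)  # Eq. 30
        dist = np.linalg.norm(D[None, :, :] - centers[:, None, :], axis=2)
        dist = np.maximum(dist, 1e-12)                    # guard exact hits
        U_new = 1.0 / ((dist[:, None, :] / dist[None, :, :])
                       ** (2.0 / (m - 1.0))).sum(axis=1)  # Eq. 29
        if np.max(np.abs(U_new - U)) < eps:               # stopping rule
            U = U_new
            break
        U = U_new
    return centers, U

# two well-separated synthetic clusters standing in for the
# noise-class and PU-class feature vectors D_j
rng = np.random.default_rng(1)
D = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(3.0, 0.1, (50, 2))])
centers, U = fcm(D)
```

With well-separated features the two recovered centers fall near the cluster means, and each column of U sums to one per Eq. 27.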
Spectrum sensing process based on FCM
After the training is successful, we can get a classifier for spectrum sensing, as shown in Eq. 31.
$$\begin{array}{@{}rcl@{}} \frac{{\left\| {\overline{\overline {\mathbf{D}}} - {\Psi_{1}}} \right\|}}{{\mathop {\min}\limits_{c = 2,3,\ldots,C} \left\| {\overline{\overline {\mathbf{D}}} - {\Psi_{c}}} \right\|}} > \xi \end{array} $$
In Eq. 31, \({\overline {\overline {\mathbf {D}}}}\) denotes the feature vector of an unknown perceived signal. If Eq. 31 is satisfied, the PU signal is present and the channel is not available; otherwise, the PU signal is absent and the channel can be used. The parameter ξ controls the trade-off between the missed-detection probability and the false-alarm probability in the sensing process [21].
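A minimal sketch of the decision rule of Eq. 31 (assuming, as the ratio suggests, that Ψ_1 is the noise-class center and the remaining centers are signal classes; the function and parameter names are illustrative):

```python
import numpy as np

def pu_present(feature, centers, xi=1.0):
    """Eq. 31: compare the distance to the noise center (centers[0],
    assumed to be Psi_1) against the distance to the nearest of the
    other class centers; exceeding xi declares the PU signal present."""
    d = np.linalg.norm(centers - feature, axis=1)
    return d[0] / d[1:].min() > xi  # True -> PU present, channel busy
```

With ξ = 1 this reduces to nearest-center classification; raising ξ lowers the false-alarm probability at the cost of more missed detections.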
Simulation results and performance analysis
This section simulates and analyzes the cooperative spectrum sensing algorithm based on the fuzzy c-means clustering algorithm. The simulated PU signal is an amplitude modulation (AM) signal, and the noise is Gaussian white noise. To ensure the accuracy of the experiment, 2000 signal feature vectors were extracted according to the feature extraction method described in Section 4; 1000 of them serve as training samples and 1000 as test samples.
First, we analyze the clustering effect of the FCM clustering algorithm on this feature. The simulation parameters are set as follows: M = 2 cooperative SUs, N = 1000 sampling points, and SNR = −11 dB. Figure 4 shows the signal and noise feature training samples, obtained by the feature extraction method that combines split recombination with information geometry, which serve as the training input of the classifier.
Unclassified feature vectors
Figure 5 shows the clustering data after using the fuzzy c-means clustering algorithm. The blue dot in Fig. 5 represents the noise feature vector, and the red dot represents the signal feature vector. Black dots and black triangles represent the center of the noise class and the PU signal class, respectively.
Classified feature vectors
Further, we compare the performance of IGFCM with other methods: energy detection (ED) and the IQMME, IQDMM, and IQDMEAE methods proposed in [22]. With the simulation parameters set to M = 2 cooperative SUs and N = 1000 sampling points, the results for SNR = −13 dB and SNR = −11 dB are shown in Figs. 6 and 7, respectively. Compared with the other methods, IGFCM has better spectrum sensing performance.
ROC curves for different algorithms at SNR=− 13 dB and M=2
Given N = 1000 sampling points and SNR = −15 dB, the simulation results for 2, 4, 6, 8, and 10 cooperative SUs are shown in Fig. 8. As can be seen from Fig. 8, the number of cooperative SUs M strongly affects the detection probability: as M increases, the detection probabilities of the several algorithms increase to varying degrees.
ROC curve at different number of SU with SNR=− 15 dB
As the number of sampling points increases, the perceived signal information becomes more comprehensive, so the extracted features are more representative. To observe the spectrum sensing performance of the IGFCM method under different numbers of sampling points, we keep the simulation parameters M = 2 and SNR = −13 dB unchanged. Figure 9 shows the IGFCM simulation results for 1000, 1400, 1800, 2200, and 2600 sampling points; the spectrum sensing performance increases with the number of sampling points.
ROC curve at different sampling points with SNR=− 13 dB
In this paper, a spectrum sensing method based on information geometry and FCM clustering is proposed. In the feature extraction stage, a method combining split recombination and information geometry transforms the complex signal detection problem into a geometric problem on a manifold, so that the detection problem can be analyzed indirectly with geometric tools. The FCM clustering algorithm is then used to train the extracted features, producing a classifier that realizes spectrum sensing. The sensing performance of the proposed method is analyzed in the experimental section; the results show that the method improves spectrum sensing performance to some extent. In future work, we will continue to study the application of clustering algorithms in spectrum sensing, such as kernel fuzzy c-means clustering (KFCM) and possibilistic fuzzy c-means (PFCM), in the hope of further improving spectrum sensing performance.
CFAR:
Constant false alarm rata
CR:
Cognitive radio
DAR:
Decomposition and recombination
DMEAE:
Difference between the maximum eigenvalue and the average energy
DMM:
Difference between the maximum eigenvalue and the minimum feature
FCM:
Fuzzy c-means
GMM:
Gaussian mixture model
I-DAR:
Interval decomposition and recombination
IGFCM:
Information geometry and fuzzy c-means
KFCM:
Kernel fuzzy c-means
MME:
Maximum to minimum eigenvalue
NN:
Neural network
O-DAR:
Order decomposition and recombination
PFCM:
Possibilistic fuzzy c-means
PU:
Primary user
SNR:
Signal-to-noise ratio
SPD:
Symmetric positive definite
SU:
Secondary user
SVM:
Support vector machine
A. A. Khan, M. H. Rehmani, M. Reisslein, Cognitive radio for smart grids: survey of architectures, spectrum sensing mechanisms, and networking protocols. IEEE Commun. Surv. Tutor. 18:, 860–898 (2016).
L. Xiao, in IEEE Global Communications Conference: 2014; Austin. Prospect theoretic analysis of anti-jamming communications in cognitive radio networks (IEEE, Austin, 2014), pp. 746–751.
K. Cichon, A. Kliks, H. Bogucka, Energy-efficient cooperative spectrum sensing: a survey. IEEE Commun. Surv. Tutor. 18:, 1861–1886 (2016).
N. S. Kim, J. M. Rabaey, A dual-resolution wavelet-based energy detection spectrum sensing for UWB-based cognitive radios. IEEE Trans. Circ. Syst. I Regular Papers. PP:, 1–14 (2017).
E. Chatziantoniou, B. Allen, V. Velisavljevic, P. Karadimas, J. Coon, Energy detection based spectrum sensing over two-wave with diffuse power fading channels. IEEE Trans. Veh. Technol. 66:, 868–874 (2017).
X. Zhang, R. Chai, F. Gao, in IEEE Global Conference on Signal and Information Processing: 09 February 2015; Atlanta. Matched filter based spectrum sensing and power level detection for cognitive radio network (IEEE, Atlanta, 2015), pp. 1267–1270.
A. Surampudi, in India International Conference on Information Processing: 2016. An adaptive decision threshold scheme for the matched filter method of spectrum sensing in cognitive radio using artificial neural networks (IEEE, Doha, 2016), pp. 1–5.
R. Mahapatra, in IEEE International Symposium on Wireless Communication Systems: 2008; Reykjavik. Cyclostationary detection for cognitive radio with multiple receivers (IEEE, Reykjavik, 2008), pp. 493–497.
W. Zhang, G. Abreu, M. Inamori, Y. Sanada, Spectrum sensing algorithms via finite random matrices. IEEE Trans. Commun. 60:, 164–175 (2012).
Y. Zeng, Y. C. Liang, Eigenvalue-based spectrum sensing algorithms for cognitive radio. IEEE Trans. Commun. 57:, 1784–1793 (2009).
C. Liu, in 7th International Conference on Wireless Communications, Networking and Mobile Computing: 2011; Wuhan. A distance-weighed algorithm based on maximum-minimum eigenvalues for cooperative spectrum sensing (IEEE, Wuhan, 2011), pp. 1–4.
N. Liu, H. S. Shi, B. Yang, D. P. Yuan, Spectrum sensing method based on ME-S-ED. Meas. Control. Technol. 35:, 125–128 (2016).
M. T. Antonia, V. Sergio, Random matrix theory and wireless communications. Commun. Inf. Theory. 1:, 1–182 (2004).
W. Hu, in International Conference on Wireless Communications, Networking and Mobile Computing: 26-28 September 2014; Beijing. Cooperative spectrum sensing algorithm based on bistable stochastic resonance (IET, Beijing, 2014), pp. 126–130.
J. K. Liu, X. S. Wang, W. Tao, Q. U. Long-Hai, Application of information geometry to target detection for pulsed-Doppler radar. J. Natl Univ. Defense Technol. 33:, 77–80 (2011).
Q. Chen, in International Conference on Cloud Computing and Security: 16-18 June 2017; Nanjing. Research on cognitive radio spectrum sensing method based on information geometry (IEEE, Nanjing, 2017), pp. 554–564.
Q. Lu, S. Yang, F. Liu, Wideband spectrum sensing based on Riemannian distance for cognitive radio networks. Sensors. 17:, 661 (2017).
L. Xiao, Y. Li, G. Han, W. Zhuang, PHY-layer spoofing detection with reinforcement learning in wireless networks. IEEE Trans. Veh. Technol. 65:, 10037–10047 (2016).
S. P. Maity, S. Chatterjee, T. Acharya, On optimal fuzzy c-means clustering for energy efficient cooperative spectrum sensing in cognitive radio networks. Dig. Signal Proc. 49:, 104–115 (2016).
A. Paul, S. P. Maity, Kernel fuzzy c-means clustering on energy detection based cooperative spectrum sensing. Dig. Commun. Netw. 4:, 196–205 (2016).
V. Kumar, in Twenty Second National Conference on Communication: 4-6 March 2016; Guwahati. K-mean clustering based cooperative spectrum sensing in generalized k-u fading channels (IEEE, Guwahati, 2016), pp. 1–5.
Y. Zhang, P. Wan, S. Zhang, Y. Wang, N. Li, A spectrum sensing method based on signal feature and clustering algorithm in cognitive wireless multimedia sensor networks. Adv. Multimedia. 2017:, 1–10 (2017).
K. M. Thilina, K. W. Choi, N. Saquib, E. Hossain, Machine learning techniques for cooperative spectrum sensing in cognitive radio networks. IEEE J. Sel. Areas Commun. 31:, 2209–2221 (2013).
G. C. Sobabe, in International Congress on Image and Signal Processing, Biomedical Engineering and Informatics. A machine learning based spectrum-sensing algorithm using sample covariance matrix (IEEE, Shanghai, 2018), pp. 1–6.
S. Chatterjee, A. Banerjee, T. Acharya, S. P. Maity, Fuzzy c-means clustering in energy detection for cooperative spectrum sensing in cognitive radio system. Proc. Mult. Access Commun. 8715:, 84–95 (2014).
A. S. Kannan, in International Conference on Control Communication and Computing: 13-15 December 2013; Thiruvananthapuram. Performance analysis of blind spectrum sensing in cooperative environment (IEEE, Thiruvananthapuram, 2013), pp. 277–280.
L. Xiao, J. Liu, Q. Li, N. B. Mandayam, H. V. Poor, User-centric view of jamming games in cognitive radio networks. IEEE Trans. Inf. Forensics Secur. 10:, 2578–2590 (2015).
C. Lenglet, M. Rousson, R. Deriche, O. Faugeras, Statistics on the manifold of multivariate normal distributions: theory and application to diffusion tensor MRI processing. J. Math. Imaging Vis. 25:, 423–444 (2006).
M. Menendez, A differential geometric approach to the geometric mean of symmetric positive-definite matrices. Siam J. Matrix Anal. Appl. 26:, 735–747 (2008).
M. Menendez, D. Morales, L. Pardo, M. Salicru, Statistical tests based on geodesic distances. Appl. Math. Lett. 8:, 65–69 (1995).
M. Calvo, J. M. Oller, A distance between multivariate normal distributions based in an embedding into the Siegel group. J. Multivar. Anal. 35:, 223–242 (1991).
M. N. Ahmed, S. M. Yamany, N. Mohamed, A. A. Farag, T. Moriarty, A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans. Med. Imaging. 21:, 193–199 (2002).
D. M. S. Bhatti, in International Conference on Information and Communication Technology Convergence: 18-20 October 2017; Jeju. Fuzzy c-means and spatial correlation based clustering for cooperative spectrum sensing (IEEE, Jeju, 2017), pp. 486–491.
D. M. S. Bhatti, N. Saeed, H. Nam, Fuzzy c-means clustering and energy efficient cluster head selection for cooperative sensor network. Sensors. 16:, 1459–1476 (2016).
The authors thank their organizations for the support and facilities provided.
This work was supported in part by special funds from the central finance to support the development of local universities under No. 400170044, the project supported by the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences under grant No.20180106, the degree and graduate education reform project of Guangdong Province under grant No.2016JGXM_MS_26, the foundation of key laboratory of machine intelligence and advanced computing of the Ministry of Education under grant No.MSC-201706A and the higher education quality projects of Guangdong Province and Guangdong University of Technology.
The materials are mainly drawn from the journals and conferences listed in the references. The simulations in the results and performance analysis section were carried out with MATLAB.
School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
Shunchao Zhang, Yonghua Wang, Jiangfan Li, Pin Wan, Yongwei Zhang & Nan Li
State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
Yonghua Wang
Hubei Key Laboratory of Intelligent Wireless Communications, South-Central University for Nationalities, Wuhan, 430074, China
Pin Wan
Shunchao Zhang
Jiangfan Li
Yongwei Zhang
Nan Li
SZ proposed the idea and wrote the manuscript. YW demonstrated the idea. JL and YZ designed the experiment and conducted the experimental verification. PW and NL gave constructive suggestions on the structure of the paper. All authors read and approved the final manuscript.
Correspondence to Yonghua Wang.
Shunchao Zhang received his B.S. degree from Hunan Institute of Engineering in 2016. He is currently pursuing the M.S. degree at Guangdong University of Technology, Guangdong, China. His research interests include spectrum sensing in cognitive radio.
Yonghua Wang received his B.S. degree in Electrical Engineering and Automation from Hebei University of Technology in 2001, the M.S. degree in Control Theory and Control Engineering from Guangdong University of Technology in 2006, and the Ph.D. degree in Communication and Information Systems from Sun Yat-sen University in 2009. He is now with the School of Automation at Guangdong University of Technology.
Jiangfan Li received his B.S. degree from Foshan University in 2016. He is currently pursuing the M.S. degree at Guangdong University of Technology, Guangdong, China. His research interests include spectrum sensing in cognitive radio.
Pin Wan received his B.S. degree in Electronic Engineering from Southeast University in 1984, the M.S. degree in Circuits and Systems from Southeast University in 1990, and the Ph.D. degree in Control Theory and Control Engineering from Guangdong University of Technology in 2011. He is currently a professor in the School of Automation at Guangdong University of Technology.
Yongwei Zhang received his B.S. degree from Jiaying University in 2016. He is currently pursuing the M.S. degree at Guangdong University of Technology, Guangdong, China. His research interests include spectrum sensing in cognitive radio.
Nan Li is currently pursuing the B.S. degree at Guangdong University of Technology, Guangdong, China. Her research interests include spectrum sensing in cognitive radio.
Zhang, S., Wang, Y., Li, J. et al. A cooperative spectrum sensing method based on information geometry and fuzzy c-means clustering algorithm. J Wireless Com Network 2019, 17 (2019). https://doi.org/10.1186/s13638-019-1338-z
Cooperative spectrum sensing
Fuzzy c-means clustering algorithm | CommonCrawl |
Jonathan Bootle
Sumcheck Arguments and their Applications
Jonathan Bootle Alessandro Chiesa Katerina Sotiraki
We introduce a class of interactive protocols, which we call *sumcheck arguments*, that establishes a novel connection between the sumcheck protocol (Lund et al. JACM 1992) and folding techniques for Pedersen commitments (Bootle et al. EUROCRYPT 2016). Informally, we consider a general notion of bilinear commitment over modules, and show that the sumcheck protocol applied to a certain polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings. Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (such as discrete logarithms, pairings, RSA groups, lattices), providing one abstract framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings.
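For readers unfamiliar with the underlying primitive, here is a minimal sketch of the classical sumcheck protocol of Lund et al. (not of the sumcheck *arguments* construction above), with prover and verifier merged into one function for illustration; the field modulus and interface are assumptions.

```python
import itertools
import random

P = 2**31 - 1  # an illustrative prime field modulus


def lagrange_eval(ys, x, p=P):
    """Evaluate the unique polynomial through (i, ys[i]), i = 0..d, at x mod p."""
    d = len(ys) - 1
    total = 0
    for i, yi in enumerate(ys):
        num, den = 1, 1
        for j in range(d + 1):
            if j != i:
                num = num * (x - j) % p
                den = den * (i - j) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total


def sumcheck(g, n, deg, p=P, rng=random.Random(0)):
    """Sumcheck for T = sum of g(x) over x in {0,1}^n; g has degree <= deg
    in each variable. Returns (claimed sum T, verifier accepts?)."""
    claim = sum(g(list(x)) for x in itertools.product([0, 1], repeat=n)) % p
    T = claim
    r = []
    for i in range(n):
        # Prover: the univariate g_i(X) = sum over the remaining boolean
        # variables, sent as its evaluations at X = 0..deg.
        ys = []
        for xv in range(deg + 1):
            s = 0
            for tail in itertools.product([0, 1], repeat=n - i - 1):
                s = (s + g(r + [xv] + list(tail))) % p
            ys.append(s)
        # Verifier: consistency check g_i(0) + g_i(1) == previous claim.
        if (ys[0] + ys[1]) % p != claim:
            return T, False
        ri = rng.randrange(p)
        claim = lagrange_eval(ys, ri, p)
        r.append(ri)
    # Final check: a single evaluation of g at the random point.
    return T, g(r) % p == claim
```

An honest prover always passes; a cheating prover is caught except with probability roughly n·deg/p, which is the soundness bound exploited by the folding connection described above.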
A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge
Jonathan Bootle Vadim Lyubashevsky Ngoc Khanh Nguyen Gregor Seiler
Today's most compact zero-knowledge arguments are based on the hardness of the discrete logarithm problem and related classical assumptions. If one is interested in quantum-safe solutions, then all of the known techniques stem from the PCP-based framework of Kilian (STOC 92), which can be instantiated based on the hardness of any collision-resistant hash function. Both approaches produce asymptotically logarithmic sized arguments but, by exploiting extra algebraic structure, the discrete logarithm arguments are a few orders of magnitude more compact in practice than the generic constructions. In this work, we present the first (poly)-logarithmic *post-quantum* zero-knowledge arguments that deviate from the PCP approach. At the core of succinct zero-knowledge proofs are succinct commitment schemes (in which the commitment and the opening proof are sub-linear in the message size), and we propose two such constructions based on the hardness of the (Ring)-Short Integer Solution (Ring-SIS) problem, each having certain trade-offs. For commitments to $N$ secret values, the communication complexity of our first scheme is $\tilde{O}(N^{1/c})$ for any positive integer $c$, and $O(\log^2 N)$ for the second. Both of our protocols have somewhat large *slack*, which in lattice constructions is the ratio of the norm of the extracted secrets to the norm of the secrets that the honest prover uses in the proof. The lower this factor, the smaller we can choose the practical parameters. For a fixed value of this factor, our $\tilde{O}(N^{1/c})$-argument actually achieves lower communication complexity. Both of these are a significant theoretical improvement over the previously best lattice construction by Bootle et al. (CRYPTO 2018), which gave $O(\sqrt{N})$-sized proofs.
Linear-Time Arguments with Sublinear Verification from Tensor Codes
Jonathan Bootle Alessandro Chiesa Jens Groth
Minimizing the computational cost of the prover is a central goal in the area of succinct arguments. In particular, it remains a challenging open problem to construct a succinct argument where the prover runs in linear time and the verifier runs in polylogarithmic time. We make progress towards this goal by presenting a new linear-time probabilistic proof. For any fixed ε > 0, we construct an interactive oracle proof (IOP) that, when used for the satisfiability of an N-gate arithmetic circuit, has a prover that uses O(N) field operations and a verifier that uses O(N^ε) field operations. The sublinear verifier time is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-size encoding of the circuit that is computable in linear time). When combined with a linear-time collision-resistant hash function, our IOP immediately leads to an argument system where the prover performs O(N) field operations and hash computations, and the verifier performs O(N^ε) field operations and hash computations (given a short digest of the N-gate circuit).
Foundations of Fully Dynamic Group Signatures
Jonathan Bootle Andrea Cerulli Pyrros Chaidos Essam Ghadafi Jens Groth
Group signatures allow members of a group to anonymously sign on behalf of the group. Membership is administered by a designated group manager. The group manager can also reveal the identity of a signer if and when needed to enforce accountability and deter abuse. For group signatures to be applicable in practice, they need to support fully dynamic groups, i.e., users may join and leave at any time. Existing security definitions for fully dynamic group signatures are informal, have shortcomings, and are mutually incompatible. We fill the gap by providing a formal rigorous security model for fully dynamic group signatures. Our model is general and is not tailored toward a specific design paradigm and can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and when possible incorporate protection against maliciously chosen keys. We consider both the case where the group management and tracing signatures are administered by the same authority, i.e., a single group manager, and also the case where those roles are administered by two separate authorities, i.e., a group manager and an opening authority. We also show that a specialization of our model captures existing models for static and partially dynamic schemes. In the process, we identify a subtle gap in the security achieved by group signatures using revocation lists. We show that in such schemes new members achieve a slightly weaker notion of traceability. The flexibility of our security model allows to capture such relaxation of traceability.
Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs
Jonathan Bootle Vadim Lyubashevsky Gregor Seiler
A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector $\vec{s}$ with small coefficients satisfying $A\vec{s}=\vec{u}\bmod q$. While there exist fairly efficient proofs for a relaxed version of this equation which prove the knowledge of $\vec{s}'$ and $c$ satisfying $A\vec{s}'=\vec{u}c$ where $\Vert\vec{s}'\Vert\gg\Vert\vec{s}\Vert$ and $c$ is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern's protocol (Crypto '93), for proving knowledge of nearby codewords, to larger moduli. The scheme is a $\Sigma$-protocol, each of whose iterations has soundness error $2/3$, and thus requires over 200 repetitions to obtain soundness error of $2^{-128}$, which is the main culprit behind the large size of the proofs produced. In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short $\vec{s}$ satisfying $A\vec{s}=\vec{u}\bmod q$. Unlike Stern's proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system comes from the fact that each round has soundness error of $1/n$, where $n$ is the number of columns of $A$. For typical applications, $n$ is a few thousand, and therefore our proof needs to be repeated around 10 times to achieve a soundness error of $2^{-128}$. For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern's approach.
Sub-linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits
Carsten Baum Jonathan Bootle Andrea Cerulli Rafael del Pino Jens Groth Vadim Lyubashevsky
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3 N}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damgård et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
Efficient Batch Zero-Knowledge Arguments for Low Degree Polynomials
Jonathan Bootle Jens Groth
Bootle et al. (EUROCRYPT 2016) construct an extremely efficient zero-knowledge argument for arithmetic circuit satisfiability in the discrete logarithm setting. However, the argument does not treat relations involving commitments, and furthermore, for simple polynomial relations, the complex machinery employed is unnecessary. In this work, we give a framework for expressing simple relations between commitments and field elements, and present a zero-knowledge argument which, by contrast with Bootle et al., is constant-round and uses fewer group operations, in the case where the polynomials in the relation have low degree. Our method also directly yields a batch protocol, which allows many copies of the same relation to be proved and verified in a single argument more efficiently with only a square-root communication overhead in the number of copies. We instantiate our protocol with concrete polynomial relations to construct zero-knowledge arguments for membership proofs, polynomial evaluation proofs, and range proofs. Our work can be seen as a unified explanation of the underlying ideas of these protocols. In the instantiations of membership proofs and polynomial evaluation proofs, we also achieve better efficiency than the state of the art.
LWE Without Modular Reduction and Improved Side-Channel Attacks Against BLISS
Jonathan Bootle Claire Delaplace Thomas Espitau Pierre-Alain Fouque Mehdi Tibouchi
This paper is devoted to analyzing the variant of Regev's learning with errors (LWE) problem in which modular reduction is omitted: namely, the problem (ILWE) of recovering a vector $\mathbf{s}\in\mathbb{Z}^n$ given polynomially many samples of the form $(\mathbf{a},\langle\mathbf{a},\mathbf{s}\rangle + e)\in\mathbb{Z}^{n+1}$ where $\mathbf{a}$ and $e$ follow fixed distributions. Unsurprisingly, this problem is much easier than LWE: under mild conditions on the distributions, we show that the problem can be solved efficiently as long as the variance of $e$ is not superpolynomially larger than that of $\mathbf{a}$. We also provide almost tight bounds on the number of samples needed to recover $\mathbf{s}$. Our interest in studying this problem stems from the side-channel attack against the BLISS lattice-based signature scheme described by Espitau et al. at CCS 2017. The attack targets a quadratic function of the secret that leaks in the rejection sampling step of BLISS. The same part of the algorithm also suffers from a linear leakage, but the authors claimed that this leakage could not be exploited due to signature compression: the linear system arising from it turns out to be noisy, and hence key recovery amounts to solving a high-dimensional problem analogous to LWE, which seemed infeasible. However, this noisy linear algebra problem does not involve any modular reduction: it is essentially an instance of ILWE, and can therefore be solved efficiently using our techniques. This allows us to obtain an improved side-channel attack on BLISS, which applies to 100% of secret keys (as opposed to ${\approx}7\%$ in the CCS paper), and is also considerably faster.
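The key observation, that without modular wrap-around ordinary least squares recovers the secret once enough samples are available, can be illustrated with a toy sketch (made-up parameters and names, not the attack of the paper):

```python
import numpy as np

def recover_ilwe_secret(A, b):
    """Round the least-squares solution of the unreduced system b = A s + e
    to recover an integer secret s (works when the noise variance is small
    relative to the number of samples)."""
    s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.rint(s_hat).astype(int)
```

With a few hundred samples, small uniform coefficient vectors, and unit-variance Gaussian noise, the per-coordinate estimation error is far below 1/2, so the rounding step recovers the secret exactly.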
Arya: Nearly Linear-Time Zero-Knowledge Proofs for Correct Program Execution
Jonathan Bootle Andrea Cerulli Jens Groth Sune Jakobsen Mary Maller
There have been tremendous advances in reducing interaction, communication and verification time in zero-knowledge proofs but it remains an important challenge to make the prover efficient. We construct the first zero-knowledge proof of knowledge for the correct execution of a program on public and private inputs where the prover computation is nearly linear time. This saves a polylogarithmic factor in asymptotic performance compared to current state of the art proof systems. We use the TinyRAM model to capture general purpose processor computation. An instance consists of a TinyRAM program and public inputs. The witness consists of additional private inputs to the program. The prover can use our proof system to convince the verifier that the program terminates with the intended answer within given time and memory bounds. Our proof system has perfect completeness, statistical special honest verifier zero-knowledge, and computational knowledge soundness assuming linear-time computable collision-resistant hash functions exist. The main advantage of our new proof system is asymptotically efficient prover computation. The prover's running time is only a superconstant factor larger than the program's running time in an apples-to-apples comparison where the prover uses the same TinyRAM model. Our proof system is also efficient on the other performance parameters; the verifier's running time and the communication are sublinear in the execution time of the program and we only use a log-logarithmic number of rounds.
Linear-Time Zero-Knowledge Proofs for Arithmetic Circuit Satisfiability
Jonathan Bootle Andrea Cerulli Essam Ghadafi Jens Groth Mohammad Hajiabadi Sune K. Jakobsen
Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting
Jonathan Bootle Andrea Cerulli Pyrros Chaidos Jens Groth Christophe Petit
Carsten Baum (1)
Andrea Cerulli (5)
Pyrros Chaidos (2)
Alessandro Chiesa (2)
Rafael del Pino (1)
Claire Delaplace (1)
Thomas Espitau (1)
Essam Ghadafi (2)
Jens Groth (7)
Mohammad Hajiabadi (1)
Sune K. Jakobsen (1)
Sune Jakobsen (1)
Vadim Lyubashevsky (3)
Mary Maller (1)
Ngoc Khanh Nguyen (1)
Christophe Petit (1)
Gregor Seiler (2)
Katerina Sotiraki (1)
Mehdi Tibouchi (1) | CommonCrawl |
Ergodic Theory and Dynamical Systems (3)
RAIRO - Theoretical Informatics and Applications (3)
The Journal of Anatomy (1)
Encyclopedia of Mathematics and its Applications (18)
Recognizability for sequences of morphisms
VALÉRIE BERTHÉ, WOLFGANG STEINER, JÖRG M. THUSWALDNER, REEM YASSAWI
Journal: Ergodic Theory and Dynamical Systems / Volume 39 / Issue 11 / November 2019
We investigate different notions of recognizability for a free monoid morphism $\sigma:\mathcal{A}^{*}\rightarrow\mathcal{B}^{*}$. Full recognizability occurs when each (aperiodic) point in $\mathcal{B}^{\mathbb{Z}}$ admits at most one tiling with words $\sigma(a)$, $a\in\mathcal{A}$. This is stronger than the classical notion of recognizability of a substitution $\sigma:\mathcal{A}^{*}\rightarrow\mathcal{A}^{*}$, where the tiling must be compatible with the language of the substitution. We show that if $|\mathcal{A}|=2$, or if $\sigma$'s incidence matrix has rank $|\mathcal{A}|$, or if $\sigma$ is permutative, then $\sigma$ is fully recognizable. Next we investigate the classical notion of recognizability and improve earlier results of Mossé [Puissances de mots et reconnaissabilité des points fixes d'une substitution. Theoret. Comput. Sci. 99(2) (1992), 327–334] and Bezuglyi et al. [Aperiodic substitution systems and their Bratteli diagrams. Ergod. Th. & Dynam. Sys. 29(1) (2009), 37–72], by showing that any substitution is recognizable for aperiodic points in its substitutive shift. Finally we define recognizability and also eventual recognizability for sequences of morphisms which define an $S$-adic shift. We prove that a sequence of morphisms on alphabets of bounded size, such that compositions of consecutive morphisms are growing on all letters, is eventually recognizable for aperiodic points. We provide examples of eventually recognizable, but not recognizable, sequences of morphisms, and sequences of morphisms which are not eventually recognizable. As an application, for a recognizable sequence of morphisms, we obtain an almost everywhere bijective correspondence between the $S$-adic shift it generates, and the measurable Bratteli–Vershik dynamical system that it defines.
Edited by Valérie Berthé, Université de Paris VII (Denis Diderot), Michel Rigo, Université de Liège, Belgium
Book: Combinatorics, Words and Symbolic Dynamics
Print publication: 26 February 2016, pp ix-x
Print publication: 26 February 2016, pp v-viii
Print publication: 26 February 2016, pp xix-xx
Print publication: 26 February 2016, pp xi-xviii
Inspired by the celebrated Lothaire series (Lothaire, 1983, 2002, 2005) and animated by the same spirit as in the book (Berthé and Rigo, 2010), this collaborative volume aims at presenting and developing recent trends in combinatorics with applications in the study of words and in symbolic dynamics.
On the one hand, some of the newest results in these areas have been selected for this volume and here benefit from a synthetic exposition. On the other hand, emphasis on the connections existing between the main topics of the book is sought. These connections arise, for instance, from numeration systems that can be associated with algorithms or dynamical systems and their corresponding expansions, from cellular automata and the computation or the realisation of a given entropy, or even from the study of friezes or from the analysis of algorithms.
This book is primarily intended for graduate students or research mathematicians and computer scientists interested in combinatorics on words, pattern avoidance, graph theory, quivers and frieze patterns, automata theory and synchronised words, tilings and theory of computation, multidimensional subshifts, discrete dynamical systems, ergodic theory and transfer operators, numeration systems, dynamical arithmetics, analytic combinatorics, continued fractions, probabilistic models. We hope that some of the chapters can serve as useful material for lecturing at master/graduate level. Some chapters of the book can also be interesting to biologists and researchers interested in text algorithms or bio-informatics.
Let us succinctly sketch the general landscape of the volume. Short abstracts of each chapter can be found below. The book can roughly be divided into four general blocks. The first one, made of Chapters 2 and 3, is devoted to numeration systems. The second block, made of Chapters 4 to 6, pertains to combinatorics of words. The third block is concerned with symbolic dynamics: in the one-dimensional setting with Chapter 7, and in the multidimensional one, with Chapters 8 and 9. The last block, made of Chapters 10 and 11, has again a combinatorial nature.
Words, i.e., finite or infinite sequences of symbols taking values in a finite set, are ubiquitous in the sciences because of their strong representational power: they arise as a natural way to code elements of an infinite set using finitely many symbols. So let us start our general description with combinatorics on words.
Notation index
Print publication: 26 February 2016, pp i-iv
Combinatorics, Words and Symbolic Dynamics
Edited by Valérie Berthé, Michel Rigo
Print publication: 26 February 2016
Internationally recognised researchers look at developing trends in combinatorics with applications in the study of words and in symbolic dynamics. They explain the important concepts, providing a clear exposition of some recent results, and emphasise the emerging connections between these different fields. Topics include combinatorics on words, pattern avoidance, graph theory, tilings and theory of computation, multidimensional subshifts, discrete dynamical systems, ergodic theory, numeration systems, dynamical arithmetics, automata theory and synchronised words, analytic combinatorics, continued fractions and probabilistic models. Each topic is presented in a way that links it to the main themes, but then they are also extended to repetitions in words, similarity relations, cellular automata, friezes and Dynkin diagrams. The book will appeal to graduate students, research mathematicians and computer scientists working in combinatorics, theory of computation, number theory, symbolic dynamics, tilings and stringology. It will also interest biologists using text algorithms.
Return words of linear involutions and fundamental groups
VALÉRIE BERTHÉ, VINCENT DELECROIX, FRANCESCO DOLCE, DOMINIQUE PERRIN, CHRISTOPHE REUTENAUER, GIUSEPPINA RINDONE
Journal: Ergodic Theory and Dynamical Systems / Volume 37 / Issue 3 / May 2017
We investigate the shifts associated with natural codings of linear involutions. We deduce, from the geometric representation of linear involutions as Poincaré maps of measured foliations, a suitable definition of return words which yields that the set of return words to a given word is a symmetric basis of the free group on the underlying alphabet, $A$. The set of return words with respect to a subgroup of finite index $G$ of the free group on $A$ is also proved to be a symmetric basis of $G$.
A combinatorial approach to products of Pisot substitutions
VALÉRIE BERTHÉ, JÉRÉMIE BOURDON, TIMO JOLIVET, ANNE SIEGEL
Journal: Ergodic Theory and Dynamical Systems / Volume 36 / Issue 6 / September 2016
We define a generic algorithmic framework to prove a pure discrete spectrum for the substitutive symbolic dynamical systems associated with some infinite families of Pisot substitutions. We focus on the families obtained as finite products of the three-letter substitutions associated with the multidimensional continued fraction algorithms of Brun and Jacobi–Perron. Our tools consist in a reformulation of some combinatorial criteria (coincidence conditions), in terms of properties of discrete plane generation using multidimensional (dual) substitutions. We also deduce some topological and dynamical properties of the Rauzy fractals, of the underlying symbolic dynamical systems, as well as some number-theoretical properties of the associated Pisot numbers.
Connectedness of fractals associated with Arnoux–Rauzy substitutions
Valérie Berthé, Timo Jolivet, Anne Siegel
Journal: RAIRO - Theoretical Informatics and Applications / Volume 48 / Issue 3 / July 2014
Rauzy fractals are compact sets with fractal boundary that can be associated with any unimodular Pisot irreducible substitution. These fractals can be defined as the Hausdorff limit of a sequence of compact sets, where each set is a renormalized projection of a finite union of faces of unit cubes. We exploit this combinatorial definition to prove the connectedness of the Rauzy fractal associated with any finite product of three-letter Arnoux–Rauzy substitutions.
Edited by Valérie Berthé, Université de Paris VII, Michel Rigo, Université de Liège, Belgium
Book: Combinatorics, Automata and Number Theory
Print publication: 12 August 2010, pp i-iv
Combinatorics, Automata and Number Theory
Print publication: 12 August 2010
This collaborative volume presents trends arising from the fruitful interaction between the themes of combinatorics on words, automata and formal language theory, and number theory. Presenting several important tools and concepts, the authors also reveal some of the exciting and important relationships that exist between these different fields. Topics include numeration systems, word complexity function, morphic words, Rauzy tilings and substitutive dynamical systems, Bratelli diagrams, frequencies and ergodicity, Diophantine approximation and transcendence, asymptotic properties of digital functions, decidability issues for D0L systems, matrix products and joint spectral radius. Topics are presented in a way that links them to the three main themes, but also extends them to dynamical systems and ergodic theory, fractals, tilings and spectral properties of matrices. Graduate students, research mathematicians and computer scientists working in combinatorics, theory of computation, number theory, symbolic dynamics, fractals, tilings and stringology will find much of interest in this book.
Print publication: 12 August 2010, pp 599-615
Print publication: 12 August 2010, pp xi-xviii
As the title may suggest, this book is about combinatorics on words, automata and formal language theory, as well as number theory. This collaborative work gives a glimpse of the active community working in these interconnected and even intertwined areas. It presents several important tools and concepts usually encountered in the literature and it reveals some of the exciting and non-trivial relationships existing between the considered fields of research. This book is mainly intended for graduate students or research mathematicians and computer scientists interested in combinatorics on words, theory of computation, number theory, dynamical systems, ergodic theory, fractals, tilings and stringology. We hope that some of the chapters can serve as useful material for lecturing at master level.
The outline of this project has germinated after a very successful international eponymous school organised at the University of Liège (Belgium) in 2006 and supported by the European Union with the help of the European Mathematical Society (EMS). Parts of a preliminary version of this book were used as lecture notes for the second edition of the school organised in June 2009 and mainly supported by the European Science Foundation (ESF) through the AutoMathA programme. For both events, we acknowledge also financial support from the University of Liège and the Belgian funds for scientific research (FNRS).
We have selected ten topics which are directed towards the fundamental three themes of this project (namely, combinatorics, automata and number theory) and they naturally extend to dynamical systems and ergodic theory (see Chapters 6 and 7), but also to fractals and tilings (see Chapter 5) and spectral properties of matrices (see Chapter 11).
Print publication: 12 August 2010, pp ix-x
Print publication: 12 August 2010, pp v-viii | CommonCrawl |
Bounding the "spikiness" of a probability distribution
Are there any well-known conditions that guarantee that a probability distribution isn't too "spiky"?
I ask this question because I am interested in the families of probability distributions $f(x)$ on the unit interval such that the following criterion holds: there exists a measurable subset $S\subset[0,1]$ such that $$4\left(\int_{S}f(x)\,dx\right)^{2}\geq\lambda(S)\int_{0}^{1}f(x)^{2}\,dx$$where $\lambda(S)$ denotes the 1-dim real Lebesgue measure of $S$, i.e. the total length of the intervals that comprise it. This looks like a reversed Jensen's inequality, except for the fact that we're taking integrals over two separate domains.
(1) Is there a well-known sufficient condition that would cause this to hold?
(2) How about if I increase the coefficient $4$?
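As a quick sanity check of the displayed criterion, the following sketch evaluates both sides numerically for one concrete case: the Beta(2,2) density $f(x)=6x(1-x)$ with $S=[0,1]$. The choice of density, grid size, and $S$ are illustrative assumptions, not part of the question.

```python
# Numerical check of the criterion
#   4 * (integral_S f)^2  >=  lambda(S) * integral_0^1 f^2
# for f(x) = 6x(1-x), the Beta(2,2) pdf, with S = [0,1].

def midpoint(g, a, b, n=10000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 6.0 * x * (1.0 - x)            # Beta(2,2) density on [0,1]

S = (0.0, 1.0)                                # take S to be the whole interval
mass_S = midpoint(f, *S)                      # integral of f over S (should be ~1)
lam_S = S[1] - S[0]                           # Lebesgue measure of S
l2 = midpoint(lambda x: f(x) ** 2, 0.0, 1.0)  # integral of f^2 over [0,1] (~1.2)

lhs = 4.0 * mass_S ** 2
rhs = lam_S * l2
print(lhs, rhs, lhs >= rhs)                   # 4.0 vs 1.2: criterion holds
```

Since $\int_0^1 f^2 = 1.2 \le 4$ here, taking $S=[0,1]$ already works; the interesting cases are densities with large $L^2$ norm, where $S$ must be chosen adapted to the spike.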
reference-request pr.probability real-analysis st.statistics probability-distributions
Tom Solberg
$\begingroup$ Could you also add your reference indicating where such a measure comes from? $\endgroup$ – Henry.L Dec 23 '17 at 19:29
Non-Gaussianness is an ambiguous concept. At one end of the continuum of probability distributions are flat ones such as the uniform, where all events are clustered into a given range and equally likely. At the other end are structured, spiky distributions, with the certain event being the extreme example.[1]
Therefore the measure of spikiness is usually based on estimators of a shape parameter of an assumed data-generating distribution. For example, if you assume the data is generated from a beta distribution, then the spikiness can be measured by an estimator of its shape parameter. This is the classic thinking when a parameterized model is assumed for the underlying probability distribution that generates the data.
Following this idea, a classic test for comparing how similar two probability distributions are is the Kolmogorov–Smirnov test. It induces a nonparametric measure of similarity, and therefore could be used for exploring spikiness. In other words, spikiness can be measured by an appropriate choice of norm on the space of probability distributions supported on $[0,1]$.
To be honest, I think this is more like a reverse Schwarz inequality than a Jensen inequality, since I do not see how convexity comes into play. If that is the case, then such a sufficient condition reduces to a choice of $S$ for which majorant conditions hold. For any isotonic functional $A$, including most norms, $0\leq A(f^{2})A(g^{2})-A^{2}(fg)\leq\frac{1}{4}(M-m)^{2}A^{2}(g^{2})$ whenever $m\cdot g\leq f\leq M\cdot g$. In this case we can take $f=g$ and see if we can relate the majorant coefficients $M,m$ to $\lambda(S)$, which I believe is a common practice in deriving a bound, since the above inequality is sharp.
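One way to make the connection to $\lambda(S)$ concrete is the following sketch, under the additional assumptions (not spelled out above) that $g\equiv 1$ and $A(h)=\int_S h\,dx$, with $m\le f\le M$ on $S$:

```latex
% Specializing the reverse-Schwarz (Gruss-type) bound to g \equiv 1
% and the isotonic functional A(h) = \int_S h\,dx, with m \le f \le M on S:
\[
  0 \;\le\; \lambda(S)\int_S f^2\,dx \;-\; \Big(\int_S f\,dx\Big)^{2}
    \;\le\; \tfrac{1}{4}(M-m)^{2}\,\lambda(S)^{2},
\]
% so, rearranging, any set S on which f is pinched between m and M satisfies
\[
  \Big(\int_S f\,dx\Big)^{2} \;\ge\; \lambda(S)\int_S f^2\,dx
    \;-\; \tfrac{1}{4}(M-m)^{2}\,\lambda(S)^{2}.
\]
```

Note this controls $\int_S f^2$ rather than $\int_0^1 f^2$ appearing in the question, so an additional step relating the two integrals would still be needed.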
[1]Gray, William Charles. Variable norm deconvolution. No. 19. Ph. D. thesis: Stanford University, 1979.
[2]Dragomir, Sever S. "Reverses of Schwarz inequality in inner product spaces with applications." Mathematische Nachrichten 288.7 (2015): 730-742.
Henry.L
Title/Summary/Keyword: Monoclonal antibody
Studies on the development of enzyme linked immuno-sorbent assay (ELISA) for hepatitis B surface antigen (HBsAg) by monoclonal antibodies of different affinity constants
Kim, Gye-Won;Hong, Sung-Youl;Shin, Soon-Cheon;Lee, Sung-Hee;Kim, Won-Bae
Mouse monoclonal antibodies to hepatitis B surface antigen (HBsAg) were prepared and their functional capabilities tested by solid-phase enzyme-linked immunosorbent assay (ELISA). HBsAg binding studies indicated that one monoclonal antibody, 6E-1-1, bound more HBsAg at a faster rate than the other monoclonal antibodies. In the binding inhibition studies with the selected monoclonal antibody 6E-1-1, one monoclonal antibody, 8D-3-6, did not exhibit binding inhibition for HBsAg. A simultaneous ELISA method was then developed for the immunodiagnosis of HBsAg. Different combinations of two monoclonal antibodies as the solid phase and the horseradish peroxidase (HRPO)-labeled phase were studied. The combination with the higher-affinity monoclonal antibody (6E-1-1) immobilized as the solid phase and the lower-affinity monoclonal antibody (8D-3-6) as the HRPO-labeled phase was the more sensitive of the combinations of monoclonal antibodies with different affinity constants for HBsAg.
Application of monoclonal antibody to develop diagnostic techniques for infectious bovine rhinotracheitis virus. II. Diagnosis of infectious bovine rhinotracheitis by using monoclonal antibody (소 전염성비기관염(傳染性鼻氣管炎) 바이러스에 대한 monoclonal antibody 생산(生産)과 진단법(診斷法) 개발 II. Monoclonal antibody를 이용한 소 전염성비기관염(傳染性鼻氣管炎)의 진단(診斷))
Jun, Moo-hyung;Kim, Duck-hwan;An, Soo-hwan;Lee, Jung-bok;Min, Won-gi
Korean Journal of Veterinary Research
To develop more specific and sensitive diagnostic methods for infectious bovine rhinotracheitis, the 7-C-2 monoclonal antibody specific to polypeptides of infectious bovine rhinotracheitis virus (IBRV) was applied in an indirect immunofluorescence antibody assay (IFA), an indirect immunoperoxidase assay (IPA) and a radial immunodiffusion enzyme assay (RIDEA). It was found that IBRV infecting MDBK cells could be detected as early as 8 hours post infection by IFA, and that IFA was more rapid and specific in identifying IBRV antigen than IPA. The diagnostic efficacy of RIDEA and the SN test was studied with 88 bovine sera. It was evident that RIDEA could eliminate the false positive reactions encountered in the serum neutralization (SN) test, being more rapid and sensitive than the latter. A highly significant correlation coefficient (r=0.76, p<0.01) was found between the titers of sera and the diameters of RIDEA. Tracheal membranes and sera collected from 96 slaughtered cattle with lesions in respiratory organs were examined to detect IBRV antigen and antibody by IFA, RIDEA and the SN test. Positive rates were 32.3% in IFA, 20.8% in RIDEA and 21.9% in the SN test, and the coincidence rate between RIDEA and the SN test was 100% in positive sera and 98.7% in negative sera. In conclusion, it is assumed that application of monoclonal antibody could improve the diagnostic efficacy of IBR by enhancing the sensitivity and specificity of IPA, IFA and RIDEA.
Cytotoxicity of Anti-Calla Monoclonal Antibody Conjugates to Methotrexate
Chun, Chang-Joo;Lee, Kang-Choon
Characterization of the Monoclonal Antibody Specific to Human S100A6 Protein (인체 S100A6 단백질에 특이한 단일클론 항체)
Kim, Jae Wha;Yoon, Sun Young;Joo, Joung-Hyuck;Kang, Ho Bum;Lee, Younghee;Choe, Yong-Kyung;Choe, In Seong
IMMUNE NETWORK
v.2 no.3
Background: S100A6 is a calcium-binding protein overexpressed in several tumor cell lines, including melanoma with high metastatic activity, and is involved in various cellular processes such as cell division and differentiation. To detect S100A6 protein in patient samples (e.g., blood or tissue), it is essential to produce a monoclonal antibody specific to the protein. Methods: First, cDNA coding for the ORF region of the human S100A6 gene was amplified and cloned into an expression vector for GST fusion protein. We produced recombinant S100A6 protein and, subsequently, monoclonal antibodies to the protein. The specificity of the anti-S100A6 monoclonal antibody was confirmed using recombinant proteins of other S100A family members (GST-S100A1, GST-S100A2 and GST-S100A4) and the cell lysates of several human cell lines. Also, to identify the specific recognition site of the monoclonal antibody, we performed immunoblot analysis with serially deleted S100A6 recombinant proteins. Results: GST-S100A6 recombinant protein was induced and purified. S100A6 protein excluding the GST moiety was then obtained, and a monoclonal antibody to the protein was produced. The monoclonal antibody (K02C12-1; patent number 330311) showed no cross-reaction with several other S100 family proteins. It appears that the anti-S100A6 monoclonal antibody reacts with the region containing amino acids 46 to 61 of the S100A6 protein. Conclusion: These data suggest that the anti-S100A6 monoclonal antibody produced can be very useful in the development of a diagnostic system for S100A6 protein.
Detection of Fish Virus by Using Immunomagnetic Separation and Polymerase Chain Reaction (IMS-PCR)
KIM Soo Jin;OH Hae Keun;CHOI Tae-Jin
Korean Journal of Fisheries and Aquatic Sciences
Immunomagnetic separation of virus coupled with reverse transcription-polymerase chain reaction (IMS-PCR) was performed with infectious hematopoietic necrosis virus (IHNV). A DNA fragment of the expected size was synthesized in the RT-PCR with total RNA extracted from IHNV-inoculated CHSE-214 cells. In SDS-PAGE analysis, a protein band of over 70 kDa was detected from non-infected cells and from cells inoculated with IHNV and infectious pancreatic necrosis virus (IPNV). This protein was detected in the Western blot analysis probably because of a non-specific reaction to the monoclonal antibody against the IHNV nucleocapsid protein. In the immunomagnetic separation, magnetic beads coated with the monoclonal antibody against the IHNV nucleocapsid protein were incubated with supernatant from IHNV-inoculated CHSE-214 cells. During this process, the non-specifically reacting protein could be removed by washing the magnetic beads with PBS in the presence of an external magnetic field, and viral proteins were detected from the remaining, cleaned magnetic beads. It was necessary to extract viral RNA from the captured virus particles before RT-PCR, and no DNA product was detected when the captured virus was only heated for 5 min at $95^{\circ}C$. A PCR product of the expected size was synthesized by IMS-PCR with magnetic beads double-coated either with goat anti-mouse IgG antibody and monoclonal antibody or with streptavidin and biotin-conjugated monoclonal antibody.
Protective Effects of a Monoclonal Antibody to a Mannose-Binding Protein of Acanthamoeba culbertsoni
Park, A-Young;Kang, A-Young;Jung, Suk-Yul
Biomedical Science Letters
Acanthamoeba culbertsoni is the causative agent of granulomatous amoebic encephalitis (GAE), a condition that predominantly occurs in immunocompromised individuals and is typically fatal. A mannose-binding protein (MBP) among lectins was shown to have strong pathogenic potential in A. castellanii when correlated with major virulence proteins. In this study, protective effects of the monoclonal antibody to A. culbertsoni MBP were analyzed by quantification and were also compared with other free-living amoebae. To measure amoebial cytotoxicity to the target cell, amoeba trophozoites were incubated with Chinese hamster ovary (CHO) cells. To assess the protective effects of antibodies, amoebae were pre-incubated with them for 4 h and then added to the target cells. After 24 h, the supernatants were collected and examined for host cell cytotoxicity by measuring lactate dehydrogenase (LDH) release. The cytotoxicity of A. culbertsoni to the CHO cells was about 87.4%. When the monoclonal antibody was pre-incubated with A. culbertsoni, the amoebial cytotoxicity was remarkably decreased, as shown by the LDH release (1.858 absorbance), corresponding to about 49.9%. Taken together, this suggests that the monoclonal antibody against MBP is important in inhibiting the cytotoxicity of A. culbertsoni trophozoites to the target cell. The antibody will be applied in an in vivo functional analysis, which would help to develop therapeutics.
https://doi.org/10.15616/BSL.2018.24.4.435
Development and Immunochemical Properties of Two Monoclonal Antibodies Specific to Human Chorionic Gonadotropin
Kim, You-Hee;Koh, Kwan-Sam
BMB Reports
Using a hybridoma technique, spleen cells of Balb/c mice immunized with human chorionic gonadotropin (hCG) were fused with NS-1 mouse myeloma cells. Two hybrid cell lines, clones KS-8 and KS-19, secreting monoclonal antibodies to hCG, were isolated. KS-8 and KS-19 belong to the immunoglobulin $G_1$ subclass. With the aid of a double-antibody radioimmunoassay, it was established that the KS-8 monoclonal antibody recognizes an immunodeterminant of the $\beta$-subunit of hCG, whereas the KS-19 monoclonal antibody recognizes an epitope present on the $\alpha$-subunit of hCG. The KS-8 monoclonal antibody specifically reacts with human chorionic gonadotropin and shows cross-reactivity of less than 0.3% to other related human glycoprotein hormones. On the other hand, using a hemagglutination test based on antibody-induced agglutination of sheep red blood cells coated with hCG, it was shown that only the KS-19 monoclonal antibody was capable of inducing a positive reaction, although both monoclonal antibodies had similar binding capacity to the coated cells. The results from the dual screening procedures demonstrate that the KS-8 and KS-19 monoclonal antibodies show high sensitivity in two different assays, and are hence useful for the qualitative and quantitative determination of hCG by both radioimmunoassay and hemagglutination inhibition tests.
Characteristics and application of monoclonal antibody to progesterone II. Development of progesterone enzyme-linked immunosorbent assay(ELISA) (Progesterone의 단크론성 항체에 관한 특성 및 활용에 관한 연구 II. ELISA 기법의 개발)
Kang, Chung-boo;Kim, Jong-shu
This experiment was carried out to develop a sensitive, rapid, solid-phase microtitre plate assay of progesterone using the monoclonal antibody to this hormone. The monoclonal antibody to progesterone had a higher titre and an approximately 10-fold higher binding affinity than a conventional polyclonal antibody to progesterone. Dot-blot analysis of the monoclonal antibody revealed a single precipitation band when reacted with anti-mouse IgM and anti-mouse K. A competitive reaction was used with a reaction time of 2 hours. The standard dose-response curve was linear through 1,000 pg/well. This ELISA system is applicable to the rapid assessment of luteal function and reproductive status in both clinical and research settings in a wide variety of species.
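For readers unfamiliar with how such a standard curve is used in practice, the sketch below back-calculates a dose from an optical-density reading by interpolating on a log-dose standard curve, as is routine for competitive ELISAs. The calibration values and function name are hypothetical illustrations, not data from the study.

```python
import math

def dose_from_od(od, standards):
    # Linear interpolation on a log10(dose) vs. OD standard curve.
    # In a competitive ELISA the OD falls as the dose rises.
    # `standards` is a list of (dose, od) pairs sorted by increasing dose.
    for (d0, o0), (d1, o1) in zip(standards, standards[1:]):
        if o1 <= od <= o0:
            frac = (o0 - od) / (o0 - o1)
            logd = math.log10(d0) + frac * (math.log10(d1) - math.log10(d0))
            return 10 ** logd
    raise ValueError("OD outside the range of the standard curve")

standards = [(10, 1.8), (100, 1.2), (1000, 0.6)]  # hypothetical calibration
unknown = dose_from_od(0.9, standards)            # roughly 316 pg/well
```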
Production of a Monoclonal Antibody and Ultrastructure of the Sporozoite of Cryptosporidium parvum
Choi, Young-Sook;Lee, Sung-Tae;Cho, Myung-Hwan
Journal of Microbiology
Cryptosporidium parvum causes a life-threatening diarrhea in acquired immunodeficiency syndrome (AIDS) patients. The sporozoite stage of C. parvum has been known to be a target in treating cryptosporidiosis in AIDS patients, as it is an extracellular stage. A sporozoite was ultrastructurally observed: it has a crescent shape with a rounded posterior end and a tapering body. The compact nucleus was located at the posterior end. A monoclonal antibody was produced which recognized a 43 kDa sporozoite antigen in western blot analysis and showed surface labeling in immunofluorescence.
A study on increasing the viability of hybridoma cells for large-scale production of monoclonal antibody
Ha, Seong-Jin;Im, Seon-Ha;Lee, Jong-Won;Jo, Mu-Hwan
The Korean Society for Biotechnology and Bioengineering: Conference Proceedings
Hybridoma cells are very important for producing monoclonal antibody (Mab), and producing large quantities of Mab is economically valuable. In this experiment, we used the hybridoma cell line 5F12 AD3 and treated it with various antibiotics, such as geneticin (G418), ciprofloxacin and minocycline, to improve cell viability, expecting that improved cell viability would bring higher concentrations of Mab. The optimum concentrations of each antibiotic for improving cell viability were 10 ug/ml for G418, 1 ug/ml or 10 ug/ml for ciprofloxacin, and 1 ug/ml for minocycline.
\begin{document}
\title{Bounded prime gaps in short intervals}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}
\newtheorem{lem}{Lemma}
\newtheorem{cor}{Corollary}
\theoremstyle{definition}
\newtheorem{example}{Example}
\newtheorem{prob}{Problem}
\newtheorem{conj}{Conjecture}
\newtheorem{defn}{Definition}
\newtheorem{rem}{Remark}
\newtheorem{ack}{Acknowledgements}
\def\cprime{$'$}
\newcommand{\C}{{\mathbb C}}
\newcommand{\Hh}{{\mathcal H}}
\newcommand{\R}{{\mathbb R}}
\newcommand{\N}{{\mathbb N}}
\newcommand{\Z}{{\mathbb Z}}
\newcommand{\abs}[1]{{\left| {#1} \right|}} \newcommand{\p}[1]{{\left(
{#1} \right)}} \newcommand{\jtf}[1]{{#1}^{\diamond}}
\newcommand{\Oh}[1]{{O \p{#1}}}
\renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \nonstopmode \date{}
\author{Johan Andersson\thanks{Stockholm University, Department of Mathematics, Stockholm University, SE - 106 91 Stockholm, Sweden, Email: [email protected]}}
\maketitle \begin{abstract} We generalise Zhang's and Pintz's recent results on bounded prime gaps to give a lower bound for the number of prime pairs with difference bounded by $6 \cdot 10^7$ in the short interval $[x,x+x (\log x)^{-A}]$. Our result follows by analysing only Zhang's proof of Theorem 1, but we also explain how a sharper variant of Zhang's Theorem 2 would imply the same result for shorter intervals. \end{abstract}
Yitang Zhang \cite{Zhang} in his recent landmark paper proved that there are infinitely many weak prime pairs. More precisely, he proved that \begin{gather} \label{b}
\liminf_{n \to \infty} (p_{n+1}-p_n)<7 \cdot 10^7. \end{gather}
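To make the quantity $p_{n+1}-p_n$ concrete, the following script (an illustration only, not part of Zhang's argument; all names are ours) tabulates consecutive prime gaps below $10^4$; there the smallest gap is $1$, attained only by the pair $(2,3)$, and gaps of size $2$ correspond to twin primes.

```python
def primes_up_to(n):
    # Sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(10_000)
gaps = [q - p for p, q in zip(ps, ps[1:])]
min_gap = min(gaps)          # smallest consecutive gap, from the pair (2, 3)
twin_pairs = gaps.count(2)   # number of twin-prime pairs below 10^4
```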
His method is a variant of the method of Goldston--Pintz--Yildirim \cite{GPY1,GPY2}; for a nice introduction to this method see Soundararajan \cite{Sound}. Zhang not only realised that it is sufficient to consider divisors $d$ with only small prime factors in the definition of the real arithmetical function $\lambda(n)$ that is fundamental in the method\footnote{This had been realised already by Motohashi--Pintz \cite{MotPin}, see the remark in Pintz \cite{Pintz}, although they did not manage to prove what corresponds to Theorem 2 in Zhang \cite{Zhang}, which ultimately is what was needed.}, but he also managed to utilise this insight. As in previous work on related results, Zhang used the notion of an admissible set. A set \begin{gather} \Hh= \{h_1,\ldots,h_{k_0} \} \end{gather} is called admissible if for every prime $p$ the elements of $\Hh$ do not occupy all residue classes modulo $p$; that is, there is no obvious arithmetical obstruction (such as for $\Hh=\{1,2\}$, where one of $n+1,n+2$ is always even) that makes it impossible, apart from finitely many exceptional cases, for all elements of $n+\Hh$ to be prime. The Hardy--Littlewood conjecture asserts that the number of $n$ less than $x$ such that $n+\Hh$ consists only of primes should be asymptotic to $\mathfrak S(\Hh) x/(\log x)^k$, where $\mathfrak S(\Hh)$ is a certain positive constant. While far from being proven, it gives good intuition about what the right answer to such questions should be.
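Admissibility is mechanically checkable: a prime $p > |\Hh|$ can never have all of its residue classes covered, so only small primes need testing. A minimal sketch (function name ours):

```python
def is_admissible(H):
    # H is admissible iff for every prime p the elements of H miss at
    # least one residue class mod p.  Primes p > len(H) cannot be fully
    # covered, so only p <= len(H) are tested.
    H = sorted(set(H))
    for p in range(2, len(H) + 1):
        if all(p % q for q in range(2, p)):       # p is prime
            if len({h % p for h in H}) == p:      # every class is hit
                return False
    return True
```

For example, $\{0,2,6\}$ is admissible, while $\{0,2,4\}$ covers all classes mod $3$ and $\{1,2\}$ covers both classes mod $2$.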
Zhang's result \eqref{b} follows from his result that for any admissible $\Hh$ with at least $3.5 \cdot 10^6$ elements there are infinitely many $n$ such that the set $n+\Hh$ contains at least two primes. By being more careful with the estimates and choice of the admissible set, Trudgian \cite{Trudgian} already improved Zhang's constant $7 \cdot 10^7$ to $6 \cdot 10^7$. There is also a team effort led by Terence Tao that as of this moment has managed to improve the bound to a number slightly less than $4\cdot 10^5$; for the latest results see \cite{polymath}. Since this is work in progress and depends on more substantial changes to the proof of Zhang's Theorem 1, we have chosen not to use this result in this version of the paper.
Zhang's bounded prime gap theorem ultimately follows from the inequality \begin{gather} \label{a} \sum_{x \leq n \leq 2x}\left (\sum_{i=1}^{k_0} \theta(n+h_i) -\log(3x) \right) (\lambda(n))^2 \geq (\omega+o(1))x (\log x)^{k_0+l_0+1}, \end{gather} for some constant $\omega>0$ where
\begin{gather*}
\theta(n)=\begin{cases} \log n, & \text{if $n$ is prime,} \\ 0, & \text{otherwise,} \end{cases} \end{gather*} and $\lambda$ is defined as follows (following Zhang \cite[(2.12)]{Zhang}): \begin{gather}
\label{lambdadef} \lambda(n)=\sum_{d|(P(n),\mathcal P)} \mu(d) g(d), \qquad g(y)=\begin{cases} \frac 1 {(k_0+l_0)!} \left(\log \frac D y \right)^{k_0+l_0}, & y<D, \\ 0, & y \geq D, \end{cases}\\ \intertext{where} \notag
k_0=3.5 \cdot 10^6, \qquad l_0=180, \qquad \varpi=1/1168, \qquad D=x^{\varpi+1/4}, \\
D_1=x^\varpi, \qquad \mathcal P=\prod_{p \leq D_1} p, \qquad P(n)=\prod_{i=1}^{k_0} (n+h_i). \notag
\end{gather} By estimating $\lambda(n)$ trivially by its definition \eqref{lambdadef} and absolute values we see that \[\lambda(n) \ll \tau\left(\prod_{i=1}^{k_0} (n+h)\right) (\log D)^{k_0+l_0},\] where $\tau(n)$ denotes the divisor function, and from the well known estimate $\tau(n) \ll n^{\varepsilon}$ one sees that \begin{gather} \label{aaa}
\lambda(n) \ll_\varepsilon x^\varepsilon, \qquad \qquad (n \ll x). \end{gather} The equations \eqref{a} and \eqref{aaa} immediately yield that the number of weak prime pairs less than $x$ is at least of the order $x^{1-\varepsilon}$ for any $\varepsilon>0$, and thus Zhang's proof method gives somewhat stronger results than he states in his paper. The point here is that we do not only have that the left hand side of the inequality in \eqref{a} is positive, but also that it is bounded from below by $x^{1-\varepsilon}$. Pintz \cite{Pintz} was the first to come out with a paper including such a result. He furthermore managed to prove\footnote{His results are in fact much more general and much deeper than this, see his paper.} this result where $x^{-\varepsilon}$ is replaced by a power of $\log x$.
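For illustration, $\lambda(n)$ can be evaluated directly from \eqref{lambdadef} with toy parameters: here $k_0=2$, $l_0=1$, $D=20$, $D_1=5$ and $\Hh=\{0,2\}$ in place of Zhang's astronomically larger choices, and all function names are ours. This is only a sketch of the definition, not of any estimate.

```python
import math
from math import gcd

def mobius(n):
    # Mobius function by trial division.
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# Toy parameters; Zhang takes k0 = 3.5e6, l0 = 180, D = x^(varpi + 1/4).
k0, l0 = 2, 1
H = [0, 2]
D = 20.0
P_script = 2 * 3 * 5          # product of primes <= D_1 with D_1 = 5

def g(y):
    if y >= D:
        return 0.0
    return (math.log(D / y)) ** (k0 + l0) / math.factorial(k0 + l0)

def lam(n):
    Pn = math.prod(n + h for h in H)
    m = gcd(Pn, P_script)     # d | (P(n), P) means d | gcd(P(n), P)
    return sum(mobius(d) * g(d) for d in divisors(m))
```

For instance, $P(1)=3$, so $\lambda(1)=g(1)-g(3)>0$ since $g$ is decreasing.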
In this short paper we will consider the question of existence of pairs of primes with bounded gaps in short intervals, such as $[x,x+\Delta(x)]$ for some function $\Delta(x)=o(x)$. It turns out that in a similar way as in Pintz \cite{Pintz} it is sufficient to analyse the proof of Theorem 1 in Zhang \cite{Zhang} and use his version of Theorem 2 (which seems to be the deepest part of his paper) as is, in order to prove the following result.
\begin{thm} Suppose that $A \geq 0$ and $\varepsilon>0$. Then the interval $[x,x+x/(\log x)^A]$ contains at least $(1+o(1))x^{1-\varepsilon}$ pairs of primes $p_1,p_2$ such that $1<|p_1-p_2|<6 \cdot 10^7$. \end{thm}
\begin{proof} The theorem follows immediately from Lemma 1, using Trudgian's construction \cite{Trudgian} of an admissible set. \end{proof} \begin{rem} With some more work, such as using estimates for the divisor functions (Lemma 8 in Zhang \cite{Zhang}) or methods similar to those in Pintz \cite{Pintz}, it is possible to improve the lower bound by replacing $x^{-\varepsilon}$ with a power of $\log x$. \end{rem}
\begin{lem}
Assume that $\Hh$ is an admissible set with at least $k_0=3.5 \cdot 10^6$ elements. Then, given $\varepsilon,A>0$, the interval $[x,x+x/(\log x)^A]$ contains at least $(1+o(1))x^{1-\varepsilon}$ integers $n$ such that the set $n+\Hh$ contains at least two primes. \end{lem} \begin{proof} Lemma 1 follows by using Lemma 2 to estimate the error term in Lemma 3, together with Eq. \eqref{aaa}. \end{proof}
As the second lemma we state a variant of \cite[Theorem 2]{Zhang}, which is a strong version of the Bombieri--Vinogradov theorem.
\begin{lem} Let $\Delta(x)=x (\log x)^{-A}$, and assume that
\begin{gather} \label{Deltadef}
\Delta(\gamma;d,c)=\sum_{\substack{n \equiv c \pmod d \\ x \leq n \leq x+\Delta(x)}} \gamma(n) - \frac 1 {\phi(d)}\sum_{ x \leq n \leq x+\Delta(x)}\gamma(n) \qquad \text{ for} \qquad (d,c)=1. \\ \intertext{Then for any $B>0$ and $1 \leq i \leq k_0$} \notag
\sum_{\substack{d<D^2 \\ d | \mathcal P}} \sum_{c \in \mathcal C_i(d)} \abs{\Delta(\theta;d,c)} \ll_{A,B} \Delta(x) (\log x)^{-B}, \\ \intertext{where} \notag \mathcal C_i(d)= \{c:1\leq c \leq d, (c,d)=1, P(c-h_i) \equiv 0 \pmod d \} \, \, \, \text { for } \, \, \, 1 \leq i \leq k_0. \end{gather}
\end{lem} \begin{proof} Although stated for the short interval $[x,x+\Delta(x)]$ instead of $[x,2x]$, this follows directly from Zhang's Theorem 2, because $\Delta(x)$ is just $x$ multiplied by a power of $\log x$, and this power can be absorbed into the error term. \end{proof}
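The discrepancy \eqref{Deltadef} is easy to evaluate numerically at toy scale. The sketch below uses trial-division primality and the parameters $d=7$, $x=1000$, $\Delta(x)=200$, chosen purely for illustration (all names are ours). It also verifies the exact cancellation of the discrepancies summed over the reduced residues $c$, which holds whenever no prime in the interval divides $d$.

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % p for p in range(2, int(n ** 0.5) + 1))

def theta(n):
    # Chebyshev theta summand: log n on primes, 0 elsewhere.
    return math.log(n) if is_prime(n) else 0.0

def discrepancy(d, c, x, dx):
    # Delta(theta; d, c) over the short interval [x, x + dx].
    ns = range(x, x + dx + 1)
    full = sum(theta(n) for n in ns)
    in_class = sum(theta(n) for n in ns if n % d == c)
    phi = sum(1 for a in range(1, d) if math.gcd(a, d) == 1)
    return in_class - full / phi

# No prime in [1000, 1200] is divisible by 7, so the discrepancies over
# the reduced residues c = 1, ..., 6 sum to zero (up to float rounding).
total = sum(discrepancy(7, c, 1000, 200) for c in range(1, 7))
```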
\begin{rem} If we could prove Lemma 2 for $\Delta(x)=x^\theta$ and some $7/12 <\theta<1$, it would follow that the interval $[x,x+x^\theta]$ contains at least $(1+o(1))x^{\theta-\varepsilon}$ weak prime pairs. This would be of great interest. We have chosen to formulate Lemma 3 so that a proof of Lemma 2 in a short interval would immediately give such a consequence. Although we have not yet checked all the details of Zhang's proof of Theorem 2, his approach seems promising also for short intervals. In particular, at many places he obtains somewhat sharper results than he needs, which might be more useful in the short-interval case. We also remark that the corresponding short-interval version of the classical Bombieri--Vinogradov theorem has been proved by Perelli--Pintz--Salerno \cite{PPS1,PPS2}. \end{rem}
\begin{lem}
Assume that $\Hh$ is admissible and $\Hh$ contains at least $k_0=3.5 \cdot 10^6$ elements. Then there exists some $B>0$ such that with $\Delta(\theta;d,c)$ defined by \eqref{Deltadef} and $0<\Delta(x) \ll x$ we have that
\begin{multline} \notag \sum_{x \leq n \leq x+\Delta(x)} \left( \sum_{i=1}^{k_0} \theta(n+h_i) -\log(3x) \right) (\lambda(n))^2 \geq (\omega+o(1)) \Delta(x) (\log x)^{k_0+l_0+1} + \\ +
\mathcal O \left((\log x)^B \sum_{i=1}^{k_0}\sqrt{ \Delta(x) \sum_{\substack{d<D^2 \\ d | \mathcal P}} \sum_{c \in \mathcal C_i(d)} \abs{\Delta(\theta;d,c)}} \right) + \mathcal O \left(x^{7/12}\right), \end{multline}
where we can choose $\omega=\exp(-5 \cdot 10^7)$. \end{lem} \begin{proof} We will follow Zhang \cite{Zhang} and just indicate where changes are needed. The essence of the changes is that we would like $n \sim x$ to mean $x \leq n \leq x+\Delta(x)$ instead of $x \leq n \leq 2x$. We now proceed as in Zhang \cite{Zhang}. We write the left hand side of the lemma as \[
S_2-\log(3x) S_1, \] where \begin{gather*}
S_1= \sum_{x \leq n \leq x+\Delta(x)} (\lambda(n))^2, \\ \intertext{and} S_2=\sum_{x \leq n \leq x+\Delta(x)}(\lambda(n))^2 \,
\sum_{i=1}^{k_0} \theta(n+h_i). \end{gather*} We will treat $S_1$ and $S_2$ separately. \subsection*{Upper bound for $S_1$}
We follow Zhang \cite{Zhang}, Section 4. We treat the inner sum that corresponds to the first displayed formula on p.~16 in the same way. Proceeding in a similar manner, instead of eq. $(4.1)$ in Zhang we get \begin{gather*} S_1=\mathcal T_1 \Delta(x)+\mathcal O(D^{2+\varepsilon}). \label{ajj} \end{gather*} The quantity $\mathcal T_1$ is the same as in Zhang, and from his $(4.19)$ we obtain, similarly to $(4.20)$ in his paper, that \begin{gather} \label{ac}
S_1 \leq \frac{(1+\kappa_1+o(1))}{(k_0+2 l_0)!} \binom {2l_0}{l_0} \mathfrak S(\Hh) \Delta(x) (\log D)^{k_0+2 l_0}+ \mathcal O (D^{2+\varepsilon}),\end{gather} for some $\kappa_1< \exp(-1200)$.
\subsection*{Lower bound for $S_2$} Again we will change the summation order as in Zhang \cite[p. 22]{Zhang} and we get \begin{gather} \notag
S_2= k_0 \mathcal T_2 \sum_{x \leq n \leq x+\Delta(x)} \theta(n) + \sum_{i=1}^{k_0} \mathcal O (\mathcal E_i), \\ \intertext{where $\mathcal T_2$ is defined as in his paper and} \label{Edef}
\mathcal{E}_i = \sum_{\substack{d<D^2 \\ d | \mathcal P}} \tau_3(d) \rho_2(d) \sum_{c \in C_i(d)} \left| \Delta(\theta;d,c) \right|, \end{gather} where we now keep in mind that $\Delta(\theta;d,c)$ is defined by \eqref{Deltadef}, i.e. summing over $n$ in shorter intervals than in Zhang \cite{Zhang}.
We may now assume that $\Delta(x) \gg x^{7/12} (\log x)^{-k_0-l_0-1}$, since otherwise the result follows trivially by the error term $\mathcal O(x^{7/12})$. This allows us to use the prime number theorem in short intervals of Heath-Brown \cite{HeathBrown}, instead of the classical prime number theorem in Zhang's treatment. This gives us (compare with \cite[Eq (5.4)]{Zhang}) that \begin{gather*}
S_2= k_0 \mathcal T_2 \Delta(x)(1+o(1)) + \sum_{i=1}^{k_0} \mathcal O (\mathcal E_i). \end{gather*} Following Zhang\footnote{We are grateful to GH and Denis Chaperon de Lauzi at mathoverflow for explaining precisely how Zhang used Cauchy's inequality. See http://mathoverflow.net/questions/132452.} \cite{Zhang} we use the Cauchy inequality on Eq. \eqref{Edef} to estimate the error terms \begin{gather*}
\mathcal{E}_i \ll \sqrt{ \sum_{\substack{d<D^2 \\ d | \mathcal P}} (\tau_3(d))^2 (\rho_2(d))^2 \sum_{c \in C_i(d)}\left| \Delta(\theta;d,c) \right|} \sqrt{\sum_{\substack{d<D^2 \\ d | \mathcal P}} \sum_{c \in C_i(d)} \left| \Delta(\theta;d,c) \right|}.
\end{gather*}
By using the trivial\footnote{this corresponds to each integer being prime in the residue class for the short interval} upper estimate $|\Delta(\theta;d,c)|\ll \Delta(x)\log x/\phi(d)$ in the first sum, and estimates for sum of divisor functions in short intervals (\cite[Lemma 8]{Zhang}), this gives us the estimate \begin{gather} \label{ajaj}
\mathcal{E}_i \ll (\log x)^{B} \sqrt{\Delta(x)} \sqrt{ \sum_{\substack{d<D^2 \\ d | \mathcal P}} \sum_{c \in C_i(d)} \left| \Delta(\theta;d,c) \right|}, \end{gather} for some positive constant $B>0$. Since $\mathcal T_2$ is the same as in Zhang's paper, it can be estimated by \cite[Eq (5.5)]{Zhang}. Corresponding to Eq (5.6) in Zhang we get the inequality \begin{gather} \label{ad}
S_2 \geq \frac{k_0(1-\kappa_2)} {(k_0+2l_0+1)!} \binom{2l_0+2} {l_0+1} \mathfrak S (\Hh) \Delta(x) (\log D)^{k_0+2l_0+1} (1+o(1)). \end{gather}
By the inequalities \eqref{ac} and \eqref{ad} we obtain \begin{multline} \label{ab} S_2 - \log(3x) S_1 \geq \\ \geq \omega \mathfrak S (\Hh) \Delta(x) (\log D)^{k_0+2l_0+1} (1+o(1)) + \mathcal O(D^{2+\epsilon})+\mathcal O\left (\sum_i \mathcal E_i \right), \end{multline} where \begin{gather*}
\omega=\frac 1 {(k_0+2l_0)!} \binom{2l_0}{l_0} \left( \frac{2(2 l_0+1)k_0(1-\kappa_2)} {(l_0+1)(k_0+2l_0+1)}-\frac{4(1+\kappa_1)}{1+4 \varpi} \right). \end{gather*} While Zhang never calculates $\omega$ and just says that it is positive, by the estimates $\kappa_1 <\exp(-1200)$ and $\kappa_2 < 10^8 \exp(-1200)$ from Zhang \cite{Zhang}, and the numerical values of $l_0,k_0,\varpi$, it is easy to use Mathematica/Sage to see that $\omega=3.647 \cdot 10^{-21385285}>\exp(-5 \cdot 10^7).$ The result follows by combining \eqref{ajaj} and \eqref{ab} with the fact that with our choice of $D$ we have $\mathcal O(D^{2+\epsilon})=\mathcal O(x^{7/12})$. \end{proof}
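The rough magnitude of $\omega$ can be confirmed in ordinary floating point by working with $\log_{10}$ and Stirling's formula via \texttt{lgamma}, without exact big-number arithmetic. In this sketch (variable names ours) the corrections $\kappa_1,\kappa_2$ underflow to $0$ in double precision, which changes $\omega$ only negligibly; the point is the sign of the bracket and the exponent of order $-2\cdot 10^7$.

```python
from math import comb, exp, lgamma, log, log10

k0, l0, varpi = 3_500_000, 180, 1 / 1168
kappa1 = exp(-1200)            # underflows to 0.0; the correction is negligible
kappa2 = 1e8 * exp(-1200)

# The bracketed factor in the formula for omega; positivity is the key point.
bracket = (2 * (2 * l0 + 1) * k0 * (1 - kappa2) / ((l0 + 1) * (k0 + 2 * l0 + 1))
           - 4 * (1 + kappa1) / (1 + 4 * varpi))

# log10 omega = log10 binom(2*l0, l0) - log10((k0 + 2*l0)!) + log10(bracket),
# with the factorial handled through lgamma to avoid overflow.
log10_omega = (log10(comb(2 * l0, l0))
               - lgamma(k0 + 2 * l0 + 1) / log(10)
               + log10(bracket))
```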
\end{document}
\begin{document}
\begin{frontmatter}
\title{Scaling limit of the invasion percolation cluster on a regular tree} \runtitle{Scaling limit of invasion percolation}
\begin{aug} \author[A]{\fnms{Omer} \snm{Angel}\thanksref{OAmark}\ead[label=OAemail]{[email protected]}}, \author[B]{\fnms{Jesse} \snm{Goodman}\corref{}\thanksref{OAmark}\ead[label=JGemail]{[email protected]}} \and \author[C]{\fnms{Mathieu} \snm{Merle}\thanksref{MMmark}\ead[label=MMemail]{[email protected]}} \runauthor{O. Angel, J. Goodman and M. Merle} \affiliation{University of British Columbia, Universiteit Leiden and Universit\'e~Paris~Diderot} \address[A]{O. Angel\\ Department of Mathematics\\ University of British Columbia\\ 1984 Mathematics Road\\ Vancouver, British Columbia\\ V6T 1Z2\\ Canada\\ \printead{OAemail}}
\address[B]{J. Goodman\\ Mathematisch Instituut\\ Universiteit Leiden\\ PO Box 9512, 2300 RA Leiden\\ Netherlands\\ \printead{JGemail}}
\address[C]{M. Merle\\ Universit\'e Paris Diderot\\ 175, rue du Chevaleret\\ 75013 Paris\\ France\\ \printead{MMemail}} \end{aug}
\thankstext{OAmark}{Supported in part by NSERC.}
\thankstext{MMmark}{Supported in part by the Pacific Institute for the Mathematical Sciences.}
\received{\smonth{10} \syear{2009}} \revised{\smonth{10} \syear{2011}}
\begin{abstract} We prove existence of the scaling limit of the invasion percolation cluster (IPC) on a regular tree. The limit is a random real tree with a single end. The contour and height functions of the limit are described as certain diffusive stochastic processes.
This convergence allows us to recover and make precise certain asymptotic results for the IPC. In particular, we relate the limit of the rescaled level sets of the IPC to the local time of the scaled height function. \end{abstract}
\begin{keyword}[class=AMS] \kwd{60K35} \kwd{82B43}. \end{keyword}
\begin{keyword} \kwd{Invasion percolation} \kwd{scaling limit} \kwd{real tree}. \end{keyword}
\end{frontmatter}
\section{Introduction and main results}\label{intro}
Invasion percolation on an infinite connected graph is a random growth model which is closely related to critical percolation, and is a prime example of self-organized criticality. It was introduced in the eighties by Wilkinson and Willemsen~\cite{WilkWillem1983} and first studied on the regular tree by Nickel and Wilkinson~\cite{NickWilk1983}. The relation between invasion percolation and critical percolation has been studied by many authors (see, e.g.,~\cite{CCN1985,Jarai2003}). More recently, Angel, Goodman, den Hollander and Slade~\cite{AGdHS2008} have given a structural representation of the invasion percolation cluster on a regular tree, and used it to compute the scaling limits of various quantities related to the IPC such as the distribution of the number of invaded vertices at a given level of the tree.
Fixing a degree $\sigma\ge2$, we consider ${\mathcal T}={\mathcal T}_\sigma$: the rooted regular tree with index~$\sigma$, that is, the rooted tree where every vertex has $\sigma$ children. Invasion percolation on ${\mathcal T}$ is defined as follows: edges of ${\mathcal T}$ are assigned weights which are i.i.d. and uniform on $[0,1]$. The invasion percolation cluster on ${\mathcal T}$, denoted IPC, is grown inductively starting from a subgraph $I_0$ consisting of the root $\varnothing$ of ${\mathcal T}$. At each step $I_{n+1}$ consists of $I_n$ together with the edge of minimal weight in the boundary of $I_n$. The invasion percolation cluster IPC is the limit $\bigcup I_n$.
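The inductive construction above is straightforward to simulate with a priority queue, generating edge weights lazily on the cluster boundary. This is an illustrative sketch only (all names are ours); vertices of ${\mathcal T}_\sigma$ are encoded as tuples of child indices.

```python
import heapq
import random

def invade(sigma, steps, seed=0):
    # Grow I_0, I_1, ..., I_steps on the rooted sigma-ary tree.
    # Edge weights are i.i.d. Uniform[0,1], drawn lazily when a vertex
    # first appears on the boundary of the invaded cluster.
    rng = random.Random(seed)
    root = ()
    invaded = {root}
    boundary = []                       # min-heap of (weight, child vertex)
    for i in range(sigma):
        heapq.heappush(boundary, (rng.random(), root + (i,)))
    for _ in range(steps):
        _, v = heapq.heappop(boundary)  # boundary edge of minimal weight
        invaded.add(v)
        for i in range(sigma):
            heapq.heappush(boundary, (rng.random(), v + (i,)))
    return invaded

cluster = invade(sigma=2, steps=100)    # I_100: 101 vertices, connected
```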
\subsection{Convergence of trees}
We consider the IPC as a metric space with respect to graph distance $d_{\mathrm{gr}}$. Since IPC is already infinite, taking its scaling limit amounts to replacing $d_{\mathrm{gr}}$ by $\frac {1}{k}d_{\mathrm{gr}}$.
\begin{theorem}\label{Tlimexist} The rescaled rooted invasion percolation cluster $(\mbox{IPC},\break\frac{1}{k}d_{\mathrm{gr}}$, $\varnothing)$ has a scaling limit w.r.t. the pointed Gromov--Hausdorff topology, which is a random ${\mathbb R}$-tree. \end{theorem}
Here, an ${\mathbb R}$-tree means a topological space with a unique rectifiable simple path between any two points. Note that, because the IPC is infinite, we must work with the pointed Gromov--Hausdorff topology (see, e.g.,~\cite{Munn2010}, Section~2). For present purposes this means we must show that, for each $R>0$, the ball $\{v\in\mathrm {IPC}\dvtx \frac{1}{k}d_{\mathrm{gr}}(\varnothing, v)\leq R\}$ about the root converges in the Gromov--Hausdorff sense.
A key point in our study is that the contour function (as well as height function and Lukaciewicz path, see Section~\ref{subtrees} below) of an infinite tree does not generally encode the entire tree. If the various encodings of trees are applied to infinite trees, they describe only the part of the tree to the left of the leftmost infinite branch. We present two ways to overcome this difficulty. Both are based on the fact (see~\cite{AGdHS2008}) that the IPC has a.s. a unique infinite branch. Following Aldous~\cite{Aldous1991}, we define a \textit{sin-tree} to be an infinite one-ended tree (i.e., with a single infinite branch).
The first approach is to use the symmetry of the underlying graph ${\mathcal T} $ and observe that the infinite branch of the IPC (called the \textit{backbone}) is independent of the metric structure of the IPC. Thus, for all purposes involving only the metric structure of the IPC, we may as well assume (or condition) that the backbone is the rightmost branch of ${\mathcal T}$. We denote by ${\mathcal R}$ the IPC under this condition. The various encodings of ${\mathcal R}$ encode the entire tree.
The second approach is to consider a pair of encodings, one for the part of the tree to the left of the backbone, and a second encoding the part to the right of the backbone. This is done by considering also the encoding of the reflected tree $\overline{\mbox{IPC}}$. The reflection of a plane tree is defined to be the same tree with the reversed order for the children of each vertex. The uniqueness of the backbone implies that together the two encodings determine the entire IPC.
In order to describe the limits, we first define the process $L(t)$ which is the lower envelope of a Poisson process on $({\mathbb R}^+)^2$. Given a Poisson process ${\mathcal P}$ of intensity 1 in the quarter plane, $L(t)$ is defined by
\[ L(t) = \inf\{ y\dvtx (x,y)\in{\mathcal P}\mbox{ and } x\le t\}. \]
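As an aside, the envelope is easy to sample by restricting the Poisson process to a finite window. The following Python sketch is our own illustration (function names are ours); the window $[0,T]\times[0,M]$ only approximates $L$ near the origin, where $L$ diverges:

```python
import math
import random

# Hedged illustration (ours, not from the paper): sample the lower envelope
# L of a rate-1 Poisson process, restricted to the window [0, T] x [0, M].
# For small t the true envelope may exceed M, so the window only
# approximates L near the origin.
def poisson_sample(lam, rng=random):
    # Knuth's product method; adequate for moderate lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def lower_envelope(T, M, rng=random):
    n = poisson_sample(T * M, rng)  # Poisson(area) points, uniform in window
    pts = [(rng.uniform(0, T), rng.uniform(0, M)) for _ in range(n)]
    def L(t):
        ys = [y for (x, y) in pts if x <= t]
        return min(ys) if ys else float("inf")
    return L
```

By construction the sampled $L$ is nonincreasing and piecewise constant, as the definition requires.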
Our other results describe the scaling limits of the various encodings of the trees in terms of solutions of
{\renewcommand{$\cA$}{${\mathcal E}(L)$} \begin{equation} \label{eqSDE} Y_t = B_t - \int_0^t L( -\underline{Y}_{ s} ) \,ds, \end{equation}}
\noindent where $\underline{Y}_{ s}=\inf_{0\leq u\leq s} Y_u$ is the infimum process of $Y$ and $B_t$ is a standard Brownian motion. The reason for the notation is that we also consider solutions of equations ${\mathcal E}(L/2)$ where, in the above, $L$ is replaced by $L/2$. Note that by the scale invariance of the Poisson process, $k L(kt)$ has the same law as $L(t)$. Hence, the scaling of Brownian motion implies that the solution $Y$ has Brownian scaling as well.
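As our own illustration (nothing below is from the paper), one can approximate a solution of ${\mathcal E}(L)$ by a naive Euler scheme. We use a deterministic stand-in for $L$ with the same divergence at $0$, and cap the singular drift $L(-\underline{Y}_s)$ near $t=0$, where it blows up (the analysis in Section~\ref{subunique} shows the drift integral is nevertheless a.s. finite):

```python
import math
import random

# Our own illustrative Euler scheme for E(L); L_standin is a deterministic
# stand-in for the envelope process with the same divergence at 0.  The
# drift L(-min Y) is capped because it blows up at t = 0, although its
# integral is a.s. finite.
def euler_EL(L, T, dt, rng=random):
    Y, Ymin, path = 0.0, 0.0, [0.0]
    for _ in range(int(round(T / dt))):
        drift = min(L(-Ymin), 1.0 / math.sqrt(dt))  # cap the singular drift
        Y += rng.gauss(0.0, math.sqrt(dt)) - drift * dt
        Ymin = min(Ymin, Y)
        path.append(Y)
    return path

L_standin = lambda t: 1.0 / t if t > 0 else float("inf")
path = euler_EL(L_standin, T=1.0, dt=1e-3, rng=random.Random(1))
```

By the Brownian scaling noted above, running the scheme on $[0,k^2T]$ and dividing the path by $k$ should produce statistically similar paths.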
We work primarily in the space ${\mathcal C}({\mathbb R}^+,{\mathbb R}^+)$ of continuous functions from ${\mathbb R}^+$ to itself with the topology of locally uniform convergence. We consider three well known and closely related encodings of plane trees, namely, the Lukaciewicz path, and the contour and height functions (all are defined in Section~\ref{subenc1} below). The three are closely related and, indeed, their scaling limits are almost the same. The reason for the triplication is that the contour function is the simplest and most direct encoding of a plane tree, whereas the Lukaciewicz path turns out to be easier to deal with in practice. The height function is a middle ground.
\begin{theorem}\label{Trightscale} For the IPC conditioned on the backbone being on the right, let $V_{\mathcal R} $, $H_{\mathcal R}$ and $C_{\mathcal R}$ denote its Lukaciewicz path, height function and contour function, respectively. Then we have the following weak limits in ${\mathcal C}({\mathbb R}^+,{\mathbb R})$:
\setcounter{equation}{0} \begin{eqnarray} \label{equ1} (k^{-1} V_{\mathcal R}(k^2 t))_{t\ge0} &\to& \bigl(\gamma^{1/2} (Y_{t}-\underline{Y}_{ t})\bigr)_{t\ge0}, \\ (k^{-1} H_{\mathcal R}(k^2 t))_{t\ge0} &\to& \bigl(\gamma^{-1/2} (2Y_{t}-3 \underline{Y}_{ t})\bigr)_{t\ge0}, \\ (k^{-1} C_{\mathcal R}(2k^2 t))_{t\ge0} &\to& \bigl(\gamma^{-1/2} (2Y_{t}-3 \underline{Y}_{ t})\bigr)_{t\ge0} \end{eqnarray}
as $k\to\infty$, where
\[ \gamma= \frac{\sigma-1}{\sigma} \]
and $(Y_t)_{t\ge0}$ is the solution of (\ref{eqSDE}) (and is the same solution in all three limits). \end{theorem}
To put this theorem into context, recall that the Lukaciewicz path of a critical Galton--Watson tree is an excursion of random walk with i.i.d. steps. From this it follows that the path of an infinite sequence of critical trees scales to Brownian motion. The height and contour functions of the sequence are easily expressed in terms of the Lukaciewicz path and, assuming the branching law has\vadjust{\goodbreak} second moments, are seen to scale to reflected Brownian motion (cf. Le Gall \cite{LeGall2005}). Duquesne and Le Gall generalized this approach in~\cite{DuqLeG2002}, and showed that the genealogical structure of a continuous-state branching process is similarly coded by a height process which can be expressed in terms of a L\'evy process, and that this is also the limit of various Galton--Watson trees with heavy tails.
The case of sin-trees is considered by Duquesne~\cite{Duquesne2005} to study the scaling limit of the range of a random walk on a regular tree. His techniques suffice for analysis of the IIC, but the IPC requires additional ideas, the key difficulty being that the Lukaciewicz path is no longer a Markov process. The scaling limit of the IIC turns out to be an illustrative special case of our results, and we will describe its scaling limit as well (in Section~\ref{secIIC}).
For the unconditioned IPC we define its left part $\mbox{IPC}_{G}$ to be the subtree consisting of the backbone and all vertices to its left. The right part $\mbox{IPC}_{D}$ is defined as the left part of the reflected IPC. We can now define $V_G$ and $V_D$ to be, respectively, the Lukaciewicz paths for the left and right parts of the IPC, and similarly define $H_G,H_D,C_G,C_D$ (see also Section~\ref{subenc2} below).
\begin{theorem}\label{Ttwosidedscale} We have the following weak limits in ${\mathcal C}({\mathbb R}^+,{\mathbb R})$:
\begin{eqnarray} k^{-1}(V_G(k^2 t), V_D(k^2 t))_{t\ge0} &\to& \gamma^{1/2}(Y_{t}-\underline{Y}_{ t}, \widetilde Y_{t}-\underline {\widetilde Y}_{ t})_{t\ge0},\\ k^{-1}(H_G(k^2 t), H_D(k^2 t))_{t\ge0} &\to& \gamma^{-1/2}(Y_{t}-2 \underline{Y}_{ t}, \widetilde Y_{t}-2 \underline{\widetilde Y}_{ t})_{t\ge0},\\ \label{eqtwosidedscalecontour} k^{-1}(C_G(2k^2 t), C_D(2k^2 t))_{t\ge0} &\to& \gamma^{-1/2}(Y_{t}-2 \underline{Y}_{ t}, \widetilde Y_{t}-2 \underline{\widetilde Y}_{ t})_{t\ge0} \end{eqnarray}
as $k\to\infty$, where $(Y_t)_{t\ge0}$ and $(\widetilde {Y}_t)_{t\ge0}$ are \textit{independent} solutions of ${\mathcal E}(L/2)$. \end{theorem}
\subsection{Level sizes and volumes}
From the convergence results above we can establish asymptotics for level sizes and volumes in the invasion percolation cluster. In~\cite{AGdHS2008}, it was proved that the size of the $n$th level of the IPC, rescaled by a factor $n$, converges to a nondegenerate limit. Similarly, the volume up to level $n$, rescaled by a factor $n^2$, converges to a nondegenerate limit. The Laplace transforms of these limits were expressed as functions of the $L$-process. However, formulas (1.20)--(1.23) of~\cite{AGdHS2008} do not provide insight into the limiting variables. With our convergence theorem for height functions of ${\mathcal R}$, we can express the limit in terms of the continuous limiting height function.
For $x \in{\mathbb R}_+$ we denote by $C[x]$ the number of vertices of the IPC at height~$[x]$. We let $C[0,x]=\sum_{i=0}^{[x]} C[i]$ denote the number of vertices of the IPC up to height~$[x]$. Write $H_t = \gamma^{-1/2}(2Y_t - 3\underline{Y}_t)$ for the limit of $H_{\mathcal R}$ in Theorem~\ref{Trightscale}, and $l_\infty^a(H)$ for the standard local time at level $a$ of $H$.
\begin{theorem}\label{Tlevels-volume} For every $a>0$ we have the distributional limits
\begin{equation}\label{eqvolume} \frac{1}{n^2} C[0,an] \xrightarrow{n \to\infty} \int_{0}^{\infty} \mathbf{1}_{[0,a]}(H_s) \,ds\vadjust{\goodbreak} \end{equation}
and
\begin{equation}\label{eqlevels} \frac{1}{n} C[an] \xrightarrow{n \to\infty} \frac{\gamma}{4} l_{\infty}^a(H). \end{equation}
\end{theorem}
In the case of the asymptotics of the levels, we also provide an alternative way of expressing the limit directly as a sum of independent variables. Write $\mathbf{e}\{c\}$ for an exponential variable of rate $c$.
\begin{theorem}\label{Tlevels} Let $S$ be a point process such that, conditioned on the $L$-process, $S$ is an inhomogeneous Poisson point process on $[0, a \sqrt{\gamma}]$, with intensity
\[ \frac{2 L(s)\,ds} {\exp( (a \sqrt{\gamma} - s) L(s) ) - 1}. \]
Then, conditionally on $L$, and in distribution,
\begin{equation}\label{eqsumofexp} \frac{1}{n} C[an] \xrightarrow{n\to\infty} \frac{\sqrt{\gamma}}{2} \sum_{s\in S} \mathbf{e}\biggl\{ \frac {L(s)} {1- \exp(-(a \sqrt{\gamma} -s)L(s))} \biggr\}, \end{equation}
where the terms in the sum are independent. \end{theorem}
From this representation and properties of the $L$-process, it is straightforward to recover the representation of the asymptotic Laplace transform of level sizes, (1.21) of~\cite{AGdHS2008}. Also, as the proof of the theorem will show, a.s. only a finite number of distinct values of $L$ contribute to the sum in (\ref{eqsumofexp}).
\subsection{Application to the incipient infinite cluster}
The proofs of Theorems~\ref{Tlimexist}--\ref{Tlevels} also apply to the \textit{incipient infinite cluster} (IIC), whose structure and similarity to the IPC we outline in Section~\ref{subIICrecall}. Stated briefly, the IIC corresponds to the IPC in the simpler case where the process $L(t)$ is replaced by $0$. As a consequence, some elements of the proofs (such as the right-grafting constructions in Section~\ref{secproofmain}) are not needed to handle the IIC. For comparison, we summarize the results for the IIC in the following theorems.\looseness=-1
\begin{theorem}\label{TIICresults1} The rescaled rooted incipient infinite cluster $(\mbox{IIC},\frac {1}{k}d_{\mathrm{gr}},\varnothing)$ has a scaling limit w.r.t. the pointed Gromov--Hausdorff topology, which is a random ${\mathbb R}$-tree.
For the IIC conditioned on the backbone being on the right, let $V_{\mathcal R} ^{\mathrm{IIC}}$, $H_{\mathcal R}^{\mathrm{IIC}}$ and $C_{\mathcal R}^{\mathrm{IIC}}$ denote its Lukaciewicz path, height function and contour function, respectively. Then we have the following weak limits in ${\mathcal C}({\mathbb R}^+,{\mathbb R})$:
\begin{eqnarray} \label{eLukaIICRlimit} (k^{-1} V_{\mathcal R}^{\mathrm{IIC}}(k^2 t))_{t\ge0} &\to& \bigl(\gamma^{1/2} (B_{t}-\underline{B}_{ t})\bigr)_{t\ge0}, \\ \label{eHeightIICRlimit} (k^{-1} H_{\mathcal R}^{\mathrm{IIC}}(k^2 t))_{t\ge0} &\to& \bigl(\gamma^{-1/2} (2B_{t}-3 \underline{B}_{ t})\bigr)_{t\ge0}, \\ \label{eContourIICRlimit} (k^{-1} C_{\mathcal R}^{\mathrm{IIC}}(2k^2 t))_{t\ge0} &\to& \bigl(\gamma^{-1/2} (2B_{t}-3 \underline{B}_{ t})\bigr)_{t\ge0} \end{eqnarray}
as $k\to\infty$, where $B_t$ is a standard Brownian motion.\vadjust{\goodbreak}
For the IIC with unconditioned backbone, the Lukaciewicz paths, height functions and contour functions of its left and right parts have the following weak limits in ${\mathcal C}({\mathbb R}^+,{\mathbb R})$:
\begin{eqnarray} \label{eLukaIIClimit} k^{-1}(V_G^{\mathrm{IIC}}(k^2 t), V_D^{\mathrm{IIC}}(k^2 t))_{t\ge 0} &\to& \gamma^{1/2}(B_{t}-\underline{B}_{ t}, \widetilde B_{t}-\underline {\widetilde B}_{ t})_{t\ge0}, \\ \label{eHeightIIClimit} k^{-1}(H_G^{\mathrm{IIC}}(k^2 t), H_D^{\mathrm{IIC}}(k^2 t))_{t\ge 0} &\to& \gamma^{-1/2}(B_{t}-2 \underline{B}_{ t}, \widetilde B_{t}-2 \underline{\widetilde B}_{ t})_{t\ge0}, \\ \label{eContourIIClimit} k^{-1}(C_G^{\mathrm{IIC}}(2k^2 t), C_D^{\mathrm{IIC}}(2k^2 t))_{t\ge 0} &\to& \gamma^{-1/2}(B_{t}-2 \underline{B}_{ t}, \widetilde B_{t}-2 \underline{\widetilde B}_{ t})_{t\ge0} \end{eqnarray}
as $k\to\infty$, where $B_t$ and $\widetilde B_t$ are \textit{independent} Brownian motions. \end{theorem}
Note that up to constant factors, the scaling limits in (\ref {eLukaIICRlimit}) and (\ref{eLukaIIClimit}) are reflected Brownian motions, while the scaling limits in (\ref{eHeightIIClimit}) and (\ref {eContourIIClimit}) are three-dimensional Bessel processes. The scaling limit in (\ref{eHeightIICRlimit}) and (\ref{eContourIICRlimit}), however, is not a standard process.
\begin{theorem}\label{TIICresults2} Write $H_t^{\mathrm{IIC}} = \gamma^{-1/2}(2B_t - 3\underline{B}_t)$ for the limit of $H_{\mathcal R}^{\mathrm{IIC}}$ in (\ref{eHeightIICRlimit}), and $l_\infty^a(H^{\mathrm{IIC}})$ for the standard local time at level $a$ of $H^{\mathrm{IIC}}$. Then for every $a>0$ we have the distributional limits
\begin{equation}\label{eqIICvolume} \frac{1}{n^2} C[0,an] \xrightarrow{n \to\infty} \int_{0}^{\infty} \mathbf{1}_{[0,a]}(H^{\mathrm{IIC}}_s) \,ds \end{equation}
and
\begin{equation}\label{eqIIClevels} \frac{1}{n} C[an] \xrightarrow{n \to\infty} \frac{\gamma}{4} l_{\infty}^a(H^{\mathrm{IIC}}). \end{equation}
Moreover, if $S^{\mathrm{IIC}}$ is an inhomogeneous Poisson point process on $[0,a\sqrt{\gamma}]$ with intensity $2(a\sqrt{\gamma }-s)^{-1}\,ds$, then
\begin{equation} \frac{1}{n}C[an]\xrightarrow{n\to\infty} \frac{\sqrt{\gamma}}{2} \sum_{s\in S^{\mathrm{IIC}}} \mathbf{e}\bigl\{ \bigl(a\sqrt {\gamma }-s\bigr)^{-1} \bigr\} \end{equation}
in distribution, where the terms in the sum are independent. \end{theorem}
\section{Background and overview}
\subsection{Structure of the IPC} \label{subrecall}
We now give a brief overview of the IPC structure theorem from \cite {AGdHS2008}, which is the basis for the present work. First of all, the IPC contains a single infinite branch, called the backbone and denoted $\mbox{BB}$. The backbone is a uniformly random branch in the tree (in the natural sense). From the backbone emerge, at every height $n$ and on every edge away from the backbone, subcritical percolation clusters
with parameter $\widehat W_n< p_c=\sigma^{-1}$.
The parameters $\widehat W_n$ are nondecreasing and satisfy $\widehat W_n\xrightarrow{n\to\infty} p_c$. Moreover, $(\widehat W_n)_{n=0}^\infty $ forms a Markov chain with dynamics\vadjust{\goodbreak} of the following kind. The initial value $\widehat W_0$ is distributed\vspace*{1pt} on $[0,p_c]$ according to a certain density function $f$. Given $\widehat W_n=\widehat w$, the next value $\widehat W_{n+1}$ is, with probability $g(\widehat w)$, a new value chosen according to the density $f$ conditioned to be larger than $\widehat w$; or else, with probability $1-g(\widehat w)$, it remains equal to $\widehat w$. For our purposes, it will suffice to know that the functions $f$ and $g$ satisfy
\begin{equation}\label{eRateAsymps} \lim_{\widehat w\nearrow p_c}f(\widehat w)>0,\qquad g(\widehat w)\sim\sigma(p_c-\widehat w)=1-\sigma\widehat w \end{equation}
as $\widehat w\nearrow p_c$. (These asymptotics follow from \cite{AGdHS2008}, Sections 2.1.2 and 3.1, since $(\widehat W_n)_{n=0}^\infty$ is the image of the Markov chain $(W_n)_{n=0}^\infty$ under $w\mapsto \widehat w$.)
We will primarily be concerned with the scaling limit of $\widehat W_n$, which is given by the lower envelope process $L(t)$ defined above. Writing $[x]$ for the integer part of~$x$, we have, for any $\varepsilon>0$,
\begin{equation}\label{eWtoL}
\bigl(k\bigl(1 - \sigma\widehat W_{[kt]}\bigr)\bigr)_{t \ge\varepsilon} \xrightarrow{k\to\infty} (L(t))_{t \ge\varepsilon}
\end{equation}
with respect to the Skorohod topology (see~\cite{AGdHS2008}, Proposition 3.3 and Corollary~3.4). Indeed, $L(t)$ is the continuous-time process that jumps, at rate $L(t)$, to a value uniformly chosen between $0$ and $L(t)$; this reflects the asymptotics given in (\ref{eRateAsymps}).
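As our own illustration of these jump dynamics (not from~\cite{AGdHS2008}), the process can be simulated directly; the argument `v0` stands in for the value of $L$ at some small time $\varepsilon>0$, since $L$ diverges at the origin:

```python
import random

# Sketch (ours) of the dynamics described above: between jumps L is
# constant; from value v it waits an Exp(v) time, then jumps to a value
# uniform on (0, v).  v0 stands in for L at a small positive time, since
# L diverges at the origin.
def simulate_L_jumps(v0, t_max, rng=random):
    t, v, traj = 0.0, v0, [(0.0, v0)]
    while True:
        t += rng.expovariate(v)   # holding time has rate equal to the value
        if t > t_max:
            return traj
        v = rng.uniform(0.0, v)   # jump to a uniform value below v
        traj.append((t, v))
```

The trajectory is nonincreasing in its values, consistent with the lower-envelope definition of $L$.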
The process $L(t)$ diverges as $t\to0$, which somewhat complicates the study of the IPC close to the root.
\subsection{Structure of the IIC} \label{subIICrecall}
The \textit{incipient infinite cluster} (IIC) embodies the notion of a percolation cluster that is both critical and infinite. It was originally defined and discussed by Kesten~\cite{Kesten1986} (see also \cite{BarKum2006}). The IIC can be obtained through a variety of limiting constructions---for instance, by conditioning a critical percolation cluster to extend at least distance $R$ and sending $R\to \infty$, or by examining the neighborhood of a faraway point in the IPC (see~\cite{Jarai2003} and~\cite{AGdHS2008}, Theorem 1.2). In the present context, we note that the IIC on a regular tree has a structure similar to the IPC; see~\cite{AGdHS2008}, Section~2.1.
Specifically, the IIC contains a single infinite branch, the backbone, which is a uniformly random branch in the tree. From the backbone emerge, at every height and on every edge away from the backbone, \textit{critical} percolation clusters.
Note that setting $\widehat W_n\equiv p_c$ in the above description yields the IIC, while in the scaling limit the role of $L$ is played by the constant $0$. This enables us to use a common framework for both clusters.
The convergence $\widehat W_n\xrightarrow{n\to\infty} p_c$ explains why the IPC and IIC resemble each other far above the root. However, the analysis of~\cite{AGdHS2008} shows that the convergence of the parameter of the attached clusters is slow enough that $r$-point functions and other measurable quantities such as level sizes possess different scaling limits.
\subsection{Encodings of finite trees} \label{subenc1}
For completeness we include here the definition of the various tree encodings we are concerned with. We refer to Le Gall~\cite{LeGall2005} for further details in the case of finite trees and to Duquesne~\cite{Duquesne2005} in the case of sin-trees discussed below.
A \textit{rooted plane tree} $\theta$ (also called an ordered tree) is a tree with a description as follows. Vertices of $\theta$ belong to $\bigcup_{n \geq0} {\mathbb N}^n$. By convention, $\varnothing\in{\mathbb N}^0$ is always a vertex of $\theta$ which is called the root. For a vertex $v \in\theta$, we let $k_v = k_v(\theta)$ be the number of children of $v$ and whenever $k_v = k >0$, these children are denoted $v1,\ldots,vk$. In particular, the $i$th child of the root is simply $i$, and if $vi \in\theta$, then $vj \in\theta$ as well for all $1\le j<i$. Edges of $\theta$ are the edges $(v,vi)$ whenever $vi\in\theta$. Note that the set of edges of $\theta$ is determined by the set of vertices and vice versa, which allows us to blur the distinction between a tree and its set of vertices. The \textit{$k$th generation} of a tree contains every vertex $v \in\theta\cap{\mathbb N}^k$, so that the $0$th generation consists exactly of the root. Define $\# \theta $ to be the total number of vertices in $\theta$.
Let $(v^i)_{0\le i < \#\theta}$ be the vertices of $\theta$ listed in lexicographic order, so that $v^0=\varnothing$. The \textit{Lukaciewicz path} $V$ of $\theta$ (sometimes known as the depth-first path) is the continuous function $(V_t = V^\theta_t, t \in[0, \# \theta])$ defined as follows: for $n\in\{1,\ldots,\#\theta\}$
\[ V_n = V^\theta_n:= \sum_{i=0}^{n-1} (k_{v^i}-1 ), \]
and between integers $V$ is interpolated linearly.\setcounter{footnote}{2}\footnote{In \cite{LeGall2005,DuqLeG2002}, the Lukaciewicz path is defined as a piecewise constant, discontinuous function, but there the case when the scaling limit of this path is discontinuous is also treated. Note that only the values of $V_n, n \in\{1,\ldots,\#\theta\}$, are needed to recover the tree $\theta$. Moreover, in our case, $\sup_{t \ge0} \vert V_{t+1}-V_t\vert$ is bounded by $\sigma$, so that the eventual scaling limit will be continuous. The advantage of our convention is that it allows us to consider locally uniform convergence of the rescaled Lukaciewicz paths in a space of continuous functions.}
The values $V_n$ are also given by the following \textit{right-hand description of the Lukaciewicz path}. This description is simpler to visualize, though we do not know of a reference for it. For $v \in\theta$, consider the subtree $\theta^v \subset\theta$ formed by all the vertices which are smaller or equal to $v$ in the lexicographic order. Let $n(v,\theta)$ be the number of edges connecting vertices of $\theta^v$ with vertices of $\theta\setminus\theta^v$. Then
\[ V(k) = n(v^k,\theta) - 1. \]
The reason we call this the right-hand description is that $n(v,\theta )$ is also the number of edges attached on the right-hand side of the path from $\varnothing$ to $v$. It is straightforward to check that this description is consistent with other definitions.
The height function is the second encoding we wish to consider. We also define it to be a piecewise linear function\footnote{Again, in \cite {LeGall2005}, the height function of a nondegenerate tree is discontinuous.} with $H(k)$ the height of $v^k$ above the root. It is related to the Lukaciewicz path by
\begin{equation}\label{VtoH} H(n) = \# \bigl\{ k<n\dvtx V_k = \min\{ V_k,\ldots,V_n \} \bigr\}. \end{equation}
Finally, the contour function of $\theta$ is obtained by considering a walker exploring $\theta$ at constant unit speed, starting from the root at time 0, and going from left to right. Each edge is traversed twice (once on each side), so that the total time before returning to the root is $2(\#\theta-1)$. The value $C^\theta(t)$ of the contour function at time $t \in[0, 2(\# \theta-1)]$ is the distance between the walker and the root at time $t$.
It is straightforward to check that the Lukaciewicz path, height function and contour function each uniquely determine---and hence represent---any finite tree~$\theta$. Figure~\ref{figencoding} illustrates these definitions, as they are easier to understand from a picture.
\begin{figure}
\caption{A finite tree and its encodings.}
\label{figencoding}
\label{figgraphVHC}
\end{figure}
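These encodings, and identity (\ref{VtoH}), can also be checked on a small example. The following Python sketch is our own illustration (the example tree is arbitrary and is not the tree of Figure~\ref{figencoding}); all function names are ours:

```python
# Our own illustration of the encodings above: compute the Lukaciewicz path
# of a small plane tree and recover each vertex's height via the identity
#   H(n) = #{ k < n : V_k = min(V_k, ..., V_n) }.
def preorder(tree, depth=0):
    """Yield (depth, number of children) in lexicographic (preorder) order.
    A plane tree is represented as a list of subtrees; [] is a leaf."""
    yield depth, len(tree)
    for child in tree:
        yield from preorder(child, depth + 1)

def lukaciewicz(tree):
    """V_0 = 0 and V_{n+1} = V_n + (k_{v^n} - 1), values at integer times."""
    V = [0]
    for _, k in preorder(tree):
        V.append(V[-1] + k - 1)
    return V

def height_from_path(V, n):
    """Height of the n-th vertex in preorder, read off the Lukaciewicz path."""
    return sum(1 for k in range(n) if V[k] == min(V[k:n + 1]))

tree = [[[], []], [[]]]   # a root with two children; 6 vertices in all
V = lukaciewicz(tree)     # [0, 1, 2, 1, 0, 0, -1]
depths = [d for d, _ in preorder(tree)]
```

On this example the recovered heights $[0,1,2,2,1,2]$ agree with the true depths, and $V$ first hits $-1$ at time $\#\theta$, as it should.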
At times it is useful to encode a sequence of finite trees by a single function. This is done by concatenating the Lukaciewicz paths or height functions of the trees of the sequence. Note that when coding a sequence of trees, jumping from one tree to the next corresponds to reaching a new integer infimum in the Lukaciewicz path, and to a visit to 0 in the height function.
\subsection{Encoding sin-trees} \label{subenc2}
While the definitions of Lukaciewicz path, and height and contour functions, extend immediately to infinite (discrete) trees, these paths generally no longer encode a unique infinite tree. For example, all the trees containing the infinite branch $\{\varnothing, 1, 11, 111,\ldots\}$ would have the identity function for height function, so that equal paths correspond to distinct infinite trees. In fact, the only part of an infinite tree which one can recover from the height and contour functions is the subtree that lies to the left of the leftmost infinite branch. The Lukaciewicz path encodes additionally the degrees of vertices along the leftmost infinite branch.
However, if we restrict the encodings to the class of trees whose only infinite branch is the rightmost branch, then the three encodings still correspond to unique trees. In particular, observe that $\mbox{IPC}_G$ and ${\mathcal R}$ are fully encoded by their Lukaciewicz paths (as well as by their height, or contour functions). That is the reason we begin our discussion with these conditioned objects.
Not surprisingly, it is possible to encode any sin-tree, such as the IIC and IPC, by using \textit{two} coding paths, one for the part of the tree lying to the left of the backbone, and one for the part lying to its right. More precisely, suppose ${\mathcal T}$ is a sin-tree, and $\mbox{BB}$ denotes its backbone. The left tree is defined as the set of all vertices on or to the left of the backbone:
\[ {\mathcal T}_{G}:= \bigcup_{v\in\mathrm{BB}} {\mathcal T}^v = \{ x \in{\mathcal T}\dvtx \exists v \in \mbox{BB}, x\le v\}. \]
We do not define the right-tree of ${\mathcal T}$ as the set of vertices which lie on or to the right of the backbone. Rather, in light of the way the encodings are defined, it is easier to work with the mirror-image of ${\mathcal T}$, denoted $\overline{{\mathcal T}}$ and defined as follows: since a plane tree is a tree where the children of each vertex are ordered, $\overline{{\mathcal T}}$ may be defined as the same tree but with the reverse order on the children at each vertex. We then define
\[ {\mathcal T}_D = (\overline{{\mathcal T}})_G. \]
Obviously, only the rightmost branches of ${\mathcal T}_G, {\mathcal T}_D$ are infinite, so the Lukaciewicz paths $V_G,V_D$, of ${\mathcal T}_G,{\mathcal T}_D$, do encode uniquely each of these two trees (and so do the height functions $H_G, H_D$ and the contour functions $C_G, C_D$). Therefore, the pair of paths $(V_G,V_D)$ encodes ${\mathcal T}$ [and so do the pairs $(H_G,H_D)$, $(C_G,C_D)$]. Note that $H_G, C_G$ are also, respectively, the height and contour functions of ${\mathcal T}$ itself, while $H_D,C_D$ are, respectively, the height and contour functions of $\overline{{\mathcal T}}$.
\subsection{Overview} \label{subheuristics}
Let us try to give briefly, and heuristically, some intuition of why Theorem~\ref{Trightscale} holds. For $t>0$, the tree emerging from $\mbox{BB}_{[kt]}$ is coded by the $[kt]$th excursion of $V$ above $0$. Except for its first step, this excursion has the same transition probabilities as a random walk with drift $\sigma\widehat{W}_{[kt]}-1$, which, by the\vadjust{\goodbreak} convergence\vspace*{1pt} (\ref{eWtoL}), is approximately $-L(t)/k$. Additionally, by~\cite{AGdHS2008}, Proposition~3.1, $\widehat W_n$ is constant for long stretches of time. It is well known (see, e.g.,~\cite{Jacod1985}, Theorem~2.2.1) that a sequence of random walks with drift $c/k$, suitably scaled, converges as $k \to\infty$ to a $c$-drifted Brownian motion. Thus, we expect to find segments of drifted Brownian paths in our limit. According to the convergence (\ref{eWtoL}), the drift is expressed in terms of the $L$-process. This is what the definition of $Y$ expresses.
Thus, the idea when dealing with either the conditioned or the unconditioned IPC is to cut these sin-trees into pieces corresponding to stretches where $\widehat W$ is constant, and to look separately at the convergence of each piece. Since we deal extensively with codings of trees by paths, we call these pieces of trees \textit{segments}, although in the terminology of \cite {NewmanStein1995,GoodmanPondsInPrep,DamronSapozhnikov2010} and other works they are known as the \textit{ponds} of the IPC.
In Section~\ref{subunique} we establish existence and uniqueness results for equation~(\ref{eqSDE}).
In Section~\ref{secfixed} we look at the convergence of the rescaled paths coding a sequence of such segments for well chosen, fixed values of the $\widehat W$-process. In fact, we consider a slightly more general setting, which allows us to treat the case of the IIC as well as the various flavors of the IPC.
In Section~\ref{secproofmain} we prove Theorem~\ref{Trightscale} and Theorem~\ref{Ttwosidedscale} by combining segments. To deal with the fact that $\widehat W$ is random and exploit the convergence (\ref{eWtoL}), we use a coupling argument (see Section~\ref{subcoupling}). We then prove that the segments fall into the family dealt with in Section~\ref{secfixed}. Because of the divergence of the $L$-process at the origin, we only perform the above for subtrees above certain levels, and bound the resulting error separately. The proof of Theorem~\ref{Tlimexist} follows from Theorem~\ref{Trightscale}.
Finally, in Section~\ref{seclevels} we apply our convergence results to establish asymptotics for level and volume estimates of the IPC, to recover and extend results of~\cite{AGdHS2008}.
\section{\texorpdfstring{Solving (\protect\ref{eqSDE})}{Solving (E(L))}} \label{subunique}
\begin{claim}\label{cl31} Solutions to (\ref{eqSDE}) and to ${\mathcal E}(L/2)$ are unique in law. \end{claim}
Curiously, we were unable to determine whether the solutions to (\ref {eqSDE}) are a.s. pathwise unique (i.e., whether strong uniqueness holds). For our purposes uniqueness in law suffices.
\begin{pf*}{Proof of Claim~\ref{cl31}} We prove this claim for equation (\ref{eqSDE}). The proof for equation ${\mathcal E}(L/2)$ is identical.
Let $Y$ be a solution of (\ref{eqSDE}). Since $L$ is positive, $Y_t \le B_t$. Since $L$ is nonincreasing, $\int_{0}^t L(-\underline{Y}_{ s})\,ds \le \int_0^t L(-\underline{B}_{ s}) \,ds$. For any fixed $\varepsilon>0$, a.s. for all small enough $s$, $-\underline{B}_{ s} > s^{1/2-\varepsilon}$, while a.s. for all small enough $u$, $L(u) < u^{-(1+\varepsilon)}$. We deduce that almost surely $\lim_{t \to0} \int_{0}^t L(-\underline{Y}_{ s})\,ds = 0$. Thus, any solution of (\ref{eqSDE}) is continuous.\vadjust{\goodbreak}
Let us now consider two solutions $Y^1$, $Y^2$ of $\mathcal{E}(L)$ and fix $\varepsilon>0$. Introduce
\[ j^{\varepsilon} := \inf\{ t >0\dvtx L(t) < \varepsilon^{-1}\} \]
and
\begin{eqnarray*} t_0^{\varepsilon} &:=& \inf\{t>0\dvtx -\underline{B}_{ t} > j^{\varepsilon}\},\\ t_1^{\varepsilon} &:=& \inf\{t>0\dvtx -\underline{Y}^1_{ t} > j^{\varepsilon}\},\\ t_2^{\varepsilon} &:=& \inf\{t>0\dvtx -\underline{Y}^2_{ t} > j^{\varepsilon}\}. \end{eqnarray*}
From the continuity of $Y^1, Y^2$ we have $Y^1(t_{1}^{\varepsilon}) = Y^2(t_2^{\varepsilon})=-j^{\varepsilon}$. Moreover, we have a.s. $t_1^{\varepsilon} \vee t_2^{\varepsilon} \le t_0^{\varepsilon}$, and, therefore,
\begin{equation} \label{eqmaxt1t2} t_1^{\varepsilon} \vee t_2^{\varepsilon} \xrightarrow[\varepsilon\to0]{\mathrm{a.s.}} 0. \end{equation}
Introduce a Brownian motion $\beta$ independent of $B$ and consider the (SDE)
{\renewcommand{$\cA$}{${\mathcal E}(\varepsilon,L)$} \begin{equation}\label{eqSDEeps} Z_t^{\varepsilon} = \beta_t - \int_0^t L( j^{\varepsilon}-\underline{Z}^{\varepsilon}_s ) \,ds. \end{equation}}
\noindent Pathwise existence and uniqueness hold for (\ref{eqSDEeps}) by standard arguments.
We then define
\begin{eqnarray*} Y^{1,\varepsilon}_t &=& \cases{ Y^1_t, &\quad if $t < t_1^{\varepsilon}$, \cr Y^1_{t_{1}^{\varepsilon}} + Z_{t-t_1^{\varepsilon}}^\varepsilon, &\quad if $t \ge t_1^{\varepsilon}$,} \\ Y^{2,\varepsilon}_t &=& \cases{ Y^2_t, &\quad if $t < t_2^{\varepsilon}$, \cr Y^2_{t_{2}^{\varepsilon}} + Z_{t-t_2^{\varepsilon}}^\varepsilon, &\quad if $t \ge t_2^{\varepsilon}$.} \end{eqnarray*}
Clearly, $Y^{1,\varepsilon}, Y^{2,\varepsilon}$ are a.s. continuous, and, moreover, $Y^1$ and $Y^{1,\varepsilon}$ have the same distribution, and so do $Y^2$ and $Y^{2,\varepsilon}$. However, $(Y^{i,\varepsilon}(t_i^{\varepsilon}+t))_{t\ge0}$ for $i=1,2$ have a.s. the same path. From this fact, the continuity of $Y^{1,\varepsilon}, Y^{2,\varepsilon}$ and (\ref{eqmaxt1t2}), it follows that for any $F \in{\mathcal C}_b({\mathcal C}({\mathbb R}_+,{\mathbb R}),{\mathbb R})$
\[ \vert E[F(Y^{1})] -E[F(Y^{2})]\vert = \vert E[F(Y^{1,\varepsilon})] -E[F(Y^{2,\varepsilon}) ]\vert \]
goes to 0 as $\varepsilon$ goes to 0, which completes the proof. \end{pf*}
\section{Scaling simple sin-trees and their segments} \label{secfixed}
The goal of this section is to establish the convergence of the rescaled paths encoding suitable sequences of well-chosen segments. In order to cover the separate cases at once, we will work in a slightly more general context than might seem necessary. We first look at a sequence of particular sin-trees $\mathbf{T}^k$ for which the vertices adjacent to the backbone generate i.i.d. subcritical (or critical) Galton--Watson trees. The law of such a tree is determined by the branching law on these Galton--Watson trees and the degrees along the backbone. If the degrees along the backbone do not behave too erratically and the percolation\vspace*{1pt} parameter scales correctly, then the sequence of Lukaciewicz paths ${\mathbf V}^k$ has a scaling limit.\vadjust{\goodbreak}
The results for the IIC follow directly. Also, we determine the scaling limits of the paths encoding a sequence of subtrees obtained by truncations at suitably chosen vertices on the backbones of $\mathbf{T}^k$. These will be important intermediate results in the proofs of Theorems~\ref{Trightscale} and~\ref{Ttwosidedscale}.
\subsection{Notation}\label{subnotationsegments}
Throughout this section we fix for each $k\in{\mathbb Z}_+$ a parameter $w_k \in [0,1/\sigma]$, and denote by $(\theta_n^k)_{n \in{\mathbb Z}_+}$ a sequence of i.i.d. subcritical Galton--Watson trees with branching law $\operatorname{Bin} (\sigma, w_k)$. For each $k$ we also let $Z_k$ be a sequence of random variables $(Z_{k,n})_{n\ge0}$ taking values in ${\mathbb Z}_+$.
\begin{defn}\label{defZktrees} The \textit{$(Z_k,\theta^k)$-tree} is the sin-tree defined as follows. The backbone $\mbox{BB}$ is the rightmost branch. The vertex $\mbox{BB}_i$ has $1+Z_{k,i}$ children, including $\mbox{BB}_{i+1}$. Let $v_0,\ldots$ be all vertices adjacent to the backbone, in lexicographic order, and identify $v_n$ with the root of the tree $\theta^k_n$. \end{defn}
Thus, the first $Z_{k,0}$ of the $\theta$'s are attached to children of $\mbox{BB}_0$, the next $Z_{k,1}$ to children of $\mbox{BB}_1$ and so on. We will use the notation ${\mathbf T}^k$ to designate the $(Z_k,\theta^k)$-tree, and ${\mathbf V}^k$ for its Lukaciewicz path.
\begin{defn}\label{itruncation} Let $T$ be a sin-tree whose backbone is its rightmost branch. For $i \in {\mathbb Z}_+$, let $\mbox{BB}_i$ be the vertex at height $i$ on the backbone of $T$. The \textit{$i$-truncation} of $T$ is the subtree
\[ T^i:= \{ v \in T\dvtx v \le\mbox{BB}_{i}\}, \]
where $\le$ denotes lexicographic ordering. \end{defn}
Thus, the $i$-truncation of a tree consists of the backbone up to $\mbox{BB}_i$ and the subtrees attached strictly below level $i$. We denote by ${\mathbf T}^{k,i}$ the $i$-truncation of ${\mathbf T}^k$, and by ${\mathbf V}^{k,i}$ its Lukaciewicz path. We further define $\tau^{(i)}$ as the time of the $(i+1)$th return to 0 of ${\mathbf V}^k$; here we suppress the dependence of $\tau^{(i)}$ on $k$. Observe then that ${\mathbf V}^{k,i}$ coincides with ${\mathbf V}^{k}$ up to the time $\tau^{(i)}$, takes the value $-1$ at $\tau^{(i)}+1$, and terminates at that time.
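The construction of the $(Z_k,\theta^k)$-tree and its $i$-truncation can be sketched in code. The following is our own illustration for the case $Z_{k,n}\sim\operatorname{Bin}(\sigma-1,w_k)$ considered below; all function names are ours:

```python
import random

# Our own sketch of the definitions above, in the percolation-cluster case
# Z ~ Bin(sigma - 1, w): generate, in lexicographic order, the offspring
# counts of the i-truncation of a (Z, theta)-tree, and its Lukaciewicz path.
def gw_offspring_preorder(sigma, w, rng):
    """Offspring counts of one Bin(sigma, w) Galton-Watson tree, preorder."""
    counts, pending = [], 1
    while pending:
        k = sum(rng.random() < w for _ in range(sigma))
        counts.append(k)
        pending += k - 1
    return counts

def truncation_path(sigma, w, i, rng=random):
    counts = []
    for _ in range(i):                 # backbone vertices BB_0, ..., BB_{i-1}
        z = sum(rng.random() < w for _ in range(sigma - 1))
        counts.append(z + 1)           # z attached roots, then next BB vertex
        for _ in range(z):
            counts.extend(gw_offspring_preorder(sigma, w, rng))
    counts.append(0)                   # BB_i is a leaf of the truncation
    V = [0]                            # V_n = sum_{j < n} (k_{v^j} - 1)
    for k in counts:
        V.append(V[-1] + k - 1)
    return V
```

With this representation, $V$ returns to $0$ exactly when a backbone vertex is visited, consistent with the description of $\tau^{(i)}$ above.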
It will be useful to study first the special case where $Z_k$ is a sequence of i.i.d. binomial $\operatorname{Bin}(\sigma,w_k)$ random variables. Observe that in this case the subtrees attached to the backbone are i.i.d. Galton--Watson trees [with branching law $\operatorname{Bin}(\sigma,w_k)$]. We use calligraphed letters for the various objects in this case. We denote the binomial variables ${\mathcal Z}_{k,n}$, we write ${\mathcal T}^k$ for the corresponding $({\mathcal Z}_k, \theta^k)$-tree, ${\mathcal T}^{k,i}$ for its $i$-truncation, and ${\mathcal V}^k, {\mathcal V}^{k,i}$ for the corresponding Lukaciewicz paths.
With a view toward proving our main results, we note another special distribution of the variables $Z_{k,n}$ that is of interest. If $Z_{k,n}$ are i.i.d. $\operatorname{Bin}(\sigma-1, w_k)$, then the subtrees emerging from the backbone of the $(Z_k,\theta^k)$-tree are independent subcritical percolation clusters with parameter $w_k$. In particular, for suitably chosen values of $w_k, n_k$, ${\mathbf T}^{k,n_k}$ has the same law as a\vadjust{\goodbreak} certain segment of ${\mathcal R}$. On the other hand, if $w_k\equiv\sigma ^{-1}$, then the corresponding $(Z,\theta)$-tree is simply the IIC conditioned on its backbone being the rightmost branch of $\mathcal{T}$, which we denote by $\mbox{IIC}_{{\mathcal R}}$. We will see below that the IIC with unconditioned backbone, as well as segments of the unconditioned IPC, can be treated in a similar way.
\subsection{Scaling of segments} \label{subscalesegments}
\begin{prop}\label{prfixeddrift} Let $Z_{k,n}$ be random variables satisfying the following assumptions:
{\renewcommand{\theequation}{{\mathcal A}} \begin{equation}\label{eqBBchildrenAssumps} \cases{ \mbox{For any $k$, the variables $(Z_{k,n})_{n}$ are i.i.d.;} \cr \mbox{for some $C,\alpha>0$, ${\mathbb E} Z_{k,n}^{1+\alpha} < C$ for any $k$;} \cr \mbox{for some $\eta>0$, ${\mathbb P}(Z_{k,n}>0) > \eta$ for any $k$;} \cr \mbox{if $m_k={\mathbb E} Z_{k,n}$ then $m=\lim m_k$ exists.}} \end{equation}}
\noindent Further assume that $w_k\leq\sigma^{-1}$ satisfy $\lim_k k(1-\sigma w_k)=u$. Then, as $k \to \infty$, weakly in ${\mathcal C}({\mathbb R}_+,{\mathbb R})$,
\setcounter{equation}{22} \begin{equation} \label{convexc} \biggl(\frac{1}{k}{\mathbf V}^k_{[k^2 t]}\biggr)_{t\ge0} \xrightarrow{k\to\infty} (X_t)_{t\ge0}, \end{equation}
where $X_t = Y_t-\underline{Y}_t$ and $Y_t = B_{\gamma t} - u t$ is a drifted Brownian motion. \end{prop}
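As a sanity check (this is precisely the special case treated first in the proof below), the binomial choice ${\mathcal Z}_{k,n}\sim\operatorname{Bin}(\sigma,w_k)$ of Section~\ref{subnotationsegments} satisfies assumption~(\ref{eqBBchildrenAssumps}): the variables are i.i.d. and bounded by $\sigma$, and ${\mathbb P}({\mathcal Z}_{k,n}>0)=1-(1-w_k)^\sigma$ is bounded away from $0$ since $\sigma w_k\to1$. Moreover,
\[ m_k={\mathbb E}{\mathcal Z}_{k,n}=\sigma w_k\xrightarrow{k\to\infty}1, \qquad \operatorname{Var}{\mathcal Z}_{k,n}=\sigma w_k(1-w_k)\xrightarrow{k\to\infty}\gamma, \]
so that in this case $m=1$.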
Since our goal is to represent segments of the IPC as well-chosen ${\mathbf T} ^{k,i}$, we have to deduce from Proposition~\ref{prfixeddrift} some results for the coding paths of the truncated trees. The convergence will take place in the space of continuous stopped paths denoted ${\mathcal S}$. An element $f \in{\mathcal S}$ is given by a lifetime $\zeta(f)\geq0$ and a continuous function $f$ on $[0,\zeta(f)]$. ${\mathcal S}$ is a Polish space with metric
\[ d(f,g) = \vert\zeta(f)-\zeta(g)\vert+\sup_{t\leq\zeta(f)\wedge \zeta (g)} \{ \vert f(t)-g(t)\vert \} . \]
It is clear from the right-hand description of Lukaciewicz paths that the path of ${\mathbf T}^{k,i}$ visits 0 exactly when reaching backbone vertices. In particular, its length is~$\tau^{(i)}$, the time of the $i$th return to 0 by the path ${\mathbf V}^k$. We shall use this to prove the following.
\begin{corollary}\label{cornkexc} Assume the conditions of Proposition~\ref{prfixeddrift} are in force. Assume further that $0<x=\lim n_k/k$. Then, weakly in ${\mathcal S}$,
\begin{equation}\label{convnkexc} \biggl(\frac{1}{k} {\mathbf V}_{[k^2 t]}^{k,n_k}\biggr)_{t \le\tau^{(n_k)}/k^2} \xrightarrow{k\to\infty} (X_t)_{t\le\tau_{m x}}, \end{equation}
where $X$ and $Y$ are as in Proposition~\ref{prfixeddrift}, and $\tau_y$ is the stopping time $\inf\{t>0\dvtx Y_t=-y \}$. \end{corollary}
It is then straightforward to deduce convergence of the height functions. Let $h^{k}$ (resp., $h^{k,i}$) denote the height function coding the tree ${\mathbf T}^k$ (resp.,~${\mathbf T}^{k,i}$).\vadjust{\goodbreak}
\begin{corollary}\label{corheight,fixed} Suppose the assumptions of Corollary~\ref{cornkexc} are in force. Then weakly in ${\mathcal C}({\mathbb R}_+,{\mathbb R})$,
\begin{equation}\label{eqheightfull} \biggl(\frac{1}{k} h^{k}_{[tk^2]}\biggr)_{t\ge0} \xrightarrow{k\to\infty} \biggl(\frac{2}{\gamma}(Y_t - \underline{Y}_{ t}) - \frac {1}{m}\underline{Y}_{ t}\biggr)_{t\ge0}. \end{equation}
Furthermore, weakly in ${\mathcal S}$,
\begin{equation}\label{eqheightnk} \biggl(\frac{1}{k} h^{k,n_k}_{[tk^2]}\biggr)_{t\le\tau^{(n_k)}/k^2} \xrightarrow{k\to\infty} \biggl(\frac{2}{\gamma} (Y_t - \underline{Y}_{ t}) - \frac {1}{m} \underline{Y}_{ t}\biggr)_{t \le\tau_{m x}}. \end{equation}
\end{corollary}
\subsection{\texorpdfstring{Proof of Proposition \protect\ref{prfixeddrift}}{Proof of Proposition 4.3}} \label{secfixeddriftproof}
We start with the following lemma, which relates the Lukaciewicz path of a sequence of trees to that of the tree consisting of a backbone to which the trees of the sequence are attached.
\begin{lemma}\label{LgluetoBB} Let $(\theta_n)_{n\geq0}$ be a sequence of trees, and let $T$ be the sin-tree with backbone $\mbox{BB}$ on the right such that the root of $\theta_n$ is identified with $\mbox{BB}_n$. Let $U$ be the Lukaciewicz path coding the sequence $\theta$, and let $V$ be the Lukaciewicz path of $T$. Then
\[ V_n = U_n + 1 -\underline{U}_{n-1}, \]
where $\underline{U}$ is the infimum process of $U$ and by convention $\underline{U}_{-1}=1$. \end{lemma}
\begin{pf} The lemma follows directly from the definition of Lukaciewicz paths. $U$ reaches a new infimum (and $\underline{U}$ decreases) exactly when the process completes the exploration of a tree in the sequence. The increments of $V$ differ from the increments of $U$ only at vertices of the backbone of $T$, where the degree in $T$ is one more than the degree in $\theta_n$. \end{pf}
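As a quick sanity check of the formula (not needed in what follows), take every $\theta_n$ to be a single vertex, so that $T$ is the bare backbone ray. Then $U_n=-n$ and $\underline{U}_{n-1}=-(n-1)$ for $n\ge1$, whence
\[ V_n = U_n + 1 - \underline{U}_{n-1} = -n+1+(n-1) = 0, \]
as it should be: every vertex of $T$ then has exactly one child, so every increment of $V$ vanishes. (For $n=0$ the convention $\underline{U}_{-1}=1$ likewise gives $V_0=0$.)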
We first establish the proposition in the special case introduced earlier, where ${\mathcal Z}_k$ is a sequence of i.i.d. $\operatorname{Bin}(\sigma,w_k)$ random variables. In this case, the subtrees attached to the backbone of ${\mathcal T}^k$ are a sequence of i.i.d. Galton--Watson trees with branching law having expectation $\sigma w_k$ (which tends to 1 as $k\to\infty$) and variance $\sigma w_k (1-w_k)$ (which tends to $\gamma$ as $k\to\infty$).
The Lukaciewicz path ${\mathcal U}^{k}$ of this sequence of Galton--Watson trees is a random walk with drift $\sigma w_k-1$ and stepwise variance $\sigma w_k (1-w_k)$. From a well-known extension of Donsker's invariance principle (see, e.g., \cite{Jacod1985}, Theorem II.3.5), it follows that
\[ \biggl(\frac{1}{k} {\mathcal U}^k(k^2 t)\biggr)_{t\ge0} \xrightarrow{k\to\infty} (Y_t)_{t\ge0} \]
weakly in ${\mathcal C}({\mathbb R}_+,{\mathbb R})$. It now follows from Lemma~\ref{LgluetoBB} that
\begin{equation}\label{eqVlimit} \biggl(\frac{1}{k} {\mathcal V}^k(k^2 t)\biggr)_{t\ge0} \xrightarrow{k\to\infty} (X_t)_{t\ge0}. \end{equation}
Having Proposition~\ref{prfixeddrift} for ${\mathcal Z}_{k,n}$, we now extend it to other degree sequences. By the Skorokhod representation theorem, we may assume (by changing the probability space as needed) that (\ref{eqVlimit}) holds a.s.:
\begin{equation}\label{eqconvforGW} \biggl(\frac{1}{k} {\mathcal V}^k(k^2 t)\biggr)_{t\ge0} \xrightarrow[k \to\infty]{\mathrm{a.s.}} (X_t)_{t\ge0}. \end{equation}
We further couple the trees ${\mathcal T}^k$ and ${\mathbf T}^k$ (on a suitable probability space where the sequences $Z_k$ are defined) by using the same sequences $\theta^k$ of off-backbone trees. Namely, the subtree descended from the $n$th vertex adjacent to the backbone, in lexicographic order, is $\theta_n^k$ for both ${\mathcal T}^k$ and ${\mathbf T}^k$, and we will identify $v\in\theta_n^k$ with the corresponding vertices of ${\mathcal T}^k$ and ${\mathbf T}^k$. However, because the sequences $Z_k$ and ${\mathcal Z}_k$ are different, the Lukaciewicz paths of these two trees differ, and we now give bounds to control this difference.
It will be convenient to consider the sets of points
\[ {\mathbf G}^k:= \{(i,{\mathbf V}^{k}(i)), i \in{\mathbb Z}_+\},\qquad {\mathcal G}^k:= \{(i,{\mathcal V}^{k}(i)), i \in{\mathbb Z}_+\}, \]
which are the integer points in the graphs of ${\mathbf V}^k,{\mathcal V}^k$. To each vertex $v \in{\mathbf T}^k$ corresponds a point $({\mathbf x}_v, {\mathbf y}_v)\in{\mathbf G}^k$ [and similarly $(x_v,y_v)\in{\mathcal G}^k$ for $v\in{\mathcal T}^k$]. From the right-hand description of Lukaciewicz paths introduced in Section~\ref{subenc1}, we see that
\begin{eqnarray*} {\mathbf G}^k &=& \{({\mathbf x}_v,{\mathbf y}_v)\dvtx v \in{\mathbf T}^k\} = \bigl\{ \bigl( \#({\mathbf T}^k)^v, n(v,{\mathbf T}^k)-1\bigr)\dvtx v \in{\mathbf T}^k \bigr\}, \\ {\mathcal G}^k &=& \{(x_v,y_v)\dvtx v \in{\mathcal T}^k\} = \bigl\{ \bigl( \#({\mathcal T}^k)^v, n(v,{\mathcal T}^k)-1\bigr)\dvtx v \in{\mathcal T}^k \bigr\}. \end{eqnarray*}
The next step is to show that these two sets are close to each other. Any $v\in\theta^k_n$ is contained in both ${\mathbf T}^k$ and ${\mathcal T}^k$. We first show that ${\mathbf x}_v\approx x_v$ and ${\mathbf y}_v\approx y_v$ for such $v$, and then show how to deal with the backbones.
Any tree $\theta^k_n$ is attached by an edge to some vertex in the backbone of ${\mathbf T}^k$ and ${\mathcal T}^k$. For any vertex $v\in\theta^k_n$ we denote the height of this vertex by ${\mathbf l}_v$ and $\ell_v$, respectively:
\[ {\mathbf l}_v = \sup\{t\dvtx \mbox{BB}_t < v \mbox{ in } {\mathbf T}^k \},\qquad \ell_v = \sup\{t\dvtx \mbox{BB}_t < v \mbox{ in } {\mathcal T}^k \}. \]
These values depend implicitly on $k$. Note that ${\mathbf l}_v,\ell_v$ do not depend on which $v\in\theta^k_n$ is chosen, hence, by a slight abuse of notation, we also use ${\mathbf l}_n,\ell_n$ for the same values whenever $v\in\theta^k_n$.
\begin{lemma} Assume $v\in\theta^k_n$. Then
\begin{eqnarray*} \vert{\mathbf x}_v-x_v\vert &=& \vert{\mathbf l}_v-\ell_v\vert, \\ \vert{\mathbf y}_v-y_v\vert &\leq&\sigma+ Z_{k,{\mathbf l}_v}. \end{eqnarray*}
\end{lemma}
\begin{pf} We have
\[ x_v = \#({\mathcal T}^k)^v = \sum_{i<n} \# \theta_i^k + \#(\theta^k_n)^v + \ell_n\vadjust{\goodbreak} \]
and, similarly,
\[ {\mathbf x}_v = \#({\mathbf T}^k)^v = \sum_{i<n} \# \theta_i^k + \#(\theta^k_n)^v + {\mathbf l}_n. \]
The first claim follows.
For the second bound use ${\mathbf y}_v = n(v,{\mathbf T}^k) -1$. There are $n(v,\theta^k_n)$ edges connecting $({\mathbf T}^k)^v$ to its\vspace*{1pt} complement inside $\theta^k_n$ and at most $Z_{k,{\mathbf l}_n}$ edges connecting $\mbox{BB}_{{\mathbf l} _n}$ to the complement. Similarly, in ${\mathcal T}^k$ we have the same $n(v,\theta^k_n)$ edges inside $\theta^k_n$ and at most ${\mathcal Z}_{k,\ell_n}\le\sigma$ edges connecting $\mbox{BB}_{\ell_n}$ to the complement. It follows that the difference is at most $\sigma+Z_{k,{\mathbf l}_n}$. \end{pf}
Next we prepare to deal with the backbone. For a vertex $v\in{\mathbf T}^k$, define $u\in{\mathbf T}^k$ by
\[ u = \min\{ u'\in({\mathbf T}^k\setminus\mbox{BB})\dvtx u'\ge v \}. \]
If $v\notin\mbox{BB}$, then $u=v$. If $v$ is on the backbone, then $u$ is the first child of $v$, unless $v$ has no children outside the backbone. Note that $u\in\theta^k_n$ for some $n$, so we may also consider $u$ as a vertex of ${\mathcal T}^k$. Note also that $v\to u$ is a nondecreasing map from ${\mathbf T}^k$ to ${\mathcal T}^k$.
\begin{lemma}\label{LBBdisp} For a backbone vertex $v$ in ${\mathbf T}^k$, define $n$ by $\theta^k_n < v < \theta^k_{n+1}$. Then
\begin{eqnarray*} \vert{\mathbf x}_v - {\mathbf x}_u\vert &\le& 1 + {\mathbf l}_{n+1} - {\mathbf l}_n, \\ \vert{\mathbf y}_v - {\mathbf y}_u\vert &\le& \sigma+ Z_{k,{\mathbf l}_{n+1}}. \end{eqnarray*}
\end{lemma}
\begin{pf} The only vertices between $v$ and $u$ in the lexicographic order are $u$ and some of the backbone vertices with indices from ${\mathbf l}_n$ to ${\mathbf l}_{n+1}$, yielding the first bound.
Let $w\in\mbox{BB}$ be $u$'s parent. If $v$ has children apart from the next backbone vertex, then $w=v$ and $u$ is $v$'s first child, so ${\mathbf y} _u-{\mathbf y}_v = k_u-1 \le\sigma-1$. If $v$ has no other children, then ${\mathbf y}_u-{\mathbf y} _v = (k_u-1) + (k_w-1) \le\sigma+ Z_{k,{\mathbf l}_{n+1}}$. \end{pf}
\begin{lemma}\label{Lhdisp} Fix $\varepsilon,A>0$ and let $w$ be the $[Ak^2]$th vertex of ${\mathbf T}^k$. Then with high probability $\ell_w,{\mathbf l}_w \leq k^{1+\varepsilon}$. \end{lemma}
\begin{pf} Since each $\theta^k_n$ is (slightly) subcritical, we have ${\mathbb P}(\#\theta^k_n > k^2) > c_1 k^{-1}$ for some $c_1>0$. Consider the first $k^{1+\varepsilon}$ vertices along the backbone in ${\mathbf T}^k$. With high probability, the number of $\theta$'s attached to them is at least $\eta k^{1+\varepsilon}/2$. On this event, with high probability, at least $c_2 k^\varepsilon$ of these have size at least $k^2$, hence, there are $c_2 k^{2+\varepsilon} \gg Ak^2$ vertices $v$ with ${\mathbf l}_v \le k^{1+\varepsilon}$ (and these include the first $Ak^2$ vertices in the tree). $\ell_w$ is dealt with in the same way. \end{pf}
\begin{lemma}\label{Lhvdisp} Fix $A>0$ and let $w$ be the $[Ak^2]$th vertex of ${\mathbf T}^k$. For $\varepsilon>0$ small enough,
\[ {\mathbb P}\Bigl( \sup_{v<w} \vert{\mathbf x}_v - x_u\vert > 3k^{1+\varepsilon} \Bigr) \xrightarrow{k\to\infty} 0 \]
and
\[ {\mathbb P}\Bigl( \max_{v<w} \vert{\mathbf y}_v-y_u\vert > k^{1-\varepsilon} \Bigr) \xrightarrow{k\to\infty} 0. \]
\end{lemma}
\begin{pf} For a vertex $v\in\theta^k_n$ off the backbone we have $u=v$ and
\[ \vert{\mathbf x}_v - x_u\vert \le\vert{\mathbf l}_v - \ell_v\vert \le{\mathbf l}_v + \ell_v \le{\mathbf l}_w + \ell_w, \]
and with high probability this is at most $2k^{1+\varepsilon}$. If $v<w$ is in the backbone, then we argue that $\vert{\mathbf x}_v-{\mathbf x}_u\vert \ll k^{1+\varepsilon}$. To this end, note that ${\mathbf l}_{n+1}-{\mathbf l}_n$ is dominated by a geometric random variable with mean $1/\eta$ (since the $Z_{k,n}$'s are independent). Since only $n<Ak^2$ might be relevant to the initial part of the tree, this shows that with high probability $\vert{\mathbf x}_v-{\mathbf x}_u\vert < c\log k \ll k^{1+\varepsilon}$.
The bound on the $y$'s follows from the bounds on $\vert{\mathbf y} _v-y_u\vert$. All that is needed is to show that with high probability $Z_{k,n} < k^{1-\varepsilon}$ for all $n<k^{1+\varepsilon}$, and this follows from assumption (\ref{eqBBchildrenAssumps}) and Markov's inequality. \end{pf}
We now finish the proof of Proposition~\ref{prfixeddrift}. Because the path of ${\mathbf V}^k$ is linearly interpolated between consecutive integers, and since for any $A>0$ the paths of $X$ are a.s. uniformly continuous on $[0,A]$, the proposition will follow if we establish that for any $A,\varepsilon>0$,
\begin{equation}\label{eqendprooffd} {\mathbb P}\biggl(\sup_{t \in[0,A]} \biggl\vert\frac{1}{k} {\mathbf V}^k_{[k^2 t]} - X_t\biggr\vert >\varepsilon \biggr) \xrightarrow{k\to\infty} 0. \end{equation}
Consider first $t$ such that $k^2t\in{\mathbb Z}_+$. Then there is some vertex $v\in{\mathbf T}^k$ so that ${\mathbf x}_v=k^2t$. Let $u\in{\mathcal T}^k$ be as defined above, and suppose $k^2s=x_u$. Then (\ref{eqconvforGW}) implies that $\vert k^{-1}y_u-X_s\vert$ is uniformly small. Lemma~\ref{Lhvdisp} implies that with high probability $\vert k^2s-k^2t\vert=\vert x_u-{\mathbf x}_v\vert\le 3k^{1+\varepsilon}$ for all such $v$. Thus, $\vert s-t\vert\le3k^{-1+\varepsilon} \ll1$. Since paths of $X$ are uniformly continuous, we find $\vert X_s-X_t\vert$ is uniformly small, and so $\vert k^{-1}y_u-X_t\vert$ is uniformly small. Finally, Lemma~\ref{Lhvdisp} shows that with high probability $\vert{\mathbf y}_v-y_u\vert\leq k^{1-\varepsilon}$, so the scaled vertical distance is also~$o(1)$.
Next, assume $n<k^2t<n+1$ for some integer $n$. Then ${\mathbf V}^k(k^2t)$ lies between ${\mathbf V}^k(n)$ and ${\mathbf V}^k(n+1)$. Since both of these are close to the corresponding values of $X$, and since $X$ is uniformly continuous (and the pertinent points differ by at most $k^{-2}$), we may interpolate to find that (\ref{eqendprooffd}) holds for all $t<A$.
\subsection{\texorpdfstring{Proofs of Corollaries \protect\ref{cornkexc} and \protect\ref{corheight,fixed}} {Proofs of Corollaries 4.4 and 4.5}}
\mbox{}
\begin{pf*}{Proof of Corollary~\ref{cornkexc}} By Proposition~\ref{prfixeddrift}, the limit of the process $(\smash{\frac{1}{k} {\mathbf V}_{[k^2t]}^{k,n_k}})_{t \le\tau^{(n_k)}/k^2}$ must take the form\vspace*{2pt} $(X_{t})_{t\le\tau}$ for some possibly random time $\tau$, and, furthermore, $X_{\tau}=0$. We need to show that $\tau= \tau_{mx} = \inf \{ t \ge0\dvtx -Y_t = mx \}$.
In the special case of the tree ${\mathcal T}^k$ we note that the infimum process $\underline{{\mathcal U}}^k$ records the index of the last visited vertex along the backbone. Therefore, $\tau^{(n_k)}$ is the time at which ${\mathcal U}^k$ first reaches $-n_k$, and by assumption $n_k\sim xk$. Using the a.s. convergence of $\frac{1}{k} {\mathcal U}^k([k^2t])$ toward $Y_t$, along with the fact that for any fixed $x>0$, $\varepsilon>0$, one has a.s. $\underline{Y}_{\tau_x -\varepsilon}> -x > \underline{Y}_{\tau_x +\varepsilon }$, we deduce that a.s., $\tau^{(n_k)}/k^2 \to\tau_x$. It then follows that
\[ \biggl( \frac{1}{k} \mathcal{V}^{k}_{[k^2 t]}, t \le \bigl(\tau^{(n_k)}+1\bigr)/k^2 \biggr) \xrightarrow[k \to\infty]{\mathrm{a.s.}} ( X_t, t \le\tau_{x} ). \]
Since, in this case, $m_k = \sigma w_k \to m=1$, this implies the corollary for this special distribution.
The general case then follows as a consequence of excursion theory. Indeed, $(-\underline{Y}_{ t}, t \ge0)$ can be chosen to be the local time at its infimum of $Y$ (see, e.g.,~\cite{RogWill1994v2}, Paragraph VI.8.55), that is, a local time at $0$ of $X$, since excursions of $Y$ away from its infimum match those of $X$ away from $0$. However, if $N_t^{(\varepsilon)}$ denotes the number of excursions of $X$ away from $0$ that are completed before $t$ and reach level $\varepsilon$, then $(\lim_{\varepsilon\to0} \varepsilon N_t^{(\varepsilon)}, t\ge0)$ is also a local time at $0$ of $X$, which means that it has to be proportional to $(-\underline{Y}_{ t}, t \ge 0)$ (cf., e.g.,~\cite{Blumenthal1992}, Section III.3(c) and Theorem VI.2.1). In other words, there exists a constant $c>0$ such that for any $t \ge0$,
\[ \lim_{\varepsilon\to0} \varepsilon N_t^{(\varepsilon)} = -c \underline{Y}_{ t}. \]
In the special case when $\mathcal{Z}_{k,n} \sim \operatorname{Bin}(\sigma, w_k)$ we have already proven the corollary. In particular, the number $\mathcal{N}^{k,(\varepsilon)}$ of excursions of $(\frac{1}{k}{\mathcal V} ^{k}_{[k^2t]}, t \le\tau^{(n_k)}/k^2)$ which reach level $\varepsilon$ is such that, when letting $k \to\infty$ and then $\varepsilon\to0$, we have $ \varepsilon\mathcal {N}^{k,(\varepsilon)} \to c x $.
Let\vspace*{1pt} $N^{k,(\varepsilon)}$ be the number of excursions of $(\frac{1}{k} \smash{{\mathbf V}_{[k^2t]}^{k,n_k}}, t \le\tau^{(n_k)}/k^2)$ which reach level~$\varepsilon$. It follows from Proposition~\ref{prfixeddrift} that, in distribution, $N^{k,(\varepsilon)} \to N_{\tau}^{(\varepsilon)}$ as $k \to\infty$.
However, by assumption $\mathcal{A}$ we can use the law of large numbers for the sequences $(Z_{k,n})_{n \in{\mathbb N}}$ along with the fact that $m_k \to m$, to ensure that $ \varepsilon N^{k,(\varepsilon)} \underset{k \to \infty}{\sim} m \varepsilon \mathcal{N}^{k,(\varepsilon)}$. Therefore, letting first $k\to\infty$, then $\varepsilon\to0$, we find $ \varepsilon N^{k,(\varepsilon)} \to m c x $.
From the fact that $\tau^{(n_k)}$ are stopping times, we deduce that $\tau$ itself is a stopping time. Since $X_{\tau}=0$, for any $s>0$, the local time at $0$ of $X$ (i.e., $-\underline{Y}$) increases on the interval $(\tau, \tau+s)$.\vadjust{\goodbreak} It follows that for a certain real-valued random variable $R$, $\tau= \tau_R = \inf\{ t \ge0\dvtx -Y_t = R \}$, and we deduce that, in distribution, $R = mx$, that is, $\tau = \tau_{mx}$. \end{pf*}
\begin{pf*}{Proof of Corollary~\ref{corheight,fixed}} The relation between the height function and the Lukaciewicz path is well known; see, for example,~\cite{DuqLeG2002}, Theorem 2.3.1 and equation~(1.7). Combining with Proposition~\ref{prfixeddrift}, one finds that the height process of the \textit{sequence of trees} emerging from the backbone of ${\mathbf T}^k$ converges when rescaled to the process
\[ \frac{2}{\gamma} (Y_t - \underline{Y}_{ t}). \]
Moreover, the difference between the height process of ${\mathbf T}^k$ and that of the sequence of trees emerging from the backbone of ${\mathbf T}^k$ is simply $-\underline{U}^{k}$. As in the proof of Corollary~\ref{cornkexc}, one has weakly in ${\mathcal C}({\mathbb R}_+,{\mathbb R})$,
\[ \biggl(-\frac{1}{k} \underline{U}^k_{[k^2t]}\biggr)_{t\ge0} \xrightarrow{k\to\infty} \biggl(-\frac{1}{m} \underline{Y}_{ t}\biggr)_{t \ge0}, \]
and (\ref{eqheightfull}) follows. The proof of (\ref{eqheightnk}) is similar. \end{pf*}
In fact,~\cite{DuqLeG2002}, Corollary 2.5.1, states the joint convergence of Lukaciewicz paths, height and contour functions. It is thus easy to deduce a strengthening of Corollary~\ref{corheight,fixed} to get the joint convergence.
\subsection{Two-sided trees} \label{subtwotrees}
The limit appearing in Proposition~\ref{prfixeddrift} retains very minimal information about the sequence $Z_k$. If two trees (or two sides of a tree) are constructed as above using independent $\theta$'s but dependent sequences of $Z$'s, the dependence between two sequences might\vspace*{1pt} disappear in the scaling limit. For $k \in{\mathbb Z}_+$, let $w_k \in[0,1/\sigma]$, and denote by $(\theta_n^k)_{n \in{\mathbb Z}_+}$ and $(\widetilde{\theta}_n^k)_{n \in {\mathbb Z} _+}$ two \textit{independent} sequences of i.i.d. subcritical Galton--Watson trees with branching law $\operatorname{Bin}(\sigma, w_k)$. We let $Z_k$, $\widetilde{Z}_k$ be two sequences of random variables taking values in ${\mathbb Z}_+$ such that the pairs $(Z_{k,n}, \widetilde{Z}_{k,n})$ are independent for different $n$; however, we allow $Z_{k,n}$ and $\widetilde{Z}_{k,n}$ to be correlated.
Let\vspace*{1pt} ${\mathbf T}^k, \widetilde{{\mathbf T}}^k$ designate, respectively, the $(Z_k,\theta^k)$-tree, $(\widetilde{Z}_k, \widetilde{\theta }^k)$-tree as defined in Section~\ref{subnotationsegments}. Let\vspace*{1pt} ${\mathbf V}^k$, respectively, $\widetilde{{\mathbf V}}^k$, denote their Lukaciewicz paths. We recall that ${\mathbf T}^{k, n_k}$, $\widetilde{{\mathbf T}}^{k,n_k}$ are, respectively, the $n_k$-truncation of ${\mathbf T}^k$, respectively, $\widetilde{{\mathbf T}}^k$,\vspace*{1pt} and we denote by ${\mathbf V}^{k,n_k}, \widetilde{{\mathbf V}}^{k,n_k}$ their respective Lukaciewicz paths.
\begin{prop}\label{Pindsides} Suppose $w_k\leq\sigma^{-1}$ is such that $u = \lim_{k\to\infty} k(1-\sigma w_k)$ exists, and assume that both sequences of variables $Z_{k,n}, \widetilde{Z}_{k,n}$ satisfy assumption~(\ref{eqBBchildrenAssumps}). Then, as $k \to \infty$, weakly in ${\mathcal C}({\mathbb R}_+,{\mathbb R}^2)$
\[ k^{-1}\bigl({\mathbf V}_{[k^2 t]}^{k}, \widetilde{{\mathbf V}}_{[k^2t]}^k\bigr)_{t\ge0} \xrightarrow{k\to\infty} (X_t, \widetilde{X}_t)_{t\ge0}, \]
where the processes $X,\widetilde{X}$ are two \textit{independent} reflected Brownian motions with drift $-u$ and diffusion coefficient $\gamma$.
Moreover, if $n_k/k \to x>0$, $m_k \to m$, $\widetilde{m}_k \to \widetilde{m}$ as $k \to\infty$, we have
\[ k^{-1}\bigl({\mathbf V}_{[k^2 t]}^{k,n_k}, \widetilde{\mathbf V}_{[k^2 t]}^{k,n_k}\bigr)_{t \le\tau^{(n_k)}/k^2} \xrightarrow{k\to\infty} (X_t, \widetilde X_t)_{t\le\tau_{mx}}. \]
\end{prop}
The proof is almost identical to that of Proposition~\ref{prfixeddrift}. When the sequences $Z_k,\widetilde Z_k$ are independent with $\operatorname{Bin}(\sigma,w_k)$ elements the result follows from Proposition~\ref{prfixeddrift}. For general sequences, the coupling of Section~\ref{secfixeddriftproof} shows that the sides have the same joint scaling limit.
\subsection{Scaling the IIC} \label{secIIC}
At this point we are already in a position to prove the path convergence results for the IIC, equations (\ref {eLukaIICRlimit})--(\ref{eContourIIClimit}) from Theorem~\ref {TIICresults1}. As discussed in Section~\ref{subIICrecall}, the IIC is the result of setting $w_k = 1/\sigma$ in the above constructions. Specifically, let us first suppose that $Z$ is a sequence of i.i.d. $\operatorname{Bin}(\sigma-1, 1/\sigma)$ variables and $(\theta _n)_n$ is a sequence of i.i.d. $\operatorname{Bin}(\sigma, 1/\sigma)$ Galton--Watson trees. Let ${\mathbf T}$ be a $(Z,\theta)$-tree: then ${\mathbf T}$ has the same distribution as $\mbox{IIC}_{{\mathcal R}}$.
The convergence of the rescaled Lukaciewicz path encoding this sin-tree to a time-changed reflected Brownian path is thus a special case of Proposition~\ref{prfixeddrift}. The scaling limits of the height and contour functions follow from Corollary~\ref{corheight,fixed}. We have $m=\gamma$, so both limits are $\frac{2}{\gamma} B_{\gamma t} - \frac{3}{\gamma} \underline{B}_{\gamma t}$.
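For the reader's convenience, the arithmetic behind the last claim is as follows. Since $Z_n\sim\operatorname{Bin}(\sigma-1,1/\sigma)$,
\[ m = {\mathbb E} Z_n = \frac{\sigma-1}{\sigma} = \gamma, \]
and since $w_k\equiv1/\sigma$ gives $u=0$, the process $Y$ of Proposition~\ref{prfixeddrift} is the driftless motion $B_{\gamma t}$. Substituting into (\ref{eqheightfull}),
\[ \frac{2}{\gamma}(B_{\gamma t}-\underline B_{\gamma t}) - \frac{1}{\gamma}\underline B_{\gamma t} = \frac{2}{\gamma}B_{\gamma t} - \frac{3}{\gamma}\underline B_{\gamma t}. \]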
For the IIC with unconditioned backbone, let $Y_n$ be i.i.d. uniform in $\{1,\ldots,\sigma\}$. Let $Z_n\sim\operatorname{Bin}(Y_n-1,1/\sigma)$ and $\widetilde Z_n\sim \operatorname{Bin}(\sigma-Y_n,1/\sigma)$, conditionally independent given $Y_n$, and independent across $n$. Moreover, suppose that $\theta, \widetilde{\theta}$ are two independent sequences of i.i.d. $\operatorname{Bin} (\sigma, 1/\sigma)$ Galton--Watson trees. Then, ${\mathbf T}$ and $\widetilde{\mathbf T}$ are jointly distributed as $\mbox{IIC}_G$ and $\mbox{IIC}_D$.
Since in this case $m=\widetilde m=\gamma/2$, from Proposition~\ref{Pindsides} we see that the rescaled Lukaciewicz paths encoding these two trees converge toward a pair of independent time-changed reflected Brownian motions, and similarly for the right/left height and contour functions of the IIC.
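The value $\gamma/2$ comes from a one-line computation: since $Y_n$ is uniform on $\{1,\ldots,\sigma\}$, we have ${\mathbb E} Y_n=(\sigma+1)/2$, and hence
\[ {\mathbb E} Z_n = \frac{{\mathbb E} Y_n-1}{\sigma} = \frac{\sigma-1}{2\sigma} = \frac{\gamma}{2}, \qquad {\mathbb E}\widetilde Z_n = \frac{\sigma-{\mathbb E} Y_n}{\sigma} = \frac{\sigma-1}{2\sigma} = \frac{\gamma}{2}. \]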
The proofs of the remaining parts of Theorems~\ref{TIICresults1} and \ref{TIICresults2} are identical to the proofs for the IPC, which are given in the next two sections.
\section{Bottom-up construction} \label{secproofmain}
\subsection{Right grafting and concatenation}
\begin{defn} Given a finite plane tree, its \textit{rightmost leaf} is the maximal vertex in the lexicographic order; equivalently, it is the last vertex to be reached by the contour process, and is the rightmost leaf of the subtree above the rightmost child of the root. \end{defn}
\begin{defn} The \textit{right-grafting} of a plane tree $S$ on a finite plane tree~$T$, denoted $T\oplus S$, is the plane\vadjust{\goodbreak} tree resulting from identifying the root of $S$ with the rightmost leaf of $T$. More precisely, let $v$ be the rightmost leaf of $T$. The tree $T \oplus S$ is given by its set of vertices $\{u\dvtx u \in T \setminus\{v\} \mbox{ or } u=vw, w \in S\}$. \end{defn}
Note, in particular, that the vertices of $S$ have been relabeled in $T\oplus S$ through the mapping from $S$ to $T \oplus S$ which maps $w$ to $vw$.
\begin{defn} The \textit{concatenation} of two functions $V_1,V_2 \in{\mathcal S}$ with $V_2(0)=0$, denoted $V=V_1\oplus V_2$, is defined by
\[ V(t) = \cases{ V_1(t), &\quad $t\leq\zeta(V_1)$, \cr V_1(\zeta(V_1)) + V_2\bigl(t-\zeta(V_1)\bigr), &\quad $t\in [\zeta(V_1),\zeta(V_1)+\zeta(V_2)]$.} \]
\end{defn}
\begin{lemma}\label{Lconcatcommute} If each $Y_i\in{\mathcal S}$ attains its minimum at $\zeta(Y_i)$, then
\[ \bigoplus(Y_i - \underline Y_i) = \bigoplus Y_i - \underline {\bigoplus Y_i}. \]
\end{lemma}
The following is straightforward to check, and may be used as an alternate definition of right-grafting.
\begin{lemma}\label{Lgraft} Let $R=T\oplus S$ be finite plane trees, and denote the Lukaciewicz path of $R$ (resp., $T,S$) by $V_R$ (resp., $V_T,V_S$). Let $V'_T$ be $V_T$ terminated at $\#T$ (i.e., without the final value of $-1$). Then $V_R=V'_T\oplus V_S$. \end{lemma}
Consider a sin-tree $T$ in which the backbone is the rightmost path (i.e., the path through the rightmost child at each generation). Given some increasing sequence $\{x_i\}$ of vertices along the backbone, with $x_0$ the root, we cut the tree at these vertices: let
\[ \widetilde{T}_i:= \{v \in T\dvtx x_i \le v \le x_{i+1} \}. \]
Thus, $\widetilde{T}_i$ contains the segment of the backbone $[x_i,x_{i+1}]$ as well as all the subtrees connected to any vertex of this segment except $x_{i+1}$. We let $T_i$ be $\widetilde T_i$ rerooted at~$x_i$ (formally, $T_i$ contains all $v$ with $x_i v\in\widetilde T_i$). It is clear from the definitions that $T = \bigoplus_{i=0}^\infty T_i$. Note that apart from being increasing, the sequence $x_i$ is arbitrary.
\subsection{\texorpdfstring{IPC structure and the coupling: Proof of Theorem \protect\ref{Trightscale}} {IPC structure and the coupling: Proof of Theorem 1.2}} \label{subcoupling}
In this section we prove Theorem~\ref{Trightscale}.
Recall the $\widehat{W}$-process introduced in Section \ref {subrecall} and the convergence (\ref{eWtoL}). The $\widehat{W}$-process is constant for long stretches, giving rise to a partition of ${\mathcal R}$ into what we shall call segments. Each segment consists of an interval of the backbone along which $\widehat{W}$ is constant, together with all subtrees attached to the interval. To be precise, define $x_i$ inductively by $x_0=0$ and $x_{i+1} = \inf_{n>x_i} \{\widehat{W}_n>\widehat{W}_{x_i}\}$. With a slight abuse, we also let $x_i$ designate the vertex along the backbone at height $x_i$.\vadjust{\goodbreak}
The backbone is the union of the intervals $[x_i,x_{i+1}]$ for all $i\geq0$, and the rest of the IPC consists of subcritical percolation clusters attached to each vertex of the backbone $y\in[x_i,x_{i+1})$. We can now write
\[ {\mathcal R}= \bigoplus_{i=0}^\infty R_i, \]
where $R_i$ is the $[x_i,x_{i+1}]$ segment of ${\mathcal R}$, rerooted at $x_i$. $R_i$ has a rightmost branch of length $n_i:= x_{i+1}-x_i$. The degrees along this branch are i.i.d. $\operatorname{Bin}(\sigma-1, \widehat{W}_{x_i})$, and each child off the rightmost branch is the root of an independent Galton--Watson tree with branching law $\operatorname{Bin}(\sigma, \widehat{W}_{x_i})$. In what follows, we say that $R_i$ is a $\widehat{W}_{x_i}$-segment of length $n_i$, and we observe that these segments fall into the family dealt with in Section~\ref{secfixed}.
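To make explicit why these segments fit the framework of Section~\ref{secfixed}, note that the backbone degrees $Z_{k,n}\sim\operatorname{Bin}(\sigma-1,\widehat{W}_{x_i})$ are i.i.d., bounded by $\sigma-1$, and have ${\mathbb P}(Z_{k,n}>0)$ bounded below when $\sigma\widehat{W}_{x_i}$ is close to $1$; moreover, for the near-critical segments kept after the truncation below, $\sigma\widehat{W}_{x_i}\to1$ and hence
\[ m = \lim_k (\sigma-1)\widehat{W}_{x_i} = \frac{\sigma-1}{\sigma} = \gamma, \]
so Corollary~\ref{cornkexc} applies with $m=\gamma$; this is the source of the stopping level $\tau_{\gamma(j_{i+1}^\beta-j_i^\beta)}$ in (\ref{eSegmentConvergence}).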
We may summarize the above in the following lemma:
\begin{lemma} Suppose $\widehat{W}$ consists of values $U_i$ repeated $n_i$ times. Then $R_i$ is distributed as a $U_i$-segment of length $n_i$ and conditioned on $\{U_i,n_i\}$, the trees $\{R_i\}$ are independent. \end{lemma}
A difficulty we must deal with is that in the scaling limit there is no first segment, but rather a doubly infinite sequence of segments. Furthermore, the initial segments are far from critical, and so need to be dealt with separately. This is related to the fact that the Poisson lower envelope process $L(t)$ diverges near 0 and has no ``first segment.'' Because of this we restrict ourselves at first to a slightly truncated invasion percolation cluster. For any $\beta>0$ we define
\[ x_0^\beta= \min\{x\dvtx \sigma\widehat{W}_x > 1 - \beta/k\},\qquad x_{i+1}^\beta= \min\{\smash{x > x_i^\beta\dvtx \widehat{W}_x > \widehat {W}_{x_i^\beta}}\}. \]
Note that $x^\beta_0=x_{i_0}$ for some $i_0$ and that $x^\beta_i=x_{i_0+i}$ for the same $i_0$ and all~$i$.
Since we have convergence in distribution of the process $\widehat{W}$, we may couple the IPCs for different $k$'s so that the convergence holds a.s. (This means that the random tree ${\mathcal R}$ depends on $k$; we will leave this dependence implicit.) More precisely, let $(j_i^\beta)_{i\in{\mathbb Z}}$ be the sequence of jump times for $\{L(t)\}$, indexed such that $L(j_0^\beta )<\beta<L(j_{-1}^\beta)$ a.s. [We may do this since a.s. $\beta$ is not in the range of $L(t)$.] By the convergence (\ref{eWtoL}) and the Skorokhod representation\vspace*{1pt} theorem, we may assume that a.s. for any $t\notin J$ we have $k^{-1}(1-\sigma \widehat{W}^k_{[kt]})\xrightarrow{k\to\infty}L(t)$. Indeed, we will assume further that $k^{-1} x_i^\beta\to j_i^\beta$ a.s. for each $i$. This slightly stronger statement follows from (\ref{eRateAsymps}), which shows that $(k(1-\sigma\widehat W_{[kt]}))$ and $L(t)$ have asymptotically the same total jump rate. In other words, there are no ``small'' jumps of $\widehat W$ that disappear in the scaling limit $L(t)$.
Denote by $V^\beta_i$ (implicitly depending on $k$) the Lukaciewicz path corresponding to the $i$th segment $R^\beta_i$ in ${\mathcal R}^\beta$. For any $\beta,i$, the $i$th segment has associated percolation parameter $w_i^\beta$ satisfying $k(1-\sigma w_i^\beta)\xrightarrow{k\to\infty} L(j_i^\beta)$ and length $n_i^\beta$ satisfying $k^{-1}n^\beta_i\rightarrow j_{i+1}^\beta-j_i^\beta$. By Corollary~\ref{cornkexc}, we have the convergence in distribution
\begin{equation} \label{eSegmentConvergence} \bigl( k^{-1} V^\beta_i(k^2 t), 0\le t \le\tau^{(n_i^\beta)} \bigr) \xrightarrow{k\to\infty}\bigl( X_t, 0\le t \le\tau_{\gamma (j_{i+1}^\beta-j_i^\beta)}\bigr), \end{equation}
where $X_t = Y_t-\underline Y_{ t}$, and $Y_t$ solves
\[ d Y_t = \sqrt{\gamma} \,dB_t - L(j_i^\beta) \,dt. \]
As in the previous section, $\tau^{(n_i^\beta)}$ denotes the lifetime of $V^\beta_i$ [i.e., its $(n_i^\beta)$th return to $0$] and $\tau_{y}$ is the hitting time of $-y$ by $Y$.
Because the convergence in (\ref{eSegmentConvergence}) holds for all $\beta,i\in{\mathbb N}$, we may construct the coupling of the probability spaces so that the convergence is also almost sure, and this is the final constraint in our coupling.
\begin{lemma} \label{Lbeta} Fix $\beta>0$. In the coupling described above we have, almost surely, the scaling limit
\[ k^{-1}{\mathcal V}^\beta(k^2 t) \xrightarrow{k\to\infty} X_t, \]
where $X_t = {\mathcal Y}^\beta_t-\underline{\mathcal Y}^\beta_{ t}$, and ${\mathcal Y}^\beta$ solves
\begin{equation} \label{eYbetaEquation} {\mathcal Y}^\beta_t = \sqrt{\gamma} B_t - \int_0^t L\biggl(j^\beta_0 - \frac{1}{\gamma} \underline{\mathcal Y}^{\beta}_{ s}\biggr) \,ds. \end{equation}
\end{lemma}
\begin{pf} Solutions of the equation for ${\mathcal Y}^\beta$ are a concatenation of segments. In each segment the drift is fixed, and each segment terminates when $\underline{\mathcal Y}^\beta$ reaches a certain threshold. The corresponding segments of $X$ exactly correspond to the scaling limit of the tree segments $R^\beta_i$.
Lemma~\ref{Lbeta} then follows from Lemmas~\ref{Lconcatcommute} and \ref{Lgraft}. \end{pf}
\begin{lemma} \label{Lbetatoinfty} Almost surely,
\[ ({\mathcal Y}^\beta_t, t>0) \xrightarrow{\beta\to\infty} {\mathcal Y}_t, \]
where ${\mathcal Y}$ solves
\[ {\mathcal Y}_t = \sqrt{\gamma} B_t - \int_0^t L\biggl(-\frac{1}{\gamma }\underline {\mathcal Y}_{ s}\biggr) \,ds. \]
\end{lemma}
\begin{pf} Consider the difference between the solutions for a pair $\beta<\beta'$. We have the relation
\[ {\mathcal Y}^{\beta'} = Z \oplus{\mathcal Y}^\beta,\vadjust{\goodbreak} \]
where $Z$\vspace*{-2pt} is a solution of $ Z_t = \sqrt{\gamma} B_t - \int_0^t L(j_0^{\beta'} -\frac{1}{\gamma}\underline Z_{ s})\,ds $, killed when $\underline Z$ first reaches $\gamma(j^{\beta'}_0 - j^\beta_0)$. In particular, $Z$ is a stochastic process with drift in $[-\beta',-\beta]$ (and quadratic variation $\gamma$). Thus, to show that ${\mathcal Y}^\beta$ is close to ${\mathcal Y}^{\beta'}$, we need to show that $Z$ is small both horizontally and vertically, that is, $\zeta(Z)$ is small, as is $\Vert Z\Vert_\infty$.
The vertical translation of ${\mathcal Y}^\beta$ is $\sqrt{\gamma} k^{-1}(x^\beta_0-x^{\beta'}_0)$, which is at most $k^{-1}x^\beta_0$. From \cite{AGdHS2008} we know that this tends to 0 in probability as $\beta\to\infty$. This convergence is a.s. since $x^\beta_0$ is nonincreasing in $\beta$.
The values of $Z$ are unlikely to be large, since $Z$ has a nonpositive (in fact, negative) drift and is killed when $\underline Z$ reaches some negative level close to 0.
Finally, there is a horizontal translation of ${\mathcal Y}^\beta$ in the concatenation. This translation is just the time at which $\underline Z$ first reaches $\gamma(j^{\beta'}_0-j^\beta_0)$, which is also small, uniformly in $\beta'$. \end{pf}
Theorem~\ref{Trightscale}(\ref{equ1}) is now a simple consequence of Lemmas~\ref{Lbeta} and~\ref{Lbetatoinfty}. Indeed, the process ${\mathcal Y}-\underline{{\mathcal Y}}$ has the same law as the right-hand side of (\ref{equ1}), due to the scale invariance of solutions of (\ref{eqSDE}). We note that, in fact, ${\mathcal Y}$ is the limit of the rescaled Lukaciewicz path coding the sequence of off-backbone trees.
The same argument using Corollary~\ref{corheight,fixed} instead of Corollary~\ref{cornkexc} gives the convergence of the height function.
Finally, convergence of contour functions is deduced from that of height functions by a routine argument (see, e.g.,~\cite{LeGall2005}, Section 1.6).
\subsection{\texorpdfstring{The two-sided tree: Proof of Theorem \protect\ref{Ttwosidedscale}} {The two-sided tree: Proof of Theorem 1.3}} \label{subtwo-sided}
For convenience we use the shorter notation ${\mathcal T}$ to designate the IPC, and we recall the left and right trees ${\mathcal T}_G$ and ${\mathcal T}_D$ as introduced in Section~\ref{subenc2}. The two trees ${\mathcal T}_G$ and ${\mathcal T}_D$ obviously have the same distribution, but are not independent. As in the previous section, we may cut these two trees into segments along which the $\widehat{W}$-process is constant. More precisely,
\[ {\mathcal T}_G = \bigoplus_{i=0}^{\infty} T_G^i, \qquad {\mathcal T}_D = \bigoplus_{i=0}^{\infty} T_D^i, \]
where the distribution of $T_D^i, T_G^i$ can be made precise as follows.
Let\vspace*{1pt} $(\theta_n^i)_{n}, (\widetilde\theta_n^i)_{n}$ be sequences of Galton--Watson trees with branching law $\operatorname{Bin}(\sigma, \widehat{W}_{x_i})$, all independent. Let $Y_n, n \in{\mathbb Z}_+$ be independent uniform on $\{1,\ldots,\sigma\}$, and, conditionally on $Y_n$, let $Z_n$ be $\operatorname{Bin}(Y_n-1, \widehat{W}_{x_i})$ and $\widetilde Z_n$ be $\operatorname{Bin}(\sigma -Y_n,\widehat {W}_{x_i})$, where conditioned on the $Y$'s all are independent. Then $T_G^i$ and $T_D^i$ are distributed as the $n_i$-truncations of the $(Z,\theta^i)$-tree, respectively, of the $(\widetilde{Z},\widetilde {\theta}^i)$-tree (constructed as in Definition~\ref{defZktrees}).
The rest of the proof of Theorem~\ref{Ttwosidedscale} is then almost identical to that of Theorem~\ref{Trightscale}, using Proposition~\ref{Pindsides} instead of Proposition~\ref{prfixeddrift}. Note, however, that the expected number of children of a vertex on the backbone of ${\mathcal T}_G$ or ${\mathcal T}_D$ [i.e., ${\mathbb E} (Z_n)$ or ${\mathbb E}(\widetilde Z_n)$] is divided by $2$ compared to the conditioned case. As a consequence, the limits of the rescaled coding paths of ${\mathcal T}_G^{\beta}, {\mathcal T}_R^{\beta}$ will be expressed in terms of solutions to the equation
\begin{equation} {\mathcal Y}_t^{\beta} = \sqrt{\gamma} B_t - \int_0^t L\biggl( j_0^{\beta} -\frac{2}{\gamma} \underline{{\mathcal Y}}^{\beta}_{ s}\biggr) \,ds \end{equation}
instead of the equation (\ref{eYbetaEquation}) from Lemma~\ref{Lbeta}. Further details are left to the reader.
\subsection{\texorpdfstring{Convergence of trees: Proof of Theorem \protect\ref{Tlimexist}} {Convergence of trees: Proof of Theorem 1.1}} \label{subtrees}
In this section we prove weak convergence of the trees as metric spaces. We refer to~\cite{LeGall2005} for background on the theory of continuous real trees.
\begin{pf*}{Proof of Theorem~\ref{Tlimexist}} To prove convergence in the pointed Gromov--Hausdorff topology, it suffices to prove that the ball of radius $R$ in the rescaled metric converges in the ordinary Gromov--Hausdorff sense (note that these balls are all compact a.s.). To simplify the argument, we will consider ${\mathcal R} $, the IPC conditioned to have its backbone on the right, which does not affect the metric structure.
For compact real trees $T_g,T'_g$ coded by compactly supported contour functions $g,g'$, the inequality
\begin{equation}\label{GHcontourbound} d_{\mathrm{G\mbox{-}H}}(T_g,T'_g)\leq2\Vert g-g'\Vert_\infty \end{equation}
relates convergence of contour functions to convergence of metric spaces (see, e.g.,~\cite{LeGall2005}, Lemma 2.4). Therefore, fix $R>0$ and write
\[ g_k(t)=k^{-1} C_{{\mathcal R}}(2k^2 t),\qquad T_{k,R}=\sup\{t\dvtx g_k(t)\leq R\}. \]
By Theorem~\ref{Trightscale}, $g_k$ converges in distribution as $k\to \infty$.
\begin{claim}\label{clTkRconv} $T_{k,R}$ also converges in distribution. \end{claim}
Assuming this for the moment, the function defined by
\[ g_{k,R}(t)=
\cases{ g_k(t) \wedge R, &\quad if $t\leq T_{k,R}$,\cr R+T_{k,R}-t, &\quad if $T_{k,R}<t\leq R+T_{k,R}$,\cr 0, &\quad if $t>T_{k,R}+R$,}
\]
is continuous, has compact support, and converges in distribution as $k\to\infty$. But $g_{k,R}$ is a contour function coding the part of ${\mathcal R}$ within rescaled distance $R$ of the root. By (\ref {GHcontourbound}) this completes the proof subject to Claim~\ref{clTkRconv}. \end{pf*}
\begin{pf*}{Proof of Claim~\ref{clTkRconv}} $T_{k,R}$ is determined by $g_k(t)$, but we have convergence of $g_k(t)$ only for $t$ in compact subsets of ${\mathbb R}_+$. Therefore, it suffices to show that $T_{k,R}$ is tight.\vadjust{\goodbreak}
Fix $t>0$ and note that ${\mathbb P}(T_{k,R}>t)$ is the probability that the tree ${\mathcal R}$ has more than $k^2 t$ descendants of backbone vertices at heights at most $k R$. We will bound this by replacing ${\mathcal R}$ by a stochastically larger tree ${\mathcal T}$, namely, the tree ${\mathcal T}^k$ from Section~\ref{secfixeddriftproof} with $w_k=p_c$ for each $k$. Write ${\mathcal U}$ for the Lukaciewicz path for the corresponding sequence of off-backbone paths, so that $-\underline{{\mathcal U}}([k^2 t])$ is the height of the backbone vertex from which the $[k^2 t]$th vertex is descended. Thus, ${\mathbb P}(T_{k,R}>t)\leq{\mathbb P}(-\underline{{\mathcal U}}(k^2 t)\leq k R)$. But $-\frac{1}{k}\underline{{\mathcal U}}(k^2 t)\to -\underline{B}_{\gamma t}$, where $B_t$ is a Brownian motion. Tightness follows since $-\underline{B}_{\gamma t}\nearrow\infty$ as $t\to\infty$. \end{pf*}
\section{\texorpdfstring{Level sizes and volumes: Proofs of Theorems \protect\ref{Tlevels-volume} and \protect\ref{Tlevels}} {Level sizes and volumes: Proofs of Theorems 1.4 and 1.5}}\label{seclevels}
\mbox{}
\begin{pf*}{Proof of Theorem~\ref{Tlevels-volume}} We first prove (\ref{eqvolume}). We begin by observing that
\[ \frac{1}{n^2} C[0,an] = \int_0^{\infty} \mathbf{1}_{[0,a]} \biggl(\frac{1}{n}H_{{\mathcal R}}(sn^2) \biggr)\,ds. \]
Our objective is the limit in distribution
\[ \int_0^{\infty} \mathbf{1}_{[0,a]} \biggl( \frac{1}{n} H_{{\mathcal R} } (sn^2)\biggr) \,ds \xrightarrow{n\to\infty} \int_0^{\infty} \mathbf{1}_{[0,a]} (H_s) \,ds. \]
This almost follows from Theorem~\ref{Trightscale}. The problem is that $\int\mathbf{1}_{[0,a]} (X_s) \,ds$ is not a continuous function of the process $X$, and this is for two reasons. First, because of the indicator function, and second, because the topology is uniform convergence on compacts and not on all of ${\mathbb R}$.
To overcome the second obstacle, we argue that for any $\varepsilon$ there is an $A$ such that
\[ {\mathbb P}\biggl(\int_A^\infty\mathbf{1}_{[0,a]} \biggl(\frac{1}{n} H_{{\mathcal R} } (sn^2)\biggr) \,ds \neq0\biggr) <\varepsilon. \]
Indeed, in order for the height function to visit $[0,na]$ after time $n^2A$, the total size of the $[na]$ subcritical trees attached to the backbone up to height $[na]$ must be at least $[n^2A]$. This probability is small for $A$ sufficiently large, even if the trees are replaced by $[na]$ critical trees. Thus, it suffices to prove that for every $A$
\begin{equation}\label{eqfinintconv} \int_0^A \mathbf{1}_{[0,a]} \biggl(\frac{1}{n} H_{{\mathcal R}} (sn^2 )\biggr) \,ds \xrightarrow{n\to\infty}^{\mathrm{dist}.} \int_0^A \mathbf{1}_{[0,a]} (H_s) \,ds. \end{equation}
Next we deal with the discontinuity of $\mathbf{1}_{[0,a]}$ by a standard argument. We may bound $f_\varepsilon\leq\mathbf{1}_{[0,a]} \leq g_\varepsilon$, where $f_\varepsilon,g_\varepsilon$ are continuous and coincide with $\mathbf{1}_{[0,a]}$ outside of $[a-\varepsilon,a+\varepsilon]$. Define the operators
\[ F_\varepsilon(X) = \int_0^A f_\varepsilon(X_s) \,ds,\qquad G_\varepsilon(X) = \int_0^A g_\varepsilon(X_s) \,ds. \]
Then we have a sandwich
\[ F_\varepsilon\biggl(\frac{1}{n} H_{{\mathcal R}}(sn^2)\biggr) \leq\int_0^A \mathbf{1}_{[0,a]} \biggl(\frac{1}{n} H_{{\mathcal R}} (sn^2)\biggr) \,ds \leq G_\varepsilon\biggl(\frac{1}{n} H_{{\mathcal R}}( sn^2 ) \biggr), \]
and similarly for $H_s$. By continuity of the operators,
\[ F_\varepsilon\biggl(\frac{1}{n} H_{{\mathcal R}}(sn^2)\biggr) \xrightarrow {n\to\infty}^{\mathrm{dist}.} F_\varepsilon(H_s),\qquad G_\varepsilon\biggl(\frac{1}{n} H_{{\mathcal R}}(sn^2)\biggr) \xrightarrow {n\to\infty}^{\mathrm{dist}.} G_\varepsilon(H_s). \]
In the limit we have
\[ G_\varepsilon(H_s)-F_\varepsilon(H_s) \xrightarrow{\varepsilon\to0}^{\mathrm{a.s.}} 0 \]
and since $G_\varepsilon-F_\varepsilon$ is continuous, we also have for any $\delta>0$
\[ \lim_{\varepsilon\to0} \lim_{n\to\infty} {\mathbb P}\biggl( G_\varepsilon\biggl(\frac{1}{n} H_{{\mathcal R}}(sn^2)\biggr) -F_\varepsilon \biggl(\frac{1}{n} H_{{\mathcal R}}(sn^2)\biggr) > \delta \biggr) = 0. \]
Combining these bounds implies (\ref{eqfinintconv}), and thus (\ref {eqvolume}).
We now turn to the proof of (\ref{eqlevels}). From (\ref {eqvolume}), we know that for any $\eta>0$,
\[ \frac{1}{\eta n^2} C[an, (a+\eta)n] \xrightarrow{n\to\infty}^{\mathrm{dist}.} \frac{1}{\eta} \int_0^{\infty} \mathbf{1}_{[a,a+\eta]}(H_s)\,ds. \]
Thus, (\ref{eqlevels}) will follow if we can prove that for any $\eta>0$, we have the following limit in probability as $n\to\infty$:
\begin{equation}\label{eqetaapprox} \biggl\vert\frac{\eta n C[an]-C[an,(a+\eta)n]} {\eta n^2} \biggr\vert \stackrel {{\mathbb P}}{\to} 0. \end{equation}
For a given vertex $v$, let $h_v$ denote the height of $v$. If $v$ is not on the backbone, we let $\mbox{perc}(v)$ be the percolation parameter of the off-backbone percolation cluster to which $v$ belongs. We now single out the vertex on the backbone at height $[an]$ and group together vertices at height $[an]$ which correspond to the same percolation parameter.
More precisely, if $\widehat w_1, \widehat w_2, \widehat w_3,\ldots, \widehat w_{N_n}$ are the distinct values taken by the $\widehat W$-process up to time $[na]$, we let
\[ C_n^{(w_i)}:= \{ v \in\mbox{IPC}\setminus\mbox{BB}\dvtx h_v =[an], \mbox {perc}(v) = \widehat w_i \}, \]
so that
\[ \mathfrak{C}[an]:= \{ v \in\mbox{IPC}\dvtx h_v = [an] \} = \bigcup_{i=1}^{N_n} C^{(\widehat w_i)} \cup\mbox{BB}_{[an]},\qquad C[an] = \# \mathfrak{C}[an]. \]
Moreover, any vertex between heights $[an]$ and $[(a+\eta)n]$ in the IPC descends from one of the vertices of $\mathfrak{C}[an]$. We let
\begin{eqnarray*} {\mathcal P}_n^{(\widehat w_i)} &:=& \bigl\{v \in\mbox{IPC}\setminus\mbox{BB}\dvtx [an] \le h_v \le(a+\eta)n, \exists w \in C^{(\widehat w_i)} \mbox{ s.t. } w \le v \bigr\},\\ {\mathcal P}_n^{\mathrm{BB}_{[an]}} &:=& \bigl\{ v \in\mbox{IPC}\dvtx [an] \le h_v \le (a+\eta)n, \mbox{BB}_{[an]} \le v \bigr\}. \end{eqnarray*}
In particular, $C_n^{(w_i)} \subset{\mathcal P}_n^{(w_i)}$ and vertices of the backbone between heights $[an]$ and $[(a+\eta)n]$ are contained in ${\mathcal P}_n^{\mathrm{BB}_{[an]}}$. Moreover,
\[ \mathfrak{C}[an,(a+\eta)n]:= \{ v \in\mbox{IPC}\dvtx [an] \le h_v \le (a+\eta)n \} = {\mathcal P}_n^{\mathrm{BB}_{[an]}} \cup\bigcup_{i=1}^{N_n} {\mathcal P}_n^{(w_i)}. \]
However, the number of distinct values of percolation parameters which one sees at height $[an]$ remains bounded with arbitrarily high probability.
\begin{claim} \label{clfiniteW} For any $\epsilon> 0$, there is $A>0$ such that, for any $n \in {\mathbb N}$,
\[ {\mathbb P}\bigl[ \# \bigl\{ i \in\{1,\ldots,N_n\}\dvtx \bigl\vert C_n^{(w_i)}\bigr\vert \neq0 \bigr\} >A \bigr] \le\epsilon. \]
\end{claim}
From~\cite{AGdHS2008}, Proposition 3.1, the number of distinct values the $\widehat{W}$-process takes between $[na]/2$ and $[na]$ is bounded, uniformly in $n$, with arbitrarily high probability. Furthermore, it is well known that with arbitrarily high probability, among $[na]/2$ critical Galton--Watson trees, the number which reach height $[na]/2$ is bounded, uniformly in $n$. It follows that the number of clusters rising from the backbone at heights $\{0,\ldots,[na]/2\}$ and which possess vertices at height $[na]$ is, with arbitrarily high probability, also bounded for all $n$. The claim follows.
\begin{claim} \label{clBBprogeny} For any $\eta>0$, in probability,
\[ \lim_{n \to\infty} \biggl\vert\frac{1}{\eta n^2} {\mathcal P}_n^{\mathrm{BB}_{[an]}} \biggr\vert =0. \]
\end{claim}
Fix $\eta$. We observe that ${\mathcal P}_n^{\mathrm{BB}_{[an]}}$ is bounded by the total progeny up to height $\eta n$ of $\eta n$ critical Galton--Watson trees. If $\vert B\vert$ denotes a reflected Brownian motion and $l_t^0(\vert B\vert)$ its local time at $0$ up to $t$, we then deduce from a convergence result for a sequence of such trees (cf. formula (7) of~\cite{LeGall2005}) that for any $\epsilon>0$,
\[ \limsup_{n \to\infty} {\mathbb P}\biggl[ \frac{1}{\eta n^2} {\mathcal P}_n^{\mathrm{BB}_{[an]}} > \epsilon\biggr] \le{\mathbb P}\biggl[ \frac{1}{\eta} \inf\{t>0\dvtx l_t^0(\vert B\vert)> \eta\} > \epsilon \biggr], \]
and the claim follows from the fact that $( \inf\{ t>0\dvtx l_t^0(\vert B\vert) > u \}, u \ge0 )$ is a stable subordinator of index $1/2$.
\begin{claim} \label{clpieces} For any $t\in(0,a)$, $\eta>0$, in probability,
\[ \lim_{n \to\infty} \biggl\vert\frac{ {\mathcal P}_n^{(\widehat {W}_{[nt]})}}{\eta n^2} - \frac{\#(C_n^{(\widehat{W}_{[nt]})})}{n} \biggr\vert =0. \]
\end{claim}
Fix $t,\eta$, and define $w_n:= \widehat{W}_{[nt]}$. We have
\begin{eqnarray*} && {\mathbb P}\biggl[ \biggl\vert\frac{ {\mathcal P}_n^{(w_n)}}{\eta n^2} - \frac{\# (C_n^{(w_n)})}{n} \biggr\vert > \epsilon\biggr] \\ &&\qquad \le {\mathbb P}\bigl[ \#\bigl(C_n^{(w_n)}\bigr) > n \epsilon^{-2} \bigr] + {\mathbb P} \biggl[ \biggl\vert\frac{ {\mathcal P}_n^{(w_n)}}{\eta n^2} - \frac{\# (C_n^{(w_n)})}{n} \biggr\vert > \epsilon, \#\bigl(C_n^{(w_n)}\bigr) < \epsilon^{2} n \biggr] \\ &&\qquad\quad{} + \sum_{k=[\epsilon^2 n]}^{[\epsilon^{-2} n]} {\mathbb P}\bigl(\#\bigl(C_n^{(w_n)}\bigr)=k\bigr) {\mathbb P}\biggl[ \biggl\vert\frac{ {\mathcal P} _n^{(w_n)}}{\eta n^2} - \frac{\#(C_n^{(w_n)})}{n} \biggr\vert > \epsilon \Big\vert \#\bigl(C_n^{(w_n)}\bigr)=k\biggr]. \end{eqnarray*}
Using a comparison to critical trees as in the previous argument, the first two terms in the sum above go to $0$ as $n \to\infty$. Furthermore, from~\cite{DuqLeG2002}, Corollary 2.5.1, we know that, conditionally on the processes $\widehat{W}$, $L$, for any $u>0$, the level sets of $[un]$ subcritical Galton--Watson trees with branching law $\operatorname{Bin}(\sigma, w_n)$ converge to the local time process of a reflected drifted Brownian motion $(\vert X_s\vert, s\ge0)$, with drift $L(t)$, stopped at $\tau_u$. Therefore, for any $u>0$,
\begin{eqnarray*} && \lim_{n \to\infty} {\mathbb P}\biggl[ \biggl\vert\frac{ {\mathcal P}_n^{(w_n)}}{\eta n^2} - \frac{\#(C_n^{(w_n)})}{n} \biggr\vert > \epsilon \Big\vert \#\bigl(C_n^{(w_n)}\bigr)= [nu] \biggr] \\ &&\qquad = {\mathbb P}\biggl[ \biggl\vert\frac{1}{\eta} \int_{0}^{\tau_u} \mathbf {1}_{[0,\eta]}(\vert X_s\vert) \,ds - l_t^0(\vert X\vert) \biggr\vert > \epsilon \biggr], \end{eqnarray*}
which for any $\epsilon>0$ goes to $0$ as $\eta\to 0$. Thus, by dominated convergence,
\begin{eqnarray*} &&\lim_{\eta\to0} \limsup_{n \to\infty} \sum_{k=[\epsilon^2 n]}^{[\epsilon^{-2} n]} {\mathbb P}\bigl(\#\bigl(C^{(w_n)}\bigr)=k\bigr) \\ &&\hspace*{57.6pt}\qquad{}\times {\mathbb P}\biggl[ \biggl\vert\frac{ {\mathcal P}_n^{(w_n)}}{\eta n^2} - \frac{\# (C^{(w_n)})}{n} \biggr\vert > \epsilon \Big\vert \#\bigl(C^{(w_n)}\bigr)=k\biggr] =0. \end{eqnarray*}
Claim~\ref{clpieces} follows.
From our decompositions of $\mathfrak{C}[an,(a+\eta)n], \mathfrak{C}[an]$, and Claims~\ref{clfiniteW},~\ref{clBBprogeny} and~\ref{clpieces}, we now deduce (\ref{eqetaapprox}). This implies (\ref{eqlevels}) and completes the proof of Theorem~\ref{Tlevels-volume}. \end{pf*}
\begin{pf*}{Proof of Theorem~\ref{Tlevels}} The basis of the proof is to express the limiting quantity in (\ref {eqlevels}) as a sum of independent contributions corresponding to distinct excursions of $Y-\underline{Y}$. Conditionally on the $L$-process, these contributions will be independent exponential random variables, with parameters arising from certain excursion measures.
From (\ref{eqlevels}), the theorem will be proved if we manage to express $\frac{\gamma}{4} l_{\infty}^a(H)$ as the right-hand side of (\ref{eqsumofexp}). Note that, if $l_t^x(\frac{\sqrt{\gamma }}{2}H)$ denotes the local time up to time $t$ at level $x$ of
\[ \frac{\sqrt{\gamma}}{2} H = Y_t - \frac{3}{2} \underline{Y}_t, \]
then
\[ \frac{\gamma}{4} l_{t}^a(H) = \frac{\sqrt{\gamma}}{2} l_t^{ {\sqrt {\gamma}a}/{2} }\biggl( \frac{\sqrt{\gamma}}{2} H \biggr), \]
so that we may as well express $\frac{\sqrt{\gamma}}{2} l_t^{ {\sqrt {\gamma}a}/{2} }( \frac{\sqrt{\gamma}}{2} H )$.
To reach this goal, it is convenient to decompose the path of $ \frac{\sqrt{\gamma}}{2} H $ according to the excursions above the origin of $Y-\underline{Y}$. Let us introduce some notation. We let ${\mathcal F}({\mathbb R}_+,{\mathbb R})$ denote the space of real-valued finite paths, so that excursions of $Y$ and of $Y- \underline{Y}$ are elements of ${\mathcal F}({\mathbb R}_+,{\mathbb R})$. For a path $e \in {\mathcal F}({\mathbb R}_+,{\mathbb R})$, we define $\overline{e}:= \sup_{s \ge0} e(s)$, $\underline{e}:= \inf_{s \ge0} e(s)$. For $c \ge0$, we let $N^{(-c)}$ denote the excursion measure of drifted Brownian motion with drift $-c$ away from the origin, and $n^{(-c)}$ that of reflected drifted Brownian motion with drift $-c$ above the origin (see, e.g.,~\cite{RogWill1994v2}, Chapter VI.8).
\begin{lemma} \label{LncNc} For any $c>0$, $a>0$, we have
\begin{eqnarray} \label{eqnc} n^{(-c)}(\overline{e} >a ) &=& \frac{2c}{\exp(2ca)-1}, \\ \label{eqNc} N^{(-c)}(\underline{e} < -a) &=& \frac{c}{1-\exp(-2ca)}. \end{eqnarray}
For $c=0$ we have $n^{(0)}(\overline{e}>a)=a^{-1}$, $N^{(0)}(\underline {e}<-a)=(2a)^{-1}$. \end{lemma}
This result is well known and can be proven by using basic properties of drifted Brownian motion and excursion measures.
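For completeness, one bookkeeping route to these formulas (a sketch, assuming the local-time normalization used above; the scale functions below are the standard ones for Brownian motion with drift $-c$, normalized to vanish at the origin with unit derivative there):

```latex
% Scale functions for Brownian motion with drift -c, with s(0)=0, s'(0)=1:
\[ s(x) = \frac{e^{2cx}-1}{2c} \quad\mbox{(upward)},\qquad
   \hat s(x) = \frac{1-e^{-2cx}}{2c} \quad\mbox{(downward)}. \]
% Under the reflected excursion measure, excursions above the origin
% exceed level a at rate 1/s(a), which is (\ref{eqnc}):
\[ n^{(-c)}(\overline{e}>a) = \frac{1}{s(a)} = \frac{2c}{e^{2ca}-1}. \]
% For the two-sided measure, only the downward excursions can reach -a,
% and near the origin they carry half the mass, which is (\ref{eqNc}):
\[ N^{(-c)}(\underline{e}<-a) = \frac{1}{2\hat s(a)} = \frac{c}{1-e^{-2ca}}. \]
% Letting c -> 0 recovers n^{(0)}(\overline{e}>a)=a^{-1} and
% N^{(0)}(\underline{e}<-a)=(2a)^{-1}.
```

The $c\to0$ limits agree with the $c=0$ statements of the lemma, which is a quick consistency check on the normalization.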
We are now going to determine the excursions of $Y-\underline{Y}$ which give a nonzero contribution to $\frac{\gamma}{4}l_{\infty}^a(H)$. We may and will choose $-\underline{Y}$ to be the local time process at $0$ of $Y-\underline{Y}$. Using excursion theory (see, e.g.,~\cite{RogWill1994v2}, Section~VI.8.55), we know that for this normalization of local time, conditionally on the $L$-process, the excursions of $Y-\underline{Y}$ form an inhomogeneous Poisson point process $\mathfrak{P}$ in the space ${\mathbb R}_+ \times {\mathcal F}({\mathbb R}_+,{\mathbb R}_+)$ with intensity $ ds \times n^{(-L(s))}$.
For $b \ge0$, let $\tau_{b}$ denote the hitting time of $b$ by $-Y$. Note that for any $s>\tau_b$, $-\underline{Y}_s > b$, from the fact that drifted Brownian motion started at $0$ instantaneously visits the negative half line. We therefore observe that the last visit to $\frac{\sqrt {\gamma}}{2} a$ by $\frac{\sqrt{\gamma}}{2} H $ is at time $\tau_{a\sqrt{\gamma}}$. Hence, any point of $\mathfrak{P}$ whose first coordinate is larger than $a\sqrt{\gamma}$ corresponds to a part of the path of $H$ which lies strictly above $a$, and therefore cannot contribute to $l_{\infty}^a(H)$. Moreover, a part of the path of $\frac{\sqrt{\gamma}}{2} H$ which corresponds to an excursion of $Y-\underline{Y}$ starting at a time $s < \tau_{a\sqrt{\gamma}}$ will only reach height $\frac{\sqrt{\gamma}}{2} a$ whenever the supremum of this excursion is greater than or equal to $\frac{1}{2}( a\sqrt{\gamma} - \underline{Y}_s)$. Therefore, any excursion of $Y-\underline{Y}$ which gives a nonzero contribution to $l_{\infty}^a(H)$ corresponds to a point of $\mathfrak{P}$ whose first coordinate is some $s$ such that $s \le a\sqrt{\gamma}$ and whose second coordinate is an excursion $e$ such that $\overline{e} \ge \frac{1}{2}(a\sqrt{\gamma} - s)$.
These considerations, along with properties of Poisson point processes, lead to the following claim.\vspace*{-2pt}
\begin{claim} \label{clcontributing} Conditionally on the $L$-process, the excursions of $Y-\underline{Y}$ which give a nonzero\vspace*{1pt} contribution to $\frac{\gamma}{4}l_{\infty}^a(H) = \frac{\sqrt{\gamma}}{2} l_{\infty}^{{\sqrt{\gamma}a}/{2}}( \frac{\sqrt{\gamma}}{2} H )$ are points of a Poisson point process ${\mathcal P}\subset\mathfrak{P}$ on ${\mathbb R}_+ \times {\mathcal F}({\mathbb R}_+,{\mathbb R}_+)$ with intensity
\[ \mathbf{1}_{[0,a\sqrt{\gamma}]}(s) \mathbf{1} \bigl(\overline {e}\geq\tfrac{1}{2}\bigl(a\sqrt{\gamma}-s\bigr)\bigr)\,ds\times n^{(-L(s))}(\cdot).\vspace*{-2pt} \]
\end{claim}
The number of points of ${\mathcal P}$ clearly is almost surely countable, so we may write ${\mathcal P}= (s_i, e_i)_{i \in{\mathbb Z}_+}$. In particular, by (\ref{eqnc}), $(s_i)_{i \in{\mathbb Z}_+}$ are the points of the Poisson point process on $[0,a\sqrt{\gamma}]$ introduced in Theorem~\ref{Tlevels}.
Note that $\{e_i, i\in{\mathbb Z}_+\}$ obviously correspond to distinct excursions of $Y-\underline{Y}$, so that their contributions to $l_{\infty}^{{\sqrt{\gamma}a}/{2}}( \frac{\sqrt{\gamma}}{2} H )$ are independent.\vspace*{-2pt}
\begin{claim} \label{clcontributions}
Conditionally given $L$, for each $i \in{\mathbb Z}_+$ the contribution of the excursion $e_i$ to $l_{\infty}^{{\sqrt{\gamma}a}/{2}}( \frac{\sqrt{\gamma}}{2} H )$ is exponentially distributed with parameter
\[ N^{(-L(s_i))}\bigl(\underline{e_i} \le\tfrac{1}{2}\bigl(-a\sqrt{\gamma} + s_i\bigr) \bigr).\vspace*{-2pt} \]
\end{claim}
Fix $i \in{\mathbb Z}_+$, and condition on $L$. Recall that $(s_i,e_i)$ is one of the points of the Poisson process ${\mathcal P}$, so that $e_i$ is chosen according to the measure
\[ n^{(-L(s_i))}\bigl( \cdot ,\overline{e} > \tfrac{1}{2}\bigl(a\sqrt{\gamma} - s_i\bigr)\bigr). \]
Up to the time at which $e_i$ reaches $\frac{1}{2}(a\sqrt{\gamma} -s_i)$, $e_i$ does not contribute to $l_{\infty}^{{\sqrt{\gamma}}a/{2}}( \frac{\sqrt{\gamma}}{2} H )$. From the Markov property of $e$ under the restricted measure $n^{(-L(s_i))}( \cdot,\overline{e} > \frac{1}{2}(a\sqrt {\gamma } - s_i))$, the remaining part of $e_i$ [after it has reached $\frac{1}{2}(a\sqrt {\gamma } -s_i$)] follows the path of a drifted Brownian motion, with drift $-L(s_i)$, started at $\frac{1}{2}(a\sqrt{\gamma} -s_i)$, and stopped when it gets to the origin. Thus, the contribution of $e_i$ to $l_{\infty}^{{\sqrt{\gamma}a}/{2}}( \frac{\sqrt{\gamma}}{2} H )$ is exactly the local time of this stopped drifted Brownian motion at level $\frac{1}{2}(a\sqrt{\gamma} -s_i)$. By shifting vertically, it is also $l_{\infty}^0(X)$, the total local time at the origin of $X$, a drifted Brownian motion, with drift $-L(s_i)$, started at the origin and stopped when reaching $\frac{1}{2}(-a\sqrt{\gamma} +s_i)$. By excursion theory, if $\widetilde {\mathfrak{P}}_i$ is a Poisson point process on ${\mathbb R}_+ \times{\mathcal F}({\mathbb R}_+,{\mathbb R})$ with intensity $ds \times N^{(-L(s_i))}$, then $l_{\infty}^0(X)$ is the coordinate of the first point of $\widetilde{\mathfrak{P}}_i$ which falls into the set
\[ {\mathbb R}_+ \times\bigl\{ e \in {\mathcal F}({\mathbb R}_+,{\mathbb R})\dvtx \underline{e} < \tfrac{1}{2}\bigl(-a\sqrt{\gamma} +s_i\bigr) \bigr\}. \]
Claim~\ref{clcontributions} follows.
From Lemma~\ref{LncNc}, Claim~\ref{clcontributing} (along with the remark which follows it) and Claim~\ref{clcontributions}, we deduce Theorem~\ref{Tlevels}.\vadjust{\goodbreak} \end{pf*}
\printaddresses
\end{document}
Keyboard Row Shift
Given an input string of characters, output the number of times we have to shift to another row while typing that string using a qwerty keyboard.
The input string will contain lowercase letters, numbers and spaces. The newline key (enter) must also be considered at the end (only at the end). All shifts, from any row to any other, will be considered as 1 shift only.
qwertyuiop
asdfghjkl enter
"sdkflsd" -> 0
"asdwexc" -> 3 # to end the string one enter has to pressed which is in the middle
"poierlkdjfpoeirldskjf" ->3
"123 jkjk" -> 2
"llsdkfj ldkfj" -> 2
"lkasdjmnbcv " -> 3
"jnjn 5" -> 6
This is code-golf, so shortest code wins.
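Before golfing, it helps to pin down the row model the test cases imply. The ungolfed Python reference below is a sketch under one assumption not stated explicitly above (but forced by the test cases, e.g. `"lkasdjmnbcv " -> 3`): the spacebar counts as its own row, while enter sits on the home row.

```python
# Row grouping inferred from the test cases (assumption: the spacebar is
# its own row; enter sits on the home row, as the challenge states).
ROWS = ["1234567890", "qwertyuiop", "asdfghjkl\n", "zxcvbnm", " "]

def row(ch):
    # Index of the keyboard row containing this key.
    return next(i for i, r in enumerate(ROWS) if ch in r)

def shifts(s):
    keys = s + "\n"  # enter is pressed once, at the very end
    rows = [row(c) for c in keys]
    # Each change of row between consecutive keys counts as exactly
    # one shift, regardless of how far apart the rows are.
    return sum(a != b for a, b in zip(rows, rows[1:]))
```

For example, `shifts("jnjn 5")` alternates home/bottom three times, then moves to the spacebar, to the digit row, and back to enter, giving 6.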
Vedant Kandoi
\$\begingroup\$ You need some test cases that contain a space and numbers. I'd also make it clear in your description that a shift can go any distance (going from the top row to the bottom is only 1 shift) \$\endgroup\$
\$\begingroup\$ Done, @NathanMerrill \$\endgroup\$
– Vedant Kandoi
\$\begingroup\$ How flexible is input? Is uppercase OK? How about an array of character strings or even an array that mixes digits and character strings? \$\endgroup\$
\$\begingroup\$ I have mentioned only lower case, input has to be string only not an array. \$\endgroup\$
Given a program in your language, generate another program that does exactly the same thing, such that every byte in it is prime.
Shortest generator in every language wins. No exceptions.
No-ops in languages that only allow prime bytes are legitimate, but just don't post them (or make a community answer to collect them).
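To get a sense of how tight the constraint is, the allowed byte values can be enumerated with a short sieve. This is only an illustration; it assumes programs are measured as raw single bytes (0–255):

```python
def prime_bytes():
    # Sieve of Eratosthenes over all possible byte values 0..255.
    sieve = [True] * 256
    sieve[0] = sieve[1] = False
    for p in range(2, 16):  # 16*16 = 256, so this covers every composite
        if sieve[p]:
            for m in range(p * p, 256, p):
                sieve[m] = False
    return [b for b in range(256) if sieve[b]]

# The printable-ASCII subset is the alphabet a generated program could
# draw on if it sticks to printable characters.
printable = [chr(b) for b in prime_bytes() if 32 <= b < 127]
```

Only 19 printable ASCII characters survive (`%`, `)`, `+`, `/`, `5`, `;`, `=`, and a handful of letters such as `a`, `e`, `g`, `k`, `m`, `q`), which shows why most languages would need heavy encoding tricks.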
\$\begingroup\$ Upvote this comment if you think that the idea of the challenge is not interesting and it should not be posted. \$\endgroup\$
\$\begingroup\$ Upvote this comment if you think that the idea is fine, but the challenge is unclear or needs fixing; in that case also leave a comment. \$\endgroup\$
\$\begingroup\$ I made a small poll above ↑. I think the idea is interesting, and if it's poorly worded, the post can't be fixed without suggestions. Note that the challenge is already fully specified in the current state (task description, winning criteria), and it's impossible to give sample input/output to test programs (if you understand the challenge, you will see why). \$\endgroup\$
\$\begingroup\$ Lots of languages quite rely on composite(not prime) bytes, so upvote this comment if you think it's too restrict \$\endgroup\$
\$\begingroup\$ I think the idea is interesting, especially in languages like Jelly which can do most anything with small subsets of its character set. My main problem is: how much of the language do we have to support in the input to our generator? Many languages which are capable of this task have far too many possible instructions, etc., every one of which would need to be remapped to an all-prime version. Unless perhaps it's possible to simply generate any given string and execute it whilst using only prime bytes... \$\endgroup\$
\$\begingroup\$ @ETHproductions I think a subset of an existing language can be considered another language? \$\endgroup\$
Are these strings character-wise translatable?
code-golf string decision-problem
Explanation with example:
Suppose you are given two strings of the same length, e.g. "code" and "golf". Try to find a set of translation rules for characters (like "All cs are replaced by gs", or c -> g for short) such that applying those rules on "code" yields "golf" and applying them to "golf" again yields "code".
If there is such a set of translation rules, return a truthy value. Otherwise, if no such set can exist, return a falsy value.
In our example, the translation is possible with the rules g -> c, d -> l, e -> f, c -> g, l -> d and f -> e, so a truthy value should be returned.
However, if the two strings were "code" and "meta", no such rules exist: To get from "code" to "meta" we need the rule e -> a, but to get from "meta" to "code" we would need e -> o. But e cannot be translated into two different characters! Therefore, a falsy value should be returned.
Consise mathy explanation:
Given two strings \$s\$ and \$t\$ of the same length \$n\$, decide whether the following relation is bijective: $$\{(s_i,t_i)\ |\ 1 \leqslant i \leqslant n\} \cup \{(t_i,s_i)\ |\ 1 \leqslant i \leqslant n\}$$ where \$s_i\$ denotes the \$i^{th}\$ character of string \$s\$.
truthy:
"code", "golf"
"a", "a"
"abdabdcdacabcabd", "bacbacdcbdbadbac"
TODO: Add some more
falsy:
"code", "meta"
"aa", "ab"
"abc", "bca"
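For reference, the bijectivity condition above can be sketched in Python (an illustrative helper, not part of the spec; `translatable` is a hypothetical name):

```python
def translatable(s, t):
    # Build the relation {(s_i, t_i)} ∪ {(t_i, s_i)} and check that it is a
    # function; since the relation is symmetric by construction, being a
    # function already makes it a bijection (an involution on characters).
    rules = {}
    for a, b in list(zip(s, t)) + list(zip(t, s)):
        if rules.setdefault(a, b) != b:
            return False
    return True
```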
LaikoniLaikoni
\$\begingroup\$ This seems to be very similar to Check if words are isomorphs. \$\endgroup\$
\$\begingroup\$ @Dennis It is definitely related. However, as far as I see most answers there work by creating some sort of fingerprint and checking whether it is the same for both inputs. This approach alone won't work here as you also need to check the symmetry of substitutions. \$\endgroup\$
\$\begingroup\$ Not saying this makes it a dupe, because there might be golfier ways, but s and t are character wise translatable if and only if s++t and t++s are isomorphs. \$\endgroup\$
Guess the Q in the code
A popular puzzle known to me as Codewords takes a Crossword-like grid and consistently replaces each letter with a code value, usually a number from 1 to 26, and the solver then has to work out which code value represents each letter. There are a number of approaches to this:
There is a web site which can search for words given the pattern of repeating letters. Since repeated letters map to repeated code values, you can sometimes use the pattern of repeated code values to determine the original word. For instance, the only word with the pattern 12.312..3... is churchwarden.
The puzzle normally provides a few of the letters to get you started. This may be enough to use a standard crossword solver to determine the original word. For instance, .h...h...d.. would again point you to the word churchwarden.
You could use frequency analysis to guess which letters are likely to be the popular letters such as e or t.
You could look for common prefixes or suffixes such as ally or ness. (I picked those two as they have repeating letters which are easier to spot.)
At least in all of the Codewords puzzles I have seen, q is always followed by a u and at least one letter. This means that if there is a code that is never the last or penultimate letter of a word, and always appears followed by the same code, then this could be a q.
However, checking all the codes to see how many distinct codes follow them is laborious. We need an automatic solution for this!
Please write a program or function that will guess which code(s) could represent the letter q. The input to the function will be an array of code values in a standard format. You will need to support 27 consistent distinct values, 26 for the codes themselves and a value for the background. These values can be integers but you could also use characters e.g. 26 letters and a space, in which case you can join them into strings, and even join the strings with a 28th character.
Your output will be all of the code values that have the property that, for each occurrence in the grid, either:
The previous and following cells are background or would extend past the side of the grid, or
There are two non-background following cells, and the first of those cells always contains the same value.
These rules apply in both the horizontal and vertical direction.
This is code-golf, so the shortest solution that breaks no standard loopholes wins!
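The two rules could be checked mechanically along the lines of the following Python sketch, assuming the grid is given as a list of rows of integers with 0 as the background value (the function name and input format are illustrative assumptions):

```python
def q_candidates(grid, bg=0):
    # A code qualifies if every occurrence is either isolated (rule 1) or
    # followed by two non-background cells whose first value is always the
    # same (rule 2), scanning both horizontally and vertically.
    rows = [list(r) for r in grid]
    cols = [list(c) for c in zip(*rows)]
    followers, bad = {}, set()
    for line in rows + cols:
        n = len(line)
        for i, v in enumerate(line):
            if v == bg:
                continue
            prev = line[i - 1] if i > 0 else bg
            nxt = line[i + 1] if i + 1 < n else bg
            nxt2 = line[i + 2] if i + 2 < n else bg
            if nxt == bg:
                if prev != bg:
                    bad.add(v)      # last letter of a word: can't be q
            elif nxt2 == bg:
                bad.add(v)          # penultimate letter: can't be q
            else:
                followers.setdefault(v, set()).add(nxt)
    codes = {v for line in rows for v in line if v != bg}
    return sorted(v for v in codes
                  if v not in bad and len(followers.get(v, set())) <= 1)
```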
Examples needed, I guess...
Hack g-code parser
\$\begingroup\$ This could be seen as asking for malicious code, which is not on-topic. \$\endgroup\$
\$\begingroup\$ @Laikoni this is not malicious/harmful code, because it doesn't cause harm to the user. It is an educational programming puzzle for white-hat hackers \$\endgroup\$
– Евгений Новиков
\$\begingroup\$ @Laikoni rm -rf / is harmful, but alert("pwned") is not \$\endgroup\$
\$\begingroup\$ Right, I think I misread the challenge. If it is only about cracking this piece of code rather than actual implementations it's probably fine as a challenge. Any way, you need an objective winning criterion. code-golf in the sense of "the shortest input to open the alert prompt" seems like a good candidate. \$\endgroup\$
Find the pattern v2
I'm tired of doing "find the pattern" exercises such as
1 2 3 4 (5)
1 2 4 8 (16)
1 2 3 5 8 (13)
Please write a program that finds the pattern for me.
Here, we define the pattern as a recurrence relation that fits the given input, with the smallest score. If there are multiple answers with the same smallest score, using any one is fine.
Let the \$k\$ first terms be initial terms for the recurrence relation, and the \$i\$'th term be \$f(i)\$ (\$i>k,i\in\mathbb N\$).
A non-negative integer \$x\$ adds \$1\$ to the score
The current index \$i\$ adds \$1\$ to the score
+, -, *, / (round up, down, towards zero, go further to zero, as you decide) and mod (a mod b always equal to a-b*(a/b)) each add \$1\$ to the score
For each initial term \$x\$, add \$1\$ to the score
\$f(i-n)\$ (with \$n\le k\$) adds \$1\$ to the score. E.g. using the latest value \$f(i-1)\$ adds \$1\$ to the score, and there must be at least 1 initial term.
Using parentheses to change the calculation order doesn't add anything to the score.
input -> [score] expression(not optimized)
1 2 3 4 -> [1] f(i) = i
1 2 4 8 -> [3] f(1) = 1, f(i) = 2*f(i-1)
1 2 3 5 8 -> [5] f(1) = 1, f(2) = 2, f(i) = f(i-1)+f(i-2)
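As a sanity check for answers, a candidate recurrence can be verified against the input roughly like this (an illustrative Python helper; `fits` and its calling convention are assumptions, not part of the challenge):

```python
def fits(seq, k, f):
    # seq: the input sequence, where seq[j-1] is the j'th (1-indexed) term
    # k:   number of initial terms taken as given
    # f:   f(i, s) computes the i'th term (1-indexed) from earlier terms
    return all(seq[i] == f(i + 1, seq) for i in range(k, len(seq)))
```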
Lowest score for worse case wins. If tie, shortest program in each language wins. Your program should run in polynomial time.
If someone's score is lower than 4 (currently reachable), the first answer with that lowest score will be accepted. (This differs from the winning criteria.)
p.s. See history for v1 sandbox
\$\begingroup\$ I think the challenge could benefit from a precise defining grammar for a pattern. As it stands, I feel the set of all expressions that may fit is too vast. \$\endgroup\$
– Jonathan Frech
\$\begingroup\$ @JonathanFrech Looks pretty precise to me. About "too vast" - there are only 6 bullets. \$\endgroup\$
\$\begingroup\$ @user202729 I am unsure if (round as you decide) includes the possibility of not rounding. \$\endgroup\$
\$\begingroup\$ Should "3 0 0 0 0 0 ..." be "f(1)=3, f(n)=0", do "for the recurrence relation" work? \$\endgroup\$
\$\begingroup\$ @l4m2 Yes. ----- \$\endgroup\$
\$\begingroup\$ I don't understand the "fun fact"... \$\endgroup\$
\$\begingroup\$ @user202729 Do you think it no fun or can't figure it out? \$\endgroup\$
\$\begingroup\$ The latter. ---- \$\endgroup\$
User ranking in language
Challenge is:
You will receive a "language" as input. By "language" I mean the text at the left of the comma in the title of an answer on the present code-golf site.
The source for the data are the answers posted here on code-golf site.
You must present results in order. Decreasing or Increasing does not matter, it only needs to be one of those two.
Output is a table of two columns {User, Upvotes}. Example: the user enters Java and will obtain a list like the following:
user_A 9523 user_B 6000 user_C 120
Don't worry too much about the output formatting, as long as the separation between the two columns and between lines is clear.
There will be no winner, as it is a per-language code-golf question.
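The aggregation step could look like the sketch below, assuming the answer data has already been fetched (e.g. via the Stack Exchange API or a SEDE query); the tuple format is an illustrative assumption:

```python
from collections import defaultdict

def rank(answers, language):
    # answers: iterable of (title, user, upvotes) tuples, assumed fetched
    # already. The language is the part of the answer title left of the
    # comma; matching is case-insensitive per the spec.
    totals = defaultdict(int)
    for title, user, upvotes in answers:
        lang = title.split(",")[0].strip()
        if lang.lower() == language.lower():
            totals[user] += upvotes
    # decreasing order of total upvotes
    return sorted(totals.items(), key=lambda kv: -kv[1])
```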
sergiolsergiol
\$\begingroup\$ "The source for the data are the answers posted here on code-golf site." The answers for this challenge, or ALL challenges on PPCG? And with your example, does this mean it searches for each user all answers given in Java, and sums their total upvote count? Also, do we differentiate java/Java/Java 7/Java 10 etc as different inputs? \$\endgroup\$
\$\begingroup\$ @KevinCruijssen 1. Yes, the scope is all answers on PPCG. 2. Yes, it is the total upvote count. 3. This is flexible on uppercase/lowercase, but not on the rest. \$\endgroup\$
– sergiol
Concentration (or Memory or Match) is a game where players pick pairs of cards and try to find matches. The rules are as follows:
Two of each number from 1 to N are added to the deck.
The deck is shuffled and placed face-down (hidden).
The player selects a card. The card is revealed.
The player selects another card: The card is revealed.
If the two cards are different numbers, then they are hidden again.
Steps 3 through 5 are repeated until all cards are revealed.
For example, let's say I have the following cards: 1 1 2 2 3 3 4 4
Shuffle them: 1 4 3 1 4 3 2 2
Place them face down: X X X X X X X X
The player will then select a card: 1 X X X X X X X
And then another card: 1 X 3 X X X X X
The numbers don't match, so we put the cards face down, and select another card: X X X 1 X X X X
The player remembers the position of the other 1: 1 X X 1 X X X X
A match was found! These cards stay face-up, and we try again.
The challenge is to do this with limited memory. You need to write a function/program that takes two parameters:
A list of cards (with a constant value representing the hidden cards)
32 bytes of data. This can be stored as a string, a large integer, etc.
Your function must use this data (and only this data) to select the next card to reveal, and then update the 32 bytes of data. You may use a random number generator for free (as long as you do not use the seed to store data)
After the game is over, the game is scored by counting the number of times a card was revealed.
Below, I have provided a list of test cases. Your score is the length of the first test case you complete with a score of greater than 2N+30 (N is the number of distinct cards). Hard coding is not allowed, and your function should get a similar score with a different set of test cases.
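To make the scoring concrete, a minimal game-loop sketch with a memoryless random baseline might look like this (the function names and exact calling convention are illustrative assumptions, not the required interface):

```python
import random

def play(cards, strategy):
    # Simulate one game and count reveals. `cards` is the hidden layout;
    # strategy(shown, state) returns the index of the next card to flip,
    # seeing only the currently face-up values (None = hidden) and its
    # 32 bytes of state.
    state = bytearray(32)
    face_up = [False] * len(cards)
    reveals = 0
    while not all(face_up):
        shown = [c if up else None for c, up in zip(cards, face_up)]
        i = strategy(shown, state)
        shown[i] = cards[i]          # first card is now revealed
        j = strategy(shown, state)
        reveals += 2
        if j != i and cards[i] == cards[j]:
            face_up[i] = face_up[j] = True
    return reveals

def random_pick(shown, state):
    # baseline that ignores its 32 bytes of memory entirely
    return random.choice([i for i, c in enumerate(shown) if c is None])
```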
9 4 1 5 7 3 4 1 6 3 9 7 2 2 0 5 6 0 8 8
1 7 0 2 1 3 8 8 6 9 2 3 4 5 6 5 10 4 7 10 0 9
3 5 10 3 5 1 9 10 1 2 0 4 0 8 6 7 8 4 9 2 6 7
9 1 7 8 2 8 9 10 2 0 6 6 5 3 5 1 10 0 4 7 3 4
7 8 2 9 8 0 0 9 3 1 5 2 4 1 3 10 4 10 5 6 6 7
3 7 9 5 1 2 8 6 3 6 4 0 10 1 2 7 10 9 8 4 5 0
7 6 9 6 2 5 5 0 11 9 3 10 8 3 4 8 4 2 11 1 1 10 7 0
10 2 8 5 11 3 4 2 6 11 4 7 10 9 8 9 1 1 6 0 5 7 3 0
4 0 7 7 4 6 11 8 3 9 3 2 5 1 11 10 8 2 9 5 0 10 1 6
10 8 8 0 0 3 7 4 11 10 9 2 4 5 6 9 5 6 7 1 1 11 3 2
1 7 0 8 9 10 4 5 10 6 5 1 4 3 7 2 0 9 3 11 8 2 6 11
3 7 10 11 1 5 10 2 9 3 8 11 5 9 6 0 0 4 12 12 4 8 1 6 2 7
0 11 9 11 2 9 10 6 5 2 7 0 12 1 10 7 6 8 3 3 4 1 12 5 8 4
3 9 5 12 1 6 8 10 2 4 11 3 6 8 9 1 10 12 7 0 11 0 5 4 7 2
1 7 11 9 0 2 6 6 5 0 8 3 10 8 7 12 3 9 12 4 2 4 11 1 5 10
8 4 9 0 7 3 1 3 9 6 11 6 8 2 12 10 2 12 5 5 0 1 10 7 4 11
13 4 13 4 1 2 9 8 10 7 3 12 2 0 5 6 0 5 8 6 9 12 11 10 1 7 3 11
13 0 0 6 1 4 11 3 12 9 2 10 9 2 7 8 12 5 7 6 11 4 10 1 5 3 13 8
13 10 3 6 12 1 0 8 5 6 3 11 4 9 2 9 8 11 12 2 5 7 7 1 4 10 0 13
7 4 11 8 10 1 6 0 3 10 6 0 8 2 11 5 3 1 7 13 9 13 12 9 4 2 5 12
13 5 7 6 1 12 8 2 8 0 5 7 10 6 4 11 12 1 2 10 3 13 11 3 0 9 9 4
9 2 4 3 7 4 2 7 10 5 13 6 5 12 8 11 0 8 14 11 0 6 1 9 3 13 14 1 10 12
8 14 8 7 11 7 2 12 4 5 13 6 1 3 5 0 1 4 9 0 10 2 11 12 13 6 3 14 10 9
4 9 5 1 5 10 2 11 2 9 13 10 3 13 12 8 7 1 6 8 14 0 4 12 14 11 3 7 6 0
5 10 4 3 2 4 2 3 9 13 14 6 0 11 5 13 12 8 0 9 7 8 6 11 14 1 12 10 1 7
11 1 10 8 0 12 9 1 7 4 4 12 6 8 14 5 10 5 14 0 11 3 7 9 2 2 3 13 6 13
0 8 15 6 4 1 9 8 14 13 2 11 4 6 9 14 0 2 3 5 1 15 5 10 13 12 3 7 12 7 10 11
13 3 4 13 2 12 9 1 5 1 6 7 11 9 10 0 15 14 12 15 14 8 7 3 4 5 8 11 0 2 10 6
6 1 7 9 3 1 15 7 5 14 9 14 0 8 10 0 4 12 12 13 3 6 11 4 15 2 11 5 2 13 8 10
2 0 14 7 13 13 15 7 3 8 15 5 10 12 4 5 3 14 9 6 12 10 9 11 8 6 1 1 4 11 2 0
3 13 12 4 10 13 8 0 7 11 11 6 8 1 12 5 5 9 6 10 14 3 2 9 14 4 1 15 15 7 0 2
16 12 7 14 13 5 5 4 11 0 0 9 12 15 15 9 14 2 6 3 11 8 16 10 6 13 3 1 2 8 1 7 10 4
5 5 10 4 9 15 0 11 11 1 13 15 9 0 3 1 16 6 8 7 2 6 8 2 14 14 16 4 3 12 12 13 10 7
0 6 7 4 12 16 0 3 6 1 7 12 8 13 15 9 2 11 2 15 11 10 5 13 14 5 9 14 3 8 4 16 1 10
8 0 1 2 16 11 3 2 15 4 15 5 14 9 13 7 14 9 10 12 11 12 16 6 13 6 1 5 10 0 7 8 4 3
9 16 15 13 11 2 5 6 16 3 9 4 10 14 14 12 8 11 0 6 0 15 10 7 13 5 1 12 3 4 2 1 7 8
11 15 14 3 8 12 17 15 1 0 12 6 9 0 9 13 11 14 16 2 4 5 13 16 10 7 8 5 10 4 17 3 6 7 1 2
11 14 5 7 14 0 12 4 8 12 11 13 16 16 15 6 3 1 8 3 1 5 7 17 10 2 17 13 15 9 9 4 2 0 6 10
15 13 6 12 5 4 9 2 11 2 13 16 3 9 1 16 10 11 1 12 14 17 8 0 17 6 8 10 5 0 7 4 14 7 3 15
0 9 13 15 8 11 6 5 9 16 11 7 14 7 5 15 3 12 4 16 2 14 3 0 1 10 13 1 6 10 17 2 17 8 4 12
10 13 14 10 5 7 16 8 8 7 6 3 0 2 17 17 2 1 15 16 3 4 5 0 12 9 1 12 14 11 13 9 15 6 4 11
7 14 0 13 15 6 12 1 14 16 10 15 8 11 0 9 2 3 4 18 12 17 8 13 9 6 4 11 5 1 5 16 17 3 18 10 7 2
12 9 17 2 7 12 4 15 13 0 17 9 11 3 6 4 8 5 11 16 8 1 7 0 13 15 16 10 3 14 18 2 14 1 10 6 18 5
4 0 5 14 7 8 1 0 12 15 11 2 11 5 3 9 15 17 6 8 7 10 12 3 6 16 2 18 16 13 4 14 9 17 13 18 1 10
13 13 18 9 0 17 12 17 8 0 14 16 14 6 10 6 9 15 1 15 2 4 11 5 1 3 4 3 11 2 5 16 8 7 10 7 18 12
18 11 13 6 14 9 13 14 9 18 17 11 0 1 17 2 0 6 10 3 8 4 1 5 15 7 5 2 12 15 4 8 7 16 10 12 3 16
0 17 5 6 19 0 11 1 7 18 9 14 16 2 16 13 1 4 3 17 15 7 10 2 15 9 18 13 8 14 6 12 5 10 3 8 11 19 12 4
4 3 10 12 2 7 9 11 0 16 16 7 19 2 13 3 12 10 5 18 4 15 15 0 11 19 18 13 14 1 5 17 6 9 14 8 6 17 8 1
0 17 16 11 7 19 8 5 17 16 14 1 10 6 11 7 15 10 14 13 15 4 2 4 1 9 6 3 13 18 18 0 19 3 9 12 8 5 2 12
10 15 10 0 2 14 3 19 1 18 4 14 8 2 5 17 4 11 17 12 13 16 9 3 0 13 7 5 18 6 8 9 12 1 19 16 11 7 6 15
15 2 16 4 14 0 1 5 12 18 10 12 4 14 16 11 8 7 13 15 11 3 19 10 3 9 17 6 13 19 9 1 0 18 8 2 7 5 17 6
1 12 17 7 8 8 12 20 5 10 20 16 13 3 18 15 5 4 10 9 6 14 11 6 7 15 16 19 0 1 9 18 19 2 2 3 13 4 17 0 11 14
7 17 2 12 19 9 0 11 17 4 12 0 10 11 16 14 3 20 7 15 2 9 8 18 16 19 5 3 8 14 6 18 1 13 5 1 4 15 13 10 6 20
0 18 12 4 20 7 14 5 10 16 1 13 9 15 12 0 6 19 2 5 16 10 1 17 15 11 20 3 14 2 18 3 17 11 7 8 4 8 6 19 13 9
9 18 15 0 16 7 19 4 13 20 5 10 2 2 0 12 8 6 3 15 16 14 17 7 10 3 13 6 1 20 1 17 19 11 5 4 12 14 11 8 9 18
13 6 16 11 20 3 10 8 5 7 17 15 14 0 7 6 14 17 4 15 2 12 8 1 13 18 11 10 1 9 20 16 5 19 18 2 12 19 3 9 4 0
4 5 16 11 2 1 14 9 16 6 20 21 0 13 12 13 7 8 15 11 10 18 14 3 15 5 21 18 12 17 19 9 0 19 7 4 3 20 1 8 2 10 6 17
10 6 7 0 18 1 19 5 8 12 15 2 5 4 11 20 16 17 13 14 17 1 16 9 21 21 0 3 6 3 2 7 8 13 19 20 14 15 9 11 10 4 18 12
2 20 12 21 10 4 8 11 13 3 20 1 6 21 12 13 4 17 19 16 17 15 14 9 16 9 5 10 8 6 11 18 1 5 2 18 0 0 3 7 19 14 15 7
16 0 17 8 11 6 3 18 9 16 14 20 12 12 10 1 14 1 10 21 4 19 0 2 13 3 15 17 21 9 6 18 2 20 4 11 7 19 5 8 7 5 13 15
3 4 4 10 0 3 16 14 19 8 17 9 5 9 6 12 7 13 13 1 15 0 5 20 2 7 1 17 21 11 11 18 20 6 15 16 2 12 14 19 8 10 18 21
12 11 7 8 8 10 9 18 5 6 15 12 10 0 14 3 6 22 17 15 17 21 14 16 9 4 18 13 1 11 4 20 3 19 22 1 13 7 21 5 0 20 2 2 16 19
2 20 10 17 6 0 21 10 19 4 12 8 0 7 18 15 17 18 9 4 2 14 1 5 14 6 16 22 13 11 11 15 21 3 7 19 8 9 22 20 3 16 13 1 12 5
12 11 6 16 18 15 12 11 10 9 8 15 22 5 21 18 20 0 2 19 17 8 10 5 3 14 9 22 20 3 1 21 19 6 0 16 1 13 4 14 2 13 17 4 7 7
20 5 16 4 17 7 8 15 0 10 9 3 16 13 8 12 1 14 17 11 18 11 2 7 6 1 4 20 3 5 22 22 2 6 13 10 19 15 12 14 18 9 0 19 21 21
11 1 0 5 7 15 0 12 3 9 6 15 7 13 11 20 9 17 8 4 22 4 10 5 21 18 16 14 13 10 18 12 21 17 16 2 6 22 3 14 19 1 8 19 20 2
10 10 18 6 4 1 9 6 15 7 7 11 21 13 15 21 22 16 12 20 2 18 3 17 19 14 14 0 16 5 3 4 1 9 13 11 22 20 19 8 5 0 23 17 23 2 8 12
3 3 19 13 5 10 17 18 4 16 6 2 21 4 12 7 13 8 7 10 6 2 1 15 5 21 23 22 0 11 9 16 20 11 20 23 1 14 18 8 14 19 9 17 0 15 22 12
14 7 15 20 19 7 13 0 20 2 18 1 11 21 17 12 16 10 8 3 15 2 22 23 18 16 9 1 3 13 10 19 8 11 6 4 5 22 0 17 21 23 14 4 12 9 6 5
1 12 15 21 11 21 18 6 4 22 5 23 14 19 13 8 15 7 5 0 8 13 16 4 1 6 12 11 14 18 20 17 9 23 19 3 2 17 9 0 16 7 3 20 10 22 2 10
23 9 15 10 13 17 17 21 12 18 2 20 3 8 0 16 2 12 6 20 7 19 16 0 8 9 4 6 5 13 18 14 1 19 21 3 22 15 1 4 22 10 5 7 11 14 11 23
3 21 22 5 19 7 2 24 18 19 9 2 24 14 6 0 12 0 21 18 23 15 8 11 6 4 8 17 16 20 14 16 4 7 10 11 1 9 17 3 23 10 20 5 13 22 13 1 12 15
13 6 5 22 1 17 2 21 5 0 14 24 19 21 7 19 6 23 12 14 4 9 3 22 15 9 2 16 20 7 13 12 8 23 18 0 15 10 8 24 20 16 3 1 4 11 10 18 17 11
12 6 19 0 9 14 22 2 13 20 19 9 6 3 21 20 24 0 4 24 1 5 11 15 1 21 8 5 16 16 23 18 7 11 10 17 15 12 3 22 7 10 2 23 17 8 14 4 13 18
6 22 2 18 16 19 12 0 17 14 23 4 6 17 3 10 4 22 13 9 20 19 0 9 8 1 24 2 23 14 11 5 7 16 13 10 12 8 7 3 20 21 15 5 18 21 11 1 15 24
18 12 6 22 19 19 9 24 5 14 13 2 22 8 4 16 3 9 18 8 0 10 23 10 2 20 21 5 23 20 1 11 0 17 7 7 17 12 15 3 16 15 13 24 14 4 6 21 11 1
7 14 19 5 4 17 1 22 23 24 12 1 20 6 24 7 21 10 8 11 20 3 25 13 0 2 16 21 0 6 9 11 4 18 22 2 15 5 18 8 13 12 3 14 23 10 17 9 16 15 25 19
10 4 1 13 11 24 7 12 15 20 11 0 23 23 5 7 25 25 17 8 20 24 2 22 19 9 12 16 4 6 21 9 17 6 5 15 10 0 18 8 22 2 14 16 3 19 21 18 1 3 13 14
16 22 3 2 25 21 11 1 10 12 4 15 0 2 13 11 3 14 19 1 18 12 8 23 18 22 9 16 4 0 20 9 17 24 7 10 6 8 23 25 15 7 21 20 19 5 14 13 6 5 17 24
21 20 8 16 1 18 4 0 14 5 22 3 0 23 22 1 9 4 24 7 21 2 23 15 11 12 14 25 8 12 7 15 13 18 3 19 10 2 5 10 16 24 19 13 6 20 6 9 17 11 17 25
6 24 17 18 24 22 10 11 20 16 16 3 4 21 23 25 14 1 0 1 7 2 5 13 6 4 10 21 17 14 23 11 9 9 15 25 0 19 19 12 20 22 12 13 15 2 3 8 5 18 8 7
11 16 25 6 19 5 26 17 24 6 12 4 15 18 21 7 10 5 23 3 13 8 13 23 20 8 11 12 9 22 1 14 21 2 25 10 15 9 0 2 0 26 3 24 17 22 14 4 18 16 20 19 1 7
26 0 19 3 0 20 1 25 13 3 12 6 20 11 21 13 2 11 23 15 17 15 24 23 10 8 26 5 18 9 16 18 4 9 19 10 14 7 22 17 22 4 16 8 24 2 25 5 14 12 6 21 1 7
9 25 13 14 19 17 20 2 11 5 19 23 25 21 0 16 8 15 7 16 24 24 3 15 20 10 8 4 0 12 13 22 9 4 26 18 22 2 23 18 3 6 6 12 17 26 21 10 1 11 14 7 5 1
17 4 20 23 13 0 3 26 25 16 3 2 13 10 8 12 9 5 14 2 26 25 21 23 22 17 7 14 24 1 4 15 19 16 12 5 9 22 1 20 19 7 11 6 21 15 11 18 6 8 18 0 10 24
14 12 3 17 7 2 19 11 0 21 10 24 4 6 8 8 25 4 6 23 9 1 25 15 12 9 23 10 2 26 1 0 11 7 19 13 18 3 17 13 16 26 20 20 16 15 22 22 18 14 5 24 21 5
4 15 22 10 27 14 7 0 2 17 3 6 20 18 0 16 21 4 3 11 11 12 23 5 15 22 9 17 2 1 10 18 25 14 27 19 16 23 20 21 6 25 26 24 13 1 24 26 9 13 8 19 8 7 5 12
3 17 8 9 3 9 23 15 10 25 0 6 26 21 11 12 27 1 11 13 14 2 6 1 20 7 19 24 22 14 22 25 16 17 26 5 24 27 12 13 18 4 23 20 19 4 7 15 21 8 5 16 10 2 0 18
9 11 14 21 3 12 10 4 14 1 5 18 27 9 6 0 3 19 7 12 17 26 26 25 15 11 16 24 17 18 20 8 7 19 15 1 4 13 22 23 8 25 20 16 22 2 6 0 10 5 2 24 23 21 13 27
7 23 2 26 5 24 19 17 2 18 11 7 16 5 8 26 24 21 8 14 10 16 18 9 27 1 4 22 15 25 17 4 0 0 21 3 15 12 11 23 19 27 10 13 6 22 3 14 6 13 12 9 20 1 20 25
17 4 24 21 11 22 12 13 0 1 17 23 15 4 6 2 27 14 3 16 8 13 3 5 2 6 0 9 23 14 25 26 10 18 19 26 8 11 10 20 21 18 22 20 25 9 7 12 1 7 19 16 5 27 15 24
8 27 28 24 9 9 16 22 13 20 23 25 13 21 0 26 21 8 16 24 27 18 26 23 3 7 3 25 22 15 14 6 5 7 11 4 1 20 17 19 17 12 4 14 2 1 5 18 2 28 12 0 10 19 15 11 6 10
21 9 3 23 27 4 15 22 12 2 16 26 4 13 28 16 20 6 6 3 10 1 18 5 11 0 21 7 8 25 23 17 9 15 22 14 24 2 5 10 12 17 20 1 24 7 18 0 14 26 19 13 28 25 11 8 27 19
26 10 19 20 17 10 9 19 7 15 4 27 0 15 18 2 6 2 22 6 12 25 11 9 24 26 5 11 5 21 17 7 8 4 18 14 16 24 28 1 3 28 27 14 12 8 13 13 23 3 16 23 25 21 22 0 1 20
1 0 19 11 20 17 8 3 2 18 7 18 21 5 26 9 10 6 20 2 28 8 26 19 21 17 16 14 3 14 22 12 13 6 24 9 10 22 0 27 11 13 27 7 15 4 28 1 16 23 25 23 25 12 24 15 4 5
5 9 12 7 19 25 6 24 12 6 25 3 1 16 28 2 3 14 16 4 5 26 7 8 20 19 18 22 10 27 13 28 21 17 11 26 4 18 0 20 27 24 15 10 23 11 0 21 9 22 13 14 8 1 17 15 23 2
Nathan MerrillNathan Merrill
\$\begingroup\$ If I use an rng, I could get very different results between runs. Perhaps score based on the average of 100 runs? \$\endgroup\$
\$\begingroup\$ @Spitemaster It is possible that a different RNG would change the score to be +1 or -1, but as scores get larger, the RNG seed matters less. If a person posts a solution that required a very specific RNG seed, it wouldn't be reproducible, and therefore, invalid. \$\endgroup\$
\$\begingroup\$ This seems like a nice challenge but it's a bit vague. What do you mean by "Your function must use this [32 bytes of] data". We can't use local variables? Do we have to assign that parameter that was passed to us in the function call to another value to be able to store data? \$\endgroup\$
– FireCubez
Lunar Arithmetic
Lunar Arithmetic (also called dismal arithmetic, a play on decimal) is an alternative to elementary arithmetic in which only addition and multiplication are defined, and then only for non-negative integers. For single-digit numbers, addition is defined as the maximum of the two digits, and multiplication is defined as the minimum of the two digits. For example:
2 + 5 = 5 + 2 = 5
2 * 5 = 5 * 2 = 2
To extend lunar addition to multi-digit numbers, simply consider the digits individually, counting empty digit places as zeroes as required. For example, consider 169 + 84:
  1 6 9
+   8 4
-------
  1 8 9
Lunar multiplication can now be dealt with in the same way as normal long multiplication, using the lunar rules for addition and multiplication. For example, consider 169 * 84:
      1 6 9
    *   8 4
    -------
      1 4 4   (169 * 4)
  + 1 6 8     (169 * 8)
  ---------
    1 6 8 4
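A straightforward (ungolfed) Python sketch of both operations, following the digit-wise rules above; the function names are illustrative:

```python
def lunar_add(a, b):
    # digit-wise maximum, padding the shorter number with zeroes
    x, y = str(a), str(b)
    n = max(len(x), len(y))
    return int("".join(max(p, q) for p, q in zip(x.zfill(n), y.zfill(n))))

def lunar_mul(a, b):
    # long multiplication: digit-wise minimum for each partial product,
    # lunar addition to combine them
    total = 0
    for shift, d in enumerate(reversed(str(b))):
        partial = int("".join(min(d, p) for p in str(a))) * 10 ** shift
        total = lunar_add(total, partial)
    return total
```

Note that multiplying by `10 ** shift` only appends zeroes (a digit shift), so it is safe to use even though ordinary multiplication is otherwise off-limits in lunar arithmetic.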
Write a function or program which, when given two integers, returns the result of their lunar addition and their lunar multiplication, in any order.
The input pair may be presented as a two-element list or array.
Input numbers may be provided in any reasonable format, including as a string, or as a list of digits.
Output may similarly be in any reasonable format.
The two output numbers must be separated in an unambiguous way, such as with newlines, spaces, as elements of a list or array.
Links to online demonstrations are appreciated, but not required.
Standard loopholes are disallowed.
This is code-golf, so shortest code in bytes per language wins!
In your submission, please specify the order in which the two results are presented, and your input and output formats if they are out of the ordinary.
Test cases go here
Original paper outlining dismal arithmetic
Numberphile video discussing lunar arithmetic
code-golf math arithmetic
Sandbox business
I see this as the potential starting point for a number of Lunar arithmetic challenges, such as computing the result of a longer calculation (such as 12+345*67*8), identification of lunar primes, lunar prime factorisation, methods of implementing subtraction and division... Of course that all depends on the response to this challenge. Any feedback on how to make this challenge better would be much appreciated!
SokSok
Posted: FreeChat Online
Laikoni
\$\begingroup\$ You're posting to main way too fast. I recommend waiting for at least a couple of days; people have to read before giving you some feedback (comment, upvote, whatever) anyway. \$\endgroup\$
\$\begingroup\$ okay @Bubbler I'll wait next time \$\endgroup\$
\$\begingroup\$ After you post a challenge, please edit the post and delete it. \$\endgroup\$
Simple ASCII Representation of a Screw Head
Inputs are a string and a number.
If the string is "Slot", the number will be 0, 60, 90, or 120. If the number is 0, output --; if 60, output /; if 90, output |; if 120, output \.
If the string is "Phillips", the number will be 0, 45, 90, or 135. If it's 0 or 90, output +; otherwise output x.
If the string is "Torx", the number will be 0, 60, or 120. Regardless of the value, output *.
If the string is "Spanner", the number will be 0 or 90. Output .. for 0 and : for 90.
Behavior for all other inputs is undefined.
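A direct, ungolfed transcription of the spec (an illustrative reference, not a golfed answer; the function name is hypothetical):

```python
def screw_head(kind, angle):
    # Maps (head type, rotation angle) to the ASCII output defined above.
    # Inputs outside the spec are undefined, so no error handling is done.
    if kind == "Slot":
        return {0: "--", 60: "/", 90: "|", 120: "\\"}[angle]
    if kind == "Phillips":
        return "+" if angle in (0, 90) else "x"
    if kind == "Torx":
        return "*"
    if kind == "Spanner":
        return ".." if angle == 0 else ":"
```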
[Is this challenge too simple? If so, could more test cases help?]
\$\begingroup\$ I think adding test cases that require modular division (so you could say Phillips 72270) and error handling (GenericScrewXyz 43 would throw an error) would help. \$\endgroup\$
Display a rational tangle [WIP]
code-golf math arithmetic ascii-art
In a quest to classify mathematical knots, J. H. Conway discovered that certain simpler knotlike structures called rational tangles can be uniquely represented by rational numbers.
A tangle is an arrangement of two strands of rope such that each of the four ends lies at one corner of a rectangle. The four exceptional tangles are 0, 1, -1, and a special case usually referred to as 0/0 or ∞. A rational tangle is a tangle that can be obtained from the exceptional tangles by the following operations.
An example of a rational tangle, and the tangles 0/0, 0, 1, and -1 respectively.
The rational tangle operations
Because every rational number can be obtained by addition, negation, and reciprocal, there exists some rational tangle corresponding to every rational number.
If tangles a and b have rational number values x and y respectively, then the operations in the diagram result in the following values.
Tangle | Value
-------+---------
-a     | -x
1/a    | 1/x
a + b  | x + y
a b    | -x + y
a , b  | -x + -y
Using the below representations of the four exceptional rational tangles, display an ASCII art rational tangle corresponding to a given rational number.
Exceptional rational tangles
\_/ \ /
_ | |
/ \ / \
3x3 ASCII art representing the rational tangles 1/0 and 0
\ / \ /
\ /
The rational tangles 1 and -1, illustrating how to display crossings
Displaying more complex rational tangles
Adding and rotating the above 3x3 blocks can yield any rational tangle.
To flip a rational tangle over the line x=y, switch all instances of 1 and -1, switch all instances of 0/0 and 0, and replace all extenders (below) as appropriate.
To rotate a rational tangle by 90 degrees in either direction, rotate the ASCII art by 90 degrees in that direction, switch all \ and /, switch all instances of 0/0 and 0, and replace all extenders (below) as appropriate.
To add two rational tangles, juxtapose them in the appropriate direction and orientation, adding extenders if dimensions do not match. Then surround the result with the following.
Pattern to surround the sum of two tangles in:
\ ... /
⋮ ⋮
/ ... \
Extenders:
\ /
| |
... ...
\ /
\_..._
_..._/
One possible representation of the tangle obtained through the equation [...]. This tangle is congruent to [...] because =.
\ /
\_ _/
\ /
__/ \_
\ /\ /
\ \
/ \/ \
/ \
A rational number. If you instead take input as an ordered pair (numerator,denominator) of integers, you will have denominator >= 0. (Denominator 0 is necessary to represent 0/0.)
An ASCII-art representation of the rational tangle corresponding to said number, where a crossing is displayed as one of the above. Whitespace can occur anywhere. Output can contain unnecessary copies of 0, or have other variations as long as the rational tangle is obtained by operations whose corresponding arithmetic operations result in the input.
Further sources
A further explanation of tangles and rational tangles: (https://rationaltangle.wordpress.com/what-are-tanglesrational-tangles/)
TODO: Reciprocal operation, better example, reference implementation
Should continued fraction representations be acceptable?
Since this is probably a difficult challenge and answers to such are undervoted, I'm prepared to award a bounty to the shortest answer and any improvement thereon.
This is getting more unwieldy. I'm thinking of simplifying into a challenge to simply add two rational tangles.
\$\begingroup\$ The introduction seems to be written as a reminder to someone who already knows the topic and needs a refresher. To someone who's never heard of rational tangles before, it throws up a lot more questions than answers. What's 0/0 (other than NaN)? What's ramification? What (other than a+b, which is fairly straightforward) is the diagram supposed to show? The intro says that the ends of the rope must be in the corners of a square: should that say rectangle? In the ASCII art, how do extenders cross? \$\endgroup\$
\$\begingroup\$ 1. Which part of the diagram corresponds to reciprocal? Does the left-hand part somehow show both negation and reciprocal at the same time? 2. Having looked at your additional reading link, the challenge turns out to be much simpler than the question made it seem. The paragraph starting "The two simplest tangles" and the subsequent one seem to be all that's needed. Perhaps you could use a simple challenge as an introduction and then have a follow-up which asks for an equality test? \$\endgroup\$
\$\begingroup\$ @PeterTaylor 1. Reciprocal is flipping over some axis, I forget which one. 2. Will do when I have time. I probably won't end up posting this for another month. \$\endgroup\$
Octonion multiplication code-golf math complex-numbers
Octonion is a further extension to the quaternion number system. An octonion can be written as
$$ x = x_0 e_0 + x_1 e_1 + x_2 e_2 + x_3 e_3 + x_4 e_4 + x_5 e_5 + x_6 e_6 + x_7 e_7 $$
where \$ x_i \$ are real numbers and \$ e_i \$ are the eight unit octonions.
Octonion multiplication has the following properties:
It is not commutative, i.e. \$ xy \ne yx \$.
It is not associative, i.e. \$ x(yz) \ne (xy)z \$.
But, luckily, multiplication is distributive over addition, i.e. \$ x(y+z) = xy + xz \$ and \$ (x+y)z = xz + yz\$.
The multiplication rule for unit octonions is presented below:
(will add a table from Wikipedia)
The unit octonions have several properties:
\$ e_0 \$ behaves like a real number 1.
\$ e_i^2 = -1 \$ and \$ e_i e_j = -e_j e_i \$ for \$ 1 \le i,j \le 7, i \ne j \$.
Multiply two octonion numbers.
Input & output
You can accept and output an octonion as any kind of consistent structure consisting of eight real numbers (four complex numbers or two quaternions or a mixture of types is also fine).
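For intuition, octonion multiplication can be generated recursively by the Cayley-Dickson construction; the sketch below uses one common sign convention, which may differ from the table in the challenge by a relabelling of the units, but it satisfies all the listed properties (\$e_0\$ acts as 1, \$e_i^2 = -1\$, anticommutativity):

```python
def conj(x):
    # Cayley-Dickson conjugate of a 2^n-tuple: negate the imaginary half
    if len(x) == 1:
        return x
    h = len(x) // 2
    return conj(x[:h]) + tuple(-v for v in x[h:])

def cd_mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d b*, a* d + c b),
    # where * is conjugation. Applied to 8-tuples this gives octonions.
    if len(x) == 1:
        return (x[0] * y[0],)
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    add = lambda u, v: tuple(p + q for p, q in zip(u, v))
    sub = lambda u, v: tuple(p - q for p, q in zip(u, v))
    return (sub(cd_mul(a, c), cd_mul(d, conj(b)))
            + add(cd_mul(conj(a), d), cd_mul(c, b)))
```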
Scoring & winning criterion
Standard code-golf rules apply. The shortest program or function in bytes for each language wins.
BubblerBubbler
\$\begingroup\$ \$1e_0+2e_1+3e_2+4e_3+5e_4+6e_5+7e_6+8e_7\$ is written in NARS2000 as 1i2j3k4l5ij6jk7kl8. ;-) \$\endgroup\$
\$\begingroup\$ Somehow, I think NARS APL will win this one… \$\endgroup\$
Tetris battle
tetris king-of-the-hill javascript
I guess you know what Tetris is, but if you don't I'll try to explain:
So there is a 2-dimensional board, often with size 10x20. On the top of the board, in the middle, there is a random tetromino (also rotated randomly), a figure made of 4 squares. There are 7 shapes:
L    J    S    Z    T    O    I
#     #    ##  ##  ###  ##   #
#     #   ##    ##   #   ##  #
##   ##                      #
                             #
Every second, the tetromino (later called "block") will move 1 cell down until there is no space left. The player can control it: move it left or right, rotate it by 90 degrees (left or right), or speed up its fall. When it can't fall any further, a new block is created and you take control of it, losing the ability to move the previous block.
The target is to get as much points as possible. Player can get them, by filling a line (which is then removed), for example:
****####!$ <- user scores a point, the line is now removed
&&&& $$$
different character is a different block
Player can lose if there is no place for a new block to spawn.
You can play Tetris online here
The battle
In this challenge, you will have to create a bot to play Tetris. Unlike a normal playthrough, there will be two players playing on a single board.
Let's call the bots A and B.
Block spawning
Every block will start on the left or right side, depending on which bot it belongs to (left - A, right - B). There will be a 2-block margin, so the block will have some space to rotate. Blocks will always spawn aligned to the left/right, not centered.
If a block can't fit, it will try to rotate, so it can be placed. If all 4 directions fail, you die.
Example for 10 width board:
--A----B--
Random size board
Width: random even number between 8 and 16.
Height: random number between 15 and 25.
It's to make it a bit harder. Try to plan your strategy so it fits all the heights!
You gain 1 point per destroyed block. Some bonus rules apply:
Enemy's block isn't my block
You can't use blocks placed by your enemy to gain score. After adding the last block to a line, you gain as many points as the number of sub-blocks you placed in it. Enemy sub-blocks don't count. Example:
Player A filled the line.
AAAABBBAAB
Player A gets 6 points.
Player B doesn't earn any.
If you do a move that removes more than just 1 line, you gain x times more score. Of course, x is the amount of lines.
Player A filled the line
--ABBB
AAABBBABBB <
BBBBBBABBB <
BBBBAAABBB <
B AA ABBB
Player A removed 3 lines.
Player A removed 12 (4+5+3) blocks he owns.
Player A scores 36 points.
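The scoring rule for a multi-line clear could be computed as follows (an illustrative sketch; representing each cleared row as a string of cell owners is an assumption):

```python
def move_score(removed_lines, player):
    # removed_lines: the rows cleared by one move, as strings of cell owners.
    # Score = (sub-blocks you own in those rows) * (number of rows cleared).
    owned = sum(row.count(player) for row in removed_lines)
    return owned * len(removed_lines)
```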
Death & Game Over
When a new block fails to spawn, it tries to rotate 90, 180, 270 or 360 degrees. If it still can't spawn, you lose. Note it won't try to change its position, so be careful and try not to put any blocks in the spawn area.
When you die, you can't place any more blocks.
In case your enemy has more points than you, you lose.
Otherwise, the game doesn't end yet – the enemy will get double points for the next 10 blocks. The game will end when the bonus ends, when he beats your score or when he dies.
When the game is over, the bot with more points wins.
The game will also end automatically after 300 actions. See API > Actions section for more information.
Work in progress: https://github.com/Soaku/Tetris-KOTH
Can't really tell you how it's going to look without a controller, right?
However, here is a preview of how it's going to look:
Every game consists of actions. Your bot function will be executed at the beginning of each action. At the end of each, active tetrominoes will move 1 cell to the bottom.
The controller will wait a configurable amount of time between actions unless the preview is disabled.
Your function should return an integer in at most 0.1 seconds; otherwise your action is terminated and ignored. If it crosses the limit repeatedly, I might remove your bot.
Default loopholes are forbidden.
Aggressive bots are allowed but discouraged.
You cannot use any external libraries. The controller uses jQuery, but you aren't allowed to use it.
Don't try to access the document: don't write or read its data.
Don't write your bot to beat specific enemies by countering their strategies.
Enemy spawn works like an occupied block. You cannot place anything here.
Main scoring
Every bot will play with every other bot.
Scoring works this way:
Victory: 2 points
Tie: 1 point
Lose: 0 points
RedClover
\$\begingroup\$ I feel like the last part (bonus round) there probably isn't such a good idea. especially since bots are mostly programmed to play against one enemy, so it means a regular bot needs to implement special cases for one round, which really doesn't add much. \$\endgroup\$
– Destructible Lemon
\$\begingroup\$ If you do decide to make a massively multiplayer one, it's sort of unfair to bots that will perform better at the side that end up in the middle or vice versa, so making it a cylinder (the rows wrap around into themselves) would help \$\endgroup\$
\$\begingroup\$ Fine, I removed it from the answer itself, but that doesn't mean I can't do it for fun ( ͡° ͜ʖ ͡°) \$\endgroup\$
– RedClover
\$\begingroup\$ Just wanted to point out that your J, L, and T tetrominoes actually have 5 squares each; I'm fairly sure that wasn't intentional. \$\endgroup\$
\$\begingroup\$ The question describes things that happen when "enemy is dead", but does not describe what causes that or what happens when "you are dead". \$\endgroup\$
\$\begingroup\$ @KamilDrakari [On spawn] If a block can't fit, it will try to rotate, so it can be placed. If all 4 directions fail, you die. \$\endgroup\$
\$\begingroup\$ That is an awfully small section for something quite important. I would add a distinct section on Death, clarifying "when dead, you can no longer place blocks" and some edge cases for the 10 bonus pieces, such as what happens if a player dies during their bonus pieces and if they are required to continue placing the full 10 blocks or they can stop once they have a higher score. \$\endgroup\$
\$\begingroup\$ @KamilDrakari Oh, you're right. I'll try to add something about that, when I have some time. \$\endgroup\$
\$\begingroup\$ @ETHproductions I also noticed that J and L are pentominoes. T is actually a hexomino. But... \$\endgroup\$
– Heimdall
\$\begingroup\$ @ETHproductions thanks for pointing this out. Fixed \$\endgroup\$
Make numbers inflate to full width!
code-golf string
Related: Full Width Text
That is a challenge that changes ALL characters into full width -- BY ADDING A SPACE AFTER EACH CHARACTER. However, in this challenge only numbers are transformed, and they are transformed to the REAL FULL WIDTH FORM.
This is a short and simple challenge, so I will keep the description short.
Write a program / function that accepts an ASCII string as input and outputs a UTF-8 string with each digit 0123456789 converted to its corresponding full width form ０１２３４５６７８９. The range of these full width numbers is U+FF10 - U+FF19.
You may output using the following format (As an example This is 1 apple is used):
A UTF-8 string (showing exactly This is １ apple in STDOUT)
The same string as above but displayed in a non-UTF-8 encoding (eg. showing This is 1 apple in Windows-1252 codepage). In this case you must state the codepage of the output
A single list of UTF-8 bytes as integers that represents the transformed string (showing ord("T"),...,ord(" "),239,188,145,ord(" "),...,ord("e") with arbitrary delimiter)
You cannot mix integer output and character output; that is, you cannot literally change This is 1 apple to This is [239,188,145] apple. If you output a list, it cannot be nested either.
Sample 1:
Input: 0123456789
Output: (Format 1) ０１２３４５６７８９
(Format 2) 0123456789 [CP437]
(Format 3) 239,188,144,239,188,145,239,188,146,239,188,147,239,188,148,239,188,149,239,188,150,239,188,151,239,188,152,239,188,153
Input: This is 1 apple
Output: (Format 1) This is １ apple
(Format 2) This is 1 apple [Windows-1252]
(Format 3) 0x54 0x68 0x69 0x73 0x20 0x69 0x73 0x20 0xef 0xbc 0x91 0x20 0x61 0x70 0x70 0x6c 0x65
This is a code-golf, so shortest answer for each language wins. Standard loopholes are forbidden by default.
P.S. I would like to see both practical and esoteric language submissions. Specifically the last two output formats are especially allowed for esoteric languages.
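For reference, the transformation itself is a single offset: U+FF10 sits exactly 0xFEE0 above ASCII "0". A minimal (ungolfed) Python sketch:

```python
def fullwidth(s):
    # Shift ASCII digits by 0xFEE0 (0x30 -> 0xFF10); leave everything else alone.
    return "".join(chr(ord(c) + 0xFEE0) if "0" <= c <= "9" else c for c in s)

print(fullwidth("This is 1 apple"))  # This is １ apple
print(fullwidth("0123456789"))       # ０１２３４５６７８９
```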
Help Runbee manage the pollen! [better title needed?]
The arsonist friendly bee, Runbee, just finished picking up the pollen from the flower garden and now she wants to store it safely in her hive. Each pollen type is labeled with a digit (from 1 to 9). Runbee's "collection" is stored in a row of adjacent honeycomb cells, containing one grain each. For instance, this row could look like this:
But hmm, Runbee doesn't like this order... Because she's tired after flying all day, she can now only move one run of identical adjacent pollen grains to another position in the row to make it seem more organised. Her goal is to have as many identical pollen grains one after another as possible after the reordering.
Given a sequence S of digits ranging from 1 to 9, one shall:
Split S into (the longest possible) runs of consecutive adjacent elements. Then, one of the chunks should be moved to another position in S.
The resulting sequence S' should be chopped again into (the longest possible) runs of consecutive adjacent elements. Your goal is to search for the sequence S' which contains the longest run of equal consecutive elements.
You can either output all such sequences S' (deduplicated or not) or just one of them.
For the example above, the possible ways to generate optimal orderings are (where (...) represent the point of removal and [...] the point of addition):
1 (3) 2 2 4 4 4 5 [3] 3 3 3 7 8 9 9 5 | 1 2 2 4 4 4 5 3 3 3 3 7 8 9 9 5
1 (3) 2 2 4 4 4 5 3 3 3 [3] 7 8 9 9 5 | 1 2 2 4 4 4 5 3 3 3 3 7 8 9 9 5
1 3 [3 3 3] 2 2 4 4 4 5 (3 3 3) 7 8 9 9 5 | 1 3 3 3 3 2 2 4 4 4 5 7 8 9 9 5
1 [3 3 3] 3 2 2 4 4 4 5 (3 3 3) 7 8 9 9 5 | 1 3 3 3 3 2 2 4 4 4 5 7 8 9 9 5
... To be written ..
... To be added ..
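Since the spec is still a draft, here is a sketch of just the first step (splitting into maximal runs), using `itertools.groupby`; the move search and the exact optimisation target are still to be pinned down:

```python
from itertools import groupby

def runs(seq):
    # Split a sequence into its maximal runs of equal adjacent elements.
    return [list(g) for _, g in groupby(seq)]

print(runs([1, 3, 2, 2, 4, 4, 4, 5, 3, 3, 3, 7, 8, 9, 9, 5]))
# [[1], [3], [2, 2], [4, 4, 4], [5], [3, 3, 3], [7], [8], [9, 9], [5]]
```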
Mr. Xcoder
\$\begingroup\$ Suggested testcase: 1 3 2 2 2 2 3 3 1 \$\endgroup\$
\$\begingroup\$ Note. Because TSP (decision version) is NP and maximum independent set is NP-hard, it's possible to have polynomial-time complexity; however given how unrelated those two problems appear to be, the actual transformation would be nontrivial and may be interesting to optimize. \$\endgroup\$
Is the array sorted?
Inspired in part by Creative ways to determine if an array is sorted.
Given an array of integers, find out if the array is sorted. This challenge is as simple as it sounds. I'm surprised that I couldn't find a question like this on PPCG. I'm aware that a lot of golfing languages will have short solutions, but perhaps this question could allow you to showcase a language that's more esoteric than it is practical?
An array of integers, or a sequence of integers if your language doesn't support arrays.
Truthy if the array is sorted, falsey otherwise
[1, 2, 3] => true
[3, 2, 1] => false
[] => true
[1] => true
[-3, -2, -1] => true
[-1, -2, -3] => false
Since this is code-golf, get ready to trim off some bytes! Happy golfing!
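For reference, an ungolfed Python version (assuming non-decreasing order counts as sorted, which matches all of the test cases above):

```python
def is_sorted(a):
    # True iff every element is <= its successor (vacuously true for 0 or 1 elements).
    return all(x <= y for x, y in zip(a, a[1:]))

print(is_sorted([1, 2, 3]))   # True
print(is_sorted([3, 2, 1]))   # False
print(is_sorted([]))          # True
```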
For the sandbox
I have searched thoroughly for a question like this, but I haven't been able to find this question posted previously. If it exists, or if it is too similar to another question, please link the question or comment why this question should/shouldn't be posted.
decision-problem code-golf integer sorting
maxb
\$\begingroup\$ People don't post trivial challenges because it's not interesting to do in most languages. However having trivial challenges is not a too bad thing for esoteric languages like Brain-Flak. Dodos will still have great difficulty handling negative numbers. \$\endgroup\$
\$\begingroup\$ Good point. Since this challenge will mainly be interesting in esoteric languages, I suggest restricting the input to positive integers. It won't make the challenge any different for languages with signed integer types, while allowing other languages to participate without implementing a new number type. \$\endgroup\$
You are filling in a survey. You don't like filling in such things, so you decide to fill it in with random answers automatically.
However, there may be some duplicated questions, and some may be inverted. To simplify the question, we define:
Adding or removing the word "not" or "no" inverts the sentence;
Adding or removing "un" at the beginning of a word inverts the sentence;
Adding or removing "n't" at the end of a word inverts the sentence;
Adding or removing the word "any", "a" or "an" keeps (doesn't invert) the sentence;
Inversion can chain through sentences even if they are not themselves asked. E.g. "Do you own a car?" is twice inverted to "Don't you own no car?", so they should have the same answer, no matter which sentences in between are mentioned.
Input: A list of strings, each of which is a sentence.
Output: A list over two possible values, where inverted sentences are answered differently and equivalent sentences are answered the same.
Are you rich?
Are you unrich?
Are you poor?
Are you not rich?
Possible sample outputs (assuming 0 and 1 as possible outputs)
code-golf. Decided not to use randomness, to encourage more creative solutions.
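One possible reading of the rules (as noted in the comments: count the no/not, un- and n't markers and take the parity) could be sketched like this. `canon` and `answer` are hypothetical helper names, and the naive un-/n't stripping would mishandle words like "under":

```python
def canon(sentence):
    # Normalise a sentence to (key, parity): strip negation markers,
    # flipping parity for each one found, and drop "a"/"an"/"any".
    parity, kept = 0, []
    for w in sentence.lower().rstrip("?").split():
        if w in ("not", "no"):
            parity ^= 1
            continue
        if w in ("a", "an", "any"):
            continue
        if w.startswith("un"):
            parity ^= 1
            w = w[2:]
        if w.endswith("n't"):
            parity ^= 1
            w = w[:-3]
        kept.append(w)
    return " ".join(kept), parity

def answer(sentences):
    # Answer 0 for the first occurrence of each normalised question,
    # then flip the answer for every inverted duplicate.
    base, out = {}, []
    for s in sentences:
        key, p = canon(s)
        out.append(base.setdefault(key, 0) ^ p)
    return out

print(answer(["Are you rich?", "Are you unrich?", "Are you poor?", "Are you not rich?"]))
# [0, 1, 0, 1]
```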
\$\begingroup\$ I would rather have this challenge something closer to just "identify duplicate questions" rather than "create a possible set of answers such that you answer duplicate questions with appropriate inversion". Something like "Given two sentences, output one of 3 distinct values to indicate whether they are "duplicate", "inverted", or "unrelated"". If you insist on keeping the output as possible answers, then your "output" line needs to say so rather than just "a list" \$\endgroup\$
\$\begingroup\$ @KamilDrakari If no random requirement, false relation or something else may exist. Considered \$\endgroup\$
\$\begingroup\$ @KamilDrakari That is a good option, because it is a large part of the challenge, and the other part is likely to be implemented using brute force ⇒ very slow, not testable. \$\endgroup\$
\$\begingroup\$ @user202729 Nope, they'll count nos, uns and n'ts and mod2 \$\endgroup\$
Separate the syllables - in Finnish
Given a Finnish word as a string, separate each syllable.
In Finnish, syllables are very important for many different things in the grammar - often, the first place a foreigner starts with the language is with the syllable structure. Your job, as a prospective learner, is to decompose the given word into its syllables. To do this, your program or function must return a string in which the syllables have been separated by a single | or -.
Given any word, you can split the syllables apart by following five simple rules. In the below mapping, V indicates a vowel (aeiouyäö), a diphthong or a long vowel (see below) and C indicates a consonant (bcdfghjklmnpqrstvwxz). '-' indicates where the word should be split. Patterns are "greedy", so the longest matching pattern is the one that should be applied.
VV -> V-V
VC -> VC
VCV -> V-CV
VCCV -> VC-CV
VCCCV -> VCC-CV
What the &@?! are diphthongs?
Because Finnish is a complex language, we can't just let foreigners get off that easy with learning the basics! So, we came up with diphthongs - pairs of vowels that are considered a single vowel when determining syllables. The list of the ones important to this challenge is as follows (in an arbitrary order):
ai, ei, oi, ui, yi, äi, öi, ey, iy, äy, öy, au, eu, ou, iu, ie, uo, yö
Furthermore, we have long vowels - these also count as a single vowel, and are simply the vowel repeated twice:
aa, ee, ii, oo, uu, yy, ää, öö
You should note that:
Default I/O is allowed.
You may take input and produce output in any string encoding that contains at least the following characters: abcdefghijklmnopqrstuvwxyzåäö-| and/or their upper-case equivalents.
The encoding must be the same for input and for output.
This is code-golf. Shortest answer wins.
Give me the solutions!
Examples are given as input -> output with syllable boundaries indicated by a -.
perkele -> per-ke-le
sauna -> sau-na
koskenkorva -> kos-ken-kor-va
nopea -> no-pe-a
maa -> maa
suomenkieli -> suo-men-kie-li
esimerkki -> e-si-merk-ki
itsenäisyyspäivä -> it-se-näi-syys-päi-vä
koodigolffi -> koo-di-golf-fi
toritapaaminen -> to-ri-ta-paa-mi-nen
Happy golfing!
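A reference (non-golfed) sketch of the rules: tokenise greedily into vowel units (diphthongs and long vowels before single vowels), then insert a break between two adjacent vowel units and before the last consonant of every cluster that precedes a vowel unit. This reproduces all of the examples, though greedy diphthong matching may over-join vowels in compound words the rules don't cover:

```python
import re

VOWEL = ("(?:ai|ei|oi|ui|yi|äi|öi|ey|iy|äy|öy|au|eu|ou|iu|ie|uo|yö"
         "|aa|ee|ii|oo|uu|yy|ää|öö|[aeiouyäö])")

def syllables(word):
    # Greedy tokenisation: two-letter vowel units are tried before singles.
    toks = re.findall(VOWEL + "|.", word)
    isv = [re.fullmatch(VOWEL, t) is not None for t in toks]
    parts, seen = [], False
    for i, t in enumerate(toks):
        if isv[i] and seen:
            if isv[i - 1]:
                parts.append("-")                   # V-V
            else:
                parts.insert(len(parts) - 1, "-")  # V(C*)C-CV: break before last C
        seen = seen or isv[i]
        parts.append(t)
    return "".join(parts)

print(syllables("itsenäisyyspäivä"))  # it-se-näi-syys-päi-vä
```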
\$\begingroup\$ I don't understand how to use the patterns. What is the point of the VC pattern if it does not have any split? \$\endgroup\$
\$\begingroup\$ @feersum It's just there to indicate that any trailing consonants are considered part of the same syllable as the preceding vowel (i.e. the last example). \$\endgroup\$
Validate a simple bell-ringing method
A simple bell-ringing method on n bells has the following characteristics.
Each row exchanges at least one pair of adjacent bells. Any given bell can only take part in one exchange per row.
The notation for a row lists those bells that are not exchanged. Each row is separated with a .. However, rows that exchange all bells are notated with an X and do not use . separators, so that .16..16. would actually be notated X16X16X.
The first n-1 rows always exchange bell 1 with the next bell so that it becomes last.
The nth row, also known as the half lead, does not exchange bell 1.
The next n-1 rows are the same as the first n-1 rows but in reverse order. This brings bell 1 back to its original position.
The 2nth row, also known as the lead end, also does not exchange bell 1.
Up to three different lead ends may be used, being named "Plain", "Bob" and "Single".
After the lead end, the 2n-1 rows are rung again, possibly with a different lead end, and this repeats according to a predetermined pattern known as a touch.
The method starts and finishes with the bells in ascending order.
The following parts are intended to be submitted as separate questions.
Given any number of rows in bell-ringing notation, return a truthy or falsy value depending on whether they represent a valid permutation.
Given any number of rows in bell-ringing notation, convert them to permutations and output them in the most convenient format for input into subsequent parts.
Given a list of permutations corresponding to the first n rows of a method, append the first n-1 rows in reverse order, and then output the result of combining the permutations.
Given the permutation from part 3, up to three different lead end permutations as output by part 2, and a string representing the order in which the lead ends are to be used, calculate the permutations before and after each lead end, ensuring that they are all different and that the last permutation is the identity permutation.
Main rows: 5.1.5.1.5
Plain lead: 125
Bob: 145
Touch: PBPPBPBPPB
5.1.5.1.5 means that the permutations are (12)(34) and (23)(45) repeated, so the permutation before the first lead end in this case turns out to be just (23)(45) again. The plain lead is (34) and the bob is (23). The relevant permutations for the touch are as follows:
(23)(45)
then finishing with the identity permutation as desired.
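To illustrate the notation, here is a hedged sketch of part 2 for single-digit bell numbers: a row lists the bells left in place (X means none), and all remaining bells swap in adjacent pairs. `row_perm` is a hypothetical name, and valid notation is assumed:

```python
def row_perm(n, row):
    # Resulting order of bells after one row of notation, starting from
    # ascending order. Bells named in `row` stay put; the rest pair up.
    fixed = set() if row == "X" else {int(c) for c in row}
    perm = list(range(1, n + 1))
    i = 0
    while i < n - 1:
        if perm[i] in fixed or perm[i + 1] in fixed:
            i += 1
        else:
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
            i += 2
    return perm

print(row_perm(5, "5"))  # [2, 1, 4, 3, 5]  i.e. (12)(34)
print(row_perm(5, "1"))  # [1, 3, 2, 5, 4]  i.e. (23)(45)
print(row_perm(6, "X"))  # [2, 1, 4, 3, 6, 5]
```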
\$\begingroup\$ I have to say, I am entirely confused by this and haven't the slightest idea what any of it means to the point that I can barely even identify what things confuse me. However, I can at least ask one thing: Rule 3 states "exchange bell 1 with the next bell so that it becomes last". How does "exchange bell 1 with the next bell" result in "it becomes last"? If I have 3 bells I would expect the result to be 213. Are you using a different definition of "exchange" than me? \$\endgroup\$
\$\begingroup\$ @KamilDrakari There are n-1 rows, which is exactly the minimum number needed to get bell 1 into last place, as long as each row moves bell 1 a further step each time. \$\endgroup\$
\$\begingroup\$ In point 2 should .16..16. say .16.16.? \$\endgroup\$
\$\begingroup\$ @PeterTaylor No, .16..16. is a list of five strings, three of which are empty. \$\endgroup\$
Prime Steganography
code-golf steganography string prime
Alice and Bob have devised a steganography encryption method where they encode the letters of their message (the "secret message") into the letters in the prime positions of the larger message (the "carrier message"). Your job is to create a program which takes a secret message as input and outputs a carrier message that hides the secret message in the letters in the prime positions.
Only letter characters in the carrier message count toward positions.
The secret message is not case sensitive and non-letters do not need to be encoded in any way.
The carrier message must contain only words that can be found in this list of English words. The carrier message does not need to make sense.
The carrier message can contain punctuation (any of .,?!"':;-), whitespace, and letter characters only.
The carrier message cannot hide a message that is longer than the secret message: its letter count must not reach another prime position beyond the last one needed. For instance, if the carrier message is 38 letters long and the secret message is only 11 letters long, the carrier would encode an extra letter and therefore is invalid. (Sandbox: help me phrase this better)
Input: 'Hello world'
Possible Output:
Ah! Eels live! Oh wall! Oars will board.
Lining up all the letters, you get these positions
A h E e l s l i v e O h w a l l O a r s w i l l b o a r d
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
All the letters in the prime positions would be extracted. h at 2, E at 3, l at 5, l at 7, O at 11, w at 13, O at 17, r at 19, l at 23, and d at 29. Thus the full message is hEllOwOrld, but case doesn't matter.
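The extraction step just described (keep only the letters, then read the ones at prime positions, 1-indexed) can be sketched as:

```python
def is_prime(n):
    # Trial division is plenty for carrier-message lengths.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def decode(carrier):
    # Only letters count toward positions; case is ignored.
    letters = [c for c in carrier if c.isalpha()]
    return "".join(c for i, c in enumerate(letters, 1) if is_prime(i)).lower()

print(decode('Ah! Eels live! Oh wall! Oars will board.'))  # helloworld
```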
Standard rules and loopholes apply.
You can read in the word list using any method you would like, or not at all. Other word lists may be used and will not count toward byte count, but every word in your output must appear somewhere in the previously mentioned word list.
You may assume that the secret message only includes letters, whitespace, and punctuation.
Your program does not need to be deterministic, but does need to always output a valid carrier message which correctly encodes the secret message.
You may assume that the secret message is possible to encode with English words from the list. (Even though it may not be for one reason or another)
Sandbox note
I'm thinking this might make for an interesting popularity contest. The objective criteria is validity of the carrier message. The subjective criteria is how interesting/entertaining/convincing the messages that it outputs are.
Beefster
\$\begingroup\$ Considering that only the letters of the secret message are encoded anyway, I think it would make more sense to only provide the letters of a secret message without punctuation. \$\endgroup\$
\$\begingroup\$ I'd reccomend giving the list of words as input rather than a defined word list. This makes it much easier to test \$\endgroup\$
\$\begingroup\$ @JoKing That would be covered by "You can read in the word list using any method you would like, or not at all. Other word lists may be used and will not count toward byte count, but every word in your output must appear somewhere in the previously mentioned word list." It's a pretty massive list of words, so any subset of it will do for testing. \$\endgroup\$
Stego-nography: Hide A Stegosaurus with Steganography!
cops-and-robbers steganography ascii-art
This doesn't work as a cops and robbers challenge. Leaving for possible inspiration.
Making something that converts an image with a live dinosaur into one with a dead dinosaur would be a clone of Hiding information in Cats.
Devise a method for hiding this ASCII stegosaurus in an image.
. .
/ `. .' "
.---. < > < > .---.
| \ \ - ~ ~ - / / |
_____ ..-~ ~-..-~
| | \~~~\.' `./~~~/
--------- \__/ \__/
.' O \ / / \ "
(_____, `._.' | } \/~~~/
`----. / } | / \__/
`-. | / | / `. ,~~|
~-.__| /_ - ~ ^| /- _ `..-'
| / | / ~-. `-. _ _ _
|_____| |_____| ~ - . _ _ _ _ _>
Source: custom 'cow' for cowsay
Create an encoder and a decoder. Post the decoder and two versions of three different images, each pair with and without the encoded stegosaurus. Include hashes for each of the six images.
The decoder must be able to decode any arbitrary ASCII text of at least the same size as the stegosaurus (either 677 characters or 14x63 characters if you assume the text is right-padded. Be consistent).
Only the part of the encoded message that actually includes the stegosaurus matters; extra bytes in the message can be ignored by the user or decoder, but must be either before/after the 677 characters or outside the 14x63 bounding rectangle, depending on how you encode it. (Sandbox: this phrasing is awkward)
In short, it must be possible for the robbers to kill your dinosaur (link to robber thread goes here) by replacing your stegosaurus with a dead one.
You may not use asymmetric encryption in your solution. (For example, encrypting the stegosaurus with your private key before hiding it and then putting the public key in the decoder)
You can use any lossless image format of your choice.
You do not need to post the source code of your decoder. Precompiled Windows or Linux binaries are allowed, but must be packaged with all of their dependencies and be able to be run without any installation. Obfuscation and minification are allowed for interpreted languages and likewise should prepackage all third-party dependencies.
You may not host your decoder on a webservice. The reason why is that it enables you to use symmetric encryption with no way to derive the key.
As an objective criteria for the encoded images to be not easily distinguishable, the maximum absolute pixel difference between the images should be less than 4/255 for each color component in the entire image.
If at least one of your dinosaurs survives after one week, your dinosaurs are safe and you earn 10 points (after which point you can explain your algorithm). If your dinosaurs are all killed within a week, you score 1 point for each 24 hour period they survived.
The cop with the most points wins.
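The objective criterion above (maximum absolute difference below 4/255 per color component, i.e. below 4 on a 0-255 scale) can be checked with a sketch like this, assuming images are decoded to rows of 0-255 RGB tuples; `indistinguishable` is a hypothetical helper name:

```python
def indistinguishable(img_a, img_b):
    # Every component of every pixel must differ by less than 4 (out of 255).
    return all(abs(ca - cb) < 4
               for row_a, row_b in zip(img_a, img_b)
               for px_a, px_b in zip(row_a, row_b)
               for ca, cb in zip(px_a, px_b))

print(indistinguishable([[(10, 20, 30)]], [[(12, 20, 27)]]))  # True
print(indistinguishable([[(10, 20, 30)]], [[(15, 20, 30)]]))  # False
```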
You're a dinosaur hunter. The cops have hidden stegosauruses in three images. It's your job to find the stegos and kill them. You are to change the message hidden in each image from the original stegosaurus to this dead one:
..-~ ~-..-~
\~~~\.' `./~~~/
\__/ \__/
/ / \ "
_____ .' | } \/~~~/
| | .' / } | / \__/
--------'' | / | / `. ,~~|
.' X __| /_ - ~ ^| /- _ `..-'
(_____ _-~ | / | / ~-. `-. _ _ _
`__U___--~ |_____| |_____| ~ - . _ _ _ _ _>
Cracking a cop submission requires that you kill all of its hidden dinosaurs. You earn a point for each submission you crack. Note that the cop can encode garbage or null data around the dinosaur (either by bounding box or before/after the 677 characters), but you do not need to leave this data untouched.
The robber with the most cracked submissions wins.
Does this work as a cops-and-robbers challenge? Any problematic loopholes or ways to abuse this?
\$\begingroup\$ It's possible you simply considered this implied, but I would recommend indicating a timeframe after which a cop's answer is "safe", and requiring that safe answers post their encoder to prove that it can indeed hide arbitrary images. Requiring the decoder from the start seems best so that robbers can validate potential cracks. Also, each portion of the challenge needs some kind of scoring method. \$\endgroup\$
\$\begingroup\$ @KamilDrakari: It will probably be a week. I just wanted to test the waters on the idea first since cops and robbers challenges are hard to make. \$\endgroup\$
\$\begingroup\$ I'm really not sure if the stenography really adds much to the challenge. Stenography is all about hiding the fact that secrets are being transferred in the first place. Here, though, there's no reason to make the image look "normal", and so it becomes a "implement your own asymmetric crypto algorithm" \$\endgroup\$
\$\begingroup\$ If "implement your own asymmetric crypto algorithm" is what you want, then I don't think this should be posted. Banning crypto is fine if it is closing a loop hole, but here it's literally "Do crypto without using crypto". The line will be too hard to define IMO. \$\endgroup\$
\$\begingroup\$ @NathanMerrill: I'm definitely banning crypto to close a loophole because the challenge becomes trivial to make uncrackable with public key encryption. Symmetric crypto would be fine because you'd be able to reverse-engineer the decoder to derive a key. I suppose a possibility for better patching the loophole would be to require lossless images and require that changing a bit in the uncompressed pixel data changes at most one character in the output. That also effectively bans most hard-to-crack crypto. \$\endgroup\$
\$\begingroup\$ Hmm... Looking over the typical cops-and-robbers challenges, none of them are related to crypto... and I can see why. I like the general concept though. I think it's fun and whimsical and I think I can convert it into one or two code challenges (code-golf, code-challenge, and popularity-contest might all work) \$\endgroup\$
Buddhabrot (speed edition)
Your goal is to generate an image like this one:
(Sandbox: this image will be replaced by a valid solution image for the challenge)
This is a render of a 2D histogram called Buddhabrot. The algorithm for generating it is very simple. If you have heard of (or written a program to generate) the Mandelbrot Set, this will feel familiar. The algorithm goes as follows:
Generate a random complex number \$z_0\$
Iteratively perform the calculation \$z_{i+1}=z_i^2+z_0\$. Do this \$N\$ times
Check if the absolute value of the number \$z_N\$ is larger than 3
If it is, calculate all the numbers \$z_i, 0 \leq i\leq N\$ again and map them to pixels.
For each pixel that is mapped to, add \$1\$ to the counter for that pixel
Repeat the 5 steps above a few million (or billion/trillion) times.
Since this algorithm depends on random sampling rather than just a single calculation for each pixel, it is not as easy to parallelize on a GPU. However, a GPU will still perform better than a CPU for this task. Since it is random, it is also dependent on having a large number of iterations to generate a smooth image. With a low number of iterations, the output image will be grainy.
To participate in this challenge, write a full program which creates a render of the Buddhabrot. For this challenge, the maximum iteration number is set to 100. That means that for every random complex \$z_0\$, you must write to the counter array if \$|z_{100}| > 3\$. Note that if \$|z_i| > 3\$ for some \$i < 100\$, then you can quit the calculation, since you know that \$|z_{100}| > 3\$.
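A miniature, unoptimised CPU sketch of this loop (the challenge itself wants a heavily optimised 1024x1024 render; the grid size here is shrunk just to illustrate the logic; the early exit is safe because any point with |z| > 3 already lies outside the [-2, 2] view):

```python
import random

def buddhabrot(samples, size=64, max_iter=100):
    # 2D histogram of escaping orbits, per the algorithm in the post.
    counts = [[0] * size for _ in range(size)]
    for _ in range(samples):
        z0 = complex(random.uniform(-2.5, 1.5), random.uniform(-2.0, 2.0))
        z, orbit = z0, [z0]
        for _ in range(max_iter):
            z = z * z + z0
            orbit.append(z)
            if abs(z) > 3:      # escaped: |z_100| > 3 is then guaranteed
                break
        if abs(z) > 3:
            # Replay the orbit into the counters. Points with |z| > 3
            # fall outside the view, so stopping early loses nothing.
            for p in orbit:
                x = int((p.real + 2) / 4 * size)
                y = int((p.imag + 2) / 4 * size)
                if 0 <= x < size and 0 <= y < size:
                    counts[y][x] += 1
    return counts

random.seed(42)
grid = buddhabrot(10000, size=64)
print(sum(map(sum, grid)))  # the challenge's score: the total of all counters
```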
If you want some help to get you started, I recently made an attempt to optimize this problem. You can read about my journey if you want.
Generating the image
A basic algorithm for rendering a Buddhabrot image is described above. To make it efficient, I would suggest that you use an unsigned int* to hold all the counters for the pixels. When running the program, you calculate which pixel should have its counter increased, then you add 1 to that index in the counter array. When you are done with the random number generation, you take your array of counters, divide every element by the array's maximum value. Then all values in the array will be in the range \$[0,1]\$. You can then use that value as the grayscale color of the corresponding pixel.
The output image must be exactly 1024x1024 pixels
The maximum iteration number is 100
The complex numbers must be sampled from the rectangle in the complex plane given by \$-2.5 < Re(z_0) < 1.5, -2 < Im(z_0) < 2\$ (Sandbox: these limits are subject to change). The sampling does not have to be uniform (you might discard points which you know are not important). However, it does need to result in a picture which is visually similar to the one in the post.
The output image must be a visualization of the area in the complex plane given by \$-2 < Re(z) < 2, -2 < Im(z) < 2\$ (Sandbox: these limits might change slightly)
You have 20 minutes to perform the sampling and generate the image. I will assist with tweaking to maximize your score.
You are free to use CUDA or OpenCL to generate the image. For any other methods of implementation, please include instructions on how to get the environment ready.
To ensure that we have an objective criteria for scoring, your score will be the total sum of all pixel counters. Since the Buddhabrot is a 2D histogram in its essence, this can be seen as the total number of samples. To be specific, you calculate your score before dividing by the maximum value and generating the image. The highest score wins. Note that if your approach is similar to mine, the theoretical maximum score is limited by the memory bandwidth. The bandwidth of my GPU is 484GB/s, and since each counter is 4 bytes, you can get 121 billion iterations per second. However, there might be more effective ways to save the samples using the CUDA caches, which could lead to higher performance than that. (Sandbox: I'm not 100% sure if my maximum speed calculation is valid)
Since I already have a working example for this problem, I have decided to add a further incentive. If your solution is 2x faster than my implementation, I'll reward 50 reputation once one month has passed from posting the question. If it is 5x faster, the reward goes up to 100 rep. If it is more than 10x faster, I'll award 150 rep. If you somehow manage to make your solution 30x faster, I'll throw in 200 rep. If multiple answers are eligible for the bonus, only the fastest one will be rewarded. If no answer is eligible for the bounty once one month has passed, the bounty will be rewarded to the first one who claims it. However, you can only claim one bounty, so if you have made your solution 5x faster but want to claim the 10x bounty, I will give you time to optimize your solution to reach the next bounty.
Intel 5820K 6-core 12-thread CPU running at 4.4GHz
16GB DDR4 RAM
NVIDIA GTX 1080Ti (11 GB GDDR5X, 3584 CUDA cores)
CUDA 9.0 (I'll add information about C++ version and other relevant info)
Windows 8.1 (sorry)
fastest-code math complex-numbers fractal graphical-output
Right now there are a few things that need to be clarified in the description.
I will also update the post with an image that is 1024x1024 pixels. Is the question clear?
Do I need to clarify anything besides the information that's left out right now?
Is it okay to add reputation rewards from the start to attract answers?
Are GPU challenges welcome? A short discussion about hardware was had in the chat, and I'm aware that not everyone has a NVIDIA GPU. That's why I made sure that there are OpenCL implementations of this problem.
I say that the sampling should be uniform, but there have been some optimized solutions using importance sampling for this specific problem, should I allow that?
\$\begingroup\$ Step 6 is "repeat the 4 steps above". Does that mean "repeat steps 2-5", or is it a mistake and should be "5 steps"? \$\endgroup\$
\$\begingroup\$ Two parts of the challenge leave me scratching my head when combined. When describing the formula you say that "if |zi|>3 for some i<100, then you can quit the calculation, since you know that |z100|>3." which seems to imply that |zi+1|>|zi| However, in the specification you assert that the range of possible values for z0 should be identical to the range of values displayed. Either this means some samples map to pixels off-screen or... I guess I could be misunderstanding something? \$\endgroup\$
\$\begingroup\$ @KamilDrakari I'll address all of these points more thoroughly later today. Yes, it should say repeat the 5 steps. As for the scoring, it is random, but since a good program will perform $>10^{10}$ iterations per second, the standard deviation will be very low. To maximize your score, you want to perform steps 1-5 as many times as possible within the time limit. And yes, samples could map to pixels off-screen. Those samples do not count towards your score. It is not always true that $|z_{i+1}|>|z_i|$, but if the absolute value goes above 2, it will start growing towards infinity. \$\endgroup\$
– maxb
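The bailout logic maxb describes (stop iterating once the absolute value exceeds the threshold, since the orbit is then guaranteed to diverge) is the standard escape-time check. A minimal sketch in Python — the challenge's actual update map is not shown in this excerpt, so `step` is a placeholder argument here:

```python
def escapes(z0, step, max_iter=100, bailout=3.0):
    """Return True if the orbit of z0 under `step` leaves the bailout
    radius within max_iter iterations (so |z_100| > 3 is guaranteed)."""
    z = z0
    for _ in range(max_iter):
        z = step(z)
        if abs(z) > bailout:
            return True  # the orbit diverges; no need to finish the loop
    return False

# Example with the classic quadratic map z -> z^2 + c (an assumption,
# not necessarily the challenge's formula):
print(escapes(0j, lambda z: z * z + 0.1))      # bounded orbit: False
print(escapes(2 + 2j, lambda z: z * z + 0.1))  # escapes immediately: True
```

The early exit is where the claimed ">10^10 iterations per second" budget comes from: most escaping samples bail out long before iteration 100.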
\$\begingroup\$ Ah, I think to resolve my confusion about the scoring method you should add somewhere "Highest score wins" or "higher scores are better". \$\endgroup\$
\$\begingroup\$ "The sampling must not be uniform". In that case, you need to specify what it must be. \$\endgroup\$
\$\begingroup\$ @PeterTaylor I'm sorry, I meant "the sampling does not have to be uniform". I have seen some work with importance sampling, but I have not implemented that myself. \$\endgroup\$
A Golden March
code-golf math arithmetic fibonacci geometry number
Draw a circle whose circumference is the golden mean. Choose a point and label it 1, then move clockwise around the circle in steps of arc length 1, labeling the points 2, 3, and so on. At each step, the difference between each pair of adjacent numbers on the circle is a Fibonacci number.
from Futility Closet.
Given some natural number \$ n \geq 1 \$, determine the first \$n\$ points as described above. Then determine all differences between pairs of adjacent numbers and return them as a list.
The list should contain all the differences in the order in which they appear.
It should start with the difference between \$1\$ and one of its adjacent neighbours. Then you need to continue recording the differences in the direction you started with.
to be determined.
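The construction is easy to simulate: place label \$k\$ at arc position \$k \bmod \varphi\$ on a circle of circumference \$\varphi\$, sort the positions, and read off differences between circularly adjacent labels. A sketch — note it starts the list from the label at the smallest arc position rather than from 1, so this illustrates the construction, not the exact (still to-be-determined) output convention:

```python
def golden_march(n):
    phi = (1 + 5 ** 0.5) / 2  # circumference = golden mean
    # label k sits at arc position k mod phi; sort labels by position
    order = sorted(range(1, n + 1), key=lambda k: k % phi)
    # differences between circularly adjacent labels
    return [abs(order[i] - order[(i + 1) % n]) for i in range(n)]

print(golden_march(5))  # [3, 2, 3, 2, 2] -- all Fibonacci numbers
```

By the three-distance theorem applied to the golden rotation, every difference is a Fibonacci number, which is the Futility Closet observation being replicated.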
Prime number construction game
Inspired by this post, here is a "single player" version of the prime number construction game.
The game:
Start with a digit between 1 and 9.
Add a digit to the end, such that the resulting number is a prime number.
Repeat from step 2 until there are no more possible prime numbers.
Write a function (or required functions) that returns the sum of the lengths of the largest prime numbers you can get starting with each digit, following the game rules above.
The score:
Scoring will have two components:
The length of the code, in bytes (it's code-golf, after all), and
The sum of the length of the largest primes obtained, starting with each digit.
The total score will be the length of the code (1) minus the sum of the length of the largest primes obtained (2).
The lowest score wins.
Here's a code sample using R (verbose and non golfed):
isprime <- function(x) {
  for(i in 2:sqrt(x))
    if(x %% i == 0)
      return(FALSE)
  return(TRUE)
}

next_prime <- function(x) {
  for(i in sample(c(1,3,7,9))) {
    y <- strtoi(paste(c(x,i), collapse=''))
    if(isprime(y))
      return(y)
  }
}

longest_primes <- function(x) {
  primes <- 1:9
  for(d in 1:9) {
    y <- d
    while(!is.null(y)) {
      primes[d] <- y
      y <- next_prime(y)
    }
  }
  return(sum(nchar(primes)))
}
Here's a sample run of the code above:
Digit | Largest prime obtained | Length
------+------------------------+-------
1 | 19139 | 5
2 | 29399999 | 8
3 | 3797 | 4
5 | 53 | 2
7 | 719333 | 6
9 | 977 | 3
Total | | 38
Assuming the byte count of my code is 469, my final score is 469-38=431.
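The R sample above appends a randomly chosen final digit at each step, so it usually misses the true maxima (compare the totals: 38 in the sample run versus the exhaustive 63 implied by the comments). The chains are short enough to search exhaustively; a sketch in Python, taking the longest chain and breaking ties by value:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def longest_from(p):
    """Largest number reachable from p by appending digits, keeping every
    intermediate result prime (longest chain first, then largest value)."""
    best = p
    for d in (1, 3, 7, 9):  # an appended even digit or 5 never yields a prime > 5
        q = p * 10 + d
        if is_prime(q):
            cand = longest_from(q)
            if (len(str(cand)), cand) > (len(str(best)), best):
                best = cand
    return best

total = sum(len(str(longest_from(d))) for d in range(1, 10))
print(total)  # 63, matching the "-63" noted in the comments
```

The search tree is tiny (these are essentially right-truncatable chains), so brute force finishes instantly.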
Posting your solution:
Please use the following header for your solution:
# [Language]: [Byte count] - [Sum of lengths] = [Total score].
Please include an explanation to your answer.
Miscellaneous:
I think this is a simple, yet fun, challenge. I've searched the site, and I haven't found a related challenge. If there's one, please point me to it.
I tried sometime ago to post a challenge, and I wasn't careful enough before posting (hence, I deleted it). I'd like to post a good first challenge. Any feedback will be appreciated.
Barranka
\$\begingroup\$ Unfortunately, the possible results are very small, so this challenge may not be that interesting. (They are: 1979339333, 1979339339, 23399339, 29399999, 37337999, 4391339, 59393339, 6133373, 6733919, 6733997, 73939133, 839, 9719, for a total of -63.) Be aware that most golf languages will get a score near -63 just by brute forcing. \$\endgroup\$
\$\begingroup\$ @japh That may be. I just thought it would be fun. Maybe a "King of the Hill" dynamic would work? Bots competing to eliminate opponents generating primes? (Like the game mentioned in the linked post) \$\endgroup\$
– Barranka
Is it a D?
Given an image, check if it's a letter D.
Here D is defined as:
Let pixels be labeled \$(x,y)\$, where larger \$x\$ means further right and larger \$y\$ means further up. The image is a D if there exist \$x_0, x_1, f_0, f_1, f_2, f_3\$ such that:
\$ (f \rightarrow (x\rightarrow f(x+1)-f(x)))^n (f_i) (x) >0\$ for \$n>0, 0\leq i<4\$
\$ x_0<x_1\$
Pixel \$(x,y)\$ is true iff \$y>f_0(x) \text{ and } y<-f_1(x) \text{ and } (y>-f_2(x) \text{ or } y<f_3(x) \text{ or } x_0\leq x<x_1)\$
There should be a hole, i.e. a false pixel that can't reach the border without crossing a true pixel
Shortest code wins. Don't mind if a D doesn't follow the rule, or if a symbol that follows the rule doesn't look like a D at all
TODO: add test cases
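The hole condition in the last rule is a standard flood-fill test: a false pixel is trapped iff it cannot be reached from the border through false pixels. A sketch of that check on a boolean grid (the \$f_i\$ machinery above is left out):

```python
from collections import deque

def has_hole(img):
    """True iff some false pixel cannot reach the border
    without crossing a true pixel."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            on_border = y in (0, h - 1) or x in (0, w - 1)
            if on_border and not img[y][x]:
                seen[y][x] = True
                q.append((y, x))
    while q:  # flood the "outside" region of false pixels
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not img[ny][nx] and not seen[ny][nx]:
                seen[ny][nx] = True
                q.append((ny, nx))
    return any(not img[y][x] and not seen[y][x]
               for y in range(h) for x in range(w))

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(has_hole(ring))  # True: the centre pixel is enclosed
```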
\$\begingroup\$ A couple of examples would help here. Also, if I've understood the conditions correctly, I think you mean \$y>-f_2(x)\$. \$\endgroup\$
\$\begingroup\$ Also, those are some fast-growing \$f_i\$. Your font must have really, really tall D's. \$\endgroup\$
There are 1000 phones in a city. Add some switches so that we can freely decide which pairs of phones to link (any number of pairs). Two phones are connected iff some closed switches directly or indirectly link them.
In graph-theoretic terms, phones and "new points" are vertices, switches/links are edges, and you need to construct a graph with the least number of edges such that, for every possible list of pairs of the "phone" vertices, there exists a configuration of the switches (a subgraph) in which vertices in the same pair are in the same connected component, and vice versa.
You need to answer with:
A list of links, where 0–999 are the 1000 phones, and 1000 and up are auxiliary points.
A program that takes some pairs of phone ids, where no two numbers are the same, and outputs the switches that need to be closed. It should run on TIO in 60 seconds.
Fewest switches win.
A sample solution may be:
1. [(i,j) for i in range(1000) for j in range(1000,1500)]
2. def f(a):
       print ([(a[i][0],1000+i)for i in range(len(a))]+[(a[i][1],1000+i)for i in range(len(a))])
Which uses 500000 switches.
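A submission can be checked mechanically: close the listed switches, then verify with a union-find that two phones end up connected exactly when they are paired. A sketch of such a verifier (the function names are mine, not part of the challenge):

```python
def verify(n_phones, closed, pairs):
    """closed: list of (u, v) switch endpoints to close;
    pairs: list of (a, b) phone pairs that must be connected."""
    n = max([n_phones - 1] + [max(u, v) for u, v in closed]) + 1
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in closed:
        parent[find(u)] = find(v)

    group = {}
    for i, (a, b) in enumerate(pairs):
        group[a] = group[b] = i
    for x in range(n_phones):
        for y in range(x + 1, n_phones):
            connected = find(x) == find(y)
            # distinct sentinels so two unpaired phones never compare equal
            same_pair = group.get(x, ('x', x)) == group.get(y, ('y', y))
            if connected != same_pair:
                return False
    return True

# The sample solution closes (a, 1000+i) and (b, 1000+i) for pair i:
pairs = [(0, 1), (2, 3)]
closed = [(0, 1000), (1, 1000), (2, 1001), (3, 1001)]
print(verify(4, closed, pairs))  # True
```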
SN.
Should I require that when a connection is added or removed, there's a series of moves so that no other connections are affected?
Directly using a sorting network for \$\mathcal O(n \log^2 n)\$ switches seems to use more switches than the \$\mathcal O(n^2)\$ solution at this size. Should I have more phones?
I'll Just Make A Snake
I noticed a post (now deleted) that pointed out that 05AB1E (legacy), when called with no arguments and non-empty input, will make a "snake" using the input:
Not sure what to do now with the input, so I'll just make a snake:
Cute, and seems simple enough. However, the snake generation has a few quirks that I thought would make an interesting challenge to replicate.
When given a non-empty string as input, a snake is generated using the following algorithm:
If the input string is shorter than 17 characters, pad the string with duplicates of itself until its length is >= 17.
Generate a snake segment by replacing the non-space characters of the pattern with characters from the input string, following the direction of the snake. (Print characters right-to-left 5, up-to-down 3, left-to-right 5, up-to-down 4.)
If there is a previous snake segment generated, replace its last line with the first line of this segment.
Remove the used characters from the string.
Discard the first character of the remainder of the string. If there is no remainder, instead set the remainder to the original input string, minus its first character.
Pad the string with copies of the original input string until its length is >= 17.
Repeat steps 2-6 N times, where N = Length of input string.
Return the generated snake.
You do not have to handle empty inputs or non-string inputs.
You do not have to print the beginning message "Not sure what to do now with the input, so I'll just make a snake:".
Note: 05AB1E (legacy) actually has some different behavior when given a base-10 integer as input. We will be ignoring that and treating all inputs as strings.
Input and output may be in any reasonable format.
Trailing spaces and/or a single trailing newline are acceptable.
This is code-golf, so shortest answer in bytes for each language wins. No answer will be marked as the answer.
Standard rules apply.
If possible, please add a link with a test for your code (i.e. TIO).
Adding an explanation for your answer is highly recommended.
Here is an (ungolfed) Python 2 program which replicates this behavior, for clarity and generation of additional test cases.
pattern = """{0}{1}{2}{3}{4}\n    {5}\n    {6}\n    {7}\n{12}{11}{10}{9}{8}\n{13}\n{14}\n{15}\n{16}"""

def print_snake(in_str):
    print "Not sure what to do now with the input, so I'll just make a snake:\n\n"
    l = len(in_str)
    snek = [0]
    if l < 19:
        d, m = divmod(19, l)
        if m:
            d += 1
        in_str = in_str * d
    print_str, lost_char, rem = in_str, "", ""
    for i in range(l):
        print_str = rem + in_str
        print_str, lost_char, rem = print_str[:17], print_str[17], print_str[18:]
        snek.pop()
        snek += (pattern.format(*print_str)).split("\n")
    print "\n".join(snek)
Input: *
Input: AB
Input: 123456789
Input: sadness*and*despair
sadne
d*dna
rsadn
*dna*
irsad
dna*s
airsa
na*ss
a*sse
spair
*ssen
ssend
despa
*desp
endas
d*des
ndasr
nd*de
dasri
and*d
asria
*and*
sriap
s*and
riaps
ss*an
iapse
ess*a
apsed
ness*
psed*
dness
sed*d
adnes
ed*dn
Triggernometry
\$\begingroup\$ How words get cut up seems a bit arbitrary and non-obvious. \$\endgroup\$
\$\begingroup\$ @Beefster Yup, but that's exactly how it's done in 05AB1E. See for yourself: tio.run/##MzBNTDJM/Q8ExYkpeanFxVqJeSlaKanFBYmZRQA \$\endgroup\$
– Triggernometry
What season is it?
I don't think we have a challenge for this yet, surprisingly. The closest ones are:
What season is it? (bad, closed)
Determine Season (only uses months)
Output the current season, using the astronomical definitions:
Spring begins on the day of the spring equinox
Summer begins on the summer solstice
Fall begins on the fall equinox
Winter begins on the winter solstice
You may use the northern or southern hemisphere dates.
None (use the current date).
One of these strings (case insensitive): spring, summer, fall (or autumn), winter.
Your program must finish in a reasonable (less than 1 day) amount of time.
Is calculating the dates of the equinoxes/solstices too difficult? (I don't want the best solution to involve a big lookup table)
What range of years should this be required to work in?
Should the output format be less strict? ("Any 4 distinct values" rather than the season names)
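To make the lookup-table concern concrete: the naive approach hard-codes approximate boundary dates, which drift by a day or so from year to year, so it is exactly the kind of answer the challenge may want to rule out. A sketch using northern-hemisphere dates (the specific boundary dates here are approximations I chose, not part of the spec):

```python
from datetime import date

# Approximate northern-hemisphere astronomical boundaries;
# the real equinox/solstice moments shift slightly each year.
BOUNDARIES = [((3, 20), "spring"), ((6, 21), "summer"),
              ((9, 22), "fall"), ((12, 21), "winter")]

def season(d):
    name = "winter"  # before the March equinox it is still winter
    for (m, day), s in BOUNDARIES:
        if (d.month, d.day) >= (m, day):
            name = s
    return name

print(season(date(2019, 7, 4)))  # summer
```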
12Me21
\$\begingroup\$ I decided to upvote the old one and downvote this one \$\endgroup\$
\$\begingroup\$ They're sufficiently different so I think it's ok for both to exist. \$\endgroup\$
\$\begingroup\$ However looking at the wikipedia "Equinox" page, it mentions "However, because the Moon [...]" (second paragraph), which interpretation should be chosen? \$\endgroup\$
\$\begingroup\$ Is there any range of date that it's ok to be wrong outside the range? (say, 1~2 century? I don't know how the motion of the Earth/Sun will change in some million years) \$\endgroup\$
\$\begingroup\$ To your last question: I would say yes as the core of this challenge appears to be figuring out where the current date lies in relation to the solstices/equinoxes and translating that to specific strings doesn't, in my opinion, really add anything to the challenge. \$\endgroup\$
\$\begingroup\$ I guess this should use the second interpretation, since it seems to be easier to calculate and more commonly used. I'll probably allow errors of a few minutes, in the event that the equinox/solstice occurs near midnight where a small rounding error might affect the result. \$\endgroup\$
\$\begingroup\$ int main(){print("summer");} // guaranteed to be random \$\endgroup\$
– Kenzie
Compute the sum of the number $10 - \sqrt{2018}$ and its radical conjugate.
The radical conjugate of this number is $10 + \sqrt{2018},$ so when we add them, the radical parts cancel, giving $10 + 10 = \boxed{20}.$
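A quick numeric sanity check of the cancellation:

```python
from math import sqrt

x = 10 - sqrt(2018)
conjugate = 10 + sqrt(2018)
print(x + conjugate)  # 20, up to floating point: the radical parts cancel
```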
Lee Hwa Chung theorem
The Lee Hwa Chung theorem is a theorem in symplectic topology.
The statement is as follows. Let M be a symplectic manifold with symplectic form ω. Let $\alpha $ be a differential k-form on M which is invariant for all Hamiltonian vector fields. Then:
• If k is odd, $\alpha = 0.$
• If k is even, $\alpha = c\,\omega^{\wedge k/2}$ (the $(k/2)$-fold wedge power of $\omega$), where $c \in \mathbb{R}.$
References
• Lee, John M., Introduction to Smooth Manifolds, Springer-Verlag, New York (2003) ISBN 0-387-95495-3. Graduate-level textbook on smooth manifolds.
• Hwa-Chung, Lee, "The Universal Integral Invariants of Hamiltonian Systems and Application to the Theory of Canonical Transformations", Proceedings of the Royal Society of Edinburgh. Section A. Mathematical and Physical Sciences, 62(03), 237–246. doi:10.1017/s0080454100006646
Ultraconnected space
In mathematics, a topological space is said to be ultraconnected if no two nonempty closed sets are disjoint.[1] Equivalently, a space is ultraconnected if and only if the closures of any two distinct points always have nontrivial intersection. Hence, no T1 space with more than one point is ultraconnected.[2]
Properties
Every ultraconnected space $X$ is path-connected (but not necessarily arc connected). If $a$ and $b$ are two points of $X$ and $p$ is a point in the intersection $\operatorname {cl} \{a\}\cap \operatorname {cl} \{b\}$, the function $f:[0,1]\to X$ defined by $f(t)=a$ if $0\leq t<1/2$, $f(1/2)=p$ and $f(t)=b$ if $1/2<t\leq 1$, is a continuous path between $a$ and $b$.[2]
Every ultraconnected space is normal, limit point compact, and pseudocompact.[1]
Examples
The following are examples of ultraconnected topological spaces.
• A set with the indiscrete topology.
• The Sierpiński space.
• A set with the excluded point topology.
• The right order topology on the real line.[3]
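For finite spaces the definition can be checked directly: enumerate the closed sets (complements of the open sets) and test pairwise intersections. A sketch, using the Sierpiński space from the list above and the two-point discrete space (which is T1 and therefore not ultraconnected):

```python
from itertools import combinations

def is_ultraconnected(points, opens):
    """No two nonempty closed sets are disjoint."""
    closed = [points - o for o in opens]  # closed sets = complements of opens
    nonempty = [c for c in closed if c]
    return all(a & b for a, b in combinations(nonempty, 2))

pts = frozenset({0, 1})
sierpinski = [frozenset(), frozenset({1}), pts]            # open sets
discrete = [frozenset(s) for s in ([], [0], [1], [0, 1])]  # open sets

print(is_ultraconnected(pts, sierpinski))  # True
print(is_ultraconnected(pts, discrete))    # False
```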
See also
• Hyperconnected space
Notes
1. PlanetMath
2. Steen & Seebach, Sect. 4, pp. 29-30
3. Steen & Seebach, example #50, p. 74
References
• This article incorporates material from Ultraconnected space on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
• Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition).
April 2017, 10(2): 191-211. doi: 10.3934/dcdss.2017010
Stability of nonlinear waves: Pointwise estimates
Margaret Beck
Margaret Beck, Boston University Department of Mathematics and Statistics, 111 Cummington Mall, Boston MA 02215, USA
Received July 2015 Revised November 2016 Published January 2017
This is an expository article containing a brief overview of key issues related to the stability of nonlinear waves, an introduction to a particular technique in stability analysis known as pointwise estimates, and two applications of this technique: time-periodic shocks in viscous conservation laws [3] and source defects in reaction diffusion equations [1, 2].
Keywords: Green's function, pointwise estimates, stability, coherent structure.
Mathematics Subject Classification: 35Kxx, 34Dxx.
Citation: Margaret Beck. Stability of nonlinear waves: Pointwise estimates. Discrete & Continuous Dynamical Systems - S, 2017, 10 (2) : 191-211. doi: 10.3934/dcdss.2017010
M. Beck, T. Nguyen, B. Sandstede and K. Zumbrun, Toward nonlinear stability of sources via a modified Burgers equation, Phys. D, 241 (2012), 382-392. doi: 10.1016/j.physd.2011.10.018. Google Scholar
M. Beck, T. T. Nguyen, B. Sandstede and K. Zumbrun, Nonlinear stability of source defects in the complex Ginzburg-Landau equation, Nonlinearity, 27 (2014), 739-786. doi: 10.1088/0951-7715/27/4/739. Google Scholar
M. Beck, B. Sandstede and K. Zumbrun, Nonlinear stability of time-periodic viscous shocks, Arch. Ration. Mech. Anal., 196 (2010), 1011-1076. doi: 10.1007/s00205-009-0274-1. Google Scholar
N. Bekki and B. Nozaki, Formations of spatial patterns and holes in the generalized Ginzburg-Landau equation, Phys. Lett. A, 110 (1985), 133-135. doi: 10.1016/0375-9601(85)90759-5. Google Scholar
J. Bricmont, A. Kupiainen and G. Lin, Renormalization group and asymptotics of solutions of nonlinear parabolic equations, Communications on Pure and Applied Mathematics, 47 (1994), 893-922. doi: 10.1002/cpa.3160470606. Google Scholar
A. Doelman, Breaking the hidden symmetry in the ginzburg-landau equation, Physica D, 97 (1996), 398-428. doi: 10.1016/0167-2789(95)00303-7. Google Scholar
A. Doelman, B. Sandstede, A. Scheel and G. Schneider, The dynamics of modulated wave trains, Mem. Amer. Math. Soc., 199 (2009), viii+105 pp. doi: 10.1090/memo/0934. Google Scholar
K. -J. Engel and R. Nagel, One-parameter Semigroups for Linear Evolution Equations vol. 194 of Graduate Texts in Mathematics, Springer-Verlag, New York, 2000. Google Scholar
D. Henry, Geometric Theory of Semilinear Parabolic Equations Springer-Verlag, Berlin, 1981. Google Scholar
L. N. Howard and N. Kopell, Slowly varying waves and shock structures in reaction-diffusion equations, Studies in Appl. Math., 56 (1976/77), 95-145. Google Scholar
P. Howard and K. Zumbrun, Stability of undercompressive shock profiles, J. Differential Equations, 225 (2006), 308-360. doi: 10.1016/j.jde.2005.09.001. Google Scholar
M. A. Johnson, P. Noble, L. M. Rodrigues and K. Zumbrun, Nonlocalized modulation of periodic reaction diffusion waves: Nonlinear stability, Arch. Ration. Mech. Anal., 207 (2013), 693-715. doi: 10.1007/s00205-012-0573-9. Google Scholar
T. Kapitula and J. Rubin, Existence and stability of standing hole solutions to complex Ginzburg-Landau equations, Nonlinearity, 13 (2000), 77-112. doi: 10.1088/0951-7715/13/1/305. Google Scholar
T. Kapitula and K. Promislow, Spectral and Dynamical Stability of Nonlinear Waves, vol. 185 of Applied Mathematical Sciences, Springer, New York, 2013. doi: 10.1007/978-1-4614-6995-7. Google Scholar
J. Lega, Traveling hole solutions of the complex Ginzburg-Landau equation: A review, Phys. D, 152/153 (2001), 269-287. doi: 10.1016/S0167-2789(01)00174-9. Google Scholar
T.-P. Liu, Interactions of nonlinear hyperbolic waves, in Nonlinear analysis (Taipei, 1989), World Sci. Publ., Teaneck, NJ, (1991), 171-183. Google Scholar
T.-P. Liu, Pointwise convergence to shock waves for viscous conservation laws, Comm. Pure Appl. Math., 50 (1997), 1113-1182. doi: 10.1002/(SICI)1097-0312(199711)50:11<1113::AID-CPA3>3.0.CO;2-D. Google Scholar
C. Mascia and K. Zumbrun, Pointwise Green function bounds for shock profiles of systems with real viscosity, Arch. Ration. Mech. Anal., 169 (2003), 177-263. doi: 10.1007/s00205-003-0258-5. Google Scholar
A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations vol. 44 of Applied Mathematical Sciences, Springer-Verlag, New York, 1983. doi: 10.1007/978-1-4612-5561-1. Google Scholar
S. Popp, O. Stiller, I. Aranson and L. Kramer, Hole solutions in the 1D complex Ginzburg-Landau equation, Phys. D, 84 (1995), 398-423. doi: 10.1016/0167-2789(95)00070-K. Google Scholar
B. Sandstede and A. Scheel, On the structure and spectra of modulated traveling waves, Math. Nachr., 232 (2001), 39-93. doi: 10.1002/1522-2616(200112)232:1<39::AID-MANA39>3.0.CO;2-5. Google Scholar
B. Sandstede and A. Scheel, Defects in oscillatory media: Toward a classification, SIAM Appl Dyn Sys, 3 (2004), 1-68. doi: 10.1137/030600192. Google Scholar
B. Sandstede, A. Scheel, G. Schneider and H. Uecker, Diffusive mixing of periodic wave trains in reaction-diffusion systems, J. Differential Equations, 252 (2012), 3541-3574. doi: 10.1016/j.jde.2011.10.014. Google Scholar
B. Sandstede and A. Scheel, Hopf bifurcation from viscous shock waves, SIAM J. Math. Anal., 39 (2008), 2033-2052. doi: 10.1137/060675587. Google Scholar
G. Schneider, Diffusive stability of spatial periodic solutions of the Swift-Hohenberg equation, Comm. Math. Phys., 178 (1996), 679-702. doi: 10.1007/BF02108820. Google Scholar
B. Texier and K. Zumbrun, Relative Poincaré-Hopf bifurcation and galloping instability of traveling waves, Methods Appl. Anal., 12 (2005), 349-380. doi: 10.4310/MAA.2005.v12.n4.a1. Google Scholar
B. Texier and K. Zumbrun, Galloping instability of viscous shock waves, Physica D, 237 (2008), 1553-1601. doi: 10.1016/j.physd.2008.03.008. Google Scholar
M. van Hecke, Building blocks of spatiotemporal intermittency, Phys. Rev. Lett., 80 (1998), 1896-1899. Google Scholar
W. van Saarloos and P. C. Hohenberg, Fronts, pulses, sources and sinks in generalized complex Ginzburg-Landau equations, Phys. D, 56 (1992), 303-367. doi: 10.1016/0167-2789(92)90175-M. Google Scholar
K. Zumbrun, Instantaneous shock location and one-dimensional nonlinear stability of viscous shock waves, Quarterly of Applied Mathematics, 69 (2011), 177-202. doi: 10.1090/S0033-569X-2011-01221-6. Google Scholar
K. Zumbrun and P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana Univ. Math. J., 47 (1998), 741-871. doi: 10.1512/iumj.1998.47.1604. Google Scholar
Figure 1. Typical spectra of linear operators that are spectrally stable in a strong sense: $\sup \mathrm{Re} \sigma(\mathcal{L}) < 0$. On the left we see a half line of essential spectrum and an isolated eigenvalue (the cross), and on the right we see a parabolic region of essential spectrum and an isolated eigenvalue.
Figure 2. Typical spectra of linear operators that are spectrally stable in a weaker sense: $\sup \mathrm{Re} \sigma(\mathcal{L}) = 0$. On the left we see a half line of essential spectrum and an isolated eigenvalue (the cross) on the imaginary axis, and on the right we see a parabolic region of essential spectrum touching the imaginary axis and an embedded eigenvalue (denoted now in red for visual clarity) at the origin.
Figure 3. Floquet spectrum of a spectrally stable viscous shock near the origin. Note the spectrum is non-unique, as it can be shifted by any integer multiple of $2\pi{\rm{i}}$, and hence the parabolas repeat infinitely many times up and down the imaginary axis. There are two embedded eigenvalues at the origin, due to translations in space and time.
Figure 4. Left panel: original vertical contour with real part $\mu$ used in 3.4. Right panel: deformed contour used to obtain bounds on the Green's function. The parameters $\epsilon$ and $r$ can be chosen to be small and to optimize the resultant bounds.
Figure 5. On the left is a diagram of the profile of a source as a function of $x$ for a fixed value of $t$, with the motion of perturbations, relative to the speed of the defect core, indicated by the red arrows and the group velocities of the asymptotic wave trains. The right panel shows the behavior of small phase $\phi$ or wave number $\phi_x$ perturbation of a wave train: to leading order, they are transported with speed given by the group velocity $c_g$ without changing their shape [7].
Figure 6. On the left is a sketch of the space-time diagram of a perturbed source. The defect core will adjust in response to an imposed perturbation (although this is not depicted), and the emitted wave trains, whose maxima are indicated by the lines that emerge from the defect core, will therefore exhibit phase fronts that travel with the group velocities of the asymptotic wave trains away from the core towards $\pm \infty$. The right panel illustrates the profile of the anticipated phase function $\phi(x,t)$.
Special relativity and massless particles
I encountered an assertion that a massless particle moves with the fundamental speed c, and that this is a consequence of special relativity. Some authors (such as L. Okun) like to prove this assertion with the following reasoning:
Let's have $$ \mathbf p = m\gamma \mathbf v ,\quad E = mc^{2}\gamma \quad \Rightarrow \quad \mathbf p = \frac{E}{c^{2}}\mathbf v \qquad (.1) $$ and $$ E^{2} = p^{2}c^{2} + m^{2}c^{4}. \qquad (.2) $$ For the massless case $(.2)$ gives $p = \frac{E}{c}$. By using $(.1)$ one can see that $|\mathbf v | = c$.
But to me, this is non-physical reasoning. Relation $(.1)$ is derived from the expressions for the momentum and energy of a massive particle, so its scope is limited to the massive case.
We can show that a massless particle moves with the speed of light by introducing the Hamiltonian formalism: for a free particle
$$ H = E = \sqrt{p^{2}c^{2} + m^{2}c^{4}}, $$ for a massless particle $$ H = pc, $$ and by using Hamilton's equation, it's easy to show that $$ \dot {|r|} = \frac{\partial H}{\partial p} = c. $$ But if I don't want to introduce the Hamiltonian formalism, what can I do to prove an assertion about the speed of a massless particle? Maybe the expression $\mathbf p = \frac{E}{c^{2}}\mathbf v$ can be derived without using the expressions for the massive case? But I can't imagine how to do it by using only SRT.
special-relativity mass hamiltonian-formalism
Red Act
$\begingroup$ I just faced this situation with my Modern Physics class this week, and in simple truth fudged it. I used (1) and simply waved my hands and said "in the limit of low mass" once or twice. I too, would like a really clean motivation without requiring Hamiltonian mechanics (which first semester juniors have not usually seen). $\endgroup$ – dmckee --- ex-moderator kitten♦ Sep 6 '13 at 20:28
$\begingroup$ Doesn't it follow that the particle's speed is c when its four-momentum is null (which is the case when $(mc^2)^2 = 0$)? $\endgroup$ – Alfred Centauri Sep 6 '13 at 21:00
$\begingroup$ @dmckee: It seems perfectly reasonable to me to require that a particle with zero mass behave the same as the limiting behavior of a particle with finite mass. Otherwise we would have a magic method for detecting arbitrarily small masses such as $10^{-100000000000}$ kg. We don't actually know that the photon, for example, is massless. All we really have are upper limits on its mass: arxiv.org/abs/0809.1003 . Of course there are theoretical reasons to prefer its mass to be exactly zero, but it's not impossible to construct theories in which its mass is nonzero. $\endgroup$ – Ben Crowell Sep 6 '13 at 21:42
$\begingroup$ @Ben I tried to ask your opinion in chat last week. Not getting it, I went ahead and gave those Modern physics students a simplified version of the argument from arxiv.org/abs/physics/0402024 for the relativistic mass and energy (because (1) I've always found the glancing asymmetric collision argument clunky and (2) I like to differ from the book more than follow it so the students get two different arguments for most things). Any thoughts? Do you have a favorite way (I'd already gotten the Lorentz transform and the velocity composition rule)? $\endgroup$ – dmckee --- ex-moderator kitten♦ Sep 6 '13 at 22:05
$\begingroup$ @dmckee: Sorry, I didn't see the chat invitation. Very flattering to have my opinion solicited. Thanks for pointing out to me the Sonego and Pin paper. I like the flavor of it and the fact that it works in 1+1 dimensions. But they assume a relation that's equivalent to the work-kinetic energy, and although that's what Einstein did in the 1905 SR paper, I've never seen any coherent justification for why it's OK simply to assume its validity in SR. It also seems a bit cumbersome. The four-vector approach has the advantage that the techniques it introduces are useful in general. Another [...] $\endgroup$ – Ben Crowell Sep 6 '13 at 23:28
For the reasons given in the comment above, I think the argument from the $m\rightarrow 0$ limit is valid. But if one doesn't like that, then here is an alternative. Suppose that a massless particle had $v<c$ in the frame of some observer A. Then some other observer B could be at rest relative to the particle. In that observer's frame of reference, the particle's three-momentum $\mathbf{p}$ is zero by symmetry, since there is no preferred direction for it to point. Then $E^2=p^2+m^2$ is zero as well, so the particle's entire energy-momentum four-vector is zero. But a four-vector that vanishes in one frame also vanishes in every other frame. That means we're talking about a particle that can't undergo scattering, emission, or absorption, and is therefore undetectable by any experiment.
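A short aside (my own addition, not part of the original answer): the $m\rightarrow 0$ limit argument can be written out explicitly. At fixed momentum $p$, using the standard relations $E=\sqrt{(pc)^2+(mc^2)^2}$ and $v=pc^2/E$,

```latex
v \;=\; \frac{pc^2}{E}
 \;=\; \frac{pc^2}{\sqrt{(pc)^2 + (mc^2)^2}}
 \;=\; \frac{c}{\sqrt{1 + \left(\dfrac{mc^2}{pc}\right)^2}}
 \;\xrightarrow{\;m \to 0\;}\; c .
```

So at any fixed momentum the speed tends continuously to $c$ as $m\to 0$, and requiring the massless case to match this limit forces $v=c$.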
Ben Crowell
$\begingroup$ Clever. I don't think my students would like this any more than the limit bit, but it is very slick. $\endgroup$ – dmckee --- ex-moderator kitten♦ Sep 6 '13 at 22:08
$\begingroup$ This is a cool argument, but doesn't it just replace the original question with a new question, namely "why does the energy-momentum-mass relation you write down hold for massless particles?" $\endgroup$ – joshphysics Sep 6 '13 at 22:41
$\begingroup$ @joshphysics Now that one, I can handle. The relationship between energy, momentum and mass rises out of the transformation properties of the energy and momentum, and while we found those from the explicit forms for massive particles the transformation properties must hold for any other form as well. $\endgroup$ – dmckee --- ex-moderator kitten♦ Sep 6 '13 at 22:58
$\begingroup$ @dmckee Hmm ok I want to believe this, but I don't entirely follow. I'll think about it more; thanks for the response. $\endgroup$ – joshphysics Sep 6 '13 at 23:25
$\begingroup$ @joshphysics: Your point is valid, but this is a question about the properties of massless particles, and $m^2=E^2-p^2$ is usually taken to be the definition of mass. If we hadn't already established $m^2=E^2-p^2$ or didn't believe that it was applicable to massless particles, then we would need to do something else as a preliminary to define what we meant by "massless." $\endgroup$ – Ben Crowell Sep 7 '13 at 0:00
One way is to consider that Lorentz transformations apply more fundamentally to momentum/energy than to space/time. So, with a boost transformation along the $z$ axis, we have:
$\begin {pmatrix} p'_z \\E' \end{pmatrix} = \gamma(v)\begin {pmatrix} 1 & -\frac{v}{c} \\-\frac{v}{c} &1\end{pmatrix}\begin {pmatrix} p_z \\E \end{pmatrix}$
It is not difficult to see that if $|\large \frac{p_zc}{E}|=1$, then $|\large \frac{p'_zc}{E'}|=1$
Now, by dimensional analysis, we have $\frac{\vec Pc}{E} = \frac{\vec V}{c}$, where $\vec V$ has the dimension of a velocity. The most natural possibility is that $\vec V$ is the velocity of the particle (for instance, with $v=V_z$, you have $V'_z=0$). So we see that a particle with speed $|\vec V|=c$ in one inertial frame also has $|\vec V'|=c$ in any other inertial frame. We could also check that the quantity $E^2-\vec p^2c^2$ is invariant under Lorentz transformations, and call this quantity $m^2c^4$, where $m$ is the mass. So particles that have $\left|\frac{\vec Pc}{E}\right| = \left|\frac{\vec V}{c}\right|=1$ are massless particles.
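The boost invariance of $|p_z c/E|=1$ can be checked numerically. This is my own sanity-check sketch (units with $c=1$, using the boost matrix written above):

```python
import math

def boost(pz, E, v, c=1.0):
    """Apply a Lorentz boost with speed v along z to the (pz, E) pair,
    using the matrix gamma * [[1, -v/c], [-v/c, 1]]."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    pz_new = gamma * (pz - (v / c) * E)
    E_new = gamma * (E - (v / c) * pz)
    return pz_new, E_new

# A massless particle moving along +z satisfies E = |pz| c (here c = 1).
pz, E = 2.5, 2.5
for v in (0.3, -0.9, 0.9999):
    pz2, E2 = boost(pz, E, v)
    # |pz c / E| stays exactly 1 in every boosted frame.
    print(v, abs(pz2 / E2))
```

Both components pick up the same factor $\gamma(1 - v/c)$ when $p_z = E$, so the ratio is preserved identically.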
Trimok
$\begingroup$ I don't think dimensional analysis suffices here. $\endgroup$ – Ben Crowell Sep 7 '13 at 16:17
Forbidden subgraph problem
In extremal graph theory, the forbidden subgraph problem is the following problem: given a graph $G$, find the maximal number of edges $\operatorname {ex} (n,G)$ an $n$-vertex graph can have such that it does not have a subgraph isomorphic to $G$. In this context, $G$ is called a forbidden subgraph.[1]
An equivalent problem is how many edges in an $n$-vertex graph guarantee that it has a subgraph isomorphic to $G$?[2]
Definitions
The extremal number $\operatorname {ex} (n,G)$ is the maximum number of edges in an $n$-vertex graph containing no subgraph isomorphic to $G$. $K_{r}$ is the complete graph on $r$ vertices. $T(n,r)$ is the Turán graph: a complete $r$-partite graph on $n$ vertices, with vertices distributed between parts as equally as possible. The chromatic number $\chi (G)$ of $G$ is the minimum number of colors needed to color the vertices of $G$ such that no two adjacent vertices have the same color.
Upper bounds
Turán's theorem
See also: Turán's theorem
Turán's theorem states that for positive integers $n,r$ satisfying $n\geq r\geq 3$,[3] $ \operatorname {ex} (n,K_{r})=\left(1-{\frac {1}{r-1}}\right){\frac {n^{2}}{2}}.$
This solves the forbidden subgraph problem for $G=K_{r}$. Equality cases for Turán's theorem come from the Turán graph $T(n,r-1)$.
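A small sanity check (my own, not part of the article): the Turán graph $T(n,r-1)$ attains the stated bound, and the formula is exact whenever $r-1$ divides $n$.

```python
from itertools import combinations

def turan_edges(n, parts):
    """Edge count of the Turan graph T(n, parts): complete multipartite
    with part sizes as equal as possible."""
    q, rem = divmod(n, parts)
    sizes = [q + 1] * rem + [q] * (parts - rem)
    return sum(a * b for a, b in combinations(sizes, 2))

# ex(n, K_r) = (1 - 1/(r-1)) n^2 / 2, attained by T(n, r-1); the cases
# below all have r - 1 dividing n, so equality is exact.
for n, r in [(10, 3), (12, 4), (9, 4)]:
    print(turan_edges(n, r - 1), (1 - 1 / (r - 1)) * n * n / 2)
```

For instance, $T(10,2)$ has two parts of size 5 and $5\cdot 5 = 25$ edges, matching $(1-\tfrac12)\cdot 10^2/2$.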
This result can be generalized to arbitrary graphs $G$ by considering the chromatic number $\chi (G)$ of $G$. Note that $T(n,r)$ can be colored with $r$ colors and thus has no subgraphs with chromatic number greater than $r$. In particular, $T(n,\chi (G)-1)$ has no subgraphs isomorphic to $G$. This suggests that the general equality cases for the forbidden subgraph problem may be related to the equality cases for $G=K_{r}$. This intuition turns out to be correct, up to $o(n^{2})$ error.
Erdős–Stone theorem
See also: Erdős–Stone theorem
The Erdős–Stone theorem states that for all positive integers $n$ and all graphs $G$,[4] $ \operatorname {ex} (n,G)=\left(1-{\frac {1}{\chi (G)-1}}+o(1)\right){\binom {n}{2}}.$
When $G$ is not bipartite, this gives us a first-order approximation of $\operatorname {ex} (n,G)$.
Bipartite graphs
For bipartite graphs $G$, the Erdős–Stone theorem only tells us that $\operatorname {ex} (n,G)=o(n^{2})$. The forbidden subgraph problem for bipartite graphs is known as the Zarankiewicz problem, and it is unsolved in general.
Progress on the Zarankiewicz problem includes the following theorem:
Kővári–Sós–Turán theorem. For every pair of positive integers $s,t$ with $t\geq s\geq 1$, there exists some constant $C$ (independent of $n$) such that $ \operatorname {ex} (n,K_{s,t})\leq Cn^{2-{\frac {1}{s}}}$ for every positive integer $n$.[5]
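For $s=t=2$ the forbidden graph $K_{2,2}$ is the 4-cycle, and the extremal numbers can be brute-forced for tiny $n$. This is an illustrative script of my own (the values match the known small extremal numbers):

```python
from itertools import combinations

def has_k22(edges, n):
    """A K_{2,2} exists iff some pair of vertices has >= 2 common neighbours."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return any(len(adj[u] & adj[v]) >= 2 for u, v in combinations(range(n), 2))

def ex_k22(n):
    """Brute-force ex(n, K_{2,2}) over all graphs on n labelled vertices."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [e for i, e in enumerate(pairs) if mask >> i & 1]
        if len(edges) > best and not has_k22(edges, n):
            best = len(edges)
    return best

print([ex_k22(n) for n in range(2, 6)])  # -> [1, 3, 4, 6]
```

For $n=5$ the optimum 6 is achieved by the "bowtie", two triangles sharing a vertex, which contains no 4-cycle.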
Another result for bipartite graphs is the case of even cycles, $G=C_{2k},k\geq 2$. Even cycles are handled by considering a root vertex and paths branching out from this vertex. If two paths of the same length $k$ have the same endpoint and do not overlap, then they create a cycle of length $2k$. This gives the following theorem.
Theorem (Bondy and Simonovits, 1974). There exists some constant $C$ such that $ \operatorname {ex} (n,C_{2k})\leq Cn^{1+{\frac {1}{k}}}$ for every positive integer $n$ and positive integer $k\geq 2$.[6]
A powerful lemma in extremal graph theory is dependent random choice. This lemma allows us to handle bipartite graphs with bounded degree in one part:
Theorem (Alon, Krivelevich, and Sudakov, 2003). Let $G$ be a bipartite graph with vertex parts $A$ and $B$ such that every vertex in $A$ has degree at most $r$. Then there exists a constant $C$ (dependent only on $G$) such that $ \operatorname {ex} (n,G)\leq Cn^{2-{\frac {1}{r}}}$for every positive integer $n$.[7]
In general, we have the following conjecture:
Rational Exponents Conjecture (Erdős and Simonovits). For any finite family ${\mathcal {L}}$ of graphs, if there is a bipartite $L\in {\mathcal {L}}$, then there exists a rational $\alpha \in [0,1)$ such that $\operatorname {ex} (n,{\mathcal {L}})=\Theta (n^{1+\alpha })$.[8]
A survey by Füredi and Simonovits describes progress on the forbidden subgraph problem in more detail.[8]
Lower bounds
There are various techniques used for obtaining the lower bounds.
Probabilistic method
While this method mostly gives weak bounds, the theory of random graphs is a rapidly developing subject. It is based on the idea that a random graph of sufficiently small density contains only a small number of copies of $G$. These copies can be removed by deleting one edge from every copy of $G$ in the graph, leaving a $G$-free graph.
The probabilistic method can be used to prove $\operatorname {ex} (n,G)\geq cn^{2-{\frac {v(G)-2}{e(G)-1}}}$where $c$ is a constant only depending on the graph $G$.[9] For the construction we can take the Erdős–Rényi random graph $G(n,p)$, that is, the graph with $n$ vertices in which each edge between any two vertices is drawn with probability $p$, independently. After computing the expected number of copies of $G$ in $G(n,p)$ by linearity of expectation, we remove one edge from each such copy of $G$ and are left with a $G$-free graph. The expected number of edges remaining can be found to be $\operatorname {ex} (n,G)\geq cn^{2-{\frac {v(G)-2}{e(G)-1}}}$ for a constant $c$ depending on $G$. Therefore, at least one $n$-vertex graph exists with at least as many edges as the expected number.
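As a quick illustration (my own, not from the article), the exponent in this first-moment bound can be tabulated for small graphs. Note that for $C_4$ it gives only $4/3$, weaker than the true order $\Theta(n^{3/2})$, which is why this method is said to give weak bounds:

```python
def deletion_exponent(v, e):
    """Exponent in the first-moment bound ex(n, G) >= c * n^(2 - (v-2)/(e-1))
    for a graph G with v vertices and e >= 2 edges."""
    return 2 - (v - 2) / (e - 1)

print(deletion_exponent(3, 3))  # K_3 -> 1.5
print(deletion_exponent(4, 6))  # K_4 -> 1.6
print(deletion_exponent(4, 4))  # C_4 -> 1.33...
```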
This method can also be used to construct graphs with large girth and many edges. The girth, denoted by $g(G)$, is the length of the shortest cycle of the graph. Note that for $g(G)>2k$, the graph must forbid all cycles of length less than or equal to $2k$. By linearity of expectation, the expected number of such forbidden cycles equals the sum of the expected numbers of cycles $C_{i}$ for $i=3,\dots,2k$. We again remove one edge from each copy of a forbidden cycle and are left with a graph satisfying $g(G)>2k$ and having at least $c_{0}n^{1+{\frac {1}{2k-1}}}$ edges.
Algebraic constructions
For specific cases, improvements have been made by finding algebraic constructions. A common feature of such constructions is the use of geometry: vertices represent geometric objects, and edges are drawn according to algebraic relations between them. No subgraph isomorphic to $G$ arises, purely for geometric reasons, while the way the incidences are defined gives the graph a large number of edges, yielding a strong bound. The following proof by Erdős, Rényi, and Sós,[10] establishing the lower bound $\operatorname {ex} (n,K_{2,2}) \geq \left({\frac {1}{2}}-o(1)\right)n^{3/2}$, demonstrates the power of this method.
First, suppose that $n=p^{2}-1$ for some prime $p$. Consider the polarity graph $G$ with vertices elements of $\mathbb {F} _{p}^{2}-\{0,0\}$ and edges between vertices $(x,y)$ and $(a,b)$ if and only if $ax+by=1$ in $\mathbb {F} _{p}$. This graph is $K_{2,2}$-free because a system of two linear equations in $\mathbb {F} _{p}$ cannot have more than one solution. A vertex $(a,b)$ (assume $b\neq 0$) is connected to $\left(x,{\frac {1-ax}{b}}\right)$ for any $x\in \mathbb {F} _{p}$, for a total of at least $p-1$ edges (subtracted 1 in case $(a,b)=\left(x,{\frac {1-ax}{b}}\right)$). So there are at least ${\frac {1}{2}}(p^{2}-1)(p-1)=\left({\frac {1}{2}}-o(1)\right)p^{3}=\left({\frac {1}{2}}-o(1)\right)n^{3/2}$ edges, as desired. For general $n$, we can take $p=(1-o(1)){\sqrt {n}}$ with $p\leq {\sqrt {n+1}}$ (which is possible because there exists a prime $p$ in the interval $[k-k^{0.525},k]$ for sufficiently large $k$[11]) and construct a polarity graph using such $p$, then adding $n-p^{2}+1$ isolated vertices, which do not affect the asymptotic value.
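The construction can be checked directly for a small prime. This is my own illustrative code (`polarity_graph` is a name I chose, not from the literature); it verifies both the $K_{2,2}$-freeness and the edge count for $p=3$:

```python
from itertools import combinations

def polarity_graph(p):
    """Erdos-Renyi-Sos polarity graph over F_p: vertices are F_p^2 minus
    the origin, with (x, y) ~ (a, b) iff a*x + b*y = 1 (mod p);
    self-loops are discarded."""
    verts = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]
    edges = {frozenset((u, w)) for u in verts for w in verts
             if u != w and (u[0] * w[0] + u[1] * w[1]) % p == 1}
    return verts, edges

verts, edges = polarity_graph(3)
adj = {v: set() for v in verts}
for e in edges:
    u, w = tuple(e)
    adj[u].add(w)
    adj[w].add(u)

# K_{2,2}-free: two vertices with 2 common neighbours would give a 2x2
# linear system over F_p with two solutions, which is impossible.
assert all(len(adj[u] & adj[w]) <= 1 for u, w in combinations(verts, 2))
print(len(verts), len(edges))  # 8 10
```

Here $p=3$ gives 8 vertices and 10 edges, comfortably above the guaranteed $\tfrac12(p^2-1)(p-1)=8$.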
The following theorem is a similar result for $K_{3,3}$.
Theorem (Brown, 1966). $\operatorname {ex} (n,K_{3,3})\geq \left({\frac {1}{2}}-o(1)\right)n^{5/3}.$[12]
Proof outline.[13] Like in the previous theorem, we can take $n=p^{3}$ for prime $p$ and let the vertices of our graph be elements of $\mathbb {F} _{p}^{3}$. This time, vertices $(a,b,c)$ and $(x,y,z)$ are connected if and only if $(x-a)^{2}+(y-b)^{2}+(z-c)^{2}=u$ in $\mathbb {F} _{p}$, for some specifically chosen $u$. Then this is $K_{3,3}$-free since at most two points lie in the intersection of three spheres. Then since the value of $(x-a)^{2}+(y-b)^{2}+(z-c)^{2}$ is almost uniform across $\mathbb {F} _{p}$, each point should have around $p^{2}$ edges, so the total number of edges is $\left({\frac {1}{2}}-o(1)\right)p^{2}\cdot p^{3}=\left({\frac {1}{2}}-o(1)\right)n^{5/3}$.
However, it remains an open question to tighten the lower bound for $\operatorname {ex} (n,K_{t,t})$ for $t\geq 4$.
Theorem (Alon et al., 1999) For $t\geq (s-1)!+1$, $\operatorname {ex} (n,K_{s,t})=\Theta (n^{2-{\frac {1}{s}}}).$[14]
Randomized Algebraic constructions
This technique combines the two ideas above: it uses random polynomial relations to define the incidences between vertices lying in some algebraic set. This technique can be used to prove the following theorem.
Theorem: For every $s\geq 2$, there exists some $t$ such that $\operatorname {ex} (n,K_{s,t})\geq \left({\frac {1}{2}}-o(1)\right)n^{2-{\frac {1}{s}}}$.
Proof outline: We take the largest prime power $q$ with $q^{s}\leq n$. Due to the prime gaps, we have $q=(1-o(1))n^{\frac {1}{s}}$. Let $f\in \mathbb {F} _{q}[x_{1},x_{2},\cdots ,x_{s},y_{1},y_{2},\cdots ,y_{s}]_{\leq d}$ be a random polynomial in $\mathbb {F} _{q}$ with degree at most $d=s^{2}$ in $X=(X_{1},X_{2},...,X_{s})$ and $Y=(Y_{1},Y_{2},...,Y_{s})$ and satisfying $f(X,Y)=f(Y,X)$. Let the graph $G$ have the vertex set $\mathbb {F} _{q}^{s}$ such that two vertices $x,y$ are adjacent if $f(x,y)=0$.
We fix a set $U\subset \mathbb {F} _{q}^{s}$ and define $Z_{U}$ as the set of elements of $\mathbb {F} _{q}^{s}$ not in $U$ satisfying $f(x,u)=0$ for all elements $u\in U$. By the Lang–Weil bound, we obtain that for $q$ sufficiently large, $|Z_{U}|\leq C$ or $|Z_{U}|>{\frac {q}{2}}$ for some constant $C$. Now, we compute the expected number of sets $U$ such that $Z_{U}$ has size greater than $C$, and remove a vertex from each such $U$. The resulting graph turns out to be $K_{s,C+1}$-free, and at least one graph exists with as many edges as the expected number of edges of this resulting graph.
Supersaturation
Supersaturation refers to a variant of the forbidden subgraph problem, where we consider when some $h$-uniform graph $G$ contains many copies of some forbidden subgraph $H$. Intuitively, one would expect this to occur once $G$ contains significantly more than $\operatorname {ex} (n,H)$ edges. We introduce the Turán density to formalize this notion.
Turán density
The Turán density of a $h$-uniform graph $G$ is defined to be
$\pi (G)=\lim _{n\to \infty }{\frac {\operatorname {ex} (n,G)}{\binom {n}{h}}}.$
The sequence ${\frac {\operatorname {ex} (n,G)}{\binom {n}{h}}}$ is nonnegative and monotone decreasing, so the limit must exist. [15]
As an example, Turán's theorem gives that $\pi (K_{r+1})=1-{\frac {1}{r}}$, and the Erdős–Stone theorem gives that $\pi (G)=1-{\frac {1}{\chi (G)-1}}$. In particular, for bipartite $G$, $\pi (G)=0$. Determining the Turán density $\pi (G)$ is equivalent to determining $\operatorname {ex} (n,G)$ up to an $o(n^{2})$ error.[16]
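The monotonicity of the density sequence can be seen concretely for triangles. By Mantel's theorem (a standard fact, not stated above) $\operatorname{ex}(n,K_3)=\lfloor n^2/4\rfloor$, so the sequence can be tabulated directly; this is my own sketch:

```python
from math import comb

# Mantel's theorem gives the exact values ex(n, K_3) = floor(n^2 / 4),
# so the density sequence ex(n, K_3) / C(n, 2) can be computed exactly.
dens = [(n * n // 4) / comb(n, 2) for n in range(3, 200)]
assert all(a >= b for a, b in zip(dens, dens[1:]))  # monotone decreasing
print(dens[0], dens[-1])  # 2/3 at n = 3, approaching pi(K_3) = 1/2
```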
Supersaturation Theorem
Consider an $h$-uniform hypergraph $H$ on $v(H)$ vertices. The supersaturation theorem states that for every $\epsilon >0$, there exists a $\delta >0$ such that if $G$ is an $h$-uniform hypergraph on $n$ vertices with at least $(\pi (H)+\epsilon ){\binom {n}{h}}$ edges for $n$ sufficiently large, then there are at least $\delta n^{v(H)}$ copies of $H$. [17]
Equivalently, we can restate this theorem as follows: if an $h$-uniform hypergraph $G$ with $n$ vertices has $o(n^{v(H)})$ copies of $H$, then there are at most $\pi (H){\binom {n}{h}}+o(n^{h})$ edges in $G$.
Applications
We may solve various forbidden subgraph problems by considering supersaturation-type problems. We restate and give a proof sketch of the Kővári–Sós–Turán theorem below:
Kővári–Sós–Turán theorem. For every pair of positive integers $s,t$ with $t\geq s\geq 1$, there exists some constant $C$ (independent of $n$) such that $ \operatorname {ex} (n,K_{s,t})\leq Cn^{2-{\frac {1}{s}}}$ for every positive integer $n$.[18]
Proof. Let $G$ be a $2$-graph on $n$ vertices, and consider the number of copies of $K_{1,s}$ in $G$. Given a vertex of degree $d$, we get exactly ${\binom {d}{s}}$ copies of $K_{1,s}$ rooted at this vertex, for a total of $\sum _{v\in V(G)}{\binom {\operatorname {deg} (v)}{s}}$ copies. Here, ${\binom {k}{s}}=0$ when $0\leq k<s$. By convexity, there are in total at least $n{\binom {2e(G)/n}{s}}$ copies of $K_{1,s}$. Moreover, there are clearly ${\binom {n}{s}}$ subsets of $s$ vertices, so if there are more than $(t-1){\binom {n}{s}}$ copies of $K_{1,s}$, then by the Pigeonhole Principle there must exist a subset of $s$ vertices which form the set of leaves of at least $t$ of these copies, forming a $K_{s,t}$. Therefore, there exists an occurrence of $K_{s,t}$ as long as we have $n{\binom {2e(G)/n}{s}}>(t-1){\binom {n}{s}}$. In other words, we have an occurrence if ${\frac {e(G)^{s}}{n^{s-1}}}\geq O(n^{s})$, which simplifies to $e(G)\geq O(n^{2-{\frac {1}{s}}})$, which is the statement of the theorem. [19]
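The opening counting identity of the proof (the number of copies of $K_{1,s}$ equals $\sum_v \binom{\operatorname{deg}(v)}{s}$) can be cross-checked on a small example. This is my own sanity-check sketch:

```python
from itertools import combinations
from math import comb

def count_stars(adj, s):
    """Copies of K_{1,s}: a centre vertex plus an s-subset of its
    neighbourhood, i.e. sum over v of C(deg(v), s)."""
    return sum(comb(len(nbrs), s) for nbrs in adj.values())

# Small example: a 5-cycle with one chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
adj = {v: set() for v in range(5)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Independent brute force over (centre, leaf-set) pairs.
brute = sum(1 for c in adj for _ in combinations(sorted(adj[c]), 2))
print(brute, count_stars(adj, 2))  # -> 9 9
```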
In this proof, we are using the supersaturation method by considering the number of occurrences of a smaller subgraph. Typically, applications of the supersaturation method do not use the supersaturation theorem. Instead, the structure often involves finding a subgraph $H'$ of some forbidden subgraph $H$ and showing that if it appears too many times in $G$, then $H$ must appear in $G$ as well. Other theorems regarding the forbidden subgraph problem which can be solved with supersaturation include:
• $\operatorname {ex} (n,C_{2t})\leq O(n^{1+1/t})$. [20]
• For any $k\geq 2$, $\operatorname {ex} (n,C_{2k},C_{2k-1})\leq O\left(\left({\frac {n}{2}}\right)^{1+1/k}\right)$. [20]
• If $Q$ denotes the graph determined by the vertices and edges of a cube, and $Q^{*}$ denotes the graph obtained by joining two opposite vertices of the cube, then $\operatorname {ex} (n,Q)\leq \operatorname {ex} (n,Q^{*})=O(n^{8/5})$. [19]
Generalizations
The problem may be generalized for a set of forbidden subgraphs $S$: find the maximal number of edges in an $n$-vertex graph which does not have a subgraph isomorphic to any graph from $S$.[21]
There are also hypergraph versions of forbidden subgraph problems that are much more difficult. For instance, Turán's problem may be generalized to asking for the largest number of edges in an $n$-vertex 3-uniform hypergraph that contains no tetrahedra. The analog of the Turán construction would be to partition the vertices into almost equal subsets $V_{1},V_{2},V_{3}$, and connect vertices $x,y,z$ by a 3-edge if they are all in different $V_{i}$s, or if two of them are in $V_{i}$ and the third is in $V_{i+1}$ (where $V_{4}=V_{1}$). This is tetrahedron-free, and the edge density is $5/9$. However, the best known upper bound is 0.562, using the technique of flag algebras.[22]
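The edge density of the tetrahedron-free construction above can be verified numerically. This is my own sketch (`construction` is a name I chose):

```python
from collections import Counter
from itertools import combinations
from math import comb

def construction(m):
    """Turan-type 3-uniform hypergraph on n = 3m vertices. Vertex v lies
    in part V_(v // m); a triple is an edge if it meets all three parts,
    or has two vertices in V_i and one in V_((i+1) mod 3)."""
    n = 3 * m
    edges = set()
    for t in combinations(range(n), 3):
        c = Counter(v // m for v in t)
        if len(c) == 3:
            edges.add(t)
        elif len(c) == 2:
            double = next(p for p, k in c.items() if k == 2)
            single = next(p for p, k in c.items() if k == 1)
            if single == (double + 1) % 3:
                edges.add(t)
    return n, edges

# Tetrahedron-free check for m = 2: no 4 vertices carry all 4 triples.
n, edges = construction(2)
assert not any(all(t in edges for t in combinations(q, 3))
               for q in combinations(range(n), 4))

# Edge density tends to 5/9 = 0.555... as m grows.
n, edges = construction(30)
print(len(edges) / comb(n, 3))  # ~0.563 for m = 30
```

The exact count is $m^3 + 3\binom{m}{2}m \sim \tfrac52 m^3$ edges against $\binom{3m}{3}\sim \tfrac92 m^3$ triples, giving the ratio $5/9$.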
See also
• Biclique-free graph
• Erdős–Hajnal conjecture
• Turán number
• Subgraph isomorphism problem
• Forbidden graph characterization
References
1. Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Probabilistic Combinatorics, Béla Bollobás, 1986, ISBN 0-521-33703-8, p. 53, 54
2. "Modern Graph Theory", by Béla Bollobás, 1998, ISBN 0-387-98488-7, p. 103
3. Turán, Pál (1941). "On an extremal problem in graph theory". Matematikai és Fizikai Lapok (in Hungarian). 48: 436–452.
4. Erdős, P.; Stone, A. H. (1946). "On the structure of linear graphs". Bulletin of the American Mathematical Society. 52 (12): 1087–1091. doi:10.1090/S0002-9904-1946-08715-7.
5. Kővári, T.; T. Sós, V.; Turán, P. (1954), "On a problem of K. Zarankiewicz" (PDF), Colloq. Math., 3: 50–57, doi:10.4064/cm-3-1-50-57, MR 0065617
6. Bondy, J. A.; Simonovits, M. (April 1974). "Cycles of even length in graphs". Journal of Combinatorial Theory. Series B. 16 (2): 97–105. doi:10.1016/0095-8956(74)90052-5. MR 0340095.
7. Alon, Noga; Krivelevich, Michael; Sudakov, Benny. "Turán numbers of bipartite graphs and related Ramsey-type questions". Combinatorics, Probability and Computing. MR 2037065.
8. Füredi, Zoltán; Simonovits, Miklós (2013-06-21). "The history of degenerate (bipartite) extremal graph problems". arXiv:1306.5167 [math.CO].
9. Zhao, Yufei. "Graph Theory and Additive Combinatorics" (PDF). pp. 32–37. Retrieved 2 December 2019.
10. Erdős, P.; Rényi, A.; Sós, V. T. (1966). "On a problem of graph theory". Studia Sci. Math. Hungar. 1: 215–235. MR 0223262.
11. Baker, R. C.; Harman, G.; Pintz, J. (2001), "The difference between consecutive primes. II.", Proc. London Math. Soc., Series 3, 83 (3): 532–562, doi:10.1112/plms/83.3.532, MR 1851081
12. Brown, W. G. (1966). "On graphs that do not contain a Thomsen graph". Canad. Math. Bull. 9 (3): 281–285. doi:10.4153/CMB-1966-036-2. MR 0200182.
13. Zhao, Yufei. "Graph Theory and Additive Combinatorics" (PDF). pp. 32–37. Retrieved 2 December 2019.
14. Alon, Noga; Rónyai, Lajos; Szabó, Tibor (1999). "Norm-graphs: variations and applications". Journal of Combinatorial Theory. Series B. 76 (2): 280–290. doi:10.1006/jctb.1999.1906. MR 1699238.
15. Erdős, Paul; Simonovits, Miklós. "Supersaturated Graphs and Hypergraphs" (PDF). p. 3. Retrieved 27 November 2021.
16. Zhao, Yufei. "Graph Theory and Additive Combinatorics" (PDF). pp. 16–17. Retrieved 2 December 2019.
17. Simonovits, Miklós. "Extremal Graph Problems, Degenerate Extremal Problems, and Supersaturated Graphs" (PDF). p. 17. Retrieved 25 November 2021.
18. Kővári, T.; T. Sós, V.; Turán, P. (1954), "On a problem of K. Zarankiewicz" (PDF), Colloq. Math., 3: 50–57, doi:10.4064/cm-3-1-50-57, MR 0065617
19. Simonovits, Miklós. "Extremal Graph Problems, Degenerate Extremal Problems, and Supersaturated Graphs" (PDF). Retrieved 27 November 2021.
20. Erdős, Paul; Simonovits, Miklós. "Compactness Results in Extremal Graph Theory" (PDF). Retrieved 27 November 2021.
21. Handbook of Discrete and Combinatorial Mathematics By Kenneth H. Rosen, John G. Michaels p. 590
22. Keevash, Peter. "Hypergraph Turán Problems" (PDF). Retrieved 2 December 2019.
Banach-Mazur game
(Redirected from Alpha-favourable space)
A game that appeared in the famous Scottish Book [a11], [a6], where its initial version was formulated as Problem 43 by the Polish mathematician S. Mazur: Given the space of real numbers $\mathbf{R}$ and a non-empty subset $E$ of it, two players $A$ and $B$ play a game in the following way: $A$ starts by choosing a non-empty interval $I_{0}$ of $\mathbf{R}$ and then $B$ responds by choosing a non-empty subinterval $I _ { 1 }$ of $I_{0}$. Then player $A$ in turn selects a non-empty interval $I _ { 2 } \subset I _ { 1 }$ and $B$ continues by taking a non-empty subinterval $I_3$ of $I_2$. This procedure is iterated infinitely many times. The resulting infinite sequence of nested intervals $\{ I _ { n } \} _ { n = 0 } ^ { \infty }$ is called a play. By definition, the player $A$ wins this play if the intersection $\cap _ { n = 0 } ^ { \infty } I _ {n}$ has a common point with $E$. Otherwise $B$ wins. Mazur had observed two facts:
a) if the complement of $E$ in some interval of $\mathbf{R}$ is of the first Baire category in this interval (equivalently, if $E$ is residual in some interval of $\mathbf{R}$, cf. also Category of a set; Baire classes), then player $A$ has a winning strategy (see below for the definition); and
b) if $E$ itself is of the first Baire category in $\mathbf{R}$, then $B$ has a winning strategy. The question originally posed by Mazur in Problem 43 of the Scottish Book (with as prize a bottle of wine!) was whether the inverse implications in the above two assertions hold. On August 4, 1935, S. Banach wrote in the same book that "Mazur's conjecture is true" . The proof of this statement of Banach however has never been published. The game subsequently became known as the Banach–Mazur game.
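Fact b) can be illustrated concretely in the simplest case, where $E$ is countable (a countable set is of the first category): player $B$ simply dodges one enumerated point of $E$ per round, and the nested intervals then avoid all of $E$. This is my own illustrative sketch, not part of the entry:

```python
from fractions import Fraction as F

def b_move(lo, hi, bad):
    """Player B's strategy when E is countable: inside A's interval
    [lo, hi], pick the left or right third, whichever avoids `bad`."""
    third = (hi - lo) / 3
    left, right = (lo, lo + third), (hi - third, hi)
    return right if left[0] <= bad <= left[1] else left

# E = the first 20 points of a countable target set for player A.
E = [F(k, 7) for k in range(20)]
lo, hi = F(0), F(1)
for bad in E:
    hi = lo + (hi - lo) / 2       # A's move: say, the left half
    lo, hi = b_move(lo, hi, bad)  # B dodges the next point of E

# The intervals are nested, and point E[k] was excluded at round k,
# so the final interval misses every enumerated point: B is winning.
assert all(not (lo <= e <= hi) for e in E)
print(lo, hi)
```

Exact rational arithmetic (`fractions.Fraction`) keeps the interval endpoints exact over many rounds.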
More than 20 years later, in 1957, J. Oxtoby [a8] published a proof for the validity of Mazur's conjecture. Oxtoby considered a much more general setting. The game was played in a general topological space $X$ with $\emptyset \neq E \subset X$ and the two players $A$ and $B$ were choosing alternatively sets $W _ { 0 } \supset W _ { 1 } \supset \ldots$ from an a priori prescribed family of sets $\mathcal{W}$ which has the property that every element of $\mathcal{W}$ contains a non-empty open subset of $X$ and every non-empty open subset of $X$ contains an element of $\mathcal{W}$. As above, $A$ wins if $( \cap _ { n = 0 } ^ { \infty } W _ { n } ) \cap E \neq \emptyset$, otherwise $B$ wins. Oxtoby's theorem says that $B$ has a winning strategy if and only if $E$ is of the first Baire category in $X$; also, if $X$ is a complete metric space, then $A$ has a winning strategy exactly when $E$ is residual in some non-empty open subset of $X$.
Later, the game was subjected to different generalizations and modifications.
Generalizations.
Only the most popular modification of this game will be considered. It has turned out to be useful not only in set-theoretic topology, but also in the geometry of Banach spaces, non-linear analysis, number theory, descriptive set theory, well-posedness in optimization, etc. This modification is the following: Given a topological space $X$, two players (usually called $\alpha$ and $\beta$) alternatively choose non-empty open sets $U _ { 1 } \supset V _ { 1 } \supset U _ { 2 } \supset V _ { 2 } \supset \ldots$ (in this sequence the $U _ { n }$ are the choices of $\beta$ and the $V _ { n }$ are the choices of $\alpha$; thus, it is player $\beta$ who starts this game). Player $\alpha$ wins the play $\{ U _ { n } , V _ { n } \} _ { n = 1 } ^ { \infty }$ if $\cap _ { n = 1 } ^ { \infty } U _ { n } = \cap _ { n = 1 } ^ { \infty } V _ { n } \neq \emptyset$, otherwise $\beta$ wins. To be completely consistent with the general scheme described above, one may think that $E = X$ and $\alpha$ starts by always choosing the whole space $X$. This game is often denoted by $\operatorname{BM} ( X )$.
A strategy $s$ for the player $\alpha$ is a mapping which assigns to every finite sequence $( U _ { 1 } \supset V _ { 1 } \supset \ldots \supset U _ { n } )$ of legal moves in $\operatorname{BM} ( X )$ a non-empty open subset $V _ { n }$ of $X$ included in the last move of $\beta$ (i.e. $V _ { n } \subset U _ { n }$). A stationary strategy (called also a tactics) for $\alpha$ is a strategy for this player which depends only on the last move of the opponent. A winning strategy (a stationary winning strategy) $s$ for $\alpha$ is a strategy such that $\alpha$ wins every play in which his/her moves are obtained by $s$. Similarly one defines the (winning) strategies for $\beta$.
A topological space $X$ is called weakly $\alpha$-favourable if $\alpha$ has a winning strategy in $\operatorname{BM} ( X )$, while it is termed $\alpha$-favourable if there is a stationary winning strategy for $\alpha$ in $\operatorname{BM} ( X )$. It can be derived from the work of Oxtoby [a8] (see also [a4], [a7] and [a9]) that the space $X$ is a Baire space exactly when player $\beta$ does not have a winning strategy in $\operatorname{BM} ( X )$. Hence, every weakly $\alpha$-favourable space is a Baire space. In the class of metric spaces $X$, a metric space is weakly $\alpha$-favourable if and only if it contains a dense and completely metrizable subspace. One can use these two results to see that the Banach–Mazur game is "not determined". That is, it could happen for some space $X$ that neither $\alpha$ nor $\beta$ has a winning strategy. For instance, the Bernstein set $X$ in the real line (cf. also Non-measurable set) is a Baire space which does not contain a dense completely metrizable subspace (consequently $X$ does not admit a winning strategy for either $\alpha$ or $\beta$).
The above characterization of weak $\alpha$-favourability for metric spaces has been extended for some non-metrizable spaces in [a10].
A characterization of $\alpha$-favourability of a given completely-regular space $X$ can be obtained by means of the space $C ( X )$ of all continuous and bounded real-valued functions on $X$ equipped with the usual sup-norm $\| f \| _ { \infty } : = \operatorname { sup } \{ | f ( x ) | : x \in X \}$. The following statement holds [a5]: The space $X$ is weakly $\alpha$-favourable if and only if the set
\begin{equation*} \{ f \in C ( X ) : f \ \text{attains its maximum in} \ X \} \end{equation*}
is residual in $C ( X )$. In other words, $X$ is weakly $\alpha$-favourable if and only if almost-all (from the Baire category point of view) of the functions in $C ( X )$ attain their maximum in $X$. The rich interplay between $X$, $C ( X )$ and $\operatorname{BM} ( X )$ is excellently presented in [a3].
The class of $\alpha$-favourable spaces (spaces which admit $\alpha$-winning tactics) is strictly narrower than the class of weakly $\alpha$-favourable spaces. G. Debs [a2] has exhibited a completely-regular topological space $X$ which admits a winning strategy for $\alpha$ in $\operatorname{BM} ( X )$, but does not admit any $\alpha$-winning tactics in $\operatorname{BM} ( X )$. Under the name "espaces tamisables" , the $\alpha$-favourable spaces were introduced and studied also by G. Choquet [a1].
[a10] is an excellent survey paper about topological games (including $\operatorname{BM} ( X )$).
[a1] G. Choquet, "Une classe régulières d'espaces de Baire" C.R. Acad. Sci. Paris Sér. I , 246 (1958) pp. 218–220
[a2] G. Debs, "Strategies gagnantes dans certain jeux topologique" Fundam. Math. , 126 (1985) pp. 93–105
[a3] G. Debs, J. Saint-Raymond, "Topological games and optimization problems" Mathematika , 41 (1994) pp. 117–132
[a4] M.R. Krom, "Infinite games and special Baire space extensions" Pacific J. Math. , 55 : 2 (1974) pp. 483–487
[a5] P.S. Kenderov, J.P. Revalski, "The Banach–Mazur game and generic existence of solutions to optimization problems" Proc. Amer. Math. Soc. , 118 (1993) pp. 911–917
[a6] "The Scottish Book: Mathematics from the Scottish Café" R.D. Mauldin (ed.) , Birkhäuser (1981)
[a7] R.A. McCoy, "A Baire space extension" Proc. Amer. Math. Soc. , 33 (1972) pp. 199–202
[a8] J. Oxtoby, "The Banach–Mazur game and the Banach category theorem" , Contributions to the Theory of Games III , Ann. of Math. Stud. , 39 , Princeton Univ. Press (1957) pp. 159–163
[a9] J. Saint-Raymond, "Jeux topologiques et espaces de Namioka" Proc. Amer. Math. Soc. , 87 (1983) pp. 499–504 Zbl 0511.54007
[a10] R. Telgárski, "Topological games: On the fiftieth anniversary of the Banach–Mazur game" Rocky Mount. J. Math. , 17 (1987) pp. 227–276 Zbl 0619.90110
[a11] S.M. Ulam, "The Scottish Book" , A LASL monograph , Los Alamos Sci. Lab. (1977) (Edition: Second)
Alpha-favourable space. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Alpha-favourable_space&oldid=37296
February 2014, 8(1): 293-320. doi: 10.3934/ipi.2014.8.293
A local information based variational model for selective image segmentation
Jianping Zhang 1, , Ke Chen 2, , Bo Yu 3, and Derek A. Gould 4,
School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning, 116024, China
Centre for Mathematical Imaging Techniques and Department of Mathematical Sciences, University of Liverpool, Liverpool L69 7ZL, United Kingdom
School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning 116024, China
Radiology Department, Royal Liverpool University Hospitals, Prescot Street, Liverpool L7 8XP, United Kingdom
Received July 2011 Revised November 2012 Published March 2014
Many effective models are available for segmenting an image to extract all homogeneous objects within it. Much less work has been done for applications that require segmentation of a single object identifiable by geometric constraints within the image. This paper presents an improved selective segmentation model, without a `balloon' force, that combines geometrical constraints with local image intensity information around the zero level set, aiming to overcome the spurious solutions produced by Badshah and Chen's model [8]. A key step in our new strategy is an adaptive local band selection algorithm. Numerical experiments show that the new model can detect objects possessing highly complex and nonconvex features, and produces desirable results in terms of segmentation quality and robustness.
Keywords: geometric constraints, segmentation, level sets, active contours, partial differential equations, local energy function.
Mathematics Subject Classification: Primary: 62H35, 65N22, 65N55; Secondary: 74G6.
Citation: Jianping Zhang, Ke Chen, Bo Yu, Derek A. Gould. A local information based variational model for selective image segmentation. Inverse Problems & Imaging, 2014, 8 (1) : 293-320. doi: 10.3934/ipi.2014.8.293
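The region-based models this paper builds on (e.g. the Chan-Vese model [18]) fit an image by two constants, one inside and one outside the contour. Stripped of curvature regularization, geometric constraints, and the level-set machinery, the core fitting step can be sketched as follows. This is a toy illustration on synthetic data, not the authors' method:

```python
import numpy as np

# Synthetic test image: a bright disc on a dark background, plus mild noise.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 < 12 ** 2
rng = np.random.default_rng(0)
img = disc.astype(float) + 0.05 * rng.standard_normal((n, n))

# Initial guess: a rough box around the centre plays the role of the contour.
inside = (np.abs(xx - 32) < 20) & (np.abs(yy - 32) < 20)

for _ in range(20):
    c1 = img[inside].mean()    # mean intensity inside the current region
    c2 = img[~inside].mean()   # mean intensity outside
    # Reassign each pixel to the region whose mean fits it better
    # (the data term of the piecewise-constant Mumford-Shah energy).
    inside = (img - c1) ** 2 < (img - c2) ** 2
```

Selective models such as the one proposed in this paper additionally impose geometric constraints (marker points near the target object) and restrict the intensity statistics to a local band around the zero level set, so that only the designated object, rather than every homogeneous region, is extracted.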
D. Adalsteinsson and J. A. Sethian, A fast level set method for propagating interfaces, J. Comput. Phys., 118 (1995), 269-277. doi: 10.1006/jcph.1995.1098. Google Scholar
D. Adalsteinsson and J. A. Sethian, A level set approach to a unified model for etching, deposition, and lithography. II. Three-dimensional simulations, J. Comput. Phys., 122 (1995), 348-366. doi: 10.1006/jcph.1995.1221. Google Scholar
L. Ambrosio and V. Tortorelli, Approximation of functionals depending on jumps by elliptic functionals via $\Gamma$-convergence, Commu. Pure and Applied Math., 43 (1990), 999-1036. doi: 10.1002/cpa.3160430805. Google Scholar
A. Araujo, S. Barbeiro and P. Serranho, Stability of Finite Difference Schemes for Complex Diffusion Processes, Pre-print, Departamento de Matematica da Universidade de Coimbra, DMUC report 10-23, 2010. doi: 10.1137/110825789. Google Scholar
G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Springer, New York, 2002. Google Scholar
N. Badshah and K. Chen, Multigrid method for the Chan-Vese model in variational segmentation, Communications in Computational Physics, 4 (2008), 294-316. Google Scholar
N. Badshah and K. Chen, On two multigrid algorithms for modeling variation multiphase image segmentation, IEEE Trans. Image Processing, 18 (2009), 1097-1106. doi: 10.1109/TIP.2009.2014260. Google Scholar
N. Badshah and K. Chen, Image selective segmentation under geometrical constraints using an active contour approach, Commun. Comput. Phys., 7 (2010), 759-778. doi: 10.4208/cicp.2009.09.026. Google Scholar
X. Bresson, S. Esedoglu, P. Vandergheynst, J. Thiran and S. Osher, Fast global minimization of the active contour/snake models, J. Math. Imaging and Vision, 28 (2007), 151-167. doi: 10.1007/s10851-007-0002-0. Google Scholar
E. S. Brown, T. F. Chan and X. Bresson, A convex approach for multi-phase piecewise constant Mumford-Shah image segmentation, Int. J. Computer Vision, 98 (2012), 103-121. doi: 10.1007/s11263-011-0499-y. Google Scholar
E. S. Brown, T. F. Chan and X. Bresson, A convex relaxation method for a class of vector-valued minimization problems with applications to Mumford-Shah segmentation, UCLA CAM report 10-43, 2010. Google Scholar
M. Burger, G. Gilboa, S. Osher and J. Xu, Nonlinear inverse scale space methods, Commun. Math. Sci., 4 (2006), 179-212. doi: 10.4310/CMS.2006.v4.n1.a7. Google Scholar
J. F. Canny, Finding Edges and Lines in Images, Technical Report AITR-720, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 1983. Google Scholar
V. Caselles, R. Kimmel and G. Sapiro, Geodesic active contours, Int. J. Computer Vision, 22 (1997), 61-79. doi: 10.1023/A:1007979827043. Google Scholar
T. F. Chan, S. Esedoglu and M. Nikolova, Algorithms for finding global minimizers of image segmentation and denoising models, SIAM J. Applied Mathematics, 66 (2006), 1632-1648. doi: 10.1137/040615286. Google Scholar
T. F. Chan, B. Y. Sandberg and L. A. Vese, Active contours without edges for vector-valued images, J. Visual Commun. Image Representation, 11 (2000), 130-141. doi: 10.1006/jvci.1999.0442. Google Scholar
T. F. Chan and L. A. Vese, An efficient variational multiphase motion for the Mumford-Shah segmentation model, Proc. Asilomar Conf. Signals, Systems, Computers, 1 (2000), 490-494. doi: 10.1109/ACSSC.2000.911004. Google Scholar
T. F. Chan and L. Vese, Active coutours without edges, IEEE Trans. Image Processing, 10 (2001), 266-277. doi: 10.1109/83.902291. Google Scholar
T. F. Chan and J. H. Shen, Image Processing and Analysis: Variational, PDE, Wavelet and Stochastic Methods, SIAM, Philadelphia, 2005. doi: 10.1137/1.9780898717877. Google Scholar
G. Gilboa, N. Sochen and Y. Zeeni, Image enhancement and denoising by complex diffusion processes, IEEE Trans Pattern Anal. Mach. Intell., 26 (2004), 1020-1036. doi: 10.1109/TPAMI.2004.47. Google Scholar
T. Goldstein, X. Bresson and S. Osher, Geometric applications of the split Bregman method: Segmentation and surface reconstruction, J. Sci. Computing, 45 (2010), 272-293. doi: 10.1007/s10915-009-9331-z. Google Scholar
C. Gout, C. Le Guyader and L. A. Vese, Segmentation under geometrical consitions with geodesic active contour and interpolation using level set methods, Numerical Algorithms, 39 (2005), 155-173. doi: 10.1007/s11075-004-3627-8. Google Scholar
C. Le Guyader, N. Forcadel and C. Gout, Image segmentation using a generalized fast marching method, Numerical Algorithms, 48 (2008), 189-212. doi: 10.1007/s11075-008-9183-x. Google Scholar
M. Jeon, M. Alexander, W. Pedrycz and N. Pizzi, Unsupervised hierarchical image segmentation with level set and additive operator splitting, Pattern Recogn. Lett., 26 (2005), 1461-1469. doi: 10.1016/j.patrec.2004.11.023. Google Scholar
M. Kass, A. Witkin and D. Terzopoulos, Snake: Active contour models, Int. J. Computer Vision, 1 (1988), 321-331. doi: 10.1007/BF00133570. Google Scholar
S. Lankton and A. Tannenbaum, Localizing region-based active contours, IEEE Trans. Image Processing, 17 (2008), 2029-2039. doi: 10.1109/TIP.2008.2004611. Google Scholar
C. Li, C. Kao, J. Gore and Z. Ding, Implicit active contours driven by local binary fitting energy, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Washington, DC, USA), IEEE Computer Society, (2007), 1-7. doi: 10.1109/CVPR.2007.383014. Google Scholar
F. Li, M. K. Ng and C. Li, Variational fuzzy Mumford-Shah model for image segmentation, SIAM J. Appl. Math., 70 (2010), 2750-2770. doi: 10.1137/090753887. Google Scholar
J. Lie, M. Lysaker and X.-C. Tai, A binary level set model and some applications to Mumford-Shah image segmentation, IEEE Trans. Image Processing, 15 (2006), 1171-1181. doi: 10.1109/TIP.2005.863956. Google Scholar
R. Malladi, J. A. Sethian and B. C. Vemuri, Shape modeling with front propagation: A level set approach, IEEE Trans. Pattern Anal. Mach. Intell., 17 (1995), 158-175. doi: 10.1109/34.368173. Google Scholar
A. Marquina and S. Osher, Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal, SIAM J. Sci. Computing, 22 (2000), 387-405. doi: 10.1137/S1064827599351751. Google Scholar
H. Mewada and S. Patnaik, Variable kernel based Chan-Vese model for image segmentation, Annual IEEE India Conference (INDICON), (2009), 1-4. doi: 10.1109/INDCON.2009.5409429. Google Scholar
J. Mille, Narrow band region-based active contours and surfaces for 2D and 3D segmentation, Computer Vision and Image Understanding, 113 (2009), 946-965. doi: 10.1016/j.cviu.2009.05.002. Google Scholar
D. Mumford and J. Shah, Optimal approximation by piecewise smooth functions and associated variational problem, Commun. Pure Appl. Math., 42 (1989), 577-685. doi: 10.1002/cpa.3160420503. Google Scholar
S. Osher and R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Springer Verlag, 2005. Google Scholar
S. Osher and J. Sethian, Fronts propagating with curvature dependent speed: algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys., 79 (1988), 12-49. doi: 10.1016/0021-9991(88)90002-2. Google Scholar
D. P. Peng, B. Merriman, S. Osher, H. K. Zhao and M. Kang, A PDE-Based fast local level set method, J. Comput. Phys., 155 (1999), 410-438. doi: 10.1006/jcph.1999.6345. Google Scholar
J. M. S. Prewitt, Object enhancement and extraction, in Picture Processing and Psychopictorics, (eds. B. S. Lipkin and A. Rosenfeld), New York: Academic, (1970), 75-149. Google Scholar
J. A. Sethian, Fast marching methods, SIAM Review, 41 (1999), 199-235. doi: 10.1137/S0036144598347059. Google Scholar
J. H. Shen, $\Gamma$-Convergence approximation to piecewise constant Mumford-Shah segmentation, Advanced Concepts for Intelligent Vision Systems, 3708 (2005), 499-506. doi: 10.1007/11558484_63. Google Scholar
I. Sobel, An isotropic $3\times3$ image gradient operator, Machine Vision for Three-Dimention Scenes, (ed. H. Freeman), (1990), 376-379. Google Scholar
M. Sussman, P. Smereka and S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys., 114 (1994), 146-159. doi: 10.1006/jcph.1994.1155. Google Scholar
X. C. Tai, O. Christiansen, P. Lin and I. Skjaelaaen, Image segmentation using some piecewise constant level set methods with MBO type of projection, Int. J. Computer Vision, 73 (2007), 61-76. doi: 10.1007/s11263-006-9140-x. Google Scholar
L. A. Vese and T. F. Chan, A multiphase level set framework for image segmentation using the Mumford and Shah model, Int. J. Computer Vision, 50 (2002), 271-293. Google Scholar
H. K. Zhao, T. F. Chan, B. Merriman and S. Osher, A variational level set approach to multiphase motion, J. Comput. Phys., 127 (1996), 179-195. doi: 10.1006/jcph.1996.0167. Google Scholar
Yuan-Nan Young, Doron Levy. Registration-Based Morphing of Active Contours for Segmentation of CT Scans. Mathematical Biosciences & Engineering, 2005, 2 (1) : 79-96. doi: 10.3934/mbe.2005.2.79
Hayden Schaeffer. Active arcs and contours. Inverse Problems & Imaging, 2014, 8 (3) : 845-863. doi: 10.3934/ipi.2014.8.845
Frédéric Gibou, Doron Levy, Carlos Cárdenas, Pingyu Liu, Arthur Boyer. Partial Differential Equations-Based Segmentation for Radiotherapy Treatment Planning. Mathematical Biosciences & Engineering, 2005, 2 (2) : 209-226. doi: 10.3934/mbe.2005.2.209
Thomas Lorenz. Partial differential inclusions of transport type with state constraints. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1309-1340. doi: 10.3934/dcdsb.2019018
Fanghua Lin, Dan Liu. On the Betti numbers of level sets of solutions to elliptic equations. Discrete & Continuous Dynamical Systems, 2016, 36 (8) : 4517-4529. doi: 10.3934/dcds.2016.36.4517
Ulrike Kant, Werner M. Seiler. Singularities in the geometric theory of differential equations. Conference Publications, 2011, 2011 (Special) : 784-793. doi: 10.3934/proc.2011.2011.784
Sun-Yung Alice Chang, Xi-Nan Ma, Paul Yang. Principal curvature estimates for the convex level sets of semilinear elliptic equations. Discrete & Continuous Dynamical Systems, 2010, 28 (3) : 1151-1164. doi: 10.3934/dcds.2010.28.1151
Laurence Guillot, Maïtine Bergounioux. Existence and uniqueness results for the gradient vector flow and geodesic active contours mixed model. Communications on Pure & Applied Analysis, 2009, 8 (4) : 1333-1349. doi: 10.3934/cpaa.2009.8.1333
Egil Bae, Xue-Cheng Tai, Wei Zhu. Augmented Lagrangian method for an Euler's elastica based segmentation model that promotes convex contours. Inverse Problems & Imaging, 2017, 11 (1) : 1-23. doi: 10.3934/ipi.2017001
Alberto Bressan, Ke Han, Franco Rampazzo. On the control of non holonomic systems by active constraints. Discrete & Continuous Dynamical Systems, 2013, 33 (8) : 3329-3353. doi: 10.3934/dcds.2013.33.3329
Herbert Koch. Partial differential equations with non-Euclidean geometries. Discrete & Continuous Dynamical Systems - S, 2008, 1 (3) : 481-504. doi: 10.3934/dcdss.2008.1.481
Lorenzo Zambotti. A brief and personal history of stochastic partial differential equations. Discrete & Continuous Dynamical Systems, 2021, 41 (1) : 471-487. doi: 10.3934/dcds.2020264
Wilhelm Schlag. Spectral theory and nonlinear partial differential equations: A survey. Discrete & Continuous Dynamical Systems, 2006, 15 (3) : 703-723. doi: 10.3934/dcds.2006.15.703
Eugenia N. Petropoulou, Panayiotis D. Siafarikas. Polynomial solutions of linear partial differential equations. Communications on Pure & Applied Analysis, 2009, 8 (3) : 1053-1065. doi: 10.3934/cpaa.2009.8.1053
Arnulf Jentzen. Taylor expansions of solutions of stochastic partial differential equations. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 515-557. doi: 10.3934/dcdsb.2010.14.515
Barbara Abraham-Shrauner. Exact solutions of nonlinear partial differential equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 577-582. doi: 10.3934/dcdss.2018032
Nguyen Thieu Huy, Ngo Quy Dang. Dichotomy and periodic solutions to partial functional differential equations. Discrete & Continuous Dynamical Systems - B, 2017, 22 (8) : 3127-3144. doi: 10.3934/dcdsb.2017167
Runzhang Xu. Preface: Special issue on advances in partial differential equations. Discrete & Continuous Dynamical Systems - S, 2021, 14 (12) : i-i. doi: 10.3934/dcdss.2021137
Mario Roldan. Hyperbolic sets and entropy at the homological level. Discrete & Continuous Dynamical Systems, 2016, 36 (6) : 3417-3433. doi: 10.3934/dcds.2016.36.3417
Paul Bracken. Exterior differential systems and prolongations for three important nonlinear partial differential equations. Communications on Pure & Applied Analysis, 2011, 10 (5) : 1345-1360. doi: 10.3934/cpaa.2011.10.1345
\begin{definition}[Definition:Normed Algebra]
Let $R$ be a division ring with norm $\norm {\,\cdot\,}_R$.
A '''normed algebra''' over $R$ is a pair $\struct {A, \norm{\,\cdot\,} }$ where:
:$A$ is an algebra over $R$
:$\norm {\,\cdot\,}$ is a norm on $A$.
\end{definition}
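The definition above only asks for a norm on $A$; many texts additionally require the norm to be submultiplicative ($\norm {a b} \le \norm a \norm b$). A concrete instance is $\C$ viewed as an algebra over $\R$ with the modulus as norm, where the norm is even multiplicative. A small numerical sanity check of the axioms (an illustration, not a proof):

```python
import random

random.seed(0)
max_tri = 0.0   # worst violation of the triangle inequality seen
max_mult = 0.0  # worst deviation from |zw| = |z||w| seen
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    max_tri = max(max_tri, abs(z + w) - (abs(z) + abs(w)))
    max_mult = max(max_mult, abs(abs(z * w) - abs(z) * abs(w)))
```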
March 2021, 17(2): 549-573. doi: 10.3934/jimo.2019123
Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization
Sujit Kumar Samanta , and Rakesh Nandi
Department of Mathematics, National Institute of Technology Raipur, Raipur-492010, India
* Corresponding author: Sujit Kumar Samanta
Received September 2018 Revised May 2019 Published March 2021 Early access October 2019
Fund Project: The first author acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, India, for partial support from the project grant 25(0271)/17/EMR-II.
This paper analyzes an infinite-buffer single-server queueing system wherein customers arrive in batches of random size according to a discrete-time renewal process. The customers are served one at a time under a discrete-time Markovian service process. Based on the censoring technique, the UL-type $ RG $-factorization for the Toeplitz-type block-structured Markov chain is used to obtain the prearrival-epoch probabilities. The random-epoch probabilities are obtained with the help of the classical principle based on Markov renewal theory. The system-length distributions at outside observer's, intermediate and post-departure epochs are obtained by making relations among the various time epochs. We also analyze the waiting-time distribution, measured in slots, of an arbitrary customer in an arrival batch. To unify the results of the discrete-time model and its continuous-time counterpart, we briefly demonstrate how the continuous-time results can be obtained from the discrete-time ones. A variety of numerical results are provided to illustrate the effect of model parameters on the performance measures.
Keywords: Toeplitz type block-structured Markov chain, censored Markov chain, discrete-time Markovian service process (D-MSP), general independent batch arrival, queueing, UL-type $ RG $-factorization.
Mathematics Subject Classification: Primary: 60K25, 90B22, 68M20, 60K20.
Citation: Sujit Kumar Samanta, Rakesh Nandi. Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization. Journal of Industrial & Management Optimization, 2021, 17 (2) : 549-573. doi: 10.3934/jimo.2019123
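The censoring technique mentioned in the abstract has a simple finite-dimensional analogue: watching a Markov chain only while it visits a subset $E$ of its states yields a new ("censored") chain on $E$, whose stationary distribution is the original stationary distribution restricted to $E$ and renormalized. A minimal numerical sketch of this fact, with a made-up transition matrix (the paper's chains are infinite and block-structured, which is what the $RG$-factorization handles):

```python
import numpy as np

# Hypothetical 4-state transition matrix (rows sum to 1, chain irreducible).
P = np.array([
    [0.5, 0.2, 0.2, 0.1],
    [0.1, 0.6, 0.1, 0.2],
    [0.3, 0.1, 0.4, 0.2],
    [0.2, 0.2, 0.3, 0.3],
])

keep = [0, 1]   # states we observe (the set E)
drop = [2, 3]   # states censored out

A = P[np.ix_(keep, keep)]
B = P[np.ix_(keep, drop)]
C = P[np.ix_(drop, keep)]
D = P[np.ix_(drop, drop)]

# Censored chain on E: excursions through the censored states are folded
# into a single transition via the fundamental matrix (I - D)^{-1}.
P_E = A + B @ np.linalg.inv(np.eye(len(drop)) - D) @ C

def stationary(M):
    """Stationary row vector of an irreducible stochastic matrix."""
    n = M.shape[0]
    # Solve pi (M - I) = 0 together with the normalization sum(pi) = 1.
    G = np.vstack([(M - np.eye(n)).T, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi, *_ = np.linalg.lstsq(G, rhs, rcond=None)
    return pi

pi_full = stationary(P)
pi_cens = stationary(P_E)
```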
J. Abate, G. L. Choudhury and W. Whitt, Asymptotics for steady-state tail probabilities in structured Markov queueing models, Comm. Statist. Stochastic Models, 10 (1994), 99-143. doi: 10.1080/15326349408807290. Google Scholar
A. S. Alfa, Applied Discrete-Time Queues, 2$^{nd}$ edition, Springer-Verlag, New York, 2016. doi: 10.1007/978-1-4939-3420-1. Google Scholar
A. S. Alfa, J. Xue and Q. Ye, Perturbation theory for the asymptotic decay rates in the queues with Markovian arrival process and/or Markovian service process, Queueing Syst., 36 (2000), 287-301. doi: 10.1023/A:1011032718715. Google Scholar
J. R. Artalejo, I. Atencia and P. Moreno, A discrete-time $Geo^{[X]}/G/1$ retrial queue with control of admission, Applied Mathematical Modelling, 29 (2005), 1100-1120. doi: 10.1016/j.apm.2005.02.005. Google Scholar
J. R. Artalejo and Q. L. Li, Performance analysis of a block-structured discrete-time retrial queue with state-dependent arrivals, Discrete Event Dyn. Syst., 20 (2010), 325-347. doi: 10.1007/s10626-009-0075-6. Google Scholar
F. Avram and D. F. Chedom, On symbolic $RG$-factorization of quasi-birth-and-death processes, TOP, 19 (2011), 317-335. doi: 10.1007/s11750-011-0195-7. Google Scholar
A. D. Banik and U. C. Gupta, Analyzing the finite buffer batch arrival queue under Markovian service process: $GI^{X}/MSP/1/N$, TOP, 15 (2007), 146-160. doi: 10.1007/s11750-007-0007-2. Google Scholar
P. P. Bocharov, C. D'Apice and S. Salerno, The stationary characteristics of the $G/MSP/1/r$ queueing system, Autom. Remote Control, 64 (2003), 288-301. doi: 10.1023/A:1022219232282. Google Scholar
H. Bruneel and B. G. Kim, Discrete-time Models for Communication Systems including ATM, The Springer International Series in Engineering and Computer Science, 205, Kluwer Academic Publishers, Boston, 1993. doi: 10.1007/978-1-4615-3130-2. Google Scholar
M. L. Chaudhry, A. D. Banik and A. Pacheco, A simple analysis of the batch arrival queue with infinite-buffer and Markovian service process using roots method: $GI^{[X]}/C$-$MSP/1/\infty $, Ann. Oper. Res., 252 (2017), 135-173. doi: 10.1007/s10479-015-2026-y. Google Scholar
M. L. Chaudhry, S. K. Samanta and A. Pacheco, Analytically explicit results for the $GI/C$-$MSP/1/\infty$ queueing system using roots, Probab. Engrg. Inform. Sci., 26 (2012), 221-244. doi: 10.1017/S0269964811000349. Google Scholar
E. Çinlar, Introduction to Stochastic Process, Prentice Hall, New Jersey, 1975. Google Scholar
D. Freedman, Approximating Countable Markov Chains, 2$^{nd}$ edition, Springer-Verlag, New York, 1983. doi: 10.1007/978-1-4613-8230-0. Google Scholar
Y. Gao and W. Liu, Analysis of the $GI/Geo/c$ queue with working vacations, Applied Mechanics and Materials, 197 (2012), 534-541. doi: 10.4028/www.scientific.net/AMM.197.534. Google Scholar
V. Goswami, U. C. Gupta and S. K. Samanta, Analyzing discrete-time bulk-service $Geo/Geo^b/m$ queue, RAIRO Operations Research, 40 (2006), 267-284. doi: 10.1051/ro:2006021. Google Scholar
W. K. Grassmann and D. P. Heyman, Equilibrium distribution of block-structured Markov chains with repeating rows, J. Appl. Probab., 27 (1990), 557-576. doi: 10.2307/3214541. Google Scholar
U. C. Gupta and A. D. Banik, Complete analysis of finite and infinite buffer $GI/MSP/1$ queue — A computational approach, Oper. Res. Lett., 35 (2007), 273-280. doi: 10.1016/j.orl.2006.02.003. Google Scholar
A. Horváth, G. Horváth and M. Telek, A joint moments based analysis of networks of $MAP/MAP/1$ queues, Performance Evaluation, 67 (2010), 759-778. doi: 10.1016/j.peva.2009.12.006. Google Scholar
J. J. Hunter, Mathematical techniques of applied probability, in Discrete-Time Models: Techniques and Applications, Operations Research and Industrial Engineering, Academic Press, New York, 1983. Google Scholar
T. Jiang and L. Liu, Analysis of a batch service multi-server polling system with dynamic service control, J. Ind. Manag. Optim., 14 (2018), 743-757. doi: 10.3934/jimo.2017073. Google Scholar
N. K. Kim, S. H. Chang and K. C. Chae, On the relationships among queue lengths at arrival, departure, and random epochs in the discrete-time queue with D-BMAP arrivals, Oper. Res. Lett., 30 (2002), 25-32. doi: 10.1016/S0167-6377(01)00110-9. Google Scholar
Q. Li, Y. Ying and Y. Q. Zhao, A $BMAP/G/1$ retrial queue with a server subject to breakdowns and repairs, Ann. Oper. Res., 141 (2006), 233-270. doi: 10.1007/s10479-006-5301-0. Google Scholar
Q. L. Li, Constructive Computation in Stochastic Models with Applications: The RGfactorization, Springer, Berlin and Tsinghua University Press, Beijing, 2010. doi: 10.1007/978-3-642-11492-2. Google Scholar
Q. L. Li and Y. Q. Zhao, Light-tailed asymptotics of stationary probability vectors of Markov chains of $GI/G/1$ type, Adv. in Appl. Probab., 37 (2005), 1075-1093. doi: 10.1017/S0001867800000677. Google Scholar
Q. L. Li and Y. Q. Zhao, A $MAP/G/1$ queue with negative customers, Queueing Syst., 47(1) (2004), 5–43. doi: 10.1023/B:QUES.0000032798.65858.19. Google Scholar
D. M. Lucantoni and M. F. Neuts, Some steady-state distributions for the $MAP/SM/1$ queue, Comm. Statist. Stochastic Models, 10 (1994), 575-598. doi: 10.1080/15326349408807311. Google Scholar
C. D. Meyer, Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems, SIAM Review, 31 (1989), 240-272. doi: 10.1137/1031050. Google Scholar
M. S. Mushtaq, S. Fowler and A. Mellouk, QoE in 5G cloud networks using multimedia services, in Proceeding of IEEE international Wireless Communication and Networking Conference (WCNC'16), Doha, Qatar, 2016. doi: 10.1109/WCNC.2016.7565173. Google Scholar
T. Ozawa, Analysis of queues with Markovian service processes, Stochastic Models, 20 (2004), 391-413. doi: 10.1081/STM-200033073. Google Scholar
A. Pacheco, S. K. Samanta and M. L. Chaudhry, A short note on the $GI/Geo/1$ queueing system, Statist. Probab. Lett., 82 (2012), 268-273. doi: 10.1016/j.spl.2011.09.022. Google Scholar
S. K. Samanta, Sojourn-time distribution of the $GI/MSP/1$ queueing system, OPSEARCH, 52 (2015), 756-770. doi: 10.1007/s12597-015-0202-0. Google Scholar
S. K. Samanta, M. L. Chaudhry and A. Pacheco, Analysis of $BMAP/MSP/1$ queue, Methodol. Comput. Appl. Probab., 18 (2016), 419-440. doi: 10.1007/s11009-014-9429-0. Google Scholar
S. K. Samanta, M. L. Chaudhry, A. Pacheco and U. C. Gupta, Analytic and computational analysis of the discrete-time $GI/D$-$MSP/1$ queue using roots, Comput. Oper. Res., 56 (2015), 33-40. doi: 10.1016/j.cor.2014.10.017. Google Scholar
S. K. Samanta, U. C. Gupta and M. L. Chaudhry, Analysis of stationary discrete-time $GI/D$-$MSP/1$ queue with finite and infinite buffers, 4OR, 7 (2009), 337-361. doi: 10.1007/s10288-008-0088-2. Google Scholar
S. K. Samanta and Z. G. Zhang, Stationary analysis of a discrete-time $GI/D$-$MSP/1$ queue with multiple vacations, Appl. Math. Model., 36 (2012), 5964-5975. doi: 10.1016/j.apm.2012.01.049. Google Scholar
K. D. Turck, S. D. Vuyst, D. Fiems, H. Bruneel and and S. Wittevrongel, Efficient performance analysis of newly proposed sleep-mode mechanisms for IEEE 802.16m in case of correlated downlink traffic, Wireless Networks, 19 (2013), 831-842. doi: 10.1007/s11276-012-0504-6. Google Scholar
Y. C. Wang, J. H. Chou and S. Y. Wang, Loss pattern of $DBMAP/DMSP/1/K$ queue and its application in wireless local communications, Appl. Math. Model., 35 (2011), 1782-1797. doi: 10.1016/j.apm.2010.10.009. Google Scholar
Y. Wang, C. Linb and Q. L. Li, Performance analysis of email systems under three types of attacks, Performance Evaluation, 67 (2010), 485-499. doi: 10.1016/j.peva.2010.01.003. Google Scholar
M. Yu and A. S. Alfa, Algorithm for computing the queue length distribution at various time epochs in $DMAP/G^{(1, a, b)}/1/N$ queue with batch-size-dependent service time, European J. Oper. Res., 244 (2015), 227-239. doi: 10.1016/j.ejor.2015.01.056. Google Scholar
M. Zhang and Z. Hou, Performance analysis of $MAP/G/1$ queue with working vacations and vacation interruption, Applied Mathematical Modelling, 35 (2011), 1551-1560. doi: 10.1016/j.apm.2010.09.031. Google Scholar
J. A. Zhao, B. Li, C. W. Kok and I. Ahmad, MPEG-4 video transmission over wireless networks: A link level performance study, Wireless Networks, 10 (2004), 133-146. doi: 10.1023/B:WINE.0000013078.74259.13. Google Scholar
Y. Q. Zhao, Censoring technique in studying block-structured Markov chains, in Advance in Algorithmic Methods for Stochastic Models, Notable Publications, 2000, 417–433. Google Scholar
Y. Q. Zhao, W. Li and W. J. Braun, Infinite block-structured transition matrices and their properties, Adv. in Appl. Probab., 30 (1998), 365-384. doi: 10.1239/aap/1035228074. Google Scholar
Y. Q. Zhao and D. Liu, The censored Markov chain and the best augmentation, Journal of Applied Probability, 33 (1996), 623-629. doi: 10.1017/S0021900200100063. Google Scholar
Figure 1. Various time epochs in LAS-DA
Figure 2. Various time epochs in EAS
Table 1. System-length distribution at prearrival epoch (LAS-DA)
$ n $ $ \pi^{-}_1(n) $ $ \pi^{-}_2(n) $ $ \pi^{-}_3(n) $ $ \pi^{-}_4(n) $ $ \mathit{\boldsymbol{\pi }}^{-}(n){\bf e} $
0 0.147931 0.087562 0.141983 0.215337 0.592813
10 0.005820 0.002639 0.005923 0.004720 0.019101
150 0.000000 0.000000 0.000000 0.000000 0.000000
$\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$
Sum 0.268062 0.144522 0.263029 0.324388 1.000000
Table 2. System-length distribution at random epoch (LAS-DA)
$ n $ $ \pi_1(n) $ $ \pi_2(n) $ $ \pi_3(n) $ $ \pi_4(n) $ $ \mathit{\boldsymbol{\pi }}(n){\bf e} $
$ L_{q}= 4.110096 $, $ W_{q}\equiv L_{q}/\lambda\overline{g}=16.988397 $
Table 3. System-length distribution at intermediate epoch (LAS-DA)
$ n $ $ \pi^{\bullet}_1(n) $ $ \pi^{\bullet}_2(n) $ $ \pi^{\bullet}_3(n) $ $ \pi^{\bullet}_4(n) $ $ \mathit{\boldsymbol{\pi }}^{\bullet}(n){\bf e} $
Table 4. System-length distribution at post-departure epoch (LAS-DA)
$ n $ $ \pi^{+}_1(n) $ $ \pi^{+}_2(n) $ $ \pi^{+}_3(n) $ $ \pi^{+}_4(n) $ $ \mathit{\boldsymbol{\pi }}^{+}(n){\bf e} $
Table 5. Waiting-time distribution of an arbitrary customer (LAS-DA)
$ k $ $ w_1(k) $ $ w_2(k) $ $ w_3(k) $ $ w_4(k) $ $ {\bf w}(k){\bf e} $
$ W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=16.988398 $
Table 6. System-length distribution at prearrival epoch (EAS)
Table 7. System-length distribution at random epoch (EAS)
Table 8. System-length distribution at outside observer's epoch (EAS)
$ n $ $ \pi^{\circ}_1(n) $ $ \pi^{\circ}_2(n) $ $ \pi^{\circ}_3(n) $ $ \pi^{\circ}_4(n) $ $ \mathit{\boldsymbol{\pi }}^{\circ}(n){\bf e} $
$ L_{q}=1.040384 $, $ W_{q}\equiv L_{q}/\lambda\overline{g}=4.265574 $
Table 9. System-length distribution at post-departure epoch (EAS)
Table 10. Waiting-time distribution of an arbitrary customer (EAS)
$ W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=4.265574 $
Samuel N. Cohen, Lukasz Szpruch. On Markovian solutions to Markov Chain BSDEs. Numerical Algebra, Control & Optimization, 2012, 2 (2) : 257-269. doi: 10.3934/naco.2012.2.257
Xi Zhu, Meixia Li, Chunfa Li. Consensus in discrete-time multi-agent systems with uncertain topologies and random delays governed by a Markov chain. Discrete & Continuous Dynamical Systems - B, 2020, 25 (12) : 4535-4551. doi: 10.3934/dcdsb.2020111
Ralf Banisch, Carsten Hartmann. A sparse Markov chain approximation of LQ-type stochastic control problems. Mathematical Control & Related Fields, 2016, 6 (3) : 363-389. doi: 10.3934/mcrf.2016007
Pikkala Vijaya Laxmi, Obsie Mussa Yesuf. Analysis of a finite buffer general input queue with Markovian service process and accessible and non-accessible batch service. Journal of Industrial & Management Optimization, 2010, 6 (4) : 929-944. doi: 10.3934/jimo.2010.6.929
Angelica Pachon, Federico Polito, Costantino Ricciuti. On discrete-time semi-Markov processes. Discrete & Continuous Dynamical Systems - B, 2021, 26 (3) : 1499-1529. doi: 10.3934/dcdsb.2020170
Ralf Banisch, Carsten Hartmann. Addendum to "A sparse Markov chain approximation of LQ-type stochastic control problems". Mathematical Control & Related Fields, 2017, 7 (4) : 623-623. doi: 10.3934/mcrf.2017023
Ajay Jasra, Kody J. H. Law, Yaxian Xu. Markov chain simulation for multilevel Monte Carlo. Foundations of Data Science, 2021, 3 (1) : 27-47. doi: 10.3934/fods.2021004
Yueyuan Zhang, Yanyan Yin, Fei Liu. Robust observer-based control for discrete-time semi-Markov jump systems with actuator saturation. Journal of Industrial & Management Optimization, 2021, 17 (6) : 3013-3026. doi: 10.3934/jimo.2020105
Michael C. Fu, Bingqing Li, Rongwen Wu, Tianqi Zhang. Option pricing under a discrete-time Markov switching stochastic volatility with co-jump model. Frontiers of Mathematical Finance, 2022, 1 (1) : 137-160. doi: 10.3934/fmf.2021005
Jingzhi Tie, Qing Zhang. An optimal mean-reversion trading rule under a Markov chain model. Mathematical Control & Related Fields, 2016, 6 (3) : 467-488. doi: 10.3934/mcrf.2016012
Kun Fan, Yang Shen, Tak Kuen Siu, Rongming Wang. On a Markov chain approximation method for option pricing with regime switching. Journal of Industrial & Management Optimization, 2016, 12 (2) : 529-541. doi: 10.3934/jimo.2016.12.529
Samuel N. Cohen. Uncertainty and filtering of hidden Markov models in discrete time. Probability, Uncertainty and Quantitative Risk, 2020, 5 (0) : 4-. doi: 10.1186/s41546-020-00046-x
Kengo Matsumoto. On the Markov-Dyck shifts of vertex type. Discrete & Continuous Dynamical Systems, 2016, 36 (1) : 403-422. doi: 10.3934/dcds.2016.36.403
Sofian De Clercq, Koen De Turck, Bart Steyaert, Herwig Bruneel. Frame-bound priority scheduling in discrete-time queueing systems. Journal of Industrial & Management Optimization, 2011, 7 (3) : 767-788. doi: 10.3934/jimo.2011.7.767
Piotr Oprocha. Chain recurrence in multidimensional time discrete dynamical systems. Discrete & Continuous Dynamical Systems, 2008, 20 (4) : 1039-1056. doi: 10.3934/dcds.2008.20.1039
Zhongkui Li, Zhisheng Duan, Guanrong Chen. Consensus of discrete-time linear multi-agent systems with observer-type protocols. Discrete & Continuous Dynamical Systems - B, 2011, 16 (2) : 489-505. doi: 10.3934/dcdsb.2011.16.489
Aad G (2012) Search for light scalar top-quark pair production in final states with two leptons with the ATLAS detector in $\sqrt{s}=7\ \mathrm{TeV}$ proton-proton collisions in The European Physical Journal C
Aad G (2013) Search for light top squark pair production in final states with leptons and b-jets with the ATLAS detector in Physics Letters B
Aad G (2013) Search for long-lived stopped R -hadrons decaying out of time with p p collisions using the ATLAS detector in Physical Review D
Aad G (2013) Search for long-lived, heavy particles in final states with a muon and multi-track displaced vertex in proton-proton collisions at <mml:math altimg="si1.gif" overflow="scroll" xmlns:xocs="http://www.elsevier.com/xml/xocs/dtd" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.elsevier.com/xml/ja/dtd" xmlns:ja="http://www.elsevier.com/xml/ja/dtd" xmlns:mml in Physics Letters B
Aad G (2013) Search for long-lived, multi-charged particles in pp collisions at in Physics Letters B
Aad G (2012) Search for magnetic monopoles in sqrt[s]=7 TeV pp collisions with the ATLAS detector. in Physical review letters
Aad G (2014) Search for microscopic black holes and string balls in final states with leptons and jets with the ATLAS detector at s $$ \sqrt{s} $$ = 8 TeV in Journal of High Energy Physics
Aad G (2013) Search for microscopic black holes in a like-sign dimuon final state using large track multiplicity with the ATLAS detector in Physical Review D | CommonCrawl |
We will now look at some more examples applying the ratio test.
Using the ratio test, determine whether the series $\sum_{n=1}^{\infty} \frac{(4n)!}{4^n n!}$ converges or diverges.
This series is positive for all $n \in \mathbb{N}$ and so we can apply the ratio test. We have that:
\begin{align} \quad \lim_{n \to \infty} \frac{\frac{(4(n+1))!}{4^{n+1} (n+1)!}}{\frac{(4n)!}{4^n n!}} = \lim_{n \to \infty} \frac{(4n+4)!}{4^{n+1} (n+1)!} \cdot \frac{4^n n!}{(4n)!} = \lim_{n \to \infty} \frac{(4n + 1)(4n + 2)(4n + 3)(4n + 4)}{4 (n+1)} = \lim_{n \to \infty} (4n + 1)(4n + 2)(4n + 3) = \infty \end{align}
Therefore by the ratio test we have that $\sum_{n=1}^{\infty} \frac{(4n)!}{4^n n!}$ diverges.
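As a quick numerical sanity check of the computation above (a supplementary illustration, not part of the original solution), one can evaluate the consecutive ratios exactly with rational arithmetic and compare them with the closed form $(4n + 1)(4n + 2)(4n + 3)$:

```python
from fractions import Fraction
from math import factorial

def a(n):
    # a_n = (4n)! / (4^n n!), the terms of the series
    return Fraction(factorial(4 * n), 4 ** n * factorial(n))

def ratio(n):
    # exact consecutive ratio a_{n+1} / a_n
    return a(n + 1) / a(n)

# The ratio equals (4n+1)(4n+2)(4n+3), which grows without bound,
# so the ratio-test limit is infinite and the series diverges.
for n in range(1, 6):
    assert ratio(n) == (4 * n + 1) * (4 * n + 2) * (4 * n + 3)
```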
Using the ratio test, determine whether the series $\sum_{n=1}^{\infty} \frac{n^2 + 2n + 1}{3^n + 2}$ converges or diverges.
Once again, this series is positive for all $n \in \mathbb{N}$ and so we can apply the ratio test. We have that:
\begin{align} \quad \lim_{n \to \infty} \frac{\frac{(n+1)^2 + 2(n+1) + 1}{3^{n+1} + 2}}{\frac{n^2 + 2n + 1}{3^n + 2}} = \lim_{n \to \infty} \frac{(n+1)^2 + 2(n+1) + 1}{3^{n+1} + 2} \cdot \frac{3^n + 2}{n^2 + 2n + 1} = \lim_{n \to \infty} \frac{(n^2 + 4n + 4)(3^n + 2)}{(3^{n+1} + 2)(n^2 + 2n + 1)} \\ \quad = \lim_{n \to \infty} \frac{1 + \frac{4}{n} + \frac{4}{n^2}}{1 + \frac{2}{n} + \frac{1}{n^2}} \cdot \frac{\frac{1}{3} + \frac{2}{3^{n+1}}}{1 + \frac{2}{3^{n+1}}} = \frac{1}{3} \end{align}
Therefore by the ratio test, we have that $\sum_{n=1}^{\infty} \frac{n^2 + 2n + 1}{3^n + 2}$ converges.
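A floating-point sanity check (a supplementary illustration, with an arbitrarily chosen cutoff) shows the consecutive ratios approaching $\frac{1}{3}$:

```python
def b(n):
    # b_n = (n^2 + 2n + 1) / (3^n + 2), the terms of the series
    return (n * n + 2 * n + 1) / (3 ** n + 2)

# For large n the ratio b_{n+1}/b_n is close to 1/3 < 1,
# consistent with convergence by the ratio test.
r = b(101) / b(100)
assert abs(r - 1 / 3) < 0.01
```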
Show that the ratio test is inconclusive for any series whose terms are given by a positive rational function. What sort of test will determine the convergence or divergence of series of rational functions?
Consider a typical rational function:
\begin{align} \quad p(n) = \frac{a_0 + a_1n + a_2n^2 + ... + a_p n^p}{b_0 + b_1n + b_2n^2 + ... + b_r n^r} \end{align}
Assume that $\{ p(n) \}$ is ultimately positive so that we can apply the ratio test. Then we have that:
\begin{align} \quad \lim_{n \to \infty} \frac{p(n+1)}{p(n)} = \lim_{n \to \infty} \frac{a_0 + a_1(n+1) + a_2(n+1)^2 + ... + a_p (n+1)^p}{b_0 + b_1(n+1) + b_2(n+1)^2 + ... + b_r (n+1)^r} \frac{b_0 + b_1n + b_2n^2 + ... + b_r n^r}{a_0 + a_1n + a_2n^2 + ... + a_p n^p} \end{align}
The degree of the polynomial in the numerator will be $p + r$ and the degree of the polynomial in the denominator will be $p + r$. Therefore the limit above is equal to $1$ and so the ratio test is inconclusive.
For positive rational functions, we can apply the limit comparison test frequently and determine the convergence or divergence from there.
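To illustrate the inconclusiveness numerically (with a hypothetical sample function chosen here for illustration), take $p(n) = \frac{n^2 + 1}{n^3 + 2n}$; its consecutive ratios tend to $1$ even though $\sum p(n)$ diverges by limit comparison with $\sum \frac{1}{n}$:

```python
from fractions import Fraction

def p(n):
    # sample positive rational function p(n) = (n^2 + 1) / (n^3 + 2n)
    return Fraction(n * n + 1, n ** 3 + 2 * n)

# The ratio p(n+1)/p(n) tends to 1 (roughly 1 - 1/n), so the
# ratio test gives rho = 1 and is inconclusive.
for n in (10, 100, 1000):
    r = float(p(n + 1) / p(n))
    assert abs(r - 1.0) < 2.0 / n
```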
Reprove the ratio test.
Suppose that $\{ a_n \}$ is an ultimately positive sequence and let $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho$.
First consider the case where $0 ≤ \rho < 1$. Choose $r \in \mathbb{R}$ such that $\rho < r < 1$. Since $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho$ then for some $N \in \mathbb{N}$ we have that if $n ≥ N$ then:
\begin{align} \quad \frac{a_{n+1}}{a_n} < r \\ \quad a_{n+1} < ra_n \end{align}
Therefore we have that:
\begin{align} \quad a_{N+1} < ra_N \\ \quad a_{N+2} < ra_{N+1} < r^2 a_N \\ \quad a_{N+3} < ra_{N+2} < r^2 a_{N+1} < r^3 a_N \\ \quad \quad \vdots \quad \quad \end{align}
Therefore we have that $a_{N+k} < r^k a_N$ for all $k ≥ 1$. Note that the series $\sum_{k=1}^{\infty} r^k a_N$ converges as a geometric series with $\mid r \mid < 1$, and so by the comparison test, we have that $\sum_{n=1}^{\infty} a_n$ converges.
Now suppose that $1 < \rho$. Choose $r \in \mathbb{R}$ such that $1 < r < \rho$. Since $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = \rho$ then for some $N \in \mathbb{N}$ we have that if $n ≥ N$ then:
\begin{align} \quad r < \frac{a_{n+1}}{a_n} \\ \quad ra_n < a_{n+1} \end{align}
\begin{align} \quad ra_N < a_{N+1} \\ \quad r^2 a_N < ra_{N+1} < a_{N+2} \\ \quad r^3 a_N < r^2 a_{N+1} < r a_{N+2} < a_{N+3} \\ \quad \quad \vdots \quad \quad \end{align}
Therefore we have that $r^k a_N < a_{N+k}$ for all $k ≥ 1$. Note that the series $\sum_{k=1}^{\infty} r^k a_N$ diverges as a geometric series with $r > 1$. Therefore we have by the comparison test that $\sum_{n=1}^{\infty} a_n$ also diverges.
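The domination argument in the divergent case can be illustrated numerically (a supplementary sketch using the sample sequence $a_n = \frac{2^n}{n}$, for which $\rho = 2$):

```python
def a(n):
    # sample sequence with a_{n+1}/a_n = 2n/(n+1) -> 2 > 1
    return 2 ** n / n

# Pick r with 1 < r < rho = 2; the ratio a_{n+1}/a_n exceeds
# r = 1.5 for all n >= N = 4, so a_{N+k} >= r^k a_N and the terms
# dominate a divergent geometric series.
r, N = 1.5, 4
for k in range(1, 20):
    assert a(N + k) >= r ** k * a(N)
```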
Notation for derivatives of $\sin x$. Can we use $\sin^\prime x$?
I was teaching Trig Derivatives today to a high school class and someone asked a question I've never had before:
Can we use $\sin^\prime x$ to represent the derivative of $\sin x$?
I have never seen that used when studying math and a Google search only showed one use.
However when I googled $\sin^\prime x$ it came back with hits for the derivative of $\sin x$. We do use a similar notation for the inverse and square of $\sin x$, so why not the derivative?
Any thoughts would be appreciated. Thanks.
A Math Teacher
Comment (user247327, May 7 '18): Yes, $\sin'(x)$ is not an uncommon notation, though $(\sin(x))'$ and $d(\sin(x))/dx$ are more common.
The notation you describe has a certain appeal, however it is not in common use. The usual shorthands are the notations I'm sure you're familiar with, such as $\frac{d}{dx}\sin x$, or denoting the function as $y$ or $f(x)$, and then talking about $y'$ or $f'(x)$.
I don't know of a reason that the "prime" notation isn't used in the way you describe, but we've been writing calculus down for so long now, that it basically comes down to inertia. Even if a new notation in basic calculus is great, it would take a massive effort to render it recognizable on any large scale, and most of us have better things to do with our time.
In other words, as long as you use the notation consistently, there's nothing logically wrong with it, but it's by no means a standard, and it's likely to appear funny-looking to the majority of mathematics readers. Whether you want to teach a non-standard notation in your class is entirely up to you. I think such a move can sometimes be warranted, and sometimes just cause confusion down the line.
G Tony Jacobs
The only reason it's not in common use is that we know $\sin' = \cos$. For other "named" special functions whose derivatives don't have a closed form, the analogous notation is often used, e.g. $\text{Ai}'(x)$ and $\text{Bi}'(x)$ for the Airy functions.
Robert Israel
If I'm reading between the lines correctly, it seems like you're concerned that putting the prime before the parentheses is at odds with how we would write derivatives for other functions, just like how the notation $\sin^2(x)$ is at odds with how we would normally write $f(x)^2$.
But that just isn't true. We normally would write the derivative of $f$ as $f'(x)$. Strictly speaking $(f(x))'$ could even be considered wrong, being the derivative of the constant function $t\to f(x)$ for some fixed $x$.
It's just that because $\sin$ is a named function, the notation $\sin'(x)$ looks vaguely like you might be talking about some new function that just happens to be called $\sin'$.
Jack M
\begin{document}
\title{Two-phase compressible/incompressible Navier--Stokes system with inflow-outflow boundary conditions}
\author{Milan Pokorn\'y \thanks{The work of M. P. has been supported by the Czech Science Foundation, project No. 19-04243S} \and Aneta Wr\'oblewska-Kami\'nska \thanks{The work of A. W-K. has been supported by the grant of National Science Centre Poland Sonata Bis UMO-2020/38/E/ST1/00469.}
\and Ewelina Zatorska \thanks{The research of E. Z. leading to these results has been funded by the EPSRC Early Career Fellowship no. EP/V000586/1. This work was also partially supported by the Simons Foundation Award No 663281 granted to the Institute of Mathematics of the Polish Academy of Sciences for the years 2021-2023. } }
\date{\today}
\maketitle
{ \footnotesize \centerline{$^*\;$Charles University, Faculty of Mathematics and Physics, Mathematical Institute} \centerline{of Charles University, Sokolovsk\'a 83, 186 75 Praha 8, Czech Republic} \centerline{\small \texttt{[email protected]}}
\bigbreak \centerline{$^\dagger\;$Institute of Mathematics, Polish Academy of Sciences} \centerline{ul. \'Sniadeckich 8, 00-656 Warszawa, Poland} \centerline{\small \texttt{[email protected]}}
\bigbreak \centerline{$^\ddagger\;$Department of Mathematics, Imperial College London} \centerline{South Kensington Campus -- SW7 2AZ, London, UK} \centerline{\small \texttt{[email protected]}}
}
\bigbreak
\begin{abstract} We prove the existence of a weak solution to the compressible Navier--Stokes system with a singular pressure that explodes when the density achieves its congestion level. The congestion level is an unknown quantity whose initial value is prescribed and which evolves according to the transport equation. We then prove that the ``stiff pressure'' limit gives rise to the two-phase compressible/incompressible system with the congestion constraint describing the free interface. We prescribe the velocity at the boundary and the value of the density at the inflow part of the boundary of a general bounded $C^2$ domain. There are no restrictions on the size of the boundary conditions. \end{abstract}
{\bf Keywords:} Compressible/incompressible Navier--Stokes system, inhomogeneous boundary conditions, weak solutions, renormalized continuity equation, stiff pressure limit
\section{Introduction} \label{i} Fluid-type equations are often used as macroscopic models for collective dynamics. In the present paper we are particularly interested in a system that has been analysed in \cite{DeMiZa, DeMiNaZa} as a model for the motion of large crowds. It is a two-phase compressible/incompressible model describing the evolution of averaged macroscopic quantities of the crowd: the velocity $\vc{u}$, the density $\varrho$, and the congestion density $\varrho^*$. The latter describes the preferences of the individuals, or their physical dimensions, which prevent neighbours from being too close to each other (from penetrating each other). It is set for each individual in the crowd at the initial time and is simply transported along with the flow. This means that when the density of the crowd $\varrho$ reaches the constraining value $\varrho^*$, the crowd behaves like an incompressible fluid. When the density $\varrho$ is strictly less than $\varrho^*$, the crowd behaves like a compressible fluid, except that the particles move freely as there is no contact between them. The behaviour of the crowd is thus described by either the incompressible or the compressible Navier--Stokes equations on moving subdomains separated by the interface characterized by the relation: \eq{\label{condition} \pi(\varrho^*-\varrho)=0.} Here $\pi$ is the unknown ``pressure'', or rather, the Lagrange multiplier associated with the incompressibility condition satisfied by the velocity. Relation \eqref{condition} states that $\pi$ appears only on the subdomain with congestion; for $\varrho<\varrho^*$, on the other hand, $\pi$ equals $0$.
A similar free boundary problem was already analysed by Lions and Masmoudi in \cite{LM99} for $\varrho^*=1$. The authors showed that the two-phase system can be approximated by the purely compressible Navier--Stokes equations with the pressure $\pi_n(\varrho)\approx\varrho^{\gamma_n}$ and $\gamma_n\to\infty$. The same kind of limit passage was also investigated later on for PDE models of tumor growth \cite{PeVa, VauZat}. In the current paper we will focus on another approximation of the unknown pressure
$\pi_\varepsilon\approx\varepsilon \frac1{\varrho^*-\varrho}$, which has some benefits from the numerical perspective, see \cite{DeMiNaZa,DeHuNa}. Similar forms and asymptotic limits of the singular pressure appear in models of traffic \cite{BeDeDeRa, BeBr, BeDeBlMoRaRo}, collective dynamics \cite{DeHuNa}, and granular flow \cite{Maury, Perrin}.
All of the previous analytical results concerning the derivation of the two-phase compressible/incompressible system were obtained either in the whole space, or on a bounded domain with the zero Dirichlet boundary condition. In this paper, we want to extend the analysis to the setting where inflow and outflow of the crowd are allowed, making the model suitable for describing various evacuation scenarios. Some numerical simulations for the hyperbolic version of such a model were already performed in \cite{DeMiNaZa}.
Our starting point is the following system of equations in $(0,T)\times \Omega$:
\begin{subequations}\label{main} \eq{ \label{i1} \partial_t\varrho+{\rm div}_x (\varrho \vc{u})= 0, } \eq{ \label{i2} \partial_t(\varrho\vc{u})+{\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x \pi_\varepsilon\Big(\frac{\varrho}{\vr^*}\Big)-{\rm div}_x \mathbb{S}(\nabla_x \vc{u})=\varrho(\vc{w}-\vc{u}), } \eq{\label{i3} \partial_{t} \vr^*+\vc{u}\cdot\nabla_x\vr^*=0,} \end{subequations} where $\Omega \subset R^d$, $d=2,3$ is a bounded domain of class $C^2$, $\varrho = \varrho(t,x)$ is the unknown mass density, $\vc{u} = \vc{u}(t,x)$ is the unknown velocity, $\vr^*=\vr^*(t,x)$ is the unknown congestion density, $(t,x)\in (0,T)\times\Omega\equiv Q_T$, $\varepsilon>0$, and $\vc{w}=\vc{w}(x)$ is given.
The stress tensor $\mathbb{S}$ will be specified below (as the stress tensor of a compressible Newtonian fluid). The pressure $\pi$ depends on the ratio of the densities $\frac{\varrho}{\vr^*}$. It models a constraint on the density of the fluid: $\varrho$ cannot exceed the value $\vr^*$. More precisely, we take \begin{equation} \label{i5a} \pi_\varepsilon\Big(\frac{\varrho}{\vr^*}\Big) =\varepsilon\frac{\big(\frac{\varrho}{\vr^*}\big)^\alpha}{\big(1-\frac{\varrho}{\vr^*}\big)^\beta} \end{equation} with some $\alpha >1$ and $\beta>\frac 52$ if $d=3$ and $\beta>2$ if $d=2$. Note that the pressure fulfills \begin{equation} \label{i6} \pi_\varepsilon(0)=0, \quad \pi_\varepsilon'(z)>0 \quad \mbox{ for } 0<z<1.
\end{equation} This is similar to the so-called hard sphere pressure considered, e.g., in \cite{CHNY2}, that can be viewed as a special case of system \eqref{main} with $\vr^*=const$. Similarly as in \cite{CHNY2} we can relax the assumption on $\pi$ for $\varrho<\vr^*$ by addition of non-monotone pressure that vanishes for $\varrho\to\vr^*$. To avoid unnecessary technicalities, we skip this point in this paper. We consider our system together with initial conditions \begin{equation}\label{initc} \varrho(0)=\varrho_0,\quad\varrho\vc{u}(0)=\varrho_0\vc{u}_0,\quad \vr^*(0)=\vr^*_0, \end{equation} and boundary conditions \begin{equation} \label{i4}
\vc{u} |_{\partial \Omega} = \vc{u}_B, \quad \varrho|_{\Gamma_{\rm in}} = \varrho_B, \quad \vr^*|_{\Gamma_{\rm in}} = \vr^*_B, \end{equation} where \begin{equation} \label{i5}
\Gamma_{\rm in} = \left\{ x \in \partial \Omega \ \Big| \ {\vc{u}_B} \cdot \vc{n} < 0 \right\}, \quad
\Gamma_{\rm out} = \left\{ x \in \partial \Omega \ \Big| \ {\vc{u}_B} \cdot \vc{n} \geq 0 \right\} \end{equation} (we include to the outflow part of the boundary also the part where the normal velocity component is zero).
Note that system \eqref{i1}--\eqref{i3} resembles the system studied (with homogeneous boundary conditions for the velocity) in \cite{M3NPZ}; there, the role of the congestion density was played by the entropy. Using a similar idea as in the above mentioned paper we introduce a new quantity $Z:=\frac{\varrho}{\vr^*}$; then the new unknown function $Z$ satisfies (at least formally, for smooth solutions) the continuity equation and we obtain the following system
\begin{subequations}\label{main_transformed} \eq{ \label{ia1} \partial_t\varrho+{\rm div}_x (\varrho \vc{u})= 0, } \eq{ \label{ia2} \partial_t(\varrho\vc{u})+{\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x \pi_\varepsilon(Z)-{\rm div}_x \mathbb{S}(\nabla_x \vc{u})=\varrho(\vc{w}-\vc{u}), } \eq{\label{ia3} \partial_{t} Z+{\rm div}_x (Z \vc{u})=0} \end{subequations} with initial \begin{equation}\label{initca} \varrho(0)=\varrho_0,\quad\varrho\vc{u}(0)=\varrho_0\vc{u}_0,\quad Z(0)=Z_0:=\frac{\varrho_0}{\vr^*_0}, \end{equation} and boundary conditions \begin{equation} \label{ia4}
\vc{u} |_{\partial \Omega} = \vc{u}_B, \quad \varrho|_{\Gamma_{\rm in}} = \varrho_B, \quad Z|_{\Gamma_{\rm in}} = Z_B:=\frac{\varrho_B}{\vr^*_B}. \end{equation} Note, however, that by standard techniques we can get certain ``better'' information on the ``density'' only from the pressure term, therefore, as in \cite{M3NPZ}, we will consider a certain interplay of the initial and boundary conditions for $\varrho$ and $\vr^*$ which leads to the fact that the boundary and initial conditions for $\varrho$ are controlled by the initial and boundary conditions for $Z$ (see \eqref{data_Z}). Furthermore, we also have that the initial and boundary conditions for $Z$ belong to the interval $(0,1)$.
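For smooth solutions with $\vr^*>0$, the formal computation leading to \eqref{ia3} is the elementary identity
$$
\partial_t Z + {\rm div}_x (Z \vc{u}) = \frac{1}{\vr^*}\Big(\partial_t\varrho + {\rm div}_x (\varrho\vc{u})\Big) - \frac{\varrho}{(\vr^*)^2}\Big(\partial_t \vr^* + \vc{u}\cdot\nabla_x\vr^*\Big) = 0,
$$
where the first bracket vanishes due to \eqref{i1} and the second one due to \eqref{i3}.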
Combining the approach from \cite{M3NPZ} with \cite{CHNY2} we will be able to show that under certain additional technical assumptions problem \eqref{ia1}--\eqref{ia3} with initial \eqref{initca} and boundary conditions \eqref{ia4} possesses a weak solution defined below. We even slightly improve the result from \cite{CHNY2} in the sense that we may include for global-in-time existence result the case when the velocity flux is zero, see Remark \ref{RB1}. Next, it is possible to show that also \eqref{i1}--\eqref{i3} with initial \eqref{initc} and boundary conditions \eqref{i4} has a solution: the approach is based on suitable renormalization which allows us to return back to the unknown function $\vr^*$. On the other hand, we are more interested in the limit passage $\varepsilon \to 0+$; we perform the limit passage in the formulation with the unknown function $Z$ and later return back to the formulation with the function $\vr^*$. Here, we follow ideas from \cite{DeMiZa} or \cite{PeZa}. We will show that with
$\varepsilon\to 0^+$ the weak solutions to system \eqref{main} converge in some sense to the weak solution of the target system
\begin{subequations}\label{target}
\begin{equation}\label{rho}
\partial_{t}\varrho+ {\rm div}_x (\varrho\vc{u}) = 0,
\end{equation} \begin{equation}\label{mom}
\partial_t (\varrho\vc{u}) + {\rm div}_x (\varrho\vc{u} \otimes \vc{u}) + \nabla_x\pi -{\rm div}_x\mathbb{S}(\nabla_x\vc{u}) = \varrho(\vc{w}-\vc{u}),
\end{equation} \begin{equation}\label{rho_star}
\partial_{t}\varrho^*+\vc{u}\cdot\nabla_x\varrho^*=0,
\end{equation}
\begin{equation}\label{cons0}
0\leq \varrho\leq\vr^*,
\end{equation} \begin{equation}\label{div0}
{\rm div}_x\vc{u} =0 \ \text{in} \ \{\varrho=\vr^*\},
\end{equation} \begin{equation}
\pi\geq 0\ \mbox{in} \ \{\varrho=\vr^*\},\quad \pi= 0\ \mbox{in} \ \{\varrho<\vr^*\}.\label{pineq0}
\end{equation}
\end{subequations} We call this system a free boundary two-phase compressible/incompressible system. To justify this name note that when $\varrho=\vr^*$, i.e., when the density achieves its maximal value, due to condition \eqref{div0} the system behaves like inhomogeneous incompressible Navier-Stokes equations. When on the other hand $\varrho<\vr^*$, the system behaves like compressible pressureless Navier-Stokes equations with time-space variable upper bound for the density. This is one of the novelties here, as before we always had to consider the background pressure in the barotropic form.
One of the inspirations for this work was the numerical paper \cite{DeMiNaZa}, in which the inviscid variant of system \eqref{main} was considered with the ``do nothing'' boundary condition for the velocity at the outflow part of the boundary. Even though this condition is often used in numerics, it is not very suitable for analysis, in particular for global-in-time solutions with large data.
Even though we combine approaches from several other papers, the result itself is new and requires nontrivial extensions of previously known techniques and results.
\section{Main result} \label{M}
In what follows, we formulate the main results of the paper. Before doing so, we state the main assumptions on the data of our problem. Concerning the given field $\vc{w}$, we consider (for simplicity) \begin{equation} \label{As1} \vc{w} \in L^\infty(Q_T;R^d). \end{equation} Further, the stress tensor $\mathbb{S}$ is characteristic for the Newtonian fluid and it is given by \begin{equation}\label{is} \mathbb{S}(\nabla_x \vc{u}) = \mu \left( \nabla_x \vc{u} + \nabla_x^t \vc{u} \right) + \lambda {\rm div}_x \vc{u} \mathbb{I}, \ \mu > 0 , \ \lambda \geq 0. \end{equation} The pressure $\pi_\varepsilon(\cdot)$ has the form \eqref{i5a} with $\alpha >1$ and $\beta >\frac 52$.
Similarly as in \cite{CHNY2}, we consider the following regularity assumptions \begin{equation} \label{Ass1} { \varrho_B, \vr^*_B \in C(\Gamma_{\rm in})}, \quad \vc{u}_B \in C^2(\partial \Omega; R^d), \quad \int_{\partial \Omega} \vc{u}_B\cdot\vc{n}\ {\rm d}S_x \geq 0. \end{equation}
Furthermore, we assume that \begin{equation} \label{Ass2} \begin{array}{c} \displaystyle 0< \varrho_0<\vr^*_0, \ \mbox{a.e. in }\Omega, \quad \vr^*_0\in L^\infty(\Omega),\\ \displaystyle \intO{H_\varepsilon\lr{\frac{\varrho_0}{\vr^*_0}}}<\infty,\quad {\rm ess\ inf}_\Omega\, \varrho_0 >0, \quad {\rm ess\ inf}_\Omega\, (\vr^*_0 -\varrho_0) >0,\\ \vc{u}_0 \in L^2(\Omega;R^d),
\end{array} \end{equation} where \bFormula{Hf} \begin{split}
H_\varepsilon(z) &= z\int_0^z\frac{\pi_\varepsilon(s)}{s^2}\ {\rm d}s. \end{split} \end{equation} For the boundary data, \begin{equation} \label{Ass3}
\displaystyle {0< \varrho_B <\vr^*_B, \ \mbox{a.e. on }\Gamma_{\rm in}}, \quad {\rm ess\ inf}_{\Gamma_{\rm in}}\, \varrho_B >0, \quad {\rm ess\ inf}_{\Gamma_{\rm in}}\, (\vr^*_B -\varrho_B) >0.
\end{equation} Note that these assumptions yield that the initial energy $$
E_0 := \intO{\Big(\frac 12 \varrho_0 |\vc{u}_0|^2 + H_\varepsilon\Big(\frac{\varrho_0}{\vr^*_0}\Big)\Big)} <+\infty $$ as well as that there are positive constants $c_*$ and $c^*$ such that $$ c_* \leq \frac{1}{\vr^*_0} \leq c^*, \quad c_*\leq \frac{1}{\vr^*_B} \leq c^* \quad \text{a.e.} $$ Whence, rewriting our problem to the form \eqref{ia1}--\eqref{ia4}, we immediately have that \begin{equation} \label{data_Z} { c_* \varrho_0 \leq Z_0 \leq c^* \varrho_0 \mbox{ a.e. in }\Omega, \quad c_* \varrho_B\leq Z_B \leq c^*\varrho_B \quad \text{ a.e. on }\Gamma_{\rm in}. } \end{equation}
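Let us also record, for later use, the standard identity relating $H_\varepsilon$ from \eqref{Hf} to the pressure: differentiating \eqref{Hf} we get
$$
H_\varepsilon'(z) = \int_0^z \frac{\pi_\varepsilon(s)}{s^2}\,{\rm d}s + \frac{\pi_\varepsilon(z)}{z}, \qquad \mbox{whence} \qquad z H_\varepsilon'(z) - H_\varepsilon(z) = \pi_\varepsilon(z) \quad \mbox{for } 0<z<1.
$$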
Let us now introduce the definition of a weak solution to problem \eqref{ia1}--\eqref{ia4}. \begin{Definition} \label{def_in_Z} We say that $(\varrho, \vc{u}, Z)$ is a bounded energy weak solution of problem \eqref{ia1}--\eqref{ia4} on a time interval $(0,T)$ if the following five conditions are satisfied.
\begin{description} \item{1.} The triple of functions $(\varrho, \vc{u}, Z)$ fulfills: \eq{\label{fs-} &0 \leq c_\star \varrho \leq Z \leq c^\star \varrho \mbox{ a.e. in } Q_T, \quad \mbox{for}\quad 0<c_\star\leq c^\star <\infty,\\
&0 \leq Z<1 \text{ a.e. in } (0,T) \times \Omega, \quad \pi_\varepsilon(Z)\in L^1(0,T; L^1(\Omega)) \\ &\vc{u} \in L^2(0,T; W^{1,2}(\Omega; R^d)), \quad
\vc{u}|_{I\times\partial \Omega} = \vc{u}_B. } \item{2.} { The function $\varrho\in C_{\rm weak}([0,T], L^1(\Omega))$
satisfies the integral identity \bFormula{ce-} \begin{split} &\intO{\varrho(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\varrho_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\varrho\partial_t\varphi + \varrho \vc{u} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \varrho_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.} \item{3.} { The function $\varrho\vc{u}\in C_{\rm weak}([0,T], L^{1}(\Omega;R^d))$ satisfies the integral identity \bFormula{me-} \begin{split} &\intO{\varrho\vc{u}(\tau,\cdot)\cdot\boldsymbol{\varphi}(\tau,\cdot)} - \intO{\varrho_0\vc{u}_0(\cdot)\cdot\boldsymbol{\varphi}(0,\cdot)} \\ &= \int_0^\tau \intO{\Big( \varrho\vc{u}\cdot\partial_t\boldsymbol{\varphi}+\varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + \pi_\varepsilon(Z){\rm div}_x \boldsymbol{\varphi} - \mathbb{S}(\nabla_x \vc{u}) : \nabla_x \boldsymbol{\varphi} \Big)}{\rm d}t\\ &\quad+\int_0^\tau \intO{\varrho(\vc{w}-\vc{u})\cdot\boldsymbol{\varphi}}{\rm d}t \end{split} \end{equation} for any $\tau\in [0,T]$ and any $\boldsymbol{\varphi} \in C^1_c ([0,T]\times\Omega; R^d)$.} \item{4.} { The function $Z\in C_{\rm weak}([0,T], L^1(\Omega))$
satisfies the integral identity \bFormula{ceZ-} \begin{split} &\intO{Z(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{Z_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(Z\partial_t\varphi + Z \vc{u} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} Z_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.} \item{5.} { There is a Lipschitz extension $\vc{u}_\infty\in W^{1,\infty}(\Omega;R^d)$ of the vector field $\vc{u}_B$ such that the following energy inequality holds \bFormula{ei-} \begin{split}
&\intO{\Big(\frac 12\varrho|\vc{u}-\vc{u}_\infty|^2+H_\varepsilon(Z)\Big)(\tau)}+\int_0^\tau\intO{\tn S(\nabla_x(\vc{u}-\vc{u}_\infty)):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t \\ &\quad
+\int_0^\tau\int_\Omega\pi_\varepsilon(Z) {\rm div}_x\vc{u}_\infty\ {\rm d}x{\rm d}t \\
&\le \intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2+H_\varepsilon(Z_0)\Big)}
-\int_0^\tau\intO{\varrho\vc{u}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}-\vc{u}_\infty)}{\rm d}t \\ &\quad -\int_0^\tau\intO{\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t- \int_0^\tau\int_{\Gamma_{\rm in}} H_\varepsilon(Z_B)\vc{u}_B\cdot\vc n \ {\rm d}S_x{\rm d}t \\ &\quad + \int_0^\tau\intO{\varrho(\vc{w}-\vc{u})\cdot(\vc{u}-\vc{u}_\infty)}{\rm d}t
\end{split} \end{equation} for a.a. $\tau\in (0,T)$. } \end{description} \end{Definition} \noindent { \bRemark{R1-}
The continuity equations \eqref{ia1} and \eqref{ia3} give rise to \bFormula{mi} \intO{X(\tau)}\le\intO{X_0}-\int_0^\tau\int_{\Gamma_{\rm in}}X_B\vc{u}_B\cdot\vc n\ {\rm d} S_x{\rm d}t \end{equation} for all $X=\varrho,Z$, and $\tau\in [0,T]$. It can be obtained by taking in \eqref{ce-} and \eqref{ceZ-} test functions $\varphi=\varphi_\eta$, where $$\varphi_\eta(x) = \left\{\begin{array}{rl} 1 & \text{if } {\rm dist}\,(x,\Gamma_{\rm out})>\eta \\ \frac{1}{\eta} {\rm dist}\,(x,\Gamma_{\rm out}) & \text{if } {\rm dist}\,(x,\Gamma_{\rm out}) \leq \eta, \end{array} \right. $$ and then by letting $\eta\to 0+$.
\end{Remark}
\begin{Definition} \label{def_renor} We call $(Z,\vc{u})$ a renormalized solution of the continuity equation \eqref{ia3} provided: \begin{itemize} \item {$Z\in L^\infty(0,T;L^\infty(\Omega))$}, and $\vc{u}\in L^2(0,T; W^{1,2}(\Omega,R^3))$, \item $(Z,\vc{u})$ satisfies the weak formulation of the continuity equation \eqref{ceZ-}, \item for any $b\in C^1[0,1]$, $b(Z)\in C_{\rm weak}([0,T];L^1(\Omega))$ the weak formulation of the renormalized equation is satisfied, i.e., \begin{equation} \label{P3} \begin{split} &\intO{ b(Z) (\tau,\cdot)\varphi(\tau,\cdot)} -\intO{ b(Z_0) (\cdot)\varphi(0,\cdot)} \\ &= { \int_0^\tau\intO{ \Big({{ b(Z)\partial_t\varphi+} b(Z) \vc{u} \cdot \nabla_x \varphi -\varphi\left( b'(Z) Z - b(Z) \right) {\rm div}_x \vc{u} \Big)}} {\rm d}t }\\ &\quad - \int_0^\tau\int_{\Gamma_{\rm in}} b(Z_B) \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x {\rm d} t \end{split} \end{equation} for any $\varphi \in C^1_c([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$. \end{itemize}
A weak solution to problem \eqref{ia1}--\eqref{ia4} satisfying in addition renormalized continuity equations \eqref{P3} for both $(Z,\mathbf u)$ and $(\varrho,\vc{u})$ is called a {\it renormalized weak solution}. \end{Definition}
\bRemark{R1aa} Note that, due to \cite[Lemma 3.1]{CHJN}, since both $\varrho$ and $Z$ are essentially bounded functions and $\vc{u} \in L^2(0,T;W^{1,2}(\Omega;R^d))$, any weak solution in the sense of Definition \ref{def_in_Z} is in fact a renormalized weak solution, provided $\Gamma_{\rm in}$ is a $C^2$ open $(d-1)$-dimensional manifold. \end{Remark}
In the proof that follows we will need that the boundary data for the velocity, $\vc{u}_B$, can be extended to the whole of $\Omega$ in such a way that the extension is sufficiently smooth and its divergence is non-negative. This follows from the following result (see \cite[Lemmata 5.1, 5.2, and 5.3]{CHNY2}).
\begin{Lemma} \label{Lextrem}
Let $\vc V \in C^2(\partial\Omega;R^d)$ be a vector field on the boundary $\partial\Omega$ of a bounded $C^2$ domain $\Omega$. Let $$ \int_{\partial \Omega} \vc V \cdot \vc{n} \ {\rm d}S_x \geq 0. $$ Then there exist a vector field { \bFormula{Le} \vc V_\infty\in W^{2,q}(\Omega;R^d), \quad 1\leq q<\infty, \quad {\rm div}_x {\vc V}_\infty\ge 0 \quad \text{ a.e. in } \Omega \end{equation}}
verifying ${\vc V}_\infty|_{\partial\Omega}=\vc V$. \\ If in addition $$ \int_{\partial \Omega} \vc V \cdot \vc{n} \ {\rm d}S_x =K> 0, $$ then the extension $\vc V_\infty$ satisfies { \bFormula{Lep}
{\rm div}_x {\vc V}_\infty\ge 0 \quad \text{ a.e. in } \Omega,\qquad \mbox{and}\qquad {\rm ess}\inf_{{\cal O}}\lr{{\rm div}_x {\vc V}_\infty}\ge C> 0, \end{equation}} where ${\cal O}$ is an open subset satisfying $\bar{\cal O}\subset\Omega$. \end{Lemma}
Our first result is a global-in-time existence theorem for the solutions defined above. \begin{Theorem} \label{TM1!} Let $\Omega \subset R^d$, $d = 2,3$, be a bounded domain of class $C^{2}$ such that $\Gamma_{\rm in}$ is an open $C^2$ $(d-1)$-dimensional manifold. Let $\varepsilon>0$, $T>0$. { Under the assumptions \eqref{As1}--\eqref{Ass3} the problem \eqref{main_transformed}--\eqref{ia4} admits at least one bounded energy weak solution $(\varrho, \vc{u}, Z)$ on $(0,T)$ in the sense of Definition~\ref{def_in_Z}. Moreover, $(\varrho,\vc{u},Z)$ is a renormalized solution in the sense of Definition \ref{def_renor}. } \end{Theorem}
Next we consider solutions to problem { \eqref{main}--\eqref{i5} with \eqref{is}}.
\begin{Definition} \label{def_in_vrs} We say that $(\varrho, \vc{u}, \vr^*)$ is a bounded energy weak solution of problem { \eqref{main}--\eqref{i5} with \eqref{is}} on a time interval $(0,T)$ if the following five conditions are satisfied.
\begin{description} \item{1.} The triple of functions satisfies: \eq{\label{gs-} &0 \leq \varrho < \varrho^* \mbox{ a.e. in } Q_T\\
& c_* \leq \frac{1}{\vr^*} \leq c^* \text{ a.e. in } (0,T) \times \Omega, \quad \pi_\varepsilon\Big(\frac{\varrho}{\vr^*}\Big)\in L^1(0,T; L^1(\Omega)) \\ &\vc{u} \in L^2(0,T; W^{1,2}(\Omega; R^d)), \quad
\vc{u}|_{I\times\partial \Omega} = \vc{u}_B. } \item{2.} { The function $\varrho\in C_{\rm weak}([0,T], L^1(\Omega))$
satisfies the integral identity \bFormula{ge-} \begin{split} &\intO{\varrho(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\varrho_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\varrho\partial_t\varphi + \varrho \vc{u} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \varrho_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.} \item{3.} { The function $\varrho\vc{u}\in C_{\rm weak}([0,T], L^{1}(\Omega;R^d))$ satisfies the integral identity \bFormula{gge-} \begin{split} &\intO{\varrho\vc{u}(\tau,\cdot)\cdot\boldsymbol{\varphi}(\tau,\cdot)} - \intO{\varrho_0\vc{u}_0(\cdot)\boldsymbol{\varphi}(0,\cdot)} \\ &= \int_0^\tau \intO{\Big( \varrho\vc{u}\cdot\partial_t\boldsymbol{\varphi}+\varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + \pi_\varepsilon \Big(\frac{\varrho}{\vr^*}\Big){\rm div}_x \boldsymbol{\varphi} - \mathbb{S}(\nabla_x \vc{u}) : \nabla_x \boldsymbol{\varphi} \Big)}{\rm d}t\\ &\quad+\int_0^\tau \intO{\varrho(\vc{w}-\vc{u})\cdot\boldsymbol{\varphi}}{\rm d}t \end{split} \end{equation} for any $\tau\in [0,T]$ and any $\boldsymbol{\varphi} \in C^1_c ([0,T]\times\Omega; R^d)$.} \item{4.} { The function $\vr^*\in C_{\rm weak}([0,T], L^1(\Omega))$
satisfies the integral identity \bFormula{geZ-} \begin{split} &\intO{\vr^*(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\vr^*_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\vr^*\partial_t\varphi + \vr^* {\rm div}_x (\varphi \vc{u}) \Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \vr^*_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.} \item{5.} { There is a Lipschitz extension $\vc{u}_\infty\in W^{1,\infty}(\Omega;R^d)$ of the vector field $\vc{u}_B$ such that the following energy inequality holds \bFormula{gi-} \begin{split}
&\intO{\Big(\frac 12\varrho|\vc{u}-\vc{u}_\infty|^2+H_\varepsilon\Big(\frac{\varrho}{\vr^*}\Big)\Big)(\tau)}+\int_0^\tau\intO{\tn S(\nabla_x(\vc{u}-\vc{u}_\infty)):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t \\ &\quad
+\int_0^\tau\int_\Omega\pi_\varepsilon\Big(\frac{\varrho}{\vr^*}\Big){\rm div}_x\vc{u}_\infty \ {\rm d}x{\rm d}t \\
&\le \intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2+H_\varepsilon\Big({ \frac{\varrho_0}{\varrho^*_0}}
\Big)\Big)}
-\int_0^\tau\intO{\varrho\vc{u}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}-\vc{u}_\infty)}{\rm d}t \\ &\quad -\int_0^\tau\intO{\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t- \int_0^\tau\int_{\Gamma_{\rm in}} H_\varepsilon\Big(\frac{\varrho_B}{\vr^*_B}\Big)\vc{u}_B\cdot\vc n \ {\rm d}S_x{\rm d}t \\ &\quad + \int_0^\tau \intO{\varrho (\vc{w}-\vc{u})\cdot (\vc{u}-\vc{u}_\infty)}{\rm d}t
\end{split} \end{equation} for a.a. $\tau\in (0,T)$. } \end{description}
{ A bounded energy weak solution to problem \eqref{main}--\eqref{i5} with \eqref{is} satisfying in addition the renormalized continuity equation \eqref{P3} for $(\varrho,\vc{u})$ is called a renormalized weak solution. } \end{Definition}
We have the following result.
\begin{Theorem}\label{TM2} Let $\Omega \subset R^d$, $d = 2,3$, be a bounded domain of class $C^{2}$ such that $\Gamma_{\rm in}$ is an open $C^2$ $(d-1)$-dimensional manifold. Let $\varepsilon>0$, $T>0$. { Under the assumptions \eqref{As1}--\eqref{Ass3} the problem \eqref{main}--\eqref{i5} with \eqref{is} admits at least one renormalized bounded energy weak solution $(\varrho, \vc{u}, \vr^* )$ on $(0,T)$ in the sense of Definition~\ref{def_in_vrs}. } \end{Theorem}
Our next results concern the limit passage $\varepsilon\to 0$. We will show that when $\varepsilon\to0$ the weak solutions from the previous section approximate weak solutions to the system \eqref{target} defined below. \begin{Definition}\label{Def:limit} A quadruple $(\varrho,\vc{u},\vr^*,\pi)$ is called a global finite energy weak solution to \eqref{target}, \eqref{is}, with the initial data \eqref{initc}, \eqref{Ass2}, and the boundary conditions \eqref{i4}, \eqref{Ass1} if for any $T>0$: \begin{itemize} \item[(i)] There holds: $$0\leq\varrho\leq\vr^*\quad a.e.\ in \ (0,T)\times\Omega,$$ \eq{\label{cond:u} {\rm div}_x\vc{u}=0\quad a.e. \ in\ \{\varrho=\vr^*\},} \eq{\label{cond:pi} (\varrho^*-\varrho)\pi=0, } and \begin{align*} &\varrho\in C_w([0,T];L^\infty(\Omega)), \\ &\vr^*\in C_w([0,T];L^\infty(\Omega)),\\
& \vc{u} \in L^2(0,T;W^{1,2}(\Omega, R^d)),\quad \varrho|\vc{u}|^2 \in L^{\infty}(0,T; L^1(\Omega)),\\ &\pi\in {\cal M}^+ ((0,T)\times \Omega). \end{align*} \item[(ii)] For any $0\leq \tau\leq T$, equations \eqref{rho}, \eqref{mom}, \eqref{rho_star} are satisfied in the weak sense, more precisely:\\ -the continuity equation: \bFormula{ge-lim} \begin{split} &\intO{\varrho(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\varrho_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\varrho\partial_t\varphi + \varrho \vc{u} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \varrho_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} holds for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$,\\ -the momentum equation: \bFormula{gge-lim} \begin{split} &\intO{\varrho\vc{u}(\tau,\cdot)\cdot\boldsymbol{\varphi}(\tau,\cdot)} - \intO{\varrho_0\vc{u}_0(\cdot)\boldsymbol{\varphi}(0,\cdot)} \\ &= \int_0^\tau \intO{\Big( \varrho\vc{u}\cdot\partial_t\boldsymbol{\varphi}+\varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + \pi{\rm div}_x \boldsymbol{\varphi} - \mathbb{S}(\nabla_x \vc{u}) : \nabla_x \boldsymbol{\varphi} \Big)}{\rm d}t\\ &\quad+\int_0^\tau \intO{\varrho(\vc{w}-\vc{u})\cdot\boldsymbol{\varphi}}{\rm d}t \end{split} \end{equation} holds for any $\tau\in [0,T]$ and any $\boldsymbol{\varphi} \in C^1_c ([0,T]\times\Omega; R^d)$,\\ - the transport equation for $\varrho^*$: \bFormula{geZ-lim} \begin{split} &\intO{\vr^*(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\vr^*_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\vr^*\partial_t\varphi + \vr^* {\rm div}_x (\varphi \vc{u}) \Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \vr^*_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$. 
\item[(iii)] There is a Lipschitz extension $\vc{u}_\infty\in W^{1,\infty}(\Omega;R^d)$ of the vector field $\vc{u}_B$ such that the following energy inequality holds \bFormula{gi-lim} \begin{split}
&\intO{\frac 12\varrho|\vc{u}-\vc{u}_\infty|^2(\tau)}+\int_0^\tau\!\!\intO{\tn S(\nabla_x(\vc{u}-\vc{u}_\infty)):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t
+\int_0^\tau\!\!\int_\Omega\pi{\rm div}_x\vc{u}_\infty \ {\rm d}x{\rm d}t \\
&\le \intO{\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2}
-\int_0^\tau\!\!\intO{\varrho\vc{u}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}-\vc{u}_\infty)}{\rm d}t \\ &\quad -\int_0^\tau\!\!\intO{\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}-\vc{u}_\infty)}{\rm d}t + \int_0^\tau\!\! \intO{\varrho (\vc{w}-\vc{u})\cdot (\vc{u}-\vc{u}_\infty)}{\rm d}t
\end{split} \end{equation} for a.a. $\tau\in (0,T)$. \end{itemize} \end{Definition} \begin{Remark} In the above definition all terms must be well defined; in particular, $\pi$ is not merely a measure but is sufficiently regular for the condition \eqref{cond:pi} to make sense. \end{Remark}
Our main theorem in this part reads as follows. \begin{Theorem} \label{TM3} Let $\Omega \subset R^d$, $d = 2,3$, be a bounded domain of class $C^{2}$ such that $\Gamma_{\rm in}$ is an open $C^2$ $(d-1)$-dimensional manifold. Let $T>0$, and let assumptions \eqref{As1}--\eqref{Ass3} be satisfied.\\ If, in addition, either \eq{ \int_{\partial\Omega}\vc{u}_B\cdot\vc{n}\, {\rm d} S_x=K>0, } or \eq{
\intO{Z_0}+T\int_{\Gamma_{in}}Z_B|\vc{u}_B\cdot\vc{n}|\, {\rm d} S_x<|\Omega|, } then problem \eqref{target} with \eqref{is} admits at least one renormalized bounded energy weak solution $(\varrho, \vc{u}, \vr^*,\pi )$ on $(0,T)$ in the sense of Definition~\ref{Def:limit}.
\end{Theorem}
The paper is organized as follows. In Section \ref{Sec:3} we present the approximate scheme, starting from truncations of the singular pressure at a level described by the parameter $\delta$. Later on, in Section \ref{4}, we obtain estimates that are uniform with respect to $\delta$ and pass to the limit; as an outcome of this section we prove Theorem \ref{TM1!} and Theorem \ref{TM2}. In Section \ref{Sec:lim} we recall the uniform estimates with respect to $\varepsilon$, perform the limit passage $\varepsilon\to0$, and conclude the proof of Theorem \ref{TM3}.
\section{Approximate solution} \label{Sec:3} The purpose of this section is to construct approximate solutions to system \eqref{ia1}--\eqref{ia4}. We are not going to explain the details of the whole procedure, but only summarize it and explain how the existing literature can be employed. We approximate the singular pressure in system \eqref{main_transformed} by the truncation \[ \pi_\delta (Z)= \begin{cases} \pi_\varepsilon (Z) & \text{ if } Z \in [0,1-\delta] \\ \pi_\varepsilon(1-\delta) + \varepsilon(Z-1+\delta)_+^\gamma
& \text{ if } Z \in (1- \delta,\infty),
\end{cases} \] where the exponent $\gamma$ has to be chosen sufficiently large in order to obtain sufficient estimates; in particular we need $\gamma>d$. This truncation allows us to combine the arguments from \cite{M3NPZ, CHJN} in order to construct solutions to system \eqref{main_transformed} with $\pi_\varepsilon$ replaced by $\pi_\delta$. This consists of regularising both equations for $\varrho$ and $Z$ by adding a small viscosity term: we consider \begin{equation} \label{A1} \partial_t\varrho-\eta \Delta_x \varrho + {\rm div}_x (\varrho \vc{u}) = 0, \end{equation} \begin{equation} \label{A2}
\varrho(0,x)=\varrho_0(x),\;\left( -\eta \ \nabla_x \varrho + \varrho \vc{u} \right) \cdot \vc{n}|_{I\times\partial \Omega} = \left\{ \begin{array}{l} \varrho_B \vc{u}_B \cdot \vc{n} \ \mbox{if}\ [\vc{u}_B \cdot \vc{n}](x) \leq 0,\,x\in \partial\Omega,\\ \varrho \vc{u}_B \cdot \vc{n}\;\mbox{if}\ [\vc{u}_B \cdot \vc{n}](x) > 0,\,x\in \partial\Omega,\end{array} \right. \end{equation} \begin{equation} \label{A3} \partial_t(\varrho\vc{u})+
{\rm div}_x (\varrho \vc{u} \otimes \vc{u}) + \nabla_x \pi_{\delta} (Z) = {\rm div}_x \mathbb{S}(\nabla_x \vc{u}) -\eta \nabla_x\varrho\cdot\nabla_x\vc{u} +{ \eta{\rm div}_x\Big(|\nabla_x(\vc{u}-\vc{u}_\infty)|^2\nabla_x (\vc{u}-\vc{u}_\infty)\Big)}
\end{equation} \begin{equation} \label{A4} \vc{u}(0,x)=\vc{u}_0(x),\;
\vc{u}|_{I\times\partial \Omega} = \vc{u}_B, \end{equation} \begin{equation} \label{A5} \partial_tZ-\eta \Delta_x Z + {\rm div}_x (Z\vc{u}) = 0, \end{equation} \begin{equation} \label{A6}
Z(0,x)=Z_0(x),\;\left( -\eta \ \nabla_x Z + Z \vc{u} \right) \cdot \vc{n}|_{I\times\partial \Omega} = \left\{ \begin{array}{l} Z_B \vc{u}_B \cdot \vc{n} \ \mbox{if}\ [\vc{u}_B \cdot \vc{n}](x) \leq 0,\,x\in \partial\Omega,\\ Z \vc{u}_B \cdot \vc{n}\;\mbox{if}\ [\vc{u}_B \cdot \vc{n}](x) > 0,\,x\in \partial\Omega,\end{array} \right. \end{equation} with positive parameters $\varepsilon> 0$, $\delta > 0$, $\eta>0$. The solution to this system is obtained by means of a Galerkin approximation of the momentum equation and the Banach fixed-point theorem, yielding existence of unique local-in-time solutions. We are not going to repeat all these details here; they can be found in \cite{CHJN} for the case with one continuity equation, and this can be combined with the ideas and techniques from \cite{M3NPZ}, where two continuity equations were considered in exactly the same setting as here: one quantity enters the pressure, the other the momentum. Let us only note that when $\vc{u}$ is replaced by its Galerkin approximation $\vc{u}^n$, it is still possible to prove the comparison principle between $\varrho$ and $Z$, similarly to the above-mentioned paper. Indeed, taking $c_\star,c^\star$ as in (\ref{fs-}) we may write $$ \partial_t(Z-c_\star \varrho ) - \eta \Delta_x (Z-c_\star \varrho ) + {\rm div}_x{\big(\vc{u}^n(Z-c_\star \varrho )\big)} = 0, $$ and $$ \partial_t(c^\star \varrho-Z) - \eta \Delta_x (c^\star \varrho-Z) + {\rm div}_x\big(\vc{u}^n (c^\star \varrho-Z)\big) = 0, $$ with the corresponding boundary conditions.
Therefore, exactly as in Lemma 4.3 of \cite{CHJN}, since both equations have non-negative initial conditions, the solutions are non-negative as well, and due to the uniqueness of solutions we deduce that $$(Z-c_\star \varrho)(t,x)\geq \inf_{x\in\overline{\Omega}}(Z_0-c_\star \varrho_0)e^{-Kt},\quad (c^\star \varrho-Z)(t,x)\geq \inf_{x\in\overline{\Omega}}(c^\star \varrho_0-Z_0)e^{-Kt},$$ as well as $\varrho(t,x) \geq \inf_{x\in \overline{ \Omega}}\varrho_0(x)\,e^{-Kt}$, where \bFormula{K}
K= \|{\rm div}\,\vc{u}^n\|_{L^\infty(Q_T)}. \end{equation} Using again the assumptions on the initial data { \eqref{data_Z}}, we therefore obtain $$0 < c_\star \varrho \leq Z \leq c^\star \varrho \mbox{ a.e. in } Q_T.$$ With these inequalities in place we can let $n\to\infty$ and $\eta\to 0$ in order to obtain the weak solutions to system \eqref{main_transformed} with $\pi_\varepsilon$ replaced by $\pi_\delta$. Note, however, that in the limit process we do not control the $L^\infty$ norm of ${\rm div}\, \vc{u}$, which finally leads to $$0 \leq c_\star \varrho \leq Z \leq c^\star \varrho \mbox{ a.e. in } Q_T.$$ All other steps are more or less standard, except for the fact that the solution is renormalized in the sense of Definition \ref{def_renor}. This fact, however, follows directly from \cite[Lemma 3.1]{CHJN}, and we will return to it in the limit passage $\varepsilon\to 0^+$.
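Let us briefly indicate, as a sketch under the simplifying assumption that the parabolic minimum principle applies up to the boundary for the boundary conditions at hand, where the exponential lower bounds above come from. If $v\ge 0$ solves $\partial_t v-\eta \Delta_x v + {\rm div}_x (v\vc{u}^n) = 0$ and $m:=\inf_{x\in\overline{\Omega}}v(0,x)$, then $w:=v-m e^{-Kt}$, with $K$ as in \eqref{K}, satisfies
\[
\partial_t w-\eta \Delta_x w + {\rm div}_x (w\vc{u}^n) = m e^{-Kt}\big(K-{\rm div}_x\vc{u}^n\big)\ge 0,
\qquad w(0,\cdot)\ge 0,
\]
so the minimum principle yields $w\ge 0$, i.e. $v(t,x)\ge m e^{-Kt}$; applying this to $v=Z-c_\star\varrho$, $v=c^\star\varrho-Z$, and $v=\varrho$ gives the stated bounds.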
The existence result is summarised in the following theorem.
\begin{Theorem} \label{TM1-} Let $\Omega \subset R^d$, $d = 2,3$, be a bounded domain of class $C^{2}$ such that $\Gamma_{\rm in}$ is an open $C^2$ $(d-1)$-dimensional manifold. Let $\delta >0$, $\varepsilon>0$ and $T>0$. Under the assumptions { \eqref{As1}--\eqref{Ass3}} the problem \eqref{ia1}--\eqref{ia4} with the pressure $\pi_\varepsilon$ replaced by $\pi_\delta$ admits at least one renormalized bounded energy weak solution $(\varrho_{\delta}, \vc{u}_{\delta}, Z_{\delta})$, i.e. \begin{description} \item{1.} The triple $(\varrho_{\delta},\vc{u}_{\delta},Z_{\delta})$ belongs { to the following functional space: \bFormula{fs} \begin{split} &\varrho_{\delta}, Z_{\delta} \in L^\infty (0,T; L^\gamma(\Omega)), \quad 0 \leq c_\star \varrho_{\delta} \leq Z_{\delta} \leq c^\star \varrho_{\delta} \ \text{ a.e. in } (0,T) \times \Omega, \\ &\vc{u}_{\delta} \in L^2(0,T; W^{1,2}(\Omega; R^d)), \quad
\vc{u}_{\delta}|_{I\times\partial \Omega} = \vc{u}_B. \end{split} \end{equation}} \item{2.} { The function $\varrho_{\delta}\in C_{\rm weak}([0,T], L^\gamma(\Omega))$ satisfies the integral identity \bFormula{ce} \begin{split} &\intO{\varrho_{\delta}(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\varrho_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\varrho_{\delta}\partial_t\varphi + \varrho_{\delta} \vc{u}_{\delta} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \varrho_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation}} for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$. In particular, \bFormula{mi-} \intO{\varrho_{\delta}(\tau,\cdot)}\le\intO{\varrho_0}-\int_0^\tau\int_{\Gamma_{\rm in}}\varrho_B\vc{u}_B\cdot\vc n \ {\rm d}S_x{\rm d}t. \end{equation} \item{\it 3.} { The renormalized continuity equation \bFormula{rce} \begin{split} &\intO{b(\varrho_{\delta})(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{b(\varrho_0)(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(b(\varrho_{\delta})\partial_t\varphi + b(\varrho_{\delta} )\vc{u}_{\delta} \cdot \nabla_x \varphi+(b(\varrho_{\delta})-b'(\varrho_{\delta})\varrho_{\delta}){\rm div}_x\vc{u}_{\delta}\Big) }{\rm d}t \\ &\quad - \int_0^\tau\int_{\Gamma_{\rm in}} b(\varrho_B) \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \\ \end{split} \end{equation} holds for any $b\in C[0,\infty)$ with $b'\in C_c[0,\infty)$, $\tau\in [0,T]$, and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.} \item{4.} { The function $\varrho_{\delta}\vc{u}_{\delta}\in C_{\rm weak}([0,T], L^{\frac{2\gamma}{\gamma+1}}(\Omega;R^d))$ satisfies the integral identity \bFormula{me} \begin{split} &\intO{\varrho_{\delta}\vc{u}_{\delta}(\tau,\cdot)\cdot\boldsymbol{\varphi}(\tau,\cdot)} - \intO{\varrho_0\vc{u}_0(\cdot)\cdot\boldsymbol{\varphi}(0,\cdot)} \\ &= \int_0^\tau\intO{ \Big(\varrho_{\delta}\vc{u}_{\delta}\cdot\partial_t\boldsymbol{\varphi}+ 
\varrho_{\delta} \vc{u}_{\delta} \otimes \vc{u}_{\delta} : \nabla_x \boldsymbol{\varphi} + \pi_\delta(Z_{\delta}) {\rm div}_x \boldsymbol{\varphi} - \mathbb{S}(\nabla_x \vc{u}_{\delta}) : \nabla_x \boldsymbol{\varphi} + \varrho_{\delta}(\vc{w}-\vc{u}_{\delta})\cdot \boldsymbol{\varphi}\Big) }{\rm d}t \end{split} \end{equation} for any $\tau\in [0,T]$ and $\boldsymbol{\varphi} \in C^1_c ([0,T]\times\Omega; R^d)$.} \item{5.}
The function $Z_{\delta}\in C_{\rm weak}([0,T], L^\gamma(\Omega))$ satisfies the integral equalities \eqref{ce}--\eqref{rce} with $\varrho_{\delta}$ replaced by $Z_{\delta}$, $\varrho_B$ replaced by $Z_B$, and $\varrho_0$ replaced by $Z_0$. \item{6.} The energy inequality \bFormula{ei} \begin{split}
&\intO{\Big(\frac 12\varrho_{\delta}|\vc{u}_{\delta}-\vc{u}_\infty|^2 + { H_{\delta}}(Z_{\delta})\Big)(\tau)} + \int_0^\tau\intO{\tn S(\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty)):\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty)}{\rm d}t \\ &\le
{ \intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2 + { H_{\delta}}(Z_0)\Big)}} - \int_0^\tau\intO{\pi_\delta(Z_{\delta}){\rm div}_x\vc{u}_\infty}{\rm d}t \\ &\quad -\int_0^\tau\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}_{\delta}-\vc{u}_\infty)}{\rm d}t - \int_0^\tau\intO{\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty)}{\rm d}t \\ &\quad - \int_0^\tau\int_{\Gamma_{\rm in}} { H_{\delta}}(Z_B)\vc{u}_B\cdot\vc n\ {\rm d}S_x{\rm d}t + \int_0^\tau\intO{\varrho_{\delta}(\vc{w}-\vc{u}_{\delta})\cdot(\vc{u}_{\delta}-\vc{u}_\infty)}{\rm d}t
\end{split} \end{equation} holds for a.a. $\tau\in (0,T)$ and the extension $\vc{u}_\infty\in W^{1,\infty}(\Omega;R^d)$ of $\vc{u}_B$ discussed above.
In
\eqref{ei}, { the function $H_{\delta}(z)$ is defined by} \bFormula{Hep} H_{\delta}(z)= z\int_0^z \frac{ \pi_\delta(s)}{s^2}\ {\rm d}s. \end{equation} \end{description} \end{Theorem}
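Let us also record, for the reader's convenience, the elementary identity behind the appearance of $H_{\delta}$ in the energy inequality \eqref{ei} (here we tacitly assume that $\pi_\delta(s)/s^2$ is integrable near $s=0$, so that \eqref{Hep} is well defined): differentiating $H_{\delta}(z)=z\int_0^z \pi_\delta(s)s^{-2}\,{\rm d}s$ we get
\[
H_{\delta}'(z)=\int_0^z \frac{\pi_\delta(s)}{s^2}\,{\rm d}s+\frac{\pi_\delta(z)}{z},
\qquad\text{whence}\qquad
z H_{\delta}'(z)-H_{\delta}(z)=\pi_\delta(z).
\]
Formally, this is exactly the relation which, upon renormalizing the equation for $Z_{\delta}$ with $b=H_{\delta}$, produces the pressure term $\pi_\delta(Z_{\delta})\,{\rm div}_x\vc{u}_{\delta}$ appearing in the energy balance.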
\section{Uniform estimates with respect to $\delta$ and limit $\delta\to 0$}\label{4}
In this section
we prove our first main result, Theorem \ref{TM1!}. To this end, we first deduce a priori estimates that are uniform with respect to $\delta$, with $\varepsilon>0$ fixed. Then we improve these estimates to show that the approximation of the pressure is in fact uniformly integrable. This, together with the Lions compactness argument for the density sequence, will allow us to identify the limit of the pressure term, and hence of the whole system.
\subsection{Uniform estimates} \label{4.1}
We start by deriving uniform estimates for the triple $(\varrho_{\delta},\vc{u}_{\delta},Z_{\delta})$ constructed in Theorem \ref{TM1-}. Note that due to Lemma
\ref{Lextrem}, we have $\intO{\pi_\delta(Z_{\delta}(t,\cdot)){\rm div}_x\vc{u}_\infty}\ge 0$ at all time levels, and so, following \cite[Section 4.3.3]{CHJN} we can show that{ the} energy inequality \eqref{ei} in combination with the conservation of mass (\ref{mi-}) yields
\begin{equation} \label{Es3} \begin{aligned}
\|H_{\delta}(Z_{\delta})\|_{L^\infty(0,T;L^1(\Omega))} &\le c({\rm data}), \\
\|\varrho_{\delta}|\vc{u}_{\delta}|^2\|_{L^\infty(0,T; L^1(\Omega))} &\le c({\rm data}), \\
\|\vc{u}_{\delta}\|_{L^2(0,T;W^{1,2}(\Omega))} &\le c({\rm data}). \end{aligned} \end{equation}
From these estimates, using \eqref{Hep}, it follows that \begin{equation} \label{Es1}
\|Z_{\delta}\|_{L^\infty(0,T;L^\gamma(\Omega))} \le c({\rm data}), \end{equation} and hence, due to \eqref{fs} \begin{equation} \label{Es1r}
\|\varrho_{\delta}\|_{L^\infty(0,T;L^\gamma(\Omega))} \le c({\rm data}). \end{equation} Moreover \begin{equation} \label{Es1-} {\rm ess\ sup}_{t\in(0,T)}\intO{ \tilde{H}_{\delta}(Z_{\delta}(t,x))}\le c ({\rm data}), \end{equation}
where \bFormula{hep} \tilde{H}_{\delta}(z)= { \begin{cases} \varepsilon(1-z)^{-(\beta-1)} &\text{ if } z\in [0,1-\delta],\\ \varepsilon\delta^{-(\beta-1)}+\varepsilon\delta^{-\beta}(z-1+\delta) &\text{ if } z\in (1-\delta,\infty). \end{cases} }
\end{equation} This fact follows from the form of the pressure $\pi_\delta$ and the energy $H_{\delta}$; indeed, $\tilde{H}_{\delta}(Z_{\delta})$ contains the terms that are most singular in $\delta$ as $\delta \to 0+$. By virtue of \eqref{Es3} and \eqref{Es1r}, together with \eqref{fs}, we obtain
\begin{equation} \label{Es5} \begin{aligned}
\|\varrho_{\delta}\vc{u}_{\delta}\|_{L^\infty(0,T; L^{\frac {2\gamma}{\gamma+1}}(\Omega))}+
\|\varrho_{\delta}\vc{u}_{\delta}\|_{L^2(0,T; L^{\frac{6\gamma}{\gamma+6}}(\Omega))}&\le c({\rm data}), \\
\|Z_{\delta}\vc{u}_{\delta}\|_{L^\infty(0,T; L^{\frac {2\gamma}{\gamma+1}}(\Omega))}+
\|Z_{\delta}\vc{u}_{\delta}\|_{L^2(0,T; L^{\frac{6\gamma}{\gamma+6}}(\Omega))}&\le c({\rm data}). \end{aligned} \end{equation}
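Let us note in passing two elementary properties of the function $\tilde H_{\delta}$ from \eqref{hep} that are used below: for every fixed $\delta\in(0,1)$,
\[
\lim_{z\uparrow 1-\delta}\varepsilon(1-z)^{-(\beta-1)}=\varepsilon\delta^{-(\beta-1)},
\qquad
\sup_{z\ge 0}|\tilde H_{\delta}'(z)|\le \varepsilon(\beta-1)\delta^{-\beta},
\]
so $\tilde H_{\delta}$ is continuous at $z=1-\delta$ and globally Lipschitz (with a constant that degenerates as $\delta\to 0$); the latter property is invoked in Section \ref{adce} when deducing $\tilde H_{\delta^*}(Z)\in C([0,T];L^1(\Omega))$.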
\subsection{Limit in the continuity equation and boundedness of density} \label{adce}
{ We deduce from the estimates \eqref{Es3}--\eqref{Es1r} that \bFormula{uweak} \begin{split} \vc{u}_\delta\rightharpoonup\vc{u} \quad &\text{ in } L^2(0,T;W^{1,2}(\Omega)), \\ Z_\delta\rightharpoonup^*Z \quad &\text{ in } L^\infty(0,T; L^\gamma(\Omega)),\\ \varrho_\delta\rightharpoonup^*\varrho \quad &\text{ in } L^\infty(0,T; L^\gamma(\Omega)), \end{split} \end{equation} at least for a subsequence.} We also deduce from the continuity equation \eqref{ce} and its version for $Z_\delta$, thanks to \eqref{Es5} and \eqref{fs}, that the sequences of functions $t \mapsto \intO{\varrho_{\delta}\phi}$, and $t \mapsto \intO{Z_{\delta}\phi}$, $\phi\in C^1_c(\Omega)$, are equi-continuous. { Therefore, by { the} Arzel\`a--Ascoli theorem and separability of $L^{{\gamma'}}(\Omega)$, we get \bFormula{wreweak} \begin{split} \varrho_{\delta}\to\varrho \quad \text{ in } C_{\rm weak}(0,T;L^\gamma(\Omega)),\\ Z_{\delta}\to Z \quad \text{ in } C_{\rm weak}(0,T;L^\gamma(\Omega)). \end{split} \end{equation} Both of the sequences converge strongly in $L^2(0,T;W^{-1,2}(\Omega))$ due to compact
embedding $L^\gamma(\Omega)\hookrightarrow\hookrightarrow W^{-1,2}(\Omega)$.} In particular, this implies that \[ \varrho_{\delta}\vc{u}_{\delta}\rightharpoonup\varrho\vc{u} \quad \text{in } L^2(0,T; L^{\frac{6\gamma}{\gamma+6}}(\Omega)), \] \[ Z_{\delta}\vc{u}_{\delta}\rightharpoonup Z\vc{u} \quad \text{in } L^2(0,T; L^{\frac{6\gamma}{\gamma+6}}(\Omega)). \]
This enables us to pass to the limit in the weak formulation \eqref{ce} so that we get the identity \bFormula{continuity} \begin{split} &\intO{\varrho(\tau,\cdot)\varphi(\tau,\cdot)} - \intO{\varrho_0(\cdot)\varphi(0,\cdot)} \\ &= \int_0^\tau\intO{\Big(\varrho\partial_t\varphi + \varrho \vc{u} \cdot \nabla_x \varphi\Big) }{\rm d}t - \int_0^\tau\int_{\Gamma_{\rm in}} \varrho_B \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t \end{split} \end{equation} and its analogue for $Z$. Both hold for any $\tau\in [0,T]$ and $\varphi \in C_c^1([0,T]\times({\Omega}\cup\Gamma_{\rm in}))$.
To conclude this subsection, we deduce from \eqref{Es1-} that \bFormula{Es7} 0 \le Z(t,x) < 1 \quad \text{a.e. in }\Omega. \end{equation} Indeed, for any fixed sufficiently small $\delta^*>0$, we have \bFormula{nn} \intO{\tilde H_{\delta^*}(Z(t))} \le \liminf_{\delta\to 0} \intO{\tilde H_{\delta^*}(Z_{\delta}(t))}\le \liminf_{\delta\to 0} \intO{\tilde H_{\delta}(Z_{\delta}(t))} \le c({\rm data}) \end{equation}
for almost all $t\in (0,T)$, where the first inequality is a consequence of the convexity of the function $\tilde H_{\delta^*}(\cdot)$ on $[0,1-\delta^*)$ as well as its linearity in $Z_\delta$ on the remaining part, the second inequality follows from the monotonicity of the map $\delta\mapsto \tilde H_\delta(Z)$ in a small right neighbourhood of $0$, and the third inequality follows from (\ref{Es1-}). Next, as $\tilde H_{\delta^*}(\cdot)$ is globally Lipschitz, using the continuity equation we deduce that { $Z\in C([0,T);L^1(\Omega))$ and then $\tilde H_{\delta^*}(Z)\in C([0,T]; L^1(\Omega))$.} Therefore formula (\ref{nn}) implies \bFormula{nn+} \intO{\tilde H_{\delta^*}(Z(t))} \le c({\rm data}) { \quad \text{ for all } t\in [0,T]},
\end{equation} uniformly in $\delta^*$. This implies that $Z\leq 1$ a.e. Finally, letting $\delta^*\to 0$ in \eqref{nn+} and recalling \eqref{hep}, we obtain \[ \intO{(1-Z)^{-(\beta-1)}}\le c({\rm data}) { \quad \text{ for all } t\in [0,T],}
\] which yields \eqref{Es7}.
\subsection{Uniform integrability of pressure}\label{4.2}
In order to pass to the limit in the weak formulation of the momentum equation \eqref{me}, we have to improve the estimates for the pressure. So far, we do not even know whether the pressure is uniformly integrable in $\delta$. In this section we prove that it is.
A general tool to obtain these estimates is the following Bogovskii lemma (see, e.g., \cite{GALN} or \cite[Theorem 10.11]{FEINOV}).
{ \bLemma{Bog} Let $\Omega \subset R^d$, $d\geq 2$, be a bounded Lipschitz domain. Then there exists a linear operator \[ {\cal B}:\Big\{f\in C^\infty_c(\Omega)\,\colon\,\intO{f}=0\Big\} \mapsto C^\infty_c(\Omega;R^d) \] satisfying the following three properties. \begin{enumerate} \item For all $f\in C^\infty_c(\Omega)$ satisfying $\intO{f}=0$ \[ {\rm div}{\cal B}[f]=f. \]
\item Let $\overline L^p(\Omega):=\{f\in L^p(\Omega)\,|\,\intO{f}=0\}$. Then the operator ${\cal B}$ extends to a bounded linear operator from $\overline{L}^p(\Omega)$ to $W^{1,p}(\Omega;R^d)$ for any $1<p<\infty$. In other words, for each $1<p<\infty$ there is $c(p)>0$ such that for all $f\in \overline L^p(\Omega)$ \[
\|{\cal B}[f]\|_{W^{1,p}(\Omega;R^d)}\le c(p)\|f\|_{L^p(\Omega)}. \]
\item If $f={\rm div}\,\vc g$ for some $\vc g\in L^q(\Omega;R^d)$, $1<q<\infty$, with $\vc g\cdot\vc n|_{\partial\Omega}=0$ in the sense of normal traces, then there is $c(q)>0$ such that \[
\|{\cal B}[f]\|_{L^q(\Omega;R^d)}\le c(q)\|\vc g\|_{L^q(\Omega;R^d)} \] for all $\vc g$ with the above properties. \end{enumerate} \end{Lemma} }
We employ this lemma to construct suitable test functions for the momentum equation. Note that by a standard density argument we can extend the class of test functions in \eqref{me} to certain $W^{1,q}$-functions with zero trace on $\Gamma_{\rm out}$. The function constructed below is an admissible test function thanks to the estimates performed below.
We use in \eqref{me} the following test function \begin{equation}\label{B1} \vcg{\varphi} = \eta(t) {\cal B}\Big(\psi Z_\delta-\frac{\psi}{\intO{\psi}} \intO{\psi Z_\delta}\Big), \end{equation} where $\eta \in C^1_c(0,T)$ and $\psi \in C^\infty_c(\Omega)$, $0\leq \eta,\psi\leq 1$. Then we have \begin{equation} \label{B2} \int_0^T \intO{\eta \pi_\delta(Z_\delta) \Big(\psi Z_\delta - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t = \sum_{i=1}^6 I_i, \end{equation} where \begin{align*} I_1&=-\int_0^T\partial_t\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}\Big(\psi Z_{\delta} - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t, \\ I_2&=\int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}({\rm div}\,(Z_{\delta}\vc{u}_{\delta}\psi))}{\rm d}t, \\ I_3&=-\int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}\Big(Z_{\delta}\vc{u}_{\delta}\cdot\nabla_x\psi - \frac{\psi}{\intO{\psi}}\intO{Z_{\delta}\vc{u}_{\delta}\cdot\nabla_x\psi}\Big)}{\rm d}t, \\ I_4&=-\int_0^T\eta\intO{\varrho_{\delta}(\vc{u}_{\delta}\otimes\vc{u}_{\delta}):\nabla_x{\cal B}\Big(\psi Z_{\delta} - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t, \\ I_5&=\int_0^T\eta\intO{\tn S(\nabla_x\vc{u}_{\delta}): \nabla_x{\cal B}\Big(\psi Z_{\delta} - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t, \\ I_6&= \int_0^T\eta\intO{\varrho_{\delta} (\vc{w}-\vc{u}_{\delta})\cdot {\cal B}\Big(\psi Z_{\delta} - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t. \end{align*}
Clearly, $|\sum_{i=1}^6 I_i| \leq C$ with $C$ independent of $\delta$, by the estimates \eqref{Es3}--\eqref{Es1r}, and we end up with \begin{equation} \label{B3} \int_0^T \intO{\eta \pi_\delta(Z_\delta) \Big(\psi Z_\delta - \frac{\psi}{\intO{\psi}}\intO{\psi Z_\delta}\Big)}{\rm d}t \leq C. \end{equation} We now choose $\psi \in C^\infty_c(\Omega)$ such that $0\leq \psi\leq 1$ and $\psi \equiv 1$ on some compact set $K\subset\subset \Omega$.
For the fixed set $K$ and any such $\psi$ as above we further denote $$ M_{\delta,K}=\max_{t\in [0,T]} \intO{Z_\delta \psi}. $$ We claim that for each $K \subset\subset \Omega$ there exist $\delta_0 >0$ and $\lambda >1$ such that for any $\delta <\delta_0$ it holds that \begin{equation}\label{lambda} \lambda M_{\delta,K} < \intO{\psi}. \end{equation} This follows from several facts shown before. The form of $\tilde H_\delta(Z_\delta)$, see \eqref{hep}, yields that for any positive (sufficiently small) $\delta_\theta$ we have on the set $\{Z_\delta>1-\delta_\theta\}$ (recall that $\varepsilon$ is fixed at this moment) \eq{ \tilde H_\delta(Z_\delta)> \frac{C}{\delta_\theta^{\beta-1}}. } Since $\beta>\frac 52$ (if $d=3$; note that it is enough to have $\beta >2$, which is the case if $d=2$) and due to the $L^\infty(0,T;L^1(\Omega))$ bound of $\tilde H_\delta(Z_\delta)$, we see that for arbitrarily small $\theta>0$ there exists $\delta_\theta$ such that $$
\sup_{\delta<\delta_\theta} \sup_{t\in[0,T]}\left|\left\{x\in \Omega: Z_\delta(t,x) > 1-\delta_\theta\right\}\right|<\theta. $$ Furthermore, for $\theta>0$ sufficiently small, the $L^\infty(0,T;L^1(\Omega))$ bound of $\tilde H_\delta(Z_\delta)$ implies that we may take $\delta_\theta = \theta^{\frac 23}$. Therefore we have (assuming $\delta \leq \delta_\theta$): $$ \begin{aligned} \max_{t\in[0,T]}\intO{Z_\delta \psi}&\leq\sup_{t\in[0,T]}\int_{\{Z_\delta(t,x) > 1-\delta_\theta\}}Z_\delta \psi \,{\rm d} {x}+\sup_{t\in[0,T]}\int_{\{Z_\delta(t,x) \leq 1-\delta_\theta\}}Z_\delta \psi \,{\rm d} {x}\\
&\leq \theta^{\frac{1}{\gamma'}}\|Z_\delta\|_{L^\infty(0,T;L^\gamma(\Omega))}+(1-\delta_\theta)\intO{\psi}\\ &\leq C\theta^{\frac{1}{\gamma'}}+(1-\delta_\theta)\intO{\psi}. \end{aligned} $$ Thus, taking $\theta$ possibly even smaller and recalling that $\gamma$ may be chosen sufficiently large, we may achieve that $C\theta^{\frac{1}{\gamma'}} <\frac {\delta_\theta}{2} \intO{\psi}$, which leads to \begin{equation} \label{est_a} \max_{t\in[0,T]}\intO{Z_\delta \psi}\leq\lr{1-\frac{\delta_\theta}{2}}\intO{\psi}. \end{equation} Thus, for $\lambda:= \frac{1}{1-\frac{\delta_\theta}{2}}$ we have shown \eqref{lambda}.
We return to inequality \eqref{B3}. Clearly, for the set $O_1:= \Big\{(t,x) \in (0,T)\times \Omega: Z_\delta(t,x) < \frac{\lambda M_{\delta,K}}{\intO{\psi}}<1\Big\}$ we have $$ \begin{aligned}
\Big|\int\int_{O_1} \eta \pi_\delta(Z_\delta) &\Big(\psi Z_\delta-\frac{\psi}{\intO{\psi}} \intO{Z_\delta\psi}\Big)\ {\rm d}x{\rm d}t\Big| \\
&\leq \Big|\int\int_{O_1} \eta \pi_\delta\Big(\frac{\lambda M_{\delta,K}}{\intO{\psi}}\Big) \Big(\psi Z_\delta-\frac{\psi}{\intO{\psi}} \intO{Z_\delta\psi}\Big)\ {\rm d}x{\rm d}t\Big| \leq CT. \end{aligned} $$ Next $$ \begin{aligned} &\int\int_{((0,T)\times \Omega)\setminus O_1} \eta\pi_\delta(Z_\delta) \psi Z_\delta \ {\rm d}x{\rm d}t \leq C + \int\int_{((0,T)\times \Omega)\setminus O_1}\eta \pi_\delta(Z_\delta) \Big(\frac{\psi}{\intO{\psi}} \intO{\psi Z_\delta}\Big) \ {\rm d}x{\rm d}t \\ &\leq C + \int\int_{((0,T)\times \Omega)\setminus O_1} \eta\psi\pi_\delta(Z_\delta)\frac{M_{\delta,K}}{\intO{\psi}} \ {\rm d}x{\rm d}t \leq C + \frac{1}{\lambda} \int\int_{((0,T)\times \Omega)\setminus O_1} \eta\psi \pi_\delta(Z_\delta) Z_\delta \ {\rm d}x{\rm d}t. \end{aligned} $$ The computations above imply that for any $K \subset\subset \Omega$ and corresponding $\psi\in C^\infty_c(\Omega)$ there exists $C=C(K)$ such that \begin{equation}\label{B4} \int_0^T \eta \intO{\psi \pi_\delta(Z_\delta) Z_\delta}{\rm d}t \leq C(K). \end{equation} Hence we also have \begin{equation}\label{B5} \int \int_{\{Z_\delta \leq 1-\delta \}} \eta \psi \varepsilon (1-Z_\delta)^{-\beta} \ {\rm d}x {\rm d}t \leq C(K), \end{equation} where $\eta \in C^\infty_c(0,T)$. \begin{Remark}\label{RB1} Note that this estimate was obtained without assuming a short time interval in the case of zero velocity flux, as was required in \cite{CHNY2}. From this point of view our paper thus improves the result of the above-cited paper, in the sense that a global-in-time solution exists provided $$ \int_{\partial \Omega} \vc{u}_B \cdot \vc{n}\ {\rm d}S\geq 0. $$ The case of negative flux, however, remains an interesting open problem. \end{Remark}
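For the reader's convenience, we also record explicitly the absorption step used above to arrive at \eqref{B4}: since $\lambda>1$, the last chain of inequalities can be rearranged as
\[
\Big(1-\frac{1}{\lambda}\Big)\int\!\!\int_{((0,T)\times \Omega)\setminus O_1} \eta\psi\, \pi_\delta(Z_\delta) Z_\delta \ {\rm d}x{\rm d}t \leq C,
\]
and, combining this with the bound on the integral over $O_1$ obtained before, we conclude \eqref{B4}.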
\subsection{Equi-integrability of the pressure} \label{4.4}
In order to show the equi-integrability of the sequence $\pi_\delta(Z_{\delta})$, we shall use the renormalized continuity equation \eqref{rce} with $\varrho_{\delta}$ replaced by $Z_{\delta}$. We fix the same cut-off functions as in the previous section, $\eta$ in the time variable and $0\le \psi\in C^1_c(\Omega)$ in the spatial variables, and consider the following test function \[
\vcg{\varphi}=\eta(t){\cal B}(\psi b(Z_{\delta})-\alpha_\delta) \quad \text{ where } \alpha_\delta = \frac 1{|\Omega|}\intO{\psi b(Z_{\delta})} \] with \begin{align*} b(Z) &= \begin{cases} -\ln(1/2) & \text{ if } Z\in [0,1/2], \\ -\ln(1-Z) & \text{ if } Z \in (1/2,1-\delta), \\ -\ln\delta & \text{ if }Z \in [1-\delta,\infty). \end{cases} \end{align*} We note that \[ b'(Z) = \frac 1{1-Z}1_{(1/2,1-\delta)}(Z), \] where, as above, $1_E(Z)$ denotes the characteristic function of a set $E$. In view of \eqref{Es1}, \eqref{Es1-}, and \eqref{B5}, we notice also that for any $1\le p<\infty$, and any compact $K\subset\Omega$, \bFormula{pr3} \begin{split}
\|b(Z_{\delta})\|_{L^ \infty(0,T; L^p(K))} &\le c({\rm data}, K, p), \\
\|\eta\big(Z_{\delta} b'(Z_{\delta})-b(Z_{\delta})\big)\|_{L^\beta((0,T)\times K)} &\le c({\rm data},K,\eta),\\
\|Z_{\delta} b'(Z_{\delta})-b(Z_{\delta})\|_{L^\infty(0,T; L^{\beta-1}(K))} &\le c({\rm data},K), \end{split} \end{equation} where $\eta$ is as above. We test the momentum equation \eqref{me} by $\vcg{\varphi}$ to obtain the following identity \[ \int_0^T \eta \intO{\psi \varepsilon\pi_\delta(Z_{\delta}) b(Z_{\delta})}{\rm d}t= \sum_{i=1}^{8}I_i, \] where \begin{align*}
&I_1=\frac 1{|\Omega|}\int_0^T\eta(t)\int_{\Omega}\psi b(Z_{\delta}){\rm d}x\intO{\varepsilon\pi_\delta(Z_{\delta})}{\rm d}t, \\ &I_2=-\int_0^T\partial_t\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}(\psi b(Z_{\delta})-\alpha_\delta)}{\rm d}t, \\ &I_3=\int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}\Big({\rm div}_x(\psi b(Z_{\delta})\vc{u}_{\delta})\Big)}{\rm d}t, \\
&I_4=-\int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}\Big( b(Z_{\delta})\vc{u}_{\delta}\cdot\nabla_x\psi- \frac 1{|\Omega|}\int_{\Omega} b(Z_{\delta})\vc{u}_{\delta}\cdot\nabla_x\psi \ {\rm d}x\Big) }{\rm d}t, \\ &I_5=\int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal B}\Big[\psi\Big(Z_{\delta} b'(Z_{\delta})-b(Z_{\delta})\Big){\rm div}_x\vc{u}_{\delta}
- \frac 1{|\Omega|}\intO{\psi\Big(Z_{\delta} b'(Z_{\delta})-b(Z_{\delta})\Big){\rm div}_x\vc{u}_{\delta}}\Big]}{\rm d}t, \\ &I_6=\int_0^T\eta\intO{\tn S(\nabla_x\vc{u}_{\delta}):\nabla_x{\cal B}(\psi b(Z_{\delta})-\alpha_\delta)}{\rm d}t, \\ &I_7=- \int_0^T\eta\intO{\varrho_{\delta}\vc{u}_{\delta}\otimes\vc{u}_{\delta}:\nabla_x{\cal B}(\psi b(Z_{\delta})-\alpha_\delta)}{\rm d}t, \\ &I_8 = -\int_0^T\eta \intO{\varrho_{\delta} (\vc{w}-\vc{u}_{\delta})}\cdot {\cal B}(\psi b(Z_{\delta})-\alpha_\delta){\rm d}t. \end{align*}
The above calculation involves integration by parts and the renormalized equation \eqref{rce} for the unknown $Z_\delta$; the function $b$ is clearly admissible in the renormalized continuity equation. Using the approach of \cite{FEMAL}, estimates \eqref{Es3}--\eqref{Es1-}, and \eqref{pr3}, we verify that for any $\beta>5/2$ there is $\gamma>3/2$ (sufficiently large; $\gamma\to\infty$ as $\beta\to \frac 52+$) such that the absolute values of $I_1,\dots,I_8$ are bounded by constants independent of $\delta$. The most severe constraints on the values of $\beta$
and $\gamma$ within these calculations are imposed when estimating the term $|I_5|$. Note that in the case $d=2$ the same approach as in \cite{FEMAL} can be used to see that the most singular term can be estimated for $\beta >2$. Carrying out this process, we obtain that for any compact set $K\subset \Omega$, \bFormula{pvre}
\|\eta\pi_\delta(Z_{\delta})b(Z_{\delta})\|_{L^{1}((0,T)\times K)}\le c({\rm data},K, \eta). \end{equation} Consequently, for fixed $\varepsilon>0$, the sequence $\pi_\delta(Z_{\delta})$ is equi-integrable in $L^{1}(Q)$ for any $Q\subset\subset (0,T)\times\Omega$ and \eq{\label{pvrecon} \pi_\delta(Z_{\delta})\rightharpoonup\overline{\pi(Z)} \quad \text{ in } L^{1}(J\times K) } for any compact set $J\times K\subset (0,T)\times \Omega$ at least for a chosen subsequence (not relabeled).
\subsection{Momentum equation} \label{4.5}
With the help of \eqref{Es3}, \eqref{Es1r}, and \eqref{pvre} employed in the momentum equation \eqref{me}, we verify the equicontinuity of the sequence $t\mapsto \intO{\varrho_{\delta}\vc{u}_{\delta}(t,\cdot)\varphi(\cdot)}$ in $C[0,T]$ for any $\varphi\in C^1_c(\Omega)$. Therefore, we may use the Arzel\`a--Ascoli theorem in combination with \eqref{Es5} and the separability of $L^{\lr{\frac{2\gamma}{\gamma+1}}'}(\Omega)$ to show that \bFormula{bweak} \varrho_{\delta}\vc{u}_{\delta}\to\varrho\vc{u} \quad \text{ in } C_{\rm weak}([0,T];L^{\frac{2\gamma}{\gamma+1}}(\Omega)). \end{equation} Consequently, the compact embedding $L^{\frac{2\gamma}{\gamma+1}}(\Omega)\hookrightarrow\hookrightarrow W^{-1,2}(\Omega)$ (for $\gamma>\frac32$) in combination with the weak convergence of $\vc{u}_{\delta}$ in $L^2(0,T; W^{1,2}(\Omega))$ implies \bFormula{cweak} \varrho_{\delta}\vc{u}_{\delta}\otimes\vc{u}_{\delta}\rightharpoonup\varrho\vc{u}\otimes\vc{u} \quad \text{ in } L^2(0,T; L^{{6\gamma}/({4\gamma+3})}(\Omega)). \end{equation}
Thus, letting $\delta\to 0$ in weak formulation of \eqref{me} while using \eqref{uweak}, \eqref{wreweak}, \eqref{pvrecon}, and \eqref{cweak} we obtain that for any $\tau\in [0,T]$ and $\boldsymbol{\varphi} \in C^1_c ([0,T]\times\Omega; R^d)$, \bFormula{me+} \begin{split} &\intO{\varrho\vc{u}(\tau,\cdot)\cdot\boldsymbol{\varphi}(\tau,\cdot)} - \intO{(\varrho_0\vc{u}_0)(\cdot)\cdot \boldsymbol{\varphi}(0,\cdot)} = \intO{ \left( \varrho \vc{u} \otimes \vc{u} : \nabla_x \boldsymbol{\varphi} + \varepsilon\overline{\pi(Z)} {\rm div}_x \boldsymbol{\varphi}\right)} \\ - &\int_0^\tau\intO{ \mathbb{S}(\nabla_x \vc{u}) : \nabla_x \boldsymbol{\varphi} }{\rm d}t + \int_0^\tau \intO{\varrho (\vc{w}-\vc{u})\cdot \boldsymbol{\varphi}}{\rm d}t. \end{split} \end{equation}
The proof of Theorem \ref{TM1!} is therefore complete if we show that \bFormula{pp} \overline{\pi(Z)}=\pi_\varepsilon(Z), \end{equation} which amounts, in fact, to showing that the sequence $Z_{\delta}$ converges almost everywhere in $Q_T$.
\subsection{Strong convergence of $Z_{\delta}$}\label{4.6}
We denote by $\nabla_x\Delta^{-1}$ the pseudodifferential operator with the
Fourier symbol $\frac {{\rm i}\xi}{|\xi|^2}$ and by ${\cal R}$ the Riesz transform with the
Fourier symbol $\frac {\xi\otimes\xi}{|\xi|^2}$. Following Lions \cite{LI4}, with the modifications introduced in \cite{FNP}, we shall use the test function \[ \varphi(t,x)=\eta(t)\psi(x)\nabla_x\Delta^{-1}(Z_{\delta}\psi),\;\;\eta\in C^1_c(0,T),\;\psi\in C^1_c(\Omega) \] in the approximate momentum equation \eqref{me} and the test function \[ \varphi(t,x)=\eta(t)\psi(x)\nabla_x\Delta^{-1}(Z\psi),\;\;\eta\in C^1_c(0,T),\;\psi\in C^1_c(\Omega) \] in the limit momentum equation \eqref{me+}, subtract the resulting identities, and then perform the limit $\delta\to 0$. These calculations are laborious but nowadays standard; one can find the details e.g. in \cite[Lemma 3.2]{FNP}, \cite{NOST}, \cite{EF70}, or \cite[Chapter 3]{FEINOV}. They lead to the following identity:
\bFormula{ddd!} \begin{split} &\int_0^T\intO{\eta\psi^2\Big(\overline{\pi(Z)} \;Z-(2\mu +\lambda) {\rm div}_x\vc{u}\, Z\Big)}\,{\rm d}t \\ &\quad -\int_0^T\intO{\eta\psi^2\Big(\overline{\pi(Z)\,Z} -(2\mu +\lambda)\overline{Z\,{\rm div}_x\vc{u}}\Big)}{\rm d}t \\ &= \int_0^T\eta\intO{\psi^2\vc{u}\cdot\Big(Z {\cal R}\cdot(\varrho\vc{u})-\varrho\vc{u}\cdot{\cal R}(Z)\Big)}{\rm d}t \\ &\quad - \lim_{\delta\to 0}\int_0^T\eta\intO{\psi^2\vc{u}_{\delta}\cdot\Big( Z_{\delta} {\cal R}\cdot(\varrho_{\delta}\vc{u}_{\delta})-\varrho_{\delta}\vc{u}_{\delta}\cdot{\cal R}(Z_{\delta})\Big) }{\rm d}t. \end{split} \end{equation}
In (\ref{ddd!}) and in the sequel, the overlined quantities $\overline{b(Z,\vc{u})}$, resp. $\overline{b(Z)}$, denote the $L^1(Q_T)$-weak limits of the sequences $b(Z_{\delta},\vc{u}_{\delta})$, resp. $b(Z_{\delta})$ (or $b_\delta(Z_{\delta})$, as the case may be).
The most delicate
point in this process is to show that the right-hand side of this identity vanishes. The details of this calculation and reasoning can be found in \cite[Lemma 3.2]{FNP}, \cite{EF70}, \cite{NOST}, or \cite[Chapter 3]{FEINOV}. Consequently, \bFormula{ad1} \lr{\overline{\pi(Z)\,Z} -\overline{\pi(Z)}\;Z}=(2\mu +\lambda)\Big(\overline{Z\,{\rm div}_x\vc{u}}-Z{\rm div}_x\vc{u}\Big), \end{equation} and so \bFormula{evf+} (2\mu +\lambda) \int_0^\tau\intO{\Big(Z{\rm div}\vc{u}-\overline{Z {\rm div}\vc{u}}\Big)}{\rm d}t = \int_0^\tau\intO{\Big(\overline{\pi(Z)}Z-\overline{ \pi(Z)Z}\Big)}{\rm d}t\leq 0, \end{equation} due to the fact that $\pi(Z)$ is increasing, whence $\overline{\pi(Z)\,Z}\geq \overline{\pi(Z)}\,Z$ a.e. in $Q_T$.
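The sign information used here is a consequence of the standard monotonicity argument: since $\pi_\delta$ is nondecreasing, for every $v\in C(\overline{Q_T})$ with $0\le v<1$ we have
$$ \int_0^T\intO{\eta\psi^2\big(\pi_\delta(Z_\delta)-\pi_\delta(v)\big)\big(Z_\delta-v\big)}\,{\rm d}t\ \ge\ 0; $$
passing to the limit $\delta\to 0$ and then letting $v\to Z$ (cf. e.g. \cite{FEINOV}) yields $\overline{\pi(Z)\,Z}\ge \overline{\pi(Z)}\,Z$ a.e. in $Q_T$.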
The next (and last) step of the proof follows closely Section 4.5 in \cite{CHJN}. Note that this procedure does not depend on the momentum equation anymore; therefore, it will also hold for the limit passage $\varepsilon\to0$.
Since both $(Z_{\delta},\vc{u}_{\delta})$ and $(Z,\vc{u})$ satisfy the renormalized continuity equation \eqref{rce}, we obtain, in particular, that \begin{align*} &\int_{\Omega}\overline{L(Z(\tau))}\varphi \ {\rm d}x-\int_{\Omega}L(Z_0)\varphi(0,x) \ {\rm d}x \\ &= \int_0^\tau\int_{\Omega} \Big(\overline{L(Z)}\partial_t\varphi+\overline{L(Z)} \vc{u} \cdot \nabla_x \varphi-\varphi \overline{Z {\rm div}_x \vc{u}} \Big)\ {\rm d} {x}{\rm d}t +\int_0^\tau\int_{\Gamma_{\rm in}}L(Z_B)\vc{u}_B\cdot\vc n\varphi \ {\rm d}S_x{\rm d}t, \end{align*} and \begin{align*} &\int_{\Omega}L(Z(\tau,x))\varphi(\tau,x) \ {\rm d}x-\int_{\Omega}L(Z_0)\varphi(0,x)\ {\rm d}x \\ &= \int_0^\tau\int_{\Omega} \Big(L(Z)\partial_t\varphi+L(Z) \vc{u} \cdot \nabla_x \varphi-\varphi Z{\rm div}_x \vc{u} \Big) {\rm d} {x}{\rm d}t +\int_0^\tau\int_{\Gamma_{\rm in}}L(Z_B)\vc{u}_B\cdot\vc n\varphi \ {\rm d}S_x{\rm d}t \end{align*} where $L(Z)=Z\ln Z$,
and $\varphi\in C^1_c([0,T]\times(\Omega\cup\Gamma_{\rm in}))$. Subtracting these identities, we obtain \begin{align*} &\int_{\Omega}\Big(\overline{L(Z)}-L(Z)\Big)(\tau)\varphi(\tau,x) \ {\rm d}x \\ &= \int_0^\tau\int_{\Omega} \Big(\overline{L(Z)} -L(Z)\Big)(\vc{u} \cdot \nabla_x \varphi+\partial_t\varphi) \ {\rm d}x{\rm d}t-\int_0^\tau\intO{\varphi \Big(\overline{Z {\rm div}_x \vc{u}} -Z{\rm div}\vc{u}\Big)}{\rm d}t. \end{align*} Hence, by virtue of \eqref{evf+} and choosing, in particular, a time-independent test function $\varphi$, we get \bFormula{dod5} \begin{split} &\intO{\Big(\overline{Z\log Z}- Z\log Z { \Big)}
(\tau,x)\varphi(x)} + \int_0^\tau\intO{ \Big(Z\log Z-\overline{Z \log Z}\Big) \vc{u} \cdot \nabla_x \varphi }{\rm d}t\leq 0
\end{split} \end{equation} for any $\tau\in [0,T]$ and $\varphi \in C^1_c(\Omega\cup\Gamma_{\rm in})$ with $\varphi \geq 0$. To show that letting $\varphi(x)\to 1$ in the above formula gives
\bFormula{dod6} \intO{\Big(\overline{Z\log Z}- Z\log Z { \Big)}
(\tau,\cdot)}\le 0, \end{equation} we can follow step by step the procedure described in Section 4.7 in \cite{CHNY2}. On the other hand, we have \[ \overline{Z\log Z}- Z\log Z\ge 0 \quad \text{ a.e. in } Q_T \] since $Z\log Z$ is convex. Thus, (\ref{dod6}) yields \[ \overline{Z\log Z}= Z\log Z \quad \text{ a.e. in } Q_T, \] and so
\bFormula{dod7} Z_{\delta}\to Z \quad \text{ a.e. in } Q_T \text{ and in } L^p(Q_T) \text{ for } 1\le p<\infty,
\end{equation} cf. e.g. \cite[Theorem 10.20]{FEINOV}. We deduce from \eqref{dod7} and \eqref{pvre} that for any compact $K\subset \Omega$, \bFormula{cp} \pi_\delta(Z_{\delta})\to \pi(Z) \;\text{a.e. in $Q_T$ and in $L^1((0,T)\times K)$}. \end{equation} In particular, we have $\overline{\pi(Z)}=\pi(Z)$ in equation \eqref{me+}, note, however, that this information is restricted solely to $Z$ and does not imply the strong convergence of $\varrho_{\delta}$.
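The passage from $\overline{Z\log Z}=Z\log Z$ to \eqref{dod7} relies on the following well-known fact (this is the content of \cite[Theorem 10.20]{FEINOV} cited above): if $\Phi$ is strictly convex, then
$$ v_\delta\rightharpoonup v\ \text{ and }\ \Phi(v_\delta)\rightharpoonup \Phi(v)\ \text{ in } L^1(Q_T) \quad\Longrightarrow\quad v_\delta\to v\ \text{ a.e. in } Q_T, $$
applied here with $\Phi(s)=s\log s$ and $v_\delta=Z_\delta$.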
\subsection{Energy inequality}
We first integrate \eqref{ei} with respect to $\tau$ over $(\tau_1,\tau_2)$, $0<\tau_1<\tau_2<T$, to obtain that \bFormula{ei!} \begin{split}
&\int_{\tau_1}^{\tau_2}\int_{\Omega}\Big(\frac 12\varrho_{\delta}|\vc{u}_{\delta}-\vc{u}_\infty|^2+H_\delta(Z_{\delta})\Big)(\tau,\cdot) \ {\rm d} x{\rm d}\tau+ \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty)):\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\le
{ \int_{\tau_1}^{\tau_2}\intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2+H_\delta(Z_0)\Big)}}{\rm d}\tau -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega} \pi_\delta(Z_\delta){\rm div}_x\vc{u}_\infty \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho_{\delta}\vc{u}_{\delta}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}_{\delta}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}_{\delta}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad - \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Gamma_{{\rm in}}} H_\delta(Z_{B})\vc{u}_{B}\cdot\vc n \ {\rm d}S_x{\rm d}t{\rm d}\tau + \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho_{\delta} (\vc{w}-\vc{u}_{\delta})\cdot (\vc{u}_{\delta}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau.
\end{split} \end{equation} Now, we can use the convergences established in Section \ref{adce} and in \eqref{dod7} and (\ref{cp}) on the right-hand side, and the same convergences in combination with the weak lower semi-continuity of convex functionals on the left-hand side (see e.g. \cite[Theorem 10.20]{FEINOV}). Note, however, that we must be slightly careful with the pressure term, where the convergence is only local. We therefore get \bFormula{ei!++} \begin{split}
&\int_{\tau_1}^{\tau_2}\int_{\Omega}\Big(\frac 12\varrho|\vc{u}-\vc{u}_\infty|^2+H_\varepsilon(Z)\Big)(\tau,\cdot) \ {\rm d} x{\rm d}\tau+ \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x(\vc{u}-\vc{u}_\infty)):\nabla_x(\vc{u}-\vc{u}_\infty)\ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad +\int_{\tau_1}^{\tau_2}\int_\alpha^{\tau-\alpha}\int_{K} \pi_\varepsilon(Z){\rm div}_x\vc{u}_\infty \ {\rm d} x{\rm d}t{\rm d}\tau \le
{ \int_{\tau_1}^{\tau_2}\intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2+H_\varepsilon(Z_0)\Big)}}{\rm d}\tau
\\ &\quad -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho\vc{u}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad - \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Gamma_{{\rm in}}} H_\varepsilon(Z_{B})\vc{u}_{B}\cdot\vc n \ {\rm d}S_x{\rm d}t{\rm d}\tau + \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho (\vc{w}-\vc{u})\cdot (\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau.
\end{split} \end{equation} Since the inequality holds for any sufficiently small $\alpha>0$ and any compact subset $K$ of $\Omega$, we easily obtain in the end \bFormula{ei!+} \begin{split}
&\int_{\tau_1}^{\tau_2}\int_{\Omega}\Big(\frac 12\varrho|\vc{u}-\vc{u}_\infty|^2+H_\varepsilon(Z)\Big)(\tau,\cdot) \ {\rm d} x{\rm d}\tau+ \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x(\vc{u}-\vc{u}_\infty)):\nabla_x(\vc{u}-\vc{u}_\infty)\ {\rm d} x{\rm d}t{\rm d}\tau \\ &\le
{ \int_{\tau_1}^{\tau_2}\intO{\Big(\frac 12\varrho_0|\vc{u}_0-\vc{u}_\infty|^2+H_\varepsilon(Z_0)\Big)}}{\rm d}\tau -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega} \pi_\varepsilon(Z){\rm div}_x\vc{u}_\infty \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho\vc{u}\cdot\nabla_x\vc{u}_\infty\cdot(\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau -\int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\tn S(\nabla_x\vc{u}_\infty):\nabla_x(\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau \\ &\quad - \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Gamma_{{\rm in}}} H_\varepsilon(Z_{B})\vc{u}_{B}\cdot\vc n \ {\rm d}S_x{\rm d}t{\rm d}\tau + \int_{\tau_1}^{\tau_2}\int_0^\tau\int_{\Omega}\varrho (\vc{w}-\vc{u})\cdot (\vc{u}-\vc{u}_\infty) \ {\rm d} x{\rm d}t{\rm d}\tau.
\end{split} \end{equation}
\subsection{Renormalized continuity equation}\label{ren_con_eq_delta}
In this section, following \cite{CHJN}, we generalize the DiPerna--Lions theory to the continuity equation with nonhomogeneous boundary data. Recall that due to \eqref{uweak} $\varrho \in L^2(0,T;L^\gamma(\Omega))$ and $\vc{u} \in L^2(0,T;W^{1,2}(\Omega))$, where, in particular, $\gamma>2$. Due to \cite[Lemma~3.1]{CHJN} we may formulate the following result: \begin{Lemma}\label{Renorm} Let $\Omega \subset R^d$, $d=2,3$, be a bounded domain of class $C^2$ such that $\Gamma_{\rm in}$ is an open $C^2$ $(d-1)$-dimensional manifold. Let $\varrho_B$ and $\vc{u}_B$ satisfy assumptions \eqref{Ass1}. Let $\varrho \in L^2(0,T;L^\gamma(\Omega))$ with $\gamma >2$ and $\vc{u} \in L^2(0,T;W^{1,2}(\Omega))$
satisfy the continuity equation in a weak sense, as in \eqref{continuity}.
Then $(\varrho, \vc{u})$ is also a renormalized solution of the continuity equation \eqref{continuity}, namely it verifies \eqref{P3} with $\varrho$ instead of $Z$. \end{Lemma} For the detailed proof see \cite[Section~3.1]{CHJN}. Let us present here just a sketch of it. The main difficulty is to reconstruct the boundary term on $\Gamma_{\rm in}$. To this end, the set $\Omega\cup \Gamma_{\rm in}$ is extended by a properly chosen open (nonempty) set $\tilde{U}^+_h(\Gamma_{\rm in})$ lying in a neighborhood of $\Gamma_{\rm in}$ (one can think of it as a thin pillow attached to $\Gamma_{\rm in}$ outside of $\Omega$). It is defined as follows: \begin{equation}\label{setU} \tilde{U}^+_h(\Gamma_{\rm in}) =
\{ x \in {U}^+_h(\Gamma_{\rm in}) \,|\, x=\vc{X}(s,\vc{x}_0) \mbox{ for a certain }\vc{x}_0 \in\Gamma_{\rm in } \mbox{ and } 0<s<h \}
\end{equation} where
$${U}^+_h(\Gamma_{\rm in}) := \{ \vc{x}_0 + z \vc{n}(\vc{x}_0) \, | \, 0<z<h, \ \vc{x}_0 \in\Gamma_{\rm in}\} \cap (\mathbb{R}^d \setminus \Omega)$$ and $$\vc{X}'(s,\vc{x}_0) = - \tilde\vc{u}_B (\vc{X}(s,\vc{x}_0)),\ \vc{X}(0) = \vc{x}_0 \in {U}^+_h(\Gamma_{\rm in})\cup\Gamma_{\rm in}, \quad \vc{X}(s,\vc{x}_0) \in {U}^+_h(\Gamma_{\rm in}) \mbox{ for } s>0, $$ with $$\tilde\vc{u}_B(x) = \vc{u}_B(\vc{x}_0), \ x = \vc{x}_0 + z \vc{n}(\vc{x}_0) \in {U}^+_h(\Gamma_{\rm in}).$$ The set $\tilde{U}^+_h(\Gamma_{\rm in})$ is nonempty and open; for details see \cite[Section~3.1]{CHJN}. Here the regularity of $\Gamma_{\rm in}$ is used. Moreover, proper extensions $\tilde\vc{u}_B$ and $\tilde\varrho_B$ of $\vc{u}_B$ and $\varrho_B$ on $\tilde{U}^+_h(\Gamma_{\rm in})$ are constructed such that $\tilde\vc{u}_B \in C^1(\tilde{U}^+_h(\overline{\Gamma_{\rm in}}))$, $\tilde\varrho_B \in W^{1,\infty}(\tilde{U}^+_h({\Gamma_{\rm in}}))$, and
\begin{equation}\label{ggg}
{\rm div}_x (\tilde\varrho_B \tilde\vc{u}_B ) = 0 \mbox{ in } \tilde{U}^+_h(\overline{\Gamma_{\rm in}}), \quad \tilde\varrho_B|_{\Gamma_{\rm in}} = \varrho_B, \ \tilde\vc{u}_B|_{\Gamma_{\rm in}} = \vc{u}_B.
\end{equation} Then the extension of $(\varrho,\vc{u})$ to the set $\Omega_h:=\Omega\cup\Gamma_{\rm in} \cup \tilde{U}^+_h({\Gamma_{\rm in}})$, where $(\varrho,\vc{u})(t,x) = (\tilde\varrho_B,\tilde\vc{u}_B)$ on $\tilde{U}^+_h({\Gamma_{\rm in}})$, satisfies the continuity equation in the sense of distributions on $\Omega_h$, with $\varrho \in L^2(0,T;L^2(\Omega_h))$ and $\vc{u} \in L^2(0,T; W^{1,2}(\Omega_h))$. The classical DiPerna--Lions arguments combined with the Friedrichs commutator lemma then provide that \bFormula{rcebis} \begin{split} &\int_{\Omega_h} b(\varrho)(\tau,\cdot)\varphi(\tau,\cdot) \,{\rm d} {x} - \int_{\Omega_h} b(\varrho_0)(\cdot)\varphi(0,\cdot) \,{\rm d} {x} \\ &= \int_0^\tau\int_{\Omega_h} \Big(b(\varrho)\partial_t\varphi + b(\varrho )\vc{u} \cdot \nabla_x \varphi+(b(\varrho)-b'(\varrho)\varrho){\rm div}_x\vc{u}\Big) \dx\dt
\end{split} \end{equation} holds for any $b\in C[0,\infty)$ with $b'\in C_c[0,\infty)$, $\tau\in [0,T]$, and $\varphi \in C_c^1([0,T]\times{\Omega_h})$.
In order to recover the boundary term $\int_0^\tau\int_{\Gamma_{\rm in}} b(\varrho_B) \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t$ we write $$\int_{\Omega_h} b(\varrho )\vc{u} \cdot \nabla_x \varphi \,{\rm d} {x} = \int_{\Omega} b(\varrho )\vc{u} \cdot \nabla_x \varphi \,{\rm d} {x} + \int_{\tilde{U}^+_h({\Gamma_{\rm in}})} b(\varrho )\vc{u} \cdot \nabla_x \varphi \,{\rm d} {x} . $$ By \eqref{ggg} and integration by parts $$\int_{\tilde{U}^+_h({\Gamma_{\rm in}})} b(\varrho )\vc{u} \cdot \nabla_x \varphi \,{\rm d} {x} = \int_{{\Gamma_{\rm in}}} b(\varrho_B )\vc{u}_B \cdot \vc{n} \varphi \,{\rm d} S_x + \int_{\tilde{U}^+_h({\Gamma_{\rm in}})} \big(\tilde\varrho_B b'(\tilde\varrho_B ) - b(\tilde\varrho_B)\big)\varphi\,{\rm div}_x \tilde\vc{u}_B \,{\rm d} {x}. $$ Inserting the two above identities into \eqref{rcebis}, letting $h\to 0$, and recalling the regularity of $(\tilde\varrho_B,\tilde\vc{u}_B)$, we obtain the desired conclusion of Lemma~\ref{Renorm}.
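The integration by parts above relies on the following consequence of \eqref{ggg}: since ${\rm div}_x(\tilde\varrho_B\tilde\vc{u}_B)=0$, we have $\tilde\vc{u}_B\cdot\nabla_x\tilde\varrho_B=-\tilde\varrho_B\,{\rm div}_x\tilde\vc{u}_B$, whence
$$ {\rm div}_x\big(b(\tilde\varrho_B)\tilde\vc{u}_B\big)=b'(\tilde\varrho_B)\,\tilde\vc{u}_B\cdot\nabla_x\tilde\varrho_B+b(\tilde\varrho_B)\,{\rm div}_x\tilde\vc{u}_B=\big(b(\tilde\varrho_B)-b'(\tilde\varrho_B)\tilde\varrho_B\big)\,{\rm div}_x\tilde\vc{u}_B. $$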
Furthermore, since \eqref{continuity} is satisfied also with $Z$ instead of $\varrho$, due to \eqref{uweak} and the arguments of Lemma~\ref{Renorm}, $(Z,\vc{u})$ satisfies \eqref{P3}. This finishes the proof of Theorem~\ref{TM1!}. $\Box$
\subsection{Recovery of the system in terms of $(\varrho,\vc{u},\vr^*)$}\label{Sec:Recovery}
Our aim now is to prove that the solution $(\varrho,\vc{u}, Z)$ can be identified with the solution $(\varrho,\vc{u},\vr^*)$ to the problem \eqref{main}. Namely, we need to show the existence of $\varrho^*$ satisfying Definition~\ref{def_in_vrs}. To this end we will use a combination of arguments from \cite[Section~3.1]{CHJN} and from \cite[Section~4]{DeMiZa}. First note that since $Z_0>0$ we have $$\frac{\varrho_0}{Z_0}= \varrho^*_0.$$ Moreover, recall that we already know that $\varrho$ and $Z$ satisfy the renormalized continuity equations.
As in the previous section, let us construct the set
$\tilde{U}^+_h(\Gamma_{\rm in})$ and extend the continuity equations for $(\varrho,\vc{u})$ and $(Z,\vc{u})$ to
$\Omega_h:=\Omega\cup\Gamma_{\rm in} \cup \tilde{U}^+_h({\Gamma_{\rm in}})$. In particular, the extensions $\tilde\vc{u}_B$, $\tilde\varrho_B$, and $\tilde{Z}_B$ of $\vc{u}_B$, $\varrho_B$, and ${Z}_B$ on $\tilde{U}^+_h(\Gamma_{\rm in})$ are such that $\tilde\vc{u}_B \in C^1(\tilde{U}^+_h(\overline{\Gamma_{\rm in}}))$, $\tilde\varrho_B, \tilde{Z}_B \in W^{1,\infty}(\tilde{U}^+_h({\Gamma_{\rm in}}))$, and
\begin{equation}\label{ggg2}
\begin{split}
& {\rm div}_x (\tilde\varrho_B \tilde\vc{u}_B ) = 0 \mbox{ in } \tilde{U}^+_h(\overline{\Gamma_{\rm in}}), \quad \tilde\varrho_B|_{\Gamma_{\rm in}} = \varrho_B, \ \tilde\vc{u}_B|_{\Gamma_{\rm in}} = \vc{u}_B,\\
&
{\rm div}_x (\tilde{Z}_B \tilde\vc{u}_B ) = 0 \mbox{ in } \tilde{U}^+_h(\overline{\Gamma_{\rm in}}), \quad \tilde{Z}_B|_{\Gamma_{\rm in}} = Z_B.
\end{split}
\end{equation} Then the extensions of $(\varrho,\vc{u})$ and $(Z,\vc{u})$ to the set $\Omega_h:=\Omega\cup\Gamma_{\rm in} \cup \tilde{U}^+_h({\Gamma_{\rm in}})$, where $(\varrho,\vc{u})(t,x) = (\tilde\varrho_B,\tilde\vc{u}_B)$ and $(Z,\vc{u})(t,x) = (\tilde{Z}_B,\tilde\vc{u}_B)$ on $\tilde{U}^+_h({\Gamma_{\rm in}})$, satisfy the continuity equations in the sense of distributions on $\Omega_h$.
Applying convolution with a standard family of regularizing kernels we obtain the regularized functions $[\varrho]_\omega$, $[Z]_\omega$ which satisfy \begin{equation}\label{reg_1omega}
\partial_t [\varrho]_\omega + {\rm div}_x([\varrho]_\omega\vc{u}) = R^1_\omega \mbox{ a.e. in }(0,T)\times\Omega_{\omega,h} \end{equation} \begin{equation}\label{reg_2omega}
\partial_t [Z]_\omega + {\rm div}_x([Z]_\omega\vc{u}) = R^2_\omega \mbox{ a.e. in }(0,T)\times\Omega_{\omega,h} \end{equation} where $$ \Omega_{\omega,h}: =
\{ x \in \Omega_h \, | \, {\rm dist}(x,\partial \Omega_h)>\omega \}. $$ Due to the Friedrichs commutator lemma, see e.g. \cite[Lemma~10.12]{FEINOV}, we find that $$ R^1_\omega \to 0 \mbox{ and } R^2_\omega \to 0 \mbox{ in } L^1_{\rm loc}((0,T)\times \Omega_h) \mbox{ as } \omega\to 0. $$ Let us now multiply \eqref{reg_1omega} by $\frac{1}{[Z]_\omega+\lambda}$, and \eqref{reg_2omega} by $- \frac{[\varrho]_\omega + \lambda \varrho^*_0}{([Z]_\omega+\lambda)^2}$, with $\lambda>0$. Then after some algebraic manipulations we find that \begin{equation*} \begin{split} \partial_t \left( \frac{[\varrho]_\omega + \lambda \varrho^*_0}{[Z]_\omega+\lambda} \right) + {\rm div}_x \left( \left( \frac{[\varrho]_\omega + \lambda\varrho^*_0}{[Z]_\omega+\lambda} \right) \vc{u} \right) & - \left( \frac{([\varrho]_\omega + \lambda\varrho^*_0)[Z]_\omega}{([Z]_\omega+\lambda)^2} + \frac{\lambda \varrho^*_0 }{[Z]_\omega+\lambda} \right) {\rm div}_x \vc{u} \\ & = R^1_\omega \frac{1}{[Z]_\omega+\lambda} - R^2_\omega \frac{[\varrho]_\omega + \lambda \varrho^*_0}{([Z]_\omega+\lambda)^2} \quad \mbox{ a.e. in }(0,T)\times\Omega_{\omega,h}. \end{split} \end{equation*} Testing the above identity with $\varphi \in C_c^1([0,T]\times{\Omega_h})$, after integration by parts and passing to the limit $\omega \to 0$, we obtain that \bFormula{rcebiss} \begin{split} &\int_{\Omega_h} \left( \frac{\varrho + \lambda \varrho^*_0}{Z+\lambda} \right) (\tau,\cdot)\varphi(\tau,\cdot) \,{\rm d} {x} - \int_{\Omega_h} \left( \frac{\varrho_0 + \lambda \varrho^*_0}{Z_0+\lambda} \right) (\cdot)\varphi(0,\cdot) \,{\rm d} {x} \\ &= \int_0^\tau\int_{\Omega_h} \Big(\left( \frac{\varrho + \lambda \varrho^*_0}{Z+\lambda} \right)\partial_t\varphi + \left( \frac{\varrho + \lambda\varrho^*_0}{Z+\lambda} \right) \vc{u} \cdot \nabla_x \varphi+ \left( \frac{(\varrho + \lambda\varrho^*_0)Z}{(Z+\lambda)^2} + \frac{\lambda \varrho^*_0 }{Z+\lambda} \right) {\rm div}_x\vc{u}\Big) \dx\dt
\end{split} \end{equation} for any $\varphi \in C_c^1([0,T]\times{\Omega_h})$.
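For the reader's convenience, the algebraic manipulations mentioned above amount to the following pointwise identity, where we write $a_\omega:=[\varrho]_\omega+\lambda\varrho^*_0$ and $z_\omega:=[Z]_\omega+\lambda$, and treat $\varrho^*_0$ as a constant, in accordance with the computation above:
$$ \partial_t\Big(\frac{a_\omega}{z_\omega}\Big)+{\rm div}_x\Big(\frac{a_\omega}{z_\omega}\,\vc{u}\Big) =\frac{1}{z_\omega}\Big(\partial_t[\varrho]_\omega+{\rm div}_x([\varrho]_\omega\vc{u})\Big) -\frac{a_\omega}{z_\omega^{2}}\Big(\partial_t[Z]_\omega+{\rm div}_x([Z]_\omega\vc{u})\Big) +\Big(\frac{a_\omega\,[Z]_\omega}{z_\omega^{2}}+\frac{\lambda\varrho^*_0}{z_\omega}\Big){\rm div}_x\vc{u}. $$
Substituting \eqref{reg_1omega} and \eqref{reg_2omega} for the two parentheses yields the a.e. identity preceding \eqref{rcebiss}.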
Next we distinguish two cases:\\ Case 1. For $Z=0$, due to \eqref{fs-} we notice that $\varrho = 0$ and therefore $\frac{\varrho + \lambda\varrho^*_0}{Z+\lambda} = \varrho^*_0$ and $ \frac{(\varrho + \lambda\varrho^*_0)Z}{(Z+\lambda)^2} + \frac{\lambda \varrho^*_0 }{Z+\lambda} = \varrho^*_0 $, so \eqref{rcebiss} becomes trivial.\\ Case 2. For $Z>0$, we find that $\frac{\varrho + \lambda\varrho^*_0}{Z+\lambda} \leq \max\{\varrho^*_0, \frac{1}{c_*}\}.$ Since $\varrho + \lambda\varrho_0^* $ converges strongly to $\varrho$ as $\lambda \to 0$, and $Z + \lambda $ converges strongly to $Z$ as $\lambda \to 0$, we can pass with $\lambda \to 0$ in \eqref{rcebiss} using Lebesgue's dominated convergence theorem, to obtain that \bFormula{rcebisss} \begin{split} &\int_{\Omega_h} \frac{\varrho}{Z}(\tau,\cdot)\varphi(\tau,\cdot) \,{\rm d} {x} - \int_{\Omega_h} \frac{\varrho_0}{Z_0}(\cdot)\varphi(0,\cdot) \,{\rm d} {x} \\ &= \int_0^\tau\int_{\Omega_h} \Big(\frac{\varrho}{Z}\partial_t\varphi + \frac{\varrho}{Z}\vc{u} \cdot \nabla_x \varphi+\frac{\varrho}{Z}{\rm div}_x\vc{u}\Big) \dx\dt
\end{split} \end{equation} holds for any $\varphi \in C_c^1([0,T]\times{\Omega_h})$. By the same steps as in the proof of Lemma~\ref{Renorm} we find that after passing with $h\to 0$ we obtain \bFormula{rcebix} \begin{split} &\int_{\Omega} \frac{\varrho}{Z}(\tau,\cdot)\varphi(\tau,\cdot) \,{\rm d} {x} - \int_{\Omega} \frac{\varrho_0}{Z_0}(\cdot)\varphi(0,\cdot) \,{\rm d} {x} \\ &= \int_0^\tau\int_{\Omega} \Big(\frac{\varrho}{Z}\partial_t\varphi + \frac{\varrho}{Z}\vc{u} \cdot \nabla_x \varphi+\frac{\varrho}{Z}{\rm div}_x\vc{u}\Big) \dx\dt
- \int_0^\tau\int_{\Gamma_{\rm in}} \frac{\varrho_B}{Z_B} \vc{u}_B \cdot \vc{n} \varphi \ {\rm d}S_x{\rm d} t. \end{split} \end{equation} Obviously, $\varrho^*$ defined as $\frac{\varrho}{Z}$ satisfies $\varrho^* \in \big[ \min \{ \frac{1}{c^*} , \varrho^*_0 \}, \max \{ \frac{1}{c^*} , \varrho^*_0 \} \big]$ a.e. in $(0,T)\times \Omega$, and thus $Z = \frac{\varrho}{\varrho^*}$ a.e. in $(0,T)\times \Omega$.
Finally we can conclude that Theorem~\ref{TM2} is proven. $\Box$
\section{Passage to the limit $\varepsilon\to 0$} \label{Sec:lim} The purpose of this section is to perform the limit $\varepsilon\to0$ in the auxiliary system (\ref{main_transformed}-\ref{ia4}) to prove Theorem \ref{TM3}. From now on, by $\{\varrho_\varepsilon, Z_\varepsilon,\vc{u}_\varepsilon\}_{\varepsilon>0}$ we denote the sequence of solutions obtained in the previous section.
\subsection{Convergence following from the uniform estimates} The energy inequality established in the previous section gives rise to the following estimates, uniform with respect to $\varepsilon$: \begin{equation} \label{un_ep} \begin{gathered}
\sup_{t\in[0,T]}\lr{\|\sqrt{\varrho_\varepsilon} \vc{u}_\varepsilon (t)\|_{L^2(\Omega)}
+\left\|H(Z_\varepsilon)(t) \right\|_{L^1(\Omega)} }\leq C,\\
\intT{{\|\vc{u}_\varepsilon\|_{W^{1,2}(\Omega,\mathbb{R}^3)}^2}} \leq C. \end{gathered} \end{equation} From here and from the construction we also deduce that \eq{\label{Zr_ep} 0\leq Z_\varepsilon<1,\quad \mbox{a.e. in } Q_T,\quad 0\leq c_\star \varrho_\varepsilon\leq Z_\varepsilon\leq c^\star\varrho_\varepsilon,} in particular, both sequences $Z_\varepsilon$, $\varrho_\varepsilon$ are uniformly bounded in $L^p(Q_T)$ for any $p\in[1,\infty]$. Therefore, up to a subsequence, we have \eq{\label{conv_ep}
&\vc{u}_\varepsilon \rightarrow \vc{u} \qquad \text{weakly in } L^2(0,T;W^{1,2}(\Omega, \mathbb{R}^3)),\\ &Z_\varepsilon\rightarrow Z \qquad \text{in }\ C_{\rm weak}([0,T];L^{p}(\Omega)),\\ &\varrho_\varepsilon\rightarrow \varrho \qquad \text{ in }\ C_{\rm weak}([0,T];L^{p}(\Omega)), } for any finite $p$. In addition, the limit functions $Z$ and $\varrho$ satisfy \eq{\label{Zr_lim} 0\leq Z\leq1,\quad \mbox{a.e. in } Q_T,\quad 0\leq c_\star \varrho\leq Z\leq c^\star\varrho.}
To obtain the uniform $L^1$ bound for the pressure we can follow one of two strategies:
\noindent(i) for the zero flux, i.e. for $\int_{\partial \Omega} \vc{u}_B \cdot \vc{n} \ {\rm d}S_x = 0$, we use our smallness assumption \eq{
\intO{Z_0}+T\int_{\Gamma_{\rm in}}Z_B|\vc{u}_B\cdot\vc{n}|\ {\rm d}S_x<|\Omega|, } to see that it implies the condition \eqref{lambda} at the level of the uniform-in-$\delta$ estimates, and we repeat the argument from the previous section.\\
\noindent(ii) for the positive flux, thanks to the energy estimate \eqref{ei!+} and the property \eqref{Lep}, we already control the $L^1$ norm of $\pi_\varepsilon$ uniformly on the subset ${\cal O}$. In order to control the pressure on the complementary part, we repeat the Bogovskii type estimate with the test function: \eq{\label{testB} \bm{\psi}=\eta(t){\cal B}\lr{\xi},} where \eq{ \xi=\mathbf{1}_{\Omega\setminus {\cal O}}
-\frac{|\Omega\setminus {\cal O}|}{|\Omega|},} so that $\intO{\xi}=0$ and the Bogovskii operator ${\cal B}$ is applicable. Note that since ${\cal O}$ is an open subset of $\Omega$, the test function $\bm{\psi}$ is well defined and belongs to $W^{1,p}_0(\Omega)$ for any $p<\infty$, so it is an admissible test function.
Similarly as in the previous limit passage, we can in fact show that the local-in-time pressure bound holds true with $\eta \equiv 1$. However, we no longer have the equi-integrability of the pressure, which was available at the previous level of approximation. Therefore, convergence in the sense of measures is the most we can hope for; we have \eq{\label{lim_pi_ep} &\pi_{\varepsilon}(Z_\varepsilon) \rightarrow\pi \quad\text{weakly\ in }\quad {\cal M}^+([0,T]\times K),\\ &Z_\varepsilon \pi_{\varepsilon}(Z_\varepsilon) \rightarrow\pi_1 \quad\text{weakly\ in }\quad {\cal M}^+([0,T]\times K). } The limiting momentum equation therefore reads
$$ \partial_t (\varrho\vc{u}) + {\rm div}_x (\varrho\vc{u} \otimes \vc{u}) + \nabla_x\pi -{\rm div}_x\mathbb{S}(\nabla_x \vc{u}) = \varrho(\vc{w}-\vc{u}),$$ in the sense of distributions.
Already at this point we can identify the second limit in \eqref{lim_pi_ep} using the explicit form of the pressure \eqref{i5a}. We have \eq{ Z_\varepsilon\pi_\varepsilon(Z_\varepsilon)=\pi_\varepsilon(Z_\varepsilon)-\varepsilon\frac{1}{(1-Z_\varepsilon)^{\beta-1}}, } thus letting $\varepsilon\to0$ and observing that, by \eqref{un_ep}, the last term converges to zero strongly, we obtain the relation \eq{\label{step1} \pi_1 =\pi } in the sense of distributions. What remains to be shown in order to deduce the constraint $(1-Z)\pi=0$ is that $\pi_1=Z\pi$ in some sense (note that on the l.h.s. we multiply a measure by an $L^\infty$ function only). The recovery of the constraint requires stronger information about the convergence of $Z_\varepsilon$, and additional information about the regularity of $\pi$ and $Z$.
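For the reader's convenience, note that the identity above is immediate from the explicit form of the pressure: with $\pi_\varepsilon(Z)=\varepsilon(1-Z)^{-\beta}$, which is the form consistent with \eqref{i5a} as used here,
$$ Z\,\pi_\varepsilon(Z)=\pi_\varepsilon(Z)-(1-Z)\,\pi_\varepsilon(Z)=\pi_\varepsilon(Z)-\frac{\varepsilon}{(1-Z)^{\beta-1}}. $$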
\subsection{Strong convergence of $Z_\varepsilon$} We first need to show that a variant of the effective viscous flux equality holds. It can be derived by testing the approximate momentum equation with the inverse divergence operator $\psi\eta\nabla_x\Delta^{-1}[1_\Omega Z_\varepsilon]$, testing the limiting momentum equation with $\psi\eta\nabla_x\Delta^{-1}[1_\Omega Z]$, and comparing the limits, for any $ \psi \in C_c^\infty((0,T)) $ and $\eta \in C_c^\infty(\Omega)$. Note that already at this stage we need to justify what the product $\pi Z$ means, i.e., whether $\eta\psi\nabla_x\Delta^{-1}[1_\Omega Z]$ is regular enough to be used as a test function in the limiting momentum equation. To justify this step we first write the weak formulation of the limiting momentum equation \eq{\label{pixi} \langle\pi,{\rm div}_x\boldsymbol{\xi}\rangle_{({\cal M}(Q_T),C(Q_T))} &=\intTO{\mathbb{S}(\nabla_x \vc{u}):\nabla_x\boldsymbol{\xi}} -\intTO{\varrho\vc{u}\cdot\partial_{t}\boldsymbol{\xi}}\\ &\quad -\intTO{\varrho\vc{u}\otimes\vc{u}:\nabla_x\boldsymbol{\xi}} -\intTO{\varrho(\vc{w}-\vc{u})\cdot \boldsymbol{\xi}}} that is satisfied for all $\boldsymbol{\xi}\in C^1_c(Q_T)$. From now on we will treat this formula as the definition of $\intTO{\pi{\rm div}_x\boldsymbol{\xi}}$. Let us check that the r.h.s. of \eqref{pixi} makes sense for $\boldsymbol{\xi}$ from a much wider class; it is enough that \eqh{ \nabla_x \boldsymbol{\xi}\in L^2(0,T; L^2(\Omega;R^{d\times d})),\quad\nabla_x\boldsymbol{\xi}\in L^{5/2}(0,T; L^{5/2}(\Omega;R^{d\times d})),\quad \partial_{t}\boldsymbol{\xi}\in L^1(0,T; L^2(\Omega;R^d)). } The second property is a consequence of the simple interpolation inequality
$$\|\varrho|\vc{u}|^2\|_{L^{5/3}(Q_T)}\leq
\|\varrho|\vc{u}|^2\|_{L^\infty(0,T; L^1(\Omega))}^{2/5}\|\varrho|\vc{u}|^2\|_{L^1(0,T; L^3(\Omega))}^{3/5}. $$ So, if we take \eq{\label{class_xi} \boldsymbol{\xi}\in W^{1,5/2}_0(Q_T;R^d),}
the r.h.s. of \eqref{pixi} will be well defined. Note that the $C^1_c(Q_T)$ functions are dense in $W^{1,5/2}_0(Q_T)$.
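For the record, the exponents in the interpolation inequality above follow from routine H\"older bookkeeping with interpolation weight $\theta=2/5$:
```latex
% Exponent check for
%   \|\varrho|\vc{u}|^2\|_{L^{5/3}(Q_T)} \le
%   \|\varrho|\vc{u}|^2\|_{L^\infty(0,T;L^1(\Omega))}^{2/5}\,
%   \|\varrho|\vc{u}|^2\|_{L^1(0,T;L^3(\Omega))}^{3/5}.
\[
\underbrace{\tfrac{3}{5}}_{1/(5/3)}
\;=\;
\underbrace{\tfrac{2}{5}\cdot 1+\tfrac{3}{5}\cdot\tfrac{1}{3}}_{\text{space: }L^1,\ L^3}
\;=\;
\underbrace{\tfrac{2}{5}\cdot 0+\tfrac{3}{5}\cdot 1}_{\text{time: }L^\infty,\ L^1}.
\]
```
Both the spatial and the temporal exponents match $1/(5/3)=3/5$, so the inequality is consistent.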
Let us check that $\boldsymbol{\xi}:=\psi\phi\nabla_x\Delta^{-1}[1_\Omega Z]$ has these properties. First, note that the Riesz operator $${\cal{A}}=\nabla_x\Delta^{-1}: \ L^p(R^d)\to D^{1,p}(R^d;R^d)$$ (homogeneous Sobolev space) is a continuous linear operator and we have that
$$\|\nabla_x{\cal{A}}[v]\|_{L^p(R^d;R^d)}\leq C(p)\|v\|_{L^p(R^d)},$$ for any $1<p<\infty$. Therefore
\eqh{\|\nabla_x\boldsymbol{\xi}\|_{L^\infty(0,T;L^p(R^d;R^{d\times d}))}&=
\|\nabla_x\lr{\psi\phi\nabla_x\Delta^{-1}[1_\Omega Z]}\|_{L^\infty(0,T;L^p(R^d;R^d))}\\
&\leq C(p,\psi,\phi) (1+\|1_\Omega Z\|_{L^\infty(0,T;L^p(R^d))})\leq C,} for any $p<\infty$. Next, using the continuity equation we get that \eqh{ \partial_{t}\boldsymbol{\xi}=\partial_{t}\lr{\psi\phi\nabla_x\Delta^{-1}[1_\Omega Z]}&=\phi\partial_{t}\psi\nabla_x\Delta^{-1}[1_\Omega Z]+\phi\psi\nabla_x\Delta^{-1}[1_\Omega \partial_{t} Z]\\ &=\phi\partial_{t}\psi\nabla_x\Delta^{-1}[1_\Omega Z]-\phi\psi\nabla_x\Delta^{-1}[ {\rm div}_x(1_\Omega Z\vc{u})]. } Using the properties of the double Riesz transform we obtain \eq{
\|\partial_{t}\boldsymbol{\xi}\|_{L^p(0,T;L^p(\Omega))}\leq C(p,\psi,\phi)(1+ \|Z\vc{u}\|_{L^p(0,T;L^p(\Omega))})\leq C, } for some $p>5/2$. Hence, we have shown that $\boldsymbol{\xi}=\psi\phi\nabla_x\Delta^{-1}[1_\Omega Z]$ indeed belongs to the class \eqref{class_xi}.
Now, note that $${\rm div}_x\boldsymbol{\xi}=\psi\nabla_x\phi\cdot\nabla_x\Delta^{-1}[1_\Omega Z]+\psi\phi Z,$$ so we can define \eq{ &\langle\pi,\phi\psi Z\rangle_{({\cal M}(Q_T),C(Q_T))}:=\langle\pi,{\rm div}_x\boldsymbol{\xi}\rangle_{({\cal M}(Q_T),C(Q_T))}- \langle\pi,\psi\nabla_x\phi\cdot\nabla_x\Delta^{-1}[1_\Omega Z]\rangle_{({\cal M}(Q_T),C(Q_T))}. } This means that $\langle\pi,\phi\psi Z\rangle_{({\cal M}(Q_T),C(Q_T))}$ is well defined iff $\langle\pi,\psi\nabla_x\phi\cdot\nabla_x\Delta^{-1}[1_\Omega Z]\rangle_{({\cal M}(Q_T),C(Q_T))}$
is well defined. For that we need $ \nabla_x\Delta^{-1}[1_\Omega Z]$ to be at least $C(Q_T)$, but this is true, as $Z\in C_{\rm weak}(0,T; L^p(\Omega))$ for sufficiently large $p$. In particular, for $p>d$, using the Morrey inequality, we get $ \nabla_x\Delta^{-1}[1_\Omega Z]\in C([0,T]\times \Ov{\Omega};R^d)$.
Taking the above into account, we have
\eq{ \label{evf_ep} &\lim_{\varepsilon \to 0^+} \intTO{\psi \phi \big( \pi_{\varepsilon}(Z_\varepsilon)-(\lambda+2\mu) {\rm div}_x \vc{u}_\varepsilon\big) Z_\varepsilon}\\ &= \langle\pi,\phi\psi Z\rangle_{({\cal M}(Q_T),C(Q_T))}-(\lambda+2\mu) \intTO{\psi \phi {\rm div}_x \vc{u} Z }. } From \eqref{evf_ep} it follows that \eq{\label{lim} &(\lambda+2\mu)\intTO{\psi\phi\lr{\Ov{Z{\rm div}_x\vc{u}}-Z{\rm div}_x\vc{u}}}\\ &=\langle\pi_1,\phi\psi \rangle_{({\cal M}(Q_T),C(Q_T))} -\langle\pi,\phi\psi Z\rangle_{({\cal M}(Q_T),C(Q_T))}\\ &=\langle\pi,\phi\psi (1-Z)\rangle_{({\cal M}(Q_T),C(Q_T))}\geq 0
} where we have used, successively, \eqref{step1} and the limit of \eqref{Zr_ep}. Since both pairs $(Z_\varepsilon,\vc{u}_\varepsilon)$ and $(Z,\vc{u})$ satisfy the renormalized continuity equation, we can use the renormalization in the form $b(z)=z\log z$ to justify that $$Z_\varepsilon\to Z\quad \text{strongly in } L^p((0,T)\times \Omega),\quad \forall p<\infty.$$ The proof is identical to the one for the limit passage $\delta\to 0$.
Note, however, that similarly to the previous section, this property is not transferred to the sequence $\varrho_\varepsilon$, for which we only have \eqref{conv_ep}. Nevertheless, using this information and formula \eqref{lim} we can justify that
$$\langle\pi,\phi\psi (1-Z)\rangle_{({\cal M}(Q_T),C(Q_T))}=0.$$
Note that at this stage we can repeat the arguments from Section \ref{Sec:Recovery} in order to come back to the solution in terms of $\varrho,\vc{u},\varrho^*$. Indeed, the proof is based only on the properties of the renormalized continuity equations for $\varrho$ and $Z$, on the boundedness of $Z$ and $\varrho$, and on the regularity of $\vc{u}$, which are the same as on the previous level of approximation.
Having this, the justification of condition \eqref{cond:u} amounts to a repetition of the proof of \cite[Lemma 4]{PeZa}; see also \cite[Lemma 2.1]{LM99}. The proof of Theorem \ref{TM3} is thus complete. $\Box$
\def\cprime{$'$} \def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0
\advance\dimen0 by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0
\hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\end{document}
arXiv:2003.00583 (quant-ph)
[Submitted on 1 Mar 2020 (v1), last revised 6 Oct 2021 (this version, v2)]
Title:Positivity and nonadditivity of quantum capacities using generalized erasure channels
Authors:Vikesh Siddhu, Robert B. Griffiths
Abstract: We consider various forms of a process, which we call {\em gluing}, for combining two or more complementary quantum channel pairs $(\mathcal{B},\mathcal{C})$ to form a composite. One type of gluing combines a perfect channel with a second channel to produce a \emph{generalized erasure channel} pair $(\mathcal{B}_g,\mathcal{C}_g)$. We consider two cases in which the second channel is (i) an amplitude-damping, or (ii) a phase-damping qubit channel; (ii) is the \emph{dephrasure channel} of Leditzky et al. For both (i) and (ii), $(\mathcal{B}_g,\mathcal{C}_g)$ depends on the damping parameter $0\leq p\leq 1$ and a parameter $0 \leq \lambda \leq 1$ that characterizes the gluing process. In both cases we study $Q^{(1)}(\mathcal{B}_g)$ and $Q^{(1)}(\mathcal{C}_g)$, where $Q^{(1)}$ is the channel coherent information, and determine the regions in the $(p,\lambda)$ plane where each is zero or positive, confirming previous results for (ii). A somewhat surprising result for which we lack any intuitive explanation is that $Q^{(1)}(\mathcal{C}_g)$ is zero for $\lambda \leq 1/2$ when $p=0$, but is strictly positive (though perhaps extremely small) for all values of $\lambda> 0$ when $p$ is positive by even the smallest amount. In addition we study the nonadditivity of $Q^{(1)}(\mathcal{B}_g)$ for two identical channels in parallel. It occurs in a well-defined region of the $(p,\lambda)$ plane in case (i). In case (ii) we have extended previous results for the dephrasure channel without, however, identifying the full range of $(p,\lambda)$ values where nonadditivity occurs. Again, an intuitive explanation is lacking.
Comments: 13 pages (two-column), 6 figures, v2 matches published version
Journal reference: IEEE Transactions on Information Theory, vol. 67, no. 7, pp. 4533 - 4545, July 2021
DOI: 10.1109/TIT.2021.3080819
Cite as: arXiv:2003.00583 [quant-ph]
(or arXiv:2003.00583v2 [quant-ph] for this version)
Structural basis for a complex I mutation that blocks pathological ROS production
Zhan Yin, Nils Burger, Duvaraka Kula-Alwar, Dunja Aksentijević, Hannah R. Bridges, Hiran A. Prag, Daniel N. Grba, Carlo Viscomi, Andrew M. James, Amin Mottahedin, Thomas Krieg, Michael P. Murphy & Judy Hirst
Nature Communications volume 12, Article number: 707 (2021)
Mitochondrial complex I is central to the pathological reactive oxygen species (ROS) production that underlies cardiac ischemia–reperfusion (IR) injury. ND6-P25L mice are homoplasmic for a disease-causing mtDNA point mutation encoding the P25L substitution in the ND6 subunit of complex I. The cryo-EM structure of ND6-P25L complex I revealed subtle structural changes that facilitate rapid conversion to the "deactive" state, usually formed only after prolonged inactivity. Despite its tendency to adopt the "deactive" state, the mutant complex is fully active for NADH oxidation, but cannot generate ROS by reverse electron transfer (RET). ND6-P25L mitochondria function normally, except for their lack of RET ROS production, and ND6-P25L mice are protected against cardiac IR injury in vivo. Thus, this single point mutation in complex I, which does not affect oxidative phosphorylation but renders the complex unable to catalyse RET, demonstrates the pathological role of ROS production by RET during IR injury.
Mitochondrial complex I catalyses the first step in the mammalian respiratory chain, electron transfer from NADH to CoQ (ubiquinone) coupled to proton transfer across the mitochondrial inner membrane, in order to generate the proton motive force (Δp) and drive ATP synthesis by oxidative phosphorylation. This asymmetric ~1 MDa complex comprises 45 protein subunits, encoded on both the mitochondrial and nuclear genomes, plus a flavin mononucleotide (FMN) and eight iron sulfur (FeS) centers to connect the active site for NADH oxidation from the matrix to the site of CoQ reduction from the membrane1,2,3 (Fig. 1A). Mutations to either the nuclear or mitochondrial (mtDNA) encoded subunits of complex I lead to devastating neuromuscular disorders that are typically associated with disrupted complex I catalysis and decreased mitochondrial ATP production4.
Fig. 1: Structure of complex I containing the ND6-P25L mutation.
A The NADH oxidation reaction is shown on the structure of ND6-P25L complex I (left). Complex I oxidizes NADH and reduces CoQ, and pumps four protons out of the mitochondrion to support the proton motive force (Δp). The reverse electron transfer (RET) reaction is shown on the structure of WT complex I (right). When Δp is large and the CoQ pool is highly reduced, electron and proton transfer at complex I are reversed; CoQ is oxidized and NAD+ reduced to NADH. Alternatively, the electrons are passed to O2 to form superoxide (O2−). The site of the mutation and ND6 subunit are shown, along with subunits NDUFA5 and NDUFA10. B The relative orientation of the NDUFA10 and NDUFA5 subunits on the hydrophilic and membrane arms, respectively (see A), defines ND6-P25L-CI to be in the deactive state. With the three structures superimposed by NDUFA10, the position of the three-helix bundle of NDUFA5 clearly differentiates the active and deactive states. C Comparison of the structures of the ND6 subunit in the WT-D and ND6-P25L complexes (left) and the WT-A and WT-D complexes (right). The gap between the top of TMHs 2 and 3 opens in ND6-P25L relative to WT-D, whereas the π-bulge is present in the WT-D and ND6-P25L complexes but not in WT-A. The asterisks mark the sidechain of ND6-Leu64 to visualize the rotation of the upper part of ND6 TMH3. Structures superimposed on the adjacent ND4L subunit. D Views from above TMHs 2 and 3 showing the rotation in the upper section of TMH3 (carrying Phe67 and Tyr69, top) that must occur to convert the WT-D and ND6-P25L complexes to the active state, and the decreasing (left to right) gap between them. Structures aligned to TMH2 (bottom). See also Supplementary Figs. 1–4, and Supplementary Tables 1 and 2.
The ND6 G14600A mtDNA mutation that leads to a proline to leucine substitution at position 25 in the ND6 subunit of complex I (ND6-P25L) was first discovered in a patient homoplasmic for the mutation who presented with Leigh syndrome and sensorineural deafness and died at 8 months of age5. The mutation was confirmed to cause an isolated complex I deficiency in muscle tissue and fibroblasts, and the effects were replicated in human transmitochondrial cybrids, which displayed hardly any complex I activity and drastically decreased amounts of the fully assembled enzyme5. However, a homoplasmic mouse model containing the corresponding ND6 G13997A mtDNA mutation, which also causes the ND6-P25L substitution, exhibited far milder effects and was presented as a model for Leber's Hereditary Optic Neuropathy6. Varying decreases in complex I-linked activity (by 20–60%) were reported in synaptosomes and heart and liver mitochondria, but neither the amount of the complex present in tissues, nor the ATP levels in synaptosomes appeared to be affected6,7. Intriguingly, although the mechanism remained unclear, mitochondria from the ND6-P25L variant exhibited a clear absence of reactive oxygen species (ROS) production by reverse electron transfer (RET) through complex I, driven by succinate oxidation6,7. In comparison, the ROS produced by NADH-linked substrates showed only small and variable increases compared to the wild-type enzyme.
When Δp is high and the CoQ pool is adequately reduced, the driving force for reverse proton transfer across the membrane is sufficient to drive electrons backwards through complex I, from reduced CoQ (CoQH2) to the flavin (Fig. 1A). This reversal of the normal electron transfer reaction causes a substantial increase in complex I-mediated production of superoxide, which is then dismutated to H2O2 by Mn-superoxide dismutase in the matrix8,9, and termed here "ROS production by RET". Mitochondrial ROS production by RET has been directly implicated in redox signaling in inflammation10, oxygen sensing11,12, activation of uncoupling by mitochondria in brown adipose tissue13,14, and in aging and the stress response in flies15,16. Furthermore, mitochondrial ROS production contributes to the tissue damage associated with cardiac ischemia–reperfusion (IR) injury17,18,19. However, much of the pathological and physiological significance of RET through complex I in vivo remains unclear, and the mechanism of ROS production by RET remains contentious. A variant of mammalian complex I that is unable to catalyse RET thus provides a wealth of opportunities for pathological, physiological and mechanistic insights.
Here, we began by determining the structure of the ND6-P25L variant of complex I from homoplasmic mice. Surprisingly, our structure showed the complex to be predominantly in the 'deactive' (D) state, a pronounced resting state that usually forms only very slowly, when the complex is not actively turning over, that can be rapidly reactivated to the 'active' (A) state by the addition of substrates20,21,22,23. In contrast, matching preparations of wild-type (WT) mouse complex I are predominantly in the A state, and require a prolonged warm incubation under non-turnover conditions for conversion to the D state22. Analyses of the structure revealed how subtle perturbations at the site of the mutation propagate through the protein to the CoQ-binding site, which is disordered in the D state, and kinetic analyses showed that the variant complex exhibits D-like characteristics even during turnover. Because the conditions for 'forward' catalysis (NADH, CoQ and low Δp) efficiently return the D state enzyme to A status, but those for reverse catalysis (NAD+, CoQH2 and high Δp) are unable to do so, the rate of NADH oxidation by the ND6-P25L variant closely matches that of WT, but the rate of RET by the variant is negligible. This structural and molecular basis for the lack of RET by the ND6-P25L variant of complex I in vitro prompted us to investigate the proposed pathological role of mitochondrial ROS production by RET in cardiac IR injury in vivo.
We also show that the ND6-P25L mice, which are unable to catalyze RET, are protected against cardiac IR injury in vivo, confirming the pathological role of mitochondrial ROS production by RET through complex I in heart attacks.
Structure of ND6-P25L complex I in the deactive state
To determine how the ND6-P25L mutation affects the structure of complex I, the complex (ND6-P25L-CI) was isolated from the heart mitochondria of ND6-P25L mice, and its structure determined by single-particle electron cryomicroscopy (cryo-EM). The catalytic (NADH:decylubiquinone oxidoreductase) activity of the preparation was 7.8 ± 0.4 (mean ± S.E.M., n = 3) μmol min−1 mg−1, relative to 10–12 μmol min−1 mg−1 for previously characterized preparations of WT mouse complex I (WT-CI)22, reflecting a modest loss of either specific activity or stability. The final cryo-EM density map reached a global resolution of 3.8 Å from 26,638 particles in the final reconstruction (Supplementary Figs. 1, 2, and Supplementary Table 1). The map revealed a fully intact complex matching WT-CI with all 45 subunits clearly present (Fig. 1A) and was described using a model containing 8063 residues (96% of the total, Supplementary Table 2) developed from models for the WT enzyme22.
Comparison of the map and model for ND6-P25L-CI with published maps and models for the A and D states of WT-CI showed clearly that it is in the D state. First, in WT-CI, the A and D complexes differ globally through the relative arrangement of their hydrophilic and membrane domains. The map-to-map correlation with the D state (EMDB-4356, 3.9 Å resolution) was 91–92%, relative to 78–82% with the A state (EMDB-11377, filtered to 3.9 Å resolution). Furthermore, the relative positions of two subunits, NDUFA10 on the membrane domain and NDUFA5 on the hydrophilic domain, provide a clear visual definition of the A/D transition, during which the two domains move relative to each other22, confirming ND6-P25L-CI is in the D state (Fig. 1B). Closer inspection of the ND6-P25L-CI model revealed all the known hallmarks of the D state2,22,23, particularly the π-bulge in TMH3 of the ND6 subunit (Fig. 1C and Supplementary Fig. 3) and extensive disorder of the loops in subunits ND1, ND3, and NDUFS2 that form parts of the CoQ-binding site (Supplementary Fig. 4). In stark contrast, WT-CI, prepared by the same method, is observed predominantly in the A state, with a 30-min, substrate-free incubation at 37 ˚C in vitro required for deactivation22.
To better compare the structures of ND6-P25L-CI and deactive WT-CI (WT-D), we reprocessed previously described cryo-EM data for the D state with updated software, obtaining a substantial improvement in resolution from 3.9 to 3.2 Å, and updated the model accordingly (Supplementary Figs. 1, 2, and Supplementary Tables 1, 2). The ND6-P25L-CI and WT-D structures match very closely overall (RMSD value 0.59 Å) and reveal only subtle structural differences in the vicinity of the mutation itself (Fig. 1C). The variant residue is located at the very start of ND6-TMH2 (on the matrix side of the membrane), pointing towards ND6-TMH3. In ND6-P25L-CI, TMH2 leans away from TMH3, relative to in WT-D (and WT-A) (Fig. 1C), increasing the distance between them (Fig. 1D and Supplementary Fig. 3). Most likely this occurs because of destabilizing steric interactions between the variant Leu sidechain on ND6-TMH2 and residues at the end of ND6-TMH3 and the start of the adjacent ND4L-TMH2, notably ND4L-Met27. As the P25L position sits closer to ND6-TMH3 in WT-A than WT-D (Fig. 1D), clashing particularly with ND6-Thr70, the A state of ND6-P25L-CI may be more strongly affected or destabilized than is visualized in our deactive structure.
ND6-TMH3 is important in the A/D transition as it contains a central π-bulge in the D state, but is α-helical throughout in the A state22. π-bulges are short sections of π-helix, less tightly wound than the standard α-helix. When the π-bulge is formed the upper (matrix) part of TMH3 rotates, relaxing the helical turn; conversely, destruction of the π-bulge during activation tightens the helical turn (Fig. 1C, D). Notably, the upper part of TMH3 contains two bulky aromatic residues, Phe67 and Tyr69, which adopt different rotational positions in the two states. Crucially, Phe67 transits through the TMH3–TMH2 interface, from one side to the other, as the upper part of TMH3 rotates (Fig. 1D and Supplementary Movie 1). Steric hindrance to the rotation, from these and other residues, likely explains why deactivation is so slow in the WT enzyme. Conversely, in ND6-P25L-CI, Leu25 pushes TMH2 away from the upper part of TMH3, decreasing the steric hindrance for the rotation, thereby accelerating deactivation by decreasing the energy barrier (Fig. 1D and Supplementary Movie 1). The ND6 TMH3–4 loop, which arches over the top of TMH2, also responds to deactivation (Fig. 1C). Comparison of structural data from the enzymes of different species has shown recently24 how formation of the π-bulge turns the top of ND6-TMH3 and rotates Phe67 away from the tight packing contact it makes with ND1, altering the conformation of ND1-TMH4, and relocating ND1-Tyr127 (at the top of ND1-TMH4) away from ND3-Cys39. As a result, the ND3-TMH1–2 loop that carries Cys39 is no longer anchored and becomes disordered, exposing the Cys to derivatization and NDUFS2 and NDUFS7 loops in the CoQ-binding site to the matrix. The resulting loss of structural integrity in the CoQ-binding site2,22,23 explains why the D state is catalytically inactive. 
Our structures thus reveal how the subtle ND6-P25L mutation destabilizes the CoQ-binding site, and indicate why spontaneous conversion of ND6-P25L-CI to the more stable D state happens far more rapidly than in WT-CI—consistent with the absence of the A state from our cryo-EM analyses of the mutant enzyme.
ND6-P25L complex I is active but with deactive-like characteristics
Comparison of membranes prepared from ND6-P25L heart mitochondria with those from WT mitochondria (Fig. 2A) revealed a small decrease in NADH:O2 oxidoreduction rate (the NADH:O2 reaction) through complexes I, III and IV, whereas the succinate:O2 reaction (through complexes II, III and IV) was unchanged. The catalytic rates did not exhibit any substantial "lag phases" upon the addition of substrates. Similar results were observed between ND6-P25L and WT heart mitochondria respiring on the NADH-linked substrates glutamate and malate, or on succinate (Fig. 2B). The NADH:O2 reaction rate was normalized to the amount of complex I flavin site present by comparing it to the NADH:APAD+ oxidoreduction rate (Fig. 2A), revealing that the decreased activity is due to a lower content of ND6-P25L-CI than WT-CI, not to its lower specific activity, in contrast to previous reports6,7. Therefore, despite being structurally characterized in the D state, ND6-P25L-CI switches quickly and efficiently into a catalytically-active state that is as competent for NADH oxidation as WT-CI.
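The content normalization described above is a simple ratio calculation. A minimal sketch follows; the function name and the illustrative relative rates (the ~83% and ~85% percentages quoted in the Fig. 2A legend) are our assumptions, not the authors' analysis code:

```python
# Normalizing a catalytic rate to enzyme content: the NADH:APAD+ rate
# reports the amount of complex I flavin site present, so dividing the
# variant's NADH:O2 rate (relative to WT) by its NADH:APAD+ rate
# (relative to WT) estimates the specific activity relative to WT.
def relative_specific_activity(nadh_o2_vs_wt: float, nadh_apad_vs_wt: float) -> float:
    """Specific activity of the variant relative to WT, corrected for
    the amount of complex I present (illustrative model)."""
    return nadh_o2_vs_wt / nadh_apad_vs_wt

# Illustrative inputs: ND6-P25L NADH:O2 rate ~83% of WT; complex I
# content (NADH:APAD+ assay) ~85% of WT.
print(f"{relative_specific_activity(0.83, 0.85):.0%}")  # ~98%
```

A ratio near 100% supports the conclusion that the lower membrane rate reflects lower enzyme content, not lower specific activity.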
Fig. 2: The active/deactive status and catalytic activity of ND6-P25L and WT complex I.
A The rates of NADH and succinate oxidation by complexes I–III–IV and II–III–IV, respectively, in mitochondrial membranes from WT and ND6-P25L (referred to as ND6) mouse heart tissue, and the rate of the complex I-specific NADH:APAD+ oxidoreduction reaction. The data are mean averages ± S.E.M. (n = 3, from three independent preparations each comprising four or five hearts) evaluated using an unpaired, two-tailed Student's t-test (**p < 0.01; p values are 0.0083, 0.5347, 0.0036, respectively). The decreased rate of NADH oxidation in ND6-P25L (83% of WT) is due to the lower amount of complex I present (85% of WT in the NADH:APAD+ assay). B The rates of O2 consumption by isolated heart mitochondria during respiration on glutamate/malate (0.5 mM) and upon subsequent addition of succinate (10 mM). Mitochondria were isolated from four hearts and pooled to provide a single test sample. The data are mean averages ± S.E.M. (glu/mal (glutamate/malate), n = 8, succinate, n = 4, technical replicates) evaluated using an unpaired, two-tailed Student's t-test (*p < 0.05; p values are 0.02 (glu/mal), 0.83 (succinate)). C The percentage of complex I in A-like states in ND6-P25L and WT membranes, either as-isolated or deactivated by incubation for 30 min at 37 °C in the absence of substrates, and before or after activation by addition of 1 mM NADH at room temperature for 10 s. The rates of NADH oxidation by samples treated with NEM were compared to the rates from NEM-free control samples to determine the proportion of A-like complex I present. The data are mean averages ± S.E.M. (n = 3, from three independent preparations each comprising four or five hearts), evaluated using a 2-way ANOVA test with Tukey's multiple comparisons correction (****p < 0.0001). D Catalytic activity assays of NADH oxidation by mitochondrial membranes from WT and ND6-P25L mouse hearts show linear rates of catalysis. 
The membranes are fully active prior to addition of 1 mM NEM at the start of the traces shown here. The NEM does not affect catalysis by WT-CI, but rapidly inhibits catalysis by ND6-P25L-CI. E Deactivation of complex I determined by differential labeling of the ND3-Cys39 peptide using light and heavy (13C2, 2-d2) labeled iodoacetamide, followed by LC-MS/MS analysis. The data are the percentage of ND3-Cys39 that is exposed to derivatization at the start of tissue homogenization for hearts from WT and ND6-P25L mice, following various times of ischemia. The data are mean averages ± S.E.M. from experiments on three independent hearts.
ND3-Cys39, on the ND3 TMH1–2 loop that becomes disordered in the D-state, is an important biochemical marker of the A/D status of complex I as it is exposed to thiol-derivatizing agents only in the D-state20. The susceptibility of ND3-Cys39 to alkylation by N-ethylmaleimide (NEM) was thus used to establish that, in the membrane preparations, ∼85% of WT-CI but only ∼10% of ND6-P25L-CI is NEM-insensitive, and therefore in an A-like state (Fig. 2C), consistent with the cryo-EM analyses. Subsequently, we attempted to observe ND6-P25L-CI in a predominantly NEM-insensitive, A-like state by applying pre-activation protocols. The complex was activated for 10 s by the addition of NADH at room temperature to induce turnover, then the membranes were treated with NEM on ice for 20 min, under which conditions both catalysis and the A–D transition are expected to occur very slowly. For ND6-P25L-CI the results revealed a mixture of states in which the A-like proportion of the population had increased, but still remained substantially below the levels observed starting from either WT-CI, or from WT-CI that had been pre-treated to deactivate it (Fig. 2C). Subsequently, it was found that in ND6-P25L-CI (but not WT-CI) ND3-Cys39 is exposed and able to react with NEM even when the complex is turning over and is thus in a catalytically-active state (Fig. 2D). These findings suggest that either: (i) while actively catalyzing NADH oxidation, ND6-P25L-CI leaves the catalytic cycle to transiently visit off-pathway states in which ND3-Cys39 is exposed; and/or (ii) the ND3-TMH1–2 loop is structurally altered and ND3-Cys39 exposed in one or more on-pathway states of ND6-P25L-CI during catalysis. In either case, the state(s) captured by NEM need not reflect full conversion of the enzyme to the structurally characterized, canonical D state. 
Notably, structures of complex I from the yeast Yarrowia lipolytica have revealed it to be in an 'intermediate' state, in which the corresponding π-bulge is present and ND3-Cys is exposed, but the CoQ-binding site remains relatively well-ordered24. This situation may resemble the states captured by NEM in ND6-P25L-CI.
To confirm that ND6-P25L-CI exhibits the same behavior in intact heart tissue, a semiquantitative differential cysteine labeling strategy, using light and heavy (13C2, 2-d2) iodoacetamide followed by LC-MS/MS-based detection of the labeled ND3-Cys39 tryptic peptide, was used to investigate the exposure of ND3-Cys39 in ischemic hearts (Fig. 2E). In WT hearts, the exposure of ND3-Cys39 increased gradually over time, reaching a plateau after 20 min ischemia, as described previously25. In contrast, in ND6-P25L hearts, ND3-Cys39 was already highly exposed in the first measurement, taken after the shortest period of ischemia possible. Although the level of exposure drifted upwards during further ischemia, even the initial level of Cys39 exposure for ND6-P25L hearts was comparable to that reached by WT hearts after 20–30 min ischemia. The much greater exposure of ND3-Cys39 in ND6-P25L-CI than WT-CI is thus clear within intact tissues as well as in the isolated complex.
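The differential-labeling readout in this paragraph reduces to a fraction of summed peptide-ion intensities. A hedged sketch of that arithmetic is below; the function name, inputs, and the assumption that the light label marks cysteines exposed at homogenization are illustrative, not the published quantification pipeline:

```python
# Differential cysteine labeling: thiols exposed at the start of tissue
# homogenization react with light iodoacetamide; previously buried
# thiols are labeled with heavy (13C2, 2-d2) iodoacetamide after
# denaturation. The exposed fraction is then light / (light + heavy),
# computed from LC-MS/MS peak intensities of the same tryptic peptide.
def fraction_exposed(light_intensity: float, heavy_intensity: float) -> float:
    """Fraction of ND3-Cys39 carrying the light label, i.e. the
    fraction exposed at homogenization (illustrative model)."""
    total = light_intensity + heavy_intensity
    if total == 0:
        raise ValueError("no signal for either labeled form")
    return light_intensity / total

print(fraction_exposed(4.2e6, 1.4e6))  # 0.75
```

Because the light and heavy forms of the peptide co-elute and ionize similarly, the intensity ratio is a reasonable semiquantitative proxy for the exposed fraction.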
ND6-P25L complex I is unable to catalyze ROS production by RET
ROS production by RET through complex I was investigated in isolated heart mitochondria by using succinate oxidation to reduce the CoQ pool and generate Δp8. While WT mitochondria exhibited a substantial rate of H2O2 production that was sensitive to the complex I inhibitor rotenone, only very low rates of rotenone-sensitive activity were observed in ND6-P25L mitochondria (Fig. 3A). The rotenone-insensitive rates, which are not due to RET by complex I, were similar in both cases, and all the rates were sensitive to FCCP, which abolishes Δp. As the rate of succinate oxidation in both ND6-P25L mitochondrial membranes and mitochondria is normal (Fig. 2A and B) the results suggest that ND6-P25L complex I is unable to catalyze H2O2 production by RET, as discussed previously6,7. In contrast, the H2O2 production from ND6-P25L mitochondria respiring on the substrates glutamate and malate, where complex I catalyzes NADH oxidation and not RET, was similar to that from WT mitochondria, whether complex I was inhibited or not (Fig. 3B). While NADH-linked H2O2 production is lower in mitochondrial membranes from the ND6-P25L variant treated with the complex I inhibitor piericidin A (Fig. 3C), the decrease is to the same extent as that in the NADH:O2 and NADH:APAD+ reactions (Fig. 2A). Following normalization for the amount of enzyme present, the specific activities for NADH-linked H2O2 production are the same (ND6-P25L relative to WT: 97.2 ± 3.8%, mean ± S.E.M., n = 3). Therefore, only complex I ROS production by RET is affected by the ND6-P25L mutation.
Fig. 3: ND6-P25L complex I does not catalyze reverse electron transfer.
Rates of production of H2O2 by isolated mitochondria from WT and ND6-P25L mouse hearts incubated with (A) succinate (10 mM) or (B) glu/mal (glutamate/malate; 10 mM of each). FCCP (5 µM) or rotenone (5 µM) were added as indicated. The data were obtained using a kinetic plate reader at room temperature and are mean averages ± S.E.M. (n = 3, where each replicate is from an independent mitochondrial preparation on a different heart) evaluated using a two-way ANOVA test with Tukey's multiple comparisons correction (****p < 0.0001). C Production of H2O2 by mitochondrial membranes isolated from WT and ND6-P25L mouse hearts incubated with NADH in the presence of piericidin A. The data are mean averages ± S.E.M. (n = 3, from three independent preparations each comprising four or five hearts) evaluated using an unpaired, two-tailed Student's t-test (**p = 0.0012). The decreased rate in ND6-P25L (83% of WT) is due to the lower amount of complex I present (85% of WT, Fig. 2A). D Redox status of the CoQ pool in isolated mitochondria from WT and ND6-P25L mouse hearts respiring on succinate. Rotenone (1 µM) was added as indicated. Data are reported for the predominant CoQ9 form. The data are mean averages ± S.E.M. (n = 6, where each replicate is from an independent mitochondrial preparation on a different heart). Statistical significance was assessed by a two-way ANOVA test with Tukey's multiple comparisons correction (**p < 0.01; p values for WT vs ND6 suc, 0.0011; ND6 suc vs ND6 + Rot, 0.228). E Membrane potentials calculated from the accumulation of [3H]-TPMP by isolated mitochondria from WT and ND6-P25L mouse hearts respiring on succinate. Rotenone (0.5 µM) was added as indicated. The data are mean averages ± S.E.M. (WT-succinate, n = 5, all others n = 6, where each replicate is from an independent mitochondrial preparation on a different heart). The differences were assessed by a two-way ANOVA test with Tukey's multiple comparisons correction. 
P values are 0.946 (Suc) and 0.996 (+Rot). F O2 consumption and H2O2 production by isolated heart mitochondria were measured at 37 °C in the O2K Oxygraph during respiration on glutamate/malate (0.5 mM). 10 mM succinate, 5 µM rotenone and 5 µM FCCP were added as indicated. A representative experiment is shown, typical of three replicates. See also Supplementary Fig. 5.
To confirm that the lack of succinate-driven H2O2 production in ND6-P25L mitochondria is due to an intrinsic difference in complex I activity and not to a secondary effect on the thermodynamics across complex I, the CoQ redox status26 and Δp were quantified under RET conditions. The CoQ pool was more reduced in ND6-P25L mitochondria than WT mitochondria (Fig. 3D); that this is due to the lack of CoQH2 oxidation by RET through ND6-P25L-CI was confirmed by using rotenone to block RET in WT-CI, which equalized the CoQ redox states (Fig. 3D). The CoQ pool in both WT and ND6-P25L mitochondria is highly reduced in the presence of succinate and rotenone (Fig. 3D) and no significant differences were identified in the mitochondrial membrane potential (Δψ, the major component of Δp)27 (Fig. 3E). Therefore, the lack of ROS production by RET in ND6-P25L mitochondria is not due to a decrease in either of its thermodynamic drivers, relative to their values in WT mitochondria.
ND6-P25L-CI appears normal in every way, except that it is unable to catalyze RET and has a greater propensity than WT-CI to adopt deactive or deactive-like states, suggesting that these two unusual characteristics are related. Both ND6-P25L-CI and WT-D-CI can be activated to catalyze NADH oxidation upon the provision of NADH and CoQ. However, it is currently unclear whether ND6-P25L-CI is unable to catalyze RET because it collapses rapidly into D-like states that cannot be reactivated under RET conditions, or because it is intrinsically unable to catalyze RET, even when starting from states that are fully competent for NADH oxidation. Therefore, to switch ND6-P25L-CI rapidly from actively catalyzing NADH oxidation to conditions favoring RET, we used glutamate/malate to establish NADH:O2 oxidoreduction in mitochondria, then added succinate (Fig. 3F, Supplementary Fig. 5). For WT mitochondria, succinate induced substantial increases in both O2 consumption and H2O2 production, consistent with an immediate switch to RET catalysis. In contrast, for ND6-P25L mitochondria addition of succinate led to a matching increase in O2 consumption, but the increase in H2O2 production was both much smaller and fully insensitive to rotenone. Therefore, in contrast to WT-CI, ND6-P25L-CI is unable to switch on or sustain RET catalysis, regardless of the state it is in when RET is initiated.
The ND6-P25L complex I mutation is protective against IR injury
Young adult mice homoplasmic for the ND6-P25L mutation are viable, they lack any gross phenotype and their complex I activity and ATP production are largely unimpaired6,7. Importantly, the hearts from ND6-P25L mice have been shown previously to be physiologically very similar to WT hearts by echocardiography7. Therefore, the lack of ROS production by RET through complex I in these mice defines an ideal model system to explore the postulated pathological role of ROS production by RET in IR injury17,18. During ischemia, lack of blood supply to the tissue causes O2 levels to drop, leading to a dramatic accumulation of mitochondrial succinate18,28. When the ischemic tissue is reperfused with oxygenated blood, for example upon release of an occluded coronary artery, the succinate is rapidly oxidized by the respiratory chain, driving RET through complex I, and the associated ROS formation is proposed to be a major cause of the tissue damage that occurs in heart attack, stroke, or organ transplantation17,18,28. If this model is correct, ND6-P25L mice should be protected against IR injury.
IR injury to the myocardium was compared in WT and ND6-P25L mice by occlusion of the left anterior descending (LAD) coronary artery, followed by reperfusion (Fig. 4A and B). Considerable damage to the heart was observed in WT mice but not in ND6-P25L mice, which are protected significantly against this form of cardiac IR injury (Fig. 4A and B). Importantly, the level of protection achieved against cardiac IR injury in the ND6-P25L mice was as great as that achieved by the most effective current therapeutic interventions such as ischemic preconditioning29. These findings are consistent with a central role for ROS production by complex I through RET in IR injury. During IR injury, RET at complex I is driven by succinate accumulation during ischemia and its subsequent oxidation upon reperfusion17,18, so we next assessed whether these processes were altered in the ND6-P25L mice. There were no significant differences in either normoxic or ischemic succinate levels (Fig. 4C), or CoQ redox state (Fig. 4D) between WT and ND6-P25L hearts. Furthermore, measurement of succinate levels in isolated Langendorff-perfused hearts exposed to global ischemia (Fig. 4E) demonstrated that the ischemic accumulation of succinate was unaffected by the ND6-P25L mutation. Importantly, the succinate levels also decreased very rapidly and to similar extents in WT and ND6-P25L hearts upon reperfusion (Fig. 4E), indicating rapid oxidation of the accumulated succinate in both cases. Finally, when the mitochondria-targeted probe MitoB18,30 was used to measure H2O2 production in the heart in vivo, a substantial increase in mitochondrial H2O2 production upon reperfusion of the ischemic WT hearts was observed, which was not seen upon reperfusion of ND6-P25L hearts (Fig. 4F). Therefore, ND6-P25L mice are protected against cardiac IR injury by the inability of their complex I to catalyze ROS production by RET upon the reperfusion of ischemic tissue.
Fig. 4: Homoplasmic ND6-P25L mice are protected against cardiac IR injury due to lack of ROS production by RET at complex I.
A Representative images of slices from hearts showing cardiac infarcts, indicated by the light colored tissue, after 30 min of ischemia due to LAD occlusion, followed by 2 h reperfusion. B Quantification of cardiac infarct size as a proportion of the risk area in mice that underwent IR injury as in A. The data are mean averages ± S.E.M. (n = 6) evaluated using an unpaired, two-tailed Student's t-test (****p < 0.0001, p = 6.4 ×10−5). C, D Accumulation of succinate (C) and CoQ (D) redox state during normoxia (N) and ischemia (IS) in heart tissue. Each heart was cut in half, one half frozen immediately and the other half exposed to ischemia for 30 min and then frozen. The data are mean averages ± S.E.M. (n = 4) assessed by a two-way ANOVA with Tukey's multiple comparisons correction. C P values are 0.677 (N, WT vs ND6) and 0.538 (IS, WT vs ND6). D P values are 0.999 (N, WT vs ND6) and 0.320 (IS, WT vs ND6). E WT and ND6-P25L mouse hearts were Langendorff perfused before being subjected to either 20 min global ischemia (IS) or 20 min ischemia and 6 min reperfusion (IR); succinate levels were measured by LC-MS/MS. The data are mean averages ± S.E.M. (n = 4) evaluated by a two-way ANOVA with Tukey's multiple comparisons correction. P values are 0.991 (IS, WT vs ND6) and 0.969 (IR, WT vs ND6). F Quantification of the MitoP/B ratio as a marker for mitochondrial H2O2 production. ND6-P25L and WT mice were injected with MitoB by the tail vein, and hearts exposed to either ischemia (IS, 25 min normoxia then 30 min ischemia); or ischemia and reperfusion (IR, 10 min normoxia then 30 min ischemia then 15 min reperfusion), total 55 min in each case. The data are mean averages ± S.E.M. (WT, n = 5; ND6, n = 8), evaluated using an unpaired, two-tailed Student's t-test (*p < 0.05). The WT and ND6 cohorts were analyzed in separate experiments. P values are 0.0108 (WT, IS vs IR) and 0.104 (ND6, IS vs IR).
Here we have described the structural and molecular effects of a single point mutation in an mtDNA-encoded subunit of mammalian complex I. We have determined how its subtle effects create a functionally distinct enzyme that leads to an altered mitochondrial physiology with a major impact on pathophysiology. Therefore, we provide a comprehensive, mechanistic description at every level, directly linking specific molecular changes to their pathophysiological consequences.
On the molecular level, ND6-P25L complex I functions normally for NADH oxidation but is incapable of catalyzing reverse electron transport. It is thus only able to catalyze in one direction, whereas WT complex I not only catalyzes in both directions, but is a thermodynamically reversible catalyst that operates efficiently in either direction in response to only the smallest driving force31,32. Importantly, we confirmed that the lack of ROS production by RET in the ND6-P25L strain is directly attributable to complex I itself, because the thermodynamic driving forces and substrates present in ND6-P25L mitochondria are sufficient to drive substantial levels of RET in WT mitochondria. The structure of the ND6-P25L complex indicates how replacing the Pro residue with a Leu pushes ND6 TMHs 2 and 3 apart, allowing the upper section of TMH3 to rotate more easily and facilitating the spontaneous transition into D-like states. In the canonical WT D state, the CoQ-binding site is disordered, and reactivation upon the provision of NADH and CoQ likely includes the substrate acting as a template to reform the site23. In contrast, reactivation of the D state under RET conditions (upon the provision of CoQH2 and Δp) has not been demonstrated, and occurs either very slowly or not at all. It is therefore possible that ND6-P25L-CI is unable to catalyze RET, even when starting from an A-like state, because (as also may occur during NADH oxidation) it continually visits off-pathway D-like states but (unlike during NADH oxidation) the RET conditions are unable to promptly recover these off-pathway states and return them to catalysis. Alternatively, the ND6-P25L complex may be intrinsically incapable of catalyzing RET (even though it remains fully competent in NADH oxidation). 
This may be because instability in the conformation of ND6-TMH3 propagates to the ND3-TMH1-2 loop (Cys39 is no longer securely anchored)24, compromising its ability to complete the conformational changes that have been proposed to be crucial for catalysis33, and/or creating subtle changes at the CoQ-binding site that render it unable to bind CoQH2 effectively.
The specific lack of mitochondrial ROS production by RET in the ND6-P25L mouse, caused by a single point mutation that leaves all other aspects of mitochondrial metabolism untouched, makes the ND6-P25L mouse model a powerful resource for investigating its physiological roles and consequences in vivo. Mitochondrial ROS production by RET has been proposed to act as a redox signal in a range of physiological scenarios including inflammation10,13, oxygen sensing11, thermogenesis14 and stress response15. In addressing the role of ROS production by RET in vivo, we focused here on the role of complex I in cardiac IR injury17,18,19. RET has been proposed to be central to the mitochondrial ROS production that initiates cardiac IR injury according to the following model: during ischemia the mitochondrial metabolite succinate accumulates, then upon reperfusion it is rapidly oxidized, reducing the CoQ pool and building Δp and thereby driving ROS production by RET through complex I17,18. Consistent with this model, the ND6-P25L mutant mouse is protected against cardiac IR injury; it exhibits no change in succinate metabolism, but a substantial decrease in mitochondrial ROS production. Previously, strategies to inhibit succinate accumulation and/or oxidation during IR injury18,28,34,35,36,37 have also afforded protection against cardiac IR injury, as has inhibition of complex I catalysis38,39,40, albeit by using inhibitors that act in both directions and thereby prevent recovery of normal activity following reperfusion. The ND6-P25L mouse model has thus allowed the precise role of complex I in IR injury to be defined and will now provide crucial opportunities to explore the consequences of RET and mitochondrial ROS production through RET in further physiological settings.
The physiological role and relevance of the deactive transition in complex I has long been debated, especially because much of the evidence for its existence originated from in vitro systems in which extensive incubations are required for the A–D transition20,21,25,41. However, we have shown here that a single point mutation greatly enhances the propensity of complex I to enter D-like states in vivo, bypassing the need for extensive incubation, and offering the opportunity to probe their physiological and functional relevance. Importantly, because we have shown that rapidly deactivating complex I is protective against IR injury, destabilizing A-like states relative to D-like states (in analogy to the ND6-P25L mutation) presents a further route for pharmacological intervention against IR injury. It has already been shown that locking WT complex I in the D form by derivatizing Cys39 protects against IR injury19,41,42,43, although this approach still relies on the slow spontaneous deactivation process. In addition, we note that metformin has been reported to preferentially bind the D state of complex I44, and is also protective against IR injury in vivo45,46, although whether the protection is specifically mediated by complex I remains unclear. Overall, strategies that promote complex I deactivation, to prevent RET but allow reactivation once the conditions for NADH oxidation are restored, are promising therapeutically.
Finally, although this is the first structure of complex I containing a mutation associated with pathology, the single reported human case homoplasmic for this mtDNA variant was subject to a devastating pathology that led to death at eight months. In contrast, the homoplasmic ND6-P25L mouse model used here exhibited only a very mild phenotype with no substantial defect in oxidative phosphorylation, suggesting that epistatic or environmental effects may have contributed in the single human case. Nevertheless, we have seen that this single amino acid substitution results in a major functional change in the mature complex, highlighting the fact that the propensity of the human enzyme to enter D-like states remains to be determined, and is likely to respond to subtle sequence variations. Thus, human mtDNA polymorphisms and mitochondrial haplotypes affecting this region of complex I could alter individual susceptibilities to RET-related pathologies, such as IR injury, or to inflammatory disorders associated with redox signaling by RET.
In conclusion, by a combination of structural, biochemical and in vivo experiments we have defined the effects of a single point mutation in a ~1 MDa respiratory complex that fundamentally alters its molecular and physiological function, with major implications for the role of complex I in health and disease.
Mouse strains
The ND6-P25L mouse strain6 was generously provided by Professor Douglas Wallace, University of Pennsylvania. ND6-P25L mice were bred and maintained in pathogen-free facilities with a 12 h:12 h light:dark cycle, a room temperature of 19 °C to 22 °C, relative humidity 55% ± 10%, and with ad lib food and water. All procedures were approved by the local ethics committees of the MRC Laboratory of Molecular Biology and the University of Cambridge and by the UK Home Office (PPL: P6C97520A). Wild-type (WT) mice (C57BL/6 J) were purchased from Charles River UK, Ltd (Margate, UK). As the mutant mice were backcrossed onto the C57BL/6 J background6, both the WT and ND6-P25L mice lack the mitochondrial NAD(P)H transhydrogenase47. Mice were used between 8 and 22 weeks of age and sacrificed by cervical dislocation. All procedures were carried out in accordance with the UK Animals (Scientific Procedures) Act, 1986 and the University of Cambridge Animal Welfare Policy.
Preparation of mitochondrial membranes from mouse hearts
Hearts were excised from WT or ND6-P25L mice and immersed immediately in ice-cold buffer containing 10 mM Tris-HCl (pH 7.4 at 4 °C), 75 mM sucrose, 225 mM sorbitol, 1 mM EGTA and 0.1% (w/v) fatty acid-free bovine serum albumin (BSA). All the following steps were carried out at 4 °C. Mitochondria were prepared as described previously48. The hearts were diced, washed, then homogenized in 10 mL buffer per gram of tissue by seven to ten strokes of a Potter–Elvehjem homogenizer fitted with a Teflon pestle at ~1000 rpm. The homogenate was centrifuged (1000 × g, 5 min), then the supernatant was recentrifuged (9000 × g, 10 min) to collect the crude mitochondria. The pellets were suspended in resuspension buffer (20 mM Tris-HCl, 1 mM EDTA, 10% glycerol, pH 7.4 at 4 °C) to a protein concentration of ~10 mg mL−1 and stored at −80 °C. Mitochondrial suspensions were thawed on ice, sonicated using a Q700 Sonicator (Qsonica, USA; 65% amplitude with three 5-s bursts of sonication interspersed by 30-s intervals on ice) and centrifuged at 75,000 × g for 1 h. The pellets containing the mitochondrial membranes were homogenized in resuspension buffer to ca. 5 mg mL−1 and stored at −80 °C.
Preparation of complex I from mitochondrial membranes
ND6-P25L complex I was prepared as described previously22. 3–4 mL of membrane suspension were solubilized by addition of 1% dodecyl-β-D-maltoside (DDM, Glycon, Germany), along with 0.005% phenylmethane sulfonyl fluoride (PMSF, Sigma-Aldrich, UK), and centrifuged (32,000 × g, 30 min). The supernatant was loaded onto a 1 mL Q-sepharose HP column (GE Healthcare, UK) pre-equilibrated in buffer A (20 mM Tris-HCl (pH 7.14 at 20 °C), 1 mM EDTA, 0.1% DDM, 10% (v/v) ethylene glycol, 0.005% asolectin (Avanti Polar Lipid, USA) and 0.005% CHAPS (Calbiochem, Merck, Germany)), and eluted by an increasing proportion of buffer B (buffer A + 1 M NaCl). Complex I eluted in ∼35% buffer B. The fractions containing complex I were collected, concentrated to 100 μL using a 100 kDa MWCO Vivaspin 500 concentrator (Sartorius, Germany), and eluted from a Superose 6 Increase 5/150GL size exclusion column (GE Healthcare, UK) in 20 mM Tris-HCl (pH 7.14 at 20 °C), 150 mM NaCl and 0.05% DDM. Protein concentrations were determined using the BCA assay (Fisher Scientific UK). The catalytic activity of the isolated enzyme was determined as the rate of NADH:decylubiquinone oxidoreduction, using 0.5 μg mL−1 complex I in 20 mM Tris-HCl (pH 7.5 at 32 °C), 0.075% (w/v) asolectin and 0.075% (w/v) CHAPS, and 100 μM decylubiquinone. The reaction was initiated with 100 μM NADH and measured at 340–380 nm (ε = 4.81 mM−1 cm−1).
Cryo-EM data collection
UltrAuFoil gold grids (R 0.6/1, Quantifoil Micro Tools GmbH, Germany) were prepared as described previously23. Briefly, they were glow discharged (20 mA, 90 s), incubated in a solution of 5 mM 11-mercaptoundecyl hexaethyleneglycol (SensoPath Technologies, USA) in ethanol49 for 48 h in an anaerobic glovebox, then washed with ethanol and air-dried just before use. Using an FEI Vitrobot Mark IV, 2.5 μL of ND6-P25L-CI solution (3.5 mg mL−1) was applied to each grid at 4 °C in 100% humidity and blotted for 9–10 s at blotting force setting –10, before the grid was frozen by plunging it into liquid ethane. The highest quality cryo-frozen grids were identified using a 200 keV Talos Arctica transmission electron microscope, then high-resolution data acquisition was performed on a 300 keV Titan Krios microscope fitted with a Gatan Quantum K2 Summit detector at the Cambridge University cryo-EM facility. A total of 1519 micrographs were collected for ND6-P25L-CI, each with 40 movie frames, using the FEI EPU software. The calibrated pixel size was 1.054 Å, the defocus range was −1.5 to −3.0 μm and the total electron dose was 50.0 electrons Å−2 over a total exposure time of 10 s for each micrograph. Following inspection, 1492 micrographs were retained for analysis.
Cryo-EM data processing for ND6-P25L
Cryo-EM data processing was carried out using RELION 3.050. 1492 micrographs were motion corrected using Motioncor2 with dose-weighting50,51 followed by contrast transfer function (CTF) estimation using Gctf-1.1852 with the amplitude contrast set to 8%. 42,622 particles were extracted following template-free automatic particle picking in RELION and manual curation. The particles were 2D classified, and crudely 3D classified with an angular sampling interval of 3.7°, resulting in 37,665 good particles. The particles were re-extracted and a second round of 3D classification using four classes to an angular sampling interval of 0.9° yielded two major classes which were indistinguishable and therefore combined (total of 25,629 particles). The particles were then subjected to CTF refinement, Bayesian polishing using all frames, and 3D auto-refinement using EMD-11810 lowpass filtered to 60 Å as an initial model. The final 3D auto-refinement used solvent-flattened FSCs and postprocessing was performed on the final map using a mask made from the deactive mouse model (PDB 7AK5) by the molmap command in UCSF Chimera53, extended by 3 pixels and with an added soft edge of 5 pixels in RELION. The map resolution was 3.82 Å based on the gold-standard FSC = 0.143 criterion. The map was further subjected to model-free global auto-sharpening using phenix.auto_sharpen with default parameters to improve map connectivity for model building and refinement.
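The gold-standard resolution estimate used throughout can be sketched as follows (illustrative code of ours with made-up FSC values; the actual calculation is performed by RELION postprocessing): the resolution is the reciprocal of the spatial frequency at which the half-map FSC first falls below 0.143.

```python
# Minimal sketch (hypothetical FSC values) of the gold-standard resolution
# estimate: resolution = 1/s at the spatial frequency s where the Fourier
# shell correlation first drops below 0.143.

def fsc_resolution(freqs, fsc, threshold=0.143):
    """Linearly interpolate the first threshold crossing.

    freqs -- spatial frequencies in 1/Angstrom, ascending
    fsc   -- FSC value per shell
    Returns the resolution in Angstrom.
    """
    for i in range(1, len(fsc)):
        if fsc[i] < threshold <= fsc[i - 1]:
            # interpolate the crossing frequency between shells i-1 and i
            f = freqs[i - 1] + (freqs[i] - freqs[i - 1]) * \
                (fsc[i - 1] - threshold) / (fsc[i - 1] - fsc[i])
            return 1.0 / f
    raise ValueError("FSC never crosses the threshold")

# toy curve: crosses 0.143 between 0.25 and 0.30 1/Angstrom
freqs = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
fsc   = [1.00, 0.98, 0.90, 0.60, 0.20, 0.10]
print(f"{fsc_resolution(freqs, fsc):.2f} Angstrom")
```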
Reprocessing of Cryo-EM data for deactive WT complex I
Initial data processing was performed using RELION-3.0, with final steps including CTF refinement performed in version 3.1. 1768 micrographs were motion corrected using the RELION implementation of Motioncorr, with and without dose-weighting, followed by CTF estimation on the non-dose-weighted micrographs using Gctf-1.1852 with the amplitude contrast set to 8%. 148,440 particles were extracted from dose-weighted micrographs following autopicking in RELION using 46 2D classes from a bovine complex I dataset as a template, followed by manual curation, and 80,603 particles were selected following 2D classification. The particles were then 3D refined using EMD-4356 lowpass filtered to 40 Å as an initial model, and 3D classified crudely to an angular sampling interval of 1.8° to yield 62,595 particles in good classes. The particles were then subjected to Bayesian polishing using all frames, and several iterative rounds of refinement and CTF refinement including beam tilt, trefoil and higher order aberration estimations54. A second round of 3D classification to an angular sampling interval of 1.8° yielded two classes of similar resolution and number of particles, giving a total of 50,184 particles for final refinement with solvent-flattened FSCs. A minor class containing 12,411 particles was refined to a resolution of 3.74 Å, and represents the remaining active-state population of the sample. Postprocessing was performed on the final deactive map using a mask made from the updated PDB model by the molmap command in UCSF Chimera53, extending by 3 pixels and adding a soft edge of 10 pixels in RELION. The map resolution was 3.17 Å based on the gold-standard FSC = 0.143 criterion. To aid model building in Coot, multibody refinement was performed with three bodies to improve density in regions of relatively poor resolution.
Complex I model building and analyses
Model building and refinement were performed using Coot 0.9-pre55 and Phenix-1.16-354956. The active mouse model (PDB 6ZR2) was first crudely fitted into the deactive map using the Chimera fit-in-map command and then rigid-body fitted subunit-by-subunit using Phenix. The model was then subjected to cycles of manual adjustment in Coot 0.9-pre using both the globally sharpened map and three multibody maps, and by phenix.real_space_refine with secondary structure restraints against the globally sharpened consensus map. The model, especially the clashscore, was improved by using ISOLDE at an intermediate stage57. The final Phenix-refined deactive model was fitted into the ND6-P25L Phenix auto-sharpened map using Chimera fit-in-map and the P25L mutation implemented, then it was subjected to cycles of manual and automated refinement as above. Model-to-map FSC curves were generated by creating a map at Nyquist frequency from the model using molmap in UCSF Chimera. The created map was compared to an unfiltered, unsharpened, masked experimental consensus map from RELION by using the Xmipp tool in SCIPION 1.258. Final model statistics were produced by Phenix-1.16-3549, MolProbity 4.459 and EMRinger60. The structures were analyzed using Coot, UCSF Chimera and PyMol61 and movies created in UCSF ChimeraX62.
Catalytic activity measurements on mitochondrial membranes
All assays were carried out in 10 mM Tris-SO4 (pH 7.4) and 250 mM sucrose at 32 °C in a SPECTRAmax PLUS384 spectrophotometer (Molecular Devices, UK). NADH:O2, succinate:O2 and NADH:APAD+ (3-acetylpyridineadenine dinucleotide) oxidoreduction were determined using 10 μg-protein mL−1 membranes supplemented by 1.5 μM horse heart cytochrome c and 15 μg mL−1 alamethicin, unless otherwise specified. The rate of oxidation of 200 μM NADH was monitored at 340–380 nm (ε = 4.81 mM−1 cm−1) and confirmed to be fully inhibitor sensitive by addition of 1 μM piericidin A. The rate of oxidation of 5 mM succinate was determined using a coupled assay system comprising 90 μg mL−1 fumarate hydratase (FumC) and 500 μg mL−1 oxaloacetate decarboxylating malic dehydrogenase (MaeB) with 2 mM MgSO4 and 2 mM K2SO463, by monitoring the coupled reduction of 2 mM NADP+ at 340–380 nm (ε = 4.81 mM−1 cm−1) and confirmed to be fully inhibitor sensitive by addition of 2 μM atpenin A5 (Santa Cruz Biotechnology, USA). NADH:APAD+ oxidoreduction was monitored using 500 μM APAD+ and 200 μM NADH in the presence of 1 μM piericidin A at 400–450 nm (ε = 3.16 mM−1 cm−1). The rate of NADH-driven H2O2 production was monitored by the horseradish peroxidase (HRP)-dependent oxidation of Amplex Red to resorufin at 557–620 nm (ε = 51.60 mM−1 cm−1)64, using 40 μg-protein mL−1 membranes with 2 U mL−1 HRP, 10 μM Amplex Red, 10 U mL−1 superoxide dismutase from bovine erythrocytes (SOD), 0.5 μM piericidin A and 30 μM NADH. Background rates measured in the presence of catalase and the absence of membranes were subtracted.
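As an illustration of how these absorbance slopes translate into rates via the Beer-Lambert law (our sketch, with a hypothetical slope; the extinction coefficient and protein concentration are as given above for NADH oxidation by membranes):

```python
# Sketch (ours, hypothetical slope) of converting an absorbance slope into a
# specific activity, using the NADH extinction coefficient given in the text
# (4.81 mM^-1 cm^-1 at 340-380 nm) and 10 ug mL^-1 membrane protein.

def specific_activity(slope_per_min, epsilon_mM_cm=4.81, path_cm=1.0,
                      protein_mg_per_ml=0.010):
    """Return activity in umol NADH min^-1 (mg protein)^-1.

    slope_per_min      -- absorbance change per minute (Delta A / min)
    epsilon_mM_cm      -- extinction coefficient (mM^-1 cm^-1)
    path_cm            -- optical path length (cm)
    protein_mg_per_ml  -- membrane protein concentration (mg mL^-1)
    """
    rate_mM_per_min = slope_per_min / (epsilon_mM_cm * path_cm)  # = umol mL^-1 min^-1
    return rate_mM_per_min / protein_mg_per_ml

print(f"{specific_activity(0.048):.2f} umol min^-1 mg^-1")
```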
Complex I was deactivated in membranes23 by incubating the membrane suspension (5 mg-protein mL−1) for 30 min at 37 °C in the absence of substrates. Complex I was pre-activated in membranes by diluting the membranes to 2 mg-protein mL−1 in assay buffer, adding 1 mM NADH and incubating at room temperature for 10 s. To determine the A/D ratio, the membranes were diluted to 2 mg-protein mL−1 if necessary, then 0.5 μL of 200 mM N-ethylmaleimide (NEM) in DMSO was added to one 50 μL aliquot, and 0.5 μL of DMSO added to a second control aliquot. The two aliquots were incubated on ice for 20 min and their rates of NADH oxidation determined as above. The A/D ratio was calculated by assuming that in the NEM-treated sample only the complex in the A-state is capable of turnover, whereas in the control sample both the A- and D-states are capable.
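The A/D calculation described above reduces to a simple ratio of the two measured rates (sketch with hypothetical rates):

```python
# Sketch (hypothetical rates) of the A/D ratio calculation described above:
# after NEM treatment only the A-state enzyme turns over, so the NEM-treated
# rate reports A, while the control aliquot reports A + D.

def ad_ratio(rate_nem: float, rate_control: float) -> float:
    """A/D ratio from NADH oxidation rates of NEM-treated vs control aliquots."""
    rate_d = rate_control - rate_nem   # turnover attributable to D-state enzyme
    if rate_d <= 0:
        raise ValueError("control rate must exceed the NEM-treated rate")
    return rate_nem / rate_d

# e.g. an NEM-treated aliquot running at 60% of the control rate gives A/D = 1.5
print(f"{ad_ratio(0.6, 1.0):.2f}")
```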
Isolation of intact mitochondria from heart tissue
Mitochondria for functional analysis were isolated, with all steps performed on ice, by homogenization of freshly retrieved mouse heart tissue (12–20-week-old male wild-type C57BL/6 J or ND6-P25L mice) using an all-glass Dounce homogenizer in STE buffer (250 mM sucrose, 5 mM Tris-Cl, 1 mM EGTA, pH 7.4 at 4 °C) supplemented with 0.1% (w/v) fatty acid-free BSA (STEB buffer). Homogenates were pelleted twice by centrifugation at 700 × g for 5 min (4 °C), then the supernatant collected and centrifuged at 5500 × g for 10 min (4 °C). The mitochondrial pellets were then resuspended in 1 mL STEB buffer and re-centrifuged at 5500 × g for 10 min (4 °C) prior to their final resuspension in 80 μL STE buffer per heart. Protein concentrations were measured using the bicinchoninic acid (BCA) assay with bovine serum albumin (BSA) as a standard.
Combined mitochondrial O2 consumption and ROS measurements
For combined measurement of O2 consumption and ROS production, a high-resolution O2K Oxygraph with attached fluorescence LED module (Oroboros Instruments, Austria) was used. Mitochondria (0.15 mg-protein/mL) were incubated in 2 mL KCl buffer (120 mM KCl, 10 mM HEPES, 1 mM EGTA, pH 7.2 at 37 °C) supplemented with 40 μg/mL superoxide dismutase (Cu,Zn SOD), 20 μg/mL horseradish peroxidase (HRP), 200 μg/mL fatty acid-free BSA and 12.5 μM Amplex Red, while stirring at 37 °C. Chambers were closed and respiration was started by addition of 0.5 mM glutamate and malate, followed by addition of 10 mM succinate after 5 min. The O2 concentration was calibrated assuming equilibration with air in the open chamber and taking zero oxygen as the point when all oxygen was depleted by mitochondria in the closed chamber. Resorufin fluorescence was measured via the O2K fluorometer 525 nm LED modules and the signal was calibrated via titration of known amounts of H2O2 (0–3 μM) in the presence of mitochondria, fatty acid-free BSA, HRP and Amplex Red.
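The fluorescence-to-H2O2 calibration amounts to a linear fit of signal against the titrated H2O2 standards; the slope then converts fluorescence change into H2O2 produced (our sketch with synthetic data, not the instrument software):

```python
# Sketch (synthetic data, ours) of the H2O2 calibration described above:
# fit a line through resorufin fluorescence measured after titrating known
# amounts of H2O2 (0-3 uM); the slope converts signal into H2O2.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

h2o2_uM = [0.0, 1.0, 2.0, 3.0]
fluor   = [10.0, 52.0, 94.0, 136.0]   # arbitrary fluorescence units
slope, intercept = fit_line(h2o2_uM, fluor)
print(f"slope = {slope:.1f} a.u. per uM H2O2, intercept = {intercept:.1f}")
```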
Ischemic heart incubations
Hearts were retrieved from 8- to 16-week-old male wild-type C57BL/6 J or ND6-P25L mice following cervical dislocation. For labeling of exposed cysteine residues, the hearts were cut longitudinally into five equal slices, and the 0 min slice was clamp-frozen immediately at liquid nitrogen temperature. The residual pieces were incubated in the chest of the warmed (37 °C) mouse on a temperature-controlled heat pad for up to 30 min, then frozen. For the analysis of CoQ and succinate in ischemic hearts, the hearts were cut in half, with one half clamp-frozen as the normoxic sample while the other half was placed back in the chest of the animal for 30 min as above and subsequently analyzed for succinate levels and CoQ redox state.
Labeling of exposed cysteine residues
For labeling of exposed cysteine residues, frozen heart tissue (~5 mg) was homogenized in 400 μL of ice-cold 50 mM KPi buffer (pH 7.8 at 30 °C) containing 20 mM light iodoacetamide (Sigma-Aldrich, UK) and 10 mM TCEP, using a Precellys24 tissue homogenizer (6500 rpm, 15 s; Bertin Instruments, France) and lysis tubes (CK-14, Bertin Instruments, France). Cysteines were labeled for 5 min on ice, then the reaction was quenched by addition of 1 mL of KPi buffer. The membranous fraction was pelleted at 17,000 × g, 4 °C for 5 min, the pellets washed with 1 mL of KPi buffer, and centrifuged as before. All residual thiols were labeled by resuspending the pellets in 45 μL of lysis buffer (4% SDS, 50 mM NaPi, pH 7.8 at 37 °C) containing 20 mM heavy (13C2, 2-d2) iodoacetamide (Sigma-Aldrich, UK) and 10 mM TCEP and incubating at 37 °C for 30 min. Following addition of the appropriate loading dye, samples were subjected to SDS-PAGE, then fixed in 50% methanol and 10% acetic acid and stained with QC Colloidal Coomassie Stain (BioRad, UK). Gel sections between the 10 and 20 kDa marker bands (Precision Plus Protein™ Dual Color Standard, BioRad, UK) were excised and the proteins were in-gel digested with trypsin. The extracted peptides were desalted with C18 tips (OMIX C18, Agilent, UK) according to the manufacturer's instructions and dissolved in MS sample buffer (20% acetonitrile, 0.1% formic acid) followed by MS analysis. Light and heavy iodoacetamide-labeled tryptic ND3 peptides were analyzed using a Xevo TQ-S triple-quadrupole mass spectrometer with the attached I-Class ACQUITY UPLC® system (both Waters, UK). The peptides were separated by reverse-phase at 30 °C on an ACQUITY UPLC® BEH C18 column (1.7 µm, 130 Å, 1 × 50 mm; Waters, UK).
MS buffer A (5% acetonitrile, 0.1% formic acid) and MS buffer B (90% acetonitrile, 0.1% formic acid) were used at 200 μL/min for the following LC gradient: 0–0.3 min, 5% buffer B; 0.3–3 min, 5%–100% buffer B; 3–4 min, 100% buffer B; 4.0–4.10 min, 100%–5% buffer B; 4.10–4.60 min, 5% buffer B. Eluant was diverted to waste at 0–1 min and 4–5 min. The peptides were detected by multiple reaction monitoring in positive ion mode using the following settings: source spray voltage 3.0 kV; cone voltage 2 V; ion source temperature 150 °C; collision energy 25 V. For quantification the following transitions were used: light labeled ND3 peptide, 836.7 > 744.0; heavy labeled ND3 peptide, 838.7 > 746.0. The peak areas of light and heavy labeled ND3 peptides were quantified using the MassLynx 4.1 software (Waters, UK) and the proportion of light-labeled peptides determined.
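The final step, computing the proportion of light-labeled (i.e., exposed in tissue) ND3 peptide from the two peak areas, can be sketched as follows. This helper is our own illustration, not part of the published workflow, and assumes the light and heavy isotopologues give equal MS responses:

```python
def nd3_light_fraction(area_light, area_heavy):
    """Proportion of ND3 Cys39 labeled with light iodoacetamide.

    Light labeling marks cysteines exposed in the intact tissue;
    heavy labeling marks those only exposed after SDS denaturation.
    Assumes equal MS response for the two isotopologues (reasonable
    for co-eluting isotopologues, but an assumption nonetheless).
    """
    total = area_light + area_heavy
    if total == 0:
        raise ValueError("no ND3 peptide signal detected")
    return area_light / total
```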
Mitochondrial ROS measurements
Measurements of H2O2 production by mitochondria were performed at room temperature using a fluorometric plate reader (CLARIOstar, BMG Labtech, Germany) to assess the conversion of Amplex Red to resorufin. Heart mitochondria (30 μg protein) were incubated with 4 μg HRP, 8 μg superoxide dismutase (Cu,Zn-SOD) and 40 μg fatty acid free BSA in a final volume of 100 μL KCl buffer in a 96-well plate. A further 100 μL KCl buffer was added containing 10 mM succinate or glutamate/malate (10 mM each, or 0.5 mM where specified), 12.5 μM Amplex Red, and inhibitors (5 µM FCCP or rotenone) as indicated. Alternatively, 10 mM succinate was added following priming with glutamate/malate. Resorufin fluorescence (excitation at 560 nm, emission at 590 nm) was calibrated against known H2O2 concentrations (H2O2 stock concentrations determined from ε240 = 43.6 M−1 cm−1).
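The calibration against known H2O2 concentrations amounts to a linear fit of fluorescence versus standard concentration, inverted to convert sample readings. The sketch below is our own illustration (the paper does not specify the fitting procedure) and assumes the resorufin signal is linear over the calibration range:

```python
def linear_fit(x, y):
    """Least-squares slope and intercept for a calibration line."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def fluorescence_to_h2o2(fluorescence, slope, intercept):
    """Invert the calibration: resorufin fluorescence -> [H2O2] (uM)."""
    return (fluorescence - intercept) / slope
```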
CoQ extraction from isolated mitochondria
CoQ extraction was performed as described previously26. Mitochondria (15 μg protein) were incubated for 5 min at 37 °C in 100 μL KCl buffer (120 mM KCl, 10 mM HEPES, 1 mM EGTA, pH 7.2 at 37 °C), with succinate (10 mM) and rotenone (1 μM) if indicated. Incubations were transferred to ice-cold extraction solution (200 μL of acidified methanol and 200 μL hexane) and vortexed. The hexane phase was separated by centrifugation (5 min, 17,000 × g, 4 °C), collected, dried down in 1 mL glass mass spectrometry vials (186005663CV, Waters, UK) under a stream of nitrogen and the CoQ extract was resuspended in methanol containing 2 mM ammonium formate followed by LC-MS analysis.
LC-MS/MS analysis of CoQ redox state
CoQ redox state determination by LC-MS was performed as described previously26. LC-MS/MS analyses were carried out using an I-Class ACQUITY UPLC® system attached to a Xevo TQ-S triple quadrupole mass spectrometer (both Waters, UK). Samples were kept at 8 °C prior to injection by the autosampler of 2–10 μL into a 15 μL flow-through needle and separated by reverse-phase at 45 °C using an ACQUITY UPLC® BEH C18 column (1.7 μm, 130 Å, 2.1 × 50 mm; Waters, UK). Mobile phase was isocratic 2 mM ammonium formate in methanol run at 0.8 mL/min over 5 min. For MS analysis, electrospray ionization in positive ion mode was used with the following settings: capillary voltage 1.7 kV; cone voltage 30 V; ion source temperature 100 °C; collision energy 22 V. Multiple reaction monitoring in positive ion mode was used for compound detection. Transitions used for quantification were: CoQ9, 812.9 > 197.2; CoQ9H2, 814.9 > 197.2. Samples were quantified using MassLynx 4.1 software (Waters, UK) to determine the peak areas for CoQ9 and CoQ9H2.
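From the two quantified species, the CoQ pool redox state is naturally expressed as the fraction of the pool in the reduced form. The helper below is our own sketch; note the assumption that peak areas have first been converted to comparable amounts via standard curves, since CoQ9 and CoQ9H2 need not ionize with equal efficiency:

```python
def coq_reduced_fraction(amount_coq9, amount_coq9h2):
    """CoQ9 pool redox state as the reduced fraction:
    CoQ9H2 / (CoQ9 + CoQ9H2), from quantified amounts (not raw areas)."""
    total = amount_coq9 + amount_coq9h2
    if total == 0:
        raise ValueError("no CoQ9 signal detected")
    return amount_coq9h2 / total
```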
Membrane potential measurement in isolated mitochondria
Membrane potentials in isolated mitochondria were measured by the accumulation of radiolabeled triphenylmethylphosphonium ([3H]-TPMP, American Radiolabeled Chemicals, USA) as described previously65,66. Mitochondria (100 μg) were incubated for 5 min at 37 °C in KCl buffer (120 mM KCl, 10 mM HEPES, 1 mM EGTA, pH 7.2 at 37 °C), with succinate (10 mM), 5 μM TPMP (cold), 50 nCi/mL [3H]-TPMP and, if indicated, 0.5 μM rotenone. After incubation, mitochondria were pelleted by centrifugation (10,000 × g for 1 min, RT) and 200 μL of the supernatant was collected. Residual liquid was carefully removed and pellets were lysed in 40 μL of 20% Triton X-100 followed by addition of 160 μL H2O. Scintillation of pellet and supernatant was measured in a Tri-Carb 2800 R liquid scintillation analyzer (PerkinElmer, US) after addition of 3 mL scintillant. The membrane potential was calculated from the measured scintillation counts using Eq. 1 below, assuming a mitochondrial volume of 0.6 μL/mg protein and a TPMP binding correction of 2.1 (ref. 66):
$$\Delta {\Psi}\left( {{\mathrm{mV}}} \right) = 61.5 \times {\mathrm{log}}_{10}\left( {\frac{1}{{2.1}} \times \frac{{\left[ {\left[ {{\,}^{3}{\mathrm{H}}} \right]{\mathrm{TPMP}}} \right]_{{\mathrm{in}}}}}{{\left[ {\left[ {{\,}^{3}{\mathrm{H}}} \right]{\mathrm{TPMP}}} \right]_{{\mathrm{out}}}}}} \right) = 61.5 \times {\mathrm{log}}_{10}\left( {\frac{1}{{2.1}} \times \frac{{\left( {\frac{{\left[ {{\,}^{3}{\mathrm{H}}} \right]{\mathrm{TPMPcounts}}_{{\mathrm{in}}}}}{{{\mathrm{Mito}}{\mathrm{.}}\,{\mathrm{vol}}{\mathrm{.}}}}} \right)}}{{\left( {\frac{{\left[ {{\,}^{3}{\mathrm{H}}} \right]{\mathrm{TPMPcounts}}_{{\mathrm{out}}}}}{{{\mathrm{Buffer}}\,{\mathrm{vol}}{\mathrm{.}}}}} \right)}}} \right)$$
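As a numerical cross-check, Eq. 1 can be evaluated directly from the scintillation counts. The function below is our own sketch (names and any example volumes are illustrative, not measured data); it uses the stated matrix volume of 0.6 μL/mg protein and the binding correction of 2.1, and assumes background-corrected counts:

```python
import math

def membrane_potential_mv(counts_in, counts_out, mg_protein, buffer_vol_ul,
                          matrix_vol_ul_per_mg=0.6, binding_correction=2.1):
    """Mitochondrial membrane potential (mV) from [3H]-TPMP counts (Eq. 1)."""
    mito_vol_ul = matrix_vol_ul_per_mg * mg_protein
    conc_in = counts_in / mito_vol_ul      # counts per uL, matrix
    conc_out = counts_out / buffer_vol_ul  # counts per uL, buffer
    return 61.5 * math.log10(conc_in / (binding_correction * conc_out))
```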
The calculated membrane potentials were lower than reported previously for rat heart mitochondria8, which we ascribe to less stringent purification due to the low yields obtained from mouse heart, leading to greater background protein contamination.
LAD open-chest mouse model of acute myocardial IR injury
All procedures were carried out in accordance with the UK Home Office Guide on the Operation of the Animals (Scientific Procedures) Act 1986 and were approved by the University of Cambridge Animal Welfare Policy under license number 70/8238. Age- and sex-matched C57BL/6 J mice ranging between 9 and 20 weeks (22–32 g) were obtained from Charles River. An open-chest model of acute myocardial ischemia/reperfusion injury was modified and used as described previously67. In short, mice were anesthetized with sodium pentobarbital (70 mg/kg intraperitoneal); following endotracheal intubation and ventilation at 110 breaths per minute (tidal volume 125–150 µL, dependent on weight), a sternal thoracotomy was performed, and the major branch of the left anterior descending (LAD) coronary artery was occluded for 30 min, followed by 2 h of reperfusion as previously described66. Areas at risk were identified by infusion of Evans Blue following LAD re-ligation, and infarct sizes were determined in a blinded fashion by triphenyltetrazolium chloride staining and expressed as a proportion of the area at risk29,67. For mitochondrial H2O2 analyses, 50 nmol MitoB was injected I.V. via the tail vein prior to the hearts being exposed to either 30 min ischemia or ischemia followed by 15 min reperfusion. Both conditions allowed MitoB to be present for 55 min prior to the end of the experiment18. Hearts were snap-frozen in liquid nitrogen and mitochondrial ROS was assessed by determination of the MitoP/MitoB ratio by LC-MS/MS relative to deuterated internal standards.
Succinate quantification
Succinate was extracted from tissue samples and quantified as described previously68. Briefly, tissues were homogenized in MS extraction buffer (50% methanol, 30% acetonitrile and 20% H2O) supplemented with an internal standard of 1 nmol of [13C4]-succinate (Sigma-Aldrich, UK; 25 μL buffer/mg wet weight tissue) in a Precellys 24 tissue homogenizer (6500 rpm, 15 s; Bertin Instruments, France). The homogenate was agitated on a shaking heat block (1400 rpm, 4 °C, 15 min) and then incubated (−20 °C, 1 h). The sample was then centrifuged (17,000 × g, 10 min, 4 °C), the pellet discarded and the centrifugation repeated. The resulting supernatant was transferred to pre-cooled MS vials and stored at −80 °C until analysis. Succinate analysis was carried out using a LCMS-8060 mass spectrometer with a Nexera X2 UPLC system (both Shimadzu, UK). 5 μL of sample was injected onto a SeQuant® ZIC®-HILIC column (3.5 μm, 100 Å, 150 × 2.1 mm, 30 °C column temperature) with a ZIC®-HILIC guard column (200 Å, 1 × 5 mm; both MerckMillipore, UK). Separation was achieved with mobile phases of A) 10 mM ammonium bicarbonate and B) 100% acetonitrile running at a gradient of 0–0.1 min, 80% B; 4 min, 20% B; 10 min, 20% B; 11 min, 80% B; 15 min, 80% B. The mass spectrometer was operated in negative ion mode with multiple reaction monitoring, and data were acquired and analyzed using Labsolutions software (Shimadzu, UK). Succinate was quantified using standard curves with known amounts of succinate compared against the internal standard.
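Quantification against the spiked internal standard reduces to scaling the analyte/IS peak-area ratio by the standard-curve slope. A minimal sketch (our own helper names; `response_factor` is the slope, in nmol succinate per unit area ratio, from the standard curve described above):

```python
def succinate_nmol(area_succinate, area_is, response_factor, is_nmol=1.0):
    """Succinate amount (nmol) from the analyte / internal-standard
    peak-area ratio, scaled by the standard-curve response factor.
    1 nmol [13C4]-succinate IS is added per sample in this protocol."""
    return (area_succinate / area_is) * is_nmol * response_factor

def succinate_nmol_per_mg(area_succinate, area_is, tissue_mg, response_factor):
    """Normalize the succinate amount to wet tissue mass."""
    return succinate_nmol(area_succinate, area_is, response_factor) / tissue_mg
```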
CoQ extraction from tissue
CoQ extraction and redox state analysis were performed as described previously26. Clamp frozen heart tissue (in situ ischemic heart model; ~5 mg) was weighed into cooled lysis tubes (CK-14, Bertin Instruments, France) on dry ice. Then a mixture of 250 μL ice-cold acidified methanol and 250 μL hexane was added, and tissue was rapidly homogenized in a Precellys24 tissue homogenizer (6500 rpm, 15 s; Bertin Instruments, France). The upper, CoQ-containing hexane layer was separated by centrifugation (5 min, 17,000 × g, 4 °C), dried down in 1 mL glass mass spectrometry vials (186005663CV, Waters, UK) under a stream of nitrogen and the CoQ extract was resuspended in methanol containing 2 mM ammonium formate for LC-MS analysis.
Langendorff-perfused mouse hearts
Langendorff-perfusion of mouse hearts was carried out as described previously68. All procedures were carried out with the approval of the Queen Mary, University of London local ethics committee and the UK Home Office, under licence PPL PB137135C. Mice were administered terminal anesthesia via intra-peritoneal pentobarbitone injection (~140 mg/kg body weight). Beating hearts were rapidly excised, cannulated and perfused in isovolumic Langendorff mode at 80 mm Hg pressure, maintained by a STH peristaltic pump controller feedback system (AD Instruments, UK), with phosphate-free Krebs–Henseleit (KH) buffer continuously gassed with 95% O2/5% CO2 (pH 7.4, 37 °C) containing (in mM): NaCl (116), KCl (4.7), MgSO4.7H2O (1.2), NaHCO3 (25), CaCl2 (1.4), glucose (11). Cardiac function was assessed using a fluid-filled cling-film balloon inserted into the left ventricle (LV), connected via a line to a pressure transducer and a Powerlab system (AD Instruments, UK). The volume of the intraventricular balloon was adjusted using a 1.0 mL syringe to achieve an initial LV diastolic pressure (LVDP) of 4–9 mm Hg. Ex vivo cardiac functional parameters (systolic pressure, end diastolic pressure, heart rate, coronary flow, perfusion pressure) were monitored throughout the experiment using LabChart software v.7 (AD Instruments, UK) and analyzed using Microsoft Excel software 1.16.16.15. After 20 min equilibration (normoxia), hearts were either subjected to 20 min global no-flow ischemia only, or 20 min ischemia followed by 6 min reperfusion. Hearts were subsequently snap frozen using Wollenberger tongs pre-cooled in liquid nitrogen and stored at −80 °C until further analysis.
Statistical comparison between two groups was carried out with GraphPad Prism 8 software (GraphPad Software, USA) using two-tailed Student's t-tests. Multiple groups were compared using a two-way ANOVA with Tukey's multiple comparisons test. The number of biological replicates (n) and the statistical values and tests are defined in each figure legend.
Reporting summary
Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper.
The structure coordinates and Cryo-EM maps generated during this study are available at the RCSB PDB with accession codes: EMD-11811, PDB ID: 7AK6 (ND6-P25L) and EMD-11810, PDB ID: 7AK5 (deactive state). Associated raw cryo-EM micrograph images are available as EMPIAR entries EMPIAR-10605 (ND6-P25L) and EMPIAR-10604 (Deactive). Source data are provided with this paper.
Hirst, J. & Roessler, M. M. Energy conversion, redox catalysis and generation of reactive oxygen species by respiratory complex I. Biochim. Biophys. Acta 1857, 872–883 (2016).
Zhu, J., Vinothkumar, K. R. & Hirst, J. Structure of mammalian respiratory complex I. Nature 536, 354–358 (2016).
Agip, A. A., Blaza, J. N., Fedor, J. G. & Hirst, J. Mammalian respiratory complex I through the lens of Cryo-EM. Annu. Rev. Biophys. 48, 165–184 (2019).
Gorman, G. S. et al. Mitochondrial diseases. Nat. Rev. Dis. Prim. 2, 16080 (2016).
Malfatti, E. et al. Novel mutations of ND genes in complex I deficiency associated with mitochondrial encephalopathy. Brain 130, 1894–1904 (2007).
Lin, C. S. et al. Mouse mtDNA mutant model of Leber hereditary optic neuropathy. Proc. Natl Acad. Sci. USA. 109, 20065–20070 (2012).
McManus, M. J. et al. Mitochondrial DNA variation dictates expressivity and progression of nuclear DNA mutations causing cardiomyopathy. Cell Metab. 29, 78–90 (2019).
Robb, E. L. et al. Control of mitochondrial superoxide production by reverse electron transport at complex I. J. Biol. Chem. 293, 9869–9879 (2018).
Murphy, M. P. How mitochondria produce reactive oxygen species. Biochem. J. 417, 1–13 (2009).
Mills, E. L. et al. Succinate dehydrogenase supports metabolic repurposing of mitochondria to drive inflammatory macrophages. Cell 167, 457–470 (2016).
Fernandez-Aguera, M. C. et al. Oxygen sensing by arterial chemoreceptors depends on mitochondrial complex I signaling. Cell Metab. 22, 825–837 (2015).
Arias-Mayenco, I. et al. Acute O2 sensing: role of coenzyme QH2/Q ratio and mitochondrial ROS compartmentalization. Cell Metab. 28, 145–158 (2018).
Mills, E. L. et al. Accumulation of succinate controls activation of adipose tissue thermogenesis. Nature 560, 102–106 (2018).
Kazak, L. et al. UCP1 deficiency causes brown fat respiratory chain depletion and sensitizes mitochondria to calcium overload-induced dysfunction. Proc. Natl Acad. Sci. USA. 114, 7981–7986 (2017).
Scialo, F. et al. Mitochondrial ROS produced via reverse electron transport extend animal lifespan. Cell Metab. 23, 725–734 (2016).
Scialo, F. et al. Mitochondrial complex I derived ROS regulate stress adaptation in Drosophila melanogaster. Redox Biol. 32, 101450 (2020).
Chouchani, E. T. et al. A Unifying mechanism for mitochondrial superoxide production during ischemia-reperfusion injury. Cell Metab. 23, 254–263 (2016).
Chouchani, E. T. et al. Ischaemic accumulation of succinate controls reperfusion injury through mitochondrial ROS. Nature 515, 431–435 (2014).
Chouchani, E. T. et al. Cardioprotection by S-nitrosation of a cysteine switch on mitochondrial complex I. Nat. Med. 19, 753–759 (2013).
Galkin, A. & Moncada, S. Modulation of the conformational state of mitochondrial complex I as a target for therapeutic intervention. Interface Focus 7, 20160104 (2017).
Kotlyar, A. B. & Vinogradov, A. D. Slow active/inactive transition of the mitochondrial NADH-ubiquinone reductase. Biochim. Biophy. Acta 1019, 151–158 (1990).
Agip, A. A. et al. Cryo-EM structures of complex I from mouse heart mitochondria in two biochemically defined states. Nat. Struct. Mol. Biol. 25, 548–556 (2018).
Blaza, J. N., Vinothkumar, K. R. & Hirst, J. Structure of the deactive state of mammalian respiratory complex I. Structure 26, 312–319 (2018).
Grba, D. N. & Hirst, J. Mitochondrial complex I structure reveals ordered water molecules for catalysis and proton translocation. Nat. Struct. Mol. Biol. 27, 892–900 (2020).
Gorenkova, N., Robinson, E., Grieve, D. J. & Galkin, A. Conformational change of mitochondrial complex I increases ROS sensitivity during ischemia. Antioxid. Redox Signal. 19, 1459–1468 (2013).
Burger, N. et al. A sensitive mass spectrometric assay for mitochondrial CoQ pool redox state in vivo. Free Radic. Biol. Med. 147, 37–47 (2020).
Nicholls, D. G. & Ferguson, S. J. Bioenergetics 4. (Academic Press, 2013).
Martin, J. L. et al. Succinate accumulation drives ischaemia-reperfusion injury during organ transplantation. Nat. Metab. 1, 966–974 (2019).
Pell, V. R. et al. Ischemic preconditioning protects against cardiac ischemia reperfusion injury without affecting succinate accumulation or oxidation. J. Mol. Cell. Cardiol. 123, 88–91 (2018).
Cocheme, H. M. et al. Measurement of H2O2 within living Drosophila during aging using a ratiometric mass spectrometry probe targeted to the mitochondrial matrix. Cell Metab. 13, 340–350 (2011).
Pryde, K. R. & Hirst, J. Superoxide is produced by the reduced flavin in mitochondrial complex I: a single, unified mechanism that applies during both forward and reverse electron transfer. J. Biol. Chem. 286, 18056–18065 (2011).
Armstrong, F. A. & Hirst, J. Reversibility and efficiency in electrocatalytic energy conversion and lessons from enzymes. Proc. Natl Acad. Sci. USA 108, 14049–14054 (2011).
Cabrera-Orefice, A. et al. Locking loop movement in the ubiquinone pocket of complex I disengages the proton pumps. Nat. Commun. 9, 4500 (2018).
Prag, H. A. et al. Ester prodrugs of malonate with enhanced intracellular delivery protect against cardiac ischemia-reperfusion injury in vivo. Cardiovasc. Drugs Ther. https://doi.org/10.1007/s10557-020-07033-6 (2021).
Xu, J. et al. Inhibiting succinate dehydrogenase by dimethyl malonate alleviates brain damage in a rat model of cardiac arrest. Neuroscience 393, 24–32 (2018).
Prag, H. A. et al. Selective delivery of dicarboxylates to mitochondria by conjugation to a lipophilic cation via a cleavable linker. Mol. Pharm. 17, 3526–3540 (2020).
Valls-Lacalle, L. et al. Selective inhibition of succinate dehydrogenase in reperfused myocardium with intracoronary malonate reduces infarct size. Sci. Rep. 8, 2442 (2018).
Chen, Q., Hoppel, C. L. & Lesnefsky, E. J. Blockade of electron transport before cardiac ischemia with the reversible inhibitor amobarbital protects rat heart mitochondria. J. Pharmacol. Exp. Ther. 316, 200–207 (2006).
Chen, Q., Moghaddas, S., Hoppel, C. L. & Lesnefsky, E. J. Reversible blockade of electron transport during ischemia protects mitochondria and decreases myocardial injury following reperfusion. J. Pharmacol. Exp. Ther. 319, 1405–1412 (2006).
Lesnefsky, E. J. et al. Blockade of electron transport during ischemia protects cardiac mitochondria. J. Biol. Chem. 279, 47961–47967 (2004).
Galkin, A. & Moncada, S. S-nitrosation of mitochondrial complex I depends on its structural conformation. J. Biol. Chem. 282, 37448–37453 (2007).
Prime, T. A. et al. A mitochondria-targeted S-nitrosothiol modulates respiration, nitrosates thiols, and protects against ischemia-reperfusion injury. Proc. Natl Acad. Sci. USA 106, 10764–10769 (2009).
Kim, M. et al. Attenuation of oxidative damage by targeting mitochondrial complex I in neonatal hypoxic-ischemic brain injury. Free Radic. Biol. Med. 124, 517–524 (2018).
Bridges, H. R., Jones, A. J., Pollak, M. N. & Hirst, J. Effects of metformin and other biguanides on oxidative phosphorylation in mitochondria. Biochem. J. 462, 475–487 (2014).
Calvert, J. W. et al. Acute metformin therapy confers cardioprotection against myocardial infarction via AMPK-eNOS-mediated signaling. Diabetes 57, 696–705 (2008).
Cahova, M. et al. Metformin prevents ischemia reperfusion-induced oxidative stress in the fatty liver by attenuation of reactive oxygen species formation. Am. J. Physiol. Gastroint. Liver Physiol. 309, 100–111 (2015).
Mekada, K. et al. Genetic differences among C57BL/6 substrains. Exp. Anim. 58, 141–149 (2009).
Fernandez-Vizarra, E. et al. Isolation of mitochondria for biogenetical studies: An update. Mitochondrion 10, 253–262 (2010).
Meyerson et al. Self-assembled monolayers improve protein distribution on holey carbon cryo-EM supports. Sci. Rep. 4, 7084 (2014).
Zivanov, J. et al. New tools for automated high-resolution cryo-EM structure determination in RELION-3. Elife 7, e42166 (2018).
Zheng, S. Q. et al. MotionCor2: anisotropic correction of beam-induced motion for improved cryo-electron microscopy. Nat. Methods 14, 331–332 (2017).
Zhang, K. Gctf: Real-time CTF determination and correction. J. Struct. Biol. 193, 1–12 (2016).
Pettersen, E. F. et al. UCSF Chimera–a visualization system for exploratory research and analysis. J. Comput. Chem. 25, 1605–1612 (2004).
Zivanov, J., Nakane, T. & Scheres, S. H. W. Estimation of high-order aberrations and anisotropic magnification from cryo-EM data sets in RELION-3.1. IUCrJ 7, 253–267 (2020).
Emsley, P., Lohkamp, B., Scott, W. G. & Cowtan, K. Features and development of Coot. Acta Crystallogr. D. Biol. Crystallogr. 66, 486–501 (2010).
Afonine, P. V. et al. Real-space refinement in PHENIX for cryo-EM and crystallography. Acta Crystallogr. D. Struct. Biol. 74, 531–544 (2018).
Croll, T. I. ISOLDE: a physically realistic environment for model building into low-resolution electron-density maps. Acta Crystallogr. D. Struct. Biol. 74, 519–530 (2018).
de la Rosa-Trevin, J. M. et al. Scipion: A software framework toward integration, reproducibility and validation in 3D electron microscopy. J. Struct. Biol. 195, 93–99 (2016).
Williams, C. J. et al. MolProbity: More and better reference data for improved all-atom structure validation. Protein Sci. 27, 293–315 (2018).
Barad, B. A. et al. EMRinger: side chain-directed model and map validation for 3D cryo-electron microscopy. Nat. Methods 12, 943–946 (2015).
The PyMOL Molecular Graphics System, Version 2.2.3 (2019).
Goddard, T. D. et al. UCSF ChimeraX: Meeting modern challenges in visualization and analysis. Protein Sci. 27, 14–25 (2018).
Jones, A. J. & Hirst, J. A spectrophotometric coupled enzyme assay to measure the activity of succinate dehydrogenase. Anal. Biochem. 442, 19–23 (2013).
Kussmaul, L. & Hirst, J. The mechanism of superoxide production by NADH:ubiquinone oxidoreductase (complex I) from bovine heart mitochondria. Proc. Natl Acad. Sci. USA 103, 7607–7612 (2006).
Ross, M. F. et al. Accumulation of lipophilic dications by mitochondria and cells. Biochem. J. 400, 199–208 (2006).
Ross, M. F. et al. Rapid and extensive uptake and activation of hydrophobic triphenylphosphonium cations within cells. Biochem. J. 411, 633–645 (2008).
Antonucci, S. et al. Selective mitochondrial superoxide generation in vivo is cardioprotective through hormesis. Free Radic. Biol. Med. 134, 678–687 (2019).
Prag, H. A. et al. Mechanism of succinate efflux upon reperfusion of the ischemic heart. Cardiovasc. Res. https://doi.org/10.1093/cvr/cvaa148 (2021).
This research was funded by the Medical Research Council (MC_U105663141 (J.H.), MC_UU_00015/2 (J.H.), MC_U105663142 (M.P.M.), MC_ UU_00015/7 (M.P.M.), MC_UU_00015/5 (C.V.) and MR/P000320/1 (T.K.)) and by the Wellcome Trust (WT110158/Z/15/Z, 110159/Z/15/Z, RG88195, 202905/Z/16/Z and 206171/Z/17/Z). D.A. is supported by Barts Charity (MRC0215). A.M. is supported by the Swedish Research Council (2018-00623). We thank Professor Douglas Wallace (University of Pennsylvania) for generously providing the ND6-P25L mouse strain and A. Noor A. Agip (MBU) for assisting with the ND6-P25L mouse colony, Dima Chirgadze (University of Cambridge cryo-EM facility) for assistance with cryo-EM data collection, Tristan Croll (University of Cambridge) for assistance with structure modeling, Tracy Prime (MBU) for assisting with measurements of mitochondrial membrane potential, and Kurt Hoogewijs and Sabine Arndt (MBU) for help with developing the assay for complex I Cys39 exposure in the heart.
Carlo Viscomi
Present address: Department of Biomedical Sciences, University of Padova via Ugo Bassi 58/B, Padova, 35131, Italy
MRC Mitochondrial Biology Unit, University of Cambridge, Cambridge Biomedical Campus, Cambridge, UK
Zhan Yin, Nils Burger, Hannah R. Bridges, Hiran A. Prag, Daniel N. Grba, Carlo Viscomi, Andrew M. James, Amin Mottahedin, Michael P. Murphy & Judy Hirst
Department of Medicine, University of Cambridge, Cambridge, UK
Duvaraka Kula-Alwar, Amin Mottahedin, Thomas Krieg & Michael P. Murphy
William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK
Dunja Aksentijević
Centre for Inflammation and Therapeutic Innovation, Queen Mary University of London, London, UK
Department of Physiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
Amin Mottahedin
Z.Y. managed the ND6-P25L mouse colony, prepared and characterized mitochondrial membranes and complex I, and carried out cryo-EM data collection for ND6-P25L-CI. Z.Y. and H.R.B. analyzed cryo-EM data and built the models, and Z.Y., D.N.G., H.R.B., and J.H. interpreted the structures. D.N.G. made the movie. C.V. established the ND6-P25L mouse colony. N.B. carried out the CoQ and peptide mass spectrometry analyses, and the mitochondrial O2 consumption and ROS analyses with assistance from A.M.J. and A.M. H.A.P. analyzed tissue succinate levels. D.K.-A. carried out the in vivo cardiac IR injury experiments. D.A. carried out the Langendorff heart perfusions. T.K. supervised the in vivo mouse experiments. J.H. initiated the project. J.H. and M.P.M. directed the project and wrote the paper with assistance from all the other authors.
Correspondence to Michael P. Murphy or Judy Hirst.
Peer review information Nature Communications thanks Alexander Galkin and Tim Rasmussen for their contribution to the peer review of this work.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Description of Additional Supplementary Files
Supplementary Movie 1
Yin, Z., Burger, N., Kula-Alwar, D. et al. Structural basis for a complex I mutation that blocks pathological ROS production. Nat Commun 12, 707 (2021). https://doi.org/10.1038/s41467-021-20942-w
Let $a \clubsuit b = \frac{2a}{b} \cdot \frac{b}{a}$. What is $(5 \clubsuit (3 \clubsuit 6)) \clubsuit 1$?
Looking at the definition of $a \clubsuit b$, we see that $a \clubsuit b = \frac{2a}{b} \cdot \frac{b}{a}=\frac{2a \cdot b}{b \cdot a} = \frac{2ab}{ab}.$ The numerator and denominator share the common factor $ab$, so $a \clubsuit b = \frac{2 \cancel{ab}}{\cancel{ab}}=2.$ Thus, regardless of the values of $a$ and $b$ (as long as neither is zero), $a \clubsuit b$ is always 2. In the given expression, neither input is ever zero, so the expression simplifies to $(5 \clubsuit (3 \clubsuit 6)) \clubsuit 1 = (5 \clubsuit 2) \clubsuit 1 = 2 \clubsuit 1 = \boxed{2}.$
0.3: Modeling with Linear Functions
Contributed by OpenStax
Mathematics at OpenStax CNX
Skills to Develop
Build linear models from verbal descriptions.
Model a set of data with a linear function.
Emily is a college student who plans to spend a summer in Seattle. She has saved $3,500 for her trip and anticipates spending $400 each week on rent, food, and activities. How can we write a linear model to represent her situation? What would be the x-intercept, and what can she learn from it? To answer these and related questions, we can create a model using a linear function. Models such as this one can be extremely useful for analyzing relationships and making predictions based on those relationships. In this section, we will explore examples of linear function models.
Figure \(\PageIndex{1}\): (credit: EEK Photography/Flickr)
When modeling scenarios with linear functions and solving problems involving quantities with a constant rate of change, we typically follow the same problem strategies that we would use for any type of function. Let's briefly review them:
Identify changing quantities, and then define descriptive variables to represent those quantities. When appropriate, sketch a picture or define a coordinate system.
Carefully read the problem to identify important information. Look for information that provides values for the variables or values for parts of the functional model, such as slope and initial value.
Carefully read the problem to determine what we are trying to find, identify, solve, or interpret.
Identify a solution pathway from the provided information to what we are trying to find. Often this will involve checking and tracking units, building a table, or even finding a formula for the function being used to model the problem.
When needed, write a formula for the function.
Solve or evaluate the function using the formula.
Reflect on whether your answer is reasonable for the given situation and whether it makes sense mathematically.
Clearly convey your result using appropriate units, and answer in full sentences when necessary.
Now let's take a look at the student in Seattle. In her situation, there are two changing quantities: time and money. The amount of money she has remaining while on vacation depends on how long she stays. We can use this information to define our variables, including units.
Output: \(M\), money remaining, in dollars
Input: \(t\), time, in weeks
So, the amount of money remaining depends on the number of weeks: \(M(t)\)
We can also identify the initial value and the rate of change.
Initial Value: She saved $3,500, so $3,500 is the initial value for M.
Rate of Change: She anticipates spending $400 each week, so –$400 per week is the rate of change, or slope.
Notice that the unit of dollars per week matches the unit of our output variable divided by our input variable. Also, because the slope is negative, the linear function is decreasing. This should make sense because she is spending money each week.
The rate of change is constant, so we can start with the linear model \(M(t)=mt+b\). Then we can substitute the y-intercept and slope provided:

\[M(t)=-400t+3500\]

Figure \(\PageIndex{2}\)
To find the x-intercept, we set the output to zero, and solve for the input.
\[\begin{align} 0&=−400t+3500 \\ t&=\dfrac{3500}{400} \\ &=8.75 \end{align}\]
The x-intercept is 8.75 weeks. Because this represents the input value when the output will be zero, we could say that Emily will have no money left after 8.75 weeks.
When modeling any real-life scenario with functions, there is typically a limited domain over which that model will be valid—almost no trend continues indefinitely. Here the domain refers to the number of weeks. In this case, it doesn't make sense to talk about input values less than zero. A negative input value could refer to a number of weeks before she saved $3,500, but the scenario discussed poses the question once she saved $3,500 because this is when her trip and subsequent spending starts. It is also likely that this model is not valid after the x-intercept, unless Emily will use a credit card and goes into debt. The domain represents the set of input values, so the reasonable domain for this function is \(0{\leq}t{\leq}8.75\).
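As a quick numerical check (not part of the original text), here is a short Python sketch of Emily's model; the function name `money_remaining` is ours.

```python
# Sketch: Emily's vacation-money model M(t) = -400t + 3500.

def money_remaining(t):
    """Money remaining, in dollars, after t weeks in Seattle."""
    return -400 * t + 3500

# x-intercept: set the output to zero and solve -400t + 3500 = 0.
t_intercept = 3500 / 400

print(money_remaining(0))   # 3500: her initial savings
print(t_intercept)          # 8.75: weeks until the money runs out
```

Evaluating at \(t=0\) recovers the y-intercept, and the x-intercept confirms the 8.75-week result found algebraically.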
In the above example, we were given a written description of the situation. We followed the steps of modeling a problem to analyze the information. However, the information provided may not always be the same. Sometimes we might be provided with an intercept. Other times we might be provided with an output value. We must be careful to analyze the information we are given, and use it appropriately to build a linear model.
Some real-world problems provide the y-intercept, which is the constant or initial value. Once the y-intercept is known, the x-intercept can be calculated. Suppose, for example, that Hannah plans to pay off a no-interest loan from her parents. Her loan balance is $1,000. She plans to pay $250 per month until her balance is $0. The y-intercept is the initial amount of her debt, or $1,000. The rate of change, or slope, is -$250 per month. We can then use the slope-intercept form and the given information to develop a linear model.
\[\begin{align} f(x)&=mx+b \\ &=-250x+1000 \end{align}\]
Now we can set the function equal to 0, and solve for \(x\) to find the x-intercept.
\[\begin{align} 0&=-250x+1000 \\ 250x&=1000 \\ x&=4 \end{align}\]
The x-intercept is the number of months it takes her to reach a balance of $0. The x-intercept is 4 months, so it will take Hannah four months to pay off her loan.
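A minimal sketch (ours, not from the text) checking Hannah's loan model and its x-intercept:

```python
# Sketch: Hannah's loan balance f(x) = -250x + 1000.

def balance(x):
    """Remaining loan balance, in dollars, after x months."""
    return -250 * x + 1000

# x-intercept: solve -250x + 1000 = 0.
months_to_payoff = 1000 / 250
print(months_to_payoff)   # 4.0 months
```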
Many real-world applications are not as direct as the ones we just considered. Instead they require us to identify some aspect of a linear function. We might sometimes instead be asked to evaluate the linear model at a given input or set the equation of the linear model equal to a specified output.
Given a word problem that includes two pairs of input and output values, use the linear function to solve a problem.
Identify the input and output values.
Convert the data to two coordinate pairs.
Find the slope.
Write the linear model.
Use the model to make a prediction by evaluating the function at a given x-value.
Use the model to identify an x-value that results in a given y-value.
Answer the question posed.
Example \(\PageIndex{1}\): Using a Linear Model to Investigate a Town's Population
A town's population has been growing linearly. In 2004 the population was 6,200. By 2009 the population had grown to 8,100. Assume this trend continues.
Predict the population in 2013.
Identify the year in which the population will reach 15,000.
The two changing quantities are the population size and time. While we could use the actual year value as the input quantity, doing so tends to lead to very cumbersome equations because the y-intercept would correspond to the year 0, more than 2000 years ago!
To make computation a little nicer, we will define our input as the number of years since 2004:
Input: \(t\), years since 2004
Output: \(P(t)\), the town's population
To predict the population in 2013 (\(t=9\)), we would first need an equation for the population. Likewise, to find when the population would reach 15,000, we would need to solve for the input that would provide an output of 15,000. To write an equation, we need the initial value and the rate of change, or slope.
To determine the rate of change, we will use the change in output per change in input.
\[m=\dfrac{\text{change in output}}{\text{change in input}}\]
The problem gives us two input-output pairs. Converting them to match our defined variables, the year 2004 would correspond to \(t=0\), giving the point \((0,6200)\). Notice that through our clever choice of variable definition, we have "given" ourselves the y-intercept of the function. The year 2009 would correspond to \(t=5\), giving the point \((5,8100)\).
The two coordinate pairs are \((0,6200)\) and \((5,8100)\). Recall that we encountered examples in which we were provided two points earlier in the chapter. We can use these values to calculate the slope.
\[\begin{align} m&=\dfrac{8100-6200}{5-0}\\ &=\dfrac{1900}{5} \\ &=380 \text{ people per year} \end{align}\]
We already know the y-intercept of the line, so we can immediately write the equation:
\[P(t)=380t+6200\]
To predict the population in 2013, we evaluate our function at \(t=9\).
\[\begin{align} P(9)&=380(9)+6,200 \\ &=9,620 \end{align}\]
If the trend continues, our model predicts a population of 9,620 in 2013.
To find when the population will reach 15,000, we can set \(P(t)=15000\) and solve for \(t\).
\[\begin{align} 15000&=380t+6200 \\ 8800&=380t \\ t&{\approx}23.158 \end{align}\]
Our model predicts the population will reach 15,000 in a little more than 23 years after 2004, or somewhere around the year 2027.
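The whole slope-from-two-points workflow in this example can be sketched in Python; the helper `linear_from_points` is our own illustrative name, not from the text.

```python
# Sketch: rebuilding the town-population model from the two data points.

def linear_from_points(p1, p2):
    """Slope m and y-intercept b of the line through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

m, b = linear_from_points((0, 6200), (5, 8100))   # t = years since 2004

def P(t):
    return m * t + b

print(m, b)                       # 380.0 6200.0
print(P(9))                       # 9620.0: predicted population in 2013
print(round((15000 - b) / m, 3))  # 23.158: years until the population hits 15,000
```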
\(\PageIndex{1}\): A company sells doughnuts. They incur a fixed cost of $25,000 for rent, insurance, and other expenses. It costs $0.25 to produce each doughnut.
Write a linear model to represent the cost C of the company as a function of \(x\), the number of doughnuts produced.
Find and interpret the y-intercept.
a. \(C(x)=0.25x+25,000\) b. The y-intercept is \((0,25,000)\). If the company does not produce a single doughnut, they still incur a cost of $25,000.
\(\PageIndex{2}\): A city's population has been growing linearly. In 2008, the population was 28,200. By 2012, the population was 36,800. Assume this trend continues.
a. 41,100 b. 2020
It is useful for many real-world applications to draw a picture to gain a sense of how the variables representing the input and output may be used to answer a question. To draw the picture, first consider what the problem is asking for. Then, determine the input and the output. The diagram should relate the variables. Often, geometrical shapes or figures are drawn. Distances are often traced out. If a right triangle is sketched, the Pythagorean Theorem relates the sides. If a rectangle is sketched, labeling width and height is helpful.
Example \(\PageIndex{2}\): Using a Diagram to Model Distance Walked
Anna and Emanuel start at the same intersection. Anna walks east at 4 miles per hour while Emanuel walks south at 3 miles per hour. They are communicating with a two-way radio that has a range of 2 miles. How long after they start walking will they fall out of radio contact?
In essence, we can partially answer this question by saying they will fall out of radio contact when they are 2 miles apart, which leads us to ask a new question:
"How long will it take them to be 2 miles apart?"
In this problem, our changing quantities are time and position, but ultimately we need to know how long it will take for them to be 2 miles apart. We can see that time will be our input variable, so we'll define our input and output variables.
Input: \(t\), time in hours.
Output: \(A(t)\), distance in miles, and \(E(t)\), distance in miles
Because it is not obvious how to define our output variable, we'll start by drawing a picture such as Figure \(\PageIndex{3}\).
Initial Value: They both start at the same intersection so when \(t=0\), the distance traveled by each person should also be 0. Thus the initial value for each is 0.
Rate of Change: Anna is walking 4 miles per hour and Emanuel is walking 3 miles per hour, which are both rates of change. The slope for \(A\) is 4 and the slope for \(E\) is 3.
Using those values, we can write formulas for the distance each person has walked.
\[A(t)=4t\]
\[E(t)=3t\]
For this problem, the distances from the starting point are important. To notate these, we can define a coordinate system, identifying the "starting point" at the intersection where they both started. Then we can use the variable, \(A\), which we introduced above, to represent Anna's position, and define it to be a measurement from the starting point in the eastward direction. Likewise, we can use the variable, \(E\), to represent Emanuel's position, measured from the starting point in the southward direction. Note that in defining the coordinate system, we specified both the starting point of the measurement and the direction of measure.
We can then define a third variable, \(D\), to be the measurement of the distance between Anna and Emanuel. Showing the variables on the diagram is often helpful, as we can see from Figure \(\PageIndex{4}\).
Recall that we need to know how long it takes for \(D\), the distance between them, to equal 2 miles. Notice that for any given input \(t\), the outputs \(A(t)\), \(E(t)\), and \(D(t)\) represent distances.
Figure \(\PageIndex{4}\): We can use the Pythagorean Theorem because we have drawn a right angle.
Using the Pythagorean Theorem, we get:
\[\begin{align} D(t)^2&=A(t)^2+E(t)^2 \\ &=(4t)^2+(3t)^2 \\ &=16t^2+9t^2 \\ &=25t^2 \\ D(t)&=\pm\sqrt{25t^2} &\text{Solve for $D(t)$ using the square root} \\ &= \pm 5|t| \end{align}\]
In this scenario we are considering only positive values of \(t\), so our distance \(D(t)\) will always be positive. We can simplify this answer to \(D(t)=5t\). This means that the distance between Anna and Emanuel is also a linear function. Because D is a linear function, we can now answer the question of when the distance between them will reach 2 miles. We will set the output \(D(t)=2\) and solve for \(t\).
\[\begin{align} D(t)&=2 \\ 5t&=2 \\ t&=\dfrac{2}{5}=0.4 \end{align}\]
They will fall out of radio contact in 0.4 hours, or 24 minutes.
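A short Python sketch (ours) verifying the walkers' distance model; the function names are illustrative.

```python
# Sketch: distance between the two walkers via the Pythagorean Theorem.
import math

def anna(t):
    return 4 * t   # miles east after t hours

def emanuel(t):
    return 3 * t   # miles south after t hours

def distance(t):
    # Hypotenuse of the right triangle formed by their paths.
    return math.hypot(anna(t), emanuel(t))

t_limit = 2 / 5                  # solve 5t = 2
print(distance(t_limit))         # ≈ 2.0 miles, the radio's range
print(t_limit * 60)              # ≈ 24.0 minutes
```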
Should I draw diagrams when given information based on a geometric shape?
Yes. Sketch the figure and label the quantities and unknowns on the sketch.
Example \(\PageIndex{3}\): Using a Diagram to Model Distance between Cities
There is a straight road leading from the town of Westborough to Agritown 30 miles east and 10 miles north. Partway down this road, it junctions with a second road, perpendicular to the first, leading to the town of Eastborough. If the town of Eastborough is located 20 miles directly east of the town of Westborough, how far is the road junction from Westborough?
It might help here to draw a picture of the situation. See Figure \(\PageIndex{5}\). It would then be helpful to introduce a coordinate system. While we could place the origin anywhere, placing it at Westborough seems convenient. This puts Agritown at coordinates \((30, 10)\), and Eastborough at \((20,0)\).
Using this point along with the origin, we can find the slope of the line from Westborough to Agritown:
\[m=\dfrac{10-0}{30-0}=\dfrac{1}{3}\]
The equation of the road from Westborough to Agritown would be
\[W(x)=\dfrac{1}{3}x\]
From this, we can determine the perpendicular road to Eastborough will have slope \(m=–3\). Because the town of Eastborough is at the point \((20, 0)\), we can find the equation:
\[\begin{align} E(x)&=−3x+b \\ 0&=−3(20)+b &\text{Substitute in $(20, 0)$} \\ b&=60 \\ E(x)&=−3x+60 \end{align}\]
We can now find the coordinates of the junction of the roads by finding the intersection of these lines. Setting them equal,
\[\begin{align} \dfrac{1}{3}x&=−3x+60 \\ \dfrac{10}{3}x&=60 \\ 10x&=180 \\ x&=18 &\text{Substituting this back into $W(x)$} \\ y&=W(18) \\ &= \dfrac{1}{3}(18) \\&=6 \end{align}\]
The roads intersect at the point \((18,6)\). Using the distance formula, we can now find the distance from Westborough to the junction.
\[\begin{align} \text{distance}&=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2} \\ &=\sqrt{(18-0)^2+(6-0)^2} \\ &\approx 18.974 \text{ miles} \end{align}\]
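The intersection-and-distance computation above can be checked with a few lines of Python (a sketch, ours; `W` and `E` follow the text's road names).

```python
# Sketch: the road junction and its distance from Westborough.
import math

def W(x):          # road from Westborough to Agritown, slope 1/3
    return x / 3

def E(x):          # perpendicular road to Eastborough, slope -3
    return -3 * x + 60

# Intersection: x/3 = -3x + 60  =>  10x = 180  =>  x = 18.
x_junction = 180 / 10
y_junction = W(x_junction)
print(x_junction, y_junction)                        # 18.0 6.0
print(round(math.hypot(x_junction, y_junction), 3))  # 18.974 (sqrt of 360)
```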
One nice use of linear models is to take advantage of the fact that the graphs of these functions are lines. This means real-world applications discussing maps need linear functions to model the distances between reference points.
\(\PageIndex{3}\): There is a straight road leading from the town of Timpson to Ashburn 60 miles east and 12 miles north. Partway down the road, it junctions with a second road, perpendicular to the first, leading to the town of Garrison. If the town of Garrison is located 22 miles directly east of the town of Timpson, how far is the road junction from Timpson?
Real-world situations including two or more linear functions may be modeled with a system of linear equations. Remember, when solving a system of linear equations, we are looking for points the two lines have in common. Typically, there are three types of answers possible, as shown in Figure \(\PageIndex{6}\).
Given a situation that represents a system of linear equations, write the system of equations and identify the solution.
Identify the input and output of each linear model.
Identify the slope and y-intercept of each linear model.
Find the solution by setting the two linear functions equal to one another and solving for \(x\), or find the point of intersection on a graph.
Example \(\PageIndex{4}\): Building a System of Linear Models to Choose a Truck
Rental Company
Jamal is choosing between two truck-rental companies. The first, Keep on Trucking, Inc., charges an up-front fee of $20, then 59 cents a mile[1]. The second, Move It Your Way, charges an up-front fee of $16, then 63 cents a mile. When will Keep on Trucking, Inc. be the better choice for Jamal?
The two important quantities in this problem are the cost and the number of miles driven. Because we have two companies to consider, we will define two functions.
Input: \(d\), distance driven in miles
Outputs: \(K(d):\) cost, in dollars, for renting from Keep on Trucking
\(M(d):\) cost, in dollars, for renting from Move It Your Way
Initial Value: Up-front fee: \(K(0)=20\) and \(M(0)=16\)
Rate of Change: $0.59 per mile for \(K(d)\) and $0.63 per mile for \(M(d)\); these are the slopes of the two functions.
A linear function is of the form \(f(x)=mx+b\). Using the rates of change and initial charges, we can write the equations
\[K(d)=0.59d+20\]
\[M(d)=0.63d+16\]
Using these equations, we can determine when Keep on Trucking, Inc., will be the better choice. Because all we have to make that decision from is the costs, we are looking for when Keep on Trucking will cost less, or when \(K(d)<M(d)\). The solution pathway will lead us to find the equations for the two functions, find the intersection, and then see where the \(K(d)\) function is smaller.
These graphs are sketched in Figure \(\PageIndex{7}\), with \(K(d)\) in blue.
To find the intersection, we set the equations equal and solve:
\[\begin{align} K(d)&=M(d) \\ 0.59d+20&=0.63d+16 \\ 4&=0.04d \\ d&=100 \end{align}\]
This tells us that the cost from the two companies will be the same if 100 miles are driven. Either by looking at the graph, or noting that \(K(d)\) is growing at a slower rate, we can conclude that Keep on Trucking, Inc. will be the cheaper price when more than 100 miles are driven, that is \(d>100\).
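The break-even computation can be sketched in Python (ours; `K` and `M` match the text's function names).

```python
# Sketch: break-even mileage for the two truck-rental companies.

def K(d):   # Keep on Trucking: $20 up front plus $0.59 per mile
    return 0.59 * d + 20

def M(d):   # Move It Your Way: $16 up front plus $0.63 per mile
    return 0.63 * d + 16

# 0.59d + 20 = 0.63d + 16  =>  4 = 0.04d  =>  d = 100.
d_break_even = (20 - 16) / (0.63 - 0.59)
print(round(d_break_even))    # 100 miles
print(K(150) < M(150))        # True: K is cheaper beyond the break-even point
print(K(50) < M(50))          # False: M is cheaper for short trips
```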
We can use the same problem strategies that we would use for any type of function.
When modeling and solving a problem, identify the variables and look for key values, including the slope and y-intercept.
Draw a diagram, where appropriate.
Check for reasonableness of the answer.
Linear models may be built by identifying or calculating the slope and using the y-intercept.
The x-intercept may be found by setting \(y=0\), which is setting the expression \(mx+b\) equal to 0.
The point of intersection of a system of linear equations is the point where the x- and y-values are the same.
A graph of the system may be used to identify the points where one line falls below (or above) the other line.
Explain how to find the input variable in a word problem that uses a linear function.
Determine the independent variable. This is the variable upon which the output depends.
Explain how to find the output variable in a word problem that uses a linear function.
Explain how to interpret the initial value in a word problem that uses a linear function.
To determine the initial value, find the output when the input is equal to zero.
Explain how to determine the slope in a word problem that uses a linear function.
Find the area of a parallelogram bounded by the \(y\) axis, the line \(x=3\), the line \(f(x)=1+2x\),and the line parallel to \(f(x)\) passing through (2, 7).
6 square units
Find the area of a triangle bounded by the x-axis, the line \(f(x)=12–\frac{1}{3}x\), and the line perpendicular to \(f(x)\) that passes through the origin.
Find the area of a triangle bounded by the y-axis, the line \(f(x)=9–\frac{6}{7}x\), and the line perpendicular to \(f(x)\) that passes through the origin.
20.012 square units
Find the area of a parallelogram bounded by the x-axis, the line \(g(x)=2\), the line \(f(x)=3x\), and the line parallel to \(f(x)\) passing through (6,1).
For the following exercises, consider this scenario: A town's population has been decreasing at a constant rate. In 2010 the population was 5,900. By 2012 the population had dropped to 4,700. Assume this trend continues.
Identify the year in which the population will reach 0.
For the following exercises, consider this scenario: A town's population has been increasing at a constant rate. In 2010 the population was 46,020. By 2012 the population had increased to 52,070. Assume this trend continues.
For the following exercises, consider this scenario: A town has an initial population of 75,000. It grows at a constant rate of 2,500 per year for 5 years.
Find the linear function that models the town's population \(P\) as a function of the year, \(t\), where \(t\) is the number of years since the model began.
\(P(t)=75,000+2500t\)
Find a reasonable domain and range for the function \(P\).
If the function P is graphed, find and interpret the x- and y-intercepts.
(–30, 0) Thirty years before the start of this model, the town had no citizens. (0, 75,000) Initially, the town had a population of 75,000.
If the function P is graphed, find and interpret the slope of the function.
When will the output reach 100,000?
Ten years after the model began.
What is the output 12 years from the onset of the model?
For the following exercises, consider this scenario: The weight of a newborn is 7.5 pounds. The baby gained one-half pound a month for its first year.
Find the linear function that models the baby's weight \(W\) as a function of the age of the baby, in months, \(t\).
\(W(t)=0.5t+7.5\)
Find a reasonable domain and range for the function \(W\).
If the function W is graphed, find and interpret the x- and y-intercepts.
\((−15,0)\): The x-intercept is not a plausible set of data for this model because it means the baby weighed 0 pounds 15 months prior to birth. \((0, 7.5)\): The baby weighed 7.5 pounds at birth.
If the function W is graphed, find and interpret the slope of the function.
When did the baby weigh 10.4 pounds?
At age 5.8 months.
What is the output when the input is 6.2? Interpret your answer.
For the following exercises, consider this scenario: The number of people afflicted with the common cold in the winter months steadily decreased by 205 each year from 2005 until 2010. In 2005, 12,025 people were afflicted.
Find the linear function that models the number of people afflicted with the common cold, \(C\), as a function of the year, \(t\).
\(C(t)=12,025−205t\)
Find a reasonable domain and range for the function \(C\).
If the function C is graphed, find and interpret the x- and y-intercepts.
\((58.7, 0)\): In roughly 59 years, the number of people afflicted with the common cold would be 0. \((0, 12,025)\): Initially there were 12,025 people afflicted by the common cold.
If the function C is graphed, find and interpret the slope of the function.
When will the output reach 0?
In what year will the number of people be 9,700?
For the following exercises, use the graph in Figure, which shows the profit, \(y\), in thousands of dollars, of a company in a given year, \(t\), where \(t\) represents the number of years since 1980.
Graph of a line from (15, 150) to (25, 130).
Find the linear function \(y\), where \(y\) depends on \(t\), the number of years since 1980.
\(y=−2t+180\)
Find and interpret the x-intercept.
In 2070, the company's profit will be zero.
Find and interpret the slope.
\(y=30t−300\)
(10, 0) In 1990, the company's profit was zero.
For the following exercises, use the median home values in Mississippi and Hawaii (adjusted for inflation) shown in Table. Assume that the house values are changing linearly.
Year 1950: Mississippi $25,200; Hawaii $74,400
Year 2000: Mississippi $71,400; Hawaii $272,700
In which state have home values increased at a higher rate?
If these trends were to continue, what would be the median home value in Mississippi in 2010?
If we assume the linear trend existed before 1950 and continues after 2000, the two states' median house values will be (or were) equal in what year? (The answer might be absurd.)
During the year 1933
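The home-value comparison can be checked numerically (a sketch, ours; variable names are illustrative and \(t\) counts years since 1950).

```python
# Sketch: comparing the two linear home-value trends.

m_miss = (71400 - 25200) / 50    # 924 dollars per year (Mississippi)
m_haw = (272700 - 74400) / 50    # 3966 dollars per year (Hawaii)

# 25200 + 924t = 74400 + 3966t  =>  t = -49200 / 3042.
t_equal = (25200 - 74400) / (m_haw - m_miss)
print(m_haw > m_miss)         # True: Hawaii's values rise faster
print(int(1950 + t_equal))    # 1933: the values were equal during that year
```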
For the following exercises, use the median home values in Indiana and Alabama (adjusted for inflation) shown in Table. Assume that the house values are changing linearly.
If these trends were to continue, what would be the median home value in Indiana in 2010?
In 2004, a school population was 1001. By 2008 the population had grown to 1697. Assume the population is changing linearly.
a. How much did the population grow between the year 2004 and 2008?
b. How long did it take the population to grow from 1001 students to 1697 students?
c. What is the average population growth per year?
d. What was the population in the year 2000?
e. Find an equation for the population, \(P\), of the school \(t\) years after 2000.
f. Using your equation, predict the population of the school in 2011.
a. 696 people; b. 4 years; c. 174 people per year; d. 305 people; e. \(P(t)=305+174t\); f. 2219 people
In 2003, a town's population was 1431. By 2007 the population had grown to 2134. Assume the population is changing linearly.
b. How long did it take the population to grow from 1431 people to 2134 people?
e. Find an equation for the population, \(P\), of the town \(t\) years after 2000.
f. Using your equation, predict the population of the town in 2014.
A phone company has a monthly cellular plan where a customer pays a flat monthly fee and then a certain amount of money per minute used on the phone. If a customer uses 410 minutes, the monthly cost will be $71.50. If the customer uses 720 minutes, the monthly cost will be $118.
a. Find a linear equation for the monthly cost of the cell plan as a function of \(x\), the number of monthly minutes used.
b. Interpret the slope and y-intercept of the equation.
c. Use your equation to find the total monthly cost if 687 minutes are used.
a. \(C(x)=0.15x+10\); b. The flat monthly fee is $10 and there is an additional $0.15 fee for each additional minute used; c. $113.05
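The two-point fit behind this answer can be sketched in Python (ours; `C` matches the answer's function name).

```python
# Sketch: recovering the cell-plan model from the two (minutes, cost) pairs.

m = (118 - 71.50) / (720 - 410)   # 46.5 / 310 = 0.15 dollars per minute
b = 71.50 - m * 410               # 10: the flat monthly fee

def C(x):
    return m * x + b

print(round(m, 2), round(b, 2))   # 0.15 10.0
print(round(C(687), 2))           # 113.05
```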
A phone company has a monthly cellular data plan where a customer pays a flat monthly fee of $10 and then a certain amount of money per megabyte (MB) of data used on the phone. If a customer uses 20 MB, the monthly cost will be $11.20. If the customer uses 130 MB, the monthly cost will be $17.80.
a. Find a linear equation for the monthly cost of the data plan as a function of \(x\), the number of MB used.
c. Use your equation to find the total monthly cost if 250 MB are used.
In 1991, the moose population in a park was measured to be 4,360. By 1999, the population was measured again to be 5,880. Assume the population continues to change linearly.
a. Find a formula for the moose population, \(P\) since 1990.
b. What does your model predict the moose population to be in 2003?
a. \(P(t)=190t+4170\) b. 6640 moose
In 2003, the owl population in a park was measured to be 340. By 2007, the population was measured again to be 285. The population changes linearly.
a. Find a formula for the owl population, P. Let the input be years since 2003.
b. What does your model predict the owl population to be in 2012?
The Federal Helium Reserve held about 16 billion cubic feet of helium in 2010 and is being depleted by about 2.1 billion cubic feet each year.
a. Give a linear equation for the remaining federal helium reserves, R, in terms of \(t\), the number of years since 2010.
b. In 2015, what will the helium reserves be?
c. If the rate of depletion doesn't change, in what year will the Federal Helium Reserve be depleted?
a. \(R(t)=16−2.1t\) b. 5.5 billion cubic feet c. During the year 2017
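A quick check of the helium answers (a sketch, ours; `R` matches the answer's function name, with \(t\) in years since 2010).

```python
# Sketch: the helium-reserve model R(t) = 16 - 2.1t (billion cubic feet).

def R(t):
    return 16 - 2.1 * t

print(R(5))                     # 5.5: reserves in 2015
t_depleted = 16 / 2.1           # about 7.6 years
print(2010 + int(t_depleted))   # 2017: the year the reserve runs out
```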
Suppose the world's oil reserves in 2014 are 1,820 billion barrels. If, on average, the total reserves are decreasing by 25 billion barrels of oil each year:
a. Give a linear equation for the remaining oil reserves, R, in terms of \(t\), the number of years since now.
b. Seven years from now, what will the oil reserves be?
c. If the rate at which the reserves are decreasing is constant, when will the world's oil reserves be depleted?
You are choosing between two different prepaid cell phone plans. The first plan charges a rate of 26 cents per minute. The second plan charges a monthly fee of $19.95 plus 11 cents per minute. How many minutes would you have to use in a month in order for the second plan to be preferable?
More than 133 minutes
You are choosing between two different window washing companies. The first charges $5 per window. The second charges a base fee of $40 plus $3 per window. How many windows would you need to have for the second company to be preferable?
When hired at a new job selling jewelry, you are given two pay options:
Option A: Base salary of $17,000 a year with a commission of 12% of your sales
Option B: Base salary of $20,000 a year with a commission of 5% of your sales
How much jewelry would you need to sell for option A to produce a larger income?
More than $42,857.14 worth of jewelry
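The commission comparison can be verified with a short sketch (ours; the function names are illustrative).

```python
# Sketch: break-even sales for the two jewelry pay options.

def option_a(sales):
    return 17000 + 0.12 * sales

def option_b(sales):
    return 20000 + 0.05 * sales

# 17000 + 0.12s = 20000 + 0.05s  =>  0.07s = 3000.
s_equal = (20000 - 17000) / (0.12 - 0.05)
print(round(s_equal, 2))                  # 42857.14
print(option_a(50000) > option_b(50000))  # True: A pays more above break-even
```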
When hired at a new job selling electronics, you are given two pay options:
Option A: Base salary of $10,000 a year with a commission of 9% of your sales
How much electronics would you need to sell for option A to produce a larger income?
1 Rates retrieved Aug 2, 2010 from http://www.budgettruck.com and http://www.uhaul.com/
Jay Abramson (Arizona State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at https://openstax.org/details/books/precalculus.
\begin{document}
\title{Ideals in deformation quantizations over $\mathbb{Z}/p^n\mathbb{Z}$} \author{Akaki Tikaradze} \address{The University of Toledo, Department of Mathematics, Toledo, Ohio, USA} \email{\tt [email protected]} \maketitle
\begin{abstract} Let $\bf{k}$ be a perfect field of characteristic $p>2.$ Let $A_1$ be an Azumaya algebra over a smooth symplectic affine variety over $\bf{k}.$ Let $A_n$ be a deformation quantization of $A_1$ over $W_n(\bf{k})$. We prove that all $W_n(\bf{k})$-flat two-sided ideals of $A_n$ are generated by central elements.
\end{abstract}
\vspace*{0.5in}
Let $\bf{k}$ be a perfect field of characteristic $p>2.$
For $n\geq 1$, let $W_n(\bold{k})$ denote the ring of length $n$ Witt vectors over $\bf{k}.$ Also, $W(\bold{k})$ will denote the ring of Witt vectors over $\bf{k}$. As usual, given an algebra $B$ its center will be denoted by $Z(B).$
Throughout the paper we will fix once and for all an affine smooth symplectic variety $X$ over $\bf{k},$
and an Azumaya algebra $A_1$ over $X$ (equivalently, over $\mathcal O_X$). Thus, we may (and will) identify the center of $A_1$ with $\mathcal O_X$, the structure ring of $X$: $Z(A_1)=\mathcal O_X.$
Let $\lbrace, \rbrace$ denote the corresponding Poisson bracket on $\mathcal O_X.$ A deformation quantization of $A_1$ over $W_n(\bold{k}), n\geq 1$ is, by definition, a flat associative $W_n(\bf{k})$-algebra $A$ equipped with an isomorphism $A/pA\simeq A_1$ such that for all $a, b\in A$ with $a\ mod\ p\in \mathcal O_X$ and $b\ mod\ p\in \mathcal O_X,$ one has $$\lbrace a\ mod\ p, b\ mod\ p\rbrace=(\frac{1}{p} [a, b])\ mod\ p.$$ One defines similarly a quantization of $A_1$ over $W(\bold{k}).$
The main result of this note is the following
\begin{theorem}\label{azumaya} Let $A$ be a deformation quantization over $W_n(\bf{k})$ of an Azumaya algebra $A_1$ over $X.$
Let $I\subset A$ be a two-sided ideal which is flat over $W_n(\bold{k}).$ Then $I$ is generated
by central elements: $I=(Z(A)\cap I)A.$
\end{theorem} \footnote{We showed in \cite{Ti} that the Hochschild cohomology of a quantization $A$ is isomorphic to the de Rham-Witt
complex $W_n\Omega^{*}_X$ of $X$.} Before proving this result we will need to recall some results of Stewart and Vologodsky \cite{SV} on centers of certain algebras over $W_n(\bf{k})$.
Throughout, for an associative flat $W_n(\bf{k})$-algebra $R,$ we will denote its reduction $\mod p^m$ by $R_m=R/p^mR.$ Also, the center of the algebra $R_m$ will be denoted by $Z_m,$ $m\leq n.$ Recall that in this setting there is the natural deformation Poisson bracket on $Z_1$ defined as follows. Given $z, w\in Z_1,$ let $\tilde{z},\tilde{w}$ be lifts in $R$ of $z, w$ respectively. Then put $$\lbrace z, w\rbrace=\frac{1}{p}[\tilde{z}, \tilde{w}] \mod p.$$
In this setting, Stewart and Vologodsky [\cite{SV}, formula (1.3)] constructed a ring homomorphism $\phi_m:W_m(Z_1)\to Z_m$ from the ring of length $m$ Witt vectors over $Z_1$ to $Z_m,$
defined as follows $$\phi_m(z_1,\cdots, z_m)=\sum_{i=1}^mp^{i-1}\tilde{z_i}^{p^{m-i}}$$ where $\tilde{z_i}\in R$ is a lift of $z_i, 1\leq i\leq m$. We also have the following natural maps $$r:Z_m\to Z_{m-1}, r(x)= x\mod p^{m-1}, \quad v:Z_{m-1}\to Z_m, v(x)=p\tilde{x}$$ where $\tilde{x}$ is a lift of $x$ in $R_m.$ On the other hand, on the level of Witt vectors of $Z_1,$ we have the Verschiebung map $V:W_m(Z_1)\to W_{m+1}(Z_1)$ and the Frobenius map $F:W_m(Z_1)\to W_{m-1}(Z_1).$ It was checked in \cite{SV} that the above maps commute: $$\phi_{m-1}F=r\phi_m,\quad \phi_mV=v\phi_{m-1}.$$
We will recall the following crucial computation from \cite{SV}.
Let $x=\phi_m(z), z=(z_1,\cdots,z_m)\in W_m(Z_1)$ and let $\tilde{x}$ be a lift of $x$ in $R_{m+1}.$ Then it was verified in \cite{SV} that the following equality holds in $Der_{\bf{k}}(Z_1, Z_1)$ \begin{equation}\label{1}
\delta_x=(\frac{1}{p^m}[\tilde{x},-])\mod p|_{Z_1}=\sum_{i=1}^m z_i^{p^{m-i}-1}\lbrace z_i,-\rbrace \end{equation}
The main result of [\cite{SV}, Theorem 1] states that if $\spec Z_1$ is a smooth variety and the deformation Poisson bracket on $Z_1$ is induced from a symplectic form on $\spec Z_1,$ then the map $\phi_m$ is an isomorphism for all $m\leq n$. In particular, $${Z_1}^{p^m}=Z_{m+1}\mod p.$$ We will need the following slight generalization of this result. Its proof follows very closely the one in [\cite{SV}, Theorem 1].
\begin{prop}\label{center}
Let $n\geq 1$ and $m\subset \mathcal O{}_X$ be an ideal, and let $B=\mathcal O{}_X/m^{p^{n}}\mathcal O{}_X.$ Let $R$ be an associative flat $W_n(\bold{k})$-algebra such that $Z(R/pR)=B$ and the corresponding deformation Poisson bracket
on $B$ coincides with the one induced from $X.$ Then
$$Z(R)=\phi_n(W_n(B)),\quad Z(R)\cap pR=\phi_n(VW_{n-1}(B)).$$
\end{prop}
Just as in [\cite{SV}, Lemma 2.7], the following result plays the crucial role.
\begin{lemma}\label{muh}
Let $z_1,\cdots, z_n\in B$ be such that $\sum_{i=1}^{n} z_i^{p^{n-i}-1}dz_i=0.$ Then $z_i\in B^p+{\bar{m}}^{p^{i}}B,$ where $\bar{m}=m/m^{p^n}\mathcal O{}_X.$ \end{lemma}
\begin{proof} Put $S=\mathcal O_X.$ We will proceed by induction on $n.$ Let $n=1$ and let $x_1\in S$ be a lift of $z_1.$ Thus $dx_1\in m^p\Omega^1_S\cap dS.$ Since $\Omega^1_S/dS$ is a flat $S^p$-module, it follows that
$$m^p\Omega^1_S\cap dS=m^pdS=d(m^pS).$$ So, $dx_1\in d(m^pS)$. Hence $$x_1\in m^pS+Ker(d)=m^pS+S^p.$$ Assume now that our statement is true for $n-1.$ Let $x_i\in S$ be a lift of $z_i,\ 1\leq i\leq n.$ Thus $$\sum_{i=1}^{n} x_i^{p^{n-i}-1}dx_i\in m^{p^{n}}\Omega^1_S.$$ As usual, $Z^1(\Omega_S)$ will denote $Ker(d)\subset \Omega^1_S.$ Remark that $$Z^1(\Omega_S)\cap m^{p^{n}}\Omega^1_S=m^{p^{n}}Z^1(\Omega_S);$$ this follows from flatness of
$\Omega^1_S/Z^1(\Omega_S)$ over $S^{p^{n}}.$ Thus $$\sum x_i^{p^{n-i}-1}dx_i\in m^{p^{n}}Z^1(\Omega_S).$$ Recall that the inverse Cartier map $C^{-1}:\Omega^1_S\to \Omega^1_S/dS$ is defined by $$C^{-1}(fdg)=f^pg^{p-1}dg.$$ Recall also that smoothness of $S$ implies that $C^{-1}$ defines an isomorphism onto
$Z^1(\Omega_S)/dS.$ Thus we may write $$C^{-1}\Big(\sum_{i<n}x_i^{p^{n-1-i}-1}dx_i\Big)=C^{-1}(x)+dx', \qquad x\in m^{p^{n-1}}\Omega^1_S,\ x'\in S.$$ Injectivity of $C^{-1}$ implies that $$\sum_{i<n}x_i^{p^{n-1-i}-1}dx_i\in m^{p^{n-1}}\Omega^1_S.$$ Thus by the induction assumption we get that $x_i\in S^p+m^{p^{i}}S$ for $i<n.$ Therefore $dx_n\in m^{p^{n}}\Omega^1_S\cap dS$, so $x_n\in S^p+m^{p^{n}}S.$ \end{proof}
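The inverse Cartier operator used in the proof above can be probed in coordinates. The following sketch of ours checks, for one arbitrary pair of polynomials in two variables, that $C^{-1}(f\,dg)=f^pg^{p-1}dg$ is a closed $1$-form modulo $p$:

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 3
f = 1 + x*y + y**2        # arbitrary test polynomials
g = x**2 + x*y**3

# C^{-1}(f dg) has components (h*g_x, h*g_y) with h = f^p * g^(p-1)
h = sp.expand(f**p * g**(p - 1))
omega_x = sp.expand(h * sp.diff(g, x))
omega_y = sp.expand(h * sp.diff(g, y))

# closedness mod p: the dx^dy coefficient of d(omega) must vanish mod p;
# over the integers it equals p * f^(p-1) * g^(p-1) * (f_x g_y - f_y g_x)
curl = sp.expand(sp.diff(omega_y, x) - sp.diff(omega_x, y))
assert all(int(c) % p == 0 for c in sp.Poly(curl, x, y).coeffs())
print("C^{-1}(f dg) is a closed 1-form modulo p")
```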
\begin{lemma}\label{key} Let $z_1,\cdots, z_n\in B$ be such that $\sum_{i=1}^nz_i^{p^{n-i}-1}\lbrace z_i, -\rbrace=0.$ Then $z_i\in B^p+\bar{m}^{p^{i}}B.$ \end{lemma}
\begin{proof}
The symplectic form $\omega\in \Omega^2_S$ gives an isomorphism $$\iota: \Omega^1_S\to Der_{\bf{k}}(S,S)=Hom_S(\Omega^1_S, S)$$ such that $\iota(gdf)=g\lbrace f, -\rbrace,\ f, g\in S.$ Since $\Omega^1_B=\Omega^1_S\otimes_{S}B$, we get an isomorphism $\bar{\iota}:\Omega^1_B\to Der_{\bf{k}}(B, B)$ defined as $\bar{\iota}(xdy)=x\lbrace y, -\rbrace.$ Thus, applying $\bar{\iota}^{-1}$ to the equality $$\sum_{i=1}^nz_i^{p^{n-i}-1}\lbrace z_i, -\rbrace=0,$$ we obtain that $$\sum_{i=1}^n z_i^{p^{n-i}-1}dz_i=0.$$ Hence by Lemma \ref{muh}
we are done. \end{proof}
\noindent Once Lemma \ref{key} is established, the proof of Proposition \ref{center} is identical to the one in \cite{SV}. Indeed, by the induction assumption, $\phi_{n-1}:W_{n-1}(B)\to Z(R_{n-1})$ is surjective. Let $x\in Z(R)$. Then there exists $z=(z_1,\cdots,z_{n-1})\in W_{n-1}(B)$ such that
$$x'=x\mod p^{n-1}=\phi_{n-1}(z).$$ Hence by equality \eqref{1}, we have $$0=\delta_{x'}=\sum_{i=1}^{n-1}z_i^{p^{n-1-i}-1}\lbrace z_i,-\rbrace.$$ Therefore, by Lemma \ref{key}, we get that $z_i=a_i^p+b_i$, where $b_i\in \bar{m}^{p^{i}}B.$ Thus, $$\phi_{n-1}(z)=\phi_{n-1}(a_1^p,\cdots,a_{n-1}^p)\in \phi_n(W_{n}(B))\mod p^{n-1}.$$ Hence $$Z(R)\subset \phi_{n}(W_n(B))+p^{n-1}R.$$ Now, since $v:Z(R_{n-1})\to Z(R)\cap pR$ is an isomorphism, surjectivity of $\phi_{n-1}$ implies that $\phi_n(W_n(B))=Z(R)$ and $\phi_n(VW_{n-1}(B))=Z(R)\cap pR.$ This concludes the proof of Proposition \ref{center}.
Now we can prove our main result.
\begin{proof}[Proof of Theorem \ref{azumaya}]
We proceed by induction on $n.$ When $n=1,$ the statement is a standard property of Azumaya algebras [\cite{MR}, Proposition 7.9]. We will assume that the statement holds for $n$ and prove it for $n+1.$ Thus, $A$ is a quantization of $A_1$ over $W_{n+1}(\bf{k})$, and $I\subset A$ is a $W_{n+1}(\bf{k})$-flat two-sided ideal. Let us identify $Z_m=Z(A/p^mA)$ with $W_m(Z_1)$ via the isomorphism $\phi_m,m\leq n+1.$ We will put $I_i=I\mod p^i.$ Since by the assumption $I_i, A_i$ are free $W_i(\bold{k})$-modules, it follows that $A_i/I_i$ is a free $W_i(\bold{k})$-module for all $i\leq n+1.$
Put $m_{n}=Z_{n}\cap I_{n}.$ Recall that $r^i:A_m\to A_{m-i}, i\leq m$ denotes the projection map, while $v^i:A_{m-i}\to A_m$ denotes the multiplication by $p^i.$ For $i\leq n$, let us put $m_i=m_{n}\mod p^i.$ So $m_i$ is an ideal in $Z_i.$ It follows from the inductive assumption that for all $i\leq n,$ we have $I_i=m_iA_i.$ Moreover, since $A_1$ is an Azumaya algebra, we have $m_1=m_1A_1\cap Z_1.$
Since $A/I$ is a free $W_{n+1}(\bold{k})$-module, we have a short exact sequence $$ \begin{CD} 0\to I_1 @>{{v}}^{n}>> I@>r >> I_{n} \to 0 . \end{CD} $$ We claim that for all $x\in m_{n}$ we have $${F}^{n-1}d(x)\in m_1\Omega^1_{Z_1},$$ where $$F:W_n\Omega^{*}_{Z_1}\to W_{n-1}\Omega^{*}_{Z_1}$$
is the Frobenius map of the de Rham-Witt complex of $Z_1.$ Indeed, it follows from the above exact sequence that if $\tilde{x}$ is a lift of $x$ in $A_{n+1}$, then $\delta_x=\frac{1}{p^n}ad(\tilde{x}):A_1\to A_1$ is a derivation such that $Im(\delta_x)\subset I_1$. In particular, $\delta_x|_{Z_1}$ is a derivation of $Z_1$ whose range lies in the ideal $m_1.$ Therefore, by equality \eqref{1}, the derivation $\delta_x|_{Z_1}$ corresponds to $F^{n-1}(dx)\in \Omega^1_{Z_1}$ under the identification $Der(Z_1)=\Omega^1_{Z_1}$ given by the symplectic form on $X.$ Thus we obtain the desired inclusion. Recalling that $m_1=F^{n-1}(m_n)Z_1$, we get
$$F^{n-1}d(m_{n})\subset F^{n-1}(m_{n})\Omega^1_{Z_1}.$$
Next we will use the following
\begin{lemma}\label{frob}
Let $S$ be a smooth commutative ring, essentially of finite type over $\bold{k}.$ Let $m$ be an ideal in $W_{n}(S)$ such that $F^{n-1}(dm)\subset F^{n-1}(m)\Omega^1_{S}.$ Put $\bar{m}=m \mod VW_{n-1}(S).$ Then $\bar{m}=(\bar{m}\cap S^p)S.$
\end{lemma}
\begin{proof}
First we will show that $$F^{n-1}(m)\Omega^1_S\cap F^{n-1}(W_{n}\Omega^1_S)=F^{n-1}(mW_{n}\Omega^1_S).$$ For simplicity we put $N_n=F^{n-1}(W_{n}\Omega^1_S).$ Thus, we want to show that $$\bar{m}^{p^{n-1}}\Omega^1_S\cap N_n=\bar{m}^{p^{n-1}}N_n.$$ It follows from our assumptions on $S$ that $\Omega^1_S/N_n$ is a flat $S^{p^{n-1}}$-module [\cite{Il}, Proposition 2.2.8 and isomorphism (3.11.3)]. Therefore $N_n/\bar{m}^{p^{n-1}}N_n$ injects into $\Omega^1_S/\bar{m}^{p^{n-1}}\Omega^1_S.$ Hence $$\bar{m}^{p^{n-1}}\Omega^1_S\cap N_n=\bar{m}^{p^{n-1}}N_n.$$ Thus $$F^{n-1}(dm)\subset F^{n-1}(mW_{n}\Omega^1_{S}).$$ Recall that $Ker F^{n-1}=V(W_{n-1}\Omega^1_S).$ Hence we can conclude that $$dm\subset mW_{n}\Omega^1_S+V(W_{n-1}\Omega^1_S).$$ Therefore, $d\bar{m}\subset \bar{m}\Omega^1_S.$
Now we claim that $\bar{m}=(\bar{m}\cap S^p)S.$ Since the statement is local, we may assume that $S$ is a regular local ring. It follows that any derivation $g:S\to S$ preserves $\bar{m}$: $g(\bar{m})\subset \bar{m}.$ Thus, $\bar{m}$ is a submodule of $S$ viewed as a $\Hom_{S^p}(S, S)$-module. Since $S$ is a finite rank free $S^p$-module, $\Hom_{S^p}(S, S)$ is a matrix algebra over $S^p$ and the claim follows.
So $\bar{m}={m'}^pS$ for some ideal $m'\subset S.$
\end{proof} Thus, using Lemma \ref{frob}, we conclude that $m_n \mod VW_{n-1}(Z_1)$ is generated by elements of $Z_1^p.$ Since $m_1=F^{n-1}(m_n)Z_1,$ we get that $m_1=(m_1\cap Z_1^{p^n})Z_1.$ Thus $m_1=l^{p^n}Z_1$ for some ideal $l\subset Z_1.$ We have a short exact sequence
$$ \begin{CD} 0\to I_n @>{{v}}>> I@>\mod p >> I_{1} \to 0 . \end{CD} $$ Recall that $I_n=m_nA_n, m_n\subset Z_n.$
Let $x\in m_1.$ Then there exist $z\in Z(A)$ and $y\in A$ such that $z+py\in I$ and $z\mod p=x.$ Therefore $$[py, A]\subset I.$$ Hence $$py\in Z(A/I)\cap pA/I.$$ Applying Proposition \ref{center} to $R=A/I,\ m=l^{p^n},$ we may write $$py=\sum_{i\geq 1}p^i{z}_i^{p^{n-i}}+py',$$ where $z_i \mod p\in Z_1,\ y'\in I.$ Thus $$z+\sum_{i\geq 1}p^iz_i^{p^{n-i}}+py'\in I,$$ moreover $y'\in I$ and $z\mod p=x.$ Thus $$z'=z+\sum_ip^iz_i^{p^{n-i}}\in Z(A)\cap I, \quad x=z' \mod p.$$ Hence $$I_1=(I\cap Z(A))A \mod p.$$
On the other hand, $$pI=vI_n=v((I_n\cap Z_n)A_n)\subset (I\cap Z(A))A.$$ Therefore $I=(Z(A)\cap I)A.$
\end{proof}
A typical setting where Theorem \ref{azumaya} can be used is as follows. Let $Y$ be a smooth affine variety over $W_n(\bf{k}).$ Let $\bar{Y}$ denote the $\mod p$ reduction of $Y.$ Let $D_{Y}$ (respectively $D_{\bar{Y}}$) denote the ring of crystalline (PD) differential operators on $Y$ (respectively $\bar{Y}$). Then $D_{\bar{Y}}=D_Y/pD_Y$ is an Azumaya algebra over $X=T^{*}(\bar{Y})^{(1)}$, the Frobenius twist of the cotangent bundle of $\bar{Y}$ (by \cite{BMR}). Also, it follows from \cite{BK} that the corresponding deformation Poisson bracket on $X$ is induced from the symplectic form on $T^{*}(\bar{Y}).$ Thus, Theorem \ref{azumaya} applies to $A=D_{Y}.$
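The centrality statements underlying this example can be illustrated in one coordinate: modulo $p$, both $x^p$ and $\partial^p$ commute with every differential operator. A check of ours (the prime and the test polynomial are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')
p = 5
f = 1 + 2*x + 7*x**3 + x**6   # an arbitrary test polynomial

for k in range(8):
    u = x**k
    # [d^p, f] u = d^p(f u) - f d^p(u): every term carries either a binomial
    # coefficient C(p, j) or a product of p consecutive integers, so it
    # vanishes mod p
    comm1 = sp.expand(sp.diff(f*u, x, p) - f*sp.diff(u, x, p))
    assert all(int(c) % p == 0 for c in sp.Poly(comm1, x).all_coeffs())
    # [d, x^p] u = p * x^(p-1) * u, again 0 mod p
    comm2 = sp.expand(sp.diff(x**p * u, x) - x**p * sp.diff(u, x))
    assert all(int(c) % p == 0 for c in sp.Poly(comm2, x).all_coeffs())
print("x^p and d^p are central in the differential operators mod p")
```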
\begin{remark}
In Theorem \ref{azumaya} it is necessary to assume that the ideal $I$ is flat over $W_n(\bf{k}).$ Indeed, let $m$ be an ideal in $Z_1$ such that $(m\cap Z_1^{p^{n-1}})Z_1\ne m$. Let $I$ be the preimage of $mA_1$ in $A$ under the $\mod p$ reduction map. Then, since $$Z(A) \mod p=Z_1^{p^{n-1}},$$ it follows that $I\neq (I\cap Z(A))A.$ In particular, $A$ is not an Azumaya algebra over $Z(A)$ for $n\geq 2.$
\end{remark}
In what follows we will assume that the ground field $\bold{k}$ is algebraically closed. As a consequence of Theorem \ref{azumaya}, we have the following criterion for (topological) simplicity of $W(\bold{k})$-algebras.
\begin{cor}\label{simple}
Let $A$ be a quantization of $A_1$ over $W(\bold{k}).$ Then $Z(A)=W(\bf{k})$ and the algebra $A[p^{-1}]$ is (topologically) simple.
\end{cor}
\begin{proof}
As before, we will put $A_n=A/p^nA,\ n\geq 1.$ Let us denote by $r_i$ the quotient map $A\to A/p^iA.$ As before, $r^i:A_{n+i}\to A_{n}$ denotes the quotient map. Then it follows that $r^n(Z_{n+1})= Z_1^{p^n}.$ Hence $$r_1(Z(A))\subset \cap_{i=1}^{\infty} Z_1^{p^i}=\bold{k}.$$ Hence $Z(A)\subset W(\bold{k})+pA$, which implies that $$Z(A)\subset \cap_{n=1}^{\infty}(W(\bold{k})+p^nA)=W(\bold{k}).$$ Let $I\subset A[p^{-1}]$ be a closed two-sided ideal. Put $I'=I\cap A.$ Then $I'$ is a topologically free $W(\bold{k})$-module. Thus, $I'_n=r_n(I')$ is a two-sided ideal of $A_n$ which is free as a $W_n(\bf{k})$-module. Let us put $m_n=I'_n\cap Z_n,\ n\geq 1.$ Using Theorem \ref{azumaya} we obtain that for all $n\geq 1$ $$m_1={r}^n(m_{n+1})Z_1.$$ So, $$m_1=(m_1\cap Z_1^{p^n})Z_1.$$ Let us put $$m_1\cap Z_1^{p^n}=l_n^{p^n},\quad l_n\subset Z_1.$$ Clearly the ideals $l_n$ form an ascending chain: $$l_n\subset l_{n+1}\subset \cdots.$$ Thus $\cup_{n=1}^{\infty}l_n=l_i$ for some $i.$ To summarize, $m_1=l_i^{p^n}Z_1$ for all $n\geq i.$ Therefore $m_1^p=m_1.$
Thus, either $m_1=0$ or $m_1=Z_1.$ Therefore, $I'=0$ or $I'=A.$ \end{proof}
The next result provides a criterion for simplicity of algebras defined over global rings. Let $R$ be a commutative domain. We will say that an $R$-algebra $A$ has the generic freeness property over $R$ if for any finitely generated left $A$-module $M$ there exists a nonzero element $f\in R$ such that $M_f$ is a free $R_f$-module. \begin{cor}\label{Simple}
Let $R $ be a finitely generated subring of $\mathbb{C},$ and $F$ its field of fractions. Let $S$ be an $R$-algebra
which has the generic freeness property over $R.$ Assume that for all nonzero $f\in R$ and
for infinitely many primes $p,$
there exists an algebraically closed field $\bold{k}$ of characteristic $p$ and a homomorphism
$\rho:R_f\to W(\bold{k}),$ such that
$S\otimes_R\bold{k}$ is an Azumaya algebra and $\spec Z(S\otimes_R\bold{k})$ equipped with
the deformation Poisson bracket is a smooth
symplectic variety over $\bold{k}.$ Then the algebra $S\otimes_RF$ is simple.
\end{cor}
\begin{proof} Assume that the algebra $S\otimes_RF$ is not simple. Then there exists an $R$-torsion-free nonzero proper ideal $I\subset S$ such that $I\cap R=0$. By localizing $R$ we may assume, by the generic freeness property of $S$, that $S/I$ is a nonzero free $R$-module. Let $p$ be a prime and let $\rho:R\to W(\bold{k})$ be a homomorphism as in the statement.
Denote by $A$ the $p$-adic completion of $S\otimes_RW(\bold{k}),$ and denote the $p$-adic completion of $I\otimes_RW(\bold{k})$ by $\bar{I}.$
Thus, the algebra $A$ satisfies the assumptions of Corollary \ref{simple}. Hence $\bar{I}[p^{-1}]=A[p^{-1}].$ In particular, $A/\bar{I}=(S/I)\otimes_RW(\bold{k})$ is a nonzero $p$-torsion $W(\bold{k})$-module, a contradiction. \end{proof}
\end{document}
Science and Technology of Advanced Materials (Jan 2016), pp. 769–776

Understanding the peculiarities of the piezoelectric effect in macro-porous BaTiO3

Authors: James I. Roscow, Vitaly Yu. Topolov, Christopher R. Bowen, John Taylor, Anatoly E. Panich

Affiliations: Department of Mechanical Engineering, Materials and Structures Centre, University of Bath (Roscow); Department of Physics, Southern Federal University (Topolov); Department of Electrical and Electronic Engineering, University of Bath (Bowen); Institute of High Technologies and Piezotechnics, Southern Federal University
This work demonstrates the potential of porous BaTiO3 for piezoelectric sensor and energy-harvesting applications by manufacture of materials, detailed characterisation and application of new models. Ferroelectric macro-porous BaTiO3 ceramics for piezoelectric applications are manufactured for a range of relative densities, α = 0.30–0.95, using the burned out polymer spheres method. The piezoelectric activity and relevant parameters for specific applications are interpreted by developing two models: a model of a 3–0 composite and a 'composite in composite' model. The appropriate ranges of relative density for the application of these models to accurately predict piezoelectric properties are examined. The two models are extended to take into account the effect of 90° domain-wall mobility within ceramic grains on the piezoelectric coefficients $ d_{3j}^{\ast} $. It is shown that porous ferroelectrics provide a novel route to form materials with large piezoelectric anisotropy $ \left( {{{d_{33}^{\ast} } \mathord{\left/ {\vphantom {{d_{33}^{\ast} } {\left| {d_{31}^{\ast} } \right|}}} \right. \kern-0pt} {\left| {d_{31}^{\ast} } \right|}} > > 1} \right) $ at 0.20 ≤ α ≤ 0.45 and achieve a high squared figure of merit $ d_{33}^{\ast} $$ g_{33}^{\ast} $. The modelling approach allows a detailed analysis of the relationships between the properties of the monolithic and porous materials for the design of porous structures with optimum properties.
http://www.tandfonline.com/toc/tsta20/current
In this diagram, both polygons are regular. What is the value, in degrees, of the sum of the measures of angles $ABC$ and $ABD$?
[asy]
draw(10dir(0)--10dir(60)--10dir(120)--10dir(180)--10dir(240)--10dir(300)--10dir(360)--cycle,linewidth(2));
draw(10dir(240)--10dir(300)--10dir(300)+(0,-10)--10dir(240)+(0,-10)--10dir(240)--cycle,linewidth(2));
draw(10dir(300)+(-1,0)..9dir(300)..10dir(300)+dir(60),linewidth(2));
draw(10dir(300)+(-1.5,0)..10dir(300)+1.5dir(-135)..10dir(300)+(0,-1.5),linewidth(2));
label("A",10dir(240),W);
label("B",10dir(300),E);
label("C",10dir(0),E);
label("D",10dir(300)+(0,-10),E);
draw(10dir(300)+2dir(-135)--10dir(300)+dir(-135),linewidth(2));
[/asy]
The interior angle of a square is $90^\circ$, and the interior angle of a regular hexagon is $120^\circ$, making for a sum of $\boxed{210}$ degrees. If you don't have the interior angles memorized, you can calculate them using the formula $180\left(\frac{n-2}{n}\right)$ degrees, where $n$ is the number of sides in the polygon.
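A quick script confirming the arithmetic (our illustration of the formula above):

```python
def interior_angle(n):
    # interior angle of a regular n-gon, in degrees: 180*(n-2)/n
    return 180 * (n - 2) / n

assert interior_angle(4) == 90.0   # square
assert interior_angle(6) == 120.0  # regular hexagon
print(interior_angle(4) + interior_angle(6))  # 210.0
```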
\begin{document}
\theoremstyle{plain}
\newtheorem{Thm}{Theorem}[section] \newtheorem{Prop}{Proposition}[section] \newtheorem{Defi}{Definition}[section] \newtheorem{Cor}{Corollary}[section] \newtheorem{Lem}{Lemma}[section]
\makeatletter\renewcommand{\theequation}{ \thesection.\arabic{equation}} \@addtoreset{equation}{section}\makeatother
\setlength{\baselineskip}{16pt} \newcommand{\mathop{\overline{\lim}}}{\mathop{\overline{\lim}}} \newcommand{\mathop{\underline{\lim}}}{\mathop{\underline{\lim}}} \newcommand{\mathop{\mbox{Av}}}{\mathop{\mbox{Av}}} \newcommand{{\rm spec}}{{\rm spec}}
\def\rm{\rm} \def\({(\!(} \def\){)\!)} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{C}{\mathbb{C}} \def\mathbb{T}{\mathbb{T}} \def{\bf E}{{\bf E}} \def\mathbb{H}{\mathbb{H}} \def{\bf P}{{\bf P}} \def{\cal M}{{\cal M}} \def{\cal F}{{\cal F}} \def{\cal G}{{\cal G}} \def{\cal D}{{\cal D}} \def{\cal X}{{\cal X}} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal L}{{\cal L}} \def\alpha{\alpha} \def\beta{\beta}
\def\varepsilon{\varepsilon} \def\delta{\delta} \def\gamma{\gamma} \def\kappa{\kappa} \def\lambda{\lambda} \def\varphi{\varphi} \def\theta{\theta} \def\sigma{\sigma} \def\tau{\tau} \def\omega{\omega} \def\Delta{\Delta} \def\Gamma{\Gamma} \def\Lambda{\Lambda} \def\Omega{\Omega} \def\Theta{\Theta} \def\langle{\langle} \def\rangle{\rangle} \def\left({\left(} \def\right){\right)} \def\;\operatorname{const}{\;\operatorname{const}} \def\operatorname{dist}{\operatorname{dist}} \def\operatorname{Tr}{\operatorname{Tr}} \def\qquad\qquad{\qquad\qquad} \def\noindent{\noindent} \def\begin{eqnarray*}{\begin{eqnarray*}} \def\end{eqnarray*}{\end{eqnarray*}} \def\mbox{supp}{\mbox{supp}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def{\bf p}{{\bf p}} \def{\rm sgn\,}{{\rm sgn\,}} \def{\bf 1}{{\bf 1}} \def{\it Proof.}{{\it Proof.}} \def\vskip2mm{\vskip2mm} \def\noindent{\noindent} \def{\bf z}{{\bf z}} \def{\bf x}{{\bf x}} \def{\bf y}{{\bf y}} \def{\bf v}{{\bf v}} \def{\bf e}{{\bf e}} \def{\rm pr}{{\rm pr}}
\def{\rm Cap}{{\rm Cap}} \def{\textstyle\frac12}{{\textstyle\frac12}}
\begin{center} {\Large Density of space-time distribution\\
of Brownian first hitting
of a disc and a ball. } \\ \vskip4mm {K\^ohei UCHIYAMA} \\ \vskip2mm {Department of Mathematics, Tokyo Institute of Technology} \\ {Oh-okayama, Meguro Tokyo 152-8551\\ e-mail: \,[email protected]} \end{center}
\vskip18mm \noindent {\it running head}: space-time distribution of Brownian first hitting
\vskip2mm \noindent {\it key words}: harmonic measure for heat operator, Brownian hitting time, caloric measure, parabolic measure, space-time distribution
\vskip2mm \noindent {\it AMS Subject classification (2010)}: Primary 60J65, Secondary 60J45, 60J60.
\vskip16mm
\begin{abstract}
We compute the joint distribution of the site and the time at which a $d$-dimensional standard Brownian motion $B_t$ hits the surface of the ball $ U(a) =\{|{\bf x}|<a\}$ for the first time. The asymptotic form of its density is obtained when either the hitting time or the starting site $B_0$ becomes large. Our results entail that if Brownian motion is started at ${\bf x}$ and conditioned to hit $U(a)$ at time $t$ for the first time, the distribution of the hitting site approaches the uniform distribution or the point mass at $a{\bf x}/|{\bf x}|$ according as $|{\bf x}|/t$ tends to zero or infinity; in each case we provide a precise asymptotic estimate of the density. In the case when $|{\bf x}|/t$ tends to a positive constant we show the convergence of the density and derive an analytic expression of the limit density.
\end{abstract} \vskip6mm
\noindent \section{ Introduction}
The harmonic measure (also called caloric or parabolic measure in the present context \cite{W1}) of the unbounded space-time domain
$$D=\{({\bf x},t)\in \mathbb{R}^d\times (0,\infty) : |{\bf x}|>a\}$$ $(a>0)$
for the heat operator $\frac12\Delta -\partial_t$ consists of two components, one supported by the initial time boundary $t=0$ and the other by the lateral boundary $\{|{\bf x}|= a\}\times \{t>0\}$. The former one is nothing but the measure whose density is given by the heat kernel for physical space $|{\bf x}|>a$ with Dirichlet zero boundary condition. This paper concerns the latter, aiming to find a precise asymptotic form of it when the distance of the reference point from the boundary
becomes large. In the probabilistic term this latter part is given by the joint distribution, $H({\bf x}, dtd\xi)$, of the site $\xi$ and the time $t$ at which the $d$-dimensional standard Brownian motion hits the surface of the ball $ U(a) =\{|{\bf x}|<a\}$ for the first time: given a bounded continuous function $\varphi(\xi,t)$ on the lateral boundary of $D$, the bounded solution $u=u({\bf x},t)$ of the heat equation $(\frac12 \Delta -\partial_t)u=0$ in $D$ satisfying the boundary condition
$$u(\xi, t) = \varphi(\xi, t) \,\,\, (|\xi|=a, t>0) \quad\mbox{and} \quad u({\bf x},0)= 0 \,\, \, (|{\bf x}|>a)$$ can be expressed in the boundary integral
$$u({\bf x},t) =\int_0^t \int_{|\xi|=a}\varphi(\xi, t-s) H({\bf x}, dsd\xi).$$ The probability measure $H({\bf x}, dtd\xi)$ has a smooth density, which may be factored into the product of the hitting time density and the density for the hitting site distribution conditional on the hitting time. While the asymptotic forms of the first factor are computed in several recent papers \cite{Ubh}, \cite{BMR}, \cite{HM}, \cite{Ubes}, the latter seems to be rarely investigated and in this paper we carry out the computation of it. Consider the hitting site distribution of $\partial U(a)$ for the Brownian motion conditioned to start at ${\bf x} \notin U(a)$ and hit $U(a)$ at time $t$ for the first time.
It is intuitively clear that the conditional distribution of the hitting site becomes nearly uniform on the sphere for large $t$ if $|{\bf x}|$ is small relative to $t$, while one may speculate that it concentrates about the point $a{\bf x}/|{\bf x}|$ as $|{\bf x}|$ becomes very large in comparison with $t$. Our results entail that in the limit there appears the uniform distribution or the point mass at $a{\bf x}/|{\bf x}|\in \partial U(a)$ according as $|{\bf x}|/t$ tends to zero or infinity; in each case we provide a certain exact estimate of the density. In the case when $|{\bf x}|/t$ tends to a positive constant the conditional distribution has a limit, of which we derive an analytic expression for the density. Using these results together with the estimates of the hitting time density obtained in \cite{Ubes} we can compute the hitting distributions of bounded Borel sets, as is carried out in a separate paper \cite{Ucarl_B}. When $|{\bf x}|/t$ becomes large, the problem is comparable to that of the hitting distribution for a Brownian motion with a large constant drift started at ${\bf x}$, and for the latter process one may expect that the distribution is uniform when projected on the cross section of $U(a)$ cut by the plane passing through the origin and perpendicular to the unit vector ${\bf x}/|{\bf x}|$. This is true in the sense of weak convergence of measures, but on a finer scale the distribution is not flat: the density of the projected distribution has large values along the circumference of the cross section. For such a computation it is crucial to have a certain delicate estimate of the hitting distribution for small $t$, which we also provide in this paper.
\section{Notation and Main Results} In this section we present the main results obtained in this paper, of which some more detailed statements may be given in later sections. Before doing that, we give the basic notation used throughout and state the results on the hitting time distribution from \cite{Ubes}. \vskip2mm\noindent {\sc {\bf 2.1.} Notation.} \,
We fix the radius $a>0$ of the Euclidian ball $U(a)= \{{\bf x}\in \mathbb{R}^d: |{\bf x}|<a\}$ ($d=2,3,\ldots$). Let $P_{\bf x}$ be the probability law of a $d$-dimensional standard Brownian motion, denoted by $B_t, t\geq 0$, started at ${\bf x}\in \mathbb{R}^d$ and $E_{\bf x}$ the expectation under $P_{\bf x}$. We usually write $P$ and $E$ for $P_{\bf 0}$ and $E_{{\bf 0}}$, respectively, where ${\bf 0}$ designates the origin of $\mathbb{R}^d$.
The following notation is used throughout the paper.
\begin{eqnarray*} &&\nu =\frac{d}{2}-1 \quad (d=1,2,\ldots);\\ &&{\bf e} = (1, 0,\ldots,0) \in \mathbb{R}^d; \\
&&\sigma_a =\inf \{t>0: |B_t|\leq a\}; \\
&&q_a^{(d)}(x,t) =\frac{d}{dt}P_{\bf x}[\sigma_a \leq t] \quad (x=|{\bf x}|>a). \\ && p_t^{(d)}(x) = (2\pi t)^{-d/2} e^{-x^2/2t}.\\ && \Lambda_\nu(y)= \frac{(2\pi)^{\, \nu+1}}{2y^{\nu}K_{\nu}(y)}\quad (y>0).\\
&&\omega_{d-1} =2\pi^{d/2}/ \Gamma(d/2) \, \,\,(\mbox{the area of $d-1$ dimensional unit sphere}).\\
&&\mu_d = \omega_{d-1}/\omega_{d-2} = \sqrt{\pi}\, \Gamma(\nu+{\textstyle\frac12})/\Gamma(\nu+1).
\end{eqnarray*} Here $K_\nu$ is the modified Bessel function of the second kind of order $\nu$. We usually write
$x$ for $|{\bf x}|$, ${\bf x}\in \mathbb{R}^d$ (as above); $d=2\nu +2$ and $\nu$ are used interchangeably; and we sometimes write $q^\nu(x,t)$ for $q^{(d)}(x,t)$ when doing so causes no confusion and facilitates computation or exposition, and also $B(t)$ for $B_t$ for typographical reasons. When working in the plane we often tacitly use complex notation to denote its points; for instance, a point of the circle $\partial U(a)$ is written as $ae^{i\theta}$, with $\theta$ denoting the (well-defined) argument of the point.
We write $x\vee y$ and $x\wedge y$ for the maximum and minimum of real numbers $x, y$, respectively; $f(t) \sim g(t)$ means that
$f(t)/g(t)\to 1$ in the limit process under consideration. The symbols $C, C_1, C',$ etc.\ denote universal positive constants whose precise values are unimportant; the same symbol may take different values in different occurrences.
\vskip2mm\noindent
{\sc {\bf 2.2.} Density of Hitting Time Distribution.}
Here we state the results from \cite{Ubes} on $q_a^{(d)}(x,t)$, the density of the distribution of $\sigma_a$. The definition of $q_a^{(d)}(x,t)$ may be naturally extended to Bessel processes of order $\nu$, and the results concerning it given below apply to such an extension if $\nu\geq 0$.
\vskip2mm\noindent {\bf Theorem A.} {\it Uniformly for $x > a$, as $t\to\infty$, \begin{equation}\label{R2} q_a^{(d)}(x,t) \,\sim\, a^{2\nu}\Lambda_\nu\bigg(\frac{a x}{t}\bigg) p^{(d)}_t(x) \bigg[1-\bigg(\frac{a}{x}\bigg)^{2\nu}\bigg] \qquad (d\geq 3) \end{equation} and for $d=2$,} \begin{equation}\label{R21} q_a^{(2)}(x,t) = p^{(2)}_t(x) \times
\left\{ \begin{array} {ll}
{\displaystyle \frac{4\pi\lg(x/a)\,}{(\lg (t/a^2))^2}\Big(1+o(1)\Big) } \quad& (x \leq \sqrt t\,),\\ [5mm]
{\displaystyle \Lambda_0\bigg(\frac{a x}{t}\bigg) \Big(1+o(1)\Big) }\quad&( x > \sqrt t\,).
\end{array} \right. \end{equation}
\vskip2mm
From the known properties of $K_\nu(z)$ it follows that
\begin{equation}\label{lambda} \Lambda_\nu(y) = (2\pi)^{\nu+1/2} y^{-\nu+1/2}\, e^{y} ( 1+O(1/y)) \quad \mbox{as}\quad y\to\infty; \end{equation} \[ \Lambda_\nu(0) = \frac{2\pi^{\nu+1}} {\Gamma(\nu)} ( = \nu\omega_{d-1}) \quad\mbox{for}\quad\nu >0;\quad \Lambda_0(y) \sim \frac{\pi}{-\lg y} \quad \mbox{as}\quad y \downarrow 0. \]
\vskip2mm\noindent {\bf Theorem B.} \, {\it For each $\nu\geq 0$ it holds that uniformly
for all $t>0$ and $x>a$, } \begin{equation}\label{result6}
q_a^{(d)}(x,t) = \frac{x-a}{\sqrt{2\pi}\, t^{3/2}} e^{-(x-a)^2/2t}\bigg(\frac{a}{x}\bigg)^{(d-1)/2}\Bigg[1+ O\bigg( \frac{t}{ax}\bigg)\Bigg].
\end{equation}
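For $d=3$ the main term of the formula above is in fact the exact classical first-hitting density, which yields a quick numerical consistency check (our sketch, not from \cite{Ubes}): integrating it over $t$ recovers the classical hitting probability $a/x$.

```python
import numpy as np
from scipy.integrate import quad

def q3(t, a, x):
    # for d = 3 the leading term is the exact classical first-hitting density:
    # (a/x) * (x-a)/sqrt(2*pi*t^3) * exp(-(x-a)^2/(2t))
    return (a/x) * (x - a) / np.sqrt(2*np.pi*t**3) * np.exp(-(x - a)**2/(2*t))

a, x = 1.0, 3.0
total, err = quad(q3, 0, np.inf, args=(a, x))
# the total mass of sigma_a equals the classical hitting probability a/x
assert abs(total - a/x) < 1e-6
print("d = 3: integral of the density over t equals a/x =", a/x)
```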
\vskip2mm\noindent
{\sc Remark 1.}\, Under certain constraints on $x$ and $t$ some finer error estimates in the formulae of Theorem A are given in \cite{Ubh} ($d=2$, $x<\sqrt t$) and in \cite{Ubes} ($x/t\to\infty$). The formula (\ref{result6}) of Theorem B is sharp only if $x/t \to\infty$. The case $t\to\infty$ of it is contained in Theorem A apart from the error estimate. A better error estimate is obtained in \cite{BMR} by a purely analytic approach. A probabilistic proof of (\ref{result6}) is found in \cite{Usaus2}. We shall use (\ref{result6}) primarily in the case $0<t<a^2$.
\vskip2mm\noindent {\sc Remark 2} (Scaling property). From the scaling property of Brownian motion it follows that \[
q^{(d)}_a(x,t) = a^{-2} q^{(d)}_1 (x/a, t/a^2); \mbox{and}
\]
\[ \frac{P_{x{\bf e}}[B(\sigma_a)\in d\xi\,|\, \sigma_a =t] }{m_a(d\xi)} = \frac{P_{(x/a){\bf e}}[B(\sigma_1)\in d\xi'\,|\, \sigma_1 =t/a^2] }{m_1(d\xi')}\Big|_{\xi' = \xi/a}
\] for all dimensions $\geq 2$. Because of this we could obtain the results for $a\neq 1$ by simply substituting $t/a^2$ and $x/a$ in place of $t$ and $x$, respectively, in the formulae for $a=1$; nevertheless, in the above we have exhibited the formula for $q^{(d)}_a(x,t)$ with $a>0$ arbitrary. We shall follow this example in stating the results of the present work. We warn, however, that we are not always scrupulous in doing so: in particular, to indicate the constraints on $t$ (and/or $x$) we often simply write $t>1$ when we should write $t>a^2$, for instance.
\vskip2mm\noindent {\sc {\bf 2.3.} Density of Hitting Site Distribution Conditional on $\sigma_a =t$.} \,
For finding
the asymptotic form of the hitting distribution, with that of $q^{(d)}(x,t) $ being given in {\bf 2.2}, it remains to estimate the conditional density $P_{{\bf x}}[B_t\in d\xi | \sigma_a=t]/d\xi$. Before stating the results on it we shall consider the argument of the hitting site $B(\sigma_a)$ in the case $d=2$, when the winding number around the origin is naturally associated with the process.
\vskip2mm
{\sc {\bf 2.3.1.} Density for $\arg B(\sigma_a)$ (Case $d=2$)}. \, Let $\arg B_t\in \mathbb{R}$ be the argument of $B_t$ (regarded as a complex Brownian motion), which is a.s. uniquely determined by continuity under the convention $\arg B_0\in (-\pi,\pi]$. The function of $\lambda\in \mathbb{R}$ defined by \begin{equation}\label{Def}
\Phi_a(\lambda;v) =\frac{K_0(av)}{K_{\lambda}(av)} \quad (v>0)
\end{equation} turns out to be a characteristic function of a probability distribution on $\mathbb{R}$.
\begin{Thm}\label{prop2} \quad
${\displaystyle \Phi_a(\lambda;v) = \lim_{x/t\to v} E_{x{\bf e}}[e^{i\lambda\arg B(\sigma_a)}| \sigma_a = t].} $ \end{Thm} Since $\Phi_a(\lambda;v)$ is continuous at $\lambda=0$, Theorem \ref{prop2} shows that the conditional law of $\arg B(\sigma_a)$ converges to the probability law whose characteristic function equals $\Phi_a(\lambda;v)$. In fact $\Phi_a$ is smooth in $\lambda$, so that the limit law has a density. If $f_a(\theta;v)$ denotes the density, then $$\Phi_a(\lambda;v) = \int_{-\infty}^\infty e^{i\lambda \theta }f_a(\theta;v)d\theta \qquad (\lambda \in \mathbb{R});$$
we shall see that the density of the conditional law converges: \begin{eqnarray}\label{f}
f_a(\theta;v) = \lim_{x/t\to v} \frac{P_{x{\bf e}}[\arg B_t\in d\theta| \sigma_a = t] }{d\theta} \qquad\quad (-\infty <\theta<\infty, v>0) \end{eqnarray}
(Section {\bf 3.2.2}).
By (\ref{Def}) $$\Phi_{a}(\lambda;0+) =0 \quad (\lambda\neq 0) \quad \mbox{ and} \quad
\Phi_{a}(\lambda;+\infty) =1,$$ which shows that the probability $f_a(\theta;v)d\theta$ concentrates in the limit at infinity as $v\downarrow 0$ and
at zero as $v\to\infty$.
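Basic properties of $\Phi_a$ on the real axis, including the two degenerate limits just noted, can be verified numerically; the following check is ours (scipy's `kv` evaluates $K_\lambda$ for real order $\lambda$, and $K_\lambda(y)$ is increasing in $|\lambda|$):

```python
import numpy as np
from scipy.special import kv

def Phi(lam, a, v):
    # Phi_a(lambda; v) = K_0(a v) / K_lambda(a v)
    return kv(0, a*v) / kv(lam, a*v)

a = 1.0
for v in (0.5, 1.0, 3.0):
    lams = np.linspace(0.0, 6.0, 25)
    vals = Phi(lams, a, v)
    assert abs(vals[0] - 1.0) < 1e-12        # Phi_a(0; v) = 1
    assert np.all(vals <= 1.0 + 1e-12)       # K_lambda >= K_0 for real lambda
    assert np.all(np.diff(vals) <= 1e-12)    # decreasing in lambda >= 0

# the two degenerate limits stated in the text
assert Phi(2.0, a, 1e-8) < 1e-6              # v -> 0+: Phi -> 0 for lambda != 0
assert abs(Phi(2.0, a, 200.0) - 1.0) < 0.05  # v -> infinity: Phi -> 1
print("Phi_a behaves as stated on the real axis")
```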
Since for $0<y<\infty$, \begin{equation}\label{K/nu}
\lg K_\lambda(y)\sim |\lambda|\lg |\lambda| \quad\mbox{as}\quad \lambda \to\pm \infty, \end{equation} for each $v>0$, $f_a(\,\cdot\,;v)$ can be extended to an entire function; in particular its support (as a function on $\mathbb{R}$) is the whole real line and we can then readily infer that $f_a(\theta;v)>0$ for all $\theta$ (see (\ref{f_v})). $K_{i\eta}(av)$ is an entire function of $\eta$ and has zeros on and only on the real axis. If $\eta_0$ is its smallest positive zero, then \vskip2mm \quad\qquad $\int_0^\infty f_a(\theta;v)e^{\eta \theta}d\theta $ is finite or infinite according as $\eta<\eta_0$ or $\eta\geq \eta_0$; \vskip2mm\noindent
it can be shown that $0< \eta_0 - av\leq C(av)^{1/3}$ for $v>1$ and $-C(\lg av)^2 \leq \eta_0-\pi/|\lg av|<0$ for $v<1/2$ \cite{Uw}.
The next result is derived independently of Theorem \ref{prop2} and in a quite different way.
\begin{Prop}\label{thm0} \, For $v>0$ \begin{equation}\label{EQ0}
f_a(\theta;v) \geq \pi^{-1} a vK_0(av) \,e^{av\cos \theta} \cos \theta \qquad (|\theta|\leq {\textstyle\frac12}\pi). \end{equation} \end{Prop}
\vskip2mm {\sc {\bf 2.3.2.}\, Density for Hitting Site}. \,
Let $m_a(d\xi)$ denote the uniform probability distribution on $\partial U(a)$, namely
$m_a(d\xi) = (\omega_{d-1}a^{d-1})^{-1}|d\xi|$, where $\omega_{d-1}$ denotes the area of the $(d-1)$-dimensional unit sphere $\partial U(1)$, $d\xi \subset \partial U(a)$ a surface element and $|d\xi|$ its Lebesgue measure. Let ${\rm Arg}\, z$, $z\in \mathbb{R}^2$ denote the principal value $\in (-\pi,\pi]$ of $\arg z$.
\begin{Thm}\label{thm1.1} \,\, {\rm (i)} If $d=2$, uniformly for $\theta\in (-\pi, \pi]$, as $ x/t\to 0$ and $t\to \infty$
$$\frac{P_{x{\bf e}}[{\rm Arg} \,B_t\in d\theta \,|\, \sigma_a =t] }{d\theta} = \frac{1}{2\pi} + O\Big(\frac{x}{t} \ell(x,t)\Big),$$ where $\ell(x,t)= (\lg t)^{2}/ \lg (x+2a)$ if $a <x<\sqrt t$ and $= \lg (t/x)$ if $x>\sqrt t$.
{\rm (ii)} If $d\geq3$, uniformly for $\xi\in \partial U(a)$, as $v= x/t\to 0$ and $t\to \infty$,
$$\frac{P_{x{\bf e}}[B_t\in d\xi\,|\, \sigma_a =t] }{m_a(d\xi)} = 1+O\Big(\frac{x}{t}\Big).$$ \end{Thm} \vskip2mm The orders of magnitude for the error terms given in Theorem \ref{thm1.1} are correct ones (see Theorem \ref{thm-1} and Corollary \ref{cor-12}).
Let $\theta=\theta(\xi)\in [0,\pi]$ denote the colatitude of a point $\xi\in \partial U(a)$ with $a{\bf e}$ taken to be the north pole, namely $a\cos \theta = \xi\cdot {\bf e}$. \vskip2mm
\begin{Thm}\label{thm1.21} For each $M>1$, uniformly for $0< v< M$ and $\xi \in \partial U(a)$, as $t\to\infty$ and $x/t \to v$ \begin{equation}\label{lim}
\frac{P_{x{\bf e}}[B_t\in d\xi\,|\, \sigma_a =t] }{m_a(d\xi)} \, \longrightarrow \, \sum_{n=0}^\infty \frac{K_\nu(av)}{K_{\nu+n}(av)} H_n(\theta). \end{equation} Here $\theta =\theta(\xi)$; and $H_0(\theta) \equiv 1$ and for $n\geq 1$, $$H_n(\theta) = \left\{\begin{array} {ll} 2 \cos n \theta \quad &\mbox{if} \quad d=2,\\ (1+\nu^{-1}n) C_n^\nu(\cos \theta) \quad
&\mbox{if} \quad d \geq 3,
\end{array}\right.$$ where
$C_n^\nu(z)$ is the Gegenbauer polynomial of order $n$ associated with $\nu$. \end{Thm}
According to (\ref{K/nu}) the convergence of the series appearing as the limit in (\ref{lim}) is quite fast. For $d=2$, as one may notice, (\ref{lim}) is obtained from Theorem \ref{prop2} by using the
Poisson summation formula. The limit function represented by the series approaches unity as $v\downarrow 0$ (uniformly in $\theta$), so that the asserted uniformity of convergence implies that the density on the left converges to unity as $x/t\to 0$, conforming to Theorem \ref{thm1.1}.
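For $d=3$ one has $\nu=\frac12$ and $C_n^{1/2}=P_n$, the Legendre polynomial, so that $H_n(\theta)=(2n+1)P_n(\cos\theta)$; the limit series in (\ref{lim}) is then easy to evaluate numerically, and one can check that it integrates to one against $m_a(d\xi)$ and flattens to the uniform density as $v\downarrow 0$, in accordance with Theorem \ref{thm1.1}. A sketch with scipy (grid and truncation ad hoc):

```python
import numpy as np
from scipy.special import kv, eval_legendre
from scipy.integrate import simpson

a = 1.0

def limit_density(theta, v, nmax=25):
    # series in (lim) for d = 3: H_n(theta) = (2n+1) P_n(cos(theta))
    s = np.zeros_like(theta)
    for n in range(nmax + 1):
        s += kv(0.5, a * v) / kv(0.5 + n, a * v) * (2 * n + 1) * eval_legendre(n, np.cos(theta))
    return s

theta = np.linspace(0.0, np.pi, 2001)
dens = limit_density(theta, v=1.0)
# total mass w.r.t. m_a(d xi): integral of dens * sin(theta)/2 over [0, pi]
mass = simpson(dens * np.sin(theta) / 2, x=theta)
flat = limit_density(theta, v=0.01)   # should be close to 1 uniformly
```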
\begin{Thm}\label{thm1.2} \, Uniformly for $t>1$, as $v:= x/t\to\infty$ \begin{eqnarray*}
&&\frac{P_{x{\bf e}}[B_t\in d\xi\,|\, \sigma_a =t] }{\omega_{d-1}m_a(d\xi)} \\
&&\quad = \bigg(\frac{av}{2\pi}\bigg)^{(d-1)/2}e^{-av(1-\cos \theta)}\cos \theta \bigg[1+ O\Big(\frac{1}{av\cos^3 \theta}\Big)\bigg]
\quad \mbox{if} \quad 0 \leq \theta \leq \frac12\pi - \frac1{(av)^{1/3}}, \\ &&\quad \asymp \bigg(\frac{av}{2\pi}\bigg)^{(d-1)/2} e^{-av(1-\cos \theta)} \frac{1 }{ (av)^{1/3} } \qquad \mbox{if} \quad \frac12\pi - \frac1{(av)^{1/3}} < \theta \leq \frac{\pi}{2} + \frac1{(av)^{1/3}}, \end{eqnarray*} where $\theta =\theta(\xi)$; $f(t)\asymp g(t)$ signifies that $f(t)/g(t)$ is bounded away from zero and infinity. \end{Thm}
Combined with Theorem B, Theorem \ref{thm1.2} yields an asymptotic result for the joint distribution of $(B_{\sigma_a}, \sigma_a)$. On noting that $(\frac{y}{2\pi})^{(d-1)/2}=[ye^{y}/\Lambda_{\nu}(y)](1+ O(1/y))$ ($y>1)$,
$\cos \theta = {\bf x}\cdot \xi/ax$,
$$e^{- av (1-\cos \theta)} p_t^{(d)}(x-a) = p^{(d)}_t(|{\bf x}-\xi|)$$ and $\cos \theta \sim \frac12 \pi -\theta$ as $\theta \to\frac12 \pi$, we state the first half of it as the following
\begin{Cor}\label{thm3.02} \, Uniformly under the constraint ${\bf x}\cdot \xi /ax > (av)^{-1/3}$ and $t>a^2$, as $v:= x/t\to\infty$ $$ \frac{P_{x{\bf e}}[B(\sigma_a)\in d\xi,\, \sigma_a \in dt] }{\omega_{d-1}m_a(d\xi)dt}
= a^{2\nu}\frac{{\bf x}\cdot \xi}{t} p^{(d)}_t(|{\bf x}-\xi|)\Bigg[1+O\bigg( \frac1{av\cos ^3\theta}\bigg)\Bigg]. $$ \end{Cor}
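The identity $e^{-av(1-\cos\theta)}p_t^{(d)}(x-a)=p^{(d)}_t(|{\bf x}-\xi|)$ invoked above is elementary: it follows from $|x{\bf e}-\xi|^2=(x-a)^2+2ax(1-\cos\theta)$ together with $ax/t=av$. A quick numerical confirmation (parameters chosen arbitrarily):

```python
import numpy as np

def p(t, r, d):
    # Gauss kernel p_t^{(d)}(r) = (2 pi t)^{-d/2} exp(-r^2 / 2t)
    return (2 * np.pi * t) ** (-d / 2) * np.exp(-(r ** 2) / (2 * t))

d, a, t, x, theta = 3, 1.0, 2.0, 7.0, 0.8
v = x / t
xi_dist = np.sqrt(x ** 2 + a ** 2 - 2 * a * x * np.cos(theta))  # |x e - xi|, xi on the sphere
lhs = np.exp(-a * v * (1 - np.cos(theta))) * p(t, x - a, d)
rhs = p(t, xi_dist, d)
```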
\vskip2mm
As is clear from Theorem \ref{thm1.2} the distribution of $B(\sigma_a)$ converges to the Dirac delta measure at $a{\bf e}$, the north pole of $\partial U(a)$, as $v\to\infty$. The distribution may be normalized so as to approach a positive multiple of the non-degenerate measure $\cos \theta \, m_a(d\xi)$ in an obvious manner, even though the density has a singularity along the circumference.
The next corollary states this in terms of the colatitude $\Theta(\sigma_a) :=\theta(B(\sigma_a))$ of $B(\sigma_a)$ (see also Lemma \ref{wc}).
\begin{Cor}\label{thm3.01} As $v:= x/t\to\infty$ under $t>a^2$ \begin{eqnarray*}
&&\bigg(\frac{2\pi}{av}\bigg)^{(d-1)/2}e^{av(1-\cos \theta)} P_{x{\bf e}}[\Theta(\sigma_a)\in d\theta\,|\, \sigma_a =t] \\ &&\qquad\qquad\quad \Longrightarrow \quad \,\omega_{d-2}{\bf 1}(0\leq\theta\leq {\textstyle\frac12} \pi) \cos \theta\,\sin^{d-2}\theta\, {d\theta}, \end{eqnarray*} where ${\bf 1}({\cal S})$ is the indicator function of a statement ${\cal S}$, \lq \,$\Rightarrow$\rq \, signifies the weak convergence of finite measures on $\mathbb{R}$ (in fact the convergence holds in the total variation norm) and $\omega_0=2$. \end{Cor}
The essential content of Theorem \ref{thm1.2} concerns the two-dimensional Brownian motion, even though the theorem covers the higher-dimensional cases as well (cf. Section 6).
The rest of the paper is organized as follows. In Section 3 we deal with the case when $x/t$ is bounded and prove Theorems \ref{prop2} through \ref{thm1.21}. In Section 4 we provide several preliminary estimates of the hitting distribution density, mainly for $t<1$, which prepare for the verification of Theorem \ref{thm1.2}, carried out in Section 5 for the case $d=2$ and in Section 6 for the case $d\geq 3$. Proposition \ref{thm0} is obtained in Section 5.1 as a byproduct of a preliminary result for the proof of Theorem \ref{thm1.2}. In Section 7 the results obtained are applied to the corresponding problem for Brownian motion with drift. In the final section (Appendix) we present a classical formula for the hitting distribution of $U(a)$ and give a comment on an approach to the present problem based on it.
\section{ Proofs of Theorems \ref{prop2} through \ref{thm1.21} }
This section consists of three subsections. In the first subsection we let $d=2$ and prove Theorem \ref{prop2}. The proofs of Theorems \ref{thm1.1} and \ref{thm1.21} are given in the rest. The essential ideas for all of them are already found in the first subsection.
Our proofs involve Bessel processes of varying order $\nu$ and it is convenient to introduce notation specific to them. Let $X_t$ be a Bessel process of order $\nu\in \mathbb{R}$ and denote by $P_x^{BS(\nu)}$ and $E_x^{BS(\nu)}$ the probability law of $(X_t)_{t\geq 0}$ started at $x\geq 0$ and the expectation w.r.t. it, respectively. If $\nu= -1/2$, it is a standard Brownian motion and we write $P^{BM}_x$ for $P_x^{BS(-1/2)}$. With this convention we
suppose $\nu\geq 0$ in what follows, so that $X_t\geq 0$ a.s. under $P_x^{BS(\nu)}$ ($x\geq 0$). Non-integral values of the expression $2\nu +2$ may appear, while the letter $d$ always designates a positive integer signifying the dimension of $B_t$, a $d$-dimensional Brownian motion under a probability law $P_{\bf x}$.
Let $T_a$ denote the first passage time of $a$ for $X_t$: $T_a = \inf\{t\geq 0: X_t =a\}$.
\vskip2mm\noindent
{\sc {\bf 3.1.} The characteristic function of $\arg B(\sigma_a)$ ($d=2$).}\,
The proofs of Theorem \ref{prop2} and of the case $d=2$ of Theorems \ref {thm1.1} and \ref{thm1.21} rest on the following
\begin{Prop}\label{lem3.20}\, For $\lambda\in \mathbb{R}$, $x>a$ and $t>0$,
$$ E_{x{\bf e}}[ e^{i\lambda\arg B(\sigma_a)}\,|\,\sigma_a = t]
= \frac{q_a^{(2|\lambda|+2)}(x,t)}{q_a^{(2)}(x,t)} \bigg(\frac{x}{a}\bigg)^{|\lambda|}.
$$ \end{Prop}
\noindent
In this subsection we first exhibit how this proposition leads to Theorem \ref{prop2}, then
prove two lemmas concerning Bessel processes that are also used in later subsections, and finally prove Proposition \ref{lem3.20} by using these lemmas.
\vskip2mm
{\sc {\bf 3.1.1.} Deduction of Theorem \ref{prop2} from Proposition \ref{lem3.20}. } On using Theorem A and (\ref{lambda}) in turn, as $x/t\to v>0$ \begin{eqnarray}
\frac{q_a^{(2|\lambda|+2)}(x,t)}{q_a^{(2)}(x,t)} \bigg(\frac{x}{a}\bigg)^{|\lambda|}
&\sim & \bigg(\frac{x}{a}\bigg)^{|\lambda|}\frac{ a^{2|\lambda|}\Lambda_{|\lambda|}(av)p^{(2|\lambda|+2)}_t(x) }{\Lambda_0(av)p^{(2)}_t(x)}
\nonumber\\
&\sim&\frac{K_{0}(av)}{K_{|\lambda|}(av)}. \label{K/K} \end{eqnarray} Noting that $K_{-\nu} (z)=K_\nu(z)$, we obtain the identity of Theorem \ref{prop2} according to Proposition \ref{lem3.20}. \qed
\vskip2mm
{\sc {\bf 3.1.2.} Two lemmas based on the Cameron Martin formula.} \, It is consistent with our notation to write \begin{equation}\label{1dim} q_a^{(1)}(x,t)=\frac{ P_x^{BM}[ T_a\in dt]}{dt} = \frac{x-a}{\sqrt{2\pi t^3}} e^{- (x-a)^2/2t} \quad (x>a). \end{equation} Recall that $q^\nu_a = q^{(2\nu+2)}_a$, and $P_x^{BS(\nu)}$, $P^{BM}_x$, $X_t$ and $T_a$ are introduced at the beginning of this section.
\begin{Lem} \label{lem2.1} \, Put $ \beta_\nu = \frac18 (1-4\nu^2)$ $(\nu\geq 0)$. Then \begin{equation}\label{Eq0}
q_a^{\nu}(x,t) =q_a^{(1)}(x,t) \bigg(\frac{a}{x}\bigg)^{\nu+\frac12} E^{BM}_x\Bigg[\exp\bigg\{ \beta_\nu \int_0^{t} \frac{ds}{X_s^{2}}\bigg\}\,\Bigg| T_a =t \Bigg]. \end{equation} \end{Lem} \vskip2mm\noindent {\it Proof.} \, We apply the formula of drift transform (based on the Cameron Martin formula). Put $Z(t) = e^{\int_0^t \gamma(X_s)d X_s -\frac12 \int_0^t \gamma^2(X_s)ds}$, where $\gamma(x)=(\nu+\frac12)x^{-1}$ and $X_t$ is a linear Brownian motion. Then \begin{equation}\label{4.1.1} \int_{t-h}^t q_a^{\nu}(x,s)ds= P^{BS(\nu)}_x[t-h\leq T_a <t] =E_x^{BM}[Z(t); t-h\leq T_a <t] \end{equation} for $0<h<t$. By Ito's formula we have $\int_0^t dX_s /{X_s}= \lg (X_t/X_0) + \frac12\int_0^{t} ds/X_s^2$ ($t < T_0$). Hence $$Z(T_a)= \bigg(\frac{a}{X_0}\bigg)^{\nu+\frac12} \exp\bigg[ \frac{1-4\nu^2}{8}\int_0^{T_a}\frac{ds}{X_s^2}\bigg], $$ which together with (\ref{4.1.1}) leads to the identity (\ref{Eq0}).
\begin{Lem} \label{lem2.2} \, For $\lambda \geq 0$ \begin{equation} \label{Eq1}
E_x^{BS(\nu)}\Bigg[\exp\bigg\{ -\frac{\lambda(\lambda+2\nu)}2 \int_0^{t} \frac{ds}{X_s^{2}}\bigg\}\,\Bigg| T_a =t \Bigg] =\bigg(\frac{x}{a}\bigg)^\lambda\frac{q_a^{\lambda+\nu}(x,t)}{q_a^{\nu}(x,t)}. \end{equation} \end{Lem} \vskip2mm\noindent {\it Proof.}\, Write $\tau = \int_0^t X_s^{-2}ds$. By the same drift transformation as applied in the preceding proof we see $$\frac{E_x^{BS(\nu)}[e^{-\frac12 \lambda(\lambda+2\nu) \tau}; T_a \in dt]}{dt}
= q_a^{(1)}(x,t) \bigg(\frac{a}{x}\bigg)^{\nu+\frac12} E^{BM}_x[e^{-\frac12 \lambda(\lambda+2\nu) \tau} e^{\beta_\nu\tau}\,|\,T_a =t].$$ Noting $-\frac12 \lambda(\lambda+2\nu) +\beta_\nu = \beta_{\lambda+\nu}$ we apply (\ref{Eq0}) with $\lambda+\nu$ in place of $\nu$ to see that the right-hand side above is equal to $(x/a)^\lambda q_a^{\lambda+\nu}(x,t)$, while the left-hand side is equal to that of (\ref{Eq1}) multiplied by $q^\nu_a(x,t)$, hence we have (\ref{Eq1}). \qed
\vskip2mm
{\bf 3.1.3.} {\sc Proof of Proposition \ref{lem3.20}.} For the proof we apply the skew product representation of two-dimensional Brownian motion. Let $Y(\cdot)$ be a standard linear Brownian motion with $Y(0)=0$ independent of $|B_\cdot|$. Then $\arg B_t -\arg B_0$ has the same law as $Y(\int_0^t |B_s|^{-2}ds)$ (\cite{IM}), so that
$$ E_{x{\bf e}}[ e^{i\lambda\arg B(\sigma_a)};\sigma_a \in dt]
= E_{x{\bf e}}\otimes E^Y\Big[ e^{i\lambda Y\big({ \int_0^t |B_s|^{-2}ds}\big)};\sigma_a \in dt\Big]
$$
where $E^Y$ denotes the expectation with respect to the probability measure of $Y(\cdot)$ and $\otimes$ signifies the direct product of measures (with an abuse of notation). Note that
$|B_t|$ is a two-dimensional Bessel process (of order $\nu=0$) and
take the conditional expectation of $e^{i\lambda Y\big( \int_0^t |B_s|^{-2}ds\big)}$ given $|B_{s}|,\, 0\leq s\leq t$ to find the equality
\begin{eqnarray*} E_{x{\bf e}}[ e^{i\lambda\arg B(\sigma_a)} \,|\, \sigma_a = t] = E_x^{BS(0)} \bigg[\exp\Big\{-\frac{\lambda^2}2 \int_0^t X_s^{-2}ds\Big\}\,\Big|\, T_a =t\bigg], \end{eqnarray*} of which, by formula (\ref{Eq1}), the right-hand side equals
$$(x/a)^{|\lambda|}q_a^{|\lambda|}(x,t)/q_a^{0}(x,t),$$ showing the required identity. \qed
\vskip2mm
Let $b>a$. Then for each $s>0$, the ratio $q^{(2)}_b (x,t-s)/q^{(2)}_a(x,t)$ is asymptotic to $\sqrt{b/a}\, e^{(b-a)v}e^{- \frac12 v^2 s}$ as $x/t\to v$, $t\to\infty$ and, on considering the hitting of $U(b)$, we observe \begin{eqnarray} && f_a(\theta;v) d\theta \nonumber \\
&&= \lim \frac{ \int_0^t \int_\mathbb{R} P_{\bf x}[\arg B_{\sigma_b} \in d\theta'\,|\, \sigma_b =t-s] q^{(2)}_b(x,t-s) P_{be^{i\theta'}} [\arg B_{\sigma_a} \in d\theta, \sigma_a\in ds] }{q^{(2)}_a(x,t)} \nonumber\\ &&= \sqrt{\frac{b}{a}} \, e^{(b-a)v}\int_\mathbb{R} E_{b e^{i\theta'}}\Big[e^{-v^2\sigma_a/2}; \arg B_{\sigma_a} \in d\theta\Big]f_b(\theta';v)d\theta' \label{f_v} \end{eqnarray} (with an appropriate interpretation of $ \arg B_{\sigma_a}$ under $P_{be^{i\theta'}} $), which shows that $f_a(\theta;v)>0$ for all $\theta$ and all $ v>0$.
\vskip2mm\noindent
{{\bf 3.2.} \sc An upper bound of $q_1^{(2\lambda+2)}(x,t)$ for large $\lambda$.}
For the proofs of Theorems \ref{thm1.1} and \ref{thm1.21} we need a pertinent upper bound of the characteristic function appearing in Proposition \ref{lem3.20} for large integral values of $\lambda $. To this end we prove Lemma \ref{lem3.40} below. The result is extended to non-integral values of $\lambda$ in Lemma \ref{lem3.51}, which verifies the uniform convergence in (\ref{f}) of the conditional density of $\arg B(\sigma_a)$. \vskip2mm {\bf 3.2.1.} Here we prove the following lemma.
\begin{Lem}\label{lem3.40} There exist constants $C_1$ and $A_1>0$ such that for all $n=1, 2, \ldots $, $t>1$ and $x> 1$, \begin{equation}\label{q/p} q_1^{(2n+2)}(x,t) \leq C_1 (A_1/n)^n p_{t+1/n}^{(2n+2)}(x). \end{equation} \end{Lem} \vskip2mm\noindent {\it Proof.}~ By the identity $$p_{t+\varepsilon}^{(2n+2)}(x) = \int_0^{t+\varepsilon} q_1^{(2 n+2)}(x,t+\varepsilon -s) p^{(2n+2)}_{s}(1)ds$$ we have $$p_{t+\varepsilon}^{(2n+2)}(x) \geq \bigg[\inf_{0\leq s\leq \varepsilon} q_1^{(2 n+2)}(x,t + s)\bigg] \int_0^\varepsilon \frac{e^{-1/2s}}{(2\pi s)^{n+1}}ds$$ for every $0<\varepsilon<t$. We choose $\varepsilon=1/n$ and evaluate the last integral from below to see $$ \int_0^{1/n} \frac{e^{-1/2s}}{(2\pi s)^{n+1}}ds = \int_{n/2}^\infty e^{-u}u^{n-1}\frac{du}{2\pi^{n+1}} \geq \frac{A_0}{\sqrt n}\bigg(\frac{n}{e\pi}\bigg)^{n} $$ for some universal constant $A_0>0$. If $x>2$, we apply the inequality of Harnack type given in the next lemma to find the inequality (\ref{q/p}).
It remains to deal with the case $1<x < 2$, which however can be reduced to the case $x=2$. Indeed, by partitioning the whole event according as $2$ is reached before $t/2$ or not, we see (by recalling $ q_1^{(2n+2)} = q_1^{n}$) that if $1<x<2$, $$q_1^{n}(x,t) \leq P^{BS(n)}_x[T_1\wedge T_2>t/2]\sup_{1<y<2} q_1^{n}(y, t/2) + \sup_{t/2 \leq s < t} q_1^{n}(2,s).$$ The required upper bound of the second term on the right-hand side follows from the result for $x=2$ since $p_{s}^{(2n+2)}(2) \leq 4^{n+1} p_{t+1/n}^{(2n+2)} (x)$ for $t/2 \leq s<t$. As for the first term, by Lemma \ref{lem2.1} we infer that the supremum involved in it is bounded by a universal constant (since $\beta_\nu\leq 0$ for $\nu\geq 1$). On the other hand, by the same drift transform that is used in the proof of Lemma \ref{lem2.1} we see \begin{eqnarray*} P^{BS(n)}_x\Big[T_1\wedge T_2> \frac{t}2\Big] &=& E^{BM}_x\Bigg[\bigg(\frac{X_{t/2}}{x}\bigg)^{n+\frac12} \exp\bigg\{ \beta_n\int_0^{t/2} \frac{ds}{X_s^{2}}\bigg\}; T_1\wedge T_2>\frac{t}2\Bigg]\\ &\leq& e^{1/8}2^{n+1/2}e^{- n^2t/16} P^{BM}_x[T_1\wedge T_2> t/2], \end{eqnarray*}
which is enough for the required bound. \qed
\begin{Lem}\label{lem-2}\, There exist constants $C_2>1$ and $A_2>0$ such that whenever $x\geq 2$ and $n=2, 3,\ldots$, $$q_1^{(n)}(x,t-\tau)\leq C_2 A_2^n q_1^{( n)}(x,t) \quad \mbox{for \,\, $t>1$ \, and \, $0\leq \tau\leq 1/n$},$$ or, equivalently,\,\, $q_1^{(n)}(x,t)\leq C_2 A_2^n \inf_{ 0\leq s \leq 1/n}q_1^{( n)}(x,t+s) $ \, for \, $t>1-1/n$. \end{Lem} \vskip2mm\noindent
{\it Proof.} \, Let $Q$ be the hyper-cube in $\mathbb{R}^n$ of side length $2$ and centered at the origin and put $D=\{({\bf y},s): {\bf y}\in Q, 0<s< 1+\tau\}$, the cubic cylinder with the base $Q\times \{0\}$ and of height $1+\tau$. The function $u({\bf y},s):=q_1^{(n)}(|{\bf x}+{\bf y}|,t-s)$ satisfies the equation $\partial_s u + \frac12 \sum_{j=1}^n \partial_{j}^2 u=0$ in $D$, where $\partial_j$ denotes the partial derivative w.r.t. the $j$-th coordinate of ${\bf y}$. Let $p^0_s(x,y)$ be the heat kernel on the interval $[-1,1]$ with zero Dirichlet boundary and put
$$p^0_s({\bf x},{\bf y})=\Pi_{j=1}^n p_s^0(x_j,y_j) \quad \mbox{and} \quad K({\bf S},s)=\pm \partial_ j p_s^0({\bf 0},{\bf y})|_{{\bf y}={\bf S}},$$ where the sign is chosen so that $\pm \partial_ j $ becomes inner normal derivative at ${\bf S}\in \partial Q$. Then $$u({\bf 0},\tau)= \int_{\partial Q}d {\bf S}\int_\tau^{1} K({\bf S}, s-\tau) u({\bf S},s)ds+ \int_Q p^0_{1-\tau}({\bf 0},{\bf y})u({\bf y},1)d{\bf y}.$$ Since all the functions involved in these two integrals are non-negative, we have $$q_1^{( n)}(x,t)= u({\bf 0},0) \geq \int_{\partial Q}d {\bf S}\int_\tau^{1} K({\bf S}, s) u({\bf S},s)ds+ \int_Q p^0_{1}({\bf 0},{\bf y})u({\bf y},1)d{\bf y},$$ and, comparing the right-hand side with the integral representation of $u({\bf 0},\tau)=q_1^{( n)}(x,t-\tau)$, we have
$q_1^{( n)}(x,t-\tau) \leq M_n q_1^{( n)}(x,t)$, where $M_n = M'_{n}\vee M''_{n}$ with
$$ M'_{n}=\sup_{{\bf S}} \sup_{ \tau<s<1} \frac{K({\bf S}, s-\tau) }{K({\bf S}, s) }, \quad M''_n= \sup_{{\bf y}}\frac{ p^0_{1-\tau}({\bf 0},{\bf y})}{ p^0_{1}({\bf 0},{\bf y})}.$$
We must find a positive constant $A_2$ for which $M_n < C_2A_2^n$ if $\tau <1/n$. By the reflection principle we have
$$p^0_s(0,y) =\sum_{k=-\infty}^\infty (-1)^k p_s^{(1)}(y-2k).$$
Since $\sup_y p^0_{1-\tau}(0,y) /p^0_1(0,y) $ tends to unity as $\tau \to0$, we have
$M''_n<2^{n}$ for all $\tau$ small enough. To find an upper bound of $M_n'$ we
deduce the following bounds: for some constant $C\geq 1$,
\begin{equation}\label{Ineq0}
\frac{p^0_{s-\tau}(0,y)}{p^0_s(0,y)}\leq C \sqrt{\frac{s}{s-\tau} }\quad \mbox{for} \quad \tau <s\leq 1,\, |y| <1; \mbox{and}
\end{equation}
\begin{equation}\label{Ineq1}
\frac2{s}p^{(1)}_s(1) - \frac6{s}p^{(1)}_s(3) < \mp \frac{\partial}{\partial y}p^0_s(0,y)\Big|_{y=\pm1} < \frac2{s}p^{(1)}_s(1)
\quad \mbox{for} \quad 0 <s\leq 1.
\end{equation}
The inequalities in (\ref{Ineq1}) are easy to show and their proof is omitted. As for (\ref{Ineq0}) we observe that if $\tau \leq s/2$, then $s>s-\tau >s/2$ so that the ratio on the left is bounded, while uniformly for $|y|>1/2$ and for $\tau>s/2$, the ratio tends to zero as $s\to 0$; in the remaining case $|y|\leq 1/2$, $ s/2< \tau<s$ the inequality (\ref{Ineq0}) is obvious. From (\ref{Ineq0}) and (\ref{Ineq1}) we see that if $s\geq 2\tau$, $M'_n < (C 2^{1/2})^n$; also if $\tau < s<2\tau$, then
for $\tau =1/n$ small enough,
$$M'_n < 2\bigg[C \sqrt{\frac{s}{s-\tau} }\, \bigg]^{n}\exp\Big\{-\frac{\tau}{2(s-\tau)s}\Big\}\leq 2C^n\exp\Big\{-\frac{n}{2(u-1)u} + \frac{n}2 \lg\frac1{u-1}\Big\},$$
where we put $u= s/\tau$. Thus, putting $m= \sup_{1\leq u\leq 2} \Big[- \frac{1}{2(u-1)u}+ \frac12 \lg \frac1{u-1}\Big]$ we have $M'_n \leq 2 (Ce^m)^n$. The proof of the lemma is complete.
\qed
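The reflection-principle representation of the Dirichlet heat kernel $p^0_s(0,y)$ used in the proof is easy to test numerically: the rapidly convergent alternating series vanishes at $y=\pm1$, is nonnegative on $[-1,1]$ and is dominated by the free kernel $p^{(1)}_s(y)$; a sketch (illustration only):

```python
import numpy as np

def p1(s, y):
    # free Gauss kernel p_s^{(1)}(y)
    return np.exp(-(y ** 2) / (2 * s)) / np.sqrt(2 * np.pi * s)

def p0(s, y, K=20):
    # Dirichlet heat kernel on [-1, 1] started at 0, via reflection:
    # p^0_s(0, y) = sum_k (-1)^k p_s^{(1)}(y - 2k)
    k = np.arange(-K, K + 1)
    return float(np.sum((-1.0) ** k * p1(s, y - 2 * k)))

s = 0.3
ys = np.linspace(-1.0, 1.0, 101)
vals = np.array([p0(s, y) for y in ys])
```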
\vskip2mm {\bf 3.2.2.} {\sc Convergence of the density for $\arg B_t$ conditioned on $\sigma_a=t$ ($d=2$). }\, Here we prove that the convergence in (\ref{f}) holds uniformly in $\theta$ and locally uniformly in $v$. \begin{Thm}\label{thm3.3} Let $d=2$. For each $M>1$, uniformly for
$\theta\in \mathbb{R}$ and $x \in (a, Mt)$, as $t\to\infty$
$$\frac{P_{x{\bf e}}[\arg B(\sigma_a)\in d\theta \,|\, \sigma_a =t]}{ \,d\theta}= f_a(\theta; x/t)(1+o(1)).$$
\end{Thm}
For the proof we need the following extension of Lemma \ref{lem3.40}.
\begin{Lem}\label{lem3.51} There exist constants $C$ and $A>0$ such that for all $\lambda > 1$, $t>1$ and $x> 1$, \begin{equation}\label{q/p2} q_1^{(2\lambda+2)}(x,t) \leq C(t/x)^\delta (A/\lambda)^\lambda p_{t+1/\lambda}^{(2\lambda+2)}(x). \end{equation} Here $\delta$ denotes the fractional part of $\lambda$. \end{Lem} \vskip2mm\noindent {\it Proof.}~ We may and do suppose $\lambda\in (n, n+1)$ for a positive integer $n$. Remember that $(P_x^{BS(\nu)}, X_t)$ designates a Bessel process of dimension $2\nu+2$. Put $\delta =\lambda-n$ and $\gamma(y) = \delta/y$. Then the drift transform gives $$P_x^{BS(\lambda)}[ \Gamma; T_1 \geq t] = E_x^{BS(n)} [Z(t); \Gamma, T_1 \geq t]$$
for any event $\Gamma$ measurable w.r.t. $(X_s)_{s\leq t}$ (cf. e.g. \cite{IW}). Here, since the drift term of $X_t$ under $P^{BS(n)}$ equals $(2n+1)/2X_t$, $$Z(t) = \exp\bigg\{\int_0^t \gamma(X_s)dX_s - \frac12 \int_0^t\Big[ \frac{\gamma(X_s)(2n+1)}{X_s} + [\gamma(X_s)]^2\Big]ds\bigg\}.$$
By Ito's formula we have $\int_0^t \gamma(X_s)dX_s = \delta\lg (X_t/X_0) - \frac12\int_0^t \gamma\,'(X_s)ds$. Observing $\gamma\,'(y)+(2n+1)\gamma(y)/y +\gamma^2(y) = (2n\delta +\delta^2)/y^2$, as in Section {\bf 3.1.2} we find
$$q_1^{(2\lambda+2)}(x,t) = x^{-\delta} E_x^{BS(n)}[e^{-\frac12 (2n\delta+\delta^2)\int_0^t X_s^{-2}ds}\, |\, T_1=t]q_1^{(2n+2)}(x,t). $$ The conditional expectation being dominated by unity, substitution from Lemma \ref{lem3.40} yields $$q_1^{(2\lambda+2)}(x,t) \leq (t/x)^\delta t^{-\delta} C_1 (A_1/n)^n p_{t+1/n}^{(2n+2)}(x)\leq C (t/x)^\delta n^\delta (A_1/\lambda)^\lambda p_{t+1/\lambda}^{(2\lambda+2)}(x),$$ showing the inequality of the lemma with any $A>A_1$. \qed
\vskip2mm\noindent {\it Proof of Theorem \ref{thm3.3}.} Let $a=1$. From the lemma above and Proposition \ref{lem3.20} we see that for $t>1, x>1$,
$$ E_{x{\bf e}}[ e^{i\lambda\arg B(\sigma_a)}|\sigma_1 = t]
\leq C' \Big[1\vee \lg\frac{t}{x}\Big]^{2}\Big(\frac{A}{|\lambda|}\Big)^{|\lambda|} \bigg(\frac{x}{t}\bigg)^{|\lambda|-\delta} \exp \frac{(x/t)^2}{2|\lambda|}, \,\,
\quad n< |\lambda| \leq n+1, $$ where we have also used the lower bound $q^{(2)}_1(x,t)\geq C[1\vee \lg(t/x)]^{-2}\, p_t^{(2)}(x)$. On recalling (\ref{K/nu}) as well as Theorem \ref{prop2} this shows that the characteristic function on the left converges to $K_0(x/t)/K_\lambda(x/t)$, the Fourier transform of $f_1(\,\cdot\,;x/t)$, in $L_1(d\lambda)$ uniformly for $x/t <M$, hence the uniform convergence asserted in the theorem. \qed
\vskip2mm\noindent
{\sc {\bf 3.3.} Distribution of $\Theta(\sigma_a)$}.\,
In this subsection we give proofs of Theorems \ref{thm1.1} and \ref{thm1.21}. To facilitate the exposition we first introduce the conditional density $g(\theta;x,t) $. We then expand $g$ into a series of spherical functions which almost immediately leads to (a refined version of) Theorem \ref{thm1.1} and to Theorem \ref{thm1.21}.
\vskip2mm
{\sc {\bf 3.3.1.} The conditional density $g(\theta;x,t) $.} Let $\theta(\xi)$ denote the colatitude of $\xi\in \partial U(a)$ as before.
By rotational symmetry around the axis $\eta{\bf e}, \eta\in \mathbb{R}$ we can define $g(\theta;x,t)$ by \begin{equation}\label{sin0}
g(\theta;x,t) := \frac{P_{x{\bf e}}[B(\sigma_a)\in d\xi \,|\, \sigma_a =t] }{m_a(d\xi )}, \quad \theta = \theta(\xi) \in [0,\pi]. \end{equation} Denote the colatitude $\theta(B_t)$ by $\Theta_t \in [0,\pi]$,
so that
$$ \cos \Theta_t = {\bf e}\cdot B_t /|B_t|.$$
Let $d\geq 3$ and $d\xi = a^{d-1}\sin^{d-2} \theta\, d o\times d\theta$, where $d o$ designates a $(d-2)$-dimensional surface element of the $(d-2)$-dimensional unit sphere.
Then $m_a(d\xi) = \sin^{d-2} \theta \,d\theta |do|/\omega_{d-1}$ and we see that \begin{equation}\label{sin}
g(\theta;x,t)= \frac{P_{x{\bf e}}[\Theta(\sigma_a)\in d\theta \,|\, \sigma_a =t]}{ \mu_d^{-1}\sin^{d-2}\theta \,d\theta}. \end{equation} Here $\mu_d= \int_0^\pi \sin^{d-2}\theta d\theta = \omega_{d-1}/\omega_{d-2}$.
When $d=2$, we have $\Theta_t = |{\rm Arg} \,B_t|$ and
$$g(|\theta|;x,t) = 2\pi \frac{ P_{x{\bf e}}[{\rm Arg} \, B(\sigma_a)\in d\theta \,|\, \sigma_a =t]}{ d\theta}, \quad \theta\in (-\pi,\pi).$$
Thus the measure $g(|\theta|;x,t) d\theta/2\pi$ on $|\theta|\leq \pi$ is the probability law of ${\rm Arg}\, B(\sigma_a)$ under $P_{x{\bf e}}[\cdot\,|\, \sigma_a=t]$ and we naturally regard $g(|\theta|;x,t)$ as a (continuous) function on the torus $\mathbb{R}/2\pi \mathbb{Z} \cong [-\pi, \pi]$.
It is noted that by letting $\omega_0=2$ so that $\mu_2=\pi$, the last expression conforms to (\ref{sin}). (In (\ref{sin}) the differential quotient at the end point $\theta =0$ (or $\pi$) is understood to be the right (resp. left) derivative of the distribution function.)
\vskip2mm
{\bf 3.3.2.} {\sc Series expansion of $g(\theta; x,t)$ when $d=2$}.\, Let $d=2$ and $g(\theta, x,t)$ be given as above. Denote by
$\alpha_n=\alpha_n(x,t)$, $n=0, 1, 2,\ldots$ the coefficients of the {\it Fourier cosine series} of $g(\theta) = g(\theta;x,t)$, $\theta\in [0, \pi]$: $\alpha_0=\pi^{-1}\int_0^\pi g(\theta)d\theta = 1$ and for $n\geq 1$,
$$\alpha_n = \frac2{\pi}\int_0^\pi g(\theta)\cos n\theta\, d\theta =
2 E_{x{\bf e}}[\cos n \Theta(\sigma_a)\,|\, \sigma_a =t],$$ so that \begin{equation}\label{F-exp} g(\theta; x,t)= \sum_{n=0}^\infty \alpha_n(x,t) \cos n\theta,
\end{equation} where the Fourier series is uniformly convergent (with any $x, t$ fixed) as one may infer from the smoothness of $g$ (or alternatively from our estimation of $\alpha_n$ given in (\ref{alpha}) below). Since $E_{x{\bf e}}[\cos n \Theta(\sigma_a)\,|\, \sigma_a =t] = E_{x{\bf e}}[e^{in \arg B(\sigma_a)}\,|\, \sigma_a =t]$, substitution from Proposition \ref{lem3.20} yields \begin{equation}\label{-0} \alpha_n(x,t) = 2\frac{q_a^{(2n+2)}(x,t)}{q_a^{(2)}(x,t)} \bigg(\frac{x}{a}\bigg)^{n}.
\end{equation} Based on this formula we derive the next result that provides an exact asymptotic form of the error term in Theorem \ref{thm1.1} (i).
(As another possibility one may use a classical formula for $g(\theta;x,t)q^{(2)}_a(x,t)$ that we give in Appendix.)
\begin{Thm} \label{thm-1} \, Let $d=2$. Uniformly for $\theta\in [0, \pi]$ and $x>a$, as $t\to\infty$ with $x/t\to 0$, $$g(\theta; x,t) = 1 + \frac{ax}{t} \ell_0(x,t)\Big[(1+o(1))\cos \theta + O\Big(\frac{x}{t}\Big)\Big],$$ where $$\ell_0(x,t)=\Big(1-\frac{a^2}{x^2}\Big) \frac{(\lg t)^{2}}{2 \lg (x/a)} \quad \mbox{ if }\,\, 1<x<\sqrt t ; \mbox{ and} \,\, = 2 \lg \frac{t}{x} \quad \mbox{ if} \,\,\, x>\sqrt t.$$ \end{Thm} \vskip2mm\noindent {\it Proof.} \, By elementary computation we deduce from Theorem A and (\ref{-0}) that \begin{equation}\label{-3} \alpha_1= \frac{ax}{t} \ell_0(x,t)(1+o(1)) \end{equation}
as $ x/t \to 0$. Plainly $\alpha_n(x,t)\geq 0$. It therefore suffices to show that \begin{equation}\label{-1} \sum_{n=2}^\infty \alpha_n(x,t) = O\bigg(\frac{x^2}{t^2} \ell_0(x,t)\bigg). \end{equation} Although Theorem A also yields $ \alpha_n(x,t) = O\Big((x/t)^n\ell_0(x,t)\Big)$ for each $n= 2, 3, \ldots$, for the present purpose we need an upper bound valid uniformly in $n$. Such a uniform bound is provided by
Lemma \ref{lem3.40} and on using it \begin{equation}\label{alpha} \alpha_n(x,t) \leq C_2 \frac{A_1^n}{n^n} \bigg(\frac{x}{t}\bigg)^{n} \frac{e^{-x^2/2t}}{tq_1^{(2)}(x,t)} \leq C_3 \frac{A_1^n}{n^n} \bigg(\frac{x}{t}\bigg)^{n} \ell_0(x,t),
\end{equation} which implies (\ref{-1}).
\qed
\vskip2mm
{\bf 3.3.3.} {\sc Series expansion of $g(\theta; x,t)$ when $d\geq 3$.} \, Recall
$$ P_{x{\bf e}}[\Theta(\sigma_a)\in d\theta\,|\, \sigma_a =t] = \mu_d^{-1}g(\theta;x,t) \sin^{d-2} \theta\,d\theta. $$ \begin{Thm} \label{thm-11} \, Let $d\geq 3$. For $\theta\in [0, \pi]$ and $x>a$, $$g(\theta;x,t) = \sum_{n=0}^\infty\bigg(\frac{x}{a}\bigg)^n\frac{q_a^{n+\nu}(x,t)}{q_a^{\nu}(x,t)} h_n(0)h_n(\theta),$$ where $h_n(\theta)$ denotes the $n$-th normalized eigenfunction of the Legendre process of order $\nu$ (see Section 6).
\end{Thm} \vskip2mm\noindent {\it Proof.} \, Let $(P^{L(\nu)}_\theta, \Theta_t)$ denote the Legendre process (on the state space $[0,\pi]$) of order $\nu$. Then by the skew product representation of $d$-dimensional Brownian motion we have \begin{eqnarray*} P_{\bf x}[\Theta(\sigma_a) \in d\theta, \sigma_a \in dt]
=(P_{\theta_0}^{L(\nu)}\otimes P_x^{BS(\nu)})[\Theta_\tau \in d\theta\,|\, T_a =t]\,q_a^{(d)}(x,t), \end{eqnarray*} where $\tau = \int_0^{T_a} X_s^{-2}ds$ and $\theta_0$ is the colatitude of ${\bf x}$. We apply the spectral expansion of the density of the distribution of $\Theta_t$ (see (\ref{spexp})) and Lemma \ref{lem2.2} in turn to deduce that \begin{eqnarray}\label{hexp}
&& (P_{\theta_0}^{L(\nu)}\otimes P_x^{BS(\nu)})[\Theta_\tau \in d\theta\,|\, T_a =t] /d\theta \nonumber\\
&&= E_x^{BS(\nu)}\Bigg[ \sum_{n=0}^\infty \exp\Big\{-\frac{n(n+2\nu)}{2} \tau \Big\}h_n(\theta_0)h_n(\theta) \frac{\sin^{d-2} \theta}{\mu_d} \,\Bigg| \, T_a =t \Bigg] \nonumber\\ &&= \frac{1}{\mu_d}\sum_{n=0}^\infty\bigg(\frac{x}{a}\bigg)^n\frac{q_a^{n+\nu}(x,t)}{q_a^{\nu}(x,t)} h_n(\theta_0)h_n(\theta) \sin^{d-2} \theta. \end{eqnarray} Comparing this with (\ref{sin}) shows the formula of the theorem. \qed \vskip2mm In view of the defining identity (\ref{sin0}) the case $d\geq 3$ of Theorem \ref{thm1.1} follows from
\begin{Cor} \label{cor-12} \, Let $d\geq 3$. Uniformly for $\theta\in [0, \pi]$ and $x>a$, as $x/t\to 0$ $$g(\theta;x,t) = 1 + \frac{ax}{t}\bigg[\frac{1-(a/x)^{d} }{1-(a/x)^{d-2}}\Big(\frac{d}{d-2}+o(1)\Big)\cos \theta + O\Big(\frac{x}{t}\Big)\bigg].$$ \end{Cor} \vskip2mm\noindent{\it Proof.}\, The asserted formula is derived as in the case $d=2$ by observing that $h_1(0)h_1(\theta) = 2(\nu +1)\cos \theta$ (see Section {\bf 6.1.1}) and $$\frac{x}{a}\frac{q_a^{1+\nu}(x,t)}{q_a^{\nu}(x,t)} \sim \frac{ax}{t}\cdot \frac{1-(a/x)^{2+2\nu}}{ 2\nu(1-(a/x)^{2\nu})}(1+o(1)).$$ \qed
\vskip2mm
{\bf 3.3.4.} {\sc Proof of Theorem \ref{thm1.21}.} \, The proof of Theorem \ref{thm1.21} proceeds as follows. For $d=2$ Theorem \ref{thm-11} is valid with $h_n(0)h_n(\theta)$ replaced by $2\cos n\theta$ if $n\geq 1$ as we have already observed (see (\ref{F-exp}) and (\ref{-0})); note, however, that if $d=2$ the product $h_n(\theta_0)h_n(\theta)$ must be replaced by $2\cos n(\theta-\theta_0)$ ($n\geq1$) in (\ref{hexp}). In any case substitution from (\ref{K/K}) gives the relation of Theorem \ref{thm1.21} for $d=2$ at a formal level. The relation (\ref{K/K}) is immediately extended to
$$\frac{q_a^{|\lambda|+\nu}(x,t)}{q_a^\nu(x,t)} \bigg(\frac{x}{a}\bigg)^{|\lambda|}
\sim \frac{K_{\nu}(av)}{K_{\nu+|\lambda|}(av)}\qquad (x/t\to v). $$ With these remarks as well as (\ref{q/p}) taken into account we obtain from (\ref{F-exp}) and Theorem \ref{thm-11} that for all $d\geq 2$, as $x/t\to v$
$$g(\theta;x,t) - \sum_{n=0}^\infty \frac{K_\nu(av)}{K_{\nu+n}(av)} b_nh_n(\theta) \, \longrightarrow\, 0, $$ uniformly in $\theta\in [0,\pi]$ and $0<v<M$ for each $M$. Here $b_nh_n(\theta)= 2\cos n\theta$ for $n\geq 1$ if $d=2$ and $b_n=h_n(0)$ if $d\geq 3$.
This shows Theorem \ref{thm1.21} except for identification of the constant factor in the case $d\geq 3$, which we give at the last line of Section {\bf 6.1.1}. \qed
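The Poisson-summation link between Theorem \ref{prop2} and the $d=2$ case of Theorem \ref{thm1.21} can also be tested numerically: wrapping the density $f_1(\,\cdot\,;v)$, recovered by Fourier inversion of $K_0(v)/K_{|\lambda|}(v)$, around the torus reproduces the cosine series with coefficients $K_0(v)/K_n(v)$. A sketch with scipy ($a=v=1$; grids ad hoc, illustration only):

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import simpson

v = 1.0
lam = np.linspace(0.0, 40.0, 40001)
phi = kv(0.0, v) / kv(lam, v)           # characteristic function of f_1(.; v)

def f1(theta):
    # Fourier inversion of the even characteristic function
    return simpson(np.cos(lam * theta) * phi, x=lam) / np.pi

theta = np.linspace(-np.pi, np.pi, 41)
# wrap f_1 around the torus R / 2 pi Z
wrapped = np.array([sum(f1(t + 2 * np.pi * k) for k in range(-3, 4)) for t in theta])
# cosine series (1 + 2 sum_n (K_0/K_n) cos n theta) / 2 pi
series = np.full_like(theta, 1.0)
for n in range(1, 30):
    series += 2 * kv(0.0, v) / kv(float(n), v) * np.cos(n * theta)
series /= 2 * np.pi
err = np.max(np.abs(wrapped - series))
```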
\vskip2mm\noindent {\sc Remark 3.} \, There exists an unbounded and increasing positive function $C(v)$, $v>0$ such that $C(0+)\geq 1$ and $$1/C(x/t) \leq g(\theta;x,t) \leq C(x/t) \qquad (0\leq \theta\leq \pi, t>1).$$ The upper bound follows from (\ref{F-exp}), Theorem \ref{thm-11} and estimates like (\ref{alpha}), while the lower bound can be verified by an argument analogous to the one made at (\ref{f_v}) (or in Section {\bf 5.4}).
\section{ Estimates of the hitting density for $t<1$ }
Put for $z>a$ \begin{equation}\label{h_z00} h_a(z,t,\phi) = \frac{P_{z{\bf e}}[\Theta(\sigma_a)\in d\phi, \sigma_a \in dt]}{ \mu_d^{-1}\sin^{d-2}\phi\, \,d\phi dt}, \quad \phi\in [0,\pi], \end{equation} or, by means of $g=g_a$ given in (\ref{sin}), $$ h_a(z,t,\phi)= g_a(\phi; z, t) q^{(d)}_a(z,t);$$ recall that $g_a(\phi;z, t)$ represents the density, with respect to $m_a(d\xi)dt$, of the hitting site distribution conditional on $\sigma_a=t$ and $B_0=z{\bf e}$, evaluated at $\xi$ with colatitude $\phi = \theta(\xi)$. In view of the rotational symmetry of Brownian motion it follows that for any $\xi\in \partial U(a)$ with ${\bf z}\cdot\xi /za=\cos \theta$ and ${\bf z} \notin U(a)$, \[
h_a(|{\bf z}|,t, \theta) = \frac{P_{{\bf z}}[B(\sigma_a)\in d\xi, \sigma_a \in dt] }{m_a(d\xi )dt}. \]
In this section we provide some upper and lower bounds of $h_a(z,t,\phi)$ for $t<1$, which are used in the next section to estimate it when $z/t$, along with $t$, tends to infinity. We include certain easier results for $t\geq 1$. The main results of this section are
given in Lemmas \ref{lem3.5} and \ref{Imp1}.
For all dimensions $d\geq 2$ the function $h_a(z,t,\phi)$ satisfies the scaling relation $$h_a( z, t, \phi) = a^{-2} h_1(z/a, t/a^2, \phi).$$
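This relation is a direct consequence of Brownian scaling; the following sketch records the computation (the factor $a^{-2}$ comes only from the time differential, since $m_a$ is normalized to a probability measure):

```latex
% Under P_{z\mathbf{e}}, the process B'_s := a^{-1}B_{a^2 s} is a Brownian motion
% started at (z/a)\mathbf{e}, with \sigma_a(B) = a^2\sigma_1(B') and
% \Theta(\sigma_a(B)) = \Theta'(\sigma_1(B')).  Hence, with t = a^2 t',
P_{z{\bf e}}\big[\Theta(\sigma_a)\in d\phi,\ \sigma_a\in dt\big]
  = P_{(z/a){\bf e}}\big[\Theta'(\sigma_1)\in d\phi,\ \sigma_1\in dt'\big],
\qquad dt = a^2\, dt',
% and dividing both sides by \mu_d^{-1}\sin^{d-2}\phi\,d\phi\,dt yields
h_a(z, t, \phi) = a^{-2}\, h_1(z/a,\ t/a^2,\ \phi).
```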
Throughout this section $X_t$ always denotes a standard linear Brownian motion. As in the preceding section, $P_y^{BM}$ and $E^{BM}_y$ denote the probability and expectation for $X_t$, and $T_y$ the first passage time of $X$ to $y$. We shall apply the skew product representation of $d$-dimensional Brownian motion, and the Bessel processes of dimensions $d\geq 2$ will become relevant. However, most of the results of this section that actually concern the Bessel processes follow from those for the linear Brownian motion $X_t$ because of the boundedness of the Radon-Nikodym density $Z(t)$ ($t<1$) that is given in the proof of Lemma \ref{lem2.1} (see Remark 4 below for more details).
\vskip2mm\noindent
{\bf 4.1.} {\sc Some Basic Estimates.}
\vskip2mm
\begin{Lem}\label{lem3.2i} \, Let $b>0$. For $0<y<b$ and $0< t \leq b^2$, $${\displaystyle \, \frac{P^{BM}_y[T_0\in dt, T_b< T_0]}{dt} \leq C\frac{yb^2}{t^2}p^{(1)}_t(b)}.$$ \end{Lem} \vskip2mm\noindent {\it Proof.} \, By the reflection principle it follows that $$\frac{P^{BM}_y[T_0\in dt, T_0< T_b]}{dt} = \frac1{\sqrt{2\pi t^3}} \sum_{n= -\infty}^\infty( 2nb +y) \exp\bigg\{-\frac{ (2nb +y)^2}{2t}\bigg\} $$ (\cite{KS}, (8.26)). Writing the right-hand side above in terms of $q_0^{(1)}$ (cf. (\ref{1dim})) we see that
\begin{eqnarray*} \frac{P^{BM}_y[T_0\in dt, T_b< T_0]}{dt} & =& q^{(1)}_0(y,t) - \frac{P^{BM}_y[T_0\in dt, T_0< T_b]}{dt} \\ &=&
\sum_{n=1}^\infty [q^{(1)}_0(2nb-y,t)- q^{(1)}_0(2nb+y,t)].
\end{eqnarray*} On using the mean value theorem the difference under the summation symbol is dominated by $$\frac{2y}{\sqrt{2\pi t^3}} \frac{[(2n+1)b]^2}{ t} e^{- [(2n-1)b]^2/2t} \quad (0<y < b, 0<t<b^2).$$ By easy domination of these terms for $n\geq 2$ we find
the upper bound of the lemma.
\qed
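The mean value theorem step can be spelled out as follows; this sketch uses the explicit form $q_0^{(1)}(z,t)=\frac{z}{\sqrt{2\pi t^3}}e^{-z^2/2t}$:

```latex
% For z between 2nb-y and 2nb+y (with 0<y<b, t\le b^2, n\ge 1),
\big|\partial_z q_0^{(1)}(z,t)\big|
 = \frac{1}{\sqrt{2\pi t^3}}\,\Big|1-\frac{z^2}{t}\Big|\,e^{-z^2/2t}
 \leq \frac{1}{\sqrt{2\pi t^3}}\cdot\frac{z^2}{t}\,e^{-z^2/2t},
% since z \geq (2n-1)b \geq b entails z^2/t \geq 1; hence
q^{(1)}_0(2nb-y,t)- q^{(1)}_0(2nb+y,t)
 \leq \frac{2y}{\sqrt{2\pi t^3}}\cdot\frac{[(2n+1)b]^2}{t}\, e^{-[(2n-1)b]^2/2t}.
```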
\vskip2mm\noindent
{\sc Remark 4.} \,Lemma \ref{lem3.2i} extends to the $d$-dimensional Bessel process $|B_t|$ with essentially the same bound if the positions $0$, $y$ and $b$ are raised to $a$, $a+y$ and $a+b$, respectively, by using the drift transformation. For later reference we give it here in the form
\begin{equation}\label{drift}
P_{(a+y){\bf e}}[ A \,|\, \sigma_{a} = t] = c_a(y,t) E_{a+y}^{BM}[ e^{\beta_\nu \int_0^t X_s^{-2}ds}; A^X \,|\, T_a =t],
\end{equation}
where $\beta_\nu =\frac18 (1-4\nu^2)= \frac18(d-1)(3-d)$, $A$ is an event of the process $|B_s|, 0\leq s\leq t$, $A^X$ the corresponding one for $X$ and
$$c_a(y,t) := \bigg(\frac{a}{a+y}\bigg)^{(d-1)/2}\frac{q_a^{(1)}(a+y,t) }{q_a^{(d)}(a+y,t)} =1 + O\bigg(\frac{t}{a(a+y)}\bigg) \quad (0<t< a^2, y>0).$$ (The last equality follows from Theorem B.)
\begin{Lem}\label{lem3.2ii} \, For $\alpha >0$ there is a constant $\kappa_{\alpha,d}$ (depending on $d, \alpha$) such that $$E_{(1+y){\bf e}}\bigg[\bigg(\int_0^t\frac{ds}{ |B_s|^{2}}\bigg)^{-\alpha} \,\bigg|\, \tau_{U(1+\lambda)}<t, \sigma_1=t\bigg] \leq \kappa_{\alpha,d} (1+\lambda^{2\alpha}) t^{- \alpha}$$ for $\lambda>0$, $0<y < \lambda$ and $ t<\lambda^2$,
where $\tau_{U(b)}$ denotes the first exit time from $U(b)$. \end{Lem} \vskip2mm\noindent {\it Proof.}\, The proof is given only for the case $d=1$. Put $M_t = \max_{s\leq t} X_s$. Then the conditional expectation in the lemma multiplied by $t^\alpha$ is at most
$$E^{BM}_{y} [ (1+M_t)^{2\alpha}\,|\, T_\lambda< T_0=t] \leq 4^\alpha + 4^\alpha\frac{E^{BM}_y[M_t^{2\alpha}; T_\lambda<t \,|\, T_0=t]}{P^{BM}_y[ T_\lambda <t\,|\, T_0=t]}.$$
The last ratio may be expressed as a weighted average of $ E^{BM}_\lambda[ M_{t-s}^{2\alpha}\,|\, T_0 =t-s]$ over $0\leq s\leq t$, which, by virtue of scaling property, is dominated by $C'_\alpha\lambda^{2\alpha}$, yielding the desired bound. \qed \vskip2mm
\begin{Lem}\label{lem3.4} \, There exists a constant $\kappa_{d}$ depending only on $d$
such that for $0 < \lambda \leq 8$,
$$h_a(a+y,t, \phi) \leq \kappa_{d} \frac{a^{2\nu+1}y}{t} \bigg(p_t^{(1)}(y) p_t^{(d-1)}(a\phi) + \frac{ (\lambda a)^2}{t}p_t^{(d)}(\lambda a)\bigg)$$ whenever $0\leq \phi<\pi$, $0< y< \lambda a$ and $ 0<t<(\lambda a)^2$.
\end{Lem}
\vskip2mm\noindent
{\it Proof.}\, We may let $a=1$. Suppose $d=2$. Let $(P^Y, (Y_t))$ be a standard Brownian motion on the torus
$\mathbb{R}/ 2\pi \mathbb{Z}$ (identified with $(-\pi,\pi]$) that is started at 0 and independent of $(B_t)_{t\geq 0} $. Then by skew product representation of $B_t$
\begin{equation}\label{h_skew} h_1(1+y,t, \phi)= 2\pi (P^Y\otimes P_{(1+y){\bf e}}) [ Y_\tau\in d\phi, \sigma_1\in dt ]/d\phi dt,
\end{equation}
where $ \tau = \int_0^{\sigma_1} |B_s|^{-2} ds$ and $\otimes$ signifies the direct product of measures.
We rewrite this identity by means of the linear Brownian motion $X_t$ only. Because of translation invariance of the law of the increment of $X_t$ we shift the starting point of $X_t$ so that $X_0=y$ and define
\begin{equation}\label{tau_1}
\tau^X = \int_0^{T_0}\frac{ds}{(1+X_s)^{2}}.
\end{equation}
We perform the integration of $Y$ first and apply the drift transform (as in the proof of Lemma \ref{lem2.1}) to deduce from (\ref{h_skew}) that
\begin{equation}\label{h_skew2}
h_1(1+y,t, \phi)= \frac{2\pi}{\sqrt{1+y}} E^{BM}_{y} \Big[e^{\frac18 \tau^X } p_{\tau^X}^{\rm trs}(\phi) \,\Big|\, T_0=t\Big] q_0^{(1)}(y,t),
\end{equation}
where $p_t^{\rm trs}(\phi)$ denotes the density of the distribution of $Y_t$. We break the conditional expectation above into two parts according as $T_0 < T_\lambda$ or $T_0> T_\lambda$, and denote the corresponding ones by $J(T_0<T_\lambda) $ and $J(T_0> T_\lambda )$, respectively. Note that $\tau^X <t $ (under $T_0=t$) so that $ p_{\tau^X}^{\rm trs}(\phi) \leq C p_{\tau^X}^{(1)}(\phi)$ if $\sqrt t < \lambda\leq 8$.
Then, using Lemma \ref{lem3.2i} (with $b=\lambda$) and Lemma \ref{lem3.2ii} (with $\alpha= 1/2$) we observe
\begin{eqnarray}\label{J1}
J(T_0> T_\lambda )
&=& E_y^{BM} [e^{\frac18 \tau^X} p^{{\rm trs}}_{\tau^X}(\phi) \,|\, T_0=t> T_\lambda] \times P^{BM}_{y}[ T_0>T_\lambda \,|\, T_0=t]
\nonumber\\
&\leq& C e^{\frac18 t}E_y^{BM} [(\tau^{X})^{-1/2} \,|\, T_0=t> T_\lambda] \times P^{BM}_{y}[ T_0>T_\lambda \,|\, T_0=t] \nonumber\\ &\leq& Ce^{\lambda^2/8}\frac{(1+\lambda)}{t^{1/2}}\times \frac{y\lambda^2}{t^2}p^{(1)}_t(\lambda) \times \frac{1}{q_0^{(1)}(y,t)}.
\end{eqnarray}
On the other hand, the trivial domination
$P^{BM}_{y}[ T_0<T_\lambda \,|\, T_0=t] \leq 1$ yields
\begin{eqnarray}\label{J2}
J(T_0<T_\lambda)
&\leq & Ce^{\lambda^2/8}E^{BM}_{y}[p_{\tau^X}^{(1)}(\phi) \,|\, T_0=t< T_\lambda] \\
& \leq&C e^{\lambda^2/8}(1+\lambda) p^{(1)}_t(\phi). \nonumber \end{eqnarray} Here the second inequality is due to the inequality $p^{(1)}_{\tau^X}(\phi) \leq (1+\lambda)p^{(1)}_t(\phi)$ that is valid if $(1+\lambda)^{-2}t<\tau^X<t$, hence if $t< T_\lambda$. On recalling $q_0^{(1)}(y,t) = (y/t)p_t^{(1)}(y)$ these together show the estimate of the lemma when $d=2$.
The higher-dimensional case $d\geq 3$ can be dealt with in the same way in view of what is noted in Remark 4 and the fact that the transition density of a (spherical) Brownian motion on the $(d-1)$-dimensional sphere is comparable with that on the flat space if $t$ is small (cf. Section {\bf 6.1.2}). The details are omitted. \qed
\vskip2mm
\begin{Lem}\label{lem3.41} \, Uniformly for $y>0$, as $(y^3+|\phi|^3 ) /t \to 0$ and $t\downarrow 0$
$$\frac{h_a(a+y,t, \phi)}{2\pi} = \frac{a^{2\nu+1}y}{t} p_t^{(1)}(y) p_t^{(d-1)}(a\phi)(1+o(1)).$$
\end{Lem}
\vskip2mm\noindent
{\it Proof.}\, The proof proceeds by examining the preceding one. We suppose $d=2$ and $a=1$. By virtue of the identity (\ref{h_skew2}) it suffices to show
\begin{equation}\label{EXPT}
E^{BM}_{y} \Big[e^{\frac18 \tau^X } p_{\tau^X}^{\rm trs}(\phi) \,\Big|\, T_0=t\Big] = p_t^{(1)}(\phi)(1+o(1))
\end{equation}
in the same limit as in the lemma. Given $t>0$ we put $\lambda= \lambda(t)= t^{1/3}$. With $b= \lambda(t)$ the inequality of Lemma \ref{lem3.2i} holds true, hence so do (\ref{J1}) and (\ref{J2}), even though $\lambda(t)$ depends on $t$. From the constraint on $\phi, y$ and $t$ imposed in the lemma it follows that
\begin{equation}\label{ratio}
\frac{y+|\phi| +\sqrt t}{\lambda(t)} \to 0\quad \mbox{and}\quad \frac{\phi^2\lambda(t)}{t} \to 0.
\end{equation} As before we break the expectation into two parts. The part $J(T_0> T_\lambda )$ is negligible, for the last member in (\ref{J1}) is at most a positive multiple of $t^{-3/2}p_t^{(1)}(\lambda)/p_t^{(1)}(y)$ and the latter is $o(p_t^{(1)}(\phi))$ under (\ref{ratio}). As for
$J(T_0< T_\lambda )$, the estimate from above is provided by (\ref{J2}): indeed, the constant $C$ in (\ref{J2}), which comes in from the bound $ p_{\tau^X}^{\rm trs}(\phi) \leq C p_{\tau^X}^{(1)}(\phi)$, may be taken arbitrarily close to 1 as $\tau^X<t \to 0$. The estimate from below is obtained by observing that if $T_0< T_\lambda$ (so that $\tau^X > (1+\lambda)^{-2}t$), then
$$\frac{p_{\tau^X}^{(1)}(\phi)}{p_t^{(1)}(\phi)} = \sqrt{\frac{t}{\tau^X}} \exp\Big\{-\frac{\phi^2}{2t\tau^X}\int_0^t\frac{2X_s+X_s^2}{(1+X_s)^2}ds\Big\}\geq \frac{1}{1+ \lambda}e^{- 2\phi^2\lambda/t} \to 1.$$ The proof of the lemma is complete.
\qed
\vskip2mm The estimate of Lemma \ref{lem3.4}, which concerns the case when $(z-a)/t$ is bounded above, will be improved in Lemma \ref{Imp1} of the next subsection. The next lemma
provides a bound of $ h_a(z,t, \phi)$ valid for a wide range of the variables $z$, $\phi$ and $t$. To simplify its statement as well as its proof, we introduce a notation that represents $ h_a(z,t, \phi)$ in a different way.
For ${\bf z}\notin U(a)$, put \begin{equation}\label{h_z0}
h^*_a({\bf z},t) = \frac{P_{{\bf z}}[ B(\sigma_a)\in d\xi, \sigma_a \in dt]}{ m_a(d\xi) dt}\bigg|_{\xi=a{\bf e}},
\end{equation} which may be also understood to be the density evaluated at $(0,t)$ of the joint law of $(\Theta(\sigma_a), \sigma_a)$ under $P_{\bf z}$.
If ${\bf z}\cdot{\bf e}/z=\cos \phi\neq -1$, $z=|{\bf z}|$, then $ h_a(z,t, \phi)= h^*_a({\bf z},t)$ due to rotational symmetry of Brownian motion.
When $d=2$
these may be given as follows:
\begin{equation}\label{h_z2} h^*_a(ze^{i\phi},t) = h_a(z, t, \phi) = 2\pi \frac{P_{z{\bf e}}[{\rm Arg}\, B(\sigma_a)\in d\phi, \sigma_a \in dt]} {d\phi dt}.
\end{equation}
\vskip2mm
\begin{Lem}\label{lem3.5} \, Let $|{\bf z}|>a$ (${\bf z}\in \mathbb{R}^d$) and put $r=|{\bf z}- a{\bf e}|$. Then for some constant $\kappa_d$,
$$\quad h^*_a({\bf z},t) \leq \kappa_d q^{(d)}_a(z,t) \qquad \mbox{ if} \quad t> a^2\vee ar; \,\mbox{and}$$
$$h^*_a({\bf z},t) \leq \kappa_d a^{2\nu} \, \frac{a r}{t}\, p_t^{(d)}(r) \quad\, \mbox{if} \quad t \leq a^2\vee ar. $$
\end{Lem}
\vskip2mm\noindent
{\it Proof.}\, The case $ t \geq ar$ is readily disposed of. Indeed
the asserted inequality is implied by Theorems \ref{thm1.1} and \ref{thm1.21} (in conjunction with Theorem A) if $ t \geq a^2\vee ar$, and
by Lemma \ref{lem3.4} if $ar<t <a^2$ (note that $p^{(d)}_t(r)\asymp p^{(d)}_t(0)$ in the latter case).
In the rest of the proof we let $a=1$ and suppose $t\leq r$, a case which plainly entails $t<1\vee r$ and thus concerns the second bound of the lemma. Take positive numbers $\varepsilon<1$ and $R$ so that $r-\varepsilon>R>\varepsilon$. Then, on considering the ball about $(1-\varepsilon){\bf e}$ of radius $\varepsilon$,
\begin{eqnarray}\label{951}
h^*_1({\bf z},t) &\leq& \varepsilon^{-2\nu-1} h^*_\varepsilon( {\bf z}- (1-\varepsilon){\bf e}, t) \nonumber\\
& =&\int_0^t \frac{h^*_\varepsilon(\xi,t-s)}{\varepsilon^{2\nu+1}} \int_{\partial U(R)} P_{{\bf z}- (1-\varepsilon){\bf e}}[\sigma_{U(R)} \in ds, B_{s} \in d\xi].
\end{eqnarray} Here, in the middle member we have the factor $\varepsilon^{-2\nu-1}$ in front of $h^*_\varepsilon$ since the uniform probability measure of the surface element $d\xi$ at ${\bf e}$ of the sphere $\partial U(\varepsilon)+ (1-\varepsilon){\bf e}$ equals $\varepsilon^{-2\nu-1}m_1(d\xi')$ with $d\xi' \subset \partial U(1)$ designating the projection of $d\xi$ on $\partial U(1)$ (see Remark 5 following this proof for the inequality).
Write
$$r_*=r_*(\varepsilon) = | {\bf z} -(1-\varepsilon){\bf e}|, \,\,\tilde r =r_*- R \,\, \mbox{ and} \,\,\, \tilde R=R-\varepsilon$$
and suppose that $ R<4\varepsilon<r/2$ so that
$${\textstyle |r-r_*|<\varepsilon < \frac18 r, \,\,\, |r-\tilde r| < \frac14 r, \,\,\, \tilde R< 3\varepsilon \,\, \,\mbox{ and} \,\,\, \tilde r >\frac14 t.}$$
We apply, for $s>\varepsilon(R+\varepsilon)$, the first inequality of the lemma that we have already proved at the beginning of this proof and, for $s \leq\varepsilon(R+\varepsilon)$, Lemma \ref{lem3.4} with $\lambda=3$ to infer that
$$ \sup_{\xi\in \partial U(R)} \frac{h^*_\varepsilon(\xi, s)}{\varepsilon^{2\nu+1}} \leq \kappa_d\bigg(\frac1{\varepsilon} \vee \frac{\tilde R}{s}\bigg)p_s^{(d)}(\tilde R).$$ Thus the repeated integral in (\ref{951}) is dominated by a constant multiple of
\[ I := \int_0^{t} \bigg(\frac1{\varepsilon} \vee \frac{\tilde R}{s}\bigg) p_s^{(d)}(\tilde R) q_{R}(r_*, t-s)ds.
\]
Write $I_{[a,b]}$ for the integral above restricted on the interval $[a,b]$. Applying Theorem B we see \[ I_{[0,t/2]}\leq \kappa'_d\int_0^{t/2} \bigg(\frac1{\varepsilon} \vee \frac{\tilde R}{s}\bigg) p_s^{(d)}(\tilde R) \frac{\tilde r}{t-s}p_{t-s}^{(1)}(\tilde r) \bigg(\frac{R}{r}\bigg)^{(d-1)/2} ds\Big[1+ O\Big(\frac{t}{Rr}\Big)\Big].
\] On using the inequality $1/(t-s) \geq 1/t + s/t^2$, the right-hand side is bounded above by \begin{eqnarray*}
\frac{\kappa''_d\,\tilde r}{t^{3/2}} \bigg(\frac{R}{r}\bigg)^{(d-1)/2} e^{-\tilde r^2/2t}\int_0^{\infty} \bigg(\frac1{\varepsilon} \vee \frac{\tilde R}{s}\bigg)\exp\Big\{-\frac{\tilde r^2s}{2t^2}- \frac{\tilde R^2}{2s}\Big\}\frac{ds}{s^{d/2}} \Big[1\vee \frac{t}{\varepsilon r}\Big]. \end{eqnarray*} Supposing \begin{equation}\label{000} \tilde R \tilde r/t >1/2, \end{equation} we compute the last integral (use if necessary (\ref{13}) of Section {\bf5.2}) to conclude $$I_{[0,t/2]}\leq \kappa_d'''\bigg(\frac1{\varepsilon} \vee \frac{\tilde r}{t}\bigg)\frac{1}{t^{d/2}}\bigg(\frac{R}{\tilde R}\bigg)^{(d-1)/2} e^{- (\tilde r+ \tilde R)^2/2t}e^{\tilde R^2/2t}\Big[1\vee \frac{t}{\varepsilon r}\Big]. $$ For the other interval $[t/2,t]$ we obtain $$\bigg(\frac1{\varepsilon} \vee \frac{\tilde R}{t}\bigg)^{-1}I_{[t/2,t]} \leq \frac{\kappa_d}{t^{d/2}}\int_{0}^{t/2} q_R(r_*,s)ds \leq \frac{\kappa_d}{ t^{d/2}}P^{BM}_0\Big[\max_{s\leq t/2}X_s > r_*-R\Big],$$ and, since the last probability is at most $2e^{-2(r_*-R)^2/t}$, taking $R= 2 \varepsilon$ (so that $\tilde R= \varepsilon$ and $\tilde r +\tilde R =r_*-\varepsilon$) yields $$I_{[t/2,t]} \leq \kappa_d' \bigg(\frac1{\varepsilon} \vee \frac{\varepsilon}{t}\bigg)\,p^{(d)}_{t/2}(r_*-2\varepsilon),$$ which combined with the bound of $I_{[0,t/2]} $ obtained above shows
$$I \leq \kappa_d''' \bigg(\frac1{\varepsilon} \vee \frac{r}{t}\bigg)\, p_t^{(d)}(r_*-\varepsilon) e^{\varepsilon^2/2t}\Big[1\vee \frac{t}{\varepsilon r}\Big] $$ provided $r/t>1$ and (\ref{000}) is true. We may suppose $r^2> 8t$. For if $r^2\leq 8t$, entailing $r<8$ and $p_t^{(d)}(r) \asymp p_t^{(d)}(0)$, the formula to be shown follows from Lemma \ref{lem3.4} with $\lambda =8$. Now take $\varepsilon=t/r$, which conforms to the requirement (\ref{000}) as well as the condition $\varepsilon< r/8$ imposed at the beginning of the proof. Then,
$p_t^{(d)}(r_*-\varepsilon) e^{\varepsilon^2/2t}\leq p_t^{(d)}(r-2\varepsilon) e^{\varepsilon^2/2t} \leq p_t^{(d)}(r)e^{2\varepsilon r/t} = p_t^{(d)}(r)e^{2}$, and we find that $h^*_1({\bf z},t) \leq \kappa_d (r/t)p_t^{(d)}(r)$ as asserted in the lemma. \qed
\vskip2mm\noindent {\sc Remark 5.} The inequality in (\ref{951}), though appearing intuitively obvious, may require verification. We suppose $d=2$ for simplicity and use the notation $h_a^*({\bf z},t,\theta)$, $-\pi < \theta <\pi$, given in (\ref{h*}) of the next section (it designates the density of $(\sigma_a, {\rm Arg} \, B(\sigma_a))$ at $(t,\theta)$). Write $0'$ for $(1-\varepsilon){\bf e}$. For any $1< b< z$, the Brownian motion starting at ${\bf z}$ hits $\partial U(b)$ before $U(\varepsilon)+0'$ (the shift of $U(\varepsilon)$ by $0'$), hence \begin{equation}\label{Rem5} h_\varepsilon^*({\bf z}-(1-\varepsilon){\bf e},t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} d\phi \int_{0}^t h^*_{b}({\bf z},t-s, \phi) h_\varepsilon^*(be^{i\phi}-0',s)ds. \end{equation} By using an explicit form of the Poisson kernel of the domain $\mathbb{C}\setminus U(\varepsilon)$ we deduce that for each $\delta>0$ (chosen small), as $y:=b-1 \downarrow 0$ and $\phi \to 0$ \begin{equation}\label{Rem51}
\frac{1}{2\pi \varepsilon}\int_0^\delta h_\varepsilon^*(be^{i\phi}-0',s)ds = \frac1{\pi}\cdot \frac{y}{y^2+ (b\phi)^2}(1+o(1))
\end{equation}
(cf. Appendix (C)). Restricting the range of the outer integral to $|\phi|< \sqrt y$ in (\ref{Rem5}) and passing to the limit we obtain the required upper bound of $h_1^*({\bf z},t,0) =h_1^*({\bf z},t)$.
\vskip2mm\noindent
{\bf 4.2.} {\sc Refinement in Case $t<1$.} \,
In the next section we shall apply Lemma \ref{lem3.4} with ${\bf z}$ on the plane that is tangent to $U(a)$ at a point of the surface $ \partial U(a)$. By the underlying rotational invariance we may suppose that the plane is tangent at $a{\bf e}$ so that ${\bf z}\cdot {\bf e} = a$. Let $\phi$ be the colatitude of ${\bf z}$ so that
\begin{equation}\label{eta_y}\eta:=|{\bf z}- a{\bf e}|= a\tan \phi \quad\mbox{and}\quad y:= |{\bf z}|-a= a\sec \phi -a.
\end{equation}
Then $y/a\sim \frac12 \phi^2$ and an elementary computation yields
\begin{equation}\label{Texp}
\phi^2+ \frac{y^2}{a^2} = \phi^2 + (\sec \phi -1) ^2 = \frac{\eta^2}{a^2} - \frac{5}{12}\phi^4 - O(\phi^6),
\end{equation}
from which one may infer that
when $y/\sqrt t$ is large the upper estimate of Lemma \ref{lem3.4} is not fine enough: in fact the term $-\frac5{12}a^2\phi^4/2t$ can be removed from the exponent of the exponential factor involved in $p_t^{(1)}(y) p_t^{(d-1)}(a\phi)$, as asserted in the next proposition (cf. its Corollary). (However, the bound of Lemma \ref{lem3.4} is of the correct order if $\sqrt t > y$.) This seemingly minor flaw becomes serious in the proof of Theorem \ref{thm1.2} (when $\theta$ is close to $\frac12 \pi$).
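For the record, (\ref{Texp}) follows from the elementary expansions $\tan \phi = \phi + \frac13\phi^3 + O(\phi^5)$ and $\sec\phi - 1 = \frac12\phi^2 + \frac5{24}\phi^4 + O(\phi^6)$:

```latex
\frac{\eta^2}{a^2} = \tan^2\phi = \phi^2 + \frac{2}{3}\phi^4 + O(\phi^6),
\qquad
\frac{y^2}{a^2} = (\sec\phi - 1)^2 = \frac{\phi^4}{4} + O(\phi^6),
% so that
\phi^2 + \frac{y^2}{a^2}
 = \frac{\eta^2}{a^2} - \Big(\frac{2}{3} - \frac{1}{4}\Big)\phi^4 + O(\phi^6)
 = \frac{\eta^2}{a^2} - \frac{5}{12}\,\phi^4 + O(\phi^6).
```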
The next proposition partially improves both Lemma \ref{lem3.4} and the second inequality of Lemma
\ref{lem3.5} (in the case $t<1$). Remember the definition of $h^*_a({\bf z},t)$ given right after (\ref{h_z0}).
\vskip2mm
\begin{Prop}\label{cor_Imp}
Let $z=|{\bf z}|>a$, ${\bf z}\cdot {\bf e}/z = \cos \phi$ $(|\phi|< \pi)$, $y= z-a$, and $r= |{\bf z}- a{\bf e}|$ as in Lemmas \ref{lem3.4}
and \ref{lem3.5}. There exist positive constants $C_1$, $C_2$ and $C$ depending only on $d$ such that whenever $ t<a^2, y<a$ and $ |\phi| <1$,
\begin{eqnarray*}
\frac{C_1 a^{2\nu+1}y}{t}p_t^{(d)}(r) e^{-C(a\phi)^2[\phi^2+a^{-1}\sqrt t\,]/ t} \,\leq \, h^*_a({\bf z},t) \,
\leq \, \frac{C_2 a^{2\nu+1}y}{t}p_t^{(d)}(r)
e^{C\phi^4[ay+(a\phi)^2]/t}.
\end{eqnarray*}
\end{Prop} \vskip2mm
\begin{Cor} \label{prop_main} There exist positive constants $\kappa_d$ and $M$, depending only on $d$, such that if $t<a^2$ and $\eta$, $\phi$ and $y$ are given as in (\ref{eta_y}) with $|\phi| <1$, then $$
h_a(a+y,t, \phi) \leq \frac{\kappa_d a^{2\nu+1} y}{t}p^{(d)}_t(\eta) e^{M\eta^6/a^4t }.
$$
\end{Cor}
\vskip2mm
Our proof of Proposition \ref{cor_Imp} rests on the skew product formula (\ref{h_skew}) and requires some elaborate estimates of the distribution of the random time $\tau^X$ given in (\ref{tau_1}), namely $$\tau^X = \int_0^{T_0} \frac{ds}{ (1+ X_s)^2}.$$ Here $X$ denotes a standard linear Brownian motion; its law conditioned on $X_0=r$ is denoted by $P_r^{BM}$, as mentioned in the beginning of this section.
In the situation we are interested in, the starting point of $B_t$ is close to the sphere $\partial U(1)$, so that the Bessel process $|B_t|$ may be replaced by linear Brownian motion.
\begin{Lem}\label{claim} \, For $b>0$ and $r>0$, \begin{equation}\label{trunc}
P^{BM}_r[ X_{1-s} \geq br +rs \,\,\mbox{for some}\,\, s\in [0,1]\,|\, T_0=1] \leq 6 e^{- \frac23 b^2r^2}. \end{equation} \end{Lem} \vskip2mm\noindent {\it Proof.}\,
Let $R_t, t\geq 0$ be a three-dimensional Bessel process and $L_r$ its last passage time of $r$. Then we have the following sequence of identities of conditional laws: \begin{eqnarray}\label{id_law} &&(X_{1-s})_{0\leq s\leq 1} \,\,\mbox{conditioned on}\,\,X_0=r, T_0=1\nonumber\\
&&\quad \stackrel{\rm law}{=} (R_s)_{0\leq s\leq 1} \,\,\mbox{conditioned on}\,\, R_0=0, L_r=1 \nonumber\\
&&\quad \stackrel{\rm law}{=} (R_s)_{0\leq s\leq 1} \,\,\mbox{conditioned on}\,\, R_0=0, R_1=r\\
&&\quad \stackrel{\rm law}{=} (R_{1-s})_{0\leq s\leq 1} \,\,\mbox{conditioned on}\,\, R_0=r, R_1=0 \nonumber\\ &&\quad \stackrel{\rm law}{=} (sR_{s^{-1}-1})_{0\leq s\leq 1} \,\,\mbox{conditioned on}\,\, R_0=r \nonumber
\end{eqnarray}
(see \S 1.6 and \S 8.1 of \cite{YY} and (3.7) and (3.6) in \S XI.3 of \cite{RY}). On using the last expression a simple manipulation shows that the conditional probability in (\ref{trunc}) equals \begin{equation}\label{EQ_r}
P^R[ R_u > (b+1)r+ bru \,\,\mbox{for some}\,\, u \geq 0\,|\, R_0=r ], \end{equation} where $P^R$ denotes the law of $(R_t)$.
Since $R_t$ has the same law as the distance from the origin of a three-dimensional Brownian motion starting at $(r/\sqrt 3, r/\sqrt 3,r/\sqrt 3)$, the probability in (\ref{EQ_r}) is dominated by
$$3P^{BM}_{r/\sqrt 3}\Big[ |X_s| > (b+1+ bs)r/\sqrt3 \,\,\mbox{for some}\,\, s \geq 0 \Big],$$ which is at most $6 e^{- \frac23 b^2r^2}$ according to a well known bound of escape probability of a linear Brownian motion with drift. The bound (\ref{trunc}) has been verified. \qed
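The ``well known bound'' referred to above is the classical estimate $P^{BM}_0[\sup_{s\geq 0}(X_s - cs)\geq b] = e^{-2bc}$ ($b, c>0$) for Brownian motion with negative drift; a sketch of how it yields the constant $\frac23$:

```latex
% Each coordinate starts at r/\sqrt3; treating the upward and downward crossings
% separately (the downward crossing probability is at most the upward one),
P^{BM}_{r/\sqrt 3}\Big[\,|X_s| > \frac{(b+1+bs)\,r}{\sqrt 3}\ \mbox{ for some } s\geq 0\Big]
 \leq 2\, P^{BM}_0\Big[\sup_{s\geq 0}\Big(X_s - \frac{br}{\sqrt 3}\,s\Big) \geq \frac{br}{\sqrt 3}\Big]
 = 2\, e^{-2(br/\sqrt 3)^2} = 2\, e^{-\frac23 b^2 r^2};
% summing over the three coordinates gives the stated bound 6e^{-\frac23 b^2 r^2}.
```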
\begin{Lem}\label{Imp} \, There exists a constant $C>1$ such that for $0< \delta \leq 1$, $0<t<1$ and $y > 0$,
${\rm (i)} \quad {\displaystyle P^{BM}_y\Big[\tau^X \geq \frac{t}{1+ (1-\delta)y}\,\Big|\, T_0 =t\Big]\leq C\bigg(1\wedge \frac{\sqrt t}{\delta y}\bigg)e^{-3[\delta (1- 2y)]^2y^2/2t}\quad \mbox{if}\quad y<\frac14.}$ \vskip3mm
${\rm (ii)} \quad {\displaystyle P^{BM}_y\Big[\tau^X \geq \frac{t}{1+ (1+\delta)y+ \delta y^2}\,\Big|\, T_0 =t\Big]\geq 1- C^{-1} e^{-\delta^2y^2/6t}.}$ \end{Lem}
\vskip2mm\noindent
{\it Proof.}\, By the scaling property of $X$ the conditional probabilities to be estimated may be written as
\vskip2mm
$I_{-}:= P^{BM}_r[\tilde\tau^X \geq \frac{1}{1+ (1-\delta)y}\,|\, T_0 =1]\quad$ and
$\quad I_{+}:= P^{BM}_r [\tilde\tau^X \geq \frac{1}{1+ (1+\delta)y+ \delta y^2}\,|\, T_0 =1],$
\vskip2mm\noindent
where
$$ r=\frac{y}{\sqrt t},\,\, \tilde \tau^X = \int_0^1 \frac{ds}{(1+ \sqrt t X_s)^2}.$$
According to Lemma \ref{claim} the lower bound (ii) readily follows from this expression. Indeed, if $\sqrt t X_{1-s} < ys +\frac12 \delta y$ for $0<s<1$, then $$ \tilde \tau^X \geq \int_0^1\frac {ds}{(1+ys+\frac12\delta y)^2}= \frac1{1+(1+\delta )y+(1+\frac12\delta)\frac12 \delta y^2},$$ implying the occurrence of the event of the conditional probability giving $I_+$, hence the required lower bound.
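The integral appearing above is elementary; with $c = 1+\frac12\delta y$,

```latex
\int_0^1 \frac{ds}{(c+ys)^2}
 = \frac{1}{y}\Big(\frac{1}{c} - \frac{1}{c+y}\Big)
 = \frac{1}{c(c+y)},
% and expanding the denominator:
c(c+y) = \Big(1+\tfrac12\delta y\Big)\Big(1+y+\tfrac12\delta y\Big)
 = 1 + (1+\delta)y + \big(1+\tfrac12\delta\big)\tfrac12\delta y^2.
```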
The upper bound (i) requires a delicate estimation. We write the event under the conditional probability for $I_-$ in the form \begin{eqnarray}\label{write} \tilde\tau^X - \frac{1}{1+ y} &=& \int_0^1\bigg[\frac1{(1+\sqrt t X_s)^2} -\frac1{(1+ys)^2}\bigg]ds \\ &\geq& \frac{\delta y}{(1+(1-\delta)y)(1+y)}.\nonumber \end{eqnarray}
Observe that the integral above is less than $2\int_0^1(ys-\sqrt t\, X_s)ds$ a.s. and the last member is larger than $\delta y(1-2 y)$ (for $y>0$), so that the inequality (\ref{write}) implies
\begin{equation}\label{write2} \int_0^1(ys-\sqrt t\, X_s)ds \geq
\frac12 \delta y(1 - 2y) \quad \mbox{if} \quad \sup_{0<s<1}|X_s -rs |< 2r.
\end{equation} Owing to Lemma \ref{claim} we have $P^{BM}_0[ \sup_{0<s<1}|X_s -rs |>2r \,|\, T_r=1] \leq 12 e^{- 2y^2/t} $,
which along with (\ref{write2}) shows
$$I_-\leq P^{BM}_0\bigg[\int_0^1(ys-\sqrt t X_s)ds \geq \frac12 \delta y(1 - 2y)\,\bigg|\, T_r =1\bigg] + 12e^{-2y^2/t}.$$
Using (\ref{id_law}) again we rewrite the probability on the right in terms of the three-dimensional Bessel process $R_t$, which results in
$$P^R\bigg[\int_0^\infty \frac{r-R_s}{(1+s)^3}ds>\frac12\delta r(1 - 2 y) \,\bigg|\, R_0 =r\bigg]. $$ For our present objective of obtaining an upper bound we may replace $R_s$ by $X_s$. Since the random variable $\int_0^\infty \frac{r-X_s}{(1+s)^3}ds= -\frac12\int_0^\infty (1+s)^{-2}dX_s$ is Gaussian of mean zero under $P^{BM}_r$ and its variance equals $$ E^{BM}_0\bigg[\Big(\int_0^\infty \frac{ X_s ds}{(1+s)^{3}}\Big)^2\bigg]= \frac14\int_0^\infty (1+s)^{-4}ds = \frac1{12}, $$ it follows that if $y<1/4$, $$I_- \leq C\bigg(1\wedge \frac{1}{\delta r}\bigg)e^{-3r^2(\delta - 2\delta y)^2/2} +12e^{-2y^2/t}.$$ On the right-hand side the second term may be absorbed into the first, resulting in the required bound. \qed
\vskip2mm\vskip2mm The next lemma, valid for all $d\geq 2$, improves the bound of Lemma \ref{lem3.4} when $r/t>1$.
\vskip2mm\noindent
\begin{Lem}\label{Imp1} \, There exists a positive constant $C$ depending only on $d$ such that
\begin{equation}\label{crucial}
h_a(a+y,t, \phi) \leq \frac{Ca^{2\nu+1}y}{t^{1+d/2}}
\exp\Big\{- \frac{1}{2t}\Big( (a^2+ ay)\phi^2 + y^2 -\frac{a^2}{12} \phi^4 - 12a y\phi^4 \Big) \Big\}
\end{equation}
whenever $0< y <a$, $t<a^2$ and $|\phi|<1$. \end{Lem}
\vskip2mm\noindent
{\it Proof.}\, Suppose $d=2$ and $a=1$, the case $d\geq 3$ being briefly discussed at the end of this proof. Let $\tau^X$ be as in the preceding lemma. From (\ref{h_skew2}) it plainly follows that
\begin{equation}\label{h_skew3}
h_1(1+y,t, \phi)\leq 2\pi E^{BM}_y[e^{\tau^X/8}p^{(1)}_{\tau^X}(\phi) \,|\, T_0=t]q^{(1)}_1(1+y,t).
\end{equation}
Noting $\tau^X< t$, we compute $E_y^{BM} [e^{-\phi^2/2\tau^X}\,|\, T_0= t]$.
Define the random variable $\mit\Delta$ via $$\frac1{ \tau^X} = \frac{1+y - y\mit\Delta}{t},$$ so that \begin{equation} \label{so that}
E_y^{BM}[e^{-\phi^2/2\tau^X}| T_0= t] = e^{-(1+y)\phi^2/2t}E_y^{BM}[e^{(\phi^2/2t)y\mit\Delta}| T_0= t]. \end{equation} Put
$F(\delta)=P_y^{BM}[\mit\Delta \geq \delta \,|\, T_0= t ]$ for $ -\infty<\delta\leq 1$. Then by Lemma \ref{Imp} (i)
$$F(\delta) = P^{BM}_y\Big[ \tau^X \geq \frac{t}{1+ (1-\delta)y}\,\Big|\, T_0 =t\Big]\leq \frac{ C}{1+\delta y t^{-1/2}\,} e^{-3[\delta(1 - 2y)]^2 y^2/2t}$$ (for $y<1/4, 0<\delta\leq 1$). Put $$A = \frac{\phi^2}{2t}y \quad\, \mbox{and} \quad\, B= A\frac{\phi^2}{y} = \frac{\phi^4}{2t}.$$
Then, noting $F(1-0)=0$, we perform integration by parts to see that
\begin{eqnarray*} E_y^{BM}[e^{(\phi^2/2t)y\mit\Delta}| T_0= t] &=& -\int_{-\infty}^1 e^{A\delta}dF(\delta) = \int_{-\infty}^1 A e^{A\delta}F(\delta) d\delta\\ &\leq& 1+ C\int_{0}^1 \frac{ A}{1+\delta y t^{-1/2}\,} \exp\Big\{A\delta - 3 \frac{\delta^2(1-2y)^2y^2}{2t}\Big\}d\delta. \end{eqnarray*}
The last integral restricted to the
interval $(\phi^2/y)\wedge 1\leq \delta \leq 1$ is dominated by 4, provided $y<1/8$, for in this interval we have $\delta y \geq \phi^2$ so that the exponent involved in the integrand is bounded from above by
$$A\delta - 3\frac{\delta^2y^2}{2t} \Big(1 - \frac14\Big)^2 \leq - \frac14 A\delta$$
($y\leq 1/8$), and thus the integral by $\int_0^\infty A e^{-A\delta/4}d\delta =4$. On the other hand, write the exponent as $$A\delta - 3 (1-2y)^2\frac{\delta^2y^2}{2t} = \frac{B}{12} - 3\bigg(\frac{ y}{\sqrt{2t}}\delta - \frac{1}{6}\sqrt B\bigg)^2 + 3\frac{4 \delta^2 y^3(1 -y)}{2t}$$ and observe that the last term is less than $6\phi^4 y/t$ if $\delta\leq \phi^2/y$. Then, we transform the integral over $[0, \phi^2/y)$ by changing the variable of integration according to $u=\frac{ y}{\sqrt{2t}}\delta - \frac16 \sqrt B$ and, noting $A\sqrt{2t}/y = \sqrt B$, we deduce that it is at most
$$\sqrt B \int_{-\sqrt{B}/6}^{5\sqrt{B}/6}\frac{e^{-3 u^2}du}{ 1+ \sqrt 2 (u + \frac16\sqrt{B})} \exp\Big\{\frac{B}{12} + \frac{6y\phi^4}{t}\Big\}\leq \frac{C\sqrt{B}}{1+\sqrt B}\exp\Big\{\frac{B}{12} + \frac{6y\phi^4}{t}\Big\},
$$ hence by virtue of (\ref{so that}) \begin{eqnarray}\label{exp9}
E_y^{BM}[e^{-\phi^2/2\tau^X}| T_0= t] &\leq& Ce^{-(1+y)\phi^2/2t}\bigg(1 + \frac{\sqrt{B}}{1+\sqrt B} \exp\Big\{ \frac{ \frac{1}{12} \phi^4 + 12 y\phi^4 }{2t} \Big\}\bigg) \nonumber \\ &\leq& C' \exp\Big\{ \frac{ -(1+y)\phi^2 + \frac{1}{12} \phi^4 + 12 y\phi^4 }{2t} \Big\}. \end{eqnarray} On recalling (\ref{h_skew3}) (and (\ref{1dim}) as to $q_1^{(1)}$) this concludes
the assertion of the lemma, for if $\phi^2 > t$, then $p^{(1)}_{\tau^X}(\phi) \leq p^{(1)}_{t/2}(\phi)$ on the event $\tau^X\leq t$ and $p^{(1)}_{\tau^X}(\phi)\leq \frac1{\sqrt {2\pi t}}e^{-\phi^2/2\tau^X}$ otherwise, while if $\phi^2\leq t/2$, the lemma is obvious (see e.g. Lemma \ref{lem3.4}).
The case $d\geq 3$ is dealt with in the same way as above for the same reason mentioned at the end of the proof of Lemma \ref{lem3.4}. We employ the skew product representation of $d$-dimensional Brownian motion. For the radial component the same remark as given in Remark 4 is applied to the bounds obtained in Lemma \ref{Imp}. The spherical component behaves as the Brownian motion on the flat space for small $t$. It follows that in place of (\ref{h_skew2}) we have
$$h_1(1+y,t, \phi) \leq C_d E^{BM}_y[p^{(d-1)}_{\tau^X}(\phi)\,|\, T_0=t]q^{(1)}_1(1+y,t).$$ Thus the desired bound (\ref{crucial}) follows from (\ref{exp9}).
\qed
\vskip2mm\noindent
{\it Proof of Proposition \ref{cor_Imp}.}\, For $|\phi|<1$, \begin{eqnarray}
|{\bf z}- a{\bf e}|^2 &=& (y+a)^2 - 2(ay+a^2)\cos \phi +a^2 \nonumber\\ &=&y^2+ (a^2+ay)\phi^2 -\frac{a^2}{12}\phi^4 + O(\phi^4 ay + a^2\phi^6), \label{cos_f} \end{eqnarray} and the upper bound in Proposition \ref{cor_Imp} follows from Lemma \ref{Imp1}.
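The expansion (\ref{cos_f}) is a direct consequence of $\cos\phi = 1 - \frac12\phi^2 + \frac1{24}\phi^4 + O(\phi^6)$; a minimal verification:

```latex
(y+a)^2 + a^2 - 2a(a+y)\cos\phi
 = y^2 + 2a(a+y)\Big(\frac{\phi^2}{2} - \frac{\phi^4}{24} + O(\phi^6)\Big)
 = y^2 + (a^2+ay)\phi^2 - \frac{a^2}{12}\phi^4 + O(\phi^4 ay + a^2\phi^6),
% where (y+a)^2 + a^2 - 2a(a+y) = y^2 is used in the first step.
```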
For the lower bound we suppose $a=1$ for simplicity and apply the skew product expression (\ref{h_skew}). Suppose $d=2$. As in the proof of Lemma \ref{lem3.4} (see (\ref{h_skew2})) we have
$$
h^*_1({\bf z},t) = h_1(1+y,t,\phi) \geq E_y^{BM}[p_{\tau^X}^{(1)}(\phi) \,|\, T_0=t] q^{(1)}_0(y,t).
$$ Plainly $\tau^X<t$ from the definition, while by (ii) of Lemma \ref{Imp} with $\delta = \sqrt t \,/y$ we see that
$P^{BM}_y[\tau^X>t (1+y + \sqrt t(1+y))^{-1}\,|\, T_0=t] \geq 1-e^{-1/6}$. If $t < \phi^2$ (so that $p_\tau(\phi)$ is increasing in $\tau \in (0, t]$), it therefore follows that
$$h^*_1({\bf z},t) \geq \kappa_d \frac{y}{t^2} \exp\bigg\{- \frac{y^2 + (1+y)\phi^2+ 2\phi^2\sqrt t}{2t}\bigg\},$$
entailing the required lower bound in view of (\ref{cos_f}). If $t\geq \phi^2$, then the conditional expectation above is bounded below by
$\kappa_d/\sqrt t$ and observing $(r^2-y^2) /t \leq (1+y)$ we obtain $h^*_1({\bf z},t) \geq \kappa_d'yt^{-1}p^{(2)}_t(r)$, a better lower bound.
\qed
\section{ Proof of Theorem \ref{thm1.2} (Case $d=2$) }
Throughout this section we let $d=2$; also let ${\bf x} =x{\bf e}$ and write $v$ for $x/t$. The definition of $h_a$ given at the beginning of Section 4 may read
$$h_a(x,t, |\theta|) = 2\pi P_{\bf x}[{\rm Arg}\, B(\sigma_a)\in d\theta, \sigma_a\in dt]/d\theta dt \quad (x>a, -\pi <\theta <\pi).$$ In this section we prove \vskip2mm
\begin{Thm}\label{thm5.1} Let $v=x/t$. Then, \vskip2mm {\rm (i)} uniformly for $ 0\leq \theta < \frac12\pi - v^{-1/3} $ and for $t>1$, as $v\to \infty$ \begin{equation}\label{EQ}
h_a( x,t, \theta) = 2\pi av \, p_t^{(2)}(|{\bf x} -ae^{i\theta}|) \cos \theta \bigg[ 1 + O\Big( \frac{1}{(\frac12\pi -\theta)^3 v}\Big) \bigg];\, \mbox{and} \end{equation}
{\rm (ii)} there exists a universal constant $C$ such that for $ |\frac12\pi -\theta|<(av)^{-1/3}$, $t>a^2$ and $v>2/a$, $$C^{-1} \frac{1}{(av)^{1/3}}\leq \frac{h_a(x, t, \theta)}{a v e^{-av(1-\cos \theta)} p_t^{(2)}(x -a)} \leq C \frac{1}{(av)^{1/3}}. $$ \end{Thm}
\vskip2mm Note that $\cos \theta \sim \frac12\pi -\theta$ as $\theta \to \frac12\pi$ and \begin{equation}\label{eq5.10}
p_t^{(2)}(|{\bf x} -ae^{i\theta}|) =e^{-av(1-\cos \theta)} p_t^{(2)}(x -a). \end{equation} The following corollary of Theorem \ref{thm5.1} is a restatement of Theorem \ref{thm1.2} for the case $d=2$.
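The relation (\ref{eq5.10}) is an exact identity (since $|{\bf x}-ae^{i\theta}|^2 = (x-a)^2 + 2ax(1-\cos\theta)$ and $v=x/t$), and it can be spot-checked numerically. The following Python sketch (standard library only; not part of the argument) does so:

```python
import cmath
import math

def p2(t, r):
    # two-dimensional Gaussian kernel p_t^{(2)}(r) = e^{-r^2/2t} / (2 pi t)
    return math.exp(-r * r / (2 * t)) / (2 * math.pi * t)

def lhs(a, x, t, theta):
    # p_t^{(2)}(|x e - a e^{i theta}|)
    return p2(t, abs(x - a * cmath.exp(1j * theta)))

def rhs(a, x, t, theta):
    # e^{-av(1-cos theta)} p_t^{(2)}(x - a), with v = x/t
    v = x / t
    return math.exp(-a * v * (1 - math.cos(theta))) * p2(t, x - a)

for theta in (0.0, 0.4, 1.2, 2.8):
    l, r = lhs(1.0, 7.0, 2.0, theta), rhs(1.0, 7.0, 2.0, theta)
    assert abs(l - r) <= 1e-12 * r
```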
\begin{Cor}\label{cor1} For $t>a^2$ and $v= x/t>2/a$, \begin{eqnarray*}
&&\frac{P_{\bf x}[{\rm Arg}\, B(\sigma_a)\in d\theta\,|\, \sigma_a =t]}{d\theta} \\
&&\quad = \sqrt{\frac{av}{2\pi}}e^{-av(1-\cos \theta)}\cos \theta \bigg[ 1 + O\Big(\frac{1}{av\cos^3 \theta}\Big)\bigg] \quad \mbox{if} \quad \cos \theta \geq\frac{1}{(av)^{1/3}};\, \mbox{and} \\
&&\quad \asymp \sqrt{av}\,e^{-av(1-\cos \theta)} (av)^{-1/3} \qquad\qquad\qquad\qquad\,\, \mbox{if} \quad |\cos \theta| \leq \frac{1}{(av)^{1/3}}. \end{eqnarray*} \end{Cor}
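As a consistency check (not part of the proof), the leading term $\sqrt{av/2\pi}\,e^{-av(1-\cos\theta)}\cos\theta$ of the conditional density in Corollary \ref{cor1} should integrate to approximately $1$ over $|\theta|<\frac12\pi$ for large $av$, the deviation being $O(1/av)$. A Python sketch with a simple trapezoidal rule (standard library only; the grid size is an arbitrary choice):

```python
import math

def leading_density(theta, av):
    # principal term of the conditional density in Corollary cor1
    return (math.sqrt(av / (2 * math.pi))
            * math.exp(-av * (1 - math.cos(theta))) * math.cos(theta))

def integrate(f, lo, hi, n=40001):
    # composite trapezoidal rule on a uniform grid
    h = (hi - lo) / (n - 1)
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n - 1))
    return s * h

av = 200.0
total = integrate(lambda th: leading_density(th, av),
                  -math.pi / 2, math.pi / 2)
# total should be close to 1, with deviation O(1/av)
assert abs(total - 1.0) < 0.01
```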
For $|\theta| > \frac12 \pi + (av)^{-1/3}$ we shall obtain an upper bound (Lemma \ref{wc}) which together with Corollary \ref{cor1} verifies the next corollary.
\begin{Cor}\label{cor2} As $v:=x/t\to\infty$
$$ \sqrt{\frac{\pi}{2av}}e^{av(1-\cos \theta)} P_{{\bf x}}[{\rm Arg}\, B(\sigma_a)\in d\theta\,|\, \sigma_a =t] \,\Longrightarrow \,\frac{1}{2}{\bf 1}\Big( |\theta| <\frac12 \pi\Big) \cos \theta \,d\theta.$$ \end{Cor}
\vskip2mm For the proof it will be convenient to bring in the notation \begin{equation}\label{h*}
h^*_a({\bf z},t, \theta) =2\pi \frac{P_{\bf z}[{\rm Arg}\, B(\sigma_a)\in d\theta, \sigma_a\in dt]}{d\theta dt} \quad (|{\bf z}| >a, 0 \leq |\theta |< \pi), \end{equation} which is a natural extension of $h^*_a$ introduced in Section {\bf 4.1}: $h^*_a({\bf z},t)= h^*_a({\bf z},t,0)$; also,
$h_a(z, t, |\theta|) = h^*_a(z{\bf e},t, \theta)$.
\vskip2mm\noindent
{\bf 5.1.} {\sc Lower Bound I}. \vskip2mm
The following lemma, though easy to obtain, gives a correct asymptotic form of $h_a$ if $\theta \in (0, \pi/2)$ is away from $\frac12 \pi$ and provides a guideline for later arguments. Combined with Theorem A it also entails Proposition \ref{thm0}. Let ${\bf x} =x{\bf e}$ and $v=x/t$ and put $$\Psi_a(x,t,\theta) = \frac{2\pi ax}{t} e^{-\frac{ax}{t}(1-\cos \theta)} p_t^{(2)}(x -a) \Big(\cos \theta -\frac{a}{x}\Big).
$$
\vskip2mm
\begin{Lem}\label{LBD} \, For all $x>a, t>a^2$ and $\theta\in (-\frac12 \pi,\frac12 \pi)$, \begin{equation}\label{LB}
\frac{P_{\bf x}[\arg B(\sigma_a)\in d\theta, \sigma_a\in dt] }{d\theta dt} \geq \Psi_a(x,t,\theta);
\end{equation}
in particular $h_a(x,t, \theta) \geq \Psi_a(x,t,\theta)$. \end{Lem}
\vskip2mm\noindent {\it Proof.}\, We represent points on the plane by complex numbers. Let $0\leq \theta <\frac12 \pi$ and denote by $L(\theta)$ the straight line tangent to the circle $\partial U(a)$ at $ae^{i\theta}$. Let $\sigma_{L(\theta)}$ be the first time $B_t$ hits $L(\theta)$ and consider the coordinate system $(u, l)$ where the $u$-axis is the line through ${\bf x}$ perpendicular to $L(\theta)$ and the $l$-axis is $L(\theta)$ so that the $l$-coordinate of the tangential point $ae^{i\theta}$ equals $x\sin \theta$ (see Figure 1). Put \begin{equation}\label{psi0} \psi_a(l,t)= \frac{P_{\bf x}[ B(\sigma_{L(\theta)})\in d l, \sigma_{L(\theta)} \in dt]\,}{dl dt} \end{equation} and
\begin{equation}\label{K} U = \int_0^{t} ds \int_{\mathbb{R}\setminus \{x\sin \theta\}} \psi_a(l, t-s) h^*_a(\xi^*_a(l),s,\theta)dl, \end{equation} where $h^*_a$ is defined by (\ref{h*}) and $\xi_a^*(l)$ denotes the point of the plane which lies on $L(\theta)$ and whose $l$-coordinate equals $l$ (so that $\xi^*_a(x\sin \theta) = ae^{i\theta}$).
Then
\begin{equation}\label{LB9} h_a(x, t, \theta) = h^*_a({\bf x},t,\theta) = 2\pi a \psi_a(x\sin \theta,t) + U.
\end{equation}
Here the factor $a$ of the first term on the right-hand side of (\ref{LB9}) comes out from the relation $dl = ad\theta$ valid at $ae^{i\theta}$;
for the present proof we need only the lower bound (with $U$ discarded), which is verified by the same argument as in Remark 5; the equality, however, is used later, and its verification is given after this proof. We claim \begin{equation}\label{clm8}
\psi_a(x\sin \theta,t) = \frac1{2\pi a}\Psi_a(x,t,\theta).
\end{equation} Since the $u$-coordinate of ${\bf x}$ equals $x\cos \theta -a$ we have in turn
$$
\psi_a(l,t) = \frac{x\cos \theta -a}{t} \, p^{(1)}_{t}(x\cos \theta -a)p^{(1)}_{t}(l)$$ and \begin{equation}\label{eq3.1}
\psi_a(x\sin \theta,t)
= \frac{x\cos \theta -a}{2\pi t^2} e^{- |x e^{i\theta} -a|^2/2t }. \end{equation} Hence, noting (\ref{eq5.10}), we readily identify the right-hand side of (\ref{eq3.1}) with that of (\ref{clm8}).
Finally one may realize that (\ref{LB9}) shows $a\psi_a(x\sin \theta,t)$ to be a lower bound for the density of the distribution of $(\arg B(\sigma_a), \sigma_a)$ (rather than $({\rm Arg}\, B(\sigma_a), \sigma_a)$). \qed \vskip2mm
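The algebra behind (\ref{eq3.1}), namely that $(x\cos\theta-a)^2 + x^2\sin^2\theta = |xe^{i\theta}-a|^2$ so that the product of the two one-dimensional kernels collapses to the stated form, can be spot-checked numerically. A Python sketch (standard library only; not part of the proof):

```python
import cmath
import math

def p1(t, r):
    # one-dimensional Gaussian kernel p_t^{(1)}(r)
    return math.exp(-r * r / (2 * t)) / math.sqrt(2 * math.pi * t)

def psi_at_tangent(a, x, t, theta):
    # (x cos th - a)/t * p_t(x cos th - a) * p_t(x sin th)
    u = x * math.cos(theta) - a
    return (u / t) * p1(t, u) * p1(t, x * math.sin(theta))

def closed_form(a, x, t, theta):
    # right-hand side of (eq3.1)
    r2 = abs(x * cmath.exp(1j * theta) - a) ** 2
    return (x * math.cos(theta) - a) / (2 * math.pi * t * t) * math.exp(-r2 / (2 * t))

for theta in (0.1, 0.5, 1.0):
    l, r = psi_at_tangent(1.0, 5.0, 2.0, theta), closed_form(1.0, 5.0, 2.0, theta)
    assert abs(l - r) <= 1e-12 * abs(r)
```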
{\it Proof of (\ref{LB9}).} We are to take the limit as $b\downarrow a$ in the expression
\begin{equation}\label{m9} h_a^*({\bf x},t,\theta) = \int_0^tds \int_{l\in \mathbb{R}} \psi_b(l,t-s)h^*_a(\xi^*_b(l), s,\theta)dl \qquad (a<b <x).
\end{equation}
Here the coordinate $l$ and $\xi^*_b(l)$ are analogously defined with the tangential line to $\partial U(b)$ at $be^{i\theta}$.
Put $y=b-a$ and split the inner integral in (\ref{m9}) at $l= x\sin \theta \pm \sqrt y$.
First consider
$$
m_{\rm in}(b) := \int_0^tds \int_{|l- x\sin \theta|<\sqrt y} \psi_b(l,t-s)h^*_a(\xi^*_b(l), s,\theta)dl. $$
As in Remark 5 we apply the explicit form of the Poisson kernel of $\mathbb{C} \setminus U(a)$ to see that for each $\delta>0$, uniformly for $l: |l-x\sin \theta|<\sqrt y$, as $y\downarrow 0$
$$ ({2\pi a})^{-1}\int_0^\delta h^*_a(\xi^*_b(l), s,\theta) ds =\frac{1}{\pi}\cdot \frac{y}{y^2+(l- x\sin \theta)^2}(1+o(1)),$$
which yields
$
\lim_{b\downarrow a} m_{\rm in}(b) = 2\pi a\psi_a(x\sin\theta, t) $ in view of continuity of $\psi_b(l,t-s)$.
As for the contribution of the range $\{l: |l-x\sin \theta|\geq \sqrt y \}$, denoted by $m_{\rm out}(b)$, we
substitute in the integral representing it the expression
$$h^*_a(\xi^*_b(l), s,\theta)= \int_0^s ds' \int_{l' \in \mathbb{R}} f_{b-a}(l-l', s-s')\, h^*_a(\xi^*_a(l'), s',\theta)dl', $$ where $f_{y}(l-l', s-s') = y (s-s')^{-1}p_{s-s'}^{(1)}(y)p_{s-s'}^{(1)}(l-l')$ represents the space-time hitting density of the line $L(\theta)$ for the Brownian motion $B_t$ started at $\xi^*_b(l)$, and
perform the integration w.r.t. $dsdl$ first to see that $m_{\rm out}(b)$ converges to $U$. \qed
\vskip2mm\noindent
{\bf 5.2.} {\sc Upper Bound I}.
\begin{Prop}\label{UBD1} \, Let $v=x/t>1$ and $t>1$. For some universal constant $C>0$, $$h_a(x,t, \theta) \leq \Psi_a(x,t,\theta) \bigg[1+\frac{C}{ (\frac12 \pi -\theta)^3 av} \bigg]\quad\quad \mbox{if}\quad 0 \leq \theta <\frac{\pi}2 -\frac1{(av)^{1/3}} . $$ \end{Prop} \vskip2mm\noindent
For the proof of Proposition \ref{UBD1} we compute $U$ given in (\ref{K}): it suffices to show the upper bound \begin{equation}\label{U} U \leq \frac{C\Psi_a(x,t,\theta)}{ (\frac12 \pi -\theta)^3 av} \qquad \mbox{for}\quad 0 \leq \theta <\frac{\pi}2 -\frac1{(av)^{1/3}}. \end{equation} Let $\psi_a$ be the density of the hitting distribution in space-time of $L(\theta)$ defined by (\ref{psi0}). Bringing in the new variable $\eta\in \mathbb{R}$ by
$$l= x\sin \theta -\eta $$
we write
\begin{equation}\label{psi01}
\psi_a(l,t) = \frac{x\cos \theta -a}{t} \, p^{(1)}_{t}(x\cos \theta -a)p^{(1)}_{t}(x\sin \theta -\eta).
\end{equation}
We break the repeated integral defining $U$ into two parts by splitting the time interval $[0,t]$ at $s=a/v$ (namely $s/a^2 = 1/av$, conforming to the scaling relation) and denote the corresponding integrals by
$$U_{[0,a/v] }\quad \mbox{ and}\quad U_{[a/v,t]},$$
respectively.
The rest of the proof is divided into three steps.
{\it Step 1.} \,
The essential task of the proof is the estimation of $U_{[0, a/v]}$, which is carried out in Lemmas \ref{lem5.2.1} through \ref{lem5.2.4}.
Recall
$$U_{[0,a/v]} = \int_0^{a/v} ds \int_{\mathbb{R}} \psi_a(l, t-s)h_a^*(\xi^*_a(l),s,\theta)dl,$$ and write $$J_E = \frac1{a}\int_{E} e^{v\eta \sin \theta } \,d\eta\int_0^{a/v} \exp\Big\{-\frac{v^2}{2} s\Big\}h_a^*(\xi^*_a(l),s, \theta)ds \qquad (E \subset [0,\infty)).$$
By an obvious comparison argument we may restrict our consideration to the half line $l< x\sin \theta$, i.e.\ to $\eta >0$.
\begin{Lem}\label{lem5.2.1} \qquad\qquad
$U_{[0,a/v]} \leq C\Psi_a(x,t,\theta) J_{[0,\infty)}.$
\end{Lem}
\vskip2mm\noindent
{\it Proof.}\, We see from (\ref{psi01})
$$
\psi_a(l,t-s)
= \frac{x\cos \theta -a}{t-s} e^{-ax(1-\cos \theta)/(t-s)}\, p^{(2)}_{t-s}(x -a)\exp\Big\{\frac{2 x\eta \sin \theta - \eta^2}{2(t-s)}\Big\}.
$$
On using $\frac1{t-s} = \frac1{t}+\frac{s}{t(t-s)}$ an elementary computation leads to \begin{eqnarray}\label{psi00} e^{- ax(1-\cos \theta)/(t-s)}\, p^{(2)}_{t-s}(x -a) &=& (1-s/t)^{-1}p^{(2)}_{t}(x-a) e^{- av(1-\cos \theta)} e^{-v^2s/2} \nonumber\\ && \,\,\, \times \exp\Big\{\frac{-v^2s^2 +2avs \cos \theta - a^2st^{-1}}{2(t-s)}\Big\}. \end{eqnarray} Substitution in the preceding formula then yields \begin{eqnarray*} \psi_a(l,t-s) & =& \bigg(\frac{t}{t-s}\bigg)^2\frac1{2\pi a} \Psi_a(x,t,\theta)e^{v\eta \sin \theta}e^{-v^2s/2}\\ && \,\,\times \exp\Big\{\frac{-(v^2s^2 +\eta^2 -2vs\eta \sin \theta) + 2avs \cos \theta - a^2st^{-1}}{2(t-s)}\Big\}. \end{eqnarray*}
With the help of the inequality $v^2s^2 +\eta^2 -2vs\eta \sin \theta >0$ this leads to \begin{equation}\label{Ineq3}
\psi_a(l,t-s) \leq \bigg(\frac{t}{t-s}\bigg)^2\frac1{2\pi a}\Psi_a(x,t,\theta) e^{v|\eta| \sin \theta }\exp\Big\{-\Big(\frac{v^2}{2} -\frac{av\cos \theta}{t-s}\Big)s\Big\}
\end{equation} valid for all $0<s < t, |\eta|<\infty$.
Now, in (\ref{Ineq3}), for $s< a/v$ both the leading factor and the term $(av\cos \theta)s/(t-s)$ in the exponent are dominated by a constant, which shows the inequality of the lemma. \qed \vskip2mm
\vskip2mm
{\it Step 2.} \, In this step we prove three lemmas that together verify
\begin{equation}\label{U0}
U_{[0,a/v]} \leq C \Psi_a(x,t,\theta)\frac1{ av \cos^{3} \theta} \quad\mbox{if}\quad \cos \theta > \frac1{(av)^{1/3}} .
\end{equation}
Let $\phi$ denote the angle between the rays $rae^{i\theta}, r\geq 0$ and $r\xi^*_a(x\sin \theta -\eta), r\geq 0$ so that $$\eta = a\tan \phi \quad \mbox{ and} \quad y =a \sec \phi -a$$ and $h_a^*(\xi^*_a(l),s, \theta) = h_a(a+y,s, \phi)$. Applying Lemma \ref{lem3.4} with $\lambda=\pi$, we infer that for $0\leq b <b'\leq 1$, \begin{eqnarray*} J_{[b,b']} &\leq & \frac{2\kappa_d}{a} \int_{b}^{b'} e^{v\eta \sin \theta } \,d\eta \int_0^{a/v} \frac{ay}{s} p_s^{(1)}(y) p_s^{(1)}(a\phi) e^{-v^2s/2}ds. \end{eqnarray*} Now and later we use the formula \begin{eqnarray}\label{13} \int_0^\infty \exp\Big\{-\frac{\eta^2}{2s}-\frac {v^2s}{2}\Big\}\frac{ds}{s^{p+1}} &=& 2\bigg(\frac{v}{\eta}\bigg)^p K_p(v\eta) \\ &\sim& \left\{ \begin{array} {ll} 2^p\Gamma(p) \eta^{-2p}\quad & ( v\eta \downarrow 0, p>0) \nonumber\\[2mm] {\displaystyle \bigg(\frac{v}{\eta}\bigg)^p\frac{ \sqrt{2\pi} \, e^{-v\eta} }{\sqrt{v\eta}} }\quad& (v\eta \to \infty, p\geq 0) \end{array} \right. \label{K_p} \end{eqnarray} valid for all $\eta>0$ and $v>0$ (\cite{E}, p146). Noting that, since $y\sim \frac12 a\phi^2\sim \frac1{2a} \eta^2$, \begin{equation}\label{oo} \frac{ay}{s} p_s^{(1)}(y) p_s^{(1)}(a\phi) \leq \frac{\eta^2}{s^{2}} e^{-(y^2+\phi^2)/2s} \qquad (\eta< 1), \end{equation}
we apply the equality in (\ref{13}) with $\sqrt{y^2+\phi^2}$ in place of $\eta$ to deduce \begin{equation}\label{K0} J_{[b,b']} \leq \frac{C}{a}\int_b^{b'} \eta^2\frac{v}{\sqrt{y^2+\phi^2}} K_{1}(v\sqrt{y^2+\phi^2}) e^{v\eta \sin\theta} d\eta. \end{equation} Recall (\ref{Texp}), which may reduce to \begin{equation}\label{Ineq2}
y^2 + (a\phi)^2 > \eta^2(1 - Ca^{-2}\eta^2) \quad (|\phi|< 1) \end{equation} (for some $C>0$), and we evaluate
the integral over $\eta <a/v$ and conclude the following
\begin{Lem}\label{lem5.2.2}
$$J_{[0,a/v]} \leq C\int_0^{a/v} e^{v\eta\sin \theta}d\eta \asymp \frac1v. $$
\end{Lem}
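The integral identity in (\ref{13}) can be spot-checked numerically in the special case $p=\frac12$, where the Bessel function has the elementary form $K_{1/2}(z)=\sqrt{\pi/2z}\,e^{-z}$, so that the right-hand side reduces to $\sqrt{2\pi}\,\eta^{-1}e^{-v\eta}$. A Python sketch with a log-substituted trapezoidal rule (standard library only; this checks only this special case, not the general identity):

```python
import math

def lhs(eta, v, p=0.5, n=40001, lo=-25.0, hi=6.0):
    # integral of exp(-eta^2/2s - v^2 s/2) s^{-p-1} ds via the substitution s = e^u
    h = (hi - lo) / (n - 1)
    def g(u):
        s = math.exp(u)
        # the Jacobian e^u cancels one power of s^{-1}
        return math.exp(-eta * eta / (2 * s) - v * v * s / 2) * s ** (-p)
    vals = [g(lo + i * h) for i in range(n)]
    return h * (0.5 * (vals[0] + vals[-1]) + sum(vals[1:-1]))

def rhs_half(eta, v):
    # for p = 1/2: 2 (v/eta)^{1/2} K_{1/2}(v eta) = sqrt(2 pi) / eta * exp(-v eta)
    return math.sqrt(2 * math.pi) / eta * math.exp(-v * eta)

for eta, v in ((1.0, 2.0), (0.5, 3.0)):
    assert abs(lhs(eta, v) - rhs_half(eta, v)) <= 1e-4 * rhs_half(eta, v)
```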
In the rest of the proof of Proposition \ref{UBD1} we suppose for simplicity
$$a=1.$$
\vskip2mm The integral $J_{[1/v, \infty)} $ may easily be evaluated with the same bound as above if $\theta$ stays away from $\frac12 \pi$. In order to include the case when $\theta$ is close to $\frac12 \pi$, where the use of (\ref{K0}) does not lead to an adequate result, we seek a finer estimate of the integral; to this end we split the remaining interval $[1/v,\infty)$ at $v^{-1/4}$. (For any number $\frac15<p< \frac13$, we may take $ v^{-p}$ as the point of splitting instead of $v^{-1/4}$.)
Put $$\alpha= \sqrt{1- \sin \theta}$$
(so that $|\frac12\pi -\theta| \sim \sqrt{2}\,\alpha$).
\begin{Lem}\label{lem5.2.3} \qquad\qquad $J_{[v^{-1}, v^{-1/4}]} \leq C/v\alpha^{3} \qquad \mbox{if} \quad v \alpha^{3} \geq 1.$ \end{Lem} \vskip2mm\noindent {\it Proof.} \,
In place of Lemma \ref{lem3.4} we apply Corollary \ref{prop_main} (in Section {\bf 4.2}), according to which we have $$
h_1(1+y,t, \phi) \leq \frac{Cy}{t^{2}}
\exp\Big\{- \frac{1}{2t} \eta^2(1 - c\eta^4) \Big\}
$$ with some universal constant $c$. In view of the inequality
$\sqrt{\eta^2- c \eta^6\,}> \eta (1- c\eta^4)$ ($0<\eta <\!<1$) this application effects replacing $K_1(v\sqrt{\phi^2+y^2})$ by
$ K_1(v\eta- cv \eta^5) $
in the integral of (\ref{K0}), so that the exponent appearing in its integrand is at most
$$-v\sqrt{\eta^2- c\eta^6} +v\eta \sin \theta \leq -v\alpha^2\eta + cv\eta^5.$$
Hence $$J_{[v^{-1}, v^{-1/4}]} \leq C' \int_{1/v}^{v^{-1/4}} e^{- v\alpha^2 \eta + c v\eta^5 } \sqrt {v\eta }\,d\eta
$$ and the last integral is dominated by $$\frac{C'e^{c} }{v\alpha^3}\int_{\alpha^2}^{v^{3/4}\alpha^2}e^{-u}\sqrt u \, du \leq \frac{C }{v\alpha^3},$$ as desired. \qed
\vskip2mm
\begin{Lem}\label{lem5.2.4}\qquad\qquad
$J_{[v^{-1/4}, \infty)} = O(ve^{-v^{1/12}}) \qquad \mbox{if} \quad v \alpha^{3} \geq 1.$
\end{Lem} \vskip2mm\noindent {\it Proof.}\, Lemma \ref{lem3.5} applied with $t=s (<1)$ and $r=\eta$ gives \begin{equation}\label{I4} h_1^*(\xi^*_1 (l), s, \theta) \leq \kappa_2 \frac{\eta}{s^2}\exp\Big\{-\frac{\eta^2}{2s}\Big\}. \end{equation} Substitution from this bound in (\ref{K}) yields \begin{equation}\label{I41} J_{[v^{-1/4}, \infty)} \leq C \int_{v^{-1/4}}^\infty e^{(1-\alpha)v\eta} d\eta \int_{0}^{1/v} \frac{\eta}{s^2}\exp\Big\{-\frac{v^2}{2}s -\frac{\eta^2}{2s} \Big\}ds. \end{equation} On applying (\ref{13}) again the inner integral on the right-hand side above is asymptotic to a constant multiple of $\sqrt{{v}/{\eta}}\,e^{-v\eta}$ as $v\eta\to \infty.$ Hence, for $\alpha \geq v^{-1/3},$ \begin{eqnarray*} J_{[v^{-1/4}, \infty)} \leq C'\int_{v^{-1/4}}^\infty e^{-\alpha^2 v\eta}\sqrt{\frac{v}{\eta}} d\eta &=& \frac{C'}{\alpha }\int_{\alpha^2 v^{3/4}}^\infty e^{-u}\frac{d u}{\sqrt u} \\ &\leq& \frac{C''}{\alpha^2 v^{3/8}}e^{-\alpha^2 v^{3/4}} \leq C'''ve^{-v^{1/12}}, \end{eqnarray*} where the last inequality follows from $\alpha^2 v^{3/4} > (\alpha^2 v^{2/3})v^{1/12}$ and $\alpha^2 v^{3/8}> 1/v$. Thus the lemma has been proved. \qed \vskip2mm
Combining Lemmas \ref{lem5.2.2}, \ref{lem5.2.3} and \ref{lem5.2.4} we conclude (\ref{U0})
as announced at the beginning of Step 2.
\vskip2mm
{\it Step 3.} Here we compute $U_{(a/v,t]}$ and finish the proof of Proposition \ref{UBD1}. We continue to suppose $a=1$. Instead of (\ref{psi00}) we write
\begin{eqnarray*}
e^{- x(1-\cos \theta)/(t-s)}\, p^{(2)}_{t-s}(x -1) &=& (1-s/t)^{-1}p^{(2)}_{t}(x-1) e^{- v(1-\cos \theta)} \\
&&\,\, \times \exp\Big\{\frac{- x^2s/t +2vs \cos \theta - s/t}{2(t-s)}\Big\}
\end{eqnarray*}
and, instead of (\ref{Ineq3}), we deduce the following expression of $\psi_1(l,t-s)e^{-\eta^2/2s}$: \begin{eqnarray*} && \frac{x\cos\theta -1}{t-s} \,[e^{- x(1-\cos \theta)/(t-s)}p^{(2)}_{t-s}(x- 1) ]\,e^{x\eta (\sin \theta)/(t-s)}e^{-\eta^2/2(t-s)}\times e^{-\eta^2/2s}\\ &&=\bigg(\frac{t}{t-s}\bigg)^{2}\frac{\Psi_1(x,t,\theta)}{2\pi }
\exp\Big\{-\frac{1}{2(t-s)}\Big[\frac{x^2s}{t} +\frac{\eta^2t}{s} -2x\eta \sin \theta - 2vs\cos \theta +\frac{s}{t}\Big]\Big\}. \end{eqnarray*} Write the formula in the square brackets in the exponent as $$\frac{t}{s}\Big(\frac{s}{t} x \sin \theta -\eta\Big)^2 + s\Big[(x\cos \theta -2)v\cos \theta +\frac{1}{t} \,\Big]$$ and apply Lemma \ref{lem3.5} to see the bound $h_1^*(\xi^*_1(l), s, \theta) \leq C_1 {\eta}{s^{-1}}p_s^{(2)}(\eta) $ for $s<1$. Then we readily deduce that for $s<1$, \begin{eqnarray*} \frac{\psi_1(l,t-s) h_1^*(\xi^*_1(l), s, \theta)}{\Psi_1(x,t,\theta)} &\leq& C\frac{t^2\eta}{(t-s)^2s^2} \exp\Big\{-\frac{t}{2(t-s)s}\Big(\frac{s}{t}x\sin \theta -\eta \Big)^2\Big\} \\ &&\times \,\exp \Big\{-\frac{s}{2(t-s)}\Big[(x \cos \theta - 2)v\cos \theta \Big]\Big\}.
\end{eqnarray*} We integrate the right-hand side over the half line $\eta\geq 0$. By applying the inequality $$\int_0^\infty p_T^{(1)}(\eta-m)\eta d\eta = \int_{-m}^\infty p_T^{(1)}(u) (u+m)du \leq \sqrt{\frac{T}{2\pi}}e^{-m^2/2T} + m$$ (valid for $m>0, T>0$), an easy computation yields $$\frac{U_{[1/v, 1/2]}}{\Psi_1(x,t,\theta)} \leq C'e^{-v/4}+ C' v\int_{1/v}^{1/2} \exp \Big\{-\frac{s}{2(t-s)}\Big[(x \cos \theta -2)v\cos \theta \Big]\Big\}\frac{ds}{\sqrt s} $$ ($v>2, t>1$), of which the right-hand side is $O(e^{-\frac13 v^{1/3}})$ if $\cos \theta \geq v^{-1/3}$, hence $U_{[1/v, 1/2]}$ is negligible in this regime. We use Lemma \ref{lem3.5} again to have the bound $h_1^*(\xi^*_1(l), s, \theta) \leq C_1 p_s^{(2)}(\eta) $ for $s\geq1/2$ and we see $U_{[1/2, t]} = O(e^{-v})$ in a similar way.
The proof of Proposition \ref{UBD1} is now complete. \qed
\vskip2mm The next lemma, essentially a corollary of the proof of Proposition \ref{UBD1}, provides a crude upper bound for the case $\cos \theta \leq - v^{-1/3}$. Combined with Corollary \ref{cor1} it in particular verifies Corollary \ref{cor2}.
\begin{Lem} \label{wc} $$h_a(x,t, \theta) \leq
C \frac{-\Psi_a(x,t,\theta)}{ |\theta-\frac12 \pi|^3 v} \quad\quad \mbox{if}\quad \frac{\pi}2 +\frac1{(av)^{1/3}} <\theta \leq \pi . $$ \end{Lem} \vskip2mm\noindent {\it Proof.}\, We have $h_a(x,t, \theta) \leq U$ (see (\ref{m9}) and the discussion following it) and observe that the identity (\ref{psi01}), hence the inequality (\ref{Ineq3}), remain valid for $\frac12 \pi <\theta <\pi$ if a minus sign is put on their right-hand sides. The proof of (\ref{U}) is then adapted in a trivial way to the present case. \qed
\vskip2mm\noindent
{\bf 5.3.} {\sc Upper Bound II}.
\begin{Prop}\label{UBD2} \, Let $v=x/t >2/a$ and $t>a^2$. For some universal constant $C$
$$h^*_a({\bf x},t,\theta) \leq C (a v)^{2/3} e^{-av(1-\cos \theta)} p_t^{(2)}(x -a) \quad \mbox{if} \quad \Big|\frac{\pi}2 -\theta\Big| \leq \frac1{(av)^{1/3}}. $$ \end{Prop} \vskip2mm\noindent
{\it Proof.}\, Let $a=1$. Put $\gamma =\frac12 \pi -\theta$ and suppose $ |\gamma| \leq v^{-1/3}$. Let $\delta$ be a small positive number chosen later and $\beta =\gamma+\delta$ and denote by $L(\beta)$ the line passing through the origin and $e^{i(\frac12\pi-\beta)}$ so as to make the angle $\frac12 \pi-\beta$ with the real axis (see Figure 2 in Section {\bf 5.4.} below). In this proof we consider the first hitting of $L(\beta)$ by the two-dimensional Brownian motion starting at ${\bf x}= x$ (or $=x{\bf e}$). Let $y$ be the coordinate of $L(\beta)$ such that $y=0$ for the point $e^{i(\frac12\pi-\beta)}$ and $y=-1$ for the origin and $\psi_\beta({\bf x}; y,t)$ the density of the hitting distribution of $L(\beta)$. Let $\sigma(L(\beta))$ denote the first hitting time of $L(\beta)$ and $\eta(B_{\sigma(L(\beta))})$ the $y$ coordinate of the hitting site $B_{\sigma(L(\beta))}\in L(\beta)$. Then we deduce \begin{eqnarray}\label{psi} \psi_\beta({\bf x};y,t)& :=& \frac{P_{\bf x}[ \eta(B_{\sigma(L(\beta))})\in dy, \sigma(L(\beta)) \in dt] }{dydt} \\ &=& \frac{x\cos \beta}{t}p^{(1)}_{t}(x\cos \beta)p^{(1)}_{t}(x\sin \beta -y-1) \nonumber\\ &=&\frac{x\cos \beta}{t}p_{t}^{(2)}(x)\exp\Big\{\frac{x(y+1)\sin \beta -\frac12 (y+1)^2}{t}\Big\}.\nonumber \end{eqnarray} It holds that \begin{equation}\label{clear} h^*_1({\bf x},t,\theta) \leq 2\int_0^t ds \int_{0}^\infty \psi_\beta({\bf x};y,t-s)h^*_1(\xi^*(y),s,\theta)dy, \end{equation} where $\xi^*(y)$ denotes the point of $\mathbb{R}^2$ lying on $L(\beta)$ of coordinate $y$ (see Figure 2 of the next subsection). According to Lemmas \ref{lem3.4} and \ref{lem3.5} \begin{equation}\label{h_bd1} h^*_1(\xi^*(y), s,\theta) \leq \left\{ \begin{array}{ll} C ys^{-2} e^{-(y^2+\delta^2)/2s} \quad &\mbox{if} \,\,\, \,y<1, s<1,\\[2mm] C(rs^{-1}\vee 1)p_s(r) \quad &\mbox{otherwise}, \end{array} \right.
\end{equation} where $r= |\xi^*(y) - e^{i\theta}|$. The rest of the proof is performed by showing Lemmas \ref{lem_up_bd1} and \ref{lem_up_bd2} given below.
\begin{Lem}\label{lem_up_bd1} \, For some universal constant $C$,
$$\int_0^{1/v} ds \int_{0}^{\infty} \psi_\beta({\bf x}; y,t-s)h^*_1(\xi^*(y),s,\theta)dy \leq C v e^{v\cos \theta} p_t^{(2)}(x) v^{-1/3}.$$ \end{Lem} \vskip2mm\noindent {\it Proof.}\, We split the range of the outer integral at $y=1$ and denote the corresponding repeated integrals over $[0,1]$ and $(1,\infty)$ by $I_{[0,1]}$ and $I_{(1,\infty)}$, respectively. As in Step 2 of the proof of Proposition \ref{UBD1} we see \begin{eqnarray}\label{**} I_{[0,1]} &\leq& C vp_t^{(2)}(x)\int_{0}^1 e^{v(y+1)\sin \beta} dy\int_0^{1/v} \frac{y}{s^2}e^{-\frac12 v^2 s - (y^2+\delta^2)/2s} ds \nonumber \\ &\leq& C vp_t^{(2)}(x)\int_0^1 e^{v(y+1)\sin \beta - v\sqrt{y^2+\delta^2} } \frac{\sqrt v\,y}{(y^2+\delta^2)^{3/4}}dy. \end{eqnarray} Put $$f(y)= \frac{y^2}{\sqrt{y^2+\delta^2} +\delta} - 2\delta y.$$ Suppose $\delta\geq \gamma$. Then \begin{eqnarray*} \sqrt{y^2+\delta^2} - (y+1)\sin \beta &\geq& \sqrt{y^2+\delta^2}-\delta -2\delta y -\sin \gamma\\ &=& f(y) -\sin \gamma \end{eqnarray*} and, since $(x\sin \gamma)/(t-s)= v\sin \gamma + O(1)$ for $s\leq 1/v$ and
$\sin \gamma =\cos \theta$,
the last integral in (\ref{**}) is dominated by a constant multiple of \begin{eqnarray*} && e^{v\cos \theta} \int_0^1 e^{-vf(y)}\frac{\sqrt v y}{(y^2+\delta^2)^{3/4}}dy\\ &&= \frac{e^{v\cos \theta}}{\sqrt{v\delta}}\int_0^{ \sqrt{v/\delta}} \exp\Big\{-\frac{u^2}{\sqrt{1+u^2/v\delta}+1} +2\delta^{3/2} v^{1/2}\, u\Big\}\frac{udu}{(1+ u^2/v\delta)}, \end{eqnarray*} where we have changed the variable of integration according to $y= (\delta/v)^{1/2} \,u$. Now taking $\delta =v^{-1/3}$ we can readily conclude that $$I_{[0,1]} \leq C v e^{v\cos \theta} p_t^{(2)}(x) v^{-1/3}.$$
We can readily compute $I_{(1,\infty)}$ to be $v e^{v\cos \theta} p_t^{(2)}(x) \times O(e^{-v/4}).$ Thus the proof of Lemma \ref{lem_up_bd1} is complete. \qed
\begin{Lem}\label{lem_up_bd2} \, For some universal constant $C$,
$$\int_{1/v}^t ds \int_{0}^{\infty} \psi_\beta({\bf x}; y,t-s)h^*_1(\xi^*(y),s,\theta)dy \leq C v e^{v\cos \theta} p_t^{(2)}(x) \times e^{-v/4}.$$ \end{Lem} \vskip2mm\noindent {\it Proof.}\, We restrict the range of the outer integral to $[1/v,1]$, the other part being easy to estimate, and divide the resulting integral by $p_t^{(2)}(x)$. It suffices to examine the exponent of the exponential factor appearing in $\psi_\beta({\bf x};y,t-s)h^*_1(\xi^*(y),s,\theta)/p_t^{(2)}(x) $ (in view of (\ref{psi}) and (\ref{h_bd1}) ), which is \begin{eqnarray*} &&-\,\frac{sx^2}{2t(t-s)}+ \frac{2x(y+1)\sin \beta - (y+1)^2}{2(t-s)} - \frac{y^2+\delta^2}{2s}\\ &&\leq -\, \frac{1}{2(t-s)}\Big(\frac{s}{t}x^2 + \frac{t}{s}y^2 -4x(y+1)\delta\Big) - \frac{y}{2(t-s)}-\frac{\delta^2}{2s}, \end{eqnarray*} where for the inequality we have applied $\sin \beta \leq 2\delta$. On the one hand for $y<1$, $$ [sx^2t^{-1}- 4x(y+1)\delta]/2(t-s) \geq x(1-8\delta)/2(t-s) \geq v/3,$$
provided $s\geq 1/v$ and $\delta<1/24$. On the other hand for $y\geq 1$ $$\Big(\frac{s}{t}x^2 + \frac{t}{s}y^2 -4x(y+1)\delta\Big) = \bigg(\sqrt{\frac{s}{t}}x - \sqrt{\frac{t}{s}}y\bigg)^2 +2(1- 2(1+y^{-1})\delta)xy,$$ which may be supposed larger than $vyt$. From these observations it is easy to ascertain the bound of the lemma. \qed
\vskip2mm\noindent
{\bf 5.4.} {\sc Lower Bound II and Completion of Proof of Theorem \ref{thm5.1}. }
If $\cos \theta > v^{-1/3}$, the first formula of Theorem \ref{thm5.1} follows from Lemma \ref{LBD} and Proposition \ref{UBD1}. Let $|\cos \theta| < v^{-1/3}$. The upper bound in the second relation of Theorem \ref{thm5.1} follows from Proposition \ref{UBD2}. For the derivation of the lower bound we let $a=1$ and examine the proof of Lemma \ref{lem_up_bd1}. By the same computation as in it, with the help of the lower bound in Proposition \ref{cor_Imp}, we see that $$I_{[\delta^2, 1]} \geq Cv e^{v\cos \theta} p_t^{(2)}(x) v^{-1/3},$$ which however is not enough since the Brownian motion may have hit $U(1)$ before $L(\beta)$. The proof of the upper bound has rested on the inequality (\ref{clear}), while we need a reverse inequality for the lower bound; for the present purpose it suffices to prove \begin{eqnarray*}
h^*_1({\bf x}, t,\theta) \geq c \int_0^{1/v} ds \int_{\delta^2}^1 \psi_\beta({\bf x}; y,t-s)h^*_1(\xi^*(y),s,\theta)dy \end{eqnarray*} for $|\frac12 \pi- \theta|\leq v^{-1/3} $ and $\delta=v^{-1/3}$ and for some universal constant $c>0$, which, on comparing with (\ref{psi}), follows if we have \begin{equation}\label{FP} \psi_\beta^*({\bf x}; y,t) \geq c\psi_\beta({\bf x}; y,t) \quad \mbox{for} \quad \delta^2< y<1
\end{equation} (with the same $c$ as above), where $$\psi_\beta^*({\bf x}; y,t) = \frac{P_{\bf x}[ \eta(B_{\sigma(L(\beta))})\in dy, \sigma_{L(\beta)} \in dt, \sigma_{1} >t] \,}{dydt} $$ ($\eta(B_{\sigma(L(\beta))})$ denotes the $y$ coordinate of $B_{\sigma(L(\beta))}$ as in the preceding proof). Let $L'(\beta)$ be the line tangent to the unit circle at $e^{-i \beta}$ and for the proof of (\ref{FP}) we consider the first hitting by $B_t$ of $L'(\beta)$. Let ${\bf z}(l)$ denote the point on $L'(\beta)$ of coordinate $l$, where $l=0$ for $e^{-i\beta}(1+i)$ and $l>0$ on the upper half of $L'(\beta)$ (see Figure 2). Then for $\delta^2<y<1$, we have
\begin{eqnarray}\label{M}
\psi_\beta({\bf x}; y,t) &=& q_0^{(1)}(x\cos \beta, t) p_t^{(1)}(y) \nonumber\\
&=& \int_0^tds\int_{-\infty}^\infty q_{1}^{(1)}(x\cos \beta, t-s)p_{t-s}^{(1)}(l) \psi_\beta({\bf z}(l); y, s)dl
\end{eqnarray}
and the corresponding relation for $\psi_\beta^*({\bf x}; y,t)$ (with $\psi_\beta^*$ in place of $\psi_\beta$ in both places). Noting $ \psi_\beta({\bf z}(l); y, s) = q_0^{(1)}(1,s) p_s^{(1)}(l-y)$ and integrating w.r.t. $l$, we apply Lemma \ref{9} (i) given below (with $b=1$ so that $\rho t =1/v$ and $\sqrt{\rho t} =o(\delta)$) (hence $(\rho t)^{3/2} <\!< \delta/v$) to see that the outer integral may be restricted to $|s-1/v| < \delta/v$, so that $$\psi_\beta({\bf x}; y,t) \sim \int_{(1-\delta)/v} ^{(1+\delta)/v} ds\int_{-\infty}^\infty q_{1}^{(1)}(x\cos\beta,t-s)p_{t-s}^{(1)}(l) \psi_\beta({\bf z}(l); y, s)dl,$$
of which the inner integral may be restricted to $l> 0$ with at least half the contribution of the integral preserved. Thus the proof of (\ref{FP}) is finished if we show that for some $c>0$, $\psi_\beta^*({\bf z}(l); y, s) \geq c\psi_\beta({\bf z}(l); y, s)$ for $ y> s^{2/3}$ and $l\geq 0$, which we rewrite in terms of $\psi_{0}$ and $\psi^*_{0}$ as \begin{equation}\label{eq911} \psi_0^*(1+i(1+l); y, s) \geq c\,\psi_0(1+i(1+l); y, s) , \quad l\geq 0, \, y> s^{2/3}. \end{equation} This is proved in Lemma \ref{91} after showing the following lemma.
\begin{Lem}\label{9} Let $0<b<x$ and put $\rho =b/x$. For any $\varepsilon>0$ there exists a positive constant $M\geq 1$ that depends only on $\varepsilon$ such that {\rm (i)} whenever $\rho t< 1/ M$, $\rho <1-\varepsilon$ and $b\geq \varepsilon$, \begin{equation}\label{eq9}
\int_{|s- \rho t| < M(\rho t)^{3/2}} q_b^{(1)}(x, t-s) q_0^{(1)}(b, s)ds \geq (1-\varepsilon)q_0^{(1)}(x, t),\end{equation} and {\rm (ii)} whenever $ t< bx/ M^2$ and $\rho <1-\varepsilon$, (\ref{eq9}) holds if the range of integration is replaced by $|s - \rho t| < M(\rho t)^{3/2} b^{-1}$. \end{Lem}
The integral in (\ref{eq9}) extended to the whole interval $[0,t]$ equals $q_0^{(1)}(x,t)$ and the lemma asserts that the substantial contribution to it comes from a small interval about $\rho t = bt/x$ (at least if $b$ is kept away from zero).
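The statement that the integral in (\ref{eq9}) extended over all of $[0,t]$ equals $q_0^{(1)}(x,t)$ is the convolution identity for first-passage densities (the passage from $x$ to $0$ must pass through $b$), with $q_b^{(1)}(x,t) = \frac{x-b}{t}p_t^{(1)}(x-b)$. A Python spot check by midpoint quadrature (standard library only; not part of the proof):

```python
import math

def p1(t, r):
    # one-dimensional Gaussian kernel p_t^{(1)}(r)
    return math.exp(-r * r / (2 * t)) / math.sqrt(2 * math.pi * t)

def q(b, x, t):
    # q_b^{(1)}(x,t): first-passage density of level b for BM started at x > b
    return (x - b) / t * p1(t, x - b)

def convolution(b, x, t, n=50000):
    # midpoint rule for the integral of q_b(x, t-s) q_0(b, s) over (0, t)
    h = t / n
    return h * sum(q(b, x, t - s) * q(0.0, b, s)
                   for s in (h * (i + 0.5) for i in range(n)))

b, x, t = 1.0, 3.0, 2.0
assert abs(convolution(b, x, t) - q(0.0, x, t)) <= 1e-3 * q(0.0, x, t)
```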
\vskip2mm\noindent {\it Proof.} \, In this and the next proofs we apply the identity
\begin{equation}\label{eq90}
p^{(1)}_{t-s}(z-\xi)p^{(1)}_s(y-z) = p^{(1)}_{t}(y-\xi)p^{(1)}_{T}\Big( \frac{s}t (y-\xi) -y+z\Big),\quad T =\frac{s(t-s)}{t}
\end{equation}
($0<s<t, y,z, \xi\in \mathbb{R}$). This gives $$q_b^{(1)}(x, t-s) q_0^{(1)}(b, s) =\frac{(x-b)b}{(t-s)s} p^{(1)}_t(x)p^{(1)}_T\Big(\frac{s}tx -b\Big).$$ The range of integration of the integral in (\ref{eq9}) may be written as \begin{equation}\label{eq903}
|s/t -\rho|\leq M\rho \sqrt {\rho t}, \end{equation}
which entails $\frac{(x-b)b}{(t-s)s} = \frac{(1-\rho)xb}{(1-s/t)ts} \sim\frac{xb}{ts}$ as $\rho t \to 0$, and hence it suffices to show that \begin{equation}\label{eq92}
\int_{|s- \rho t| < M(\rho t)^{3/2}} \frac{b}s \,p^{(1)}_T\Big(\frac{s}tx -b\Big)ds > 1-\frac12 \varepsilon \end{equation} if $1/\rho t$ and $M$ are large enough. Observing $$\frac{b}s \,p^{(1)}_T\Big(\frac{s}tx -b\Big) = \frac{b}{s\sqrt{2\pi (1-s/t)s\,}} \exp\Big\{ -\frac{b^2}{2(1-s/t)\rho t}\Big( \frac{s}{\rho t} +\frac{\rho t}{s} -2\Big)\Big\}$$
and $u+u^{-1} -2 = (1-u)^2 + O((1-u)^3)$ as $u\to 1$, we apply the Laplace method to see that the integral in (\ref{eq92}) is asymptotic to
$$\int_{|u- 1| < M\sqrt{\rho t}} \frac1{\sqrt{2\pi \lambda}} e^{- (u-1)^2/2\lambda}du,$$
where $\lambda = (1-\rho)\rho t/b^2$. If the variable of integration is changed by $y = (u-1)/\sqrt \lambda$,
then this integral becomes $\int_{-r}^r p^{(1)}_1(y)dy$ with $r$ given by
$$r= M \sqrt{\rho t/\lambda} = Mb/\sqrt{1-\rho},$$ which extends to the whole line as $M\to\infty$ if $b\geq \varepsilon$. Thus we obtain the assertion (i).
As for the second assertion (ii) we multiply the right-hand side of (\ref{eq903}) by $b^{-1}$, and if $b^{-1}\rho\sqrt{\rho t} =\sqrt{\rho t}/x = \sqrt{bt/x^3}\to 0$, then $\frac{(x-b)b}{(t-s)s} \sim\frac{xb}{ts}$ as above. The rest of the proof is the same. \qed
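The Gaussian factorization identity (\ref{eq90}) used in the proof above can be spot-checked numerically. A Python sketch (standard library only; not part of the proof):

```python
import math

def p1(t, r):
    # one-dimensional Gaussian kernel p_t^{(1)}(r)
    return math.exp(-r * r / (2 * t)) / math.sqrt(2 * math.pi * t)

def lhs(s, t, xi, z, y):
    # p_{t-s}(z - xi) p_s(y - z)
    return p1(t - s, z - xi) * p1(s, y - z)

def rhs(s, t, xi, z, y):
    # p_t(y - xi) p_T((s/t)(y - xi) - y + z), T = s(t-s)/t
    T = s * (t - s) / t
    return p1(t, y - xi) * p1(T, (s / t) * (y - xi) - y + z)

for (s, t, xi, z, y) in ((0.3, 1.0, -0.7, 0.2, 1.5), (1.2, 2.0, 0.0, 2.0, -1.0)):
    l, r = lhs(s, t, xi, z, y), rhs(s, t, xi, z, y)
    assert abs(l - r) <= 1e-12 * abs(r)
```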
\vskip2mm Recall (\ref{eq911}) and note that it expresses the inequality \begin{equation}\label{eq91} \frac{P_{1+i+il}[ \Im B_\tau -1 \in dy, \tau\in ds, \tau <\sigma_1]}{ds dy} \geq \frac{cP_{1+i+il}[ \Im B_\tau-1 \in dy, \tau\in ds]}{ds dy}, \end{equation} where $B_t$ is a standard complex Brownian motion and $\tau$ is the first hitting time of the imaginary axis by it.
\begin{Lem}\label{91} For a constant $c>0$, (\ref{eq91}) holds true for $0< s\leq 1$, $l\geq 0$ and $y\geq s^{2/3}$. \end{Lem} \vskip2mm\noindent {\it Proof.} \, The proof rests on the fact that if $Y_t $ denote the linear Brownian motion $\Im B_t$, then the conditional probability \begin{equation} \label{923}
P[Y_{s'}>0, 0\leq s'\leq s|Y_0=l, Y_s=y] = 1-e^{-2yl /s} \quad (l> 0, y> 0, s>0) \end{equation}
is bounded away from zero if (and only if) so is $yl/s$. ((\ref{923}) is immediate from the expression of transition density for $Y_t$ killed at the origin.)
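Indeed, (\ref{923}) follows from the reflection principle: the transition density of $Y_t$ killed at the origin is $p^{(1)}_s(y-l)-p^{(1)}_s(y+l)$, and dividing by the free density gives $1-e^{-2yl/s}$. A Python spot check (standard library only; not part of the proof):

```python
import math

def p1(t, r):
    # one-dimensional Gaussian kernel p_t^{(1)}(r)
    return math.exp(-r * r / (2 * t)) / math.sqrt(2 * math.pi * t)

def stay_positive(l, y, s):
    # killed (absorbed at 0) transition density over the free density,
    # by the reflection principle
    return (p1(s, y - l) - p1(s, y + l)) / p1(s, y - l)

for (l, y, s) in ((0.5, 0.8, 1.0), (2.0, 0.3, 0.7), (0.1, 0.1, 0.05)):
    assert abs(stay_positive(l, y, s) - (1 - math.exp(-2 * y * l / s))) < 1e-12
```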
For $\xi >0$ put $$Q_\xi(y,t)= q^{(1)}_0(\xi,t)p^{(1)}_t(y).$$ Then for $0<b<1$, \begin{eqnarray*} \psi_0(1+ i(1+l);y,s) &=&Q_{1}(y-l,s)\\ &=& \int_0^s ds'\int_{-\infty}^\infty Q_{1-b}(y'-l, s-s') Q_{b}(y-y',s') dy'. \end{eqnarray*} Take $b= s^{1/3}$ in the last integral. Then by performing the integration w.r.t. $y'$ and noting $(bs)^{3/2}b^{-1} = b^2s$ we apply Lemma \ref{9} (ii) (with $x=1$) to infer that the $s'$-integration above may be restricted to the interval
$$ |s'-bs| \leq Mb^2s$$
with some $M\geq 1$. Let $\phi= \tan^{-1} b$, $\eta= |b+i -e^{i\phi}| (= \sec\phi -1)$ and $\sigma(L_b)$ be the first hitting time of the line $L_b:=\{b+iy': y'\in \mathbb{R}\}$. Since the slope of the tangent line of $\partial U(1)$ at $e^{i\phi}$ is $b+o(b)$ and $$b\eta/s \sim 1/2$$
(as $s\to 0$), the identity (\ref{923}) shows that if $s' \sim (1- b)s \sim s$, $$\frac{P_{1+i(1+l)}[ \Im B_{\sigma(L_b)} \in dy', \sigma(L_b)\in ds', \sigma_1>s']}{dy'ds'} \geq c_1 Q_{1-b}(y',s'), \quad y'\geq 0$$ with $c_1 =\frac12( 1- e^{-1})$, hence $\psi_0^*(1+i(1+l); y, s)$ is bounded below by a constant multiple of \[
\int_{|s'-bs|< Mb^2s}ds'\int_{y'\geq 0} Q_{1-b}( y'-l, s-s')\psi_0^*(b+i(1+y'); y, s') dy'. \] It therefore suffices to show that there exists $c_2>0$ such that if $s'\sim bs$ and $ y\geq s^{2/3}$, then \begin{eqnarray*} \psi_0^*(b+i(1+y'); y, s') \geq c_2Q_{b}(y-y', s'), \quad y'\geq 0, \end{eqnarray*} which also follows from (\ref{923}) as is easily checked by noting $s^{2/3} \eta/bs \sim \frac12$. Thus the lemma has been proved. \qed \vskip2mm
\section{Case $d\geq 3$ and Legendre Process} This section consists of two subsections. The first concerns the transition density of a Legendre process and provides its spectral expansion as well as its small-time behavior, which are employed in Section {\bf 3.3} and Section {\bf 4}, respectively. The second subsection is devoted to the proof of Theorem \ref{thm1.2} for $d\geq 3$.
\vskip2mm\noindent {{\bf 6.1.} \sc Legendre Process.} \,
Let $d\geq 3$. The colatitude $\Theta_t$ of $B_t/|B_t|$ is a Legendre process on $[0,\pi]$ regulated by the generator $$\frac1{2\sin^{2\nu} \theta}\frac{\partial}{\partial \theta}\sin^{2\nu}\theta\frac{\partial}{\partial \theta} =\frac12 \frac{\partial^2}{\partial \theta^2} +\nu\cot \theta\frac{\partial}{\partial \theta}$$ with each boundary point being entrance and non-exit (\cite{IM}). We compute the transition law of $\Theta_t$. Let $P_t^\nu(\theta_0, \theta)$ be its density w.r.t. the normalized invariant measure:
$$ \frac{P[\Theta_t\in d\theta|\Theta_0=\theta_0] }{d\theta} = P^\nu_t(\theta_0, \theta)\frac{\sin^{2\nu}\theta}{\mu_d},$$ where $\mu_d=\int_0^\pi \sin^{d-2}\theta d\theta= \omega_{d-1}/\omega_{d-2}$.
\vskip2mm {\bf 6.1.1.} {\sc Eigenfunction Expansion.} \, Eigenfunctions of the Legendre semigroup are given by $$C_n^\nu(\cos \theta) = \sum_{j=0}^n \frac{\Gamma(\nu+j)\Gamma(n+\nu- j)}{j!(n-j)![\Gamma(\nu)]^2}\cos[(2j-n)\theta],$$ where $C_n^\nu$ is a polynomial of order $n$ called the Gegenbauer (alias ultraspherical) polynomial and in the special case $\nu=\frac12$ it agrees with the Legendre polynomial (see Appendix (A)). They together constitute a complete orthogonal system of $L^2([0,\pi], \sin^{2\nu} \theta d\theta)$. (Cf.\cite{SW}, p.151 and \cite{W}, p.367; also \cite{T}, Section 4.5 for $\nu=1/2$.) Given $\nu>0$, we denote their normalization by $h_n(\theta)$: $$h_n(\theta)= \sqrt{\mu_d}\,\gamma_n^{-1} C_n^\nu(\cos \theta), $$ where the factors $\gamma_n>0$ are given by $$\gamma_n^2=\int_0^\pi [C_n^\nu(\cos \theta) ]^2\sin^{2\nu} \theta d\theta = \frac{\pi \Gamma(n+2\nu)}{2^{2\nu-1} [\Gamma(\nu)]^2(n+\nu)n!}$$ (cf. \cite{Sm}). (It is readily checked that $\mu_d/\gamma_0^2 =1$, so that $h_0\equiv1$.) Then \begin{equation}\label{spexp} P^\nu_t(\theta_0, \theta) = \sum_{n=0}^\infty e^{-\frac12 n(n+2\nu)t}h_n(\theta_0)h_n(\theta). \end{equation} For translation of the formula of Corollary \ref{cor-12} into that of Theorem \ref{thm1.21} one may use the formulae
$C_n^\nu(1)= \Gamma(n+2\nu)/\Gamma(2\nu)n!$ and $\Gamma(2\nu) = 2^{2\nu-1}\Gamma(\nu)\Gamma(\nu+\frac12)/\sqrt \pi$ to see
$$h_n(0)h_n(\theta) = \frac{\mu_d C_n^\nu(1)}{\gamma_n^{2} }\, C_n^\nu(\cos \theta) = \frac{\nu+n}{\nu}C_n^\nu(\cos \theta) .$$ \vskip2mm {\bf 6.1.2.} {\sc Evaluation of $P^\nu_t(0, \theta)$ for $t$ small.} \, An application of transformation of drift shows that uniformly for $ 0\leq \theta <1$ and $t <1$ \begin{equation} \label{asymp} P^\nu_t(0, \theta) = \omega_{d-1}p^{(d-1)}_t(\theta) \Big[1+O( \theta^4 +t)\Big]. \end{equation} Indeed, if $X_t$ is a $(d-1)$-dimensional Bessel process, $\gamma(\theta) =\nu( \cot \theta - \theta^{-1})$ and $$Z_t = \exp\Big\{ \int_0^t\gamma(X_s)dX_s -\int_0^t [\nu \gamma(X_s)X_s^{-1} + {\textstyle\frac12} \gamma^2(X_s)]ds\Big\}$$ then
$$P[\Theta_t\in d\theta, {\cal E}_t^{\Theta}\,|\,\Theta_0=\theta_0] = E^{BS(\nu-\frac12)}[ Z_t; X_t\in d\theta,\, {\cal E}_t^{X} \,|\, X_0=0],$$ where ${\cal E}_t^{\Theta}=\{\Theta_s <1\, \,\mbox{for}\,\, s<t\}$, ${\cal E}_t^{X}=\{X_s <1\, \,\mbox{for}\,\, s<t\}$ and $E^{BS(\nu-\frac12)}$ signifies the expectation by the law of the Bessel process $X_t$. By simple computation using Ito's formula we have $$Z_t= \exp\bigg\{\int_0^{X_t}\gamma(u)du -\frac12 \int_0^t [\gamma'(X_s) +2\nu \gamma(X_s)X_s^{-1} + \gamma^2(X_s)]ds\bigg\}$$ as well as $\gamma(\theta) = -\frac13 2\nu\theta + O(\theta^3), \gamma'(\theta)= -\frac13 2\nu +O(\theta^2)$.
Noting that $p_t^{(d-1)}(\theta)$ is the density of $P^{BS(\nu-\frac12)} [X_t \in d\theta \,|\,X_0=0]$ w.r.t.
$\omega_{d-2}\theta^{d-2}d\theta$ and $$[\omega_{d-2}\theta^{d-2}]/ [\mu_d^{-1}\sin^{2\nu} \theta] = \omega_{d-1}(1 + 3^{-1} \nu \theta^2) + O(\theta^4),$$ substitution yields (\ref{asymp}).
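As a numerical aside (added for illustration; not part of the argument), the spectral expansion (\ref{spexp}) is easy to check by direct computation. The sketch below takes $d=3$, so $\nu=\frac12$, $C_n^{1/2}=P_n$ (the Legendre polynomial), $\mu_3=2$ and $h_n(\theta)=\sqrt{2n+1}\,P_n(\cos\theta)$, and verifies numerically that the truncated density integrates to one against the normalized invariant measure $\sin\theta\,d\theta/2$:

```python
import math

def legendre(n, x):
    # Three-term recurrence for the Legendre polynomials P_n(x)
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def p_t(theta0, theta, t, n_terms=40):
    # Truncated expansion (spexp) for d = 3, nu = 1/2:
    # h_n(theta) = sqrt(2n+1) P_n(cos theta), eigenvalue n(n+1)/2
    return sum(math.exp(-0.5 * n * (n + 1) * t) * (2 * n + 1)
               * legendre(n, math.cos(theta0)) * legendre(n, math.cos(theta))
               for n in range(n_terms))

# Midpoint rule for the integral against sin(theta) dtheta / mu_3, mu_3 = 2
M = 2000
h = math.pi / M
total = sum(p_t(1.0, (j + 0.5) * h, 0.5) * math.sin((j + 0.5) * h) / 2.0 * h
            for j in range(M))
print(round(total, 4))  # ≈ 1.0
```

Here the eigenvalue $\frac12 n(n+2\nu)$ reduces to $n(n+1)/2$, and only the $n=0$ term survives the integration, so the total mass is exactly one for any $t>0$ and $\theta_0$.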
\vskip2mm\noindent {\bf 6.2.} {\sc Proof of Theorem \ref{thm1.2} ($d\geq 3$).}
Recall the definitions of $g(\theta;x,t)$ given in Section {\bf 3.3.1} and of
$h_a( x,t, \phi)$ in (\ref{h_z00}). Noting $|d\xi|=a^{d-1}\omega_{d-1} m_a(d\xi)$, we then see that for ${\bf x}=x{\bf e}$ and $\xi \in \partial U(a)$ of colatitude $\theta$
$$\frac{P_{{\bf x}}[B_{\sigma_a}\in d\xi, \sigma_a\in dt]}{|d\xi|dt} = \frac{g(\theta;x,t)}{a^{d-1}\omega_{d-1}} q^{(d)}(x,t)= \frac{h_a(x,t, \theta)}{a^{d-1}\omega_{d-1}} $$ and that owing to Theorem A the two relations of Theorem \ref{thm1.2} are equivalent to the corresponding ones in Theorem \ref{thm5.1} if adapted to the higher dimensions: in the right-hand side of the first formula of Theorem \ref{thm5.1} the heading factor $2\pi a$ is replaced by $a^{d-1}\omega_{d-1}$ and $p_t^{(2)}(x-a)$ by $p_t^{(d)}(x-a)$, and similarly for the second one. For the proof of them
we may repeat the same procedure for two-dimensional case with suitable modification, but here we adopt another way of reducing the problem to that for the two-dimensional case: roughly speaking we have $(d-2)$-dimensional variable, denoted by ${\bf z}$, against which the additional factor \begin{equation}\label{factor}
p^{(d-2)}_{t-s}(z)p^{(d-2)}_s(z), \quad z=|{\bf z}| \end{equation} that must be incorporated in the computation is integrated to produce the factor $p_t^{(d-2)}(0) $ (because of the semi-group property of $p_t$), which together with $p_t^{(2)}(x-a)$ constitutes the factor $p_t^{(d)}(x-a)$ in the final formula.
More details are given below. Recollecting the proof of Proposition \ref{UBD1}, we regard the two-dimensional space where the problem is discussed in it as a subspace of $\mathbb{R}^d$ in this proof and the line $L(\theta)$ (introduced in the proof of Lemma \ref{LBD}) as the intersection of this subspace with a $(d-1)$-dimensional hyper-plane, named $\Delta(\theta)$, that is tangent at $\xi$ with $\xi\cdot {\bf e} =\cos \theta$ to the sphere $\partial U(a)$. (Here we write $\Delta(\theta)$ for the hyper-plane which is determined not by $\theta$ but by $\xi$ since the variable $\theta$ is essential in the present issue.) Let $M(\theta, l)$ be the $(d-2)$-dimensional subspace contained in $\Delta(\theta)$ passing through $\xi^*(l)\in L(\theta)$ ($l$ is a coordinate of $L(\theta)$ as before) and perpendicular to the line $L(\theta)$. Put $$H_a({\bf y}, t,\xi) = \frac{P_{\bf y}[B(\sigma_a)\in d\xi, \sigma_a\in dt]\,}{m_a(d\xi)dt}\quad\quad ({\bf y} \notin U(a), \,\xi\in \partial U(a)) $$ and $$ \psi(l,t) = \frac{P_{\bf x}[\, {\rm pr}_{L(\theta)} B(\sigma_{\Delta(\theta)})\in d l, \sigma_{\Delta(\theta)}\in dt]}{dl dt} ,$$ where $ {\rm pr}_{L(\theta)}$ denotes the orthogonal projection on $L(\theta)$, and define $U^{(d)}$ as in (\ref{K}) but with $H_a({\bf y}, t,\xi)$ in place of $h^*_a({\bf y}, t,\theta)$.
Then for each $l$ the claim (\ref{U}) is replaced by
\begin{eqnarray*} U^{(d)} &=& \int_0^t ds \int_{\mathbb{R}} \psi(l, t-s)dl \int_{M(\theta,l)} p^{(d-2)}_{t-s}(z)H_a(\xi^*(l) +{\bf z},s,\xi) |d{\bf z}| \\ &\leq & \frac{C\Psi_a(x,t,\theta)}{av\cos^3 \theta}. \end{eqnarray*}
For the region in which $z < \eta $, we may simply multiply the integrand in (\ref{K}) by (\ref{factor}) without anything that requires particular attention. If $z > \eta $, we also multiply the integrand by $p^{(d-2)}_{t-s}({\bf z})$, replace $h^*_a(\xi^*(l), s,\theta)$ by $H_a(\xi^*(l)+{\bf z},s, \xi)$ and use the bound $$ H_a(\xi^*(l)+{\bf z},s,\xi) \leq \frac{C z^2}{s}p_s^{(2)}(\eta) p_s^{(d-2)}(z)\Big(1\vee\frac{z^2}{\sqrt s}\Big) e^{C_1z^6/2s}$$ in Step 2 (Lemmas \ref{lem5.2.2} and \ref{lem5.2.4}) (i.e., the step corresponding to that in the proof of Proposition \ref{UBD1}), and $$ H_a(\xi^*(l)+{\bf z},s,\xi) \leq \frac{C z}{s}p_s^{(2)}(\eta) p_s^{(d-2)}(z)$$ in the last part of Step 2 (Lemma \ref{lem5.2.4}) and in Step 3. In Step 2 there appears the integral $$\int_0^b \bigg(\frac{z^2}{s} + \frac{z^4}{\sqrt s}\bigg) \exp\Big\{-\frac{z^2 -6C_1z^6}{2T}\Big\} \frac{z^{d-3}dz}{T^{(d-2)/2}} \quad \mbox{where}\quad T= \frac{s(t-s)}{t},$$ which is made less than unity for $s<1/v$ by taking $b$ small enough, especially with $b=v^{-1/4}$.
In Step 3 (and the last part of Step 2) we have only to notice that
$$\int_b^\infty \frac{z}{\sqrt s} p^{(d-2)}_{t-s}(z)p^{(d-2)}_s(z)z^{d-3}dz \leq C e^{-vb/4} $$ for $s<1/v$. With these considerations taken into account the proof of Proposition \ref{UBD1} goes through virtually intact. The further details are omitted.
In a similar way Proposition \ref{UBD2} and the lower bound obtained in {\bf 5.3} are extended to the dimensions $d\geq 3$.
\vskip2mm\vskip2mm\noindent
\section{Brownian Motion with a Constant Drift} In this section we present the results for the Brownian motion with a constant drift that are readily derived from those given above for the bridge. The Brownian motion $B_t$ started at ${\bf x}$ and conditioned to hit $U(a)$ at $t$ with $v:=x/t$ kept away from zero may be comparable or similar to the process $B_t - tv{\bf e}$ in significant respects, and some of our results for the former are more naturally comprehensible in their translation in terms of the latter (see (\ref{transl}) at the end of this section).
\vskip2mm\noindent {\bf 7.1. \, Formulae in general setting} \vskip2mm\noindent Given $v>0$, we put $${\bf v} = v{\bf e}$$
(but ${\bf x}\notin U(a)$ is arbitrary) and label the objects defined by means of $B^{({\bf v})}_t:=B_t- t {\bf v}$ in place of $B_t$ with the superscript $\,^{({\bf v})}$ like $\sigma_a^{({\bf v})}, \Theta_t^{({\bf v})},$ etc. The translation is made by using the formula for drift transform. We put $\gamma(\cdot)= -{\bf v}$ (constant function) and
$Z(s) = e^{\int_0^s \gamma(B_u)\cdot d B_u -\frac12 \int_0^s |\gamma|^2(B_u)du}$, so that $P_{\bf x}[ (B_t^{({\bf v})})_{t\leq s}\in \Gamma] = E_{\bf x}[ Z(s); (B_t)_{t\leq s} \in \Gamma]$ for $\Gamma $ a measurable set of $C([0,s],\mathbb{R}^d)$.
It follows that $Z(\sigma_a) = \exp\{-{\bf v}\cdot B(\sigma_a) +{\bf v}\cdot B_0-\frac12 v^2\sigma_a \}$. Hence
\begin{eqnarray*}
&&P_{{\bf x}}[B^{({\bf v})}(\sigma^{({\bf v})}_a) \in d\xi, \sigma^{({\bf v})}_a \in dt] \\
&&\quad = e^{{\bf v}\cdot{\bf x} -\frac12 v^2t}e^{-{\bf v}\cdot \xi}P_{{\bf x}}[B(\sigma_a)\in d\xi, \sigma_a \in dt],
\end{eqnarray*}
and putting
$$ f_{a,t}^{({\bf v})}({\bf x},\xi) =\frac{e^{-{\bf v}\cdot \xi}P_{{\bf x}}[ B(\sigma_a) \in d\xi \,|\, \sigma_a =t]}{m_a(d\xi)}, $$ we obtain $$ \frac{P_{{\bf x}}[B^{({\bf v})}(\sigma^{({\bf v})}_a) \in d\xi, \sigma^{({\bf v})}_a \in dt] }{m_a(d\xi)dt}= e^{{\bf v}\cdot{\bf x} -\frac12 v^2t}q_a^{(d)}(x,t) f_{a,t}^{({\bf v})}({\bf x},\xi), $$ \begin{equation}\label{7.1.1} \frac{P_{{\bf x}}[ \sigma^{({\bf v})}_a \in dt] }{dt}= e^{{\bf v}\cdot{\bf x} -\frac12 v^2t}q_a^{(d)}(x,t) \int_{\partial U(a)} f_{a,t}^{({\bf v})}({\bf x},\xi)m_a(d\xi), \end{equation} and \begin{equation}\label{7.1.2}
\frac{P_{{\bf x}}[B^{({\bf v})}(\sigma^{({\bf v})}_a) \in d\xi \,|\, \sigma^{({\bf v})}_a = t] }{m_a(d\xi)} = \frac{f_{a,t}^{({\bf v})}({\bf x},\xi)}{\int_{\partial U(a)} f_{a,t}^{({\bf v})}({\bf x},\xi)m_a(d\xi) }. \end{equation}
Suppose $x/t\to 0$ and $t\to\infty$. By Theorem \ref{thm1.1}, $$f_{a,t}^{({\bf v})}({\bf x},\xi) =e^{-{\bf v}\cdot \xi} \Big(1+ O\Big(\frac{x}{t}\ell(x,t)\Big)\Big),$$ so that
$$\frac{P_{{\bf x}}[\sigma^{({\bf v})}_a \in dt] }{dt} = \Big[\int_{|\xi|=a} e^{-{\bf v}\cdot \xi}m_a(d\xi)\Big] e^{{\bf v}\cdot{\bf x} -\frac12 v^2t}q_a^{(d)}(x,t) \Big(1+ O\Big(\frac{x}{t}\ell(x,t)\Big)\Big),$$
where $\ell(x,t)$ is the same function as given in Theorem \ref{thm1.1} if $d=2$ and $\ell(x,t) \equiv 1$ if $d\geq 3$. Noting $e^{{\bf v}\cdot {\bf x} -\frac12 v^2t}p_t^{(d)}(x) = p_t^{(d)}(|{\bf x} - t{\bf v}|)$ we deduce from Theorem A that \begin{equation}\label{7.1.3}
e^{{\bf v}\cdot{\bf x} -\frac12 v^2t}q_a^{(d)}(x,t) = a^{2\nu}\Lambda_\nu\Big(\frac{ax}{t}\Big)p_t^{(d)}(|{\bf x} - t{\bf v}|) \bigg[1-\Big(\frac{a}x\Big)^{2\nu}\bigg](1+o(1)) \end{equation} for $d\geq 3$ and an analogous relation for $d=2$ (where the formula must be modified in the case $x\leq \sqrt t$ according to (\ref{R21})). We have the identities $C_0^\nu \equiv 1$ and $$ \int_0^\pi e^{-z\cos \theta} C_n^{\nu}(\cos \theta) \sin^{2\nu}\theta \,d\theta =(-1)^n\frac{2^\nu \sqrt \pi \Gamma(\nu+\frac12)\Gamma(n+2\nu)}{\Gamma(2\nu)n!} \cdot \frac{I_{n+\nu}(z)}{z^\nu},$$ where $I_\nu(z)$ is the modified Bessel function of the first kind of order $\nu$ and, on putting $n=0$ in the latter,
$$\int_{|\xi|=a} e^{-{\bf v}\cdot \xi}m_a(d\xi) = \frac{2^\nu \sqrt \pi \,\Gamma(\nu+\frac12) }{\mu_{d}}\cdot\frac{I_{\nu}(v)}{v^\nu}.$$
Let $g(\phi;y)$, $y>0$, denote the function represented by the series in (\ref{lim}), namely \begin{equation}\label{g7.1} g(\phi; y) = \sum_{n=0}^\infty \frac{K_\nu(y)}{K_{\nu+n}(y)}H_n(\phi). \end{equation} Then, owing to Theorem \ref{thm1.21}, as $x/t \to \tilde v>0$ and $t\to\infty$, $$f_{a,t}^{({\bf v})}({\bf x},\xi) = e^{-{\bf v}\cdot \xi} g(\phi ;a\tilde v)(1+o(1)) \quad \mbox{for}\quad \xi\in \partial U(a), $$ where $\xi\cdot{\bf x}/ax = \cos \phi$. It is worth noting that if $ \tilde v/v$ is small, then the function $e^{-{\bf v}\cdot \xi} g(\theta ;a\tilde v)$ is maximized about $\xi_a := - a{\bf e}\in \partial U(a)$ (not $a{\bf e}$) irrespective of ${\bf x}$.
\vskip2mm\noindent {\bf 7.2. \, Case ${\bf x}- t{\bf v} =o(t)$}
\vskip2mm\noindent
In this subsection we let ${\bf x} = x{\bf e} $, while ${\bf v}$ is arbitrary but subject to the condition $$\quad \frac{ {\bf x}}{t}-{\bf v} \,\longrightarrow \, 0,$$
so that $${\bf v}\cdot\xi = t^{-1}{\bf x}\cdot \xi +o(1) = av\cos \theta +o(1)$$ uniformly for $\xi\in \partial U(a)$ with $\xi\cdot {\bf x}/ax =\cos \theta$.
Define $g^{({\bf v})}_a({\bf x},t,\theta)$ by \[
g^{({\bf v})}_a({\bf x},t,\theta) = \frac{P_{{\bf x}}[B^{({\bf v})}(\sigma^{({\bf v})}_a) \in d\xi\,|\, \sigma^{({\bf v})}_a = t] }{m_a(d\xi)}.
\] Then by (\ref{7.1.2}) \begin{equation}\label{7.2.2} g^{({\bf v})}_a({\bf x},t,\theta) =
\frac{e^{-{\bf v}\cdot\xi}g_a(x,t,\theta)} {\mu_d^{-1} \int_0^\pi e^{-{\bf v}\cdot\xi}g_a(x,t,\phi)\sin^{d-2}\phi \,d\phi}, \end{equation} where $g_a(x,t,\theta)$ is defined in (\ref{g7.1}).
Let $\Xi_{av} $ denote the (normalizing) constant $$\Xi_{av}=\int_{0}^\pi e^{- av\cos \theta}g(\theta;av)\frac{\sin^{d-2} \theta \, d\theta}{\mu_d}.$$
(Remember that $g(\theta;av)$ is the density w.r.t. $\mu_d^{-1}\sin^{d-2} \theta d\theta$ of the limit distribution of $\Theta(\sigma_a)$ conditioned on $\sigma_a =t$, $B_0=x{\bf e}$.) Then as $t\to \infty$ under $|{\bf x}/t -{\bf v}| \to 0$, we have $\Xi_{av} \sim \int_{|\xi|=a} f_{a,t}^{({\bf v})}({\bf x},\xi)m_a(d\xi)$ and hence \vskip3mm
(i) \quad ${\displaystyle \frac{P_{x{\bf e}}[ \sigma^{({\bf v})}_a \in dt] }{dt} \, \sim
\,\Xi_{av}\,a^{2\nu}\Lambda_\nu(av) p_t^{(d)}(|{\bf x} - t{\bf v}|) (1+o(1)); } $
\vskip3mm (ii) \quad ${\displaystyle g^{({\bf v})}_a(x{\bf e}, t,\theta) \,\sim\,\frac1{\Xi_{av}} e^{-av\cos \theta}g(\theta;av)}, $
\vskip2mm\noindent where the last asymptotic relation is uniform for $0\leq \theta\leq \pi$ and $v \leq M $ for any $M>0$; for (i) use the identities (\ref{7.1.1}) and (\ref{7.1.3}).
\vskip3mm\noindent
Similarly, substituting the formula of Corollary \ref{thm3.02} in (\ref{7.2.2}) (cf. (\ref{sin})) we obtain an asymptotic form of $g_a^{({\bf v})}({\bf x},t,\theta)$ as $v\to\infty$. On observing that this leads to
$$\Xi_{av} \sim \mu_{d}^{-1}v\int_0^{\pi/2}\sin^{d-2}\theta \cos \theta d\theta = \omega_{d-2}v/(d-1)\omega_{d-1}$$
($v\to \infty)$, a simple computation yields the following asymptotic relations: as $ v\to\infty$ and $|x -tv|/t \to 0$,
$$ \frac{P_{x{\bf e}}[ \sigma^{({\bf v})}_a \in dt] }{dt} \, \sim \, \frac{\omega_{d-2}}{d-1}a^{2\nu+1}v\, p_t^{(d)}(|{\bf x} - t{\bf v}|)$$ and, if $(av)^{-1/3} \leq \cos\theta \leq 1$, \begin{eqnarray*} g^{({\bf v})}_a(x{\bf e}, t,\theta)
= (d-1)\mu_d \bigg[\cos \theta+ O\Big(\frac{1}{av\cos^2 \theta}\Big)\bigg](1+o(1)),
\end{eqnarray*}
where $o(1)$ is independent of $\theta$;
also Corollary \ref{thm3.01} may translate into \begin{equation}\label{E}
P_{x \bf e} [\Theta_t^{({\bf v})}\in d\theta\,|\, \sigma_a^{({\bf v})}=t] \,\Longrightarrow \,(d-1){\bf 1}(0\leq\theta<{\textstyle\frac12} \pi) \sin^{d-2}\theta\, \cos \theta\, {d\theta}.
\end{equation} The last convergence result may be intuitively comprehended by noticing that the right-hand side is the law of the colatitude of a random variable taking values in the \lq northern hemisphere' of $\partial U(a)$ whose projection on the \lq equatorial plane' is uniformly distributed on the \lq\lq hyper disc'', $\mathbb{D}$ say, on the plane; in short it may be thought of as the distribution on the sphere induced by the uniform ray coming from the direction ${\bf e}$. Let ${\rm pr}_{{\bf e}}$ denote this projection on the equatorial plane. Then the result given in (\ref{E}) may be restated as follows: $P_{x\bf e} [{\rm pr}_{\bf e} B^{({\bf v})}_t\in dw\,|\, \sigma_a^{({\bf v})}=t]$, $d w \subset \mathbb D$ converges weakly to the uniform measure on $\mathbb{D}$. We rephrase Theorem \ref{thm1.2} in a similar fashion. Let $\xi \in \partial U(a)$, $\xi\cdot {\bf e} =a\cos \theta$ and $w={\rm pr}_{{\bf e}} \xi$ and note that
$$a-|w| \sim 2^{-1}a\cos^2 \theta \,\,\, (\theta \to {\textstyle \frac12} \pi), \quad \cos \theta = \sqrt{1-|w|^2/a^2} \quad\mbox{ and} \quad |d\xi| = |dw|/\cos \theta$$
and that $m_a(d\xi) = a^{-d+1}|d\xi|/\omega_{d-1}$ and $\omega_{d-2}=(d-1)c_{d-1}^*$, where $c^*_{n}$ denotes the volume of the unit ball in $\mathbb{R}^n$.
Then, from Theorem \ref{thm1.2} we deduce that uniformly for $w\in \mathbb{D}$, \begin{eqnarray} \label{transl}
&& \frac{P_{x\bf e} [{\rm pr}_{\bf e} B^{({\bf v})}_t\in dw\,|\, \sigma_a^{({\bf v})}=t] }{[a^{d-1}c^*_{d-1}]^{-1}|dw|} \\
&&\quad = \bigg[1 +O\bigg(\frac1{(1-|w|/a)^{3/2} av}\bigg) \bigg](1+o(1)) \qquad \mbox{for} \quad |w|/a <1- (av)^{-2/3},\nonumber\\
&& \quad \asymp (av)^{-1/3}/\sqrt{1- |w|/a} \qquad\qquad \qquad\qquad \mbox{for} \quad 1- (av)^{-2/3} \leq |w|/a \leq 1, \nonumber \end{eqnarray}
as $v\to \infty$ and $|x{\bf e}/t-{\bf v}|\to 0$, showing convergence of the density on the one hand and indicating the effect of Brownian noise that manifests itself as the singularity of the density along the boundary of $\mathbb{D}$.
The strict equalities ${\bf x}/x ={\bf v}/v = {\bf e}$ we have assumed above can be relaxed. The essential assumption is
${\bf x} - t{\bf v} =o(t)$, entailing that ${\bf v}\cdot \xi = t^{-1}{\bf x}\cdot \xi +o(1) = av\cos \theta +o(1)$ uniformly for $\xi\in \partial U(a)$ with $\xi\cdot {\bf x}/ax =\cos \theta$. The identity (7.1) no longer holds, but the two sides of it are asymptotically equivalent, and the other relations, including (7.2), remain valid.
\section{Appendix}
(A) \, The Gegenbauer polynomials $C^\nu_n(x)$, $n=0, 1,2,\ldots$, may be defined as the coefficients of $z^n$ in the Taylor series $(z^2 - 2xz +1)^{-\nu}= \sum C^\nu_n(x)z^n$ ($|z|<1, |x|\leq 1, \nu>0$) and form an orthogonal basis of the space $L^2([-1,1],(1-x^2)^{\nu-\frac12}dx)$ (cf. page 151 of \cite{SW}).
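As an illustrative aside (not part of the original text), the generating-function definition above can be checked numerically: the sketch below sums the series $\sum C^\nu_n(x)z^n$ via the standard three-term recurrence $nC^\nu_n = 2x(n+\nu-1)C^\nu_{n-1} - (n+2\nu-2)C^\nu_{n-2}$ and compares it with $(1-2xz+z^2)^{-\nu}$ evaluated directly; the parameter values are arbitrary test choices.

```python
def gegenbauer_series(nu, x, z, n_terms=60):
    # Recurrence: n C_n = 2x(n+nu-1) C_{n-1} - (n+2nu-2) C_{n-2},
    # with C_0 = 1 and C_1 = 2 nu x
    c_prev, c_curr = 1.0, 2.0 * nu * x
    total = c_prev + c_curr * z
    zn = z
    for n in range(2, n_terms):
        c_prev, c_curr = c_curr, (2.0 * x * (n + nu - 1.0) * c_curr
                                  - (n + 2.0 * nu - 2.0) * c_prev) / n
        zn *= z
        total += c_curr * zn
    return total

nu, x, z = 0.8, 0.3, 0.2
direct = (1.0 - 2.0 * x * z + z * z) ** (-nu)
print(abs(gegenbauer_series(nu, x, z) - direct) < 1e-10)  # -> True
```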
The function $u(x)=C^\nu_n(x)$ satisfies $$(x^2-1)u'' + (2\nu+1) xu' - n(n+2\nu)u =0 $$ and it follows that if $Y(\theta)=u(\cos \theta)$, $$\frac12 Y'' + \nu \cot \theta\, Y' + \frac{n(n+2\nu)}{2} Y =0.$$ \vskip2mm\noindent
(B) The density $ P_{\bf x}[B(\sigma_a)\in d\xi, \sigma_a\in dt]/ m_a(d\xi)dt$ admits an explicit eigenfunction expansion. In the case
$d=2$ it is given below. Let $p^0_{(a)}(t,{\bf x},{\bf y})$ denote the transition probability of a two-dimensional Brownian motion that is killed when it hits $U(a)$. Then according to Eq(8) on p. 378 in \cite{CJ} \begin{equation}\label{seriesexp} p^0_{(a)}(t, x{\bf e}, {\bf y}) = \frac{1}{2\pi} \sum_{n=-\infty}^\infty \cos n\theta
\int_0^\infty e^{-\lambda^2 t/2} \frac{U_n(\lambda, x)U_n(\lambda, y)}{J^2_n(a\lambda)+ Y^2_n(a\lambda)}\lambda d\lambda,
\end{equation} where $J_n$ and $Y_n$ are the usual Bessel functions of the first and second kind, respectively, $$U_n(\lambda, y) = Y_n(\lambda a) J_n(\lambda y) - J_n(\lambda a)Y_n(\lambda y)$$
and ${\bf y} =(y,\theta)$, the polar coordinate of ${\bf y}$ (with $y=|{\bf y}|$, $\cos \theta ={\bf y}\cdot {\bf e}/y$).
From the identity $(Y_\nu J'_\nu - J_\nu Y'_\nu)(z) = -2/\pi z$ it follows that $(\partial/\partial y) U_n(\lambda, y)|_{y=a} = -2/\pi a$ and \begin{eqnarray*}
\frac{P_{x{\bf e}}[{\rm Arg} \,B_t\in d\theta, \sigma_a \in dt]}{a d\theta dt} &=& \frac12\frac{\partial}{\partial y} p^0_{(a)}(t, x{\bf e}, {\bf y})|_{y=a} \\ & =& \sum_{n=-\infty}^\infty I_n(x, t)\cos n\theta \end{eqnarray*} where $$ I_n(x, t)= \frac{1}{2a\pi^2}
\int_0^\infty e^{-\lambda^2 t/2} \frac{- U_n(\lambda, x) \lambda d\lambda }{J^2_n(a\lambda)+ Y^2_n(a\lambda)}.$$ Since integration by $a d\theta$ reduces the density given above to $q^{(2)}_a(x,t)$, we have $$q^{(2)}_a(x,t) = 2\pi aI_0(x, t) = \frac1{\pi} \int_0^\infty e^{-\lambda^2 t/2} \frac{- U_0(\lambda, x) \lambda d\lambda }{J^2_0(a\lambda)+ Y^2_0(a\lambda)}$$ and
$$\frac{P_{x{\bf e}}[{\rm Arg}\, B_t\in d\theta \,|\,\sigma_a = t]}{2\pi d\theta}= \frac1{2\pi}+\frac1{2\pi I_0(x,t)} \sum_{n=1}^\infty I_n(x, t)\cos n\theta. $$ On comparing with (\ref{-0}) $2I_n(x,t)$ must agree with $a^{-1}q^{(2)}_a(x,t)\alpha_n(x,t)$, so that $$q_a^{(2n+2)}(x,t) = \bigg(\frac{a}{x}\bigg)^{n} 2a\pi I_{n}(x,t) =\frac{1}{\pi} \bigg(\frac{a}{x}\bigg)^{n}
\int_0^\infty \frac{- U_n(\sqrt{2\alpha}, x) e^{-\alpha t} d\alpha }{J^2_n(a\sqrt{2\alpha})+ Y^2_n(a \sqrt{2\alpha})}.$$ This last formula, though not used in this paper, is valid for non-integral $n$ and useful: e.g., its use provides another approach in which one may dispense with the arguments using the Cauchy integral theorem for the proofs in \cite{Ubh} and \cite{Ubes}.
The integral transform involved in the Fourier series (\ref{seriesexp}) is derived by using the Weber formula (\cite{T1}, p. 86) and the higher-dimensional analogue is given by the Legendre series (as in (\ref{spexp}) with an integral transform similar to the one in (\ref{seriesexp})).
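As a small numerical aside (not part of the original text), the cross-product identity $(Y_\nu J'_\nu - J_\nu Y'_\nu)(z) = -2/\pi z$ used above is easy to verify with SciPy's Bessel routines; the order and argument below are arbitrary test values.

```python
import math
from scipy.special import jv, yv, jvp, yvp

nu, z = 0.7, 1.9
w = yv(nu, z) * jvp(nu, z) - jv(nu, z) * yvp(nu, z)
print(abs(w + 2.0 / (math.pi * z)) < 1e-12)  # -> True
```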
\vskip2mm\noindent (C)\, We prove that for each $\delta>0$, as $y\downarrow 0$ and $\phi \to 0$ \begin{equation}\label{Rem510}
\frac1{a^{d-1}\omega_{d-1}}\int_0^\delta h_a(a+y, s, \phi)ds = \frac{2y}{\omega_{d-1} [y^2+ (a\phi)^2]^{d/2}}(1+o(1))
\end{equation} (a result used in Remark 5). This is an expression of the obvious fact that as $y \downarrow 0$ the hitting distribution of $\partial U(a)$ for the Brownian motion started at
$(a+y){\bf e}$ converges to that of the plane tangent to it at $a{\bf e}$: the ratio on the right-hand side is a substitute of the density of the latter distribution, where $(a\phi)^2$ in the denominator must be replaced by $|{\bf z}- a{\bf e}|^2$ with ${\bf z}$ being any point of the plane such that ${\bf z}\cdot {\bf e}/ |{\bf z}| = \cos \theta$.
For verification let $P({\bf z},\xi;a)$ be the Poisson kernel of the exterior of the ball $U(a)$, with respect to the
uniform probability $m_a(d\xi)$ so that $\int_{\partial U(a)}P({\bf z},\xi; a)m_a(d\xi) = (a/z)^{2\nu}$ ($z=|{\bf z}|>a$). It is given by
$$P({\bf z},\xi; a) = \frac{a^{2\nu}(z^2-a^2)}{|{\bf z}-\xi|^d}, \quad z>a, \, \xi\in \partial U(a).$$ Let $\xi$ be such that ${\bf z}\cdot\xi/za=\cos \phi$. Then by an elementary computation we find that $$P({\bf z},\xi; a) = \frac{2a^{2\nu+1}y}{[y^2+ (a\phi)^2]^{d/2}}(1+o(1))\quad \mbox{as}\quad y:= z-a \downarrow 0, \,\, \phi \to 0$$ and this shows (\ref{Rem510}), for $P({\bf z},\xi; a)$ equals the whole integral $\int_0^\infty h_a({\bf z}, s, \phi)ds$ and this integral restricted to $[\delta,\infty)$ is dominated by a constant multiple of $y$ owing to Theorem A and Theorem \ref{thm1.1} (cf. the first inequality of Lemma \ref{lem3.5}).
\vskip2mm
\end{document} | arXiv |
Successor function
In mathematics, the successor function or successor operation sends a natural number to the next one. The successor function is denoted by S, so S(n) = n + 1. For example, S(1) = 2 and S(2) = 3. The successor function is one of the basic components used to build a primitive recursive function.
The successor operation is also known as zeration in the context of the zeroth hyperoperation: H0(a, b) = 1 + b. In this context, the extension of zeration is addition, which is defined as repeated succession.
Overview
The successor function is part of the formal language used to state the Peano axioms, which formalise the structure of the natural numbers. In this formalisation, the successor function is a primitive operation on the natural numbers, in terms of which the standard natural numbers and addition are defined. For example, 1 is defined to be S(0), and addition on natural numbers is defined recursively by:
m + 0 = m,
m + S(n) = S(m + n).
This can be used to compute the addition of any two natural numbers. For example, 5 + 2 = 5 + S(1) = S(5 + 1) = S(5 + S(0)) = S(S(5 + 0)) = S(S(5)) = S(6) = 7.
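This recursion can be transcribed directly into a short program; the following sketch (an illustration added here, not part of the article) mirrors the computation of 5 + 2 above:

```python
def S(n):
    """Successor function: S(n) = n + 1."""
    return n + 1

def add(m, n):
    # Peano recursion: m + 0 = m and m + S(n) = S(m + n)
    return m if n == 0 else S(add(m, n - 1))

print(add(5, 2))  # -> 7
```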
Several constructions of the natural numbers within set theory have been proposed. For example, John von Neumann constructs the number 0 as the empty set {}, and the successor of n, S(n), as the set n ∪ {n}. The axiom of infinity then guarantees the existence of a set that contains 0 and is closed with respect to S. The smallest such set is denoted by N, and its members are called natural numbers.[1]
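The von Neumann construction can likewise be sketched with hereditarily finite sets; in the illustrative Python fragment below (not part of the article), frozensets play the role of sets, and each numeral n ends up with exactly n elements:

```python
def succ(n):
    # von Neumann successor: S(n) = n ∪ {n}
    return n | frozenset([n])

zero = frozenset()  # 0 = {}
one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
print(len(zero), len(one), len(two), len(three))  # -> 0 1 2 3
```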
The successor function is the level-0 foundation of the infinite Grzegorczyk hierarchy of hyperoperations, used to build addition, multiplication, exponentiation, tetration, etc. It was studied in 1986 in an investigation involving generalization of the pattern for hyperoperations.[2]
It is also one of the primitive functions used in the characterization of computability by recursive functions.
See also
• Successor ordinal
• Successor cardinal
• Increment and decrement operators
• Sequence
References
1. Halmos, Chapter 11
2. Rubtsov, C.A.; Romerio, G.F. (2004). "Ackermann's Function and New Arithmetical Operations" (PDF).
• Paul R. Halmos (1968). Naive Set Theory. Nostrand.
Hyperoperations
Primary
• Successor (0)
• Addition (1)
• Multiplication (2)
• Exponentiation (3)
• Tetration (4)
• Pentation (5)
Inverse for left argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Root extraction (3)
• Super-root (4)
Inverse for right argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Logarithm (3)
• Super-logarithm (4)
Related articles
• Ackermann function
• Conway chained arrow notation
• Grzegorczyk hierarchy
• Knuth's up-arrow notation
• Steinhaus–Moser notation
| Wikipedia |
What is the digit in the hundredths place of the decimal equivalent of $\frac{9}{160}$?
Since the denominator of $\dfrac{9}{160}$ is $2^5\cdot5$, we multiply numerator and denominator by $5^4$ to obtain \[
\frac{9}{160} = \frac{9\cdot 5^4}{2^5\cdot 5\cdot 5^4} = \frac{9\cdot 625}{10^5} = \frac{5625}{10^5} = 0.05625.
\]So, the digit in the hundredths place is $\boxed{5}$. | Math Dataset |
\begin{document}
\title[Extensions of line bundles and Brill--Noether loci] {Extensions of line bundles and Brill--Noether loci of rank-two vector bundles on a general curve}
\author{Ciro Ciliberto} \curraddr{Dipartimento di Matematica, Universit\`a degli Studi di Roma Tor Vergata\\ Via della Ricerca Scientifica - 00133 Roma \\Italy} \email{[email protected]}
\author{Flaminio Flamini} \curraddr{Dipartimento di Matematica, Universit\`a degli Studi di Roma Tor Vergata\\ Via della Ricerca Scientifica - 00133 Roma \\Italy} \email{[email protected]}
\let\thefootnote\relax\footnotetext{2010 {\em Mathematics Subject Classification}. Primary: 14H60, 14D20, 14J26, 14M12; Secondary: 14N05, 14D06.} \keywords{Brill-Noether loci; Semistable vector bundles; Ruled surfaces; Degeneracy loci; Moduli.}
\thanks{{\em Thanks} The authors wish to thank E. Ballico, for interesting discussions on \cite{Ballico1,Lau}, and P.\,E. Newstead, A. Castorena for having pointed out to us additional references we missed in the huge amount of papers on this topic. At last, warm thanks to the referee for careful reading and positive comments.}
\begin{abstract} In this paper we study Brill-Noether loci for rank-two, (semi)stable vector bundles on a general curve $C$. Our aim is to describe the general member $\mathcal{F}$ of some of its components just in terms of extensions of line bundles with suitable {\em minimality properties}, providing information both on families of irreducible, unisecant curves of the ruled surface $\mathbb{P}(\mathcal{F})$ and on the birational geometry of the component of the Brill-Noether locus to which $\mathcal{F}$ belongs. \end{abstract}
\maketitle
\tableofcontents
\section*{Introduction}\label{sec:intro} Let $C$ be a smooth, irreducible projective curve of genus $g$ and $U_C(d)$ be the moduli space of (semi)stable, degree $d$, rank-two vector bundles on $C$. In this paper we will be mainly concerned with $C$ of general moduli.
Our aim is to study the Brill-Noether loci $B_C^{k}(d) \subset U_C(d)$ parametrizing (classes of) vector bundles \linebreak $[\mathcal{F}] \in U_C(d)$ having $h^0(C,\mathcal{F}) \geqslant k$, with $k$ a non-negative integer.
The classical Brill-Noether theory for line bundles on a general curve is very important and well established (cf., e.g., \cite{ACGH}). Brill-Noether theory for higher-rank vector bundles is a very active research area (see References, for some results in the subject), but several basic questions concerning Brill--Noether loci, like non-emptiness, dimension, irreducibility, local structure, etc., are still open in general. Contrary to the rank-one case, the Brill-Noether loci for $C$ general do not always behave as expected (cf. e.g. \cite{BeFe} and \S\,\ref{SS:low}).
Apart from its intrinsic interest, Brill-Noether theory is important in view of applications to other areas, like birational geometry to mention just one (cf. e.g. \cite{Beau,Be,BeFe,HR,Muk}).
The most general existence result in the rank-two case is the following:
\begin{theorem}\label{thm:TB} (see \cite{TB}) Let $C$ be a curve with general moduli of genus $g \geqslant1$. Let $k \geqslant 2$ and $i := k + 2g-2-d \geqslant 2$ be integers. Let $\rho_d^k := 4g-3 - i k$ and assume $$\rho_d^k \geqslant 1 \;\; {\rm when} \; d \; {\rm odd}, \;\;\; \rho_d^k \geqslant 5 \;\; {\rm when} \; d \; {\rm even}.$$Then $B^{k}_C(d)$ is not empty and it contains a component $\mathcal B$ of the expected dimension $\rho_d^k$. \end{theorem}
\noindent This previous result is proved with a quite delicate degeneration argument (cf. also \cite{CMT}); in some particular cases, one has improvements of it (cf., e.g., \cite{Sun,Lau,TB0,TB00,Tan,FO,TB1,LNP}).
The degeneration technique used in \cite{TB}, though powerful, does not provide a \emph{geometric description} of the (isomorphism classes of) bundles $\mathcal{F}$ in $B^{k}_C(d)$, in particular of the general one in a component. By ``geometric description'' we mean a description of families of curves on the ruled surface $\mathbb{P}(\mathcal{F})$, in particular of unisecant curves to its fibres. This translates in turn to exhibiting $\mathcal{F}$ as an extension of line bundles $$(*)\;\;\; 0 \to N \to \mathcal{F} \to L \to 0$$(cf.\,\eqref{eq:Fund}), which we call a {\em presentation} of $\mathcal{F}$. Of particular interest is a presentation $(*)$ with suitable {\em minimality properties} on the quotient line bundle $L$, which translate into minimality properties for families of irreducible unisecant curves on the surface $\mathbb{P}(\mathcal{F})$ (cf.\,\S\,\ref{S:VB} below).
This approach provides basic information about the vector bundle $\mathcal{F}$, which can be useful in a field in which so little is known. Such information has not been given so far: indeed, the description of a minimal presentation is not known in general and, in particular, is not provided by Theorem \ref{thm:TB}.
One of the main objectives of this paper is to shed some light on this subject. As a consequence of our analysis, we provide explicit parametric representations (and so information about the birational geometry) of some components of $B_C^{k}(d)$ (cf.\,\S's\,\ref{S:PSBN},\,\ref{S:BND}).
Viewing rank two vector bundles as extensions of line bundles is very classical: by suitably interpreting the classical language, this goes back to C. Segre \cite{Seg}. In recent times, it has been exploited, e.g., in \cite[\S\,2,3]{BeFe}, \cite[\S\,8]{Muk}, where the case of canonical determinant and $g \leqslant 12$ has been treated. As noted in \cite{BeFe}, this approach ``{\em works well enough in low genera\ldots but seems difficult to implement in general}''. Nevertheless, we follow this route, with no upper bound on the genus but, as we will see, bounding the speciality $i:=h^1(C,\mathcal{F})$.
Our approach is as follows. We construct (semi)stable vector bundles $\mathcal{F}$ in Brill-Noether loci, as extensions of line bundles $L$ and $N$: the Brill--Noether loci we hit in this way depend on the cohomology of $L$ and $N$ and on the behaviour of the {\em coboundary map} $H^0(C, L) \stackrel{\partial}{\longrightarrow} H^1(C,N)$ associated to $(*)$, cf.\,\S's\,\ref{S:BNLN},\,\ref{sec:constr}; we exhibit explicit constructions of such vector bundles in Theorems \ref{LN}, \ref{C.F.VdG}, \ref{uepi}, \ref{unepi}. These theorems provide existence results for $B^{k}_C(d)$ which are comparable to Theorem \ref{thm:TB}: slightly weaker, but easier to prove (cf.\,Remark\,\ref{rem:C.F.VdG}--(3)). At the same time, they imply non--emptiness for fibres of the determinant map $B^{k}_C(d)\to {\rm Pic}^d(C)$, i.e. for Brill--Noether loci with fixed determinant $\det(\mathcal{F}) := L\otimes N$, for any possible $L$ and $N$ as in the assumptions therein; this is in the same spirit as \cite{LaNw}, where, however, only the case of fixed determinant of odd degree has been considered.
In any event, as we said, the main purpose of this paper is not to construct new components of Brill-Noether loci, but to provide a minimal presentation for the \emph{general element} of components of $B^{k}_C(d)$, for $C$ with general moduli. To do this, we take line bundles $L$ and $N$ with assumptions as in Theorems \ref{LN}--\ref{unepi} and let them vary in their own Brill-Noether loci of their Picard schemes. Accordingly, we let the constructed bundles $\mathcal{F}$ vary in suitable degeneracy loci $\Lambda \subseteq {\rm Ext}^1(L,N)$, defined in such a way that $\mathcal{F} \in \Lambda$ general has the desired speciality $i$ (cf. \S\,\ref{S:PSBN}). In this way we obtain irreducible varieties parametrizing triples $(L,N,\mathcal{F})$, i.e. any such variety is endowed with a morphism $\pi$ to $U_C(d)$, whose image is contained in a component $\mathcal B$ of a Brill-Noether locus. To find a minimal presentation $(*)$ for a general member $\mathcal{F}$ of $\mathcal B$, we are reduced to finding conditions on $L$, $N$ and on the coboundary map $\partial$ ensuring that the morphism $\pi$ is dominant onto $\mathcal B$. We achieve this goal by using results in \S\,\ref{S:VB}, which deal with the study of some families of irreducible unisecants of given speciality on the ruled surface $\mathbb{P}(\mathcal{F})$ (cf.\,Lemmas \ref{lem:claim1},\,\ref{lem:claim2},\,Corollaries \ref{C.F.1b},\,\ref{C.F.1c},\,\ref{C.F.2},\,\ref{C.F.3b}\,and Remarks\,\ref{rem:17lug},\,\ref{rem:bohb}).
As is clear from the foregoing description, to find a presentation of $\mathcal{F}$ general in a component of a Brill-Noether locus is a difficult problem in general. We are able to solve it here for $i\leqslant 3$.
Our main results for Brill--Noether loci $B_C^k(d)$ are Theorems \ref{i=1}, \ref{i=2}, \ref{i=3}, which respectively deal with cases $i=1,2,3$ and $k= d-2g+2+i$. A first fact we prove therein is that (a component of) $B_C^k(d)$ is filled up by vector bundles $\mathcal{F}$ having a minimal {\em special} presentation as $$0 \to N \to \mathcal{F} \to \omega_C(- D_{i-1}) \to 0,$$where $D_{i-1} \in {\rm Sym}^{i-1}(C)$, $N \in {\rm Pic}^{d-2g+1+i}(C)$ and $\mathcal{F} \in \Lambda_{i-1}$ are general, and $\Lambda_{i-1}$ is a {\em good component} of the degeneracy locus
$$\left\{\mathcal{F} \in {\rm Ext}^1(\omega_C(- D_{i-1}), N) \,|\, \dim {\rm Coker} \left(H^0(C,\omega_C(- D_{i-1})) \stackrel{\partial}{\longrightarrow} H^1(C,N) \right) \geqslant i-1 \right\} \subseteq {\rm Ext}^1(\omega_C(- D_{i-1}), N)$$(cf.\,Def.\,\ref{def:ass1} for precise definitions of special presentation and minimality, Rem.\,\ref{rem:wt} and Defs.\,\ref{def:goodc}, \ref{def:goodtot} for goodness, and Thms.\,\ref{thm:mainext1},\,\ref{thm:mainextt} for existence of good components). The case $i=1$ was already treated in \cite{Ballico1} with different methods (cf. Remark \ref{rem:i=1}); in cases $i=2,3$ our results are new.
Statements of our main results, to which the reader is referred, contain even more. Indeed, they also describe families of special, irreducible unisecants of $\mathbb{P}(\mathcal{F})$ which are of minimal degree with respect to its tautological line bundle. Apart from its intrinsic interest, this description plays a fundamental role when one tries to construct components of the Hilbert scheme parametrizing linearly normal, genus $g$ and degree $d$ special scrolls in projective spaces whose general point parametrizes a {\em stable scroll} (cf. e.g.\,\cite{CF}).
Finally, the proofs of Theorems \ref{i=1}, \ref{i=2}, \ref{i=3} show in particular that the map $\pi$ from the parameter space of triples $(L, N, \mathcal{F})$ to (the dominated component of) $B_C^k(d)$ is generically finite, sometimes even birational (cf. Rem.\,\ref{rem:i=1}--(3)), giving therefore information about the birational geometry of the Brill--Noether locus.
Other main results of the paper are given by Theorems \ref{prop:M12K}, \ref{prop:M22K}, \ref{prop:M32K} which deal with the canonical determinant case.
In principle, there is no obstruction to pushing the ideas of this paper further, in order to treat higher speciality cases. However, this becomes increasingly complicated, and therefore we limit ourselves to presenting, at the end of the paper, a few suggestions on how to proceed in general and to proposing a conjecture (see \S\,\ref{sec:high}).
The paper is organized as follows. Section \ref{S:VB} is devoted to preliminaries about families of special, irreducible unisecants on ruled surfaces $\mathbb{P}(\mathcal{F})$ and the corresponding special presentation of the bundle $\mathcal{F}$. Sections \ref{S:MS} and \ref{S:BNLN} are devoted to recalling basic facts on (semi)stable, rank--two vector bundles of degree $d$, extensions of line bundles, and useful results of Lange--Narasimhan and Maruyama (cf. Proposition \ref{prop:LN} and Lemma \ref{lem:technical}). Section \ref{sec:constr} is the technical one, which contains our constructions of vector bundles in Brill-Noether loci as extensions of line bundles $L$ and $N$. Section \ref{S:PSBN} is where we deal with parameter spaces of triples and maps from them to $U_C(d)$, landing in Brill-Noether loci. The general machinery developed in the previous sections is then used in \S\,\ref{S:BND}, in order to prove our main results mentioned above.
\section{Notation and terminology}\label{sec:P}
In this paper we work over $\mathbb C$. All schemes will be endowed with the Zariski topology. We will interchangeably use the terms rank-$r$ vector bundle on a scheme $X$ and rank-$r$ locally free sheaf.
We denote by $\sim$ the linear equivalence of divisors, by $\sim_{alg}$ their algebraic equivalence and by $\equiv$ their numerical equivalence. We may abuse notation and identify divisor classes with the corresponding line bundles, interchangeably using additive and multiplicative notation.
If $\mathcal P$ is the parameter space of a flat family of subschemes of $X$ and if $Y$ is an element of the family, we denote by $Y \in \mathcal P$ the point corresponding to $Y$. If $\mathcal M$ is a moduli space, parametrizing geometric objects modulo a given equivalence relation, we denote by $[Z] \in \mathcal M$ the moduli point corresponding to the equivalence class of $Z$.
Let
\noindent $\bullet$ $C$ be a smooth, irreducible, projective curve of genus $g$, and
\noindent $\bullet$ $\mathcal{F}$ be a rank-two vector bundle on $C$.
Then, $F:= \mathbb{P}(\mathcal{F}) \stackrel{\rho}{\to} C$ will denote the {\em (geometrically) ruled surface} (or {\em the scroll}) associated to $(\mathcal{F}, C)$; $f$ will denote the general $\rho$-fibre and
$\mathcal{O}_F(1)$ the {\em tautological line bundle}. A divisor in $|\mathcal{O}_F(1)|$ will be usually denoted by $H$. If $\widetilde{\Gamma}$ is a divisor on $F$, we will set ${\rm deg}(\widetilde{\Gamma}) := \widetilde{\Gamma} H$.
We will use the notation $$d:= \deg(\mathcal{F}) = \deg(\det(\mathcal{F})) = H^2 = \deg(H);$$$i(\mathcal{F}):= h^1(\mathcal{F})$ is called the {\em speciality} of $\mathcal{F}$ and will be denoted by $i$, if there is no danger of confusion. $\mathcal{F}$ (and $F$) is {\em non-special} if $i= 0$, {\em special} otherwise.
As customary, $W^r_a(C)$ will denote the {\em Brill-Noether locus}, parametrizing line bundles $A \in {\rm Pic}^a(C)$ such that $h^0(A) \geqslant r+1$, $$\rho(g, r, a):= g - (r+1) ( r+g-a)$$the {\em Brill-Noether number} and \begin{equation}\label{eq:Petrilb} \mu_0(A) : H^0(C,A) \otimes H^0(\omega_C \otimes A^{\vee}) \to H^0(C, \omega_C) \end{equation} the {\em Petri map}. As for the rest, we will use standard terminology and notation as in e.g. \cite{ACGH}, \cite{Ha}, etc.
\section{Scrolls unisecants}\label{S:VB}
We recall some basic facts on unisecant curves of the scroll $F$ (cf. \cite{GP2, Ghio} and \cite[V-2]{Ha}).
One has ${\rm Pic}(F) \cong \mathbb{Z}[\mathcal{O}_F(1)] \oplus \rho^*({\rm Pic}(C))$ (cf.\ \cite[\S\,5, Prop.\,2.3]{Ha}). Let ${\rm Div}_F$ be the scheme (not of finite type) of effective divisors on $F$, which is a sub-monoid of ${\rm Div}(F)$. For any $k \in {\mathbb N}$, let ${\rm Div}^k_F$ be the subscheme (not of finite type) of ${\rm Div}_F$ formed by all divisors $\widetilde{\Gamma}$ such that $\mathcal{O}_F(\widetilde{\Gamma}) \cong \mathcal{O}_F(k) \otimes \rho^*(N^{\vee})$, for some $N \in {\rm Pic} (C)$ (this $N$ is uniquely determined); then one has a natural morphism $$\Psi_k: {\rm Div}^k_F \to {\rm Pic}(C), \;\;\;\;\;\;\;\; \widetilde{\Gamma} \stackrel{\Psi_k}{\longrightarrow} N.$$
If $D \in {\rm Div} (C)$, then $\rho^*(D)$ will be denoted by $f_D$. Then $\widetilde{\Gamma} \in {\rm Div}^k_F$ if and only if $\widetilde{\Gamma} \sim kH - f_D$, for some $D \in {\rm Div}(C)$, and $\deg(\widetilde{\Gamma}) = k \deg(\mathcal{F}) - \deg(D)$.
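Indeed, the degree formula follows directly from intersection theory on $F$: since $H^2 = d$ and $f_D \cdot H = \deg(D)$ (because $\mathcal{O}_F(1)$ has degree one on every $\rho$-fibre), one has $$\deg(\widetilde{\Gamma}) = \widetilde{\Gamma} \cdot H = (kH - f_D)\cdot H = kH^2 - f_D \cdot H = k \deg(\mathcal{F}) - \deg(D).$$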
The curves in ${\rm Div}^1_F$ are called {\em unisecants} of $F$. Irreducible unisecants are isomorphic to $C$ and called {\em sections} of $F$. For any positive integer $\delta$, we consider (cf. \cite[\S\;5]{Ghio})
$${\rm Div}_F^{1,\delta} : = \{\widetilde{\Gamma} \in {\rm Div}_F^1 \; | \; {\rm deg}(\widetilde{\Gamma}) = \delta \},$$which is the {\em Hilbert scheme of unisecants of degree $\delta$ of $F$} (w.r.t. $H$).
\begin{remark}\label{rem:Hilbert} Let $\widetilde{\Gamma} = \Gamma + f_A$ be a unisecant, with $\Gamma$ a section and $A $ effective. Equivalently, we have an exact sequence \begin{equation}\label{eq:Fund2} 0 \to N ( - A) \to \mathcal{F} \to L \oplus \mathcal{O}_{A} \to 0 \end{equation} (cf.\;\cite{CCFMnonsp, CCFMBN}); in particular if $A =0$, i.e. $\widetilde{\Gamma} = \Gamma$ is a section, $\mathcal{F}$ fits in the exact sequence \begin{equation}\label{eq:Fund} 0 \to N \to \mathcal{F} \to L \to 0 \end{equation} and \begin{equation}\label{eq:Ciro410} \mathcal{N}_{\Gamma/F} \cong L \otimes N^{\vee}, \;\; {\rm so} \;\; \Gamma^2 = \deg(L) - \deg(N) = 2\delta - d, \end{equation} (cf. \;\cite[\S\;V,\;Prop. 2.6, 2.9]{Ha}). Accordingly $\Psi_{1,\delta}: {\rm Div}^{1,\delta}_F \to {\rm Pic}^{d-\delta}(C)$, the restriction of $\Psi_1$, endows ${\rm Div}_F^{1,\delta}$ with a structure of Quot scheme: with notation as in \cite[\S\,4.4]{Ser}, one has \begin{equation}\label{eq:isom1} \begin{array}{rrcc} \Phi_{1,\delta}: & {\rm Div}_F^{1,\delta} & \stackrel{\cong}{\longrightarrow} & {\rm Quot}^C_{\mathcal{F},\delta+t-g+1} \\
& \widetilde{\Gamma} & \longrightarrow & \left\{\mathcal{F} \to\!\!\to L \oplus \mathcal{O}_A \right\}. \end{array} \end{equation} From standard results (cf. e.g. \cite[\S\,4.4]{Ser}), \eqref{eq:isom1} gives identifications between tangent and obstruction spaces: \begin{equation}\label{eq:tang} H^0(\mathcal{N}_{\widetilde{\Gamma}/F}) \cong T_{[\widetilde{\Gamma}]} ({\rm Div}_F^{1,\delta}) \cong {\rm Hom} (N (-A), L \oplus \mathcal{O}_A) \;\;\; {\rm and} \; \;\; H^1(\mathcal{N}_{\widetilde{\Gamma}/F}) \cong {\rm Ext}^1 (N (-A), L \oplus \mathcal{O}_A) \end{equation} Finally, if $\widetilde{\Gamma} \sim H - f_D$, then one easily checks that \begin{equation}\label{eq:isom2}
|\mathcal{O}_F(\widetilde{\Gamma})| \cong \mathbb{P}(H^0(\mathcal{F} (-D))). \end{equation}
\end{remark}
\begin{definition}\label{def:ass0} $\widetilde{\Gamma} \in {\rm Div}^{1,\delta}_F$ is said to be:
\noindent
(a) {\em linearly isolated (li)} if $\dim(|\mathcal{O}_F(\widetilde{\Gamma} )|) =0$,
\noindent (b) {\em algebraically isolated (ai)} if $\dim({\rm Div}^{1,\delta}_F) =0$.
\end{definition}
\begin{remark}\label{rem:linisol} (1) If $\widetilde{\Gamma}$ is ai, then it is also li but the converse is false (cf. e.g. Example \ref{ex:contro}, Corollary \ref{C.F.3b}).
\noindent (2) When ${\rm Div}^{1,\delta}_F$ is of pure dimension, a sufficient condition for $\widetilde{\Gamma}$ to be ai is $h^0(\mathcal{N}_{\widetilde{\Gamma}/F}) = 0$ (cf. e.g. Theorem \ref{C.F.VdG}, Corollary \ref{C.F.1c} and \S\,\ref{i=1} below). \end{remark}
\subsection{The Segre-invariant}\label{ss:S}
\begin{definition}\label{def:seginv} The {\em Segre invariant} of $\mathcal{F}$ is defined as $$s(\mathcal{F}) := \deg(\mathcal{F}) - 2 ({\rm max} \; \{\deg (N) \}),$$where the maximum is taken among all sub-line bundles $N$ of $\mathcal{F}$ (cf.\ e.g.\ \cite{LN}). The bundle $\mathcal{F}$ is {\em stable} [resp. {\em semistable}], if $s(\mathcal{F})>0$ [resp. if $s(\mathcal{F})\geqslant 0$].
Equivalently, $\mathcal{F}$ is stable [resp. semistable] if for every sub-line bundle $N\subset \mathcal{F}$ one has $\mu(N)<\mu(\mathcal{F})$ [resp. $\mu(N)\leqslant\mu(\mathcal{F})$], where
$\mu(\mathcal E)=\deg (\mathcal E)/{\rm rk}(\mathcal E)$ is the \emph{slope} of a vector bundle $\mathcal E$.
\end{definition}Note that, for any $A \in {\rm Pic}(C)$, one has \begin{equation}\label{eq:seginv} s(\mathcal{F}) = s(\mathcal{F} \otimes A). \end{equation}
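Indeed, \eqref{eq:seginv} follows at once from the definition: the sub-line bundles of $\mathcal{F} \otimes A$ are exactly those of the form $M \otimes A$, with $M \subset \mathcal{F}$ a sub-line bundle, so $$s(\mathcal{F} \otimes A) = \deg(\mathcal{F}) + 2\deg(A) - 2\,{\rm max} \; \{\deg(M) + \deg(A)\} = s(\mathcal{F}).$$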
\begin{remark}\label{rem:seginv} From \eqref{eq:Ciro410}, $s(\mathcal{F})$ coincides with the minimum self-intersection of sections of $F$. In particular, if $\Gamma \in {\rm Div}^{1,\delta}_F$ is a section s.t. $\Gamma^2 = s(\mathcal{F})$, then $s(\mathcal{F}) = 2 \delta - d$ and $\Gamma$ is a section of {\em minimal degree} of $F$, i.e. for any section $\Gamma' \subset F$ one has $\deg(\Gamma') \geqslant \deg(\Gamma)$. \end{remark}
We recall the following fundamental result.
\begin{proposition}\label{prop:Nagata} Let $C$ be of genus $g \geqslant 1$ and let $\mathcal{F}$ be indecomposable. Then, $2-2g \leqslant s(\mathcal{F}) \leqslant g$. \end{proposition}
\begin{proof} The lower-bound follows from $\mathcal{F}$ being indecomposable (see e.g. \cite[V,\;Thm.\;2.12(b)]{Ha}). The upper-bound is Nagata's Theorem (see \cite{Na}). \end{proof}
\subsection{Special scrolls unisecants}\label{ss:SU}
In the paper we will be mainly concerned with the speciality of unisecants of a (necessarily special) scroll $F$.
\begin{definition}\label{def:lndirec} For $\widetilde{\Gamma} \in {\rm Div}_F$, we set $\mathcal{O}_{\widetilde{\Gamma}} (1) := \mathcal{O}_F(1) \otimes \mathcal{O}_{\widetilde{\Gamma}}$. The {\em speciality} of $\widetilde{\Gamma}$ is $i(\widetilde{\Gamma}) := h^1(\mathcal{O}_{\widetilde{\Gamma}} (1))$. $\widetilde{\Gamma}$ is {\em special} if $i(\widetilde{\Gamma}) >0$. \end{definition}
If $\widetilde{\Gamma}$ is given by \eqref{eq:Fund2}, then by \eqref{eq:isom2} one has
$\widetilde{\Gamma} \in |\mathcal{O}_F(1) \otimes \rho^*(N^{\vee}(A))|$. Applying $\rho_*$ to the exact sequence $$0 \to \mathcal{O}_F(1) \otimes \mathcal{O}_F(-\widetilde{\Gamma})\to \mathcal{O}_F(1) \to \mathcal{O}_{\widetilde{\Gamma}}(1) \to 0$$and using $\rho_*( \mathcal{O}_F(1) \otimes \mathcal{O}_F(-\widetilde{\Gamma})) \cong N ( - A)$, $R^1\rho_*(\rho^*(N(-A))) = 0$, we get \begin{equation}\label{eq:iLa} i(\widetilde{\Gamma}) = h^1(L \oplus \mathcal{O}_{A}) = h^1(L) = i(\Gamma), \end{equation}where $\Gamma$ is the unique section contained in $\widetilde{\Gamma}$.
The following examples show that, in general, speciality is not constant either in linear systems or in algebraic families.
\begin{example}\label{ex:1} Take $g=3$, $i=1$ and $d=9= 4g-3$. There are smooth, linearly normal, special scrolls $S \subset \mathbb{P}^5$ of degree $9$, speciality $1$, sectional genus $3$ with general moduli containing a unique
special section $\Gamma$ which is a genus $3$ canonical curve (cf. \cite[Thm.\;6.1]{CCFMsp}). Moreover, $\Gamma$ is the unique section of minimal degree $4$ (cf. also \cite{Seg}). There are lines $f_1, \ldots, f_5$ of the ruling, such that $\widetilde{\Gamma} := \Gamma + f_1 + \ldots + f_5 \in |H|$, where $H$ is the hyperplane section of $S$. These curves $\widetilde{\Gamma}$ vary in a sub-linear system of dimension $2$ contained in $|H|$, whose movable part is the complete linear system $|f_1 + \cdots + f_5|$. The curves $\widetilde{\Gamma}$ as above are the only special unisecants in $|H|$. \end{example}
\begin{example}\label{ex:contro} Let $C$ be a non-hyperelliptic curve of genus $g \geqslant 3$, $d = 3g-4$ and $N \in {\rm Pic}^{g-2}(C)$ general. $N$ is non-effective with $h^1(N) = 1$. Consider ${\rm Ext}^1(\omega_C, N)$. It has dimension $2g-1$ and its general point gives rise to a rank-two vector bundle $\mathcal{F}$ of degree $d$, fitting in an exact sequence like \eqref{eq:Fund}, with $L = \omega_C$. By generality, the coboundary map $\partial : H^0(\omega_C) \to H^1(N) \cong \mathbb{C}$ is surjective (cf. Corollary \ref{cor:mainext1} below); therefore $i(\mathcal{F})=1$. Since $\mathcal{F}$ is of rank-two with $\det(\mathcal{F}) = \omega_C \otimes N$, by Riemann-Roch one has $h^0(\mathcal{F} \otimes N^{\vee}) =1$. From \eqref{eq:isom2}, the canonical section $\Gamma \subset F$, corresponding to $\mathcal{F} \to\!\! \to \omega_C$, is li. From \eqref{eq:Ciro410}, $\mathcal{N}_{\Gamma/F} \cong \omega_C \otimes N^{\vee}$, hence $h^0(\mathcal{N}_{\Gamma/F}) = 1$ and $h^1(\mathcal{N}_{\Gamma/F}) = 0$. Let $\mathcal D$ be the irreducible, one-dimensional component of the Hilbert scheme containing the point corresponding to $\Gamma$ (which is a smooth point of the Hilbert scheme). Therefore $\mathcal D$ is an algebraic (non-linear) family whose general member is a li section. As a consequence of Proposition \ref{prop:lem4} below, $\Gamma$ is the only special section in $\mathcal D$. In particular, if all curves in $\mathcal D$ are irreducible, then $\Gamma$ is the only special curve in $\mathcal D$ (see Lemma \ref{lem:ovviolin}).
Note that $\mathcal{F}$ is indecomposable. Indeed, assume $\mathcal{F} = A \oplus B$, with $A$, $B$ line bundles. Since $h^0(\mathcal{F} \otimes N^{\vee}) =1$, we may assume $h^0(A-N) = 1$ and $h^0(B-N) = 0$. By the genericity of $N$, $A-N$ and $B-N$ are both general of their degrees. Therefore $\deg(A-N) = g$, hence $\deg(A) = 2g-2$ and $\deg(B) = g-2$. The image of $A$ in the surjection $\mathcal{F} \to \!\!\to \omega_C$ is zero, otherwise $A=\omega_C$ hence $B =N$ which is impossible, because $h^0(B-N) = 0$. Then we would have an injection $A \hookrightarrow N$ which is impossible by degree reasons. \end{example}
Since ${\rm Div}^{1,\delta}_F$ is a Quot-scheme, there is the universal quotient $\mathcal Q_{1,\delta} \to {\rm Div}^{1,\delta}_F$. Taking ${\rm Proj} (\mathcal Q_{1,\delta}) \stackrel{p}{\to} {\rm Div}^{1,\delta}_F$, we can consider
\begin{equation}\label{eq:aga}
\mathcal S_F^{1,\delta} := \{\widetilde{\Gamma} \in {\rm Div}^{1,\delta}_F \;\; | \;\; R^1p_*(\mathcal{O}_{\mathbb{P}(\mathcal Q_{1,\delta})}(1))_{\widetilde{\Gamma}} \neq 0\} \; \;\; {\rm and} \; \;\; a_F(\delta) := \dim (\mathcal S_F^{1,\delta}), \end{equation} i.e. $\mathcal S_F^{1,\delta}$ is the support of $R^1p_*(\mathcal{O}_{\mathbb{P}(\mathcal Q_{1,\delta})}(1))$. It parametrizes degree $\delta$, special unisecants of $F$.
\begin{definition}\label{def:ass1} Let $\widetilde{\Gamma}$ be a special unisecant of $F$. Assume $\widetilde{\Gamma} \in \mathfrak{F}$, where $\mathfrak{F} \subseteq {\rm Div}^{1,\delta}_F$ is a subscheme.
\noindent $\bullet$ We will say that $\widetilde{\Gamma}$ is:
\noindent (i) {\em specially unique (su)} in $\mathfrak{F}$, if $\widetilde{\Gamma}$ is the only special unisecant in $\mathfrak{F}$, or
\noindent (ii) {\em specially isolated (si)} in $\mathfrak{F}$, if $\dim_{\widetilde{\Gamma}} \left(\mathcal S_F^{1,\delta} \cap \mathfrak{F} \right) = 0$.
\noindent $\bullet$ In particular:
\noindent
(a) when $\mathfrak{F} = |\mathcal{O}_F(\widetilde{\Gamma})|$, $\widetilde{\Gamma}$ is said to be {\em linearly specially unique (lsu)} in case (i) and {\em linearly specially isolated (lsi)} in case (ii);
\noindent (b) when $ \mathfrak{F} = {\rm Div}^{1,\delta}_F$, $\widetilde{\Gamma}$ is said to be {\em algebraically specially unique (asu)} in case (i) and {\em algebraically specially isolated (asi)} in case (ii).
\noindent $\bullet$ When a section $\Gamma \subset F$ is asi, we will say that $\mathcal{F}$ is {\em rigidly specially presented (rsp)} as $\mathcal{F} \to \!\! \to L$ or by the sequence \eqref{eq:Fund} corresponding to $\Gamma$. When $\Gamma$ is ai (cf. Def. \ref{def:ass0}), we will say that $\mathcal{F}$ is {\em rigidly presented (rp)} via $ \mathcal{F} \to \!\! \to L$ or \eqref{eq:Fund}. \end{definition}
For examples, see \S\,\ref{S:BND} below.
\begin{lemma}\label{lem:ovviolin} Let $\Gamma \subset F$ be a section corresponding to a sequence as in \eqref{eq:Fund}. A section $\Gamma'$, corresponding to $\mathcal{F} \to\!\!\to L'$, is s.t. $\Gamma \sim \Gamma'$ if and only if $L \cong L'$. In particular
\noindent (a) $i(\Gamma) = i(\Gamma')$;
\noindent (b) $\Gamma$ is lsu if and only if it is lsi if and only if it is li. \end{lemma}
\begin{proof} The first assertion follows from \eqref{eq:isom2}. Then, (a) and (b) are both clear. \end{proof}
\begin{proposition}\label{prop:lem4} Let $F$ be indecomposable and let $\Gamma \in \mathfrak{F} \subseteq \mathcal{S}^{1,\delta}_F$ be a section, where $\mathfrak{F}$ is an irreducible, projective scheme of dimension $k$. Assume:
\noindent (a) $k \geqslant 1$, if $\mathfrak{F}$ is a linear system;
\noindent (b) either $k \geqslant 2$, or $k=1$ and $\mathfrak{F}$ with base points, if $\mathfrak{F}$ is not linear.
Then, $\mathfrak{F}$ contains reducible unisecants $ \widetilde{\Gamma} $ with \begin{equation}\label{eq:speciality} i(\widetilde{\Gamma}) \geqslant i(\Gamma). \end{equation} \end{proposition}
\begin{proof} If $k \geqslant 2$, let $t$ be the unique integer such that $0 \leqslant k':= k-2t \leqslant 1$. Let $f_1, \ldots, f_t$ be $t$ general $\rho$-fibres of $F$. Since $k' \geqslant 0$, by imposing on the curves in $\mathfrak{F}$ the condition of containing fixed general pairs of points on $f_1, \ldots, f_t$, we see that $$\mathfrak{F}' := \mathfrak{F} \left(- \sum_{i=1}^t f_i\right) \subset \mathfrak{F}$$is non-empty, all of its components have dimension $k'$, and they all parametrize unisecants $\Gamma' \sim_{alg} \Gamma - \sum_{i=1}^t f_i$. Then $\mathfrak{F}$ contains reducible elements $\widetilde{\Gamma}$, and they verify \eqref{eq:speciality} by upper-semicontinuity. This proves the assertion when $k \geqslant 2$.
So we are left with the case $k=1$. Assume first that $\mathfrak{F}$ is a linear pencil. Since $\mathfrak{F} \subseteq |\mathcal{O}_F(\Gamma)|$, from the exact sequence $0 \to \mathcal{O}_F \to \mathcal{O}_F(\Gamma) \to \mathcal{O}_{\Gamma}(\Gamma) \to 0$, the line bundle $\mathcal{O}_{\Gamma}(\Gamma)$ is effective so $\Gamma^2 \geqslant 0$. Let ${\rm Bs}(\mathfrak{F})$ be the base locus of $\mathfrak{F}$. If $\Gamma^2 >0$, take $p \in {\rm Bs}(\mathfrak{F})$. We can clearly split off the fibre through $p$ with one condition, thus proving the result.
If $\Gamma^2 = 0$, $\mathfrak{F}$ is a base-point-free pencil. So $F$ contains two disjoint sections and this implies that $\mathcal{F}$ is decomposable, a contradiction.
Finally, if $\mathfrak{F}$ is non-linear, then ${\rm Bs} (\mathfrak{F}) \neq \emptyset$ and we can argue as in the linear case with $\Gamma^2 >0$. \end{proof}
\section{Brill-Noether loci} \label{S:MS}
As usual, $U_C(d)$ denotes the moduli space of (semi)stable, degree $d$, rank-two vector bundles on $C$. The subset $U_C^s(d)\subseteq U_C(d)$ parametrizing (isomorphism classes of) stable bundles is an open subset. The points in ${U_C}^{ss}(d):=U_C(d)\setminus U_C^s(d)$ correspond to (S-equivalence classes of) \emph{strictly semistable} bundles (cf. e.g. \cite{Ram,Ses}).
\begin{proposition}\label{prop:sstabh1} Let $C$ be a smooth curve of genus $g \geqslant 1$ and let $d$ be an integer.
\noindent (i) If $d \geqslant 4g-3$, then for any $[\mathcal{F}] \in U_C(d)$, one has $i(\mathcal{F}) = 0$.
\noindent (ii) If $g \geqslant 2$ and $d \geqslant 2g-2$, for $[\mathcal{F}] \in U_C(d)$ general, one has $i(\mathcal{F}) = 0$.
\end{proposition} \begin{proof} For (i), see \cite[Lemma 5.2]{New}; for (ii) see \cite[p. 100]{Lau} or \cite[Rem. 3]{Ballico}. \end{proof}
Thus, from Proposition \ref{prop:sstabh1}, Serre duality and invariance of stability under operations like tensoring with a line bundle or passing to the dual bundle, for $g \geqslant 2$ it makes sense to consider the proper sub-loci of $U_C(d)$ parametrizing classes $[\mathcal{F}]$ such that $i(\mathcal{F}) > 0$ for \begin{equation}\label{eq:congd} 2g-2 \leqslant d \leqslant 4g-4. \end{equation}
\begin{definition}\label{def:BNloci} Given non-negative integers $d$, $g$ and $ i $, we set \begin{equation}\label{eq:ki} k_i = d - 2 g + 2 + i. \end{equation}Given a curve $C$ of genus $g$, we define
$$B^{k_i}_C(d) := \left\{ [\mathcal{F}] \in U_C(d) \, | \, h^0(\mathcal{F}) \geqslant k_i \right\} =
\left\{ [\mathcal{F}] \in U_C(d) \, | \, h^1(\mathcal{F}) \geqslant i \right\}$$which we call the $k_i^{th}$--{\em Brill-Noether locus}. \end{definition}
\begin{remark}\label{rem:BNloci} The Brill-Noether loci $B^{k_i}_C(d)$ have a natural structure of closed subschemes of $U_C(d)$:
\noindent $(a)$ When $d$ is odd, $U_C(d)=U^s_C(d)$; then $U_C(d)$ is a fine moduli space and the existence of a universal bundle on $C \times U_C(d)$ allows one to construct $B^{k_i}_C(d)$ as the degeneracy locus of a morphism between suitable vector bundles on $U_C(d)$ (see, e.g. \cite{GT, M}). Accordingly, the {\em expected dimension} of $B^{k_i}_C(d)$ is ${\rm max} \{ -1, \ \rho_d^{k_i}\}$, where \begin{equation}\label{eq:bn} \rho_d^{k_i}:= 4g - 3 - i k_i \end{equation} is the \emph{Brill-Noether number}. If $\emptyset \neq B^{k_i}_C(d) \neq U_C(d)$, then $B^{k_i+1}_C(d) \subseteq {\rm Sing}(B^{k_i}_C(d))$. Since any $[\mathcal{F}] \in B^{k_i}_C(d)$ is stable, it is a smooth point of $U_C(d)$ and $T_{[\mathcal{F}]}(U_C(d))$ can be identified with $H^0( \omega_C \otimes \mathcal{F} \otimes \mathcal{F}^{\vee})^{\vee}$. If $[\mathcal{F}] \in B^{k_i}_C(d) \setminus B^{k_i+1}_C(d)$,
the tangent space to $B^{k_i}_C(d)$ at $[\mathcal{F}]$ is the annihilator of the image of the cup--product, {\em Petri map} of $\mathcal{F}$ (see, e.g. \cite{TB00}) \begin{equation}\label{eq:petrimap} P_{\mathcal{F}} : H^0(C, \mathcal{F}) \otimes H^0(C, \omega_C \otimes \mathcal{F}^{\vee}) \longrightarrow H^0(C, \omega_C \otimes \mathcal{F} \otimes \mathcal{F}^{\vee}). \end{equation} If $[\mathcal{F}] \in B^{k_i}_C(d) \setminus B^{k_i+1}_C(d)$, then $$\rho_d^{k_i} = h^1(C, \mathcal{F} \otimes \mathcal{F}^{\vee}) - h^0(C, \mathcal{F}) h^1(C, \mathcal{F})$$and $B^{k_i}_C(d)$ is non--singular, of the expected dimension at $[\mathcal{F}]$ if and only if $P_{\mathcal{F}}$ is injective.
\noindent $(b)$ When $d$ is even, $U_C(d)$ is not a fine moduli space (because $U^{ss}_C(d)\neq \emptyset$). There is a suitable open, non-empty subscheme $ \mathcal Q^{ss} \subset \mathcal Q$ of a certain Quot scheme $\mathcal Q$ defining $U_C(d)$ via the GIT-quotient sequence $$0 \to PGL(q) \to \mathcal Q^{ss} \stackrel{\pi}{\longrightarrow} U_C(d) \to 0$$(cf. e.g. \cite{Tha} for details); one can define $B^{k_i}_C(d)$ as the image via $\pi$ of the degeneracy locus of a morphism between suitable vector bundles on $\mathcal Q^{ss}$. The fibres of $\pi$ over strictly semistable bundle classes are not isomorphic to $PGL(q)$. It may happen that a component $\mathcal B$ of a Brill--Noether locus is entirely contained in $U_C^{ss}(d)$; in this case the lower bound $\rho_d^{k_i}$ for the expected dimension of $\mathcal B$ is no longer valid (cf. Corollary \ref{C.F.2} below and \cite[Remark 7.4]{BGN}). The lower bound $ \rho_d^{k_i}$ is still valid if $\mathcal B\cap U^s_C(d)\neq \emptyset$. \end{remark}
\begin{definition}\label{def:regsup} Assume $B_C^{k_i}(d) \neq \emptyset$. A component $ \mathcal B \subseteq B_C^{k_i}(d)$ such that $\mathcal B \cap U_C^s(d) \neq \emptyset$ will be called {\em regular}, if $\dim(\mathcal B) = \rho_d^{k_i}$, {\em superabundant}, if $\dim(\mathcal B) > \rho_d^{k_i}$. \end{definition}
\section{(Semi)stable vector bundles and extensions}\label{S:BNLN}
In this section we discuss how to easily produce special, (semi)stable vector bundles $\mathcal{F}$ as extensions of line bundles $L$ and $N$ as in \eqref{eq:Fund}. This is the same as considering vector bundles $\mathcal{F}$ with a sub-line bundle $N$ s.t. $\mathcal{F} \otimes N^{\vee}$ has a nowhere vanishing section.
If $g =2$, in the range \eqref{eq:congd} one has bundles $\mathcal{F}$ with slope $1 \leqslant \mu(\mathcal{F}) \leqslant 2$ on a hyperelliptic curve, which have been studied in \cite{BGN, BMNO, M1, M4}. Thus, we will assume $C$ non-hyperelliptic of genus $g \geqslant 3$, with $d$ as in \eqref{eq:congd}.
\subsection{Extensions and a result of Lange--Narasimhan}\label{ss:LN} Let $\delta \leqslant d$ be a positive integer. Consider $L \in {\rm Pic}^{\delta}(C)$ and $N \in {\rm Pic}^{d-\delta}(C)$; ${\rm Ext}^1(L,N)$ parametrizes (strong) isomorphism classes of extensions (cf. \cite[p. 31]{Frie}). Any $u \in {\rm Ext}^1(L,N)$ gives rise to a degree $d$, rank-two vector bundle $\mathcal{F}=\mathcal{F}_u$ as in \eqref{eq:Fund}. In order to get $\mathcal{F}_u$ (semi)stable, a necessary condition is (cf. Remark \ref{rem:seginv}) \begin{equation}\label{eq:assumptions} 2\delta-d \geqslant 0. \end{equation}Therefore, by the Riemann-Roch theorem, we have \begin{equation}\label{eq:lem3note} m:= \dim({\rm Ext}^1(L,N)) = \left\{\begin{array}{ll}
2\delta - d + g - 1 & {\rm if} \; L \not\cong N \\ g & {\rm if} \; L \cong N. \end{array} \right. \end{equation}
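Indeed, \eqref{eq:lem3note} follows from the identification ${\rm Ext}^1(L,N) \cong H^1(N \otimes L^{\vee})$: one has $\deg(N \otimes L^{\vee}) = d - 2\delta \leqslant 0$ by \eqref{eq:assumptions}, so $h^0(N \otimes L^{\vee}) = 0$ when $L \not\cong N$, whereas $h^0(N \otimes L^{\vee}) = 1$ when $L \cong N$; hence, by Riemann-Roch, $$m = h^1(N \otimes L^{\vee}) = h^0(N \otimes L^{\vee}) - \deg(N \otimes L^{\vee}) + g - 1.$$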
\begin{lemma}\label{lem:1e2note} Let $\mathcal{F}$ be a (semi)stable, special, rank-two vector bundle on $C$ of degree $d \geqslant 2g-2$. Then $\mathcal{F} = \mathcal{F}_u$, for a special, effective line bundle $L$ on $C$, $N = \det(\mathcal{F}) \otimes L^{\vee}$ as in \eqref{eq:Fund} and $u \in {\rm Ext}^1(L,N)$. \end{lemma} \begin{proof} By Serre duality, $i(\mathcal{F}) >0 $ gives a non-zero morphism $\mathcal{F} \stackrel{\sigma^{\vee}}{\to} \omega_C$. The line bundle $L:= {\rm Im}(\sigma^{\vee}) \subseteq \omega_C$ is special. Set $\delta= \deg(L)$. Since $\mathcal{F}$ is (semi)stable, then \eqref{eq:assumptions} holds hence $\delta \geqslant \frac{d}{2} \geqslant g-1$, therefore $\chi(L) \geqslant 0$, so $L$ is effective. \end{proof}
\begin{remark}\label{rem:rigid} In the setting of Lemma \ref{lem:1e2note}, consider the scroll $F= \mathbb{P}(\mathcal{F})$ and let $\Gamma \subset F$ be the section corresponding to $L$, with $L \in {\rm Pic}^{\delta}(C)$ a special, effective quotient of minimal degree of $\mathcal{F}$. Suppose $\mathcal{F}$ is indecomposable. From Proposition \ref{prop:lem4}, one has \begin{equation}\label{eq:aiutoE} \Gamma \subset F \; \; {\rm is \; li\; with} \; \; a_{F}(\delta) \leqslant 1, \end{equation} where $a_{F}(\delta)$ is as in \eqref{eq:aga}. Then $\mathcal{F}$ is rsp via $L$ if $a_F(\delta)=0$, and even rp if $\Gamma$ is ai. \end{remark} Fix $L$ special and effective of degree $\delta$, and $N$ of degree $d-\delta$, where $d$ satisfies \eqref{eq:congd} and \eqref{eq:assumptions} (so $\deg(L) \geqslant \deg(N) \geqslant 0$). We fix once and for all the following notation: \begin{equation}\label{eq:exthyp1} \begin{array}{lcl} j:= h^1(L) > 0, & & \ell := h^0(L) = \delta-g+1 +j >0,\\ r := h^1(N) \geqslant 0,& & n := h^0(N)= d- \delta-g+1 +r\geqslant 0. \end{array} \end{equation} Any $u \in {\rm Ext^1}(L,N)$ gives rise to the following diagram \begin{equation}\label{eq:exthyp} \begin{array}{crcccccl} (u): & 0 \to & N & \to & \mathcal{F}_u & \to & L & \to 0 \\ {\rm deg} & & d-\delta & & d & & \delta & \\ h^0 & & n & & & & \ell & \\ h^1 & & r & & & & j & \end{array} \end{equation}Let $$\partial_u : H^0(L) \to H^1(N)$$be the {\em coboundary map} (simply denoted by $\partial$ if there is no danger of confusion) and let $\cork(\partial_u) := \dim({\rm Coker}(\partial_u))$. Then $$i(\mathcal{F}_u) = j + \cork(\partial_u).$$As for (semi)stability of $\mathcal{F}_u$, information can be obtained by using \cite[Prop. 1.1]{LN} (see Proposition \ref{prop:LN} below) and the projection technique from \cite{CF} (see Theorem \ref{C.F.VdG} below).
For the reader's convenience, we recall \cite[Prop. 1.1]{LN} (cf. also \cite[\S's\,3,\,4]{Be}, \cite[\S\,3]{BeFe}). Take $u \in {\rm Ext}^1(L,N)$. Tensor by $N^{\vee}$ and consider $ \mathcal{E}_e:= \mathcal{F}_u \otimes N^{\vee}$, which is an extension $$(e) : \;\;\; 0 \to \mathcal{O}_C \to \mathcal{E}_e \to L \otimes N^{\vee} \to 0,$$where $e \in {\rm Ext}^1(L\otimes N^{\vee}, \mathcal{O}_C)$. Then $\deg(\mathcal{E}_e) = 2\delta - d$. From \eqref{eq:seginv}, one has $s(\mathcal{F}_u) = s(\mathcal{E}_e)$ and, by Serre duality, $u$ and $e$ define the same point in \begin{equation}\label{eq:PP} \mathbb{P}:= \mathbb{P}(H^0(K_C + L-N)^{\vee}). \end{equation}
\begin{remark}\label{rem:sech}
If $\deg(L-N)=2\delta - d \geqslant 2$, then $\dim(\mathbb{P}) \geqslant g \geqslant 3 $ and the map $\varphi := \varphi_{|K_C+L-N|} : C \to \mathbb{P}$ is a morphism. Set $X := \varphi(C) \subset \mathbb{P}$. For any positive integer $h$ denote by ${\rm Sec}_h(X)$ the {\em $h^{th}$-secant variety of $X$}, defined as the closure of the union of all linear subspaces $\langle \varphi (D) \rangle \subset \mathbb{P}$, for $D$ a general effective divisor of degree $h$ on $C$. One has \begin{equation}\label{eq:sech} \dim({\rm Sec}_h(X)) = {\rm min} \{\dim(\mathbb{P}), 2h-1\}. \end{equation} \end{remark}
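As a purely illustrative numerical instance of \eqref{eq:sech} (the values are hypothetical): take $g=6$ and $2\delta-d=4$. Since $\deg(K_C+L-N) = 2g-2+2\delta-d > 2g-2$, the line bundle $K_C+L-N$ is non-special, so $\dim(\mathbb{P}) = 2\delta-d+g-2 = 8$. Then $\dim({\rm Sec}_3(X)) = {\rm min}\{8, 5\} = 5$, whereas ${\rm Sec}_5(X) = \mathbb{P}$, since ${\rm min}\{8, 9\} = 8$.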
\begin{proposition} \label{prop:LN} (see \cite[Prop. 1.1]{LN}) Let $2\delta-d \geqslant 2$. For any integer $\sigma$ with $$\sigma \equiv 2\delta - d \pmod{2} \quad {\rm and} \quad 4 + d - 2 \delta \leqslant \sigma \leqslant 2\delta -d,$$one has
$$s(\mathcal{E}_e) \geqslant \sigma \Leftrightarrow e \notin {\rm Sec}_{\frac{1}{2}(2\delta - d + \sigma -2)}(X).$$ \end{proposition}
We end this section by recording the following technical result, which will be needed later on.
\begin{lemma}\label{lem:technical} Let $L$ and $N$ be as in \eqref{eq:exthyp} and such that $h^0(N-L) = 0$. Take $u, u' \in {\rm Ext}^1(L,N)$ such that:
\noindent (i) the corresponding rank-two vector bundles $\mathcal{F}_u$ and $\mathcal{F}_{u'}$ are stable, and
\noindent (ii) there exists an isomorphism $\varphi$ \[ \begin{array}{ccccccl} 0 \to & N & \stackrel{\iota_1}{\longrightarrow} & \mathcal{F}_{u'} & \to & L & \to 0 \\
& & & \downarrow^{\varphi} & & & \\ 0 \to & N & \stackrel{\iota_2}{\longrightarrow} & \mathcal{F}_u & \to & L & \to 0 \end{array} \]such that $\varphi \circ \iota_1 = \lambda \iota_2$, for some $\lambda \in \mathbb{C}^*$.
Then $\mathcal{F}_u = \mathcal{F}_{u'}$, i.e. $u, u'$ are proportional vectors in ${\rm Ext}^1(L,N)$. \end{lemma}
\begin{proof} The proof is similar to that in \cite[Lemma 1]{Ma3} so it can be left to the reader.
\end{proof}
\section{Stable bundles as extensions of line bundles}\label{sec:constr}
In this section we start with line bundles $L$ and $N$ on $C$ as in \S\,\ref{S:BNLN}, and consider rank--two vector bundles $\mathcal{F}$ on $C$ arising as extensions as in \eqref {eq:Fund}. We give conditions under which $\mathcal{F}$ is stable, with a given speciality, and $L$ is a quotient with suitable minimality properties.
\subsection{The case $N$ non-special}\label{S:Nns} Here we focus on the case $N$ non-special. Notation is as in \eqref{eq:exthyp1} and \eqref{eq:exthyp}; in particular, $r=0$ by the non-speciality assumption.
\begin{theorem}\label{LN} Let $j \geqslant 1$ and $g \geqslant 3$ be integers. Let $C$ be of genus $g$ with general moduli. Let $\delta$ and $d$ be integers such that \begin{equation}\label{eq:ln2} \delta \leqslant g-1 + \frac{g}{j} -j, \end{equation} \begin{equation}\label{eq:ln3} \delta + g- 1 \leqslant d \leqslant 2\delta -2. \end{equation}Let $N \in {\rm Pic}^{d-\delta}(C)$ be general and $L \in W^{\delta-g+j}_{\delta}(C)$ be a smooth point (i.e. $L$ of speciality $j$ and so $h^0(L) = \ell$ as in \eqref{eq:exthyp1}). Then, for $u \in {\rm Ext}^1(L,N)$ general, the corresponding rank-two vector bundle $\mathcal{F}_u$ is indecomposable with:
\noindent (i) $s(\mathcal{F}_u) = 2 \delta -d$. In particular $2 \leqslant s(\mathcal{F}_u) \leqslant \frac{g}{j}-j$, hence $\mathcal{F}_u$ is also stable;
\noindent (ii) $i(\mathcal{F}_u) = j$;
\noindent (iii) $L$ is a quotient of minimal degree of $\mathcal{F}_u$.
\end{theorem}
\begin{remark}\label{rem:lnb} (1) Inequality \eqref{eq:ln2} is equivalent to $\rho(g, \ell -1, \delta) \geqslant 0$, where $\rho(g, \ell-1,\delta)$ is the Brill-Noether number as in \S\ref{sec:P} for line bundles $L \in {\rm Pic}^{\delta}(C)$ of speciality $j$; this is a necessary and sufficient condition for $C$ of genus $g$ with general moduli to admit such a line bundle $L$ (cf.\,\cite{ACGH}). As for \eqref{eq:ln3}, the upper-bound on $d$ reads $2 \delta - d \geqslant 2$, which is required to apply Proposition \ref{prop:LN}, whereas the lower-bound is equivalent to $\deg(N) = d-\delta \geqslant g-1$, hence the general $N \in {\rm Pic}^{d-\delta}(C)$, for $C$ general, is non-special.
\noindent (2) Notice that \eqref{eq:ln3} implies $\delta \geqslant g+1$. Thus, together with \eqref{eq:ln2}, one has $g \geqslant j^2+ 2j$ i.e. $1 \leqslant j \leqslant \sqrt{g+1} -1$. \end{remark}
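To fix ideas, consider the following hypothetical numerical instance of Theorem \ref{LN} (the values are illustrative only). For $g=8$ and $j=2$, the condition $g \geqslant j^2+2j$ of Remark \ref{rem:lnb}-(2) holds with equality; \eqref{eq:ln2} gives $\delta \leqslant 8 - 1 + \frac{8}{2} - 2 = 9$ and, taking $\delta = 9$, \eqref{eq:ln3} forces $d = 16$. Theorem \ref{LN} then yields $i(\mathcal{F}_u) = 2$ and $s(\mathcal{F}_u) = 2\delta - d = 2$, which saturates the bound $\frac{g}{j}-j = 2$ in (i); moreover $\deg(N) = d-\delta = 7 = g-1$, consistently with the non-speciality of the general $N$.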
\begin{proof}[Proof of Theorem \ref{LN}] By Remark \ref{rem:lnb}-(1) $N$ is non-special, so (ii) holds. Moreover, always by Remark \ref{rem:lnb}-(1), we can use Proposition \ref{prop:LN} with $\sigma := 2\delta-d$. One has $\dim(\mathbb{P}) = 2\delta-d+g-2$. From \eqref{eq:sech}, we have$$\dim\left({\rm Sec}_{\frac{1}{2}(2(2\delta-d)-2)}(X)\right) = {\rm min} \{ \dim(\mathbb{P}), 2(2\delta-d) -3 \} = 2(2\delta-d) -3,$$since
\eqref{eq:ln2} and \eqref{eq:ln3} yield $2\delta-d \leqslant \frac{g}{j}-j$, which implies $2(2\delta-d) -3 < 2\delta-d+g-2 $. In particular, $ \dim\left({\rm Sec}_{\frac{1}{2}(2(2\delta-d)-2)}(X)\right) < \dim(\mathbb{P})$. From Proposition \ref{prop:LN}, $u \in {\rm Ext}^1(L,N)$ general is such that $s(\mathcal{F}_u) \geqslant 2\delta-d$. If $\Gamma$ is the section corresponding to $\mathcal{F}_u \to \!\! \to L$, one has $\Gamma^2 = 2 \delta - d$ as in \eqref{eq:Ciro410}, so $s(\mathcal{F}_u) = 2 \delta - d$, hence (i) and (iii) are also proved. \end{proof}
\begin{corollary}\label{cor:LN} Assumptions as in Theorem \ref{LN}. Let $\Gamma$ be the section of $F_u = \mathbb{P}(\mathcal{F}_u)$ corresponding to $\mathcal{F}_u \to \!\! \to L$. Then $\Gamma$ is of minimal degree. In particular, $\Gamma$ is li and $0 \leqslant \dim({\rm Div}_{F_u}^{1,\delta}) \leqslant 1$. \end{corollary}
\begin{proof} The fact that $\Gamma$ is of minimal degree follows from (i) of Theorem \ref{LN}. The rest is a consequence of minimality and Proposition \ref{prop:lem4}. \end{proof}
\begin{theorem}\label{C.F.VdG} Let $j \geqslant 1$ and $g \geqslant 3$ be integers. Let $C$ be of genus $g$ with general moduli. Let $\delta$ and $d$ be integers such that \eqref {eq:ln2} holds and moreover
\begin{equation}\label{eq:cf3} \delta + g+3 \leqslant d \leqslant 2\delta. \end{equation}Let $N \in {\rm Pic}^{d-\delta}(C)$ and $L \in W^{\delta-g+j}_{\delta}(C)$ be general points. Then, for any $u \in {\rm Ext}^1(L,N)$, the corresponding rank-two vector bundle $\mathcal{F}_u$ is very ample, with $i(\mathcal{F}_u) = j$. Moreover, for $u \in {\rm Ext}^1(L,N)$ general
\noindent (i) $L$ is the quotient of minimal degree of $\mathcal{F}_u$, thus $$0 \leqslant s(\mathcal{F}_u) = 2\delta -d \leqslant \frac{g-4j-j^2}{j},$$ so
$\mathcal{F}_u$ is stable when $2\delta-d >0$, strictly semistable when $d=2\delta$;
\noindent (ii) if $\Gamma$ is the section of $F_u = \mathbb{P}(\mathcal{F}_u)$ corresponding to $\mathcal{F}_u \to \!\! \to L$, then ${\rm Div}^{1,\delta}_{F_u} = \left\{ \Gamma\right\}$ and $\mathcal{F}_u$ is rp via $L$. \end{theorem}
\begin{proof} The proof is as in \cite[Theorem 2.1]{CF}, and it works also in the case $d= 2\delta$, not considered there. \end{proof}
\begin{remark}\label{rem:C.F.VdG} (1) The lower-bound in \eqref{eq:cf3} reads $\deg(N) = d-\delta \geqslant g+3$, hence $N \in {\rm Pic}^{d-\delta}(C)$ general is non-special and $\delta \geqslant g+3$.
\noindent (2) From \eqref{eq:ln2} and $\delta \geqslant g+3$, one has $g \geqslant j^2+ 4j$ i.e. $1 \leqslant j \leqslant \sqrt{g+4} -2$.
\noindent (3) The bounds on $d$ in \eqref{eq:ln3} and \eqref{eq:cf3} are in general slightly worse than those in Theorem \ref{thm:TB} (cf. \cite[Remark 1.7]{CF}). For $j$ close to the upper-bound (see Remark \ref{rem:lnb}-(2), respectively (2) above), the difference is of the order of $\sqrt{g}$. However our approach gives the additional information of the description of
vector bundles in irreducible components of $B_C^{k_j}(d)$ (see also \S\,\ref{S:BND}) simply as line bundle extensions, with no use of either limit linear series or degeneration techniques. \end{remark}
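For concreteness, here is a hypothetical numerical instance of Theorem \ref{C.F.VdG} (the values are illustrative only). Take $g=12$ and $j=1$, so that $j \leqslant \sqrt{g+4}-2 = 2$; \eqref{eq:ln2} gives $\delta \leqslant 22$ and, choosing $\delta = 16 \geqslant g+3$, \eqref{eq:cf3} gives $31 \leqslant d \leqslant 32$. For $d=31$ one gets $s(\mathcal{F}_u) = 2\delta-d = 1 \leqslant \frac{g-4j-j^2}{j} = 7$, hence $\mathcal{F}_u$ is stable; for $d = 32 = 2\delta$, $\mathcal{F}_u$ is strictly semistable.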
\subsection{The case $N$ special}\label{S:Ns}
In this section $N \in {\rm Pic}^{d-\delta}(C)$ is assumed to be special. Hence, in \eqref{eq:exthyp1}, we have $\ell, j, r >0$ whereas $n \geqslant 0$ (according to the fact that $N$ is either effective or not). For any integer $t >0$, consider \begin{equation}\label{eq:wt}
\mathcal W_t := \left\{ u \in {\rm Ext}^1(L,N) \ | \ \cork(\partial_u) \geqslant t \right\} \subseteq {\rm Ext}^1(L,N), \end{equation}which has a natural structure of determinantal scheme; as such, $\mathcal W_t$ has {\em expected codimension} \begin{equation}\label{eq:clrt} c(\ell,r,t) := t\, (\ell - r + t). \end{equation}As in \eqref{eq:lem3note}, put $m = \dim( {\rm Ext}^1(L,N))$. Thus, if $m >0$ and $\mathcal W_t \neq \emptyset$, then any irreducible component $\Lambda_t \subseteq \mathcal W_t$ is such that \begin{equation}\label{eq:expdimwt} \dim(\Lambda_t) \geqslant {\rm min} \left\{m , m- c(\ell,r,t) \right\}, \end{equation}where the right-hand-side is the {\em expected dimension} of $\mathcal W_t$. These loci have also been considered in \cite[\S\,2,\,3]{BeFe}, \cite[\S\;6,\,8]{Muk} for low genus and for vector bundles with canonical determinant.
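As a purely illustrative computation with \eqref{eq:clrt} and \eqref{eq:expdimwt} (the values are hypothetical): for $\ell = 4$, $r = 2$ and $m = 10$, one has $c(\ell,r,1) = 1\cdot(4-2+1) = 3$, so the expected dimension of $\mathcal W_1$ is $10 - 3 = 7$, whereas $c(\ell,r,2) = 2\cdot(4-2+2) = 8$ gives expected dimension $2$ for $\mathcal W_2$.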
\begin{remark}\label{rem:cirow} One has $\dim({\rm Ker}(\partial_u)) = 1 + \dim(\langle \Gamma\rangle)$, where $\Gamma$ is the section corresponding to the quotient $\mathcal{F} \to \!\!\to L$. Note that $\dim(\langle \Gamma\rangle) = -1$ if and only if $\dim({\rm Ker}(\partial_u)) = 0$, i.e. $H^0(\mathcal{F}) = H^0(N)$. This happens if and only if $\Gamma$ is
a fixed component of $|\mathcal{O}_F(1)|$, i.e. if and only if the image of the map $\Phi_{|\mathcal{O}_F(1)|}$ has dimension smaller than $2$. If $d \geqslant 2g-2$ and this happens, then one must have $n \geqslant i \geqslant j \geqslant 1$, where $i = i(\mathcal{F})$. \end{remark}
\begin{remark}\label{rem:wt} The coboundary map $\partial_u$ can be interpreted in terms of multiplication maps among global sections of suitable line bundles on $C$. Indeed, consider $r \geqslant t$ and $\ell \geqslant {\rm max} \{1,r-t\}$. Denote by $$\cup : H^0(L) \otimes H^1(N - L) \longrightarrow H^1(N),$$the cup-product: for any $u \in H^1(N - L) \cong {\rm Ext}^1(L,N)$, one has $\partial_u (-) = - \cup u.$ By Serre duality,
studying $\cup$ is equivalent to studying the multiplication map \begin{equation}\label{eq:mu} \mu := \mu_{L,K_C-N} : H^0(L) \otimes H^0(K_C-N) \to H^0(K_C+L-N) \end{equation} (when $N=L$, $\mu$ coincides with $\mu_0(L)$ as in \eqref{eq:Petrilb}). For any subspace $W \subseteq H^0(K_C-N)$, we set \begin{equation}\label{eq:muW}
\mu_W:= \mu|_W : H^0(L) \otimes W \to H^0(K_C+L-N). \end{equation}Imposing $\cork(\partial_u) \geqslant t$ is equivalent to $$V_t := {\rm Im}(\partial_u)^{\perp} \subset H^0(K_C - N) $$ having dimension at least $t$. Therefore $$\mathcal W_t = \left\{u \in H^0(K_C+L-N)^{\vee} \mid \exists\,V_t \subseteq H^0(K_C-N),\; {\rm s.t.} \; \dim(V_t) \geqslant t \;{\rm and}\; {\rm Im}(\mu_{V_t}) \subseteq \{u = 0 \} \right\},$$where $\{u = 0 \} \subset H^0(K+L-N)$ is the hyperplane defined by $u \in H^0(K_C+L-N)^{\vee}$. \end{remark}
\begin{theorem}\label{thm:mainext1} Let $C$ be a smooth curve of genus $g \geqslant 3$. Let $$r\geqslant 1, \; \ell \geqslant {\rm max} \{1, r-1\}, \; m \geqslant \ell +1$$be integers as in \eqref{eq:lem3note}, \eqref{eq:exthyp1}. Then (cf.\,\eqref{eq:wt},\eqref{eq:clrt}):
\noindent (i) $c(\ell, r,1) =\ell-r+1\geqslant 0$;
\noindent (ii) $\mathcal W_1$ is irreducible of the expected dimension $\dim (\mathcal W_1) = m - c(\ell, r,1) \geqslant r$. In particular $\mathcal W_1={\rm Ext}^1(L,N)$ if and only if $\ell=r-1$. \end{theorem}
\begin{proof} Part (i) and $ m - c(\ell, r,1)\geqslant r$ are obvious. Let us prove (ii). Since $\ell, r \geqslant 1$, both $L$ and $K_C-N$ are effective. One has an inclusion $$0 \to L \to K_C+L-N$$obtained by tensoring by $L$ the injection $\mathcal{O}_C \hookrightarrow K_C-N$ induced by a non-zero section of $K_C-N$. Thus, for any $V_1 \in \mathbb{P}(H^0(K_C-N))$, $\mu_{V_1}$ as in \eqref{eq:muW} is injective. Since $m \geqslant \ell + 1$, one has $\dim({\rm Im} (\mu_{V_1})) = \ell \leqslant m-1 $, i.e. ${\rm Im}(\mu_{V_1})$ is contained in some hyperplane of $H^0(K_C+L-N)$. Let $$\Sigma:= \left\{\sigma := H^0(L) \otimes V_1^{\sigma} \subseteq H^0(L) \otimes H^0(K_C-N) \; \mid \; V_1^{\sigma} \in \mathbb{P}(H^0(K_C-N)) \right\}.$$Thus $\Sigma \cong \mathbb{P}(H^0(K_C-N))$, so it is irreducible of dimension $r-1$. Since $\mathbb{P}(H^0(K_C+L-N)^{\vee}) = \mathbb{P}$ as in \eqref{eq:PP}, we can define the incidence variety
$$\mathcal{J} := \left\{(\sigma, \pi) \in \Sigma \times \mathbb{P} \; | \; \mu_{V_1^{\sigma}}(\sigma) \subseteq \pi \right\} \subset \Sigma \times \mathbb{P}.$$ Let $$ \Sigma \stackrel{pr_1}{\longleftarrow} \mathcal{J} \stackrel{pr_2}{\longrightarrow} \mathbb{P}$$be the two projections. As we saw, $pr_1$ is surjective. In particular $\mathcal{J} \neq \emptyset$ and, for any $\sigma \in \Sigma$,
$$pr_1^{-1} (\sigma) = \left\{ \pi \in \mathbb{P}\,|\,\mu_{V_1^{\sigma}}(\sigma) \subseteq \pi\right\} \cong
|\mathcal{I}_{\widehat{\sigma}\vert \mathbb{P}^{\vee}} (1)|,$$where $\widehat{\sigma} := \mathbb{P}(\mu_{V_1^{\sigma}}(\sigma))$. Since $\dim(\widehat{\sigma}) = \ell -1$, then $\dim(pr_1^{-1} (\sigma)) = m -1 - \ell \geqslant 0$.
This shows that $\mathcal{J}$ is irreducible and $\dim(\mathcal{J}) = m-1 - c(\ell,r,1) \leqslant m-1$. Then, $\widehat{\mathcal W}_1 := \mathbb{P}(\mathcal W_1) = \overline{pr_2(\mathcal{J})}$. Recalling \eqref{eq:expdimwt}, it follows that $\mathcal W_1$ is non-empty and irreducible, of the expected dimension $m - c(\ell,r,1)$. \end{proof}
\begin{corollary}\label{cor:mainext1} Assumptions as in Theorem \ref{thm:mainext1}. If $\ell \geqslant r$, then $\mathcal W_1 \varsubsetneq {\rm Ext}^1(L,N)$ and
\noindent (i) for $u \in {\rm Ext}^1(L,N)$ general, $\partial_u$ is surjective, in which case the corresponding bundle $\mathcal{F}_u$ is special with $i(\mathcal{F}_u) = h^1(L) = j$;
\noindent (ii) for $v \in \mathcal W_1$ general, $\cork(\partial_v)=1$, hence the corresponding bundle $\mathcal{F}_v$ is special with $i(\mathcal{F}_v) = j+1$. \end{corollary}
\subsubsection{{\bf Surjective coboundary}}\label{ss:uepi} Take $ 0 \neq u \in {\rm Ext}^1(L,N)$ and assume $\partial_u$ is surjective (from Corollary \ref{cor:mainext1}, this happens e.g. when $\ell \geqslant r$, $m \geqslant \ell +1$ and $u$ general).
\begin{theorem}\label{uepi} Let $j \geqslant 1$ and $g \geqslant 3$ be integers. Let $C$ be of genus $g$ with general moduli. Let $\delta$ and $d$ be integers such that \eqref {eq:ln2} holds and moreover
\begin{equation}\label{eq:uepi3} 2g-2 \leqslant d \leqslant 2 \delta - g. \end{equation}Let $L \in W^{\delta-g+j}_{\delta}(C)$ be a smooth point and $N \in {\rm Pic}^{d - \delta}(C)$ be any point. Then, for $u \in {\rm Ext}^1(L,N)$ general, the corresponding bundle $\mathcal{F}_u$ is indecomposable with
\noindent (i) speciality $i(\mathcal{F}_u) = j = h^1(L)$.
\noindent (ii) $s(\mathcal{F}_u) = g-\epsilon$, where $\epsilon \in \{0,1\}$ is such that $\epsilon \equiv d+g \pmod{2}$. In particular, $\mathcal{F}_u$ is stable.
\noindent (iii) The minimal degree of a quotient of $\mathcal{F}_u$ is $\frac{d+g-\epsilon}{2}$ and $1- \epsilon \leqslant \dim\left({\rm Div}^{1,\frac{d+g-\epsilon}{2}}_{F_u} \right) \leqslant 1$;
\noindent (iv) $L$ is a minimal degree quotient of $\mathcal{F}_u$ if and only if $\epsilon =0$ and $ d = 2 \delta-g$.
\end{theorem}
\begin{remark}\label{rem:uepib} (1) From \eqref{eq:uepi3} we get $\delta \geqslant \frac{3}{2}g-1$ hence from \eqref{eq:ln2}, $j \leqslant \frac{\sqrt{g^2 + 16 g} - g}{4}$.
\noindent (2) Since $L$ is special, then $\delta \leqslant 2g-2$. Therefore, the upper-bound in \eqref{eq:uepi3} implies
$d - \delta \leqslant \delta - g \leqslant g -2$, i.e., any $N \in {\rm Pic}^{d-\delta}(C)$ is special too.
\noindent (3) The inequalities \eqref {eq:ln2}, \eqref{eq:uepi3} imply $\ell \geqslant r $, $m \geqslant \ell +1$ as in the assumptions of Corollary \ref{cor:mainext1}. Indeed:
\noindent $\bullet$ $\ell \geqslant r$ reads \begin{equation}\label{eq:aiuto} \delta \geqslant g-1 + r - j. \end{equation} Since $r = \delta -d + g -1 + n$, then \eqref{eq:aiuto} is equivalent to $d \geqslant 2g-2 -j +n $. Thus \eqref{eq:aiuto} holds by \eqref{eq:uepi3}, if $n \leqslant 1$.
If $n \geqslant 2$, $C$ with general moduli implies $r \leqslant \frac{g}{n} \leqslant \frac{g}{2}$ so $g-1 + r - j \leqslant \frac{3}{2} g - 1 - j$ and \eqref{eq:aiuto} holds because $\delta \geqslant \frac{3}{2} g -1$.
\noindent $\bullet$ We have $d - \delta \leqslant \delta -g < \delta$ by \eqref{eq:uepi3}. So from \eqref{eq:lem3note} we have $m = 2 \delta - d + g - 1$. Thus $m \geqslant \ell +1$ reads $d \leqslant \delta + 2g- 3 - j$. By \eqref{eq:uepi3}, to prove this it suffices to prove $2 \delta - g \leqslant \delta + 2g- 3 - j$. This in turn is a consequence of \eqref {eq:ln2}.
\noindent (4) Notice that, under the hypotheses of Theorem \ref{uepi}, when $\epsilon =1$, $L$ is not of minimal degree: from (iii), one would have $d=2\delta - g + 1$ which is out of range in \eqref{eq:uepi3}. Indeed, if $d = 2\delta - g + 1$ and e.g. $\delta = 2g-2$, then $d=3g-3$, $\deg(N) = d - \delta = g-1$, thus if $N$ is general, it is non-special, which is a case already considered in Theorem \ref{LN}. From (1) above, to allow minimality for $L$ also for $\epsilon =1$, one should replace \eqref{eq:ln2}, \eqref{eq:uepi3} in the statement of Theorem \ref{uepi} with the more cumbersome conditions $\delta \leqslant {\rm min} \{ g-1 + \frac{g}{j}- j, 2g-3\}$ and $d \leqslant 2\delta - g + \epsilon$, respectively. \end{remark}
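For the reader's convenience, we spell out the elementary computation behind the bound on $j$ in Remark \ref{rem:uepib}-(1). From \eqref{eq:uepi3}, $2g-2 \leqslant d \leqslant 2\delta - g$ gives $\delta \geqslant \frac{3}{2}g - 1$; combining with \eqref{eq:ln2}, $$\frac{3}{2}g - 1 \leqslant g - 1 + \frac{g}{j} - j \;\Longleftrightarrow\; \frac{g}{2} \leqslant \frac{g}{j} - j \;\Longleftrightarrow\; j^2 + \frac{g}{2}\,j - g \leqslant 0,$$and solving the quadratic inequality in $j$ gives $j \leqslant \frac{1}{2}\left(\sqrt{\frac{g^2}{4} + 4g} - \frac{g}{2}\right) = \frac{\sqrt{g^2+16g} - g}{4}$.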
\begin{proof}[Proof of Theorem \ref{uepi}] By Remark \ref{rem:uepib}-(2), $N$ is special. Moreover, by Remark \ref{rem:uepib}-(3) and Corollary \ref{cor:mainext1}, for $u \in {\rm Ext}^1(L,N)$ general, $\partial_u$ is surjective. Hence (i) holds.
From the upper-bound in \eqref{eq:uepi3} and $g \geqslant 3$, we can apply Proposition \ref{prop:LN} with the choice $\sigma := g-\epsilon$, i.e., the maximum for which $\sigma \equiv 2 \delta - d \pmod{2}$, $\sigma \leqslant 2 \delta - d$ and one has a strict inclusion $${\rm Sec}_{\frac{1}{2}(2\delta-d + g-\epsilon -2)}(X) \subset \mathbb{P},$$from \eqref{eq:lem3note} and \eqref{eq:sech}.
If $\epsilon =0$, (ii) follows from Propositions \ref{prop:Nagata}, \ref{prop:LN}. Let $\Gamma \subset F_u$ be a section of minimal degree, which we denote by $m_0$. Then, $\Gamma^2= 2 m_0 - d = g$ (cf. \eqref{eq:Ciro410} and Remark \ref{rem:seginv}). In particular, $m_0 = \frac{d+g}{2}$ and$$1 = \Gamma^2 - g + 1 \leqslant \chi(\mathcal{N}_{\Gamma/F_u}) \leqslant \dim\left({\rm Div}^{1,\frac{d+g}{2}}_{F_u} \right) \leqslant 1,$$where the upper-bound holds by the minimality condition (cf. proof of Proposition \ref{prop:lem4}). This proves (iii) in this case.
When $\epsilon=1$, by Propositions \ref{prop:Nagata}, \ref{prop:LN}, one has $g-1 \leqslant s(\mathcal{F}_u) \leqslant g$ and, by parity, the leftmost equality holds. As above, part (iii) holds also for $\epsilon =1$.
Finally, $L$ is a minimal degree quotient if and only if $2 \delta = d+g- \epsilon$ which by \eqref{eq:uepi3} is only possible for $\epsilon =0$, proving (iv) (cf. Remark \ref{rem:uepib}-(4)).
\end{proof}
\subsubsection{{\bf Non-surjective coboundary}}\label{ss:unepi} From Corollary \ref{cor:mainext1}, when $\ell \geqslant r$ and $m \geqslant \ell+1$, for $v \in \mathcal W_1\subsetneq {\rm Ext}^1(L,N)$ general, one has $\cork(\partial_v)=1$.
\begin{definition}\label{def:goodc} Let $\ell \geqslant r \geqslant t \geqslant 1$ be integers. Assume
\noindent (1) there exists an irreducible component $\Lambda_t \subseteq \mathcal W_t$ with the expected dimension $\dim(\Lambda_t) = m - c(\ell, r,t)$;
\noindent (2) for $v \in \Lambda_t$ general, $\cork(\partial_v)=t$.
\noindent Any such $\Lambda_t$ is called a {\em good component} of $\mathcal W_t$. \end{definition}
\noindent By Theorem \ref{thm:mainext1}, $\Lambda_1 = \mathcal W_1$ is good. In \S\,\ref{ssec:suffcon} we shall give sufficient conditions for goodness when $t \geqslant 2$.
With notation as in \eqref{eq:PP}, for any $t \geqslant 1$ and any good component $\Lambda_t$, we set \begin{equation}\label{eq:Lahat} \widehat{\Lambda}_t := \mathbb{P}(\Lambda_t) \subset \mathbb{P} \end{equation}(cf. notation as in the proof of Theorem \ref{thm:mainext1} for $\widehat{\mathcal W}_1$).
\begin{theorem}\label{unepi} Let $g \geqslant 3$, $d \geqslant 2g-2$, $j \geqslant 1$, $\ell \geqslant r \geqslant t \geqslant 1$ be integers. Take $\epsilon \in \{0,1\}$ such that $$d + g - c(\ell,r,t) \equiv \epsilon \pmod{2}.$$Take $\delta = \ell + g - 1 - j$ and assume
\begin{equation}\label{eq:unepi0} g \geqslant c(\ell, r, t) + \epsilon, \end{equation} \begin{equation}\label{eq:unepi1} g + r - j - 1 \leqslant \delta \leqslant g-1 + \frac{g}{j} - j, \end{equation} \begin{equation}\label{eq:unepi2} 2 \delta - d \geqslant {\rm max} \{ 2, g - c(\ell, r, t)- \epsilon\}. \end{equation} Let $C$ be a curve of genus $g$ with general moduli. Let $L \in W^{\ell-1}_{\delta}(C)$ be a smooth point, $N \in {\rm Pic}^{d-\delta}(C)$ of speciality $r$. Then, for any good component $\Lambda_t$ and for $v \in \Lambda_t$ general, the corresponding bundle $\mathcal{F}_v$ is
\noindent (i) special with $i(\mathcal{F}_v) = j+t = h^1(L) +t$;
\noindent (ii) $s(\mathcal{F}_v) \geqslant g - c(\ell,r,t) - \epsilon \geqslant 0$; in particular, when $ g - c(\ell,r,t)>0$, $\mathcal{F}_v$ is stable, hence indecomposable.
\noindent (iii) If $2 \delta = d + g - c(\ell,r,t) - \epsilon$, then $L$ is a quotient of minimal degree of $\mathcal{F}_v$. \end{theorem}
\begin{remark}\label{rem:unepi} (1) As before, the upper-bound on $\delta$ in \eqref{eq:unepi1} is equivalent to $\rho(g , \ell -1 , \delta) \geqslant 0$, whereas the lower bound is equivalent to $\ell \geqslant r$.
\noindent (2) Condition $2\delta -d \geqslant 2$ in \eqref{eq:unepi2} is as in Proposition \ref{prop:LN}; the other condition in \eqref{eq:unepi2} will be clear by reading the proof of Theorem \ref{unepi}.
\noindent (3) Arguing as in Remark \ref{rem:uepib}-(3), one shows that $m \geqslant \ell +1$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{unepi}] By \eqref{eq:unepi2} we may apply Proposition \ref{prop:LN} choosing $\sigma := g-c(\ell,r,t) - \epsilon$, which is non-negative by \eqref{eq:unepi0}. This is the maximum integer such that $\sigma \leqslant 2 \delta - d$, $\sigma \equiv 2\delta - d \pmod{2}$ and such that $\dim(\widehat{\Lambda}_t) > \dim ({\rm Sec}_{\frac{1}{2}(2\delta-d + \sigma -2)}(X))$, as it follows from \eqref{eq:unepi2}. Then, for $v \in \widehat{\Lambda}_t$ general, the assertions hold. \end{proof}
\begin{remark}\label{unepine} When $N$ of degree $d-\delta$ is non-effective of speciality $r$, by Riemann-Roch $ r = \delta - d + g-1$. Thus, by \eqref{eq:ki}, one simply has $c (\ell, r, t) = t\;(d-2g+2+j+t)$, i.e. $c (\ell, r, t) = t\;k_{j+t}$, and conditions in Theorem \ref{unepi} can be replaced by more explicit numerical conditions on $d$ and $\delta$ $$\delta \leqslant g-1 + \frac{g}{j} - j \; \; \; {\rm and} \; \;\; d \leqslant g-1 - t + {\rm min} \left\{ \delta, g - 1 - j + \frac{g - \epsilon}{t}\right\},$$where $\epsilon \in \{0,1\}$ such that $d + g - t k_{t+j} \equiv \epsilon \pmod{2}$, and $$2 \delta - d \geqslant {\rm max} \{ 2, g - t k_{t+j} - \epsilon\}.$$ \end{remark}
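For the reader's convenience, the identity $c(\ell,r,t) = t\,k_{j+t}$ used in Remark \ref{unepine} is a direct substitution: for $N$ non-effective one has $n = 0$, so \eqref{eq:exthyp1} gives $\ell = \delta - g + 1 + j$ and $r = \delta - d + g - 1$, whence $$\ell - r + t = (\delta - g + 1 + j) - (\delta - d + g - 1) + t = d - 2g + 2 + j + t,$$so that $c(\ell,r,t) = t\,(\ell - r + t) = t\,(d - 2g + 2 + j + t)$.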
\begin{remark}\label{unepie} When, on the other hand, $N$, of degree $d-\delta$, is effective and of speciality $r$, one gets $d \geqslant \delta + g - r$. Moreover, since its Brill-Noether number $\rho(g, n-1, d - \delta)$ has to be non-negative (by the generality of $C$), one gets $d \leqslant \delta + g - 1 + \frac{g}{r}- r$. Thus, conditions in Theorem \ref{unepi} can be replaced by the numerical conditions $$g -1 -j +r \leqslant \delta \leqslant g-1 - j + {\rm min} \left\{\frac{g}{j}, \frac{g - \epsilon}{t} + r - t-1 \right\},$$ $$g + \delta - r \leqslant d \leqslant \delta + g - 1 + \frac{g}{r} - r \;\;\; {\rm and} \;\;\; 2 \delta - d \geqslant {\rm max} \{ 2, g - c(\ell,r,t) - \epsilon\},$$with $\epsilon$ and $c(\ell,r,t)$ as in Theorem \ref{unepi}. \end{remark}
\subsection{Existence of good components}\label{ssec:suffcon} Recalling the definitions in \eqref{eq:lem3note}, \eqref{eq:exthyp1}, \eqref{eq:wt}, \eqref{eq:clrt} and in Remark \ref{rem:wt}, one has
\begin{theorem}\label{thm:mainextt} Let $C$ be a smooth curve of genus $g \geqslant 3$. Take integers $m$, $\ell$, $r$ and $t$ and assume $\ell \geqslant r \geqslant t \geqslant 2$. Take any integer $\eta$ such that \begin{equation}\label{eq:extt0} 0 \leqslant \eta \leqslant {\rm min}\{ t (r-t), \ell (t-1) \} \; \;\; {\rm and}\;\; \; m \geqslant \ell t+1-\eta. \end{equation} Suppose, in addition, that the subvariety $\Sigma_{\eta} \subseteq \mathbb{G}(t, H^0(K_C-N)) := \mathbb{G}$, parametrizing $V_t \in \mathbb{G}$ s.t. \begin{equation}\label{eq:aiutomamma} \dim({\rm Ker}(\mu_{V_t})) \geqslant \eta, \end{equation}has pure codimension $\eta$ in $\mathbb{G}$ and that, for the general point $V_t$ in any irreducible component of $\Sigma_{\eta}$, equality holds in \eqref{eq:aiutomamma}. Then:
\noindent (i) $c(\ell,r,t) >0$;
\noindent (ii) $\emptyset \neq \mathcal W_t \subset \mathcal W_1 \subset {\rm Ext}^1(L,N)$, where all the inclusions are strict;
\noindent (iii) there exists a good component $\Lambda_t $ of $ \mathcal W_t$.
\end{theorem}
\begin{proof} By \eqref{eq:extt0} one has $m \geqslant \ell +1$; moreover $\ell \geqslant r$ by assumption. Thus, from Corollary \ref{cor:mainext1}, $\emptyset \neq \mathcal W_1 \subset {\rm Ext}^1(L,N)$ and the inclusion is strict. By definition $\mathcal W_t \subset \mathcal W_1$, where the inclusion is strict by Corollary \ref{cor:mainext1}. An argument similar to that in the proof of Theorem \ref{thm:mainext1} applies. \end{proof}
\begin{corollary}\label{cor:mainextt} Let $C$ be of genus $g \geqslant 3$ with general moduli. Let $j \geqslant 1$, $\ell \geqslant r \geqslant t \geqslant 2$ and $m \geqslant \ell t+1$ be integers. Let $L \in W^{\delta-g+j}_{\delta}(C)$ be a smooth point.
If $j \geqslant t$, $N \in {\rm Pic}^{d-\delta}(C)$ is general and $\ell \leqslant 2\delta - d$, then for $V_t \in \mathbb{G} (t, H^0(K_C-N))$ general, $\mu_{V_t}$ is injective. In particular, there exists a good component $ \emptyset \neq \Lambda_t \subseteq \mathcal W_t$. \end{corollary}
\begin{proof} Set $h := 2\delta -d$ and let $N_0:= L-D \in {\rm Pic}^{d-\delta}(C)$, with $D=\sum_{i=1}^h p_i \in C^{(h)}$ general. Since $0 < \ell \leqslant h$, we have $h^0(N_0) =0$. Thus, $N \in {\rm Pic}^{d-\delta}(C)$ general is also non-effective, so $h^1(N) = h^1(N_0)$.
Let $\mu$ be as in \eqref{eq:mu}. To prove injectivity of $\mu_{V_t}$ as in \eqref{eq:muW} for $N$ and $V_t$ general, it suffices to prove a similar condition for $$\mu^0: H^0(L) \otimes H^0\left(K_C - L + D \right) \to H^0\left(K_C + D \right).$$Consider $$W:= H^0(K_C-L) \subset H^0\left(K_C-L+ D\right).$$ One has $\dim(W) = j$. We have the diagram \[\begin{array}{rcl} H^0(L) \otimes W \cong H^0(L) \otimes H^0(K_C-L) & \stackrel{ \mu^0_{W} }{\longrightarrow} & H^0\left(K_C+D \right) \\ \searrow^{\mu_0(L)} & & \nearrow_{\iota} \\ & H^0(K_C) & \end{array}
\]where $\mu^0_{W} = \mu^0|_{H^0(L) \otimes W}$, $\mu_0(L)$ is as in \eqref{eq:Petrilb} and $\iota$ is the obvious inclusion.
By the Gieseker--Petri theorem, $\mu_0(L)$ is injective. By composition with $\iota$, $\mu^0_{W}$ is also injective. Since $t \leqslant j$ by assumption, for any $\widetilde{V}_t \in \mathbb{G} (t,W)$, $\mu^0_{\widetilde{V}_t }$ is also injective. By semicontinuity, for $N \in {\rm Pic}^{d-\delta}(C)$ and $V_t \in \mathbb{G} (t, H^0(K_C-N))$ general, $\mu_{V_t}$ is injective. Then, one can conclude by using Theorem \ref{thm:mainextt}. \end{proof}
\section{Parameter spaces}\label{S:PSBN}
Let $C$ be a projective curve of genus $g$ with general moduli. Given a sequence as in \eqref{eq:exthyp}, for brevity we set $$\rho(L):= \rho(g, \ell-1, \delta) \; \;\; {\rm and} \; \;\; \rho(N) := \rho(g, n-1, d-\delta),$$ $$ W_L:= \left\{ \begin{array}{cc} W_{\delta}^{\ell-1}(C) & {\rm if} \; \rho(L)>0\\ \{L\} & {\rm if} \; \rho(L)=0 \end{array} \right. \;\;\; {\rm and}\;\;\;
W_N:= \left\{ \begin{array}{cc} W_{d-\delta}^{n-1}(C) & {\rm if} \; \rho(N)>0\\ \{N\} & {\rm if} \; \rho(N)=0 \end{array} \right..$$Both $W_L$ and $W_N$ are irreducible, generically smooth, of dimensions $\rho(L)$ and $\rho(N)$, respectively
(cf. \cite[p. 214]{ACGH}). Let $$\mathcal{N} \to C \times {\rm Pic}^{d-\delta}(C) \;\; {\rm and} \;\; \mathcal{L} \to C \times {\rm Pic}^{\delta}(C)$$be Poincar\'e line-bundles. With an abuse of notation, we will denote by $\mathcal L$ (resp., by $\mathcal N$) also the restriction of the corresponding Poincar\'e line-bundle to the relevant Brill-Noether locus. Set$$\mathcal Y := {\rm Pic}^{d-\delta}(C) \times W_L \;\; \;\; {\rm and} \;\; \;\; \mathcal Z:= W_N \times W_L \subset \mathcal Y .$$They are both irreducible, of dimensions \begin{equation}\label{eq:yde1} \dim(\mathcal Y) = g + \rho(L) \;\;\; {\rm and} \; \;\; \dim(\mathcal Z) = \rho(N) + \rho(L). \end{equation}Consider the natural projections \[ \begin{array}{ccccc} C \times {\rm Pic}^{d-\delta}(C) & \stackrel{pr_{1,2}}{\longleftarrow} &\!\!\!C \times\mathcal Y & \stackrel{pr_{2,3}}{\longrightarrow} & \mathcal Y \\
& & \;\;\;\; \downarrow^{pr_{1,3}} & & \\
& &\!\! C \times W_L & & . \end{array} \]As in \cite[p. 164-179]{ACGH}, we define $$\mathfrak{E}_\delta := R^1(pr_{2,3})_* \left( pr_{1,2}^* (\mathcal N) \otimes pr_{1,3}^* (\mathcal L^{\vee}) \right),$$depending on the choices of $d$ and $\delta$. By \eqref{eq:lem3note}, when $2 \delta - d \geqslant 1$, $\mathfrak{E}_\delta$ is a vector bundle of rank $m = 2 \delta - d + g -1$ on $\mathcal Y$ whereas, when $d = 2 \delta$, $\mathfrak{E}_\delta$ is a vector bundle of rank $g-1$ on $\mathcal Y \setminus \Delta_{\mathcal Y}$, where
$\Delta_{\mathcal Y} = \{ (M, M) \; | \; M \in W_L\} \cong W_L $. We set \begin{equation}\label{eq:ude} \mathcal U := \left\{ \begin{array}{cc} \mathcal Y & {\rm if} \; 2 \delta - d \geqslant 1 \\ \mathcal Y \setminus \Delta_{\mathcal Y} & {\rm if} \; d = 2 \delta \end{array} \right. \;\;\;\;\;\; \;\;\;\;\;\; {\rm and} \;\;\;\;\;\;\;\;\;\;\;\; \mathbb{P}(\mathfrak{E}_\delta) \stackrel{\gamma}{\longrightarrow} \mathcal U, \end{equation}where $\gamma$ is the projective bundle morphism: the $\gamma$-fibre of $y = (N,L) \in \mathcal U$ is $\mathbb{P}(\Ext^1(L,N)) = \mathbb{P}$, as in \eqref{eq:PP}.
From \eqref{eq:lem3note} and \eqref{eq:yde1}, one has \begin{equation}\label{eq:yde2}
\dim(\mathbb{P}(\mathfrak{E}_\delta)) = g + \rho(L) + m-1 \;\;\; \;\;\; {\rm and} \;\;\;\;\;\; \dim(\mathbb{P}(\mathfrak{E}_\delta)|_{\mathcal{Z}}) = \rho(N)+ \rho(L) + m-1. \end{equation}Since (semi)stability is an open condition (cf. e.g. \cite[Prop. 6-c, p. 17]{Ses}), for any choice of integers $g$, $d$ and $\delta$ satisfying numerical conditions as in the theorems and corollaries proved in \S's\;\ref{S:Nns} and \ref{S:Ns}, there is an open, dense subset $\mathbb{P}(\mathfrak{E}_\delta)^0 \subseteq \mathbb{P}(\mathfrak{E}_\delta)$ and a morphism \begin{equation}\label{eq:pde} \pi_{d,\delta} : \mathbb{P}(\mathfrak{E}_\delta)^0 \rightarrow U_C(d). \end{equation} We set \begin{equation}\label{eq:Nudde} \mathcal{V}^{\delta,j}_{d} := {\rm Im} (\pi_{d,\delta}) \; \; \; {\rm and} \;\;\; \nu_d^{\delta,j} = \dim(\mathcal{V}^{\delta,j}_{d}). \end{equation}
\subsection{Non-special $N$}\label{ssec:Nns} We will put ourselves in the hypotheses either of Theorem \ref{LN} or of Theorem \ref{C.F.VdG}. In either case, $d - \delta \geqslant g-1$, so $N$ can be taken general in $\Pic^{d-\delta}(C)$ and $\mathcal{V}^{\delta,j}_{d} \subseteq B^{k_j}_C(d)$, by what has been proved above about (semi)stability.
\noindent \subsubsection{{\bf Case $2 \delta - d \geqslant 1$}} In this case, by what has been proved in Theorems \ref{LN} and \ref{C.F.VdG}, one has $\mathcal{V}^{\delta,j}_{d} \subseteq B^{k_j}_C(d) \cap U_C^s(d)$. Therefore any irreducible component of $B^{k_j}_C(d)$ intersected by $\mathcal{V}^{\delta,j}_{d}$ has dimension at least $\rho_d^{k_j}$ (cf. Remark \ref{rem:BNloci} and Definition \ref{def:regsup}).
\begin{proposition}\label{C.F.1} Assumptions as in Theorem \ref{C.F.VdG}, with $2 \delta - d \geqslant 1$. Then, for any integers $j,\,\delta,\,d$ therein, there exists an irreducible component $\mathcal{B} \subseteq B^{k_j}_C(d)$ such that:
\noindent (i) $\mathcal{V}^{\delta,j}_{d} \subseteq \mathcal{B}$;
\noindent (ii) $\mathcal{B}$ is regular and generically smooth;
\noindent (iii) for $[\mathcal{E}] \in \mathcal{B}$ general, $\mathcal{E}$ is stable, with $s(\mathcal{E}) \geqslant 2\delta -d$ and $i(\mathcal{E}) =j$. \end{proposition}
\begin{proof} Parts (i) and (iii) follow from Theorem \ref{C.F.VdG} (note that the Segre invariant is lower-semicontinuous; cf. also \cite[\S\;3]{LN}). Assertion (ii) follows from the fact that, for $[\mathcal{F}] \in \mathcal{V}^{\delta,j}_{d}$ general, the Petri map $P_{\mathcal{F}}$ is injective (cf. Remark \ref{rem:BNloci} and \cite[Lemma 2.1]{CF}). \end{proof}
\begin{lemma}\label{lem:claim1} In the hypotheses of Theorem \ref{C.F.VdG}, with $2 \delta - d \geqslant 1$, the morphism $\pi_{d,\delta}$ is generically injective. \end{lemma} \begin{proof} Let $[\mathcal{F}] \in \mathcal{V}^{\delta,j}_{d}$ be general; then $\mathcal{F} = \mathcal{F}_u$, for $u \in \Ext^1(L,N)$ and $y= (N,L) \in \mathcal{U}$ general. Then
$$\pi_{d,\delta}^{-1}([\mathcal{F}_u]) = \left\{ (N', L', u') \in \mathbb{P}(\mathfrak{E})^0 \, | \; \mathcal{F}_{u'} \cong \mathcal{F}_u \right\}.$$Assume by contradiction there exists $ (N', L', u') \neq (N,L,u)$ in $\pi_{d, \delta}^{-1} ([\mathcal{F}_u])$. Then $N\otimes L \cong N' \otimes L'$.
\noindent (1) If $L \cong L' \in W_L$ then $N \cong N' \in \Pic^{d-\delta}(C)$. Thus, $u, u' \in \mathbb{P}$. Let $\varphi : \mathcal{F}_{u'} {\to} \mathcal{F}_u$ be the isomorphism between the two bundles. Since $\mathcal{F}_u$ is stable, one has $u \neq u' \in \mathbb{P}$ (notation as in \eqref{eq:PP}), and we have the diagram \[ \begin{array}{ccccccl} 0 \to & N & \stackrel{\iota_1}{\longrightarrow} & \mathcal{F}_{u'} & \to & L & \to 0 \\
& & & \downarrow^{\varphi} & & & \\ 0 \to & N & \stackrel{\iota_2}{\longrightarrow} & \mathcal{F}_u & \to & L & \to 0. \end{array} \]The maps $\varphi \circ \iota_1$ and $\iota_2$ determine two non-zero sections $ s_1 \neq s_2 \in H^0(\mathcal{F}_u \otimes N^{\vee}) $. They are linearly dependent, since otherwise the section $\Gamma \subset F_u$, corresponding to $\mathcal{F}_u \to \!\! \to L$, would not be li (cf. \eqref{eq:isom2} and Theorem \ref{C.F.VdG}-(ii)). So $s_1 = \lambda s_2$, for some non-zero scalar $\lambda$. But then Lemma \ref{lem:technical} implies $u=u'$, a contradiction.
\noindent
(2) If $L \not\cong L' \in W_{L}$ (in particular, $\rho(L)>0$), the sections $\Gamma \neq \Gamma'$, corresponding respectively to $\mathcal{F}_u \to \!\!\to L$ and $\mathcal{F}_u \to \!\! \to L'$, would satisfy $\Gamma \sim_{alg} \Gamma'$ on $F_u$, contradicting Theorem \ref{C.F.VdG}-(ii).
\end{proof}
\begin{example}\label{ex:C.F.1} One can have loci $\mathcal{V}^{\delta,j}_{d}$ of the same dimension for different values of $\delta$. For instance, let $j =2$, $g \geqslant 18$ and $d=2g+9$ in Theorem \ref{C.F.VdG}. Then $g+5 \leqslant\delta \leqslant g+6$ are admissible values in \eqref{eq:ln2} and for both of them one has $2 \delta - d >0$. Now $\rho(g,7,g+5) = g-16 > g-18 = \rho(g,8,g+6) $; by \eqref{eq:Nudde} and Lemma \ref{lem:claim1} one has $\nu_{2g+9}^{g+5,2} = \nu_{2g+9}^{g+6,2} = 3g-17$. \end{example}
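For the reader's convenience, the common value $3g-17$ in Example \ref{ex:C.F.1} can be checked directly from \eqref{eq:yde2} and the generic injectivity of $\pi_{d,\delta}$: since $m = 2\delta - d + g - 1$, one has $m = g$ for $\delta = g+5$ and $m = g+2$ for $\delta = g+6$, whence $$\nu_{2g+9}^{g+5,2} = g + (g-16) + g - 1 = 3g-17 \;\;\;\; {\rm and} \;\;\;\; \nu_{2g+9}^{g+6,2} = g + (g-18) + (g+2) - 1 = 3g-17.$$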
\begin{remark}\label{rem:claim1} From \eqref{eq:bn}, \eqref{eq:Nudde} and Lemma \ref{lem:claim1}, for varieties $\mathcal{V}^{\delta,j}_{d}$ as in Proposition \ref{C.F.1} (i.e. defined by integers $d$, $\delta$ and $j$ as in Theorems \ref{LN}, \ref{C.F.VdG}, with $2\delta - d \geqslant 1$) one has \begin{equation}\label{eq:nudde2} \nu_d^{\delta,j} - \rho_d^{k_j} = d (j-1) - \delta (j -2) - (g-1) (j+1). \end{equation}
\noindent (1) For $j =1$, $\nu_d^{\delta,1} - \rho_d^{k_1} = \delta - 2g+2 \leqslant 0$, since $L$ is special, and equality holds if and only if $\delta$ reaches the upper-bound in \eqref{eq:ln2}.
\noindent (2) Otherwise, for $j \geqslant 2$, using the upper-bound in \eqref{eq:ln2} and the fact $d < 2 \delta$, from \eqref{eq:nudde2} one gets $$\nu_d^{\delta,j} - \rho_d^{k_j} < \delta j - g j - g + j + 1 \leqslant 1 - j^2 <0.$$ Thus, $\mathcal{V}^{\delta,j}_{d}$ can never be dense in a regular component of $B_C^{k_j}(d)$ unless $j=1$ and $\delta = 2g-2$. \end{remark}
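For completeness, the first inequality in Remark \ref{rem:claim1}-(2) is obtained from \eqref{eq:nudde2} by using $d < 2\delta$ together with $j-1 > 0$: $$\nu_d^{\delta,j} - \rho_d^{k_j} = d(j-1) - \delta(j-2) - (g-1)(j+1) < 2\delta(j-1) - \delta(j-2) - (g-1)(j+1) = \delta j - gj - g + j + 1;$$the second inequality then follows by plugging the upper-bound on $\delta$ from \eqref{eq:ln2} into the right-hand-side.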
\begin{corollary}\label{C.F.1b} Let $C$ be of genus $g \geqslant 5$ with general moduli. For any integer $d$ s.t. $3g+1 \leqslant d \leqslant 4g-5$, the variety $\mathcal V_{d}^{2g-2,1}$ is dense in a regular, generically smooth component $\mathcal B \subseteq B_C^{k_1}(d)$. Moreover:
\noindent (i) $[\mathcal{F}_u]\in \mathcal V_{d}^{2g-2,1}$ general is stable and comes from $u \in \Ext^1(\omega_C,N)$ general, with $N \in \Pic^{d - 2g+2}(C)$ general. In particular, $i(\mathcal{F}_u)=1$.
\noindent (ii) The minimal degree quotient of $\mathcal{F}_u$ is $\omega_C$, so $s(\mathcal{F}_u) = 4g-4-d >0$.
\noindent (iii) ${\rm Div}_{F_u}^{1,2g-2} = \{ \Gamma\}$, where $\Gamma$ is the section of $F_u = \mathbb{P}(\mathcal{F}_u)$ corresponding to $\mathcal{F}_u \to \!\!\! \to \omega_C$ (i.e. $\mathcal{F}_u$ is rp via $\omega_C$). \end{corollary}
\begin{proof} It follows from Theorem \ref{C.F.VdG}, with $2\delta - d \geqslant 1$ and $j=1$, from Proposition \ref{C.F.1} and from Remark \ref{rem:claim1}. \end{proof}
\begin{remark}\label{30.12l} Using Theorem \ref{LN} and Corollary \ref{cor:LN}, one can prove results similar to Proposition \ref{C.F.1} and Corollary \ref{C.F.1b} with slightly different numerical bounds. As in Remark \ref{rem:claim1}, $\mathcal{V}^{\delta,j}_{d}$ can never be dense in a regular component of $B_C^{k_j}(d)$, unless $\delta = 2g-2$ and $j = 1$. The numerical bounds in this case are $3g-3 \leqslant d \leqslant 4g-6$ with $g \geqslant 3$, hence the cases not already covered by Corollary \ref{C.F.1b} are $3g-3 \leqslant d \leqslant {\rm {min}} \{3g, 4g-6\}$. \end{remark}
\begin{corollary}\label{C.F.1c} Let $C$ be of genus $g \geqslant 6$ with general moduli. For any integer $d$ s.t. $3g-3 \leqslant d \leqslant 3g$, the variety $\mathcal V_{d}^{2g-2,1}$ is dense in a regular, generically smooth component $\mathcal B \subseteq B_C^{k_1}(d)$. Moreover:
\noindent (i) $[\mathcal{F}_u] \in \mathcal V_{d}^{2g-2,1}$ general is stable and comes from $u \in \Ext^1(\omega_C,N)$ general, with $N \in \Pic^{d - 2g+2}(C)$ general (so non-special). In particular, $i(\mathcal{F}_u)=1$.
\noindent (ii) The minimal degree quotient of $\mathcal{F}_u$ is $\omega_C$, thus $s(\mathcal{F}_u) = 4g-4-d \geqslant g-4$.
\noindent (iii) ${\rm Div}_{F_u}^{1,2g-2} = \{ \Gamma\}$, where $\Gamma$ is the section of $F_u = \mathbb{P}(\mathcal{F}_u)$ corresponding to $\mathcal{F}_u \to \!\!\! \to \omega_C$ (i.e. $\mathcal{F}_u$ is rp via $\omega_C$).
\end{corollary}
\begin{proof} We need to prove that $\pi_{d, 2g-2}$ is generically injective. The proof of Lemma \ref{lem:claim1} shows that, for $[\mathcal{F}_u] \in \mathcal V_{d}^{2g-2,1}$ general, one has $\dim(\pi_{d, 2g-2}^{-1} ([\mathcal{F}_u]) ) \leqslant \dim({\rm Div}_{F_u}^{1,2g-2})$. By construction of $\mathcal V_{d}^{2g-2,1}$ and by \eqref{eq:Ciro410}, $\mathcal{N}_{\Gamma/F_u} \cong K_C-N$. Since $N$ is general of degree $d-2g+2$, one has $h^1(N)=0$. From Remark \ref{rem:linisol} we conclude. To prove the injectivity of $P_{\mathcal{F}_u}$ one can argue as in \cite[Lemma 2.1]{CF} (we leave the easy details to the reader). \end{proof}
\subsubsection{{\bf Case $d = 2 \delta$}} From what has been proved in Theorem \ref{C.F.VdG}, for any integers $j \geqslant 1$, $g$ and $\delta$ as in \eqref{eq:ln2} and in Remark \ref{rem:C.F.VdG}, we have $$\mathcal V_{2\delta}^{\delta,j} \subseteq B_{C}^{k_j}(2\delta) \cap U_C^{ss}(2\delta).$$
\begin{lemma}\label{lem:claim2} The morphism $\mathbb{P}(\mathfrak{E}_{\delta})^0 \stackrel{\pi_{2\delta, \delta}}{\longrightarrow} U_C(d)$ contracts the $\gamma$-fibres, with $\gamma$ as in \eqref{eq:ude}. Thus, $\nu_{2\delta}^{\delta,j} \leqslant g + \rho(L)$. \end{lemma} \begin{proof} For any $y=(N,L) \in \mathcal{U}$, $\gamma^{-1}(y) \cong \mathbb{P}$ as in \eqref{eq:PP}. For any $u \in \mathbb{P}$, one has ${\rm gr}(\mathcal{F}_u) = L\oplus N$, where ${\rm gr}(\mathcal{F}_u)$ is the graded object associated to $\mathcal{F}_u$ (cf. \cite[Thm. 4]{Ses}). Therefore, all elements in a $\gamma$-fibre determine $S$-equivalent bundles (cf. e.g. \cite{M, Tha}). This implies that $\pi_{d, \delta}$ contracts any $\gamma$-fibre. \end{proof}
\begin{corollary}\label{C.F.2} Let $C$ be of genus $g \geqslant 5$ with general moduli. One has$$B_C^{k_1}(4g-4) = \overline{\mathcal V_{4g-4}^{2g-2,1}}.$$Thus:
\noindent (i) $B_C^{k_1}(4g-4) $ is irreducible, of dimension $g < \rho_{4g-4}^{k_1} = 2g-2$. In particular, it is birational to $\Pic^{2g-2}(C)$, with $B_C^{k_2}(4g-4) = \{[\omega_C \oplus \omega_C]\}$;
\noindent (ii) $[\mathcal{F}_u] \in B_C^{k_1}(4g-4) $ general comes from $u \in \Ext^1(\omega_C,N)$ general, with $N \in \Pic^{2g-2}(C)$ general. Hence, $i(\mathcal{F}_u)=1$.
\noindent (iii) The minimal degree quotient of $\mathcal{F}_u$ is $\omega_C$, thus $s(\mathcal{F}_u) = 0$ and $\mathcal{F}_u$ is strictly semistable.
\noindent (iv) ${\rm Div}_{F_u}^{1,2g-2} = \{ \Gamma\}$, where $\Gamma$ is the section of $F_u = \mathbb{P}(\mathcal{F}_u)$ corresponding to $\mathcal{F}_u \to \!\!\! \to \omega_C$ (i.e. $\mathcal{F}_u$ is rp via $\omega_C$).
\end{corollary} \begin{proof} From Theorem \ref{C.F.VdG}, the only case for $d = 4g-4$ is $j=1$ and $\delta = 2g-2$. Since $d = 2 \delta$, from \eqref{eq:ude} we have $\mathcal U \cong \Pic^{2g-2} (C) \setminus \{\omega_C\}$ and $\mathfrak{E}_{2g-2}$ is a vector bundle of rank $g-1$ on $\mathcal U$. From Lemma \ref{lem:claim2}, the moduli map $\pi_{d, 2g-2}$ factors through a map from $\mathcal U$ to $B_C^{k_1}(4g-4) $, which is injective for Chern-class reasons.
Next we prove that $B_C^{k_1}(4g-4) $ is irreducible. Consider $[\mathcal{F}] $ general in a component of $B_C^{k_1}(4g-4) $; it can be presented via an exact sequence as in \eqref{eq:Fund}, with $L$ special and effective (cf. Lemma \ref{lem:1e2note}). Since $s(\mathcal{F}) =0$, one has $\deg(L) = 2g-2$, i.e. $L \cong \omega_C$. Thus, $[\mathcal{F}]$ lies in the image of the map from $ \mathcal U$ to $B_C^{k_1}(4g-4)$.
The remaining assertions are easy to check and can be left to the reader. \end{proof}
Corollary \ref{C.F.2} has been proved already in \cite[Theorems 7.2, 7.3 and Remark 7.4]{BGN}, via different techniques. Our proof is completely independent.
\subsection{Special $N$}\label{ssec:Ns} Under the numerical assumptions of Theorem \ref{uepi}, any $N \in \Pic^{d-\delta}(C)$ is special (cf. Remark \ref{rem:uepib}-(2)) and, for $u \in \Ext^1(L,N)$ general, $\partial_u$ is surjective (cf. Remark \ref{rem:uepib}-(3)). Hence $i(\mathcal{F}_u) = h^1(L) = j$. We have:
\begin{proposition}\label{C.F.3} Assumptions as in Theorem \ref{uepi}. For any integers $j,\;\delta$ and $d$ therein, there exists an irreducible component $\mathcal B \subseteq B_C^{k_j}(d)$ such that:
\noindent (i) $\mathcal{V}^{\delta,j}_{d} \subseteq \mathcal B$;
\noindent (ii) $\mathcal B \cap U_C^s(d) \neq \emptyset$;
\noindent (iii) For $[\mathcal{E}] \in \mathcal B$ general, $\mathcal{E}$ is stable, with $s(\mathcal{E}) \geqslant g-\epsilon$ and $\epsilon$ as in Theorem \ref{uepi}. The minimal degree quotients of $\mathcal{E}$ as well as the minimal degree sections of $\mathbb{P}(\mathcal{E})$ are as in (iii) and (iv) of Theorem \ref{uepi}. In particular, $L$ is of minimal degree if and only if $ d = 2 \delta - g$.
\noindent (iv) If moreover $d \geqslant \delta + g - 3$ (so $\delta \geqslant 2g-3$), then $\mathcal B$ is also regular and generically smooth.
\end{proposition}
\begin{proof} Assertions (i), (ii) and (iii) follow from Theorem \ref{uepi}, the map \eqref{eq:pde} and the fact that the Segre invariant is lower-semicontinuous (cf. e.g. \cite[\S\;3]{LN}).
To prove (iv), we argue as in \cite[Lemma 2.1]{CF}. Take $\mathcal{F}_0 = L \oplus N$, with $N \in \Pic^{d-\delta}(C)$ general. Then, $N$ is non-effective and $1 \leqslant h^1(N) \leqslant 2$. The Petri map $P_{\mathcal{F}_0}$ decomposes as $\mu_0(L) \oplus \mu$, where $\mu_0(L)$ is the Petri map of $L$ as in \eqref{eq:Petrilb} and $\mu$ is as in \eqref{eq:mu}. Since $C$ has general moduli, $\mu_0(L)$ is injective (cf. \cite[(1.7), p. 215]{ACGH}). The injectivity of $\mu$ is immediate when $h^1(N) = 1$ (cf. the proof of Theorem \ref{thm:mainext1}). When
$h^1(N) =2$, the generality of $N$ implies that $|K_C-N|$ is a base-point-free pencil so the injectivity of $\mu$ follows from the base-point-free pencil trick, since $h^0(N- (K_C-L))=0$ (because $K_C-L$ is effective and $N$ non-effective). By semicontinuity on the elements of $\Ext^1(L,N)$ and the fact that $\mathcal{V}^{\delta,j}_{d} \subseteq \mathcal B$, the Petri map $P_{\mathcal{E}}$ is injective. One concludes by Remark \ref{rem:BNloci}. \end{proof}
\begin{remark}\label{rem:C.F.3} Computing $\dim(\mathbb{P}(\mathfrak{E}_\delta)) - \rho_d^{k_j}$ one finds the right-hand-side of \eqref{eq:nudde2}. Since $d < 2 \delta$ (see \eqref{eq:uepi3}), as in Remark \ref{rem:claim1}-(2) one sees that $\dim(\mathbb{P}(\mathfrak{E}_\delta)) - \rho_d^{k_j} <0$, unless $j =1$ and $\delta = 2g-2$, in which case $\dim(\mathbb{P}(\mathfrak{E}_\delta)) - \rho_d^{k_j} =0$. As in Lemma \ref{lem:claim1}, we see that $\pi_{d, 2g-2}$ is generically injective. Thus, with notation as in \eqref{eq:Nudde}, one has $\nu_d^{\delta,j} \geqslant \rho_d^{k_j}$ only if $j=1$, $\delta = 2g-2$ and $N \in \Pic^{d-\delta}(C)$ is general, in which case $\nu_d^{2g-2,1} = \rho_d^{k_1}$. \end{remark}
\begin{corollary}\label{C.F.3b} Let $C$ be of genus $g \geqslant 3$ with general moduli. For any integer $d$ such that $2g-2 \leqslant d \leqslant 3g-4$, one has $\nu_d^{2g-2,1}= \rho_d^{k_1} = 6g-6-d$. Moreover:
\noindent (i) $[\mathcal{F}_u] \in \mathcal V_{d}^{2g-2,1}$ general is stable and comes from $u \in \Ext^1(\omega_C,N)$ general, with $N \in \Pic^{d - 2g+2}(C)$ general (hence special, non-effective). In particular, $i(\mathcal{F}_u)=1$.
\noindent (ii) If $ 3g-5\leqslant d \leqslant 3g-4$, then $\mathcal V_{d}^{2g-2,1}$ is dense in a regular, generically smooth component of $B_C^{k_1}(d)$.
\noindent (iii) $s(\mathcal{F}_u) = g-\epsilon$, with $\epsilon$ as in Theorem \ref{uepi}. Quotients of minimal degree of $\mathcal{F}_u$ (equivalently, sections of minimal degree on $F_u = \mathbb{P}(\mathcal{F}_u)$) are those described in Theorem \ref{uepi}-(iii) and (iv). In particular, they are li sections.
\noindent (iv) The canonical section $\Gamma \subset F_u$ is the only special section; it is lsu and asu but not ai. Moreover, it is of minimal degree only when $d = 3g-4$.
\noindent (v) $\mathcal{F}_u$ is rsp but not rp via $\omega_C$. \end{corollary}
\begin{proof} (i), (ii) and (iii) follow from Theorem \ref{uepi}, Proposition \ref{C.F.3} and Remarks \ref{rem:BNloci}, \ref{rem:C.F.3}. Sections of minimal degree are li (see the proof of Proposition \ref{prop:lem4}).
As for (iv) and (v), from Serre duality and the fact that $\mathcal{F}_u$ is of rank two with $\det(\mathcal{F}_u) = \omega_C \otimes N$, one has \begin{equation}\label{eq:casaciro2} h^0(\mathcal{F}_u \otimes N^{\vee}) = h^1(\mathcal{F}_u^{\vee} \otimes \omega_C \otimes N) = h^1(\mathcal{F}_u). \end{equation} Since $i(\mathcal{F}_u) = 1$, from \eqref{eq:isom2} $\Gamma$ is li. Since $N$ is special and non-effective, from \eqref{eq:Ciro410}, $ {\rm Div}^{1,2g-2}_{F_u}$ is smooth, of dimension $3g-3-d \geqslant 1$ at $\Gamma$. Thus, $\Gamma$ is not ai but, since $W_{2g-2}^{g-1} (C) = \{\omega_C\}$, it is asu (see the proof of Proposition \ref{prop:lem4} and Remark \ref{rem:C.F.3}). For the same reason, from Theorem \ref{uepi}-(iv), the only possibility for $\omega_C$ to be a minimal quotient is $d = 3g-4$. Finally, the fact that $\Gamma \subset F_u$ is the only special section follows from Remark \ref{rem:C.F.3}.
Recall that, when $N \in \Pic^{d-\delta}(C)$ is special and $L \in W^{\delta-g+j}_{\delta}(C)$ is a smooth point, assumptions as in Theorem \ref{thm:mainext1} imply that $\partial_u$ is surjective for $u \in \Ext^1(L,N)$ general (cf. Corollary \ref{cor:mainext1}), and so $i(\mathcal{F}_u) = j$.
Therefore, to have $i(\mathcal{F}_u) >j$, we are forced to use the degeneracy loci described in \eqref{eq:wt}. To do this, let $y = (N,L)$ be general in $\mathcal{U}$ when $N$ is non-effective, respectively general in $\mathcal{Z}$ when $N$ is effective (recall notation as in \eqref{eq:ude}). Set $\mathbb{P}(y) := \gamma^{-1}(y) \cong \mathbb{P}$. Take numerical assumptions as in Remark \ref{unepine}, respectively in Remark \ref{unepie}, according to whether $N$ is non-effective or effective.
With notation as in \eqref{eq:Lahat}, for any good component $\widehat{\Lambda}_t(y) \subseteq \widehat{W}_t(y) \subset \mathbb{P}(y)$ we have $$\emptyset \neq \widehat{\mathcal W}_t^{\rm {Tot}} \subset \mathbb{P}(\mathfrak{E}_\delta),$$where a point in $\widehat{\mathcal W}_t^{\rm {Tot}}$ corresponds to the datum of a pair $(y, u)$, with $y = (N,L)$ and $u \in \widehat{W}_t(y)$. Any irreducible component of $\widehat{\mathcal W}_t^{\rm {Tot}} $ has dimension at least $\dim (\mathbb{P}(\mathfrak{E}_\delta)) - c (\ell,r,t)$ (where $c(\ell,r,t)$ as in \eqref{eq:clrt} and where $\dim (\mathbb{P}(\mathfrak{E}_\delta))$ as in \eqref{eq:yde2}). From the generality of $y$, for any good component $\widehat{\Lambda}_t(y) $, we have an irreducible component $$\widehat{\Lambda}_t^{\rm {Tot}} \subseteq \widehat{\mathcal W}_t^{\rm {Tot}} \subset \mathbb{P}(\mathfrak{E}_\delta) $$such that \begin{itemize} \item[(i)] $\widehat{\Lambda}_t^{\rm {Tot}}$ dominates $\mathcal{U}$ (resp., $\mathcal{Z}$); \item[(ii)] $\dim(\widehat{\Lambda}_t^{\rm {Tot}}) = \dim (\mathbb{P}(\mathfrak{E}_\delta)) - c (\ell,r,t)$; \item[(iii)] for $(y, u) \in \widehat{\Lambda}_t^{\rm {Tot}}$ general, $\cork(\partial_u)=t$;
\item[(iv)] if $\lambda:= \gamma|_{\widehat{\Lambda}_t^{\rm {Tot}}}$, for $y$ general one has $\lambda^{-1}(y) = \widehat{\Lambda}_t(y) $. \end{itemize}
\begin{definition}\label{def:goodtot} Any component $\widehat{\Lambda}_t^{\rm {Tot}}$ satisfying (i)-(iv) above will be called a {\em (total) good component} of $\widehat{\mathcal W}_t^{\rm {Tot}}$. \end{definition}
We set \begin{equation}\label{eq:nuddet}
\mathcal{V}^{\delta,j,t}_{d} := {\rm Im} \left(\pi_{d, \delta}|_{\widehat{\Lambda}_t^{\rm {Tot}}}\right) \subseteq B_C^{k_{j+t}}(d) \;\;\;\; {\rm and} \;\;\;\; \nu_d^{\delta,j,t} := \dim(\mathcal{V}^{\delta,j,t}_{d}). \end{equation} Two cases have to be discussed, according to the effectivity of $N$.
\noindent \subsubsection{{\bf $N$ non-effective}}\label {ssec:noneff} With assumptions as in Remark \ref{unepine}, $N$ can be taken general in $\Pic^{d-\delta}(C)$; the general bundle in $\mathcal{V}^{\delta,j,t}_{d} $ is stable (by Theorem \ref{unepi} and by the open nature of stability).
For brevity's sake, set \begin{equation}\label{eq:fojt} \varphi_0 (\delta, j,t) :=\dim (\widehat{\Lambda}_t^{{\rm Tot}}) -\rho_d^{k_{j+t}} = d (j-1) - \delta (j-2) - (g-1) (j+1) + j t, \end{equation}which therefore takes into account the expected dimension of the general fibre of
$\pi_{d,\delta}|_{\widehat{\Lambda}_t^{{\rm Tot}}}$ and the codimension of its image in a regular component of $B_C^{k_{j+t}}(d)$.
One has $\varphi_0 (\delta, j,t) \geqslant \nu_d^{\delta,j,t} - \rho_d^{k_{j+t}} $ with equality if and only if
$\pi_{d, \delta}|_{\widehat{\Lambda}_t^{{\rm Tot}}}$ is generically finite. Thus, from Remark \ref{rem:BNloci}, it is clear that $\mathcal{V}^{\delta,j,t}_{d}$ cannot fill up a dense subset of a component of $B_C^{k_{j+t}}(d) $ if $\varphi_0(\delta, j,t) <0$; in other words, the negativity of $\varphi_0(\delta, j,t)$ gives a numerical obstruction to describing the general point of a (regular) component of $B_C^{k_{j+t}}(d) $.
\noindent $\bullet$ For $j =1$, one has \begin{equation}\label{eq:fo1t} \varphi_0 (\delta, 1,t) = \delta - 2g + 2 + t. \end{equation}
\noindent $\bullet$ When $j \geqslant 2$, from Remark \ref{unepine} and arguing as in Remark \ref{rem:claim1}, one gets \begin{equation}\label{eq:fojtb} \varphi_0 (\delta, j,t) \leqslant j (t-j). \end{equation}Thus, $\mathcal{V}^{\delta,j,t}_{d}$ never fills up a dense subset of a component of $B_C^{k_{j+t}}(d)$ as soon as $j >t \geqslant 1$.
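Indeed, when $j > t \geqslant 1$ one has $t - j \leqslant -1$, so \eqref{eq:fojtb} gives $$\nu_d^{\delta,j,t} - \rho_d^{k_{j+t}} \leqslant \varphi_0(\delta,j,t) \leqslant j(t-j) \leqslant -j < 0,$$i.e. $\mathcal{V}^{\delta,j,t}_{d}$ has dimension strictly smaller than that of any regular component of $B_C^{k_{j+t}}(d)$.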
\subsubsection {{\bf $N$ effective}} With assumptions as in Remark \ref{unepie}, $N$ is general in $W_{\rho(N)}$. From the second equality in \eqref{eq:yde2}, for any $n \geqslant 1$, one puts \begin{equation}\label{eq:fnjt} \varphi_n (\delta, j,t) :=\dim (\widehat{\Lambda}_t^{{\rm Tot}}) -\rho_d^{k_{j+t}} = \varphi_0(\delta, j,t) - n (r-t), \end{equation}where $\varphi_0(\delta,j,t)$ as in \eqref{eq:fojt} above.
\begin{remark}\label{rem:conpar2} For a total good component $\widehat{\Lambda}_t^{{\rm Tot}}$ and for $(N,L,u) \in \widehat{\Lambda}_t^{{\rm Tot}}$ general, one has $n (r-t) = h^0(N) \; {\rm rk}(\partial_u)$. Hence, $n (r-t)$ is non-negative and it is zero if and only if $r=t$, i.e. $\partial_u$ is the zero map. Therefore, $\varphi_n(\delta, j,t) \leqslant \varphi_0(\delta, j,t)$ and equality holds if and only if $r=t$. The possibility for a $\mathcal{V}^{\delta,j,t}_{d}$ to fill up a dense subset of a component of $B_C^{k_{j+t}}(d)$ can be discussed as in \S\,\ref{ssec:noneff}. \end{remark}
By definition of $\nu_{d}^{\delta,j,t}$, it is clear that $\nu_{d}^{\delta,j,t} - \rho_d^{k_{j+t}} \leqslant \varphi_n(\delta, j,t)$, for any $n \geqslant 0$. Thus a necessary condition for $\nu_d^{\delta,j,t} \geqslant \rho_d^{k_{j+t}}$, i.e. for $\mathcal V_d^{\delta,j,t}$ to have at least the dimension of a regular component of $B_C^{k_{j+t}}(d)$, is $\varphi_n(\delta, j,t) \geqslant 0$.
The next proposition easily follows.
\begin{proposition}\label{C.F.4} Assumptions as in Theorem \ref{unepi} (more precisely, either as in Remark \ref{unepine}, when $N$ is non-effective, or as in Remark \ref{unepie}, when $N$ is effective). Then for any integers $j,\;\delta$ and $d$ therein, there exists an irreducible component $\mathcal B \subseteq B_C^{k_{j+t}}(d)$ such that:
\noindent (i) $\mathcal{V}^{\delta,j,t}_{d} \subseteq \mathcal B$;
\noindent (ii) For $[\mathcal{E}] \in \mathcal B$ general, $s(\mathcal{E}) \geqslant g- c(\ell,r,t)-\epsilon \geqslant 0$, where $c(\ell,r,t)$ as in \eqref{eq:clrt} and $\epsilon \in \{0,1\}$ such that $d+g-c(\ell,r,t) \equiv \epsilon \pmod{2}$;
\noindent (iii) $\mathcal B \cap U_C^s(d) \neq \emptyset$, if $g - c(\ell,r,t)-\epsilon >0$. \end{proposition}
\begin{remark}\label{rem:17lug} In order to estimate $\nu_d^{\delta,j,t}$, one has to estimate the dimension of the general fibre of the map $\pi_{d, \delta}$ restricted to a total good component $\widehat{\Lambda}_t^{\rm {Tot}}$. Thus, if for
$[\mathcal{F}] \in \mathcal V_d^{\delta,j,t}$ general we put for simplicity $f_{\mathcal{F}} := \dim\left(\pi_{d, \delta}|_{\widehat{\Lambda}_t^{{\rm Tot}}}^{-1} ([\mathcal{F}])\right)$, a rough estimate is \begin{equation}\label{eq:aiutoB} f_{\mathcal{F}} \leqslant a_{F}(\delta), \end{equation}where $F = \mathbb{P}(\mathcal{F})$ and $a_{F}(\delta)$ is the dimension of the scheme of special unisecants of degree $\delta$ on $F$, as in \eqref{eq:aga}.
\end{remark}
\begin{remark}\label{rem:bohb} Assume $j=1$ in Proposition \ref{C.F.4}.
\noindent (1) When $N \in \Pic^{d-\delta}(C)$ is general, assumptions as in Remark \ref{unepine} give $\delta \leqslant 2g-2$ and $N$ non-effective, for any $t \geqslant 1$. The only case to consider is therefore $\varphi_0 (\delta, 1,t)$. A necessary condition for $\mathcal{V}_d^{\delta, 1,t}$ to have dimension at least $\rho_d^{k_{j+t}}$ is $\varphi_0(\delta, 1,t) \geqslant 0$, i.e. $\delta \geqslant 2g-2 - t$ (cf. \eqref{eq:fo1t}). Thus:
\noindent $\bullet$ when $\delta = 2g-2 - t$, then $L = \omega_C(-D_t)$, with $D_t \in C^{(t)}$, $t<g$, imposing independent conditions on $|\omega_C|$. Since $\varphi_0(\delta, 1,t)=0$, for $D_t \in C^{(t)}$ general, the estimate \eqref{eq:aiutoB} and a parameter count suggest that for $[\mathcal{F}] \in \mathcal V_d^{2g-2-t,1,t} $ general one has $a_{F}(2g-2-t) = 0$, i.e. $\mathcal{F}$ is rsp via $\omega_C(-D_t)$, and $\mathcal{B} = \overline{\mathcal{V}_d^{\delta, 1,t}}$ is regular.
\noindent $\bullet$ at the opposite extreme, when $\delta = 2g-2$, then $L = \omega_C$. Let $[\mathcal{F}] \in \mathcal{V}_d^{2g-2, 1,t}$ be general, and let $\Gamma \subset F = \mathbb{P}(\mathcal{F})$ be the canonical section corresponding to $\mathcal{F} \to\!\! \to \omega_C$. By definition of $\mathcal{V}_d^{2g-2, 1,t}$, $\mathcal{F} = \mathcal{F}_v$ for $v \in \Lambda_t \subset \Ext^1(\omega_C, N)$ general in a good component. By \eqref{eq:isom2} and \eqref{eq:casaciro2}, one has
$\dim(|\mathcal{O}_{F}(\Gamma)|) = t$. Thus, $[\mathcal{F}] \in \mathcal V_d^{2g-2,1,t} $ general is not rsp via $\omega_C$, since the general fibre of $\pi_{d, 2g-2}|_{\widehat{\Lambda}_t^{{\rm Tot}}}$ has dimension at least $t$.
It is therefore natural to expect that the component $\mathcal B$ in Proposition \ref{C.F.4} is such that $$\mathcal B = \overline{\mathcal V_d^{2g-2,1,t}} = \overline{\mathcal V_d^{2g-3,1,t}} = \cdots = \overline{\mathcal V_d^{2g-2-t,1,t}},$$where $[\mathcal{F}] \in \mathcal B$ general is rsp only when $[\mathcal{F}]$ is considered as an element of $\mathcal V_d^{2g-2-t,1,t}$.
\noindent (2) One may expect something similar when $j=1$ and $N$ is effective, general in $W_{\rho(N)}$. In this case, $\varphi_n(\delta, 1,t) \geqslant 0$ gives $ \delta \geqslant 2g-2 + rn - t (n+1) $ whereas, from the first line of bounds on $\delta$ in Remark \ref{unepie}, we get $\delta \leqslant {\rm min} \{ 2g-2, g-2 + r-t + \frac{g-\epsilon}{t} \}$. A necessary condition for $\nu_d^{\delta,j,t} \geqslant \rho_d^{k_{j+t}}$ is therefore \begin{equation}\label{eq:8.1} rn - t (n+1) <0, \end{equation}otherwise either $L$ would be non-special, contradicting Lemma \ref{lem:1e2note}, or $L \cong \omega_C$, so $\mathcal{F}_v$ would not be rsp, as in (1) above. In the next section, we will discuss these questions. \end{remark}
\section{Low speciality, canonical determinant}\label{S:BND} In this section we apply results in \S's\;\ref{S:Nns}, \ref{S:Ns} and \ref{S:PSBN} to describe Brill-Noether loci of vector bundles with canonical determinant and Brill-Noether loci of vector bundles of fixed degree $d$ and low speciality $i \leqslant 3$ on a curve $C$ with general moduli. In particular, the more general analysis discussed in the previous sections allows us to determine rigidly special presentations of the general point of the irreducible components arising from the constructions in \S\;\ref{S:PSBN}.
From now on, for any integers $g \geqslant 3$, $i \geqslant 1$ and $2g-2 \leqslant d \leqslant 4g-4$, we will set \begin{equation}\label{eq:Btilde} \widetilde{B_C^{k_i}}(d) := \left\{ \begin{array}{cl} B_C^{k_i}(d) & \mbox{if either $d$ odd or $d=4g-4$} \\ B_C^{k_i}(d)\cap U_C^s(d) & \mbox{otherwise} \end{array} \right. \end{equation}
\subsection{Vector bundles with canonical determinant}\label{SS:low} Given an integer $d$ and any $\xi \in \Pic^d(C)$, there exists the {\em moduli space of (semi)stable, rank-two vector bundles with fixed determinant $\xi$}. Following \cite{Muk2, Muk}, we denote it by $M_C(2, \xi)$ (sometimes a different notation is used, see e.g. \cite{Ses,Be,BeFe,TB000,Oss,Voi,Beau2,LNP}).
The scheme $M_C(2, \xi)$ is defined as the fibre over $\xi \in \Pic^d(C)$ of the {\em determinantal map} \begin{equation}\label{eq:det} U_C(d) \stackrel{\rm det}{\longrightarrow} \Pic^d (C). \end{equation}For any $\xi \in \Pic^d(C)$, $M_C(2, \xi)$ is smooth, irreducible, of dimension $3g-3$ (cf. \cite{NR1,Ses}).
Brill-Noether loci can be considered in $M_C(2, \xi)$. Recent results for arbitrary $\xi$ are given in \cite{Oss,Oss2,LaNw}. A case which has been particularly studied (for its connections with Fano varieties) is $M_C(2, \omega_C)$. Seminal papers on the subject are \cite{BeFe, Muk2}; other important results are contained in \cite{TB000,Voi,IVG,LNP}. If $[\mathcal{F}] \in M_C(2, \omega_C)$, Serre duality gives \begin{equation}\label{eq:net1} i(\mathcal{F}) = k_i(\mathcal{F}) := k. \end{equation} For $[\mathcal{F}] \in M_C(2, \omega_C) \subset U_C(2g-2)$, the Petri map $P_{\mathcal{F}}$ in \eqref{eq:petrimap} splits as $P_{\mathcal{F}} = \lambda_{\mathcal{F}} \oplus \mu_{\mathcal{F}}$, where $$\lambda_{\mathcal{F}} : \bigwedge^2 H^0(\mathcal{F}) \to H^0(\omega_C) \quad \mbox {and}\quad \mu_{\mathcal{F}} : {\rm Sym}^2 (H^0(\mathcal{F})) \to H^0({\rm Sym}^2 (\mathcal{F}));$$the latter is called the {\em symmetric Petri map}.
For $[\mathcal{F}] \in M_C(2, \omega_C)$ general, one has $k = 0$ (cf. \cite[\S\,4]{Muk}, after formula (4.3)). For any $k \geqslant 1$, one sets$$M_C^k(2, \omega_C) := \{ [\mathcal{F}] \in M_C(2, \omega_C) \; | \; h^0(\mathcal{F}) = h^1(\mathcal{F}) \geqslant k\}$$which is called the $k^{th}$-{\em Brill-Noether locus} in $M_C(2, \omega_C)$. In analogy with \eqref{eq:Btilde}, we set $$\widetilde{M_C^k}(2, \omega_C) := M_C^k(2, \omega_C) \cap U_C^s(2g-2).$$By \cite[Prop.\,1.4]{Muk}, \cite[\S\,2]{BeFe} and \eqref{eq:net1}, one has $${\rm expcodim}_{M_C(2, \omega_C)} (\widetilde{M_C^k}(2, \omega_C)) = \frac{k(k+1)}{2} \leqslant k^2 = i(\mathcal{F}) k_i(\mathcal{F}).$$Similarly to $\widetilde{B_C^{k_i}}(d)$, if $[\mathcal{F}]\in \widetilde{M_C^k}(2, \omega_C)$, then $\widetilde{M_C^k}(2, \omega_C)$ is smooth and regular (i.e. of the expected dimension) at $[\mathcal{F}]$ if and only if $\mu_{\mathcal{F}}$ is injective (see \cite{BeFe,Muk2,Muk}).
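Note that the inequality $\frac{k(k+1)}{2} \leqslant k^2$ appearing above simply amounts to $k+1 \leqslant 2k$, i.e. to $k \geqslant 1$; in other words, the expected codimension of $\widetilde{M_C^k}(2, \omega_C)$ in $M_C(2, \omega_C)$ never exceeds the expected codimension $i(\mathcal{F})\,k_i(\mathcal{F}) = k^2$ of the corresponding Brill-Noether locus in $U_C(2g-2)$.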
Several basic questions on $\widetilde{M_C^k}(2, \omega_C)$, like non-emptiness, irreducibility, etc., are still open. A description of these bundles in terms of extensions (as we do here) is available only for some $k$ on $C$ general of genus $g \leqslant 12$ (cf. \cite[\S\,4]{Muk}, \cite{BeFe}). Further existence results are contained in \cite{TB1,LNP}. On the other hand, if one assumes $ [\mathcal{F}]\in M_C^k(2, \omega_C)$, injectivity of $\mu_{\mathcal{F}}$ on $C$ general of genus $g \geqslant 1$ has been proved in \cite{TB000} (cf. \cite{Beau2} for $k < 6$ with a different approach).
\subsection{Case $i=1$}\label{ssec:1} In this case $\rho_d^{k_1} = 6g-6 -d$. Using notation and results as in \S\,\ref{S:PSBN}, we get:
\begin{theorem}\label{i=1} Let $C$ be of genus $g \geqslant 5$, with general moduli. For any integer $d$ s.t. $2g-2 \leqslant d \leqslant 4g-4$, $$\widetilde{B_C^{k_1}}(d)= \overline{\mathcal V_d^{2g-2,1}},$$as in Corollaries \ref{C.F.1b}, \ref{C.F.1c}, \ref{C.F.2} and \ref{C.F.3b}. In particular,
\noindent (i) $\widetilde{B_C^{k_1}}(d)$ is non-empty, irreducible. For $2g-2 \leqslant d \leqslant 4g-5$ it is regular, whereas $\dim(\widetilde{B_C^{k_1}}(4g-4)) = g < \rho_{4g-4}^{k_1} = 2g-2$.
\noindent (ii) For $3g-5 \leqslant d \leqslant 4g-4$, $\widetilde{B_C^{k_1}}(d)$ is generically smooth.
\noindent (iii) $[\mathcal{F}] \in \widetilde{B_C^{k_1}}(d)$ general is stable for $2g-2 \leqslant d \leqslant 4g-5$, and strictly semistable for $d=4g-4$, fitting in a (unique) sequence$$ 0 \to N \to \mathcal{F} \to \omega_C \to 0,$$where $N \in \Pic^{d-2g+2}(C)$ is general, the coboundary map is surjective and $i(\mathcal{F})=1$.
\noindent (iv) For $3g-4 \leqslant d \leqslant 4g-4$ and $[\mathcal{F}] \in \widetilde{B_C^{k_1}}(d)$ general, one has $s(\mathcal{F}) = 4g-4-d$, the quotient of minimal degree being $\omega_C$. The section $\Gamma \subset F= \mathbb{P}(\mathcal{F})$ corresponding to $\mathcal{F} \to \!\!\! \to \omega_C$ is the only special section of $F$. Moreover: \begin{itemize} \item for $d \geqslant 3g-3$, $\Gamma$ is ai, \item for $d = 3g-4$, $\Gamma$ is lsu and asu but not ai. \end{itemize}
\noindent (v) For $2g-2 \leqslant d \leqslant 3g-5$ and $[\mathcal{F}] \in \widetilde{B_C^{k_1}}(d)$ general, one has $s(\mathcal{F}) = g-\epsilon$, with $\epsilon \in \{0,1\}$ such that $d+g \equiv \epsilon \pmod{2}$. The section $\Gamma \subset F$ is the only special section; it is asu but not ai. Moreover, $\Gamma$ is not of minimal degree; indeed:
\begin{itemize} \item when $d+g$ is even, minimal degree sections of $F$ are li sections of degree $\frac{d+g}{2}$ s.t. $\dim( {\rm Div}_{F}^{1,\frac{d+g}{2}}) = 1$;
\item when $d+g$ is odd, minimal degree sections are li of degree $\frac{d+g-1}{2}$ and $\dim( {\rm Div}_{F}^{1,\frac{d+g-1}{2}}) \leqslant 1$.
\end{itemize}
\noindent (vi) In particular, for $2g-2 \leqslant d \leqslant 4g-4$, $[\mathcal{F}] \in \widetilde{B_C^{k_1}}(d)$ general is rp via $\omega_C$. \end{theorem}
\begin{proof} All the assertions, except the irreducibility, follow from Corollaries \ref{C.F.1b}, \ref{C.F.1c}, \ref{C.F.2} and \ref{C.F.3b}. For $d = 4g-4$ irreducibility has been proved in Corollary \ref{C.F.2}. Thus, we focus on cases $ 2g-2 \leqslant d \leqslant 4g-5$.
Let us consider an irreducible component $\mathcal B \subseteq \widetilde{B_C^{k_1}}(d)$. From Lemma \ref{lem:1e2note}, $[\mathcal{F}] \in \mathcal B$ general is as in \eqref {eq:Fund}, with $h^1(L) = j \geqslant 1$ and $L$ of minimal degree among special, effective quotient line bundles. Moreover $\dim(\mathcal B) \geqslant \rho_d^{k_1}$ (cf. Remark \ref{rem:BNloci}). Two cases have to be considered.
\noindent (1) If $i(\mathcal{F}) =1$, then $j=1$ (notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp}) and $\partial: H^0(L) \to H^1(N)$ is surjective. In particular $\ell \geqslant r$. If $r=0$ then we are in cases of Corollaries \ref{C.F.1b}, \ref{C.F.1c}, and $\mathcal B = \overline{\mathcal V_d^{2g-2,1}}$. If $r>0$, as in Remark \ref{rem:C.F.3} one has $$0 \leqslant \dim(\mathbb{P}(\mathfrak{E}_\delta)) - \dim(\mathcal{B}) \leqslant \delta-2g+2$$(cf. \eqref{eq:nudde2}). Hence $\delta = 2g-2$ and $\mathcal B = \overline{\mathcal V_d^{2g-2,1}}$ as in Corollary \ref{C.F.3b}.
\noindent (2) Assume $i(\mathcal{F}) = i > 1$. As in Remarks \ref{rem:claim1}, \ref{30.12l}, \ref{rem:C.F.3} one has $L = \omega_C$. Thus $i > 1$ forces $r \geqslant {\rm cork}(\partial) = i-1 > 0 $. Recalling \eqref{eq:lem3note} and \eqref{eq:PP}, one has $\dim(\mathbb{P}) = 5g-6-d$. Therefore, $\mathcal B$ must be regular and $\mathcal{F}$ corresponds to the general point of $\Ext^1(\omega_C, N)$, with $N \in \Pic^{d-2g+2}(C)$ general, so non-effective. In particular, one has $2g-2 \leqslant d \leqslant 3g-4$ and $ r = 3g-3-d$. On the other hand, since $$\ell = g, \; 1 \leqslant r \leqslant g-1, \; 2g -1 \leqslant m = 5g-5-d \leqslant 3g-3$$we are in the hypotheses of Corollary \ref{cor:mainext1}, hence ${\rm cork}(\partial)= 0$, a contradiction. \end{proof}
\begin{remark}\label{rem:i=1} (1) Theorem \ref{i=1} gives alternative proofs of results in \cite{Sun,Lau,BGN} for the rank-two case. It provides in addition a description of the general point of $\widetilde{B_C^{k_1}}(d) \cong \widetilde{B_C^1}(4g-4-d)$, for any $2g-2 \leqslant d \leqslant 4g-4$. The same description is given in \cite{Ballico1}, with a different approach, i.e. using {\em general negative elementary transformations} as in \cite{Lau}. In terms of scrolls of speciality $1$, partial classifications are given also in \cite[Theorem 3.9]{GP3}.
\noindent (2) As a consequence of Theorem \ref{i=1}, one observes that the Segre invariant $s$ does not stay constant on a component of the Brill-Noether locus. For example, the general element of $\widetilde{B_C^{k_1}}(4g-7)$ has $s=3$ and $i = 1$; on the other hand, in Theorem \ref{C.F.VdG}, we constructed vector bundles in $ \mathcal V_{4g-7}^{2g-3,1} \subset \widetilde{B_C^{k_1}}(4g-7)$ with $s = i = 1$. The minimal special quotient of the latter vector bundles is the canonical bundle minus a point, whereas for the general vector bundle in $\widetilde{B_C^{k_1}}(4g-7)$ it is the canonical bundle.
\noindent (3) From the proof of Theorem \ref{i=1}, for $d \leqslant 4g-3$, the map $\pi_{d,2g-2}$ is birational onto $\widetilde{B_C^{k_1}}(d) = B_C^{k_1}(d)$, i.e. $B_C^{k_1}(d)$ is uniruled. \end{remark}
\begin{theorem}\label{prop:M12K} Let $C$ be of genus $g \geqslant 5$, with general moduli. Then $\widetilde{M_C^1}(2, \omega_C) \neq \emptyset$. Moreover, there exists an irreducible component which is
\begin{itemize} \item[(i)] generically smooth \item[(ii)] regular (i.e. of dimension $3g-4$), and \item[(iii)] its general point $[\mathcal{F}_u]$ comes from $u \in \mathbb{P}(\Ext^1(\omega_C, \mathcal{O}_C))$ general. In particular, $s(\mathcal{F}_u) = g - \epsilon$, where $\epsilon \in \{0,1\}$ such that $g \equiv \epsilon \pmod{2}$. \end{itemize} \end{theorem}
\begin{proof} Take $u \in \mathbb{P}(\Ext^1(\omega_C, \mathcal{O}_C))$ general. With notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp} one has $$\ell = r = g, \; {\rm and} \; m = 3g-3 \geqslant \ell+1.$$Thus, from Corollary \ref{cor:mainext1} and from \eqref{eq:net1}, $h^0(\mathcal{F}_u) = h^1(\mathcal{F}_u) =1$. From \eqref{eq:lem3note}, $\dim(\mathbb{P}(\Ext^1(\omega_C, \mathcal{O}_C))) = 3g-4$. Thus, $\mathcal{F}_u $ stable with $s(\mathcal{F}_u) = g-\epsilon$ follows from Proposition \ref{prop:LN}, with $\sigma = g-\epsilon$. This shows that $\widetilde{M_C^1}(2, \omega_C) \neq \emptyset$.
Since $\bigwedge^2 H^0(\mathcal{F}_u) = (0)$, $\mu_{\mathcal{F}_u}$ is injective if and only if $P_{\mathcal{F}_u}$ is. On the other hand, one has $H^0(\mathcal{F}_u) \otimes H^0(\omega_C \otimes \mathcal{F}_u^{\vee}) \cong \mathbb{C}$. Therefore, one needs to show that $P_{\mathcal{F}_u}$ is not the zero-map. This follows by taking the limit of $P_{\mathcal{F}_u}$ as $u$ tends to $0$, so that $\mathcal{F}_0 = \mathcal{O}_C \oplus \omega_C$: the limit of $P_{\mathcal{F}_u}$ is then the multiplication map $H^0(\mathcal{O}_C)\otimes H^0(\mathcal{O}_C)\to H^0(\mathcal{O}_C)$, which is non-zero.
To get (i)-(iii) at once, one observes that $\pi_{2g-2, 2g-2}|_{\mathbb{P}(\Ext^1(\omega_C, \mathcal{O}_C))} $ is generically injective, since the exact sequence $$ 0 \to \mathcal{O}_C \to \mathcal{F}_u \to \omega_C \to 0$$is unique: indeed, the surjection $\mathcal{F}_u \to\!\!\to \omega_C$ is unique and $h^0(\mathcal{F}_u) = 1$ (cf. \eqref{eq:isom2} and computations as in \eqref{eq:casaciro2}); moreover, by Lemma \ref{lem:technical}, two general vector bundles in $\mathbb{P}(\Ext^1(\omega_C, \mathcal{O}_C))$ cannot be isomorphic. \end{proof}
\begin{remark}\label{rem:net2} (1) For a similar description, cf. \cite{BeFe}. Generic smoothness for components of $\widetilde{M_C^1}(2, \omega_C)$ follows also from results in \cite{TB000,Beau2}.
\noindent (2) From Theorem \ref{i=1}, $[\mathcal{F}] \in \widetilde{B_C^{k_1}}(2g-2)$ general fits in a sequence
$0 \to \eta \to \mathcal{F} \to \omega_C \to 0$, with $\eta \in \Pic^0(C)$ general. Hence the map $\widetilde{d} := {\rm det}|_{\widetilde{B_C^{k_1}}(2g-2)}$ is dominant. Since $\Pic^{2g-2}(C)$ and $\widetilde{B_C^{k_1}}(2g-2)$ are irreducible and generically smooth, then $\widetilde{d}^{-1}(\eta) = \widetilde{M_C^1}(2, \omega_C \otimes \eta)$ is equidimensional and each component is generically smooth. Theorem \ref{prop:M12K} yields that in this situation each component of $ \widetilde{M_C^1}(2, \omega_C \otimes \eta)$ has dimension $3g-4$ (equal to the expected dimension). This agrees with \cite[Theorem 1.1]{Oss}. \end{remark}
\subsection{Case $i =2$}\label{ssec:2} In this case, $\rho_d^{k_2} = 8g- 11 - 2d$.
\begin{theorem}\label{i=2}
Let $C$ be of genus $g \geqslant 3$, with general moduli. For any integer $d$ s.t. $2g-2 \leqslant d \leqslant 3g-6$, one has $\widetilde{B_C^{k_2}}(d) \neq \emptyset$.
\noindent (i) $\overline{\mathcal V_d^{2g-3,1,1}}$ is the unique component of $\widetilde{B_C^{k_2}}(d) $, whose general point corresponds to a vector bundle $\mathcal{F}$ with $i(\mathcal{F}) = 2$. Moreover, $\overline{\mathcal V_d^{2g-3,1,1}} = \overline{\mathcal V_d^{2g-2,1,1}}$ and it is a regular component of $\widetilde{B_C^{k_2}}(d) $.
\noindent (ii) For $[\mathcal{F}]\in \overline{\mathcal V_d^{2g-3,1,1}}$ general, one has $s(\mathcal{F}) \geqslant 3g-4-d-\epsilon >0$ and $\mathcal{F}$ fits in an exact sequence$$0 \to N \to \mathcal{F} \to \omega_C (-p) \to 0,$$with
\begin{itemize} \item[$\bullet$] $p \in C$ general, \item[$\bullet$] $N \in \Pic^{d-2g+3}(C)$ general, and \item[$\bullet$] $\mathcal{F}=\mathcal{F}_v$ and $v$ is general in the good locus $ \mathcal W_1 \subset \Ext^1(\omega_C(-p), N)$. \end{itemize}
\noindent (iii) A section $\Gamma \subset F$, corresponding to a quotient $\mathcal{F} \to\!\!\to \omega_C(-p)$, is not of minimal degree. However, it is of minimal degree among special sections and it is asi but not ai (i.e. $\mathcal{F}$ is rsp but not rp via $\omega_C(-p)$).
\noindent (iv) For $g \geqslant 13$ and $2g+6 \leqslant d \leqslant 3g-7$, $\overline{\mathcal V_d^{2g-3,1,1}}$ is generically smooth.
\end{theorem}
\begin{proof} Once part (i) has been proved, parts (ii)--(iii) follow from Theorem \ref{unepi} and Proposition \ref{C.F.4}, with $\delta=2g-3$, $j=t=1$, whereas part (iv) follows from Proposition \ref{C.F.1}-(ii), with $j=2$.
The proof of part (i) consists of four steps.
\noindent {\bf Step 1}. In this step, we show that if $\mathcal B$ is an irreducible component of $\widetilde{B_C^{k_2}}(d)$ such that, for $[\mathcal{F}] \in \mathcal B$ general, $i (\mathcal{F}) =2$, then $\mathcal B$ comes from a total good component $\widehat{\mathcal W}^{\rm Tot}_1 \subseteq \mathbb{P}(\mathfrak{E}_{\delta})$, for some $\delta$ (cf. Thm. \ref{thm:mainext1} and Def. \ref{def:goodc}).
Indeed, let \eqref{eq:Fund} be a special presentation of $\mathcal{F}$ with $L$ a quotient of minimal degree. Then, from Remarks \ref{rem:claim1}, \ref{30.12l}, \ref{rem:C.F.3}, one has $h^1(L)=j= 1$ as $\mathcal B$ is a component. Hence, with notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp} and \eqref{eq:wt}, one must have $t=1$ and $\ell \geqslant r-1$. Moreover, $d \leqslant 3g - 6$, $\delta \geqslant g-1$ and $j=1$ imply that $m = 2\delta - d + g -1 \geqslant \delta - g + 3 = \ell + 1 $ (recall notation as in \eqref{eq:lem3note}). Therefore, we can apply Theorem \ref{thm:mainext1}, finishing the proof of this step.
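For the reader's convenience, the inequality $m \geqslant \ell + 1$ used above is a direct computation: since $j=1$, Riemann-Roch gives $\ell = h^0(L) = \delta - g + 2$, whence
$$m - (\ell+1) = (2\delta - d + g - 1) - (\delta - g + 3) = \delta - d + 2g - 4 \geqslant (g-1) - (3g-6) + 2g - 4 = 1,$$
by the assumptions $\delta \geqslant g-1$ and $d \leqslant 3g-6$.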
\noindent {\bf Step 2}. In this step we determine which of the constructed loci either $\mathcal V_{d}^{\delta,j}$, as in \eqref{eq:Nudde}, or $\mathcal V_{d}^{\delta,j,t}$, as in \eqref{eq:nuddet}, has general point $[\mathcal{F}]$ such that $i(\mathcal{F}) =2$ and dimension at least $\rho_d^{k_2} = 8g-11-2d$ (hence, it can be conjecturally dense in a component of $\widetilde{B_C^{k_2}}(d)$). We will prove that this only happens for $2g-3 \leqslant \delta \leqslant 2g-2$ and $j = t = 1$. Moreover, we will show that the presentation of $\mathcal{F}$ is specially rigid only if $\delta = 2g-3$.
Let $\mathcal V$ be any such locus, and let $\mathcal{F}$ be its general point which is presented as in \eqref{eq:Fund} with special quotient $L$. As in Step 1, one finds $j=1$, hence $N$ has to be special and $t=1$, so $\mathcal V$ must be of the form $\mathcal V_{d}^{\delta,1,1}$ (i.e. loci of the form $\mathcal V_{d}^{\delta,j}$ are excluded). We have two cases to consider: (a) $N$ non-effective, (b) $N$ effective.
\noindent {\bf Case (a)}. As in Remark \ref{rem:bohb}-(1), recall that a necessary condition for $\dim(\mathcal V_{d}^{\delta,1,1}) = \nu^{\delta,1,1}_d \geqslant \rho_d^{k_2}$ is $\varphi_0(\delta,1,1) \geqslant 0$, i.e. $2g-3 \leqslant \delta \leqslant 2g-2$.
In case $\delta = 2g-2$, $[\mathcal{F}] \in \mathcal{V}_d^{2g-2,1,1}$ general is not rsp via $\omega_C$ (this follows from the fact that $i=2$, together with computations as in \eqref{eq:casaciro2}).
In case $\delta = 2g-3$, the hypotheses $ 2g-2 \leqslant d \leqslant 3g-6$ ensure stability for $\mathcal{F}$ (cf. Theorems \ref{thm:mainext1}, \ref{unepi} and Proposition \ref{C.F.4}). By definition, $\mathcal V_{d}^{2g-3,1,1} = {\rm Im} (\pi_{d,2g-3}|_{\widehat{\mathcal W}^{\rm Tot}_1})$, where $\widehat{\mathcal W}^{\rm Tot}_1 \subset \mathbb{P}(\mathfrak{E}_{2g-3})$ is the good locus for $t=1$. To accomplish the proof, we need to show that the fibre of $\pi_{d,2g-3}|_{\widehat{\mathcal W}^{\rm Tot}_1}$ over $[\mathcal{F}] \in \mathcal V_{d}^{2g-3,1,1}$ general is finite. As in \eqref{eq:aiutoB} in Remark \ref{rem:17lug}, it suffices to prove the following:
\begin{claim}\label{4.1} $a_{F}(2g-3) = 0$. \end{claim} \begin{proof}[Proof of the Claim] Assume by contradiction that $a_F(2g-3) \neq 0$. Since $\mathcal{F}$ is stable, hence unsplit, from $\varphi_0(2g-3,1,1) =0$ and Remark \ref{rem:rigid}, $a_F(2g-3)$ must be $1$ (cf. Proposition \ref{prop:lem4}).
Let $\mathfrak{F}$ be the corresponding one-dimensional family of sections of $F= \mathbb{P}(\mathcal{F})$, which has positive self-intersection, since $\mathcal{F}$ is stable. From Proposition \ref{prop:lem4} and Step 1, the system $\mathfrak{F}$ cannot be contained in a linear system, otherwise we would have sections of degree lower than $2g-3$.
Thus, from the proof of Proposition \ref{prop:lem4}, there is an open, dense subset $C^0 \subset C$ such that, for any $q \in C^0$, one has $\mathcal{F}= \mathcal{F}_v$ with $ v = v_q \in \Ext^1(\omega_C(-q), N_q)$, where $\{N_q\}_{q \in C^0}$ is a $1$-dimensional family of non-isomorphic line bundles of degree $d - 2g+3$, whose general member is general in $\Pic^{d-2g+3}(C)$. Let $\Gamma_q \subset F_v$ be the section corresponding to $\mathcal{F}_v \to\!\!\to \omega_C(-q)$; so the one-dimensional family is $\mathfrak{F} = \{\Gamma_q \}_{q\in C^0}$.
We set $\widetilde{\Gamma}_q := \Gamma_q + f_q$, for $q \in C^0$. From \eqref{eq:Fund2}, $\widetilde{\Gamma}_q$ corresponds to $\mathcal{F}_v \to\!\!\!\!\to \omega_C(-q) \oplus \mathcal{O}_q$, whose kernel we denote by $N'_q$. Then $\widetilde{\mathfrak{F}} = \{ \widetilde{\Gamma}_q \}_{q\in C^0}$ is a one-dimensional family of unisecants of $F_v$ of degree $2g-2$ and speciality $1$ (cf. \eqref{eq:iLa}). For $h,q \in C^0$, we have $$c_1(N'_h) = \det(\mathcal{F}_v) \otimes \omega_C^{\vee} = c_1(N'_q).$$Therefore,
from \eqref{eq:isom2}, $\widetilde{\mathfrak{F}}$ is contained in a linear system $|\mathcal{O}_{F_v} (\Gamma)|$. By Bertini's theorem, the general member of $|\mathcal{O}_{F_v} (\Gamma)|$ is a section of degree $2g-2$. In particular,
$\dim(|\mathcal{O}_{F_v} (\Gamma)|) \geqslant 2$.
If $L_{\Gamma}$ is the corresponding quotient line bundle, since $\Gamma \sim \widetilde{\Gamma}_q$, then $c_1(L_{\Gamma}) = \omega_C$, i.e. $\Gamma$ is a canonical section. This is a contradiction: indeed, if $M_{\omega_C}$ is the kernel of the surjection $\mathcal{F}_v \to\!\!\!\to \omega_C$, we have (cf. \eqref{eq:casaciro2})
$$2 \leqslant \dim(|\mathcal{O}_{F_v} (\Gamma)|) = h^0(\mathcal{F}_v \otimes M^{\vee}_{\omega_C}) - 1 = i(\mathcal{F}_v) -1 =1.$$ \end{proof}
\noindent {\bf Case (b)}. As in Remark \ref{rem:bohb}-(2), a necessary condition for $ \nu^{\delta,1,1}_d \geqslant \rho_d^{k_2}$ is \eqref{eq:8.1}, i.e. $nr - n - 1 <0$. Since $n,r\geqslant 1$, the only possibility is $r = t= 1$. Taking into account \eqref{eq:fo1t}, \eqref{eq:fnjt} and Proposition \ref{C.F.4}, one has $\varphi_n(\delta, 1,1) = \varphi_0(\delta, 1,1) = \delta-2g+3 \geqslant 0$. In any case we would have $d \geqslant 3g-5$, which is out of our range. Thus, case (b) cannot occur.
\noindent {\bf Step 3}. In this step we prove that $\overline{\mathcal V_{d}^{2g-3,1,1}}$ is actually a component of $\widetilde{B_C^{k_2}}(d)$.
Let $\mathcal B \subseteq \widetilde{B_C^{k_2}}(d)$ be a component containing $\overline{\mathcal V_{d}^{2g-3,1,1}}$ and let $[\mathcal{F}] \in \mathcal B$ general. By semicontinuity, $\mathcal{F}$ has speciality $i=2$. It has also a special presentation as in \eqref{eq:Fund}, with $2g-3 \leqslant \deg(L) = \delta \leqslant 2g-2$. Since $C$ has general moduli, then $h^1(L) = j =1$ so the corank of the coboundary map is $t=1$. If $\delta = 2g-3$, from Step 2 we are done.
Assume therefore $\delta = 2g-2$, so $L = \omega_C$. Notice that: $(i)$ $r = h^1(N) \leqslant g$; $(ii)$ $m = \dim({\rm Ext}^1(\omega_C, N)) \geqslant g+1$. Indeed, (i) is trivial if $N$ is effective. On the other hand, if $h^0(N)=0$, then $h^1(N) = 3g-3-d < g$ since $d \geqslant 2g-2$. As for (ii), $m = 5g-5-d$ (cf. \eqref{eq:lem3note}), hence (ii) follows since $d \leqslant 3g-6$. So we are in position to apply Theorem \ref{thm:mainext1} and Corollary \ref{cor:mainext1}, which yield that $\widehat{\mathcal{W}}^{\rm Tot}_1 \subseteq \mathbb{P}(\mathfrak{E}_{2g-2})$ is irreducible and good. Hence $\dim (\widehat{\mathcal{W}}_1^{\rm Tot}) \leqslant 8g-10 - 2d$ (equality holds when $N$ is general, i.e. non-effective).
On the other hand, $\mathcal B$ is the image of $\widehat{\mathcal{W}}^{\rm Tot}_1$ via $\pi_{d,2g-2}$ (cf. Step 1) and the general fibre of this map has dimension at least $1$ because $h^1(\mathcal{F}) = 2$ (cf. \eqref{eq:isom2} and computation as in \eqref{eq:casaciro2}). Thus $$8g-11-2d \geqslant \dim(\mathcal B) \geqslant \dim (\overline{\mathcal V_{d}^{2g-3,1,1}}) = 8g-11-2d = \rho_d^{k_2}.$$This proves that $\mathcal B = \overline{\mathcal V_{d}^{2g-3,1,1}}$ is a regular component.
The previous argument also shows that $ \overline{\mathcal V_{d}^{2g-3,1,1}} = \overline{\mathcal V_{d}^{2g-2,1,1}}$ (cf. Remark \ref{rem:bohb}) and that the general fibre of $\pi_{d, 2g-2}|_{\widehat{\mathcal{W}}_1^{\rm Tot}}$ onto $\mathcal V_{d}^{2g-2,1,1}$ has dimension exactly $1$ (actually, it is a $\mathbb{P}^1$, cf. Lemma \ref{lem:ovviolin}).
\noindent {\bf Step 4}. Assume we have a component $\mathcal B \subseteq \widetilde{B_C^{k_2}}(d)$, whose general point corresponds to a vector bundle $\mathcal{F}$ with $i(\mathcal{F})=2$. From Step 1, $[\mathcal{F}] \in \mathcal B$ general can be specially presented as in \eqref{eq:Fund}, with $h^1(L) = j =1$, so $N$ is special. The same discussion as in Steps 2 and 3 shows that $\mathcal B = \overline{\mathcal V_{d}^{2g-3,1,1}}$. \end{proof}
\begin{remark}\label{rem:casaciro3} (i) For $[\mathcal{F}] \in \overline{\mathcal V_{d}^{2g-3,1,1}}$ general, one has $\ell = g-1$, $r = 3g-4-d$ and $m = 5g-7-d$ (cf. \eqref{eq:lem3note}, \eqref{eq:exthyp1}). So $\ell \geqslant r + 1$ (because $d \geqslant 2g-2$), moreover $m \geqslant \ell +g$ (because $d \leqslant 3g-6$). Note that the inequality $m \geqslant \ell +1$ is necessary to ensure $\emptyset \neq \mathcal{W}_1^{\rm Tot} \subset \mathbb{P}(\mathfrak{E}_{2g-2})$ (see the proof of Theorem \ref{thm:mainext1}).
\noindent (ii) Step 4 of Theorem \ref{i=2} shows that, if $\mathcal B$ is a component of $ \widetilde{B_C^{k_2}}(d)$, different from $\overline{\mathcal V_{d}^{2g-3,1,1}}$, then $\mathcal B$ is a component of $ \widetilde{B_C^{k_i}}(d)$, for some $i \geqslant 3$, and as such it is not regular. Otherwise, we would have $$8g-11 - 2d \leqslant \dim(\mathcal B) = 4g-3 - i(d-2g+2+i) \leqslant 10g-3d-18,$$i.e. $d \leqslant 2g-7$ which is out of our range for $d$.
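In detail: since $d \geqslant 2g-2$, the quantity $4g-3 - i(d-2g+2+i)$ is strictly decreasing in $i$, hence for $i \geqslant 3$ it is at most its value at $i=3$, namely
$$4g-3 - 3(d-2g+5) = 10g - 3d - 18;$$
comparing with $\rho_d^{k_2} = 8g-11-2d$ then yields $d \leqslant 2g-7$.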
\end{remark}
\begin{remark}\label{rem:i=2} (1) Take $\widetilde{\Gamma}_p$ as in the proof of Claim \ref{4.1}. Then, $\mathcal{N}_{\widetilde{\Gamma}_p /F_v}$ is non special on the (reducible) unisecant $\widetilde{\Gamma}_p$. Indeed,
$\omega_{\widetilde{\Gamma}_p} \otimes \mathcal{N}_{\widetilde{\Gamma}_p /F_v}^{\vee}|_{\Gamma}
\cong \mathcal{O}_{\Gamma} (K_F)$ whereas $ \omega_{\widetilde{\Gamma}_p} \otimes \mathcal{N}_{\widetilde{\Gamma}_p /F_v}^{\vee}|_{f_p} \cong \mathcal{O}_{\mathbb{P}^1}(-2)$. Thus $\widetilde{\Gamma}_p \in {\rm Div}^{1,2g-2}_{F_v}$ is a smooth point. Moreover, $h^0(\mathcal{N}_{\widetilde{\Gamma}_p /F_v}) = 3g-2-d \geqslant 2$ for $d \leqslant 3g-4$. From the generality of $v$ in the good locus $\mathcal W_1$, \eqref{eq:isom2} and from computation as in \eqref{eq:casaciro2}, one has that $\widetilde{\Gamma}_p \subset F_v$ is a (reducible) unisecant, moving in a complete linear pencil of special unisecants whose general member is a canonical section, and $\widetilde{\Gamma}_p$ is algebraically equivalent on $F_v$ to non-special sections of degree $2g-2$.
As soon as $d \leqslant 3g-6$, there are in ${\rm Div}^{1,2g-2}_{F_v}$ unisecants containing two general fibres (cf. Proposition \ref{prop:lem4}), hence the ruled surface $F_v$ has (non-special) sections of degree smaller than $2g-3$.
\noindent (2) Take $N \in \Pic^k(C)$ general with $0 < k \leqslant g-2$. Since $N$ is special, non-effective,
from Corollary \ref{cor:mainext1} and Remark \ref{unepine}, $v \in \Lambda_1 \subset \Ext^1(\omega_C,N)$ general determines $\mathcal{F}:=\mathcal{F}_v$ stable, with $i(\mathcal{F})=2$. If $\Gamma$ denotes the canonical section corresponding to $\mathcal{F} \to \omega_C$, from \eqref{eq:isom2} one has $\dim(|\mathcal{O}_{F} (\Gamma)|) = 1 $ and all unisecants in this linear pencil are special (cf. Lemma \ref{lem:ovviolin}). Since $F$ is indecomposable,
$|\mathcal{O}_{F} (\Gamma)|$ has base-points (cf. the proof of Proposition \ref{prop:lem4}, from which we keep the notation). Thus, $\mathcal{F}$ is rsp via $\omega_C(-p)$, for $p= \rho(q)$ and $q \in F$ a base point of the pencil (recall Remark \ref{rem:bohb}). \end{remark}
\begin{remark}\label{i=2Teix} In \cite{TB0,TB00} the locus $\widetilde{B_C^{2}}(b)$ is studied, for $g \geqslant 2$ and $3 \leqslant b \leqslant 2g-1$. It is proved there with different arguments that, when $C$ has general moduli, then $\widetilde{B_C^{2}} (b)$ is not empty, irreducible, regular (with $\rho^2_b = 2b-3$), generically smooth and $[\mathcal{E}] \in \widetilde{B_C^{2}}(b)$ general is stable, with $h^0(\mathcal{E}) =2$, fitting in a sequence \begin{equation}\label{eq:aiutoF} 0 \to \mathcal{O}_C \to \mathcal{E} \to L \to 0. \end{equation}Considering the natural isomorphism $\widetilde{B_C^2}(b) \cong \widetilde{B_C^{k_2}}(4g-4-b)$, when $d := 4g-4-b$ is as in Theorem \ref{i=2} we recover Teixidor's results (without irreducibility) via a different approach. Thus, Teixidor's results and our analysis imply that, for any $2g-2 \leqslant d \leqslant 4g-7$, $\widetilde{B_C^{k_2}}(d) = \overline{\mathcal V_{d}^{2g-3,1,1}}$. Theorem \ref{i=2} provides in addition the rigidly special presentation of the general element of $\widetilde{B_C^{k_2}}(d)$. \end{remark}
\begin{theorem}\label{prop:M22K} Let $C$ be of genus $g \geqslant 3$, with general moduli. Then, $\widetilde{M_C^2}(2, \omega_C)$ is non-empty and irreducible. Moreover, it is regular (i.e. of dimension $3g-6$), and its general point $[\mathcal{F}_v]$ fits into an exact sequence $$0 \to \mathcal{O}_C(p) \to \mathcal{F}_v \to \omega_C (-p) \to 0,$$where
\begin{itemize} \item[$\bullet$] $p \in C$ is general, and \item[$\bullet$] $v \in \Lambda_1 =\mathcal W_1 \subset \Ext^1(\omega_C(-p), \mathcal{O}_C(p))$ is general. \end{itemize} \end{theorem}
\begin{proof} Irreducibility follows from \cite[Thm. 1.3]{Oss2}. With notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp}, one has $$\ell = r = g-1, \; m = h^1(2p- K_C) = 3g - 5 \geqslant g = \ell + 1;$$from Corollary \ref{cor:mainext1}, $\mathcal{W}_1$ is good and $v \in \mathcal W_1$ general is such that $\cork(\partial_v) =1$. In particular, $\dim(\mathcal W_1) = 3g-6$.
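For the reader's convenience, the value of $m$ follows from Serre duality and Riemann-Roch:
$$m = h^1(2p - K_C) = h^0(2K_C - 2p) = \deg(2K_C - 2p) - g + 1 = (4g-6) - g + 1 = 3g-5,$$
since $\deg(2K_C - 2p) = 4g-6 > 2g-2$ for $g \geqslant 3$, so $2K_C - 2p$ is non-special.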
Stability of $\mathcal{F}_v$, with $1 < s(\mathcal{F}_v) = \sigma < g$, follows from Proposition \ref{prop:LN}. Finally one uses the same approach as in Claim \ref{4.1} to deduce that $\pi_{2g-2, 2g-3}|_{\mathcal W^{\rm Tot}_1}$ is generically finite (cf. \eqref{eq:pde}), since $\mathcal{F}_v$ is rsp via $\omega_C(-p)$. \end{proof}
Generic smoothness of the components of $\widetilde{M_C^2}(2, \omega_C)$ follows from results in \cite{TB000,Beau2}. Theorem \ref{prop:M22K} can be interpreted in the setting of \cite{BeFe} as saying that, for a curve $C$ of general moduli of genus $g \geqslant 3$, $\mathbb{P}({\rm Ext}^1(\omega_C, \mathcal{O}_C))$ is not contained in the divisor $D_1$ considered in that paper.
\subsection{Case $i=3$}\label{ssec:3} One has $\rho_d^{k_3} = 10g - 18 - 3d$. We have the following:
\begin{theorem}\label{i=3} Let $C$ be of genus $g \geqslant 8$, with general moduli. For any integer $d$ s.t. $2g-2 \leqslant d \leqslant \frac{5}{2}g-6$, one has $\widetilde{B_C^{k_3}}(d) \neq \emptyset$. Moreover:
\noindent (i) $ \overline{\mathcal V_d^{2g-4,1,2}}$ is the unique component of $\widetilde{B_C^{k_3}}(d) $ of types either \eqref{eq:Nudde} or \eqref{eq:nuddet}, whose general point corresponds to a vector bundle $\mathcal{F}$ with $i(\mathcal{F}) = 3$. Furthermore, it is regular and $\overline{\mathcal V_d^{2g-4,1,2}}= \overline{\mathcal V_d^{2g-3,1,2}} = \overline{\mathcal V_d^{2g-2,1,2}}$.
\noindent (ii) For $[\mathcal{F}]\in \overline{\mathcal V_d^{2g-4,1,2}}$ general, one has $s(\mathcal{F}) \geqslant 5g-10-2d-\epsilon \geqslant 2-\epsilon$ and $\mathcal{F}$ fits into an exact sequence $$0 \to N \to \mathcal{F} \to \omega_C (-D_2) \to 0,$$where \begin{itemize} \item[$\bullet$] $D_2 \in C^{(2)}$ is general, \item[$\bullet$] $N \in \Pic^{d-2g+4}(C)$ is general (special, non-effective), \item[$\bullet$] $\mathcal{F}=\mathcal{F}_v$ with $v$ general in a good component $\Lambda_2 \subset \Ext^1(N, \omega_C(-D_2))$ (cf. Definition \ref{def:goodc}). \end{itemize}
\noindent (iii) Any section $\Gamma \subset F= \mathbb{P}(\mathcal{F})$, corresponding to a quotient $\mathcal{F} \to\!\!\to \omega_C(-D_2)$, is not of minimal degree. However, it is minimal among special sections of $F$; moreover, $\Gamma$ is asi but not ai (i.e., $\mathcal{F}$ is rsp via $\omega_C(-D_2)$). \end{theorem}
\begin{proof} As in Theorem \ref{i=2}, once (i) has been proved, parts (ii)-(iii) follow from results proved in previous sections. Precisely, by definition of $\mathcal V_d^{2g-4,1,2}$ one has $L = \omega_C(-D_2)$, with $D_2 \in C^{(2)}$, $t=2$ and $N \in {\rm Pic}^{d-2g+4} (C)$ of speciality $r \geqslant 2$. From regularity of the component, Proposition \ref{C.F.4} and \eqref{eq:fojt}, \eqref{eq:fo1t}, \eqref{eq:fnjt} give $$0 = \nu^{2g-4,1,2}_d - \rho_d^{k_3} \leqslant \dim(\widehat{\Lambda}^{\rm Tot}_2) - \rho_d^{k_3} = \varphi_n(2g-4,1,2) = \varphi_0(2g-4,1,2) - n (r-2) = - n (r-2).$$Thus, $n(r-2) = 0$. This implies that
the general fibre of $\pi_{d,2g-4}|_{\widehat{\Lambda}^{\rm Tot}_2}$ is finite, i.e. $[\mathcal{F}_v] \in \mathcal V_d^{2g-4,1,2}$ general is rsp via $\omega_C(-D_2)$ (correspondingly $\Gamma \subset F_v = \mathbb{P}(\mathcal{F}_v)$ is asi as in (iii)).
Since $n(r-2) = 0$, then either $n=0$ or $r=2$. The latter case cannot occur otherwise we would have $n = d - 3g+7 \leqslant - \frac{g}{2} + 1 <0$, by the assumptions on $d$.
Thus $n=0$ and $r = 3g-5-d$. Moreover, from \eqref{eq:lem3note}, \eqref{eq:clrt}, one has $m = 5g-10 -d$ and $c (\ell, r,2) = 2d+10 -4g$, so a good component $\Lambda_2 \subset \mathbb{P}({\rm Ext}^1(\omega_C(-D_2), N))$ has dimension $9g-20-3d$. Adding to this quantity $g$, for the parameters of $N$, we get $10g - 20 - 3d$. Thus, regularity forces $D_2 $ to be general in $C^{(2)}$.
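Explicitly, the parameter count behind the last assertion reads
$$\underbrace{(9g-20-3d)}_{v \in \Lambda_2} \; + \; \underbrace{g}_{N \in \Pic^{d-2g+4}(C)} \; + \; \underbrace{2}_{D_2 \in C^{(2)}} \; = \; 10g-18-3d \; = \; \rho_d^{k_3},$$
so regularity is achieved exactly when $D_2$ varies with two parameters, i.e. when $D_2$ is general in $C^{(2)}$.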
Now, $\mathcal{N}_{\Gamma/F_v} \cong K_C - D_2 -N$ (cf. \eqref{eq:Ciro410}) so $h^i(\mathcal{N}_{\Gamma/F_v}) = h^{1-i}(N+D_2)$ for $0 \leqslant i \leqslant 1$. By the assumptions on $d$, $\deg(N+D_2) = d-2g+6 \leqslant \frac{g}{2}$, thus generality of $N$ implies that $N+D_2$ is also general, so $h^0(N+D_2) = 0$ and $h^1(N+D_2) = 3g-7-d \geqslant \frac{g}{2} -1$. This implies that $\Gamma$ is not ai and not of minimal degree among quotient line bundles of $\mathcal{F}$.
Numerical conditions of Theorem \ref{unepi} (see also Remark \ref{unepine}) are satisfied for $j=1$, $t = 2$ and $\delta = 2g-4$, under the assumptions $d \leqslant \frac{5}{2} g -6$.
Finally, the fact that $\Gamma$ is of minimal degree among special, quotient line bundles of $\mathcal{F}$ follows from the proof of part (i) below, which consists of the following steps.
\noindent {\bf Step 1}. In this step we determine which of the loci of the form $\mathcal V_{d}^{\delta,j}$, as in \eqref{eq:Nudde}, or $\mathcal V_{d}^{\delta,j,t}$, as in \eqref{eq:nuddet}:
\begin{itemize} \item[(a)] has the general point $[\mathcal{F}]$ with $i(\mathcal{F}) = 3$,
\item[(b)] is the image, via $\pi_{d,\delta}$, of a parameter space in $\mathbb{P}(\mathfrak{E}_{\delta})$ of dimension at least $\rho_d^{k_3} = 10g - 18 - 3d$. \end{itemize}
Let $\mathcal V$ be such locus and use notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp}. From Remarks \ref{rem:claim1}, \ref{30.12l}, \ref{rem:C.F.3}, conditions (a) and (b) are both satisfied only if the presentation of $[\mathcal{F}] \in \mathcal V$ general as in \eqref{eq:Fund} with $L$ special and effective, is such that $N \in {\rm Pic}^{d-\delta}(C)$ is special and the coboundary map $\partial : H^0(L) \to H^1(N)$ is not surjective. Possibilities are:
\noindent $(i)$ $j=1$ and $t= \cork(\partial)=2$;
\noindent $(ii)$ $j=2$ and $t=1$.
In any event, one has $\ell \geqslant r$ (in particular, we will be in position to apply Theorems \ref{thm:mainext1}, \ref{thm:mainextt}; cf. e.g the proof of Claim \ref{cl:good} below). Indeed:
\noindent $\bullet$ in case $(i)$, the only possibilities for $\ell <r$ are $r-2 \leqslant \ell \leqslant r-1$. Then, $\dim(\mathbb{P}({\rm Ext}^1(L,N))) = 2 \delta - d + g - 2$, $\rho(L) = g - (\delta - g + 2)$, $\rho(N) = g-rn$, so the number of parameters is $\delta + 4(g-1) - d - rn < \rho_d^{k_3}$ since $d \leqslant \frac{5}{2}g -6$.
\noindent $\bullet$ in case $(ii)$, the only possibility for $\ell <r$ is $\ell = r-1$. The same argument as above applies, the only difference is that $\rho(L) = g - 2(\delta - g + 3)$.
Since $\ell \geqslant r$, we see that case $(ii)$ cannot occur by \eqref{eq:fojtb} and \eqref{eq:fnjt}. Thus, we focus on $ \mathcal V_{d}^{\delta,1,2}$, investigating for which $\delta$ it satisfies (b). We will prove that this only happens for $2g-4 \leqslant \delta \leqslant 2g-2$.
We have two cases: (1) $N$ effective, (2) $N$ non-effective. We will show that only case (2) occurs.
\noindent {\bf Case (1)}. When $N$ is effective, from Remark \ref{rem:bohb}, a necessary condition for (b) to hold is \eqref{eq:8.1}, which reads $(r-2) n - 2 <0$. This gives $2 \leqslant r \leqslant 3$, since $r \geqslant t=2$. We can apply Theorem \ref{unepi} (more precisely, Remark \ref{unepie}): the first line of bounds on $\delta$ in Remark \ref{unepie} gives $\delta \leqslant \frac{3g-\epsilon}{2} + r - 4 $. In particular, one must have $\delta \leqslant \frac{3g-\epsilon}{2} -1$.
On the other hand, another necessary condition for (b) to hold is $\varphi_n(\delta, 1,2) \geqslant 0$ (cf. Proposition \ref{C.F.4}). From Remark \ref{rem:conpar2}, $\varphi_n(\delta, 1,2) \leqslant \varphi_0(\delta, 1,2) = \delta-2g+4$ (cf. \eqref{eq:fo1t}) and $\varphi_0(\delta, 1,2) \geqslant 0$ gives $\delta \geqslant 2g-4$ which contradicts $\delta \leqslant \frac{3g-\epsilon}{2} -1$, since $g \geqslant 8$.
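\noindent Explicitly, the incompatibility of the two bounds is elementary: $2g-4 \leqslant \delta \leqslant \frac{3g-\epsilon}{2} - 1$ would give
$$2g - 4 \leqslant \frac{3g}{2} - 1, \quad \mbox{i.e.} \quad g \leqslant 6,$$
against the assumption $g \geqslant 8$.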
\noindent {\bf Case (2)}. When $N$ is non-effective, we apply Theorem \ref{unepi} (more precisely, Remark \ref{unepine}), with $j=1$ and $t=2$. By the same argument as in case (1), we see that $2g-4 \leqslant \delta \leqslant 2g-2$.
\noindent {\bf Step 2}. In this step we prove that the loci $\mathcal V_d^{\delta,1,2}$, with $2g-4 \leqslant \delta \leqslant 2g-2$, are not empty. Precisely, we will exhibit components $\mathcal V_d^{\delta,1,2}$ which are the image, via $\pi_{d,\delta}$, of a total good component $\widehat{\Lambda}^{\rm Tot}_2 \subset \mathbb{P}(\mathfrak{E}_\delta)$, of dimension $\rho_d^{k_3} + \delta - 2g + 4$ (cf. Definition \ref{def:goodtot} and \eqref{eq:nuddet}).
We will treat only the case $\delta= 2g-4$, i.e. $L= \omega_C(-D_2)$, with $D_2 \in C^{(2)}$, since the cases $L= \omega_C, \, \omega_C(-p)$ can be dealt with by similar arguments and are left to the reader.
\begin{claim}\label{cl:Grass} Let $N \in \Pic^{d-2g+4}(C)$ be general. For $V_2 \in \mathbb{G} (2, H^0(K_C-N))$ general, the map $\mu_{V_2}$ as in \eqref{eq:muW} is injective. \end{claim} \begin{proof}[Proof of Claim \ref{cl:Grass}] The general $V_2 \in \mathbb{G} (2, H^0(K_C-N))$ determines a base point free linear pencil on $C$. Indeed, $h^0(K_C-N) = 3g-5-d \geqslant 5$. Take $\sigma_1, \sigma_2 \in H^0(K_C-N)$ general sections. If $p \in C$ is such that $\sigma_i(p)=0$, for $i=1,2$, by the generality of the sections we would have
$p \in {\rm Bs}(|K_C-N|)$ so $h^0(N+p)=1$. This is a contradiction because $N$ is general and $\deg(N) < g-1$. The injectivity of $\mu_{V_2}$ follows from the base-point-free pencil trick: indeed, ${\rm Ker}(\mu_{V_2}) \cong H^0(N(-D_2))$ which is zero since $N$ is non-effective. \end{proof}
\begin{claim}\label{cl:good} Let $N \in \Pic^{d-2g+4}(C)$ and $D_2 \in C^{(2)}$ be general. Then, there exists a unique good component $\Lambda_2 \subset \Ext^1(\omega_C(-D_2), N)$ whose general point $v$ is such that ${\rm Coker}(\partial_v)^{\vee}$ is general in $\mathbb{G} (2, H^0(K_C-N))$ (cf. Remark \ref{rem:wt}). \end{claim} \begin{proof}[Proof of Claim \ref{cl:good}] With notation as in \eqref{eq:lem3note}, \eqref{eq:exthyp1}, we have $\ell = g-2$, $m= 5g-9-d$ and $r= 3g-5-d$. The assumptions on $d$ and $g$ imply \begin{equation}\label{eq:aiutow2} m \geqslant 2 \ell +1 \; \;{\rm and} \;\; \ell \geqslant r \geqslant t=2 \end{equation} (cf. Step 1 for $\ell \geqslant r$). From \eqref{eq:aiutow2} and Claim \ref{cl:Grass}, we are in a position to apply Theorem \ref{thm:mainextt}, with $\eta=0$ and $\Sigma_{\eta} = \mathbb{G} (2, H^0(K_C-N))$.
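\noindent Explicitly, with the above values of $\ell$, $m$ and $r$, the first and last inequalities in \eqref{eq:aiutow2} read
$$m - (2\ell + 1) = 3g - 6 - d \geqslant 0 \qquad \mbox{and} \qquad r - t = 3g - 7 - d \geqslant 0,$$
both of which hold since $d \leqslant \frac{5}{2}g - 6 \leqslant 3g - 7$ (as $g \geqslant 2$); for $\ell \geqslant r$, see Step 1.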
This yields the existence of a good component $\Lambda_2 \subseteq \mathcal W_2 \subset {\rm Ext^1}(\omega_C(-D_2), N)$. Actually, $\Lambda_2$ is the only good component whose general point $v$ gives ${\rm Coker}(\partial_v)^{\vee} = V_2$ general in $\mathbb{G} (2, H^0(K_C-N))$.
Indeed, any component of $\mathcal W_2$, whose general point $v$ is such that $\dim({\rm Coker}(\partial_v)) =2$, is obtained in the following way (cf. the proofs of Theorems \ref{thm:mainext1}, \ref{thm:mainextt}):
\noindent $\bullet$ take any $\Sigma \subseteq \mathbb{G} (2, H^0(K_C-N))$ irreducible, of codimension $\eta\geqslant 0$;
\noindent $\bullet$ for $V_2$ general in $\Sigma$, consider $H^0(\omega_C(-D_2)) \otimes V_2$ and the map $\mu_{V_2}$ as in \eqref{eq:muW};
\noindent $\bullet$ let $\tau := \dim({\rm Ker}(\mu_{V_2})) \geqslant 0$ and $\mathbb{P} := \mathbb{P}({\rm Ext}^1(\omega_C(-D_2), N)) = \mathbb{P}(H^0(2K_C -D_2 - N)^{\vee})$ (cf. \eqref{eq:PP});
\noindent $\bullet$ consider the incidence variety
$$\mathcal{J}_{\Sigma} := \left\{(\sigma, \pi) \in \Sigma \times \mathbb{P} \; | \; {\rm Im}(\mu_{V_2}) \subset \pi \right\}.$$Since $m \geqslant 2 \ell + 1 - \tau$, one has $\mathcal{J}_{\Sigma} \neq \emptyset$ (cf. the proofs of Theorems \ref{thm:mainext1}, \ref{thm:mainextt});
\noindent $\bullet$ consider the projections $\Sigma \stackrel{pr_1}{\longleftarrow} \mathcal{J}_{\Sigma} \stackrel{pr_2}{\longrightarrow} \mathbb{P}$;
\noindent
$\bullet$ the fibre of $pr_1$ over any point $V_2$ in the image is $\left\{ \pi \in \mathbb{P}\,|\, {\rm Im}(\mu_{V_2}) \subset \pi\right\}$, i.e. it is isomorphic to the linear system of hyperplanes of $\mathbb{P}$ passing through the linear subspace $\mathbb{P}({\rm Im}(\mu_{V_2}))$. For $V_2 \in \Sigma$ general, this fibre is irreducible of dimension $m -1 - 2 \ell + \tau = 3g-6-d+\tau$. In particular, there exists a unique component $\mathcal{J} \subseteq \mathcal{J}_{\Sigma}$ dominating $\Sigma$ via $pr_1$;
\noindent $\bullet$ since $r=3g-5-d$, one has
$$\dim(\mathcal{J}) = 9g-20-3d + \tau - \eta = {\rm expdim}(\widehat{\mathcal W}_2) + \tau - \eta,$$where $\widehat{\mathcal W}_2 = \mathbb{P}(\mathcal W_2) \subset \mathbb{P}$ (notation as in the proof of Theorem \ref{thm:mainext1}). By construction, $pr_2(\mathcal{J}) \subseteq \widehat{\mathcal W}_2$. Moreover, if $\epsilon$ denotes the dimension of the general fibre of $pr_2|_{\mathcal{J}}$, then $pr_2(\mathcal{J})$ can fill up a component $\mathcal X$ of $\widehat{\mathcal W}_2$ only if $\tau - \eta - \epsilon \geqslant 0$: the component $\mathcal X$ is good when equality holds.
When $\Sigma = \mathbb{G} (2, H^0(K_C-N))$, then $\eta = 0$ and, by Claim \ref{cl:Grass}, also $\tau = 0$. Since $\mathcal{J} \subseteq \mathcal{J}_{\mathbb{G} (2, H^0(K_C-N))}$ is the unique component dominating $\mathbb{G} (2, H^0(K_C-N))$, then $pr_2(\mathcal{J})$ fills up a component $\widehat{\Lambda}_2$ of $\widehat{\mathcal W}_2$, i.e. $\epsilon = 0$. Thus $\widehat{\Lambda}_2$ is good. \end{proof}
By the generality of $N \in {\rm Pic}^{d-2g+4} (C)$ and of $D_2 \in C^{(2)}$, Claim \ref{cl:good} ensures the existence of a total good component $\widehat{\Lambda}_2^{\rm Tot} \subset \mathbb{P} (\mathfrak{E}_{2g-4})$.
For $2g-4 \leqslant \delta \leqslant 2g-2$, we will denote by $V_d^{\delta}$ the total good component we constructed in this step. To ease notation, we will denote by $\mathcal V_d^{\delta}$ its image in $\widetilde{B_C^{k_3}}(d)$ via $\pi_{d,\delta}$, which is a $\mathcal V_{d}^{\delta,1,2}$ as in Step 1.
\noindent {\bf Step 3}. In this step, we prove that $\mathcal V_d^{2g-2}$ has dimension $\rho_d^{k_3}$.
From Step 2, one has $\dim(V_d^{2g-2}) = \rho_d^{k_3} + 2 = 10g - 16 - 3d$.
We want to show that the general fibre of $\pi_{d,2g-2}|_{V_d^{2g-2}}$ has dimension two. To do this, we use similar arguments as in the proof of Lemma \ref{lem:claim1}.
Let $[\mathcal{F}] \in \mathcal V_d^{2g-2}$ be general; by Step 2, $\mathcal{F} = \mathcal{F}_u$, for $u \in \widehat{\Lambda}_2 \subset \mathbb{P}(H^0(2 K_C- N)^{\vee})$ general and $N \in {\rm Pic}^{d-2g+2}(C)$ general, where $\widehat{\Lambda}_2 = pr_2(\mathcal{J})$ and
$\mathcal{J} \subset \mathbb{G} (2, H^0(K_C-N)) \times \mathbb{P}(H^0(2 K_C- N)^{\vee})$ is the unique component dominating $ \mathbb{G} (2, H^0(K_C-N))$ (cf. the proof of Claim \ref{cl:good}). Then $$(\pi_{d,2g-2}|_{V_d^{2g-2}})^{-1}([\mathcal{F}_u]) = \left\{ (N', \omega_C, u') \in V_d^{2g-2} \, | \; \mathcal{F}_{u'} \cong \mathcal{F}_u \right\}.$$In particular, one has $N \cong N'$, so $u, u' \in \widehat{\Lambda}_2 \subset \mathbb{P}(H^0(2 K_C- N)^{\vee})$.
Let $\varphi : \mathcal{F}_{u'} \stackrel{\cong}{\to} \mathcal{F}_u$ be the isomorphism between the two bundles and consider the diagram \[ \begin{array}{ccccccl} 0 \to & N & \stackrel{\iota_1}{\longrightarrow} & \mathcal{F}_{u'} & \to & \omega_C & \to 0 \\
& & & \downarrow^{\varphi} & & & \\ 0 \to & N & \stackrel{\iota_2}{\longrightarrow} & \mathcal{F}_u & \to & \omega_C & \to 0. \end{array} \]
If $u= u'$, then $\varphi = \lambda \in \mathbb{C}^*$ (since $\mathcal{F}_u$ is simple) and the maps $\lambda \iota_1$ and $\iota_2$ determine two non-zero sections $s_1 \neq s_2 \in H^0(\mathcal{F}_u \otimes N^{\vee})$. A computation similar to the one in \eqref{eq:casaciro2} shows that $h^0(\mathcal{F}_u \otimes N^{\vee}) = i(\mathcal{F}_u) = 3$, since $u \in \widehat{\Lambda}_2$ general. Therefore, if $\Gamma \subset F_u$ denotes the section
corresponding to $\mathcal{F}_u \to \!\! \to \omega_C$, $(\pi_{d,2g-2}|_{V_d^{2g-2}})^{-1}([\mathcal{F}_u])$ contains
a $\mathbb{P}^2$ isomorphic to $|\mathcal{O}_{F_u}(\Gamma)|$ (cf. \eqref{eq:isom2} and Lemma \ref{lem:ovviolin}).
The case $u \neq u'$ cannot occur. Indeed, for any inclusion $\iota_1$ as above, there exists an inclusion $\iota_2$ and a $\lambda = \lambda (\iota_1, \iota_2) \in \mathbb{C}^*$ such that $\varphi \circ \iota_1 = \lambda \iota_2$; otherwise we would have $\dim(|\mathcal{O}_{F_u}(\Gamma)|) > 2$, a contradiction. One concludes by Lemma \ref{lem:technical}.
In conclusion, the general fibre of $\pi_{d,2g-2}|_{V_d^{2g-2}}$ has dimension two (actually, this fibre is a $\mathbb{P}^2$).
\noindent {\bf Step 4}. In this step we prove that $\overline{\mathcal V_d^{2g-2}}= \overline{\mathcal V_d^{2g-3}} = \overline{\mathcal V_d^{2g-4}} := \overline{\mathcal V}$. In particular, the presentation of $[\mathcal{F}]\in \overline{\mathcal V}$ general will be specially rigid only for $\delta = 2g-4$.
From Step 2 one has $\dim(V_d^{\delta}) = \rho_d^{k_3} + \delta - 2g+4$, for $2g-4 \leqslant \delta \leqslant 2g-2$. Moreover, the general element of $V_d^{\delta}$ can be identified with a pair $(F,\Gamma)$, where $F = \mathbb{P}(\mathcal{F})$, $\Gamma \subset F$ a section corresponding to $\mathcal{F} \to\!\!\to \omega_C(-D)$, where
$D \in C^{(2g-2-\delta)}$ and, for $\delta = 2g-2$, one has $D = 0$ and $\dim(|\mathcal{O}_F(\Gamma) |)=2$.
We will now prove that there exist dominant, rational maps:
\noindent $(a)$ $r_1: V^{2g-2}_d \times C \dasharrow V^{2g-3}_d$, such that $r_1((F,\Gamma), p) = (F,\Gamma_p)$, where $\Gamma_p \subset F$ is a section corresponding to $\mathcal{F} \to\!\!\to \omega_C(-p)$,
\noindent $(b)$ $r_2: \widetilde{V}^{2g-2}_d \dasharrow V^{2g-4}_d$, where $ \widetilde{V}^{2g-2}_d$ is a finite cover $\varphi: \widetilde{V}^{2g-2}_d \to V^{2g-2}_d$ endowed with a rational map $\psi:\widetilde{V}^{2g-2}_d \dasharrow C^{(2)}$: if $\xi \in \widetilde{V}^{2g-2}_d$ is general and $\varphi(\xi) := (F,\Gamma)$, then $r_2(\xi) = (F, \Gamma')$, with $\Gamma'$ a section corresponding to $\mathcal{F} \to\!\!\to \omega_C(-\psi(\xi))$.
\noindent The existence of these maps clearly proves that $\overline{\mathcal V_d^{2g-2}}= \overline{\mathcal V_d^{2g-3}} = \overline{\mathcal V_d^{2g-4}}$.
\noindent {$(a)$} Take $(F,\Gamma)$ general in $V^{2g-2}_d$ and $p \in C$ general. Then, the restriction map $$ \mathbb{C}^3 \cong H^0(\mathcal{O}_F(\Gamma)) \to H^0(\mathcal{O}_{f_p} (\Gamma)) \cong \mathbb{C}^2$$is surjective, because the general member of
$|\mathcal{O}_F(\Gamma) |$ is irreducible. Hence there is a unique $\Gamma_p \in | \mathcal{O}_F(\Gamma - f_p)|$.
We claim that $\Gamma_p$ is irreducible, i.e. it is a section. If not, $\Gamma_p$ would be a section plus a number $n \geqslant 1$ of fibres. As we saw, $n \leqslant 1$ (cf. Step 1) so $n=1$. This determines an automorphism of $C$ and, since $C$ has general moduli, this automorphism must be the identity. This is impossible because
the map $\Phi_{\Gamma} : F \dasharrow \mathbb{P}^2$, given by $|\mathcal{O}_F(\Gamma)|$, is dominant hence it is ramified only in codimension one.
In conclusion, $\Gamma_p$ corresponds to $\mathcal{F} \to\!\! \to \omega_C(-p)$, and this defines $r_1$. The proof that $(F, \Gamma_p)$ belongs to $V_d^{2g-3}$ is postponed for a moment (cf. Claim \ref{cl:maronna}).
\noindent {$(b)$} Given $(F, \Gamma)$ general in $V^{2g-2}_d$, we can consider the map $\Phi_{\Gamma}$ as in Case (a). Since $\Phi_{\Gamma}$ maps the rulings of $F$ to lines, it determines a morphism $\Psi : C \to C' \subset (\mathbb{P}^2)^{\vee}$. From Step 1, no (scheme-theoretical) fibre of $\Psi$ can have length bigger than two. Therefore, since $C$ has general moduli, $\Psi : C \to C'$ is birational and moreover, since $g \geqslant 8$, $C'$ has a certain number $n$ of double points, corresponding to curves of type $\Gamma_D + f_D$, with $D \in C^{(2)}$ fibre of $\Psi$ over a double point of $C'$.
Then the general point $\xi$ of $\widetilde{V}_d^{2g-2}$ corresponds to a triple $(F, \Gamma, D)$ (with $D \in C^{(2)}$ as above), the pair $(F, \Gamma_D)$ belongs to $V_d^{2g-4}$ (cf. Claim \ref{cl:maronna}) and $r_2(F, \Gamma, D) = (F,\Gamma_D)$, $\psi(F, \Gamma, D)= D$.
\begin{claim}\label{cl:maronna} With the above notation, $ (F, \Gamma_p)$ belongs to $V_d^{2g-3}$ and $(F, \Gamma_D)$ belongs to $V_d^{2g-4}$.
\end{claim} \begin{proof}[Proof of Claim \ref{cl:maronna}] We prove the claim for $(F, \Gamma_D)$, since the proof is similar in the other case. Take $(F,\Gamma)$ general in $V^{2g-2}_d$; this determines a sequence \begin{equation}\label{eq:maronna} 0 \to N \to \mathcal{F} \to \omega_C \to 0, \end{equation}where $N$ is general of degree $d-2g+2$ and the corresponding extension is general in the unique (good) component $\widehat{\Lambda}_2 \subset \mathbb{P}({\rm Ext}^1(\omega_C,N))$ dominating $\mathbb{G}(2, H^0(K_C-N))$ (cf. the proof of Step 2); thus, if $\partial$ is the coboundary map, then ${\rm Coker} (\partial) $ is a general two-dimensional quotient of $H^1(N)$.
On the other hand, $(F, \Gamma_D)$ determines a sequence $$0 \to N(D) \to \mathcal{F} \to \omega_C(-D) \to 0.$$ Since $\deg(N(D)) = d-2g+4 \leqslant \frac{g}{2} - 2 < g-1$ and $N(D)$ is general of its degree, then $h^0(N(D))=0$. In view of $$0 \to N \to N(D) \to \mathcal{O}_D \to 0,$$one has the exact sequence $$0 \to H^0(\mathcal{O}_D) \cong \mathbb{C}^2 \to H^1(N) \stackrel{\alpha}{\longrightarrow} H^1(N(D)) \to 0.$$
The existence of the unisecant $\Gamma_D + f_D$ on $F$ gives rise to the sequence \begin{equation}\label{eq:maronna2} 0 \to N \to \mathcal{F} \to \omega_C(-D) \oplus \mathcal{O}_D \to 0 \end{equation}(cf. \eqref{eq:Fund2}). This sequence corresponds to an element $\xi \in {\rm Ext}^1( \omega_C(-D) \oplus \mathcal{O}_D, N)$, which, by Serre duality, is isomorphic to $H^0(\mathcal{O}_D)^{\vee} \oplus {\rm Ext}^1 (\omega_C(-D), N)$ (cf. \cite[Prop. III.6.7, Thm. III.7.6]{Ha}). So $\xi = (\sigma, \eta)$, with $\sigma \in H^0(\mathcal{O}_D)^{\vee}$ and $\eta \in {\rm Ext}^1 (\omega_C(-D), N) \cong H^1(N(D) \otimes \omega_C^{\vee})$.
We have the following diagram \[ \begin{array}{ccc} H^0(\mathcal{O}_D) \oplus H^0(\omega_C (-D)) & \stackrel{\partial_0}{\longrightarrow} & H^1(N) \\ \uparrow & & \;\; \downarrow^{\alpha} \\ H^0(\omega_C(-D)) & \stackrel{\partial'}{\longrightarrow} & H^1(N(D)) \\ \uparrow & & \downarrow \\ 0 & & 0 \end{array} \]where $\partial_0$, $\partial'$ are the coboundary maps. The action of $\xi$ on $\{0\} \oplus H^0(\omega_C (-D))$ coincides with the action of $\eta$ on $H^0(\omega_C (-D))$ via cup-product. This yields an isomorphism
${\rm Coker}(\partial') \stackrel{\cong}{\longrightarrow} {\rm Coker}(\partial_0)$.
Notice that \eqref{eq:maronna2} can be seen as a limit of \eqref{eq:maronna}. Since ${\rm Coker} (\partial)$ is a general two-dimensional quotient of $H^1(N)$, then also ${\rm Coker} (\partial_0)$ is general. The above argument implies that ${\rm Coker} (\partial')$ is also general, proving the assertion (cf. the proof of Claim \ref{cl:good}). \end{proof}
Finally, to prove that $r_1$, $r_2$ are dominant, it suffices to prove the following:
\begin{claim}\label{cl:ri} The general fibre of $r_i$ has dimension two, for $1 \leqslant i \leqslant 2$.
\end{claim} \begin{proof}[Proof of Claim \ref{cl:ri}] It suffices to prove that there are fibres of dimension two. For $r_1$, take $(F, \Gamma, p) $ general in $V_{d}^{2g-2} \times C$.
The fibre of $r_1$ containing this triple consists of all triples $(F, \Gamma', p)$, with $\Gamma' \in |\mathcal{O}_{F} (\Gamma)|$ so it has dimension two since $i(\mathcal{F}) =3$ (cf. computation as in \eqref{eq:casaciro2}). The same argument works for $r_2$. \end{proof}
\noindent {\bf Step 5}. In this step we prove that $\overline{\mathcal V}$ is an irreducible component of $\widetilde{B_C^{k_3}}(d)$.
\begin{claim}\label{cl:tutte} Let $(F, \Gamma_D) \in V_d^{2g-2-i}$ be general, with $1 \leqslant i \leqslant 2$ and $D \in C^{(i)}$. Then $|\Gamma_D + f_D|$ has dimension two, its general member $\Gamma$ is smooth and it corresponds to a sequence $0 \to N (-D) \to \mathcal{F} \to \omega_C \to 0$. Consequently, the pair $(F, \Gamma_D)$ is in the image of $r_i$.
\end{claim}
\begin{proof}[Proof of Claim \ref{cl:tutte}] Given the first part of the statement, the conclusion is clear. To prove the first part, note that the existence of $\Gamma \subset F$ gives an exact sequence \begin{equation}\label{eq:lala} 0 \to N \to \mathcal{F} \to \omega_C(-D) \to 0, \end{equation}hence $h^0(\mathcal{O}_F(\Gamma_D + f_D)) = h^0(\mathcal{F} \otimes N^{\vee} (D)) = h^0(\mathcal{F} \otimes \omega_C \otimes \det(\mathcal{F})^{\vee}) = h^1(\mathcal{F}) = 3$ (cf. \eqref{eq:isom2}). This implies the assertion. \end{proof}
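\noindent For the reader's convenience, we spell out the chain of isomorphisms used in the previous proof: from \eqref{eq:lala} one has $\det(\mathcal{F}) \cong N \otimes \omega_C(-D)$, so $N^{\vee}(D) \cong \omega_C \otimes \det(\mathcal{F})^{\vee}$; since $\mathcal{F}$ has rank two, $\mathcal{F} \otimes \det(\mathcal{F})^{\vee} \cong \mathcal{F}^{\vee}$, and Serre duality gives
$$h^0(\mathcal{F} \otimes N^{\vee}(D)) = h^0(\mathcal{F}^{\vee} \otimes \omega_C) = h^1(\mathcal{F}) = i(\mathcal{F}) = 3.$$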
Let now $\mathcal B \subseteq \widetilde{B_C^{k_3}}(d)$ be a component containing $\overline{\mathcal V}$. From Step 4, $[\mathcal{F}] \in \mathcal B$ general has speciality $i=3$ and a special presentation as in \eqref{eq:Fund} with $L$ of minimal degree $\delta$. Thus $2g-4 \leqslant \delta \leqslant 2g-2$, since the Segre invariant is lower semi-continuous (cf. Remark \ref{rem:seginv} and also \cite[\S\;3]{LN}).
By Claim \ref{cl:tutte}, $2g-3\leqslant \delta \leqslant 2g-2$ does not occur under the minimality assumption on $L$. Indeed, in both cases we have a two-dimensional linear system $|\Gamma|$, whose general member is a section, corresponding to a surjection $\mathcal{F} \to\!\! \to \omega_C$ and we proved that there would be curves in this linear system containing two rulings.
If $\delta = 2g-4$, we have an exact sequence as in \eqref{eq:lala}. By specializing to a general point of $\overline{\mathcal V} = \overline{\mathcal V^{2g-4,1,2}}$, because of Claim \ref{cl:tutte}, we see that in \eqref{eq:lala} one has $h^0(N) = 0$. Hence, $h^1(N)$ is constant. Since for the general element of $\overline{\mathcal V}$, ${\rm Ker}(\mu_{V_2}) = (0)$, the same happens for the general element of $\mathcal B$, i.e., with notation as in the proof of Claim \ref{cl:good}, $\tau$ is constant, equal to zero. Therefore, also $\eta = \epsilon = 0$ for the general point of $\mathcal B$ (see l.c.), which implies the assertion. \end{proof}
With our approach, we cannot conclude that $\overline{\mathcal V_{d}^{2g-4,1,2}} $ in Theorem \ref{i=3} is the unique regular component, whose general point $[\mathcal{F}]$ is such that $i(\mathcal{F})=3$, because we do not know if $\Lambda_2 \subset \mathcal W_2$ is the only good component when $N \in \Pic^{d-2g+4}(C)$ and $D_2 \in C^{(2)}$ are general. However, results in \cite{Tan,TB00} imply that $\widetilde{B_C^{k_3}}(d)$ is irreducible for $d \leqslant \frac{10}{3}g -7$, though they say nothing about the rigid special presentation of the general element. Putting all together, we have:
\begin{corollary}\label{cor:i=3} Under the assumptions of Theorem \ref{i=3}, one has $\widetilde{B_C^{k_3}}(d) = \overline{\mathcal V_{d}^{2g-4,1,2}}$. \end{corollary}
\begin{remark}\label{rem:i=3} (1) Theorem \ref{C.F.VdG}, for $j=3$, shows the existence of elements of $\widetilde{B_C^{k_3}}(d)$ with injective Petri map in the range
$g \geqslant 21$, $g+3 \leqslant \delta \leqslant \frac{4}{3} g - 4$, $2g+6 \leqslant d \leqslant \frac{8}{3}g-9$. This gives a proof, alternative to the one in \cite{TB00}, of generic smoothness of $\widetilde{B_C^{k_3}}(d)$ in the above range.
\noindent (2) If $\frac{5}{2}g - 5 \leqslant d \leqslant \frac{10}{3}g-7 $, $\widetilde{B_C^{k_3}}(d)$ is, as we saw, irreducible but in general it is no longer true that it is determined by a (total) good component. To see this, we consider a specific example.
Take $\widetilde{B_C^{k_3}}(3g-4)$, which is non-empty, irreducible, generically smooth, of dimension $g-6$ and $[\mathcal{F}] \in \widetilde{B_C^{k_3}}(3g-4)$ general is such that $i(\mathcal{F})=3$ by \cite{Tan,TB00}. By Lemma \ref{lem:1e2note}, $\mathcal{F}$ can be rigidly presented as in \eqref{eq:Fund}, where $L \in W_{\delta}^{\delta-g+j}(C)$ and $1 \leqslant j \leqslant 3$.
The cases $j=2,\;3$ cannot occur: the stability condition \eqref{eq:assumptions} imposes $\delta > \frac{3}{2}g - 2$, but if $j=3$, $\rho(L) \geqslant 0$ forces $\delta \leqslant \frac{4}{3}g-3$ whereas if $j=2$, $\rho(L) \geqslant 0$ implies $\delta \leqslant \frac{3}{2}g - 3$; in both cases we get a contradiction.
The only possible case is therefore $j=1$, so the corank of the coboundary map is $t =2$, which implies that $N$ has speciality $r \geqslant 2$. Since $\chi(N) = 2g-3 - \delta$, the case $N$ non-effective would give $\delta>2g-3$, i.e. $L \cong \omega_C$. But in this case, $a_{F} (2g-2) \geqslant 2$ (usual computations as in \eqref{eq:casaciro2}), contradicting the rigidity assumption.
Therefore $N$ must be effective, with $n = h^0(N) = 2g-3-\delta + r$. We want to show that the hypotheses of Corollary \ref{cor:mainext1} hold. Assume by contradiction that $\ell <r$; then \begin{equation}\label{eq:zaid} \delta < g-2+r. \end{equation} From stability and \eqref{eq:zaid}, $3g-4 < 2\delta < 2g-4+2r$, i.e. $g-2r <0$. Since $C$ has general moduli, one has $\rho(N)\geqslant 0$, hence $h^0(N)=1$. So $d-\delta = g-r$ and \eqref{eq:zaid} yields $d = \delta + (d-\delta) < 2g-2$, a contradiction. Thus, $\ell \geqslant r$.
Now, from \eqref{eq:lem3note}, $m = 2 \delta - 2 g + 3$ since $N$ is not isomorphic to $L$. Thus, $m \geqslant \ell +1$: this is equivalent to $\delta \geqslant g$, which holds by stability.
In conclusion, by Corollary \ref{cor:mainext1}, $\mathcal W_1^{\rm Tot}$ is irreducible, of the expected dimension. Assume that $\widehat{\mathcal W}^{\rm Tot}_1$ contains a total good component $\widehat{\Lambda}_2^{\rm Tot}$, whose image via $\pi_{3g-4, \delta}$ is $\widetilde{B_C^{k_3}}(3g-4)$. Thus, $r \geqslant 2$. On the other hand $r=2$ cannot occur since $$c(\ell,2,2) = 2(\delta - g +2) > 2\delta - 2g +2 = m-1= \dim(\mathbb{P}({\rm Ext}^1(L,N))),$$a contradiction. Therefore, one has $r \geqslant 3$.
From the second equality in \eqref{eq:yde2}, $$\dim(\mathbb{P}(\mathfrak{E}_\delta)|_{\mathcal{Z}}) = (r+1) \delta - (2r-1) g - r (r-3),$$while the codimension of $\widehat{\Lambda}_2^{\rm Tot}$
is $$c(\ell,r,2) = 2 (\delta - g + 4 - r).$$Set $a := a_{F_v}(\delta)$. From Remark \ref{rem:rigid} we can assume $a \leqslant 1$. Therefore,$$\dim({\rm Im}(\pi_{d, \delta}|_{\widehat{\Lambda}_2^{\rm Tot}})) = g-6$$gives $(r-1) \delta = 2r g - 2g + r^2 - 5 r + 2 + a$, i.e. $$\delta = 2g+r-4 + \frac{a-2}{r-1}.$$This yields a contradiction. Indeed, since $0 \leqslant a \leqslant 1$, $r \geqslant 3$ and $\delta$ is an integer, the only possibility is $r=3$, $a=0$, $\delta = 2g-2$ which we already saw to contradict the rigidity assumption. \end{remark}
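\noindent The last exclusion in Remark \ref{rem:i=3} is elementary arithmetic: since $\delta \in \mathbb{Z}$, one needs $(r-1) \mid (a-2)$, with $a - 2 \in \{-2, -1\}$ and $r - 1 \geqslant 2$; this forces
$$r - 1 = 2, \quad a - 2 = -2, \qquad \mbox{i.e.} \quad r = 3, \; a = 0, \; \delta = 2g + 3 - 4 - 1 = 2g - 2.$$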
\begin{theorem}\label{prop:M32K} Let $C$ be of genus $g \geqslant 4$, with general moduli. Then, $\widetilde{M_C^3}(2, \omega_C) \neq \emptyset$. Moreover, there exists an irreducible component which is regular (i.e. of dimension $3g-9$), whose general point $[\mathcal{F}]$ fits in a sequence \begin{equation}\label{eq:presott} 0 \to \mathcal{O}_C(p+q) \to \mathcal{F} \to \omega_C (-p-q) \to 0, \end{equation}where
\begin{itemize} \item[$\bullet$] $p +q \in C^{(2)}$ general, and \item[$\bullet$] $\mathcal{F}=\mathcal{F}_v$ with $v \in \Lambda \subset \mathcal W_2 \subset \Ext^1(\omega_C(-p-q), \mathcal{O}_C(p+q))$ general in $\Lambda$, which is a component of $\mathcal W_2$ of dimension $3g-10$ (hence, not good). \end{itemize} \end{theorem}
\begin{proof} With notation as in \eqref{eq:exthyp1}, \eqref{eq:exthyp}, for $\mathcal{F}=\mathcal{F}_v$ as in \eqref{eq:presott}, we have $$\ell = r = g-2, \; t = 2, \; m = h^1(2p + 2q - K_C) = 3g - 7.$$Consider the map (notation as in \eqref{eq:mu} and \eqref{eq:muW}) $$\mu: H^0(\omega_C(-p-q)) \otimes H^0(\omega_C(-p-q)) \to H^0(\omega_C^{\otimes 2} (-2p-2q)).$$ For $V_2 \in \mathbb{G}(2, H^0(\omega_C(-p-q)))$ general, $\mu_{V_2}$ has kernel of dimension $1$ (cf. computations as in Claim \ref{cl:good}). Arguing as in the proofs of Theorem \ref{thm:mainextt} and Claim \ref{cl:good}, there is a component $\Lambda \subset \mathcal W_2 \subset \Ext^1(\omega_C(-p-q), \mathcal{O}_C(p+q))$ (dominating $\mathbb{G} (2, H^0(\omega_C(-p-q)))$, hence not good) of dimension $3g-10$.
Stability of $\mathcal{F}$ follows from Proposition \ref{prop:LN}. This shows that $\widetilde{M_C^3}(2, \omega_C) \neq \emptyset$. Regularity and generic smoothness follow from the injectivity of the symmetric Petri map as in \cite{TB000,Beau2}.
The fact that $[\mathcal{F}]$ general has a presentation as in \eqref{eq:presott} follows from an obvious parameter computation. \end{proof}
By \cite[\S\,4.3]{IVG}, $\widetilde{M_C^3}(2, \omega_C)$ contains a unique regular component; Theorem \ref{prop:M32K} provides in addition a rigidly special presentation of its general element.
\subsection{A conjecture for $i \geqslant 4$}\label{sec:high} For any integers $i \geqslant 4$ and $d$ as in Theorems \ref{LN}, \ref{C.F.VdG}, \ref{uepi}, \ref{unepi}, one has $\widetilde{B_C^{k_i}}(d) \neq \emptyset$. In particular, when $d$ is as in Theorem \ref{C.F.VdG}, with $j=i$, one deduces that $\widetilde{B_C^{k_i}}(d)$ contains a regular, generically smooth component. This gives existence results in the same flavour as Theorem \ref{thm:TB}.
One may wish to give a special, rigid presentation of the general point of all components of $\widetilde{B_C^{k_i}}(d)$. The following less ambitious conjecture is inspired by the results in this paper.
\begin{conjecture}\label{igeq4} {\em Let $i \geqslant 4$ and $g > i^2 - i + 1$ be integers. Let $C$ be of genus $g$, with general moduli. Let $d$ be an integer such that $$2g-2 \leqslant d < 2g-2 - i + \frac{g-\epsilon}{i-1},$$with $\epsilon \in \{0,1\}$ such that $d + g - (i-1) k_i \equiv \epsilon \pmod{2}$. Then, there exists an irreducible, regular component $\mathcal B \subseteq \widetilde{B_C^{k_i}}(d)$, such that $$\mathcal B = \overline{\mathcal V_{d}^{2g-1-i,1,i-1}}.$$In particular, $[\mathcal{F}] \in \mathcal B$ general is stable, with $i(\mathcal{F})=i$, $s(\mathcal{F}) \geqslant g - (i-1) k_i - \epsilon >0$ and it is rigidly specially presented as $$0 \to N \to \mathcal{F} \to \omega_C (-D) \to 0,$$where
\begin{itemize} \item[$\bullet$] $D \in C^{(i-1)}$ is general, \item[$\bullet$] $N \in \Pic^{d-2g+1+i}(C)$ is general (i.e. special, non-effective), \item[$\bullet$] $\mathcal{F}=\mathcal{F}_v$ with $v \in \Lambda_{i-1} \subset \Ext^1(\omega_C(-D), N)$ general in a good component. \end{itemize}
} \end{conjecture}
\noindent The bounds on $g$ and $d$ in Conjecture \ref{igeq4} ensure the following:
\begin{itemize} \item[(1)] $\rho_d^{k_i} \geqslant 0$ which is equivalent to $ d \leqslant 2g-2 - i + \frac{4g-3}{i}$ (cf. \eqref{eq:bn}). \item[(2)] $N \in \Pic^{d-2g+1+i}(C)$ general is special, non-effective; indeed $r= 3g-2-i -d >0$. \item[(3)] $r \geqslant i-1 = \cork(\partial_v)$, which is equivalent to $d \leqslant 3g-1-2i$. \item[(4)] There are no obstructions for a good component $\Lambda_{i-1} \subset \Ext^1(\omega_C(-D),N)$ to exist; indeed, one has $$\dim(\mathbb{P}) = m-1 = 5g-4 - 2i - d$$ from \eqref{eq:lem3note}, and from \eqref{eq:clrt}, Remark \ref{unepine}, we have $$c(\ell,r,t) = (i-1) k_i = (i-1) (d-2g+2+i),$$since $t =i-1$ and $N$ non-effective. Therefore $$\dim(\Lambda_{i-1}) = m-1 - c (\ell,r,t) = 3g-2 - i - i (d-2g+2+i)$$is non-negative as soon as $d \leqslant 2g-3-i + \frac{3g-2}{i}$. \item[(5)] From Remark \ref{unepine}, $v \in \Lambda_{i-1}$ general is such that $s(\mathcal{F}_v) \geqslant g - (i-1) k_i - \epsilon$, which is positive because of the upper-bound on $d$. Thus $\mathcal{F}_v$ is stable. \item[(6)] The interval $ 2g-2 \leqslant d < 2g-2 - i + \frac{g-\epsilon}{i-1}$ is not empty. \end{itemize}
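\noindent As an illustration, the non-emptiness in (6) amounts to
$$2g-2 < 2g-2-i+\frac{g-\epsilon}{i-1} \iff g - \epsilon > i(i-1) \iff g > i^2 - i + \epsilon,$$
which follows from the assumption $g > i^2 - i + 1$, since $\epsilon \leqslant 1$.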
From Remark \ref{rem:rigid} and \eqref{eq:fojt}, to prove Conjecture \ref{igeq4} it would suffice to prove the following two facts:
\begin{itemize} \item[(a)] {\em For $i \geqslant 4$, $D \in C^{(i-1)}$ general and $L = \omega_C(- D)$, there exists a good component $$\Lambda_{i-1} \subseteq \mathcal W_{i-1} \subset {\rm Ext}^1(\omega_C(- D), N).$$ }
\item[(b)] {\em For $v \in \Lambda_{i-1}$ general, $\mathcal{F}_v$ is rsp via $\omega_C(-D)$. }
\end{itemize}
Concerning (a), notice that the map $\mu_{V_{i-1}}$ cannot be injective for any $V_{i-1} \in \mathbb{G}(i-1, H^0(K_C-N))$. Indeed, $d\geqslant 2g-2$ and $i \geqslant 4$ imply $$\dim(H^0(K_C-D) \otimes V_{i-1}) = (i-1) g - (i-1)^2>5g-3-d-2i=h^0(2K_C - N - D).$$ Hence, according to Theorem \ref{thm:mainextt}, one should find an irreducible subvariety $\Sigma_{\eta} \subset \mathbb{G}(i-1, H^0(K_C - N))$ of codimension $\eta:= d + (i-6) g + 7 - (i-2)^2$ such that $\dim({\rm Ker}(\mu_{V_{i-1}})) = \eta$ for $V_{i-1} \in \Sigma_{\eta}$ general.
Concerning (b), the minimality assumption implies $a_{F_v} (2g-1-i) \leqslant 1$, for $v \in \Lambda_{i-1}$ general and $F_v = \mathbb{P}(\mathcal{F}_v)$. To prove rigidity, one has to show that $a_{F_v} (2g-1-i) = 0$. This is equivalent to prove a regularity statement for a Severi variety of nodal curves on $F_v$. Indeed, for any section $\Gamma_{D}$ corresponding to a quotient $\mathcal{F}_v \to\!\! \to \omega_C(-D)$ as above,
the linear system $|\Gamma_D + f_D|$ has dimension $i-1$, it is independent of $D$, and its general member $\Gamma$ is a section corresponding to a quotient $\mathcal{F}_v \to \!\! \to \omega_C$. The curve $\Gamma_D + f_D$ belongs to the Severi variety of $(i-1)$-nodal curves in $|\Gamma|$. So rigidity is equivalent to showing that this Severi variety has the expected dimension zero, which in turn is equivalent to proving that $D$, considered as a divisor on $\Gamma_D$, imposes independent conditions on $|\Gamma|$. Unfortunately, the known results on regularity of Severi varieties (see \cite{N,Tanb,Tanb1}) do not apply in this situation.
\end{document} | arXiv |
\begin{document}
\title{Regularity of $\frac{n}{2}$-harmonic maps into spheres}
\maketitle
\begin{abstract} \noindent We prove H\"older continuity for $\frac{n}{2}$-harmonic maps from subsets of ${\mathbb R}^n$ into a sphere. This extends a recent one-dimensional result by F. Da Lio and T. Rivi\`{e}re to arbitrary dimensions. The proof relies on compensation effects which we quantify adapting an approach for Wente's inequality by L. Tartar, instead of Besov-space arguments which were used in the one-dimensional case. Moreover, fractional analogues of Hodge decomposition and higher order Poincar\'e inequalities as well as several localization effects for nonlocal operators similar to the fractional laplacian are developed and applied. \\[1ex] {\bf Keywords:} Harmonic maps, nonlinear elliptic PDE, regularity of solutions.\\ {\bf AMS Classification:} 58E20, 35B65, 35J60, 35S05. \end{abstract} \tableofcontents \thispagestyle{empty}
\section{Introduction} In his seminal work \cite{Hel90} F. H{\'e}lein proved regularity for harmonic maps from the two-dimensional unit disk $B_1(0) \subset {\mathbb R}^2$ into the $(m-1)$-dimensional sphere ${\mathbb S}^{m-1} \subset {\mathbb R}^m$ for arbitrary $m \in {\mathbb N}$. These maps are critical points of the functional \[
E_2(u) := \int \limits_{B_1(0) \subset {\mathbb R}^2} \abs{\nabla u}^2, \qquad \mbox{where } u \in W^{1,2}(B_1(0),{\mathbb S}^{m-1}). \] The importance of this result is the fact that harmonic maps in two dimensions are special cases of critical points of conformally invariant variational functionals, which play an important role in physics and geometry and have been studied for a long time: H\'elein's approach is based on the discovery of a compensation phenomenon appearing in the Euler-Lagrange equations of $E_2$, using a relation between $\operatorname{div}$-$\operatorname{curl}$ expressions and the Hardy space. This kind of relation had been discovered shortly before in the special case of determinants by S. M\"uller \cite{Mul90} and was generalized by R. Coifman, P.L. Lions, Y. Meyer and S. Semmes \cite{CLMS93}. H\'{e}lein extended his result to the case where the sphere ${\mathbb S}^{m-1}$ is replaced by a general target manifold developing the so-called moving-frame technique which is used in order to enforce the compensation phenomenon in the Euler-Lagrange equations \cite{Hel91}. Finally, T. Rivi\`{e}re \cite{Riv06} was able to prove regularity for critical points of general conformally invariant functionals, thus solving a conjecture by S. Hildebrandt \cite{Hil82}. He used an ingenious approach based on K. Uhlenbeck's results in gauge theory \cite{Uhl82} in order to implement $\operatorname{div}$-$\operatorname{curl}$ expressions in the Euler-Lagrange equations, a technique which can be reinterpreted as an extension of H\'elein's moving frame method; see \cite{IchEnergie}. For more details and references we refer to H\'{e}lein's book \cite{Hel02} and the extensive introduction in \cite{Riv06} as well as \cite{RivVanc09}.\\ Naturally, it is interesting to see how these results extend to other dimensions: In the four-dimensional case, regularity can be proven for critical points of the following functional, the so-called extrinsic biharmonic maps: \[
E_4(u) := \int \limits_{B_1(0) \subset {\mathbb R}^4} \abs{\Delta u}^2, \qquad \mbox{where } u \in W^{2,2}(B_1(0),{\mathbb R}^m). \] This was done by A. Chang, L. Wang, and P. Yang \cite{CWY99} in the case of a sphere as the target manifold, and for more general targets by P. Strzelecki \cite{Sbi03}, C. Wang \cite{Wang04} and C. Scheven \cite{Scheven08}; see also T. Lamm and T. Rivi\`{e}re's paper \cite{LR08}. More generally, for all even $n \geq 6$ similar regularity results hold, and we refer to the work of A. Gastel and C. Scheven \cite{GS09} as well as the article of P. Goldstein, P. Strzelecki and A. Zatorska-Goldstein \cite{GSZG09}.\\ In odd dimensions non-local operators appear, and only two results for dimension $n=1$ are available. In \cite{DR09Sphere}, F. Da Lio and T. Rivi\`{e}re prove H\"older continuity for critical points of the functional \[ E_1(u) = \int \limits_{{\mathbb R}^1} \abs{\Delta^{\frac{1}{4}} u}^2,\qquad \mbox{defined on distributions $u$ with finite energy and $u \in {\mathbb S}^{m-1}$ a.e.} \] In \cite{DR209} this is extended to the setting of general target manifolds.\\ \\ In general, we consider for $n, m \in {\mathbb N}$ and some domain $D \subset {\mathbb R}^n$ the regularity of critical points on $D$ of the functional \begin{equation}\label{eq:energy} E_n(v) = \int \limits_{{\mathbb R}^n} \abs{\lapn v}^2,\qquad v \in H^{\frac{n}{2}}({\mathbb R}^n,{\mathbb R}^m),\ v \in {\mathbb S}^{m-1} \mbox{ a.e. in $D$.} \end{equation} Here, $\lapn$ denotes the operator which acts on functions $v \in L^2({\mathbb R}^n)$ according to \[ \brac{\lapn v}^\wedge(\xi) = \abs{\xi}^{\frac{n}{2}}\ v^\wedge(\xi) \qquad \mbox{for almost every $\xi \in {\mathbb R}^n$,} \] where $()^\wedge$ denotes the application of the Fourier transform. The space $H^{\frac{n}{2}}({\mathbb R}^n)$ is the space of all functions $v \in L^2({\mathbb R}^n)$ such that $\lapn v \in L^2({\mathbb R}^n)$. 
The term ``critical point'' is defined as usual: \begin{definition}[Critical Point]\label{def:critpt} Let $u \in \Hf({\mathbb R}^n,{\mathbb R}^m)$, $D \subset {\mathbb R}^n$. We say that $u$ is a critical point of $E_n(\cdot)$ on $D$ if $u(x) \in {\mathbb S}^{m-1}$ for almost every $x \in D$ and \[
\ddtz E(u_{t,\varphi}) = 0 \] for any $\varphi \in C_0^\infty(D,{\mathbb R}^m)$ where $u_{t,\varphi} \in \Hf({\mathbb R}^n)$ is defined as \[ u_{t,\varphi} = \begin{cases}
\Pi (u+t\varphi)\qquad &\mbox{in $D$,}\\
u\qquad &\mbox{in ${\mathbb R}^n \backslash D$.}\\
\end{cases} \] Here, $\Pi$ denotes the orthogonal projection from a tubular neighborhood of ${\mathbb S}^{m-1}$ into ${\mathbb S}^{m-1}$ defined as $\Pi(x) = \frac{x}{\abs{x}}$. \end{definition} If $n$ is an even number, the domain of $E_n(\cdot)$ is just the classic Sobolev space $H^{\frac{n}{2}}({\mathbb R}^n) \equiv W^{\frac{n}{2},2}({\mathbb R}^n)$, for odd dimensions this is a fractional Sobolev space (see Section \ref{sec:fracsobspace}). Functions in $H^{\frac{n}{2}}({\mathbb R}^n)$ can contain logarithmic singularities (cf. \cite{Frehse73}) but this space embeds continuously into $BMO({\mathbb R}^n)$, and even only slightly improved integrability or more differentiability would imply continuity.\\
In the light of the existing results in even dimensions and in the one-dimensional case, one may expect that similar regularity results should hold for any dimension. As a first step in that direction, we establish regularity of $n/2$-harmonic maps into the sphere. \begin{theorem}\label{th:regul} For any $n \geq 1$, critical points $u \in \Hf({\mathbb R}^n)$ of $E_n$ on a domain $D$ are locally H\"older continuous in $D$. \end{theorem} Note that here -- in contrast to \cite{DR09Sphere} -- we work on general domains $D \subseteq {\mathbb R}^n$. This is motivated by the facts that H\"older continuity is a local property and that $\lapn$ (though it is a non-local operator) still behaves ``pseudo-locally'': We impose our conditions (here: being a critical point and mapping into the sphere) only in some domain $D \subset {\mathbb R}^n$, and still get interior regularity within $D$.\\ Let us comment on the strategy of the proof. As said before, in all even dimensions the key tool for proving regularity is the discovery of \emph{compensation phenomena} built into the respective Euler-Lagrange equation. For example, critical points $u \in W^{1,2}(D,{\mathbb S}^{m-1})$ of $E_2$ satisfy the following Euler-Lagrange equation \cite{Hel90} \begin{equation}\label{eq:ELhel} \Delta u^i = u^i \abs{\nabla u}^2, \qquad \mbox{weakly in $D$, \quad for all $i =1\ldots m$.} \end{equation} For mappings $u \in W^{1,2}({\mathbb R}^2,{\mathbb S}^{m-1})$ this is a critical equation, as the right-hand side seems to lie only in $L^1$: If we had no additional information, it would seem as if the equation admitted a logarithmic singularity (for examples see, e.g., \cite{Riv06}, \cite{Frehse73}). But, using the constraint $\abs{u} \equiv 1$ (which implies $\sum_{j=1}^m u^j \nabla u^j = \frac{1}{2} \nabla \abs{u}^2 = 0$), one can rewrite the right-hand side of \eqref{eq:ELhel} as \[
u^i \abs{\nabla u}^2 = \sum_{j=1}^m \brac{u^i \nabla u^j - u^j \nabla u^i}\cdot \nabla u^j = \sum_{j=1}^m \brac{\partial_1 B_{ij}\ \partial_2 u^j - \partial_2 B_{ij}\ \partial_1 u^j} \] where the $B_{ij}$ are chosen such that $\partial_1 B_{ij} = u^i \partial_2 u^j - u^j \partial_2 u^i$, and $-\partial_2 B_{ij} = u^i \partial_1 u^j - u^j \partial_1 u^i$, a choice which is possible due to Poincar\'e's Lemma and because \eqref{eq:ELhel} implies $\operatorname{div} \brac{u^i \nabla u^j - u^j \nabla u^i} = 0$ for every $i,j = 1\ldots m$. Thus, \eqref{eq:ELhel} transforms into \begin{equation}\label{eq:ELhelTr}
\Delta u^i = \sum_{j=1}^m \brac{\partial_1 B_{ij}\ \partial_2 u^j - \partial_2 B_{ij}\ \partial_1 u^j}, \end{equation} a form whose right-hand side exhibits a compensation phenomenon which in a similar way already appeared in the so-called Wente inequality \cite{Wente69}, see also \cite{BC84}, \cite{Tartar85}. In fact, the right-hand side belongs to the Hardy space (cf. \cite{Mul90}, \cite{CLMS93}) which is a proper subspace of $L^1$ with enhanced potential theoretic properties. Namely, members of the Hardy space behave well under Calder\'on-Zygmund operators, and by this one can conclude continuity of $u$.\\ An alternative and for our purpose more viable way to describe this can be found in L. Tartar's proof \cite{Tartar85} of Wente's inequality: Assume we have for $a,b$ with $\nabla a, \nabla b \in L^2({\mathbb R}^2)$ a solution $w \in H^1({\mathbb R}^2)$ of \begin{equation}\label{eq:wentepde} \Delta w = \partial_1 a\ \partial_2 b - \partial_2 a\ \partial_1 b \qquad \mbox{weakly in ${\mathbb R}^2$}. \end{equation} Taking the Fourier transform on both sides, this is (formally) equivalent to \begin{equation}\label{eq:tartarfourierpde} \abs{\xi}^2 w^\wedge(\xi) = c \int \limits_{{\mathbb R}^2} a^\wedge(x)\ b^\wedge(\xi-x) \left ( x_1 (\xi_2 - x_2) - x_2 (\xi_1 - x_1) \right )\ dx,\ \qquad \mbox{for $\xi \in {\mathbb R}^2$}. \end{equation} Now the compensation phenomena responsible for the higher regularity of $w$ can be identified with the following inequality: \begin{equation}\label{eq:tartarcompensation} \abs{x_1 (\xi_2 - x_2) - x_2 (\xi_1 - x_1)} \leq \abs{\xi}\abs{x}^{\frac{1}{2}}\abs{\xi-x}^\frac{1}{2}. \end{equation} Observe that $\abs{x}$ as well as $\abs{\xi - x}$ appear only to the power $1/2$. Interpreting these factors as Fourier multipliers, this means that only ``half of the gradient'', more precisely $\Delta^{\frac{1}{4}}$, of $a$ and $b$ enters the equation, which implies that the right-hand side is a ``product of lower order'' operators.
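For the reader's convenience, let us indicate the short proof of \eqref{eq:tartarcompensation}: writing $a \wedge b := a_1 b_2 - a_2 b_1$ for $a, b \in {\mathbb R}^2$, so that $\abs{a \wedge b} \leq \abs{a}\ \abs{b}$, the left-hand side of \eqref{eq:tartarcompensation} equals $\abs{x \wedge (\xi - x)} = \abs{x \wedge \xi} = \abs{(x-\xi) \wedge \xi}$, whence
\[
\abs{x_1 (\xi_2 - x_2) - x_2 (\xi_1 - x_1)} = \abs{x \wedge \xi}^{\frac{1}{2}}\ \abs{(x-\xi) \wedge \xi}^{\frac{1}{2}} \leq \brac{\abs{x}\ \abs{\xi}}^{\frac{1}{2}}\ \brac{\abs{\xi - x}\ \abs{\xi}}^{\frac{1}{2}} = \abs{\xi}\ \abs{x}^{\frac{1}{2}}\ \abs{\xi-x}^{\frac{1}{2}}.
\]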
In fact, plugging \eqref{eq:tartarcompensation} into \eqref{eq:tartarfourierpde}, one can conclude $w^\wedge \in L^1({\mathbb R}^2)$ just by H\"older's and Young's inequalities on Lorentz spaces -- consequently one has proven continuity of $w$, because the inverse Fourier transform maps $L^1$ into $C^0$. As explained earlier, \eqref{eq:ELhel} can be rewritten as \eqref{eq:ELhelTr} which has the form of \eqref{eq:wentepde}, thus we have continuity for critical points of $E_2$, and by a bootstrapping argument (see \cite{Tomi69}) one gets analyticity of these points.\\ \\
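Let us briefly spell out this Lorentz-space computation, using the calculus collected in Proposition~\ref{pr:dl:lso} below. As \eqref{eq:wentepde} requires $\nabla a, \nabla b \in L^2({\mathbb R}^2)$, i.e. $\abs{\cdot}\ a^\wedge,\ \abs{\cdot}\ b^\wedge \in L^2({\mathbb R}^2)$, the estimates \eqref{eq:tartarfourierpde} and \eqref{eq:tartarcompensation} give the pointwise bound
\[
\abs{\xi}\ \abs{w^\wedge(\xi)} \aleq{} \brac{\abs{\cdot}^{\frac{1}{2}}\ \abs{a^\wedge}} \ast \brac{\abs{\cdot}^{\frac{1}{2}}\ \abs{b^\wedge}}(\xi).
\]
By H\"older's inequality, $\abs{\cdot}^{\frac{1}{2}}\ \abs{a^\wedge} = \abs{\cdot}^{-\frac{1}{2}}\ \abs{\abs{\cdot}\ a^\wedge} \in L^{\frac{4}{3},2}({\mathbb R}^2)$, since $\abs{\cdot}^{-\frac{1}{2}} \in L^{4,\infty}({\mathbb R}^2)$ and $\abs{\cdot}\ a^\wedge \in L^2 = L^{2,2}$; the same holds for $b$. Young's inequality then places the convolution in $L^{2,1}({\mathbb R}^2)$, and one more application of H\"older's inequality with $\abs{\cdot}^{-1} \in L^{2,\infty}({\mathbb R}^2)$ yields $w^\wedge \in L^{1,1} = L^1({\mathbb R}^2)$.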
As in Theorem~\ref{th:regul} we prove only interior regularity, it is natural to work with localized Euler-Lagrange equations which look as follows, see Section \ref{sec:eleqn}:
\begin{lemma}[Euler-Lagrange Equations] Let $u \in \Hf({\mathbb R}^n)$ be a critical point of $E_n$ on a domain $D \subset {\mathbb R}^n$. Then, for any cutoff function $\eta \in C_0^\infty(D)$, $\eta \equiv 1$ on an open neighborhood of a ball $\tilde{D} \subset D$ and $w := \eta u$, we have \begin{equation}\label{eq:eleqn}
-\int \limits_{{\mathbb R}^n} w^i\ \lapn w^j\ \lapn \psi_{ij} = \int \limits_{{\mathbb R}^n} \lapn w^j\ H(w^i,\psi_{ij}) - \int \limits_{{\mathbb R}^n} a_{ij} \psi_{ij} , \quad \mbox{for any $\psi_{ij} = -\psi_{ji} \in C_0^\infty(\tilde{D})$}, \end{equation} where $a_{ij} \in L^2({\mathbb R}^n)$, $i,j = 1,\ldots,m$, depend on the choice of $\eta$. Here, we adopt Einstein's summation convention. Moreover, $H(\cdot,\cdot)$ is defined on $H^{\frac{n}{2}}({\mathbb R}^n) \times H^{\frac{n}{2}}({\mathbb R}^n)$ as \begin{equation}\label{eq:defH} H(a,b) := \lapn (a b) - a \lapn b - b \lapn a, \qquad \mbox{for $a,b \in \Hf({\mathbb R}^n)$}. \end{equation} Furthermore, $u \in {\mathbb S}^{m-1}$ on $D$ implies the following structure equation \begin{equation}\label{eq:structure} w^i \cdot \lapn w^i = -\frac{1}{2} H(w^i,w^i) + \frac{1}{2}\lapn \eta^2 \qquad \mbox{a.e. in ${\mathbb R}^n$.} \end{equation} \end{lemma} Similar in its spirit to \cite{DR09Sphere} we use that \eqref{eq:eleqn} and \eqref{eq:structure} together control the full growth of $\lapn w$, though here we use a different argument applying an analogue of Hodge decomposition to show this, see below. Note moreover that as we have localized our Euler-Lagrange equation, we do not need further rewriting of the structure condition \eqref{eq:structure} as was done in \cite{DR09Sphere}.\\ Whereas in \eqref{eq:wentepde} the compensation phenomenon stems from the structure of the right-hand side, here it comes from the leading order term $H(\cdot,\cdot)$ appearing in \eqref{eq:eleqn} and \eqref{eq:structure}. This can be proved by Tartar's approach \cite{Tartar85}, using essentially only the following elementary ``compensation inequality'' similar in its spirit to \eqref{eq:tartarcompensation} \begin{equation}\label{eq:ourcompensation} \abs{\abs{x-\xi}^p - \abs{\xi}^p - \abs{x}^p} \leq C_p \begin{cases}
\abs{x}^{p-1} \abs{\xi} + \abs{\xi}^{p-1} \abs{x}, \qquad &\mbox{if $p > 1$,}\\
\abs{x}^{\frac{p}{2}} \abs{\xi}^{\frac{p}{2}}, \qquad &\mbox{if $p \in (0,1]$.}\\
\end{cases} \end{equation} More precisely, we will prove in Section~\ref{sec:tart} \begin{theorem}\label{th:compensation} For $H$ as in \eqref{eq:defH} and $u,v \in \Hf({\mathbb R}^n)$ one has \[
\Vert H(u,v) \Vert_{L^2({\mathbb R}^n)} \leq C\ \Vert \brac{\lapn u}^\wedge \Vert_{L^2({\mathbb R}^n)}\ \Vert \brac{ \lapn v}^\wedge \Vert_{L^{2,\infty}({\mathbb R}^n)}. \] \end{theorem} An equivalent compensation phenomenon was observed in the case $n=1$ in \cite{DR09Sphere}\footnote{In fact, all compensation phenomena appearing in \cite{DR09Sphere} can be proven by our adaption of Tartar's method using simple compensation inequalities, thus avoiding the use of paraproduct arguments (but at the expense of using the theory of Lorentz spaces).}. Note that interpreting again the terms of \eqref{eq:ourcompensation} as Fourier multipliers, it seems as if this equation (and as a consequence Theorem~\ref{th:compensation}) estimates the operator $H(u,v)$ by products of lower order operators applied to $u$ and $v$. Here, by ``products of lower order operators'' we mean products of operators whose differential order is strictly between zero and $\frac{n}{2}$ and where the two operators together give an operator of order $\frac{n}{2}$. In fact, this is exactly what happens in special cases, e.g. if we take the case $n=4$ where $\lapn = \Delta$: \[
H(u,v) = 2 \nabla u \cdot \nabla v \qquad \mbox{if $n = 4$}. \] Another case we will need to control is the case where $u = P$ is a polynomial of degree less than $\frac{n}{2}$. As (at least formally) $\lapn P = 0$ this amounts to estimating \[
\lapn (Pv) - P \lapn v. \] This case is not contained in Theorem~\ref{th:compensation} as a non-zero polynomial does not belong to $\Hf({\mathbb R}^n)$. Obviously, in the one-dimensional case $P$ is only a constant, and thus $H(P,v) \equiv 0$. In higher dimensions, as we will show in Proposition~\ref{pr:lapsmonprod2}, $H(P,v)$ is still a product of lower order operators.\\ As we are going to show in Section~\ref{ss:loolocwell}, products of lower order operators (in the way this term is defined above) ``localize well''. By that we mean that the $L^2$-norm of such a product evaluated on a ball is estimated by the product of $L^2$-norms of $\lapn$ applied to the factors evaluated at a slightly bigger ball, up to harmless error terms. As a consequence, one expects this to hold as well for the term $H(u,v)$, and in fact, we can show the following ``localized version'' of Theorem~\ref{th:compensation}, proven in Section~\ref{sec:localTart}. \begin{theorem}[Localized Compensation Results]\label{th:localcomp} There is a uniform constant $\gamma > 0$ depending only on the dimension $n$, such that the following holds. Let $H(\cdot,\cdot)$ be defined as in \eqref{eq:defH}. For any $v \in \Hf({\mathbb R}^n)$ and $\varepsilon > 0$ there exist constants $R > 0$ and $\Lambda_1 > 0$ such that for any ball $B_r(x) \subset {\mathbb R}^n$, $r \in (0,R)$, \[ \Vert H(v,\varphi) \Vert_{L^2(B_r(x))} \leq \varepsilon\ \Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)} \qquad \mbox{for any $\varphi \in C_0^\infty(B_r(x))$}, \] and \[ \Vert H(v,v) \Vert_{L^2(B_r(x))} \leq \varepsilon\ [[v]]_{B_{\Lambda_1 r}(x)} + C_{\varepsilon,v} \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [[v]]_{B_{2^{k+1} r}(x) \backslash B_{2^{k} r}(x)}. \] Here, $[[v]]_{A}$ is a pseudo-norm, which in a way measures the $L^2$-norm of $\lapn v$ on $A \subset {\mathbb R}^n$. 
More precisely, for odd $n$ \[ [[v]]_{A} := \Vert \lapn v \Vert_{L^2(A)} + \brac{\int \limits_A \int \limits_A \abs{x-y}^{-n-1}\ \abs{\nabla^{\frac{n-1}{2}}v(x) - \nabla^{\frac{n-1}{2}} v(y)}^2\ dx\ dy}^{\frac{1}{2}}, \] and for even $n$ we set $[[v]]_{A} := \Vert \lapn v \Vert_{L^2(A)} + \Vert \nabla^{\frac{n}{2}} v \Vert_{L^2(A)}$. \end{theorem}
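To illustrate the above claim on $H(P,v)$ for polynomials $P$ of degree less than $\frac{n}{2}$, consider again the case $n = 4$: there $P$ is affine, hence $\Delta P = 0$ and
\[
\lapn (Pv) - P \lapn v = \Delta (Pv) - P \Delta v = 2 \nabla P \cdot \nabla v + v\ \Delta P = 2 \nabla P \cdot \nabla v,
\]
and since $\nabla P$ is constant, the right-hand side is an operator of order one applied to $v$, i.e. of order strictly less than $\frac{n}{2} = 2$.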
As mentioned before, by the structure of our Euler-Lagrange equations, these local estimates control the local growth of the $\frac{n}{4}$-operator of any critical point, as we will show using an analogue of the Hodge decomposition in the fractional case, see Section \ref{ss:hodge}. \begin{theorem}\label{th:hodge} There are uniform constants $\Lambda_2 > 0$ and $C > 0$ such that the following holds: For any $x \in {\mathbb R}^n$ and any $r > 0$ we have for every $v \in L^2({\mathbb R}^n)$, $\operatorname{supp} v \subset B_r(x)$, \[ \Vert v \Vert_{L^2(B_r(x))} \leq C \sup_{\varphi \in C_0^\infty(B_{\Lambda_2 r}(x))} \frac{1}{\Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)}}\ \int \limits_{{\mathbb R}^n} v\ \lapn \varphi. \] \end{theorem} Then, by an iteration technique adapted from the one in \cite{DR09Sphere} (see the appendix) we conclude in Section~\ref{sec:growth} that the critical point $u$ of $E_n$ lies in a Morrey-Campanato space, which implies H\"older continuity.\\ As for the sections not mentioned so far: In Section~\ref{sec:basics} we will cover some basic facts on Lorentz and Sobolev spaces. In Section~\ref{sec:poincmv} we will prove a fractional Poincar\'{e} inequality with a mean value condition of arbitrary order. In Section~\ref{sec:locEf} various localizing effects are studied. In Section~\ref{sec:homognormhn2} we compare two pseudo-norms $\Vert \lapn v \Vert_{L^2(A)}$ and $[v]_{\frac{n}{2},A}$ of $H^{\frac{n}{2}}$, and finally, in Section~\ref{sec:growth}, Theorem~\ref{th:regul} is proved.\\[1em]
Finally, let us remark on the following two points: As we cut off the critical points $u$ to bounded domains, the assumption $u \in L^2({\mathbb R}^n)$ is not necessary; one could, e.g., assume $u \in L^\infty({\mathbb R}^n)$, $\lapn u \in L^2({\mathbb R}^n)$, thus regaining a similar ``global'' result as in \cite{DR09Sphere}. Observe moreover that the application of a cut-off function within $D$ to the critical point $u$ is a rather crude operation, which nevertheless suffices for our purposes, as in this note we are only interested in \emph{interior} regularity. For the analysis of the boundary behaviour of $u$ one would probably need a more careful cut-off argument.\\[1em]
We will use fairly standard \emph{notation}:\\ As usual, we denote by ${\mathcal{S}} \equiv {\mathcal{S}}({\mathbb R}^n)$ the Schwartz class of all smooth functions which at infinity tend to zero faster than any quotient of polynomials, and by ${\Sw^{'}} \equiv {\Sw^{'}}({\mathbb R}^n)$ its dual. For a set $A \subset {\mathbb R}^n$ we will denote its $n$-dimensional Lebesgue measure by $\abs{A}$, and $rA$, $r > 0$, will be the set of all points $rx \in {\mathbb R}^n$ where $x \in A$. By $B_r(x) \subset {\mathbb R}^n$ we denote the open ball with radius $r$ and center $x \in {\mathbb R}^n$. If no confusion arises, we will abbreviate $B_r \equiv B_r(x)$. When we speak of a multiindex $\alpha$ we will usually mean $\alpha = (\alpha_1,\ldots,\alpha_n) \in \brac{{\mathbb N} \cup \{0\}}^n \equiv \brac{{\mathbb N}_0}^n$ with length $\abs{\alpha} := \sum_{i=1}^n \alpha_i$. For such a multiindex $\alpha$ and $x = (x_1,\ldots,x_n)^T \in {\mathbb R}^n$ we write $x^\alpha := \prod_{i=1}^n \brac{x_i}^{\alpha_i}$, where we set $(x_i)^0 := 1$ even if $x_i = 0$. For a real number $p \geq 0$ we denote by $\lfloor p \rfloor$ the largest integer not exceeding $p$ and by $\lceil p \rceil$ the smallest integer not less than $p$. If $p \in [1,\infty]$ we will usually denote by $p'$ the H\"older conjugate, that is $\frac{1}{p} + \frac{1}{p'} = 1$. By $f \ast g$ we denote the convolution of two functions $f$ and $g$. As mentioned before, we will denote by $f^\wedge$ the Fourier transform and by $f^\vee$ the inverse Fourier transform, which on the Schwartz class ${\mathcal{S}}$ are defined as \[
f^\wedge(\xi) := \int \limits_{{\mathbb R}^n} f(x)\ e^{-2\pi \mathbf{i}\ x\cdot \xi}\ dx,\quad f^\vee(x) := \int \limits_{{\mathbb R}^n} f(\xi)\ e^{2\pi \mathbf{i}\ \xi\cdot x}\ d\xi. \] By $ \mathbf{i}$ we denote here and henceforth the imaginary unit $ \mathbf{i}^2 = -1$. ${\mathcal{R}}$ is the Riesz operator which transforms $v \in {\mathcal{S}}({\mathbb R}^n)$ according to $({\mathcal{R}} v)^\wedge(\xi) := \mathbf{i} \frac{\xi}{\abs{\xi}} v^\wedge(\xi)$. More generally, we will speak of a zero-multiplier operator $M$, if there is a function $m \in C^\infty({\mathbb R}^n \backslash \{0\})$ homogeneous of order $0$ and such that $(Mv)^\wedge(\xi) = m(\xi)\ v^\wedge(\xi)$ for all $\xi \in {\mathbb R}^n \backslash \{0\}$. For a measurable set $D \subset {\mathbb R}^n$, we denote the integral mean of an integrable function $v: D \to {\mathbb R}$ to be $(v)_D \equiv \fint_D v \equiv \frac{1}{\abs{D}} \int_D v$. Lastly, our constants -- usually denoted by $C$ or $c$ -- can possibly change from line to line and usually depend on the space dimensions involved, further dependencies will be denoted by a subscript, though we will make no effort to pin down the exact value of those constants. If we consider the constant factors to be irrelevant with respect to the mathematical argument, for the sake of simplicity we will omit them in the calculations, writing $\aleq{}$, $\ageq{}$, $\aeq{}$ instead of $\leq$, $\geq$ and $=$.\\
\noindent {\bf Acknowledgment.} The author would like to thank Francesca Da Lio and Tristan Rivi\`{e}re for introducing him to the topic, and Pawe{\l} Strzelecki for suggesting to extend the results of \cite{DR09Sphere} to higher dimensions. Moreover, he is very grateful to his supervisor Heiko von der Mosel for the constant support and encouragement, as well as for many comments and remarks on the drafts of this work. The author is supported by the Studienstiftung des Deutschen Volkes.
\section{Lorentz Spaces, Sobolev Spaces and Cutoff Functions}\label{sec:basics} \subsection{Interpolation} In this section we state some fundamental properties of interpolation theory, which will be used to ``translate'' results from classical Lebesgue and Sobolev spaces into the setting of Lorentz and fractional Sobolev spaces. For more on interpolation spaces, we refer to Tartar's monograph \cite{Tartar07}.\\
There are different methods of interpolation. Here we state only the so-called $K$-method. \begin{definition}[Interpolation by the $K$-Method]\label{def:KMethod} (Compare \cite[Definition 22.1]{Tartar07})\\ Let $X,Y$ be normed spaces with respective norms $\Vert \cdot \Vert_X$, $\Vert \cdot \Vert_Y$ and assume that $Z = X+Y$ is a normed space with norm \[ \Vert z \Vert_{X+Y} := \inf_{z = x+y} \brac{\Vert x\Vert_X + \Vert y \Vert_Y}. \] For $t \in (0,\infty)$ and $z \in X+Y$ we denote \[ K(z,t) = \inf_{\ontop{z = x +y}{x \in X, y \in Y}} \Vert x \Vert_X + t \Vert y \Vert_Y, \] and for $\theta \in (0,1)$ and $q \in [1,\infty)$, \[ \Vert z \Vert_{[X,Y]_{\theta,q}}^q := \int \limits_{t=0}^\infty \brac{t^{-\theta}\ K(z,t)}^q \frac{dt}{t}, \] with the usual modification $\Vert z \Vert_{[X,Y]_{\theta,\infty}} := \sup_{t > 0} t^{-\theta}\ K(z,t)$ for $q = \infty$. The space $[X,Y]_{\theta,q}$ with norm $\Vert \cdot \Vert_{[X,Y]_{\theta,q}}$ is then defined as the set of all $z \in X+Y$ such that $\Vert z \Vert_{[X,Y]_{\theta,q}} < \infty$. \end{definition}
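For orientation, let us note that for the couple $\brac{L^1({\mathbb R}^n), L^\infty({\mathbb R}^n)}$ the $K$-functional can be computed explicitly: one can check (see \cite{Tartar07}) that
\[
K(f,t) = \int \limits_0^t f^\ast(s)\ ds \qquad \mbox{for all $t > 0$,}
\]
where $f^\ast$ denotes the decreasing rearrangement of $f$ introduced in the next subsection. This identity is the bridge between Definition~\ref{def:KMethod} and the Lorentz spaces, cf. Lemma~\ref{la:lpqinterpoldef}.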
\begin{proposition}\label{pr:bs:xyqintoxyqs} (Compare \cite[Lemma 22.2]{Tartar07})\\ Let $X,Y,Z$ be as in Definition \ref{def:KMethod}. If $1 \leq q < q' \leq \infty$, $\theta \in (0,1)$, then \[ [X,Y]_{\theta,q} \subset [X,Y]_{\theta,q'}, \] and the embedding is continuous. \end{proposition} \begin{proofP}{\ref{pr:bs:xyqintoxyqs}} Fix $\theta \in (0,1)$. Denote \[ E_p := [X,Y]_{\theta,p}, \quad \mbox{for }p \in [1,\infty]. \] Then for $q < \infty$, $t_0 > 0$, using that $K(z,t)$ is monotone increasing in $t$, \[ \begin{ma} \Vert z \Vert_{E_q}^q &=& \int \limits_{t=0}^\infty t^{-\theta q} \brac{K(z,t)}^q\ \frac{dt}{t}\\ &\geq& \int \limits_{t=t_0}^\infty t^{-\theta q} \brac{K(z,t)}^q\ \frac{dt}{t}\\ &\geq& \brac{K(z,t_0)}^q \frac{\brac{t_0}^{-\theta q}}{\theta q}, \end{ma} \] that is \[ t_0^{-\theta}\ K(z,t_0) \aleq{} \Vert z \Vert_{E_q}, \quad \mbox{for every $t_0 > 0$}, \] which implies \begin{equation}\label{eq:bs:xyinftyxyq} \Vert z \Vert_{E_\infty} \leq C_{\theta,q} \Vert z \Vert_{E_q} \quad \mbox{for any $q \in [1,\infty]$}. \end{equation} Thus, by H\"older's inequality for $\infty > q' > q$, \[ \begin{ma} \Vert z \Vert_{E_{q'}}^{q'} &=& \Vert t^{-\theta} K(z,t) \Vert_{L^{q'}\brac{(0,\infty),\frac{dt}{t}}}^{q'}\\ &\aleq{}& \Vert z \Vert_{E_\infty}^{q'-q}\ \Vert z \Vert_{E_q}^q\\ &\overset{\eqref{eq:bs:xyinftyxyq}}{\aleq{}}& \Vert z \Vert_{E_q}^{q'}. \end{ma} \] \end{proofP}
The following two fundamental lemmata tell us how linear and bounded or linear and compact operators defined on the spaces $X$ and $Y$ from Definition \ref{def:KMethod} behave on the interpolated spaces. \begin{lemma}[Interpolation Theorem]\label{la:interpolation} (See \cite[Lemma 22.3]{Tartar07})\\ Let $X_1,Y_1,Z_1$, $X_2,Y_2,Z_2$ be as in Definition \ref{def:KMethod}. Assume there is a linear operator $T$ defined on $Z = X+Y$ such that $T: X_1 \to X_2$ and $T: Y_1 \to Y_2$ and assume there are constants $\Lambda_X, \Lambda_Y > 0$ such that \begin{equation}\label{eq:interpol:Tx1x2y1y2} \Vert T \Vert_{\mathcal{L}(X_1,X_2)} \leq \Lambda_X,\quad \Vert T \Vert_{\mathcal{L}(Y_1,Y_2)} \leq \Lambda_Y. \end{equation} Denote for $\theta \in (0,1)$ and $q \in [1,\infty]$, $E_1 := [X_1,Y_1]_{\theta,q}$ and $E_2 := [X_2,Y_2]_{\theta,q}$. Then $T$ is a linear, bounded operator $T: E_1 \to E_2$ such that \[ \Vert T \Vert_{\mathcal{L}(E_1,E_2)} \leq \Lambda_X^{1-\theta} \Lambda_Y^{\theta}. \] \end{lemma} \begin{proofL}{\ref{la:interpolation}} Denote by $K_1$, $K_2$ the $K(\cdot,\cdot)$ used to define $E_1$ and $E_2$, respectively. For $z \in E_1$ and any decomposition $z = x_1 + y_1$, $x_1 \in X_1$, $y_1 \in Y_1$ we have \[ \begin{ma} t^{-\theta}K_2(Tz,t) &\leq& t^{-\theta} \brac{\Vert Tx_1 \Vert_{X_2} + t \Vert Ty_1 \Vert_{Y_2}}\\ &\overset{\eqref{eq:interpol:Tx1x2y1y2}}{\leq}& \Lambda_X^{1-\theta} \Lambda_Y^\theta \ \brac{\frac{\Lambda_Y}{\Lambda_X} t}^{-\theta} \brac{\Vert x_1 \Vert_{X_1} + t \frac{\Lambda_Y}{\Lambda_X} \Vert y_1 \Vert_{Y_1}}.\\ \end{ma} \] Taking the infimum over all decompositions $z = x_1 + y_1$, this implies for $\gamma := \frac{\Lambda_Y}{\Lambda_X} > 0$, \[ t^{-\theta}K_2(Tz,t) \leq \Lambda_X^{1-\theta}\ \Lambda_Y^\theta\ (\gamma t)^{-\theta} K_1(z,\gamma t). \] Using the definition of $E_1,E_2$, we have shown \[ \Vert Tz \Vert_{E_2} \leq \Lambda_X^{1-\theta}\ \Lambda_Y^\theta\ \Vert z \Vert_{E_1}. \] \end{proofL}
\begin{lemma}[Compactness]\label{la:interpol:compact} (See \cite[Lemma 41.4]{Tartar07})\\ Let $X,Y,Z$ be as in Definition \ref{def:KMethod}. Let moreover $G$ be a Banach space and assume there is an operator $T$ defined on $Z = X + Y$ such that $T: X \to G$ is linear and continuous and $T: Y \to G$ is linear and compact. Then for any $\theta \in (0,1)$, $q \in [1,\infty]$, $T: [X,Y]_{\theta,q} \to G$ is compact. \end{lemma} \begin{proofL}{\ref{la:interpol:compact}} Fix $\theta \in (0,1)$. By Proposition~\ref{pr:bs:xyqintoxyqs} it suffices to prove the compactness of the embedding for $q = \infty$. Set $E := [X,Y]_{\theta,\infty}$. We denote by $\Lambda$ the norm of $T$ as a linear operator from $X$ to $G$.\\ Let $z_k \in E$ and assume that \begin{equation}\label{eq:int:zkeleq1} \Vert z_k \Vert_{E} \leq 1 \quad \mbox{for any $k \in {\mathbb N}$.} \end{equation} If there are infinitely many $z_k = 0$, there is nothing to prove, so assume that $z_k \neq 0$ for all $k \in {\mathbb N}$. Pick for any $k,n \in {\mathbb N}$, $x^n_k, y^n_k$ such that $x^n_k + y^n_k = z_k$ and \[ \Vert x^n_k \Vert + \frac{1}{n} \Vert y^n_k \Vert \leq 2K(z_k,\frac{1}{n}) \overset{\eqref{eq:int:zkeleq1}}{\leq} 2 \frac{1}{n^\theta}. \] Consequently, for any $k,l,n \in {\mathbb N}$, \[ \begin{ma} \Vert T z_k - T z_l \Vert_G &\leq& \Vert T (x^n_k -x^n_l) \Vert_G + \Vert T(y^n_k - y^n_l) \Vert_G\\ &\leq& \Lambda \brac{\Vert x^n_k \Vert_X + \Vert x^n_l \Vert_X} + \Vert T(y^n_k - y^n_l) \Vert_G\\ &\leq& \frac{4\Lambda}{n^\theta} + \Vert T(y^n_k - y^n_l) \Vert_G, \end{ma} \] and \begin{equation}\label{eq:int:ynk}
\Vert y^n_k \Vert_Y \leq 2 n^{1-\theta} \quad \mbox{for any $k,n \in {\mathbb N}$}. \end{equation} Now we apply a Cantor diagonal sequence argument as follows: Set \[
\left (k_{i,0} \right )_{i=1}^\infty := \left (i \right )_{i=1}^\infty \] and choose for a given sequence $\left (k_{i,n} \right )_{i=1}^\infty$ a subsequence $\left (k_{i,n+1} \right )_{i=1}^\infty$ such that \[
k_{i,n} = k_{i,n+1} \quad \mbox{for any $1 \leq i \leq n$} \] and \[
\Vert T(y^{n+1}_{k_{i,n+1}} - y^{n+1}_{k_{j,n+1}}) \Vert_G \leq \frac{1}{n+1} \quad \mbox{for any $i,j \geq n+1$}. \] The latter is possible, as $T$ is a compact operator from $Y$ to $G$ and \eqref{eq:int:ynk} implies for any fixed $n+1 \in {\mathbb N}$ a uniform bound on $y^{n+1}_{k_{i,n}}$, $i \in {\mathbb N}$.\\ Finally, for any $1 < i < j < \infty$, setting $k_i := k_{i,i+1}$ \[
\Vert T z_{k_i} - T z_{k_j} \Vert_G \leq \frac{4\Lambda}{i^\theta} + \frac{1}{i}, \] which implies convergence for $i \to \infty$. \end{proofL}
\subsection{Lorentz Spaces} In this section, we recall the definition of Lorentz spaces, which are a refinement of the standard Lebesgue-spaces. For more on Lorentz spaces, the interested reader might consider \cite{Hunt66}, \cite{Ziemer}, \cite[Section 1.4]{GrafC08}. \begin{definition}[Lorentz Space] Let $f: {\mathbb R}^n \to {\mathbb R}$ be a Lebesgue-measurable function. We denote \[ d_f(\lambda) := \abs{\{ x \in {\mathbb R}^n\ :\ \abs{f(x)} > \lambda\}}. \] The decreasing rearrangement of $f$ is the function $f^\ast$ defined on $[0,\infty)$ by \[ f^\ast(t) := \inf \{s > 0:\ d_f(s) \leq t\}. \] For $1 \leq p \leq \infty$, $1 \leq q \leq \infty$, the Lorentz space $L^{p,q} \equiv L^{p,q}({\mathbb R}^n)$, is the set of measurable functions $f: {\mathbb R}^n \to {\mathbb R}$ such that $\Vert f \Vert_{L^{p,q}} < \infty$, where \[ \Vert f \Vert_{L^{p,q}} := \begin{cases}
\left (\int \limits_0^\infty \left (t^{\frac{1}{p}} f^\ast (t) \right )^q \frac{dt}{t} \right )^\frac{1}{q}, \quad &\mbox{if $q < \infty$,}\\
\sup_{t > 0} t^{\frac{1}{p}} f^\ast (t),\quad &\mbox{if $q = \infty$, $p < \infty$},\\
\Vert f \Vert_{L^\infty({\mathbb R}^n)},\quad &\mbox{if $q = \infty$, $p = \infty$}.
\end{cases} \] Observe that $\Vert \cdot \Vert_{L^{p,q}}$ does \emph{not} satisfy the triangle inequality. \end{definition} \begin{remark} We have not defined the space $L^{\infty,q}$ for $q \in [1,\infty)$. For the sake of overview, whenever a result on Lorentz spaces is stated in a way that $L^{p,q}$ for $p = \infty$, $q \in [1,\infty]$ is admissible, we in fact only claim that result for $p = \infty$, $q = \infty$. \end{remark}
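As a basic example, consider the characteristic function $f = \chi_A$ of a measurable set $A \subset {\mathbb R}^n$ with $0 < \abs{A} < \infty$: here $d_f(\lambda) = \abs{A}$ for $\lambda \in (0,1)$ and $d_f(\lambda) = 0$ for $\lambda \geq 1$, hence $f^\ast = \chi_{[0,\abs{A})}$ and, for $p, q < \infty$,
\[
\Vert \chi_A \Vert_{L^{p,q}} = \brac{\int \limits_0^{\abs{A}} t^{\frac{q}{p}}\ \frac{dt}{t}}^{\frac{1}{q}} = \brac{\frac{p}{q}}^{\frac{1}{q}}\ \abs{A}^{\frac{1}{p}}.
\]
In particular, for fixed $p$ all the spaces $L^{p,q}$ assign to $\chi_A$ comparable quantities $\approx \abs{A}^{\frac{1}{p}}$.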
An alternative definition of Lorentz spaces using interpolation can be stated as follows. \begin{lemma}\label{la:lpqinterpoldef} (See \cite[Lemma 22.6, Theorem 26.3]{Tartar07})\\ Let $q \in [1,\infty]$. For $1 < p < \infty$ \[ L^{p,q} = \left [L^1({\mathbb R}^n), L^\infty({\mathbb R}^n)\right ]_{1-\frac{1}{p},q}, \] for $2 < p < \infty$ \[ L^{p,q} = \left [L^2({\mathbb R}^n), L^\infty({\mathbb R}^n)\right ]_{1-\frac{2}{p},q}, \] and finally for $1 < p < 2$, \[ L^{p,q} = \left [L^1({\mathbb R}^n), L^2({\mathbb R}^n)\right ]_{2-\frac{2}{p},q}, \] and the norms of the respective spaces are equivalent. \end{lemma}
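Let us record a typical way in which Lemma~\ref{la:interpolation} and Lemma~\ref{la:lpqinterpoldef} will be combined: if a linear operator $T$ is bounded from $L^1({\mathbb R}^n)$ to $L^1({\mathbb R}^n)$ and from $L^2({\mathbb R}^n)$ to $L^2({\mathbb R}^n)$, then for any $p \in (1,2)$ and $q \in [1,\infty]$,
\[
\Vert Tf \Vert_{L^{p,q}} \aleq{p,q} \Vert f \Vert_{L^{p,q}} \qquad \mbox{for all $f \in L^{p,q}$,}
\]
since $L^{p,q} = \left [L^1({\mathbb R}^n), L^2({\mathbb R}^n)\right ]_{2-\frac{2}{p},q}$ with equivalent norms.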
For H\"older's inequality on Lorentz spaces, we will need moreover the following result on the decreasing rearrangement. \begin{proposition}\label{pr:fgastleqfaga}(See \cite[Proposition 1.4.5]{GrafC08})\\ For any $f,g \in {\mathcal{S}}({\mathbb R}^n)$ and any $t > 0$, \[ (fg)^\ast(2t) \leq f^\ast(t)\ g^\ast(t). \] \end{proposition} \begin{proofP}{\ref{pr:fgastleqfaga}} We have for any $s$, $s_1$, $s_2 > 0$ such that $s = s_1 s_2$, \[ \{x \in {\mathbb R}^n:\ \abs{f(x)g(x)} > s \} \subset \{x \in {\mathbb R}^n:\ \abs{f(x)} > s_1 \} \cup \{x \in {\mathbb R}^n:\ \abs{g(x)} > s_2 \}, \] so \[ d_{fg}(s) \leq d_f(s_1) + d_f(s_2). \] Consequently, for any $t > 0$, \[ \{s > 0:\ d_{fg}(s) \leq 2t\} \supset \{s = s_1 s_2 > 0:\ d_{f}(s_1) \leq t,\ d_{g}(s_2) \leq t\}, \] which implies \[
(fg)^\ast(2t) \leq \inf \{s = s_1 s_2 > 0:\ d_{f}(s_1) \leq t,\ d_{g}(s_2) \leq t\}. \] Of course, \[
d_f(f^\ast(t) + \frac{1}{k}) \leq t,\quad \mbox{and}\quad d_g(g^\ast(t) + \frac{1}{k}) \leq t \quad \mbox{for any $k \in {\mathbb N}$}, \] so for any $k \in {\mathbb N}$ \[ \inf \{s = s_1 s_2 > 0:\ d_{f}(s_1) \leq t,\ d_{g}(s_2) \leq t\} \leq (f^\ast(t) + \frac{1}{k})(g^\ast(t) + \frac{1}{k}). \] We conclude by letting $k$ go to $\infty$. \end{proofP}
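In the counting-measure analogue of Proposition~\ref{pr:fgastleqfaga}, the decreasing rearrangement of a finite array is simply the array of absolute values sorted in decreasing order, and the same two-set argument gives $(fg)^\ast(2k) \leq f^\ast(k)\, g^\ast(k)$. A minimal numerical sanity check of this discrete analogue (a sketch, assuming NumPy is available; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    f = rng.normal(size=200)
    g = rng.normal(size=200)
    # counting-measure rearrangements: f^*(k) is the (k+1)-th largest of |f|
    fs = np.sort(np.abs(f))[::-1]
    gs = np.sort(np.abs(g))[::-1]
    fgs = np.sort(np.abs(f * g))[::-1]
    for k in range(100):
        # discrete analogue of (fg)^*(2t) <= f^*(t) g^*(t)
        assert fgs[2 * k] <= fs[k] * gs[k] + 1e-12
print("ok")
```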
\begin{proposition}[Basic Lorentz Space Operations]\label{pr:dl:lso} Let $f \in L^{p_1,q_1}$ and $g \in L^{p_2,q_2}$, $1 \leq p_1,p_2,q_1,q_2 \leq \infty$. \begin{itemize}
\item [(i)] If $\frac{1}{p_1} + \frac{1}{p_2} = \frac{1}{p} \in [0,1]$ and $\frac{1}{q_1} + \frac{1}{q_2} = \frac{1}{q}$ then $f g \in L^{p,q}$ and
\[
\Vert fg \Vert_{L^{p,q}} \aleq{p,q} \Vert f \Vert_{L^{p_1,q_1}}\ \Vert g \Vert_{L^{p_2,q_2}}.
\]
\item [(ii)] If $\frac{1}{p_1} + \frac{1}{p_2} - 1 = \frac{1}{p} > 0$ and $\frac{1}{q_1} + \frac{1}{q_2} = \frac{1}{q}$ then $f \ast g \in L^{p,q}$ and
\[
\Vert f\ast g \Vert_{L^{p,q}} \aleq{p,q} \Vert f \Vert_{L^{p_1,q_1}}\ \Vert g \Vert_{L^{p_2,q_2}}.
\]
\item [(iii)] For $p_1 \in (1,\infty)$, $f$ belongs to $L^{p_1}({\mathbb R}^n)$ if and only if $f \in L^{p_1,p_1}$. The ``norms'' of $L^{p_1,p_1}$ and $L^{p_1}$ are equivalent.
\item [(iv)] If $p_1 \in (1,\infty)$ and $q \in [q_1,\infty]$ then also $f \in L^{p_1,q}$.
\item [(v)] Finally, $\frac{1}{\abs{\cdot}^\lambda} \in L^{\frac{n}{\lambda},\infty}$, whenever $\lambda \in (0,n)$.
\end{itemize} \end{proposition} \begin{proofP}{\ref{pr:dl:lso}} As for (i), this is proved using classical H\"older inequality and Proposition~\ref{pr:fgastleqfaga} in the following way: \[ \begin{ma}
&&\int \limits_0^\infty \brac{t^{\frac{1}{p}} (fg)^\ast(t) }^q \frac{dt}{t}\\ &\overset{\sref{P}{pr:fgastleqfaga}}{\aleq{}}& \int \limits_0^\infty \brac{t^{\frac{q_1}{p_1}} f^\ast(t)^{q_1}\ t^{-1} }^{\frac{q}{q_1}}\ \brac{t^{\frac{q_2}{p_2}} g^\ast(t)^{q_2}\ t^{-1} }^{\frac{q}{q_2}}\ dt\\ &\leq& \brac{\int \limits_0^\infty t^{\frac{q_1}{p_1}} f^\ast(t)^{q_1} \ \frac{dt}{t}}^{\frac{q}{q_1}}\ \brac{\int \limits_0^\infty t^{\frac{q_2}{p_2}} g^\ast(t)^{q_2} \ \frac{dt}{t}}^{\frac{q}{q_2}}. \end{ma} \] As for (ii), this is the result in \cite[Theorem 2.6]{ONeil63}. As for (iii), this follows by the definition of $f^\ast$. Property (iv) was proven in Proposition~\ref{pr:bs:xyqintoxyqs}.\\ Lastly we consider Property (v). One checks that \[
\{x\in {\mathbb R}^n: \abs{x}^{-\lambda} > s\} = B_{s^{-\frac{1}{\lambda}}}(0), \] so \[
\brac{\abs{\cdot}^{-\lambda}}^\ast(t) = c_n\ t^{-\frac{\lambda}{n}}, \] which readily implies \[
\Vert \abs{\cdot}^{-\lambda} \Vert_{L^{p,\infty}} = c_n \sup_{t > 0} t^{\frac{1}{p}}\ t^{-\frac{\lambda}{n}}, \] which is finite if and only if $p = \frac{n}{\lambda}$. \end{proofP}
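The computation behind Property (v) can be checked numerically in the illustrative case $n = 1$: there $d_f(s) = \abs{\{\abs{x}^{-\lambda} > s\}} = 2 s^{-1/\lambda}$, hence $f^\ast(t) = (2/t)^{\lambda}$, and $t^{\frac{1}{p}} f^\ast(t)$ is bounded in $t$ exactly for $p = 1/\lambda = n/\lambda$. A sketch of a grid check (assuming NumPy; the grid bounds and the value of $\lambda$ are arbitrary choices):

```python
import numpy as np

lam = 0.5                                   # exponent lambda in (0, n), here n = 1
x = np.linspace(-50.0, 50.0, 2_000_000)     # even point count: avoids x = 0 exactly
dx = x[1] - x[0]
f = np.abs(x) ** (-lam)

for s in [1.0, 2.0, 5.0]:
    # distribution function d_f(s) = |{ |x|^{-lam} > s }| = 2 s^{-1/lam}
    d_numeric = np.count_nonzero(f > s) * dx
    assert abs(d_numeric - 2.0 * s ** (-1.0 / lam)) < 1e-3

# consequently f^*(t) = (2/t)^lam, and t^{1/p} f^*(t) is constant iff 1/p = lam
t = np.geomspace(1e-3, 1e3, 7)
assert np.allclose(t ** lam * (2.0 / t) ** lam, 2.0 ** lam)
print("ok")
```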
As the Lorentz spaces can be defined by interpolation, see Lemma~\ref{la:lpqinterpoldef}, by the Interpolation Theorem, Lemma~\ref{la:interpolation}, the following holds. \begin{proposition}[Fourier Transform in Lorentz Spaces] \label{pr:fourierlpest} For any $f \in {\mathcal{S}}$, $p \in (1,2)$, $q \in [1,\infty]$ we have \[ \Vert f^\wedge \Vert_{L^{p',q}} \leq C_p \Vert f \Vert_{L^{p,q}}, \quad \Vert f^\vee \Vert_{L^{p',q}} \leq C_p \Vert f \Vert_{L^{p,q}}. \]
Here, $\frac{1}{p'} + \frac{1}{p} = 1$. \end{proposition}
\begin{proposition}[Scaling in Lorentz Spaces]\label{pr:scalinglorentz} Let $\lambda > 0$ and $f \in {\mathcal{S}}({\mathbb R}^n)$. If we denote $\tilde{f}(\cdot) := f(\lambda \cdot)$, then \[ \Vert \tilde{f} \Vert_{L^{p,q}} = \lambda^{-\frac{n}{p}} \Vert f \Vert_{L^{p,q}}. \] \end{proposition} \begin{proofP}{\ref{pr:scalinglorentz}} We have that $d_{\tilde{f}}(s) = \lambda^{-n} d_f(s)$ for any $s > 0$ and thus $\tilde{f}^\ast(t) = f^\ast (\lambda^n t)$ for any $t > 0$. Hence, \[ \int \limits_0^\infty \left (t^{\frac{1}{p}} \tilde{f}^\ast(t) \right )^q\ \frac{dt}{t} = \lambda^{-q\frac{n}{p}} \int \limits_0^\infty \left ((\lambda^n t)^{\frac{1}{p}} f^\ast(\lambda^n t) \right )^q\ \frac{dt}{t} = \lambda^{-q\frac{n}{p}} \Vert f \Vert_{L^{p,q}}^q. \] We can conclude. \end{proofP}
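The identity $d_{\tilde{f}}(s) = \lambda^{-n} d_f(s)$ underlying this proof can be illustrated numerically for $n = 1$ with a concrete profile (a sketch, assuming NumPy; the Gaussian profile, grid, and levels are arbitrary choices):

```python
import numpy as np

lam = 3.0                                   # dilation factor lambda
x = np.linspace(-40.0, 40.0, 1_000_000)
dx = x[1] - x[0]
f = np.exp(-x ** 2)
f_scaled = np.exp(-(lam * x) ** 2)          # tilde f(x) = f(lam x)

for s in [0.1, 0.5, 0.9]:
    d_f = np.count_nonzero(f > s) * dx
    d_scaled = np.count_nonzero(f_scaled > s) * dx
    # d_{tilde f}(s) = lam^{-n} d_f(s), here n = 1
    assert abs(d_scaled - d_f / lam) < 1e-3
print("ok")
```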
\begin{proposition}[H\"older inequality in Lorentz Spaces]\label{pr:hoeldercompactsupp} Let $\operatorname{supp} f \subset \bar{D}$, where $D \subset {\mathbb R}^n$ is a bounded measurable set. Then, whenever $\infty > p_1 > p \geq 1$, $q \in [1,\infty]$ \begin{equation}\label{eq:bs:estvp1q1} \Vert f \Vert_{L^{p,q}} \leq C_{p,p_1,q}\ \abs{D}^{\frac{1}{p} - \frac{1}{p_1}}\ \Vert f \Vert_{L^{p_1}}. \end{equation} \end{proposition} \begin{proofP}{\ref{pr:hoeldercompactsupp}} Denote by $\chi \equiv \chi_D$ the characteristic function of the set $D \subset {\mathbb R}^n$. One checks that \[ \chi^\ast(t) = \begin{cases}
1\quad &\mbox{if $t < \abs{D}$},\\
0\quad &\mbox{if $t \geq \abs{D}$.}
\end{cases} \] Consequently, \[ \Vert \chi \Vert_{L^{p_2,q_2}} \aeq{} \abs{D}^{\frac{1}{p_2}} \quad \mbox{whenever $1 \leq p_2 < \infty$, $q_2 \in [1,\infty]$}. \] One concludes by applying H\"older's inequality in Lorentz spaces, Proposition~\ref{pr:dl:lso} (i), choosing $q_2 = q$ and $p_2$ such that \[
\frac{1}{p_2} + \frac{1}{p_1} = \frac{1}{p}, \] using also the continuous embedding $L^{p_1} \subset L^{p_1,\infty}$, \[
\Vert f \Vert_{L^{p,q}} = \Vert f \chi \Vert_{L^{p,q}} \aleq{} \Vert f \Vert_{L^{p_1,\infty}}\ \Vert \chi \Vert_{L^{p_2,q}} \aleq{} \Vert f \Vert_{L^{p_1}}\ \abs{D}^{\frac{1}{p}-\frac{1}{p_1}}. \]
\end{proofP}
\subsection{Fractional Sobolev Spaces}\label{sec:fracsobspace} In the following section we will give two equivalent definitions of the fractional Sobolev space $H^s \equiv H^s({\mathbb R}^n)$, $s > 0$. The first definition is motivated by the interpretation of the Laplace operator as a Fourier multiplier operator. \begin{definition}[Fractional Sobolev Spaces by Fourier Transform]\label{def:fracSob} Let $f \in L^2({\mathbb R}^n)$. We say that for some $s \geq 0$ the function $f \in H^s \equiv H^s({\mathbb R}^n)$ if and only if $\Delta^\frac{s}{2} f \in L^2({\mathbb R}^n)$. Here, the operator $\Delta^{\frac{s}{2}}$ is defined as \[ \Delta^{\frac{s}{2}} f := \left (\abs{\cdot}^s f^\wedge\right )^\vee. \] The norm under which $H^s({\mathbb R}^n)$ becomes a Hilbert space is \[ \Vert f \Vert_{H^s({\mathbb R}^n)}^2 := \Vert f \Vert_{L^2({\mathbb R}^n)}^2 + \Vert \laps{s} f \Vert_{L^2({\mathbb R}^n)}^2. \] \end{definition} \begin{remark}\label{rem:lapsClasLap} Observe that the definition of $\laps{2}$ coincides with the usual laplacian only up to a multiplicative constant, but this saves us from the nuisance of dealing with those standard factors in every single calculation. \end{remark} \begin{remark} Observe that $\laps{s} f$ is a real function whenever $f \in {\mathcal{S}}({\mathbb R}^n,{\mathbb R})$. In fact, this is true for any multiplier operator $M$ defined for some multiplier $m \in C^\infty({\mathbb R}^n \backslash \{0\})$ as \[
(Mf)^\wedge(\cdot) := m(\cdot)\ f^\wedge (\cdot), \] once we assume the additional condition \begin{equation}\label{eq:remfracsob:mmxibarmxi}
\overline{m(\xi)} = m(-\xi) \quad \mbox{for any $\xi \in {\mathbb R}^n \backslash \{0\}$}, \end{equation} where by $\overline{\cdot}$ we denote the complex conjugate. This again can be seen as follows: \[ \begin{ma}
\overline{M f} &=& \overline{\brac{m(\cdot)\ f^\wedge}^\vee}\\ &=& \brac{\overline{m(\cdot)\ f^\wedge}}^\wedge\\ &\overset{\eqref{eq:remfracsob:mmxibarmxi}}{=}& \brac{m(-\cdot)\ f^\wedge(-\cdot)}^\wedge\\ &=& \brac{ \brac{Mf}^{\wedge}(-\cdot)}^\wedge\\ &=& \brac{ \brac{Mf}^{\vee}(\cdot)}^\wedge \qquad = Mf. \end{ma} \] \end{remark} \begin{remark} In Section \ref{ss:idlaps} we will prove an integral representation for the fractional laplacian $\laps{s}$. \end{remark} On the other hand, fractional Sobolev spaces can be defined by interpolation: \begin{lemma}[Fractional Sobolev Spaces by Interpolation]\label{la:fracsobdef2}${}$\\ (See \cite[Chapter 23]{Tartar07})\\ Let $s \in (0,\infty)$. Then \[
H^s({\mathbb R}^n) = [W^{i,2}({\mathbb R}^n),W^{j,2}({\mathbb R}^n)]_{\theta,2}, \] with equivalent norms, whenever $\theta = \frac{s-i}{j-i} \in(0,1)$ for $i < s < j$, $i,j \in {\mathbb N}_0$. \end{lemma}
\begin{lemma}[Compactly Supported Smooth Functions are Dense]\label{la:Tartar07:Lemma15.10} (see \cite[Lemma 15.10.]{Tartar07})\\ The space $C_0^\infty({\mathbb R}^n) \subset H^s({\mathbb R}^n)$ is dense for any $s \geq 0$. \end{lemma}
Our next goal is Poincar\'{e}'s inequality. As we want to use the standard blow up argument to prove it, we premise a (trivial) uniqueness and a compactness result: \begin{lemma}[Uniqueness of solutions]\label{la:bs:uniqueness} Let $f \in H^s({\mathbb R}^n)$, $s > 0$. If $\Delta^{\frac{s}{2}} f \equiv 0$, then $f \equiv 0$. \end{lemma} \begin{proofL}{\ref{la:bs:uniqueness}} As $f \in H^s({\mathbb R}^n)$, $f^\wedge$ exists and $f^\wedge(\xi) = \abs{\xi}^{-s} 0 = 0$ for almost every $\xi \in {\mathbb R}^n$. Thus, $f^\wedge \equiv 0$ as $L^2$-function and we conclude that also $f \equiv 0$. \end{proofL}
\begin{lemma}[Compactness]\label{la:bs:compact} Let $D \subset {\mathbb R}^n$ be a smoothly bounded domain, $s > 0$. Assume that there is a constant $C > 0$ and $f_k \in H^s({\mathbb R}^n)$, $k \in {\mathbb N}$, such that for any $k \in {\mathbb N}$ the conditions $\operatorname{supp} f_k \subset \bar{D}$ and $\Vert f_k \Vert_{H^s} \leq C$ hold. Then there exists a subsequence $f_{k_i}$, such that $f_{k_i} \xrightarrow{i \to \infty} f \in H^s$ weakly in $H^s$, strongly in $L^2({\mathbb R}^n)$, and pointwise almost everywhere. Moreover, $\operatorname{supp} f \subset \bar{D}$. \end{lemma} \begin{proofL}{\ref{la:bs:compact}} Fix $D \subset {\mathbb R}^n$ and let $\eta \in C_0^\infty(2D)$, $\eta \equiv 1$ on $D$. Define the operator \[
S:\ v \in L^2({\mathbb R}^n) \mapsto \eta v. \] As $D$ is a bounded subset, $S$ is compact as an operator $W^{j,2}({\mathbb R}^n) \to L^2({\mathbb R}^n)$ for any $j \in {\mathbb N}$ and continuous as an operator $L^2({\mathbb R}^n) \to L^2({\mathbb R}^n)$. Consequently, Lemma~\ref{la:interpol:compact} for $G = X = L^2$, $Y = W^{j,2}$ and Lemma~\ref{la:fracsobdef2} imply that $S$ is also a compact operator $H^s({\mathbb R}^n) \to L^2({\mathbb R}^n)$ for all $s \in (0,j)$. As $S$ is the identity on all functions $f \in L^2({\mathbb R}^n)$ such that $\operatorname{supp} f \subset \bar{D}$, we conclude the proof of the claim of convergence in $L^2$ and pointwise almost everywhere, which implies also the support condition. Lastly, the weak convergence result stems from the fact that $H^s$ is a Hilbert space. \end{proofL}
\begin{remark} As for weak convergence, one can prove that $f_k \to f$ weakly in $H^s({\mathbb R}^n)$ for some $s > 0$ implies that $\laps{s} f_k \to \laps{s}f$ weakly in $L^2$. In fact assume that $f_k \to f$ weakly in $H^s({\mathbb R}^n)$ and in particular $\Vert f_k \Vert_{H^s} \leq C$. For any $\varphi \in C_0^\infty({\mathbb R}^n)$, \[
\int \limits_{{\mathbb R}^n} \laps{s}f_k\ \varphi = \int \limits_{{\mathbb R}^n} f_k\ \laps{s}\varphi \xrightarrow{k \to \infty} \int \limits_{{\mathbb R}^n} f\ \laps{s}\varphi = \int \limits_{{\mathbb R}^n} \laps{s}f\ \varphi. \] Next, for any $w \in L^2({\mathbb R}^n)$ and $w_\varepsilon \in C_0^\infty({\mathbb R}^n)$ such that $\Vert w - w_\varepsilon\Vert_{L^2} \leq \varepsilon$, \[
\big \vert \int \limits_{{\mathbb R}^n} \laps{s}f_k\ w -\int \limits_{{\mathbb R}^n} \laps{s}f \ w \big \vert \leq \big \vert \int \limits_{{\mathbb R}^n} \laps{s}f_k\ w_\varepsilon -\int \limits_{{\mathbb R}^n} \laps{s}f \ w_\varepsilon \big \vert + C \varepsilon. \] Thus, letting $\varepsilon$ go to zero and $k$ to infinity, we can prove weak convergence of $\laps{s} f_k$ in $L^2({\mathbb R}^n)$. \end{remark}
With the compactness lemma, Lemma~\ref{la:bs:compact}, we can prove Poincar\'{e}'s inequality. As in \cite[Theorem A.2]{DR09Sphere} we will use a support-condition in order to ensure compactness of the embedding $H^s({\mathbb R}^n)$ into $L^2({\mathbb R}^n)$ (see Lemma~\ref{la:bs:compact}). This support condition can be seen as saying that all derivatives up to order $\lfloor \frac{s}{2} \rfloor$ are zero at the boundary, therefore it is not surprising that such an inequality should hold. \begin{lemma}[Poincar\'{e} Inequality]\label{la:poinc} For any smoothly bounded domain $D \subset {\mathbb R}^n$, $s > 0$, there exists a constant $C_{D,s} > 0$ such that \begin{equation}\label{eq:poinc} \Vert f \Vert_{L^2({\mathbb R}^n)} \leq C_{D,s}\ \Vert \laps{s} f \Vert_{L^2({\mathbb R}^n)}, \quad \mbox{for all $f \in H^s({\mathbb R}^n)$, $\operatorname{supp} f \subset \bar{D}$}. \end{equation} If $D = r\tilde{D}$ for some $r > 0$, then $C_{D,s} = C_{\tilde{D},s}r^{s}$. \end{lemma} \begin{remark} One checks as well, that $C_{D,s} = C_{\tilde{D},s}$ if $D$ is a mere translation of some smoothly bounded domain $\tilde{D}$. This is clear, as the operator $\laps{s}$ commutes with translations. \end{remark} \begin{proofL}{\ref{la:poinc}} We proceed as in the standard blow-up proof of Poincar\'{e}'s inequality: Assume \eqref{eq:poinc} is false and that there are functions $f_k \in H^s({\mathbb R}^n)$, $\operatorname{supp} f_k \subset \bar{D}$, such that \begin{equation}\label{eq:poinc:small} \Vert f_k \Vert_{L^2({\mathbb R}^n)} > k \Vert \laps{s} f_k \Vert_{L^2({\mathbb R}^n)}, \quad \mbox{for every $k \in {\mathbb N}$}. \end{equation} Dividing by $\Vert f_k \Vert_{L^2({\mathbb R}^n)}$ we can assume w.l.o.g. that $\Vert f_k \Vert_{L^2({\mathbb R}^n)} = 1$ for every $k \in {\mathbb N}$. Consequently, we have for every $k \in {\mathbb N}$ \[ \Vert f_k \Vert_{H^s({\mathbb R}^n)} \aleq{} \Vert f_k \Vert_{L^2({\mathbb R}^n)} + \Vert \laps{s} f_k \Vert_{L^2({\mathbb R}^n)} \aleq{} 1. 
\] Modulo passing to a subsequence of $(f_k)_{k \in {\mathbb N}}$, we can assume by Lemma~\ref{la:bs:compact} that $f_k$ converges weakly to some $f \in H^s({\mathbb R}^n)$ with $\operatorname{supp} f \subset \bar{D}$ and strongly in $L^2({\mathbb R}^n)$. This implies, that $\Vert f \Vert_{L^2({\mathbb R}^n)} = 1$ and \[ \Vert \laps{s} f \Vert_{L^2({\mathbb R}^n)} \leq \liminf_{k \to \infty}\ \Vert \laps{s} f_k \Vert_{L^2({\mathbb R}^n)} \overset{\eqref{eq:poinc:small}}{=} 0. \] But this is a contradiction, as Lemma~\ref{la:bs:uniqueness} implies that $f \equiv 0$.\\ If $D = r\tilde{D}$ for some $r > 0$, we define as usual a scaled function $\tilde{f}(x) := f(rx)$ and use that \[
\brac{\laps{s} \tilde{f}}(x) = r^s\ \brac{\laps{s} f}(rx) \] in order to conclude. \end{proofL}
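\begin{remark} The scaling identity used in the last step of the proof is a direct consequence of the behaviour of the Fourier transform under dilations (a sketch; the precise constants depend on the chosen Fourier convention): with $\tilde{f}(\cdot) := f(r \cdot)$ one has $\tilde{f}^\wedge(\xi) = r^{-n}\ f^\wedge(\xi/r)$, so \[ \brac{\laps{s} \tilde{f}}(x) = \brac{\abs{\cdot}^s\ r^{-n}\ f^\wedge(\cdot/r)}^\vee(x) = \brac{\abs{r\,\cdot}^s\ f^\wedge}^\vee(rx) = r^s\ \brac{\laps{s} f}(rx), \] where the middle equality is the substitution $\xi = r\eta$ in the inverse transform. \end{remark}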
A simple consequence of the ``standard Poincar\'{e} inequality'' is the following \begin{lemma}[Slightly more general Poincar\'{e} inequality]\label{la:poincExt} For any smoothly bounded domain $D \subset {\mathbb R}^n$, $0 < s \leq t$, there exists a constant $C_{D,t} > 0$ such that \[ \Vert \laps{s} f \Vert_{L^2({\mathbb R}^n)} \leq C_{D,t}\ \Vert \laps{t} f \Vert_{L^2({\mathbb R}^n)}, \quad \mbox{for all $f \in H^t({\mathbb R}^n)$, $\operatorname{supp} f \subset \bar{D}$}. \] If $D = r\tilde{D}$ for some $r > 0$, then $C_{D,t} = C_{\tilde{D},t}r^{t-s}$. \end{lemma} \begin{proofL}{\ref{la:poincExt}} We have \[ \begin{ma}
\Vert \laps{s} f \Vert_{L^2} &=& \Vert \abs{\cdot}^s\ f^\wedge \Vert_{L^2}\\ &\leq& \Vert \abs{\cdot}^t\ f^\wedge \Vert_{L^2({\mathbb R}^n \backslash B_1(0))} + \Vert f^\wedge \Vert_{L^2(B_1(0))}\\ &\leq& \Vert \laps{t}f \Vert_{L^2} + \Vert f \Vert_{L^2}\\ &\overset{\sref{L}{la:poinc}}{\leq}& C_{D,t}\ \Vert \laps{t} f \Vert_{L^2}. \end{ma} \] By scaling one concludes. \end{proofL}
The following lemma can be interpreted as an existence result for the equation $\laps{s} w = v$ - or as a variant of Poincar\'{e}'s inequality: \begin{lemma}\label{la:lapmsest2} Let $s \in (0,n)$, $p \in [2,\infty)$ such that \begin{equation}\label{eq:lapmsest:pcond} \frac{n-s}{n} > \frac{1}{p} \geq \frac{n-2s}{2n}. \end{equation} Then for any smoothly bounded set $D \subset {\mathbb R}^n$ there is a constant $C_{D,s,p}$ such that for any $v \in {\mathcal{S}}({\mathbb R}^n)$, $\operatorname{supp} v \subset \bar{D}$, we have $\lapms{s} v \in L^p({\mathbb R}^n)$ and \[ \Vert \lapms{s} v \Vert_{L^p({\mathbb R}^n)} \leq C_{D,p,s}\ \Vert v \Vert_{L^2}. \] Here, $\lapms{s} v$ is defined as $(\abs{\cdot}^{-s} v^\wedge )^\vee$. In particular, if $s \in (0,\frac{n}{2})$, \[ \Vert \lapms{s} v \Vert_{L^2({\mathbb R}^n)} \leq C_{D,s}\ \Vert v \Vert_{L^2}. \] If $D = r\tilde{D}$, then $C_{D,p,s} = r^{s+\frac{n}{p}-\frac{n}{2}}\ C_{\tilde{D},p,s}$. \end{lemma} \begin{proofL}{\ref{la:lapmsest2}} We want to make the following reasoning rigorous: \[ \begin{ma} \Vert \lapms{s} v \Vert_{L^p} &\overset{\ontop{\sref{P}{pr:fourierlpest}}{p \in [2,\infty)}}{\leq}& C_p\ \Vert (\lapms{s} v)^\wedge \Vert_{L^{p',p}}\\ &=& C_p\ \Vert \abs{\cdot}^{-s}\ v^\wedge \Vert_{L^{p',p}}\\ &\overset{(\star)}{\leq}& C_p\ \Vert \abs{\cdot}^{-s} \Vert_{L^{\frac{n}{s},\infty}}\ \Vert v^\wedge \Vert_{L^{q,p}}\\ &\overset{p \geq 2}{\leq}& C_p\ \Vert \abs{\cdot}^{-s} \Vert_{L^{\frac{n}{s},\infty}}\ \Vert v^\wedge \Vert_{L^{q,2}}\\ &\overset{\ontop{\sref{P}{pr:fourierlpest}}{q \geq 2}}{\leq}& C_{p,s,q}\ \Vert v \Vert_{L^{q',2}}\\ &\overset{\ontop{\sref{P}{pr:hoeldercompactsupp}}{q' \leq 2}}{\leq}& C_{s,q}\ C_D\ \Vert v \Vert_{L^2}. \end{ma} \] To do so, we need to find $q \in [2,\infty)$ such that $(\star)$ holds: \[ \frac{1}{p'} = \frac{1}{q} + \frac{s}{n} \] which is possible by virtue of \eqref{eq:lapmsest:pcond}. 
Then the validity of $(\star)$ follows from Proposition~\ref{pr:dl:lso} and we conclude scaling as in Proposition~\ref{pr:scalinglorentz}. \end{proofL}
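The choice of $q$ at the end of the proof is pure exponent arithmetic: setting $\frac{1}{q} = \frac{1}{p'} - \frac{s}{n} = 1 - \frac{1}{p} - \frac{s}{n}$, condition \eqref{eq:lapmsest:pcond} is precisely what guarantees $q \in [2,\infty)$, i.e. $q' \in (1,2]$, so that every step of the chain applies. A mechanical check of this bookkeeping (a sketch in plain Python; the sampling grid over $(n,s,p)$ is arbitrary):

```python
# verify: (n-s)/n > 1/p >= (n-2s)/(2n)  implies  q in [2, infinity),
# where q is defined by 1/p' = 1/q + s/n, i.e. 1/q = 1 - 1/p - s/n
def q_exponent(n, s, p):
    inv_q = 1.0 - 1.0 / p - s / n
    assert inv_q > 0.0            # q finite, from 1/p < (n-s)/n
    return 1.0 / inv_q

for n in (1, 2, 3, 4):
    for s_tenths in range(1, 10 * n):
        s = s_tenths / 10.0                          # s in (0, n)
        lo = max((n - 2.0 * s) / (2.0 * n), 1e-9)    # lower bound on 1/p
        hi = (n - s) / n                             # strict upper bound on 1/p
        for k in range(1, 10):
            inv_p = lo + (hi - lo) * k / 10.0
            p = 1.0 / inv_p
            if p < 2.0:
                continue                             # lemma assumes p in [2, infinity)
            assert q_exponent(n, s, p) >= 2.0 - 1e-9
print("ok")
```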
The next lemma can be seen as an adaption of Hodge decomposition to the setting of the fractional laplacian: \begin{lemma}[Hodge Decomposition]\label{la:hodge} Let $f \in L^2({\mathbb R}^n)$, $s > 0$. Then for any smoothly bounded domain $D \subset {\mathbb R}^n$ there are functions $\varphi \in H^{s}({\mathbb R}^n)$, $h \in L^2({\mathbb R}^n)$ such that \[
\operatorname{supp} \varphi \subset \bar{D}, \] \[
\int \limits_{{\mathbb R}^n} h\ \laps{s} \psi = 0, \quad \mbox{for all } \psi \in C_0^\infty(D), \] and \[
f = \laps{s} \varphi + h\quad \mbox{almost everywhere in ${\mathbb R}^n$}. \] Moreover, \begin{equation}\label{eq:hodge:est} \Vert h \Vert_{L^2({\mathbb R}^n)} + \Vert \laps{s} \varphi \Vert_{L^2({\mathbb R}^n)} \leq 5 \Vert f \Vert_{L^2({\mathbb R}^n)}. \end{equation} \end{lemma} \begin{proofL}{\ref{la:hodge}} Set \[
E(v) := \int \limits_{{\mathbb R}^n} \abs{ \laps{s} v - f}^2,\quad \mbox{for $v \in H^s({\mathbb R}^n)$ with $\operatorname{supp} v \subset \bar{D}$}. \] Then, \begin{equation}\label{eq:hodge:coerc}
\Vert \laps{s} v \Vert_{L^2({\mathbb R}^n)}^2 \leq 2 E(v) + 2\Vert f \Vert_{L^2({\mathbb R}^n)}^2. \end{equation} As $D$ is smoothly bounded, Poincar\'{e}'s inequality, Lemma~\ref{la:poinc}, implies for any $v \in H^s({\mathbb R}^n)$ with $\operatorname{supp} v \subset \bar{D}$ \[
\Vert v \Vert_{H^s}^2 \leq C_{s,D} (E(v) + \Vert f \Vert_{L^2({\mathbb R}^n)}^2). \] Thus $E(\cdot)$ is coercive, i.e. for an $E(\cdot)$-minimizing sequence $(\varphi_k)_{k=1}^\infty \subset H^s({\mathbb R}^n)$ with $\operatorname{supp} \varphi_k \subset \bar{D}$ we can assume \[
\Vert \varphi_k \Vert_{H^s}^2 \leq C (E(0) + \Vert f \Vert_{L^2({\mathbb R}^n)}^2) = 2C \Vert f \Vert_{L^2({\mathbb R}^n)}^2,\quad \mbox{for every $k \in {\mathbb N}$}. \] By compactness, see Lemma~\ref{la:bs:compact}, up to taking a subsequence of $k \to \infty$, we have weak convergence of $\varphi_k$ to some $\varphi$ in $H^s({\mathbb R}^n)$ and strong convergence in $L^2$, as well as $\operatorname{supp} \varphi \subset \bar{D}$.\\ $E(\cdot)$ is lower semi-continuous with respect to weak convergence in $H^s({\mathbb R}^n)$, so $\varphi$ is a minimizer of $E(\cdot)$.\\ If we call $h := \laps{s} \varphi - f$, Euler-Lagrange-Equations give that \[
\int \limits_{{\mathbb R}^n} h\ \laps{s} \psi = 0, \quad \mbox{for any $\psi \in C_0^\infty(D)$}. \] Estimate \eqref{eq:hodge:coerc} for $\varphi$ and the fact that $\Vert h \Vert_{L^2}^2 = E(\varphi) \leq E(0)$ imply \eqref{eq:hodge:est}. \end{proofL} \begin{remark} In fact, $h$ will satisfy enhanced local estimates, similar to estimates for harmonic function, see Lemma~\ref{la:estharmonic}. \end{remark}
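The variational argument has a transparent finite-dimensional analogue: minimizing $E$ is a least-squares problem, $h$ is the residual, and the Euler--Lagrange equation says that $h$ is orthogonal to the range of the operator. A hypothetical discrete sketch (the matrix $A$ is an arbitrary stand-in for $\laps{s}$ acting on functions supported in $\bar{D}$; assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 60, 25
A = rng.normal(size=(N, m))     # discrete stand-in for laps{s} on supported functions
f = rng.normal(size=N)

phi, *_ = np.linalg.lstsq(A, f, rcond=None)    # minimizer of E(phi) = |A phi - f|^2
h = f - A @ phi                                # "Hodge" residual

# Euler-Lagrange: h is orthogonal to the range of A
assert np.allclose(A.T @ h, 0.0, atol=1e-8)
# E(phi) <= E(0) gives |h| <= |f|, and orthogonality gives |A phi| <= |f|,
# so the analogue of the estimate |h| + |laps{s} phi| <= 5 |f| holds
assert np.linalg.norm(h) <= np.linalg.norm(f) + 1e-12
assert np.linalg.norm(h) + np.linalg.norm(A @ phi) <= 5 * np.linalg.norm(f)
print("ok")
```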
\subsection{Annuli-Cutoff Functions}\label{ss:cutoff}
We will have to localize our equations, so we introduce as in \cite{DR09Sphere} a decomposition of unity as follows: Let $\eta \equiv \eta^0 \in C_0^\infty(B_2(0))$, $\eta \equiv 1$ in $B_1(0)$ and $0 \leq \eta \leq 1$ in ${\mathbb R}^n$. Let furthermore $\eta^k \in C_0^\infty(B_{2^{k+1}}(0)\backslash B_{2^{k-1}}(0))$, $k \in {\mathbb N}$, such that $0 \leq \eta^k \leq 1$, $\sum_{k=0}^\infty \eta^k = 1$ pointwise in ${\mathbb R}^n$ and $\abs{\nabla^i \eta^k} \leq C_i 2^{-ki}$ for any $i \in {\mathbb N}_0$.\\ We call $\eta^k_{r,x} := \eta^k (\frac{\cdot - x}{r})$, though we will often omit the subscript when $x$ and $r$ should be clear from the context.\\ For the sake of completeness we sketch the construction of those $\eta^k$: \begin{proof}[Construction of suitable cutoff functions] Firstly, pick $\eta \equiv \eta^0 \in C_0^\infty(B_2(0))$, $\eta \equiv 1$ on, say, $B_{\frac{3}{2}}(0)$ and $\eta(x) \in [0,1]$ for any $x \in {\mathbb R}^n$. We set for $k \in {\mathbb N}$, \begin{equation}\label{eq:defcutoff} \eta^k(\cdot) := \left (1-\sum \limits_{l=0}^{k-1} \eta^l(\cdot) \right ) \sum \limits_{l=0}^{k-1} \eta^l \left (\frac{\cdot}{2} \right ). \end{equation} Obviously, $\eta^k$ is smooth and we have the following crucial properties \begin{itemize} \item[(i)] $\eta^k \in C_0^\infty(B_{2^{k+1}}(0) \backslash \overline{B_{2^{k-1}}(0)})$, if $k \geq 1$, and \item[(ii)] $\sum_{l=0}^k \eta^l \equiv 1$ in $B_{2^k}(0)$, for every $k \geq 0$. \end{itemize} Indeed, this can be shown by induction: First, one checks that (i), (ii) are true for $k = 0,1$. Then, assume that (i) and (ii) hold for some positive integer $k-1$. By (ii) we have that $1-\sum_{l=0}^{k-1} \eta^l \equiv 0$ in $B_{2^{k-1}}(0)$ and (i) implies that $\sum_{l=0}^{k-1} \eta^l \left (\frac{\cdot}{2} \right ) \equiv 0$ in ${\mathbb R}^n \backslash B_{2^{k+1}}(0)$. This implies (i) for $k$.
Moreover, \[ \sum \limits_{l=0}^k \eta^l = \sum \limits_{l=0}^{k-1} \eta^l + \left (1-\sum \limits_{l=0}^{k-1} \eta^l \right )(\cdot) \sum \limits_{l=0}^{k-1} \eta^l \left (\frac{\cdot}{2} \right ). \] By (ii) for $k-1$, on $B_{2^{k-1}\cdot 2} = B_{2^{k}}$ the sum $\sum_{l=0}^{k-1} \eta^l \left (\frac{\cdot}{2} \right )$ is identically $1$, so (ii) holds for $k$ as well. Consequently, by induction (i) and (ii) hold for all $k \in {\mathbb N}_0$. It is easy to check that also $0 \leq \eta^k \leq 1$.\\
Moreover, one checks that $\abs{\nabla^i \eta^k} \leq C_i 2^{-ki}$ for every $i \in {\mathbb N}_0$: In fact, if we abbreviate $\psi^k := \sum_{l=0}^{k} \eta^l$, we have of course \[ \abs{\nabla^i \eta^k} \leq \abs{\nabla^i \psi^k} + \abs{\nabla^i \psi^{k-1}}. \] It is enough to show that $\abs{\nabla^i \psi^k} \leq C_i 2^{-ki}$: We have \[ \psi^{k} = \psi^{k-1} + (1-\psi^{k-1})(\cdot)\ \psi^{k-1} \brac{\frac{1}{2} \cdot}. \] By property (ii) we know that $\psi^{k} \equiv 1$ in $B_{2^{k}}$ and $\psi^{k} \equiv 0$ in ${\mathbb R}^n \backslash B_{2^{k+1}}$, so the gradient in those sets is trivial. On the other hand, in $B_{2^{k+1}} \backslash B_{2^{k}}$ we know that $\psi^{k-1} \equiv 0$, by property (i), hence $\psi^k = \psi^{k-1} (\frac{1}{2} \cdot)$ in this set. This implies \[ \nabla^i \psi^k = 2^{-i} (\nabla^i \psi^{k-1}) \left (\frac{1}{2} \cdot \right ). \] By induction one then arrives at $\abs{\nabla^i \psi^k} \leq 2^{-ki} \Vert \nabla^i \eta^0 \Vert_{L^\infty}$. \end{proof}
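The recursion \eqref{eq:defcutoff} and properties (i), (ii) lend themselves to a direct numerical test. A hypothetical one-dimensional implementation (a sketch, assuming NumPy; the concrete bump profile playing the role of $\eta^0$ is an arbitrary admissible choice):

```python
import numpy as np

def eta0(x):
    """Smooth bump: 1 on |x| <= 3/2, 0 on |x| >= 2, values in [0, 1]."""
    t = np.clip((np.abs(np.asarray(x, dtype=float)) - 1.5) / 0.5, 0.0, 1.0)
    def h(u):                       # h(u) = exp(-1/u) for u > 0, else 0
        out = np.zeros_like(u)
        pos = u > 1e-12
        out[pos] = np.exp(-1.0 / u[pos])
        return out
    return h(1.0 - t) / (h(1.0 - t) + h(t))

def make_etas(K):
    """Callables [eta^0, ..., eta^K] via eta^k = (1 - sum_{l<k} eta^l) sum_{l<k} eta^l(./2)."""
    etas = [eta0]
    for _ in range(K):
        prev = list(etas)           # freeze the current list for the closure
        def eta_k(x, prev=prev):
            return (1.0 - sum(e(x) for e in prev)) * sum(e(np.asarray(x) / 2.0) for e in prev)
        etas.append(eta_k)
    return etas

K = 5
etas = make_etas(K)
x = np.linspace(-100.0, 100.0, 200_001)
vals = [e(x) for e in etas]

for k in range(1, K + 1):
    v = vals[k]
    # (i): support in the annulus 2^{k-1} <= |x| <= 2^{k+1}
    outside = (np.abs(x) < 2.0 ** (k - 1) - 1e-9) | (np.abs(x) > 2.0 ** (k + 1) + 1e-9)
    assert np.max(np.abs(v[outside])) < 1e-12
    assert v.min() > -1e-12 and v.max() < 1.0 + 1e-12     # 0 <= eta^k <= 1
    ball = np.abs(x) <= 2.0 ** k
    assert np.allclose(sum(vals[: k + 1])[ball], 1.0)     # (ii) on B_{2^k}
print("ok")
```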
We want to estimate some $L^p$-norms of $\laps{s} \eta^k_{r,x}$. In order to do so, we will need the following Proposition: \begin{proposition}\label{pr:weirdgwedgeest} (Cf. \cite[Exercise 2.2.14, p.108]{GrafC08})\\ For every $g \in {\mathcal{S}}({\mathbb R}^n)$, $p \in [1,2]$, $s \geq 0$, $-\infty < \alpha < n \frac{2-p}{2p} < \beta < \infty$, we have \[ \Vert \brac{\laps{s}g}^\wedge \Vert_{L^p({\mathbb R}^n)} \leq C_{\alpha,\beta,p}\ \left (\Vert \laps{s+\alpha} g \Vert_{L^2({\mathbb R}^n)} + \Vert \laps{s+\beta} g \Vert_{L^2({\mathbb R}^n)} \right ). \] \end{proposition} \begin{proofP}{\ref{pr:weirdgwedgeest}} Set $q := \frac{2p}{2-p}$, so that $\frac{n}{q} = n \frac{2-p}{2p}$. We abbreviate $f := \brac{\laps{s} g}^\wedge$ and set $f = f_1 + f_2$, where $f_1 = f \chi_{B_1(0)}$. Here, $\chi_{B_1(0)}$ denotes as usual the characteristic function of $B_1(0)$. Then $f_1(x) = \abs{x}^\alpha f_1(x)\ \abs{x}^{-\alpha}$ and hence \[ \begin{ma} \Vert f_1 \Vert_{L^p({\mathbb R}^n)} &\leq& \Vert \abs{\cdot}^\alpha\ f_1 \Vert_{L^2(B_1(0))}\ \Vert \abs{\cdot}^{-\alpha} \Vert_{L^{q}(B_1(0))}\\
&\overset{q\alpha < n}{\leq}& C_\alpha \Vert \abs{\cdot}^\alpha f \Vert_{L^2(B_1(0))}. \end{ma} \] The same works for $f_2$, using that $q\beta > n$. Consequently, one arrives at \[ \Vert f \Vert_{L^p({\mathbb R}^n)} \leq C_{\alpha,\beta,p} (\Vert \abs{\cdot}^\alpha f \Vert_{L^2({\mathbb R}^n)} + \Vert \abs{\cdot}^\beta f \Vert_{L^2({\mathbb R}^n)}). \] Replacing again $f = \brac{\laps{s} g}^\wedge$ and using that $\abs{\cdot}^\alpha \brac{\laps{s}g}^\wedge = (\laps{\alpha+s} g)^\wedge$, $\abs{\cdot}^\beta \brac{\laps{s}g}^\wedge = (\laps{\beta+s} g)^\wedge$ and then applying Plancherel Theorem for $L^2$-functions, one concludes. \end{proofP}
\begin{proposition}\label{pr:etarkgoodest} For any $s > 0$, $p \in [1,2]$, there is a constant $C_{s,p} > 0$, such that for any $k \in {\mathbb N}_0$, $x \in {\mathbb R}^n$, $r > 0$ denoting as usual $p' := \frac{p}{p-1}$, \begin{equation}\label{eq:etarkgoodest:fourier} \Vert \left (\laps{s} \eta_{r,x}^k \right )^\wedge \Vert_{L^{p}({\mathbb R}^n)} \leq C_{s,p}\ (2^k r)^{-s + \frac{n}{p'}}. \end{equation} In particular, \begin{equation}\label{eq:etarkgoodest:linfty} \Vert \laps{s} \eta_{r,x}^k \Vert_{L^{p'}({\mathbb R}^n)} \leq C_{s,p}\ (2^k r)^{-s + \frac{n}{p'}}. \end{equation} \end{proposition} \begin{proofP}{\ref{pr:etarkgoodest}} Fix $r > 0$, $k \in {\mathbb N}$ and $x \in {\mathbb R}^n$. Set $\tilde{\eta}(\cdot) := \eta^k_{r,x} (x+2^kr\cdot)$. By scaling it then suffices to show that for a uniform constant $C_{s,p} > 0$ \begin{equation}\label{eq:etarkgoodest:prescaled} \Vert \left (\laps{s} \tilde{\eta} \right )^\wedge \Vert_{L^p({\mathbb R}^n)} \leq C_{s,p}. \end{equation} First of all, for any $i \in {\mathbb N}$ there is a constant $C_i > 0$ independent of $r$, $x$, $k$ such that \[ \Vert \tilde{\eta} \Vert_{W^{i,2}} \leq C_{i}. \] In fact, by the choice of the scaling for $\tilde{\eta}$, we have that $\operatorname{supp} \tilde{\eta} \subset B_2(0)$, $\abs{\nabla^j \tilde{\eta}} \leq C_i$ for any $1 \leq j \leq i$.\\ Consequently, as for any $\alpha,\beta \geq 0$ the spaces $H^{s+\alpha}$ and $H^{s+\beta}$ are by Lemma~\ref{la:fracsobdef2} (equivalent to) the interpolation spaces $[L^2({\mathbb R}^n),W^{i,2}({\mathbb R}^n)]_{\theta,2}$, for some $i = i_{\alpha,\beta} \in {\mathbb N}$ and $\theta \in (0,1)$, we have for any $\alpha,\beta,s \geq 0$ \begin{equation}\label{eq:etarkgoodest:etawi2est} \Vert \tilde{\eta} \Vert_{H^{s+\alpha}} + \Vert \tilde{\eta} \Vert_{H^{s+\beta}} \leq C_{\alpha,\beta,s} \Vert \tilde{\eta}
\Vert_{W^{i,2}({\mathbb R}^n)} \end{equation} But by Proposition~\ref{pr:weirdgwedgeest} for some admissible $\alpha, \beta \geq 0$ (depending on $p$; in the case $p = 2$ we can choose $\alpha = \beta = 0$), \[ \begin{ma} \Vert \left (\laps{s} \tilde{\eta} \right )^\wedge \Vert_{L^p({\mathbb R}^n)} &\leq& C_{\alpha,\beta,p} (\Vert \laps{s+\alpha} \tilde{\eta} \Vert_{L^2} + \Vert \laps{s+\beta} \tilde{\eta} \Vert_{L^2})\\ &\leq& C_{\alpha,\beta,p}\ (\Vert \tilde{\eta} \Vert_{H^{s+\alpha}}\ + \Vert \tilde{\eta} \Vert_{H^{s+\beta}})\\ &\leq& C_{\alpha,\beta,p,s}. \end{ma} \] Consequently, we have shown \eqref{eq:etarkgoodest:prescaled}, and by scaling back we conclude the proof of \eqref{eq:etarkgoodest:fourier}. Equation \eqref{eq:etarkgoodest:linfty} then follows by the continuity of the inverse Fourier-transform from $L^p$ to $L^{p'}$ whenever $p \in [1,2]$, see Proposition~\ref{pr:fourierlpest}. \end{proofP}
One important consequence is, that in a weak sense $\laps{s} P$ vanishes for a polynomial $P$, if $s$ is greater than the degree of $P$: \begin{proposition}\label{pr:lapspol} Let $\alpha$ be a multiindex $\alpha = (\alpha_1,\ldots,\alpha_n)$, where $\alpha_i \in {\mathbb N}_0$, $1 \leq i \leq n$. If $s > 0$ such that $\abs{\alpha} = \sum \limits_{i=1}^n \abs{\alpha_i} < s$ then \[
\lim_{R \to \infty} \int \limits_{{\mathbb R}^n} \eta_R x^\alpha\ \laps{s} \varphi = 0, \quad \mbox{for every $\varphi \in {\mathcal{S}}({\mathbb R}^n)$}. \] Here, $x^\alpha := (x_1)^{\alpha_1}\cdots (x_n)^{\alpha_n}$. \end{proposition} \begin{proofP}{\ref{pr:lapspol}} One checks that for some constant $c_\alpha$, \begin{equation}\label{eq:lapspol:xalphapsi}
x^\alpha \psi = c_\alpha \brac{\partial^\alpha \psi^\vee}^\wedge \quad \mbox{for all $\psi \in {\mathcal{S}}({\mathbb R}^n)$}. \end{equation} This and the fact that for any $\psi \in {\mathcal{S}}({\mathbb R}^n)$ we have also $\psi^\wedge \in {\mathcal{S}}({\mathbb R}^n)$ and $x^\alpha \psi \in {\mathcal{S}}({\mathbb R}^n)$ implies (using as well integration by parts) \[ \begin{ma}
&&\int \limits_{{\mathbb R}^n} \psi\ x^\alpha \laps{s} \varphi\\ &\overset{\eqref{eq:lapspol:xalphapsi}}{=}& c_\alpha \int \limits_{{\mathbb R}^n} \brac{\partial^\alpha \psi^\vee}^\wedge\ \laps{s} \varphi\\ &=& c_\alpha \int \limits_{{\mathbb R}^n} \abs{\cdot}^s \varphi^\wedge\ \partial^\alpha \psi^\vee\\ &=& \sum_{\abs{\beta} \leq \abs{\alpha}} c_{\alpha,\beta} \int \limits_{{\mathbb R}^n} m_{\alpha,\beta,s}(\cdot)\ \abs{\cdot}^{s-\abs{\alpha}+\abs{\beta}}\ \brac{\partial^\beta \varphi^\wedge}\ \psi^\wedge(-\cdot), \end{ma} \] where $m_{\alpha,\beta,s} \in C^\infty({\mathbb R}^n \backslash \{0\})$ is some multiplier which is homogeneous of degree zero. Denoting by $M_{\alpha,\beta,s}$ the respective Fourier multiplier operator with multiplier $m_{\alpha,\beta,s}$ we arrive at \[
\int \limits_{{\mathbb R}^n} \psi\ x^\alpha\ \laps{s} \varphi = \sum_{\abs{\beta} \leq \abs{\alpha}} c_{\alpha,\beta} \int \limits_{{\mathbb R}^n} \brac{x^\beta \varphi}\ M_{\alpha,\beta,s}\ \laps{s-\abs{\alpha}+\abs{\beta}} \psi. \] In particular, this is true for $\psi := \eta_R$, and we have for any $p \in (1,2)$, $R > 1$, \[ \begin{ma}
&&\int \limits_{{\mathbb R}^n} \eta_R\ x^\alpha \laps{s} \varphi\\ &\aleq{}& \sup_{\abs{\beta} \leq \abs{\alpha}} \Vert x^\beta \varphi \Vert_{L^p({\mathbb R}^n)}\ \Vert M_{\alpha,\beta,s} \laps{s-\abs{\alpha}+\abs{\beta}} \eta_R \Vert_{L^{p'}({\mathbb R}^n)}\\ &\aleq{}& C_{\alpha,\varphi,p,s}\ \sup_{\abs{\beta} \leq \abs{\alpha}} \Vert \laps{s-\abs{\alpha}+\abs{\beta}} \eta_R \Vert_{L^{p'}({\mathbb R}^n)}\\ &\overset{\sref{P}{pr:etarkgoodest}}{\aleq{}}& R^{-s+\abs{\alpha}+\frac{n}{p'}}. \end{ma} \] Here we used as well that multiplier operators such as $M_{\alpha,\beta,s}$ map $L^{p'}$ into $L^{p'}$ continuously for $p' \in (1,\infty)$ by H\"ormander's theorem \cite{Hoermander60}. As $\abs{\alpha} < s$, we can choose $p' \in (2,\infty)$ such that $-s+\abs{\alpha}+\frac{n}{p'} < 0$, and taking the limit $R \to \infty$ we conclude. \end{proofP} \begin{remark} \label{rem:cutoffPolbdd} One can even show, that \[
\Vert \laps{s} \brac{\eta_{r,0} x^\alpha} \Vert_{L^p({\mathbb R}^n)} \leq C_{s,p}\ r^{-s + \abs{\alpha} + \frac{n}{p}} \quad \mbox{for any $p \in [2,\infty]$, $\abs{\alpha} < s$, $r > 0$}. \] This is done similar to the proof of Proposition~\ref{pr:etarkgoodest}: First one proves the claim for $r = 1$, then scaling implies the claim, using that \[
\eta_{r,0}(x) x^\alpha = r^{\abs{\alpha}}\ \eta_{1,0}(r^{-1} x)\ (r^{-1} x)^\alpha. \] \end{remark}
\begin{remark} We will use Proposition~\ref{pr:lapspol} in a formal way, by saying that formally $\laps{s} x^\alpha = 0$ whenever $\abs{\alpha} < s$. Of course, as we defined the operator $\laps{s}$ on $L^2$-Functions only, this formal argument should be verified in each calculation by using that \[ \lim_{R\to \infty} \laps{s} \brac{\eta_R x^\alpha} = 0, \] where the limit will be taken in an appropriate sense. For the sake of simplicity, we will omit this recurring argument. \end{remark}
\subsection{An Integral Definition for the Fractional Laplacian}\label{ss:idlaps} A further definition of the fractional Laplacian of small order, without the use of the Fourier transform, is based on the following two propositions. \begin{proposition}\label{pr:eqdeflaps1} Let $s \in (0,1)$. For some constant $c_n$ and any $v \in {\mathcal{S}}({\mathbb R}^n)$, \[
\laps{s} v (\bar{y})= c_n \int \limits_{{\mathbb R}^n} \frac{v(x)-v(\bar{y})}{\abs{x-\bar{y}}^{n+s}}\ dx \quad \mbox{for any $\bar{y} \in {\mathbb R}^n$}. \] \end{proposition} \begin{proofP}{\ref{pr:eqdeflaps1}} It is enough to prove the claim for $\bar{y} = 0$. In fact, denote by $\tau_{\bar{y}}$ the translation operator \[ \tau_{\bar{y}} v (\cdot) := v (\cdot+\bar{y}). \] Then, as any multiplier operator commutes with translations, assuming the claim to be true for $\bar{y} = 0$ , \[ \begin{ma} \laps{s}v (\bar{y}) &=& \laps{s} \brac{\tau_{\bar{y}}v} (0)\\ &=& c_n \int \limits_{{\mathbb R}^n} \frac{\tau_{\bar{y}}v(x)-\tau_{\bar{y}}v(0)}{\abs{x}^{n+s}}\ dx\\ &=& c_n \int \limits_{{\mathbb R}^n} \frac{v(x+\bar{y})-v(\bar{y})}{\abs{x}^{n+s}}\ dx\\ &=& c_n \int \limits_{{\mathbb R}^n} \frac{v(x)-v(\bar{y})}{\abs{x-\bar{y}}^{n+s}}\ dx, \end{ma} \] where the transformation formula is valid because the integral converges absolutely as $s \in (0,1)$.\\ So let $\bar{y} = 0$, $v \in {\mathcal{S}}({\mathbb R}^n)$. For any $R > 1 > \varepsilon > 0$ we set \[
\eta_R := \eta_{R,0}^0,\quad \mbox{and}\quad \eta_{4\varepsilon} = \eta_{4\varepsilon,0}^0, \] and decompose $v = v_1 + v_2 + v_3 + v_4$ as follows: \[
\begin{ma}
v &=& \eta_{4\varepsilon} \brac{v-v(0)} + (1-\eta_{4\varepsilon}) \brac{v-v(0)} + v(0)\\ &=:& v_1 + \eta_{R} (1-\eta_{4\varepsilon}) \brac{v-v(0)} + \eta_{R} v(0)\\ &&\quad + \brac{1-\eta_{R}} \ebrac{(1-\eta_{4\varepsilon}) \brac{v-v(0)} + v(0)}\\ &=:& v_1 + v_2 + v_3 + v_4,
\end{ma} \] that is \[
\begin{ma}
v_1 &=& \eta_{4\varepsilon} \brac{v-v(0)},\\
v_2 &=& \eta_{R} (1-\eta_{4\varepsilon}) \brac{v-v(0)},\\
v_3 &=& \eta_{R} v(0),\\
v_4 &=& \brac{1-\eta_{R}} \ebrac{(1-\eta_{4\varepsilon}) \brac{v-v(0)} + v(0)}\\ &=& \brac{1-\eta_{R}} \ebrac{(1-\eta_{4\varepsilon}) v + \eta_{4\varepsilon} v(0)}.
\end{ma} \] Observe that $v_k \in {\mathcal{S}}({\mathbb R}^n)$, $k=1\ldots 4$, and in particular $\laps{s} v_k$ is well defined in the sense of Definition \ref{def:fracSob}. So for any $\varphi \in C_0^\infty(B_{2\varepsilon}(0))$ \[
\int \limits_{{\mathbb R}^n} \laps{s} v\ \varphi = I_1 + I_2 + I_3 + I_4, \] where \[
I_k := \int \limits_{{\mathbb R}^n} \laps{s} v_k\ \varphi,\quad k=1,2,3,4. \] First, observe that by the Lebesgue-convergence theorem, \begin{equation}\label{eq:eqdeflaps:I4}
\lim_{R \to \infty} I_4 = \lim_{R \to \infty} \int \limits_{{\mathbb R}^n} \brac{1-\eta_{R}} \ebrac{(1-\eta_{4\varepsilon}) v + \eta_{4\varepsilon} v(0)} \laps{s} \varphi = 0. \end{equation} By Proposition~\ref{pr:etarkgoodest}, more precisely using \eqref{eq:etarkgoodest:linfty} for $p' = \infty$, \[
\abs{I_3} \aleq{} \abs{v(0)} \Vert \varphi \Vert_{L^1} R^{-s}, \] so \begin{equation}\label{eq:eqdeflaps:I3}
\lim_{R \to \infty} I_3 = 0. \end{equation} As for $v_2$, \[ \begin{ma} \int \limits_{{\mathbb R}^n} \laps{s} v_2\ \varphi &=& \int \limits_{{\mathbb R}^n} \abs{\cdot}^s\ v_2^\wedge(\cdot)\ \varphi^\wedge(-\cdot)\\ &=& \int \limits_{{\mathbb R}^n} \abs{\xi}^{s}\ \brac{v_2 \ast \varphi(-\cdot)}^\wedge(\xi)\ d\xi\\ &=& c_n \int \limits_{{\mathbb R}^n} \abs{x}^{-n-s}\ \brac{v_2 \ast \varphi(-\cdot)}(x)\ dx. \end{ma} \] The last equality is true, as $\operatorname{supp} (v_2 \ast \varphi) \subset {\mathbb R}^n \backslash B_\varepsilon(0)$ and (see \cite[Theorem 2.4.6]{GrafC08}) \[
\int \limits_{{\mathbb R}^n} \abs{\xi}^s\ \psi^\wedge(\xi)\ d\xi = c_n \int \limits_{{\mathbb R}^n} \abs{y}^{-n-s}\ \psi(y)\ dy,\quad \mbox{for any $\psi \in C_0^\infty({\mathbb R}^n \backslash \{0\})$.} \] Consequently, as the integrals involved converge absolutely, Fubini's theorem implies \[ \begin{ma} &&\int \limits_{{\mathbb R}^n} \laps{s} v_2\ \varphi\\
&=& c_n \int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \varphi(-y)\ \frac{v_2(x-y)}{\abs{x}^{n+s}}\ dy\ dx\\ &=&c_n \int \limits_{B_{2\varepsilon}} \varphi(-y) \int \limits_{{\mathbb R}^n \backslash B_\varepsilon} \eta_R(x-y)(1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\ dx\ dy. \end{ma} \] By Lebesgue's dominated convergence theorem, \begin{equation}\label{eq:eqdeflaps:I2} \begin{ma} \lim_{R\to \infty} I_2 &=& c_n \int \limits_{{\mathbb R}^n} \varphi(-y) \int \limits_{{\mathbb R}^n \backslash B_\varepsilon} (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\ dx\ dy\\ &=& c_n \int \limits_{{\mathbb R}^n} \varphi(-y) \int \limits_{{\mathbb R}^n} (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\ dx\ dy. \end{ma} \end{equation} Together, we infer from equations \eqref{eq:eqdeflaps:I4}, \eqref{eq:eqdeflaps:I3} and \eqref{eq:eqdeflaps:I2} that for any $\varepsilon \in (0,1)$ and any $\varphi \in C_0^\infty(B_{2\varepsilon}(0))$, \[ \begin{ma}
\int \limits_{{\mathbb R}^n} \laps{s}v\ \varphi &=& \int \limits_{{\mathbb R}^n} \eta_{4\varepsilon} (v-v(0))\ \laps{s}\varphi\\
&&\quad + c_n \int \limits_{{\mathbb R}^n} \varphi(-y) \int \limits_{{\mathbb R}^n} (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\ dx\ dy. \end{ma} \] We choose a specific $\varphi := \omega \varepsilon^{-n} \eta_{\varepsilon}$, where $\omega > 0$ is chosen such that \begin{equation}\label{eq:eqdeflaps:intvpeq1}
\int \limits_{{\mathbb R}^n} \varphi = \int \limits_{{\mathbb R}^n} \abs{\varphi} = 1. \end{equation} The function $\laps{s}v$ is continuous, since for $v \in {\mathcal{S}}({\mathbb R}^n)$ we have in particular $(\laps{s}v)^\wedge \in L^1({\mathbb R}^n)$. Consequently, \[ \lim_{\varepsilon \to 0} \int \limits_{{\mathbb R}^n} \laps{s} v\ \varphi = \laps{s} v(0). \] It remains to compute the limit $\varepsilon \to 0$ of \[
\widetilde{I} := \int \limits_{{\mathbb R}^n} \eta_{4\varepsilon} (v-v(0))\ \laps{s}\varphi, \] and \[
\widetilde{II} := \int \limits_{{\mathbb R}^n} \varphi(-y) \int \limits_{{\mathbb R}^n} (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\ dx\ dy. \] As for $\widetilde{I}$, by Proposition~\ref{pr:etarkgoodest}, that is \eqref{eq:etarkgoodest:linfty} for $p' = \infty$, applied to $\varphi$, \[ \begin{ma} \abs{\widetilde{I}} &\aleq{}& \varepsilon^{-n-s}\ \int \limits_{B_{8\varepsilon}(0)} \abs{v(y)-v(0)}\ dy\\ &\aleq{}& \Vert \nabla v \Vert_{L^\infty}\ \varepsilon^{-n-s+1} \abs{B_{8\varepsilon}}\\ &\aleq{}& \Vert \nabla v \Vert_{L^\infty}\ \varepsilon^{1-s}. \end{ma} \]
\lim_{\varepsilon \to 0} \widetilde{I} = 0. \] As for $\widetilde{II}$, we write \[ \begin{ma} && \varphi(-y) (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(0)}{\abs{x}^{n+s}}\\ &=& \varphi(-y) \frac{v(x)-v(0)}{\abs{x}^{n+s}}\\ &&- \eta_{4\varepsilon}(x-y)\ \varphi(-y)\ \frac{v(x)-v(0)}{\abs{x}^{n+s}}\\ &&+ \varphi(-y) (1-\eta_{4\varepsilon}(x-y)) \frac{v(x-y)-v(x)}{\abs{x}^{n+s}}\\ &=:& ii_1 + ii_2 + ii_3. \end{ma} \] By choice of $\varphi$, and by Fubini's theorem which is applicable as all integrals are absolutely convergent, \[
\int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} ii_1\ dy\ dx= \int \limits_{{\mathbb R}^n} \frac{v(x)-v(0)}{\abs{x}^{n+s}}\ dx. \] Moreover, using \eqref{eq:eqdeflaps:intvpeq1} \[
\int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \abs{ii_2}\ dy\ dx \aleq{} \Vert \nabla v \Vert_{L^\infty}\ \int \limits_{B_{10\varepsilon}(0)} \frac{1}{\abs{x}^{n+s-1}}\ dx \aleq{} \varepsilon^{1-s}, \] and \[
\int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \abs{ii_3}\ dy\ dx \aleq{} \varepsilon\ \Vert \nabla v \Vert_{L^\infty}\ \int \limits_{{\mathbb R}^n \backslash B_{\varepsilon}(0)} \frac{1}{\abs{x}^{n+s}}\ dx \aleq{} \varepsilon^{1-s}. \] As a consequence, we can conclude \[
\lim_{\varepsilon \to 0} \widetilde{II} = \int \limits_{{\mathbb R}^n} \frac{v(x)-v(0)}{\abs{x}^{n+s}}\ dx. \] \end{proofP}
If $s \in [1,2)$, the integral in Proposition~\ref{pr:eqdeflaps1} is potentially non-convergent, so we have to rewrite it as follows. \begin{proposition}\label{pr:eqdeflaps2} Let $s \in (0,2)$. Then, for some constant $c_n$ and any $v \in {\mathcal{S}}({\mathbb R}^n)$, $\bar{y} \in {\mathbb R}^n$, \[
\laps{s} v (\bar{y})= \frac{1}{2} c_n \int \limits_{{\mathbb R}^n} \frac{v(\bar{y}-x)+v(\bar{y} + x)-2v(\bar{y})}{\abs{x}^{n+s}}\ dx. \] \end{proposition}
\begin{remark} This is consistent with Proposition~\ref{pr:eqdeflaps1}. In fact, if $s \in (0,1)$ \[ \int \limits_{{\mathbb R}^n} \frac{v(y+x)-v(y)}{\abs{x}^{n+s}}\ dx= \int \limits_{{\mathbb R}^n} \frac{v(y-x)-v(y)}{\abs{x}^{n+s}} \ dx, \] simply by the transformation rule and the symmetry of the kernel $\frac{1}{\abs{x}^{n+s}}$. For this argument to be valid, the condition $s \in (0,1)$ is necessary, because it guarantees the absolute convergence of the integrals above. \end{remark}
\begin{proofP}{\ref{pr:eqdeflaps2}} This is done analogously to Proposition~\ref{pr:eqdeflaps1}, where one replaces $v(\cdot)$ by $v(\cdot) + v(-\cdot)$ and uses that \[ \brac{\laps{s} v}(0) = \frac{1}{2} \brac{\laps{s} \brac{v(-\cdot)}(0) + \laps{s} \brac{v(\cdot)}(0)}. \] Then, the involved integrals converge for any $s \in (0,2)$, as \[ \abs{v(x)+v(-x)-2v(0)} \leq \Vert \nabla^2 v \Vert_{L^\infty}\ \abs{x}^2. \] \end{proofP}
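\begin{remark}
Note that the integral in Proposition~\ref{pr:eqdeflaps2} indeed converges absolutely for every $s \in (0,2)$: near the origin the integrand is bounded by $\Vert \nabla^2 v \Vert_{L^\infty}\ \abs{x}^{2-n-s}$, which is integrable as $s < 2$, while for $\abs{x} \geq 1$ it is bounded by $4\ \Vert v \Vert_{L^\infty}\ \abs{x}^{-n-s}$, which is integrable as $s > 0$.
\end{remark}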
\begin{proposition}\label{pr:eqpdeflapscpr} For any $s \in (0,2)$, $v,w \in {\mathcal{S}}({\mathbb R}^n)$ \[
\int \limits_{{\mathbb R}^n} \laps{s}v\ w = c_n \int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \frac{\brac{v(x)-v(y)}\ \brac{w(y)-w(x)}}{\abs{x-y}^{n+s}}\ dx\ dy. \] \end{proposition} \begin{proofP}{\ref{pr:eqpdeflapscpr}} We have for $v, w \in {\mathcal{S}}({\mathbb R}^n)$, $x \in {\mathbb R}^n$ by several applications of the transformation rule \begin{equation}\label{eq:eqpdeflapscpr:intdy} \begin{ma}
&&\int \limits_{{\mathbb R}^n} \brac{v(y+x)+v(y-x)-2v(y)}\ w(y)\ dy\\ &=&\int \limits_{{\mathbb R}^n} v(y+x)w(y) + v(y)\ w(y+x) - v(y)w(y) - v(y+x)w(y+x)\ dy\\ &=&\int \limits_{{\mathbb R}^n} v(y+x)\ \brac{w(y)-w(y+x)} + v(y)\ \brac{w(y+x) - w(y)}\ dy\\ &=&\int \limits_{{\mathbb R}^n} \brac{v(y+x)-v(y)}\ \brac{w(y)-w(y+x)}\ dy. \end{ma} \end{equation} As all involved integrals converge absolutely and applying Fubini's theorem, \[ \begin{ma} &&\int \limits_{{\mathbb R}^n} \laps{s} v(y)\ w(y)\ dy\\ &\overset{\sref{P}{pr:eqdeflaps2}}{=}& c_n \int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \frac{\brac{v(y+x)+v(y-x)-2v(y)}\ w(y)}{\abs{x}^{n+s}}\ dx\ dy\\ &=& c_n \int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \frac{\brac{v(y+x)+v(y-x)-2v(y)}\ w(y)}{\abs{x}^{n+s}}\ dy\ dx\\ &\overset{\eqref{eq:eqpdeflapscpr:intdy}}{=}& c_n \int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \frac{\brac{v(y+x)-v(y)}\ \brac{w(y)-w(y+x)}}{\abs{x}^{n+s}}\ dy\ dx. \end{ma} \] \end{proofP}
In particular the following equivalence result holds: \begin{proposition}[Fractional Laplacian - Integral Definition]\label{pr:equivlaps} Let $s \in (0,1)$. For a constant $c_n > 0$ and for any $v \in {\mathcal{S}}({\mathbb R}^n)$ \[ \Vert \laps{s} v \Vert^2_{L^2({\mathbb R}^n)} = c_n \int \limits_{{\mathbb R}^n}\ \int \limits_{{\mathbb R}^n} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy. \] In particular, the function \[ (x,y) \in {\mathbb R}^n \times {\mathbb R}^n \mapsto \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}} \] belongs to $L^1({\mathbb R}^n \times {\mathbb R}^n)$ whenever $v \in H^s({\mathbb R}^n)$. \end{proposition}
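\begin{remark}
For the reader's convenience, we sketch the standard argument behind Proposition~\ref{pr:equivlaps}: by Plancherel's theorem, for any fixed $h \in {\mathbb R}^n$,
\[
\int \limits_{{\mathbb R}^n} \abs{v(x+h)-v(x)}^2\ dx \aeq{} \int \limits_{{\mathbb R}^n} \abs{e^{i h \cdot \xi}-1}^2\ \abs{v^\wedge(\xi)}^2\ d\xi,
\]
and by scaling and rotation invariance, for $s \in (0,1)$,
\[
\int \limits_{{\mathbb R}^n} \frac{\abs{e^{i h \cdot \xi}-1}^2}{\abs{h}^{n+2s}}\ dh \aeq{s} \abs{\xi}^{2s}.
\]
Hence, by Fubini's theorem,
\[
\int \limits_{{\mathbb R}^n} \int \limits_{{\mathbb R}^n} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy \aeq{s} \int \limits_{{\mathbb R}^n} \abs{\xi}^{2s}\ \abs{v^\wedge(\xi)}^2\ d\xi \aeq{} \Vert \laps{s} v \Vert_{L^2({\mathbb R}^n)}^2.
\]
\end{remark}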
Next, we will introduce the pseudo-norm $[v]_{D,s}$, a quantity which for $s \in (0,1)$ is actually equivalent to the local, homogeneous $H^{s}$-norm, see \cite{Tartar07}, \cite{TaylorI}. But we will not use this fact, as we will work with $s = \frac{n}{2}$ for $n \in {\mathbb N}$, including $n \in {\mathbb N}$ greater than $4$. Nevertheless, we will see in Section \ref{sec:homognormhn2} that $[v]_{D,\frac{n}{2}}$ is ``almost'' comparable to $\Vert \lapn v \Vert_{L^2(D)}$.
\begin{definition}\label{def:lochnnorm} For a domain $D \subset {\mathbb R}^n$ and $s \geq 0$ we set \begin{equation}\label{eq:defhsloc}
\left ([u]_{D,s} \right )^2 := \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla^{\lfloor s \rfloor}u(z_1) - \nabla^{\lfloor s \rfloor}u(z_2)}^2}{\abs{z_1-z_2}^{n+2(s-\lfloor s \rfloor)}} \ dz_1\ dz_2 \end{equation} if $s \not \in {\mathbb N}_0$. If $s \in {\mathbb N}_0$ we just define $[u]_{D,s} = \Vert \nabla^s u \Vert_{L^2(D)}$. \end{definition} \begin{remark} By the definition of $[\cdot]_{D,s}$ it is obvious that for any polynomial $P$ of degree less than $s$, \[ [v+P]_{D,s} = [v]_{D,s}. \] \end{remark}
\section{Mean Value Poincar\'{e} Inequality of Fractional Order}\label{sec:poincmv} \begin{proposition}[Estimate on Convex Sets] \label{pr:annulusuxmuy:convex} Let $D \subset {\mathbb R}^n$ be a convex, bounded domain and let $\gamma < n+2$. Then for any $v \in C^\infty({\mathbb R}^n)$, \[
\int \limits_D \int \limits_D \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^\gamma}\ dx\ dy \leq C_{D,\gamma}\ \int \limits_D \abs{\nabla v(z)}^2\ dz. \] If $\gamma = 0$, one can take $C_{D,\gamma} = C_n\ \abs{D}\ \operatorname{diam}(D)^2$. \end{proposition} \begin{proofP}{\ref{pr:annulusuxmuy:convex}} By the Fundamental Theorem of Calculus, \[ \begin{ma}
&&\int \limits_{D} \int \limits_{D} \frac{\abs{v(x) - v(y)}^2}{\abs{x-y}^\gamma}\ dx\ dy\\ &\leq& \int \limits_{t=0}^1 \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla v(x+t(y-x))}^2}{\abs{x-y}^{\gamma-2}}\ dx\ dy\ dt\\ &\leq& \int \limits_{t=0}^{\frac{1}{2}} \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla v(x+t(y-x))}^2}{\abs{x-y}^{\gamma-2}}\ dx\ dy\ dt\\ &&\quad + \int \limits_{t=\frac{1}{2}}^{1} \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla v(x+t(y-x))}^2}{\abs{x-y}^{\gamma-2}}\ dy\ dx\ dt. \end{ma} \] Using the convexity of $D$, more precisely using the fact that the transformation $x \mapsto x+t(y-x)$ maps $D$ into a subset of $D$, \[ \begin{ma} &\leq& \int \limits_{t=0}^{\frac{1}{2}} \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla v(z)}^2}{(1-t)^{2-\gamma}\abs{z-y}^{\gamma-2}}\ (1-t)^{-n}\ dz\ dy\ dt\\ &&\quad + \int \limits_{t=\frac{1}{2}}^{1} \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla v(z)}^2}{t^{2-\gamma}\abs{x-z}^{\gamma-2}}\ t^{-n}\ dz\ dx\ dt\\ &\aleq{}& \int \limits_{D} \abs{\nabla v(z)}^2 \int \limits_{D} \abs{z-z_2}^{2-\gamma}\ dz_2\ \ dz\\ &\overset{\gamma < n+2}{\aleq{}}& \int \limits_{D} \abs{\nabla v(z)}^2 \ dz.\\ \end{ma} \] \end{proofP}
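\begin{remark}
Let us note how the case $\gamma = 0$ combines with Jensen's inequality: writing $(v)_D := \fint_D v$, we have
\[
\int \limits_D \abs{v - (v)_D}^2 = \int \limits_D \abs{\ \fint \limits_D \brac{v(x)-v(y)}\ dy\ }^2\ dx \leq \abs{D}^{-1} \int \limits_D \int \limits_D \abs{v(x)-v(y)}^2\ dx\ dy,
\]
so that Proposition~\ref{pr:annulusuxmuy:convex} with $\gamma = 0$ and $C_{D,0} = C_n\ \abs{D}\ \operatorname{diam}(D)^2$ controls the left-hand side by $C_n\ \operatorname{diam}(D)^2\ \Vert \nabla v \Vert^2_{L^2(D)}$.
\end{remark}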
An immediate consequence for $\gamma = 0$ is the classic Poincar\'{e} inequality for mean values on convex domains. \begin{lemma}\label{la:poincCMV} There is a uniform constant $C > 0$ such that for any $v \in C^\infty({\mathbb R}^n)$ and for any convex, bounded set $D \subset {\mathbb R}^n$ \[
\int \limits_D \abs{v - (v)_D}^2 \leq C\ \brac{\operatorname{diam}(D)}^2\ \Vert \nabla v \Vert_{L^2(D)}^2. \] \end{lemma} In the following two sections we prove in Lemma~\ref{la:poincmv} and Lemma~\ref{la:poincmvAn} higher (fractional) order analogues of this Mean-Value-Poincar\'e-Inequality, on the ball and on the annulus, respectively. More precisely, for $\eta_{r}^k$ from Section \ref{ss:cutoff} we will only show that \[
\Vert \laps{s} (\eta_r^k v) \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert \laps{s} v \Vert_{L^2({\mathbb R}^n)}, \] if $v$ satisfies a mean value condition, similar to the following: For some $N \in {\mathbb N}_0$ and a domain $D \subset {\mathbb R}^n$ (in our example e.g. $D = \operatorname{supp} \eta_{r}^k$ and $N = \lceil s \rceil - 1$) \begin{equation}\label{eq:meanvalueszero} \fint \limits_{D} \partial^\alpha v = 0, \quad \mbox{for any multiindex $\alpha \in ({\mathbb N}_0)^n$, $\abs{\alpha} \leq N$}. \end{equation} The necessary ingredients are not that different from those in the proofs of similar statements, e.g. in \cite{DR09Sphere} or \cite[Proposition 3.6.]{GM05}, and can be paraphrased as follows: For any $s > 1$ we can decompose $\laps{s}$ into $\laps{t} \circ T$ for some $t \in (0,1)$, where $T$ is a classical differential operator, possibly composed with a Riesz transform. So we first focus in Proposition~\ref{pr:mvpoinc} on the case $\laps{s}$ where $s \in (0,1)$. There we first use the integral representation of $\laps{s}$ as in Section \ref{ss:idlaps} and then apply in turn the fundamental theorem of calculus and the mean value condition.
\subsection{On the Ball}
We premise some very easy estimates. \begin{proposition}\label{pr:intbrxmyest} For $s \in (0,1)$, there exists a constant $C_s > 0$ such that for any $x \in B_r(x_0)$ \[ \int \limits_{B_r(x_0)} \frac{1}{\abs{x-y}^{n+2s-2}}\ dy \leq C_s\ r^{2-2s}, \] and \[ \int \limits_{{\mathbb R}^n \backslash B_{2r}(x_0)} \frac{1}{\abs{x-y}^{n+2s}}\ dy \leq C_s\ r^{-2s}. \] \end{proposition} \begin{proofP}{\ref{pr:intbrxmyest}} We have \[ \begin{ma} \int \limits_{B_r(x_0)} \frac{1}{\abs{x-y}^{n+2s-2}}\ dy &\leq& \int \limits_{B_{2r}(0)} \frac{1}{\abs{z}^{n+2s-2}}\ dz\\ &\overset{s < 1}{\aeq{s}}& (2r)^{2-2s} \end{ma} \] and \[ \begin{ma} \int \limits_{{\mathbb R}^n \backslash B_{2r}(x_0)} \frac{1}{\abs{x-y}^{n+2s}}\ dy &\leq& 2^{n+2s}\ \int \limits_{{\mathbb R}^n \backslash B_{2r}(0)} \frac{1}{\abs{z}^{n+2s}}\ dz\\ &\overset{s >0}{\aeq{s}}& (2r)^{-2s}. \end{ma} \] \end{proofP}
\begin{proposition}\label{pr:doubleintvmvest} Let $\gamma \in [0,n+2)$, $N \in {\mathbb N}$. Then for a constant $C_{N,\gamma}$ and for any $v \in C^\infty({\mathbb R}^n)$ satisfying \eqref{eq:meanvalueszero} on some $D = B_{r} \subset {\mathbb R}^n$, \[ \int \limits_{B_{r}} \int \limits_{B_{r}} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^\gamma}\ dy\ dx \leq C_{N,\gamma}\ r^{2N-\gamma} \int \limits_{B_{r}} \int \limits_{B_{r}} \abs{\nabla^{N}v(x) - \nabla^{N}v(y)}^2\ dx\ dy. \] \end{proposition} \begin{proofP}{\ref{pr:doubleintvmvest}} It suffices to prove this proposition for $B_1(0)$ and then scale the estimate. So let $r = 1$. By Proposition~\ref{pr:annulusuxmuy:convex}, \[ \begin{ma} &&\int \limits_{B_1} \int \limits_{B_1} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^\gamma}\ dy\ dx\\ &\aleq{}& \int \limits_{B_1} \abs{\nabla v(z)}^2\ dz\\ &\overset{\eqref{eq:meanvalueszero}}{=}& \int \limits_{B_1} \abs{\nabla v(z) - (\nabla v)_{B_1}}^2\ dz\\ &\aleq{}& \int \limits_{B_1}\ \int \limits_{B_1} \abs{\nabla v(z) - \nabla v(z_2)}^2\ dz\ dz_2\\ \end{ma} \] Iterating this procedure $N$ times with repeated use of Proposition~\ref{pr:annulusuxmuy:convex} for $\gamma = 0$, we conclude.
\end{proofP}
\begin{proposition}\label{pr:mvpoinc} For any $N \in {\mathbb N}_0$, $s \in [0,1)$ there is a constant $C_{N,s} > 0$ such that the following holds. For any $v \in C^\infty({\mathbb R}^n)$, $r > 0$, $x_0 \in {\mathbb R}^n$ such that \eqref{eq:meanvalueszero} holds on $D = B_{4r}(x_0)$ we have for all multiindices $\alpha$, $\beta \in ({\mathbb N}_0)^n$, $\abs{\alpha} + \abs{\beta} = N$ \[ \left \Vert \laps{s} \left ((\partial^\alpha \eta_{r,x_0}) (\partial^\beta v) \right ) \right \Vert_{L^2({\mathbb R}^n)} \leq C_{N,s}\ [v]_{B_{4r}(x_0),N+s}. \] \end{proposition} \begin{proofP}{\ref{pr:mvpoinc}} The case $s = 0$ follows by the classic Poincar\'e inequality, so let from now on $s \in (0,1)$. Set \[ w(y) := (\partial^\alpha \eta_{r}(y)) (\partial^\beta v(y)). \] Note that $\operatorname{supp} w \subset B_{2r}$. Moreover, by the definition of $\eta_{r}$, we have \begin{equation} \label{eq:loc:partialalphaetaest} \abs{w} \leq C_{\alpha}\ r^{-\abs{\alpha}} \abs{\partial^\beta v} \leq C_N r^{\abs{\beta}-N} \abs{\partial^\beta v}. \end{equation} By Proposition~\ref{pr:equivlaps} we have to estimate \[ \begin{ma} \Vert \laps{s} w \Vert_{L^2}^2 &\aeq{}&\int \limits_{{\mathbb R}^n}\ \int \limits_{{\mathbb R}^n} \frac{\abs{w(x)-w(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &=& \int \limits_{B_{4r}}\ \int \limits_{B_{4r}} \frac{\abs{w(x)-w(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &&\quad +2\int \limits_{B_{4r}}\ \int \limits_{{\mathbb R}^n \backslash B_{4r}} \frac{\abs{w(x)-w(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &&\quad + \int \limits_{{\mathbb R}^n \backslash B_{4r}}\ \int \limits_{{\mathbb R}^n \backslash B_{4r}} \frac{\abs{w(x)-w(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &=& \int \limits_{B_{4r}}\ \int \limits_{B_{4r}} \frac{\abs{w(x)-w(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &&\quad+ 2\int \limits_{B_{4r}}\ \abs{w(y)}^2 \int \limits_{{\mathbb R}^n \backslash B_{4r}} \frac{1}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &=:& I + 2II. 
\end{ma} \] To estimate $II$, we use the fact that $\operatorname{supp} w \subset B_{2r}$ and the second part of Proposition~\ref{pr:intbrxmyest} to get \[ \begin{ma} \abs{II} &\aleq{s}& r^{-2s} \int \limits_{B_{4r}} \abs{w(y)}^2 \ dy\\ &\overset{\eqref{eq:loc:partialalphaetaest}}{\aleq{}}& r^{2(\abs{\beta}-N-s)} \int \limits_{B_{4r}} \abs{\partial^{\beta}v(y)}^2\ dy\\ &\overset{\eqref{eq:meanvalueszero}}{\aleq{}}&r^{2(\abs{\beta}-N-s)} \int \limits_{B_{4r}} \abs{\partial^{\beta}v(y)- \brac{\partial^{\beta}v}_{B_{4r}}}^2\ dy\\ &\aleq{}& r^{2(\abs{\beta}-N-s)-n} \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\partial^{\beta}v(y)-\partial^{\beta}v(x)}^2\ dy\ dx. \end{ma} \] As $\partial^\beta v$ satisfies \eqref{eq:meanvalueszero} for $N-\abs{\beta}$, by Proposition~\ref{pr:doubleintvmvest} for $\gamma = 0$, \[ \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\partial^{\beta}v(y)-\partial^{\beta}v(x)}^2\ dy\ dx \aleq{N} r^{2(N-\abs{\beta})} \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\nabla^N v(y)-\nabla^N v(x)}^2\ dx\ dy. \] Furthermore, we have for $x,y \in B_{4r}$ \[ r^{-n-2s} \aleq{s} \abs{x-y}^{-n-2s}, \] which altogether implies that \[ \abs{II} \aleq{} [v]_{B_{4r},N+s}^2. \] In order to estimate $I$, note that \[ \begin{ma} &&\abs{w(x)-w(y)}\\
&\leq& \Vert \partial^\alpha \eta_r\Vert_{L^\infty} \abs{\partial^\beta v(x)- \partial^\beta v(y)} + \Vert \nabla \partial^\alpha \eta_r \Vert_{L^\infty}\ \abs{x-y}\ \abs{\partial^\beta v(y)}\\ &\aleq{N}& r^{-\abs{\alpha}} \abs{\partial^\beta v(x)- \partial^\beta v(y)} + r^{-\abs{\alpha}-1}\abs{x-y}\ \abs{\partial^\beta v(y)}. \end{ma} \] Thus, we can decompose $\abs{I} \aleq{} \abs{I_1} + \abs{I_2}$ where \[ I_1 = r^{2(\abs{\beta}-N)} \int \limits_{B_{4r}}\ \int \limits_{B_{4r}} \frac{\abs{\partial^\beta v(x)-\partial^\beta v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy, \] and \[ \begin{ma} I_2 &=& r^{2(\abs{\beta}-N-1)} \int \limits_{B_{4r}}\ \int \limits_{B_{4r}} \frac{\abs{\partial^\beta v(y)}^2}{\abs{x-y}^{n-2+2s}}\ dx\ dy\\ &\overset{\ontop{\sref{P}{pr:intbrxmyest}}{s < 1}}{\aleq{}}& r^{2(\abs{\beta}-N)-2s} \int \limits_{B_{4r}} \abs{\partial^\beta v(y)}^2\ dy\\ &\overset{\eqref{eq:meanvalueszero}}{\aleq{}}& r^{2(\abs{\beta}-N)-(n+2s)} \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\partial^\beta v(y) - \partial^\beta v(z)}^2\ dy\ dz. \end{ma} \] Using again that $\partial^\beta v$ satisfies \eqref{eq:meanvalueszero} for $N-\abs{\beta}$ on $B_{4r}$, by Proposition~\ref{pr:doubleintvmvest} for $\gamma = n+2s$ \[ \begin{ma} \abs{I_1} &\aleq{}& r^{-n-2s} \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\nabla^N v(x)-\nabla^N v(y)}^2\ dx\ dy\\ &\aleq{}&\int \limits_{B_{4r}} \int \limits_{B_{4r}} \frac{\abs{\nabla^N v(x)-\nabla^N v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy, \end{ma} \] and the same for $I_2$. This concludes the case $s > 0$. \end{proofP}
\begin{lemma}[Poincar\'{e} inequality with mean value condition (Ball)]\label{la:poincmv} For any $N \in {\mathbb N}_0$, $s \in [0,N+1)$, $t \in [0,N+1-s)$ there is a constant $C_{N,s,t}$ such that the following holds. For any $r > 0$, $x_0 \in {\mathbb R}^n$ and any $v \in C^\infty({\mathbb R}^n)$ satisfying \eqref{eq:meanvalueszero} for $N$ and on $D = B_{4r}(x_0)$, we have \[ \begin{ma} \Vert \laps{s} \eta_{r,x_0} v \Vert_{L^2({\mathbb R}^n)} &\leq& C_{s,t}\ r^{t}\ [v]_{B_{4r}(x_0),s+t}\\ &\leq& C_{s,t}\ r^{t} \Vert \laps{s+t} v \Vert_{L^2({\mathbb R}^n)}. \end{ma} \] \end{lemma} \begin{proofL}{\ref{la:poincmv}} We have \[
\laps{s} \approx \laps{\gamma} \laps{\delta} \Delta^{K} \] for \[ \begin{ma} \gamma &=& s-\lfloor s \rfloor \in [0,1),\\ \delta &=& \lfloor s \rfloor -2 \left \lfloor \frac{\lfloor s \rfloor}{2} \right \rfloor \in \{0,1\},\\ K &=& \left \lfloor \frac{\lfloor s \rfloor}{2} \right \rfloor \in {\mathbb N}_0. \end{ma} \] More precisely, if $\delta = 1$ (cf. Remark \ref{rem:lapsClasLap}), \[ \laps{s} = c_n {\mathcal{R}}_{i} \laps{\gamma} \partial_i \Delta^{K}, \] and if $\delta = 0$, \[ \laps{s} = c_n \laps{\gamma} \Delta^{K}. \] As the Riesz Transform ${\mathcal{R}}_{i}$ is a bounded operator from $L^2$ into $L^2$ we can estimate both cases by \[ \Vert \laps{s} (\eta_r v) \Vert_{L^2} \aleq{} \sum \limits_{\ontop{\alpha,\beta \in ({\mathbb N}_0)^n}{\abs{\alpha} + \abs{\beta} = 2K+\delta}} \Vert \laps{\gamma} \left ((\partial^\alpha \eta_r) (\partial^\beta v) \right ) \Vert_{L^2}. \] This and Proposition~\ref{pr:mvpoinc} imply \[
\Vert \laps{s} (\eta_r v) \Vert_{L^2}^2 \aleq{s} \brac{[v]_{B_{4r}(x_0),s}}^2. \] If $t = 0$ this gives the claim. So let now $t > 0$. If $s \in {\mathbb N}$, we have by the mean value condition (which holds for $\nabla^s v$ as $s < N+1$, so $s \leq N$) \[ \begin{ma}
[v]_{B_{4r}(x_0),s}^2 &\aeq{}& \Vert \nabla^s v \Vert_{L^2(B_{4r})}^2\\ &\aleq{}& r^{-n} \int \limits_{B_{4r}} \int \limits_{B_{4r}} \abs{\nabla^s v(x) - \nabla^s v(y)}^2\ dx\ dy\\ &\aleq{}& \int \limits_{B_{4r}} \int \limits_{B_{4r}} \frac{\abs{\nabla^s v(x) - \nabla^s v(y)}^2}{\abs{x-y}^n}\ dx\ dy. \end{ma} \] So for every $s > 0$ we have \[ \begin{ma}
[v]_{B_{4r}(x_0),s}^2 &\aleq{}& \int \limits_{B_{4r}} \int \limits_{B_{4r}} \frac{\abs{\nabla^{\lfloor s\rfloor} v(x) - \nabla^{\lfloor s\rfloor} v(y)}^2}{\abs{x-y}^{n+2(s-\lfloor s \rfloor) }} \ dx\ dy. \end{ma} \] If $\lfloor s \rfloor = \lfloor s+t \rfloor$, this implies, using $\abs{x-y} \aleq{} r$ for $x,y \in B_{4r}$, \[
[v]_{B_{4r}(x_0),s}^2 \aleq{} r^{2t} [v]_{B_{4r}(x_0),s+t}^2. \] If $\lfloor s \rfloor < \lfloor s+t \rfloor \leq N$, $\nabla^{\lfloor s \rfloor} v$ satisfies the mean value condition \eqref{eq:meanvalueszero} up to the order $N-\lfloor s \rfloor \geq 1$ as $\lfloor s \rfloor < N$.\\ With this in mind one can see, using Proposition~\ref{pr:doubleintvmvest}, if $s+t > \lfloor s + t \rfloor$ \[
[v]_{B_{4r}(x_0),s} \leq r^{\lfloor s+t\rfloor-s} [v]_{B_{4r}(x_0),\lfloor s + t\rfloor} \] or else if $s+t = \lfloor s + t \rfloor$ \[
[v]_{B_{4r}(x_0),s} \leq r^{t} [v]_{B_{4r}(x_0),s + t}. \] In the former case, we can again use that $\abs{x-y} \aleq{} r$ for any $x,y \in B_{4r}$ to conclude. \end{proofL} \begin{remark} By obvious modifications of the proofs, one checks that the result of Lemma~\ref{la:poincmv} is also valid if $v$ satisfies \eqref{eq:meanvalueszero} on a ball $B_{\lambda r}$ for $\lambda \in (0,4)$. The constant then depends also on $\lambda$. \end{remark}
\subsection{On the Annulus} In order to get an estimate similar to Proposition~\ref{pr:annulusuxmuy:convex} on the annulus, Proposition~\ref{pr:annulusuxmuy}, we would like to divide the annulus into finitely many convex parts. As this is clearly not possible, we have to enlarge the annulus slightly. \begin{proposition}[Convex cover]\label{pr:convexcoverI} Let $A = B_2 \backslash B_1(0)$ or $B_2 \backslash B_{\frac{1}{2}}(0)$. Then for each $\varepsilon > 0$ there is $\lambda = \lambda_\varepsilon > 0$, $M = M_\varepsilon \in {\mathbb N}$ and a family of open sets $C_j \subset {\mathbb R}^n$, $j \in \{1,\ldots,M\}$ such that the following holds. \begin{itemize}
\item For each $j \in \{1,\ldots,M\}$ the set $C_j$ is convex.
\item The union \[ B_2 \backslash B_1 \subset \bigcup\limits_{j=1}^M C_j \subset B_2 \backslash B_{1-\varepsilon}\quad \mbox{or} \quad B_2 \backslash B_{\frac{1}{2}} \subset \bigcup \limits_{j=1}^M C_j \subset B_2 \backslash B_{\frac{1}{2}-\varepsilon}, \] respectively. \item For each $i,j \in \{1,\ldots,M\}$ such that $C_i \cap C_j \neq \emptyset$ \[ \conv{C_i \cup C_{j}} \subset B_2 \backslash B_{1-\varepsilon}\quad \mbox{or} \quad \conv{C_i \cup C_{j}} \subset B_2 \backslash B_{\frac{1}{2}-\varepsilon}, \] respectively, where $\conv{C_i \cup C_{j}}$ denotes the convex hull of $C_i \cup C_{j}$. \item For each $x,y \in A$, at least one of the following conditions holds
\begin{itemize}
\item[(i)] $\abs{x-y} \geq \lambda$ or
\item[(ii)] both $x,y \in C_j$ for some $j$.
\end{itemize} \end{itemize} \end{proposition} \begin{proofP}{\ref{pr:convexcoverI}} We sketch the case $B_2 \backslash B_1$. Fix $\varepsilon > 0$ and set \[
{\mathbb S} := \{x \in {\mathbb R}^n:\ \abs{x} = 1 \} \subset {\mathbb R}^n. \] For $r > 0$ and $x \in {\mathbb S}$ we define \[ S_r(x) := {\mathbb S} \cap B_r(x). \] For any $r > 0$ we can pick $(x_k)_{k=1}^M \subset {\mathbb S}$, such that $\{S_r(x_k)\}_{k=1}^M$ covers all of ${\mathbb S}$, where $M = M_r \in {\mathbb N}$ is a finite number. We set $S_k := S_{2r}(x_k)$. If $r = r_\varepsilon > 0$ is chosen small enough, one can also guarantee that the convex hull $\conv{S_k \cup S_l}$ for every $k,l \in \{1,\ldots,M\}$ with $S_k \cap S_l \neq \emptyset$ is a subset of $B_{1} \backslash B_{1-\varepsilon}$.\\ The sets $C_j$ are then defined as \[
C_j = \conv{\left \{ x \in {\mathbb R}^n:\ \abs{x} < 2,\ x = \alpha y\ \mbox{for $\alpha > 1$ and $y \in S_j$} \right \}}. \] They obviously satisfy the first three properties.\\ In order to prove the last property, note that \[
\abs{x-y} \geq \abs{\frac{x}{\abs{x}} - \frac{y}{\abs{y}}} \quad \mbox{for all $x,y \in B_2 \backslash B_1$}. \] So assume there are $x,y \in B_2 \backslash B_1$ such that $\{x,y\} \not \subset C_j$ for all $j = 1,\ldots,M$. But this in particular implies that for some $k = 1,\ldots,M$, $\frac{x}{\abs{x}} \in S_{r}(x_k)$ but $\frac{y}{\abs{y}} \not \in S_{2r}(x_k)$. In particular, for a constant $\lambda = \lambda_r$ only depending on $r$ and the dimension $n$, \[
\abs{\frac{y}{\abs{y}} - \frac{x}{\abs{x}}} \geq \lambda_r. \] \end{proofP}
\begin{proposition}\label{pr:annulusuxmuy:gammaeq0} Let $A = B_2 \backslash B_1(0)$ or $B_2 \backslash B_{\frac{1}{2}}(0)$. Then for any $\varepsilon > 0$, there exists a constant $C_{\varepsilon} > 0$ so that the following holds. For any $v \in C^\infty({\mathbb R}^n)$ \[
\int \limits_A \int \limits_A \abs{v(x) - v(y)}^2\ dx\ dy \leq C_{\varepsilon}\ \int \limits_{\tilde{A}} \abs{\nabla v}^2(z)\ dz, \] where $\tilde{A} = B_2 \backslash B_{1-\varepsilon}(0)$ or $B_2 \backslash B_{\frac{1}{2}-\varepsilon}(0)$, respectively. \end{proposition} \begin{proofP}{\ref{pr:annulusuxmuy:gammaeq0}} By Proposition~\ref{pr:convexcoverI} we can estimate \[ \begin{ma}
&&\int \limits_A \int \limits_A \abs{v(x) - v(y)}^2\ dx\ dy\\ &\leq& \sum \limits_{i,j = 1}^M \int \limits_{C_i} \int \limits_{C_j} \abs{v(x) - v(y)}^2\ dx\ dy\\ &=:& \sum \limits_{i,j = 1}^M I_{i,j}. \end{ma} \] If $i=j$ we have by convexity of $C_i$ and Proposition~\ref{pr:annulusuxmuy:convex} \[
I_{i,j} \leq C_{C_j}\ \int \limits_{C_j} \abs{\nabla v}^2(z)\ dz \leq C_{\varepsilon}\ \int \limits_{\tilde{A}} \abs{\nabla v}^2(z)\ dz. \] If $i$ and $j$ are such that $C_i \cap C_j \neq \emptyset$, \[ \begin{ma}
I_{i,j} &\leq& \int \limits_{\conv{C_i\cup C_j}} \int \limits_{\conv{C_i\cup C_j}} \abs{v(x) - v(y)}^2\ dx\ dy\\ &\overset{\sref{P}{pr:annulusuxmuy:convex}}{\aleq{}}& \int \limits_{\conv{C_i\cup C_j}} \abs{\nabla v}^2\\ &\overset{\sref{P}{pr:convexcoverI}}{\aleq{}}& \int \limits_{\tilde{A}} \abs{\nabla v}^2. \end{ma} \] Finally, in any other case for $i$, $j$, there are indices $k_l \in \{1,\ldots,M\}$, $l=1,\ldots,L$, such that $k_1 = i$ and $k_L = j$ and $C_{k_l} \cap C_{k_{l+1}} \neq \emptyset$. Let's abbreviate \[
(v)_k := \fint_{C_k} v. \] With this notation, \[ \begin{ma} &&I_{i,j}\\ &=& \int \limits_{C_i} \int \limits_{C_j} \abs{v(x) - v(y)}^2\ dx\ dy\\ &\leq& C_M \left ( \int \limits_{C_i} \int \limits_{C_j} \abs{v(x) - (v)_j}^2 + \sum \limits_{l=1}^{L-1} \abs{(v)_{k_l} - (v)_{k_{l+1}}}^2 + \abs{(v)_i - v(y)}^2\ dx\ dy \right )\\ &\aleq{}& I_{j,j} + \sum \limits_{l=1}^{L-1} I_{k_l,k_{l+1}} + I_{i,i}. \end{ma} \] So we can reduce this case for $i$, $j$ to the estimates of the previous cases and conclude. \end{proofP}
As a consequence we have \begin{proposition}\label{pr:annulusuxmuy} Let $A = B_2 \backslash B_1(0)$ or $B_2 \backslash B_{\frac{1}{2}}(0)$. Then for any $\varepsilon > 0$, $\gamma \in [0,n+2)$ there exists a constant $C_{\varepsilon,\gamma} > 0$ so that the following holds. For any $v \in C^\infty({\mathbb R}^n)$ \[
\int \limits_A \int \limits_A \frac{\abs{v(x) - v(y)}^2}{\abs{x-y}^\gamma}\ dx\ dy \leq C_{\varepsilon,\gamma}\ \int \limits_{\tilde{A}} \abs{\nabla v(z)}^2\ dz, \] where $\tilde{A} = B_2 \backslash B_{1-\varepsilon}(0)$ or $B_2 \backslash B_{\frac{1}{2}-\varepsilon}(0)$, respectively. \end{proposition} \begin{proofP}{\ref{pr:annulusuxmuy}} By Proposition~\ref{pr:convexcoverI} we can divide \[
\begin{ma} &&\int \limits_A \int \limits_A \frac{\abs{v(x) - v(y)}^2}{\abs{x-y}^\gamma}\ dx\ dy\\ &\leq& \sum_{j=1}^M \int \limits_{C_j} \int \limits_{C_j} \frac{\abs{v(x) - v(y)}^2}{\abs{x-y}^\gamma}\ dx\ dy + \lambda^{-\gamma} \int \limits_{A} \int \limits_{A} \abs{v(x)-v(y)}^2\ dx\ dy.
\end{ma} \] These quantities are estimated by Proposition~\ref{pr:annulusuxmuy:convex} and Proposition~\ref{pr:annulusuxmuy:gammaeq0}, respectively. \end{proofP}
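For orientation (this reformulation is not used verbatim below): choosing $\gamma = n+2s$ with $s \in (0,1)$, which is admissible since then $\gamma \in (n,n+2)$, the left-hand side in Proposition~\ref{pr:annulusuxmuy} is precisely the Gagliardo seminorm of $v$ on $A$, so the proposition states in particular
\[
[v]_{W^{s,2}(A)}^2 := \int \limits_A \int \limits_A \frac{\abs{v(x) - v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy \leq C_{\varepsilon,s}\ \int \limits_{\tilde{A}} \abs{\nabla v}^2.
\]
That is, on the annulus the fractional oscillation of any order $s \in (0,1)$ is controlled by the Dirichlet energy on a slightly larger annulus; the restriction $\gamma < n+2$ corresponds to $s < 1$.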
As a consequence of the last estimate, analogously to the case of a ball, we can prove the following Poincar\'{e}-inequality: \begin{lemma}[Poincar\'{e}'s Inequality with mean value condition (Annulus)]\label{la:poincmvAn} For any $N \in {\mathbb N}_0$, $s \in [0,N+1)$, $t \in [0,N+1-s)$ there is a constant $C_{N,s,t}$ such that the following holds. For any $v \in C^\infty({\mathbb R}^n)$, $x_0 \in {\mathbb R}^n$, $r > 0$ such that $v$ satisfies \eqref{eq:meanvalueszero} for $N$ on $D = A_k = B_{2^{k+1} r}(x_0) \backslash B_{2^{k-1} r}(x_0)$ or $D = A_k = B_{2^{k+1} r}(x_0) \backslash B_{2^{k} r}(x_0)$ we have \[ \Vert \laps{s} (\eta^k_{r,x_0} v) \Vert_{L^2({\mathbb R}^n)} \leq C_{s,t}\ \brac{2^k r}^{t}\ [v]_{\tilde{A}_k,s+t},\\ \] where \[
\tilde{A}_k = B_{2^{k+2} r}(x_0) \backslash B_{2^{k-2} r}(x_0). \] \end{lemma} \begin{proofL}{\ref{la:poincmvAn}} The methods used are similar to the case of the ball, cf. in particular the proof of Proposition~\ref{pr:mvpoinc} and Lemma~\ref{la:poincmv}. We only sketch the case $t = 0$.\\ One picks an open set $E$, $\operatorname{supp} \eta_{r}^k \subset E \subset \tilde{A}_k$, such that $\operatorname{dist}(\partial E, \operatorname{supp} \eta_{r}^k) \in (0,\varepsilon)$ and $\operatorname{dist}(E, \partial \tilde{A}_k) > 0$, for very small $\varepsilon > 0$. As in the case of a ball, one can essentially reduce the problem to estimating \[
\int \limits_{E} \int \limits_{E} \frac{ \abs{\partial^\beta v(x)-\partial^\beta v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy, \] \[
\int \limits_{\operatorname{supp} \eta_{r}^k} \frac{ \abs{\partial^\beta v(x)}^2}{\varepsilon^{n+2s}}\ dx, \] for some multiindex $\abs{\beta} \leq N$. Applying the mean value condition \eqref{eq:meanvalueszero} and Proposition~\ref{pr:annulusuxmuy}, these integrals are estimated by \[
\int \limits_{\tilde{E}} \abs{\nabla \partial^\beta v(z)}^2\ dz, \] for some $E \subset \tilde{E} \subset \tilde{A}_k$, where $\tilde{E}$ is a bit ``fatter'' than $E$. Iterating this (and in every step thickening the set $E$ by a tiny factor $\varepsilon$) until we reach the highest order of differentiability, we conclude. \end{proofL} \begin{remark}\label{rem:poincmvAn} Again, one checks that the claim is also satisfied if $v$ satisfies \eqref{eq:meanvalueszero} on a possibly smaller annulus, making the constant depend also on this scaling. \end{remark}
\subsection{Comparison between Mean Value Polynomials on Different Sets} For a bounded domain $D \subset {\mathbb R}^n$ and $N \in {\mathbb N}_0$ and for $v \in {\mathcal{S}}({\mathbb R}^n)$ we define the polynomial $P(v) \equiv P_{D,N}(v)$ to be the unique polynomial of order $N$ such that \begin{equation}\label{eq:pmeanvaluedef}
\fint \limits_{D} \partial^\alpha (v-P(v)) = 0, \quad \mbox{for every multiindex $\alpha \in ({\mathbb N}_0)^n$, $\abs{\alpha} \leq N$.} \end{equation} The goal of this section is to estimate in Proposition~\ref{pr:etarkpbmpkest} and Lemma~\ref{la:mvestbrakShrpr} the difference \[
P_{B_r(x),N}(v) - P_{B_{2^k r}(x)\backslash B_{2^{k-1}r}(x),N}(v), \quad \mbox{for $k \in {\mathbb Z}$} \] in terms of $\laps{s} v$. To do so, we adapt the methods applied in the proof of \cite[Lemma 4.2]{DR09Sphere}, the main difference being that we have to extend their argument to polynomials of degree greater than zero.\\ We will need an inductive description of $P(v)$. First, for a multiindex $\alpha = (\alpha_1,\ldots,\alpha_n)$ set \[ \alpha! := \alpha_1 !\ldots \alpha_n! = \partial^\alpha x^\alpha. \] For $i \in \{0, \ldots, N\}$ set \begin{equation}\label{eq:definQmeanval} \begin{ma}
Q^{i}_{D,N}(v) &:=& Q^{i+1}_{D,N}(v) + \sum \limits_{\abs{\alpha} = i} \frac{1}{\alpha!}\ x^\alpha \fint \limits_D \partial^\alpha (v-Q^{i+1}_{D,N}(v)),\\ Q^{N}_{D,N}(v) &:=& \sum \limits_{\abs{\alpha}=N} \frac{1}{\alpha!}\ x^\alpha \fint \limits_D \partial^\alpha v. \end{ma} \end{equation} One checks that \begin{equation}\label{eq:palphaqieqp} \partial^\alpha Q^i = \partial^\alpha P,\quad \mbox{whenever $\abs{\alpha} \geq i$,} \end{equation} and in particular $Q^0 = P$.\\ Moreover, we introduce the following annuli: \[ A_j \equiv A_j(r) = B_{2^j r} \backslash B_{2^{j-1}r},\quad \tilde{A}_j \equiv \tilde{A}_j(r) := A_j \cup A_{j+1}. \] \begin{proposition}\label{pr:vmqmmvi} For any $N \in {\mathbb N}$, $s \in (N,N+1]$, and smoothly bounded domains $D \subseteq D_2 \subset {\mathbb R}^n$, there is a constant $C_{D_2,D,N,s}$ such that the following holds: Let $v \in C^\infty({\mathbb R}^n)$. For any multiindex $\alpha \in ({\mathbb N}_0)^n$ such that $\abs{\alpha} = i \leq N-1$, \[ \begin{ma} &&\int \limits_{D_2} \Babs{\partial^\alpha (v-Q_{D,N}^{i+1}(v)) - \brac{\partial^\alpha (v-Q^{i+1}_{D,N}(v))}_D}\\ &\leq& C_{D_2,D,N,s}\ \left (\frac{\abs{D_2}}{\abs{D}}\right )^{\frac{1}{2}} \operatorname{diam}(D_2)^{\frac{n}{2}+s-N}\ [v]_{D_2,s} \end{ma} \] where $[v]_{D,s}$ is defined as in \eqref{eq:defhsloc}.\\ If $D = r \tilde{D}$, $D_2 = r \tilde{D}_2$, then $C_{D_2,D,N,s} = r^{N-i} C_{\tilde{D}_2,\tilde{D},N,s}$. \end{proposition} \begin{proofP}{\ref{pr:vmqmmvi}} Let us denote \[
I := \int \limits_{D_2} \Babs{\partial^\alpha (v-Q_{D,N}^{i+1}) - \brac{ \partial^\alpha (v-Q^{i+1}_{D,N}(v))}_D }. \] A first application of H\"older's inequality and the classical Poincar\'{e} inequality yields \[
I \leq C_{D,D_2}\ \abs{D_2}^{\frac{1}{2}}\ \Vert \nabla \partial^\alpha (v-Q_{D,N}^{i+1}) \Vert_{L^2(D_2)}. \] Next, \eqref{eq:palphaqieqp} and the definition of $P$ in \eqref{eq:pmeanvaluedef} imply that we can apply the classical Poincar\'e inequality another $N-i-1$ times to estimate $I$ by \[ \begin{ma} &\leq& C_{D_2,D,N}\ \abs{D_2}^{\frac{1}{2}}\ \Vert \nabla^N (v-P_{D,N}(v)) \Vert_{L^2(D_2)}\\ &\overset{\eqref{eq:definQmeanval}}{=}& C_{D_2,D,N}\ \abs{D_2}^{\frac{1}{2}}\ \Vert \nabla^N v- \brac{ \nabla^N v}_D \Vert_{L^2(D_2)}. \end{ma} \] If $s = N+1$, yet another application of Poincar\'{e}'s inequality yields the claim. In the case $s \in (N,N+1)$, we estimate further \[ \begin{ma} I &\leq& C_{D_2,D,N}\ \left (\frac{\abs{D_2}}{\abs{D}}\right )^{\frac{1}{2}}\ \left ( \int \limits_{D_2} \int \limits_{D_2} \abs{\nabla^N v(x) - \nabla^N v(y)}^2\ dx\ dy \right )^{\frac{1}{2}}, \end{ma} \] which is bounded by \[C_{D_2,D,N}\ \left (\frac{\abs{D_2}}{\abs{D}}\right )^{\frac{1}{2}}\ \operatorname{diam}(D_2)^{\frac{n+2(s-N)}{2}}\ \left ( \int \limits_{D_2} \int \limits_{D_2} \frac{\abs{\nabla^N v(x) - \nabla^N v(y)}^2}{\abs{x-y}^{n+2(s-N)}}\ dx\ dy \right )^{\frac{1}{2}}.\\ \] The scaling factor for $D = r \tilde{D}$ then follows from the corresponding scaling factors of Poincar\'{e}'s inequality. \end{proofP}
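To illustrate the inductive definition \eqref{eq:definQmeanval}, let us record the case $N = 1$ as a worked example (it is not needed in the sequel). Writing $(x_j)_D := \fint_D y_j\ dy$, we have
\[
Q^{1}_{D,1}(v)(x) = \sum \limits_{j=1}^n x_j \fint \limits_D \partial_j v, \qquad P_{D,1}(v)(x) = Q^{0}_{D,1}(v)(x) = \fint \limits_D v + \sum \limits_{j=1}^n \brac{x_j - (x_j)_D} \fint \limits_D \partial_j v.
\]
One checks directly that $\fint_D (v - P_{D,1}(v)) = 0$ and $\fint_D \partial_j (v - P_{D,1}(v)) = 0$ for all $j$, so \eqref{eq:pmeanvaluedef} holds, and that $\partial_j Q^1_{D,1} = \partial_j P_{D,1}$, in accordance with \eqref{eq:palphaqieqp}.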
\begin{proposition}\label{pr:mvestnachbarn} For any $N \in {\mathbb N}_0$, $s \in (N,N+1]$, there is a constant $C_{N,s} > 0$ such that the following holds: For any $j \in {\mathbb Z}$, any multiindex $\abs{\alpha} \leq i \leq N$ and $v \in C^\infty({\mathbb R}^n)$
\[ \left \Vert \partial^\alpha \left (Q^{i}_{A_j,N} - Q^{i}_{A_{j+1},N} \right ) \right \Vert_{L^\infty (A_j)} \leq C_{N,s} (2^j r)^{s-\abs{\alpha}-\frac{n}{2}}\ [v]_{\tilde{A}_j,s}. \] \end{proposition} \begin{proofP}{\ref{pr:mvestnachbarn}} Assume first that $i = N$. Then if $s \in (N,N+1)$, \[
\begin{ma}
&&\Vert \partial^\alpha (Q^N_{A_j} - Q^N_{A_{j+1}}) \Vert_{L^\infty(A_j)}\\ &\overset{\eqref{eq:definQmeanval}}{\aleq{}}& (2^jr)^{N-\abs{\alpha}} \frac{1}{\abs{A_j}^2} \int \limits_{\tilde{A}_j} \int \limits_{\tilde{A}_j} \abs{\nabla^N v(x) - \nabla^N v(y)}\ dx\ dy\\ &\aleq{}& (2^jr)^{N-\abs{\alpha}} \frac{1}{\abs{A_j}} \left (\int \limits_{\tilde{A}_j} \int \limits_{\tilde{A}_j} \abs{\nabla^N v(x) - \nabla^N v(y)}^2\ dx\ dy \right )^{\frac{1}{2}}\\ &\aleq{}& (2^jr)^{-\abs{\alpha}-\frac{n}{2}+s} [v]_{\tilde{A}_j,s}.
\end{ma} \] If $s = N+1$ and $i = N$, one uses the classical Poincar\'{e} inequality to prove the claim.\\ Now let $i \leq N-1$, $s \in (N,N+1]$, and assume we have proven the claim for $i+1$. By \eqref{eq:definQmeanval}, \[ \begin{ma}
&&Q^i_{A_j}-Q^i_{A_{j+1}}\\ &=& Q^{i+1}_{A_j}-Q^{i+1}_{A_{j+1}}\\ &&\quad + \sum \limits_{\abs{\beta} = i} \frac{1}{\beta!}\ x^\beta \left (\fint \limits_{A_j} \partial^\beta (v-Q^{i+1}_{A_{j+1}}) - \fint \limits_{A_{j+1}} \partial^\beta (v-Q^{i+1}_{A_{j+1}})\right )\\ &&\quad + \sum \limits_{\abs{\beta} = i} \frac{1}{\beta!}\ x^\beta \left (\fint \limits_{A_j} \partial^\beta (Q^{i+1}_{A_{j+1}} - Q^{i+1}_{A_j}) \right ). \end{ma} \] Consequently, \[ \begin{ma}
&&\Vert \partial^\alpha (Q^i_{A_j}-Q^i_{A_{j+1}}) \Vert_{L^\infty(A_j)}\\ &\aleq{}& \Vert \partial^\alpha (Q^{i+1}_{A_j}-Q^{i+1}_{A_{j+1}}) \Vert_{L^\infty(A_j)}\\ &&\quad + (2^j r)^{i-\abs{\alpha}} \sum \limits_{\abs{\beta} = i} \fint \limits_{A_j} \abs{\partial^\beta (v-Q^{i+1}_{A_{j+1}}) - \fint \limits_{A_{j+1}} \partial^\beta (v-Q^{i+1}_{A_{j+1}}) }\\ &&\quad + (2^j r)^{i-\abs{\alpha}} \sum \limits_{\abs{\beta} = i} \Vert \partial^\beta (Q^{i+1}_{A_{j+1}} - Q^{i+1}_{A_j}) \Vert_{L^\infty(A_j)}. \end{ma} \] Then the claim for $i+1$ and Proposition~\ref{pr:vmqmmvi} conclude the proof. \end{proofP}
\begin{proposition}\label{pr:mvestbrak} For any $N \in {\mathbb N}_0$, $s \in (N,N+1]$ there is a constant $C_{N,s}$ such that the following holds. For any multiindex $\alpha \in ({\mathbb N}_0)^n$, $\abs{\alpha} \leq i \leq N$, for any $r > 0$, $k \in {\mathbb Z}$ and any $v \in {\mathcal{S}}({\mathbb R}^n)$: if $s-\frac{n}{2} \not \in \{i,\ldots,N\}$, \[
\Vert \partial^\alpha (Q^i_{B_r} - Q^i_{A_k}) \Vert_{L^\infty(\tilde{A}_k)} \leq C_{N,s}\ r^{s-\abs{\alpha}-\frac{n}{2}} \left (2^{k(s-\abs{\alpha}-\frac{n}{2})} + 2^{k(i-\abs{\alpha})} \right )\ [v]_{{\mathbb R}^n,s}, \] and if $s-\frac{n}{2} \in \{i,\ldots,N\}$, \[ \begin{ma}
&&\Vert \partial^\alpha (Q^i_{B_r} - Q^i_{A_k}) \Vert_{L^\infty(\tilde{A}_k)}\\
&&\leq C_{N,s}\ r^{s-\abs{\alpha}-\frac{n}{2}}\ 2^{k(i-\abs{\alpha})} \left (\abs{k} +1+2^{k(s-i-\frac{n}{2})} \right ) \ [v]_{{\mathbb R}^n,s}.
\end{ma} \] Here, as before, $A_k = B_{2^{k}r}(x) \backslash B_{2^{k-1}r}(x)$ and $\tilde{A}_k = B_{2^{k+1}r}(x) \backslash B_{2^{k-1}r}(x)$. \end{proposition} \begin{proofP}{\ref{pr:mvestbrak}} For brevity, let us abbreviate \[
d^{i,\alpha}_k := \Vert \partial^\alpha (Q^i_{B_r} - Q^i_{A_k}) \Vert_{L^\infty(\tilde{A}_k)}. \] Assume first $i = N$. \[
\begin{ma}
d^{N,\alpha}_k &\overset{\eqref{eq:definQmeanval}}{\aleq{}}& \left \Vert \sum \limits_{\abs{\beta} = N} \frac{\partial^\alpha x^\beta}{\beta!} \left ( \fint \limits_{B_r} \partial^\beta v - \fint \limits_{A_k} \partial^\beta v \right ) \right \Vert_{L^\infty(\tilde{A}_k)}\\ &\aleq{}& (2^k r)^{N-\abs{\alpha}} \abs{\fint \limits_{B_r} \nabla^N v - \fint \limits_{A_k} \nabla^N v}\\ &\aeq{}& (2^k r)^{N-\abs{\alpha}} \abs{\sum \limits_{l=-\infty}^0 \frac{\abs{A_l}}{\abs{B_r}} \fint \limits_{A_l} \nabla^N v - \fint \limits_{A_k} \nabla^N v}.
\end{ma} \] As $\frac{\abs{A_l}}{\abs{B_r}} = 2^{ln} (1-2^{-n})$ and thus $\sum \limits_{l=-\infty}^0 \frac{\abs{A_l}}{\abs{B_r}} = 1$, for $k > 0$ we estimate further \[
\begin{ma}
&&d^{N,\alpha}_k\\ &\aleq{}& (2^k r)^{N-\abs{\alpha}} \sum \limits_{l=-\infty}^0 2^{ln} \abs{\fint \limits_{A_l} \nabla^N v - \fint \limits_{A_k} \nabla^N v}\\ &\aleq{}& (2^k r)^{N-\abs{\alpha}} \sum \limits_{l=-\infty}^0 2^{ln} \sum \limits_{j=l}^{k-1} \abs{\fint \limits_{A_j} \nabla^N v - \fint \limits_{A_{j+1}} \nabla^N v}\\ &\overset{(\bigstar)}{\aleq{}}& (2^k r)^{N-\abs{\alpha}} \sum \limits_{l=-\infty}^0 2^{ln} \sum \limits_{j=l}^{k-1} (2^{j}r)^{-n} \left (\int \limits_{\tilde{A}_j}\ \int \limits_{\tilde{A}_j} \abs{\nabla^N v(x) - \nabla^N v(y)}^2 \ dx\ dy \right )^{\frac{1}{2}}\\ &\aleq{}& (2^k r)^{N-\abs{\alpha}} \sum \limits_{l=-\infty}^0 2^{ln} \sum \limits_{j=l}^{k-1} (2^{j}r)^{-\frac{n}{2}+s-N}\ [v]_{\tilde{A}_j,s}.\\ \end{ma} \] Of course, if $s = N+1$, one replaces the estimate in $(\bigstar)$ and uses instead Poincar\'{e}'s inequality. If $k \leq 0$ one has by virtually the same computation, \[ \begin{ma}
d^{N,\alpha}_k &\aleq{}& (2^k)^{N-\abs{\alpha}}r^{s-\frac{n}{2}-\abs{\alpha}}\ \Big ( \sum \limits_{l=-\infty}^{k-1} 2^{ln} \sum \limits_{j=l}^{k-1} 2^{j(-\frac{n}{2}+s-N)}\ [v]_{\tilde{A}_j,s}\\
&&+ \sum \limits_{l=k}^{0} 2^{ln} \sum \limits_{j=k}^{l-1} 2^{j(-\frac{n}{2}+s-N)}\ [v]_{\tilde{A}_j,s} \Big ).\\ \end{ma} \] Now we have to take care, whether $s-\frac{n}{2}-N = 0$ or not. Let \[
a_k := \begin{cases}
2^{k(s-\frac{n}{2}-N)}, \quad &\mbox{if $s-\frac{n}{2} - N \neq 0$,}\\
\abs{k}, \quad &\mbox{if $s-\frac{n}{2} - N = 0$,}\\
\end{cases} \] and respectively, \[
b_l := \begin{cases}
2^{l(s-\frac{n}{2}-N)}, \quad &\mbox{if $s-\frac{n}{2} - N \neq 0$,}\\
\abs{l}, \quad &\mbox{if $s-\frac{n}{2} - N = 0$.}\\
\end{cases} \] With this notation, applying H\"older's inequality for series, $d^{N,\alpha}_k$ is estimated independently of whether $k > 0$ or not, by \[ (2^k)^{N-\abs{\alpha}} r^{s-\abs{\alpha}-\frac{n}{2}} \sum \limits_{l=-\infty}^0 2^{ln} \left ( a_k + b_l \right )\ \left (\sum \limits_{j=-\infty}^{\infty} [v]_{\tilde{A}_j,s}^2 \right )^{\frac{1}{2}}\\ \] \[ \begin{ma} &\aleq{}& r^{s-\frac{n}{2}-\abs{\alpha}} \brac{2^{k(N-\abs{\alpha})}a_k + (2^k)^{N-\abs{\alpha}} \sum \limits_{l=-\infty}^0 2^{ln}b_l} [v]_{{\mathbb R}^n,s}\\ &\aleq{}& r^{s-\frac{n}{2}-\abs{\alpha}}\ [v]_{{\mathbb R}^n,s} \left ( 2^{k(N- \abs{\alpha})} a_k + 2^{k(N-\abs{\alpha})} \right ). \end{ma} \]
This concludes the case $i=N$. Next, let $i < N$ and assume the claim is proven for $i+1$. \[ \begin{ma} d^{i,\alpha}_{k} &=&\Vert \partial^\alpha (Q_{B_r}^{i} - Q_{A_{k}}^i) \Vert_{L^\infty(\tilde{A}_{k})}\\
&\overset{\eqref{eq:definQmeanval}}{\aleq{}}& d^{i+1,\alpha}_{k} + \sum \limits_{\abs{\beta} = i} \left (2^{k}r \right )^{i-\abs{\alpha}} \abs{ \fint \limits_{B_r} \partial^\beta (v-Q_{B_r}^{i+1}) -\fint \limits_{A_{k}} \partial^\beta (v-Q_{A_{k}}^{i+1})}\\ &\aleq{}& d^{i+1,\alpha}_{k}\\ &&+ \sum \limits_{\abs{\beta} = i} \left (2^{k}r \right )^{i-\abs{\alpha}} c_n \sum \limits_{l=-\infty}^0 2^{ln} \abs{\fint \limits_{A_l} \partial^\beta (v-Q_{B_r}^{i+1}) -\fint \limits_{A_{k}} \partial^\beta (v-Q_{A_{k}}^{i+1})},
\end{ma} \] where $c_n 2^{ln} = \frac{\abs{A_l}}{\abs{B_r}}$, so $\sum \limits_{l = - \infty}^0 c_n 2^{ln} = 1$, as in the case $i=N$ above. We estimate further \[ d_{k}^{i,\alpha} \aleq{} d^{i+1,\alpha}_{k} + \]
\] As above in the case $i=N$ we use a telescoping series to write \[ \begin{ma}
&&\abs{\fint \limits_{A_l} \partial^\beta (v-Q_{A_l}^{i+1}) -\fint \limits_{A_{k}} \partial^\beta (v-Q_{A_{k}}^{i+1})}\\ &\leq& \sum \limits_{j=l}^{k-1} \abs{\fint \limits_{A_j} \partial^\beta (v-Q_{A_j}^{i+1}) -\fint \limits_{A_{j+1}} \partial^\beta (v-Q_{A_{j+1}}^{i+1})}\\ &\aleq{}& \sum \limits_{j=l}^{k-1} \left \Vert \partial^\beta (Q^{i+1}_{A_j} - Q^{i+1}_{A_{j+1}} ) \right \Vert_{L^\infty (A_j)}\\ &&\quad + \fint \limits_{\tilde{A}_j} \abs{\partial^\beta (v-Q_{A_{j+1}}^{i+1}) -\fint \limits_{A_{j+1}} \partial^\beta (v-Q_{A_{j+1}}^{i+1})}\\ &=:& \sum \limits_{j=l}^{k-1} (I_j + II_j). \end{ma} \] Strictly speaking, we should again distinguish whether $l < k-1$ or $k-1 \leq l$, but as in the case $i=N$ both cases are treated in the same way. The term $I_j$ is estimated by Proposition~\ref{pr:mvestnachbarn}, \[
I_j \aleq{} \left (2^j r\right )^{s-\abs{\beta}-\frac{n}{2}}\ [v]_{\tilde{A}_j,s} = \left (2^j r\right )^{s-i-\frac{n}{2}} [v]_{\tilde{A}_j,s}. \] And by Proposition~\ref{pr:vmqmmvi}, \[
II_j \aleq{} (2^j r)^{-n+\frac{n}{2}+s -i}\ [v]_{\tilde{A}_j,s} = (2^j r)^{s-i-\frac{n}{2}}\ [v]_{\tilde{A}_j,s}. \] Hence, \[ \begin{ma}
&&\abs{\fint \limits_{A_l} \partial^\beta (v-Q_{A_l}^{i+1}) -\fint \limits_{A_{k}} \partial^\beta (v-Q_{A_{k}}^{i+1})}\\ &\aleq{}& r^{s-i-\frac{n}{2}} \sum \limits_{j=l}^{k-1} (2^j)^{s-i-\frac{n}{2}}\ [v]_{\tilde{A}_j,s}\\ &\aleq{}& r^{s-i-\frac{n}{2}} \left ( a_k + b_l\right ) \left (\sum \limits_{j=l}^{k-1} [v]_{\tilde{A}_j,s}^2 \right )^{\frac{1}{2}},\\ \end{ma} \] for $a_k$ and $b_l$ defined, similarly to the case $i=N$ above, as \[
a_k := \begin{cases}
2^{k(s-\frac{n}{2}-i)}, \quad &\mbox{if $s-\frac{n}{2} - i \neq 0$,}\\
\abs{k}, \quad &\mbox{if $s-\frac{n}{2} - i = 0$,}\\
\end{cases} \] and respectively, \[
b_l := \begin{cases}
2^{l(s-\frac{n}{2}-i)}, &\quad \mbox{if $s-\frac{n}{2} - i \neq 0$,}\\
\abs{l}, \quad &\mbox{if $s-\frac{n}{2} - i = 0$.}\\
\end{cases} \] Plugging in all these estimates, we arrive at the following estimate: \[
\begin{ma}
&& d^{i,\alpha}_k\\ &\aleq{}& d^{i+1,\alpha}_k + \sum \limits_{\abs{\beta} = i} \left (2^k r \right )^{i-\abs{\alpha}}\ \sum \limits_{l=-\infty}^0 2^{ln} d_l^{i+1,\beta}\\ &&\quad + r^{s-\abs{\alpha}-\frac{n}{2}}\ 2^{k(i-\abs{\alpha})}\ (a_k + 1)\ [v]_{{\mathbb R}^n,s}.
\end{ma} \] In either case, whether $s-\frac{n}{2}-\tilde{i} = 0$ for some $\tilde{i} \geq i$ or not, using the claim for $i+1$ we have \[
\begin{ma}
\sum \limits_{\abs{\beta} = i} \left (2^k r \right )^{i-\abs{\alpha}}\ \sum \limits_{l=-\infty}^0 2^{ln} d_l^{i+1,\beta} \aleq{} C_{N,s}\ r^{s-\frac{n}{2}-\abs{\alpha}} [v]_{{\mathbb R}^n,s},
\end{ma} \] and we can thus conclude. \end{proofP}
As an immediate consequence of Proposition~\ref{pr:mvestbrak} for $i=0$, $\abs{\alpha} = 0$, and $s = \frac{n}{2}$, we get the following two results. \begin{proposition}\label{pr:etarkpbmpkest} For a uniform constant $C > 0$, for any $v \in {\mathcal{S}}({\mathbb R}^n)$, $r > 0$, $k \in {\mathbb N}$ \[
\Vert \eta_r^k (P_{B_r,\lceil \frac{n}{2} \rceil -1}(v) - P_{A_k,\lceil \frac{n}{2} \rceil -1}(v)) \Vert_{L^\infty({\mathbb R}^n)} \leq C\ (1+\abs{k}) \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}. \] Here, $A_k = B_{2^{k+1}r}(x) \backslash B_{2^{k}r}(x)$ and $\tilde{A}_k = B_{2^{k+1}r}(x) \backslash B_{2^{k-1}r}(x)$. \end{proposition}
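Let us briefly indicate how the factor $(1+\abs{k})$ arises (a sketch using only the statements above): for $s = \frac{n}{2}$ and $i = \abs{\alpha} = 0$ we have $s - \frac{n}{2} = 0 \in \{0,\ldots,N\}$, so the second alternative of Proposition~\ref{pr:mvestbrak} applies and yields
\[
\Vert Q^0_{B_r} - Q^0_{A_k} \Vert_{L^\infty(\tilde{A}_k)} \leq C_{N}\ r^{0}\ 2^{0} \left (\abs{k} + 1 + 2^{0} \right )\ [v]_{{\mathbb R}^n,\frac{n}{2}} \leq C\ (1+\abs{k})\ [v]_{{\mathbb R}^n,\frac{n}{2}},
\]
and $Q^0 = P$ by \eqref{eq:palphaqieqp}. Combined with $\abs{\eta_r^k} \leq 1$, the support of $\eta_r^k$ lying in $\tilde{A}_k$, and the estimate $[v]_{{\mathbb R}^n,\frac{n}{2}} \leq C\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}$, this gives Proposition~\ref{pr:etarkpbmpkest}.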
\begin{proposition}\label{pr:estetarkvmp} There exists a constant $C > 0$ such that for any $r > 0$, $x_0 \in {\mathbb R}^n$, $k \in {\mathbb N}_0$, $v \in {\mathcal{S}}({\mathbb R}^n)$ we have \[
\Vert \eta_{r,x_0}^k (v-P) \Vert_{L^2({\mathbb R}^n)} \leq C \left (2^k r \right )^{\frac{n}{2}}\ (1+\abs{k})\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}, \] where $P$ is the polynomial of order $N := \left \lceil \frac{n}{2} \right \rceil-1$ such that $v-P$ satisfies the mean value condition \eqref{eq:meanvalueszero} in $D := B_{2r}$. Here, in a slight abuse of notation, for $k = 0$ we set $\eta_r^k \equiv \eta_{r}-\eta_{\frac{1}{2}r}$ for $\eta$ from Section \ref{ss:cutoff}. \end{proposition} \begin{proofP}{\ref{pr:estetarkvmp}} Let $P_k$ be the polynomial of order $N = \left \lceil \frac{n}{2} \right \rceil-1$ such that $v - P_k$ satisfies the mean value condition \eqref{eq:meanvalueszero} in $B_{2^k r} \backslash B_{2^{k-1} r}$. We then have \[
\Vert \eta_r^k (v-P) \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert \eta_r^k (v-P_k) \Vert_{L^2({\mathbb R}^n)} + \left (2^k r\right )^{\frac{n}{2}} \Vert \eta_r^k (P-P_k) \Vert_{L^\infty}. \] As Proposition~\ref{pr:etarkpbmpkest} estimates the second part of the last estimate, we are left to estimate \[
\Vert \eta_r^k (v-P_k) \Vert_{L^2({\mathbb R}^n)} \leq C \left (2^k r \right )^{\frac{n}{2}}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}. \] But this can be proven by arguments similar to those used in the proof of Lemma~\ref{la:poincmvAn}, see also Remark \ref{rem:poincmvAn}: by the classical Poincar\'e inequality and the fact that, by the choice of $P_k$, the mean values over $B_{2^{k+1}r} \backslash B_{2^{k}r}$ of all derivatives up to order $\lfloor \frac{n}{2} \rfloor$ of $v-P_k$ vanish, we obtain \[
\Vert \eta_r^k (v-P_k) \Vert_{L^2({\mathbb R}^n)} \aleq{} \left (2^k r \right )^{\lfloor \frac{n}{2} \rfloor}\ \Vert \nabla^{\lfloor \frac{n}{2} \rfloor} (v-P_k) \Vert_{L^2(B_{2^{k+1}r} \backslash B_{2^{k-1}r})}. \] If $n$ is an even number, this proves the claim. If $n$ is odd, we use again the mean value condition to see \[ \begin{ma}
&&\Vert \nabla^N (v-P_k) \Vert_{L^2(B_{2^{k+1}r} \backslash B_{2^{k-1}r})}^2\\ &\aleq{}& \fint \limits_{B_{2^{k+1}r} \backslash B_{2^{k}r}} \int \limits_{B_{2^{k+1}r} \backslash B_{2^{k-1}r}} \abs{\nabla^N v(x) - \nabla^N v(y)}^2\ dx\ dy\\ &\aleq{}& \left (2^k r \right )^{n-2 \lfloor \frac{n}{2} \rfloor} \int \limits_{B_{2^{k+1}r} \backslash B_{2^{k-1}r}}\int \limits_{B_{2^{k+1}r} \backslash B_{2^{k-1}r}} \frac{\abs{\nabla^N v(x) - \nabla^N v(y)}^2}{\abs{x-y}^{2n-2\lfloor \frac{n}{2} \rfloor}}\ dx\ dy \\ &\aleq{}& \left (2^k r \right )^{n-2\lfloor \frac{n}{2} \rfloor}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}^2. \end{ma} \] Taking the square root of the last estimate, one concludes. \end{proofP}
We will also need the following slightly sharper version of Proposition~\ref{pr:etarkpbmpkest}. \begin{lemma}\label{la:mvestbrakShrpr}(compare \cite[Lemma 4.2]{DR09Sphere})\\ Let $N := \lceil \frac{n}{2} \rceil-1$ and $\gamma > N$. Then for $\tilde{\gamma} = -N + \min (n,\gamma)$ and for any $v \in {\mathcal{S}}({\mathbb R}^n)$, $B_r(x_0) \subset {\mathbb R}^n$, $r > 0$, \[ \sum \limits_{k=1}^\infty 2^{-\gamma k} \left \Vert (P_{B_r,N}(v) - P_{A_k,N}(v)) \right \Vert_{L^{\infty}(\tilde{A}_k)} \leq C_\gamma \ \sum \limits_{j=-\infty}^\infty 2^{-\abs{j}\tilde{\gamma}}\ [v]_{\tilde{A}_j,\frac{n}{2}}. \] Here, $A_k = B_{2^{k+1}r}(x) \backslash B_{2^{k}r}(x)$ and $\tilde{A}_k = B_{2^{k+1}r}(x) \backslash B_{2^{k-1}r}(x)$. \end{lemma} \begin{remark} More precisely, we will prove for $i \in \{0,\ldots,N\}$ that whenever $\gamma > N$, $\abs{\alpha} \leq i$, for $\tilde{\gamma} := \min (n-N,\gamma -N)$ \begin{equation}\label{eq:mvpoincSiagammaClaim} \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \Vert \partial^\alpha (Q^i_{B_r} - Q^i_{A_k}) \Vert_{L^\infty(\tilde{A}_k)} \leq C_{\gamma,N} \left (r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^\infty 2^{-\abs{j} \tilde{\gamma}}\ [v]_{\tilde{A}_j,\frac{n}{2}} \right ). \end{equation} This more precise statement will be used in the estimates for the homogeneous norm $[\cdot]_s$, Lemma~\ref{la:comps01}. \end{remark} \begin{proofL}{\ref{la:mvestbrakShrpr}} As in the proof of Proposition~\ref{pr:mvestbrak}, set \[
d^{i,\alpha}_k := \Vert \partial^\alpha (Q^i_{B_r} - Q^i_{A_k}) \Vert_{L^\infty(\tilde{A}_k)}. \] Moreover, we set \[
S^{i,\alpha}_\gamma := \sum \limits_{k=1}^\infty 2^{-\gamma k}\ d^{i,\alpha}_k \] and \[
S^{i,\alpha}_{-\gamma} := \sum \limits_{k=-\infty}^0 2^{\gamma k}\ d^{i,\alpha}_k. \] Then, by the computations in the proof of Proposition~\ref{pr:mvestbrak}, for any $\abs{\alpha} \leq N$, \[ \begin{ma}
S^{N,\alpha}_\gamma &\aleq{}& r^{-\abs{\alpha}}\ \sum \limits_{k=1}^\infty \sum \limits_{l=-\infty}^0\ \sum \limits_{j=l}^{k-1} 2^{-jN+ln-\gamma k+kN-k\abs{\alpha}}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &=& r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{-jN}\ [v]_{\tilde{A}_j,\frac{n}{2}}\ \sum \limits_{l=-\infty}^j\ \sum \limits_{k=1}^\infty 2^{ln}\ 2^{k(N-\gamma-\abs{\alpha})}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=1}^\infty 2^{-jN}\ [v]_{\tilde{A}_j,\frac{n}{2}}\ \sum \limits_{l=-\infty}^0\ \sum \limits_{k=j+1}^\infty 2^{ln}\ 2^{k(N-\gamma-\abs{\alpha})}\\ &\overset{\gamma > N}{\aleq{}}& r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{j(n-N)}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=1}^\infty 2^{j(-\gamma-\abs{\alpha})}\ [v]_{\tilde{A}_j,\frac{n}{2}}.\\ \end{ma} \] Similarly, \[ \begin{ma}
S^{N,\alpha}_{-\gamma} &\aleq{}& r^{-\abs{\alpha}}\ \sum \limits_{k=-\infty}^0 \sum \limits_{l=-\infty}^{k-1}\ \sum \limits_{j=l}^{k-1} 2^{-jN+ln+\gamma k+kN-k\abs{\alpha}}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ && +\ r^{-\abs{\alpha}}\ \sum \limits_{k=-\infty}^0 \sum \limits_{l=k}^0\ \sum \limits_{j=k}^{l-1} 2^{-jN+ln+\gamma k+kN-k\abs{\alpha}}\ [v]_{\tilde{A}_j,\frac{n}{2}} \\ &\aleq{}& r^{-\abs{\alpha}}\
\sum_{j=-\infty}^0 2^{-jN} [v]_{\tilde{A}_j,\frac{n}{2}}\ \sum_{k = j+1}^{0} \sum_{l=-\infty}^{j} 2^{ln} 2^{k(\gamma+N-\abs{\alpha})}\\
&& +\ r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{-jN} [v]_{\tilde{A}_j,\frac{n}{2}} \sum_{k=-\infty}^{j} \sum_{l=j+1}^{0} 2^{ln} 2^{k(\gamma+N-\abs{\alpha})}\\ &\overset{\abs{\alpha}\leq N}{\aleq{}}& r^{-\abs{\alpha}}\
\sum_{j=-\infty}^0 2^{j(n-N)} [v]_{\tilde{A}_j,\frac{n}{2}}\\
&& +\ r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{j(\gamma-\abs{\alpha})} [v]_{\tilde{A}_j,\frac{n}{2}}. \end{ma} \] For $0 \leq i \leq N-1$, again using the computations done for the proof of Proposition~\ref{pr:mvestbrak}, \[
\begin{ma}
S^{i,\alpha}_\gamma &\aleq{}& S^{i+1,\alpha}_\gamma\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ \sum \limits_{k=1}^\infty 2^{k (i-\abs{\alpha}-\gamma)} S_{-n}^{i+1,\beta} \\ &&+\ r^{-\abs{\alpha}} \sum \limits_{k=1}^\infty 2^{k(i-\abs{\alpha}-\gamma)} \sum \limits_{l=-\infty}^0 2^{ln} \sum \limits_{j=l}^{k-1} 2^{-ji}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &\overset{\gamma > i}{\aleq{}}& S^{i+1,\alpha}_\gamma\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ S^{i+1,\beta}_{-n}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{j(n-i)}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=1}^\infty 2^{j(-\gamma-\abs{\alpha})}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &\overset{i\leq N}{\aleq{}}& S^{i+1,\alpha}_\gamma\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ S^{i+1,\beta}_{-n}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^0 2^{j(n-N)}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}}\ \sum \limits_{j=1}^\infty 2^{j(-\gamma-\abs{\alpha})}\ [v]_{\tilde{A}_j,\frac{n}{2}}.\\ \end{ma} \] And \[
\begin{ma}
S^{i,\alpha}_{-\gamma}
&\aleq{}& S^{i+1,\alpha}_{-\gamma}\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ \sum \limits_{k=-\infty}^0 2^{k (i-\abs{\alpha}+\gamma)} S_{-n}^{i+1,\beta} \\ &&+\ r^{-\abs{\alpha}} \sum \limits_{k=-\infty}^0 2^{k(i-\abs{\alpha}+\gamma)} \sum \limits_{l=-\infty}^{k-1} 2^{ln} \sum \limits_{j=l}^{k-1} 2^{-ji}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}} \sum \limits_{k=-\infty}^0 2^{k(i-\abs{\alpha}+\gamma)} \sum \limits_{l=k}^{0} 2^{ln} \sum \limits_{j=k-1}^{l} 2^{-ji}\ [v]_{\tilde{A}_j,\frac{n}{2}}\\
&\aleq{}& S^{i+1,\alpha}_{-\gamma}\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ S_{-n}^{i+1,\beta} \\ &&+\ r^{-\abs{\alpha}} \sum \limits_{j=-\infty}^0 2^{-ji} [v]_{\tilde{A}_j,\frac{n}{2}} \sum \limits_{l=-\infty}^{j} \sum \limits_{k=j+1}^{0} 2^{ln} 2^{k(i-\abs{\alpha}+\gamma)}\\ &&+\ r^{-\abs{\alpha}} \sum_{j=-\infty}^0 2^{-ji} [v]_{\tilde{A}_j,\frac{n}{2}} \sum_{k=-\infty}^{j} \sum_{l=j}^0 2^{ln} 2^{k(i-\abs{\alpha}+\gamma)}\\ &\aleq{}& S^{i+1,\alpha}_{-\gamma}\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ S_{-n}^{i+1,\beta} \\ &&+\ r^{-\abs{\alpha}} \sum \limits_{j=-\infty}^0 2^{j(n-i)} [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}} \sum_{j=-\infty}^0 2^{j(\gamma - \abs{\alpha})} [v]_{\tilde{A}_j,\frac{n}{2}}\\
&\overset{i \leq N}{\aleq{}}& S^{i+1,\alpha}_{-\gamma}\\ &&+\ r^{i-\abs{\alpha}} \sum_{\abs{\beta} = i}\ S_{-n}^{i+1,\beta} \\ &&+\ r^{-\abs{\alpha}} \sum \limits_{j=-\infty}^0 2^{j(n-N)} [v]_{\tilde{A}_j,\frac{n}{2}}\\ &&+\ r^{-\abs{\alpha}} \sum_{j=-\infty}^0 2^{j(\gamma - \abs{\alpha})} [v]_{\tilde{A}_j,\frac{n}{2}}. \end{ma} \] Consequently, one can prove by induction for $i \in \{0,\ldots,N\}$ that \eqref{eq:mvpoincSiagammaClaim} holds whenever $\gamma > N$, $\abs{\alpha} \leq i$, for $\tilde{\gamma} := \min (n-N,\gamma -N)$, i.e. \[ S^{i,\alpha}_\gamma + S^{i,\alpha}_{-\gamma} \leq C_{\gamma,N} \left (r^{-\abs{\alpha}}\ \sum \limits_{j=-\infty}^\infty 2^{-\abs{j} \tilde{\gamma}}\ [v]_{\tilde{A}_j,\frac{n}{2}} \right ). \] Taking $i = 0$, $\alpha = 0$, we conclude. \end{proofL}
\section{Integrability and Compensation Phenomena}\label{sec:tart} We will frequently use the following operator \begin{equation}\label{eq:Hdef}
H(u,v) := \lapn (u v) - (\lapn u) v - u \lapn v,\quad \mbox{for }u,v \in {\mathcal{S}}({\mathbb R}^n). \end{equation} In general there is no product rule making $H(u,v) \equiv 0$, or $H(u,v)$ an operator of lower order, as would happen if $n \in 4{\mathbb N}$. But in some way this quantity still acts \emph{like} an operator of lower order, as Lemma~\ref{la:tart:prphuvwedgeest} shows.\\ This was observed in \cite{DR09Sphere}. As remarked there, the compensation phenomena that appear are very similar to the ones in Wente's inequality (see the introduction of \cite{DR09Sphere} for more on that). In fact, in this note we would like to stress that even an argument very similar to Tartar's proof in \cite{Tartar85} still works.\\ \\ In this section we present a rather simple estimate which models the compensation phenomenon: more specifically, for $p > 0$ we are going to treat in Corollary \ref{co:esttriangle2} the quantity \[ \abs{\abs{x-y}^p-\abs{y}^p - \abs{x}^p}. \]
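The borderline case $n \in 4{\mathbb N}$ mentioned above can be made explicit and may serve as a guiding example (with the convention $\laps{s} = (-\Delta)^{\frac{s}{2}}$): for $n = 4$ we have $\lapn = -\Delta$, and the Leibniz rule gives
\[
H(u,v) = -\Delta(uv) + (\Delta u)\, v + u\, \Delta v = -2\, \nabla u \cdot \nabla v,
\]
so $H(u,v)$ is a bilinear expression in which each factor carries only half of the total order $\frac{n}{2} = 2$ of differentiation. Lemma~\ref{la:tart:prphuvwedgeest} below expresses, on the Fourier side, that the same balancing of derivatives persists for general $n$, where no exact product rule is available.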
\begin{proposition}\label{pr:esttriangle1} For any $x,y \in {\mathbb R}^n$ and any $p > 0$ we have \[
\abs{ \abs{x-y}^p - \abs{y}^p} \leq C_p\ \begin{cases}
\abs{x}^p \quad &\mbox{if $p \in (0,1)$,}\\
\abs{x}^p + \abs{x} \abs{y}^{p-1} \quad &\mbox{if $p \geq 1$}.
\end{cases} \] \end{proposition} \begin{proofP}{\ref{pr:esttriangle1}} The inequality is obviously true if $\abs{y} \leq 2 \abs{x}$ or $x = 0$. So assume $x \neq 0$ and $2\abs{x} < \abs{y}$, in particular, \begin{equation}\label{eq:tart:asuxleq12y} \abs{y-tx} \geq \abs{y}-t\abs{x} > \brac{1-\frac{t}{2}} \abs{y} > \abs{x}, \quad \mbox{for any $t \in (0,1)$}. \end{equation} We use Taylor expansion for $f(t) = \abs{y-tx}^p$ to write \[ \abs{\abs{x-y}^p - \abs{y}^p} \aleq{} \sum_{k=1}^{\lfloor p \rfloor} \abs{\frac{d^k}{dt^k} \Big \vert_{t=0} \abs{y-tx}^p} + \sup_{t \in (0,1)} \abs{\frac{d^{\lfloor p \rfloor + 1}}{dt^{\lfloor p \rfloor + 1}} \abs{y-tx}^p}. \] For $k \geq 1$, \[ \abs{\frac{d^{k}}{dt^{k}} \abs{y-tx}^p} \aleq{} \abs{y-tx}^{p-k} \abs{x}^k. \] So for $1 \leq k \leq \lfloor p \rfloor$, \[ \abs{\frac{d^k}{dt^k} \Big \vert_{t=0} \abs{y-tx}^p} \aleq{} \abs{y}^{p-k}\ \abs{x}^k \overset{\abs{x}<\abs{y}}{\aleq{}} \abs{x} \abs{y}^{p-1}. \] For $k = \lfloor p \rfloor + 1 > p$, $s \in (0,1)$, \[ \abs{\frac{d^{k}}{ds^{k}} \abs{y-sx}^p} \aleq{} \abs{y-sx}^{p-k} \abs{x}^k \overset{\eqref{eq:tart:asuxleq12y}}{\aleq{}} \abs{x}^p. \] \end{proofP}
Proposition~\ref{pr:esttriangle1} has the following consequence. \begin{corollary}\label{co:esttriangle2} For any $x,y \in {\mathbb R}^n$ and any $p > 0$, $\theta \in [0,1]$ we have for a constant $C_p > 0$ depending only on $p$ \[
\abs{ \abs{x-y}^p - \abs{y}^p - \abs{x}^p} \leq C_p\ \begin{cases}
\abs{x}^{p\theta}\ \abs{y}^{p(1-\theta)} \quad &\mbox{if $p \in (0,1]$,}\\
\abs{x}^{p-1}\abs{y} + \abs{x} \abs{y}^{p-1} \quad &\mbox{if $p > 1$}.
\end{cases} \] \end{corollary} \begin{proofC}{\ref{co:esttriangle2}} We only prove the case $p > 1$; the case $p \in (0,1]$ is similar. By Proposition~\ref{pr:esttriangle1}, \[ \begin{ma} &&\abs{ \abs{x-y}^p - \abs{y}^p - \abs{x}^p}\\ &\aleq{}& \min \left \{\abs{x}^p, \abs{y}^p \right \} + \abs{x}^{p-1}\abs{y} + \abs{y}^{p-1}\abs{x}\\ &\leq& 2\abs{x}^{p-1}\abs{y} + \abs{y}^{p-1}\abs{x}. \end{ma} \] \end{proofC}
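As a quick consistency check for Corollary \ref{co:esttriangle2} (not needed later), the case $p = 2$ can be computed exactly: expanding $\abs{x-y}^2 = \abs{x}^2 - 2\langle x,y \rangle + \abs{y}^2$, we find
\[
\abs{\abs{x-y}^2 - \abs{y}^2 - \abs{x}^2} = 2\, \abs{\langle x,y \rangle} \leq 2\, \abs{x}\ \abs{y}
\]
by the Cauchy-Schwarz inequality, which matches the bound $\abs{x}^{p-1}\abs{y} + \abs{x}\abs{y}^{p-1} = 2\abs{x}\abs{y}$ of the corollary for $p = 2$.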
\begin{lemma}\label{la:tart:prphuvwedgeest} For any $u, v \in {\mathcal{S}}({\mathbb R}^n)$ we have in the case $n = 1,2$ \[ \abs{H(u,v)^\wedge} \leq C\ \abs{(\Delta^{\frac{n}{8}} u)^\wedge}\ast \abs{(\Delta^{\frac{n}{8}} v)^\wedge}(\xi), \] and in the case $n \geq 3$ \[
\abs{(H(u,v))^\wedge} \leq C\ \abs{(\Delta^{\frac{n-2}{4}} u)^\wedge} \ast \abs{(\laps{1} v)^\wedge} + C \abs{(\laps{1} u)^\wedge} \ast \abs{(\Delta^{\frac{n-2}{4}} v)^\wedge}. \] \end{lemma} \begin{proofL}{\ref{la:tart:prphuvwedgeest}} As $u,v \in {\mathcal{S}} ({\mathbb R}^n)$ one checks that $H(u,v) \in L^2({\mathbb R}^n)$ and thus its Fourier transform is well defined. Consequently, \[ \begin{ma}
(H(u,v))^\wedge(\xi) &=& \abs{\xi}^{\frac{n}{2}} u^\wedge\ast v^\wedge(\xi) - v^\wedge \ast (\abs{\cdot}^{{\frac{n}{2}}} u^\wedge)(\xi) - u^\wedge \ast (\abs{\cdot}^{{\frac{n}{2}}} v^\wedge) (\xi)\\ &=& \int \limits_{{\mathbb R}^n} u^\wedge(\xi-y)\ v^\wedge (y)\ \left ( \abs{\xi}^{{\frac{n}{2}}} - \abs{\xi-y}^{\frac{n}{2}} - \abs{y}^{{\frac{n}{2}}} \right )\ dy. \end{ma} \] If $n = 1,2$, Corollary \ref{co:esttriangle2} (for $p := \frac{n}{2}$) gives \[
\abs{\abs{\xi}^{{\frac{n}{2}}} - \abs{y}^{{\frac{n}{2}}} - \abs{\xi - y}^{{\frac{n}{2}}} } \leq C\ \abs{y}^{\frac{n}{4}}\ \abs{\xi - y}^{\frac{n}{4}}, \] in the case $n \geq 3$ we have \[
\abs{\abs{\xi}^{{\frac{n}{2}}} - \abs{y}^{{\frac{n}{2}}} - \abs{\xi - y}^{{\frac{n}{2}}} } \leq C\ (\abs{y}^{\frac{n-2}{2}}\ \abs{\xi - y} + \abs{\xi - y}^{\frac{n-2}{2}}\ \abs{y}). \] This gives the claim. \end{proofL}
\begin{theorem}\label{th:integrability} (Cf. \cite{Tartar85}, \cite[Theorem 1.2, Theorem 1.3]{DR09Sphere})\\ Let $u,v \in \mathcal{S}({\mathbb R}^n)$ and set \[
H(u,v) := \lapn (uv) - v\lapn u - u\lapn v. \] Then, \[
\Vert H(u,v)^\wedge \Vert_{L^{2,1}({\mathbb R}^n)} \leq C_n\ \Vert \lapn u \Vert_{L^2({\mathbb R}^n)}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)} \] and \[
\Vert H(u,v) \Vert_{L^2({\mathbb R}^n)} \leq C_n\ \Vert (\lapn u)^\wedge \Vert_{L^{2,\infty}({\mathbb R}^n)}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}. \] In particular, \[
\Vert H(u,v) \Vert_{L^2({\mathbb R}^n)} \leq C_n\ \Vert \lapn u \Vert_{L^{2}({\mathbb R}^n)}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}. \] \end{theorem} \begin{proofT}{\ref{th:integrability}} Lemma~\ref{la:tart:prphuvwedgeest} implies, in the case $n = 1,2$ \[
\abs{(H(u,v))^\wedge} \leq C \left ( \abs{\cdot}^{-\frac{n}{4}} \abs{(\lapn u)^\wedge} \right )\ast \left ( \abs{\cdot}^{-\frac{n}{4}} \abs{(\lapn v)^\wedge} \right ) \] and in the case $n \geq 3$ \[ \begin{ma}
\abs{(H(u,v))^\wedge} &\leq& C \left ( \abs{\cdot}^{-1} \abs{(\lapn u)^\wedge} \right )\ast \left ( \abs{\cdot}^{-\frac{n-2}{2}} \abs{(\lapn v)^\wedge} \right )\\ && + C \left ( \abs{\cdot}^{-\frac{n-2}{2}} \abs{(\lapn u)^\wedge} \right )\ast \left ( \abs{\cdot}^{-1} \abs{(\lapn v)^\wedge} \right ). \end{ma} \] Now we use H\"older's inequality: By Proposition~\ref{pr:dl:lso} we have that \[ \begin{array}{lll} \abs{\cdot}^{-\frac{n}{4}} \in L^{4,\infty}({\mathbb R}^n), \quad & L^{2} \cdot L^{4,\infty} \subset L^{\frac{4}{3},2}, \quad & L^{2,\infty} \cdot L^{4,\infty} \subset L^{\frac{4}{3},\infty},\\ \abs{\cdot}^{-1} \in L^{n,\infty}({\mathbb R}^n), \quad & L^{2} \cdot L^{n,\infty} \subset L^{\frac{2n}{n+2},2}, \quad & L^{2,\infty} \cdot L^{n,\infty} \subset L^{\frac{2n}{n+2},\infty},\\ \abs{\cdot}^{-\frac{n-2}{2}} \in L^{\frac{2n}{n-2},\infty}({\mathbb R}^n), \quad & L^{2} \cdot L^{\frac{2n}{n-2},\infty} \subset L^{\frac{n}{n-1},2}, \quad & L^{2,\infty} \cdot L^{\frac{2n}{n-2},\infty} \subset L^{\frac{n}{n-1},\infty}. \end{array} \] Moreover, convolution acts as follows \[ \begin{array}{lll} L^{\frac{4}{3},2} \ast L^{\frac{4}{3},2} \subset L^{2,1}, \quad & L^{\frac{4}{3},\infty} \ast L^{\frac{4}{3},2} \subset L^{2},\\ L^{\frac{2n}{n+2},2} \ast L^{\frac{n}{n-1},2} \subset L^{2,1}, \quad & L^{\frac{2n}{n+2},2} \ast L^{\frac{n}{n-1},\infty} + L^{\frac{2n}{n+2},\infty} \ast L^{\frac{n}{n-1},2} \subset L^{2}. \end{array} \] We can conclude. \end{proofT}
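For the reader's convenience, here is the exponent arithmetic behind the H\"older and convolution steps above (a verification of the table, not new content):

```latex
% Case n = 1,2: Hölder in Lorentz spaces, then the convolution inequality:
\[
 \underbrace{\tfrac{1}{2} + \tfrac{1}{4} = \tfrac{3}{4}}_{\text{H\"older}},
 \qquad
 \underbrace{\tfrac{3}{4} + \tfrac{3}{4} = 1 + \tfrac{1}{2}}_{\text{convolution}}.
\]
% Case n >= 3: the two factors live in different Lorentz spaces:
\[
 \tfrac{1}{2} + \tfrac{1}{n} = \tfrac{n+2}{2n},
 \qquad
 \tfrac{1}{2} + \tfrac{n-2}{2n} = \tfrac{n-1}{n},
 \qquad
 \tfrac{n+2}{2n} + \tfrac{n-1}{n} = 1 + \tfrac{1}{2}.
\]
% In both cases the second Lorentz exponents add up to 1/2 + 1/2 = 1,
% which is what produces the L^{2,1}-target.
```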
\section{Localization Results for the Fractional Laplacian}\label{sec:locEf} Even though $\Delta^s$ is a nonlocal operator, its ``differentiating force'' concentrates around the point at which it is evaluated. Thus, to estimate $\laps{s}$ at a given point $x$, one essentially has to look only ``around'' $x$. In this spirit, the following results hold.
\subsection{Multiplication with disjoint support} In \cite{DR09Sphere} a special case of the following Lemma is used many times. Because of the lower-order terms that appear when dealing with dimensions and orders greater than one, we will need it in a more general setting, namely for arbitrary homogeneous multiplier operators. \begin{lemma} \label{la:bs:disjsuppGen} Let $M$ be an operator with Fourier multiplier $m \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$, $m \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$, i.e. \[
Mv := (m v^\wedge)^\vee \quad \mbox{for any $v \in {\mathcal{S}}$}. \] If $m$ is homogeneous of order $\delta > -n$, for any $a, b \in {\mathcal{S}}({\mathbb R}^n,{\mathbb C})$ such that for some $\gamma, d > 0$, $x \in {\mathbb R}^n$, $\operatorname{supp} a \subset B_\gamma(x)$ and $\operatorname{supp} b \subset {\mathbb R}^n \backslash B_{d+\gamma}(x)$, \[
\abs{\int \limits_{{\mathbb R}^n} a\ Mb} \leq C_M\ d^{-n-\delta} \ \Vert a \Vert_{L^1({\mathbb R}^n)}\ \Vert b \Vert_{L^1({\mathbb R}^n)}. \] \end{lemma} An immediate consequence, taking $m := \abs{\cdot}^{s+t}$, is \begin{corollary} \label{co:bs:disjsupp} Let $s, t > -n$, $s+t > -n$. Then, for all $a,b \in {\mathcal{S}}({\mathbb R}^n,{\mathbb C})$, such that for some $d, \gamma > 0$, $\operatorname{supp} a \subset B_\gamma(x)$ and $\operatorname{supp} b \subset {\mathbb R}^n \backslash B_{d+\gamma}(x)$, \[
\abs{\int \limits_{{\mathbb R}^n} \laps{s} a\ \laps{t} b} \leq C_{n,s,t}\ d^{-(n+s+t)}\ \Vert a \Vert_{L^1}\ \Vert b \Vert_{L^1}. \] \end{corollary}
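The instance of Corollary~\ref{co:bs:disjsupp} used most often below is $s = t = \frac{n}{2}$, so that $n+s+t = 2n$; we record it for later reference:

```latex
\[
 \abs{\int \limits_{{\mathbb R}^n} \lapn a\ \lapn b}
 \leq C_{n}\ d^{-2n}\ \Vert a \Vert_{L^1}\ \Vert b \Vert_{L^1},
\]
% under the same support assumptions as in the corollary; this is the
% decay rate behind the dyadic factors 2^{-2kn} in the localization
% arguments below.
```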
Lemma~\ref{la:bs:disjsuppGen} follows from the following proposition: since multiplier operators commute with translations, we may assume $\operatorname{supp} a \subset B_{\gamma}(0)$ and $\operatorname{supp} b \subset {\mathbb R}^n \backslash B_{\gamma + d}(0)$.
\begin{proposition} \label{pr:bs:disjsuppGen} Let $m \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})\cap {\Sw^{'}}$. If for some $\delta > -n$ we have that $m(\lambda x) = \lambda^\delta m(x)$ for any $x \in {\mathbb R}^n \backslash \{0\}$ and any $\lambda > 0$, \[
\abs{\int \limits_{{\mathbb R}^n} m\ \varphi^\wedge} \leq C_m\ d^{-n-\delta} \ \Vert \varphi \Vert_{L^1({\mathbb R}^n)}, \quad \mbox{for any $\varphi \in C_0^\infty({\mathbb R}^n \backslash \overline{B_d(0)},{\mathbb C})$, $d > 0$}. \] \end{proposition}
Proposition~\ref{pr:bs:disjsuppGen} again follows from some general facts about the Fourier Transform on tempered distributions: \begin{proposition}[Smoothness takes over to Fourier Transform]\label{pr:bs:disjsuppp1}${}$\\ Let $f \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$ and $f \in C^\infty({\mathbb R}^n \backslash \{ 0 \},{\mathbb C})$. If moreover $f$ is weakly homogeneous of order $\delta \in {\mathbb R}$, i.e. \[
f[\varphi(\lambda \cdot )] = \lambda^{-n-\delta} f[\varphi], \quad \mbox{for all $\varphi \in \mathcal{S}({\mathbb R}^n,{\mathbb C})$,} \] then $f^\wedge, f^\vee \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$ also belong to $C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$. \end{proposition} \begin{proofP}{\ref{pr:bs:disjsuppp1}} We refer to \cite[Proposition 2.4.8]{GrafC08}. \end{proofP}
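As a consistency check of this definition (not needed in the sequel): a locally integrable function that is pointwise homogeneous of degree $\delta > -n$, such as $f = \abs{\cdot}^{\delta}$, is weakly homogeneous of the same order, by the substitution $z = \lambda x$:

```latex
\[
 \int \limits_{{\mathbb R}^n} \abs{x}^{\delta}\ \varphi(\lambda x)\ dx
 = \lambda^{-n} \int \limits_{{\mathbb R}^n} \abs{\tfrac{z}{\lambda}}^{\delta}\ \varphi(z)\ dz
 = \lambda^{-n-\delta} \int \limits_{{\mathbb R}^n} \abs{z}^{\delta}\ \varphi(z)\ dz.
\]
```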
\begin{proposition}[Homogeneity takes over to Fourier Transform]\label{pr:bs:disjsuppp2} Let $f \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$. If $f$ is weakly homogeneous of order $\delta \in {\mathbb R}$, then $g = f^\wedge \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$ and $h = f^\vee \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$ are weakly homogeneous of order $\gamma = -n - \delta$. \end{proposition} \begin{proofP}{\ref{pr:bs:disjsuppp2}} This follows just by the definition of Fourier transform on tempered distributions, \[
f^\wedge [\varphi(\lambda \cdot)] = f [\varphi(\lambda \cdot)^\wedge] = \lambda^{-n} f [\varphi^\wedge (\frac{1}{\lambda} \cdot)] = \lambda^{-n} \lambda^{- (-n-\delta)} f [\varphi^\wedge]. \] The case $f^\vee$ is done the same way. \end{proofP}
\begin{proposition}[Weak Homogeneity and Strong Homogeneity]\label{pr:bs:disjsuppp3}${}$\\ Let $g \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$, $g \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$. If $g$ is weakly homogeneous of order $\gamma$, then also pointwise \[
g(\lambda x) = \lambda^\gamma g(x), \quad \mbox{for every $x \in {\mathbb R}^n \backslash \{0\}$, $\lambda > 0$.} \] \end{proposition} \begin{proofP}{\ref{pr:bs:disjsuppp3}} We have for any $\varphi \in {\mathcal{S}}({\mathbb R}^n,{\mathbb C})$, and any $\lambda > 0$ \[
g[\varphi(\lambda^{-1} \cdot)] = \int \limits g (x)\ \varphi(\lambda^{-1} x)\ dx = \lambda^n \int \limits g(\lambda z)\ \varphi(z) dz \] and by weak homogeneity \[
\lambda^{n+\gamma} g[\varphi] = g[\varphi(\lambda^{-1} \cdot)]. \] Thus, \[
\int \limits_{{\mathbb R}^n} (\lambda^\gamma \tilde{g}(x) - \tilde{g}(\lambda x)) \varphi(x) = 0, \quad \mbox{for any $\varphi \in {\mathcal{S}}$,} \] where $\tilde{g} \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$ denotes the smooth function representing $g$ away from the origin. This implies $\lambda^\gamma \tilde{g}(x) = \tilde{g}(\lambda x)$ for any $x \neq 0$. \end{proofP}
\begin{proposition}[Strong Homogeneity]\label{pr:bs:disjsuppp4} Let $g \in {\Sw^{'}}({\mathbb R}^n,{\mathbb C})$, $g \in C^\infty({\mathbb R}^n \backslash \{ 0\},{\mathbb C})$. If there is $\gamma \leq 0$ such that \[
g(\lambda x) = \lambda^\gamma g(x) \quad \mbox{for every $x \in {\mathbb R}^n \backslash \{0\}$, $\lambda > 0$} \] then \[
\abs{\int \limits g\ \varphi} \leq d^\gamma \Vert g \Vert_{L^\infty({\mathbb S}^{n-1})}\ \Vert \varphi \Vert_{L^1({\mathbb R}^n)},\quad \mbox{for every $\varphi \in C_0^\infty({\mathbb R}^n\backslash \overline{B_d(0)})$, $d > 0$.} \] \end{proposition} \begin{proofP}{\ref{pr:bs:disjsuppp4}} For every $\varphi \in C_0^\infty({\mathbb R}^n\backslash \overline{B_d(0)})$, $d > 0$, we have \[
\abs{\int \limits g(x)\ \varphi(x)\ dx} = \abs{\int \limits \abs{x}^\gamma\ g\brac{\frac{x}{\abs{x}}}\ \varphi(x)\ dx} \overset{\gamma \leq 0}{\leq} d^{\gamma}\ \Vert g \Vert_{L^\infty({\mathbb S}^{n-1})}\ \Vert \varphi \Vert_{L^1({\mathbb R}^n)}. \] \end{proofP}
Propositions~\ref{pr:bs:disjsuppp1}--\ref{pr:bs:disjsuppp4} together imply Proposition~\ref{pr:bs:disjsuppGen}.
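Spelled out, the chain of implications is the following. Assuming, as above, that $m$ acts by integration against a pointwise homogeneous function of degree $\delta$, the substitution computation shows that $m$ is weakly homogeneous of order $\delta$. With $g := m^\wedge$ we then obtain:

```latex
% g is weakly homogeneous of order -n-\delta by Proposition pr:bs:disjsuppp2,
% smooth away from the origin by Proposition pr:bs:disjsuppp1, hence
% pointwise homogeneous of degree -n-\delta \leq 0 by Proposition
% pr:bs:disjsuppp3 (here \delta > -n is used). Proposition pr:bs:disjsuppp4
% then yields
\[
 \abs{\int \limits_{{\mathbb R}^n} m\ \varphi^\wedge}
 = \abs{g[\varphi]}
 \leq d^{-n-\delta}\ \Vert g \Vert_{L^\infty({\mathbb S}^{n-1})}\ \Vert \varphi \Vert_{L^1({\mathbb R}^n)}.
\]
```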
\subsection{Equations with disjoint support localize} As a consequence of Corollary \ref{co:bs:disjsupp} we can \emph{de facto} localize our equations, i.e. replace multiplications of nonlocal operators applied to mappings with disjoint support (which would be zero in the case of local operators) by an operator of order zero: \begin{lemma}[Localizing]\label{la:bs:localizing} Let $b \in \Hf({\mathbb R}^n)$. Assume there is $d,\gamma > 0$, $x \in {\mathbb R}^n$ such that for $E := B_{\gamma+d}(x)$, $\operatorname{supp} b \subset {\mathbb R}^n \backslash E$. Then there is a function $a \in L^2({\mathbb R}^n)$ such that for $D := B_{\gamma}(x)$ \[
\int \limits_{{\mathbb R}^n} \lapn b\ \lapn \varphi = \int \limits_{{\mathbb R}^n} a\ \varphi, \quad \mbox{for every $\varphi \in C_0^\infty(D)$} \] and \[
\Vert a \Vert_{L^2({\mathbb R}^n)} \leq C_{D,E}\Vert b \Vert_{L^2({\mathbb R}^n)}. \] \end{lemma} \begin{proofL}{\ref{la:bs:localizing}} We are going to show that \begin{equation}\label{eq:bs:fstarbdd}
\abs{f(\varphi)} := \abs{\int \limits_{{\mathbb R}^n} \lapn b\ \lapn \varphi} \leq C_{D,E}\Vert \varphi \Vert_{L^2({\mathbb R}^n)} \quad \mbox{for every $\varphi \in C_0^\infty(D)$.} \end{equation} Then $f(\cdot)$ is a linear and bounded functional on the dense subspace $C_0^\infty(D) \subset L^2(D)$. Hence, it is extendable to all of $L^2(D)$, and by Riesz' representation theorem there exists $a \in L^2(D)$ such that $f(\varphi) = \langle a, \varphi \rangle_{L^2(D)}$ for every $\varphi \in L^2(D)$.\\ It remains to prove \eqref{eq:bs:fstarbdd}, which is done as in the proofs of \cite{DR09Sphere}. Set $r := \frac{1}{2} (\gamma+d)$, so that $E = B_{2r}(x) \supset D$. We decompose \[ \begin{ma} \int \limits_{{\mathbb R}^n} \lapn b\ \lapn \varphi &=& \sum \limits_{k=1}^\infty\ \int \limits_{{\mathbb R}^n} \lapn (\eta^k_{r,x} b)\ \lapn \varphi\\ &=:& \sum \limits_{k=1}^\infty\ I_k.\\ \end{ma} \] If $k \geq 3$, using that the supports of $\eta_r^k b$ and $\varphi$ are disjoint, more precisely by Corollary \ref{co:bs:disjsupp}, \[ \begin{ma} I_k &\overset{\sref{C}{co:bs:disjsupp}}{\aleq{}}& 2^{-2kn} \Vert \eta^k_r b \Vert_{L^1({\mathbb R}^n)} \Vert \varphi \Vert_{L^1({\mathbb R}^n)}\\ &\aleq{}& 2^{-\frac{3}{2}kn} \Vert \eta^k_r b \Vert_{L^2({\mathbb R}^n)} \Vert \varphi \Vert_{L^1({\mathbb R}^n)}\\ &\aleq{}& 2^{-\frac{3}{2}kn} \Vert b \Vert_{L^2({\mathbb R}^n)} \Vert \varphi \Vert_{L^2(D)}.\\ \end{ma} \] For $1 \leq k \leq 2$ we use that the supports of $b$ and $\varphi$ have distance at least $d$, to get, again by Corollary \ref{co:bs:disjsupp}, \[ I_k \aleq{} d^{-\frac{3}{2}n} \Vert b \Vert_{L^2({\mathbb R}^n)} \Vert \varphi \Vert_{L^2(D)}. \] Consequently, \[ \sum_{k=1}^\infty I_k \leq C_{D,E} \Vert b \Vert_{L^2({\mathbb R}^n)}\ \Vert \varphi \Vert_{L^2(D)}. \] \end{proofL}
\subsection{Hodge decomposition: Local estimates of s-harmonic functions}\label{ss:hodge} If an integrable function $h$ satisfies $\Delta h = 0$ weakly in a (large) ball, we can estimate \[ \Vert h \Vert_{L^2(B_r)} \leq C \left (\frac{r}{\rho}\right )^2 \Vert h \Vert_{L^2(B_\rho)}, \quad \mbox{for $0 < r < \rho$}. \] The goal of this subsection is to prove in Lemma~\ref{la:estharmonic} a similar estimate for the nonlocal operator $\lapn$.\\ \begin{proposition}\label{pr:estconvolutlplmp} Let $s \in (0,\frac{n}{2})$. Then for any $x \in {\mathbb R}^n$, $r > 0$ and $v \in {\mathcal{S}}$, such that $\operatorname{supp} v \subset B_r(x)$, and any $k \in {\mathbb N}_0$, \[ \Vert \abs{(\laps{s} \eta_{r,x}^k)^\wedge} \ast \abs{(\lapms{s} v)^\wedge} \Vert_{L^2({\mathbb R}^n)} \leq C_s 2^{-ks} \Vert v \Vert_{L^2({\mathbb R}^n)}. \] \end{proposition} \begin{proofP}{\ref{pr:estconvolutlplmp}} By Young's inequality for convolutions, whose exponents satisfy \[ \frac{1}{1} + \frac{1}{2} = 1 + \frac{1}{2}, \] we have \begin{equation}\label{eq:convol1} \Vert \abs{(\laps{s} \eta_{r,x}^k)^\wedge} \ast \abs{(\lapms{s} v)^\wedge} \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert (\laps{s} \eta_{r,x}^k)^\wedge \Vert_{L^1({\mathbb R}^n)}\ \Vert (\lapms{s} v)^\wedge \Vert_{L^2({\mathbb R}^n)}. \end{equation} By Lemma~\ref{la:lapmsest2}, \begin{equation}\label{eq:convol2} \Vert (\lapms{s} v)^\wedge \Vert_{L^2({\mathbb R}^n)} = \Vert \lapms{s} v \Vert_{L^2({\mathbb R}^n)} \leq C_s r^s\Vert v \Vert_{L^2({\mathbb R}^n)}. \end{equation} Furthermore, Proposition~\ref{pr:etarkgoodest} implies \begin{equation}\label{eq:convol3} \Vert (\laps{s} \eta_{r,x}^k)^\wedge \Vert_{L^1({\mathbb R}^n)} \leq C_s (2^kr)^{-s}. \end{equation} Together, \eqref{eq:convol1}, \eqref{eq:convol2} and \eqref{eq:convol3} give the claim. \end{proofP}
As a consequence we have \begin{proposition}\label{pr:estlapnetlapmnvL2} There is a uniform constant $C > 0$ such that for any $r > 0$, $x \in {\mathbb R}^n$, $v \in {\mathcal{S}}$, such that $\operatorname{supp} v \subset B_r(x)$, and for any $k \in {\mathbb N}_0$ \[ \Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)} \leq C\ 2^{-k \frac{1}{4}} \Vert v \Vert_{L^2({\mathbb R}^n)}. \] \end{proposition} \begin{proofP}{\ref{pr:estlapnetlapmnvL2}} We have according to \eqref{eq:Hdef} \[ \lapn (\eta_{r,x}^k \lapmn v) = (\lapn \eta_{r,x}^k) \lapmn v + \eta_{r,x}^k v + H(\eta_{r,x}^k,\lapmn v). \] By the support condition on $v$, \[ \eta_{r,x}^k v = 0, \quad \mbox{if $k \geq 1$}, \] so trivially for any $k \in {\mathbb N}_0$, \[ \Vert \eta_{r,x}^k v \Vert_{L^2({\mathbb R}^n)} \leq 2^{\frac{n}{2}}\ 2^{-k \frac{n}{4}} \Vert v \Vert_{L^2({\mathbb R}^n)}. \] Next, applying Proposition~\ref{pr:etarkgoodest} for $s = \frac{n}{2}$ and $p = 4$ and Lemma~\ref{la:lapmsest2} for $s = \frac{n}{2}$ and $p'=4$, we have \[ \Vert (\lapn \eta_{r,x}^k) \lapmn v \Vert_{L^2({\mathbb R}^n)} \leq \Vert (\lapn \eta_{r,x}^k) \Vert_{L^4}\ \Vert \lapmn v \Vert_{L^4} \aleq{}
2^{-k\frac{n}{4}}r^{-\frac{n}{4}} r^{\frac{n}{4}}\ \Vert v \Vert_{L^2}. \] Thus, we have shown that \begin{equation}\label{eq:hetalpmvleft} \Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)} \aleq{} 2^{-k\frac{n}{4}} \Vert v \Vert_{L^2({\mathbb R}^n)} + \Vert H(\eta_{r,x}^k,\lapmn v) \Vert_{L^2({\mathbb R}^n)}. \end{equation} By Lemma~\ref{la:tart:prphuvwedgeest} we have that in the case $n=1,2$ \[ \Vert H(\eta_{r,x}^k,\lapmn v) \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert \abs{(\Delta^{\frac{n}{8}} \eta_{r,x}^k)^\wedge}\ast \abs{(\Delta^{-\frac{n}{8}} v)^\wedge} \Vert_{L^2({\mathbb R}^n)}, \] and in the case $n \geq 3$ \[ \begin{ma} &&\Vert H(\eta_{r,x}^k,\lapmn v) \Vert_{L^2({\mathbb R}^n)}\\ &\aleq{}& \Vert \abs{(\Delta^{\frac{n-2}{4}} \eta_{r,x}^k)^\wedge} \ast \abs{(\Delta^{\frac{2-n}{4}} v)^\wedge} \Vert_{L^2} + \Vert \abs{(\laps{1} \eta_{r,x}^k)^\wedge} \ast \abs{(\lapms{1} v)^\wedge} \Vert_{L^2}. \end{ma} \] That is, in order to prove the claim we need the estimate \begin{equation}\label{eq:estconvol.hopefull} \Vert \abs{(\laps{s} \eta_{r,x}^k)^\wedge} \ast \abs{(\lapms{s} v)^\wedge} \Vert_{L^2} \leq C_s\ 2^{-ks} \Vert v \Vert_{L^2} \end{equation} where $s = \frac{n}{4}$ in the case $n= 1,2$ and $s = \frac{n-2}{2}$ or $s =1$ in the case $n \geq 3$. In all three cases we have that $0 < s < \frac{n}{2}$ and Proposition~\ref{pr:estconvolutlplmp} implies \eqref{eq:estconvol.hopefull}. Plugging these estimates into \eqref{eq:hetalpmvleft} we conclude. \end{proofP}
\begin{lemma}[Estimate of the Harmonic Term]\label{la:estharmonic} Let $h \in L^2({\mathbb R}^n)$, such that \begin{equation}\label{eq:lapnhWeq0} \int \limits_{{\mathbb R}^n} h\ \lapn \varphi = 0 \quad \mbox{for any $\varphi \in C_0^\infty(B_{\Lambda r}(x))$,} \end{equation} for some $\Lambda > 0$. Then, for a uniform constant $C > 0$ \[ \Vert h \Vert_{L^2(B_{r}(x))} \leq C\ \Lambda^{-\frac{1}{4}} \Vert h \Vert_{L^2({\mathbb R}^n)}. \] \end{lemma} \begin{proofL}{\ref{la:estharmonic}} It suffices to prove the claim for large $\Lambda$, say $\Lambda \geq 8$. Let $k_0 \in {\mathbb N}$, $k_0 \geq 3$, such that $\Lambda < 2^{k_0} \leq 2\Lambda$. Approximate $h$ by functions $h_\varepsilon \in C_0^\infty({\mathbb R}^n)$ such that for any $\varepsilon > 0$ we have $\Vert h - h_\varepsilon \Vert_{L^2({\mathbb R}^n)} \leq \varepsilon$ and $\Vert h_\varepsilon \Vert_{L^2({\mathbb R}^n)} \leq 2 \Vert h \Vert_{L^2({\mathbb R}^n)}$. By Riesz' representation theorem, \[ \Vert h_\varepsilon \Vert_{L^2(B_r(x))} = \sup_{\ontop{v \in C_0^\infty(B_r(x))}{\Vert v \Vert_{L^2} \leq 1}} \int \limits h_\varepsilon v. \] For such a $v$, note that by Lemma~\ref{la:lapmsest2}, $\lapmn v \in L^p({\mathbb R}^n)$ for any $p > 2$, and thus $\eta_{r,x}^k \lapmn v \in L^2({\mathbb R}^n)$ for any $k \in {\mathbb N}_0$. Moreover, by Proposition~\ref{pr:estlapnetlapmnvL2} \begin{equation}\label{eq:estharm:lapnetalapmnv}
\Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)} \leq C\ 2^{-\frac{k}{4}}. \end{equation} In order to apply \eqref{eq:lapnhWeq0}, we rewrite\footnote{Note that $\lapn h_\varepsilon \in L^p({\mathbb R}^n)$ for any $p \in (1,2)$. In fact, for all large $k \in {\mathbb N}$, the $L^p$-norm on the annuli $A_k = B_{2^{k+1}}(0) \backslash B_{2^k}(0)$ satisfies $\Vert \lapn h_\varepsilon \Vert_{L^p(A_k)} \leq C_{h_\varepsilon} 2^{-kn (\frac{3}{2}-\frac{1}{p})} \Vert h_\varepsilon \Vert_{L^2({\mathbb R}^n)}$, as can be shown using Corollary \ref{co:bs:disjsupp}. Thus, $(\lapn h_\varepsilon) (\lapmn v) \in L^1({\mathbb R}^n)$.} \[ \begin{ma} \int \limits h_\varepsilon\ v &=& \int \limits (\lapn h_\varepsilon) (\lapmn v)\\ &=& \sum \limits_{k=0}^\infty \int \limits (\lapn h_\varepsilon)\ \eta_{r,x}^k\ \lapmn v\\ &=&\sum \limits_{k=k_0-1}^\infty \int \limits h_\varepsilon\ \lapn (\eta_{r,x}^k \lapmn v) + \sum \limits_{k=0}^{k_0-2} \int \limits h_\varepsilon\ \lapn (\eta_{r,x}^k \lapmn v)\\ &=:& I + II. \end{ma} \] The second term $II$ goes to zero as $\varepsilon \to 0$. In fact, for $k \leq k_0-2$ we have that $\operatorname{supp} \eta_{r,x}^k \subset B_{\Lambda r}(x)$ and thus \[ \begin{ma} \int \limits_{{\mathbb R}^n} h_\varepsilon\ \lapn (\eta_{r,x}^k \lapmn v) &\overset{\eqref{eq:lapnhWeq0}}{=}& \int \limits (h_\varepsilon - h)\ \lapn (\eta_{r,x}^k \lapmn v)\\ &\leq& \Vert h_\varepsilon - h\Vert_{L^2({\mathbb R}^n)}\ \Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)}\\ &\leq& \varepsilon\ \Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)}\\ &\overset{\eqref{eq:estharm:lapnetalapmnv}}{\leq}& C_\Lambda\ \varepsilon.
\end{ma} \] For the remaining term $I$ we have, using again Proposition~\ref{pr:estlapnetlapmnvL2}, \[ \begin{ma} I &=& \sum \limits_{k=k_0-1}^\infty \int \limits h_\varepsilon\ \lapn (\eta_{r,x}^k \lapmn v)\\ &\leq& \sum \limits_{k=k_0-1}^\infty \Vert \lapn (\eta_{r,x}^k \lapmn v) \Vert_{L^2({\mathbb R}^n)}\ \Vert h_\varepsilon \Vert_{L^2({\mathbb R}^n)}.\\ &\overset{\eqref{eq:estharm:lapnetalapmnv}}{\leq}& \Vert h_\varepsilon \Vert_{L^2({\mathbb R}^n)}\ \sum \limits_{k=k_0-1}^\infty 2^{-\frac{k}{4}}\\ &\aleq{}& \Vert h \Vert_{L^2({\mathbb R}^n)}\ \sum \limits_{k=k_0-1}^\infty 2^{-\frac{k}{4}}. \end{ma} \] Because of \[ \sum \limits_{k=k_0-1}^\infty 2^{-\frac{k}{4}} \leq C 2^{-k_0\frac{1}{4}} \leq C \Lambda^{-\frac{1}{4}}, \] we arrive at \[ \int \limits h_\varepsilon\ v \leq C \varepsilon + C \Lambda^{-\frac{1}{4}}\ \Vert h \Vert_{L^2({\mathbb R}^n)}. \] Consequently, for all $\varepsilon > 0$, \[
\Vert h \Vert_{L^2(B_r(x))} \leq (C+1) \varepsilon + C \Lambda^{-\frac{1}{4}}\ \Vert h \Vert_{L^2({\mathbb R}^n)}. \] Letting $\varepsilon \to 0$, we conclude. \end{proofL}
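The passage from the dyadic sum to the factor $\Lambda^{-\frac{1}{4}}$ above is just a geometric series; the following quick numerical sketch (an illustration only) checks the tail bound with the explicit constant $C = (1-2^{-1/4})^{-1}$:

```python
# Tail of the geometric series sum_{k >= k0} 2^{-k/4} used in the proof:
# it equals 2^{-k0/4} / (1 - 2^{-1/4}), hence is <= C * 2^{-k0/4}.
C = 1.0 / (1.0 - 2.0 ** (-0.25))

def tail(k0: int, terms: int = 400) -> float:
    """Partial tail sum_{k=k0}^{k0+terms-1} 2^{-k/4} (numerically converged)."""
    return sum(2.0 ** (-k / 4.0) for k in range(k0, k0 + terms))

for k0 in (2, 7, 20):
    assert tail(k0) <= C * 2.0 ** (-k0 / 4.0) + 1e-12
```

Since $\Lambda < 2^{k_0}$, the bound $C\,2^{-k_0/4} \leq C\,\Lambda^{-1/4}$ follows.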
The following theorem proves Theorem~\ref{th:hodge}. \begin{theorem}\label{th:localest} There are uniform constants $\Lambda, C > 0$ such that the following holds: For any $x \in {\mathbb R}^n$ and any $r > 0$ we have for every $v \in L^2({\mathbb R}^n)$, $\operatorname{supp} v \subset B_r(x)$ \[ \Vert v \Vert_{L^2(B_r(x))} \leq C \sup_{\varphi \in C_0^\infty(B_{\Lambda r}(x))} \frac{1}{\Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)}}\ \int \limits_{{\mathbb R}^n} v \lapn \varphi. \] \end{theorem} \begin{proofT}{\ref{th:localest}} We have, \[ \Vert v \Vert_{L^2(B_r(x))} = \sup_{\ontop{f \in L^2({\mathbb R}^n)}{\Vert f \Vert_{L^2} \leq 1}} \int \limits v\ f. \] By Lemma~\ref{la:hodge} and Lemma~\ref{la:estharmonic}, we decompose $f = \lapn \varphi + h$, $\varphi \in \Hf({\mathbb R}^n)$ and $\operatorname{supp} \varphi \subset B_{\Lambda r}(x)$, $\Vert h \Vert_{L^2(B_r(x))} \leq C\ \Lambda^{-\frac{1}{4}}$ for arbitrarily large $\Lambda > 0$. Thus, by the support condition on $v$, \[ \Vert v \Vert_{L^2(B_r(x))} \leq C \sup_{\ontop{\varphi \in C_0^\infty(B_{\Lambda r}(x))}{\Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)} \leq 1}} \int \limits v \lapn \varphi + C\Lambda^{-\frac{1}{4}}\ \Vert v \Vert_{L^2(B_r(x))}. \] Taking $\Lambda$ large enough, we can absorb and conclude. \end{proofT}
\subsection{Products of lower order operators localize well}\label{ss:loolocwell} The goals of this subsection are Lemma~\ref{la:lowerorderlocalest} and Lemma~\ref{la:lowerorderlocalest2}, which essentially state that terms of the form \[
\laps{s} a\ \Delta^{\frac{n}{4}-\frac{s}{2}}b \] ``localize well'', provided $s$ is neither of the extremal values $0$ nor $\frac{n}{2}$.
\begin{proposition}[Lower Order Operators and $L^2$]\label{pr:lowerorderest} For any $s \in (0,\frac{n}{2})$, $M_1$, $M_2$ zero multiplier operators there exists a constant $C_{M_1,M_2,s} > 0$ such that for any $u,v \in {\mathcal{S}}$, \[
\Vert M_1\Delta^{\frac{2s-n}{4}} u\ M_2\Delta^{-\frac{s}{2}} v \Vert_{L^2({\mathbb R}^n)} \leq C_{M_1,M_2,s} \Vert u \Vert_{L^2({\mathbb R}^n)}\ \Vert v \Vert_{L^2({\mathbb R}^n)}. \] \end{proposition} \begin{proofP}{\ref{pr:lowerorderest}} Set $p := \frac{n}{s}$ and $q := \frac{2n}{n-2s}$. As $2 < p,q < \infty$ (using also H\"ormander's multiplier theorem, \cite{Hoermander60}), \[ \begin{ma}
&&\Vert M_1 \Delta^{\frac{2s-n}{4}} u\ M_2 \Delta^{-\frac{s}{2}} v \Vert_{L^2}\\ &\leq& \Vert M_1 \Delta^{\frac{2s-n}{4}} u\Vert_{L^p} \ \Vert M_2 \Delta^{-\frac{s}{2}} v \Vert_{L^q}\\ &\overset{p,q \in (1,\infty)}{\aleq{}}& \Vert \Delta^{\frac{2s-n}{4}} u\Vert_{L^p} \ \Vert \Delta^{-\frac{s}{2}} v \Vert_{L^q}\\ &\overset{\ontop{p,q \in [2,\infty)}{\sref{P}{pr:fourierlpest}}}{\aleq{s}}& \Vert \abs{\cdot}^{\frac{2s-n}{2}} u^\wedge\Vert_{L^{p',p}} \ \Vert \abs{\cdot}^{-s} v^\wedge \Vert_{L^{q',q}}\\ &\overset{p,q \geq 2}{\aleq{s}}& \Vert \abs{\cdot}^{\frac{2s-n}{2}} u^\wedge\Vert_{L^{p',2}} \ \Vert \abs{\cdot}^{-s}\ v^\wedge \Vert_{L^{q',2}}\\ &\overset{\sref{P}{pr:dl:lso}}{\aleq{s}}& \Vert u^\wedge\Vert_{L^{2,2}} \ \Vert v^\wedge \Vert_{L^{2,2}}\\ &=&\Vert u \Vert_{L^2} \ \Vert v \Vert_{L^{2}}.\\ \end{ma} \] \end{proofP}
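The choice of $p$ and $q$ in the proof above is dictated by H\"older's inequality together with the Sobolev-type weights; as a check of the exponent arithmetic:

```latex
\[
 \frac{1}{p} + \frac{1}{q}
 = \frac{s}{n} + \frac{n-2s}{2n}
 = \frac{2s + n - 2s}{2n}
 = \frac{1}{2},
\]
% so the product of the two factors indeed lands in L^2; moreover both
% p = n/s and q = 2n/(n-2s) lie in (2, \infty) precisely when 0 < s < n/2.
```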
\begin{lemma}\label{la:lowerorderlocalest} Let $s \in (0,\frac{n}{2})$ and $M_1, M_2$ zero multiplier operators. Then there is a constant $C_{M_1,M_2,s} > 0$ such that the following holds. For any $u,v \in {\mathcal{S}}$ and any $\Lambda > 2$, \[ \begin{ma} &&\Vert M_1\laps{s} u\ M_2 \Delta^{\frac{n}{4}-\frac{s}{2}}v \Vert_{L^2(B_r(x))}\\ &\leq& C_{M_1,M_2,s}\ \left (\Vert \lapn u \Vert_{L^2(B_{2\Lambda r}(x))} + \Lambda^{-s} \sum \limits_{k=1}^\infty 2^{-ks} \Vert \eta_{\Lambda r,x}^k \lapn u \Vert_{L^2} \right ) \Vert \lapn v \Vert_{L^2}. \end{ma} \] \end{lemma} \begin{proofL}{\ref{la:lowerorderlocalest}} As usual \[
\Vert \laps{s} M_1 u\ \Delta^{\frac{n}{4}-\frac{s}{2}}\ M_2 v \Vert_{L^2(B_r(x))} = \sup_{\ontop{\varphi \in C_0^\infty(B_r(x),{\mathbb C})}{\Vert \varphi \Vert_{L^2} \leq 1}} \abs{\int \limits M_1 \laps{s}u \ M_2 \Delta^{\frac{n}{4}-\frac{s}{2}}\ v\ \varphi}. \] For such a $\varphi$ we then decompose $\laps{s} u$ into the part which is close to $B_r(x)$ and the far-off part: \[ \begin{ma}
&&\int \limits M_1 \laps{s}u \ M_2 \Delta^{\frac{n}{4}-\frac{s}{2}}\ v\ \varphi\\ &=& \int \limits M_1 \Delta^{\frac{s}{2}-\frac{n}{4}} (\eta_{\Lambda r} \lapn u) \ M_2\Delta^{\frac{n}{4}-\frac{s}{2}}\ v\ \varphi\\ &&+ \sum_{k=1}^\infty \int \limits M_1 \Delta^{\frac{s}{2}-\frac{n}{4}} (\eta^k_{\Lambda r} \lapn u) \ M_2 \lapms{s}\lapn\ v\ \varphi \\ &=:& I + \sum \limits_{k=1}^\infty II_k. \end{ma} \] We first estimate $I$, using Proposition~\ref{pr:lowerorderest}: \[
\abs{I} \aleq{s} \Vert \eta_{\Lambda r} \lapn u \Vert_{L^2}\ \Vert \lapn v \Vert_{L^2}. \] In order to estimate $II_k$, observe that for any $\varphi \in C_0^\infty(B_r(x),{\mathbb C})$, $\Vert \varphi \Vert_{L^2} \leq 1$, $s \in (0,\frac{n}{2})$, if we set $p := \frac{2n}{n+2s} \in (1,2)$ \begin{equation}\label{eq:lol:vplapmsnv} \begin{ma} &&\Vert \varphi\ M_2 \lapms{s} \lapn v \Vert_{L^1}\\ &\leq& \Vert \varphi \Vert_{L^p({\mathbb R}^n)}\ \Vert M_2 \lapms{s} \lapn v \Vert_{L^{p'}({\mathbb R}^n)}\\ &\aleq{}& r^s\ \Vert \lapms{s} \lapn v \Vert_{L^{p'}({\mathbb R}^n)}\\ &\overset{p' \geq 2}{\aleq{}}& r^s\ \Vert \abs{\cdot}^{-s} \brac{\lapn v}^\wedge \Vert_{L^{p,p'}({\mathbb R}^n)}\\ &\overset{p' \geq 2}{\aleq{}}& r^s\ \Vert \abs{\cdot}^{-s} \brac{\lapn v}^\wedge \Vert_{L^{p,2}({\mathbb R}^n)}\\ &\aleq{}& r^s\ \Vert \abs{\cdot}^{-s} \Vert_{L^{\frac{n}{s},\infty}}\ \Vert \brac{\lapn v}^\wedge \Vert_{L^2}\\ &\aleq{}& r^s\ \Vert \lapn v \Vert_{L^2}. \end{ma} \end{equation} Hence, as for any $k \geq 1$ we have $\operatorname{dist} (\operatorname{supp} \varphi,\operatorname{supp} \eta_{\Lambda r}^k) \ageq{} 2^{k}\Lambda r$, \[ \begin{ma} &&\abs{\int \limits M_1 \Delta^{\frac{s}{2}-\frac{n}{4}} (\eta^k_{\Lambda r} \lapn u) \ M_2 \Delta^{\frac{n}{4}-\frac{s}{2}}\ v\ \varphi}\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{s,M}}& (2^k \Lambda r)^{-n-s+\frac{n}{2}} \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^1}\ \Vert M_2 \Delta^{\frac{n}{4}-\frac{s}{2}}\ v\ \varphi\Vert_{L^1}\\ &\overset{\eqref{eq:lol:vplapmsnv}}{\aleq{s,M}}& (2^k \Lambda r)^{-\frac{n}{2}-s} \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^1}\ r^s\ \Vert \lapn v \Vert_{L^2}\\ &\aleq{}& (2^k \Lambda r)^{-s} \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^2}\ r^{s}\ \Vert \lapn\ v\Vert_{L^2}\ \\ &\aeq{}& 2^{-ks} \Lambda^{-s} \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^2}\ \Vert \lapn v\Vert_{L^2}. \end{ma} \] \end{proofL}
A different version of the same effect is the following Lemma. \begin{lemma}\label{la:lowerorderlocalest2} Let $s \in (0,\frac{n}{2})$ and $M_1, M_2$ be zero-multiplier operators. Then there is a constant $C_{M_1,M_2,s} > 0$ such that the following holds. For any $u,v \in {\mathcal{S}}$ and for any $\Lambda > 2$, $r > 0$, $B_r \equiv B_r(x) \subset {\mathbb R}^n$, \[ \begin{ma} &&\Vert M_1 \laps{s} u\ M_2\Delta^{\frac{n}{4}-\frac{s}{2}}\ v \Vert_{L^2(B_r(x))}\\ &\leq& C_{M_1,M_2,s}\ \Vert \eta_{\Lambda r,x}\lapn u \Vert_{L^2}\ \Vert \eta_{\Lambda r,x}\lapn v \Vert_{L^2}\\ &&+ C_{M_1,M_2,s}\ \Lambda^{-s}\ \Vert \eta_{\Lambda r,x} \lapn v \Vert_{L^2}\ \sum_{k=1}^\infty 2^{-sk} \Vert \eta_{\Lambda r,x}^k \lapn u \Vert_{L^2}\\ &&+ C_{M_1,M_2,s}\ \Lambda^{s-\frac{n}{2}}\ \Vert \eta_{\Lambda r,x} \lapn u \Vert_{L^2}\ \sum_{l=1}^\infty 2^{(s-\frac{n}{2})l} \Vert \eta_{\Lambda r,x}^l \lapn v \Vert_{L^2}\\ && + C_{M_1,M_2,s}\ \Lambda^{-\frac{n}{2}}\ \sum_{k,l = 1}^\infty 2^{-(ks + l (\frac{n}{2}-s))} \Vert \eta^k_{\Lambda r,x} \lapn u \Vert_{L^2} \ \Vert \eta^l_{\Lambda r,x} \lapn v \Vert_{L^2}. \end{ma} \] \end{lemma} \begin{proofL}{\ref{la:lowerorderlocalest2}} We have \[ \begin{ma}
&&M_1 \laps{s} u\ M_2\Delta^{\frac{n}{4}-\frac{s}{2}}\ v \\ &=& M_1\Delta^{\frac{s}{2}-\frac{n}{4}} \brac{\eta_{\Lambda r} \lapn u}\ M_2\Delta^{-\frac{s}{2}}\ \brac{\eta_{\Lambda r} \lapn v} \\ &&+ \sum_{k=1}^\infty M_1 \Delta^{\frac{s}{2}-\frac{n}{4}} \brac{\eta^k_{\Lambda r} \lapn u}\ M_2 \Delta^{-\frac{s}{2}}\ \brac{\eta_{\Lambda r} \lapn v} \\ &&+ \sum_{l=1}^\infty M_1\Delta^{\frac{s}{2}-\frac{n}{4}} \brac{\eta_{\Lambda r} \lapn u}\ M_2 \Delta^{-\frac{s}{2}}\ \brac{\eta_{\Lambda r}^l \lapn v} \\ &&+ \sum_{k,l=1}^\infty M_1\Delta^{\frac{s}{2}-\frac{n}{4}} \brac{\eta^k_{\Lambda r} \lapn u}\ M_2 \Delta^{-\frac{s}{2}}\ \brac{\eta_{\Lambda r}^l \lapn v} \\ &=& I + \sum_{k=1}^\infty II_k + \sum_{l=1}^\infty III_k + \sum_{k,l=1}^\infty IV_{k,l}. \end{ma} \] By Proposition~\ref{pr:lowerorderest}, \[
\Vert I \Vert_{L^2} \aleq{} \Vert \eta_{\Lambda r}\lapn u \Vert_{L^2}\ \Vert \eta_{\Lambda r} \lapn v \Vert_{L^2}. \] As in the proof of Lemma~\ref{la:lowerorderlocalest}, \[
\Vert II_k \Vert_{L^2(B_r)} \aleq{} 2^{-sk} \Lambda^{-s} \Vert \eta_{\Lambda r}^k \lapn u \Vert_{L^2}\ \Vert \eta_{\Lambda r} \lapn v \Vert_{L^2}, \] and \[
\Vert III_l \Vert_{L^2(B_r)} \aleq{} 2^{(s-\frac{n}{2})l} \Lambda^{s-\frac{n}{2}} \Vert \eta_{\Lambda r} \lapn u \Vert_{L^2}\ \Vert \eta_{\Lambda r}^l \lapn v \Vert_{L^2}. \] Finally, \[ \begin{ma}
\Vert IV_{k,l} \Vert_{L^2(B_r)} &\aleq{}& \brac{2^k \Lambda r}^{-s}\ \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^2} \ \Vert \lapms{s} \brac{\eta^l_{\Lambda r} \lapn v} \Vert_{L^2(B_r)}\\
&\aleq{}& \brac{2^k \Lambda r}^{-s}\ \brac{2^l \Lambda r}^{s-\frac{n}{2}}\ r^{\frac{n}{2}}\ \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^2} \ \Vert \eta^l_{\Lambda r} \lapn v \Vert_{L^2} \\ &\aleq{}& \Lambda^{-\frac{n}{2}}\ 2^{- (ks + l (\frac{n}{2}-s))}\ \Vert \eta^k_{\Lambda r} \lapn u \Vert_{L^2} \ \Vert \eta^l_{\Lambda r} \lapn v \Vert_{L^2}. \end{ma} \] \end{proofL}
\subsection{Fractional Product Rules for Polynomials} It is obvious that for any constant $c \in {\mathbb R}$ and any $\varphi \in {\mathcal{S}}$, $s > 0$, \[
\laps{s} (c\varphi ) = c\laps{s} \varphi. \] In this section, we are going to extend this kind of product rule to polynomials of degree greater than zero, which in our application will be mean value polynomials as in \eqref{eq:meanvalueszero}. As we have to deal with dimensions greater than one, our mean value polynomials will be also of order greater than zero, making such product rules important.
\begin{proposition}[Product Rule for Polynomials]\label{pr:lapsmonprod2} Let $N \in {\mathbb N}_0$, $s \geq N$. Then for any multiplier operator $M$ defined by \[ (M v)^\wedge = m v^\wedge, \quad \mbox{for any $v \in {\mathcal{S}}$}, \] for $m \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$ and homogeneous of order zero, there exists for every multiindex $\beta \in \brac{ {\mathbb N}_0}^n$, $\abs{\beta} \leq N$, a multiplier operator $M_\beta \equiv M_{\beta,s,N}$, $M_\beta = M$ if $\abs{\beta} = 0$, with multiplier $m_\beta \in C^\infty({\mathbb R}^n \backslash \{0\},{\mathbb C})$ also homogeneous of order zero such that the following holds. Let $Q = x^\alpha$ for some multiindex $\alpha \in \brac{{\mathbb N}_0}^n$, $\abs{\alpha} \leq N$. Then \begin{equation}\label{eq:lapsqvarphiclaim} M\laps{s} (Q\varphi) = \sum \limits_{\abs{\beta} \leq \abs{\alpha}} \partial^\beta Q\ M_\beta \laps{s-\abs{\beta}} \varphi \quad \mbox{for any $\varphi \in {\mathcal{S}}$}. \end{equation} Consequently, for any polynomial $P = \sum \limits_{\abs{\alpha} \leq N} c_\alpha x^\alpha$, \[ M\laps{s} (P\varphi) = \sum \limits_{\abs{\beta} \leq N} \partial^\beta P\ M_\beta \laps{s-\abs{\beta}} \varphi \quad \mbox{for any $\varphi \in {\mathcal{S}}$}. \] \end{proposition} \begin{proofP}{\ref{pr:lapsmonprod2}} The claim for $P$ follows immediately from the claim about $Q$, since both the left- and the right-hand side are linear in the space of polynomials.\\ We will prove the claim for $Q$ by induction on $N$, but first we make some preparatory observations. For an operator $M$ with multiplier $m$ as requested, for $\alpha \in ({\mathbb N}_0)^n$ a multiindex and $s \in {\mathbb R}$ set \[ m_{\alpha,s}(\xi) := \frac{1}{\brac{2\pi \mathbf{i}}^{\abs{\alpha}}}\abs{\xi}^{\abs{\alpha}-s}\ \partial^\alpha \brac{\abs{\xi}^s\ m(\xi)}, \quad \xi \in {\mathbb R}^n \backslash \{0\}, \] and let $M_{\alpha,s}$ be the corresponding operator with $m_{\alpha,s}$ as Fourier multiplier. 
In a slight abuse of this notation, for the standard unit multiindices we will write \[
M_{k,s} \equiv M_{\alpha_k,s} \quad \mbox{for $k \in \{1,\ldots,n\}$}, \] where $\alpha_k = (0,\ldots,0,1,0,\ldots,0)$ and the $1$ is exactly at the $k$th entry of $\alpha_k$.\\ Note that $m_{\alpha,s}(\cdot)$ is homogeneous of order zero. Indeed, the derivative of a function of zero homogeneity is homogeneous of order $-1$, which follows by taking the limit $h \to 0$ in the following identity, valid for any $i \in \{1,\ldots,n\}$, $\xi \in {\mathbb R}^n \backslash \{0\}$, $\lambda > 0$, $0 \neq h \in (-\abs{\xi},\abs{\xi})$: \[
\frac{m(\lambda (\xi + h e_i)) - m(\lambda \xi)}{\lambda h} = \lambda^{-1} \frac{m(\xi + h e_i) - m(\xi)}{h}. \] Also, we have the following relation for any $s \in {\mathbb R}$, \begin{equation}\label{eq:lapsmon:malphsrel} \brac{M_{\alpha,s}}_{\beta,s-\abs{\alpha}} = M_{\alpha+\beta,s}. \end{equation} Observe furthermore that \[ x_1 v(x) = -\frac{1}{2\pi \mathbf{i}} \brac{\partial_1 v^\wedge}^\vee(x), \] so for $s \geq 1$ \[ \begin{ma} &&\brac{M\laps{s} \brac{(\cdot)_1 v}}^\wedge(\xi)\\ &=& -\frac{1}{2\pi \mathbf{i}}\ m(\xi)\ \abs{\xi}^s \partial_1 v^\wedge(\xi)\\ &=& -\frac{1}{2\pi \mathbf{i}}\ \partial_1 (M\laps{s} v)^\wedge(\xi) + \frac{1}{2\pi \mathbf{i}} \partial_1 (m(\xi) \abs{\xi}^s)\ v^\wedge(\xi)\\ &=& -\frac{1}{2\pi \mathbf{i}}\ \partial_1 (M\laps{s} v)^\wedge(\xi) + \brac{M_{1,s} \laps{s-1} v}^\wedge(\xi), \end{ma} \] that is \begin{equation}\label{eq:MlapsQphi:it} M\laps{s} \brac{(\cdot)_1 v}(x) = x_1 M\laps{s} v + M_{1,s} \laps{s-1} v. \end{equation} This suggests that for $Q = x^\alpha$ for some multiindex $\alpha$, $\abs{\alpha} \leq s$, \begin{equation}\label{eq:MlapsQ:IV} M\laps{s} (Q\varphi) = \sum_{\abs{\beta} \leq s} \partial^\beta Q\ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi, \end{equation} where \[ \beta! := \beta_1!\ldots\beta_n!. \] This is of course true if $Q \equiv 1$. As induction hypothesis, fix $N > 0$ and assume \eqref{eq:MlapsQ:IV} to be true for any monomial $\tilde{Q}$ of degree at most $\tilde{N} < N$ whenever $s \geq \tilde{N}$ and $M$ is an operator with the desired properties. Let then $Q$ be a monomial of degree at most $N$, and assume $s \geq N$. We decompose w.l.o.g. $Q = x_1 \tilde{Q}$ for some monomial $\tilde{Q}$ of degree at most $N-1$. Then, \begin{equation}\label{eq:MlapsQphi:indfirststep} M\laps{s} (Q\varphi) \overset{\eqref{eq:MlapsQphi:it}}{=} x_1 M\laps{s} \brac{\tilde{Q}\varphi} + M_{1,s} \laps{s-1} \brac{\tilde{Q}\varphi}. 
\end{equation} For a multiindex $\beta = (\beta_1,\ldots,\beta_n) \in \brac{{\mathbb N}_0}^n$ let us set \[ \tau_1 (\beta) := (\beta_1+1,\beta_2,\ldots,\beta_n) \quad \mbox{and}\quad \tau_{-1} (\beta) := (\beta_1-1,\beta_2,\ldots,\beta_n). \] Observe that \begin{equation}\label{eq:MlapsQphi:pbxq} \partial^\beta (x_1 Q) = \beta_1 \partial^{\tau_{-1}(\beta)} Q + x_1 \partial^\beta Q. \end{equation} Applying now in \eqref{eq:MlapsQphi:indfirststep} the induction hypothesis \eqref{eq:MlapsQ:IV} on $M\laps{s}$ and $M_{1,s} \laps{s-1}$, we have \[ \begin{ma} M\laps{s} (Q\varphi) &=& x_1 \sum_{\abs{\beta} \leq s} \partial^\beta \tilde{Q}\ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\ && + \sum_{\abs{\tilde{\beta}} \leq s-1} \partial^{\tilde{\beta}} \tilde{Q}\ \frac{1}{\tilde{\beta}!} \brac{M_{1,s}}_{\tilde{\beta},s-1}\ \laps{s-(\abs{\tilde{\beta}}+1)} \varphi\\
&\overset{\eqref{eq:lapsmon:malphsrel}}{=}& \sum_{\abs{\beta} \leq s} x_1 \partial^\beta \tilde{Q}\ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\ && + \sum_{\abs{\tilde{\beta}} \leq s-1} \partial^{\tilde{\beta}} \tilde{Q}\ \frac{1}{\tilde{\beta}!} \brac{M_{\tau_1(\tilde{\beta}),s}}\ \laps{s-\abs{\tau_1(\tilde{\beta})}} \varphi.\\
\end{ma}
\] Next, applying \eqref{eq:MlapsQphi:pbxq} to the first sum, the right-hand side equals
\[
\begin{ma}
&=& \sum_{\abs{\beta} \leq s} \partial^\beta \brac{x_1 \tilde{Q}} \ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\
&&-\sum_{\ontop{\abs{\beta} \leq s}{\beta_1 \geq 1}} \partial^{\tau_{-1} (\beta)} \tilde{Q} \ \frac{\beta_1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\
&&+ \sum_{\abs{\tilde{\beta}} \leq s-1} \partial^{\tilde{\beta}} \tilde{Q}\ \frac{1}{\tilde{\beta}!}\ M_{\tau_1(\tilde{\beta}),s}\ \laps{s-\abs{\tau_1(\tilde{\beta})}} \varphi\\
&=& \sum_{\abs{\beta} \leq s} \partial^\beta \brac{x_1 \tilde{Q}} \ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\
&&-\sum_{\ontop{\abs{\beta} \leq s}{\beta_1 \geq 1}} \partial^{\tau_{-1} (\beta)} \tilde{Q} \ \frac{1}{\tau_{-1}(\beta)!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi\\
&&+ \sum_{\abs{\tilde{\beta}} \leq s-1} \partial^{\tilde{\beta}} \tilde{Q}\ \frac{1}{\tilde{\beta}!}\ M_{\tau_1(\tilde{\beta}),s}\ \laps{s-\abs{\tau_1(\tilde{\beta})}} \varphi\\
&=& \sum_{\abs{\beta} \leq s} \partial^\beta \brac{x_1 \tilde{Q}} \ \frac{1}{\beta!} M_{\beta,s}\ \laps{s-\abs{\beta}} \varphi.
\end{ma} \] In the last step, the second and the third sum cancel after the substitution $\beta = \tau_1(\tilde{\beta})$. This establishes \eqref{eq:MlapsQ:IV}; the claim \eqref{eq:lapsqvarphiclaim} then follows by setting $M_\beta := \frac{1}{\beta!} M_{\beta,s}$ and noting that $\partial^\beta Q = 0$ whenever $\abs{\beta} > \abs{\alpha}$.
\end{proofP}
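As a consistency check of \eqref{eq:MlapsQ:IV} (not needed in what follows), one may iterate \eqref{eq:MlapsQphi:it} twice for $Q = x_1^2$ and $s \geq 2$, using \eqref{eq:lapsmon:malphsrel} to combine the two first-order multiplier operators:
\[
 M\laps{s}(x_1^2 \varphi) = x_1^2\ M\laps{s}\varphi + 2x_1\ M_{1,s}\laps{s-1}\varphi + M_{(2,0,\ldots,0),s}\laps{s-2}\varphi.
\]
This is exactly \eqref{eq:MlapsQ:IV} for $Q = x_1^2$: indeed, $\partial_1 Q = 2x_1$, $\partial_1^2 Q = 2$, and $\frac{1}{(2,0,\ldots,0)!}\ \partial_1^2 Q = 1$.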
\begin{proposition}\label{pr:pvarphiest} There is a uniform constant $C > 0$ such that the following holds: Let $u \in {\mathcal{S}}$ and $P$ any polynomial of degree at most $N := \lceil \frac{n}{2} \rceil-1$. Then for any $\Lambda > 2$, $B_r(x_0) \subset {\mathbb R}^n$, $\varphi \in C_0^\infty(B_{r}(x_0))$, $\Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)} \leq 1$, \[ \begin{ma} &&\Vert \lapn (P \varphi) - P \lapn \varphi \Vert_{L^2(B_{r}(x_0))}\\ &\leq& C\ \brac{\Vert \lapn \brac{\eta_{\Lambda r,x_0}(u-P)} \Vert_{L^2({\mathbb R}^n)} + \Vert \lapn u \Vert_{L^2(B_{2\Lambda r}(x_0))}}\\ && + C\ \Lambda^{-1} \sum \limits_{k=1}^\infty 2^{-k} \Vert \eta_{\Lambda r,x_0}^k \lapn u \Vert_{L^2({\mathbb R}^n)}. \end{ma} \] \end{proposition} \begin{proofP}{\ref{pr:pvarphiest}} By Proposition~\ref{pr:lapsmonprod2} (where we take $M$ the identity and $s = \frac{n}{2}$) \[ \lapn (P \varphi) - P \lapn \varphi = \sum \limits_{1 \leq \abs{\beta} \leq N} \partial^\beta P\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \varphi. \] As we estimate the $L^2$-norm on $B_{r}$, where $\eta_{\Lambda r} \equiv 1$, we may further rewrite \[ \begin{ma} \lapn (P \varphi) - P \lapn \varphi &=& -\sum \limits_{1 \leq \abs{\beta} \leq N} \partial^\beta (\eta_{\Lambda r}(u-P)) M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \varphi\\ &&+ \sum \limits_{1 \leq \abs{\beta} \leq N} \partial^\beta u\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \varphi\\ &=:& \sum \limits_{1 \leq \abs{\beta} \leq N} (I_\beta + II_\beta) \qquad \mbox{on $B_r(x_0)$}. \end{ma} \] As $1 \leq \abs{\beta} \leq N < \frac{n}{2}$, we have by Lemma~\ref{la:lowerorderlocalest} for $v = \varphi$ \[ \begin{ma}
\Vert II_\beta \Vert_{L^2(B_{r})} &\aleq{}& \Vert \lapn u \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{-\abs{\beta}}\sum \limits_{k=1}^\infty 2^{-k\abs{\beta}} \Vert \eta_{\Lambda r}^k \lapn u \Vert_{L^2}\\ &\leq&\Vert \lapn u \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{-1}\sum \limits_{k=1}^\infty 2^{-k} \Vert \eta_{\Lambda r}^k \lapn u \Vert_{L^2}. \end{ma} \] We can write \[
I_\beta = M_\beta \Delta^{\frac{2\abs{\beta}-n}{4}}\ \lapn (\eta_{\Lambda r} (u-P))\ M_{\beta} \lapms{\abs{\beta}} \lapn \varphi \] and by Proposition~\ref{pr:lowerorderest} applied to $\lapn (\eta_{\Lambda r} (u-P))$ and $\lapn \varphi$ for $s = \abs{\beta}$ \[
\Vert I_\beta \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert \lapn (\eta_{\Lambda r} (u-P)) \Vert_{L^2({\mathbb R}^n)}. \] \end{proofP} \section{Local Estimates and Compensation Phenomena: Proof of Theorem~\ref{th:localcomp}}\label{sec:localTart} Theorem~\ref{th:localcomp} is essentially a consequence of the following two results.
\begin{lemma}\label{la:hvphilocest} There is a uniform constant $C > 0$ such that for any ball $B_r(x_0) \subset {\mathbb R}^n$, $\varphi \in C_0^\infty(B_{r}(x_0))$, $\Vert \lapn \varphi \Vert_{L^2} \leq 1$, and $\Lambda > 4$ as well as for any $v \in \Hf({\mathbb R}^n)$, \[ \Vert H(v,\varphi) \Vert_{L^2(B_{r}(x_0))} \leq C\ \left ([v]_{B_{4\Lambda r}(x_0),\frac{n}{2}} + \Vert \lapn v \Vert_{L^2(B_{2\Lambda r}(x_0))}+ \Lambda^{-\frac{1}{2}} \Vert \lapn v \Vert_{L^2({\mathbb R}^n)} \right ). \] \end{lemma} \begin{proofL}{\ref{la:hvphilocest}} We have for almost every point in $B_r \equiv B_r(x_0)$, \[ \begin{ma} H(v,\varphi) &=& \lapn (v \varphi) - v \lapn \varphi - \varphi \lapn v \\ &=& \lapn (\eta_{\Lambda r} v \varphi) - \eta_{\Lambda r} v \lapn \varphi - \varphi \lapn \left (\eta_{\Lambda r} v + (1-\eta_{\Lambda r}) v\right )\\ &=& I - II - III. \end{ma} \] Then we rewrite for a polynomial $P$ of order $\lceil \frac{n}{2} \rceil-1$ which we will choose below, using again that the support of $\varphi$ lies in $B_r$, so $\varphi \eta_{\Lambda r} = \varphi$ on ${\mathbb R}^n$, \[ I = \lapn (\eta_{\Lambda r} (v-P) \varphi) + \lapn \brac{P \varphi}, \] \[ II = \eta_{\Lambda r} (v-P) \lapn \varphi + P \lapn \varphi, \] \[ III = \varphi \lapn (\eta_{\Lambda r} (v-P)) + \varphi \lapn (\eta_{\Lambda r} P) + \varphi \lapn ((1-\eta_{\Lambda r}) v ). \] Thus, \[
I-II-III = \widetilde{I} + \widetilde{II} - \widetilde{III}, \] where \[ \begin{ma} \widetilde{I} &=& H(\eta_{\Lambda r} (v-P),\varphi),\\ \widetilde{II} &=& \lapn (P \varphi) - P \lapn \varphi,\\ \widetilde{III} &=& \varphi \lapn (P+(1-\eta_{\Lambda r}) (v-P) ).\\ \end{ma} \] Theorem~\ref{th:integrability} implies \[
\Vert \widetilde{I} \Vert_{L^2({\mathbb R}^n)} \aleq{} \Vert \lapn (\eta_{\Lambda r} (v-P) ) \Vert_{L^2}, \] Proposition~\ref{pr:pvarphiest} states for $u = v$ and $s = \frac{n}{2}$ that \[ \begin{ma} && \Vert \widetilde{II} \Vert_{L^2(B_r)}\\ &\aleq{}& \Vert \lapn \eta_{\Lambda r}(v-P) \Vert_{L^2({\mathbb R}^n)} + \Vert \lapn v \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{-1} \sum \limits_{k=1}^\infty 2^{-k} \Vert \eta_{\Lambda r}^k \lapn v \Vert_{L^2({\mathbb R}^n)}\\ &\aleq{}& \Vert \lapn \eta_{\Lambda r}(v-P) \Vert_{L^2({\mathbb R}^n)} + \Vert \lapn v \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{-1} \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}.\\ \end{ma} \] It remains to estimate $\widetilde{III}$. Choose $P$ to be the polynomial such that $v-P$ satisfies the mean value condition \eqref{eq:meanvalueszero} for $N = \lceil \frac{n}{2} \rceil - 1$ and in $B_{2\Lambda r}(x_0)$.\\ We have to estimate for $\psi \in C_0^\infty(B_r)$, $\Vert \psi \Vert_{L^2} \leq 1$, \[ \int \limits \widetilde{III} \psi = \int \limits \psi \varphi\ \lapn (P+(1-\eta_{\Lambda r})(v-P)). \] Note that \[
P+(1-\eta_{\Lambda r})(v-P) = \eta_{\Lambda r} P + (1-\eta_{\Lambda r}) v \in {\mathcal{S}}({\mathbb R}^n), \] so we can write \[ \begin{ma} \int \limits \widetilde{III} \psi &=& \int \limits \lapn (\psi \varphi)\ \brac{P+(1-\eta_{\Lambda r})(v-P)}\\ &=& \lim_{R \to \infty} \int \limits \lapn (\psi \varphi)\eta_R P + \int \limits \lapn (\psi \varphi) (1-\eta_{\Lambda r})(v-P). \end{ma} \] By Remark \ref{rem:cutoffPolbdd} we have \[
\int \limits \lapn (\psi \varphi)\eta_R P = o(1) \quad \mbox{for $R \to \infty$}, \] so in fact we only have to estimate for any $R > 1$ \[ \begin{ma}
&&\sum \limits_{k=1}^\infty \int \limits \psi\ \varphi\ \lapn (\eta_R \eta_{\Lambda r}^k (v-P))\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& \sum \limits_{k=1}^\infty (2^k \Lambda r)^{-\frac{3}{2} n}\ \Vert \varphi \Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^1}\\ &\overset{\sref{L}{la:poinc}}{\aleq{}}& \sum \limits_{k=1}^\infty (2^k \Lambda)^{-n} r^{-\frac{n}{2}}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}\\ &=& \Lambda^{-\frac{n}{2}}\ \sum \limits_{k=1}^\infty 2^{-\frac{n}{2} k}\ \brac{2^k \Lambda r}^{-\frac{n}{2}}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}\\ &\overset{\sref{P}{pr:estetarkvmp}}{\aleq{}}& \Lambda^{-\frac{n}{2}}\ \sum \limits_{k=1}^\infty 2^{-k\frac{n}{2}}(1+k)\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}\\ &\aleq{}& \Lambda^{-\frac{n}{2}} \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}\\ &\leq& \Lambda^{-\frac{1}{2}} \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}.\\ \end{ma} \] In order to finish the whole proof it is then only necessary to apply Lemma~\ref{la:poincmv} which implies that \[
\Vert \lapn \brac{\eta_{\Lambda r} (v-P)} \Vert_{L^2} \aleq{} [v-P]_{B_{4\Lambda r},\frac{n}{2}} = [v]_{B_{4\Lambda r},\frac{n}{2}}. \] Collecting the estimates for $\widetilde{I}$, $\widetilde{II}$ and $\widetilde{III}$, and using that $\Lambda^{-1} \leq \Lambda^{-\frac{1}{2}}$ and $\Lambda^{-\frac{n}{2}} \leq \Lambda^{-\frac{1}{2}}$ for $\Lambda > 4$, the claim follows. \end{proofL}
\begin{lemma}\label{la:hwwlocest} For any $v \in H^{\frac{n}{2}}({\mathbb R}^n)$, $\varepsilon \in (0,1)$, there exist $\Lambda > 0$, $R > 0$, $\gamma > 0$ such that for all $x_0 \in {\mathbb R}^n$, $r < R$ \[ \begin{ma}
&&\Vert H(v,v) \Vert_{L^2(B_r(x_0))}\\ &\leq& \varepsilon \brac{ [v]_{B_{4\Lambda r},\frac{n}{2}} + \Vert \lapn v \Vert_{L^2(B_{4\Lambda r})} }\\ &&+ C\ \Lambda^{\frac{1}{2}} \brac{ \sum_{k=1}^\infty 2^{-\gamma k} \Vert \lapn v \Vert_{L^2(A_k)} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [v]_{A_k,\frac{n}{2}} }. \end{ma} \] Here we set $A_k := B_{2^{k+4}4\Lambda r} \backslash B_{2^{k-1} r}$. \end{lemma} \begin{proof} Let $\delta := \varepsilon \tilde{\delta} \in (0,1)$, where $\tilde{\delta} \in (0,1)$ is a uniform constant whose value will be chosen later. Pick $\Lambda > 10$ depending on $\delta$ and $v$ such that \begin{equation}\label{eq:loc:Lambdalapnvsmall}
\Lambda^{-\frac{1}{2}}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)} \leq \delta. \end{equation} Depending on $\delta$ and $\Lambda$ choose $R > 0$ so small such that \begin{equation}\label{eq:loc:vbLambdasmall}
[v]_{B_{10\Lambda r}(x_0),\frac{n}{2}} + \Vert \lapn v \Vert_{L^2(B_{10\Lambda r}(x_0))} \leq \delta,\quad \mbox{for all $x_0 \in {\mathbb R}^n$, $r < R$}. \end{equation}
We can assume that $v \in C_0^\infty({\mathbb R}^n)$. In fact, by Lemma~\ref{la:Tartar07:Lemma15.10} we can approximate $v$ in $\Hf({\mathbb R}^n)$ by $v_k \in C_0^\infty({\mathbb R}^n)$ such that \eqref{eq:loc:Lambdalapnvsmall} and \eqref{eq:loc:vbLambdasmall} are fulfilled for every $v_k$ with $2\delta$ in place of $\delta$; as $\delta = \varepsilon \tilde{\delta}$, this is still a uniform constant depending \emph{only} on $\varepsilon$. Here one uses that \[
[v_k-v]_{{\mathbb R}^n,\frac{n}{2}} \overset{\sref{P}{pr:equivlaps}}{=} \Vert \lapn (v_k - v) \Vert_{L^2} \xrightarrow{k \to \infty} 0. \] By Theorem~\ref{th:integrability} and the bilinearity of $H(\cdot,\cdot)$, \[
\Vert H(v_k,v_k) - H(v,v)\Vert_{L^2({\mathbb R}^n)} \xrightarrow{k \to \infty} 0. \] So both sides of the claim for $v_k$ converge to the respective sides of the claim for $v$, whereas the constants stay the same.\\
From now on let $r \in (0,R)$ and $x_0 \in {\mathbb R}^n$ be arbitrarily fixed and denote $B_r \equiv B_r(x_0)$. Set $P \equiv P_\Lambda \equiv P_{B_{2\Lambda r}}(v)$ the polynomial of degree $N := \lceil \frac{n}{2} \rceil - 1$ such that the mean value condition \eqref{eq:meanvalueszero} holds on $B_{2\Lambda r}(x_0)$. We denote $\eta_{\Lambda r} \equiv \eta_{\Lambda r, x_0}$ and $\tilde{\eta}_\rho := \eta_{\rho,0}$.\\ As $P$ is not a function in ${\mathcal{S}}({\mathbb R}^n)$, we ``approximate'' it by $P^\rho := \tilde{\eta}_{\rho} P$, $\rho > \rho_0$ where we choose $\rho_0 > 2\max\{2\Lambda r+\abs{x_0},1\}$ such that $B_{\frac{1}{2}\rho_0}(0) \supset \operatorname{supp} v$. Note that in particular, we only work with $\rho > 0$ such that \[
\tilde{\eta}_\rho \equiv 1 \quad \mbox{on $\operatorname{supp} \eta_{2\Lambda r,x_0} \cup \operatorname{supp} v$, for all $\rho > \rho_0$}. \] Then, \begin{equation} \label{eq:hvvest:v1}
v = \tilde{\eta}_\rho v = \eta_{\Lambda r} (v- P) + \tilde{\eta}_\rho(1-\eta_{\Lambda r}) (v-P) + P^\rho =: v_\Lambda + v^\rho_{-\Lambda} + P^\rho. \end{equation} Observe that all three terms on the right-hand side belong to ${\mathcal{S}}({\mathbb R}^n)$.
We have \begin{equation}\label{eq:hvvest:v2}
v^2 = (v_\Lambda)^2 + (v_{-\Lambda}^\rho)^2 + \left (P^\rho \right )^2 + 2 v_\Lambda\ v^\rho_{-\Lambda} + 2 \left (v_\Lambda+ v^\rho_{-\Lambda}\right )\ P^\rho. \end{equation} As we want to estimate $H(v,v)$ on $B_r \equiv B_r(x_0)$, we are going to rewrite $H(v,v)\varphi$ for an arbitrary $\varphi \in C_0^\infty(B_r)$, such that $\Vert \varphi \Vert_{L^2({\mathbb R}^n)} \leq 1$. For any $\rho > \rho_0$ (with the goal of letting $\rho \to \infty$ in the end), we will use the following facts \[
\varphi P^\rho = \varphi P,\quad v_\Lambda P^\rho = v_\Lambda P, \quad \varphi v^\rho_{-\Lambda} = 0. \] Now we start the rewriting process: \[ \begin{ma}
&&H(v,v) \varphi\\ &=& \left ( \lapn \brac{v^2} - 2 v \lapn v \right ) \varphi\\
&\overset{\eqref{eq:hvvest:v2}}{=}& \Big ( \lapn (v_\Lambda)^2 + \lapn (v^\rho_{-\Lambda})^2 + \lapn \left (P^\rho \right )^2 \\ &&+ 2 \lapn \left (v_\Lambda\ v^\rho_{-\Lambda}\right ) + 2 \lapn \left (\left (v_\Lambda+ v^\rho_{-\Lambda}\right )\ P^\rho\right )\\ && - 2 v_\Lambda \lapn v_\Lambda - 2 v_\Lambda \lapn v^\rho_{-\Lambda} - 2 v_\Lambda \lapn P^\rho\\ && - 2 P^\rho \lapn \brac{v_\Lambda + v^\rho_{-\Lambda}} - 2 P^\rho \lapn P^\rho \Big ) \varphi\\
&=& H(v_\Lambda,v_\Lambda) \varphi\\ &&+ 2 \left (\lapn \left (\left (v_\Lambda+ v^\rho_{-\Lambda}\right )\ P^\rho\right ) - P\ \lapn \left (v_\Lambda + v^\rho_{-\Lambda} \right ) \right ) \varphi\\ &&+ \brac{\lapn \left (P^\rho \right )^2 }\varphi\\ &&+ \left (\lapn (v_{-\Lambda}^\rho)^2 + 2 \lapn \left (v_\Lambda\ v_{-\Lambda}^\rho\right ) - 2 v_\Lambda \lapn v_{-\Lambda}^\rho\right )\varphi\\ && - 2\left ( P\ \lapn P^\rho + v_{\Lambda} \lapn P^\rho \right )\varphi.\\ \end{ma} \] Now we add and subtract terms that vanish for $\rho \to \infty$, and arrive at \[ \begin{ma}
&&H(v,v) \varphi\\ &=& H(v_\Lambda,v_\Lambda) \varphi\\ &&+\ 2\ \left (\lapn \left (\left (v_\Lambda+ v^\rho_{-\Lambda}\right )\ P\right ) - P\ \lapn \left (v_\Lambda + v^\rho_{-\Lambda} \right ) \right ) \varphi\\ &&+\brac{\lapn \brac{\brac{\tilde{\eta}_\rho}^2 P P} - P \lapn \brac{\brac{\tilde{\eta}_\rho}^2 P} }\varphi\\ &&+ \left (\lapn (v_{-\Lambda}^\rho)^2 + 2 \lapn \left (v_\Lambda\ v_{-\Lambda}^\rho\right ) - 2 v_\Lambda \lapn v_{-\Lambda}^\rho\right )\varphi\\ && +\left (P\ \lapn \brac{\brac{\tilde{\eta}_\rho}^2P} - 2\ P\ \lapn P^\rho - 2\ v_{\Lambda} \lapn P^\rho \right ) \varphi\\ && +\ 2\ \lapn \brac{v^\rho_{-\Lambda}(\tilde{\eta}_\rho -1)P}\ \varphi\\ &=:& \left (I + II + III + IV + V + VI\right ) \varphi. \end{ma} \] First we treat the terms $V$ and $VI$ which will be the parts vanishing for $\rho \to \infty$. \underline{As for $V$}, we have by Remark \ref{rem:cutoffPolbdd} using also that $\rho > 1$, \[
\Vert \lapn P^\rho \Vert_{L^\infty({\mathbb R}^n)} \leq C_{r,\Lambda,v,x_0}\ \rho^{N-\frac{n}{2}} \leq C_{r,\Lambda,v,x_0} \rho^{-\frac{1}{2}}, \] and by an analogous method one can see that the following holds, too: \[ \Vert \lapn \brac{\brac{\tilde{\eta}_\rho}^2P} \Vert_{L^\infty({\mathbb R}^n)} \leq C_{r,\Lambda,v,x_0}\ \rho^{N-\frac{n}{2}} \leq C_{r,\Lambda,v,x_0} \rho^{-\frac{1}{2}}. \] Consequently, \[
\Vert V \Vert_{L^2(B_r)} \leq C_{r,x_0,v,\Lambda} \rho^{-\frac{1}{2}}. \] Next, \underline{as for $VI$}, the product rule for polynomials, Proposition~\ref{pr:lapsmonprod2}, applied with $M = Id$ and with the Schwartz function $v_{-\Lambda}^\rho (\tilde{\eta}_\rho -1) \in {\mathcal{S}}({\mathbb R}^n)$ in place of $\varphi$, implies that for certain multiplier operators $M_\beta$ with zero-homogeneous symbols, \[
\lapn \brac{v^\rho_{-\Lambda}(\tilde{\eta}_\rho -1)P} = \sum_{\abs{\beta} \leq N} \partial^\beta P\ M_{\beta} \Delta^{\frac{n-2\abs{\beta}}{4}} \brac{v^\rho_{-\Lambda}(\tilde{\eta}_\rho -1)}. \] As a consequence, using that $P$ is a polynomial with coefficients depending on $\Lambda, r, v, x_0$, \[
\Vert VI \Vert_{L^2(B_r)} \leq C_{v,r,x_0,\Lambda} \sum_{\abs{\beta} \leq N} \Vert M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \brac{v^\rho_{-\Lambda}(\tilde{\eta}_\rho -1)} \Vert_{L^2(B_r)}. \] Now we use the disjoint support lemma, Lemma~\ref{la:bs:disjsuppGen}, to estimate for some $k_0 = k_0(\rho,x_0,\Lambda) \geq 1$ tending to $\infty$ as $\rho \to \infty$, \[ \begin{ma}
&&\Vert M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \brac{v^\rho_{-\Lambda}(\tilde{\eta}_\rho -1)} \Vert_{L^2(B_r)}\\ &\leq& \sum_{k=k_0}^\infty \Vert M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \brac{\eta_{\Lambda r,x_0}^k (v-P) \brac{\tilde{\eta}_\rho (1-\tilde{\eta}_\rho)}} \Vert_{L^2(B_r)}\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\leq}& C_{r,\Lambda} \sum_{k=k_0}^\infty 2^{-k(n - \abs{\beta})} \Vert \brac{\eta_{\Lambda r,x_0}^k (v-P)} \Vert_{L^2({\mathbb R}^n)}\\ &\overset{\sref{P}{pr:estetarkvmp}}{\leq}& C_{r,\Lambda} \sum_{k=k_0}^\infty 2^{-k(\frac{n}{2} - N)} (1+\abs{k})\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}. \end{ma} \] As $N < \frac{n}{2}$, we have proven that \[
\Vert V \Vert_{L^2(B_r(x_0))} + \Vert VI \Vert_{L^2(B_r(x_0))} = o(1) \quad \mbox{for $\rho \to \infty$}. \] Next, \underline{we treat $I$}. By Theorem~\ref{th:integrability} and Lemma~\ref{la:poincmv} we have \[
\Vert I \Vert_{L^2(B_r)} \aleq{} \Vert \lapn v_\Lambda \Vert_{L^2({\mathbb R}^n)}^2 \aleq{} \brac{[v]_{B_{4\Lambda r},\frac{n}{2}}}^2 \overset{\eqref{eq:loc:vbLambdasmall}}{\aleq{}} \delta\ [v]_{B_{4\Lambda r},\frac{n}{2}}. \] \underline{As for $II$}, by Proposition~\ref{pr:lapsmonprod2}, for any $w \in {\mathcal{S}}({\mathbb R}^n)$ \[
\begin{ma} &&\varphi \brac{\lapn (w\ P) - P \lapn w}\\ &=& \varphi \ \sum_{1 \leq \abs{\beta} \leq N} \partial^\beta P\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} w\\ &\overset{\operatorname{supp} \varphi}{=}& \varphi \sum_{1 \leq \abs{\beta} \leq N} \brac{\partial^\beta \brac{\eta_{\Lambda r}(P-v)}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} w + \partial^\beta v\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} w},\\ \end{ma} \] so \[
\Vert II \Vert_{L^2(B_r)} \leq \sum_{1 \leq \abs{\beta} \leq N} II^\beta_{1,\Lambda} + II^\beta_{2,\Lambda} + II^\beta_{1,-\Lambda} + II^\beta_{2,-\Lambda}, \] where \[ \begin{ma}
II^\beta_{1,\Lambda} &=& \Vert \partial^\beta \brac{\eta_{\Lambda r}(P-v)}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v_\Lambda \Vert_{L^2(B_r)}\\ &=& \Vert \partial^\beta v_\Lambda\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v_\Lambda \Vert_{L^2(B_r)},\\
II^\beta_{2,\Lambda} &=& \Vert \partial^\beta v\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v_\Lambda \Vert_{L^2(B_r)},\\
II^\beta_{1,-\Lambda} &=& \Vert \partial^\beta \brac{\eta_{\Lambda r}(P-v)}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v^\rho_{-\Lambda} \Vert_{L^2(B_r)}\\
&=&\Vert \partial^\beta v_\Lambda\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v^\rho_{-\Lambda} \Vert_{L^2(B_r)},\\ II^\beta_{2,-\Lambda} &=& \Vert \partial^\beta v\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v^\rho_{-\Lambda} \Vert_{L^2(B_r)}. \end{ma} \] Observe that all the operators involved are of order strictly between $(0,\frac{n}{2})$. Consequently, by Proposition~\ref{pr:lowerorderest} and Poincar\'e's inequality, Lemma~\ref{la:poincmv}, \[ \begin{ma}
II^\beta_{1,\Lambda} &\aleq{}& \Vert \lapn \brac{\eta_{\Lambda r}(P-v)} \Vert_{L^2({\mathbb R}^n)}\ \Vert \lapn v_\Lambda \Vert_{L^2({\mathbb R}^n)}\\ &\aleq{}& \brac{[v]_{B_{4\Lambda r},\frac{n}{2}}}^2\\ &\overset{\eqref{eq:loc:vbLambdasmall}}{\aleq{}}& \delta\ [v]_{B_{4\Lambda r},\frac{n}{2}}. \end{ma} \] By Lemma~\ref{la:lowerorderlocalest} and Poincar\'e's inequality, Lemma~\ref{la:poincmv}, \[ \begin{ma}
II^\beta_{2,\Lambda} &\aleq{}& \Vert \lapn v_\Lambda \Vert_{L^2} \brac{\Vert \lapn v \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{\abs{\beta}-\frac{n}{2}}\sum_{k=1}^\infty 2^{-\frac{k}{2}} \Vert \eta_{\Lambda r}^{k} \lapn v \Vert_{L^2}}\\ &\aleq{}& [v]_{B_{4\Lambda r},\frac{n}{2}}\ \brac{\Vert \lapn v \Vert_{L^2(B_{4\Lambda r})} + \Lambda^{-\frac{1}{2}} \Vert \lapn v \Vert_{L^2}}\\ &\overset{\ontop{\eqref{eq:loc:vbLambdasmall}}{\eqref{eq:loc:Lambdalapnvsmall}}}{\aleq{}}& \delta\ \brac{\Vert \lapn v \Vert_{L^2(B_{4\Lambda r})} + [v]_{B_{4\Lambda r},\frac{n}{2}}}. \end{ma} \] As for $II^\beta_{2,-\Lambda}$ and $II^\beta_{1,-\Lambda}$, we estimate for any $w \in {\mathcal{S}}({\mathbb R}^n)$, \[ \begin{ma} &&\Vert \partial^\beta w\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v^\rho_{-\Lambda} \Vert_{L^2(B_r)}\\ &\aleq{}& \sum_{k=1}^\infty \Vert \partial^\beta\lapmn \brac{\eta_{4 r} \lapn w}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \eta_{\Lambda r}^k (v-P) \tilde{\eta}_\rho \Vert_{L^2(B_r)}\\ &&+ \sum_{l,k=1}^\infty \Vert \partial^\beta\lapmn \brac{\eta_{4 r}^l \lapn w}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \eta_{\Lambda r}^k (v-P) \tilde{\eta}_\rho \Vert_{L^2(B_r)}\\ &=:& \Sigma_1+ \Sigma_2. \end{ma} \] We first concentrate on $\Sigma_1$. As before, by Lemma~\ref{la:bs:disjsuppGen} and using that $1 \leq \abs{\beta} < \frac{n}{2}$, \[ \begin{ma}
&&\Vert \partial^\beta\lapmn \brac{\eta_{4 r} \lapn w}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \eta_{\Lambda r}^k (v-P) \tilde{\eta}_\rho \Vert_{L^2(B_r)}\\ &\aleq{}& \brac{2^k \Lambda r}^{-\frac{3}{2}n+\abs{\beta}} \Vert \partial^\beta\lapmn \brac{\eta_{4 r} \lapn w}\Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^1}\\ &\overset{\sref{L}{la:lapmsest2}}{\aleq{}}& \brac{2^k \Lambda r}^{-n+\abs{\beta}} (4 r)^{\frac{n}{2}-\abs{\beta}} \Vert \eta_{4 r} \lapn w\Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}\\ &\aeq{}& \Lambda^{\abs{\beta}-\frac{n}{2}}\ \Vert \eta_{4 r} \lapn w \Vert_{L^2}\ 2^{(\abs{\beta}-n)k}\ (\Lambda r)^{-\frac{n}{2}} \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}. \end{ma} \] Thus, by Proposition~\ref{pr:estetarkvmp} and as $\abs{\beta} < \frac{n}{2}$ (making $\sum_{k > 0} k\ 2^{-k(\frac{n}{2}-\abs{\beta})}$ convergent), \[ \begin{ma} \Sigma_1&\aleq{}& \Lambda^{\abs{\beta}-\frac{n}{2}} \Vert \eta_{4 r} \lapn w\Vert_{L^2}\ \Vert \lapn v \Vert_{L^2}\\ &\aleq{}& \Lambda^{-\frac{1}{2}} \Vert \lapn w \Vert_{L^2(B_{4\Lambda r})}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}\\ &\overset{\eqref{eq:loc:Lambdalapnvsmall}}{\aleq{}}& \delta\ \Vert \lapn w \Vert_{L^2(B_{4\Lambda r})}. 
\end{ma} \] For the estimate of $\Sigma_2$ we observe \[ \begin{ma} &&\Vert \partial^\beta\lapmn \brac{\eta_{4 r}^l \lapn w}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \eta_{\Lambda r}^k (v-P) \tilde{\eta}_\rho \Vert_{L^2(B_r)}\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& (2^l r)^{-\frac{n}{2}-\abs{\beta}}\ \Vert \brac{\eta_{4 r}^l \lapn w} \Vert_{L^1}\ \Vert M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} \eta_{\Lambda r}^k (v-P) \tilde{\eta}_\rho\Vert_{L^2(B_r)}\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& (2^l r)^{-\frac{n}{2}-\abs{\beta}}\ \Vert \brac{\eta_{4 r}^l \lapn w} \Vert_{L^1}\ \brac{2^k \Lambda r}^{-\frac{3}{2}n+\abs{\beta}} \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^1}\ r^{\frac{n}{2}}\\ &\aleq{}& r^{-\frac{n}{2}}\ 2^{-\abs{\beta}l}\ \Vert \brac{\eta_{4 r}^l \lapn w} \Vert_{L^2}\ \brac{2^k \Lambda}^{-n+\abs{\beta}} \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}. \end{ma} \] Summing first over $k$ and then over $l$, using again Proposition~\ref{pr:estetarkvmp} and that $\abs{\beta} \in [1,N]$, \[ \begin{ma} \Sigma_2 &\aleq{}& \Lambda^{-\frac{n}{2}+N}\ \sum_{l=1}^\infty 2^{-l} \Vert \eta_{4 r}^l \lapn w \Vert_{L^2}\ \Vert \lapn v \Vert_{L^2}\\ &\overset{\eqref{eq:loc:Lambdalapnvsmall}}{\aleq{}}& \delta\ \sum_{l=1}^\infty 2^{-l} \Vert \eta_{4 r}^l \lapn w \Vert_{L^2}. \end{ma} \] So we have shown that \[ \begin{ma} &&\Vert \partial^\beta w\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v_{-\Lambda}^\rho \Vert_{L^2(B_r)}\\ &\aleq{}&\delta\ \sum_{l=1}^\infty 2^{-l} \Vert \eta_{4 r}^l \lapn w \Vert_{L^2} + \delta \Vert \lapn w \Vert_{L^2(B_{4\Lambda r})}\\ &\aleq{}&\delta \Vert \lapn w \Vert_{L^2({\mathbb R}^n)}. 
\end{ma} \] Setting $w = v$ in the case of $II^\beta_{2,-\Lambda}$ and $w = v_\Lambda$ in the case of $II^\beta_{1,-\Lambda}$, this implies \[ II^\beta_{1,-\Lambda} \aleq{} \delta \Vert \lapn v_\Lambda \Vert_{L^2} \aleq{} \delta\ [v]_{B_{4\Lambda r},\frac{n}{2}}, \] and \[ II^\beta_{2,-\Lambda} \aleq{} \sum_{l=1}^\infty 2^{-l} \Vert \lapn v \Vert_{L^2(A_l)} + \delta \Vert \lapn v \Vert_{L^2(B_{4\Lambda r})}. \] \underline{As for $III$}, using yet again \eqref{eq:hvvest:v1}, we have \[
P^\rho \tilde{\eta}_\rho = v - v_{\Lambda} - v_{-\Lambda}^\rho \tilde{\eta}_\rho.
III &=& \brac{\lapn \brac{\brac{\tilde{\eta}_\rho}^2 P P} - P \lapn \brac{\brac{\tilde{\eta}_\rho}^2 P} }\varphi\\ &=& \brac{\lapn \brac{\brac{v - v_{\Lambda} - v_{-\Lambda}^\rho \tilde{\eta}_\rho}P} - P \lapn \brac{v - v_{\Lambda} - v_{-\Lambda}^\rho \tilde{\eta}_\rho}}\varphi. \end{ma} \] Thus, the only part we have not estimated already in $II$ (or which is estimated exactly as in $II$, as the term containing $v^\rho_{-\Lambda} \tilde{\eta}_\rho$) is \[
\lapn \brac{v P} - P\lapn v. \] Again by Proposition~\ref{pr:lapsmonprod2}, this is decomposed into terms of the following form (for $1 \leq \abs{\beta} \leq N$) \[ \begin{ma}
&&\partial^\beta P\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v\\ &=& -\partial^\beta \brac{(v-P)(1-\eta_{\Lambda r})}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v\\ &&- \partial^\beta \brac{(v-P)\eta_{\Lambda r}}\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v\\ &&+ \partial^\beta v\ M_\beta \Delta^{\frac{n-2\abs{\beta}}{4}} v\\ &=:& III_1 + III_2 + III_3. \end{ma} \] Of course, \[
\Vert III_1 \Vert_{L^2(B_r)} = 0. \] By Lemma~\ref{la:lowerorderlocalest}, \[ \begin{ma} &&\Vert III_2 \Vert_{L^2(B_r)}\\ &\aleq{}& \Vert \lapn (v-P)\eta_{\Lambda r} \Vert_{L^2} \brac{ \Vert \lapn v \Vert_{L^2(B_{2\Lambda r})} + \Lambda^{-\frac{1}{2}} \sum_{k=1}^\infty 2^{-\frac{k}{2}} \Vert \lapn v \Vert_{L^2(A_k)}}\\ &\overset{\sref{L}{la:poincmv}}{\aleq{}}& [v]_{B_{4\Lambda r},\frac{n}{2}} \brac{ \Vert \lapn v \Vert_{L^2(B_{2\Lambda r})} + \sum_{k=1}^\infty 2^{-\frac{k}{2}} \Vert \lapn v \Vert_{L^2(A_k)}}\\ &\overset{\eqref{eq:loc:vbLambdasmall}}{\aleq{}}& \delta [v]_{B_{4\Lambda r},\frac{n}{2}} + \delta \sum_{k=1}^\infty 2^{-\frac{k}{2}} \Vert \lapn v \Vert_{L^2(A_k)}. \end{ma} \] And by Lemma~\ref{la:lowerorderlocalest2} and \eqref{eq:loc:vbLambdasmall}, \[
\Vert III_3 \Vert_{L^2(B_r)} \aleq{} \delta \Vert \lapn v \Vert_{L^2(B_{4\Lambda r})} + \sum_{k=1}^\infty 2^{-\frac{k}{2}} \Vert \lapn v \Vert_{L^2(A_k)}. \]
Finally, \underline{we have to estimate $IV$}. Set \[
\tilde{A}_k := B_{2^{k+4}\Lambda r} \backslash B_{2^{k-4}\Lambda r}. \] Using Lemma~\ref{la:bs:disjsuppGen}, the first term is estimated as follows (setting $P_k$ to be the polynomial of order $N$ such that $v-P_k$ satisfies \eqref{eq:meanvalueszero} on $B_{2^{k+1} \Lambda r} \backslash B_{2^{k-1} \Lambda r}$): \[
\begin{ma} &&\Vert \lapn \brac{\eta_{\Lambda r}^k (1-\eta_{\Lambda r})\brac{\tilde{\eta}_{\rho}}^2 (v-P)^2} \Vert_{L^2(B_r)}\\ &\aleq{}& 2^{-k\frac{3}{2}n} \Lambda^{-\frac{3}{2}n} r^{-n} \Vert \sqrt{\eta_{\Lambda r}^k} (v-P) \Vert_{L^2}^2\\ &\aleq{}& 2^{-k\frac{3}{2}n} \Lambda^{-\frac{3}{2}n} r^{-n} \brac{\Vert \sqrt{\eta_{\Lambda r}^k} (v-P_k) \Vert_{L^2}^2 + 2^{nk} (\Lambda r)^n \Vert \sqrt{\eta_{\Lambda r}^k} (P-P_k) \Vert_{L^\infty}^2}\\ &\overset{\sref{L}{la:poincmvAn}}{\aleq{}}& 2^{-k\frac{3}{2}n} \Lambda^{-\frac{3}{2}n} r^{-n} \brac{(2^k\Lambda r)^n \brac{[v]_{\tilde{A}_k,\frac{n}{2}}}^2 + 2^{nk} (\Lambda r)^n \Vert \sqrt{\eta_{\Lambda r}^k} (P-P_k) \Vert_{L^\infty}^2}\\ &\overset{\sref{P}{pr:etarkpbmpkest}}{\aleq{}}& \Lambda^{-\frac{n}{2}}\ 2^{-k\frac{n}{2}}\ \brac{\brac{[v]_{\tilde{A}_k,\frac{n}{2}}}^2 + k\Vert \sqrt{\eta_{\Lambda r}^k} (P-P_k) \Vert_{L^\infty}\ \Vert \lapn v \Vert_{L^2}}\\ &\aleq{}& \Lambda^{-\frac{n}{2}}\ 2^{-k\frac{n-\frac{1}{4}}{2}}\ \brac{\brac{[v]_{\tilde{A}_k,\frac{n}{2}}}^2 + \Vert \sqrt{\eta_{\Lambda r}^k} (P-P_k) \Vert_{L^\infty} \Vert \lapn v \Vert_{L^2}}.
\end{ma} \] Note that as $\frac{n}{2}-\frac{1}{8} > \lceil \frac{n}{2} \rceil -1$, on the one hand Lemma~\ref{la:mvestbrakShrpr} is applicable and on the other hand we have by Proposition~\ref{pr:equivlaps} \[
\sum_{k=1}^\infty 2^{-k \frac{n-\frac{1}{4}}{2}} \brac{[v]_{\tilde{A}_k,\frac{n}{2}}}^2 \aleq{} \Vert \lapn v \Vert_{L^2({\mathbb R}^n)} \sum_{k=1}^\infty 2^{-k \frac{n-\frac{1}{4}}{2}} [v]_{\tilde{A}_k,\frac{n}{2}}. \] Consequently, we have for some $\gamma > 0$ \[ \begin{ma}
\Vert \lapn (v^\rho_{-\Lambda})^2 \Vert_{L^2(B_r)} &\aleq{}& \brac{1+\Vert \lapn v \Vert_{L^2}}\ \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [v]_{\tilde{A}_k,\frac{n}{2}}\\
&\overset{\eqref{eq:loc:Lambdalapnvsmall}}{\aleq{}}& \Lambda^{\frac{1}{2}}\ \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [v]_{\tilde{A}_k,\frac{n}{2}}. \end{ma} \] For the next term in $IV$, using the disjoint support as well as Poincar\'{e}'s inequality, Lemma~\ref{la:poinc} and Lemma~\ref{la:poincmv}, and the estimate on mean value polynomials, Proposition~\ref{pr:estetarkvmp}, and as \[ v_\Lambda v_{-\Lambda}^\rho = \sum_{k=1}^3 v_\Lambda \brac{\eta_{\Lambda r}^k \tilde{\eta}_\rho\ (v-P)}, \] we can estimate \[ \begin{ma} &&\Vert \lapn \brac{v_\Lambda\ v_{-\Lambda}^\rho }\Vert_{L^2(B_r)}\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\leq}& \sum_{k=1}^3\ \brac{2^k \Lambda r}^{-\frac{3}{2} n} \Vert v_\Lambda \Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P)\Vert_{L^2}\ r^{\frac{n}{2}}\\ &\overset{\sref{L}{la:poinc}}{\aleq{}}& \sum_{k=1}^3\ \brac{2^k \Lambda r}^{-\frac{3}{2} n}\ \brac{\Lambda r}^{\frac{n}{2}} \Vert \lapn v_\Lambda \Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P)\Vert_{L^2}\ r^{\frac{n}{2}}\\ &\overset{\ontop{\sref{L}{la:poincmv}}{\sref{P}{pr:estetarkvmp}}}{\aleq{}}&\Lambda^{-\frac{n}{2}} [v]_{B_{4\Lambda r},\frac{n}{2}}\ \Vert \lapn v \Vert_{L^2({\mathbb R}^n)}\\ &\overset{\eqref{eq:loc:Lambdalapnvsmall}}{\aleq{}}& \delta\ [v]_{B_{4\Lambda r},\frac{n}{2}}. \end{ma} \] Last but not least, \[ \begin{ma}
&&\Vert v_\Lambda \lapn \eta_{\Lambda r}^k (v-P)\tilde{\eta}_\rho \Vert_{L^2(B_r)}\\
&\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& (2^k \Lambda r)^{-n} \Vert v_\Lambda \Vert_{L^2}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}\\
&\overset{\ontop{\sref{L}{la:poinc}}{\sref{L}{la:poincmv}}}{\aleq{}}& 2^{-nk} \brac{\Lambda r}^{-\frac{n}{2}} [v]_{B_{4\Lambda r},\frac{n}{2}}\ \Vert \eta_{\Lambda r}^k (v-P) \Vert_{L^2}\\
&\overset{\eqref{eq:loc:vbLambdasmall}}{\aleq{}}& 2^{-k\frac{n}{2}} \delta \brac{ \brac{2^{k} \Lambda r}^{-\frac{n}{2}} \Vert \eta_{\Lambda r}^k (v-P_k) \Vert_{L^2} + \Vert \eta_{\Lambda r}^k (P - P_k) \Vert_{L^\infty}}\\
&\overset{\sref{L}{la:poincmvAn}}{\aleq{}}& \delta \brac{2^{-\frac{n}{2}k}\ [v]_{A_k,\frac{n}{2}} + 2^{-\frac{n}{2}k} \Vert \eta_{\Lambda r}^k \brac{ P-P_k} \Vert_{L^\infty} }.
\end{ma} \] Again, as $\frac{n}{2} > N$, Lemma~\ref{la:mvestbrakShrpr} implies that for some $\gamma > 0$, \[
\Vert v_\Lambda \lapn v_{-\Lambda} \Vert_{L^2(B_r)} \aleq{} \sum_{k=-\infty}^\infty 2^{-\gamma\abs{k}} [v]_{A_k,\frac{n}{2}}. \] We conclude by taking $\delta = \tilde{\delta} \varepsilon$ for a uniformly small $\tilde{\delta} > 0$ which does \emph{not} depend on $\Lambda$ or $\Vert \lapn v \Vert_{L^2}$. \end{proof}
\section{Euler-Lagrange Equations}\label{sec:eleqn} As in \cite{DR09Sphere} we will have two equations controlling the behavior of a critical point of $E_n$. First of all, we are going to use a different structure equation: Obviously, for any $u \in H^{\frac{n}{2}}({\mathbb R}^n,{\mathbb R}^m)$ with $u(x) \in {\mathbb S}^{m-1}$ almost everywhere on a domain $D \subset {\mathbb R}^n$, we have for $w := \eta u$, $\eta \in C_0^\infty(D)$, \[
\sum_{i=1}^m w^i \lapn w^i = - \frac{1}{2} \sum_{i=1}^m H(w^i,w^i) + \frac{1}{2} \lapn \eta^2, \] or in the contracted form \begin{equation}\label{eq:structureeq}
w \cdot \lapn w = -\frac{1}{2} H(w,w) + \frac{1}{2} \lapn \eta^2. \end{equation}
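For the reader's convenience, here is a short justification (a sketch, assuming the bilinear operator $H(a,b) := \lapn (ab) - a\ \lapn b - b\ \lapn a$ as introduced in the earlier sections): since $\operatorname{supp} \eta \subset D$ and $\abs{u} = 1$ almost everywhere on $D$, we have $w \cdot w = \eta^2 \abs{u}^2 = \eta^2$ on all of ${\mathbb R}^n$, and consequently \[ \lapn \eta^2 = \lapn (w \cdot w) = 2\ w \cdot \lapn w + H(w,w), \] which rearranges into \eqref{eq:structureeq}.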
The Euler-Lagrange equations are computed similarly to the computations in \cite{DR09Sphere}, \cite{Hel02}. \begin{proposition}[Localized Euler-Lagrange Equation]\label{pr:eleq} Let $\eta \in C_0^\infty(D)$ and $\eta \equiv 1$ on an open neighborhood of some ball $\tilde{D} \subset D$.\\ Let $u \in \Hf({\mathbb R}^n,{\mathbb R}^m)$ be a critical point of $E_n(\cdot)$ on $D$, cf. Definition \ref{def:critpt}. Then $w := \eta u$ satisfies for every $\psi_{ij} \in C_0^\infty(\tilde{D})$, such that $\psi_{ij} = - \psi_{ji}$, \begin{equation}\label{eq:ELeq}
-\int \limits_{{\mathbb R}^n} w^i\ \lapn w^j\ \lapn \psi_{ij} = - \int \limits_{{\mathbb R}^n} a_{ij} \psi_{ij} + \int \limits_{{\mathbb R}^n} \lapn w^j\ H(w^i,\psi_{ij}). \end{equation} Here $a_{ij} \in L^2({\mathbb R}^n)$ depends on the choice of $\eta$. \end{proposition} \begin{remark} Note in the following proof that this result also holds if $u \in L^\infty({\mathbb R}^n)$ and $\lapn u \in L^2({\mathbb R}^n)$, which is the setting of \cite{DR09Sphere}. \end{remark}
\begin{proofP}{\ref{pr:eleq}} Let $\varphi \in C_0^\infty(D,{\mathbb R}^m)$. Recall that in Definition \ref{def:critpt} we have set \[
u_t = \begin{cases}
u + t d\pi_u [\varphi] + o(t)\quad &\mbox{in $D$,}\\
u\quad &\mbox{in ${\mathbb R}^n \backslash D$}.\\
\end{cases} \] Then $u_t$ belongs to $\Hf({\mathbb R}^n,{\mathbb R}^{m})$ and $u_t \in {\mathbb S}^{m-1}$ a.e. in $D$. Hence, the Euler-Lagrange equations of the functional $E_n$ defined in \eqref{eq:energy} read \[
\int \limits_{{\mathbb R}^n} \lapn u \cdot \lapn d\pi_u [\varphi] = 0, \qquad \mbox{for any $\varphi \in C_0^\infty(D)$.} \] In particular, for any $v \in C_0^\infty(D)$ such that $v \in T_u {\mathbb S}^{m-1}$ a.e. (i.e. $d\pi_u[v] = v$ in $D$) \begin{equation}\label{eq:el:vtangential}
\int \limits_{{\mathbb R}^n} \lapn u \cdot \lapn v = 0. \end{equation} Let $\psi_{ij} \in C_0^\infty(\tilde{D},{\mathbb R})$, $1 \leq i,j \leq m$, $\psi_{ij} = - \psi_{ji}$. Then $v^j := \psi_{ij} u^i \in \Hf({\mathbb R}^n)$, $1 \leq j \leq m$. Moreover, $u \cdot v = 0$. Since for $x \in D$ the vector $u(x) \in {\mathbb R}^m$ is orthogonal to the tangent space of ${\mathbb S}^{m-1}$ at the point $u(x)$, this implies $v \in T_u {\mathbb S}^{m-1}$. Consequently, \eqref{eq:el:vtangential} holds for this $v$ by approximation\footnote{In fact, approximate this $v \in \Hf({\mathbb R}^n)$ by $v_k \in C_0^\infty({\mathbb R}^n)$, see Lemma~\ref{la:Tartar07:Lemma15.10}. By the Interpolation theorem we have for $\eta \in C_0^\infty(D)$, $\eta \equiv 1$ on $\tilde{D}$ that $\Vert \eta v_k -v \Vert_{\Hf} = \Vert \eta (v_k -v) \Vert_{\Hf} \leq C_\eta \Vert v_k -v \Vert_{\Hf}$.}.\\ Let $\eta$ be the cutoff function from above, i.e. $\eta \in C_0^\infty(D)$, $\eta \equiv 1$ on an open neighborhood of the ball $\tilde{D} \subset D$, and set $w := \eta u$.\\ Because of $\operatorname{supp} \psi \subset \tilde{D}$ we have that $v^j = w^i \psi_{ij}$. Thus, by \eqref{eq:el:vtangential} \begin{equation}\label{eq:el:cutpde}
\int \limits_{{\mathbb R}^n} \lapn w^j\ \lapn (w^i \psi_{ij}) = \int \limits_{{\mathbb R}^n} \lapn (w^j-u^j)\ \lapn (w^i \psi_{ij}). \end{equation} Observe that $w^i \in L^\infty({\mathbb R}^n) \cap \Hf({\mathbb R}^n)$ and by choice of $\eta$ and $\tilde{D}$, there exists $d > 0$ such that $\operatorname{dist} \brac{\operatorname{supp} (w^j-u^j), \tilde{D}}> d$. Hence, Lemma~\ref{la:bs:localizing} implies that there is $\tilde{a}_{j} \in L^2({\mathbb R}^n)$, $\Vert \tilde{a} \Vert_{L^2({\mathbb R}^n)} \leq C_{u,D,\tilde{D},\eta}$ such that \begin{equation}\label{eq:el:localizinga}
\int \limits_{{\mathbb R}^n} \lapn (w^j-u^j)\ \lapn \varphi = \int \limits_{{\mathbb R}^n} \tilde{a}_{j} \varphi \quad \mbox{for all $\varphi \in C_0^\infty(\tilde{D})$}. \end{equation} Consequently, for $a_{ij} := \tilde{a}_j w^i \in L^2({\mathbb R}^n)$, (again by approximation) \[
\int \limits_{{\mathbb R}^n} \lapn (w^j-u^j)\ \lapn (w^i \varphi) = \int \limits_{{\mathbb R}^n} a_{ij} \varphi \quad \mbox{for all $\varphi \in C_0^\infty(\tilde{D})$}. \] Hence, \eqref{eq:el:cutpde} can be written as \begin{equation}\label{eq:el:cutpde2}
\int \limits_{{\mathbb R}^n} \lapn w^j\ \lapn (w^i \psi_{ij}) = \int \limits_{{\mathbb R}^n} a_{ij} \psi_{ij}, \end{equation} which is valid for every $\psi_{ij} \in C_0^\infty(\tilde{D})$ such that $\psi_{ij} = -\psi_{ji}$.\\ Moving on, we have just by the definition of $H(\cdot,\cdot)$, \begin{equation}\label{eq:el:prdrle}
\lapn (w^i \psi_{ij}) = \lapn w^i\ \psi_{ij} + w^i\ \lapn \psi_{ij} + H(w^i,\psi_{ij}). \end{equation} Hence, putting \eqref{eq:el:cutpde2} and \eqref{eq:el:prdrle} together \[ \begin{ma} &&-\int \limits_{{\mathbb R}^n} w^i\ \lapn w^j\ \lapn \psi_{ij}\\
&=& - \int \limits_{{\mathbb R}^n} a_{ij} \psi_{ij} + \int \limits_{{\mathbb R}^n} \lapn w^j \ \lapn w^i\ \psi_{ij} + \int \limits_{{\mathbb R}^n} \lapn w^j\ H(w^i,\psi_{ij})\\ &\overset{\psi_{ij} = -\psi_{ji}} {=}& - \int \limits_{{\mathbb R}^n} a_{ij} \psi_{ij} + \int \limits_{{\mathbb R}^n} \lapn w^j\ H(w^i,\psi_{ij}). \end{ma} \] \end{proofP}
\begin{remark} The only change one has to make here, if $u \not \in L^2({\mathbb R}^n)$ but e.g. $u \in L^\infty({\mathbb R}^n)$, is to prove \eqref{eq:el:localizinga} by an alternative to Lemma~\ref{la:bs:localizing}. In fact, if we assume only $f = w^j-u^j \in L^\infty({\mathbb R}^n)$, we can still estimate for any $\varphi \in C_0^\infty(\tilde{D})$ and suitably chosen $\eta_{r,x_0}^k$ \[ \begin{ma}
&&\int \limits f\ \Delta^{\frac{n}{2}} \varphi\\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& \sum_{k=1}^\infty (2^k r)^{-2n} \Vert \eta_{r,x_0}^k f \Vert_{L^1}\ \Vert \varphi \Vert_{L^1}\\ &\aleq{}& \abs{\tilde{D}}^{\frac{1}{2}}\ \Vert \varphi \Vert_{L^2}\ \Vert f \Vert_{L^\infty} \sum_{k=1}^\infty (2^k r)^{-n}\\ &\aeq{}& C_{D,\tilde{D}}\ \Vert \varphi \Vert_{L^2}\ \Vert f \Vert_{L^\infty}. \end{ma} \] Thus, in exactly the same way as in the proof of Lemma~\ref{la:bs:localizing} we conclude the existence of $\tilde{a}$ as in \eqref{eq:el:localizinga}. \end{remark}
\section{Homogeneous Norm for the Fractional Sobolev Space}\label{sec:homognormhn2} We recall from Section~\ref{ss:idlaps} the definition of the ``homogeneous norm'' $[u]_{D,s}$: If $s \geq 0$, $s \not \in {\mathbb N}_0$, \[ \left ([u]_{D,s} \right )^2 := \int \limits_{D} \int \limits_{D} \frac{\abs{\nabla^{\lfloor s \rfloor}u(z_1) - \nabla^{\lfloor s \rfloor}u(z_2)}^2}{\abs{z_1-z_2}^{n+2(s-\lfloor s \rfloor)}} \ dz_1\ dz_2. \] Otherwise, $[u]_{D,s}$ is just $\Vert \nabla^s u \Vert_{L^2(D)}$. \\ \subsection{Comparison results for the homogeneous norm} The goal of this section is the following lemma, which compares, for balls $B$, the size of $[u]_{B,\frac{n}{2}}$ to the size of $\Vert \lapn u \Vert_{L^2(B)}$. Obviously, these two semi-norms are not equivalent. In fact, take for instance any nonzero $u \in H^{\frac{n}{2}}({\mathbb R}^n)$ with support outside of $B$. Then $[u]_{B,\frac{n}{2}}$ vanishes, but $\lapn u$ cannot vanish identically (cf. Lemma~\ref{la:bs:uniqueness}). Nevertheless, these two semi-norms can be compared in the following sense: \begin{lemma}\label{la:comps01} There is a uniform $\gamma > 0$ such that for any $\varepsilon >0$, $n \in {\mathbb N}$, there exists a constant $C_\varepsilon > 0$ such that for any $v \in \Hf({\mathbb R}^n)$, $B_r \equiv B_r(x) \subset {\mathbb R}^n$ \[ \begin{ma}
[v]_{B_r,\frac{n}{2}} \leq \varepsilon [v]_{B_{8r},\frac{n}{2}} &+& C_\varepsilon \Big [\Vert \lapn v \Vert_{L^2(B_{16r})}\\ &&+ \sum_{k=1}^\infty 2^{-nk} \Vert \eta_{8r}^{k} \lapn v \Vert_{L^2}\\ &&+ \sum \limits_{j=-\infty}^\infty 2^{-\gamma \abs{j}}\ [v]_{\tilde{A}_j,\frac{n}{2}} \Big ] \\ \end{ma} \] where $\tilde{A}_j = B_{2^{j+5}r} \backslash B_{2^{j-5}r}$. \end{lemma} \begin{proofL}{\ref{la:comps01}} It suffices to prove this for $v \in {\mathcal{S}}({\mathbb R}^n)$, as ${\mathcal{S}}({\mathbb R}^n)$ is dense in $\Hf({\mathbb R}^n)$. Set $N := \lceil \frac{n}{2} \rceil-1$, $s := \frac{n}{2}-N \in \{\frac{1}{2},1\}$, and let $P_{2r}$ be the polynomial of degree $N$ such that the mean value condition \eqref{eq:meanvalueszero} holds for $N$ and $B_{2r}$. Let at first $n$ be odd. Set $\tilde{v} := \eta_{2r} (v-P_{2r})$. Note that \begin{equation} \label{eq:comps01:veqvmp} \tilde{v} = v - P_{2r} \quad \mbox{on $B_r$}. \end{equation} Consequently, \[ \begin{ma}
\brac{[v]_{B_r,\frac{n}{2}} }^{2} &\overset{\eqref{eq:comps01:veqvmp}}{=}& \brac{[\tilde{v}]_{B_r,\frac{n}{2}}}^{2}\\ &\overset{s:=\frac{1}{2}}{\leq}& \sum_{\abs{\alpha} = N}\ \int \limits_{{\mathbb R}^n}\ \int \limits_{{\mathbb R}^n} \frac{ (\partial^\alpha \tilde{v}(x) - \partial^\alpha \tilde{v}(y))(\partial^\alpha \tilde{v}(x) - \partial^\alpha \tilde{v}(y) )}{\abs{x-y}^{n+2s}}\ dx\ dy\\
&\overset{\sref{P}{pr:eqpdeflapscpr}}{\aeq{}}& \sum_{\abs{\alpha} = N}\ \int \limits_{{\mathbb R}^n} \laps{s} \partial^\alpha \tilde{v} \ \laps{s} \partial^\alpha \tilde{v}. \end{ma} \] Thus, \[
\brac{[v]_{B_r,\frac{n}{2}}}^2 \aleq{N} \Vert \lapn \tilde{v} \Vert_{L^2}\ \sup_{\ontop{\varphi \in C_0^\infty(B_{4r}(0))}{\Vert \lapn \varphi \Vert_{L^2} \leq 1}} \int \limits_{{\mathbb R}^n} \lapn \tilde{v}\ M\lapn \varphi, \] where $M$ is a zero-multiplier operator. One checks that by a similar argument this also holds for $n$ even. Using Young's inequality, \[ \begin{ma}
[v]_{B_r,\frac{n}{2}} &\aleq{N}& \varepsilon \Vert \lapn \tilde{v} \Vert_{L^2} + \frac{1}{\varepsilon} \sup_{\ontop{\varphi \in C_0^\infty(B_{4r})}{\Vert \lapn \varphi \Vert_{L^2} \leq 1}} \int \limits_{{\mathbb R}^n} \lapn \tilde{v}\ M\lapn \varphi\\ &\overset{\sref{L}{la:poincmv}}{\aleq{}}& \varepsilon [v]_{B_{8r},\frac{n}{2}} + \frac{1}{\varepsilon} \sup_{\ontop{\varphi \in C_0^\infty(B_{4r})}{\Vert \lapn \varphi \Vert_{L^2} \leq 1}} \int \limits_{{\mathbb R}^n} \lapn \tilde{v}\ M\lapn \varphi.\\ \end{ma} \] For such a $\varphi \in C_0^\infty(B_{4r})$, $\Vert \lapn \varphi \Vert_{L^2} \leq 1$ we decompose \[ \begin{ma}
&&\int \limits_{{\mathbb R}^n} \lapn \tilde{v}\ M\lapn \varphi\\ &\overset{\sref{P}{pr:lapspol}}{=}& \int \limits_{{\mathbb R}^n} \lapn v \ M\lapn \varphi\\ && - \sum_{k=1}^\infty\ \int \limits_{{\mathbb R}^n} \lapn \brac{\eta_{2r}^k (v-P_{2r})} \ M\lapn \varphi\\ &=&\int \limits_{{\mathbb R}^n} \lapn v \ \eta_{8r}M\lapn \varphi\\ && + \sum_{k=1}^\infty\ \int \limits_{{\mathbb R}^n} \lapn v \ \eta^k_{8r}M\lapn \varphi\\ && - \sum_{k=1}^\infty\ \int \limits_{{\mathbb R}^n} \lapn \brac{\eta_{2r}^k (v-P_{2r})} \ M\lapn \varphi\\ &=:& I + \sum_{k=1}^\infty II_k - \sum_{k=1}^\infty III_k. \end{ma} \] In fact, to apply Proposition~\ref{pr:lapspol} correctly, we should have used a similar argument as in the proof of Lemma~\ref{la:hwwlocest}. That is, we should have approximated $v$ by compactly supported functions, then for such functions we should have decomposed for some $\tilde{\eta}_\rho$, $\rho \geq \rho_0$, where $\rho_0$ depends on the support of $v$, $r$, $x$ such that $B_{2\rho}(0)$ contains the support of $v$ and $\tilde{v}$, \[
\lapn \tilde{v} = \lapn v + \lapn (\tilde{v}-v) = \lapn v + \sum_{k = 1}^\infty \lapn \brac{\eta_{2r}^k \tilde{\eta}_\rho (v-P_{2r})} + \lapn \brac{\tilde{\eta}_\rho P}. \] Then one would have applied Remark \ref{rem:cutoffPolbdd} to see that $\Vert M \lapn \brac{\tilde{\eta}_\rho P} \Vert_{L^\infty}$ tends to zero as $\rho \to \infty$. But we will omit the details, and continue instead.\\ Obviously, using H\"ormander's theorem \cite{Hoermander60}, \[
\abs{I} \aleq{} \Vert \lapn v \Vert_{L^2(B_{8r})}. \] Moreover, for any $k \in {\mathbb N}$ by Lemma~\ref{la:bs:disjsuppGen} and Poincar\'{e}'s inequality, Lemma~\ref{la:poinc}, \[ \begin{ma}
\abs{II_k} &\aleq{}& \brac{2^kr}^{-n}\ \Vert \eta_{8r}^{k} \lapn v \Vert_{L^2}\ r^{n}\\ &=& 2^{-nk}\ \Vert \eta_{8r}^{k} \lapn v \Vert_{L^2}. \end{ma} \] As for $III_k$, let for $k \in {\mathbb N}$, $P_{2r}^k$ the polynomial which makes $v-P_{2r}^k$ satisfy the mean value condition \eqref{eq:meanvalueszero} on $B_{2^{k+2}r} \backslash B_{2^{k}r}$. If $k \geq 3$, \[ \begin{ma}
\abs{III_k} &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& r^{-\frac{n}{2}}\ \brac{2^{k}}^{-\frac{3}{2}n}\ \Vert \eta_{2r}^k (v-P_{2r}) \Vert_{L^2}\\ &\aleq{}& r^{-\frac{n}{2}}\ 2^{-\frac{3}{2}nk}\ \brac{\Vert \eta_{2r}^k (v-P^k_{2r}) \Vert_{L^2} + 2^{k\frac{n}{2}} r^{\frac{n}{2}} \Vert \eta_{2r}^k (P_{2r}-P^k_{2r}) \Vert_{L^\infty}}\\ &\overset{\sref{L}{la:poincmvAn}}{\aleq{}}& \ 2^{-nk}\ \brac{ [v]_{\tilde{A}_k,\frac{n}{2}} + \Vert \eta_{2r}^k (P_{2r}-P^k_{2r}) \Vert_{L^\infty}}. \end{ma} \] This and Lemma~\ref{la:mvestbrakShrpr} imply for a $\gamma > 0$, \[
\sum_{k=3}^\infty III_k \aleq{} \sum \limits_{j=-\infty}^\infty 2^{-\abs{j} \gamma}\ [v]_{\tilde{A}_j,\frac{n}{2}}. \] It remains to estimate $III_1$, $III_2$ (where we can not use the disjoint support lemma, Lemma~\ref{la:bs:disjsuppGen}). Let from now on $k=1$ or $k=2$. By Lemma~\ref{la:poincmvAn} \[
\Vert \lapn\eta_{2r}^k (v-P^k_{2r}) \Vert_{L^2} \aleq{} [v]_{\tilde{A}_k,\frac{n}{2}}, \] so \[ \begin{ma}
III_k &\leq& \Vert \lapn \brac{\eta_{2r}^k (v-P^k_{2r}) }\Vert_{L^2} + \Vert \lapn \brac{\eta_{2r}^k \left (P^k_{2r} - P_{2r} \right ) }\Vert_{L^2}\\ &\aleq{}& [v]_{\tilde{A}_k,\frac{n}{2}} + \Vert \lapn\brac{\eta_{2r}^k \left (P^k_{2r} - P_{2r} \right )} \Vert_{L^2}.
\end{ma} \] The following will be similar to the calculations in the proof of Lemma~\ref{la:poincmv} and Proposition~\ref{pr:mvpoinc}. Set \[ w_{\alpha,\beta}^k := \partial^\alpha \eta_{2r}^k\ \partial^\beta \brac{P^k_{2r} - P_{2r}}. \] We calculate for odd $n \in {\mathbb N}$, \[ \Vert \lapn\brac{\eta_{2r}^k \brac{P^k_{2r} - P_{2r}} }\Vert_{L^2}^2 \aleq{} \sum_{\abs{\alpha} + \abs{\beta} = \frac{n-1}{2}} [w_{\alpha,\beta}^k]_{{\mathbb R}^n,\frac{1}{2}}^2. \] Note that $\operatorname{supp} w_{\alpha,\beta}^k \subset B_{2^{k+2}r} \backslash B_{2^k r}$, so \[ \begin{ma} &&[w^k_{\alpha,\beta}]_{{\mathbb R}^n,\frac{1}{2}}^2\\ &\aleq{}& \Vert w^k_{\alpha,\beta} \Vert^2_{L^\infty} \int \limits_{\tilde{A}_k} \int \limits_{{\mathbb R}^n \backslash B_{40r}} \frac{1}{\abs{x-y}^{n+1}}\ dx\ dy\\ &&+ \Vert \nabla w^k_{\alpha,\beta}\Vert^2_{L^\infty}\ \int \limits_{\tilde{A}_k} \int \limits_{B_{\frac{1}{4}r}} \frac{1}{\abs{x-y}^{n-1}}\ dx\ dy\\ &&+ \Vert \nabla w^k_{\alpha,\beta}\Vert^2_{L^\infty}\ \int \limits_{\tilde{A}_k} \int \limits_{B_{40r} \backslash B_{\frac{1}{4}r}} \frac{1}{\abs{x-y}^{n-1}}\ dx\ dy\\ &\aleq{}& \Vert w^k_{\alpha,\beta} \Vert^2_{L^\infty} r^{n-1} + r^{n+1} \Vert \nabla w^k_{\alpha,\beta} \Vert^2_{L^\infty}\\ &\aleq{}& \max_{\abs{\delta} \leq \frac{n+1}{2}} r^{2\abs{\delta}} \Vert \partial^\delta (P_{2r} - P^k_{2r})\Vert_{L^\infty(\operatorname{supp} \eta_{2r}^k)}^2\\ &\aeq{}& \max_{\abs{\delta} \leq N} r^{2\abs{\delta}} \Vert \partial^\delta (P_{2r} - P^k_{2r})\Vert_{L^\infty(\operatorname{supp} \eta_{2r}^k)}^2. \end{ma} \] Taking the square root, we have shown that \[ \sum_{k=1}^2 \Vert \lapn\brac{\eta_{2r}^k \brac{P^k_{2r} - P_{2r}} }\Vert_{L^2} \aleq{} \max_{\abs{\delta} \leq N} r^{\abs{\delta}} \sum_{k=1}^2 \Vert \partial^\delta (P_{2r} - P^k_{2r})\Vert_{L^\infty(\operatorname{supp} \eta_{2r}^k)}. \] Of course, the same holds true if $n \in {\mathbb N}$ is even. 
Now, in the proof of Lemma~\ref{la:mvestbrakShrpr}, more precisely in \eqref{eq:mvpoincSiagammaClaim}, it was shown that \[ \begin{ma} &&\sum_{k=1}^2 \Vert \partial^\delta (P_{2r} - P^k_{2r}) \Vert_{L^\infty(\tilde{A}_k)}\\ &\aleq{}&\sum_{k=1}^\infty 2^{-nk}\ \Vert \partial^\delta (P_{2r} - P^k_{2r}) \Vert_{L^\infty(\tilde{A}_k)}\\ &=&\sum_{k=1}^\infty 2^{-nk}\ \Vert \partial^\delta (Q^{\abs{\delta}}_{2r} - Q^{\abs{\delta}}_{k}) \Vert_{L^\infty(\tilde{A}_k)}\\ &\overset{\eqref{eq:mvpoincSiagammaClaim}}{\aleq{}}& r^{-\abs{\delta}} \sum_{j=-\infty}^\infty 2^{-(n-N) \abs{j}} [v]_{\tilde{A}_j,\frac{n}{2}}. \end{ma} \] Here, of course, we have set \[Q_{2r}^{\abs{\delta}} := Q_{B_{2r},N}^{\abs{\delta}}\] and \[Q_{k}^{\abs{\delta}} := Q_{B_{2^{k+2}r} \backslash B_{2^{k}r},N}^{\abs{\delta}}.\] This concludes the proof. \end{proofL}
\subsection{Localization of the Homogeneous Norm} For the convenience of the reader, we repeat the proof of the following result from \cite{DR09Sphere}. \begin{lemma}\label{la:homogloc}(\cite[Theorem A.1]{DR09Sphere})\\ For any $s > 0$ there is a constant $C_s > 0$ such that the following holds. For any $v \in {\mathcal{S}}({\mathbb R}^n)$, $r > 0$, $x \in {\mathbb R}^n$, \[ \brac{[v]_{B_{r}(x),s}}^2 \leq C_s \sum_{k=-\infty}^{-1} \brac{[v]_{A_k,s}}^2. \] Here $A_k$ denotes $B_{2^{k+1}r}(x) \backslash B_{2^{k-1}r}(x)$. \end{lemma} \begin{proofL}{\ref{la:homogloc}} This is obvious for any $s \in {\mathbb N}$. Moreover, it suffices to prove the case $s \in (0,1)$, as for $\tilde{s} > 1$, \[
[v]_{D,\tilde{s}} = [\nabla^{\lfloor \tilde{s} \rfloor}v]_{D,\tilde{s}-\lfloor \tilde{s} \rfloor} \quad \mbox{for any domain $D \subset {\mathbb R}^n$}. \] So let $s \in (0,1)$. Denote \[ \tilde{A}_k := B_{2^{k+1}r}(x) \backslash B_{2^{k}r}(x), \] and set \[ (v)_k := \fint \limits_{A_k} v,\quad \mbox{and}\quad (v)_{\tilde{k}} := \fint \limits_{\tilde{A}_k} v, \] as well as \[ [v]_k := [v]_{A_k,s},\quad \mbox{and} \quad [v]_r := [v]_{B_r(x),s}. \] With these notations, \[ \begin{ma} [v]_{r}^2 &\leq& \sum_{k,l=-\infty}^{-1}\ \int \limits_{\tilde{A}_k} \int \limits_{\tilde{A}_l} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy\\ &\leq& 3 \sum_{k=-\infty}^{-1} [v]_k^2\\ &&+2\sum_{k=-\infty}^{-1} \sum_{l=-\infty}^{k-2}\ \int \limits_{\tilde{A}_k} \int \limits_{\tilde{A}_l} \frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}}\ dx\ dy.\\ \end{ma} \] For $x \in \tilde{A}_k$ and $y \in \tilde{A}_l$ and $l \leq k-2$, \[ \begin{ma} &&\frac{\abs{v(x)-v(y)}^2}{\abs{x-y}^{n+2s}}\\ &\aleq{s}& \brac{2^{k}r}^{-n-2s} \abs{v(x)-v(y)}^2\\ &\aleq{s}& \brac{2^{k}r}^{-n-2s} \brac{\abs{v(x)-(v)_{\tilde{k}}}^2 + \abs{v(y) - (v)_{\tilde{l}}}^2 + \abs{(v)_{\tilde{l}} - (v)_{\tilde{k}}}^2}\\ &\aleq{s}& \brac{2^{k}r}^{-n-2s} \brac{\abs{v(x)-(v)_{\tilde{k}}}^2 + \abs{v(y) - (v)_{\tilde{l}}}^2 + \abs{l-k}\sum_{i=l}^{k-1}\abs{(v)_{\tilde{i}} - (v)_{\widetilde{i+1}}}^2}.\\ &=:& I + II + III. \end{ma} \] As for $I$ and $II$, we have \[ \int \limits_{\tilde{A}_k} \abs{v-(v)_{\tilde{k}}}^2 \aleq{} \frac{1}{\abs{\tilde{A}_{k}}} \brac{2^k r}^{n+2s} [v]_k^2 \] and \[ \int \limits_{\tilde{A}_l} \abs{v-(v)_{\tilde{l}}}^2 \aleq{} \frac{1}{\abs{\tilde{A}_{l}}} \brac{2^l r}^{n+2s} [v]_l^2. 
\] Consequently, \[ \begin{ma} &&\sum_{k=-\infty}^{-1} \sum_{l=-\infty}^{k-2} \int \limits_{\tilde{A}_k} \int \limits_{\tilde{A}_l} I\\ &\leq& \sum_{k=-\infty}^{-1} \sum_{l=-\infty}^{k-2} \frac{\abs{\tilde{A}_l}}{\abs{\tilde{A}_k}} [v]_k^2\\ &\aleq{}& \sum_{k=-\infty}^{-1} [v]_k^2 \sum_{l=-\infty}^{k-2} 2^{l-k}\\ &\aleq{}& \sum_{k=-\infty}^{-1} [v]_k^2. \end{ma} \] Similarly, \[ \begin{ma} &&\sum_{l=-\infty}^{-1} \sum_{k=l+1}^{-1} \int \limits_{\tilde{A}_k} \int \limits_{\tilde{A}_l} II\\ &\aleq{s}& \sum_{l=-\infty}^{-1} \sum_{k=l+1}^{-1} 2^{2(l-k)s} [v]_l^2 \\ &\aleq{s}& \sum_{l=-\infty}^{-1} [v]_l^2.\\ \end{ma} \] As for $III$, we have \[ \begin{ma} &&\abs{(v)_{\tilde{i}} - (v)_{\widetilde{i+1}}}^2\\ &\aleq{}& \brac{2^i r}^{-2n}\ 2^{i(n+2s)}r^{n+2s}\ [v]_i^2\\ &=& 2^{(-n+2s)i}\ r^{-n+2s}\ [v]_i^2. \end{ma} \] This implies that we have to estimate \[ \begin{ma} &&\sum_{k=-\infty}^{-1} \sum_{l=-\infty}^{k-2} \sum_{i=l}^{k-1} (k-l) 2^{-k(n+2s)}r^{-n-2s} \abs{A_l} \abs{A_k} 2^{(-n+2s)i} r^{-n+2s} [v]_i^2\\ &=& \sum_{i=-\infty}^{-2} 2^{(-n+2s)i}\ [v]_i^2 \sum_{l=-\infty}^{i} \sum_{k=i+1}^{-1} (k-l)\ 2^{-2ks}\ 2^{ln}. \end{ma} \] Now, for any $a \in {\mathbb Z}$, $q \in [0,1)$ \[
\sum_{k=a}^\infty k q^k = q^a \sum_{k=0}^\infty (k+a) q^{k} = q^a \brac{\sum_{k=0}^\infty k q^{k} + a \sum_{k=0}^\infty q^{k}} \leq C_q\ q^a\ (a+1) \] Consequently for any $l \leq i$, \[ \begin{ma} &&\sum_{k=i+1}^0 (k-l)\ 2^{-2ks}\\ &\leq& 2^{-2ls} \sum_{k= i+1}^\infty (k-l)\ 2^{-2(k-l)s}\\ &=& 2^{-2ls} \sum_{\tilde{k}= i+1-l}^\infty \tilde{k}\ 2^{-2\tilde{k}s}\\ &\aleq{s}& 2^{-2ls} (i-l+2)\ 2^{-2s(i-l)}\\ &=& 2^{-2si}\ (i-l+2),\\ \end{ma} \] and \[ \begin{ma} &&\sum_{l=-\infty}^i 2^{ln} (i-l+2)\\ &=& 2^{ni} \sum_{l=-\infty}^i 2^{(l-i)n} (i-l+2)\\ &\leq& 2^{ni} \sum_{l=-\infty}^0 2^{ln} \brac{2-l}\\ &\aeq{}&2^{ni}. \end{ma} \] Thus, \[ \sum_{i=-\infty}^{-2} 2^{(-n+2s)i}\ [v]_i^2 \sum_{l=-\infty}^{i}\sum_{k=i+1}^{-1} (k-l)\ 2^{-2ks}\ 2^{ln} \aleq{} \sum_{i=-\infty}^{-2} [v]_i^2. \] \end{proofL}
\begin{remark}\label{rem:akegal} By the same reasoning as in Lemma~\ref{la:homogloc}, one can also see that for two families of annuli of different widths, say $A_{k} := B_{2^{k+\lambda}r} \backslash B_{2^{k-\lambda} r}$ and $\tilde{A}_k := B_{2^{k+\Lambda}r} \backslash B_{2^{k-\Lambda} r}$, we can compare \[ [v]_{A_k,s} \leq C_{\lambda,\Lambda,s} \sum_{l=k-N_{\lambda,\Lambda}}^{k+N_{\lambda,\Lambda}} [v]_{\tilde{A}_l,s}. \] In particular, we do not have to be too careful about the actual choice of the width of the family $A_k$ for quantities like \[ \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [v]_{A_k,s}, \] as long as we can afford to deal with constants depending on the change of width, i.e. if we can afford to have e.g. \[ C_{\Lambda,\lambda,\gamma,s} \sum_{l=-\infty}^\infty 2^{-\gamma \abs{l}} [v]_{\tilde{A}_l,s}. \] In fact, this is because of \[ \begin{ma} &&\sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [v]_{A_k,s}\\ &\leq& \sum_{k=-2N+1}^{2N-1} [v]_{A_k,s} + \sum_{k=-\infty}^{-2N} 2^{\gamma k}\ [v]_{A_k,s} + \sum_{k=2N}^{\infty} 2^{-\gamma k}\ [v]_{A_k,s}\\
&\aleq{\Lambda,\lambda}& \sum_{k=-2N+1}^{2N-1} \sum_{l=k-N}^{k+N} [v]_{\tilde{A}_l,s} + \sum_{k=-\infty}^{-2N} \sum_{l=k-N}^{k+N} 2^{\gamma k}\ [v]_{\tilde{A}_l,s}\\ && + \sum_{k=2N}^{\infty} \sum_{l=k-N}^{k+N} 2^{-\gamma k}\ [v]_{\tilde{A}_l,s}\\
&\aleq{\Lambda,\lambda}& 4N 2^{3\gamma N} \sum_{l=-3N}^{3N} 2^{-\gamma \abs{l}}\ [v]_{\tilde{A}_l,s} + 2^{\gamma N} \sum_{k=-\infty}^{-2N} \sum_{l=k-N}^{k+N} 2^{\gamma l}\ [v]_{\tilde{A}_l,s}\\ && + 2^{\gamma N}\sum_{k=2N}^{\infty} \sum_{l=k-N}^{k+N}2^{-\gamma l}\ [v]_{\tilde{A}_l,s}\\ &\aleq{\Lambda,\lambda}& \sum_{l=-3N}^{3N} 2^{-\gamma \abs{l}}\ [v]_{\tilde{A}_l,s} + 2N \sum_{l=-\infty}^{-N} 2^{\gamma l}\ [v]_{\tilde{A}_l,s} + 2N\ \sum_{l=N}^{\infty} 2^{-\gamma l}\ [v]_{\tilde{A}_l,s}\\ &\leq& C_{\Lambda,\lambda,\gamma} \sum_{l=-\infty}^\infty 2^{-\gamma \abs{l}}\ [v]_{\tilde{A}_l,s}. \end{ma} \] Of course, the same argument holds for $[v]_{A_k,s}$ replaced by $\Vert \laps{s} v \Vert_{L^2(A_k)}$, too. \end{remark} \section{Growth Estimates: Proof of Theorem~\ref{th:regul}}\label{sec:growth} In this section, we derive growth estimates from equations \eqref{eq:structureeq} and \eqref{eq:ELeq}, similar to the usual Dirichlet-Growth estimates. \begin{lemma}\label{la:estStrEq} Let $w \in \Hf({\mathbb R}^n,{\mathbb R}^m)$, $\varepsilon > 0$. Then there exist constants $\Lambda > 0$, $R > 0$, $\gamma > 0$ such that if $w$ is a solution of \eqref{eq:structureeq}, then for any $x_0 \in {\mathbb R}^n$, $r \in (0,R)$ \[ \begin{ma}
&&\Vert w \cdot \lapn w \Vert_{L^2(B_r(x_0))}\\ &\leq& \varepsilon \brac{\Vert \lapn w \Vert_{L^2(B_{4\Lambda r})} + [w]_{B_{4\Lambda r},\frac{n}{2}}}\\ &&+ C_{\Lambda, w} \brac{r^\frac{n}{2} + \sum_{k=1}^\infty 2^{-\gamma k} \Vert \lapn w \Vert_{L^2(A_k)} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} [w]_{A_k,\frac{n}{2}}}. \end{ma} \] Here, $A_k = B_{2^{k+1} r}(x_0) \backslash B_{2^{k-1} r}(x_0)$. \end{lemma} \begin{proofL}{\ref{la:estStrEq}} By \eqref{eq:structureeq}, \[ \Vert w \cdot \lapn w \Vert_{L^2(B_r)} \leq \Vert H(w,w) \Vert_{L^2(B_r)} + \Vert \lapn \eta^2 \Vert_{L^2(B_r)}. \] As $\lapn \eta^2$ is bounded (by a similar argument as the one in the proof of Proposition~\ref{pr:etarkgoodest}), \[ \Vert \lapn \eta^2 \Vert_{L^2(B_r)} \leq C_{\eta} r^{\frac{n}{2}}. \] We conclude by applying Lemma~\ref{la:hwwlocest}, using also Remark \ref{rem:akegal}. \end{proofL}
The next lemma is a simple consequence of H\"older's inequality and Poincar\'{e}'s inequality, Lemma~\ref{la:poinc}. \begin{lemma}\label{la:growth:avp} Let $a \in L^2({\mathbb R}^n)$. Then \[ \int \limits_{{\mathbb R}^n} a\ \varphi \leq C\ r^{\frac{n}{2}}\ \Vert a \Vert_{L^2({\mathbb R}^n)}\ \Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)} \] for any $\varphi \in C_0^\infty(B_r(x_0))$, $r > 0$. \end{lemma}
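For the reader's convenience, here is the one-line argument (a sketch, assuming Poincar\'{e}'s inequality, Lemma~\ref{la:poinc}, in the scaled form $\Vert \varphi \Vert_{L^2({\mathbb R}^n)} \aleq{} r^{\frac{n}{2}}\ \Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)}$ for $\varphi \in C_0^\infty(B_r(x_0))$, as it is applied throughout this section): \[ \int \limits_{{\mathbb R}^n} a\ \varphi \leq \Vert a \Vert_{L^2({\mathbb R}^n)}\ \Vert \varphi \Vert_{L^2({\mathbb R}^n)} \aleq{} r^{\frac{n}{2}}\ \Vert a \Vert_{L^2({\mathbb R}^n)}\ \Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)}, \] by H\"older's inequality followed by Poincar\'{e}'s inequality.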
\begin{lemma}\label{est:ELeq} For any $w \in \Hf\cap L^\infty({\mathbb R}^n,{\mathbb R}^m)$ and any $\varepsilon > 0$ there are constants $\Lambda > 0$, $R > 0$ such that if $w$ is a solution to \eqref{eq:ELeq} for some ball $\tilde{D} \subset {\mathbb R}^n$, then for any $B_{\Lambda r}(x_0) \subset \tilde{D}$, $r \in (0,R)$ and any skew-symmetric $\alpha \in {\mathbb R}^{n\times n}$, $\abs{\alpha} \leq 2$, \[ \Vert w^i \alpha_{ij} \lapn w^{j} \Vert_{L^2(B_r(x_0))} \leq \varepsilon \Vert \lapn w \Vert_{L^2(B_{\Lambda r}(x_0))} + C_{\varepsilon,\tilde{D},w} \brac{r^{\frac{n}{2}} + \sum_{k=1}^\infty 2^{-nk}\ \Vert \lapn w \Vert_{L^2(A_k)}}. \] Here, $A_k = B_{2^{k+1} r}(x_0) \backslash B_{2^{k-1} r}(x_0)$. \end{lemma} \begin{proofL}{\ref{est:ELeq}} Let $\delta = C\varepsilon > 0$ for a uniform constant $C$ which will be clear later. Set $\Lambda_1 > 1$ to be ten times the uniform constant $\Lambda$ from Theorem~\ref{th:localest} and choose $\Lambda_2 > 10$ such that \begin{equation}\label{eq:estEleq:Lambda2} \brac{\Lambda_2}^{-\frac{1}{2}} \Vert \lapn w \Vert_{L^2({\mathbb R}^n)} \leq \delta. \end{equation} We then define $\Lambda := 10 \Lambda_1 \Lambda_2$. Choose $R > 0$ such that \begin{equation}\label{eq:estEleq:wnormsleqdelta} [w]_{B_{10\Lambda r},\frac{n}{2}}+\Vert \lapn w \Vert_{L^2(B_{10\Lambda r})} \leq \delta \mbox{\quad for any $x_0 \in {\mathbb R}^n$, $r \in (0,R)$}. \end{equation} Now fix any $r \in (0,R)$, $x_0 \in {\mathbb R}^n$ such that $B_{\Lambda r}(x_0) \subset \tilde{D}$. For the sake of brevity, we set $v := w^i \alpha_{ij} \lapn w^j$. By Theorem~\ref{th:localest} \[ \Vert v \Vert_{L^2(B_r)} \leq \Vert \eta_r v \Vert_{L^2} \leq C \sup_{\ontop{\varphi \in C_0^\infty(B_{\Lambda_1 r}(x_0))}{\Vert \lapn \varphi \Vert_{L^2} \leq 1}} \int \limits \eta_r\ v\ \lapn \varphi. 
\] We have for such a $\varphi \in C_0^\infty(B_{\Lambda_1 r}(x_0))$, $\Vert \lapn \varphi \Vert_{L^2} \leq 1$, \[ \begin{ma} \int \limits_{{\mathbb R}^n} \eta_r v\ \lapn \varphi &=& \int \limits v\ \lapn \varphi + \int \limits (\eta_r-1)\ v\ \lapn \varphi\\ &=:& I + II. \end{ma} \] In order to estimate $II$, we use the compact support of $\varphi$ in $B_{{\Lambda_1} r}$ and apply Corollary \ref{co:bs:disjsupp} and Poincar\'{e}'s inequality, Lemma~\ref{la:poinc}: \[ \begin{ma} II &=& \int \limits (\eta_r-1) v\ \lapn \varphi\\ &\overset{\ontop{\sref{C}{co:bs:disjsupp}}{\sref{L}{la:poinc}}}{\leq}& C_{{\Lambda_1}} \sum_{k=1}^\infty 2^{-nk}\ \Vert \eta^k_{r} v \Vert_{L^2}\ \Vert \lapn \varphi \Vert_{L^2({\mathbb R}^n)}\\ &\leq& C_{{\Lambda_1}} \sum_{k=1}^\infty 2^{-nk}\ \Vert \eta^k_{r} v \Vert_{L^2}\\ &\leq& C_{{\Lambda_1}} \Vert w \Vert_{L^\infty}\ \sum_{k=1}^\infty 2^{-nk}\ \Vert \eta^k_{r} \lapn w \Vert_{L^2}\\ \end{ma} \] In fact, this inequality is first true for $k \geq K_{\Lambda_1}$ (when we can guarantee a disjoint support of $\eta_{r}^k$ and $\varphi$). By choosing a possibly bigger constant $C_{\Lambda_1}$ it holds also for any $k \geq 1$.\\ The remaining term $I$ is controlled by the PDE \eqref{eq:ELeq}, setting $\psi_{ij} := \alpha_{ij} \varphi$ which is an admissible test function: \[ \begin{ma} I &\overset{\eqref{eq:ELeq}}{=}& \int \limits_{{\mathbb R}^n} a_{ij}\ \alpha_{ij}\ \varphi - \alpha_{ij}\ \int \limits_{{\mathbb R}^n} \lapn w^j\ H(w^i,\varphi)\\ &=:& I_1 - \alpha_{ij}\int \limits_{{\mathbb R}^n} \eta_{4{\Lambda_1} r}\ \lapn w^j\ H(w^i,\varphi) - \alpha_{ij}\sum_{k=1}^\infty\ \int \limits_{{\mathbb R}^n} \eta^k_{4{\Lambda_1} r}\ \lapn w^j\ H(w^i,\varphi)\\ &=:& I_1 - I_2 - \sum_{k=1}^\infty I_{3,k}. \end{ma} \] By Lemma~\ref{la:growth:avp}, \[ I_1 \leq C_{\Lambda_1} r^\frac{n}{2}\ \Vert a \Vert_{L^2}. 
\] By Lemma~\ref{la:hvphilocest} (taking $r = \Lambda_1 r$ and $\Lambda = \Lambda_2$) and the choice of $\Lambda_2$ and $R$, \eqref{eq:estEleq:Lambda2} and \eqref{eq:estEleq:wnormsleqdelta}, \[ I_2 \aleq{} \delta\ \Vert \eta_{4{\Lambda_2} r} \lapn w \Vert_{L^2}. \] As for $I_{3,k}$, because the support of $\varphi$ and $\eta^k_{4{\Lambda_1} r}$ is disjoint, by Lemma~\ref{la:bs:disjsuppGen}, \[ \begin{ma} &&\int \limits_{{\mathbb R}^n} \eta^k_{4{\Lambda_1} r} \lapn w^j H(w^i,\varphi)\\ &=& \int \limits_{{\mathbb R}^n} \eta^k_{4{\Lambda_1} r} \lapn w^j \brac{\lapn (w^i \varphi) - w^i \lapn \varphi} \\ &\overset{\sref{L}{la:bs:disjsuppGen}}{\aleq{}}& C_{\Lambda_1}\ \brac{2^k r}^{-n} \Vert \eta^k_{4{\Lambda_1} r} \lapn w^j \Vert_{L^2} \Vert w \Vert_{L^\infty}\ r^{n}\\ &\aeq{}& \Vert w \Vert_{L^\infty}\ 2^{-nk}\ \Vert \eta^k_{4{\Lambda_1} r} \lapn w^j \Vert_{L^2}.\\ \end{ma} \] Using Remark \ref{rem:akegal}, we conclude. \end{proofL}
\begin{lemma}\label{la:west} Let $w \in \Hf\cap L^\infty({\mathbb R}^n,{\mathbb R}^m)$ satisfy \eqref{eq:structureeq} and \eqref{eq:ELeq} (for some ball $\tilde{D}$, and some $\eta$). Assume furthermore that $w(y) \in {\mathbb S}^{m-1}$ for almost every $y \in \tilde{D}$. Then for any $\varepsilon > 0$ there is $\Lambda > 0$, $R > 0$ and $\gamma > 0$, such that for all $r \in (0,R)$, $x_0 \in {\mathbb R}^n$ such that $B_{\Lambda r}(x_0) \subset \tilde{D}$, \[ \begin{ma} &&[w]_{B_r,\frac{n}{2}} + \Vert \lapn w \Vert_{L^2(B_r)}\\ & \leq& \varepsilon \brac{[w]_{B_{\Lambda r},\frac{n}{2}} + \Vert \lapn v \Vert_{L^2(B_{\Lambda r})}}\\ &&+ C_{\varepsilon} \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{[w]_{A_k,\frac{n}{2}} + \Vert \lapn w \Vert_{L^2(A_k)}}\\ && + C_{\varepsilon} r^{\frac{n}{2}}. \end{ma} \] Here, $A_k = B_{2^{k+1} r}(x_0) \backslash B_{2^{k-1} r}(x_0)$. \end{lemma} \begin{proofL}{\ref{la:west}} Let $\varepsilon > 0$ be given and $\delta := \delta_\varepsilon$ to be chosen later. Take from Lemma~\ref{la:estStrEq} and Lemma~\ref{est:ELeq} the smallest $R$ to be our $R > 0$ and the biggest $\Lambda$ to be our $\Lambda > 20$, such that the following holds: For any skew symmetric matrix $\alpha \in {\mathbb R}^{n\times n}$, $\abs{\alpha} \leq 2$ and any $B_{\Lambda r}(x_0) \equiv B_{\Lambda} \subset \tilde{D}$, $r \in (0,R)$ and for a certain $\gamma > 0$ \[ \begin{ma} &&\Vert w \cdot \lapn w \Vert_{L^2(B_{16r})} + \Vert w^i \alpha_{ij} \lapn w^{j} \Vert_{L^2(B_{16r})}\\ &\leq& \delta \brac{\Vert \lapn w \Vert_{L^2(B_{\Lambda r})} + [w]_{B_{\Lambda r},\frac{n}{2}}}\\ &&+ C_{\delta, w} \brac{r^\frac{n}{2} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{\Vert \lapn w \Vert_{L^2(A_k)} + [w]_{A_k,\frac{n}{2}}}}. 
\end{ma} \] In particular, as $\abs{w} = 1$ on $B_{16r}(x_0) \subset \tilde{D}$ we have \begin{equation}\label{eq:lapnwallest} \Vert \lapn w \Vert_{L^2(B_{16r})} \leq \delta \brac{\Vert \lapn w \Vert_{L^2(B_{\Lambda r})} + [w]_{B_{\Lambda r},\frac{n}{2}}} + C_{\delta, w} \brac{r^\frac{n}{2} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{\Vert \lapn w \Vert_{L^2(A_k)} + [w]_{A_k,\frac{n}{2}}}}. \end{equation} Then, by Lemma~\ref{la:comps01} we have for a certain $\gamma > 0$ (possibly smaller than the one chosen before) \[ \begin{ma} &&[w]_{B_r,\frac{n}{2}} + \Vert \lapn w \Vert_{L^2(B_r)}\\ &\overset{\sref{L}{la:comps01}}{\leq}& \varepsilon [w]_{B_{16r}} + C_{\varepsilon} \left (\Vert \lapn w \Vert_{L^2(B_{16r})} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{[w]_{A_k,\frac{n}{2}} + \Vert \lapn w \Vert_{L^2(A_k)}} \right )\\ &\overset{\eqref{eq:lapnwallest}}{\aleq{}}& \varepsilon [w]_{B_{16r}} + \delta C_{\varepsilon} \brac{\Vert \lapn w \Vert_{L^2(B_{\Lambda r})} + [w]_{B_{\Lambda r},\frac{n}{2}}}\\ && +C_{\varepsilon,\delta,w, \tilde{D}} \left (r^\frac{n}{2} + \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{[w]_{A_k,\frac{n}{2}} + \Vert \lapn w \Vert_{L^2(A_k)}} \right ).\\ \end{ma} \] Thus, if we set $\delta := \brac{C_{\varepsilon}}^{-1} \varepsilon$, the claim is proven. \end{proofL}
Finally, we can prove Theorem~\ref{th:regul}, which is an immediate consequence of the following theorem and the Euler-Lagrange equations, Lemma~\ref{pr:eleq}. \begin{theorem}\label{th:wreg} Let $w \in \Hf({\mathbb R}^n) \cap L^\infty$ as in Lemma~\ref{la:west}. Then for any $E \subset \tilde{D}$ with positive distance from $\partial \tilde{D}$ there is $\beta > 0$ such that $w \in C^{0,\beta}(E)$. \end{theorem} \begin{proofT}{\ref{th:wreg}} Squaring the estimate of Lemma~\ref{la:west}, we have for arbitrary $\varepsilon > 0$ some $\Lambda > 0$ (which we can choose w.l.o.g. to be $2^{K_\Lambda -1}$ for some $K_\Lambda \in {\mathbb N}$), $R > 0$ and $\gamma > 0$ and any $B_r(x_0) \subset {\mathbb R}^n$ where $B_{\Lambda r}(x_0) \subset \tilde{D}$, $r \in (0,R]$ \[ \begin{ma} &&\brac{[w]_{B_r,\frac{n}{2}}}^2 + \brac{\Vert \lapn w \Vert_{L^2(B_r)}}^2\\ &\leq& 4\varepsilon^2 \brac{[w]_{B_{\Lambda r},\frac{n}{2}}^2 + \Vert \lapn w \Vert_{L^2(B_{\Lambda r})}^2}\\ &&+ C_{\varepsilon} \sum_{k=-\infty}^\infty 2^{-\gamma \abs{k}} \brac{[w]_{A_k(r),\frac{n}{2}}^2 + \Vert \lapn w \Vert^2_{L^2(A_k(r))}}\\ &&+ C_{\varepsilon} r^{n}. \end{ma} \] Here, \[
A_k(r) \equiv A_k(r,x_0) = B_{2^{k+1}r}(x_0) \backslash B_{2^{k-1}r}(x_0). \] Set \[ a_k(r) \equiv a_k(r,x_0) := [w]_{A_k(r),\frac{n}{2}}^2 + \Vert \lapn w \Vert_{L^2(A_k(r))}^2. \] Then, for some uniform $C_1 > 0$ and $c_1 > 0$ and $K \equiv K_\Lambda \in {\mathbb N}$ such that $2^{K_\Lambda-1} = \Lambda$ \[ \Vert \lapn w \Vert_{L^2(B_{\Lambda r})}^2 \leq C_1 \sum_{k=-\infty}^{K_\Lambda} a_k(r), \] and by Lemma~\ref{la:homogloc} also \[ [w]_{B_{\Lambda r},\frac{n}{2}}^2 \leq C_1 \sum_{k=-\infty}^{K_\Lambda} a_k(r), \] and of course, \[ [w]_{B_r,\frac{n}{2}}^2 + \Vert \lapn w \Vert_{L^2(B_r)}^2 \geq c_1 \sum_{k=-\infty}^{-1} a_k(r), \] as well as $\Vert a_k(r) \Vert_{l^1({\mathbb Z})} \aleq{} \Vert \lapn w \Vert_{L^2({\mathbb R}^n)}^2$. Choosing $\varepsilon > 0$ sufficiently small to absorb the effects of the independent constants $c_1$ and $C_1$, this implies \begin{equation}\label{eq:wreg:growthak} \sum_{k=-\infty}^{-1} a_k(r) \leq \frac{1}{2}\sum_{k=-\infty}^{K_\Lambda} a_k(r) + C\sum_{k=-\infty}^\infty 2^{-\gamma\abs{k}} a_k(r) + C r^{n}. \end{equation} This is valid for any $B_r(x_0) \subset B_{\Lambda r}(x_0) \subset \tilde{D}$, where $r \in (0,R)$. Let $E$ be a bounded subset of $\tilde{D}$ with positive distance to the boundary $\partial \tilde{D}$. Let $R_0 \in (0,R)$ be such that for any $x_0 \in E$ the ball $B_{2\Lambda R_0}(x_0) \subset \tilde{D}$. Fix some arbitrary $x_0 \in E$. Let now for $k \in {\mathbb Z}$, \[ b_k \equiv b_k(x_0) := [w]_{A_k(\frac{R_0}{2}),\frac{n}{2}}^2 + \Vert \lapn w \Vert_{L^2(A_k(\frac{R_0}{2}))}^2 = a_k(\frac{R_0}{2}). \] Then for any $N \leq 0$, \[
\begin{ma}
\sum_{k=-\infty}^N b_k &=& \sum_{k=-\infty}^N a_{k}(\frac{R_0}{2})\\ &=& \sum_{k=-\infty}^{-1} a_{k+(N+1)}(\frac{R_0}{2})\\ &=& \sum_{k=-\infty}^{-1} a_{k}(2^{N}R_0)\\ &\overset{\eqref{eq:wreg:growthak}}{\leq}& \frac{1}{2}\sum_{k=-\infty}^{K_{\Lambda} } a_k(2^{N}R_0) + C \sum_{k=-\infty}^\infty 2^{-\gamma\abs{k}} a_k(2^N R_0) + C\ R_0^n\ 2^{nN}\\ &\leq& \frac{1}{2}\sum_{k=-\infty}^{K_{\Lambda}+N+1} a_k(\frac{R_0}{2}) + C\ 2^\gamma \sum_{k=-\infty}^\infty 2^{-\gamma\abs{k-N}} a_k(\frac{R_0}{2}) + C\ R_0^n\ 2^{nN}\\ &=& \frac{1}{2}\sum_{k=-\infty}^{K_{\Lambda}+N+1} b_k + C\ 2^\gamma \sum_{k=-\infty}^\infty 2^{-\gamma\abs{k-N}} b_k + C\ R_0^n\ 2^{nN}
\end{ma} \] Consequently, by Lemma~\ref{la:iteration}, for some $N_0 < 0$ and $\beta > 0$ (not depending on $x_0$), \[ \sum_{k=-\infty}^N b_k \leq C\ 2^{\beta N}, \quad \mbox{for any $N \leq N_0$}. \] This implies in particular for $\tilde{R}_0 = 2^{N_0}R_0$ (again using Lemma~\ref{la:homogloc}) \[ [w]_{B_r(x_0),\frac{n}{2}} \leq C_{R_0}\ r^{\frac{\beta}{2}} \mbox{\quad for all $r < \tilde{R}_0$ and $x_0 \in E$}. \] Finally, the Dirichlet Growth Theorem, Theorem~\ref{la:it:dg}, implies that $w \in C^{0,\beta}(E)$. \end{proofT}
\renewcommand{A}{A} \renewcommand{A.\arabic{subsection}}{A.\arabic{subsection}} \section{Ingredients for the Dirichlet Growth Theorem} \subsection{Iteration Lemmata} With the same argument as in \cite[Proposition A.1]{DR09Sphere} the following Iteration Lemma can be proven. \begin{lemma}\label{la:driteration} Let $a_k \in l^1({\mathbb Z})$, $a_k \geq 0$ for any $k \in {\mathbb Z}$ and assume that there are $\alpha > 0$, $\gamma > 0$, $\Lambda > 0$ such that for any $N \leq 0$ \begin{equation}\label{eq:dritersumknak}
\sum_{k=-\infty}^N a_k \leq \Lambda \left (\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} a_k + 2^{\alpha N} \right ). \end{equation} Then there is $\beta \in (0,1)$, $\Lambda_2 > 0$ such that for any $N \leq 0$ \[
\sum_{k=-\infty}^N a_k \leq 2^{\beta N} \Lambda_2. \] \end{lemma} \begin{proofL}{\ref{la:driteration}} Set for $N \leq 0$ \[ A_N := \sum_{k=-\infty}^N a_k. \] Then obviously, \[ a_k = A_k - A_{k-1}. \] Equation \eqref{eq:dritersumknak} then reads as (note that $A_N \in l^\infty({\mathbb Z})$) \[ \begin{ma} A_N &\leq& \Lambda \brac{\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} \brac{A_k-A_{k-1}} + 2^{\alpha N} }\\ &=& \Lambda \brac{\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k - \sum_{k=N+2}^\infty 2^{\gamma(N-(k-1))} A_{k-1} - A_N + 2^{\alpha N} }\\ &=& \Lambda \brac{\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k - \sum_{k=N+1}^\infty 2^{\gamma(N-k)} A_{k} - A_N + 2^{\alpha N} }\\ &=& \Lambda \brac{\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k - 2^{-\gamma}\sum_{k=N+1}^\infty 2^{\gamma(N-k+1)} A_{k} - A_N + 2^{\alpha N} }\\ &=& \Lambda \brac{(1-2^{-\gamma})\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k - A_N + 2^{\alpha N} }. \end{ma} \] This calculation is correct as $(A_k)_{k \in {\mathbb Z}} \in l^\infty({\mathbb Z})$ and $\brac{2^{\gamma {N+1-k}}}_{k = N}^\infty \in l^1([N,N+1,\ldots,\infty])$ because of the condition $\gamma > 0$. Otherwise we could not have used linearity for absolutely convergent series.\\ We have shown that \eqref{eq:dritersumknak} is equivalent to \[ A_N \leq \frac{\Lambda}{1+\Lambda} \brac{1-2^{-\gamma}} \sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k + \frac{\Lambda}{1+\Lambda} 2^{\alpha N}. \] Set $\tau := \frac{\Lambda}{\Lambda+1}\brac{1-2^{-\gamma}}$. Then, for all $N \leq 0$, \begin{equation}\label{eq:it:1step} A_N \leq \tau \sum_{k=N+1}^\infty 2^{\gamma (N+1-k)} A_k + 2^{\alpha N}. \end{equation} Set \[ \tau_k := \begin{cases}
1 \quad &\mbox{if $k = 0$},\\
\tau (\tau + 2^{-\gamma})^{k-1}\quad &\mbox{if $k \geq 1$}.
\end{cases} \] Then for any $K \geq 0$, $N \leq 0$, \begin{equation}\label{eq:it:IA} A_{N-K} \leq \tau_{K+1} \sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k + \sum_{k=0}^{K} \tau_k 2^{\alpha (N-K+k)}. \end{equation} In fact, this is true for $K = 0$, $N \leq 0$ by \eqref{eq:it:1step}. Moreover, if we assume that \eqref{eq:it:IA} holds for some $K \geq 0$ and all $N \leq 0$, we compute \[ \begin{ma} &&A_{N-K-1}\\ &=& A_{(N-1)-K}\\ &\overset{\eqref{eq:it:IA}}{\leq}& \tau_{K+1} \sum_{k=N}^\infty 2^{\gamma(N-k)} A_k + \sum_{k=0}^K \tau_k 2^{\alpha (N-1-K+k)}\\ &=&\tau_{K+1} \brac{ A_N + 2^{-\gamma}\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k}\\ &&+ \sum_{k=0}^K \tau_k 2^{\alpha (N-1-K+k)}\\ &\overset{\eqref{eq:it:1step}}{\leq}& \tau_{K+1} \brac{ \tau \sum_{k=N+1}^\infty 2^{\gamma (N+1-K)} A_k + 2^{\alpha N} + 2^{-\gamma}\sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k}\\ &&+ \sum_{k=0}^K \tau_k 2^{\alpha (N-1-K+k)}\\ &\leq& \tau_{K+1} (\tau + 2^{-\gamma}) \sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k + \tau_{K+1} 2^{\alpha N}\\ &&+ \sum_{k=0}^K \tau_k 2^{\alpha (N-(K+1)+k)}\\ &=& \tau_{K+2} \sum_{k=N+1}^\infty 2^{\gamma(N+1-k)} A_k + \sum_{k=0}^{K+1} \tau_k 2^{\alpha (N-(K+1)+k)}. \end{ma} \] This proves \eqref{eq:it:IA} for any $K \geq 0$ and $N \leq 0$. As $\tau_{K} \leq 1$, \[ A_{N-K} \leq C_\gamma \tau_{K+1} A_\infty + 2^{\alpha N} C_\alpha. \] So for any $\tilde{N} \leq 0$, \[ \begin{ma} A_{\tilde{N}} &=& A_{(\tilde{N}+\left \lfloor \frac{\abs{\tilde{N}}}{2} \right \rfloor) - \left \lfloor \frac{\abs{\tilde{N}}}{2} \right \rfloor}\\ &\leq& C_\gamma \brac{A_\infty + 1}\ \tau_{\left \lfloor \frac{\abs{\tilde{N}}}{2} \right \rfloor} + 2^{\alpha (\tilde{N}+\left \lfloor \frac{\abs{\tilde{N}}}{2} \right \rfloor)}\\ &\leq& C_{\gamma,\alpha}\ \brac{A_\infty + 1}\ \brac{\tau_{\left \lfloor \frac{\abs{\tilde{N}}}{2} \right \rfloor} + 2^{-\alpha \frac{\abs{\tilde{N}}}{2}}}. 
\end{ma} \] Using now that $\tau_{k} \leq 2^{-\theta k}$ for all $k \geq 0$ and some $\theta > 0$, we have shown that \[ A_{\tilde{N}} \leq C_{\gamma,\alpha} \brac{A_\infty+1}\ 2^{\mu \tilde{N}} \] for some small $\mu > 0$. \end{proofL}
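The key manipulation above is a discrete summation by parts against the weights $2^{\gamma(N+1-k)}$, namely $\sum_{k>N} 2^{\gamma(N+1-k)}(A_k-A_{k-1}) = (1-2^{-\gamma})\sum_{k>N} 2^{\gamma(N+1-k)}A_k - A_N$. This identity can be sanity-checked numerically on a sequence with constant tail, so that the geometric tails can be summed in closed form (a sketch only; the sequence, $\gamma$, $N$ and the truncation point $K$ are arbitrary choices, not taken from the text):

```python
import random

def check_abel_step(gamma=0.7, N=-3, K=40, seed=1):
    """Check the summation-by-parts identity from the proof of the
    iteration lemma for a nondecreasing A_k that is constant for k >= K,
    so that a_k = A_k - A_{k-1} vanishes beyond K and the remaining
    geometric tail can be summed exactly."""
    rng = random.Random(seed)
    A = {N - 1: rng.random()}
    for k in range(N, K + 1):
        A[k] = A[k - 1] + rng.random()          # a_k = A_k - A_{k-1} >= 0
    w = lambda k: 2.0 ** (gamma * (N + 1 - k))
    # left-hand side: a_k = 0 for k > K, so the series is a finite sum
    lhs = sum(w(k) * (A[k] - A[k - 1]) for k in range(N + 1, K + 1))
    # right-hand side: finite part plus the exact tail A_K * sum_{k>K} w(k)
    series = sum(w(k) * A[k] for k in range(N + 1, K + 1))
    series += A[K] * w(K) * 2.0 ** (-gamma) / (1.0 - 2.0 ** (-gamma))
    rhs = (1.0 - 2.0 ** (-gamma)) * series - A[N]
    return lhs, rhs

lhs, rhs = check_abel_step()
assert abs(lhs - rhs) < 1e-9
```

The constant-tail assumption only plays the role of the boundedness of $(A_k)$ used in the proof; it lets both infinite sums be evaluated without truncation error.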
As a consequence, the following Iteration Lemma holds as well. \begin{lemma}\label{la:iteration} For any $\Lambda_1,\Lambda_2,\gamma > 0$, $L \in {\mathbb N}$ there exist a constant $\Lambda_3 > 0$ and an integer $\bar{N} \leq 0$ such that the following holds. Let $(a_k) \in l^1({\mathbb Z})$, $a_k \geq 0$ for any $k \in {\mathbb Z}$ such that for every $N \leq 0$, \begin{equation}\label{eq:it:bigguy}
\sum \limits_{k=-\infty}^N a_k \leq \frac{1}{2} \sum \limits_{k=-\infty}^{N+L} a_k + \Lambda_1 \sum \limits_{k=-\infty}^N 2^{\gamma \brac{k-N}} a_k + \Lambda_2 \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k + \Lambda_2 2^{\gamma N}. \end{equation} Then for any $N \leq \bar{N}$, \[
\sum \limits_{k=-\infty}^N a_k \leq \Lambda_3 \sum \limits_{k=N+1}^{\infty} 2^{\gamma (N-k)} a_k + \Lambda_3 2^{\gamma N} \] and consequently by Lemma~\ref{la:driteration} for some $\beta \in (0,1)$, $\Lambda_4 > 0$ (depending only on $\Vert a_k \Vert_{l^1({\mathbb Z})}$, $\Lambda_3$) and for any $N \leq \bar{N}$ \[
\sum_{k=-\infty}^N a_k \leq \Lambda_4 2^{\beta N}. \] \end{lemma} \begin{proofL}{\ref{la:iteration}} Firstly, \eqref{eq:it:bigguy} implies, after absorbing $\frac{1}{2} \sum_{k=-\infty}^N a_k$ into the left-hand side, \[ \begin{ma}
&&\sum \limits_{k=-\infty}^N a_k\\ &\overset{\eqref{eq:it:bigguy}}{\leq}& 2 \sum \limits_{k=N+1}^{N+L} a_k + 2\Lambda_1 \sum \limits_{k=-\infty}^N 2^{\gamma (k-N)} a_k\\ &&+ 2\Lambda_2 \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k + \Lambda_2 2^{\gamma N}\\ &\leq& 2^{\gamma L+1} \sum \limits_{k=N+1}^{N+L} 2^{\gamma(N-k)} a_k + 2\Lambda_1 \sum \limits_{k=-\infty}^N 2^{\gamma (k-N)} a_k\\ &&+ 2\Lambda_2 \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k +\Lambda_2 2^{\gamma N}\\ &\leq& 2\Lambda_1 \sum \limits_{k=-\infty}^N 2^{\gamma (k-N)} a_k\\ &&+ \brac{2^{\gamma L+1} + 2\Lambda_2}\ \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k +\Lambda_2 2^{\gamma N}.\\ \end{ma} \] Next, choose $K \in {\mathbb N}$ such that $2^{-\gamma K} \leq \frac{1}{4\Lambda_1}$. Then, \[ \begin{ma}
&&\sum \limits_{k=-\infty}^N a_k\\
&\leq& 2\Lambda_1 \sum \limits_{k=-\infty}^{N-K} 2^{\gamma (k-N)} a_k + 2\Lambda_1 \sum \limits_{k=N-K+1}^N 2^{\gamma (k-N)} a_k\\ &&\quad+ \brac{2^{\gamma L+1} + 2\Lambda_2}\ \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k + \Lambda_2 2^{\gamma N}\\ &\leq& \frac{1}{2} \sum \limits_{k=-\infty}^{N-K} a_k + 2\Lambda_1 \sum \limits_{k=N-K+1}^N a_k\\ &&+ \brac{2^{\gamma L+1} + 2\Lambda_2}\ \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k + \Lambda_2 2^{\gamma N}. \end{ma} \] Consequently, again by absorbing \[ \begin{ma}
&&\sum \limits_{k=-\infty}^{N-K} a_k\\
&\leq& 4\Lambda_1 \sum \limits_{k=N-K+1}^N a_k + \brac{2^{\gamma L+2} + 4\Lambda_2}\ \sum \limits_{k=N+1}^\infty 2^{\gamma(N-k)} a_k +2 \Lambda_2 2^{\gamma N}\\ &\leq& 4\Lambda_1 2^{\gamma K} \sum \limits_{k=N-K+1}^N 2^{\gamma(N-K-k)} a_k\\ &&\quad + 2^{\gamma K}\brac{2^{\gamma L+2} + 4\Lambda_2}\ \sum \limits_{k=N+1}^\infty 2^{\gamma(N-K-k)} a_k +2 \Lambda_2 2^{\gamma N}\\ &\leq& \brac{4\Lambda_1 2^{\gamma K} + 2^{\gamma K}\brac{2^{\gamma L+2} + 4\Lambda_2}}\ \sum \limits_{k=N-K+1}^\infty 2^{\gamma(N-K-k)} a_k\\ &&+2 \Lambda_2 2^K\ 2^{\gamma {N-K}}\\ &=:& \Lambda_3\ \brac{\sum \limits_{k=N-K+1}^\infty 2^{\gamma(N-K-k)} a_k + 2^{\gamma \brac{N-K}}}. \end{ma} \] This is valid for any $N \leq 0$, so for any $\tilde{N} \leq -K$ \[
\sum \limits_{k=-\infty}^{\tilde{N}} a_k \leq \Lambda_3\ \brac{\sum \limits_{k=\tilde{N}+1}^\infty 2^{\gamma(\tilde{N}-k)} a_k + 2^{\gamma \tilde{N}}}. \] We conclude by Lemma~\ref{la:driteration}. \end{proofL}
\subsection{A fractional Dirichlet Growth Theorem} In this section we will state and prove a Dirichlet Growth-Type theorem using mainly Poincar\'{e}'s inequality. For an approach by potential analysis, we refer to \cite{Adams75}, in particular \cite[Corollary after Proposition 3.4]{Adams75}.\\ Let us introduce some quantities related to Morrey- and Campanato spaces as treated in \cite{GiaquintaMI83} for some domain $D \subset {\mathbb R}^n$, $\lambda > 0$ \[
J_{D,\lambda,R}(v) := \sup_{\ontop{x \in D}{0 < \rho < R}}\ \brac{\rho^{-\lambda}\ \int \limits_{D \cap B_{\rho}(x)} \abs{v}^2}^{\frac{1}{2}} \] and \[
M_{D,\lambda,R}(v) := \sup_{\ontop{x \in D}{0 < \rho < R}}\ \brac {\rho^{-\lambda}\ \int \limits_{D \cap B_{\rho}(x)} \abs{v-(v)_{D \cap B_\rho (x)}}^2}^{\frac{1}{2}}. \] Moreover, let us denote by $C^{0,\alpha}(D)$, $\alpha \in (0,1)$ all H\"older continuous functions with the exponent $\alpha$. Then the following relations hold: \begin{lemma}[Integral Characterization of H\"older continuous functions]\label{la:it:cshoelder} (See \cite[Theorem III.1.2]{GiaquintaMI83})\\ Let $D \subset {\mathbb R}^n$ be a smoothly bounded set, and $\lambda \in (n,n+2)$, $v \in L^2(D)$. Then $v \in C^{0,\alpha}(D)$ for $\alpha = \frac{\lambda-n}{2}$ if and only if for some $R > 0$ \[
M_{D,\lambda,R}(v) < \infty. \] \end{lemma}
\begin{lemma}[Relation between Morrey- and Campanato spaces]\label{la:it:mscs} (See \cite[Proposition III.1.2]{GiaquintaMI83})\\ Let $D \subset {\mathbb R}^n$ be a smoothly bounded set, and $\lambda \in (1,n)$, $v \in L^2(D)$. Then for a constant $C_{D,\lambda} > 0$ \[
J_{D,\lambda,R}(v) \leq C_{D,\lambda,R}\ \brac{\Vert v \Vert_{L^2(D)} + M_{D,\lambda,R}(v)}. \] \end{lemma}
As a consequence of Lemma~\ref{la:it:mscs} we have \begin{lemma}\label{la:it:mdleqmdnN} Let $D \subset {\mathbb R}^n$ be a convex, smoothly bounded domain. Set $N := \lceil \frac{n}{2} \rceil -1$. Then if $v \in L^2(D)$, $\lambda \in (n,n+2)$, \[
M_{D,\lambda,R}(v) \leq C_{D,\lambda,R} \brac{\Vert v \Vert_{H^{N}(D)} + \sum \limits_{\abs{\alpha} = N} M_{D ,\lambda-2N,R}(\partial^\alpha v)}. \] \end{lemma} \begin{proofL}{\ref{la:it:mdleqmdnN}} For any $r \in (0,R)$, $x \in D$ set $B_r \equiv B_r(x)$. As $D$ is convex, also $B_r \cap D$ is convex, so by classic Poincar\'e inequality on convex sets, Lemma~\ref{la:poincCMV}, \[ \begin{ma} \int \limits_{D \cap B_r} \abs{v-(v)_{D \cap B_r}}^2 &\overset{\sref{L}{la:poincCMV}}{\leq}& C \operatorname{diam}(D \cap B_r)^2\ \int \limits_{D \cap B_r} \abs{\nabla v}^2\\ &\aleq{}& r^2 \int \limits_{D \cap B_r} \abs{\nabla v}^2. \end{ma} \] Consequently, \[
M_{D,\lambda,R}(v) \leq C_n\ J_{D, \lambda-2,R}(\nabla v). \] As $\lambda \in (n,n+2)$, that is in particular $\lambda - 2 < n$, by Lemma~\ref{la:it:mscs}, \[
J_{D, \lambda-2,R}(\nabla v) \leq C_{D,R,\lambda}\ \left (\Vert \nabla v \Vert_{L^2(D)} + M_{D,\lambda,R}(\nabla v) \right ). \] Iterating this estimate $N$ times, using that $\lambda - 2N > 0$, we conclude. \end{proofL}
Finally, we can prove a sufficient condition for H\"older continuity on $D$ expressed by the growth of $\lapn v$: \begin{lemma}[Dirichlet Growth Theorem]\label{la:it:dg} Let $D \subset {\mathbb R}^n$ be a smoothly bounded, convex domain, let $v \in H^{\frac{n}{2}}({\mathbb R}^n)$ and assume there are constants $\Lambda > 0$, $\alpha \in (0,1)$, $R > 0$ such that \begin{equation}\label{eq:smallgrowthlapnv}
\sup_{\ontop{r \in (0,R)}{x \in D}} r^{-\alpha} [v]_{B_r(x),\frac{n}{2}} \leq \Lambda. \end{equation} Then $v \in C^{0,\alpha}(D)$. \end{lemma} \begin{proofL}{\ref{la:it:dg}} We only treat the case where $n$ is odd; the case of even dimension is similar. Set $N := \lfloor \frac{n}{2} \rfloor$. We have for any $x \in D$, $r \in (0,R)$, $D_r \equiv D_r(x):= B_r(x) \cap D$, using that the boundary of $D$ is smooth and thus $\abs{D_r(x)} \geq c_D \abs{B_r(x)}$ for any $x \in D$ (because there are no sharp outward cusps in $D$) \[ \begin{ma}
&&\int \limits_{D_r} \abs{\nabla^N v(x) - \brac{\nabla^N v}_{D_r}}^2\\ &\aleq{}& \frac{\brac{\operatorname{diam}(D_r)}^{2(n-N)}} {\abs{D_r}}\ \int \limits_{D_r} \int \limits_{D_r} \frac{\abs{\nabla^N v(x)-\nabla^N v(y)}^2}{\abs{x-y}^{2(n-N)}}\ dx\ dy\\ &\aleq{}& r^{n-2N} \brac{[v]_{B_r(x),\frac{n}{2}}}^2\\ &\overset{\eqref{eq:smallgrowthlapnv}}{\aleq{}}& r^{n-2N+2\alpha} \Lambda^2. \end{ma} \] Thus, for $\lambda = n+2\alpha \in (n,n+2)$ \[ M_{D ,\lambda-2N,R}(\nabla^N v) \aleq{} \Lambda. \] By Lemma~\ref{la:it:mdleqmdnN} this implies \[
M_{D ,\lambda,R}(v) \aleq{} \Lambda+\Vert v \Vert_{H^{N}(D)} < \infty \] which by Lemma~\ref{la:it:cshoelder} is equivalent to $v \in C^{0,\alpha}(D)$. \end{proofL}
\begin{tabbing} \quad\=Armin Schikorra\\ \>RWTH Aachen University\\ \>Institut f\"ur Mathematik\\ \>Templergraben 55\\ \>52062 Aachen\\ \>Germany\\ \\ \>email: [email protected]\\ \>page: www.instmath.rwth-aachen.de/$\sim$schikorra \end{tabbing}
\end{document}
\begin{definition}[Definition:Euclidean Metric/Real Vector Space]
Let $\R^n$ be an $n$-dimensional real vector space.
The '''Euclidean metric''' on $\R^n$ is defined as:
:$\ds \map {d_2} {x, y} := \paren {\sum_{i \mathop = 1}^n \paren {x_i - y_i}^2}^{1 / 2}$
where $x = \tuple {x_1, x_2, \ldots, x_n}, y = \tuple {y_1, y_2, \ldots, y_n} \in \R^n$.
\end{definition}
\begin{document}
\title{A note on Fourier coefficients of Poincar\'e series}
\author{Emmanuel Kowalski} \address{ETH Z\"urich -- D-MATH\\
R\"amistrasse 101\\
8092 Z\"urich\\
Switzerland} \email{[email protected]} \author{Abhishek Saha} \address{ETH Z\"urich -- D-MATH\\
R\"amistrasse 101\\
8092 Z\"urich\\
Switzerland} \email{[email protected]} \author{Jacob Tsimerman} \address{Princeton University \\ Fine Hall, Princeton
NJ 08540, USA} \email{[email protected]}
\subjclass[2000]{11F10, 11F30, 11F46}
\keywords{Poincar\'e series,
Fourier coefficients, Siegel modular forms, orthogonality, Siegel
fundamental domain}
\begin{abstract}
We give a short and ``soft'' proof of the asymptotic orthogonality
of Fourier coefficients of Poincar\'e series for classical modular
forms as well as for Siegel cusp forms, in a qualitative form. \end{abstract}
\maketitle
\section{Introduction}
The Petersson formula (see, e.g.,~\cite[Ch. 14]{ant}) is one of the most basic tools in the analytic theory of modular forms on congruence subgroups of $\SL(2,\mathbf{Z})$. One of its simplest consequences, which explains its usefulness, is that it provides the asymptotic orthogonality of distinct Fourier coefficients for an orthonormal basis in a space of cusp forms, when the analytic conductor is large (e.g., when the weight or the level is large). From the proof of the Petersson formula, we see that this orthogonality is equivalent (on a qualitative level) to the assertion that the $n$-th Fourier coefficient of the $m$-th Poincar\'e series is essentially the Kronecker symbol $\delta(m,n)$. \par In this note, we provide a direct ``soft'' proof of this fact in the more general context of Siegel modular forms when the main parameter is the weight $k$. Although this is not sufficient to derive the strongest applications (e.g., to averages of $L$-functions in the critical strip), it provides at least a good motivation for the more quantitative orthogonality relations required for those. And, as we show in our paper~\cite{kst} concerning the local spectral equidistribution of Satake parameters for certain families of Siegel modular forms of genus $g=2$, the ``soft'' proof suffices to derive some basic consequences, such as the analogue of ``strong approximation'' for cuspidal automorphic representations, and the determination of the conjectural ``symmetry type'' of the family. See Corollary~\ref{cor-simple} for a simple example of this when $g=1$.
\par
\par \textbf{Acknowledgements.} Thanks to M. Burger for helpful remarks concerning the geometry of the Siegel fundamental domain.
\section{Classical modular forms}
In this section, we explain the idea of our proof for classical modular forms; we hope this will be useful as a comparison point in the next section, especially for readers unfamiliar with Siegel modular forms. Let $k\geqslant 2$ be an even integer, $m\geqslant 1$ an integer. The $m$-th Poincar\'e series of weight $k$ is defined by $$ P_{m,k}(z)=\sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma}{ (cz+d)^{-k}e(m\gamma\cdot z) }, $$ where $\Gamma=SL(2,\mathbf{Z})$, acting on the Poincar\'e upper half-plane $\mathbf{H}$, $$ \Gamma_{\infty} = \Bigl\{\pm\begin{pmatrix}1&n\\0&1\end{pmatrix}: n\in \mathbf{Z} \Bigr\} $$ is the stabilizer of the cusp at infinity, and we write $$ \gamma = \begin{pmatrix}a&b\\c&d \end{pmatrix},\quad\quad (a,b,c,d)\in\mathbf{Z}^4. $$ \par It is well known that for $k\geqslant 4$, $m\geqslant 1$, this series converges absolutely and uniformly on compact sets, and that it defines a cusp form of weight $k$ for $\Gamma=\SL(2,\mathbf{Z})$. We denote by $p_{m,k}(n)$, $n\geqslant 1$, the Fourier coefficients of this Poincar\'e series, so that $$ P_{m,k}(z)=\sum_{n\geqslant 1}{p_{m,k}(n)e(nz)} $$ for all $z\in \mathbf{H}$.
\begin{proposition}[Asymptotic orthogonality of Fourier coefficients
of Poincar\'e series]\label{pr-k}
With notation as above, for fixed $m\geqslant 1$, $n\geqslant 1$, we
have $$ \lim_{k\rightarrow +\infty}{p_{m,k}(n)}=\delta(m,n). $$ \end{proposition}
\begin{proof} The idea is to use the definition of Fourier coefficients as $$ p_{m,k}(n)=\int_{U}{P_{m,k}(z)e(-nz)dz} $$ where $U$ is a suitable horizontal interval of length $1$ in $\mathbf{H}$, and $dz$ is the Lebesgue measure on such an interval; we then let $k\rightarrow +\infty$ under the integral sign, using the definition of the Poincar\'e series to understand that limit. \par We select $$
U=\{x+iy_0 \,\mid\, |x|\leqslant 1/2\} $$ for some fixed $y_0>1$. The Lebesgue measure is then of course $dx$. \par Consider a term $$ (cz+d)^{-k}e(m\gamma \cdot z) $$ in the Poincar\'e series as $k\rightarrow +\infty$. We have $$
\Bigl|(cz+d)^{-k}e(m\gamma \cdot z)
\Bigr|\leqslant |cz+d|^{-k} $$ for all $z \in \mathbf{H}$ and $\gamma\in \SL(2,\mathbf{Z})$, since $m\geqslant 0$ and $\gamma\cdot z\in \mathbf{H}$. But for $z\in U$, we find \begin{equation}\label{eq-j}
|cz+d|^2=(cx+d)^2+c^2y_0^2\geqslant c^2y_0^2. \end{equation} \par If $c\not=0$, since $c$ is an integer, the choice of $y_0>1$ leads to $c^2y_0^2>1$, and hence $$
\Bigl|(cz+d)^{-k}e(m\gamma \cdot z)
\Bigr|\rightarrow 0 $$ as $k\rightarrow +\infty$, uniformly for $z\in U$ and $\gamma\in \Gamma$ with $c\not=0$. On the other hand, if $c=0$, we have $\gamma\in \Gamma_{\infty}$; this corresponds to a single coset in $\Gamma_{\infty}\backslash\Gamma$, for which we take the representative $\gamma=\mathrm{Id}$, and we then have $$ (cz+d)^{-k}e(m\gamma \cdot z)=e(mz) $$ for all $k$ and $z\in U$. \par Moreover, all of this also shows that $$
\Bigl|(cz+d)^{-k}e(m\gamma \cdot z)
\Bigr|\leqslant |cz+d|^{-4} $$ for $k\geqslant 4$ and $\gamma\in \Gamma_{\infty}\backslash\Gamma$. Since the right-hand side, summed over $\gamma\in\Gamma_{\infty}\backslash\Gamma$, converges absolutely and uniformly on compact sets, we derive by dominated convergence that $$ P_{m,k}(z)\rightarrow e(mz) $$ for all $z\in U$. The above inequality gives further $$
|P_{m,k}(z)|\leqslant \sum_{\gamma\in\Gamma_{\infty}\backslash
\Gamma}{|cz+d|^{-4}} $$ for $k\geqslant 4$ and $z\in U$. Since $U$ is compact, we can integrate by dominated convergence again to obtain $$ \int_{U}{P_{m,k}(z)e(-nz)dz}\longrightarrow \int_{U}{e((m-n)z)dz} =\delta(m,n) $$ as $k\rightarrow +\infty$. \end{proof}
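The limit in Proposition~\ref{pr-k} can also be observed numerically. The sketch below (Python, purely illustrative and not part of the paper) truncates the sum over $\Gamma_\infty\backslash\Gamma$, using the standard parametrization of the cosets by coprime pairs $(c,d)$ with $c>0$ together with the identity coset, and approximates the integral over $U$ by an average over equally spaced points; all truncation parameters are ad-hoc choices.

```python
import cmath
import math
from math import gcd

def egcd(x, y):
    """Extended Euclid: returns (g, u, v) with u*x + v*y = g."""
    if y == 0:
        return (x, 1, 0)
    g, u, v = egcd(y, x % y)
    return (g, v, u - (x // y) * v)

def coset_reps(cmax, dmax):
    """One matrix (a, b, c, d) per coset of Gamma_infty in SL(2, Z) with
    0 < c <= cmax and |d| <= dmax, plus the identity coset (c = 0)."""
    reps = [(1, 0, 0, 1)]
    for c in range(1, cmax + 1):
        for d in range(-dmax, dmax + 1):
            if gcd(c, d) == 1:
                g, u, v = egcd(d, c)        # u*d + v*c = 1
                reps.append((u, -v, c, d))  # det = u*d - (-v)*c = 1
    return reps

def poincare_coeff(m, n, k, y0=2.0, M=64, cmax=5, dmax=25):
    """Approximate p_{m,k}(n) = int_U P_{m,k}(z) e(-nz) dx: the integrand
    is 1-periodic in x, so a plain average over M points converges fast."""
    reps = coset_reps(cmax, dmax)
    e = lambda w: cmath.exp(2j * math.pi * w)
    total = 0.0
    for j in range(M):
        z = complex(-0.5 + j / M, y0)
        P = sum((c * z + d) ** (-k) * e(m * (a * z + b) / (c * z + d))
                for (a, b, c, d) in reps)
        total += P * e(-n * z)
    return total / M

# terms with c != 0 are killed like (c*y0)^(-k) as k grows
print(abs(poincare_coeff(1, 1, 60)))   # close to 1
print(abs(poincare_coeff(1, 2, 60)))   # close to 0
```

For $k=60$ and $y_0=2$ the terms with $c\neq 0$ are dominated by $(c^2y_0^2)^{-k/2}$, which overwhelms the factor $e^{2\pi n y_0}$ coming from $e(-nz)$, so the output is $\delta(m,n)$ up to a negligible error, exactly as in the proof.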
It turns out that the same basic technique works for the other most important parameter of cusp forms, the level. For $q\geqslant 1$ and $m\geqslant 1$ integers, let now $$ P_{m,q}(z)=\sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma_0(q)}{ (cz+d)^{-k}e(m\gamma\cdot z) } $$ be the $m$-th Poincar\'e series of weight $k$ for the Hecke group $\Gamma_0(q)$, and let $p_{m,q}(n)$ denote its Fourier coefficients.
\begin{proposition}[Orthogonality with respect to the level]\label{pr-q}
With notation as above, for $k\geqslant 4$ fixed, for any fixed $m$ and
$n$, we have $$ \lim_{q\rightarrow +\infty}{p_{m,q}(n)}=\delta(m,n). $$ \end{proposition}
\begin{proof} We start with the integral formula $$ p_{m,q}(n)=\int_{U}{P_{m,q}(z)e(-nz)dz} $$ as before. To proceed, we observe that $\Gamma_{\infty}\backslash \Gamma_0(q)$ is a subset of $\Gamma_{\infty}\backslash \Gamma$, and hence we can write $$ P_{m,q}(z)=\sum_{\gamma\in \Gamma_{\infty}\backslash
\Gamma}{\Delta_q(\gamma) (cz+d)^{-k}e(m\gamma\cdot z) }, $$ where $$ \Delta_q\Bigl(\begin{pmatrix}a&b\\c&d\end{pmatrix} \Bigr)= \begin{cases} 1&\text{ if } c\equiv 0\mods{q},\\ 0&\text{ otherwise.} \end{cases} $$ \par We let $q\rightarrow +\infty$ in each term of this series. Clearly, we have $\Delta_q(\gamma)=0$ for all $q>c$, unless if $c=0$, in which case $\Delta_q(\gamma)=1$. Thus $$ \Delta_q(\gamma) (cz+d)^{-k}e(m\gamma\cdot z)\rightarrow 0 $$ if $c\not=0$, and otherwise $$ \Delta_q(\gamma) (cz+d)^{-k}e(m\gamma\cdot z)=e(mz). $$ \par Moreover, we have obviously $$
\Bigl|\Delta_q(\gamma) (cz+d)^{-k}e(m\gamma\cdot z)
\Bigr|\leqslant |cz+d|^{-k} $$ and since $k\geqslant 4$, this defines an absolutely convergent series for all $z$. We therefore obtain $$ P_{m,q}(z)\rightarrow e(mz) $$ for any $z\in U$. Finally, the function $$
z\mapsto \sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma}{|cz+d|^{-k}} $$ being integrable on $U$, we obtain the result after integrating. \end{proof}
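The same experiment works in the level aspect: the factor $\Delta_q$ keeps only the matrices with $q\mid c$, so for fixed $k$ every non-identity term is dominated by $(qy_0)^{-k}$ and dies as $q\rightarrow+\infty$. A self-contained numerical sketch (again illustrative only, with ad-hoc truncation parameters):

```python
import cmath
import math
from math import gcd

def egcd(x, y):
    """Extended Euclid: returns (g, u, v) with u*x + v*y = g."""
    if y == 0:
        return (x, 1, 0)
    g, u, v = egcd(y, x % y)
    return (g, v, u - (x // y) * v)

def poincare_coeff_level(m, n, k, q, y0=2.0, M=64, cmax=120, dmax=25):
    """Approximate p_{m,q}(n) for Gamma_0(q): same coset sum as for the
    full modular group, but only pairs (c, d) with q | c survive (this is
    the factor Delta_q in the proof)."""
    reps = [(1, 0, 0, 1)]
    for c in range(q, cmax + 1, q):          # c = q, 2q, ... : Delta_q = 1
        for d in range(-dmax, dmax + 1):
            if gcd(c, d) == 1:
                g, u, v = egcd(d, c)         # u*d + v*c = 1
                reps.append((u, -v, c, d))
    e = lambda w: cmath.exp(2j * math.pi * w)
    total = 0.0
    for j in range(M):
        z = complex(-0.5 + j / M, y0)
        P = sum((c * z + d) ** (-k) * e(m * (a * z + b) / (c * z + d))
                for (a, b, c, d) in reps)
        total += P * e(-n * z)
    return total / M

# now the weight stays fixed (k = 12) and the level grows
print(abs(poincare_coeff_level(1, 1, 12, q=40)))   # close to 1
print(abs(poincare_coeff_level(1, 2, 12, q=40)))   # close to 0
```

This is the numerical counterpart of Proposition~\ref{pr-q}: with $k=12$ fixed, already at $q=40$ the smallest admissible $c$ equals $q$, and $|cz+d|^{-k}\leqslant (qy_0)^{-k}$ makes all non-identity terms negligible.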
Here is a simple application to show that such qualitative statements are not entirely content-free:
\begin{corollary}[``Strong approximation'' for $GL(2)$-cusp forms] \label{cor-simple}
Let $\mathbf{A}$ be the ad\`ele ring of $\mathbf{Q}$. For each irreducible,
cuspidal, automorphic representation $\pi$ of $GL(2,\mathbf{A})$ and each
prime $p$, let $\pi_p$ be the unitary, admissible representation of
$GL(2,\mathbf{Q}_p)$ that is the local component of $\pi$ at $p$. Then, for
any finite set of primes $S$, as $\pi$ runs over the cuspidal
spectrum of $GL(2,\mathbf{A})$ unramified at primes in $S$, the set of
tuples $(\pi_p)_{p\in S}$ is dense in the product over $p\in S$ of
the unitary tempered unramified spectrum $X_p$ of $GL(2,\mathbf{Q}_p)$. \end{corollary}
\begin{proof}
This is already known, due to Serre~\cite{serre} (if one uses
holomorphic forms) or Sarnak~\cite{sarnak} (using Maass forms), but
we want to point out that this is a straightforward consequence of
Proposition~\ref{pr-k}; for more details, see the Appendix
to~\cite{kst}. We first recall that the part of unitary unramified
spectrum of $GL(2,\mathbf{Q}_p)$ with trivial central character can be
identified with $[-2\sqrt{p},2\sqrt{p}]$ via the map sending Satake
parameters $(\alpha,\beta)$ to $\alpha+\beta$. The subset $X_p$ can
then be identified with $[-2,2]$, and for $\pi=\pi(f)$ attached to a
cuspidal primitive form unramified at $p$, the local component
$\pi_p(f)$ corresponds to the normalized Hecke eigenvalue $\lambda_f(p)$. \par Now the (well-known) point is that for any integer of the form $$ m=\prod_{p\in S}{p^{n(p)}}\geqslant 1, $$ and any cusp form $f$ of weight $k$ with Fourier coefficients $n^{(k-1)/2}\lambda_f(n)$, the characteristic property $$ \langle f,P_{m,k}(\cdot)\rangle=\frac{\Gamma(k-1)}{(4\pi m)^{k-1}} m^{(k-1)/2}\lambda_f(m) $$ of Poincar\'e series (see, e.g.,~\cite[Lemma 14.3]{ant}) implies that $$ p_{m,k}(1)=\sum_{f\in H_k}{\omega_f\lambda_f(m)} =\sum_{f\in
H_k}{\omega_f\prod_{p\in S}{U_{n(p)}(\lambda_f(p))}},\quad
\omega_f=\frac{\Gamma(k-1)}{(4\pi)^{k-1}}\frac{1}{\|f\|^2}, $$ where $H_k$ is the Hecke basis of weight $k$ and level $1$, $U_n$
denotes Chebychev polynomials, and $\|f\|$ is the Petersson norm. Because the linear combinations of Chebychev polynomials are dense in $C([-2p^{1/2},2p^{1/2}])$ for any prime $p$, the fact that $$ \lim_{k\rightarrow +\infty}{p_{m,k}(1)}=\delta(m,1)= \begin{cases} 1&\text{ if all $n(p)$ are zero,}\\ 0&\text{ otherwise,} \end{cases} $$ (given by Proposition~\ref{pr-k}) shows, using the Weyl equidistribution criterion, that $(\pi_p(f))_{f\in H_k}$, when counted with weight $\omega_f$, becomes equidistributed as $k\rightarrow +\infty$ with respect to the product of Sato-Tate measures over $p\in S$. Since each factor has support equal to $[-2,2]=X_p$, this implies trivially the result. \end{proof}
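Two classical facts are used implicitly in this argument: $U_n(2\cos\theta)=\sin((n+1)\theta)/\sin\theta$, with the normalization $U_{k+1}(x)=xU_k(x)-U_{k-1}(x)$ matching the eigenvalues $\lambda_f(p)\in[-2,2]$, and the orthonormality of the $U_n$ with respect to the Sato-Tate measure $\frac{2}{\pi}\sin^2\theta\,d\theta$, which is what makes the Weyl criterion applicable. A quick numerical sketch of both (illustrative only):

```python
import math

def cheb_u(n, x):
    """Chebychev polynomials of the second kind in the variable
    x = 2*cos(theta): U_0 = 1, U_1 = x, U_{k+1} = x*U_k - U_{k-1}."""
    if n == 0:
        return 1.0
    u_prev, u = 1.0, x
    for _ in range(n - 1):
        u_prev, u = u, x * u - u_prev
    return u

# closed form U_n(2 cos t) = sin((n+1) t) / sin(t)
t = 0.8
for n in range(6):
    closed = math.sin((n + 1) * t) / math.sin(t)
    assert abs(cheb_u(n, 2 * math.cos(t)) - closed) < 1e-12

def sato_tate_inner(m, n, pts=20000):
    """Midpoint rule for int_0^pi U_m(2cos t) U_n(2cos t) (2/pi) sin^2 t dt;
    the integrand is a trigonometric polynomial, so the rule is essentially
    exact.  The result should be delta(m, n)."""
    h = math.pi / pts
    s = 0.0
    for j in range(pts):
        t = (j + 0.5) * h
        s += cheb_u(m, 2 * math.cos(t)) * cheb_u(n, 2 * math.cos(t)) * math.sin(t) ** 2
    return s * h * 2 / math.pi

print(round(abs(sato_tate_inner(2, 2)), 4))   # ~ 1
print(round(abs(sato_tate_inner(2, 3)), 4))   # ~ 0
```

The second function is exactly the inner product that the Weyl criterion reduces to: equidistribution of the $\lambda_f(p)$ with respect to the Sato-Tate measure is equivalent to the weighted averages of $U_n(\lambda_f(p))$ tending to $\delta(n,0)$.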
\section{Siegel modular forms}
We now proceed to generalize the previous result to Siegel cusp forms; although some notation will be recycled, there should be no confusion. For $g\geqslant 1$, let $\mathbf{H}_g$ denote the Siegel upper half-space of genus $g$ $$ \mathbf{H}_g=\{z=x+iy\in M(g,\mathbf{C})\,\mid\, \T{z}=z,\quad y \text{ is positive definite}\}, $$ on which the group $\Gamma_g=\Sp(2g,\mathbf{Z})$ acts in the usual way $$ \gamma\cdot z=(az+b)(cz+d)^{-1} $$ (see, e.g.,~\cite[Ch. 1]{klingen} for such basic facts; we always write $$ \gamma=\begin{pmatrix} a&b\\ c& d \end{pmatrix} $$ for symplectic matrices, where the blocks are themselves $g\times g$ matrices). Let $A_g$ denote the set of symmetric, positive-definite matrices in $M(g,\mathbf{Z})$ with integer entries on the main diagonal and half-integer entries off it. Further, let $$ \Gamma_{\infty} = \Bigl\{\pm\begin{pmatrix}1&s\\0&1\end{pmatrix}: s \in M(g,\mathbf{Z}),\ s=\T{s} \Bigr\}. $$ \par For $k\geqslant 2$, even,\footnote{\ Forms of odd weight $k$ do exist if
$g$ is even, but behave a little bit differently, and we restrict to
$k$ even for simplicity.} and a matrix $s\in A_g$, the Poincar\'e series $\mathcal{P}_{s,k}$ is defined by $$ \mathcal{P}_{s,k}(z)=\sum_{\gamma\in \Gamma_{\infty}\backslash \Gamma_g}{ \det(cz+d)^{-k}e(\Tr(s\ (\gamma\cdot z))) } $$ for $z$ in $\mathbf{H}_g$. This series converges absolutely and uniformly on compact sets of $\mathbf{H}_g$ for $k>2g$; indeed, as shown by Maass~\cite[(32), Satz 1]{maass}, the series $$ \mathcal{M}_{s,k}(z)=\sum_{\gamma\in \Gamma_{\infty}\backslash
\Gamma_g}{ |\det(cz+d)|^{-k}\exp(-2\pi\Tr(s\ \Imag(\gamma\cdot z)))} $$ which dominates it termwise converges absolutely and uniformly on compact sets (see also~\cite[p. 90]{klingen}; note that, in contrast with the case of $SL(2,\mathbf{Z})$, one can not ignore the exponential factor here to have convergence). The Poincar\'e series $\mathcal{P}_{s,k}$ is then a Siegel cusp form of weight $k$ for $\Gamma_g$. Therefore, it has a Fourier expansion $$ \mathcal{P}_{s,k}(z)=\sum_{t\in A_g}{p_{s,k}(t)e(\Tr(tz))}, $$ which converges absolutely and uniformly on compact subsets of $\mathbf{H}_g$.
\begin{theorem}[Orthogonality for Siegel-Poincar\'e
series]\label{th:siegelortho}
With notation as above, for any fixed $s$, $t\in A_g$, we have $$
\lim_{k\rightarrow +\infty}{p_{s,k}(t)}=\delta'(s,t)\frac{|\Aut(s)|}{2}, $$ where the limit is over even weights $k$, $\delta'(s,t)$ is the Kronecker delta for the $\GL(g,\mathbf{Z})$-equivalence classes of $s$ and $t$, and where $\Aut(s)=O(s,\mathbf{Z})$ is the finite group of integral points of the orthogonal group of the quadratic form defined by $s$. \end{theorem}
This result suggests defining the Poincar\'e series with an additional constant factor $2/|\Aut(s)|$, in which case this theorem is exactly analogous to Proposition~\ref{pr-k}. And indeed, this is how Maass defined them~\cite{maass}.
\begin{proof} We adapt the previous argument, writing first $$ p_{s,k}(t)=\int_{U_g}{\mathcal{P}_{s,k}(z)e(-\Tr(tz))dz} $$ where $U_g=U_g(y_0)$ will be taken to be the (compact) set of matrices $$ U_g(y_0)=\mathcal{U}_g+iy_0\mathrm{Id}, $$ for some real number $y_0>1$ to be selected later, where $$ \mathcal{U}_g=\{x\in M(g,\mathbf{R})\,\mid\, x\text{ symmetric and }
|x_{i,j}|\leqslant 1/2\text{ for all } 1\leqslant i,j\leqslant g\}, $$ and the measure $dz$ is again the Lebesgue measure. \par Before proceeding, we first recall that \begin{equation}\label{eq-exp-neg}
|e(\Tr(s\gamma \cdot z))|\leqslant 1 \end{equation} for all $s\in A_g$, $\gamma\in \Gamma_g$ and $z\in \mathbf{H}_g$. Indeed, since $s$ is a real matrix, we have $$
|e(\Tr(s\gamma \cdot z))|=\exp(-2\pi\Tr(s\Imag(\gamma \cdot z))) $$ and the result follows from the fact that $$ \Tr(sy)\geqslant 0 $$ for any $s\in A_g$ and $y$ positive definite. To see the latter, we write $y=\T{q}q$ for some matrix $q$, and we then have $$ sy=s\T{q}q=q^{-1}tq $$ with $t=qs\T{q}$; then $t$ is still positive, while $\Tr(sy)=\Tr(t)$, so $\Tr(sy)\geqslant 0$. \par We then have the following Lemma:\footnote{\ This statement is used to
replace the inequality~(\ref{eq-j}), which has no obvious analogue
when $g\geqslant 2$.}
\begin{lemma}\label{lemma-strict} For any integer $g\geqslant 1$, there exists a real number $y_0>1$,
depending only on $g$, such that for any $\gamma\in\Gamma_g$ written $$ \gamma=\begin{pmatrix}a & b\\ c&d \end{pmatrix},\quad (a,b,c,d)\in M(g,\mathbf{Z}), $$ with $c\not=0$ and for all $z\in U_g(y_0)$, we have \begin{equation}\label{eq-strict}
|\det(cz+d)|>1 \end{equation}
whereas if $c=0$, we have $|\det(cz+d)|=1$. \end{lemma}
Assuming the truth of this lemma, we find that \begin{equation}\label{eq-dominated}
|\det(cz+d)^{-k}e(\Tr(s\gamma \cdot z))|\leqslant |\det(cz+d)|^{-2g-1} \exp(-2\pi\Tr(s\Imag(\gamma \cdot z))) \end{equation} for any $k>2g$, all $z\in U_g$ and $\gamma\in\Gamma_{\infty}\backslash \Gamma_g$, and also that \begin{equation}\label{eq-siegellimit} \det(cz+d)^{-k}e(\Tr(s\gamma \cdot z))\longrightarrow 0\text{ as } k\rightarrow +\infty, \end{equation} for all $z\in U_g$ and all $\gamma$ with $c\not=0$. On the other hand, if $c=0$, we have $$ \gamma=\begin{pmatrix} a& 0\\ 0& \T{a^{-1}} \end{pmatrix}, $$ up to $\Gamma_{\infty}$-equivalence, where $a\in \GL(g,\mathbf{Z})$ and hence \begin{align*} \det(cz+d)^{-k}e(\Tr(s\gamma \cdot z))&= e(\Tr(s az \T{a}))= e(\Tr(az\T{a}s))\\ &= e(\Tr(\T{a}saz)) =e(\Tr((a\cdot s) z)) \end{align*} where $a\cdot s=\T{a}sa$ (we use here that $k$ is even). \par Using~\eqref{eq-dominated},~\eqref{eq-siegellimit} and the absolute convergence of $\mathcal{M}_{s,2g+1}(z)$, we find that $$ \mathcal{P}_{s,k}(z)\longrightarrow \sum_{a\in \GL(g,\mathbf{Z})/\pm1}{e(\Tr((a\cdot s)z))} $$ as $k\rightarrow +\infty$, for all $z\in U_g$. (The series converges as a subseries of the Poincar\'e series.) \par Then we multiply by $e(-\Tr(tz))$ and integrate over $U_g$, using~(\ref{eq-dominated}) and the fact that $\mathcal{M}_{s,2g+1}$ is bounded on $U_g$ to apply the dominated convergence theorem, and obtain $$ p_{s,k}(t)\longrightarrow \sum_{a\cdot s=t}{1}, $$ a number which is either $0$, if $s$ and $t$ are not equivalent, or equal to $|\Aut(s)|/2$ if they are. This completes the proof of Theorem~\ref{th:siegelortho}. \end{proof}
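The trace inequality $\Tr(sy)\geqslant 0$ used in the proof of~\eqref{eq-exp-neg} is easy to test numerically. The following plain-Python sketch (not part of the paper; the helper names are ours) builds random positive semi-definite matrices as $\T{q}q$ and checks the inequality.

```python
import random

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(a):
    n = len(a)
    return [[a[j][i] for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def random_psd(n, rng):
    # y = q^T q is positive semi-definite for any real matrix q
    q = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return matmul(transpose(q), q)

rng = random.Random(0)
for _ in range(100):
    s = random_psd(3, rng)
    y = random_psd(3, rng)
    # Tr(sy) >= 0, up to floating-point rounding
    assert trace(matmul(s, y)) >= -1e-9
```

The same conjugation trick as in the text ($sy = q^{-1}(qs\T{q})q$) explains why every sampled trace is non-negative.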
We still need to prove Lemma~\ref{lemma-strict}. We are going to use the description of the Siegel fundamental domain $\mathcal{F}_g$ for the action of $\Gamma_g$ on $\mathbf{H}_g$. Precisely, $\mathcal{F}_g$ is the set of $z \in \mathbf{H}_g$ satisfying all the following conditions: \begin{enumerate} \item For all $\gamma\in\Gamma_g$, we have $$
|\det(cz+d)|\geqslant 1; $$ \item The imaginary part $\Imag(z)$ is Minkowski-reduced; \item The absolute values of all coefficients of $\Reel(z)$ are $\leqslant
1/2$. \end{enumerate} \par Siegel showed that the first condition can be weakened to a finite list of inequalities (see, e.g.,~\cite[Prop. 3.3, p. 33]{klingen}): there exists a finite subset $C_g\subset \Gamma_g$, such that $z \in \mathbf{H}_g$ belongs to $\mathcal{F}_g$ if and only if $z$ satisfies (2), (3) and \begin{equation}\label{eq-finite}
|\det(cz+d)|\geqslant 1\text{ for all } \gamma\in C_g\text{ with } c\not=0. \end{equation} \par Moreover, if~(\ref{eq-finite}) holds with equality sign for some
$\gamma\in C_g$, then $z$ is in the boundary of $\mathcal{F}_g$; if this is not the case, then $|\det(cz+d)|>1$ for all $\gamma\in \Gamma_g$ with $c\not=0$.
\begin{proof}[Proof of Lemma~\ref{lemma-strict}]
First, we show that if $y_0>1$ is chosen large enough, the matrix
$iy_0\mathrm{Id}$ is in $\mathcal{F}_g$. The only condition that
must be checked is~(\ref{eq-finite}) when $\gamma\in C_g$ satisfies
$c\not=0$, since the other two are immediate (once the definition of
Minkowski-reduced is known; it holds for $y_0\mathrm{Id}$ when
$y_0\geqslant 1$). For this, we use the following fact, due to
Siegel~\cite[Lemma 9]{siegel} (see also~\cite[Lemma 3.3,
p. 34]{klingen}): for any fixed $z=x+iy\in\mathbf{H}_g$ and any
$\gamma\in\Gamma_g$ with $c\not=0$, the function $$
\alpha\mapsto |\det(c(x+i\alpha)+d)|^2 $$ is strictly increasing on $[0,+\infty[$ and has limit $+\infty$ as $\alpha\rightarrow +\infty$. Taking $z=i$, we find that $$
\lim_{y\rightarrow+\infty}{|\det(iyc+d)|}=+\infty $$ for every $\gamma\in C_g$. In particular, since $C_g$ is finite, there exists $y_0>1$ such that $$
|\det(cz_0+d)|>1 $$ for $z_0=iy_0$ and $\gamma\in C_g$, which is~(\ref{eq-finite}) for $iy_0$. \par Because $\mathcal{U}_g$ is compact, it is now easy to also extend this to $z=x+iy_0$ with $x\in \mathcal{U}_g$. Precisely, for fixed $\gamma\in \Gamma_g$ with $c\not=0$, the function $$ \begin{cases}
\mathcal{U}_g\times ]0,+\infty[ \rightarrow \mathbf{R}\\
(x,\alpha) \mapsto |\det(c(x+i\alpha)+d)|^2 \end{cases} $$ is a polynomial in the variables $(x,\alpha)$. As a polynomial in $\alpha$, as observed by Siegel, it is in fact a polynomial in $\alpha^2$ with non-negative coefficients, and it is non-constant because $c\not=0$. (It is not difficult to check that the degree, as polynomial in $\alpha$, is $2\rank(c)$). This explains the limit $$
\lim_{y\rightarrow +\infty}{|\det(c(x+iy)+d)|^2}=+\infty, $$ but it shows also that it is uniform over the compact set $\mathcal{U}_g$, and over the $\gamma\in C_g$ with
$c\not=0$. Therefore we can find $y_0$ large enough so that~(\ref{eq-finite}) holds for all $z\in U_g$, and indeed holds with the strict condition $|\det(cz+d)|>1$ on the right-hand side. By the remark after~(\ref{eq-finite}), this means that $z$ is not in the boundary of $\mathcal{F}_g$, and hence~(\ref{eq-strict}) holds for all $\gamma$ with $c\not=0$. \end{proof}
\begin{remark}
The argument is very clear when $\det(c)\not=0$: we write \begin{align*} \det(c(x+iy)+d)&=\det(iyc)\det(1-iy^{-1}c^{-1}(cx+d))\\ &= (iy)^g\det(c) (1+O(y^{-1})) \end{align*} for fixed $(c,d)$, uniformly for $x\in \mathcal{U}_g$. \end{remark}
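For $g=1$ this monotonicity is completely elementary; as a toy illustration (not part of the paper, names are ours), here is a Python check that $|c(x+i\alpha)+d|^2$ is increasing in $\alpha$ for the inversion, i.e., $c=1$, $d=0$.

```python
def modulus_sq(c, d, x, alpha):
    # |c*(x + i*alpha) + d|^2 for real scalars c, d (the g = 1 case)
    w = complex(c * x + d, c * alpha)
    return w.real ** 2 + w.imag ** 2

# For gamma = [[0, -1], [1, 0]] (c = 1, d = 0) we get |z|^2 = x^2 + alpha^2,
# a polynomial in alpha^2 with non-negative coefficients, hence increasing.
x = 0.3
values = [modulus_sq(1, 0, x, a) for a in (0.5, 1.0, 2.0, 4.0, 8.0)]
assert all(v2 > v1 for v1, v2 in zip(values, values[1:]))
```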
\begin{remark}
It would be interesting to know the optimal value of $y_0$ in
Lemma~\ref{lemma-strict}. For $g=1$, any $y_0>1$ is suitable. For
$g=2$, Gottschling~\cite[Satz 1]{gott} has determined a finite set
$C_2$ which determines as above the Siegel fundamental domain,
consisting of $19$ pairs of matrices $(c,d)$; there are $4$ in which
$c$ has rank $1$, $c$ is the identity for the others. Precisely: for
$c$ of rank $1$, $(c,d)$ belongs to $$ \Bigl\{ \Bigl(\begin{pmatrix} 1&0\\ 0&0 \end{pmatrix},\begin{pmatrix} 0&0\\ 0&1 \end{pmatrix} \Bigr),
\Bigl(\begin{pmatrix} 0&0\\ 0&1 \end{pmatrix},\begin{pmatrix} 1&0\\ 0&0 \end{pmatrix} \Bigr),
\Bigl(\begin{pmatrix} 1&-1\\ 0&0 \end{pmatrix},\begin{pmatrix} 1&\text{$0$ or $1$}\\ -2&1 \end{pmatrix} \Bigr) \Bigr\}, $$ and for $c$ of rank $2$, we have $c=1$ and $d$ belongs to $$ \Bigl\{ 0, \begin{pmatrix} s&0\\ 0& 0 \end{pmatrix},
\begin{pmatrix} 0&0\\ 0& s \end{pmatrix},
\begin{pmatrix} s&0\\ 0& s \end{pmatrix},
\begin{pmatrix} s&0\\ 0& -s \end{pmatrix},
\begin{pmatrix} 0&s\\ s& 0 \end{pmatrix},
\begin{pmatrix} s&s\\ s& 0 \end{pmatrix},
\begin{pmatrix} 0&s\\ s& s \end{pmatrix} \Bigr\} $$ where $s\in \{-1,1\}$. It should be possible to deduce a value of $y_0$ using this information. Indeed, quick numerical experiments suggest that, as in the case $g=1$, any $y_0>1$ would be suitable. \end{remark}
\begin{remark}
Analogues of Corollary~\ref{cor-simple} can not be derived
immediately in the setting of Siegel modular forms because the link
between Fourier coefficients and Satake parameters is much more
involved; the case $g=2$ is considered, together with further
applications and quantitative formulations, in~\cite{kst}. \end{remark}
\end{document}
Pseudomathematics
Pseudomathematics, or mathematical crankery, is a mathematics-like activity that does not adhere to the framework of rigor of formal mathematical practice. Common areas of pseudomathematics are solutions of problems proved to be unsolvable or recognized as extremely hard by experts, as well as attempts to apply mathematics to non-quantifiable areas. A person engaging in pseudomathematics is called a pseudomathematician or a pseudomath.[1] Pseudomathematics has equivalents in other scientific fields, and may overlap with other topics characterized as pseudoscience.
Pseudomathematics often contains mathematical fallacies whose executions are tied to elements of deceit rather than genuine, unsuccessful attempts at tackling a problem. Excessive pursuit of pseudomathematics can result in the practitioner being labelled a crank. Because it is based on non-mathematical principles, pseudomathematics is not related to attempts at genuine proofs that contain mistakes. Indeed, such mistakes are common in the careers of amateur mathematicians, some of whom go on to produce celebrated results.[1]
The topic of mathematical crankery has been extensively studied by mathematician Underwood Dudley, who has written several popular works about mathematical cranks and their ideas.
Examples
One common type of approach is claiming to have solved a classical problem that has been proved to be mathematically unsolvable. Common examples of this include the following constructions in Euclidean geometry—using only a compass and straightedge:
• Squaring the circle: Given any circle, drawing a square having the same area.
• Doubling the cube: Given any cube, drawing a cube with twice its volume.
• Trisecting the angle: Given any angle, dividing it into three smaller angles all of the same size.[2][3][4]
For more than 2,000 years, many people had tried and failed to find such constructions; in the 19th century, they were all proven impossible.[5][6]: 47
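As a concrete aside (not from the article): the impossibility of trisecting a 60° angle comes down to the cubic 8x³ − 6x − 1, which has cos 20° as a root by the triple-angle formula. The Python sketch below applies the rational root theorem to confirm the cubic has no rational roots, hence is irreducible over the rationals; cos 20° therefore has degree 3, while constructible numbers have degree a power of 2.

```python
from fractions import Fraction
from itertools import product

def p(x):
    # p(x) = 8x^3 - 6x - 1; cos(20 degrees) is a root, since
    # cos(3t) = 4cos^3(t) - 3cos(t) and cos(60 degrees) = 1/2.
    return 8 * x ** 3 - 6 * x - 1

# Rational root theorem: any rational root is +/-(divisor of 1)/(divisor of 8).
candidates = [Fraction(s, d) for s, d in product((1, -1), (1, 2, 4, 8))]
rational_roots = [c for c in candidates if p(c) == 0]
print(rational_roots)  # empty: the cubic is irreducible over the rationals
```

So no sequence of compass-and-straightedge steps, which only ever produce numbers of degree 2ᵏ, can reach cos 20°.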
Another notable case was that of "Fermatists", who plagued mathematical institutions with requests to check their proofs of Fermat's Last Theorem.[7][8]
Another common approach is to misapprehend standard mathematical methods, and to insist that the use or knowledge of higher mathematics is somehow cheating or misleading (e.g., the denial of Cantor's diagonal argument[9]: 40ff or Gödel's incompleteness theorems).[9]: 167ff
History
The term pseudomath was coined by the logician Augustus De Morgan, discoverer of De Morgan's laws, in his A Budget of Paradoxes (1872). De Morgan wrote:
The pseudomath is a person who handles mathematics as the monkey handled the razor. The creature tried to shave himself as he had seen his master do; but, not having any notion of the angle at which the razor was to be held, he cut his own throat. He never tried a second time, poor animal! but the pseudomath keeps on at his work, proclaims himself clean-shaved, and all the rest of the world hairy.[10]
De Morgan named James Smith as an example of a pseudomath who claimed to have proved that π is exactly 3+1/8.[1] Of Smith, De Morgan wrote: "He is beyond a doubt the ablest head at unreasoning, and the greatest hand at writing it, of all who have tried in our day to attach their names to an error."[10] The term pseudomath was adopted later by Tobias Dantzig.[11] Dantzig observed:
With the advent of modern times, there was an unprecedented increase in pseudomathematical activity. During the 18th century, all scientific academies of Europe saw themselves besieged by circle-squarers, trisectors, duplicators, and perpetuum mobile designers, loudly clamoring for recognition of their epoch-making achievements. In the second half of that century, the nuisance had become so unbearable that, one by one, the academies were forced to discontinue the examination of the proposed solutions.[11]
The term pseudomathematics has been applied to attempts in mental and social sciences to quantify the effects of what is typically considered to be qualitative.[12] More recently, the same term has been applied to creationist attempts to refute the theory of evolution, by way of spurious arguments purportedly based in probability or complexity theory, such as intelligent design proponent William Dembski's concept of specified complexity.[13][14]
See also
• 0.999..., often fallaciously[15] claimed to be distinct from 1
• Indiana Pi Bill
• Eccentricity (behavior)
• Mathematical fallacy
• Pseudoscience
References
1. Lynch, Peter. "Maths discoveries by amateurs and distractions by cranks". The Irish Times. Retrieved 2019-12-11.
2. Dudley, Underwood (1983). "What To Do When the Trisector Comes" (PDF). The Mathematical Intelligencer. 5 (1): 20–25. doi:10.1007/bf03023502. S2CID 120170131.
3. Schaaf, William L. (1973). A Bibliography of Recreational Mathematics, Volume 3. National Council of Teachers of Mathematics. p. 161. Pseudomath. A term coined by Augustus De Morgan to identify amateur or self-styled mathematicians, particularly circle-squarers, angle-trisectors, and cube-duplicators, although it can be extended to include those who deny the validity of non-Euclidean geometries. The typical pseudomath has but little mathematical training and insight, is not interested in the results of orthodox mathematics, has complete faith in his own capabilities, and resents the indifference of professional mathematicians.
4. Johnson, George (1999-02-09). "Genius or Gibberish? The Strange World of the Math Crank". The New York Times. Retrieved 2019-12-21.
5. Wantzel, P M L (1837). "Recherches sur les moyens de reconnaître si un problème de Géométrie peut se résoudre avec la règle et le compas". Journal de Mathématiques Pures et Appliquées. 1. 2: 366–372.
6. Bold, Benjamin (1982) [1969]. Famous Problems of Geometry and How to Solve Them. Dover Publications.
7. Konrad Jacobs, Invitation to Mathematics, 1992, p. 7
8. Underwood Dudley, Mathematical Cranks 2019, p. 133
9. Dudley, Underwood (1992). Mathematical Cranks. Mathematical Association of America. ISBN 0-88385-507-0.
10. De Morgan, Augustus (1915). A Budget of Paradoxes (2nd ed.). Chicago: The Open Court Publishing Co.
11. Dantzig, Tobias (1954). "The Pseudomath". The Scientific Monthly. 79 (2): 113–117. Bibcode:1954SciMo..79..113D. JSTOR 20921.
12. Johnson, H. M. (1936). "Pseudo-Mathematics in the Mental and Social Sciences". The American Journal of Psychology. 48 (2): 342–351. doi:10.2307/1415754. ISSN 0002-9556. JSTOR 1415754. S2CID 146915476.
13. Elsberry, Wesley; Shallit, Jeffrey (2011). "Information theory, evolutionary computation, and Dembski's "complex specified information"". Synthese. 178 (2): 237–270. CiteSeerX 10.1.1.318.2863. doi:10.1007/s11229-009-9542-8. S2CID 1846063.
14. Rosenhouse, Jason (2001). "How Anti-Evolutionists Abuse Mathematics" (PDF). The Mathematical Intelligencer. 23: 3–8.
15. "Why Does 0.999… = 1?".
Further reading
• Underwood Dudley (1987), A Budget of Trisections, Springer Science+Business Media. ISBN 978-1-4612-6430-9. Revised and reissued in 1996 as The Trisectors, Mathematical Association of America. ISBN 0-88385-514-3.
• Underwood Dudley (1997), Numerology: Or, What Pythagoras Wrought, Mathematical Association of America. ISBN 0-88385-524-0.
• Clifford Pickover (1999), Strange Brains and Genius, Quill. ISBN 0-688-16894-9.
• Bailey, David H.; Borwein, Jonathan M.; de Prado, Marcos López; Zhu, Qiji Jim (2014). "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance" (PDF). Notices of the AMS. 61 (5): 458–471. doi:10.1090/noti1105.
Why shouldn't gravity be a force?
I am interested to know the reasons why we shouldn't treat gravity as a force in, for example, General Relativity. Won't we be able to model it accurately by treating it as only a force?
general-relativity forces gravity
non-sensical
Closely related: If gravity isn't a force, then why do we learn in school that it is? and Why do we still need to think of gravity as a force? – ACuriousMind♦ Sep 2 '16 at 17:23
Curved spacetime is currently considered to be the model which reflects best gravity including the equivalence principle, and it is complying excellently with the needs of astronomy.
Einstein did not "prove" that spacetime is curved, but he used it as a model for his description of gravity. And it is working so fine that nearly everybody uses curved spacetime for explaining gravitation and general relativity.
However, we have to remember that curved spacetime is only a model, and you can also imagine gravity as a field (in compliance with general relativity); see, e.g., this article.
Moonraker
No, we would not really be able to calculate it, or model it well, probably not at all. General Relativity (GR) takes the Equivalence Principle seriously. It uses mass (call it gravitational mass; really it includes anything that in some way contributes to the so-called stress-energy tensor, so radiation as well, etc.) as the source of gravity, uses that to calculate the geometry of spacetime (with some appropriate boundary/initial/final conditions), and then all particles travel on geodesics of that spacetime.
The equations for the spacetime as function of the stress energy tensor are Einstein's equations. See https://en.wikipedia.org/wiki/Einstein_field_equations
So the mass just enters in creating the spacetime, upon which everything then travels on geodesics (if no other field or force is present). The only distinction regarding particle masses is whether the mass is 0, i.e., whether they are radiation, or massless particles like the photon. Those still travel on geodesics, but on so-called light-like or null geodesics (locally their speed will be c, and the line element $ds^2$ will be zero, i.e. null). The other thing is that the Einstein equations are nonlinear: gravity in essence interacts with itself, and that is hard or impossible to express in a force equation, though it is possible and done in other non-linear theories in Quantum Field Theory. This self-interaction (more precisely, the fact that gravity interacts with all forms of energy) is one of the reasons that trying to quantize GR leads to a non-renormalizable quantum theory.
In Newtonian physics you calculate a force from the sources creating the force, maybe through a field, and then use the force divided by m (except if m = 0, where it has nothing to say) to get the acceleration, and then the trajectory. GR gets the trajectory directly. GR says it is not a force, but a property of spacetime, and how particles move in it.
That is why force is not a useful entity in GR. Some people still use the term, conceptually, to mean the effect of gravity through spacetime, but it's easy to get confused.
Bob Bee
We should discuss two viewpoints: Newtonian and Einsteinian.
In Newton's view gravity is a force that causes massive bodies to be accelerated. However, in Einstein's view gravity is a manifestation of the curvature of spacetime. Despite the fact that these views are conceptually very different, they yield the same predictions in most cosmological contexts. In the limit of deep potential minima (or strong spatial curvature), only general relativity yields the correct results.
Principle of equivalence:
The gravitational force acting between two objects is $$F=-\frac{GM_g m_g}{r^2}$$ where $m_g$ is gravitational mass. The negative sign in above expression implies that gravity is always an attractive force. According to Newton's second law of motion force and acceleration are related by $$ F = m_i a$$ where $m_i$ is the inertial mass that determines the resistance acceleration by any force.
As it turns out, gravitational mass and inertial mass are equal. $$ m_g = m_i$$ However, from the Newtonian point of view there is no reason for them to be equal; their equality is a complete coincidence. This is what motivated Einstein to devise his theory. In GR, curvature is a property of spacetime itself, and curved spacetime tells mass-energy how to move. Therefore, the gravitational acceleration of an object is independent of its mass and composition: the object is just following the geodesic dictated by the geometry of spacetime.
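To make this concrete, here is a small Python sketch (with illustrative constants for the Earth) showing that once $m_g = m_i$ cancels, the Newtonian acceleration $a = GM/r^2$ is the same for every test mass:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # radius of the Earth, m

def acceleration(m):
    """a = F/m with F = G*M*m/r^2: the test mass m cancels out."""
    F = G * M * m / r ** 2
    return F / m

# Feather, human, boulder: all fall with the same acceleration of
# roughly 9.8 m/s^2 at the Earth's surface.
for m in (0.01, 70.0, 1e4):
    print(m, acceleration(m))
```

That cancellation is exactly the coincidence that GR elevates to a principle.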
Diferansiyel
Could you tell me in what case $m_g \neq m_i$? – non-sensical Sep 3 '16 at 12:51
If you jump down, what do you feel? You feel that you are weightless. At the same time it is obvious that you get accelerated.
What do you feel when you push the accelerator pedal of your car? You feel that you are heavier than usual. At the same time it is obvious that you get accelerated.
As you know, force is defined as the product of mass and acceleration. From our everyday experience we define a positive acceleration as something that makes us heavier. So the first example contradicts this experience.
So if you sit in a carousel and swing around, you are accelerated (since any change in direction is an acceleration) and get heavier. Flying in the ISS you swing around (the earth) but feel no acceleration and stay weightless. To capture the phenomenon of acceleration in a gravitational field, Einstein described gravitation not as a force but as a bending of space and a local slowing down or speeding up of time.
What I told you is our experience today. The genius of Einstein is that he derived this from thought experiments and from finding the equations of General Relativity.
HolgerFiedler
\begin{document}
{\obeylines \small \vspace*{-1.0cm} \hspace*{3.5cm}C'est pour toi que je joue, Alf c'est pour toi, \hspace*{3.5cm}Tous les autres m'\'ecoutent, mais toi tu m'entends ... \hspace*{3.5cm}\'Exil\'e d'Amsterdam vivant en Australie, \hspace*{3.5cm}Ulysse qui jamais ne revient sur ses pas ... \hspace*{3.5cm}Je suis de ton pays, m\'et\'eque comme toi, \hspace*{3.5cm}Quand il faudra mourir, on se retrouvera\footnote{Free after Georges Moustaki, ``Grand-p\`ere''} \vspace*{0.5cm} \hspace*{5.5cm} {\it To the memory of Alf van der Poorten} \vspace*{0.5cm}
} \title[Vanishing of $\mu$ ] {On the vanishing of Iwasawa's constant
$\mu$ for the cyclotomic $\mathbb{Z}_p$-extensions of $CM$ number
fields.} \author{Preda Mih\u{a}ilescu} \address[P. Mih\u{a}ilescu]{Mathematisches Institut
der Universit\"at G\"ottingen} \email[P. Mih\u{a}ilescu]{[email protected]} \keywords{11R23 Iwasawa Theory, 11R27 Units} \date{Version 2.0 \today}
\begin{abstract}
We prove that $\mu = 0$ for the cyclotomic $\mathbb{Z}_p$-extensions of CM
number fields.
\end{abstract} \maketitle \tableofcontents \section{Introduction} Iwasawa gave in his seminal paper \cite{Iw1} from $1973$ examples of $\mathbb{Z}_p$-extensions in which the structural constant $\mu \neq 0$. In the same paper, he proved that if $\mu = 0$ for the cyclotomic $\mathbb{Z}_p$-extension of some number field $\mathbb{K}$, then the constant vanishes for any cyclic $p$-extension of $\mathbb{K}$ -- and thus for any number field in the pro-$p$ solvable extension of $\mathbb{K}$. Iwasawa also suggested in that paper that $\mu$ should vanish for the cyclotomic $\mathbb{Z}_p$-extension of all number fields, a fact which is sometimes called \textit{Iwasawa's conjecture}. The conjecture has been proved by Ferrero and Washington \cite{FW} for the case of abelian fields. In this paper, we give an independent proof, which holds for all CM fields: \begin{theorem} \label{main} Let $\mathbb{K}$ be a CM number field and $p$ an odd prime. Then Iwasawa's constant $\mu$ vanishes for the cyclotomic $\mathbb{Z}_p$-extension $\mathbb{K}_{\infty}/\mathbb{K}$. \end{theorem}
\subsection{Notations and basic facts on decomposition of $\Lambda$-modules} In this paper $p$ is an odd prime. In the sequel, the Iwasawa constant $\mu$ for some number field $\mathbb{K}$ will always refer to the $\mu$-invariant for the $\mathbb{Z}_p$-cyclotomic extension of $\mathbb{K}$. We shall denote number fields by black board bold characters, e.g. $\mathbb{K}, \mathbb{L}$, etc. If $\mathbb{K}$ is a number field, its cyclotomic $\mathbb{Z}_p$-extension is $\mathbb{K}_{\infty}$ and $\mathbb{B}_{\infty}/\mathbb{Q}$ is the $\mathbb{Z}_p$-extension of $\mathbb{Q}$, so $\mathbb{K}_{\infty} = \mathbb{K} \cdot \mathbb{B}_{\infty}$. We denote as usual by $\Gamma$ the Galois group $\hbox{Gal}(\mathbb{K}_{\infty}/\mathbb{K})$ and let $\tau \in \Gamma$ be a topological generator.
Let $\mathbb{B}_1 = \mathbb{Q}$ and $\mathbb{B}_n \subset \mathbb{B}_{\infty}$ be the intermediate extensions of $\mathbb{B}_{\infty}$ and let the counting be given by $[ \mathbb{B}_n : \mathbb{Q} ] = p^{n-1}$, so $\mathbb{B}_n = \mathbb{B}_{\infty} \cap \mathbb{Q}[ \zeta_{p^n} ]$. Let $\kappa > 0$ be the integer for which $\mathbb{K} \cap \mathbb{B}_{\infty} = \mathbb{B}_{\kappa}$; then $\mu_{p^{\kappa}} \subset \mathbb{K}$ and $\mu_{p^{\kappa+1}} \not \subset \mathbb{K}$. We shall use a counting of the intermediate fields of $\mathbb{K}_{\infty}$ that reflects this situation and let $\mathbb{K}_0 = \mathbb{K}_1 = \ldots = \mathbb{K}_{\kappa} = \mathbb{K}$ and $[ \mathbb{K}_{\kappa+1} : \mathbb{K} ] = p$, etc. The constant $\kappa$ can be determined in the same way for any number fields and we adopt the same counting in any cyclotomic $\mathbb{Z}_p$-extension. This way, for $n \geq \kappa$ we always have $\mu_{p^n} \subset \mathbb{K}_n$. We let $\gamma \in \hbox{Gal}(\mathbb{B}_{\infty}/\mathbb{Q})$ be a topological generator of $\hbox{Gal}(\mathbb{B}_{\infty}/\mathbb{Q})$ and let $\tau \in \Gamma := \hbox{Gal}(\mathbb{K}_{\infty}/\mathbb{K})$ be a topological generator for $\Gamma$. We may thus assume that $\tau = \gamma^{p^{\kappa-1}}$ for some lift $\gamma \in \hbox{Gal}(\mathbb{K}_{\infty}/\mathbb{Q})$ of $\gamma$ and write as usual $T = \tau-1, \Lambda = \mathbb{Z}_p[[ T ]]$ and \begin{eqnarray*}
\omega_n & = & \tau^{p^{n-\kappa}} - 1 = (T+1)^{p^{n-\kappa}} - 1, \\
\nu_{m,n} & = & \omega_m/\omega_n, \quad \hbox{for $m > n \geq \kappa$}. \end{eqnarray*} Since $\mathbb{K} = \mathbb{K}_1 = \mathbb{K}_{\kappa}$, we may also write $\nu_{m, 1} = \nu_{m, \kappa}$. Note that the special numeration of fields which we introduce in order to ascertain that $\mu_{p^n} \subset \mathbb{K}_n$ induces a shift in the exponents in the definition of $\omega_n$. In terms of $\gamma$ we recover the classical definitions, but our exposition is made in terms of the topological generator $\tau$. Note that the base field $\mathbb{K}$, with respect to which we shall carry out our proof, is still to be defined; it will be a CM field in which we assume that $\mu > 0$ and in which some useful additional conditions are fulfilled.
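To fix ideas, we note that with this shifted numeration the relative norm polynomials take, for $n \geq \kappa$, the explicit form
\begin{eqnarray*}
\nu_{n+1, n} & = & \frac{\omega_{n+1}}{\omega_n} \; = \; \sum_{i=0}^{p-1} (T+1)^{i p^{n-\kappa}},
\end{eqnarray*}
a distinguished polynomial of degree $(p-1) p^{n-\kappa}$, since modulo $p$ it reduces to $T^{(p-1) p^{n-\kappa}}$.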
The $p$-Sylow subgroups of the class groups $\id{C}(\mathbb{K}_n)$ of the intermediate fields of $\mathbb{K}_n$ are denoted as usual by $A_n = A_n(\mathbb{K}) = \left( \id{C}(\mathbb{K}_n) \right)_p$; the explicit reference to the base field $\mathbb{K}$ will be used when we refer simultaneously to sequences of class groups related to different base fields. Traditionally, $A = A(\mathbb{K}) = \varinjlim A_n$. We use the projective limit, which we denote by $\apr{\mathbb{K}} = \varprojlim(A_n)$, as explained in detail below. The maximal $p$-abelian unramified extensions of $\mathbb{K}_n$ are denoted by $\mathbb{H}_n = \mathbb{H}_n(\mathbb{K}) \supset \mathbb{K}_n$ and $X_n = \hbox{Gal}(\mathbb{H}_n/\mathbb{K}_n)$. The projective limit with respect to restriction maps is $X = \varprojlim(X_n)$: it is a Noetherian $\Lambda$-torsion module on which complex conjugation acts inducing the decomposition $X = X^+ \oplus X^-$. The Artin maps $\varphi : A_n \rightarrow X_n$ are isomorphisms of finite abelian $p$-groups; whenever the abelian extension $\mathbb{M}/\mathbb{K}$ is clear in the context, we write $\varphi(a)$ for the Artin Symbol $\lchooses{\mathbb{M}/\mathbb{K}}{a}$, where $a \in \id{C}(\mathbb{K})$ if $\mathbb{M}$ is unramified, or $a$ is an ideal of $\mathbb{K}$ otherwise. Complex conjugation acts on class groups and Galois groups, inducing direct sum decompositions in plus and minus parts: \[ A_n = A_n^+ \oplus A_n^-, \quad X_n = X_n^+ \oplus X_n^-, \quad \hbox{etc.} \] The idempotents generating plus and minus parts are $\frac{1 \pm \jmath}{2}$; since $2$ is a unit in the ring $\mathbb{Z}_p$ acting on these groups, we also have $Y^+ = (1+\jmath) Y$ and $Y^- = (1-\jmath) Y$, for $Y \in \{ A_n, X_n, X, \ldots \}$. Throughout the paper, we shall use, by a slight abuse of notation and unless explicitly specified otherwise, the additive writing for the group ring actions. This is preferable for typographical reasons. 
If $M$ is some Noetherian $\Lambda$-torsion module on which $\jmath$ acts, inducing a decomposition $M = M^+ \oplus M^-$, we write $\mu^-(M) = \mu(M^-), \lambda^-(M) = \lambda(M^-)$, etc, for the Iwasawa constants of this module. In the case when $M = \apr{\mathbb{M}}$ or $M = X(\mathbb{M})$ is attached to some number field $\mathbb{M}$, we simply write $\mu^-(\mathbb{M})$ or $\mu(\apr{\mathbb{M}})$.
By assumption on $\mathbb{K}$, the norms $\mbox{\bf N}_{\mathbb{K}_m/\mathbb{K}_n} : A_m \rightarrow A_n$ are surjective for all $m > n > 0$. Therefore, the sequence $(A_n)_{n \in
\mathbb{N}}$ is projective with respect to the norm maps and we denote their projective limit by ${\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}} = \varprojlim_n A_n$. The Artin map induces an isomorphism of compact $\Lambda$-modules $\varphi :\ {\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}} \rightarrow X$. The elements of ${\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}}$ are norm coherent sequences $a = (a_n)_{n
\in \mathbb{N}} \in {\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}}$ with $a_n \in A_n$ for $n \geq \kappa$; we let $a_0 = a_1 = \ldots = a_{\kappa}$. It is customary to identify $X$ with ${\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}}$ via the Artin map; we shall not use injective limits here, but make explicit reference to the projective limit ${\rm \raise1ex\hbox{\tiny p}\kern-.1667em\hbox{A}}$.
It is a folklore fact that if $\mu \neq 0$ for the cyclotomic $\mathbb{Z}_p$-extension of some CM field $\mathbb{K}_{start}$, then $\mu \neq 0$ for any finite algebraic extension thereof. We prove this in Fact \ref{blow} in the Appendix. In order to prove Theorem \ref{main} we shall need to tailor some base field $\mathbb{K}/\mathbb{K}_{start}$, which is a Galois CM extension of $\mathbb{Q}$ and enjoys some additional conditions that shall be discussed below; of course, we assume that $\mu(\mathbb{K}) > 0$. Before describing the construction of $\mathbb{K}$, however, we need to introduce some definitions and auxiliary constructions.
\subsection{Decomposition and Thaine Shifts} Let $M$ be a Noetherian $\Lambda$-torsion module. It is associated to an \textit{elementary} Noetherian $\Lambda$-torsion module $\id{E}(M) \sim M$ defined by: \begin{eqnarray*} \begin{array}{c c c c c c c }
\id{E}(M) & = & \id{E}_{\lambda}(M) & \oplus & \id{E}_{\mu}(M), & \quad & \hbox{with} \\
\id{E}_{\mu}(M) & = & \oplus_{i=1}^r \Lambda/(p^{e_i} \Lambda), & \quad & \id{E}_{\lambda}(M) & = & \oplus_{j=1}^{r'}
\Lambda/(f_j^{e'_j} \Lambda), \end{array} \end{eqnarray*} where all $e_i, e'_j > 0$ and $f_j \in \mathbb{Z}_p[ T ]$ are irreducible distinguished polynomials. The pseudoisomorphism $M \sim \id{E}(M)$ is given by the exact sequence \begin{eqnarray} \label{psis}
1 \rightarrow K_1 \rightarrow M \rightarrow \id{E}(M) \rightarrow K_2 \rightarrow 1 , \end{eqnarray} in which the kernel and cokernel $K_1, K_2$ are finite. We define $\lambda$- and $\mu$-parts of $M$ as follows: \begin{definition}
Let $M$ be a Noetherian $\Lambda$-torsion module. The $\lambda$-part
$\id{L}(M)$ is the maximal $\Lambda$-submodule of $M$ of finite
$p$-rank. The $\mu$-part $\id{M}(M)$ is the $\mathbb{Z}_p$-torsion submodule
of $M$; it follows from the Weierstra{\ss} Preparation Theorem that
there is some $m > 0$ such that $\id{M}(M) = M[ p^m ]$. The maximal
finite $\Lambda$-submodule of $M$ is $\id{F}(M)$, its finite
part. By definition, $\id{L}(M) \cap \id{M}(M) = \id{F}(M)$.
Let the module $\id{D}(M) = \id{L}(M) + \id{M}(M)$ be the {\em
decomposed submodule} of $M$. Then for all $x \in \id{D}(M)$
there are $x_{\lambda} \in \id{L}(M), x_{\mu} \in \id{M}(M)$ such
that $x = x_{\lambda} + x_{\mu}$, the decomposition being unique iff
$\id{F}(M) = 0$. The pseudoisomorphism $M \sim \id{E}(M)$ implies
that $[ M : \id{D}(M) ] < \infty $.
If $x \in M \setminus \id{D}(M)$, the $L$- and the $D$-orders of $x$
are, respectively \begin{eqnarray} \label{dords} \ell(x) & = & \min\{ j > 0 \ : \ p^{j} x \in \id{L}(M) \}, \quad \hbox{and} \\ \delta(x) & = & \min\{ k > 0 \ : \ p^{k} x \in \id{D}(M) \} \leq \ell(x). \nonumber \end{eqnarray}
We say that a Noetherian $\Lambda$-module $M$ of $\mu$-type is {\em
rigid} if the map $\psi : M \rightarrow \id{E}(M)$ is injective. Rigid modules have the fundamental property that for any distinguished polynomial $g(T) \in \mathbb{Z}_p[ T ]$ and any $x \in M$ \begin{eqnarray} \label{rigid} g(T) x = 0 \quad \Leftrightarrow \quad x = 0. \end{eqnarray} \end{definition} Note that for CM base fields $\mathbb{M}$, the modules $\id{M}(\apr{\mathbb{M}_{\infty}})$, where $\mathbb{M}_{\infty}$ is the cyclotomic $\mathbb{Z}_p$-extension, are rigid. The following fact about decomposition is proved in the last section of the Appendix. \begin{proposition} \label{tpdeco} Let $\mathbb{M}$ be a number field, let $\mathbb{T}_n \supset \mathbb{H}(\mathbb{M}_n) \supset \mathbb{M}_n$ be the ray class fields to some fixed ray $\eu{M}_0 \subset \id{O}(\mathbb{M})$ and $M = \apr{\mathbb{M}}$ be the projective limit of the galois groups $T_n = \hbox{Gal}(\mathbb{T}_n/\mathbb{M}_n)$. Assume in addition that the following condition is satisfied by $\mathbb{M}$: if $r = p\hbox{-rk} (\id{L}(M))$ and $L_1 = \mbox{\bf N}_{\mathbb{M}_{\infty}/\mathbb{M}_1}(\id{L}(M))$ then \begin{eqnarray} \label{cstab} p\hbox{-rk} (L_1) = r \quad \hbox{ and ${\rm ord} (x) > p^2$ for all $x \in L_1 \setminus L_1^p$}. \end{eqnarray} If these hypotheses hold and $x \in M$ is such that $p x \in \id{D}(M)$, then $T^2 x \in \id{D}(M)$. \end{proposition}
We observe that the condition \rf{cstab} can easily be satisfied by replacing, if necessary, an initial field $\mathbb{M}$ with some extension, as explained in Fact \ref{b2} of Appendix 3.1.
Next we define Thaine shifts and lifts. Let $\mathbb{M}$ be a CM Galois field and consider $a = (a_n)_{n \in \mathbb{N}} \in \apr{\mathbb{M}}$, some norm coherent sequence. We assume that the norm maps $\mbox{\bf N}_{\mathbb{M}_n/\mathbb{M}_{n'}} : A_n^-(\mathbb{M}) \rightarrow A_{n'}^-(\mathbb{M})$ are surjective for all $n > n' \geq 1$ and fix some integer $m > \kappa(\mathbb{M})$ -- with $\mathbb{M} \cap \mathbb{B}_{\infty} = \mathbb{B}_{\kappa(\mathbb{M})}$ -- and a totally split prime $\eu{q} \in a_{m}$, which is inert in $\mathbb{M}_{\infty}/\mathbb{M}_m$. Let $q \in \mathbb{Q}$ be the rational prime below $\eu{q}$ and assume that $q \equiv 1 \bmod p$. Let $\mathbb{F} \subset \mathbb{Q}[ \zeta_q ]$ be the subfield of degree $p$ in the \nth{q} cyclotomic extension. We let $\mathbb{L}_n = \mathbb{M}_n \cdot \mathbb{F}$ and $\mathbb{L}_{\infty} = \mathbb{M}_{\infty} \cdot \mathbb{F}$. The tower $\mathbb{L}_{\infty}/\mathbb{L}$ is {\em the
inert Thaine shift} of the initial cyclotomic extension $\mathbb{M}_{\infty}/\mathbb{M}$, induced by $\eu{q} \in a_m$. Let $\eu{Q} \subset \mathbb{L}_m$ be the ramified prime above $\eu{q}$. According to Lemma \ref{thlift} in the Appendix, we may apply Tchebotarew's Theorem in order to construct a sequence $b = (b_n)_{n \in \mathbb{N}} \in \apr{\mathbb{L}}$ such that $b_m = [ \eu{Q} ]$ is the class of $\eu{Q}$ and $\mbox{\bf N}_{\mathbb{L}_n/\mathbb{M}_n}(b_n) = a_n$ for all $n \in \mathbb{N}$. In the projective limit, we then also have $\mbox{\bf N}_{\mathbb{L}_{\infty}/\mathbb{M}_{\infty}}(b) = a$. A sequence determined in this way will be denoted {\em a Thaine lift of
$a$}. It is not unique.
Let $F = \hbox{Gal}(\mathbb{F}/\mathbb{Q})$ be generated by $\nu \in F$; we write $s := \nu - 1$ and $\Phi_p(\nu) = \frac{(s+1)^p - 1}{s}$. By using the identity $\frac{x^p-1}{x-1} = \frac{(y+1)^p - 1 }{y} = y^{p-1} + O(p)$, where $y = x - 1$, we see that the algebraic norm verifies \begin{eqnarray} \label{anorm} \id{N} & := & \sum_{i=0}^{p-1} \nu^i = \Phi_p(\nu) = p u(s) + s^{p-1} = p + s f(s), \\ & & \quad \nonumber f \in \mathbb{Z}_p[ s ] \setminus p \mathbb{Z}_p[ s ], \quad u \in (\ZM{p^N}[ s ])^{\times}, \ \forall N > 0. \end{eqnarray}
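For the reader's convenience, the expansion behind \rf{anorm} is elementary: substituting $\nu = s + 1$ in the geometric sum gives
\begin{eqnarray*}
\id{N} \; = \; \sum_{i=0}^{p-1} (s+1)^i \; = \; \frac{(s+1)^p - 1}{s} \; = \; \sum_{j=1}^{p} \binom{p}{j} s^{j-1} \; = \; p + \binom{p}{2} s + \ldots + s^{p-1},
\end{eqnarray*}
so $f(s) = \binom{p}{2} + \binom{p}{3} s + \ldots + s^{p-2}$; its leading coefficient is $1$, whence $f \not \in p \mathbb{Z}_p[ s ]$, while $u(s) = 1 + \frac{1}{p}\binom{p}{2} s + \ldots + \frac{1}{p}\binom{p}{p-1} s^{p-2}$ has constant term $1$ and is therefore a unit.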
Since $\eu{q}$ is totally split in $\mathbb{M}_m/\mathbb{Q}$, the extensions $\mathbb{L}_n/\mathbb{Q}$ are Galois for every $n$; moreover, $F$ commutes with $\hbox{Gal}(\mathbb{M}_n/\mathbb{Q})$ and $\hbox{Gal}(\mathbb{L}_n/\mathbb{M}_n) \cong F$.
\subsection{Constructing the base field} For our proof we shall choose a base field $\mathbb{K}$ as follows. Start with some CM field $\mathbb{K}_{start}$ for which one assumes that $\mu > 0$, and let $-D$ be a quadratic non-residue modulo $p$ -- so $\lchooses{-D}{p} = -1$ -- and $\rg{k} = \mathbb{Q}[ \sqrt{-D} ]$ be a quadratic imaginary extension. Our start field should be Galois and contain the \nth{p} roots of unity. We thus require that $\mathbb{K} \supset \mathbb{K}_{start}^{(n)}[ \zeta_p, \sqrt{-D} ]$.
We additionally require that the primes that ramify in $\mathbb{K}_{\infty}/\mathbb{K}$ be totally ramified and that the norms $N_{n,m} : A(\mathbb{K}_n) \rightarrow A(\mathbb{K}_m)$ be surjective, so $\mathbb{K}_{\infty} \cap \mathbb{H}(\mathbb{K}) = \mathbb{K}$. We also require that the exponent $\exp(\id{M}(\apr{\mathbb{K}})) \geq p^2$, which can be achieved by means of Fact \ref{b2}. The first step of the construction thus consists in replacing $\mathbb{K}_{start}$ by an initial field $\mathbb{K}_{ini} = \mathbb{K}_{start}^{(n)}[ \zeta_p, \sqrt{-D} ]$. If $\exp(\id{M}(\apr{\mathbb{K}_{ini}})) = p$, then replace $\mathbb{K}_{ini}$ by some Thaine shift thereof, in order to increase the exponent. We need to fulfill the condition \rf{cstab} required in Proposition \ref{tpdeco}; for this we determine an integer $t$ as follows: let $L = \id{L}(\apr{\mathbb{K}_{ini}})$ and $r = p\hbox{-rk} (L) = \lambda(\mathbb{K}_{ini})$. Let $L_t = \mbox{\bf N}_{\mathbb{K}_{ini,\infty}/\mathbb{K}_{ini,t}}(L)$. We then require that \begin{eqnarray} \label{stab} p\hbox{-rk} (L_t) = r \quad \hbox{ and ${\rm ord} (x) > p^2$} \quad \hbox{ for all $x \in L_t \setminus L_t^p$}. \end{eqnarray}
\begin{figure}
\caption{Construction of the base-field $\mathbb{K}$}
\label{fig:generalconstr}
\end{figure}
With this, we let $\mathbb{K}'_{ini} = \mathbb{K}_{ini,t} \subset \mathbb{K}_{ini,
\infty}$. Finally, we apply Proposition \ref{tpdeco} in order to choose a further extension of $\mathbb{K}'_{ini}$ in the same cyclotomic $\mathbb{Z}_p$-extension, one that yields some simple decomposition properties for Thaine lifts.
Let $\kappa' = \kappa(\mathbb{K}'_{ini})$ and $\tilde{\tau} = \gamma^{p^{\kappa'-1}}$ generate $\tilde{\Gamma} := \hbox{Gal}(\mathbb{K}'_{ini,\infty}/\mathbb{K}'_{ini})$. Recall that $\gamma$ is a generator of $\hbox{Gal}(\mathbb{B}_{\infty}/\mathbb{Q})$, which explains the definition of $\tilde{\tau}$; finally, $\tilde{T} = \tilde{\tau}-1$ and we let $p^{b}$ be the exponent of $\id{M}(\apr{\mathbb{K}'_{ini}})$. We let $k'$ be such that $\tilde{\omega}_{k'} \in ( p^{b+1}, T^{2(b+1)} )$ and define $\mathbb{K} = \mathbb{K}'_{ini, k'} = \mathbb{K}_{ini,k'+t}$; let finally $\kappa = \kappa(\mathbb{K})$ and $\tau = \gamma^{p^{\kappa-1}}, T = \tau-1$, etc. We conclude from Proposition \ref{tpdeco} and the choice of $k'$ that \begin{eqnarray} \label{kdeco} T \cdot \apr{\mathbb{K}} \subset \id{D}(\apr{\mathbb{K}}). \end{eqnarray} Moreover: \begin{remark} \label{kp1} Suppose that $\mathbb{L}/\mathbb{K}$ is a Thaine shift and $y \in \apr{\mathbb{L}}$ is such that either \begin{itemize} \item[ 1. ] $p y = x + w$ with $x \in \iota_{\mathbb{L}/\mathbb{K}}(\apr{\mathbb{K}})$ and $w
\in \id{M}(\apr{\mathbb{L}})$, or \item[ 2. ] $p^{b+1} y \in \id{L}(\apr{\mathbb{L}})$. \end{itemize} Then $T y \in \id{D}(\apr{\mathbb{L}})$ too.
The second point is a direct consequence of the choice of $k'$ and of Proposition \ref{tpdeco}. For the first, since $x \in \apr{\mathbb{K}}$, we know that $p^b x \in \id{L}(\apr{\mathbb{K}})$, so $p^{b+1} y = p^b x - p^b w \in \id{L}(\apr{\mathbb{K}}) + \id{M}(\apr{\mathbb{L}}) \subset \id{D}(\apr{\mathbb{L}})$, and the fact follows from point 2. \end{remark}
The construction of the Thaine shift is shown in Figure \ref{fig:thaine}. \begin{figure}
\caption{Thaine shift and lift}
\label{fig:thaine}
\end{figure}
This concludes the sequence of steps for the construction of the base field $\mathbb{K}$, which are reflected in Figure \ref{fig:generalconstr}. We review the conditions fulfilled by this field: \begin{itemize} \item[ 0. ] The field $\mathbb{K}' = \mathbb{K}_{start}^{(n)}[ \zeta_p, \sqrt{-D} ]$
and $\mathbb{K}_{ini} = \mathbb{K}'_t$ with $t$ subject to \rf{stab}. \item[ 1. ] The field $\mathbb{K}$ is a Galois CM extension $\mathbb{K}/\mathbb{Q}$ which
contains the \nth{p} roots of unity and such that $\mu(\mathbb{K}) > 0$ for
the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{K}$. The primes that ramify in $\mathbb{K}_{\infty}/\mathbb{K}$ are totally ramified. \item[ 2. ] We have $T \cdot (\apr{\mathbb{K}}) \subset \id{D}(\apr{\mathbb{K}})$ and the
properties in Remark \ref{kp1} are verified. \item[ 3. ] The numbering of intermediate fields starts from $\kappa$,
where $\mathbb{K} \cap \mathbb{B} = \mathbb{B}_{\kappa}$. \item[ 4. ] The exponent $\exp(\id{M}(\apr{\mathbb{K}})) \geq p^2$. \item[ 5. ] The field $\mathbb{K}$ contains an imaginary quadratic extension
$\rg{k} = \mathbb{Q}[ \sqrt{ -D } ] \subset \mathbb{K}$ which has trivial $p$-part
of the class group.
\end{itemize}
\subsection{Plan of the paper} We choose a base field $\mathbb{K}$ as shown in the previous section and a norm coherent sequence \[ a = (a_n)_{n \in \mathbb{N}} \in \id{M}(\apr{\mathbb{K}}) \setminus \left( p \cdot
\apr{\mathbb{K}} + (p, T) \id{M}(\apr{\mathbb{K}})\right). \] We note that condition 0. in the choice of $\mathbb{K}_{ini}$ readily implies that ${\rm ord} (a) = {\rm ord} (a_1)$, so $({\rm ord} (a)/p) \cdot a_1 \neq 0$ and thus \begin{eqnarray}
\label{abase}
( {\rm ord} (a)/p ) \cdot a \in \id{M}(\apr{\mathbb{K}})[ p ] \setminus T \id{M}(\apr{\mathbb{K}})[ p ]. \end{eqnarray} We let $o:=o_T(a) \geq 0 $ be such that $a \in (T^o) \cdot \apr{\mathbb{K}} \setminus (T^{o+1}) \cdot \apr{\mathbb{K}}$.
In the second Chapter, we build a Thaine shift with respect to a prime $\eu{q} \in a_m$ together with a lift $b$ of $a$, and derive the main cohomological properties of the shifted extension. The most important facts are the decomposition $T x \in \id{D}(\apr{\mathbb{L}})$ for all $x \in \apr{\mathbb{L}}$ with $\ell(x) \leq p \cdot \exp(\id{M}(\apr{\mathbb{K}}))$ and the vanishing of the Tate cohomology $\widehat{H}^0(F, \apr{\mathbb{L}})$. Based on this and the fact that $T b$ is decomposed while $s b_m = 0$, $b_m$ being the class of a ramified ideal, we obtain in Chapter 3 a sequence of algebraic consequences which eventually lead to the fact that $T a = \id{N}(T b) \in \omega_m \id{M}(\apr{\mathbb{K}})$; since this holds for arbitrary choices of $m$, independently of $a$, we obtain a contradiction to \rf{abase}, which proves the Iwasawa conjecture.
The paper is written so that the main ideas of the proof can be presented in the main part of the text, leading in an efficient way to the final proof. The technical details and auxiliary results are worked out in full detail in the appendices.
\section{Thaine shift and proof of the Main Theorem} We have selected in the first Chapter a base field $\mathbb{K}$ which is CM and endowed with a list of properties. Consider the $\mathbb{F}_p[[ T ]]$-module $P := \apr{\mathbb{K}}/(p)$ and let $\pi : \apr{\mathbb{K}} \rightarrow P$ be the natural projection. Then for any $a \in \id{M}(\apr{\mathbb{K}}) \setminus p \cdot \apr{\mathbb{K}}$ there is some integer $o_T(a) \geq 0$ such that the image $\pi(a) \in \apr{\mathbb{K}}/(p)$ verifies $\pi(a) \in T^{o_T(a)} \pi(\apr{\mathbb{K}})$. We choose $a \in \id{M}(\apr{\mathbb{K}}) \setminus p \cdot \apr{\mathbb{K}}$ with the minimal value of $o_T(a)$; let $m > \kappa(\mathbb{K})$ be such that \begin{eqnarray} \label{mdef} \deg(\omega_m) > 2 (o_T(a) + 1) \end{eqnarray} and $\eu{q} \in a_m$ be a totally split prime. Let $q \in \mathbb{N}$ be the rational prime below $\eu{q}$; since $\mathbb{K}$ contains the \nth{p^m} roots of unity, it follows that $q \equiv 1 \bmod p^m$. We let $\mathbb{L} = \mathbb{K} \cdot \mathbb{F}; \mathbb{L}_{\infty} = \mathbb{K}_{\infty} \cdot \mathbb{F}$ be the Thaine shift induced by $\eu{q}$, as described in \S 1.2, and let $b \in \apr{\mathbb{L}}$ be a Thaine lift of $a$.
For $C$ some $\mathbb{Z}_p[ s ]$-module, we use the Tate cohomologies associated to $C$, defined by \begin{eqnarray} \label{tc} \hat{H}^0(F, C) & = & \hbox{ Ker }(s : C \rightarrow C)/(\id{N} C), \\ \hat{H}^1(F, C) & = & \hbox{ Ker }(\id{N} : C \rightarrow C)/(s C). \nonumber \end{eqnarray} The notation introduced here will be kept throughout the paper.
Let $B'_n \subset A^-(\mathbb{L}_n)$ be the submodule spanned by the classes of primes that ramify in $\mathbb{L}_n/\mathbb{K}_n$. By choice of $\mathbb{L}$, these are the primes above $q$ and consequently $B'_n = \iota_{m,n}(B'_m)$ for all $n > m$. Here $\iota_{m,n} : A^-(\mathbb{L}_m) \rightarrow A^-(\mathbb{L}_n)$ is the natural lift map. We let $p^v$ be the exponent of $B'_m$, so $p^v B'_n = 0$ for all $n \geq m$.
Since $B'_n$ is constant up to isomorphism for all $n > m$, the vanishing of $\hat{H}^0(F, \apr{\mathbb{L}})$ is a straightforward consequence of:
\begin{lemma} \label{lh0} \begin{eqnarray} \label{h0} \hbox{ Ker }(s : \apr{\mathbb{L}} \rightarrow \apr{\mathbb{L}}) = \iota(\apr{\mathbb{K}}) \end{eqnarray} In particular, $\hat{H}^0(F, \apr{\mathbb{L}}) = 0$. \end{lemma} \begin{proof}
Consider $x = (x_n)_{n \in \mathbb{N}} \in \hbox{ Ker }( s : \apr{\mathbb{L}} \rightarrow
\apr{\mathbb{L}})$ and let $N > m + b$. Let $\eu{X} \in x_N$ be a prime:
then $(\eu{X}^{s(1-\jmath)}) = (\xi^{1-\jmath})$, for some $\xi \in
\mathbb{L}_N$ and $\id{N}(\xi^{1-\jmath}) \in \mu(\mathbb{K}_N)$. Since $\eu{q}_m$
is inert in $\mathbb{K}_N/\mathbb{K}_m$, we have $\id{N}(\mathbb{L}_N) \cap \mu(\mathbb{K}_N)
\subset \mu(\mathbb{K}_N)^p$. We may thus assume, after eventually modifying
$\xi$ by a root of unity, that $\id{N}(\xi^{1-\jmath}) =
1$. Hilbert's Theorem 90 implies that there is some $\alpha \in
\mathbb{L}_N$ such that \[ \eu{X}^{s(1-\jmath)} = (\xi^{1-\jmath}) = (\alpha^{1-\jmath})^s \quad \Rightarrow \quad
(\eu{X}/(\alpha))^{(1-\jmath)s} = (1). \]
The ideal $\eu{Y} := (\eu{X}/(\alpha))^{1-\jmath} \in x_N^2$ verifies $\eu{Y}^s = (1)$. If $x_N \not \in \iota(\apr{\mathbb{K}})$, then $\eu{Y}$ must be a product of ramified primes, so $x_N^2 \in B'_N + \iota_{\mathbb{K}_N, \mathbb{L}_N}(A_N^-(\mathbb{K}))$. Recall that $B'_N = \iota_{m,N}(B'_m)$ is spanned by the classes of the ramified primes and $p^v B'_N = 0$. In particular $x_N^2 \in B'_N + \iota_{\mathbb{K}_N,
\mathbb{L}_N}(A_N^-(\mathbb{K}))$ and $B'_N = \iota_{m,N}(B'_m)$ imply that \[ x_{N-v} = \mbox{\bf N}_{N, N-v} (x_N) \in \iota_{\mathbb{K}_{N-v}, \mathbb{L}_{N-v}}(A^-(\mathbb{K}_{N-v})) + {B'_m}^{p^v} = \iota_{\mathbb{K}_{N-v}, \mathbb{L}_{N-v}}(A^-(\mathbb{K}_{N-v})).\] This happens for all $N > m + v$, so $x \in \iota_{\mathbb{K}, \mathbb{L}}(\apr{\mathbb{K}})$, as claimed. \end{proof}
Note that at finite levels we have \[ \hat{H}^0(F,A_n^-(\mathbb{L})) \cong B'_n/(B'_n \cap
\iota_{\mathbb{L}/\mathbb{K}}(A_n^-(\mathbb{K}))) \neq 0. \] The Herbrand quotient of finite groups is trivial, so $| \hat{H}^0(F,A_n^-(\mathbb{L})) | = | \hat{H}^1(F,A_n^-(\mathbb{L})) |$. In the projective limit, however, $\widehat{H}^1(F, \apr{\mathbb{L}}) \neq 0$, so the equality is not maintained. In Appendix \S 3.3 we nevertheless prove the following result:
\begin{lemma} \label{hord} Suppose that $v \in \id{M}(\apr{\mathbb{L}})$ has non-trivial image in the Tate group $\widehat{H}^1\left( F, \id{M}(\apr{\mathbb{L}}) \right)$. Then either ${\rm ord} (v) > p \exp(\id{M}(\apr{\mathbb{K}}))$ or $T^2 v \in s \apr{\mathbb{L}}$. \end{lemma}
We turn now our attention to the decomposition of the Thaine lift $b$; we prove in the Appendix 3.3 the following \begin{proposition} \label{px} Let $F = < \nu >$ be a cyclic group of order $p$ acting on the $p$-abelian group $B$ and let $x \in B$ be such that $y = \id{N}(x)$ has order ${\rm ord} (y) = q = p^l > p$. Then ${\rm ord} (x) \leq p q$. \end{proposition}
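As a plausibility check for Proposition \ref{px}, consider the extreme case in which $F$ acts trivially on $B$: then the norm $\id{N} = \sum_{i=0}^{p-1} \nu^i$ acts as multiplication by $p$, so $y = \id{N}(x) = p x$ and hence ${\rm ord} (x) = p \cdot {\rm ord} (y) = p q$, attaining the bound of the proposition.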
Proposition \ref{px} implies \begin{corollary} \label{bdeco} Let $y \in \apr{\mathbb{L}}$ be such that $\ell(\id{N}(y)) = q > p$. Then $q \leq \ell(y) \leq p q$ and $T y \in \id{D}(\apr{\mathbb{L}})$. All these facts hold in particular for any Thaine lift $b$. In this case, one has additionally ${\rm ord} ( s b ) \leq {\rm ord} (b) \leq p q$. \end{corollary} \begin{proof}
Let $f(T) \in \mathbb{Z}_p[ T ]$ be a distinguished polynomial that
annihilates $p^{\ell(y)} y \in \id{L}(\apr{\mathbb{L}})$ and let $\beta =
f(T) y$. Then $\alpha := \id{N}(y)$ has the order $q > p$, by
hypothesis, so we may apply the Proposition \ref{px}. This implies
that ${\rm ord} (\beta) \leq p q$ and thus $\ell(y) \leq p q \leq p
\exp(\id{M}(\apr{\mathbb{K}}))$, by definition of this order. The first claim
in Remark \ref{kp1} implies that $T y \in \id{D}(\apr{\mathbb{L}})$. Since
${\rm ord} (a) = {\rm ord} (\id{N}(b)) > p$ by choice of $a$, the statement
applies in particular to any Thaine lift $b$. In this case, we know
that $p b_m = a_m$ and ${\rm ord} (a_m) = q$, hence ${\rm ord} (b) \geq {\rm ord} (b_m) = p q$, and thus ${\rm ord} (b) = p q$. We obviously have ${\rm ord} (s b) \leq {\rm ord} (b)$. \end{proof}
\subsection{The vanishing of $\mu$} Since $s b_m = 0$, Theorem VI of Iwasawa \cite{Iw} implies that there is some $c \in \apr{\mathbb{L}}$ such that $s b = \nu_{m,1} c$. Then $\nu_{m,1} (\id{N}(c) ) = 0$ and Fact \ref{nonu} in the Appendix implies that $\id{N}(c) = 0$. Moreover, \[ p^{\ell(b)} c = p^{\ell(b)} (\nu_{m,1} s b ) \in \id{L}(\apr{\mathbb{L}}) \] so, by Corollary \ref{bdeco}, $\ell(c) \leq p q$ too, and thus $T c \in \id{D}(\apr{\mathbb{L}})$. Let $T c = c_{\lambda} + c_{\mu}$. Since $\id{L}(\apr{\mathbb{K}}) \cap \id{M}(\apr{\mathbb{K}}) = 0$, it follows that $\id{N}(c_{\lambda}) = \id{N}(c_{\mu}) = 0$, individually. We have, by comparing parts, $s b_{\mu} = \nu_{m,1} c_{\mu}$, so $p q \cdot \nu_{m,1} c_{\mu} = 0$, and since $\id{M}(\apr{\mathbb{L}})$ is rigid, \rf{rigid} implies that ${\rm ord} (c_{\mu}) \leq p q$. We may thus apply Lemma \ref{hord}, which implies that $T^2 c_{\mu} \in s \id{M}(\apr{\mathbb{L}})$, say $T^2 c_{\mu} = s x$; then $s (T^2 b_{\mu} - \nu_{m,1} x) = 0$ and Lemma \ref{lh0} implies that $T^2 b_{\mu} = \nu_{m,1} x + z, z \in \iota(\apr{\mathbb{K}})$. By taking norms we obtain $T^3 a = \nu_{m,1} x + p z$. This implies $o_T(a)+3 \geq \deg(\nu_{m,1})$ and we can choose $m$ large enough to obtain a contradiction. This confirms the claim of the Main Theorem.
\section{Appendix} In the Appendix, unless otherwise specified, the notation used in the various Facts and Lemmata is the one used in the section where these are invoked in the text. In the next section we provide a list of disparate useful facts: \subsection{Auxiliary facts} We start by proving that if $\mu >0$ for some number field, then it is also non-trivial for finite extensions thereof. \begin{fact} \label{blow} Let $K$ be a number field for which $\mu(K) \neq 0$ and $L/K$ be a finite extension, which is Galois over $K$. Then $\mu(L) \neq 0$. \end{fact} \begin{proof}
If $M \subset L$ has degree coprime to $p$, then $\hbox{ Ker }( \iota : A(K)
\rightarrow A(M)) = 0$, so we may reduce the proof to the case of a cyclic
Kummer extension of degree $p$. Let $M = L^{\hbox{\tiny Gal}(L/K)_p}$ be the
fixed field of some $p$-Sylow subgroup of $\hbox{Gal}(L/K)$. Then $p$ does
not divide $[ M : K ]$, so $\hbox{ Ker }(\iota : A^-(K) \rightarrow A^-(M)) = 0$, and thus $\mu(M) \neq 0$. We may assume
without loss of generality, that $M$ contains the \nth{p} roots of
unity. Since $p$-Sylow groups are solvable, the extension $L/M$
arises as a sequence of cyclic Kummer extensions of degree $p$. It
will thus suffice to consider the case in which $k$ is a number
field with $\mu \neq 0$ and containing the \nth{p} roots of unity
and $k' = k[ a^{1/p} ]$ is a cyclic Kummer extension of degree $p$.
We claim that under these premises, $\mu(k') \neq 0$. Let $k_n
\subset k_{\infty}$ and $k'_n \subset k'_{\infty}$ be the
intermediate fields of the cyclotomic $\mathbb{Z}_p$-extensions, let $\nu$
generate $\hbox{Gal}(k'/k)$. Let $F/k_{\infty}$ be an abelian unramified
extension with $\hbox{Gal}(F/k_{\infty}) \cong \mathbb{F}_p[[ T ]]$; such an
extension must exist, as a consequence of $\mu > 0$. There is thus
for each $n > 0$ a $\delta_n \in k_{n}^{\times}$ such that $F_n =
k_{n}\left[ \delta_n^{\mathbb{F}_p[[ T ]]/p} \right]$ is an unramified
extension with Galois group $G_n = \hbox{Gal}(F_n/k_n)$ of $p$-rank $r_n
:= p\hbox{-rk} (G_n) > p^{n-c}$ for some $c \geq 0$. We define $F'_n = F_n[
a^{1/p} ]$ and let $\overline{F}'_n \supset F'_n$ be the maximal
subextension which is unramified over $k'_n$. We have
$\overline{F}'_n \supseteq F_n$ and thus $p\hbox{-rk} (\hbox{Gal}( \overline{F}'_n
/ k'_n ) ) \geq p\hbox{-rk} (\hbox{Gal}(F_n/k_n)) \rightarrow \infty$. Consequently,
$k'_{\infty}$ has an unramified elementary $p$-abelian extension of
infinite rank, and thus $\mu(k') > 0$, which completes the proof. \end{proof}
\begin{fact} \label{b2} Let $\mathbb{K}$ be a CM extension with $\mu > 0$. Then it is possible to build a further CM extension $\mathbb{L}/\mathbb{K}$ with $\exp(\id{M}^-(\mathbb{L})) > p^2$. \end{fact} \begin{proof}
We have shown in Fact \ref{blow} that we may assume that $\mu_{p^3}
\subset \mathbb{K}$. Let $a = (a_n)_{n \in \mathbb{N}} \in \id{M}^-(\mathbb{K})$ and $\eu{q}
\in a_2$, with $a_2 \neq 0$, be a totally split prime which is inert
in $\mathbb{K}_{\infty}/\mathbb{K}_2$. Let $\mathbb{L}/\mathbb{K}_2$ be the inert Thaine shift of
degree $p^2$ induced by $\eu{q}$, let $b_2 = [ \eu{Q}^{(1-\jmath)/2}
]$ be the class of the ramified prime of $\mathbb{L}$ above $\eu{q}$ and $b
= (b_m)_{m \in \mathbb{N}}$ be a norm coherent sequence that extends $b_2$, such that
$N_{\mathbb{L}/\mathbb{K}}(b) = a$. Then $b \not \in \id{L}^-(\mathbb{L})$ and there is
some polynomial $f(T) \in \mathbb{Z}_p[ T ]$ such that $f(T) b \in
\id{M}^-(\mathbb{L})$, while $\mbox{\bf N}_{\mathbb{L}/\mathbb{K}}(f(T) b) = f(T) a$. The
capitulation kernel $\hbox{ Ker }(\iota : A^-(\mathbb{K}_n) \rightarrow A^-(\mathbb{L}))$ is
trivial and consequently ${\rm ord} (T f(T) b) \geq p^2 {\rm ord} (a)$; hence $\exp(\id{M}^-(\mathbb{L})) > p^2$. Thus $\mathbb{L}$ verifies the claimed properties. \end{proof} The following fact was proved by Sands in \cite{Sd}: \begin{fact}
\label{sa}
Let $\mathbb{L}/\mathbb{K}$ be a $\mathbb{Z}_p$-extension of number fields in which all the
primes above $p$ are completely ramified. If $F(T) \in \mathbb{Z}_p[ T ]$ is
the minimal annihilator polynomial of $\id{L}(\mathbb{L})$, then $(F, \nu_{n,1})
= 1$ for all $n > 1$. \end{fact} As a consequence, \begin{corollary} \label{nonu} Let $\mathbb{K}$ be a CM field and suppose that $x \in \apr{\mathbb{K}}$ verifies $\nu_{n,1} x = 0$. Then $x = 0$. \end{corollary} \begin{proof}
Let $q$ be the exponent of the $\mathbb{Z}_p$-torsion of $\apr{\mathbb{K}}$. It
follows then that $q x \in \id{L}(\apr{\mathbb{K}})$ is annihilated by
$\nu_{n,1}$, so the Fact \ref{sa} implies that $q x = 0$ and thus $x
\in \id{M}(\apr{\mathbb{K}})$. Since $\nu_{m,1} x = 0$ it follows that $x =
0$, as claimed. \end{proof}
\subsection{Applications of the Tchebotarew Theorem} We prove the existence of Thaine lifts. \begin{lemma} \label{thlift} Let $\mathbb{K}$ be a CM field and $a = (a_n)_{n \in \mathbb{N}} \in \apr{\mathbb{K}}$ and $\mathbb{L} = \mathbb{K} \cdot \mathbb{F}$ be a Thaine shift induced by a split prime $\eu{q} \in a_m$. Then there is a Thaine lift $b = (b_n)_{n \in \mathbb{N}} \in \apr{\mathbb{L}}$ with the properties that $\id{N}(b) = a$ and $b_m$ is the class of the ramified prime $\eu{Q} \subset \mathbb{L}$ above $\eu{q}$. \end{lemma} \begin{proof}
We prove by induction that for each $n \geq 1$ there is a class $b_n
\in A_n^-(\mathbb{L})$ with $\id{N}(b_n) = a_n$ and $N_{n,n-1}(b_n) =
b_{n-1}$. The claim holds for $n \leq m$ by definition. Assume that
it holds for $n \geq m$ and consider minus parts of the maximal
$p$-abelian unramified extensions
\[ \mathbb{H}^-(\mathbb{K}_n)/ \mathbb{K}_n, \mathbb{H}^-(\mathbb{L}_n)/\mathbb{L}_n, \mathbb{H}^-(\mathbb{L}_{n+1})/\mathbb{L}_{n+1}, \quad \hbox{ and } \quad \mathbb{H}^-(\mathbb{K}_{n+1})/\mathbb{K}_{n+1}. \]
For $\mathbb{H}' \in \{ \mathbb{H}^-(\mathbb{K}_n), \mathbb{H}^-(\mathbb{L}_n), \mathbb{H}^-(\mathbb{K}_{n+1}) \}$, we obviously have $\mathbb{H}' \cdot
\mathbb{L}_{n+1} \subset \mathbb{H}^-(\mathbb{L}_{n+1})$. For some unramified
Kummer extension $M/K$ we denote by $\varphi(x) = \lchooses{M/K}{x}$
the Artin symbol of a class or of an ideal. The induction
hypothesis implies that both $\varphi(b_n)$ and $\varphi(a_{n+1})$
restrict to $\varphi(a_n) \in \hbox{Gal}(\mathbb{H}^-(\mathbb{K}_n)/\mathbb{K}_n)$, and since
$A^-_{n+1}(\mathbb{K})$ and $A^-_n(\mathbb{L})$ surject by the respective norms to
$A^-(\mathbb{K}_n)$, it follows that
\[ \mathbb{L}_{n+1} \mathbb{H}^-(\mathbb{K}_{n+1}) \cap \mathbb{L}_{n+1} \mathbb{H}^-(\mathbb{L}_n) = \mathbb{L}_{n+1}
\mathbb{H}^-(\mathbb{K}_n). \] There is thus some automorphism $x \in
\hbox{Gal}(\mathbb{H}^-(\mathbb{L}_{n+1})/\mathbb{L}_{n+1})$, such that \[ x \big \vert_{\hbox{\tiny Gal}(\mathbb{H}^-(\mathbb{L}_n)/\mathbb{L}_n)} = \varphi(b_n) \quad \hbox{ and } \quad
x \big \vert_{\hbox{\tiny Gal}(\mathbb{H}^-(\mathbb{K}_{n+1})/\mathbb{K}_{n+1})} = \varphi(a_{n+1}). \] By Tchebotarew, there are infinitely many totally split primes $\eu{R} \subset \mathbb{L}_{n+1}$ with Artin symbol $\lchooses{\mathbb{H}^-(\mathbb{L}_{n+1})/\mathbb{L}_{n+1}}{\eu{R}} = x$, and by letting $b_{n+1} = [ \eu{R} ]$ for such a prime, we have $\id{N}(b_{n+1}) = a_{n+1}$ and $N_{n+1,n}(b_{n+1}) = b_n$. We obtain by induction a norm coherent sequence $b = (b_n)_{n \in \mathbb{N}}$ which verifies $b_m = [ \eu{Q} ]$ and $\id{N}(b) = a$, and this completes the proof. \end{proof}
\subsection{Proof of Proposition \ref{px} and related results} Let the notations be like in the statement of the Proposition and let $q = p^k = {\rm ord} (y)$ and $r \geq q$ be the order of $x$ and $\rg{R} = \ZM{r}[ s ]$. Then $\rg{R}$ has maximal ideal $(p, s)$ and $s$ is nilpotent. By definition, $\rg{R}$ acts on $x$ and we consider the modules $X = \rg{R} x$ and $Y = \ZM{q'} y \subset X$.
With these notations, we are going to prove that $p q x = 0$. We start by proving \begin{lemma} \label{h0fin} Under the given assumptions on $x, y$ and with the notations above, \[ \hat{H}^0(F, X) = 0, \] or $q p x = 0$. Moreover, $s X \cap Y \cong \mathbb{F}_p \subset X[ s, p ]$ in all cases, while if $\hat{H}^0(F, X) \neq 0$, the bi-torsion $X[ s, p ] \cong \mathbb{F}_p^2$ and $q' x \in X[ s, p ] \setminus s X$. \end{lemma} \begin{proof}
Let $N$ be a sufficiently large integer,
such that $p^N x = 0$ and let $R = \ZM{p^N}$, so $X$ is an
$R[ s ]$-module. We note that in $R' := R/(\id{N})$ the maximal ideal is
$(s)$, thus a principal nilpotent ideal, as one can verify from the
definition of the norm $\id{N} = \sum_{i=0}^{p-1} \nu^i$. Indeed, in
$R$ we have the congruence $p \equiv s^{p-1} \cdot v(s) \bmod \id{N}$, $v(s)
\in R^{\times}$, so the image of $p$ in $R'$ is a unit multiple of a power of $s$,
hence the claim. As a consequence, in any finite cyclic $R'$-module $M$
we have $M[ p, s ] \cong \mathbb{F}_p$.
We consider the two modules $N_1 = \mathbb{Z}_p x$ and $N_2 = R s x$, such that
$X = N_1 + N_2$, and only $N_2$ is an $R$-module, and in fact an $R'$-
module, since it is annihilated by $\id{N}$\footnote{This proof is
partially inspired by some results in the Ph. D. Thesis of Tobias
Bembom \cite{Be}}. Note that $q N_1$ is also annihilated by the norm;
this covers also the case when $q x = 0$, so $q N_1 = 0$.
Since $X = N_1 + N_2$, it follows that
$X[ p ] = N_1[ p ] + N_2[ p ]$.
We always have that $N_2[ s ] \cong \mathbb{F}_p$, since $N_2$ is an
$R'$-module and $(s)$ is the maximal ideal of $R'$. Let $K := X[ s ]
= \hbox{ Ker }( s : X \rightarrow X )$ and assume that $H_0 := \hat{H}^0(F, X) \neq
0$; then the bi-torsion $X[p, s] = X[ s ][ p ]$ is not
$\mathbb{F}_p$-cyclic. It follows that $X[ p, s] \not \cong N_1 \cap X[ p , s ] \cong \mathbb{F}_p$.
Let $r = q p ^e = {\rm ord} (x)$, with $e \geq 0$. Note that
\[ y_0 := (q/p) y = q u(s) x + (q/p) s^{p-1} x \in X[ p, s ], \] for
all values of $e$. If $e = 0$, then $y_0 = (q/p) s^{p-1} x \in N_2[ p,
s ] \setminus \{ 0 \}$. But then $(q/p) s x \neq 0$ and thus $N_1[ p, s ] =
0$ so $\widehat{H}(F, X) = 0$ in this case. Assume that $e > 0$ and $y_0
\in N_2$. Since $y_0 = q x + (q/p) f(s) ( s x )$ and the last term,
$y_1:=(q/p) f(s) ( s x ) \in N_2$, it follows that $q x = y_0-y_1
\in N_2$, so $\widehat{H}(F, X) = 0$ in this case too. It remains that if
$\widehat{H}(F, X) \neq 0$, then $y_0 \in N_1 \setminus N_2$ and $e > 0$.
In this case $N_1 \cap N_2 = 0$ and $X = N_1 \oplus N_2$. We have $(q/p)
(p^e x - c y) = 0$ for some $c$ with $(c, p) = 1$. Then there is some $w \in X[
q/p ]$ with $p^e x = c y + w$, and the decomposition $X = N_1 \oplus
N_2$ induces a decomposition of the $q/p$-torsions, so we may write
$w = a (q/p) x + b v(s) s^N x$, with $N > 0$ and $a, b \in \{ 0, 1,
\ldots, p-1\}$, $v(s) \in {R'}^{\times}$. Taking norms in the
identity $(p^e - a (q/p) ) x = c y + b v(s) s^N x$ we obtain
\[ p y \cdot ( c - p^{e-1} + a (q/p^2) ) = 0.\] We have assumed $q >
p^2$, so for $e > 1$ the cofactor of $p y$ is a unit, which implies $p
y = 0$, in contradiction with ${\rm ord} (y) = q > p^2$. Therefore $e = 1$
and
\begin{eqnarray}
\label{hneq1}
y = p x + v(s) s^{p-1} x, \quad \hbox{ and } \quad (q/p) s^{p-1} x = 0.
\end{eqnarray}
Consequently, if $\widehat{H}(F, X) \neq 0$, then ${\rm ord} (x) = q p$ and
there is an $N < (p-1) k$ such that $N_2[ s ] = \mathbb{F}_p \cdot (s^N x)$.
This completes the proof.
\end{proof}
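The computations above repeatedly use the group-ring expansion $\id{N} = \sum_{i=0}^{p-1} (1+s)^i = p\, u(s) + s^{p-1}$ with $u(0) = 1$, so that $u(s)$ is a unit wherever $s$ is nilpotent. A minimal numerical sanity check of this expansion (illustrative only; the helper names are ours, not from the text):

```python
from math import comb

def norm_coeffs(p):
    # Coefficients of N(s) = sum_{i=0}^{p-1} (1+s)^i = ((1+s)^p - 1)/s,
    # listed from the constant term up: N(s) = sum_k C(p, k+1) s^k.
    return [comb(p, k + 1) for k in range(p)]

def unit_part(p):
    coeffs = norm_coeffs(p)
    # N(s) = p*u(s) + s^{p-1}: all coefficients below degree p-1 are
    # divisible by p, and the leading coefficient is 1.
    assert all(c % p == 0 for c in coeffs[:-1]) and coeffs[-1] == 1
    u = [c // p for c in coeffs[:-1]]
    assert u[0] == 1  # u(0) = 1, so u(s) is a unit where s is nilpotent
    return u

for p in (3, 5, 7):
    unit_part(p)
```

For $p = 3$ this recovers $\id{N} = 3 + 3s + s^2$, i.e. $u(s) = 1 + s$.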
We can now assume that $\hat{H}^0(F, X) = 0$ and since $X$ is finite, the Herbrand quotient vanishes and $\hat{H}^1(F, X) = 0$. There is thus an exact sequence of $\mathbb{Z}_p[ s ]$-modules \[ 0 \rightarrow X[ p ] \rightarrow X \rightarrow X \rightarrow X/p X \rightarrow 0, \] in which $X/ p X$ is cyclic generated by $x$ and the arrow $X \rightarrow X$ is the multiplication by $p$ map $ \cdot p$.
Since $ \hat{H}^0(F, X) = 0$, it follows that $\hbox{ Ker }(s : X \rightarrow X)[ p ] \cong \mathbb{F}_p$ and $X[ p ]$ is $\mathbb{F}_p[ s ]$-cyclic, as is $X/p X$. Using this observation, we provide now the proof of the Proposition. \begin{proof}
There is some distinguished polynomial $\phi \in \rg{R}$ with
$\phi(s) X = 0$ and $\deg(\phi) = p\hbox{-rk} (X)$. Indeed, let $d =
p\hbox{-rk} (X)$ and $\overline{x} \in X/p X$ be the image of $x$, so $(s^i
x)_{i=0}^{d-1}$ have independent images in $X/p X$ by definition of
the rank, and they span $X$ as a consequence of the Nakayama
Lemma. Therefore $s^d x \in p X$ and there is a monic distinguished
annihilator polynomial
\[ \phi(s) = s^d - p^e h(s) \] of $X$, with $e \geq 0$ and $h$ a
polynomial of $\deg(h) < \deg(\phi) \leq p$, which is not
$p$-divisible. Note that, by minimality of $d$, the case $e = 0$
only occurs if $\phi(s) = s^d$. We shall distinguish several cases
depending on the degree $d = \deg(\phi)$ and on properties of
$h(s)$.
We consider the cases in which $d < p$ first. Assume that $s^c x =
0$ for some $c < p$, so $h = 0$. Upon multiplication with
$s^{p-1-c}$ we obtain $s^{p-1} X = 0$. Thus $\id{N} ( x ) = y =
(s^{p-1} + p u(s)) x = p u(s) x$, and thus $u^{-1}(s) y = p x$, which
settles this case.
We can assume now that $\phi(s) = s^d - p^e h(s)$ and $e > 0$, with $h(s)$
a non trivial distinguished polynomial of degree $\deg(h) < d < p$. As a consequence, if $d < p-1$, then \begin{eqnarray*} y & = & \id{N}(x) = p u(s) x + s^{p-1} x = p (u(s) + s^{p-1-d} p^{e-1} h(s)) x \\ & = & p v(s) x, \quad v(s) = u(s) + s^{p-1-d} p^{e-1} h(s)\in \rg{R}^{\times}. \end{eqnarray*} Hence $v^{-1}(s) y = p x$, which confirms the claim in this case.
If $d = p-1$, then $s^{p-1} x = p \cdot (p^{e-1} h(s)) x$ and thus \[ y = (p u(s) + s^{p-1}) x = p(u(s) + p^{e-1} h(s)) x; \] if the expression in the brackets is a unit, we may conclude like before. Otherwise, $e=1$ and $h(s) = -1 + s h_1(s)$, and thus $y \in s X$, so $\id{N}(y) = p y = 0$, in contradiction with the assumption\footnote{At this point the assumption that ${\rm ord} (y) > p$
plays a crucial role, and if it were not to hold, modules such that
${\rm ord} (x)$ becomes arbitrarily large are conceivable} that ${\rm ord} (y) > p$.
The case $d = p$ is more involved. We claim that ${\rm ord} (x) < p^2 {\rm ord} (y)$. Assume that this is not the case. Since $d = p$, we have $p\hbox{-rk} (X) = p$ and thus $s^{p-1} x = y - p x u(s) \not \in p X$. In particular $y \not \in p X$. Let $q = p^{k} = {\rm ord} (y)$ and assume that ${\rm ord} (x) = p^{e+k} = p^e \cdot {\rm ord} (y), e > 1$. We note that ${\rm ord} (s^{p-1} x) = p^{e-1} q$, since $s^{p-1} u^{-1}(s) x = - p x + y$ has annihilator $p^{e-1} q$. Consider the generators $s^j x$ of $X$; there is an integer $j$ in the interval $0 \leq j < p-1$, such that \[ {\rm ord} (s^j x) = {\rm ord} (x) > {\rm ord} (s^{j+1} x) = {\rm ord} (s^{p-1} x) = {\rm ord} (x)/p. \] Recall from Lemma \ref{h0fin}, that we are in the case when $X[ p ]$ is a cyclic $\mathbb{F}_p[ s ]$ module of dimension $p$ as an $\mathbb{F}_p$-vector space, and $\widehat{H}^0(F, X) = 0$. Let \begin{eqnarray*}
\id{F}_0 & := & \{ q p^{e-1} s^i x \ : \ i = 0, 1, \ldots, j \} \subset X[ p ], \\
\id{F}_1 & :=& \{ q p^{e-2} s^{j+i} x \ : \ i = 1, 2, \ldots, p-j-1 \} \subset X[ p ] , \end{eqnarray*} and $\id{F} = \id{F}_0 \cup \id{F}_1$. Then $\id{F}_i \subset X[ p ]$ are $\mathbb{F}_p$-bases of some cyclic $\mathbb{F}_p[ s ]$ submodules $F_0, F_1 \subset X[ p ]$ with $\dim_{\mathbb{F}_p}(F_0) \leq j+1$ and $\dim_{\mathbb{F}_p}(F_1) \leq p-(j+1)$. We claim that $X[ p ] = F_0 \oplus F_1$. For each $z \in X[ p ]$ there is some maximal $z' \in X$ -- thus $z'$ having non-trivial image $0 \neq \overline{z}' \in X/p X$ -- and such that $z = q' z'$ for some $q' \in p^{\mathbb{N}}$. Since the generators of $X/p X$ are mapped this way to $F = F_0 + F_1$, which is an $\mathbb{F}_p$-vector space, it follows by linearity that $F = X[ p ]$. Comparing dimensions, we find that $F_0 \cap F_1 = 0$ so there is a direct sum $F = F_0 \oplus F_1 = X[ p ]$, as claimed.
Note that \[ 0 \neq (q/p) y = q x + (q/p) s^{p-1} u^{-1}(s) x \in X[ p ][ s ] ; \] upon multiplication with $p^{e-1} \geq p$ we obtain \begin{eqnarray} \label{s1} 0 & = & q p^{e-2} y = q p^{e-1} u(s) x + q p^{e-2} s^{p-1} x \nonumber \\ & = & q p^{e-1} (u_0(s) + u_1(s) ) x + q p^{e-2} s^{p-1} x, \end{eqnarray} where $u_0 \in F_0$ and $u_1 \in F_1$ are the projections of the unit $u(s)$, thus \[ u_0(s) \equiv \frac{\sum_{i=0}^{j} s^i \binom{p}{i}}{p} \bmod \ p, \quad u_1(s) \equiv \frac{\sum_{i=j+1}^{p-2} s^i \binom{p}{i}}{p} \bmod \ p . \] By definition of $j$, it follows that
\[ f_0 := q p^{e-1} u_0(s) x \in F_0, \quad \ f_1 := s^{p-1} q
p^{e-2} x \in F_1, \] and $q p^{e-1} u_1(s) x = 0$. The identity in
\rf{s1} becomes $f_0 + f_1 = 0$, and since $F_0 \cap F_1 = 0$ and
$f_i \in F_i, \ i = 0, 1$, it follows that $f_0 = f_1 = 0$. However,
$f_1 = s^{p-1} q p^{e-2} x \in \id{F}_1$ is a basis element which generates
$F_1[ s ]$, so it cannot vanish. The contradiction
obtained implies that we must have in this case also ${\rm ord} (x) \leq p q$.
Consequently, ${\rm ord} (x) \leq p \cdot {\rm ord} (y)$ in all cases, which
completes the proof of the Proposition. \end{proof}
\subsection{The Norm Principle, ray class fields and proof of Lemma \ref{hord}} Let $q$ be a rational prime with $q \equiv 1 \bmod p^m$. Let $\rg{F} \subset \mathbb{Q}_q[ \zeta_q ]$ be the subfield of the (ramified) \nth{q} cyclotomic extension, which has degree $p$ over $\mathbb{Q}_q$. Thus $\rg{F}$ is the completion at the unique ramified prime above $q$ of the field $\mathbb{F}$ defined in the text. The field $\rg{F}$ is a tamely ramified extension of $\mathbb{Q}_q$, so class field theory implies that $\hbox{Gal}(\rg{F}/\mathbb{Q}_q)$ is isomorphic to a quotient of order $p$ of $\ZMs{q}$, so letting $S = (\ZMs{q})^p$, we have $\hbox{Gal}(\rg{F}/\mathbb{Q}_q) \cong \ZMs{q}/S$. Let thus $r \in \mathbb{Z}$ be such that $r^{(q-1)/p} \not \equiv 1 \bmod q$: if $g \in \mathbb{F}_q^{\times}$ generates the multiplicative group of the finite field with $q$ elements and $m = v_p(q-1)$, then one can set $r = g^{(q-1)/p^m} \ \ {\rm rem } \ \ q$. Let in addition $\rg{K}_n/\mathbb{Q}_q$ be the \nth{p^n} cyclotomic extension of $\mathbb{Q}_q$: under the given premises, we have for $n > m$ the extension degree $[ \rg{K}_n : \mathbb{Q}_q ] = p^{n-m}$ and $\rg{K}_n = \mathbb{Q}_q\left[ r^{1/p^{n-m}} \right]$, while for $n \leq m$ the extension is trivial. Letting $r_n = r^{1/p^{n-m}} \in \rg{K}_n$, and $\rg{L}_n = \rg{K}_n \cdot \rg{F}$, we deduce by class field theory that $r_n$ generates $\rg{K}_n^{\times}/\id{N}(\rg{L}_n^{\times})$, under the natural projection. Indeed, the extension $\rg{L}_n/\rg{K}_n$ is a ramified $p$ extension, so the galois group must be a quotient of the roots of unity $W(\mathbb{Z}_q)$, hence the claim. 
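The choice of $r$ can be illustrated numerically. In the following sketch the values $p = 3$, $m = 2$, $q = 19$ are our own example data (any prime $q \equiv 1 \bmod p^m$ works), not values taken from the text:

```python
def prime_factors(n):
    # Distinct prime factors by trial division.
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def primitive_root(q):
    # Smallest generator g of the cyclic group (Z/q)^* for a prime q.
    for g in range(2, q):
        if all(pow(g, (q - 1) // f, q) != 1 for f in prime_factors(q - 1)):
            return g

p, m, q = 3, 2, 19              # example data: q = 19 is 1 mod p^m = 9
assert q % p**m == 1
g = primitive_root(q)           # g = 2 generates (Z/19)^*
r = pow(g, (q - 1) // p**m, q)  # r = g^((q-1)/p^m) rem q
# r is not a p-th power residue mod q, so r^((q-1)/p) != 1:
assert pow(r, (q - 1) // p, q) != 1
```

By class field theory, the class of such an $r$ then generates the quotient $\ZMs{q}/S \cong \hbox{Gal}(\rg{F}/\mathbb{Q}_q)$.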
We shall use these elementary observations in order to derive the structure of $\widehat{H}^1(F, A^-(\mathbb{L}_n))$ in our usual setting and prove some necessary conditions for elements $v_n \in A^-(\mathbb{L}_n)$ which verify $0 \neq \beta(v_n) \in \widehat{H}^1(F, A^-(\mathbb{L}_n))$, under the natural projection $\beta : A^-(\mathbb{L}_n) \rightarrow \widehat{H}^1(F, A^-(\mathbb{L}_n))$.
Let $\mathbb{K}, \mathbb{L}, \mathbb{F}$, etc. be the fields defined in the main part of the paper and let us denote by $I(\mathbb{M})$ the ideals of some arbitrary number field, and $P(\mathbb{M}) \subset I(\mathbb{M})$ the principal ideals. The maximal $p$-abelian unramified extension is $\mathbb{H}(\mathbb{M})$ and the $p$-part of the ray class field to the ray $\eu{M}_q = q \cdot \id{O}(\mathbb{M})$ will be denoted by $\mathbb{T}_q(\mathbb{M})$. If $\mathbb{M}$ is a CM field, then complex conjugation acts, inducing $I^-(\mathbb{M}), P^-(\mathbb{M})$ and $\mathbb{H}^-, \mathbb{T}_q^-$, in the natural way. In our context, we let in addition $P_N^-(\mathbb{L}) := \id{N}(P^-(\mathbb{L})) \subset P^-(\mathbb{K})$. We let $\Delta_n = \hbox{Gal}(\mathbb{K}_n/\rg{k})$ for arbitrary $n$. The following is an elementary result in the proof of the Hasse Norm Principle: \begin{lemma} \label{princid} Let $\mathbb{K}, \mathbb{L}$ and $F$ be like above. Then \begin{eqnarray} \label{h1gen} \widehat{H}^{(1)}( F, A^-(\mathbb{L}_n) ) \cong P^-(\mathbb{K}_n)/P_N^-(\mathbb{L}_n) \cong \mathbb{F}_p[ \Delta_m ], \quad \hbox{for all $n > 0$}, \end{eqnarray} the isomorphism being one of cyclic $\mathbb{F}_p[ \Delta_m ]$-modules. \end{lemma} \begin{proof}
Note that both modules in \rf{h1gen} are annihilated by $p$. In the
case of $P^-(\mathbb{K})/P_N^-(\mathbb{L})$, this is a direct consequence of
$(P^-(\mathbb{K}))^p = \mbox{\bf N}_{\mathbb{L}/\mathbb{K}}(P^-(\mathbb{K})) \subset P_N^-(\mathbb{L})$. Let
$\beta : A^-(\mathbb{L}) \rightarrow \widehat{H}^{(1)}( F, A^-(\mathbb{L}) )$ and $\pi_N :
P^-(\mathbb{K}) \rightarrow P^-(\mathbb{K})/P_N^-(\mathbb{L})$ denote the natural projections and
let $a \in \hbox{ Ker }( \id{N} : A^-(\mathbb{L}) \rightarrow A^-(\mathbb{L}) )$. Then $p u(s) a =
- s^{p-1} a$ and thus $p a = - s^{p-1} u^{-1}(s) a$ and a fortiori
$\beta(p a) = 0$ for all $a$, so $p \widehat{H}^{(1)}( F, A^-(\mathbb{L}) ) =
0$, thus confirming that $\widehat{H}^{(1)}( F, A^-(\mathbb{L}_n) )$ is an $\mathbb{F}_p$-module too.
Let now $\eu{A} \in a$ be some ideal and $(\alpha) =
\id{N}(\eu{A})$. The principal ideal $\eu{a} :=
(\alpha/\overline{\alpha}) \in P^-(\mathbb{K})$ has image $\pi_N(\eu{a}) \in
P^-(\mathbb{K})/P_N^-(\mathbb{L})$ which depends on $a$ but not on the choice of
$\eu{A} \in a$. This is easily seen by choosing a different ideal
$\eu{B} = (x) \eu{A} \in a$: then $\id{N}(\eu{B}^{1-\jmath}) =
\eu{a} \cdot \id{N}(x/\overline{x}) \in \eu{a} \cdot P_N^-(\mathbb{L})$,
and $\pi_N(\id{N}(\eu{B}^{1-\jmath})) =
\pi_N(\id{N}(\eu{A}^{1-\jmath})) = \pi_N(\eu{a})$ depends only on
$a$. Suppose now that $\eu{a} \in P^-_N(\mathbb{L})$, so $\pi_N(\eu{a}) =
1$. Then there is some $y \in \mathbb{L}^{\times}$ such that
$\id{N}(\eu{A}^{1-\jmath}) = (\id{N}(y)^{1-\jmath})$ and thus
$\id{N}(\eu{A}/(y))^{1-\jmath} = (1)$. Since $\widehat{H}^{(1)}$ vanishes
for ideals, it follows that there is a further ideal $\eu{X} \subset
\mathbb{L}$ such that
\[ \eu{A}^{1-\jmath} = \left((y) \eu{X}^s \right)^{1-\jmath} , \]
and thus $a^2 \in (A^-(\mathbb{L}))^s$. But then $\beta(a) = 0$. We have
shown that there is a map $\lambda : \widehat{H}^{(1)}( F, A^-(\mathbb{L}) ) \rightarrow
P^-(\mathbb{K})/P_N^-(\mathbb{L})$ defined by the sequence of associations
$\beta(a) \mapsto \eu{a} \mapsto \pi_N(\eu{a})$, which is a
well-defined injective homomorphism of $\mathbb{F}_p$-modules.
In order to show that $\lambda$ is an isomorphism, let $\eu{x} :=
(x/\overline{x}) \in P^-(\mathbb{K}) \setminus P_N^-(\mathbb{L})$ be a principal
ideal that is not a norm from $\mathbb{L}$. Let the Artin symbol of $x$ be
$\sigma = \lchooses{\mathbb{L}/\mathbb{K}}{x} \in \hbox{Gal}(\mathbb{L}/\mathbb{K})$; by definition,
$\mathbb{L}$ is also CM, so complex conjugation commutes with $\sigma$
and we have \[ \lchooses{\mathbb{L}/\mathbb{K}}{\overline{x}} = \lchooses{\mathbb{L}/\mathbb{K}}{x^{\jmath} } = \sigma^{\jmath} = \sigma . \] Consequently, $\lchooses{\mathbb{L}/\mathbb{K}}{(x/\overline{x})} = 1$ -- we may thus choose, by Tchebotarew, a principal prime $(\rho) \subset \mathbb{K}$ with $\rho \equiv x/\overline{x} \bmod q$, and which is split in $\mathbb{L}/\mathbb{K}$. Let $\eu{R} \subset \mathbb{L}$ be a prime above $(\rho)$ and $r := [ \eu{R}^{1-\jmath} ] \in A^-(\mathbb{L})$. We claim that $\beta(r) \neq 0$; assume not, so $\eu{R} = (y) \eu{Y}^s$ for some $y \in \mathbb{L}$ and $\eu{Y} \subset \mathbb{L}$ and thus $\id{N}(\eu{R}^{1-\jmath}) = (\id{N}(y/\overline{y})) \equiv \eu{x} \bmod P^-_N(\mathbb{L})$. Since $(\id{N}(y/\overline{y})) \in P^-_N(\mathbb{L})$ by definition, it follows that $\eu{x} = (x/\overline{x}) \in P_N^-(\mathbb{L})$, which contradicts the choice of $x$. It follows that $\lambda$ is an isomorphism of $\mathbb{F}_p[ \Delta_m ]$-modules. The proof will be completed if we show that $P^-(\mathbb{K})/P_N^-(\mathbb{L}) \cong \mathbb{F}_p[ \Delta_m ]$. Since $\Delta_m$ acts transitively on the pairs of complex conjugate primes above $q$ in $\mathbb{K}_n$, one verifies that $(\mathbb{K}_n^{\times})^-/\id{N}((\mathbb{L}_n^{\times})^-) \cong \mathbb{F}_p[ \Delta_m ]$. The field $\rg{k}$ contains no \nth{p} roots of unity, and thus $\mbox{\bf N}_{\mathbb{K}_n/\rg{k}}( P^-(\mathbb{K}_n)/P_N^-(\mathbb{L}_n) ) \neq 0$. The isomorphism \[ P^-(\mathbb{K})/P_N^-(\mathbb{L}) \cong \mathbb{F}_p[ \Delta_m ] \] follows, and this completes the proof. \end{proof}
Consider the field $\mathbb{T}'_n := \mathbb{T}^-_q(\mathbb{K}_n)$ defined as the minus $p$-part of the ray class field to the modulus $Q_n := q \id{O}(\mathbb{K}_n)$ -- i.e., if $\rg{T}$ is the full ray class field to $Q_n$ and $\rg{T}_p$ is the $p$-part of this field, thus fixed by all $q'$-Sylow subgroups of $\hbox{Gal}(\rg{T}/\mathbb{K}_n)$, for all primes $q' \leq q$, then $\mathbb{T}'_{n}$ is the subfield of $\rg{T}_p$ fixed by $(\hbox{Gal}(\rg{T}_p/\mathbb{K}_n))^+$. As mentioned above, the completions at individual primes $\eu{q}'$ above $q$ are cyclic groups \[ \mathbb{T}'_{n,\eu{q}'}/\mathbb{H}(\mathbb{K}_n)_{\eu{q}'} \cong (\mathbb{F}_q[ \zeta_{p^n} ]^{\times})_p \cong C_{p^n} .\] Consequently, \begin{eqnarray} \label{tn} \hbox{Gal}(\mathbb{T}'_n/\mathbb{H}(\mathbb{K}_n)) \cong \prod_{g \in \Delta_m} C_{p^n} . \end{eqnarray} Since $q$ is ramified in $\mathbb{L}_n/\mathbb{K}_n$, the residual fields of the ray class subfield $\mathbb{T}_n := \mathbb{T}^-_q(\mathbb{L}_n)$ are the same as the ones of $\mathbb{T}'_n$ and thus $\mathbb{T}_n = \mathbb{H}(\mathbb{L}_n) \cdot \mathbb{T}'_n$. Let also $\mathbb{E}'_n \subset \mathbb{T}'_n$ be the maximal subextension with $p \hbox{Gal}(\mathbb{E}'_n/\mathbb{H}'_n) = 0$ -- thus the $p$-elementary extension of $\mathbb{H}(\mathbb{K}_n)$ contained in $\mathbb{T}'_n$ -- and $\mathbb{E}_n = \mathbb{E}'_n \cdot \mathbb{H}(\mathbb{L}_n)$. Since the local extensions $\mathbb{T}'_{n,\eu{q}'}/\mathbb{H}(\mathbb{K}_n)_{\eu{q}'}$ are cyclotomic, the ramification of $\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)$ is absorbed by $\mathbb{L}_n/\mathbb{K}_n$ and therefore $\mathbb{E}_n \subset \mathbb{H}(\mathbb{L}_n)$.
Consider a class $x \in A^-(\mathbb{L}_n)$ such that $0 \neq \beta(x) \in \widehat{H}^1(F,A^-(\mathbb{L}_n))$ and let $\eu{A} \in x$ be a prime and $(\alpha) = \id{N}(\eu{A}) \subset \mathbb{K}_n$ be the principal ideal below it. By Lemma \ref{princid}, it follows that $\pi_N((\alpha/\overline{\alpha})) \neq 0$ and thus the Artin symbol $y' = \lchooses{\mathbb{T}'_n/\mathbb{K}_n}{(\alpha/\overline{\alpha})} \in \hbox{Gal}(\mathbb{T}'_n/\mathbb{K}_n)$ generates a cycle of maximal length in $\hbox{Gal}(\mathbb{T}'_n/\mathbb{H}(\mathbb{K}_n))$, so it acts non trivially in $\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)$. If $y \in \hbox{Gal}(\mathbb{T}_n/\mathbb{L}_n)$ is any lift of $\varphi(x)$, i.e. $y \vert_{\mathbb{H}(\mathbb{L}_n)} = \varphi(x)$, then $\id{N}(y)$ acts non trivially in $\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)$. The converse holds too. Suppose that $x \in A^-(\mathbb{L}_n)$ and let $y \in \hbox{Gal}(\mathbb{T}_n/\mathbb{L}_n)$ be some lift of $\varphi(x)$. If $\id{N}(y)$ fixes $\mathbb{H}(\mathbb{K}_n)$ and acts non trivially in $\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)$, then $\beta(x) \neq 0$. Indeed, by choosing $\eu{A} \in x$ a prime with $\lchooses{\mathbb{T}_n/\mathbb{K}_n}{\eu{A}} = y$, we see that $I := \id{N}(\eu{A})$ must be a principal ideal, since $\lchooses{\mathbb{T}_n/\mathbb{K}_n}{I}$ fixes $\mathbb{H}(\mathbb{K}_n)$ and moreover, the Artin symbol acts non trivially on $\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)$, so $\pi_N(I) \neq 0$. The claim follows from Lemma \ref{princid}.
Let $\mathbb{T} = \cup_n \mathbb{T}_n$. In the projective limit, we conclude that $x = (x_n)_{n \in \mathbb{N}}$ has $\beta(x) \neq 0$ iff for any lift $y \in \hbox{Gal}(\mathbb{T}/\mathbb{L}_{\infty})$ of $\varphi(x) \in \hbox{Gal}(\mathbb{H}(\mathbb{L}_{\infty})/\mathbb{L}_{\infty})$ the norm $\id{N}(y)$ fixes $\mathbb{H}(\mathbb{K}_{\infty})$ and acts non trivially in $\mathbb{E}'/\mathbb{H}(\mathbb{K}_{\infty})$, with $\mathbb{E}' = \cup_n
\mathbb{E}'_n$. Moreover, $\mathbb{T}/\mathbb{H}(\mathbb{L}_{\infty})$ is the product of $| \Delta_m
|$ independent $\mathbb{Z}_p$-extensions and $\id{N}(y)$ is of $\lambda$-type. We have thus from \rf{h1gen} a further isomorphism, \begin{eqnarray}
\label{h1e}
\widehat{H}^1(F, A^-(\mathbb{L}_n)) \cong \hbox{Gal}(\mathbb{E}'_n/\mathbb{H}(\mathbb{K}_n)) \cong \hbox{Gal}(\mathbb{E}'_{\infty}/\mathbb{H}(\mathbb{K}_{\infty})). \end{eqnarray}
We have thus proved: \begin{fact} \label{h1str} For every $n > m$, a class $x \in A^-(\mathbb{L}_n)$ has non trivial image $\beta(x) \in \widehat{H}^1(F, A^-(\mathbb{L}_n))$ iff for any lift $y \in \hbox{Gal}(\mathbb{T}_n/\mathbb{L}_n)$ of $\varphi(x) \in \hbox{Gal}(\mathbb{H}(\mathbb{L}_n)/\mathbb{L}_n)$, the norm $y' := \id{N}(y) \in \hbox{Gal}(\mathbb{T}'_n/\mathbb{K}_n)$ fixes $\mathbb{H}(\mathbb{K}_n)$ and acts non trivially in $\mathbb{E}'_n$; equivalently, $y'$ generates a maximal cycle in $\hbox{Gal}(\mathbb{T}'_n/\mathbb{H}(\mathbb{K}_n))$. In the projective limit, $x = (x_n)_{n \in
\mathbb{N}}$ has $\beta(x) \neq 0$ iff for any lift $y \in \hbox{Gal}(\mathbb{T}/\mathbb{L}_{\infty})$ of $\varphi(x) \in \hbox{Gal}(\mathbb{H}(\mathbb{L}_{\infty})/\mathbb{L}_{\infty})$ the norm $\id{N}(y)$ fixes $\mathbb{H}(\mathbb{K}_{\infty})$ and acts non trivially in $\mathbb{E}'/\mathbb{H}(\mathbb{K}_{\infty})$, with $\mathbb{E}' = \cup_n \mathbb{E}'_n$. Moreover, there is an exact sequence \begin{eqnarray} \label{eex}
1 \rightarrow \hbox{Gal}(\mathbb{H}(\mathbb{L}_{\infty})/\mathbb{L}_{\infty}) \rightarrow \hbox{Gal}(\mathbb{T}/\mathbb{L}_{\infty}) \rightarrow (\mathbb{Z}_p)^{|\Delta_m|} \rightarrow 1, \end{eqnarray} and thus $\hbox{Gal}(\mathbb{T}/\mathbb{K}_{\infty})$ is a Noetherian $\Lambda$-module. \end{fact}
We now prove Lemma \ref{hord}: \begin{proof}
Assume that $\widehat{H}^1\left( F, \left( \id{M}(\apr{\mathbb{L}}) \right)
\right) \neq 0$. Consider the modules $M_n = \hbox{Gal}(\mathbb{T}_n/\mathbb{L}_n)$ and
$M = \varprojlim_n M_n = \hbox{Gal}(\mathbb{T}/\mathbb{L}_{\infty})$. It follows from
Fact \ref{h1str} that $M/\hbox{Gal}(\mathbb{H}(\mathbb{L}_{\infty})/\mathbb{L}_{\infty}) \cong
\mathbb{Z}_p^{|\Delta_m|}$, so $M$ is a Noetherian $\Lambda$-module and the
exact sequence \rf{eex} shows that $M$ is a rigid module and the
further premises of Proposition \ref{tpdeco} hold too, as a
consequence of the choice of $\mathbb{K}$. Let $v' \in M$ be such that the
restriction $v = v' \big \vert_{\mathbb{H}(\mathbb{L}_{\infty})} \in
\id{M}(\apr{\mathbb{L}})$ has non trivial image in $\widehat{H}^1\left(
F, \left( \id{M}(\apr{\mathbb{L}}) \right) \right)$, via the inverse Artin map.
In particular $v \not \in \id{L}(\apr{\mathbb{L}})$ and also $v' \not \in \id{L}(M)$. Assume
that ${\rm ord} (v) \leq p \exp(\id{M}(\apr{\mathbb{K}}))$. Proposition
\ref{tpdeco} together with Remark \ref{kp1} imply that $T^2 v' =
v'_{\mu} + v'_{\lambda}$ is decomposed. But then $v'_{\mu} \in
\hbox{Gal}(\mathbb{H}(\mathbb{L}_{\infty})/\mathbb{L}_{\infty})$, so it follows from the above Fact that
$\beta(T^2 v) = 0$. Thus, if $v \in \id{M}(\apr{\mathbb{L}})$ has order
${\rm ord} (v) \leq p \cdot \exp(\id{M}(\apr{\mathbb{K}}))$,
then $T^2 v \in s \id{M}(\apr{\mathbb{L}})$, which completes the proof of
Lemma \ref{hord}. \end{proof} \subsection{Proof of Proposition \ref{tpdeco}} The proof of the Proposition requires a longer analysis of the growth of the modules $\Lambda x_n$ for indecomposed elements. This will be divided into a sequence of definitions and Lemmata which eventually lead to the proof.
The arguments of this section will repeatedly take advantage of the following elementary Lemma\footnote{I owe to Cornelius Greither
several elegant ideas which helped simplify my original proof.}: \begin{lemma} \label{ab} Let $A$ and $B$ be finitely generated abelian $p$-groups, denoted additively, and let $N: B\rightarrow A$, $\iota:A\rightarrow B$ be two $\mathbb{Z}_p$-linear maps such that: \begin{itemize} \item[1.] $N$ is surjective. \item[2.] The $p$-ranks $p\hbox{-rk} (A) = p\hbox{-rk} (p A) = p\hbox{-rk} (B) = r$. \item[3.] $N(\iota(a))=p a,\forall a\in A$. \end{itemize} Then \begin{itemize} \item[ A. ] The inclusion $\iota(A) \subset p B$ holds
unconditionally. \item[ B. ] We have $\iota(A) = p B$ and $B[ p ] = \hbox{ Ker }(N) \subset
\iota(A)$. Moreover, ${\rm ord} (x) = p \cdot {\rm ord} ( \iota(N (x) ) )$ for all $x \in
B$. \item[ C. ] If there is a group homomorphism $T : B
\rightarrow B$ with $\iota(A) \subseteq \hbox{ Ker }(T)$ and $\nu := \iota \circ N =
p + \binom{p}{2} T + O(T^2)$, then $\nu = \cdot p$, i.e. $\iota
(N(x)) = p x$ for all $x \in B$. \end{itemize} \end{lemma} \begin{proof}
Since $A$ and $B$ have the same $p$-rank and $N$ is surjective, we
know that the map $\overline{N} : B/p B \rightarrow A/p A$ is an
isomorphism\footnote{For finite abelian $p$-groups $X$ we denote
$R(X) = X/pX$ by \textit{roof} of $X$ and $S(X) = X[ p ]$ is its
\textit{socle}}. Therefore, the map induced by $N \iota$ on the
roof is trivial. Hence $\overline{\iota} : A/p A \rightarrow B/p B$ is also
zero and thus $\iota(A) \subset p B$, which confirms the claim A.
The premise $p\hbox{-rk} (p A) = p\hbox{-rk} (A)$ implies that $p\hbox{-rk} (A)
= p\hbox{-rk} (\iota(A))$, as follows from
\[ p\hbox{-rk} (A) \geq p\hbox{-rk} (\iota(A)) = p\hbox{-rk} (\iota(A)[ p ]) \geq p\hbox{-rk} ( p A )
= p\hbox{-rk} (A). \] We now consider the map $\iota' : A/p A \rightarrow p B/ p^2
B$ together with $\overline{N}$. From the hypotheses we know that $N
\iota'$ is the multiplication by $p$ isomorphism: $\cdot p : A/ p A
\rightarrow p A/p^2 A$, as consequence of $p\hbox{-rk} (A) = p\hbox{-rk} (\iota(A)) = p\hbox{-rk} (p
A)$. It follows that $\iota'$ is an isomorphism of $\mathbb{F}_p$-vector
spaces and hence $\iota : A \rightarrow p B$ is surjective, so $\iota(A) = p
B$. Consequently $| B | / |\iota(A) | = p^r$. Let $x \in \hbox{ Ker }(N)$;
since $N : B/p B \rightarrow A/p A$ is surjective, the norm does not vanish
for $x \not \in p B$. Consequently, for $x \in \hbox{ Ker }(N) \subset p B$
we have $N x = p x = 0$, so $\hbox{ Ker }(N) = B[ p ] \subset \iota(A)$, so
$\hbox{ Ker }(N) = \iota(A)[ p ]$, as claimed.
We now prove that ${\rm ord} (x) = p \cdot {\rm ord} (\iota(N (x)))$ for all $x
\in B$. Consider the following maps $\pi : B \rightarrow \iota(A), x \mapsto
p x$ and $\pi' = \iota \circ N$. Since $p B = \iota(A)$, both maps
are surjective and there is an isomorphism $\phi : p B \rightarrow p B$ such
that $\pi = \phi \circ \pi'$. Therefore
\[ {\rm ord} (x)/p = {\rm ord} (p x) = {\rm ord} (\phi(p x)) = {\rm ord} (\iota(N(x))), \]
and thus ${\rm ord} (x) = p \cdot {\rm ord} (\iota(N(x)))$, as claimed.
For point C. we let $x \in B$, so $p x \in p B = \iota(A)$ and thus
$p T x = T ( p x ) = 0$. Consequently $T x \in B[ p ] \subset
\iota(A)$ and thus $T^2 x = 0$. From the definition of $\nu = \iota
\circ N = p + Tp \frac{p-1}{2} + O(T^2)$ we conclude that $\nu x = p
x + \frac{p-1}{2} T p x + O(T^2) x = p x$, which confirms the claim
C. and completes the proof. \end{proof}
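Claims A and B of Lemma \ref{ab} can be checked on a toy instance. The choice $p = 3$, $A = \mathbb{Z}/9$, $B = \mathbb{Z}/27$, with $N$ the reduction modulo $9$ and $\iota$ the multiplication by $3$, is our own illustration, not data from the text:

```python
p = 3
A = range(9)                    # Z/9, additive; p-rk(A) = p-rk(pA) = 1
B = range(27)                   # Z/27; p-rk(B) = 1
N = lambda b: b % 9             # surjective Z_p-linear map B -> A
iota = lambda a: (3 * a) % 27   # well defined: 3*(a + 9) = 3a mod 27

def order(x, mod=27):
    # Smallest p-power k with k*x = 0, i.e. ord(x) in the p-group.
    k = 1
    while (k * x) % mod:
        k *= p
    return k

# Premise 3: N(iota(a)) = p*a for all a in A.
assert all(N(iota(a)) == (p * a) % 9 for a in A)
# Claims A/B: iota(A) = pB, and Ker(N) = B[p] is contained in iota(A).
assert {iota(a) for a in A} == {(p * b) % 27 for b in B}
assert {b for b in B if N(b) == 0} == {b for b in B if (p * b) % 27 == 0}
# Claim B: ord(x) = p * ord(iota(N(x))) for every nonzero x in B.
assert all(order(x) == p * order(iota(N(x))) for x in B if x != 0)
```

Point C involves the extra operator $T$ and has no content in this cyclic toy example, so it is omitted.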
In this section $\mathbb{M}_{\infty}/\mathbb{M}$ is an arbitrary $\mathbb{Z}_p$-extension in which all the primes that ramify, ramify completely and $X' \subseteq Y := \hbox{Gal}(\mathbb{T}/\mathbb{M}_{\infty})$ is a Noetherian $\Lambda$-submodule, the limit of the ray class groups $\hbox{Gal}(\mathbb{T}_n/\mathbb{M}_n)$ to some ray module associated to the base field with trivial finite part. Thus $\omega_n x = 0$ implies $T x = 0$ by Fact \ref{sa}. The ray class groups $Y_n = \varphi^{-1}(\hbox{Gal}(\mathbb{H}(\mathbb{T}_n)/\mathbb{M}_{\infty}))$ may also coincide with class groups $\mathbb{A}(\mathbb{M}_n)$.
We shall apply Proposition \ref{tpdeco} to the concrete cases in which $\mathbb{M} = \mathbb{L}$ and $Y$ is either $\hbox{Gal}(\mathbb{H}^-(\mathbb{L})/\mathbb{L}_{\infty})$ or $\hbox{Gal}(\mathbb{T}/\mathbb{M}_{\infty})$, where $\mathbb{T}$ is the injective limit of subfields of ray class fields, defined above.
We denote by $L, M, D$ the $\lambda$-, the $\mu$- and the decomposed parts, respectively, of $X'$: thus, in the notation of the introduction, we have $L = \id{L}(X'), M = \id{M}(X'), D = \id{D}(X')$. One can for instance think of $\mathbb{M}$ as $\mathbb{K}$ in \S 2 and $X' = \varphi(\apr{\mathbb{K}}) = X^-$ or $X' = \hbox{Gal}(\mathbb{T}/\mathbb{H}(\mathbb{K}_{\infty}))$.
For $x \in M$, the order is naturally defined by ${\rm ord} (x) = \min\{ p^k \ : \ p^k x = 0 \}$. Since $X'$ has no finite submodules, it follows that ${\rm ord} (x) = {\rm ord} (T^j x)$ for all $j > 0$.
We introduce some distances $d_n : X' \times X' \rightarrow \mathbb{N}$ as follows: let $x, z \in X'$; then \[ d_n(x, z) := p\hbox{-rk} (\Lambda (x_n - z_n)); \quad d_n(x) = p\hbox{-rk} (\Lambda x_n) . \] We obviously have $d_n(x,z) \leq d_n(x,y)+d_n(y,z)$ and $d_n(x) \geq 0$ with $d_n(x) = 0$ for the trivial module. Also, if $f \in \mathbb{Z}_p[ T ]$ is some distinguished polynomial of degree $\phi = \deg(f)$, then $d_n(x)-\phi \leq d_n(f x) \leq d_n(x)$ for all $x \in X'$. We shall write $d(x, y) = \lim_n d_n(x,y)$. Also, for explicit elements $u, v \in X'_k$, we may write $d(u, v) = p\hbox{-rk} (\Lambda (u-v))$. This can be useful for instance when no explicit lifts of $u, v$ to $X'$ are known. The simplest fact about the distance is: \begin{fact} \label{dlam} Let $x, z \in X'$ be such that $d_n(x, z) \leq N$ for some fixed bound $N$ and all $n > 0$. Then $x-z \in L$ and $N \leq \ell := p\hbox{-rk} (L)$. For every fixed $d \geq p\hbox{-rk} (L)$ there is an integer $n_0(d)$ such that for any $x \in X' \setminus L$ and $n > n_0$, if $d_n(x) \leq d$ then $x \in \nu_{n,n_0} X'$. \end{fact} \begin{proof}
The element $y = x-z$ generates at finite levels modules $\Lambda
y_n$ of bounded rank, so it is neither of $\mu$-type nor
indecomposed. Thus $y \in L$ and consequently $d_n(y) \leq p\hbox{-rk} (L_n)
\leq \ell$ for all $n$, which confirms the first claim.
For the second claim, note that if $x \not \in L$, then $d_n(x) \rightarrow
\infty$, so the boundedness of $d_n(x)$ becomes a strong constraint
for large $n$. Next we recall that $F(T) x \in M$ and since
$d_n(F(T) x) \leq d_n(x)$, we may assume that $x \in M$. Now $d_n(x)
\leq d$ implies the existence of some distinguished polynomial $h
\in \mathbb{Z}_p[ T ]$ with $\deg(h) = d$ and such that $h(T) x_n = 0$. The
exponent of $M$ is bounded by $p^{\mu}$, so there is a finite set
$\id{H} \subset \mathbb{Z}_p[ T ]$ from which $h$ can take its values. Let
now $n_0$ be chosen such that $\nu_{n_0,1} \in (h(T), p^{\mu})$ for
all $h \in \id{H}$. Such a choice is always possible, since
$\nu_{n,1} = p^{\mu} \cdot V(T) + T^{p^{n-\mu}-1} \cdot W(T)$ for
some $V(T), W(T) \in \Lambda$. We may thus choose $n$ sufficiently
large, such that the Euclidean division $T^{p^{n-\mu}} = q(T) h(T) +
r(T)$ yields remainders $r(T)$ which are divisible by $p^{\mu}$ for
all $h \in \id{H}$. Let $n_0$ be the smallest such integer. With
this choice, for any $h \in \id{H}$, it follows that $h(T) x_n = 0$
implies $\nu_{n_0,1} x_n = 0$ and thus $\nu_{n_0,1} x = \nu_{n,1} w$
for some $w \in X'$. Then $\nu_{n_0,1}( x - \nu_{n,n_0} w) = 0$ and
thus $x = \nu_{n,n_0} w$, by Fact \ref{sa} and the assumption on
$X'$.
\end{proof}
We pass now to the proof of Proposition \ref{tpdeco}. \begin{proof}
Let $x \in X'$ and suppose that $l$ is the smallest integer such
that $p^l x \in L$ and let $f_x(T)$ be the minimal annihilator
polynomial of $p^l x$, so $y := f_x(T) x \in M$, since $p^l y =
0$. We claim that
\[ p^j x_{n+j} - \iota_{n,n+j}(x_n) \in f_x(T) \Lambda x_{n+j}
\subset \Lambda y_{n+j}, \quad \forall j > 0, \] and in particular
$\iota_{n,n+l}(x_n) = p^l x_{n+l} - h_{n+l}(T) (f_x(T) x_{n+l})$ is
decomposed. We let $n_1 > n_0$ be such that for all $x = (x_n)_{n
\in \mathbb{N}}$ in $X' \setminus p X'$ we have ${\rm ord} (p^{l+\mu} x_n) > p$;
here $n_0$ is the constant established in Fact \ref{dlam} with
respect to the bound rank $d = p\hbox{-rk} (L) + 1$. For $n > n_1$ and $x
\in X' \setminus ( D + p X')$, we have \begin{eqnarray} \label{strong} p^l x_{n+l} - \iota_{n,n+l}(x_n) = f_x(T) h_n(T) x_{n+l} \in M_{n+l}. \end{eqnarray} Indeed, consider the modules $B = \Lambda x_{n+1}/(f_x(T) \cdot \Lambda x_{n+1})$ and $A = \Lambda x_n/ (f_x(T) \cdot \Lambda x_n)$. Since $\iota_{n,n+1} x_{n} \not \in f_x(T) \Lambda x_{n+1}$ for $n > n_1$ -- as follows from the condition imposed on the orders -- the induced map $\iota : A \rightarrow B$ is rank preserving. We can thus apply Lemma \ref{ab}, an $l$-fold iteration of which implies the claim \rf{strong}. We now apply the hypothesis that $p x = c + u \in D$ for some $c \in L, u \in M[ p^{l-1} ]$; note that the condition \rf{cstab} allows us to conclude from Lemma \ref{ab} and $c \in L$ that $\iota_{n,n+k}(c_n) = p^k c_{n+k}$ for all $c = (c_n)_{n \in \mathbb{N}} \in L$. In particular, \[ p \omega_{n} c_{n+1} = \omega_n \iota_{n,n+1}(c_n) = 0, \quad \hbox{so} \quad \omega_{n+1} c_{n+1} \in L_{n+1}[ p ] = \iota_{1,n+1}(L_1[ p ]) \] as a consequence of the same Lemma. We deduce, under the above hypothesis on $n$, that \[ p^l x_{n+l} = p^{l-1} c_{n+l} = \iota_{n+1,n+l} c_{n+1} = \iota_{n,n+l}(x_n) + h y_{n+l} . \] By applying $\omega_n$ to this identity and using $\omega_n c_{n+1} \in \iota_{1,n+1}( L_1[ p ] )$, so $T \omega_n c_{n+1} = 0$, we find $T h(T) \omega_n y_{n+l} = 0$. Iwasawa's Theorem VI \cite{Iw} implies that there is some $z(n) \in X'$ such that $T h \omega_n y = \nu_{n+l,1} z(n)$. Since $p^l y = 0$, we have in addition $p^l \nu_{n+l,1} z(n) = 0$, so $p^l z(n) = 0$ and $z(n) \in M$, by Fact \ref{nonu}. We stress here the dependency of $z \in X'$ on the choice of $n > n_1$ by writing $z(n)$; this is however a norm coherent sequence and $z_m(n)$ will denote its projection in $X'_m$ for all $m > 0$. We obtain $\nu_{n,1}( T^2 h(T) y - \nu_{n+l,n} z(n) ) = 0$. This implies $T^2 h(T) y = \nu_{n+l,n} z(n)$, by the same Fact \ref{nonu}.
Reinserting this relation in the initial identity, we find
\begin{eqnarray}
\label{cdeco}
\iota_{n,n+l}( T^2 x_n - z_n(n)) = \iota_{n+1,n+l} (T^2 c_{n+1}).
\end{eqnarray}
We prove that \rf{cdeco} implies that $T^2 x$ must be decomposed.
For this we invoke the Fact \ref{dlam} with respect to the sequence
$w^{(n)} = T^2 x - z(n)$. We have $d_n( \iota_{n+1,n+l} (T^2
c_{n+1}) ) \leq p\hbox{-rk} (L)$ for all $n$; since $z(n) \in M$, and $T^2
x$ is assumed indecomposed, then $w^{(n)} \not \in D$ either, and
the Fact \ref{dlam} together with the choice of $n_1$ imply that
$w_n^{(n)} \in \nu_{n,n_0} X'$ for all $n > n_1$. But then \[ w_n^{(n)} = \iota_{n,n+l}( T^2 x_n - z_n(n)) = \nu_{n,n_0} a_n \in \iota_{n_0,n}(X'_{n_0}) . \] It follows in particular that \[ {\rm ord} (T^2 x_n - z_n(n)) \leq p^l {\rm ord} (\iota_{n,n+l}( T^2 x_n - z_n(n))) \leq p^l \exp(X'_{n_0}) . \] This holds for arbitrary large $n$, and since $z_n(n) \in M_n$, we have ${\rm ord} (T^2 x_n - z_n(n)) = {\rm ord} (T^2 x_n)$ for $n > n_1$. Therefore, the assumption that $T x \not \in D$ implies that ${\rm ord} (T^2 x_n) \leq p^l \exp(X'_{n_0})$ for all $n > n_1$: it is thus uniformly bounded for all $n$, which would imply that $x \in M$, in contradiction with the assumption $x \not \in D$. We have thus proved the claim for all $x \in X' \setminus (D + p X')$. The general case follows by applying Nakayama to the module $X'$.
\end{proof}
\textbf{Acknowledgments}: The main proof idea was first investigated
with the intention of presenting the result in the memorial volume
\cite{Alf} for Alf van der Poorten. However, no more than the trivial
case was correctly proved at the time; S\"oren Kleine undertook, as
part of his PhD Thesis, the task of brushing up the proof. He
succeeded in doing so, with the exception of the details related to
decomposition and Lemma \ref{hord}, which we treated here. I am
indebted to S\"oren Kleine for numerous useful discussions during the
development of this paper. I am particularly grateful to Andreas
Nickel for critical discussions of an earlier version of this work. In
G\"ottingen, the graduate students of the winter term of 2015-2016,
Pavel Coupek, Vlad Cri\c{s}an and Katharina M\"uller, presented this
work in a seminar; I thank them for their great contribution to the
verification of the final redaction. The paper is dedicated to the
memory of Alf van der Poorten.
\end{document}
Transcendental equation
In applied mathematics, a transcendental equation is an equation over the real (or complex) numbers that is not algebraic, that is, at least one of its sides describes a transcendental function.[1] Examples include:
${\begin{aligned}x&=e^{-x}\\x&=\cos x\\2^{x}&=x^{2}\end{aligned}}$
A transcendental equation need not be an equation between elementary functions, although most published examples are.
In some cases, a transcendental equation can be solved by transforming it into an equivalent algebraic equation. Some such transformations are sketched below; computer algebra systems may provide more elaborated transformations.[2]
In general, however, only approximate solutions can be found.[3]
Transformation into an algebraic equation
Ad hoc methods exist for some classes of transcendental equations in one variable to transform them into algebraic equations which then might be solved.
Exponential equations
If the unknown, say x, occurs only in exponents:
• applying the natural logarithm to both sides may yield an algebraic equation,[4] e.g.
$4^{x}=3^{x^{2}-1}\cdot 2^{5x}$ transforms to $x\ln 4=(x^{2}-1)\ln 3+5x\ln 2$, which simplifies to $x^{2}\ln 3+x(5\ln 2-\ln 4)-\ln 3=0$, which has the solutions $x={\frac {-3\ln 2\pm {\sqrt {9(\ln 2)^{2}+4(\ln 3)^{2}}}}{2\ln 3}}.$
This will not work if addition occurs "at the base line", as in $4^{x}=3^{x^{2}-1}+2^{5x}.$
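The two roots can be checked numerically. The sketch below (illustrative, not part of the original article) uses the quadratic in the reduced form $x^{2}\ln 3+3x\ln 2-\ln 3=0$, since $5\ln 2-\ln 4=3\ln 2$, so the discriminant is $9(\ln 2)^{2}+4(\ln 3)^{2}$:

```python
import math

ln2, ln3 = math.log(2), math.log(3)
# Quadratic x^2 ln3 + 3x ln2 - ln3 = 0 (note: 5 ln2 - ln4 = 3 ln2).
disc = math.sqrt(9 * ln2**2 + 4 * ln3**2)
roots = [(-3 * ln2 + disc) / (2 * ln3), (-3 * ln2 - disc) / (2 * ln3)]

# Both roots satisfy the original equation 4^x = 3^(x^2 - 1) * 2^(5x).
for x in roots:
    assert math.isclose(4**x, 3**(x**2 - 1) * 2**(5 * x), rel_tol=1e-9)
```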
• if all "base constants" can be written as integer or rational powers of some number q, then substituting $y=q^{x}$ may succeed, e.g.
$2^{x-1}+4^{x-2}-8^{x-2}=0$ transforms, using $y=2^{x}$, to ${\frac {1}{2}}y+{\frac {1}{16}}y^{2}-{\frac {1}{64}}y^{3}=0$ which has the solutions $y\in \{0,-4,8\}$, hence $x=\log _{2}8=3$ is the only real solution.[5]
This will not work if squares or higher power of x occurs in an exponent, or if the "base constants" do not "share" a common q.
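A quick numerical confirmation of this worked example (illustrative, not part of the original article) — the cubic in $y=2^{x}$ has roots $\{0,-4,8\}$, and only $y=8$ corresponds to a real $x$:

```python
import math

def f(x):
    # Original equation: 2^(x-1) + 4^(x-2) - 8^(x-2) = 0
    return 2**(x - 1) + 4**(x - 2) - 8**(x - 2)

x = math.log2(8)        # only the root y = 8 yields a real x
assert x == 3.0
assert f(3) == 0.0      # 4 + 4 - 8 = 0 exactly
```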
• sometimes, substituting $y=xe^{x}$ may yield an algebraic equation; after the solutions for y are known, those for x can be obtained by applying the Lambert W function, e.g.:
$x^{2}e^{2x}+2=3xe^{x}$ transforms to $y^{2}+2=3y,$ which has the solutions $y\in \{1,2\},$ hence $x\in \{W_{0}(1),W_{0}(2)\}$, where $W_{0}$ denotes the principal (real-valued) branch of the multivalued $W$ function; the other real branch, $W_{-1}$, is defined only for arguments in $[-1/e,0)$ and so contributes no further real solutions here.
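For positive arguments, the principal branch $W_{0}$ can be computed by Newton's method on $we^{w}=z$. A minimal sketch (illustrative; real libraries such as SciPy ship a ready-made `lambertw`):

```python
import math

def lambert_w0(z, tol=1e-12):
    """Principal branch of the Lambert W function for z > 0, via Newton's method."""
    w = math.log(z + 1.0)          # crude but adequate starting guess for z > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Check the defining identity w * e^w = z for both solutions above.
for z in (1.0, 2.0):
    w = lambert_w0(z)
    assert math.isclose(w * math.exp(w), z, rel_tol=1e-10)
```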
Logarithmic equations
If the unknown x occurs only in arguments of a logarithm function:
• applying exponentiation to both sides may yield an algebraic equation, e.g.
$2\log _{5}(3x-1)-\log _{5}(12x+1)=0$ transforms, using exponentiation to base $5$, to ${\frac {(3x-1)^{2}}{12x+1}}=1,$ which has the solutions $x\in \{0,2\}.$ If only real numbers are considered, $x=0$ is not a solution, as it leads to a non-real subexpression $\log _{5}(-1)$ in the given equation.
This requires the original equation to consist of integer-coefficient linear combinations of logarithms w.r.t. a unique base, and the logarithm arguments to be polynomials in x.[6]
• if all "logarithm calls" have a unique base $b$ and a unique argument expression $f(x),$ then substituting $y=\log _{b}(f(x))$ may lead to a simpler equation,[7] e.g.
$5\ln(\sin x^{2})+6=7{\sqrt {\ln(\sin x^{2})+8}}$ transforms, using $y=\ln(\sin x^{2}),$ to $5y+6=7{\sqrt {y+8}},$ which is algebraic and can be solved. After that, applying inverse operations to the substitution equation yields ${\sqrt {\arcsin \exp y}}=x.$
Trigonometric equations
If the unknown x occurs only as argument of trigonometric functions:
• applying Pythagorean identities and trigonometric sum and multiple formulas, arguments of the forms $\sin(nx+a),\cos(mx+b),\tan(lx+c),...$ with integer $n,m,l,...$ might all be transformed to arguments of the form, say, $\sin x$. After that, substituting $y=\sin(x)$ yields an algebraic equation,[8] e.g.
$\sin(x+a)=(\cos ^{2}x)-1$ transforms to $(\sin x)(\cos a)+{\sqrt {1-\sin ^{2}x}}(\sin a)=1-(\sin ^{2}x)-1$, and, after substitution, to $y(\cos a)+{\sqrt {1-y^{2}}}(\sin a)=-y^{2}$ which is algebraic[9] and can be solved. After that, applying $x=2k\pi +\arcsin y$ obtains the solutions.
Hyperbolic equations
If the unknown x occurs only in linear expressions inside arguments of hyperbolic functions,
• unfolding them by their defining exponential expressions and substituting $y=e^{x}$ yields an algebraic equation,[10] e.g.
$3\cosh x=4+\sinh(2x-6)$ unfolds to ${\frac {3}{2}}(e^{x}+{\frac {1}{e^{x}}})=4+{\frac {1}{2}}\left({\frac {(e^{x})^{2}}{e^{6}}}-{\frac {e^{6}}{(e^{x})^{2}}}\right),$ which transforms to the equation ${\frac {3}{2}}(y+{\frac {1}{y}})=4+{\frac {1}{2}}\left({\frac {y^{2}}{e^{6}}}-{\frac {e^{6}}{y^{2}}}\right),$ which is algebraic[11] and can be solved. Applying $x=\ln y$ obtains the solutions of the original equation.
Approximate solutions
Approximate numerical solutions to transcendental equations can be found using numerical, analytical approximations, or graphical methods.
Numerical methods for solving arbitrary equations are called root-finding algorithms.
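The classic example $x=\cos x$ from the introduction can be solved with the simplest such algorithm, bisection. A minimal sketch (illustrative; production code would use a library root-finder):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Bisection root-finder; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, flo = mid, f(mid)    # root lies in [mid, hi]
    return 0.5 * (lo + hi)

# x = cos x has a unique real solution (the Dottie number, about 0.739085).
root = bisect(lambda x: x - math.cos(x), 0.0, 1.0)
assert math.isclose(root, math.cos(root), rel_tol=1e-9)
```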
In some cases, the equation can be well approximated using Taylor series near the zero. For example, for $k\approx 1$, the solutions of $\sin x=kx$ are approximately those of $(1-k)x-x^{3}/6=0$, namely $x=0$ and $x=\pm {\sqrt {6}}{\sqrt {1-k}}$.
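The quality of this Taylor-based approximation can be checked numerically; the sketch below (illustrative, with an arbitrary choice $k=0.95$ near 1) compares the closed-form approximation with a root refined by bisection:

```python
import math

k = 0.95
approx = math.sqrt(6) * math.sqrt(1 - k)   # from (1 - k)x - x^3/6 = 0

# Refine the true positive root of sin x = k x by bisection on [0.1, 1.5].
g = lambda x: math.sin(x) - k * x          # g(0.1) > 0, g(1.5) < 0
lo, hi = 0.1, 1.5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
true_root = 0.5 * (lo + hi)

# The series approximation is accurate to better than 0.01 for k this close to 1.
assert abs(true_root - approx) < 0.01
```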
For a graphical solution, one method is to set each side of a single-variable transcendental equation equal to a dependent variable and plot the two graphs, using their intersecting points to find solutions (see picture).
Other solutions
• Some transcendental systems of high-order equations can be solved by “separation” of the unknowns, reducing them to algebraic equations.[12][13]
• The following can also be used when solving transcendental equations/inequalities: If $x_{0}$ is a solution to the equation $f(x)=g(x)$ and $f(x)\leq c\leq g(x)$, then this solution must satisfy $f(x_{0})=g(x_{0})=c$. For example, we want to solve $\log _{2}\left(3+2x-x^{2}\right)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)$. The given equation is defined for $-1<x<3$. Let $f(x)=\log _{2}\left(3+2x-x^{2}\right)$ and $g(x)=\tan ^{2}\left({\frac {\pi x}{4}}\right)+\cot ^{2}\left({\frac {\pi x}{4}}\right)$. It is easy to show that $f(x)\leq 2$ and $g(x)\geq 2$ so if there is a solution to the equation, it must satisfy $f(x)=g(x)=2$. From $f(x)=2$ we get $x=1\in (-1,3)$. Indeed, $f(1)=g(1)=2$ and so $x=1$ is the only real solution to the equation.
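The key facts used in this worked example — $f(1)=g(1)=2$ and $f(x)\leq 2$ on the domain — are easy to verify numerically (illustrative check, not part of the original article):

```python
import math

f = lambda x: math.log2(3 + 2 * x - x * x)
g = lambda x: math.tan(math.pi * x / 4) ** 2 + 1 / math.tan(math.pi * x / 4) ** 2

assert math.isclose(f(1), 2.0)
assert math.isclose(g(1), 2.0)
# f(x) = log2(4 - (x - 1)^2) <= log2(4) = 2 on the domain (-1, 3):
assert all(f(x) <= 2.0 + 1e-12 for x in [-0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
```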
See also
• Mrs. Miniver's problem – Problem on areas of intersecting circles
References
1. I.N. Bronstein and K.A. Semendjajew and G. Musiol and H. Mühlig (2005). Taschenbuch der Mathematik (in German). Frankfurt/Main: Harri Deutsch. Here: Sect.1.6.4.1, p.45. The domain of equations is left implicit throughout the book.
2. For example, according to the Wolfram Mathematica tutorial page on equation solving, both $2^{x}=x$ and $e^{x}+x+1=0$ can be solved by symbolic expressions, while $x=\cos x$ can only be solved approximatively.
3. Bronstein et al., p.45-46
4. Bronstein et al., Sect.1.6.4.2.a, p.46
5. Bronstein et al., Sect.1.6.4.2.b, p.46
6. Bronstein et al., Sect.1.6.4.3.b, p.46
7. Bronstein et al., Sect.1.6.4.3.a, p.46
8. Bronstein et al., Sect.1.6.4.4, p.46-47
9. over an appropriate field, containing $\sin a$ and $\cos a$
10. Bronstein et al., Sect.1.6.4.5, p.47
11. over an appropriate field, containing $e^{6}$
12. V. A. Varyuhin, S. A. Kas'yanyuk, “On a certain method for solving nonlinear systems of a special type”, Zh. Vychisl. Mat. Mat. Fiz., 6:2 (1966), 347–352; U.S.S.R. Comput. Math. Math. Phys., 6:2 (1966), 214–221
13. V.A. Varyukhin, Fundamental Theory of Multichannel Analysis (VA PVO SV, Kyiv, 1993) [in Russian]
• John P. Boyd (2014). Solving Transcendental Equations: The Chebyshev Polynomial Proxy and Other Numerical Rootfinders, Perturbation Series, and Oracles. Other Titles in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics (SIAM). doi:10.1137/1.9781611973525. ISBN 978-1-61197-351-8.
Original Paper
Optimal spatial-dynamic management to minimize the damages caused by aquatic invasive species
Katherine Y. Zipp ORCID: orcid.org/0000-0002-7206-51591,
Yangqingxiang Wu2,
Kaiyi Wu3 &
Ludmil T. Zikatanov ORCID: orcid.org/0000-0002-5189-42302
Letters in Spatial and Resource Sciences volume 12, pages 199–213 (2019)
Invasive species have been recognized as a leading threat to biodiversity. In particular, lakes are especially affected by species invasions because they are closed systems sensitive to disruption. Accurately controlling the spread of invasive species requires solving a complex spatial-dynamic optimization problem. In this work we propose a novel framework for determining the optimal management strategy to maximize the value of a lake system net of damages from invasive species, including an endogenous diffusion mechanism for the spread of invasive species through boaters' trips between lakes. The proposed method includes a combined global iterative process which determines the optimal number of trips to each lake in each season and the spatial-dynamic optimal boat ramp fee.
Note that this utility function can be further generalized to allow for nonlinear impacts of income and to allow for congestion to impact boaters (i.e. independence among boaters' utilities).
We assume that the invasion status is updated at the end of the season such that boaters' trip decisions depend on \(x_{s-1}\) and at the end of the season when the invasion status is updated \(x_s\) depends on the boating decisions in season s.
We take \(a=2.824153\) which approximates the sigmoid with \(\ell _\infty \) error at most 0.056075.
Accounting for a non-constant welfare loss that depended on the number of lakes invaded was found to be minor in Zipp et al. (2019) (around 2%), however, future work could allow the welfare loss per boater to depend on the number of invaded lakes.
Clearly, allowing a negative number of boating trips would not be realistic.
Brock, W., Xepapadeas, A.: Diffusion-induced instability and pattern formation in infinite horizon recursive optimal control. J. Econ. Dyn. Control 32(9), 2745 (2008). https://doi.org/10.1016/j.jedc.2007.08.005
Brock, W., Xepapadeas, A.: Pattern formation, spatial externalities and regulation in coupled economic-ecological systems. J. Environ. Econ. Manag. 59(2), 149 (2010). https://doi.org/10.1016/j.jeem.2009.07.003
Bruckerhoff, L., Havel, J., Knight, S.: Survival of invasive aquatic plants after air exposure and implications for dispersal by recreational boats. Hydrobiologia 746(1), 113 (2014). https://doi.org/10.1007/s10750-014-1947-9
Chivers, C., Leung, B.: Predicting invasions: alternative models of human-mediated dispersal and interactions between dispersal network structure and Allee effects. J. Appl. Ecol. 49(5), 1113 (2012). https://doi.org/10.1111/j.1365-2664.2012.02183.x
Deutsch, F.: Best approximation in inner product spaces, CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC, vol. 7. Springer, New York (2001). https://doi.org/10.1007/978-1-4684-9298-9
Eiswerth, M.E., van Kooten, G.C.: Uncertainty, economics, and the spread of an invasive plant species. Am. J. Agric. Econ. 84(5), 1317 (2002)
Epanchin-Niell, R.S., Hufford, M.B., Aslan, C.E., Sexton, J.P., Port, J.D., Waring, T.M.: Controlling invasive species in complex social landscapes. Front. Ecol. Environ. 8(4), 210 (2010). https://doi.org/10.1890/090029
Hof, J.: Optimizing spatial and dynamic population-based control strategies for invading forest pests. Nat. Resour. Model. 11(3), 197 (1998). https://doi.org/10.1111/j.1939-7445.1998.tb00308.x
Horan, R., Wolf, C.A., Fenichel, E.P., Mathews, K.H.: Spatial management of wildlife disease. Rev. Agric. Econ. 27(3), 483 (2005). https://doi.org/10.1111/j.1467-9353.2005.00248.x
Horsch, E.J., Lewis, D.J.: The effects of aquatic invasive species on property values: evidence from a quasi-experiment. Land Econ. 85(3), 391 (2009). https://doi.org/10.1353/lde.2009.0042
Leung, B., Lodge, D.M., Finnoff, D., Shogren, J.F., Lewis, M.A., Lamberti, G.: An ounce of prevention or a pound of cure: bioeconomic risk analysis of invasive species. Proc. R. Soc. B Biol. Sci. 269(1508), 2407 (2002). https://doi.org/10.1098/rspb.2002.2179
Lewis, D.J., Provencher, B., Beardmore, B.: Using an intervention framework to value salient ecosystem services in a stated preference experiment. Ecol. Econ. 114, 141 (2015). https://doi.org/10.1016/j.ecolecon.2015.03.025
Moorhouse, T.P., Macdonald, D.W.: Are invasives worse in freshwater than terrestrial ecosystems? WIREs Water 2(1), 1 (2015). https://doi.org/10.1002/wat2.1059
Provencher, B., Lewis, D.J., Anderson, K.: Disentangling preferences and expectations in stated preference analysis with respondent uncertainty: the case of invasive species prevention. J. Environ. Econ. Manag. 64(2), 169 (2012). https://doi.org/10.1016/j.jeem.2012.04.002
Rothlisberger, J.D., Chadderton, W.L., McNulty, J., Lodge, D.M.: Aquatic invasive species transport via trailered boats: what is being moved, who is moving it, and what can be done. Fisheries 35(3), 121 (2010). https://doi.org/10.1577/1548-8446-35.3.121
Sala, O.E., Chapin, F.S., Armesto, J., Berlow, E.L., Bloomfield, J., Dirzo, R., Huber-Sanwald, E., Huenneke, L.F., Jackson, R.B., Kinzig, a, Leemans, R., Lodge, D.M., Mooney, H., Oesterheld, M., Poff, N.L., Sykes, M.T., Walker, B.H., Walker, M., Wall, D.H.: Global biodiversity scenarios for the year 2100. Science 287(5459), 1770 (2000). https://doi.org/10.1126/science.287.5459.1770
Sanchirico, J.N., Albers, H.J., Fischer, C., Coleman, C.: Spatial management of invasive species: pathways and policy options. Environ. Resource Econ. 45(4), 517 (2010). https://doi.org/10.1007/s10640-009-9326-0
Train, K.E.: Discrete Choice Methods with Simulation, pp. 1–388. Cambridge University Press, Cambridge (2009). https://doi.org/10.1017/CBO9780511753930
Wilcove, D.S., Rothstein, D., Dubow, J., Phillips, A., Losos, E.: Quantifying threats to imperiled species in the United States. BioScience 48(8), 607 (1998). https://doi.org/10.2307/1313420
Zipp, K.Y., Lewis, D.J., Provencher, B., Zanden, M.J.V.: The spatial dynamics of the economic impacts of an aquatic invasive species: an empirical analysis. Land Econ. 95(1), 1–18 (2019)
Department of Agricultural Economics, Sociology and Education, Penn State, University Park, PA, 16802, USA
Katherine Y. Zipp
Department of Mathematics, Penn State, University Park, PA, 16802, USA
Yangqingxiang Wu
& Ludmil T. Zikatanov
Department of Mathematics, Tufts University, Medford, MA, 02155, USA
Kaiyi Wu
Correspondence to Katherine Y. Zipp.
The work of Katherine Y. Zipp was partially supported by the Department of Agricultural Economics, Sociology, and Education at Penn State, the USDA National Institute of Food and Agriculture and Multistate Hatch Appropriations under Project # PEN04631 and Accession # 1014400, and a seed grant from the Institute for CyberScience at Penn State. The work of Yangqingxiang Wu was partially supported by the Department of Agricultural Economics, Sociology, and Education at Penn State. The work of Ludmil T. Zikatanov was partially supported by NSF Grants DMS-1720114 and DMS–1819157 and a seed grant from the Institute for CyberScience at Penn State.
Appendix: On the convergence of the algorithm
The algorithm we proposed above utilizes a sequence of quadratic programming problems which, as we have shown in Sect. 2.5, are solvable analytically. This is a novel approach and to support this design we give a brief analysis of its convergence.
We note that the goal is to maximize \({{\mathscr {G}}}(\varvec{b})\) given by (see (11)):
$$\begin{aligned} {{\mathscr {G}}}(\varvec{b})= \left[ \langle {\mathbb {D}}(\widetilde{\varvec{b}}) \varvec{b},\varvec{b}\rangle + 2\langle \varvec{f}(\widetilde{\varvec{b}}),\varvec{b}\rangle + \langle g(\widetilde{\varvec{b}},\varvec{P}(\widetilde{\varvec{\tau }})),\varvec{1}\rangle \right] \bigg |_{\widetilde{\varvec{b}}=\varvec{b}} = {{\mathscr {F}}}(\widetilde{\varvec{b}},\varvec{b})\bigg |_{\widetilde{\varvec{b}}=\varvec{b}}= {{\mathscr {F}}}(\varvec{b},\varvec{b}). \end{aligned}$$
In the equations above, one can think of \({{\mathscr {F}}}(\widetilde{\varvec{b}},\varvec{b})\) as extending \({{\mathscr {G}}}\) from the "line" \(\varvec{b}=\widetilde{\varvec{b}}\) to the "plane" \((\widetilde{\varvec{b}},\varvec{b})\). It should be clear that we use the terms "line" and "plane" loosely here to identify the multidimensional analogues of such objects.
Since we are free to choose the extension \({{\mathscr {F}}}\) we may assume that we have extended the profit function so that
$$\begin{aligned} {{\mathscr {G}}}(\varvec{b})={{\mathscr {F}}}(\varvec{b},\varvec{b})\le {{\mathscr {F}}}(\widetilde{\varvec{b}},\varvec{b}), \quad \forall (\widetilde{\varvec{b}},\varvec{b}), \quad \text{ satisfying } \text{ constraints }. \end{aligned}$$
Recall that, Algorithm 3, for a given \(\varvec{b}_k\) maximizes \({{\mathscr {F}}}(\varvec{b}_k,\varvec{c})\) with respect to \(\varvec{c}\), and the optimal value of \({{\mathscr {F}}}\) is at \(\varvec{c} = \varvec{b}_{k+1}\). Note that this implies that
$$\begin{aligned} {{\mathscr {F}}}(\varvec{b}_k, \varvec{b}_{k+1})\ge {{\mathscr {F}}}(\varvec{b}_k, \varvec{b}_*), \end{aligned}$$
where \(\varvec{b}_*\) is the optimal solution which maximizes \({{\mathscr {F}}}(\varvec{b},\varvec{b})\) (the optimal value we want to find). As we have shown, such relation holds because at \(\varvec{c}=\varvec{b}_{k+1}\), the function \({{\mathscr {F}}}(\varvec{b}_k,\varvec{c})\) viewed as function of \(\varvec{c}\), is at a maximum. Therefore, the value of \({{\mathscr {F}}}(\varvec{b}_k,\varvec{b}_{k+1})\) cannot be smaller than the value of \({{\mathscr {F}}}(\varvec{b}_k,\varvec{b}_*)\).
Assume further that \(\lim _{k\rightarrow \infty }\varvec{b}_k=\varvec{b}_{\infty }\) (which we found numerically to be true in all examples we tried) and take the limit on both sides of (17). Since \({{\mathscr {F}}}\) is continuous (it need not be differentiable; continuity is enough here), we obtain that
$$\begin{aligned} {{\mathscr {F}}}(\varvec{b}_*,\varvec{b}_{*})\ge {{\mathscr {F}}}(\varvec{b}_{\infty },\varvec{b}_{\infty }) \ge {{\mathscr {F}}}(\varvec{b}_{\infty },\varvec{b}_{*})\ge {{\mathscr {F}}}(\varvec{b}_{*},\varvec{b}_{*}). \end{aligned}$$
The first inequality holds because \(\varvec{b}_*\) is the optimal solution, the second holds because of the limit with respect to \(k\) in (17), and the last inequality follows from (16).
Since the left and right sides of these inequalities are equal, we must have equality everywhere. In conclusion, under the simple assumptions we made above, if the sequence of iterates converges then it converges to an optimal value of the objective function.
To make the argument precise, let us point out that the sequence of all iterates may not converge, but may have one, two or more convergent subsequences. As is known, by the Heine–Borel theorem, as long as this sequence is bounded (which it is, because of the constraints), there must be a convergent subsequence. The considerations given above apply to any convergent subsequence as well. We can then conclude that the function value at the limit of any such convergent subsequence is the optimal value of the benefit function. We have the following result; its proof is an immediate consequence of the considerations above.
If the extension satisfies (16), then for every convergent subsequence of iterates \(\{b_{k_j}\}_{j=1}^J\), \(\lim \limits _{j\rightarrow \infty } b_{k_j}=b_{\infty ,j}\) the function values converge to the optimal value, namely,
$$\begin{aligned} {{\mathscr {F}}}(b_{\infty ,j},b_{\infty ,j})= {{\mathscr {F}}}(b_{*},b_{*}), \quad j=1,2,\ldots ,J. \end{aligned}$$
We further remark that, while the optimal solutions, i.e. the limits of subsequences, \(\{b_{\infty ,j}\}_{j=1}^J\) may be different, the function values at such points are the same.
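The fixed-point behaviour of the iteration \(\varvec{b}_{k+1}=\arg \max _{\varvec{c}}{{\mathscr {F}}}(\varvec{b}_k,\varvec{c})\) can be illustrated with a toy example. The quadratic extension below is hypothetical (it is not the paper's benefit function): taking F(a, c) = -c^2 + (1 + a)c, each inner maximization gives c = (1 + a)/2, and the iterates converge to the fixed point b = 1, which maximizes G(b) = F(b, b):

```python
def update(a):
    # argmax over c of F(a, c) = -c**2 + (1 + a) * c, i.e. c = (1 + a) / 2
    return (1.0 + a) / 2.0

b = 0.0
for _ in range(60):
    b = update(b)      # b_k = 1 - 2**(-k): geometric convergence to 1

assert abs(b - 1.0) < 1e-12
```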
Finally, let us note that the technique of extending a function to a higher dimensional space (from one variable to two) is well known in the theory of partial differential equations and can be viewed as a special regularization. The reason is that a less regular problem can be extended to a more regular and better-behaved one in a higher dimension.
Zipp, K.Y., Wu, Y., Wu, K. et al. Optimal spatial-dynamic management to minimize the damages caused by aquatic invasive species. Lett Spat Resour Sci 12, 199–213 (2019). https://doi.org/10.1007/s12076-019-00237-x
Issue Date: December 2019
DOI: https://doi.org/10.1007/s12076-019-00237-x
Spatial-dynamic management
Convex optimization
Bioeconomic
Combining national survey with facility-based HIV testing data to obtain more accurate estimate of HIV prevalence in districts in Uganda
Joseph Ouma ORCID: orcid.org/0000-0001-5249-74401,
Caroline Jeffery2,
Joseph J. Valadez2,
Rhoda K. Wanyenze3,
Jim Todd4 &
Jonathan Levin1
National or regional population-based HIV prevalence surveys have small sample sizes at district or sub-district levels; this leads to wide confidence intervals when estimating HIV prevalence at district level for programme monitoring and decision making. Health facility programme data, collected during service delivery is widely available, but since people self-select for HIV testing, HIV prevalence estimates based on it, is subject to selection bias. We present a statistical annealing technique, Hybrid Prevalence Estimation (HPE), that combines a small population-based survey sample with a facility-based sample to generate district level HIV prevalence estimates with associated confidence intervals.
We apply the HPE methodology to combine the 2011 Uganda AIDS indicator survey with the 2011 health facility HIV testing data to obtain HIV prevalence estimates for districts in Uganda. Multilevel logistic regression was used to obtain the propensity of testing for HIV in a health facility, and the propensity to test was used to combine the population survey and health facility HIV testing data to obtain the HPEs. We assessed comparability of the HPEs and survey-based estimates using Bland Altman analysis.
The estimates ranged from 0.012 to 0.178 and had narrower confidence intervals than survey-based estimates. The average difference between HPEs and population survey estimates was 0.00 (95% CI: − 0.04, 0.04). The HPE standard errors were reduced by 28.9% (95% CI: 23.4–34.4%) compared with survey-based standard errors; the overall reduction in HPE standard errors relative to survey-based standard errors ranged from 5.4 to 95%.
Facility data can be combined with population survey data to obtain more accurate HIV prevalence estimates for geographical areas with small population survey sample sizes. We recommend use of the methodology by district level managers to obtain more accurate HIV prevalence estimates to guide decision making without incurring additional data collection costs.
Accurate data are needed for monitoring health programmes and interventions and for appropriate allocation of resources. In most of sub-Saharan Africa (SSA), where the HIV/AIDS epidemic is generalized, national population surveys, such as AIDS Indicator Surveys (AIS), are preferred to provide reliable health indicator estimates for programme monitoring [1]. The surveys are, however, designed to provide estimates at national and regional levels, and small sample sizes at district or sub-district levels lead to less reliable indicator estimates that have wider confidence intervals [1,2,3,4].
Health Information Systems such as the District Health Information System (DHIS2) provide another source of information that can be used for monitoring the HIV/AIDS epidemic. This data is collected more regularly, available at more decentralized levels, e.g. districts and costs less to collect. WHO, UNAIDS and other development partners recommend use of routine facility data in addition to other data sources to monitor programme performance, assess intervention coverage and measure levels of disease in a population [5]. Use of routine health facility data informed the adjustments in HIV prevalence estimates in many countries in Eastern and Southern Africa [6]. Several other studies highlight the utility of data from routine service delivery in informing service delivery decisions [7, 8]. Routine service delivery data, however, are collected only on individuals who attend/access health facilities and thus provide potentially biased estimates of population indicators.
In addition, development partners and ministries of health in middle and low-income countries have invested in electronic health information systems including the DHIS2, to improve the quality and timeliness of data from the systems. In Uganda, Ministry of Health (MoH) with support from development partners conduct quarterly reviews to validate data reported into DHIS2 [9]. With this investment, there is a need to find ways to utilize this source of information to inform service delivery decisions. Combining routine data with a relatively small sample of respondents from population survey data has been found to produce more accurate indicator estimates [10, 11].
Statistical models in packages such as SPECTRUM or THEMBISA attempt to use both routine and population survey data to calculate HIV/AIDS indicators. Required model inputs, such as ANC prevalence, mortality, number of individuals on ART and recent HIV prevalence, complicate their use when they are not available [12]. A simpler and more robust method may be easier to use and give good results.
Larmarange and Bendaud obtained district level estimates from population survey data from 17 countries using a kernel density approach implemented in PrevR [13]. In districts with an inadequate number of observations in the survey sample, estimates were obtained based on observations from neighboring districts or administrative units and were categorized as "uncertain" estimates [13]. Using a similar approach (PrevR), UNAIDS found "uncertain" estimates in up to 86% of the districts in Mozambique and in 79% of the districts in Uganda [13, 14].
In this study, we explore use of readily available health facility service delivery data in combination with population survey data to obtain more accurate HIV prevalence estimates at district level for monitoring interventions and disease impact in the general population. We implement the Hybrid Prevalence Estimation (HPE) methodology to obtain HIV prevalence estimates and 95% confidence intervals for districts in Uganda. The estimation process accounts for the sample size limitations associated with population survey data at district level and the self-selection bias associated with health facility testers, a limitation that many researchers have not been able to address adequately [2,3,4, 15,16,17,18].
We analyzed data from the 2011 Uganda AIDS Indicator Survey (UAIS) and health facility HIV testing data from the national DHIS2 collected during 2011. UAIS data was downloaded from the Measure DHS website www.measuredhs.com after obtaining consent from ICF/Macro international, while health facility testing data was extracted from the DHIS2 hosted at MoH after obtaining written permission from MoH. Ethical clearance to conduct this study was obtained from the University of Witwatersrand Human Research Ethics Committee (HREC) and Uganda National Council for Science and Technology (UNCST).
Uganda AIDS Indicator survey
The UAIS is a nationally representative, population-based, HIV serological survey, designed to provide HIV prevalence estimates at national and regional levels [19]. The survey used a two-stage cluster sampling design. For the 2011 survey, Uganda was divided into 10 geographical regions, each consisting of 8–15 neighboring districts. Clusters were randomly selected from each region with probability proportional to the number of households in the cluster. The estimated number of households per cluster was projected from the 2002 National Population and Housing Census (NPHC) [20]. Clusters were enumeration areas from the 2002 NPHC. Sample sizes were allocated equally across the 10 geographical regions. A systematic sample of 25 households was then selected from each cluster using the 2002 NPHC sampling frame. All adults present in the selected households who consented to participate in the survey were interviewed [19]. More details about the survey are available from www.measuredhs.com.
For this study, a total of 19,475 individuals (8532 men and 10,943 women) aged 15–49 years and tested for HIV during the survey were considered. Variables included in the analysis were (i) at cluster level: area of residence (urban/rural) and region of the country and at (ii) individual level: respondents' gender, marital status, education level attained, number of sexual partners including husband/wife in the 12 months preceding the survey, employment status and distance to nearest health facility.
A multilevel logistic regression model was fitted to the UAIS data to obtain each respondent's probability of testing for HIV in a health facility. The model was fitted using a total of 470 clusters. The average number of observations per cluster was 45 (min = 11, max = 64). Unequal sample selection probabilities were accounted for by incorporating scaled sampling weights; Carle's methodology was applied to adjust/scale the sampling weights [21]. The models were fitted using the maximum likelihood method in Stata statistical software, release 15 [22].
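The weight-scaling step can be illustrated as follows. This is a minimal sketch, not the authors' Stata code; it shows one common scaling from Carle's recommendations (rescaling each respondent's design weight so that the weights within a cluster sum to the cluster sample size), and all variable names and numbers are hypothetical.

```python
# Illustrative sketch of within-cluster scaling of sampling weights
# (Carle 2009, "method A"-style rescaling): w* = w * n_j / sum_j(w).

def scale_weights(weights_by_cluster):
    """Rescale design weights so each cluster's weights sum to its sample size."""
    scaled = {}
    for cluster, weights in weights_by_cluster.items():
        n_j = len(weights)          # cluster sample size
        total = sum(weights)        # sum of raw design weights in the cluster
        scaled[cluster] = [w * n_j / total for w in weights]
    return scaled

# Hypothetical example: two clusters with unequal raw weights.
raw = {"cluster_1": [2.0, 4.0, 6.0], "cluster_2": [1.5, 1.5]}
scaled = scale_weights(raw)
# After scaling, the weights in each cluster sum to the cluster sample size.
```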
Survey respondents were considered to have tested for HIV at a health facility if they reported that they tested for HIV in health facility and received their test results in the 12 months preceding the survey. Pregnant or breastfeeding women who tested for HIV during antenatal care attendance and individuals who tested at an HIV care centre such as The AIDS Support Organization (TASO) and AIDS Information Centre (AIC) were included in the analysis. Health facilities included facilities owned and managed by government (public) and private organizations that reported HIV testing data to the national DHIS2.
Health facility data
Health facility HIV testing data comprised data reported to the national DHIS2. The system was developed to provide accurate, timely and quality routine data for monitoring and planning for the health sector in Uganda [9, 23]. Training and technical support from development partners and MoH has led to improvement in the quality and reliability of data in the system [9]. Aggregated HIV testing data is reported by health facilities to the DHIS2 on a monthly basis. The data includes HIV testing at all inpatient and outpatient departments in health facilities. For the 2011 reporting period, data was disaggregated by age (i.e. 0–14, 15–49 and 50+ years) and gender (male and female). For this study, we considered males and females aged 15–49 years.
Indicators considered for this analysis were: the number of individuals who were tested and received their HIV test results (A) and the number of individuals who tested HIV positive (B). For ANC data, we considered the number of pregnant women counseled, tested and who received their HIV test results (C) at first antenatal visit and the number who tested HIV positive (D). The HIV counseling and testing algorithm in Uganda recommends HIV testing for any individual whose most recent negative HIV test result was conducted more than 3 months prior to the current visit to the health facility [24]. Some individuals may test multiple times within a year but may not disclose this to health workers, resulting in double counting, a key limitation for this study. Furthermore, some pregnant women may test for HIV before seeking antenatal care and test again during antenatal attendance, leading to double counting in the data reported to the national DHIS2.
Variables based on DHIS2 data were defined as follows;
Total number of individuals tested for HIV = (A + C)
Number HIV positive = (B + D)
Total number of males tested for HIV = males in A
Number of males tested HIV positive = males in B
Total number of females tested for HIV = (females in A) + C
Number of females tested HIV positive = (females in B) + D
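The variable definitions above amount to simple aggregation of the reported counts A–D; a minimal sketch (with hypothetical counts) is:

```python
# Sketch of the DHIS2 variable definitions above: combine general
# testing counts (A, B) and ANC counts (C, D). Numbers are hypothetical.

def dhis2_totals(A, B, C, D, males_A, males_B):
    """Derive the study variables from the reported counts A-D."""
    females_A = A - males_A
    females_B = B - males_B
    return {
        "tested_total": A + C,                 # all individuals tested
        "positive_total": B + D,               # all HIV positive
        "tested_male": males_A,
        "positive_male": males_B,
        "tested_female": females_A + C,        # ANC testers are all female
        "positive_female": females_B + D,
    }

totals = dhis2_totals(A=1000, B=60, C=400, D=30, males_A=450, males_B=25)
```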
Addressing possible bias in health facility data
Routine facility data collected as part of service delivery consists of individuals who self-select, limiting its use for general population health indicator monitoring. To obtain general population indicator estimates, some researchers have used census projections as denominators; however, this approach often results in coverage estimates that are greater than 100% [25]. Population surveys are preferred for obtaining health indicator denominators since their design takes into account the population distribution in the country [25,26,27,28]. The UAIS comprises two subpopulations, namely individuals who tested for HIV in a health facility in the 12 months preceding the survey (the facility testers) and those who did not test for HIV in a health facility (the non-facility testers) for the same period. We assume that the UAIS estimates of HIV prevalence for those who tested for HIV in a health facility and for those who did not test for HIV in a health facility are accurate at regional levels, since estimates of domain proportions from a multistage survey are unbiased. We apply this assumption to adjust the denominators of the DHIS2 data so that, at the regional level, DHIS2 HIV prevalence estimates are similar to UAIS prevalence estimates. The adjustment process was carried out as follows:
We obtained the HIV prevalence \( {\hat{k}}_f \) among health facility testers in each region in the UAIS data.
We adjusted denominators in the DHIS2 data for each region using \( {n}_{adjusted}^r=\frac{n_{pos}}{{\hat{k}}_f} \), where npos is the observed number of individuals who tested HIV positive in each region in the DHIS2 data.
We calculated an adjustment factor (δf) for each region using \( {\delta}_f=\frac{n_{adjusted}^r}{n_r} \), where nr is the observed number of individuals who tested for HIV in each region from the DHIS2 data.
We applied the adjustment factor (δf) to obtain \( {n}_{adjusted}^d \), the adjusted number of individuals who tested for HIV in a health facility at district level, using \( {n}_{adjusted}^d={\delta}_f\ast {n}_d \), where nd is the observed number of individuals tested for HIV at district level.
HIV prevalence (Pf) based on the adjusted DHIS2 data in the district was then obtained as the ratio of npos, the observed number of positives in the district, to \( {n}_{adjusted}^d \), the adjusted number of individuals who tested for HIV in the district, i.e. \( {P}_f=\frac{n_{pos}}{n_{adjusted}^d} \)
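The five adjustment steps above can be sketched as follows. This is a minimal illustration, not the authors' code; all counts and the regional facility-tester prevalence are hypothetical.

```python
# Sketch of adjustment steps 1-5: k_f is the regional facility-tester
# HIV prevalence from the UAIS; n_pos_region / n_region are regional
# DHIS2 counts; n_pos_district / n_district are district DHIS2 counts.

def district_facility_prevalence(k_f, n_pos_region, n_region,
                                 n_pos_district, n_district):
    # Step 2: adjusted regional denominator so DHIS2 prevalence matches k_f.
    n_adjusted_region = n_pos_region / k_f
    # Step 3: regional adjustment factor delta_f.
    delta_f = n_adjusted_region / n_region
    # Step 4: adjusted district denominator.
    n_adjusted_district = delta_f * n_district
    # Step 5: adjusted district facility-based prevalence P_f.
    return n_pos_district / n_adjusted_district

# Hypothetical example: regional facility-tester prevalence of 8.4%.
p_f = district_facility_prevalence(
    k_f=0.084, n_pos_region=5000, n_region=80000,
    n_pos_district=300, n_district=4000,
)
```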
Hybrid prevalence estimation methodology
We consider n individuals in the UAIS to include nc individuals who tested for HIV at a health facility during the 12 months preceding the survey and know their test result and \( {n}_{\underset{\_}{c}} \) individuals who did not test for HIV at a health facility and therefore do not know their HIV status. i.e. \( n={n}_c+{n}_{\underset{\_}{c}} \). Using health facility prevalence computed in step 5 above, we computed district HIV prevalence as a weighted average of prevalence from DHIS2 data, Pf and prevalence among individuals who did not test for HIV in a health facility, \( {\hat{P}}_{\underset{\_}{s}} \) estimated from the UAIS data.
$$ \hat{P}={\hat{\pi}}_c{P}_f+\left(1-{\hat{\pi}}_c\right){\hat{P}}_{\underset{\_}{s}} $$
where;
\( \hat{P} \) – HPE/combined estimate, \( {\hat{\pi}}_c \) – the estimated probability of testing for HIV in a health facility, \( P_f \) – adjusted HIV prevalence for individuals tested at a health facility, and \( {\hat{P}}_{\underset{\_}{s}} \) – HIV prevalence for individuals tested during the survey who had not tested for HIV in a health facility in the 12 months preceding the survey. We estimated \( {\hat{\pi}}_c \) from UAIS data using multilevel logistic regression adjusting for both individual and cluster level factors; applying this model, we account for clustering at the cluster level [25]. Although the probability of testing for HIV in a health facility was obtained at individual level, we used the average district level probability of testing to combine the estimates. Since the average probability of HIV testing is obtained from a survey sample containing both facility and non-facility testers, we estimate the variance and standard error (SE) of the HPE as follows:
$$ {\displaystyle \begin{array}{l} Var\left(\hat{P}\right)=\frac{1}{n}\left\{{\hat{P}}_{\underset{\_}{s}}\left(1-{\hat{P}}_{\underset{\_}{s}}\right)\left(1-{\hat{\pi}}_c\right)+\left(1-{\hat{\pi}}_c\right)\ {\left({P}_f-{\hat{P}}_{\underset{\_}{s}}\right)}^2\right\}\\ {}\mathrm{and}\kern0.37em SE=\sqrt{\mathit{\operatorname{var}}\left(\hat{P}\right)}\end{array}} $$
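The combination and its variance can be computed directly from the formulas above; the following is a minimal sketch (not the authors' Stata code) with hypothetical inputs.

```python
import math

# Sketch of the HPE combination and its variance/SE as written above.
# pi_c: district-average propensity to test in a facility;
# p_f:  adjusted facility-based prevalence; p_s: survey prevalence
# among non-facility testers; n: district survey sample size.

def hybrid_prevalence(pi_c, p_f, p_s, n):
    p_hat = pi_c * p_f + (1 - pi_c) * p_s
    var = (1 / n) * (p_s * (1 - p_s) * (1 - pi_c)
                     + (1 - pi_c) * (p_f - p_s) ** 2)
    return p_hat, math.sqrt(var)

# Hypothetical district inputs.
p_hat, se = hybrid_prevalence(pi_c=0.35, p_f=0.084, p_s=0.068, n=180)
# An approximate 95% CI is p_hat +/- 1.96 * se.
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
```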
We assessed the accuracy of the HPEs relative to survey-based prevalence estimates by computing the percentage change in standard errors. We further assessed agreement between estimates obtained using the HPE methodology and those from the population survey method (direct population survey estimates) using a Bland Altman analysis [26, 27].
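The Bland Altman analysis mentioned above can be sketched as follows: it computes the mean difference (bias) between paired estimates and the 95% limits of agreement. The paired district estimates below are hypothetical.

```python
# Minimal Bland-Altman sketch: bias and 95% limits of agreement
# between paired estimates from two methods (hypothetical data).
from statistics import mean, stdev

def bland_altman(estimates_a, estimates_b):
    diffs = [a - b for a, b in zip(estimates_a, estimates_b)]
    bias = mean(diffs)                      # mean difference
    sd = stdev(diffs)                       # SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired district prevalence estimates.
survey = [0.06, 0.08, 0.11, 0.04, 0.09]
hpe    = [0.05, 0.09, 0.10, 0.04, 0.10]
bias, (lo, hi) = bland_altman(hpe, survey)
```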
All analysis was carried out in Stata statistical analysis software, Release 15 [22] and R version 3.5.0 [28].
Of the 19,475 individuals, 6729 (34.6%) tested for HIV in a health facility in the 12 months preceding the survey. HIV prevalence among those who tested in a health facility was 0.084 compared to 0.068 among those who did not test in a health facility (Table 1).
Table 1 Regional level HIV prevalence estimates
From health facility data, the national (unadjusted) HIV prevalence was 0.058 (male: 0.057, female: 0.059). A total of 4,758,991 individuals (female: 73.7%) were tested for HIV in a health facility (Table 1). DHIS2 HIV positivity by gender is presented in Additional file 1: Appendix 1.
Weighting/annealing factor
Overall (national) average propensity to test in a health facility was 0.35 (male: 0.27, female: 0.41). It ranged from 0.001 to 0.95 (Fig. 1). Mid Northern region had the highest average propensity to test for HIV in a health facility, 0.44 (male: 0.40, female: 0.49), while Mid-Eastern region had the lowest, 0.25 (male: 0.16, female: 0.32) (Fig. 1).
Propensity to test for HIV in a health facility
Hybrid prevalence estimates
HIV prevalence was highest in Central 1 region (0.11) and lowest in Mid-Western and Mid-Eastern regions (0.04 in each region). District level HPEs ranged from 0.01 to 0.18. Average HIV prevalence by region was: Central 1: 0.11 (min: 0.06, max: 0.18), Central 2: 0.10 (0.08, 0.17), East Central: 0.05 (0.02, 0.09), Mid-Eastern: 0.04 (0.01, 0.09), Mid Northern: 0.09 (0.05, 0.14), Mid-Western: 0.08 (0.03, 0.16), North East: 0.04 (0.04, 0.10), South West: 0.08 (0.04, 0.13) and West Nile: 0.04 (0.03, 0.07) (Table 2). Table 2 also presents HPE, survey and DHIS2 based HIV prevalence estimates by district.
Table 2 HPE HIV prevalence estimates, (HPE, Survey and unadjusted DHIS2)
Figure 2 presents HIV prevalence maps in: both sexes (map a); males (map b); and females (map c). HPEs had similar patterns for both sexes, males and females, consistent with the regional level prevalence estimates from the population survey in Table 1. Districts in Central 1 region, Mid Northern region, islands, and those along lake shores had higher overall, male and female HIV prevalence estimates (Fig. 2, and Additional file 1: Appendix 2), while districts in Mid-Eastern and West Nile regions had lower HIV prevalence estimates. HPEs were not calculated for two districts (Bukwo in Mid-Eastern region and Ntoroko in Mid-Western region) because UAIS data points were not available for those districts.
District Hybrid Prevalence Estimates. Maps created based on study data using Stata Statistical Software: Release 15. User licence was acquired before using the software
Figure 3 compares district HIV prevalence estimates from population survey and HPE while in Fig. 4, we compare HPE and the adjusted DHIS2 data for selected districts. Prevalence comparison between HPE and survey for all districts is presented in Additional file 1: Appendix 3. The figures show that HPEs had narrower confidence intervals compared to direct survey estimates indicating an improvement in the precision of the estimates.
District prevalence estimates from combined and population survey data. P_survey- survey based prevalence estimate while P_HPE is HIV prevalence based on the HPE methodology
District prevalence estimates from combined and DHIS2 data. P_HIS- Health facility-based prevalence estimate while P_HPE is HIV prevalence based on the HPE methodology
Of the 110 districts, 51 (46.4%) had lower HPEs (point estimates) and 59 (53.6%) had higher HPE compared to the survey-based district prevalence estimates (Fig. 3, Additional file 1: Appendix 3).
HPEs were however lower than the DHIS2 prevalence estimates in 74 (67.3%) and higher in 36 (32.7%) of the districts in Uganda (Fig. 4, Additional file 1: Appendix 4).
A joint comparison of the HP estimates with both survey-based and health facility-based prevalence estimates show that 33 (30.0%) of the districts had lower HPEs while 18 (16.4%) had higher estimates compared to both the survey and health facility-based prevalence estimates.
Precision of HPE and population survey estimates
Standard errors of the HPEs were generally lower compared to SEs from survey-based estimates (Fig. 5). Of the districts, 105 (95.5%) had lower HPE SEs compared to SEs from survey-based estimates. Overall, the HPE standard errors were decreased by 28.9% (95% CI: 23.4–34.4) compared to survey-based standard errors.
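The precision comparison above amounts to the relative change in standard errors between the two methods; a minimal sketch with hypothetical SEs:

```python
# Sketch of the precision comparison: percentage reduction in standard
# error when moving from survey-only to HPE estimates (hypothetical SEs).

def pct_se_reduction(se_survey, se_hpe):
    """Percentage reduction in SE per district."""
    return [100 * (s - h) / s for s, h in zip(se_survey, se_hpe)]

reductions = pct_se_reduction(se_survey=[0.020, 0.030, 0.025],
                              se_hpe=[0.015, 0.020, 0.020])
avg_reduction = sum(reductions) / len(reductions)
```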
Standard errors of estimates from survey and the HPE
Similarity of HPE and survey-based estimates
On average, there was no difference between survey and HPE estimates: 0.0 (95% CI: −0.04, 0.04) (Fig. 6a). The average difference for males was −0.01 (95% CI: −0.05, 0.03), while for females it was 0.00 (95% CI: −0.06, 0.06). Although there seems to be a bias (0.01) when assessing the agreement between HPE and survey-based estimates for males (Fig. 6b), the 95% confidence intervals of the differences between the estimates are narrow. Additionally, there was no systematic pattern of the points as the average of the estimates increased.
Difference plot comparing HPE and Direct survey estimates. PREV_HIS- HIV Prevalence based on health facility data, PREV_hp-HIV prevalence based on the HPE methodology while PREV_dom- HIV prevalence based on survey data only
The mean difference between the HPE and DHIS2 estimates was 0.01 (95% CI: −0.05, 0.06) (Fig. 7a). The average difference for males was −0.01 (95% CI: −0.07, 0.06), while for females it was 0.02 (95% CI: −0.05, 0.09) (Fig. 7b and c respectively). The size of the difference increased with the mean of the estimates, seen in the wider variability of the points about the no-difference (zero) line as the average of the estimates increases (Fig. 7a-c). The average difference was 0.02 and the confidence interval was wider when comparing HPEs and DHIS2 estimates for females (Fig. 7c).
Difference plot comparing HPE and DHIS2 estimates. PREV_HIS- HIV Prevalence based on health facility data, PREV_hp-HIV prevalence based on the HPE methodology while PREV_dom- HIV prevalence based on survey data only
In this study, we implement a novel approach, the Hybrid Prevalence Estimation (HPE) methodology, to obtain HIV prevalence estimates for districts in Uganda. We combined DHIS2 HIV testing data with information on non-facility testers from the 2011 UAIS data to obtain district level HIV prevalence estimates.
Although national population surveys are the gold standard for calculating population level health indicators, district level estimates from these surveys are less accurate due to the reduced sample sizes at district or lower administrative levels. The demand for accurate indicator estimates at district or lower administrative levels for programme monitoring motivates use of innovative approaches to provide the estimates. We obtained district level HIV prevalence estimates by combining population survey information with DHIS2 data using a Hybrid Prevalence Estimation methodology. Our estimates had narrower confidence intervals compared to estimates from the population survey at the district level, consistent with findings elsewhere [10, 11]. The HPE was calculated from three parameters: 1) prevalence in the health facility sample, 2) prevalence among non-facility testers from the population survey sample, and 3) the propensity to test for HIV in a health facility from the population survey sample. We also observed that HIV prevalence estimates obtained using the HPE methodology were similar to the population survey HIV prevalence estimates for males and females combined, and for males only, while they were lower for females. Additionally, UAIS-based prevalence estimates were generally higher, while DHIS2 prevalence estimates were lower than the HPEs, consistent with findings elsewhere [29].
In the UAIS, the population can be divided into two domains: 1) those that have access to health facilities, get tested for HIV, and are linked to appropriate care if found HIV positive, and 2) those that do not access health facilities and may remain unknown to the health care system. Barriers to health care access for the latter subpopulation may include factors such as low/no education and not being in a stable sexual relationship, factors that also increase the risk of HIV transmission [30, 31]. Combining survey with DHIS2 data therefore generates more precise indicator estimates that can be used to improve planning and service delivery for the general population at district levels where service delivery decisions are implemented.
Facility level data has known limitations, including selection bias, as it is not a random sample from the population for measuring general population level prevalence [15, 16, 18, 32]. Studies in Uganda [33, 34], Tanzania [35] and Zambia [36, 37] have also found that facility-based antenatal HIV testing data provides biased estimates of HIV prevalence and is therefore not appropriate for calculating HIV/AIDS indicators, including HIV prevalence, in the general population. The HPE methodology requires use of a small population survey sample [10, 11] to correct for bias in indicator estimates from health facility testing data. We used Uganda AIDS Indicator Survey data to correct for the bias in DHIS2 so as to obtain the HIV prevalence estimates for districts in Uganda. Other population surveys, such as the demographic and health surveys, can be similarly combined with facility-based data to obtain general population indicator estimates for planning and decision making, especially in low resourced environments where resource constraints limit collection of large sample sizes.
We applied a weighting factor, the propensity to test in a health facility, calculated using multilevel logistic regression, to combine the two data sources. Individual- and cluster-level predictors of testing for HIV were included in the model. Predictors of access to testing or the health care system may also impact HIV disease risk, as noted elsewhere [10]. Multilevel logistic regression is also appropriate for the UAIS design and enables inclusion of both individual and cluster level risk factors in the modelling process. The model also accounts for clustering [21, 25, 38].
There was no difference in prevalence estimates from the HPE and survey-based approaches, but confidence intervals of the HPEs were narrower, demonstrating the efficiency of the HPE methodology in obtaining population level estimates, as observed elsewhere [11, 18, 39].
We applied multilevel modeling, which has multiple advantages over classical models, including the use of HIV risk factors at individual and cluster levels. We used data from the 2011 UAIS; a more recent survey, the Population HIV Impact Assessment (PHIA), completed in 2017, was not publicly available at the time of this study. DHS data are prone to refusal to participate; this may bias the results of this study, as those who refused to participate may have characteristics different from those who participated. Furthermore, this study was limited to complete case analysis, thus reducing the effective sample size used for the analysis. DHIS2 data includes individuals who may have tested multiple times, which can lead to the use of wrong or unrepresentative denominators for individuals tested at health facilities. Studies elsewhere report repeat testing ranging from 3 to 13% [40, 41]. We further note that some health facilities, especially privately owned ones, do not report their data to the national DHIS2, further lowering the representativeness of health facility HIV testing data.
The growing demand for accurate information for programme management and policy formulation will require strategies that use all the available information efficiently with little or no additional resource investment. Countries and development partners continue to build and strengthen DHIS2 through capacity building and regular data quality assessments. We applied a simple tool, the HPE methodology, to support efficient use of DHIS2 data in combination with small survey samples to obtain more accurate indicator estimates at district or lower administrative levels. HPEs obtained in this study had reduced standard errors (by 28.9%) compared to survey-based estimates, demonstrating improved accuracy and reliability of the estimates. We therefore recommend use of the methodology to combine DHIS2 data with population survey data to obtain population level indicator estimates for lower administrative levels where the survey samples are too small for accurate indicator estimation.
The 2011 AIDS indicator survey datasets analyzed during the current study are available from https://dhsprogram.com/what-we-do/survey/survey-display-373.cfm. Health facility HIV testing dataset can be accessed from Ministry of Health, Uganda following a reasonable request. Ethics clearance from Uganda National Council for Science and Technology (UNCST) is required to access the data. All data used in this study were identified by participant unique IDs, no additional identifying information was included in the data.
AIC:
AIDS Information Centre
AIDS:
Acquired Immunodeficiency Syndrome
AIS:
Aids Indicator Survey
DHIS2:
District Health Information System Version 2
DHS:
Demographic Health Survey
HIV:
Human Immunodeficiency Virus
HPE:
Hybrid Prevalence Estimate
HREC:
Human Research Ethics Committee
MOH:
Ministry of Health
NPHC:
National Population and Housing Census
PHIA:
Population HIV Impact Assessment
SSA:
Sub-Saharan Africa
TASO:
The AIDS Support Organization
UAIS:
Uganda AIDS Indicator Survey
UNAIDS:
The Joint United Nations Program on HIV/AIDS
UNCST:
Uganda National Council for Science and Technology
UNAIDS/WHO Working Group on Global HIV/AIDS and STI Surveillance. Monitoring HIV impact using Population-Based Surveys. 2015.
McGovern ME, Marra G, Radice R, Canning D, Newell M-L, Barnighausen T. Adjusting HIV prevalence estimates for non-participation: an application to demographic surveillance. J Int AIDS Soc. 2015;18(1):19954.
Marston M, Harriss K, Slaymaker E. Non-response bias in estimates of HIV prevalence due to the mobility of absentees in national population-based surveys: a study of nine national surveys. Sex Transm Infect. 2008;84(Suppl 1):71–8.
Vinod M, Hong R, Khan S, Gu Y. Evaluating HIV Estimates from National Population-Based Surveys for Bias resulting from non-Reponse. DHS analytical studies no. 12. Calverton, Maryland: Macro International Inc.; 2008.
Chan M, Kazatchkine M, Lob-levyt J, Obaid T, Schweizer J, Veneman A, et al. Meeting the demand for results and accountability : a call for action on health data from eight Global Health agencies. PLoS Med. 2010;7(1):5–8.
UNAIDS. ENDING AIDS: Progress Towards the 90–90-90 Targets. Global Aids Update. 2017. Available from: http://www.unaids.org/sites/default/files/media_asset/Global_AIDS_update_2017_en.pdf.
Rice B, Boulle A, Baral S, Egger M, Mee P, Fearon E, et al. Strengthening Routine Data Systems to Track the HIV Epidemic and Guide the Response in Sub-Saharan Africa. JMIR Public Heal Surveill. 2018;4(2):e36.
Sheng B, Marsh K, Slavkovic AB, Gregson S, Eaton JW, Bao L. Statistical models for incorporating data from routine HIV testing of pregnant women at antenatal clinics into HIV/AIDS epidemic estimates. AIDS. 2017;31(Suppl 1):S87–94.
Kiberu VM, Matovu JK, Makumbi F, Kyozira C, Mukooyo E, Wanyenze RK. Strengthening district-based health reporting through the district health management information software system: the Ugandan experience. BMC Med Inform Decis Mak. 2014;14(1):40.
Hedt BL, Pagano M. Health indicators: eliminating bias from convenience sampling estimators. Stat Med. 2011;30(5):560–8.
Jeffery C, Pagano M, Hemingway J, Valadez JJ. Hybrid prevalence estimation : method to improve intervention coverage estimations. PNAS. 2018;115(51):13063–8.
Avenir Health. Spectrum Manual: Spectrum System of Policy Models. Available from: https://www.avenirhealth.org/Download/Spectrum/Manuals/AIMManualEnglish.pdf.
Larmarange J, Bendaud V. HIV estimates at second subnational level from national population-based surveys. AIDS. 2014;28(Suppl 4):S469–76.
UNAIDS. Developing Subnational Estimates of HIV Prevalence and the Number of People Living with HIV. 2014.
Wilson KC, Mhangara M, Dzangare J, Eaton JW, Hallett TB, Mugurungi O, et al. Does nonlocal women's attendance at antenatal clinics distort HIV prevalence surveillance estimates in pregnant women in Zimbabwe? AIDS. 2017;31(Suppl 1):S95–102.
Zaba BW, Carpenter LM, Boerma JT, Gregson S, Nakiyingi J, Urassa M. Adjusting ante-natal clinic data for improved estimates of HIV prevalence among women in sub-Saharan Africa. AIDS. 2000;14(17):2741–50.
Manda S, Masenyetse L, Cai B, Meyer R. Mapping HIV prevalence using population and antenatal sentinel-based HIV surveys: a multi-stage approach. Popul Health Metrics. 2015;13:22.
Gregson S, Terceiria N, Kakowa M, Mason PR, Anderson RM, Chandiwana SKCM. Study of bias in antenatal clinic HIV-1 surveillance data in a high contraceptive prevalence population in sub-Saharan Africa. AIDS. 2002;16(4):643–52.
Ministry of Health and ICF International. Uganda AIDS Indicator Survey (AIS) 2011. Kampala, Uganda and Rockville, Maryland, USA; 2012. Available from: http://health.go.ug/docs/UAIS_2011_REPORT.pdf.
Uganda Bureau of Statistics. 2002 Uganda Population and Housing Census Administrative Report 2007.
Carle AC. Fitting multilevel models in complex survey data with design weights: recommendations. BMC Med Res Methodol. 2009;9(1):1–13.
StataCorp. Stata Statistical Software: Release 15. College Station, TX: StataCorp LLC; 2017. Available from: https://www.stata.com/.
Ministry of Health, Kampala, Uganda. Electronic Health Management Information System. Available from: http://www.health.go.ug/oldsite/node/76.
World Health Organization. Consolidated guidelines on person-centred HIV patient monitoring and case surveillance. 2017. Available from: http://www.differentiatedcare.org/Portals/0/adam/Content/Y1Jnet-yCkO4DGu8XS9VMg/File/9789241512633-eng.pdf.
Rabe-Hesketh S, Skrondal A. Multilevel modelling of complex survey data. J R Stat Soc Ser A Stat Soc. 2006;169(4):805–27.
Giavarina D. Understanding Bland Altman analysis. Biochemia Medica. 2015;25(2):141–51.
\begin{document}
\title{Computational interpretations of classical reasoning: From the epsilon calculus to stateful programs}
\begin{abstract} The problem of giving a computational meaning to classical reasoning lies at the heart of logic. This article surveys three famous solutions to this problem - the epsilon calculus, modified realizability and the dialectica interpretation - and re-examines them from a modern perspective, with a particular emphasis on connections with algorithms and programming. \end{abstract}
\section{Introduction}
\label{sec-intro}
This paper grew out of two talks I gave on the subject of proof interpretations: The first, at a conference on Hilbert's epsilon calculus at the University of Montpellier in 2015, and the second, at the Humboldt Kolleg workshop on the \emph{mathesis universalis} held in Como in 2017. Both of these events were notable in that they brought together researchers from mathematics, computer science and philosophy, and as a consequence, speakers were faced with the challenge of making their ideas appealing for and comprehensible to each of these groups.
The latter workshop was particularly fascinating in its focus on the \emph{mathesis universalis}: The search for a universal mode of thought, or language, capable of capturing and connecting ideas from all of the sciences. It struck me then that part of my research has been driven, in a certain sense, by the desire to uncover universal characteristics behind the central objects of my own field of study - proof interpretations - and in particular to conceive of a language in which the algorithmic ideas which underlie them can be elegantly expressed. As such, my initial motivation when writing this article was to collect some of my own thoughts in this direction and put them down on paper.
In doing so, however, I realised that an article of this kind could also serve another purpose: To give an accessible and high level overview of a number of proof interpretations which play a central role in the history of logic but are often poorly understood outside of specialist areas of application. Proof interpretations are, by their very nature, highly syntactic objects, and gaining an understanding of how they actually \emph{work} when applied to a concrete proof tends to require a certain amount of hands-on experience. But perhaps a simple and informal case study tackled in parallel by several proof interpretations may form an accessible and insightful introduction to these techniques which would complement the many excellent textbooks on the subject?
I settled on an article which combines both of these goals. In the first main section I give an overview of three very well-known computational interpretations of classical logic: Hilbert's epsilon calculus, Kreisel's modified realizability together with the A translation, and finally G\"{o}del's Dialectica interpretation. My tactic is to suppress many of the complicated definitions and to focus instead on a very simple example - the drinkers paradox - and sketch how each interpretation would deal with it in turn. In doing so I am able to highlight some of the key features which characterise each of the interpretations but are often invisible until one has acquired a working knowledge of how they are applied in practice.
In the second part, I turn my attention to the relationship between the interpretations and the core algorithmic ideas which underlie them. I begin with a general discussion on proof interpretations and the roles they play in modern proof theory, then focus on two ideas which feature in current research on program extraction: learning procedures and stateful programs. Both of these have been used by myself and others to characterise algorithms connected to classical reasoning, and together offer an illustration of how traditional proof theoretic techniques can be reinterpreted in a modern setting. Here I focus primarily on the Dialectica interpretation, and continue to use the drinkers paradox as a running example, in order to facilitate direct comparison with the earlier section.
\subsection*{Prerequisites}
I have sought to capture the spirit of the \emph{mathesis universalis} in another way, by making this article accessible to as general an audience as possible. Nevertheless, I assume the reader is acquainted with first-order logic and formal reasoning, and I expect a passing familiarity with the typed lambda calculus will make what follows a lot easier to read. Just in case, I mention a few crucial things here. The \emph{finite types} are defined inductively by the following clauses:
\begin{itemize}
\item $\NN$ is a type;
\item if $\rho$ and $\tau$ are types, $\rho\to\tau$ is the type of functions from $\rho$ to $\tau$.
\end{itemize}
Often it is convenient to talk about product types $\rho\times \tau$ in addition. Functions which take functions as an argument are called \emph{functionals}.
System T is the well-known calculus of primitive recursive functionals in all finite types. Its terms consist of variables $x^\rho,y^\rho,z^\rho,\ldots$ for each type and symbols for zero $0:\NN$ and successor $s:\NN\to\NN$; new terms may be constructed via lambda abstraction $\lambda x.t$ and application $t(s)$; and finally, for each type there is a recursor $R_\rho$, which allows the definition of primitive recursive functionals. Further details on System T can be found in e.g. \cite{AvFef(1998.0)}.
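At type $\NN$ the recursor is just ordinary primitive recursion, but the step function may return a \emph{function}, and this is what gives System T its strength. The following Python sketch (my own encoding for illustration, not part of System T itself) captures the behaviour of $R$:

```python
def R(base, step):
    """Recursor: R(base, step)(n) unfolds to step(n-1, step(n-2, ... step(0, base)))."""
    def rec(n):
        return base if n == 0 else step(n - 1, rec(n - 1))
    return rec

# Addition as a recursively defined functional: add(m)(n) = m + n.
add = lambda m: R(m, lambda _, prev: prev + 1)

# Recursion at higher type: iterate(f)(n) is the n-fold composition of f.
iterate = lambda f: R(lambda x: x, lambda _, g: lambda x: f(g(x)))
```

Here `add(3)(4)` yields `7`, and `iterate(lambda x: 2 * x)(5)(1)` yields `32`: the step function of `iterate` returns a function, which is precisely what recursion at higher types permits.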
\subsection*{A note on terminology}
Proof interpretations are often inconsistently named and confused with one another. In this paper, I will use the term \emph{functional interpretation} to denote any proof interpretation which maps proofs to functionals of higher-type: Thus both realizability and the Dialectica interpretation are functional interpretations. The latter is also referred to as G\"{o}del's functional interpretation, and consequently often as just `the functional interpretation'. I am as guilty as anyone for propagating such confusion elsewhere, but here, since I discuss a number of proof interpretations, I will rigidly stick to the name \emph{Dialectica}.
\section{The drinkers paradox}
\label{sec-DP}
The running example throughout this paper will be a simple theorem of classical predicate logic, commonly known as the \emph{drinkers paradox}. This nickname apparently originates with Smullyan \cite{Smullyan(1978.0)}, and refers to the following popular formulation of the theorem in natural language:
\begin{quote} In any pub, there is someone that, if they are drinking, then everyone in the pub is drinking. \end{quote}
Of course, in pure logical terms, the drinkers paradox is nothing more than the simple first-order formula
\begin{equation*}\exists n(P(n)\to \forall m P(m))\end{equation*}
where here, since we will primarily work in the setting of classical arithmetic, $P$ is assumed to be some decidable predicate over the natural numbers $\NN$. In fact, to make things slightly simpler for the functional interpretations (and more interesting for the epsilon calculus) we will study the following prenexation of the drinkers paradox:
\begin{equation*} \DP \quad : \quad \exists n\forall m (P(n)\to P(m)). \end{equation*}
which will henceforth be labelled $\DP$.
The drinkers paradox is appealing for the proof theorist because it has a one-line proof in classical logic which doesn't give us any way to actually compute the `canonical drinker' $n$. It goes as follows: Either everyone in the pub is drinking, in which case we can set $n:=0$, or there is at least one person $m$ who is not drinking, in which case we set $n:=m$.
As such, any computational interpretation of classical logic has to have some way of dealing with the drinkers paradox, and as we will see, despite its apparent simplicity, $\DP$ illuminates several key features of the interpretations, which is precisely why it makes such a convenient working example.
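The non-constructivity only bites over an infinite domain: over a finite pub, the case split in the classical proof becomes an executable search. The following sketch (my own illustration, not drawn from the proof-theoretic literature) computes a canonical drinker directly:

```python
def drinker(pub, drinking):
    """Return n in pub such that: if drinking(n), then everyone in pub drinks.

    Mirrors the classical case split: if someone is not drinking, return
    them (the implication P(n) -> P(m) then holds vacuously); otherwise
    everyone is drinking, and any choice, say the first person, works.
    """
    for m in pub:
        if not drinking(m):
            return m
    return pub[0]
```

For example, `drinker([0, 1, 2, 3], lambda m: m != 2)` returns `2`, while `drinker([0, 1, 2, 3], lambda m: True)` returns `0`. Over an infinite pub no such search need terminate, which is why the interpretations below have to work harder.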
Before moving on, a rather compact, semi-formal derivation of $\DP$ in a Hilbert-style calculus is provided in Figure 1 - semi-formal because several of our inferences conflate a number of steps. Though we will refrain from carrying out any formal manipulations on proof trees, this will be used later to provide some insight into how the different techniques act on the proof in order to extract witnesses. Here $\forall$ax and $\forall$r refer to instances of the $\forall$-axiom and rule respectively (and similarly for the existential quantifier), while $\LEM$ denotes an instance of the law of excluded-middle and ctr an instance of contraction. The inference $(\ast)$ combines several steps which will typically not be relevant from a computational point of view.
\begin{figure}
\caption{An informal Hilbert-style derivation of $\DP$}
\label{fig-DP}
\end{figure}
\section{Three famous interpretations}
\label{sec-interpretations}
In the first main part of the article, I give what is intended to be an accessible outline of three of the most famous computational interpretations of classical logic: Hilbert's epsilon calculus, Kreisel's modified realizability and G\"{o}del's Dialectica interpretation. The reader who wants to see a full exposition (and indeed full definitions) of these interpretations is encouraged to consult one of the many references I will provide on the way. My aim here is \emph{not} to give a comprehensive or detailed introduction, but to offer a case study which I hope will not only provide some insight into how the interpretations work in practice, but will hint at the deep connections between each of them.
It is important to point out that by focusing on just three interpretations, I inevitably exclude several significant approaches to program extraction, notably those originating from the French tradition such as Krivine's classical realizability \cite{Krivine(2009.0)}. This is a reflection of the fact that I am somewhat familiar with the former, and much less so with the latter. Moreover, each of the interpretations I mention have a certain historical significance and are well-known outside of theoretical computer science, which I feel also justifies my choice.
\subsection*{A note on precision}
Proof interpretations are formal maps on derivations, and as such, act in a very precise way on our proof of the drinkers paradox. Here, I give somewhat \emph{imprecise} versions of the formal extraction procedure, because I want to avoid superfluous bureaucratic details as much as possible. My emphasis here is on the salient features of the interpretations. A good example of this attitude is my cavalier approach to negative translations, which technically speaking have to be carefully specified and are, for example, sensitive to whether they are embeddings into intuitionistic or minimal logic. For the sake of clarity I skirt such issues here and only mention them in passing. Naturally, the reader curious about these subtleties will find a full explanation in the standard texts, and detailed references will be given here.
\subsection{The epsilon calculus}
Hilbert's epsilon calculus, developed in a series of lectures in the early 1920s, constitutes one of the first attempts at grounding classical first-order theories in a computational setting. Much like the Dialectica interpretation, it was first presented as a technique for proving the consistency of arithmetic, in which it comes hand-in-hand with the corresponding substitution method. For a more detailed introduction to the epsilon calculus itself, together with an extensive list of references, the reader is directed to \cite{AvZach(2002.0)}.
While the epsilon calculus remains a topic of interest in mathematical logic, philosophy and linguistics, concrete applications in mathematics or computer science remain limited in comparison to the interpretations that follow - a phenomenon which we discuss in Section \ref{sec-interlude}. Nevertheless, of the three interpretations we present in this section, the epsilon calculus is perhaps unrivalled in the simplicity and elegance of its main idea: To first bring predicate logic down to the propositional level by replacing all quantifier instances by a piece of syntax called an \emph{epsilon term}, and to then systematically eliminate these terms via a complex backtracking algorithm, thus leaving us in a finitary system in which existential statements are assigned explicit realizers.
Let's look at this idea more closely. Very informally, the epsilon calculus assigns to each predicate $A(x)$ an epsilon term $\epsilon x A(x)$, whose intended interpretation is a choice function which selects a witness for $\exists x A(x)$ whenever the latter is true. The idea is then to \emph{replace} the quantified formula $\exists x A(x)$ with $A(\epsilon x A(x))$, since under this interpretation these two formulas would be equivalent. The universal quantifier is dealt with similarly: bearing in mind that $\forall x A(x)\leftrightarrow \neg \exists x \neg A(x)$ over classical logic, we interpret $\forall x A(x)$ as $A(\epsilon x \neg A(x))$, since $\neg A(\epsilon x \neg A(x))$ holds only if $\neg \forall x A(x)$.
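The intended semantics of $\epsilon$ is directly computable over a finite model. As a hedged sketch (finite domains only; over $\NN$ no such search need terminate, which is the whole difficulty):

```python
def eps(A, domain):
    """Choice function: a witness for 'exists x. A(x)' when one exists,
    otherwise an arbitrary default element of the domain."""
    for x in domain:
        if A(x):
            return x
    return domain[0]  # arbitrary default when there is no witness

domain = list(range(10))
A = lambda x: x * x > 50   # has a witness in the domain (namely 8)
B = lambda x: x > 100      # has no witness in the domain
```

On this model, $\exists x A(x)$ is indeed equivalent to $A(\epsilon x A(x))$: `A(eps(A, domain))` agrees with `any(A(x) for x in domain)`, and likewise for `B`, where both sides are false.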
We now want to transform proofs in predicate logic to proofs in the epsilon calculus. So what happens if we replace all instances of quantifiers in a proof with the corresponding epsilon term? It turns out that the quantifier rules are trivially eliminated with epsilon terms in place of quantifiers: If
\begin{prooftree} \AxiomC{$\vdots$} \noLine \UnaryInfC{$A(x)\to B$} \UnaryInfC{$\exists x A(x)\to B$} \end{prooftree}
occurs in our proof then we simply replace the free variable $x$ with $\epsilon x A(x)$ and we get a derivation
\begin{prooftree} \AxiomC{$\vdots$} \noLine \UnaryInfC{$A(\epsilon x A(x))\to B$.} \end{prooftree}
On the other hand, if we have an instance of the quantifier \emph{axiom}
\begin{equation*}A(t)\to \exists x A(x)\end{equation*}
we now need to introduce a new axiom which governs the corresponding epsilon term, namely:
\begin{equation*}(\ast) \ \ \ A(t)\to A(\epsilon x A(x)).\end{equation*}
Axioms of the form $(\ast)$ are called \emph{critical axioms}. In simple terms, the epsilon calculus is the quantifier-free calculus we obtain by removing quantifiers and their axioms and rules, and adding epsilon terms together with the critical axioms. Note that for first-order theories like arithmetic things are already a bit more complicated, but for now we have enough structure to give an epsilon-style proof of the drinkers paradox!
Referring to our derivation in Figure 1, let us use the following abbreviations:
\begin{equation*} \begin{aligned} \epsilon_k&:=\epsilon k \neg P(k) \\ \epsilon_m[n]&:=\epsilon m \neg (P(n)\to P(m)) \\ \epsilon_n&:=\epsilon n (P(n)\to P(\epsilon_m[n])) \end{aligned} \end{equation*}
whose meaning in the epsilon calculus is indicated below:
\begin{equation*} \begin{aligned} P(\epsilon_k)&\leftrightarrow \exists k \neg P(k)\\ (P(n)\to P(\epsilon_m[n]))&\leftrightarrow \forall m(P(n)\to P(m))\\ P(\epsilon_n)\to P(\epsilon_m[\epsilon_n])&\leftrightarrow \exists n\forall m(P(n)\to P(m)) \end{aligned} \end{equation*}
Note that the final formula is the drinkers paradox itself.
Figure 2 shows the translation of our first-order derivation of $\DP$ into one in the epsilon calculus. The translation is quite straightforward, and uses nothing more than the principles sketched above: For each quantifier rule we simply replace the variable in question by the relevant epsilon term (and the rule vanishes), whereas each quantifier axiom is interpreted by the corresponding critical axiom, which is made explicit in Figure 2. The proof is shorter due to the elimination of the quantifier rules, and the instance of $\LEM$ now applies to the decidable predicate $P(\epsilon_k)$, and so is now computationally benign.
\begin{figure}
\caption{An informal epsilon-style derivation of $\DP$}
\label{fig-eps}
\end{figure}
So far so good: We have transformed a proof in classical predicate logic to a quantifier-free version in which the main logical content is now encoded in the critical axioms. Can we get rid of these in some way in order to give the proof a computational interpretation?
This is the role played by epsilon substitution, which forms the core of the epsilon calculus technique. In short: Any first-order proof can only use finitely many instances of the critical axioms. Therefore, if for all relevant formulas $A(x)$ which feature in the proof, we can find a \emph{concrete approximation} for the epsilon term $\epsilon x A(x)$, which rather than satisfying \emph{all} critical axioms $A(t)\to A(\epsilon x A(x))$, satisfy just those which are used in the proof, then we have a way of finding explicit witnesses for existential statements. The epsilon substitution method is an algorithm for eliminating critical formulas in this way.
It goes as follows: We first assign all relevant epsilon terms the canonical value $0$. We then examine our finite list of critical formulas until we find one which fails. This would mean that there is some formula $A$ and term $t$ such that $A(t)$ is true but $A(\epsilon x A(x))$ false under our current assignment. But we can now use this information to repair our epsilon term, by setting $\epsilon x A(x):=t$, which from now on serves as a witness for $\exists x A(x)$. The process is then repeated, until all critical formulas have been repaired.
This may sound quite straightforward, but epsilon substitution is in fact a complicated business! In the case of arithmetic, several erroneous algorithms were initially proposed, before a correct technique involving transfinite induction up to $\varepsilon_0$ was finally given by Ackermann in \cite{Ackermann(1940.0)} - a modern treatment of which is provided by Moser \cite{Moser(2006.0)}. The difficulty with the substitution method is primarily due to the presence of \emph{nested} epsilon terms, whereby trying to fix one critical axiom can suddenly invalidate several others which were previously considered fixed, leading to a subtle backtracking procedure. Further details of the general algorithm are far beyond the scope of this paper, but we \emph{are} able to provide a simple version for the case of the drinkers paradox!
First we should remark that in our proof, only two of our epsilon terms, $\epsilon_k$ and $\epsilon_n$, represent existential (as opposed to universal) quantifiers, so we only attempt to find approximations for these (we explain this in more detail below). Let us first try $\epsilon_k=\epsilon_n=0$. Then C2 and C3 are trivially satisfied, but not necessarily C1. In this case there are two possibilities: Either we get lucky and $P(0)\to P(\epsilon_m[0])$ holds, in which case we're done, or our guess fails because $P(0)\wedge \neg P(\epsilon_m[0])$. We now learn from our failure and repair the broken epsilon terms, setting $\epsilon_k:=\epsilon_m[0]$. Now C1 holds and C3 remains the same as before, but under our new assignment C2 becomes
\begin{equation*} (P(\epsilon_m[0])\to P(\epsilon_m[\epsilon_m[0]]))\to (P(0)\to P(\epsilon_m[0])) \end{equation*}
which from our assumption $P(0)\wedge \neg P(\epsilon_m[0])$ is now false. So we repair this in turn and set $\epsilon_n:=\epsilon_m[0]$. A quick run through of each critical axiom under the assignment $\epsilon_k=\epsilon_n:=\epsilon_m[0]$ reveals that we're done.
\begin{figure}
\caption{Interpretation of $\DP$ via epsilon calculus}
\end{figure}
The substitution method generalises this strategy of guessing, learning from the failure of our guesses, and repairing. As such, despite its complexity, it offers a beautifully clear computational semantics for classical logic: The building of approximations to non-computational objects via a game of trial and error. This has inspired a number of more modern approaches to program extraction, which we will mention later.
As we already pointed out, there are no critical formulas for $\epsilon_m[n]$. Rather, this plays the role of a function variable, and our witness for $\epsilon_n$ is constructed relative to this variable. Without wanting to go into more detail here, $\epsilon_m[n]$ is a placeholder which represents how the drinkers paradox might be used as a lemma in the proof of another theorem. As such, we do not have an ideal witness for $\exists n$ in the drinkers paradox, but rather an approximate witness relative to $\epsilon_m[n]$. We will see this phenomenon repeat itself throughout this paper, where in each setting there will be a specific structure which plays the role of a `counter argument', which represents the universal quantifier and against which our approximate witness is computed.
We summarise our epsilon-style interpretation of $\DP$ in Figure 3, giving a flowchart representation of the corresponding substitution algorithm.
\subsection{Modified realizability}
\label{sec-interpretations-mr}
Before we move on to the world of realizability, let us consider the following elegant interpretation of $\DP$ as a game between two players, who are traditionally named $\exists$loise and $\forall$belard. $\exists$loise's goal is to establish the truth of $\DP$ by choosing a witness for $\exists n$, whereas her opponent $\forall$belard is tasked with disproving $\DP$ by producing a counter witness for $\forall m$. Fortunately for $\exists$loise, the rules of the game dictate that she can backtrack and change her mind!
So how might the game go? $\exists$loise begins by picking an arbitrary witness for $\exists n$, let's say $n:=0$, thereby claiming that $\forall m(P(0)\to P(m))$ is true. $\forall$belard responds by playing some $m$ with the aim of showing that $\exists$loise's guess was wrong. There are now two possibilities: Either $P(0)\to P(m)$ is true and $\exists$loise was right all along, in which case she wins, or $\forall$belard's challenge was successful and $P(0)\wedge\neg P(m)$ holds. But now $\exists$loise responds by simply changing her mind and playing $n:=m$. Now any further play $m'$ from $\forall$belard is destined to fail since $P(m)\to P(m')$ will always be true!
What we have done is describe a winning strategy for $\exists$loise in a quantifier-game corresponding to $\DP$. That there is a correspondence between classical validity and winning strategies in this sense is well-known and widely researched, dating back to e.g. Novikoff in \cite{Novikoff(1943.0)}, and today game semantics is an important topic in logic. Here it will form a useful sub-theme and will help us characterise the behaviour of certain functionals which arise from classical proofs.
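$\exists$loise's winning strategy is finite-state, and can be written down directly. In the sketch below (an illustration of mine; `abelard` is any function supplying challenges), the backtracking is made explicit, and one can see that it is essentially the same algorithm as the epsilon substitution for $\DP$:

```python
def play(P, abelard):
    """Eloise's strategy for DP: returns her final move n together with
    the outcome of the game, i.e. the truth of P(n) -> P(m) for
    Abelard's final challenge m."""
    n = 0                     # opening move: an arbitrary witness
    m = abelard(n)            # Abelard's challenge
    if P(n) and not P(m):     # the challenge succeeds ...
        n = m                 # ... so Eloise backtracks and plays m
        m = abelard(n)        # any further challenge is doomed: P(n) fails
    return n, ((not P(n)) or P(m))
```

The second component is always `True`: either the opening move survives the challenge, or the backtracked move `n := m` satisfies `not P(n)` and so defends against anything $\forall$belard plays.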
Modified realizability is one of several forms of realizability which arose as a concrete implementation of the Brouwer-Heyting-Kolmogorov (BHK) interpretation of intuitionistic logic. Introduced by Kreisel in \cite{Kreisel(1959.0)}, modified realizability works in a \emph{typed} setting, and allows us to transform proofs in intuitionistic arithmetic to terms in the typed lambda calculus.
As a clean and elegant formulation of the BHK interpretation in the typed setting, variants of modified realizability have proven extremely popular techniques for extracting programs from proofs. In particular, refinements of the interpretation form the theoretical basis for the Minlog system \cite{Minlog}, a proof assistant which automates program extraction and is primarily motivated by the synthesis of verified programs (see e.g. \cite{BergLFS(2015.0),BergMiySch(2016.0)} for examples of this). For a comprehensive account of the interpretation itself, the reader is directed to e.g. \cite[Chapter 5]{Kohlenbach(2008.0)} or \cite[Chapter 7]{SchWai(2011.0)}.
Similarly to the epsilon calculus, modified realizability consists of two components: An interpretation and a soundness proof. The interpretation maps each formula $A$ in Heyting arithmetic to a new formula $\mr{x}{A}$, where $x$ is a potentially empty tuple of variables whose length and type depends on the structure of $A$. The interpretation is quite simple, so we state it in Figure 4.
\begin{figure}
\caption{Modified realizability}
\end{figure}
The soundness proof for modified realizability states that whenever $A$ is provable in Heyting arithmetic, then there is some term $t$ of System T (i.e. the typed lambda calculus of primitive recursive functionals in all finite types) satisfying $\mr{t}{A}$, which can moreover be formally extracted from the proof:
\begin{equation*} \HA\vdash A\mbox{ \ \ implies \ \ } \mbox{System T}\vdash \mr{t}{A} \end{equation*}
So far, what we have described applies only to intuitionistic theories, so we need an additional step to extend the interpretation to classical logic. This turns out to be rather subtle, and usually relies on a combination of the G\"{o}del-Getzen negative translation together with some variant of the so-called Dragalin/Friedman/Leivant trick, also known as the A-translation. A detailed discussion of this technique is beyond the scope of this paper, but we briefly present a variant due to \cite{BergSch(1995.0)} and describe how it acts on $\DP$.
First the negative translation. Negative translations are well-known methods for embedding classical logic in intuitionistic logic. There are several variants of the translation, each of which involves strategically inserting double negations in certain places in a logical formula in such a way that if $A$ is provable in classical logic, then $A^N$ is provable intuitionistically. In particular, we would have
\begin{equation*}\PA\vdash A\mbox{ \ \ implies \ \ } \HA\vdash A^N\end{equation*}
Negative translations are widely used, and the relationship between the many varieties is well understood (see \cite{FerGOli(2012.0)} for a detailed study). We deliberately avoid going into more detail and specifying a translation to use here. We simply state without justification that a suitable negative translation of the drinkers paradox is the following:
\begin{equation*} \DP^N:\equiv \neg\neg\exists n\forall m(P(n)\to \neg\neg P(m)) \end{equation*}
which is provable in intuitionistic (and in fact minimal) logic. Now to the A-translation. The reason we need an intermediate step in addition to the negative translation is that the negative translation alone results in a formula with no computational meaning at all: Since $\bot$ is a prime formula, we have
\begin{equation*} \mr{f}{\neg A}\equiv \forall x(\mr{x}{A}\to \bot) \end{equation*}
i.e. $f$ is just an empty tuple. In order to make realizability `sensitive' to the negative translation, we treat $\bot$ as a new predicate, which in particular has a special realizability interpretation
\begin{equation*} \mr{x}{\bot} \end{equation*}
where $x$ has some predetermined type $\tau$. With this adjustment we would have
\begin{equation*} \mr{f}{\neg A}\equiv \forall x(\mr{x}{A}\to \mr{fx}{\bot}) \end{equation*}
where $fx$ has the non-empty type $\tau$. Now that realizability interacts with negated formulas, we are ready to go! The first step is to examine the negated principle $\DP^N$, and carefully apply the clauses of Figure 4 together with the special interpretation of $\bot$.
We now do this in detail. We first assume that the decidable predicate $P(n)$ can be coded as a prime formula $t(n)=0$ and thus has an empty realizer. So starting with the inner formula it is not too difficult to work out that
\begin{equation*} \begin{aligned} \mr{f}{(P(n)\to \neg\neg P(m))}&\equiv P(n)\to \forall a((P(m)\to \mr{a}{\bot})\to \mr{fa}{\bot}) \end{aligned} \end{equation*}
where $f:\tau\to\tau$. The next step is to treat the quantifiers, and we end up with
\begin{equation*} \begin{aligned} &\mr{e,g}{\exists n\forall m (P(n)\to \neg\neg P(m))}\\ &\equiv \forall m(\mr{gm}{(P(e)\to \neg\neg P(m))})\\ &\equiv \underbrace{\forall m (P(e)\to \forall a((P(m)\to \mr{a}{\bot})\to \mr{gma}{\bot}))}_{Q(e,g)} \end{aligned} \end{equation*}
where for convenience we label this formula $Q(e,g)$ as indicated. Note that the types of these realizers are given by $e:\NN$ and $g:\NN\to\tau\to\tau$. Finally, interpreting the whole formula yields
\begin{equation} \label{mr-main}\begin{aligned} &\mr{\Phi}{\neg\neg\exists n\forall m(P(n)\to \neg\neg P(m))}\\ &\equiv \forall p(\mr{p}{\neg\exists n\forall m (P(n)\to \neg\neg P(m))}\to \mr{\Phi p}{\bot})\\ &\equiv \forall p(\forall e,g(Q(e,g)\to\mr{peg}{\bot})\to \mr{\Phi p}{\bot}) \end{aligned} \end{equation}
where now $p:\sigma$ and $\Phi:\sigma\to\tau$ for $\sigma:=\NN\to (\NN\to\tau\to\tau)\to\tau$.
\begin{figure}
\caption{Modified realizability interpretation of $\DP$}
\end{figure}
Just like we did not give a formal description of the epsilon substitution method, we will not present in detail how a concrete functional $\Phi$ satisfying the above is rigorously extracted from the proof of the negative translation of $\DP$. Rather, we simply present a term which does the trick and carefully explain why. Let's define
\begin{equation*}\Phi p:=p0h_p\mbox{ \ \ \ where \ \ \ }h_p:=\lambda m^\NN,a^\tau\; \left(\mbox{$a$ if $P(0)\to P(m)$ else $pm(\lambda n^\NN,b^\tau.b)$}\right).\end{equation*}
We need to prove that $\Phi$ satisfies $(\ref{mr-main})$ above. In order to do this, we have to simultaneously unwind $(\ref{mr-main})$ together with the definition of $\Phi$. Our goal is to show that $\mr{\Phi p}{\bot}$ whenever $p$ satisfies the premise of $(\ref{mr-main})$. So let's assume the latter, i.e.
\begin{equation}\label{mr-prem}\forall e,g(Q(e,g)\to \mr{peg}{\bot})\end{equation}
and first instantiate $e,g:=0,h_p$, which gives us
\begin{equation*}Q(0,h_p)\to \mr{\underbrace{p0h_p}_{=\Phi p}}{\bot}\end{equation*}
It is now enough to show that $Q(0,h_p)$ holds, since then we have $\mr{\Phi p}{\bot}$ as indicated. Referring back to the definition of $Q(e,g)$ above, this in turn boils down to showing that, for any $m,a$:
\begin{equation}\label{mr-aux} P(0)\to (P(m)\to \mr{a}{\bot})\to \mr{h_p ma}{\bot}.\end{equation}
There are now two cases. In the first, $P(0)\to P(m)$ holds, which means that $h_pma=a$ and so $(\ref{mr-aux})$ becomes
\begin{equation*}P(0)\to (P(m)\to\mr{a}{\bot})\to \mr{a}{\bot}.\end{equation*}
This is now true, since either $P(0)$ is false and it follows trivially or $P(m)$ holds, and so $\mr{a}{\bot}$ follows from the assumption $P(m)\to\mr{a}{\bot}$.
In the second case we have $P(0)\wedge\neg P(m)$, which means that $h_pma=pm(\lambda n,b.b)$ and $(\ref{mr-aux})$ becomes
\begin{equation*} P(0)\to (P(m)\to \mr{a}{\bot})\to \mr{pm(\lambda n,b.b)}{\bot}, \end{equation*}
which appears less simple to validate. But the trick is to now go back to our main assumption (\ref{mr-prem}), this time instantiating $e,g:=m,\lambda n,b.b$. We are now done so long as $Q(m,\lambda n,b.b)$ holds, which written out properly is
\begin{equation*}P(m)\to (P(n)\to \mr{b}{\bot})\to \mr{b}{\bot}.\end{equation*}
But this is now trivially true for any $n,b$ since by assumption we have $\neg P(m)$! Therefore, climbing back up: We have established $\mr{pm(\lambda n,b.b)}{\bot}$ and hence $\mr{h_pma}{\bot}$, which in turn proves $Q(0,h_p)$ and thus $\mr{p0h_p}{\bot}$.
At first glance, the verification of our realizer looks somewhat convoluted, jumping back and forth between our various hypotheses until one by one they have been discharged. However, things become much clearer if we visualise the above argument in terms of the game sketched at the beginning of the section.
Let's look at it again, this time giving each step a game-theoretic reading. In order to verify $\mr{\Phi p}{\bot}$ we set $e,g:=0,h_p$ in $(\ref{mr-prem})$, which can be seen as a first attempt by $\exists$loise to prove $\DP$. In this context, $\forall$belard's job is to disprove $Q(0,h_p)$, which he does by choosing some $m,a$ and hoping that $(\ref{mr-aux})$ fails. Either $P(0)\to P(m)$ is true, in which case $(\ref{mr-aux})$ holds and $\exists$loise wins, or $\forall$belard chose a good counterexample and $P(0)\wedge\neg P(m)$. $\exists$loise now appeals to the premise of $(\ref{mr-prem})$ a second time, setting $e,g:=m,\lambda n,b.b$, and wins unless $Q(m,\lambda n,b.b)$ can be shown false. But under our assumption $\neg P(m)$ this is impossible, and so $\exists$loise has a winning strategy.
What is interesting here is how the winning strategy is encoded by the components of our term $\Phi$: $\exists$loise's moves are arguments for the term $p$, whereas $\forall$belard's move is represented by the internal function abstraction $\lambda m,a$.
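This interaction can even be executed. The following Python sketch (my own illustration, not part of the formal development) transcribes $\Phi$ exactly as defined above, together with a concrete adversary $p$ which, in the spirit of Friedman's trick, `realizes' $\bot$ by returning a witness that survives a bounded search; both the adversary and the search bound are assumptions made purely for this demonstration.

```python
def make_phi(P):
    """The realizer Phi from the text: Phi p = p 0 h_p, where
    h_p m a = a if P(0) -> P(m), and p m (lambda n, b: b) otherwise."""
    def phi(p):
        def h_p(m, a):
            if (not P(0)) or P(m):        # the case where P(0) -> P(m) holds
                return a
            return p(m, lambda n, b: b)   # otherwise retry with the counterexample m
        return p(0, h_p)
    return phi

def adversary(P, bound):
    """An illustrative Abelard strategy (an assumption for this demo):
    given a claimed witness e and defense g, search for m < bound with
    P(e) and not P(m); treat a returned number as the realizer of 'bot'."""
    def p(e, g):
        for m in range(bound):
            if P(e) and not P(m):         # counterexample to P(e) -> P(m)
                return g(m, None)         # the argument a is irrelevant here
        return e                          # e defends against every m < bound
    return p

P = lambda n: n != 3                      # sample predicate: P(0) holds, P(3) fails
witness = make_phi(P)(adversary(P, 10))   # returns 3
```

Here $\exists$loise's first proposal $0$ is refuted by $m=3$, so $h_p$ backtracks and resubmits $3$, which defends vacuously since $P(3)$ fails; accordingly the program returns the witness $3$.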
The reader interested in exploring the connection between realizability, negative translations and games could take \cite{Coquand(1995.0)} as a starting point. A fascinating illustration of this phenomenon in the case of the axiom of choice is provided by \cite{BBC(1998.0)}, which also inspired my own work in \cite{Powell(2015.0)}.
We finish off this section, as before, with a diagrammatic representation of the algorithm which underlies this realizability interpretation of $\DP$. Note that where for the epsilon calculus the epsilon function $\epsilon_m[-]$ played the role of the `counter', here the same thing is represented by the player $\forall$belard, or more precisely, the function abstraction within our term $\Phi$.
\subsection{Dialectica interpretation}
\label{sec-interpretations-dial}
We now come to our final computational interpretation of classical logic: G\"{o}del's functional, or Dialectica interpretation. The name `Dialectica' refers to the journal in which the interpretation was first published in 1958 \cite{Goedel(1958.0)}, though the interpretation had been conceived as early as the 1930s, and had been presented in lectures from the early 1940s. An English translation of the original article can be found in Volume II of the \emph{Collected Works} \cite{Goedel(1990.0)}, which is preceded by an illuminating introduction by Troelstra.
The original purpose of the Dialectica interpretation was to produce a relative consistency proof for Peano arithmetic. The interpretation maps the first-order theory of arithmetic to the primitive recursive functionals in finite types, thereby showing that the consistency of the former follows from that of the latter. The interpretation was soon extended to full classical analysis by Spector, in another groundbreaking article \cite{Spector(1962.0)}, which is also a fascinating read due to the extensive footnotes from Kreisel, who put together and completed the paper following Spector's early death in 1961.
Much like the epsilon calculus, then, the Dialectica interpretation has its origins in Hilbert's program and the problem of consistency, though in contrast it has achieved great success as a tool in modern applied proof theory, being central to the proof mining program initiated by Kreisel in the 1960s and brought to maturity by Kohlenbach from the 1990s.
For a comprehensive introduction to the interpretation the reader is directed to Avigad and Feferman's \emph{Handbook} chapter \cite{AvFef(1998.0)} or the textbook of Kohlenbach \cite{Kohlenbach(2008.0)}, particularly Chapters 8-10. The latter is also the standard reference for \emph{applications} of the Dialectica interpretation in mathematics (and applied proof theory in general).
The basic set up of the interpretation bears a close resemblance to modified realizability, although there are crucial differences between the two. The Dialectica assigns to each formula $A$ of Heyting arithmetic a new formula of the form $\exists x\forall y\dt{A}{x}{y}$, where $\dt{A}{x}{y}$ is quantifier-free and $x$ and $y$ are tuples of variables whose length and type depend on the structure of $A$. The inner part of the interpretation is defined by induction on formulas, which we give in Figure 6 below.
\begin{figure}
\caption{The Dialectica interpretation}
\end{figure}
When one first sees the definition of the Dialectica interpretation, it is immediately clear that it departs from the usual BHK style on which realizability is based, and the interpretation of implication in particular seems rather ad-hoc until one understands where it comes from! The point here is that the Dialectica interpretation acts as a kind of Skolemisation, pulling all the quantifiers to the front of the formula but preserving logical validity, so that
\begin{equation*}A\leftrightarrow \exists x\forall y\dt{A}{x}{y}\end{equation*}
provably in the usual higher-type formulation of classical logic plus a quantifier-free form of choice.
For most of the logical connectives, there is one obvious way of defining the interpretation. However, in the case of implication we have several ways of pulling the quantifiers out to the front, which make use of classical logic to a greater or lesser degree. G\"{o}del chose the option which Skolemises implication in the `least non-constructive way', using only Markov's principle and a weak form of the independence of premise principle. A detailed explanation of all this is given in \cite[Section 2.3]{AvFef(1998.0)} and \cite[pp. 127--129]{Kohlenbach(2008.0)}. I highlight it here because the treatment of implication can often seem mystifying until one sees that it is defined the way it is for a reason!
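To fix ideas: assuming $A$ and $B$ have already been interpreted as $\exists x\forall y\dt{A}{x}{y}$ and $\exists u\forall v\dt{B}{u}{v}$, G\"{o}del's clause for implication is usually stated as
\begin{equation*} A\to B\mbox{ is interpreted as } \exists U,Y\forall x,v(\dt{A}{x}{Yxv}\to \dt{B}{Ux}{v}) \end{equation*}
so that $U$ maps witnesses for $A$ to witnesses for $B$, while $Y$ maps a witness for $A$ together with a counterexample for $B$ back to a counterexample for $A$ (modulo notation, this is the clause of Figure 6).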
As with modified realizability, the Dialectica interpretation comes equipped with a soundness proof: If $A$ is provable in Heyting arithmetic then there is some term $t$ of System T satisfying $\forall y\dt{A}{t}{y}$, and moreover, this term can be extracted from the proof of $A$:
\begin{equation*} \HA\vdash A\mbox{ \ \ implies \ \ } \mbox{System T}\vdash \forall y\dt{A}{t}{y}. \end{equation*}
We are now in a similar situation to the last section: We have a computational interpretation of intuitionistic arithmetic, which we now need to extend in some way to deal with classical logic. It turns out that the same trick works: We just combine the Dialectica interpretation with the negative translation. But crucially, for the Dialectica we do not need an intermediate step corresponding to the A-translation, since negated formulas are already considered computational thanks to the interpretation of implication: We have
\begin{equation} \label{dial-neg} \neg A\mbox{ is interpreted as } \exists g\forall x\dt{A\to \bot}{g}{x}\equiv \exists g\forall x(\dt{A}{x}{gx}\to \bot) \end{equation}
On top of this, the Dialectica allows us to simplify our negative translated formulas considerably. If $A$ is a prime formula then
\begin{equation*} \neg\neg A\mbox{ is interpreted as } \neg\neg A\mbox{ which is intuitionistically equivalent to } A \end{equation*}
Therefore when interpreting a complicated formula we can continually remove inner double negations which apply only to quantifier-free inner parts, which saves us a lot of effort!
The reader may wonder why we didn't go ahead and do this in the case of modified realizability. This is another consequence of not needing to treat $\bot$ as a special symbol: The variant of the $A$ translation we gave uses implicitly that $\DP^N$ is provable in \emph{minimal} logic, in which case we can replace $\bot$ with anything and the proof still goes through. However, the removal of double negations before quantifier-free formulas requires ex-falso quodlibet and hence the simplified form of $\DP^N$ we will consider below cannot be dealt with by the $A$ translation. For more on this rather subtle point see \cite[Chapter 14]{Kohlenbach(2008.0)}.
So let's now focus on how the Dialectica interpretation deals with $\DP$, or more specifically, its negative translation, which we already gave in the previous section as
\begin{equation*}\neg\neg \exists n\forall m (P(n)\to \neg\neg P(m)).\end{equation*}
As discussed above, the first remarkable difference with realizability is that the inner double negation essentially vanishes. Since $P(n)\to \neg\neg P(m)$ is quantifier-free, it can be encoded as a prime formula and we have
\begin{equation*}(P(n)\to \neg\neg P(m))\mbox{ is interpreted as } (P(n)\to \neg\neg P(m))\leftrightarrow (P(n)\to P(m))\end{equation*}
since in Heyting arithmetic $\neg\neg P(m)\leftrightarrow P(m)$. Therefore, looking at how the Dialectica interprets quantifiers and removing our inner double negation in this way, we have that
\begin{equation*}\exists n\forall m(P(n)\to \neg\neg P(m))\mbox{ is interpreted as }\exists n\forall m (P(n)\to P(m))\end{equation*}
Therefore in order to deal with $\DP^N$, we need to interpret a formula of the form $\neg\neg B$ where $B:\equiv \exists n\forall m(P(n)\to P(m))$ is an $\exists\forall$ formula. It's here that the interpretation of implication comes into play.
To interpret the outer negations, let's first look at $\neg B$. Following (\ref{dial-neg}), this is interpreted as
\begin{equation*}\begin{aligned} \exists g\forall n\neg (P(n)\to P(gn)) \end{aligned}\end{equation*}
and therefore negating a second time, the interpretation of $\neg\neg B$ is given by
\begin{equation*} \exists \Phi\forall g \neg\neg (P(\Phi g)\to P(g(\Phi g)))\leftrightarrow \exists \Phi\forall g (P(\Phi g)\to P(g(\Phi g))) \end{equation*}
Putting this all together, we have that the Dialectica interpretation of the negative translation of $\DP$ is, modulo the elimination of inessential double negations, the following:
\begin{equation} \label{dial-int} \exists \Phi\forall g(P(\Phi g)\to P(g(\Phi g))). \end{equation}
Our challenge is to find a functional $\Phi$ which satisfies (\ref{dial-int}). It turns out that such a functional is a lot easier to write down than that for modified realizability, and so this time we want to hint, albeit very informally, at how such a term can be extracted from the semi-formal proof given in Figure 1, as this highlights some important features of the \emph{soundness} of the Dialectica interpretation.
The main idea is to extract our functional $\Phi$ recursively over the structure of the proof. Roughly speaking this goes as follows: There are two main branches of the proof, which are then combined using excluded-middle to yield two potential witnesses, which are reduced to an exact witness via contraction, which computationally speaking induces a case distinction.
Let's first focus on the left-hand branch, which establishes
\begin{equation*}\exists k\neg P(k)\to \exists n\forall m(P(n)\to P(m)).\end{equation*}
The functional interpretation of the negative translation of the above is equivalent to
\begin{equation*}\exists \Psi_1\forall k,g(\neg P(k)\to P(\Psi_1 kg)\to P(g(\Psi_1 kg)))\end{equation*}
which is very easily satisfied by $\Psi_1 kg:=k$ i.e.
\begin{equation*} \forall k,g(\neg P(k)\to P(k)\to P(gk)) \end{equation*}
Now we turn to the right-hand branch, which establishes
\begin{equation*}\forall k P(k)\to \exists n\forall m (P(n)\to P(m))\end{equation*}
\begin{figure}
\caption{An informal extraction of $\Phi$}
\end{figure}
This is interpreted as
\begin{equation*}\exists \Psi_2,\Psi_3\forall g(P(\Psi_3 g)\to P(\Psi_2 g)\to P(g(\Psi_2 g))) \end{equation*}
which is also easily satisfied by $\Psi_2 g=0$ and $\Psi_3 g=g0$ i.e.
\begin{equation*} \forall g(P(g0)\to P(0)\to P(g0)) \end{equation*}
We now mimic the combining of the two branches by setting $k:=g0$ (note that this would formally correspond to several steps). We obtain
\begin{equation*} \forall g(\neg P(g0)\vee P(g0) \to (P(g0)\to P(g(g0)))\vee (P(0)\to P(g0))) \end{equation*}
and so eliminating the now quantifier-free instance of LEM we end up with
\begin{equation*} \forall g((P(g0)\to P(g(g0)))\vee (P(0)\to P(g0))). \end{equation*}
The final instance of contraction is dealt with by carrying out a definition-by-cases: Setting
\begin{equation*}\Phi g:=\mbox{$0$ if $P(0)\to P(g0)$ else $g0$}\end{equation*}
we have
\begin{equation*} \forall g(P(\Phi g)\to P(g(\Phi g))) \end{equation*}
and so we're done. Of course, with a little thought we could have come up with such a program without going through the formal extraction. However, we want to illustrate how the construction of realizers mimics the formal proof, which we sketch via our proof tree in Figure 7. Note that, technically speaking, the application of the negative translation to the proof in Figure 1 would result in a new, bigger proof tree which establishes $\DP^N$, and over which our realizer would be extracted. However, for readability we conflate this heavily in Figure 7, since it's only the main structure we want to highlight.
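Since the extracted program is so simple, the property (\ref{dial-int}) it satisfies can be checked directly. The following Python fragment is just an illustrative transcription; the sample predicate and counter functions are chosen arbitrarily.

```python
def phi(P, g):
    # the extracted term: Phi g = 0 if P(0) -> P(g 0), and g 0 otherwise
    return 0 if ((not P(0)) or P(g(0))) else g(0)

def dial_int(P, g):
    # the interpreted formula: P(Phi g) -> P(g(Phi g))
    n = phi(P, g)
    return (not P(n)) or P(g(n))

P = lambda n: n != 3                                    # sample decidable predicate
counters = [lambda n: 3, lambda n: n + 1, lambda n: 0]  # sample counter functions g
assert all(dial_int(P, g) for g in counters)            # holds for every sample g
```

For $g:=\lambda n.3$ the default value $0$ is refuted and the program falls back to $g0=3$, while for $g:=\lambda n.n+1$ the default already works.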
Figure 8 gives our usual summary of the algorithm underlying the interpretation. In this case it is quite succinct: Our functional $\Phi$ is nothing more than a straightforward case distinction and our `counter' is just a simple function $g$. In this way, our interpretation is neither embellished with nested function calls as in modified realizability, nor with an explicit backtracking algorithm as in the epsilon calculus.
\begin{figure}
\caption{Dialectica interpretation of $\DP$}
\end{figure}
\subsection{Final remarks}
It will hopefully not have escaped the reader that in the simple case of the drinkers paradox, all three interpretations arrive at essentially the same basic algorithm:
\begin{itemize}
\item Try a default value $n:=0$;
\item Query the truth value of $P(0)\to P(m)$ where $m$ is some counter value obtained from the environment;
\item In the case of failure, update $n:=m$.
\end{itemize}
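One round of this shared algorithm can be sketched in runnable form as follows; the packaging as a single Python function taking the counter as a parameter is of course my own.

```python
def learn_witness(P, counter):
    """One round of the common algorithm behind all three interpretations:
    propose the default n = 0, query the counter value m, update on failure."""
    n = 0                        # try a default value
    m = counter(n)               # counter value obtained from the environment
    if P(n) and not P(m):        # the query P(0) -> P(m) fails
        n = m                    # in the case of failure, update n := m
    return n
```

Whatever the counter does, the returned $n$ satisfies $P(n)\to P(\mathrm{counter}(n))$: either the default survived its challenge, or $P(n)$ now fails outright.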
This useful feature of the drinkers paradox allows us to focus on the very different ways in which each interpretation describes the underlying algorithm. Notably, the interaction with a counter value is present in each of the above examples, but encoded using a variety of structures, from a simple function in the case of the Dialectica to an internal abstraction in the case of modified realizability. At this point, it is perhaps worth pausing a moment to discuss these differences and the way in which they have affected how each interpretation has been used since Hilbert's program.
\section{Interlude: A general perspective on proof interpretations}
\label{sec-interlude}
It may seem remarkable that as we slowly approach a century since the emergence of the epsilon calculus, a comprehensive comparison and assessment of the well-established computational interpretations of classical logic is still lacking - though a number of studies in this direction (such as \cite{Trifonov(2011.0)}) have been produced.
Comparing proof interpretations is difficult. For a start, they are acutely sensitive to the way in which they are applied: An unnecessary double negation that would be routinely removed by a human applying a proof interpretation by hand could result in a considerable blow-up in complexity for a program formally extracted by a machine. Thus the Dialectica interpretation as a tool in proof mining is different to the Dialectica interpretation as an extraction mechanism in a proof assistant. Closely related to this is the fact that proof interpretations are in a constant state of flux, undergoing adaptations and refinements in quite specific directions as they become established as tools in contrasting areas of application. So when comparing, for instance, the Dialectica to modified realizability, we have to quite carefully specify which of the many variants we have in mind.
Another problem lies in deciding the \emph{nature} of the comparison we wish to make. For example, it is clear that in terms of their definitions, modified realizability and the Dialectica belong very much to the same family of interpretations (an impression which is made precise in \cite{Oliva(2006.0)}), whereas the epsilon calculus is a different species altogether. Nevertheless, in terms of the programs they produce for $\DP$, it can be argued that the Dialectica has more in common with the epsilon calculus than it does with realizability. Without wanting to read too much into our simple case study, the point is simply that there is more to a proof interpretation than meets the eye, and a similarity in the definitional structure of two interpretations is not necessarily reflected when one examines the programs they produce.
As such, it is difficult to make sweeping claims about the relationship between proof interpretations without a certain amount of qualification. Nevertheless, it is undoubtedly the case that over the years, each of the three interpretations of classical logic mentioned above has developed a reputation, closely tied to the domains in which it primarily features, and moreover possesses certain characteristics which make it appealing for certain applications, and less appealing for others.
Perhaps the most striking feature of Hilbert's epsilon calculus is that it contains, implicitly, an elegant computational explanation of classical reasoning. Classical logic allows us to construct `ideal objects' which are represented by epsilon terms, and the substitution method shows us how to build approximations to these objects via a form of trial and error. In this way, the epsilon calculus can be seen as a direct precursor to much later work such as Coquand's game semantics \cite{Coquand(1995.0)}, Avigad's update procedures \cite{Avigad(2002.0)}, the Aschieri/Berardi learning-based realizability \cite{AscBer(2010.0)} and much more besides, which are all concerned in some way with a computational semantics of classical arithmetic via backtracking, or learning, a topic which we explore in more detail later.
All of this aside, the epsilon calculus plays an active role in structural proof theory, where it has deep links with cut elimination \cite{Mints(2008.0)}, and the variants of the substitution method are still studied, particularly with regards to complexity issues \cite{MosZac(2006.0)}. More recently, the syntactic representation of quantifiers offered by the epsilon terms has been utilised by proof assistants such as Isabelle and HOL \cite{AhrGie(1999.0)}. Epsilon terms also feature in philosophy and linguistics, and the reader interested in this is encouraged to consult the many references provided in \cite{AvZach(2002.0)}.
However, when it comes to concrete applications for the purpose of program extraction in mathematics or computer science, the epsilon calculus has been far superseded by the functional interpretations, which dominate this area for a number of reasons.
One property which the functional interpretations share is that the classical interpretations both factor through a natural intuitionistic counterpart. While the epsilon calculus applies directly to classical logic, the functional interpretations deal with this separately via the negative translation. For many applications of program extraction, particularly those which lean towards formal verification, the underlying proofs are indeed purely intuitionistic, in which case it makes far more sense to work with a direct implementation of the BHK interpretation such as modified realizability.
Another important feature of the functional interpretations is that the programs they extract are expressed as simple and clean lambda terms rather than an intricate transfinite recursion, as in the substitution method (though transfinite recursion is of course present in System T, it is conveniently encoded using the higher type recursors). In particular, lambda terms can be viewed in a very direct sense as functional programs, and are easily translated into real functional languages like Haskell or ML. Therefore when it comes to the formal extraction of functional programs from intuitionistic proofs, variants of modified realizability are an obvious choice, and their success in this area is proven by a wide range of applications, from real number computation \cite{BergMiySch(2016.0)} to the synthesis of SAT solvers \cite{BergLFS(2015.0)} (see the Minlog page at \cite{Minlog} for further examples).
For applications of proof interpretations in mathematics, in the sense of the \emph{proof mining} program \cite{Kohlenbach(2008.0)}, it is the Dialectica interpretation that has the starring role. Since applications of this kind typically deal with classical logic, they would in principle also be suited to the epsilon calculus, and indeed early examples of proof mining made use of epsilon substitution \cite{Kreisel(1952.0)}. However, the fact that the Dialectica interpretation extracts functional programs is also crucial for modern proof mining, due to the direct use of \emph{mathematical properties} of those functionals.
In proof mining, the Dialectica interpretation is typically used in its monotone variant, which technically speaking is a combination of the usual Dialectica interpretation with a bounding relation on functionals known as majorizability. It is precisely this combination of proof interpretation and majorizability which enables the extraction of highly uniform, low-complexity bounds from proofs which use what at first glance appear to be computationally intractable analytical principles such as Heine-Borel compactness. For much more detailed discussion on the role the Dialectica interpretation plays in proof mining, see \cite{Kohlenbach(2018.0)}. The main point to take away here is that the `functional' part of functional interpretations is a crucial part of their success!
In the inevitably simplistic picture I have drawn above, I have deliberately contrasted the algorithmic elegance of the epsilon calculus with the applicability of the functional interpretations. A natural question is whether we can combine these two features in some way. This forms the main narrative of the remainder of the paper.
As we have seen, the functional interpretations deal with classical logic somewhat indirectly via negative translations. While in the case of realizability this can often be given a nice presentation in terms of games, in the case of the Dialectica interpretation in particular, the algorithm which is contained in the normalization of the term extracted from a negated proof is often very difficult to understand, and this is particularly the case once one moves away from simple examples like the drinkers paradox and analyses `real' theorems from mathematics, where complex and abstract forms of recursion start to play a role.
Of the three interpretations, it is the Dialectica which has featured most prominently in my own research. I was drawn to it by its importance to the proof mining program, and the rich variety of mathematical theorems which have been studied using this interpretation, to which I also made a few small contributions.
In my own case studies \cite{OliPow(2015.1),OliPow(2015.0),OliPow(2017.0),Powell(2012.0),Powell(2013.0)} which focused on relatively strong theorems from mathematical analysis and well quasi-order theory, I was surprised by how the operational behaviour of the extracted terms was initially quite difficult to understand, but after carefully analysing them it became clear that they implemented rather clever backtracking algorithms. This led me to start thinking about the relationship between the Dialectica and the epsilon calculus, or more specifically, to ask whether terms extracted by the Dialectica in certain settings could be characterised as epsilon-style procedures.
Although nowadays the epsilon calculus has become something of a specialised topic, in the past it was very much in the minds of those who pioneered applied proof theory. Early case studies by Kreisel were based on the substitution method, which in particular was used to prove soundness of the no-counterexample interpretation \cite{Kreisel(1951.0),Kreisel(1952.0)}, a close relative of the Dialectica interpretation (but see \cite{Kohlenbach(1999.0)}).
Particularly fascinating for me is the possibility that, while working on his groundbreaking extension of the Dialectica to full classical analysis, Spector was already thinking of a new way to deal with the axiom of choice using an epsilon-style algorithm. In his first footnote to \cite{Spector(1962.0)}, Kreisel writes of the draft of the paper he received from Spector before the latter's death:
\begin{quote} \emph{This last half page states that the proof of the G\"{o}del translation of axiom F would use a generalization of Hilbert's substitution method as illustrated in the special case of \S12.1. However Spector's notes do not contain any details, so that it is not quite clear how to reconstruct the proof he had in mind.} \end{quote}
Here axiom F is essentially the negative translation of the axiom of countable choice, which Spector had already interpreted using his schema of bar recursion in all finite types. While we do not know precisely what Spector intended, the idea of trying to replace the complicated forms of recursion which underlie the Dialectica with something more transparent is of great relevance now that proof interpretations are primarily used for practical program extraction as opposed to consistency proofs, and I talk about some more recent research in this direction in Section \ref{sec-dial-learning}.
The search for a clear algorithmic representation of terms extracted by the Dialectica led me to think, more generally, of how one could utilise concepts from the theory of programming languages to help capture what is going on `underneath the bonnet' of the Dialectica. To this end I worked on incorporating the notion of a global state into the interpretation via the state monad, and I discuss stateful programs in a much broader context in Section \ref{sec-dial-state}.
\section{Epsilon style algorithms and the Dialectica interpretation}
\label{sec-dial}
In Section \ref{sec-interpretations} I presented three old and well established computational interpretations of classical logic, by sketching how they act in the simple case of the drinkers paradox. My stress here was on the historical context in which they arose, and on highlighting key features of the interpretations which help explain the roles they play in modern proof theory.
My goal in this section is to focus more closely on the connection between them, and more specifically, to demonstrate how the core idea underlying the epsilon calculus can help us better understand the Dialectica interpretation. As such, I consider two concepts in which epsilon substitution style algorithms can be elegantly phrased, but which are both much more general: Learning algorithms and stateful programs.
I have used each of these to study the Dialectica interpretation, in \cite{Powell(2016.0)} and \cite{Powell(2018.1)} respectively, and these studies will form the main narrative of this section. Nevertheless, learning and programming with state feature much more broadly in modern approaches to program extraction, and I was certainly not the first to utilise them in this way. Therefore my priority here is to explain both in suitably general terms that their application in a broader context can be appreciated.
\subsection{Learning algorithms}
\label{sec-dial-learning}
As we have seen, the notion of learning as a computational semantics of classical logic is already explicitly present in the epsilon calculus, and can actually be seen in each of the interpretations of the drinkers paradox in Section \ref{sec-interpretations}. In fact, over the years this idea has continually resurfaced in a variety of different settings, some of which have already been mentioned (and a detailed survey of which would be an extensive work in its own right).
In this section, I will introduce a specific form of learning phrased in the language of all finite types, which I found useful in clarifying certain aspects of the Dialectica interpretation in \cite{Powell(2016.0)}, characterising extracted programs as epsilon-style procedures. I will briefly sketch how this relates to the drinkers paradox, but here our running example is far less illuminating, and so I will go on to present a Skolem form $\DP^\omega$ of the drinkers paradox, which is solved using a simple kind of learning procedure that appears frequently in the literature.
Computational interpretations of classical logic via learning are particularly meaningful when one focuses on $\forall \exists \forall$-formulas. By a simple adaptation of our argument in Section \ref{sec-interpretations-dial} we see that the combination of the negative translation and Dialectica interpretation carries out the following transformation on such formulas:
\begin{equation*} \forall a\exists x\forall y Q(a,x,y)\mapsto \exists\Phi\forall a,g Q(a,\Phi ag,g(\Phi ag)) \end{equation*}
which happens to also coincide with Kreisel's no-counterexample interpretation for formulas of this type. Here, we can visualise $\Phi ag$ as constructing an approximation to the non-constructive object $x$ in the following precise sense: Rather than demanding some $x$ such that $Q(a,x,y)$ is valid for all $y$, we compute for any given $g$ an $x$ such that $Q(a,x,gx)$ holds.
Such reformulations of $\forall\exists\forall$-theorems are widely studied in proof mining: In the case of convergence they go under the name of `metastability'. Here $x$ is a number, and one is typically concerned with producing a computable bound for the approximation of $x$ i.e. $\forall a,g\exists x\leq \Phi ag\, Q(a,x,gx)$ (see \cite{KohSaf(2014.0)} for an interesting discussion of this and several related concepts). However, $x$ could be a more complicated object, such as a maximal ideal or a choice sequence, and in this case approximations to $x$ are often built using complex forms of higher-order recursion whose normalization is subtle and whose operational meaning can be difficult to intuit, but which can be cleanly and elegantly presented as learning procedures.
So what is a learning procedure? In my own version of this well-known idea \cite{Powell(2016.0)}, I start with the idea of a learning \emph{algorithm}. A learning algorithm is assigned a type $\rho,\tau$, and, roughly speaking, seeks to build approximations to an object of type $\rho$ using building blocks of type $\tau$. Formally, it is given by a tuple $\mathcal{L}=(Q,\xi,\oplus)$ where the components are to be understood as follows:
\begin{itemize}
\item $Q$ is a predicate on $\rho$ which tells us if a suitably `good' approximation has been found;
\item $\xi:\rho\to \tau$ is a function which takes a current approximation $x$ and returns a new building block $\xi(x)$;
\item $\oplus:\rho\times \tau\to \rho$ combines $x$ with a building block $y$ to form a new approximation $x\oplus y$.
\end{itemize}
Given any initial object $x_0$, a learning algorithm triggers a learning \emph{procedure} $x_0,x_1,x_2,\ldots$, which is defined recursively by
\begin{equation*} x_{i+1}=\begin{cases}x_i & \mbox{if $Q(x_i)$}\\ x_i\oplus\xi(x_i) & \mbox{otherwise}\end{cases} \end{equation*}
The basic idea is very simple. At each stage in the procedure we check whether or not $x_i$ is a good approximation. If it is, we leave it unchanged, if not, we assume that by its failure we have learned a new piece of information $\xi(x_i)$, which we can use to update the approximation $x_{i+1}=x_i\oplus\xi(x_i)$. We are typically interested in learning algorithms whose learning procedures terminate at some point, by which we mean there is some $k$ such that $Q(x_k)$ holds. In this case we say that $x_k$ is the \emph{limit} of the procedure and write $\lim\lp{x}:=x_k$.
The Dialectica interpretation of the drinkers paradox can be solved by an extremely simple learning algorithm of length at most two: Given our counter function $g$ we define $\mathcal{L}$ by
\begin{equation*} Q(x)=P(x)\to P(gx) \ \ \ \ \ \ \xi=g \ \ \ \ \ \ x\oplus y=y \end{equation*}
and it is easy to see that $\Psi g:=\lim\lp{0}$ works. Of course this is essentially the same as the program $$\Phi g=\mbox{$0$ if $P(0)\to P(g0)$ else $g0$},$$ but here the learning that we see in the epsilon calculus is made explicit. For the sake of completeness, we give a summary of the algorithm in Figure 9.
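As a concrete sketch, the generic learning procedure and its drinkers-paradox instance can be transcribed as a short program. The particular predicate `P` and counter function `g` below are illustrative choices of ours, not fixed by the text.

```python
def learning_limit(Q, xi, oplus, x0, max_steps=10_000):
    """Run the learning procedure x0, x1, x2, ... induced by L = (Q, xi, oplus):
    keep updating the approximation until the predicate Q accepts it."""
    x = x0
    for _ in range(max_steps):
        if Q(x):
            return x              # Q(x) holds: x is the limit of the procedure
        x = oplus(x, xi(x))       # learn the new building block xi(x)
    raise RuntimeError("learning procedure did not terminate")

# Drinkers-paradox instance: Q(x) = P(x) -> P(g x), xi = g, x (+) y = y.
P = lambda n: n % 2 == 0          # an illustrative decidable predicate
g = lambda n: n + 1               # an illustrative counter function
Q = lambda x: (not P(x)) or P(g(x))
witness = learning_limit(Q, g, lambda x, y: y, 0)
```

With these choices $P(0)$ holds but $P(g0)$ fails, so the procedure updates once and terminates with the witness $g0$, exactly mirroring the case distinction in $\Phi$.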
\begin{figure}
\caption{Interpretation of $\DP$ via learning procedure}
\end{figure}
Unsurprisingly, learning algorithms in \cite{Powell(2016.0)} were not introduced to help better understand the drinkers paradox, but to give an algorithmic description of how complicated objects such as choice sequences are built, which is more or less the role they play elsewhere in the literature. The main part of my own paper is in fact the study of learning algorithms on infinite sequences, which are rather subtle and involve non-trivial termination arguments.
To give the reader a flavour of a meaningful learning procedure, let's take instead of a single predicate $P$ a sequence $(P_n)$ of decidable predicates, and consider the following sequential form of the drinkers paradox:
\begin{equation*} \DP^\omega \ \ \ \colon \ \ \ \exists f^{\NN\to\NN} \forall n,m(P_n(fn)\to P_n(m)) \end{equation*}
which is provable using classical logic in conjunction with the axiom of countable choice. The Dialectica interpretation of the negative translation of this (before bringing the $\exists f$ out to the front) is given by
\begin{equation*} \forall \omega,\phi\exists f(P_{\omega f}(f(\omega f))\to P_{\omega f}(\phi f)) \end{equation*}
where $\omega,\phi$ are functionals of type $(\NN\to\NN)\to\NN$. In other words, we need to build an approximation to the Skolem function $f$ which works, not for all $n$ and $y$, but just for $\omega f$ and $\phi f$. We can construct such an $f$ in $\omega,\phi$ by taking the limit of the learning algorithm $\mathcal{L}[\lambda i.0]$ for $\mathcal{L}$ of type $\NN\to\NN$, $\NN\times \NN$ given by
\begin{itemize}
\item $Q(f):=P_{\omega f}(f(\omega f))\to P_{\omega f}(\phi f)$
\item $\xi(f):=(\omega f,\phi f)$
\item $f\oplus (n,x):=f[n\mapsto x]$
\end{itemize}
where $f[n\mapsto x]$ is the function $f$ updated with the value $x$ at argument $n$. It's immediately clear that once the underlying learning procedure terminates with some $f$ satisfying $Q(f)$ then we're done, as this is just the definition of a realizer for the functional interpretation of $\DP^\omega$. What is less clear is what the procedure does and why it terminates!
The learning procedure generates a sequence of functions
\begin{equation*}f_0,f_1,f_2,\ldots\end{equation*}
where $f_{i+1}=f_i[n_i\mapsto x_i]$ (we write $(n_i,x_i):=(\omega f_i,\phi f_i)$ for simplicity). Whenever our current attempt $f_i$ fails i.e. $\neg Q(f_i)$ holds, we know in particular that $\neg P_{n_i}(x_i)$, and so we have learned something about the predicate $P_{n_i}$. Updating $f_{i+1}=f_i[n_i\mapsto x_i]$ means that $f_{i+1}$ is now a valid choice sequence at point $n_i$, since $P_{n_i}(x_i)\to P_{n_i}(y)$ for all $y$. Looking at the procedure as a whole, it's clear that $f_{i+1}$ is in fact a valid choice sequence for all of $n_0,\ldots,n_i$, and so our learning procedure can be viewed as building an increasingly accurate approximation to the ideal choice function $f$.
Now we need to impose a condition on $\omega$ and $\phi$, namely that they are \emph{continuous}, which means that they only look at a finite part of their input. This is not an unreasonable assumption, since all computable functionals in particular are continuous, but it nevertheless means that our learning procedure cannot be defined as a term of System T, which is a good indication of the fact that we are going beyond the usual soundness theorem for the Dialectica interpretation, since $\DP^\omega$ is not provable in Peano arithmetic.
For each $n$, the sequence $f_0(n),f_1(n),f_2(n),\ldots$ changes at most once, and thus the functions $f_0,f_1,f_2,\ldots$ tend to some limit. By continuity of $\omega$, this means that $\omega f_0,\omega f_1,\omega f_2,\ldots$ eventually becomes constant, and so at some point $j$ we must have $n_{j+1}=\omega f_{j+1}\in\{n_0,\ldots,n_j\}$ and so in particular $\neg P_{n_{j+1}}(f_{j+1}(n_{j+1}))$, which implies $Q(f_{j+1})$, and hence termination of the procedure. A diagram of the learning procedure as a whole is given in Figure 10.
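A minimal sketch of this learning procedure for $\DP^\omega$, storing the finite approximation to $f$ as a dictionary. The functionals `omega` and `phi` and the predicate family `P` below are illustrative assumptions of ours; both functionals inspect only finitely many points of their argument, so they are continuous in the required sense.

```python
def dp_omega_limit(omega, phi, P, max_steps=1000):
    """Learning procedure for the Dialectica interpretation of DP^omega:
    build a finite approximation f (unlisted points default to 0) such that
    Q(f), i.e. P(n, f(n)) -> P(n, x) for n = omega(f), x = phi(f)."""
    approx = {}                                   # f_0 = lambda i. 0
    for _ in range(max_steps):
        f = lambda i, d=dict(approx): d.get(i, 0) # snapshot of the current f_i
        n, x = omega(f), phi(f)                   # xi(f) = (omega f, phi f)
        if (not P(n, f(n))) or P(n, x):           # Q(f) holds: terminate
            return f
        approx[n] = x                             # f := f[n -> x]
    raise RuntimeError("learning procedure did not terminate")

# Illustrative continuous functionals and predicate family (assumptions):
P = lambda n, m: m == 0           # P_n(m) holds iff m = 0
omega = lambda f: 0               # looks at a fixed point only: continuous
phi = lambda f: f(0) + 1
f = dp_omega_limit(omega, phi, P)
```

Here the first attempt fails (since $P_0(f_0(0))$ holds but $P_0(\phi f_0)$ does not), the procedure corrects $f$ at the point $0$, and the second attempt satisfies $Q(f)$ vacuously.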
\begin{figure}
\caption{Interpretation of $\DP^\omega$ via learning procedure}
\end{figure}
The reader will hopefully have noticed that our learning procedure is nothing more than a simple version of epsilon substitution, formulated in the higher-order language of the Dialectica interpretation and used to produce a realizer for the Dialectica interpretation of a choice principle! We imagine the function $f$ in $\DP^\omega$ as finding some element such that $\neg P_n(m)$ holds, if it exists, and as such identify $fn$ with $\epsilon m\neg P_n(m)$. Our learning procedure starts by setting $fn=0$ for all $n$, and then, via a process of trial and error, repeatedly corrects the function until a sufficiently good approximation has been found. The current critical formula is given by $Q(f)$.
The idea of solving $\DP^\omega$ in this way can already be found in \S12.1 of Spector \cite{Spector(1962.0)}, and it is precisely this that he claims to have generalised to solve the functional interpretation of full countable choice. While his proof was never recovered, learning procedures have since emerged in several places, and many works on this topic can be characterised as the study of more complicated versions of this idea.
My own work in this direction has been primarily concerned with the Dialectica interpretation itself and the development of Spector's idea. In \cite{Powell(2016.0)} I study a more general class of learning procedure, and show in particular that Spector's bar recursion can be seen as a kind of `forgetful' learning procedure. I extend this idea in \cite{Powell(2018.0)}, exploiting the connection established in \cite{Powell(2014.0)} between bar recursion and Berger's open recursion \cite{Berger(2004.0)}, and devise a learning procedure which solves the Dialectica interpretation of a simple form of Zorn's lemma.
However, the relationship between classical logic and learning has also been developed by other authors in different contexts. To give a few examples: In \cite{BBC(1998.0)}, a variation of the algorithm above is captured in a realizability setting via a form of recursion in higher-types now known as the BBC functional, which has since been studied in greater depth in \cite{Berger(2004.0)}. In \cite{Avigad(2002.0)}, learning procedures are called \emph{update procedures}, and are used to prove the $1$-consistency of arithmetic.
Finally, and perhaps most importantly, in \cite{AscBer(2010.0)} and later works by these authors (see in particular \cite{Aschieri(2011.1),Aschieri(2011.0)}), a special form of realizability is developed which is based entirely on the notion of learning. Here, learning is captured through a `state of knowledge', which contains a finitary approximation to a Skolem function. Realizers for existential statements are evaluated relative to this state, and they either succeed, or discover an error in the current state, in which case the state is updated and the process repeats. In this, more than any of the other works mentioned here, learning is characterised as a stateful program, which brings me to the final part of the paper.
\subsection{Stateful programs}
\label{sec-dial-state}
Learning procedures are illuminating precisely because they capture the computational content of classical reasoning on an algorithmic level, through the \emph{evolution of a state}. The idea of a computation as an action on a state is the foundation of the imperative programming paradigm, on which many major programming languages - including C++, Java and Python - are partially based. This raises a much broader question: Can we utilise ideas from the theory of imperative languages to more clearly and elegantly express the computational meaning of classical reasoning?
This point of view has been explored primarily in the French style of program extraction via e.g. Krivine realizability, in which there is a greater tendency to capture the computational content of classical logic via control operators such as call/cc, rather than through logical techniques like negative translations (see \cite{Miquel(2011.0)} for an interesting discussion of this).
However, my emphasis here is somewhat different: Namely on adapting \emph{traditional} proof interpretations like the Dialectica so that the operational behaviour of extracted programs can be better understood. The benefits of this are great: the Dialectica has undoubtedly the richest catalogue of applications of all proof interpretations, and so a characterisation of extracted terms as stateful programs could lead to fascinating connections between imperative languages and non-constructive principles from everyday mathematics.
This final section discusses ideas in this direction. My main goal will be to indicate how stateful programs can be simulated in a functional setting, and describe how they can be used to enrich the Dialectica interpretation and give it a more imperative flavour. Though this takes as its inspiration both the epsilon substitution and the learning procedures just presented, work of this kind is more general and focused on breaking down proof interpretations on a much lower level. It is also far less developed, with only a few recent works which view the traditional interpretations from this perspective, and as such this section can be seen as an extended conclusion, looking ahead to future research.
I should start by clarifying the difference between imperative and functional programs. Simply put, imperative programs are built using commands which act on some underlying state. A while loop can be seen as a simple program written in an imperative style. Consider the following formulation of the factorial function:
\begin{equation*} \begin{aligned} &i:=1\\ &j:=1\\ &\text{\tt{while} $i< n$ do}\\ & \ \ \ \ \ \ i:=i+1\\ & \ \ \ \ \ \ j:=i\cdot j\\ &\text{\tt{print} $j$} \end{aligned} \end{equation*}
The loop is preceded by some variable assignments which determine an initial state, each iteration of the loop modifies the state until it terminates, and finally the output of the computation is read off from the state.
Another program which fits the imperative paradigm is of course the epsilon substitution method, where one imagines epsilon terms as global variables: Our initial state assigns to each epsilon term the value $0$, and each stage of the method updates the state by correcting an epsilon term until a suitable approximation is found. The closely related learning procedures of the last section can be seen in a similar way.
Programs extracted using the functional interpretations are, however, primarily expressed in some abstract functional language. In functional languages, the emphasis is on \emph{what} a program should do, rather than \emph{how} it does it. Programs are written as simple recursive equations and the concept of global state is not naturally present. In the functional style, the factorial function would be specified as just
\begin{equation*} f(0)=1 \ \ \ \ \ \ \ \ f(n+1)=(n+1)\cdot f(n) \end{equation*}
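For comparison, both versions of the factorial function can be transcribed directly (a sketch; the function names are ours):

```python
def fact_imperative(n):
    """The while-loop program above, read as commands acting on a state (i, j)."""
    i, j = 1, 1
    while i < n:
        i += 1
        j *= i
    return j

def fact_functional(n):
    """The recursive specification: f(0) = 1, f(n+1) = (n+1) * f(n)."""
    return 1 if n == 0 else n * fact_functional(n - 1)
```

The first describes *how* the state evolves step by step; the second only says *what* the function is.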
The use of simple higher-order functional languages like System T is a central feature of the functional interpretations, and as highlighted in Section \ref{sec-interlude}, a key component of their success. Nevertheless, there \emph{is} a natural way of incorporating imperative features into functional calculi: The use of a \emph{monad}, a structure familiar to any functional programmer. A full technical definition of a monad and its role in programming is far beyond the scope of this paper (see e.g. \cite{Moggi(1991.0),Wadler(1995.0)}), but we will give a quick sketch of the \emph{state monad}, which is of relevance in this section.
Suppose we are working in a simple functional calculus like System T and we want to capture some overriding global state which keeps track of certain aspects of the computation. To this end, we introduce to our calculus a new type $S$ of states, together with a mapping $T$ on types defined by
\begin{equation*}TX:=S\to X\times S.\end{equation*}
This is the state monad. Monads come equipped with two operators: A \emph{unit} of type $X\to TX$ and a \emph{bind} of type $TX\to (X\to TY)\to TY$. For the state monad these are given by the maps
\begin{equation*} \begin{aligned} \textrm{unit}(x)&:=\lambda s. (x,s)\\ \textrm{bind}(a)(b)&:=\lambda s. bxs'\mbox{ \ \ \ where $(x,s')=as$} \end{aligned} \end{equation*}
In our enriched language, base types $X$ are interpreted as monadic types $TX=S\to X\times S$, while function types $X\to Y$ are interpreted as objects of type $X\to TY=X\to S\to Y\times S$. The unit map specifies a neutral translation from terms of plain type to the corresponding monadic type: For the state monad, neutrality means that the state remains unchanged. The bind map allows us to apply monadic functions to monadic arguments, as we will see below.
In the pure functional world, a term of ground type $X$ is just a program which evaluates to some value of that type. Under the state monad, it becomes a term which takes an initial state $s_1$ and returns a pair $(u,s_2)$ consisting of a value of type $X$ and a final state $s_2$. Similarly, in the pure world, a term of type $X\to Y$ is a function which takes an input $x$ and returns an output $y$. Under the state monad, it becomes a function which takes an input $x$ together with an initial state $s_1$ and returns an output-final state pair $(y,s_2)$.
To emphasise this point, consider our two versions of the factorial function. The purely functional one has type $\NN\to\NN$: it takes as input a number $n$ and evaluates to $n!$. The imperative version, however, acts on an underlying state: It takes as input the number $n$ together with the initial state $(i,j):=(1,1)$ and returns the output $n!$ together with the final state $(i,j):=(n,n!)$. In this sense it has type $\NN\to S\to \NN\times S$.
The bind operator is responsible for function application on the monadic level: It takes a monadic argument $a:TX$ together with a monadic function $b:X\to TY$ and returns a monadic output $\textrm{bind}(a)(b):TY$. This output mimics the following procedure: We take our initial state $s_1$ and plug it into our argument $a$ to obtain a pair $(u,s_2)$ of type $X\times S$. We treat $s_2$ as an intermediate state in the computation, and plug both it and the value $u$ into the monadic function $b$ to obtain a pair $(v,s_3)$ of type $Y\times S$.
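A minimal sketch of the state monad in this functions-on-states representation. The `tick` operation at the end is an illustrative assumption of ours, not part of the text.

```python
# A monadic value of type T X = S -> X * S is modelled as a function
# taking a state and returning a (value, new_state) pair.

def unit(x):
    """Neutral embedding X -> T X: return x and leave the state unchanged."""
    return lambda s: (x, s)

def bind(a, b):
    """Application on the monadic level, T X -> (X -> T Y) -> T Y:
    run a on the initial state, then feed its value and the intermediate
    state into the monadic function b."""
    def run(s1):
        u, s2 = a(s1)        # (u, s2) = a s1
        return b(u)(s2)      # plug u and the intermediate state into b
    return run

# A tiny stateful operation: pass the value through, increment a counter.
tick = lambda x: lambda s: (x, s + 1)
value, final_state = bind(unit(41), lambda x: bind(tick(x + 1), unit))(0)
```

Running the composite on the initial state `0` threads the state through each step, exactly as described for the bind operator above.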
The purpose of all this is to try to convey to the reader how stateful functions like our while loop can be simulated in a functional environment via the state monad. We can, more generally, translate the finite types as a whole to a corresponding hierarchy of monadic types, and using the unit and bind operations define a translation on pure terms of System T to monadic terms in a variety of ways, depending on what kind of computation we are trying to simulate and what kind of information we aim to capture in our state. Again, this is presented in detail in \cite{Moggi(1991.0),Wadler(1995.0)}.
So how can all this be applied to the Dialectica interpretation? Suppose we have a proof of a simple $\forall\exists $ statement $\forall x\exists y Q(x,y)$: The Dialectica interpretation would normally extract a pure program
\begin{equation*} f:X\to Y \end{equation*}
which takes an input $x\in X$ and returns an output $y\in Y$ satisfying $Q(x,y)$. However, using the state monad together with a suitable translation on terms, we could instead set up the interpretation so that it extracts a state sensitive program
\begin{equation*} b:X\to S\to Y\times S \end{equation*}
which takes an input $x\in X$ and initial state $s_1\in S$, and returns an output pair $bxs_1=(y,s_2)$, where $y$ satisfies $Q(x,y)$ and the final state $s_2$ tells us something about what our function \emph{did} during the computation, the idea being to make explicit in some sense the underlying substitution or learning algorithm.
In a recent paper \cite{Powell(2018.1)}, I explore some of the things which a global state can do in this context, focusing in particular on the program's `interaction with the mathematical environment'. In other words, when performing a computation on some ambient mathematical structure, be it a predicate, a bounded monotone sequence, a colouring of a graph etc., the state monad is used to track the way in which the program queries this environment as it attempts to build an approximation to some object related to that structure e.g. an approximate drinker, an interval of metastability, an approximate monochromatic set etc. In this way, the state monad allows us to smoothly capture the epsilon style procedures which \emph{implicitly} underlie the normalization of extracted functional terms.
Let's try to illustrate this with a very simple example using, for the final time, the drinkers paradox. Note that the $\forall \exists$ formula we have in mind here is the (partial) Dialectica interpretation of $\DP^N$, namely $\forall g\exists x(P(x)\to P(gx))$. Suppose that our mathematical environment consists of a single predicate $P$, and that our states are defined to be finite pieces of information about this predicate i.e. for $s\in S$ we have
\begin{equation*} s=[Q_0(n_0),\ldots,Q_{k-1}(n_{k-1})] \end{equation*}
where $Q_i$ is either $P$ or $\neg P$. Our realizer could then take a state $s$ as an input, and whenever it tests $P$ on a given value it will add this information to the state. For the usual drinkers paradox, we would get the following variation of the program given in Section \ref{sec-dial}, which now has the monadic type $(\NN\to\NN)\to S\to \NN\times S$:
\begin{equation*} \Phi gs:=\begin{cases}\pair{0,s::\neg P(0)} & \mbox{if $\neg P(0)$}\\ \pair{0,s:: P(0):: P(g0)} & \mbox{if $P(g0)$}\\ \pair{g0,s::P(0):: \neg P(g0)} & \mbox{otherwise}\end{cases} \end{equation*}
This breaks our usual case distinction into parts, starting by testing the truth value of $P(0)$. If this is false, we know that $P(0)\to P(g0)$ is true and thus $0$ is a realizer for $\DP$, and we add what we have learned about $P$ to the state. On the other hand, if $P(0)$ is true, the truth value of $P(0)\to P(g0)$ is still undetermined, so we must now test $P(g0)$, leading to two possible outcomes. Unlike the pure program, our stateful program returns a \emph{reason} as to why the witness it has returned works, giving us the information about the mathematical environment which has determined its choice. Alternatively put, our program returns a record of the underlying substitution procedure carried out by the term. As usual we present this as an algorithm in Figure 11.
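A sketch of this stateful realizer, modelling the state as a list of recorded queries (the particular `P` and `g` below are again illustrative choices of ours):

```python
def phi_state(P, g, s):
    """Stateful realizer of monadic type (N -> N) -> S -> N x S (with P fixed):
    return a witness x with P(x) -> P(g x), together with the record of the
    queries made to the predicate P during the computation."""
    if not P(0):
        return 0, s + ["not P(0)"]          # P(0) -> P(g0) holds vacuously
    if P(g(0)):
        return 0, s + ["P(0)", "P(g0)"]     # implication holds outright
    return g(0), s + ["P(0)", "not P(g0)"]  # fall back to the witness g0

# Illustrative predicate and counter function (assumptions, as before):
P = lambda n: n % 2 == 0
g = lambda n: n + 1
witness, state = phi_state(P, g, [])
```

The final state records exactly which branch of the case distinction was taken, i.e. the reason why the returned witness works.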
\begin{figure}
\caption{Interpretation of $\DP$ with state}
\end{figure}
We emphasise that this is just a simple illustration of how the state could be used to enhance the programs we extract - to just record the interactions which have taken place with the mathematical environment. In particular, the realizer makes no use of what is currently written in the state. One could change this so that it first looks to see whether a truth value of e.g. $P(0)$ is already present in $s$, and only then proceed to test $P(0)$. In cases where the cost of evaluating $P$ is high, this would improve the efficiency of the extracted program, allowing us to store previously computed values so that they can be accessed at a later stage in the computation. More sophisticated variants are possible: Where $P$ is not decidable, we can simply make an arbitrary choice for its truth value and make the validity of our output witness dependent on the truth of the state. In \cite{Powell(2018.1)} a very general framework for working with state is presented via an abstract soundness theorem, and a range of applications is discussed, from Herbrand's theorem to program synthesis.
I am certainly not alone in turning towards concepts such as monads to try to better explain what is going on under the syntax of traditional interpretations, although to date there is comparatively little research in this direction. Similar work, also pertaining to the Dialectica interpretation, has been carried out by Pedrot in \cite{Pedrot(2015.0)}. Berger et al. have used the state monad to automatically extract list sorting programs using modified realizability \cite{BergSeiWoo(2014.0)}, while the state monad is utilised by Birolo in \cite{Birolo(2012.0)} to give a more general version of Aschieri-Berardi learning realizability. Interestingly, monads also feature prominently in the construction of the Dialectica categories \cite{dePaiva(1991.0)} - abstract presentations of the interpretation which formed early models of linear logic - though there they play a somewhat different role.
\section{Conclusion}
\label{sec-conc}
I began this paper with an introduction to the epsilon calculus, a proof theoretic technique almost a century old and originating in the foundational crisis of mathematics. I concluded in the present day by sketching how stateful programs are being used to capture algorithmic aspects of proof interpretations and characterise the operational behaviour of extracted programs.
Yet the ideas which underpin the latter are in some way already present in the substitution method, illustrating how certain universal characteristics seem to underlie computational interpretations of classical logic, manifesting themselves in different ways in different settings.
In my short presentations of the three traditional proof interpretations, I hope to have hinted at the deep connections that they share, which are not necessarily apparent in their formal definitions. Some of these connections have been made more explicit in modern research, in particular the utilisation of epsilon substitution-like procedures in the setting of functional interpretations, which formed the main topic of the second part of this paper.
As proof theory moves away from foundational concerns and towards applications in mathematics and computer science, it has become increasingly relevant to utilise new languages and techniques to modernise traditional proof theoretic methods, and it will be fascinating to see how proof interpretations continue to evolve.
\end{document}
\begin{definition}[Definition:Cosine/Definition from Circle/Third Quadrant]
Consider a unit circle $C$ whose center is at the origin of a cartesian plane.
Let $P = \tuple {x, y}$ be the point on $C$ in the third quadrant such that $\theta$ is the angle made by $OP$ with the $x$-axis.
Let $AP$ be the perpendicular from $P$ to the $y$-axis.
Then the '''cosine''' of $\theta$ is defined as the length of $AP$.
Thus by definition of third quadrant, the '''cosine''' of such an angle is negative.
Category:Definitions/Cosine Function
\end{definition}
\begin{document}
\draft \title{ Particle on a polygon: Quantum Mechanics } \author{Rajat Kumar Pradhan\cite{rajat}} \address{Vikram Deb College, Jeypore 764 001, Orissa, India} \author{Sandeep K. Joshi\cite{jos}} \address{Institute of Physics, Sachivalaya Marg, Bhubaneswar 751 005, India}
\maketitle
\begin{abstract}
We study the quantization of a model proposed by Newton to explain
centripetal force namely, that of a particle moving on a regular
polygon. The exact eigenvalues and eigenfunctions are obtained.
The quantum
mechanics of a particle moving on a circle and in an infinite
potential well are derived as limiting cases.
\end{abstract} \begin{multicols}{2} \section{Introduction}
The model of a particle bouncing off a circular constraint on a polygonal path was originally devised by Newton to motivate the concept of centripetal force\cite{chandra} associated with circular motion of the particle. It serves the precise purpose of highlighting the notion of a force continuously operating on a particle on a circle as the limiting case of motion on an inscribed polygon with the corners acting as force centers. However, the intuitive simplicity of the circle case has led to Newton's original model being consistently ignored. Recently the problem has received some attention \cite{ancin,herman}, mostly from the historical perspective. No quantum mechanical treatment of the problem is available in the literature. The quantum mechanics of the polygon model will be of much pedagogical value for introductory courses in quantum theory inasmuch as it illustrates the fact that the symmetry (the $N$-fold discrete rotational symmetry in the present case) of a problem is fully reflected in the wavefunction.
In this work we first reduce the Lagrangian for the system by using the constraint equation and then use a suitable generalized coordinate to bring the Hamiltonian to the free particle form. The resulting eigenvalues are shown to reduce to those of a particle on a circle in the limit as the number of sides $N \rightarrow \infty$ and to those of a particle in an infinite potential well in the $N=2$ case.
\section{The Classical Hamiltonian}
We begin by constructing the full Lagrangian for a particle of unit mass constrained to move on an $N$-sided regular polygon. The $N$-gon can be parametrized by
\begin{equation} \label{rcoord} r=b~sec(\xi-(2m-1)\pi/N),~~~~~\frac{2(m-1)\pi}{N} \leq \xi \leq \frac{2m\pi}{N} \end{equation}
\noindent where $m=1,...N$ labels the sides of the polygon. As shown in Fig. \ref{fig1}, $b=a~cos(\pi/N)$ is the length of the normal from the center of the circumcircle (of radius $a$) to the side. The length of a side is then given by $c=2a~sin(\pi/N)$.
The Lagrangian will be given by
\begin{equation} \label{lagran} L=\frac{1}{2}{\dot r}^2 + \frac{1}{2}r^2{\dot \xi}^2+\lambda (r-b~sec(\xi-(2m-1)\pi/N)) \end{equation}
\noindent where $\lambda$ is a Lagrange multiplier implementing the constraint
\begin{equation} \label{const} \Omega=r-b~sec(\xi-(2m-1)\pi/N)=0 \end{equation}
To reduce the Lagrangian to the only relevant degree of freedom i.e., $\xi$, we employ the constraint directly to bring it to the form
\begin{equation} \label{reduce} L_r=\frac{1}{2}b^2sec^4(\xi-(2m-1)\pi/N){\dot \xi}^2 \end{equation}
The momentum conjugate to the coordinate $\xi$ is
\begin{equation} \label{pxi} P_{\xi}=\partial L_r/\partial {\dot \xi} = b^2sec^4(\xi-(2m-1)\pi/N){\dot \xi}. \end{equation}
Finally, the classical reduced Hamiltonian is given by
\begin{equation} \label{clHamilt} H(\xi)=\frac{P_{\xi}^2}{2b^2}cos^4(\xi-(2m-1)\pi/N) \end{equation}
\section{Quantization}
A closer look at the Hamiltonian Eq. \ref{clHamilt} reveals that if we introduce a new generalized coordinate
\begin{equation} \label{qcoord} q=b~tan(\xi-(2m-1)\pi/N) \end{equation}
\noindent it reduces to the simple free particle form
\begin{equation} \label{Hamilt} H(q)=\frac{1}{2} {\dot q}^2 \end{equation}
\noindent Following the usual Schr\"odinger prescription for quantization, the quantum canonical momentum is $P_q=-i\hbar\partial/\partial q=\hbar k$. The Hamiltonian is therefore the usual free particle Schr\"odinger operator
\begin{equation} \label{schop} {\cal H}= -\frac{\hbar^2}{2}\frac{\partial^2}{\partial q^2}. \end{equation}
A more formal approach to the quantization of such a Hamiltonian Eq. \ref{clHamilt} would be to construct the Laplace-Beltrami operator in terms of the properly constructed quantum momentum operator \cite{rabin}. We emphasize that the same quantum Hamiltonian is obtained by this procedure also.
The free particle solutions are now given by the plane waves
\begin{equation} \label{solut} \psi(q)=A~e^{\pm ikq} \end{equation}
\noindent Reverting back to the $\xi$ variable we have
\begin{equation} \psi_m(\xi)=A~e^{ikb~tan(\xi - (2m-1)\pi/N)} \end{equation}
\noindent for the wavefunction in the $m^{th}$ side. The normalization is obtained from
\begin{equation} \label{norm}
\sum_{m=1}^{N} \int_{2(m-1)\pi/N}^{2m\pi/N} |\psi_m(\xi)|^2 d\xi~=~1 \end{equation}
whence $A=1/\sqrt{2\pi}$.
The boundary condition following from the single-valuedness of the wavefunction
\begin{equation} \label{bcond} \psi_1(\xi=0)~=~\psi_N(\xi=2\pi) \end{equation}
\noindent leads to the quantization condition
\begin{eqnarray} \label{qcond} kb~tan(\pi/N)&=&n\pi \nonumber \\ & or & \nonumber \\ ka~sin(\pi/N)&=&n\pi \end{eqnarray}
\noindent The energy eigenvalues are now given by
\begin{equation} \label{energy} E_n~=~\frac{n^2\pi^2\hbar^2}{2a^2sin^2(\pi/N)}. \end{equation}
\noindent We note that the above quantization conditions are also obtained by considering symmetric and antisymmetric solutions about the midpoint of a side, $\xi=(2m-1)\pi/N$, namely,
$$\psi_{m}^s(\xi)~=~A_s~\cos(kb\tan(\xi-(2m-1)\pi/N))$$
and
$$\psi_{m}^a(\xi)~=~A_a~\sin(kb\tan(\xi-(2m-1)\pi/N))$$
\noindent and imposing the periodic boundary conditions
$$\psi_{m}^{s,a}(\xi)~=~\psi_{m+1}^{s,a}(\xi+2\pi/N).$$
In Figs. \ref{fig2} and \ref{fig3} we show the solutions for a hexagon ($N=6$).
\section{The Circle ($N \rightarrow \infty$) Limit}
The familiar example of a particle moving on a circle\cite{atkins} corresponds to the limit $N \rightarrow \infty$ of the polygon model. In this limit, as each side of the polygon shrinks to a point, it is necessary to redefine the conjugate variables as $\phi = N\xi$ for the angle variable and $l=ka/N$ for the angular momentum in units of $\hbar$. The eigenvalues then become $E_n=n^2\hbar^2/2a^2$ and the eigenfunctions reduce to the well-known solutions $\psi(\phi)~=~A~e^{\pm i n \phi}$ for a free particle on a circle.
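The approach to the circle limit can also be checked numerically (an illustration of ours, in units $\hbar=a=1$): with $l=ka/N$ and $k$ fixed by the quantization condition, $l = n\pi/(N\sin(\pi/N)) \to n$ as $N \to \infty$, so the spectrum approaches $E_n=n^2\hbar^2/2a^2$.

```python
import math

# Sketch of the N -> infinity limit: the rescaled angular momentum
# l = ka/N obtained from the quantization condition ka*sin(pi/N) = n*pi
# tends to the integer n, recovering the particle-on-a-circle spectrum.
def l_of(n, N):
    return n * math.pi / (N * math.sin(math.pi / N))   # l = ka/N

for n in (1, 2, 5):
    prev_err = float("inf")
    for N in (6, 60, 600, 6000):
        err = abs(l_of(n, N) - n)
        assert err < prev_err        # monotone approach to the integer n
        prev_err = err
    assert err < 1e-6 * n            # essentially l = n for large N
```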
\section{The Infinite Potential Well ($N=2$) Case}
In the interesting case of $N=2$, classically the polygon reduces to just a segment of length $2a$ traversed in both directions as discussed very lucidly by Anicin \cite{ancin}. Quantum mechanically this is equivalent to a particle confined in an infinite potential well of width $2a$. Indeed, the eigenvalue equation Eq. \ref{energy} reduces to the familiar eigenvalues
\begin{equation} \label{infwell} E_n=\frac{n^2\pi^2\hbar^2}{8a^2}. \end{equation}
\noindent Note that in the above we have effected the replacement of wavevector $k$ by $k/2$ to take care of the two-fold reflection symmetry which is inherent in the reduction of the polygon to the segment of length $2a$. We remark that since the parameterization Eq. \ref{rcoord} of the polygon does not survive down to the $N=2$ case for obvious reasons, the eigenfunctions cannot be expressed in terms of the variable $q$ or $\xi$. The eigenvalue equation Eq. \ref{energy}, nevertheless, is robust and gives the true eigenvalues.
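A one-line arithmetic check (our own, not from the text) that Eq. \ref{infwell} is the textbook infinite-well spectrum: for a particle of unit mass — as implicit in $H=\dot q^2/2$ — in a well of width $L=2a$, the standard formula $E_n=n^2\pi^2\hbar^2/(2mL^2)$ reduces to Eq. \ref{infwell}.

```python
import math

# For N = 2 the spectrum E_n = n^2 pi^2 hbar^2 / (8 a^2) should agree with
# the textbook infinite square well of width L = 2a (unit mass here, since
# H = q̇^2/2): E_n = n^2 pi^2 hbar^2 / (2 m L^2) with m = 1, L = 2a.
hbar, a, m = 1.0, 1.0, 1.0
L = 2 * a

def polygon_E(n):          # Eq. (infwell), the N = 2 specialization
    return n**2 * math.pi**2 * hbar**2 / (8 * a**2)

def well_E(n):             # standard infinite-well eigenvalues
    return n**2 * math.pi**2 * hbar**2 / (2 * m * L**2)

for n in range(1, 6):
    assert abs(polygon_E(n) - well_E(n)) < 1e-12
```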
\section{Summary}
We have quantized Newton's polygon model and derived the general eigenvalues and eigenfunctions. We have recovered the quantum mechanics of a particle on a circle in the $N \rightarrow \infty$ limit of the polygon. Moreover, the energy eigenvalues for the $N=2$ case have been identified with those of a particle in an infinite potential well. In light of these results, it would be an interesting exercise for students of introductory quantum mechanics courses to look into the polygon analogues of well-studied problems such as the Stark effect for a rotor\cite{pauling} and the Aharonov-Bohm effect for a particle on a circle.
\acknowledgments
We gratefully acknowledge fruitful discussions with V. V. Shenoi, and J. Kamila. One of the authors (RKP) would like to thank the Institute of Physics, Bhubaneswar, for hospitality where a part of the work was carried out.
\begin{references}
\bibitem[\dagger]{rajat}e-mail: [email protected]
\bibitem[\ddagger]{jos}e-mail: [email protected]
\bibitem{chandra} See e.g.\ S. Chandrasekhar, ``Newton's Principia for the
common reader'' (Clarendon Press, Oxford, 1995), pp. 67-91.
\bibitem{ancin} B. A. Anicin, ``The rattle in the cradle'', Am. J.
Phys. {\bf 55}(6), 533-537 (1987).
\bibitem{herman} Herman Erlichson, ``Motive force and centripetal force
in Newton's mechanics'', Am. J. Phys. {\bf 59}(9), 842-849 (1991).
\bibitem{rabin} E. Abdalla and R. Banerjee, ``Quantization of the
multidimensional rotor'', quant-ph/9803021.
\bibitem{atkins} P. W. Atkins, Molecular Quantum Mechanics (Oxford University Press, New York, second edition, 1983), pp. 58-63.
\bibitem{pauling} L. Pauling and W. B. Wilson, Introduction to Quantum Mechanics (McGraw-Hill Pub. Co., Tokyo, 1935).
\end{references} \end{multicols} \begin{figure}
\caption{The regular polygon circumscribed by a circle of radius $a$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{Symmetric ($\psi^s_m(\xi)$) and antisymmetric ($\psi^a_m(\xi)$) ground state ($n=1$) wavefunctions for the case of hexagon ($N=6$).}
\label{fig2}
\end{figure} \begin{figure}
\caption{Symmetric ($\psi^s_m(\xi)$) and antisymmetric ($\psi^a_m(\xi)$) first excited state ($n=2$) wavefunctions for the case of hexagon ($N=6$).}
\label{fig3}
\end{figure}
\end{document}
Ahmed Abbes
Ahmed Abbes (born 24 May 1970) is a Tunisian-French mathematician and a Director of Research at the Institut des Hautes Études Scientifiques (IHÉS). He is known for his work in arithmetic geometry.
Early life and education
Abbes was born on 24 May 1970 in Sfax, Tunisia.[1] Abbes received a bronze medal in 1988 and a silver medal in 1989 at the International Mathematical Olympiad while representing Tunisia.[2] Abbes has both French and Tunisian citizenship.[1]
Abbes studied at the École Normale Supérieure from 1990 to 1994 and then received his doctorate from Paris-Sud University in 1995 under the supervision of Lucien Szpiro, with the thesis Théorie d'Arakelov et courbes modulaires on Arakelov theory and modular curves.[3][4] At Paris-Sud, Michel Raynaud was one of his mentors.[3] Abbes received his habilitation in 2003.[1]
Career
Abbes was a post-doctoral researcher at the Institut des Hautes Études Scientifiques (IHÉS) from 1995 to 1996 and was also a post-doctoral researcher at the Max Planck Institute for Mathematics in 1996.[1] From 1996 to 2007, he was a Chargé de recherche at the CNRS at Paris-Sud University.[1] From 2007 to 2011, he was a CNRS Director of Research (2nd class) at the University of Rennes 1. In 2011, he moved to the IHÉS where he was a CNRS Director of Research (2nd class) until 2013 and where he has been a CNRS Director of Research (1st class) since 2013.[1][5]
Abbes was an editor for Astérisque from 2010 to 2018 and is the co-editor-in-chief of the Tunisian Journal of Mathematics.[1]
Abbes is a Coordinator of the Tunisian Campaign for the Academic and Cultural Boycott of Israel (TACBI).[6][7] He is also a Secretary of the French Association of Academics for Respect for International Law in Palestine (AURDIP).[6][7]
Research
Abbes's research concerns the geometric and cohomological properties of sheaves on manifolds over perfect fields of positive characteristic and p-adic fields.[5] He has worked on a p-adic Simpson correspondence and other topics in p-adic Hodge theory with Michel Gros.[5]
Awards
In 2005, Abbes was awarded the CNRS Bronze Medal.[3] He is a corresponding member of the Tunisian Academy of Sciences, Letters, and Arts.[5]
References
1. "Ahmed Abbes Curriculum Vitae" (PDF). Tunisian Academy of Sciences, Letters, and Arts. Retrieved 21 December 2020.
2. "Ahmed Abbes". International Mathematical Olympiad. Retrieved 21 December 2020.
3. "2005 Médailles de Bronze" (PDF). French National Centre for Scientific Research (in French). December 2005. Retrieved 21 December 2020.
4. Ahmed Abbes at the Mathematics Genealogy Project
5. "Ahmed Abbes, mathématicien". Institut des Hautes Études Scientifiques (in French). Retrieved 21 December 2020.
6. Abbes, Ahmed (1 July 2019). "Ahmed Abbes: Une nouvelle forme de solidarité mutuelle entre la Tunisie et la Palestine". Leaders Tunisie (in French). Retrieved 21 December 2020.
7. Abbes, Ahmed; Falk, Richard A. (22 November 2019). "A letter to Tunisia's new president: Keep fighting for Palestinian rights". Middle East Eye. Retrieved 21 December 2020.
\begin{document}
\typeout{---------------------------- segc.tex ----------------------------}
\title{The Segal Conjecture for Infinite Discrete Groups}
\typeout{----------------------- Abstract ------------------------}
\begin{abstract} We formulate and prove a version of the Segal Conjecture for infinite groups. For finite groups it reduces to the original version. The condition that $G$ is finite is replaced in our setting by the assumption that there exists a finite model for the classifying space $\underline{E}G$ for proper actions. This assumption is satisfied for instance for word hyperbolic groups or cocompact discrete subgroups of Lie groups with finitely many path components. As a consequence we get for such groups $G$ that the zero-th stable cohomotopy of the classifying space $BG$ is isomorphic to the $I$-adic completion of the ring given by the zero-th equivariant stable cohomotopy of $\underline{E}G$ for $I$ the augmentation ideal. \end{abstract}
\typeout{-------------------- Section 0: Introduction --------------------------}
\setcounter{section}{-1} \section{Introduction} \label{sec:Introduction}
We first recall the Segal Conjecture for a finite group $G$. The equivariant stable cohomotopy groups $\pi^n_G(X)$ of a $G$-$CW$-complex are modules over the ring $\pi^0_G = \pi^0_G(\ast)$, which can be identified with the Burnside ring $A(G)$. Here and elsewhere $\ast$ denotes the one-point space. The augmentation homomorphism $A(G) \to {\mathbb Z}$ is the ring homomorphism sending the class of a finite set to its cardinality. The augmentation ideal ${\mathbb I}_G \subseteq A(G)$ is its kernel. Let $\pi^m_G(X)\widehat{_ {{\mathbb I}_G}}$ be the ${\mathbb I}_G$-adic completion $ \invlim{n \to \infty}{\pi^m_G(X)/{\mathbb I}_G^n \cdot \pi^m_G(X)}$ of $\pi^m_G(X)$.
The following result was formulated as a conjecture by Segal and proved by Carlsson~\cite{Carlsson(1984)}.
\begin{theorem}[Segal Conjecture for finite groups] \label{the:Segal_Conjecture_for_finite_groups} For every finite group $G$ and finite $G$-$CW$-complex $X$ there is an isomorphism \[ \pi^m_G(X)\widehat{_ {{\mathbb I}_G}} \xrightarrow{\cong} \pi^m_s(EG \times_G X). \] \end{theorem}
In particular we get for $X = \ast$ and $m = 0$ an isomorphism \[ A(G)\widehat{_ {{\mathbb I}_G}} \xrightarrow{\cong} \pi^0_s(BG). \]
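For a finite group the completed Burnside ring can be made concrete. The following sketch (our own illustration, not part of the text) treats $G={\mathbb Z}/2$: here $A({\mathbb Z}/2)$ has ${\mathbb Z}$-basis $1$ and $t=[{\mathbb Z}/2]$ with $t^2=2t$, the augmentation ideal ${\mathbb I}$ is generated by $t-2$, and $(t-2)^2=-2(t-2)$ forces ${\mathbb I}^n=2^{n-1}{\mathbb I}$, so that $A({\mathbb Z}/2)/{\mathbb I}^n \cong {\mathbb Z}\oplus{\mathbb Z}/2^{n-1}$ and the completion is ${\mathbb Z}\oplus{\mathbb Z}_2$, in accordance with the known computation of $\pi^0_s(B{\mathbb Z}/2)$.

```python
from math import gcd

# A(Z/2) has Z-basis 1 = [Z/2 / (Z/2)] and t = [Z/2 / e] with t^2 = 2t;
# the augmentation sends a + b*t to a + 2b.  Elements are encoded as
# pairs (a, b) meaning a*1 + b*t.

def mult(x, y):
    (a1, b1), (a2, b2) = x, y
    # (a1 + b1 t)(a2 + b2 t) = a1 a2 + (a1 b2 + a2 b1 + 2 b1 b2) t
    return (a1 * a2, a1 * b2 + a2 * b1 + 2 * b1 * b2)

def aug(x):
    a, b = x
    return a + 2 * b

t_minus_2 = (-2, 1)                 # generator of the augmentation ideal I
assert aug(t_minus_2) == 0

# (t - 2)^2 = -2 (t - 2), hence I^n = 2^{n-1} I
sq = mult(t_minus_2, t_minus_2)
assert sq == (-2 * t_minus_2[0], -2 * t_minus_2[1])     # i.e. (4, -2)

# torsion of A/I^n: quotient of Z^2 by the element 2^{n-1}(-2, 1); the gcd
# of its coordinates is 2^{n-1}, so A/I^n = Z + Z/2^{n-1}
for n in range(1, 8):
    gen = (2**(n - 1) * -2, 2**(n - 1) * 1)
    assert gcd(abs(gen[0]), gen[1]) == 2**(n - 1)
```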
The purpose of this paper is to formulate and prove a version of it for infinite (discrete) groups, i.e., we will show
\begin{theorem}[Segal Conjecture for infinite groups] \label{the:Segal_Conjecture_for_infinite_groups} Let $G$ be a (discrete) group. Let $X$ be a finite proper $G$-$CW$-complex. Let $L$ be a proper finite dimensional $G$-$CW$-complex with the property that there is an upper bound on the order of its isotropy groups. Let $f \colon X \to L$ be a $G$-map.
Then the map of pro-${\mathbb Z}$-modules \[ \lambda^m_G(X) \colon \left\{\pi^m_G(X)/{\mathbb I}_G(L)^n \cdot \pi^m_G(X)\right\}_{n \ge 1} \xrightarrow{\cong} \left\{\pi_s^m\left((EG \times_G X)_{(n-1)}\right)\right\}_{n \ge 1} \] defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-${\mathbb Z}$-modules.
In particular we obtain an isomorphism \[
\pi^m_G(X)\widehat{_ {{\mathbb I}_G(L)}} \cong \pi_s^m(EG \times_G X). \] If there is a finite $G$-$CW$-model for $\underline{E}G$, we obtain an isomorphism \[ \pi^m_G(\underline{E}G)\widehat{_ {{\mathbb I}_G(\underline{E}G)}} \cong \pi_s^m(BG). \] \end{theorem}
Here $\underline{E}G$ is the classifying space for proper $G$-actions and $\pi^*_G(X)$ is equivariant stable cohomotopy as defined in~\cite[Section~6]{Lueck(2005r)}. The ideal ${\mathbb I}_G(L)$ is the augmentation ideal in the ring $\pi^0_G(L)$ (see Definition~\ref{def:augmentation_ideal}). We view $\pi^*_G(X)$ as $\pi^0_G(L)$-module by the multiplicative structure on equivariant stable cohomotopy and the map $f$. We denote by $\pi^m_G(X)\widehat{_ {{\mathbb I}_G(L)}}$ its ${\mathbb I}_G(L)$-completion. More explanations will follow in the main body of the text.
In~\cite{Lueck(2005r)} various mutually distinct notions of a Burnside ring of a group $G$ are introduced which all agree with the standard notion for finite $G$. If there is a finite $G$-$CW$-model for $\underline{E}G$, then the homotopy theoretic definition is $A(G) := \pi^0_G(\underline{E}G)$, we define the ideal ${\mathbb I}_G \subseteq A(G)$ to be ${\mathbb I}_G(\underline{E}G)$, and we get in this notation from Theorem~\ref{the:Segal_Conjecture_for_infinite_groups} an isomorphism \[ A(G)\widehat{_{{\mathbb I}_G}} \cong \pi_s^0(BG). \]
We will actually formulate for every equivariant cohomology theory $\mathcal{H}_?^*$ with multiplicative structure a Completion Theorem (see Problem~\ref{pro:formulation_of_the_Completion_Theorem}). It is not expected to be true in all cases. We give a strategy for its proof in Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem}. We show that this applies to equivariant stable cohomotopy, thus proving Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}. It also applies to equivariant topological $K$-theory, where the Completion Theorem for infinite groups has already been proved in~\cite{Lueck-Oliver(2001a)}.
If $G$ is finite, we can take $L = \underline{E}G = \ast$ and then Theorem~\ref{the:Segal_Conjecture_for_infinite_groups} reduces to Theorem~\ref{the:Segal_Conjecture_for_finite_groups}. We will not give a new proof of Theorem~\ref{the:Segal_Conjecture_for_finite_groups}, but use it as input in the proof of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}.
This paper is part of a general program to systematically study equivariant homotopy theory, which is well-established for finite groups and compact Lie groups, for infinite groups and non-compact Lie groups. The motivation comes among other things from the Baum-Connes Conjecture and the Farrell-Jones Conjecture.
\section{Equivariant Cohomology Theories with Multiplicative Structure} \label{sec:Equivariant_Cohomology_Theories_with_Multiplicative_Structures}
We briefly recall the axioms of a (proper) equivariant cohomology $\mathcal{H}_?^*$ theory with values in $R$-modules with multiplicative structure. More details can be found in~\cite{Lueck(2005c)}.
Let $G$ be a (discrete) group. Let $R$ be a commutative ring with unit. A \emph{(proper) $G$-cohomology theory $\mathcal{H}^*_G$ with values in $R$-modules} assigns to any pair of (proper) $G$-$CW$-complexes $(X,A)$ a ${\mathbb Z}$-graded $R$-module $\{\mathcal{H}^n_G(X,A) \mid n \in {\mathbb Z}\}$ such that $G$-homotopy invariance holds and there exist long exact sequences of pairs and long exact Mayer-Vietoris sequences. Often one also requires the disjoint union axiom, which we will not need here since all our disjoint unions will be over finite index sets.
A \emph{multiplicative structure} is given by a collection of $R$-bilinear pairings \begin{eqnarray*} \cup \colon \mathcal{H}^{m}_G(X,A) \otimes_R \mathcal{H}^{n}_G(X,B) & \to & \mathcal{H}^{m+n}_G(X,A\cup B). \end{eqnarray*} This product is required to be graded commutative, to be associative, to have a unit $1 \in \mathcal{H}^0_G(X)$ for every (proper) $G$-$CW$-complex $X$, to be compatible with boundary homomorphisms and to be natural with respect to $G$-maps.
Let $\alpha\colon H \to G$ be a group homomorphism. Given an $H$-space $X$, define the \emph{induction of $X$ with $\alpha$} to be the $G$-space $\operatorname{ind}_{\alpha} X$ which is the quotient of $G \times X$ by the right $H$-action $(g,x) \cdot h := (g\alpha(h),h^{-1} x)$ for $h \in H$ and $(g,x) \in G \times X$. If $\alpha\colon H \to G$ is an inclusion, we also write $\operatorname{ind}_H^G$ instead of $\operatorname{ind}_{\alpha}$.
A \emph{(proper) equivariant cohomology theory $\mathcal{H}_?^*$ with values in $R$-modules} consists of a collection of (proper) $G$-cohomology theories $\mathcal{H}^*_G$ with values in $R$-modules for each group $G$ together with the following so-called \emph{induction structure}: given a group homomorphism $\alpha\colon H \to G$ and a (proper) $H$-$CW$-pair $(X,A)$ there are for each $n \in {\mathbb Z}$ natural homomorphisms \begin{eqnarray} \operatorname{ind}_{\alpha}\colon \mathcal{H}^n_G(\operatorname{ind}_{\alpha}(X,A)) &\to & \mathcal{H}^n_H(X,A)\label{induction_structure} \end{eqnarray} If $\ker(\alpha)$ acts freely on $X$, then $\operatorname{ind}_{\alpha}\colon \mathcal{H}^n_G(\operatorname{ind}_{\alpha}(X,A)) \to \mathcal{H}^n_H(X,A)$ is bijective for all $n \in {\mathbb Z}$. The induction structure is required to be compatible with the boundary homomorphisms, to be functorial in $\alpha$ and to be compatible with inner automorphisms.
Sometimes we will need the following lemma whose elementary proof is analogous to the one in~\cite[Lemma 1.2]{Lueck(2002b)}.
\begin{lemma}\label{lem:calh_G(G/H)_and_calh_H(ast)}
Consider finite subgroups $H,K \subseteq G$ and an element
$g \in G$ with $gHg^{-1} \subseteq K$.
Let $R_{g^{-1}}\colon G/H \to G/K$ be the $G$-map
sending $g'H$ to $g'g^{-1}K$ and
$c(g)\colon H \to K$ be the group homomorphism sending $h$ to $ghg^{-1}$.
Denote by $\operatorname{pr}\colon (\operatorname{ind}_{c(g)\colon H \to K}\ast) \to \ast$ the projection
to the one-point space $\ast$.
Then the following diagram commutes
\[ \xymatrix@!C=9em{
\mathcal{H}_G^n(G/K) \ar[r]^{\mathcal{H}^n_G(R_{g^{-1}})} \ar[d]_{\operatorname{ind}_K^G} & \mathcal{H}_G^n(G/H) \ar[d]^{\operatorname{ind}_H^G}
\\
\mathcal{H}^n_K(\ast) \ar[r]_{\operatorname{ind}_{c(g)} \circ \mathcal{H}_K^n(\operatorname{pr})} & \mathcal{H}^n_H(\ast) } \]
\end{lemma}
Let $\mathcal{H}_?^*$ be a (proper) equivariant cohomology theory. A \emph{multiplicative
structure} on it assigns a multiplicative structure to the associated (proper)
$G$-coho\-mo\-logy theory $\mathcal{H}^*_G$ for every group $G$ such that for each group
homomorphism $\alpha \colon H \to G$ the maps given by the induction structure
of~\eqref{induction_structure} are compatible with the multiplicative structures on
$\mathcal{H}^*_G$ and $\mathcal{H}^*_H$.
\begin{example}{\bf (Equivariant cohomology theories coming from non-equivariant ones).} \label{exa:equivariant_cohomology_theories}
Let $\mathcal{K}^*$ be a (non-equivariant) cohomology theory with multiplicative structure,
for instance singular cohomology or topological $K$-theory. We can assign to it an
equivariant cohomology theory with multiplicative structure $\mathcal{H}_?^*$ in two ways.
Namely, for a group $G$ and a pair of $G$-$CW$-complexes $(X,A)$ we define
$\mathcal{H}^n_G(X,A)$ by $\mathcal{K}^n(G\backslash (X,A))$ or by $\mathcal{K}^n(EG \times_G(X,A))$. \end{example}
\begin{example}{\bf (Equivariant topological $K$-theory).} \label{exa:equivariant_topological_K-theory} In~\cite{Lueck-Oliver(2001a)} equivariant topological $K$-theory is defined for finite proper equivariant $CW$-complexes in terms of equivariant vector bundles. It reduces to the classical notion which appears for instance in~\cite{Atiyah(1967)}. Its relation to equivariant $KK$-theory is explained in~\cite{Phillips(1988)}. This definition is extended to (not necessarily finite) proper equivariant $CW$-complexes in~\cite{Lueck-Oliver(2001a)} in terms of equivariant spectra using $\Gamma$-spaces and yields a proper equivariant cohomology theory $K_?^*$ with multiplicative structure as explained in~\cite[Example~1.7]{Lueck(2005c)}. It has the property that for any finite subgroup $H$ of a group $G$ we have \[ K^n_G(G/H) = K^n_H(\ast) = \left\{ \begin{array}{lll} R_{{\mathbb C}}(H) & & n \text{ even}; \\ \{0\} & & n \text{ odd}, \end{array} \right. \] where $R_{{\mathbb C}}(H)$ denotes the complex representation ring of $H$. \end{example}
\begin{example}{\bf (Equivariant Stable Cohomotopy).} \label{exa:equivariant_cohomotopy} In~\cite[Section~6]{Lueck(2005r)} equivariant stable cohomotopy $\pi_?^*$ is defined for finite proper equivariant $CW$-complexes in terms of maps of sphere bundles associated to equivariant vector bundles. For finite $G$ it reduces to the classical notion. This definition is extended to arbitrary proper $G$-$CW$-complexes by Degrijse-Hausmann-Lueck-Patchkoria-Schwede~\cite{Degrijse-Hausmann-Lueck-Patchkoria-Schwede(2019)}, where a systematic study of equivariant homotopy theory for (not necessarily compact) Lie groups and proper $G$-$CW$-complexes is developed.
Let $H \subseteq G$ be a finite subgroup. Recall that by the induction structure we have $\pi^n_G(G/H) = \pi^n_H(\ast)$. These groups are computed in terms of the splitting due to Segal and tom Dieck (see~\cite[Proposition~2]{Segal(1971)} and~\cite[Theorem~7.7 in Chapter~II on page~154]{Dieck(1987)}) by \[ \pi^n_H = \pi^H_{-n} = \bigoplus_{(K)} \pi^s_{-n}(BW_HK), \] where $\pi_{-n}^s$ denotes (non-equivariant) stable homotopy and $(K)$ runs through the conjugacy classes of subgroups of $H$. In particular we get \[
\begin{array}{lllll} |\pi^n_G(G/H)| & < & \infty & & n \le -1;
\\
\pi^0_G(G/H) & = & A(H); & &
\\
\pi^n_G(G/H) & = & \{0\} & & n \ge 1, \end{array} \] where $A(H)$ is the Burnside ring. \end{example}
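By the splitting above, $\pi^0_G(G/H)=A(H)$ is free abelian of rank the number of conjugacy classes of subgroups of $H$. This rank is easy to compute by brute force for a small group; the following sketch (our own illustration, not from the text) does it for $H=S_3$, giving rank $4$.

```python
from itertools import permutations, product

# Brute-force enumeration of subgroups of S_3 up to conjugacy; by the
# Segal-tom Dieck splitting, the number of classes is the rank of A(S_3).
elems = list(permutations(range(3)))              # S_3 as permutation tuples

def compose(p, q):                                 # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def closure(gens):
    """Smallest subgroup containing the generators (and the identity)."""
    S = set(gens) | {tuple(range(3))}
    while True:
        new = {compose(p, q) for p, q in product(S, S)} - S
        if not new:
            return frozenset(S)
        S |= new

# every subgroup of S_3 is generated by at most two elements
subgroups = {closure(gs) for r in range(3)
             for gs in product(elems, repeat=r)}

def conjugate(g, H):
    ginv = tuple(sorted(range(3), key=lambda i: g[i]))   # inverse of g
    return frozenset(compose(compose(g, h), ginv) for h in H)

classes = set()
for H in subgroups:
    classes.add(frozenset(conjugate(g, H) for g in elems))

assert len(subgroups) == 6      # 1, three <transposition>, A_3, S_3
assert len(classes) == 4        # so A(S_3) has rank 4
```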
\typeout{-------------------- Section 2 ---------------------------------------}
\section{Some Preliminaries about Pro-Modules} \label{sec:Some_Preliminaries_about_Pro-Modules}
It will be crucial to handle pro-systems and pro-isomorphisms and not to pass directly to inverse limits. In this section we fix our notation for handling pro-$R$-modules for a commutative ring with unit $R$. For the definitions in full generality see for instance~\cite[Appendix]{Artin-Mazur(1969)} or~\cite[\S 2]{Atiyah-Segal(1969)}.
For simplicity, all pro-$R$-modules dealt with here will be indexed by the positive integers. We write $\{M_n,\alpha_n\}$ or briefly $\{M_n\}$ for the inverse system \[
M_0 \xleftarrow{\alpha_1} M_1 \xleftarrow{\alpha_2} M_2 \xleftarrow{\alpha_3} M_3 \xleftarrow{\alpha_4} \ldots. \] and also write $\alpha_n^m := \alpha_{m+1} \circ \cdots \circ \alpha_{n}\colon M_n \to M_m$ for $n > m$ and put $\alpha^n_n =\operatorname{id}_{M_n}$. For the purposes here, it will suffice (and greatly simplify the notation) to work with ``strict'' pro-homomorphisms $\{f_n\} \colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$, i.e., a collection of homomorphisms $f_n \colon M_n \to N_n$ for $n \ge 1$ such that $\beta_{n}\circ f_n = f_{n-1}\circ\alpha_{n}$ holds for each $ n\ge 2$. Kernels and cokernels of strict homomorphisms are defined in the obvious way, namely levelwise.
A pro-$R$-module $\{M_n,\alpha_n\}$ will be called \emph{pro-trivial} if for each $m \ge 1$, there is some $n\ge m$ such that $\alpha^m_n = 0$. A strict homomorphism $f\colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$ is a \emph{pro-isomorphism} if and only if $\ker(f)$ and $\operatorname{coker}(f)$ are both pro-trivial, or, equivalently, for each $m\ge 1$ there is some $n\ge m$ such that $\operatorname{im}(\beta_n^m) \subseteq \operatorname{im}(f_m)$ and $\ker(f_n) \subseteq \ker(\alpha_n^m)$. A sequence of strict homomorphisms \[ \{M_n,\alpha_n\} \xrightarrow{\{f_n\}} \{M_n',\alpha_n'\} \xrightarrow{\{g_n\}} \{M_n'',\alpha_n''\} \] will be called \emph{exact} if the sequence of $R$-modules $M_n \xrightarrow{f_n} M_n' \xrightarrow{g_n} M_n''$ is exact for each $n \ge 1$, and it is called \emph{pro-exact} if $g_n \circ f_n = 0$ holds for $n \ge 1$ and the pro-$R$-module $\{\ker(g_n)/\operatorname{im}(f_n)\}$ is pro-trivial.
The elementary proofs of the following two lemmas can be found for instance in~\cite[Section~2]{Lueck(2007)}.
\begin{lemma}\label{lem:pro-exactness_and_limits} Let $0 \to \{M_n',\alpha_n'\} \xrightarrow{\{f_n\}} \{M_n,\alpha_n\} \xrightarrow{\{g_n\}} \{M_n'',\alpha_n''\} \to 0$ be a pro-exact sequence of pro-$R$-modules. Then there is a natural exact sequence \begin{multline*} 0 \to \invlim{n \ge 1}{M_n'} \xrightarrow{\invlim{n \ge 1}{f_n}} \invlim{n \ge 1}{M_n} \xrightarrow{\invlim{n \ge 1}{g_n}} \invlim{n \ge 1}{M_n''} \xrightarrow{\delta} \\
\higherlim{n \ge 1}{1}{M_n'} \xrightarrow{\higherlim{n \ge 1}{1}{f_n}} \higherlim{n \ge 1}{1}{M_n} \xrightarrow{\higherlim{n \ge 1}{1}{g_n}} \higherlim{n \ge 1}{1}{M_n''} \to 0. \end{multline*} In particular a pro-isomorphism $\{f_n\} \colon \{M_n,\alpha_n\} \to \{N_n,\beta_n\}$ induces isomorphisms \[ \begin{array}{llcl} \invlim{n \ge 1}{f_n} \colon & \invlim{n \ge 1}{M_n} & \xrightarrow{\cong} & \invlim{n \ge 1}{N_n}; \\ \higherlim{n \ge 1}{1}{f_n} \colon & \higherlim{n \ge 1}{1}{M_n} & \xrightarrow{\cong} & \higherlim{n \ge 1}{1}{N_n}. \end{array} \] \end{lemma}
\begin{lemma}\label{lem:pro-exactness_and_exactness} Fix any commutative Noetherian ring $R$, and any ideal $I\subseteq R $. Then for any exact sequence $M' \to M \to M''$ of finitely generated $R$-modules, the sequence \[
\{M'/I^nM'\} \to \{M/I^nM\} \to \{M''/I^nM''\} \] of pro-$R$-modules is pro-exact. \end{lemma}
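The pro-triviality behind Lemma~\ref{lem:pro-exactness_and_exactness} can be seen in the simplest example $R={\mathbb Z}$, $I=(2)$ and the exact sequence ${\mathbb Z}\xrightarrow{2}{\mathbb Z}\to{\mathbb Z}/2$: reducing mod $2^n$, multiplication by $2$ on ${\mathbb Z}/2^n$ acquires a nonzero kernel at every level, but the structure maps kill these kernels, so the kernel pro-module is pro-trivial. A small numeric sketch (our own illustration):

```python
# Reduce the exact sequence Z --x2--> Z --> Z/2 modulo I^n for I = (2).
# Each level map f_n = (x2): Z/2^n -> Z/2^n has nonzero kernel {0, 2^{n-1}},
# yet the structure map alpha_n (reduction mod 2^{n-1}) sends that kernel
# to 0, so the pro-module {ker f_n} is pro-trivial.

def kernel_of_times2(n):
    """Kernel of multiplication by 2 on Z/2^n."""
    return {x for x in range(2**n) if (2 * x) % 2**n == 0}

for n in range(2, 8):
    K = kernel_of_times2(n)
    assert K == {0, 2**(n - 1)}                 # nonzero kernel at level n
    # structure map Z/2^n -> Z/2^{n-1} is reduction; it maps K to 0
    assert {x % 2**(n - 1) for x in K} == {0}
```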
\typeout{-------------------- Section 3---------------------------------------}
\section{The Formulation of a Completion Theorem} \label{sec:The_Formulation_of_a_Completion_Theorem}
Consider a proper equivariant cohomology theory $\mathcal{H}_?^*$ with multiplicative structure. In the sequel $\mathcal{H}^*$ is the non-equivariant cohomology theory with multiplicative structure given by $\mathcal{H}^*_G$ for $G = \{1\}$. Notice that $\mathcal{H}^0(\ast)$ is a commutative ring with unit and $\mathcal{H}^n_G(X)$ is a $\mathcal{H}^0(\ast)$-module. In some future applications $\mathcal{H}^0(\ast)$ will be just ${\mathbb Z}$. In the sequel $[Y,X]^G$ denotes the set of $G$-homotopy classes of $G$-maps $Y \to X$. Notice that evaluation at the unit element of $G$ induces a bijection $[G,X]^G \xrightarrow{\cong} \pi_0(X)$. It is compatible with the left $G$-actions, which are on the source induced by precomposing with right multiplication $r_g \colon G \to G, g' \mapsto g'g$ and on the target by the given left $G$-action on $X$.
So we can represent elements in $G\backslash \pi_0(X)$ by classes $\overline{x}$ of $G$-maps $x \colon G \to X$, where $x \colon G \to X$ and $y \colon G \to X$ are equivalent, if for some $g \in G$ the composite $y \circ r_g$ is $G$-homotopic to $x$.
\begin{definition}[Augmentation ideal]\label{def:augmentation_ideal}
For any proper $G$-CW-complex $X$ the \emph{augmentation module} ${\mathbb I}_G^n(X)\subseteq
\mathcal{H}^n_G(X)$ is defined as the kernel of the map
\[ \mathcal{H}^n_G(X) \xrightarrow{\prod_{\overline{x} \in G\backslash \pi_0(X)} \operatorname{ind}_{\{1\}
\to G} \circ \mathcal{H}^n_G(x)} \prod_{\overline{x} \in G\backslash \pi_0(X)}
\mathcal{H}^n(\ast). \]
(The composite above is independent of the choice of $x \in
\overline{x}$ by $G$-homotopy invariance and Lemma~\ref{lem:calh_G(G/H)_and_calh_H(ast)}.)
If $n = 0$, the map above is a ring homomorphism and ${\mathbb I}_G(X) := {\mathbb I}^0_G(X)$ is an
ideal called \emph{the augmentation ideal}. \end{definition}
Given a $G$-map $f\colon X \to Y$, the induced map $\mathcal{H}^n_G(f) \colon \mathcal{H}^n_G(Y) \to \mathcal{H}^n_G(X)$ restricts to a map ${\mathbb I}_G^n(Y) \to {\mathbb I}_G^n(X)$.
We will need the following elementary lemma:
\begin{lemma}\label{lem:nilpotence_of_In}
Let $X$ be a CW-complex of dimension $(n-1)$. Then any $n$-fold
product of elements in ${\mathbb I}_G^*(X)$ is zero. \end{lemma} \begin{proof}Write $X=Y\cup A$, where $Y$ and $A$ are closed subsets, $Y$ contains $X^{(n-2)}$ as a homotopy deformation retract, and $A$ is a disjoint union of $(n{-}1)$-disks. Fix elements $v_1,v_2,\dots,v_n\in {\mathbb I}_G^*(X)$. We can assume by induction that $v_1\cdots{}v_{n-1}$ vanishes after restricting to $Y$, and hence that it is the image of an element $u\in \mathcal{H}_G^*(X,Y)$. Also, $v_n$ clearly vanishes after restricting to $A$, and hence is the image of an element $v\in \mathcal{H}_G^*(X,A)$. The product of $v_1\cdots{}v_{n-1}$ and $v_n$ is the image in $\mathcal{H}_G^*(X)$ of the element $uv\in \mathcal{H}_G^*(X,Y\cup A) = \mathcal{H}_G^*(X,X) = 0$ and so $v_1 \cdots{} v_n=0$. \end{proof}
Now fix a map $f \colon X \to L$ between $G$-$CW$-complexes. Consider $\mathcal{H}^*_G(X)$ as a module over the ring $\mathcal{H}^0_G(L)$. Consider the composition \begin{multline*}
{\mathbb I}_G(L)^n \cdot \mathcal{H}_G^m(X) \xrightarrow{i} \mathcal{H}_G^m(X)
\xrightarrow{\mathcal{H}_G^m(\operatorname{pr})} \mathcal{H}_G^m(EG \times X) \\
\xrightarrow{\left(\operatorname{ind}_{G \to \{1\}}\right)^{-1}} \mathcal{H}^m(EG
\times_G X) \xrightarrow{\mathcal{H}^m(j)} \mathcal{H}^m\left((EG \times_G
X)_{(n-1)}\right), \end{multline*} where $i$ and $j$ denote the inclusions, $\operatorname{pr}$ the projection and $(EG \times_G X)_{(n-1)}$ is the $(n-1)$-skeleton of $EG \times_G X$. This composite is zero because of Lemma~\ref{lem:nilpotence_of_In} since its image is contained in ${\mathbb I}^n\left((EG \times_G X)_{(n-1)}\right)$. Thus we obtain a homomorphism of pro-$\mathcal{H}^0(\ast)$-modules \begin{multline} \lambda^m_G(f \colon X \to L) \colon \left\{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)\right\}_{n \ge 1} \\ \to \left\{\mathcal{H}^m_G\left((EG \times_G X)_{(n-1)}\right)\right\}_{n \ge 1}. \label{map_lambda_of_pro-modules} \end{multline} We will sometimes write $\lambda^m_G$ or $\lambda^m_G(X)$ instead of $\lambda^m_G(f \colon X \to L)$ if the map $f \colon X \to L$ is clear from the context. Notice that the target of $\lambda^m_G(f \colon X \to L)$ depends only on $X$ but not on the map $f \colon X \to L$, whereas the source does depend on $f$.
\begin{problem}[Completion Problem]\label{pro:formulation_of_the_Completion_Theorem}
Under which conditions on ${\mathcal{H}^*_?}$ and $L$ is the map of
pro-$\mathcal{H}^0(\ast)$-modules $\lambda^m_G(f \colon X \to L)$ defined
in~\eqref{map_lambda_of_pro-modules} an isomorphism of
pro-$\mathcal{H}^0(\ast)$-modules? \end{problem}
\begin{remark}[Consequences of the Completion Theorem]\label{rem:Consequences_of_the_Completion_Theorem} Suppose that the map of pro-$\mathcal{H}^0(\ast)$-modules $\lambda^m_G(X)$ defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules. Obviously the pro-module $\{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)\}_{n \ge 1}$ satisfies the Mittag-Leffler condition since all structure maps are surjective. This implies that its ${\lim}^1$-term vanishes. We conclude from Lemma~\ref{lem:pro-exactness_and_limits} \begin{eqnarray*} \higherlim{n \to \infty}{1}{\mathcal{H}^m\left((EG \times_G X)_{(n-1)}\right)} & = & 0; \\ \invlim{n \to \infty}{\mathcal{H}^m\left((EG \times_G X)_{(n-1)}\right)} & \cong & \invlim{n \to \infty}{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)}. \end{eqnarray*} Milnor's exact sequence \begin{multline*}
0 \to \higherlim{n \to \infty}{1}{\mathcal{H}^{m-1}\left((EG \times_G
X)_{(n-1)}\right)} \to \mathcal{H}^m(EG \times_G X)
\\
\to \invlim{n \to \infty}{\mathcal{H}^m\left((EG \times_G
X)_{(n-1)}\right)} \to 0 \end{multline*} implies that we obtain an isomorphism \[ \mathcal{H}^m(EG \times_G X) \cong \invlim{n \to
\infty}{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)}. \] \end{remark}
\begin{remark}[Taking $L = \underline{E}G$] \label{rem:taking_L_to_be_underlineEG} The \emph{classifying space $\underline{E}G$ for proper $G$-actions} is a proper $G$-$CW$-complex such that the $H$-fixed point set is contractible for every finite subgroup $H \subseteq G$. It has the universal property that for every proper $G$-$CW$-complex $X$ there is up to $G$-homotopy precisely one $G$-map $f \colon X \to \underline{E}G$. Recall that a $G$-$CW$-complex is proper if and only if all its isotropy groups are finite and is finite if and only if it is cocompact. There is a cocompact $G$-$CW$-model for the classifying space $\underline{E}G$ for proper $G$-actions for instance if $G$ is word-hyperbolic in the sense of Gromov, if $G$ is a cocompact subgroup of a Lie group with finitely many path components, if $G$ is a finitely generated one-relator group, if $G$ is an arithmetic group, a mapping class group of a compact surface or the group of outer automorphisms of a finitely generated free group. For more information about $\underline{E}G$ we refer for instance to~\cite{Baum-Connes-Higson(1994)} and~\cite{Lueck(2005s)}.
Suppose that there is a finite model for the classifying space of proper $G$-actions $\underline{E}G$. Then we can apply this to $\operatorname{id} \colon \underline{E}G \to \underline{E}G$ and obtain an isomorphism \[ \mathcal{H}^m(BG) \cong \invlim{n \to
\infty}{\mathcal{H}^m_G(\underline{E}G)/{\mathbb I}_G(\underline{E}G)^n \cdot \mathcal{H}^m_G(\underline{E}G)}. \] \end{remark}
\begin{remark}[The free case]\label{rem:The_torsionfree_case} The statement of the Completion Theorem as stated in Problem~\ref{pro:formulation_of_the_Completion_Theorem} is always true for trivial reasons if $X$ is a free finite $G$-$CW$-complex. Then induction induces an isomorphism \[ \operatorname{ind}_{G \to \{1\}} \colon \mathcal{H}^m(G\backslash X) \xrightarrow{\cong} \mathcal{H}^m_G(X). \] Since ${\mathbb I}(G\backslash X)^n = 0$ for large enough $n$ by Lemma~\ref{lem:nilpotence_of_In}, the canonical map \[ \{\mathcal{H}^m(G\backslash X)\}_{n \ge 1} \xrightarrow{\cong} \{\mathcal{H}^m(G\backslash X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m(G\backslash X)\}_{n \ge 1} \] with the constant pro-$\mathcal{H}^0(\ast)$-module as source is an isomorphism. Hence the source of $\lambda^m_G(f \colon X \to L)$ can be identified with the constant pro-$\mathcal{H}^0(\ast)$-module $\{\mathcal{H}^m(G\backslash X)\}_{n \ge 1}$.
The projection $EG \times_G X \to G\backslash X$ is a homotopy equivalence and induces an isomorphism of pro-${\mathbb Z}$-modules \[ \left\{\mathcal{H}^m\left((G\backslash X)_{(n-1)}\right)\right\}_{n \ge 1} \xrightarrow{\cong} \left\{\mathcal{H}^m\left(\left(EG \times_G X\right)_{(n-1)}\right)\right\}_{n \ge 1}. \] Since $G\backslash X$ is finite dimensional, the canonical map \[ \{\mathcal{H}^m(G\backslash X)\}_{n \ge 1}
\xrightarrow{\cong} \left\{\mathcal{H}^m\left((G\backslash X)_{(n-1)}\right)\right\}_{n \ge 1} \] is an isomorphism of pro-${\mathbb Z}$-modules. Hence also the target of $\lambda^m_G(f \colon X \to L)$ can be identified with the constant pro-$\mathcal{H}^0(\ast)$-module $\{\mathcal{H}^m(G\backslash X)\}_{n \ge 1}$. One easily checks that under these identifications $\lambda^m_G(f \colon X \to L)$ is the identity.
Hence the Completion Theorem is only interesting in the case where $G$ contains torsion. \end{remark}
\typeout{-------------------- Section 4 ---------------------------------------}
\section{A Strategy for a Proof of a Completion Theorem} \label{sec:A_Strategy_for_a_Proof_of_a_Completion_Theorem}
\begin{theorem}{\bf (Strategy for the proof of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}).} \label{the:strategy_of_proof_of_the_completion_theorem}
Let $\mathcal{H}^?_*$ be an equivariant cohomology theory with values in $R$-modules with a
multiplicative structure. Let $L$ be a proper $G$-$CW$-complex. Suppose that
the following conditions are satisfied, where $\mathcal{F}(L)$ is the family of subgroups
of $G$ given by $\{H \subseteq G \mid L^H \not= \emptyset\}$. \begin{enumerate}
\item\label{the:strategy_of_proof_of_the_completion_theorem:Notherian}
The ring $\mathcal{H}^0(\ast)$ is Noetherian;
\item\label{the:strategy_of_proof_of_the_completion_theorem:fin._gen.}
For any $H \in \mathcal{F}(L)$ and $m \in {\mathbb Z}$ the
$\mathcal{H}^0(\ast)$-module $\mathcal{H}^m_H(\ast)$ is finitely generated;
\item\label{the:strategy_of_proof_of_the_completion_theorem:ideals}
Let $H \in \mathcal{F}(L)$, let $\mathcal{P} \subseteq
\mathcal{H}^0_H(\ast)$ be a prime ideal, and let $f \colon G/H \to L$ be any
$G$-map. Then the augmentation ideal
\[ {\mathbb I}(H) = \ker \left(\mathcal{H}^0_H(\ast) \xrightarrow{\mathcal{H}^0_H(\operatorname{pr})} \mathcal{H}^0_H(H)
\xrightarrow{\operatorname{ind}_{\{1\} \to H}} \mathcal{H}^0(\ast)\right) \]
is contained in $\mathcal{P}$
if $\mathcal{H}^0_G(L) \xrightarrow{\mathcal{H}^0_G(f)} \mathcal{H}^0_G(G/H) \xrightarrow{\operatorname{ind}_{H \to G}}
\mathcal{H}^0_H(\ast)$ maps ${\mathbb I}_G(L)$ into $\mathcal{P}$;
\item\label{the:strategy_of_proof_of_the_completion_theorem:finite_groups}
The Completion Theorem is true for every finite group $H$ with $H \in \mathcal{F}(L)$
in the case where $X = L = \ast$ and $f = \operatorname{id} \colon \ast \to \ast$. In other words, for every
finite group $H$ with $L^H \not= \emptyset$ the map of pro-$\mathcal{H}^0(\ast)$-modules \begin{eqnarray*}
\lambda^m_H(\ast) \colon \left\{\mathcal{H}^m_H(\ast)/{\mathbb I}(H)^n\right\}_{n \ge 1} & \to &
\left\{\mathcal{H}^m\left((BH)_{(n-1)}\right)\right\}_{n \ge 1} \end{eqnarray*} defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules.
\end{enumerate}
Then the Completion Theorem is true for $\mathcal{H}^?_*$ and every $G$-map $f \colon X \to L$ from a finite proper $G$-$CW$-complex $X$ to $L$, i.e., the map of pro-$\mathcal{H}^0(\ast)$-modules \begin{eqnarray*}
\lambda^m_G(X) \colon \left\{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)\right\}_{n \ge 1} & \to & \left\{\mathcal{H}^m\left((EG \times_G X)_{(n-1)}\right)\right\}_{n \ge 1} \end{eqnarray*} defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules. \end{theorem}
\begin{proof}
We first prove the Completion Theorem for $X = G/H$, i.e., for any $G$-map
$f \colon G/H \to L$. Obviously $H$ belongs to $\mathcal{F}(L)$. The following diagram of pro-modules commutes
\[ \xymatrix@!C=19em{\{\mathcal{H}^m_G(G/H)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(G/H)\}_{n \ge 1}
\ar[d]_{\{\operatorname{ind}_{H \to G}\}_{n \ge 1}} \ar[r]^-{\lambda^m_G(f \colon
G/H \to L)} & \{\mathcal{H}^m\left((EG \times_G G/H)_{(n-1)}\right)\}_{n \ge 1}
\\
\{\mathcal{H}^m_H(\ast)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_H(\ast)\}_{n \ge 1} \ar[d]_{\operatorname{pr}} &
\\
\{\mathcal{H}^m_H(\ast)/{\mathbb I}(H)^n \cdot \mathcal{H}^m_H(\ast)\}_{n \ge 1}
\ar[r]_-{\lambda^m_H(\operatorname{id} \colon \ast \to \ast)} &
\{\mathcal{H}^m\left((BH)_{(n-1)}\right)\}_{n \ge 1} \ar[uu]_{\{\mathcal{H}^m(\operatorname{pr})\}_{n \ge 1}} }
\]
where $\operatorname{pr}$ denotes the obvious projection. The lower horizontal
arrow is an isomorphism of pro-modules by
condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:finite_groups}. The
right vertical arrow and the upper left vertical arrow are
obviously isomorphisms of pro-modules.
Hence the upper horizontal arrow is an isomorphism of pro-modules if
we can show that the lower left vertical arrow is an isomorphism of
pro-modules.
Let $I_f$ be the image of ${\mathbb I}_G(L)$ under the composite of ring homomorphisms
\[ \mathcal{H}^0_G(L) \xrightarrow{\mathcal{H}^0_G(f)} \mathcal{H}^0_G(G/H)
\xrightarrow{\operatorname{ind}_{H \to G}} \mathcal{H}^0_H(\ast) \]
Let $J_f$ be the ideal
in $\mathcal{H}^0_H(\ast)$ generated by $I_f$. Obviously $I_f \subseteq J_f \subseteq {\mathbb I}(H)$.
Then the left lower vertical arrow is the composite
\begin{multline*} \mathcal{H}^m_H(\ast)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_H(\ast) \to
\mathcal{H}^m_H(\ast)/(J_f)^n \cdot \mathcal{H}^m_H(\ast)
\\
\to
\mathcal{H}^m_H(\ast)/{\mathbb I}(H)^n \cdot \mathcal{H}^m_H(\ast), \end{multline*}
where the first
map is already levelwise an isomorphism and in particular an
isomorphism of pro-modules. In order to show that the second map is
an isomorphism of pro-modules, it remains to show that
${\mathbb I}(H)^k \subseteq J_f$ for an appropriate integer $k \ge 1$.
Equivalently, we want to show that the ideal ${\mathbb I}(H)/J_f$ of
the quotient ring $\mathcal{H}^0_H(\ast)/J_f$ is nilpotent. Since
$\mathcal{H}^0_H(\ast)$ is Noetherian by
conditions~\eqref{the:strategy_of_proof_of_the_completion_theorem:Notherian}
and~\eqref{the:strategy_of_proof_of_the_completion_theorem:fin._gen.}, the ideal
${\mathbb I}(H)/J_f$ is finitely generated. Hence it suffices to show
that ${\mathbb I}(H)/J_f$ is contained in the nilradical, i.e.,
the ideal consisting of all nilpotent elements, of
$\mathcal{H}^0_H(\ast)/J_f$. The nilradical agrees with the intersection of all the
prime ideals of $\mathcal{H}^0_H(\ast)/J_f$
by~\cite[Proposition~1.8]{Atiyah-McDonald(1969)}. The preimage of a prime ideal in
$\mathcal{H}^0_H(\ast)/J_f$ under the projection $\mathcal{H}^0_H(\ast) \to \mathcal{H}^0_H(\ast)/J_f$
is again a prime ideal. Hence it remains to show
that any prime ideal of $\mathcal{H}^0_H(\ast)$ which contains $I_f$ also
contains ${\mathbb I}(H)$. But this is guaranteed by
condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals}. This
finishes the proof in the case $X = G/H$.
The general case of a $G$-map $f \colon X \to L$ from a finite
$G$-$CW$-complex $X$ to a $G$-$CW$-complex $L$
is done by induction over the dimension $r$ of $X$ and
subinduction over the number of top-dimensional equivariant cells.
For the induction step we write $X$ as a $G$-pushout
\[
\xycomsquareminus{G/H \times S^{r-1}}{q}{Y}{j}{k}{G/H \times D^r}{Q}{X}
\]
In the sequel we equip $G/H \times S^{r-1}$, $Y$ and $G/H \times
D^r$ with the maps to $L$ given by the composite of $f \colon X \to
L$ with $k \circ q$, $k$ and $Q$. The long exact Mayer-Vietoris
sequence of the $G$-pushout above is a long exact sequence of
$\mathcal{H}^0_G(L)$-modules and looks like \begin{multline*}
\ldots \to \mathcal{H}^{m-1}_G(G/H \times D^r) \oplus \mathcal{H}^{m-1}_G(Y) \to
\mathcal{H}^{m-1}_G(G/H \times S^{r-1})
\to \mathcal{H}^m_G(X)
\\
\to \mathcal{H}^m_G(G/H \times D^r) \oplus \mathcal{H}^m_G(Y) \to
\mathcal{H}^m_G(G/H \times S^{r-1})
\to \ldots. \end{multline*} Condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:fin._gen.} implies that $\mathcal{H}^m_G(G/H)$ and $\mathcal{H}^m_G(G/H \times D^r)$ are finitely generated as $\mathcal{H}^0(\ast)$-modules, using the induction isomorphism $\mathcal{H}^m_G(G/H) \cong \mathcal{H}^m_H(\ast)$. Since $\mathcal{H}^0(\ast)$ is Noetherian by condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:Notherian}, the $\mathcal{H}^0(\ast)$-module $\mathcal{H}^m_G(X)$ is finitely generated provided that the $\mathcal{H}^0(\ast)$-module $\mathcal{H}^m_G(Y)$ is finitely generated. Thus we can show inductively that the $\mathcal{H}^0(\ast)$-module $\mathcal{H}^m_G(X)$ is finitely generated for every $m \in {\mathbb Z}$. In particular the ring $\mathcal{H}^0_G(X)$ is Noetherian. Let $J \subseteq \mathcal{H}^0_G(X)$ be the ideal generated by the image of ${\mathbb I}_G(L)$ under the ring homomorphism $\mathcal{H}^0_G(L) \to \mathcal{H}^0_G(X)$. Then for every $\mathcal{H}^0_G(X)$-module $M$ the obvious map $\{M/{\mathbb I}_G(L)^n \cdot M\}_{n\ge1} \to \{M/J^n \cdot M\}_{n \ge 1}$ is levelwise an isomorphism and in particular an isomorphism of pro-$\mathcal{H}^0_G(X)$-modules. We conclude from Lemma~\ref{lem:pro-exactness_and_exactness} that the following sequence of pro-$\mathcal{H}^0(\ast)$-modules is exact, where $M/I$ stands for $M/I \cdot M$. \begin{multline}
\ldots \to \{\mathcal{H}^{m-1}_G(G/H \times D^r)/{\mathbb I}_G(L)^n\}_{n\ge 1} \oplus
\{\mathcal{H}^{m-1}_G(Y)/{\mathbb I}_G(L)^n\}_{n\ge 1}
\\
\to \{\mathcal{H}^{m-1}_G(G/H \times S^{r-1})/{\mathbb I}_G(L)^n\}_{n\ge 1}
\\
\to \{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n\}_{n\ge 1}
\to \{\mathcal{H}^m_G(G/H \times D^r)/{\mathbb I}_G(L)^n\}_{n\ge 1} \oplus
\{\mathcal{H}^m_G(Y)/{\mathbb I}_G(L)^n\}_{n\ge 1}
\\
\to \{\mathcal{H}^m_G(G/H \times S^{r-1})/{\mathbb I}_G(L)^n\}_{n\ge 1}
\to \ldots \label{MV-sequence_I-adic_for_calh_G} \end{multline}
Applying $EG_{(n-1)} \times_G -$ to the $G$-pushout above yields a pushout and thus a long exact Mayer-Vietoris sequence \begin{multline*}
\ldots \to \mathcal{H}^{m-1}\left(EG_{(n-1)} \times_G \left(G/H \times
D^r\right)\right) \oplus \mathcal{H}^{m-1}\left(EG_{(n-1)} \times_G Y\right)
\\
\to \mathcal{H}^{m-1}\left(EG_{(n-1)} \times_G \left(G/H \times S^{r-1}\right)\right)
\\
\to \mathcal{H}^m\left(EG_{(n-1)} \times_G X\right)
\\
\to \mathcal{H}^m\left(EG_{(n-1)} \times_G \left(G/H \times
D^r\right)\right) \oplus \mathcal{H}^m\left(EG_{(n-1)} \times_G Y\right)
\\
\to \mathcal{H}^m\left(EG_{(n-1)} \times_G \left(G/H \times S^{r-1}\right)\right)
\to \ldots \end{multline*}
The obvious map \[ \left\{\mathcal{H}^m\left(EG_{(n-1)} \times_G Z\right)\right\}_{n \ge 1} \xrightarrow{\cong} \bigl\{\mathcal{H}^m\bigl(\left(EG\times_G Z\right)_{(n-1)}\bigr)\bigr\}_{n \ge 1} \] is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules for any finite dimensional $G$-$CW$-complex $Z$. Hence we obtain a long exact sequence of pro-$\mathcal{H}^0(\ast)$-modules \begin{multline}
\ldots \to \bigl\{\mathcal{H}^{m-1}\bigl(\bigl(EG \times_G \bigl(G/H \times
D^r\bigr)\bigr)_{(n-1)} \bigr)\bigr\}_{n \ge 1} \oplus
\bigl\{\mathcal{H}^{m-1}\bigl(\bigl(EG \times_G
Y\bigr)_{(n-1)}\bigr)\bigr\}_{n \ge 1}
\\
\to \bigl\{\mathcal{H}^{m-1}\bigl(\bigl(EG \times_G \bigl(G/H \times
S^{r-1}\bigr)\bigr)_{(n-1)}\bigr)\bigr\}_{n \ge 1}
\\
\to \bigl\{\mathcal{H}^m\bigl(\bigl(EG \times_G
X\bigr)_{(n-1)}\bigr)\bigr\}_{n \ge 1}
\\
\to \bigl\{\mathcal{H}^m\bigl(\bigl(EG \times_G \bigl(G/H \times
D^r\bigr)\bigr)_{(n-1)} \bigr)\bigr\}_{n \ge 1} \oplus
\bigl\{\mathcal{H}^m\bigl(\bigl(EG \times_G
Y\bigr)_{(n-1)}\bigr)\bigr\}_{n \ge 1}
\\
\to \bigl\{\mathcal{H}^m\bigl(\bigl(EG \times_G \bigl(G/H \times
S^{r-1}\bigr)\bigr)_{(n-1)}\bigr)\bigr\}_{n \ge 1}
\to \ldots \label{MV-sequence_for_calh((EG_times_X)_(n-1))} \end{multline}
Now the various maps $\lambda^m_G$ induce a map from the long exact sequence of pro-$\mathcal{H}^0(\ast)$-modules~\eqref{MV-sequence_I-adic_for_calh_G} to the long exact sequence of pro-$\mathcal{H}^0(\ast)$-modules~\eqref{MV-sequence_for_calh((EG_times_X)_(n-1))}. The maps for $G/H \times S^{r-1}$, $G/H \times D^r$ and $Y$ are isomorphisms of pro-$\mathcal{H}^0(\ast)$-modules by induction hypothesis and by $G$-homotopy invariance applied to the $G$-homotopy equivalence $G/H \times D^r \to G/H$. By the Five-Lemma for maps of pro-modules the map \begin{eqnarray*}
\lambda^m_G(X) \colon \{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)\}_{n \ge 1} & \to & \{\mathcal{H}^m\left((EG \times_G X)_{(n-1)}\right)\}_{n \ge 1} \end{eqnarray*} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules. This finishes the proof of Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem}. \end{proof}
The next lemma will be needed to check condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals} appearing in Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem}.
Given a $G$-cohomology theory $\mathcal{H}^*_G$, there is an equivariant version of the Atiyah-Hirzebruch spectral sequence of $\mathcal{H}^0(\ast)$-modules which converges to $\mathcal{H}^{p+q}_G(L)$ in the usual sense provided that $L$ is finite dimensional, and whose $E_2$-term is \[ E^{p,q}_2 := H^p_G(L;\mathcal{H}^q_G(G/?)), \] where $H^p_G(L;\mathcal{H}^q_G(G/?))$ is the \emph{Bredon cohomology} of $L$ with coefficients in the ${\mathbb Z}\curs{Or}(G)$-module sending $G/H$ to $\mathcal{H}^q_G(G/H)$. If $\mathcal{H}^*_G$ comes with a multiplicative structure, then the spectral sequence inherits a multiplicative structure.
\begin{lemma}\label{lem:edge_argument} Suppose that $L$ is an $l$-dimensional proper $G$-$CW$-complex for some positive integer $l$. Suppose that for $r = 2,3, \ldots, l$ the differential appearing in the Atiyah-Hirzebruch spectral sequence for $L$ and $\mathcal{H}^*_G$ \[ d^{0,0}_r \colon E^{0,0}_r \to E^{r,1-r}_r \] vanishes rationally.
\begin{enumerate} \item\label{lem:edge_argument:xk_in_the_image_of_edge} Then we can find for a given $x \in H^0_G(L;\mathcal{H}^0_G(G/?))$ a positive integer $k$ such that $x^k$ is contained in the image of the edge homomorphism \[ \operatorname{edge}^{0,0} \colon \mathcal{H}^0_G(L) \to H^0_G(L;\mathcal{H}^0_G(G/?)); \]
\item\label{lem:edge_argument:improving_the_conditions}
Let $H \in \mathcal{F}(L)$, let $\mathcal{P} \subseteq
\mathcal{H}^0_H(\ast)$ be a prime ideal and let $f \colon G/H \to L$ be any
$G$-map. Suppose that the augmentation ideal
\[ {\mathbb I}(H) = \ker \left(\mathcal{H}^0_H(\ast) \xrightarrow{ \mathcal{H}^0_H(\operatorname{pr})} \mathcal{H}^0_H(H)
\xrightarrow{\operatorname{ind}_{\{1\}\to H}} \mathcal{H}^0(\ast)\right) \]
is contained in $\mathcal{P}$
if $\mathcal{P}$ contains the image of the structure map for $H$ of the inverse limit
over the orbit category $\curs{Or}(G;\mathcal{F}(L))$ associated to the family $\mathcal{F}(L)$
\[ \phi_H \colon \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{{\mathbb I}(K)} \to {\mathbb I}(H). \]
Then condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals}
appearing in Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem}
is satisfied for $H$, $\mathcal{P}$ and $f$. \end{enumerate} \end{lemma} \begin{proof}~\eqref{lem:edge_argument:xk_in_the_image_of_edge} Consider $x \in H^0_G(L;\mathcal{H}^0_G(G/?))$. We construct inductively positive integers $k_1$, $k_2$, $\ldots,$ $k_{l}$ such that \[ \begin{array}{rclr} x^{\prod_{i=1}^{r} k_i} & \in & E^{0,0}_{r+1} & \quad \text{ for } r = 1,2, \ldots ,l; \end{array} \] Put $k_1 = 1$. We have $H^0_G(L;\mathcal{H}^0_G(G/?)) = E^{0,0}_2$ and hence $x = x^1 = x^{\prod_{i=1}^{1} k_i} \in E^{0,0}_2$. This finishes the induction beginning $r = 1$.
In the induction step from $(r - 1)$ to $r \ge 2$ we can assume that we have already constructed $k_1, \ldots, k_{r-1}$ and shown that $x^{\prod_{i=1}^{r-1} k_i}$ belongs to $E^{0,0}_r$. Now choose $k_r$ with $k_r \cdot d^{0,0}_r\left(x^{\prod_{i=1}^{r-1} k_i}\right) = 0$. This is possible since by assumption $d^{0,0}_r \otimes_{{\mathbb Z}} \operatorname{id}_{{\mathbb Q}} = 0$. For any element $y \in E^{0,0}_{r}$ one checks inductively for $j = 1,2, \ldots$ \[ d^{0,0}_r(y^j) = j \cdot d^{0,0}_r(y) \cdot y^{j-1}. \] This implies \[ d^{0,0}_r\left(x^{\prod_{i=1}^{r} k_i}\right) = d^{0,0}_r\left(\left(x^{\prod_{i=1}^{r-1} k_i}\right)^{k_{r}}\right) = k_{r} \cdot d^{0,0}_{r}\left(x^{\prod_{i=1}^{r-1} k_i}\right) \cdot \left(x^{\prod_{i=1}^{r-1} k_i}\right)^{k_r -1} = 0. \] Since $E^{0,0}_{r+1}$ is the kernel of $d^{0,0}_r \colon E^{0,0}_r \to E^{r,1-r}_r$, we conclude $x^{\prod_{i=1}^r k_i} \in E^{0,0}_{r+1}$. Since $L$ is $l$-dimensional, we get for $k = \prod_{i=1}^{l} k_i$ that $x^k \in E_{\infty}^{0,0}$. Since $E_{\infty}^{0,0}$ is the image of the edge homomorphism $\operatorname{edge}^{0,0}$, assertion~\eqref{lem:edge_argument:xk_in_the_image_of_edge} follows.
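For the reader's convenience, here is a sketch of the inductive verification of the formula $d^{0,0}_r(y^j) = j \cdot d^{0,0}_r(y) \cdot y^{j-1}$ above; it uses only the Leibniz rule and the fact that $y$ has total degree zero, so that no signs occur. The case $j = 1$ is trivial, and for $j \ge 2$ we get \begin{eqnarray*} d^{0,0}_r(y^j) & = & d^{0,0}_r\left(y \cdot y^{j-1}\right) = d^{0,0}_r(y) \cdot y^{j-1} + y \cdot d^{0,0}_r\left(y^{j-1}\right) \\ & = & d^{0,0}_r(y) \cdot y^{j-1} + (j-1) \cdot d^{0,0}_r(y) \cdot y^{j-1} = j \cdot d^{0,0}_r(y) \cdot y^{j-1}, \end{eqnarray*} where the second equality uses the induction hypothesis for $j - 1$.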
\\[2mm]~\eqref{lem:edge_argument:improving_the_conditions} Consider the following commutative diagram \[ \xymatrix{ & H^0_G\left(\EGF{G}{\mathcal{F}(L)};\mathcal{H}^0_G(G/?)\right) \ar[d]^-{H^0_G(u)} \ar[r]^-{\alpha}_-{\cong} & \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{\mathcal{H}_K^0(\ast)} \ar[dddd]^{\Phi_H} \\ \mathcal{H}^0_G(L) \ar[r]^-{\operatorname{edge}^{0,0}} \ar[d]^-{\mathcal{H}^0_G(f)} & H^0_G\left(L;\mathcal{H}^0_G(G/?)\right) \ar[d]^-{H^0_G(f)} & \\ \mathcal{H}^0_G(G/H) \ar[r]^-{\operatorname{edge}^{0,0}}_-{\cong} \ar[rrdd]^{\operatorname{ind}_{H \to G}}_{\cong}& H^0_G\left(G/H;\mathcal{H}^0_G(G/?)\right) \ar[rdd]^{\quad\operatorname{ind}_{H \to G} \circ H^0_G(i_H)} & \\ & & \\ & & \mathcal{H}_H^0(\ast) } \] Here $\alpha$ is the isomorphism, which sends $v \in H^0_G\left(\EGF{G}{\mathcal{F}(L)};\mathcal{H}^0_G(G/?)\right)$ to the system of elements that is for $G/K \in \curs{Or}(G;\mathcal{F}(L))$ the image of $v$ under the homomorphism \begin{multline*} H^0_G\left(\EGF{G}{\mathcal{F}(L)};\mathcal{H}^0_G(G/?)\right) \xrightarrow{H^0_G(i_K)} H^0_G\left(G/K;\mathcal{H}^0_G(G/?)\right) \\ \xrightarrow{(\operatorname{edge}^{0,0})^{-1}} \mathcal{H}^0_G(G/K) \xrightarrow{\operatorname{ind}_{\{1\} \to K}} \mathcal{H}_K^0(\ast), \end{multline*} for the up to $G$-homotopy unique $G$-map $i_K \colon G/K \to \EGF{G}{\mathcal{F}(L)}$. The $G$-map $u \colon L \to \EGF{G}{\mathcal{F}(L)}$ is the up to $G$-homotopy unique $G$-map from $L$ to the classifying space of the family $\mathcal{F}(L)$, and $\Phi_H$ is the structure map of the inverse limit for $H$. We have to prove that ${\mathbb I}(H)$ is contained in the prime ideal $\mathcal{P}$ provided that $\mathcal{P}$ contains the image of ${\mathbb I}_G(L)$ under the composite $\mathcal{H}^0_G(L) \xrightarrow{\mathcal{H}^0_G(f)} \mathcal{H}^0_G(G/H) \xrightarrow{\operatorname{ind}_{H \to G}} \mathcal{H}^0_H(\ast)$.
Consider $a \in \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{{\mathbb I}(K)}$. Let $x \in H^0_G\left(L;\mathcal{H}^0_G(G/?)\right)$ be the image of $a$ under the composite \begin{multline*} \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{{\mathbb I}(K)} \to \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{\mathcal{H}_K^0(\ast)} \\ \xrightarrow{\alpha^{-1}} H^0_G\left(\EGF{G}{\mathcal{F}(L)};\mathcal{H}^0_G(G/?)\right) \\ \xrightarrow{H^0_G\left(u;\mathcal{H}^0_G(G/?)\right) } H^0_G\left(L;\mathcal{H}^0_G(G/?)\right). \end{multline*} We conclude from assertion~\eqref{lem:edge_argument:xk_in_the_image_of_edge} that for some positive integer $k$ there is an element $y \in \mathcal{H}^0_G(L)$ with $\operatorname{edge}^{0,0}(y) = x^k$. One easily checks that $y$ belongs to ${\mathbb I}_G(L)$; just inspect the diagram above for $H = \{1\}$. Hence the composite \[ \mathcal{H}^0_G(L) \xrightarrow{\mathcal{H}^0_G(f)} \mathcal{H}^0_G(G/H) \xrightarrow{\operatorname{ind}_{H \to G}} \mathcal{H}^0_H(\ast) \] maps $y$ into $\mathcal{P}$ by assumption. An easy diagram chase shows that \[ \phi_H \colon \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{{\mathbb I}(K)} \to {\mathbb I}(H) \] maps $a^k$ into $\mathcal{P}$. Since $\mathcal{P}$ is a prime ideal and $\phi_H$ is multiplicative, $\phi_H$ sends $a$ into $\mathcal{P}$. Hence the image of $\phi_H \colon \invlim{G/K \in \curs{Or}(G;\mathcal{F}(L))}{{\mathbb I}(K)} \to {\mathbb I}(H)$ lies in $\mathcal{P}$. Hence we get by assumption ${\mathbb I}(H) \subseteq \mathcal{P}$. This finishes the proof of Lemma~\ref{lem:edge_argument}. \end{proof}
\typeout{-------------------- Section 5 --------------------------------------}
\section{The Segal Conjecture for Infinite Groups} \label{sec:The_Segal_Conjecture_for_Infinite_Groups}
In this section we prove the Segal Conjecture~\ref{the:Segal_Conjecture_for_infinite_groups} for infinite groups. It is just the Completion Theorem formulated in Problem~\ref{pro:formulation_of_the_Completion_Theorem} for equivariant stable cohomotopy $\mathcal{H}_?^* = \pi_?^*$ under the condition that $L$ is finite dimensional and that there is an upper bound on the orders of its isotropy groups.
\begin{proof}[Proof of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}] We want to apply Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem} and therefore have to prove conditions~\eqref{the:strategy_of_proof_of_the_completion_theorem:Notherian}, \eqref{the:strategy_of_proof_of_the_completion_theorem:fin._gen.}, \eqref{the:strategy_of_proof_of_the_completion_theorem:ideals} and~\eqref{the:strategy_of_proof_of_the_completion_theorem:finite_groups} appearing there.
Condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:Notherian} is satisfied since $\pi^0_s(\ast) = {\mathbb Z}$ is Noetherian.
Condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:fin._gen.} is satisfied because of Example~\ref{exa:equivariant_cohomotopy}.
Next we prove condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals}. Recall the assumption that $L$ is finite dimensional and that there is an upper bound on the orders of the isotropy groups of $L$. Recall that $\mathcal{F}(L)$ denotes the family of finite subgroups $H \subseteq G$ with $L^H \not= \emptyset$. We can find by Example~\ref{exa:equivariant_cohomotopy} for every $q \in {\mathbb Z}$ with $q \not= 0$ a positive integer $C(q)$ such that the order of $\pi^q_H(\ast)$ divides $C(q)$ for every $H \in \mathcal{F}(L)$. Consider the equivariant cohomological Atiyah-Hirzebruch spectral sequence converging to $\pi^{p+q}_G(L)$. Its $E_2$-term is given by \[ E^{p,q}_2 = H^p_G(L;\pi^q_G(G/?)). \] Therefore $E^{r,1-r}_r$ is annihilated by multiplication with $C(1-r)$ and hence rationally trivial for $r \ge 2$. Hence for $r \ge 2$ the differential \[ d^{0,0}_r \colon E^{0,0}_r \to E^{r,1-r}_r \] vanishes rationally. We have shown that the conditions appearing in Lemma~\ref{lem:edge_argument} are satisfied. Hence in order to verify condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals}, it suffices to prove for any family $\mathcal{F}$ of subgroups of $G$ with the property that there exists an upper bound on the orders of subgroups appearing in $\mathcal{F}$, any $H \in \mathcal{F}$ and any prime ideal $\mathcal{P}$ of the Burnside ring $A(H)$ that $\mathcal{P}$ contains the augmentation ideal ${\mathbb I}_H$ provided $\mathcal{P}$ contains the image of the structure map for $H$ of the inverse limit \[ \phi_H \colon \invlim{G/K \in \curs{Or}(G;\mathcal{F})}{{\mathbb I}_K} \to {\mathbb I}_H. \]
Fix a finite group $H$. We begin with recalling some basics about
the prime ideals in the Burnside ring $A(H)$ taken
from~\cite{Dress(1969)}. In the
sequel $p$ is a prime number or $p = 0$. For a subgroup $K \subseteq
H$ let $\mathcal{P}(K,p)$ be the preimage of $p\cdot {\mathbb Z}$ under the
character map for $K$
\[ \operatorname{char}^H_K \colon A(H) \to {\mathbb Z}, \quad [S] \mapsto
|S^K|. \]
This is a prime ideal and each prime ideal of $A(H)$ is of
the form $\mathcal{P}(K,p)$. If $\mathcal{P}(K,p) = \mathcal{P}(K',q)$ for subgroups
$K, K' \subseteq H$, then $p = q$.
If $p$ is a prime, then $\mathcal{P}(K,p) = \mathcal{P}(K',p)$ if and only if
$(K[p]) = (K'[p])$, where $K[p]$ is the minimal normal subgroup of
$K$ with a $p$-group as quotient. Notice for the sequel that $K[p] =
\{1\}$ if and only if $K$ is a $p$-group. If $p = 0$, then
$\mathcal{P}(K,p) = \mathcal{P}(K',p)$ if and only if $(K) = (K')$.
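As an illustration of these facts (not needed in the sequel), consider the smallest non-trivial case $H = {\mathbb Z}/2$. The Burnside ring $A(H)$ is the free abelian group with basis $[H/H]$ and $[H/\{1\}]$, and the character maps are \begin{eqnarray*} \operatorname{char}^H_H\left(a \cdot [H/H] + b \cdot [H/\{1\}]\right) & = & a; \\ \operatorname{char}^H_{\{1\}}\left(a \cdot [H/H] + b \cdot [H/\{1\}]\right) & = & a + 2b. \end{eqnarray*} Hence $\mathcal{P}(H,2) = \{a \cdot [H/H] + b \cdot [H/\{1\}] \mid a \in 2{\mathbb Z}\} = \mathcal{P}(\{1\},2)$, in accordance with $H[2] = \{1\} = \{1\}[2]$, whereas $\mathcal{P}(H,0)$ and the augmentation ideal $\mathcal{P}(\{1\},0)$ are distinct prime ideals.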
Fix a prime ideal $\mathcal{P} = \mathcal{P}(K,p)$.
Choose a positive integer $m$ such
that $|H|$ divides $m$ for all $H\in \mathcal{F}$. Fix $H \in \mathcal{F}$.
Choose a free $H$-set $S$ together with a
bijection $u \colon S \xrightarrow{\cong} [m]$, where $[m] = \{1,2,
\ldots , m\}$. Such $S$ exists since $|H|$ divides $m$ and we can
take for $S$ the disjoint union of $\frac{m}{|H|}$ copies of $H$.
Thus we obtain an injective group homomorphism
\[ \rho_u \colon H \to S_m, \quad h \mapsto u \circ l_h \circ u^{-1}, \]
where $l_h \colon S \to S$ is given by left multiplication with $h$
and $S_m = \operatorname{aut}([m])$ is the group of permutations of $[m]$. Let
$S_m[\rho_u]$ denote the $H$-set obtained from $S_m$ by the
$H$-action $h \cdot \sigma := \rho_u(h) \circ \sigma$. Let
$\operatorname{Syl}_p(S_m)$ be the $p$-Sylow subgroup of $S_m$. Let
$S_m/\operatorname{Syl}_p(S_m)[\rho_u]$ denote the $H$-set obtained from the
homogeneous space $S_m/\operatorname{Syl}_p(S_m)$ by the $H$-action given by $h
\cdot \overline{\sigma} = \overline{\rho_u(h) \circ \sigma}$. The
$H$-action on $S_m[\rho_u]$ is free. If for $K \subseteq H$ we have
$\left(S_m/\operatorname{Syl}_p(S_m)[\rho_u]\right)^K \not= \emptyset$, then for
some $\sigma \in S_m$ we get $\rho_u(K) \subseteq \sigma \cdot
\operatorname{Syl}_p(S_m) \cdot \sigma^{-1}$ and hence $K$ must be a $p$-group.
Suppose that $T$ is another free $H$-set together with a bijection
$v \colon T \xrightarrow{\cong} [m]$. Then we can choose an
$H$-isomorphism $w \colon S \to T$. Let $\tau \in S_m$ be given by
the composition $v \circ w \circ u^{-1}$. Then $c(\tau) \circ \rho_u
= \rho_v$ holds, where $c(\tau) \colon S_m \to S_m$ sends $\sigma$
to $\tau \circ \sigma \circ \tau^{-1}$. Moreover, left
multiplication with $\tau$ induces isomorphisms of $H$-sets \begin{eqnarray*} S_m[\rho_u] & \cong_H & S_m[\rho_v]; \\ S_m/\operatorname{Syl}_p(S_m)[\rho_u] & \cong_H & S_m/\operatorname{Syl}_p(S_m)[\rho_v]. \end{eqnarray*} Hence we obtain elements in $A(H)$ \begin{eqnarray*} [S_m] & := & [S_m[\rho_u]]; \\ {[S_m/\operatorname{Syl}_p(S_m)]} & := & {[S_m/\operatorname{Syl}_p(S_m)[\rho_u]]}, \end{eqnarray*} which are independent of the choice of $S$ and $u \colon S \xrightarrow{\cong} [m]$. If $i \colon H_0 \to H_1$ is an injective group homomorphism between elements in $\mathcal{F}$, then one easily checks that the restriction homomorphism $A(i) \colon A(H_1) \to A(H_0)$ sends $[S_m]$ to $[S_m]$ and $[S_m/\operatorname{Syl}_p(S_m)]$ to $[S_m/\operatorname{Syl}_p(S_m)]$. Thus we obtain elements \[ [[S_m]], [[S_m/\operatorname{Syl}_p(S_m)]] \in \invlim{G/K \in
\curs{Or}(G;\mathcal{F})}{A(K)}. \] Define elements \[
|S_m| \cdot 1, |S_m/\operatorname{Syl}_p(S_m)| \cdot 1 \in \invlim{G/K \in
\curs{Or}(G;\mathcal{F})}{A(K)} \]
by the collection of elements $|S_m| \cdot
[K/K]$ and $|S_m/\operatorname{Syl}_p(S_m)| \cdot [K/K]$ in $A(K)$ for $K \in \mathcal{F}$. Thus we get elements \[
[[S_m]] - |S_m| \cdot 1, [[S_m/\operatorname{Syl}_p(S_m)]] - |S_m/\operatorname{Syl}_p(S_m)| \cdot 1 \in \invlim{G/K \in \curs{Or}(G;\mathcal{F})}{{\mathbb I}_K}. \]
The image of $[[S_m]] - |S_m| \cdot 1$ and
$[[S_m/\operatorname{Syl}_p(S_m)]] - |S_m/\operatorname{Syl}_p(S_m)| \cdot 1$ respectively under the structure map of the inverse limit $\invlim{G/K \in \curs{Or}(G;\mathcal{F})}{{\mathbb I}_K}$ for the object $G/H \in
\curs{Or}(G;\mathcal{F})$ is $[S_m] - |S_m| \cdot [H/H]$ and $[S_m/\operatorname{Syl}_p(S_m)] - |S_m/\operatorname{Syl}_p(S_m)| \cdot [H/H]$. Hence by assumption \begin{eqnarray*}
[S_m] - |S_m| \cdot [H/H] & \in & \mathcal{P}(K,p); \\
{[S_m/\operatorname{Syl}_p(S_m)]} - |S_m/\operatorname{Syl}_p(S_m)| \cdot [H/H] & \in & \mathcal{P}(K,p). \end{eqnarray*}
Therefore $\operatorname{char}^H_K \colon A(H) \to {\mathbb Z}$ sends both $[S_m] - |S_m| \cdot [H/H]$ and $[S_m/\operatorname{Syl}_p(S_m)] -
|S_m/\operatorname{Syl}_p(S_m)| \cdot [H/H]$ to elements in $p {\mathbb Z}$. Since $\operatorname{char}_K^H([S_m] -
|S_m| \cdot [H/H]) = 0 - |S_m|$ for $K \not= \{1\}$, we conclude that $K = \{1\}$ or that $p \not= 0$. If $K = \{1\}$, then ${\mathbb I}(H) = \mathcal{P}(\{1\},0)$ is contained in $\mathcal{P}(K,p)$. Suppose that $K \not= \{1\}$. Then $p$ is a prime. We have \begin{multline*}
\operatorname{char}^H_K\left([S_m/\operatorname{Syl}_p(S_m)] - |S_m/\operatorname{Syl}_p(S_m)| \cdot [H/H]\right) \\
= \left|\left(S_m/\operatorname{Syl}_p(S_m)\right)^K\right| - |S_m/\operatorname{Syl}_p(S_m)|. \end{multline*}
Since this integer must belong to $p {\mathbb Z}$ and $|S_m/\operatorname{Syl}_p(S_m)|$ is prime to $p$, we get $\left(S_m/\operatorname{Syl}_p(S_m)\right)^K \not= \emptyset$. Hence $K$ must be a $p$-group. This implies $\mathcal{P}(K,p) = \mathcal{P}(\{1\},p)$ and therefore ${\mathbb I}(H) = \mathcal{P}(\{1\},0) \subseteq \mathcal{P}(K,p)$. This finishes the proof of condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:ideals}.
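As a toy illustration of the argument just given (a degenerate case, only meant to make the test elements concrete), suppose that every subgroup in $\mathcal{F}$ has order at most $2$, so that we may take $m = 2$, and consider $H = {\mathbb Z}/2 \in \mathcal{F}$. Then $\rho_u \colon H \to S_2$ is an isomorphism, $S_2[\rho_u]$ is the free $H$-set with two elements and hence $[S_2] = [H/\{1\}]$ in $A(H)$. Thus \begin{eqnarray*} \operatorname{char}^H_H\left([S_2] - |S_2| \cdot [H/H]\right) & = & 0 - 2 = -2; \\ \operatorname{char}^H_{\{1\}}\left([S_2] - |S_2| \cdot [H/H]\right) & = & 2 - 2 = 0, \end{eqnarray*} so the condition $[S_2] - |S_2| \cdot [H/H] \in \mathcal{P}(K,p)$ forces $K = \{1\}$ or $p = 2$, just as in the general argument.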
Condition~\eqref{the:strategy_of_proof_of_the_completion_theorem:finite_groups} follows from the proof of the Segal Conjecture for a finite group $H$ due to Carlsson~\cite{Carlsson(1984)}. This finishes the proof of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}. \end{proof}
\typeout{-------------------- Section 6 --------------------------------------}
\section{An Improved Strategy for a Proof of a Completion Theorem} \label{sec:An_improved_Strategy_for_a_Proof_of_a_Completion_Theorem}
The next result follows from Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem}, Lemma~\ref{lem:edge_argument} and a construction of a modified Chern character analogous to the one in~\cite[Theorem~4.6 and Lemma~6.2]{Lueck(2005c)} which ensures that the condition about the differentials in the equivariant Atiyah-Hirzebruch spectral sequence appearing in Lemma~\ref{lem:edge_argument} is satisfied. We do not give more details here, since the interesting cases of the Segal Conjecture and of the Atiyah-Segal Completion Theorem are already covered by Theorem~\ref{the:Segal_Conjecture_for_infinite_groups} and by~\cite{Lueck-Oliver(2001a)}.
Let $G$ be a (discrete) group. Let $\mathcal{F}$ be a family of subgroups of $G$ such that there is an upper bound on the orders of the subgroups appearing in $\mathcal{F}$. Let $\mathcal{H}^?_*$ be an equivariant cohomology theory with values in $R$-modules which satisfies the disjoint union axiom. Define a contravariant functor \begin{equation} \mathcal{H}^q_{?}(\ast) \colon \curs{FGINJ} \to R\text{-}\curs{MODULES} \label{functor_coming_from_calh} \end{equation} with the category $\curs{FGINJ}$ of finite groups with injective group homomorphisms as source by sending an injective homomorphism $\alpha \colon H \to K$ to the composite \[ \mathcal{H}_K^q(\ast) \xrightarrow{\mathcal{H}^q_K(\operatorname{pr})} \mathcal{H}^q_K(K/H) \xrightarrow{\operatorname{ind}_{\alpha}} \mathcal{H}^q_H(\ast), \] where $\operatorname{pr} \colon K/H = \operatorname{ind}_{\alpha}(\ast) \to \ast$ is the projection and $\operatorname{ind}_{\alpha}$ comes from the induction structure of $\mathcal{H}_?^*$. Assume that $\mathcal{H}^?_*$ comes with a multiplicative structure.
\begin{theorem}[Improved Strategy for the proof of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}] \label{the:improved_strategy_of_proof_of_the_completion_theorem} Suppose that the following conditions are satisfied. \begin{enumerate}
\item\label{the:improved_strategy_of_proof_of_the_completion_theorem:Notherian} The ring
$\mathcal{H}^0(\ast)$ is Noetherian;
\item\label{the:improved_strategy_of_proof_of_the_completion_theorem:fin._gen.}
Let $H \subseteq G$ be any finite subgroup and $m \in {\mathbb Z}$ be any integer. Then the
$\mathcal{H}^0(\ast)$-module $\mathcal{H}^m_H(\ast)$ is finitely generated; there exists an integer
$C(H,m)$ such that multiplication by $C(H,m)$ annihilates the torsion submodule
$\operatorname{tors}_{{\mathbb Z}}(\mathcal{H}^m_H(\ast))$ of the abelian group $\mathcal{H}^m_H(\ast)$; and the $R$-module
$\mathcal{H}^m_H(\ast)/\operatorname{tors}_{{\mathbb Z}}(\mathcal{H}^m_H(\ast))$ is projective;
\item\label{the:improved_strategy_of_proof_of_the_completion_theorem:ideals}
Let $H$ be any element of $\mathcal{F}$. Let $\mathcal{P} \subseteq \mathcal{H}^0_H(\ast)$ be any prime
ideal. Then the augmentation ideal
\[ {\mathbb I}(H) = \ker \left(\mathcal{H}^0_H(\ast) \to \mathcal{H}^0_H(H) \xrightarrow{\cong}
\mathcal{H}^0(\ast)\right) \]
is contained in $\mathcal{P}$ if $\mathcal{P}$ contains the image of the
structure map for $H$ of the inverse limit
\[ \phi_H \colon \invlim{G/K \in \curs{Or}(G;\mathcal{F})}{{\mathbb I}(K)} \to {\mathbb I}(H); \]
\item\label{the:improved_strategy_of_proof_of_the_completion_theorem:finite_groups}
The Completion Theorem is true for every finite group
$H$ in the case $X = L = \ast$ and $f = \operatorname{id} \colon \ast \to \ast$, i.e., for every finite
group $H$ the map of pro-$\mathcal{H}^0(\ast)$-modules \begin{eqnarray*}
\lambda^m_H(\ast) \colon \{\mathcal{H}^m_H(\ast)/{\mathbb I}(H)^n\}_{n \ge 1} & \to & \{\mathcal{H}^m\left((BH)_{(n-1)}\right)\}_{n \ge 1} \end{eqnarray*} defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules;
\item\label{the:improved_strategy_of_proof_of_the_completion_theorem:Mackey} The contravariant functor~\eqref{functor_coming_from_calh} extends to a Mackey functor.
\end{enumerate}
Then the Completion Theorem is true for $\mathcal{H}^?_*$ and every $G$-map $f \colon X \to L$ from a finite proper $G$-$CW$-complex $X$ to a proper finite dimensional $G$-$CW$-complex $L$ with the property that there is an upper bound on the orders of the isotropy groups of $L$, i.e., the map of pro-$\mathcal{H}^0(\ast)$-modules \begin{eqnarray*}
\lambda^m_G(X) \colon \left\{\mathcal{H}^m_G(X)/{\mathbb I}_G(L)^n \cdot \mathcal{H}^m_G(X)\right\}_{n \ge 1} & \to & \left\{\mathcal{H}^m_G\left((EG \times_G X)_{(n-1)}\right)\right\}_{n \ge 1} \end{eqnarray*} defined in~\eqref{map_lambda_of_pro-modules} is an isomorphism of pro-$\mathcal{H}^0(\ast)$-modules. \end{theorem}
\begin{remark}\label{rem:advantage_of_the_improved_version}
The advantage of Theorem~\ref{the:improved_strategy_of_proof_of_the_completion_theorem}
in comparison with Theorem~\ref{the:strategy_of_proof_of_the_completion_theorem} is that
the conditions do not involve $L$ and $f \colon X \to L$ anymore and depend only on
the functor $\mathcal{H}^q_?(\ast) \colon \curs{FGINJ} \to {\mathbb Z}\text{ -}\curs{MODULES}$. If one considers
the case $R = {\mathbb Z}$ and assumes $\mathcal{H}^0(\ast) = {\mathbb Z}$, then
condition~\eqref{the:improved_strategy_of_proof_of_the_completion_theorem:Notherian} is
obviously satisfied and
condition~\eqref{the:improved_strategy_of_proof_of_the_completion_theorem:fin._gen.}
reduces to the condition that for any finite subgroup $H \subseteq G$ and any integer $m
\in {\mathbb Z}$ the abelian group $\mathcal{H}^m_H(\ast)$ is finitely generated. \end{remark}
\begin{remark}[Family version]\label{rem:family_version} We mention without proof that there is also a family version of Theorem~\ref{the:Segal_Conjecture_for_infinite_groups}. Its formulation is analogous to the one of the family version of the Atiyah-Segal Completion Theorem for infinite groups, see~\cite[Section~6]{Lueck-Oliver(2001b)}. \end{remark}
\typeout{-------------------- References -------------------------------}
\addcontentsline{toc}{section}{References}
\end{document} | arXiv |
\begin{document}
\title{The rank of variants of nilpotent pseudovarieties} \author{J. Almeida and M. H. Shahzamanian} \address{J. Almeida and M. H. Shahzamanian\\ Centro de Matemática e Departamento de Matemática, Faculdade de Ciências, Universidade do Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal} \email{[email protected]; [email protected]} \subjclass[2020]{Primary 20M07. Secondary 20M35} \keywords{Profinite semigroup, monoid, block group,
pseudovariety, Mal'cev nilpotent semigroup.}
\begin{abstract}
We investigate the rank of pseudovarieties defined by
several of the variants of nilpotency conditions for semigroups in
the sense of Mal'cev. For several of them, we provide finite bases
of pseudoidentities. We also show that the Neumann--Taylor variant
does not have finite rank.
\end{abstract} \maketitle
\section{Introduction}\label{pre}
A pseudovariety of semigroups is a class of finite semigroups closed under taking subsemigroups, homomorphic images, and finite direct products. By a theorem of Reiterman~\cite{Rei}, we may define a pseudovariety of semigroups by a set of pseudoidentities, which is called a basis. We say that a pseudovariety of semigroups is finitely based if it admits a finite basis of pseudoidentities.
Let $\mathsf{V}$ be a semigroup pseudovariety. We say that a semigroup $S$ is $n$-generated if there is some subset of~$S$ with at most $n$~elements that generates~$S$. We say that $\mathsf{V}$ has rank at most $n$ if, whenever a semigroup $S$ is such that all its $n$-generated subsemigroups belong to~$\mathsf{V}$, so does $S$. If there is such an integer $n\ge0$, then $\mathsf{V}$ is said to have finite rank and the minimum value of $n$ for which $\mathsf{V}$ has rank at most $n$ is called the rank of~$\mathsf{V}$ and is denoted by $\mbox{rk}(\mathsf{V})$. If there is no such $n$, then $\mathsf{V}$ is said to have infinite rank. The rank of $\mathsf{V}$ is less than or equal to the number of variables involved in the ultimate description of $\mathsf{V}$ by sequences of identities (in the sense of Eilenberg and Sch{\"u}tzenberger \cite{Eil-Sch}) or in the description by pseudoidentities. Hence, a semigroup pseudovariety with infinite rank is non-finitely based. The class $\mathsf{V}^{(n)}$ is defined as the collection of all finite semigroups $S$ such that every $n$-generated subsemigroup of $S$ belongs to $\mathsf{V}$. Note that the class $\mathsf{V}^{(n)}$ is a semigroup pseudovariety and if $\mathsf{V} =\mathsf{V}^{(n)}$ for some $n$ then $\mbox{rk}(\mathsf{V})\leq n$.
Mal'cev \cite{Mal} and independently Neumann and Taylor \cite{Neu-Tay} have shown that nilpotent groups may be defined by semigroup identities (that is, without using inverses). This leads to the notion of a nilpotent semigroup (in the sense of Mal'cev).
For a semigroup $S$ and elements $x,y,z_{1},z_{2}, \ldots$ one recursively defines two sequences $$\lambda_n=\lambda_{n}(x,y,z_{1},\ldots, z_{n})\quad{\rm and} \quad \rho_n=\rho_{n}(x,y,z_{1},\ldots, z_{n})$$ by $$\lambda_{0}=x, \quad \rho_{0}=y$$ and $$\lambda_{n+1}=\lambda_{n} z_{n+1} \rho_{n}, \quad \rho_{n+1}=\rho_{n} z_{n+1} \lambda_{n}.$$ We use the notation $S^1$: if $S$ has an identity element, then $S^1=S$; otherwise, $S^{1}=S\cup \{1\}$ is obtained by adjoining an identity element to $S$. A semigroup is said to be \emph{nilpotent}\footnote{Mal'cev's original
paper \cite{Mal} defines nilpotent semigroups in terms of a weaker
property since Mal'cev does not allow $c_1,\ldots,c_n$ to take the
value $1$ if $S\neq S^{1}$. The definition used here comes from
\cite{Lal}. By \cite[Lemma 3.2]{Alm-Kuf-Sha}, the two definitions
agree on the class of finite semigroups.} (MN) if \begin{displaymath}
\exists n\ge0\ \forall a,b\in S\ \forall c_{1}, \ldots , c_{n}\in S^{1},\
\lambda_{n}(a,b,c_{1},\ldots, c_{n}) = \rho_{n}(a,b,c_{1},\ldots,c_{n}). \end{displaymath}
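Since the words $\lambda_n$ and $\rho_n$ are given by a mutual recursion, they are easy to compute symbolically. The following Python sketch (our own illustration; the helper name `lam_rho` is not from the paper) builds them in the free semigroup, with words modeled as strings and the product as concatenation:

```python
def lam_rho(x, y, zs):
    # lambda_0 = x, rho_0 = y;
    # lambda_{n+1} = lambda_n z_{n+1} rho_n, rho_{n+1} = rho_n z_{n+1} lambda_n.
    # Free-semigroup words are modeled as strings; the product is concatenation.
    lam, rho = x, y
    for z in zs:
        lam, rho = lam + z + rho, rho + z + lam
    return lam, rho
```

With single-letter arguments the words roughly double in length at each step, since $|\lambda_{n+1}| = |\lambda_n| + |\rho_n| + 1$.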
Recall that a semigroup $S$ is said to be \emph{Neumann--Taylor} \cite{Neu-Tay} (NT) if \begin{displaymath}
\exists n\ge2\ \forall a,b \in S\ \forall c_2,\ldots,c_n \in S^1,\
  \lambda_n(a,b,1,c_2,\ldots,c_n)=\rho_n(a,b,1,c_2,\ldots,c_n). \end{displaymath} A semigroup $S$ is said to be \emph{positively Engel} (PE) if \begin{align*}
&\exists n\ge2\ \forall a,b \in S\ \forall c\in S^{1},\\
&\quad
\lambda_{n}(a,b,1,1,c,c^{2},\ldots ,c^{n-2})
=\rho_{n} (a,b,1,1,c,c^{2},\ldots ,c^{n-2}), \end{align*} while $S$ is said to be \emph{Thue--Morse} (TM) if \begin{displaymath}
\exists n\ge0\ \forall a,b \in S,\
\lambda_{n}(a,b,1,1,\ldots,1)=\rho_{n}(a,b,1,1,\ldots,1). \end{displaymath}
Each of the above classes of finite semigroups (nilpotent, Neumann--Taylor, positively Engel, Thue--Morse) constitutes a pseudovariety. Actually, the above descriptions are examples of ultimate equational definitions of pseudovarieties in the sense of Eilenberg and Sch{\"u}tzenberger \cite{Eil-Sch}. We denote them respectively by $\mathsf{MN}$, $\mathsf{NT}$, $\mathsf{PE}$ and $\mathsf{TM}$.
In this paper, we investigate the rank of pseudovarieties defined by Mal’cev nilpotency conditions. In particular, we show that the pseudovariety $\mathsf{NT}$ has infinite rank and, therefore, it is non-finitely based.
We denote by $\mathsf{G_{nil}}$ the pseudovariety of all finite nilpotent groups and by $\mathsf{\overline{G}_{nil}}$ the pseudovariety of all finite semigroups whose subgroups belong to $\mathsf{G_{nil}}$. The pseudovariety $\mathsf{BG}$ is the collection of all finite block groups, that is, finite semigroups in which each element has at most one inverse and the pseudovariety $\mathsf{BG_{nil}}$ is the collection of all finite block groups whose subgroups are nilpotent. Every finite semigroup that is not nilpotent has a subsemigroup that is minimal for not being nilpotent, in the sense that every proper subsemigroup and every Rees factor semigroup is nilpotent. Semigroups with this minimality condition have been described in \cite{Jes-Sha3} where they are called minimal non-nilpotent\ semigroups. It was shown that a minimal non-nilpotent\ semigroup in the pseudovariety $\mathsf{\overline{G}_{nil}}$ is of one of four types of semigroups. These four types of semigroups are a right or left zero semigroup or the union of a completely $0$-simple inverse ideal with a $2$-generated subsemigroup. A minimal non-nilpotent semigroup $S$ which is not a group or a right or left zero semigroup has a completely $0$-simple inverse ideal $M$ and $S$ acts on the ${\mathcal
R}$-classes of $M$ where the different types of orbits of this action determine the type of $S$. The pseudovariety $\mathsf{BG_{nil}}$ does not contain any semigroup of the first type but may have semigroups of other types, the pseudovariety $\mathsf{PE}$ does not contain any semigroup of the first and second types but may have semigroups of other types, and the pseudovariety $\mathsf{MN}$ does not contain any semigroup of any of these types. In this paper, we define a pseudovariety by excluding non-nilpotent groups and the first three types of minimal non-nilpotent semigroups, while allowing the fourth type. We name this pseudovariety $\mathsf{NMN_{4}}$. Thus, it is a class of semigroups that is just one step away from being $\mathsf{MN}$ while being contained in the pseudovariety $\mathsf{PE}$; it therefore deserves our attention and we dedicate a section to it.
It turns out that $\mathsf{NMN_{4}}$ sits strictly between the pseudovarieties $\mathsf{NT}$ and $\mathsf{PE}$.
We calculate the ranks of the pseudovarieties $\mathsf{MN}$, $\mathsf{NMN_{4}}$, $\mathsf{PE}$ and $\mathsf{TM}$. They are respectively $4$, $3$, $2$ and $2$.
Let $\mathcal{C}$ be a class of finite semigroups. The pseudovariety generated by $\mathcal{C}$ is denoted by $\langle \mathcal{C} \rangle$. Let $\mathsf{Inv}$ be the class of all finite inverse semigroups and $\mathsf{A}$ be the pseudovariety of all finite aperiodic semigroups. At the end of the paper, we compare the pseudovarieties $\langle\mathsf{Inv}\cap\mathsf{A}\rangle$ and $\langle\mathsf{Inv}\rangle\cap \mathsf{A}$
with the pseudovariety $\mathsf{MN}\cap \mathsf{A}$. We show that the pseudovariety $\langle\mathsf{Inv}\cap\mathsf{A}\rangle$ is strictly contained in $\mathsf{MN}\cap\mathsf{A}$ and the pseudovarieties $\langle\mathsf{Inv}\rangle\cap\mathsf{A}$ and $\mathsf{MN}\cap \mathsf{A}$ are incomparable. We further compare the pseudovarieties $\mathsf{MN}$, $\mathsf{NT}$, $\mathsf{NMN_{4}}$, $\mathsf{PE}$, $\mathsf{TM}$, $\mathsf{MN}^{(2)}$, and $\mathsf{MN}^{(3)}$. The diagrams in Figure~\ref{fig1} represent the strict inclusion relationships and equalities between these pseudovarieties respectively in the general case and in the aperiodic case.
\begin{figure}
\caption{Two pseudovariety posets}
\label{fig1}
\end{figure}
Some of our arguments depend on checking certain properties of specific finite semigroups. Although in principle such calculations could be carried out by hand, they may be done much faster with the help of a computer. For this purpose, we used the well-established programming language GAP \cite{Gap}. The relevant programs are included in an appendix.
\section{Preliminaries}\label{sec:prelims}
For standard notation and terminology relating to finite semigroups, refer to \cite{Alm, Cli}.
We denote by $\mathcal{B}_n(G)$ an $n\times n$ Brandt semigroup over a group $G$. Note that $\mathcal{B}_n(G)$ is an inverse completely $0$-simple semigroup.
For elements $s_1,\ldots,s_n$ of a semigroup $S$, we denote by $\langle s_1,\ldots,s_n\rangle$ the subsemigroup that they generate.
Jespers and Okni{\'n}ski proved that a completely $0$-simple semigroup $S$ with a maximal subgroup $G$ is nilpotent if and only if $G$ is nilpotent and $S$ is a Brandt semigroup over $G$ \cite[Lemma 2.1]{Jes-Okn}. The next lemma gives a necessary and sufficient condition for a finite semigroup not to be nilpotent.
\begin{lem}[\cite{Jes-Sha}]
\label{finite-nilpotent}
A finite semigroup $S$ is not nilpotent if and only if there exist a
positive integer $m$, distinct elements $x, y\in S$, and elements
$w_{1},\ldots, w_{m}\in S^{1}$ such that the equalities $x =
\lambda_{m}(x, y, w_{1}, \ldots, w_{m})$ and $y = \rho_{m}(x,y,
w_{1}, \ldots, w_{m})$ hold. \end{lem}
Assume that a finite semigroup $S$ has a proper ideal $M=\mathcal{B}_n(G)$ with $n>1$.
There is an action $\Gamma$ of $S$ on the ${\mathcal R}$-classes of $M$ which plays a key role in~\cite{Jes-Sha3}. In this paper, we consider the dual definition of this action and denote it again by $\Gamma$. We define the action $\Gamma$ of $S$ on the ${\mathcal L}$-classes of $M$, which is a representation (a semigroup homomorphism) $\Gamma : S\longrightarrow \mathcal{T}$, where $\mathcal{T}$ denotes the full transformation semigroup on the set $\{1, \ldots, n\} \cup \{\theta\}$. The definition is as follows, for $1\leq j\leq n$ and $s\in S$,
$$\Gamma(s)(j) = \left\{ \begin{array}{ll}
j' & \mbox{if} ~(g;i,j)s=(g';i,j') ~ \mbox{for some} ~ g, g' \in G, \, 1\leq i \leq n\\
\theta & \mbox{otherwise}\end{array} \right.$$ and
$\Gamma (s)(\theta ) =\theta$. The representation $\Gamma$ is called the \emph{${\mathcal L}$-representation} of $S$.
For every $s \in S$, $\Gamma(s)$ may be written as a product of \emph{orbits} which are cycles of the form $(j_{1}, j_{2}, \ldots, j_{k})$ or sequences of the form $(j_{1}, j_{2}, \ldots, j_{k}, \theta)$, where $1\leq j_{1}, \ldots, j_{k}\leq n$. The latter orbit means that $\Gamma(s)(j_{i})=j_{i+1}$ for $1\leq i\leq k-1$, $\Gamma(s)(j_{k})=\theta$, $\Gamma(s)(\theta) =\theta$, and there does not exist $1\leq r \leq n$ such that $\Gamma(s)(r)=j_{1}$. Orbits of the form $(j)$ with $j\in\{1,\ldots,n\}$ are written explicitly in the orbit decomposition of $\Gamma(s)$. By convention, we omit orbits of the form $(j, \theta)$ in the decomposition of $\Gamma(s)$ (this is the reason for writing cycles of length one). If $\Gamma(s)(j)=\theta$ for every $1\leq j \leq n$, then we simply denote $\Gamma(s)$ by $\overline{\theta}$.
If all orbits of a transformation $\varepsilon$ appear in the expression of $\Gamma(s)$ as a product of disjoint orbits, then we denote this by $\varepsilon\subseteq \Gamma(s)$. If $$\Gamma(s)(j_{i,1})=j_{i,2}, \ldots,\ \Gamma(s)(j_{i,p_i-1})=j_{i,p_i}\quad (i=1,\ldots,q)$$ then we write \begin{equation}\label{[]} [j_{1,1},j_{1,2}, \ldots,j_{1,p_1};\ldots;j_{i,1},j_{i,2}, \ldots,j_{i,p_i};\ldots; j_{q,1},j_{q,2}, \ldots,j_{q,p_q}] \sqsubseteq \Gamma(s). \end{equation} Note that (\ref{[]}) does not imply that $\Gamma(s)(j_{i,p_i})=j_{i,1}$, in contrast with inclusion $(j_{i,1},j_{i,2}, \ldots,j_{i,p_i})\subseteq \Gamma(s)$. Also (\ref{[]}) does not imply that there does not exist $1\leq r \leq n$ such that $\Gamma(s)(r)=j_{i,1}$, in contrast with the inclusion $(j_{i,1},j_{i,2}, \ldots,j_{i,p_i},\theta)\subseteq \Gamma(s)$.
Note that, if $g\in G$ and $1 \leq n_1, n_2 \leq n$ with $n_1 \neq n_2$ then $$\Gamma((g;n_1,n_2)) = (n_1,n_2,\theta) \mbox{ and } \Gamma((g;n_1,n_1)) = (n_1).$$ Therefore, if the group $G$ is trivial, then we may view the elements of $M$ as transformations.
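As a concrete check of the last observation, the ${\mathcal L}$-representation of a Brandt semigroup over the trivial group can be computed directly from the definition. The Python sketch below (our own illustration; it encodes the element $(1;i,j)$ as the pair `(i, j)` and $\theta$ as the string `"theta"`) recovers $\Gamma((g;n_1,n_2)) = (n_1,n_2,\theta)$ and $\Gamma((g;n_1,n_1)) = (n_1)$:

```python
THETA = "theta"

def brandt_mul(s, t):
    # Multiplication in B_n({1}): (i, j)(k, l) = (i, l) if j == k, else the zero.
    if s == THETA or t == THETA:
        return THETA
    (i, j), (k, l) = s, t
    return (i, l) if j == k else THETA

def gamma(s, n):
    # L-representation: Gamma(s)(j) = j' when (1; i, j) * s = (1; i, j'),
    # and theta otherwise (the row index i is irrelevant; we take i = 1).
    out = {THETA: THETA}
    for j in range(1, n + 1):
        r = brandt_mul((1, j), s)
        out[j] = THETA if r == THETA else r[1]
    return out
```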
Let $T$ be a semigroup with a zero $\theta_T$ and let $M=\mathcal{B}_n(G)$ be a Brandt semigroup. Let $\Delta$ be a representation of $T$ in the full transformation semigroup on the set $\{ 1, \ldots, n\} \cup \{\theta\}$ such that for every $t\in T$, $\Delta(t) (\theta ) =\theta$, $\Delta^{-1}(\overline{\theta})= \{\theta_T\}$, and
$\Delta(t)$ restricted to $\{ 1, \ldots, n\} \setminus \Delta(t)^{-1}(\theta)$ is injective.
The semigroup $S=M \cup^{\Delta} T$ is the $\theta$-disjoint union of
$M$ and $T$ (that is, the disjoint union with the zeros identified).
The multiplication is such that $T$ and $M$ are subsemigroups, $$(g;i,j) \, t = \left\{ \begin{array}{ll}
(g; i,\Delta (t)(j)) & \mbox{ if } \Delta (t)(j) \neq \theta\\
\theta & \mbox{ otherwise,}
\end{array} \right.
$$ and $$ t(g;i,j) = \left\{ \begin{array}{ll}
(g;i',j) & \mbox{ if } \Delta (t)(i')=i\\
\theta &\mbox{ otherwise. }
\end{array} \right.
$$
Let $\mathsf{V}$ be a pseudovariety of finite semigroups. A pro-$\mathsf{V}$ semigroup is a compact semigroup that is residually $\mathsf{V}$. For the pseudovariety $\mathsf{S}$ of all finite semigroups, we call pro-$\mathsf{S}$ semigroups profinite semigroups. We denote by $\overline{\Omega}_{A}\mathsf{V}$ the free pro-$\mathsf{V}$ semigroup on the set $A$. Such free objects are characterized by appropriate universal properties: the profinite semigroup $\overline{\Omega}_{A}\mathsf{V}$ comes endowed with a mapping $\iota\colon A \rightarrow \overline{\Omega}_{A}\mathsf{V}$ such that, for every mapping $\phi \colon A \rightarrow S$ into a pro-$\mathsf{V}$ semigroup $S$, there exists a unique continuous homomorphism $\widehat{\phi}\colon \overline{\Omega}_{A}\mathsf{V} \rightarrow S$ such that $\widehat{\phi}\circ\iota=\phi$. For more details on this topic we refer the reader to \cite{Alm}.
Let $r$ be an integer. We denote the free pro-$\mathsf{V}$ semigroup on the set $\{x_1,\ldots,x_r\}$ by $\overline{\Omega}_{r}\mathsf{V}$. Recall that a pseudoidentity (over $\mathsf{V}$) is a formal equality $\pi = \rho$ between $\pi,\rho\in \overline{\Omega}_{r}\mathsf{V}$ for some integer $r$. For a set $\Sigma$ of $\mathsf{V}$-pseudoidentities, we denote by $\llbracket \Sigma \rrbracket_{\mathsf{V}}$ (or simply $\llbracket \Sigma \rrbracket$ if $\mathsf{V}$ is understood from the context) the class of all $S \in \mathsf{V}$ that satisfy all pseudoidentities from $\Sigma$. Reiterman \cite{Rei} proved that a subclass $\mathsf{V}$ of a pseudovariety $\mathsf{W}$ is a pseudovariety if and only if $\mathsf{V}$ is of the form $\llbracket \Sigma \rrbracket_{\mathsf{W}}$ for some set $\Sigma$ of $\mathsf{W}$-pseudoidentities.
Let $r\geq 1$ be an integer, let $S$ be a pro-\pv V semigroup, and let $\pi\in\overline{\Omega}_{r}\mathsf{V}$. The operation $\pi_S\colon S^r \rightarrow S$ is defined as follows: $$\pi_S(s_1,\ldots,s_r)=\widehat{\phi}(\pi)$$ where $\phi\colon \{x_1,\ldots,x_r\} \rightarrow S$ is the mapping given by $\phi(x_i)=s_{i}$, for all $1\leq i\leq r$, and $\widehat{\phi}$ is as defined above. The families of operations $(\pi_S)_{S\in\pv V}$ that may be obtained in this way are precisely those that commute with homomorphisms between members of~\pv V and are called implicit operations. Moreover, the correspondence $\pi\in\overline{\Omega}_{r}\mathsf{V}\mapsto(\pi_S)_{S\in\pv V}$ is injective and one often identifies $\pi$ with its image.
For an element $s$ of a finite semigroup $S$, $s^{\omega}$ denotes the unique idempotent power of $s$. This defines a unary implicit operation $x \mapsto x^{\omega}$ on finite semigroups. In particular, given an element $s$ of a profinite semigroup $S$, we may consider $s^\omega=(x^\omega)_S(s)$. Note that $s^{\omega}$ is the limit of the sequence $(s^{n!})_n$.
For a profinite semigroup $S$, let $\mathrm{End}(S)$ be the monoid of all continuous endomorphisms of $S$. In case $S$ has a finitely generated dense subsemigroup, it turns out that $\mathrm{End}(S)$ is a profinite monoid under the pointwise convergence topology, that is, as a subspace of the product space $S^S$ \cite{Hunter:1983} (see also \cite{Steinberg:2010pm}). In particular, given $\varphi$ in $\mathrm{End}(S)$, we may consider the continuous endomorphism $\varphi^\omega$, which is the pointwise limit of the sequence $(\varphi^{n!})_n$.
As examples, consider the pseudovarieties $\mathsf{G_{nil}}$ and $\mathsf{BG}$. We have \begin{displaymath}
\mathsf{G_{nil}}
=\llbracket
\psi^{\omega}(x)=x^{\omega},x^{\omega}y=yx^{\omega}=y
\rrbracket \end{displaymath} where $\psi$ is the continuous endomorphism of the free profinite semigroup on $\{x,y\}$ such that $\psi(x)=x^{\omega-1}y^{\omega-1}xy$ and $\psi(y)=y$ \cite[Example 4.15(2)]{Alm2}, and \begin{displaymath}
\mathsf{BG}=\llbracket (ef)^{\omega}=(fe)^{\omega} \rrbracket \end{displaymath} where $e=x^{\omega}$, $f=y^{\omega}$ (see, for example, \cite[Exercise~5.2.7]{Alm}).
\section{The pseudovariety \pv{MN}} \label{sec:pvy-pvmn}
Jespers and Riley \cite{Jes-Ril} called a semigroup $S$ \emph{weakly
Mal’cev nilpotent} (WMN) if \begin{align*}
&\exists n\ge0\ \forall a,b\in S\ \forall c_{1},c_2\in S^{1},\\
&\quad
\lambda_{n}(a,b,c_{1},c_2,c_1,c_2,\ldots)
= \rho_{n}(a,b,c_{1},c_2,c_1,c_2,\ldots). \end{align*} They proved that a linear semigroup $S$ is MN if and only if $S$ is WMN \cite[Corollary 12]{Jes-Ril}. Recall that a linear semigroup is a subsemigroup of a matrix monoid $M_n(K)$ over a field $K$. In particular, all finite semigroups are linear. The following lemma gives a slightly simpler characterization of Mal'cev nilpotency for finite semigroups by avoiding reference to the adjoined identity element.
\begin{lem}\label{finite-nil-dif}
A finite semigroup $S$ is Mal'cev nilpotent if and only if
\begin{align}
\label{eq:finite-nil-dif}
&\exists n\ge0\ \forall a,b,c_1,c_2\in S,\\
&\quad\notag
\lambda_n(a,b,c_1,c_2,c_1,c_2,\ldots) =
\rho_n(a,b,c_1,c_2,c_1,c_2,\ldots).
\end{align} \end{lem}
\begin{proof}
Suppose that there exists a finite semigroup $S$ that satisfies
\eqref{eq:finite-nil-dif}
but is not Mal'cev nilpotent.
If $S\not\in \mathsf{BG}_{nil}$, then there exists a regular
${\mathcal J}$-class $\mathcal{M}^0(G,n,m;P)\setminus \{\theta\}$ of
$S$ such that one of the following conditions holds:
\begin{enumerate}[label=(\roman*)]
\item\label{item:finite-nil-dif-1} $G$ is not a nilpotent group;
\item\label{item:finite-nil-dif-2} there exist integers $1\leq
i_1<i_2\leq n$ and $1\leq j\leq m$ such that $p_{ji_1},p_{ji_2}\neq
\theta$;
\item\label{item:finite-nil-dif-3} there exist integers $1\leq i\leq
n$ and $1\leq j_1<j_2\leq m$ such that $p_{j_1i},p_{j_2i}\neq
\theta$.
\end{enumerate}
  In case \ref{item:finite-nil-dif-1}, $G$ is not MN and thus, since
  finite semigroups are linear, $G$ is not WMN. Since $G$ has an
  identity, $G$ does not satisfy the condition of the lemma, a
  contradiction. If
\ref{item:finite-nil-dif-2} holds, then
\begin{align*}
&\lambda_n((1_G;i_1,j),(1_G;i_2,j),
(1_G;i_1,j),(1_G;i_1,j),(1_G;i_1,j),(1_G;i_1,j),\ldots) \neq\\
&\rho_n((1_G;i_1,j),(1_G;i_2,j),
(1_G;i_1,j),(1_G;i_1,j),(1_G;i_1,j),(1_G;i_1,j),\ldots),
\end{align*}
for every integer $n\geq 0$, which contradicts the assumption that
$S$ satisfies~\eqref{eq:finite-nil-dif}. Similarly, we reach a
contradiction assuming Condition \ref{item:finite-nil-dif-3}.
Now, suppose that $S\in \mathsf{BG}_{nil}$. Since
$S\not\in\mathsf{MN}$, by Lemma~\ref{finite-nilpotent} there exist a
positive integer $m$, distinct elements $x, y\in S$ and elements $
w_{1}, w_{2}, \ldots, w_m\in S^{1}$ such that
\begin{eqnarray}
\label{nil-fi}
x = \lambda_{m}(x, y, w_1, \ldots, w_m)
\mbox{ and }
y = \rho_{m}(x,y, w_1,\ldots, w_m).
\end{eqnarray}
As $x\neq y$ and $S\in \mathsf{BG}_{nil}$, by \eqref{nil-fi}, there
exists a regular ${\mathcal
J}$-class $$M=\mathcal{M}^0(G,n,n;I_n)\setminus \{\theta\}$$
of $S$ such that $x,y\in M$. Then, there exist elements
$(g;i,j),(g';i',j')\in M$ such that $x=(g;i,j)$ and $y=(g';i',j')$.
By \eqref{nil-fi}, we also obtain that $[j,i';j',i] \sqsubseteq
\Gamma(w_1)$ and $[j,i;j',i'] \sqsubseteq \Gamma(w_2)$, where
$\Gamma$ is an ${\mathcal L}$-representation of $S$. If $w_1=1$,
then we have $j=i'$ and $j'=i$. In case $i\neq j$, we get
$(i,j)\subseteq \Gamma(w_2)$. Thus, we have
\begin{align*}
&\lambda_{n}((1_G;i,i),(1_G;j,j),w_2,w_2^2,w_2,w_2^2,\ldots) \neq\\
&\rho_{n}((1_G;i,i),(1_G;j,j),w_2,w_2^2,w_2,w_2^2,\ldots),
\end{align*}
for every integer $n\ge0$, which contradicts the assumption that $S$
satisfies~\eqref{eq:finite-nil-dif}. If $i=j$, then we have
$i=j=i'=j'$ which entails that $G$ is not nilpotent, in
contradiction with the assumption that $S\in \mathsf{BG}_{nil}$.
Also, if $w_2=1$, we reach a similar contradiction. Therefore, we
have $w_1,w_2\neq 1$. If $(i,j)=(i',j')$, then we conclude that $G$
is not nilpotent, contradicting the assumption that $S\in
\mathsf{BG}_{nil}$. Hence, we have $(i,j)\neq(i',j')$. Now, as
$[j,i';j',i] \sqsubseteq \Gamma(w_1)$ and $[j,i;j',i'] \sqsubseteq
\Gamma(w_2)$ we have
\begin{align*}
&\lambda_{n}((1_G;i,j),(1_G;i',j'),w_1,w_2,w_1,w_2,\ldots) \neq\\
&\rho_{n}((1_G;i,j),(1_G;i',j'),w_1,w_2,w_1,w_2,\ldots),
\end{align*} for every integer $n\ge0$.
The result follows. \end{proof}
Combining Lemma~\ref{finite-nil-dif} with the same method as in the proof of \cite[Lemma 2.2]{Jes-Sha}, one obtains the following lemma.
\begin{lem}
\label{finite-nilpotentW1W2}
A finite semigroup $S$ is not nilpotent if and only if there exist a
positive integer $m$, distinct elements $x, y\in S$ and elements $w_1,w_2\in S$
such that the equalities
$$x =
\lambda_{m}(x, y, w_1, w_2,w_1, w_2,\ldots)\ \text{and}\ y = \rho_{m}(x,y,w_1, w_2,w_1, w_2,\ldots)$$ hold. \end{lem}
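This lemma turns Mal'cev nilpotency of a concrete finite semigroup into a finite search: $S$ fails to be nilpotent exactly when some pair of distinct elements $(x,y)$ returns to itself under the steps $(a,b)\mapsto(awb,\,bwa)$ with $w$ alternating between two elements $w_1,w_2$ of $S$; since the pair-and-parity state space has size $2\vert S\vert^2$, such a return, if it happens at all, happens within $2\vert S\vert^2$ steps. The paper's appendix performs computations of this kind in GAP; the following Python sketch (the function name `is_mn` is ours) is an independent brute-force version:

```python
from itertools import product

def is_mn(elts, mul):
    # Searches for distinct x, y and elements w1, w2 such that the pair
    # (x, y) returns to itself under the alternating steps
    # (a, b) -> (a*w*b, b*w*a); such a return certifies non-nilpotency.
    bound = 2 * len(elts) ** 2 + 1   # a returning pair must return this fast
    for x, y, w1, w2 in product(elts, repeat=4):
        if x == y:
            continue
        a, b = x, y
        for m in range(bound):
            w = w1 if m % 2 == 0 else w2
            a, b = mul(mul(a, w), b), mul(mul(b, w), a)
            if (a, b) == (x, y):
                return False
    return True
```

For example, a two-element left zero semigroup is not nilpotent, while a null semigroup and the group $\mathbb{Z}/2\mathbb{Z}$ are.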
Also using Lemma~\ref{finite-nil-dif}, one immediately deduces the following theorem.
\begin{thm}
\label{PS MN}
The following equality holds
\begin{displaymath}
\mathsf{MN}=\llbracket \phi^{\omega}(x)=\phi^{\omega}(y) \rrbracket
\end{displaymath}
where $\phi$ is the continuous endomorphism of the free profinite
semigroup on $\{x,y,z,t\}$ such that $\phi(x)=xzytyzx$,
$\phi(y)=yzxtxzy$, $\phi(z)=z$, and $\phi(t)=t$. \end{thm}
In particular, Theorem~\ref{PS MN} shows that the pseudovariety $\mathsf{MN}$ is finitely based.
\section{The pseudovariety \pv{PE}} \label{sec:pvy-pvpe}
As in \cite{Jes-Ril}, we denote by $F_{7}$ the 7-element semigroup which is the $\theta$-disjoint union of the Brandt semigroup $\mathcal{B}_2(\{1\})$ with the cyclic group $\{1,u\}$ of order~$2$ with a zero $\theta$ adjoined: \begin{eqnarray} \label{ex2} F_{7} &=& \mathcal{B}_2(\{1\}) \cup^{\Gamma_{F_7}} \{1,u,\theta\} ,\end{eqnarray} where $\Gamma_{F_7}(u)=(1,2)$ and $\Gamma_{F_7}(1)=(1)(2)$. Note that $F_{7}=\langle u,(1;1,1)\rangle$. In fact, $F_{7}$ is a member of the family of minimal non-cryptic inverse semigroups considered by Reilly in \cite{Reil}.
We recall that a semigroup $S$ divides a semigroup $T$ and we write $S\prec T$ if $S$ is a homomorphic image of a subsemigroup of $T$. We say that a finite semigroup $S$ is $\times$-prime whenever, for all finite semigroups $T_1$ and $T_2$, if $S\prec T_1\times T_2$, then $S\prec T_1$ or $S\prec T_2$. For example, a right or left zero semigroup with 2 elements is $\times$-prime \cite[Exercise 9.3.1(a)]{Alm}.
A more sophisticated example of $\times$-prime semigroups is $F_{7}$ \cite[Theorem 7.3.10]{Rho-Ste}. It follows that the class of all finite semigroups $S$ such that $F_7\not\prec S$ is a pseudovariety.
In \cite{Jes-Ril}, it is proved that a finite semigroup $S$ is positively Engel if and only if the following properties hold: \begin{enumerate} \item all non-null principal factors of $S$ are inverse semigroups whose maximal subgroups are nilpotent groups. \item $S$ does not have an epimorphic image which admits $F_{7}$ as a subsemigroup. \end{enumerate} We rewrite that result as Theorem~\ref{PE F_7}.
\begin{thm}
\label{PE F_7}
Let $S$ be a finite semigroup. The semigroup $S$ is positively Engel
if and only if $S\in\mathsf{BG_{nil}}$ and $F_7\not\prec S$. \end{thm}
\begin{proof}
Suppose that $S\in\mathsf{BG_{nil}}$ and $S\not\in\mathsf{PE}$. Then
there exists an onto homomorphism $\phi:S\rightarrow S'$ such that
$F_7$ is a subsemigroup of $S'$. Let $U=\phi^{-1}(F_7)$. Then
$\phi\mid_U:U\rightarrow F_7$ is an onto homomorphism, so that
$F_7\prec S$.
Now, suppose that $F_7\prec S$. Then, there exist a subsemigroup $U$ of $S$ and an onto homomorphism $\phi:U\rightarrow F_7$. There exist elements $a,w\in U$ such that $\phi(a)=(1;1,1)$ and $\phi(w)=u$. Since \begin{align*} \lambda_{2n}((1;1,1),u,1,1,u,u^2,\ldots,u^{n-2})&=(1;1,1),\\ \rho_{2n}((1;1,1),u,1,1,u,u^2,\ldots,u^{n-2})&=(1;2,2),\\ \lambda_{2n-1}((1;1,1),u,1,1,u,u^2,\ldots,u^{n-2})&=(1;1,2),\\ \rho_{2n-1}((1;1,1),u,1,1,u,u^2,\ldots,u^{n-2})&=(1;2,1), \end{align*} for every positive integer $n$, we have $$\lambda_n(a,w,1,1,w,w^2,\ldots,w^{n-2})\neq\rho_n(a,w,1,1,w,w^2,\ldots,w^{n-2}),$$ for every positive integer $n$. This shows that $S\not\in\mathsf{PE}$. The result follows. \end{proof}
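The computation displayed in the proof can be replayed mechanically by realizing $F_7$ inside the full transformation semigroup on $\{1,2\}\cup\{\theta\}$: as in the ${\mathcal L}$-representation, the Brandt element $(1;i,j)$ becomes the partial map $i\mapsto j$ and $u$ the swap $(1,2)$. The Python sketch below (our own illustration; the name `pe_pair` is ours) confirms that $\lambda_n$ and $\rho_n$ cycle through the four displayed values and never agree:

```python
TH = 0  # the absorbing point theta

def compose(s, t):
    # product s*t acting on the right: j . (s*t) = (j . s) . t
    return tuple(t[s[j] - 1] if s[j] != TH else TH for j in range(2))

def e(i, j):
    # the Brandt element (1; i, j) of B_2({1}) as the partial map i -> j
    return tuple(j if k == i else TH for k in (1, 2))

one = (1, 2)  # adjoined identity of F_7
u = (2, 1)    # the order-2 element, Gamma_{F_7}(u) = (1, 2)

def pe_pair(a, b, c, n):
    # lambda_n(a, b, 1, 1, c, c^2, ..., c^(n-2)) and the matching rho_n
    zs, ck = [one, one], one
    for _ in range(n - 2):
        ck = compose(ck, c)
        zs.append(ck)
    lam, rho = a, b
    for z in zs[:n]:
        lam, rho = compose(compose(lam, z), rho), compose(compose(rho, z), lam)
    return lam, rho
```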
\begin{prop}
\label{PS PE identity}
The sequence
\begin{displaymath}
    \Bigl(\bigl(
    \lambda_{n!}(x,y,z,z^{2},\ldots ,z^{n!}),
    \rho_{n!}(x,y,z,z^{2},\ldots,z^{n!})\bigr)
    \Bigr)_{n\in \mathbb{N}}
\end{displaymath}
converges in
$(\overline{\Omega}_{\{x,y,z\}}\mathsf{S})^2$. \end{prop}
\begin{proof}
For simplicity, we
let
\begin{displaymath}
\overline{\lambda}_{i}=\lambda_{i}(x,y,z,\ldots ,z^{i})
\mbox{ and }
\overline{\rho}_{i}=\rho_{i}(x,y,z,\ldots ,z^{i})
\end{displaymath}
for every $1\leq i$. Suppose that $S$ is a finite semigroup and
$\phi:\overline{\Omega}_{\{x,y,z\}}\mathsf{S}\rightarrow S$ is a
  continuous homomorphism. Let $k=\vert S\vert$. By the pigeonhole principle, there
exist positive integers $t$ and $r$ such that $t<r\leq k^3+1$ and
\begin{align*}
&\Bigl(\phi\bigl(\lambda_{t}(x,y,z,\ldots ,z^{t})\bigr),
\phi\bigl(\rho_{t}(x,y,z,\ldots ,z^{t})\bigr),\phi(z^{t+1})\Bigr)\\
&=\Bigl(\phi\bigl(\lambda_{r}(x,y,z,\ldots ,z^{r})\bigr),
\phi\bigl(\rho_{r}(x,y,z,\ldots ,z^{r})\bigr),\phi(z^{r+1})\Bigr).
\end{align*}
Let $s$ be a positive integer such that $t\leq s(r-t)< r$. Then, the
equality
\begin{displaymath}
\bigl(\phi(\overline{\lambda}_{n!}),\phi(\overline{\rho}_{n!})\bigr)
=\bigl(\phi(\overline{\lambda}_{(n+1)!}),\phi(\overline{\rho}_{(n+1)!})\bigr)
\end{displaymath}
holds for every $n>s(r-t)$. The result follows. \end{proof}
We denote by $(\lambda_{\mathsf{PE}},\rho_{\mathsf{PE}})$ the limit of the sequence $(\overline{\lambda}_{n!},\overline{\rho}_{n!})_{n}$ in the preceding proposition.
\begin{thm} \label{PS PE} The following equality holds \begin{displaymath} \mathsf{PE}=\mathsf{TM}\cap\llbracket \phi(\lambda_{\mathsf{PE}})=\phi(\rho_{\mathsf{PE}})\rrbracket \end{displaymath} where $\phi$ is the continuous endomorphism of the free profinite semigroup on $\{x,y,z\}$ such that $\phi(x)=xzzx$, $\phi(y)=zxxz$, $\phi(z)=z$. \end{thm}
\begin{proof}
Suppose that $S\in \mathsf{TM}\cap\llbracket
\phi(\lambda_{\mathsf{PE}})=\phi(\rho_{\mathsf{PE}})\rrbracket$ and
$F_7\prec S$. Then there exist a subsemigroup $U$ of $S$ and an onto
homomorphism $\psi:U\rightarrow F_7$. Let $a,b$ be elements of $U$
such that $\psi(a)=(1;1,1)$ and $\psi(b)=u$. Then, we have
$\lambda_n(a,b,1,1,b,b^2,\ldots,b^{n-2})\neq\rho_n(a,b,1,1,b,b^2,\ldots,b^{n-2})$
for every positive integer $n$, which contradicts the assumption
that $S$ satisfies the pseudoidentity
$\phi(\lambda_{\mathsf{PE}})=\phi(\rho_{\mathsf{PE}})$. Therefore,
by Theorem~\ref{PE F_7}, $S\in \mathsf{PE}$.
The converse follows at once from the definition of the
pseudovariety $\mathsf{PE}$. \end{proof}
\section{The pseudovariety $\pv{NMN}_{4}$} \label{sec:pvy-pvNMN_4}
We denote by $F_{12}$ the 12-element subsemigroup of the full transformation semigroup on the set $\{1,2,3\} \cup \{\theta\}$ given by the union of the Brandt semigroup $\mathcal{B}_3(\{1\})$ with the semigroup $\langle w_1,w_2\rangle$, \begin{equation*}
F_{12} = \mathcal{B}_3(\{1\}) \cup \langle w_1,w_2\rangle, \end{equation*} where $w_1=(1)(3,2,\theta)$ and $w_2=(3,1,2,\theta)$. By \cite[Lemma 3.2]{Jes-Sha3}, the semigroup $F_{12}$ is not nilpotent but the subsemigroup $\langle w_1,w_2 \rangle$ is nilpotent.
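This description can be verified by machine. The sketch below (an illustration only; products are composed left to right) encodes the elements of $F_{12}$ as partial transformations of $\{1,2,3\}$, with $(1;i,j)$ the map sending $i$ to $j$ and $\theta$ the empty map, and checks that the union is a $12$-element semigroup whose subsemigroup $\langle w_1,w_2\rangle$ consists of $w_1$, $w_2$, $w_1^2$, $w_1w_2$, $w_2w_1$, $w_2^2$ and $\theta$.

```python
def compose(s, t):                  # partial maps on {1,2,3}: apply s, then t
    return {i: t[s[i]] for i in s if s[i] in t}

w1 = {1: 1, 3: 2}                   # (1)(3,2,theta)
w2 = {3: 1, 1: 2}                   # (3,1,2,theta)
brandt = [{i: j} for i in (1, 2, 3) for j in (1, 2, 3)] + [{}]   # B_3({1})

def key(m):                         # hashable form of a partial map
    return tuple(sorted(m.items()))

def closure(gens):                  # subsemigroup generated by gens
    elems = {key(g) for g in gens}
    while True:
        new = {key(compose(dict(s), dict(t))) for s in elems for t in elems}
        if new <= elems:
            return elems
        elems |= new

assert len(closure(brandt + [w1, w2])) == 12    # F_12 has 12 elements
assert len(closure([w1, w2])) == 7              # w1, w2, w1^2, w1w2, w2w1, w2^2, theta
```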
Let $S$ be a semigroup in which $\mathcal{B}_n(G)$ is a proper ideal. We define the functions $\epsilon_1$ and $\epsilon_2$ from $S$ to the power set $\mathcal{P}(\{1,\ldots,n\})$ of the set $\{1,\ldots,n\}$
as follows: \begin{align*}
\epsilon_1(w)
&=\{i\mid 1\leq i\leq n\mbox{ and }\Gamma(w)(i)\neq \theta\},\\
\epsilon_2(w)
&=\{i\mid 1\leq i\leq n
\mbox{ and there exists an integer $j$ such that } \Gamma(w)(j)=i\} \end{align*} for every $w\in S$. We also define the functions $\iota_1$ and $\iota_2$ from $S$ to $\mathbb{N}$ as follows: \begin{displaymath}
\iota_1(w)=\vert\epsilon_1(w)\vert \mbox{ and }\iota_2(w)=\vert\epsilon_2(w)\vert \end{displaymath} for every $w\in S$.
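Concretely, when $\Gamma(w)$ is encoded as a partial map, $\epsilon_1(w)$ is its domain and $\epsilon_2(w)$ its image; a minimal sketch (illustration only, with $w_1$ taken from the $F_{12}$ example above):

```python
def eps1(w):                        # epsilon_1: indices i with Gamma(w)(i) != theta
    return set(w)

def eps2(w):                        # epsilon_2: indices in the image of Gamma(w)
    return set(w.values())

w1 = {1: 1, 3: 2}                   # Gamma(w1) = (1)(3,2,theta)
assert eps1(w1) == {1, 3} and eps2(w1) == {1, 2}
assert (len(eps1(w1)), len(eps2(w1))) == (2, 2)  # iota_1(w1) = iota_2(w1) = 2
```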
Let $\mathsf{Excl}(F_7,F_{12})$ denote the class of all finite semigroups $S$ that are divisible neither by $F_7$ nor by $F_{12}$. We say that the semigroup $S$ is \emph{$NMN_{4}$} if $S\in\mathsf{BG_{nil}}$ and $S\in \mathsf{Excl}(F_7,F_{12})$. We proceed with several technical lemmas that serve to show that the class $\mathsf{NMN_{4}}$ of all finite $NMN_{4}$ semigroups is a pseudovariety. In view of the definition and Theorem~\ref{PE F_7}, $\mathsf{NMN_{4}}$ is contained in the pseudovariety $\mathsf{PE}$. It amounts to a routine calculation (see the appendix) to verify that the semigroup $F_{12}$ satisfies the identity $\lambda_{3}(a,b,1,1,c) =\rho_{3} (a,b,1,1,c)$, for $a,b\in F_{12}$ and $c\in F_{12}^{1}$. Hence, we have $F_{12} \in \mathsf{PE}$. Therefore, as $F_{12}$ is not $NMN_{4}$, the pseudovariety $\mathsf{NMN_{4}}$ is strictly contained in $\mathsf{PE}$.
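The routine calculation for $F_{12}$ can also be delegated to a computer. The sketch below (an illustration only) enumerates all $a,b\in F_{12}$ and $c\in F_{12}^{1}$ in the partial-transformation model and checks the identity $\lambda_{3}(a,b,1,1,c)=\rho_{3}(a,b,1,1,c)$, that is, $(abba)c(baab)=(baab)c(abba)$; it assumes the recursion $\lambda_{i+1}=\lambda_i z_{i+1}\rho_i$, $\rho_{i+1}=\rho_i z_{i+1}\lambda_i$ and left-to-right composition.

```python
def compose(s, t):                  # partial maps on {1,2,3}: apply s, then t
    return {i: t[s[i]] for i in s if s[i] in t}

def prod(*ms):                      # product of several elements
    r = ms[0]
    for m in ms[1:]:
        r = compose(r, m)
    return r

w1, w2 = {1: 1, 3: 2}, {3: 1, 1: 2}
F12 = [{i: j} for i in (1, 2, 3) for j in (1, 2, 3)] + [{}, w1, w2]
identity = {1: 1, 2: 2, 3: 3}       # the adjoined identity of F_12^1

for a in F12:
    for b in F12:
        ab, ba = compose(a, b), compose(b, a)
        lam2, rho2 = compose(ab, ba), compose(ba, ab)   # abba and baab
        for c in F12 + [identity]:
            assert prod(lam2, c, rho2) == prod(rho2, c, lam2)
```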
\begin{lem}
\label{iota}
Let $S$ be a semigroup admitting $\mathcal{B}_n(G)$ as a proper
ideal and let $w$ be an element of $S$. Suppose
that
\begin{displaymath}
(a_1,\ldots,a_m,\theta)\subseteq\Gamma(w)
\end{displaymath}
for some integers $1\leq a_1,\ldots,a_m\leq n$ with $m>1$. Then, we
have
\begin{displaymath}
\max\{\iota_1(w^py),\iota_1(yw^p),\iota_2(w^py),\iota_2(yw^p)\}
<\min\{\iota_1(w),\iota_2(w)\}
\end{displaymath}
for every element $y\in S^1$ and integer $p>1$. \end{lem}
\begin{proof}
Since $\epsilon_1(w^py)\subseteq\epsilon_1(w)$ and
$a_{m-1}\not\in\epsilon_1(w^py)$, we obtain
$\iota_1(w^py)<\iota_1(w)$.
There is a one-to-one function $\phi$ from $\epsilon_1(yw^p)$ to
$\epsilon_1(w)$ as follows: for elements $a\in \epsilon_1(yw^p)$ and
$b\in\epsilon_1(w)$, let $\phi(a)=b$ if $\Gamma(yw^{p-1})(a)=b$. We
deduce that $\iota_1(yw^p)\leq\iota_1(w)$. Since $m>1$, we have
$a_{1}\in \epsilon_1(w)$. Hence, we obtain
$\iota_1(yw^p)<\iota_1(w)$.
There is a one-to-one function $\phi$ from $\epsilon_2(w^py)$ to
$\epsilon_1(w)$ as follows: for elements $a\in \epsilon_2(w^py)$ and
$b\in\epsilon_1(w)$, let $\phi(a)=b$ if $\Gamma(wy)(b)=a$. We deduce
that $\iota_2(w^py)\leq\iota_1(w)$. Since $a_{1}\in \epsilon_1(w)$,
we obtain $\iota_2(w^py)<\iota_1(w)$.
There is a one-to-one function $\phi$ from $\epsilon_2(yw^p)$ to
$\epsilon_1(w)$ as follows: for elements $a\in \epsilon_2(yw^p)$ and
$b\in\epsilon_1(w)$, let $\phi(a)=b$ if $\Gamma(w)(b)=a$. We deduce
that $\iota_2(yw^p)\leq\iota_1(w)$. Since $a_1\in \epsilon_1(w)$, we
obtain $\iota_2(yw^p)<\iota_1(w)$.
Similarly, we have
$\iota_1(w^py),\iota_1(yw^p),\iota_2(w^py),\iota_2(yw^p)<\iota_2(w)$. \end{proof}
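The inequality of Lemma~\ref{iota} can be tested numerically. The brute-force sketch below (an illustration only) takes $n=3$, a $w$ whose $\Gamma$-image contains the chain $(1,2,3,\theta)$, and ranges over all injective partial maps $y$ (injectivity of the $\Gamma$-images is an assumption of the sketch, in line with the Brandt-semigroup setting) and $p\in\{2,3\}$.

```python
from itertools import product

def compose(s, t):                  # partial maps on {1,2,3}: apply s, then t
    return {i: t[s[i]] for i in s if s[i] in t}

def power(w, p):
    r = dict(w)
    for _ in range(p - 1):
        r = compose(r, w)
    return r

w = {1: 2, 2: 3}                    # (1,2,3,theta) is a chain in Gamma(w), m = 3 > 1
bound = min(len(w), len(set(w.values())))    # min(iota_1(w), iota_2(w)) = 2

# all injective partial maps y on {1,2,3}; 0 encodes "undefined";
# the identity map plays the role of y = 1 in S^1
ys = [{i: f[i - 1] for i in (1, 2, 3) if f[i - 1]}
      for f in product(range(4), repeat=3)]
ys = [y for y in ys if len(set(y.values())) == len(y)]

for p in (2, 3):
    wp = power(w, p)
    for y in ys:
        for u in (compose(wp, y), compose(y, wp)):
            assert len(u) < bound                    # iota_1 drops strictly
            assert len(set(u.values())) < bound      # iota_2 drops strictly
```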
\begin{lem}
\label{m,l}
Let $S$ be a finite semigroup. Suppose that there exist a proper
ideal $S'$ and an element $w$ of $S$ that satisfy the following
properties:
\begin{enumerate}[label=(\roman*)]
\item the semigroup $M = \mathcal{B}_n(G)$ is a proper ideal of
$S/S'$ for some finite group $G$ and integer $n\geq 2$;
\item $(m,l) \subseteq \Gamma(w)$ for some distinct positive
integers $1\leq l,m\leq n$ where $\Gamma$ is an ${\mathcal L}$-representation\ of $S/S'$.
\end{enumerate}
Then, we must have $F_{7}\prec S$. \end{lem}
\begin{proof}
We claim that there is an onto homomorphism from the subsemigroup
$Z=\langle (1;l,l),w \rangle$ of $S$ to $F_{7}$. Define the function
$\psi:Z\rightarrow F_7$ as follows:
\begin{itemize}
\item if $z\in S'$ or $\Gamma(z)(l)=\Gamma(z)(m)=\theta$ then $\psi(z)=\theta$;
\item if $(m,l) \subseteq \Gamma(z)$ then $\psi(z)=u$;
\item if $(m)(l)\subseteq \Gamma(z)$ then $\psi(z)=1$;
\item if $(l) \subseteq \Gamma(z)$ and $\Gamma(z)(m)=\theta$ then $\psi(z)=(1;1,1)$;
\item if $(l,m,\theta) \subseteq \Gamma(z)$ then $\psi(z)=(1;1,2)$;
\item if $(m,l,\theta) \subseteq \Gamma(z)$ then $\psi(z)=(1;2,1)$;
\item if $(m) \subseteq \Gamma(z)$ and $\Gamma(z)(l)=\theta$ then $\psi(z)=(1;2,2)$.
\end{itemize}
Since $\Gamma$ is an ${\mathcal L}$-representation\ of $S/S'$, the function $\psi$ is a homomorphism. Moreover, $\psi((1;l,l))=(1;1,1)$ and $\psi(w)=u$, and these two elements generate $F_7$; hence $\psi$ is onto and $F_{7}\prec S$. \end{proof}
\begin{lem}
\label{m,l,m'}
Let $S$ be a finite semigroup. Suppose that there exist a proper
ideal $S'$ and elements $w_1$ and $w_2$ of $S$ that satisfy the
following properties:
\begin{enumerate}[label=(\roman*)]
\item the semigroup $M = \mathcal{B}_n(G)$ is a proper ideal of
$S/S'$ for some finite group $G$ and integer $n\geq 3$;
\item the relations $[m,l,m'] \sqsubseteq \Gamma(w_1)$, $[m, m']
\sqsubseteq \Gamma(w_2)$, and $(l)\subseteq \Gamma(w_2)$ hold for
some pairwise distinct positive integers $1\leq l,m,m'\leq n$
where $\Gamma$ is an ${\mathcal L}$-representation\ of $S/S'$.
\end{enumerate}
Then, at least one of the conditions $F_{7}\prec S$ or $F_{12}\prec
S$ holds. \end{lem}
\begin{proof}
Suppose that the integer $m'$ is in a cycle of $\Gamma(w_1)$ with
length $L_{w_1}$. Then $(l,m')\subseteq \Gamma(w_1^{L_{w_1}-1}w_2)$.
Hence, by Lemma~\ref{m,l}, there is an onto homomorphism from the
subsemigroup $\langle (1;l,l),w_1^{L_{w_1}-1}w_2 \rangle$ of $S$ to
$F_{7}$. Suppose next that the integer $m'$ is in a cycle of
$\Gamma(w_2)$ with length $L_{w_2}$. Then $(l,m)\subseteq
\Gamma(w_1w_2^{L_{w_2}-1})$. Hence, by Lemma~\ref{m,l}, there is an
onto homomorphism from the subsemigroup $\langle
(1;l,l),w_1w_2^{L_{w_2}-1} \rangle$ of $S$ to $F_{7}$.
Now, suppose that the integer $m'$ is in non-cycle orbits of both
$\Gamma(w_1)$ and $\Gamma(w_2)$. Then the following set $\Sigma$ is
not empty:
\begin{align*}
\Sigma= \bigl\{
&(y_1,y_2;a,b,c)\mid y_1,y_2\in S,1\leq a,b,c\leq n,[a,c,b]
\sqsubseteq \Gamma(y_1), [a, b] \sqsubseteq \Gamma(y_2),\\
&(c)\subseteq \Gamma(y_2),\mbox{ the orbit that contains $[a,c,b]$
in $\Gamma(y_1)$ is not a cycle}\\
&\mbox{and the orbit that contains $[a,b]$ in $\Gamma(y_2)$ is not a
cycle}
\bigr\}.
\end{align*}
Since $S$ is finite, there exists an element
$\Delta=(y_1,y_2;a,b,c)\in \Sigma$ such that there does not exist an
element $\Delta'=(u_1,u_2;e,f,d)\in \Sigma$ for which
$\iota_i(u_k)<\iota_i(y_k)$ for every $i,k\in\{1,2\}$.
Since $\Delta\in \Sigma$, there exist integers
\begin{displaymath}
1\leq
a_1,\ldots,a_t,a'_1,\ldots,a'_{t'},b_1,\ldots,b_r,b'_1,\ldots,b'_{r'}\leq
n
\end{displaymath}
such that
\begin{displaymath}
(a_1,\ldots,a_t,c,b_1,\ldots,b_r,\theta)
\subseteq \Gamma(y_1),\
(c)(a'_1,\ldots,a'_{t'},b'_1,\ldots,b'_{r'},\theta)
\subseteq \Gamma(y_2),
\end{displaymath}
$a_t=a'_{t'}=a$ and $b_1=b'_{1}=b$. Suppose that
$\{a_1,\ldots,a_t\}\cap\{b'_1,\ldots,b'_{r'}\}\neq\emptyset$. Then,
there exist integers $1\leq \gamma \leq t$ and $1\leq \lambda \leq
r'$ such that $a_{\gamma}=b'_{\lambda}$. Thus, we obtain
$(a_{\gamma})\subseteq \Gamma(y_1^{t-\gamma }y_2^{\lambda})$,
$[a_{\gamma+1},c] \sqsubseteq \Gamma(y_1^{t-\gamma }y_2^{\lambda})$
and $[a_{\gamma+1},a_{\gamma},c] \sqsubseteq
\Gamma(y_1^{t-\gamma+1}y_2^{\lambda-1})$. If the integer $c$ is in a
cycle of $\Gamma(y_1^{t-\gamma }y_2^{\lambda})$ or
$\Gamma(y_1^{t-\gamma+1}y_2^{\lambda-1})$, then similarly there is
an onto homomorphism from a subsemigroup of $S$ to $F_{7}$.
Otherwise, the integer $c$ is in non-cycle orbits of
$\Gamma(y_1^{t-\gamma+1}y_2^{\lambda-1})$ and $\Gamma(y_1^{t-\gamma
}y_2^{\lambda})$. We conclude that
$(y_1^{t-\gamma+1}y_2^{\lambda-1},y_1^{t-\gamma
}y_2^{\lambda};a_{\gamma+1},c,a_{\gamma})\in \Sigma$. Since
$a_{\gamma}\not\in\{a_t,b'_1\}$, we obtain $\gamma<t$ and $1<
\lambda$. Then, by Lemma~\ref{iota}, we have
\begin{align*}
&\iota_1(y_1^{t-\gamma+1}y_2^{\lambda-1}) <\iota_1(y_1),\
\iota_2(y_1^{t-\gamma+1}y_2^{\lambda-1}) <\iota_2(y_1), \\
&\iota_1(y_1^{t-\gamma}y_2^{\lambda})<\iota_1(y_2),
\mbox{ and }
\iota_2( y_1^{t-\gamma } y_2^{\lambda})<\iota_2(y_2),
\end{align*}
a contradiction.
Suppose next that
$\{a'_1,\ldots,a'_{t'}\}\cap\{b_1,\ldots,b_r\}\neq\emptyset$. Then,
there exist integers $1\leq \gamma \leq t'$ and $1\leq \lambda \leq
r$ such that $a'_{\gamma}=b_{\lambda}$. Thus, we obtain
$(b_{\lambda})\subseteq \Gamma(y_2^{t'-\gamma+1}y_1^{\lambda-1})$,
$[c,b_{\lambda-1}] \sqsubseteq
\Gamma(y_2^{t'-\gamma+1}y_1^{\lambda-1})$ and
$[c,b_{\lambda},b_{\lambda-1}] \sqsubseteq
\Gamma(y_2^{t'-\gamma}y_1^{\lambda})$. If the integer $c$ is in a
cycle of $\Gamma(y_2^{t'-\gamma+1}y_1^{\lambda-1})$ or
$\Gamma(y_2^{t'-\gamma}y_1^{\lambda})$, then similarly there is an
onto homomorphism from a subsemigroup of $S$ to $F_{7}$. Otherwise,
the integer $c$ is in non-cycle orbits of
$\Gamma(y_2^{t'-\gamma}y_1^{\lambda})$ and
$\Gamma(y_2^{t'-\gamma+1}y_1^{\lambda-1})$. It follows that
$(y_2^{t'-\gamma}y_1^{\lambda},y_2^{t'-\gamma+1}y_1^{\lambda-1};
c,b_{\lambda-1},b_{\lambda})\in \Sigma$. Since
$a'_{\gamma}\not\in\{a'_{t'},b_1\}$, we obtain $\gamma<t'$ and $1<
\lambda$. Then, by Lemma~\ref{iota}, we have
\begin{align*}
&\iota_1(y_2^{t'-\gamma}y_1^{\lambda})<\iota_1(y_1),\
\iota_2(y_2^{t'-\gamma}y_1^{\lambda})<\iota_2(y_1),\\
&\iota_1(y_2^{t'-\gamma+1}y_1^{\lambda-1})<\iota_1(y_2),
\mbox{ and }
\iota_2(y_2^{t'-\gamma+1}y_1^{\lambda-1})<\iota_2(y_2),
\end{align*}
which is again a contradiction.
We may, therefore, assume that
\begin{displaymath}
\{a_1,\ldots,a_t\}\cap\{b'_1,\ldots,b'_{r'}\}=\emptyset
\mbox{ and }
\{a'_1,\ldots,a'_{t'}\}\cap\{b_1,\ldots,b_r\}=\emptyset.
\end{displaymath}
We denote by $F'_{12}$ the subsemigroup of the full transformation
semigroup on the set $\{a_t,b_1,c\} \cup \{\theta'\}$ given by the
union of the Brandt semigroup $\mathcal{B}_3(\{1\})$ with the
semigroup $\langle w_1,w_2\rangle$,
\begin{equation*}
F'_{12} = \mathcal{B}_3(\{1\}) \cup^{\Gamma_{F'_{12}}}\langle w_1,w_2\rangle,
\end{equation*}
where $\Gamma_{F'_{12}}(w_1)=(c)(a_t,b_1,\theta')$ and
$\Gamma_{F'_{12}}(w_2)=(a_t,c,b_1,\theta')$ and $\Gamma_{F'_{12}}$
is an ${\mathcal L}$-representation\ of $F'_{12}$. The semigroup $F'_{12}$ is generated by the
elements $(1;a_t,b_1)$, $w_1$ and $w_2$. On the other hand, the
semigroup $F_{12}$ is also generated by three elements, namely
$(1;2,3)$, $(1)(2,3,\theta)$ and $(3,1,2,\theta)$ which are
basically the same as for $F'_{12}$ up to renaming the elements of
the set on which they act: $a_t\mapsto 2$, $b_1\mapsto 3$, and
$c\mapsto 1$. Hence, the transformation semigroups $F'_{12}$ and
$F_{12}$ are isomorphic. We claim that there is an onto homomorphism
from the subsemigroup $Z=\langle y_1,y_2,(1;b_1,a_t)\rangle$ onto
$F'_{12}$. Let $\psi:Z\rightarrow F'_{12}$ be the function defined
as follows:
\begin{enumerate}
\item if $z\in S'$, then $\psi(z)=\theta'$;
\item otherwise, $\Gamma(z)(i)=j$ if and only if
$\Gamma'(\psi(z))(i)=j$ for every $i,j\in\{a_t,b_1,c\}$.
\end{enumerate}
We prove that $\psi$ is a homomorphism. Let $s_1,s_2\in Z$ and
$i\in\{a_t,b_1,c\}$. Suppose that
$\Gamma'(\psi(s_2))(\Gamma'(\psi(s_1))(i))=\alpha$ for some
$\alpha\in\{a_t,b_1,c\}$. Then, we obtain
$\Gamma'(\psi(s_1))(i)=\beta$ for some $\beta\in\{a_t,b_1,c\}$ and,
thus, $\Gamma(s_1)(i)=\beta$ and $\Gamma(s_2)(\beta)=\alpha$, whence
$\Gamma(s_1s_2)(i)=\alpha$. Now, suppose
that $$\Gamma'(\psi(s_2))(\Gamma'(\psi(s_1))(i))=\theta'.$$
If $\Gamma'(\psi(s_1s_2))(i)\neq \theta'$, then there exists an
integer $\gamma\not\in\{a_t,b_1,c\}$ such that
$\Gamma(s_1)(i)=\gamma$ and $\Gamma(s_2)(\gamma)=\lambda$ for some
integer $\lambda$ in the set $\{a_t,b_1,c\}$. Since $s_2\in Z$,
there exist elements
$v_1,\ldots,v_{n_{s_2}}\in\{y_1,y_2,(1;b_1,a_t)\}$ such that
$s_2=v_1\cdots v_{n_{s_2}}$. Suppose that $\gamma\in
\{b'_2,\ldots,b'_{r'}\}\cup\{b_2,\ldots,b_r\}$. Since
\begin{displaymath}
\{a_1,\ldots,a_t\}\cap\{b'_1,\ldots,b'_{r'}\}=\emptyset
\mbox{ and }
\{a'_1,\ldots,a'_{t'}\}\cap\{b_1,\ldots,b_r\}=\emptyset,
\end{displaymath}
we obtain $\Gamma(v_1)(\gamma)\in
\{b'_2,\ldots,b'_{r'}\}\cup\{b_2,\ldots,b_r\}$. Similarly, we have
\begin{displaymath}
\Gamma(v_1\cdots v_{n_{s_2}})(\gamma)
\in \{b'_2,\ldots,b'_{r'}\}\cup\{b_2,\ldots,b_r\}.
\end{displaymath}
This contradicts the assumption that $\Gamma(s_2)(\gamma)=\lambda$
and $\lambda\in\{a_t,b_1,c\}$. Similarly, we obtain $\gamma\not\in
\{a'_2,\ldots,a'_{t'}\}\cup\{a_2,\ldots,a_t\}$, which entails
$\lambda\not \in\{a_t,b_1,c\}$, a contradiction. Hence,
$\Gamma'(\psi(s_1s_2))(i)=\theta'$ and, thus, $\psi$ is a
homomorphism.
The result follows. \end{proof}
\begin{lem}
\label{F12 prec}
Let $T$ and $V$ be finite block groups. If $F_{12}\prec T\times V$,
then at least one of the conditions
$T\not\in\mathsf{Excl}(F_7,F_{12})$ or
$V\not\in\mathsf{Excl}(F_7,F_{12})$ holds. \end{lem}
\begin{proof}
Since $F_{12}\prec T\times V$, there exists a subsemigroup $U$ of
$T\times V$ such that there is an onto homomorphism
$\phi:U\rightarrow F_{12}$. Let
$$T\times V= R_1 \supsetneqq R_2 \supsetneqq \cdots \supsetneqq
R_{h} \supsetneqq R_{h+1} = \emptyset$$ be a principal series of
$T\times V$. That is, each $R_i$ is an ideal of $T\times V$ and
there is no ideal of $T\times V$ strictly between $R_i$ and
$R_{i+1}$. Since $T$ and $V$ are finite, there exist elements $r_1,
r_2$ and $r_3$ in the subsemigroup $U$ and an integer $1\leq i \leq
h$ that satisfy the following properties:
\begin{enumerate}
\item $\phi(r_1)=(1;2,3)$, $\phi(r_2)=w_1$ and $\phi(r_3)=w_2$;
\item $r_1 \in R_{i} \setminus R_{i+1}$;
\item $\phi^{-1}((1;2,3))\cap R_{i+1}=\emptyset$.
\end{enumerate}
By $\phi(r_1r_2r_1)=(1;2,3)$ and~(3), $R_{i} / R_{i+1}$
is an inverse completely $0$-simple semigroup or an inverse
completely simple semigroup, say $M=\mathcal{B}_n(G)$. Then, there
exist an element $g\in G$ and integers $1\leq\alpha,\beta\leq n$
such that $r_1=(g;\alpha,\beta)$. Let $o$ be the order of $G$.
Suppose that $\alpha=\beta$. Then, we have $r_1^{o+1}=r_1$ and,
thus, $\phi^{-1}((1;2,3))\cap \phi^{-1}(\theta)\neq \emptyset$, a
contradiction. As $\phi(r_1r_2r_1)=\phi(r_1r^2_3r_1)=(1;2,3)$, we
have $r_1r_2r_1,r_1r^2_3r_1\in R_{i} \setminus R_{i+1}$ and, thus,
$\Gamma(r_1r_2r_1)=\Gamma(r_1r^2_3r_1)=(\alpha, \beta,\theta)$ where
$\Gamma$ is an ${\mathcal L}$-representation\ of $(T\times V)/R_{i+1}$ on $M$. Therefore, $[\beta,\alpha]
\sqsubseteq \Gamma(r_2)$ and there exists an integer
$1\leq\gamma\leq n$ such that $[\beta,\gamma,\alpha] \sqsubseteq
\Gamma(r_3)$. Also, since $\phi(r_1r_3r_2r_3r_1)=(1;2,3)$, we obtain
$(\gamma) \subseteq \Gamma(r_2)$.
The elements $r_1,r_2$ and $r_3$ are in $T\times V$. Then, there
exist elements $t_1,t_2, t_3\in T$ and $v_1,v_2,v_3\in V$ such that
$r_1=(t_1,v_1)$, $r_2=(t_2,v_2)$ and $r_3=(t_3,v_3)$. Let
\begin{displaymath}
T= T_1 \supsetneqq T_2 \supsetneqq \cdots \supsetneqq T_{h'}
\supsetneqq T_{h'+1} = \emptyset
\end{displaymath}
and
\begin{displaymath}
V= V_1 \supsetneqq V_2 \supsetneqq \cdots \supsetneqq V_{h''}
\supsetneqq V_{h''+1} = \emptyset
\end{displaymath}
be, respectively, principal series of $T$ and $V$. Suppose that $t_1
\in T_{j} \setminus T_{j+1}$ and $v_1 \in V_{k} \setminus V_{k+1}$,
for some integers $1\leq j\leq h'$ and $1\leq k\leq h''$.
Since $[\alpha,\beta] \sqsubseteq \Gamma(r_1)$ and $[\beta,\alpha]
\sqsubseteq \Gamma(r_2)$, we obtain $(r_1r_2)^or_1(r_2r_1)^o=r_1$.
Then, we have
\begin{eqnarray}\label{t_1v_1}
(t_1t_2)^ot_1(t_2t_1)^o=t_1\mbox{ and }(v_1v_2)^ov_1(v_2v_1)^o=v_1.
\end{eqnarray}
By \cite[Theorem 3]{Fab}, we have $R_{i} \setminus R_{i+1}=(T_{j}
\setminus T_{j+1})\times (V_{k} \setminus V_{k+1})$. Let $A_1=T_{j}
\setminus T_{j+1}$ and $A_2=V_{k} \setminus V_{k+1}$. Again,
by~(\ref{t_1v_1}), the subset $A_i\cup\{\theta\}$ is an inverse
completely $0$-simple semigroup or $A_i$ is an inverse completely
simple semigroup, say $M_{A_i}=\mathcal{B}_{n_{A_i}}(G_{A_i})$ or
$M_{A_i}=\mathcal{B}_{1}(G_{A_i})$ for $1\leq i\leq 2$. Then, there
exist elements $(l;a,b)\in M_{A_1}$ and $(l';a',b')\in M_{A_2}$,
such that $t_1=(l;a,b)$ and $v_1=(l';a',b')$.
Since $r_1^2$ is in $R_{i+1}$ and $R_{i} \setminus R_{i+1}=A_1\times
A_2$, we obtain $t_1^2\not\in A_1$ or $v_1^2\not\in A_2$. By
symmetry, we may assume that $t_1^2\not\in A_1$, whence $a\neq b$.
As $r_1r_2r_1,r_1r^2_3r_1\in A_1\times A_2$, we have
$t_1t_2t_1,t_1t^2_3t_1\in A_1$ and, thus,
$\Gamma'(t_1t_2t_1)=\Gamma'(t_1t^2_3t_1)=(a, b,\theta)$ where
$\Gamma'$ is an ${\mathcal L}$-representation\ of $T/T_{j+1}$ on $M_{A_1}$. Therefore, $[b,a]
\sqsubseteq \Gamma'(t_2)$ and there exists an integer $1\leq x\leq
n_{A_1}$ such that $[b,x,a] \sqsubseteq \Gamma'(t_3)$. Also, since
$r_1r_3r_2r_3r_1\in A_1\times A_2$, it follows that
$t_1t_3t_2t_3t_1\in A_1$ and, thus, $(x) \subseteq \Gamma'(t_2)$. By
Lemma~\ref{m,l,m'}, we conclude that $F_{7}\prec T$ or $F_{12}\prec
T$. \end{proof}
Lemma~\ref{F12 prec} yields the following theorem.
\begin{thm}
The class $\mathsf{NMN_{4}}$ of all finite $NMN_{4}$ semigroups is a pseudovariety. \end{thm}
\begin{prop}
\label{PS NMN_{4} identity}
The sequence
\begin{displaymath}
\left\{\bigl(
\lambda_{n!}(x,y,w_1,w_2,w_1,w_2,\ldots),
\rho_{n!}(x,y,w_1,w_2,w_1,w_2,\ldots)\bigr)
\mid n\in \mathbb{N}\right\}
\end{displaymath}
converges in $(\overline{\Omega}_{\{x,y,w_1,w_2\}}\mathsf{S})^2$. \end{prop}
\begin{proof}
The proof is similar to the proof of Proposition~\ref{PS PE
identity}.
\end{proof}
We denote the limit of the sequence \begin{displaymath}
\bigl(\lambda_{n!}(x,y,w_1,w_2,w_1,w_2,\ldots),
\rho_{n!}(x,y,w_1,w_2,w_1,w_2,\ldots)\bigr)_{n} \end{displaymath} in the preceding proposition by $(\overline{\lambda}(x,y,w_1,w_2),\overline{\rho}(x,y,w_1,w_2))$.
\begin{lem}
\label{NMN_{4}}
Let $S\in \mathsf{BG_{nil}}$. The semigroup
$S$ is not $NMN_{4}$ if and only if, there exist elements $a,w_1,w_2 \in
S$ such
that
\begin{displaymath}
\overline{\lambda}(aw_2,w_2a,w_1,w_2)\neq\overline{\rho}(aw_2,w_2a,w_1,w_2)
\end{displaymath}
and
\begin{align*}
(\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1&
\overline{\lambda}(aw_2,w_2a,w_1,w_2))^{\omega+1}\\
&\quad\quad=\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2).
\end{align*}
\end{lem}
\begin{proof}
Suppose that there exist elements $a,w_1,w_2\in S$ such that
\begin{displaymath}
\overline{\lambda}(aw_2,w_2a,w_1,w_2)\neq\overline{\rho}(aw_2,w_2a,w_1,w_2)
\end{displaymath}
and
\begin{align*}
(\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1&
\overline{\lambda}(aw_2,w_2a,w_1,w_2))^{\omega+1}\\
&\quad\quad=\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2).
\end{align*}
Then
\begin{displaymath}
\lambda_{k^2+1}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots)
\neq \rho_{k^2+1}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots)
\end{displaymath}
where
$k=\abs{S}$.
By the pigeonhole principle, there exist positive integers $t$ and
$r$ such that $t<r\leq k^2+1$ with
\begin{align*}
&\bigl(
\lambda_{t}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots),
\rho_{t}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots)\bigr)\\
&= \bigl(
\lambda_{r}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots),
\rho_{r}(aw_2,w_2a,w_1,w_2,w_1,w_2,\ldots)\bigr).
\end{align*}
Put $s_1= \lambda_{t}$, $s_2=\rho_{t}$ and $h=r-t$. To simplify the
notation, we let $\widehat{\lambda}_j=\lambda_{t+j}$ and
$\widehat{\rho}_j=\rho_{t+j}$ for $j\ge0$. In this notation, we may write
$s_1 =\widehat{\lambda}_0=\widehat{\lambda}_h
=\lambda_{h}(s_1, s_2, v_{t+1}, \ldots, v_{t+h})$ and
$s_2 =\widehat{\rho}_0=\widehat{\rho}_h
=\rho_{h}(s_1,s_2, v_{t+1}, \ldots, v_{t+h})$ where $v_{2l-1}=w_1$
and $v_{2l}=w_2$ for all $1\leq l$. Note further that $s_1\ne s_2$.
Let
\begin{displaymath}
S= S_1 \supsetneqq S_2 \supsetneqq \cdots \supsetneqq S_{s}
\supsetneqq S_{s+1} = \emptyset
\end{displaymath}
be a principal series of $S$. Suppose that $s_1 \in S_i \setminus
S_{i+1}$ for some $1\leq i\leq s$. Because $S_i$ and $S_{i+1}$ are
ideals of $S$, the above equalities yield $s_2 \in S_i \setminus
S_{i+1}$ and $v_{t+1}, \ldots, v_{t+h} \in S \setminus S_{i+1}$.
Furthermore, $S_i / S_{i+1}$ is a completely $0$-simple semigroup.
First, suppose that $S_i \setminus S_{i+1}$ is a group. We denote by
$e$ its identity element. Since $s_1,s_2 \in S_i \setminus S_{i+1}$,
the equalities $s_1 =\lambda_{h}(s_1, s_2, v_{t+1}, \ldots,
v_{t+h})$ and $s_2 =\rho_{h}(s_1, s_2, v_{t+1}, \ldots, v_{t+h})$
yield $\widehat{\lambda}_j, \widehat{\rho}_j \in S_{i}\setminus
S_{i+1}$, for every $1 \leq j \leq h$. Now, as $e$ is the identity
element of $S_{i}\setminus S_{i+1}$, we have $\widehat{\lambda}_j
v_{t+j+1} \widehat{\rho}_j=\widehat{\lambda}_j e v_{t+j+1}
\widehat{\rho}_j$ and $\widehat{\rho}_j v_{t+j+1}
\widehat{\lambda}_j=\widehat{\rho}_je v_{t+j+1}\widehat{\lambda}_j$,
for every $0 \leq j < h$. If $S_{i+1} \neq \emptyset$ and
$ev_{t+j}\in S_{i+1}$ holds for some $1 \leq j \leq h$, then
$s_1,s_2\in S_{i+1}$, a contradiction. So, we must have $ev_{t+j}
\in S_i \setminus S_{i+1}$, for all $1 \leq j \leq h$. Consequently,
the following conditions are satisfied:
\begin{align*}
&s_1 =\lambda_{h}(s_1, s_2, v_{t+1}, \ldots, v_{t+h})
=\lambda_{h}(s_1, s_2, ev_{t+1}, \ldots, ev_{t+h})\\
&\neq s_2 = \rho_{h}(s_1,s_2, v_{t+1}, \ldots, v_{t+h})
=\rho_{h}(s_1,s_2, ev_{t+1}, \ldots, ev_{t+h}).
\end{align*}
By Lemma~\ref{finite-nilpotent}, it follows that $S_i \setminus
S_{i+1}$ is a non-nilpotent group. Hence, $S\not\in
\mathsf{BG_{nil}}$, a contradiction.
Now, suppose that $S_i \setminus S_{i+1}$ is not a group. By
\cite[Lemma 2.1]{Jes-Okn}, in this case, we have
$S_{i}/S_{i+1}=\mathcal{B}_n(G)$ with $G$ a nilpotent group and
$n>1$. Also, since $s_{1},s_{2}\in S_{i}\setminus S_{i+1}$, there
exist $1 \leq n_1,n_2,n_3,n_4 \leq n$ and $g, g' \in G$ such that
$s_1 =(g; n_1, n_2)$ and $s_2 = (g'; n_3, n_4)$.
Hence,
\begin{displaymath}
[n_2,n_3;n_4,n_1] \sqsubseteq \Gamma(v_{t+1})
\mbox{ and }
[n_2, n_1;n_4, n_3] \sqsubseteq \Gamma(v_{t+2}).
\end{displaymath}
Suppose that $(n_1, n_2)= (n_3, n_4)$. Then, there exist elements
$k_{j} \in G$ such that $(k;\alpha,n_2)v_{t+j}=
(kk_{j};\alpha,n_1)$, for every $k \in G, \alpha \in \{n_1,n_2\}$
and $1 \leq j \leq h$. Since $\widehat{\lambda}_{j-1}=
(g_{j-1};n_1,n_2)$ and $\widehat{\rho}_{j-1}= (g'_{j-1};n_1,n_2)$
for some $g_{j-1},g_{j-1}'\in G$, we get
$$\widehat{\lambda}_{j}= (g_{j-1}k_jg'_{j-1};n_1,n_2),~
\widehat{\rho}_{j}= (g'_{j-1}k_jg_{j-1};n_1,n_2)$$ and, thus,
\begin{displaymath}
g = \lambda_{h}(g, g', k_{1}, \ldots, k_{h}),~ g' =
\rho_{h}(g, g',k_{1}, \ldots, k_{h}).
\end{displaymath}
In view of Lemma~\ref{finite-nilpotent}, this yields a contradiction
with $G$ being nilpotent. We have shown that $(n_{1},n_{2})\neq
(n_{3},n_{4})$.
If $t$ is an even integer,
then
\begin{displaymath}
\Gamma(\overline{\lambda}(aw_2,w_2a,w_1,w_2))=(n_1,n_2,\theta),\
\Gamma(\overline{\rho}(aw_2,w_2a,w_1,w_2))=(n_3,n_4,\theta),
\end{displaymath}
\begin{displaymath}
[n_2,n_3;n_4,n_1] \sqsubseteq \Gamma(w_{1}),
\mbox{ and }
[n_2, n_1;n_4, n_3] \sqsubseteq
\Gamma(w_{2}).
\end{displaymath}
Otherwise, we have
\begin{displaymath}
\Gamma(\overline{\lambda}(aw_2,w_2a,w_1,w_2))=(n_1,n_4,\theta),\
\Gamma(\overline{\rho}(aw_2,w_2a,w_1,w_2))=(n_3,n_2,\theta),
\end{displaymath}
\begin{displaymath}
[n_2, n_1;n_4, n_3] \sqsubseteq \Gamma(w_{1}),
\mbox{ and }
[n_2,n_3;n_4,n_1] \sqsubseteq \Gamma(w_{2}).
\end{displaymath}
By symmetry, we may assume that $t$ is even. Now, as
\begin{align*}
(\overline{\rho}(aw_2,w_2a,w_1,w_2)&w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2))^{\omega+1}\\
&\quad\quad=\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2),
\end{align*}
we obtain $n_2=n_3$. Therefore, the following relations hold:
\begin{displaymath}
[n_4,n_1] \sqsubseteq \Gamma(w_{1}),\
(n_2) \subseteq \Gamma(w_{1}),
\mbox{ and }
[n_4,n_2, n_1] \sqsubseteq \Gamma(w_{2}).
\end{displaymath}
Now, by Lemmas~\ref{m,l} and \ref{m,l,m'}, we have $F_{7}\prec S$ or
$F_{12}\prec S$. This shows that $S\not\in\mathsf{NMN_{4}}$.
Conversely, suppose that $S$ is not $NMN_{4}$. It follows that $S\not\in
\mathsf{Excl}(F_7,F_{12})$. Then, there exist a subsemigroup $U$ of
$S$ and an onto homomorphism $\phi:U\rightarrow T$ such that $T$ is
isomorphic with $F_7$ or $F_{12}$. Let
\begin{displaymath}
U= U_1 \supsetneqq U_2 \supsetneqq \cdots \supsetneqq U_{s'}
\supsetneqq U_{s'+1} = \emptyset
\end{displaymath}
be a principal series of $U$. Suppose that $T$ is isomorphic with
$F_7$. Since $U$ is finite, there exist elements $a$ and $r_1$ in
the semigroup $U$ and an integer $1\leq i'\leq s'$ that satisfy the
following properties:
\begin{enumerate}
\item $\phi(a)=(1;1,1)$ and $\phi(r_1)=u$;
\item $a \in U_{i'} \setminus U_{i'+1}$;
\item $\phi^{-1}((1;1,1))\cap U_{i'+1}=\emptyset$.
\end{enumerate}
Since
$(\overline{\lambda}(ar_1,r_1a,r_1^2,r_1),\overline{\rho}(ar_1,r_1a,r_1^2,r_1))
=(\lambda_{\gamma},\rho_{\gamma})$ for some even integer~$\gamma$,
we obtain
\begin{displaymath}
\bigl(\phi(\overline{\lambda}(ar_1,r_1a,r_1^2,r_1)),
\phi(\overline{\rho}(ar_1,r_1a,r_1^2,r_1))\bigr)
=\bigl((1;1,2),(1;2,1)\bigr).
\end{displaymath}
It follows that
$\overline{\lambda}(ar_1,r_1a,r_1^2,r_1)\neq\overline{\rho}(ar_1,r_1a,r_1^2,r_1)$.
Now, as
\begin{displaymath}
\phi(\overline{\rho}(ar_1,r_1a,r_1^2,r_1) r_1^2
\overline{\lambda}(ar_1,r_1a,r_1^2,r_1))=(1;2,2),
\end{displaymath}
arguing as in the proof of Lemma~\ref{F12 prec}, we have
\begin{align*}
(\overline{\rho}(ar_1,r_1a,r_1^2,r_1)& r_1^2
\overline{\lambda}(ar_1,r_1a,r_1^2,r_1))^{\omega+1}\\
&\quad\quad=\overline{\rho}(ar_1,r_1a,r_1^2,r_1) r_1^2
\overline{\lambda}(ar_1,r_1a,r_1^2,r_1).
\end{align*}
Now, suppose that $T$ is isomorphic with $F_{12}$. Since $U$ is
finite, there exist elements $a,r_1,r_2$ in the semigroup $U$ and an
integer $1\leq i'\leq s'$ that satisfy the following properties:
\begin{enumerate}
\item $\phi(a)=(1;2,3)$, $\phi(r_1)=w_1$ and $\phi(r_2)=w_2$;
\item $a \in U_{i'} \setminus U_{i'+1}$;
\item $\phi^{-1}((1;2,3))\cap U_{i'+1}=\emptyset$.
\end{enumerate}
Since, $\phi(\overline{\lambda}(ar_2,r_2a,r_1,r_2))=(1;2,1)$ and
$\phi(\overline{\rho}(ar_2,r_2a,r_1,r_2))=(1;1,3)$, we obtain
$\overline{\lambda}(ar_2,r_2a,r_1,r_2)\neq\overline{\rho}(ar_2,r_2a,r_1,r_2)$.
Now, as $$\phi(\overline{\rho}(ar_2,r_2a,r_1,r_2) r_1
\overline{\lambda}(ar_2,r_2a,r_1,r_2))=(1;1,1),$$ again arguing as
in the proof of Lemma~\ref{F12 prec}, we have
\begin{align*}
(\overline{\rho}(ar_2,r_2a,r_1,r_2)& r_1
\overline{\lambda}(ar_2,r_2a,r_1,r_2))^{\omega+1}\\
&\quad\quad=\overline{\rho}(ar_2,r_2a,r_1,r_2) r_1
\overline{\lambda}(ar_2,r_2a,r_1,r_2).
\end{align*}
The result follows. \end{proof}
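The values used in the $F_{12}$ case can be reproduced in the partial-transformation model of $F_{12}$. The sketch below (an illustration only, assuming the recursion $\lambda_{i+1}=\lambda_i v_{i+1}\rho_i$, $\rho_{i+1}=\rho_i v_{i+1}\lambda_i$ and left-to-right composition) iterates the recursion with the alternating arguments $w_1,w_2,w_1,w_2,\ldots$; the pair turns out to have period $2$, so its value after any even number of steps is the limit along $n!$.

```python
def mult(s, t):                     # partial maps on {1,2,3}: apply s, then t
    return {i: t[s[i]] for i in s if s[i] in t}

w1 = {1: 1, 3: 2}                   # (1)(3,2,theta)
w2 = {3: 1, 1: 2}                   # (3,1,2,theta)
a = {2: 3}                          # (1;2,3)

lam, rho = mult(a, w2), mult(w2, a)          # lambda_0 = aw_2, rho_0 = w_2a
for i in range(1, 41):                       # arguments w_1, w_2, w_1, w_2, ...
    v = w1 if i % 2 else w2
    lam, rho = mult(mult(lam, v), rho), mult(mult(rho, v), lam)

assert (lam, rho) == ({2: 1}, {1: 3})        # (1;2,1) and (1;1,3), as in the proof
assert mult(mult(rho, w1), lam) == {1: 1}    # rho-bar w_1 lambda-bar = (1;1,1)
```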
By Lemma~\ref{NMN_{4}}, we have $\mathsf{NT}\subseteq\mathsf{NMN_{4}}$. In Section~\ref{sec:comp-above-pseud}, we show that the pseudovariety $\mathsf{NT}$ is strictly contained in $\mathsf{NMN_{4}}$.
Lemma~\ref{NMN_{4}} yields the following theorem.
\begin{thm}
\label{PS NTL4}
Let $S$ be a finite semigroup. The semigroup $S$ is in $\mathsf{NMN_{4}}$
if and only if $S\in \mathsf{BG_{nil}}$ and satisfies the
implication
\begin{align*}
&(\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2))^{\omega+1}\\
&\quad=\overline{\rho}(aw_2,w_2a,w_1,w_2) w_1
\overline{\lambda}(aw_2,w_2a,w_1,w_2)\\
&\qquad\qquad\qquad\Rightarrow\overline{\lambda}(aw_2,w_2a,w_1,w_2)
=\overline{\rho}(aw_2,w_2a,w_1,w_2).
\end{align*} \end{thm}
Even though we have shown that $\mathsf{NMN_{4}}$ is a pseudovariety, and therefore admits a basis of pseudoidentities by Reiterman's theorem, we leave as an open problem to determine a simple such basis. In particular, we do not know whether $\mathsf{NMN_{4}}$ admits a finite basis of pseudoidentities.
From \cite[Corollary 4.2]{Jes-Sha3}, it follows that every minimal non-nilpotent\ semigroup $S$ in the pseudovariety $\mathsf{\overline{G}}_{nil}$ is an epimorphic image of one of the following semigroups: \begin{enumerate}
\item a semigroup of right or left zeros with 2 elements; \item $\mathcal{B}_2(G) \cup^{\Gamma} T$ such that $T=\langle u
\rangle \cup \{\theta\}$ with $\theta$ the zero of $S$,
$u^{2^{k}}=1$ the identity of $T \setminus \{\theta\}$ (and of $S$)
and $\Gamma (u)= (1,2)$ (for $k=1$, one obtains $F_7$); \item $\mathcal{B}_3(G) \cup^{\Gamma} \langle w_1, w_2\rangle$, with
$\Gamma(w_1)=(2,1,3,\theta)$ and $\Gamma(w_2)=(2,3,\theta)(1)$,
$w_{2}w_{1}^{2}=w_{1}^{2}w_{2}=w_{1}^{3}=w_{2}w_{1}w_{2}= \theta$; \item $\mathcal{B}_n(G) \cup^{\Gamma} \langle v_1, v_2\rangle$, with
\begin{displaymath}
[k, m;k',m']\sqsubseteq \Gamma(v_1)
\mbox{ and }
[k, m';k', m] \sqsubseteq \Gamma(v_2)
\end{displaymath}
for pairwise distinct integers $k, k', m$ and $m'$ between $1$ and
$n$, there do not exist distinct integers $l_1$ and $l_2$ between
$1$ and $n$ such that $(l_1,l_2)\subseteq \Gamma(x)$ for some $x\in
\langle v_1, v_2 \rangle$, and there do not exist pairwise distinct
integers $o_1, o_2$ and $o_3$ between $1$ and $n$ such that
\begin{displaymath}
(o_2,o_1,o_3,\theta) \subseteq \Gamma(y_1),
(o_2,o_3,\theta)(o_1)\subseteq \Gamma(y_2)
\end{displaymath}
for some $y_1,y_2\in \langle v_1, v_2 \rangle$. \end{enumerate}
The pseudovariety $\mathsf{BG_{nil}}$ does not contain any semigroup in (1). The pseudovariety $\mathsf{PE}$ does not contain any semigroup in (1) and (2). The pseudovariety $\mathsf{NMN_{4}}$ does not contain any semigroup in (1), (2) and (3). Clearly the pseudovariety $\mathsf{MN}$ does not contain any semigroup in (1)--(4).
\section{The pseudovariety \pv{NT}} \label{sec:pvy-pvnt}
Unlike the pseudovarieties $\mathsf{MN}$ and $\mathsf{PE}$, the pseudovariety $\mathsf{NT}$ turns out to have infinite rank.
In this section, we use the concept of automaton congruence. For this reason, we first give this definition for the convenience of the reader. Recall that a congruence on the (incomplete) automaton $\mathcal{A}=(Q,\Sigma,\delta)$ is an equivalence relation $\sim$ on the state set $Q$ such that if $p \sim q$ and $\delta(p,a)$ and $\delta(q, a)$ are both defined in $\mathcal{A}$, then $\delta(p,a) \sim \delta(q, a)$. The quotient automaton $\mathcal{A}/{\sim}$ has set of states $Q/{\sim}$ and it has an $a$-labeled transition from the $\sim$-class $[p]$ of $p$ to $[q]$ if there exists an $a$-labeled transition of $\mathcal{A}$ from $p'$ to $q'$ for some states $p'\sim p$ and $q'\sim q$.
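A minimal sketch of these two definitions (an illustration only; the partial automaton and the equivalence relation are ad hoc choices):

```python
# a partial automaton on states {0,1,2,3} over the alphabet {a,b}
delta = {(0, 'a'): 1, (2, 'a'): 3, (1, 'b'): 2}
classes = [{0, 2}, {1, 3}]          # an equivalence relation on the states

def cls(q):                         # index of the class of q
    return next(i for i, c in enumerate(classes) if q in c)

# congruence: p ~ q and delta(p,a), delta(q,a) both defined => images equivalent
assert all(cls(delta[p, x]) == cls(delta[q, x])
           for c in classes for p in c for q in c for x in 'ab'
           if (p, x) in delta and (q, x) in delta)

# quotient: [p] --x--> [q] whenever some p' ~ p has an x-transition into [q]
quotient = {(cls(p), x): cls(q) for (p, x), q in delta.items()}
assert quotient == {(0, 'a'): 1, (1, 'b'): 0}
```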
The main result of this section is the following theorem, whose proof occupies the remainder of the section.
\begin{thm}
\label{PS NT}
The pseudovariety $\mathsf{NT}$ has infinite rank and, therefore, it
is non-finitely based. \end{thm}
\begin{proof}
To establish the theorem, we show that, for infinitely many positive
integers $m$, there exists a finite semigroup $S\not\in\mathsf{NT}$
all of whose $(m-1)$-generated subsemigroups lie in $\mathsf{NT}$.
Consider the alphabet $A_n=\{x,y,w_1,\ldots,w_{n+2}\}$ and its
action on the set
\begin{displaymath}
Q_n=\{a_i,a_i',b_i,b_i',c_i,d_i: i=1,\ldots,2^n
\}
\end{displaymath}
given by the following formulas, where we adopt common notation for
partial transformations of a finite set, in terms of a 2-row matrix
in which the first row is the domain and the image of an element in
the domain is indicated immediately below it:
\begingroup \allowdisplaybreaks
\begin{align*}
\eta(x)
&=
\begin{pmatrix}
c_1&c_2&c_3&\ldots&c_{2^{n}}&b'_1&b'_2&b'_3&\ldots&b'_{2^n} \\
a_1&a_2&a_3& &a_{2^{n}}&d_1 &d_2 &d_3 & &d_{2^{n}}
\end{pmatrix},\\
\eta(y)
&=
\begin{pmatrix}
a'_1&a'_2&a'_3&\ldots&a'_{2^{n}}&d_1&d_2&d_3&\ldots&d_{2^n} \\
c_1 &c_2 &c_3 & &c_{2^{n}} &b_1&b_2&b_3& &b_{2^{n}}
\end{pmatrix},\\
\eta(w_1)
&=
\begin{pmatrix}
a_{2^{n-1}+1}&a_{2^{n-1}+2}&\ldots&a_{2^{n}} & b_{2^{n-1}+1}&b_{2^{n-1}+2}&\ldots&b_{2^{n}} \\
b'_1 &b'_2 & &b'_{2^{n-1}}& a'_1 &a'_2 & &a'_{2^{n-1}}
\end{pmatrix},\\
\eta(w_2)
&=
\begin{pmatrix}
b_{2^{n-2}+1} &b_{2^{n-2}+2} &\ldots&b_{2^{n-1}} &a_{2^{n-2}+1} &\ldots&a_{2^{n-1}} \\
b'_{2^{n-1}+1}&b'_{2^{n-1}+2}& &b'_{2^{n-1}+2^{n-2}} &a'_{2^{n-1}+1}& &a'_{2^{n-1}+2^{n-2}}
\end{pmatrix},\\
\eta(w_i)
&=
\left(\begin{matrix}
a_{2^{n-i}+1} &a_{2^{n-i}+2} &\ldots&a_{2^{n-(i-1)}}\\
b'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&b'_{2^{n-1}+\cdots+2^{n-(i-1)}+2}& &b'_{2^{n-1}+\cdots+2^{n-i}}
\end{matrix}\right.\\
&\qquad\qquad
\left.\begin{matrix}
b_{2^{n-i}+1} &b_{2^{n-i}+2} &\ldots&b_{2^{n-(i-1)}} \\
a'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&a'_{2^{n-1}+\cdots+2^{n-(i-1)}+2}& &a'_{2^{n-1}+\cdots+2^{n-i}}
\end{matrix}\right),\\
&\ \mbox{for odd}\ i\in\{3,\ldots,n\},\\
\eta(w_{i})
&=
\left(\begin{matrix}
b_{2^{n-i}+1} &b_{2^{n-i}+2} &\ldots&b_{2^{n-(i-1)}} \\
b'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&b'_{2^{n-1}+\cdots+2^{n-(i-1)}+2}& &b'_{2^{n-1}+\cdots+2^{n-i}}
\end{matrix}\right.\\
&\qquad\qquad
\left.\begin{matrix}
a_{2^{n-i}+1} &a_{2^{n-i}+2} &\ldots&a_{2^{n-(i-1)}}\\
a'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&a'_{2^{n-1}+\cdots+2^{n-(i-1)}+2}& &a'_{2^{n-1}+\cdots+2^{n-i}}
\end{matrix}\right),\\
&\ \mbox{for even}\ i\in\{4,\ldots,n\},\\
\eta(w_{n+1})
&=
\begin{pmatrix}
a_{1} &b_{1} \\
b'_{2^{n}}&a'_{2^{n}}
\end{pmatrix},\
\eta(w_{n+2})=
\begin{pmatrix}
b_{1} &a_{1} \\
b'_{2^{n}}&a'_{2^{n}}
\end{pmatrix}.
\end{align*}
\endgroup
Let $m> 5$ be an even integer, $n=m-4$ and $N_{2^{n}}$ be the set
\begin{eqnarray}
\label{ex4}
N_{2^{n}} = M \cup\langle
\eta(x),\eta(y),\eta(w_1),\ldots,\eta(w_{n+2})\rangle,
\end{eqnarray}
where the angle brackets denote the subsemigroup of the partial
transformation semigroup on $Q_n$ generated by the given
transformations and $M\simeq\mathcal{B}_{6\times 2^{n}}(\{1\})$ is
the Brandt semigroup of all partial transformations of the set $Q_n$
of rank at most~1. Note that $N_{2^n}$ is a subsemigroup of the partial
transformation semigroup on the set $Q_n$.
We claim that $N_{2^{n}}\not\in\mathsf{NT}$ and all
$(m-1)$-generated subsemigroups of $N_{2^{n}}$ are in $\mathsf{NT}$.
To establish our claim, we first define the finite semigroup $N$,
which plays an important role in this proof. In the following, we
define the automaton $\mathcal{A}_n$, whose transition semigroup is
the semigroup $N_{2^{n}}$. We also define a congruence $\sim$ on
$\mathcal{A}_n$ such that the transition semigroup of the quotient
automaton $\mathcal{A}_n/{\sim}$ is the semigroup $N$.
Let $\mathcal{N}$ be the following automaton:
\begin{center}
\begin{tikzpicture}[x=1.2mm,y=1.2mm,thick,->,>=stealth',
shorten >=1pt]
\node [state] (c) at (0,0) {$c$};
\node [state] (a) at (16,-8) {$a$};
\node [state] (a') at (16,8) {$a'$};
\node [state] (b) at (44,8) {$b$};
\node [state] (b') at (44,-8) {$b'$};
\node [state] (d) at (60,0){$d$};
\path (c) edge[below] node {$X$} (a) (a') edge[above] node {$Y$}
(c) (a) edge[below] node {$W_1$} (b') edge[right] node {$W_2$}
(a') (d) edge[above] node {$Y$} (b) (b') edge[below] node {$X$}
(d) (b) edge[above] node {$W_1$} (a') edge[left] node {$W_2$}
(b');
\end{tikzpicture}
\end{center}
and $N$ be the transition semigroup of the automaton $\mathcal{N}$.
Denoting by $\varepsilon$ the transition homomorphism
$\{X,Y,W_1,W_2\}^+\to N$, we obtain
\begin{equation}
\label{eq:XYW1W2}
\varepsilon(X)=
\begin{pmatrix}
c&b' \\
a&d
\end{pmatrix},\
\varepsilon(Y)=
\begin{pmatrix}
a'&d \\
c &b
\end{pmatrix},\
\varepsilon(W_1)=
\begin{pmatrix}
a &b\\
b'&a'
\end{pmatrix},\
\varepsilon(W_2)=
\begin{pmatrix}
b &a\\
b'&a'
\end{pmatrix}.
\end{equation}
The image of $\varepsilon$ is
\begin{displaymath}
N=
\langle
\varepsilon(X),\varepsilon(Y),\varepsilon(W_1),\varepsilon(W_{2})
\rangle,
\end{displaymath}
viewed as a subsemigroup of the partial transformation semigroup on
the set $Q=\{a,a',b,b',c,d\}$.
Since the image of none of the rank~2 partial bijections
in~\eqref{eq:XYW1W2} coincides with the domain of any of them, every
product of at least two of them has rank at most~1, so that
\begin{equation}
\label{eq:N}
N=\mathcal{B}_6(\{1\})\cup
\{\varepsilon(X),\varepsilon(Y),\varepsilon(W_1),\varepsilon(W_{2})\}
\end{equation}
where $\mathcal{B}_6(\{1\})$ stands for the semigroup of partial
bijections of the state set of rank at most one.
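This rank computation can be checked mechanically. The following Python sketch (illustrative only) encodes the four partial bijections of~\eqref{eq:XYW1W2} as dictionaries, composes them on the right as in a transition semigroup, and verifies that every product of two of them has rank at most one.

```python
def compose(f, g):
    # partial maps acting on the right: first f, then g
    return {p: g[f[p]] for p in f if f[p] in g}

# the four rank-2 partial bijections on Q = {a, a', b, b', c, d}
X  = {'c': 'a', "b'": 'd'}
Y  = {"a'": 'c', 'd': 'b'}
W1 = {'a': "b'", 'b': "a'"}
W2 = {'b': "b'", 'a': "a'"}

gens = [X, Y, W1, W2]
# no image coincides with a domain, so every 2-factor product has rank <= 1
assert all(len(compose(f, g)) <= 1 for f in gens for g in gens)
```

For instance, the product $\varepsilon(X)\varepsilon(Y)$ is the rank-1 map sending $b'$ to $b$.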
Consider the alphabet $B_n$ consisting of the following letters:
\begin{itemize}
\item $x,y,w_1,\ldots,w_{n+2}$,
\item $t_{e_i,e_{i+1}}$ ($i=1,\ldots,2^n-1$,
$e\in\{a,a',b,b',c,d\}$),
\item $t_{a_{2^n},a'_1}$, $t_{a'_{2^n},b_1}$, $t_{b_{2^n},b'_1}$,
$t_{b'_{2^n},c_1}$, $t_{c_{2^n},d_1}$, $t_{d_{2^n},a_1}$.
\end{itemize}
We extend the transition homomorphism $\eta:A_n^+\to N_{2^n}$ to
$B_n^+$ as follows:
\begin{displaymath}
\eta(t_{e,f})=
\begin{pmatrix}
e\\
f
\end{pmatrix}.
\end{displaymath}
Note that $\eta$ still takes its values in $N_{2^n}$.
Let $\mathcal{A}_n=(Q_n,B_n,\delta_n)$ be the automaton where
$\delta_n(q,g)=\Gamma(g)(q)$, for all $q\in Q_n$ and $g\in B_n$,
$\Gamma$ being the ${\mathcal L}$-representation\ of $N_{2^{n}}$ on $M$ given by~$\eta$. Let
$\mathcal{A}'_n=(Q,B_n,\delta'_n)$ be the quotient of the automaton
$\mathcal{A}_n$ obtained by identifying all states with the same
letter, that is, forgetting the numerical indices. In other words,
there is a quotient morphism $\phi'_n:\mathcal{A}_n\rightarrow
\mathcal{A}'_n$ where $\phi'_n(e_i)=e$ for every $e\in Q$ and $1\leq
i\leq 2^n$. Also, let $\Gamma'(g)(q)=\delta'_n(q,g)$, for all $q\in
Q$ and $g\in B_n$. It is easy to verify that the transition
semigroup of the automaton $\mathcal{A}'_n$ is the semigroup $N$ and
$\Gamma'$ is an ${\mathcal L}$-representation\ of $N$ on $\mathcal{B}_6(\{1\})$. Denoting
by $\psi:B_n^+\to N$ the corresponding transition homomorphism, note
that $\psi(x)=\varepsilon(X)$, $\psi(y)=\varepsilon(Y)$,
$\psi(w_i)=\varepsilon(W_1)$ for $i$ odd, and
$\psi(w_i)=\varepsilon(W_2)$ for $i$ even.
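For a small instance, these values of $\psi$ can be verified computationally. The Python sketch below (illustrative only) encodes the $\eta$-generators for $n=2$ and checks that forgetting the numerical indices, as the quotient morphism $\phi'_n$ does, yields exactly the four maps of~\eqref{eq:XYW1W2}.

```python
# eta-generators for n = 2, states encoded as (letter, index) pairs
eta = {
    'x':  {('c', i): ('a', i) for i in range(1, 5)}
          | {("b'", i): ('d', i) for i in range(1, 5)},
    'y':  {("a'", i): ('c', i) for i in range(1, 5)}
          | {('d', i): ('b', i) for i in range(1, 5)},
    'w1': {('a', 2 + j): ("b'", j) for j in (1, 2)}
          | {('b', 2 + j): ("a'", j) for j in (1, 2)},
    'w2': {('b', 2): ("b'", 3), ('a', 2): ("a'", 3)},
    'w3': {('a', 1): ("b'", 4), ('b', 1): ("a'", 4)},   # w_{n+1}
    'w4': {('b', 1): ("b'", 4), ('a', 1): ("a'", 4)},   # w_{n+2}
}

def strip(f):
    """Forget the numerical indices (the action of the morphism phi'_n)."""
    g = {}
    for (p, _), (q, _) in f.items():
        assert g.get(p, q) == q   # well defined on the ~-classes
        g[p] = q
    return g

# the maps of eq. (XYW1W2)
X, Y = {'c': 'a', "b'": 'd'}, {"a'": 'c', 'd': 'b'}
W1, W2 = {'a': "b'", 'b': "a'"}, {'b': "b'", 'a': "a'"}

assert strip(eta['x']) == X and strip(eta['y']) == Y
assert strip(eta['w1']) == W1 == strip(eta['w3'])   # psi(w_i) = eps(W_1), i odd
assert strip(eta['w2']) == W2 == strip(eta['w4'])   # psi(w_i) = eps(W_2), i even
```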
\begin{lem}\label{NN}
If $z\in B_n^{+}\setminus A_n$ and $z'\in A_n$, then we have
$\psi(z)\ne\psi(z')$. Moreover, the following equalities hold:
$\psi^{-1}(\varepsilon(X))=\{x\}$,
$\psi^{-1}(\varepsilon(Y))=\{y\}$,
$\psi^{-1}(\varepsilon(W_1))=\{w_1,w_3,\ldots,w_{n+1}\}$, and
$\psi^{-1}(\varepsilon(W_2))=\{w_2,\ldots,w_{n+2}\}$.
\end{lem}
\begin{proof}
As was already observed, none of the transformations
in~\eqref{eq:XYW1W2} may be written as a product of two elements.
Also, under $\psi$,
the subset $B_n\setminus A_n$ maps to $\mathcal{B}_6(\{1\})$.
Hence, if $z\in B_n^{+}\setminus A_n$ and $z'\in A_n$, then $z$
and $z'$ are not identified by $\psi$.
\end{proof}
Let $z\in A_n$ and $z_1,z_2\in B_n^{+}\setminus A_n$. We claim that
$\eta(z)\neq\eta(z_1z_2)$. Indeed, suppose the equality holds. Then,
the elements $z$ and $z_1z_2$ have the same action on the states of
the automaton $\mathcal{A}_n$. So, they have the same action on the
states of the automaton $\mathcal{A}'_n$. However, this contradicts
Lemma \ref{NN}. Hence, we have the following corollary.
\begin{cor}\label{N2nN2n}
If $z\in B_n^{+}\setminus A_n$ and $z'\in A_n$, then we have
$\eta(z)\ne\eta(z')$.\qed
\end{cor}
Since
\begin{align*}
\left\langle\bigl\{\bigr.\right.
&\eta(t_{e_1,e_2}),\eta(t_{e_2,e_3}),\ldots,\eta(t_{e_{2^n-1},e_{2^n}}),
\eta(t_{a_{2^n},a'_1}),\eta(t_{a'_{2^n},b_1}),\\
&\eta(t_{b_{2^n},b'_1}),\eta(t_{b'_{2^n},c_1}),
\eta(t_{c_{2^n},d_1}),\eta(t_{d_{2^n},a_1})
\mid \left.\bigl.e\in \{a,a',b,b',c,d\}\bigr\}\right\rangle=M,
\end{align*}
the transition semigroup of the automaton $\mathcal{A}_n$ is the
semigroup $N_{2^n}$.
In the following lemma, we examine the $\lambda$ and $\rho$
sequences, which are useful for testing whether the semigroups
$N_{2^n}$ lie in~$\mathsf{NT}$.
\begin{lem}\label{lambda-n+3}
Suppose that the elements
\begin{equation}\label{line-lambda-n+3}
\delta_n\bigl(p,\lambda_{n+3}(\alpha,\beta,1,\gamma_1,\gamma_2,\ldots,
\gamma_{n+2})\bigr),
\delta_n\bigl(p',\rho_{n+3}(\alpha,\beta,1,\gamma_1,\gamma_2,\ldots,
\gamma_{n+2})\bigr)
\end{equation}
are defined in the automaton $\mathcal{A}_n$, for some
$\alpha,\beta \in B_n^+$, $\gamma_1,\gamma_2,\ldots, \gamma_{n+2}
\in B_n^*$ and $p,p'\in Q_n$. Then, one of the following
conditions holds:
\begin{enumerate}[label={(\roman*)}]
\item\label{item:lambda-n+3-1}
$\{\psi(\alpha),\psi(\beta)\}=\{\varepsilon(X),\varepsilon(Y)\}$,
$\psi(\gamma_1)=\psi(\gamma_3)=\cdots=\psi(\gamma_{n+1})=\varepsilon(W_1)$ and
$\psi(\gamma_2)=\psi(\gamma_4)=\cdots=\psi(\gamma_{n+2})=\varepsilon(W_2)$;
\item\label{item:lambda-n+3-2} $\psi(\alpha)=\psi(\beta)=
\begin{psmallmatrix}
e\\
e
\end{psmallmatrix}$
and $\psi(\gamma_i)\in\{
\begin{psmallmatrix}
e\\
e
\end{psmallmatrix},1\}$
for all $1\leq i\leq n+2$ and some element
$e\in Q$.
\end{enumerate}
\end{lem}
\begin{proof}
By (\ref{line-lambda-n+3}), the elements
$\delta_n(p,\lambda_2(\alpha,\beta,1,\gamma_1))$ and
$\delta_n(p',\rho_{2}(\alpha,\beta,1,\gamma_1))$ are
also defined in the automaton $\mathcal{A}_n$. It follows that
$\delta'_n(\phi'_n(p),\lambda_2(\alpha,\beta,1,\gamma_1))$ and
$\delta'_n(\phi'_n(p'),\rho_{2}(\alpha,\beta,1,\gamma_1))$ are
defined in the automaton $\mathcal{A}'_n$ and, thus, the
inequalities
\begin{displaymath}
\lambda_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr)
\neq 0\ne
\rho_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr)
\end{displaymath}
hold in the semigroup $N$. See the appendix for a calculation,
too cumbersome to carry out by hand, which checks that one of
the following conditions holds:
\begin{enumerate}[label={(\alph*)}]
\item\label{item:lambda-n+3-3}
$\{\psi(\alpha),\psi(\beta)\}=\{\varepsilon(X),\varepsilon(Y)\}$
and $\psi(\gamma_1)=\varepsilon(W_1)$;
\item\label{item:lambda-n+3-4} $\psi(\alpha)=\psi(\beta)=
\begin{psmallmatrix}
e\\
e
\end{psmallmatrix}
$
and $\psi(\gamma_1)\in\{
\begin{psmallmatrix}
e\\
e
\end{psmallmatrix}
,1\}$ for some state $e\in Q$.
\end{enumerate}
First, suppose that Condition~\ref{item:lambda-n+3-3} holds. We
show that \ref{item:lambda-n+3-1} also holds. Note that we have
\begin{align*}
&\bigl\{\lambda_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr),
\rho_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr)\bigr\}\\
&\quad=\{\varepsilon(XYW_1YX),\varepsilon(YXW_1XY)\}.
\end{align*}
By~\eqref{eq:N},
the ranks of the elements $\varepsilon(XYW_1YX)$ and
$\varepsilon(YXW_1XY)$ are both one and, thus, we have
\begin{displaymath}
\{\varepsilon(XYW_1YX),\varepsilon(YXW_1XY)\}
=\left\{
\begin{pmatrix}
b'\\
a
\end{pmatrix},
\begin{pmatrix}
a'\\
b
\end{pmatrix}
\right\}.
\end{displaymath}
Since
$\lambda_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr)
\ne 0
\ne\rho_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr)$,
it follows that $\psi(\gamma_2)=\varepsilon(W_2)$.
Similarly, we have
\begin{align*}
&\bigl\{\lambda_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr),
\rho_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr)\bigr\}\\
&\quad=\left\{\begin{pmatrix}
a'\\
a
\end{pmatrix},
\begin{pmatrix}
b'\\
b
\end{pmatrix}
\right\},
\end{align*}
which yields the equality $\psi(\gamma_3)=\varepsilon(W_1)$. Also,
similarly, we obtain the equalities
$\psi(\gamma_5)=\psi(\gamma_7)=\cdots=\psi(\gamma_{n+1})=\varepsilon(W_1)$
and
$\psi(\gamma_4)=\psi(\gamma_6)=\cdots=\psi(\gamma_{n+2})=\varepsilon(W_2)$.
Now, suppose that Condition~\ref{item:lambda-n+3-4} holds. We
claim that so does~\ref{item:lambda-n+3-2}. Indeed, we then have
\begin{displaymath}
\bigl\{\lambda_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr),
\rho_{2}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1)\bigr)\bigr\}
=\left\{\begin{pmatrix}
e\\
e
\end{pmatrix}
\right\}.
\end{displaymath}
By~\eqref{eq:N} and \eqref{eq:XYW1W2},
if $\Gamma'(Z)(e)=e$, for some $Z\in N$, then $Z\in
\mathcal{B}_6(\{1\})$. Now, as
\begin{displaymath}
\lambda_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr)
\neq 0\ne
\rho_{3}\bigl(\psi(\alpha),\psi(\beta),1,\psi(\gamma_1),\psi(\gamma_2)\bigr),
\end{displaymath}
we obtain $\psi(\gamma_2)\in\{\begin{psmallmatrix}
e\\
e
\end{psmallmatrix},
1\}$. Similarly, we have
$\psi(\gamma_i)\in\{\begin{psmallmatrix}
e\\
e
\end{psmallmatrix},1\}$ for all $3\leq i\leq n+2$.
\end{proof}
To continue our investigation, we need the following lemma, which is
easy to prove by induction.
\begin{lem}\label{lambda-rho}
Let
\begin{align*}
\lambda_{i}
&=\lambda_{i}(x, y, 1, w_{1}, w_{2}, \ldots,w_n,
w_{n+1},w_{n+2},w_{n+1},w_{n+2},\ldots),\\
\rho_{i}
&=\rho_{i}(x, y, 1, w_{1}, w_{2}, \ldots,w_n,
w_{n+1},w_{n+2},w_{n+1},w_{n+2},\ldots).
\end{align*}
Then, the following equalities hold:
\begin{align*}
\bigl(\eta(\lambda_{1}),\eta(\rho_{1})\bigr)
&=\left(
\begin{pmatrix}
b'_1&b'_2&\ldots&b'_{2^{n}}\\
b_1 &b_2 & &b_{2^n}
\end{pmatrix},
\begin{pmatrix}
a'_1&a'_2&\ldots&a'_{2^{n}}\\
a_1 &a_2 & &a_{2^n}
\end{pmatrix}
\right),\\
\bigl(\eta(\lambda_{i}),\eta(\rho_{i})\bigr)
&=\bigg(
\begin{pmatrix}
b'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&\ldots&b'_{2^n}\\
a_1 & &a_{2^{n-(i-1)}}
\end{pmatrix},\\
&\qquad
\begin{pmatrix}
a'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&\ldots&a'_{2^n}\\
b_1 & &b_{2^{n-(i-1)}}
\end{pmatrix}
\bigg),\\
&\qquad\mbox{for even}\ 2\leq i\leq n-2,\\
\bigl(\eta(\lambda_{i}),\eta(\rho_{i})\bigr)
&=\bigg(
\begin{pmatrix}
b'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&\ldots&b'_{2^n}\\
b_1 & &b_{2^{n-(i-1)}}
\end{pmatrix},\\
&\qquad
\begin{pmatrix}
a'_{2^{n-1}+\cdots+2^{n-(i-1)}+1}&\ldots&a'_{2^n}\\
a_1 & &a_{2^{n-(i-1)}}
\end{pmatrix}
\bigg),\\
&\qquad\mbox{for odd}\ 3\leq i\leq n-1,\\
\bigl(\eta(\lambda_{n}),\eta(\rho_{n})\bigr)
&=\left(
\begin{pmatrix}
b'_{2^n-1}&b'_{2^n}\\
a_{1} &a_{2}
\end{pmatrix},
\begin{pmatrix}
a'_{2^n-1}&a'_{2^n}\\
b_{1} &b_{2}
\end{pmatrix}
\right),\\
\bigl(\eta(\lambda_i),\eta(\rho_i)\bigr)
&=\left(
\begin{pmatrix}
b'_{2^n}\\
b_{1}
\end{pmatrix},
\begin{pmatrix}
a'_{2^n}\\
a_{1}
\end{pmatrix}
\right),\ \mbox{for odd}\ n< i,\ \mbox{and}\\
\bigl(\eta(\lambda_{i}),\eta(\rho_{i})\bigr)
&=\left(
\begin{pmatrix}
b'_{2^n}\\
a_{1}
\end{pmatrix},
\begin{pmatrix}
a'_{2^n}\\
b_{1}
\end{pmatrix}
\right),\ \mbox{for even}\ n< i.
\end{align*}
\end{lem}
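For small even $n$, the conclusion drawn from Lemma~\ref{lambda-rho} can be confirmed computationally. The Python sketch below (illustrative only) encodes the generators $\eta(w_1),\ldots,\eta(w_{n+2})$ as partial maps, starts from the values of $\eta(\lambda_1)$ and $\eta(\rho_1)$ stated in the lemma, iterates the recursion $\lambda_{k+1}=\lambda_k w_k\rho_k$ and $\rho_{k+1}=\rho_k w_k\lambda_k$, and checks that $\eta(\lambda_{n+3})$ and $\eta(\rho_{n+3})$ are nonzero and distinct.

```python
def compose(f, g):
    # partial transformations acting on the right: first f, then g
    return {p: g[f[p]] for p in f if f[p] in g}

def w_generators(n):
    """The maps eta(w_1), ..., eta(w_{n+2}) on Q_n, states as (letter, index)."""
    N = 2**n
    w = {}
    for i in range(1, n + 1):
        off = N - 2**(n - i + 1)   # 2^{n-1} + ... + 2^{n-(i-1)}, zero for i = 1
        pairs = [('a', "b'"), ('b', "a'")] if i % 2 else [('b', "b'"), ('a', "a'")]
        w[i] = {(s, 2**(n - i) + j): (t, off + j)
                for s, t in pairs for j in range(1, 2**(n - i) + 1)}
    w[n + 1] = {('a', 1): ("b'", N), ('b', 1): ("a'", N)}
    w[n + 2] = {('b', 1): ("b'", N), ('a', 1): ("a'", N)}
    return w

def lam_rho(n):
    """eta(lambda_{n+3}), eta(rho_{n+3}) via lambda_{k+1} = lambda_k w_k rho_k."""
    w, N = w_generators(n), 2**n
    lam = {("b'", i): ('b', i) for i in range(1, N + 1)}   # eta(lambda_1)
    rho = {("a'", i): ('a', i) for i in range(1, N + 1)}   # eta(rho_1)
    for k in range(1, n + 3):
        lam, rho = (compose(compose(lam, w[k]), rho),
                    compose(compose(rho, w[k]), lam))
    return lam, rho

for n in (2, 4):
    lam, rho = lam_rho(n)
    # nonzero and distinct, matching the odd-index entries of the lemma
    assert lam == {("b'", 2**n): ('b', 1)} and rho == {("a'", 2**n): ('a', 1)}
```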
By Lemma \ref{lambda-rho}, we conclude that the subsemigroup
$\langle\eta(A_n)\rangle$
of $N_{2^n}$ is not in $\mathsf{NT}$.
Also, by Corollary \ref{N2nN2n}, no subsemigroup of $N_{2^n}$
containing $\eta(A_n)$ may be generated by fewer than $m=n+4$
elements. To proceed, we prove that
\begin{equation}
\begin{split}
\parbox[t]{.8\textwidth}{if $U$ is a subsemigroup
of $N_{2^n}$ and $U$ is not in $\mathsf{NT}$, then $U$ contains
the set $\eta(A_n)$.}
\end{split}
\label{U}
\end{equation}
This shows that all $(m-1)$-generated subsemigroups of $N_{2^{n}}$
are in $\mathsf{NT}$ and, thereby, (\ref{U}) is enough to complete the
proof of Theorem~\ref{PS NT}.
So, suppose that $U\notin\mathsf{NT}$ is a subsemigroup
of~$N_{2^n}$. Since $U\not\in\mathsf{NT}$, there exist words $r_1,
r_2\in B_n^+$ and $v_{1}, v_{2}, \ldots,v_{n+2}\in B_n^*$ such that
\begin{equation}
\label{eq:U-non-NT-1}
\eta(r_1), \eta(r_2), \eta(v_{1}),\ldots, \eta(v_{n+2})\in U,
\end{equation}
\begin{equation}
\label{eq:U-non-NT-2}
\parbox[t]{.8\textwidth}{the elements
$\lambda_{n+3}\bigl(\eta(r_1), \eta(r_2), 1, \eta(v_{1}),
\eta(v_{2}), \ldots, \eta(v_{n+2})\bigr)$ and
$\rho_{n+3}\bigl(\eta(r_1), \eta(r_2), 1, \eta(v_{1}),
\eta(v_{2}), \ldots, \eta(v_{n+2})\bigr)$
of~$U$ are nonzero and distinct.}
\end{equation}
Therefore, there exist elements
$f,g,f',g'\in Q$ and integers $1\leq i_1,i_2,j_1,j_2\leq 2^n$ such
that
$\delta_n(f_{i_1},\lambda_{n+3})=g_{j_1}$,
$\delta_n(f'_{i_2},\rho_{n+3})=g'_{j_2}$ and if
$\delta_n(f_{i_1},\rho_{n+3})$ is defined then
$\delta_n(f_{i_1},\rho_{n+3})\neq g_{j_1}$,
where
\begin{displaymath}
\lambda_{k}=\lambda_{k}(r_1, r_2, 1, v_{1}, v_{2}, \ldots, v_{k-1})\ \text{and}\
\rho_{k}=\rho_{k}(r_1, r_2, 1, v_{1}, v_{2}, \ldots, v_{k-1})
\end{displaymath}
for every $1\leq k\leq n+3$.
Therefore, by Lemmas \ref{NN} and \ref{lambda-n+3}, one of the
following conditions holds:
\begin{enumerate}[start=1,label={\bfseries (I\arabic*)}]
\item\label{I1} we have $\{r_1,r_2\}=\{x,y\}$,
$v_1,v_3,\ldots,v_{n+1}\in\{w_1,w_3,\ldots,w_{n+1}\}$, and
$v_2,v_4,\ldots,v_{n+2}\in\{w_2,w_4,\ldots,w_{n+2}\}$;
\item\label{I2} there exists an element $e\in Q$ such that, if
$\delta_n(s,f)=t$ for some $f\in \{r_1,r_2,v_1,\ldots, v_{n+2}\}$
and some $s,t\in Q_n$ with $f\neq 1$, then there exist integers
$1\leq l_1,l_2\leq 2^n$ such that $s=e_{l_1}$ and $t=e_{l_2}$.
\end{enumerate}
First, we assume that Case \ref{I1} holds. Suppose that $v_k\neq w_k$
for some $k$, and let $k\in\{1,\ldots,n+2\}$ be the least such integer.
Then, for $1\leq i\leq k$, the sequences $\eta(\lambda_i)$ and
$\eta(\rho_i)$ are given by Lemma \ref{lambda-rho}. Since $v_k\in
\{w_1,w_2,\ldots,w_{n+2}\}$, there exists an integer $k'$ such that
$v_k=w_{k'}$. Also, since
$v_1,v_3,\ldots,v_{n+1}\in\{w_1,w_3,\ldots,w_{n+1}\}$ and
$v_2,v_4,\ldots,v_{n+2}\in\{w_2,w_4,\ldots,w_{n+2}\}$, we have
$k'\not\in \{k-1,k+1\}$. We claim that $k\ne k'$ is impossible; then
$k=k'$, which contradicts the assumption $w_k\ne v_k=w_{k'}$. To
establish the claim, we distinguish several cases, taking into
account the choice of $\eta(w_k)$ at the beginning of the proof of
Theorem~\ref{PS NT}. In each case, we show that
$\eta(\lambda_{k+1})=0$ or $\eta(\lambda_{k+2})=0$, which
contradicts our assumption that $\eta(\lambda_{n+3})\neq 0$
(see~\eqref{eq:U-non-NT-2}). For that purpose, we take into account
without further reference the definition of $\eta(w_{k'})$ and
Lemma~\ref{lambda-rho}. Here are the cases in question:
\begin{itemize}
\item $k'< k\leq n$: since $k'< k$, we have $2^{n-(k-1)}<
2^{n-k'}+1$; we deduce that $\eta(\lambda_{k+1})= 0$.
\item $k'< k= n+1$: since $k'< n+1$, we have $1< 2^{n-k'}+1$ and,
thus, we get $\eta(\lambda_{n+2})= 0$.
\item $k'< k= n+2$: if $k'< n+1$, then we have $1< 2^{n-k'}+1$ so
that $\eta(\lambda_{n+3})= 0$; also, if $k'= n+1$, since $k'\neq
k-1$, then we have $\eta(\lambda_{n+3})= 0$.
\item $k < k' \le n$:
since $k < k'$, we have
\begin{align*}
\lefteqn{\quad\bigl(\eta(\lambda_{k+1}),\eta(\rho_{k+1})\bigr)}
\\
&\qquad=\Bigg(
\begin{pmatrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-k'}+1}&\ldots&e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k'-1)}}\\
f_{2^{n-k}+\cdots+2^{n-(k'-1)}+1} &
&f_{2^{n-k}+\cdots+2^{n-k'}}
\end{pmatrix},\\
&\qquad\quad\quad
\begin{pmatrix}
g_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-k'}+1}&\ldots&g_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k'-1)}}\\
h_{2^{n-k}+\cdots+2^{n-(k'-1)}+1} & &h_{2^{n-k}+\cdots+2^{n-k'}}
\end{pmatrix}
\Bigg),
\end{align*}
for some elements $e,f,g,h\in\{a,a',b,b'\}$. Since $k'\neq k+1$,
the inequalities $2^{n-k}+1< 2^{n-k}+\cdots+2^{n-(k'-1)}+1$ and
$2^{n-k}+\cdots+2^{n-k'}<2^{n-(k-1)}$ hold. There exists an integer $k''$ such that
$v_{k+1}=w_{k''}$.
We have
\begin{align*}
\lefteqn{\quad\quad \eta(\lambda_{k+1}w_{k''})}
\\
&\qquad=
\begin{pmatrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-k'}+1}&\ldots&e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k'-1)}}\\
f_{2^{n-k}+\cdots+2^{n-(k'-1)}+1} &
&f_{2^{n-k}+\cdots+2^{n-k'}}
\end{pmatrix}\\
&\qquad\qquad\left(\begin{matrix}
a''_{2^{n-k''}+1} &\ldots&a''_{2^{n-(k''-1)}}\\
b'_{2^{n-1}+\cdots+2^{n-(k''-1)}+1} & &b'_{2^{n-1}+\cdots+2^{n-k''}}
\end{matrix}\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad
\left.\begin{matrix}
b''_{2^{n-k''}+1} &\ldots&b''_{2^{n-(k''-1)}} \\
a'_{2^{n-1}+\cdots+2^{n-(k''-1)}+1} & &a'_{2^{n-1}+\cdots+2^{n-k''}}
\end{matrix}\right),
\end{align*}
for some $a'',b''\in\{a,b\}$. If $k<k''$, then
$2^{n-(k''-1)}<2^{n-k}+1< 2^{n-k}+\cdots+2^{n-(k'-1)}+1$ which
entails $\eta(\lambda_{k+1}w_{k''})=0$. Also, if $k''<k$, then
$2^{n-k}+\cdots+2^{n-k'}<2^{n-(k-1)}<2^{n-k''}+1$ and, thus,
again, we obtain $\eta(\lambda_{k+1}w_{k''})=0$. We conclude that
$k''=k$ and $v_{k+1}=w_k$. It follows that
\begin{align*}
\eta(\lambda_{k+1}w_k)
=&\left(\begin{matrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-k'}+1} &\ldots\\
q_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k+1)}+\cdots+2^{n-(k'-1)}+1}&
\end{matrix}\right.\\
&\qquad\qquad\quad\left.\begin{matrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k'-1)}}\\
q_{2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k+1)}+\cdots+2^{n-k'}}
\end{matrix}\right),
\end{align*}
for some element $q\in\{a,a',b,b'\}$.
Since $k'\neq k+1$, we have
\begin{align*}
&2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k'-1)}\\
&< 2^{n-1}+\cdots+2^{n-(k-1)}+2^{n-(k+1)}+\cdots+2^{n-(k'-1)}+1,
\end{align*}
and, thus, $\eta(\lambda_{k+2})= 0$.
\item $k< k'=n+1$:
since $k< k'$, we have $k\leq n$, which yields the equality
\begin{align*}
\bigl(\eta(\lambda_{k+1}),\eta(\rho_{k+1})\bigr)
=\Bigg(&
\begin{pmatrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+1}\\
f_{2^{n-(k-1)}}
\end{pmatrix},
\begin{pmatrix}
g_{2^{n-1}+\cdots+2^{n-(k-1)}+1}\\
h_{2^{n-(k-1)}}
\end{pmatrix}
\Bigg),
\end{align*}
for some elements $e,f,g,h\in\{a,a',b,b'\}$. There exists an integer $k''$ such that
$v_{k+1}=w_{k''}$.
We have
\begin{align*}
\lefteqn{\quad\quad \eta(\lambda_{k+1}w_{k''})}
\\
&\qquad=
\begin{pmatrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+1}\\
f_{2^{n-(k-1)}}
\end{pmatrix}\left(\begin{matrix}
a''_{2^{n-k''}+1} &\ldots&a''_{2^{n-(k''-1)}}\\
b'_{2^{n-1}+\cdots+2^{n-(k''-1)}+1} & &b'_{2^{n-1}+\cdots+2^{n-k''}}
\end{matrix}\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left.\begin{matrix}
b''_{2^{n-k''}+1} &\ldots&b''_{2^{n-(k''-1)}} \\
a'_{2^{n-1}+\cdots+2^{n-(k''-1)}+1} & &a'_{2^{n-1}+\cdots+2^{n-k''}}
\end{matrix}\right),
\end{align*}
for some $a'',b''\in\{a,b\}$. If $k<k''$, then
$2^{n-(k''-1)}<2^{n-(k-1)}$, which yields the equality
$\eta(\lambda_{k+1}w_{k''})=0$. Also, if $k''<k$, then
$2^{n-(k-1)}<2^{n-k''}+1$ and, thus, again, we obtain
$\eta(\lambda_{k+1}w_{k''})=0$. This shows that $k''=k$, so that
$v_{k+1}=w_k$. It follows that
\begin{align*}
\lefteqn{\eta(\lambda_{k+1}w_k\rho_{k+1})}\\
&\quad=\begin{pmatrix}
e_{2^{n-1}+\cdots+2^{n-(k-1)}+1}\\
f_{2^{n-(k-1)}}
\end{pmatrix}
\begin{pmatrix}
e'_{2^{n-(k-1)}} &g'_{2^{n-(k-1)}}\\
f'_{2^{n-1}+\cdots+2^{n-k}}&h'_{2^{n-1}+\cdots+2^{n-k}}
\end{pmatrix}
\\ &\qquad\qquad
\begin{pmatrix}
g_{2^{n-1}+\cdots+2^{n-(k-1)}+1}\\
h_{2^{n-(k-1)}}
\end{pmatrix}
= 0,
\end{align*}
for some letters $e',f',g',h'\in\{a,a',b,b'\}$, which shows that
$\eta(\lambda_{k+2})= 0$.
\item $k<k'=n+2$:
this case is handled similarly to the preceding one.
\end{itemize}
Hence, we must have
\begin{equation}
v_1=w_1,\ldots, v_{n+2}=w_{n+2}.
\label{v_1=w_1}
\end{equation}
Now, in Case~\ref{I1}, we
have $\{r_1,r_2\}=\{x,y\}$ and (\ref{v_1=w_1}) holds. It follows from~(\ref{eq:U-non-NT-1})
that the subsemigroup $U$ contains $\eta(A_n)$, which
establishes~\eqref{U}.
To complete the proof of Theorem~\ref{PS NT}, we proceed to consider
Case~\ref{I2}. It turns out to be impossible, as we show that it
leads to a contradiction.
Now, we assume that Case \ref{I2} holds.
Let $s\in \{r_1,r_2,v_1,\ldots,v_{n+2}\}$.
Taking into account how $s$ acts in the quotient automaton
$\mathcal{A}'_n$, where it labels a loop at the state $e$, we deduce
that either there is a factorization
\begin{equation}
s=\xi_1\zeta_1\xi_2\cdots\zeta_{n_s}\xi_{n_s+1}
\label{eq:factorization}
\end{equation}
satisfying the
following conditions:
\begin{enumerate}
\item $1\leq n_s$;
\item $\xi_1, \xi_{n_s+1}\in \langle x,y\rangle^1$;
\item $\xi_2,\ldots, \xi_{n_{s}}\in \langle x,y\rangle$;
\item $\zeta_1,\ldots, \zeta_{n_{s}}\in \{w_1,\ldots,w_{n+2}\}$,
\end{enumerate}
or a letter of $B_n\setminus A_n$ intervenes in $s$, or $s=1$.
To continue our treatment of Case \ref{I2}, we need to consider
another quotient of the automaton $\mathcal{A}_n$, this time
dropping the letters and retaining just the indices of the states.
Let $I_n=\{1,\ldots,2^n\}$ and let $\mathcal{A}''_n=(I_n,B_n,\mu_n)$
be the quotient of the automaton $\mathcal{A}_n$ given by the
morphism $\phi''_n:\mathcal{A}_n\rightarrow \mathcal{A}''_n$ where
$\phi''_n(e_i)=i$
for every $e\in Q$ and $i\in I_n$. Let $\kappa_n$ be the transition
homomorphism for the automaton~$\mathcal{A}''_n$.
\begin{lem}\label{A''}
There exists a permutation $\sigma\in S_{2^n}$ such that
\begin{align*}
\kappa_n(w_i)
&=
\begin{pmatrix}
\sigma(2^{i-1}) &\sigma(2^{i-1}+2^i) &\ldots&\sigma(2^{i-1}+(2^{n-i}-1)\times 2^{i})\\
\sigma(2^{i-1}+1)&\sigma(2^{i-1}+2^i+1)& &\sigma(2^{i-1}+(2^{n-i}-1)\times 2^i+1)
\end{pmatrix},\\
\kappa_n(w_{n})
&=\begin{pmatrix}
\sigma(2^{n-1})\\
\sigma(2^{n-1}+1)
\end{pmatrix},\
\kappa_n(w_{n+1})=\kappa_n(w_{n+2})=
\begin{pmatrix}
\sigma(2^{n})\\
\sigma(1)
\end{pmatrix},
\end{align*}
for every integer $1\leq i< n$.
\end{lem}
\begin{proof}
We proceed by induction on $n$.
For $n=1$, let $\sigma\in S_2$ with $\sigma(1)=2$ and $\sigma(2)=1$.
Since the equalities $\kappa_1(w_1)=\begin{psmallmatrix}
2\\
1
\end{psmallmatrix}$ and
$\kappa_1(w_2)=\kappa_1(w_3)=\begin{psmallmatrix}
1\\
2
\end{psmallmatrix}$ hold, the result is
satisfied for $n=1$.
Now, we suppose that $\sigma\in S_{2^n}$ is a permutation such
that the statement of the lemma holds for~$n$.
We define a permutation $\sigma'\in S_{2^{n+1}}$ as follows:
$\sigma'(2k)=\sigma(k)$ and $\sigma'(2k-1)=\sigma(k)+2^n$, for all
$1\leq k\leq 2^n$. To establish that $\sigma'$ has the required
property for the next value of $n$, we need to relate the actions
$\kappa_{n+1}(w_{i+1})$ of $w_{i+1}$ in $\mathcal{A}''_{n+1}$ and
$\kappa_n(w_i)$ of $w_i$ in $\mathcal{A}''_n$:
\begin{align*}
\kappa_{n+1}(w_{i+1})
&=\left(\begin{matrix}
2^{n-i}+1 &2^{n-i}+2 &\ldots \\
2^n+2^{n-1}+\cdots+2^{n-(i-1)}+1 &2^n+2^{n-1}+\cdots+2^{n-(i-1)}+2
\end{matrix}\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left.\begin{matrix}
2^{n-(i-1)} \\
2^n+2^{n-1}+\cdots+2^{n-i}
\end{matrix}\right),\\
\kappa_n(w_{i})
&=\left(\begin{matrix}
2^{n-i}+1 &2^{n-i}+2 &\ldots\\
2^{n-1}+\cdots+2^{n-(i-1)}+1 &2^{n-1}+\cdots+2^{n-(i-1)}+2&
\end{matrix}\right.\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\left.\begin{matrix}
2^{n-(i-1)}\\
2^{n-1}+\cdots+2^{n-i}
\end{matrix}\right).
\end{align*}
Then, from the induction hypothesis, we get
\begin{align*}
\kappa_{n+1}(w_{i+1})
&=\begin{pmatrix}
\sigma(2^{i-1}) &\ldots&\sigma(2^{i-1}+(2^{n-i}-1)\times 2^{i}) \\
\sigma(2^{i-1}+1)+2^n& &\sigma(2^{i-1}+(2^{n-i}-1)\times 2^i+1)+2^n
\end{pmatrix}\\
&=\begin{pmatrix}
\sigma'(2^{i}) &\ldots&\sigma'(2^{i}+(2^{(n+1)-(i+1)}-1)\times 2^{i+1})\\
\sigma'(2^{i}+1)& &\sigma'(2^{i}+(2^{(n+1)-(i+1)}-1)\times 2^{(i+1)}+1)
\end{pmatrix},
\end{align*}
for $1\leq i\leq n$.
Also, since $\kappa_{n+1}(w_1)=\begin{psmallmatrix}
2^{n}+1&\ldots&2^{n+1}\\
1 & &2^{n}
\end{psmallmatrix}$ and $1\leq \sigma(1),\ldots,\sigma(2^n)\leq 2^n$, we have
\begin{align*}
\kappa_{n+1}(w_1)
&=\begin{pmatrix}
\sigma(1)+2^n&\sigma(2)+2^n&\ldots&\sigma(2^n)+2^n\\
\sigma(1) &\sigma(2) & &\sigma(2^n)
\end{pmatrix}\\
&=\begin{pmatrix}
\sigma'(1)&\sigma'(3)&\ldots&\sigma'(2^{n+1}-1)\\
\sigma'(2)&\sigma'(4)& &\sigma'(2^{n+1})
\end{pmatrix}.
\end{align*}
Similarly, we have
$\kappa_{n+1}(w_{n+2})=\kappa_{n+1}(w_{n+3})=
\begin{psmallmatrix}
\sigma'(2^{n+1})\\
\sigma'(1)
\end{psmallmatrix}$.
The result follows.
\end{proof}
By Lemma \ref{A''}, the automaton $\mathcal{A}''_n$ is a cycle. For
example, the automaton $\mathcal{A}''_4$ is drawn in
Figure~\ref{fig:Adoubleprime4}.
\begin{figure}
\caption{The automaton $\mathcal{A}''_4$}
\label{fig:Adoubleprime4}
\end{figure}
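Lemma~\ref{A''} can also be confirmed computationally: in the Python sketch below (illustrative only), the index actions of $w_1,\ldots,w_{n+2}$, read off from the definition of $\eta$, are combined and checked to form a single directed cycle on $\{1,\ldots,2^n\}$.

```python
def index_edges(n):
    """Index action of w_1, ..., w_{n+2} in the quotient automaton A''_n."""
    edges = {}
    for i in range(1, n + 1):
        off = 2**n - 2**(n - i + 1)   # 2^{n-1} + ... + 2^{n-(i-1)}, zero for i = 1
        for j in range(1, 2**(n - i) + 1):
            edges[2**(n - i) + j] = off + j
    edges[1] = 2**n                   # w_{n+1} and w_{n+2} both send index 1 to 2^n
    return edges

def is_single_cycle(n):
    edges, q, seen = index_edges(n), 1, set()
    for _ in range(2**n):
        q = edges[q]
        seen.add(q)
    return q == 1 and len(seen) == 2**n

assert all(is_single_cycle(n) for n in (2, 3, 4, 5))
```

For $n=2$, for instance, the resulting cycle is $1\to 4\to 2\to 3\to 1$.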
We come back to Case \ref{I2}. Let $\sigma\in S_{2^n}$ be the
permutation given by Lemma \ref{A''} for the automaton
$\mathcal{A}''_n$ and $\sigma^{-1}$ be the inverse of the
permutation~$\sigma$. Since
$\delta_n(f_{i_1},\lambda_{n+3})=g_{j_1}$ and
$\delta_n(f'_{i_2},\rho_{n+3})=g'_{j_2}$, the condition of
Case~\ref{I2} implies that there exist integers $1\leq
l_1,l_2,l_3,l_4\leq 2^n$ such that
$\delta_n(e_{l_1},\lambda_n)=e_{l_2}$ and
$\delta_n(e_{l_3},\rho_n)=e_{l_4}$. Hence, we have
$\mu_n(l_1,\lambda_n)=l_2$ and $\mu_n(l_3,\rho_n)=l_4$. Now, we
claim that
\begin{equation}
\eta(\lambda_n)=\begin{pmatrix}
e_{l_1}\\
e_{l_2}
\end{pmatrix}
\text{ and }
\eta(\rho_n)= \begin{pmatrix}
e_{l_3}\\
e_{l_4}
\end{pmatrix}.
\label{eq:lamdan-rhon-rank-1}
\end{equation}
If a letter of $B_n\setminus A_n$ intervenes in some element of the
set $\{r_1,r_2,v_1,\ldots,v_{n-1}\}$, then the claim clearly holds.
Now, suppose that no letter of $B_n\setminus A_n$ intervenes in any
element of the set $\{r_1,r_2,v_1,\ldots,v_{n-1}\}$. Since
$r_1,r_2\neq 1$, we may consider the numbers $\nu(r_i)$ of factors
$w_j$ ($1\le j\le n+2$) that intervene in the corresponding
factorization~\eqref{eq:factorization} of~$r_i$. Note that
$\nu(r_i)>0$ because $n_{r_i}\ge1$.
Hence, the numbers of occurrences of elements of the set
$\{w_1,\ldots,w_{n+2}\}$ as factors in factorizations of $\lambda_n$
and $\rho_n$ are at least~$2^n$. We have $\mu_n(i,z)=i$, for every
integer $1\leq i\leq 2^n$ and an element $z\in\{x,y\}$. Also, if
$\mu_n(i,z)$ is defined for some integer $1\leq i\leq 2^n$ and
element $z\in\{w_1,\ldots,w_{n+2}\}$, then
$\sigma^{-1}(\mu_n(i,z))=\sigma^{-1}(i)+1$. Now, as
$\mu_n(l_1,\lambda_n)$ and $\mu_n(l_3,\rho_n)$ are defined and the
automaton $\mathcal{A}''_n$ is a cycle of length $2^n$, the
sequences $\lambda_n$ and $\rho_n$ go through the cycle of
$\mathcal{A}''_n$ at least once. Therefore, we have
\begin{align*}
&\lambda_n=\alpha_1w_{n+1}\beta_1,\ \rho_n=\alpha_2w_{n+1}\beta_2 \mbox{ or}\\
&\lambda_n=\alpha_1w_{n+2}\beta_1,\ \rho_n=\alpha_2w_{n+2}\beta_2,
\end{align*}
for some $\alpha_1,\beta_1,\alpha_2,\beta_2\in A_n^*$. Let
$\nu(\alpha_1)$ be the number of factors $w_j$ ($1\le j\le n+2$)
that intervene in $\alpha_1$. Suppose that there exist integers
$1\leq l'_1,l'_2\leq 2^n$ such that
$\delta_n(e_{l'_1},\lambda_n)=e_{l'_2}$. Since
$\lambda_n=\alpha_1w_{n+1}\beta_1$ or
$\lambda_n=\alpha_1w_{n+2}\beta_1$, we have
$\mu_n(l_1,\alpha_1)=\sigma(2^n)$ and
$\mu_n(l'_1,\alpha_1)=\sigma(2^n)$. It follows that
$l_1=l'_1=\sigma\bigl((2^n-\nu(\alpha_1))\bmod 2^n\bigr)$. Similarly, one may
show that $l_2=l'_2$. We deduce that
$\eta(\lambda_n)=\begin{psmallmatrix}
e_{l_1}\\
e_{l_2}
\end{psmallmatrix}$. Also, in the same way, one shows that
$\eta(\rho_n)=\begin{psmallmatrix}
e_{l_3}\\
e_{l_4}
\end{psmallmatrix}$.
Similarly, if $v_{n}=CzD$ with $z\in\{w_n,w_{n+1},w_{n+2}\}$, then
there exist integers $1\leq l''_1,l''_2\leq 2^n$ such that
$\eta(v_n)=\begin{psmallmatrix}
e_{l''_1}\\
e_{l''_2}
\end{psmallmatrix}$. Then, by (\ref{eq:lamdan-rhon-rank-1}), we have
$\eta(\lambda_{n+1})=\begin{psmallmatrix}
e_{l_1}\\
e_{l_2}
\end{psmallmatrix}\begin{psmallmatrix}
e_{l''_1}\\
e_{l''_2}
\end{psmallmatrix}\begin{psmallmatrix}
e_{l_3}\\
e_{l_4}
\end{psmallmatrix}$ and $\eta(\rho_{n+1})=\begin{psmallmatrix}
e_{l_3}\\
e_{l_4}
\end{psmallmatrix}\begin{psmallmatrix}
e_{l''_1}\\
e_{l''_2}
\end{psmallmatrix}
\begin{psmallmatrix}
e_{l_1}\\
e_{l_2}
\end{psmallmatrix}$. Now, as
$\eta(\lambda_{n+1}),\eta(\rho_{n+1})\neq 0$, we have
$l_2=l_4=l''_1$ and $l_1=l_3=l''_2$. It follows that
$\eta(\lambda_{n+1})=\eta(\rho_{n+1})=\begin{psmallmatrix}
e_{l_1}\\
e_{l_2}
\end{psmallmatrix}$. Also, if a letter of $B_n\setminus A_n$
intervenes in $v_{n}$ or $v_n=1$, then
$\eta(\lambda_{n+1})=\eta(\rho_{n+1})$. Thus, no letter of
$B_n\setminus A_n$ intervenes in $v_{n}$, $v_n\neq 1$, and
\begin{equation}
\text{the letters $w_n,w_{n+1},w_{n+2}$ do not intervene in $v_n$.}
\label{eq:vn-without-side-transitions}
\end{equation}
Suppose that the number of occurrences of letters from the set
$\{w_1,\ldots,w_{n+2}\}$ in the factorization of $v_{n}$
in~(\ref{eq:factorization}) is at least $2^{n-1}$. Then, there
exists a factorization $v_n=v'_nv''_n$ such that the number of
occurrences of letters from the set $\{w_1,\ldots,w_{n+2}\}$ in the
factorization of $v'_{n}$ is at least $2^{n-1}$ and
strictly less than $2^{n}-1$. Thus, there exist integers
$q_1\in\{1,\ldots,2^{n-1}\}$ and $q_2\in \{2^{n-1}+1,\ldots,2^n\}$
such that $\mu_n(\sigma(q_1),v'_n)=\sigma(q_2)$ or
$\mu_n(\sigma(q_2),v'_n)=\sigma(q_1)$. Now, since
\begin{align*}
\kappa_n(w_{n}) &=\begin{pmatrix}
\sigma(2^{n-1})\\
\sigma(2^{n-1}+1)
\end{pmatrix}\
\text{and}\
\kappa_n(w_{n+1})=\kappa_n(w_{n+2})=
\begin{pmatrix}
\sigma(2^{n})\\
\sigma(1)
\end{pmatrix},
\end{align*}
at least one of the letters $w_n,w_{n+1},w_{n+2}$ intervenes in
$v'_n$, which contradicts \eqref{eq:vn-without-side-transitions}.
Hence, the number of occurrences of letters from the set
$\{w_1,\ldots,w_{n+2}\}$ in the factorization of $v_{n}$
in~(\ref{eq:factorization}) is strictly less than $2^{n-1}-1$.
Let $E=\{\sigma(1),\ldots,\sigma(2^{n-1})\}$ and
$F=\{\sigma(2^{n-1}+1),\ldots,\sigma(2^n)\}$. As
$\lambda_{n+1}=\lambda_nv_n\rho_n$ and
$\rho_{n+1}=\rho_nv_n\lambda_n$, from
\eqref{eq:lamdan-rhon-rank-1} we get that
$\delta_n(e_{l_2},v_n)=e_{l_3}$ and
$\delta_n(e_{l_4},v_n)=e_{l_1}$, and, thus, we have
\begin{equation}
\text{$\mu_n(l_2,v_n)=l_3$ and $\mu_n(l_4,v_n)=l_1$.}
\label{eq:vn}
\end{equation}
By~\eqref{eq:vn-without-side-transitions}, from the structure of the
automaton $\mathcal{A}''_n$ we deduce that $l_2,l_3\in G_1$ and
$l_4,l_1\in G_2$ for some sets $G_1,G_2\in\{E,F\}$. Now, as
$\eta(\lambda_{n+1})=\begin{psmallmatrix}
e_{l_1}\\
e_{l_4}
\end{psmallmatrix}$ and $\eta(\rho_{n+1})=\begin{psmallmatrix}
e_{l_3}\\
e_{l_2}
\end{psmallmatrix}$, with the same argument as above, we have
$l_4,l_3\in H_1$ and $l_2,l_1\in H_2$ for some sets
$H_1,H_2\in\{E,F\}$. This shows that $G_1=G_2$. Now, as $G_1=G_2$
and because of~\eqref{eq:vn-without-side-transitions}, we obtain
\begin{equation}
\sigma^{-1}(l_2),\sigma^{-1}(l_4)<\sigma^{-1}(l_3),\sigma^{-1}(l_1).
\label{eq:order}
\end{equation}
Since
\begin{equation}
\sigma^{-1}(l_1)-\sigma^{-1}(l_4)=\sigma^{-1}(l_3)-\sigma^{-1}(l_2)
\label{eq:equal-steps}
\end{equation}
and $\eta(\lambda_n)\neq \eta(\rho_n)$, we have
\begin{equation}
l_3\neq l_1
\label{eq:l3l1}
\end{equation}
because of~(\ref{eq:lamdan-rhon-rank-1}). By symmetry, we may assume
that $\sigma^{-1}(l_3)<\sigma^{-1}(l_1)$, so that
\begin{displaymath}
\sigma^{-1}(l_2)<\sigma^{-1}(l_4)<\sigma^{-1}(l_3)<\sigma^{-1}(l_1)
\end{displaymath}
by~(\ref{eq:equal-steps}).
Hence, by~(\ref{eq:vn}), since there is a
unique element of~$A_n^+$ of which none of $w_n,w_{n+1},w_{n+2}$ is
a factor that in $\mathcal{A}''_n$ maps $l_4$ to~$l_3$, we deduce
that there are factorizations $v_n=AB=B'C$ such that
$\mu_n(l_2,A)=l_4$, $\mu_n(l_4,B)=\mu_n(l_4,B')=l_3$,
$\mu_n(l_3,C)=l_1$ and there are factorizations
\begin{equation}
B=\tau_1\omega_1\tau_2\cdots\tau_{n_B}\omega_{n_B}\tau_{n_B+1}\
\text{and}\
B'=\tau'_1\omega_1\tau'_2\cdots\tau'_{n_B}\omega_{n_B}\tau'_{n_B+1}
\label{eq:B,B'}
\end{equation}
satisfying the following conditions:
\begin{enumerate}
\item $\tau_i,\tau'_i\in \langle x,y\rangle^1$, for all $1\leq i\leq
n_B+1$;
\item $\omega_i\in \{w_1,\ldots,w_{n+2}\}$, for all $1\leq i\leq n_B$.
\end{enumerate}
There exists an integer $1\leq n'$ such that $2^{n'}\leq
\sigma^{-1}(l_1) - \sigma^{-1}(l_2) < 2^{n'+1}$. Consider the set
\begin{displaymath}
\mathop{\mathrm{St}}\nolimits=\{\sigma(\sigma^{-1}(l_2)),\sigma(\sigma^{-1}(l_2)+1),\ldots,
\sigma(\sigma^{-1}(l_1)-1)\}.
\end{displaymath}
We claim that
\begin{equation}
\begin{split}
\parbox[t]{.8\textwidth}{there exists an integer $n'\leq n''$
such that the letter $w_{n''}$ only acts in~$\mathcal{A}''_n$
on one of the states of the set $\mathop{\mathrm{St}}\nolimits$.}
\end{split}
\label{eq:claim}
\end{equation}
By Lemma~\ref{A''}, we have
\begin{align*}
\kappa_n(w_{n'})=
&\left(
\begin{matrix}
\sigma(2^{n'-1}) &\sigma(2^{n'-1}+2^{n'}) &\ldots\\
\sigma(2^{n'-1}+1)&\sigma(2^{n'-1}+2^{n'}+1)&
\end{matrix}
\right.\\
&\qquad\qquad\qquad\qquad
\left.
\begin{matrix}
\sigma(2^{n'-1}+(2^{n-n'}-1)\times 2^{n'})\\
\sigma(2^{n'-1}+(2^{n-n'}-1)\times 2^{n'}+1)
\end{matrix}\right).
\end{align*}
Then, $w_{n'}$ acts on one or two of the states in the set $\mathop{\mathrm{St}}\nolimits$. If
$w_{n'}$ only acts on one of the states of~$\mathop{\mathrm{St}}\nolimits$, then the claim
\eqref{eq:claim} holds. Now, suppose that $w_{n'}$ acts on two of
the states of~$\mathop{\mathrm{St}}\nolimits$. Then, there exist nonnegative integers $k_1$
and $k_2$
such that $k_2\leq 2^{n-n'}-1$,
$\sigma^{-1}(l_2)+k_1=2^{n'-1}+k_2\times 2^{n'}$, and
$\sigma(\sigma^{-1}(l_2)+k_1),\sigma(\sigma^{-1}(l_2)+k_1+2^{n'})\in
\mathop{\mathrm{St}}\nolimits$. We deduce that $\sigma(2^{n'-1}+k_2\times 2^{n'}+2^{n'-1})\in
\mathop{\mathrm{St}}\nolimits$. We have $2^{n'-1}+k_2\times 2^{n'}+2^{n'-1}=(k_2+1)\times
2^{n'}$. There exist integers $n''$ and $k$ such that $n'< n''$,
$0\leq k\leq 2^{n-n''}-1$, and $2^{n''-1}+k\times
2^{n''}=(k_2+1)\times 2^{n'}$. Now, as
\begin{align*}
\kappa_n(w_{n''})=
&\left(
\begin{matrix}
\sigma(2^{n''-1}) &\sigma(2^{n''-1}+2^{n''}) &\ldots\\
\sigma(2^{n''-1}+1)&\sigma(2^{n''-1}+2^{n''}+1)&
\end{matrix}
\right.\\
&\qquad\qquad\qquad\qquad
\left.
\begin{matrix}
\sigma(2^{n''-1}+(2^{n-n''}-1)\times 2^{n''})\\
\sigma(2^{n''-1}+(2^{n-n''}-1)\times 2^{n''}+1)
\end{matrix}\right),
\end{align*}
$w_{n''}$ acts on the state $\sigma(2^{n'-1}+k_2\times
2^{n'}+2^{n'-1})$ which is in $\mathop{\mathrm{St}}\nolimits$. Since $2^{n'}\leq
\sigma^{-1}(l_1) - \sigma^{-1}(l_2) < 2^{n'+1}$ and $n'< n''$,
$w_{n''}$ may not act on the states of $\mathop{\mathrm{St}}\nolimits$ twice. Then, the claim
\eqref{eq:claim} holds.
Now, using \eqref{eq:claim}, we prove that $l_2=l_4$ and
$l_1=l_3$, which is in contradiction with (\ref{eq:l3l1}).
As $v_n=AB=B'C$ and by (\ref{eq:vn}),
(\ref{eq:B,B'}) and \eqref{eq:claim}, the letter $w_{n''}$
intervenes once in $B$ and $B'$ and intervenes in neither $A$
nor~$C$. Now, as the letter $w_{n''}$ intervenes once in each of the
words $B$ and $B'$, by (\ref{eq:B,B'}) and \eqref{eq:claim}, there
exists an integer $l$ such that $1\leq l\leq n_B$ and $\omega_{l}=w_{n''}$.
If $A\not\in \langle x,y\rangle^1$, as $\mu_n(l_2,A)=l_4$, then
there is a factorization
\begin{equation}
A=\tau''_1\omega'_1\tau''_2\cdots\tau''_{n_A}\omega'_{n_A}\tau''_{n_A+1} \label{eq:A}
\end{equation}
satisfying the following conditions:
\begin{enumerate}
\item $0<n_A$;
\item $\tau''_i\in \langle x,y\rangle^1$, for all $1\leq i\leq
n_A+1$;
\item $\omega'_i\in \{w_1,\ldots,w_{n+2}\}$, for all $1\leq i\leq n_A$.
\end{enumerate} Since $v_n=AB=B'C$ and $\mu_n(l_2,v_n)=l_3$, the letter $w_{n''}$ acts on the states $\sigma(\sigma^{-1}(l_2)+n_A+l)$ and $\sigma(\sigma^{-1}(l_2)+l)$, which contradicts \eqref{eq:claim}.
It follows that $A\in \langle x,y\rangle^1$. Similarly, we have $C\in \langle x,y\rangle^1$.
Now, as $\mu_n(l_2,A)=l_4$, $\mu_n(l_3,C)=l_1$ and
$\mu_n(i,z)=i$, for every integer $1\leq i\leq 2^n$ and every element
$z\in\{x,y\}$, we have $l_2=l_4$ and $l_1=l_3$.
The above shows that Case \ref{I2} does not hold. \end{proof}
\section{The pseudovariety \pv{TM}} \label{sec:pvs-pvtm}
It is known that a finite group $G$ is in $\mathsf{TM}$ if and only if $G$ is an extension of a nilpotent group by a 2-group (see \cite{Bo-Ma, Sir}) and a finite semigroup $S$ is in $\mathsf{TM}$ if and only if the principal factors of $S$ are either null semigroups or inverse semigroups over groups that are extensions of nilpotent groups by 2-groups \cite[Corollary 10]{Jes-Ril}.
For the pseudovarieties $\mathsf{TM}$, $\mathsf{MN}^{(2)}$ and $\mathsf{MN}^{(3)}$, we have the following result.
\begin{thm} \label{PS TM EUNNG} The following statements hold: \begin{enumerate} \item $\mathsf{TM}=\llbracket \phi^{\omega}(x)=\phi^{\omega}(y) \rrbracket$ where $\phi$ is the continuous endomorphism of the free profinite semigroup on $\{x,y\}$ such that $\phi(x)=xy$ and $\phi(y)=yx$. \item $\mathsf{MN}^{(i)}=\llbracket \psi(\phi^{\omega}(x))=\psi(\phi^{\omega}(y)) \rrbracket$ where $\phi$ is the continuous endomorphism of the free profinite semigroup on $\{x,y,z,t\}$ such that $ \phi(x)=xzytyzx,\phi(y)=yzxtxzy,\phi(z)=z,\phi(t)=t$ and $\psi$ runs over all homomorphisms from $\{x, y, z, t\}^+$ to $\{a_1,\ldots,a_i\}^+$ ($i=2,3$). \end{enumerate} \end{thm}
Theorem~\ref{PS TM EUNNG}(1) and Theorem~\ref{PS TM EUNNG}(2) follow easily from the definition of the pseudovarieties $\mathsf{TM}$ and $\mathsf{MN}^{(i)}$ ($i=2,3$), respectively. Theorem~\ref{PS TM EUNNG}(1) has already been observed by the first author \cite[Corollary 6.4]{Alm3}.
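The endomorphism in Theorem~\ref{PS TM EUNNG}(1) is the Thue--Morse substitution $x\mapsto xy$, $y\mapsto yx$. As a purely illustrative aside (ours, not part of the proofs), a few lines of Python iterate it on free words, showing how the finite approximations $\phi^n(x)$ and $\phi^n(y)$ grow:

```python
# Iterate the endomorphism phi(x) = xy, phi(y) = yx (the Thue-Morse
# substitution) on free words over {x, y}.
def phi(word):
    sub = {"x": "xy", "y": "yx"}
    return "".join(sub[c] for c in word)

def iterate(word, n):
    for _ in range(n):
        word = phi(word)
    return word

# The length of phi^n(x) doubles at each step.
print(iterate("x", 3))  # xyyxyxxy
print(iterate("y", 3))  # yxxyxyyx
```

The two iterates agree except for the exchange of $x$ and $y$, which is the combinatorial source of the pseudoidentity $\phi^{\omega}(x)=\phi^{\omega}(y)$.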
In \cite{Hig-Mar}, the semigroup $N_1$ is defined as the $\theta$-disjoint union of the Brandt semigroup $\mathcal{B}_4(\{1\})$ with the null semigroup $\{w,v,\theta\}$, \begin{eqnarray} \label{ex1}
N_1 = \mathcal{B}_4(\{1\}) \cup^{\Gamma_{N_1}} \{w,v,\theta\}, \end{eqnarray} where $\Gamma_{N_1}(w)=(2,3,\theta)(1,4,\theta)$ and $\Gamma_{N_1}(v)=(1,3,\theta)(2,4,\theta)$. Note that $\langle s_1,s_2,s_3\rangle \in\mathsf{MN}$, for all $s_1,s_2, s_3\in N_1$, and $N_1\not\in\mathsf{MN}$.
Note also that there exists a finite semigroup $S$ such that for all $s_1, s_2\in S$, $\langle s_1,s_2\rangle \in\mathsf{NMN_{4}}$ but $S\not\in\mathsf{NMN_{4}}$. This holds, for example, for the semigroup $F_{12}$.
\begin{cor}\label{PS MN Rank}
The ranks of the pseudovarieties $\mathsf{MN}$, $\mathsf{PE}$,
$\mathsf{NMN_{4}}$ and $\mathsf{TM}$ are respectively $4$, $2$, $3$ and
$2$. \end{cor}
\begin{proof}
By Theorems~\ref{PS PE} and \ref{PS TM EUNNG}(1), the ranks of the
pseudovarieties $\mathsf{PE}$ and $\mathsf{TM}$ are at most~$2$.
Since the semigroup $F_7$ is not in the
pseudovariety $\mathsf{PE}$ and all subsemigroups of $F_7$ generated
by an element of $F_7$ are in the pseudovariety $\mathsf{PE}$, the
rank of the pseudovariety $\mathsf{PE}$ is $2$.
Let $S=\{a,b\}$ be a nontrivial right zero semigroup. It is easy to verify that $S\not\in \mathsf{TM}$. Since the subsemigroups $\{a\}$ and $\{b\}$ are in $\mathsf{TM}$,
the rank of the pseudovariety $\mathsf{TM}$ is~$2$.
By Theorem~\ref{PS MN} and Lemma~\ref{NMN_{4}}, the ranks of the
pseudovarieties $\mathsf{MN}$ and $\mathsf{NMN_{4}}$ are bounded,
respectively, by $4$ and $3$. The properties of the semigroups $N_1$
and $F_{12}$ stated above imply that the ranks of the
pseudovarieties $\mathsf{MN}$ and $\mathsf{NMN_{4}}$ are, respectively,
$4$ and $3$. \end{proof}
\section{Comparison of the above pseudovarieties} \label{sec:comp-above-pseud}
In~\cite{Jes-Sha}, the upper non-nilpotent graph ${\mathcal N}_{S}$ of a finite semigroup $S$ is investigated. The vertices of ${\mathcal
N}_{S}$ are the elements of $S$ and there is an edge between $x$ and $y$ if the subsemigroup generated by $x$ and $y$ is not nilpotent. Thus, a finite semigroup $S$ belongs to $\mathsf{MN}^{(2)}$ if and only if the graph ${\mathcal N}_S$ has no edges. In \cite{Sha}, the collection of all such finite semigroups is denoted $\mathsf{EUNNG}$. By \cite[Theorem 2.6]{Jes-Sha}, we have $\mathsf{MN}^{(2)} \subseteq \mathsf{PE}$.
We proceed to give an example of a semigroup in $\mathsf{PE}\setminus \mathsf{MN}^{(2)}$.
Let $N_2$ be the transition semigroup of the following automaton:
\begin{center}
\begin{tikzpicture}[x=4mm,y=4mm,thick,->,>=stealth',
shorten >=1pt]
\node [state] (1) at (0,10) {1};
\node [state] (5) at (5,10) {5};
\node [state] (4) at (10,10) {4};
\node [state] (10) at (15,5) {10};
\node [state] (8) at (0,5) {8};
\node [state] (2) at (0,0) {2};
\node [state] (7) at (5,0) {7};
\node [state] (3) at (10,0) {3};
\node [state] (6) at (-5,5) {6};
\node [state] (9) at (10,5) {9};
\path
(8) edge[right] node {$\alpha$} (2)
(2) edge[below] node {$\alpha$} (6)
(6) edge[below] node {$\alpha$} (1)
(1) edge[below] node {$\alpha$} (5)
(3) edge[below] node {$\alpha$} (7)
(9) edge[right] node {$\alpha$} (4)
(1) edge[right] node {$\beta$} (8)
(7) edge[below] node {$\beta$} (2)
(5) edge[below] node {$\beta$} (4)
(4) edge[below] node {$\beta$} (10)
(10) edge[right] node {$\beta$} (3)
(3) edge[right] node {$\beta$} (9);
\end{tikzpicture}
\end{center}
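It may help to see $N_2$ concretely. The following Python sketch (our illustration, mirroring the GAP transformations given in the appendix; state $0$ stands for the added sink $\theta$) generates the transition semigroup by closing the two letters under composition and checks that the zero map belongs to it:

```python
# Transition maps of the automaton on states 1..10; index 0 is the
# sink theta, and t[i] is the image of state i.
alpha = (0, 5, 6, 7, 0, 0, 1, 0, 2, 4, 0)
beta  = (0, 8, 0, 9, 10, 4, 0, 2, 0, 0, 3)

def compose(s, t):
    # s*t: apply s first, then t (transformations acting on the right)
    return tuple(t[s[i]] for i in range(len(s)))

# Generate N2 = <alpha, beta> by right-multiplying with the generators
# until no new elements appear.
gens = [alpha, beta]
N2 = set(gens)
frontier = list(gens)
while frontier:
    new = []
    for s in frontier:
        for g in gens:
            p = compose(s, g)
            if p not in N2:
                N2.add(p)
                new.append(p)
    frontier = new

zero = tuple(0 for _ in range(11))
print(len(N2), zero in N2)
```

For instance, the zero map already arises as $\alpha^5$, since the images of $\alpha^k$ shrink at each step.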
It amounts to a routine but lengthy calculation (see the appendix)
to verify that the semigroup $N_2$ is a nilpotent extension of its
unique 0-minimal ideal $M=\mathcal{B}_{10}(\{1\})$.
Hence, every principal factor of $N_2$ except $M$ is null. Also, since the rank of the nonzero elements of $M$ is one, $N_2$ has no element with a nontrivial cycle. It follows that \begin{equation}
\begin{split}
\parbox[t]{.8\textwidth}{$N_2$ does not have any element
$\gamma$ such that $(a_1)(a_2)\subseteq\gamma$, for some
distinct integers $a_1,a_2\in \{1,\ldots,10\}$.}
\end{split}
\label{eq:cycle} \end{equation} Suppose that $a,b\in N_2$ and $c\in N_2^1$. Since every principal factor of $N_2$ is either a null semigroup or an inverse semigroup over a trivial group, the semigroup $N_2$ is in $\mathsf{TM}$ by \cite[Theorem 9]{Jes-Ril}. Thus, if $c=1$ then there exists an integer $n$ such that $\lambda_{n+2}(a,b,1,1,c,\ldots,c^n)=\rho_{n+2}(a,b,1,1,c,\ldots,c^n)$. Now, suppose that $c\neq 1$. By (\ref{eq:cycle}), there exists an integer $n_c$ such that $c^{n_c}\in\{\overline{\theta},(1),\ldots,(10)\}$. If $c^{n_c}=\overline{\theta}$, then clearly we have $\lambda_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})=\rho_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})=\overline{\theta}$. Now, suppose that $c^{n_c}=(v)$ and $[v', v, v''] \sqsubseteq c$ for some integers $v,v',v''\in \{1,\ldots,10\}$. As in $N_2$ no element has a nontrivial cycle, we get $v=v'=v''$. Then, we obtain $(v)\subseteq c$ and, thus, the equality $c^{n'}=(v)$ holds for every integer $n'\geq n_c$. Since $c^{n_c}$ is an idempotent and the Rees quotient $N_2/M$ is nilpotent, we must have $c^{n_c}\in M$. It follows that $\lambda_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c}), \rho_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})\in M$. If \begin{equation}
\label{eq:N2-PE}
\lambda_{n_c+3}(a,b,1,1,c,\ldots,c^{n_c},c^{n_c+1})
\neq \overline{\theta}\neq
\rho_{n_c+3}(a,b,1,1,c,\ldots,c^{n_c},c^{n_c+1}), \end{equation} then $\lambda_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c}),\rho_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})\neq\overline{\theta}$ and, thus, there exist integers $i_1, i_2, j_1, j_2\in \{1,\ldots,10\}$ such that
$\lambda_{n_c+2}=
\begin{psmallmatrix}
i_1\\
i_2
\end{psmallmatrix} $ and
$\rho_{n_c+2}=
\begin{psmallmatrix}
j_1\\
j_2
\end{psmallmatrix} $.
Now, as $c^{n_c+1}=(v)$, we have $i_1=i_2=j_1=j_2=v$. It follows that $$\lambda_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})=\rho_{n_c+2}(a,b,1,1,c,\ldots,c^{n_c})=(v).$$ On the other hand, if~\eqref{eq:N2-PE} fails, then we conclude that \begin{displaymath}
\lambda_{n_c+4}(a,b,1,1,c,\ldots,c^{n_c+2})
=\rho_{n_c+4}(a,b,1,1,c,\ldots,c^{n_c+2})
=\overline{\theta}. \end{displaymath} Therefore, the semigroup $N_2$ is in $\mathsf{PE}$.
Now, let $x= \begin{psmallmatrix}
4 \\
1 \end{psmallmatrix} $, $y= \begin{psmallmatrix}
2 \\
3 \end{psmallmatrix} $, $w_1=\beta\alpha= \begin{psmallmatrix}
1 & 3 & 10 & 7\\
2 & 4 & 7 & 6 \end{psmallmatrix} $, and $w_2=\alpha\beta= \begin{psmallmatrix}
1 & 3 & 6 & 9\\
4 & 2 & 8 & 10 \end{psmallmatrix} $.
Since \begin{displaymath}
\lambda_k(x,y,w_1,w_2,w_1,w_2,\ldots)\neq \rho_k(x,y,w_1,w_2,w_1,w_2,\ldots) \end{displaymath} for every $1\leq k$, by \cite[Corollary 12]{Jes-Ril} the semigroup $N_2$ is not nilpotent. Now, as $N_2$ is two-generated, we have $N_2\in\mathsf{PE}\setminus\mathsf{MN}^{(2)}$. It amounts to a routine but very long calculation (see the appendix) to verify that the semigroup $N_2$ satisfies the identity $\lambda_{4}(x,y,1,w_1,w_2,w_3)=\rho_{4}(x,y,1,w_1,w_2,w_3)$, for $x,y\in N_2$ and $w_1,w_2,w_3\in N_2^1$. Hence, we have $N_2 \in \mathsf{NT}$.
Figure~\ref{fig:interval-MN-BG} represents the strict inclusion relationships between pseudovarieties we have considered so far. \begin{figure}
\caption{Two intervals of a poset of pseudovarieties}
\label{fig:interval-MN-BG}
\end{figure}
We have $F_{12}\in\mathsf{MN}^{(2)}\setminus \mathsf{NMN_{4}}$ and $N_2\in\mathsf{NT}\setminus\mathsf{MN}^{(2)}$.
Also, the class $(\mathsf{MN}^{(2)}\cap\mathsf{NMN_{4}})\setminus \mathsf{NT}$ is not empty. An example of a semigroup in this class is the subsemigroup $N_3$ of the full transformation semigroup on the set $\{1,2,3,4\} \cup \{\theta\}$ given by the union of the semigroup $N_1$ with the transformation $q=(2,1,\theta)(4,3,\theta)$. It is proved in \cite{Jes-Sha} that $N_3\in\mathsf{MN}^{(2)}\setminus \mathsf{NT}$. It is again routine (see the appendix) to verify that $N_3$ satisfies the identity $\lambda_2(aw_2,w_2a,w_1,w_2)=\rho_2(aw_2,w_2a,w_1,w_2)$, for $a\in N_3$ and $w_1,w_2\in N_3^1$. Now, as $N_3\in \mathsf{BG_{nil}}$, by Theorem~\ref{PS NTL4} the semigroup $N_3$ is $NMN_{4}$.
The semigroup $F_{12}$ is not nilpotent and it is generated by 3 elements. Now, as $N_1\in \mathsf{MN}^{(3)}\setminus \mathsf{MN}$ and $F_{12}\in \mathsf{MN}^{(2)}\setminus \mathsf{MN}^{(3)}$, we have \begin{displaymath}
\mathsf{MN}\subsetneqq\mathsf{MN}^{(3)}\subsetneqq\mathsf{MN}^{(2)}. \end{displaymath} By Lemma \ref{NMN_{4}}, we have $\mathsf{MN}^{(3)}\subseteq \mathsf{NMN_{4}}$. Since $N_2 \not\in \mathsf{MN}^{(2)}$, $\mathsf{MN}^{(3)}$ is strictly contained in $\mathsf{NMN_{4}}$. Similarly, we have $N_3\in \mathsf{MN}^{(3)}$. Hence, the pseudovarieties $\mathsf{MN}^{(3)}$ and $\mathsf{NT}$ are incomparable.
Higgins and Margolis \cite{Hig-Mar} showed that
$N_1\in\langle\mathsf{Inv}\rangle\cap\mathsf{A}$ (in fact, they showed that $N_1\in\langle\mathsf{Inv}\rangle\cap\mathsf{A}\setminus \langle\mathsf{Inv}\cap\mathsf{A}\rangle$).
The semigroup $N_1$ is not in the pseudovariety $\mathsf{MN}\cap\mathsf{A}$. We claim that the class $\mathsf{MN}\cap\mathsf{A}\setminus \langle\mathsf{Inv}\rangle$ is not empty, and we conclude that the pseudovarieties $\langle\mathsf{Inv}\rangle\cap\mathsf{A}$ and $\mathsf{MN}\cap \mathsf{A}$ are incomparable.
Since the pseudovariety generated by finite inverse semigroups is precisely the pseudovariety of finite semigroups whose idempotents commute \cite{Ash}, we need to show that the class $\mathsf{MN}\cap\mathsf{A}\setminus\llbracket x^{\omega}y^{\omega}=y^{\omega}x^{\omega}\rrbracket$ is not empty. Let $N_4=\{e, f, ef, 0\}$ be a semigroup where $e^2 = e, f^2 = f$ and $fe = 0\ne ef$. Since $N_4$ is $\mathcal{J}$-trivial, we have $N_4\in \mathsf{MN}\cap\mathsf{A}$. Now, as $ef \neq 0$ and $fe=0$, we have $N_4\in \mathsf{MN}\cap\mathsf{A}\setminus\llbracket x^{\omega}y^{\omega}=y^{\omega}x^{\omega}\rrbracket$.
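The defining relations of $N_4$ can be checked mechanically. Below is a small Python sketch (ours; the string labels are just names for the four elements) encoding the multiplication table of $N_4=\{e,f,ef,0\}$, verifying associativity and that the idempotents $e$ and $f$ do not commute:

```python
# Multiplication table of N4 = {e, f, ef, 0}, where e^2 = e, f^2 = f,
# fe = 0 and ef != 0.
E, F, EF, Z = "e", "f", "ef", "0"
elems = [E, F, EF, Z]
mul = {
    (E, E): E,   (E, F): EF,  (E, EF): EF, (E, Z): Z,
    (F, E): Z,   (F, F): F,   (F, EF): Z,  (F, Z): Z,
    (EF, E): Z,  (EF, F): EF, (EF, EF): Z, (EF, Z): Z,
    (Z, E): Z,   (Z, F): Z,   (Z, EF): Z,  (Z, Z): Z,
}

# The table is associative, so it defines a semigroup.
assert all(mul[mul[a, b], c] == mul[a, mul[b, c]]
           for a in elems for b in elems for c in elems)

# The idempotents e and f do not commute: ef != fe = 0.
print(mul[E, F], mul[F, E])  # ef 0
```

The same table can be realized by the $2\times 2$ matrices $e=\begin{psmallmatrix}1&0\\0&0\end{psmallmatrix}$ and $f=\begin{psmallmatrix}0&1\\0&1\end{psmallmatrix}$, which gives associativity for free.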
\begin{thm}\label{A MN}
Figure~\ref{fig:aperiodic}
represents all relations of equality and strict containment between
the pseudovarieties represented in it:
\begin{figure}
\caption{A poset of aperiodic pseudovarieties}
\label{fig:aperiodic}
\end{figure} \end{thm}
\begin{proof}
Suppose that $T\in(\mathsf{Inv}\cap\mathsf{A}) \setminus(\mathsf{MN}\cap\mathsf{A})$. Since $T$ is finite, $T$ has a principal series $$T= T_1 \supsetneqq T_2 \supsetneqq \cdots \supsetneqq T_{t} \supsetneqq T_{t+1} = \emptyset .$$ Each principal factor $T_i / T_{i+1}$ $(1 \leq i\leq t)$ of $T$ is aperiodic and either completely $0$-simple or completely simple.
Since $T\not\in \mathsf{MN}$, by Lemma~\ref{finite-nilpotentW1W2} there exist a positive integer $h$, distinct elements $t_1,\, t_2 \in T$ and elements $w_1, w_2\in T$ such that \begin{displaymath}
t_1 =\lambda_{h}(t_1, t_2, w_1, w_2,w_1, w_2,\ldots)
\mbox{ and }
t_2 =\rho_{h}(t_1, t_2, w_1, w_2,w_1, w_2,\ldots). \end{displaymath} Suppose that $t_1 \in T_i \setminus T_{i+1}$, for some $1\leq i\leq t$. Because $T_i$ and $T_{i+1}$ are ideals of~$T$, the above equalities imply that $t_2 \in T_i \setminus T_{i+1}$ and $w_{1}, w_{2} \in T \setminus T_{i+1}$. Furthermore, one obtains that $T_i / T_{i+1}$ is an aperiodic completely $0$-simple semigroup.
If $T_i \setminus T_{i+1}$ is a trivial group, then we have a contradiction with the assumption that $t_1$ and $t_2$ are distinct.
Now, as $T_i/T_{i+1}$ is an aperiodic completely $0$-simple inverse semigroup, it is easy to show that it is of the form $T_{i}/T_{i+1}=\mathcal{B}_n(\{1\})$ for some $n$.
Since $t_{1},t_{2}\in \mathcal{B}_n(\{1\})$, there exist $1 \leq n_1,n_2,n_3,n_4 \leq n$ such that $t_1 =(1; n_1, n_2)$, $\, t_2 = (1; n_3, n_4)$. Note that, \begin{displaymath}
[n_2,n_3;n_4,n_1] \sqsubseteq \Gamma(w_1),\
[n_2, n_1;n_4, n_3] \sqsubseteq \Gamma(w_2) \end{displaymath} where $\Gamma$ is an ${\mathcal L}$-representation\ of $T/T_{i+1}$. Since $t_1 \neq t_2$ and $\Gamma(w_1)$ restricted to $\{ 1, \ldots, n\} \setminus \Gamma(w_1)^{-1}(\theta)$ is injective, we obtain $n_1\neq n_3$. It follows that the cycle $(n_1,n_3)$ is contained in $\Gamma(w_2^{-1}w_1)$. This contradicts the assumption that $T$ is aperiodic. Hence, $\mathsf{Inv}\cap\mathsf{A}$ is contained in $\mathsf{MN}$. The semigroup $N_4$ is in the subclass $\mathsf{MN}\cap\mathsf{A}\setminus \langle\mathsf{Inv}\cap\mathsf{A}\rangle$. Therefore, $\langle\mathsf{Inv}\cap\mathsf{A}\rangle$ is strictly contained in $\mathsf{MN}\cap\mathsf{A}$.
A finite aperiodic semigroup $S$ is in $\mathsf{TM}$ (respectively $\mathsf{PE}$) if and only if the principal factors of $S$ are either null semigroups or inverse semigroups \cite[Corollaries 10 and 8]{Jes-Ril}. Therefore, we have $\mathsf{PE}\cap\mathsf{A}=\mathsf{TM}\cap\mathsf{A}=\mathsf{BG}\cap\mathsf{A}$.
Since $\mathsf{MN}\subsetneqq\mathsf{MN}^{(3)}\subsetneqq\mathsf{MN}^{(2)}$, $\mathsf{MN}^{(3)}\subsetneqq \mathsf{NMN_{4}}$ and the semigroups used earlier to distinguish the corresponding pseudovarieties are all aperiodic, we have $\mathsf{MN}\cap\mathsf{A}\subsetneqq\mathsf{MN}^{(3)}\cap\mathsf{A}\subsetneqq\mathsf{MN}^{(2)}\cap\mathsf{A}$ and $\mathsf{MN}^{(3)}\cap\mathsf{A}\subsetneqq\mathsf{NMN_{4}}\cap\mathsf{A}$.
With the help of a computer (see the appendix), one can check that the semigroup $N_1$ satisfies the identity $\lambda_3(x,y,1,w_1,w_2)=\rho_3(x,y,1,w_1,w_2)$, for $x,y\in N_1$ and $w_1,w_2\in N_1^1$. Hence, we have $N_1 \in \mathsf{NT}$. The semigroup $N_1$ is aperiodic, but it is not in $\mathsf{MN}$, the semigroup $N_3$ is aperiodic and in $\mathsf{NMN_{4}}$ but it is not in $\mathsf{NT}$, and the semigroup $F_{12}$ is aperiodic and in $\mathsf{PE}$ but is not in $\mathsf{NMN_{4}}$. Hence, $\mathsf{MN}\cap\mathsf{A}$ is strictly contained in $\mathsf{NT}\cap\mathsf{A}$, $\mathsf{NT}\cap\mathsf{A}$ is strictly contained in $\mathsf{NMN_{4}}\cap\mathsf{A}$, and $\mathsf{NMN_{4}}\cap\mathsf{A}$ is strictly contained in $\mathsf{PE}\cap\mathsf{A}$. \end{proof}
\begin{opm} Are the pseudovarieties $\mathsf{MN}^{(2)}$ and $\mathsf{MN}^{(3)}$ finitely based? \end{opm}
\textit{Acknowledgments.}
The authors were partially supported by CMUP, which is financed by national funds through FCT -- Fundação para a Ciência e a Tecnologia, I.P., under the project UIDB/00144/2020. The second author also acknowledges FCT support through a contract based on the “Lei do Emprego Científico” (DL 57/2016).
\section*{Appendix}
For the calculations in GAP mentioned in the paper, we start by introducing a couple of functions which serve to test in a transformation semigroup $S$ with zero, whether or not the equality \begin{displaymath}
\lambda_n(x,y,w_1,w_2,\ldots,w_n)=\rho_n(x,y,w_1,w_2,\ldots,w_n) \end{displaymath} holds, where $x,y\in S$ are arbitrary, and each $w_i$ satisfies the following conditions determined by a \emph{directive vector} $v=(v_1,\ldots,v_n)$: \begin{itemize} \item $w_i=1$ if $v_i=1$; \item $w_i\in S$ is arbitrary if $v_i=\text{"+"}$ \item $w_i\in S^1$ is arbitrary if $v_i=\text{"*"}$. \end{itemize} There is a global control variable ``res'' (which needs to be initialized) whose value is either true if there is no counterexample to the identity, or the first counterexample to be found otherwise. The function ``Malcev'' does the calculation for a given pair $x,y$, calling itself recursively. The function ``check'' calls ``Malcev'' for all possible pairs $x,y$.
\begin{lstlisting}[language=GAP] res := true; universe := function(c,S)
if c = 1 then
return [IdentityTransformation];
elif c = "+" then
return S;
else
return Union(S,[IdentityTransformation]);
fi; end; Malcev := function(i,x,y,v,S,zero,ce)
local z;
if res <> true then
return;
fi;
if i <= Size(v) then
if x <> zero and y <> zero and x <> y then
for z in universe(v[i],S) do
Malcev(i+1,x*z*y,y*z*x,v,S,zero,Concatenation(ce,[z]));
od;
fi;
else
if x <> y then
res := ce;
fi;
fi; end; check := function(v,S,zero)
local x, y;
for x in S do
for y in S do
Malcev(1,x,y,v,S,zero,[x,y]);
od;
od;
return res; end; \end{lstlisting}
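As a cross-check of the recursion that ``Malcev'' implements, here is a small Python analogue (ours, not part of the GAP session) that builds the free Mal'cev words via $\lambda_{n+1}=\lambda_n w_{n+1}\rho_n$ and $\rho_{n+1}=\rho_n w_{n+1}\lambda_n$, with the empty string playing the role of the identity $1$:

```python
# Build the Mal'cev words lambda_n and rho_n over free words (strings),
# following the same recursion as the GAP function "Malcev":
#   lambda_{n+1} = lambda_n w_{n+1} rho_n,
#   rho_{n+1}    = rho_n  w_{n+1} lambda_n.
def malcev_pair(x, y, ws):
    lam, rho = x, y
    for w in ws:  # "" stands for the identity 1
        lam, rho = lam + w + rho, rho + w + lam
    return lam, rho

# lambda_2(x, y, 1, z) = xyzyx and rho_2(x, y, 1, z) = yxzxy:
print(malcev_pair("x", "y", ["", "z"]))  # ('xyzyx', 'yxzxy')
```

With the directive vector $[1,1,\text{``*''}]$ of the first test below, the words produced are exactly $xyyx\,z\,yxxy$ and $yxxy\,z\,xyyx$.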
\noindent 1. Testing the identity \begin{displaymath}
xyyx\,z\,yxxy=yxxy\,z\,xyyx \end{displaymath} in the semigroup $F_{12}$, where $z$ is allowed to take the value 1: \begin{lstlisting}[language=GAP] a := Transformation([1,4,2,4]);; b := Transformation([2,4,1,4]);; c := Transformation([4,3,4,4]);; F12 := Semigroup(a,b,c);; zero := Transformation([4,4,4,4]);; check([1,1,"*"],F12,zero); \end{lstlisting}
\noindent 2. Determining the triples $(x,y,z)\in N\times N\times N^1$ such that $xy\,z\,yx\ne0$ and $yx\,z\,xy\ne0$: \begin{lstlisting}[language=GAP] a := Transformation([2,7,4,7,7,7,7]);; b := Transformation([7,7,7,5,7,1,7]);; c := Transformation([7,3,7,7,6,7,7]);; d := Transformation([7,6,7,7,3,7,7]);; N := Semigroup(a,b,c,d);; zero := Transformation([7,7,7,7,7,7,7]);; for x in N do
for y in N do
xy := x*y;
yx := y*x;
for z in Union(N,[One(a)]) do
if xy*z*yx <> zero and yx*z*xy <> zero then
Print("x=",x," y=",y," z=",z,"\n");
fi;
od;
od; od; \end{lstlisting}
\noindent 3. Testing the identity \begin{displaymath}
xyzyx\,t\,yxzxy = yxzxy\,t\,xyzyx \end{displaymath} in the semigroup $N_1$, where $z$ and $t$ are allowed to take the value 1: \begin{lstlisting}[language=GAP] a := Transformation([3,4,5,5,5]);; b := Transformation([4,3,5,5,5]);; c := Transformation([5,5,1,5,5]);; d := Transformation([5,5,5,2,5]);; N1 := Semigroup(a,b,c,d);; zero := Transformation([5,5,5,5,5]);; check([1,"*","*"],N1,zero); \end{lstlisting}
\noindent 4. Testing the identity \begin{displaymath}
xyzyx\,t\,yxzxy\;s\;yxzxy\,t\,xyzyx
=yxzxy\,t\,xyzyx\;s\;xyzyx\,t\,yxzxy \end{displaymath} in the semigroup $N_2$, where $z,t$ and $s$ are allowed to take the value 1: \begin{lstlisting}[language=GAP] a := Transformation([5,6,7,11,11,1,11,2,4,11,11]);; b := Transformation([8,11,9,10,4,11,2,11,11,3,11]);; N2 := Semigroup(a,b);; zero := Transformation([11,11,11,11,11,11,11,11,11,11,11]);; check([1,"*","*","*"],N2,zero); \end{lstlisting}
\noindent 5. Testing the identity \begin{displaymath} xtztx\,t\,txzxt = txzxt\,t\,xtztx \end{displaymath} in the semigroup $N_3$, where $z$ and $t$ are allowed to take the value 1 (although, for the latter, the value 1 always verifies the identity), does not fit in our general recursive scheme so we use an ad hoc program: \begin{lstlisting}[language=GAP] a := Transformation([3,4,5,5,5]);; b := Transformation([4,3,5,5,5]);; c := Transformation([5,5,1,5,5]);; d := Transformation([5,5,5,2,5]);; e := Transformation([5,1,5,3,5]);; N3 := Semigroup(a,b,c,d,e);; zero := Transformation([5,5,5,5,5]);; control := true;; for x in N3 do
for t in N3 do
xt := x*t;
tx := t*x;
if xt <> zero and tx <> zero and xt <> tx then
for z in Union(N3,[One(a)]) do
xtztx := xt*z*tx;
txzxt := tx*z*xt;
if xtztx*t*txzxt <> txzxt*t*xtztx then
Print("the identity fails with:\n");
Print("x=",x," z=",z," t=",t,"\n");
control := false;
break;
fi;
od;
fi;
od; od; if control = true then
Print("\nthe identity holds\n"); fi; \end{lstlisting}
\end{document}
\begin{document}
\newcommand{\commA}[2][]{\todo[#1,color=yellow]{A: #2}} \newcommand{\commI}[2][]{\todo[#1,color=green!60]{I: #2}}
\newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{example}[theorem]{Example} \newtheorem{algol}{Algorithm} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{question}[theorem]{Question} \newtheorem{problem}[theorem]{Problem} \newtheorem{remark}[theorem]{Remark} \newtheorem{conjecture}[theorem]{Conjecture}
\def\vskip5pt\hrule\vskip5pt{\vskip5pt\hrule\vskip5pt}
\def\Cmt#1{\underline{{\sl Comments:}} {\it{#1}}}
\newcommand{\Modp}[1]{ \begin{color}{blue}
#1\end{color}}
\def\bl#1{\begin{color}{blue}#1\end{color}}
\def\red#1{\begin{color}{red}#1\end{color}}
\def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}}
\def\mathbb{C}{\mathbb{C}} \def\mathbb{F}{\mathbb{F}} \def\mathbb{K}{\mathbb{K}} \def\mathbb{L}{\mathbb{L}} \def\mathbb{G}{\mathbb{G}} \def\mathbb{Z}{\mathbb{Z}} \def\mathbb{R}{\mathbb{R}} \def\mathbb{Q}{\mathbb{Q}} \def\mathbb{N}{\mathbb{N}} \def\textsf{M}{\textsf{M}} \def\mathbb{U}{\mathbb{U}} \def\mathbb{P}{\mathbb{P}} \def\mathbb{A}{\mathbb{A}} \def\mathfrak{p}{\mathfrak{p}} \def\mathfrak{n}{\mathfrak{n}} \def\mathcal{X}{\mathcal{X}} \def\textrm{\bf x}{\textrm{\bf x}} \def\textrm{\bf w}{\textrm{\bf w}} \def\textrm{\bf a}{\textrm{\bf a}} \def\textrm{\bf k}{\textrm{\bf k}} \def\textrm{\bf e}{\textrm{\bf e}} \def\overline{\Q}{\overline{\mathbb{Q}}} \def \Kab{\mathbb{K}^{\mathrm{ab}}} \def \Qab{\mathbb{Q}^{\mathrm{ab}}} \def \Qtr{\mathbb{Q}^{\mathrm{tr}}} \def \Kc{\mathbb{K}^{\mathrm{c}}} \def \Qc{\mathbb{Q}^{\mathrm{c}}} \newcommand \rank{\operatorname{rk}} \def\Z_\K{\mathbb{Z}_\mathbb{K}} \def\Z_{\K,\cS}{\mathbb{Z}_{\mathbb{K},{\mathcal S}}} \def\Z_{\K,\cS_f}{\mathbb{Z}_{\mathbb{K},{\mathcal S}_f}} \def\Z_{\K,\cS_{f,\Gamma}}{\mathbb{Z}_{\mathbb{K},{\mathcal S}_{f,\Gamma}}}
\def\mathbf {F}{\mathbf {F}}
\def\({\left(} \def\){\right)} \def\[{\left[} \def\right]{\right]} \def\langle{\langle} \def\rangle{\rangle}
\def\gen#1{{\left\langle#1\right\rangle}} \def\genp#1{{\left\langle#1\right\rangle}_p} \def{\left\langle P_1, \ldots, P_s\right\rangle}{{\left\langle P_1, \ldots, P_s\right\rangle}} \def{\left\langle P_1, \ldots, P_s\right\rangle}_p{{\left\langle P_1, \ldots, P_s\right\rangle}_p}
\defe{e}
\def\e_q{e_q} \def{\mathfrak h}{{\mathfrak h}}
\def{\mathrm{lcm}}\,{{\mathrm{lcm}}\,}
\def\({\left(} \def\){\right)} \def\fl#1{\left\lfloor#1\right\rfloor} \def\rf#1{\left\lceil#1\right\rceil} \def\qquad\mbox{and}\qquad{\qquad\mbox{and}\qquad}
\def\tilde\jmath{\tilde\jmath} \def\ell_{\rm max}{\ell_{\rm max}} \def\log\log{\log\log}
\def{\rm m}{{\rm m}} \def\hat{h}{\hat{h}} \def{\rm GL}{{\rm GL}} \def\mathrm{Orb}{\mathrm{Orb}} \def\mathrm{Per}{\mathrm{Per}} \def\mathrm{Preper}{\mathrm{Preper}} \def \S{\mathcal{S}} \def\vec#1{\mathbf{#1}} \def\ov#1{{\overline{#1}}} \def{\mathrm Gal}{{\mathrm Gal}} \def{\mathrm S}{{\mathrm S}} \def\mathrm{tors}{\mathrm{tors}} \def\mathrm{PGL}{\mathrm{PGL}} \def{\rm H}{{\rm H}} \def\G_{\rm m}{\mathbb{G}_{\rm m}}
\def\house#1{{
\setbox0=\hbox{$#1$}
\vrule height \dimexpr\ht0+1.4pt width .5pt depth \dp0\relax
\vrule height \dimexpr\ht0+1.4pt width \dimexpr\wd0+2pt depth \dimexpr-\ht0-1pt\relax
\llap{$#1$\kern1pt}
\vrule height \dimexpr\ht0+1.4pt width .5pt depth \dp0\relax}}
\newcommand{{\boldsymbol{\alpha}}}{{\boldsymbol{\alpha}}} \newcommand{{\boldsymbol{\omega}}}{{\boldsymbol{\omega}}}
\newcommand{{\operatorname{Ch}}}{{\operatorname{Ch}}} \newcommand{{\operatorname{Elim}}}{{\operatorname{Elim}}} \newcommand{{\operatorname{proj}}}{{\operatorname{proj}}} \newcommand{{\operatorname{\mathrm{h}}}}{{\operatorname{\mathrm{h}}}} \newcommand{\operatorname{ord}}{\operatorname{ord}}
\newcommand{\mathrm{h}}{\mathrm{h}} \newcommand{\mathrm{aff}}{\mathrm{aff}} \newcommand{{\operatorname{Spec}}}{{\operatorname{Spec}}} \newcommand{{\operatorname{Res}}}{{\operatorname{Res}}}
\def{\mathfrak A}{{\mathfrak A}} \def{\mathfrak B}{{\mathfrak B}}
\numberwithin{equation}{section} \numberwithin{theorem}{section}
\title{On semigroup orbits of polynomials in subgroups}
\author[Jorge Mello]{Jorge Mello}
\address{University of New South Wales. mailing address:\newline School of Mathematics and Statistics UNSW Sydney NSW, 2052 Australia.}
\email{[email protected]}
\keywords{}
\begin{abstract} We study intersections of semigroup orbits in polynomial dynamics with multiplicative subgroups, extending results of Ostafe and Shparlinski (2010). \end{abstract}
\maketitle \section{Introduction} Let $K$ be a field of characteristic $0$ and $\overline{K}$ its algebraic closure. Let $\mathcal{F}= \{ \phi_1,..., \phi_k \} \subset K[X]$ be a set of polynomials of degree at least $2$, let $x \in K$, and let
\begin{center}$\mathcal{O}_{\mathcal{F}}(x)= \{ \phi_{i_n} \circ ... \circ \phi_{i_1}(x) | n \in \mathbb{N} , i_j =1,...,k \}$\end{center} denote the forward orbit of $x$ under $\mathcal{F}$. We denote by $\mathbb{G}^n_m$ the $n$-dimensional torus $(\overline{\mathbb{Q}}^*)^n$ endowed with the group law defined by coordinatewise multiplication.
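As an illustrative aside (not part of the paper's argument), the level sets of such a semigroup orbit can be enumerated numerically. The following Python sketch does this for a hypothetical system of two integer polynomials; all names and the example system are chosen for illustration only.

```python
def orbit_points(polys, x, N):
    """Enumerate the forward semigroup orbit of x under the maps in polys
    up to composition level N: all phi_{i_n} o ... o phi_{i_1}(x), 1 <= n <= N."""
    points = set()
    level = {x}
    for _ in range(N):
        level = {f(v) for f in polys for v in level}
        points |= level
    return points

# Hypothetical example: F = {t^2, 2t^2 - 1} acting on x = 2 over the integers.
F = [lambda t: t * t, lambda t: 2 * t * t - 1]
print(sorted(orbit_points(F, 2, 2)))  # [4, 7, 16, 31, 49, 97]
print(len(orbit_points(F, 2, 3)))     # 14
```

Here the orbit through level $3$ consists of $2 + 4 + 8 = 14$ distinct points; in general the level sets can overlap, which is why a set is used.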
For $\mathcal{S} \subset K$ reasonably sparse and somehow unrelated to $\mathcal{F}$, it is natural to study the intersection $\mathcal{O}_{\mathcal{F}}(x) \cap \mathcal{S}$. A generalisation of this situation is to study the intersection of orbits generated by multivariate polynomials with higher dimensional algebraic varieties. This is known as the \textit{dynamical Mordell-Lang conjecture}, for which we refer to \cite{BGT}. In the univariate case, when $\mathcal{S}= \mathbb{U}$ is the set of roots of unity and the initial points are defined over the cyclotomic closure $K^c:=K(\mathbb{U})$ of an algebraic number field $K$, Ostafe \cite{O} has proved finiteness for such points that are preperiodic for the initial polynomial.
When $k=1$ and $\mathcal{S} \subset K$ has certain multiplicative properties in the univariate case (e.g., $\mathcal{S}$ is a finitely generated group $\Gamma \subset K^*$), Ostafe and Shparlinski \cite{OS} have provided results for the frequency of intersections of polynomial orbits with such sets. Namely, they have proved that \begin{center}
$\# \{ n \leq N : f^{(n)}(x) \in \Gamma\} \leq \dfrac{(10 \log \deg f + o(1))N}{\log \log N}$, as $N \rightarrow \infty$, \end{center} for $f \in K[X], x \in K$.
In this paper we seek to generalise results of this sort to the case where the dynamical system is the semigroup generated under composition by several maps. Precisely, putting $\mathcal{F}_n= \{\phi_{i_n}\circ ... \circ \phi_{i_1}| 1 \leq i_j \leq k \}$ for the $n$-level set, and supposing that $\{t_N\}_{N=1}^{+\infty}$ is a sequence of positive integers going to $\infty$ satisfying that \begin{center}
$\# \{ v \in \Gamma | v = f(x), f \in \mathcal{F}_n, n \leq N \} \geq ck^{t_N}$
\end{center} for each $N$, where $c>0$ is a constant,
we prove among other results that \begin{center}
$\# \{ v \in \Gamma | v = f(x), f \in \mathcal{F}_n, n \leq N \} \leq \exp \left( \exp((10 \log d + o(1))t_N) \right)$, \end{center} as $N \rightarrow \infty $, where $d = \max_i \deg \phi_i$. Namely, if the number of orbit points of iteration order at most $N$ falling in a finitely generated group is bigger than a multiple of the size of the complete $k$-tree of depth $t_N-1$, then this count grows slower than a sequence obtained by exponentiating twice a multiple of the sequence $\{t_N\}$.
In particular, if our conditions are satisfied with
\begin{center} $t_N \sim\dfrac{1}{10 \log d + o(1)} \log \log \left( \dfrac{(10 \log d + o(1)) N}{\log \log N} \right)$, as $ N \rightarrow \infty$, \end{center} then we can generalize and recover the results of \cite{OS} under such conditions.
In Section~\ref{sec2} we set some notations and facts about heights, orbits and finitely generated groups. In Section~\ref{sec3} we recall some arithmetic and combinatoric results that are used to obtain results of frequency with orbits generated by a sequence of maps in Section~\ref{sec4}. In Section~\ref{sec5} we state a necessary recent graph theory result that is used in Section~\ref{sec6} to obtain results about the frequency of intersection of polynomial semigroup orbits with sets.
\section{Preliminary notation} \label{sec2} Let $K$ be a field of characteristic $0$ and $\overline{K}$ its algebraic closure. For $x \in \overline{\mathbb{Q}} $, the naive logarithmic height $h(x)$ is given by
\begin{center}$ \sum_{v \in M_K} \dfrac{[K_v: \mathbb{Q}_v]}{[K:\mathbb{Q}]} \log(\max \{1, |x|_v\})$, \end{center}
where $M_K$ is the set of places of $K$, $M_K^\infty$ is the set of archimedean (infinite) places of $K$, $M_K^0$ is the set of nonarchimedean (finite) places of $K$, and for each $v \in M_K$, $|.|_v$ denotes the corresponding absolute value on $K$ whose restriction to $\mathbb{Q}$ gives the usual $v$-adic absolute value on $\mathbb{Q}$. Also, we write $K_v$ for the completion of $K$ with respect to $|.|_v$, and we let $\mathbb{C}_v$ denote the completion of an algebraic closure of $K_v$. To simplify notation, we let $d_v=[K_v:\mathbb{Q}_v]/[K:\mathbb{Q}]$. Let $\mathcal{F}= \{ \phi_1,..., \phi_k \} \subset K[X]$ be a set of polynomials of degree at least $2$, let $x \in K$, and let $\mathcal{O}_{\mathcal{F}}(x)= \{ \phi_{i_n} \circ ... \circ \phi_{i_1}(x) | n \in \mathbb{N} , i_j =1,...,k \}$ denote the forward orbit of $x$ under $\mathcal{F}$. We denote by $\mathbb{G}^n_m$ the $n$-dimensional torus $(\overline{\mathbb{Q}}^*)^n$ endowed with the group law defined by coordinatewise multiplication. \begin{definition}
A polynomial $F \in \overline{\mathbb{Q}}[X,Y]$ is said to be special if it has a factor of the form $aX^mY^n - b$ or $aX^m - bY^n$ for some $a, b \in \overline{\mathbb{Q}}$ and $m,n \geq 0$. Otherwise we say that $F$ is non-special. \end{definition}
\begin{definition}
For a finitely generated group $\Gamma \subset \mathbb{G}_m^n$, we define the division group $\overline{\Gamma}$ by \begin{center}
$\overline{\Gamma}= \{ x \in \mathbb{G}_m^n | \exists t \in \mathbb{N}$ with $ x^t \in \Gamma \}$. \end{center}
\end{definition}
\begin{definition}
For $E, \epsilon \geq 0$ and a set $\mathcal{S} \subset \mathbb{G}_m^n$, we define the sets
\begin{center}
$\mathscr{B}_n(\mathcal{S}, E)= \{ x \in \mathbb{G}_m^n | \exists y, z \in \mathbb{G}_m^n $ with $ x= yz, y \in \mathcal{S}, h(z) \leq E \}$
\end{center} and
\begin{align*}
\mathscr{C}_n(\mathcal{S}, \epsilon)= \{ x \in \mathbb{G}_m^n | \exists y, z \in \mathbb{G}_m^n \text{ with } x= yz, y \in \mathcal{S}, h(z) \leq \epsilon(1+ h(y)) \}.
\end{align*}
\end{definition}
We also omit the subscript $n$ for $n=1$ writing \begin{center}
$\mathscr{B}(\mathcal{S}, E)= \mathscr{B}_1(\mathcal{S}, E)$ and $\mathscr{C}(\mathcal{S}, \epsilon)=\mathscr{C}_1(\mathcal{S}, \epsilon)$. \end{center}
We also write $\mathscr{A}(K, H)$ for the set of elements in the field of height at most $H$, namely \begin{center}
$\mathscr{A}(K, H)= \{ x \in \overline{K}^* | h(x) \leq H \}$. \end{center} For $\mathcal{F}= \{\phi_1,..., \phi_k \}$, we set \begin{center}$J=\{ 1,...,k \}, \quad W= \prod_{i=1}^\infty J$, \quad and \quad $\Phi_w:=(\phi_{w_j})_{j=1}^\infty$\end{center}
to be a sequence of polynomials from $\mathcal{F}$ for $w= (w_j)_{j=1}^\infty \in W$.
In this situation we let \begin{center}$\Phi_w^{(n)}=\phi_{w_n} \circ ... \circ \phi_{w_1}$ with $\Phi_w^{(0)}=\mathrm{Id}$,
and also $\mathcal{F}_n :=\{ \Phi_w^{(n)} | w \in W \}$.\end{center}
Precisely, we consider sequences of polynomials $\Phi$ $= (\phi_{i_j})_{j=1}^\infty \in \prod_{j=1}^\infty \mathcal{F}$ and $x \in \overline{K}$, denoting $\Phi^{(n)}(x):=\phi_{i_n}(\phi_{i_{n-1}}(\cdots(\phi_{i_1}(x))\cdots))$.
The set \begin{align*}\{ x, \Phi^{(1)}(x), \Phi^{(2)}(x), \Phi^{(3)}(x),... \}
=\{ x, \phi_{i_1}(x), \phi_{i_2}(\phi_{i_1}(x)), \phi_{i_3}(\phi_{i_2}(\phi_{i_1}(x))),... \}\end{align*} is called the forward orbit of $x$ under $\Phi$, denoted by $\mathcal{O}_{\Phi} (x)$.
The point $x$ is said to be $\Phi$-preperiodic if $\mathcal{O}_{\Phi} (x)$ is finite.
For $x \in K$, the $\mathcal{F}$-orbit of $x$ is defined as \begin{align*}
\mathcal{O}_{\mathcal{F}}(x)=\{ \phi(x) | \phi \in \bigcup_{n \geq 1} \mathcal{F}_n \}= \{ \Phi_w^{(n)}(x) | n \geq 0, w \in W \} = \bigcup_{w \in W} \mathcal{O}_{\Phi_w} (x).
\end{align*}
The point $x$ is called preperiodic for $\mathcal{F}$ if $\mathcal{O}_{\mathcal{F}}(x)$ is finite.
For $\mathcal{S} \subset K$ and an integer $N \geq 1$, we use $T_{x,\Phi}(N, \mathcal{S})$ to denote the number of $n \leq N$ with $\Phi^{(n)}(x) \in \mathcal{S}$, namely,
\begin{center}
$T_{x,\Phi}(N, \mathcal{S}) = \# \{ n \leq N | \Phi^{(n)}(x) \in \mathcal{S} \}$.
\end{center}
For $f = \sum_{i=0}^d a_i X^i \in \overline{\mathbb{Q}}[X]$ and $K$ a field containing all the coefficients of $f$, denote the Weil height of $f$ by
\begin{center}
$h(f) = \sum_{v \in M_K} d_v \log(\max_i |a_i|_v)$,
\end{center} and for the system of polynomials $\mathcal{F}= \{ \phi_1,..., \phi_k \}$, denote $h(\mathcal{F})=\max_i h({\phi_i})$.
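For $K=\mathbb{Q}$ the naive height of a point specializes to the familiar formula $h(p/q)=\log\max\{|p|,|q|\}$ for a rational number $p/q$ in lowest terms. The following Python sketch (purely illustrative, not part of the paper) computes this special case.

```python
from fractions import Fraction
from math import log

def height_rational(x):
    """Naive logarithmic Weil height over Q: for x = p/q in lowest terms,
    h(x) = log max(|p|, |q|). Fraction reduces to lowest terms automatically."""
    x = Fraction(x)
    return log(max(abs(x.numerator), abs(x.denominator)))

print(height_rational(Fraction(3, 2)))  # log 3 ~ 1.0986
print(height_rational(1))               # 0.0
```

Note that $h$ vanishes exactly on $0$ and the roots of unity of $\mathbb{Q}$, i.e. $\pm 1$, consistent with the general theory.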
We revisit the following bound calculated in other works, for example, \cite[Proposition 3.3]{M}.
\begin{prop}\label{prop2.4}
Let $\mathcal{F}= \{ \phi_1,..., \phi_k \}$ be a finite set of polynomials over $K$ with $\deg \phi_i= d_i \geq 2$, and $d:= \max_i d_i$. Then for all $n \geq 1$ and $\phi \in \mathcal{F}_n$, we have
\begin{center}
$h(\phi) \leq \left(\dfrac{d^n-1}{d-1}\right)h(\mathcal{F}) + d^2\left(\dfrac{d^{n-1}-1}{d-1}\right)\log 8= O(d^n(h(\mathcal{F})+1))$.
\end{center}
\end{prop}
The following is an easy consequence of \cite[Corollary 2.3]{OS}.
\begin{prop}
Let $K$ be a number field and $\mathcal{F}= \{ \phi_1,..., \phi_k \} \subset K[X]$ a dynamical system of polynomials. Let also $g \in K[X]$ be such that $g, g \circ \phi_1,..., g\circ \phi_k$ have at least two distinct roots in $\overline{\mathbb{Q}}$. Then, for every finitely generated subgroup $\Gamma \subset K^*$, $x \in \overline{\mathbb{Q}}$, and $E >0$, we have that
\begin{center}
$\mathcal{O}_{\mathcal{F}}(x) \cap g^{-1}(\mathscr{B}(\overline{\Gamma},E))$
\end{center} is finite.
\end{prop}
\section{Some preliminary results} \label{sec3}
We define the height of $\mathbf{x}= (x,y) \in \mathbb{G}_m^2$ by $h(\mathbf{x})= h(x) + h(y)$.
For $F \in \overline{\mathbb{Q}}[X,Y]$ an absolutely irreducible polynomial of degree $d$ and height $h$, which is not special, we use the notation $\Delta = \deg_X F + \deg_Y F$.
For $\Gamma$ a finitely generated subgroup of $\mathbb{G}^2_m$ of rank $r >0$, we take $K$ to be the smallest number field containing all coefficients of $F$ and the group $\Gamma$, so that \begin{center}
$F \in K[X,Y]$ and $\Gamma \subset (K^*)^2$. \end{center} Letting $\mathcal{C} \subset \overline{\mathbb{Q}}^2$ be the zero set of the above polynomial, we state the following technical result. \begin{lemma}\label{lem3.1}\cite[Lemma 4.5]{OS}
Let $K, \Gamma, \mathcal{C}, \Delta$ and $h$ be as above with $\Delta \geq 2$. Then there is a constant $c_0(K, \Gamma)$ depending only on $K$ and the generators of $\Gamma$, such that for $\zeta$ defined by
\begin{center}
$\zeta^{-1}= c_0(K,\Gamma)\exp (2\Delta^2)\Delta^{7r+22}(\Delta +h)(\log \Delta)^6$,
\end{center} where $r$ is the rank of $\Gamma$, we have that
\begin{center}
$\# \left( \mathcal{C} \cap \mathscr{C}_2(\overline{\Gamma}, \zeta) \right) \leq \exp \left( (h+1) \exp \left( (2 + o(1)) \Delta^2 \right) \right)$.
\end{center} \end{lemma} On the other hand, and more generally, if $K$ is an algebraically closed field of characteristic zero, we consider polynomials $F \in K[X]$ that are not monomials. If one denotes \begin{center}$A(n,r)= (8n)^{4n^4(n+r+1)}$, \end{center} we quote the following counting result. \begin{lemma}\label{lem3.2}\cite[Lemma 4.7]{OS}
Let $F \in K[X]$ be a polynomial of degree $D$ which is not a monomial and let $\Gamma \subset K^*$ be a multiplicative subgroup of rank $r$. Then
\begin{center}
$\# \{ (u,v) \in \Gamma^2 | F(u)=v \} < D \cdot A(D+1, r) + D \cdot 2^{D+1}$.
\end{center}
\end{lemma} We will also make use of the combinatorial statement below, which has been used and proved in a number of works. \begin{lemma}\cite[Lemma 4.8]{OS}
Let $2 \leq T < N/2$. For any sequence
\begin{center}
$0 \leq n_1 < ... < n_T \leq N$,
\end{center}there exists $r \leq 2N/T$ such that $n_{i+1}- n_i=r$ for at least $T(T-1)/4N$ values of $i \in \{ 1,...,T-1 \}$. \end{lemma} The following result for more general fields is a direct application of the previous lemma.
\begin{prop}\label{prop3.4}
Let $K$ be an arbitrary field, $x \in K$ and let $\mathcal{S} \subset K$ be an arbitrary subset of $K$. Suppose there exist a real number $0< \tau < 1/2$, and also $\Phi$ a sequence of polynomials contained in $\mathcal{F}= \{ \phi_1,.., \phi_k \} \subset K[X]$ such that
\begin{center}
$T_{x,\Phi}(N, \mathcal{S})= \tau N \geq 2$.
\end{center} Then there exists an integer $t \leq 2 \tau^{-1}$ such that
\begin{center}
$\# \{ (u,v) \in \mathcal{S}^2 | \exists \psi \in \mathcal{F}_t $ with $ \psi (u)=v \} \geq \dfrac{\tau^2N}{8}$.
\end{center}
\end{prop}
\begin{proof}
Letting $T:= T_{x,\Phi}(N, \mathcal{S})$, we consider all the values $1\leq n_1 <...< n_T \leq N$ such that $\Phi^{(n_i)}(x) \in \mathcal{S}$, $i=1,...,T$.
From the previous lemma, there exists $t \leq 2 \tau^{-1}$ such that the number of $i=1,..., T-1$ with $n_{i+1}- n_i=t$ is at least
\begin{center}
$\dfrac{T(T-1)}{4N} = \dfrac{T^2}{4N}\left( 1 - \dfrac{1}{T} \right) = \dfrac{\tau^2 N}{4} \left( 1 - \dfrac{1}{T} \right) \geq \dfrac{\tau^2N}{8}$.
\end{center} Moreover, if $\mathcal{J}:= \{ 1 \leq j \leq T-1 | n_{j+1} - n_j = t \}$, then for each $ j \in \mathcal{J}$,
\begin{center}
$\Phi^{(n_j)}(x) \in \mathcal{S}$ and $\Phi^{(n_{j+1})}(x)= \psi(\Phi^{(n_j)}(x)) \in \mathcal{S}$, where $\psi \in \mathcal{F}_t$.
\end{center} and hence
\begin{center}
$\# \{ (u,v) \in \mathcal{S}^2 | \psi (u)=v $ for some $\psi \in \mathcal{F}_t \} \geq \dfrac{\tau^2N}{8}$.
\end{center} \end{proof}
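The combinatorial gap lemma used above can be exercised numerically. The Python sketch below (illustrative only, with a made-up input sequence) searches for a gap $r \leq 2N/T$ that occurs at least $T(T-1)/4N$ times among consecutive differences, as the lemma guarantees.

```python
from collections import Counter

def frequent_gap(ns, N):
    """Given 0 <= n_1 < ... < n_T <= N, return a gap r <= 2N/T that occurs
    at least T(T-1)/(4N) times among consecutive differences, or None."""
    T = len(ns)
    gaps = Counter(b - a for a, b in zip(ns, ns[1:]))
    need = T * (T - 1) / (4 * N)
    for r, count in gaps.items():
        if r <= 2 * N / T and count >= need:
            return r
    return None

# Made-up example inside [0, 100]: mostly an arithmetic progression of step 3.
seq = list(range(0, 60, 3)) + [70, 90]
print(frequent_gap(seq, 100))  # 3
```

Here $T=22$, so the lemma asks for a gap $r \leq 200/22 \approx 9$ repeated at least $22 \cdot 21/400 \approx 1.2$ times; the step $3$ occurs $19$ times.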
\section{Orbits in sets} \label{sec4}
\begin{definition}
We say that an orbit $\mathcal{O}_{\mathcal{F}}(x)$ of an element $x \in K$ under a semigroup generated by a finite set $\mathcal{F}$ intersects a family of sets $\mathcal{S}=\{ \mathcal{S}_N \}_{N \in \mathbb{N}}$ \textit{with low frequency} if
\begin{center}
$\displaystyle\lim_{N \rightarrow \infty} \dfrac{\displaystyle\max_{\Phi \textit{ sequence of } \mathcal{F}} T_{x, \Phi}(N, \mathcal{S}_N)}{N}=0$
\end{center} In the particular case where the family is constant, $S:=\mathcal{S}_1=\mathcal{S}_2=\cdots$, we say that $\mathcal{O}_{\mathcal{F}}(x)$ \textit{intersects the set $S$ with low frequency}.
\end{definition}
Now we give a result for the frequency of intersection of orbits of semigroups of polynomials with the set $\mathscr{C}(\overline{\Gamma}, \epsilon)$ for a finitely generated subgroup $\Gamma \subset \mathbb{G}_m$. \begin{theorem}\label{th4.2}
Let $K$ be a number field and $\mathcal{F}= \{ \phi_1,..., \phi_k \} \subset K[X]$ a finite set of polynomials that are not monomials with $\deg \phi_i= d_i \geq 2$, and $d = \max_i d_i$. Suppose that, for a finitely generated subgroup $\Gamma \subset K^*$ of rank $r$, $x \in \overline{\mathbb{Q}}$, and $\theta_N=(\log N)^{-2}(\log \log N)^{-7r/2 -12}$, we have that $\mathcal{O}_{\mathcal{F}}(x)$ intersects the family of sets $\{ \mathscr{C}(\overline{\Gamma}, \theta_N)\}_N$ with low frequency. Then \begin{center}
$\displaystyle\max_{\Phi \textit{ sequence of } \mathcal{F}} T_{x, \Phi}(N, \mathscr{C}(\overline{\Gamma}, \theta_N)) \leq \dfrac{(4 \log d + o(1))N}{ (\log \log \log N)}$, as $N \rightarrow \infty$. \end{center} \end{theorem}
\begin{proof}
For each $\Phi$, we define $ \tau_{\Phi}$ by $\tau_\Phi =T_{x,\Phi}(N, \mathscr{C}(\overline{\Gamma}, \theta_N))/N$.
We can assume that \begin{equation}\label{eq4.1}
\tau_{\Phi} \geq \dfrac{4 \log d }{\log \log \log N} \geq \dfrac{2}{N} \end{equation} for some $\Phi$, for otherwise there is nothing to be proved.
For $N$ large enough, Proposition~\ref{prop3.4} shows that there exists \begin{center} $t_{\Phi} \leq 2 \tau_{\Phi}^{-1} \leq \dfrac{\log \log \log N}{2 \log d }$ \end{center} such that \begin{center}
$\# \{ (u,v) \in \mathscr{C}(\overline{\Gamma}, \theta_N)^2 | \psi (u)=v $ for some $\psi \in \mathcal{F}_{t_{\Phi}} \} \geq \dfrac{\tau_{\Phi}^2N}{8}$. \end{center} For any $\psi \in \mathcal{F}_{t_{\Phi}}$, we denote by $\mathcal{C}_{\psi}$ the curve defined by the zero set of the polynomial $\psi(X) - Y=0$. Then \begin{align*}
\displaystyle\sum_{ \psi \in \mathcal{F}_{t_\Phi}} &\#(C_{\psi} \cap \mathscr{C}(\overline{\Gamma}, \theta_N)^2 ) \\&= \displaystyle\sum_{ \psi \in \mathcal{F}_{t_\Phi}} \# \{ (u,v) \in \mathscr{C}(\overline{\Gamma}, \theta_N)^2 | \psi (u)=v\} \\& = \# \{ (u,v) \in \mathscr{C}(\overline{\Gamma}, \theta_N)^2 | \psi (u)=v \text{ for some }\psi \in \mathcal{F}_{t_\Phi} \} \\& \geq \dfrac{\tau_{\Phi}^2N}{8}.
\end{align*}
The set $\{ (u,v) \in \mathscr{C}(\overline{\Gamma}, \theta_N)^2 | \psi (u)=v\}$ is the intersection of the curve $C_{\psi}$ with the set $\mathscr{C}(\overline{\Gamma}, \theta_N)^2$.
We define $\zeta_\Phi$ as in Lemma~\ref{lem3.1} with parameters $\Delta_\Phi=d^{t_{\Phi}} +1$ and $h=h(\mathcal{F}_{t_\Phi})$. By Proposition~\ref{prop2.4}, we have that \begin{center} $h \leq O(d^{t_\Phi}(h(\mathcal{F})+1))= O(\Delta_\Phi)$, \end{center} where the implied constant does not depend on $\Phi$ satisfying (\ref{eq4.1}), but only on $\mathcal{F}$.
Moreover, \begin{center}
$\Delta_{\Phi}=d^{t_\Phi} +1 \leq (\log \log N)^{1/2} + 1$, \end{center} and thus \begin{align*}
\zeta_{\Phi}^{-1}:&= \exp(2 \Delta_\Phi^2 + O(1))\Delta_\Phi^{7r + 23}(\log \Delta_\Phi)^6\\ &= O((\log N)^2 (\log \log N)^{\frac{7r + 23}{2}}(\log \log \log N)^6), \end{align*} for $N$ sufficiently large, with the implied constant again not depending on $\Phi$ satisfying (\ref{eq4.1}).
For our choice of $\theta_N$, we have that $\theta_N \leq \zeta_\Phi /2$ for any $N$ large enough, and so \begin{center}
$\mathscr{C}(\overline{\Gamma}, \theta_N)^2 \subset \mathscr{C}_2(\overline{\Gamma} \times \overline{\Gamma}, \zeta_{\Phi})$. \end{center} By the previous calculations, this implies that \begin{center}$\sum_{\psi \in \mathcal{F}_{t_\Phi}} \# \left( \mathcal{C}_{\psi} \cap \mathscr{C}_2(\overline{\Gamma} \times \overline{\Gamma}, \zeta_\Phi) \right) \geq \dfrac{\tau_\Phi^2 N}{8}$.\end{center}
Using Lemma~\ref{lem3.1} to obtain upper bounds for $\# \left( \mathcal{C}_{\psi} \cap \mathscr{C}_2(\overline{\Gamma} \times \overline{\Gamma}, \zeta_\Phi) \right)$, and knowing that $t_\Phi \leq 2 \tau_\Phi^{-1}$, we have, as $\tau_\Phi \rightarrow 0$ and $N \rightarrow \infty$, that \begin{align*} N &\leq 8\tau_{\Phi}^{-2} k^{t_\Phi} \exp \left(h\exp \left((2 + o(1))\Delta_\Phi^2\right)\right) \\& \leq 8 \tau_{\Phi}^{-2}k^{t_\Phi} \exp \left(\exp\left( \exp \left((2 \log d + o(1))t_\Phi\right)\right)\right) \\& \leq 8 \tau_{\Phi}^{-2}k^{t_\Phi} \exp \left(\exp\left( \exp \left((4 \log d + o(1))\tau_\Phi^{-1}\right)\right)\right) \\& \leq \exp \left(\exp\left( \exp \left((4 \log d + o(1))\tau_\Phi^{-1}\right)\right)\right),\end{align*} and then \begin{center}
$\tau_\Phi \leq \dfrac{(4 \log d + o(1))}{\log \log \log N}$, \end{center} and hence \begin{center}
$\tau_\Phi \leq \displaystyle\max_{\Phi} \tau_\Phi \leq \dfrac{(4 \log d + o(1))}{\log \log \log N}$ \end{center} as wanted when $N \rightarrow \infty$ . \end{proof} \begin{corollary}\label{cor4.3} Under the conditions of Theorem~\ref{th4.2} we have that \begin{center}
$\# \{ y \in \mathscr{C}(\overline{\Gamma}, \theta_N) | y = f(x), f \in \mathcal{F}_n, n \leq N \} \leq k^N\dfrac{(4 \log d + o(1))N}{ (\log \log \log N)} $ \end{center} as $N \rightarrow \infty $. \end{corollary} \begin{proof}
Given $N$ very large, the set $\mathcal{F}_N$ contains $k^N$ polynomials. For each $f \in \mathcal{F}_N$, we can choose a sequence $\Phi$ of terms in $\mathcal{F}$ such that $\Phi^{(N)}= f$, obtaining $k^N$ sequences representing the elements of $\mathcal{F}_N$.
For each sequence $\Phi$ chosen, when $N$ is large, \begin{center} $\# \{n \leq N | \Phi^{(n)}(x) \in \mathscr{C}(\overline{\Gamma}, \theta_N) \} \leq \dfrac{(4 \log d + o(1))N}{ (\log \log \log N)}$ \end{center} uniformly for any $\Phi$ by the previous theorem, or in other words, for each path in the $N$-tree $\mathcal{F}_N$. Since there are $k^N$ paths (polynomials, sequences) in the $N$-tree $\mathcal{F}_N$, this yields \begin{center}
$\# \{ y \in \mathscr{C}(\overline{\Gamma}, \theta_N) | y = f(x), f \in \mathcal{F}_n, n \leq N \} \leq k^N\dfrac{(4 \log d + o(1))N}{ (\log \log \log N)} $ \end{center} as $N \rightarrow \infty $.
\end{proof}
\begin{theorem}\label{th4.4}
Let $K$ be a field of characteristic zero and $\mathcal{F}= \{ \phi_1,..., \phi_k \} \subset K[X]$ a finite set of polynomials that are not monomials with $\deg \phi_i= d_i \geq 2$ and $d = \max_i d_i$. Then, for a finitely generated subgroup $\Gamma \subset K^*$ of rank $r, x \in K$ such that $\mathcal{O}_{\mathcal{F}}(x)$ intersects $\Gamma$ with low frequency, we have that \begin{center}
$\displaystyle\max_{\Phi \text{ sequence in } \mathcal{F}} T_{x, \Phi}(N, \Gamma) \leq \dfrac{(10 \log d + o(1))N}{ (\log \log N)}$, as $N \rightarrow \infty$. \end{center} \end{theorem} \begin{proof}
As before, we again define $\tau_\Phi = T_{x, \Phi}(N, \Gamma)/N $ and assume $\tau_{\Phi} \geq 2/N$. Again from Proposition~\ref{prop3.4}, for $N$ large, there exists $t_\Phi \leq 2 \tau_{\Phi}^{-1}$ such that \begin{center}
$\dfrac{\tau_\Phi^2 N}{8} \leq \displaystyle\sum_{ \psi \in \mathcal{F}_{t_\Phi}} \# \{ (u,v) \in \Gamma^2 | \psi (u)=v\}$, \end{center} which by Lemma~\ref{lem3.2}, as $\deg \psi \leq d^{t_\Phi}$, is upper bounded by \begin{center} $k^{2\tau_\Phi^{-1}}\left( d^{2\tau_\Phi^{-1}}A(d^{2\tau_\Phi^{-1}} +1, r) + d^{2\tau_\Phi^{-1}} 2^{d^{2\tau_\Phi^{-1}}+1} \right)$. \end{center}Therefore \begin{center}
$\dfrac{\tau_\Phi^2 N}{8} \leq k^{2\tau_\Phi^{-1}} d^{2\tau_\Phi^{-1}}A(d^{2\tau_\Phi^{-1}} +1, r) + k^{2\tau_\Phi^{-1}}d^{2\tau_\Phi^{-1}} 2^{d^{2\tau_\Phi^{-1}}+1}$. \end{center} When $N \rightarrow \infty$ ($\tau_{\Phi} \rightarrow 0$ uniformly on $\Phi$), we have \begin{center}$N \leq 8 \tau_{\Phi}^{-2} \left( k^{2\tau_\Phi^{-1}} d^{2\tau_\Phi^{-1}}A(d^{2\tau_\Phi^{-1}} +1, r) + k^{2\tau_\Phi^{-1}}d^{2\tau_\Phi^{-1}} 2^{d^{2\tau_\Phi^{-1}}+1} \right)$ \end{center} bounded by \begin{center}
$N \leq \exp \left( \exp((10 \log d + o(1))\tau^{-1}_{\Phi}) \right)$, \end{center} whence the result follows. \end{proof} As in Corollary~\ref{cor4.3}, the following is proven in an analogous way, and it works for more general fields of characteristic zero. \begin{corollary}\label{cor4.5} Under the conditions of Theorem~\ref{th4.4} we have that \begin{center}
$\# \{ y \in \Gamma | y = f(x), f \in \mathcal{F}_n, n \leq N \} \leq k^N\dfrac{(10 \log d + o(1))N }{ (\log \log N)}$, \end{center} as $N \rightarrow \infty $. \end{corollary} \section{A graph theory result} \label{sec5} Here we present a graph theory result of M\'{e}rai and Shparlinski \cite{MS} that will be used later in proofs.
Let $\mathcal{H}$ be a directed graph, possibly with multiple edges. Let $\mathcal{V}(\mathcal{H})$ be the set of vertices of $\mathcal{H}$. For $u,v \in \mathcal{V}(\mathcal{H})$, let $d(u,v)$ be the distance from $u$ to $v$, that is, the length of a shortest (directed) path from $u$ to $v$. Assume that all vertices have out-degree $k\geq 1$, and that the edges leaving each vertex are labeled by $\{1,...,k \}$.
For a word $\omega \in \{1,...,k \}^*$ over the alphabet $\{ 1,...,k\}$ and $ u \in \mathcal{V}(\mathcal{H})$, let $\omega(u) \in \mathcal{V}(\mathcal{H})$ be the end point of the walk started from $u$ and following the edges according to $\omega$.
Let us fix $u \in \mathcal{V}(\mathcal{H})$ and a subset $\mathcal{A} \subset \mathcal{V}(\mathcal{H})$. Then for words $\omega_1,..., \omega_l$ put \begin{align*}
L_N(u,\mathcal{A}; \omega_1,...,\omega_l)= \# \{v &\in \mathcal{V}(\mathcal{H}) : d(u,v) \leq N,\\ &d(u, \omega_i(v)) \leq N , \omega_i(v) \in \mathcal{A}, i=1,...,l \}. \end{align*} To state the results, for $k,t \geq 1$, let $B(k,t)$ denote the size of the complete $k$-tree of depth $t-1$, that is \begin{center} $ B(k,t) =
\begin{cases}
t &\quad\text{if } k=1, \\
\dfrac{k^t-1}{k-1} &\quad\text{otherwise}.
\end{cases}
$ \end{center}
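As a quick sanity check (illustrative only, not part of the argument), $B(k,t)$ agrees with a direct count of the words of length less than $t$ over a $k$-letter alphabet, one per node of the tree; a Python sketch:

```python
def B(k, t):
    """Size of the complete k-tree of depth t-1:
    t if k == 1, else (k^t - 1)/(k - 1)."""
    return t if k == 1 else (k**t - 1) // (k - 1)

def B_by_counting(k, t):
    # One node per word of length 0, 1, ..., t-1 over a k-letter alphabet.
    return sum(k**j for j in range(t))

print(B(2, 4), B_by_counting(2, 4))  # 15 15
```

In particular $B(k,t)$ grows like $k^{t}$ for fixed $k \geq 2$, which is the growth rate used in the bounds of Section~\ref{sec6}.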
\begin{lemma}\label{lem5.1}
Let $u \in \mathcal{V}(\mathcal{H})$, and $t,l \geq 1$ be fixed. If $\mathcal{A} \subset \mathcal{V}(\mathcal{H})$ is a subset of vertices with
\begin{align*}
\#\{ v \in \mathcal{A} : d&(u,v) \leq N \}\\ & \geq \max \left\{ 3B(k,t), \dfrac{3l}{t}\# \{v \in \mathcal{V}(\mathcal{H}) : d(u,v) \leq N \} \right\},
\end{align*} then there exist words $\omega_1,...,\omega_l \in \{ 1,...,k \}^*$ of length at most $t$ such that \begin{center}
$L_N(u,\mathcal{A}; \omega_1,...,\omega_l) \gg \dfrac{t}{B(k,t)^{l+1}} \# \{v \in \mathcal{V}(\mathcal{H}) : d(u,v) \leq N \}$, \end{center} where the implied constant depends only on $l$.
\end{lemma} \section{More results on orbits in sets} \label{sec6}
\begin{theorem}
Let $K$ be a field of characteristic zero and $\mathcal{F}= \{ \phi_1,..., \phi_k \}\newline \subset K[X]$ a finite set of polynomials that are not monomials, with $\deg \phi_i= d_i \geq 2$ and $d = \max_i d_i$.
Suppose that $\Gamma \subset K^*$ is a finitely generated subgroup of rank $r$, and $ u \in K$.
Let also $t,l \geq 1$ be integers such that $t \geq 3l$ and
$\# \{ v \in \Gamma | v = f(u), f \in \mathcal{F}_n, n \leq N \} \geq 3B(k,t)$. Then \begin{align*}
\# \{ v \in \Gamma | v = f(u), f \in \mathcal{F}_n, n \leq N \} \ll_l \dfrac{B(k,t)^{l+1}}{t}(d^t A(d^t+1,r) +d^t2^{d^t +1}). \end{align*}
\end{theorem} \begin{proof}
We consider the directed graph with the elements of $\Gamma$ as vertices, and edges $(x,\phi_i(x))$ for $i=1,...,k$ and $x \in \Gamma$. With the notation of Section~\ref{sec5} and Lemma~\ref{lem5.1}, we let $\Gamma$ take the place of $\mathcal{H}$ and $\mathcal{A}$.
By hypothesis, $l \leq t/3$ and $\#\{ v \in \Gamma, d(u,v) \leq N \} \geq 3B(k,t)$. From Lemma~\ref{lem5.1}, there exist words $\omega_1,...,\omega_l \in \{ 1,...,k\}^*$ of length at most $t$, and therefore degree at most $d^t$, such that
\begin{equation}\label{eq6.1}
L_N(u, \Gamma; \omega_1,...,\omega_l) \gg_l \dfrac{t}{B(k,t)^{l+1}} \# \{v \in \mathcal{V}(\Gamma) : d(u,v) \leq N \}.
\end{equation} By Lemma~\ref{lem3.2}, we compute
\begin{align*}
L_N&(u, \Gamma; \omega_1,...,\omega_l)\\&=\# \{v \in \mathcal{V}(\Gamma) : d(u,v),d(u, \omega_i(v)) \leq N , \omega_i(v) \in \Gamma, i=1,...,l \}\\
&\leq \displaystyle\sum_{i \leq l} \# \{v \in \mathcal{V}(\Gamma) : d(u,v),d(u, \omega_i(v)) \leq N , \omega_i(v) \in \Gamma \}\\
& \leq \displaystyle\sum_{i \leq l} \# \{(x,y) \in \Gamma^2 : y= \omega_i(x) \} \\
&\leq \displaystyle\sum_{i \leq l} (d^t A(d^t+1,r) +d^t2^{d^t +1})\\
&=l(d^t A(d^t+1,r) +d^t2^{d^t +1}).
\end{align*}Gathering this with (\ref{eq6.1}), we conclude that
\begin{center}
$\# \{v \in \mathcal{V}(\Gamma) : d(u,v) \leq N \} \ll_l \dfrac{B(k,t)^{l+1}l}{t}(d^t A(d^t+1,r) +d^t2^{d^t +1})$,
\end{center} as desired. \end{proof} \begin{corollary}\label{cor6.2}
Let $K$ be a field of characteristic zero and $\mathcal{F}= \{ \phi_1,..., \phi_k \}\newline \subset K[X]$ a finite set of polynomials that are not monomials, with $\deg \phi_i= d_i \geq 2$ and $d = \max_i d_i$.
Suppose that $\Gamma \subset K^*$ is a finitely generated subgroup of rank $r$, $ u \in K$, and $\{t_N\}_N$ is a sequence of positive integers that goes to $\infty$ as $N \rightarrow \infty$ and that satisfies
\begin{center}
$\# \{ v \in \Gamma | v = f(u), f \in \mathcal{F}_n, n \leq N \} \geq 3B(k,t_N)$
\end{center} for each $N$.
Then \begin{center}
$\# \{ v \in \Gamma | v = f(u), f \in \mathcal{F}_n, n \leq N \} \leq \exp \left( \exp((10 \log d + o(1))t_N) \right)$, \end{center} as $N \rightarrow \infty $.
\end{corollary}
\begin{proof}
In the proof of the previous result, we can choose an arbitrary integer $l \geq 1$ and $N$ big enough so that $t_N \geq 3l$. For each of these $t_N$, we can apply the previous theorem, obtaining that
\begin{align*}
\# \{ v \in \Gamma | v = f(u), f \in \mathcal{F}_n, n \leq N \} \ll_l \dfrac{B(k,t_N)^{l+1}}{t_N}(d^{t_N} A(d^{t_N}+1,r) +d^{t_N}2^{d^{t_N} +1}). \end{align*} Moreover, since $t_N \rightarrow \infty$ as $N \rightarrow \infty$, it yields \begin{align*}B(k,t_N)^{l+1}(d^{t_N} A(d^{t_N}+1,r) +d^{t_N}2^{d^{t_N} +1}) =\exp \left( \exp((10 \log d + o(1))t_N) \right),\end{align*} from where the result follows.
\end{proof}
\begin{remark}
If the hypotheses of Corollary~\ref{cor6.2} are satisfied with \begin{center}$t_N \sim\dfrac{1}{10 \log d + o(1)} \log \log \left( \dfrac{(10 \log d + o(1)) N}{\log \log N} \right)$, as $ N \rightarrow \infty$, \end{center}
then we recover and generalize Corollary~\ref{cor4.5}, as well as Theorem 3.1 of \cite{OS}, under our referred conditions. \end{remark} \begin{theorem}\label{th6.4}
Let $K$ be an algebraic number field and $\mathcal{F}= \{ \phi_1,..., \phi_k \}\newline \subset K[X]$ a finite set of polynomials that are not monomials, with $\deg \phi_i= d_i \geq 2$ and $d = \max_i d_i$.
Suppose that $\Gamma \subset K^*$ is a finitely generated subgroup of rank $r$, $ u \in K$, and $\{t_N\}_N$ is a sequence of positive integers that goes to $\infty$ as $N \rightarrow \infty$ and that satisfies
\begin{center}
$\# \{ v \in \mathscr{C}(\overline{\Gamma}, \theta_N) | v = f(u), f \in \mathcal{F}_n, n \leq N \} \geq 3B(k,t_N)$
\end{center} for each $N$, with $\theta_N \leq \left( \exp(d^{2t_N})d^{t_N(7r+24)}\right)^{-1}$.
Then \begin{align*}
\# \{ v \in \mathscr{C}(\overline{\Gamma}, \theta_N) | v = f(u), f \in \mathcal{F}_n, n \leq N \} \leq \exp (\exp( \exp ((4 \log d + o(1))t_N))), \end{align*} as $N \rightarrow \infty $. \end{theorem} \begin{proof}
We consider the directed graph with the elements of $\mathscr{C}(\overline{\Gamma}, \theta_N)$ as vertices, and edges $(x,\phi_i(x))$ for $i=1,...,k$ and $x \in \mathscr{C}(\overline{\Gamma}, \theta_N)$. With the notation of Section~\ref{sec5} and Lemma~\ref{lem5.1}, we let $\mathscr{C}(\overline{\Gamma}, \theta_N)$ take the place of $\mathcal{H}$ and $\mathcal{A}$.
By hypothesis, we can choose an arbitrary integer $l \geq 1$ and $N$ big enough so that $t_N \geq 3l$ and $\#\{ v \in \mathscr{C}(\overline{\Gamma}, \theta_N), d(u,v) \leq N \} \geq 3B(k,t_N)$. From Lemma~\ref{lem5.1}, for each $N$, there exist words $\omega_1,...,\omega_l \in \{ 1,...,k\}^*$ of length at most $t_N$, and therefore degree at most $d^{t_N}$, such that
\begin{equation}\label{eq6.2}
L_N(u, \mathscr{C}(\overline{\Gamma}, \theta_N); \omega_1,...,\omega_l) \gg_l \dfrac{t_N}{B(k,t_N)^{l+1}} \# \{v \in \mathcal{V}(\mathscr{C}(\overline{\Gamma}, \theta_N)) : d(u,v) \leq N \}.
\end{equation} Putting $\Delta_{t_N}=d^{t_N} +1$ and $h_N= h(\mathcal{F}_{t_N})$, we have $h_N=O(\Delta_{t_N})$ by Proposition~\ref{prop2.4}.
Defining $\zeta_N$ as in Lemma~\ref{lem3.1} with parameters $h_N, \Delta_{t_N}$ we have that
\begin{align*}\zeta_N^{-1}= \exp(2 \Delta_{t_N}^2 + O(1))\Delta_{t_N}^{7r + 23}(\log \Delta_{t_N})^6
=O(\exp(d^{2t_N})d^{t_N(7r+23)}(t_N\log d)^6).
\end{align*}
As $\theta_N\leq \zeta_N/2= O\left(\left(\exp(d^{2t_N})d^{t_N(7r+23)}(t_N\log d)^6\right)^{-1}\right)$, for $N$ large enough, it is true that
\begin{center}
$\mathscr{C}(\overline{\Gamma}, \theta_N)^2 \subset \mathscr{C}_2(\overline{\Gamma} \times \overline{\Gamma}, \zeta_N)$. \end{center} By Lemma~\ref{lem3.1}, we compute
\begin{align*}
L_N&(u, \mathscr{C}(\overline{\Gamma}, \theta_N); \omega_1,...,\omega_l)\\&=\# \{v \in \mathcal{V}(\mathscr{C}(\overline{\Gamma}, \theta_N)) : d(u,v),d(u, \omega_i(v)) \leq N , \omega_i(v) \in \mathscr{C}(\overline{\Gamma}, \theta_N), i=1,...,l \}\\
&\leq \displaystyle\sum_{i \leq l} \# \{v \in \mathcal{V}(\mathscr{C}(\overline{\Gamma}, \theta_N)) : d(u,v),d(u, \omega_i(v)) \leq N , \omega_i(v) \in \mathscr{C}(\overline{\Gamma}, \theta_N) \}\\
& \leq \displaystyle\sum_{i \leq l} \# \{(x,y) \in \mathscr{C}(\overline{\Gamma}, \theta_N)^2 : y= \omega_i(x) \} \\
& \leq \displaystyle\sum_{i \leq l} \# \{(x,y) \in \mathscr{C}_2(\overline{\Gamma} \times \overline{\Gamma}, \zeta_N) : y= \omega_i(x) \}\\
& \leq l \exp ((h_N+1)\exp ((2 + o(1))\Delta_{t_N}^2)).
\end{align*} Combining this with \eqref{eq6.2}, it follows that
\begin{align*}
&\# \{v \in \mathcal{V}(\mathscr{C}(\overline{\Gamma}, \theta_N)) : d(u,v) \leq N \}\\& \ll_l \dfrac{B(k,t_N)^{l+1}l}{t_N}\exp ((h_N+1)\exp ((2 + o(1))\Delta_{t_N}^2))\\
& \leq \dfrac{\exp (\exp( \exp ((4 \log d + o(1))t_N)))}{t_N},
\end{align*}as we wanted to show.
\end{proof} \begin{remark}
If the hypotheses of Theorem~\ref{th6.4} are satisfied with \begin{center}$t_N \sim\dfrac{1}{4 \log d + o(1)} \log \log \log \left( \dfrac{(4 \log d + o(1)) N}{\log \log \log N} \right)$, as $ N \rightarrow \infty$, \end{center}
then we recover and generalize Corollary~\ref{cor4.3}, as well as Theorem 2.4 of \cite{OS}, under the conditions stated above. \end{remark}
\end{document}
Issai Schur
Issai Schur (10 January 1875 – 10 January 1941[1]) was a Russian mathematician who worked in Germany for most of his life. He studied at the University of Berlin. He obtained his doctorate in 1901, became lecturer in 1903 and, after a stay at the University of Bonn, professor in 1919.
Born: 10 January 1875, Mogilev, Russian Empire
Died: 10 January 1941 (aged 66), Tel Aviv, Mandatory Palestine
Known for
• Schur decomposition
• Schur's lemma
Scientific career
Fields: Mathematics
Doctoral advisor
• Georg Frobenius
• Lazarus Fuchs
Doctoral students
• Richard Brauer
• Robert Frucht
• Maximilian Herzberger
• Eberhard Hopf
• Bernhard Neumann
• Rose Peltesohn
• Heinz Prüfer
• Richard Rado
• Isaac Jacob Schoenberg
• Arnold Scholz
• Wilhelm Specht
• Karl Dörge
• Wolfgang Hahn
As a student of Ferdinand Georg Frobenius, he worked on group representations (the subject with which he is most closely associated), but also in combinatorics and number theory and even theoretical physics. He is perhaps best known today for his result on the existence of the Schur decomposition and for his work on group representations (Schur's lemma).
Schur published under the names of both I. Schur and J. Schur, the latter especially in Journal für die reine und angewandte Mathematik. This has led to some confusion.[2]
Childhood
Issai Schur was born into a Jewish family, the son of the businessman Moses Schur and his wife Golde Schur (née Landau). He was born in Mogilev on the Dnieper River in what was then the Russian Empire. Schur used the name Schaia (Isaiah, as in the epitaph on his grave) rather than Issai until his mid-twenties.[3] Schur's father may have been a wholesale merchant.[4]
In 1888, at the age of 13, Schur went to Liepāja (Courland, now in Latvia), where his married sister and his brother lived, 640 km north-west of Mogilev. Courland was one of the three Baltic governorates of Tsarist Russia, and since the Middle Ages the Baltic Germans had been the upper social class.[5][6] The local Jewish community spoke mostly German and not Yiddish.[7]
Schur attended the German-speaking Nicolai Gymnasium in Libau from 1888 to 1894 and reached the top grade in his final examination, and received a gold medal.[8] Here he became fluent in German.
Education
In October 1894, Schur enrolled at the University of Berlin, concentrating in mathematics and physics. In 1901, he graduated summa cum laude under Frobenius and Lazarus Immanuel Fuchs with his dissertation On a class of matrices that can be assigned to a given matrix,[9] which contains a general theory of the representation of linear groups. According to Vogt,[10] he began to use the name Issai at this time. Schur thought that his chances of success in the Russian Empire were rather poor,[11] and because he spoke German so perfectly, he remained in Berlin. He habilitated in 1903 and became a lecturer at the University of Berlin, a position he held for the ten years from 1903 to 1913.[12]
In 1913 he accepted an appointment as associate professor and successor of Felix Hausdorff at the University of Bonn. In the following years Frobenius tried various ways to get Schur back to Berlin. Among other things, Schur's name was mentioned in a letter dated 27 June 1913[13] from Frobenius to Robert Gnehm (the School Board President of the ETH) as a possible successor to Carl Friedrich Geiser.[14] Frobenius complained that they had never followed his advice before and then said: "That is why I can't even recommend Prof. J. Schur (now in Bonn) to you. He's too good for Zurich, and should be my successor in Berlin". Hermann Weyl got the job in Zurich. The efforts of Frobenius were finally successful in 1916, when Schur succeeded Johannes Knoblauch as adjunct professor. Frobenius died a year later, on 3 August 1917. Schur and Carathéodory were both named as frontrunners to succeed him, but in the end Constantin Carathéodory was chosen. In 1919 Schur finally received a personal professorship, and in 1921 he took over the chair of the retired Friedrich Hermann Schottky. In 1922, he was also elected to the Prussian Academy of Sciences.
During the time of Nazism
After the Nazi takeover and the elimination of the parliamentary opposition, the Law for the Restoration of the Professional Civil Service of 7 April 1933 prescribed the dismissal of all public servants who held unpopular political opinions or who were of "Jewish" origin; a subsequent regulation[15] extended this to professors and therefore also to Schur. Schur was suspended and excluded from the university system. His colleague Erhard Schmidt fought for his reinstatement, and since Schur had been a Prussian official before the First World War,[16] he was allowed to take part in certain special teaching duties again in the winter semester of 1933/1934. Schur withdrew his application for leave from the Science Minister and passed up the offer of a visiting professorship at the University of Wisconsin–Madison for the academic year 1933–34.[17] One element that likely played a role in the rejection of the offer was that Schur no longer felt he could cope with the demands that would have come with a new beginning in an English-speaking environment.[18]
Already in 1932, Schur's daughter Hilde had married the doctor Chaim Abelin in Bern.[19] As a result, Issai Schur visited his daughter in Bern several times. In Zurich he often met with George Pólya, with whom he had been on friendly terms since before the First World War.[20]
On such a trip to Switzerland in the summer of 1935, a letter reached Schur from Ludwig Bieberbach, signed on behalf of the Rector, stating that Schur should urgently seek him out at the University of Berlin.[21] They needed to discuss an important matter with him. It involved Schur's dismissal on 30 September 1935.[22]
Schur remained a member of the Prussian Academy of Sciences after his release as a professor, but a little later he lost this last remnant of his official position. Due to an intervention by Bieberbach in the spring of 1938, he was forced to declare his resignation from the commissions of the Academy.[23] His membership of the Advisory Board of Mathematische Zeitschrift was ended in early 1939.[24]
Emigration
Schur found himself lonely after the flight of many of his students and the expulsion of renowned scientists from his previous place of work. Only Dr. Helmut Grunsky had been friendly to him, as Schur reported in the late thirties to his expatriate student Max Menachem Schiffer.[25] The Gestapo was everywhere. Since Schur had announced to his wife his intention to commit suicide in case of a summons to the Gestapo,[26] in the summer of 1938 his wife intercepted his letters, among them a summons from the Gestapo; she sent Issai Schur to a rest home outside Berlin and, equipped with a medical certificate, appeared before the Gestapo in place of her husband. There they flatly asked why the Schurs were still staying in Germany. But there were economic obstacles to the planned emigration: emigrating Germans had to pay the Reich Flight Tax before departure, which amounted to a quarter of their assets. Schur's wife had inherited a mortgage on a house in Lithuania which, because of Lithuanian foreign-exchange regulations, could not be repaid; on the other hand, Schur was forbidden to waive the debt or cede the mortgage to the German Reich. Thus the Schurs lacked cash and liquid assets. Finally, the missing sum of money was somehow supplied, and to this day it does not seem to be clear who the donors were.
Schur was able to leave Germany in early 1939.[27] His health, however, was already severely compromised. He traveled in the company of a nurse to his daughter in Bern, where his wife also followed a few days later. There they remained for several weeks and then emigrated to Palestine. Two years later, on his 66th birthday, on 10 January 1941, he died in Tel Aviv of a heart attack.
Work
Schur continued the work of his teacher Frobenius with many important contributions to group theory and representation theory. In addition, he published important results and elegant proofs of known results in almost all branches of classical algebra and number theory. His collected works[28] are proof of this; there his work on the theory of integral equations and infinite series can also be found.
Linear groups
In his doctoral thesis Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen Issai Schur determined the polynomial representations of the general linear group $GL(n,\mathbb {C} )$ on the field $\mathbb {C} $ of complex numbers. The results and methods of this work are still relevant today.[29] In his book, J.A. Green determined the polynomial representations of $GL(n,\mathbb {K} )$ over infinite fields $\mathbb {K} $ with arbitrary characteristic.[30] It is mainly based on Schur's dissertation. Green writes, "This remarkable work (of Schur) contained many very original ideas, developed with superb algebraic skill. Schur showed that these (polynomial) representations are completely reducible, that each irreducible one is "homogeneous" of some degree $r\geq 0$, and that the equivalence types of irreducible polynomial representations of $GL_{n}(\mathbb {C} )$, of fixed homogeneous degree $r$, are in one-one correspondence with the partitions $\lambda =(\lambda _{1},\ldots ,\lambda _{n})$ of $r$ into not more than $n$ parts. Moreover Schur showed that the character of an irreducible representation of type $\lambda $ is given by a certain symmetric function ${\underline {S}}_{\lambda }$ in $n$ variables (since described as a "Schur function")." According to Green, the methods of Schur's dissertation today are important for the theory of algebraic groups.
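The Schur functions ${\underline {S}}_{\lambda }$ mentioned in Green's quotation can be computed quite concretely. The following short Python sketch (an illustration added here, not part of the original article; the helper names `det` and `schur` are ours) evaluates a Schur polynomial at a numerical point with pairwise-distinct coordinates via Schur's bialternant formula, the ratio of alternants $\det(x_i^{\lambda_j+n-j})/\det(x_i^{n-j})$:

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Determinant via the Leibniz formula (adequate for small matrices)."""
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        # sign of the permutation from its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = Fraction(-1) ** inv
        for i in range(n):
            term *= m[i][p[i]]
        total += term
    return total

def schur(lam, xs):
    """Evaluate the Schur polynomial s_lambda at the point xs
    (coordinates must be pairwise distinct) using the bialternant
    formula det(x_i^(lam_j + n - j)) / det(x_i^(n - j))."""
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))  # pad the partition with zeros
    num = det([[Fraction(x) ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs])
    den = det([[Fraction(x) ** (n - 1 - j) for j in range(n)] for x in xs])  # Vandermonde
    return num / den

print(schur((1,), (2, 3)))    # x1 + x2 at (2, 3) -> 5
print(schur((2,), (2, 3)))    # x1^2 + x1*x2 + x2^2 at (2, 3) -> 19
print(schur((1, 1), (2, 3)))  # x1*x2 at (2, 3) -> 6
```

A quick consistency check: by the Pieri rule, $s_{(1)}\cdot s_{(2)} = s_{(3)} + s_{(2,1)}$, which the sketch reproduces numerically at any test point.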
In 1927 Schur, in his work On the rational representations of the general linear group, gave new proofs for the main results of his dissertation. If $V$ is the natural $n$-dimensional $\mathbb {C} $ vector space on which $GL(n,\mathbb {C} )$ operates, and if $r$ is a natural number, then the $r$-fold tensor product $V^{\otimes r}$ over $\mathbb {C} $ is a $GL(n,\mathbb {C} )$-module, on which the symmetric group $S_{r}$ of degree $r$ also operates by permutation of the tensor factors of each generator $v_{1}\otimes \ldots \otimes v_{r}$ of $V^{\otimes r}$. By exploiting these $S_{r}-GL(n,\mathbb {C} )$-bimodule actions on $V^{\otimes r}$, Schur managed to find elegant proofs of his theorems. This work of Schur was once very well known.
Professorship in Berlin
Schur lived in Berlin as a highly respected member of the academic world, an apolitical scholar. A leading mathematician and an outstanding and very successful teacher, he held a prestigious chair at the University of Berlin for 16 years.[31] Until 1933, his research group had an excellent reputation at the University of Berlin, in Germany and beyond. With Schur at its center, his group worked on representation theory, which was extended by his students in different directions (including solvable groups, combinatorics, matrix theory).[32] Schur made fundamental contributions to algebra and group theory which, according to Hermann Weyl, were comparable in scope and depth to those of Emmy Noether (1882–1935).[33]
When Schur's lectures were canceled in 1933, there was an outcry among the students and professors who appreciated and liked him.[34] Through the efforts of his colleague Erhard Schmidt, Schur was allowed to continue lecturing until the end of September 1935 for the time being.[35] Schur was the last Jewish professor to lose his position at this time.[36]
Zurich lecture
In Switzerland, Schur's colleagues Heinz Hopf and George Pólya were informed of the dismissal of Schur in 1935. They tried to help as best they could.[37] On behalf of Michel Plancherel, the head of the Mathematical Seminar, on 12 December 1935[38] the school board president Arthur Rohn invited Schur to give a series of lectures on the representation theory of finite groups. At the same time Plancherel asked that the formal invitation should come from President Rohn, since Prof. Schur had to obtain the authorization of the competent ministry to give these lectures. George Pólya brought this invitation of the Mathematical Seminar before the Conference of the Department of Mathematics and Physics on 16 December.[39] Meanwhile, on 14 December, the official invitation letter from President Rohn had already been dispatched to Schur.[40] Schur was promised a fee of CHF 500 for his guest lectures.
Schur did not reply until 28 January 1936, by which day he had finally obtained the required approval of the local authority.[41] He declared himself willing to accept the invitation, and envisaged beginning the lectures on 4 February.[42] Schur spent most of the month of February in Switzerland. Before his return to Germany he visited his daughter in Bern for a few days, and on 27 February he returned to Berlin via Karlsruhe, where his sister lived. In a letter to Pólya from Bern, he concludes with the words: I bid farewell to Switzerland with a heavy heart.[43]
In Berlin, meanwhile, Ludwig Bieberbach, in a letter dated 20 February 1936, informed the Reich Minister for Science, Art, and Education about Schur's journey, and announced that he wanted to find out what the content of the Zurich lectures had been.[44]
Significant students
Schur had a total of 26[45] graduate students, some of whom acquired a mathematical reputation. Among them are
• Alfred Brauer, University of Berlin (1928)
• Richard Brauer, University of Berlin (1925)
• Karl Dörge, University of Berlin (1925)
• Bernhard Neumann, University of Berlin, Cambridge University (1932, 1935)
• Félix Pollaczek, University of Berlin (1922)
• Heinz Prüfer, University of Berlin, (1921)
• Richard Rado, University of Berlin, Cambridge University (1933, 1935)
• Isaac Jacob Schoenberg, Alexandru Ioan Cuza University of Iaşi (1926)
• Wilhelm Specht, University of Berlin (1932)
• Helmut Wielandt, University of Berlin (1935)
Legacy
Concepts named after Schur
Among others, the following concepts are named after Issai Schur:
• List of things named after Issai Schur
• Schur algebra
• Schur complement
• Schur index
• Schur indicator
• Schur multiplier
• Schur orthogonality relations
• Schur polynomial
• Schur product
• Schur test
• Schur's inequality
• Schur's theorem
• Schur-convex function
• Schur–Weyl duality
• Lehmer–Schur algorithm
• Schur's property for normed spaces.
• Jordan–Schur theorem
• Schur–Zassenhaus theorem
• Schur triple
• Schur decomposition
• Schur's lower bound
Quotes
In his commemorative speech, Alfred Brauer (a doctoral student of Schur) spoke about Issai Schur as follows:[46] As a teacher, Schur was excellent. His lectures were very clear, but not always easy and required cooperation – During the winter semester of 1930, the number of students who wanted to attend Schur's number theory lecture was such that the second largest university lecture hall, with about 500 seats, was too small. His most human characteristics were probably his great modesty, his helpfulness and his human interest in his students.
Heinz Hopf, who before his appointment at the ETH in Zurich had been a Privatdozent in Berlin, held Issai Schur in high esteem both as a mathematician and as a man – as is clear from oral statements and also from letters. This appreciation was entirely mutual: in a letter of 1930 to George Pólya on the occasion of the re-appointment of Hermann Weyl, Schur says of Hopf: Hopf is a very excellent teacher, a mathematician of strong temperament and strong effect, a master of his discipline, excellently trained in other areas as well. – If I have to characterize him as a man, it may suffice if I say that I sincerely look forward to each meeting with him.
Schur was, however, known for keeping a correct distance in personal affairs. The testimony of Hopf is in accordance with statements of Schur's former students in Berlin, Walter Ledermann and Bernhard Neumann.[47]
Publications
• Schur, Issai (1968), Grunsky, Helmut (ed.), Vorlesungen über Invariantentheorie, Die Grundlehren der mathematischen Wissenschaften, vol. 143, Berlin, New York: Springer-Verlag, ISBN 9780387041391, MR 0229674
• Schur, Issai (1973), Brauer, Alfred; Rohrbach, Hans (eds.), Gesammelte Abhandlungen, Berlin, New York: Springer-Verlag, ISBN 978-3-540-05630-0, MR 0462891
Notes
1. Ledermann, Walter, and Neumann, Peter M.; "The Life of Issai Schur through Letters and Other Documents", in Joseph, Melnikov, Rentschler (2003), p. 45.
2. Ledermann, W. (1983). "Issai Schur and his school in Berlin". Bull. London Math. Soc. 15 (2): 97–106. doi:10.1112/blms/15.2.97.
3. Vogt, Annette. Issai Schur: als Wissenschaftler vertrieben. In Schoeps, Grozinger & Mattenklott [401, S. 217–235 (1999)]
4. The Kopelman Foundation. Mogiljow. JewishGen Belarus SIG, on The Jewish Encyclopedia Web site www.jewishgen.org/belarus/je_mogilev.htm conceived, created, and funded by The Kopelman Foundation, accessed 28 December 2003.
5. Blaushild, Immanuel. Libau. In Snyder [423, §1 (c. 1995)]
6. Snyder, Stephen, project coordinator. A Town Named Libau (Liepaja, Latvia). JewishGen Web site www.Jewlshgen.org/ylzkor/libau/libau.html accessed 27 December 2003. (Translation of the 36-page booklet: A Town Named Libau in English, German and Hebrew and additional material about Libau, Editor and Publisher of booklet unknown, believed to have been published in Israel, 1985.)
7. Beare, Arlene, ed. History of Latvia and Courland Web site accessed 1 March 2004: www.jewishgen.org/Latvia/SIG_History_of_Latvia_and_Courland.html (This history is derived from a few sources including [38] but mainly edited from the presentation made by Ruvin Ferber at the 21st International Conference of Jewish Genealogy held in London in July 2001.)
8. Cf. Vogt, Annette.
9. Schur, Issai. Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen. Doctoral dissertation, Universität Berlin, 1901; reprinted in Brauer & Rohrbach [71, Band I, pp. 1–72 (1973)]
10. Cf. Vogt, Annette.
11. Chandler, Bruce; Magnus, Wilhelm. The History of Combinatorial Group Theory: A Case Study in the History of Ideas. Studies in the History of Mathematics and Physical Sciences 9. Springer-Verlag, New York, 1982.
12. Cf. the biography of the Leopoldina Carolina.
13. Hermann Weyl: Nachlaß. Handschriften und Nachlässe, ETH Bibliothek, 1006:1.
14. Carl Friedrich Geiser (1843–1934), who had obtained his doctorate under Ludwig Schläfli in Bern, was full professor at the Federal Polytechnic in Zurich from 1873 to 1913.
15. 3rd Ordinance for the Implementation of the Professional Civil Service Law, 6 May 1933, RGBl. I p. 245f.
16. These so-called "old officials" were at first exempted from dismissal on grounds of Jewish descent; 1st Ordinance for the Implementation of the Professional Civil Service Law, 11 April 1933, RGBl. I p. 195.
17. Walter Ledermann, Peter M. Neumann: The Life of Issai Schur through Letters and other Documents. In Anthony Joseph et al., Studies in Memory of Issai Schur, Birkhäuser 2003. Letter of the Ministry of 11 September 1933; letter from Schur of 15 September 1933.
18. This view is put forward by Alfred Brauer in his memorial address.
19. Schur had been married since 1906 to the physician Regina Frumkin. The marriage produced two children, Georg and Hilde. Georg, who was somewhat older than Hilde, studied physics and later worked as an actuary in Israel.
20. George Pólya (1887–1985) had habilitated at the ETH in 1914, after his studies in Budapest and stays in Göttingen and Paris. In 1928 he was appointed full professor. From 1940 he worked in the USA, lastly at Stanford University. – His acquaintance with Schur goes back to the time before the First World War: numerous letters from Schur to Pólya from the years 1913/14 are preserved in the Stanford University Libraries.
21. Communication from Mrs Susanne Abelin, granddaughter of Issai Schur, summer 2001. The letter of 20 August 1935 is reproduced in Walter Ledermann, Peter M. Neumann: The Life of Issai Schur through Letters and other Documents. In Anthony Joseph et al., Studies in Memory of Issai Schur, Birkhäuser 2003, p. lxxii.
22. The certificate of dismissal, signed by Hitler and Göring, is dated 28 September 1935. See Walter Ledermann, Peter M. Neumann: The Life of Issai Schur through Letters and other Documents. In Anthony Joseph et al., Studies in Memory of Issai Schur, Birkhäuser 2003, p. lxxiv. Under the Reich Citizenship Law the dismissal would in any case have been decreed by 31 December 1935 at the latest.
23. The episode is described in the book by Reinhard Siegmund-Schultze: Mathematiker auf der Flucht vor Hitler. Dokumente zur Geschichte der Mathematik, vol. 10. Deutsche Mathematiker-Vereinigung, Vieweg, 1998, pp. 69–70; the declaration of resignation is dated 6 April 1938. The book moreover contains further interesting details on Schur's situation in the 1930s.
24. See Volker R. Remmert: Mathematical Publishing in the Third Reich. Math. Intelligencer 22 (3) 2000, pp. 22–30.
25. "Long after the war, I talked to Grunsky about that remark and he literally started to cry: You know what I did? I sent him a postcard to congratulate him on his sixtieth birthday. I admired him so much and was very respectful in that card. How lonely he must have been to remember such a small thing", Schiffer, Menachem Max; Issai Schur. Some Personal Reminiscences (1986); 1998 in: Begehr, H. (ed.), Mathematik in Berlin. Geschichte und Dokumentation, Aachen 1998.
26. On this and for what follows, see Alfred Brauer's memorial address.
27. Cf. the letter of the Reich Minister for Science, Education and National Culture to Issai Schur of 24 February 1939. Walter Ledermann, Peter M. Neumann: The Life of Issai Schur through Letters and other Documents. In Anthony Joseph et al., Studies in Memory of Issai Schur, Birkhäuser 2003, p. lxxxi.
28. Published by Alfred Brauer and Hans Rohrbach.
29. See Festschrift der DMV, p. 549.
30. Polynomial representations of $GL(n)$ ISBN 978-0-387-10258-0
31. Cf. Chandler, Bruce; Magnus, Wilhelm.
32. Brüning, Jochen; Ferus, Dirk; Siegmund-Schultze, Reinhard. Terror and Exile: Persecution and Expulsion of Mathematicians from Berlin between 1933 and 1945. An Exhibition on the Occasion of the International Congress of Mathematicians, Technische Universität Berlin, 19 to 27 August 1998, Deutsche Mathematiker-Vereinigung, Berlin, 1998.
33. Pinl, Max; Furtmüller, Lux. Mathematicians under Hitler, p. 178.
34. Cf. Brüning, Jochen, p. 27.
35. Pinl, Max; Furtmüller, Lux. Mathematicians under Hitler, p. 178.
36. Soifer, Alexander. Issai Schur: Ramsey theory before Ramsey. Geombinatorics, 5 (1995), 6–23.
37. Urs Stammbach: Die Zürcher Vorlesung von Issai Schur über Darstellungstheorie, p. xiii, ETH-Bibliothek 2004.
38. Schulratsarchiv der ETH-Zürich. Akten 1935/36, ETH-Bibliothek.
39. Protokolle der Abteilung IX, Mathematik und Physik. Protokolle der Konferenzen der Abt. IX, Hs 1079:3, Handschriften und Nachlässe, ETH-Bibliothek Zürich
40. Schulratsarchiv der ETH-Zürich. Missiven 1935, 3119, ETH-Bibliothek
41. Schulratsarchiv der ETH-Zürich. Akten 1935/36, ETH-Bibliothek
42. According to a curriculum vitae written later – see Walter Ledermann, Peter M. Neumann: The Life of Issai Schur through Letters and other Documents. In Anthony Joseph et al., Studies in Memory of Issai Schur, Birkhäuser 2003, p. lxxvii – the lectures took place between 4 and 18 February.
43. See Department of Special Collections and University Archives, Stanford University Libraries, 26 February 1936.
44. The episode is described in Charles Curtis: Pioneers of Representation Theory. History of Mathematics vol. 15, Amer. Math. Soc./London Math. Soc. 1999, p. 131.
45. See the Mathematics Genealogy Project, North Dakota State University.
46. Memorial address of 8 November 1960, delivered at the Schur memorial ceremony during the 150th-anniversary celebrations of the University of Berlin. See Issai Schur: Gesammelte Abhandlungen, pp. v–xiv. Alfred Brauer received his doctorate under Schur in 1928.
47. See Interview with Bernhard Neumann, Newsletter of the European Mathematical Society, 39, March 2001, 9–11; Walter Ledermann: Issai Schur and his school in Berlin, Bull. London Math. Soc. 15 (1983), 97–106. Bernhard Neumann received his doctorate in 1932; Walter Ledermann passed the examination for teaching candidates in 1933.
References
• Curtis, Charles W. (2003), Pioneers of Representation Theory: Frobenius, Burnside, Schur, and Brauer, History of Mathematics, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2677-5, MR 1715145Review
• Joseph, Anthony; Melnikov, Anna; Rentschler, Rudolf, eds. (2003), Studies in memory of Issai Schur, Progress in Mathematics, vol. 210, Boston, MA: Birkhäuser Boston, doi:10.1007/978-1-4612-0045-1, ISBN 978-0-8176-4208-2, MR 1985184
External links
• O'Connor, John J.; Robertson, Edmund F., "Issai Schur", MacTutor History of Mathematics Archive, University of St Andrews
• Issai Schur at the Mathematics Genealogy Project
\begin{document}
\title{An elliptic semilinear equation with source term and boundary measure data: the supercritical case } \author{ {\bf Marie-Fran\c{c}oise Bidaut-V\'eron\thanks{E-mail address: [email protected]}}\\[0.5mm]
{\bf Giang Hoang\thanks{ E-mail address: [email protected] }}\\[0.5mm] {\bf Quoc-Hung Nguyen\thanks{ E-mail address: [email protected] }}\\[0.5mm] {\bf Laurent V\'eron\thanks{ E-mail address: [email protected]}}\\[2mm] {\small Laboratoire de Math\'ematiques et Physique Th\'eorique, }\\ {\small Universit\'e Fran\c{c}ois Rabelais, Tours, FRANCE}} \date{}
\maketitle \begin{abstract} We give new criteria for the existence of weak solutions to an equation with a superlinear source term \begin{align*} -\Delta u = u^q ~~\text{in}~\Omega,~~u=\sigma~~\text{on }~\partial\Omega \end{align*} where $\Omega$ is either a bounded smooth domain or $\mathbb{R}_+^{N}$, $q>1$ and $\sigma\in \mathfrak{M}^+(\partial\Omega)$ is a nonnegative Radon measure on $\partial\Omega$. One of the criteria we obtain is expressed in terms of some Bessel capacities on $\partial\Omega$. We also give a sufficient condition for the existence of weak solutions to an equation with mixed source terms.
\begin{align*}
-\Delta u = |u|^{q_1-1}u|\nabla u|^{q_2} ~~\text{in}~\Omega,~~u=\sigma~~\text{on }~\partial\Omega
\end{align*}
where $q_1,q_2\geq 0, q_1+q_2>1, q_2<2$, $\sigma\in \mathfrak{M}(\partial\Omega)$ is a Radon measure on $\partial\Omega$. \end{abstract} \section{Introduction and main results} Let $\Omega$ be a bounded smooth domain in $\mathbb{R}^N$ or $\Omega=\mathbb{R}_+^{N}:=\mathbb{R}^{N-1}\times(0,\infty)$, $N\geq 3$, and $g:\mathbb{R}\times\mathbb{R}^N\mapsto \mathbb{R}$ be a continuous function. In this paper, we study the solvability of the problem \begin{equation}\label{121120149b} \begin{array}{lll} -\Delta u = g(u,\nabla u)\qquad&\text{in}~\Omega, \\ \phantom{-\Delta} u = \sigma\qquad&\text{on }~\partial\Omega,\\ \end{array}\end{equation} where $\sigma\in \mathfrak{M}(\partial\Omega)$ is a Radon measure on $\partial\Omega$. All solutions are understood in the usual very weak sense, which means that $u\in L^1_{}(\Omega)$, $g(u,\nabla u)\in L^1_{\rho}(\Omega)$, where $\rho(x)$ is the distance from $x$ to $\partial\Omega$ when $\Omega$ is bounded, or $u\in L^1_{}(\mathbb{R}_+^{N}\cap B)$, $g(u,\nabla u)\in L^1_{\rho}(\mathbb{R}_+^{N}\cap B)$ for any ball $B$ if $\Omega=\mathbb{R}_+^{N}$, and \begin{align} \int_\Omega u (-\Delta \xi)dx =\int_\Omega g(u,\nabla u)\xi dx-\int_{\partial\Omega}\frac{\partial \xi}{\partial n}d\sigma \end{align} for any $\xi \in C^2(\overline{\Omega})\cap C_c(\mathbb{R}^N)$ with $\xi=0$ in $\Omega^c$, where $\rho(x)=\operatorname{dist}(x,\partial\Omega)$, $n$ is the outward unit vector on $\partial\Omega$. It is well-known that such a solution $u$ satisfies \begin{align*} u=\mathbf{G}[g(u,\nabla u)]+\mathbf{P}[\sigma]~~\text{a. e. in } \Omega, \end{align*} where $\mathbf{G}[.],\mathbf{P}[.]$, respectively the Green and the Poisson potentials associated to $-\Delta$ in $\Omega$, are defined from the Green and the Poisson kernels by \begin{align*} \mathbf{P}[\sigma](y)=\int_{\partial\Omega}\operatorname{P}(y,z)d\sigma(z), ~~\mathbf{G}[g(u,\nabla u)](y)=\int_{\Omega}\operatorname{G}(y,x)g(u,\nabla u)(x)dx, \end{align*} see \cite{MV5}.
Our main goal is to establish necessary and sufficient conditions for the existence of weak solutions of \eqref{121120149b} with boundary measure data, together with sharp pointwise estimates of the solutions. In the sequel we study two cases for the problem \eqref{121120149b}:
\noindent {\bf 1}- The pure power case
\begin{equation}\label{pow1} \begin{array}{lll}
-\Delta u = |u|^{q-1}u\qquad&\text{in}~\Omega, \\ \phantom{-\Delta} u = \sigma\qquad&\text{on }~\partial\Omega,\\ \end{array}\end{equation} with $u\geq 0$, $q>1$ and $\sigma\geq 0$.
\noindent {\bf 2}- The mixed gradient-power case \begin{equation}\label{pow2} \begin{array}{lll}
-\Delta u = |\nabla u|^{q_2}|u|^{q_1-1}u\qquad&\text{in}~\Omega, \\ \phantom{-\Delta} u = \sigma\qquad&\text{on }~\partial\Omega,\\ \end{array}\end{equation} with $q_1,q_2>0$, $q_1+q_2>1$ and $q_2<2$.
The problem \eqref{pow1} was first studied by Bidaut-V\'eron and Vivier \cite{BiVi} in the subcritical case $1<q<\frac{N+1}{N-1}$ with $\Omega$ bounded. They proved that \eqref{pow1} admits a nonnegative solution provided $\sigma(\partial\Omega)$ is small enough. They also proved that for any $\sigma\in\mathfrak M^+_b(\partial\Omega)$ there holds
\begin{equation}\label{pow3} \begin{array}{lll} {\bf G}[({\bf P}[\sigma])^q]\leq c\sigma(\partial\Omega){\bf P}[\sigma] \end{array}\end{equation} for some $c=c(N,p,q)>0$. Then Bidaut-V\'eron and Yarur \cite{BiYa} considered again the problem \eqref{pow1} in a bounded domain in a more general situation since they allowed both interior and boundary measure data, giving a complete description of the solutions in the subcritical case, and sufficient conditions for existence in the supercritical case. In particular they showed that the problem \eqref{pow1} has a solution if and only if \begin{equation}\label{pow4} \begin{array}{lll} {\bf G}[({\bf P}[\sigma])^q]\leq c{\bf P}[\sigma] \end{array}\end{equation} for some $c=c(N,q,\Omega)>0$, see \cite[Th 3.12-3.13, Remark 3.12]{BiYa}.
The absorption case, i.e. $g(u,\nabla u)=-|u|^{q-1}u$, has been studied by Gmira and V\'eron \cite{GmV} in the subcritical case (again $1<q<\frac{N+1}{N-1}$) and by Marcus and V\'eron in the supercritical case \cite{MV1}, \cite{MV4}, \cite{MV5}. The case $g(u,\nabla u)=-|\nabla u|^{q}$ was studied by Nguyen Phuoc and V\'eron \cite{NPhV} and recently extended to the case $g(u,\nabla u)=-|\nabla u|^{q_2}|u|^{q_1-1}u$ by Marcus and Nguyen Phuoc \cite{NPhM}. To our knowledge, the problem \eqref{pow2} has not yet been studied.
To state our results, let us introduce some notation. We write $A\lesssim B$ (resp. $A\gtrsim B$) if $A\leq CB$ (resp. $A\geq CB$) for some constant $C>0$ depending only on structural constants, and $A\asymp B$ if $A\lesssim B\lesssim A$. Various capacities will be used throughout the paper. Among them are the Riesz and Bessel capacities in $\mathbb{R}^{N-1}$, defined respectively by
\begin{align*}
\operatorname{Cap}_{I_{\gamma},s}(O)=\inf\left\{\int_{\mathbb{R}^{N-1}}f^sdy: f\geq 0, I_\gamma*f\geq \chi_{O} \right\},
\end{align*}
\begin{align*}
\operatorname{Cap}_{G_{\gamma},s}(O)=\inf\left\{\int_{\mathbb{R}^{N-1}}f^sdy: f\geq 0, G_\gamma*f\geq \chi_{O} \right\},
\end{align*}
for any Borel set $O\subset\mathbb{R}^{N-1}$, where $s>1,$ $I_\gamma, G_{\gamma}$ are the Riesz and the Bessel kernels in $\mathbb{R}^{N-1}$ with order $\gamma\in (0,N-1)$. We remark that
\begin{align}\label{051220143}
\operatorname{Cap}_{G_{\gamma},s}(O)\geq\operatorname{Cap}_{I_{\gamma},s}(O)\geq C|O|^{1-\frac{\gamma s}{N-1}}
\end{align}
for any Borel set $O\subset \mathbb{R}^{N-1}$ whenever $\gamma s<N-1$, where $C$ is a positive constant.
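For orientation, recall the standard two-sided estimate for the Riesz capacity of a ball (see e.g. \cite{55AH}), which shows that \eqref{051220143} is sharp, up to constants, on balls: for $0<\gamma s<N-1$,
\begin{align*}
\operatorname{Cap}_{I_{\gamma},s}(B'_r(y'))\asymp r^{N-1-\gamma s}=|B'_r(y')|^{1-\frac{\gamma s}{N-1}},
\end{align*}
where $B'_r(y')$ denotes the ball of center $y'$ and radius $r$ in $\mathbb{R}^{N-1}$.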
When we consider equations in a bounded smooth domain $\Omega$ in $\mathbb{R}^N$ we use a specific capacity that we define as follows: there exist open sets $O_1,...,O_m$ in $\mathbb{R}^N$, diffeomorphisms $T_i:O_i\mapsto B_1(0)$ and compact sets $K_1,...,K_m$ in $\partial\Omega$ such that
\begin{description}
\item[a.] $K_i\subset O_i$, $\partial\Omega\subset \bigcup\limits_{i = 1}^m K_i$.
\item[b.] $T_i(O_i\cap \partial\Omega)=B_1(0)\cap \{x_N=0\}$, $T_i(O_i\cap \Omega)=B_1(0)\cap \{x_N>0\}$.
\item[c.] For any $x\in O_i\cap \Omega$ there exists $y\in O_i\cap\partial\Omega$ such that $\rho(x)=|x-y|$.
\end{description}
Clearly, $\rho(T_i^{-1}(z))\asymp|z_N|$ for any $z=(z',z_N)\in B_1(0)\cap \{x_N>0\}$ and $|\mathbf{J}_{T_i}(x)|\asymp 1$ for any $x\in O_i\cap \Omega$, here $\mathbf{J}_{T_i}$ is the Jacobian matrix of $T_i$. \\
\begin{definition}\label{131120144} Let $\gamma\in (0,N-1), s>1$. We define the $\operatorname{Cap}_{\gamma,s}^{\partial\Omega}$-capacity of a compact set $E\subset \partial\Omega$ by
\begin{align*}
\operatorname{Cap}_{\gamma,s}^{\partial\Omega}(E)=\sum_{i=1}^{m}\operatorname{Cap}_{G_{\gamma},s}(\tilde{T}_i(E\cap K_i)),
\end{align*}
where $T_i(E\cap K_i)=\tilde{T}_i(E\cap K_i)\times\{x_N=0\}$.
\end{definition}
Notice that, if $\gamma s>N-1$ then there exists $C=C(N,\gamma,s,\Omega)>0$ such that \begin{align}\label{051220144}
\operatorname{Cap}_{\gamma,s}^{\partial\Omega}(\{x\})\geq C
\end{align} for all $x\in \partial\Omega$. Also the definition does not depend on the choice of the sets $O_i$.
Our first two theorems give criteria for the solvability of the problem \eqref{121120149b} in $\mathbb{R}^N_+$.
\begin{theorem}\label{121120142} Let $q>1$ and $\sigma\in \mathfrak{M}_b^+(\mathbb{R}^{N-1})$. Then the following statements are equivalent:
\begin{description}
\item[{\bf 1.}] There exists $C>0$ such that the inequality \begin{align}\label{121120143} \sigma(K)\leq C \operatorname{Cap}_{I_{\frac{2}{q}},q'}(K)\end{align} holds for any compact set $K\subset\mathbb{R}^{N-1}$.
\item[{\bf 2.}] There exists $C>0$ such that the relation
\begin{align}\label{121120144} \mathbf{G}\left[\left(\mathbf{P}[\sigma]\right)^q\right]\leq C \mathbf{P}[\sigma]<\infty \quad\text{a.e. in } \mathbb{R}^{N}_+
\end{align}
holds.
\item[{\bf 3.}] The problem
\begin{equation}\label{121120145} \begin{array}{lll}
-\Delta u = u^q ~~&\text{ in }~\mathbb{R}^{N}_+, \\
\phantom{ -\Delta}
u = \varepsilon\sigma\quad ~&\text{ on }~\partial\mathbb{R}^{N}_+,\\
\end{array} \end{equation}
has a positive solution for $\varepsilon>0$ small enough.
\end{description}
\noindent Moreover, there is a constant $C_0>0$ such that if any one
of the two statements ${\bf 1}$ and ${\bf 2}$ holds with $C\leq C_0$, then equation \eqref{121120145}
admits a solution $u$ with $\varepsilon=1$ which satisfies
\begin{align}
u\asymp \mathbf{P}[\sigma].
\end{align}
Conversely, if \eqref{121120145} has
a solution $u$ with $\varepsilon=1$, then the two statements ${\bf 1}$ and ${\bf 2}$ hold for some $C>0$.
\end{theorem}
As a consequence of Theorem \ref{121120142} when $g(u,\nabla u)=|u|^{q-1}u$ ($q>1$) and $\Omega=\mathbb R^{N}_+$, we prove that if \eqref{pow1} has a nonnegative solution $u$ with $\sigma\in \mathfrak M^+_b(\mathbb{R}^{N-1})$, then
\begin{align}\label{XX} \sigma(B_r^{'}(y'))\leq C r^{N-\frac{q+1}{q-1}} \end{align}
for any ball $B^{'}_r(y')$ in $\mathbb{R}^{N-1}$ where $C=C(q,N)$ and $q>\frac{N+1}{N-1}$; if $1<q\leq \frac{N+1}{N-1}$, then $\sigma\equiv 0$. Conversely, if $q>\frac{N+1}{N-1}$, $d\sigma=fdz$ for some $f\geq 0$ which satisfies
\begin{align}\label{051220142}
\int_{B_r^{'}(y')}f^{1+\varepsilon} dz\leq Cr^{N-1-\frac{2(\varepsilon+1)}{q-1}}
\end{align}
for some $\varepsilon>0$, then there exists a constant $C_0=C_0(N,q)$ such that \eqref{121120149b} has a nonnegative solution if $C\leq C_0$. The above inequality is an analogue of the classical Fefferman-Phong condition \cite{Fe}. In particular, \eqref{051220142} holds if $f$ belongs to the Lorentz space $L^{\frac{(N-1)(q-1)}{2},\infty}(\mathbb{R}^{N-1})$.\\
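The last claim follows from a standard computation, which we sketch for the reader's convenience: set $p:=\frac{(N-1)(q-1)}{2}$, which is larger than $1$ since $q>\frac{N+1}{N-1}$. If $f\in L^{p,\infty}(\mathbb{R}^{N-1})$ and $0<\varepsilon<p-1$, the weak-type estimate $\int_{E}f^{1+\varepsilon}dz\lesssim \|f\|^{1+\varepsilon}_{L^{p,\infty}}|E|^{1-\frac{1+\varepsilon}{p}}$ applied with $E=B'_r(y')$ gives
\begin{align*}
\int_{B'_r(y')}f^{1+\varepsilon}dz\lesssim \|f\|^{1+\varepsilon}_{L^{p,\infty}}\,r^{(N-1)\left(1-\frac{1+\varepsilon}{p}\right)}=\|f\|^{1+\varepsilon}_{L^{p,\infty}}\,r^{N-1-\frac{2(\varepsilon+1)}{q-1}},
\end{align*}
which is \eqref{051220142} with $C\asymp \|f\|^{1+\varepsilon}_{L^{p,\infty}}$.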
We give sufficient conditions for the existence of weak solutions to \eqref{121120149b} when $g(u,\nabla u)=|u|^{q_1-1}u|\nabla u|^{q_2}$, $q_1,q_2\geq 0$, $q_1+q_2>1$ and $q_2<2$.
\begin{theorem}\label{1311201437}
Let $q_1,q_2\geq 0$, $q_1+q_2>1$, $q_2<2$ and $\sigma\in \mathfrak M(\mathbb{R}^{N-1})$ such that $\mathbf{P}[|\sigma|]<\infty$ a.e. in $\mathbb{R}^{N}_+$. Assume that
there exists $C>0$ such that for any Borel set $K\subset\mathbb{R}^{N-1}$ there holds
\begin{align}\label{1311201438}
|\sigma|(K)\leq C \operatorname{Cap}_{I_{\frac{2-q_2}{q_1+q_2}},(q_1+q_2)'}(K).\end{align} Then the problem
\begin{equation}\label{1311201439} \begin{array}{lll}
-\Delta u = |u|^{q_1-1}u|\nabla u|^{q_2} ~~&\text{ in}~\mathbb{R}^{N}_+, \\
\phantom{ -\Delta }
u = \varepsilon\sigma\quad ~&\text{ on }~\partial\mathbb{R}^{N}_+,\\
\end{array}\end{equation}
has a solution for $\varepsilon>0$ small enough, which satisfies
\begin{align}\label{2110201427}
|u|\lesssim \mathbf{P}[|\sigma|],~~|\nabla u|\lesssim \rho^{-1} \mathbf{P}[|\sigma|]. \end{align}
\end{theorem}
\begin{remark} In view of \eqref{051220143}, if $d\sigma=fdz$ with $f\in L^{\frac{(N-1)(q_1+q_2-1)}{2-q_2},\infty}(\mathbb{R}^{N-1})$ and $(N-1)(q_1+q_2-1)>2-q_2$, then \eqref{1311201438} holds for some $C>0$ and the problem \eqref{1311201439} has a solution for $\varepsilon>0$ small enough. Note that condition \eqref{1311201438} already implies $\mathbf{P}[|\sigma|]<\infty$ a.e., see Theorem \ref{2010201420}. \end{remark}
In a bounded domain $\Omega$ we obtain existence results analogous to Theorems \ref{121120142} and \ref{1311201437}, provided the capacities on $\partial\Omega$ defined in Definition \ref{131120144} are used instead of the Riesz capacities.
\begin{theorem}\label{121120147} Let $q>1$, $\Omega\subset\mathbb{R}^N$ be a bounded domain with a $C^2$ boundary and $\sigma\in \mathfrak{M}^+(\partial\Omega)$. Then, the following statements are equivalent:
\begin{description}
\item[{\bf 1.}] There exists $C>0$ such that the inequality
\begin{align}\label{121120146}
\sigma(K)\leq C \operatorname{Cap}^{\partial\Omega}_{\frac{2}{q},q'}(K)\end{align} holds for any Borel set $K\subset\partial\Omega$. \item[{\bf 2.}] There exists $C>0$ such that the inequality
\begin{align}\label{121120148}
\mathbf{G}\left[\left(\mathbf{P}[\sigma]\right)^q\right]\leq C \mathbf{P}[\sigma]<\infty \quad\text{a.e. in } \Omega,
\end{align}
holds.
\item[{\bf 3.}] The problem \begin{equation}\label{121120149} \begin{array}{lll}
-\Delta u = u^q ~~&\text{ in}~\Omega,
\\\phantom{ -\Delta}
u = \varepsilon\sigma\quad &\text{ on }~\partial\Omega,\\
\end{array}
\end{equation}
admits a positive solution for $\varepsilon>0$ small enough.
\end{description}
Moreover, there is a constant $C_0>0$ such that if any one
of the two statements ${\bf 1}$ and ${\bf 2}$ holds with $C\leq C_0$, then equation \eqref{121120149}
has a solution $u$ with $\varepsilon=1$ which satisfies
\begin{align}
u\asymp \mathbf{P}[\sigma].
\end{align}
Conversely, if \eqref{121120149} has
a solution $u$ with $\varepsilon=1$, then the above two statements ${\bf 1}$ and ${\bf 2}$ hold for some $C>0$.
\end{theorem}
From \eqref{051220144}, we see that if $\sigma\in \mathfrak{M}^+(\partial\Omega)$ and $ 1<q<\frac{N+1}{N-1}$, then \eqref{121120146} holds for some constant $C>0$. Hence, in this case, the problem \eqref{121120149} has a positive solution for $\varepsilon>0$ small enough.
\begin{theorem}\label{1311201440}
Let $q_1,q_2\geq 0$, $q_1+q_2>1$, $q_2<2$, let $\Omega\subset \mathbb{R}^N$ be a bounded domain with a $C^2$ boundary and let $\sigma\in \mathfrak{M}(\partial\Omega)$. Assume that there exists $C>0$ such that the inequality
\begin{align}\label{2010201446}
|\sigma|(K)\leq C \operatorname{Cap}^{\partial\Omega}_{\frac{2-q_2}{q_1+q_2},(q_1+q_2)'}(K)\end{align} holds for any Borel set $K\subset\partial\Omega$. Then the problem
\begin{equation}\label{2010201447} \begin{array}{lll}
-\Delta u = |u|^{q_1-1}u|\nabla u|^{q_2}\qquad&\text{in}~\Omega, \\
\phantom{ -\Delta}
u = \varepsilon\sigma&\text{on }~\partial\Omega,\\
\end{array} \end{equation}
has a solution for $\varepsilon>0$ small enough which satisfies \eqref{2110201427}.
\end{theorem}
\begin{remark} A discussion of the optimality of this condition, as well as of that of Theorem \ref{1311201437}, is conducted in Remark \ref{opt}. We define the subcritical range by
\begin{equation}\label{subcri}
(N-1)q_1+Nq_2<N+1\quad\text{or equivalently }\;(N-1)(q_1+q_2-1)<2-q_2.
\end{equation} In the subcritical case, the problem \eqref{2010201447} has a solution for any measure $\sigma\in \mathfrak M_b(\partial\Omega)$ and $\varepsilon>0$ small enough.
\end{remark}
\section{Integral equations} Let $\Omega$ be either $\mathbb{R}^{N-1}\times (0,\infty)$ or a bounded domain in $\mathbb{R}^N$ with a $C^2$ boundary $\partial\Omega$. For $0\leq \alpha\leq \beta <N$, we denote
\begin{align}
\mathbf{N}_{\alpha,\beta}(x,y)=\frac{1}{|x-y|^{N-\beta}\max\left\{|x-y|,\rho(x),\rho(y)\right\}^\alpha}\qquad\forall (x,y)\in \overline{\Omega}\times\overline{\Omega}.
\end{align} We set
\begin{align*}
\mathbf{N}_{\alpha,\beta}[\nu
](x)=\int_{\overline{\Omega}}\mathbf{N}_{\alpha,\beta}(x,y)d\nu(y)\qquad\forall \nu\in\mathfrak{M}^+(\overline{\Omega}),
\end{align*}
and denote $\mathbf{N}_{\alpha,\beta}[f]:=\mathbf{N}_{\alpha,\beta}[fdx]$ if $f\in L^{1}_{loc}(\Omega),~f\geq 0$.
In this section, we are interested in the solvability of the following integral equation
\begin{align}
U=\mathbf{N}_{\alpha,\beta}\left[U^q(\rho(.))^{\alpha_0}\right]+\mathbf{N}_{\alpha,\beta}[\omega]
\end{align} where $\alpha_0\geq 0$ and $\omega \in\mathfrak{M}^+(\overline{\Omega})$. \\
We follow the deep ideas developed by Kalton and Verbitsky in \cite{KaVe}, who analyzed a PDE problem in the form of an integral equation.
They proved a number of properties of this integral equation which are crucial for our study and, for the sake of completeness, we recall them here.
Let $X$ be a metric space and $\nu\in \mathfrak{M}^+(X)$. Let $\mathbf{K}:X\times X\mapsto (0,\infty]$ be a positive Borel kernel function such that $\mathbf{K}$ is symmetric and $\mathbf{K}^{-1}$ satisfies a quasi-metric
inequality, i.e. there is a constant $C\geq 1$ such that for all
$x,y,z\in X$ we have
\begin{align*}
\frac{1}{\mathbf{K}(x,y)}\leq C\left(\frac{1}{\mathbf{K}(x,z)}+\frac{1}{\mathbf{K}(z,y)}\right).
\end{align*}
Under these conditions, we can define the quasi-metric $d$ by
$$d(x,y)=\frac{1}{\mathbf{K}(x,y)},$$ and denote by $\mathbb{B}_{r}(x)=\{y\in X\!:d(x,y)<r\}$ the open $d$-ball of radius $r>0$ and center $x$. Note that this set can be empty.
For $\omega\in\mathfrak{M}^+(X)$, we define the potentials $\mathbf{K}\omega$ and $\mathbf{K}^{\nu}f$ by
$$\mathbf{K}\omega(x)=\int_{X}\mathbf{K}(x,y)d\omega(y),~~\mathbf{K}^{\nu}f(x)=\int_{X}\mathbf{K}(x,y)f(y)d\nu(y),$$
and for $q>1$, the capacity $\operatorname{Cap}^\nu_{\mathbf{K},q'}$ in $X$ by
\begin{align*}
\operatorname{Cap}^\nu_{\mathbf{K},q'}(E)=\inf\left\{\int_{X}g^{q'}d\nu: g\geq 0 , \mathbf{K}^\nu g\geq \chi_E\right\},
\end{align*}
for any Borel set $E\subset X$.
\begin{theorem}[\cite{KaVe}] \label{151020148} Let $q>1$ and $\nu, \omega\in\mathfrak{M}^+(X)$ such that
\begin{align}\label{1510201410}
&~~~~~~~~~ \int_{0}^{2r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s}\leq C \int_{0}^{r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s},\\[4mm]
&
\sup_{y\in \mathbb{B}_r(x)}\int_{0}^{r}\frac{\nu(\mathbb{B}_s(y))}{s}\frac{ds}{s}\leq C \int_{0}^{r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s},\label{1510201410*}
\end{align}
for any $r>0$ and $x\in X$, where $C>0$ is a constant. Then the following statements are equivalent:
\begin{description}
\item[{\bf 1.}] The equation $u= \mathbf{K}^\nu u^q+\varepsilon \mathbf{K}\omega$ has a solution for some $\varepsilon>0$.
\item[{\bf 2.}] The inequality
\begin{align}\label{151020143}
\int_{E}(\mathbf{K}\omega_E)^{q}d\nu\leq C\omega(E)
\end{align}
holds for any Borel set $E\subset X$ where $\omega_E=\chi_E\omega$.
\item[{\bf 3.}] For any Borel set $E\subset X$, there holds
\begin{align}\label{1510201414} \omega(E)\leq C \operatorname{Cap}^\nu_{\mathbf{K},q'}(E). \end{align}
\item[{\bf 4.}] The inequality
\begin{align}\label{151020144}
\mathbf{K}^\nu\left(\mathbf{K}\omega\right)^q\leq C \mathbf{K}\omega<\infty \quad \nu\text{-a.e.}
\end{align}
holds.
\end{description}
\end{theorem}
We check below that $\mathbf{N}_{\alpha,\beta}$ satisfies all the assumptions made on $\mathbf{K}$ in Theorem \ref{151020148}.
\begin{lemma}\label{201020141} $\mathbf{N}_{\alpha,\beta}$ is symmetric and satisfies the quasi-metric inequality.
\end{lemma}
\begin{proof} Clearly, $\mathbf{N}_{\alpha,\beta}$ is symmetric. Now we check the quasi-metric inequality associated to $\mathbf{N}_{\alpha,\beta}$ and $X=\overline{\Omega}$. For any $x,z,y\in \overline{\Omega}$ such that $x\not= y\not= z$, we have \begin{align*}
|x-y|^{N-\beta+\alpha}&\lesssim |x-z|^{N-\beta+\alpha}+|z-y|^{N-\beta+\alpha}\\&\lesssim \frac{1}{\mathbf{N}_{\alpha,\beta}(x,z)}+\frac{1}{\mathbf{N}_{\alpha,\beta}(z,y)}. \end{align*}
Since $|\rho(x)-\rho(y)|\leq |x-y|$, there holds \begin{align*}
& |x-y|^{N-\beta}(\rho(x))^\alpha+|x-y|^{N-\beta}(\rho(y))^\alpha\lesssim |x-y|^{N-\beta}(\min\{\rho(x),\rho(y)\})^\alpha+|x-y|^{N-\beta+\alpha}\\&~~~~~~~\lesssim \left(|x-z|^{N-\beta}+|z-y|^{N-\beta}\right)(\min\{\rho(x),\rho(y)\})^\alpha+|x-z|^{N-\beta+\alpha}+|z-y|^{N-\beta+\alpha}\\&~~~~~~\lesssim\left((\rho(x))^\alpha|x-z|^{N-\beta}+|x-z|^{N-\beta+\alpha}\right)+\left((\rho(y))^\alpha|z-y|^{N-\beta}+|z-y|^{N-\beta+\alpha}\right)\\&~~~~~~~\lesssim \frac{1}{\mathbf{N}_{\alpha,\beta}(x,z)}+\frac{1}{\mathbf{N}_{\alpha,\beta}(z,y)}.
\end{align*}Thus,
\begin{align*}
\frac{1}{\mathbf{N}_{\alpha,\beta}(x,y)} \lesssim \frac{1}{\mathbf{N}_{\alpha,\beta}(x,z)}+\frac{1}{\mathbf{N}_{\alpha,\beta}(z,y)}.
\end{align*}
\end{proof}
Next we give sufficient conditions for \eqref{1510201410}, \eqref{1510201410*} to hold, in view of the applications that we develop in Sections 3 and 4.
\begin{lemma}\label{011220141} If $d\nu(x)= (\rho(x))^{\alpha_0}\chi_\Omega dx$ with $\alpha_0\geq 0$, then \eqref{1510201410} and \eqref{1510201410*} hold.
\end{lemma}
\begin{proof} It is easy to see that for any $x\in \overline{\Omega},~s>0$ \begin{align} B_{2^{-\frac{\alpha+1}{N-\beta}}S}(x)\cap\overline{\Omega}\subset \mathbb{B}_s(x)\subset B_{S}(x)\cap\overline{\Omega}, \end{align}
with $S=\min\{s^{\frac{1}{N-\beta+\alpha}},s^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}\}$, and $\mathbb{B}_s(x)=\overline{\Omega}$ when $s>2^{\frac{(\alpha+1)(N-\beta+\alpha)}{N-\beta}}(diam\,(\Omega))^{N-\beta+\alpha}$.\\ We show that for any $0\leq s<8diam\,(\Omega)$, $x\in\overline{\Omega}$
\begin{align}\label{161020147}
\nu(B_s(x))\asymp (\max\{\rho(x),s\})^{\alpha_0} s^N.
\end{align}Indeed, take $0\leq s<8diam\,(\Omega)$, $x\in\overline{\Omega}$.
There exist $\varepsilon=\varepsilon(\Omega)\in (0,1)$ and $x_s\in \Omega$ such that $B_{\varepsilon s}(x_s)\subset B_s(x)\cap \Omega$ and $\rho(x_s)>\varepsilon s$.
\noindent (a) If $0\leq s\leq \frac{\rho(x)}{4}$, then for any $y\in B_s(x)$, $
\rho(y)\asymp \rho(x)$. Thus we obtain \eqref{161020147} because
$$\nu(B_s(x))\asymp (\rho(x))^{\alpha_0} |B_s(x)\cap\Omega|\asymp (\rho(x))^{\alpha_0} s^N.$$
\noindent (b) If $s>\frac{\rho(x)}{4}$, since $\rho(y)\leq \rho(x)+|x-y|<5s$ for any $y\in B_s(x)$, there holds $\nu(B_s(x))\lesssim s^{N+\alpha_0}$ and we have the following dichotomy:
\noindent(b.1) either $s\leq 4\rho(x) $, then
$$\nu(B_s(x))\gtrsim \nu(B_{\frac{\rho(x)}{4}}(x))\asymp (\rho(x))^{\alpha_0+N} \gtrsim s^{N+\alpha_0} ;$$
(b.2) or $s\geq 4\rho(x)$,
we have for any $y\in B_{\varepsilon s/2}(x_s)$, $\rho(y)\geq -|y-x_s|+\rho(x_s)>\varepsilon s/2 $. It follows \begin{align*} \nu(B_s(x))\gtrsim \nu(B_{\varepsilon s/2}(x_s))\gtrsim s^{N+\alpha_0}. \end{align*} Therefore \eqref{161020147} holds.
\noindent Next, for any $0\leq s<2^{\frac{(\alpha+1)(N-\beta+\alpha)}{N-\beta}}(diam\,(\Omega))^{N-\beta+\alpha}$ and $x\in\overline{\Omega}$, we have
\begin{align*}
\nu(\mathbb{B}_s(x))&\asymp (\max\{\rho(x),\min\{s^{\frac{1}{N-\beta+\alpha}},s^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}\}\})^{\alpha_0} \\&~~\times\left(\min\{s^{\frac{1}{N-\beta+\alpha}},s^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}\}\right)^N\\&\asymp \left\{ \begin{array}{l} s^{\frac{\alpha_0+N}{N-\beta+\alpha}} ~~~~~~~~~~~~~~~~~\text{if}~\rho(x)\leq s^{\frac{1}{N-\beta+\alpha}}, \\ (\rho(x))^{\alpha_0-\frac{\alpha N}{N-\beta}}s^{\frac{N}{N-\beta}}~ ~\text{if }~\rho(x)\geq s^{\frac{1}{N-\beta+\alpha}},\\
\end{array} \right. \end{align*} and $ \nu(\mathbb{B}_s(x))=\nu(\overline{\Omega})\asymp (diam\,(\Omega))^{\alpha_0+N}$ if $s>2^{\frac{(\alpha+1)(N-\beta+\alpha)}{N-\beta}}(diam\,(\Omega))^{N-\beta+\alpha}$. We get, \begin{align*} \int_{0}^{r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s}\asymp \left\{ \begin{array}{l} (diam\,(\Omega))^{\alpha_0+\beta-\alpha} ~~~\text{if}~r> (diam\,(\Omega))^{N-\beta+\alpha}, \\
r^{\frac{\alpha_0+\beta-\alpha}{N-\beta+\alpha}} ~~~~~~~~~~~~~~~~\text{if}~r\in ((\rho(x))^{N-\beta+\alpha},(diam\,(\Omega))^{N-\beta+\alpha}], \\ (\rho(x))^{\alpha_0-\frac{\alpha N}{N-\beta}}r^{\frac{\beta}{N-\beta}}~ ~\text{if }~r\in (0,(\rho(x))^{N-\beta+\alpha}].\\
\end{array} \right. \end{align*} Therefore \eqref{1510201410} holds. It remains to prove \eqref{1510201410*}. For any $x\in \overline{\Omega}$ and $r>0$, it is clear that if $r>\frac{1}{2}(\rho(x))^{N-\beta+\alpha}$ we have
\begin{align*} \sup_{y\in \mathbb{B}_r(x)}\int_{0}^{r}\frac{\nu(\mathbb{B}_s(y))}{s}\frac{ds}{s}\lesssim \min\{r^{\frac{\alpha_0+\beta-\alpha}{N-\beta+\alpha}},(diam\,(\Omega))^{\alpha_0+\beta-\alpha} \}, \end{align*} from which we obtain \begin{align*}
\sup_{y\in \mathbb{B}_r(x)}\int_{0}^{r}\frac{\nu(\mathbb{B}_s(y))}{s}\frac{ds}{s}\lesssim \int_{0}^{r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s}.
\end{align*}
If $0<r\leq \frac{1}{2}(\rho(x))^{N-\beta+\alpha}$, we have $\mathbb{B}_r(x)\subset B_{r^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}}(x)$ and $\rho(x)\asymp\rho(y)$ for all $y\in B_{r^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}}(x)$, thus \begin{align*}
\sup_{y\in \mathbb{B}_r(x)}\int_{0}^{r}\frac{\nu(\mathbb{B}_s(y))}{s}\frac{ds}{s}&\leq \sup_{|y-x|<r^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}}\int_{0}^{r}\frac{\nu(\mathbb{B}_s(y))}{s}\frac{ds}{s}\\& \asymp \sup_{|y-x|<r^{\frac{1}{N-\beta}}(\rho(x))^{-\frac{\alpha}{N-\beta}}}(\rho(y))^{\alpha_0-\frac{\alpha N}{N-\beta}}r^{\frac{\beta}{N-\beta}}
\\& \asymp (\rho(x))^{\alpha_0-\frac{\alpha N}{N-\beta}}r^{\frac{\beta}{N-\beta}} \\& \asymp \int_{0}^{r}\frac{\nu(\mathbb{B}_s(x))}{s}\frac{ds}{s}.
\end{align*}
Therefore, \eqref{1510201410*} holds.
\end{proof}
\begin{remark} Lemmas \ref{201020141} and \ref{011220141} in the case $\alpha=\beta=2$ and $\alpha_0=q+1$ were already proved by Kalton and Verbitsky in \cite{KaVe}.
\end{remark}
\begin{definition}\label{zz}
For $\alpha_0\geq 0, 0\leq \alpha\leq \beta<N$ and $ s>1$, we define $\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}$ by \begin{align*}
\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E)=\inf\left\{\int_{\overline{\Omega}}g^{s}(\rho(x))^{\alpha_0}dx: g\geq 0 , \mathbf{N}_{\alpha,\beta}[g(\rho(.))^{\alpha_0}]\geq \chi_E\right\}
\end{align*}
for any Borel set $E\subset\overline{\Omega}$.
\end{definition}
Clearly, we have \begin{align*} \operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E)=\inf\left\{\int_{\overline{\Omega}}g^{s}(\rho(x))^{-\alpha_0(s-1)}dx: g\geq 0 , \mathbf{N}_{\alpha,\beta}[g]\geq \chi_E\right\}
\end{align*}
for any Borel set $E\subset\overline{\Omega}$. Furthermore we have by \cite[Theorem 2.5.1]{55AH}, \begin{align}\label{051220141}
\left(\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E)\right) ^{1/s}=\sup\left\{\omega (E):\omega\in \mathfrak{M}_b^+(E),||\mathbf{N}_{\alpha,\beta}[\omega]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}\leq 1\right\} \end{align} for any compact set $E\subset\overline{\Omega}$, where $s'$ is the conjugate exponent of $s$.
Thanks to Lemmas \ref{201020141} and \ref{011220141}, we can apply Theorem \ref{151020148} and we obtain:
\begin{theorem}\label{2010201420}
Let $\omega\in\mathfrak{M}^+(\overline{\Omega})$, $\alpha_0\geq 0, 0\leq \alpha\leq \beta<N$ and $q>1$. Then the following statements are equivalent:
\begin{description}
\item[{\bf 1.}] The equation $u= \mathbf{N}_{\alpha,\beta}[ u^q(\rho(.))^{\alpha_0}]+\varepsilon \mathbf{N}_{\alpha,\beta}[ \omega]$ has a solution for $\varepsilon>0$ small enough.
\item[{\bf 2.}] The inequality
\begin{align}\label{201020143}
\int_{E\cap\Omega}(\mathbf{N}_{\alpha,\beta}[\omega_E])^{q}(\rho(x))^{\alpha_0}dx\leq C\omega(E)
\end{align}
holds for some $C>0$ and any Borel set $E\subset\overline{\Omega}$, $\omega_E=\omega\chi_E$.
\item[{\bf 3.}] The inequality \begin{align}\label{201020147}
\omega(K)\leq C \operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},q'}(K)
\end{align}
holds for some $C>0$ and any compact set $K\subset\overline{\Omega}$.
\item[{\bf 4.}] The inequality
\begin{align}\label{201020146}
\mathbf{N}_{\alpha,\beta}\left[\left(\mathbf{N}_{\alpha,\beta}[\omega]\right)^q(\rho(.))^{\alpha_0}\right]\leq C\mathbf{N}_{\alpha,\beta}[\omega]<\infty \quad\text{a.e. in } \Omega
\end{align}
holds for some $C>0$.
\end{description}
\end{theorem}
To apply the previous theorem we need the following result.
\begin{proposition}\label{2010201434} Let $q>1$ and $\nu, \omega\in \mathfrak{M}^+(X)$. Suppose that $A_1,A_2,B_1,B_2:X\times X\mapsto [0,+\infty)$ are positive Borel kernel functions with $A_1\asymp A_2$, $B_1\asymp B_2$. Then the following statements are equivalent: \begin{description}
\item[{\bf 1.}] The equation $u=A^\nu_1u^q+\varepsilon B_1\omega\;$ $\nu$-a.e. has a solution for $\varepsilon>0$ small enough.
\item[{\bf 2.}] The equation $u=A^\nu_2u^q+\varepsilon B_2\omega\;$ $\nu$-a.e. has a solution for $\varepsilon>0$ small enough.
\item[{\bf 3.}] The relation $u\asymp A^\nu_1u^q+\varepsilon B_1\omega\;$ $\nu$-a.e. has a solution for $\varepsilon>0$ small enough.
\item[{\bf 4.}] The inequality $u\gtrsim A^\nu_1u^q+\varepsilon B_1\omega\;$ $\nu$-a.e. has a solution for $\varepsilon>0$ small enough.
\end{description}
\end{proposition}
\begin{proof} We prove only that 4 implies 2. Suppose that there exist $c_1>0$, $\varepsilon_0>0$ and a positive Borel function $u$ such that
\begin{align*}
A^\nu_1u^q+\varepsilon_0 B_1\omega\leq c_1 u.
\end{align*}
Take $c_2>0$ with $A_2\leq c_2A_1$, $B_2\leq c_2B_1$. We define $u_0=0$ and, for $n\geq 0$, $u_{n+1}=A^\nu_2u_n^q+\varepsilon_0(c_1c_2)^{-\frac{q}{q-1}} B_2\omega$. By induction, $u_{n}\leq (c_1c_2)^{-\frac{1}{q-1}}u$ for any $n$, and $\{u_n\}$ is nondecreasing. Thus, $U=\lim\limits_{n\to \infty }u_n$ is a solution of $U=A^\nu_2U^q+\varepsilon_0(c_1c_2)^{-\frac{q}{q-1}} B_2\omega$.
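Indeed, the bound $u_n\leq (c_1c_2)^{-\frac{1}{q-1}}u$ propagates by induction: if it holds for $u_n$, then
\begin{align*}
u_{n+1}&=A^\nu_2u_n^q+\varepsilon_0(c_1c_2)^{-\frac{q}{q-1}} B_2\omega\leq c_2(c_1c_2)^{-\frac{q}{q-1}}\left(A^\nu_1u^q+\varepsilon_0 B_1\omega\right)\\&\leq c_1c_2(c_1c_2)^{-\frac{q}{q-1}}u=(c_1c_2)^{-\frac{1}{q-1}}u.
\end{align*}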
\end{proof}
The following results provide some relations between the capacities $\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}$ and the Riesz capacities on $\mathbb{R}^{N-1}$, which allow us to define the capacities on $\partial\Omega$.
\begin{proposition}\label{2010201417} Assume that $\Omega=\mathbb{R}^{N-1}\times (0,\infty)$ and let $\alpha_0\geq 0$ such that $$-1+s'(1+\alpha-\beta)<\alpha_0<-1+s'(N-\beta+\alpha).$$ There holds \begin{align}\label{201020149} \operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(K\times\{0\})\asymp \operatorname{Cap}_{I_{\beta-\alpha+\frac{\alpha_0+1}{s'}-1},s'}(K) \end{align}
for any compact set $K\subset\mathbb{R}^{N-1}$.
\end{proposition}
\begin{proof} The proof relies on an idea of \cite[Corollary 4.20]{QH}. Thanks to \cite[Theorem 2.5.1]{55AH} and \eqref{051220141}, we get \eqref{201020149} from the following estimate: for any $\mu\in \mathfrak{M}_b^+(\mathbb{R}^{N-1})$
\begin{align}\label{2010201410} ||\mathbf{N}_{\alpha,\beta}[\mu\otimes\delta_{\{x_N=0\}}]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}\asymp|| I_{\beta-\alpha+\frac{\alpha_0+1}{s'}-1}[\mu]||_{L^{s'}(\mathbb{R}^{N-1})}, \end{align} where $I_{\gamma}[\mu]$ is the Riesz potential of $\mu$ in $\mathbb{R}^{N-1}$, i.e.
\begin{align*}I_{\gamma}[\mu](y)=\int_{0}^{\infty}\frac{\mu(B'_r(y))}{r^{N-1-\gamma}}\frac{dr}{r}~~\forall~y\in\mathbb{R}^{N-1},\end{align*} where $B'_r(y)$ denotes the ball of center $y$ and radius $r$ in $\mathbb{R}^{N-1}$. We have
\begin{align*}
||\mathbf{N}_{\alpha,\beta}[\mu\otimes\delta_{\{x_N=0\}}]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}^{s'}&=\int_{\mathbb{R}^{N-1}}\int_{0}^{\infty}\left(\int_{\mathbb{R}^{N-1}}\frac{d\mu(z)}{(|x'-z|^2+x_N^2)^{\frac{N-\beta+\alpha}{2}}}\right)^{s'}x_N^{\alpha_0}dx_Ndx'\\&
\asymp\int_{\mathbb{R}^{N-1}}\int_{0}^{\infty}\left(\int_{x_N}^{\infty}\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}x_N^{\alpha_0}dx_Ndx'.
\end{align*}
Notice that
\begin{align*}
\int_{0}^{\infty}\left(\int_{x_N}^{\infty}\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}x_N^{\alpha_0}dx_N&\geq \int_{0}^{\infty}\left(\int_{x_N}^{2x_N}\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}x_N^{\alpha_0}dx_N\\& \gtrsim
\int_{0}^{\infty}\left(\frac{\mu(B'_{x_N}(x'))}{x_N^{N-\beta+\alpha-\frac{\alpha_0+1}{s'}}}\right)^{s'}\frac{d x_N}{x_N}.
\end{align*} On the other hand, using H\"older's inequality and Fubini's Theorem, we obtain
\begin{align*}
&\int_{0}^{\infty}\left(\int_{x_N}^{\infty}\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}x_N^{\alpha_0}dx_N\leq \int_{0}^{\infty}\left(\int_{x_N}^{\infty}r^{-\frac{s}{2s'}}\frac{dr}{r}\right)^{\frac{s'}{s}}\int_{x_N}^{\infty}\left(\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha-\frac{1}{2s'}}}\right)^{s'}\frac{dr}{r}x_N^{\alpha_0}dx_N\\[2mm]&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=C
\int_{0}^{\infty}\int_{x_N}^{\infty}\left(\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha-\frac{1}{2s'}}}\right)^{s'}\frac{dr}{r}x_N^{\alpha_0-\frac{1}{2}}dx_N \\[2mm]&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=C \int_{0}^{\infty}\int_{0}^{r}x_N^{\alpha_0-\frac{1}{2}}dx_N\left(\frac{\mu(B'_r(x'))}{r^{N-\beta+\alpha-\frac{1}{2s'}}}\right)^{s'}\frac{dr}{r}\\[2mm]&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
=C \int_{0}^{\infty}\left(\frac{\mu(B'_{r}(x'))}{r^{N-\beta+\alpha-\frac{\alpha_0+1}{s'}}}\right)^{s'}\frac{d r}{r}.
\end{align*}
Thus,
\begin{align}
||\mathbf{N}_{\alpha,\beta}[\mu\otimes\delta_{\{x_N=0\}}]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}\asymp\left(\int_{\mathbb{R}^{N-1}}\int_{0}^{\infty}\left(\frac{\mu(B'_{r}(y))}{r^{N-\beta+\alpha-\frac{\alpha_0+1}{s'}}}\right)^{s'}\frac{d r}{r}dy\right)^{1/s'}.
\end{align}
This implies \eqref{2010201410}, thanks to \cite[Theorem 2.3]{55VHV}.
\end{proof}
\begin{proposition}\label{2010201414} Let $\Omega\subset \mathbb{R}^N$ be a bounded domain with a $C^2$ boundary. Assume $\alpha_0\geq 0$ and $-1+s'(1+\alpha-\beta)<\alpha_0<-1+s'(N-\beta+\alpha)$. Then there holds \begin{align}\label{121120141}
\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E)\asymp \operatorname{Cap}^{\partial\Omega}_{\beta-\alpha+\frac{\alpha_0+1}{s'}-1,s}(E)
\end{align}
for any compact set $E\subset\partial\Omega\subset\mathbb{R}^N.$
\end{proposition}
\begin{proof} Let $K_1,...,K_m$ be as in Definition \ref{131120144}. We have
\begin{align*}
\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E)\asymp \sum_{i=1}^{m}\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E\cap K_i),
\end{align*}
for any compact set $E\subset\partial\Omega$. By Definition \ref{131120144}, we need to prove that
\begin{align}
\operatorname{Cap}^{\alpha_0}_{\mathbf{N}_{\alpha,\beta},s}(E\cap K_i)\asymp \operatorname{Cap}_{G_{\beta-\alpha+\frac{\alpha_0+1}{s'}-1},s}(\tilde{T}_i(E\cap K_i))~~\forall~i=1,2,...,m.
\end{align}
We can show that for any $\omega\in\mathfrak{M}_b^+(\partial\Omega)$ and $i=1,...,m$, there exists $\omega_i\in \mathfrak{M}_b^+(\tilde{T}_i(K_i))$ with $T_i(K_i)=\tilde{T}_i(K_i)\times\{x_N=0\}$ such that \begin{align*} \omega_{i}(O)=\omega(T_i^{-1}(O\times \{0\})) \end{align*} for all Borel sets $O\subset \tilde{T}_i(K_i)$; a proof can be found in \cite[Proof of Lemma 5.2.2]{55AH}. Thanks to \cite[Theorem 2.5.1]{55AH}, it is enough to show that for any $i\in \{1,2,...,m\}$ there holds
\begin{align}\label{2010201412}
||\mathbf{N}_{\alpha,\beta}[\chi_{K_i}\omega]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}\asymp|| G_{\beta-\alpha+\frac{\alpha_0+1}{s'}-1}[\omega_i]||_{L^{s'}(\mathbb{R}^{N-1})},
\end{align}
where $G_{\gamma}[\omega_i]$ $(0<\gamma<N-1)$ is the Bessel potential of $\omega_i$ in $\mathbb{R}^{N-1}$, i.e.
\begin{align*}
G_{\gamma}[\omega_i](x)=\int_{\mathbb{R}^{N-1}}G_{\gamma}(x-y)d\omega_i(y).
\end{align*}
Indeed, we have
\begin{align*}
& ||\mathbf{N}_{\alpha,\beta}[\omega\chi_{K_i}]||_{L^{s'}(\Omega,(\rho(.))^{\alpha_0}dx)}^{s'}=\int_{\Omega}\left(\int_{K_i}\frac{d\omega(z)}{|x-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(x))^{\alpha_0}dx
\\&~~~~~=\int_{O_i\cap \Omega}\left(\int_{K_i}\frac{d\omega(z)}{|x-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(x))^{\alpha_0}dx+\int_{\Omega\backslash O_i}\left(\int_{K_i}\frac{d\omega(z)}{|x-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(x))^{\alpha_0}dx
\\&~~~~~\asymp\int_{O_i\cap \Omega}\left(\int_{K_i}\frac{d\omega(z)}{|x-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(x))^{\alpha_0}dx+(\omega(K_i))^{s'}.
\end{align*}
Here we used $|x-z|\asymp 1$ for any $x\in \Omega\backslash O_i$, $z\in K_i$. \\ By a standard change of variables we obtain
\begin{align*}
& \int_{O_i\cap \Omega}\left(\int_{K_i}\frac{d\omega(z)}{|x-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(x))^{\alpha_0}dx+(\omega(K_i))^{s'}\\&~~~~~=\int_{T_i(O_i\cap \Omega)}\left(\int_{K_i}\frac{d\omega(z)}{|T_i^{-1}(y)-z|^{N-\beta+\alpha}}\right)^{s'}(\rho(T_i^{-1}(y)))^{\alpha_0}|\mathbf{J}_{T_i}(T_i^{-1}(y))|^{-1}dy+(\omega(K_i))^{s'}
\\&~~~~~\asymp\int_{B_1(0)\cap \{x_N>0\}}\left(\int_{K_i}\frac{d\omega(z)}{|y-T_i(z)|^{N-\beta+\alpha}}\right)^{s'}y_N^{\alpha_0}dy+(\omega(K_i))^{s'}~\text{ with } y=(y',y_N),
\end{align*}
since $|T_i^{-1}(y)-z|\asymp |y-T_i(z)| $, $|\mathbf{J}_{T_i}(T_i^{-1}(y))|\asymp 1$ and $\rho(T_i^{-1}(y))\asymp y_N$ for all $(y,z)\in T_i(O_i\cap \Omega)\times K_i$.
From the definition of $\omega_i$, we have
\begin{align*}
&\int_{B_1(0)\cap \{x_N>0\}}\left(\int_{K_i}\frac{1}{|y-T_i(z)|^{N-\beta+\alpha}}d\omega(z)\right)^{s'}y_N^{\alpha_0}dy+(\omega(K_i))^{s'}
\\&~~~=\int_{B_1(0)\cap \{x_N>0\}}\left(\int_{\tilde{T}_i(K_i)}\frac{1}{(|y'-\xi|^2+y_N^2)^{\frac{{N-\beta+\alpha}}{2}}}d\omega_i(\xi)\right)^{s'}y_N^{\alpha_0}dy_Ndy'+(\omega(K_i))^{s'}\\&~~~\asymp \int_{\mathbb{R}^{N-1}}\int_{0}^{\infty}\left(\int_{\min\{y_N,R\}}^{2R}\frac{\omega_i(B'_r(y'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}y_N^{\alpha_0}dy_Ndy'~~\text{ with }~R=\operatorname{diam\,}(\Omega).
\end{align*}
As in the proof of Proposition \ref{2010201417}, there holds
\begin{align*}
&\int_{\mathbb{R}^{N-1}}\int_{0}^{\infty}\left(\int_{\min\{y_N,R\}}^{2R}\frac{\omega_i(B'_r(y'))}{r^{N-\beta+\alpha}}\frac{dr}{r}\right)^{s'}y_N^{\alpha_0}dy_Ndy'
\\& ~~~~~\asymp \int_{\mathbb{R}^{N-1}}\int_{0}^{2R}\left(\frac{\omega_i(B'_r(y'))}{r^{{N-\beta+\alpha}-\frac{\alpha_0+1}{s'}}}\right)^{s'}\frac{d r}{r}dy'. \end{align*}
Therefore, we get \eqref{2010201412} from \cite[Theorem 2.3]{55VHV}.
\end{proof}
\begin{remark} Propositions \ref{2010201417} and \ref{2010201414} with $\alpha=\beta=2,\alpha_0=q+1$ were demonstrated by Verbitsky in \cite[Appendix B]{Dyn}, using an alternative approach.
\end{remark}
\section{Proof of the main results}
We denote
\begin{align*}
\mathbf{P}[\sigma](x)=\int_{\partial\Omega}\operatorname{P}(x,z)d\sigma(z), ~~\mathbf{G}[f](x)=\int_{\Omega}\operatorname{G}(x,y)f(y)dy
\end{align*}
for any $\sigma\in \mathfrak{M}(\partial\Omega), f\in L^1_{\rho}(\Omega), f\geq 0$. Then the unique weak solution of
$$\begin{array}{lll}
-\Delta u=f\qquad&\text {in }\Omega,\\
\phantom{ -\Delta}
u=\sigma\qquad&\text {on }\partial\Omega,
\end{array}$$
can be represented by
\begin{align*}
u(x)=\mathbf{G}[f](x)+\mathbf{P}[\sigma](x)~~\forall ~x\in\Omega.
\end{align*}
We recall below some classical estimates for the Green and Poisson kernels:
\begin{align*}
& \operatorname{G}(x,y)\asymp \min\left\{\frac{1}{|x-y|^{N-2}}, \frac{\rho(x)\rho(y)}{|x-y|^{N}}\right\},\\&
\operatorname{P}(x,z)\asymp \frac{\rho(x)}{|x-z|^{N}},
\end{align*}
and
\begin{align*}
|\nabla_x \operatorname{G}(x,y)|\lesssim \frac{\rho(y)}{|x-y|^{N}}\min\left\{1,\frac{|x-y|}{\sqrt{\rho(x)\rho(y)}}\right\}, ~~|\nabla_x \operatorname{P}(x,z)|\lesssim \frac{1}{|x-z|^N},
\end{align*}
for any $(x,y,z)\in\Omega\times\Omega\times\partial\Omega$, see \cite{BiVi}.
Since $|\rho(x)-\rho(y)|\leq |x-y|$ we have \begin{align*}
\max\left\{\rho(x)\rho(y),|x-y|^2\right\}\asymp\max\left\{|x-y|,\rho(x),\rho(y)\right\}^2. \end{align*}
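For the reader's convenience we sketch the argument: assume without loss of generality that $\rho(x)\geq \rho(y)$. If $\rho(x)\leq 2|x-y|$ then
\begin{align*}
\max\left\{\rho(x)\rho(y),|x-y|^2\right\}\asymp |x-y|^2\asymp \max\left\{|x-y|,\rho(x),\rho(y)\right\}^2,
\end{align*}
while if $\rho(x)>2|x-y|$ then $\rho(y)\geq \rho(x)-|x-y|>\frac{1}{2}\rho(x)$, so both sides are comparable to $\rho(x)^2$.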
Thus,
\begin{align}\label{051220145}
\min\left\{1,\left(\frac{|x-y|}{\sqrt{\rho(x)\rho(y)}}\right)^\gamma\right\}
\asymp \frac{|x-y|^\gamma}{\left(\max\left\{|x-y|,\rho(x),\rho(y)\right\}\right)^{\gamma}}~~\text{ for }~\gamma>0.
\end{align}
Therefore, \begin{align}\label{1311201427a} \operatorname{G}(x,y)\asymp \rho(x)\rho(y)\mathbf{N}_{2,2}(x,y),~~\operatorname{P}(x,z)\asymp \rho(x)\mathbf{N}_{\alpha,\alpha}(x,z) \end{align} and \begin{align}\label{1311201426a}
|\nabla_x \operatorname{G}(x,y)|\lesssim \rho(y)\mathbf{N}_{1,1}(x,y), ~~~|\nabla_x \operatorname{P}(x,z)|\lesssim \mathbf{N}_{\alpha,\alpha}(x,z) \end{align}for all $(x,y,z)\in \overline{\Omega}\times\overline{\Omega}\times\partial\Omega,$ $ \alpha\geq 0.$
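To verify the first relation in \eqref{1311201427a}, note that by \eqref{051220145} with $\gamma=2$ (and recalling the expression of the kernel $\mathbf{N}_{2,2}$),
\begin{align*}
\operatorname{G}(x,y)\asymp \frac{\rho(x)\rho(y)}{|x-y|^{N}}\min\left\{1,\left(\frac{|x-y|}{\sqrt{\rho(x)\rho(y)}}\right)^{2}\right\}
\asymp \frac{\rho(x)\rho(y)}{|x-y|^{N-2}\left(\max\left\{|x-y|,\rho(x),\rho(y)\right\}\right)^{2}}=\rho(x)\rho(y)\mathbf{N}_{2,2}(x,y),
\end{align*}
and the relation for the Poisson kernel follows in the same way, using that $\max\{|x-z|,\rho(x),\rho(z)\}\asymp |x-z|$ when $z\in\partial\Omega$.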
\begin{proof}[Proof of Theorem \ref{121120142} and Theorem \ref{121120147}] By \eqref{1311201427a}, the following equivalence holds
\begin{align*}
&\mathbf{G}\left[\left(\mathbf{P}[\sigma]\right)^q\right]\lesssim \mathbf{P}[\sigma]<\infty ~~\text{a.e. in }~\Omega\\
\Longleftrightarrow~&\mathbf{N}_{2,2}\left[\left(\mathbf{N}_{2,2}[\sigma]\right)^q\rho^{q+1}\right]\lesssim \mathbf{N}_{2,2}[\sigma]<\infty ~~\text{a.e. in }~\Omega.
\end{align*} Furthermore
\begin{align*}
U\asymp \mathbf{G}[U^q]+\mathbf{P}[\sigma] &\Longleftrightarrow U\asymp \rho\mathbf{N}_{2,2}[\rho U^q]+\rho\mathbf{N}_{2,2}[\sigma],
\end{align*}
which in turn is equivalent to
$$V\asymp \mathbf{N}_{2,2}[\rho^{q+1}V^q]+\mathbf{N}_{2,2}[\sigma]\text{ with }V=U\rho^{-1}.
$$
By Propositions \ref{2010201417} and \ref{2010201414} we have \begin{align*}
\operatorname{Cap}_{I_{\frac{2}{q}},q'}(K)\asymp\operatorname{Cap}^{q+1}_{\mathbf{N}_{2,2},q'}(K\times\{0\})\qquad\forall\,K\subset \mathbb{R}^{N-1}, K\text{ compact},
\end{align*}
if $\Omega=\mathbb{R}^{N}_+$, and
\begin{align*}
\operatorname{Cap}^{\partial\Omega}_{\frac{2}{q},q'}(K)\asymp\operatorname{Cap}^{q+1}_{\mathbf{N}_{2,2},q'}(K)\qquad\forall \,K\subset \partial\Omega, K\text{ compact},
\end{align*}
if $\Omega$ is a bounded domain.
Thanks to Theorem \ref{2010201420} with $\omega=\sigma$, $\alpha=2,\beta=2,\alpha_0=q+1$ and Proposition \ref{2010201434}, we get the results.
\end{proof}
\begin{proof}[Proof of Theorems \ref{1311201437} and \ref{1311201440}] By \eqref{1311201427a} and \eqref{1311201426a}, we have \begin{align}\label{211020142}
&\operatorname{G}(x,y)\leq C\rho(x)\rho(y)\mathbf{N}_{1,1}(x,y),~~|\nabla_x\operatorname{G}(x,y)|\leq C\rho(y)\mathbf{N}_{1,1}(x,y),\\
&\operatorname{P}(x,z)\leq C\rho(x)\mathbf{N}_{1,1}(x,z),~~|\nabla_x\operatorname{P}(x,z)|\leq C\mathbf{N}_{1,1}(x,z),\label{211020143} \end{align} for all $(x,y,z)\in\Omega\times\Omega\times\partial\Omega$ and for some constant $C>0$.\\ For any $u\in W_{loc}^{1,1}(\Omega)$, we set \begin{align*}
\mathbf{F}(u)(x)=\int_{\Omega}\operatorname{G}(x,y)|u(y)|^{q_1-1}u(y)|\nabla u(y)|^{q_2}dy+\int_{\partial\Omega}\operatorname{P}(x,z)d\sigma(z). \end{align*} Using \eqref{211020142} and \eqref{211020143}, we have \begin{align*}
&|\mathbf{F}(u)|\leq C\rho(.) \mathbf{N}_{1,1}\left[|u|^{q_1}|\nabla u|^{q_2}\rho(.)\right]+C\rho(.)\mathbf{N}_{1,1}[|\sigma|],\\&
|\nabla \mathbf{F}(u)|\leq C \mathbf{N}_{1,1}\left[|u|^{q_1}|\nabla u|^{q_2}\rho(.)\right]+C\mathbf{N}_{1,1}[|\sigma|]. \end{align*} Therefore, we can easily see that if \begin{align}\label{2010201448}
\mathbf{N}_{1,1}\left[\left(\mathbf{N}_{1,1}[|\sigma|]\right)^{q_1+q_2}(\rho(.))^{q_1+1}\right]\leq \frac{\left(q_1+q_2-1\right)^{q_1+q_2-1}}{\left(C(q_1+q_2)\right)^{q_1+q_2}} \mathbf{N}_{1,1}[|\sigma|]<\infty ~~\text{a.e. in }~~ \Omega \end{align}
holds, then $\mathbf{F}$ maps $\mathbf{E}$ into $\mathbf{E}$, where
$$\mathbf{E}=\left\{u\in W_{loc}^{1,1}(\Omega): |u|\leq \lambda\rho(.)\mathbf{N}_{1,1}[|\sigma|],~ |\nabla u|\leq \lambda\mathbf{N}_{1,1}[|\sigma|]~~\text{ a.e in }~\Omega\right\}$$
with $\lambda=\frac{C(q_1+q_2)}{q_1+q_2-1}$. \\ Assume that \eqref{2010201448} holds.
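For the reader's convenience, let us record why this value of $\lambda$ is admissible: for $u\in\mathbf{E}$, \eqref{2010201448} gives
\begin{align*}
|\mathbf{F}(u)|&\leq C\lambda^{q_1+q_2}\rho(.)\,\mathbf{N}_{1,1}\left[\left(\mathbf{N}_{1,1}[|\sigma|]\right)^{q_1+q_2}(\rho(.))^{q_1+1}\right]+C\rho(.)\mathbf{N}_{1,1}[|\sigma|]\\
&\leq \left(C\lambda^{q_1+q_2}\frac{\left(q_1+q_2-1\right)^{q_1+q_2-1}}{\left(C(q_1+q_2)\right)^{q_1+q_2}}+C\right)\rho(.)\mathbf{N}_{1,1}[|\sigma|]
=\left(\frac{C}{q_1+q_2-1}+C\right)\rho(.)\mathbf{N}_{1,1}[|\sigma|]=\lambda\rho(.)\mathbf{N}_{1,1}[|\sigma|],
\end{align*}
and the same computation bounds $|\nabla\mathbf{F}(u)|$ by $\lambda\mathbf{N}_{1,1}[|\sigma|]$; note that $\lambda=\frac{C(q_1+q_2)}{q_1+q_2-1}$ is exactly the value for which the last inequality closes.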
We denote by ${\cal S}$ the space of functions $f\in W_{loc}^{1,1}(\Omega)$ with finite norm
\begin{align*}
||f||_{{\cal S}}=||f||_{L^{q_1+q_2}(\Omega,(\rho(.))^{1-q_2}dx)}+|||\nabla f|||_{L^{q_1+q_2}(\Omega,(\rho(.))^{1+q_2}dx)}<\infty.
\end{align*}
Clearly, $\mathbf{E}\subset {\cal S}$; moreover, $\mathbf{E}$ is convex and closed in the strong topology of ${\cal S}$. \\
On the other hand,
it is not difficult to show that $\mathbf{F}$ is continuous and $\mathbf{F}(\mathbf{E})$ is precompact in $\cal{S}$.
Consequently, by Schauder's fixed point theorem, there exists $u\in \mathbf{E}$ such that $\mathbf{F}(u)=u$. Hence, $u$ is a solution of \eqref{1311201439}-\eqref{2010201447} and it satisfies
\begin{align*}
|u|\leq \lambda\rho(.)\mathbf{N}_{1,1}[|\sigma|],~ |\nabla u|\leq \lambda\mathbf{N}_{1,1}[|\sigma|].
\end{align*}
Thanks to Theorem \ref{2010201420} and Propositions \ref{2010201417} and \ref{2010201414}, we verify that assumptions \eqref{1311201438} and \eqref{2010201447} in Theorems \ref{1311201437} and \ref{1311201440} are equivalent to \eqref{2010201448}. This completes the proof of the theorems. \end{proof}
\begin{remark}\label{opt} We do not know whether conditions \eqref{1311201438} and \eqref{2010201446} are optimal. It is worth noticing that if $\Omega$ is a ball and $\mathbf{P}[|\sigma|]\in L^{q_1+q_2}(\Omega,\rho^{1-q_2} dx)$, then it is proved in \cite[Th 1.1]{MV2} that $|\sigma|$ belongs to the Besov-Sobolev space $B^{-\frac{2-q_2}{q_1+q_2},q_1+q_2}(\partial\Omega)$. Therefore the inequality
\begin{align*}
|\sigma|(K)\leq C \left(\operatorname{Cap}^{\partial\Omega}_{\frac{2-q_2}{q_1+q_2},(q_1+q_2)'}(K)\right)^{\frac{1}{(q_1+q_2)'}}\end{align*}
holds for any Borel set $K\subset\partial\Omega$, and it is a necessary condition for \eqref{2010201446} to hold since $\frac{1}{(q_1+q_2)'}<1$. In a general $C^2$ bounded domain, it is easy to see that this property, proved in a particular case in \cite[Th 2.2]{MV1}, is still valid thanks to the equivalence relation (2.23) therein between Poisson kernels; see also the proof of Proposition \ref{2010201414}. The difficulty in obtaining a necessary condition for existence lies in the fact that, while the inequality $u\geq \mathbf{P}[\sigma]$ is clear, the estimate $|\nabla u|\gtrsim \rho^{-1}\mathbf{P}[\sigma]$ is not true in general. It can also be shown that if
$$|u|^{q_1}|\nabla u|^{q_2}\leq C (\mathbf{G}(|\sigma|))^{q_1}(\rho\mathbf{N}_{1,1}[|\sigma|])^{q_2}\in L^1(\Omega,\rho(.) dx),$$ then $\sigma $ is absolutely continuous with respect to $ \operatorname{Cap}^{\partial\Omega}_{\frac{2-q_2}{q_1+q_2},(q_1+q_2)'}$. \end{remark}
\section{Extension to Schr\"odinger operators with Hardy potentials} We can apply Theorem \ref{2010201420} to solve the problem \begin{equation*}\begin{array}{lll} -\Delta u -\frac{\kappa}{\rho^2}u= u^q~~&\text{in}~\Omega, \\ \phantom{-\Delta u -\frac{\kappa}{\rho^2}} u = \sigma\quad ~&\text{on }~\partial\Omega,\\ \end{array}\end{equation*} where $\kappa\in [0,\frac{1}{4}]$ and $\sigma\in\mathfrak{M}^+(\partial\Omega)$.
Let $\operatorname{G}_\kappa,\operatorname{P}_\kappa$ be the Green kernel and Poisson kernel of $-\Delta-\frac{\kappa}{\rho^2}$ in $\Omega$ with $\kappa\in [0,\frac{1}{4}]$. It is proved that \begin{align*}
&\operatorname{G}_{\kappa}(x,y)\asymp \min\left\{\frac{1}{|x-y|^{N-2}}, \frac{(\rho(x)\rho(y))^{\frac{1+\sqrt{1-4\kappa}}{2}}}{|x-y|^{N-1+\sqrt{1-4\kappa}}}\right\},\\&
\operatorname{P}_{\kappa}(x,z)\asymp \frac{(\rho(x))^{\frac{1+\sqrt{1-4\kappa}}{2}}}{|x-z|^{N-1+\sqrt{1-4\kappa}}}, \end{align*}
for all $(x,y,z)\in \overline{\Omega}\times\overline{\Omega}\times\partial\Omega$, see \cite{FMT,MT1,GV}. Therefore, from \eqref{051220145} we get \begin{align}\label{1311201427} &\operatorname{G}_{\kappa}(x,y)\asymp (\rho(x)\rho(y))^{\frac{1+\sqrt{1-4\kappa}}{2}}\mathbf{N}_{1+\sqrt{1-4\kappa},2}(x,y),\\&\label{1311201428} \operatorname{P}_\kappa(x,z)\asymp (\rho(x))^{\frac{1+\sqrt{1-4\kappa}}{2}}\mathbf{N}_{\alpha,1-\sqrt{1-4\kappa}+\alpha}(x,z), \end{align} for all $(x,y,z)\in \overline{\Omega}\times\overline{\Omega}\times\partial\Omega,$ $ \alpha\geq 0.$ We denote \begin{align*} \mathbf{P}_\kappa[\sigma](x)=\int_{\partial\Omega}\operatorname{P}_\kappa(x,z)d\sigma(z), ~~\mathbf{G}_\kappa[f](x)=\int_{\Omega}\operatorname{G}_\kappa(x,y)f(y)dy \end{align*} for any $\sigma\in \mathfrak{M}^+(\partial\Omega), f\in L^1(\Omega,\rho^{\frac{1+\sqrt{1-4\kappa}}{2}}dx), f\geq 0$. Then the unique weak solution of $$\begin{array}{lll} -\Delta u-\frac{\kappa}{\rho^2}u=f\qquad&\text {in }\Omega,\\ \phantom{ -\Delta u-\frac{\kappa}{\rho^2}} u=\sigma\qquad&\text {on }\partial\Omega, \end{array}$$ satisfies the following integral equation \cite{GV} \begin{align*} u=\mathbf{G}_\kappa[f]+\mathbf{P}_\kappa[\sigma]~~\text{a.e. in } \Omega. \end{align*}
As in the proofs of Theorems \ref{121120142} and \ref{121120147}, the relation \begin{align*} \mathbf{G}_\kappa\left[\left(\mathbf{P}_\kappa[\sigma]\right)^q\right]\lesssim \mathbf{P}_\kappa[\sigma]<\infty ~~\text{a.e. in }~~ \Omega \end{align*} is equivalent to \begin{align*} \mathbf{N}_{1+\sqrt{1-4\kappa},2}\left[\left(\mathbf{N}_{1+\sqrt{1-4\kappa},2}[\sigma]\right)^q\rho^{\frac{(q+1)(1+\sqrt{1-4\kappa})}{2}}\right]\lesssim \mathbf{N}_{1+\sqrt{1-4\kappa},2}[\sigma]<\infty ~~\text{a.e. in }~~ \Omega, \end{align*} and the relation \begin{align*} U\asymp \mathbf{G}_\kappa[U^q]+\mathbf{P}_\kappa[\sigma] \end{align*} is equivalent to \begin{align*}V\asymp \mathbf{N}_{1+\sqrt{1-4\kappa},2}[\rho^{\frac{(q+1)(1+\sqrt{1-4\kappa})}{2}}V^q]+\mathbf{N}_{1+\sqrt{1-4\kappa},2}[\sigma]\quad\text{with } V=U\rho^{-\frac{1+\sqrt{1-4\kappa}}{2}}. \end{align*} Thanks to Theorem \ref{2010201420} with $\omega=\sigma$, $\alpha=1+\sqrt{1-4\kappa},\beta=2,\alpha_0=\frac{(q+1)(1+\sqrt{1-4\kappa})}{2}$ and Propositions \ref{2010201434}, \ref{2010201417} and \ref{2010201414}, we obtain the following result.
\begin{theorem} \label{1311201410} Let $q>1$, $0\leq \kappa\leq \frac{1}{4}$ and $\sigma\in \mathfrak{M}^+(\partial\Omega)$. Then the following statements are equivalent: \begin{description}
\item[{\bf 1.}] There exists $C>0$ such that the following inequalities hold: \begin{align}\label{2010201428} \sigma(O)\leq C \operatorname{Cap}_{I_{\frac{q+3-(q-1)\sqrt{1-4\kappa}}{2q}},q'}(O)\end{align} for any Borel set $O\subset\mathbb{R}^{N-1}$ if $\Omega=\mathbb{R}^{N}_+$, and \begin{align}\label{2010201429}
\sigma(O)\leq C \operatorname{Cap}^{\partial\Omega}_{\frac{q+3-(q-1)\sqrt{1-4\kappa}}{2q},q'}(O)\end{align} for any Borel set $O\subset\partial\Omega$ if $\Omega$ is a bounded domain.
\item[{\bf 2.}] There exists $C>0$ such that the inequality
\begin{align}
\label{2010201433b} & \mathbf{G}_\kappa\left[\left(\mathbf{P}_\kappa[\sigma]\right)^q\right]\leq C \mathbf{P}_\kappa[\sigma]<\infty ~~\text{a.e. in }~~ \Omega,
\end{align}
holds.
\item[{\bf 3.}] Problem
\begin{equation}\label{2010201431} \begin{array}{lll}
-\Delta u -\frac{\kappa}{\rho^2}u= u^q~~&\text{in}~\Omega, \\
\phantom{ -\Delta u -\frac{\kappa}{\rho^2}}
u = \varepsilon\sigma\quad ~&\text{on }~\partial\Omega,\\
\end{array} \end{equation}
has a positive solution for $\varepsilon>0$ small enough.
\end{description}
Moreover, there is a constant $C_0>0$ such that if any one
of the two statements ${\bf 1}$ and ${\bf 2}$ holds with $C\leq C_0$, then equation \eqref{2010201431}
has a solution $u$ with $\varepsilon=1$ which satisfies
\begin{align}
u\asymp \mathbf{P}_\kappa[\sigma].
\end{align}
Conversely, if \eqref{2010201431} has
a solution $u$ with $\varepsilon=1$, then the two statements ${\bf 1}$ and ${\bf 2}$ hold for some $C>0$.
\end{theorem}
\begin{remark} The problem \eqref{2010201431} admits a subcritical range
$$1<q< \frac{N+\frac{1+\sqrt{1-4\kappa}}{2}}{N+\frac{1+\sqrt{1-4\kappa}}{2}-2}.
$$
If the above inequality holds, then the problem can be solved with any positive measure $\sigma$ provided $\sigma(\partial\Omega)$ is small enough.
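Note that for $\kappa=0$, i.e. $\sqrt{1-4\kappa}=1$, one recovers the exponents of the unperturbed problem:
\begin{align*}
\frac{q+3-(q-1)\sqrt{1-4\kappa}}{2q}=\frac{2}{q},\qquad \frac{N+\frac{1+\sqrt{1-4\kappa}}{2}}{N+\frac{1+\sqrt{1-4\kappa}}{2}-2}=\frac{N+1}{N-1},
\end{align*}
the latter being the classical critical exponent for the boundary value problem with measure data.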
The role of this critical exponent has been pointed out in \cite{MT1} and \cite{GV} for the removability of boundary isolated singularities of solutions of
$$ -\Delta u -\frac{\kappa}{\rho^2}u+ u^q=0~\text{in}~\Omega
$$
i.e., solutions which vanish on the boundary except at one point. Furthermore, the complete study of the problem
\begin{equation}\label{2010201431+1} \begin{array}{lll}
-\Delta u -\frac{\kappa}{\rho^2}u+ u^q=0\quad&\text{in}~\Omega, \\
\phantom{ -\Delta u -\frac{\kappa}{\rho^2}u+ u^q}
u = \sigma\quad ~&\text{on }~\partial\Omega,\\
\end{array} \end{equation}
is performed in \cite{GV} in the supercritical range
$$q\geq \frac{N+\frac{1+\sqrt{1-4\kappa}}{2}}{N+\frac{1+\sqrt{1-4\kappa}}{2}-2}.
$$
The necessary and sufficient condition is therein expressed in terms of the absolute continuity of $\sigma$ with respect to the $\operatorname{Cap}_{I_{\frac{q+3-(q-1)\sqrt{1-4\kappa}}{2q}},q'}$-capacity.
\end{remark}
\end{document}
The distribution of spacings between quadratic residues: We study the distribution of spacings between squares modulo q, where q is square-free and highly composite, in the limit as the number of prime factors of q goes to infinity. We show that all correlation functions are Poissonian, which among other things, implies that the spacings between nearest neighbors, normalized to have unit mean, have an exponential distribution.
The distribution of spacings between quadratic residues, II: We study the distribution of spacings between squares in Z/QZ as the number of prime divisors of Q tends to infinity. In an earlier paper Kurlberg and Rudnick proved that the spacing distribution for square free Q is Poissonian; this paper extends the result to arbitrary Q.
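As a quick numerical illustration of the two abstracts above (an added sketch, not taken from the papers; the helper names `squares_mod` and `normalized_spacings` are mine), one can list the squares modulo a squarefree \(q\) and check that the normalized nearest-neighbour gaps have unit mean; as the number of prime factors of \(q\) grows, their distribution should approach the exponential law:

```python
# Sketch: nearest-neighbour spacings between squares mod q.
# For squarefree q with many prime factors, the results above predict
# that the normalized spacings follow an exponential (Poissonian) law.

def squares_mod(q):
    """Sorted list of the distinct squares in Z/qZ."""
    return sorted({(x * x) % q for x in range(q)})

def normalized_spacings(q):
    """Gaps between consecutive squares mod q (cyclically), scaled to mean 1."""
    s = squares_mod(q)
    n = len(s)
    gaps = [s[i + 1] - s[i] for i in range(n - 1)]
    gaps.append(q - s[-1] + s[0])  # wrap-around gap on the circle Z/qZ
    mean_gap = q / n               # the gaps sum to q, so the mean gap is q/n
    return [g / mean_gap for g in gaps]

q = 3 * 5 * 7 * 11 * 13            # squarefree with 5 odd prime factors
gaps = normalized_spacings(q)
frac_large = sum(g > 1 for g in gaps) / len(gaps)
print(len(gaps), frac_large)       # for an exponential law, P(gap > 1) = 1/e
```

Here the number of squares modulo an odd squarefree \(q=p_1\cdots p_k\) is \(\prod_i (p_i+1)/2\) by the Chinese remainder theorem, so for \(q=15015\) there are 1008 gaps, and the printed fraction should already be in the vicinity of \(e^{-1}\approx 0.37\).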
Hecke theory and equidistribution for the quantization of linear maps of the torus: We study semi-classical limits of eigenfunctions of a quantized linear hyperbolic automorphism of the torus (``cat map''). For some values of Planck's constant, the spectrum of the quantized map has large degeneracies. Our first goal in this paper is to show that these degeneracies are coupled to the existence of quantum symmetries. There is a commutative group of unitary operators on the state-space which commute with the quantized map and therefore act on its eigenspaces. We call these ``Hecke operators'', in analogy with the setting of the modular surface.
We call the eigenstates of both the quantized map and of all the Hecke operators ``Hecke eigenfunctions''. Our second goal is to study the semiclassical limit of the Hecke eigenfunctions. We will show that they become equidistributed with respect to Liouville measure, that is the expectation values of quantum observables in these eigenstates converge to the classical phase-space average of the observable.
On quantum ergodicity for linear maps of the torus: We prove a strong version of quantum ergodicity for linear hyperbolic maps of the torus (``cat maps''). We show that there is a density one sequence of integers so that as N tends to infinity along this sequence, all eigenfunctions of the quantum propagator at inverse Planck constant N are uniformly distributed.
A key step in the argument is to show that for a hyperbolic matrix in the modular group, there is a density one sequence of integers N for which its order (or period) modulo N is somewhat larger than the square root of N.
On a character sum problem of H. Cohn: Let f be a complex valued function on a finite field F such that f(0) = 0, f(1) = 1, and |f(x)| = 1 for x ≠ 0. Cohn asked if it follows that f is a nontrivial multiplicative character provided that \(\sum_{x \in F} f(x) \overline{f(x+h)} = -1\) for \(h \neq 0\). We prove that this is the case for finite fields of prime cardinality under the assumption that the nonzero values of f are roots of unity.
Value distribution for eigenfunctions of desymmetrized quantum maps: We study the value distribution and extreme values of eigenfunctions for the ``quantized cat map''. This is the quantization of a hyperbolic linear map of the torus. In a previous paper it was observed that there are quantum symmetries of the quantum map - a commutative group of unitary operators which commute with the map, which we called ``Hecke operators''. The eigenspaces of the quantum map thus admit an orthonormal basis consisting of eigenfunctions of all the Hecke operators, which we call ``Hecke eigenfunctions''.
In this note we investigate suprema and value distribution of the Hecke eigenfunctions. For prime values of the inverse Planck constant N for which the map is diagonalizable modulo N (the ``split primes'' for the map), we show that the Hecke eigenfunctions are uniformly bounded and their absolute values (amplitudes) are either constant or have a semi-circle value distribution as N tends to infinity. Moreover in the latter case different eigenfunctions become statistically independent. We obtain these results via the Riemann hypothesis for curves over a finite field (Weil's theorem) and recent results of N. Katz on exponential sums. For general N we obtain a nontrivial bound on the supremum norm of these Hecke eigenfunctions.
On the order of unimodular matrices modulo integers: Assuming the Generalized Riemann Hypothesis, we prove the following: If b is an integer greater than one, then the multiplicative order of b modulo N is larger than \(N^{1-\epsilon}\) for all N in a density one subset of the integers. If A is a hyperbolic unimodular matrix with integer coefficients, then the order of A modulo p is greater than \(p^{1-\epsilon}\) for all p in a density one subset of the primes. Moreover, the order of A modulo N is greater than \(N^{1-\epsilon}\) for all N in a density one subset of the integers.
On the distribution of matrix elements for the quantum cat map: For many classically chaotic systems it is believed that the quantum wave functions become uniformly distributed, that is the matrix elements of smooth observables tend to the phase space average of the observable. In this paper we study the fluctuations of the matrix elements for the desymmetrized quantum cat map. We present a conjecture for the distribution of the normalized matrix elements, namely that their distribution is that of a certain weighted sum of traces of independent matrices in SU(2). This is in contrast to generic chaotic systems where the distribution is expected to be Gaussian. We compute the second and fourth moment of the normalized matrix elements and obtain agreement with our conjecture.
On the period of the linear congruential and power generators: We consider the periods of the linear congruential and the power generators modulo n and, for fixed choices of initial parameters, give lower bounds that hold for ``most'' n when n ranges over three different sets: the set of primes, the set of products of two primes (of similar size), and the set of all integers. For most n in these sets, the period is at least \(n^{1/2+\epsilon(n)}\) for any monotone function \(\epsilon(n)\) tending to zero as \(n\) tends to infinity. Assuming the Generalized Riemann Hypothesis, for most n in these sets the period is greater than \(n^{1-\epsilon}\) for any \(\epsilon > 0\). Moreover, the period is unconditionally greater than \(n^{1/2+\delta}\), for some fixed \(\delta>0\) for a positive proportion of \(n\) in the above mentioned sets. These bounds are related to lower bounds on the multiplicative order of an integer e modulo \(p-1\), modulo \(\lambda(pl)\), and modulo \(\lambda(m)\) where \(p,l\) range over the primes, \(m\) ranges over the integers, and where \(\lambda(n)\) is the order of the largest cyclic subgroup of \((Z/nZ)^\times\).
Lattice points on circles and discrete velocity models for the Boltzmann equation: The construction of discrete velocity models or numerical methods for the Boltzmann equation, may lead to the necessity of computing the collision operator as a sum over lattice points. The collision operator involves an integral over a sphere, which corresponds to the conservation of energy and momentum. In dimension two there are difficulties even in proving the convergence of such an approximation since many circles contain very few lattice points, and some circles contain many badly distributed lattice points. However, by showing that lattice points on most circles are equidistributed we find that the collision operator can indeed be approximated as a sum over lattice points in the two-dimensional case. For higher dimensions, this result has already been obtained by A. Bobylev et al. (SIAM J. Numer. Anal. 34, no. 5, pp. 1865-1883 (1997))
Poisson statistics via the Chinese remainder theorem: We consider the distribution of spacings between consecutive elements in subsets of \(Z/qZ\) where \(q\) is highly composite and the subsets are defined via the Chinese remainder theorem. We give a sufficient criterion for the spacing distribution to be Poissonian as the number of prime factors of q tends to infinity, and as an application we show that the value set of a generic polynomial modulo q has Poisson spacings. We also study the spacings of subsets of \(Z/q_1 q_2Z\) that are created via the Chinese remainder theorem from subsets of \(Z/q_1Z\) and \(Z/q_2Z\) (for \(q_1,q_2\) coprime), and give criteria for when the spacings modulo \(q_1q_2\) are Poisson. We also give some examples when the spacings modulo \(q_1q_2\) are not Poisson, even though the spacings modulo \(q_1\) and modulo \(q_2\) are both Poisson.
Poisson spacing statistics for value sets of polynomials: If f is a polynomial with integer coefficients and q is an integer, we may regard f as a map from Z/qZ to Z/qZ. We show that the distribution of the (normalized) spacings between consecutive elements in the image of these maps becomes Poissonian as q tends to infinity along any sequence of square free integers such that the mean spacing modulo q tends to infinity.
Bounds on supremum norms for Hecke eigenfunctions of quantized cat maps: We study extreme values of desymmetrized eigenfunctions (so called Hecke eigenfunctions) for the quantized cat map, a quantization of a hyperbolic linear map of the torus. In a previous paper it was shown that for prime values of the inverse Planck's constant N=1/h, such that the map is diagonalizable (but not upper triangular) modulo N, the Hecke eigenfunctions are uniformly bounded. The purpose of this paper is to show that the same holds for any prime N provided that the map is not upper triangular modulo N.
We also find that the supremum norms of Hecke eigenfunctions are \(O_\epsilon(N^\epsilon)\) for all \(\epsilon>0\) in the case of \(N\) square free.
Matrix elements for the quantum cat map: Fluctuations in short windows: We study fluctuations of the matrix coefficients for the quantized cat map. We consider the sum of matrix coefficients corresponding to eigenstates whose eigenphases lie in a randomly chosen window, assuming that the length of the window shrinks with Planck's constant. We show that if the length of the window is smaller than the square root of Planck's constant, but larger than the separation between distinct eigenphases, then the variance of this sum is proportional to the length of the window, with a proportionality constant which coincides with the variance of the individual matrix elements corresponding to Hecke eigenfunctions.
Bounds on exponential sums over small multiplicative subgroups: We show that there is significant cancellation in certain exponential sums over small multiplicative subgroups of finite fields, giving an exposition of the arguments by Bourgain and Chang.
Products in Residue Classes: We consider a problem of P. Erdos, A. M. Odlyzko and A. Sarkozy about the representation of residue classes modulo m by products of two not too large primes. While it seems that even the Extended Riemann Hypothesis is not powerful enough to achieve the expected results, here we obtain some unconditional results ``on average'' over moduli m and residue classes modulo m and somewhat stronger results when the average is restricted to prime moduli m = p. We also consider the analogous question wherein the primes are replaced by easier sequences so, quite naturally, we obtain much stronger results.
The Dynamical Mordell-Lang Conjecture: We prove a special case of a dynamical analogue of the classical Mordell-Lang conjecture. In particular, let \(\phi\) be a rational function with no superattracting periodic points other than exceptional points. If the coefficients of \(\phi\) are algebraic, we show that the orbit of a point outside the union of proper preperiodic subvarieties of \((P^1)^g\) has only finite intersection with any curve contained in \((P^1)^g\). Our proof uses results from p-adic dynamics together with an integrality argument.
A gap principle for dynamics: Let \(f_1,...,f_g \in C(z)\) be rational functions, let \(\Phi=(f_1,...,f_g)\) denote their coordinatewise action on \((P^1)^g\), let \(V \subset (P^1)^g\) be a proper subvariety, and let \(P=(x_1,... ,x_g) \in (P^1)^g(C)\) be a nonpreperiodic point for \(\Phi\). We show that if \(V\) does not contain any periodic subvarieties of positive dimension, then the set of \(n\) such that \(\Phi^n(P) \in V(C)\) must be very sparse. In particular, for any \(k\) and any sufficiently large \(N\), the number of \(n \leq N\) such that \(\Phi^n(P) \in V(C)\) is less than \(\log^k N\), where \(\log^k\) denotes the k-th iterate of the log function. This can be interpreted as an analog of the gap principle of Davenport-Roth and Mumford.
Gaussian point count statistics for families of curves over a fixed finite field: We produce a collection of families of curves whose point count statistics over \(F_p\) become Gaussian for \(p\) fixed. In particular, the average number of \(F_p\)-points on curves in these families tends to infinity.
\begin{document}
\title[Transversality in hyperbolic and parabolic maps]{Transversality in the setting of hyperbolic and parabolic maps } \author{Genadi Levin, Weixiao Shen and Sebastian van Strien}
\maketitle
{\centering\footnotesize Dedicated to Lawrence Zalcman.\par}
\begin{abstract} In this paper we consider families of holomorphic maps defined on subsets of the complex plane, and show that the technique developed in \cite{LSvS1}
to treat unfolding of critical relations can also be used to deal with cases where the critical orbit converges to a hyperbolic attracting or a parabolic periodic orbit. As before this result applies to rather general families of maps, such as polynomial-like mappings, provided some lifting property holds. Our Main Theorem states that either the multiplier of a hyperbolic attracting periodic orbit depends univalently on the parameter and bifurcations at parabolic periodic points are generic, or one has persistency of periodic orbits with a fixed multiplier. \end{abstract}
\section{Introduction} When studying families of maps defined on an open subset of the complex plane, it is useful to have certain transversality properties. For example, do multipliers of attracting periodic points depend univalently on the parameter and do parabolic periodic points undergo generic bifurcations? Building on a method developed in \cite{LSvS1}
we establish such transversality results in a very general setting.
The conclusion of our Main Theorem states that one has either {\em such transversality} or {\em persistency of periodic points with the same multiplier}.
The key assumption in our Main Theorem is a so-called {\em lifting property}
defined in \S\ref{subsec:lifting}. It turns
out that this assumption is applicable in rather general settings,
including
families of maps with an infinite number of singular values, such as polynomial-like mappings and also
maps with essential singularities.
Although the Main Theorem applies to complex maps, let us first mention applications to certain families of real maps. For example, consider the period doubling cascade associated to the family $f_\lambda=\lambda x (1-x)$, $x\in [0,1]$ and $\lambda\in [0,4]$. It is well-known that the multiplier $\kappa(\lambda)$ of the attracting periodic orbit decreases in $\lambda$ diffeomorphically in each interval for which $\kappa(\lambda)\in [-1,1)$ and that one has generic bifurcations when $\kappa(\lambda)=\pm 1$. An application of our result is that the same conclusion holds for families of the form $f_\lambda(x)=\lambda f(x)$ and similarly for $g_c(z)=g(z)+c$ where $f$ and $g$ are rather general interval maps. \iffalse Indeed, the maps we will consider need only be locally defined, and includes families of the form \begin{equation}\label{examples}
f_c(x)=|x|^\ell +c\mbox{ and }g_c(z)= b e^{-1/|z|^\ell} +c. \end{equation} \fi An important feature of our method is that we obtain information about the sign of the derivative $\kappa'$, namely that $\kappa'>0$, see \S\ref{sec:realmaps}.
The main aim of this paper is to obtain results which apply to maps which are defined only locally, e.g., having essential singularities. It turns out that many more classical results can be recovered also. Before stating our results more formally, let us discuss previous results and the approach that is used in this paper.
The study of the dependence of a multiplier on parameters goes back to Douady-Hubbard \cite{DH0,DH1}, who obtained the celebrated result (using Sullivan's quasiconformal surgery) that the multiplier map $\kappa$ uniformizes hyperbolic components of the Mandelbrot set. Milnor \cite{Milnor1} generalized this result to spaces of hyperbolic polynomials and Rees \cite{Rees} to the space of degree two rational maps. Douady-Hubbard theorem is equivalent to the local claim that $\kappa'\neq 0$ whenever $|\kappa|<1$.
In \cite{Le3} and independently in \cite{Ep2} the latter local result is generalized to spaces of polynomials and rational maps.
In \cite{Le3}, a polynomial or a rational function $f$ is considered which has $r$ cycles $O_i$ with corresponding multipliers $\kappa_i\in\overline\mathbb{D}=\{|\kappa|\le 1\}$
such that if, for some $i$, $\kappa_i=1$ then $O_i$ is not degenerate and if $\kappa_i=0$ then $O_i$ contains a single critical point which is simple. Now consider a local space $\Lambda$, containing $f$, of functions with a constant number of different critical values and with constant multiplicities at the critical points. Then the multipliers of those $r$ cycles contribute $r$ independent parameters in the coordinate system of $\Lambda$, see \cite[Theorems 2 and 6]{Le3} for precise statements.
The proof, which is a development of \cite{Le0}, \cite{LSY}, \cite{Le1}, \cite{Le2}, see also \cite{Mak}, relies on a non-trivial identity \cite[Theorems 1 and 5]{Le3} between, on the one hand, a specific function associated to a given cycle $O$ of $f$, and on the other hand, the gradient $\nabla\kappa$ of the multiplier of this cycle as a function of the coordinates of $\Lambda$. If $\kappa=1$, a modification is needed by replacing the gradient by $$\lim(1-\kappa(g))\nabla\kappa(g)$$ as $g\to f$ along $g$ having a cycle near $O$ with multiplier $\kappa(g)\neq 1$. A calculation as in Lemma \ref{lem2.2}
shows that the limit is equal to $$(f^q)''(a)\nabla g^q|_{g=f}(a)$$ (independently of $a\in O$). In particular, if $f(z)=z^2+c_1$ so that
$\Lambda=\{g_w=z^2+w\}$, it gives $$\frac{d}{dw}g_w^q(a_k)|_{w=c_1}\neq 0.$$ This recovers Douady-Hubbard's result \cite{DH1} that every primitive component of the Mandelbrot set has a simple cusp. The Main Theorem
of the present paper, which can be found on page \pageref{thm:main}, is a far-reaching generalization of this fundamental fact to suitable families of local maps. The paper \cite{Le3} also contains a detailed historical account.
In turn, in Epstein's work \cite[Proposition 20]{Ep2} the above result is proven in the case when all $\kappa_i\in\overline\mathbb{D}\setminus\{0,1\}$ under the assumption that all critical points of $f$ are simple and $r$ is maximal (i.e., $r=\deg f-1$ for polynomials and $r=2\deg f-2$ for rational functions). Epstein's work \cite{Ep2}, see also \cite{Ep3}, is a development of his earlier work \cite{Ep0}, \cite{Ep1}; it is
coordinate-free and can also be used to prove transversality when there are critical relations. The techniques build on an approach pioneered by Thurston. In Thurston's approach, when $f$ is a globally defined holomorphic map, $P$ is a finite $f$-forward invariant set containing the postcritical set and the Thurston map $\sigma_f\colon \mbox{Teich}(\widehat{\mathbb{C}}\setminus P) \to \mbox{Teich}(\widehat{\mathbb{C}}\setminus P)$
is defined by pulling back the almost complex structure. It turns out that $\sigma_f$ is contracting, see \cite[Corollary 3.4]{DH2}. In Thurston's result on the topological realisation of rational maps, Douady \& Hubbard \cite{DH2} use that the dual of the derivative of the Thurston map $\sigma_f$ is equal to the Thurston pushforward operator $f_*$ which acts on the space of quadratic differentials. Epstein's approach is to exploit the injectivity of $f_*-id$. One of the innovations in \cite{Ep2} is the introduction of the Teichm\"uller deformation space $\mbox{Def}^B_A(f)$, which can be used in a wide range of settings.
There were other approaches for critically finite rational maps: Sullivan's pull-back argument \cite{Su,McS,MS}, and the work of Milnor and Thurston \cite{MT} and of Tsujii \cite{Tsu0,Tsu}. The initial purpose of the latter was to prove the monotonicity of entropy for the real quadratic family. All these and the previously discussed methods break down if the map is not globally defined, see \cite{LSvS1a} for details.
In \cite{LSvS1,LSvS1a} we develop a method which goes back to Milnor and Thurston's and Tsujii's approaches and allows us to deal with maps which are locally defined.
Milnor and Thurston associated to the space of quadratic maps, and to given combinatorial data on a periodic orbit $P\ni 0$ of $f_{c_1}$ of period $q$, where $f_{c}(z)\equiv z^2+c$, a map which assigns to a $q$-tuple of points a new $q$-tuple of points,
$$F: (z_1,\dots,z_q)\mapsto (\hat z_1,\dots,\hat z_q)$$ where $\hat z_q=0$ and $f_{z_1}(\hat z_i)=z_{i+1 \bmod q}$. Since $F$ is many-valued, Milnor and Thurston considered a lift $\tilde{F}$ of this map to the universal cover and applied Teichm\"uller theory to show that $\tilde{F}$ is strictly contracting in the Teichm\"uller metric of the universal cover.
In \cite{LSvS1,LSvS1a} we bypass this 'global' approach by rephrasing it locally. This is done in the set-up of so-called (locally defined) marked maps (and their local deformations), replacing $F$ by a (non-linear) operation which associates to a holomorphic motion $h_\lambda$ of a finite set $P$ another holomorphic motion $\hat h_\lambda$ of $P$ that we call its {\it lift}. The {\it lifting property} is the assumption that sequences of successive lifts of holomorphic motions are compact. In \cite{LSvS1}, \cite{LSvS1a} we considered the case where the set $P$ is finite and proved that, assuming the lifting property, either some 'critical relation' persists along some non-trivial manifold in parameter space or one has transversality, i.e. the 'critical relation' unfolds transversally. In \cite{LSvS1}, \cite{LSvS1a} we also proved the lifting property for many interesting families of maps. An important feature of our method is that for real maps we obtain {\em positive transversality}, which originated in \cite{Tsu0,Tsu}: some determinant is not only non-zero but has positive sign.
Here we consider maps with a non-finite 'postcritical set' $P$, namely locally defined maps with an attracting or a parabolic periodic orbit. The results of this paper do not aim to replicate the results mentioned previously, so we only consider the case of maps with a single attracting or parabolic periodic orbit. The proof follows the same strategy as in \cite{LSvS1}, \cite{LSvS1a}. Although we do not use the transfer operator, which is defined in \cite{LSvS1}, \cite{LSvS1a} as the linearisation of the lift operator (and is infinite dimensional if $P$ is infinite), the proof here has a strong 'flavor' of it, see the beginning of Section~\ref{sec:ThmA} and Appendix A. As mentioned, the maps we consider are allowed to have essential singularities.
For other results for transversality in the setting of polynomial, rational or finite type maps (which have at most a finite number of singular values), see \cite{Astor,BE, D, DH1, FG,Le00,Le4, LSvS2,Str}.
\subsection{Organisation of this paper} In \S\ref{sec:results} we will state the results of this paper, in \S\ref{sec:outline} we will give an outline of the three steps used in the proof, which will be given in \S\ref{sec:ThmA}-\ref{sec:ThmC}.
Applications to complex and real families will be then given in \S\ref{sec:complex}-\ref{sec:realmaps}.
{\bf Acknowledgments.} We thank Mitsu Shishikura and Michel Zinsmeister for very fruitful discussions. We also thank Alex Eremenko for inspiring us to write this paper, by posing the question which we answer in Theorem~\ref{thm:eremenko}. We would also like to thank the anonymous referee. The authors acknowledge Adam Epstein for pointing out an error/typo at the very end of the proof of Lemma 4.1 in the arXiv version of this paper. This work was started when GL visited Imperial College (London) and continued during his visit at the Institute of Mathematics of PAN (Warsaw). He thanks both Institutions for their generous hospitality. GL acknowledges the support of ISF grant No. 1226/17, WS acknowledges the support of NSFC grant No. 11731003, and SvS acknowledges the support of ERC AdG Grant No. 339523 RGDD.
\section{Statement of results}\label{sec:results}
\subsection{Marked maps and holomorphic deformations} Let $U$ be an open subset of $\mathbb{C}$ and let $g\colon U\to \mathbb{C}$ be holomorphic on $U$. Assume that $c_1\in U$. Then we say that $g$ is a {\em marked map w.r.t. $c_1$} if $c_n=g^{n-1}(c_1)$ is well-defined for all $n\ge 1$ and $\overline P\subset U$ where $P=\{c_n\}_{n=1}^\infty$.
We say that $(g,G)_W$ is a {\em local holomorphic deformation} of $g$ if: \begin{enumerate} \item $W$ is an open connected subset of $\mathbb{C}$ containing $c_1$; \item $G\colon (w,z)\in W\times U \mapsto G_w(z)\in \mathbb{C}$ is a holomorphic map such that $G_{c_1}=g$; \item $DG_w(z)\neq 0$ for all $z\in U$, $w\in W$.
\end{enumerate} Here and later on $D=\frac{\partial}{\partial z}$, $D^k=\frac{\partial^k}{\partial z^k}$. Note that $c_1$ plays the role of a special point in the dynamical space $U$, but it is also a special point in the parameter space $W$.
\begin{example} Let $g \colon U\to \mathbb{C}$ be a marked map w.r.t. $c_1$. Moreover, let $W=\mathbb{C}$ and $G\colon W\times U \to \mathbb{C}$ be defined by $G(w,z)=G_w(z)=g(z)+(w-c_1)$. Then $(g,G)_W$ is a local holomorphic deformation of $g$.
\end{example}
\subsection{Holomorphic motions and the lifting property}\label{subsec:lifting} Recall that $h_\lambda$ is a {\em holomorphic motion} of a set $X\subset \mathbb{C}$ over $(\Lambda, 0)$ where $\Lambda$ is a domain in $\mathbb{C}$ which contains $0$, if
$h_\lambda \colon X\to \mathbb{C}$ satisfies:
(i) $h_0(x)=x$, for all $x\in X$,
(ii) $h_\lambda(x)\ne h_\lambda(y)$ whenever $x\ne y$ and $\lambda\in \Lambda$, $x,y\in X$ and
(iii) for each $x\in X$, the map $\Lambda \ni \lambda \mapsto h_\lambda(x)$ is holomorphic.
Quite often we will consider the special case where $\Lambda$ is equal to $\mathbb{D}_r:=B(0,r)$
where $B(a,r):=\{w\in\mathbb{C}: |w-a|<r\}$.
Let $K$ be a bounded set such that $P\subset K\subset\overline{K}\subset U$ and $g(K)\subset K$. The following notions of the {\it lift} and the {\it lifting property} were introduced and studied in \cite{LSvS1}, \cite{LSvS1a} (in the case of a finite set $K$):
\begin{definition}\label{def:liftingproperty} Given a holomorphic motion $h_\lambda$ of $K$ over $(\Lambda, 0)$,
we say that a holomorphic motion $\hat{h}_\lambda$ of $K$ over $(\Lambda_0, 0)$ where $\Lambda_0\subset \Lambda$,
is a {\em lift} of $h_{\lambda}$ for $(g, G)_W$ if for all $\lambda\in \Lambda_0$ and $x\in K$, \begin{equation}\label{liftdef} G_{h_\lambda(c_1)}(\hat{h}_{\lambda}(x))=h_{\lambda}(g(x)). \end{equation} We say that $(g,G)_W$ has the {\it lifting property for the set} $K$ if the following holds: Given a holomorphic motion $h_\lambda^{(0)}$ of $K$ over $(\Lambda, 0)$, there exist $\varepsilon>0$ and holomorphic motions $h_\lambda^{(n)}$, $n=1,2,\ldots$ of $K$ over $(\mathbb{D}_{\varepsilon}, 0)$ such that for each $n\ge 0$,
\begin{enumerate} \item $h_\lambda^{(n)}(c_1)\in W$ for each $\lambda \in \mathbb{D}_\varepsilon$; \item $h_\lambda^{(n+1)}$ is the lift of $h_\lambda^{(n)}$ over $(\mathbb{D}_\varepsilon, 0)$ for $(g, G)_W$,
\item there exists $M>0$ such that $|h_\lambda^{(n)}(x)|\le M$ for all $x\in K$, $\lambda\in \mathbb{D}_{\varepsilon}$ and $n\ge 0$. \end{enumerate} \end{definition} The next lemma and the subsequent remark clarify this notion: \begin{lemma} Given any holomorphic motion $h_\lambda$ of $K$ over $(\mathbb{D}_{\epsilon_0}, 0)$, there is a sequence of holomorphic motions $h_\lambda^{(n)}$, $n\ge 0$, of $K$ where $h_\lambda^{(0)}=h_\lambda$ and $h_\lambda^{(n)}$ is the lift of $h_\lambda^{(n-1)}$ over $(\mathbb{D}_{\varepsilon_n}, 0)$ with some maximal $\varepsilon_n>0$. \end{lemma} \begin{proof} It is enough to show that given a holomorphic motion $h_\lambda$ of $K$ over $(\mathbb{D}_{\epsilon_0}, 0)$, there exists $\epsilon_1>0$ (which depends on $\epsilon_0>0$ and $h_\lambda$) such that the lift $\hat{h}_\lambda$ of $h_\lambda$ for the set $K$ exists over $\mathbb{D}_{\epsilon_1}$.
Indeed, as $\overline{K}$ is compact it follows from the definition of local holomorphic deformation that there are a finite open covering $\{B(x_j, r_j)\}$ of $K$ and $\epsilon_1>0$ such that, for $\lambda\in\mathbb{D}_{\epsilon_1}$ the following hold: $h_\lambda(c_1)\in W$, for each $j$ the map
$G_{h_\lambda(c_1)}$ is injective on $B(x_j,2r_j)$ and $h_\lambda(g(y))\in G_{h_\lambda(c_1)}(B(x_j,2r_j))$ for $y\in B(x_j,r_j)$. Then define $\hat{h}_\lambda(y):=G_{h_\lambda(c_1)}^{-1}(h_\lambda(g(y)))$ for $y\in B(x_j,r_j)$ where
$G_{h_\lambda(c_1)}^{-1}$ is the inverse map to
$G_{h_\lambda(c_1)}|_{B(x_j,2r_j)}$.
\end{proof}
\begin{remark}
The lifting property means that for any holomorphic motion $h_\lambda$ of $K$ over $(\Lambda, 0)$, there is $\epsilon>0$ such that in the previous lemma $\varepsilon_n>\varepsilon$ holds for all $n\ge 0$ and such that the family $\{h_\lambda^{(n)}(x)\}$ is uniformly bounded on $\mathbb{D}_{\varepsilon}\times K$. \end{remark} \begin{remark}
Note that if $K_1\subset K_2$ are two forward invariant sets then the lifting property for $K_2$ implies the lifting property for $K_1$ (this follows from Slodkowski's generalised lambda lemma \cite{Slod} and also \cite{AIM}). \end{remark}
In~\cite{LSvS1,LSvS1a}, we studied the case when the {\lq}postcritical set{\rq} $P$ is finite and showed that if this lifting property holds then, under suitable circumstances, critical relations unfold transversally. The present paper considers the case where $P$ is an infinite orbit converging to an attracting or neutral periodic orbit and the main result shows that, again, the lifting property implies transversality.
\subsection{The statement of Main Theorem} In this paper we will study marked maps $g$ such that $P=\{c_n\}_{n=1}^\infty$ is an {\em infinite} orbit of $g$ so that $c_n$ converges to a periodic orbit $\mathcal{O}=\{a_0, a_1, \ldots, a_{q-1}\}$ and consider a holomorphic deformation $(g,G)_W$ of $g$. The main theorem of this paper shows that if $(g,G)_W$ satisfies the lifting property, then the orbit $\mathcal{O}$ depends {\lq}transversally{\rq} on the parameter $w$ (in a sense made precise below).
As usual, we say that $\mathcal{O}$ is {\em hyperbolic attracting} if $\kappa:=Dg^q(a_0)\in \mathbb{D}\setminus \{0\}$. We say that $\mathcal{O}$ is {\em non-degenerate parabolic} if there exist $l,p \in \mathbb{Z}$, $p\ge 1$, $(l,p)=1$, such that $\kappa=e^{2\pi i l/p}$ and $D^{p+1} g^{p q} (a_0)\not=0$. Let \begin{equation}\label{eqn:Q}
Q(z):=\left.\frac{d}{dw} G_w^q(z)\right|_{w=c_1}, \end{equation} which is a holomorphic function defined in a neighborhood of $\mathcal{O}\cup P$.
\begin{example}\label{ex:Q} If $\kappa=1$ then non-degeneracy means that $D^2g^q(a_0)\ne 0$. If, moreover, $Q(a_0)\ne 0$ then the Taylor expansion of $G_w^q(z)$ at $z=a_0, w=c_1$ is given by $G_w^q(z)=z+\alpha (z-a_0)^2 + \beta (w-c_1) + h.o.t.$ with $\alpha,\beta\ne 0$. Our Main Theorem states that a certain lifting property implies that either $Q(a_0)\ne 0$ or that the parabolic periodic point persists for all parameters $w$ near $c_1$. \end{example}
We will also use the following subset of the basin of $\mathcal{O}$:
\begin{equation}\label{eqn:Omegar} \Omega_r:=\left\{z\in U: \sup_{n=0}^\infty d(g^n(z),\mathcal{O})\le r, \text{ and } \lim_{n\to\infty} d(g^n(z), \mathcal{O})=0\right\} \mbox{ where } r>0.
\end{equation} In the attracting case, $\bigcup B(a_j,\rho)\subset\Omega_r\subset\bigcup \overline{B(a_j,r)}$ whenever $r$ is small enough and $\rho>0$ is even smaller. In the parabolic case, the Leau-Fatou Flower Theorem (see Lemma~\ref{lem:Omega} in Appendix B) tells us that $(\bigcup B(a_j,\rho))\setminus \Omega_r$, even though non-empty, is located in a very thin region near the repelling directions.
\label{thm:main} \begin{mainthm} Assume that $\mathcal{O}$ is either hyperbolic attracting or non-degenerate parabolic and that $(g,G)_W$ satisfies the lifting property for $P_{r_0}=P\cup \Omega_{r_0}$ for some $r_0>0$. Then one has \begin{enumerate} \item {\em either } the following {\em transversality} property: \begin{equation}\label{eqn:Qtran} D^2g^q(a_0) Q(a_0)\not= Q'(a_0)(\kappa -1); \end{equation} \item {\em or persistency of periodic points with the same multiplier holds:} there is a neighborhood $W_1$ of $c_1$ and holomorphic functions $a_j(w)$ defined in $W_1$ with $a_j(c_1)=a_j$ such that for each $w\in W_1$, $G_w^q(a_j(w))=a_j(w)$ and $DG_w^q(a_0(w))=\kappa$ is constant. \end{enumerate} \end{mainthm} In this paper we will not check whether a particular family of maps satisfies the lifting property, but refer to the results proved in the second part of \cite{LSvS1,LSvS1a} where the following is shown (with obvious changes in the proof): \begin{lemma}
Let $f_w$, $U$ be one of the following. \begin{itemize} \item $f_w(z)=f(z)+w$, $w\in \mathbb{C}$ and $U=D\setminus\{0\}$ with $f\in \mathcal F$ defined in \S\ref{subsec:additive}, \item $f_w(z)=wf(z)$, $w\in\mathbb{C}\setminus\{0\}$, with $f\in \mathcal E\cup \mathcal E_o$ defined in \S\ref{subsec:multiplicative} and $U=D\setminus f^{-1}(1)$ for $f\in \mathcal E$, $U=D\setminus \pm f^{-1}(1)$ for $f\in \mathcal E_o$.
\end{itemize}
Let $c_1$ be attracted to either a hyperbolic attracting or a parabolic periodic orbit of $f_{c_1}$ and let $W_1$ be a neighborhood of $c_1$. Then $g=f_{c_1}|_U$ is a marked map, $G_w=f_w|_U$, $w\in W_1$, is a holomorphic deformation of $g$ and $(g,G)_{W_1}$ satisfies the lifting property for any $P_{r_0}\subset U$. \end{lemma}
The above classes of families include families of maps with an essential singularity, such as $G_w(z)= b e^{-1/|z|^\ell} +w$ for real non-zero $z$. A lifting property also holds within the spaces of rational and polynomial maps, see \cite[Appendix C]{LSvS1a}.
\subsection{Clarifying the transversality condition~(\ref{eqn:Qtran}) and the non-degeneracy condition} \begin{lemma}\label{lem2.2} \begin{enumerate} \item If $\kappa=1$ then (\ref{eqn:Qtran}) implies
\begin{equation}Q(a_j)\not=0\mbox{ for all }j=0,1,\dots,q-1. \end{equation} \item If $\kappa\not = 1$ then there exists a holomorphic function $w\mapsto a_0(w)$ for $w$ near $c_1$ so that $a_0(w)$ is a fixed point of $G_w^q$ for all $w$ near $c_1$ and so that $a_0(c_1)=a_0$. Defining $\kappa(w)=DG_w^q(a_0(w))$
we have that
(\ref{eqn:Qtran}) implies
\begin{equation}\label{eqn:kappa'}
\kappa'(c_1)=\frac{D^2g^q(a_0) Q(a_0)- Q'(a_0)(\kappa-1)}{1-\kappa}\ne 0 .
\end{equation} \end{enumerate} \end{lemma} \begin{proof}
If $\kappa=1$, then (\ref{eqn:Qtran}) is reduced to $Q(a_0)\not=0$, which is equivalent to $Q(a_j)\not=0$ for all $j=0,1,\ldots, q-1$. If
$\kappa\not=1$ then by the Implicit Function Theorem, there exist holomorphic functions $a_j(w)$, defined near $c_1$, such that $a_j(c_1)=a_j$ and $G_w(a_j(w))=a_{j+1}(w)$ for all $0\le j<q$, where $a_q(w)=a_0(w)$. Let $\kappa(w)=DG_w^q(a_0(w))$. To see that (\ref{eqn:kappa'}) holds, let $\mathcal{G}(w,z)=G_w^q(z)$. Then $$\mathcal{G}(w, a_j(w))=a_j(w),\quad \dfrac{\partial \mathcal{G}}{\partial z} (w, a_j(w))=\kappa(w),$$ for each $j$. Differentiating in $w$ and evaluating at $w=c_1$, we obtain $$Q(a_j)=(1-Dg^q(a_j)) a_j'(c_1),\quad Q'(a_j)+D^2g^q(a_j)a_j'(c_1)=\kappa'(c_1).$$ Thus the equality in (\ref{eqn:kappa'}) holds. The inequality in (\ref{eqn:kappa'}) is equivalent to (\ref{eqn:Qtran}). \end{proof}
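As an illustrative sanity check of ours (not part of the argument), formula (\ref{eqn:kappa'}) can be verified numerically on the quadratic family $G_w(z)=z^2+w$ at a parameter $c_1$ with an attracting fixed point ($q=1$). There $Q(z)=\frac{d}{dw}G_w(z)|_{w=c_1}=1$, $Q'(z)=0$ and $D^2g(a_0)=2$, so the formula predicts $\kappa'(c_1)=2/(1-\kappa)$.

```python
# Illustration (our own check, not from the paper): verify the formula for
# kappa'(c1) from Lemma 2.2 on the quadratic family G_w(z) = z^2 + w.
# Here Q = 1, Q' = 0, D^2 g(a0) = 2, so the formula predicts
#     kappa'(c1) = (2*Q(a0) - Q'(a0)*(kappa - 1)) / (1 - kappa) = 2/(1 - kappa).
import math

def fixed_point(w):
    """Attracting fixed point a0(w) of z^2 + w, for real w < 1/4."""
    return (1.0 - math.sqrt(1.0 - 4.0 * w)) / 2.0

def kappa(w):
    """Multiplier DG_w(a0(w)) = 2*a0(w)."""
    return 2.0 * fixed_point(w)

c1 = 0.1
predicted = 2.0 / (1.0 - kappa(c1))                       # formula of Lemma 2.2
h = 1e-6
numerical = (kappa(c1 + h) - kappa(c1 - h)) / (2.0 * h)   # central difference
assert abs(predicted - numerical) < 1e-4
```

The closed form $\kappa(w)=1-\sqrt{1-4w}$ gives $\kappa'(w)=2/\sqrt{1-4w}=2/(1-\kappa)$, in agreement with the finite-difference value.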
\begin{remark} In the parabolic case, the non-degeneracy condition is necessary as shown by the following example. Let $G_w(z)= w\sin z$. Choose $a_0\in (\pi/2, 3\pi/2)$ so that $\tan a_0=-a_0$ and let $w_0=1/\cos a_0$, $g=G_{w_0}$. Then $\mathcal{O}=\{a_0,-a_0\}$ is a cycle of $g$ of period $2$ with $g'(a_0)=g'(a_1)=1$. This parabolic cycle attracts both critical values $w_0$ and $-w_0$ of $g$ and is degenerate. On the other hand,
$$Q(a_0)=\left.\dfrac{d}{dw} G_w^2(a_0)\right|_{w=w_0}=\sin (-a_0)+Dg(-a_0)\sin (a_0) =0.$$ Note that at the parameter $w_0=1/\cos(a_0)\approx-2.26 $ the family of maps $G_w(x)=w\sin(x)$, $w\in \mathbb{R}$, undergoes a pitchfork bifurcation: the attracting period-two orbit of this map for $w\in (w_0,w_0+\epsilon)$ becomes for $w\in (w_0-\epsilon,w_0)$ a repelling period-two orbit and splits off two new periodic orbits, both of which are attracting, see Figure~\ref{fig1} in the last section. \end{remark}
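The degenerate example above is concrete enough to check by machine; the following is a small numerical verification of ours (not part of the paper) that the cycle $\{a_0,-a_0\}$ of $G_{w_0}(z)=w_0\sin z$ has multiplier $1$ while $Q(a_0)=0$.

```python
# Numerical check (ours) of the degenerate example G_w(z) = w*sin(z):
# with tan(a0) = -a0, a0 in (pi/2, 3*pi/2), and w0 = 1/cos(a0), the orbit
# {a0, -a0} is a period-2 cycle with multiplier 1, yet Q(a0) = 0.
import math

# Solve tan(a) = -a on (pi/2, pi) by bisection on h(a) = sin(a) + a*cos(a).
def h(a):
    return math.sin(a) + a * math.cos(a)

lo, hi = math.pi / 2 + 1e-9, math.pi - 1e-9
for _ in range(200):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
a0 = (lo + hi) / 2
w0 = 1.0 / math.cos(a0)

g = lambda z: w0 * math.sin(z)
dg = lambda z: w0 * math.cos(z)

assert abs(g(a0) - (-a0)) < 1e-9            # a0 -> -a0 -> a0: a 2-cycle
assert abs(dg(a0) * dg(-a0) - 1.0) < 1e-9   # multiplier of the cycle is 1
# Q(a0) = d/dw G_w^2(a0)|_{w=w0} = sin(-a0) + Dg(-a0)*sin(a0) = 0
Q = math.sin(-a0) + dg(-a0) * math.sin(a0)
assert abs(Q) < 1e-9
assert abs(w0 - (-2.26)) < 0.01             # matches w0 ~ -2.26 in the text
```

Since $g(a_0)=\tan a_0=-a_0$ and $Dg(\pm a_0)=w_0\cos a_0=1$, all four assertions hold exactly up to floating-point error.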
\subsection{Applications to transversality within additive complex families} \label{subsec:additive} Let us start with complex families of the form $f_c(z)=f(z)+c$, $c\in \mathbb{C}$. Let $\mathcal{F}$ denote the collection of holomorphic maps $f: D\to V$, where \begin{enumerate} \item [(1)] $D$ is a bounded open set in $\mathbb{C}$ with $0\in \overline{D}$; \item [(2)] $V$ is a bounded open set in $\mathbb{C}$; \item [(3)] $f: D\setminus\{0\}\to V\setminus \{0\}$ is an unbranched covering; \item [(4)] The following separation property holds: $V\supset B(0;\diam(D)) \supset D$. \end{enumerate}
\begin{theorem}\label{thm:Main2} Let $f\in \mathcal{F}$ and let $G_w(z)= f(z)+w$. Suppose that $c_1\in\overline{D}$ is such that $g=G_{c_1}$ has an attracting or parabolic cycle $\mathcal{O}=\{a_0,\cdots,a_{q-1}\}$ with multiplier $\kappa\not=0$. Then $c_n=g^{n-1}(c_1)$ is well defined and converges to $\mathcal{O}$ as $n\to\infty$ and the following transversality holds: \begin{equation} \kappa'(c_1)\ne 0\mbox{ if } \kappa\ne 1 \,\, \mbox{ and } \,\, Q(a_j)\ne 0\mbox{ for } a_j\in \mathcal{O}\mbox{ if } \kappa=1.\label{eq:trans} \end{equation} \end{theorem}
\begin{example} The conclusion of the above theorem applies, for example, to the following families: \begin{itemize} \item $f_c(z)=z^d+c$, $c\in \mathbb{C}$, where $d\ge 2$ is an integer.
\item $f_c(x)= b e^{-1/|x|^\ell} +c$ where $\ell\ge 1$, $b> 2(e\ell)^{1/\ell}$ are fixed and $c\in D$. Here $D=U^-\cup U^+$, $U^-=-U^+$, $U^\pm$ are disjoint topological disks symmetric w.r.t. the real axis and $\{0\}=\overline{U^+}\cap \overline{U^-}$. Furthermore, there is $R>1$ such that $f_0:D\to \mathbb{D}_R\setminus \{0\}$ is an unbranched covering, $D\subset \mathbb{D}_R$, and $U^+\cap \mathbb{R}\supset (0,\beta]$ where $\beta>0$ is so that the map $f_{-\beta}$ has the Chebyshev combinatorics: $f_{-\beta}(\beta)=\beta$. \end{itemize} \end{example}
\begin{remark} When $G_c$ is a real family, the sign of $\kappa'$ and $Q(a_0)$ is given in Section~\ref{sec:realmaps}, see also Appendix A. \end{remark} \begin{remark}\label{quadr} For the quadratic family $G_c(z)=z^2+c$ the inequalities in (\ref{eq:trans}) were already known. The inequality $\kappa'(c_1)\ne 0$ for $c_1$ so that $G_{c_1}$ has a hyperbolic attracting periodic point was established in \cite{DH1}. When $c_1$ is real and $G_{c_1}$ has either a hyperbolic attracting or a parabolic periodic point with multiplier $+1$, the signs for $\kappa'(c_1)$ and $Q(a_0)$ were also already known, see for example \cite[Lemma 4.5]{Milnor}. \end{remark}
\subsection{Applications to transversality within multiplicative complex families} \label{subsec:multiplicative} To state our next theorem, we say that $v$ is an {\em asymptotic value} of a holomorphic map $f\colon D\to \mathbb{C}$ if there exists a path $\gamma\colon [0,1)\to D$ so that $\gamma(t)\to \partial D$ and $f(\gamma(t))\to v$ as $t\uparrow 1$. We say that $v$ is a {\em singular value} if it is a critical value or an asymptotic value. Let us next consider families of the form $f_w (z)=w f(z)$ where $f: D\to V$ is a holomorphic map such that: \begin{enumerate} \item[(a)] $D,V$ are open sets which are symmetric w.r.t. the real line so that $f(D)=V$; \item[(b)] Let $I=D\cap \mathbb{R}$; then there exists $c>0$ so that $I\cup \{c\}$ is a (bounded or unbounded) open interval and $0\in \overline I$, $c\in \mbox{int}(\overline{I})$. Moreover, $f$ extends continuously to $\overline{I}$ so that $f(I)\subset \mathbb{R}$ and $\lim_{z\in D, z\to 0} f(z)=0$.
\item[(c)] Let $D_+$ be the component of $D$ which contains $I\cap (c,\infty)$, where $D_+$ might be equal to $D$. Then $u\in D\setminus \{0\}$ and $v\in D_+\setminus \{0\}$, $v\ne u$, implies $u/v\in V$. \end{enumerate}
\noindent Let $\mathcal{E}$ be the class of maps which satisfy $(a)$,$(b)$,$(c)$ and assumption $(d)$: \begin{enumerate} \item[(d)] $f\colon D\to V$ has no singular values in $V\setminus \{0,1\}$ and $c>0$ is minimal such that $f$ has a positive local maximum at $c$ and $f(c)=1$. \end{enumerate} \noindent Similarly let $\mathcal{E}_o$ be the class of maps which satisfy $(a)$,$(b)$,$(c)$ and assumption $(e)$: \begin{enumerate} \item[(e)] $f$ is odd, $f\colon D\to V$ has no singular values in $V\setminus \{0,\pm 1\}$
and $c>0$ is minimal such that $f$ has a positive local maximum at $c$ and $f(c)=1$. \end{enumerate}
The class $\mathcal{F}$ was introduced in \cite{LSvS1} and classes $\mathcal{E}$, $\mathcal{E}_o$ in \cite{LSvS1,LSvS1a} and include maps for which $V$ or $D$ are bounded sets.
\begin{theorem}\label{thm:Main3} Let $f: D\to V$ be a holomorphic map from $\mathcal{E}\cup \mathcal{E}_o$ and define $g=c_1\cdot f\colon D \to c_1\cdot V$ where we assume that $c_1\in D_+\setminus \{0\}$. Assume that $g$ has a hyperbolic attracting or a parabolic cycle $\mathcal{O}=\{a_0,\cdots,a_{q-1}\}\subset D\setminus\{0\}$ with multiplier $\kappa$. Take $G_w(z)=w f(z)$ where $w\in W:=D_+\setminus \{0\}$. Then at $w=c_1$, one of the following holds: \begin{itemize}
\item transversality holds: $$ \kappa'(c_1)\ne 0\mbox{ if } \kappa\ne 1 \,\, \mbox{ and } \,\, Q(a_j)\ne 0\mbox{ for } a_j\in \mathcal{O}\mbox{ if } \kappa=1,$$ \item $f\in \mathcal{E}_o$ and $\mathcal{O}$ is symmetric with respect to the origin. \end{itemize} \end{theorem}
\begin{example} The conclusion of the previous theorem applies for example to the following families:
\begin{itemize} \item $f_b(z)=b z(1-z)$, $b\in \mathbb{C}\setminus \{0\}$, \item $f_b (z)=b \exp(z) (1-\exp(z))$, $b\in \mathbb{C}\setminus \{0\}$, \item $f_b (z)=b [\sin(z)]^2$, $b\in \mathbb{C}\setminus \{0\}$, \item $f_b (z)=b \sin(z)$, $b\in \mathbb{C}\setminus \{0\}$,
\item $f_b(z)=bf(z)$ where $f$ is the unimodal map $f\colon [0,1]\to [0,1]$ defined by
$$f(x)=\exp(2^\ell) \left( -\exp(-1/|x-1/2|^\ell) + \exp(-2^\ell) \right)$$ satisfying $f(0)=f(1)=0$, $f(1/2)=1$, which has a flat critical point at $c=1/2$. Here $\ell\ge \ell_0$ where $\ell_0$ is chosen sufficiently large. This implies that $f$ has an extension $f\colon D\to V$ which is in $\mathcal E_o$, where
$V$ is a punctured bounded disc and $D$ consists of two components $D_-\cup D_+$.
Here we assume that the parameter $b\in D_+$. \end{itemize} \end{example}
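For the reader's convenience, let us verify the stated values of the flat map in the last item. As $x\to 1/2$ we have $\exp(-1/|x-1/2|^\ell)\to 0$, so $$f(1/2)=\exp(2^\ell)\cdot \exp(-2^\ell)=1,$$ while at $x=0$ and $x=1$ we have $|x-1/2|=1/2$ and hence $$f(0)=f(1)=\exp(2^\ell)\left(-\exp(-2^\ell)+\exp(-2^\ell)\right)=0.$$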
\begin{remark} The classes $\mathcal{E}$ and $\mathcal{E}_o$ both contain maps for which the set of singular values has infinite cardinality. \end{remark}
\subsection{Periodic points do not disappear after they are born} For real maps, additional arguments allow us to obtain the sign of $\kappa'(c_1)$ and $Q(a_0)$.
Each $f\in \mathcal{E}\cup \mathcal{E}_o$ naturally defines a unimodal map $f: J:=(0, b)\to\mathbb{R}$ where $$b=\sup\{b'\in I: b'>0 \text{ and }f(x)>0\mbox{ for }x\in(0,b')\}.$$ We denote by $\mathcal{E}_u$ and $\mathcal{E}_{o,u}$ the collection of unimodal maps obtained in this way from maps in $\mathcal{E}$ and $\mathcal{E}_o$ respectively. Recall that $c$ is the turning point of $f$ in $J$.
We denote by $\mathcal{F}_u^+$ (resp. $\mathcal{F}_u^-$) the collection of $C^1$ unimodal maps $f: J\to \mathbb{R}$, where $J\ni 0$ is an open interval, such that $f|J\setminus \{0\}$ allows an extension to a map $F:D\to V$ in $\mathcal{F}$ with $J\setminus \{0\}=(D\cap \mathbb{R})\setminus \{0\}$ and such that $f$ has a maximum (resp. minimum) at $0$. Put $c=\{0\}$.
\begin{theorem}\label{thm:eremenko} Consider a family of unimodal maps $f_t$ satisfying one of the following: \begin{enumerate} \item[(i)] $f_t=f+t$ with $f\in \mathcal{F}_u^+$ and $t\in J$; \item[(ii)] $f_t= t\cdot f$ with $f\in \mathcal{E}_u\cup \mathcal{E}_{o,u}$ and $t\in J$. \end{enumerate} Suppose $f_{t_*}$, $t_*>c$, has a periodic cycle $\mathcal{O}$. Then for any $t\in J$ with $t\ge t_*$, $f_t$ has a periodic cycle $\mathcal{O}_t$ of the same period, such that $\mathcal{O}_t$ depends continuously on $t$ and $\mathcal{O}_{t_*}=\mathcal{O}$.
Similarly for \begin{enumerate} \item [(iii)] $f_t=f+t$ with $f\in\mathcal{F}_u^-$ and $t\in J$, \end{enumerate} if $f_{t_*}$, $t_*<c$, has a periodic cycle $\mathcal{O}$, then for any $t\in J$ with $t\le t_*$, $f_t$ has a periodic cycle $\mathcal{O}_t$ of the same period, such that $\mathcal{O}_t$ depends continuously on $t$ and $\mathcal{O}_{t_*}=\mathcal{O}$. \end{theorem}
\section{Outline of the proof of the Main Theorems} \label{sec:outline} The setting in this paper (and its purpose) is similar to that in \cite{LSvS1}, \cite{LSvS1a}, except that there the case where the postcritical set is finite is considered. Here we will follow the same strategy of proof as in those papers. So let us recall the main steps in the proof of Theorem 2.1 of \cite{LSvS1} or the Main Theorem of \cite{LSvS1a}:
\begin{enumerate}[leftmargin=8mm] \item[(A)] Assume that transversality fails.
Then there exists a holomorphic motion $h_\lambda$ of $P$ over $(\mathbb{D}_\epsilon,0)$ with the speed $v$ at $\lambda=0$: $$\frac{dh_\lambda(c_n)}{d\lambda}\Big\vert_{\lambda=0}=v(c_n) =\frac{d}{dw}\left.
G_w^{n-1}(w)\right |_{w=c_1} \mbox{ for $n=1,2,\cdots$.}$$ \item[(B)] Let $h^{(0)}_\lambda=h_\lambda$ and $h^{(k+1)}_\lambda$ be the lift of $h^{(k)}_\lambda$ for $k=0,1,2,\cdots$. By the lifting property, all holomorphic motions $h^{(k)}_\lambda$ of $P$ are defined over $(\mathbb{D}_{\hat\epsilon},0)$ for some $\hat\epsilon>0$ and are uniformly bounded. Moreover, by (A), all $h^{(k)}_\lambda$ are asymptotically invariant of order $m=1$, i.e., $$h^{(k+1)}_\lambda(c_n)-h^{(k)}_\lambda(c_n)=O(\lambda^{m+1})$$ with $m=1$. Consider averages \begin{equation} \hat h_\lambda^{(k)}(z)= \dfrac{1}{k} \sum_{i=0}^{k-1} h_\lambda^{(i)}(z),\label{eq:av} \end{equation} $k=1,2,\cdots$ and let $\hat h_\lambda$ be a limit map of $\hat h_\lambda^{(k)}$ along a subsequence. Then:
\item[(B1)] $\hat h_\lambda$ is again a holomorphic motion of $P$ over (perhaps, smaller) disk,
\item[(B2)] $\hat h_\lambda$ is asymptotically invariant of order $m+1=2$.
\item[(B3)]
Repeating the procedure, we find that for every $m=1,2,\dots$ there is a holomorphic motion which is asymptotically invariant of order $m$.
\item[(C)] Finally, (B3) yields that the {\lq}critical relation{\rq}
persists for all $w$ in a manifold containing $c_1$ of dimension $>0$. \end{enumerate}
When $P$ is a finite set, steps (A) and (B1) are straightforward. In the current set-up this can also be made to work, as is shown in this paper, but sometimes with considerable technical effort. Moreover, we need to require the lifting property to be satisfied not only on the postcritical set $P$ but also on a bigger set which is a local basin of attraction of either a hyperbolic or a parabolic cycle.
\begin{definition} A holomorphic motion $h_\lambda$ of $P_{r_0}:=P\cup \Omega_{r_0}$
over $(\mathbb{D}_\varepsilon,0)$ is called {\em admissible} if for each $\lambda\in \mathbb{D}_\varepsilon$, $z\mapsto h_\lambda(z)$ is holomorphic in the interior of $\Omega_{r_0}$. It is called {\em asymptotically invariant of order $m$} if for each $z\in P$, $$\widehat{h}_\lambda(z)- h_\lambda(z)=O(\lambda^{m+1}) \text{ as } \lambda\to 0$$ where $\widehat{h}_\lambda$ is the lift of $h_\lambda$. \end{definition}
If $h_\lambda$ is asymptotically invariant of order $m$, then: (1) its lift $\widehat{h}_\lambda$ is asymptotically invariant of order $m$, too, and (2) $G_{h_\lambda(c_1)}(h_\lambda(x))=h_{\lambda}(g(x))+O(\lambda^{m+1})$, $x\in P$. See \cite{LSvS1a} for details.
\newtheorem*{ThmA}{Theorem A} \newtheorem*{ThmB}{Theorem B} \newtheorem*{ThmC}{Theorem C}
The proof of the Main Theorem is broken into the following three steps. \begin{ThmA} Assume transversality fails, i.e., equality holds in (\ref{eqn:Qtran}). Then there exists an admissible holomorphic motion $H_\lambda$ of the set $P_{r_0}$ over $(\mathbb{D}_\epsilon,0)$ for some $\epsilon>0$ such that $$\frac{dH_\lambda}{d\lambda}\Big\vert_{\lambda=0}(c_n)=v(c_n), \ n=1,2,\cdots,$$
where $v(c_n)=\left.\frac{d}{dw}G_w^{n-1}(w)\right|_{w=c_1}.$ In particular, the holomorphic motion is asymptotically invariant of order $1$. \end{ThmA}
\begin{ThmB} For any $m\ge 1$, if there is an admissible holomorphic motion $h_{\lambda}$ of the set $P_{r_0}$ over some $\mathbb{D}_{\varepsilon}$ which is asymptotically invariant of order $m$, then there is an admissible holomorphic motion $\widetilde{h}_{\lambda}$ of the set $P_{r_0}$ over some $\mathbb{D}_{\varepsilon'}$ which is asymptotically invariant of order $m+1$ such that $$\widetilde{h}_\lambda(z)-h_\lambda(z)=O(\lambda^{m+1})\text{ as } \lambda\to 0, \text{ for any } z\in P.$$ \end{ThmB}
\begin{ThmC} Suppose for any $m\ge 1$, there is an admissible holomorphic motion $h_{\lambda,m}$ of $\overline{P}_{r_0}$ over $(\mathbb{D}_{\varepsilon_m},0)$ for some $\varepsilon_m>0$ such that $$\dfrac{d}{d\lambda}h_{\lambda,m}(c_1)\Big\vert_{\lambda=0}=1$$ and $h_{\lambda,m}$ is asymptotically invariant of order $m$. Then the second alternative of the Main Theorem holds. \end{ThmC}
\begin{proof}[Proof of the Main Theorem] Assume that the transversality condition fails. Then by Theorems A and B, we obtain a sequence of admissible holomorphic motions $h_{\lambda,m}$ of $\overline{P}_{r_0}$ satisfying the assumption of Theorem C. Thus the second alternative of the Main Theorem holds. \end{proof} Theorems A and B will be proved in Sections~\ref{sec:ThmA} and~\ref{sec:ThmB} respectively, where the hyperbolic case is much easier and will be done first. Theorem C will be proved in Section~\ref{sec:ThmC}.
\subsection{How to construct admissible holomorphic motions} We end this section with the following lemma which is useful in constructing admissible holomorphic motions.
\begin{lemma} \label{lem:improvemotion} Let $h_\lambda$ be a holomorphic motion of $P_{r_0}$ over $(\mathbb{D}_\varepsilon,0)$ for some $\varepsilon>0$ which is asymptotically invariant of order $m$. Assume that for each $K_0>1$, there is a $g$-invariant open set $W\subset P_{r_0}$ such that \begin{enumerate} \item $z\mapsto h_\lambda(z)$ is $K_0$-qc in $W$ for all $\lambda\in \mathbb{D}_\varepsilon$; \item $\Omega_{r_0}\subset \bigcup_{n=0}^\infty g^{-n}(W)$. \end{enumerate} Then there exists an admissible holomorphic motion $\check{h}_\lambda$ of $P_{r_0}$ over $(\mathbb{D}_{\varepsilon'},0)$ for some $\varepsilon'>0$ such that \begin{equation}\label{eqn:hwidehath} h_{\lambda}(c_k)-\check{h}_{\lambda}(c_k)=O(\lambda^{m+1})\text{ for each }k\ge 1. \end{equation} \end{lemma}
\begin{proof} By the lifting property, after restricting to a smaller disk $\mathbb{D}_{\varepsilon'}$, the holomorphic motion $h_\lambda$ admits successive lifts $h_\lambda^{(n)}$ of $P_{r_0}$ over $\mathbb{D}_{\varepsilon'}$.
By compactness of holomorphic motions, there exists $n_k\to\infty$, such that $h_\lambda^{(n_k)}$ converges to a holomorphic motion $\check{h}_\lambda$ of $P_{r_0}$ over $\mathbb{D}_{\varepsilon'}$ locally uniformly.
For each $k\ge 1$, by asymptotic invariance of $h_\lambda$, $h_\lambda^{(n)}(c_k)-h_\lambda(c_k)=O(\lambda^{m+1})$, hence (\ref{eqn:hwidehath}) holds.
Let us prove that $z\mapsto \check{h}_\lambda(z)$ is holomorphic in $\Omega_{r_0}^o$. To this end, it suffices to show that $\check{h}_\lambda$ is $K_0$-qc in $\Omega_{r_0}^o$ for each $K_0>1$. Given $K_0>1$, let $W$ be given by the assumption.
For each $z_0\in \Omega_{r_0}^o$, there is a neighborhood $Z$ of $z_0$ and $n_0\ge 1$ such that $g^{n_0}(Z)\subset W$, and hence $g^n(Z)\subset W$ for all $n\ge n_0$. Since $$h_\lambda^{(0)}(g^n(z))= G_{h_\lambda^{(0)}(c_1)}\circ \cdots \circ G_{h_\lambda^{(n-1)}(c_1)} (h_\lambda^{(n)}(z)),$$ it follows that $h_\lambda^{(n)}$ is $K_0$-qc in $Z$ for each $n\ge n_0$. Therefore, $\check{h}_\lambda$ is $K_0$-qc in $Z$.
\end{proof}
\iffalse Here is a brief outline of the organisation of this paper. In the next section we recall (and slightly generalise) the procedure of averaging holomorphic motions which is originated in \cite{LSvS1,LSvS1a}. The proof of the Main Theorem in the hyperbolic case is given in Section~\ref{sect:hyp}. It is substantially simpler than the proof in the parabolic case and gives a clear illustration of the strategy (A)-(D) as it avoids some of the main technical problems appearing in the proof of the Main Theorem in the parabolic case.
Let us now summarize the main points of the proof in the parabolic case. Assume that $g$ has a parabolic periodic orbit and that transversality fails. Let us fix an open set $\Omega$ as in Lemma~\ref{lem:Omega} and let $K=\overline{\Omega}\cup P$. Step (A) amounts to proving there exists a holomorphic motion with the required initial speed in $P$:
\begin{theorem} \label{thm:existencemotion} Assume transversality fails. There exists a holomorphic motion $H_\lambda$ of the set $K$ over some $\mathbb{D}_\epsilon$ such that $$\frac{dH_\lambda}{d\lambda}\Big\vert_{\lambda=0}(c_n)=v(c_n), \ n=1,2,\cdots,$$ and, moreover, $H_\lambda(z)$ is holomorphic in $z\in \Omega$ for every $\lambda\in \mathbb{D}_\epsilon$. \end{theorem}
In Step (B1) we need to show that averaging holomorphic motions as in ~(\ref{eq:av}) gives again a holomorphic motion. To do this, we will use the following result (which holds in a general context):
\newtheorem*{thm:diff-frac}{Theorem~\ref{thm:diff-frac}} \begin{thm:diff-frac} Given $r, \epsilon>0$ small enough, $R>0$ and $K\in (1,3/2)$ there is a constant $C_*>0$ with the following property. Let $\Omega_*$ be a cardioid-like set at $0$ so that $\Omega_*\cup \mathcal C_\epsilon\supset \overline{B(0,3r)}$. Let $H:B(0,3r)\to B(0,R)$ be a $K$-quasiconformal mapping which is conformal in $\Omega_*$ and assume that $\overline{B(-r,r)}\subset \Omega_*\cup \{0\}$. Then for all $z_1,z_2\in B(-r,r)$ and all $a\in B(-r,r)$, $z_1,z_2\ne a$.
\begin{equation*} \left| \dfrac{H(z_1)-H(a)}{z_1-a}-\dfrac{H(z_2)-H(a)}{z_2-a} \right| \le C_* |z_1-z_2|^{2/5}. \end{equation*} \end{thm:diff-frac} Steps (B2) and (C) essentially amount to repeating an argument from the proof of the main theorem of \cite{LSvS1,LSvS1a} which will give:
\newtheorem*{thm:asymptinv1}{Theorem~\ref{thm:asymptinv1}} \begin{thm:asymptinv1} For each $m\ge 1$ \begin{enumerate} \item[(a)] there exist $\epsilon_m>0$ and a holomorphic motion $H_\lambda^{(m)}$ of $P$ over $\mathbb D_{\epsilon_m}$ such that $H_\lambda^m(c_1)=c_1+\lambda+O(\lambda^2)$ and, for all $i\ge 0$, $$G^q_{H_\lambda^m(c_1)} (H_\lambda^m(c_{qi+1}))=H_\lambda^m(c_{qi+1+q})+O(\lambda^{m+1});$$ \item[(b)] $H_\lambda^{m+1}(c_{qi+1})-H_\lambda^{m}(c_{qi+1})=O(\lambda^{m+1})$, $i\ge0$
\end{enumerate} \end{thm:asymptinv1}
Step (D) and the proof of the Main Theorem in the parabolic case will then be completed in the last Section~\ref{sec:stepD}. The applications will then be covered in the final section.
\fi \iffalse In this section we will show that one can find a sequence of holomorphic motions $H^{(m)}_\lambda$ which are invariant of order $m$ as in statement (a) of the next theorem.
\begin{theorem}\label{thm:asymptinv1} Let $(g,G)_W$ be a local holomorphic deformation of $g$ and assume that $(g,G)_W$ has the lifting property of $P$ where as before $\overline{P}\subset U$ and $P=\{c_n\}_{n=1}^\infty$. Then For each $m\ge 1$ there exists $\epsilon_m>0$ and a holomorphic motion $H_\lambda^{(m)}$ of $P$ over $\mathbb D_{\epsilon_m}$ such that \begin{enumerate} \item[(a)] $H_\lambda^{(m)}$ is asymptotically holomorphic of order $m$: \begin{equation*} \begin{array}{rl} H_\lambda^m(c_1)&=c_1+\lambda+O(\lambda^2) \\ G^q_{H_\lambda^m(c_1)} &(H_\lambda^m(c_{qi+1}))=H_\lambda^m(c_{qi+1+q})+O(\lambda^{m+1}) \mbox{ for all }i\ge 0;\end{array} \end{equation*} \item[(b)] $H_\lambda^{(m)}$ extends to a holomorphic motion of $\Omega$ so that $H_\lambda^{(m)}(z)$ is holomorphic in $z\in \Omega$ for each $\lambda\in \mathbb D_{\epsilon_m}$; \item[(c)] $H_\lambda^{m+1}(c_{qi+1})-H_\lambda^{m}(c_{qi+1})=O(\lambda^{m+1})$, $i\ge0$.
\end{enumerate} \end{theorem} \begin{proofof}{Theorem~\ref{thm:asymptinv1}} \fi
\iffalse
\begin{proof} See \cite[Lemma 3.3]{LSvS1,LSvS1}.
\end{proof}
We now complete the proof of the theorem by induction in $m=1,2,\cdots$. First take $m=1$. Let $H_\lambda^{(1)}=H_\lambda$ and $\epsilon_1=\epsilon$ where $H_\lambda$ is a holomorphic motion of $K$ over $\mathbb{D}_\epsilon$ as in Theorem~\ref{average}. In particular, it is asymptotically invariant of order $1$. Then (a) and (b) for $m=1$ follow. Now apply Theorem~\ref{thm:diff-frac} taking $h_\lambda=H_\lambda^{(1)}$. Resulting map $H_\lambda^{(2)}:=\mathcal{H}[H_\lambda^{(1)}]$ satisfies (c). Indeed, $\mathcal{H}$ is constructed by two steps (1)-(2) where because of Proposition~\ref{mtom+1} step (1) increases the asymptotical invariance by $1$, while step (2) preserves the order of invariance. The general induction step from $m$ to $m+1$ goes in the same way. \end{proofof} \fi
\section{Admissible holomorphic motions of asymptotic invariance order one}\label{sec:ThmA}
In this section, we shall prove Theorem A. Let $$v(c_n)=\left.\frac{d}{dw} G_w^{n-1}(w) \right|_{w=c_1},\,\, n\ge 1.$$ So we have \begin{equation}\label{vrec} v(c_{n+1})=L(c_n)+Dg(c_n) v(c_n), \, n\ge 1, \end{equation} where \begin{equation} \label{eqn:Lz}
L(z)=\left.\frac{\partial G_w(z)}{\partial w}\right|_{w=c_1}. \end{equation}
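For instance, the recursion (\ref{vrec}) is obtained by differentiating $G_w^{n}(w)=G_w\big(G_w^{n-1}(w)\big)$ with respect to $w$ at $w=c_1$: since $G_{c_1}=g$ and $G_{c_1}^{n-1}(c_1)=c_n$, the chain rule gives $$v(c_{n+1})=\left.\frac{d}{dw}G_w\big(G_w^{n-1}(w)\big)\right|_{w=c_1}=\left.\frac{\partial G_w}{\partial w}\right|_{w=c_1}(c_n)+Dg(c_n)\,v(c_n)=L(c_n)+Dg(c_n)\,v(c_n).$$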
Differentiating (\ref{liftdef}) at $\lambda=0$ we see from (\ref{vrec}) that if $H_\lambda$ is a holomorphic motion of $P_r$ (or simply $P=\{c_n\}\subset P_r$) with $d H_\lambda/d\lambda|_{\lambda=0}(c_n)=v(c_n)$, $n\ge 1$, then $H_\lambda$ is asymptotically invariant of order $1$.
Below we will use the following formula (which follows immediately by induction): \begin{equation} Q(z):=\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c_1} (z) =\sum_{i=0}^{q-1} Dg^i(g^{q-i}(z)) L(g^{q-i-1}(z)). \label{eq:partGL} \end{equation}
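Indeed, (\ref{eq:partGL}) follows by induction on $q$: differentiating $G_w^q=G_w\circ G_w^{q-1}$ at $w=c_1$ gives $$\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c_1}(z)=L\big(g^{q-1}(z)\big)+Dg\big(g^{q-1}(z)\big)\,\dfrac{\partial G^{q-1}_w}{\partial w}\Big\vert_{w=c_1}(z),$$ and the chain rule identity $Dg\big(g^{q-1}(z)\big)\,Dg^{i}\big(g^{q-1-i}(z)\big)=Dg^{i+1}\big(g^{q-(i+1)}(z)\big)$ turns the $i$-th term of the sum for $q-1$ into the $(i+1)$-st term of the sum for $q$, while $L(g^{q-1}(z))$ is the term $i=0$.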
\subsection{The hyperbolic case}
The following lemma is essentially contained in \cite[Lemma 6.10 and Remark 6.7]{ALdM}, but we add the proof for the reader's convenience.
\begin{lemma}\label{lem:coho} Let $f:\mathbb{D}\to \mathbb{D}$ be a holomorphic injection such that $f(0)=0$ and $\kappa=f'(0)\in \mathbb{D}\setminus \{0\}$ and let $\Gamma:\mathbb{D}\to \mathbb{C}$ be a holomorphic function. Let $a\in \mathbb{D}\setminus \{0\}$ and $b\in \mathbb{C}$ be arbitrary. Assume that \begin{equation}\label{eqn:hypcoh} \Gamma(0)f''(0)-\Gamma'(0)(f'(0)-1)=0. \end{equation} Then there exists a holomorphic map $w:\mathbb{D}\to \mathbb{C}$ such that $w(a)=b$ and \begin{equation}\label{eqn:coh} w\circ f(z)= \Gamma(z)+ f'(z) w(z). \end{equation} \end{lemma} \iffalse \begin{remark} 1. Applying a coordinate change, i.e., replacing $f$ by a conjugate map $\tilde f=\phi^{-1}\circ f\circ\phi$. results in changing the initial equation~(\ref{eqn:coh}) to another one of the same form with $f,v,Q$ replaced respectively by $\tilde f$, $\tilde v=(v\circ \phi^{-1}) (\phi'\circ \phi^{-1})$ and $\tilde Q=Q\circ \phi^{-1}$. In particular, for $\phi$ to be the Koenigs coordinate of $f$ at $0$ we have $f(z)=\kappa z$.
2. The condition (\ref{eqn:hypcoh}) is also necessary. However, the solutions of the equation (\ref{eqn:coh}) are far from being unique. Indeed, if $v$ is a solution and $f(z)=\kappa z$, then $v(z)+ k z$ is also a solution for any $k\in \mathbb{C}$. \end{remark} \fi
\begin{proof} Let $\varphi: \mathbb{D} \to \mathbb{C}$ denote the Koenigs linearization, i.e., $\varphi$ is a conformal map onto its image, with $\varphi(0)=0$, $\varphi'(0)=1$ and $\varphi (f(z))=\kappa \varphi(z)$ for all $z\in \mathbb{D}$. Write $\widetilde{w}(z)=w\circ \varphi^{-1}(z)\varphi'(\varphi^{-1}(z))$, and $\widetilde{\Gamma}(z)=\Gamma(\varphi^{-1}(z)) \varphi'(f\circ \varphi^{-1}(z))$. Since $\varphi'(f(z))f'(z)=\kappa \varphi'(z)$, the equation (\ref{eqn:coh}) is reduced to the following form: \begin{equation}\label{eqn:coh1} \widetilde{w}(\kappa z)=\widetilde{\Gamma}(z)+ \kappa \widetilde{w}(z). \end{equation} From $\varphi(f(z))=\kappa \varphi(z)$, we obtain $$\varphi'(f(z))f'(z)=\kappa \varphi'(z)$$ and $$\varphi''(f(z))f'(z)^2+\varphi'(f(z)) f''(z) =\kappa \varphi''(z),$$ hence $\varphi''(0)=f''(0)/(\kappa-\kappa^2)$. Thus the condition (\ref{eqn:hypcoh}) is equivalent to $\widetilde{\Gamma}'(0)=0$. Under this condition, $$u(z)=-\kappa^{-1}\sum_{n=0}^\infty \widetilde{\Gamma}'(\kappa^n z)$$ defines a holomorphic map satisfying $\kappa u(\kappa z) =\widetilde{\Gamma}'(z)+\kappa u(z).$ Let $\widetilde{w}$ be a holomorphic map such that $\tilde{w}(\varphi(a))=\varphi'(a)b$, $\tilde{w}(0)=\tilde{\Gamma}(0)/(1-\kappa)$ and such that $\tilde{w}''(z)=u'(z)$. Then it solves the equation (\ref{eqn:coh1}) and $w(z)=\widetilde{w}\circ \varphi(z)/\varphi'(z)$ solves the equation (\ref{eqn:coh}) with $w(a)=b$.
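Let us also verify that $u$ satisfies the claimed relation. Since $\widetilde{\Gamma}'(0)=0$, we have $\widetilde{\Gamma}'(\kappa^n z)=O(|\kappa|^n|z|)$, so the series defining $u$ converges locally uniformly, and $$\kappa u(\kappa z)-\kappa u(z)=-\sum_{n=0}^\infty \widetilde{\Gamma}'(\kappa^{n+1}z)+\sum_{n=0}^\infty \widetilde{\Gamma}'(\kappa^{n}z)=\widetilde{\Gamma}'(z),$$ which is the relation obtained by differentiating (\ref{eqn:coh1}).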
\end{proof} \iffalse Passing to the Koenigs coordinate, we may assume that $f(z)=\kappa z$. By adding a constant to $Q$, we may assume that $Q(0)=0$. Then $Q'(0)=0$, and hence the series $$w(z)= - \frac{1}{\kappa} \sum_{j=0}^\infty Q'(\kappa^j z) $$ converges uniformly in $\mathbb{D}_\varepsilon$ for some $\varepsilon>0$. Integrating $w$ we obtain the desired $v$. \fi \iffalse Let $g: U\to \mathbb{C}$ be a holomorphic map and let $P=\{c_n\}_{n=1}^\infty$ be an {\em infinite} orbit of $g$ such that $c_n$ converges to a hyperbolic attracting fixed periodic orbit $\{a_0, a_1, \ldots, a_{q-1}\}$. Let $(g,G)_W$ be a holomorphic deformation of $g$. Let $a_j(w)$ denote the analytic continuation of $a_j$, well-defined in a neighborhood of $c_1$ in $W$. Let $$\kappa(w)=DG_w^q(a_j(w))$$ denote the multiplier. \fi \begin{proof}[{\bf Proof of Theorem A in the attracting case}] Let $\delta>0$ be such that $g^q$ maps $B(a_0, \delta)$ injectively into $B(a_0,\delta)$ and let $N$ be such that $c_N\in B(a_0,\delta)$. By assumption, $$Q(a_0) D^2 g^q (a_0)=Q'(a_0)(\kappa-1).$$ So by Lemma~\ref{lem:coho}, there is a holomorphic map $w: B(a_0,\delta)\to\mathbb{C}$ such that $$w(g^q(z))=Q(z)+ Dg^q(z) w(z) \text{ for } z\in B(a_0,\delta),$$ and $w(c_N)=v(c_N)$. The function $w$ extends naturally to a map $w: P_{r_0}\to\mathbb{C}$ that satisfies $w(g(z))=L(z)+Dg(z) w(z)$ which is holomorphic in $\Omega_{r_0}^o$. Since $w(c_N)=v(c_N)$, and $v(c_{n+1})=L(c_n)+Dg(c_n) v(c_n)$, it follows that $w(c_n)=v(c_n)$ for all $n\ge N$.
In particular, $w|P_{\delta/2}$ is Lipschitz. Thus $H_\lambda(z):= z+\lambda w(z)$ defines a holomorphic motion of $P_{\delta/2}$ over $(\mathbb{D}_{\varepsilon},0)$ for some $\varepsilon>0$. Since every point in $\Omega_{r_0}$ eventually lands in $\bigcup B(a_j,\delta/2)$, applying Lemma~\ref{lem:improvemotion} completes the proof.
\end{proof}
\subsection{The parabolic case} \begin{lemma}\label{prop:vC1} Let $W$ be a neighborhood of $0$ in $\mathbb{C}$ and let $f, \Gamma: W\to \mathbb{C}$ be holomorphic functions with $f(0)=0$, $f'(0)=e^{2\pi i l/p}$ and $D^{p+1}f^p(0)\not=0$, where $l, p\in \mathbb{Z}$, $p\ge 1$ and $(l,p)=1$, and with \begin{equation}\label{eqn:QQ'tan} \Gamma'(0)(f'(0)-1)= \Gamma(0) f''(0). \end{equation} Let $P=\{z_n: n\ge 1\}\subset W$ be an infinite orbit of $f$ such that $z_n=f^{n-1}(z_1)\to 0$ and let $v: P\to \mathbb{C}$ be a function such that $$v(z)f'(z)+ \Gamma(z)= v(f(z)), \mbox{ for each } z\in P.$$ Then $v$ extends to a $C^1$ map $V: \mathbb{C}\to \mathbb{C}$ such that $\overline{\partial} V(0)=0$. \end{lemma} \begin{proof} {\bf Step 1.} Let us prove that there exists a polynomial $h$ and a holomorphic function $\widehat{\Gamma}$ defined near $0$ such that
\begin{equation}\label{eqn:boldQ} D^j\widehat{\Gamma}(0)=0, \text{ for } j=0,1,\cdots, p+1, \end{equation} and such that
\begin{equation}\label{eqn:step1} \textbf{v}(z_n)(f^p)'(z_n)+\widehat{\Gamma}(z_n)=\textbf{v}(z_{n+p}), \end{equation} for all $n$ large enough, where $\textbf{v}(z)= v(z)+h(z)$.
We first deal with the case $p=1$. Then $f''(0)\not=0$ and $\Gamma(0)=0$. Define $h_1(z)=\Gamma'(0)/f''(0)$, $\Gamma_1(z)=\Gamma(z)-h_1(z)f'(z)+h_1(f(z))$ and $v_1(z)=v(z)+h_1(z)$. Then $\Gamma_1(0)=\Gamma_1'(0)=0$ and $v_1(z)f'(z)+\Gamma_1(z)=v_1(f(z))$ for each $z\in P$. Now let $b_1=\Gamma_1''(0)/f''(0)$ and $h(z)=h_1(z)+b_1 z$. Then $\widehat{\Gamma}(z)=\Gamma_1(z)-b_1(z f'(z)-f(z))$ and $\textbf{v}(z)=v_1(z)+b_1 z=v(z)+h(z)$ satisfy the desired property. \iffalse -------------------
Calculations: $\widehat{\Gamma}(0)=\Gamma_1(0)-b_1(0 f'(0)-f(0))=\Gamma_1(0)=0$, $\widehat{\Gamma}'(0)=\Gamma_1'(0)-b_1(0 f''(0)+f'(0)-f'(0))=\Gamma_1'(0)=0$ and $\widehat{\Gamma}''(0)=\Gamma_1''(0)-b_1 f''(0)=0$.
$$\textbf{v}(z)f'(z)+\widehat{\Gamma}(z)= (v_1(z)+b_1 z)f'(z) + \Gamma_1(z)-b_1(z f'(z)-f(z))=$$ $$v_1(z)f'(z)+\Gamma_1(z)+b_1 f(z)=v_1(f(z))+b_1 f(z)= \textbf{v}(f(z))$$
------------------------------ \fi
Now assume $p>1$. {\bf Claim.} For any $1\le k\le p$, there is a polynomial $h_{1,k}$ such that
$\Gamma_{1,k}(z)=\Gamma(z)- h_{1,k}(z) f'(z) + h_{1,k}(f(z))$ satisfies $$\Gamma_{1,k}(z)=O(z^{k+1}).$$
Let us prove this by induction. For $k=1$, define $h_{1,1}(z)=\Gamma(0)/(f'(0)-1)$. Then the claim follows from (\ref{eqn:QQ'tan}). Assume now the claim holds for some $1\le k<p$. Let $A$ be such that $\Gamma_{1,k}(z)=Az^{k+1}+O(z^{k+2})$.
Define $h_{1,k+1}(z)=h_{1,k}(z)+ b z^{k+1}$, where $b=A/(f'(0)-f'(0)^{k+1})$. Then $$\begin{array}{rl}\Gamma_{1,k+1}(z)&=\Gamma_{1,k}(z)-b z^{k+1}f'(z)+b f(z)^{k+1}\\ &=Az^{k+1} -b f'(0)z^{k+1}+ bf'(0)^{k+1} z^{k+1}+O(z^{k+2})=O(z^{k+2})\end{array}$$ completing the proof of the claim.
Define $h_1=h_{1,p}$, $\Gamma_1(z)=\Gamma_{1,p}$, $v_1(z)=v(z)+ h_1(z) $ and $$\widetilde{\Gamma}_1(z)=Df^p(z) \sum_{k=1}^{p} \frac{\Gamma_1(f^{k-1}(z))}{Df^k(z)}=O(z^{p+1}).$$ Then $v_1(z)f'(z)+\Gamma_1(z)=v_1(f(z))$ for each $z\in P$, hence $$v_1(z) (f^p)'(z) + \widetilde{\Gamma}_1(z) = v_1(f^p(z)), z\in P.$$ Now take $b_1:=\widetilde{\Gamma}_1^{(p+1)}(0)/(pD^{p+1}f^p(0))$, $h(z)=h_1(z)+b_1 z$, $\textbf{v}(z)= v_1(z) +b_1 z$ and $\widehat{\Gamma}=\widetilde{\Gamma}_1(z)-b_1(z(f^p)'(z)-f^p(z))$. Then $\widehat{\Gamma}$ and $\textbf{v}$ satisfy (\ref{eqn:boldQ}) and (\ref{eqn:step1}).
{\bf Step 2.} Let us prove that there exist $M>0$ and $\sigma>0$ such that \begin{equation}\label{step2ineq}
|\textbf{v}(z_n)-\textbf{v}(z_{n+p})|\le M|z_n-z_{n+p}|^{1+\sigma}. \end{equation} Let $A=D^{p+1}f^p(0)/(p+1)!\not=0$. By the Leau-Fatou Flower Theorem, see Appendix B,
$|(f^p)'(z_n)|\sim 1-|A|(p+1)|z_n|^p$ and \begin{equation}\label{step2assy}
\frac{z_n}{z_{n+p}}\sim \frac{1}{1-|A||z_n|^p}. \end{equation}
Fix an arbitrary $\varepsilon\in (0,1)$ and $\delta\in (0, |A|(p-\varepsilon))$. There exists $n_0$ such that
$$\gamma_n:=\left(\frac{|z_n|}{|z_{n+p}|}\right)^{1+\varepsilon} |(f^p)'(z_n)|\le 1-\delta |z_n|^p$$
holds for all $n\ge n_0$. Write $w_n= |\textbf{v}(z_n)|/|z_n|^{1+\varepsilon}$ and let $C_0$ be a constant such that
$|\widehat{\Gamma}(z_n)|\le C_0|z_{n+p}|^{p+2}$ for all $n$, which exists by (\ref{eqn:boldQ}) and (\ref{step2assy}). Then for all $n\ge n_0$, \begin{align*}
w_{n+p} \le\gamma_n w_n+\frac{|\widehat{\Gamma}(z_n)|}{|z_{n+p}|^{1+\varepsilon}}
\le (1-\delta|z_n|^p) w_n + C_0 |z_{n+p}|^{p+1-\varepsilon}. \end{align*}
Let $M_0>0$ be such that $w_n\le M_0$ for all $n\le n_0$ and such that $C_0|z_{n+p}|^{1-\varepsilon} <\delta M_0$ for all $n\ge n_0$. Then by induction using that $|z_{n+p}|<|z_n|$, we obtain $w_n\le M_0$ for all $n$.
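Indeed, if $w_n\le M_0$ for some $n\ge n_0$, then, using $C_0|z_{n+p}|^{p+1-\varepsilon}=C_0|z_{n+p}|^{1-\varepsilon}\,|z_{n+p}|^{p}<\delta M_0|z_{n+p}|^{p}\le \delta M_0|z_n|^p$, we get $$w_{n+p}\le (1-\delta|z_n|^p)\,M_0+\delta M_0|z_n|^p=M_0.$$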
Finally \begin{multline*} \textbf{v}(z_{n+p})-\textbf{v}(z_n)=\widehat{\Gamma}(z_n)+ \left((f^p)'(z_n)-1\right) \textbf{v}(z_n)=O(z_n^{p+2}) +O(z_n^{p+1+\varepsilon})\\=O(z_n^{p+1+\varepsilon})
=O(|z_n-z_{n+p}|^{(p+1+\varepsilon)/(p+1)})=O(|z_n-z_{n+p}|^{1+\varepsilon/(p+1)}). \end{multline*}
Now, let us prove that $\textbf{v}:P\to\mathbb{C}$ extends continuously to $0$ by showing that $$\lim_{n\to\infty}\textbf{v}(z_n)=0.$$ It follows from (\ref{step2ineq}) that for each $n$ the limit $\lim_{j\to\infty}\textbf{v}(z_{n+jp})$ exists, so it is enough to prove that this limit is $0$ for every $n$. Assuming for a contradiction that this is not the case for some $n_0$, and using (\ref{eqn:boldQ}) and (\ref{eqn:step1}), we get (replacing if necessary $n_0$ by $n_0+j_0p$ with $j_0$ big),
$$|\textbf{v}(z_{n_0+jp})|\le |\textbf{v}(z_{n_0})|\prod_{i=1}^{j-1}|(f^p)'(z_{n_0+ip})|\,(1+O(|z_{n_0+ip}|^{p+2}))\to 0$$
as $j\to\infty$, because $\prod_{i=1}^{j-1}(f^p)'(z_{n_0+ip})=(f^{jp})'(z_{n_0})\to 0$ and because $\sum_{i=0}^\infty|z_{n_0+ip}|^{p+2}\le\sum_{i=0}^\infty O(i^{-(p+2)/p})<\infty$, a contradiction.
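The estimate $|z_{n_0+ip}|^{p+2}=O(i^{-(p+2)/p})$ used here is the standard parabolic rate, which we sketch: by (\ref{step2assy}), $$|z_{n+p}|^{-p}=|z_n|^{-p}\big(1-|A||z_n|^p+o(|z_n|^p)\big)^{-p}=|z_n|^{-p}+p|A|+o(1),$$ so $|z_{n_0+ip}|^{-p}=p|A|\,i\,(1+o(1))$ as $i\to\infty$, i.e., $|z_{n_0+ip}|=O(i^{-1/p})$.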
{\bf Step 3.} We shall now prove that $\textbf{v}: P\to \mathbb{C}$ extends to a $C^1$ function $\textbf{V}:\mathbb{C}\to\mathbb{C}$ such that $\partial \textbf{V}(0)=\overline{\partial } \textbf{V}(0)=0$. Once this is proved, we obtain the desired extension $V$ of $v$ by setting $V=\textbf{V}-h$.
Indeed, by the Leau-Fatou Flower Theorem (see Appendix B), for each $j=0,1,\ldots, p-1$ there exists $\theta_j\in \mathbb{R}$ with $Ae^{2\pi i\theta_j}=-|A|<0$ and $\theta_{j+1}=\theta_j+2\pi/p$, such that the argument of $z_{np+j}$ converges to $\theta_j$ as $n\to\infty$. Moreover, the argument of $z_{n+p}-z_n=Az_n^p (1+o(z_n))$ converges to $\pi$. Therefore, there is a $C^1$ diffeomorphism $H:\mathbb{C}\to \mathbb{C}$ with $H(z)=z+o(|z|)$ near $z=0$ such that the points $H^{-1}(z_{np+j})$ lie in order on the ray $\theta=\theta_j$. Write $z'_n=H^{-1}(z_n)$. Then $\textbf{v}\circ H(z'_n)-\textbf{v}\circ H(z'_{n+p})=o(|z'_n-z'_{n+p}|)$, so $\textbf{v}\circ H|H^{-1}(P)$, and hence $\textbf{v}$, extends to a $C^1$ map defined on $\mathbb{C}$ with zero partial derivatives at $0$.
\end{proof}
\begin{proof}[{\bf Proof of Theorem A in the parabolic case}] Define $v(c_n)=\left.\frac{d G_w^{n-1}(w)}{dw}\right|_{w=c_1}$. By Lemma~\ref{prop:vC1}, applied with $f(z)=g^q(z+a_j)$ and $\Gamma(z)=Q(z+a_j)$ for each $j$, the vector field $v$ on $P$ extends to a $C^1$ vector field $V$ in $\mathbb{C}$ such that $\overline{\partial} V(z)\to 0$ as $z\to a_j$, $j=0,1,\ldots, q-1$. We may clearly choose the extension to be compactly supported. So $\mu=\overline{\partial} V$ is bounded with compact support, i.e., $V$ is a quasiconformal vector field. By the Measurable Riemann Mapping Theorem, there is a holomorphic motion $h_\lambda$ of $\mathbb{C}$ over some disk $\mathbb{D}_\varepsilon$, such that $\overline{\partial}h_\lambda=\lambda \mu \partial h_\lambda$ and $h_\lambda(z)=z+o(1)$ as $z\to\infty$. In particular, $h_\lambda$ defines a holomorphic motion of $P_{r_0}$ which is asymptotically invariant of order $1$ and
$$\left.\frac{d}{d\lambda} h_\lambda(z) \right|_{\lambda=0}=V(z), \forall z\in P.$$
To complete the proof, we shall apply Lemma~\ref{lem:improvemotion}. Let us verify its conditions. For each $K>1$, choose $r$ small enough so that $|\overline{\partial} V|<(K-1)/(K+1)$ holds in $\Omega_r\subset \bigcup_j \overline{B(a_j, r)}$ and let $W=\Omega_r$. Both conditions are clearly satisfied. \end{proof}
\section{Averaging and promoting asymptotic invariance}\label{sec:ThmB} In this section, we prove Theorem B.
\subsection{The averaging process}\label{subsec:averaging} Suppose that $(g,G)_W$ is a local holomorphic deformation of a marked map $g: U\to \mathbb{C}$ which has the lifting property of some set $K$ with $P\subset K$ and $g(K)\subset K$. Let $h_\lambda$ be a holomorphic motion of $K$ over $(\mathbb{D}_\epsilon,0)$. By the lifting property, there is $\varepsilon'>0$ and a sequence $h_\lambda^{(k)}$, $k=0,1,\cdots$ of holomorphic motions of $K$ over $(\mathbb{D}_{\epsilon'},0)$ so that $h^{(0)}_\lambda=h_\lambda$ and $h^{(k+1)}_\lambda$ is the lift of $h^{(k)}_\lambda$, for each $k\ge 0$.
Let $H_\lambda$ be a (locally uniform) limit for some subsequence $k_i\to \infty$ of $$\tilde{h}_\lambda^{(k)}:= \dfrac{1}{k} \sum_{i=0}^{k-1} h_\lambda^{(i)}.$$ The following proposition is proved in \cite[Lemma 2.12]{LSvS1} and \cite[Lemma 6.4]{LSvS1a}.
\begin{prop}\label{mtom+1} Assume
$h_\lambda^{(0)}$ is asymptotically invariant of some order $m$, i.e.
$$h_\lambda^{(k+1)}(c_j)- h^{(k)}_\lambda(c_j)=O(\lambda^{m+1}), \ \ j=1,2,\cdots \mbox{ as }\lambda\to 0$$
for $k=0$ (hence, for all $k$).
Then $H_\lambda$ is asymptotically invariant of order $m+1$, i.e.
$$\hat{H}_\lambda(c_j)- H_\lambda(c_j)=O(\lambda^{m+2}), \ \ j=1,2,\cdots \mbox{ as }\lambda\to 0.$$
\end{prop}
When $K$ is finite, $H_\lambda$ is automatically a holomorphic motion after restricting $\lambda$ to a smaller disk. However, this is not necessarily the case when $K$ has infinite cardinality. We resolve this issue by considering holomorphic motions which are `almost' conformal near $\mathcal{O}$, using distortion estimates.
\subsection{The attracting case} \begin{proof}[{\bf Proof of Theorem B in the attracting case}]
Let $h_\lambda$ be an admissible holomorphic motion of $P_{r_0}$ over $(\mathbb{D}_\varepsilon,0)$ which is asymptotically invariant of order $m$ and let $h_\lambda^{(k)}$, $\hat{h}_\lambda^{(k)}$ and $H_\lambda$ be as in Subsection~\ref{subsec:averaging}.
Let us prove that there is $r\in (0,r_0)$ and $\varepsilon_1>0$ such that $H_\lambda$ is an admissible holomorphic motion of $P_{r}$ over $(\mathbb{D}_{\varepsilon_1},0)$. Indeed, by the definition of the lifting property, there exist $M>0$ and $\varepsilon_2>0$ such that $|h_\lambda^{(k)}(z)|\le M$ for all $z\in P_{r_0}$ and $\lambda\in \mathbb{D}_{\varepsilon_2}$. By Slodkowski's theorem, there exists $M'>0$ such that $h_\lambda^{(k)}$ extends to a holomorphic motion of $\mathbb{C}$ over $\mathbb{D}_{\varepsilon_2}$ such that $h_\lambda^{(k)}(z)=z$ whenever $|z|>M'$. By the Bers--Royden Theorem~\cite{BersRoyden}, for each $\lambda\in \mathbb{D}_{\varepsilon_2}$ there exists $K(\lambda)>1$ with $K(\lambda)\to 1$ as $\lambda\to 0$ such that $h_\lambda^{(k)}$ is $K(\lambda)$-qc for each $k=0,1,\ldots$. Thus for each $\delta>0$ there exists $\varepsilon(\delta)>0$ such that $|h_\lambda^{(k)}(z)-z|\le \delta$ for all $z\in P_{r_0}$ and $\lambda\in \mathbb{D}_{\varepsilon(\delta)}$. Since $a_j$ lies in the interior of $P_{r_0}$, it follows that there is $\varepsilon_3>0$ such that $|(h_\lambda^{(k)})'(a_j)-1|<\frac{1}{3}$ for all $\lambda\in \mathbb{D}_{\varepsilon_3}$. By the Koebe Distortion Theorem, there exists $r\in (0,r_0)$ such that for any $z_1, z_2\in B(a_j, r)$, $z_1\not=z_2$, $$\left|\frac{h_\lambda^{(k)}(z_1)-h_\lambda^{(k)}(z_2)}{z_1-z_2}-1\right|<\frac{1}{2},$$ which implies that
$$\left|\frac{H_\lambda(z_1)-H_\lambda(z_2)}{z_1-z_2}-1\right|\le\frac{1}{2},$$ hence $H_\lambda$ is a holomorphic motion of $\Omega_{r}$ over $\mathbb{D}_{\varepsilon_3}$. As $P\setminus \Omega_{r}$ is finite, the statement follows by choosing $\varepsilon_1$ sufficiently small.
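To justify the passage from $h_\lambda^{(k)}$ to $H_\lambda$ above: the bound survives averaging, since for $z_1\not=z_2$ in $B(a_j,r)$,
$$\left|\frac{\tilde h_\lambda^{(k)}(z_1)-\tilde h_\lambda^{(k)}(z_2)}{z_1-z_2}-1\right| \le \frac{1}{k}\sum_{i=0}^{k-1}\left|\frac{h_\lambda^{(i)}(z_1)-h_\lambda^{(i)}(z_2)}{z_1-z_2}-1\right|<\frac{1}{2},$$
and the estimate persists in the locally uniform limit; in particular $H_\lambda(z_1)\not=H_\lambda(z_2)$, so $H_\lambda$ remains injective on each $B(a_j,r)$.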
To complete the proof, we extend $H_\lambda$ to a holomorphic motion of $P_{r_0}$ in an arbitrary way and then apply Lemma~\ref{lem:improvemotion}, taking $W=\Omega_r$ for each $K>1$.
\end{proof} \subsection{The parabolic case} The parabolic case is more complicated. We shall need the following distortion lemma: \begin{lemma}\label{lem:distortionparabolic} Given a positive integer $p$, $\alpha\in (0,1)$, $M>0$ and $R>0$ with $3R<M$ and $(3R)^\alpha<\pi/(4p)$, there is $K_0>1$ such that if $H:B(0,M)\to B(0,M)$ is a $K_0$-qc map satisfying
$H|\partial B(0,M)=id$ and $$\overline{\partial}H=0\text{ a.e. on }B(0,3R)\setminus \mathcal{C}(R),$$
where $$\mathcal{C}(R)=\{re^{it}: 0<r<3R,\ |t-(2k+1)\pi/p|< r^{\alpha}\text{ for some } k=0,1,\ldots, p-1\},$$
then for any $z_1, z_2\in \mathcal{C}'(R)=\{re^{it}: 0<r<R,\ |t-2k\pi/p|< r^{\alpha}\text{ for some } k=0,1,\ldots, p-1\}$, we have
$$|H(z_1)-H(z_2)-(z_1-z_2)|\le \dfrac{1}{2}|z_1-z_2|.$$ \end{lemma}
\begin{proof} Choose $q'\in (1, 1+\alpha/2)$ and let $p'>1$ be such that $1/p'+1/q'=1$. Let $D=B(0, 3R)$. Let $\varepsilon>0$ be a small constant to be determined. Since $H|\partial B(0,M)=id$, it is well known (see for example~\cite[Chapter V]{Ah}) that, provided $K_0$ is sufficiently close to $1$, $$\int\int_{D} |\overline{\partial} H|^{p'} d|u|^2<\varepsilon^{p'},\, \text{ and }|H(z)-z|<\varepsilon R, \text{ for all }z\in D.$$ Since $H$ is ACL, we can apply the Cauchy--Pompeiu formula $$H(z)-z=\dfrac{1}{2\pi i} \int_{\partial D} \frac{H(u)-u}{u-z} \, du - \frac{1}{\pi} \int\int_{D} \frac{\overline{\partial} H(u)}{u-z} \,
d|u|^2,$$
for $z\in D$. For $z_1, z_2\in \mathcal{C}'(R)$, and $u\in \partial D$, we have $|u-z_1|\ge 2R$, $|u-z_2|\ge 2R$, so \begin{multline*}
\left|\dfrac{1}{2\pi i} \int_{\partial D} \frac{H(u)-u}{u-z_1} du-\dfrac{1}{2\pi i} \int_{\partial D} \frac{H(u)-u}{u-z_2} du \right|
= \left|(z_1-z_2)\dfrac{1}{2\pi i} \int_{\partial D} \frac{H(u)-u}{(u-z_1)(u-z_2)} du\right|\\
\le |z_1-z_2|\dfrac{1}{2\pi} \frac{\varepsilon R}{4R^2}\cdot 2\pi \cdot 3R < \frac{1}{4} |z_1-z_2|, \end{multline*}
where we choose $\varepsilon$ small enough to obtain the last inequality. For $u\in \mathcal{C}(R)$, we have $|u-z_j|\ge \rho |u|$, where $\rho=\rho(p,\alpha,R)>0$ is a constant. Thus \begin{multline*}
\int\int_{\mathcal{C}(R)} \frac{1}{|u-z_1|^{q'}|u-z_2|^{q'}} d|u|^2
\le \frac{1}{\rho^{2q'}} \sum_{k=0}^{p-1} \int_0^{3R} \int_{|t-(2k+1)\pi/p|< r^{\alpha}} \frac{1}{r^{2q'}} r dt dr\\ = \frac{2p}{\rho^{2q'}}\frac{(3R)^{2+\alpha-2q'}}{2+\alpha-2q'}=: C. \end{multline*} Therefore, \begin{multline*}
\left|\int\int_{D} \frac{\overline{\partial} H(u)}{u-z_1} d|u|^2-\int\int_{D} \frac{\overline{\partial} H(u)}{u-z_2}d|u|^2\right|
= \left|\int\int_{\mathcal{C}(R)} \frac{\overline{\partial} H(u)}{u-z_1} d|u|^2-\int\int_{\mathcal{C}(R)} \frac{\overline{\partial} H(u)}{u-z_2} d|u|^2\right|\\
= |z_1-z_2| \left|\int\int_{\mathcal{C}(R)} \frac{\overline{\partial} H(u)}{(u-z_1)(u-z_2)}d|u|^2\right|\\
\le |z_1-z_2|\left(\int\int_{\mathcal{C}(R)} |\overline{\partial} H(u)|^{p'} d|u|^2\right)^{1/p'} \left(\int\int_{\mathcal{C}(R)} \frac{1}{|u-z_1|^{q'}|u-z_2|^{q'}} d|u|^2 \right)^{1/q'}\\
\le |z_1-z_2| \varepsilon C^{1/q'} <\pi |z_1-z_2|/4, \end{multline*} where, once again, we choose $\varepsilon>0$ small enough to obtain the last inequality. Taking into account the factor $1/\pi$ in front of the area integral in the Cauchy--Pompeiu formula, the two estimates combine to $$|H(z_1)-H(z_2)-(z_1-z_2)|\le \frac{1}{4}|z_1-z_2|+\frac{1}{\pi}\cdot\frac{\pi}{4}|z_1-z_2|=\frac{1}{2}|z_1-z_2|,$$ and the lemma follows. \end{proof}
\begin{proof}[{\bf Proof of Theorem B in the parabolic case}] Extend each $h_\lambda^{(k)}$ to a holomorphic motion of $\mathbb{C}$ over $(\mathbb{D}_{\varepsilon'},0)$ such that $h_{\lambda}^{(k)}(z)=z$ for all $\lambda\in \mathbb{D}_{\varepsilon'}$, all $k$ and all $|z|>M'$. Fix $\alpha\in (0,1)$. Let $R=\tau(r_0)/3$ be given by Lemma~\ref{lem:leau-fatou} (1) and let $\tilde K=K_0(p,\alpha, M', R)$ be given by Lemma~\ref{lem:distortionparabolic}. By \cite{BersRoyden}, there exists $\varepsilon_1>0$ such that $h_{\lambda}^{(k)}$ is $\tilde{K}$-qc for all $\lambda\in \mathbb{D}_{\varepsilon_1}$. Thus for each $z_1,z_2\in \mathcal{C}'_j$, $0\le j<p$, $z_1\not=z_2$, and any $k\ge 0$, $\lambda\in \mathbb{D}_{\varepsilon_1}$,
$$\left|\frac{h_\lambda^{(k)}(z_1)-h_\lambda^{(k)}(z_2)}{z_1-z_2}-1\right|\le \frac{1}{2},$$
hence $$\left|\frac{H_\lambda(z_1)-H_\lambda(z_2)}{z_1-z_2}-1\right|\le \frac{1}{2}.$$ It follows that $H_\lambda$ is a holomorphic motion of $P\cup \bigcup_j \mathcal{C}'_j(\tau)$ over $\mathbb{D}_{\varepsilon_1}$ which is asymptotically invariant of order $m+1$. Extending it to a holomorphic motion of $P_{r_0}$ in an arbitrary way and applying Lemma~\ref{lem:improvemotion} with $W=\bigcup_j \mathcal{C}'_j(\tau)$ for each $K>1$, we complete the proof.
\end{proof}
\section{Asymptotic invariance of an arbitrarily large order}\label{sec:ThmC} In this section, we shall prove Theorem C. For each $m\ge 1$, let $h_{\lambda, m}$ be a holomorphic motion of $\overline{P}$ over $(\mathbb{D}_{\varepsilon_m},0)$ for some $\varepsilon_m>0$ which is asymptotically invariant of order $m$ and such that $h_{\lambda, m}(c_1)=c_1+\lambda+O(\lambda^2)$ as $\lambda\to 0$.

{\bf Claim:} We may assume that the holomorphic motion $h_{\lambda,m}$ satisfies
$h_{\lambda, m}(c_1)=c_1+\lambda$. Indeed, for each $m$ take a reparametrization $\lambda=\lambda_m(\mu)$ so that for $\hat h_{\mu,m}:=h_{\lambda_m(\mu),m}$ we have $\hat h_{\mu, m}(c_1)=c_1+\mu$. Then $\hat h_{\mu,m}$ is still asymptotically invariant of order $m$; that is, $G_{\hat h_\mu(c_1)}(\hat h_\mu(z))=\hat h_\mu(g(z))+O(\mu^{m+1})$. Renaming the new holomorphic motion again as $h_{\lambda,m}$, the claim follows.
Then for each $k\ge 1$, $$h_{\lambda,m} (c_k) =G_{\lambda+c_1}^{k-1}(\lambda+c_1)+O(\lambda^{m+1})$$
and so the first $m$ terms in the Taylor series of $h_{\lambda,m}(c_k)$ are determined by this formula; therefore $h_{\lambda,m}(c_k)-h_{\lambda,{m+1}}(c_k)=O(\lambda^{m+1})$.
Assume without loss of generality that $c_{nq+1}\to a_1$ as $n\to\infty$ and
define $\varphi_m(\lambda):= \lim_{n\to\infty} h_{\lambda,m}(c_{nq+1})-a_1=: h_{\lambda,m} (a_1)-a_1$. Then \begin{equation}\label{eqn:varphim} G_{c_1+\lambda}^q(\varphi_m(\lambda)+a_1)=\varphi_m(\lambda) +a_1+O(\lambda^{m+1}). \end{equation}
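In outline, (\ref{eqn:varphim}) is obtained from asymptotic invariance: since $h_{\lambda,m}(c_1)=c_1+\lambda$, iterating the invariance relation $q$ times gives, at each marked point $c_j$,
$$G_{c_1+\lambda}^{q}\bigl(h_{\lambda,m}(c_j)\bigr)=h_{\lambda,m}(c_{j+q})+O(\lambda^{m+1}),$$
with constants which one checks are uniform in $j$ (by the lifting property), and letting $j=nq+1\to\infty$, so that both $c_j$ and $c_{j+q}$ tend to the fixed point $a_1$ of $g^q$, yields (\ref{eqn:varphim}).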
\begin{lemma}\label{lem:61} There exist a function $\varphi(\lambda)$, holomorphic near $\lambda=0$, and an integer $m_0\ge 1$, such that $$G_{c_1+\lambda}^q (\varphi(\lambda)+a_1)=\varphi(\lambda)+a_1,$$ and such that for each $m\ge m_0$, \begin{equation}\label{eqn:varphimvarphi}
\varphi_m(\lambda)-\varphi(\lambda)=O(|\lambda|^{m/3}) \mbox{ as } \lambda\to 0. \end{equation} \end{lemma} \begin{proof}
In the case $\kappa=Dg^q(a_1)\not=1$, we simply take $\varphi(\lambda)$ so that $\varphi(\lambda)+a_1$ is the fixed point of $G_{c_1+\lambda}^q$ obtained as analytic continuation of $a_1$. It is easy to check that (\ref{eqn:varphimvarphi}) holds for all $m\ge 1$ with an even better error term: $O(\lambda^{m+1})$ instead of $O(|\lambda|^{m/3})$.
Now assume $\kappa=1$ and consider the map $\Phi(\lambda, z)= G_{c_1+\lambda}^q (z+a_1)-z-a_1$ which is holomorphic in a neighborhood of the origin in $\mathbb{C}^2$. Clearly, $\Phi(0,z)$ is not identically zero, so by Weierstrass' Preparation Theorem, there is a Weierstrass polynomial $$Q(\lambda,z)=z^2+ 2u(\lambda) z+ v(\lambda)=(z+u(\lambda))^2+v(\lambda)-u(\lambda)^2$$ such that $\Phi(\lambda, z)=Q(\lambda, z) R(\lambda, z)$, where $R$ is a holomorphic function near the origin and $R(0,0)\not=0$, and $u, v$ are holomorphic near the origin in $\mathbb{C}$ with $u(0)=v(0)=0$. Consider the discriminant $\Delta(\lambda)=u(\lambda)^2-v(\lambda)$ which satisfies $\Delta(0)=0$. By (\ref{eqn:varphim}), for each $m$, \begin{equation}\label{eqn:varphim1} (\varphi_m(\lambda)+u(\lambda))^2 -\Delta (\lambda)=O(\lambda^{m+1}). \end{equation}
Let us distinguish two cases.
{\em Case 1.} $\Delta(\lambda)\equiv 0$ (for $\lambda\in \mathbb{D}_\varepsilon$ for some $\varepsilon>0$). Then $-u(\lambda)$ is the only zero of $Q(\lambda, z)$ near $0$. By (\ref{eqn:varphim1}), $\varphi_m(\lambda)+u(\lambda)=O(|\lambda|^{(m+1)/2})$. So the lemma holds with $\varphi=-u$.
{\em Case 2.} $\Delta(\lambda)\not\equiv 0.$ Then there exist $n_0\ge 1$ and $A\not=0$ such that $\Delta(\lambda)= A\lambda^{n_0}+O(\lambda^{n_0+1})$. Assume $m\ge n_0$. By (\ref{eqn:varphim1}),
$$(\varphi_m(\lambda)+u(\lambda))^2 =A\lambda^{n_0} +O(\lambda^{n_0+1}),$$ which implies that $n_0$ is even. Therefore, there exist holomorphic functions $\varphi_{\pm}(\lambda)$ such that $Q(\lambda, z)=(z-\varphi_+(\lambda))(z-\varphi_-(\lambda))$. The lemma holds for either $\varphi(\lambda)=\varphi_+(\lambda)$ or $\varphi(\lambda)=\varphi_-(\lambda)$. \end{proof}
\iffalse We shall prove that $\sigma(\lambda):=DG_{c_1+\lambda}(\varphi(\lambda)+c_1)$ is constant. Arguing by contradiction, assume
that there exists $m_0\ge 1$ and $A\not=0$, such that $$\sigma(\lambda)-\sigma(0)=A\lambda ^{m_0}+O(\lambda^{m_0+1}).$$
Fix $m=m_0$. There exists $\varepsilon'=\varepsilon_m'>0$ such that the following hold: \begin{enumerate} \item There exist admissible holomorphic motions $H_\lambda^{m,k}$ of $\overline{P}$ over $\mathbb{D}_{2\varepsilon'}$, $k=0,1,\ldots,$ such that $H_\lambda^{m,0}=h_{\lambda,m}$ and such that $H_\lambda^{m,k+1}$ is the lift of $H_\lambda^{m,k}$ for each $k$ and the family $\{H_\lambda^{m,k}(x)\}_{k\ge 0}$ is uniformly bounded on $\mathbb{D}_{2\varepsilon'}\times \overline{P}$. \end{enumerate} Since $H_\lambda^{m,0}(c_1)=c_1+\lambda+O(\lambda^2)$, we have $$DG_{H_\lambda^{m,0}(c_1)}^q(a_0(H_\lambda^{m,0}(c_1)))-\kappa(c_1)=A\lambda^m+O(\lambda^{m+1})$$ where $\kappa_0=\kappa(c_1)$.
It also follows that $$a_0(H_\lambda^{m,0}(c_1))-H_\lambda^{m,0}(a_0)=O(\lambda^{m+1})$$ and, for any two holomorphic functions $z$, $\tilde z$ near $\lambda=0$ such that $z(0)=0$ and $\tilde z(\lambda)-z(\lambda)=O(\lambda^{m+1})$, for any $n$, $$DG_{H_\lambda^{m,0}(c_1)}(\tilde z(\lambda))-DG_{H_\lambda^{m,n}(c_1)}(z(\lambda))=O(\lambda^{m+1}).$$
Therefore, there is $C'>0$ so that for any $|\lambda|<\epsilon'$ and any $n$,
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots \circ G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(a_0))-DG_{H_\lambda^{m,0}(c_1)}^q (a_0(H_\lambda^{m,0}(c_1)))|\le C' |\lambda|^{m+1}.$$
It follows that there exists $\varepsilon''>0$ such that whenever $|\lambda|\le \varepsilon''$ we have for all $n$,
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}}\circ \cdots \circ G_{H_{\lambda}^{m,n+q-1}}(H_{\lambda}^{m,n+q}(a_0))-(\kappa_0+A\lambda^{m})|<C''|\lambda|^{m+1}.$$
Therefore, we can choose $\lambda$ with $|\lambda|$ arbitrarily small and such that
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}}\circ \cdots \circ G_{H_{\lambda}^{m,n+q-1}}(H_{\lambda}^{m,n+q}(a_0))|<|\kappa_0|-\frac{1}{2}|A||\lambda|^{m}<|\kappa_0|$$ holds for every $n$. We fix such a choice of $\lambda$ now.
Hence, there is $r>0$ a small constant such that for each $z$ and each $n\ge 0$ with $|z-H_{\lambda}^{m,n+q}(a_0)|<r$,
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots \circ G_{H_{\lambda}^{m,n+q-1}(c_1)}(z)|<|\kappa_0|-\frac{1}{4}|A||\lambda|^{m}=: \tilde\kappa \in (0,|\kappa_0|).$$
Let $l_0>0$ be large enough such that for each positive integer $l\ge l_0$ and any $n\ge 0$, $|H_\lambda^{m,n} (a_0)- H_\lambda^{m,n} (c_{lq+1})|<r$.
Observe that by the definition of lift, $$H_{\lambda}^{m,n}(g^q(x))=G_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots \circ G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(x)).$$ Then for any $l\ge l_0$, and any $n\ge 0$, \begin{align*}
& |H_\lambda^{m,n}(a_0)- H_\lambda^{m,n} (c_{(l+1)q+1})|\\
=& |G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (a_0))-G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (c_{lq+1})|\\
\le & \tilde\kappa |H_\lambda^{m,n+q} (a_1)-H_\lambda^{m,n+q} (c_{lq+1})|. \end{align*} This implies that
$$|H_\lambda^{m,0} (a_0)- H_\lambda^{m,0} (c_{lq+1})|=O(\tilde\kappa^l).$$
Since $H_\lambda^{m,0}$ is bi-H\"older, we get finally that $c_{lq+1}-a_0=O(\tilde\kappa^l)$. On the other hand, since $Dg^q(a_0)=\kappa_0$ where $|\kappa_0|\in (0,1)$, then $|c_{lq+1}-a_0|\ge C|\kappa_0|^l$ for some $C>0$.
However, $\tilde\kappa<|\kappa_0|$, a contradiction!
\iffalse Therefore, there are $C'>0$, $\epsilon'>0$ so that for any $|\lambda|<\epsilon'$,
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(a_1))-DG_{c_1+\lambda}^q (a_0(c_1+\lambda)|\le C' |\lambda|^{m_0+1}.$$
It follows that there exists $\varepsilon''>0$ such that whenever $|\lambda|\le \varepsilon''$, we have
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}}\circ \cdots G_{H_{\lambda}^{m,n+q-1}}(H_{\lambda}^{m,n+q}(a_1))-(\kappa_0+A\lambda^{m})|<C''|\lambda|^{m+1}.$$
Therefore, we can choose $\lambda$ with $|\lambda|$ arbitrarily small and such that
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}}\circ \cdots G_{H_{\lambda}^{m,n+q-1}}(H_{\lambda}^{m,n+q}(a_1))|<|\kappa_0|-|A||\lambda|^{m}<|\kappa_0|$$ holds for every $n$. We fix such a choice of $\lambda$ now.
Let $r>0$ be a small constant such that for each $z$ and each $n\ge 0$ with $|z-H_{\lambda}^{m,n+q}(a_0)|<r$, \marginpar{$c_1$ added}
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(z)|<|\kappa_0|-|A||\lambda|^{m_0}=: \tilde\kappa \in (0,|\kappa_0|).$$
Let $l_0>0$ be large enough such that for each positive integer $l\ge l_0$ and any $n\ge 0$, $|H_\lambda^{m,n} (a_1)- H_\lambda^{m,n} (c_{lq+1})|<r$. Then for any $l\ge l_0$, and any $n\ge 0$, \marginpar{$nq$ to $n+q$ below?} \begin{align*}
& |H_\lambda^{m,n}(a_0)- H_\lambda^{m,n} (c_{(l+1)q+1})|\\
=& |G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (a_1))-G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (c_{lq+1})|\\
\le & \tilde\kappa |H_\lambda^{m,n+q} (a_1)-H_\lambda^{m,n+q} (c_{lq+1})|. \end{align*} It follows that
$$|H_\lambda^{m,0} (a_0)- H_\lambda^{m,0} (c_{lq+1})|=O(\tilde\kappa^l).$$
Since $H_\lambda^{m,0}$ is bi-H\"older, this implies that $c_{lq+1}-a_0=O(\tilde\kappa^l)$. On the other hand, since $Dg^q(a_0)=\kappa_0$ where $|\kappa_0|\in (0,1)$, $|c_{lq+1}-a_0|\ge C|\kappa_0|^l)$ for some $C>0$.
However, $\tilde\kappa<|\kappa_0|$, a contradiction! \fi
\end{proof} \begin{proof}[Proof of Theorem C in the case $\kappa=1$] \fi
\begin{proof}[Completion of the proof of Theorem C]
We want to show that $\sigma(\lambda):=DG_{c_1+\lambda}^q(\varphi(\lambda)+a_1)$ is constant. Arguing by contradiction, assume that this is not the case. Then there exist $m_1\ge 1$ and $A\not=0$ such that $$\sigma(\lambda)-\sigma(0)=3A\lambda^{m_1}+O(\lambda^{m_1+1}).$$
Fix $m\ge \max(3(m_1+1),m_0)$. There exists $\varepsilon'=\varepsilon_m'>0$ such that the following hold: \begin{itemize} \item There exist admissible holomorphic motions $h_{\lambda,m}^{k}$ of $\overline{P}_{r_0}$ over $\mathbb{D}_{2\varepsilon'}$, $k=0,1,\ldots,$ such that $h_{\lambda,m}^{0}=h_{\lambda,m}$, $h_{\lambda,m}(c_1)=c_1+\lambda$ and such that $h_{\lambda,m}^{k+1}$ is the lift of $h_{\lambda,m}^k$ for each $k$, and the
sequence $\{h_{\lambda,m}^{k}(x)\}$ is uniformly bounded in $(\lambda,x)\in
\mathbb{D}_{2\varepsilon'}\times \overline{P}_{r_0}$. \end{itemize}
Thus, there exists $C=C_m>0$ such that whenever $|\lambda|<\varepsilon'$ and $n\ge 0$, \begin{equation}\label{eq:asymp1}
|h_{\lambda,m}^{n} (c_1)-c_1-\lambda|\le C |\lambda|^{m+1}, \end{equation} \begin{equation}\label{eq:asymp2}
|h_{\lambda,m}^{n} (a_1)-\varphi_m(\lambda)-a_1|\le C|\lambda|^{m+1}. \end{equation} By Lemma~\ref{lem:61}, enlarging $C$ if necessary, we have \begin{equation}\label{eq:asymp3}
|\varphi(\lambda)-\varphi_m(\lambda)|\le C|\lambda|^{m/3}\le C|\lambda|^{m_1+1}. \end{equation} Put $$\mathcal{G}_\lambda^{(n)}(z)=G_{h_{\lambda,m}^{n}(c_1)}\circ G_{h_{\lambda,m}^{n+1}(c_1)}\circ \cdots \circ G_{h_{\lambda,m}^{n+q-1}(c_1)}(z).$$ Then $h_{\lambda,m}^{n}(g^q(z))=\mathcal{G}_\lambda^{(n)}(h_{\lambda,m}^{n+q}(z))$ and the sequence $\{\mathcal{G}_\lambda^{(n)}(z)\}$ is uniformly bounded in $\lambda\in\mathbb{D}_{\varepsilon'}$, $z\in B(a_1,\delta_0)$ for some $\delta_0>0$. By the lifting property together with (\ref{eq:asymp1}), (\ref{eq:asymp2}) and (\ref{eq:asymp3}), enlarging $C$ further, we have
$$|D\mathcal{G}_{\lambda}^{(n)}(h_{\lambda,m}^{n+q}(a_1))-\sigma(\lambda)|\le C |\lambda|^{m_1+1}$$
for each $n\ge 0$: indeed, by the chain rule, $D\mathcal{G}_\lambda^{(n)}$ is a product of $q$ factors $DG_{h_{\lambda,m}^{n+i}(c_1)}$, each with parameter $O(\lambda^{m+1})$-close to $c_1+\lambda$ and evaluated at a point $O(|\lambda|^{m_1+1})$-close to the corresponding point of the $G_{c_1+\lambda}$-orbit of $\varphi(\lambda)+a_1$. It follows that there exists $\varepsilon''>0$ such that whenever $|\lambda|\le \varepsilon''$, we have
$$|D\mathcal{G}_{\lambda}^{(n)}(h_{\lambda,m}^{n+q}(a_1))-(\sigma(0)+3A\lambda^{m_1})|=O(|\lambda|^{m_1+1}).$$
Therefore, we can choose $\lambda\not=0$ with $|\lambda|$ arbitrarily small and such that
$$|D\mathcal{G}_{\lambda}^{(n)}(h_{\lambda,m}^{n+q}(a_1))|<|\sigma(0)|-2|A||\lambda|^{m_1}<|\sigma(0)|=|\kappa|$$ holds for every $n$. We fix such a choice of $\lambda$ now.
Let $\delta>0$ be a small constant such that for each $z$ and each $n\ge 0$ with $|z-h_{\lambda,m}^{n+q}(a_1)|<\delta$,
$$|D\mathcal{G}_\lambda^{(n)}(z)|<|\sigma(0)|-|A||\lambda|^{m_1}=: \kappa'<|\sigma(0)|=|\kappa|.$$
Let $l_0>0$ be large enough such that for each positive integer $l\ge l_0$ and any $n\ge 0$, $|h_{\lambda,m}^{n} (a_1)- h_{\lambda,m}^{n} (c_{lq+1})|<\delta$. Then for any $l\ge l_0$, and any $n\ge 0$, \begin{multline*}
|h_{\lambda,m}^{n}(a_1)- h_{\lambda,m}^{n} (c_{(l+1)q+1})|
= |\mathcal{G}_{\lambda}^{(n)}(h_{\lambda,m}^{n+q} (a_1))-\mathcal{G}_{\lambda}^{(n)} (h_{\lambda,m}^{n+q}(c_{lq+1}))|\\
\le \kappa' |h_{\lambda,m}^{n+q} (a_1)-h_{\lambda,m}^{n+q} (c_{lq+1})|. \end{multline*} It follows that \begin{equation}\label{eqn:hlambdama1}
|h_{\lambda,m}^{0} (a_1)- h_{\lambda,m}^{0} (c_{lq+1})|=O({\kappa'}^l). \end{equation}
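In more detail, writing $d_{n,l}:=|h_{\lambda,m}^{n}(a_1)-h_{\lambda,m}^{n}(c_{lq+1})|$, the previous estimate reads $d_{n,l+1}\le \kappa'\, d_{n+q,l}$ for all $n\ge 0$ and $l\ge l_0$, so iterating,
$$d_{0,l+1}\le \kappa'\, d_{q,l}\le (\kappa')^{2} d_{2q,\,l-1}\le\cdots\le (\kappa')^{\,l+1-l_0}\, d_{(l+1-l_0)q,\;l_0}=O\bigl((\kappa')^{l}\bigr),$$
where the last step uses that the quantities $d_{n,l_0}$ are bounded uniformly in $n$.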
Let us now distinguish two cases to complete the proof by deducing a contradiction.
{\em Case 1.} $|\kappa|<1$. In this case, $P_{r_0}$ contains a neighborhood of $a_1$, so by the admissibility of $h_{\lambda, m}$, we have $|h_{\lambda,m}^0(a_1)-h_{\lambda,m}^{0} (c_{lq+1})|\asymp |a_1-c_{lq+1}|\asymp |\kappa|^l$. Together with (\ref{eqn:hlambdama1}) this gives $|\kappa|\le \kappa'$, contradicting $\kappa'<|\kappa|$!
{\em Case 2.} $|\kappa|=1$. In this case, $c_{lq+1}$ converges to $a_1$ only polynomially fast. However, since $h_{\lambda,m}$ is qc and hence bi-H\"older, (\ref{eqn:hlambdama1}) implies that $c_{lq+1}$ converges to $a_1$ exponentially fast, a contradiction!
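To spell out the last step: if $K$ is a quasiconformality bound for $h_{\lambda,m}^{0}$ (so that both $h_{\lambda,m}^{0}$ and its inverse are locally H\"older with exponent $1/K$), then (\ref{eqn:hlambdama1}) gives
$$|a_1-c_{lq+1}|\le C\,\bigl|h_{\lambda,m}^{0}(a_1)-h_{\lambda,m}^{0}(c_{lq+1})\bigr|^{1/K}=O\bigl((\kappa')^{l/K}\bigr),$$
an exponential rate, whereas in the case $|\kappa|=1$ the convergence $c_{lq+1}\to a_1$ is only polynomial in $l$.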
\end{proof}
\iffalse \section{In the real parabolic case $\frac{\partial G^q_w}{\partial w} (a_0)\big\vert_{w=c}\ge 0$ holds}
\marginpar{Paragraph changed} In this section we assume that $g$ is real marked map w.r.t. $c_1$ so that the sequence $c_{n}:=g^{n-1}(c_1)$, $n=1,2,\dots$ tends to a non-degenerate parabolic periodic orbit $\mathcal O:=\{a_0,\dots,a_{q-1}\}$ with multiplier $+1$. Consider a holomorphic deformation $(g,G)_W$ and assume that $(g,G)_W$ has the lifting property for the set $P=\{c_n\}_{n\ge 1}$
(notice that this condition is weaker than the one of the Main Theorem).
Under these assumptions we will show that $Q(a_0)=\frac{\partial G^q_w}{\partial w}\big\vert_{w=c_1} (a_0)\ge 0$, where $a_0=\lim_{k\to\infty} c_{kq+1}.$
\marginpar{changes}
This section will also motivate the choice of the particular vector field along $P$ appearing in Section~\ref{sec:ThmA}.
From~(\ref{eq:partGL}) it follows that the 2nd equality holds in
$$\Delta(z):= \sum_{j=1}^q \dfrac{L(g^{j-1}(z))}{Dg^j(z)}=\dfrac{1}{Dg^q(z)}\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c} (z)$$ Since $Dg^q(a_0)=1$, $$\Delta(a_0)=\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c} (a_0) = \sum_{j=1}^q \dfrac{L(g^{j-1}(a_0))}{Dg^j(a_0)} = \sum_{j=0}^{q-1} \dfrac{L(g^j(a_0))}{Dg^{j+1}(a_0)} .$$
\begin{prop}\label{prop:ge0} Assume that $(g,G)_W$ has the lifting property for the set $P$.
Moreover, assume that $g$ is real, has a periodic point $a_0$ of period $q$ with $Dg^q(a_0)=1$, $D^2g^q(a_0)\neq 0$ so that $c_{qk+1}\to a_0$
and so that $Dg^q(c_{kq+1})>0$ for each $k\ge 0$. Then $$\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c} (a_0)\ge 0.$$ \end{prop} \begin{proofof}{Proposition~\ref{prop:ge0}} By Proposition~\ref{prop:32}, see also \cite{Le1}, \begin{equation} D(\rho):=1+\sum_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c_1)} >0\mbox{ for all } 0<\rho < 1. \label{drho} \end{equation}
Arguing by contradiction, assume that $\Delta(a_0)<0$. Then there exist $\rho_0\in (0,1)$ and $k_0\ge 1$ such that for each $\rho_0<\rho<1$ and each $k\ge k_0$, we have \begin{equation}\label{eqn:Bkrho} B_k(\rho):=\sum_{j=1}^q \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})}< \Delta(a_0)/2. \end{equation}
By assumption, $M_k:=k^2 Dg^{kq}(c_1)>0$ holds for all $k\ge 0$. By Leau-Fatou Flower theorem, see Appendix B, $Dg^q(c_{kq+1})=1-2/k +O(k^{-2})$, so $M:=\lim_{k\to\infty} M_k>0$ exists. Enlarging $k_0$, we have $M_k> M/2$ for all $k\ge k_0$. Then \begin{align*} D(\rho) &= 1+\mathlarger{\sum}_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c_1)} = 1+ \mathlarger{\sum}_{k=0}^\infty \mathlarger{\sum}_{j=1}^{q}\dfrac{\rho^{qk+j}L(c_{qk+j})}{Dg^{qk+j}(c_1)}=\\
& \\
&=1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[ \frac{\rho^{qk}}{Dg^{qk}(c_1)} \sum_{j=1}^{q} \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})} \right] =1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \frac{k^2\rho^{qk}}{M_k} B_k(\rho)\\
& \le 1+\mathlarger{\sum}_{k=0}^{k_0} \frac{k^2\rho^{qk}}{M_k} B_k(\rho) +\frac{\Delta(a_0)}{4M} \sum_{k=k_0+1}^\infty k^2 \rho^{qk}, \end{align*} provided that $\rho_0<\rho<1$. \iffalse
For each $j$, since $Dg^{qk+j}(c_1)= Dg^{j}(g^{qk}(c_1)) Dg^{qk}(c_1) $ there exists $B_j\in \mathbb{R}$ such that $$Dg^{kq+j}(c_1)=Dg^{j}(a_0)\left(1+\dfrac{B_j}{k}+ o(1/k)\right)
\left(\dfrac{M}{k^2} + o(1/k^2)\right).$$
Let Using this and $L(c_{qk+j})=L(a_{j-1})+O(1/k)$, we obtain
\begin{equation*}
\begin{array}{rl} D(\rho) &= 1+\mathlarger{\sum}_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c)} = 1+ \mathlarger{\sum}_{j=1}^{q} \mathlarger{\sum}_{k=0}^\infty \dfrac{\rho^{qk+j}L(c_{qk+j})}{Dg^{qk+j}(c)}=\\
& \\
&=1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[ \frac{\rho^{qk}}{Dg^{qk}(c_1)} \sum_{j=1}^{q} \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})} \right] \\
&=1+ \mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[\dfrac{k^2\rho^{qk}}{M_k} \underbrace{\sum_{j=1}^{q} \dfrac{\rho^j (L(a_{j-1})+O(1/k))}{Dg^{j}(a_0)}}_{B_k(\rho)} \right]=\\
&=1+ \mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[\dfrac{k^2\rho^{qk}}{\left(M+ o(1)\right) } \underbrace{\left( \Delta(a_0)+ \mathlarger{\sum}_{j=1}^{q} \dfrac{(\rho^j-1) L(a_{j-1})}{Dg^{j}(a_0)} + \rho^jO(1/k) \right)}_{=B_k(\rho)} \right].
\end{array} \end{equation*}
Now assume by contradiction that $\Delta(a_0)<0$. Then there exists $k_0$ and $\rho_0\in (0,1)$ so that
$B_k(\rho)\le \Delta(a_0)/2<0$ for $k\ge k_0$ for all $\rho\in [\rho_0,1]$. Moreover, $\sup B_k \le B_*$.
\fi Since $\sum_{k=k_0+1}^\infty k^2 \rho^{qk} \to \infty$ as $\rho\to 1$, this implies that $\liminf_{\rho \nearrow 1} D(\rho)=-\infty$, a contradiction with $D(\rho)>0$ for all $\rho \in (0,1)$.
\end{proofof}
\subsection{A vector field $v_\rho$ along $P$ so that $\rho \mathcal A v_\rho = v_\rho$}
\begin{prop}\label{prop:32} Let $g$ be a marked map w.r.t. $c_1$, and $P=\{c_n\}_{n\ge 1}$ converges to the periodic orbit $\mathcal O$ of $g$ which has the multiplier $+1$ and not degenerate. Consider a holomorphic deformation $(g,G)_W$.
Assume that $(g,G)_W$ has the lifting property for the set $P$.
Then for all $|\rho|<1$ one has $$1+\sum_{n\ge 1} \dfrac{\rho^nL(c_n)} {Dg^n(c_1)}\ne 0$$
where $L(x)=\frac{\partial G_w}{\partial w}|_{w=c_1}(x)$. \end{prop} \begin{proof}
Let us for the moment assume that $h_\lambda(c_i)=c_i+v_i\lambda + O(\lambda^2)$ defines a holomorphic motion of $P$. Then its lift $\hat h_\lambda(c_i)=c_i+\hat v_i \lambda + O(\lambda^2)$ is defined for $|\lambda|$ small and $$G_{h_\lambda(c_1))}(\hat h_\lambda(c_i))=h_\lambda(c_{i+1})=c_{i+1}+v_{i+1}\lambda+O(\lambda^2).$$ Writing $D_i=Dg(c_i)$ we obtain $$L_i v_1 + D_i\hat v_i = v_{i+1}, i\ge 1$$ where $L_i=L(c_i)$.
Taking $v=(v_1,v_2,\dots)$ and $\hat v=(\hat v_1,\hat v_2,\dots)$ we have that $\hat v=\mathcal{A}v$ where $$\mathcal{A}=\left( \begin{array}{cccccc}
-L_1/D_1 & 1/D_1 & 0 & \dots & \dots & \dots \\
-L_2/D_2 & 0 & 1/D_2 & 0 & \dots & \dots \\ -L_3/D_3 & 0 & 0 & 1/D_3 & \dots & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots
\end{array}\right).$$
Assume $\rho\ne 0$ and consider the vector field $v_\rho$ on $P$ defined for $n>1$ by \begin{equation*} \begin{array}{rl} v_\rho(c_n) &:=\dfrac{Dg^{n-1}(c_1)}{\rho^{n-1}} \sum_{k=0}^{n-1} \dfrac{\rho^kL_k}{Dg^k(c_1)}=\\ & \\ &=L_{n-1}+\dfrac{Dg(c_{n-1})L_{n-2}}{\rho} + \dfrac{Dg^2(c_{n-2})L_{n-3}}{\rho^2 }+ \dots + \dfrac{Dg^{n-1}(c_1)L_0}{\rho^{n-1}} \end{array}\end{equation*} (with $L_0=1$) and $v_\rho(c_1)=1$. Notice that for this vector field we get $\rho \mathcal A v_\rho = v_\rho$.
Assume, by contradiction, that for some $0<|\rho|<1$, $$1+\sum_{n\ge 1} \dfrac{\rho^nL(c_n)} {Dg^n(c_1)}= 0.$$ Then we have that $$v_\rho(c_n)= - \dfrac{Dg^{n-1}(c)}{\rho^{n-1}} \sum_{k=n}^{\infty} \dfrac{\rho^kL(c_k)}{Dg^k(c)}= - \sum_{j=1}^\infty \dfrac{\rho^jL(c_{n+j-1})}{Dg^j(c_n)}. $$ For simplicity write $v_{i,\rho}=v_\rho(c_i)$. In the next lemma we will show that $v_\rho$ defines a Lipschitz vector field. Because of this,
$v_\rho$ defines a holomorphic motion $h_{\lambda,\rho}$ for $|\lambda|<\epsilon$ for some $\epsilon>0$. As $\rho$ is fixed, let us write $h_\lambda=h_{\lambda,\rho}$. Since $(g,G)_W$ has the lifting property, it follows that the consecutive sequence of lifts $h^{(n)}_\lambda$ of $h_\lambda$ form a normal family. Write $h_\lambda^{(n)}(x)=x+\lambda v^{(n)}_\rho(x) + O(\lambda^2)$. Then $v^{(n)}_\rho=\dfrac{1}{\rho^n}v_\rho$ which, since $\rho<1$, contradicts that the family $h_\lambda^{(n)}$ forms a normal family.
\end{proof}
\begin{lemma}\label{liprho}
The vector field $$v_\rho(c_n):= \sum_{j=1}^\infty \dfrac{\rho^jL(c_{n+j-1})}{Dg^j(c_n)}$$ defined on the set $P=\{c_i\}_{i\ge 1}$
is Lipschitz.
\end{lemma}
\begin{proof}
Given $x\in U$ such that $g^i(x)\in U$ for all $i\ge 1$,
define $$V_\rho(x)=\sum_{j=1}^\infty \dfrac{\rho^jL(g^{j-1}(x))}{Dg^j(x)}.$$
We have $V_\rho(c_n)=v_\rho(c_n)$, $n=1,2,\cdots$.
Notice that if $Z(z)=\dfrac{1}{a_0(x)\dots a_{n-1}(x)}$ then
$$Z'(z)=\dfrac{-1}{a_0(x)\dots a_{n-1}(x)}\sum_{i=0}^{n-1} \dfrac{a_i'(x)}{a_i(x)}.$$
Writing $Dg^n(x)=Dg(g^{n-1}(x))\dots Dg(x)$ this gives $$D [\dfrac{1}{Dg^n(x)}]=\dfrac{-1}{Dg^n(x)} \sum_{i=0}^{n-1} \dfrac{D^2g(g^i(x))}{Dg(g^i(x))}.$$ Hence \begin{equation}\label{eq:v'} V_\rho'(x)= \sum_{j=1}^\infty \rho^j\left[ \dfrac{L'(g^{j-1}(x))}{Dg(g^{j-1}(x)}-\dfrac{L(g^{j-1}(x))}{Dg^j(x)} \sum_{i=0}^{n-1} \dfrac{D^2g(g^i(x))}{Dg(g^i(x))}\right]. \end{equation} Since the periodic orbit $\mathcal O=\{a_0,\cdots,a_{q-1}\}$ of $g: U\to \mathbb{C}$ has multiplier $+1$ and is not degenerate, by the proof of Lemma~\ref{lem:leau-fatou} for each $a_j$ there is a convex set $\Delta_j$ in the basin of $\mathcal O$ with a boundary point $a_j$ such that $g^q(\Delta_j)\subset \Delta_j$. Moreover, the closures of $\Delta_j$, $0\le j\le q-1$ are pairwise disjoint, all but finitely many points of the orbit $P$ are in $\Delta:=\cup_{j=0}^{q-1} \Delta_j$ and there exists $M_1>0$ such that, for each $x\in \Delta_j$, $0\le j\le q-1$, and each $k$:
$$|Dg^{kq}(x)|\ge \frac{M_1}{k^2}.$$
Notice also that the function $D^2g/Dg=D(\log Dg)$ is bounded in $\Delta$. These bounds along with the definition for $V_\rho$, (\ref{eq:v'}) and since $|\rho|<1$ imply that for some $K>0$ and all $x\in C$,
$$|V_\rho(x)|\le K, \ \ |V_\rho'(x)|\le K.$$
As each component $\Delta_j$ of $\Delta$ is convex (so that $x_1,x_2\in \Delta_j$ implies $|V_\rho(x_1)-V_\rho(x_2)|\le K|x_1-x_2|$) and only finitely many points of $P$ is outside of $\Delta$ we conclude that $V_\rho$ is Lipschitz on $P$. \end{proof}
\fi \iffalse \section{A holomorphic extension $V$ of $v$ near the parabolic point}\label{sect:ext}
\iffalse \begin{prop}\label{prop:vC1} The function $v:P\to\mathbb{C}$ extends to a $C^1$ function in $V:\mathbb{C}\to \mathbb{C}$ such that $\overline{\partial} V(z)=0$ for $z\in \{a_0, a_1, \ldots, a_{q-1}\}.$ \end{prop} \begin{proof} It suffices to prove that $v$ extends to a $C^1$ function in a neighborhood of $a_j$ for each $j=0,1,\ldots, a_{q-1}$. Without loss of generality, we consider the case $j=0$, assume $a_0=0$, $g^q(x)=x+x^2+ \theta x^3+O(x^4)$ near $x=0$ and that $c_{qj+1}\to a_0$ as $j\to \infty$. Write $z_m=c_{qm+1}=x_m+iy_m$ and $v_m=v(z_m)$. \marginpar{SVS\\ $qm \to qm+1$} Note that $y_m/x_m\to 0$ and it suffices to show \begin{equation}\label{eqn:C1suff} \lim_{n\to\infty} \frac{v_{m+1}-v_m}{z_{m+1}-z_m}\,\, \mbox{exists.} \end{equation}
Let $$Q(z)=\sum_{i=1}^{q}\frac{L(g^{i-1}(z))}{Dg^i(z)}.$$ By assumption $Q(a_0)=Q(0)=0$, there exist $A, B\in \mathbb{C}$ such that $$Q(z)=2Az+Bz^2+O(z^3).$$ By definition of $v_m$, we have \begin{multline}\label{eqn:vmrec} v_{m+1}=Dg^q(x_m) (Q(z_m)+v_m)=(1+2z_m +3\theta z_m^2) (2Az_m+Bz_m^2+v_m)+O(z_m^3)\\ =(1+2z_m) v_m+2Az_m+(4A+B)z_m^2+3\theta z_m^2 v_m +O(z_m^3), \end{multline} where we also used the boundedness of $v_m$. Putting $\beta=4A+B-3\theta A$, and \begin{equation}\label{eqn:vmwm} w_m=v_m+A+ \beta z_m, \end{equation} we obtain $$w_{m+1}=(1+2z_m+3\theta z_m^2) w_m+O(z_m^3).$$
\newcommand{\widetilde{w}}{\widetilde{w}} Let $\widetilde{w}_{m}=w_m/z_m^2$. Then $$\widetilde{w}_{m+1}=\frac{1+2z_m+3\theta z_m^2}{(1+z_m+\theta z_m^2+O(z_m^3))^2} \widetilde{w}_m +O(z_m)= \left(1+ (\theta-1) z_m^2+O(z_m^3)\right) \widetilde{w}_m+ O(z_m).$$ It follows that there exists a constant $K>0$ such that
$$|\widetilde{w}_{m+1}|\le (1+K|z_m|^2) |\widetilde{w}_m|+ K |z_m|.$$
Let us prove that there is $C>0$ such that $|\widetilde{w}_m|\le C\log m$ for all $m\ge 2$. Let $C_0>0$ and $m_0\ge 2$ be such that $|z_m|\le C_0/m$ for all $m\ge m_0$. Choose $C>0$ such that $$\log \frac{m+1}{m}>\frac{KC_0}{C} \frac{1}{m} +\frac{KC_0^2\log m}{m^2}$$ for all $m\ge m_0$ and such that $|\widetilde{w}_{m_0}|\le C\log m_0$. If $|\widetilde{w}_m|\le C \log m$ for some $m\ge m_0$,
then $$|\widetilde{w}_{m+1}|\le (1+KC_0^2/m^2) C\log m + KC_0/m< C \log (m+1).$$
This proves that $|\widetilde{w}_m|=O(\log m)$ and hence $|w_m|=O(z_m^2 \log m)$. Furthermore
$$|w_{m+1}-w_m|=O(z_m w_m) +O(z_m^3)=O(|z_m|^3\log m).$$ Since $z_{m+1}-z_{m}\sim z_m^2\sim m^{-2}$, (\ref{eqn:C1suff}) follows.
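For the reader's convenience, let us make this last step explicit (a sketch, using only the estimates above together with the standard parabolic asymptotics $|z_m|\sim 1/m$, which follow from $z_{m+1}=z_m+z_m^2+O(z_m^3)$). Since $|w_{m+1}-w_m|=O(|z_m|^3\log m)$ while $|z_{m+1}-z_m|\sim m^{-2}$, we get
$$\frac{w_{m+1}-w_m}{z_{m+1}-z_m}=O\Big(\frac{\log m}{m}\Big)\to 0,$$
and by (\ref{eqn:vmwm}) we have $v_m=w_m-A-\beta z_m$, so
$$\frac{v_{m+1}-v_m}{z_{m+1}-z_m}=\frac{w_{m+1}-w_m}{z_{m+1}-z_m}-\beta\to -\beta.$$
In particular the limit in (\ref{eqn:C1suff}) exists and equals $-\beta$.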
\marginpar{Add why \\ can assume\\$\bar V=0$}
\end{proof} \fi
\iffalse \begin{proof}[Step D, assuming Theorem~\ref{thm:asymptinv1}] First, we may assume that the holomorphic motion $H_\lambda^{m}$ given by Theorem~\ref{thm:asymptinv1} (a) satisfies $H_\lambda^m(c_1)=c_1+\lambda$. (We then might have lost the statement (b), but actually not as shown below.) Then for each $k\ge 1$, $$H_\lambda^m (c_k) =G_{\lambda+c_1}^{k-1}(\lambda+c_1)+O(\lambda^{m+1}).$$ (So $H_\lambda^m (c_k)-H_\lambda^{m+1}(c_k)=O(\lambda^{m+1})$.)
Putting $\varphi_m(\lambda)= \lim_{n\to\infty} H_{c_1+\lambda}^{m}(c_{nq+1})-a_1=: H_{c_1+\lambda}^m (a_1)-a_1$, we have \begin{equation}\label{eqn:varphim} G_{c_1+\lambda}^q(\varphi_m(\lambda)+a_1)=\varphi_m(\lambda) +a_1+O(\lambda^{m+1}). \end{equation}
{\bf Claim 1.} There are a function $\varphi(\lambda)$, holomorphic near $\lambda=0$, and an integer $m_0\ge 1$, such that $$G_{c_1+\lambda}^q (\varphi(\lambda)+a_1)=\varphi(\lambda)+a_1,$$ and such that for each $m\ge m_0$,
$$\varphi_m(\lambda)-\varphi(\lambda)=O(|\lambda|^{m/3}) \mbox{ as } \lambda\to 0.$$
\begin{proof}[Proof of Claim 1] (A slightly different proof of Proposition~\ref{twop}.) Consider the map $\Phi(\lambda, z)= G_{c_1+\lambda}^q (z+a_1)-z-a_1$ which is holomorphic in a neighborhood of the origin in $\mathbb{C}^2$. Clearly, $\Phi(0,z)$ is not identically zero, so by the Weierstrass' Preparation Theorem, there is a Weierstrass polynomial $$Q(\lambda,z)=z^2+ 2u(\lambda) z+ v(\lambda)=(z+u(\lambda))^2+v(\lambda)-u(\lambda)^2$$ such that $\Phi(\lambda, z)=Q(\lambda, z) R(\lambda, z)$, where $R$ is a holomorphic function near the origin and $R(0,0)\not=0$, and $u, v$ are holomorphic near the origin in $\mathbb{C}$ and $u(0)=v(0)=0$. Consider the discriminant $\Delta(\lambda)=v(\lambda)-u(\lambda)^2$ which satisfies $\Delta(0)=0$. By (\ref{eqn:varphim}), for each $m$, \begin{equation}\label{eqn:varphim1} (\varphi_m(\lambda)+u(\lambda))^2 +\Delta (\lambda)=O(\lambda^{m+1}). \end{equation}
Let us distinguish two cases.
{\em Case 1.} $\Delta(\lambda)\equiv 0$ (for $\lambda\in \mathbb{D}_\varepsilon$ for some $\varepsilon>0$). Then $-u(\lambda)$ is the only zero of $Q(\lambda, z)$ near $0$. By (\ref{eqn:varphim1}), $\varphi_m(\lambda)+u(\lambda)=O(|\lambda|^{(m+1)/2})$. So the claim holds with $\varphi=-u$.
{\em Case 2.} $\Delta(\lambda)\not\equiv 0.$ Then there is $n_0\ge 1$ and $A\not=0$ such that $\Delta(\lambda)= A\lambda^{n_0}+O(\lambda^{n_0+1})$. Assume $m\ge n_0$. By (\ref{eqn:varphim1}),
$$(\varphi_m(\lambda)+u(\lambda))^2 =-A\lambda^{n_0} +O(\lambda^{n_0+1}),$$ which implies that $n_0$ is even. Therefore, there exist holomorphic functions $\varphi_{\pm}(\lambda)$ such that $Q(\lambda, z)=(z-\varphi_+(\lambda))(z-\varphi_-(\lambda))$. The claim holds for either $\varphi(\lambda)=\varphi_+(\lambda)$ or $\varphi(\lambda)=\varphi_-(\lambda)$. \end{proof}
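To illustrate the dichotomy in a model case (this example does not arise from our family; it only shows the role of the parity of $n_0$): for $\Phi(\lambda,z)=z^2-\lambda^2$ we have $u\equiv 0$ and $\Delta$ vanishes to order $n_0=2$, so $n_0$ is even and the two zeros $\varphi_\pm(\lambda)=\pm\lambda$ are indeed holomorphic; for $\Phi(\lambda,z)=z^2-\lambda$, on the other hand, the zeros $\pm\sqrt{\lambda}$ admit no single-valued branch near $\lambda=0$, which is exactly the situation that the evenness of $n_0$ rules out.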
We want to show that $DG_{c_1+\lambda}^ q( \varphi(\lambda)+a_1)\equiv 1$. Arguing by contradiction, assume that this is not the case. Then there exists $m_0\ge 1$ and $A\not=0$, such that $$DG_{c_1+\lambda}^ q( \varphi(\lambda)+a_1)-1=3A\lambda^{m_0}+O(\lambda^{m_0+1}).$$
Fix $m\ge 3(m_0+1)$. There exists $\varepsilon'=\varepsilon_m'>0$ such that the following hold: \begin{enumerate} \item There exist holomorphic motions $H_\lambda^{m,k}$ of $\overline{P}$ over $\mathbb{D}_{2\varepsilon'}$, $k=0,1,\ldots,$ such that $H_\lambda^{m,0}=H_\lambda^m$ and such that $H_\lambda^{m,k+1}$ is the lift of $H_\lambda^{m,k}$ for each $k$. \end{enumerate}
Thus, there exists $C=C_m>0$ such that whenever $|\lambda|<\varepsilon'$ and $n\ge 0$,
$$|H_\lambda^{m,n} (c_1)-c_1-\lambda|\le C |\lambda|^{m+1},$$
$$|H_\lambda^{m,n} (a_1)-\varphi_m(\lambda)|\le C|\lambda|^{m+1}.$$
By Claim 1, enlarging $C$ if necessary, we have $$|\varphi(\lambda)-\varphi_m(\lambda)|\le C|\lambda|^{m/3}\le C|\lambda|^{m_0+1}.$$
Hence there exists $C'>0$ such that $$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(a_1))-DG_{c_1+\lambda}^q (\varphi(\lambda)+a_1)|\le C' |\lambda|^{m_0+1}.$$
It follows that there exist $\varepsilon''>0$ and $C''>0$ such that whenever $|\lambda|\le \varepsilon''$, we have
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(a_1))-(1+3A\lambda^{m_0})|<C''|\lambda|^{m_0+1}.$$
Therefore, we can choose $\lambda$ with $|\lambda|$ arbitrarily small and such that
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(H_{\lambda}^{m,n+q}(a_1))|<1-2|A||\lambda|^{m_0}<1$$ holds for every $n$. We fix such a choice of $\lambda$ now.
Let $r>0$ be a small constant such that for each $z$ and each $n\ge 0$ with $|z-H_{\lambda}^{m,n+q}(a_1)|<r$,
$$|DG_{H_\lambda^{m, n}(c_1)}\circ G_{H_\lambda^{m, n+1}(c_1)}\circ \cdots G_{H_{\lambda}^{m,n+q-1}(c_1)}(z)|<1-|A||\lambda|^{m_0}=: \kappa \in (0,1).$$
Let $l_0>0$ be large enough such that for each positive integer $l\ge l_0$ and any $n\ge 0$, $|H_\lambda^{m,n} (a_1)- H_\lambda^{m,n} (c_{lq+1})|<r$. Then for any $l\ge l_0$, and any $n\ge 0$, \begin{align*}
& |H_\lambda^{m,n}(a_1)- H_\lambda^{m,n} (c_{(l+1)q+1})|\\
=& |G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (a_1))-G_{H_\lambda^{m,n} (c_1)} \circ \cdots G_{H_\lambda^{m, n+q-1}(c_1)} (H_\lambda^{m,n+q} (c_{lq+1}))|\\
\le & \kappa |H_\lambda^{m,n+q} (a_1)-H_\lambda^{m,n+q} (c_{lq+1})|. \end{align*} It follows that
$$|H_\lambda^{m,0} (a_1)- H_\lambda^{m,0} (c_{lq+1})|=O(\kappa^l),$$
which, since $H_\lambda^{m,0}$ is bi-H\"older, implies that $|c_{lq+1}-a_1|$ is exponentially small in $l$.
However, $c_{lq+1}$ only converges to $a_1$ polynomially fast, a contradiction! \end{proof} \fi
\section{Distortion for some quasiconformal mappings}
\iffalse \section{The holomorphic motion $H_\lambda(z)$ satisfies a differentiability condition at the parabolic fixed point}
Throughout this section $H$ will be a quasiconformal map. The main purpose of this section is to prove Theorem~\ref{thm:diff-frac}, which asserts that under some conditions $H$ satisfies a kind of H\"older condition
at the parabolic fixed point on some difference quotients, see inequality (\ref{eq:diff-frac}).
\subsection{Astala's theorem}
Later on in this section it will be useful to know that $\frac{\partial H}{\partial \bar u}(u)\in L^p$ for some fairly large $p$. For this we will use the following theorem, see \cite{Astala}.
\begin{theorem}[Astala] Suppose that $f: B(0,1)\to B(0,1)$ is a $K$-quasiconformal mapping with $f(0)=0$. Then $|f(E)|\le M(K)|E|^{1/K}$ where $|E|$ is the Lebesgue measure of a Borel set $E\subset B(0,1)$ and $M(K)$ depends only on $K$. \end{theorem} An easy consequence is the following (which we will use at the end of this section): \begin{coro}\label{coroAst} Under the conditions of Astala's theorem, for every $1<r<2K/(K-1)$ there exists $M_*(K,r)>0$ such that
$$\int_{B(0,1)} |f_{\bar z}(z)|^r|dz|^2\le M_*(K,r).$$ In particular, $f_{\bar z}\in L^r$.
\end{coro}
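For completeness, here is a sketch of the standard deduction of the corollary from the theorem. Applying Astala's theorem to $E=\{z\in B(0,1): J_f(z)>t\}$ and using $\int_E J_f\le |f(E)|$ gives $t|E|\le M(K)|E|^{1/K}$, so the Jacobian $J_f$ lies in weak-$L^{K/(K-1)}$, and hence $J_f\in L^s(B(0,1))$ for every $s<K/(K-1)$. Since $f$ is $K$-quasiconformal, $|Df(z)|^2\le K J_f(z)$ almost everywhere, so $Df\in L^r$ for every $r<2K/(K-1)$; in particular $|f_{\bar z}|\le |Df|$ yields the stated bound, with $M_*(K,r)$ depending only on $K$ and $r$.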
\subsection{Teichm\"uller-Wittich-Belinskii Theorem}
\begin{theorem} Let $H$ be a quasiconformal mapping near $z=0$ with $H(0)=0$, and let $\mu_H(z)$ be the dilatation of $H$ at $z$. Assume that
$$\dfrac{1}{2\pi} \int\int_{|z|<t} \dfrac{|\mu_H(z)|}{|z|^2} dxdy < \infty\mbox{ for some }t<\infty.$$ Then $H$ is conformal at $z=0$. \end{theorem}
Here $H$ is said to be conformal at $z=z_0$ if $$\lim_{z\to z_0} \dfrac{H(z)-H(z_0)}{z-z_0}$$ exists and is non-zero. For references to this theorem we refer to \cite{Shi}.
\subsection{Checking the assumptions of the T-W-B Theorem}
Assume that $H$ is a quasiconformal homeomorphism so that $H$ is holomorphic outside the cusp region
$\mathcal C_{\epsilon}:=\{\rho e^{i\varphi} \,;\, |\varphi| \le \epsilon\rho\}$.
\begin{prop} The assumption of the T-W-B theorem is satisfied. In particular $H$ is conformal at $0$. \end{prop}
\begin{proof} Let $H$ have Beltrami coefficient $\mu$. Then $\mu=0$ on $B(0,\epsilon)\setminus \mathcal C_\epsilon$
for some $\epsilon>0$ and $|\mu(z)|$ is at most $||\mu||_\infty$. To check the assumption of the T-W-B theorem we need to obtain an upper bound for
$$\int_{B(0,\epsilon)} \dfrac{|\mu_H(z)|}{|z|^2} |dz|^2.$$ This is bounded by
$$\int\int_{|z|<r, z\in \mathcal C_\epsilon} ||\mu||_\infty
\dfrac{dxdy}{|z|^2}.$$ So it is enough to bound
$$\int\int_{|z|<r, z\in \mathcal C_\epsilon} \dfrac{dx dy}{|z|^2} = 2 \int_0^r \, d\rho \int_{-\varphi_\rho}^{\varphi_\rho} \dfrac{\rho d\varphi} {\rho^2} = 4 \int_0^r \, d\rho \frac{\varphi_\rho}{\rho}. $$ Since $\varphi_\rho=\epsilon\rho$ we get $$\le 4\epsilon \int_0^r d\rho = 4\epsilon r.$$
\end{proof}
\subsection{Conditions for applicability of Cauchy-Pompeiu formula}
Following \cite[p41]{Vekua} we have:
\begin{theorem}[Cauchy-Pompeiu formula] Let $H$ be continuous and $\frac{\partial H}{\partial \bar u}\in L^p$ in the domain $G$ for some $p>2$. Let $D$ be a Jordan domain with rectifiable boundary such that $\overline{D}\subset G$.
Then the Cauchy-Pompeiu formula holds for $z\in D$:
$$H(z)=\dfrac{1}{2\pi i} \int_{\partial D} \frac{H(u)}{u-z} \, du - \frac{1}{\pi} \lim_{r\to 0}\int\int_{D\setminus D(z,r)} \dfrac{\frac{\partial H}{\partial \bar u}(u)}{u-z} \,
|du|^2$$
In fact, the latter limit exists.
\end{theorem}
\subsection{Computing the difference of two difference quotients}
The main result of this section is the following
\begin{theorem}\label{thm:diff-frac} Assume that $H: B(0,r')\to \mathbb{C}$ is continuous for some $r'>3r$ where $r$ is small enough, that $H_{\bar z}\in L^{6+\delta}$ for some $\delta>0$, and that $H_{\bar z}=0$ outside $\mathcal C_\epsilon$, where $r,\epsilon>0$ are such that $r\epsilon$ is small enough too. Then for all $z_1,z_2\in B(-r,r)$ and all $a\in B(-r,r)$, $z_1,z_2\neq a$,
\begin{equation} \left| \dfrac{H(z_1)-H(a)}{z_1-a}-\dfrac{H(z_2)-H(a)}{z_2-a} \right| \le
\tilde C \left( \sup_{z\in \partial B(0,3r)} |H(z)| + \Big(\int_{\mathcal C_\epsilon\cap B(0,3r)} |H_{\bar u} |^5|du|^2\Big)^{1/5} \right)|z_1-z_2|^{2/5}
\label{eq:diff-frac}\end{equation}
where $\tilde C=\tilde C(r,\epsilon)$. \end{theorem} \begin{proof}
We would like to apply the previous theorem to $\frac{H(z)-H(a)}{z-a}$ in the domain $D=B(0,3r)$. To do this we need to check that
$$\frac{\partial }{\partial \bar z} \left( \frac{H(z)-H(a)}{z-a} \right) = \dfrac{\frac{\partial H}{\partial \bar z}(z)}{z-a}$$
is in $L^p$ for some $p>2$. To check this take $p',q'>1$ so that
$\frac{1}{p'}+\frac{1}{q'}=1$, use that $\frac{\partial H}{\partial \bar z}$ is zero outside $$\mathcal C_\epsilon':=\mathcal C_\epsilon\cap B(0,3r)$$
and apply the H\"older inequality:
$$\int\int_{B(0,3r)} \Big\vert \dfrac{\frac{\partial H}{\partial \bar z}(z)}{z-a}\Big\vert^p \, |dz|^2 =
\int\int_{\mathcal C_\epsilon'} \Big\vert \dfrac{\frac{\partial H}{\partial \bar z}(z)}{z-a}\Big\vert^p \, |dz|^2 \le
$$
$$
\left(\int\int_{\mathcal C_\epsilon'} \Big|\frac{\partial H}{\partial \bar z}(z)\Big|^{pp'} |dz|^2 \right)^{1/p'}
\left(\int\int_{\mathcal C_\epsilon'} \frac{|dz|^2}{|z-a|^{pq'}} \right)^{1/q'}$$
To estimate the final integral, note that since $z\in \mathcal C_{\epsilon}=\{\rho e^{i\phi}; |\phi|\le \epsilon\rho\}$ and $a\in B(-r,r)$, we have
$|z-a|\ge \beta|z|$ for some $\beta>0$; so it is enough to estimate
$$\int\int_{\mathcal C_\epsilon'} \dfrac{|dz|^2}{|z|^\ell} =2 \int_0^{3r} d\rho \int_{-\varphi_\rho}^{\varphi_\rho} \rho \frac{d \varphi }{\rho^\ell} = 4\epsilon \int_0^{3r} \dfrac{d\rho}{\rho^{\ell-2}} =
\frac{4\epsilon}{3-\ell} \rho^{3-\ell} \Big\vert_0^{3r}$$
where $\ell=pq'$ and where as before we use that $\varphi_\rho=\epsilon\rho$. To ensure this is bounded we need $3-\ell>0$.
Since $p>2$ and $\ell=pq'<3$, we need to choose $q'=\frac{3}{2}-\sigma$ for some small $\sigma>0$. Since $\frac{1}{p'}+\frac{1}{q'}=1$ this means $p'>3$. To get the first integral in the above product of integrals bounded we need $\frac{\partial H}{\partial \bar z} \in L^{pp'}$, and so $\frac{\partial H}{\partial \bar z} \in L^{6+\sigma}$ for some $\sigma>0$.
Take $a,z_1,z_2\in B(-r,r)$. The previous subsection implies that we can apply the Cauchy-Pompeiu formula to the following expression
and obtain $$\dfrac{H(z_1)-H(a)}{z_1-a}-\dfrac{H(z_2)-H(a)}{z_2-a} = $$ $$=\dfrac{1}{2 \pi i} \int_{\partial B(0,3r)} \dfrac{H(u)-H(a)}{u-a} \left( \dfrac{1}{u-z_1}-\dfrac{1}{u-z_2}\right) du - $$
$$\quad \dfrac{1}{\pi}\int \int_{\mathcal C_\epsilon'} \dfrac{\frac{\partial H}{\partial \bar u}}{u-a}\left( \dfrac{1}{u-z_1}-\dfrac{1}{u-z_2}\right) |du|^2$$ The first integral is at most
$$|z_1-z_2| \dfrac{1}{2\pi} \int _{\partial B(0,3r)} \Big| \dfrac{H(u)-H(a)}{u-a} \left( \dfrac{1}{u-z_1}-\dfrac{1}{u-z_2}\right)\Big| |du|$$
$$\le 2|z_1-z_2| \sup_{z\in \partial B(0,3r)} |H(z)| \cdot \frac{2\pi (3r)}{r^3},$$
where we used that $|u-z_1|,|u-z_2|,|u-a|> r$.
Since the second integral is only over $\mathcal C_\epsilon'$, we can assume that $u\in \mathcal C_\epsilon'$, and since $a\in B(-r,r)$ we have that
$|u-a|\ge \alpha |u|$ for some $\alpha>0$. Thus the second integral is bounded from above by
\begin{equation}\frac{1}{\alpha \pi} \int\int_{\mathcal C_\epsilon'} |H_{\bar u}(u)| \frac{|z_1-z_2| |du|^2}{|u(u-z_1)(u-z_2)|}\le \label{eqdiffrac2} \end{equation}
\begin{equation}\frac{1}{\alpha \pi} \left( \int\int_{\mathcal C_\epsilon'} |H_{\bar u}|^p |du|^2 \right)^{1/p}
\left(\int\int_{\mathcal C_\epsilon'} \frac{|z_1-z_2|^q |du|^2}{|u(u-z_1)(u-z_2)|^q}\right)^{1/q}
\label{eqdiffrac3} \end{equation}
where we take $q=5/4$, $p=5$. Since $H_{\bar u}\in L^5$ we obtain that the first factor is bounded. Let us now bound the term
$$\int\int_{\mathcal C_\epsilon'} \frac{|z_1-z_2|^q |du|^2}{|u(u-z_1)(u-z_2)|^q} =
\frac{|z_1-z_2|^q}{|z_1z_2|^q}\int \int_{\mathcal C_\epsilon'} \frac{|du|^2} {|u(1-u/z_1)(1-u/z_2) |^q}$$
Note that if $z\in B(-r,r)$ and $u\in \mathcal C_\epsilon$
then $u/z$ lies in the region $\{w;\, \arg(w)>\pi/2-\alpha\}$ where $\alpha=\arcsin(3r\epsilon)$ is small. \marginpar{Add picture?}
So
\begin{equation} \begin{array}{cll}
|1-\frac{u}{z}| &\ge \cos(\alpha)>1/2 & \mbox{ when }|\frac{u}{z}|\le 1 \\
|1-\frac{u}{z}| &\ge |\frac{u}{z}| &\mbox{ when }|\frac{u}{z}|>1
\end{array}
\label{eq:uz}\end{equation}
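Let us indicate a short justification of (\ref{eq:uz}) (a sketch, using only the sector location of $u/z$ noted above). Write $w=u/z$ and, say, $\theta:=\arg(w)\ge \pi/2-\alpha$ (the case of negative argument is symmetric). If $|w|\le 1$ and $\theta\in[\pi/2,\pi]$, then $\mathrm{Re}\, w\le 0$ and hence $|1-w|\ge 1$; if $|w|\le 1$ and $\theta\in [\pi/2-\alpha,\pi/2]$, then the distance from $1$ to the segment $\{t e^{i\theta}: 0\le t\le 1\}$ equals $\sin\theta\ge \cos\alpha$. In both cases $|1-w|\ge \cos\alpha$, and $\cos\alpha>1/2$ as soon as $\alpha<\pi/3$. As for the second estimate, when $\mathrm{Re}\, w\le 0$ it follows at once from $|1-w|^2=1-2\,\mathrm{Re}\, w+|w|^2\ge |w|^2$.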
Assume $0<|z_2|\le |z_1|$ so $|u/z_1|\le |u/z_2|$
and decompose the integral
$$\int \int_{\mathcal C_\epsilon'} \frac{|du|^2} {|u(1-u/z_1)(1-u/z_2) |^q}$$
as
$$\int\int_{\mathcal C_\epsilon', |u/z_2|\le 1} + \int\int_{\mathcal C_\epsilon', |u/z_2|> 1, |u/z_1|\le 1} + \int\int_{\mathcal C_\epsilon', |u/z_1|> 1} $$
and denote these integrals as
$$ I + II + III. $$
Using (\ref{eq:uz}) $$|I| \le \int\int_{\mathcal C_\epsilon', |u/z_2|\le 1} \, \frac{|du|^2}{|u \frac{1}{4} |^q} =
4^q \int_{0}^{|z_2|} \, d\rho \int_0^{\epsilon\rho} \frac{\rho d\phi}{\rho^q} =4^q \int_0^{|z_2|} \frac{\epsilon\rho^2}{\rho^q} \, d\rho = $$
$$= 4^q \epsilon
\frac{\rho^{3-q}}{3-q} \Big\vert^{|z_2|}_0 = \frac{4^q \epsilon}{3-q} |z_2|^{3-q}.$$ Since $q=5/4$ this expression is bounded. Again using (\ref{eq:uz})
$$| II | \le \int\int_{\mathcal C'_\epsilon, |z_1|\ge |u| >|z_2|} \frac{|du|^2}{|u \frac{1}{2} \frac{u}{z_2} |^q} =
2^q |z_2|^q \int\int_{\mathcal C_\epsilon', |z_1|\ge |u| >|z_2|} \frac{|du|^2}{|u|^{2q}} = $$
$$
2^q |z_2|^q \int_{|z_2|}^{|z_1|} \, d\rho \int_0^{\epsilon\rho} \frac{\rho d\phi} {\rho^{2q}} =
2^q |z_2|^q \int_{|z_2|}^{|z_1|} \frac{\epsilon\rho^2}{\rho^{2q}} \, d\rho $$
$$\quad \quad = 2^q |z_2|^q \epsilon \frac{\rho^{3-2q}}{3-2q} \Big\vert_{|z_2|}^{|z_1|}=
\frac{2^q \epsilon |z_2|^q}{3-2q} (|z_1|^{3-2q} - |z_2|^{3-2q} )$$
Since $q=5/4$ this integral is again bounded. Once again using (\ref{eq:uz})
$$| III | \le \int\int_{\mathcal C_\epsilon', |u| >|z_1|} \frac{|du|^2}{|u \frac{u}{z_1} \frac{u}{z_2} |^q} =
|z_1z_2|^q \int_{|z_1|}^{3r} d\rho \int_0^{\epsilon\rho} \frac{\rho d\phi }{\rho^{3q}} = $$
$$= \epsilon|z_1z_2|^q \int_{|z_1|}^{3r} \rho^{2-3q} \, d\rho =
\epsilon|z_1z_2|^q \frac{\rho^{3-3q}}{3-3q} \Big \vert ^{3r}_{|z_1|} =$$
$$=
\frac{\epsilon|z_1z_2|^q}{3(q-1)} \left( \frac{1}{|z_1|^{3q-3}} - \frac{1}{(3r)^{3q-3}} \right)$$
Here we use that $2-3q<-1$ since $q>1$.
Let us now estimate
$$ \frac{|z_1-z_2|^q}{|z_1z_2|^q} \left (I + II + III\right)$$
taking into account that $0<|z_2|\le |z_1|$. The first term is bounded above by
$$C \cdot \frac{|z_1-z_2|^q}{|z_1z_2|^q} |z_2|^{3-q}= C \cdot \frac{|z_1-z_2|^q}{|z_1|^q} |z_2|^{3-2q}
\le C\cdot |z_1-z_2|^q |z_2|^{3-3q}.$$ If $|z_2|> |z_1-z_2|$ then, since $3-3q<0$, the right hand side is at most $C|z_1-z_2|^{3-2q}$; if $|z_2|\le |z_1-z_2|$ then already the middle expression is at most $C 2^q |z_2|^{3-2q}\le C 2^q |z_1-z_2|^{3-2q}$, using $|z_1-z_2|\le 2|z_1|$. In either case the first term is bounded by a constant times $|z_1-z_2|^{3-2q}$.
Using that $0<|z_2|\le |z_1|$, the second term is bounded by
$$C \cdot \frac{|z_1-z_2|^q}{|z_1z_2|^q}
|z_2|^q(|z_1|^{3-2q} - |z_2|^{3-2q} ) \le C\dfrac{|z_1-z_2|^q}{|z_1|^{3q-3}} = $$
$$= C |z_1-z_2|^{3-2q} \dfrac{|z_1-z_2|^{3q-3}}{|z_1|^{3q-3}}
\le C 2^{3q-3} |z_1-z_2|^{3-2q},$$ where in the last step we used $|z_1-z_2|\le 2|z_1|$. In the same way the third term is bounded by
$$C \frac{|z_1-z_2|^q}{|z_1|^{3q-3}} =C |z_1-z_2|^{3-2q} \dfrac{|z_1-z_2|^{3q-3}}{|z_1|^{3q-3}}
\le C 2^{3q-3} |z_1-z_2|^{3-2q}.
$$
The constant $C$ above depends only on $\epsilon,r$.
Note that $q=5/4$ and so $3-2q=1/2$. Thus
$$ \left(\int\int_{\mathcal C_\epsilon'} \frac{|z_1-z_2|^q |du|^2}{|u(u-z_1)(u-z_2)|^q}\right)^{1/q}
\le C_1 (|z_1-z_2|^{1/2})^{4/5}=C_1 |z_1-z_2|^{2/5}$$ for some $C_1=C_1(\epsilon,r)$, and hence the expression (\ref{eqdiffrac2}) is bounded by $C_1 \left( \int_{\mathcal C'_\epsilon} |H_{\bar u} |^5|du|^2 \right)^{1/5}|z_1-z_2|^{2/5}$. Together with the first part of the proof this concludes the proof of Theorem~\ref{thm:diff-frac}.
\end{proof}
\begin{proofof}{Theorem~\ref{thm:diff-frac}} By Corollary~\ref{coroAst}, since $K\in (1,3/2)$, $H_{\bar z}\in L^{6+\delta}$ for some $\delta>0$ and
$$\int_{B(0,3r)} |H_{\bar u} |^5|du|^2\le \frac{(2R)^5}{(3r)^7}M_*(K,5).$$ Then apply Theorem~\ref{thm:diff-frac}. \end{proofof}
\fi \section{Averaging holomorphic motions} In this section we will use the H\"older estimate from Theorem~\ref{thm:diff-frac} to show that if we take the average of lifts of a holomorphic motion, then we obtain again a holomorphic motion.
\begin{theorem}\label{average} Let $h_\lambda$ be a holomorphic motion of $K$ over $\mathbb{D}_\epsilon$ such that for each $\lambda\in \mathbb{D}_\epsilon$, $h_\lambda(z)$ is holomorphic in $z\in \Omega$.
Then there exists another holomorphic motion denoted by $\mathcal{H}[h_\lambda]$ so that $H_\lambda:=\mathcal{H}[h_\lambda]$ is a holomorphic motion of $K$ over some $\mathbb{D}_{\hat\epsilon}$ which is associated to $h_\lambda$ and is constructed in two steps (1)-(2) as follows:
(1) Let $h^{(0)}_\lambda=h_\lambda$ and let $h^{(k)}_\lambda$, $k=1,2,\cdots$, be holomorphic motions of $K$ such that $h_\lambda^{(k+1)}$ is the lift of $h_\lambda^{(k)}$, $k=0,1,\cdots$. Define $$\hat h_\lambda^{(k)}(z)= \dfrac{1}{k} \sum_{i=0}^{k-1} h_\lambda^{(i)}(z).$$ There exist a subsequence $i_k\to \infty$ and $\hat h_\lambda$ so that $\hat h_\lambda^{(i_k)}\to \hat h_\lambda$ uniformly on $P\cup \hat\Omega$, where $\hat\Omega$ is an open set with $\hat\Omega\subset\cup_{j=0}^{q-1}\Omega_j$ such that the iterates of any point of $\Omega$ eventually enter $\hat\Omega$, and so that $\hat h_\lambda$ is a holomorphic motion of $P\cup \hat\Omega$ over $\mathbb{D}_{\hat\epsilon}$.
(2) Extend $\hat h_\lambda$ to a holomorphic motion $\psi_\lambda$ of $K$. Let $\psi^{(0)}_\lambda=\psi_\lambda$ and let $\psi^{(k+1)}_\lambda$ be the lift of $\psi^{(k)}_\lambda$ on $K$, $k=0,1,\cdots$. Then the required $H_\lambda$ is a uniform limit of a subsequence of $\{\psi^{(k)}_\lambda\}_{k\ge 0}$.
Moreover, for each $\lambda\in \mathbb{D}_{\hat\epsilon}$, $H_\lambda(z)$ is holomorphic in $z\in \Omega$. \end{theorem}
The proof of Theorem~\ref{average} consists of two steps. In Propositions~\ref{prop:Hproperties} and \ref{Hmot} we will show that a limit map $\hat h_\lambda$ defines a holomorphic motion (perhaps, over a smaller disk).
\begin{prop}\label{prop:Hproperties}
There exist $\epsilon_*>0$, a subsequence $i_k\to \infty$ and $\hat h_\lambda$ so that
$\hat h_\lambda^{(i_k)}\to \hat h_\lambda$ uniformly on $K$.
$\hat h_\lambda$ has the following properties: \begin{enumerate} \item $\hat h_\lambda(z)\to z$ as $\lambda\to 0$ uniformly in $z\in K$. \item For each $j\in \{0,1,\cdots,q-1\}$ let $B_j\subset \Omega_j$ be the disk specified in the beginning of Section~\ref{sect:ext} (i.e., such that $a_j\in \partial B_j$ and the direction from the point $a_j$ to the center of $B_j$ is an attracting direction of $g^q$ at the neutral fixed point $a_j$). Then for every $a\in \cup_{j}\overline{B_j}$ there exists $D\hat{h}_\lambda(a)$ such that
$\forall \epsilon>0$, $\exists \delta>0$ so that for each $j$ and all $a, z\in \overline{B_j}$,
$$|z-a|<\delta \implies \Big| \frac{\hat h_\lambda(z)-\hat h_\lambda(a)}{z-a} - D\hat{h}_\lambda(a) \Big| <\epsilon.$$ \item Moreover, $D\hat{h}_\lambda(a)\to 1$ as $\lambda\to 0$ uniformly in $a\in \cup_j\overline{B_j}$. \end{enumerate} \end{prop}
\begin{proof} Since $\{h^{(i)}_\lambda\}_i$ is a uniformly bounded sequence of holomorphic motions, by \cite{BersRoyden}, Theorem 1 and Corollary 2, there is $\epsilon_*>0$ and $K_*\in (1,3/2)$ such that every $h_\lambda^i$ for $\lambda\in \mathbb{D}_{\epsilon_*}$, $i\ge0$ is $K_*$-quasiconformal and moreover $\{h^{(i)}_\lambda(z)\}_{i\ge 0}$ is equicontinuous in $(\lambda,z)\in \overline{\mathbb{D}_{\epsilon_*}}\times K$. Therefore, every sequence of $\hat h_\lambda^{(k)}(z)$ contains a subsequence which converges to some $\hat h_\lambda(z)$ uniformly in $(\lambda,z)\in \overline{\mathbb{D}_{\epsilon_*}}\times K$.
Since also $h_\lambda^{(i)}(z)\to z$ as $\lambda\to 0$ uniformly in $z,\lambda$, (1) follows for each such limit map $\hat h_\lambda$.
Let us prove (2)-(3) choosing also an appropriate subsequence. It is enough to prove (2)-(3) for each $j$. So let $j=0$ and we can assume $a_0=0$ and $B_0=B(-r,r)$ for $r>0$ small. Then Theorem~\ref{thm:diff-frac} applies.
Hence, $\forall \epsilon >0$ $\exists \delta>0$ so that for all $z_1,z_2,a\in \bar{B}_0$ and all $|\lambda|<\epsilon_*$ and all $i\ge0$:
$$|z_1-z_2|<\delta \implies \Big| \frac{h_\lambda^{(i)}(z_1)-h_\lambda^{(i)}(a)}{z_1-a} - \frac{h_\lambda^{(i)}(z_2)-h_\lambda^{(i)}(a)}{z_2-a} \Big| <\epsilon . $$ Taking the limit of $z_2\to a$, the previous statement implies that there is $Dh_\lambda^{(i)}(a)$ such that
$\forall \epsilon >0$ $\exists \delta>0$ such that
for all $ |\lambda|<\epsilon_*$, $z,a\in \bar{B}_0$, $a\ne z$ and such that $|z-a|<\delta$,
$$ \Big| \frac{h_\lambda^{(i)}(z)-h_\lambda^{(i)}(a)}{z-a} - Dh_\lambda^{(i)}(a) \Big| <\epsilon . $$ Since $$\frac{\hat{h}_\lambda^{(k)}(z)-\hat{h}_\lambda^{(k)}(a)}{z-a} = \dfrac{1}{k} \sum_{i=0}^{k-1} \frac{h_\lambda^{(i)}(z)-h_\lambda^{(i)}(a)}{z-a}$$
$|z-a|<\delta$ implies
$$\Big| \frac{\hat{h}_\lambda^{(k)}(z)-\hat{h}_\lambda^{(k)}(a)}{z-a} -\frac{1}{k} \sum_{i=0}^{k-1} Dh^{(i)}_\lambda(a) \Big|
=\Big| \frac{1}{k} \sum_{i=0}^{k-1} \frac{h_\lambda^{(i)}(z)-h_\lambda^{(i)}(a)}{z-a} -\frac{1}{k} \sum_{i=0}^{k-1} Dh^{(i)}_\lambda(a) \Big|$$
$$\le \frac{1}{k} \sum_{i=0}^{k-1} \Big| \frac{h_\lambda^{(i)}(z)-h_\lambda^{(i)}(a)}{z-a} - Dh^{(i)}_\lambda(a) \Big| < \epsilon.$$ It follows that there exist a subsequence $i_k\to \infty$ and $\hat{h}_\lambda$ so that
$\hat h_\lambda^{(i_k)}\to \hat h_\lambda$ uniformly in $\bar{B}_0$ and moreover that
there exists $D\hat{h}_\lambda(a)$ such that
$\forall \epsilon>0$, $\exists \delta>0$ so that $\forall a, z\in \overline{B}_0$,
$$|z-a|<\delta \implies \Big| \frac{\hat{h}_\lambda(z)-\hat{h}_\lambda(a)}{z-a} - D\hat{h}_\lambda(a) \Big| <\epsilon.$$
\noindent {\bf Claim:} $D\hat{h}_\lambda(a)\to 1$ as $\lambda\to 0$ uniformly in $a\in \bar{B}_0$. {\bf Proof of Claim:} It is enough to show that $Dh^{(i)}_\lambda(a)\to 1$ as $\lambda\to 0$ uniformly in $a\in \bar{B}_0$ and $i\ge 0$.
To this end, consider sequences $\lambda_n\to 0$, $i_n\to \infty$ and $a_n\in \bar{B}_0$ such that
$Dh^{i_n}_{\lambda_n} (a_n)$ converges to some $d$.
Then $\forall \epsilon>0, \exists \delta>0$ so that for $z\in \bar{B}_0$,
$$|z-a_n|<\delta \implies
\Big \vert \dfrac{ h^{i_n}_{\lambda_n}(z)-h^{i_n}_{\lambda_n}(a_n)}{z-a_n} - d \Big \vert <\epsilon$$ However, uniformly in $z\in \bar{B}_0$ and $i_n$,
$$h^{i_n}_{\lambda_n} (z)\to z \mbox{ as } \lambda_n\to 0.$$
It follows that $|1-d|<\epsilon$. Since $\epsilon>0$ is arbitrary, $d=1$. \end{proof}
In the next theorem we will use the argument principle to show that $\hat{h}_\lambda$ is a holomorphic motion over some $\mathbb{D}_{\hat\epsilon_0}$. Given $\alpha\in (0,\pi/2)$ and $r>0$, let $S_{\alpha,r}$ be the sector
$$S_{\alpha,r}=\{\rho e^{i\psi}: \rho<r, |\psi|<\alpha\}.$$
\begin{prop}\label{Hmot} There exist $r>0$, $\alpha\in (0,\pi/2)$ and $\hat\epsilon_0>0$ such that $\hat h_\lambda$ is a holomorphic motion over $\mathbb{D}_{\hat\epsilon_0}$ of a subset $\hat\Omega:=\Omega(r,\alpha)=\cup_{j=0}^{q-1}\Omega_j(r,\alpha)$ of $\Omega$ where $$\Omega_j(r,\alpha)=\{x\in \Omega_j: \mathrm{dist}(x,\partial \Omega_j)>r/2\}\cup (a_j+ e_j S_{\alpha,r})$$ and where $e_j$ is a unit vector such that $e_j$ is an attracting direction at the parabolic fixed point $a_j$ of $g^q$. \end{prop} \begin{proof} Since the closures of $\Omega_j$ are disjoint, it is enough to show the claim for each $\Omega_j$ separately. Let us prove it for $\Omega_0$ (for the other $j$'s the proof is similar). Making an affine change of coordinates, one can assume that $a_0=0$ and $e_0=-1$.
Given $t\in (0,\pi/2)$ let $L_{t}=\{-r+r e^{i\phi}, |\phi|\le t\}$ be an arc of the circle $|z+r|=r$. Now choose $r,\alpha, t$ small enough and find a Jordan curve $\gamma$ which consists of an arc $L_{t}$ which is completed by a simple curve $\gamma_0$ in $\Omega_0$ in such a way that the ``interior'' of $\gamma$ united with the point $0$ covers $\overline{\Omega_0(r,\alpha)}$. Shrinking $\alpha$ if necessary we can choose $\gamma_0$ so that the distance between $\gamma_0$ and $\overline{\Omega_0(r,\alpha)}$ is at least $r/5$. \marginpar{Add figure?}
Let us apply the previous Proposition~\ref{prop:Hproperties} twice as follows: (A) claims (2)-(3) with $a=0$ and $z\in L_{t}$ and the claim (1) with $z\in \gamma_0$ and, consequently, (B) claims
(2)-(3) with $a=0$ and $z\in e_0 S_{\alpha,r}$ and the claim (1) with $z\in \Omega_0(r,\alpha)\setminus e_0 S_{\alpha,r}$. Then there exists $\hat\epsilon_0>0$ such that for every $|\lambda|<\hat\epsilon_0$, (a) and, consequently, (b) hold:
(a) the curve $\gamma_{0,\lambda}:=\hat h_\lambda(\gamma_0)$ is contained in a $r/10$-neighborhood of $\gamma_0$ and $\hat h_\lambda(L_t)\subset \hat h_\lambda(0)\pm e^{i\pi/2}e_0 S_{\alpha/10,r/10}$,
(b) $\hat h_\lambda(\Omega_0(r,\alpha))\subset \hat{h}_\lambda(0)+\Omega_0(r-r/10, \alpha+\alpha/10)$.
It follows that, given $x\in \Omega_0(r,\alpha)$, for each $\lambda\in \mathbb{D}_{\hat\epsilon_0}$ the point $\hat h_\lambda(x)$ is enclosed by the curve $\gamma_\lambda:=\hat h_\lambda(\gamma)$. Hence, $$\frac{1}{2\pi i}\int_\gamma \frac{D\hat{h}_\lambda(z)}{\hat{h}_\lambda(z)-\hat{h}_\lambda(x)}dz=1,$$ that is, by the argument principle, $\hat{h}_\lambda(z)-\hat{h}_\lambda(x)$ has exactly one zero (namely $z=x$) enclosed by $\gamma$; hence for every such $x$, $\hat{h}_\lambda: \Omega_0(r,\alpha)\to \mathbb{C}$ takes the value $\hat{h}_\lambda(x)$ only at $x$. \end{proof} Note that since the set $P\setminus \hat\Omega$ is finite and $h_\lambda^{(i)}(z)$ are uniformly bounded, passing perhaps to a subsequence of $i_k$ one can assume that $\hat h_\lambda$ is a holomorphic motion of $P\cup \hat\Omega$. Let $\psi_\lambda$ be an extension of $\hat h_\lambda$ to a holomorphic motion of $K$.
The second, final step in proving Theorem~\ref{average} is almost identical to before.
Let $\psi^0_\lambda=\psi_\lambda$ and let $\psi^k_\lambda$, $k=1,2,\cdots$, be consecutive lifts. By the lifting property, they are all defined over some $\mathbb{D}_{\hat\epsilon}$ and are uniformly bounded. Moreover, $\psi^0_\lambda(z)$ is holomorphic in $z$ in the open set $\hat\Omega$. Hence, $\psi_\lambda^k$ is holomorphic in $z\in g^{-k}(\hat\Omega)$ and since the iterates of every point of $\Omega$ eventually enter $\hat\Omega$, the statement follows from the compactness of the family $\{\psi_\lambda^k\}_{k=0}^\infty$. This completes the proof of Theorem~\ref{average}.
\section{Asymptotically invariant holomorphic motions}\label{sect:inv}
In this section we will show that one can find a sequence of holomorphic motions $H^{(m)}_\lambda$ which are invariant of order $m$ as in statement (a) of the next theorem.
\begin{theorem}\label{thm:asymptinv1} Let $(g,G)_W$ be a local holomorphic deformation of $g$ and assume that $(g,G)_W$ has the lifting property of $P$, where as before $\overline{P}\subset U$ and $P=\{c_n\}_{n=1}^\infty$. Then for each $m\ge 1$ there exist $\epsilon_m>0$ and a holomorphic motion $H_\lambda^{(m)}$ of $P$ over $\mathbb D_{\epsilon_m}$ such that \begin{enumerate} \item[(a)] $H_\lambda^{(m)}$ is asymptotically invariant of order $m$: \begin{equation*} \begin{array}{rl} H_\lambda^{(m)}(c_1)&=c_1+\lambda+O(\lambda^2) \\ G^q_{H_\lambda^{(m)}(c_1)} &(H_\lambda^{(m)}(c_{qi+1}))=H_\lambda^{(m)}(c_{qi+1+q})+O(\lambda^{m+1}) \mbox{ for all }i\ge 0;\end{array} \end{equation*} \item[(b)] $H_\lambda^{(m)}$ extends to a holomorphic motion of $\Omega$ so that $H_\lambda^{(m)}(z)$ is holomorphic in $z\in \Omega$ for each $\lambda\in \mathbb D_{\epsilon_m}$; \item[(c)] $H_\lambda^{(m+1)}(c_{qi+1})-H_\lambda^{(m)}(c_{qi+1})=O(\lambda^{m+1})$, $i\ge0$.
\end{enumerate} \end{theorem} \begin{proofof}{Theorem~\ref{thm:asymptinv1}} \iffalse Suppose that $h_\lambda$ is a holomorphic motion of $P$ over $\mathbb{D}_\epsilon$. By the lifting property, there is a sequence $h_\lambda^{(k)}$, $k=0,1,\cdots$ of holomorphic motions of $P$ over $\mathbb{D}_\epsilon$ so that $h^{(0)}_\lambda=h_\lambda$ and $h^{(k+1)}_\lambda$ is the lift of $h^{(k)}_\lambda$, for each $k\ge 0$.
Let $H_\lambda$ be a (locally uniform) limit of $\hat{h}_\lambda^{(k_i)}$ as $k_i\to \infty$ and where $$\hat{h}_\lambda^{(k)}:= \dfrac{1}{k} \sum_{i=0}^{k-1} h_\lambda^{(i)}.$$ Note that $H_\lambda$ is not necessarily a holomorphic motion over the set $P$ (which in general has infinite cardinality). On the other hand,
for each $N\ge 1$ there exists $r_N>0$ so that $H_\lambda$ is a holomorphic motion of $\{c_n\}_{n=1}^N$ over $\mathbb{D}_{r_N}$. Therefore, for each $N\ge 2$ there exists a well-defined lift $\hat{H}_{N,\lambda}$ of $H_\lambda$ so that $\hat{H}_{N,\lambda}$ is a holomorphic motion of $\{c_n\}_{n=1}^{N-1}$ over some $\mathbb{D}_{r'_N}$
where $r'_N$ is a decreasing sequence of positive numbers. From the definition of lift, it follows that $\hat{H}_{N+1,\lambda}(c_n)=\hat{H}_{N,\lambda}(c_n)$, for $n=1,\cdots,N-1$ and $|\lambda|<r'_N$. Define $\hat H_\lambda(c_n):=\hat H_{n+1,\lambda}(c_n)$, for $n\ge 1$ and $|\lambda|<r'_{n+1}$. To complete the theorem we will need the following proposition.
\begin{prop}\label{mtom+1} Assume
$h_\lambda^{0}$ is asymptotically invariant of some order $m$, i.e.
$$h_\lambda^{(k+1)}(c_j)- h^{(k)}_\lambda(c_j)=O(\lambda^{m+1}), \ \ j=1,2,\cdots \mbox{ as }\lambda\to 0$$
for $k=0$ (hence, for all $k$).
Then $H_\lambda$ is asymptotically invariant of order $m+1$, i.e.
$$\hat{H}_\lambda(c_j)- H_\lambda(c_j)=O(\lambda^{m+2}), \ \ j=1,2,\cdots \mbox{ as }\lambda\to 0.$$
\end{prop}
\begin{proof} See \cite[Lemma 3.3]{LSvS1}.
\end{proof} \fi We proceed by induction on $m=1,2,\cdots$. First take $m=1$. Let $H_\lambda^{(1)}=H_\lambda$ and $\epsilon_1=\epsilon$, where $H_\lambda$ is a holomorphic motion of $K$ over $\mathbb{D}_\epsilon$ as in Theorem~\ref{average}. In particular, it is asymptotically invariant of order $1$. Then (a) and (b) for $m=1$ follow. Now apply Theorem~\ref{average} taking $h_\lambda=H_\lambda^{(1)}$. The resulting map $H_\lambda^{(2)}:=\mathcal{H}[H_\lambda^{(1)}]$ satisfies (c). Indeed, $\mathcal{H}$ is constructed in the two steps (1)-(2), where by Proposition~\ref{mtom+1} step (1) increases the order of asymptotic invariance by $1$, while step (2) preserves the order of invariance. The general induction step from $m$ to $m+1$ goes in the same way. \end{proofof}
\section{Conclusion of the proof of Theorem~\ref{thm:asymptinv1}: step (D)}\label{sec:stepD}
\fi
\section{Applications to transversality for complex families} \label{sec:complex}
In this section we will show that Theorems~\ref{thm:Main2} and \ref{thm:Main3} follow from the Main Theorem. First we show that the immediate basin of attraction of the attracting periodic cycle contains the singular value.
\iffalse \begin{remark}\marginpar{added remark} If $g\colon U_g\to V_g$ is as in Theorem~\ref{thm:Main2} then by assumption $g^{-1}(U_g)\subset U_g$. If $g=bf(z)$ is as in Theorem~\ref{thm:Main3} then by assumption $f\in \mathcal{E}$ (resp. $f\in \mathcal{E}_o$), and so $g\colon D\to V_b=bV$ has no singular values in $V_b\setminus \{0,b\}$ (resp. in $V\setminus \{0,\pm b\}$) and $b\in D_+$. By assumption (c) in the definition of $\mathcal{E},\mathcal{E}_o$, for any any $u\in D\setminus \{0\}$ with $u\ne b$, we have that $u/b\in V$. Since $u/b\notin \{0,1\}$ this implies $f^{-1}(u/b)\in D\setminus \{0\}$ when $f\in \mathcal{E}$ and so in this case $g^{-1}(D\setminus \{0,b\})\subset D\setminus \{0\}$. Similarly $g^{-1}(D\setminus \{0,\pm b\})\subset D\setminus \{0\}$ when $f\in \mathcal{E}_o$. Note that the assumption that $w\in D_+$ is used in this argument. If $D$ consists of just one component then by definition $D_+=D$. \end{remark} \fi
Let $\mathbb{R}_+=(0,\infty)$. \begin{prop} \label{prop:c1toa0}
Consider one of the following three situations: \begin{enumerate} \item [(i)] $g(z)=f(z)+c_1$ with $f\in\mathcal{F}$, $c_1\in\overline{D}$, and $\mathcal{O}$ is an attracting or parabolic cycle of $g$ with multiplier $\kappa\not=0$; \item [(ii)] $g(z)= c_1 f(z)$ with $f\in\mathcal{E}$, $c_1\in D_+\setminus \{0\}$ and $\mathcal{O}\subset D\setminus \{0\}$ is an attracting or parabolic cycle of $g$ with multiplier $\kappa\not=0$; \item [(iii)] $g(z)= c_1 f(z)$ with $f\in\mathcal{E}_o$, $c_1\in D_+\setminus \{0\}$ and $\mathcal{O}\subset D\setminus \{0\}$ is an attracting or parabolic cycle of $g$ with multiplier $\kappa\not=0$ and $\mathcal{O}\not=-\mathcal{O}$.
\end{enumerate}
Then \begin{enumerate} \item there is a simply connected open set $W$ with the following properties: \begin{itemize} \item $g^{kq}$ is univalent on $W$ for each $k\ge 1$; \item $g^{kq}$ converges uniformly on $W$ to a point in $\mathcal{O}$; \item $W\ni c_1$ in cases (i) and (ii), while $W\ni c_1$ or $W\ni -c_1$ in case (iii). \end{itemize} \item if $\mathcal O$ is a parabolic periodic orbit, then it is non-degenerate.
\end{enumerate} \end{prop}
\begin{proof} We shall prove this proposition by the classical Fatou argument. Let $E=\{c_1\}$ in cases (i) and (ii), and let $E=\{\pm c_1\}$ in case (iii).
Note that the assumption implies that $0\not\in \mathcal{O}$ in all cases. Take a Jordan disk $A_0\subset D\setminus \{0\}$ along with a univalent map $\varphi:A_0\to \mathbb{C}$ (a Koenigs or Fatou coordinate) so that $g^{q'}(A_0)\subset A_0$, where: in the attracting case, $a_0\in A_0$, $q'=q$, $\varphi(A_0)=\mathbb{D}_\rho$ for some $\rho>0$ and $\varphi\circ g^q=\kappa \varphi$ on $A_0$; in the parabolic case, $a_0\in \partial A_0$, $q'=rq$ for a minimal $r\ge 1$, $\varphi(A_0)=\{z: \Re(z)>M\}$ for some $M>0$ and $\varphi\circ g^{q'}=\varphi+1$ on $A_0$.
In each case, we observe that for any connected open set $B\subset D\setminus (E\cup\{0\})$, the map $g: g^{-1}(B)\to B$ is an unbranched covering and $g^{-1}(B)\subset D\setminus \{0\}$. Here we use that in cases (ii) and (iii) we have $c_1\in D_+$, since this implies, by assumption (c) in the definition of $\mathcal{E}$ and $\mathcal{E}_o$, that $D/c_1\subset V$.
Let $A_n$, $n=1,2,\ldots$, denote the component of $g^{-n}(A_0)$ with $A_{q'k}\supset A_0$ for each $k=1,2,\ldots$ and $g(A_n)\subset A_{n-1}$. There exists a minimal $N\ge 0$ such that $A_N\cap E\not=\emptyset$. Indeed, otherwise, since $A_0$ is a simply connected open set contained in $D\setminus (E\cup\{0\})$, we obtain from the observation above, by induction, that $A_n\subset D\setminus (E\cup\{0\})$ and that $g: A_n\to A_{n-1}$ is an unbranched covering for each $n\ge 1$. It follows that $g^{kq'}: A_{kq'} \to A_0$ is an unbranched covering, hence a conformal map, for each $k=1,2,\ldots$. Thus $\varphi$ extends to a univalent function from $A=\bigcup_{k=0}^\infty A_{kq'}$ onto $\mathbb{C}$ via the functional equation $\varphi(g^{q'}(z))=\kappa \varphi(z)$ or $\varphi(g^{q'}(z))=\varphi(z)+1$, which implies that $A=\mathbb{C}$, contradicting $E\cap A=\emptyset$. Taking $W=A_N$ completes the proof of (1).
Let us now prove (2). So assume that $\mathcal{O}$ is parabolic. In cases (i) and (ii), the argument above shows that each attracting petal around $\mathcal{O}$ intersects the orbit of $c_1$, so the cycle is non-degenerate. Assume now that we are in case (iii). Then either $c_1\in W$ or $-c_1\in W$. Since the map $g$ is odd and $\mathcal{O}\not=-\mathcal{O}$, only one of $c_1$ and $-c_1$ is contained in the basin of attraction of $\mathcal{O}$, and thus statement (2) holds for the same reason as before. \iffalse We are left to prove the statement (3) in case (iv) or (v) or case (i) but with $f$ and $c_1$ real. Note that if we are case (iv) or (v) but $c_1\not\in D_+$, then $c_1\in (0,c]$. Since
$g|[0,c]$ is increasing, $c_n=g^n(c)$ converges to an attracting or parabolic fixed point. It follows that $\mathcal{O}$ is this fixed point, and that $c_1$ is in the basin of $\mathcal{O}$. So it remains to prove (3) assuming that we are case (i)-(iii) but with $c_1$, $f$ real. In this case, the simply connected domain can be taken symmetric with respect to the real line. \fi \end{proof}
\begin{proof}[Proof of Theorems~\ref{thm:Main2} and~\ref{thm:Main3}] The proposition above implies that $g$ (restricted to a suitable small domain $U$) is a marked map w.r.t. $c_1$. Choosing $r_0>0$ small enough, $P_{r_0}$ is a compact subset of $U$. Then, as is shown in \cite{LSvS1,LSvS1a}, the lifting property~\ref{def:liftingproperty} holds. Indeed, let $h^{(0)}_\lambda:=h_\lambda \colon P_{r_0}\to \mathbb{C}$, $\lambda\in \mathbb{D}_\epsilon$, be a holomorphic motion. As is shown in \cite{LSvS1,LSvS1a}, one can define a sequence of holomorphic motions $h^{(n)}_\lambda$ as in (2) of Definition~\ref{def:liftingproperty} so that properties (1) and (3) also hold. So $(g,G)_W$ has the lifting property for the set $P_{r_0}$. In particular, the first parts of Theorems~\ref{thm:Main2}--\ref{thm:Main3} follow from the Main Theorem. \end{proof}
\section{Application to families of real maps}\label{sec:realmaps}
In this section, we shall prove Theorem~\ref{thm:eremenko}. To this end, we shall need the result in~\cite{LSvS1,LSvS1a} to determine the sign of $\kappa'$ and $Q$ in the transversality inequalities in Theorems~\ref{thm:Main2} and~\ref{thm:Main3}.
Throughout, let $f_t$ be a family as in the assumption of Theorem~\ref{thm:eremenko}, case (i) or (ii); case (iii) can easily be reduced to case (i). Put $c=0$ in case (i), so that $c$ is the common turning point of the maps $f_t$. For $t< c$, $f_t$ has no periodic point of period greater than one, so in the following we shall mainly be concerned with $$t\in J_+:=(c,\infty)\cap J.$$ The following is an immediate consequence of Proposition~\ref{prop:c1toa0}. \begin{prop}\label{prop:realc12a0} Suppose that $f_{c_1}$, $c_1\in J_+$, has an attracting or parabolic cycle $\mathcal{O}\subset J$ and let $a_0$ be the rightmost point in $\mathcal{O}$. Then $f_{c_1}^{kq}$ is monotone on $[a_0,c_1]$ for all $k\ge 0$ and $f_{c_1}^{kq}$ converges uniformly on $[a_0, c_1]$ to $a_0$ as $k\to\infty$. \end{prop}
\begin{proof} Let $\tilde{f}$ denote the complex extension in $\mathcal{F}$ or $\mathcal{E}\cup\mathcal{E}_o$. Since $\tilde{f}$ is real symmetric, the simply connected domain $W$ as claimed in Proposition~\ref{prop:c1toa0} can be taken to be symmetric with respect to $\mathbb{R}$. If $c_1\in W$, then the statement follows. If $c_1\not\in W$, then $f\in\mathcal{E}_{o,u}$, $-c_1\in W$ and there exists $a_0\in \mathcal{O}$ such that $f^{kq}$ converges to $a_0$ on the interval $K$ bounded by $a_0$ and $-c_1$. However, $a_0>0$ and $-c_1<0$ so $K\ni 0$. Since $f(0)=0$, this is absurd! \end{proof}
\begin{prop}\label{prop:kappa=0} Suppose that $g:=f_{c_1}$ has a cycle $\mathcal{O}$ with multiplier $0$, and let $\kappa(t)$ denote the multiplier of the attracting cycle of $f_t$ for $t$ close to $c_1$. Then there exists $\varepsilon>0$ such that $\kappa(t)> 0$ for $t\in (c_1-\varepsilon, c_1)$ and $\kappa(t)<0$ for $t\in (c_1, c_1+\varepsilon)$. \end{prop} \begin{proof} In~\cite{LSvS1} and \cite{LSvS1a}, the following inequality was proved:
\begin{equation}\frac{ \left.\frac{d}{d t} f_t^q(c)\right|_{t=c_1}}{Dg^{q-1}(c_1)}>0 . \label{eq:postrans22} \end{equation} Let us deduce the conclusion of the proposition. Let $a(t)$ denote the fixed point of $f_t^q$ near $c$ for $t$ close to $c_1$. Assume for definiteness that $Dg^{q-1}(c_1)>0$. Then it follows that there is $\varepsilon>0$ such that $f_t^q(c)>c$ for $t\in (c_1,c_1+\varepsilon)$ and $f^q_t(c)<c$ for $t\in (c_1-\varepsilon, c_1)$. Thus $a(t)>c$ for $t\in (c_1,c_1+\varepsilon)$, and $a(t)<c$ for $t\in (c_1-\varepsilon, c_1)$. Reducing $\varepsilon$ if necessary, $Df_t^{q-1}(f_t(a(t)))>0$, so $\kappa(t)= Df_t(a(t))\, Df_t^{q-1}(f_t(a(t)))$ has the claimed sign. \end{proof}
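As a purely illustrative numerical check (not part of the proof), consider the family $f_t(x)=t\sin x$ of Figure~\ref{fig1}: its positive fixed point is superattracting at $t=\pi/2$ (where the fixed point coincides with the critical point $\pi/2$), and the multiplier changes sign from $+$ to $-$ there, as the proposition predicts. The script below is our own sketch; the Newton iteration and the sample parameters are illustrative choices.

```python
import math

def kappa(t):
    # Multiplier kappa(t) = t*cos(a(t)) at the positive fixed point a(t) of
    # x -> t*sin(x); the fixed point is located by Newton's method applied
    # to F(a) = t*sin(a) - a, starting near a = 2.
    a = 2.0
    for _ in range(60):
        a -= (t * math.sin(a) - a) / (t * math.cos(a) - 1.0)
    return t * math.cos(a)

# The multiplier vanishes at t = pi/2 and changes sign from + to -.
left = kappa(math.pi / 2 - 0.05)
right = kappa(math.pi / 2 + 0.05)
```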
We say that an open subinterval $J_1$ of $J$ is an {\em attracting window} if for each $t\in J_1$, $f_t$ has an attracting periodic cycle $\mathcal{O}_t$ with multiplier $\kappa(t)\in (-1,1)$. By the Implicit Function Theorem, $\mathcal{O}_t$ and $\kappa(t)$ depend on $t$ in a $C^1$ way. In particular, the cycles $\mathcal{O}_t$ all have the same period, which is called the period of the attracting window.
\begin{lemma}\label{lem:attrwin} Let $J_1$ be an attracting window of period $q\ge 2$ and let $\kappa(t)$ be the corresponding multiplier function. Then $\kappa(t)$ is strictly decreasing in $J_1$. \end{lemma} \begin{proof} Assume without loss of generality that $J_1=(t_-, t_+)$ is a maximal attracting window. First, by Proposition~\ref{prop:kappa=0}, $\kappa$ can have at most one zero in $J_1$. Indeed, otherwise, let $c_1<\hat{c}_1$ be two consecutive zeros; then $\kappa<0$ in a right neighborhood of $c_1$ and $\kappa>0$ in a left neighborhood of $\hat{c}_1$, so by the intermediate value theorem there would be another zero of $\kappa$ between $c_1$ and $\hat{c}_1$, a contradiction.
Now let us assume that $\kappa(t_0)=0$ for some $t_0\in J_1$. Then $\kappa(t)>0$ for $t\in (t_-, t_0)$ and $\kappa(t)<0$ for $t\in (t_0, t_+)$. By Theorems~\ref{thm:Main2} and~\ref{thm:Main3}, $\kappa'(t)\not=0$ for each $t\not=t_0$. This forces $\kappa'(t)<0$ for all $t\in J_1\setminus \{t_0\}$, so $\kappa(t)$ is strictly decreasing in $J_1$.
Finally, let us prove that $\kappa$ does have a zero in $J_1$. Indeed, it is easy to see that $t_-,t_+\in J$, so by the maximality of $J_1$, we have $\kappa(t)\to \pm 1$ as $t\to t_{\pm}$. Since $\kappa$ is monotone in $J_1$ (by Theorems~\ref{thm:Main2} and~\ref{thm:Main3}), the intermediate value theorem implies that $\kappa$ has a zero.
\end{proof}
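Lemma~\ref{lem:attrwin} concerns windows of period $q\ge 2$, but the same strict decrease of the multiplier can be observed numerically on the period-one branch of the family $f_t(x)=t\sin x$ from Figure~\ref{fig1}, whose attracting window is roughly $t\in(1,2.26)$. The following sketch (our own illustration; the sampled parameters are arbitrary) locates the fixed point by Newton's method and evaluates the multiplier.

```python
import math

def fixed_point(t):
    # Newton's method for the positive fixed point a(t) of f_t(x) = t*sin(x),
    # i.e. the root of F(a) = t*sin(a) - a near a = 2.
    a = 2.0
    for _ in range(60):
        a -= (t * math.sin(a) - a) / (t * math.cos(a) - 1.0)
    return a

def multiplier(t):
    # kappa(t) = Df_t(a(t)) = t*cos(a(t)).
    return t * math.cos(fixed_point(t))

# On the attracting window the multiplier decreases strictly from +1 towards -1.
kappas = [multiplier(t) for t in (1.2, 1.5, 1.8, 2.1)]
```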
\begin{prop}\label{prop:saddlenode} Assume that for some $c_1\in J_+$, $g=f_{c_1}$ has a parabolic cycle with multiplier $1$. Let $q$ be the period of the cycle and let $a_0$ be the rightmost point in the cycle.
Then there exist $\varepsilon>0$ and $\delta>0$ such that for each $t\in (c_1, c_1+\varepsilon)$, $f_t$ has an attracting cycle of period $q$ containing a point $a_0(t)$ with $a_0(t)\to a_0$ as $t\to c_1$, and for each $t\in (c_1-\varepsilon, c_1)$ and $x\in [a_0-\delta, a_0+\delta]$, $f_t^q(x)<x$. Equivalently, for $a_0:=\lim_{k\to\infty} g^{kq}(c_1)$, we have $$Q(a_0):=\dfrac{d}{dt} f_t^q(a_0)\Big\vert_{t=c_1}>0.$$ \end{prop}
\begin{proof} By Proposition~\ref{prop:realc12a0}, $c_{kq+1}$ decreases to $a_0$ and $D^2 g^q(a_0)<0$. By Theorems~\ref{thm:Main2} and~\ref{thm:Main3}, $Q(a_0)\not=0$. So $f_t^q(x)$ displays a saddle-node bifurcation at $(a_0, c_1)$. It is well-known, see for example~\cite[Proposition 7.7.5]{BS}, that there exist $\varepsilon>0$ and $\delta>0$ such that for each $t$ in $$J_1=\left\{\begin{array}{ll} [c_1, c_1+\varepsilon) &\mbox{ if } Q(a_0)>0,\\ (c_1-\varepsilon, c_1] &\mbox{ if } Q(a_0)<0, \end{array} \right.$$ $f_t^q$ has two fixed points $a_-(t)$ and $a_+(t)$, depending continuously on $t$, with $a_-(c_1)=a_+(c_1)=a_0$, and with $0<Df_t^q(a_-(t))<1$ and $Df_t^q(a_+(t))>1$, while for $t$ in $$J_2=\left\{\begin{array}{ll} (c_1-\varepsilon, c_1) &\mbox{ if } Q(a_0)>0,\\ (c_1, c_1+\varepsilon) &\mbox{ if } Q(a_0)<0, \end{array} \right.$$
$f_t^q(x)<x$ for each $|x-a_0|\le \delta$. So $J_1^o$ is an attracting window, and by Lemma~\ref{lem:attrwin}, the multiplier function $\kappa(t)=Df_t^q(a_-(t))$ is monotone decreasing. Thus $J_1=[c_1, c_1+\varepsilon)$ and $Q(a_0)>0$. \end{proof}
\begin{prop}\label{prop:pd} Assume that $f_{c_1}$ has a parabolic cycle $\mathcal{O}$ of period $q$ and with multiplier $-1$. Then for any positive integer $N\ge 1$, there exist $\varepsilon>0$ and $\delta>0$ such that for each $t\in (c_1-\varepsilon, c_1+\varepsilon)$, $f_t^{2Nq}$ has exactly three fixed points in the $\delta$-neighborhood of $\mathcal{O}$, two of them hyperbolic attracting and one hyperbolic repelling. \end{prop} \begin{proof} Note that the assumption implies that $c_1\in J_+$. For the case $N=1$, this is a well-known fact about the period doubling bifurcation. Indeed, by Theorems~\ref{thm:Main2} and~\ref{thm:Main3}, $\kappa'(c_1)\not=0$, where $\kappa(t)$ is the multiplier of the periodic orbit of $f_t$ of period $q$ near $\mathcal{O}$ for $t$ close to $c_1$. We must have $\kappa'(c_1)<0$, for otherwise a small right-sided neighborhood of $c_1$ would be an attracting window on which the multiplier function is increasing, which is ruled out by Lemma~\ref{lem:attrwin}. On the other hand, by Proposition~\ref{prop:c1toa0}, $D^3 g^{2q}(a)\not=0$ for each $a\in \mathcal{O}$. Thus the statement follows, for example, by~\cite[Proposition 7.7.6]{BS}.
The case of general $N$ follows similarly: $D^3 g^{2qN}(a)\not=0$ for $a\in\mathcal{O}$, and, reducing $\varepsilon$ and $\delta$ if necessary, $f_t^{2Nq}$ has at most three fixed points in the $\delta$-neighborhood of $\mathcal{O}$ for $t\in (c_1-\varepsilon, c_1+\varepsilon)$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:eremenko}] Let $\kappa$ be the multiplier of the $f_{t_*}$-cycle $\mathcal{O}$. First let us assume $\kappa\not=1$. Then by the Implicit Function Theorem, there is a maximal $T$ such that $J_1:=[c_1, T)\subset J$ and each $f_t$, $t\in J_1$, has a periodic cycle $\mathcal{O}_t$ which depends continuously on $t$, with $\mathcal{O}_{c_1}=\mathcal{O}$ and with multiplier $\kappa(t)\not=1$. We shall prove that $T$ is the right endpoint of $J$. Arguing by contradiction, assume that this is not the case. By the maximality of $T$, $\lim_{t\to T} \kappa(t)=1$. Since $\kappa(t)\not=1$ on $J_1$, by the intermediate value theorem we are in one of the following cases:
{\em Case 1.} $\kappa(t)>1$ for all $t\in J_1$. Choose a sequence $t_n\nearrow T$ such that $\mathcal{O}_{t_n}$ converges to a periodic orbit $\mathcal{O}_T$ of $f_T$. Let $q$ denote the period of $\mathcal{O}$ and $q'$ the period of $\mathcal{O}_T$; then $q=q' m$ for some positive integer $m$.
If $\mathcal{O}_T$ has multiplier $1$, then by Proposition~\ref{prop:saddlenode}, there exist $\varepsilon>0$ and $\delta>0$ such that $f_{t}^{q'}(x)<x$ for each $t\in (T-\varepsilon, T)$ and $|x-a_0(T)|<\delta$, where $a_0(T)$ denotes the rightmost point in the parabolic cycle $\mathcal{O}_T$. By continuity, there exist $\varepsilon',\delta'>0$ such that $f_t^q(x)<x$ when $t\in (T-\varepsilon',T)$ and $|x-a_0(T)|<\delta'$. This contradicts the construction of $\mathcal{O}_T$.
Assume now that $\mathcal{O}_T$ has multiplier $-1$. Then $q=2q'N$ for some integer $N\ge 1$. By Proposition~\ref{prop:pd}, for each $n$ large enough, $f_{t_n}^{q}$ has exactly three fixed points near $\mathcal{O}_T$, two of which are attracting. However, $\mathcal{O}_{t_n}$ contains at least two repelling fixed points of $f_{t_n}^q$, a contradiction!
{\em Case 2.} $\kappa(t)<1$ for all $t\in J_1$. Then there exists $\varepsilon>0$ such that $(T-\varepsilon, T)$ is an attracting window and by Lemma~\ref{lem:attrwin}, $\kappa(t)$ is strictly decreasing, thus contradicting the maximality of $T$.
So now let us consider the case $\kappa=1$. By Proposition~\ref{prop:saddlenode}, there is $\varepsilon>0$ such that $(c_1, c_1+\varepsilon)$ is an attracting window. By what we have proved above, the attracting periodic orbit admits a continuation up to the right endpoint of $J$. \end{proof} \begin{remark} Figure~\ref{fig1} shows the bifurcation diagram for the family $f_{c_1}(x)=c_1 \sin(x)$, $c_1\in \mathbb{R}$. It also shows that for this family there are degenerate parabolic bifurcations, which occur when $\mathcal{O}=-\mathcal{O}$. \end{remark}
\iffalse \begin{remark} Let us prove the last assertion in Theorems~\ref{thm:Main2} and \ref{thm:Main3}. So assume that $g$ is real, let $c$ be its turning point, let $c_1=g(c)$ and $g=G_{c_1}$. Let us first consider the case that $g^q(c)=c$. Let $a(w)$ be the analytic continuation of the fixed point of $G_w^q$ with $a(c_1)=c$, and let $\kappa(w)$ be its multiplier. If $g$ has a minimum at $c$, $x\mapsto g^q(x)$ has a local maximum (minimum) at $c$ if $Df^{q-1}(c_1)<0$ (resp. $>0$), and reversely when $g$ has a maximum at $c$. It was proved in
From all this it follows that
$\mathbb{R}\ni w \to \kappa(w)$ is strictly increasing near $w=c_1$ if $g$ has a local minimum, and is strictly decreasing near $w=c_1$ if $g$ has a local maximum. By the Main Theorem $\kappa'(w)\ne 0$ when $\kappa\in [-1,1)\setminus \{0\}$. It follows that $\kappa'(w)>0$ (resp. $\kappa'(w)<0$) whenever $\kappa\in [-1,1)\setminus \{0\}$ if $g$ has a local minimum (resp. maximum) at $c$. As, by \cite{LSvS2}, the topological entropy is decreasing (resp. increasing) in $w$ when $g$ has a local minimum (resp. maximum) at $c$, the assertions about the sign of $\kappa'$ in Theorems ~\ref{thm:Main2} or \ref{thm:Main3} follow. The assertion about the sign of $Q(a_0)$ when $\kappa=1$ follows from the previous section. \end{remark} \fi
\begin{figure}
\caption{\small The bifurcation diagram of $f_c(x)=c\sin(x)$. For each $c\in [-10,10]$, the last 100 iterates of the set $\{f^k_c(\pi/2)\}_{k=0}^{1000}$ are drawn (in the vertical direction). The bifurcation points for which only {\lq}one half of a parabola{\rq} is visible correspond to pitchfork bifurcations (where both critical values are attracted to a single parabolic periodic point), and those where two halves are visible correspond to period doubling bifurcations. Note that $\sin \in \mathcal{E}_o$, where $D=D_+=V=\mathbb{C}$.}
\label{fig1}
\end{figure}
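The data behind a diagram like Figure~\ref{fig1} can be regenerated with a few lines of code; the sketch below (our own minimal version, without the plotting) computes the orbit tails that are drawn vertically, and the two sample parameters sit on either side of the first period doubling of the positive fixed-point branch.

```python
import math

def orbit_tail(c, x0=math.pi / 2, total=1000, tail=100):
    # Iterate f_c(x) = c*sin(x) starting at the critical point pi/2 and
    # return the last `tail` of `total` iterates, as plotted in Figure 1.
    x = x0
    out = []
    for k in range(total):
        x = c * math.sin(x)
        if k >= total - tail:
            out.append(x)
    return out

# At c = 2.0 the tail collapses onto an attracting fixed point; at c = 2.4,
# past the first period doubling, it oscillates on an attracting 2-cycle.
spread_fixed = max(orbit_tail(2.0)) - min(orbit_tail(2.0))
spread_doubled = max(orbit_tail(2.4)) - min(orbit_tail(2.4))
```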
\appendix
\section{In the real parabolic case $\frac{\partial G^q_w}{\partial w} (a_0)\big\vert_{w=c_1}\ge 0$ holds}
We assume that $g$ is a real marked map w.r.t. $c_1$ such that the sequence $c_{n}:=g^{n-1}(c_1)$, $n=1,2,\dots$, tends to a non-degenerate parabolic periodic orbit $\mathcal O:=\{a_0,\dots,a_{q-1}\}$ with multiplier $+1$. Consider a holomorphic deformation $(g,G)_W$ and assume that $(g,G)_W$ has the lifting property for the set $P=\{c_n\}_{n\ge 1}$
(notice that this condition is local and weaker than the one of the Main Theorem).
Under these assumptions we will show that
$Q(c_1)=\frac{\partial G^q_w}{\partial w}\big\vert_{w=c_1} (a_0)\ge 0$.
This Appendix will also motivate the choice of the particular vector field along $P$ appearing in Section~\ref{sec:ThmA}.
From~(\ref{eq:partGL}) it follows that the second equality holds in
$$\Delta(z):= \sum_{j=1}^q \dfrac{L(g^{j-1}(z))}{Dg^j(z)}=\dfrac{1}{Dg^q(z)}\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c_1} (z).$$ Since $Dg^q(a_0)=1$, $$\Delta(a_0)=\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c_1} (a_0) = \sum_{j=1}^q \dfrac{L(g^{j-1}(a_0))}{Dg^j(a_0)} = \sum_{j=0}^{q-1} \dfrac{L(g^j(a_0))}{Dg^{j+1}(a_0)}.$$
\begin{prop}\label{prop:ge0} Assume that $(g,G)_W$ has the lifting property for the set $P$.
Moreover, assume that $g$ is real, has a periodic point $a_0$ of period $q$ with $Dg^q(a_0)=1$, $D^2g^q(a_0)\neq 0$, so that $c_1$ is in the basin of $a_0$ and so that $Dg^q(c_{kq+1})>0$ for each $k\ge 0$. Then $$\dfrac{\partial G^q_w}{\partial w}\Big\vert_{w=c_1} (a_0)\ge 0.$$ \end{prop} \begin{proofof}{Proposition~\ref{prop:ge0}} By Proposition~\ref{prop:32}, see also \cite{Le1}, \begin{equation} D(\rho):=1+\sum_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c_1)} >0\mbox{ for all } 0<\rho < 1. \label{drho} \end{equation}
Arguing by contradiction, assume that $\Delta(a_0)<0$. Then there exists $\rho_0\in (0,1)$ and $k_0\ge 1$ such that for each $\rho_0<\rho<1$ and each $k\ge k_0$, we have \begin{equation}\label{eqn:Bkrho} B_k(\rho):=\sum_{j=1}^q \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})}< \Delta(a_0)/2. \end{equation}
By assumption, $M_k:=k^2 Dg^{kq}(c_1)>0$ holds for all $k\ge 0$. By the Leau-Fatou Flower Theorem (see Appendix B), $$Dg^q(c_{kq+1})=1-2/k +O(k^{-2}\log k),$$ so $M:=\lim_{k\to\infty} M_k>0$ exists. Enlarging $k_0$ if necessary, we have $M/2<M_k<2M$ for all $k\ge k_0$. Then \begin{align*} D(\rho) &= 1+\mathlarger{\sum}_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c_1)} = 1+ \mathlarger{\sum}_{k=0}^\infty \mathlarger{\sum}_{j=1}^{q}\dfrac{\rho^{qk+j}L(c_{qk+j})}{Dg^{qk+j}(c_1)}\\
&=1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[ \frac{\rho^{qk}}{Dg^{qk}(c_1)} \sum_{j=1}^{q} \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})} \right] =1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \frac{k^2\rho^{qk}}{M_k} B_k(\rho)\\
& \le 1+\mathlarger{\sum}_{k=0}^{k_0} \frac{k^2\rho^{qk}}{M_k} B_k(\rho) +\frac{\Delta(a_0)}{4M} \sum_{k=k_0+1}^\infty k^2 \rho^{qk}, \end{align*} provided that $\rho_0<\rho<1$. \iffalse
For each $j$, since $Dg^{qk+j}(c_1)= Dg^{j}(g^{qk}(c_1)) Dg^{qk}(c_1) $ there exists $B_j\in \mathbb{R}$ such that $$Dg^{kq+j}(c_1)=Dg^{j}(a_0)\left(1+\dfrac{B_j}{k}+ o(1/k)\right)
\left(\dfrac{M}{k^2} + o(1/k^2)\right).$$
Let Using this and $L(c_{qk+j})=L(a_{j-1})+O(1/k)$, we obtain
\begin{equation*}
\begin{array}{rl} D(\rho) &= 1+\mathlarger{\sum}_{n=1}^\infty \dfrac{\rho^nL(c_n)}{Dg^n(c)} = 1+ \mathlarger{\sum}_{j=1}^{q} \mathlarger{\sum}_{k=0}^\infty \dfrac{\rho^{qk+j}L(c_{qk+j})}{Dg^{qk+j}(c)}=\\
& \\
&=1+\mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[ \frac{\rho^{qk}}{Dg^{qk}(c_1)} \sum_{j=1}^{q} \dfrac{\rho^j L(c_{qk+j})}{Dg^{j}(c_{qk+1})} \right] \\
&=1+ \mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[\dfrac{k^2\rho^{qk}}{M_k} \underbrace{\sum_{j=1}^{q} \dfrac{\rho^j (L(a_{j-1})+O(1/k))}{Dg^{j}(a_0)}}_{B_k(\rho)} \right]=\\
&=1+ \mathlarger{ \mathlarger{\sum}}_{k=0}^\infty \left[\dfrac{k^2\rho^{qk}}{\left(M+ o(1)\right) } \underbrace{\left( \Delta(a_0)+ \mathlarger{\sum}_{j=1}^{q} \dfrac{(\rho^j-1) L(a_{j-1})}{Dg^{j}(a_0)} + \rho^jO(1/k) \right)}_{=B_k(\rho)} \right].
\end{array} \end{equation*}
Now assume by contradiction that $\Delta(a_0)<0$. Then there exists $k_0$ and $\rho_0\in (0,1)$ so that
$B_k(\rho)\le \Delta(a_0)/2<0$ for $k\ge k_0$ for all $\rho\in [\rho_0,1]$. Moreover, $\sup B_k \le B_*$.
\fi Since $\sum_{k=k_0+1}^\infty k^2 \rho^{qk} \to \infty$ as $\rho\nearrow 1$, this implies that $\liminf_{\rho \nearrow 1} D(\rho)=-\infty$, contradicting the fact that $D(\rho)>0$ for all $\rho \in (0,1)$.
\end{proofof}
\subsection{A vector field $v_\rho$ along $P$ so that $\rho \mathcal A v_\rho = v_\rho$}
\begin{prop}\label{prop:32} Let $g$ be a marked map w.r.t. $c_1$, and assume that $P=\{c_n\}_{n\ge 1}$ converges to a non-degenerate periodic orbit $\mathcal O$ of $g$ with multiplier $+1$. Consider a holomorphic deformation $(g,G)_W$.
Assume that $(g,G)_W$ has the lifting property for the set $P$.
Then for all $|\rho|<1$ one has $$1+\sum_{n\ge 1} \dfrac{\rho^nL(c_n)} {Dg^n(c_1)}\ne 0$$
where $L(x)=\frac{\partial G_w}{\partial w}|_{w=c_1}(x)$. \end{prop} \begin{proof} Let us for the moment assume that $h_\lambda(c_i)=c_i+v_i\lambda + O(\lambda^2)$ defines a holomorphic motion of $P$.
Then its lift $\hat h_\lambda(c_i)=c_i+\hat v_i \lambda + O(\lambda^2)$ is defined for $|\lambda|$ small and $$G_{h_\lambda(c_1)}(\hat h_\lambda(c_i))=h_\lambda(c_{i+1})=c_{i+1}+v_{i+1}\lambda+O(\lambda^2).$$ Writing $D_i=Dg(c_i)$ and $L_i=L(c_i)$, we obtain $$L_i v_1 + D_i\hat v_i = v_{i+1}, \qquad i\ge 1.$$
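For the reader's convenience, this coefficient identity is obtained by expanding the lift equation to first order in $\lambda$:

```latex
\begin{align*}
G_{h_\lambda(c_1)}\bigl(\hat h_\lambda(c_i)\bigr)
 &= G_{c_1+v_1\lambda+O(\lambda^2)}\bigl(c_i+\hat v_i\lambda+O(\lambda^2)\bigr)\\
 &= g(c_i)+Dg(c_i)\,\hat v_i\,\lambda
    +\frac{\partial G_w}{\partial w}\Big|_{w=c_1}(c_i)\,v_1\,\lambda+O(\lambda^2)\\
 &= c_{i+1}+\bigl(Dg(c_i)\,\hat v_i+L(c_i)\,v_1\bigr)\lambda+O(\lambda^2),
\end{align*}
```

and comparing the coefficient of $\lambda$ with $h_\lambda(c_{i+1})=c_{i+1}+v_{i+1}\lambda+O(\lambda^2)$ gives the relation.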
Taking $v=(v_1,v_2,\dots)$ and $\hat v=(\hat v_1,\hat v_2,\dots)$ we have that $\hat v=\mathcal{A}v$ where $$\mathcal{A}=\left( \begin{array}{cccccc}
-L_1/D_1 & 1/D_1 & 0 & \dots & \dots & \dots \\
-L_2/D_2 & 0 & 1/D_2 & 0 & \dots & \dots \\ -L_3/D_3 & 0 & 0 & 1/D_3 & \dots & \dots \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots
\end{array}\right).$$
Assume $\rho\ne 0$ and consider the vector field $v_\rho$ on $P$ defined for $n>1$ by \begin{equation*} \begin{array}{rl} v_\rho(c_n) &:=\dfrac{Dg^{n-1}(c_1)}{\rho^{n-1}} \sum_{k=0}^{n-1} \dfrac{\rho^kL_k}{Dg^k(c_1)}=\\ & \\ &=L_{n-1}+\dfrac{Dg(c_{n-1})L_{n-2}}{\rho} + \dfrac{Dg^2(c_{n-2})L_{n-3}}{\rho^2 }+ \dots + \dfrac{Dg^{n-1}(c_1)L_0}{\rho^{n-1}} \end{array}\end{equation*} (with $L_0=1$) and $v_\rho(c_1)=1$. Notice that for this vector field we get $\rho \mathcal A v_\rho = v_\rho$.
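Indeed, splitting off the $k=n$ term in the definition of $v_\rho$ gives the recursion

```latex
v_\rho(c_{n+1})
 =\frac{Dg^{n}(c_1)}{\rho^{n}}\sum_{k=0}^{n}\frac{\rho^k L_k}{Dg^k(c_1)}
 =L_n+\frac{Dg(c_n)}{\rho}\,v_\rho(c_n),
```

so, since $v_\rho(c_1)=1$, the $n$-th entry of $\rho\,\mathcal{A}v_\rho$ equals $\rho\bigl(v_\rho(c_{n+1})-L_n\bigr)/Dg(c_n)=v_\rho(c_n)$.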
Assume, by contradiction, that for some $0<|\rho|<1$, $$1+\sum_{n\ge 1} \dfrac{\rho^nL(c_n)} {Dg^n(c_1)}= 0.$$ Then we have that $$v_\rho(c_n)= - \dfrac{Dg^{n-1}(c_1)}{\rho^{n-1}} \sum_{k=n}^{\infty} \dfrac{\rho^kL(c_k)}{Dg^k(c_1)}= - \sum_{j=1}^\infty \dfrac{\rho^jL(c_{n+j-1})}{Dg^j(c_n)}. $$ For simplicity write $v_{i,\rho}=v_\rho(c_i)$. In the next lemma we will show that $v_\rho$ defines a Lipschitz vector field. Because of this,
$v_\rho$ defines a holomorphic motion $h_{\lambda,\rho}$ for $|\lambda|<\epsilon$, for some $\epsilon>0$. As $\rho$ is fixed, let us write $h_\lambda=h_{\lambda,\rho}$. Since $(g,G)_W$ has the lifting property, it follows that the sequence of consecutive lifts $h^{(n)}_\lambda$ of $h_\lambda$ forms a normal family. Write $h_\lambda^{(n)}(x)=x+\lambda v^{(n)}_\rho(x) + O(\lambda^2)$. Then $v^{(n)}_\rho=\dfrac{1}{\rho^n}v_\rho$, which, since $|\rho|<1$, contradicts the normality of the family $h_\lambda^{(n)}$.
\end{proof}
\begin{lemma}\label{liprho}
The vector field
$$v_\rho(c_n):= \sum_{j=1}^\infty \dfrac{\rho^jL(c_{n+j-1})}{Dg^j(c_n)}$$ defined on the set $P=\{c_i\}_{i\ge 1}$
is Lipschitz.
\end{lemma}
\begin{proof}
Given $x\in U$ such that $g^i(x)\in U$ for all $i\ge 1$,
define $$V_\rho(x)=\sum_{j=1}^\infty \dfrac{\rho^jL(g^{j-1}(x))}{Dg^j(x)}.$$
We have $V_\rho(c_n)=v_\rho(c_n)$, $n=1,2,\cdots$.
Moreover, \begin{equation}\label{eq:v'} V_\rho'(x)= \sum_{j=1}^\infty \rho^j\left[ \dfrac{L'(g^{j-1}(x))}{Dg(g^{j-1}(x))}-\dfrac{L(g^{j-1}(x))}{Dg^j(x)} \sum_{i=0}^{j-1} \dfrac{D^2g(g^i(x))\,Dg^i(x)}{Dg(g^i(x))}\right]. \end{equation} Since the periodic orbit $\mathcal O=\{a_0,\cdots,a_{q-1}\}$ of $g: U\to \mathbb{C}$ has multiplier $+1$ and is not degenerate, by the proof of Lemma~\ref{lem:leau-fatou}, for each $a_j$ there is a convex set $\Delta_j$ in the basin of $\mathcal O$ with boundary point $a_j$ such that $g^q(\Delta_j)\subset \Delta_j$. Moreover, the closures of the $\Delta_j$, $0\le j\le q-1$, are pairwise disjoint, and all but finitely many points of the orbit $P$ lie in $\Delta:=\cup_{j=0}^{q-1} \Delta_j$. Since $g^{nq}$ converges uniformly on $\Delta_j$ to $a_j$, it follows that $1/|Dg^j(x)|\le C|\rho|^{-j/2}$ on $\Delta$, where $C$ is a constant.
These bounds, together with the definition of $V_\rho$, with (\ref{eq:v'}) and with $|\rho|<1$, imply that for some $K>0$ and all $x\in \Delta$,
$$|V_\rho(x)|\le K, \ \ |V_\rho'(x)|\le K.$$
As each component $\Delta_j$ of $\Delta$ is convex (so that $x_1,x_2\in \Delta_j$ implies $|V_\rho(x_1)-V_\rho(x_2)|\le K|x_1-x_2|$) and only finitely many points of $P$ lie outside of $\Delta$, we conclude that $V_\rho$ is Lipschitz on $P$. \end{proof}
\section{The Leau-Fatou Flower} Suppose that $\mathcal{O}$ is a non-degenerate parabolic periodic orbit as above.
Fix $\alpha\in (0,1)$.
For each $j$, and $r>0$, define $$\Theta_j^{\text{att}}=\{\theta\in [0,1): D^{p +1} g^{p q}(a_j) e^{2\pi i p\theta}\text{ is real and negative}\},$$ $$\Theta_j^{\text{rep}}=\{\theta\in [0,1): D^{p+1} g^{p q}(a_j) e^{2\pi i p\theta}\text{ is real and positive}\},$$
$$\mathcal{C}_j(r)=\{a_j+se^{2\pi it}: 0<s<r,\ |t-\theta|<s^\alpha\text{ for some } \theta\in \Theta_j^{\text{rep}}\}$$ and
$$\mathcal{C}'_j(r)=\{a_j+se^{2\pi it}: 0<s<r,\ |t-\theta|<s^\alpha\text{ for some } \theta\in \Theta_j^{\text{att}}\}.$$
The following is a variation of the well-known Leau-Fatou Flower Theorem. \begin{lemma}[Leau-Fatou Flower Theorem]\label{lem:leau-fatou} \label{lem:Omega}
\begin{enumerate} \item For each $r>0$, there exists $\tau=\tau(r)>0$ such that $$B(a_j,\tau)\setminus \Omega_{r}\subset \mathcal{C}_j(\tau).$$ \item For any $r>\tau>0$ and $z_0\in \Omega_r^o$, there exists $n_0=n_0(z_0)$ such that $$g^n(z_0)\in \bigcup_j \mathcal{C}'_j(\tau)$$ for all $n\ge n_0$. \end{enumerate} \end{lemma} \begin{proof} This result is essentially contained in \cite{CG} or \cite{Milnor},
so we will content ourselves with a sketch in the case that $\mathcal{O}=\{0\}$ and $g(z)=z-z^{p+1}+O(z^{p+2})$. In this case (with $\theta$ normalized to $[0,1)$ as above), $\Theta^{\text{att}}=\{j/p: 0\le j<p\}$ and $\Theta^{\text{rep}}=\{(2j+1)/(2p): 0\le j<p\}$.
Let us first prove (2). As described in \cite{Milnor}, there are $p$ {\em attracting petals} $U_j$, $0\le j<p$, such that $U_j$ lies in the sector $(2j-1)\pi/p<\arg z< (2j+1)\pi/p$, and such that for each $z_0\in U$ with $z_n:=g^n(z_0)\to 0$, $z_n\not=0$, there exists $n_0$ such that $z_n\in U_j$ for some $j$ and all $n\ge n_0$. Let us prove that $z_n$ eventually lands in $\mathcal{C}'_j$. Indeed, assuming $j=0$ without loss of generality, and putting $w_n=\tfrac{1}{p}z_n^{-p}$, we have
$$w_{n+1}=w_n+1+O(|w_n|^{-1/p}).$$ From this, it is easy to check that for any $\tau>0$, $z_n\in \mathcal{C}'(\tau)$ for all $n\ge n(\tau)$. This proves statement (2).
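For completeness, here is the computation behind the drift estimate (note the normalization: with $w=\tfrac1p z^{-p}$ the drift is exactly $+1$, while $w=-z^{-p}$ would give drift $-p$):

```latex
z_{n+1}^{-p}
 = z_n^{-p}\bigl(1-z_n^{p}+O(z_n^{p+1})\bigr)^{-p}
 = z_n^{-p}\bigl(1+p\,z_n^{p}+O(z_n^{p+1})\bigr)
 = z_n^{-p}+p+O(z_n),
```

hence $w_{n+1}=w_n+1+O(z_n)=w_n+1+O(|w_n|^{-1/p})$, using $|z_n|=(p|w_n|)^{-1/p}$.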
Let us prove statement (1). First, by \cite{Milnor}, there exists $r_*>0$ such that if $\{w_{-n}\}_{n=0}^\infty$ is a $g$-backward orbit inside $\overline{B(0, r_*)}$, then $w_{-n}\to 0$. We may assume without loss of generality that $r\in (0, r_*)$. Next, we check that there exists $\tau_0>0$ such that for any $\tau\in (0,\tau_0)$ the set $\mathcal{C}(\tau)$ is backward invariant under $g$: $g|\mathcal{C}(\tau)$ is injective and $g(\mathcal{C}(\tau))\supset \mathcal{C}(\tau)$. Arguing by contradiction, assume that statement (1) is false for some $r\in (0,r_*)$. Then for any $n\ge 1$, there is $z_{n}\in B(0,1/n)\setminus (\mathcal{C}(1/n)\cup\{0\})$ and a minimal positive integer $k_n$ such that $g^{k_n}(z_n)\not\in B(0, r)$. Passing to a subsequence, we may assume that $g^{k_n-j}(z_n)\to w_{-j}$ as $n\to\infty$ for each $j$. Thus we obtain a $g$-backward orbit $\{w_{-j}\}_{j=1}^\infty$ with $w_{-j}\in \overline{B(0,r)}\setminus \mathcal{C}(\tau)$ for $j\ge 1$. However, applying statement (2) to $g^{-1}$, we see that this is impossible. \end{proof}
\iffalse \begin{mainthm-par} Assume that $(g,G)_W$ has the lifting property of $\overline{\Omega}_r$, for some $r>0$ then the following alternative holds: \begin{enumerate} \item[(1)] $\delta'(c_1)\ne 0$ \item[(2)] $G_w$ has a parabolic cycle of period $q$ for all $w$ in a neighborhood of $c_1$. \end{enumerate} Moreover, if $(g,G)_W$ is real then $\delta'(c_1)>0$.
\end{mainthm-par} \fi
\end{document}
September 2020, 40(9): 5571-5590. doi: 10.3934/dcds.2020238
Linearization of a nonautonomous unbounded system with nonuniform contraction: A spectral approach
Ignacio Huerta ,
Departamento de Matemática, Universidad Técnica Federico Santa María, Casilla 110-V, Valparaíso, Chile
* Corresponding author: Ignacio Huerta
Received: January 2020. Revised: March 2020. Published: June 2020.
Fund Project: This research has been partially supported by FONDECYT Regular 1170968 and CONICYTPCHA/2015-21150270
For a nonautonomous linear system with nonuniform contraction, we construct a topological conjugacy between this system and an unbounded nonlinear perturbation. This topological conjugacy is constructed as a composition of homeomorphisms. The first one is set up by using the fact that the linear system is almost reducible to a diagonal system with a small enough perturbation, where the diagonal entries belong to the spectrum of the nonuniform exponential dichotomy; the second one is constructed in terms of the crossing times with respect to the unit sphere of an adequate Lyapunov function associated to the linear system.
Keywords: Nonuniform dichotomy spectrum, nonuniform hyperbolicity, topological conjugacy.
Mathematics Subject Classification: 34D09, 37D25, 37B55.
Citation: Ignacio Huerta. Linearization of a nonautonomous unbounded system with nonuniform contraction: A spectral approach. Discrete & Continuous Dynamical Systems, 2020, 40 (9) : 5571-5590. doi: 10.3934/dcds.2020238
Great icosicosidodecahedron
In geometry, the great icosicosidodecahedron (or great icosified icosidodecahedron) is a nonconvex uniform polyhedron, indexed as U48. It has 52 faces (20 triangles, 12 pentagrams, and 20 hexagons), 120 edges, and 60 vertices.[1] Its vertex figure is a crossed quadrilateral.
Great icosicosidodecahedron
TypeUniform star polyhedron
ElementsF = 52, E = 120
V = 60 (χ = −8)
Faces by sides20{3}+12{5}+20{6}
Coxeter diagram
Wythoff symbol3/2 5 | 3
3 5/4 | 3
Symmetry groupIh, [5,3], *532
Index referencesU48, C62, W88
Dual polyhedronGreat icosacronic hexecontahedron
Vertex figure
5.6.3/2.6
Bowers acronymGiid
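The element counts in the infobox can be sanity-checked against the Euler characteristic χ = V − E + F, which the infobox reports as −8. A minimal Python sketch (the function name is ours):

```python
# Element counts for the great icosicosidodecahedron, from the infobox above.
def euler_characteristic(V, E, F):
    """Euler characteristic chi = V - E + F of a polyhedron."""
    return V - E + F

faces = 20 + 12 + 20          # 20 triangles + 12 pentagrams + 20 hexagons
chi = euler_characteristic(V=60, E=120, F=faces)
print(faces, chi)  # 52 -8
```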
Related polyhedra
It shares its vertex arrangement with the truncated dodecahedron. It additionally shares its edge arrangement with the great ditrigonal dodecicosidodecahedron (having the triangular and pentagonal faces in common) and the great dodecicosahedron (having the hexagonal faces in common).
References
1. Maeder, Roman. "48: great icosicosidodecahedron". MathConsult.
External links
• Eric W. Weisstein, Great icosicosidodecahedron (Uniform polyhedron) at MathWorld.
Return on Risk-Adjusted Capital – RORAC Definition
By Marshall Hargrave
What Is Return on Risk-Adjusted Capital – RORAC?
The return on risk-adjusted capital (RORAC) is a rate of return measure commonly used in financial analysis, where various projects, endeavors, and investments are evaluated based on capital at risk. Projects with different risk profiles are easier to compare with each other once their individual RORAC values have been calculated. The RORAC is similar to return on equity (ROE), except the denominator is adjusted to account for the risk of a project.
The Formula for RORAC Is
Return on Risk-Adjusted Capital = Net Income / Risk-Weighted Assets

where Risk-Weighted Assets = allocated risk capital, economic capital, or value at risk.
How to Calculate Return on Risk-Adjusted Capital – RORAC
Return on Risk-Adjusted Capital is calculated by dividing a company's net income by the risk-weighted assets.
What Does Return on Risk-Adjusted Capital (RORAC) Tell You?
Return on risk-adjusted capital takes into account the capital at risk, whether it be related to a project or company division. Allocated risk capital is the firm's capital, adjusted for a maximum potential loss based on estimated future earnings distributions or the volatility of earnings.
Companies use RORAC to place greater emphasis on firm-wide risk management. For example, different corporate divisions with unique managers can use RORAC to quantify and maintain acceptable risk-exposure levels. This calculation is similar to risk-adjusted return on capital (RAROC). With RORAC, however, the capital is adjusted for risk, not the rate of return. RORAC is used when the risk varies depending on the capital asset being analyzed.
RORAC is commonly used in financial analysis, where various projects or investments are evaluated based on capital at risk.
Allows for an apples-to-apples comparison of projects with different risk profiles.
Similar to risk-adjusted return on capital, but RAROC adjusts the return for risk, not the capital.
Example of How to Use RORAC
Assume a firm is evaluating two projects it has engaged in over the previous year and needs to decide which one to eliminate. Project A had total revenues of $100,000 and total expenses of $50,000. The total risk-weighted assets involved in the project is $400,000.
Project B had total revenues of $200,000 and total expenses of $100,000. The total risk-weighted assets involved in Project B is $900,000. The RORAC of the two projects is calculated as:
Project A RORAC = ($100,000 − $50,000) / $400,000 = 12.5%

Project B RORAC = ($200,000 − $100,000) / $900,000 = 11.1%
Even though Project B had twice as much revenue as Project A, once the risk-weighted capital of each project is taken into account, it is clear that Project A has a better RORAC.
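The comparison above is straightforward to reproduce. A minimal sketch (function and variable names are ours, not Investopedia's):

```python
def rorac(revenue, expenses, risk_weighted_assets):
    """Return on risk-adjusted capital: net income / risk-weighted assets."""
    return (revenue - expenses) / risk_weighted_assets

project_a = rorac(100_000, 50_000, 400_000)
project_b = rorac(200_000, 100_000, 900_000)
print(f"A: {project_a:.1%}  B: {project_b:.1%}")  # A: 12.5%  B: 11.1%
```

As in the worked example, Project A comes out ahead once risk-weighted capital is taken into account.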
The Difference Between RORAC and RAROC
RORAC is similar to, and easily confused with, two other statistics. Risk-adjusted return on capital (RAROC) is usually defined as the ratio of risk-adjusted return to economic capital. In this calculation, instead of adjusting the risk of the capital itself, it is the risk of the return that is quantified and measured. Often, the expected return of a project is divided by value at risk to arrive at RAROC.
Another statistic similar to RORAC is the risk-adjusted return on risk-adjusted capital (RARORAC). This statistic is calculated by taking the risk-adjusted return and dividing it by economic capital, adjusting for diversification benefits. It uses guidelines defined by the international risk standards covered in Basel III.
Limitations of Using Return on Risk-Adjusted Capital – RORAC
Calculating the risk-adjusted capital can be cumbersome as it requires understanding the value at risk calculation.
Set (mathematics)
A set is the mathematical model for a collection of different[1] things;[2][3][4] a set contains elements or members, which can be mathematical objects of any kind: numbers, symbols, points in space, lines, other geometrical shapes, variables, or even other sets.[5] The set with no element is the empty set; a set with a single element is a singleton. A set may have a finite number of elements or be an infinite set. Two sets are equal if they have precisely the same elements.[6]
This article is about what mathematicians call "intuitive" or "naive" set theory. For a more detailed account, see Naive set theory. For a rigorous modern axiomatic treatment of sets, see Set theory.
Sets are ubiquitous in modern mathematics. Indeed, set theory, more specifically Zermelo–Fraenkel set theory, has been the standard way to provide rigorous foundations for all branches of mathematics since the first half of the 20th century.[5]
History
Main article: Set theory
The concept of a set emerged in mathematics at the end of the 19th century.[7] The German word for set, Menge, was coined by Bernard Bolzano in his work Paradoxes of the Infinite.[8][9][10]
Georg Cantor, one of the founders of set theory, gave the following definition at the beginning of his Beiträge zur Begründung der transfiniten Mengenlehre:[11][1]
A set is a gathering together into a whole of definite, distinct objects of our perception or our thought—which are called elements of the set.
Bertrand Russell introduced the distinction between a set and a class (a set is a class, but some classes, such as the class of all sets, are not sets; see Russell's paradox):[12]
When mathematicians deal with what they call a manifold, aggregate, Menge, ensemble, or some equivalent name, it is common, especially where the number of terms involved is finite, to regard the object in question (which is in fact a class) as defined by the enumeration of its terms, and as consisting possibly of a single term, which in that case is the class.
Naive set theory
Main article: Naive set theory
The foremost property of a set is that it can have elements, also called members. Two sets are equal when they have the same elements. More precisely, sets A and B are equal if every element of A is an element of B, and every element of B is an element of A; this property is called the extensionality of sets.[13]
The simple concept of a set has proved enormously useful in mathematics, but paradoxes arise if no restrictions are placed on how sets can be constructed:
• Russell's paradox shows that the "set of all sets that do not contain themselves", i.e., {x | x is a set and x ∉ x}, cannot exist.
• Cantor's paradox shows that "the set of all sets" cannot exist.
Naïve set theory defines a set as any well-defined collection of distinct elements, but problems arise from the vagueness of the term well-defined.
Axiomatic set theory
In subsequent efforts to resolve these paradoxes since the time of the original formulation of naïve set theory, the properties of sets have been defined by axioms. Axiomatic set theory takes the concept of a set as a primitive notion.[14] The purpose of the axioms is to provide a basic framework from which to deduce the truth or falsity of particular mathematical propositions (statements) about sets, using first-order logic. According to Gödel's incompleteness theorems however, it is not possible to use first-order logic to prove any such particular axiomatic set theory is free from paradox.
How sets are defined and set notation
Mathematical texts commonly denote sets by capital letters[15][5] in italic, such as A, B, C.[16] A set may also be called a collection or family, especially when its elements are themselves sets.
Roster notation
Roster or enumeration notation defines a set by listing its elements between curly brackets, separated by commas:[17][18][19][20]
A = {4, 2, 1, 3}
B = {blue, white, red}.
This notation was introduced by Zermelo in 1908.[21] In a set, all that matters is whether each element is in it or not, so the ordering of the elements in roster notation is irrelevant (in contrast, in a sequence, a tuple, or a permutation of a set, the ordering of the terms matters). For example, {2, 4, 6} and {4, 6, 4, 2} represent the same set.[22][16][23]
For sets with many elements, especially those following an implicit pattern, the list of members can be abbreviated using an ellipsis '...'.[24][25] For instance, the set of the first thousand positive integers may be specified in roster notation as
{1, 2, 3, ..., 1000}.
Infinite sets in roster notation
An infinite set is a set with an endless list of elements. To describe an infinite set in roster notation, an ellipsis is placed at the end of the list, or at both ends, to indicate that the list continues forever. For example, the set of nonnegative integers is
{0, 1, 2, 3, 4, ...},
and the set of all integers is
{..., −3, −2, −1, 0, 1, 2, 3, ...}.
Semantic definition
Another way to define a set is to use a rule to determine what the elements are:
Let A be the set whose members are the first four positive integers.
Let B be the set of colors of the French flag.
Such a definition is called a semantic description.[26][27]
Set-builder notation
Main article: Set-builder notation
Set-builder notation specifies a set as a selection from a larger set, determined by a condition on the elements.[27][28][29] For example, a set F can be defined as follows:
$F=\{n\mid n{\text{ is an integer, and }}0\leq n\leq 19\}.$
In this notation, the vertical bar "|" means "such that", and the description can be interpreted as "F is the set of all numbers n such that n is an integer in the range from 0 to 19 inclusive". Some authors use a colon ":" instead of the vertical bar.[30]
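Set-builder notation has a direct analogue in languages with set comprehensions. A Python sketch of the set F defined above:

```python
# {n | n is an integer, and 0 <= n <= 19}, as a set comprehension.
F = {n for n in range(0, 20)}
print(len(F), min(F), max(F))  # 20 0 19
```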
Classifying methods of definition
Philosophy uses specific terms to classify types of definitions:
• An intensional definition uses a rule to determine membership. Semantic definitions and definitions using set-builder notation are examples.
• An extensional definition describes a set by listing all its elements.[27] Such definitions are also called enumerative.
• An ostensive definition is one that describes a set by giving examples of elements; a roster involving an ellipsis would be an example.
Membership
Main article: Element (mathematics)
If B is a set and x is an element of B, this is written in shorthand as x ∈ B, which can also be read as "x belongs to B", or "x is in B".[13] The statement "y is not an element of B" is written as y ∉ B, which can also be read as "y is not in B".[31][32]
For example, with respect to the sets A = {1, 2, 3, 4}, B = {blue, white, red}, and F = {n | n is an integer, and 0 ≤ n ≤ 19},
4 ∈ A and 12 ∈ F; and
20 ∉ F and green ∉ B.
The empty set
Main article: Empty set
The empty set (or null set) is the unique set that has no members. It is denoted ∅ or $\emptyset $ or { }[33][34] or ϕ[35] (or φ).[36]
Singleton sets
Main article: Singleton (mathematics)
A singleton set is a set with exactly one element; such a set may also be called a unit set.[6] Any such set can be written as {x}, where x is the element. The set {x} and the element x mean different things; Halmos[37] draws the analogy that a box containing a hat is not the same as the hat.
Subsets
Main article: Subset
If every element of set A is also in B, then A is described as being a subset of B, or contained in B, written A ⊆ B,[38] or B ⊇ A.[39] The latter notation may be read B contains A, B includes A, or B is a superset of A. The relationship between sets established by ⊆ is called inclusion or containment. Two sets are equal if they contain each other: A ⊆ B and B ⊆ A is equivalent to A = B.[28]
If A is a subset of B, but A is not equal to B, then A is called a proper subset of B. This can be written A ⊊ B. Likewise, B ⊋ A means B is a proper superset of A, i.e. B contains A, and is not equal to A.
A third pair of operators ⊂ and ⊃ are used differently by different authors: some authors use A ⊂ B and B ⊃ A to mean A is any subset of B (and not necessarily a proper subset),[40][31] while others reserve A ⊂ B and B ⊃ A for cases where A is a proper subset of B.[38]
Examples:
• The set of all humans is a proper subset of the set of all mammals.
• {1, 3} ⊂ {1, 2, 3, 4}.
• {1, 2, 3, 4} ⊆ {1, 2, 3, 4}.
The empty set is a subset of every set,[33] and every set is a subset of itself:[40]
• ∅ ⊆ A.
• A ⊆ A.
Euler and Venn diagrams
An Euler diagram is a graphical representation of a collection of sets; each set is depicted as a planar region enclosed by a loop, with its elements inside. If A is a subset of B, then the region representing A is completely inside the region representing B. If two sets have no elements in common, the regions do not overlap.
A Venn diagram, in contrast, is a graphical representation of n sets in which the n loops divide the plane into $2^n$ zones such that for each way of selecting some of the n sets (possibly all or none), there is a zone for the elements that belong to all the selected sets and none of the others. For example, if the sets are A, B, and C, there should be a zone for the elements that are inside A and C and outside B (even if such elements do not exist).
Special sets of numbers in mathematics
There are sets of such mathematical importance, to which mathematicians refer so frequently, that they have acquired special names and notational conventions to identify them.
Many of these important sets are represented in mathematical texts using bold (e.g. $\mathbf {Z} $) or blackboard bold (e.g. $\mathbb {Z} $) typeface.[41] These include
• $\mathbf {N} $ or $\mathbb {N} $, the set of all natural numbers: $\mathbf {N} =\{0,1,2,3,...\}$ (often, authors exclude 0);[41]
• $\mathbf {Z} $ or $\mathbb {Z} $, the set of all integers (whether positive, negative or zero): $\mathbf {Z} =\{...,-2,-1,0,1,2,3,...\}$;[41]
• $\mathbf {Q} $ or $\mathbb {Q} $, the set of all rational numbers (that is, the set of all proper and improper fractions): $\mathbf {Q} =\left\{{\frac {a}{b}}\mid a,b\in \mathbf {Z} ,b\neq 0\right\}$. For example, −7/4 ∈ Q and 5 = 5/1 ∈ Q;[41]
• $\mathbf {R} $ or $\mathbb {R} $, the set of all real numbers, including all rational numbers and all irrational numbers (which include algebraic numbers such as ${\sqrt {2}}$ that cannot be rewritten as fractions, as well as transcendental numbers such as π and e);[41]
• $\mathbf {C} $ or $\mathbb {C} $, the set of all complex numbers: C = {a + bi | a, b ∈ R}, for example, 1 + 2i ∈ C.[41]
Each of the above sets of numbers has an infinite number of elements. Each is a subset of the sets listed below it.
Sets of positive or negative numbers are sometimes denoted by superscript plus and minus signs, respectively. For example, $\mathbf {Q} ^{+}$ represents the set of positive rational numbers.
Functions
A function (or mapping) from a set A to a set B is a rule that assigns to each "input" element of A an "output" that is an element of B; more formally, a function is a special kind of relation, one that relates each element of A to exactly one element of B. A function is called
• injective (or one-to-one) if it maps any two different elements of A to different elements of B,
• surjective (or onto) if for every element of B, there is at least one element of A that maps to it, and
• bijective (or a one-to-one correspondence) if the function is both injective and surjective — in this case, each element of A is paired with a unique element of B, and each element of B is paired with a unique element of A, so that there are no unpaired elements.
An injective function is called an injection, a surjective function is called a surjection, and a bijective function is called a bijection or one-to-one correspondence.
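For finite sets, these three properties can be checked mechanically. A minimal sketch (function names are ours), representing a function as a dict from domain elements to codomain elements:

```python
def is_injective(f):
    """No two domain elements map to the same codomain element."""
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    """Every codomain element is the image of some domain element."""
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    """Injective and surjective: a one-to-one correspondence."""
    return is_injective(f) and is_surjective(f, codomain)

f = {1: 'a', 2: 'b', 3: 'c'}
print(is_bijective(f, {'a', 'b', 'c'}))  # True
g = {1: 'a', 2: 'a', 3: 'b'}
print(is_injective(g), is_surjective(g, {'a', 'b'}))  # False True
```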
Cardinality
Main article: Cardinality
The cardinality of a set S, denoted |S|, is the number of members of S.[42] For example, if B = {blue, white, red}, then |B| = 3. Repeated members in roster notation are not counted,[43][44] so |{blue, white, red, blue, white}| = 3, too.
More formally, two sets share the same cardinality if there exists a bijection between them.
The cardinality of the empty set is zero.[45]
Infinite sets and infinite cardinality
The list of elements of some sets is endless, or infinite. For example, the set $\mathbb {N} $ of natural numbers is infinite.[28] In fact, all the special sets of numbers mentioned in the section above are infinite. Infinite sets have infinite cardinality.
Some infinite cardinalities are greater than others. Arguably one of the most significant results from set theory is that the set of real numbers has greater cardinality than the set of natural numbers.[46] Sets with cardinality less than or equal to that of $\mathbb {N} $ are called countable sets; these are either finite sets or countably infinite sets (sets of the same cardinality as $\mathbb {N} $); some authors use "countable" to mean "countably infinite". Sets with cardinality strictly greater than that of $\mathbb {N} $ are called uncountable sets.
However, it can be shown that the cardinality of a straight line (i.e., the number of points on a line) is the same as the cardinality of any segment of that line, of the entire plane, and indeed of any finite-dimensional Euclidean space.[47]
The continuum hypothesis
Main article: Continuum hypothesis
The continuum hypothesis, formulated by Georg Cantor in 1878, is the statement that there is no set with cardinality strictly between the cardinality of the natural numbers and the cardinality of a straight line.[48] In 1963, Paul Cohen proved that the continuum hypothesis is independent of the axiom system ZFC consisting of Zermelo–Fraenkel set theory with the axiom of choice.[49] (ZFC is the most widely-studied version of axiomatic set theory.)
Power sets
Main article: Power set
The power set of a set S is the set of all subsets of S.[28] The empty set and S itself are elements of the power set of S, because these are both subsets of S. For example, the power set of {1, 2, 3} is {∅, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}}. The power set of a set S is commonly written as P(S) or $2^S$.[28][50][16]
If S has n elements, then P(S) has $2^n$ elements.[51] For example, {1, 2, 3} has three elements, and its power set has $2^3 = 8$ elements, as shown above.
If S is infinite (whether countable or uncountable), then P(S) is uncountable. Moreover, the power set is always strictly "bigger" than the original set, in the sense that any attempt to pair up the elements of S with the elements of P(S) will leave some elements of P(S) unpaired. (There is never a bijection from S onto P(S).)[52]
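For finite sets the power set can be enumerated directly, which also illustrates the $2^n$ count. A Python sketch using the standard itertools recipe:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets; there are 2**len(s) of them."""
    elems = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

ps = power_set({1, 2, 3})
print(len(ps))  # 8, i.e. 2**3
```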
Partitions
Main article: Partition of a set
A partition of a set S is a set of nonempty subsets of S, such that every element x in S is in exactly one of these subsets. That is, the subsets are pairwise disjoint (meaning any two sets of the partition contain no element in common), and the union of all the subsets of the partition is S.[53][54]
Basic operations
Main article: Algebra of sets
Suppose that a universal set U (a set containing all elements being discussed) has been fixed, and that A is a subset of U.
• The complement of A is the set of all elements (of U) that do not belong to A. It may be denoted $A^c$ or A′. In set-builder notation, $A^{\text{c}}=\{a\in U:a\notin A\}$. The complement may also be called the absolute complement to distinguish it from the relative complement below. Example: If the universal set is taken to be the set of integers, then the complement of the set of even integers is the set of odd integers.
Given any two sets A and B,
• their union A ∪ B is the set of all things that are members of A or B or both.
• their intersection A ∩ B is the set of all things that are members of both A and B. If A ∩ B = ∅, then A and B are said to be disjoint.
• the set difference A \ B (also written A − B) is the set of all things that belong to A but not B. Especially when B is a subset of A, it is also called the relative complement of B in A. With $B^c$ as the absolute complement of B (in the universal set U), $A\setminus B=A\cap B^{c}$.
• their symmetric difference A Δ B is the set of all things that belong to A or B but not both. One has $A\,\Delta \,B=(A\setminus B)\cup (B\setminus A)$.
• their cartesian product A × B is the set of all ordered pairs (a,b) such that a is an element of A and b is an element of B.
Examples:
• {1, 2, 3} ∪ {3, 4, 5} = {1, 2, 3, 4, 5}.
• {1, 2, 3} ∩ {3, 4, 5} = {3}.
• {1, 2, 3} − {3, 4, 5} = {1, 2}.
• {1, 2, 3} Δ {3, 4, 5} = {1, 2, 4, 5}.
• {a, b} × {1, 2, 3} = {(a,1), (a,2), (a,3), (b,1), (b,2), (b,3)}.
The operations above satisfy many identities. For example, one of De Morgan's laws states that (A ∪ B)′ = A′ ∩ B′ (that is, the elements outside the union of A and B are the elements that are outside A and outside B).
The cardinality of A × B is the product of the cardinalities of A and B. (This is an elementary fact when A and B are finite. When one or both are infinite, multiplication of cardinal numbers is defined to make this true.)
The power set of any set becomes a Boolean ring with symmetric difference as the addition of the ring and intersection as the multiplication of the ring.
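The ring identities can be spot-checked by brute force on a small universe. This is only a sketch (the names `U` and `powerset` are illustrative), enumerating the power set with `itertools.combinations`:

```python
from itertools import combinations

U = {1, 2, 3}
# power set of U, as a list of (hashable) frozensets
powerset = [frozenset(c) for r in range(len(U) + 1)
            for c in combinations(U, r)]
assert len(powerset) == 8  # 2**3 subsets

for A in powerset:
    assert A ^ A == frozenset()   # each set is its own additive inverse
    assert A ^ frozenset() == A   # the empty set is the additive identity
    for B in powerset:
        for C in powerset:
            # multiplication (intersection) distributes over
            # addition (symmetric difference)
            assert A & (B ^ C) == (A & B) ^ (A & C)

print("Boolean-ring identities hold on all", len(powerset), "subsets")
```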
Applications
Sets are ubiquitous in modern mathematics. For example, structures in abstract algebra, such as groups, fields and rings, are sets closed under one or more operations.
One of the main applications of naive set theory is in the construction of relations. A relation from a domain A to a codomain B is a subset of the Cartesian product A × B. For example, considering the set S = {rock, paper, scissors} of shapes in the game of the same name, the relation "beats" from S to S is the set B = {(scissors,paper), (paper,rock), (rock,scissors)}; thus x beats y in the game if the pair (x,y) is a member of B. Another example is the set F of all pairs (x, x²), where x is real. This relation is a subset of R × R, because the set of all squares is a subset of the set of all real numbers. Since for every x in R, one and only one pair (x,...) is found in F, it is called a function. In functional notation, this relation can be written as F(x) = x².
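Both examples can be written down almost verbatim as Python sets of pairs; membership in the relation is then an ordinary membership test:

```python
S = {"rock", "paper", "scissors"}
beats = {("scissors", "paper"), ("paper", "rock"), ("rock", "scissors")}

# beats is a relation from S to S: a subset of the cartesian product S x S
assert beats <= {(x, y) for x in S for y in S}
print(("rock", "scissors") in beats)   # True: rock beats scissors
print(("scissors", "rock") in beats)   # False

# A finite slice of F = {(x, x^2)}; each x occurs in exactly one pair,
# which is the property that makes the relation F a function.
xs = [-2.0, -1.0, 0.0, 1.5]
F = {(x, x * x) for x in xs}
assert len({x for (x, _) in F}) == len(F)
```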
Principle of inclusion and exclusion
Main article: Inclusion–exclusion principle
The inclusion–exclusion principle is a technique for counting the elements in a union of two finite sets in terms of the sizes of the two sets and their intersection. It can be expressed symbolically as
$|A\cup B|=|A|+|B|-|A\cap B|.$
A more general form of the principle gives the cardinality of any finite union of finite sets:
${\begin{aligned}\left|A_{1}\cup A_{2}\cup A_{3}\cup \ldots \cup A_{n}\right|=&\left(\left|A_{1}\right|+\left|A_{2}\right|+\left|A_{3}\right|+\ldots \left|A_{n}\right|\right)\\&{}-\left(\left|A_{1}\cap A_{2}\right|+\left|A_{1}\cap A_{3}\right|+\ldots \left|A_{n-1}\cap A_{n}\right|\right)\\&{}+\ldots \\&{}+\left(-1\right)^{n-1}\left(\left|A_{1}\cap A_{2}\cap A_{3}\cap \ldots \cap A_{n}\right|\right).\end{aligned}}$
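The general formula can be implemented directly — sum the sizes of all r-fold intersections with alternating signs — and checked against a union computed outright. A sketch, with illustrative names:

```python
from itertools import combinations

def inclusion_exclusion(sets):
    """|A1 ∪ ... ∪ An| as the alternating sum of intersection sizes."""
    total = 0
    for r in range(1, len(sets) + 1):
        sign = (-1) ** (r - 1)
        for combo in combinations(sets, r):
            total += sign * len(set.intersection(*combo))
    return total

sets = [{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]
print(inclusion_exclusion(sets))   # 6
print(len(set().union(*sets)))     # 6: the same count, computed directly
```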
See also
• Algebra of sets
• Alternative set theory
• Category of sets
• Class (set theory)
• Dense set
• Family of sets
• Fuzzy set
• Internal set
• Mereology
• Multiset
• Principia Mathematica
• Rough set
Notes
1. Cantor, Georg; Jourdain, Philip E.B. (Translator) (1915). Contributions to the Founding of the Theory of Transfinite Numbers. New York: Dover Publications (1954 English translation), p. 85: "By an 'aggregate' (Menge) we are to understand any collection into a whole (Zusammenfassung zu einem Ganzen) M of definite and separate objects m of our intuition or our thought."
2. P. K. Jain; Khalil Ahmad; Om P. Ahuja (1995). Functional Analysis. New Age International. p. 1. ISBN 978-81-224-0801-0.
3. Samuel Goldberg (1 January 1986). Probability: An Introduction. Courier Corporation. p. 2. ISBN 978-0-486-65252-8.
4. Thomas H. Cormen; Charles E Leiserson; Ronald L Rivest; Clifford Stein (2001). Introduction To Algorithms. MIT Press. p. 1070. ISBN 978-0-262-03293-3.
5. Halmos 1960, p. 1.
6. Stoll, Robert (1974). Sets, Logic and Axiomatic Theories. W. H. Freeman and Company. pp. 5. ISBN 9780716704577.
7. José Ferreirós (16 August 2007). Labyrinth of Thought: A History of Set Theory and Its Role in Modern Mathematics. Birkhäuser Basel. ISBN 978-3-7643-8349-7.
8. Steve Russ (9 December 2004). The Mathematical Works of Bernard Bolzano. OUP Oxford. ISBN 978-0-19-151370-1.
9. William Ewald; William Bragg Ewald (1996). From Kant to Hilbert Volume 1: A Source Book in the Foundations of Mathematics. OUP Oxford. p. 249. ISBN 978-0-19-850535-8.
10. Paul Rusnock; Jan Sebestík (25 April 2019). Bernard Bolzano: His Life and Work. OUP Oxford. p. 430. ISBN 978-0-19-255683-7.
11. Georg Cantor (Nov 1895). "Beiträge zur Begründung der transfiniten Mengenlehre (1)". Mathematische Annalen (in German). 46 (4): 481–512.
12. Bertrand Russell (1903) The Principles of Mathematics, chapter VI: Classes
13. Halmos 1960, p. 2.
14. Jose Ferreiros (1 November 2001). Labyrinth of Thought: A History of Set Theory and Its Role in Modern Mathematics. Springer Science & Business Media. ISBN 978-3-7643-5749-8.
15. Seymor Lipschutz; Marc Lipson (22 June 1997). Schaum's Outline of Discrete Mathematics. McGraw Hill Professional. p. 1. ISBN 978-0-07-136841-4.
16. "Introduction to Sets". www.mathsisfun.com. Retrieved 2020-08-19.
17. Charles Roberts (24 June 2009). Introduction to Mathematical Proofs: A Transition. CRC Press. p. 45. ISBN 978-1-4200-6956-3.
18. David Johnson; David B. Johnson; Thomas A. Mowry (June 2004). Finite Mathematics: Practical Applications (Docutech Version). W. H. Freeman. p. 220. ISBN 978-0-7167-6297-3.
19. Ignacio Bello; Anton Kaul; Jack R. Britton (29 January 2013). Topics in Contemporary Mathematics. Cengage Learning. p. 47. ISBN 978-1-133-10742-2.
20. Susanna S. Epp (4 August 2010). Discrete Mathematics with Applications. Cengage Learning. p. 13. ISBN 978-0-495-39132-6.
21. A. Kanamori, "The Empty Set, the Singleton, and the Ordered Pair", p.278. Bulletin of Symbolic Logic vol. 9, no. 3, (2003). Accessed 21 August 2023.
22. Stephen B. Maurer; Anthony Ralston (21 January 2005). Discrete Algorithmic Mathematics. CRC Press. p. 11. ISBN 978-1-4398-6375-6.
23. D. Van Dalen; H. C. Doets; H. De Swart (9 May 2014). Sets: Naïve, Axiomatic and Applied: A Basic Compendium with Exercises for Use in Set Theory for Non Logicians, Working and Teaching Mathematicians and Students. Elsevier Science. p. 1. ISBN 978-1-4831-5039-0.
24. Alfred Basta; Stephan DeLong; Nadine Basta (1 January 2013). Mathematics for Information Technology. Cengage Learning. p. 3. ISBN 978-1-285-60843-3.
25. Laura Bracken; Ed Miller (15 February 2013). Elementary Algebra. Cengage Learning. p. 36. ISBN 978-0-618-95134-5.
26. Halmos 1960, p. 4.
27. Frank Ruda (6 October 2011). Hegel's Rabble: An Investigation into Hegel's Philosophy of Right. Bloomsbury Publishing. p. 151. ISBN 978-1-4411-7413-0.
28. John F. Lucas (1990). Introduction to Abstract Mathematics. Rowman & Littlefield. p. 108. ISBN 978-0-912675-73-2.
29. Weisstein, Eric W. "Set". mathworld.wolfram.com. Retrieved 2020-08-19.
30. Ralph C. Steinlage (1987). College Algebra. West Publishing Company. ISBN 978-0-314-29531-6.
31. Marek Capinski; Peter E. Kopp (2004). Measure, Integral and Probability. Springer Science & Business Media. p. 2. ISBN 978-1-85233-781-0.
32. "Set Symbols". www.mathsisfun.com. Retrieved 2020-08-19.
33. Halmos 1960, p. 8.
34. K.T. Leung; Doris Lai-chue Chen (1 July 1992). Elementary Set Theory, Part I/II. Hong Kong University Press. p. 27. ISBN 978-962-209-026-2.
35. Aggarwal, M.L. (2021). "1. Sets". Understanding ISC Mathematics Class XI. Vol. 1. Arya Publications (Avichal Publishing Company). p. A=3.
36. Sourendra Nath, De (January 2015). "Unit-1 Sets and Functions: 1. Set Theory". Chhaya Ganit (Ekadash Shreni). Scholar Books Pvt. Ltd. p. 5.
37. Halmos 1960, Sect.2.
38. Felix Hausdorff (2005). Set Theory. American Mathematical Soc. p. 30. ISBN 978-0-8218-3835-8.
39. Peter Comninos (6 April 2010). Mathematical and Computer Programming Techniques for Computer Graphics. Springer Science & Business Media. p. 7. ISBN 978-1-84628-292-8.
40. Halmos 1960, p. 3.
41. George Tourlakis (13 February 2003). Lectures in Logic and Set Theory: Volume 2, Set Theory. Cambridge University Press. p. 137. ISBN 978-1-139-43943-5.
42. Yiannis N. Moschovakis (1994). Notes on Set Theory. Springer Science & Business Media. ISBN 978-3-540-94180-4.
43. Arthur Charles Fleck (2001). Formal Models of Computation: The Ultimate Limits of Computing. World Scientific. p. 3. ISBN 978-981-02-4500-9.
44. William Johnston (25 September 2015). The Lebesgue Integral for Undergraduates. The Mathematical Association of America. p. 7. ISBN 978-1-939512-07-9.
45. Karl J. Smith (7 January 2008). Mathematics: Its Power and Utility. Cengage Learning. p. 401. ISBN 978-0-495-38913-2.
46. John Stillwell (16 October 2013). The Real Numbers: An Introduction to Set Theory and Analysis. Springer Science & Business Media. ISBN 978-3-319-01577-4.
47. David Tall (11 April 2006). Advanced Mathematical Thinking. Springer Science & Business Media. p. 211. ISBN 978-0-306-47203-9.
48. Cantor, Georg (1878). "Ein Beitrag zur Mannigfaltigkeitslehre". Journal für die Reine und Angewandte Mathematik. 1878 (84): 242–258. doi:10.1515/crll.1878.84.242.
49. Cohen, Paul J. (December 15, 1963). "The Independence of the Continuum Hypothesis". Proceedings of the National Academy of Sciences of the United States of America. 50 (6): 1143–1148. Bibcode:1963PNAS...50.1143C. doi:10.1073/pnas.50.6.1143. JSTOR 71858. PMC 221287. PMID 16578557.
50. Halmos 1960, p. 19.
51. Halmos 1960, p. 20.
52. Edward B. Burger; Michael Starbird (18 August 2004). The Heart of Mathematics: An invitation to effective thinking. Springer Science & Business Media. p. 183. ISBN 978-1-931914-41-3.
53. Toufik Mansour (27 July 2012). Combinatorics of Set Partitions. CRC Press. ISBN 978-1-4398-6333-6.
54. Halmos 1960, p. 28.
References
• Dauben, Joseph W. (1979). Georg Cantor: His Mathematics and Philosophy of the Infinite. Boston: Harvard University Press. ISBN 0-691-02447-2.
• Halmos, Paul R. (1960). Naive Set Theory. Princeton, N.J.: Van Nostrand. ISBN 0-387-90092-6.
• Stoll, Robert R. (1979). Set Theory and Logic. Mineola, N.Y.: Dover Publications. ISBN 0-486-63829-4.
• Velleman, Daniel (2006). How To Prove It: A Structured Approach. Cambridge University Press. ISBN 0-521-67599-5.
External links
• The dictionary definition of set at Wiktionary
• Cantor's "Beiträge zur Begründung der transfiniten Mengenlehre" (in German)
October 2015, 20(8): 2691-2714. doi: 10.3934/dcdsb.2015.20.2691
Coexistence solutions of a competition model with two species in a water column
Hua Nie 1, Sze-Bi Hsu 2, and Jianhua Wu 1
College of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi 710119, China
Department of Mathematics, National Tsing Hua University, National Center of Theoretical Science, Hsinchu 300
Received October 2014 Revised March 2015 Published August 2015
Competition between species for resources is a fundamental ecological process, which can be modeled by the mathematical models in the chemostat culture or in the water column. The chemostat-type models for resource competition have been extensively analyzed. However, the study on the competition for resources in the water column has been relatively neglected as a result of some technical difficulties. We consider a resource competition model with two species in the water column. Firstly, the global existence and $L^\infty$ boundedness of solutions to the model are established by inequality estimates. Secondly, the uniqueness of positive steady state solutions and some dynamical behavior of the single population model are attained by degree theory and uniform persistence theory. Finally, the structure of the coexistence solutions of the two-species system is investigated by the global bifurcation theory.
Keywords: water column, degree theory, coexistence solution, global bifurcation, uniqueness.
Mathematics Subject Classification: Primary: 35K57, 35B32; Secondary: 35B5.
Citation: Hua Nie, Sze-Bi Hsu, Jianhua Wu. Coexistence solutions of a competition model with two species in a water column. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8) : 2691-2714. doi: 10.3934/dcdsb.2015.20.2691
\(\def\d{\displaystyle} \def\course{Math 228} \newcommand{\f}[1]{\mathfrak #1} \newcommand{\s}[1]{\mathscr #1} \def\N{\mathbb N} \def\B{\mathbf{B}} \def\circleA{(-.5,0) circle (1)} \def\Z{\mathbb Z} \def\circleAlabel{(-1.5,.6) node[above]{$A$}} \def\Q{\mathbb Q} \def\circleB{(.5,0) circle (1)} \def\R{\mathbb R} \def\circleBlabel{(1.5,.6) node[above]{$B$}} \def\C{\mathbb C} \def\circleC{(0,-1) circle (1)} \def\F{\mathbb F} \def\circleClabel{(.5,-2) node[right]{$C$}} \def\A{\mathbb A} \def\twosetbox{(-2,-1.5) rectangle (2,1.5)} \def\X{\mathbb X} \def\threesetbox{(-2,-2.5) rectangle (2,1.5)} \def\E{\mathbb E} \def\O{\mathbb O} \def\U{\mathcal U} \def\pow{\mathcal P} \def\inv{^{-1}} \def\nrml{\triangleleft} \def\st{:} \def\~{\widetilde} \def\rem{\mathcal R} \def\sigalg{$\sigma$-algebra } \def\Gal{\mbox{Gal}} \def\iff{\leftrightarrow} \def\Iff{\Leftrightarrow} \def\land{\wedge} \def\And{\bigwedge} \def\entry{\entry} \def\AAnd{\d\bigwedge\mkern-18mu\bigwedge} \def\Vee{\bigvee} \def\VVee{\d\Vee\mkern-18mu\Vee} \def\imp{\rightarrow} \def\Imp{\Rightarrow} \def\Fi{\Leftarrow} \def\var{\mbox{var}} \def\Th{\mbox{Th}} \def\entry{\entry} \def\sat{\mbox{Sat}} \def\con{\mbox{Con}} \def\iffmodels{\bmodels\models} \def\dbland{\bigwedge \!\!\bigwedge} \def\dom{\mbox{dom}} \def\rng{\mbox{range}} \def\isom{\cong} \DeclareMathOperator{\wgt}{wgt} \newcommand{\vtx}[2]{node[fill,circle,inner sep=0pt, minimum size=4pt,label=#1:#2]{}} \newcommand{\va}[1]{\vtx{above}{#1}} \newcommand{\vb}[1]{\vtx{below}{#1}} \newcommand{\vr}[1]{\vtx{right}{#1}} \newcommand{\vl}[1]{\vtx{left}{#1}} \renewcommand{\v}{\vtx{above}{}} \def\circleA{(-.5,0) circle (1)} \def\circleAlabel{(-1.5,.6) node[above]{$A$}} \def\circleB{(.5,0) circle (1)} \def\circleBlabel{(1.5,.6) node[above]{$B$}} \def\circleC{(0,-1) circle (1)} \def\circleClabel{(.5,-2) node[right]{$C$}} \def\twosetbox{(-2,-1.4) rectangle (2,1.4)} \def\threesetbox{(-2.5,-2.4) rectangle (2.5,1.4)} \def\ansfilename{practice-answers} 
\def\shadowprops{{fill=black!50,shadow xshift=0.5ex,shadow yshift=0.5ex,path fading={circle with fuzzy edge 10 percent}}} \newcommand{\hexbox}[3]{ \def\x{-cos{30}*\r*#1+cos{30}*#2*\r*2} \def\y{-\r*#1-sin{30}*\r*#1} \draw (\x,\y) +(90:\r) -- +(30:\r) -- +(-30:\r) -- +(-90:\r) -- +(-150:\r) -- +(150:\r) -- cycle; \draw (\x,\y) node{#3}; } \renewcommand{\bar}{\overline} \newcommand{\card}[1]{\left| #1 \right|} \newcommand{\twoline}[2]{\begin{pmatrix}#1 \\ #2 \end{pmatrix}} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)
Discrete Mathematics: An Open Introduction
Oscar Levin
Section 0.3 Sets
The most fundamental objects we will use in our studies (and really in all of math) are sets. Much of what follows might be review, but it is very important that you are fluent in the language of set theory. Most of the notation we use below is standard, although some might be a little different than what you have seen before.
For us, a set will simply be an unordered collection of objects. Two examples: we could consider the set of all actors who have played The Doctor on Doctor Who, or the set of natural numbers between 1 and 10 inclusive. In the first case, Tom Baker is an element (or member) of the set, while Idris Elba, among many others, is not an element of the set. Also, the two examples are of different sets. Two sets are equal exactly if they contain the exact same elements. For example, the set containing all of the vowels in the Declaration of Independence is precisely the same set as the set of vowels in the word "questionably" (namely, all of them); we do not care about order or repetitions, just whether the element is in the set or not.
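Python's `set` type behaves the same way, which makes the point easy to demonstrate in a small sketch:

```python
print({1, 2, 3} == {3, 1, 2})   # True: order does not matter
print({1, 1, 2} == {1, 2})      # True: repetition does not matter

# The vowels occurring in "questionably" form the same set
# as the set of all five vowels.
vowels_in_word = set("questionably") & set("aeiou")
print(vowels_in_word == {"a", "e", "i", "o", "u"})  # True
```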
Subsection Notation
We need some notation to make talking about sets easier. Consider,
\begin{equation*} A = \{1, 2, 3\}. \end{equation*}
This is read, "\(A\) is the set containing the elements 1, 2 and 3." We use curly braces "\(\{,~~ \}\)" to enclose elements of a set. Some more notation:
\begin{equation*} a \in \{a, b, c\}. \end{equation*}
The symbol "\(\in\)" is read "is in" or "is an element of." Thus the above means that \(a\) is an element of the set containing the letters \(a\text{,}\) \(b\text{,}\) and \(c\text{.}\) Note that this is a true statement. It would also be true to say that \(d\) is not in that set:
\begin{equation*} d \not\in \{a, b, c\}. \end{equation*}
Be warned: we write "\(x \in A\)" when we wish to express that one of the elements of the set \(A\) is \(x\text{.}\) For example, consider the set,
\begin{equation*} A = \{1, b, \{x, y, z\}, \emptyset\}. \end{equation*}
This is a strange set, to be sure. It contains four elements: the number 1, the letter b, the set \(\{x,y,z\}\text{,}\) and the empty set (\(\emptyset = \{ \}\text{,}\) the set containing no elements). Is \(x\) in \(A\text{?}\) The answer is no. None of the four elements in \(A\) are the letter \(x\text{,}\) so we must conclude that \(x \notin A\text{.}\) Similarly, consider the set \(B = \{1,b\}\text{.}\) Even though the elements of \(B\) are elements of \(A\text{,}\) we cannot say that the set \(B\) is one of the elements of \(A\text{.}\) Therefore \(B \notin A\text{.}\) (Soon we will see that \(B\) is a subset of \(A\text{,}\) but this is different from being an element of \(A\text{.}\))
We have described the sets above by listing their elements. Sometimes this is hard to do, especially when there are a lot of elements in the set (perhaps infinitely many). For instance, if we want \(A\) to be the set of all even natural numbers, we could write,
\begin{equation*} A = \{0, 2, 4, 6, \ldots\}, \end{equation*}
but this is a little imprecise. A better way would be
\begin{equation*} A = \{x \in \N \st \exists n\in \N ( x = 2 n)\}. \end{equation*}
Breaking that down: "\(x \in \N\)" means \(x\) is in the set \(\N\) (the set of natural numbers, \(\{0,1,2,\ldots\}\)), "\(:\)" is read "such that" and "\(\exists n\in \N (x = 2n) \)" is read "there exists an \(n\) in the natural numbers for which \(x\) is two times \(n\)" (in other words, \(x\) is even). Slightly easier might be,
\begin{equation*} A = \{x \st x\text{ is even} \}. \end{equation*}
Note: Sometimes people use \(|\) or \(\backepsilon\) for the "such that" symbol instead of the colon.
Defining a set using this sort of notation is very useful, although it takes some practice to read them correctly. It is a way to describe the set of all things that satisfy some condition (the condition is the logical statement after the "\(\st\)" symbol). Here are some more examples:
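Set-builder notation has a close analogue in the set comprehensions found in several programming languages. Below is a small Python sketch (the language choice and the finite range are assumptions of this illustration — a computer cannot hold all of \(\N\), so an initial segment stands in for it):

```python
# A finite stand-in for the natural numbers N = {0, 1, 2, ...}.
N = set(range(20))

# A = {x in N : there exists n in N with x = 2n}, i.e. the even naturals,
# written as a direct translation of the set-builder definition.
A = {x for x in N if any(x == 2 * n for n in N)}

print(sorted(A))  # the even numbers below 20
```

The comprehension reads almost exactly like the mathematical notation: "the set of all \(x\) in \(N\) such that some \(n\) in \(N\) satisfies \(x = 2n\)."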
Example 0.3.1
Describe each of the following sets both in words and by listing out enough elements to see the pattern.
\(\{x \st x + 3 \in \N\}\text{.}\)
\(\{x \in \N \st x + 3 \in \N\}\text{.}\)
\(\{x \st x \in \N \vee -x \in \N\}\text{.}\)
\(\{x \st x \in \N \wedge -x \in \N\}\text{.}\)
This is the set of all numbers which are 3 less than a natural number (i.e., that if you add 3 to them, you get a natural number). The set could also be written as \(\{-3, -2, -1, 0, 1, 2, \ldots\}\) (note that 0 is a natural number, so \(-3\) is in this set because \(-3 + 3 = 0\)).
This is the set of all natural numbers which are 3 less than a natural number. So here we just have \(\{0, 1, 2,3 \ldots\}\text{.}\)
This is the set of all integers (positive and negative whole numbers, written \(\Z\)). In other words, \(\{\ldots, -2, -1, 0, 1, 2, \ldots\}\text{.}\)
Here we want all numbers \(x\) such that \(x\) and \(-x\) are natural numbers. There is only one: 0. So we have the set \(\{0\}\text{.}\)
We already have a lot of notation, and there is more yet. Below is a handy chart of symbols. Some of these will be discussed in greater detail as we move forward.
\(\emptyset\)
The empty set is the set which contains no elements.
\(\U\)
The universe set is the set of all elements.
\(\N\)
The set of natural numbers. That is, \(\N = \{0, 1, 2, 3\ldots\}\text{.}\)
\(\Z\)
The set of integers. That is, \(\Z = \{\ldots, -2, -1, 0, 1, 2, 3, \ldots\}\text{.}\)
\(\Q\)
The set of rational numbers.
\(\R\)
The set of real numbers.
\(\pow(A)\)
The power set of any set \(A\) is the set of all subsets of \(A\text{.}\)
Set Theory Notation
\(\{, \}\)
We use these braces to enclose the elements of a set. So \(\{1,2,3\}\) is the set containing 1, 2, and 3.
\(\st\)
\(\{x \st x > 2\}\) is the set of all \(x\) such that \(x\) is greater than 2.
\(\in\)
\(2 \in \{1,2,3\}\) asserts that 2 is an element of the set \(\{1,2,3\}\text{.}\)
\(\not\in\)
\(4 \notin \{1,2,3\}\) because 4 is not an element of the set \(\{1,2,3\}\text{.}\)
\(\subseteq\)
\(A \subseteq B\) asserts that \(A\) is a subset of \(B\): every element of \(A\) is also an element of \(B\text{.}\)
\(\subset\)
\(A \subset B\) asserts that \(A\) is a proper subset of \(B\): every element of \(A\) is also an element of \(B\text{,}\) but \(A \ne B\text{.}\)
\(\cap\)
\(A \cap B\) is the intersection of \(A\) and \(B\): the set containing all elements which are elements of both \(A\) and \(B\text{.}\)
\(\cup\)
\(A \cup B\) is the union of \(A\) and \(B\): is the set containing all elements which are elements of \(A\) or \(B\) or both.
\(\times\)
\(A \times B\) is the Cartesian product of \(A\) and \(B\): the set of all ordered pairs \((a,b)\) with \(a \in A\) and \(b \in B\text{.}\)
\(\setminus\)
\(A \setminus B\) is \(A\) set-minus \(B\): the set containing all elements of \(A\) which are not elements of \(B\text{.}\)
\(\bar{A}\)
The complement of \(A\) is the set of everything which is not an element of \(A\text{.}\)
\(\card{A}\)
The cardinality (or size) of \(A\) is the number of elements in \(A\text{.}\)
Investigate!
Find the cardinality of each set below.
\(A = \{3,4,\ldots, 15\}\text{.}\)
\(B = \{n \in \N \st 2 \lt n \le 200\}\text{.}\)
\(C = \{n \le 100 \st n \in \N \wedge \exists m \in \N (n = 2m+1)\}\text{.}\)
Find two sets \(A\) and \(B\) for which \(|A| = 5\text{,}\) \(|B| = 6\text{,}\) and \(|A\cup B| = 9\text{.}\) What is \(|A \cap B|\text{?}\)
Find sets \(A\) and \(B\) with \(|A| = |B|\) such that \(|A\cup B| = 7\) and \(|A \cap B| = 3\text{.}\) What is \(|A|\text{?}\)
Let \(A = \{1,2,\ldots, 10\}\text{.}\) Define \(\mathcal{B}_2 = \{B \subseteq A \st |B| = 2\}\text{.}\) Find \(|\mathcal{B}_2|\text{.}\)
For any sets \(A\) and \(B\text{,}\) define \(AB = \{ab \st a\in A \wedge b \in B\}\text{.}\) If \(A = \{1,2\}\) and \(B = \{2,3,4\}\text{,}\) what is \(|AB|\text{?}\) What is \(|A \times B|\text{?}\)
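The last question above contrasts the elementwise-product set \(AB\) with the Cartesian product \(A \times B\). If you want to check your hand computation afterwards, here is a Python sketch (purely an illustration, not part of the exercise): duplicate products collapse into a single element of \(AB\), while \(A \times B\) keeps every ordered pair.

```python
A = {1, 2}
B = {2, 3, 4}

# AB = {ab : a in A and b in B}; equal products collapse into one element.
AB = {a * b for a in A for b in B}

# A x B keeps every ordered pair (a, b), so nothing can collapse.
AxB = {(a, b) for a in A for b in B}

print(len(AB), len(AxB))  # prints 5 6
```

The product \(4\) arises twice (\(1\cdot 4\) and \(2\cdot 2\)) but appears only once in \(AB\), which is why \(|AB| \lt |A \times B|\) here.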
Subsection Relationships Between Sets
We have already said what it means for two sets to be equal: they have exactly the same elements. Thus, for example,
\begin{equation*} \{1, 2, 3\} = \{2, 1, 3\}. \end{equation*}
(Remember, the order the elements are written down in does not matter.) Also,
\begin{equation*} \{1, 2, 3\} = \{1, 1+1, 1+1+1\} = \{I, II, III\} \end{equation*}
since these are all ways to write the set containing the first three positive integers (how we write them doesn't matter, just what they are).
What about the sets \(A = \{1, 2, 3\}\) and \(B = \{1, 2, 3, 4\}\text{?}\) Clearly \(A \ne B\text{,}\) but notice that every element of \(A\) is also an element of \(B\text{.}\) Because of this we say that \(A\) is a subset of \(B\text{,}\) or in symbols \(A \subset B\) or \(A \subseteq B\text{.}\) Both symbols are read "is a subset of." The difference is that sometimes we want to say that \(A\) is either equal to or is a subset of \(B\text{,}\) in which case we use \(\subseteq\text{.}\) This is analogous to the difference between \(\lt\) and \(\le\text{.}\)
Let \(A = \{1, 2, 3, 4, 5, 6\}\text{,}\) \(B = \{2, 4, 6\}\text{,}\) \(C = \{1, 2, 3\}\) and \(D = \{7, 8, 9\}\text{.}\) Determine which of the following are true, false, or meaningless.
\(A \subset B\text{.}\)
\(B \subset A\text{.}\)
\(B \in C\text{.}\)
\(\emptyset \in A\text{.}\)
\(\emptyset \subset A\text{.}\)
\(A \lt D\text{.}\)
\(3 \in C\text{.}\)
\(3 \subset C\text{.}\)
\(\{3\} \subset C\text{.}\)
False. For example, \(1\in A\) but \(1 \notin B\text{.}\)
True. Every element in \(B\) is an element in \(A\text{.}\)
False. The elements in \(C\) are 1, 2, and 3. The set \(B\) is not equal to 1, 2, or 3.
False. \(A\) has exactly 6 elements, and none of them are the empty set.
True. Everything in the empty set (nothing) is also an element of \(A\text{.}\) Notice that the empty set is a subset of every set.
Meaningless. A set cannot be less than another set.
True. \(3\) is one of the elements of the set \(C\text{.}\)
Meaningless. \(3\) is not a set, so it cannot be a subset of another set.
True. \(3\) is the only element of the set \(\{3\}\text{,}\) and is an element of \(C\text{,}\) so every element in \(\{3\}\) is an element of \(C\text{.}\)
In the example above, \(B\) is a subset of \(A\text{.}\) You might wonder what other sets are subsets of \(A\text{.}\) If you collect all these subsets of \(A\) into a new set, we get a set of sets. We call the set of all subsets of \(A\) the power set of \(A\text{,}\) and write it \(\pow(A)\text{.}\)
Let \(A = \{1,2,3\}\text{.}\) Find \(\pow(A)\text{.}\)
\(\pow(A)\) is a set of sets, all of which are subsets of \(A\text{.}\) So
\begin{equation*} \pow(A) = \{ \emptyset, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1, 3\}, \{2,3\}, \{1,2,3\}\}. \end{equation*}
Notice that while \(2 \in A\text{,}\) it is wrong to write \(2 \in \pow(A)\) since none of the elements in \(\pow(A)\) are numbers! On the other hand, we do have \(\{2\} \in \pow(A)\) because \(\{2\} \subseteq A\text{.}\)
What does a subset of \(\pow(A)\) look like? Notice that \(\{2\} \not\subseteq \pow(A)\) because not everything in \(\{2\}\) is in \(\pow(A)\text{.}\) But we do have \(\{ \{2\} \} \subseteq \pow(A)\text{.}\) The only element of \(\{\{2\}\}\) is the set \(\{2\}\) which is also an element of \(\pow(A)\text{.}\) We could take the collection of all subsets of \(\pow(A)\) and call that \(\pow(\pow(A))\text{.}\) Or even the power set of that set of sets of sets.
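The power set can be generated mechanically. Here is a Python sketch (an illustration under the assumption of finite sets; `frozenset` is used because Python's ordinary sets cannot contain other mutable sets):

```python
from itertools import combinations

def power_set(s):
    """Return the set of all subsets of s, each subset as a frozenset."""
    elems = list(s)
    return {frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

P = power_set({1, 2, 3})
print(len(P))               # 8 subsets, including the empty set and {1,2,3}
print(frozenset({2}) in P)  # True: the set {2} is an element of P(A)
print(2 in P)               # False: the number 2 is not
```

The last two lines mirror the warning in the text: \(\{2\} \in \pow(A)\) but \(2 \notin \pow(A)\).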
Another way to compare sets is by their size. Notice that in the example above, \(A\) has 6 elements and \(B\text{,}\) \(C\text{,}\) and \(D\) all have 3 elements. The size of a set is called the set's cardinality . We would write \(|A| = 6\text{,}\) \(|B| = 3\text{,}\) and so on. For sets that have a finite number of elements, the cardinality of the set is simply the number of elements in the set. Note that the cardinality of \(\{ 1, 2, 3, 2, 1\}\) is 3. We do not count repeats (in fact, \(\{1, 2, 3, 2, 1\}\) is exactly the same set as \(\{1, 2, 3\}\)). There are sets with infinite cardinality, such as \(\N\text{,}\) the set of rational numbers (written \(\mathbb Q\)), the set of even natural numbers, and the set of real numbers (\(\mathbb R\)). It is possible to distinguish between different infinite cardinalities, but that is beyond the scope of this text. For us, a set will either be infinite or finite; if it is finite, then we can determine its cardinality by counting elements.
Find the cardinality of \(A = \{23, 24, \ldots, 37, 38\}\text{.}\)
Find the cardinality of \(B = \{1, \{2, 3, 4\}, \emptyset\}\text{.}\)
If \(C = \{1,2,3\}\text{,}\) what is the cardinality of \(\pow(C)\text{?}\)
Since \(38 - 23 = 15\text{,}\) we can conclude that the cardinality of the set is \(|A| = 16\) (you need to add one since 23 is included).
Here \(|B| = 3\text{.}\) The three elements are the number 1, the set \(\{2,3,4\}\text{,}\) and the empty set.
We wrote out the elements of the power set \(\pow(C)\) above, and there are 8 elements (each of which is a set). So \(\card{\pow(C)} = 8\text{.}\) (You might wonder if there is a relationship between \(\card{A}\) and \(\card{\pow(A)}\) for all sets \(A\text{.}\) This is a good question which we will return to in Chapter 1.)
Subsection Operations On Sets
Is it possible to add two sets? Not really, however there is something similar. If we want to combine two sets to get the collection of objects that are in either set, then we can take the union of the two sets. Symbolically,
\begin{equation*} C = A \cup B, \end{equation*}
read, "\(C\) is the union of \(A\) and \(B\text{,}\)" means that the elements of \(C\) are exactly the elements which are either an element of \(A\) or an element of \(B\) (or an element of both). For example, if \(A = \{1, 2, 3\}\) and \(B = \{2, 3, 4\}\text{,}\) then \(A \cup B = \{1, 2, 3, 4\}\text{.}\)
The other common operation on sets is intersection . We write,
\begin{equation*} C = A \cap B \end{equation*}
and say, "\(C\) is the intersection of \(A\) and \(B\text{,}\)" when the elements in \(C\) are precisely those both in \(A\) and in \(B\text{.}\) So if \(A = \{1, 2, 3\}\) and \(B = \{2, 3, 4\}\text{,}\) then \(A \cap B = \{2, 3\}\text{.}\)
Often when dealing with sets, we will have some understanding as to what "everything" is. Perhaps we are only concerned with natural numbers. In this case we would say that our universe is \(\N\text{.}\) Sometimes we denote this universe by \(\U\text{.}\) Given this context, we might wish to speak of all the elements which are not in a particular set. We say \(B\) is the complement of \(A\text{,}\) and write,
\begin{equation*} B = \bar A \end{equation*}
when \(B\) contains every element not contained in \(A\text{.}\) So, if our universe is \(\{1, 2,\ldots, 9, 10\}\text{,}\) and \(A = \{2, 3, 5, 7\}\text{,}\) then \(\bar A = \{1, 4, 6, 8, 9,10\}\text{.}\)
Of course we can perform more than one operation at a time. For example, consider
\begin{equation*} A \cap \bar B. \end{equation*}
This is the set of all elements which are both elements of \(A\) and not elements of \(B\text{.}\) What have we done? We've started with \(A\) and removed all of the elements which were in \(B\text{.}\) Another way to write this is the set difference :
\begin{equation*} A \cap \bar B = A \setminus B. \end{equation*}
It is important to remember that these operations (union, intersection, complement, and difference) on sets produce other sets. Don't confuse these with the symbols from the previous section (element of and subset of). \(A \cap B\) is a set, while \(A \subseteq B\) is true or false. This is the same difference as between \(3 + 2\) (which is a number) and \(3 \le 2\) (which is false).
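Python's built-in set type implements these operations directly, and, just as in the discussion above, a complement only makes sense relative to an explicit universe. A small sketch (the particular sets are taken from the running example):

```python
U = set(range(1, 11))   # the universe {1, 2, ..., 10}
A = {1, 2, 3, 4, 5, 6}
B = {2, 4, 6}

union        = A | B    # A ∪ B
intersection = A & B    # A ∩ B
difference   = A - B    # A \ B
complement_B = U - B    # the complement of B, relative to U

# A ∩ (complement of B) equals A \ B, as claimed in the text.
print((A & complement_B) == difference)  # True
```

Note that each expression on the right produces another *set*, while a statement like `A <= B` (Python's \(A \subseteq B\)) produces `True` or `False` — the same distinction the paragraph above draws between \(A \cap B\) and \(A \subseteq B\).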
Let \(A = \{1, 2, 3, 4, 5, 6\}\text{,}\) \(B = \{2, 4, 6\}\text{,}\) \(C = \{1, 2, 3\}\) and \(D = \{7, 8, 9\}\text{.}\) If the universe is \(\U = \{1, 2, \ldots, 10\}\text{,}\) find:
\(A \cup B\text{.}\)
\(A \cap B\text{.}\)
\(B \cap C\text{.}\)
\(A \cap D\text{.}\)
\(\bar{B \cup C}\text{.}\)
\(A \setminus B\text{.}\)
\((D \cap \bar C) \cup \bar{A \cap B}\text{.}\)
\(\emptyset \cup C\text{.}\)
\(\emptyset \cap C\text{.}\)
\(A \cup B = \{1, 2, 3, 4, 5, 6\} = A\) since everything in \(B\) is already in \(A\text{.}\)
\(A \cap B = \{2, 4, 6\} = B\) since everything in \(B\) is in \(A\text{.}\)
\(B \cap C = \{2\}\) as the only element of both \(B\) and \(C\) is 2.
\(A \cap D = \emptyset\) since \(A\) and \(D\) have no common elements.
\(\bar{B \cup C} = \{5, 7, 8, 9, 10\}\text{.}\) First we find that \(B \cup C = \{1, 2, 3, 4, 6\}\text{,}\) then we take everything not in that set.
\(A \setminus B = \{1, 3, 5\}\) since the elements 1, 3, and 5 are in \(A\) but not in \(B\text{.}\) This is the same as \(A \cap \bar B\text{.}\)
\((D \cap \bar C) \cup \bar{A \cap B} = \{1, 3, 5, 7, 8, 9, 10\}.\) The set contains all elements that are either in \(D\) but not in \(C\) (i.e., \(\{7,8,9\}\)), or not in both \(A\) and \(B\) (i.e., \(\{1,3,5,7,8,9,10\}\)).
\(\emptyset \cup C = C\) since nothing is added by the empty set.
\(\emptyset \cap C = \emptyset\) since nothing can be both in a set and in the empty set.
You might notice that the symbols for union and intersection slightly resemble the logic symbols for "or" and "and." This is no accident. What does it mean for \(x\) to be an element of \(A\cup B\text{?}\) It means that \(x\) is an element of \(A\) or \(x\) is an element of \(B\) (or both). That is,
\begin{equation*} x \in A \cup B \qquad \Iff \qquad x \in A \vee x \in B. \end{equation*}
Similarly,
\begin{equation*} x \in A \cap B \qquad \Iff \qquad x \in A \wedge x \in B. \end{equation*}
\begin{equation*} x \in \bar A \qquad \Iff \qquad \neg (x \in A). \end{equation*}
which says \(x\) is an element of the complement of \(A\) if \(x\) is not an element of \(A\text{.}\)
There is one more way to combine sets which will be useful for us: the Cartesian product, \(A \times B\). This sounds fancy but is nothing you haven't seen before. When you graph a function in calculus, you graph it in the Cartesian plane. This is the set of all ordered pairs of real numbers \((x,y)\text{.}\) We can do this for any pair of sets, not just the real numbers with themselves.
Put another way, \(A \times B = \{(a,b) \st a \in A \wedge b \in B\}\text{.}\) The first coordinate comes from the first set and the second coordinate comes from the second set. Sometimes we will want to take the Cartesian product of a set with itself, and this is fine: \(A \times A = \{(a,b) \st a, b \in A\}\) (we might also write \(A^2\) for this set). Notice that in \(A \times A\text{,}\) we still want all ordered pairs, not just the ones where the first and second coordinate are the same. We can also take products of 3 or more sets, getting ordered triples, or quadruples, and so on.
Let \(A = \{1,2\}\) and \(B = \{3,4,5\}\text{.}\) Find \(A \times B\) and \(A \times A\text{.}\) How many elements do you expect to be in \(B \times B\text{?}\)
\(A \times B = \{(1,3), (1,4), (1,5), (2,3), (2,4), (2,5)\}\text{.}\)
\(A \times A = A^2 = \{(1,1), (1,2), (2,1), (2,2)\}\text{.}\)
\(|B\times B| = 9\text{.}\) There will be 3 pairs with first coordinate \(3\text{,}\) three more with first coordinate \(4\text{,}\) and a final three with first coordinate \(5\text{.}\)
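The counts in this example can be confirmed with `itertools.product`, which computes Cartesian products directly (a sketch for illustration only):

```python
from itertools import product

A = {1, 2}
B = {3, 4, 5}

AxB = set(product(A, B))         # A × B: all ordered pairs (a, b)
AxA = set(product(A, repeat=2))  # A × A, also written A^2
BxB = set(product(B, repeat=2))  # B × B

print(len(AxB), len(AxA), len(BxB))  # prints 6 4 9
```

In general \(|A \times B| = |A| \cdot |B|\), which matches \(2 \cdot 3 = 6\), \(2 \cdot 2 = 4\), and \(3 \cdot 3 = 9\) here.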
Subsection Venn Diagrams
There is a very nice visual tool we can use to represent operations on sets. A Venn diagram displays sets as intersecting circles. We can shade the region we are talking about when we carry out an operation. We can also represent cardinality of a particular set by putting the number in the corresponding region.
Each circle represents a set. The rectangle containing the circles represents the universe. To represent combinations of these sets, we shade the corresponding region. For example, we could draw \(A \cap B\) as:
Here is a representation of \(A \cap \bar B\text{,}\) or equivalently \(A \setminus B\text{:}\)
A more complicated example is \((B \cap C) \cup (C \cap \bar A)\text{,}\) as seen below.
Notice that the shaded regions above could also be arrived at in another way. We could have started with all of \(C\text{,}\) then excluded the region where \(C\) and \(A\) overlap outside of \(B\text{.}\) That region is \((A \cap C) \cap \bar B\text{.}\) So the above Venn diagram also represents \(C \cap \bar{\left((A\cap C)\cap \bar B\right)}.\) So using just the picture, we have determined that
\begin{equation*} (B \cap C) \cup (C \cap \bar A) = C \cap \bar{\left((A\cap C)\cap \bar B\right)}. \end{equation*}
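An identity read off a Venn diagram can also be double-checked by brute force: build a universe with one element in each of the eight regions of the three-circle diagram and compare both sides. A Python sketch (the bit-encoding of regions is an assumption of this illustration):

```python
# One element per region: bit 0 of x says whether x is in A,
# bit 1 whether x is in B, bit 2 whether x is in C.
U = set(range(8))
A = {x for x in U if x & 1}
B = {x for x in U if x & 2}
C = {x for x in U if x & 4}

lhs = (B & C) | (C - A)      # (B ∩ C) ∪ (C ∩ complement(A))
rhs = C - ((A & C) - B)      # C ∩ complement((A ∩ C) ∩ complement(B))

print(lhs == rhs)  # True
```

Since every possible membership pattern for \(A\text{,}\) \(B\text{,}\) \(C\) is represented by some element of this universe, agreement here shows the two expressions shade exactly the same regions.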
Subsection Exercises
Let \(A = \{1,2,3,4,5\}\text{,}\) \(B = \{3,4,5,6,7\}\text{,}\) and \(C = \{2,3,5\}\text{.}\)
Find \(A \cap B\text{.}\)
Find \(A \cup B\text{.}\)
Find \(A \setminus B\text{.}\)
Find \(A \cap \overline{(B \cup C)}\text{.}\)
Find \(A \times C\text{.}\)
Is \(C \subseteq A\text{?}\) Explain.
Is \(C \subseteq B\text{?}\) Explain.
\(A \cap B = \{3,4,5\}\text{.}\)
\(A \cup B = \{1,2,3,4,5,6,7\}\text{.}\)
\(A \setminus B = \{1,2\}\text{.}\)
\(A \cap \bar{(B \cup C)} = \{1\}\text{.}\)
\(A \times C = \{ (1,2), (1,3), (1,5), (2,2), (2,3), (2,5), (3,2), (3,3), (3,5), (4,2)\text{,}\) \((4,3), (4,5), (5,2), (5,3), (5,5)\}\)
Yes. All three elements of \(C\) are also elements of \(A\text{.}\)
No. There is an element of \(C\text{,}\) namely the element 2, which is not an element of \(B\text{.}\)
Let \(A = \{x \in \N \st 3 \le x \le 13\}\text{,}\) \(B = \{x \in \N \st x \mbox{ is even} \}\text{,}\) and \(C = \{x \in \N \st x \mbox{ is odd} \}\text{.}\)
Find \(B \cap C\text{.}\)
Find \(B \cup C\text{.}\)
Find an example of sets \(A\) and \(B\) such that \(A\cap B = \{3, 5\}\) and \(A \cup B = \{2, 3, 5, 7, 8\}\text{.}\)
Find an example of sets \(A\) and \(B\) such that \(A \subseteq B\) and \(A \in B\text{.}\)
For example, \(A = \{1,2,3\}\) and \(B = \{1,2,3,4,5,\{1,2,3\}\}\)
Recall \(\Z = \{\ldots,-2,-1,0, 1,2,\ldots\}\) (the integers). Let \(\Z^+ = \{1, 2, 3, \ldots\}\) be the positive integers. Let \(2\Z\) be the even integers, \(3\Z\) be the multiples of 3, and so on.
Is \(\Z^+ \subseteq 2\Z\text{?}\) Explain.
Is \(2\Z \subseteq \Z^+\text{?}\) Explain.
Find \(2\Z \cap 3\Z\text{.}\) Describe the set in words, and using set notation.
Express \(\{x \in \Z \st \exists y\in \Z (x = 2y \vee x = 3y)\}\) as a union or intersection of two sets already described in this problem.
\(2\Z \cap 3\Z\) is the set of all integers which are multiples of both 2 and 3 (so multiples of 6). Therefore \(2\Z \cap 3\Z = \{x \in \Z \st \exists y\in \Z(x = 6y)\}\text{.}\)
\(2\Z \cup 3\Z\text{.}\)
Let \(A_2\) be the set of all multiples of 2 except for \(2\text{.}\) Let \(A_3\) be the set of all multiples of 3 except for 3. And so on, so that \(A_n\) is the set of all multiple of \(n\) except for \(n\text{,}\) for any \(n \ge 2\text{.}\) Describe (in words) the set \(\bar{A_2 \cup A_3 \cup A_4 \cup \cdots}\text{.}\)
Draw a Venn diagram to represent each of the following:
\(A \cup \bar B\)
\(\bar{(A \cup B)}\)
\(A \cap (B \cup C)\)
\((A \cap B) \cup C\)
\(\bar A \cap B \cap \bar C\)
\((A \cup B) \setminus C\)
\(A \cup \bar B\text{:}\)
\(\bar{(A \cup B)}\text{:}\)
\(A \cap (B \cup C)\text{:}\)
\((A \cap B) \cup C\text{:}\)
\(\bar A \cap B \cap \bar C\text{:}\)
\((A \cup B) \setminus C\text{:}\)
Describe a set in terms of \(A\) and \(B\) (using set notation) which has the following Venn diagram:
Find the following cardinalities:
\(|A|\) when \(A = \{4,5,6,\ldots,37\}\)
\(|A|\) when \(A = \{x \in \Z \st -2 \le x \le 100\}\)
\(|A \cap B|\) when \(A = \{x \in \N \st x \le 20\}\) and \(B = \{x \in \N \st x \mbox{ is prime} \}\)
Let \(A = \{a, b, c, d\}\text{.}\) Find \(\pow(A)\text{.}\)
We are looking for a set containing 16 sets.
Let \(A = \{1,2,\ldots, 10\}\text{.}\) How many subsets of \(A\) contain exactly one element (i.e., how many singleton subsets are there)? How many doubleton subsets (containing exactly two elements) are there?
Let \(A = \{1,2,3,4,5,6\}\text{.}\) Find all sets \(B \in \pow(A)\) which have the property \(\{2,3,5\} \subseteq B\text{.}\)
Find an example of sets \(A\) and \(B\) such that \(|A| = 4\text{,}\) \(|B| = 5\text{,}\) and \(|A \cup B| = 9\text{.}\)
For example, \(A = \{1,2,3,4\}\) and \(B = \{5,6,7,8,9\}\) gives \(A \cup B = \{1,2,3,4,5,6,7,8,9\}\text{.}\)
Are there sets \(A\) and \(B\) such that \(|A| = |B|\text{,}\) \(|A\cup B| = 10\text{,}\) and \(|A\cap B| = 5\text{?}\) Explain.
In a regular deck of playing cards there are 26 red cards and 12 face cards. Explain, using sets and what you have learned about cardinalities, why there are only 32 cards which are either red or a face card.
\begin{definition}[Definition:State]
Let $G$ be a game whose outcome is determined by the realization of a random variable $X$.
Each of the possible values that can be taken by $X$ is known as a '''state''' of $G$.
\end{definition}
\begin{document}
\title[$L^{p}$ representations of étale groupoids]{Representations of étale groupoids on $L^p$-spaces} \author[Eusebio Gardella]{Eusebio Gardella} \address{Eusebio Gardella\\ Westfälische Wilhelms-Universität Münster, Fachbereich Mathematik, Einsteinstraße 62, 48149 Münster, Germany} \email{[email protected]} \urladdr{http://pages.uoregon.edu/gardella/} \author[Martino Lupini]{Martino Lupini} \address{Martino Lupini\\ Fakultät für Mathematik, Universität Wien, Oskar-Morgenstern-Platz 1, Room 02.126, 1090 Wien, Austria} \email{[email protected]} \urladdr{http://www.lupini.org/} \curraddr{Mathematics Department\\ California Institute of Technology\\ 1200 E. California Blvd\\ MC 253-37\\ Pasadena, CA 91125} \thanks{Eusebio Gardella was supported by the US National Science Foundation through Grant DMS-1101742. Martino Lupini was supported by the York University Elia Scholars Program. This work was completed when the authors were attending the Thematic Program on Abstract Harmonic Analysis, Banach and Operator Algebras at the Fields Institute. The hospitality of the Fields Institute is gratefully acknowledged.} \dedicatory{} \subjclass[2000]{Primary 47L10, 22A22; Secondary 46H05} \keywords{Groupoid, Banach bundle, $L^p$-space, $L^p$-operator algebra, Cuntz algebra}
\begin{abstract} For $p\in (1,\infty)$, we study representations of étale groupoids on $L^{p}$ -spaces. Our main result is a generalization of Renault's disintegration theorem for representations of étale groupoids on Hilbert spaces. We establish a correspondence between $L^{p}$-representations of an étale groupoid $G$, contractive $L^{p}$-representations of $C_{c}(G)$, and tight regular $L^{p}$-representations of any countable inverse semigroup of open slices of $G$ that is a basis for the topology of $G$. We define analogs $ F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ of the full and reduced groupoid C*-algebras using representations on $L^{p}$-spaces. As a consequence of our main result, we deduce that every contractive representation of $F^{p}(G)$ or $F_{\mathrm{red}}^{p}(G)$ is automatically completely contractive. Examples of our construction include the following natural families of Banach algebras: discrete group $L^{p}$-operator algebras, the analogs of Cuntz algebras on $L^{p}$-spaces, and the analogs of AF-algebras on $L^{p} $ -spaces. Our results yield new information about these objects: their matricially normed structure is uniquely determined. More generally, groupoid $L^{p}$-operator algebras provide analogs of several families of classical C*-algebras, such as Cuntz-Krieger C*-algebras, tiling C*-algebras, and graph C*-algebras. \end{abstract}
\maketitle \tableofcontents
\section{Introduction}
Groupoids are a natural generalization of groups, where the operation is no longer everywhere defined. Succinctly, a groupoid can be defined as a small category where every arrow is invertible, with the operations being composition and inversion of arrows. A groupoid is called locally compact when it is endowed with a (not necessarily Hausdorff) locally compact topology compatible with the operations; see \cite{paterson_groupoids_1999}. Any locally compact group is in particular a locally compact groupoid. More generally, one can associate to a continuous action of a locally compact group on a locally compact Hausdorff space the corresponding action groupoid as in \cite{lupini_polish_2014}. This allows one to regard locally compact groupoids as a generalization of topological dynamical systems.
A particularly important class of locally compact groupoids are those where the operations are local homeomorphisms. These are the so-called étale---or $ r$-discrete \cite{renault_groupoid_1980}---groupoids, and constitute the groupoid analog of actions of discrete groups on locally compact spaces. In fact, they can be described in terms of partial actions of inverse semigroups on locally compact spaces; see \cite{exel_inverse_2008}. Alternatively, one can characterize étale groupoids as the locally compact groupoids having an open basis of \emph{bisections}, i.e.\ sets where the source and range maps are injective \cite[Section 3]{exel_inverse_2008}. In the étale case, the set of all open bisections is an inverse semigroup.
The representation theory of étale groupoids on Hilbert spaces has been intensively studied since the seminal work of Renault \cite {renault_groupoid_1980}; see also the monograph \cite {paterson_groupoids_1999}. A representation of an étale groupoid $G$ on a Hilbert space is an assignment $\gamma \mapsto T_{\gamma }$ of an invertible isometry $T_{\gamma }$ between Hilbert spaces to any element $\gamma $ of $G$ . Such an assignment is required to respect the algebraic and measurable structure of the groupoid. The fundamental result of \cite {renault_groupoid_1980} establishes a correspondence between the representations of an étale groupoid $G$ and the nondegenerate $I$-norm contractive representations of $C_{c}(G)$. (The $I$-norm on $C_{c}(G)$ is the analogue of the $L^{1}$-norm for discrete groups. When $G$ is Hausdorff, $C_{c}(G)$ is just the space of compactly-supported continuous functions on $ G$. The non-Hausdorff case is more subtle; see \cite[Definition 3.9] {exel_inverse_2008}.) Moreover, such a correspondence is compatible with the natural notions of equivalence for representations of $G$ and $C_{c}(G)$. In turn, nondegenerate representations of $C_{c}(G)$ correspond to tight regular representations of any countable inverse semigroup $\Sigma $ of open bisections of $G$ that forms a basis for the topology of $G$. Again, such a correspondence preserves the natural notions of equivalence for representations of $C_{c}(G)$ and $\Sigma $. Tightness is a nondegeneracy condition introduced by Exel in \cite[Section 11]{exel_inverse_2008}. In the case when the set $G^{0}$ of objects of $G$ is compact and zero-dimensional, one can take $\Sigma $ to be the inverse semigroup of compact open bisections of $G$. In this case the semilattice $E(\Sigma )$ of idempotent elements of $\Sigma $ is the Boolean algebra of clopen subsets of $G^{0}$, and a representation of $G$ is tight if and only if its restriction to $ E(\Sigma )$ is a Boolean algebra homomorphism.
In this paper, we show how an important chapter in the theory of C*-algebras admits a natural generalization to algebras of operators on $L^{p}$-spaces. We prove that the correspondences described in the paragraph above generalize when one replaces representations on Hilbert spaces with representations on $L^{p}$-spaces for some Hölder exponent $p$ in $(1,\infty )$. For $p=2$, one recovers Renault's and Exel's results. Interestingly, the proofs for $p=2$ and $p\neq 2$ differ drastically. The methods when $p\neq 2$ are based on the characterization of invertible isometries of $L^{p}$-spaces proved by Banach in \cite[Section 5]{banach_theorie_1993}. The result was later generalized by Lamperti to not necessarily surjective isometries between $L^{p}$-spaces \cite{lamperti_isometries_1958}, hence the name Banach-Lamperti theorem.
Following \cite {pisier_completely_1990,merdy_factorization_1996,daws_p-operator_2010} we say that a representation of a matricially normed algebra $A$ on $ L^{p}(\lambda )$ is $p$-\emph{completely contractive }if all its amplifications are contractive when the algebra of $n\times n$ matrices of bounded linear operators on $L^{p}(\lambda )$ is identified with the algebra of bounded linear operators on $L^{p}(\lambda \times c_{n})$. (Here and in the following, $c_{n}$ denotes the counting measure on $n$ points.) If $G$ is an étale groupoid, then the identification between $M_{n}(C_{c}(G))$ and $ C_{c}(G_{n})$ for a suitable amplification $G_{n}$ of $G$ defines matricial norms on the algebra $C_{c}(G)$. As a corollary of our analysis a contractive representation of $C_{c}(G)$ on an $L^{p}$-space is automatically $p$-completely contractive.
In the case of Hilbert space representations, the universal object associated to $C_{c}(G) $ is the groupoid C*-algebra $C^{\ast }(G) $, as defined in \cite[Chapter 3]{paterson_groupoids_1999}. One can also define a reduced version $C_{\mathrm{red}}^{\ast }(G) $ (see \cite[pages 108-109] {paterson_groupoids_1999}), that only considers representations of $C_{c}(G) $ that are induced---in the sense of Rieffel \cite[Appendix D] {paterson_groupoids_1999}---from a Borel probability measure on the space of objects of $G$. Amenability of the groupoid $G$ implies that the canonical surjection from $C^{\ast }(G) $ to $C_{\mathrm{red}}^{\ast }(G) $ is an isomorphism. In the case when $G$ is a countable discrete group, these objects are the usual full and reduced group C*-algebras.
A similar construction can be performed for an arbitrary $p$ in $(1,\infty )$ , and the resulting universal objects are the full and reduced groupoid $ L^{p}$-operator algebras $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ of $G$. When $G$ is a countable discrete group, these are precisely the full and reduced group $L^{p}$-operator algebras of $G$ as defined in \cite {phillips_crossed_2013}; see also \cite{gardella_group_2014}. When $G$ is the groupoid associated with a Bratteli diagram as in \cite[Section 2.6] {renault_c*-algebras_2009}, one obtains the spatial $L^{p}$-analog of an AF C*-algebra; see \cite{phillips_analogs_2014}. (The $L^{p}$-analogs of UHF C*-algebras are considered in \cite {phillips_simplicity_2013,phillips_isomorphism_2013}.) When $G$ is one of the Cuntz groupoids defined in \cite[Section 2.5]{renault_c*-algebras_2009}, one obtains the $L^{p}$-analogs of the corresponding Cuntz algebra from \cite {phillips_analogs_2012,phillips_simplicity_2013,phillips_isomorphism_2013}. \newline \indent More generally, this construction provides several new examples of $ L^{p}$-analogs of \textquotedblleft classical\textquotedblright\ C*-algebras, such as Cuntz-Krieger algebras, graph algebras, and tiling C*-algebras (all of which can be realized as groupoid C*-algebras for a suitable étale groupoid; see \cite{kumjian_graphs_1997} and \cite {paterson_groupoids_1999}). It is worth mentioning here that there seems to be no known example of a nuclear C*-algebra that cannot be described as the enveloping C*-algebra of a locally compact groupoid.
The groupoid perspective pursued in this paper helps clarify what the well-behaved representations of algebraic objects---such as the Leavitt algebras, Bratteli diagrams, or graphs---on $L^{p}$-spaces are. In \cite{phillips_analogs_2012,phillips_simplicity_2013,phillips_isomorphism_2013}, several characterizations are given for well-behaved representations of Leavitt algebras and stationary Bratteli diagrams. The fundamental property considered therein is the uniqueness of the norm that they induce. The groupoid approach shows that these representations are precisely those coming from representations of the associated groupoid or, equivalently, of its inverse semigroup of open bisections.
Another upshot of the present work is that the groupoid $L^{p}$-operator algebras $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ satisfy an automatic $p$-complete contractiveness property for contractive homomorphisms into other $L^{p}$-operator algebras. In fact, $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ have canonical matrix norms. This matrix norm structure satisfies the $L^{p}$-analog of Ruan's axioms for operator spaces as defined in \cite[Subsection 4.1]{daws_p-operator_2010}, building on \cite{pisier_completely_1990,merdy_factorization_1996}. Using the terminology of \cite[Subsection 4.1]{daws_p-operator_2010}, this turns the algebras $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ into $p$-operator systems in which the multiplication is $p$-completely contractive. It is a corollary of our main results that any contractive representation of these algebras on an $L^{p}$-space is automatically $p$-completely contractive. As a consequence, the matrix norms on $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ are uniquely determined---as is the case for C*-algebras.
It is still not clear what the well-behaved algebras of operators on $L^{p}$-spaces are. Informally speaking, these should be the $L^{p}$-operator algebras that behave like C*-algebras. The results in this paper provide strong evidence that $L^{p}$-operator algebras of the form $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ for some étale groupoid $G$ indeed behave like C*-algebras. Besides having the automatic complete contractiveness property for contractive representations on $L^{p}$-spaces, another property that $F^{p}(G)$ and $F_{\mathrm{red}}^{p}(G)$ share with C*-algebras is being generated by spatial partial isometries as defined in \cite{phillips_analogs_2012}. These are the partial isometries whose support and range idempotents are hermitian operators in the sense of \cite{lumer_semi-inner-product_1961}; see also \cite{berkson_hermitian_1972}. (In the C*-algebra case, the hermitian idempotents are precisely the orthogonal projections.) In particular, this property forces the algebra to be a C*-algebra in the case $p=2$. (A stronger property holds for unital C*-algebras, namely being generated by invertible isometries; see \cite[Theorem~II.3.2.16]{blackadar_operator_2006}. As observed by Chris Phillips, this property turns out to fail for some important examples of algebras of operators on $L^{p}$-spaces, such as the $L^{p}$-analog of the Toeplitz algebra.)
The present work indicates that the properties of being generated by spatial partial isometries, and having automatic complete contractiveness for representations on $L^{p}$-spaces, are very natural requirements for an $ L^{p}$-operator algebra to behave like a C*-algebra.
We believe that the results of this paper are a step towards a successful identification of those properties that characterize the class of \textquotedblleft well-behaved\textquotedblright\ $L^{p}$-operator algebras.
\subsection{Notation}
\label{Subsection: notation} We denote by $\omega $ the set of natural numbers including $0$. An element $n\in \omega $ will be identified with the set $\{0,1,\ldots ,n-1\}$ of its predecessors. (In particular, 0 is identified with the empty set.) We will therefore write $j\in n$ to mean that $j$ is a natural number and $j<n$.\newline \indent For $n\in \omega $ or $n=\omega $, we denote by $c_{n}$ the counting measure on $n$. We denote by $\mathbb{Q}(i)^{\oplus \omega }$ the set of all sequences $(\alpha _{n})_{n\in \omega }$ of complex numbers in $\mathbb{Q} (i) $ such that $\alpha _{n}=0$ for all but finitely many indices $n\in \omega $.\newline \indent All Banach spaces will be \emph{reflexive}, and will be endowed with a (Schauder)\emph{\ basis}. We recall here some terminology concerning bases in Banach spaces. For more information and details we refer the reader to the monographs \cite{carothers_short_2005,megginson_introduction_1998}. Suppose that $(b_{n})_{n\in \omega }$ is a basis of a Banach space $Z$. We denote by $(b_{n}^{\prime })_{n\in \omega }$ the associated sequence of \emph{coefficient functionals}. If $k\in \omega $, then $b_{k}^{\prime}$ is the element of the dual space $Z^{\prime }$ of $Z$ that maps $z\in Z$ to the $k$-th coefficient of $z$ with respect to the basis $(b_{n})_{n\in \omega }$ . If $Z$ is reflexive, then $(b_{n}^{\prime })_{n\in \omega }$ is a basis for $Z^{\prime }$ \cite[Proposition 5.3]{singer_bases_1970}. The basis $ (b_{n})_{n\in \omega }$ is:
\begin{itemize} \item \emph{unconditional }if, for every $x\in Z$, the series $\sum_{n}b_{n}^{\prime }(x)b_{n}$ converges unconditionally to $x$ (this means that for any bijection $\pi\colon \omega \rightarrow \omega $ the series $\sum_{n}b_{\pi (n)}^{\prime }(x)b_{\pi (n)}$ converges to $x$);
\item \emph{normal }if $\left\| b_{n}\right\| =\left\| b_{n}^{\prime
}\right\| =1$ for every $n\in \omega $;
\item \emph{boundedly complete }if the series $\sum\limits_{n\in \omega }\lambda _{n}b_{n}$ converges in $Z$ whenever it has uniformly bounded partial sums. \end{itemize}
We say that a positive real number $K>0$ is a \emph{basis constant} for $
(b_{n})_{n\in \omega }$ if we have $\left\|\sum_{i\in n}\langle x, b_{i}^{\prime }\rangle b_i\right\|\leq K \|x\|$ for all $x\in Z$ and all $ n\in\omega$. It is furthermore an \emph{unconditional basis constant }if $
\left\|\sum_{i\in A}\langle x, b_{i}^{\prime }\rangle b_i\right\|\leq K
\|x\| $ for all $x\in Z$ and every finite subset $A$ of $\omega$. Every basis has a basis constant $K$ \cite[Theorem 3.1]{carothers_short_2005}, and every unconditional basis has an unconditional basis constant \cite[ Proposition 4.2.29]{megginson_introduction_1998}. By \cite[Theorem~7.4] {carothers_short_2005}, every basis of a reflexive Banach space is boundedly complete.
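To illustrate these notions, consider the sequence space $\ell ^{p}$ for $p\in (1,\infty )$ with its standard basis $(e_{n})_{n\in \omega }$, where $e_{n}$ is the sequence with $1$ in position $n$ and $0$ elsewhere. The associated coefficient functionals are the coordinate functionals, which form the standard basis of $\ell ^{p^{\prime }}$, so this basis is normal. It is unconditional with unconditional basis constant $1$, since for every $x\in \ell ^{p}$ and every finite subset $A$ of $\omega $ we have \begin{equation*} \left\Vert \sum_{i\in A}\langle x,e_{i}^{\prime }\rangle e_{i}\right\Vert _{p}=\Big( \sum_{i\in A}|x_{i}|^{p}\Big) ^{1/p}\leq \left\Vert x\right\Vert _{p}\text{,} \end{equation*} and, since $\ell ^{p}$ is reflexive, it is moreover boundedly complete.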
All Borel spaces will be \emph{standard}. For a standard Borel space $X$, we denote by $B(X)$ the space of complex-valued bounded Borel functions on $X$, and by $\mathcal{B}(X)$ the $\sigma $-algebra of Borel subsets of $X$. For a Borel measure $\mu $ on a standard Borel space $X$, we denote by $\mathcal{B} _{\mu }$ the \emph{measure algebra} of $\mu $. This is the quotient of the Boolean algebra $\mathcal{B}(X)$ of Borel subsets of $X$ by the ideal of $ \mu $-null Borel subsets. By \cite[Exercise 17.44]{kechris_classical_1995} $ \mathcal{B}_{\mu }$ is a complete Boolean algebra. The characteristic function of a set $F$ will be denoted by $\chi _{F}$.
Given a measure space $(X,\mu )$ and a Hölder exponent $p\in (1,\infty )$, we will denote the Lebesgue space $L^{p}(X,\mu )$ simply by $L^{p}(\mu )$. Recall that $L^{p}(\mu )$ is separable precisely if there is a $\sigma $-finite Borel measure $\lambda $ on a standard Borel space $Z$ such that $L^{p}(\lambda )$ is isometrically isomorphic to $L^{p}(\mu )$. Moreover, there exists $n\in \omega \cup \{\omega \}$ such that $( Z,\lambda) $ is Borel-isomorphic to $([0,1]\sqcup n,\nu \sqcup c_{n})$, where $\nu $ is the Lebesgue measure on $[0,1]$. The push-forward of a measure $\mu $ under a function $\phi $ will be denoted by $\phi _{\ast }\mu $ or $\phi _{\ast }(\mu )$.
If $X$ and $Z$ are Borel spaces, we say that $Z$ is \emph{fibred} over $X$ if there is a Borel surjection $q\colon Z\rightarrow X$. In this case, we call $q$ the \emph{fiber map}. A \emph{section }of $Z$ is a map $\sigma \colon X\rightarrow Z$ such that $q\circ \sigma $ is the identity map of $X$ . For $x\in X$, we denote the value of $\sigma $ at $x$ by $\sigma _{x}$, and the fiber $q^{-1}(\{x\})$ over $x$ is denoted by $Z_{x}$. If $Z^{(0)}$ and $Z^{(1)}$ are Borel spaces fibred over $X$ via fiber maps $q^{(0)}$ and $ q^{(1)}$ respectively, then their \emph{fiber product} $Z^{(0)}\ast Z^{(1)}$ is the Borel space fibred over $X$ defined by $Z^{(0)}\ast Z^{(1)}=\{(z^{(0)},z^{(1)})\colon q^{(0)}(z^{(0)})=q^{(1)}(z^{(1)})\}$.
If $E$ and $F$ are Banach spaces, we will denote by $B(E,F)$ the Banach space of bounded linear maps from $E$ to $F$. When $E=F$, we abbreviate $B(E,E)$ to just $B(E)$. Despite the apparent notational conflict with the set of Borel functions $B(X)$ on a measurable space $X$, confusion is unlikely to arise, and it will always be clear from the context what $B(\cdot )$ means. Given a Banach space $E$, its dual space will be denoted by $E^{\prime }$. Similarly, if $T\colon E\rightarrow F$ is a bounded linear operator between Banach spaces $E$ and $F$, its dual map will be denoted by $T^{\prime }\colon F^{\prime }\rightarrow E^{\prime }$. Finally, if $p\in (1,\infty )$, we will write $p^{\prime }$ for its conjugate Hölder exponent, which satisfies $\frac{1}{p}+\frac{1}{p^{\prime }}=1$. (We will reserve the letter $q$ for fiber maps.) We exclude $p=1$ in our analysis mostly for convenience, because we use duality in many situations. We do not know whether the results of this paper carry over to the case $p=1$.
\section{Borel Bundles of Banach spaces}
\label{Section: Banach bundles}
\begin{definition} \label{definition Banach bundles}
Let $X$ be a Borel space. A (standard)\emph{\ Borel Banach bundle} over $X$ is a Borel space $\mathcal{Z}$ fibred over $X$ together with
\begin{enumerate} \item Borel maps $+\colon \mathcal{Z}\ast \mathcal{Z}\rightarrow \mathcal{Z}$ , $\cdot \colon \mathbb{C}\times \mathcal{Z}\rightarrow \mathcal{Z}$, and $
\| \cdot \| \colon \mathcal{Z}\rightarrow \mathbb{C}$,
\item a Borel section $\mathbf{0}\colon X\rightarrow \mathcal{Z}$,
\item a Borel function $d\colon X\rightarrow \omega \cup \{ \omega \} $, and
\item a sequence $\left( \sigma _{n}\right) _{n\in \omega }$ of Borel sections $\sigma _{n}\colon X\rightarrow \mathcal{Z}$ \end{enumerate}
such that the following holds:
\begin{itemize} \item $\mathcal{Z}_{x}$ is a reflexive Banach space of dimension $d(x)$ with zero element $\mathbf{0}_{x}$ for every $x\in X$;
\item there is $K>0$ such that, for every $x\in X$, the sequence $\left( \sigma _{n,x}\right) _{n\in d(x)}$ is a basis of $\mathcal{Z}_{x}$ with basis constant $K$, and the sequence $(\sigma _{n,x}^{\prime })_{n\in d(x)}$ is a basis of $\mathcal{Z}_{x}^{\prime }$ with basis constant $K$. \end{itemize} \end{definition}
In the definition above $(\sigma _{n,x}^{\prime })_{n\in d(x)}$ is the sequence of coefficient functionals associated with the basis $(\sigma _{n,x})_{n\in d(x)}$ of $\mathcal{Z}_{x}$. We say that the sequence $ (\sigma _{n})_{n\in \omega }$ is a \emph{basic sequence} for $\mathcal{Z}$ with \emph{basis constant} $K$. If furthermore there exists $K>0$ such that for every $x\in X$, the sequences $(\sigma _{n,x})_{n\in d(x)}$ and $\left( \sigma _{n,x}^{\prime }\right) _{n\in d(x)}$ are unconditional bases of $ \mathcal{Z}_{x}$ and $\mathcal{Z}_{x}^{\prime }$ with unconditional basis constant $K$, then we say that $(\sigma _{n})_{n\in \omega }$ is an \emph{ unconditional basic sequence }with unconditional basis constant $K$. Finally, we say that $(\sigma _{n})_{n\in \omega }$ is a \emph{normal basic sequence} if $\left\Vert \sigma _{n,x}\right\Vert =\left\Vert \sigma _{n,x}^{\prime }\right\Vert =1$ for every $n\in \omega $ and $x\in X$.
\begin{example}[Constant bundles] Let $X$ be a Borel space, let $Z$ be a reflexive Banach space, and set $ \mathcal{Z}=X\times Z$. Then $\mathcal{Z}$ with the product Borel structure is naturally a Borel Banach bundle, where each fiber $\mathcal{Z}_{x}$ is isomorphic to $Z$. In the particular case when $Z$ is the field of complex numbers, this is called the \emph{trivial bundle} over $X$. \end{example}
\begin{example}[Disjoint unions] \label{Example:disjoint-union}Suppose that, for $i\in \left\{ 0,1\right\} $, $\mathcal{Z}_{i}$ is a Borel Banach bundle on the Borel space $X_{i}$. Then one can consider the disjoint union $X_{0}\sqcup X_{1}$, which is endowed with a canonical Borel structure. One can then consider the \emph{disjoint union bundle} $\mathcal{Z}:=\mathcal{Z}_{0}\sqcup \mathcal{Z}_{1}$, which is a Borel Banach bundle over $X_{0}\sqcup X_{1}$ defined by $\mathcal{Z}_{x}:=( \mathcal{Z}_{i})_{x}$ for $x\in X_{i}$. \end{example}
\begin{remark} \label{Remark:dimension-fibers}In view of Example \ref{Example:disjoint-union}, in the development of the theory of Borel Banach bundles one can assume without loss of generality that the fibers $\mathcal{Z}_{x}$ of the bundle $\mathcal{Z}$ have fixed dimension $d$ independent of $x\in X$. Indeed, an arbitrary Borel Banach bundle is a disjoint union, in the sense of Example \ref{Example:disjoint-union}, of bundles whose fibers have a fixed dimension. In view of this observation, we will consider in the following Borel Banach bundles with fibers of a fixed dimension $d\in \omega \cup \left\{ \omega \right\} $. Furthermore, to fix ideas, we only consider the case when $d$ is infinite, which is the most interesting one. \end{remark}
Let $q\colon \mathcal{Z}\rightarrow X$ be a Borel Banach bundle. Then the space of Borel sections of $\mathcal{Z}$ has a natural structure of $B(X)$-module. Accordingly, if $\xi $, $\xi _{1}$, and $\xi _{2}$ are Borel sections of $\mathcal{Z}$ and $f\in B(X)$, we denote by $\xi _{1}+\xi _{2}$ and $f\xi $ the Borel sections given by $\left( \xi _{1}+\xi _{2}\right) _{x}=(\xi _{1})_{x}+(\xi _{2})_{x}\ $and $\left( f\xi \right) _{x}=f(x)\xi _{x}$ for every $x$ in $X$. If $E$ is a Borel subset of $X$, then $q^{-1}(E)$ is canonically a Borel Banach bundle over $E$, called the \emph{restriction }of $\mathcal{Z}$ to $E$, and denoted by $\mathcal{Z}|_{E}$.
\begin{remark} A Borel Banach bundle where each fiber is a Hilbert space is called a \emph{Borel Hilbert bundle}. Such bundles (usually called just Hilbert bundles) are the key notion in the study of representations of groupoids on Hilbert spaces; see \cite[Appendix F]{williams_crossed_2007}, \cite[Section 3.1]{paterson_groupoids_1999}, and \cite[Section 2]{ramsay_virtual_1971}. The Gram-Schmidt process shows that a Borel Hilbert bundle $\mathcal{H}$ over $X$ always has a basic sequence $(\sigma _{n})_{n\in \omega }$ such that for all $x$ in $X$, the sequence $(\sigma _{n,x})_{n\in \omega }$ is an \emph{orthonormal basis }of $\mathcal{H}_{x}$. \end{remark}
A similar formalism is used by Lafforgue to study semi-continuous bundles of Banach spaces in duality \cite {lafforgue_k-theorie_2002,lafforgue_k-theorie_2007}.
\subsection{Canonical Borel structures\label{Subsection: canonical Borel structures}}
Let $X$ be a Borel space, and let $\mathcal{Z}$ be a set (with no Borel structure) fibred over $X$. Assume there are operations \begin{equation*} +\colon \mathcal{Z}\ast \mathcal{Z}\rightarrow \mathcal{Z}\ \ ,\ \ \cdot \colon \mathbb{C}\times \mathcal{Z}\rightarrow \mathcal{Z}\ \ \mbox{ and }\ \ \Vert \cdot \Vert \colon \mathcal{Z}\rightarrow \mathbb{C}, \end{equation*} making each fiber a \emph{reflexive }Banach space. In this situation, we will say that $\mathcal{Z}$ is a \emph{bundle of Banach spaces} over $X$, and will denote it by $\bigsqcup\nolimits_{x\in X}\mathcal{Z}_{x}$. Let $ \mathcal{Z}^{\prime }$ be the set of pairs $\left( x,v\right) $ for $x\in X$ and $v\in \mathcal{Z}_{x}^{\prime }$. Then $\mathcal{Z}^{\prime }$ is also a bundle of Banach spaces over $X$. Suppose further that there exist $K>0$ and a sequence $(\sigma _{n})_{n\in \omega }$ of sections $\sigma _{n}\colon X\rightarrow \mathcal{Z}$ such that, for every $x\in X$, the sequence $ (\sigma _{n,x})_{n\in \omega }$ is a basis of $\mathcal{Z}_{x}$ with associated sequence of coefficient functionals $(\sigma _{n,x}^{\prime })_{n\in \omega }$ and with basis constant $K$. Assume that for every $m\in \omega $ and every sequence $(\alpha _{j})_{j\in m}$ in $\mathbb{Q} (i)^{\oplus m}$, the map $X\rightarrow \mathbb{R}$ given by $x\mapsto \left\Vert \sum\limits_{j\in m}\alpha _{j}\sigma _{j,x}\right\Vert$ is Borel. Set \begin{equation*} Z=\left\{ \left( x,(\alpha _{n})_{n\in \omega }\right) \in X\times \mathbb{C} ^{\omega }\colon \sup_{m\in \omega }{}\left\Vert \sum\limits_{j\in m}\alpha _{j}\sigma _{j,x}\right\Vert <\infty \right\} \text{.} \end{equation*}
We claim that $Z$ is a Borel subset of $X\times \mathbb{C}^{\omega }$. To see this, note that a pair $\left( x,(\alpha _{n})_{n\in \omega }\right) $ in $X\times \mathbb{C}^{\omega }$ belongs to $Z$ if and only if there is $N\in \omega $ such that for every $m,k\in \omega $ there is $(\beta _{j})_{j\in m}$ in $\mathbb{Q}(i)^{\oplus m}$ such that $\max_{j\in m}\left\vert \alpha _{j}-\beta _{j}\right\vert \leq \frac{1}{2^{k}}\ $and $\left\Vert \sum\nolimits_{j\in m}\beta _{j}\sigma _{j,x}\right\Vert {}<N$. In other words, $Z$ can be written as \begin{equation*} Z=\bigcup_{N\in \omega }\bigcap_{m,k\in \omega }\bigcup_{\left( \beta _{j}\right) _{j\in m}\in \mathbb{Q}(i)^{\oplus m}}Z(N,m,k,\left( \beta _{j}\right) _{j\in m})\text{,} \end{equation*} where $Z(N,m,k,\left( \beta _{j}\right) _{j\in m})$ is the set of pairs $\left( x,(\alpha _{n})_{n\in \omega }\right) $ in $X\times \mathbb{C}^{\omega }$ such that $\max_{j\in m}\left\vert \alpha _{j}-\beta _{j}\right\vert \leq \frac{1}{2^{k}}\ $and $\left\Vert \sum\nolimits_{j\in m}\beta _{j}\sigma _{j,x}\right\Vert {}<N $. Since the map $x\mapsto \left\Vert \sum\nolimits_{j\in m}\beta _{j}\sigma _{j,x}\right\Vert $ is Borel, $Z(N,m,k,\left( \beta _{j}\right) _{j\in m})$ is a Borel subset of $X\times \mathbb{C}^{\omega }$. Since Borel sets form a $\sigma $-algebra, this shows that $Z$ is Borel.
The assignment \begin{equation*} \left( x,(\alpha _{n})_{n\in \omega }\right) \mapsto \sum\limits_{n\in \omega }\alpha _{n}\sigma _{n,x} \end{equation*} induces a bijection $Z\rightarrow \mathcal{Z}$ since, for every $x\in X$, the sequence $(\sigma _{n,x})_{n\in \omega }$ is a boundedly complete basis of $\mathcal{Z}_{x}$ \cite[Definition 4.4.8]{megginson_introduction_1998}. (We are using here the fact that every basis of a reflexive Banach space is boundedly complete \cite[Theorem 4.4.15]{megginson_introduction_1998}.) This bijection induces a standard Borel structure on $\mathcal{Z}$, and it is not difficult to verify that such a Borel structure turns $\mathcal{Z}$ into a Borel Banach bundle. A similar argument shows that the set \begin{equation*} Z^{\prime }=\left\{ \left( x,(\alpha _{n})_{n\in \omega }\right) \in X\times \mathbb{C}^{\omega }\colon \sup_{m\in \omega }{}\left\Vert \sum\limits_{j\in m}\alpha _{j}\sigma _{j,x}^{\prime }\right\Vert <\infty \right\} \end{equation*} is Borel, and that the map from $Z^{\prime }$ to $\mathcal{Z}^{\prime }$ given by $\left( x,(\alpha _{n})_{n\in \omega }\right) \mapsto \sum\nolimits_{n\in \omega }\alpha _{n}\sigma _{n,x}^{\prime }$ is a bijection. This induces a standard Borel structure on $\mathcal{Z}^{\prime }$ that makes $\mathcal{Z}^{\prime }$ a Borel Banach bundle. It follows from the definition of the Borel structures on $\mathcal{Z}$ and $\mathcal{Z} ^{\prime }$, that the canonical pairing $\mathcal{Z}\ast \mathcal{Z}^{\prime }\rightarrow \mathbb{C}$ is Borel. In fact, for $\left( x,(\alpha _{n})_{n\in \omega }\right) \in Z$ and $\left( x,(\beta _{n})_{n\in \omega }\right) \in Z^{\prime }$, we have \begin{equation*} \left\langle \sum\limits_{n\in \omega }\alpha _{n}\sigma _{n,x},\sum\limits_{m\in \omega }\beta _{m}\sigma _{m,x}^{\prime }\right\rangle =\sum\limits_{n\in \omega }\alpha _{n}\beta _{n}\text{.} \end{equation*}
The standard Borel structures on $\mathcal{Z}$ and $\mathcal{Z}^{\prime }$ described above will be referred to as the \emph{canonical Borel structures} associated with the sequence $(\sigma _{n})_{n\in \omega }$ of Borel sections $X\rightarrow \mathcal{Z}$. By \cite[Theorem~14.12] {kechris_classical_1995}, these can be equivalently described as the\emph{\ } Borel structures generated by the sequence of functionals on $\mathcal{Z}$ and $\mathcal{Z}^{\prime }$ given by $z\mapsto \left\langle z,\sigma _{n,q(z)}^{\prime }\right\rangle \ $and$\ \ w\mapsto \left\langle \sigma _{n,q(w)},w\right\rangle $ for $n\in \omega $.
As a consequence of the previous discussion, we conclude that if $\mathcal{Z} $ is a Borel Banach bundle, then the Borel structure on $\mathcal{Z}$ is generated by the sequence of maps $\mathcal{Z}\rightarrow \mathbb{C}$ given by $z\mapsto \left\langle z,\sigma _{n,q(z)}^{\prime }\right\rangle $ for $n$ in $\omega $. Moreover, the dual bundle $\mathcal{Z}^{\prime }$ has a \emph{ unique }Borel Banach bundle structure making the canonical pairing Borel. In the following, whenever $\mathcal{Z}$ is a Borel Banach bundle, we will always consider $\mathcal{Z}^{\prime }$ as a Borel Banach bundle endowed with such a canonical Borel structure.\newline \indent The following criterion to endow a Banach bundle with a Borel structure is an immediate consequence of the observations contained in this subsection.
\begin{lemma} \label{Lemma: canonical Borel structure} Let $(Z_{k})_{k\in \omega }$ be a sequence of reflexive Banach spaces. For every $k\in \omega $, let $ (b_{n,k})_{n\in \omega }$ be a basis of $Z_{k}$ with associated sequence of coefficient functionals $(b_{n,k}^{\prime })_{n\in \omega }$, and suppose that both $(b_{n,k})_{n\in \omega }$ and $(b_{n,k}^{\prime })_{n\in \omega }$ have basis constant $K$ independent of $k$. Let $\mathcal{Z}$ be a bundle of Banach spaces over $X$, and assume there exist a Borel partition $ (X_{k})_{k\in \omega }$ of $X$, and isometric isomorphisms $\psi _{x}\colon Z_{k}\rightarrow \mathcal{Z}_{x}$ and $\psi _{x}^{\prime }\colon Z_{k}^{\prime }\rightarrow \mathcal{Z}_{x}^{\prime }$ for $k\in \omega $ and $x\in X_{k}$. For $k,n\in \omega $ and $x\in X_{k}$, set $\sigma _{n,x}=\psi _{x}(b_{n,k})$ and $\sigma _{n,x}^{\prime }=\psi _{x}^{\prime }(b_{n,k}^{\prime })$. Then there are unique Borel Banach bundle structures on $\mathcal{Z}$ and $\mathcal{Z}^{\prime }$ such that $(\sigma _{n})_{n\in \omega }$ and $\left( \sigma _{n}^{\prime }\right) _{n\in \omega }$ are basic sequences, and such that the canonical pairing between $\mathcal{Z}$ and $\mathcal{Z}^{\prime }$ is Borel. \end{lemma}
\subsection{Banach space valued \texorpdfstring{$L^p$}{Lp}-spaces\label {Section: Banach Lp spaces}}
For the remainder of this section, we fix a Borel Banach bundle $q\colon \mathcal{Z}\rightarrow X$ over the standard Borel space $X$, a basic sequence $(\sigma _{n})_{n\in \omega }$ of $\mathcal{Z}$ with basis constant $K$, a $\sigma $-finite Borel measure $\mu $ on $X$, and a Hölder exponent $ p\in (1,\infty )$.
Denote by $\mathcal{L}^{p}(X,\mu ,\mathcal{Z})$ the space of Borel sections $ \xi \colon X\rightarrow \mathcal{Z}$ such that \begin{equation*}
N_{p}(\xi )^{p}=\int \left\| \xi _{x}\right\| ^{p}\ d\mu (x)<\infty . \end{equation*} It follows from the Minkowski inequality that $\mathcal{L}^{p}(X,\mu , \mathcal{Z})$ is a seminormed complex vector space. We denote by $ L^{p}(X,\mu ,\mathcal{Z})$ the normed space obtained as a quotient of the seminormed space $\left( \mathcal{L}^{p}(X,\mu ,\mathcal{Z}),N_{p}\right) $. When $\mathcal{Z}$ is the trivial bundle over $X$, then $L^{p}(X,\mu , \mathcal{Z})$ coincides with the Banach space $L^{p}(X,\mu )$. Consistently, we will abbreviate $\mathcal{L}^{p}(X,\mu ,\mathcal{Z})$ and $L^{p}(X,\mu , \mathcal{Z})$ to $\mathcal{L}^{p}(\mu ,\mathcal{Z})$ and $L^{p}(\mu , \mathcal{Z})$, respectively.
The usual Riesz--Fischer-type argument \cite[Theorem 4.8]{brezis_functional_2011} shows that $L^{p}\left( \mu ,\mathcal{Z}\right) $ is a Banach space; see \cite[Theorem 1.1.8]{hayes_direct}. A standard argument (see \cite[Theorem 1.1.8]{hayes_direct}) proves the analog of \cite[Theorem 4.9]{brezis_functional_2011} in this context: convergence in $L^{p}\left( \mu ,\mathcal{Z}\right)$ implies the existence of a subsequence that converges almost everywhere.
As it is customary, we will identify an element of $\mathcal{L}^{p}(\mu , \mathcal{Z})$ with its image in the quotient $L^{p}(\mu ,\mathcal{Z})$. We will also write $\Vert \cdot \Vert _{p}$, or just $\Vert \cdot \Vert $ if no confusion is likely to arise, for the norm on $L^{p}(\mu ,\mathcal{Z})$ induced by $N_{p}$.
\begin{proposition} \label{Proposition: basis} Let $\xi \in L^{p}(\mu,\mathcal{Z}) $. Then:
\begin{enumerate} \item The function $\left\langle \xi ,\sigma _{n}^{\prime }\right\rangle \colon X\rightarrow \mathbb{R}$ defined by $x\mapsto \left\langle \xi _{x},\sigma _{n,x}^{\prime }\right\rangle $ belongs to $\mathcal{L}^{p}(\mu )$;
\item The sequence $\left(\sum\limits_{k\in n }\left\langle \xi ,\sigma _{k}^{\prime}\right\rangle \sigma _{k}\right)_{n\in\omega}$ converges to $ \xi $. \end{enumerate} \end{proposition}
\begin{proof} (1). The function $\left\langle \xi ,\sigma _{n}^{\prime }\right\rangle $ is Borel because the canonical pairing map is Borel. Moreover, the estimate \begin{equation*} \int \left\vert \left\langle \xi _{x},\sigma _{n,x}^{\prime }\right\rangle \right\vert ^{p}\ d\mu (x)\leq \left( 2K\right) ^{p}\int \left\Vert \xi _{x}\right\Vert ^{p}\ d\mu (x)=\left( 2K\Vert \xi \Vert \right) ^{p} \end{equation*} shows that $\left\langle \xi ,\sigma _{n}^{\prime }\right\rangle$ belongs to $\mathcal{L}^{p}(\mu)$.
\indent(2). For every $x\in X$, and using that $K$ is a basis constant for $(\sigma _{n,x})_{n\in \omega }$, we have \begin{equation*} \left\Vert \sum\limits_{k\in n}\left\langle \xi _{x},\sigma _{k,x}^{\prime }\right\rangle \sigma _{k,x}\right\Vert \leq K\left\Vert \xi _{x}\right\Vert . \end{equation*} Given $\varepsilon >0$ and $n\in \omega $, define the Borel set \begin{equation*} F_{n,\varepsilon }=\left\{ x\in X\colon \left\Vert \sum\limits_{k\in n}\left\langle \xi _{x},\sigma _{k,x}^{\prime }\right\rangle \sigma _{k,x}-\xi _{x}\right\Vert \leq \varepsilon \right\} \text{.} \end{equation*} Then $\bigcup\limits_{n\in \omega }F_{n,\varepsilon }=X$. By the dominated convergence theorem, there is $n_{0}\in \omega $ such that \begin{equation*} \int_{X\setminus F_{n,\varepsilon }}\left\Vert \xi _{x}\right\Vert ^{p}\ d\mu (x)<\varepsilon \end{equation*} for every $n\geq n_{0}$. Thus, for $n\geq n_{0}$, we have \begin{align*} \left\Vert \sum\limits_{k\in n}\left\langle \xi ,\sigma _{k}^{\prime }\right\rangle \sigma _{k}-\xi \right\Vert _{p}^{p}& =\int \left\Vert \sum\limits_{k\in n}\left\langle \xi _{x},\sigma _{k,x}^{\prime }\right\rangle \sigma _{k,x}-\xi _{x}\right\Vert ^{p}\ d\mu (x) \\ & \leq \mu \left( F_{n,\varepsilon }\right) \varepsilon +\left( K+1\right) ^{p}\int_{X\setminus F_{n,\varepsilon }}\left\Vert \xi _{x}\right\Vert ^{p}d\mu (x)\\ &\leq \left( \left( K+1\right) ^{p}+1\right) \varepsilon . \end{align*} This shows that the sequence $\left( \sum\limits_{k\in n}\left\langle \xi ,\sigma _{k}^{\prime }\right\rangle \sigma _{k}\right) _{n\in \omega }$ converges to $\xi $.
In view of Proposition \ref{Proposition: basis}, the sequence $\left( \sigma _{n}\right) _{n\in \omega }$ can be thought of as a basis of $L^{p}(\mu ,\mathcal{Z})$ over $L^{p}(\mu )$. In particular, Proposition \ref{Proposition: basis} implies that $L^{p}(\mu ,\mathcal{Z})$ is a \emph{separable} Banach space. It is not difficult to verify that, if $(\sigma _{n})_{n\in \omega }$ is an \emph{unconditional }basic sequence for $\mathcal{Z}$, then the series $\sum\nolimits_{k\in \omega }\left\langle \xi ,\sigma _{k}^{\prime }\right\rangle \sigma _{k}$ converges \emph{unconditionally }to $\xi $ for every $\xi \in L^{p}(\mu ,\mathcal{Z})$. Furthermore, since $\left( \sigma _{n,x}\right) _{n\in \omega }$ is a \emph{boundedly complete} basis of $\mathcal{Z}_{x}$ for every $x\in X$, a standard argument shows that $\left( \sigma _{n}\right) _{n\in \omega }$ has a similar property as a basis of $L^{p}\left( \mu ,\mathcal{Z}\right) $ over $L^{p}\left( \mu \right) $. In other words, if the series $\sum_{k\in \omega }f_{k}\sigma _{k}$ has uniformly bounded partial sums, then it converges in $L^{p}(\mu ,\mathcal{Z})$.
\subsection{Pairing\label{Subsection: pairing}}
In this subsection, we show that there is a natural pairing between $ L^{p}(\mu ,\mathcal{Z})$ and $L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })$, under which we may identify $L^{p}(\mu ,\mathcal{Z})^{\prime }$ with $ L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })$. Define a map \begin{equation*} \langle \cdot ,\cdot \rangle \colon L^{p}(\mu ,\mathcal{Z})\times L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })\rightarrow \mathbb{C}\ \ \mbox{ by }\ \ \langle \xi ,\eta \rangle =\int \langle \xi _{x},\eta _{x}\rangle \ d\mu (x) \end{equation*} for all $\xi \in L^{p}(\mu ,\mathcal{Z})$ and all $\eta \in L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })$. Young's inequality shows that the assignment $x\mapsto \langle \xi _{x},\eta _{x}\rangle $ is integrable, and hence the map above is well defined. The following is the analog in this context of the classical Riesz representation theorem \cite[Theorem 4.11] {brezis_functional_2011}, and can be proved with similar methods.
\begin{theorem} \label{Theorem: dual} The function from $L^{p^{\prime }}\left( \mu ,\mathcal{ Z}^{\prime }\right) $ to $L^{p}\left( \mu ,\mathcal{Z}\right) ^{\prime }$ given by \begin{equation*} \eta \mapsto \langle \cdot ,\eta \rangle =\int \langle \cdot _{x},\eta _{x}\rangle \ d\mu (x) \end{equation*} is an isometric isomorphism. \end{theorem}
It follows in particular that the Banach space $L^{p}(\mu ,\mathcal{Z})$ is reflexive. (Recall that all the fibres of the Banach bundle $\mathcal{Z}$ are assumed to be reflexive Banach spaces.)
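In more detail, reflexivity can be deduced by applying Theorem \ref{Theorem: dual} twice, once for $p$ and once for the conjugate exponent $p^{\prime }$: \begin{equation*} L^{p}(\mu ,\mathcal{Z})^{\prime \prime }\cong L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })^{\prime }\cong L^{p}(\mu ,\mathcal{Z}^{\prime \prime })\cong L^{p}(\mu ,\mathcal{Z})\text{,} \end{equation*} where the last isomorphism comes from the fiberwise identifications $\mathcal{Z}_{x}^{\prime \prime }\cong \mathcal{Z}_{x}$. One then checks that the resulting isomorphism agrees with the canonical embedding of $L^{p}(\mu ,\mathcal{Z})$ into its bidual.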
\subsection{Bundles of \texorpdfstring{$L^p$}{Lp}-spaces\label{Subsection: Lp-bundles}}
Consider a Borel probability measure $\mu $ on a standard Borel space $X$. Let $\lambda $ be a Borel probability measure on a standard Borel space $Z$ fibred over $X$ via a fiber map $q$ such that $q_{\ast }(\lambda )=\mu $. By \cite[Exercise 17.35]{kechris_classical_1995}, the measure $\lambda $ admits a disintegration $(\lambda _{x})_{x\in X}$ with respect to $\mu $, which is also written as $\lambda =\int \lambda _{x}\ d\mu (x)$. In other words,
\begin{itemize} \item there is a Borel assignment $x\mapsto \lambda _{x}$, where $\lambda _{x}$ is a probability measure on $\mathcal{Z}_{x}$, and
\item for every bounded Borel function $f\colon Z\rightarrow \mathbb{C}$, we have $\int f\ d\lambda =\int \ d\mu (x)\int f\ d\lambda _{x}$. \end{itemize}
Consider the Banach bundle $\mathcal{Z}=\bigsqcup\nolimits_{x\in X}L^{p}(\lambda _{x})$ over $X$, where the fiber $\mathcal{Z}_{x}$ over $x$ is $L^{p}(\lambda _{x})$.
\begin{theorem} \label{thm: BBb structure} There is a canonical Borel Banach bundle structure on $\mathcal{Z}$ such that $L^p(\mu,\mathcal{Z})$ is isometrically isomorphic to $L^p(\lambda)$. \end{theorem}
\begin{proof} Let us assume for simplicity that $\mu $ and $\lambda _{x}$ are atomless for every $x\in X$. In this case, by \cite[Theorem~2.2]{graf_classification_1989}, we can assume without loss of generality that
\begin{itemize} \item $X$ is the unit interval $[ 0,1] $ and $\mu$ is its Lebesgue measure;
\item $Z$ is the unit square $[ 0,1] ^{2}$ and $\lambda$ is its Lebesgue measure;
\item $q\colon Z\to X$ is the projection onto the first coordinate; and
\item $\lambda _{x}$ is the Lebesgue measure on $\{ x\} \times [ 0,1] $ for every $x\in X$. \end{itemize}
Let $(h_{n})_{n\in \omega }$ be the Haar system on $[0,1]$ defined as in \cite[Chapter 3]{carothers_short_2005}. For $n\in \omega $ and $x\in \lbrack 0,1]$, define $h_{n,x}^{(p)}\colon \lbrack 0,1]\rightarrow \mathbb{R}$ by $
h_{n,x}^{(p)}(t)=h_{n}(t)/\left\| h_{n}\right\| _{p}$ for every $t\in \lbrack 0,1]$. Then $(h_{n,x}^{(p)})_{n\in \omega }$ is a normalized basis of $L^{p}(\lambda _{x})$ for every $x\in \lbrack 0,1]$. It follows from the discussion in Subsection \ref{Subsection: canonical Borel structures} that there are unique Borel Banach bundle structures on $\mathcal{Z}$ and $ \mathcal{Z}^{\prime }=\bigsqcup\nolimits_{x\in X}L^{p^{\prime }}(\lambda _{x})$ such that $(h_{n}^{(p)})_{n\in \omega }$ and $(h_{n}^{(p^{\prime })})_{n\in \omega }$ are normal basic sequences for $\mathcal{Z}$ and $ \mathcal{Z}^{\prime }$, and that the canonical pairing between $\mathcal{Z}$ and $\mathcal{Z}^{\prime }$ is Borel.
We claim that $L^{p}(\mu ,\mathcal{Z})$ can be canonically identified with $L^{p}(\lambda )$. Given $f\in L^{p}(\lambda )$, consider the Borel section $s_{f}\colon X\rightarrow \mathcal{Z}$ defined by $s_{f,x}(t)=f(x,t)$ for $x,t\in \lbrack 0,1]$. It is clear that $s_{f}$ belongs to $L^{p}(\mu ,\mathcal{Z})$ and that $\int \left\| s_{f,x}\right\| _{p}^{p}\ d\mu (x)=\left\| f\right\| _{p}^{p}$. It follows that the map $f\mapsto s_{f}$ induces an isometric linear map $s\colon L^{p}(\lambda )\rightarrow L^{p}(\mu ,\mathcal{Z})$. The fact that $s$ is surjective is a consequence of Proposition \ref{Proposition: basis}, since the range of $s$ is a closed linear subspace of $L^{p}(\mu ,\mathcal{Z})$ that contains $h_{n}^{(p)}$ for every $n\in \omega $.
The case when $\lambda $ and $\mu $ are arbitrary Borel probability measures can be treated similarly, using the classification of disintegrations of Borel probability measures given in \cite[Theorem~3.2]{graf_classification_1989}, together with Lemma~\ref{Lemma: canonical Borel structure}. In fact, the results of \cite{graf_classification_1989} show that the same conclusions hold if $\lambda $ is a Borel \emph{$\sigma $-finite} measure. \end{proof}
\begin{definition} \label{Definition: Lp-bundle} Let $X$ be a Borel space, and let $\mu $ be a Borel probability measure on $X$. An $L^{p}$\emph{-bundle} over $(X,\mu )$ is a Borel Banach bundle $\mathcal{Z}=\bigsqcup\nolimits_{x\in X}L^{p}(\lambda _{x})$ obtained from the disintegration of a $\sigma $-finite Borel measure $\lambda $ on a Borel space $Z$ fibred over $X$, as described in Theorem~\ref{thm: BBb structure}. \end{definition}
\subsection{Decomposable operators\label{Subsection: decomposable operators}}
Throughout this section we let $q_{X}\colon \mathcal{Z}\rightarrow X$ and $ q_{Y}\colon \mathcal{W}\rightarrow Y$ be standard Borel Banach bundles with basic sequences $(\sigma _{n})_{n\in \omega }$ and $(\tau _{n})_{n\in \omega }$, respectively, and we let $\phi \colon X\rightarrow Y$ be a Borel isomorphism.
\begin{definition} Let $B( \mathcal{Z},\mathcal{W},\phi) $ be the space of bounded linear maps of the form $T\colon \mathcal{Z}_{x}\to \mathcal{W}_{\phi (x) }$ for some $ x\in X$. For such a map $T$, we denote the corresponding point $x$ in $X$ by $x_T$. \end{definition}
Consider the Borel structure on $B( \mathcal{Z},\mathcal{W},\phi) $ generated by the maps $T\mapsto x_T$ and $T \mapsto \left\langle T\sigma _{n,x_T },\tau _{m,\phi(x_T) }^{\prime}\right\rangle$ for $n,m\in \omega $. It is not difficult to check that the operator norm and composition of operators are Borel functions on $B( \mathcal{Z},\mathcal{W},\phi)$, and that the map $T\mapsto x_{T}$ makes $B( \mathcal{Z},\mathcal{W},\phi) $ into a Borel space fibred over $X$.
\begin{lemma} The Borel space $B( \mathcal{Z},\mathcal{W},\phi) $ is standard. \end{lemma}
\begin{proof} Let $V$ be the set of elements $\left( x,(c_{n,m})_{n,m\in \omega }\right)$ in $X\times \mathbb{C}^{\omega \times \omega }$ such that, for some $M\in \omega $ and every $(\alpha _{n})_{n\in \omega }\in \mathbb{Q}(i)^{\oplus \omega }$, we have \begin{equation*}
\sup_{m\in \omega }\left\| \sum\limits_{k\in m}\left( \sum\limits_{n\in \omega }\alpha _{n}c_{n,k}\right) \tau _{k,\phi (x)}\right\| \leq M\sup_{m\in \omega }\left\| \sum\limits_{k\in m}\alpha _{k}\sigma _{k,x}\right\| \text{.} \end{equation*} Then $V$ is a Borel subset of $X\times \mathbb{C}^{\omega \times \omega }$, and it is therefore a standard Borel space by \cite[Corollary~13.4]{kechris_classical_1995}. The result follows since the function $B(\mathcal{Z},\mathcal{W},\phi )\rightarrow X\times \mathbb{C}^{\omega \times \omega }$ given by \begin{equation*} T\mapsto \left( x_{T},\left( \left\langle T\sigma _{n,x_{T}},\tau _{m,\phi (x_{T})}^{\prime }\right\rangle \right) _{\left( n,m\right) \in \omega \times \omega }\right) \end{equation*} is a Borel isomorphism between $B(\mathcal{Z},\mathcal{W},\phi )$ and $V$. \end{proof}
Fix Borel $\sigma $-finite measures $\mu $ on $X$ and $\nu $ on $Y$ with $\phi _{\ast }(\mu )\sim \nu $. Suppose that $x\mapsto T_{x}$ is a Borel section of $B(\mathcal{Z},\mathcal{W},\phi )$ such that, for some $M\geq 0$ and $\mu $-almost every $x\in X$, we have \begin{equation}
\left\| T_{x}\right\| ^{p}\leq M^{p}\frac{d\phi _{\ast }(\mu )}{d\nu }(\phi (x))\text{.}\label{Equation: norm decomposable} \end{equation} Then one can define a bounded linear operator $T\colon L^{p}(\mu ,\mathcal{Z})\rightarrow L^{p}(\nu ,\mathcal{W})$ by setting $(T\xi )_{y}=T_{\phi ^{-1}(y)}\xi _{\phi ^{-1}(y)}$ for all $y\in Y$. It is not hard to verify that $T$ is indeed bounded, with norm given by the infimum of the $M>0$ for which \eqref{Equation: norm decomposable} holds for $\mu $-almost every $x\in X$. Operators of this form are called \emph{decomposable} with respect to the Borel isomorphism $\phi \colon X\rightarrow Y$. The Borel section $x\mapsto T_{x}$ corresponding to the decomposable operator $T$ is called the \emph{disintegration} of $T$ with respect to the Borel isomorphism $\phi \colon X\rightarrow Y$.
\begin{remark} \label{Remark:uniqueness}It is not difficult to verify that the disintegration of a decomposable operator $T$ is essentially unique, in the sense that if $x\mapsto T_{x}$ and $x\mapsto \widetilde{T}_{x}$ are two Borel sections defining the same decomposable operator, then $T_{x}= \widetilde{T}_{x}$ for $\mu $-almost every $x$ in $X$; see for example \cite[ Lemma F.20]{williams_crossed_2007}. \end{remark}
Given a bounded Borel function $g\colon Y\rightarrow \mathbb{C}$, we denote by $\Delta _{g}$ the corresponding multiplication operator on $L^{p}(\nu , \mathcal{W})$. The following characterization of decomposable operators is the natural generalization of the similar characterization of decomposable operators on Hilbert bundles \cite[Theorem F.21]{williams_crossed_2007} ---see also \cite[Theorem 7.10]{takesaki_theory_2002}---and can be proved with similar methods.
\begin{proposition} \label{Proposition: characterization decomposable} For a bounded map $ T\colon L^{p}(\mu,\mathcal{Z}) \to L^{p}( \nu ,\mathcal{W}) $, the following are equivalent:
\begin{enumerate} \item $T$ is decomposable with respect to $\phi $;
\item $\Delta _{g}T=T\Delta _{g\circ \phi }$ for every bounded Borel function $g\colon Y\rightarrow \mathbb{C}$;
\item There is a countable collection $\mathcal{F}$ of Borel subsets of $Y$ that separates the points of $Y$, such that $\Delta _{\chi _{F}}T=T\Delta _{\chi _{\phi ^{-1}[F]}}$ for every $F\in \mathcal{F}$. \end{enumerate} \end{proposition}
\begin{definition} \label{Definition: phi morphism}A $\phi $-\emph{isomorphism }from $\mathcal{Z }$ to $\mathcal{W}$ is a Borel section $x\mapsto T_{x}$ of the bundle $B( \mathcal{Z},\mathcal{W},\phi )$ such that $T_{x}$ is a surjective isometry for every $x\in X$. \end{definition}
If $T=\left( T_{x}\right) _{x\in X}$ is a $\phi $-isomorphism from $\mathcal{ Z}$ to $\mathcal{W}$, we denote by $T^{-1}$ the $\phi ^{-1}$-isomorphism $ (T_{\phi ^{-1}(y)}^{-1})_{y\in Y}$ from $\mathcal{W}$ to $\mathcal{Z}$.
\begin{definition} \label{Definition: isomorphism bundles} If $X=Y$, then $\mathcal{Z}$ and $ \mathcal{W}$ are said to be \emph{isomorphic }if there is an $\mbox{id}_{X}$ -isomorphism from $\mathcal{Z}$ to $\mathcal{W}$. In this case an $\mbox{id} _{X}$-isomorphism is simply called an \emph{isomorphism}. \end{definition}
The classical Guichardet decomposition theorem \cite{paterson_groupoids_1999} ---see also \cite[page 67]{renault_groupoid_1980}---admits a straightforward generalization from Hilbert bundles to Banach bundles, which can be proved by the same method.
\begin{theorem} \label{Theorem: Guichardet} If $T\colon L^{p}(\mu ,\mathcal{Z})\rightarrow L^{p}(\nu ,\mathcal{W})$ is an invertible isometry decomposable with respect to $\phi $, then $T$ admits a disintegration \begin{equation*} x\mapsto \left( \frac{d\phi _{\ast }(\mu )}{d\nu }(\phi (x))\right) ^{\frac{1}{p}}T_{x} \end{equation*} where $x\mapsto T_{x}$ is a $\phi $-isomorphism from the restriction of $\mathcal{Z}$ to a $\mu $-conull Borel set to the restriction of $\mathcal{W}$ to a $\nu $-conull Borel set. \end{theorem}
\section{Banach representations of étale groupoids}
\subsection{Some background notions on groupoids\label{Subsection: background on groupoids}}
A \emph{groupoid} can be defined as a (nonempty) small category where every arrow is invertible. The set of objects of a groupoid $G$ is denoted by $ G^{0}$. Identifying an object with its identity arrow, one can regard $G^{0}$ as a subset of $G$. We will denote the source and range maps on $G$ by $ s,r\colon G\rightarrow G^{0}$, respectively. A pair of arrows $\left( \gamma ,\rho \right) $ is \emph{composable} if $s(\gamma )=r(\rho )$. The set of pairs of composable arrows will be denoted, as customary, by $G^{2}$. If $ (\gamma ,\rho )$ is a pair of composable arrows of $G$, we denote their composition by $\gamma \rho $. If $A$ and $B$ are subsets of $G$, we denote by $AB$ the set of $\gamma \rho $ for $\gamma \in A$ and $\rho \in B$ composable. Similarly, if $A$ is a subset of $G$ and $\gamma \in G$, then we write $A\gamma $ for $A\{\gamma \}$ and $\gamma A$ for $\{\gamma \}A$. In particular, when $x$ is an object of $G$, then $Ax$ denotes the set of elements of $A$ with source $x$, while $xA$ denotes the set of elements of $ A $ with range $x$. A \emph{bisection }of a groupoid $G$ is a subset $A$ of $ G$ such that source and range maps are injective on $A$. (Bisections are called $G$-sets in \cite{renault_groupoid_1980,paterson_groupoids_1999}.) If $U\subseteq G^{0}$, then the set of elements of $G$ with source and range in $U$ is again a groupoid, called the \emph{restriction }of $G$ to $U$ (or the
\emph{contraction }in \cite{mackey_ergodic_1963,ramsay_virtual_1971}), and will be denoted by $G|_{U}$.
A \emph{locally compact groupoid }is a groupoid endowed with a topology having a countable basis of Hausdorff open sets with compact closures, such that
\begin{enumerate} \item composition and inversion of arrows are continuous maps, and
\item the set of objects $G^{0}$, as well as $Gx$ and $xG$ for every $x\in G^{0}$, are locally compact Hausdorff spaces. \end{enumerate}
It follows that the source and range maps are also continuous, since $s(\gamma )=\gamma ^{-1}\gamma $ and $r(\gamma )=\gamma \gamma ^{-1}$ for all $\gamma \in G$. It should be noted that the topology of a locally compact groupoid might not be (globally) Hausdorff. Examples of non-Hausdorff locally compact groupoids often arise in the applications, such as the holonomy groupoid of a foliation; see \cite[Section 2.3]{paterson_groupoids_1999}. Locally compact groups are the locally compact groupoids with only one object.
\begin{definition} An \emph{étale groupoid }is a locally compact groupoid such that composition of arrows---or, equivalently, the source and range maps---are local homeomorphisms. \end{definition}
This in particular implies that $Gx$ and $xG$ are countable discrete sets for every $x\in G^{0}$. Étale groupoids can be regarded as the analog of countable discrete groups. In fact, countable discrete groups are precisely the étale groupoids with only one object.
In the following we suppose that $G$ is an étale groupoid. If $U$ is an open Hausdorff subset of $G$, then $C_{c}(U)$ is the space of compactly supported continuous functions on $U$. Recall that $B(G)$ denotes the space of complex-valued Borel functions on $G$. We define $C_{c}(G)$ to be the linear span inside $B(G)$ of the union of the sets $C_{c}(U)$ for $U$ ranging over the open Hausdorff subsets of $G$. (Equivalently, $U$ ranges over a covering of $G$ consisting of open bisections \cite[Proposition 3.10] {exel_inverse_2008}.)
\begin{remark} When $G$ is a \emph{Hausdorff} étale groupoid, then $C_{c}(G)$ as defined above coincides with the space of compactly supported continuous functions on $G$. \end{remark}
One can define the convolution product and involution on $C_{c}(G)$ by \begin{equation*} (f\ast g)(\gamma )=\sum_{\rho _{0}\rho _{1}=\gamma }f(\rho _{0})g(\rho _{1})\ \ \mbox{ and }\ \ f^{\ast }(\gamma )=\overline{f(\gamma ^{-1})} \end{equation*} for $f,g\in C_{c}(G)$. For $f\in C_{c}(G)$, its $I$-norm is given by \begin{equation*}
\left\| f\right\| _{I}=\max \left\{ \sup_{x\in G}\sum_{\gamma \in xG}\left\vert f(\gamma )\right\vert ,\sup_{x\in G}\sum_{\gamma \in Gx}\left\vert f(\gamma )\right\vert \right\} . \end{equation*} These operations turn $C_{c}(G)$ into a normed *-algebra; see \cite[Section 2.2]{paterson_groupoids_1999}.
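To make these formulas concrete in the simplest case, suppose that $G$ is a countable discrete group, regarded as an étale groupoid with a single object $x$. Then $C_{c}(G)$ is the space of finitely supported functions on $G$, and since $xG=Gx=G$, the operations above reduce to the familiar group algebra operations \begin{equation*}
(f\ast g)(\gamma )=\sum_{\rho \in G}f(\rho )g(\rho ^{-1}\gamma )\ \ \mbox{ and }\ \ \left\| f\right\| _{I}=\sum_{\gamma \in G}\left\vert f(\gamma )\right\vert \text{,}
\end{equation*} so that in this case the $I$-norm is just the $\ell ^{1}$-norm.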
Similarly, one can define the space $B_{c}(G)$ as the linear span inside $ B(G)$ of the space of complex-valued bounded Borel functions on $G$ vanishing outside a compact Hausdorff subset of $G$. Convolution product, inversion, and the $I$-norm can be defined exactly in the same way on $ B_{c}(G)$ as on $C_{c}(G)$, making $B_{c}(G)$ a normed *-algebra; see \cite[ Section 2.2]{paterson_groupoids_1999}. Both $C_{c}(G)$ and $B_{c}(G)$ have a contractive approximate identity.
\begin{definition} \label{Definition:RepresentationCc(G)}A \emph{representation} of $C_{c}(G)$ on a Banach space $Z$ is an algebra homomorphism $\pi \colon C_{c}(G)\rightarrow B(Z)$. We say that $\pi $ is \emph{contractive} if it is contractive with respect to the $I$-norm on $C_{c}(G)$. Two representations $\pi _{0},\pi _{1}$ of $C_{c}(G)$ on Banach spaces $Z_{0}$ and $Z_{1}$ are \emph{equivalent} if there exists a surjective linear isometry $u\colon Z_{0}\rightarrow Z_{1}$ such that $\pi _{1}(f)u=u\pi _{0}(f)$ for every $f\in C_{c}(G)$. \end{definition}
A Borel probability measure $\mu $ on $G^{0}$ induces $\sigma $-finite Borel measures $\nu $ and $\nu ^{-1}$ on $G$ given by $\nu (A)=\int_{G^{0}}\left\vert xA\right\vert \ d\mu (x)$ and $\nu ^{-1}(A)=\nu (A^{-1})$ for every Borel subset $A$ of $G$. Observe that $\nu $ is the measure obtained by integrating the Borel family $(c_{xG})_{x\in X}$---where $c_{xG}$ denotes the counting measure on $xG$---with respect to $\mu $. Similarly, $\nu ^{-1}$ is the measure obtained by integrating $(c_{Gx})_{x\in X}$ with respect to $\mu $. The measure $\mu $ is said to be \emph{quasi-invariant} if $\nu $ and $\nu ^{-1}$ are equivalent, in symbols $\nu \sim \nu ^{-1}$. In that case, the Radon-Nikodym derivative $\frac{d\nu }{d\nu ^{-1}}$ will be denoted by $D$. Results of Hahn \cite{hahn_haar_1978} and Ramsay \cite[Theorem~3.20]{ramsay_topologies_1982} show that one can always choose---as we will do in the following---$D$ to be a Borel homomorphism from $G$ to the multiplicative group of strictly positive real numbers.
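Two degenerate cases illustrate the measures $\nu $, $\nu ^{-1}$, and the derivative $D$.

\begin{remark} If $G$ is a countable discrete group regarded as a groupoid with a single object, then $\nu (A)=\nu ^{-1}(A)=\left\vert A\right\vert $ for every $A\subseteq G$, so the unique probability measure on $G^{0}$ is quasi-invariant with $D\equiv 1$. Similarly, for the trivial groupoid $G=G^{0}$, every Borel probability measure $\mu $ is quasi-invariant, with $\nu =\nu ^{-1}=\mu $ and $D\equiv 1$. \end{remark}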
\begin{remark} Étale groupoids can be characterized as those locally compact groupoids whose topology admits a countable basis of \emph{open bisections}. \end{remark}
Closely related to the notion of an étale groupoid is that of an inverse semigroup.
\begin{definition} An \emph{inverse semigroup} is a semigroup $S$ such that for every element $ s $ of $S$, there exists a unique element $s^{\ast }$ of $S$ such that $ ss^{\ast }s=s$ and $s^{\ast }ss^{\ast }=s^{\ast }$. \end{definition}
Let $G$ be an étale groupoid, and denote by $\Sigma (G)$ the set of open bisections of $G$. The operations \begin{equation*} AB=\{\gamma \rho \colon (\gamma ,\rho )\in (A\times B)\cap G^{2}\}\ \ \mbox{ and }\ \ A^{-1}=\{\gamma ^{-1}\colon \gamma \in A\} \end{equation*} turn $\Sigma (G)$ into an inverse semigroup. The set $\Sigma _{c}(G)$ of \emph{precompact }open bisections of $G$ is a subsemigroup of $\Sigma (G)$. Similarly, the set $\Sigma _{\mathcal{K}}(G)$ of \emph{compact} open bisections of $G$ is also a subsemigroup of $\Sigma (G)$.
\begin{definition} \label{Definition:ample}An étale groupoid $G$ is called \emph{ample }if $ \Sigma _{\mathcal{K}}(G)$ is a basis for the topology of $G$. This is equivalent to the assertion that $G^{0}$ has a countable basis of compact open sets. \end{definition}
\subsection{Representations of étale groupoids on Banach bundles\label {Subsections: representations on Banach bundles}}
Throughout the rest of this section, we fix an étale groupoid $G$, and a Borel Banach bundle $q\colon \mathcal{Z}\rightarrow G^{0}$. We define the \emph{groupoid of fiber-isometries} of $\mathcal{Z}$ by \begin{equation*} \mathrm{Iso}(\mathcal{Z})=\left\{ (T,x,y)\colon T\colon \mathcal{Z} _{x}\rightarrow \mathcal{Z}_{y}\mbox{ is an invertible isometry, and }x,y\in G^{0}\right\} . \end{equation*} We denote the elements of $\mathrm{Iso}(\mathcal{Z})$ simply by $T\colon \mathcal{Z}_{x}\rightarrow \mathcal{Z}_{y}$. The set $\mathrm{Iso}(\mathcal{Z})$ naturally has the structure of a groupoid with set of objects $G^{0}$, where the source and range of the fiber-isometry $T\colon \mathcal{Z} _{x}\rightarrow \mathcal{Z}_{y}$ are $s(T)=x$ and $r(T)=y$, respectively. If $\left( \sigma _{n}\right) _{n\in \omega }$ is a basic sequence for $\mathcal{Z}$, then the Borel structure generated by the maps \begin{equation*} T\mapsto \left\langle T\sigma _{n,s\left( T\right) },\sigma^{\prime}_{m,r\left( T\right) }\right\rangle \end{equation*} for $n,m\in \omega $, is standard, and makes composition and inversion of arrows Borel. In other words, $\mathrm{Iso}(\mathcal{Z})$ is a \emph{standard Borel groupoid} \cite[Definition 2.4.1]{lupini_polish_2014}.
Let $\mu $ be a quasi-invariant Borel probability measure on $G^{0}$. A map $T\colon G\rightarrow \mathrm{Iso}(\mathcal{Z})$ is said to be a \emph{$\mu $-almost everywhere homomorphism} if there exists a $\mu $-conull Borel subset $U$ of $G^{0}$ such that the restriction of $T$ to $G|_{U}$ is a Borel groupoid homomorphism which is the identity on $U$.
\begin{definition} \label{Definition: representation on Banach bundle} A \emph{representation} of $G$ on $\mathcal{Z}$ is a pair $(\mu ,T)$ consisting of a quasi-invariant Borel probability measure $\mu $ on $G^{0}$, and a $\mu $-almost everywhere homomorphism $T\colon G\rightarrow \mathrm{Iso}(\mathcal{Z})$. \end{definition}
If $G$ is a discrete group, then a Borel Banach bundle over $G^{0}$ is just a Banach space $Z$, and a representation of $G$ on $Z$ is a Borel group homomorphism from $G$ to the Polish group $\mathrm{Iso}(Z) $ of invertible isometries of $Z$ endowed with the strong operator topology. (It should be noted here that a Borel group homomorphism from $G$ to $\mathrm{Iso}(Z) $ is automatically continuous by \cite[Theorem~9.10]{kechris_classical_1995}.)
\begin{definition} \label{Definition: dual representation} Let $(\mu ,T)$ be a representation of $G$ on $\mathcal{Z}$. The \emph{dual representation} of $(\mu ,T)$ is the representation $\left( \mu ,T^{\prime }\right) $ of $G$ on $\mathcal{Z} ^{\prime }$ defined by \begin{equation*} (T^{\prime })_{\gamma }=(T_{\gamma ^{-1}})^{\prime }\colon \mathcal{Z} _{s(\gamma )}^{\prime }\rightarrow \mathcal{Z}_{r(\gamma )}^{\prime } \end{equation*} for all $\gamma \in G$. \end{definition}
There is a natural notion of equivalence for representations of $G$ on Banach bundles. Recall that an isomorphism $v\colon \mathcal{Z}\rightarrow \widetilde{\mathcal{Z}}$ between Borel Banach bundles over $X$ is a Borel section $\left( v_{x}\right) _{x\in X}$ of the Borel Banach bundle $B(\mathcal{Z},\widetilde{\mathcal{Z}},\mbox{id}_{X})$ such that $v_{x}\colon \mathcal{Z}_{x}\rightarrow \widetilde{\mathcal{Z}}_{x}$ is a surjective linear isometry; see Definition \ref{Definition: isomorphism bundles}.
\begin{definition} \label{Definition: equivalence representations} Two representations $(\mu ,T)$ and $(\widetilde{\mu },\widetilde{T})$ of $G$ on Borel Banach bundles $\mathcal{Z}$ and $\widetilde{\mathcal{Z}}$ over $G^{0}$ are said to be \emph{equivalent} if $\mu \sim \widetilde{\mu }$ and there are a $\mu $-conull Borel subset $U$ of $G^{0}$ and an isomorphism $v\colon \mathcal{Z}|_{U}\rightarrow \widetilde{\mathcal{Z}}|_{U}$ such that $\widetilde{T}_{\gamma }v_{s(\gamma )}=v_{r(\gamma )}T_{\gamma }$ for every $\gamma \in G|_{U}$. \end{definition}
It is clear that two representations are equivalent if and only if their dual representations as in Definition \ref{Definition: dual representation} are equivalent.
From now on, we fix a Hölder exponent $p\in (1,\infty )$. Suppose that $(\mu ,T)$ is a representation of $G$ on $\mathcal{Z}$. Then the equation \begin{equation} (\pi _{T}(f)\xi )_{x}=\sum\limits_{\gamma \in xG}f(\gamma )D(\gamma )^{- \frac{1}{p}}T_{\gamma }\xi _{s(\gamma )}\text{\label{Equation: integrated form representation}} \end{equation} for $f\in C_{c}(G)$, $\xi \in L^{p}(\mu ,\mathcal{Z})$, and $x\in G^{0}$, defines an $I$-norm contractive, nondegenerate representation $\pi _{T}\colon C_{c}(G)\rightarrow B(L^{p}(\mu ,\mathcal{Z}))$. This can be proved by proceeding as in the proof of \cite[Proposition 3.1.1] {paterson_groupoids_1999}, where $2$ is replaced by $p$, using the duality result from Theorem~\ref{Theorem: dual}.
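As a consistency check, consider the case where $G$ is a countable discrete group and $\mu $ is the unique probability measure on the one-point space $G^{0}$. Then $L^{p}(\mu ,\mathcal{Z})$ is just the Banach space $Z=\mathcal{Z}_{x}$, $D\equiv 1$, and Equation \eqref{Equation: integrated form representation} reduces to \begin{equation*}
\pi _{T}(f)\xi =\sum_{\gamma \in G}f(\gamma )T_{\gamma }\xi \text{,}
\end{equation*} which is the usual integrated form of an isometric representation $\gamma \mapsto T_{\gamma }$ of the group $G$ on $Z$.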
\begin{definition} \label{Definition:integrated-form}Let $(\mu ,T)$ be a representation of $G$ on $\mathcal{Z}$. We call the representation $\pi _{T}\colon C_{c}(G)\rightarrow B(L^{p}(\mu ,\mathcal{Z}))$ described above the \emph{ integrated form }of $(\mu ,T)$. \end{definition}
\begin{remark} \label{remark: extend to BcG} Given a representation $(\mu ,T)$ of $G$ on $ \mathcal{Z}$, one can show that there is an $I$-norm contractive nondegenerate representation $\pi _{T}\colon B_{c}(G)\rightarrow B(L^{p}(\mu ,\mathcal{Z}))$ defined by the same expression as in Equation \eqref{Equation: integrated form representation}. \end{remark}
Given $f\in C_c(G)$, we let $\check{f}\in C_c(G)$ be given by $\check{f} (\gamma)=f(\gamma^{-1})$ for all $\gamma\in G$.
\begin{definition} \label{Definition:dual}Let $\mu $ be a Borel $\sigma $-finite measure on $ G^{0}$ and let $\pi \colon C_{c}(G)\rightarrow B(L^{p}(\mu ,\mathcal{Z}))$ be an $I$-norm contractive nondegenerate representation. The \emph{dual representation} of $\pi $ is the $I$-norm contractive nondegenerate representation $\pi ^{\prime }\colon C_{c}(G)\rightarrow B(L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime }))$ given by $\pi ^{\prime }(f)=\pi (\check{f} )^{\prime }$ for all $f\in C_{c}(G)$. \end{definition}
A straightforward computation shows that the dual representation of $\pi _{T}$ as in Definition \ref{Definition:dual} is the integrated form of the dual representation of $T$ from Definition \ref{Definition: dual representation}. An argument similar to the one in \cite{paterson_groupoids_1999}---using the version of Guichardet's decomposition theorem provided by Theorem~\ref{Theorem: Guichardet}---shows that the assignment $T\mapsto \pi _{T}$ preserves the natural notions of equivalence of representations introduced in Definition \ref{Definition:RepresentationCc(G)} and Definition \ref{Definition: equivalence representations}.
\begin{proposition} Let $(\mu ,\mathcal{Z})$ and $(\lambda ,\mathcal{W})$ be Borel Banach bundles over $G^{0}$, and let $T$ and $S$ be groupoid representations of $G$ on $\mathcal{Z}$ and $\mathcal{W}$, respectively. Then $T$ and $S$ are equivalent if and only if $\pi _{T}$ and $\pi _{S}$ are equivalent. \end{proposition}
\subsection{Amplification of representations\label{Subsection: Amplification of representations}}
Given a natural number $n\geq 1$, regard $M_{n}(C_{c}(G))$ as a normed *-algebra with respect to the usual matrix product and involution, and the $I$-norm \begin{equation*}
\left\| [f_{ij}]_{i,j\in n}\right\| _{I}=\max \left\{ \max_{x\in G^{0}}\max_{i\in n}\sum\limits_{j\in n}\sum\limits_{\gamma \in xG}\left\vert f_{ij}(\gamma )\right\vert \ ,\ \max_{x\in G^{0}}\max_{j\in n}\sum\limits_{i\in n}\sum\limits_{\gamma \in Gx}\left\vert f_{ij}(\gamma )\right\vert \right\} . \end{equation*}
\begin{definition} Let $\mu $ be a $\sigma $-finite Borel measure on $G^{0}$, and let $\pi \colon C_{c}(G)\rightarrow B(L^{p}(\mu ,\mathcal{Z}))$ be a representation. We define its \emph{amplification} $\pi ^{(n)}\colon M_{n}(C_{c}(G))\rightarrow B(\ell ^{p}(n,L^{p}(\mu ,\mathcal{Z})))$ by \begin{equation*} \pi ^{(n)}([f_{ij}]_{i,j\in n})[\xi _{j}]_{j\in n}=\left[ \sum\limits_{j\in n}\pi (f_{ij})\xi _{j}\right] _{i\in n}\text{.} \end{equation*} The representation $\pi $ is $I$\emph{-norm completely contractive} if $\pi ^{(n)}$ is $I$-norm contractive for every $n\in \mathbb{N}$. \end{definition}
If one starts with a representation $T$ of a groupoid on a Borel Banach bundle, one may take its integrated form, and then its amplification to matrices over $C_{c}(G)$, as in the definition above. The resulting representation $\pi _{T}^{(n)}$ is the integrated form of a representation of an amplified groupoid, which we proceed to describe. Given $n\geq 1$, denote by $G_{n}$ the groupoid $n\times G\times n$ endowed with the product topology, with set of objects $G^{0}\times n$, and operations defined by \begin{equation*} s(i,\gamma ,j)=(s(\gamma ),j)\ ,\ \ r(i,\gamma ,j)=(r(\gamma ),i)\ \ \mbox{ and }\ \ (i,\gamma ,j)(j,\rho ,k)=(i,\gamma \rho ,k)\text{.} \end{equation*} First, observe that we can identify $M_{n}(C_{c}(G))$ with $C_{c}(G_{n})$. Furthermore, the $I$-norm on $M_{n}(C_{c}(G))$ described above is exactly the norm obtained from the $I$-norm on $C_{c}(G_{n})$ via this identification.
\indent Denote by $\mathcal{Z}^{(n)}$ the Borel Banach bundle over $ G^{0}\times n$ such that $\mathcal{Z}_{(x,j)}^{(n)}=\mathcal{Z}_{x}$, with basic sequence $(\sigma _{k}^{(n)})_{k\in \omega }$ defined by $\sigma _{k,(x,j)}^{(n)}=\sigma _{k,x}$ for $(x,j)\in G^{0}\times n$. Endow $ G^{0}\times n$ with the measure $\mu ^{(n)}=\mu \times c_{n}$, and define the \emph{amplification} $T^{(n)}\colon G_{n}\rightarrow \mathrm{Iso}( \mathcal{Z}^{(n)})$ of $T$ by $T_{(i,\gamma ,j)}^{(n)}=T_{\gamma }$ for $ (i,\gamma ,j)\in G_{n}$. It is easy to verify that $\pi _{T}^{(n)}$ corresponds to the integrated form of the representation $T^{(n)}$ under the canonical identifications of $M_{n}(C_{c}(G))$ with $C_{c}(G_{n})$ and of $ \ell ^{p}(n,L^{p}(\mu ,\mathcal{Z}))$ with $L^{p}(\mu ^{(n)},\mathcal{Z} ^{(n)})$. This proves that the representations $\pi _{T}^{(n)}$ and $\pi _{T^{(n)}}$ are equivalent.
\subsection{Representations of étale groupoids on \texorpdfstring{$L^p$}{Lp} -bundles\label{Subsection: representations groupoids Lp-bundles}}
In this section, we isolate a particularly important and natural class of representations of an étale groupoid on Banach spaces. We fix a quasi-invariant measure $\mu $ on $G^{0}$. Let $\lambda $ be a $\sigma $-finite Borel measure on a standard Borel space $Z$ fibred over $G^{0}$ via $q$, and assume that $\mu =q_{\ast }(\lambda )$. Denote by $\mathcal{Z}$ the $L^{p}$-bundle $\bigsqcup\nolimits_{x\in G^{0}}L^{p}(\lambda _{x})$ over $(G^{0},\mu )$ obtained from the disintegration $\lambda =\int \lambda _{x}\ d\mu (x)$ as in Theorem~\ref{thm: BBb structure}.
\begin{definition} \label{Definition: Lp representation} Adopt the notation from the comments above. A representation $T\colon G\rightarrow \mathrm{Iso}(\mathcal{Z})$ is called an \emph{$L^{p}$-representation} of $G$ on $\mathcal{Z}$. Under the identification $L^{p}(\mu ,\mathcal{Z})\cong L^{p}(\lambda )$ given by Theorem~\ref{thm: BBb structure}, the integrated form $\pi _{T}\colon C_{c}(G)\rightarrow B(L^{p}(\lambda ))$ of $T$ is an $I$-norm contractive nondegenerate representation. \end{definition}
It will be shown in Theorem~\ref{Theorem: correspondence representations} that every $I$-norm contractive nondegenerate representation of $C_{c}(G)$ on an $L^{p}$-space is the integrated form of some $L^{p}$-representation of $G$.
\begin{remark} It is clear that an $L^{2}$-representation of $G$ in the sense of Definition \ref{Definition: Lp representation}, is a representation of $G$ on a Borel Hilbert bundle. Conversely, any representation of $G$ on a Borel Hilbert bundle is equivalent---as in Definition \ref{Definition: equivalence representations}---to an $L^{2}$-representation. In fact, if $\mathcal{H}$ is a Borel Hilbert bundle over $G^{0}$, then for every $0\leq \alpha \leq \omega $ the set $X_{\alpha }=\{x\in G^{0}\colon \mathrm{dim}(\mathcal{H}_{x})=\alpha \}$ is Borel. Thus, $\mathcal{H}$ is isomorphic to the Hilbert bundle $\mathcal{Z}_{0}=\bigsqcup\nolimits_{0\leq \alpha \leq \omega }X_{\alpha }\times \ell ^{2}(\alpha )$. Set $Z=\bigsqcup\nolimits_{0\leq \alpha \leq \omega }(X_{\alpha }\times \alpha )$, and define a $\sigma $-finite Borel measure $\lambda $ on $Z$ by $\lambda =\bigsqcup\nolimits_{0\leq \alpha \leq \omega }(\mu \times c_{\alpha })$. It is immediate that $\mathcal{Z}_{0}$ is (isomorphic to) the Borel Hilbert bundle $\bigsqcup\nolimits_{x\in G^{0}}L^{2}(\lambda _{x})$ obtained from the disintegration of $\lambda $ with respect to $\mu $. \end{remark}
In view of the above remark, there is no difference, up to equivalence, between $L^{2}$-representations and representations on Borel Hilbert bundles. The theory of $L^{p}$-representations of $G$ for $p\in (1,\infty) $ can therefore be thought of as a generalization of the theory of representations of $G$ on Borel Hilbert bundles.
\begin{example}[Left regular representation] \label{Example: left regular representation} Take $Z=G$ and $\lambda =\nu $, in which case the disintegration of $\lambda $ with respect to $\mu $ is $ (c_{xG})_{x\in X}$. For $\gamma \in G$, define the surjective linear isometry \begin{equation*} T_{\gamma }^{\mu ,p}\colon \ell ^{p}(s(\gamma )G)\rightarrow \ell ^{p}(r(\gamma )G) \end{equation*} by $(T_{\gamma }^{\mu ,p}\xi )(\rho )=\xi (\gamma ^{-1}\rho )$. The assignment $\gamma \mapsto T_{\gamma }^{\mu ,p}$ defines a representation $ T^{\mu ,p}$ of $G$ on the Borel Banach bundle $\bigsqcup\nolimits_{x\in G^{0}}\ell ^{p}(xG)$ which we shall call the \emph{left regular }$L^{p}$- \emph{representation} of $G$ associated with $\mu $. \end{example}
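When $G$ is a countable discrete group, the bundle in Example \ref{Example: left regular representation} has the single fiber $\ell ^{p}(G)$, and \begin{equation*}
(T_{\gamma }^{\mu ,p}\xi )(\rho )=\xi (\gamma ^{-1}\rho )
\end{equation*} recovers the classical left regular representation of $G$ by isometries of $\ell ^{p}(G)$.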
When the Hölder exponent $p$ is clear from the context, we will write $T^{\mu }$ in place of $T^{\mu ,p}$. A straightforward computation shows that the dual $(T^{\mu ,p})^{\prime }$ of the left regular $L^{p}$-representation associated with $\mu $ is the left regular $L^{p^{\prime }}$-representation $T^{\mu ,p^{\prime }}$ associated with $\mu $. Following Rieffel's induction theory, and for consistency with \cite[Section 3.1 and Appendix D]{paterson_groupoids_1999}, we denote by $\mathrm{Ind}\left( \mu \right) $ the integrated form of $T^{\mu ,p}$. The same computation as in \cite[page 100]{paterson_groupoids_1999}, with $2$ replaced by $p$, shows that $\mathrm{Ind}(\mu )$ is the left action of $C_{c}(G)$ on $L^{p}(\nu ^{-1})$ by convolution. This allows one to deduce that in the Hausdorff case the left regular representations of $G$ separate points. The non-Hausdorff case is more subtle. A treatment of left regular representations of non-Hausdorff groupoids on Hilbert spaces is given in \cite{khoshkam_regular_2002}.
\begin{definition} Let us say that a family $\mathcal{M}$ of quasi-invariant probability measures on $G^{0}$ \emph{separates points}, if for every nonzero function $ f\in C_{c}(G)$, there is a measure $\mu \in \mathcal{M}$ such that $f$ does not vanish on the support of the integrated measure $\nu =\int c_{xG}\ d\mu (x)$. Similarly, a collection $\mathcal{R}$ of representations of $C_{c}(G)$ on Banach algebras is said to \emph{separate points} if for every nonzero function $f\in C_{c}(G)$, there is a representation $\pi \in \mathcal{R}$ such that $\pi (f)$ is nonzero. \end{definition}
\begin{lemma} \label{Lemma: faithful measure} If $G$ is a Hausdorff étale groupoid, then a function $f$ in $C_{c}(G)$ belongs to $\mathrm{Ker}(\mathrm{Ind}(\mu ))$ if and only if it vanishes on the support of $\nu $. \end{lemma}
\begin{proof} Suppose that $f$ vanishes on the support of $\nu $. Then \begin{equation*} \left\langle \mathrm{Ind}(\mu )(f)\xi ,\eta \right\rangle _{L^{p}(\nu )}=\int f(\gamma )\left\langle T_{\gamma }\xi _{s(\gamma )},\eta _{r(\gamma )}\right\rangle D^{-\frac{1}{p}}(\gamma )\ d\nu (\gamma )=0 \end{equation*} for every $\xi \in L^{p}(\nu )$ and $\eta \in L^{p^{\prime }}(\nu )$, so $\mathrm{Ind}(\mu )(f)=0$. Conversely, if $\mathrm{Ind}(\mu )(f)=0$ then $f\ast \xi =0$ for every $\xi \in L^{p}(\nu ^{-1})$. In particular, $f=f\ast \chi _{G^{0}}=0$ in $L^{p}(\nu ^{-1})$. Thus $f(\gamma )=0$ for $\nu ^{-1}$-almost every $\gamma \in G$ and hence also for $\nu $-almost every $\gamma \in G$. Since $G$ is assumed to be Hausdorff, $f$ is continuous, and hence it vanishes on the support of $\nu $. \end{proof}
By Lemma~\ref{Lemma: faithful measure}, in the Hausdorff case a family $ \mathcal{M}$ of Borel probability measures on $G^{0}$ separates points if and only if the collection of left regular representations associated with elements of $\mathcal{M}$ separates points.
\begin{proposition} \label{Proposition: separates points}If $G$ is a Hausdorff étale groupoid, then the family of left regular representations associated with quasi-invariant Borel probability measures on $G^{0}$ separates points. \end{proposition}
\begin{proof} A quasi-invariant Borel probability measure is said to be transitive if it is supported by an orbit. Every orbit carries a transitive measure, which is unique up to equivalence; see \cite[Definition 3.9 of Chapter 1 ] {renault_groupoid_1980}. It is well known that the transitive measures constitute a collection of quasi-invariant Borel probability measures on $ G^{0}$ that separates points; see \cite[Proposition 1.11 of Chapter 2] {renault_groupoid_1980}, so the proof is complete. \end{proof}
\section{Representations of inverse semigroups on \texorpdfstring{$L^p$}{Lp}-spaces}
\subsection{The Banach-Lamperti theorem\label{Subsection: Banach-Lamperti theorem}}
Let $\mu $ and $\nu $ be Borel probability measures on standard Borel spaces $X$ and $Y$, and let $p\in \lbrack 1,\infty )\setminus \left\{ 2\right\} $. The \emph{Banach-Lamperti theorem }\cite[Theorem 3.2.5] {fleming_isometries_2003} classifies the linear isometries from $L^{p}\left( \mu \right) $ to $L^{p}\left( \nu \right) $ in terms of \emph{regular set isomorphisms}; see \cite[Definition 3.2.3]{fleming_isometries_2003}. We are interested in the case of \emph{surjective }linear isometries. In such a case the Banach-Lamperti theorem can be stated as follows.
\begin{theorem}[Banach-Lamperti] \label{Theorem: Banach-Lamperti} Let $p\in \lbrack 1,\infty )\setminus \{2\}$. If $u\colon L^{p}(\mu )\rightarrow L^{p}(\nu )$ is a surjective linear isometry, then there are conull Borel subsets $X_{0}$ and $Y_{0}$ of $X$ and $Y$, a Borel isomorphism $\phi \colon X_{0}\rightarrow Y_{0}$ such that $\phi _{\ast }(\mu |_{X_{0}})\sim \nu |_{Y_{0}}$, and a Borel function $g\colon Y\rightarrow \mathbb{C}$ with $\left\vert g(y)\right\vert ^{p}=\frac{d\phi _{\ast }(\mu )}{d\nu }(y)$ for $\nu $-almost every $y\in Y$, such that $u\xi =g\cdot \left( \xi \circ \phi ^{-1}\right) $ for every $\xi \in L^{p}(\mu )$. \end{theorem}
Theorem \ref{Theorem: Banach-Lamperti} was proved by Banach in \cite[Section 5]{banach_theorie_1993} and later generalized by Lamperti to not necessarily surjective isometries \cite{lamperti_isometries_1958}.
\subsection{Hermitian idempotents and spatial partial isometries}
Let $X$ be a complex vector space. The following definition is taken from \cite{lumer_semi-inner-product_1961}.
\begin{definition} A \emph{semi-inner product} on $X$ is a function $[ \cdot ,\cdot ]\colon X\times X\rightarrow \mathbb{C}$ satisfying:
\begin{enumerate} \item $\left[ \cdot ,\cdot \right] $ is linear in the first variable;
\item $\left[ x,\lambda y\right] =\overline{\lambda }\left[ x,y\right] $ for every $\lambda \in \mathbb{C}$ and $x,y\in X$;
\item $\left[ x,x\right] \geq 0$ for every $x\in X$, and equality holds if and only if $x=0$;
\item $\left\vert \left[ x,y\right] \right\vert \leq \left[ x,x\right] ^{ \frac{1}{2}}\left[ y,y\right] ^{\frac{1}{2}}$ for every $x,y\in X$. \end{enumerate}
The \emph{norm} on $X$ associated with the semi-inner product $[ \cdot ,\cdot ] $ is defined by $\left\| x\right\| =\left[ x,x\right] ^{\frac{1}{2}}$ for $x\in X$. \end{definition}
In general, there might be different semi-inner products on $X$ that induce the same norm. Nonetheless, it is not difficult to see that on a smooth Banach space---and in particular on $L^{p}$-spaces---there is at most one semi-inner product compatible with its norm; see the remark after the proof of Theorem~3 in \cite{lumer_semi-inner-product_1961}.
\begin{definition} A semi-inner product on a Banach space that induces its norm is called \emph{ compatible}. A Banach space $X$ endowed with a compatible semi-inner product is called a \emph{semi-inner product space}. \end{definition}
By the above discussion, if $X$ is a smooth Banach space, then a compatible semi-inner product---when it exists---is unique.
\begin{remark} It is easy to verify that the norm of $L^{p}( \lambda) $ is induced by the semi-inner product \begin{equation*}
\left[ f,g\right] =\left\| g\right\| _{p}^{2-p}\int f(x) \overline{g(x)} \left\vert g(x) \right\vert ^{p-2}\ d\lambda(x) \end{equation*} for $f,g\in L^{p}(\lambda) $ with $g\neq 0$, together with $\left[ f,0\right] =0$. \end{remark}
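Indeed, taking $g=f$ recovers the $p$-norm:

```latex
\begin{equation*}
[ f,f] =\left\Vert f\right\Vert _{p}^{2-p}\int \left\vert f(x)\right\vert
^{2}\left\vert f(x)\right\vert ^{p-2}\ d\lambda (x)
=\left\Vert f\right\Vert _{p}^{2-p}\left\Vert f\right\Vert _{p}^{p}
=\left\Vert f\right\Vert _{p}^{2}\text{,}
\end{equation*}
```

while property (4) reduces to Hölder's inequality applied to $f\in L^{p}(\lambda )$ and $\overline{g}\left\vert g\right\vert ^{p-2}\in L^{p^{\prime }}(\lambda )$, the latter having $\left\Vert \overline{g}\left\vert g\right\vert ^{p-2}\right\Vert _{p^{\prime }}=\left\Vert g\right\Vert _{p}^{p-1}$.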
An \emph{inner product }on $X$ is precisely a semi-inner product such that moreover $\left[ x,y\right] =\overline{\left[ y,x\right] }$ for every $ x,y\in X$. Semi-inner products allow one to generalize notions concerning operators on Hilbert spaces to more general Banach spaces.
\begin{definition} Let $X$ be a semi-inner product space, and let $T\in B(X)$. The \emph{numerical range} $W( T) $ of $T$ is the set \begin{equation*} \left\{[ Tx,x]\colon x\in X, [ x,x] =1\right\}\subseteq \mathbb{C}. \end{equation*} The operator $T$ is called \emph{hermitian} if $W( T) \subseteq \mathbb{R}$. \end{definition}
Adopt the notation and terminology from the definition above. In view of \cite{lumer_semi-inner-product_1961} the following statements are equivalent:
\begin{enumerate} \item $T$ is hermitian;
\item $\left\| 1+irT\right\| =1+o\left( r\right) $ as $r\rightarrow 0$;
\item $\left\| \exp \left( irT\right) \right\| =1$ for all $r\in \mathbb{R} $ . \end{enumerate}
It is clear that when $X$ is a Hilbert space, an operator is hermitian if and only if it is self-adjoint. In particular, the hermitian idempotents on a Hilbert space are exactly the orthogonal projections.
Let $\lambda $ be a Borel measure on a standard Borel space $Z$. Hermitian idempotents on $L^{p}(\lambda )$, for $p\neq 2$, have been characterized in \cite[Chapter 3]{torrance_adjoints_1968} (see also \cite {berkson_hermitian_1972}): these are precisely the multiplication operators associated with characteristic functions on Borel subsets of $Z$.
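As a finite-dimensional numerical sketch (not part of the argument), one can test criterion (3) above on $\ell ^{p}(\{0,1,2\})$: an idempotent given by multiplication by a characteristic function satisfies $\left\Vert \exp (irT)\right\Vert =1$, while an oblique (non-hermitian) idempotent does not. The helper `op_norm_lp` below is an ad hoc sampling estimate, introduced only for this illustration.

```python
import numpy as np

p, r = 3, 0.5  # any p in [1, oo) \ {2} and any real r

def exp_ir(E, r):
    # For an idempotent E (E @ E == E) the exponential series collapses:
    # exp(irE) = I + (e^{ir} - 1) E.
    return np.eye(E.shape[0], dtype=complex) + (np.exp(1j * r) - 1) * E

def op_norm_lp(T, p, trials=2000, seed=0):
    # Crude lower estimate of the operator norm of T on l^p(C^n),
    # obtained by sampling random unit vectors.
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):
        x = rng.normal(size=T.shape[1]) + 1j * rng.normal(size=T.shape[1])
        x /= np.linalg.norm(x, ord=p)
        best = max(best, np.linalg.norm(T @ x, ord=p))
    return best

e = np.diag([1.0, 1.0, 0.0])        # multiplication by chi_{0,1}: hermitian
P = np.array([[1.0, 1.0, 0.0],      # oblique idempotent (P @ P == P),
              [0.0, 0.0, 0.0],      # not a multiplication operator
              [0.0, 0.0, 1.0]])

# exp(ir e) is diagonal with unimodular entries, so its l^p norm is 1;
# exp(ir P) moves mass between coordinates, and this vector witnesses norm > 1:
witness = exp_ir(P, r) @ np.array([0.0, 1.0, 0.0])
```

Here `e` satisfies criterion (3) exactly, while the witness vector shows that `P` violates it for $r=0.5$.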
Recall that a bounded linear operator $s$ on a Hilbert space is a \emph{partial isometry} if there is another bounded linear operator $t$ such that $st$ and $ts$ are orthogonal projections. The following is a generalization of partial isometries on Hilbert spaces to $L^p$-spaces. We use the term `spatial' in accordance with the terminology in \cite{phillips_analogs_2012,phillips_simplicity_2013,phillips_isomorphism_2013,phillips_crossed_2013}.
\begin{definition}
\label{definition: spatial}Let $X$ be a semi-inner product space and $s\in B(X)$. We say that $s$ is a \emph{partial isometry} if $\| s\| \leq 1$ and there exists $t\in B(X)$ such that $\| t\| \leq 1$ and $st$ and $ts$ are idempotent. If moreover $st$ and $ts$ are \emph{hermitian }idempotents, then we say that $s$ is a \emph{spatial} partial isometry. \end{definition}
Following \cite{phillips_analogs_2012}, we call an element $t$ as in Definition \ref{definition: spatial} a \emph{reverse} of $s$. We call $ts$ and $st$ the \emph{source} and \emph{range} idempotents of $s$, respectively. We denote by $\mathcal{S}(X)$ the set of all spatial partial isometries in $B(X)$, and by $\mathcal{E}(X)$ the set of hermitian idempotents in $B(X)$.
It is a standard fact in Hilbert space theory that all partial isometries on a Hilbert space are spatial. Moreover, the reverse of a partial isometry on a Hilbert space is unique, and it is given by its adjoint. The situation for partial isometries on $L^{p}$-spaces, for $p\neq 2$, is rather different. The following proposition can be taken as a justification for the term \textquotedblleft spatial\textquotedblright .
\begin{proposition} Let $p\in (1,\infty )\setminus \{2\}$ and let $\lambda $ be a $\sigma $-finite Borel measure on a standard Borel space $Z$. If $e$ is a hermitian idempotent on $L^{p}(\lambda )$, then there is a Borel subset $E$ of $Z$ such that $e=\Delta _{\chi _{E}}$. More generally, if $s$ is a spatial partial isometry on $L^{p}(\lambda )$, then there are Borel subsets $E$ and $F$ of $Z$, a Borel isomorphism $\phi \colon E\rightarrow F$, and a Borel function $g\colon F\rightarrow \mathbb{C}$ such that \begin{equation*} (s\xi )(y)= \begin{cases} g(y)\cdot (\xi \circ \phi ^{-1})(y) & \text{if }y\in F\text{, and} \\ 0 & \text{ otherwise} \end{cases} \end{equation*} for all $\xi $ in $L^{p}(\lambda )$ and for $\lambda $-almost every $y\in Z$. \end{proposition}
\begin{proof} The result follows from the characterization of hermitian idempotents mentioned above, together with the Banach-Lamperti theorem. \end{proof}
\begin{remark} Adopt the notation of the above proposition. It is easy to check that the reverse of $s$ is also spatial, and that it is given by \begin{equation*} (t\xi )(y)= \begin{cases} \overline{(g\circ \phi )(y)}\cdot (\xi \circ \phi )(y) & \text{if }y\in E \text{, and} \\ 0 & \text{ otherwise} \end{cases} \end{equation*} for all $\xi $ in $L^{p}(\lambda )$ and for $\lambda $-almost every $y\in Z$ . In particular, the reverse of a spatial partial isometry of an $L^{p}$ -space is unique. We will consequently write $s^{\ast }$ for the reverse of a spatial partial isometry $s$. \end{remark}
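In a toy finite-dimensional case this structure is completely explicit. Take counting measure on four points, so the Radon--Nikodym factor is trivial and $|g|=1$: a spatial partial isometry on $\ell ^{p}(\{0,1,2,3\})$ is then a weighted partial permutation matrix, and its reverse is the conjugate transpose. The following numerical sketch (not part of the argument) checks that the source and range idempotents are multiplications by characteristic functions, and that $s$ is isometric on functions supported on its domain.

```python
import numpy as np

# phi maps E = {0,1} onto F = {2,3}; g assigns unimodular weights on F
# (with counting measure the Radon-Nikodym derivative is 1, so |g| = 1).
g2, g3 = np.exp(0.7j), np.exp(-1.3j)
s = np.zeros((4, 4), dtype=complex)
s[2, 0] = g2    # (s xi)(2) = g(2) * xi(phi^{-1}(2)) = g2 * xi(0)
s[3, 1] = g3    # (s xi)(3) = g3 * xi(1)
t = s.conj().T  # the reverse s*: (t xi)(x) = conj(g(phi(x))) * xi(phi(x))

# Source and range idempotents: multiplications by chi_E and chi_F.
source, rng_idem = t @ s, s @ t

# A vector supported on E = {0,1}, on which s acts isometrically for every p:
xi = np.array([1 + 2j, -0.5, 0.0, 0.0])
```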
For $p\in \left( 1,+\infty \right) \setminus \left\{ 2\right\} $ the set $ \mathcal{S}(L^{p}(\lambda ))$ of spatial partial isometries on $ L^{p}(\lambda )$ is an inverse semigroup, and the set $\mathcal{E} (L^{p}(\lambda ))$ of hermitian idempotents on $L^{p}(\lambda )$ is precisely the semilattice of idempotent elements of $\mathcal{S} (L^{p}(\lambda ))$. Moreover, the map $\mathcal{B}_{\lambda }\rightarrow \mathcal{E}(L^{p}(\lambda ))$ given by $F\mapsto \Delta _{\chi _{F}}$ is an isomorphism of semilattices. Thus, $\mathcal{E}(L^{p}(\lambda ))$ is a complete Boolean algebra.
\begin{remark} If $(e_{j})_{j\in I}$ is an increasing net of hermitian idempotents, then $\sup\nolimits_{j\in I}e_{j}$ is the limit of the net $(e_{j})_{j\in I}$ in the strong operator topology. \end{remark}
\subsection{Representations of inverse semigroups}
We now turn to inverse semigroup representations on $L^{p}$-spaces by spatial partial isometries. Fix an inverse semigroup $\Sigma $, and recall that $M_{n}(\Sigma )$ has a natural structure of inverse semigroup for every $n\geq 1$ by \cite[Proposition 2.1.4]{paterson_groupoids_1999}.
\begin{definition} Let $\lambda$ be a $\sigma $-finite Borel measure on a standard Borel space. A \emph{representation} of $\Sigma $ on $L^{p}(\lambda) $ is a semigroup homomorphism $\rho \colon \Sigma \to \mathcal{S}( L^{p}(\lambda))$.
For $n\geq 1$ denote by $\lambda ^{\left( n\right) }$ the measure $\lambda \times c_{n}$, where $c_{n}$ is the counting measure on $n$. We define the \emph{amplification} $\rho ^{(n)}\colon M_{n}(\Sigma )\rightarrow \mathcal{S}(L^{p}(\lambda ^{(n)}))$ of $\rho $ by $\rho ^{(n)}([\sigma _{ij}]_{i,j\in n})=[\rho (\sigma _{ij})]_{i,j\in n}$, where we identify $B(L^{p}(\lambda ^{(n)}))$ with $M_{n}(B(L^{p}(\lambda )))$ in the usual way.\newline \indent The \emph{dual }of $\rho $ is the representation $\rho ^{\prime }\colon \Sigma \rightarrow \mathcal{S}(L^{p^{\prime }}(\lambda ))$ given by $\rho ^{\prime }(\sigma )=\rho (\sigma ^{\ast })^{\prime }$ for $\sigma \in \Sigma $. \end{definition}
\begin{definition} \label{Definition:CSIGMA}Denote by $\mathbb{C}\Sigma $ the complex *-algebra of formal linear combinations of elements of $\Sigma $, with operations determined by $\delta _{\sigma }\delta _{\tau }=\delta _{\sigma \tau }$ and $ \delta _{\sigma }^{\ast }=\delta _{\sigma ^{\ast }}$ for all $\sigma ,\tau \in \Sigma $, and endowed with the $\ell ^{1}$-norm. The canonical identification of $\mathbb{C}M_{n}(\Sigma )$ with $M_{n}(\mathbb{C}\Sigma )$ for $n\geq 1$, defines matrix norms on $\mathbb{C}\Sigma $. \end{definition}
\begin{remark} Every representation $\rho \colon \Sigma \rightarrow \mathcal{S} (L^{p}(\lambda ))$ induces a contractive representation $\pi _{\rho }\colon \mathbb{C}\Sigma \rightarrow B(L^{p}(\lambda ))$ such that $\pi _{\rho }(\delta _{\sigma })=\rho \left( \sigma \right) $ for $\sigma \in \Sigma $. It is not difficult to verify the following facts:
\begin{enumerate} \item Since, for $n\geq 1$, the amplification $\pi _{\rho }^{(n)}$ of $\pi _{\rho }$ to $M_{n}(\mathbb{C}\Sigma) $ is the representation associated with the amplification $\rho ^{(n) }$ of $\rho $, it follows that $\pi _{\rho }$ is $p$-completely contractive.
\item The representation $\pi _{\rho ^{\prime }}$ associated with the dual $ \rho ^{\prime }$ of $\rho $ is the dual of the representation $\pi _{\rho }$ associated with $\rho $. \end{enumerate} \end{remark}
\begin{definition} Let $\lambda $ and $\mu $ be $\sigma $-finite Borel measures on standard Borel spaces, and let $\rho $ and $\kappa $ be representations of $\Sigma $ on $L^{p}(\lambda) $ and $L^{p}(\mu) $ respectively. We say that $\rho $ and $\kappa $ are \emph{equivalent }if there is a surjective linear isometry $u\colon L^{p}(\lambda) \to L^{p}(\mu) $ such that $u\rho (\sigma) =\kappa (\sigma) u$ for every $\sigma \in \Sigma $. \end{definition}
Adopt the notation of the definition above. If $\rho $ and $\kappa $ are equivalent, then their dual representations $\rho ^{\prime }$ and $\kappa ^{\prime }$ are also equivalent. Similarly, if $\rho $ and $\kappa $ are equivalent, then the corresponding representations $\pi _{\rho }$ and $\pi _{\kappa }$ of $\mathbb{C}\Sigma $ are equivalent.
\subsection{Tight representations of semilattices}
In the following, all semilattices will be assumed to have a minimum element $0$. Consistently, all inverse semigroups will be assumed to have a zero element $0$, which is the minimum of the associated idempotent semilattice. In the rest of this subsection we recall some definitions from Section 11 of \cite{exel_inverse_2008}.
\begin{definition} \label{Definition:cover}Let $E$ be a semilattice and let $\mathcal{B}=( \mathcal{B},0,1,\wedge ,\vee ,\lnot )$ be a Boolean algebra. A \emph{representation} of $E$ on $\mathcal{B}$ is a semilattice morphism $\beta \colon E\rightarrow (\mathcal{B},\wedge )$ satisfying $\beta (0)=0$.
Two elements $x,y$ of $E$ are said to be \emph{orthogonal}, written $x\perp y $, if $x\wedge y=0$. Furthermore, we say that $x$ and $y$ \emph{intersect (each other)} if they are not orthogonal. \newline \indent If $X\subseteq Y\subseteq E$, then $X$ is a \emph{cover }for $Y$ if every nonzero element of $Y$ intersects an element of $X$. \end{definition}
It is easy to verify that a representation of a semilattice $E$ on a Boolean algebra sends orthogonal elements to orthogonal elements. It is also immediate to check that a cover for the set of predecessors of some $x\in E$ is also a cover for $\{x\}$.
\begin{notation} If $X$ and $Y$ are (possibly empty) subsets of $E$, we denote by $E^{X,Y}$ the set \begin{equation*} E^{X,Y}=\{z\in E\colon z\leq x\ \mbox{ for all }x\in X,\mbox{ and }z\perp y\ \mbox{ for all }y\in Y\}. \end{equation*} \end{notation}
We are now ready to state the definition of tight representation of a semilattice.
\begin{definition} \label{Definition: tight representation semilattice} Let $E$ be a semilattice and let $\mathcal{B}$ be a Boolean algebra. A representation $ \beta \colon E\rightarrow \mathcal{B}$ is said to be \emph{tight} if for every pair $X,Y$ of (possibly empty) finite subsets of $E$ and every finite cover $Z$ of $E^{X,Y}$, we have \begin{equation} \bigvee\limits_{z\in Z}\beta (z)=\bigwedge_{x\in X}\beta (x)\wedge \bigwedge_{y\in Y}\lnot \beta (y).\text{\label{Equation: tight}} \end{equation} \end{definition}
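On a finite Boolean algebra these notions can be made concrete. The following toy computation (not part of the argument) takes $E$ to be the power set of $\{0,1,2\}$ and $\beta $ the identity representation, which is tight by \cite[Proposition 11.9]{exel_inverse_2008}, and checks Equation~\eqref{Equation: tight} for one choice of $X$, $Y$, and a cover $Z$.

```python
from itertools import chain, combinations

U = frozenset({0, 1, 2})
E = [frozenset(s) for s in chain.from_iterable(combinations(sorted(U), k)
                                               for k in range(len(U) + 1))]

def below_and_orthogonal(X, Y):
    # E^{X,Y}: elements below every x in X and orthogonal to every y in Y.
    return [z for z in E
            if all(z <= x for x in X) and all(not (z & y) for y in Y)]

def is_cover(Z, S):
    # Z covers S if every nonzero element of S intersects some element of Z.
    return all(any(z & s for z in Z) for s in S if s)

X, Y = [frozenset({0, 1})], [frozenset({1})]
Z = [frozenset({0})]   # covers E^{X,Y} = {emptyset, {0}}

# With beta the identity, Equation (tight) reads: join of Z equals
# (meet of X) intersected with the complements of the elements of Y.
lhs = frozenset().union(*Z)
rhs = frozenset.intersection(*X) & frozenset.intersection(*[U - y for y in Y])
```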
\cite[Proposition 11.9]{exel_inverse_2008} shows that, when $E$ is also a Boolean algebra, the tight representations of $E$ are precisely the Boolean algebra homomorphisms.
\begin{definition} \label{Definition:dense-subsemilattice}Suppose that $E$ is a semilattice. A subsemilattice $F$ of $E$ is \emph{dense} if for every $x\in E$ nonzero there is $y\in F$ nonzero such that $y\leq x$. \end{definition}
\begin{lemma} \label{Lemma: dense}Suppose that $E$ is a semilattice, and $F$ is a dense subsemilattice of $E$. If $\beta $ is a tight representation of $E$ on a Boolean algebra $\mathcal{B}$, then the restriction of $\beta $ to $F$ is tight. \end{lemma}
\begin{proof} Suppose that $X,Y\subset F$ and that $Z$ is a cover for $F^{X,Y}$. We claim that $Z$ is a cover for $E^{X,Y}$. Pick $x\in E^{X,Y}$ nonzero, and use density of $F$ in $E$ to find a nonzero $y\in F$ such that $y\leq x$. Since $y\in F^{X,Y}$ and $Z$ is a cover for $F^{X,Y}$, there is $z\in Z$ such that $z$ and $y$ intersect. Therefore also $z$ and $x$ intersect. This shows that $Z$ is a cover for $ E^{X,Y}$. Therefore Equation~\eqref{Equation: tight} holds, as desired. \end{proof}
\subsection{Tight representations of inverse semigroups on $L^p$-spaces}
As in the case of representation of inverse semigroups on Hilbert spaces (see \cite[Section 13]{exel_inverse_2008}), we will isolate a class of \textquotedblleft well behaved\textquotedblright\ representations of inverse semigroups on $L^{p}$-spaces. The following definition is a natural generalization of \cite[Definition 13.1]{exel_inverse_2008}.
\begin{definition} \label{Definition: tight representation inverse semigroup} Let $\lambda $ be a $\sigma $-finite Borel measure on a standard Borel space. A representation $\rho \colon \Sigma \rightarrow \mathcal{S}(L^{p}(\lambda ))$ is said to be \emph{tight }if its restriction to the idempotent semilattice $E(\Sigma )$ of $\Sigma $ is a tight representation on the Boolean algebra $\mathcal{E} (L^{p}(\lambda ))$ of hermitian idempotents. \end{definition}
\begin{remark} If $\rho \colon \Sigma \rightarrow \mathcal{S}(L^{p}(\lambda ))$ is a tight representation as above, then the net $(\rho (\sigma ))_{\sigma \in E(\Sigma )}$ converges to the identity in the strong operator topology. Thus, tightness should be thought of as a \emph{nondegeneracy} condition for representations of inverse semigroups. \end{remark}
\begin{definition} \label{Definition: regular tight} A tight representation $\rho $ of $\Sigma $ on $L^{p}(\lambda )$ is said to be \emph{regular} if, for every idempotent open bisection $U$ of $G$, the element $\rho ( U) $ is the limit of the net $ ( \rho ( V) ) _{V}$ where $V$ ranges among all idempotent open bisections of $G$ with compact closure contained in $U$, ordered by inclusion. In formulas, \begin{equation} \rho (U)=\lim_{V\in E(\Sigma _{c}(G)),\overline{V}\subseteq U}\rho (V)\text{ \label{Equation: regular relation}.} \end{equation} \end{definition}
\subsection{Representations of semigroups of bisections\label{Subsection: integrated representation slices}}
Let $G$ be an étale groupoid, let $\lambda $ be a $\sigma $-finite Borel measure on a standard Borel space, and let $\pi $ be a contractive nondegenerate representation of $C_{c}(G)$ on $L^{p}(\lambda )$. Denote by $ \Sigma _{c}(G)$ the inverse semigroup of precompact open bisections of $G$. In this subsection, we show how to associate to $\pi $ a tight, regular representation $\rho _{\pi }$ of $\Sigma _{c}(G)$ on $L^{p}(\lambda )$.
Given a precompact open bisection $A$ of $G$, $\xi \in L^{p}(\lambda )$, and $\eta \in L^{p^{\prime }}(\lambda )$, the assignment $f\mapsto \langle \pi
(f)\xi ,\eta \rangle $ is a $\| \cdot \| _{\infty }$-continuous linear functional on $C_{c}(A)$ of norm at most $\| \xi \| \| \eta \| $. By the Riesz-Markov-Kakutani representation theorem, there is a Borel measure $\mu _{A,\xi ,\eta }$ supported on $A$, of total mass at most $\| \xi \| \| \eta
\| $, such that \begin{equation} \langle \pi (f)\xi ,\eta \rangle =\int f\text{ }d\mu _{A,\xi ,\eta }\text{ \label{Equation: integral}} \end{equation} for every $f\in C_{c}(G)$. If $A,B\in \Sigma _{c}(G)$, then $\mu _{A,\xi ,\eta }$ and $\mu _{B,\xi ,\eta }$ coincide on $A\cap B$. Arguing as in \cite [page 87, and pages 98-99]{paterson_groupoids_1999}, we conclude that there is a Borel measure $\mu _{\xi ,\eta }$ defined on all of $G$, such that $\mu _{A,\xi ,\eta }$ is the restriction of $\mu _{\xi ,\eta }$ to $A$, for every $A\in \Sigma _{c}(G)$, and moreover $\langle \pi (f)\xi ,\eta \rangle =\int f\ d\mu _{\xi ,\eta }$ for every $f\in C_{c}(G)$.
\begin{lemma} \label{Lemma: dense span} The linear span of $\{ \pi(\chi_A) \xi \colon A\in \Sigma _{c}(G), \xi \in L^{p}(\lambda)\}$ is dense in $L^p(\lambda)$. \end{lemma}
\begin{proof} Let $\eta \in L^{p^{\prime }}(\lambda )$ satisfy $\langle \rho _{\pi }(A)\xi ,\eta \rangle =0$ for every $\xi \in L^{p}(\lambda )$ and every $A\in \Sigma _{c}(G)$. Since $\rho_{\pi}(A)=\pi(\chi_A)$, we have $\langle \rho _{\pi }(A)\xi ,\eta \rangle =\int \chi _{A}\ d\mu _{\xi ,\eta }$, so it follows that $\mu _{\xi ,\eta }(A)=0$ for every $\xi \in L^{p}(\lambda )$ and every $ A\in \Sigma _{c}(G)$. Thus $\langle \pi (f)\xi ,\eta \rangle =0$ for every $ f\in C_{c}(G)$ and every $\xi \in L^{p}(\lambda )$. Since $\pi $ is nondegenerate, we conclude that $\eta =0$, which finishes the proof. \end{proof}
Equation~\eqref{Equation: integral} allows one to extend $\pi $ to $B_{c}(G)$ by defining \begin{equation*} \left\langle \pi (f)\xi ,\eta \right\rangle =\int fd\mu _{\xi ,\eta } \end{equation*} for $f\in B_{c}(G)$, $\xi \in L^{p}(\lambda )$, and $\eta \in L^{p^{\prime }}(\lambda )$. Lemma~2.2.1 of \cite{paterson_groupoids_1999} shows that $\pi $ is indeed a nondegenerate representation of $B_{c}(G)$ on $L^{p}(\lambda )$. In particular, the function $\rho _{\pi }\colon A\mapsto \pi (\chi _{A})$ is a semigroup homomorphism from $\Sigma _{c}(G)$ to $B\left( L^{p}(\lambda )\right) $. We will show below that such a function is a tight, regular representation of $\Sigma _{c}(G)$ on $L^{p}(\lambda )$.
Suppose that $f\in B(G^{0})$. Define $\pi (f)\in B\left( L^{p}(\lambda )\right) $ by \begin{equation} \left\langle \pi (f)\xi ,\eta \right\rangle =\int fd\mu _{\xi ,\eta }\text{ \label{Equation: integral2}} \end{equation} for $\xi \in L^{p}(\lambda )$, and $\eta \in L^{p^{\prime }}(\lambda )$. Since \begin{equation} \pi (fg)=\pi (f)\pi (g)\text{\label{Equation: multiplicative}} \end{equation} for $f,g\in B_{c}(G^{0})$, it follows via a monotone classes argument that Equation~\eqref{Equation: multiplicative} holds for any $f,g\in B(G^{0})$. In particular, $\pi (\chi _{A})$ is an idempotent for every $A\in \mathcal{B}(G^{0})$. It follows from Lemma~\ref{Lemma: dense span} that $\pi \left( \chi _{G^{0}}\right) $ is the identity operator on $L^{p}(\lambda )$. Fix now $A\in \mathcal{B}(G^{0})$ and $r\in \mathbb{R}$. For any $\xi \in L^{p}(\lambda )$ and $\eta \in L^{p^{\prime }}(\lambda )$ such that $\left\Vert \xi \right\Vert ,\left\Vert \eta \right\Vert \leq 1$ we have that \begin{equation*} \left\vert \left\langle \left( 1+ir\pi (\chi _{A})\right) \xi ,\eta \right\rangle \right\vert =\left\vert \int \left( \chi _{G^{0}}+ir\chi _{A}\right) d\mu _{\xi ,\eta }\right\vert \leq \left\Vert \chi _{G^{0}}+ir\chi _{A}\right\Vert _{\infty }\leq 1+\frac{1}{2}r^{2}\text{.} \end{equation*} Therefore $\left\Vert 1+ir\pi (\chi _{A})\right\Vert \leq 1+\frac{1}{2}r^{2}$. This shows that $\pi (\chi _{A})$ is a hermitian idempotent on $L^{p}(\lambda )$. It follows from Equation~\eqref{Equation: integral2} and Equation~\eqref{Equation: multiplicative} that the function $A\mapsto \pi (\chi _{A})$ is a $\sigma $-complete homomorphism of Boolean algebras from $\mathcal{B}(G^{0})$ to $\mathcal{E}\left( L^{p}(\lambda )\right) $. In particular, $A\mapsto \pi \left( \chi _{A}\right) $ is a tight semilattice representation; see \cite[Proposition 11.9]{exel_inverse_2008}.
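For completeness, the sup-norm estimate used above unwinds as follows: since $\chi _{G^{0}}+ir\chi _{A}$ takes only the values $1$ and $1+ir$,

```latex
\begin{equation*}
\left\Vert \chi _{G^{0}}+ir\chi _{A}\right\Vert _{\infty }
=\max \{1,\left\vert 1+ir\right\vert \}
=\sqrt{1+r^{2}}
\leq 1+\tfrac{1}{2}r^{2}\text{,}
\end{equation*}
```

where the last inequality holds because $(1+\frac{1}{2}r^{2})^{2}=1+r^{2}+\frac{1}{4}r^{4}\geq 1+r^{2}$.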
Since, by Lemma~\ref{Lemma: dense}, the restriction of a tight representation to a dense subsemilattice---in the sense of Definition~\ref{Definition:dense-subsemilattice}---is still tight, it follows that the function $\rho _{\pi }\colon A\mapsto \pi (\chi _{A})$ for $A\in \Sigma _{c}(G)$ is a tight representation of $\Sigma _{c}(G)$ on $L^{p}(\lambda )$, which is moreover regular by $\sigma $-completeness. The same argument shows that if $\Sigma $ is an inverse subsemigroup of $\Sigma _{c}(G)$ which forms a basis for the topology of $G$, then the restriction of $\rho _{\pi }$ to $\Sigma $ is a tight, regular representation of $\Sigma $ on $L^{p}(\lambda )$.
\begin{remark} It is clear that if $\pi $ and $\widetilde{\pi }$ are $I$-norm contractive nondegenerate representations of $C_{c}(G)$ on $L^{p}$-spaces, then $\pi $ and $\widetilde{\pi }$ are equivalent if and only if $\rho _{\pi }$ and $ \rho _{\widetilde{\pi }}$ are equivalent. \end{remark}
\section{Disintegration of representations}
Throughout this section, we let $G$ be an étale groupoid and $\Sigma $ be an inverse subsemigroup of $\Sigma _{c}(G)$ that forms a basis for the topology of $G$. Let $\lambda $ be a $\sigma $-finite measure on a standard Borel space $Z$.
\subsection{The disintegration theorem}
\begin{theorem} \label{Theorem: disintegration representation inverse} If $\rho \colon \Sigma \rightarrow \mathcal{S}(L^{p}(\lambda ))$ is a tight, regular representation, then there exists a Borel map $q\colon Z\rightarrow G^{0}$ such that $\mu =q_{\ast }(\lambda )$ is a quasi-invariant Borel measure on $G^{0}$. Furthermore, if $\lambda =\int \lambda _{x}\ d\mu (x)$ denotes the disintegration of $\lambda $ with respect to $\mu $, then there exists a representation $T$ of $G$ on the Borel Banach bundle $\bigsqcup\nolimits_{x\in G^{0}}L^{p}(\lambda _{x})$, such that \begin{equation*} \langle \rho (A)\xi ,\eta \rangle =\int_{A}D(\gamma )^{-\frac{1}{p}}\langle T_{\gamma }\xi _{s(\gamma )},\eta _{r(\gamma )}\rangle \ d\nu (\gamma ) \end{equation*} for $A\in \Sigma $, for $\xi \in L^{p}(\lambda )$, and for $\eta \in L^{p^{\prime }}(\lambda )$. \end{theorem}
The rest of this section is dedicated to the proof of the theorem above. For simplicity and without loss of generality, we will focus on the case where $ \lambda $ is a probability measure. In the following, we fix a representation $\rho $ as in the statement of Theorem~\ref{Theorem: disintegration representation inverse}.
\subsection{Fibration}
Define $\Phi \colon E(\Sigma )\rightarrow \mathcal{B}_{\lambda }$ by $\Delta _{\chi _{\Phi (U)}}=\rho (U)$ for $U\in E(\Sigma )$. Denote by $\mathcal{U}$ the semilattice of open subsets of $G^{0}$. Extend $\Phi $ to a function $\mathcal{U}\rightarrow \mathcal{B}_{\lambda }$ by setting \begin{equation*} \Phi (V)=\bigcup_{W\in E(\Sigma ),\overline{W}\subseteq V}\Phi (W). \end{equation*} Then $\Delta _{\chi _{\Phi (V)}}$ is the limit in the strong operator topology of the increasing net $(\rho (W))_{W\in E(\Sigma ),\overline{W}\subseteq V}$. By Equation~\eqref{Equation: regular relation}, the expression above indeed defines an extension of $\Phi $. Moreover, a monotone classes argument shows that $\Phi $ is a representation. Tightness of $\rho $ together with Equation~\eqref{Equation: regular relation} further imply that $\Phi (U\cup V)=\Phi (U)\cup \Phi (V)$ whenever $U$ and $V$ are disjoint, and that \begin{equation} \Phi \left( \bigcup\limits_{n\in \omega }U_{n}\right) \subseteq \bigcup\limits_{n\in \omega }\Phi \left( U_{n}\right) \text{\label{Equation: countable subadditivity}} \end{equation} for any sequence $\left( U_{n}\right) _{n\in \omega }$ in $\mathcal{U}$. For $U\in \mathcal{U}$, set $m(U)=\lambda (\Phi (U))$. Using \cite[Proposition 3.2.7]{paterson_groupoids_1999}, one can extend $m$ to a Borel measure on $G^{0}$ by setting \begin{equation*} m(E)=\inf \left\{ m(U)\colon U\in \mathcal{U}\text{, }E\subseteq U\right\} \end{equation*} for $E\in \mathcal{B}(G^{0})$. Extend $\Phi $ to a homomorphism from $\mathcal{B}(G^{0})$ to $\mathcal{B}_{\lambda }$, by setting \begin{equation*} \Phi (E)=\bigwedge \left\{ \Phi (U)\colon U\in \mathcal{U}\text{, }U\supseteq E\right\} \text{.} \end{equation*} (The infimum exists by completeness of $\mathcal{B}_{\lambda }$.)
\begin{lemma} The map $\Phi\colon \mathcal{B}(G^0)\to \mathcal{B}_\lambda$ is a $\sigma $ -complete Boolean algebra homomorphism. \end{lemma}
\begin{proof} We claim that given $E_{0}$ and $E_{1}$ in $\mathcal{B}(G^{0})$, we have $\Phi (E_{0}\cap E_{1})=\Phi (E_{0})\cap \Phi (E_{1})$. \newline \indent To prove the claim, observe that if $U_{j}$ is an open set containing $E_{j}$ for $j\in \{0,1\}$, then $\Phi (U_{0}\cap U_{1})=\Phi (U_{0})\cap \Phi (U_{1})$, and thus $\Phi (E_{0}\cap E_{1})\subseteq \Phi (E_{0})\cap \Phi (E_{1})$. In order to prove that equality holds, it is enough to show that given $\varepsilon >0$, we have \begin{equation*} \lambda \left( \Phi (E_{0}\cap E_{1})\right) \geq \lambda \left( \Phi \left( E_{0}\right) \cap \Phi \left( E_{1}\right) \right) -\varepsilon . \end{equation*} Fix an open set $U$ containing $E_{0}\cap E_{1}$ such that $m(U)\leq m(E_{0}\cap E_{1})+\varepsilon $. Let $V_{0}$ and $V_{1}$ be open sets satisfying $E_{j}\setminus (E_{0}\cap E_{1})\subseteq V_{j}$ for $j=0,1$, and $m(V_{j})\leq m(E_{j}\setminus (E_{0}\cap E_{1}))+\varepsilon $. For $j=0,1$, set $U_{j}=U\cup V_{j}$. Then $U_{j}\supseteq E_{j}$ and \begin{align*} \lambda (\Phi (E_{0}\cap E_{1}))& =m(E_{0}\cap E_{1})\geq m(U)-\varepsilon \geq m(U_{0}\cap U_{1})-3\varepsilon \\ & =\lambda (\Phi (U_{0}\cap U_{1}))-3\varepsilon =\lambda (\Phi (U_{0})\cap \Phi (U_{1}))-3\varepsilon \geq \lambda (\Phi (E_{0})\cap \Phi (E_{1}))-3\varepsilon \text{.} \end{align*} We have therefore shown that $\Phi (E_{0}\cap E_{1})=\Phi (E_{0})\cap \Phi (E_{1})$, so the claim is proved. It remains to show that if $(E_{n})_{n\in \omega }$ is a sequence of pairwise disjoint Borel subsets of $G^{0}$, then \begin{equation*} \Phi \left( \bigcup\limits_{n\in \omega }E_{n}\right) =\bigcup\limits_{n\in \omega }\Phi \left( E_{n}\right) \text{.} \end{equation*} By Equation~\eqref{Equation: countable subadditivity}, the left-hand side is contained in the right-hand side.
On the other hand, we have \begin{equation*} \lambda \left( \Phi \left( \bigcup\limits_{n\in \omega }E_{n}\right) \right) =m\left( \bigcup\limits_{n\in \omega }E_{n}\right) =\sum\limits_{n\in \omega }m(E_{n})=\sum\limits_{n\in \omega }\lambda (\Phi (E_{n}))=\lambda \left( \bigcup\limits_{n\in \omega }\Phi (E_{n})\right) , \end{equation*} so we conclude that equality holds, and the proof is complete. \end{proof}
By \cite[Theorem~15.9]{kechris_classical_1995}, there is a Borel function $ q\colon Z\rightarrow G^{0}$ such that $\Phi (E)=q^{-1}(E)$ for every $E\in \mathcal{B}(G^{0})$. Moreover, the map $q$ is unique up to $\lambda $-almost everywhere equality.
\subsection{Measure}
Define a Borel probability measure $\mu$ on $G^0$ by $\mu =q_{\ast }(\lambda) $. Consider the disintegration $\lambda =\int \lambda _{x}\ d\mu (x) $ of $\lambda $ with respect to $\mu $, the Borel Banach bundle $ \mathcal{Z}=\bigsqcup\limits_{x\in G^{0}}L^{p}(\lambda_x)$, and identify $ L^{p}(\lambda) $ with $L^{p}(\mu,\mathcal{Z})$ as in Theorem~\ref{thm: BBb structure}.
For $A\in \Sigma $, denote by $\theta _{A}\colon A^{-1}A\rightarrow AA^{-1}$ the homomorphism defined by $\theta _{A}(x)=r(Ax)$ for $x\in A^{-1}A$. Since $\rho (A)$ is a spatial partial isometry with domain $\Phi (s(A))$ and range $\Phi (r(A))$, there are a Borel function $g_{A}\colon \Phi (r(A))\rightarrow \mathbb{C}$ and a Borel isomorphism $\phi _{A}\colon \Phi (s(A))\rightarrow \Phi (r(A))$ such that \begin{equation} (\rho (A)\xi )_{z}=g_{A}(z)\xi (\phi _{A}^{-1}(z))\text{\label{Equation: spatial partial isometry}} \end{equation} for $z\in \Phi (r(A))$. We claim that $(q\circ \phi _{A})(z)=(\theta _{A}\circ q)(z)$ for $\lambda $-almost every $z\in \Phi (r(A))$. By the uniqueness assertion in \cite[Theorem~15.9]{kechris_classical_1995}, it is enough to show that $(\theta _{A}\circ q\circ \phi _{A}^{-1})^{-1}(U)=\Phi (U)$ for every $U\in E(\Sigma )$ with $U\subseteq r(A)$. We have \begin{equation*} (\theta _{A}\circ q\circ \phi _{A}^{-1})^{-1}(U)=(\phi _{A}\circ q^{-1}\circ \theta _{A}^{-1})(U)=\phi _{A}(\Phi (\theta _{A}^{-1}(U)))=\phi _{A}(\Phi (A^{-1}UA)). \end{equation*}
Given $\xi \in L^{p}(\lambda |_{\Phi (r(A))})$, set $\eta =\xi \circ \phi _{A}$. Then \begin{align*} \Delta _{\chi _{\phi _{A}\left( \Phi \left( A^{-1}UA\right) \right) }}\xi & =\left( \Delta _{\chi _{\Phi \left( A^{-1}UA\right) }}\eta \right) \circ \phi _{A}^{-1}=\left( \rho \left( A^{-1}UA\right) \eta \right) \circ \phi _{A}^{-1}=\left( \rho (A)^{-1}\rho (U)\rho (A)\eta \right) \circ \phi _{A}^{-1} \\ & =\left( \rho (A)^{-1}\rho (U)g_{A}\xi \right) \circ \phi _{A}^{-1}=\left( \rho (A)^{-1}\chi _{\Phi (U)}g_{A}\xi \right) \circ \phi _{A}^{-1} \\ & =\left( \left( g_{A}\circ \phi _{A}\right) ^{-1}\left( \chi _{\Phi (U)}\circ \phi _{A}\right) \left( g_{A}\circ \phi _{A}\right) \eta \right) \circ \phi _{A}^{-1}=\chi _{\Phi (U)}\xi =\Delta _{\chi _{\Phi (U)}}\xi \text{.} \end{align*} Thus $\Phi (U)=\phi _{A}(\Phi (A^{-1}UA))=(\theta _{A}\circ q\circ \phi _{A}^{-1})^{-1}(U)$, and hence $(\theta _{A}\circ q\circ \phi _{A}^{-1})(z)=q(z)$ for $\lambda $-almost every $z\in \Phi (r(A))$, as desired. The claim is proved. It is shown in \cite[Proposition 3.2.2]
{paterson_groupoids_1999} that $\mu $ is quasi-invariant whenever $(\theta _{A})_{\ast }\mu |_{s(A)}\sim \mu |_{r(A)}$ for every open bisection $A$ of $ G$. The same proof in fact shows that it is sufficient to check this condition for $A\in \Sigma $. Given $A\in \Sigma $, we have \begin{eqnarray*}
\mu |_{r(A)} &=&q_{\ast }\lambda |_{\Phi (r(A))}\sim q_{\ast }((\phi _{A})_{\ast }\lambda |_{\Phi (s(A))})=(q\circ \phi _{A})_{\ast }\lambda
|_{\Phi (s(A))}=(\theta _{A}\circ q)_{\ast }\lambda |_{\Phi (s(A))} \\
&=&(\theta _{A})_{\ast }(q_{\ast }\lambda |_{\Phi (s(A))})=(\theta _{A})_{\ast }\mu |_{s(A)}\text{,} \end{eqnarray*} so $\mu $ is quasi-invariant.
\subsection{Disintegration}
For $x\in G^{0}$, set $Z_{x}=q^{-1}(\{x\})$, and note that $Z_{x}=\Phi (\{x\})$. Given $A\in \Sigma $, regard $\rho (A)$ as a surjective linear isometry \begin{equation*}
\rho (A)\colon L^{p}(\lambda |_{\Phi (s(A))})\rightarrow L^{p}(\lambda
|_{\Phi (r(A))}). \end{equation*}
Let $\mathcal{Z}$ denote the Borel Banach bundle $\bigsqcup\nolimits_{x\in G^{0}}L^{p}(\lambda _{x})$, and identify $L^{p}(\lambda |_{\Phi (s(A))})$
and $L^{p}(\lambda |_{\Phi (r(A))})$ with $L^{p}(\mu |_{s(A)},\mathcal{Z}
|_{s(A)})$ and $L^{p}(\mu |_{r(A)},\mathcal{Z}|_{r(A)})$, respectively. If $ U\in E(\Sigma )$ satisfies $U\subseteq r(A)$, one uses $\rho (A^{-1}UA)=\rho (A)^{-1}\rho (U)\rho (A)$ to show that \begin{equation*} \Delta _{U}\circ \rho (A)=\rho (A)\circ \Delta _{\theta _{A}(U)}\text{.} \end{equation*}
By Theorem~\ref{Theorem: Guichardet}, there is a Borel section $x\mapsto T_{x}^{A}$ of $B(\mathcal{Z}|_{s(A)},\mathcal{Z}|_{r(A)},\theta _{A})$ consisting of invertible isometries, such that \begin{equation*}
(\rho (A)\xi )|_{Z_{y}}=\left( \frac{d(\theta _{A})_{\ast }\mu }{d\mu }(y)\right) ^{\frac{1}{p}}T_{\theta _{A}^{-1}(y)}^{A}\xi |_{Z_{\theta _{A}^{-1}(y)}} \end{equation*} for $\mu $-almost every $y\in r(A)$. Since \begin{equation*}
\left( \rho (A)\xi \right) |_{Z_{y}}=\left( g_{A}\right) |_{Z_{y}}\cdot \left( \xi |_{Z_{\theta _{A}^{-1}(y)}}\circ \left( (\phi _{A})|_{Z_{\theta _{A}^{-1}(y)}}^{|Z_{y}}\right) ^{-1}\right) \end{equation*} for $\mu $-almost every $y\in r(A)$, we have \begin{equation*}
T_{x}^{A}\xi =\left( \frac{d(\theta _{A})_{\ast }\mu }{\ d\mu }\left( \theta _{A}(x)\right) \right) ^{\frac{1}{p}}\left( g_{A}\right) |_{Z_{\theta _{A}(x)}}\left( \xi \circ \left( (\phi _{A})|_{Z_{x}}^{|Z_{\theta _{A}(x)}}\right) ^{-1}\right) \end{equation*} for $\mu $-almost every $x\in s(A)$. Fix $A,B\in \Sigma $. Since $\rho $ is a representation, $\rho (AB)=\rho (A)\rho (B)$. Therefore it follows from the uniqueness of the direct integral representation of a decomposable operator---see Remark \ref{Remark:uniqueness}---that $T_{x}^{AB}=T_{\theta _{B}(x)}^{A}T_{x}^{B}$ for $\mu $-a.e. $x\in s(AB)$. Similarly if $A,B\in \Sigma $ and $U\in E(\Sigma )$ is such that $AU=BU$, then the uniqueness of the direct integral representation of a decomposable operator shows that $ T_{x}^{A}=T_{x}^{B}$ for $\mu $-a.e. $x\in U$. Since $E(\Sigma )$ is a basis for the topology of $G^{0}$ we conclude that $T_{x}^{A}=T_{x}^{B}$ for $\mu $ -a.e. $x\in s(A)\cap s(B)$ such that $Ax=Bx$. A similar argument shows that, if $A\in \Sigma $, then $(T_{x}^{A})^{-1}=T_{\theta _{A}(x)}^{A^{-1}}$ for $ \mu $-a.e. $x\in s(A)$. It follows that, up to discarding a $\nu $-null set, the assignment $T\colon G\rightarrow \mathrm{Iso}(\mathcal{Z})$ given by $ T_{\gamma }=T_{s(\gamma )}^{A}$ for some $A\in \Sigma $ containing $\gamma $ , determines a representation of $G$ on $\mathcal{Z}$. Indeed let $X$ be the set of $x\in G^{0}$ such that
\begin{enumerate} \item for every $A,B\in \Sigma $ such that $x\in s(AB)$, $ T_{x}^{AB}=T_{\theta _{B}(x)}^{A}T_{x}^{B}$,
\item for every $A\in \Sigma $ such that $x\in s(A)$, $(T_{x}^{A})^{-1}=T_{ \theta _{A}(x)}^{A^{-1}}$,
\item for every $A,B\in \Sigma $ such that $x\in s(A)\cap s(B)$ and $Ax=Bx$, $T_{x}^{A}=T_{x}^{B}$. \end{enumerate}
Then by the discussion above and since $\Sigma $ is countable, $X$ is a $\mu
$-conull subset of $G^{0}$. We claim that the restriction of $T$ to $G|_{X}$ is a groupoid homomorphism. Indeed if $\gamma ,\rho $ are elements of $
G|_{X} $ such that $r(\rho )=s(\gamma )$ and $B,A\in \Sigma $ are such that $ \gamma \in A$ and $\rho \in B$, then, since $s(\rho ),s(\gamma )\in X$, by (1) and (3) we get that $T_{\gamma }T_{\rho }=T_{s(\gamma )}^{A}T_{s(\rho )}^{B}=T_{s(\rho )}^{AB}=T_{\gamma \rho }$. Similarly applying (2) and (3)
one obtains that $T_{\gamma }^{-1}=T_{\gamma ^{-1}}$ for any $\gamma \in G|_{X}$. This concludes the proof that the restriction of $T$ to $G|_{X}$ is a groupoid homomorphism. If $A\in \Sigma $ then the maps $\gamma \mapsto T_{s(\gamma )}^{A}$ and $\gamma \mapsto T_{\gamma }$ agree on $A\cap G|_{X}$ . Since $x\mapsto T_{x}^{A}$ is a Borel map on $s(A)$ and $\Sigma $ is a countable basis for the topology of $G$, it follows that the function $
\gamma \mapsto T_{\gamma }$ is Borel on $G|_{X}$. This concludes the proof that $T$ is a representation of $G$ on the Banach bundle $\mathcal{Z}$ in the sense of Definition \ref{Definition: representation on Banach bundle}. It is a consequence of Equation~\eqref{Equation: spatial partial isometry} that \begin{equation*} \left\langle \rho (A)\xi ,\eta \right\rangle =\int D\left( xA\right) ^{- \frac{1}{p}}\left\langle T_{xA}\xi _{\theta _{A}^{-1}(x)},\eta _{x}\right\rangle \ d\mu (x)\text{,} \end{equation*} for every $\xi \in L^{p}(\mathcal{Z},\mu )$ and every $\eta \in L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })$. This concludes the proof of Theorem~\ref {Theorem: disintegration representation inverse}.
\subsection{Correspondence}
As above, let $G$ be an étale groupoid, and let $\Sigma _{c}\left( G\right) $ be the inverse semigroup of precompact open bisections of $G$. Let $\Sigma $ be an inverse subsemigroup of $\Sigma _{c}\left( G\right) $ that forms a basis for the topology of $G$.
Suppose that $\pi $ is a contractive representation of $C_{c}(G)$ on $ L^{p}(\lambda )$. It is shown in Subsection \ref{Subsection: integrated representation slices} that, identifying $L^{p}\left( \lambda \right) $ with $L^{p}\left( \mu ,\mathcal{Z}\right) $, $\pi $ induces a tight, regular representation of $\Sigma _{c}\left( G\right) $ on $L^{p}\left( \lambda \right) $.\ Since $\Sigma $ forms a basis for the topology of $G$, the restriction $\rho _{\pi }$ of such a representation to $\Sigma $ is still tight by Lemma \ref{Lemma: dense}.
Let now $(T,\mu )$ be a representation of $G$ on an $L^{p}$-bundle $ \bigsqcup\nolimits_{x\in G^{0}}L^{p}(\lambda _{x})$. Setting $\lambda :=\int \lambda _{x}d\mu \left( x\right) $, one can identify $L^{p}\left( \mu , \mathcal{Z}\right) $ with $L^{p}\left( \lambda \right) $ by Theorem \ref {thm: BBb structure}. Then one can consider the integrated form $\pi _{T}$ of $T$, which is a representation of $C_{c}\left( G\right) $ on $L^{p}\left( \lambda \right) $. Then we set, following the notation above, $\rho _{T}:=\rho _{\pi _{T}}$, which is a tight, regular representation of $\Sigma $ on $L^{p}\left( \lambda \right) $. An inspection of the definition of $ \rho _{\pi }$ from Subsection \ref{Subsection: integrated representation slices} shows that one can explicitly define $\rho _{T}$ via the formula \begin{equation*} \langle \rho _{T}(A)\xi ,\eta \rangle =\int_{r(A)}D^{-\frac{1}{p} }(xA)\left\langle T_{xA}\xi _{\theta _{A}^{-1}(x)},\eta _{x}\right\rangle \ d\mu (x) \end{equation*} for all $A\in \Sigma $, for all $\xi \in L^{p}(\mu ,\mathcal{Z})$, and all $ \eta \in L^{p^{\prime }}(\mu ,\mathcal{Z}^{\prime })$.
\begin{theorem} \label{Theorem: correspondence representations} Adopt the notation of the comments above.
\begin{enumerate} \item The assignment $T\mapsto \rho_T$ determines a bijective correspondence between representations of $G$ on $L^p$-bundles and tight regular representations of $\Sigma $ on $L^p$-spaces.
\item The assignment $\pi \mapsto \rho _{\pi }$ determines a bijective correspondence between contractive representations of $C_{c}(G)$ on $L^{p}$-spaces and tight regular representations of $\Sigma $ on $L^{p}$-spaces.
\item The assignment $T\mapsto \pi _{T}$ is a bijective correspondence between representations of $G$ on $L^{p}$-bundles and contractive representations of $C_{c}(G)$ on $L^{p}$-spaces. \end{enumerate}
Moreover, the correspondences in (1), (2), and (3) preserve the natural relations of equivalence of representations. \end{theorem}
\begin{proof} First we show that, given a representation $T$ of $G$ on an $L^{p}$-bundle $\mathcal{Z}$, the corresponding representation $\rho _{T}$ of $\Sigma $ is tight. Consider the repr...
(1). This is an immediate consequence of the Disintegration Theorem~\ref {Theorem: disintegration representation inverse}.
(2). Suppose that $\rho $ is a tight representation of $\Sigma $ on $ L^{p}(\lambda )$. Applying the Disintegration Theorem~\ref{Theorem: disintegration representation inverse} one obtains a representation $\left( \mu ,T\right) $ of $G$ on the bundle $\bigsqcup_{x\in G^{0}}L^{p}(\lambda _{x})$ for a disintegration $\lambda =\int \lambda _{x}d\mu \left( x\right) $ . One can then assign to $\rho $ the integrated form $\pi _{\rho }$ of $ \left( \mu ,T\right) $. It is easy to verify that the maps $\rho \mapsto \pi _{\rho }$ and $\pi \mapsto \rho _{\pi }$ are mutually inverse.
Finally (3) follows from combining (1) and (2). \end{proof}
\section{\texorpdfstring{$L^p$}{Lp}-operator algebras of étale groupoids}
Throughout this section, we fix a H\"older exponent $p\in (1,\infty)$.
\subsection{$L^p$-operator algebras\label{Subsection: Lp operator algebras}}
\begin{definition} A \emph{concrete }$L^{p}$-\emph{operator algebra} is a subalgebra $A$ of $ B(L^{p}(\lambda ))$ for some $\sigma $-finite Borel measure $\lambda $ on a standard Borel space. The identification of $M_{n}(A)$ with a subalgebra of $ B(L^{p}(\lambda ^{(n)}))$ induces a norm on $M_{n}(A)$. The collection of such norms defines a $p$-operator space structure on $A$ as in \cite[Section 4.1]{daws_p-operator_2010}. Moreover the multiplication on $A$ is a $p$ -completely contractive bilinear map. Equivalently $M_{n}(A)$ is a Banach algebra for every $n\in \mathbb{N}$.
An \emph{abstract $L^{p}$-operator algebra} is a Banach algebra $A$ endowed with a $p$-operator space structure, which is $p$-completely isometrically isomorphic to a concrete $L^{p}$-operator algebra. \end{definition}
Let $A$ be a separable matricially normed algebra and let $\mathcal{R}$ be a collection of $p$-completely contractive nondegenerate representations of $A$ on $L^{p}$-spaces. Set $I_{\mathcal{R}}=\bigcap\limits_{\pi \in \mathcal{R}} \mathrm{Ker}(\pi )$. Then $I_{\mathcal{R}}$ is an ideal in $A$. Arguing as in \cite[Section 1.2.16]{blecher_operator_2004}, the completion $F^{\mathcal{ R}}(A)$ of $A/I_{\mathcal{R}}$ with respect to the norm \begin{equation*}
\| a+I_{\mathcal{R}}\| =\sup \{\| \pi (a)\| \colon \pi \in \mathcal{R}\} \end{equation*} for $a\in A$, is a Banach algebra. Moreover, $F^{\mathcal{R}}(A)$ has a natural $p$-operator space structure that makes it into an (abstract) $L^{p}$ -operator algebra.
\begin{remark} If $\mathcal{R}$ separates the points of $A$, then the ideal $I_{\mathcal{R} } $ is trivial, and hence the canonical map $A\to F^{\mathcal{R}}(A) $ is an injective $p$-completely contractive homomorphism. \end{remark}
\begin{definition} Let $\mathcal{R}^{p}$ be the collection of \emph{all }$p$-completely contractive nondegenerate representations of $A$ on $L^{p}$-spaces associated with $\sigma $-finite Borel measures on standard Borel spaces. Then $F^{\mathcal{R}^{p}}(A)$ is abbreviated to $F^{p}(A)$, and called the \emph{enveloping $L^{p}$-operator algebra} of $A$. \end{definition}
Suppose further that $A$ is a matricially normed *-algebra with a completely isometric \emph{linear} involution $a\mapsto \check{a}$. (For example, for an \'etale groupoid $G$, we endow $C_c(G)$ with its $I$-norm and the linear involution $f\mapsto \check{f}$ defined before Definition~\ref{Definition: dual representation}.) If $\pi \colon A\rightarrow B(L^{p}(\lambda ))$ is a $ p$-completely contractive nondegenerate representation as before, then the dual representation of $\pi $ is the $p^{\prime }$-completely contractive nondegenerate representation $\pi ^{\prime }$ given by $\pi ^{\prime }(a)=\pi (a^{\ast })^{\prime }$ for all $a\in A$.\newline \indent Let $\mathcal{R}$ be a collection of $p$-completely contractive nondegenerate representations of $A$ on $L^{p}$-spaces, and denote by $ \mathcal{R}^{\prime }$ the collection of duals of elements of $\mathcal{R}$. It is immediate that the involution of $A$ extends to a $p$-completely isometric anti-isomorphism $F^{\mathcal{R}}(A)\rightarrow F^{\mathcal{R} ^{\prime }}(A)$. Finally, since $(\mathcal{R}^{p})^{\prime }=\mathcal{R} ^{p^{\prime }}$, the discussion above shows that the involution of $A$ extends to a $p$-completely isometric anti-isomorphism $F^{p}(A)\rightarrow F^{p^{\prime }}(A)$.
\subsection{The full $L^p$-operator algebra of an étale groupoid}
Let $G$ be an étale groupoid. Recall that we can regard $C_{c}(G)$ as a matricially normed *-algebra where $M_{n}(C_{c}(G))$ is endowed with the $I$ -norm described in Subsection \ref{Subsection: Amplification of representations}.
\begin{definition} \label{Definition:full}We define the \emph{full }$L^{p}$-\emph{operator algebra }$F^{p}(G)$ of $G$ to be the enveloping $L^{p}$-operator algebra of the matricially normed *-algebra $C_{c}(G)$. \end{definition}
\begin{remark} By Proposition \ref{Proposition: separates points}, when $G$ is Hausdorff the family of $p$-completely contractive nondegenerate representations of $ C_{c}(G)$ on $L^{p}$-spaces separates the points of $C_{c}(G)$, and hence the canonical map $C_{c}(G)\rightarrow F^{p}(G)$ is injective. \end{remark}
The proof of the following is straightforward, and is left to the reader.
\begin{proposition} \label{Proposition: correspondence Cc and Fp} The correspondence sending a $ p $-completely contractive representation of $F^{p}(G)$ on an $L^{p}$-space to its restriction to $C_{c}(G)$, is a bijective correspondence between $p$ -completely contractive representations of $F^{p}(G)$ on $L^{p}$-spaces and $ p$-completely contractive representations of $C_{c}(G)$ on $L^{p}$-spaces. \end{proposition}
\begin{definition} Let $\Sigma $ be an inverse semigroup, and consider the matricially normed *-algebra structure on $\mathbb{C}\Sigma $ described in Definition \ref {Definition:CSIGMA}. Denote by $\mathcal{R}_{\mathrm{tight}}^{p}$ the collection of tight representations of $\Sigma $ on $L^{p}$-spaces. We define the \emph{tight enveloping $L^{p}$-operator algebra} of $\Sigma $, denoted $F_{\mathrm{tight}}^{p}(\Sigma )$, to be $F^{\mathcal{R}_{\mathrm{ tight}}^{p}}(\mathbb{C}\Sigma )$. \end{definition}
\begin{remark} Since the dual of a tight representation is also tight, it follows that the involution on $\mathbb{C}\Sigma $ extends to a $p$-completely isometric anti-isomorphism $F_{\mathrm{tight}}^{p}(\Sigma )\rightarrow F_{\mathrm{tight }}^{p^{\prime }}(\Sigma )$. \end{remark}
From Theorem \ref{Theorem: correspondence representations} we can deduce the following corollary.
\begin{corollary} \label{Corollary: automatically completely}If $A$ is an $L^{p}$-operator algebra, then any contractive homomorphism from $C_{c}(G)$ or $F^{p}(G)$ to $ A$ is automatically $p$-completely contractive. \end{corollary}
\begin{proof} It is enough to show that any contractive nondegenerate representation of $ C_{c}(G)$ on an $L^{p}$-space is $p$-completely contractive. This follows from part (3) of Theorem~\ref{Theorem: correspondence representations}, together with the fact that the integrated form of a representation of $G$ on an $L^{p}$-bundle is $p$-completely contractive, as observed in Subsection \ref{Subsection: Amplification of representations}. \end{proof}
\begin{corollary} \label{cor: tight inv smgp and gpid alg} Adopt the assumptions of Theorem~ \ref{Theorem: correspondence representations}, and suppose moreover that $G$ is ample. Then the $L^{p}$-operator algebras $F^{p}(G)$ and $F_{\mathrm{tight }}^{p}(\Sigma )$ are $p$-completely isometrically isomorphic. In particular, $F^{p}(G)$ is generated by its spatial partial isometries. \end{corollary}
\begin{proof} Observe that when $G$ is ample, and $\Sigma $ is the inverse semigroup of compact open slices, any tight representation of $\Sigma $ on an $L^{p}$ -space is automatically regular. Thus the statement follows from part (2) of Theorem~\ref{Theorem: correspondence representations}. \end{proof}
\subsection{Reduced $L^{p}$-operator algebras of étale groupoids\label {Subsection: reduced}}
Let $\mu $ be a (not necessarily quasi-invariant) Borel probability measure on $G^{0}$, and let $\nu $ be the measure on $G$ associated with $\mu $ as in Subsection \ref{Subsection: background on groupoids}. Denote by $\mathrm{ Ind}(\mu )\colon C_{c}(G)\rightarrow B(L^{p}(\nu ^{-1}))$ the left action by convolution. Then $\mathrm{Ind}(\mu )$ is contractive and nondegenerate.
\begin{remark} When $\mu $ is quasi-invariant, the representation $\mathrm{Ind}(\mu )$ is the integrated form of the left regular representation $T^{\mu }$ of $G$ on $ \bigsqcup\nolimits_{x\in G^{0}}\ell ^{p}(xG)$ as defined in Subsection \ref {Subsection: representations groupoids Lp-bundles}. When $G$ is Hausdorff, the same argument as in Lemma~\ref{Lemma: faithful measure} shows that a function $f$ in $C_{c}(G)$ belongs to $\mathrm{Ker}(\mathrm{Ind}(\mu ))$ if and only if $f$ vanishes on the support of $\nu $. \end{remark}
\begin{definition} Define $\mathcal{R}_{\mathrm{red}}^{p}$ to be the collection of representations $\mathrm{Ind}(\mu )$ where $\mu $ varies among the Borel probability measures on $G^{0}$. The \emph{reduced }$L^{p}$-\emph{operator algebra} $F_{\mathrm{red}}^{p}(G)$ of $G$ is the enveloping $L^{p}$
-operator algebra $F^{\mathcal{R}_{\mathrm{red}}^{p}}(C_{c}(G))$. The norm on $F_{\mathrm{red}}^{p}(G)$ is denoted by $\| \cdot \| _{\mathrm{red}} $. \end{definition}
The identity map on $C_{c}(G)$ induces a canonical $p$-completely contractive homomorphism $F^{p}(G)\rightarrow F_{\mathrm{red}}^{p}(G)$ with dense range. By Proposition \ref{Proposition: separates points}, when $G$ is Hausdorff the family $\mathcal{R}_{\mathrm{red}}^{p}$ separates points, and hence the canonical map $C_{c}(G)\rightarrow F_{\mathrm{red}}^{p}(G)$ is injective.
\begin{remark} The dual of $\mathrm{Ind}(\mu )\colon C_{c}(G)\rightarrow B(L^{p}(\nu ^{-1})) $ is the representation $\mathrm{Ind}(\mu )\colon C_{c}(G)\rightarrow B(L^{p^{\prime }}(\nu ))$, and thus the involution on $ C_{c}(G)$ extends to a $p$-completely isometric anti-isomorphism $F_{\mathrm{ red}}^{p}(G)\rightarrow F_{\mathrm{red}}^{p^{\prime }}(G)$. \end{remark}
For $x\in G^{0}$, we write $\delta _{x}$ for its associated point mass measure, and write $\mathrm{Ind}(x)$ in place of $\mathrm{Ind}(\delta _{x})$. In this case, $\nu $ is the counting measure $c_{xG}$ on $xG$, and $\nu ^{-1}$ is the counting measure $c_{Gx}$ on $Gx$. Moreover, $\mathrm{Ind}(x)$ is given by \begin{equation*} (\mathrm{Ind}(x)(f)\xi )(\rho )=\sum\limits_{\gamma \in r(\rho )G}f(\gamma )\xi (\gamma ^{-1}\rho ) \end{equation*} for $f\in C_{c}(G)$, $\xi \in L^{p}(\nu ^{-1})$, and $\rho \in Gx$. The same argument as in the proof of \cite[Proposition 3.1.2]{paterson_groupoids_1999} gives the following.
\begin{proposition} \label{Proposition: induced representations} Let $\mu $ be a probability measure on $G^{0}$. If $f\in C_{c}(G)$, then \begin{equation*}
\| \mathrm{Ind}(\mu )f\| =\sup_{x\in \mathrm{supp}(\mu )}\| \mathrm{Ind}
(x)(f)\| \text{.} \end{equation*} \end{proposition}
\begin{corollary} The algebra $F_{\mathrm{red}}^{p}(G)$ of $G$ is $p$-completely isometrically isomorphic to the enveloping $L^{p}$-operator algebra $F^{\mathcal{R} }(C_{c}(G))$ with respect to the family of representations $\mathcal{R}=\{ \mathrm{Ind}(x)\colon x\in G^{0}\}$. \end{corollary}
\subsection{Amenable groupoids and their $L^{p}$-operator algebras}
There are several equivalent characterizations of amenability for étale groupoids. By \cite[Theorem~2.2.13]{anantharaman-delaroche_amenable_2000}, an étale groupoid is amenable if and only if it has an approximate invariant mean; see \cite[Definition 4.1.1]{renault_c*-algebras_2009}.
\begin{lemma} \label{Lemma: amenable amplification} If $G$ is amenable and $m\geq 1$, then its amplification $G_{m}$ is amenable. \end{lemma}
\begin{proof} Let $( f_{n}) _{n\in \omega }$ be an approximate invariant mean for $G$. For $n\in\omega$, define $f^{(m)}_n\colon C_c(G_m)\to\mathbb{C}$ by $f_{n}^{(m) }( i,\gamma ,j) =\frac{1}{m}f_{n}(\gamma)$ for $(i,\gamma,j)\in G_m$. It is not difficult to verify that $( f_{n}^{(m) }) _{n\in \omega }$ is an approximate invariant mean for $G_m$. We omit the details. \end{proof}
\begin{definition} A pair of sequences $(g_n) _{n\in \omega }$ and $( h_{n})_{n\in \omega }$ of positive functions in $C_c(G) $ is said to be an \emph{approximate invariant }$p$\emph{-mean} for $G$, if they satisfy the following:
\begin{enumerate} \item $\sum\nolimits_{\gamma \in xG}g_{n}(\gamma )^{p}\leq 1$ and $ \sum\nolimits_{\gamma \in xG}h_{n}(\gamma )^{p^{\prime }}\leq 1$ for every $ n\in \omega $ and every $x\in G^{0}$,
\item the sequence of functions $x\mapsto \sum\nolimits_{\rho \in xG}g_{n}(\rho )h_{n}(\rho )$ converges to $1$ uniformly on compact subsets of $G^{0}$, and
\item the sequences of functions \begin{equation*}
\gamma \mapsto \sum\limits_{\rho \in r(\gamma )G}|g_{n}(\gamma ^{-1}\rho
)-g_{n}(\rho )|^{p}\quad \text{and}\quad \gamma \mapsto \ \sum\limits_{\rho
\in r(\gamma )G}|h_{n}(\gamma ^{-1}\rho )-h_{n}(\rho )|^{p^{\prime }} \end{equation*} converge to $0$ uniformly on compact subsets of $G$. \end{enumerate} \end{definition}
It is not difficult to see that any amenable groupoid has an approximate invariant $p$-mean. Indeed, if $( f_{n}) _{n\in \omega }$ is any approximate invariant mean on $G$, then the sequences $( f_{n}^{1/p}) _{n\in \omega }$ and $( f_{n}^{1/p^{\prime }}) _{n\in\omega }$ define an approximate invariant $p$-mean on $G$.
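Indeed, writing $g_{n}=f_{n}^{1/p}$ and $h_{n}=f_{n}^{1/p^{\prime }}$, conditions (1) and (2) follow at once from the corresponding properties of $( f_{n}) _{n\in \omega }$, since \begin{equation*} \sum\limits_{\gamma \in xG}g_{n}(\gamma )^{p}=\sum\limits_{\gamma \in xG}h_{n}(\gamma )^{p^{\prime }}=\sum\limits_{\gamma \in xG}f_{n}(\gamma )\quad \text{and}\quad \sum\limits_{\rho \in xG}g_{n}(\rho )h_{n}(\rho )=\sum\limits_{\rho \in xG}f_{n}(\rho )\text{,} \end{equation*} while condition (3) follows from the elementary inequality $|s^{1/p}-t^{1/p}|^{p}\leq |s-t|$ for $s,t\geq 0$, applied with $s=f_{n}(\gamma ^{-1}\rho )$ and $t=f_{n}(\rho )$.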
\begin{remark} It is easy to check that if $( g_{n},h_{n}) _{n\in \omega }$ is an approximate invariant $p$-mean on $G$, then $( h_{n}\ast g_{n}) _{n\in \omega }$ converges to $1$ uniformly on compact subsets of $G$. \end{remark}
The following theorem asserts that full and reduced $L^{p}$-operator algebras of amenable étale groupoids are canonically isometrically isomorphic. The proof is the straightforward generalization of \cite[Theorem 4.2.1]{renault_c*-algebras_2009} where $2$ is replaced by $p$, and approximate invariant means are replaced by approximate invariant $p$-means.
\begin{theorem} \label{thm: amenable groupoid} Suppose that $G$ is amenable. Then the canonical homomorphism $F^{p}(G)\rightarrow F_{\mathrm{red}}^{p}(G)$ is a $p$ -completely isometric isomorphism. \end{theorem}
\section{Examples: analogs of Cuntz algebras and AF-algebras}
Throughout this section, we let $p\in (1,\infty)$.
\subsection{The Cuntz $L^p$-operator algebras}
Fix $d\in \omega $ with $d\geq 2$. The following is \cite[Definition 1.1] {phillips_analogs_2012} and \cite[Definition 7.4 (2)]{phillips_analogs_2012} . Algebra representations of complex unital algebras are always assumed to be unital.
\begin{definition} \label{Definition: Leavitt} Define the \emph{Leavitt algebra} $L_{d}$ to be the universal (complex) algebra with generators $s_{0},\ldots ,s_{d-1},s_{0}^{\ast },\ldots ,s_{d-1}^{\ast }$, subject to the relations $ \sum\nolimits_{j\in d}s_{j}s_{j}^{\ast }=1$ and $s_{j}^{\ast }s_{k}=\delta _{j,k}$ for $j,k\in d$.
If $\lambda $ is a $\sigma $-finite Borel measure on a standard Borel space, a \emph{spatial representation} of $L_{d}$ on $L^{p}(\lambda )$ is an algebra homomorphism $\rho \colon L_{d}\rightarrow B(L^{p}(\lambda ))$ such that for $j\in d$, the operators $\rho (s_{j})$ and $\rho (s_{j}^{\ast })$ are mutually inverse spatial partial isometries, i.e.\ $\rho (s_{j}^{\ast })=\rho (s_{j})^{\ast }$. \end{definition}
It is a consequence of a fundamental result of Cuntz from \cite{cuntz_simple_1977} that any two *-representations of $L_{d}$ on a Hilbert space induce the same norm on $L_{d}$. The corresponding completion is the Cuntz C*-algebra $\mathcal{O}_{d}$. Cuntz's result was later generalized by Phillips in \cite{phillips_analogs_2012} to spatial representations of $L_{d}$ on $L^{p}$-spaces. Theorem~8.7 of \cite{phillips_analogs_2012} asserts that any two spatial $L^{p}$-representations of $L_{d}$ induce the same norm on it. The corresponding completion is the Cuntz $L^{p}$-operator algebra $\mathcal{O}_{d}^{p}$; see \cite[Definition 8.8]{phillips_analogs_2012}. We now explain how one can realize $\mathcal{O}_{d}^{p}$ as a groupoid $L^{p}$-operator algebra. Denote by $d^{\omega }$ the space of infinite sequences of elements of $d$, endowed with the product topology. (Recall that $d$ is identified with the set $\left\{ 0,1,\ldots ,d-1\right\} $ of its predecessors.) Denote by $d^{<\omega }$ the space of (possibly empty) finite sequences of elements of $d$. The length of an element $a$ of $d^{<\omega }$ is denoted by $\mathrm{lh}(a)$. For $a\in d^{<\omega }$ and $x\in d^{\omega }$, define $a^{\smallfrown }x\in d^{\omega }$ to be the concatenation of $a$ and $x$. For $a\in d^{<\omega }$, denote by $[a]$ the set of elements of $d^{\omega }$ having $a$ as initial segment. Clearly $\{[a]\colon a\in d^{<\omega }\}$ is a clopen basis for $d^{\omega }$.
\begin{definition} The \emph{Cuntz inverse semigroup} $\Sigma _{d}$ is the inverse semigroup generated by a zero $0$, a unit $1$, and elements $s_{j}$ for $j\in d$, satisfying $s_{j}^{\ast }s_{k}=0$ whenever $j\neq k$. \end{definition}
Set $s_{\varnothing }=1$ and $s_{a}=s_{a_{0}}\cdots s_{a_{\mathrm{lh}(a)-1}}\in \Sigma _{d}$ for $a\in d^{<\omega }\setminus \left\{ \varnothing \right\} $. Every element of $\Sigma _{d}$ can be written uniquely as $s_{a}s_{b}^{\ast }$ for some $a,b\in d^{<\omega }$.
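To make the normal form concrete, the product of two elements in this form is again of this form or zero (a routine computation, recorded here for illustration): \begin{equation*} (s_{a}s_{b}^{\ast })(s_{c}s_{d}^{\ast })=\begin{cases} s_{a^{\smallfrown }e}s_{d}^{\ast } & \text{if }c=b^{\smallfrown }e\text{ for some }e\in d^{<\omega }\text{,} \\ s_{a}s_{d^{\smallfrown }e}^{\ast } & \text{if }b=c^{\smallfrown }e\text{ for some }e\in d^{<\omega }\text{,} \\ 0 & \text{otherwise,}\end{cases} \end{equation*} since $s_{b}^{\ast }s_{c}$ equals $s_{e}$ in the first case, $s_{e}^{\ast }$ in the second, and $0$ when neither of $b$ and $c$ is an initial segment of the other.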
\begin{remark} The nonzero idempotents $E(\Sigma _{d})$ of $\Sigma _{d}$ are precisely the elements of the form $s_{a}s_{a}^{\ast }$ for $a\in d^{<\omega }$. Moreover, the function $d^{<\omega }\cup \{0\}\rightarrow E(\Sigma )$ given by $ a\mapsto s_{a}s_{a}^{\ast }$ and $0\mapsto 0$, is a semilattice map, where $ d^{<\omega }$ has its (downward) tree ordering defined by $a\leq b$ if and only if $b$ is an initial segment of $a$, and $0$ is a least element of $ d^{<\omega }\cup \{0\}$. \end{remark}
Observe that if $a,b\in d^{<\omega }$, then $s_{a}^{\ast }s_{b}=0$ if and only if $a(j)\neq b(j)$ for some $j\in \min \{\mathrm{lh}(a),\mathrm{lh}(b)\}$.
\begin{lemma} \label{Lemma: tight representation E(Cuntz)} Let $\mathcal{B}$ be a Boolean algebra and let $\beta \colon d^{<\omega }\rightarrow \mathcal{B}$ be a representation. Then $\beta $ is tight if and only if $\beta (\varnothing )=1 $ and $\beta (a)\leq \bigvee\nolimits_{j\in d}\beta (a^{\smallfrown }j)$ for every $a\in d^{<\omega }$. \end{lemma}
\begin{proof} Suppose that $\beta $ is tight. Since $1$ is a cover of $E^{\varnothing ,\varnothing }$, we have $\beta (\varnothing )=1$. Similarly, $ \{a^{\smallfrown }j\colon j\in d\}$ is a cover of $E^{\{a\},\varnothing }$ and thus $\beta (a)\leq \bigvee\nolimits_{j\in d}\beta (a^{\smallfrown }j)$. Let us now show the \textquotedblleft if\textquotedblright\ implication. By \cite[Proposition 11.8]{exel_inverse_2008}, it is enough to show that for every $a\in d^{<\omega }$ and every finite cover $Z$ of $\{a\}$, one has $ \beta (a)\leq \bigvee\nolimits_{z\in Z}\beta (z)$. That this is true follows from the hypotheses, using induction on the maximum length of elements of $Z$ . \end{proof}
\begin{lemma} \label{Lemma: tight representation Cuntz} Let $\lambda $ be a $\sigma $ -finite Borel measure on a standard Borel space, and $\rho $ be a representation of $\Sigma _{d}$ on $L^{p}(\lambda )$. Then $\rho $ is tight if and only if $\sum\nolimits_{j\in d}\rho (s_{j}s_{j}^{\ast })=\rho (1)=1.$ \end{lemma}
\begin{proof}
Suppose that $\rho $ is tight. Then $\rho |_{E(\Sigma )}$ is tight and therefore \begin{equation*} 1=\rho (1)=\bigvee\limits_{j\in d}\rho (s_{j}s_{j}^{\ast })=\sum\limits_{j\in d}\rho (s_{j}s_{j}^{\ast }) \end{equation*} by Lemma~\ref{Lemma: tight representation E(Cuntz)}. Conversely, given $a\in d^{<\omega }$, we have \begin{eqnarray*} \sum\limits_{j\in d}\rho (s_{a^{\smallfrown }j}s_{a^{\smallfrown }j}^{\ast }) &=&\sum\limits_{j\in d}\rho (s_{a}s_{j}s_{j}^{\ast }s_{a}^{\ast })=\sum\limits_{j\in d}\rho (s_{a})\rho (s_{j}s_{j}^{\ast })\rho (s_{a}^{\ast }) \\ &=&\rho (s_{a})\left( \sum\limits_{j\in d}\rho (s_{j}s_{j}^{\ast })\right) \rho (s_{a}^{\ast })=\rho (s_{a})\rho (1)\rho (s_{a}^{\ast })=\rho (s_{a}s_{a}^{\ast })\text{,} \end{eqnarray*} which shows that $\rho $ is tight, concluding the proof. \end{proof}
\begin{proposition} \label{Proposition: Cuntz algs 1} The algebra $F_{\mathrm{tight}}^{p}(\Sigma _{d})$ is $p$-completely isometrically isomorphic to $\mathcal{O}_{d}^{p}$. \end{proposition}
\begin{proof} Observe that the Leavitt algebra $L_{d}$ (see Definition \ref{Definition: Leavitt}) is isomorphic to the quotient of $\mathbb{C}\Sigma _{d}$ by the ideal generated by the elements $\delta _{1}-\sum\limits_{j\in d}\delta _{s_{j}s_{j}^{\ast }}$ and $\delta _{0}$. (Here, $\delta _{s}$ denotes the canonical element in $\mathbb{C}\Sigma _{d}$ corresponding to $s\in \Sigma _{d}$.) By Lemma~\ref{Lemma: tight representation Cuntz}, tight representations of $\Sigma _{d}$ correspond precisely to spatial representations of the Leavitt algebra $L_{d}$ as defined in \cite[ Definition 7.4]{phillips_analogs_2012}. The result then follows. \end{proof}
It is well known that $\Sigma _{d}$ can be identified with an inverse subsemigroup of the compact open bisections of the ample groupoid $\mathcal{G}_{d}$ described in \cite{renault_c*-algebras_2009} (and denoted by $\mathcal{O}_{d}$ therein), and that this subsemigroup forms a basis for the topology of $\mathcal{G}_{d}$.
\begin{theorem} Let $d\geq 2$ be a positive integer, and let $\mathcal{G}_{d}$ denote the corresponding Cuntz groupoid. Then $F^{p}(\mathcal{G}_{d})$ is canonically $ p $-completely isometrically isomorphic to $\mathcal{O}_{d}^{p}$. \end{theorem}
\begin{proof} It is easy to check that the function $s_{a}s_{b}^{\ast }\mapsto \lbrack a,b] $ defines an injective homomorphism from $\Sigma _{d}$ to the inverse semigroup of compact open bisections of $\mathcal{G}_{d}$. It is well known that $\mathcal{G}_{d}$ is amenable; see \cite[Exercise 4.1.7] {renault_c*-algebras_2009}. It follows from Theorem~\ref{thm: amenable groupoid}, Corollary~\ref{cor: tight inv smgp and gpid alg}, and Proposition \ref{Proposition: Cuntz algs 1}, that there are canonical $p$-completely isometric isomorphisms \begin{equation*} F_{\mathrm{red}}^{p}(\mathcal{G}_{d})\cong F^{p}(\mathcal{G}_{d})\cong F_{ \mathrm{tight}}^{p}(\Sigma _{d})\cong \mathcal{O}_{d}^{p}.\qedhere \end{equation*} \end{proof}
\subsection{Analogs of AF-algebras on $L^{p}$-spaces}
In this subsection, we show how one can use the machinery developed in the previous sections to construct those $L^{p}$-analogs of AF-algebras that look like C*-algebras, and which are called ``spatial'' in \cite {phillips_analogs_2014}.
Fix $n\in \mathbb{N}$. The algebra $M_{n}( \mathbb{C}) $ of $n\times n$ matrices with complex coefficients can be (algebraically) identified with $ B( \ell ^{p}( n) ) $. This identification turns $M_{n}( \mathbb{C}) $ into an $L^{p}$-operator algebra that we will denote---consistently with \cite {phillips_analogs_2012}---by $M_{n}^{p}$. It is not difficult to verify that $M_{n}^{p}$ can be realized as a groupoid $L^{p}$-operator algebra, and we proceed to outline the argument.
Denote by $T_{n}$ the principal groupoid determined by the trivial equivalence relation on $n$. It is well-known (see \cite[page 121] {renault_groupoid_1980}) that $T_{n}$ is amenable. Moreover, the inverse semigroup $\Sigma _{\mathcal{K}}(T_{n})$ of compact open bisections of $ T_{n} $ is the inverse semigroup generated by a zero element $0$, a unit $1$ , and elements $e_{jk}$ for $j,k\in n$, subject to the relations $ e_{jk}^{\ast }e_{\ell m}=\delta _{k\ell }e_{jm}$ for $j,k,\ell ,m\in n$. Since $\left\{ e_{jj}:j\in n\right\} $ forms a cover of $1$ in the sense of Definition \ref{Definition:cover}, a tight $L^{p}$-representation $\rho $ of $\Sigma _{\mathcal{K}}\left( T_{n}\right) $ satisfies $1=\rho (1)=\sum_{j\in n}\rho (e_{jj})$. It thus follows from \cite[Theorem~7.2] {phillips_analogs_2012} that the map from $M_{n}^{p}$ to the range of $\rho $ , defined by assigning $\rho \left( e_{jk}\right) $ to the $jk$-th matrix unit in $M_{n}^{p}$, is isometric. We conclude that $F^{p}(T_{n})$ is isometrically isomorphic to $M_{n}^{p}$. Reasoning in the same way at the level of amplifications shows that $F^{p}(T_{n})$ and $M_{n}^{p}$ are in fact $p$-completely isometrically isomorphic.
If $k\in \mathbb{N}$ and $\mathbf{n}=\left( n_{0},\ldots ,n_{k-1}\right) $ is a $k$-tuple of natural numbers, then the Banach algebra $ M_{n_{0}}^{p}\oplus \cdots \oplus M_{n_{k-1}}^{p}$ acts naturally on the $ L^{p}$-direct sum $\ell ^{p}(n_{0})\oplus _{p}\cdots \oplus _{p}\ell ^{p}(n_{k-1})\cong \ell ^{p}(n_{0}+\cdots +n_{k-1})$. The Banach algebra $ M_{n_{0}}^{p}\oplus \cdots \oplus M_{n_{k-1}}^{p}$ can also be realized as groupoid $L^{p}$-operator algebra by considering the disjoint union of the groupoids $T_{n_{0}},T_{n_{1}},\ldots ,T_{n_{k-1}}$.
Here is the definition of spatial $L^p$-operator AF-algebras:
\begin{definition} \label{definition: spatial AF-algebra} A separable Banach algebra $A$ is said to be a \emph{spatial $L^p$-operator AF-algebra} if there exists a direct system $(A_n,\varphi_n)_{n\in\omega}$ of $L^p$-operator algebras $A_n$ which are isometrically isomorphic to algebras of the form $ M_{n_0}^p\oplus\cdots\oplus M_{n_k}^p$, with isometric connecting maps $ \varphi_n\colon A_n\to A_{n+1}$, and such that $A$ is isometrically isomorphic to the direct limit $\varinjlim (A_n,\varphi_n)_{n\in\omega}$. \end{definition}
Banach algebras as in the definition above, as well as more general direct limits of semisimple finite-dimensional $L^{p}$-operator algebras, will be studied in \cite{phillips_analogs_2014}.
In the rest of this subsection, we will show that spatial $L^{p}$-operator AF-algebras can be realized as groupoid $L^{p}$-operator algebras. \newline
\subsubsection{Spatial $L^p$-operator UHF-algebras.}
For simplicity, we will start by observing that spatial $L^{p}$-operator UHF-algebras are groupoid $L^{p}$-operator algebras. Spatial $L^{p}$ -operator UHF-algebras are the spatial $L^{p}$-operator AF-algebras where the building blocks $A_{n}$ appearing in the definition are all full matrix algebras $M_{d_{n}}^{p}$ for some $d_{n}\in \omega $. These have been defined and studied in \cite{phillips_isomorphism_2013}.
Let $d=(d_{n})_{n\in \omega }$ be a sequence of positive integers. Denote by $A_{d}^{p}$ the corresponding $L^{p}$-operator UHF-algebra defined as above; see also \cite[Definition 3.9]{phillips_simplicity_2013}. In the following we will show that $A_{d}^{p}$ is the enveloping algebra of a natural groupoid associated with the sequence $d$. Define $Z_{d}=\prod_{j\in \omega }d_{j}$ , and consider the groupoid \begin{equation*} G_{d}=\left\{ \left( \alpha ^{\smallfrown }x,\beta ^{\smallfrown }x\right) \colon \alpha ,\beta \in \prod_{j\in n}d_{j},x\in \prod_{j\geq n}d_{j},n\in \omega \right\} \end{equation*} having $Z_{d}$ as set of objects. (Here we identify $x\in Z_{d}$ with the pair $(x,x)\in G_{d}$.) The operations are defined by $s(\alpha ^{\smallfrown }x,\beta ^{\smallfrown }x)=\beta ^{\smallfrown }x$, $(\alpha ^{\smallfrown }x,\beta ^{\smallfrown }x)^{-1}=(\beta ^{\smallfrown }x,\alpha ^{\smallfrown }x)$, and $(\alpha ^{\smallfrown }x,\beta ^{\smallfrown }x)(\gamma ^{\smallfrown }y,\delta ^{\smallfrown }y)=(\alpha ^{\smallfrown }x,\delta ^{\smallfrown }y)$ when $\beta ^{\smallfrown }x=\gamma ^{\smallfrown }y$. It is well-known that $G_{d}$ is amenable; see \cite[ Section~4.2]{renault_c*-algebras_2009}, and specifically Theorem~4.2.5 there.
Given $k\in \omega $ and given $\alpha $ and $\beta $ in $\prod_{j\in k}d_{j} $, define $U_{\alpha \beta }$ to be the set of $\left( \alpha ^{\smallfrown }x,\beta ^{\smallfrown }x\right) \in G_{d}$ for $x\in \prod_{j\geq k}d_{j}$. Then the collection of $U_{\alpha ,\beta }$ for $ \alpha ,\beta \in \prod\nolimits_{j\in k}d_{j}$ and $k\in \omega $ is a basis of compact open bisections for an ample groupoid topology on $G_{d}$. \newline \indent Fix $k\in \omega $ and consider the compact groupoid $G_{d}^{k}$ given by the union of $U_{\alpha ,\beta }$ for $\alpha ,\beta \in \prod_{j\in k}d_{j}$. The groupoid $G_{d}$ can be seen as the topological direct limit of the system $(G_{d}^{k})_{k\in \omega }$. It is clear that, if $n=d_{0}\cdots d_{k-1}$, then $G_{d}^{k}$ is isomorphic to the groupoid $ T_{n}$ defined previously. Therefore $F^{p}(G_{d}^{k})$ is isometrically isomorphic to $M_{d_{0}\cdots d_{k-1}}^{p}$.
For $k\in \mathbb{N}$, identify $C(G_{d}^{k})$ with a *-subalgebra of $ C_{c}(G_{d})$, by setting $f\in C(G_{d}^{k})$ to be $0$ outside $G_{d}^{k}$. For $k<n$, we claim that the inclusion map from $C(G_{d}^{k})$ to $ C(G_{d}^{n})$ induces an isometric embedding from $F^{p}(G_{d}^{k})$ into $F^{p}(G_{d}^{n})$. This can be easily verified by direct computation, after noticing that $G_{d}^{k}$ and $G_{d}^{n}$ are amenable, and hence the full and reduced norms on $C(G_{d}^{k})$ and $ C(G_{d}^{n})$ coincide. One then obtains a direct system $\left( F^{p}(G_{d}^{n})\right) _{n\in \mathbb{N}}$ with isometric connecting maps whose limit is $F^{p}(G_{d})$. Since $F^{p}(G_{d}^{k})\cong M_{d_{0}\cdots d_{k-1}}^{p}$ as observed above, we conclude that $ F^{p}(G_{d})\cong A_{d}^{p}$.
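For orientation, the connecting maps of this direct system admit a familiar concrete description (this identification is standard, and is not spelled out above): under the isomorphism $F^{p}(G_{d}^{k})\cong M_{d_{0}\cdots d_{k-1}}^{p}$, the inclusion $C(G_{d}^{k})\subseteq C(G_{d}^{k+1})$ sends the characteristic function of $U_{\alpha \beta }$, i.e.\ the matrix unit $e_{\alpha \beta }$, to a sum of matrix units:

```latex
\begin{equation*}
e_{\alpha \beta }\;\longmapsto \;\sum_{j\in d_{k}}e_{\alpha ^{\smallfrown
}j,\,\beta ^{\smallfrown }j},\qquad \text{that is,}\qquad M_{d_{0}\cdots
d_{k-1}}^{p}\ni a\;\longmapsto \;a\otimes 1_{d_{k}}\in M_{d_{0}\cdots
d_{k}}^{p},
\end{equation*}
```

which is the usual unital embedding defining a UHF-algebra of supernatural number $\prod_{n}d_{n}$.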
\subsubsection{Spatial $L^p$-operator AF-algebras.}
As in the C*-algebra case, there is a natural correspondence between $L^{p}$ -operator AF-algebras and Bratteli diagrams. (For the definition of Bratteli diagrams, see \cite[Subsection 7.2.3]{rordam_introduction_2000}.) Let $(E,V) $ be a Bratteli diagram, and $A^{(E,V) }$ be the associated $L^{p}$-operator AF-algebra. The algebra $A^{(E,V)}$ can be defined in the same way that AF $ C^*$-algebras are defined from a Bratteli diagram, as direct limits of sums of finite dimensional $C^*$-algebras, except that each matrix algebra is now given its (spatial) $L^p$-operator norm, as described at the beginning of this subsection. With this definition of spatial $L^p$-operator AF-algebras, it is unclear that such algebras are indeed $L^p$-operator algebras, since it is not known whether $L^p$-operator algebras are closed under direct limits (with contractive maps). In the following, we will explain how to realize $A^{(E,V) }$ as a groupoid $L^{p}$-operator algebra. As a consequence, it will follow that spatial $L^p$-operator AF-algebras are always representable on $L^p$-spaces, which was not known before.
Denote by $X$ the set of all infinite paths in $(E,V)$. Then $X$ is a compact zero-dimensional space. Denote by $G^{(E,V)}$ the tail equivalence relation on $X$, regarded as a principal groupoid having $X$ as set of objects. It is well known that $G^{(E,V)}$ is amenable; see \cite[Chapter III, Remark 1.2]{renault_c*-algebras_2009}. If $\alpha ,\beta $ are \emph{ finite} paths of the same length and with the same endpoints, define $ U_{\alpha \beta }$ to be the set of elements of $G^{(E,V)}$ of the form $ \left( \alpha ^{\smallfrown }x,\beta ^{\smallfrown }x\right) $. The collection of all the sets $U_{\alpha \beta }$ is a basis for an ample groupoid topology on $G^{(E,V)}$. For $k\in \omega $, let $G_{k}^{(E,V)}$ be the union of $U_{\alpha \beta }$ over all finite paths $\alpha ,\beta $ as before that moreover have length at most $k$. Then $G_{k}^{(E,V)}$ is a compact groupoid and $G^{(E,V)}$ is the topological direct limit of $ (G_{k}^{(E,V)})_{k\in \omega }$.\newline \indent Fix $k\in \omega $. Denote by $l$ the cardinality of the $k$-th vertex set $V_{k}$. Denote by $n_{0},\ldots ,n_{l-1}$ the \emph{ multiplicities} of the vertices in $V_{k}$. (The multiplicity of a vertex in a Bratteli diagram is defined in the usual way by recursion.) Set $\mathbf{n} =(n_{0},\ldots ,n_{l-1})$, and observe that $G_{k}^{(E,V)}$ is isomorphic to the groupoid $T_{\mathbf{n}}$ as defined above. In particular $ F^{p}(G_{k}^{(E,V)})\cong M_{n_{0}}^{p}\oplus \cdots \oplus M_{n_{l-1}}^{p}$ . As before, one can show that the direct system $(F^{p}(G_{k}^{(E,V)}))_{k \in \omega }$ has isometric connecting maps, and that the inductive limit is $F^{p}(G^{(E,V)})$. This concludes the proof that $A^{(E,V)}$ is $p$ -completely isometrically isomorphic to $F^{p}\left( G^{(E,V)}\right) $. In particular, this shows that $A^{(E,V)}$ is indeed an $L^{p}$-operator algebra.
\section{Concluding remarks and outlook}
It is not difficult to see that the class of $L^{p}$-operator algebras is closed---within the class of all matricially normed Banach algebras---under taking subalgebras and ultraproducts. As noted by Ilijas Farah and N.\ Christopher Phillips, this observation, together with a general result from logic for metric structures, implies that the class of $L^{p}$-operator algebras is---in model-theoretic jargon---\emph{universally axiomatizable}. This means that $L^{p}$-operator algebras can be characterized as those matricially normed Banach algebras satisfying certain expressions only involving the algebra operations, the matrix norms, continuous functions from $\mathbb{R}^{n}$ to $\mathbb{R}$, and suprema over balls of matrix amplifications.
Determining what these expressions are seems to be, in our opinion, an important problem in the theory of algebras of operators on $L^{p}$-spaces.
\begin{problem} \label{Problem:axiomatize}Find an explicit intrinsic characterization of $ L^{p}$-operator algebras within the class of matricially normed Banach algebras. \end{problem}
An explicit characterization of algebras acting on\emph{\ subspaces of quotients }of $L^{p}$-spaces was provided by Le Merdy in \cite {merdy_representation_1996}.\ These are precisely the matricially normed Banach algebras that are moreover $p$-operator spaces in the terminology of \cite{daws_p-operator_2010}, and such that multiplication is $p$-completely contractive. Similar results have been obtained by Junge for algebras of operators on subspaces of $L^{p}$-spaces; see \cite[Corollary~1.5.2.2] {junge_factorization_1996}.
A stumbling block towards answering Problem \ref{Problem:axiomatize} is the fact that $L^{p}$-operator algebras are not closed under quotients; see \cite {gardella_quotients_2014}. This gives a lower bound on the complexity of a possible axiomatization of $L^{p}$-operator algebras. Precisely, it implies that $L^{p}$-operator algebras do not form a \emph{variety }of Banach algebras in the sense of \cite{dixon_varieties_1976}, unlike algebras acting on subspaces of quotients of $L^{p}$-spaces.
It is conceivable that, for sufficiently nice groupoids, ideals of the associated $L^{p}$-operator algebras correspond to subgroupoids. In turn, quotients would correspond to quotients at the groupoid level. In particular, this would imply that such groupoid $L^{p}$-operator algebras are indeed closed under quotients. This would be another feature that groupoid $L^{p}$-operator algebras share with C*-algebras. Such a general result about quotients would also simplify the task of proving simplicity of $L^{p}$-operator algebras coming from groupoids. This problem has been dealt with by ad hoc methods for UHF and Cuntz $L^{p}$-operator algebras in \cite {phillips_simplicity_2013}.
\begin{problem} Is $F_{red}^{p}(G)$ simple whenever $G$ is a minimal and topologically principal étale groupoid? \end{problem}
A potential application of groupoids to the theory of $L^{p}$-operator algebras comes from the technique of Putnam subalgebras. Let $X$ be a compact metric space and let $h\colon X\rightarrow X$ be a homeomorphism. Denote by $u$ the canonical unitary in the C*-crossed product $C^{\ast }( \mathbb{Z},X,h)$ implementing $h$. If $Y$ is a closed subset of $X$, then the corresponding \emph{Putnam subalgebra} $C^{\ast }(\mathbb{Z},X,h)_{Y}$ is the C*-subalgebra of $C^{\ast }(\mathbb{Z},X,h)$ generated by $C(X)$ and $ uC_{0}(X\setminus Y)$. It is known that $C^{\ast }(\mathbb{Z},X,h)_{Y}$ can be described as the enveloping C*-algebra of a suitable étale groupoid.
In the context of C*-algebras, Putnam subalgebras are fundamental in the study of transformation group C*-algebras of minimal homeomorphisms. For example, Putnam showed in \cite[Theorem~3.13]{putnam_c*-algebras_1989} that if $h$ is a minimal homeomorphism of the Cantor space $X$, and $Y$ is a nonempty clopen subset of $X$, then $C^{\ast }(\mathbb{Z},X,h)_{Y}$ is an AF-algebra. This is then used in \cite{putnam_c*-algebras_1989} to prove that the crossed product $C^{\ast} ( \mathbb{Z},X,h) _{Y}$ is a simple A$ \mathbb{T} $-algebra of real rank zero. Similarly, Putnam subalgebras were used by Huaxin Lin and Chris Phillips in \cite{lin_crossed_2010} to show that, under a suitable assumption on $K$-theory, the crossed product of a finite-dimensional compact metric space by a minimal homeomorphism is a simple unital C*-algebra with tracial rank zero.
Considering the groupoid description of Putnam subalgebras provides a natural application of our constructions to the theory of $L^{p}$-crossed products introduced in \cite{phillips_crossed_2013}. It is conceivable that with the aid of groupoid $L^{p}$-operator algebras, Putnam subalgebras could be used to obtain generalizations of the above mentioned results to $L^{p}$ -crossed products.
\end{document}
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch} {#1}\small\normalsize} \spacingset{1}
\if11 {
\title{\bf Strictly Proper Kernel Scoring Rules and Divergences with an Application to Kernel Two-Sample Hypothesis Testing}
\author{Hamed Masnadi-Shirazi \hspace{.2cm}\\
School of Electrical and Computer Engineering \\Shiraz University \\Shiraz, Iran}
\maketitle } \fi
\if01 {
\begin{center}
{\LARGE\bf Strictly Proper Kernel Scoring Rules and Divergences with an Application to Kernel Two-Sample Hypothesis Testing} \end{center}
} \fi
\begin{abstract} We study strictly proper scoring rules in the Reproducing Kernel Hilbert Space. We propose a general Kernel Scoring rule and associated Kernel Divergence. We consider conditions under which the Kernel Score is strictly proper. We then demonstrate that the Kernel Score includes the Maximum Mean Discrepancy as a special case. We also consider the connections between the Kernel Score and the minimum risk of a proper loss function. We show that the Kernel Score incorporates more information pertaining to the projected embedded distributions compared to the Maximum Mean Discrepancy. Finally, we show how to integrate the information provided from different Kernel Divergences, such as the proposed Bhattacharyya Kernel Divergence, using a one-class classifier for improved two-sample hypothesis testing results. \end{abstract}
\noindent {\it Keywords:} strictly proper scoring rule, divergences, kernel scoring rule, minimum risk, projected risk, proper loss functions,
probability elicitation, calibration, Bayes error bound, Bhattacharyya distance, feature selection, maximum mean discrepancy,
kernel two-sample hypothesis testing, embedded distribution
\spacingset{1.45} \section{Introduction} \label{intro} Strictly proper scoring rules \cite{savage,DeGroot2, Raftery} are integral to a number of different applications, namely forecasting \cite{Tilmann2007, Brocker2009}, probability elicitation \cite{book:Eliciting},
classification \cite{HamedNunoLossDesign,HamedNunoJMLRRegularize}, estimation \cite{EstWithScrictLoss}, and finance \cite{Duffie}. Strictly proper scoring rules are closely related to entropy functions, divergence measures and bounds on the Bayes error that are important for applications such as feature selection \cite{NUNOMaxDiversityNIPS, NunoNaturalFeatures,NewFeatPers,FeatMutual}, classification and regression \cite{KLboost,ProjectClass,FriedmanPersuit} and information theory \cite{Fano, fdivrisk, book:InfoTheory, minimaxrisk}.
Despite their vast applicability and long history, strictly proper scoring rules have only recently been studied in Reproducing Kernel Hilbert Spaces. In \cite{Dawid2007KScore, Raftery} a certain kernel score is defined, and in \cite{Zawadzki} its divergence is shown to be equivalent to the Maximum Mean Discrepancy. The Maximum Mean Discrepancy (MMD) \cite{MMD} is defined as the squared difference between the embedded means of two distributions embedded in an inner product kernel space. It has been used in hypothesis testing where the null hypothesis is rejected if the MMD of two sample sets is above a certain threshold \cite{TwoMMD,MMD}. Recent work pertaining to the MMD has concentrated on the kernel function \cite{Barat3,Barat1,Barat2, Barat4}, improved estimates of the mean embedding \cite{Barat5}, methods of improving its implementation \cite{Barat6}, and incorporating the embedded covariance \cite{FisherMMD}, among others.
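As a quick numerical illustration of the MMD (a sketch under standard assumptions, not code from any of the cited works; the function names and the Gaussian bandwidth are ours), the biased empirical estimate of $\mathrm{MMD}^2$ between two samples is the mean of the kernel over each sample's pairs, minus twice the mean over cross pairs:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    # Biased empirical MMD^2 = ||mu_P - mu_Q||^2 in the RKHS:
    # mean of k over (X, X) plus mean over (Y, Y) minus twice the mean over (X, Y).
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 1))
Y_same = rng.normal(0.0, 1.0, size=(500, 1))   # same distribution as X
Y_diff = rng.normal(2.0, 1.0, size=(500, 1))   # mean-shifted distribution
# Samples from the same distribution give a much smaller MMD^2.
print(mmd2_biased(X, Y_same), mmd2_biased(X, Y_diff))
```

In a two-sample test one rejects the null hypothesis $P=Q$ when this statistic exceeds a threshold calibrated, e.g., by permutation of the pooled samples.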
In this paper we study the notion of strictly proper scoring rules in the Reproducing Kernel Hilbert Space. We introduce a general Kernel Scoring rule and associated Kernel Divergence that encompasses the MMD and the kernel score of \cite{Dawid2007KScore, Raftery, Zawadzki} as special cases. We then provide conditions under which the proposed Kernel Score is proven to be strictly proper. We show that being strictly proper is closely related to the injective property of the MMD.
The Kernel Score is shown to be dependent on the choice of an embedded projection vector $\Phi({\bf w})$ and concave function $C$. We consider a number of valid choices of $\Phi({\bf w})$ such as the canonical vector, the normalized kernel Fisher discriminant projection vector and the normalized kernel SVM projection vector \cite{book:vapnik} that lead to strictly proper Kernel Scores and strictly proper Kernel Divergences.
We show that the proposed Kernel Score is related to the minimum risk and that $C$ is related to the minimum conditional risk function. This connection is made possible by looking at risk minimization in terms of proper loss functions \cite{Buja, HamedNunoLossDesign, HamedNunoJMLRRegularize}. This allows us to study the effect of choosing different $C$ functions and establish its relation to the Bayes error. We then provide a method for generating $C$ functions for Kernel Scores that are arbitrarily tight upper bounds on the Bayes error. This is especially important for applications that rely on tight bounds on the Bayes error such as classification, feature selection and feature extraction among others. In the experiments section we confirm that such tight bounds on the Bayes error lead to improved feature selection and classification results.
We show that strictly proper Kernel Scores and Kernel Divergences, such as the Bhattacharyya Kernel Divergence, include more information about the projected embedded distributions compared to the MMD. We provide practical formulations for calculating the Kernel Score and Kernel Divergence and show how to combine the information provided from different Kernel Divergences with the MMD using a one-class classifier \cite{OneClassPhd} for significantly improved hypothesis testing results.
The paper is organized as follows. In Section 2 we review the required background material. In Section 3 we introduce the Kernel Scoring Rule and Kernel Divergence and consider conditions under which they are strictly proper. In Section 4 we establish the connections between the Kernel Score and the MMD and show that the MMD is a special case of the Bhattacharyya Kernel Score. In Section 5 we show the connections between the Kernel Score and the minimum risk and explain how arbitrarily tight bounds on the Bayes error can be obtained. In Section 6 we discuss practical considerations in computing the Kernel Score and Kernel Divergence given sample data. In Section 7 we propose a novel one-class classifier that can combine all the different Kernel Divergences into a powerful hypothesis test. Finally, in Section 8 we present extensive experimental results and apply the proposed ideas to feature selection and hypothesis testing on benchmark gene data sets.
\section{Background Material Review} In this section we provide a review of required background material on strictly proper scoring rules, proper loss functions and positive definite kernel embedding of probability distributions.
\subsection{Strictly Proper Scoring Rules and Divergences} The concept of strictly proper scoring rules can be traced back to the seminal paper of \cite{savage}. This idea was expanded upon by later papers such as \cite{DeGroot2,Dawid1981} and has been most recently studied under a broader context \cite{book:Eliciting, Raftery}. We provide a short review of the main ideas in this field.
Let $\Omega$ be a general sample space and $\cal P$ be a class of probability measures on $\Omega$. A scoring rule $S:{\cal P} \times \Omega \rightarrow \mathbb{R}$ is a real valued function that assigns the score $S(P,x)$ to a forecaster that quotes the measure $P \in \cal P$ and the event $a \in \Omega$ materializes. The expected score is written as $S(P,Q)$ and is the expectation of $S(P,.)$ under $Q$ \begin{equation} S(P,Q)=\int S(P,a) dQ(a), \end{equation} assuming that the integral exists. We say that a scoring rule is proper if \begin{equation} S(Q,Q) \ge S(P,Q) ~\mbox{for all}~ P,Q \end{equation} and we say that a scoring rule is strictly proper when $S(Q,Q)=S(P,Q)$ if and only if $P=Q$. We define the divergence associated with a strictly proper scoring rule $S$ as \begin{equation} div(P,Q)=S(Q,Q)-S(P,Q) \ge 0 \end{equation} which is a non-negative function and has the property of \begin{equation} \label{eq:StrictDivProperty} div(P,Q)=0 ~\mbox{if and only if}~ P=Q. \end{equation}
Presented less formally, the forecaster makes a prediction regarding an event in the form of a probability distribution $P$. If the actual event $a$ materializes then the forecaster is assigned a score of $S(P,a)$. If the true distribution of events is $Q$ then the expected score is $S(P,Q)$. Obviously, we want to assign the maximum score to a skilled and trustworthy forecaster that predicts $P=Q$. A strictly proper score accomplishes this by assigning the maximum score if and only if $P=Q$.
If the distribution of the forecaster's predictions is $\nu(P)$, then the overall expected score of the forecaster is \begin{equation} \int \nu(P) S(P,Q) dP. \end{equation} The overall expected score is maximum when the expected score $S(P,Q)$ is maximum for each prediction $P$, which happens when $P=Q$ for all $P$, assuming that the score is strictly proper.
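A classical example, standard in the scoring-rule literature and included here only for illustration, is the Brier (quadratic) score on a finite sample space $\Omega$:

```latex
\begin{align*}
S(P,a) &= 2P(a)-\sum_{\omega \in \Omega }P(\omega )^{2}, \\
S(P,Q) &= 2\sum_{\omega \in \Omega }Q(\omega )P(\omega )-\sum_{\omega \in
\Omega }P(\omega )^{2}, \\
div(P,Q) &= S(Q,Q)-S(P,Q)=\sum_{\omega \in \Omega }\bigl(P(\omega )-Q(\omega
)\bigr)^{2}\geq 0.
\end{align*}
```

The associated divergence is the squared Euclidean distance between $P$ and $Q$, which vanishes if and only if $P=Q$; hence the Brier score is strictly proper.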
\subsection{Risk Minimization and the Classification Problem } \label{sec:DeriveTangentLoss} Classifier design by risk minimization has been extensively studied in ~\citep{friedman,zhang,Buja,HamedNunoLossDesign}. In summary, a classifier $h$ is defined as a mapping from a feature vector ${\bf x} \in \cal X$ to a class
label $y \in \{-1,1\}$. Class labels $y$ and feature vectors ${\bf x}$ are sampled from the probability distributions $P_{Y|X}(y|{\bf x})$ and $P_{\bf X}({\bf x})$ respectively. Classification is accomplished by taking the sign of the classifier predictor $p: {\cal X} \rightarrow \mathbb{R}$. This can be written as \begin{equation}
h({\bf x}) = sign[p({\bf x})].
\label{eq:h} \end{equation}
The optimal predictor $p^*({\bf x})$ is found by minimizing the risk over a non-negative loss function $L({\bf x},y)$ and written as \begin{equation}
R(p) = E_{{\bf X},Y}[L(p({\bf x}),y)].
\label{eq:risk} \end{equation} This is equivalent to minimizing the conditional risk \begin{equation*}
E_{Y|{\bf X}} [L(p({\bf x}),y)|{\bf X} = {\bf x}] \end{equation*} for all ${\bf x} \in {\cal X}$.
The predictor $p({\bf x})$ is decomposed and typically written as \begin{equation*}
p({\bf x}) = f(\eta({\bf x})),
\label{eq:compose} \end{equation*} where $f: [0,1] \rightarrow \mathbb{R}$ is called the link function and
$\eta({\bf x}) = P_{Y|{\bf X}}(1|{\bf x})$ is the posterior probability function.
The optimal predictor can now be learned by first analytically finding the optimal link $f^*(\eta)$ and then estimating $\eta({\bf x})$, assuming that $f^*(\eta)$ is one-to-one.
If the zero-one loss \begin{eqnarray*}
L_{0/1}(y,p) = \frac{1- sign(yp)}{2} = \left\{ \begin{array}{ll}
0, & \mbox{if $y=sign(p)$};\\
1, & \mbox{if $y \ne sign(p)$},\end{array} \right. \end{eqnarray*}
is used, then the associated conditional risk \begin{eqnarray} \label{eq:zeronecondrisk}
C_{0/1}(\eta,p) = \eta \frac{1- sign(p)}{2} +
(1-\eta) \frac{1 + sign(p)}{2}
= \left\{ \begin{array}{ll}
1-\eta, & \mbox{if $p=f(\eta) \geq 0 $};\\
\eta, & \mbox{if $p=f(\eta)<0$} \end{array} \right. \end{eqnarray}
is equal to the probability of error of the classifier of~(\ref{eq:h}). The associated conditional zero-one risk is minimized by any $f^*$ such that \begin{equation}
\left\{
\begin{array}{cc}
f^*(\eta) > 0 & \mbox{if $\eta > \frac{1}{2} $} \\
f^*(\eta) = 0 & \mbox{if $\eta = \frac{1}{2} $} \\
f^*(\eta) < 0 & \mbox{if $\eta < \frac{1}{2} $.}
\end{array}
\right.
\label{eq:Bayesnec} \end{equation} For example the two links of \begin{equation*}
f^*=2\eta-1 \quad \mbox{and} \quad f^*=\log\frac{\eta}{1-\eta}
\label{eq:fexamples} \end{equation*} can be used.
The resulting classifier $h^*({\bf x}) = sign[f^*(\eta({\bf x}))]$
is now the optimal Bayes decision rule. Plugging $f^*$ back into the conditional zero-one risk gives the minimum conditional zero-one risk \begin{eqnarray} \label{eq:zeronemincondrisk}
C^*_{0/1} (\eta) &&= \eta\left(\frac{1}{2}-\frac{1}{2}sign(2\eta-1)\right)+
(1-\eta)\left(\frac{1}{2}+\frac{1}{2}sign(2\eta-1)\right) \\
&&=\left\{ \begin{array}{ll}
(1-\eta) & \mbox{if $\eta \geq \frac{1}{2} $}\\
\eta & \mbox{if $\eta<\frac{1}{2}$}\end{array} \right. \\
&&=\min\{\eta, 1-\eta\}. \end{eqnarray}
The optimal classifier that is found using the zero-one loss has the
smallest possible risk and is known as the Bayes error $R^*$ of the corresponding classification problem ~\citep{JordanBartlett, zhang, book:ProbPatRec}.
We can change the loss function and replace the zero-one loss with a so-called margin loss in the form of $L_{\phi}(y,p({\bf x})) = \phi(yp({\bf x}))$.
Unlike the zero-one loss, margin losses allow for a non-zero loss on positive values of the margin $yp$. Such loss functions can be shown to produce classifiers that have better generalization ~\citep{book:vapnik}. Also unlike the zero-one loss, margin losses are typically designed to be differentiable over their entire domain. The exponential loss and logistic loss used in the AdaBoost and LogitBoost Algorithms \cite{friedman} and the hinge loss used in SVMs are some examples of margin losses \cite{zhang,Buja}. The conditional risk of a margin loss can now be written as \begin{equation}
C_\phi(\eta,p) = C_\phi(\eta,f(\eta)) = \eta \phi(f(\eta)) +
(1-\eta) \phi(-f(\eta)).
\label{eq:CondRisk} \end{equation} This is minimized by the link \begin{equation}
f^*_{\phi}(\eta) = \arg\min_{f} C_\phi(\eta,f)
\label{eq:fstarphi} \end{equation} and so the minimum conditional risk function is \begin{equation}
C^*_\phi(\eta) = C_\phi(\eta,f^*_\phi).
\label{eq:C*phi} \end{equation} For most margin losses, the optimal link is unique and can be found analytically. Table~\ref{tab:losses} presents the exponential, logistic and hinge losses along with their respective link and minimum conditional risk functions.
\begin{table}[t]
\centering
\caption{\protect\footnotesize{Loss $\phi$, optimal link $f^*_{\phi}(\eta)$,
optimal inverse link $[f^*_{\phi}]^{-1}(v)$ ,
and minimum conditional risk $C_\phi^*(\eta)$ of popular learning
algorithms.}}
\begin{tabular}{|c|c|c|c|c|}
\hline
Algorithm & $\phi(v)$ & $f^*_{\phi}(\eta)$ & $[f^*_{\phi}]^{-1}(v)$ &
$C_{\phi}^*(\eta)$ \\
\hline
\hline
AdaBoost & $\exp(-v)$ & $\frac{1}{2} \log \frac{\eta}{1-\eta}$&
$\frac{e^{2v}}{1+e^{2v}}$ & $2 \sqrt{\eta (1-\eta)}$\\
LogitBoost & $\log(1+e^{-v})$ & $\log \frac{\eta}{1-\eta}$ &
$\frac{e^{v}}{1+e^{v}}$ &
$-\eta \log \eta - (1-\eta) \log (1 - \eta)$\\
SVM & $\max(1-v,0)$ & $sign(2 \eta - 1)$ & NA & $1 - |2\eta -1|$\\
\hline
\end{tabular}
\label{tab:losses} \end{table}
\subsubsection{Probability Elicitation and Proper Losses} Conditional risk minimization can be related to probability elicitation ~\citep{savage,DeGroot} and has been studied in ~\citep{Buja,HamedNunoLossDesign,Reid}.
In probability elicitation we find the probability estimator ${\hat \eta}$ that maximizes the expected score \begin{equation}
I(\eta,{\hat \eta}) = \eta I_{1}({\hat \eta}) + (1-\eta) I_{-1}({\hat \eta}),
\label{eq:expreward} \end{equation} of a score function that assigns a score of $I_1({\hat \eta})$ to prediction ${\hat \eta}$ when event $y=1$ holds and a score of $I_{-1}({\hat \eta})$ to prediction ${\hat \eta}$ when $y=-1$ holds. The scoring function is said to be proper if $I_1$ and $I_{-1}$ are such that the expected score is maximal when ${\hat \eta} = \eta$, in other words \begin{equation}
I(\eta,{\hat \eta}) \leq I(\eta, \eta) = J(\eta), \,\,\, \forall \eta
\label{eq:Savagebound} \end{equation} with equality if and only if ${\hat \eta} = \eta$. This holds for the following theorem.
\begin{Thm}{~\citep{savage}}
Let $I(\eta,{\hat \eta})$ be as defined
in~(\ref{eq:expreward}) and $J(\eta) = I(\eta,\eta)$.
Then~(\ref{eq:Savagebound}) holds if and only if $J(\eta)$ is convex and
\begin{equation}
\label{eq:Is}
I_1(\eta) = J(\eta) + (1-\eta) J^\prime(\eta) \quad \quad \quad
I_{-1}(\eta) = J(\eta) -\eta J^\prime(\eta).
\end{equation}
\label{thm:savage} \end{Thm}
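As a worked illustration (our addition): taking $J(\eta)$ to be the negative Shannon entropy recovers the logarithmic score. With $J(\eta) = \eta \log \eta + (1-\eta)\log(1-\eta)$ we have $J^\prime(\eta) = \log\frac{\eta}{1-\eta}$, so that~(\ref{eq:Is}) gives
\begin{eqnarray*}
I_1(\eta) &=& J(\eta) + (1-\eta)J^\prime(\eta) = \log \eta, \\
I_{-1}(\eta) &=& J(\eta) - \eta J^\prime(\eta) = \log(1-\eta),
\end{eqnarray*}
i.e., the familiar log score, whose expected value is indeed maximized by reporting ${\hat \eta} = \eta$.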
Proper losses can now be related to probability elicitation by the following theorem, which is central to our purposes. \begin{Thm}{~\citep{HamedNunoLossDesign}} \label{Thm:HamedNuno} Let $I_1(\cdot)$ and
$I_{-1}(\cdot)$ be as in (\ref{eq:Is}),
for any continuously differentiable convex $J(\eta)$ such that
$J(\eta) = J(1-\eta)$, and $f(\eta)$ any invertible function such that
$f^{-1}(-v) = 1 - f^{-1}(v)$.
Then
\begin{equation*}
I_1(\eta) = -\phi(f(\eta)) \quad \quad \quad \quad \quad \quad
I_{-1}(\eta) = -\phi(-f(\eta)) \label{eq:I1I-1f}
\end{equation*}
if and only if
\begin{equation*}
\phi(v) = -J\left(f^{-1}(v)\right) - (1- f^{-1}(v)) J^\prime
\left(f^{-1}(v)\right).
\label{eq:phieq}
\end{equation*}
\label{thm:risk} \end{Thm} It is shown in \citep{zhang} that $C_\phi^*(\eta)$ is concave and that \begin{eqnarray}
C_\phi^*(\eta) &=& C_\phi^*(1-\eta) \label{eq:Cstarsym}\\
{[f_\phi^*]}^{-1}(-v) &=& 1 - [f_\phi^*]^{-1}(v) \label{eq:fstarsym}. \end{eqnarray} We also require that $C^*_\phi(0) = C^*_\phi(1) = 0$ so that the
minimum risk is zero when $P_{Y|{\bf X}}(1|{\bf x}) = 0$ or
$P_{Y|{\bf X}}(1|{\bf x}) = 1$.
In summary, for any continuously differentiable $J(\eta) = -C_\phi^*(\eta)$ and invertible $f(\eta) = f^*_\phi(\eta)$, the conditions of Theorem \ref{Thm:HamedNuno} are satisfied and so the loss will take the form of \begin{equation}
\phi(v) = C_\phi^*\left([f_\phi^*]^{-1}(v)\right) + (1- [f_\phi^*]^{-1}(v))
[C_\phi^*]^\prime\left([f_\phi^*]^{-1}(v)\right)
\label{eq:phieq2} \end{equation}
and $I(\eta,{\hat \eta}) = -C_\phi(\eta,f)$.
In this case, the predictor of minimum risk is $p^* = f^*_\phi(\eta)$, the minimum risk is \begin{equation} \label{equ:MinRiskInit}
R({p^*}) = \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({ p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{ p^*}({\bf x})) \right] d{\bf x} \end{equation} and posterior probabilities $\eta$ can be found using \begin{equation}
\eta({\bf x}) = [f^*_\phi]^{-1}( p^*({\bf x})).
\label{eq:link} \end{equation} Finally, the loss is said to be proper
and the predictor calibrated~\citep{DeGroot, Platt, Caruana, Raftery}.
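As an illustrative check (ours), equation~(\ref{eq:phieq2}) can be verified numerically for the exponential loss: plugging $C_\phi^*(\eta) = 2\sqrt{\eta(1-\eta)}$ and the AdaBoost inverse link from Table~\ref{tab:losses} into~(\ref{eq:phieq2}) should reconstruct $\phi(v) = e^{-v}$.

```python
import math

# Minimum conditional risk and inverse link of the exponential loss.
def C_star(eta):
    return 2 * math.sqrt(eta * (1 - eta))

def inv_link(v):
    return math.exp(2 * v) / (1 + math.exp(2 * v))

def C_star_prime(eta, h=1e-6):
    # central-difference derivative of C*
    return (C_star(eta + h) - C_star(eta - h)) / (2 * h)

def phi_reconstructed(v):
    # right-hand side of the phi(v) reconstruction formula
    eta = inv_link(v)
    return C_star(eta) + (1 - eta) * C_star_prime(eta)

# The reconstruction matches phi(v) = exp(-v) up to derivative noise.
for v in (-2.0, -0.5, 0.0, 0.3, 1.7):
    assert abs(phi_reconstructed(v) - math.exp(-v)) < 1e-4
```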
In practice, an estimate of the optimal predictor ${\hat p}^*({\bf x})$ is found by minimizing the empirical risk \begin{equation}
R_{emp}(p) = \frac{1}{n} \sum_i L(p({\bf x}_i),y_i)
\label{eq:emprisk} \end{equation} over a training set ${\cal D} = \{({\bf x}_1,y_1), \ldots, ({\bf x}_n,y_n)\}$. Estimates of the probabilities $ \eta({\bf x})$ are now found from ${\hat p}^*$ using \begin{equation}
{\hat \eta}({\bf x}) = [f^*_\phi]^{-1}({\hat p^*}({\bf x})).
\label{eq:hateta} \end{equation}
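A minimal sketch (ours) of this pipeline with a constant predictor and the exponential loss: minimizing the empirical risk over a toy label set and then applying the inverse link, as in~(\ref{eq:hateta}), recovers the empirical class frequency, as expected for a calibrated predictor.

```python
import math

# Toy training set: labels only (a constant predictor ignores x).
ys = [1] * 7 + [-1] * 3          # 7 positives, 3 negatives
n = len(ys)

def emp_risk(p):                 # empirical exponential-loss risk of constant p
    return sum(math.exp(-y * p) for y in ys) / n

# Minimize the (convex) empirical risk by ternary search.
lo, hi = -10.0, 10.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if emp_risk(m1) < emp_risk(m2):
        hi = m2
    else:
        lo = m1
p_hat = (lo + hi) / 2

# Inverse link of the exponential loss maps the predictor to a probability.
eta_hat = math.exp(2 * p_hat) / (1 + math.exp(2 * p_hat))

assert abs(p_hat - 0.5 * math.log(7 / 3)) < 1e-6   # closed-form minimizer
assert abs(eta_hat - 0.7) < 1e-6                   # empirical frequency 7/10
```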
\subsection{Positive Definite Kernel Embedding of Probability Distributions } In this section we review the notion of embedding probability measures into reproducing kernel Hilbert spaces \cite{book:RHKS, Fukumizu2004, Sriperumbudur2010}.
Let ${\bf x} \in \cal X$ be a random variable defined on a topological space $\cal X$ with associated probability measure $P$. Also, let $\cal H$ be a Reproducing Kernel Hilbert Space (RKHS). Then there is a mapping $\Phi: {\cal X} \rightarrow {\cal H}$
such that \begin{equation} <\Phi({\bf x}),f>_{\cal H}=f({\bf x}) ~\mbox{for all}~ f \in {\cal H}. \end{equation} The mapping can be written as $\Phi({\bf x}) = k({\bf x},.)$ where $k(.,{\bf x})$ is a positive definite kernel function parametrized by ${\bf x}$. A dot product representation of $k({\bf x},{\bf y})$ exists in the form of \begin{equation} k({\bf x},{\bf y})=<\Phi({\bf x}),\Phi({\bf y})>_{\cal H} \end{equation} where ${\bf x},{\bf y} \in {\cal X}$.
For a given Reproducing Kernel Hilbert Space $\cal H$, the mean embedding ${\bm \mu}_P \in {\cal H}$ of the distribution $P$ exists under certain conditions and is defined as \begin{equation} {\bm \mu}_P(t)=<{\bm \mu}_P(.),k(.,t)>_{\cal H}=E_{\cal X}[k(x,t)]. \end{equation} In words, the mean embedding ${\bm \mu}_P$ of the distribution $P$ is the expectation under $P$ of the mapping $k(.,t)=\Phi(t)$.
The maximum mean discrepancy (MMD) \cite{MMD} is expressed as the squared difference between the embedded means ${\bm \mu}_P$ and ${\bm \mu}_Q$ of the two embedded distributions $P$ and $Q$ as \begin{eqnarray} \label{eq:MMDDEf}
MMD_{\cal F}(P,Q)=||{\bm \mu}_P - {\bm \mu}_Q||^2_{\cal H}, \end{eqnarray} where $\cal F$ is a unit ball in a universal RKHS, which requires, among other conditions, that $k(.,x)$ be continuous. It can be shown that the Reproducing Kernel Hilbert Spaces associated with the Gaussian and Laplace kernels are universal \cite{Steinwart}. Finally, an important property of the MMD is that it is injective, which is formally stated by the following theorem. \begin{Thm}{~\citep{MMD}} \label{Thm:InjectiveMMD} Let $\cal F$ be a unit ball in a universal RKHS $\cal H $ defined on the compact metric space $\cal X$ with associated continuous kernel $k(.,x)$. $MMD_{\cal F}(P,Q) = 0$ if and only if $P=Q$. \end{Thm}
\section{Strictly Proper Kernel Scoring Rules and Divergences }
In this section we define the Kernel Score and Kernel Divergence and show when the Kernel Score is strictly proper.
To do this we need to define the projected embedded distribution. \begin{definition} Let ${\bf x} \in \cal X$ be a random variable defined on a topological space $\cal X$ with associated probability distribution $P$. Also, let $\cal H$ be a universal RKHS with associated positive definite kernel function $k({\bf x},{\bf w})=<\Phi({\bf x}),\Phi({\bf w})>_{\cal H}$. The projection of $\Phi({\bf x})$ onto a fixed vector $\Phi({\bf w})$ in $\cal H$ is denoted by $x^p$ and found as \begin{eqnarray} \label{eq:FindxpwK} x^p=\frac{k({\bf w},{\bf x})}{\sqrt{k({\bf w},{\bf w})}}. \end{eqnarray} The univariate distribution associated with $x^p$ is defined as the projected embedded distribution of $P$ and denoted by $P^p$. The mean and variance of $P^p$ are denoted by $\mu^p_P$ and $(\sigma^p_P)^2$. \end{definition}
The Kernel Score and Kernel Divergence are now defined as follows. \begin{definition} Let $P$ and $Q$ be two distributions on $\cal X$. Also, let $\cal H$ be a universal RKHS with associated positive definite kernel function $k({\bf x},{\bf w})=<\Phi({\bf x}),\Phi({\bf w})>_{\cal H}$ where $\cal F$ is a unit ball in $\cal H$. Finally, assume that $\Phi({\bf w})$ is a fixed vector in $\cal H$.
The Kernel Score between distributions $P$ and $Q$ is defined as \begin{eqnarray} S_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)= \int \left( \frac{P^p(x^p) + Q^p(x^p)}{2} \right) C\left( \frac{P^p(x^p)}{P^p(x^p) + Q^p(x^p)} \right)d(x^p), \end{eqnarray} and the Kernel Divergence between distributions $P$ and $Q$ is defined as \begin{eqnarray} KD_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q) &=& \frac{1}{2} - S_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q) \\ &=&\frac{1}{2} - \int \left( \frac{P^p(x^p) + Q^p(x^p)}{2} \right) C\left( \frac{P^p(x^p)}{P^p(x^p) + Q^p(x^p)} \right)d(x^p), \end{eqnarray} where $C$ is a continuously differentiable strictly concave symmetric function such that $C(\eta)=C(1-\eta)$ for all $\eta \in [0~1]$, $C(0)=C(1)=0$, $C(\frac{1}{2})=\frac{1}{2}$ and $P^p$ and $Q^p$ are the projected embedded distributions of $P$ and $Q$. \end{definition}
We can now present conditions under which a Kernel Score is strictly proper and Kernel Divergence
has the important property of (\ref{eq:StrictDivProperty}). \begin{Thm} \label{thm:StrictDivProperty} The Kernel Score is strictly proper and the Kernel Divergence has the property of \begin{equation} \label{eq:StrictKernelDivProperty} KD_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)=0 ~\mbox{if and only if}~ P=Q, \end{equation} if $\Phi({\bf w})$ is chosen such that it is not in the orthogonal complement of the set $M=\{{\bm \mu}_P - {\bm \mu}_Q\}$, where ${\bm \mu}_P$ and ${\bm \mu}_Q$ are the mean embeddings of $P$ and $Q$ respectively. \end{Thm} \begin{proof} See supplementary material \ref{app:KscoreStrict}.
\end{proof} We denote Kernel Divergences that have the desired property of (\ref{eq:StrictKernelDivProperty}) as Strictly Proper Kernel Divergences. The canonical choice of a projection vector $\Phi({\bf w})$ that is not in the orthogonal complement of $M=\{{\bm \mu}_P - {\bm \mu}_Q\}$
is $\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$. The following lemma lists some valid choices. \begin{lemma} \label{thm:validchoices} The Kernel Score and Kernel Divergence associated with the following choices of $\Phi({\bf w})$ are strictly proper. \begin{enumerate} \item
$\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$. \item $\Phi({\bf w})$ equal to the normalized kernel Fisher discriminant projection vector. \item $\Phi({\bf w})$ equal to the normalized kernel SVM projection vector. \end{enumerate} \end{lemma} \begin{proof} See supplementary material \ref{app:ValidVect}.
\end{proof} In what follows we consider the implications of choosing different $\Phi({\bf w})$ projections and concave functions $C$ for the Strictly Proper Kernel Score and Kernel Divergence.
\section{The Maximum Mean Discrepancy Connection}
If we choose $C$ to be the concave function of $C_{Exp}(\eta)=\sqrt{(\eta(1-\eta))}$ and assume that the univariate projected embedded distributions $P^p$ and $Q^p$ are Gaussian then, using the Bhattacharyya bound \cite{BattLeeFeat,ImageSegClust}, we can readily show that \begin{eqnarray} \label{eq:battarExp2} &&S_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)= \frac{1}{2}\cdot e^{(-B)}, \\ &&KD_{C_{Exp},k,{\cal F},{\Phi({\bf w})}}(P,Q)=\frac{1}{2} -\frac{1}{2}\cdot e^{(-B)}, \\ && B=\frac{1}{4}\log \left ( \frac{1}{4} \left (\frac{( \sigma^p_P)^2}{( \sigma^p_Q)^2}+\frac{( \sigma^p_Q)^2}{( \sigma^p_P)^2}+2 \right)\right )+\frac{1}{4} \left ( \frac{( \mu^p_P- \mu^p_Q)^2}{( \sigma^p_P)^2 + ( \sigma^p_Q)^2} \right), \end{eqnarray} where $ \mu^p_P$, $ \mu^p_Q$, $\sigma^p_P$ and $ \sigma^p_Q$ are the means and variances of the projected embedded distributions $P^p$ and $Q^p$. We will refer to these as the Bhattacharyya Kernel Score and Bhattacharyya Kernel Divergence. Note that if $\sigma^p_P = \sigma^p_Q$ then the above equation simplifies to $B=\frac{1}{4} \left ( \frac{( \mu^p_P- \mu^p_Q)^2}{( \sigma^p_P)^2 + ( \sigma^p_Q)^2} \right)$.
This leads to the following results.
\begin{lemma} \label{lemma:AlternateMMD}
Let $P$ and $Q$ be two distributions where $\mu^p_P$ and $\mu^p_Q$ are the respective means of the projected embedded distributions $P^p$ and $Q^p$ with projection vector $\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$. Then \begin{eqnarray} MMD_{\cal F}(P,Q)=(\mu^p_P - \mu^p_Q)^2. \end{eqnarray} \end{lemma} \begin{proof} See supplementary material \ref{app:MMDNewLook}.
\end{proof} With this new alternative outlook on the MMD, it can be seen as a special case of a strictly proper Kernel Score under certain assumptions outlined in the following theorem.
\begin{Thm} \label{Thm:KScoreMMDConnection}
Let $C$ be the concave function of $C_{Exp}(\eta)=\sqrt{(\eta(1-\eta))}$ and $\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$. Then \begin{eqnarray} MMD_{\cal F}(P,Q) \propto \log \left(2 S_{C_{Exp},k,{\cal F},{\Phi({\bf w})}}(P,Q) \right)
\end{eqnarray} under the assumption that the projected embedded distributions $P^p$ and $Q^p$ are Gaussian distributions of equal variance.
\end{Thm} \begin{proof} See supplementary material \ref{app:MMDKSCoreConnection}.
\end{proof}
In other words, if we set $\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||}$ and project onto this vector, the MMD is equal to the squared distance between the means of the projected embedded distributions. Note that while the MMD incorporates all the higher moments of the distribution of the data in the original space and determines a probability distribution uniquely \cite{KernelEmbeddingBeyonds}, it completely disregards the higher moments of the projected embedded distributions.
This suggests that by incorporating more information regarding the projected embedded distributions, such as their variances, we can arrive at measures such as the Bhattacharyya Kernel Divergence that are more versatile than the MMD in the finite sample setting. In the experimental section we apply these measures to the problem of kernel hypothesis testing and show that they outperform the MMD.
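A small sketch (ours) of the closed-form Bhattacharyya Kernel Divergence, written with the $e^{-B}$ sign convention so that the divergence is nonnegative and vanishes exactly when the projected Gaussians coincide; note that it responds to a variance gap even when the projected means agree, which the mean-only MMD analogue cannot.

```python
import math

def bhattacharyya_kd(mu_p, var_p, mu_q, var_q):
    """Bhattacharyya Kernel Divergence from projected means and variances.

    Uses the e^{-B} convention so the divergence is zero iff the two
    projected Gaussians coincide and positive otherwise.
    """
    B = 0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2)) \
      + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q)
    return 0.5 - 0.5 * math.exp(-B)

# Identical projected distributions: B = 0, divergence = 0.
assert abs(bhattacharyya_kd(0.0, 1.0, 0.0, 1.0)) < 1e-12

# A pure variance difference (equal means) still yields a positive divergence.
assert bhattacharyya_kd(0.0, 1.0, 0.0, 4.0) > 0.01
```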
\section{Connections to the Minimum Risk} In this section we establish the connection between the Kernel Score and the minimum risk associated with the projected embedded distributions. This will provide further insight towards the effect of choosing different concave $C$ functions and different projection vectors $\Phi({\bf w})$ on the Kernel Score. First, we present a general formulation for the minimum risk of (\ref{eq:risk}) for a proper loss function and show that we can partition any such risk into two terms akin to partitioning of the Brier score~\citep{DeGroot, Murphy1972}. \begin{lemma} \label{Thm:Brier} Let $\phi$ be a proper loss function in the form of (\ref{eq:phieq2}) and ${\hat p^*}({\bf x})$ an estimate of the optimal predictor ${ p^*}({\bf x})$. The risk $R({\hat p^*})$ can be partitioned into a term that is a measure of calibration $R_{Calibration}$ plus a term that is the minimum risk $R({p^*})$ in the form of \begin{eqnarray} R({\hat p^*}) &=& \\
&& \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\left(\phi({\hat p^*}({\bf x}))-\phi({ p^*}({\bf x}))\right) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\left(\phi(-{\hat p^*}({\bf x}))-\phi(-{ p^*}({\bf x})) \right) \right] d{\bf x} \nonumber \\
&+& \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({ p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{ p^*}({\bf x})) \right] d{\bf x} \nonumber \\ &=& R_{Calibration} + R({p^*}). \end{eqnarray}
Furthermore the minimum risk term $R({p^*})$ can be written as \begin{eqnarray} \label{eq:MinRiskClean}
R({p^*})=\int_{\bf x} P_{{\bf X}}({\bf x}) C_\phi^*(P_{Y|{\bf X}}(1|{\bf x}) ) d{\bf x}. \end{eqnarray}
\end{lemma} \begin{proof} See supplementary material \ref{app:Brier}. \end{proof}
The following theorem that writes the Kernel Score in terms of the minimum risk associated with the projected embedded distributions $R^p({p^*})$ is now readily proven. \begin{Thm} \label{Thm:KDMinRiskConect} Let $P$ and $Q$ be two distributions and choose $C=C^*_{\phi}$. Then \begin{eqnarray} \label{eq:KDMinRiskequality} S_{C^*_{\phi},k,{\cal F},{\Phi({\bf w})}}(P,Q)= R^p({p^*}),
\end{eqnarray} where $R^p({p^*})$ is the minimum risk associated with the projected embedded distributions of $P^p$ and $Q^p$. \end{Thm} \begin{proof} See supplementary material \ref{app:KscoreMinRisk}.
\end{proof} We conclude that the minimum risk associated with the projected embedded distributions term $R^p({p^*})$, and in turn the Kernel Score $S_{C^*_{\phi},k,{\cal F},{\Phi({\bf w})}}(P,Q)$,
are constants related to the distributions $P^p$ and $Q^p$ (determined by the choice of $\Phi({\bf w})$) and the choice of $C_\phi^*$.
The effect of changing $C=C^*_{\phi}$ can now be studied in detail by noting the general result presented in the following theorem \cite{ATightUpperBound, ArbitrarilyTight, book:ProbPatRec}.
\begin{Thm} \label{thm:tightestC} Let $C_\phi^*$ be a continuously differentiable concave symmetric function such that $C_\phi^*(\eta)=C_\phi^*(1-\eta)$ for all $\eta \in [0~1]$, $C_\phi^*(0)=C_\phi^*(1)=0$ and $C_\phi^*(\frac{1}{2})=\frac{1}{2}$. Then $C_\phi^*(\eta) \ge \min(\eta,1-\eta)$ and $R({p^*}) \ge R^*$. Furthermore, for any $\epsilon$ such that $R({p^*})-R^* \le \epsilon$ there exists $\delta$ and $C_\phi^*$ where $C_\phi^*(\eta) - \min(\eta,1-\eta) \le \delta$. \end{Thm} \begin{proof} See Section 2 of \cite{ATightUpperBound}, Section 2, Theorems 2 and 4 of \cite{ArbitrarilyTight}, and Chapter 2 of \cite{book:ProbPatRec}. \end{proof} The above theorem, when especially applied to the projected embedded distributions, states that the minimum risk associated with the projected embedded distributions $R^p({p^*})$ is an upper bound on the Bayes risk associated with the projected embedded distributions ${R^{p}}^*$ and as $C_\phi^*$ is made arbitrarily close to $C_{0/1}^*=\min(\eta,1-\eta)$ this upper bound is tight.
In summary, using different $\Phi({\bf w})$ in the Kernel Score formulation, changes the projected embedded distributions of $P^p$ and $Q^p$ and the Bayes risk associated with these projected embedded distributions ${R^{p}}^*$. Using different $C_\phi^*$ changes the upper bound estimate of this Bayes risk $R^p({p^*})$.
\subsubsection{ Tighter Bounds on the Bayes Error} \label{sec:TighterBounds22} We can easily verify that, in general, the minimum risk is equal to the Bayes error when $C_\phi^*=C_{0/1}^*=\min(\eta,1-\eta)$, leading to the smallest possible minimum risk for fixed data distributions. Unfortunately, $C_{0/1}^*=\min(\eta,1-\eta)$ is not continuously differentiable and so we consider other $C_\phi^*$ functions. For example when $C_{LS}^*(\eta)=-2\eta(\eta-1)$ is used,
the minimum risk simplifies to \begin{eqnarray}
R_{C_{LS}^*}(p^*)=\int \frac{P_{{\bf X}|Y}({\bf x}|1)P_{{\bf X}|Y}({\bf x}|-1)}{(P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1))} d{\bf x}, \end{eqnarray} which is equal to the asymptotic nearest neighbor bound \cite{book:Fukunaga,NNClassification} on the Bayes error. We have used the notation $R_{C_{LS}^*}(p^*)$ to make it clear that this is the minimum risk associated with the $C_{LS}^*$ function.
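For completeness, this identity follows directly from~(\ref{eq:MinRiskClean}) under equal class priors, where $P_{{\bf X}}({\bf x}) = \frac{1}{2}\left(P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)\right)$ and $\eta({\bf x}) = \frac{P_{{\bf X}|Y}({\bf x}|1)}{P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)}$:
\begin{eqnarray*}
R_{C_{LS}^*}(p^*) = \int P_{{\bf X}}({\bf x})\, 2\eta({\bf x})(1-\eta({\bf x}))\, d{\bf x}
= \int \frac{P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)}{2}\cdot\frac{2\,P_{{\bf X}|Y}({\bf x}|1)P_{{\bf X}|Y}({\bf x}|-1)}{\left(P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)\right)^2}\, d{\bf x},
\end{eqnarray*}
which reduces to the integrand displayed above.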
\begin{figure*}\label{fig:JTableParamPlot}
\end{figure*}
From Theorem \ref{thm:tightestC} we know that when the minimum risk is computed under other $C_{\phi}^*$ functions, a list of which is presented in Table-\ref{tab:JTableParameteres}, an upper bound on the Bayes error is being computed.
Also, the $C_{\phi}^*$ that are closer to $C_{0/1}^*$ result in minimum risk formulations that provide tighter bounds on the Bayes error. Figure-\ref{fig:JTableParamPlot} shows that $C_{LS}^*$, $C_{Cosh}^*$, $C_{Sec}^*$, $C_{Log}^*$, $C_{Log-Cos}^*$ and $C_{Exp}^*$ are, in that order, the closest to $C_{0/1}^*$, and the corresponding minimum-risk formulations in Table-\ref{tab:RefFormulasDiffJ4} provide, in the same order, tighter bounds on the Bayes error. This can also be directly verified by noting that $R_{C_{Exp}^*}$ is equal to the Bhattacharyya bound \cite{book:Fukunaga}, $R_{C_{LS}^*}$ is equal to the asymptotic nearest neighbor bound \cite{book:Fukunaga,NNClassification}, $R_{C_{Log}^*}$ is equal to the Jensen-Shannon divergence \cite{JenShannonLin} and $R_{C_{Log-Cos}^*}$ is similar to the bound in \cite{ArbitrarilyTight}. These four formulations have been independently studied in the literature and the fact that they produce upper bounds on the Bayes error has been directly verified. Here we have rederived these four measures by resorting to the concept of minimum risk and proper loss functions, which not only allows us to provide a unified approach to these different methods but has also led to a systematic method for deriving other novel bounds on the Bayes error, namely $R_{C_{Cosh}^*}$ and $R_{C_{Sec}^*}$.
\begin{table}[tbp]
\centering
\caption{\protect\footnotesize{ $C^*_{\phi}(\eta)$ specifics used to compute the minimum-risk.}}
\begin{tabular}{|c|c|}
\hline
Method & $C^*_{\phi}(\eta)$ \\
\hline
\hline
LS & $-2\eta(\eta-1)$ \\
\hline
Log & $-0.7213(\eta\log(\eta)+(1-\eta)\log(1-\eta))$ \\
\hline
Exp & $\sqrt{\eta(1-\eta)}$ \\
\hline
Log-Cos & $(\frac{1}{2.5854})\log(\frac{\cos(2.5854(\eta-\frac{1}{2}))}{\cos(\frac{2.5854}{2})})$ \\
\hline
Cosh & $-\cosh(1.9248(\frac{1}{2}-\eta))+\cosh(\frac{-1.9248}{2})$ \\
\hline
Sec & $-\sec(1.6821(\frac{1}{2}-\eta))+\sec(\frac{-1.6821}{2})$ \\
\hline
\end{tabular}
\label{tab:JTableParameteres} \end{table}
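The conditions of Theorem~\ref{thm:tightestC} can be checked numerically for the entries of Table~\ref{tab:JTableParameteres} (our own sketch; the Log entry is written as the scaled entropy $-0.7213(\eta\log\eta + (1-\eta)\log(1-\eta))$):

```python
import math

def mlog(x):                 # x*log(x) with the 0*log(0) = 0 convention
    return 0.0 if x == 0 else x * math.log(x)

C = {
    "LS":      lambda e: -2 * e * (e - 1),
    "Log":     lambda e: -0.7213 * (mlog(e) + mlog(1 - e)),
    "Exp":     lambda e: math.sqrt(e * (1 - e)),
    "Log-Cos": lambda e: (1 / 2.5854) * math.log(
                   math.cos(2.5854 * (e - 0.5)) / math.cos(2.5854 / 2)),
    "Cosh":    lambda e: -math.cosh(1.9248 * (0.5 - e)) + math.cosh(1.9248 / 2),
    "Sec":     lambda e: -1 / math.cos(1.6821 * (0.5 - e)) + 1 / math.cos(1.6821 / 2),
}

for name, c in C.items():
    assert abs(c(0.0)) < 1e-3 and abs(c(1.0)) < 1e-3   # C*(0) = C*(1) = 0
    assert abs(c(0.5) - 0.5) < 1e-3                    # C*(1/2) = 1/2
    for e in (0.05, 0.2, 0.35, 0.65, 0.9):
        assert c(e) >= min(e, 1 - e) - 1e-9            # upper-bounds 0/1 risk
```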
\begin{table}[tbp] \centering \caption{\protect\footnotesize{Minimum-risk for different $C^*_{\phi}(\eta)$ }} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|}
\hline
$C^*_{\phi}(\eta)$ & $R_{C_{\phi}^*}$ \\
\hline
\hline
Zero-One & Bayes Error \\
& \\
\hline
LS & $\int \frac{P({\bf x}|1)P({\bf x}|-1)}{P({\bf x}|1)+P({\bf x}|-1)} d{\bf x}$ \\
& \\
\hline
Exp & $\frac{1}{2}\int \sqrt{P({\bf x}|1)P({\bf x}|-1)} d{\bf x}$ \\
& \\
\hline
Log & $-\frac{0.7213}{2}D_{KL}(P({\bf x}|1)||P({\bf x}|1)+P({\bf x}|-1))-\frac{0.7213}{2}D_{KL}(P({\bf x}|-1)||P({\bf x}|1)+P({\bf x}|-1))$ \\
& \\
\hline
Log-Cos & $\int \frac{P({\bf x}|1)+P({\bf x}|-1)}{2} \left[ \frac{1}{2.5854}\log\left(\frac{\cos(\frac{2.5854(P({\bf x}|1)-P({\bf x}|-1))}{2(P({\bf x}|1)+P({\bf x}|-1))})}{cos(\frac{2.5854}{2})}\right) \right] d{\bf x}$ \\
& \\
\hline
Cosh & $\int \frac{P({\bf x}|1)+P({\bf x}|-1)}{2} \left[ -\cosh(\frac{1.9248(P({\bf x}|-1)-P({\bf x}|1))}{2(P({\bf x}|1)+P({\bf x}|-1))}) +\cosh(\frac{-1.9248}{2}) \right] d{\bf x}$ \\
& \\
\hline
Sec & $\int \frac{P({\bf x}|1)+P({\bf x}|-1)}{2} \left[ -\sec(\frac{1.6821(P({\bf x}|-1)-P({\bf x}|1))}{2(P({\bf x}|1)+P({\bf x}|-1))}) +\sec(\frac{-1.6821}{2}) \right] d{\bf x}$ \\
& \\
\hline \end{tabular} } \label{tab:RefFormulasDiffJ4} \end{table}
Next, we demonstrate a general procedure for deriving a class of polynomial functions $C_{Poly-n}^*(\eta)$ that are increasingly and arbitrarily close to $C_{0/1}^*(\eta)$.
\begin{Thm} \label{Thm:TightestBoundCR} Let \begin{eqnarray} C_{Poly-n}^*(\eta)=K_2(\int Q(\eta) d(\eta) +K_1\eta) \end{eqnarray} where \begin{eqnarray} &&Q(\eta) = \int-(\eta(1-\eta))^n d(\eta), \\ &&K_1 = -Q(\frac{1}{2}), \\
&&K_2 = \frac{\frac{1}{2}}{\left. (\int Q(\eta) d(\eta) +K_1\eta)\right|_{\eta=\frac{1}{2}}}. \end{eqnarray} Then
$R_{C_{Poly-n}^*} \ge R_{C_{Poly-(n+1)}^*} \ge R^*$ for all $n \ge 0$ and $R_{C_{Poly-n}^*}$ converges to $R^*$ as $n \rightarrow \infty$.
\end{Thm} \begin{proof} See supplementary material \ref{app:Polyn}. \end{proof}
As an example, we derive $C_{Poly-2}^*(\eta)$ by following the above procedure \begin{eqnarray} {C^*_{Poly-2}}''(\eta)=-(\eta(1-\eta))^2=-(\eta^2+\eta^4-2\eta^3). \end{eqnarray} From this we have \begin{eqnarray} {C_{Poly-2}^*}'(\eta)=-(\frac{1}{3}\eta^3+\frac{1}{5}\eta^5-\frac{2}{4}\eta^4) + K_1. \end{eqnarray} Satisfying ${C_{Poly-2}^*}'(\frac{1}{2})=0$ we find $K_1=\frac{1}{60}$. Therefore, \begin{eqnarray} C_{Poly-2}^*(\eta)=K_2(-\frac{1}{12}\eta^4 -\frac{1}{30}\eta^6 +\frac{1}{10}\eta^5 +\frac{1}{60}\eta). \end{eqnarray} Satisfying $C_{Poly-2}^*(\frac{1}{2})=\frac{1}{2}$ we find $K_2=\frac{960}{11}$.
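The constants above can be verified with exact rational arithmetic; the following sketch (ours) implements the construction of Theorem~\ref{Thm:TightestBoundCR} for general $n$ using only polynomial integration:

```python
from fractions import Fraction as F

def polymul(a, b):                  # product of coefficient lists
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyint(p):                     # antiderivative with zero constant term
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

def polyeval(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def poly_constants(n):
    base = [F(0), F(1), F(-1)]      # eta(1 - eta) = eta - eta^2
    p = [F(1)]
    for _ in range(n):
        p = polymul(p, base)        # (eta(1 - eta))^n
    Q = polyint([-c for c in p])    # Q = integral of -(eta(1 - eta))^n
    K1 = -polyeval(Q, F(1, 2))      # enforce C'(1/2) = 0
    C0 = polyint(Q)                 # integral of Q
    val = polyeval(C0, F(1, 2)) + K1 * F(1, 2)
    K2 = F(1, 2) / val              # enforce C(1/2) = 1/2
    return K1, K2

K1, K2 = poly_constants(2)
assert K1 == F(1, 60) and K2 == F(960, 11)   # the Poly-2 constants above

K1, K2 = poly_constants(4)
assert K1 == F(1, 1260)
assert abs(float(K2) - 1671.3) < 0.1         # the Poly-4 constant below
```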
\begin{figure*}\label{fig:PlotPolinomialJ}
\end{figure*}
Figure-\ref{fig:PlotPolinomialJ} plots $C_{Poly-2}^*(\eta)$ which shows that, as expected, it is a closer approximation to $C_{0/1}^*(\eta)$ when compared to $C_{LS}^*(\eta)$. Following the same steps, it is readily shown that $C_{LS}^*(\eta)=C_{Poly-0}^*(\eta)$, meaning that $C_{LS}^*(\eta)$ is derived from the special case of $n=0$.
As we increase $n$, we increase the order of the resulting polynomial which provides a tighter fit to $C_{0/1}^*(\eta)$. Figure-\ref{fig:PlotPolinomialJ} also plots $C_{Poly-4}^*(\eta)$ \begin{eqnarray} \label{eq:CPoly4} &&C_{Poly-4}^*(\eta)= \\ &&1671.3(-\frac{1}{90}\eta^{10} +\frac{1}{18}\eta^9 -\frac{3}{28}\eta^8 +\frac{2}{21}\eta^7 -\frac{1}{30}\eta^6 +\frac{1}{1260}\eta) \nonumber \end{eqnarray} which is an even closer approximation to $C_{0/1}^*(\eta)$. Table-\ref{tab:RefFormulasDiffJPolyn} shows the corresponding minimum-risk $R_{C_{Poly-n}^*}(p^*)$ for different $C_{Poly-n}^*(\eta)$ functions, with $R_{C_{Poly-4}^*}(p^*)$ providing the tightest bound on the Bayes error. Arbitrarily tighter bounds are possible by simply using $C_{Poly-n}^*(\eta)$ with larger $n$.
\begin{table}[tbp] \centering \caption{\protect\footnotesize{Minimum-risk for different $C_{Poly-n}^*(\eta)$ }} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|}
\hline
$C^*_{\phi}(\eta)$ & $R_{C_{\phi}^*}$ \\
\hline
\hline
Zero-One & Bayes Error \\
& \\
\hline
Poly-0 (LS) & $\int \frac{P({\bf x}|1)P({\bf x}|-1)}{P({\bf x}|1)+P({\bf x}|-1)} d{\bf x}$ \\
& \\
\hline
Poly-2 & $\frac{K_2}{2} \int -\frac{P({\bf x}|1)^4}{12(2P({\bf x}))^3} - \frac{P({\bf x}|1)^6}{30(2P({\bf x}))^5} +\frac{P({\bf x}|1)^5}{10(2P({\bf x}))^4} +K_1P({\bf x}|1) d{\bf x} $ \\
& $K_1=0.0167,K_2=87.2727,P({\bf x})=\frac{P({\bf x}|1)+P({\bf x}|-1)}{2}$ \\
& \\
\hline
Poly-4 & $\frac{K_2}{2} \int -\frac{P({\bf x}|1)^{10}}{90(2P({\bf x}))^9} +\frac{P({\bf x}|1)^9}{18(2P({\bf x}))^8} -\frac{3P({\bf x}|1)^8}{28(2P({\bf x}))^7} +\frac{2P({\bf x}|1)^7}{21(2P({\bf x}))^6} -\frac{P({\bf x}|1)^6}{30(2P({\bf x}))^5}+K_1P({\bf x}|1) d{\bf x}$ \\
& $K_1=7.9365\times10^{-4},K_2=1671.3,P({\bf x})=\frac{P({\bf x}|1)+P({\bf x}|-1)}{2}$ \\
& \\
\hline \end{tabular} } \label{tab:RefFormulasDiffJPolyn} \end{table}
Such arbitrarily tight bounds on the Bayes error are important in a number of applications such as in feature selection and extraction \cite{NUNOMaxDiversityNIPS, NunoNaturalFeatures,NewFeatPers,FeatMutual}, information theory \cite{Fano, fdivrisk, book:InfoTheory, minimaxrisk}, classification and regression \cite{KLboost,ProjectClass,FriedmanPersuit}, etc.
In the experiments section we specifically show how using $C^*_{\phi}$ with tighter bounds on the Bayes error results in better performance on a feature selection and classification problem. We then consider the effect of using projection vectors $\Phi({\bf w})$ that are more discriminative, such as the normalized kernel Fisher discriminant projection vector or normalized kernel SVM projection vector described in Lemma {\ref{thm:validchoices}}, rather than the canonical projection vector
of $\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$. We show that these more discriminative projection vectors $\Phi({\bf w})$ result in significantly improved performance on a set of kernel hypothesis testing experiments.
\section{Computing The Kernel Score and Kernel Divergence in Practice } In most applications the distributions of $P$ and $Q$ are not directly known and are solely represented through a set of sample points. We assume that the data points $\{{\bf x}_1, ..., {\bf x}_{n_1}\}$ are sampled from $P$ and the data points $\{{\bf x}_1, ..., {\bf x}_{n_2}\}$ are sampled from $Q$. Note that the Kernel Score can be written as \begin{eqnarray} S_{C^*_{\phi},k,{\cal F},{\Phi({\bf w})}}(P,Q)={\mathbf E}_{Z}\left[ C\left( \frac{P^p(x^p)}{P^p(x^p) + Q^p(x^p)} \right) \right], \end{eqnarray} where the expectation is over the distribution defined by $P_Z(z)=\frac{P^p(x^p) + Q^p(x^p)}{2}$. The empirical Kernel Score and empirical Kernel Divergence can now be written as \begin{eqnarray} \label{KScoreEMP} {\hat S}_{C^*_{\phi},k,{\cal F},{\Phi({\bf w})}}(P,Q)&=&\frac{1}{n}\sum_{i=1}^{n} C\left( \frac{P^p(x^p_i)}{P^p(x^p_i) + Q^p(x^p_i)} \right) \\ \label{KDEMP} \widehat{KD}_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q) &=& \frac{1}{2} - {\hat S}_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q), \end{eqnarray} where $n=n_1+n_2$ and $x^p_i$ is the projection of $\Phi({\bf x}_i)$ onto $\Phi({\bf w})$.
Calculating $x^p_i$ in the above formulation using equation (\ref{eq:FindxpwK}) is still not possible because we generally do not know $\Phi({\bf w})$ and ${\bf w}$. A similar problem exists for the MMD. Nevertheless, the MMD \cite{TwoMMD} is estimated in practice as \begin{eqnarray} \label{eq:MMDPrac}
&&\widehat{MMD}_{\cal F}(P,Q) = ||\hat{\bm \mu}_P - \hat{\bm \mu}_Q||^2_{\cal H} \\ &=&\frac{1}{n_1n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_1}K({\bf x}_i{\bf x}_j) -\frac{2}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j) + \frac{1}{n_2n_2}\sum_{i=1}^{n_2}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j). \end{eqnarray}
In view of Lemma \ref{lemma:AlternateMMD} the MMD can be equivalently estimated as \begin{eqnarray} \label{eq:MMDaltPrac1} \widehat{MMD}_{\cal F}(P,Q)=(\hat \mu^p_P - \hat \mu^p_Q)^2, \end{eqnarray} where \begin{eqnarray} \label{eq:MMDaltPrac2} \hat \mu^p_P=\frac{\frac{1}{n_1n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_1}K({\bf x}_i{\bf x}_j) -\frac{1}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j)}{T}, \end{eqnarray} \begin{eqnarray} \label{eq:MMDaltPrac3} \hat \mu^p_Q=\frac{\frac{1}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j) - \frac{1}{n_2n_2}\sum_{i=1}^{n_2}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j)}{T} \end{eqnarray} and \begin{eqnarray} \label{eq:MMDaltPrac4} T=\sqrt{\frac{1}{n_1n_1}\sum_{i=1}^{n_1}\sum_{j=1}^{n_1}K({\bf x}_i{\bf x}_j) -\frac{2}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j) + \frac{1}{n_2n_2}\sum_{i=1}^{n_2}\sum_{j=1}^{n_2}K({\bf x}_i{\bf x}_j)}. \end{eqnarray} It can easily be verified that equations (\ref{eq:MMDaltPrac1})-(\ref{eq:MMDaltPrac4}) and equation (\ref{eq:MMDPrac}) are equivalent.
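This equivalence is easy to confirm numerically; the following sketch (ours) uses a Gaussian kernel and arbitrary toy samples:

```python
import math, random

def k(x, y, sigma=1.0):        # Gaussian (universal) kernel
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

random.seed(1)
X = [random.gauss(0, 1) for _ in range(60)]    # sample from P
Y = [random.gauss(1, 2) for _ in range(80)]    # sample from Q
n1, n2 = len(X), len(Y)

Kxx = sum(k(a, b) for a in X for b in X) / (n1 * n1)
Kxy = sum(k(a, b) for a in X for b in Y) / (n1 * n2)
Kyy = sum(k(a, b) for a in Y for b in Y) / (n2 * n2)

mmd_direct = Kxx - 2 * Kxy + Kyy               # double-sum estimator

T = math.sqrt(mmd_direct)                      # ||mu_P - mu_Q||_H
mu_p = (Kxx - Kxy) / T                         # projected mean of P
mu_q = (Kxy - Kyy) / T                         # projected mean of Q
mmd_projected = (mu_p - mu_q) ** 2             # projected-means estimator

assert mmd_direct > 0
assert abs(mmd_direct - mmd_projected) < 1e-12
```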
This equivalent method for calculating the MMD can be described as projecting the two embedded sample sets onto $\Phi({\bf w})=\frac{(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)}{||(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)||}$, estimating the means $\hat\mu^p_P$ and $\hat\mu^p_Q$ of the projected embedded sample sets and then finding the distance between these estimated means. This might seem like overcomplicating the original procedure. Yet, it serves to show that the MMD is solely measuring the distance between the means while disregarding all the other information available regarding the projected embedded distributions. Similarly, assuming that
$\Phi({\bf w})=\frac{(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)}{||(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)||}$, $x^p_i$ can now be estimated as
\begin{eqnarray} \label{eq:projctedxpiPrac}
x^p_i&&\!\!\!\!\!\!=<\Phi({\bf x}_i),{\bf w}>=<\Phi({\bf x}_i),\frac{({\hat{\bm \mu}_P}-{\hat{\bm \mu}_Q})}{||({\hat{\bm \mu}_P}-{\hat{\bm \mu}_Q})||}> \\
&&=\frac{<\Phi({\bf x}_i),{\hat{\bm \mu}_P}>-<\Phi({\bf x}_i),{\hat{\bm \mu}_Q}>}{||({\hat{\bm \mu}_P}-{\hat{\bm \mu}_Q})||} \\ &&=\frac{\frac{1}{n_1}\sum_{j=1}^{n_1}<\Phi({\bf x}_i),\Phi({\bf x}_j)>-\frac{1}{n_2}\sum_{j=1}^{n_2}<\Phi({\bf x}_i),\Phi({\bf x}_j)>}{T} \\ \label{eq:projctedxpiPracLAST} &&=\frac{\frac{1}{n_1}\sum_{j=1}^{n_1}K({\bf x}_i,{\bf x}_j)-\frac{1}{n_2}\sum_{j=1}^{n_2}K({\bf x}_i,{\bf x}_j)}{T}. \end{eqnarray} Once the $x^p_i$ are found for all $i$ using equation (\ref{eq:projctedxpiPracLAST}), estimating other statistics such as the variance is trivial. For example, the variances of the projected embedded distributions can now be estimated as \begin{eqnarray} \label{eq:ProjectedVariance11} (\hat \sigma^p_P)^2=\frac{1}{n_1}\sum_{i=1}^{n_1}(x^p_i-\hat \mu^p_P)^2 \\ \label{eq:ProjectedVariance22} (\hat \sigma^p_Q)^2=\frac{1}{n_2}\sum_{i=1}^{n_2}(x^p_i-\hat \mu^p_Q)^2. \end{eqnarray} In light of this, the empirical Bhattacharyya Kernel Score and empirical Bhattacharyya Kernel Divergence can now be readily calculated in practice as \begin{eqnarray} \label{eq:battarExp2EMP} &&\hat S_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)= \frac{1}{2}\cdot e^{(-\hat B)}, \\ &&\widehat{KD}_{C_{Exp},k,{\cal F},{\Phi({\bf w})}}(P,Q)=\frac{1}{2} -\frac{1}{2}\cdot e^{(-\hat B)}, \\ && \hat B=\frac{1}{4}\log \left ( \frac{1}{4} \left (\frac{( \hat \sigma^p_P)^2}{( \hat \sigma^p_Q)^2}+\frac{( \hat \sigma^p_Q)^2}{( \hat \sigma^p_P)^2}+2 \right)\right )+\frac{1}{4} \left ( \frac{( \hat \mu^p_P- \hat \mu^p_Q)^2}{( \hat \sigma^p_P)^2 + ( \hat \sigma^p_Q)^2} \right). \end{eqnarray} Finally, the empirical Kernel Score of equation (\ref{KScoreEMP}) and the empirical Kernel Divergence of equation (\ref{KDEMP}) can be calculated in practice after finding $P^p(x^p_i)$ and $Q^p(x^p_i)$ using any one dimensional probability model.
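Putting the pieces together, the following sketch (ours) computes the projections of equation~(\ref{eq:projctedxpiPracLAST}), the projected means and variances of equations~(\ref{eq:ProjectedVariance11})--(\ref{eq:ProjectedVariance22}), and the empirical Bhattacharyya Kernel Divergence, using the nonnegative $e^{-\hat B}$ convention:

```python
import math, random

def k(x, y, sigma=1.0):        # Gaussian kernel
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

random.seed(2)
X = [random.gauss(0.0, 1.0) for _ in range(50)]   # sample from P
Y = [random.gauss(1.5, 1.0) for _ in range(50)]   # sample from Q
n1, n2 = len(X), len(Y)

Kxx = sum(k(a, b) for a in X for b in X) / (n1 * n1)
Kxy = sum(k(a, b) for a in X for b in Y) / (n1 * n2)
Kyy = sum(k(a, b) for a in Y for b in Y) / (n2 * n2)
T = math.sqrt(Kxx - 2 * Kxy + Kyy)

def x_proj(x):                 # projection onto the canonical direction
    return (sum(k(x, b) for b in X) / n1 - sum(k(x, b) for b in Y) / n2) / T

xp = [x_proj(x) for x in X]    # projected P sample
yp = [x_proj(y) for y in Y]    # projected Q sample
mu_p, mu_q = sum(xp) / n1, sum(yp) / n2
var_p = sum((v - mu_p) ** 2 for v in xp) / n1
var_q = sum((v - mu_q) ** 2 for v in yp) / n2

# Empirical Bhattacharyya distance and Kernel Divergence.
B = 0.25 * math.log(0.25 * (var_p / var_q + var_q / var_p + 2)) \
  + 0.25 * (mu_p - mu_q) ** 2 / (var_p + var_q)
KD_hat = 0.5 - 0.5 * math.exp(-B)

assert abs((mu_p - mu_q) - T) < 1e-9   # projected mean gap equals the MMD root
assert 0 < KD_hat < 0.5                # nonnegative, bounded divergence
```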
Note that in the above formulations we used the canonical $\Phi({\bf w})=\frac{(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)}{||(\hat{\bm \mu}_P-\hat{\bm \mu}_Q)||}$. A similar approach is possible for other valid choices of $\Phi({\bf w})$. Namely, the projection vector $\Phi({\bf w})$ associated with the kernel Fisher discriminant
can be found in the form of \begin{eqnarray} \Phi({\bf w})=\sum_{j=1}^{n} \alpha_j \Phi({\bf x}_j) \end{eqnarray} using Algorithm 5.16 in \cite{Book:KernelMethodsforPatternAnalysis}. In this case $x^p_i$ can be found as \begin{eqnarray} \label{eq:xpiLDAEMP}
x^p_i=\frac{<\Phi({\bf w}), \Phi({\bf x}_i)>}{||\Phi({\bf w})||} = \frac{\sum_{j=1}^{n} \alpha_j K({\bf x}_i,{\bf x}_j)}{||\Phi({\bf w})||}, \end{eqnarray} where \begin{eqnarray}
||\Phi({\bf w})|| = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} \alpha_i \alpha_j K({\bf x}_i,{\bf x}_j)}. \end{eqnarray}
The projection vector $\Phi({\bf w})$ associated with the kernel SVM can also be found in the form of \begin{eqnarray} \Phi({\bf w})=\sum_{j \in SV} \alpha_j \phi({\bf x_j}) \end{eqnarray} using standard algorithms \cite{SVMSolver, SVMGuide}, where $SV$ is the set of support vectors. In this case the $x^p_i$ can be found using equation (\ref{eq:xpiLDAEMP}) calculated over the support vectors.
\section{One-Class Classifier for Kernel Hypothesis Testing} \label{sec:OneClass} From Theorem \ref{thm:StrictDivProperty} we conclude that the Kernel Divergence is injective, like the MMD. This means that the Kernel Divergence can be directly thresholded and used in hypothesis testing. We showed that while the MMD simply measures the distance between the means of the projected embedded distributions, the Bhattacharyya Kernel Divergence (BKD) incorporates information about both the means and variances of the two projected embedded distributions. We also showed that in general the Kernel Divergence (KD) provides a measure related to the minimum risk of the two projected embedded distributions. Each of these measures takes into account a different aspect of the two projected embedded distributions in relation to each other. We can integrate all of these measures into our hypothesis test by constructing a vector where each element is a different measure and learning a one-class classifier for this vector. In the hypothesis testing experiments of Section \ref{sec:HypoTests}, we constructed the vectors [MMD, KD] and [MMD, BKD] and implemented a simple one-class nearest neighbor classifier with infinity norm~\citep{OneClassPhd}, as depicted in Figure \ref{fig:RejectRegion}.
\begin{figure*}\label{fig:RejectRegion}
\end{figure*}
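One plausible reading of this one-class rule (our interpretation for illustration; the exact acceptance region of the cited classifier may differ) is: accept the null hypothesis when the test statistic vector's infinity-norm distance to its nearest training vector is within a radius calibrated on the training set.

```python
import numpy as np

def one_class_1nn_radius(train, alpha=0.05):
    # Leave-one-out nearest-neighbor distances under the infinity norm;
    # the acceptance radius is their (1 - alpha) empirical quantile.
    D = np.abs(train[:, None, :] - train[None, :, :]).max(-1)
    np.fill_diagonal(D, np.inf)
    return np.quantile(D.min(axis=1), 1.0 - alpha)

def rejects(x, train, radius):
    # Reject H0 (P = Q) when x is farther than `radius` from every training vector
    return np.abs(train - x).max(-1).min() > radius
```

Here `train` would hold bootstrap replicates of the statistic vector (e.g. rows of [MMD, KD]) computed under the null, and `x` the vector observed on the actual two samples.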
\section{Experiments} In this section we include various experiments that confirm different theoretical aspects of the Kernel Score and Kernel Divergence.
\subsection{Feature selection experiments}
Different bounds on the Bayes error are used in feature selection and ranking algorithms
\cite{NUNOMaxDiversityNIPS, NunoNaturalFeatures,NewFeatPers,FeatMutual, FeatRankEntropy}. In this section we show that the tighter bounds we have derived, namely $C^*_{Poly-2}$ and $C^*_{Poly-4}$, allow for improved feature selection and ranking. The experiments used ten binary UCI data sets of relatively small size: (\#1) Haberman's survival, (\#2) original Wisconsin breast cancer, (\#3) tic-tac-toe, (\#4) sonar, (\#5) Pima-diabetes, (\#6) liver disorder, (\#7) Cleveland heart disease, (\#8) echo-cardiogram, (\#9) breast cancer prognostic, and (\#10) breast cancer diagnostic.
Each data set was split into five folds, four of which were used for training and one for testing. This created five train-test pairs per data set, over which the results were averaged. The original data was augmented with noisy features. This was done by taking each feature and adding random scaled noise to a certain percentage of the data points. The scale parameters were $\{ 0.1, 0.3\}$ and the percentage of data points that were randomly affected was $\{ 0.25, 0.50, 0.75 \}$. Specifically, for each feature, a percentage of the data points had scaled zero mean Gaussian noise added to that feature in the form of \begin{eqnarray} {\bf x}_i={\bf x}_i+{\bf x}_i \cdot {y} \cdot s, \end{eqnarray} where ${\bf x}_i$ is the $i$-th feature of the original data vector, ${y} \in N(0,1)$ is the Gaussian noise and $s$ is the scale parameter.
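The noise-augmentation step above can be sketched as follows (a minimal NumPy sketch of the described procedure; function name and interface are ours):

```python
import numpy as np

def add_scaled_noise(X, frac, s, rng):
    """For each feature (column), add zero-mean Gaussian noise scaled by the
    feature value itself to a random fraction `frac` of the data points:
    x_i <- x_i + x_i * y * s, with y ~ N(0, 1) and scale parameter s."""
    X = X.copy()
    n, d = X.shape
    for j in range(d):
        idx = rng.choice(n, size=int(round(frac * n)), replace=False)
        X[idx, j] += X[idx, j] * rng.normal(0.0, 1.0, size=idx.size) * s
    return X
```

In the experiments, `s` would range over $\{0.1, 0.3\}$ and `frac` over $\{0.25, 0.50, 0.75\}$.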
The empirical minimum risk was then computed for each feature, where $P_{{\bf X}|Y}({\bf x}|y)$ was modeled as a $10$-bin histogram.
A greedy feature selection algorithm was implemented in which the features were ranked according to their empirical minimum risk and the highest ranked $5\%$ and $10\%$ of the features were selected. The selected features were then used to train and test a linear SVM classifier. If a certain minimum risk $C^*_{\phi}$ is a better bound on the Bayes error, we would expect it to choose better features and these better features should translate into a better SVM classifier with smaller error rate on the test data. Five different $C^*_{\phi}$ were considered namely $C^*_{Poly-4}$, $C^*_{Poly-2}$, $C^*_{LS}$, $C^*_{Log}$ and $C^*_{Exp}$ and the error rate corresponding to each $C^*_{\phi}$ was computed and averaged over the five folds. The average error rates were then ranked such that a rank of $1$ was assigned to the $C^*_{\phi}$ with smallest error and a rank of $5$ assigned to the $C^*_{\phi}$ with largest error.
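The per-feature risk ranking can be sketched as below. This is our own minimal sketch assuming equal class priors (as in the later theorem on the minimum risk), with $P_{{\bf X}|Y}$ estimated by histograms on shared bins; `C` is any concave link, e.g. the $0/1$ link $C^*_{0/1}(\eta)=\min\{\eta,1-\eta\}$.

```python
import numpy as np

def histogram_min_risk(x_pos, x_neg, C, bins=10):
    """Empirical minimum risk of one feature under equal priors:
    sum over bins of ((p + q) / 2) * C(p / (p + q))."""
    lo = min(x_pos.min(), x_neg.min())
    hi = max(x_pos.max(), x_neg.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(x_pos, bins=edges); p = p / p.sum()
    q, _ = np.histogram(x_neg, bins=edges); q = q / q.sum()
    m = (p + q) > 0  # skip empty bins
    return float(np.sum((p[m] + q[m]) / 2 * C(p[m] / (p[m] + q[m]))))

def rank_features(Xpos, Xneg, C, bins=10):
    # Lowest risk (most discriminative feature) first
    risks = [histogram_min_risk(Xpos[:, j], Xneg[:, j], C, bins)
             for j in range(Xpos.shape[1])]
    return np.argsort(risks)
```

The greedy selector would then keep the top $5\%$ or $10\%$ of the returned ordering.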
The \emph{rank over selected features} was computed by averaging the ranks found by using both $5\%$ and $10\%$ of the highest ranked features. This process was repeated a total of $25$ times for each UCI data set and the \emph{over all average rank} was found by averaging
over the $25$ experiment runs. The \emph{over all average rank} found for each UCI data set and each $C^*_{\phi}$ is reported in Table-\ref{tab:OverAllFeatRanks}. The last two columns of this table are the total number of times each $C^*_{\phi}$ had the best rank over the ten data sets (\#W), and a ranking of the \emph{over all average rank} computed for each data set and then averaged across all data sets (Rank). It can be seen that $C^*_{Poly-4}$, which was designed to give the tightest bound on the Bayes error, has the largest number of wins ($4$) and the smallest Rank ($2.4$), while $C^*_{Exp}$, which has the loosest bound on the Bayes error, has no wins and the worst Rank ($3.75$). As expected, the Rank of each $C^*_{\phi}$ follows how tightly it approximates the Bayes error, with $C^*_{Poly-4}$, $C^*_{Poly-2}$ and $C^*_{LS}$ at the top and $C^*_{Log}$ and $C^*_{Exp}$ at the bottom. This is in accordance with the discussion of Section~\ref{sec:TighterBounds22}.
\begin{table}[tbp] \centering \caption{\protect\footnotesize{The \emph{over all average rank} for each UCI data set and each $C^*_{\phi}$. }} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c| }
\hline
$C^*_{\phi}$ & \#1&\#2&\#3&\#4&\#5&\#6&\#7&\#8&\#9&\#10 & \#W & Rank \\
\hline
$C^*_{Poly-4}$ & 2.9 & 2.75 & 3.62 & 2.77 & {\bf 2.86} & {\bf 2.39} & 3.55 & {\bf 2.87} & 3.25 & {\bf 2.86} & 4 & 2.4 \\
\hline
$C^*_{Poly-2}$ & {\bf 2.82} & 2.88 & 3.37 & {\bf 2.62} & 2.87 & 2.74 & 3.27 & 2.9 & 3.46 & 2.98 & 2 & 2.7 \\
\hline
$C^*_{LS}$ & 3.02 & {\bf 2.73} & 2.92 & 3.03 & 2.9 & 3.17 & {\bf 2.65} & 2.96 & 2.97 & 3.16 & 2 & 2.8 \\
\hline
$C^*_{Log}$ & 3.16 & 3.32 & {\bf 2.5} & 3.44 & 3.0 & 3.2 & 2.78 & 3.08 & {\bf 2.62} & 2.88 & 2 & 3.35 \\
\hline
$C^*_{Exp}$ & 3.1 & 3.32 & 2.59 & 3.14 & 3.37 & 3.5 & 2.75 & 3.19 & 2.7 & 3.12 & 0 & 3.75 \\
\hline \end{tabular} } \label{tab:OverAllFeatRanks} \end{table}
\subsection{ Kernel Hypothesis Testing Experiments } \label{sec:HypoTests}
The first set of experiments consisted of hypothesis tests on Gaussian samples. Specifically, two hypothesis tests were considered. In the first test, we used $250$ samples from each of the $25$-dimensional Gaussian distributions ${\mathcal N}(0,1.5I)$ and ${\mathcal N}(0,1.7I)$. Note that the means are equal and the variances are slightly different. In the second test, we used $250$ samples from each of the $25$-dimensional Gaussian distributions ${\mathcal N}(0,1.5I)$ and ${\mathcal N}(0.1,1.5I)$. Note that the variances are equal and the means are slightly different. In both cases the reject thresholds were found from $100$ bootstrap iterations for a fixed type-I error of $\alpha=0.05$. We used the Gaussian kernel embedding for all experiments and the kernel parameter was found using the median heuristic of \cite{TwoMMD}. Also, the Kernel Divergence (KD) used $C^*_{Poly-4}$ of equation (\ref{eq:CPoly4}) and one-dimensional Gaussian distribution models. Unlike the classification problem described in the previous section, a tight estimate of the Bayes error is not important for hypothesis testing, so the actual concave $C$ function used is not crucial.
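The threshold calibration can be sketched generically as follows (our own sketch of one standard bootstrap scheme; the paper's exact resampling procedure may differ): resample from the pooled data, which simulates the null $P=Q$, and take the $(1-\alpha)$ quantile of the recomputed statistic.

```python
import numpy as np

def bootstrap_threshold(stat_fn, X, Y, alpha=0.05, n_boot=100, rng=None):
    """Resample from the pooled sample (the null P = Q), recompute the
    statistic, and return its (1 - alpha) quantile as the reject threshold."""
    if rng is None:
        rng = np.random.default_rng()
    Z = np.vstack([X, Y])
    n1, n = len(X), len(X) + len(Y)
    stats = []
    for _ in range(n_boot):
        Zb = Z[rng.integers(0, n, size=n)]  # resample with replacement
        stats.append(stat_fn(Zb[:n1], Zb[n1:]))
    return float(np.quantile(stats, 1.0 - alpha))
```

Here `stat_fn` would be the MMD, KD or BKD statistic; the test rejects when the statistic on the original two samples exceeds the returned threshold.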
The type-II error test results for $100$ repetitions are reported in Table \ref{tab:GaussMMDHypoTest} for the MMD, BKD and KD methods, where $\Phi({\bf w})=\frac{({{\bm \mu}_P}-{{\bm \mu}_Q})}{||({{\bm \mu}_P}-{{\bm \mu}_Q})||}$, along with the combined method described in Section \ref{sec:OneClass} where a one-class nearest neighbor classifier with infinity norm is learned for [MMD, KD] and [MMD, BKD]. These results are typical and in general (a) the KD and BKD methods do better than the MMD when the means are equal and the variances are different, (b) the MMD does better than the KD and BKD when the variances are equal and the means are different and (c) the combined methods of [MMD, KD] and [MMD, BKD] do well in both cases. In practice we usually do not know which case we are dealing with, so the combined methods of [MMD, KD] and [MMD, BKD] are preferred.
\begin{table}[tbp] \centering \caption{\protect\footnotesize{Percentage of type-II error for the hypothesis tests on two types of Gaussian samples given $\alpha=0.05$. }}
\begin{tabular}{|c|c||c| }
\hline
Method & $\sigma_1=1.5$, $\sigma_2=1.7$ & $\mu_1=0$, $\mu_2=0.1$ \\
& $\mu_1=\mu_2=0$ & $\sigma_1=\sigma_2=1.5$ \\
\hline
MMD & 46 & 25 \\
\hline
KD & 13 & 42 \\
\hline
BKD & {\bf 11} & 40 \\
\hline
[MMD, KD] & 13 & 25 \\
\hline
[MMD, BKD] & 12 & {\bf 24} \\
\hline \end{tabular} \label{tab:GaussMMDHypoTest} \end{table}
\subsubsection{Benchmark Gene Data Sets} Next we evaluated the proposed methods on a group of high-dimensional benchmark gene data sets.
The data sets are detailed in Table \ref{tab:GeneDataNUMS} and are challenging given their small sample size and high dimensionality.
The hypothesis testing involved splitting the positive samples in two and using the first half to learn the reject thresholds from $1000$ bootstrap iterations for a fixed type-I error of $\alpha=0.05$. We used the Gaussian kernel embedding for all experiments and the kernel parameter was found using the median heuristic of \cite{TwoMMD}. The Kernel Divergence (KD) used $C^*_{Poly-4}$ of equation (\ref{eq:CPoly4}) and one dimensional Gaussian distribution models. The type-II error test results for $1000$ repetitions are reported in Table \ref{tab:GeneDataResultsHypo} for the MMD, BKD, KD, [MMD, KD] and [MMD, BKD] methods. Also, three projection directions are considered namely, MEANS where $\Phi({\bf w})=\frac{({{\bm \mu}_P}-{{\bm \mu}_Q})}{||({{\bm \mu}_P}-{{\bm \mu}_Q})||}$,
FISHER where the $\Phi({\bf w})$ associated with the kernel Fisher linear discriminant is used, and SVM where the $\Phi({\bf w})$ associated with the kernel SVM is used.
We report, in the last two columns, the rank of each method among the five methods with the same projection direction under RANK1, and the overall rank among all fifteen methods under RANK2. Note that the first row of Table \ref{tab:GeneDataResultsHypo}, with the MMD distance measure and MEANS projection direction, is the only method previously proposed in the literature \cite{MMD}. We should also note that the KD with FISHER projection direction encountered numerical problems in the form of very small variance estimates, which resulted in poor performance.
Nevertheless, we can see that in general the KD and BKD methods, which incorporate more information regarding the projected distributions, outperform the MMD. Second, using more discriminant projection directions such as FISHER or SVM outperforms simply projecting onto MEANS. Finally, the [MMD, KD] and [MMD, BKD] methods, which combine the information provided by both the MMD and the KD or BKD, have the lowest ranks. Specifically, [MMD, KD] with the SVM projection direction has the overall lowest rank among all fifteen methods.
\begin{table}[tbp] \centering \caption{\protect\footnotesize{Gene data set details. }} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c| }
\hline
Number & Data Set & \#Positives & \#Negatives & \#Dimensions \\
\hline
\#1& Lung Cancer Women's Hospital & 31 & 150 & 12533 \\
\hline
\#2& Leukemia & 25 & 47 & 7129 \\
\hline
\#3& Lymphoma Harvard Outcome & 26 & 32 & 7129 \\
\hline
\#4& Lymphoma Harvard & 19 & 58 & 7129 \\
\hline
\#5& Central Nervous System Tumor & 21 & 39 & 7129 \\
\hline
\#6& Colon Tumor & 22 & 40 & 2000 \\
\hline
\#7& Breast Cancer ER & 25 & 24 & 7129 \\
\hline \end{tabular} } \label{tab:GeneDataNUMS} \end{table}
\begin{table}[tbp] \centering \caption{\protect\footnotesize{ Percentage of type-II error for the gene data sets given $\alpha=0.05$. RANK1 is the rank of each method among the five methods with the same projection direction and RANK2 is the overall rank among all fifteen methods.}} \resizebox{\textwidth}{!}{
\begin{tabular}{|c|c||c|c|c|c|c|c|c||c|c| }
\hline
Projection & Measure & \#7 & \#6 & \#5 & \#4 & \#3 & \#2 & \#1 & Rank1 & Rank2 \\
\hline
MEANS &MMD & 24.3 & 27.4 & 95 & 31.2 & 90.8 & 11.7 & 6.3 & 3.42 & 9.14\\
\hline
MEANS &KD & 9.8 & 58.5 & 83.8 & 53.1 & 79.2 & 64.7 & 7.7 & 3.71 & 10.14\\
\hline
MEANS &BKD & 12 & 56.9 & 83.4 & 52.5 & 79.3 & 58.0 & 3.7 & 3.14 & 9.14\\
\hline
MEANS &[MMD, KD] & 12.2 & 48 & 82.9 & 25.2 & 84.1 & 14.7 & 3.0 & 2.57 & 7.42\\
\hline
MEANS & [MMD, BKD] & 13.2 & 47.3 & 81.9 & 24.3 & 83.6 & 14.0 & 3.2 & {\bf 2.14} & 6.42\\
\hline
\hline
FISHER& MMD & 5.8 & 26.5 & 90.2 & 24.8 & 83.1 & 14.1 & 4.2 & {\bf 1.78} & 6.07\\
\hline
FISHER&KD & 100 & 100 & 100 & 100 & 100 & 100 & 100 & 5 & 15\\
\hline
FISHER&BKD & 30.6 & 52.4 & 82.4 & 66.0 & {\bf 64.0} & 73.3 & 22.6 & 3.14 & 10.28\\
\hline
FISHER&[MMD, KD] & 9.6 & 26.5 & 95.3 & 31.9 & 93.6 & 18.7 & 5.4 & 3.14 & 9.42\\
\hline
FISHER&[MMD, BKD] & 6.2 & {\bf 26.4} & 82.8 & 31.0 & 74.9 & 18.6 & 5.4 & 1.92 & 5.21\\
\hline
\hline
SVM& MMD & 22.8 & 29.9 & 95.1 & 26.2 & 89.2 & {\bf 10.0} & 2.1 & 3.85 & 8.00\\
\hline
SVM&KD & {\bf 4.0} & 48.2 & {\bf 81.3} & 33.4 & 82.4 & 41.1 & 1.0 & 3.28 & 6.42\\
\hline
SVM&BKD & 4.3 & 44.2 & 86.3 & 32.0 & 79.4 & 34.4 & 0.5 & 2.85 & 6.57\\
\hline
SVM&[MMD, KD] & 6.3 & 28.1 & 88.4 & {\bf 20.5} & 86.4 & 13.7 & {\bf 0.4} & {\bf 2.28} & {\bf 5.14}\\
\hline
SVM&[MMD, BKD] & 6.6 & 28.2 & 89.0 & {\bf 20.5} & 84.9 & 13.8 & {\bf 0.4} & 2.71 & 5.57\\
\hline \end{tabular} } \label{tab:GeneDataResultsHypo} \end{table}
\section{Conclusion} While we have concentrated on the hypothesis testing problem in the experiments section, we envision many different applications for the Kernel Score and Kernel Divergence. We showed that the MMD is a special case of the Kernel Score, so the Kernel Score can now be used in all other applications based on the MMD, such as integrating biological data, imitation learning, etc. We also showed that the Kernel Score is related to the minimum risk of the projected embedded distributions, and we showed how to derive tighter bounds on the Bayes error. Many applications that are based on risk minimization, bounds on the Bayes error or divergence measures, such as classification, regression, feature selection, estimation and information theory, can now use the Kernel Score and Kernel Divergence to their benefit. We presented the Kernel Score as a general formulation for a score function in the Reproducing Kernel Hilbert Space and considered when it has the important property of being strictly proper. The Kernel Score is thus also directly applicable to probability elicitation, forecasting, finance and meteorology, which rely on strictly proper scoring rules.
\begin{center} {\large\bf SUPPLEMENTARY MATERIAL} \end{center}
\section{Proof of Theorem \ref{thm:StrictDivProperty}} \label{app:KscoreStrict} If $P=Q$ then \begin{eqnarray} KD_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)=-\int P^p(z)C(\frac{1}{2})dz + \frac{1}{2} = -\frac{1}{2}\int P^p(z)dz + \frac{1}{2}= 0. \end{eqnarray} Next, we prove the converse. The proof is identical to Theorem 5 of \cite{MMD} up to the point where we must prove that if $KD_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)=0$ then ${\bm \mu}_P={\bm \mu}_Q$. To show this we write \begin{eqnarray} KD_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)=-\int \left( \frac{P^p(z) + Q^p(z)}{2} \right) C\left( \frac{P^p(z)}{P^p(z) + Q^p(z)} \right)dz +\frac{1}{2} =0
\end{eqnarray} or \begin{eqnarray} \int \left( \frac{P^p(z) + Q^p(z)}{2} \right) C\left( \frac{P^p(z)}{P^p(z) + Q^p(z)} \right)dz =\frac{1}{2}.
\end{eqnarray} Since $C(\eta)$ is concave and has a maximum value of $\frac{1}{2}$ at $\eta=\frac{1}{2}$ then the above equation can only hold if
\begin{eqnarray} C\left(\frac{P^p(z)}{P^p(z)+Q^p(z)}\right)=\frac{1}{2}, \end{eqnarray} which means that \begin{eqnarray} \frac{P^p(z)}{P^p(z)+Q^p(z)}=\frac{1}{2}, \end{eqnarray} and so \begin{eqnarray} P^p(z)=Q^p(z). \end{eqnarray} From this we conclude that their associated means must be equal, namely \begin{eqnarray} \mu^p_P=\mu^p_Q. \end{eqnarray} The above equation can be written as \begin{eqnarray} <{\bm \mu}_P, \Phi({\bf w})>_{\cal H}=<{\bm \mu}_Q, \Phi({\bf w})>_{\cal H} \end{eqnarray} or equivalently as \begin{eqnarray} <{\bm \mu}_P-{\bm \mu}_Q, \Phi({\bf w})>_{\cal H}=0. \end{eqnarray} Since $\Phi({\bf w})$ is not in the orthogonal complement of ${\bm \mu}_P-{\bm \mu}_Q$ it must be that \begin{eqnarray} {\bm \mu}_P={\bm \mu}_Q. \end{eqnarray} The rest of the proof is again identical to Theorem 5 of \cite{MMD} and the theorem is similarly proven.
To prove that the Kernel Score is strictly proper we note that if $P=Q$ then $KD_{C,k,{\cal F},{\Phi({\bf w})}}(Q,Q)=0$ and so $S_{C,k,{\cal F},{\Phi({\bf w})}}(Q,Q)=\frac{1}{2}$. This means that we need to show that $S_{C,k,{\cal F},{\Phi({\bf w})}}(Q,Q)=\frac{1}{2} \ge S_{C,k,{\cal F},{\Phi({\bf w})}}(P,Q)$. This readily follows since $C(\eta)$ is strictly concave with maximum at $C(\frac{1}{2})=\frac{1}{2}$.
\section{Proof of Lemma \ref{thm:validchoices}} \label{app:ValidVect}
$\Phi({\bf w})=\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}$ is not in the orthogonal complement of $M=\{{\bm \mu}_P - {\bm \mu}_Q\}$ since \begin{eqnarray}
<\frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}, {\bm \mu}_P - {\bm \mu}_Q>_{\cal H} \ne 0. \end{eqnarray}
The $\Phi({\bf w})$ equal to the kernel Fisher discriminant projection vector is not in the orthogonal complement of $M=\{{\bm \mu}_P - {\bm \mu}_Q\}$ because if it were then the kernel Fisher discriminant objective, which can be written as $\frac{(\mu^p_P - \mu^p_Q)^2}{(\sigma^p_P)^2 + (\sigma^p_Q)^2}$, would not be maximized and would instead be equal to zero.
The $\Phi({\bf w})$ equal to the kernel SVM projection vector is not in the orthogonal complement of $M=\{{\bm \mu}_P - {\bm \mu}_Q\}$ since the kernel SVM
is equivalent to the kernel Fisher discriminant computed on the set of support vectors ~\cite{SVDequalLDA}.
\section{Proof of Lemma \ref{lemma:AlternateMMD}} \label{app:MMDNewLook} We know that $\mu^p_P$ is the projection of ${\bm \mu}_P$ onto $\Phi({\bf w})$ so we can write \begin{eqnarray}
\mu^p_P=<{\bm \mu}_P, \Phi({\bf w})>_{\cal H}=<{\bm \mu}_P, \frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}>_{\cal H}
=\frac{<{\bm \mu}_P,{\bm \mu}_P>_{\cal H}-<{\bm \mu}_P,{\bm \mu}_Q>_{\cal H}}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}} \end{eqnarray} Similarly, \begin{eqnarray}
\mu^p_Q=<{\bm \mu}_Q, \Phi({\bf w})>_{\cal H}=<{\bm \mu}_Q, \frac{({\bm \mu}_P-{\bm \mu}_Q)}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}>_{\cal H}
=\frac{<{\bm \mu}_Q,{\bm \mu}_P>_{\cal H}-<{\bm \mu}_Q,{\bm \mu}_Q>_{\cal H}}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}. \end{eqnarray} Hence, \begin{eqnarray}
&&(\mu^p_P - \mu^p_Q)^2=\left(\frac{<{\bm \mu}_P,{\bm \mu}_P>_{\cal H}-2<{\bm \mu}_P,{\bm \mu}_Q>_{\cal H}+<{\bm \mu}_Q,{\bm \mu}_Q>_{\cal H}}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}} \right)^2 \\
&&= \left(\frac{<({\bm \mu}_P-{\bm \mu}_Q),({\bm \mu}_P-{\bm \mu}_Q)>_{\cal H}}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}\right)^2
= \left(\frac{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}^2}{||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}}\right)^2 = ||({\bm \mu}_P-{\bm \mu}_Q)||_{\cal H}^2. \end{eqnarray}
\section{Proof of Theorem \ref{Thm:KScoreMMDConnection}} \label{app:MMDKSCoreConnection} The result readily follows by setting $MMD_{\cal F}(P,Q)=(\mu^p_P - \mu^p_Q)^2$ and $\sigma^p_P = \sigma^p_Q$ into equation (\ref{eq:battarExp2}).
\section{Proof of Lemma \ref{Thm:Brier}} \label{app:Brier}
By adding and subtracting $\int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({ p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{ p^*}({\bf x})) \right] d{\bf x}$ and considering equation (\ref{equ:MinRiskInit}), the risk $R({\hat p^*})$ can be written as \begin{eqnarray}
R({\hat p^*}) &=& E_{{\bf X},Y}[\phi(y{\hat p^*}({\bf x}))]=\int_{\bf x} P_{{\bf X}}({\bf x}) \sum_y P_{{\bf Y}|{\bf X}}(y|{\bf x})\phi(y{\hat p^*}({\bf x})) d{\bf x} \\
&=& \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({\hat p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{\hat p^*}({\bf x})) \right] d{\bf x} \nonumber \\
&=& \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\left(\phi({\hat p^*}({\bf x}))-\phi({ p^*}({\bf x}))\right) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\left(\phi(-{\hat p^*}({\bf x}))-\phi(-{ p^*}({\bf x})) \right) \right] d{\bf x} \nonumber \\
&+& \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({ p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{ p^*}({\bf x})) \right] d{\bf x} \nonumber \\ &=& R_{Calibration} + R({p^*}). \end{eqnarray}
The first term denoted $R_{Calibration}$ is obviously zero if we have a perfectly calibrated predictor such that ${\hat p^*}({\bf x}) ={p^*}({\bf x})$
for all ${\bf x}$ and is thus a measure of calibration. Finally, using equation $\eta({\bf x})=P_{Y|{\bf X}}(1|{\bf x})={[f_\phi^*]}^{-1}(p^*({\bf x})) $ and Theorem \ref{Thm:HamedNuno}, the minimum risk term $R({p^*})$ can be written as \begin{eqnarray}
&&R({p^*}) = \int_{\bf x} P_{{\bf X}}({\bf x}) \left[ P_{{\bf Y}|{\bf X}}(1|{\bf x})\phi({ p^*}({\bf x})) + P_{{\bf Y}|{\bf X}}(-1|{\bf x})\phi(-{ p^*}({\bf x})) \right] d{\bf x} \\ &=& \int_{\bf x} P_{{\bf X}}({\bf x}) [ \eta({\bf x}) C_\phi^*(\eta({\bf x}) ) + \eta({\bf x})(1-\eta({\bf x})) [C_\phi^*]^\prime(\eta({\bf x})) \\ &+& (1-\eta({\bf x})) C_\phi^*((1-\eta({\bf x})) ) + (1-\eta({\bf x}))(\eta({\bf x})) [C_\phi^*]^\prime((1-\eta({\bf x}))) ] d{\bf x} \\ &=& \int_{\bf x} P_{{\bf X}}({\bf x}) [ \eta({\bf x}) C_\phi^*(\eta({\bf x}) ) + \eta({\bf x})(1-\eta({\bf x})) [C_\phi^*]^\prime\left(\eta({\bf x})\right) \\ &+& C_\phi^*(\eta({\bf x}) ) - \eta({\bf x}) C_\phi^*(\eta({\bf x}) ) - \eta({\bf x})(1-\eta({\bf x})) [C_\phi^*]^\prime\left(\eta({\bf x})\right) ] d{\bf x} \\ &=& \int_{\bf x} P_{{\bf X}}({\bf x}) C_\phi^*(\eta({\bf x}) ) d{\bf x} \\ &=& \int_{\bf x} P_{{\bf X}}({\bf x}) C_\phi^*({[f_\phi^*]}^{-1}(p^*({\bf x})) ) d{\bf x} \\
&=& \int_{\bf x} P_{{\bf X}}({\bf x}) C_\phi^*(P_{Y|{\bf X}}(1|{\bf x}) ) d{\bf x}.
\end{eqnarray}
\section{Proof of Theorem \ref{Thm:KDMinRiskConect}} \label{app:KscoreMinRisk} Assuming equal priors $P_Y(1)=P_Y(-1)=\frac{1}{2}$, \begin{eqnarray} \label{eq:EqualPriorPX}
P_{{\bf X}}({\bf x}) =\frac{P_{{\bf X}|Y}({\bf x}|1) + P_{{\bf X}|Y}({\bf x}|-1)}{2} \end{eqnarray} and \begin{eqnarray} \label{eq:P1givenx}
P_{Y|{\bf X}}(1|{\bf x})=\frac{P_{{\bf X}|Y}({\bf x}|1)}{P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)}. \end{eqnarray} We can now write the minimum risk as \begin{eqnarray} \label{eq:SrefUnder} R({p^*}) =
\int_{X} \left(\frac{P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)}{2}\right) C_\phi^*\left(\frac{P_{{\bf X}|Y}({\bf x}|1)}{P_{{\bf X}|Y}({\bf x}|1)+P_{{\bf X}|Y}({\bf x}|-1)}\right) d{\bf x} \end{eqnarray}
Equation (\ref{eq:KDMinRiskequality}) readily follows by setting $P_{{\bf X}|Y}({\bf x}|1)=P^p$ and $P_{{\bf X}|Y}({\bf x}|-1)=Q^p$, in which case $R({p^*})$ is $R^p({p^*})$.
\section{Proof of Theorem \ref{Thm:TightestBoundCR}} \label{app:Polyn} The symmetry requirement of $C_{\phi}^*(\eta)=C_{\phi}^*(1-\eta)$ results in a similar requirement on the second derivative ${C_{\phi}^*}''(\eta)={C_{\phi}^*}''(1-\eta)$ and concavity requires that the second derivative satisfy ${C_{\phi}^*}''(\eta)<0$. The symmetry and concavity constraints can both be satisfied by considering \begin{eqnarray} {C_{Poly-n}^*}''(\eta) \propto -(\eta(1-\eta))^n.
\end{eqnarray} From this we write \begin{eqnarray} {C_{Poly-n}^*}'(\eta) \propto \int-(\eta(1-\eta))^n d(\eta) + K_1 = Q(\eta)+K_1.
\end{eqnarray} Satisfying the constraint that ${C_{Poly-n}^*}'(\frac{1}{2})=0$, we find $K_1$ as \begin{eqnarray} K_1 = -Q(\frac{1}{2}). \end{eqnarray} Finally, $C_{Poly-n}^*(\eta)$ is \begin{eqnarray} C_{Poly-n}^*(\eta)=K_2(\int Q(\eta) d(\eta) +K_1\eta), \end{eqnarray} where \begin{eqnarray}
K_2=\frac{\frac{1}{2}}{\left. (\int Q(\eta) d(\eta) +K_1\eta)\right|_{\eta=\frac{1}{2}}} \end{eqnarray} is a scaling factor such that $C_{Poly-n}^*(\frac{1}{2})=\frac{1}{2}$. $C_{Poly-n}^*(\eta)$ meets all the requirements of Theorem \ref{thm:tightestC} so $C_{Poly-n}^*(\eta) \ge C_{0/1}^*(\eta)$ for all $\eta \in [0~1]$ and $R_{C_{Poly-n}^*} \ge R^*$.
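As an independent sanity check of this construction (not part of the proof), the double integration and the two normalizing constants can be reproduced symbolically; the helper below is ours.

```python
import sympy as sp

eta = sp.symbols('eta')

def C_poly(n):
    """Construct C*_{Poly-n} from C'' proportional to -(eta(1-eta))^n:
    integrate twice, choose K1 so that C'(1/2) = 0, and scale by K2 so
    that C(1/2) = 1/2, exactly as in the text."""
    Q = sp.integrate(-(eta * (1 - eta)) ** n, eta)
    K1 = -Q.subs(eta, sp.Rational(1, 2))
    body = sp.integrate(Q, eta) + K1 * eta
    K2 = sp.Rational(1, 2) / body.subs(eta, sp.Rational(1, 2))
    return sp.expand(K2 * body)
```

The resulting polynomials satisfy $C^*_{Poly-n}(0)=C^*_{Poly-n}(1)=0$, $C^*_{Poly-n}(\frac{1}{2})=\frac{1}{2}$, and the ordering $C^*_{Poly-n} \ge C^*_{Poly-(n+1)} \ge C^*_{0/1}$ claimed by the theorem can be spot-checked at rational points.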
Next, we need to prove that if we follow the above procedure for $n+1$ and find $C_{Poly-(n+1)}^*(\eta)$ then $R_{C_{Poly-n}^*} \ge R_{C_{Poly-(n+1)}^*}$. We accomplish this by showing that $C_{Poly-n}^*(\eta) \ge C_{Poly-(n+1)}^*(\eta)$. Without loss of generality, let \begin{eqnarray} {C_{Poly-n}^*}''(\eta) = -(\eta(1-\eta))^n \end{eqnarray}
and \begin{eqnarray} {C_{Poly-(n+1)}^*}''(\eta) = -(\eta(1-\eta))^{n+1} \end{eqnarray} then ${C_{Poly-n}^*}''(\eta) \le {C_{Poly-(n+1)}^*}''(\eta)$ since $\eta \in [0~1]$. Also, since ${C_{Poly-n}^*}''(\eta)<0$ and ${C_{Poly-n}^*}''(\eta)={C_{Poly-n}^*}''(1-\eta)$ then ${C_{Poly-n}^*}'(\eta)$ is a monotonically decreasing function and ${C_{Poly-n}^*}'(\eta)=-{C_{Poly-n}^*}'(1-\eta)$ and so ${C_{Poly-n}^*}'(\frac{1}{2})=0$. From the mean value theorem \begin{eqnarray} {C_{Poly-n}^*}''(c_1)={C_{Poly-n}^*}'(\frac{1}{2})-{C_{Poly-n}^*}'(\eta)=-{C_{Poly-n}^*}'(\eta) \end{eqnarray} and \begin{eqnarray} {C_{Poly-(n+1)}^*}''(c_2)={C_{Poly-(n+1)}^*}'(\frac{1}{2})-{C_{Poly-(n+1)}^*}'(\eta)=-{C_{Poly-(n+1)}^*}'(\eta) \end{eqnarray} for any $0 \le \eta \le \frac{1}{2}$ and some $0 \le c_1 \le \frac{1}{2}$ and $0 \le c_2 \le \frac{1}{2}$. Since ${C_{Poly-n}^*}''(\eta) \le {C_{Poly-(n+1)}^*}''(\eta)$ for all $\eta \in [0~1]$ then ${C_{Poly-n}^*}''(c_1) \le {C_{Poly-(n+1)}^*}''(c_2)$ and so \begin{eqnarray} {C_{Poly-n}^*}'(\eta) \ge {C_{Poly-(n+1)}^*}'(\eta) \end{eqnarray} for all $0 \le \eta \le \frac{1}{2}$. A similar argument leads to \begin{eqnarray} {C_{Poly-n}^*}'(\eta) \le {C_{Poly-(n+1)}^*}'(\eta) \end{eqnarray} for all $\frac{1}{2} \le \eta \le 1$.
Since ${C_{Poly-n}^*}'(\frac{1}{2})=0$ and ${C_{Poly-n}^*}''(\eta) \le 0$ then $C_{Poly-n}^*(\eta)$ has a maximum at $\eta=\frac{1}{2}$. Also, since $C_{Poly-n}^*(\eta)$ is a polynomial of $\eta$ with no constant term, then $C_{Poly-n}^*(0)=0$ and because of symmetry $C_{Poly-n}^*(1)=0$. From the mean value theorem \begin{eqnarray} {C_{Poly-n}^*}'(c_1)=C_{Poly-n}^*(\eta)-C_{Poly-n}^*(0)=C_{Poly-n}^*(\eta) \end{eqnarray} and \begin{eqnarray} {C_{Poly-(n+1)}^*}'(c_2)=C_{Poly-(n+1)}^*(\eta)-C_{Poly-(n+1)}^*(0)=C_{Poly-(n+1)}^*(\eta) \end{eqnarray} for any $0 \le \eta \le \frac{1}{2}$ and some $0 \le c_1 \le \frac{1}{2}$ and $0 \le c_2 \le \frac{1}{2}$. Since ${C_{Poly-n}^*}'(\eta) \ge {C_{Poly-(n+1)}^*}'(\eta)$ for all $0 \le \eta \le \frac{1}{2}$ then ${C_{Poly-n}^*}'(c_1) \ge {C_{Poly-(n+1)}^*}'(c_2)$ and so \begin{eqnarray} C_{Poly-n}^*(\eta) \ge C_{Poly-(n+1)}^*(\eta) \end{eqnarray} for all $0 \le \eta \le \frac{1}{2}$. A similar argument leads to \begin{eqnarray} C_{Poly-n}^*(\eta) \ge C_{Poly-(n+1)}^*(\eta) \end{eqnarray} for all $\frac{1}{2} \le \eta \le 1$. Finally, since $C_{Poly-n}^*(\eta)$ and $C_{Poly-(n+1)}^*(\eta)$ are concave functions with maximum at $\eta=\frac{1}{2}$, scaling these functions by $K_2$ and $K_2'$ respectively, so that their maximum is equal to $\frac{1}{2}$ will not change the final result of \begin{eqnarray} C_{Poly-n}^*(\eta) \ge C_{Poly-(n+1)}^*(\eta) \end{eqnarray} for all $0 \le \eta \le 1$.
Finally, to show that $R_{C_{Poly-n}^*}$ converges to $R^*$ we need to show that $C_{Poly-n}^*(\eta)$ converges to $C_{0/1}^*(\eta)=\min\{\eta, 1-\eta\}$ as $n \rightarrow \infty$. We can expand $\int Q(\eta) d(\eta)$
and write $C_{Poly-n}^*(\eta)$ as \begin{eqnarray} C_{Poly-n}^*(\eta)=K_2(a_1\eta^{(2n+2)} + a_2\eta^{(2n+2)-1} + a_3\eta^{(2n+2)-2} ... + a_{n+1}\eta^{(2n+2)-n} + K_1\eta). \end{eqnarray} Assuming that $0 \le \eta \le \frac{1}{2}$ then \begin{eqnarray} \lim_{n\rightarrow \infty} C_{Poly-n}^*(\eta) = K^{\top}_2( 0 + K_1^{\top}\eta) = K^{\top}_2 K_1^{\top} \eta, \end{eqnarray} where $K_1^{\top} =\lim_{n\rightarrow \infty} K_1$ and $K_2^{\top}=\lim_{n\rightarrow \infty} K_2$. Since \begin{eqnarray}
K_1K_2=\frac{-\frac{1}{2}Q(\frac{1}{2})}{(\int Q(\eta)d(\eta) - Q(\frac{1}{2})\eta)|_{\eta=\frac{1}{2}}} \end{eqnarray} then \begin{eqnarray} K_1^{\top}K_2^{\top}= \frac{-\frac{1}{2}Q(\frac{1}{2})}{( 0 - \frac{1}{2}Q(\frac{1}{2}) )} = 1. \end{eqnarray} So, we can write \begin{eqnarray} \lim_{n\rightarrow \infty} C_{Poly-n}^*(\eta) = K^{\top}_2 K_1^{\top} \eta = \eta . \end{eqnarray} A similar argument for $\frac{1}{2} \le \eta \le 1$ completes the convergence proof.
\end{document}
Rs. 559 is divided among $a$, $b$ and $c$ in such a way that $2 \times$ part of $a = 3 \times$ part of $b = 4 \times$ part of $c$; find the part of $c$.
(a) Rs. 129
(b) Rs. 559
(c) Rs. 42
(d) Rs. 43
Let $2a = 3b = 4c = k$. $\therefore\ a=\frac{k}{2},\ b=\frac{k}{3}$ and $c=\frac{k}{4}$.
The L.C.M. of $2, 3$ and $4$ is $12$, so multiplying each part by $\frac{12}{k}$ gives
$a : b : c = \frac{12}{2} : \frac{12}{3} : \frac{12}{4} = 6 : 4 : 3$.
According to the question, let the parts be $6x$, $4x$ and $3x$. Then
$a+b+c = 6x+4x+3x = 559 \Rightarrow 13x = 559 \Rightarrow x = \frac{559}{13} = 43$.
$\therefore$ Part of $c = 3x = 3 \times 43 =$ Rs. $129$. Hence the correct option is (a).
Trick :
The ratio $a : b : c = 3 \times 4 : 4 \times 2 : 2 \times 3 = 6 : 4 : 3$.
$\therefore$ The part of $c = \frac{3 \times 559}{6+4+3} =$ Rs. $129$.
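The trick generalizes: when the weighted parts $w_i \times$ part$_i$ are all equal, the parts are proportional to $1/w_i$. A quick computational check (illustrative helper, names are ours):

```python
from fractions import Fraction

def divide_inverse_ratio(total, weights):
    """Split `total` so that weights[i] * part[i] is the same for every i,
    i.e. the parts are proportional to 1 / weights[i]."""
    inv = [Fraction(1, w) for w in weights]
    s = sum(inv)
    return [total * f / s for f in inv]

# Rs. 559 with 2a = 3b = 4c gives parts in the ratio 1/2 : 1/3 : 1/4 = 6 : 4 : 3
parts = divide_inverse_ratio(559, [2, 3, 4])
```

This recovers $a=258$, $b=172$, $c=129$, confirming option (a).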
Gitesh kumar Garg Answered Feb 26, 2022 Edited Feb 26, 2022 by Gitesh kumar Garg
A bag contains one-rupee, fifty-paise and twenty-five-paise coins in the proportion $2 : 6 : 8$. If the total amount is Rs. 210, find the number of coins of each kind.
Bittu Asked in Mathematics Dec 12, 2022
If in a $\triangle A B C, B$ is the orthocenter and if circumcenter of $\triangle A B C$ is $(2,4)$ and vertex $A$ is $(0,0)$ then coordinate of vertex $C$ is
straight-line
Find point $P$ on $x$-axis such that $(A P+P B)$ is minimum where $A(1,1) \& B(3,4)$
In a mixture, the ratio of milk and water is $8: 7$. If 11 litres of water is mixed, then the ratio becomes $8: 9$. Find the amount of milk in the initial mixture.
The ratio of milk and water in a 70 $\mathrm{kg}$ mixture is $3: 2$. If $14 \mathrm{~kg}$ of water is mixed into the mixture, find the new ratio of milk and water.
When 5 is subtracted from a number and the remainder is divided by 4, the result is 3. Find the number.
The ratio of the ages of Sachin and Amit is $7: 4$ and the sum of their ages is 33 years, then find the ratio of their ages before 6 years.
The length, breadth and height of a room are $5\ \mathrm{m}$, $4\ \mathrm{m}$ and $3\ \mathrm{m}$ respectively. Find the cost of white-washing the walls of the room and the ceiling at the rate of Rs 7.50 per $\mathrm{m}^2$.
surface-area
Parwez (V.P.) has some cocks and goats. If the total numbers of heads and feet are 76 and 212 respectively, then find the number of cocks he has.
A compound contains equal masses of the elements $A$, $B$ and $C$. If the atomic weights of $A$, $B$ and $C$ are 20, 40 and 60 respectively, then the empirical formula of the compound is
Zariski tangent space
In algebraic geometry, the Zariski tangent space is a construction that defines a tangent space at a point P on an algebraic variety V (and more generally). It does not use differential calculus, being based directly on abstract algebra, and in the most concrete cases just the theory of a system of linear equations.
Motivation
For example, suppose given a plane curve C defined by a polynomial equation
F(X,Y) = 0
and take P to be the origin (0,0). Erasing terms of higher order than 1 would produce a 'linearised' equation reading
L(X,Y) = 0
in which all terms $X^aY^b$ with $a + b > 1$ have been discarded.
We have two cases: L may be 0, or it may be the equation of a line. In the first case the (Zariski) tangent space to C at (0,0) is the whole plane, considered as a two-dimensional affine space. In the second case, the tangent space is that line, considered as affine space. (The question of the origin comes up, when we take P as a general point on C; it is better to say 'affine space' and then note that P is a natural origin, rather than insist directly that it is a vector space.)
It is easy to see that over the real field we can obtain L in terms of the first partial derivatives of F. When those both are 0 at P, we have a singular point (double point, cusp or something more complicated). The general definition is that singular points of C are the cases when the tangent space has dimension 2.
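The two cases above can be detected mechanically for polynomial plane curves: keep only the degree-one part of $F$ and check whether it vanishes. The following Python sketch (our own illustration — polynomials are represented as exponent-to-coefficient dictionaries, a choice made here for simplicity) computes the dimension of the Zariski tangent space at the origin:

```python
def zariski_tangent_dim(F):
    """Dimension of the Zariski tangent space at (0,0) of the curve
    F(X, Y) = 0, where F is a dict {(a, b): coeff} for terms X^a Y^b
    (F is assumed to vanish at the origin, i.e. no constant term).
    Returns 1 if the linearised part L cuts out a line, 2 if L vanishes."""
    L = {e: c for e, c in F.items() if sum(e) == 1 and c != 0}
    return 1 if L else 2

smooth = {(0, 1): 1, (2, 0): -1}               # Y - X^2 = 0, smooth at origin
cusp   = {(0, 2): 1, (3, 0): -1}               # Y^2 - X^3 = 0, cusp at origin
node   = {(0, 2): 1, (3, 0): -1, (2, 0): -1}   # Y^2 - X^3 - X^2 = 0, node
print(zariski_tangent_dim(smooth), zariski_tangent_dim(cusp), zariski_tangent_dim(node))
# prints: 1 2 2
```

The cusp and the node are both singular: their linearised equations vanish identically, so the tangent space is the whole plane.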
Definition
The cotangent space of a local ring R, with maximal ideal ${\mathfrak {m}}$ is defined to be
${\mathfrak {m}}/{\mathfrak {m}}^{2}$
where ${\mathfrak {m}}^{2}$ is given by the product of ideals. It is a vector space over the residue field k := R/${\mathfrak {m}}$. Its dual (as a k-vector space) is called the tangent space of R.[1]
This definition is a generalization of the above example to higher dimensions: suppose given an affine algebraic variety V and a point v of V. Morally, modding out by ${\mathfrak {m}}^{2}$ corresponds to dropping the non-linear terms from the equations defining V inside some affine space, therefore giving a system of linear equations that define the tangent space.
The tangent space $T_{P}(X)$ and cotangent space $T_{P}^{*}(X)$ to a scheme X at a point P is the (co)tangent space of ${\mathcal {O}}_{X,P}$. Due to the functoriality of Spec, the natural quotient map $f:R\rightarrow R/I$ induces a homomorphism $g:{\mathcal {O}}_{X,f^{-1}(P)}\rightarrow {\mathcal {O}}_{Y,P}$ for X=Spec(R), P a point in Y=Spec(R/I). This is used to embed $T_{P}(Y)$ in $T_{f^{-1}P}(X)$.[2] Since morphisms of fields are injective, the surjection of the residue fields induced by g is an isomorphism. Then a morphism k of the cotangent spaces is induced by g, given by
${\mathfrak {m}}_{P}/{\mathfrak {m}}_{P}^{2}$
$\cong ({\mathfrak {m}}_{f^{-1}P}/I)/(({\mathfrak {m}}_{f^{-1}P}^{2}+I)/I)$
$\cong {\mathfrak {m}}_{f^{-1}P}/({\mathfrak {m}}_{f^{-1}P}^{2}+I)$
$\cong ({\mathfrak {m}}_{f^{-1}P}/{\mathfrak {m}}_{f^{-1}P}^{2})/\mathrm {Ker} (k).$
Since this is a surjection, the transpose $k^{*}:T_{P}(Y)\rightarrow T_{f^{-1}P}(X)$ is an injection.
(One often defines the tangent and cotangent spaces for a manifold in the analogous manner.)
Analytic functions
If V is a subvariety of an n-dimensional vector space, defined by an ideal I, then $R = F_n/I$, where $F_n$ is the ring of smooth/analytic/holomorphic functions on this vector space. The Zariski tangent space at x is
$m_n/(I + m_n^2)$,
where $m_n$ is the maximal ideal consisting of those functions in $F_n$ vanishing at x.
In the planar example above, $I = (F(X,Y))$, and $I + m^2 = (L(X,Y)) + m^2$.
Properties
If R is a Noetherian local ring, the dimension of the tangent space is at least the dimension of R:
dim m/m² ≥ dim R
R is called regular if equality holds. In a more geometric parlance, when R is the local ring of a variety V at a point v, one also says that v is a regular point. Otherwise it is called a singular point.
The tangent space has an interpretation in terms of K[t]/(t2), the dual numbers for K; in the parlance of schemes, morphisms from Spec K[t]/(t2) to a scheme X over K correspond to a choice of a rational point x ∈ X(k) and an element of the tangent space at x.[3] Therefore, one also talks about tangent vectors. See also: tangent space to a functor.
In general, the dimension of the Zariski tangent space can be extremely large. For example, let $C^{1}(\mathbf {R} )$ be the ring of continuously differentiable real-valued functions on $\mathbf {R} $. Define $R=C_{0}^{1}(\mathbf {R} )$ to be the ring of germs of such functions at the origin. Then R is a local ring, and its maximal ideal m consists of all germs which vanish at the origin. The functions $x^{\alpha }$ for $\alpha \in (1,2)$ define linearly independent vectors in the Zariski cotangent space $m/m^{2}$, so the dimension of $m/m^{2}$ is at least ${\mathfrak {c}}$, the cardinality of the continuum. The dimension of the Zariski tangent space $(m/m^{2})^{*}$ is therefore at least $2^{\mathfrak {c}}$. On the other hand, the ring of germs of smooth functions at a point in an n-manifold has an n-dimensional Zariski cotangent space.[lower-alpha 1]
See also
• Tangent cone
• Jet (mathematics)
Notes
1. https://mathoverflow.net/questions/44705/cardinalities-larger-than-the-continuum-in-areas-besides-set-theory/44733#44733
Citations
1. Eisenbud & Harris 1998, I.2.2, pg. 26.
2. Smoothness and the Zariski Tangent Space, James McKernan, 18.726 Spring 2011 Lecture 5
3. Hartshorne 1977, Exercise II 2.8.
Sources
• Eisenbud, David; Harris, Joe (1998). The Geometry of Schemes. Springer-Verlag. ISBN 0-387-98637-5 – via Internet Archive.
• Hartshorne, Robin (1977). Algebraic Geometry. Graduate Texts in Mathematics. Vol. 52. New York: Springer-Verlag. ISBN 978-0-387-90244-9. MR 0463157.
• Zariski, Oscar (1947). "The concept of a simple point of an abstract algebraic variety". Transactions of the American Mathematical Society. 62: 1–52. doi:10.1090/S0002-9947-1947-0021694-1. MR 0021694. Zbl 0031.26101.
External links
• Zariski tangent space. V.I. Danilov (originator), Encyclopedia of Mathematics.
\begin{document}
\begin{abstract} Let $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ denote the class of functions $f$ analytic in the open unit disc $\Delta$, normalized by the condition $f(0)=0=f'(0)-1$ and satisfying the following two--sided inequality: \begin{equation*}
-\frac{\pi\alpha_1}{2}< \arg\left\{\frac{zf'(z)}{f(z)}\right\}
<\frac{\pi\alpha_2}{2} \quad (z\in\Delta), \end{equation*} where $0<\alpha_1,\alpha_2\leq1$. The class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ is a subclass of strongly starlike functions of order $\beta$ where $\beta=\max\{\alpha_1,\alpha_2\}$. The object of the present paper is to derive certain inequalities including, for example, upper and lower bounds for ${\rm Re}\{zf'(z)/f(z)\}$, a growth theorem, logarithmic coefficient estimates and coefficient estimates for functions $f$ belonging to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$. \end{abstract}
\author[R. Kargar, J. Sok\'{o}{\l} and H. Mahzoon]
{R. Kargar, J. Sok\'{o}{\l} and H. Mahzoon }
\address{Young Researchers and Elite Club, Ardabil Branch, IAU, Ardabil, Iran}
\email {[email protected] {\rm (R. Kargar)}}
\address{ University of Rzesz\'{o}w, Faculty of Mathematics and Natural
Sciences, ul. Prof. Pigonia 1, 35-310 Rzesz\'{o}w, Poland}
\email{[email protected] {\rm (J. Sok\'{o}{\l})}}
\address{Department of Mathematics, Firoozkouh Branch, IAU, Firoozkouh, Iran} \email {[email protected] {\rm (H. Mahzoon)}}
\maketitle
\section{Introduction}
Let $\mathcal{H}$ be the class of functions $f$ which are analytic in the open unit disk $\Delta=\{z\in \mathbb{C} : |z|<1\}$. Also, let $\mathcal{A}\subset \mathcal{H}$ denote the class of functions $f$ of the form \begin{equation}\label{f}
f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\quad(z\in\Delta), \end{equation} which are normalized by the condition $f(0)=0=f'(0)-1$ in $\Delta$. The subclass of $\mathcal{A}$ consisting of all univalent functions $f(z)$ in $\Delta$ will be denoted by $\mathcal{S}$. We say that a function $f\in \mathcal{S}$ is starlike of order $\alpha$, where $0\leq \alpha<1$ if, and only if, \begin{equation*}
{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}>\alpha\quad (z\in \Delta). \end{equation*} We denote by $\mathcal{S}^*(\alpha)$ the class of starlike functions of order $\alpha$. The class $\mathcal{S}^*(\alpha)$ was introduced by Robertson (see \cite{ROB}). Also, we say that a function $f\in \mathcal{S}$ is strongly starlike of order $\beta$, where $0<\beta\leq 1$ if, and only if, \begin{equation*}
\left|\arg\left\{\frac{zf'(z)}{f(z)}\right\}\right|<\frac{\pi \beta}{2}\quad (z\in \Delta). \end{equation*} The class of strongly starlike functions of order $\beta$ is denoted by $\mathcal{SS}^*(\beta)$. The class $\mathcal{SS}^*(\beta)$ was introduced independently by Stankiewicz (see \cite{Stan1}, \cite{Stan2}) and by Brannan and Kirwan (see \cite{Brannan}). We remark that $\mathcal{SS}^*(1)\equiv \mathcal{S}^*(0)=\mathcal{S}^*$, where $\mathcal{S}^*$ denotes the class of starlike functions.
Let $f(z)$ and $g(z)$ be two analytic functions in $\Delta$. Then the function $f(z)$ is said to be subordinate to $g(z)$ in $\Delta$, written by
$f(z)\prec g(z)$ or $f\prec g$, if there exists an analytic function $w(z)$ in $\Delta$ with $w(0)=0$ and $|w(z)|<1$, and such that $f(z)=g(w(z))$ for all $z\in\Delta$.
In the sequel, we consider the analytic function $G(z):=G(\alpha_1,\alpha_2,c)(z)$ as follows \begin{equation}\label{g}
G(\alpha_1,\alpha_2,c)(z)
:=\left(\frac{1+cz}{1-z}\right)^{(\alpha_1+\alpha_2)/2}\quad
(G(0)=1, z\in \Delta), \end{equation} where $0<\alpha_1, \alpha_2\leq 1$, $c=e^{\pi i\theta}$ and $\theta=\frac{\alpha_2-\alpha_1}{\alpha_2+\alpha_1}$. Also, we consider the set $\Omega_{\alpha_1,\alpha_2}$ as follows \begin{equation}\label{omega}
\Omega_{\alpha_1,\alpha_2}:=\left\{w\in\mathbb{C} :
-\frac{\pi\alpha_1}{2}< \arg\{w\} <
\frac{\pi\alpha_2}{2}\right\}. \end{equation}
We note that the function $G(z)$ is convex univalent in $\Delta$ and maps $\Delta$ onto $\Omega_{\alpha_1,\alpha_2}$ (see \cite{DJ}). Since \begin{align*}
\left(\frac{1+cz}{1-z}\right)^{(\alpha_1+\alpha_2)/2}
&=\left(1+\frac{(1+c)z}{1-z}\right)^{(\alpha_1+\alpha_2)/2}\\
&=1+\sum_{k=1}^{\infty}\binom{(\alpha_1+\alpha_2)/2}{k}(1+c)^k \left(\frac{z}{1-z}\right)^k\quad(z\in\Delta), \end{align*} using the binomial formula, we obtain \begin{equation}\label{binomial}
G(z)=1+\sum_{n=1}^{\infty}\lambda_n z^n\quad(z\in\Delta), \end{equation} where \begin{equation}\label{lambda n}
\lambda_n:=\lambda_n(\alpha_1,\alpha_2,c)=\sum_{k=1}^{n}
\binom{n-1}{k-1}\binom{(\alpha_1+\alpha_2)/2}{k}(1+c)^k
\quad (n\geq1). \end{equation} We note that $\lambda_n$ may be conveniently written in the form \begin{equation*}
\lambda_n=\frac{(\alpha_1+\alpha_2)(1+c)}{2} {_2F_1}(1-n,1-(\alpha_1+\alpha_2)/2;2;1+c)\quad(n\geq1), \end{equation*} where the notation $_2F_1$ stands for the Gauss hypergeometric function.
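As a quick numerical sanity check (our own sketch, not part of the paper), one can compare the closed formula \eqref{lambda n} with the Taylor coefficients of $G$ computed directly as a Cauchy product of the binomial series of $(1+cz)^{(\alpha_1+\alpha_2)/2}$ and $(1-z)^{-(\alpha_1+\alpha_2)/2}$:

```python
import cmath
import math

def genbinom(p, k):
    """Generalized binomial coefficient C(p, k) for real p."""
    out = 1.0
    for j in range(k):
        out *= (p - j) / (j + 1)
    return out

def lambda_closed(n, a1, a2):
    """lambda_n via the closed formula: sum_k C(n-1,k-1) C(p,k) (1+c)^k."""
    p = (a1 + a2) / 2
    c = cmath.exp(1j * math.pi * (a2 - a1) / (a2 + a1))
    return sum(math.comb(n - 1, k - 1) * genbinom(p, k) * (1 + c) ** k
               for k in range(1, n + 1))

def lambda_series(n, a1, a2):
    """n-th Taylor coefficient of ((1+cz)/(1-z))^p via a Cauchy product."""
    p = (a1 + a2) / 2
    c = cmath.exp(1j * math.pi * (a2 - a1) / (a2 + a1))
    u = [genbinom(p, k) * c ** k for k in range(n + 1)]       # (1+cz)^p
    v = [genbinom(-p, k) * (-1) ** k for k in range(n + 1)]   # (1-z)^{-p}
    return sum(u[k] * v[n - k] for k in range(n + 1))
```

For instance, with $\alpha_1=1/2$, $\alpha_2=1$ the two computations agree to machine precision, and $\lambda_1=(1+c)(\alpha_1+\alpha_2)/2$ as in the proof of Theorem \ref{th.logcoef}.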
The main purpose of this paper is to study the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ which is provided below.
\begin{definition} A function $f\in\mathcal{A}$ belongs to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$, if $f$ satisfies the following two--sided inequality \begin{equation}\label{DS}
-\frac{\pi\alpha_1}{2}< \arg\left\{\frac{zf'(z)}{f(z)}\right\}
<\frac{\pi\alpha_2}{2} \quad (z\in\Delta), \end{equation} where $0<\alpha_1, \alpha_2\leq 1$. \end{definition} The class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ was introduced by Takahashi and Nunokawa (see \cite{Takahashi}). We recall that a similar class was studied in \cite{[BC1]} and in \cite{[BC2]}. It is clear that $\mathcal{S}^*_t(\alpha_1,\alpha_2)\subset \mathcal{S}^*$ and that $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ is a subclass of the class of strongly starlike functions of order $\beta=\max\{\alpha_1, \alpha_2\}$, i.e. $\mathcal{S}^*_t(\alpha_1,\alpha_2)\subset\mathcal{S}^*_t(\beta,\beta)\equiv \mathcal{SS}^*(\beta)$.
In Geometric Function Theory there exist many subclasses of analytic functions which have been defined by the subordination relation, see for example \cite{kessiberian}, \cite{kescomplex}, \cite{kesBooth}, \cite{KO2011}, \cite{Ma1992} \cite{Men}, \cite{RaiSok2015}, \cite{Sok2011}, \cite{SokSta}. It is clear that defining a class by using the subordination makes it easy to investigate its geometric properties. Below, we present a necessary and sufficient condition for functions to belong to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$. In effect, we restate the definition of the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$ in terms of the subordination.
\begin{lemma}\label{l11} Let $f(z)\in \mathcal{A}$. Then $f\in\mathcal{S}^*_t(\alpha_1,\alpha_2)$ if, and only if, \begin{equation}\label{el11}
\frac{zf'(z)}{f(z)}\prec G(z)\quad
(z\in\Delta), \end{equation} where $G(z)$ is defined in \eqref{g}. \end{lemma} \begin{proof} Let $G(z)$ be given by \eqref{g}. By \eqref{DS}, $ \left\{zf'(z)/f(z)\right\}$ lies in the domain $\Omega_{\alpha_1,\alpha_2}$, where $\Omega_{\alpha_1,\alpha_2}$ is defined in \eqref{omega} and it is known that $G(\Delta)=\Omega_{\alpha_1,\alpha_2}$. The function $G(z)$ is univalent in $\Delta$ and thus, by the subordination principle, we get \eqref{el11}. \end{proof}
For $f(z)=a_0+a_1z+a_2z^2+\cdots$ and $g(z)=b_0+b_1z+b_2z^2+\cdots$, their Hadamard product (or convolution) is defined by $ (f\ast g)(z)=a_0b_0+a_1b_1z+a_2b_2z^2+\cdots $. The convolution has the algebraic properties of ordinary multiplication. Many convolution problems were studied by St. Ruscheweyh in \cite{RU1}, and convolutions have found many applications in various fields. The following lemma will be useful in this paper.
\begin{lemma}\label{t0} {\rm (see \cite{RUST})} Let $F,H\in \mathcal H$ be any convex univalent functions in $\Delta$. If $f\prec F$ and $g(z)\prec H(z)$, then \begin{equation}\label{1t0}
f(z)*g(z)\prec F(z)*H(z) \quad(z\in\Delta). \end{equation} \end{lemma}
Next, we state another useful lemma (see \cite{Rog}).
\begin{lemma}\label{lem1.3}
Let $q(z)=\sum_{n=1}^{\infty}C_nz^n$ be analytic and univalent in $\Delta$, and suppose that $q(z)$ maps $\Delta$ onto a convex domain. If $p(z) = \sum_{n=1}^{\infty}A_nz^n$ is analytic in $\Delta$ and satisfies the following subordination \begin{equation*}
p(z)\prec q(z)\qquad (z\in\Delta), \end{equation*} then \begin{equation*}
|A_n|\leq |C_1|\qquad (n=1,2,3,\ldots). \end{equation*} \end{lemma}
The structure of this paper is the following. First, we find a lower bound and an upper bound for ${\rm Re}\{zf'(z)/f(z)\}$, where $f\in \mathcal S^*_t(\alpha_1,\alpha_2)$. Moreover, as a corollary we show that if $f$ is a strongly starlike function of order $\beta$, then the upper bound for ${\rm Re}\{zf'(z)/f(z)\}$ is equal to $2^\beta$ where $|z|\leq 1/3$. Next, we present some subordination relations which will be useful in order to estimate the logarithmic coefficients. At the end, we estimate the coefficients of $f\in \mathcal{S}^*_t(\alpha_1,\alpha_2)$ and show how the coefficient bounds are related to the well--known Bieberbach conjecture (see \cite{Bie}) proved by de Branges in 1985 (see \cite{Branges}).
\section{Some inequalities and subordination relations}\label{sec1}
We begin this section with the following.
\begin{theorem}\label{t21} Assume that $f\in\mathcal{A}$. If $f\in \mathcal S^*_t(\alpha_1,\alpha_2)$, then \begin{equation}\label{1t21}
{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}
\geq
\left(\frac{1-(2\cos\frac{\theta}{2}+1)r}{1-r}\right)^{(\alpha_1+\alpha_2)/2}\ \ \ 0\leq|z|=r\leq\frac{1}{2\cos\frac{\theta}{2}+1} \end{equation} and \begin{equation}\label{2t21}
0<{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}
\leq
\left(\frac{1+(2\cos\frac{\theta}{2}-1)r}{1-r}\right)^{(\alpha_1+\alpha_2)/2}\
\ \
0\leq|z|=r<1, \end{equation} where $0<\alpha_1,\alpha_2\leq 1$ and $\theta=\frac{\alpha_2-\alpha_1}{\alpha_2+\alpha_1}$. \end{theorem}
\begin{proof} Let the function $f$ be in the class $\mathcal S^*_t(\alpha_1,\alpha_2)$. Then by Lemma \ref{l11} and by the definition of subordination, there exists a Schwarz function $w(z)$, satisfying the following conditions \begin{equation*}
w(0)=0 \quad {\rm and}\quad |w(z)|<1 \quad (z\in\Delta) \end{equation*} and such that \begin{equation*}
\frac{zf'(z)}{f(z)}=\left(\frac{1+cw(z)}{1-w(z)}\right)^{(\alpha_1+\alpha_2)/2}\quad (z\in\Delta). \end{equation*} Define \begin{equation}\label{F}
F(z):=\frac{1+cw(z)}{1-w(z)}\quad (z\in\Delta). \end{equation} It is clear that ${\rm Re}\{F(z)\}>0 $ in the unit disk. We shall describe ${\rm Re}\{F(z)\}$ more precisely. From \eqref{F} we have \begin{equation*}
\left|F(z)-1\right|=\left|\frac{(1+c)w(z)}{1-w(z)}\right|\leq \frac{2|w(z)|\cos\frac{\theta}{2}}{1-|w(z)|}. \end{equation*}
For $|z|=r < 1$, using the known fact that (see \cite{PLD}) \begin{equation*}
|w(z)|\leq |z|, \end{equation*} this gives \begin{equation*}
\left|F(z)-1\right|\leq \frac{2r\cos\frac{\theta}{2}}{1-r}\quad
(|z|=r<1). \end{equation*}
Thus, for $|z| = r < 1$, $F(z)$ lies in the disk with center $C = 1$ and radius $R$ given by \begin{equation*}
R:=\frac{2r\cos\frac{\theta}{2}}{1-r}. \end{equation*}
We note that the origin is outside of this disk for $|z| < 1/(1+2\cos(\theta/2))$, and so we obtain \eqref{1t21} and \eqref{2t21}. This ends the proof of Theorem \ref{t21}. \end{proof}
Putting $\alpha_1=\alpha_2=\beta$ in Theorem \ref{t21}, we have:
\begin{corollary} Let $f$ be a strongly starlike function of order $\beta$, where $0<\beta\leq 1$. Then \begin{equation*} \left(\frac{1-3r}{1-r}\right)^\beta\leq{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}\leq
\left(\frac{1+r}{1-r}\right)^\beta\qquad (|z|=r\leq1/3). \end{equation*} In particular, if we let $r=1/3$, then
\begin{equation*} 0<{\rm Re}\left\{\frac{zf'(z)}{f(z)}\right\}<2^\beta. \end{equation*} \end{corollary}
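This two-sided bound is easy to probe numerically (a sketch of ours, not from the paper): for a function with $zf'(z)/f(z)=\left((1+z)/(1-z)\right)^\beta$, which is strongly starlike of order $\beta$, sampling the circle $|z|=1/3$ keeps the real part inside $(0,\,2^\beta]$:

```python
import cmath
import math

def q(z, beta):
    # zf'(z)/f(z) for a strongly starlike function of order beta
    # (principal branch; Re((1+z)/(1-z)) > 0 on the unit disk)
    return ((1 + z) / (1 - z)) ** beta

def check(beta, r=1 / 3, samples=360):
    """Min and max of Re(q) over the circle |z| = r."""
    vals = [q(r * cmath.exp(2j * math.pi * k / samples), beta).real
            for k in range(samples)]
    return min(vals), max(vals)

lo, hi = check(0.7)
print(lo > 0, hi <= 2 ** 0.7 + 1e-9)  # prints: True True
```

The maximum $2^\beta$ is attained at $z=1/3$, in agreement with the corollary.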
As a corollary, by \cite{Eeing} or \cite[Theorem 3.2a]{MM-book}, and by Theorem \ref{t21}, we obtain a sufficient condition for functions belonging to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$.
\begin{lemma}
If $f$ satisfies the following subordination
\begin{equation}\label{1+zf'' f'}
1+\frac{zf''(z)}{f'(z)}\prec G(z)\quad |z|\leq \frac{1}{1+2\cos \frac{\theta}{2}},
\end{equation}
then $f$ satisfies
\begin{equation*}
\frac{zf'(z)}{f(z)}\prec G(z)\quad |z|\leq \frac{1}{1+2\cos \frac{\theta}{2}},
\end{equation*}
where $G(z)$ is defined in \eqref{g}. \end{lemma} \begin{proof}
Denote
\begin{equation*}
p(z)=\frac{zf'(z)}{f(z)}\quad(z\in\Delta),
\end{equation*}
where $p$ is analytic and $p(0)=1$.
A simple calculation implies that
\begin{equation*}
p(z)+\frac{zp'(z)}{p(z)}=1+\frac{zf''(z)}{f'(z)}
\end{equation*}
and by \eqref{1+zf'' f'} we get
\begin{equation*}
p(z)+\frac{zp'(z)}{p(z)}\prec G(z).
\end{equation*}
Since ${\rm Re} (p(z))>0$ in $\Delta$ and $G(z)$ is convex, the desired result follows; see \cite{MM-book}. \end{proof}
In order to estimate $f$ (growth theorem) for $f\in\mathcal S^*_t(\alpha_1,\alpha_2)$, as well as the logarithmic coefficients of members of $\mathcal{S}^*_t(\alpha_1,\alpha_2)$, we need the following theorem. \begin{theorem}\label{t1} If $f\in\mathcal S^*_t(\alpha_1,\alpha_2)$, then \begin{equation}\label{0t1}
\log\left\{\frac{f(z)}{z}\right\}\prec\int_0^z \frac{G(t)-1}{t}{\rm d}t, \end{equation} where the function $G$ is convex univalent of the form \eqref{g}. Moreover, \begin{equation}\label{wide g}
\widetilde{G}(z)=\int_0^z \frac{G(t)-1}{t}{\rm d}t=\sum_{n=1}^{\infty}\frac{\lambda_n}{n}z^n, \end{equation} is convex univalent too, where $\lambda_n$ is defined in \eqref{lambda n}. \end{theorem}
\begin{proof} The subordination relation \eqref{el11} gives us \begin{equation}\label{2t1}
z\left(\log\left\{\frac{f(z)}{z}\right\}\right)'\prec G(z)-1, \end{equation} where $G(z)-1$ is convex univalent. For $x\geq0$ the function \begin{equation*}
\tilde{h}(x;z)=\sum\limits_{k=1}^\infty\frac{(1+x)z^k}{k+x} \end{equation*} is convex univalent in $\Delta$ (see \cite{RU2}). Since, for \begin{equation*}
\tilde{h}(0;z)=\sum\limits_{k=1}^\infty\frac{z^k}{k}, \end{equation*} we have by \eqref{1t0} \begin{equation}\label{3t1}
\left[g(z)\prec F(z)\right]\Rightarrow\left[g(z)*\tilde{h}(0;z)\prec F(z)*\tilde{h}(0;z)\right], \end{equation} whenever $F(z)$ is a convex univalent function. Because \begin{equation}\label{4t1}
g(z)*\tilde{h}(0;z)=\int_0^z \frac{g(t)}{t}{\rm d}t, \end{equation} then from \eqref{2t1}, \eqref{3t1} and from \eqref{4t1}, we have \begin{equation*}
\int_0^z \log\left\{f(t)\right\}'{\rm d}t\prec\int_0^z \frac{G(t)-1}{t}{\rm d}t, \end{equation*} this gives \eqref{0t1}. Moreover, \begin{equation*}
\widetilde{G}(z)=\int_0^z \frac{G(t)-1}{t}{\rm
d}t=\left\{G(z)-1\right\}*\tilde{h}(0;z), \end{equation*} where $G(z)-1$ and $\tilde{h}(0;z)$ are convex univalent functions. Since the class of convex univalent functions is preserved under the convolution (see \cite{RUSS}), we conclude that the function $\widetilde{G}(z)$ is convex univalent. This completes the proof. \end{proof}
Because $\widetilde{G}(z)$ is univalent, we may rewrite Theorem \ref{t1} as the following corollary.
\begin{corollary}\label{c1} If $f(z)\in\mathcal S^*_t(\alpha_1,\alpha_2)$, then \begin{eqnarray}\label{1c1}
\frac{f(z)}{z}&\prec&\exp\int_0^z \frac{G(t)-1}{t}{\rm d}t\nonumber\\
&=&\exp\widetilde{G}(z), \end{eqnarray} where $G$ and $\widetilde{G}$ are of the form \eqref{g} and \eqref{wide g}, respectively. \end{corollary}
\begin{theorem}\label{t2} Let $\widetilde{G}$ be of the form \eqref{wide g}. If $f(z)\in\mathcal S^*_t(\alpha_1,\alpha_2)$, then \begin{eqnarray}\label{1t2}
r\exp\widetilde{G}(-r)<\left|f(z)\right|
<r\exp\widetilde{G}(r), \end{eqnarray}
for each $r=|z|<1$. \end{theorem}
\begin{proof} From \eqref{1c1}, we have \begin{eqnarray}\label{2t2}
\frac{f(z)}{z}\in
\exp\widetilde{G}(|z|\leq r), \end{eqnarray}
for each $0<r<1$ and $|z|\leq r$, where $\exp\widetilde{G}(z)$ is convex univalent and for each $0<r<1$ the set
$\exp\widetilde{G}(|z|\leq r)$ is a set symmetric with respect to the real axis. Furthermore, \begin{equation*}
\exp\widetilde{G}(-r)\leq\left|\exp
\widetilde{G}(z)\right|\leq\exp\widetilde{G}(r) \end{equation*}
for each $0<r<1$ and $|z|\leq r$. Therefore, from \eqref{2t2} we obtain \eqref{1t2}. \end{proof}
\section{On Logarithmic Coefficients and Coefficients}
The logarithmic coefficients $\gamma_n$ of $f(z)$ are defined by \begin{equation}\label{log coef}
\log\left\{\frac{f(z)}{z}\right\}=\sum_{n=1}^{\infty}2\gamma_n z^n\quad (z\in \Delta), \end{equation} which play an important role for various estimates in the theory of univalent functions. For example, if $f\in \mathcal{S}$, then we have \begin{equation*}
\gamma_1=\frac{a_2}{2}\quad{\rm and} \quad \gamma_2=\frac{1}{2}\left(a_3-\frac{a_2^2}{2}\right) \end{equation*} and the sharp estimates \begin{equation*}
|\gamma_1|\leq1 \quad{\rm and}\quad |\gamma_2|\leq \frac{1}{2}(1+2e^{-2})\approx 0.635\ldots, \end{equation*} hold. For $n\geq3$, the estimate of $\gamma_n$
is much harder; no significant upper bound for $|\gamma_n|$ is known when $f\in \mathcal{S}$, and the problem is still open for $n\geq3$. The sharp upper bounds for the modulus of the logarithmic coefficients are known for functions in very few subclasses of $\mathcal{S}$. For functions in the class $\mathcal{S}^*$, it is easy to prove that $|\gamma_n|\leq 1/n$ for $n\geq1$ and equality holds for the Koebe function. For other subclasses of $\mathcal{S}$, see also \cite{kargarJAnal,PWS1,PWS2,thomas}. In the following, we estimate the logarithmic coefficients of $f\in\mathcal{S}^*_t(\alpha_1,\alpha_2)$.
\begin{theorem}\label{th.logcoef}
Let $0<\alpha_1, \alpha_2\leq 1$, $c=e^{\pi i\theta}$ and $\theta=\frac{\alpha_2-\alpha_1}{\alpha_2+\alpha_1}$. Let $f\in \mathcal{S}^*_t(\alpha_1,\alpha_2)$ and the coefficients of $\log(f(z)/z)$ be given by \eqref{log coef}. Then \begin{equation}\label{gamma_n}
|\gamma_n|\leq \frac{(\alpha_1+\alpha_2)}{2n}\cos \frac{1}{2}\theta. \end{equation} The result is sharp. \end{theorem}
\begin{proof}
Let $f\in \mathcal{S}^*_t(\alpha_1,\alpha_2)$. Substituting \eqref{binomial} and \eqref{log coef} into \eqref{2t1}, we have \begin{equation*}
\sum_{n=1}^{\infty}2n\gamma_n z^n\prec \sum_{n=1}^{\infty}\lambda_n z^n. \end{equation*} Applying Lemma \ref{lem1.3} gives \begin{equation*}
2n|\gamma_n|\leq |\lambda_1|, \end{equation*} where \begin{equation*}
\lambda_1=\frac{(1+c)(\alpha_1+\alpha_2)}{2}. \end{equation*} Thus the desired inequality \eqref{gamma_n} follows. The equality holds for the logarithmic coefficients of the function
\begin{equation*}
z\mapsto z\exp \widetilde{G}(z),
\end{equation*}
where $\widetilde{G}$ is defined in \eqref{wide g}. This completes the proof. \end{proof}
If we let $\alpha_1=\alpha_2=\beta$ in the above Theorem \ref{th.logcoef}, we get the following result which previously is obtained by Thomas, see \cite[Theorem 1]{thomas}. \begin{corollary} Let $f$ be a strongly starlike function of order $\beta$, where $0<\beta\leq 1$. Then the logarithmic coefficients of $f$ satisfy the sharp inequality \begin{equation*}
|\gamma_n|\leq \frac{1}{n}\beta. \end{equation*} In particular, taking $\beta=1$ gives us the estimate of logarithmic coefficients of starlike functions. \end{corollary}
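For $\beta=1$ the extremal function is the Koebe function $k(z)=z/(1-z)^2$, whose logarithmic coefficients are exactly $\gamma_n=1/n$. The following sketch (our own illustration) recovers them from the Taylor coefficients of $f/z$ via the standard logarithm recursion $n\,b_n=\sum_{k=1}^{n}k\,L_k\,b_{n-k}$:

```python
def log_coefficients(b, N):
    """Given f/z = 1 + sum_{n>=1} b[n] z^n (with b[0] = 1), return the
    coefficients L[n] of log(f/z) = sum_{n>=1} L[n] z^n, using the
    recursion n*b[n] = sum_{k=1}^{n} k*L[k]*b[n-k]."""
    L = [0.0] * (N + 1)
    for n in range(1, N + 1):
        L[n] = b[n] - sum(k * L[k] * b[n - k] for k in range(1, n)) / n
    return L

N = 10
b = [n + 1 for n in range(N + 1)]   # f/z = sum (n+1) z^n for the Koebe function
gamma = [0.0] + [L / 2 for L in log_coefficients(b, N)[1:]]
print([round(g, 6) for g in gamma[1:4]])  # expect [1.0, 0.5, 0.333333]
```

This matches the sharp bound $|\gamma_n|\leq \beta/n$ with equality at $\beta=1$.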
Next, we estimate the coefficients of the function $f$ of the form \eqref{f} belonging to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$. We remark that, in general, our result is not sharp.
\begin{theorem}\label{t22}
Let $f$ of the form \eqref{f} belong to the class $\mathcal{S}^*_t(\alpha_1,\alpha_2)$. Then
\begin{equation}\label{an}
|a_n|\leq\left\{
\begin{array}{ll}
(\alpha_1+\alpha_2)\cos \frac{1}{2}\theta & \qquad\hbox{$n=2$,} \\ \\
\frac{\alpha_1+\alpha_2}{n-1}\cos \frac{1}{2}\theta
\prod_{k=2}^{n-1}\left(1+\frac{\alpha_1+\alpha_2}{k-1}\cos \frac{1}{2}\theta\right) & \qquad\hbox{$n=3,4,\ldots$,} \end{array} \right. \end{equation} where $c=e^{\pi i\theta}$, $\theta=(\alpha_2-\alpha_1)/(\alpha_2+\alpha_1)$ and $0<\alpha_1,\alpha_2\leq 1$. \end{theorem}
\begin{proof} Consider the function $q(z)$ as follows \begin{equation}\label{1t3}
zf'(z)=q(z)f(z)\quad (z\in\Delta). \end{equation} Thus by Lemma \ref{l11}, we have \begin{equation}\label{2t3}
q(z)\prec G(z)\quad (z\in\Delta), \end{equation} where $G(z)$ is defined by \eqref{g}. If we let \begin{equation*}\label{3t3}
q(z)=1+\sum_{n=1}^{\infty} A_nz^n, \end{equation*} then by Lemma \ref{lem1.3}, we see that the subordination relation \eqref{2t3} implies that \begin{equation}\label{4t3}
|A_n|\leq(\alpha_1+\alpha_2)\cos \frac{1}{2}\theta\quad (n=1,2,3,\ldots). \end{equation} If we equate the coefficients of $z^n$ in both sides of \eqref{1t3} we obtain \begin{equation*}
na_n=A_{n-1} a_1+A_{n-2}a_2+\cdots+A_1a_{n-1}+A_0a_n\quad
(n=2,3,\ldots), \end{equation*} where $A_0=a_1=1$. With a simple calculation and also by the inequality \eqref{4t3} we get \begin{align*}
|a_n|&=\frac{1}{n-1}\times \left|A_{n-1}a_1+A_{n-2}a_2+\cdots+A_1a_{n-1}\right|\\
&\leq\frac{\alpha_1+\alpha_2}{n-1}\cos \frac{1}{2}\theta(|a_{1}|+|a_2|+\cdots+|a_{n-1}|)\\
&=\frac{\alpha_1+\alpha_2}{n-1}\cos \frac{1}{2}\theta\sum_{k=1}^{n-1}|a_k|. \end{align*}
It is clear that $|a_2|\leq (\alpha_1+\alpha_2)\cos \frac{1}{2}\theta$. To prove the remaining part of the theorem, we need to show that \begin{equation}\label{induction}
\frac{\alpha_1+\alpha_2}{n-1}\cos \frac{1}{2}\theta\sum_{k=1}^{n-1}|a_k|\leq \frac{\alpha_1+\alpha_2}{n-1}\cos \frac{1}{2}\theta
\prod_{k=2}^{n-1}\left(1+\frac{\alpha_1+\alpha_2}{k-1}\cos \frac{1}{2}\theta\right)\quad (n=3,4,5,\ldots). \end{equation} Using induction and a simple calculation, one can prove the inequality \eqref{induction}. Hence, the desired estimate for
$|a_n|\, (n = 3, 4, 5,\ldots)$ follows, as asserted in \eqref{an}. This completes the proof of the theorem. \end{proof} Selecting $\alpha_1=\alpha_2=\beta$ in Theorem \ref{t22} above, we obtain bounds on the coefficients of strongly starlike functions of order $\beta$, although they are not sharp for $n=3,4,\ldots$.
\begin{corollary}
If the function $f$ of the form \eqref{f} is a strongly starlike function of order $\beta$, then
\begin{equation*}
|a_n|\leq\left\{
\begin{array}{ll}
2\beta & \qquad\hbox{$n=2$,} \\ \\
\frac{2\beta}{n-1}
\prod_{k=2}^{n-1}\left(1+\frac{2\beta}{k-1}\right) & \qquad\hbox{$n=3,4,\ldots$,} \end{array} \right. \end{equation*} where $0<\beta\leq 1$. The equality occurs for the function $f_\beta(z)=z/(1-z)^{2\beta}$. Taking $\beta=1$, we get the sharp estimate for the coefficients of starlike functions.
\end{corollary}
\end{document}
I had $\$30$ in allowance money and spent it as indicated in the pie graph shown. How many dollars did I spend on burgers?
[asy]
size(150);
pair A, B, C, D, O, W, X, Y, Z;
O=(0,0);
A=(.707,.707);
B=(-.966,.259);
C=(-.707,-.707);
D=(.342,-.940);
draw(Circle(O, 1));
draw(O--A);
draw(O--B);
draw(O--C);
draw(O--D);
W=(-.1,.5);
label("Movies", W, N);
label("$\frac{1}{3}$", W, S);
X=(-.55, 0);
label("Burgers", X, S);
Y=(-.17,-.7);
label("Ice Cream", Y, N);
label("$\frac{1}{5}$", Y, S);
Z=(.5, -.15);
label("Music", Z, N);
label("$\frac{3}{10}$", Z, S);
[/asy]
Because $\frac{1}{3}$ of the money was spent on movies and there are 30 dollars, the amount of money spent on movies is $\frac{1}{3} \cdot 30=10$ dollars. Likewise, $\frac{3}{10} \cdot 30=9$ dollars were spent on music and $\frac{1}{5} \cdot 30 = 6$ dollars were spent on ice cream. Thus, the total amount of money spent on movies, music, and ice cream is $\$10+\$9+\$6=\$25$. The remaining amount of money is spent on burgers. Thus, the money spent on burgers is $\$30-\$25=\$\boxed{5}$.
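The arithmetic can be verified exactly with rational numbers (a quick sketch of ours, not part of the original solution):

```python
from fractions import Fraction

total = 30
known = {"movies": Fraction(1, 3), "ice cream": Fraction(1, 5), "music": Fraction(3, 10)}
spent = {name: f * total for name, f in known.items()}   # dollars per category
burgers = total - sum(spent.values())                     # remainder goes to burgers
print(spent, burgers)  # burgers cost $5
```

Exact fractions make it obvious that the three known sectors account for $\frac{1}{3}+\frac{1}{5}+\frac{3}{10}=\frac{5}{6}$ of the total.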
\begin{document}
\begin{abstract} We analyze a general class of difference operators $H_\varepsilon = T_\varepsilon + V_\varepsilon$ on $\ell^2((\varepsilon {\mathbb Z})^d)$, where $V_\varepsilon$ is a multi-well potential and $\varepsilon$ is a small parameter. We derive full asymptotic expansions of the prefactor of the exponentially small eigenvalue splitting due to interactions between two ``wells'' (minima) of the potential energy, i.e., for the discrete tunneling effect. We treat both the case where there is a single minimal geodesic (with respect to the natural Finsler metric induced by the leading symbol $h_0(x,\xi)$ of $H_\varepsilon$) connecting the two minima and the case where the minimal geodesics form an $\ell+1$ dimensional manifold, $\ell\geq 1$. These results on the tunneling problem are as sharp as the classical results for the Schr\"odinger operator in \cite{hesjo}. Technically, our approach is pseudodifferential and we adapt techniques from \cite{hesjo2} and \cite{hepar} to our discrete setting. \end{abstract}
\title{Tunneling for a class of difference operators:\ Complete Asymptotics}
\section{Introduction}
The aim of this paper is to derive complete asymptotic expansions for the interaction between two potential minima of a difference operator on a scaled lattice, i.e., for the discrete tunneling effect.
We consider a rather general class of families of difference operators $\left(H_\varepsilon\right)_{\varepsilon>0}$ on the Hilbert space $\ell^2((\varepsilon {\mathbb Z})^d)$, as the small parameter $\varepsilon>0$ tends to zero. The operator $H_\varepsilon$ is given by \begin{align} \label{Hepein} H_\varepsilon &= T_\varepsilon + V_\varepsilon , \quad\text{where}\quad T_\varepsilon = \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma \tau_\gamma ,\\ (\tau_\gamma u)(x) &= u(x+\gamma) \, ,\qquad \quad (a_\gamma u)(x) := a_\gamma(x; \varepsilon) u(x) \quad \mbox{for} \quad x,\gamma\in(\varepsilon {\mathbb Z})^d \label{agammataugamma} \end{align} and $V_\varepsilon$ is a multiplication operator which in leading order is given by a multiwell-potential $V_0 \in \mathscr C^\infty ({\mathbb R} ^d)$.
Through the tunneling effect, the interaction between neighboring potential wells causes the eigenvalues and eigenfunctions to differ from those of an operator with decoupled wells, which is realized by the direct sum of ``Dirichlet operators'' situated at the several wells. Since the interaction is small, it can be treated as a perturbation of the decoupled system.
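As a toy numerical illustration (our own sketch — a far simpler model than the general class \eqref{Hepein}), take a one-dimensional discrete Schr\"odinger operator $-\varepsilon^2\Delta_h + V$ with the symmetric double well $V(x)=(x^2-1)^2$ on a finite grid with Dirichlet boundary conditions. The two lowest eigenvalues form a pair whose splitting is small compared with the distance to the next level, reflecting the exponentially weak tunneling between the wells:

```python
import math

def jacobi_eigenvalues(A, sweeps=60, tol=1e-12):
    """Eigenvalues of a small, dense symmetric matrix by cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        off = max(abs(A[i][j]) for i in range(n) for j in range(n) if i != j)
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < tol:
                    continue
                th = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):  # A <- A G^T  (column rotation)
                    akp, akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
                for k in range(n):  # A <- G A    (row rotation)
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
    return sorted(A[i][i] for i in range(n))

def double_well_matrix(eps=0.15, n=39, lo=-2.0, hi=2.0):
    """-eps^2 * (discrete Laplacian) + (x^2 - 1)^2 on the interior grid points."""
    h = (hi - lo) / (n + 1)
    x = [lo + (i + 1) * h for i in range(n)]
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 2 * eps ** 2 / h ** 2 + (x[i] ** 2 - 1) ** 2
        if i + 1 < n:
            A[i][i + 1] = A[i + 1][i] = -eps ** 2 / h ** 2
    return A

evs = jacobi_eigenvalues(double_well_matrix())
print("splitting:", evs[1] - evs[0], " next gap:", evs[2] - evs[1])
```

The grid size, domain and $\varepsilon$ above are illustrative choices; shrinking $\varepsilon$ makes the splitting decay exponentially, which is exactly the asymptotic regime studied in this paper.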
In \cite{kleinro4}, we showed that it is possible to approximate the eigenfunctions of the original Hamiltonian $H_\varepsilon$ with respect to a fixed spectral interval by (linear combinations of) the eigenfunctions of the several Dirichlet operators situated at the different wells and we gave a representation of $H_\varepsilon$ with respect to a basis of Dirichlet-eigenfunctions.
In \cite{kleinro5} we gave estimates for the weighted $\ell^2$-norm of the difference between exact Dirichlet eigenfunctions and approximate Dirichlet eigenfunctions, which are constructed using the WKB-expansions given in \cite{kleinro3}.
In this paper, we consider the special case that the Dirichlet operators at only two wells have an eigenvalue (and exactly one each) inside a given spectral interval. It is then possible to compute complete asymptotic expansions for the elements of the interaction matrix and to obtain explicit formulae for the leading order term.\\
This paper is based on the thesis \cite{thesis}. It is the sixth in a series of papers (see \cite{kleinro} - \cite{kleinro5}); the aim is to develop an analytic approach to the semiclassical eigenvalue problem and tunneling for $H_\varepsilon$ which is comparable in detail and precision to the well known analysis for the Schr\"odinger operator (see \cite{Si1} and \cite{hesjo}). We remark that the analysis of tunneling has been extended to classes of pseudodifferential operators in ${\mathbb R} ^d$ in \cite{hepar} where tunneling is discussed for the Klein-Gordon and Dirac operator. This article in turn relies heavily on the ideas in the analysis of Harper's equation in \cite{hesjo2} and previous results from \cite{sjo} covering classes of analytic symbols. Since our formulation of the spectral problem for the operator in \eqref{Hepein} is pseudo-differential in spirit, it has been possible to adapt the methods of \cite{hepar} to our case. Since our symbols are analytic only in the momentum variable $\xi$, but not in the space variable $x$, the results of \cite{sjo} do not all automatically apply.
Our motivation comes from stochastic problems (see \cite{kleinro}, \cite{begk1}, \cite{begk2}). A large class of discrete Markov chains analyzed in \cite{begk2} with probabilistic techniques falls into the framework of difference operators treated in this article.
We expect that similar results hold in the more general case that the Hamiltonian is a generator of a jump process in ${\mathbb R} ^d$, see \cite{klr} for first results in this direction.
\begin{hyp}\label{hyp1} \begin{enumerate} \item The coefficients $a_\gamma(x; \varepsilon)$ in \eqref{Hepein} are functions \begin{equation}\label{agammafunk} a: (\varepsilon {\mathbb Z})^d \times {\mathbb R} ^d \times (0,\varepsilon_0] \rightarrow {\mathbb R} \, , \qquad (\gamma, x, \varepsilon) \mapsto a_\gamma(x; \varepsilon)\, , \end{equation} satisfying the following conditions: \begin{enumerate} \item[(i)] They have an expansion \begin{equation}\label{agammaexp} a_\gamma(x; \varepsilon) = \sum_{k=0}^{N-1} \varepsilon^k a_\gamma^{(k)}(x) + R^{(N)}_\gamma (x; \varepsilon)\, ,\qquad N\in{\mathbb N}^*\, , \end{equation} where $a_\gamma \in\mathscr C^\infty({\mathbb R} ^d\times (0,\varepsilon_0])$ and $a_\gamma^{(k)}\in\mathscr C^\infty({\mathbb R} ^d)$ for all $\gamma\in(\varepsilon {\mathbb Z})^d$ and $0\leq k \leq N-1$. \item[(ii)] $\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(0)} = 0$ and $a_\gamma^{(0)} \leq 0$ for $\gamma \neq 0$. \item[(iii)] $a_\gamma(x; \varepsilon) = a_{-\gamma}(x+\gamma; \varepsilon)$ for all $x \in {\mathbb R} ^d, \gamma \in (\varepsilon {\mathbb Z})^d$. \item[(iv)] For any $c>0$ and $\alpha\in{\mathbb N}^d$ there exists $C>0$ such that, for $0\leq k\leq N-1$, uniformly with respect to $x\in{\mathbb R} ^d$ and $\varepsilon\in (0,\varepsilon_0]$, \begin{equation}\label{abfallagamma}
\| \, e^{\frac{c|.|}{\varepsilon}} \partial_x^\alpha a^{(k)}_.(x)\|_{\ell_\gamma^2((\varepsilon {\mathbb Z})^d)}\leq C \qquad\text{and} \qquad
\|\,
e^{\frac{c|.|}{\varepsilon}} \partial_x^\alpha R^{(N)}_.(x)\|_{\ell^2_\gamma((\varepsilon {\mathbb Z})^d)}
\leq C\varepsilon^N\; . \end{equation} \item[(v)]
$\Span \{\gamma\in(\varepsilon {\mathbb Z})^d\,|\, a^{(0)}_\gamma(x) <0\}= {\mathbb R} ^d$ for all $x\in{\mathbb R} ^d$. \end{enumerate} \item \begin{enumerate} \item[(i)] The potential energy $V_\varepsilon$ is the restriction to $(\varepsilon {\mathbb Z})^d$ of a function $\widehat{V}_\varepsilon\in\mathscr C^\infty ({\mathbb R} ^d, {\mathbb R} )$ which has an expansion \begin{equation}\label{hatVell} \widehat{V}_{\varepsilon}(x) = \sum_{\ell=0}^{N-1}\varepsilon^l V_\ell(x) + R_{N}(x;\varepsilon) \, ,\qquad N\in{\mathbb N}^*\, , \end{equation} where $V_\ell\in\mathscr C^\infty({\mathbb R} ^d)$, $R_{N}\in \mathscr C^\infty ({\mathbb R} ^d\times (0,\varepsilon_0])$ for some $\varepsilon_0>0$ and for any compact set $K\subset {\mathbb R} ^d$
there exists a constant $C_K$ such that $\sup_{x\in K} |R_{N}(x;\varepsilon)|\leq C_K \varepsilon^{N}$. \item[(ii)] $V_\varepsilon$ is polynomially bounded and there exist constants $R, C > 0$ such that
$V_\varepsilon(x) > C$ for all $|x| \geq R$ and $\varepsilon\in(0,\varepsilon_0]$. \item[(iii)] $V_0(x)\geq 0$ and it takes the value $0$ only at a finite number of non-degenerate minima $x^j,\; j\in \mathcal{C} =\{1,\ldots , r\}$, which we call potential wells. \end{enumerate} \end{enumerate} \end{hyp}
We remark that for $T_\varepsilon$ defined in \eqref{Hepein}, under the assumptions given in Hypothesis \ref{hyp1}, one has $T_\varepsilon = \Op_\varepsilon^{{\mathbb T}}(t(.,.;\varepsilon))$ (see Appendix \ref{app1} for definition and details of the quantization on the $d$-dimensional torus ${\mathbb T}^d := {\mathbb R} ^d/ (2\pi {\mathbb Z})^d$) where $t\in\mathscr C^\infty\left({\mathbb R} ^d\times{\mathbb T}^d\times (0,\varepsilon_0]\right)$ is given by \begin{equation}\label{talsexp} t(x,\xi; \varepsilon) = \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma (x; \varepsilon) \exp \left(-\tfrac{i}{\varepsilon}\gamma\cdot\xi\right)\; . \end{equation} Here $t(x,\xi;\varepsilon)$ is considered as a function on ${\mathbb R} ^{2d}\times (0,\varepsilon_0]$, which is $2\pi$-periodic with respect to $\xi$. By condition (a)(iv) in Hypothesis \ref{hyp1}, the function $\xi\mapsto t(x, \xi; \varepsilon)$ has an analytic continuation to ${\mathbb C}^d$. Moreover for all $B>0$ \begin{equation}\label{agammasum}
\sum_\gamma \left| a_\gamma(x; \varepsilon)\right| e^{\frac{B|\gamma|}{\varepsilon}} \leq C \qquad\text{and thus}\qquad
\sup_{x\in{\mathbb R} ^d} |a_\gamma(x; \varepsilon)| \leq C e^{-\frac{B|\gamma|}{\varepsilon}} \end{equation}
uniformly with respect to $x$ and $\varepsilon$. We further remark that (a)(iv) implies $\bigl|a_\gamma^{(k)}(x)- a_\gamma^{(k)}(x + h)\bigr|\leq C |h|$ for $0\leq k \leq N-1$ uniformly with respect to $\gamma\in(\varepsilon {\mathbb Z})^d$ and $x,h\in{\mathbb R} ^d$ and (a)(ii),(iii),(iv) imply that $T_\varepsilon$ is symmetric and bounded and that for some $C>0$ \begin{equation}\label{Tvonunten}
\skpd{u}{T_\varepsilon u} \geq - C \varepsilon \|u\|^2_{\ell^2}\;, \qquad u\in\ell^2((\varepsilon {\mathbb Z})^d)\; . \end{equation}
Furthermore, we set \begin{align}\label{texpand} t(x,\xi;\varepsilon) &= \sum_{k=0}^{N-1} \varepsilon^k t_k (x,\xi) + \widehat{t}_N(x,\xi;\varepsilon)\quad \text{with}\\ t_k(x, \xi) &:= \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(k)}(x) e^{-\frac{i}{\varepsilon}\gamma\xi}\, , \qquad 0\leq k \leq N-1\,,\nonumber\\ \widehat{t}_N(x, \xi; \varepsilon) &:= \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} R_\gamma^{(N)}(x; \varepsilon) e^{-\frac{i}{\varepsilon}\gamma\xi}\nonumber\; . \end{align} Thus, in leading order, the symbol of $H_\varepsilon$ is $h_0:=t_0+V_0$. Combining \eqref{agammaexp} and (a)(iii) shows that the $2\pi$-periodic function ${\mathbb R} ^d\ni\xi\mapsto t_0(x,\xi)$ is even with respect to $\xi\mapsto -\xi$, i.e., \begin{equation}\label{agammasym} a_\gamma^{(0)}(x) = a_{-\gamma}^{(0)}(x)\, , \qquad x\in{\mathbb R} ^d, \,\gamma\in(\varepsilon {\mathbb Z})^d \end{equation}
(see \cite{kleinro}, Lemma 1.2) and therefore \begin{equation}\label{tcosh} t_0(x,\xi) = \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(0)}(x) \cos \bigl(\tfrac{1}{\varepsilon}\gamma\cdot\xi\bigr) \; . \end{equation} At $\xi=0$, for fixed $x\in{\mathbb R} ^d$ the function $t_0$ defined in \eqref{texpand} has by Hypothesis \ref{hyp1}(a)(ii) an expansion \begin{equation}\label{kinen}
t_0(x,\xi) = \skp{\xi}{B(x)\xi} + \sum_{\natop{|\alpha|=2n}{n\geq2}} B_\alpha (x) \xi^\alpha \qquad\text{as}\;\; |\xi|\to 0 \end{equation} where $\alpha\in{\mathbb N}^d$, $B\in\mathscr C^\infty ({\mathbb R} ^d, \mathcal{M}(d\times d,{\mathbb R} ))$, for any $x\in{\mathbb R} ^d$ the matrix $B(x)$ is positive definite and symmetric and $B_\alpha$ are real functions. By straightforward calculations one gets for $1\leq \mu,\nu\leq d$ \begin{equation}\label{Bnumu} B_{\nu\mu}(x) = -\frac{1}{2\varepsilon^2} \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(0)}(x) \gamma_\nu\gamma_\mu\; . \end{equation}
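As an illustration, take the hypothetical one-dimensional nearest-neighbor coefficients $a^{(0)}_0 = 2$, $a^{(0)}_{\pm\varepsilon} = -1$ (our choice, not from the text): then \eqref{tcosh} gives $t_0(x,\xi) = 2 - 2\cos\xi$, the matrix $B(x)$ in \eqref{Bnumu} reduces to the scalar $1$, and the expansion \eqref{kinen} and the positivity of $t_0(x,\cdot)$ away from $\xi = 0$ can be checked directly:

```python
import numpy as np

# Nearest-neighbor example coefficients a_gamma^{(0)}, gamma in {-eps, 0, eps}
# (hypothetical choice; it satisfies sum_gamma a_gamma^{(0)} = 0).
eps = 0.1
a0 = {-eps: -1.0, 0.0: 2.0, eps: -1.0}

def t0(xi):
    # t_0(x, xi) = sum_gamma a_gamma^{(0)}(x) cos(gamma * xi / eps), cf. (tcosh)
    return sum(a * np.cos(g * xi / eps) for g, a in a0.items())

# B from (Bnumu): B = -(1 / (2 eps^2)) * sum_gamma a_gamma^{(0)} gamma^2 (= 1 here)
B = -sum(a * g**2 for g, a in a0.items()) / (2.0 * eps**2)

xi = 1e-3
quad_err = abs(t0(xi) - B * xi**2)   # remainder is O(xi^4), cf. (kinen)
```

Note that $t_0(\pi) = 4 > 0$, so this toy choice also satisfies the positivity condition imposed at the wells in Hypothesis \ref{hyp2} below.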
We set \begin{equation}\label{h0tilde} \tilde{h}_0: {\mathbb R} ^{2d} \rightarrow {\mathbb R} \, , \quad \tilde{h}_0(x,\xi) := - t_0(x, i\xi) - V_0(x)\; . \end{equation}
In order to work in the context of \cite{kleinro2}, we shall assume
\begin{hyp}\label{hyp2} At the minima $x^j,\, j\in\mathcal{C}$, of $V_0$, we assume that $t_0$ defined in \eqref{texpand} fulfills \[ t_0(x^j, \xi) >0 \quad\text{if} \quad \xi \in{\mathbb T}^d\setminus \{0\} \, .\] \end{hyp}
For any set $D\subset {\mathbb R} ^d$, we denote the restriction to the lattice by $D_\varepsilon := D\cap (\varepsilon {\mathbb Z})^d$.\\
By Hypothesis \ref{hyp1}, $\tilde{h}_0$ is even and hyperconvex\footnote{For a normed vector space $V$ we call a function $L\in\mathscr C^2(V,{\mathbb R} )$ hyperconvex if there exists a constant $\alpha > 0$ such that
\[ D^2L|_{v_0}(v,v)\geq\alpha \|v\|^2\quad\text{for all}\quad v_0,v\in V\, .\] } with respect to momentum. We showed in \cite{kleinro}, Prop. 2.9, that any function $f\in\mathscr C^\infty(T^*M,{\mathbb R} )$, which is hyperconvex in each fibre, is automatically hyperregular\footnote{We recall from e.g. \cite{abma} that $f$ is hyperregular if its fibre derivative $D_Ff$ - related to the Legendre transform - is a global diffeomorphism: $T^*M \rightarrow TM$.} (here $M$ denotes a smooth manifold, which in our context is equal to ${\mathbb R} ^d$).
We can thus introduce the associated Finsler distance $d = d_\ell$ on ${\mathbb R} ^d$ as in \cite{kleinro}, Definition 2.16, where we set $\widetilde{M}:={\mathbb R} ^d\setminus\{x^k\, , \, k\in\mathcal{C}\}$. Analogously to \cite{kleinro}, Theorem 1.6, it can be shown that $d$ is locally Lipschitz and that for any $j\in\mathcal{C}$, the distance $d^j(x):= d(x, x^j)$ fulfills the generalized eikonal equation and inequality, respectively, \begin{align}\label{eikonal2} \tilde{h}_0 \bigl(x, \nabla d^j (x)\bigr) &= 0 \; ,\qquad x\in\Omega^j \\ \tilde{h}_0\bigl(x, \nabla d^j(x)\bigr) &\leq 0\; , \qquad x\in{\mathbb R} ^d \end{align}
where $\Omega^j$ is some neighborhood of $x^j$. We remark that, assuming only Hypothesis \ref{hyp1}, it is possible that balls of finite radius with respect to the Finsler distance, i.e., $B_r(x):=\{y\in{\mathbb R} ^d\,|\, d(x,y)\leq r\}, r<\infty$, are unbounded in the Euclidean distance (and thus not compact). In this paper, we shall not discuss consequences of this effect.
Crucial quantities for the subsequent analysis are for $j,k\in \mathcal{C}$ \begin{equation}\label{distances} S_{jk}:= d(x^j, x^k)\, , \qquad\text{and}\qquad S_0:= \min_{j\neq k}d(x^j,x^k) \; . \end{equation}
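For a hypothetical one-dimensional example with $t_0(x,\xi) = 2 - 2\cos\xi$ and $V_0(x) = (x^2-1)^2$ (our illustrative choices, not from the text), one has $\tilde h_0(x,\xi) = 2\cosh\xi - 2 - V_0(x)$, so the eikonal equation \eqref{eikonal2} can be solved explicitly, $\partial_x d^j(x) = \operatorname{arccosh}\bigl(1 + V_0(x)/2\bigr)$, and the distance \eqref{distances} between the wells $x^1 = -1$, $x^2 = 1$ becomes a one-dimensional integral; a sketch of its numerical evaluation:

```python
import numpy as np

# 1D illustration: tilde{h}_0(x, xi) = 2 cosh(xi) - 2 - V_0(x), so the eikonal
# equation (eikonal2) yields (d^1)'(x) = arccosh(1 + V_0(x) / 2), and the
# Finsler distance S_12 = d(x^1, x^2) between the wells x^1 = -1, x^2 = 1 is
# the integral of this quantity along the (here unique) minimal geodesic.
V0 = lambda x: (x**2 - 1.0)**2
xs = np.linspace(-1.0, 1.0, 200001)
f = np.arccosh(1.0 + V0(xs) / 2.0)
S_12 = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))   # trapezoidal rule
# For shallow wells arccosh(1 + V_0/2) ~ sqrt(V_0), recovering the Agmon
# distance of the Schroedinger case; here S_12 is slightly below
# int_{-1}^{1} sqrt(V_0) dx = 4/3.
```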
\begin{rem}\label{remhypmultwell} Since $d$ is locally Lipschitz-continuous (see \cite{kleinro}), it follows from \eqref{agammasum} that for any $B>0$ and any bounded region $\Sigma\subset {\mathbb R} ^d$ there exists a constant $C>0$ such that \begin{equation}\label{agammasupnorm2}
\sum_{\natop{\gamma\in(\varepsilon {\mathbb Z})^d}{|\gamma|<B}} \Bigl\|a_\gamma (\,.\, ; \varepsilon)
e^{\frac{d(.,.+\gamma)}{\varepsilon}}\Bigr\|_{\ell^\infty(\Sigma)} \leq C\; . \end{equation} \end{rem}
For $\Sigma\subset{\mathbb R} ^d$ we define the space $\ell^2_{\Sigma_\varepsilon}:= i_{\Sigma_\varepsilon} \left(\ell^2(\Sigma_\varepsilon)\right) \subset \ell^2((\varepsilon {\mathbb Z})^d)$ where $ i_{\Sigma_\varepsilon}$ denotes the embedding via zero extension. Then we define the Dirichlet operator \begin{equation} \label{HepD}
H_\varepsilon^{\Sigma} :=\mathbf{1}_{\Sigma_\varepsilon} H_\varepsilon|_{\ell^2_{\Sigma_\varepsilon}} \;:\; \ell^2_{\Sigma_\varepsilon} \rightarrow \ell^2_{\Sigma_\varepsilon} \end{equation}
with domain $\mathscr D (H_\varepsilon^{\Sigma}) = \{u\in\ell^2_{\Sigma_\varepsilon}\,|\, V_\varepsilon u \in \ell^2_{\Sigma_\varepsilon}\}$.
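To make \eqref{HepD} concrete: on the matrix level, for a finite lattice, $H_\varepsilon^{\Sigma}$ is simply the principal sub-block of $H_\varepsilon$ indexed by $\Sigma_\varepsilon$. The following sketch (using a hypothetical 1D double-well instance of $H_\varepsilon$ with nearest-neighbor coefficients, our illustrative choice) restricts to a region $\Sigma$ containing only the left well:

```python
import numpy as np

# Illustrative 1D instance of H_eps (a_0 = 2, a_{+-eps} = -1, V_0 = (x^2-1)^2)
# on a large truncated lattice, and its Dirichlet restriction (HepD) to
# Sigma = (-L, 0), a region containing only the left well x^1 = -1.
eps, L = 0.1, 3.0
x = np.arange(-L, L + eps / 2, eps)
n = len(x)
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + np.diag((x**2 - 1.0)**2)

idx = x < 0.0                       # Sigma_eps = Sigma intersect eps*Z
H_D = H[np.ix_(idx, idx)]           # 1_Sigma H restricted to l^2_{Sigma_eps}

mu_1 = np.linalg.eigvalsh(H_D)[0]   # lowest Dirichlet eigenvalue, left well
e0, e1 = np.linalg.eigvalsh(H)[:2]  # two lowest eigenvalues of the full H_eps
# mu_1 lies (weakly) between e0 and e1 and is exponentially close to both.
```

By Cauchy interlacing, the lowest Dirichlet eigenvalue can only move up, and its distance to the exact spectrum is of the order of the tunneling splitting, illustrating the exponential closeness discussed next.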
For a fixed spectral interval it is shown in \cite{kleinro4} that the difference between the exact spectrum and the spectra of Dirichlet realizations of $H_\varepsilon$ near the different wells is exponentially small and determined by the Finsler distance between the two nearest neighboring wells.
The following hypothesis gives assumptions concerning the separation of the different wells using Dirichlet operators and the restriction to some adapted spectral interval $I_\varepsilon$.
\begin{hyp}\label{hypIMj} \begin{enumerate} \item There exist constants $\eta>0$ and $C>0$ such that for all $x\in{\mathbb R} ^d$
\[ \left\| a_{(.)} (x; \varepsilon) e^{\frac{1}{\varepsilon} d(x, x+\, . \, )} |\, . \,
|^{\frac{d + \eta}{2}} \right\|_{\ell^2((\varepsilon {\mathbb Z})^d)} \leq C \; . \] \item For $j\in\mathcal{C}$, we choose a compact manifold $M_j\subset {\mathbb R} ^d$ with $\mathscr C^2$-boundary such that the following holds: \begin{enumerate} \item $x^j\in M_j$, $d^j\in\mathscr{C}^2(M_j)$ and $x^k\notin M_j$ for $k\in\mathcal{C}, \,k\neq j$. \item Let $X_{\tilde{h}_0}$ denote the Hamiltonian vector field with respect to $\tilde{h}_{0}$ defined in \eqref{h0tilde}, $F_{t}$ denote the flow of $X_{\tilde{h}_0}$ and set \begin{equation}\label{Lambdaplus}
\Lambda_{\pm} := \bigl\{ (x,\xi)\in T^*{\mathbb R} ^{d}\, |\, F_{t}(x,\xi) \rightarrow (x^j,0)\quad \text{for} \quad t \rightarrow \mp \infty \bigr\} \; . \end{equation} Then, for $\pi : T^*{\mathbb R} ^d \rightarrow {\mathbb R} ^d$ denoting the bundle projection $\pi (x, \xi) = x$, we have
\[ \Lambda_+(M_j):=\pi^{-1}(M_j) \cap \Lambda_+ = \bigl\{ (x, \nabla d^j(x)) \in T^*{\mathbb R} ^d\, |\, x\in M_j\bigr\} \; . \] Moreover $\pi\bigl(F_t(x,\xi)\bigr) \in M_j$ for all $(x,\xi)\in \pi^{-1}(M_j) \cap \Lambda_+$ and all $t\leq 0$. \end{enumerate} \item Given $M_j,\, j\in\mathcal{C}$, let $I_\varepsilon = [\alpha (\varepsilon),\beta (\varepsilon)]$ be an interval, such that
$\alpha (\varepsilon),\beta (\varepsilon) = O(\varepsilon)$ for $\varepsilon\to 0$. Furthermore there exists a function $a(\varepsilon)>0$ with the property $|\log a(\varepsilon)| = o\left(\frac{1}{\varepsilon}\right),\, \varepsilon\to 0$, such that none of the operators $H_\varepsilon,H_\varepsilon^{M_1},\ldots H_\varepsilon^{M_r}$ has spectrum in $[\alpha(\varepsilon)-2a(\varepsilon),\alpha(\varepsilon))$ or $(\beta(\varepsilon),\beta(\varepsilon)+2a(\varepsilon)]$. \end{enumerate} \end{hyp}
By \cite{kleinro}, Theorem 1.5, the base integral curves of $X_{\tilde{h}_0}$ on ${\mathbb R} ^d\setminus\{x^1,\ldots, x^r\}$ with energy $0$ are geodesics with respect to $d$ and vice versa. Thus Hypothesis \ref{hypIMj}, 2(b), implies in particular that there is a unique minimal geodesic between any point in $M_j$ and $x^j$.
Clearly, $\Lambda_+ (M_j)$ is a Lagrange manifold (by 2(b)) and since the flow $F_t$ preserves $\tilde{h}_0$, we have $\Lambda_+(M_j)\subset \tilde{h}_0^{-1}(0)$ by \eqref{Lambdaplus}. Thus the eikonal equation $\tilde{h}_0(x, \nabla d^j(x)) =0$ holds for $x\in M_j$. It follows from the construction of the solution of the eikonal equation in \cite{kleinro3} that in fact $d^j\in \mathscr C^\infty(M_j)$. We recall that, in a small neighborhood of $x^j$, the equation $\xi = \nabla d^j$ parametrizes by construction the outgoing manifold $\Lambda_+$ of the hyperbolic fixed point $(x^j, 0)$ of $X_{\tilde{h}_0}$ in $T^*M_j$. Hypothesis \ref{hypIMj}, (2), ensures this globally.\\
Since the main theorems in this paper treat fine asymptotics for the interaction between two wells, we assume the following hypothesis. It guarantees that the wells are not too far from each other and that the difference between the Dirichlet eigenvalues is not too large (otherwise the main term of the interaction matrix would have the same order of magnitude as the error term).\\
Given Hypothesis \ref{hypIMj}, we assume in addition
\begin{hyp}\label{hypkj} \begin{enumerate} \item Only two Dirichlet operators $H_\varepsilon^{M_j}$ and $H_\varepsilon^{M_k}$, $j,k\in\mathcal{C},$ have an eigenvalue (and exactly one) in the spectral interval $I_\varepsilon$, which we denote by $\mu_j$ and $\mu_k$ respectively, with corresponding real Dirichlet eigenfunctions $v_j$ and $v_k$. \item We choose coordinates such that $x^j_d<0$ and $x^k_d>0$ and we set \begin{equation}\label{Hnull}
{\mathbb H}_d := \{ x\in{\mathbb R} ^d\, |\, x_d=0\} \; . \end{equation} \item For \[ S := \min_{r\in\mathcal{C}} \min_{x\in\partial M_r} d(x, x^r)\, , \] let $0<a < 2S - S_0$ and $S_{jk} < S_0 + a$ and for all $\delta > 0$ \begin{equation}\label{mualphamu}
|\mu_j - \mu_k |= \expord{(a-\delta)} \; . \end{equation} We define the closed ``ellipse'' \begin{equation}\label{ellipse}
E := \{ x\in {\mathbb R} ^d\,|\, d^j(x)+ d^k(x) \leq S_0 + a \} \end{equation} and assume that $E\subset \stackrel{\circ}{M}_{j}\cup \stackrel{\circ}{M}_{k}$. \item For $R>0$ we set \begin{equation}\label{HR}
{\mathbb H}_{d,R}:= \{ x\in {\mathbb R} ^d\, |\, -R<x_d<0\} \end{equation} and choose $R>0$ large enough such that \begin{equation}\label{EHR}
E\cap \{x\in{\mathbb R} ^d\,|\, x_d\leq -R\} = \emptyset\, , \qquad E\cap {\mathbb H}_{d,R} \subset \stackrel{\circ}{M}_j \qquad \text{and}\qquad E\cap {\mathbb H}_{d,R}^c \subset \stackrel{\circ}{M}_k\, . \end{equation} \end{enumerate} \end{hyp}
The tunneling between the wells $x^j$ and $x^k$ can be described by the interaction term \begin{equation}\label{wjkdef} w_{jk} = \skpd{v_j}{\bigl(\mathbf{1} - \mathbf{1}_{M_k}\bigr) T_\varepsilon v_k} = \skpd{v_j}{\bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] v_k} \end{equation} introduced in \cite{kleinro4}, Theorem 1.5 (cf. Theorem \ref{ealphafalpha}).
The main topic of this paper is to derive complete asymptotic expansions for $w_{jk}$, using the approximate eigenfunctions we constructed in \cite{kleinro5}.
\begin{rem}\label{wjkalt} \begin{enumerate} \item Since the set ${\mathbb H}_{d,R}$ fulfills the assumptions on the set $\Omega$ introduced in \cite{kleinro4}, it follows from \cite{kleinro4}, Proposition 1.7, that the interaction $w_{jk}$ between the two wells $x^j$ and $x^k$ (cf. Theorem \ref{ealphafalpha}) is given by \begin{equation}\label{wjkaltglg} w_{jk} = \skpd{ [T_\varepsilon, \mathbf{1}_{{\mathbb H}_{d,R}}]\mathbf{1}_{E} v_j}{\mathbf{1}_{E} v_k} + \expord{S_0 + a -\eta} \, , \qquad \eta>0\; . \end{equation} In order to use symbolic calculus to compute asymptotic expansions of $w_{jk}$, we will smooth the characteristic function $\mathbf{1}_{{\mathbb H}_{d,R}}$ by convolution with a Gaussian. \item It follows from the results in \cite{kleinro2} that, by Hypothesis \ref{hypIMj},(3), the Dirichlet eigenvalues $\mu_j$ and $\mu_k$ lie in $\varepsilon^{\frac{3}{2}}$-intervals around some eigenvalues of the associated harmonic oscillators at the wells $x^j$ and $x^k$ as constructed in \cite{kleinro5}, (1.19). Thus we can use the approximate eigenfunctions and the weighted estimates given in \cite{kleinro5}, Theorem 1.7 and 1.8 respectively. \end{enumerate} \end{rem}
Next we give assumptions on the geometric setting, more precisely on the geodesics between the two wells given in Hypothesis \ref{hypkj}. First we consider the generic setting, where there is exactly one minimal geodesic between the two wells. Later on, we consider the more general situation where the minimal geodesics form a manifold.
We recall from \cite{kleinro} that, as usual, geodesics are the critical points of the length functional of the Finsler structure induced by $\tilde{h}_0$.
\begin{hyp}\label{hypgeo1} There is a unique minimal geodesic $\gamma_{jk}$ (with respect to the Finsler distance $d$) between the wells $x^j$ and $x^k$. Moreover, $\gamma_{jk}$ intersects the hyperplane ${\mathbb H}_d$ transversally at some point $y_0 = (y'_0, 0)$ (possibly after redefining the origin) and is nondegenerate at $y_0$ in the sense that, transversally to $\gamma_{jk}$, the function $d^k + d^j$ changes quadratically, i.e., the restriction of $d^j(x) + d^k(x)$ to ${\mathbb H}_d$ has a positive Hessian at $y_0$. \end{hyp}
\begin{figure}
\caption{The regions $E$, $M_j$ and $M_k$, the point $y_0$ and the curve $\gamma_{jk}$}
\label{Bild2}
\end{figure}
\begin{theo}\label{wjk-expansion} Let $H_\varepsilon$ be a Hamiltonian as in \eqref{Hepein} satisfying Hypotheses \ref{hyp1} and \ref{hyp2} and assume that Hypotheses \ref{hypIMj}, \ref{hypkj} and \ref{hypgeo1} are fulfilled. For $m=j,k$, let $b^m\in\mathscr C_0^\infty({\mathbb R} ^d\times (0,\varepsilon_0])$ and $b^m_\ell\in \mathscr C^\infty_0 ({\mathbb R} ^d), \, \ell\in {\mathbb Z}/2, \ell\geq -N_m$ for some $N_m\in {\mathbb N}$ be such that the approximate eigenfunctions $\widehat{v}_m^\varepsilon\in\ell^2((\varepsilon {\mathbb Z})^d)$ of the Dirichlet operators $H_\varepsilon^{M_m}$ constructed in \cite{kleinro5}, Theorem 1.7, have asymptotic expansions \begin{equation} \label{hatvm} \widehat{v}_m^\varepsilon (x; \varepsilon) = \varepsilon^{\frac{d}{4}} e^{-\frac{d^m(x)}{\varepsilon}} b^m(x; \varepsilon)\quad\text{with}\quad b^m(x;\varepsilon) \sim \sum_{\natop{\ell\in {\mathbb Z}/2}{\ell\geq -N_m}} \varepsilon^\ell b^m_\ell \; . \end{equation} Then there is a sequence $(I_p)_{p\in {\mathbb N}/2}$ in ${\mathbb R} $ such that \[ w_{jk} \sim \varepsilon^{\frac{1}{2}-(N_j + N_k)} e^{-S_{jk}/\varepsilon} \sum_{p\in {\mathbb N}/2} \varepsilon^p I_p \, . \] The leading order is given by \begin{equation}\label{0thm1} I_0 = \frac{(2\pi)^{\frac{d-1}{2}}}{\sqrt{\det D^{2}_\perp(d^j + d^k) (y_0)}} b^k_{-N_k}(y_0)
\sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y_0) \eta_d \sinh \bigl(\eta\cdot \nabla d^j (y_0)\bigr) b^j_{-N_j}(y_0) \end{equation} where we set $\tilde{a}_{\frac{\gamma}{\varepsilon}}(x) := a_\gamma^{(0)}(x)$ and \begin{equation}\label{0athm1} D^{2}_\perp f := \Bigl( \partial_r\partial_p f \Bigr)_{1\leq r,p\leq d-1}\; . \end{equation} \end{theo}
\begin{rem} \begin{enumerate} \item The sum on the right hand side of \eqref{0thm1} is equal to the leading order of $\frac{1}{i} \Op_\varepsilon^{{\mathbb T}}\bigl(w\bigr) \Psi b^j(y_0)$ where \begin{equation}\label{current} w(x,\xi) := \partial_{\xi_d} t_0 (x, \xi - i \nabla d^j(x)) = -i \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(0)}(x) \frac{\gamma_d}{\varepsilon} e^{-\frac{i}{\varepsilon}\gamma\cdot (\xi - i\nabla d^j (x))}\; . \end{equation} To interpret this term (and formula \eqref{0thm1}) semiclassically, observe that $v(x,\xi) := \partial_\xi t_0 (x,\xi)$ is - by Hamilton's equation - the velocity field associated to the leading order kinetic Hamiltonian $t_0$ (or Hamiltonian $h_0 = t_0 + V_0$), evaluated on the physical phase space $T^*{\mathbb R} ^d$. In \eqref{current}, with respect to the momentum variable, the phase space is pushed into the complex domain, over the region $M_j\subset {\mathbb R} ^d$ from Hypothesis \ref{hypIMj} \[ T^*M_j \ni (x, \xi) \mapsto (x, \xi - i \nabla d^j(x)) \in \Lambda \subset T^*M_j\otimes {\mathbb C}\subset {\mathbb C}^{2d}\; . \] The smooth manifold $\Lambda$ lies as a graph over $T^*M_j$ and projects diffeomorphically. In some sense the complex deformation $\Lambda$ structurally stays as close as possible to the physical phase space $T^*M_j$, being both ${\mathbb R} $-symplectic and $I$-Lagrangian.
We recall the basic definitions (see \cite{sjo} or \cite{hesjo3}): The standard symplectic form in ${\mathbb C}^{2d}$ is $\sigma = \sum_j d\zeta_j \wedge dz_j$ where $z_j = x_j + i y_j$ and $\zeta_j = \xi_j + i \eta_j$. It decomposes into \begin{align*} \Re \sigma &= \frac{1}{2}(\sigma + \bar{\sigma}) = \sum_j d\xi_j \wedge dx_j - d\eta_j \wedge dy_j\\ \Im \sigma &= \frac{1}{2}(\sigma - \bar{\sigma}) = \sum_j d\xi_j \wedge dy_j + d\eta_j \wedge dx_j\, . \end{align*} Both $\Re \sigma$ and $\Im \sigma$ are real symplectic forms in ${\mathbb C}^{2d}$, considered as a real space of dimension $4d$. A submanifold $\Lambda$ of ${\mathbb C}^{2d}$ (of real dimension $2d$) is called $I$-Lagrangian if it is Lagrangian for $\Im \sigma$, and
$\Lambda$ is called ${\mathbb R} $-symplectic if $\Re\sigma|_\Lambda$ - which denotes the pull back under the embedding $\Lambda \hookrightarrow {\mathbb C}^{2d}$ - is non-degenerate. In our example, one checks in a straightforward way that both $T^*M_j$ and $\Lambda$ are ${\mathbb R} $-symplectic and $I$-Lagrangian. In this paper we shall not explicitly use this structure of $\Lambda$ (it is essential for the microlocal theory of resonances, see \cite{hesjo3}); rather, the manifold $\Lambda$ appears somewhat mysteriously through explicit calculation.
Still, it seems to be physical folklore that both tunneling and resonance phenomena are related to complex deformations of phase space. Our formulae make this precise in the following sense: The leading order $I_0$ of the tunneling is expressed in terms of
the velocity field $v|_\Lambda$ (in the direction $e_d$) where $\Lambda$ is the ${\mathbb R} $-symplectic, $I$-Lagrangian manifold obtained as deformation of $T^*M_j$ through the field $\nabla d^j$ induced by the Finsler distance $d^j$, the leading amplitudes $b^j_{-N_j}(y_0)$, $b^k_{-N_k}(y_0)$ of the WKB expansions and the ``hydrodynamical factor'' $\sqrt{\det D^{2}_\perp(d^j + d^k) (y_0)}$ describing deviations from the shortest path connecting the two potential minima.
Thus, in some sense, tunneling is described by a matrix element of a current (at least in leading order). On physical grounds it is perhaps very plausible that such formulae should hold in the semiclassical limit in any case which exhibits a leading order Hamiltonian. That this is actually true in the case of difference operators considered in this article is conceptually a main result of this paper. For pseudodifferential operators in ${\mathbb R} ^d$ this is proven in \cite{hepar}. For a less precise, but conceptually related, statement see \cite{kleinro4}. \item If $\mu_j$ and $\mu_k$ correspond to the ground state energy of the harmonic oscillators associated to the Dirichlet operators at the wells (see \cite{kleinro5}), we have $N_j = N_k = 0$. Moreover $b^j(y_0)$ and $b^k(y_0)$ are strictly positive. Thus if $\gamma_{jk}$ intersects ${\mathbb H}_d$ orthogonally, it follows from Hypothesis \ref{hyp1}, (1)(ii), that $I_0>0$.
\end{enumerate} \end{rem}
If there are finitely many minimal geodesics connecting $x^j$ and $x^k$, which are separated from each other away from the endpoints, their contributions to the interaction $w_{jk}$ simply add up (as conductances working in parallel do). The situation is more complicated (but conceptually similar) in the case where the minimal geodesics form a manifold.
\begin{hyp}\label{hypgeo2} For some $1\leq \ell < d$, the minimal geodesics from $x^j$ to $x^k$ (with respect to the Finsler distance $d$) form an orientable $(\ell+1)$-dimensional submanifold $G$ of ${\mathbb R} ^d$ (possibly singular at $x^j$ and $x^k$). Moreover $G$ intersects the hyperplane ${\mathbb H}_d$ transversally (possibly after redefining the origin). Then \begin{equation}\label{Gnull}
G_0:= G \cap {\mathbb H}_d \end{equation} is an $\ell$-dimensional submanifold of $G$. \end{hyp}
We shall show in Step 2 of the proof of Theorem \ref{wjk-expansion2} below (assuming only Hypothesis \ref{hypgeo2}) that any system of linearly independent normal vector fields $N_m,\, m=\ell+1, \ldots , d$, on $G_0$ possesses an extension to a suitable tubular neighborhood of $G_0$ as a family of commuting vector fields. In particular, with such a choice of vector fields $N_m,\, m=\ell + 1, \ldots, d$, \begin{equation}\label{transversHessian}
D^2_{\perp, G_0}\bigl(d^j + d^k\bigr) := \Bigl( N_m N_n (d^j + d^k)|_{G_0}\Bigr)_{\ell+1\leq m,n \leq d-1} \end{equation} is a symmetric matrix. We assume
\begin{hyp}\label{hypgeo2a} The transverse Hessian $ D^2_{\perp, G_0}\bigl(d^j + d^k\bigr)$ of $d^j + d^k$ at $G_0$ defined in \eqref{transversHessian} is positive for all points on $G$ (which we shortly denote as $G$ being non-degenerate at $G_0$). \end{hyp}
\begin{theo}\label{wjk-expansion2} Let $H_\varepsilon$ be a Hamiltonian as in \eqref{Hepein} satisfying Hypotheses \ref{hyp1} and \ref{hyp2} and assume that Hypotheses \ref{hypIMj}, \ref{hypkj}, \ref{hypgeo2} and \ref{hypgeo2a} are fulfilled. For $m=j,k$, let $\widehat{v}_m^\varepsilon$ be as in \eqref{hatvm}. Then there is a sequence $(I_p)_{p\in {\mathbb N}/2}$ in ${\mathbb R} $ such that \[ w_{jk} \sim \varepsilon^{-(N_j + N_k)} \varepsilon^{(1-\ell)/2} e^{-S_{jk}/\varepsilon} \sum_{p\in {\mathbb N}/2} \varepsilon^p I_p \, . \] The leading order is given by \begin{equation}\label{0thm2}
I_0 = (2\pi)^{(d-(\ell+1))/2} \int_{G_0} \frac{1}{\sqrt{\det D^2_{\perp, G_0}\bigl(d^j + d^k\bigr)(y)}} b^k(y) b^j(y) \sum_{\eta \in{\mathbb Z}^d} \tilde{a}_\eta(y) \eta_d \sinh \bigl(\eta\cdot \nabla d^j (y)\bigr) \, d\sigma (y) \end{equation} where we used the notation given in Theorem \ref{wjk-expansion}. \end{theo}
We remark that - after appropriate complex deformations - an essential idea in the proof of Theorem \ref{wjk-expansion} and Theorem \ref{wjk-expansion2} is to replace discrete sums by integrals up to a very small error and then apply stationary phase. This replacement of a sum by an integral is considerably more involved in the case of Theorem \ref{wjk-expansion2} and represents a main difficulty in the proof. \\
Concerning the case of the Schr\"odinger operator, results analogous to Theorem \ref{wjk-expansion2} certainly hold true, but to the best of our knowledge are not published (for the somewhat related case of resonances, see \cite{hesjo3}).\\
The outline of the paper is as follows.
Section \ref{section2} consists of preliminary results needed for the proofs of both theorems. The proofs of Theorem \ref{wjk-expansion} and Theorem \ref{wjk-expansion2} are then given in Section \ref{section3} and Section \ref{section4}, respectively. In Section \ref{section5} we give some additional results on the interaction matrix. Appendix \ref{app1} consists of some results for the symbolic calculus of periodic symbols. In Appendix \ref{app2} we recall a basic result from \cite{kleinro4} about the tunneling where the interaction matrix $w_{jk}$ is defined.\\
{\sl Acknowledgements.} The authors thank B. Helffer for many valuable discussions and remarks on the subject of this paper.
\section{Preliminary Results on the interaction term $w_{jk}$}\label{section2}
Throughout this section we assume that Hypotheses \ref{hyp1}, \ref{hyp2} and \ref{hypIMj} are fulfilled and the interaction term $w_{jk}$ is as defined in \eqref{wjkdef}. \\
Following \cite{hesjo2} and \cite{hepar}, we set for some $C_0>0$ \begin{equation}\label{phinull} \phi_0(t) := iC_0 t^2\qquad\text{and}\qquad \phi_s(t) := \phi_0(t-s)\, , \qquad s,t\in {\mathbb R} \end{equation} and define the multiplication operator \begin{equation}\label{pis} \pi_s(x) := \frac{\sqrt{C_0}}{\sqrt{\pi \varepsilon}} e^{\frac{i}{\varepsilon} \phi_s(x_d)} = \frac{\sqrt{C_0}}{\sqrt{\pi \varepsilon}} e^{-\frac{C_0}{\varepsilon} (x_d-s)^2} \, , \quad x\in{\mathbb R} ^d \end{equation} where the factor is chosen such that $\int_{\mathbb R} \pi_s\, ds = 1$.
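A quick numerical sanity check of the normalization of $\pi_s$ (the constants $C_0$ and $\varepsilon$ are chosen arbitrarily for illustration):

```python
import numpy as np

# pi_s(x) depends on x only through x_d; for fixed x_d the map s -> pi_s(x)
# is a Gaussian centered at s = x_d, and the prefactor sqrt(C0 / (pi * eps))
# makes its integral over s equal to 1, as used in Proposition 2.2 below.
C0, eps, x_d = 2.0, 0.05, 0.0
s = np.linspace(x_d - 3.0, x_d + 3.0, 200001)
pi_s = np.sqrt(C0 / (np.pi * eps)) * np.exp(-C0 * (x_d - s)**2 / eps)
integral = np.sum(0.5 * (pi_s[1:] + pi_s[:-1]) * np.diff(s))   # trapezoid rule
```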
\begin{prop}\label{prop1} \begin{equation}\label{0prop1} w_{jk} = \int_{-R}^0 \skpd{\bigl[T_\varepsilon, \pi_s\bigr] \mathbf{1}_E v_j}{\mathbf{1}_E v_k} \, ds+ \expord{S_0 + a - \eta}\, , \qquad \eta > 0 \; . \end{equation} \end{prop}
\begin{proof} By \cite{kleinro4}, Proposition 4.2, we get by arguments similar to those given in the proof of \cite{kleinro4}, Theorem 1.7, for all $\eta>0$ \[ w_{jk} = \skpd{\mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k} + \expord{S_0 + a - \eta} \; .\] Using $\int_{\mathbb R} \pi_s\, ds = 1$ this yields \begin{equation}\label{17prop1} w_{jk}=\skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k} + A + B + \expord{S_0 + a - \eta} \end{equation} where \begin{align*} A &:= \skpd{\int_{-\infty}^{-R} \pi_s\, ds \,\mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k}\quad\text{and} \\ B &:= \skpd{\int_0^\infty \pi_s\, ds\, \mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k} \; . \end{align*} By the assumptions on $E$ and $R$ in Hypothesis \ref{hypkj}, we have $A=0$. In order to show that $B=\expord{S_0 + a - \eta}$, we use \cite{kleinro4}, Lemma 5.1, telling us that for all $C>0$ and $\delta>0$ \begin{equation}\label{2prop1} \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] = \mathbf{1}_{\delta M_k}\bigl[ T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_{\delta M_k} + \expord{C} \end{equation} where, for any $A\subset {\mathbb R} ^d$, we set \begin{equation}\label{deltaA}
\delta A := \{ x\in {\mathbb R} ^d\, |\, \dist (x, \partial A) \leq \delta\}\; . \end{equation} Setting \begin{equation}\label{12prop1}
b_{\delta,k}:= \min \{ |x_d|\, |\, x\in E\cap \delta M_k\} \; , \end{equation} we write \begin{equation}\label{12aprop1}
\int_0^\infty \pi_s \, ds \,\mathbf{1}_{E\cap \delta M_k}(x) = e^{-\frac{C_0}{\varepsilon} b_{\delta,k}^2} \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} \int_0^\infty \mathbf{1}_{E\cap \delta M_k}(x)
e^{-\frac{C_0}{\varepsilon}((x_d-s)^2 - b_{\delta,k}^2)} \, ds\; . \end{equation} Since ${\mathbb H}^c_{d,R}\cap E \subset \stackrel{\circ}{M}_k$ by Hypothesis \ref{hypkj}, it follows that, for $\delta>0$ sufficiently small, $x_d<0$ for $x\in E\cap \delta M_k$ and thus
$|x_d - s| \geq |x_d| \geq b_{\delta,k}>0$ for $s\geq 0$. Therefore the substitution $z^2 = \frac{C_0}{\varepsilon} \bigl((x_d-s)^2 - b_{\delta,k}^2\bigr)$ on the right hand side of \eqref{12aprop1} yields $\frac{1}{\sqrt{\varepsilon}}ds \leq \frac{z\sqrt{\varepsilon}}{C_0\, b_{\delta,k}}dz$ and thus by straightforward calculation for some $C_\delta>0$ \begin{equation}\label{1prop1}
\sup_x \Bigl| \int_0^\infty \pi_s \, ds \mathbf{1}_{E\cap \delta M_k}(x)\Bigr| \leq C_\delta \sqrt{\varepsilon} e^{-\frac{C_0}{\varepsilon}b_{\delta,k}^2}\; . \end{equation} Combining \eqref{2prop1} and \eqref{1prop1} and using $d^j(x) + d^k(x) \geq S_{jk}$ gives for all $\delta>0$ \begin{equation}\label{3prop1}
|B| \leq C_\delta \sqrt{\varepsilon} e^{-\frac{C_0}{\varepsilon}b_{\delta,k}^2} e^{-\frac{S_{jk}}{\varepsilon}} \Bigl\|e^{\frac{d^j}{\varepsilon}} v_j\Bigr\|_{\ell^2}
\Bigl\| e^{\frac{d^k}{\varepsilon}} \mathbf{1}_{\delta M_k}\bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr]\mathbf{1}_{\delta M_k} v_k\Bigr\|_{\ell^2}\, . \end{equation} The definition of $T_\varepsilon$ and $\mathbf{1}_{M_k}v_k = v_k$ yield $\bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr]v_k(x) = \bigl(\mathbf{1} - \mathbf{1}_{M_k}\bigr)(x)\sum_\gamma a_\gamma (x;\varepsilon) v_k(x+\gamma)$. The triangle inequality $d^k(x) \leq d(x, x+\gamma) + d^k(x+\gamma)$ and the Cauchy-Schwarz-inequality with respect to $\gamma$ therefore give \begin{multline}\label{4prop1}
\Bigl\| e^{\frac{d^k}{\varepsilon}} \mathbf{1}_{\delta M_k} [T_\varepsilon, \mathbf{1}_{M_k}]\mathbf{1}_{\delta M_k} v_k\Bigr\|^2_{\ell^2} =
\sum_{x\in(\varepsilon {\mathbb Z})^d}\Bigl|\mathbf{1}_{\delta M_k}\bigl(\mathbf{1} - \mathbf{1}_{M_k}\bigr)(x) \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma(x;\varepsilon) e^{\frac{d^k(x)}{\varepsilon}}
\bigl(\mathbf{1}_{\delta M_k} v_k\bigr)(x+\gamma)\Bigr|^2 \\
\leq \sum_{x\in M_{k,\varepsilon}^c\cap \delta M_k} \Bigl( \sum_{\gamma \in(\varepsilon {\mathbb Z})^d} \bigl| a_\gamma (x;\varepsilon) e^{\frac{d(x, x+\gamma)}{\varepsilon}}
\langle \gamma\rangle_\varepsilon^{\frac{d+\eta}{2}}\bigr|^2 \Bigr)
\Bigl( \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} \bigl| e^{\frac{d^k(x+\gamma)}{\varepsilon}} \bigl(\mathbf{1}_{\delta M_k} v_k\bigr)(x+\gamma)
\langle \gamma\rangle_\varepsilon^{-\frac{d+\eta}{2}}\bigr|^2 \Bigr) \end{multline}
where we set $\langle \gamma\rangle_\varepsilon := \sqrt{\varepsilon^2 + |\gamma|^2}$. By Hypothesis \ref{hypIMj}, for $\eta>0$ chosen consistently, the first factor on the right hand side of \eqref{4prop1} is bounded by some constant $C>0$ uniformly with respect to $x$. Changing the order of summation therefore yields \begin{align}
\Bigl\| e^{\frac{d^k}{\varepsilon}} \mathbf{1}_{\delta M_k}[T_\varepsilon, \mathbf{1}_{M_k}]\mathbf{1}_{\delta M_k} v_k\Bigr\|^2_{\ell^2} &\leq C \sum_{\gamma\in (\varepsilon {\mathbb Z})^d} \langle \gamma\rangle_\varepsilon^{-(d+\eta)}
\sum_{x\in M_{k,\varepsilon}^c\cap \delta M_k}\bigl|e^{\frac{d^k(x+\gamma)}{\varepsilon}} \bigl(\mathbf{1}_{\delta M_k} v_k\bigr)(x+\gamma)\bigr|^2 \nonumber\\
&\leq \tilde{C} \Bigl\| e^{\frac{d^k}{\varepsilon}}v_k\Bigr\|^2_{\ell^2}\label{5prop1}\; . \end{align} We now insert \eqref{5prop1} into \eqref{3prop1} and use that, by \cite{kleinro5}, Proposition 3.1, the Dirichlet eigenfunctions decay exponentially fast, i.e. there is a constant $N_0 \in {\mathbb N}$ such that
$\| e^{\frac{d^i}{\varepsilon}}v_i\|_{\ell^2} \leq \varepsilon^{-N_0}$ for $i=j, k$. This gives for any $\eta>0$
\[ |B| \leq C e^{-\frac{1}{\varepsilon}(C_0 b_{\delta,k}^2 + S_{jk} - \eta)}\; . \] Since $b_{\delta,k}>0$ we can choose $C_0$ such that $C_0 b_{\delta,k}^2 + S_{j,k} \geq S_0 + a$, showing that $B = \expord{S_0 + a - \eta}$ for $C_0$ sufficiently large and therefore by \eqref{17prop1} \begin{equation}\label{6prop1} w_{jk} = \skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k} + \expord{S_0 + a - \eta}\; . \end{equation} In order to get the stated result, we use the symmetry of $T_\varepsilon$ to write \begin{align} &\skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E v_j}{ \bigl[T_\varepsilon, \mathbf{1}_{M_k}\bigr] \mathbf{1}_E v_k} \nonumber\\ &\qquad=\skpd{T_\varepsilon \int_{-R}^0 \pi_s\, ds\, \mathbf{1}_E v_j}{\mathbf{1}_E v_k} - \skpd{\int_{-R}^0 \pi_s\, ds\, \mathbf{1}_E v_j}{ \mathbf{1}_{M_k} T_\varepsilon \mathbf{1}_E v_k}\nonumber \\ &\qquad= \skpd{\bigl[ T_\varepsilon, \int_{-R}^0 \pi_s\, ds\bigr] \mathbf{1}_E v_j}{ \mathbf{1}_E v_k} + \sum_{i=1}^5 R_i\label{7prop1} \end{align} where by commuting $T_\varepsilon$ with $\mathbf{1}_E$ and inserting $\mathbf{1}_{M_j} + \mathbf{1}_{M_j^c}$ in $R_2$ and $R_3$ \begin{align*} R_1 &:= \skpd{\int_{-R}^0 \pi_s\, ds \bigl[T_\varepsilon, \mathbf{1}_E\bigr] v_j}{\mathbf{1}_E v_k} \\ R_2 &:= \skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E\mathbf{1}_{M_j} T_\varepsilon v_j}{ \mathbf{1}_E v_k} \\ R_3 &:= \skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E\mathbf{1}_{M_j^c} T_\varepsilon v_j}{\mathbf{1}_E v_k} \\ R_4 &:= -\skpd{\int_{-R}^0 \pi_s\, ds\, \mathbf{1}_E v_j}{ \mathbf{1}_E\mathbf{1}_{M_k} T_\varepsilon v_k} \\ R_5 &:= -\skpd{\int_{-R}^0 \pi_s\, ds \,\mathbf{1}_E v_j}{ \mathbf{1}_{M_k}\bigl[T_\varepsilon, \mathbf{1}_E\bigr] v_k}\; . \end{align*}
We are now going to prove that $\bigl|\sum_i R_i\bigr| = \expord{S_0 + a - \eta}$ for all $\eta>0$.
Since $\mathbf{1}_E(x) \bigl(\mathbf{1}_E(x+\gamma) -\mathbf{1}_E(x)\bigr)$ is equal to $-1$ for $x\in E, x+\gamma \in E^c$ and zero otherwise, we have \begin{align}
\bigl|R_1\bigr| &= \Bigl| \sum_{x,\gamma\in (\varepsilon {\mathbb Z})^d} \int_{-R}^0 \pi_s\, ds \, v_k(x) a_\gamma(x; \varepsilon) v_j(x+\gamma) \mathbf{1}_E (x)
\bigl( \mathbf{1}_E(x+\gamma) - \mathbf{1}_E (x)\bigr)\Bigr| \nonumber\\
&= \Bigl| \sum_{x,\gamma\in (\varepsilon {\mathbb Z})^d} \int_{-R}^0 \pi_s\, ds \, v_k(x) a_\gamma(x; \varepsilon) v_j(x+\gamma) \mathbf{1}_E (x) \mathbf{1}_{E^c}(x+\gamma)\Bigr|\; . \label{8prop1} \end{align} Using for the first step that $d^j(x+\gamma) + d^k(x+\gamma) \geq S_0 + a$ for $x+\gamma\in E^c$ and for the second step the triangle inequality for $d$, we get \begin{align}
\text{rhs} \eqref{8prop1} &\leq e^{-\frac{S_0 + a}{\varepsilon}} \sum_{x,\gamma\in(\varepsilon {\mathbb Z})^d} \Bigl|
\int_{-R}^0 \pi_s\, ds \, e^{\frac{d^k(x+\gamma)}{\varepsilon}}v_k(x) a_\gamma(x; \varepsilon) e^{\frac{d^j(x+\gamma)}{\varepsilon}}v_j(x+\gamma) \mathbf{1}_E (x)
\mathbf{1}_{E^c}(x+\gamma)\Bigr|\nonumber \\
&\leq e^{-\frac{S_0 + a}{\varepsilon}} \sum_{x\in(\varepsilon {\mathbb Z})^d} \Bigl|
\Bigl( \int_{-R}^0 \pi_s\, ds \, e^{\frac{d^k}{\varepsilon}}\mathbf{1}_E v_k\Bigr)(x)\Bigr|\sum_{\gamma\in (\varepsilon {\mathbb Z})^d} \Bigl| a_\gamma(x; \varepsilon) e^{\frac{d(x, x+\gamma)}{\varepsilon}}
\Bigl( e^{\frac{d^j}{\varepsilon}}\mathbf{1}_{E^c} v_j\Bigr) (x+\gamma)\Bigr| \nonumber\\
&\leq e^{-\frac{S_0 + a}{\varepsilon}} \Bigl\| e^{\frac{d^k}{\varepsilon}}v_k\Bigr\|_{\ell^2} \Biggl(\sum_{x\in (\varepsilon {\mathbb Z})^d}\Bigl| \sum_{\gamma\in (\varepsilon {\mathbb Z})^d} a_\gamma(x; \varepsilon)
e^{\frac{d(x, x+\gamma)}{\varepsilon}}\Bigl( e^{\frac{d^j}{\varepsilon}} v_j\Bigr) (x+\gamma)\Bigr|^2 \Biggr)^{1/2} \label{9prop1} \end{align} where in the last step we used the Cauchy-Schwarz-inequality with respect to $x$ and $\int_{\mathbb R} \pi_s\, ds = 1$. By the Cauchy-Schwarz-inequality with respect to $\gamma$, analogously to \eqref{4prop1} and \eqref{5prop1}, we get \begin{align}
&\sum_{x\in (\varepsilon {\mathbb Z})^d}\Bigl| \sum_{\gamma\in (\varepsilon {\mathbb Z})^d} a_\gamma(x; \varepsilon)
e^{\frac{d(x, x+\gamma)}{\varepsilon}}\Bigl( e^{\frac{d^j}{\varepsilon}} v_j\Bigr) (x+\gamma)\Bigr|^2 \nonumber\\
&\quad = \sum_{x\in (\varepsilon {\mathbb Z})^d} \Bigl( \sum_{\gamma \in(\varepsilon {\mathbb Z})^d} \bigl| a_\gamma (x;\varepsilon) e^{\frac{d(x, x+\gamma)}{\varepsilon}}
\langle \gamma\rangle_\varepsilon^{(d+\eta)/2}\bigr|^2 \Bigr)
\Bigl( \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} \bigl| e^{\frac{d^j(x+\gamma)}{\varepsilon}} v_j(x+\gamma) \langle \gamma\rangle_\varepsilon^{-(d+\eta)/2}\bigr|^2 \Bigr)\nonumber\\
&\quad \leq C \Bigl\| e^{\frac{d^j}{\varepsilon}}v_j\Bigr\|^2_{\ell^2}\label{18prop1}\; . \end{align} Inserting \eqref{18prop1} into \eqref{9prop1} gives by \eqref{8prop1} together with \cite{kleinro5}, Proposition 3.1, for any $\eta>0$ \begin{equation}\label{10prop1}
\bigl|R_1\bigr| \leq C e^{-\frac{S_0 + a}{\varepsilon}} \Bigl\| e^{\frac{d^k}{\varepsilon}}v_k\Bigr\|_{\ell^2} \Bigl\| e^{\frac{d^j}{\varepsilon}}v_j\Bigr\|_{\ell^2} \leq C e^{-\frac{S_0 + a-\eta}{\varepsilon}}\; . \end{equation} Analogous arguments show \begin{equation}\label{16prop1}
|R_5| = \expord{S_0 + a - \eta}\; . \end{equation}
We estimate $R_2$ and $R_4$ together, writing \[
\bigl| R_2 + R_4 \bigr| \leq \sum_{x\in(\varepsilon {\mathbb Z})^d} \int_{-R}^0 \pi_s\, ds \mathbf{1}_E(x) \Bigl| v_k(x) \bigl(\mathbf{1}_{M_j}T_\varepsilon v_j\bigr)(x) - v_j(x) \bigl( \mathbf{1}_{M_k}
T_\varepsilon v_k\bigr) (x)\Bigr| \; . \] Now using that \[ v_k \mathbf{1}_{M_j}T_\varepsilon v_j - v_j \mathbf{1}_{M_k}T_\varepsilon v_k + V_\varepsilon v_j v_k - V_\varepsilon v_j v_k = v_k H_\varepsilon^{M_j} v_j - v_j H_\varepsilon^{M_k} v_k =
(\mu_j - \mu_k) v_j v_k \] we get by Hypothesis \ref{hypkj}, Cauchy-Schwarz-inequality and since $d^j(x) + d^k(x) \geq S_{jk}$ \begin{align}
\bigl| R_2 + R_4 \bigr| &\leq |\mu_j - \mu_k| e^{-\frac{S_{jk}}{\varepsilon}} \sum_{x\in(\varepsilon {\mathbb Z})^d} \int_{-R}^0 \pi_s\, ds \mathbf{1}_E(x) \Bigl| e^{\frac{d^j(x)}{\varepsilon}} v_j(x)
e^{\frac{d^k(x)}{\varepsilon}} v_k(x)\Bigr|\nonumber\\
&\leq e^{-\frac{S_{jk} + a - \delta}{\varepsilon}} \Bigl\| e^{\frac{d^j}{\varepsilon}}v_j\Bigr\|_{\ell^2} \Bigl\| e^{\frac{d^k}{\varepsilon}}v_k\Bigr\|_{\ell^2}\nonumber \\ &\leq C e^{-\frac{S_0 + a - \eta}{\varepsilon}}\label{11prop1} \end{align} where in the last step we used again \cite{kleinro5}, Proposition 3.1, and $S_{jk}\geq S_0$.
The term $|R_3|$ can be estimated by methods similar to those used to estimate $|B|$ above. By Hypothesis \ref{hypkj} we have $E\cap {\mathbb H}_{d,R} \subset \stackrel{\circ}{M}_j$. Thus $x_d>0$ for $x\in E\cap M_j^c$ and, setting
$b_j:= \min \{|x_d|\,|\, x\in E\cap M_j^c\}$, we have $|x_d - s| \geq |x_d|\geq b_j >0$ for $s\leq 0$. Thus we get, analogously to \eqref{12aprop1} and \eqref{1prop1}, \begin{equation}\label{13prop1}
\sup_x \Bigl| \int_{-R}^0 \pi_s \, ds \mathbf{1}_{E\cap M^c_j}(x)\Bigr| \leq C \sqrt{\varepsilon} e^{-\frac{C_0}{\varepsilon}b_j^2} \end{equation} and similar to \eqref{3prop1}, using Cauchy-Schwarz-inequality, \begin{equation}\label{14prop1}
\bigl| R_3\bigr| \leq C \sqrt{\varepsilon} e^{-\frac{1}{\varepsilon}(C_0 b_j^2 + S_{jk})}\Bigl\| e^{\frac{d^k}{\varepsilon}} v_k\Bigr\|_{\ell^2}
\Bigl\| e^{\frac{d^j}{\varepsilon}} T_\varepsilon v_j\Bigr\|_{\ell^2}\; . \end{equation} As in \eqref{4prop1} and \eqref{5prop1}, we estimate the last factor in \eqref{14prop1} as \begin{multline*}
\bigl\| e^{\frac{d^j}{\varepsilon}} T_\varepsilon v_j\bigr\|^2_{\ell^2} =
\sum_{x\in(\varepsilon {\mathbb Z})^d}\Bigl|\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma(x;\varepsilon) e^{\frac{d^j(x)}{\varepsilon}} v_j(x+\gamma)\Bigr|^2 \\
\leq \sum_{x\in (\varepsilon {\mathbb Z})^d} \Bigl( \sum_{\gamma \in(\varepsilon {\mathbb Z})^d} \bigl| a_\gamma (x;\varepsilon) e^{\frac{d(x, x+\gamma)}{\varepsilon}}
\langle \gamma\rangle_\varepsilon^{\frac{d+\eta}{2}}\bigr|^2 \Bigr)
\Bigl( \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} \bigl| e^{\frac{d^j(x+\gamma)}{\varepsilon}} v_j(x+\gamma) \langle \gamma\rangle_\varepsilon^{-\frac{d+\eta}{2}}\bigr|^2 \Bigr)\\ \leq C \sum_{\gamma\in (\varepsilon {\mathbb Z})^d} \langle \gamma\rangle_\varepsilon^{-(d+\eta)}
\sum_{x\in (\varepsilon {\mathbb Z})^d}\bigl|e^{\frac{d^j(x+\gamma)}{\varepsilon}} v_j(x+\gamma)\bigr|^2
\leq \tilde{C} \Bigl\| e^{\frac{d^j}{\varepsilon}}v_j\Bigr\|^2_{\ell^2}\; . \end{multline*}
Thus choosing $C_0$ such that $C_0 b_j^2 + S_{jk} \geq S_0 + a$, we get again by \cite{kleinro5}, Proposition 3.1, for any $\eta>0$ \begin{equation}\label{15prop1}
\bigl| R_3\bigr| \leq C e^{-\frac{1}{\varepsilon}(C_0 b_j^2 + S_{jk})}\Bigl\| e^{\frac{d^k}{\varepsilon}} v_k\Bigr\|_{\ell^2}
\Bigl\| e^{\frac{d^j}{\varepsilon}} v_j\Bigr\|_{\ell^2} \leq C e^{-\frac{1}{\varepsilon}(S_0 + a - \eta)}\; . \end{equation} Inserting \eqref{15prop1}, \eqref{11prop1}, \eqref{16prop1} and \eqref{10prop1} into \eqref{7prop1} yields \eqref{0prop1} by \eqref{6prop1} and interchanging integration and summation.
\end{proof}
In the next step we analyze the commutator in \eqref{0prop1} using symbolic calculus.
\begin{prop} \label{prop2} For any $u\in \ell^2((\varepsilon {\mathbb Z})^d)$ compactly supported and $x\in (\varepsilon {\mathbb Z})^d$ we have with the notation $\xi = (\xi', \xi_d)\in {\mathbb T}^d$ \begin{multline}\label{0prop2} \bigl[ T_\varepsilon, \pi_s\bigr] u(x) = \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} e^{\frac{i}{2\varepsilon}( \phi_s(y_d) + \phi_s(x_d))} u(y)\\ \times \;\int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}(y-x)\xi} \Bigl( t\bigl(x, \xi', \xi_d - \frac{1}{2}\phi_s'(\frac{x_d + y_d}{2}); \varepsilon \bigr) - t\bigl(x, \xi', \xi_d + \frac{1}{2}\phi_s'(\frac{x_d + y_d}{2}); \varepsilon\bigr) \Bigr) \, d\xi \end{multline} where $\phi_s' (t) = \frac{d}{dt}\phi_s(t)= 2 i C_0 (t-s)$ and $T_\varepsilon = \Op_\varepsilon^{\mathbb T} (t)$ as given in \eqref{psdo2dTorus}. \end{prop}
\begin{proof} By Definition \ref{pseudo},(4), we have \begin{align}\label{1prop2} \bigl( T_\varepsilon \pi_s u\bigr) (x) &= \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y) \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}((y-x)\xi + \phi_s(y_d))} t(x, \xi; \varepsilon) \, d\xi \\
\bigl(\pi_s T_\varepsilon u\bigr) (x) &= \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y) \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}((y-x)\xi + \phi_s(x_d))} t(x, \xi; \varepsilon) \, d\xi \label{2prop2} \end{align} Setting \begin{equation}\label{3prop2} \xi_{\pm} := \Bigl( \xi', \xi_d \pm \frac{1}{2} \phi_s'\bigl(\frac{x_d + y_d}{2}\bigr)\Bigr) \end{equation} we have \begin{align}\label{4prop2} (y-x)\xi + \phi_s(y_d) &= (y-x)\xi_+ + \frac{1}{2} \bigl( \phi_s(y_d) + \phi_s(x_d)\bigr) \\ (y-x)\xi + \phi_s(x_d) &= (y-x)\xi_- + \frac{1}{2} \bigl( \phi_s(y_d) + \phi_s(x_d)\bigr) \nonumber \end{align} In fact, \begin{equation}\label{5prop2} (y-x)\xi_{\pm} + \frac{1}{2}\bigl( \phi_s(y_d) + \phi_s(x_d)\bigr) = (y-x)\xi \pm (y_d - x_d) iC_0 \Bigl( \frac{x_d + y_d}{2} - s\Bigr) + \frac{C_0 i}{2} \Bigl( (y_d - s)^2 + (x_d-s)^2\Bigr) \; . \end{equation} Writing $y_d - x_d = (y_d - s) - (x_d - s)$ and $\frac{x_d + y_d}{2} -s = \frac{1}{2} \bigl((x_d - s) + (y_d-s)\bigr)$ gives \begin{align*} \text{rhs}\eqref{5prop2} &= (y-x)\xi \pm \frac{iC_0}{2} \Bigl( (y_d - s)^2 - (x_d-s)^2\Bigr) + \frac{iC_0}{2} \Bigl( (y_d-s)^2 + (x_d-s)^2\Bigr) \\ &= \begin{cases} (y-x)\xi + \phi_s(y_d)\; \text{ for }\; + \\ (y-x)\xi + \phi_s(x_d)\; \text{ for } \; - \end{cases}\; . \end{align*}
Since, with respect to $\xi$, $t$ has an analytic continuation to ${\mathbb C}^d$, it is possible to combine the integrals in \eqref{1prop2} and \eqref{2prop2} using the contour deformation given by the substitution \eqref{3prop2}. To this end, we first need the following lemma.
\begin{Lem}\label{Lem1}
Let $f: {\mathbb C}\rightarrow {\mathbb C}$ be analytic in $\Omega_b:= \{ z\in {\mathbb C}\,|\, \Im z < b\}$ for some $b>0$ and $2\pi$-periodic on the real axis, i.e. $f(x+2\pi) = f(x)$ for all $x\in {\mathbb R} $. Then for any $a<b$ \[ \int_{-\pi + ia}^{\pi + ia} f(z)\, dz = \int_{-\pi}^\pi f(x)\, dx\; . \] \end{Lem}
\begin{proof}[Proof of Lemma \ref{Lem1}] Since $f$ is periodic on the real line, it follows from the identity theorem that $f(z) = f(z+2\pi)$ for $z\in \Omega_b$. Then Cauchy's Theorem yields \begin{equation}\label{1Lem1}
\int_{-\pi + ia}^{\pi + ia} f(z)\, dz - \int_{-\pi }^{\pi } f(z)\, dz = \int_{-\pi + ia}^{-\pi} f(z)\, dz + \int_{\pi }^{\pi + ia} f(z)\, dz\; . \end{equation} The substitution $\tilde{z}= z - 2\pi$ in the last integral on the right hand side of \eqref{1Lem1} gives by the periodicity of $f$ \begin{align*} \text{rhs}\eqref{1Lem1} &= \int_{-\pi + ia}^{-\pi} f(z)\, dz + \int_{-\pi }^{-\pi + ia} f(\tilde{z} + 2\pi)\, d\tilde{z}\\ &= \int_{-\pi + ia}^{-\pi} f(z)\, dz + \int_{-\pi }^{-\pi + ia} f(\tilde{z})\, d\tilde{z} = 0 \; , \end{align*} proving the stated result. \end{proof}
We come back to the proof of Proposition \ref{prop2}. To shorten the notation, we set \begin{equation}\label{6prop2}
a:= \frac{1}{2i} \phi_s'\bigl(\frac{x_d + y_d}{2}\bigr) = C_0 \Bigl( \frac{x_d + y_d}{2} - s\Bigr) \; , \end{equation} so that $\xi_\pm = (\xi', \xi_d \pm ia)$ by \eqref{3prop2}. Inserting the substitution \eqref{3prop2} in \eqref{1prop2}, we get by \eqref{4prop2} and \eqref{6prop2} \begin{align} \bigl( T_\varepsilon \pi_s u\bigr) (x) &= \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y) \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}\bigl((y-x)\xi_+ + \frac{1}{2}(\phi_s(y_d) + \phi_s(x_d))\bigr)} t (x, \xi; \varepsilon) \, d\xi \nonumber\\ &= \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y)\int_{[-\pi, \pi]^{d-1}}\, d\xi'_+
\nonumber \\ & \hspace{5mm} \times\; \int_{-\pi+ia}^{\pi + i a} \, d(\xi_+)_d e^{\frac{i}{\varepsilon}\bigl((y-x)\xi_+ + \frac{1}{2}(\phi_s(y_d) + \phi_s(x_d))\bigr)} t (x, \xi_+', (\xi_+)_d - ia; \varepsilon)\nonumber \\ & = \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y) \int_{[-\pi, \pi]^{d}} e^{\frac{i}{\varepsilon}\bigl((y-x)\xi + \frac{1}{2}(\phi_s(y_d) + \phi_s(x_d))\bigr)} t (x, \xi', \xi_d - ia; \varepsilon) \, d\xi \label{7prop2} \end{align} where in the last step we used Lemma \ref{Lem1}.
By analogous arguments for \eqref{2prop2} we get \begin{equation}\label{8prop2}
\bigl(\pi_s T_\varepsilon u\bigr) (x) = \frac{\sqrt{C_0}}{\sqrt{\pi\varepsilon}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} u(y) \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}\bigl((y-x)\xi + \frac{1}{2}(\phi_s(y_d) + \phi_s(x_d))\bigr)} t (x, \xi', \xi_d + ia; \varepsilon) \, d\xi \end{equation} and thus combining \eqref{7prop2} and \eqref{8prop2} gives \eqref{0prop2}.
\end{proof}
The idea is now to write the $s$-dependent terms in \eqref{0prop2} as the $s$-derivative of some symbol. To this end, we first introduce some smooth cut-off functions on the right hand side of \eqref{0prop1}. \\
Let $\chi_R\in \mathscr C_0^\infty ({\mathbb R} )$ be such that $\chi_R (s)=1$ for $s\in [-R, R]$ and $\chi_E\in \mathscr C_0^\infty ({\mathbb R} ^d)$ such that $\chi_E(x) = 1$ for $x\in E$. Moreover we assume that $\chi_R(s) = \chi_R(-s)$ and $\chi_E(x) = \chi_E(-x)$. Then it follows directly from Proposition \ref{prop1} that \begin{equation}\label{0prop1a}
w_{jk} = \int_{-R}^0 \skpd{\chi_R(s) \bigl[T_\varepsilon, \pi_s\bigr] \chi_E \mathbf{1}_E v_j}{\chi_E\mathbf{1}_E v_k}\, ds + \expord{S_0 + a - \eta}\, , \qquad \eta>0 \; . \end{equation}
\begin{prop}\label{prop3} There are compactly supported smooth mappings \[ {\mathbb R} \ni s\mapsto q_s \in S_0^0(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)\quad\text{and}\quad {\mathbb R} \ni s\mapsto r_s \in S_0^\infty(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)\] such that $q_s(x, y, \xi; \varepsilon)$ and $r_s(x, y, \xi; \varepsilon)$ have analytic continuations to ${\mathbb C}^d$ with respect to $\xi\in{\mathbb R} ^d$ (identifying functions on ${\mathbb T}^d$ with periodic functions on ${\mathbb R} ^d$). Moreover, $q_s$ has an asymptotic expansion \begin{equation}\label{00prop3} q_s(x, y, \xi; \varepsilon) \sim \sum_{n=0}^\infty \varepsilon^n q_{n,s}(x, y, \xi) \end{equation} and, setting $\sigma:= \frac{x_d + y_d}{2} - s$, \begin{multline}\label{0prop3}
\chi_R(s) \chi_E(x)\chi_E(y) e^{-\frac{C_0}{\varepsilon}\sigma^2}\Bigl[ t \bigl(x, \xi', \xi_d - i C_0 \sigma; \varepsilon\bigr) - t \bigl(x, \xi', \xi_d + i C_0\sigma; \varepsilon\bigr)\Bigr] \\ = \varepsilon \partial_s \Bigl[ e^{-\frac{C_0}{\varepsilon}\sigma^2} q_s(x, y, \xi; \varepsilon)\Bigr]
+ e^{-\frac{C_0}{\varepsilon}\sigma^2} r_s (x, y, \xi; \varepsilon)\; . \end{multline} \end{prop}
\begin{proof} We first remark that by \eqref{talsexp} \begin{multline}\label{1prop3}
t \bigl(x, \xi', \xi_d - i C_0 \sigma; \varepsilon \bigr) - t \bigl(x, \xi', \xi_d + i C_0\sigma; \varepsilon\bigr)
\\ =\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma (x, \varepsilon) e^{-\frac{i}{\varepsilon}\gamma'\xi'} \Bigl[ e^{-\frac{i}{\varepsilon}\gamma_d (\xi_d - iC_0\sigma)} - e^{-\frac{i}{\varepsilon}\gamma_d(\xi_d + i C_0 \sigma)}\Bigr]\\ = -\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma (x, \varepsilon) e^{-\frac{i}{\varepsilon}\gamma\xi}\, 2 \sinh \Bigl( \frac{\gamma_d}{\varepsilon}C_0 \sigma\Bigr)\; . \end{multline} Thus from the assumptions on $\chi_R$ and $\chi_E$ it follows that the left hand side of \eqref{0prop3} is odd with respect to $\sigma\mapsto -\sigma$. Modulo $S^\infty$, \eqref{0prop3} is equivalent to \begin{equation}\label{2prop3}
\chi_R(s) \chi_E(x)\chi_E(y)\Bigl[ t \Bigl(x, \xi', \xi_d - i C_0 \sigma; \varepsilon\Bigr) - t \Bigl(x, \xi', \xi_d + i C_0 \sigma; \varepsilon\Bigr)\Bigr] = \bigl( 2 C_0 \sigma + \varepsilon \partial_s\bigr) q_s(x, y, \xi; \varepsilon)\; . \end{equation} Here $q$ is compactly supported in $x, y$ and $s$ (and thus in $\sigma$) and $q$ is even with respect to $\sigma\mapsto -\sigma$ since $\partial_s = -\partial_\sigma$. We set \begin{align}\label{3prop3}
g_s(x, y, \xi; \varepsilon) &:= \chi_R(s) \chi_E(x)\chi_E(y)\frac{1}{2 C_0 \sigma} \Bigl(t \bigl(x, \xi', \xi_d - i C_0 \sigma; \varepsilon \bigr) - t \bigl(x, \xi', \xi_d + i C_0 \sigma; \varepsilon\bigr)\Bigr)\\ &= \sum_{\ell = 0}^\infty \varepsilon^\ell g_{\ell, s}(x, y, \xi)\nonumber \end{align} where by \eqref{1prop3} \begin{equation}\label{6prop3} g_{\ell, s}(x, y, \xi) :=
- \chi_R(s) \chi_E(x)\chi_E(y)\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a^{(\ell)}_\gamma(x) e^{-\frac{i}{\varepsilon}\gamma\xi} \frac{1}{C_0 \sigma} \sinh \Bigl( \frac{\gamma_d}{\varepsilon} C_0 \sigma\Bigr)\; . \end{equation} Then \eqref{2prop3} can be written as \begin{equation}\label{4prop3} \Bigl( 1 + \frac{\varepsilon}{2 C_0 \sigma}\partial_s \Bigr) q_s(x, y, \xi; \varepsilon) = g_s (x, y, \xi; \varepsilon)\; . \end{equation} Formally \eqref{4prop3} leads to the von-Neumann-series \begin{equation}\label{5prop3} q_s (x, y, \xi; \varepsilon) = \sum_{m=0}^\infty \varepsilon^m \Bigl(-\frac{1}{2C_0 \sigma} \partial_s\Bigr)^m g_s (x, y, \xi; \varepsilon)\; . \end{equation} Using \eqref{00prop3}, \eqref{3prop3} and Cauchy-product, \eqref{5prop3} gives \begin{equation}\label{7prop3} q_{n, s}(x, y, \xi) = \sum_{\ell+ m = n} \Bigl(-\frac{1}{2C_0 \sigma} \partial_s\Bigr)^m g_{\ell, s}(x, y, \xi)\; . \end{equation} By \eqref{3prop3} $g$ and $g_{\ell}$, $\ell\in{\mathbb N}$, are even with respect to $\sigma\mapsto -\sigma$. Moreover, the operator $\frac{1}{\sigma}\partial_s = -\frac{1}{\sigma}\partial_\sigma$ maps a monomial in $\sigma$ of order $2m$ to a monomial of order $\max \{0, 2m-2\}$. Thus, for $x,y\in\supp \chi_E$ and $s\in [-R, R]$, the right hand side of \eqref{7prop3} is well-defined and analytic and even in $\sigma$ for any $n\in{\mathbb N}$. In particular, it is bounded at $\sigma = 0$ or equivalently at $s=\frac{x_d + y_d}{2}$. Therefore $q_{n,s} \in S_0^0(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)$ for any $n\in{\mathbb N}$ and it is $\mathscr C^\infty_0$ with respect to $s\in{\mathbb R} $.
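The key point in this argument can be made explicit on monomials: since $\partial_s = -\partial_\sigma$, each application of $-\frac{1}{2C_0\sigma}\partial_s = \frac{1}{2C_0\sigma}\partial_\sigma$ acts on even powers of $\sigma$ as

```latex
\frac{1}{2C_0 \sigma}\, \partial_\sigma\, \sigma^{2m}
  = \frac{m}{C_0}\, \sigma^{2m-2} \quad (m \geq 1)\, ,
\qquad
\frac{1}{2C_0 \sigma}\, \partial_\sigma\, \sigma^{0} = 0 \, ,
```

so even analytic functions of $\sigma$ are mapped to even analytic functions of $\sigma$; in particular, no negative powers of $\sigma$ are produced and each term in \eqref{7prop3} stays bounded at $\sigma = 0$.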
By a Borel-procedure with respect to $\varepsilon$ there exists a symbol $q_s\in S_0^0(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)$ which is $\mathscr C_0^\infty$ as a function of $s\in{\mathbb R} $ such that \eqref{00prop3} holds. Moreover, $\partial_s q_s(x, y, \xi; \varepsilon)$ is analytic in $\xi$ by uniform convergence of the Borel procedure and the analyticity of $q_{n,s}$. Thus \eqref{0prop3} holds for some $r_s \in S_0^\infty(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)$ and since the left hand side of \eqref{2prop3} has an analytic continuation to ${\mathbb C}^d$ with respect to $\xi$, the same is true for $r_s(x, y, \xi; \varepsilon)$.
\end{proof}
We remark that by \eqref{7prop3} and \eqref{6prop3}, the leading order term $q_0$ at the point $s=\frac{x_d + y_d}{2}$ is given by \begin{align} q_{0,\frac{x_d + y_d}{2}}(x, y, \xi) &= - \chi_R\Bigl(\frac{x_d + y_d}{2}\Bigr) \chi_E(x)\chi_E(y) \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma^{(0)}(x) \frac{\gamma_d}{\varepsilon} e^{-\frac{i}{\varepsilon}\gamma\xi} \nonumber \\ &= \frac{1}{i} \chi_E(y) \chi_E (x) \partial_{\xi_d} t_0 (x, \xi) \label{8prop3} \end{align} where in the second step we used \eqref{texpand} and the fact that $\chi_R(\frac{x_d + y_d}{2}) = 1$ for $x, y \in \supp \chi_E$.
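The second equality in \eqref{8prop3} is a direct differentiation; assuming, as in \eqref{texpand}, that the principal symbol reads $t_0(x,\xi) = \sum_\gamma a^{(0)}_\gamma(x)\, e^{-\frac{i}{\varepsilon}\gamma\xi}$, one has

```latex
\frac{1}{i}\, \partial_{\xi_d} t_0(x, \xi)
  = \frac{1}{i} \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a^{(0)}_\gamma(x)
      \Bigl(-\frac{i \gamma_d}{\varepsilon}\Bigr) e^{-\frac{i}{\varepsilon}\gamma\xi}
  = - \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a^{(0)}_\gamma(x)\,
      \frac{\gamma_d}{\varepsilon}\, e^{-\frac{i}{\varepsilon}\gamma\xi} \, .
```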
We now define the operators $Q_s$ and $R_s$ on $\ell^2((\varepsilon {\mathbb Z})^d)$ by \begin{align}\label{Q_sdef} Q_s u(x) &:= \sqrt{\frac{C_0}{\varepsilon \pi}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} e^{\frac{i}{2\varepsilon} (\phi_s(y_d) + \phi_s(x_d))} u(y) \int_{{\mathbb T}^d} e^{\frac{i}{\varepsilon}(y-x)\xi} q_s(x, y, \xi; \varepsilon) \, d\xi \\ R_s u(x) &:= \sqrt{\frac{C_0}{\varepsilon \pi}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} e^{\frac{i}{2\varepsilon} (\phi_s(y_d) + \phi_s(x_d))} u(y) \int_{{\mathbb T}^d} e^{\frac{i}{\varepsilon}(y-x)\xi} r_s(x, y, \xi; \varepsilon) \, d\xi \label{R_sdef} \end{align}
Then we get the following formula for the interaction term $w_{jk}$.
\begin{prop}\label{prop4}
For $Q_s$ given in \eqref{Q_sdef}, the interaction term is given by \begin{equation}\label{0prop4} w_{jk} = \varepsilon \skpd{Q_0 \mathbf{1}_E v_j}{ \mathbf{1}_E v_k} + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\; . \end{equation} \end{prop}
\begin{proof} We first remark that by the definition \eqref{phinull} of $\phi_s$ we have \begin{equation}\label{1prop4} \frac{i}{2\varepsilon}\bigl( \phi_s(y_d) + \phi_s(x_d)\bigr) = -\frac{C_0}{\varepsilon} \Bigl[ \bigl(\frac{x_d + y_d}{2}-s\bigr)^2 + \frac{1}{4} (y_d - x_d)^2\Bigr]\; . \end{equation} Combining Proposition \ref{prop2} with Proposition \ref{prop3} and \eqref{1prop4} gives \begin{align} \chi_R(s)\chi_E &[T_\varepsilon, \pi_s] \chi_E \mathbf{1}_E v_j(x) = \sqrt{\frac{C_0}{\varepsilon \pi}} (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} \mathbf{1}_E(y) v_j(y) e^{-\frac{C_0}{4\varepsilon}(y_d - x_d)^2}\nonumber\\
&\times\,\int_{{\mathbb T}^d} e^{\frac{i}{\varepsilon}(y-x)\xi} \Bigl[ \varepsilon \partial_s \Bigl( e^{-\frac{C_0}{\varepsilon}(\frac{x_d + y_d}{2} - s)^2} q_s(x, y, \xi; \varepsilon)\Bigr)
+ e^{-\frac{C_0}{\varepsilon}(\frac{x_d + y_d}{2} - s)^2}
r_s (x, y, \xi; \varepsilon)\Bigr] \, d\xi\nonumber\\
&= \bigl( \varepsilon\partial_s Q_s + R_s \bigr) \mathbf{1}_E v_j(x) \label{2prop4} \end{align} where the second equation follows from the definitions \eqref{Q_sdef} and \eqref{R_sdef}. Thus by \eqref{0prop1a} we get for any $\eta>0$ \begin{align}\label{3prop4}
w_{jk} &= \int_{-R}^0 \skpd{\bigl(\varepsilon \partial_s Q_s + R_s\bigr) \mathbf{1}_E v_j}{\mathbf{1}_E v_k} \, ds + O\Bigl(e^{-\frac{S_0 + a - \eta}{\varepsilon}}\Bigr)\\
&= \varepsilon \skpd{Q_0 \mathbf{1}_E v_j}{\mathbf{1}_E v_k} - S_1 + S_2 + O\Bigl(e^{-\frac{S_0 + a - \eta}{\varepsilon}}\Bigr) \; ,\nonumber \end{align} where \begin{align}\label{4prop4}
S_1 &:= \varepsilon \skpd{Q_{-R}\mathbf{1}_E v_j}{\mathbf{1}_E v_k}\\
S_2 &:= \varepsilon \int_{-R}^0 \skpd{R_s \mathbf{1}_E v_j}{\mathbf{1}_E v_k}\, ds\label{5prop4} \end{align} To analyze $S_2$, we first introduce the following notation, which will be used again later on. We set (see Definition \ref{pseudo}) \begin{align}\label{8prop4} \tilde{u}_s(x) &:= e^{\frac{i}{2\varepsilon}\phi_s(x_d)} u(x) = e^{-\frac{C_0}{2\varepsilon}(x_d - s)^2} u(x) \\ \tilde{Q}_s &:= \widetilde{\Op}_\varepsilon^{\mathbb T} \Bigl(\sqrt{\frac{C_0 \varepsilon}{\pi}} q_s\Bigr) \label{9prop4}\\ \tilde{R}_s &:= \widetilde{\Op}_\varepsilon^{\mathbb T} \Bigl(\sqrt{\frac{C_0 \varepsilon}{\pi}} r_s\Bigr)\,, \label{11prop4} \end{align} then \begin{equation}\label{11aprop4}
\varepsilon \skpd{Q_{s}u}{v} = \skpd{\tilde{Q}_s \tilde{u}_s}{\tilde{v}_{s}} \quad\text{ and }\quad
\varepsilon \skpd{R_{s}u}{v} = \skpd{\tilde{R}_s \tilde{u}_s}{\tilde{v}_{s}}\; . \end{equation} Using \eqref{11aprop4}, we write \begin{align}\label{6prop4}
\bigl| S_2\bigr| &= \Bigl| \int_{-R}^0 \skpd{e^{-\frac{d^k}{\varepsilon}} \tilde{R}_s e^{-\frac{d^k}{\varepsilon}} e^{-\frac{(d^k + d^j)}{\varepsilon}}
e^{\frac{d^j}{\varepsilon}}\mathbf{1}_E \tilde{v}_{j,s}}
{e^{\frac{d^k}{\varepsilon}}\mathbf{1}_E \tilde{v}_{k,s}}\, ds \Bigr| \\
&\leq e^{-\frac{S_{jk}}{\varepsilon}}\int_{-R}^0 \Bigl\| e^{-\frac{d^k}{\varepsilon}} \tilde{R}_s e^{-\frac{d^k}{\varepsilon}}
e^{\frac{d^j}{\varepsilon}}\mathbf{1}_E \tilde{v}_{j,s}\Bigr\|_{\ell^2}
\Bigl\| e^{\frac{d^k}{\varepsilon}}\mathbf{1}_E \tilde{v}_{k,s}\Bigr\|_{\ell^2}\, ds\; .\nonumber \end{align} Since $r_s\in S_0^\infty (1)({\mathbb R} ^{2d}\times {\mathbb T}^d)$, it follows from Corollary \ref{cor1app} together with Proposition \ref{prop3app} that for some $C>0$ \begin{align}
\bigl| S_2\bigr| &\leq C \varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\int_{-R}^0 \Bigl\|e^{\frac{d^j}{\varepsilon}}\mathbf{1}_E \tilde{v}_{j,s}
\Bigr\|_{\ell^2} \Bigl\| e^{\frac{d^k}{\varepsilon}}\mathbf{1}_E \tilde{v}_{k,s}\Bigr\|_{\ell^2}\, ds\nonumber\\ &= O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr)\label{7prop4} \end{align} where for the second step we used weighted estimates for the Dirichlet eigenfunctions given in \cite{kleinro5}, Proposition 3.1, together with
the fact that $|\tilde{u}_s(x)|\leq |u(x)|$.
By \eqref{4prop4} and \eqref{11aprop4} we get \begin{equation}
\bigl| S_1\bigr| = \Bigl|\skpd{\tilde{Q}_{-R}\mathbf{1}_E \tilde{v}_{j, -R}}{\mathbf{1}_E \tilde{v}_{k, -R}}\Bigr|
\leq \bigl\| \mathbf{1}_E \tilde{Q}_{-R}\mathbf{1}_E \tilde{v}_{j, -R} \bigr\|_{\ell^2}\, \bigl\|\mathbf{1}_E \tilde{v}_{k, -R}\bigr\|_{\ell^2}\; . \end{equation} Again by Corollary \ref{cor1app} together with \eqref{8prop4}, \eqref{9prop4} and since $q_s\in S_0^0(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)$ we have for some $C>0$ \begin{equation}\label{10prop4}
\bigl| S_1\bigr|\leq C \sqrt{\varepsilon} \bigl\|\mathbf{1}_E e^{-\frac{C_0}{2\varepsilon}(\,. \,+R)^2}v_j \bigr\|_{\ell^2}\,
\bigl\|\mathbf{1}_E e^{-\frac{C_0}{2\varepsilon}(\,. \,+R)^2}v_k\bigr\|_{\ell^2} \leq \sqrt{\varepsilon} C e^{-\frac{C_0}{\varepsilon}R_E^2} \end{equation}
for $R_E:= \min_{x\in E} |x_d - R|$. Thus taking $R$ large enough such that $R_E>S_{jk}$ and inserting \eqref{10prop4} and \eqref{7prop4} in \eqref{3prop4} proves the proposition.
\end{proof}
In the next proposition we show that, modulo a small error, the interaction term only depends on a small neighborhood of the point (or manifold, respectively) where the geodesics between $x^j$ and $x^k$ intersect ${\mathbb H}_d$. Since the proofs are analogous, we discuss the point and the manifold case simultaneously.
\begin{prop}\label{prop5} Let $\Psi\in \mathscr C_0^\infty (\stackrel{\circ}{M}_{j}\cap \stackrel{\circ}{M}_{k}\cap E)$ denote a cut-off-function near $y_0\in {\mathbb H}_d$ (or $G_0\subset {\mathbb H}_d$ respectively) such that $\Psi=1$ in a neighborhood $U_\Psi$ of $y_0$ (or $G_0$ respectively) and for some $C>0$ \begin{equation}\label{1prop5} \frac{C_0}{2}x_d^2 + d^j(x) + d^k(x) - S_{jk} >C \, , \qquad x\in \supp (1-\Psi)\, . \end{equation} Then, for the restriction $\Psi^\varepsilon:= r_\varepsilon \Psi$ of $\Psi$ to the lattice $(\varepsilon {\mathbb Z})^d$ (see \eqref{restrict}), \begin{equation}\label{0prop5} w_{jk} = \varepsilon \skpd{Q_0 \Psi^\varepsilon v_j}{\Psi^\varepsilon v_k} + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\; . \end{equation} \end{prop}
\begin{proof} Using Proposition \ref{prop4} and the notation \eqref{8prop4}, \eqref{9prop4} together with \eqref{11aprop4} we have \begin{align}\label{2prop5} w_{jk}&= \skpd{\tilde{Q}_0 \mathbf{1}_E \tilde{v}_{j,0}}{ \mathbf{1}_E \tilde{v}_{k,0}} + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\\ &= \varepsilon \skpd{Q_0 \Psi^\varepsilon \mathbf{1}_E v_j}{\Psi^\varepsilon \mathbf{1}_E v_k} + R_1 + R_2 + R_3 + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\nonumber \end{align} where, using $\mathbf{1}_E \Psi = \Psi$, \begin{align}\label{3prop5} R_1 &= \skpd{\tilde{Q}_0 (1-\Psi^\varepsilon ) \mathbf{1}_E \tilde{v}_{j,0}}{ \Psi^\varepsilon \tilde{v}_{k,0}}\\ R_2 &= \skpd{\tilde{Q}_0\Psi^\varepsilon \tilde{v}_{j,0}}{(1-\Psi^\varepsilon ) \mathbf{1}_E \tilde{v}_{k,0}}\\ R_3 &= \skpd{\tilde{Q}_0 (1-\Psi^\varepsilon )\mathbf{1}_E \tilde{v}_{j,0}}{ (1-\Psi^\varepsilon )\mathbf{1}_E \tilde{v}_{k,0}}\; . \end{align}
To estimate $|R_1|$ we write \begin{align*}
\bigl|R_1\bigr| &= \Bigl| \skpd{e^{-\frac{1}{\varepsilon}(d^k + d^j)}(1-\Psi^\varepsilon )e^{\frac{d^j}{\varepsilon}} \mathbf{1}_E \tilde{v}_{j,0}}
{\chi_E e^{\frac{d^k}{\varepsilon}}\tilde{Q}_0^*e^{-\frac{d^k}{\varepsilon}}e^{\frac{d^k}{\varepsilon}}\Psi^\varepsilon \tilde{v}_{k,0}}\Bigr|\\
&\leq \Bigl\|e^{-\frac{1}{\varepsilon}(d^k + d^j + \frac{C_0}{2}(.)_d^2)} (1-\Psi^\varepsilon ) e^{\frac{d^j}{\varepsilon}} \mathbf{1}_E v_j\Bigr\|_{\ell^2}
\Bigl\| \chi_E e^{\frac{d^k}{\varepsilon}}\tilde{Q}_0^*e^{-\frac{d^k}{\varepsilon}}e^{\frac{d^k}{\varepsilon}} \Psi^\varepsilon \tilde{v}_{k,0}\Bigr\|_{\ell^2} \end{align*} where $\chi_E$ denotes a cut-off function as introduced above Proposition \ref{prop3}. Since by \eqref{9prop4} \[ \tilde{Q}_0^* = \widetilde{\Op}_\varepsilon^T \Bigl(\sqrt{\frac{C_0 \varepsilon}{\pi}} q_0^*\Bigr)\quad\text{for}\quad q_0^*(x,y,\xi;\varepsilon)= q_0(y,x,\xi;\varepsilon)\in S_0^0(1)({\mathbb R} ^{2d}\times {\mathbb T}^d)\, ,\] it follows from Proposition \ref{prop3app} that $\chi_E e^{\frac{d^k}{\varepsilon}}\tilde{Q}_0^*e^{-\frac{d^k}{\varepsilon}}$ is the 0-quantization of a symbol $q_{0,d^k,0}\in S^{\frac{1}{2}}_0(1)({\mathbb R} ^d\times {\mathbb T}^d)$. Thus by Corollary \ref{cor1app} and \eqref{1prop5}, for some $C, C'>0$, \begin{equation}\label{4prop5}
|R_1| \leq e^{-\frac{S_{jk}+C}{\varepsilon}} C' \sqrt{\varepsilon} \Bigl\|e^{\frac{d^j}{\varepsilon}} v_j\Bigr\|_{\ell^2} \Bigl\|e^{\frac{d^k}{\varepsilon}} v_k\Bigr\|_{\ell^2} =
O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr) \end{equation} where the last estimate follows from \cite{kleinro5}, Proposition 3.1.\\
Similar arguments show that $|R_2| = O(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}})$ and $|R_3| = O(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}})$; together with \eqref{2prop5} this finishes the proof.
\end{proof}
In the next step, we show that modulo the same error term, the Dirichlet eigenfunctions $v_m,\, m=j,k,$ can be replaced by the approximate eigenfunctions $\widehat{v}_m^\varepsilon$ given in \eqref{hatvm}. We showed in \cite{kleinro5}, Theorem 1.7, that for some smooth functions $b^m, b^m_\ell$, compactly supported in a neighborhood of $M_m$, the approximate eigenfunctions $\widehat{v}^\varepsilon_m\in \ell^2((\varepsilon {\mathbb Z})^d)$ are given by the restrictions to $(\varepsilon {\mathbb Z})^d$ of \begin{equation}\label{approx} \widehat{v}_m := \varepsilon^{\frac{d}{4}} e^{-\frac{d^m}{\varepsilon}} b^m \, , \qquad\text{where}\quad\
b^m \sim \sum_{\ell\geq M} \varepsilon^\ell b^m_\ell \end{equation} (using the notation in \cite{kleinro5}, these restrictions are $\widehat{v}^\varepsilon_{m,1,0}$). In \cite{kleinro5}, Theorem 1.8 we proved that for any $K$ compactly supported in $M_m$ the estimate \begin{equation}\label{approxl2}
\Bigl\| e^{\frac{d^m}{\varepsilon}}(v_m - \widehat{v}^\varepsilon_m)\Bigr\|_{\ell^2(K)} = O\bigl(\varepsilon^\infty\bigr)\; . \end{equation} holds. Using \eqref{approxl2} we get the following proposition.
\begin{prop}\label{prop6} Let $ \widehat{v}^\varepsilon_m\in \ell^2((\varepsilon {\mathbb Z})^d),\, m=j,k,$ denote the approximate eigenfunctions of $H_\varepsilon$ in $M_m$ constructed in \cite{kleinro5}, Theorem 1.7, then, for $\Psi^\varepsilon$ as defined in Proposition \ref{prop5}, \begin{equation}\label{0prop6} w_{jk} = \varepsilon \skpd{Q_0 \Psi^\varepsilon \widehat{v}^\varepsilon_j}{\Psi^\varepsilon \widehat{v}^\varepsilon_k } + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\; . \end{equation} \end{prop}
\begin{proof}
By Proposition \ref{prop5} \begin{multline}\label{1prop6} w_{jk} = \varepsilon \skpd{Q_0 \Psi^\varepsilon \widehat{v}^\varepsilon_j}{\Psi^\varepsilon \widehat{v}^\varepsilon_k } + \varepsilon \skpd{Q_0 \Psi^\varepsilon (v_j - \widehat{v}^\varepsilon_j)}{\Psi^\varepsilon v_k }\\ + \varepsilon \skpd{Q_0 \Psi^\varepsilon \widehat{v}^\varepsilon_j}{\Psi^\varepsilon (v_k - \widehat{v}^\varepsilon_k) }+ O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr)\; . \end{multline} Using the notation \eqref{8prop4}, \eqref{9prop4} with $\tilde{u}:= \tilde{u}_0$ together with \eqref{11aprop4}, we can write \begin{align}
\bigl| \varepsilon \skpd{Q_0 \Psi^\varepsilon (v_j - \widehat{v}^\varepsilon_j)}{\Psi^\varepsilon v_k }\bigr| &=
\bigl| \skpd{\tilde{Q}_0 \Psi^\varepsilon (\tilde{v}_j - \tilde{\widehat{v}}^\varepsilon_j)}{\Psi^\varepsilon \tilde{v}_k }\bigr| \nonumber\\
&= \bigl| \skpd{\chi_E e^{-\frac{d^k}{\varepsilon}}\tilde{Q}_0 e^{\frac{d^k}{\varepsilon}} \chi_E \Psi^\varepsilon e^{-\frac{d^k + d^j}{\varepsilon}}
e^{\frac{d^j}{\varepsilon}}(\tilde{v}_j - \tilde{\widehat{v}}^\varepsilon_j)}{e^{\frac{d^k}{\varepsilon}}\Psi^\varepsilon \tilde{v}_k }\bigr|\nonumber \\
&\leq e^{-\frac{S_{jk}}{\varepsilon}}\sqrt{\varepsilon} C \bigl\| \Psi^\varepsilon e^{- \frac{C_0 (.)_d^2}{\varepsilon}}
e^{\frac{d^j}{\varepsilon}}(v_j - \widehat{v}^\varepsilon_j)\bigr\|_{\ell^2}
\bigl\| \Psi^\varepsilon e^{\frac{d^k}{\varepsilon}}v_k \bigr\|_{\ell^2}\; ,\label{2prop6} \end{align} where, analogously to \eqref{4prop5}, the last estimate follows from Proposition \ref{prop3} together with Corollary \ref{cor1app} for the operator $\chi_E e^{-\frac{d^k}{\varepsilon}}\tilde{Q}_0 e^{\frac{d^k}{\varepsilon}} \chi_E$. Since $\Psi$ is compactly supported in $\stackrel{\circ}{M}_j$, we get by \eqref{approxl2} for any $N\in{\mathbb N}$ \begin{equation}\label{3prop6}
\bigl\| \Psi^\varepsilon e^{- \frac{C_0 (.)_d^2}{\varepsilon}} e^{\frac{d^j}{\varepsilon}}(v_j -\widehat{v}^\varepsilon_j)\bigr\|_{\ell^2} \leq
\bigl\| e^{\frac{d^j}{\varepsilon}}(v_j - \widehat{v}^\varepsilon_j)\bigr\|_{\ell^2(\supp \Psi)} = O(\varepsilon^N)\; . \end{equation} Since by \cite{kleinro5}, Proposition 3.1 \begin{equation}\label{4prop6}
\bigl\| \Psi^\varepsilon e^{\frac{d^k}{\varepsilon}}v_k \bigr\|_{\ell^2} \leq C \varepsilon^{-N_0} \end{equation} for some $C>0$, $N_0\in{\mathbb N}$, we can conclude by inserting \eqref{4prop6} and \eqref{3prop6} in \eqref{2prop6} \begin{equation}\label{5prop6}
\bigl| \varepsilon \skpd{Q_0 \Psi^\varepsilon (v_j - \widehat{v}^\varepsilon_j)}{\Psi^\varepsilon v_k }\bigr| = O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr)\; . \end{equation} Analogous arguments show \begin{equation}\label{6prop6}
\bigl| \varepsilon \skpd{Q_0 \Psi^\varepsilon \widehat{v}^\varepsilon_j}{\Psi^\varepsilon (v_k - \widehat{v}^\varepsilon_k)}\bigr| = O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr)\; . \end{equation} Inserting \eqref{5prop6} and \eqref{6prop6} in \eqref{1prop6} gives \eqref{0prop6}.
\end{proof}
Proposition \ref{prop6} together with \eqref{approx}, \eqref{8prop4} and \eqref{11aprop4} leads at once to the following corollary.
\begin{cor}\label{cor1} For $b^j, b^k\in \mathscr C_0^\infty ({\mathbb R} ^d\times (0,\varepsilon_0])$ as given in \eqref{hatvm}, $\Psi$ as defined in Proposition \ref{prop5} and the restriction map $r_\varepsilon$ given in \eqref{restrict} we have \begin{equation}\label{0cor1}
w_{jk} = \varepsilon^{\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \skpd{\widehat{Q}_0 r_\varepsilon \Psi b^j}{ e^{-\frac{\varphi}{\varepsilon}}\Psi b^k} + O\Bigl(\varepsilon^\infty e^{-\frac{1}{\varepsilon}S_{jk}}\Bigr) \end{equation} where for $\tilde{Q}_0$ defined in \eqref{9prop4} we set \begin{align}\label{1cor1}
\varphi (x) &:= d^j(x) + d^k(x) + C_0|x_d|^2 - S_{jk} \\
\widehat{Q}_0 &:= e^{\frac{1}{2\varepsilon}C_0(.)_d^2} e^{\frac{d^j}{\varepsilon}} \tilde{Q}_0 e^{-\frac{d^j}{\varepsilon}}e^{-\frac{1}{2\varepsilon}C_0(.)_d^2} \; .\label{2cor1} \end{align} \end{cor}
\begin{rem}\label{Remcor1} \begin{enumerate} \item Setting $\psi (x) = \frac{1}{2\varepsilon}C_0 x_d^2 + \frac{1}{\varepsilon} d^j (x)$, it follows from Proposition \ref{prop3app} together with \eqref{2cor1} and \eqref{9prop4} that the operator $\widehat{Q}_0$ is the $0$-quantization of a symbol $\widehat{q}_{\psi}\in S_0^{\frac{1}{2}}(1)({\mathbb R} ^d\times {\mathbb T}^d)$, which has an asymptotic expansion, in particular \begin{equation}\label{0remcor1} \widehat{Q}_0 = \Op_{\varepsilon}^{\mathbb T}\bigl(\widehat{q}_{\psi}\bigr)\, , \qquad \widehat{q}_\psi (x, \xi; \varepsilon) \sim \varepsilon^{\frac{1}{2}} \sum_{n=0}^\infty \varepsilon^n \widehat{q}_{n,\psi}(x, \xi)\; . \end{equation} Modulo $S^{\frac{3}{2}}_0(1)({\mathbb R} ^{d}\times {\mathbb T}^d)$, the symbol $\widehat{q}_\psi$ is given by \begin{equation}\label{0aremcor1} \varepsilon^{\frac{1}{2}} \widehat{q}_{0,\psi}(x, \xi) = \sqrt{\frac{\varepsilon C_0}{\pi}} q_{0,0} \bigl(x, x, \xi - i\nabla d^j(x) - iC_0 x_d e_d\bigr) \end{equation} where $e_d$ denotes the unit vector in $d$-direction (see Proposition \ref{prop3}). At the intersection point or intersection manifold, i.e. for $y=y_0$ or $y\in G_0$ respectively, by \eqref{8prop3} the leading order of the symbol is given by \begin{equation}\label{1.remcor1} \varepsilon^{\frac{1}{2}}\widehat{q}_{0,\psi}(y,\xi)= \frac{1}{i}\sqrt{\frac{\varepsilon C_0}{\pi}} \partial_{\xi_d} t_0 \bigl(y, \xi - i\nabla d^j (y)\bigr) = -\sqrt{\frac{\varepsilon C_0}{\pi}}\sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y) \eta_d e^{-i\eta\cdot (\xi-i\nabla d^j (y))} \end{equation} where $\tilde{a}_{\eta} = a_{\varepsilon\eta}^{(0)}$ for $\eta\in{\mathbb Z}^d$. \item By Corollary \ref{cor1} we can write \begin{equation}\label{1thm1}
w_{jk} = \varepsilon^{\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \sum_{x\in(\varepsilon {\mathbb Z})^d} e^{-\frac{\varphi(x)}{\varepsilon}} \bigl( \widehat{Q}_0 r_\varepsilon \Psi b^j\bigr)(x) \bigl(\Psi b^k\bigr)(x) + O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr)\; . \end{equation}
\item In the setting of Hypothesis \ref{hypgeo2}, we have $\varphi|_{G_0} = 0$ and moreover, since $d^j + d^k$ is minimal on $G_0$,
$\nabla \varphi|_{G_0} = 0$ and $\varphi (x)>0$ for $x\in \supp \Psi \setminus G_0$. \end{enumerate} \end{rem}
\section{Proof of Theorem \ref{wjk-expansion}}\label{section3}
A key element of the proofs of both theorems is replacing the sum on the right hand side of \eqref{1thm1} by an integral, up to a small error. Here we follow arguments from \cite{giacomo}.
In particular, in the case of just one minimal geodesic, we can use Corollary C.2 in \cite{giacomo}, telling us the following: Let $a\in\mathscr C_0^\infty ({\mathbb R} ^n, {\mathbb R} )$ and $\psi\in \mathscr C^\infty ({\mathbb R} ^n, {\mathbb R} )$ be such that $\psi(x_0)=0$, $D^2\psi (x_0)>0$ and $\psi(x)>0$ for $x\in \supp a\setminus \{x_0\}$ for some $x_0\in{\mathbb R} ^n$. Then there exists a sequence $(J_k)_{k\in{\mathbb N}}$ in ${\mathbb R} $ such that \begin{equation}\label{2thm1} \varepsilon^{\frac{d}{2}} \sum_{x\in(\varepsilon {\mathbb Z})^d} a(x)e^{-\frac{\psi(x)}{\varepsilon}} \sim \sum_{k=0}^\infty \varepsilon^k J_k\quad
\text{where}\quad J_0 = \frac{(2\pi)^{\frac{d}{2}} a(x_0)}{\sqrt{\det D^2\psi (x_0)}}\; . \end{equation} We observe that the proof of \eqref{2thm1} for $\varepsilon$-independent $a$ immediately generalizes to amplitudes with an asymptotic expansion $a(x,\varepsilon) \sim \sum_k \varepsilon^k a_k(x)$.
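Purely as an illustration (outside the formal argument), the leading term in \eqref{2thm1} can be checked numerically in dimension $d=1$. The following sketch uses a Gaussian amplitude in place of a compactly supported $a$, which is harmless numerically since the tails are negligible; the names and the example choice $\psi(x)=x^2/2$, $x_0=0$ are ours.

```python
import math

def lattice_laplace_sum(a, psi, eps, box=5.0):
    """Approximate eps^(1/2) * sum_{x in eps*Z} a(x) * exp(-psi(x)/eps) for d = 1."""
    n = int(box / eps)
    total = sum(a(eps * k) * math.exp(-psi(eps * k) / eps)
                for k in range(-n, n + 1))
    return math.sqrt(eps) * total

# Example: psi(x) = x^2/2 (minimum at x0 = 0 with psi''(0) = 1), Gaussian amplitude.
a = lambda x: math.exp(-x * x)
psi = lambda x: 0.5 * x * x
J0 = math.sqrt(2.0 * math.pi) * a(0.0) / math.sqrt(1.0)  # leading term of (2thm1)

approx = lattice_laplace_sum(a, psi, eps=1e-3)
print(approx, J0)  # the two values agree up to a correction of order eps
```

For $\varepsilon=10^{-3}$ the lattice sum matches $J_0=\sqrt{2\pi}$ to roughly three digits, consistent with an $O(\varepsilon)$ first correction $J_1$.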
In order to apply \eqref{2thm1} to the right hand side of \eqref{1thm1} we have to verify the assumptions above for $\psi = \varphi $ defined in \eqref{1cor1} and for some $a\in\mathscr C_0^\infty$ which is equal to $\Psi b^k \bigl(\widehat{Q}_0 r_\varepsilon \Psi b^j\bigr)$ on $(\varepsilon {\mathbb Z})^d$ and has an asymptotic expansion in $\varepsilon$.\\
It follows directly from its definition that $\varphi(y_0)=0$. Since $d^j (x)+ d^k (x)- S_{jk}> 0$ in $E\setminus \gamma_{jk}$ by the triangle inequality and $x_d^2>0$ for all $x\in \gamma_{jk}, x\neq y_0$, it follows that $\varphi (x)>0$ for $x\in \supp \Psi \setminus \{y_0\}$.
To see the positivity of $D^2\varphi (y_0)$ we first remark that by Hypothesis \ref{hypgeo1} $d^j + d^k$, restricted to ${\mathbb H}_d$, has a positive Hessian at $y_0$, which we denote by $D^2_\perp (d^j + d^k)(y_0)$. Since furthermore $d^j + d^k$ is constant along the geodesic, it follows that the full Hessian $D^2(d^j + d^k)(y_0)$ has $d-1$ positive eigenvalues and the eigenvalue zero. The Hessian of $C_0 x_d^2$ at $y_0$ is diagonal and the only non-zero element is $\partial_d^2 (C_0 x_d^2) = 2C_0>0$. Thus the Hessian $D^2 \varphi (y_0)$ is a non-negative quadratic form. In order to show that it is in fact positive, we analyze its determinant. Writing the last column as the sum $\nabla \partial_d (d^j + d^k)(y_0) + v$ where $v_k=0$ for $1\leq k \leq d-1$ and $v_d=2 C_0$ we get \begin{align} \det D^2\varphi (y_0) &= \det D^2 (d^j + d^k)(y_0) + \det \begin{pmatrix} D^{2}_\perp(d^j + d^k)(y_0) & 0 \\ * & 2 C_0 \end{pmatrix}\nonumber\\
&= 2 C_0 \det D^{2}_\perp(d^j + d^k)(y_0) >0 \label{6thm1} \end{align} where the second equality follows from the fact that one eigenvalue of $D^2(d^j + d^k)(y_0)$ is zero as discussed above and thus its determinant is zero. This proves that $D^2\varphi (y_0)$ is non-degenerate and thus we get $D^2\varphi (y_0) >0$.
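The column decomposition behind \eqref{6thm1} is an instance of the rank-one update identity $\det(A + c\, e_d e_d^T) = \det A + c \det A_{\widehat d}$, where $A_{\widehat d}$ is the minor obtained by deleting the last row and column; when $\det A = 0$ only the second term survives. A small numerical sanity check (outside the formal argument; the singular symmetric matrix $A$ stands in for $D^2(d^j+d^k)(y_0)$ and $c$ for $2C_0$):

```python
import random

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-14:
            return 0.0  # numerically singular
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
        d *= M[i][i]
    return d

random.seed(0)
n, c0 = 4, 2.0  # c0 plays the role of 2*C_0
# Singular symmetric A = B^T B with B of rank n-1 (exactly one zero eigenvalue).
B = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n - 1)]
A = [[sum(B[k][i] * B[k][j] for k in range(n - 1)) for j in range(n)]
     for i in range(n)]

Ad = [row[:] for row in A]
Ad[n - 1][n - 1] += c0                      # A + c0 * e_d e_d^T
minor = [row[:n - 1] for row in A[:n - 1]]  # delete last row and column

# det(A + c0 e_d e_d^T) = det A + c0 * det(minor), and det A = 0 here:
assert abs(det(Ad) - (det(A) + c0 * det(minor))) < 1e-9
assert abs(det(Ad) - c0 * det(minor)) < 1e-9
```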
By Proposition \ref{prop1app}, Remark \ref{remprop1app} and \eqref{0remcor1} the operator $\widehat{Q}_0= \Op_{\varepsilon}^{\mathbb T}(\widehat{q}_\psi)$ on $\ell^2((\varepsilon {\mathbb Z})^d)$ (multiplied from the right by the restriction operator $r_\varepsilon$) is equal to the restriction of the operator $\Op_{\varepsilon}(\widehat{q}_\psi)$ on $L^2({\mathbb R} ^d)$. Here we consider $\widehat{q}_\psi$ as periodic element of the symbol class $S_0^{\frac{1}{2}}(1)\bigl({\mathbb R} ^d\times {\mathbb R} ^d\bigr)$. In particular, for $x\in(\varepsilon {\mathbb Z})^d$ we have \begin{equation}\label{13thm1}
\Psi b^k(x) \widehat{Q}_0 r_\varepsilon \Psi b^j (x) = \Psi b^k(x) \Op_{\varepsilon} (\widehat{q}_\psi) \Psi b^j (x) \end{equation} where $r_\varepsilon$ denotes the restriction to the lattice $(\varepsilon {\mathbb Z})^d$ defined in \eqref{restrict}. We therefore set \begin{equation}\label{14thm1} a(x;\varepsilon) := \Psi b^k(x) \Op_{\varepsilon} \bigl(\widehat{q}_\psi\bigr) \Psi b^j (x) \, , \qquad x\in{\mathbb R} ^d\; . \end{equation} Then $a = \Psi b^k \bigl(\widehat{Q}_0 r_\varepsilon \Psi b^j\bigr)$ on $(\varepsilon {\mathbb Z})^d$ and $a(.; \varepsilon)\in\mathscr C_0^\infty({\mathbb R} ^d)$, because $\Psi, b^k, b^j\in \mathscr C_0^\infty({\mathbb R} ^d)$ (see e.g. \cite{dima}, which gives that $\Op_{\varepsilon} \bigl(\widehat{q}_\psi\bigr)$ maps $\mathcal{S}$ to $\mathcal{S}$).
Next we show that $a(x; \varepsilon)$ has an asymptotic expansion in $\varepsilon$. It suffices to show this for $\Op_{\varepsilon} (\widehat{q}_\psi) \Psi b^j$.
It follows from the asymptotic expansions of $\widehat{q}_\psi$ and $b^j$ in \eqref{0remcor1} and \eqref{hatvm} that \begin{align} \Op_{\varepsilon} (\widehat{q}_\psi) \Psi b^j (x;\varepsilon) &\sim \sum_{n=0}^\infty \sum_{\natop{\ell\in {\mathbb Z}/2}{\ell\geq -N_j}} \varepsilon^{\frac{1}{2}+n+\ell} \Op_{\varepsilon} (\widehat{q}_{n,\psi}) \Psi b^j_\ell (x)\nonumber \\ & \sim \sum_{n=0}^\infty \sum_{\natop{\ell\in {\mathbb Z}/2}{\ell\geq -N_j}} \varepsilon^{\frac{1}{2}+n+\ell} (2\pi\varepsilon )^{-d} \int_{{\mathbb R} ^{2d}} e^{\frac{i}{\varepsilon}(y-x)\xi} \widehat{q}_{n,\psi}(x ,\xi) \Psi b^j_\ell (y) \, dy \, d\xi \nonumber\\ &\sim \sum_{n=0}^\infty \sum_{\natop{\ell\in {\mathbb Z}/2}{\ell\geq -N_j}}\sum_{m=0}^\infty \varepsilon^{\frac{1}{2}+n+\ell+m} (2\pi)^{-d} \int_{{\mathbb R} ^{2d}} e^{i(y-x)\zeta} \widehat{q}_{m,n,\psi}(x) \zeta^m \Psi b^j_\ell (y) \, dy \, d\zeta \label{5thm1} \end{align} where the last equality follows from the analyticity of $\widehat{q}_\psi$ with respect to $\xi$, using the substitution $\zeta \varepsilon = \xi$. The functions $\widehat{q}_{m,n,\psi}(x)$ are the coefficients of the expansion of $\widehat{q}_{n,\psi}(x,\cdot)$ into a convergent power series in $\xi$ at zero.
Thus we can apply \eqref{2thm1} to \eqref{1thm1}, which gives \begin{equation}\label{3thm1}
w_{jk} \sim e^{-\frac{S_{jk}}{\varepsilon}} \sum_{k=0}^\infty \varepsilon^k J_k
\end{equation} where $J_0$ is the leading order term of \begin{equation}\label{4thm1} \tilde{J}_0 = \frac{(2\pi)^{\frac{d}{2}}}{\sqrt{\det D^2\varphi (y_0)}}b^k(y_0) (\Op_{\varepsilon}(\widehat{q}_\psi) \Psi b^j)(y_0;\varepsilon) \; . \end{equation}
By \eqref{1.remcor1} it follows that \begin{equation}\label{20thm1} \widehat{q}_{0,0,\psi}(y_0) = -\sqrt{\frac{C_0}{\pi}} \sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y_0) \eta_d e^{-\eta\cdot \nabla d^j (y_0)} \; . \end{equation} Thus, by \eqref{5thm1} and Fourier inversion formula, the leading order term of $ (\Op_{\varepsilon}(\widehat{q}_\psi) \Psi b^j)(y_0;\varepsilon)$ is given by \begin{multline}\label{21thm1}
\varepsilon^{\frac{1}{2}- N_j}\widehat{q}_{0,0,\psi}(y_0) (2\pi)^{-d} \int_{{\mathbb R} ^{2d}} e^{i(y-y_0)\zeta} \Psi b^j_{-N_j} (y) \, dy \, d\zeta \\
= - \varepsilon^{\frac{1}{2}- N_j}\sqrt{\frac{C_0}{\pi}} \sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y_0) \eta_d e^{-\eta\cdot \nabla d^j (y_0)} \Psi b^j_{-N_j} (y_0) \end{multline}
From \eqref{21thm1},\eqref{6thm1}, \eqref{4thm1} and \eqref{3thm1} it follows that $w_{jk}$ has the stated asymptotic expansion (where $J_0 = I_0 \varepsilon^{\frac{1}{2}-(N_j + N_k)}$) with leading order \begin{equation}\label{23thm1} I_0 = - \frac{(2\pi)^{\frac{d-1}{2}}}{\sqrt{\det D^{2}_\perp(d^j + d^k) (y_0)}} b^k_{-N_k}(y_0)
\sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y_0) \eta_d e^{-\eta\cdot \nabla d^j (y_0)} b^j_{-N_j}(y_0) \; . \end{equation} Writing \begin{multline}\label{22thm1}
\sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(y_0) \eta_d e^{-\eta\cdot \nabla d^j (y_0)} = \frac{1}{2}\sum_{\eta\in{\mathbb Z}^d} \bigl( \tilde{a}_{\eta}(y_0) \eta_d e^{-\eta\cdot \nabla d^j (y_0)} + \tilde{a}_{-\eta}(y_0)(-\eta_d) e^{\eta\cdot \nabla d^j (y_0)}\bigr) \\ = \sum_{\eta\in{\mathbb Z}^d} \tilde{a}_{\eta}(y_0) \eta_d \sinh \bigl(\eta\cdot \nabla d^j (y_0)\bigr) \end{multline} where in the last step we used $\tilde{a}_\eta(y_0) = \tilde{a}_{-\eta}(y_0)$ (see \eqref{agammasym}) and inserting \eqref{22thm1} into \eqref{23thm1} gives \eqref{0thm1}. Note that all $I_k$ are indeed real (since $w_{jk}$ is real).
\nopagebreak
$\Box$
\section{Proof of Theorem \ref{wjk-expansion2}}\label{section4}
{\sl Step 1:} As in the previous proof, we start by proving that the sum in the formula \eqref{1thm1} for the interaction term $w_{jk}$ can, up to a small error, be replaced by an integral. This can be done using the following lemma, which is proven e.g. in \cite{giacomo}, Proposition C.1, using Poisson's summation formula.
\begin{Lem}\label{LemmaC1}
For $h>0$ let $f_h$ be a smooth, compactly supported function on ${\mathbb R} ^d$ with the property: there exists $N_0\in {\mathbb N}$ such that for all $\alpha\in{\mathbb N}^d, |\alpha|\geq N_0$ there exists a $h$-independent constant $C_\alpha$ such that \begin{equation}\label{1LemC1}
\int_{{\mathbb R} ^d} |\partial^\alpha f_h (y)|\, dy \leq C_\alpha\, . \end{equation} Then \begin{equation}\label{2LemC1} h^d \sum_{y\in h{\mathbb Z}^d} f_h(y) = \int_{{\mathbb R} ^d} f_h(y) \, dy + O( h^\infty)\, , \qquad (h\to 0) \; . \end{equation} \end{Lem}
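For rapidly decaying $f_h$ the error in \eqref{2LemC1} is indeed beyond all polynomial orders in $h$. A quick numerical illustration (outside the formal argument; a fixed Gaussian stands in for the compactly supported $f_h$, for which Poisson summation gives an error of size $e^{-\pi^2/h^2}$):

```python
import math

def riemann_sum(f, h, box=30.0):
    """h * sum_{y in h*Z} f(y), truncated to |y| <= box."""
    n = int(box / h)
    return h * sum(f(h * k) for k in range(-n, n + 1))

f = lambda y: math.exp(-y * y)  # integral over R equals sqrt(pi)
for h in (1.0, 0.8, 0.6):
    err = abs(riemann_sum(f, h) - math.sqrt(math.pi))
    print(h, err)  # decays like exp(-pi^2/h^2), far faster than any power of h
```

Already at $h=0.6$ the error is of order $10^{-12}$, while at $h=1$ it is of order $10^{-4}$: a super-polynomial drop, as \eqref{2LemC1} predicts.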
We shall verify that Lemma \ref{LemmaC1} can be used to evaluate the interaction matrix as given in \eqref{1thm1}. For $a$ given by \eqref{14thm1}
we claim that for any $\alpha_1\in {\mathbb N}^d$ there is a constant $C_{\alpha_1}$ such that \begin{equation}\label{1.1thm2}
\sup_{x\in{\mathbb R} ^d} \bigl| \partial^{\alpha_1}_x a(x; \varepsilon) \bigr| \leq C_{\alpha_1}\varepsilon^{\frac{1}{2}} \; . \end{equation} Clearly it suffices to prove \begin{equation}\label{14.1thm2}
\sup_{x\in{\mathbb R} ^d} \bigl| \Psi(x) \partial^{\alpha_1}_x \Op_\varepsilon(\widehat{q}_\psi) \Psi b^j(x; \varepsilon) \bigr| \leq C_{\alpha_1}\varepsilon^{\frac{1}{2}} \end{equation}
or, by Sobolev's Lemma (see e.g. \cite{Folland}), for all $\beta\in{\mathbb N}^d$ with $|\beta|\leq \frac{d}{2}+ 1$ \begin{equation}\label{15.1thm2}
\bigl\| \Psi \partial^{\beta + \alpha_1} \Op_\varepsilon(\widehat{q}_\psi) \Psi b^j(\,.\,; \varepsilon) \bigr\|_{L^2} \leq C\varepsilon^{\frac{1}{2}} \; . \end{equation}
Setting for $0\leq \ell \leq |\beta +\alpha_1|$
\[ c_\ell(\xi) := \sum_{\natop{\gamma\in{\mathbb N}^d}{|\gamma|=\ell}} \frac{1}{\gamma!}\partial_\xi^\gamma \xi^{\beta + \alpha_1} \quad\text{and}\quad
\widehat{q}_{\psi,\ell}(x, \xi; \varepsilon) := \sum_{\natop{\gamma\in{\mathbb N}^d}{|\gamma|=\ell}} \partial_x^\gamma \widehat{q}_\psi(x, \xi; \varepsilon)\; , \] we have by symbolic calculus (see e.g. \cite{martinez}, Thm.~2.7.4) \begin{align}
\partial^{\beta + \alpha_1} \Op_\varepsilon(\widehat{q}_\psi) &= \Bigl(\frac{i}{\varepsilon}\Bigr)^{|\beta + \alpha_1|} \Op_\varepsilon(c_0) \Op_\varepsilon(\widehat{q}_\psi)\nonumber\\
& = \Bigl(\frac{i}{\varepsilon}\Bigr)^{|\beta + \alpha_1|} \sum_{\ell=0}^{|\beta + \alpha_1|} \Op_\varepsilon(\widehat{q}_{\psi, \ell}) \Op_\varepsilon(c_\ell)\Bigl(\frac{\varepsilon}{i}\Bigr)^{\ell}\nonumber\\
&= \sum_{\ell=0}^{|\beta + \alpha_1|} \Op_\varepsilon(\widehat{q}_{\psi, \ell}) c_\ell (\partial_\xi)\label{16.1thm2} \end{align}
where in the last step we used that $c_\ell(\xi)$ is homogeneous of degree $|\beta + \alpha_1| - \ell$. Since $\Psi b^j$ is smooth and $\widehat{q}_\psi\in S_0^\frac{1}{2}(1)\bigl({\mathbb R} ^{2d}\bigr)$, \eqref{15.1thm2} (and thus \eqref{1.1thm2}) follows from \eqref{16.1thm2} together with the Theorem of Calderon and Vaillancourt (see e.g. \cite{dima}).\\
Then for $\varphi$ and $a$ given by \eqref{1cor1} and \eqref{14thm1} respectively and for $h=\sqrt{\varepsilon}$, we set $y=\frac{x}{h}$ and \begin{equation}\label{2.1thm2}
f_h(y) := h^\ell e^{-\varphi_h (y)} A_h(y) \quad\text{where}\quad \varphi_h(y) := \frac{\varphi (hy)}{h^2} \quad \text{and}\quad A_h(y) := a(hy; h^2)\; . \end{equation} Then for $\alpha\in{\mathbb N}^d$ \begin{equation}\label{5.1thm2}
\partial^\alpha f_h =: h^\ell g_{h,\alpha} e^{-\varphi_h} \end{equation} where $g_{h,\alpha}$ is a sum of products, where the factors are given by $\partial^{\alpha_1} A_h$ and $\partial^{\alpha_2}\varphi_h, \ldots , \partial^{\alpha_m}\varphi_h$ for partitions $\alpha_1, \ldots \alpha_m\in{\mathbb N}^d$ of $\alpha$, i.e. $\sum_r \alpha_r = \alpha$. By \eqref{1.1thm2} and \eqref{2.1thm2} we have for some $C_{\alpha_1}$ independent of $h$ \begin{equation}\label{6.1thm2}
\sup_{y\in{\mathbb R} ^d} \bigl| \partial^{\alpha_1} A_h(y) \bigr| \leq h^{1 + |\alpha_1|} C_{\alpha_1}\; . \end{equation}
In order to analyze $|\partial^{\alpha_2} \varphi_h|$, we remark that Taylor expansion at $y_0$ yields for $\beta\in {\mathbb N}^d$ \begin{multline}\label{7.1thm2}
\partial^{\beta} \varphi_h (y) = h^{|\beta| - 2} (\partial^{\beta}\varphi)(hy)
= h^{|\beta| - 2} (\partial^{\beta}\varphi)(hy_0) +
h^{|\beta| - 1} (\nabla \partial^{\beta}\varphi)|_{hy_0}(y- y_0) \\
+ h^{|\beta|} \int_0^1 (1-t) (D^2 \partial^{\beta}\varphi)|_{h(y_0 + t(y-y_0))}[y-y_0]^2 \, dt\; . \end{multline} Since for $y\in \supp A_h, y_0\in h^{-1}G_0$ the curve $t\mapsto h(y_0 + t(y-y_0))$ lies in a compact set, it follows from \eqref{7.1thm2} together with
Remark \ref{Remcor1},(3), that for some $C_{\beta}$ and for $N_\beta = \max \{0, |\beta|-2\}$ \begin{equation}\label{8.1thm2}
|\partial^{\beta} \varphi_h (y)| \leq C_{\beta} h^{N_{\beta}} \bigl( 1+ |y-y_0|^2\bigr) \, , \qquad y_0\in h^{-1}G_0\, , \; y\in \supp A_h\; . \end{equation} Thus using the above mentioned structure of $g_{h,\alpha}$ we get \begin{equation}\label{9.1thm2}
\bigl| g_{h, \alpha}(y)\bigr| \leq C_\alpha h \Bigl( 1+ \bigl|y - y_0\bigr|^{2 |\alpha|}\Bigr) \end{equation} where $C_\alpha$ is uniform for $y\in \supp A_h$ and $y_0\in h^{-1}G_0$. Taking the infimum over all $y_0$ on the right hand side of \eqref{9.1thm2} we get \begin{equation}\label{9a.1thm2}
\bigl| g_{h, \alpha}(y)\bigr| \leq C_\alpha h \Bigl( 1+ \bigl( \dist (y, h^{-1}G_0) \bigr)^{2 |\alpha|}\Bigr)\; . \end{equation} Since by Hypothesis \ref{hypgeo2a} $G$ is non-degenerate at $G_0$ we have for some $C>0$ \[ \varphi (x) \geq C \dist (x, G_0)^2 \] and therefore \begin{equation}\label{10.1thm2} \varphi_h (y) \geq C \frac{1}{h^2} \dist (hy, G_0)^2 = C \dist (y, h^{-1} G_0)^2\; . \end{equation} Combining \eqref{5.1thm2}, \eqref{9a.1thm2} and \eqref{10.1thm2} gives \begin{align}
\int_{{\mathbb R} ^d} \bigl| \partial^\alpha f_h (y) \bigr| \, dy &=
h^\ell \int_{{\mathbb R} ^d} \bigl| g_{h, \alpha} e^{- \varphi_h (y)}\bigr|\, dy\nonumber\\
&\leq C_\alpha h^{\ell + 1} \int_{\supp A_h} e^{- C \dist(y,h^{-1} G_0)^2} \Bigl( 1 + \bigl(\dist(y, h^{-1} G_0)\bigr)^{2|\alpha|}\Bigr) \, dy \nonumber\\
&=C_\alpha h^{\ell + 1 - d} \int_{\supp \Psi} e^{-\frac{C}{h^2} \dist(x, G_0)^2} \Bigl( 1 + h^{-2|\alpha|} \bigl(\dist(x, G_0)\bigr)^{2|\alpha|}\Bigr) \, dx \label{11.1thm2} \end{align} where in the last step we used the substitution $x = h y$.
Using the Tubular Neighborhood Theorem, there is a diffeomorphism \begin{equation}\label{tubk} k: \supp \Psi \rightarrow G_0\times (-\delta, \delta)^{d-\ell} \, , \quad k(x) = (s,t)\; . \end{equation} Here $\delta>0$ must be chosen adapted to $\supp \Psi$, which is an arbitrary small neighborhood of $G_0$. Denoting by $d\sigma$ the Euclidean surface element on $G_0$, the right hand side of \eqref{11.1thm2} can thus be estimated from above by \begin{align}
C'_\alpha h^{\ell + 1 - d}\int_{G_0\times (-\delta, \delta)^{d-\ell}} e^{-\frac{C}{h^2}t^2} \Bigl( 1 + \Bigl(\frac{t}{h}\Bigr)^{2|\alpha|}\Bigr) \, d\sigma(s) \, dt \nonumber\\
\leq \tilde{C}_\alpha h \int_{{\mathbb R} ^{d-\ell}} e^{-C \tau^2}\bigl( 1 + |\tau|^{2|\alpha|}\bigr) \, d\tau \leq \widehat{C}_\alpha \label{11a.1thm2} \end{align} where in the last step we used that $G_0$ was assumed to be compact and the substitution $t = \tau h$.
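The $\tau$-integral in \eqref{11a.1thm2} is a Gaussian moment and is finite for every $|\alpha|$. For instance, with $C=1$, $d-\ell=1$ and $|\alpha|=2$ one has $\int_{\mathbb R} e^{-\tau^2}(1+\tau^4)\,d\tau = \sqrt{\pi}\,(1+\tfrac34)$, since $\int_{\mathbb R}\tau^4 e^{-\tau^2}\,d\tau = \tfrac34\sqrt{\pi}$. A numerical check of this value (midpoint rule, purely illustrative):

```python
import math

def integral(f, lo=-10.0, hi=10.0, n=200000):
    """Midpoint-rule approximation of the integral of f over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

val = integral(lambda t: math.exp(-t * t) * (1.0 + t ** 4))
exact = math.sqrt(math.pi) * (1.0 + 0.75)  # since int t^4 exp(-t^2) dt = (3/4) sqrt(pi)
print(val, exact)
```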
By \eqref{11.1thm2} and \eqref{11a.1thm2} we can use Lemma \ref{LemmaC1} for $f_h$ given in \eqref{2.1thm2} and thus we have by \eqref{1thm1} together with \eqref{13thm1} and \eqref{14thm1} \begin{equation}\label{12.1thm2} w_{jk} = \varepsilon^{-\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \int_{{\mathbb R} ^d} e^{-\frac{\varphi(x)}{\varepsilon}} \bigl(\Psi b^k\bigr)(x) \bigl(\Op_\varepsilon(\widehat{q}_\psi)\Psi b^j\bigr) (x)\, dx + O\Bigl(e^{-\frac{S_{jk}}{\varepsilon}} \varepsilon^\infty \Bigr)\; . \end{equation}
{\sl Step 2:} Next we use an adapted version of stationary phase.
On $G_0$ we choose linearly independent tangent unit vector fields $E_m$, $1\leq m \leq \ell $, and linearly independent normal unit vector fields $N_m$, $\ell+1\leq m\leq d$, where we set $N_d = e_d$, the normal vector field on ${\mathbb H}_d$. Possibly shrinking $\supp \Psi$, the diffeomorphism $k$ given in \eqref{tubk} can be chosen such that for each $x\in \supp \Psi$ there exists exactly one $s\in G_0$ and $t\in (-\delta, \delta)^{d-\ell}$ such that \begin{equation}\label{1.2thm2} x= s + \sum_{m=\ell+1}^d t_{m-\ell} N_m(s) \quad\text{for}\quad k(x) = (s, t) \, . \end{equation} This follows from the proof of the Tubular Neighborhood Theorem, see e.g. \cite{hirsch}. It allows us to continue the vector fields $N_m$ from $G_0$ to $\supp \Psi$ by setting $N_m(x):= N_m(s)$, thus $N_m = \partial_{t_{m-\ell}}$. It follows that these vector fields $N_m(x)$ actually satisfy the conditions above Hypothesis \ref{hypgeo2a} (in particular, they commute). We define \[ \tilde{\varphi}:= \varphi \circ k^{-1} : G_0\times (-\delta, \delta)^{d-\ell} \rightarrow {\mathbb R} \quad\text{ with }\quad \tilde{\varphi}(s,t) := \varphi \circ k^{-1}(s,t) = \varphi (x)\; .\]
Since $\varphi(x) = d^j(x) + d^k(x) + C_0 x_d^2 - S_{jk}$ it follows from the construction above that \begin{align}\label{2.2thm2}
\tilde{\varphi}|_{k(G_0)} &= \varphi|_{G_0} = 0 \\
E_m \varphi|_{G_0} &= 0\,,\quad \text{for}\;1\leq m \leq \ell \nonumber\\
\partial_{t_m}\tilde{\varphi}|_{k(G_0)} &= N_{m+\ell} \varphi|_{G_0} = 0\, , \quad\text{for}\; 1\leq m \leq d-\ell \nonumber\\
D\varphi|_{G_0} &= 0\; .\nonumber \end{align}
By Hypothesis \ref{hypgeo2a} the transversal Hessian of the restriction of $d^j + d^k$ to ${\mathbb H}_d$ at $G_0$ is positive definite, i.e. \begin{equation}\label{1a.2thm2}
D^2_{\perp,G_0} \bigl(d^j + d^k\bigr) = \Bigl( N_m N_{m'} (d^j + d^k)|_{G_0} \Bigr)_{\ell +1\leq m, m' \leq d-1}\, >0\; . \end{equation} Analogously to the proof of Theorem \ref{wjk-expansion} we use that $d^j+d^k$ is constant along the geodesics. Thus, for any $x_0\in G_0$, the matrix $\bigl( N_r N_p (d^j + d^k)(x_0)\bigr)_{\ell + 1 \leq r,p \leq d}$ has $d-\ell - 1$ positive eigenvalues and one zero eigenvalue; in particular, its determinant is zero. Since \begin{equation}\label{5.2thm2}
N_r N_p\varphi = \begin{cases}
2 C_0 + N_d N_d (d^j + d^k) \quad\text{for}\quad (r,p)= (d,d)\\
N_r N_p (d^j + d^k)\quad\text{otherwise}\; ,
\end{cases} \end{equation}
the Hessian $\bigl( N_m N_{m'} \varphi|_{G_0} \bigr)_{\ell+1\leq m,m' \leq d}$ of $\varphi$ restricted to $G_0$ is a non-negative quadratic form. It is in fact positive definite since for any $x_0\in G_0$ \begin{multline}\label{6.2thm2}
\det \Bigl( N_m N_{m'} \varphi (x_0) \Bigr)_{\ell+1\leq m,m' \leq d} \\[2mm]
= \det \Bigl( N_m N_{m'} (d^j + d^k) (x_0) \Bigr)_{\ell+1\leq m,m' \leq d}
+ \det \begin{pmatrix} \Bigl( N_m N_{m'} (d^j + d^k) (x_0) \Bigr)_{\ell+1\leq m,m' \leq d-1} & 0 \\ * & 2 C_0 \end{pmatrix}\\[2mm]
= 2 C_0 \,\det D^2_{\perp,G_0}(d^j + d^k)(x_0) > 0 \; . \end{multline} Thus \begin{equation}\label{6a.2thm2}
D_t^2 \tilde{\varphi}|_{k(G_0)}= \Bigl( N_m N_{m'} \varphi|_{G_0} \Bigr)_{\ell+1\leq m,m' \leq d} >0\; . \end{equation}
The following lemma is an adapted version of the Morse Lemma with parameter (see e.g. Lemma 1.2.2 in \cite{Dui}).
\begin{Lem}\label{Morse}
Let $\phi\in\mathscr C^\infty \bigl(G_0\times (-\delta, \delta)^{d-\ell}\bigr)$ be such that $\phi(s,0) = 0$, $D_t\phi (s,0) = 0$ and the transversal Hessian $D^2_t\phi(s,\cdot)|_{t=0} =: Q(s)$ is non-degenerate for all $s\in G_0$. Then, for each $s\in G_0$, there is a diffeomorphism $ y(s,.): (-\delta, \delta)^{d-\ell} \rightarrow U$, where $U \subset{\mathbb R} ^{d-\ell}$ is some neighborhood of $0$, such that \begin{equation}\label{3.2thm2}
y(s,t) = t + O\bigl(|t|^2\bigr) \quad\text{as}\;\; |t|\to 0\quad\text{and}\quad \phi(s,t) = \frac{1}{2}\langle y(s,t), Q(s) y(s,t)\rangle\; . \end{equation} Furthermore, $y(s,t)$ is $\mathscr C^\infty$ in $s\in G_0$. \end{Lem}
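For illustration, consider a simple one-dimensional instance of Lemma \ref{Morse} (the coefficients $q(s)\neq 0$ and $c(s)$ are chosen only for this example): for $\phi(s,t) = \frac{1}{2} q(s) t^2 + c(s) t^3$ one may take
\[ y(s,t) = t\,\Bigl(1 + \frac{2 c(s)}{q(s)}\, t\Bigr)^{\frac{1}{2}} = t + O\bigl(|t|^2\bigr)\, , \]
since then $\frac{1}{2} q(s)\, y(s,t)^2 = \frac{1}{2} q(s) t^2 + c(s) t^3 = \phi(s,t)$ for $|t|$ small, exhibiting the normal form \eqref{3.2thm2} with $Q(s) = q(s)$.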
The proof of Lemma \ref{Morse} follows the proof of the Morse-Palais Lemma in \cite{lang}, noting that the construction depends smoothly on the parameter $s\in G_0$.
By \eqref{2.2thm2} and \eqref{6a.2thm2}, the phase function $\tilde{\varphi}$ satisfies the assumptions on $\phi$ given in Lemma \ref{Morse}. We thus can define the diffeomorphism $h:= \mathbf{1} \times y: G_0\times (-\delta, \delta)^{d-\ell}\rightarrow G_0 \times U$ for $y$ constructed with respect to $\tilde{\varphi}$ as in Lemma \ref{Morse}. Using the diffeomorphism $k: \supp \Psi\rightarrow G_0\times (-\delta, \delta)^{d-\ell}$ constructed above (see \eqref{1.2thm2}), we set $g(x)= h\circ k (x) = (s, y)$ (then $g^{-1}(s,0) = s$ holds for any $s\in G_0$). Thus \begin{equation}\label{10.2thm2}
\varphi \bigl(g^{-1}(s,y)\bigr) = \frac{1}{2} \langle y, Q(s) y\rangle \end{equation} and setting $x=g^{-1}(s,y)$ we obtain by \eqref{12.1thm2}, using the notation \eqref{14thm1}, modulo $O\bigl(e^{-\frac{S_{jk}}{\varepsilon}} \varepsilon^\infty \bigr)$ \begin{align} w_{jk} & \equiv \varepsilon^{-\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \int_{\supp \Psi} e^{-\frac{\varphi(x)}{\varepsilon}} a(x;\varepsilon)\, dx \nonumber \\
& = \varepsilon^{-\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \int_{G_0}\int_{U} e^{-\frac{1}{2\varepsilon} \langle y, Q(s) y\rangle} a(g^{-1}(s,y); \varepsilon) J(s,y) \, dy \, d\sigma(s) \label{4.2thm2} \end{align} where $d\sigma$ is the Euclidean surface element on $G_0$ and $J(s,y)= \det D_y g^{-1}(s, .)$ denotes the Jacobi determinant for the diffeomorphism \[ g^{-1}(s, .): U \rightarrow \Span \bigl(N_{\ell+1}(s), \ldots, N_d(s)\bigr) \]
and $Q(s)=D^2_t\tilde{\varphi}(s,\cdot)|_{t=0}$ denotes the transversal Hessian of $\tilde{\varphi}$ as given in \eqref{6a.2thm2}. From the construction of $g$ and \eqref{1.2thm2} it follows that $J(s,0) = 1$ for all $s\in G_0$.
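For orientation we note that the leading term of the subsequent expansion is just the Gaussian integral: since $Q(s) > 0$,
\[ \int_{{\mathbb R} ^{d-\ell}} e^{-\frac{1}{2\varepsilon}\langle y, Q(s) y\rangle}\, dy = \bigl(2\pi\varepsilon\bigr)^{\frac{d-\ell}{2}} \bigl(\det Q(s)\bigr)^{-\frac{1}{2}}\, , \]
so replacing $a\bigl(g^{-1}(s,y);\varepsilon\bigr) J(s,y)$ in \eqref{4.2thm2} by its value at $y=0$ already produces the $\nu = 0$ term in \eqref{9.2thm2}.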
By the stationary phase formula with respect to $y$ in \eqref{4.2thm2}, we get modulo $O\bigl(e^{-\frac{S_{jk}}{\varepsilon}} \varepsilon^\infty \bigr)$ \begin{align}
w_{jk} &= \varepsilon^{-\frac{d}{2}} e^{-\frac{S_{jk}}{\varepsilon}} \bigl(\varepsilon 2 \pi\bigr)^{\frac{d-\ell}{2}} \int_{G_0} \bigl(\det Q(s)\bigr)^{-\frac{1}{2}} \sum_{\nu=0}^\infty \frac{\varepsilon^\nu}{\nu !}\Bigl( \langle \partial_y, Q^{-1}(s) \partial_y\rangle^\nu \tilde{a} J\Bigr)(s,0; \varepsilon)\, d\sigma(s) \nonumber\\ &= \varepsilon^{-\frac{\ell}{2}} e^{-\frac{S_{jk}}{\varepsilon}}\bigl(2 \pi\bigr)^{\frac{d-\ell}{2}}\sum_{\nu=0}^\infty \varepsilon^\nu \int_{G_0}B_{\nu} (s)\, d\sigma(s)\label{9.2thm2} \end{align} where $\tilde{a}(.; \varepsilon) := a(.; \varepsilon)\circ g^{-1}$ and, for any $s\in G_0$, $B_{0}(s)$ is given by the leading order of \begin{equation}
\Bigl(\det Q(s)\Bigr)^{-\frac{1}{2}} a(g^{-1}(s,0); \varepsilon)
= \Bigl|2C_0 \det D^2_{\perp,G_0}\bigl(d^j + d^k\bigr)(s)\Bigr|^{-\frac{1}{2}} a(s; \varepsilon)\, ,\qquad \label{7.2thm2} \end{equation} using \eqref{6.2thm2}, \eqref{6a.2thm2} and identifying $s\in G_0$ with a point in ${\mathbb R} ^d_x$.
We now use the definition of $a$ in \eqref{14thm1}, the expansion \eqref{5thm1} of $\Op_{\varepsilon,0}(\widehat{q}_\psi) \Psi b^j$ and the fact that \eqref{20thm1} and \eqref{21thm1} also hold for any $y_0\in G_0$ in the setting of Hypothesis \ref{hypgeo2} to get for $s\in G_0$ \begin{equation}\label{8.2thm2}
B_{0} (s)
=\sqrt{\frac{\varepsilon}{2\pi}} \Bigl|\det D^2_{\perp, G_0}\bigl(d^j + d^k\bigr)(s)\Bigr|^{-\frac{1}{2}} \varepsilon^{-(N_j + N_k)} b^k_{-N_k}(s) \sum_{\eta\in{\mathbb Z}^d} \tilde{a}_\eta(s) \eta e^{-\eta\cdot \nabla d^j (s)} b^j_{-N_j} (s) \; . \end{equation} Combining \eqref{8.2thm2} and \eqref{9.2thm2} and using \eqref{22thm1} completes the proof.
\nopagebreak
$\Box$
\section{Some more results for $w_{jk}$}\label{section5}
In this section, we derive some formulae and estimates for the interaction term $w_{jk}$ and its leading order term, assuming only Hypotheses \ref{hyp1} to \ref{hypkj}, i.e. without any assumptions on the geodesics between the potential minima $x^j$ and $x^k$.
We combine the fact that the relevant jumps in the interaction term are those taking place in a small neighborhood of ${\mathbb H}_d\cap E$, proven in \cite{kleinro4}, Proposition 1.7, with the results on approximate eigenfunctions proven in \cite{kleinro5}.
\begin{prop}\label{wjkasymp} Assume that Hypotheses \ref{hyp1} to \ref{hypkj} hold and let $\widehat{v}_m^\varepsilon,\, m=j,k,$ denote the approximate eigenfunctions given in \eqref{hatvm}. For $\delta>0$, we set \begin{equation}\label{deltagammac} \delta\Gamma := \delta{\mathbb H}_{d,R}\cap E\, , \quad \widehat{\delta\Gamma} := \delta \Gamma \cap {\mathbb H}_{d,R} \quad\text{and}\quad \widehat{\delta\Gamma}^{c} := \delta \Gamma \cap {\mathbb H}_{d,R}^{c} \end{equation} where $\delta {\mathbb H}_{d,R}$ is defined in \eqref{deltaA}. Then the interaction term is given by \begin{equation}\label{0prop51} w_{jk} =
\skpd{ \widehat{v}_j^\varepsilon}{ \mathbf{1}_{\widehat{\delta\Gamma}}T_\varepsilon \mathbf{1}_{\widehat{\delta\Gamma}^c}
\widehat{v}_k^\varepsilon} - \skpd{\mathbf{1}_{\widehat{\delta\Gamma}}T_\varepsilon \mathbf{1}_{\widehat{\delta\Gamma}^c} \widehat{v}_j^\varepsilon}{\widehat{v}_k^\varepsilon} + O\Bigl(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\Bigr)\; . \end{equation} Moreover, setting \begin{equation}\label{tdelta} \tilde{t}^\delta (x, \xi) := - \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} \mathbf{1}_{\widehat{\delta\Gamma}^c}(x+\gamma) a^{(0)}_\gamma (x) \cosh \frac{\gamma\cdot \xi}{\varepsilon} \; , \end{equation} the leading order of $w_{jk}$ can be written as \begin{equation}\label{theowjk1} \sum_{x\in \widehat{\delta\Gamma}_\varepsilon} \widehat{v}^\varepsilon_j(x) \widehat{v}^\varepsilon_k(x) \left( \tilde{t}^\delta (x,\nabla d^j(x)) - \tilde{t}^\delta (x,\nabla d^k(x))\right) \; . \end{equation} If $\widehat{v}^\varepsilon_j$ and $\widehat{v}^\varepsilon_k$ are both strictly positive in $\widehat{\delta\Gamma}_\varepsilon$, we have modulo $O\left(\varepsilon^\infty e^{-\frac{S_{jk}}{\varepsilon}}\right)$ \begin{multline}\label{theowjk2} \sum_{x\in \widehat{\delta\Gamma}_\varepsilon} \widehat{v}^\varepsilon_j(x) \widehat{v}^\varepsilon_k(x) \nabla_\xi \tilde{t}^\delta (x,\nabla d^k(x))(\nabla d^j(x) - \nabla d^k(x)) \\ \leq w_{jk} \leq \sum_{x\in \widehat{\delta\Gamma}_\varepsilon} \widehat{v}^\varepsilon_j(x) \widehat{v}^\varepsilon_k(x) \nabla_\xi \tilde{t}^\delta (x,\nabla d^j(x))(\nabla d^j(x) - \nabla d^k(x))\; . \end{multline}
\end{prop}
We remark that the translation operator $\mathbf{1}_{\widehat{\delta\Gamma}}T_\varepsilon \mathbf{1}_{\widehat{\delta\Gamma}^c}$ is non-zero only for translations mapping points $x\in E$ with $0\leq x_d\leq \delta$ to points $x+\gamma\in E$ with $-\delta \leq (x+\gamma)_d < 0$. Thus each translation crosses the hyperplane ${\mathbb H}_d$ from right to left.
\begin{proof} Since by Hypothesis \ref{hypkj} each of the two wells has exactly one eigenvalue within the spectral interval $I_\varepsilon$, we have $\widehat{v}_j^\varepsilon:= \tilde{v}_{j,1}^\varepsilon = \widehat{v}_{j,1}^\varepsilon$ in the setting of \cite{kleinro5}, Theorem 1.8. Setting \begin{equation}\label{defA1} A:= \mathbf{1}_{\widehat{\delta\Gamma}}T_\varepsilon \mathbf{1}_{\widehat{\delta\Gamma}^c} - \mathbf{1}_{\widehat{\delta\Gamma}^c}T_\varepsilon \mathbf{1}_{\widehat{\delta\Gamma}}\; , \end{equation} we have by \cite{kleinro5}, Proposition 1.7, \begin{eqnarray}
\left| w_{jk} - \skpd{\widehat{v}^\varepsilon_j}{A \widehat{v}^\varepsilon_k} \right| &=&
\left| \skpd{v_j}{A v_k} - \skpd{\widehat{v}^\varepsilon_j}{A \widehat{v}^\varepsilon_k} \right| + \expord{-(S_0 + a - \delta)}\nonumber \\
&\leq& \left| \skpd{v_j-\widehat{v}^\varepsilon_j}{A v_k}\right| +
\left|\skpd{\widehat{v}^\varepsilon_j}{A (v_k- \widehat{v}^\varepsilon_k)} \right| + \expord{-(S_0 + a - \delta)}\; .
\label{wjkminusbeide} \end{eqnarray} From \eqref{defA1} and the triangle inequality for the Finsler distance $d$ it follows that \begin{multline*}
\left| \skpd{v_j-\widehat{v}^\varepsilon_j}{A v_k}\right| =
\Bigl|\sum_{x\in(\varepsilon {\mathbb Z})^d}\sum_{\gamma\in(\varepsilon {\mathbb Z})^d} \left[\mathbf{1}_{\widehat{\delta\Gamma}}(x)\mathbf{1}_{\widehat{\delta\Gamma}^c}(x+\gamma) - \mathbf{1}_{\widehat{\delta\Gamma}^c}(x)\mathbf{1}_{\widehat{\delta\Gamma}}(x+\gamma) \right]\times\\ \times\, e^{\frac{d^j(x)}{\varepsilon}}e^{-\frac{d^j(x)}{\varepsilon}}
\left(v_j(x) - \widehat{v}^\varepsilon_j(x)\right) a_\gamma(x) e^{\frac{d^k(x)}{\varepsilon}}e^{-\frac{d^k(x)}{\varepsilon}} v_k(x+\gamma)\Bigr| \\
\leq e^{-\frac{d(x^j, x^k)}{\varepsilon}} \left\| e^{\frac{d^j}{\varepsilon}}(v_j - \widehat{v}^\varepsilon_j)
\right\|_{\ell^2(\delta\Gamma)}
\left\| e^{\frac{d^k}{\varepsilon}}v_k
\right\|_{\ell^2(\delta\Gamma)}
\sum_{|\gamma|<B} \left\| a_\gamma e^{\frac{d(.,.+\gamma)}{\varepsilon}}
\right\|_{\ell^\infty(\delta\Gamma)}\; . \end{multline*} In the last step we used that for some $B>0$
we have $|\gamma|<B$ if $x\in \widehat{\delta\Gamma}$ and $x+\gamma\in\widehat{\delta\Gamma}^c$ and vice versa. Therefore by \cite{kleinro5}, Theorem 1.8, Proposition 3.1 and by \eqref{agammasupnorm2} we have \begin{equation}\label{vjminusujinw}
\left| \skpd{v_j-\widehat{v}^\varepsilon_j}{A v_k}\right| = O\left(e^{-\frac{S_{jk}}{\varepsilon}} \varepsilon^\infty\right)\; . \end{equation} The second summand on the right hand side of \eqref{wjkminusbeide} can be estimated similarly. This proves \eqref{0prop51}.\\
For the next step, we remark that by Hypothesis \ref{hyp1}, as a function on the cotangent bundle $T^*\delta \Gamma$, the symbol $\tilde{t}^\delta$ is hyperregular (see \cite{kleinro}).
Setting $\tilde{b}^\ell:= b^\ell_{-N_\ell}$ for $\ell\in\{j,k\}$, \eqref{0prop51} leads to \begin{multline}\label{wjk11} w_{jk} \equiv \sum_{x\in \widehat{\delta\Gamma}_\varepsilon}\sum_{\natop{\gamma\in(\varepsilon {\mathbb Z})^d}{x+\gamma \in \widehat{\delta\Gamma}_\varepsilon^c}} a^{(0)}_\gamma(x) \varepsilon^{\frac{d}{2}-N_j - N_k} \left( \tilde{b}^j(x) e^{-\frac{d^{j}(x)}{\varepsilon}} \tilde{b}^k(x+\gamma) e^{-\frac{d^{k}(x+\gamma)}{\varepsilon}}\right.\\
\left. - \tilde{b}^j(x+\gamma) e^{-\frac{d^{j}(x+\gamma)}{\varepsilon}} \tilde{b}^k(x) e^{-\frac{d^{k}(x)}{\varepsilon}}\right)\; . \end{multline}
We split the sum over $\gamma$ into the parts $A_1(x)$ with $|\gamma|\leq 1$ and $A_2(x)$ with $|\gamma|>1$. Then it follows at once from \eqref{agammasum} that for any $B>0$ and some $C>0$ \begin{equation}\label{wjk17}
\Bigl| \sum_{x\in \widehat{\delta\Gamma}_\varepsilon}A_2(x)\Bigr| \leq C e^{-\frac{B}{\varepsilon}}\; . \end{equation} To analyze $A_1(x)$, we use Taylor expansion at $x$, yielding for $\ell=j,k$ \begin{equation}\label{wjk15}
\sum_{\natop{\gamma \in (\varepsilon {\mathbb Z})^d}{|\gamma|\leq 1}} \mathbf{1}_{\widehat{\delta\Gamma}^c}(x+\gamma) a^{(0)}_\gamma(x) \tilde{b}^\ell (x+\gamma) e^{-\frac{d^{\ell}(x+\gamma)}{\varepsilon}} = - \tilde{b}^\ell(x) e^{-\frac{1}{\varepsilon}d^\ell(x)} \tilde{t}^\delta(x, \nabla d^\ell(x)) + R_1(x) \end{equation} where, using the notation $\gamma = \varepsilon \eta$ for $\eta\in{\mathbb Z}^d$ and $\tilde{a}_\eta = a^{(0)}_{\varepsilon\eta}$, the remainder $R_1(x)$ can for some $C>0$ and any $B>0$ be estimated by \begin{align}\label{wjk16}
\bigl| R_1(x) \bigr| &= e^{-\frac{d^{\ell}(x)}{\varepsilon}} \Bigl| \sum_{\natop{\eta \in {\mathbb Z}^d}{|\eta|\leq \frac{1}{\varepsilon}}}
\mathbf{1}_{\widehat{\delta\Gamma}^c}(x+\varepsilon\eta) \varepsilon \eta\cdot \nabla \tilde{b}^\ell (x) e^{\eta\cdot\nabla d^\ell(x)} \tilde{a}_\eta(x) (1 + O(1))\Bigr| \\
&\leq \varepsilon C \sum_{\eta\in{\mathbb Z}^d} |\eta| e^{-B|\eta|} \leq \varepsilon C \int_{{\mathbb R} ^d} |\eta| e^{-B|\eta|}\, d\eta \leq C \varepsilon \; . \end{align} Inserting \eqref{wjk17}, \eqref{wjk15} and \eqref{wjk16} into \eqref{wjk11} yields \[ w_{jk} \equiv \sum_{x\in\widehat{\delta\Gamma}_\varepsilon} \varepsilon^{\frac{d}{2}-N_j-N_k} \tilde{b}^j(x)\tilde{b}^k (x) e^{-\frac{1}{\varepsilon}(d^j(x) + d^k(x))} \left( \tilde{t}^\delta(x, \nabla d^j(x)) - \tilde{t}^\delta(x, \nabla d^k(x))\right) + O(\varepsilon)\, , \] which proves \eqref{theowjk1}.
To show \eqref{theowjk2}, we use that for any convex function $f$ on ${\mathbb R} ^d$ \[ \nabla f(\eta)(\xi -\eta) \leq f(\xi) - f(\eta) \leq \nabla f(\xi) (\xi - \eta)\, , \qquad \eta,\xi\in{\mathbb R} ^d\; .\] Thus for $\widehat{v}^\varepsilon_j$ and $\widehat{v}^\varepsilon_k$ both strictly positive in $\widehat{\delta\Gamma}$, \eqref{theowjk2} follows from the convexity of $\tilde{t}^\delta$.
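Spelled out, this intermediate step reads as follows: applying the convexity inequality for fixed $x\in\widehat{\delta\Gamma}_\varepsilon$ to the (by the above assertion convex) function $f(\xi) = \tilde{t}^\delta(x, \xi)$ with $\xi = \nabla d^j(x)$ and $\eta = \nabla d^k(x)$ gives \begin{multline*}
\nabla_\xi \tilde{t}^\delta\bigl(x, \nabla d^k(x)\bigr)\bigl(\nabla d^j(x) - \nabla d^k(x)\bigr) \leq \tilde{t}^\delta\bigl(x, \nabla d^j(x)\bigr) - \tilde{t}^\delta\bigl(x, \nabla d^k(x)\bigr) \\
\leq \nabla_\xi \tilde{t}^\delta\bigl(x, \nabla d^j(x)\bigr)\bigl(\nabla d^j(x) - \nabla d^k(x)\bigr)\, ; \end{multline*}
multiplying by $\widehat{v}^\varepsilon_j(x)\, \widehat{v}^\varepsilon_k(x) > 0$ and summing over $x\in\widehat{\delta\Gamma}_\varepsilon$ turns \eqref{theowjk1} into \eqref{theowjk2}.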
\end{proof}
\begin{appendix}
\section{Pseudo-Differential operators in the discrete setting}\label{app1}
We introduce and analyze pseudo-differential operators associated to symbols which are $2\pi$-periodic with respect to $\xi$ (for earlier results see also \cite{kleinro2}).
Let ${\mathbb T}^d := {\mathbb R} ^d/(2\pi){\mathbb Z}^d$ denote the $d$-dimensional torus; without further mention, we identify functions on ${\mathbb T}^d$ with $2\pi$-periodic functions on ${\mathbb R} ^d$.
\begin{Def}\label{pseudo} \begin{enumerate} \item An order function on ${\mathbb R} ^N$ is a function $m:{\mathbb R} ^{N} \rightarrow (0, \infty)$ such that there exist $C>0, M\in {\mathbb N}$ such that \[ m(z_1) \leq C \langle z_1-z_2\rangle^M m(z_2) \, , \qquad z_1, z_2\in {\mathbb R} ^{N} \]
where $\langle x \rangle := \sqrt{1+|x|^2}$. \item A function $p\in \mathscr C^\infty \bigl({\mathbb R} ^{N}\times (0, 1]\bigr)$ is an element of the symbol class $S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{N}\bigr)$ for some order function $m$ on ${\mathbb R} ^N$, if for all $\alpha\in {\mathbb N}^{N}$ there is a constant $C_\alpha >0$ such that
\[ \Bigl| \partial^\alpha p (z; \varepsilon)\Bigr| \leq C_\alpha \varepsilon^{k-\delta |\alpha|} m(z)\, , \qquad z\in{\mathbb R} ^N \] uniformly for $\varepsilon\in (0,1]$. On $S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{N}\bigr)$ we define the Fr\'echet-seminorms \begin{equation}\label{Frechet-Norm}
\|p\|_{\alpha} := \sup_{z\in{\mathbb R} ^N, 0<\varepsilon\leq 1}\frac{\Bigl| \partial^\alpha p (z; \varepsilon)\Bigr|}{\varepsilon^{k-\delta |\alpha|} m(z)}\, , \quad \alpha \in{\mathbb N}^N\; . \end{equation} We define the symbol class $S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{N}\times {\mathbb T}^d\bigr)$ by identification of $\mathscr C^\infty ({\mathbb T}^d)$ with the $2\pi$-periodic functions in $\mathscr C^\infty ({\mathbb R} ^d)$. \item To $p\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$ we associate a pseudo-differential operator $\widetilde{\Op}_\varepsilon^{{\mathbb T}}(p): {\mathcal K}\left((\varepsilon {\mathbb Z})^d\right) \longrightarrow {\mathcal K}'\left((\varepsilon {\mathbb Z})^d\right)$ setting \begin{equation}\label{psdo3dTorus} \widetilde{\Op}_\varepsilon^{{\mathbb T}}(p)\, v(x; \varepsilon) := (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d} e^{\frac{i}{\varepsilon}(y-x)\xi} p(x, y ,\xi;\varepsilon)v(y) \, d\xi \end{equation} where \begin{equation}\label{kompaktge}
{\mathcal K}\left((\varepsilon {\mathbb Z})^d\right):=\{ u: (\varepsilon {\mathbb Z})^d\rightarrow {\mathbb C}\; |\; u~\mbox{has compact support}\} \end{equation} and ${\mathcal K}'\left((\varepsilon {\mathbb Z})^d\right):= \{f: (\varepsilon {\mathbb Z})^d\rightarrow {\mathbb C}\ \} $ is dual to ${\mathcal K}\left((\varepsilon {\mathbb Z})^d\right)$ by use of the scalar product $\skpd{u}{v}:= \sum_x \bar{u}(x)v(x)$ . \item For $t\in [0,1]$ and $q\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$ the associated pseudo-differential operator $\Op_{\varepsilon,t}^{\mathbb T} (q)$ is defined by \begin{equation}\label{psdo2dTorus} \Op_{\varepsilon,t}^{{\mathbb T}}(q)\, v(x; \varepsilon) := (2\pi)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d} e^{\frac{i}{\varepsilon}(y-x)\xi} q((1-t)x + ty , \xi; \varepsilon) v(y) \, d\xi \end{equation} for any $v\in\mathcal{K}((\varepsilon {\mathbb Z})^d)$ and we set $\Op_{\varepsilon, 0}^{\mathbb T} (q) =: \Op_\varepsilon^{\mathbb T}(q)$. \item To $p\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{3d}\bigr)$ we associate a pseudo-differential operator $\widetilde{\Op}_\varepsilon (p): \mathscr C_0^\infty ({\mathbb R} ^d) \longrightarrow \mathcal{D}'({\mathbb R} ^d)$ setting \begin{equation}\label{psdo3d} \widetilde{\Op}_\varepsilon (p)\, v(x; \varepsilon) := (2\pi\varepsilon )^{-d} \int_{{\mathbb R} ^{2d}} e^{\frac{i}{\varepsilon}(y-x)\xi} p(x, y ,\xi;\varepsilon)v(y) \, dy \, d\xi \; . 
\end{equation} \item For $t\in [0,1]$ and $q\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{2d}\bigr)$ the associated pseudo-differential operator $\Op_{\varepsilon,t} (q)$ is defined by \begin{equation}\label{psdo2dt} \Op_{\varepsilon,t} (q)\, v(x; \varepsilon) := (2\pi\varepsilon )^{-d} \int_{{\mathbb R} ^{2d}} e^{\frac{i}{\varepsilon}(y-x)\xi} q((1-t)x + ty , \xi; \varepsilon) v(y) \, dy \, d\xi \, , \qquad v\in\mathscr C_0^\infty({\mathbb R} ^d) \end{equation} and we set $\Op_{\varepsilon,0}(q) =: \Op_\varepsilon(q)$. \end{enumerate} \end{Def}
Standard arguments show that $\widetilde{\Op}_\varepsilon(p)$ actually maps $\mathscr C_0^\infty({\mathbb R} ^d)$ into $\mathscr C^\infty({\mathbb R} ^d)$. Moreover, the seminorms given in \eqref{Frechet-Norm} induce the structure of a Fr\'echet-space in $S^k_\delta(m)({\mathbb R} ^N)$.
In \cite{kleinro2} we discussed properties of pseudo-differential operators $\Op_\varepsilon^{\mathbb T}(.)$. In particular we showed that, for a symbol $q\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{2d}\bigr)$ which is $2\pi$-periodic with respect to $\xi$, the restriction of $\Op_\varepsilon (q)$ to $\mathcal{K}\bigl((\varepsilon {\mathbb Z})^d\bigr)$ coincides with $\Op_\varepsilon^{\mathbb T} (q)$.
In the next proposition we show that this statement also holds in the more general case of $\widetilde{\Op}_\varepsilon^{\mathbb T}$ and $\widetilde{\Op}_\varepsilon$.
\begin{prop}\label{prop1app} For some order function $m$ on ${\mathbb R} ^{3d}$, let $p\in S^k_\delta \bigl(m\bigr)\bigl({\mathbb R} ^{3d}\bigr)$ satisfy $p(x, y, \xi; \varepsilon) = p(x, y, \xi + 2\pi\eta; \varepsilon)$ for any $\eta\in{\mathbb Z}^d, \xi, x, y\in {\mathbb R} ^d$ and $\varepsilon\in (0,1]$. Then $p\in S^k_\delta \bigl(m\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$ and using the restriction map \begin{equation}\label{restrict}
r_\varepsilon: \mathscr C_0^\infty({\mathbb R} ^d) \rightarrow \mathcal{K}((\varepsilon {\mathbb Z})^d)\, , \quad r_\varepsilon(u) = u|_{(\varepsilon {\mathbb Z})^d} \end{equation} we have \begin{equation}\label{0prop1app} r_\varepsilon \circ \widetilde{\Op}_{\varepsilon} (p)\, u = \widetilde{\Op}_\varepsilon^{\mathbb T} (p)\, r_\varepsilon(u) \, , \qquad u\in \mathscr C_0^\infty({\mathbb R} ^d)\; . \end{equation} \end{prop}
\begin{proof} For $x\notin (\varepsilon {\mathbb Z})^d$ both sides of \eqref{0prop1app} are zero, so we choose $x\in(\varepsilon {\mathbb Z})^d$. Then for $u\in \mathscr C_0^\infty({\mathbb R} ^d)$, using the $\varepsilon$-scaled Fourier transform \begin{equation}\label{fourier} F_\varepsilon u (x) =\sqrt{2\pi}^{-d} \int_{{\mathbb R} ^d} e^{-\frac{i}{\varepsilon}x\xi} u(\xi) \, d\xi\; , \end{equation} we can write \begin{equation}\label{1prop1app} \widetilde{\Op}_\varepsilon (p) u (x; \varepsilon) = \bigl(\varepsilon\sqrt{2\pi}\bigr)^{-d} \int_{{\mathbb R} ^d} \bigl( F_\varepsilon p (x, y, \cdot, \varepsilon)\bigr)(x-y) u(y)\, dy\; . \end{equation} Since for any $2\pi$-periodic function $g\in\mathscr C^\infty ({\mathbb R} ^d)$ the Fourier transform is given by \begin{equation}\label{opall1} F_\varepsilon g = \left(\frac{\varepsilon}{\sqrt{2\pi}}\right)^d \sum_{z\in(\varepsilon {\mathbb Z})^d} \delta_z c_z\, , \quad\text{where}\quad c_z:= \int_{[-\pi,\pi]^d} e^{-\frac{i}{\varepsilon}z\mu} g(\mu)\, d\mu \; , \end{equation}
(see e.g. \cite{hormander2}), we formally get \begin{align} \text{rhs} \eqref{1prop1app} &= \bigl(\varepsilon\sqrt{2\pi}\bigr)^{-d} \int_{{\mathbb R} ^d} \left(\frac{\varepsilon}{\sqrt{2\pi}}\right)^d \sum_{z\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi, \pi]^d} e^{-\frac{i}{\varepsilon}z\mu} p (x, y, \mu ; \varepsilon)\, d\mu \delta_z (x-y) u(y)\, dy\nonumber\\ &= \bigl(2\pi\bigr)^{-d} \sum_{z\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi, \pi]^d} \int_{{\mathbb R} ^d} e^{-\frac{i}{\varepsilon}z\mu} p (x, y, \mu ; \varepsilon) \delta_z (x-y) u(y)\, dy\, d\mu\nonumber\\ &= \bigl(2\pi\bigr)^{-d} \sum_{z\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi, \pi]^d} e^{-\frac{i}{\varepsilon}z\mu} p (x, x-z, \mu ; \varepsilon) u(x-z)\, d\mu\; .\label{2prop1app} \end{align} With the substitution $y=x-z$ and $\xi = \mu$ we get by \eqref{1prop1app} and \eqref{2prop1app} \[ \widetilde{\Op}_\varepsilon (p) u(x; \varepsilon) =
\bigl(2\pi\bigr)^{-d} \sum_{y\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi, \pi]^d} e^{-\frac{i}{\varepsilon}(x-y)\xi} p (x, y, \xi ; \varepsilon) u(y)\, d\xi = \widetilde{\Op}_\varepsilon^{\mathbb T} (p) u(x; \varepsilon) \] proving the stated result.
\end{proof}
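\begin{rem} As a plausibility check of \eqref{opall1} (added for illustration, in dimension $d=1$): for $g(\mu) = e^{ik\mu}$ with $k\in{\mathbb Z}$ one computes $c_z = \int_{-\pi}^{\pi} e^{-\frac{i}{\varepsilon}z\mu} e^{ik\mu}\, d\mu = 2\pi\, \delta_{z, \varepsilon k}$ for $z\in\varepsilon{\mathbb Z}$, so the right hand side of \eqref{opall1} equals $\sqrt{2\pi}\,\varepsilon\, \delta_{\varepsilon k}$. On the other hand, \eqref{fourier} gives in $\mathcal{D}'({\mathbb R} )$
\[ F_\varepsilon e^{ik\,\cdot}\, (x) = \frac{1}{\sqrt{2\pi}} \int_{{\mathbb R} } e^{-\frac{i}{\varepsilon}x\xi} e^{ik\xi}\, d\xi = \sqrt{2\pi}\,\varepsilon\, \delta(x - \varepsilon k)\, , \]
in agreement with \eqref{opall1}. \end{rem}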
\begin{rem}\label{remprop1app} Let $m$ be an order function on ${\mathbb R} ^{2d}$ and $p\in S^k_\delta\bigl(m\bigr)\bigl({\mathbb R} ^{2d}\bigr)$ a symbol. Then, setting $\tilde{p}_t (x, y, \xi; \varepsilon) := p( tx + (1-t) y, \xi; \varepsilon)$ for $t\in[0,1]$, we have $\widetilde{\Op}_\varepsilon (\tilde{p}_t) = \Op_{\varepsilon, t} (p)$. Thus the $t$-quantization can be seen as a special case of the general quantization.
Moreover, if $p$ is periodic in $\xi$, i.e. if $p(x, \xi; \varepsilon) = p(x, \xi + 2\pi\eta; \varepsilon)$ for any $\eta\in{\mathbb Z}^d, \xi,x\in {\mathbb R} ^d$ and $\varepsilon\in (0,\varepsilon_0]$, then $p\in S^k_\delta\bigl(m\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$, \[ r_\varepsilon \circ \Op_{\varepsilon,t} (p) (u) = \Op_{\varepsilon,t}^{\mathbb T} (p) \circ r_\varepsilon (u) \, , \qquad u\in \mathscr C_0^\infty({\mathbb R} ^d) \] and $\widetilde{\Op}_\varepsilon^{\mathbb T} (\tilde{p}_t) = \Op_{\varepsilon,t}^{\mathbb T} (p)$. \end{rem}
\begin{rem} For $a\in S_\delta^k\bigl(\langle \xi\rangle^\ell, {\mathbb R} ^{3d}\bigr)$ the operator $\widetilde{\Op}_\varepsilon (a)$ is continuous: $\mathscr{S}({\mathbb R} ^d) \rightarrow \mathscr{S}({\mathbb R} ^d)$ (see e.g. \cite{martinez}) and, similar to Lemma A.2 in \cite{kleinro2}, this result implies that $\widetilde{\Op}_\varepsilon^{\mathbb T}(a)$ is continuous: $s((\varepsilon {\mathbb Z})^d) \rightarrow s((\varepsilon {\mathbb Z})^d)$ by use of Proposition \ref{prop1app}. \end{rem}
The following proposition gives a relation between the different quantizations for symbols which are periodic with respect to $\xi$. The proof is partly based on \cite{martinez}, where the result is shown for symbols in $S_0^0\bigl(\langle \xi\rangle^m\bigr)\bigl({\mathbb R} ^{3d}\bigr)$.
\begin{prop}\label{prop2app} For $0\leq \delta < \frac{1}{2}$, let $a\in S_\delta^k\bigl(m\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$ and $t\in [0,1]$. Then there exists a unique symbol $a_t\in S_\delta^k\bigl(\tilde{m}\bigr)\bigl({\mathbb R} ^d\times {\mathbb T}^d\bigr)$, where $\tilde{m}(x,\xi) := m(x, x, \xi)$, such that \begin{equation}\label{0prop2app} \widetilde{\Op}_\varepsilon^{\mathbb T} (a) = \Op_{\varepsilon, t}^{\mathbb T} (a_t)\; . \end{equation} Moreover the mapping $S^k_\delta(m)\ni a\mapsto a_t\in S^k_\delta(\tilde{m})$ is continuous in its Fr\'echet-topology induced from \eqref{Frechet-Norm}. The symbol $a_t$ can be written as \begin{equation}\label{1prop2app} a_t (x, \xi; \varepsilon) = (2\pi)^{-d} \sum_{\theta\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi,\pi]^d} e^{\frac{i}{\varepsilon}(\xi - \mu) \theta} a\bigl(x+t\theta, x-(1-t)\theta, \mu; \varepsilon\bigr)\, d\mu \end{equation} and has the asymptotic expansion \begin{equation}\label{2prop2app}
a_t (x, \xi; \varepsilon) \sim \sum_{j=0}^\infty \varepsilon^j a_{t,j}(x, \xi) \, , \qquad a_{t,j}(x, \xi) := \sum_{\natop{\alpha\in{\mathbb N}^d}{|\alpha|=j} } \frac{i^{j}}{\alpha!} \partial_\xi^\alpha
\partial_z^\alpha a \bigl(x + t z, x - (1-t) z, \xi; \varepsilon\bigr)|_{z=0}\; . \end{equation} If we write $a_t(x, \xi; \varepsilon) = \sum_{j\leq N-1} \varepsilon^j a_{t,j}(x, \xi) + S_N(a) (x, \xi; \varepsilon)$ then $S_N(a)\in S_\delta^{k + N(1-2\delta)}\bigl(\tilde{m}\bigr)\bigl({\mathbb R} ^d\times {\mathbb T}^d\bigr)$ and the Fr\'echet-seminorms of $S_N$ depend (linearly)
on finitely many $\|a\|_{\alpha}$ with $|\alpha| \geq N$. \end{prop}
\begin{proof} To satisfy \eqref{0prop2app}, the symbol $a_t$ above has to satisfy in $\mathcal{D}'({\mathbb R} ^{2d})$ \begin{equation}\label{3prop2app} \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}(y-x)\mu} a(x, y, \mu; \varepsilon) \, d\mu = \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}(y-x)\mu} a_t((1-t) x + t y, \mu; \varepsilon) \, d\mu\, . \end{equation} Setting $\theta= x-y$ and $z=(1-t)x + ty = x - t\theta$ in \eqref{3prop2app} gives \begin{equation}\label{4prop2app} \int_{[-\pi, \pi]^d} e^{-\frac{i}{\varepsilon}\theta\mu} a\bigl(z + t \theta, z - (1-t)\theta, \mu; \varepsilon\bigr) \, d\mu = \sqrt{2\pi}^{d}\bigl(\mathscr{F}_\varepsilon a_t (z, \,.\, ; \varepsilon)\bigr) (\theta) \end{equation} where ${\mathscr F}_\varepsilon:L^2\left({\mathbb T}^d\right) \to \ell^2\left((\varepsilon{\mathbb Z})^d\right)$ denotes the discrete Fourier transform defined by \begin{equation}\label{Fou} \bigl({\mathscr F}_\varepsilon f\bigr)(\theta) := \frac{1}{\sqrt{2\pi}^d} \int_{[-\pi,\pi]^d} e^{-\frac{i}{\varepsilon}\theta \mu}f(\mu)\,d\mu \, , \qquad f\in L^2({\mathbb T}^d)\, , \; \theta\in(\varepsilon {\mathbb Z})^d \end{equation} with inverse ${\mathscr F}_\varepsilon^{-1}:\ell^2\left((\varepsilon{\mathbb Z})^d\right)\to L^2\left({\mathbb T}^d\right)$, \begin{equation} \label{Fou-1} \bigl({\mathscr F}_\varepsilon^{-1}v\bigr)(\xi) := \frac{1}{\sqrt{2\pi}^d}\sum_{\theta\in(\varepsilon {\mathbb Z})^d} e^{\frac{i}{\varepsilon}\theta \xi}v(\theta), \qquad v\in \ell^2\left((\varepsilon {\mathbb Z})^d\right)\, , \; \xi\in{\mathbb T}^d \end{equation} where the sum is understood in the standard L.I.M.-sense.
Thus taking the inverse Fourier transform $\mathscr{F}_\varepsilon^{-1}$ on both sides of \eqref{4prop2app} yields \eqref{1prop2app}.\\ To analyze the right hand side of \eqref{1prop2app}, we set $\eta=\mu -\xi$ and introduce a cut-off-function $\zeta\in {\mathcal K}\left((\varepsilon {\mathbb Z})^d, [0,1]\right)$ with $\zeta = 1$ in a neighborhood of $0$ to get \begin{align} a_t (x, \xi; \varepsilon) &= b_{t,1}(x, \xi; \varepsilon) + b_{t,2}(x, \xi; \varepsilon)\qquad\text{with}\label{5prop2app}\\ b_{t,1}(x, \xi; \varepsilon)&:= (2\pi)^{-d} \sum_{\theta\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi,\pi]^d} e^{-\frac{i}{\varepsilon}\eta \theta}(1-\zeta(\theta)) a\bigl(x+t\theta, x-(1-t)\theta, \xi + \eta; \varepsilon\bigr)\, d\eta\nonumber\\ b_{t,2}(x, \xi; \varepsilon)&:= (2\pi)^{-d} \sum_{\theta\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi,\pi]^d} e^{-\frac{i}{\varepsilon}\eta \theta}\zeta(\theta) a\bigl(x+t\theta, x-(1-t)\theta, \xi + \eta; \varepsilon\bigr)\, d\eta\nonumber \end{align} The aim is now to show $b_{t,1}\in S^\infty (\tilde{m})({\mathbb R} ^d\times {\mathbb T}^d)$ and $b_{t,2}\in S_\delta^k(\tilde{m})({\mathbb R} ^d\times {\mathbb T}^d)$ having the required asymptotic expansion and that the mappings $a\mapsto b_{t,1}$ and $a\mapsto b_{t,2}$ are continuous.
Since $e^{\frac{i}{\varepsilon}2\pi \eta z} = 1$ for all $z\in(\varepsilon {\mathbb Z})^d$ and $\eta\in{\mathbb Z}^d$, it follows at once from \eqref{1prop2app} that $b_{t,i} (x, \xi + 2\pi \eta; \varepsilon) = b_{t,i}(x, \xi; \varepsilon)$ for $i=1,2$.
By use of the operator $L(\theta, \eta):= \frac{-\varepsilon^2\Delta_\eta}{|\theta|^2}$, which is well defined on the support of $1-\zeta (\theta)$ and fulfills $L(\theta,\eta)e^{-\frac{i}{\varepsilon}\theta\eta}= e^{-\frac{i}{\varepsilon}\theta\eta}$, we have for any $n\in {\mathbb N}$ by partial integration, using the $2\pi$-periodicity of the symbol $a(x, y, \xi; \varepsilon)$ with respect to $\xi$, \begin{align} b_{t,1}(x,\xi;\varepsilon) = (2\pi)^{-d} \sum_{\theta\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d} \left( L^n(\theta,\eta) e^{-\frac{i}{\varepsilon}\theta\eta}\right)
(\mathbf{1} - \zeta(\theta)) a\bigl(x + t \theta, x - (1-t)\theta,\xi+\eta;\varepsilon\bigr)\, d\eta\nonumber \\
= (2\pi)^{-d}\sum_{\theta\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d} e^{-\frac{i}{\varepsilon}\theta\eta}
\frac{(\mathbf{1} - \zeta(\theta))}{|\theta|^{2n}}(-\varepsilon^2\Delta_\eta)^n a\bigl(x+ t\theta, x- (1-t)\theta,\xi+\eta;\varepsilon\bigr)\, d\eta \, .\label{6prop2app} \end{align} Since $a\in S_\delta^k(m)$, the absolute value of the integrand is for some $C>0$ and $M\in{\mathbb N}$ bounded from above by \begin{equation}\label{7prop2app}
C \varepsilon^{k + 2n(1-\delta)}\frac{m(x+ t\theta, x- (1-t)\theta, \xi+\eta)}{\langle \theta \rangle^{2n}} \leq
C \varepsilon^{k + 2n(1-\delta)}\langle\theta\rangle^{M - 2n}\langle\eta\rangle^M m(x, x, \xi) \, . \end{equation} This term is integrable and summable for $n$ sufficiently large yielding \[ b_{t,1}(x,\xi;\varepsilon) = \varepsilon^{k + 2n(1- \delta)-d}O(\tilde{m}(x,\xi))\; .\] The derivatives can be estimated similarly, and thus $ b_{t,1}\in S^\infty(\tilde{m})\bigl({\mathbb R} ^d\times {\mathbb T}^d\bigr)$.\\ To see the continuity of $S^k_\delta(m)\ni a\mapsto b_{t,1}\in S^{k + 2n(1- \delta)-d}_\delta(\tilde{m})$ for any $n\in{\mathbb N}$ large enough, we use \eqref{6prop2app} and \eqref{7prop2app} to estimate for any $\alpha, \beta\in {\mathbb N}^d$ and $x\in{\mathbb R} ^d, \xi\in{\mathbb T}^d$ \begin{align*}
\Bigl|\partial_x^\alpha\partial_\xi^\beta b_{t,1}(x,\xi;\varepsilon)\Bigr| &\leq C \sum_{\theta\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d}
\frac{|\mathbf{1} - \zeta(\theta)|}{|\theta|^{2n}}\frac{\bigl|(-\varepsilon^2\Delta_\eta)^n \partial_x^\alpha\partial_\xi^\beta a(x+ t\theta, x- (1-t)\theta,\xi+\eta;\varepsilon)\bigr|}
{\varepsilon^{k + 2n(1- \delta)-d}m(x+ t\theta, x- (1-t)\theta,\xi+\eta)}\\
&\qquad \times \varepsilon^{k + 2n(1- \delta)-d}m(x+ t\theta, x- (1-t)\theta,\xi+\eta)\, d\eta\\
&\leq C \varepsilon^{k + 2n(1- \delta)-d} m(x,x,\xi)\|a\|_{(\alpha,\tilde{\beta}(n))} \sum_{\theta\in(\varepsilon {\mathbb Z})^d}\int_{[-\pi,\pi]^d}
|\mathbf{1} - \zeta(\theta)|\langle\theta\rangle^{M - 2n}\langle\eta\rangle^M\, d\eta\\
&\leq C \varepsilon^{k + 2n(1- \delta)-d}\tilde{m}(x,\xi)\|a\|_{(\alpha,\tilde{\beta}(n))} \end{align*} for $\tilde{\beta}(n) = \beta + (2n,\ldots 2n)$, where the last estimate holds for $n$ sufficiently large. This gives continuity.
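For completeness we verify the defining property of $L(\theta, \eta)$ used in \eqref{6prop2app} (an elementary computation): for $\theta\neq 0$,
\[ L(\theta, \eta)\, e^{-\frac{i}{\varepsilon}\theta\eta} = \frac{-\varepsilon^2 \Delta_\eta}{|\theta|^2}\, e^{-\frac{i}{\varepsilon}\theta\eta} = \frac{-\varepsilon^2}{|\theta|^2}\, \Bigl(-\frac{|\theta|^2}{\varepsilon^2}\Bigr)\, e^{-\frac{i}{\varepsilon}\theta\eta} = e^{-\frac{i}{\varepsilon}\theta\eta}\, , \]
so each of the $n$ partial integrations trades a factor $|\theta|^{-2}$ for two $\varepsilon$-derivatives of the symbol, producing the gain $\varepsilon^{2n(1-\delta)} \langle\theta\rangle^{-2n}$ exploited in \eqref{7prop2app}.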
Since in the definition of $b_{t,2}$ in \eqref{5prop2app} both the integral and the sum range over a compact set, it follows analogously to the estimates above that \[
\Bigl|\partial_x^\alpha \partial_\xi^\beta b_{t,2}(x, \xi; \varepsilon)\Bigr| \leq C_{\alpha,\beta} \varepsilon^{k - (|\alpha| + |\beta|)\delta}\|a\|_{(\alpha, \beta)} m(x, x, \xi) \] and thus $b_{t,2}\in S_\delta^k(\tilde{m})$ and the mapping $S^k_\delta(m)\ni a\mapsto b_{t,2}\in S^k_\delta(\tilde{m})$ is continuous.
Thus $S_\delta^k(m)\ni a\mapsto a_t\in S_\delta^k(\tilde{m})$ is continuous. Using standard arguments, the method of stationary phase (see e.g. \cite{thesis}, Lemma B.4) gives the asymptotic expansion \eqref{2prop2app}.
Since $a\mapsto S_N = a_t - \sum_{j=0}^{N-1}\varepsilon^j a_{t,j}$ is obviously continuous, each Fr\'echet-seminorm of $S_N$ can be estimated by finitely many Fr\'echet-seminorms of $a$. To get the more refined statement on $S_N$, we use \eqref{1prop2app} to write \begin{equation}\label{8prop2app}
a_t (x, \xi; \varepsilon) = e^{i\varepsilon D_\theta D_\eta} a\bigl(x + t\theta, x - (1-t)\theta; \xi + \eta; \varepsilon\bigr)\bigl|_{\theta = 0 = \eta}\; . \end{equation} In fact, by algebraic substitutions, \eqref{8prop2app} is a consequence of the formula \begin{equation}\label{9prop2app}
e^{i\varepsilon D_\theta D_\eta} b(\theta, \eta; \varepsilon) = \sum_{z\in(\varepsilon {\mathbb Z})^d} \int_{[-\pi, \pi]^d} e^{-\frac{i}{\varepsilon}z\mu} b(\theta - z, \eta - \mu; \varepsilon)\, d\mu \end{equation} for $b\in S^k_\delta(\tilde{m})({\mathbb R} ^d\times {\mathbb T}^d)$, where, for $x, \xi$ fixed, we set $b(\theta, \eta; \varepsilon) = a(x + t\theta, x - (1-t)\theta; \xi + \eta; \varepsilon)$. Equation~\eqref{9prop2app} may be proved by writing $e^{i\varepsilon D_\theta D_\eta}$ as a multiplication operator in the covariables and applying the Fourier transforms ${\mathscr F}_\varepsilon$, ${\mathscr F}_\varepsilon^{-1}$, using that $e^{-\frac{i}{\varepsilon}x\xi}$ is invariant under ${\mathscr F}_{\varepsilon, \xi\rightarrow z}{\mathscr F}_{\varepsilon, x\rightarrow \mu}^{-1}$ and the standard fact that the Fourier transform maps products to convolutions (see \cite{thesis}).
Using Taylor's formula for $e^{ix}$, we get
\[ S_N(a)(x, \xi; \varepsilon) = \frac{(i\varepsilon D_\theta D_\eta)^N}{(N-1)!} \int_0^1 (1-s)^{N-1}e^{i\varepsilon s D_\theta D_\eta}\, ds\; a \bigl(x + t \theta, x - (1-t)\theta, \eta + \xi; \varepsilon\bigr) |_{\theta = 0 = \eta}\, , \] proving that $S_N$ only depends on Fr\'echet-seminorms of
$(D_\theta D_\eta)^N a\bigl(x+t\theta, x-(1-t)\theta, \eta + \xi; \varepsilon\bigr)$ and thus not on Fr\'echet-seminorms $\|a\|_\alpha$ with $|\alpha|< N$.
\end{proof}
The norm estimate \cite{kleinro2}, Proposition A.6, for operators $\Op_\varepsilon^{\mathbb T}(q)$ with a bounded symbol $q\in S^k_\delta (1)({\mathbb R} ^d\times {\mathbb T}^d)$ combined with Proposition \ref{prop2app} leads at once to the following corollary.
\begin{cor}\label{cor1app} Let $a\in S^k_\delta(1)\left({\mathbb R} ^{2d}\times{\mathbb T}^d\right)$ with $0\leq \delta \leq \frac{1}{2}$. Then there exists a constant $M>0$ such that, for the associated operator $\widetilde{\Op}_\varepsilon^{{\mathbb T}}(a)$ given by \eqref{psdo3dTorus} the estimate \begin{equation}\label{cvab}
\left\| \widetilde{\Op}_\varepsilon^{{\mathbb T}}(a) u \right\|_{\ell^2((\varepsilon {\mathbb Z})^d)} \leq M \varepsilon^r \|u\|_{\ell^2((\varepsilon {\mathbb Z})^d)} \end{equation} holds for any $u\in s\left((\varepsilon {\mathbb Z})^d\right)$ and $\varepsilon>0$. $\widetilde{\Op}_\varepsilon^{{\mathbb T}}(a)$ can therefore be extended to a continuous operator: $\ell^2\left((\varepsilon {\mathbb Z})^d\right)\longrightarrow \ell^2\left((\varepsilon {\mathbb Z})^d\right)$ with
$\|\widetilde{\Op}_\varepsilon^{{\mathbb T}}(a)\| \leq M\varepsilon^r$. Moreover $M$ can be chosen depending only on a finite number of Fr\'echet-seminorms of the symbol $a$. \end{cor}
In the next proposition, we analyze the symbol of an operator conjugated with a term $e^{\varphi/\varepsilon}$.
\begin{prop}\label{prop3app} Let $q\in S^k_\delta \bigl(1\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$, $0\leq \delta<\frac{1}{2}$, be a symbol such that the map $\xi \mapsto q(x, y, \xi; \varepsilon)$ can be extended to an analytic function on ${\mathbb C}^d$. Let $\psi\in \mathscr C^\infty ({\mathbb R} ^d, {\mathbb R} )$ such that all derivatives are bounded.\\ Then \[ Q_\psi:= e^{\psi/\varepsilon} \widetilde{\Op}^{\mathbb T}_{\varepsilon}(q) e^{-\psi / \varepsilon} \] is the quantization of the symbol $\widehat{q}_\psi\in S^k_\delta \bigl( 1\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$ given by \begin{equation}\label{00prop3app} \widehat{q}_\psi (x, y, \xi; \varepsilon) := q( x, y, \xi - i \Phi (x,y); \varepsilon) \end{equation} where $\Phi$ is given in \eqref{2prop3app}. In particular, the map $\xi \mapsto \widehat{q}_\psi(x,y,\xi;\varepsilon)$ can be extended to an analytic function on ${\mathbb C}^d$. If $q$ has an asymptotic expansion $q\sim \sum_n \varepsilon^n q_n$ in $\varepsilon$, then the same is true for $\widehat{q}_\psi$.
For $t\in [0,1]$, the operator $Q_\psi$ is the $t$-quantization of a symbol $q_{\psi, t} \in S^k_\delta \bigl(1\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$ with asymptotic expansion $q_{\psi, t} \sim \sum_n q_{n,\psi,t}$ such that $q_{\psi,t} - \sum_{n=0}^{N-1} q_{n,\psi, t} \in S^{k+N(1-2\delta)}(1)({\mathbb R} ^d\times {\mathbb T}^d)$. Moreover, the map $\xi \mapsto q_{\psi, t}(x,\xi; \varepsilon)$ can be extended to an analytic function on ${\mathbb C}^d$ and \begin{equation}\label{0prop3app} q_{\psi, t} (x, \xi; \varepsilon) = \widehat{q}_\psi (x,x,\xi;\varepsilon) = q ( x, x, \xi - i \nabla \psi (x); \varepsilon) \quad \mod S^{k+1-2\delta}_\delta \bigl(1\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)\; . \end{equation} \end{prop}
\begin{proof} The integral kernel of $e^{\psi/\varepsilon} \widetilde{\Op}^{\mathbb T}_{\varepsilon}(q) e^{-\psi / \varepsilon} $ is given by the oscillating integral \begin{align} K_\psi (x,y) & := (2\pi)^{-d} \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}[ (y-x)\xi + i (\psi (y) - \psi (x))]} q( x, y, \xi; \varepsilon) \, d\xi \nonumber\\ &= (2\pi)^{-d} \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}(y-x) [\xi + i \Phi (x, y)]} q( x, y, \xi; \varepsilon) \, d\xi \label{1prop3app} \end{align} where we set \begin{equation}\label{2prop3app} \Phi (x, y) := \int_0^1 \nabla \psi ( (1-t) x + t y)\, dt\; . \end{equation} Substituting $\tilde{\xi} := \xi + i \Phi (x, y)$ and iteratively using Lemma \ref{Lem1} yields \begin{align} \text{rhs }\eqref{1prop3app} &= (2\pi)^{-d} \int_{[-\pi, \pi]^d + i \Phi (x,y)} e^{\frac{i}{\varepsilon}(y-x) \tilde{\xi}} q( x, y, \tilde{\xi} - i \Phi (x,y); \varepsilon) \, d\tilde{\xi} \nonumber\\ &= (2\pi)^{-d} \int_{[-\pi, \pi]^d} e^{\frac{i}{\varepsilon}(y-x) \tilde{\xi}} q( x, y, \tilde{\xi} - i \Phi (x,y); \varepsilon) \, d\tilde{\xi} \label{3prop3app} \end{align} The right hand side of \eqref{3prop3app} is the integral kernel of $\widetilde{\Op}^{\mathbb T}_\varepsilon \bigl(\widehat{q}_\psi\bigr)$ for $\widehat{q}_\psi$ given by \eqref{00prop3app}. Since all derivatives of $\Phi$ are bounded by assumption, it follows that $\widehat{q}_\psi\in S^k_\delta \bigl( 1\bigr)\bigl({\mathbb R} ^{2d}\times {\mathbb T}^d\bigr)$. The statement on the analyticity of $\widehat{q}_\psi$ with respect to $\xi$ and on the existence of an asymptotic expansion follow at once from equality \eqref{00prop3app}.
Concerning the statement on the $t$-quantization, we use Proposition \ref{prop2app}, showing that there is a unique symbol $\widehat{q}_{t,\psi} \in S^k_\delta \bigl( 1\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$ such that $\widetilde{\Op}_\varepsilon^{\mathbb T} (\widehat{q}_\psi) = \Op_{\varepsilon, t}^{\mathbb T} (\widehat{q}_{t,\psi})$. Moreover, by \eqref{2prop2app}, we have in leading order, i.e. modulo $S_\delta^{k+1-2\delta}(1)$, \begin{equation}\label{4prop3app} \widehat{q}_{t,\psi} (x, \xi; \varepsilon) = \widehat{q}_\psi(x, x, \xi; \varepsilon) = q (x, x, \xi - i \Phi(x, x); \varepsilon) = q (x, x, \xi - i \nabla \psi(x); \varepsilon)\; . \end{equation} Moreover, $q_{\psi,t}$ has an asymptotic expansion with the stated properties.
\end{proof}
\begin{rem}\label{remprop3app} Let $p\in S^k_\delta \bigl( 1\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$ and $s, t\in [0,1]$. Then it follows at once from Remark \ref{remprop1app} that $e^{\psi/\varepsilon} \widetilde{\Op}^{\mathbb T}_{\varepsilon, t}(p) e^{-\psi / \varepsilon}$ is the $s$-quantization of a symbol $p_{\psi, s}\in S^k_\delta \bigl( 1\bigr)\bigl({\mathbb R} ^{d}\times {\mathbb T}^d\bigr)$ satisfying \[ p_{\psi, s}(x, \xi; \varepsilon) = p(x, \xi - i\nabla \psi (x); \varepsilon) \mod S^{k+1}_\delta(1) \; . \] \end{rem}
\section{Former results}\label{app2}
In the more general setting, where there might be more than two Dirichlet operators with spectrum inside the spectral interval $I_\varepsilon$, let \begin{eqnarray}\label{specHepusw} \spec (H_\varepsilon) \cap I_\varepsilon = \{ \lambda_1,\ldots , \lambda_N\} \, ,&\quad& u_1,\ldots ,u_N\in \ell^2\left((\varepsilon {\mathbb Z})^d\right)\\ {\mathcal F} := \Span \{u_1,\ldots, u_N\} \nonumber\\ \spec \left(H_\varepsilon^{M_j}\right) \cap I_\varepsilon = \{ \mu_{j,1},\ldots, \mu_{j,n_j}\} \, , &\quad& v_{j,1},\ldots,v_{j,n_j}\in \ell^2_{M_{j,\varepsilon}},\, j\in {\mathcal C} \nonumber\\ {\mathcal E}_j := \Span \{ v_{j,1},\ldots, v_{j,n_j} \} \, , &\quad & {\mathcal E} := \bigoplus {\mathcal E}_j \nonumber \end{eqnarray} denote the eigenvalues of $H_\varepsilon$ and of the Dirichlet operators $H_\varepsilon^{M_j}$ defined in \eqref{HepD} inside the spectral interval $I_\varepsilon$ and the corresponding real orthonormal systems of eigenfunctions (these exist because all operators commute with complex conjugation). We write \begin{equation}\label{valpha}
v_\alpha\quad\text{with}\quad \alpha =(\alpha_1, \alpha_2)\in \mathcal{J}:=\{(j,k)\,|\,j\in\mathcal{C},\, 1\leq k \leq n_j\} \quad \text{and}\quad j(\alpha):= \alpha_1\, . \end{equation} We remark that the number of eigenvalues $N, n_j\, ,\, j\in\mathcal{C}$ with respect to $I_\varepsilon$ as defined in \eqref{specHepusw} may depend on $\varepsilon$.
For a fixed spectral interval $I_\varepsilon$, it is shown in \cite{kleinro4} that the distance
$\vec{\dist}({\mathcal E}, {\mathcal F}):= \| \Pi_{\mathcal E} - \Pi_{\mathcal F} \Pi_{\mathcal E}\|$ is exponentially small and determined by $S_0$, the Finsler distance between the two nearest neighboring wells.
The following theorem, proven in \cite{kleinro4}, gives the representation of $H_\varepsilon$ restricted to an eigenspace with respect to the basis of Dirichlet eigenfunctions.
\begin{theo}\label{ealphafalpha} In the setting of Hypotheses \ref{hyp1}, \ref{hypIMj} and \eqref{specHepusw}, \eqref{valpha}, set $\mathcal{G}_v:=\left(\skpd{v_\alpha}{v_\beta}\right)_{\alpha,\beta\in\mathcal{J}}$, the Gram-matrix, and $\vec{e}:=\vec{v}\mathcal{G}_v^{-\frac{1}{2}}$, the orthonormalization of $\vec{v}:=(v_{1,1},\ldots, v_{m,n_m})$. Let $\Pi_{\mathcal F}$ be the orthogonal projection onto ${\mathcal F}$ and set $f_\alpha=\Pi_{{\mathcal F}} e_\alpha$. For $\mathcal{G}_f=\left(\skpd{f_\alpha}{f_\beta}\right)$, we choose $\vec{g}:=\vec{f}\mathcal{G}_f^{-\frac{1}{2}}$ as an orthonormal basis of ${\mathcal F}$.
Then there exists $\varepsilon_0>0$ such that for all $\sigma < S$ and $\varepsilon\in (0,\varepsilon_0]$ the following holds. \begin{enumerate}
\item The matrix of $H_\varepsilon|_{{\mathcal F}}$ with respect to $\vec{g}$ is given by \[ \diag \bigl(\mu_{1,1}, \ldots, \mu_{m,n_m}\bigr) + \left(\tilde{w}_{\alpha,\beta}\right)_{\alpha,\beta\in\mathcal{J}} + O\left(e^{-\frac{2\sigma}{\varepsilon}}\right) \] where \[ \tilde{w}_{\alpha,\beta} = \frac{1}{2}(w_{\alpha\beta}+ w_{\beta\alpha}) = O\left(\varepsilon^{-N}e^{-\frac{d(x_{j(\alpha)},x_{j(\beta)})}{\varepsilon}}\right) \] with \begin{equation}\label{interact} w_{\alpha,\beta} = \skpd{v_\alpha}{(\mathbf{1} -\mathbf{1}_{M_{j(\beta)}})T_\varepsilon v_\beta} = \sum_{\natop{x\in(\varepsilon {\mathbb Z})^d}{x\notin M_{j(\beta)}}} \sum_{\gamma\in(\varepsilon {\mathbb Z})^d} a_\gamma(x; \varepsilon)v_\beta(x+\gamma) v_\alpha (x) \end{equation} and $\tilde{w}_{\alpha,\beta} = 0$ for $j(\alpha) = j(\beta)$. The remainder $O\bigl(e^{-\frac{2\sigma}{\varepsilon}}\bigr)$ is estimated with respect to the operator norm. \item There exists a bijection
\[ b:\spec (H_\varepsilon|_{{\mathcal F}})\rightarrow \spec \bigl((\mu_\alpha \delta_{\alpha\beta} +
\tilde{w}_{\alpha\beta})_{\alpha,\beta\in\mathcal{J}}\bigr)\quad\text{such that}\quad |b(\lambda) - \lambda| = \expord{-2\sigma}\] where the eigenvalues are counted with multiplicity. \end{enumerate} \end{theo}
\end{appendix}
\end{document}
O. A. S. Karamzadeh
Shahid Chamran University, Ahvaz Iran
M. Namdari
S. Soltanpour
On the locally functionally countable subalgebra of C(X)
Let $C_c(X)=\{f\in C(X) : |f(X)|\leq \aleph_0\}$, $C^F(X)=\{f\in C(X): |f(X)|<\infty\}$, and $L_c(X)=\{f\in C(X) : \overline{C_f}=X\}$, where $C_f$ is the union of all open subsets $U\subseteq X$ such that $|f(U)|\leq\aleph_0$, and $C_F(X)$ be the socle of $C(X)$ (i.e., the sum of minimal ideals of $C(X)$). It is shown that if $X$ is a locally compact space, then $L_c(X)=C(X)$ if and only if $X$ is locally scattered.
We observe that $L_c(X)$ enjoys most of the important properties which are shared by $C(X)$ and $C_c(X)$.
Spaces $X$ such that $L_c(X)$ is regular (von Neumann) are characterized. Similarly to $C(X)$ and $C_c(X)$, it is shown that $L_c(X)$ is a regular ring if and only if it is $\aleph_0$-selfinjective.
We also determine spaces $X$ such that ${\rm Soc}{\big(}L_c(X){\big)}=C_F(X)$ (resp., ${\rm Soc}{\big(}L_c(X){\big)}={\rm Soc}{\big(}C_c(X){\big)}$). It is proved that if $C_F(X)$ is a maximal ideal in $L_c(X)$, then $C_c(X)=C^F(X)=L_c(X)\cong \prod\limits_{i=1}^n R_i$, where $R_i=\mathbb R$ for each $i$, and $X$ has a unique infinite clopen connected subset. The converse of the latter result is also given. The spaces $X$ for which $C_F(X)$ is a prime ideal in $L_c(X)$ are characterized and, consequently, for these spaces we infer that $L_c(X)$ cannot be isomorphic to any $C(Y)$.
functionally countable space; socle; zero-dimensional space; scattered space; locally scattered space; $\aleph_0$-selfinjective.
54C30; 54C40; 54C05; 54G12.
Find ${\begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
Determinants
In the given $3 \times 3$ matrix, $1$, $a$ and $a^2$ are the entries in the first row, $1$, $b$ and $b^2$ are the elements in the second row and $1$, $c$ and $c^2$ are the entries in the third row.
${\begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
The determinant of this matrix of order $3$ has to be calculated in this problem. So, let's learn how to evaluate the determinant of this square matrix.
Subtract the Second row from First row
$1$ is an entry in the first row and the first column. $1$ is also an element in the second row and the first column. Subtract the elements in the second row from the corresponding elements in the first row and substitute the differences in the respective positions of the first row in the matrix.
$R_1-R_2 \,\to\, R_1$
It makes the element in the first row and the first column to become $0$.
$=\,\,\,$ ${\begin{vmatrix} 1-1 & a-b & a^2-b^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
$=\,\,\,$ ${\begin{vmatrix} 0 & a-b & a^2-b^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
In the first row of $3 \times 3$ matrix, $0$, $a-b$ and $a^2-b^2$ are the entries in the matrix. The element $a^2-b^2$ represents the difference of the squares and they can be expressed in factor form as per the difference rule of squares.
$=\,\,\,$ ${\begin{vmatrix} 0 & a-b & (a-b)(a+b) \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
In the first row, $a-b$ is a factor of both the second and third entries, but not explicitly of the first entry. However, it can be written as follows for our convenience.
$=\,\,\,$ ${\begin{vmatrix} 0 \times (a-b) & 1 \times (a-b) & (a-b) \times (a+b) \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
Now, $a-b$ is a common factor in each entry of the first row in this $3 \times 3$ square matrix. So, it can be taken out common from the entries of the first row.
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 1 & b & b^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
Subtract the Third row from Second row
$1$ is an element in the second row and the first column. $1$ is also an entry in the third row and the first column. Subtract the entries in the third row from the corresponding elements in the second row of the matrix.
Substitute them in their respective positions of the second row. The subtraction process makes the entry to become $0$ in the second row and the first column.
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 1-1 & b-c & b^2-c^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 & b-c & b^2-c^2 \\ 1 & c & c^2 \\ \end{vmatrix}}$
In the second row of the square matrix of order $3$, the elements are $0$, $b-c$ and $b^2-c^2$. The element $b^2-c^2$ is a difference of squares, so it can be written in factor form by using the difference-of-squares rule.
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 & b-c & (b-c)(b+c) \\ 1 & c & c^2 \\ \end{vmatrix}}$
In the second row, $b-c$ is a factor of both the second and third entries, but not explicitly of the first entry. So, the factor $b-c$ can be introduced as follows for convenience.
$=\,\,\,$ $(a-b){\begin{vmatrix} 0 & 1 & a+b \\ 0 \times (b-c) & 1 \times (b-c) & (b-c) \times (b+c) \\ 1 & c & c^2 \\ \end{vmatrix}}$
In all three elements of the second row, $b-c$ is a common factor and it can be taken out common from them.
$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 1 & a+b \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$
$1$ is an element in the first row and the second column. $1$ is also an entry in the second row and the second column. Let's find the difference of them by subtracting the entries in the second row from the respective elements in the first row. Substitute the difference of the elements in the respective positions in the first row.
The idea behind finding the subtraction of the elements is to make the entry to become $0$ in the first row and the second column.
$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0-0 & 1-1 & a+b-(b+c) \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$
$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a+b-b-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$
$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a+\cancel{b}-\cancel{b}-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$
$=\,\,\,$ $(a-b)(b-c){\begin{vmatrix} 0 & 0 & a-c \\ 0 & 1 & b+c \\ 1 & c & c^2 \\ \end{vmatrix}}$
Find the Determinant of the matrix
In the first row of the simplified matrix of order $3$, the first and second entries are $0$. So, the determinant can be evaluated by expanding along the first row, where only the third entry contributes.
$=\,\,\,$ $(a-b)(b-c)\Big((-1)^{1+3} \times (a-c) \times (0 \times c\,-\,1 \times 1)\Big)$
$=\,\,\,$ $(a-b)(b-c)\Big((-1)^{4} \times (a-c) \times (0\,-\,1)\Big)$
$=\,\,\,$ $(a-b)(b-c)\Big(1 \times (a-c) \times (0\,-\,1)\Big)$
$=\,\,\,$ $(a-b)(b-c)\Big((a-c) \times (-1)\Big)$
$=\,\,\,$ $(a-b)(b-c)\Big((-1) \times (a-c)\Big)$
$=\,\,\,$ $(a-b)(b-c)(-a+c)$
$=\,\,\,$ $(a-b)(b-c)(c-a)$
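The result is the classical order-$3$ Vandermonde determinant. As a sanity check, the identity $\det V = (a-b)(b-c)(c-a)$ can be verified numerically; the Python sketch below (the random sample points are arbitrary choices) compares NumPy's determinant with the factored form.

```python
import numpy as np

# Spot-check the identity det(V) = (a-b)(b-c)(c-a) at a few random points.
rng = np.random.default_rng(0)
for _ in range(5):
    a, b, c = rng.uniform(-3, 3, size=3)
    V = np.array([[1, a, a**2],
                  [1, b, b**2],
                  [1, c, c**2]])
    assert np.isclose(np.linalg.det(V), (a - b) * (b - c) * (c - a))
print("identity verified at 5 random points")
```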
To evaluate $\lim_{x \to 0^-}({\frac{\tan x}{x}})^\frac{1}{x^3}$
Evaluate $$\lim_{x \to 0^-}({\frac{\tan x}{x}})^\frac{1}{x^3}$$ I tried taking the log on both sides and then using L'Hospital's rule, but it's giving complicated results. Are there any simpler methods to approach this?
calculus limits
PiGamma
$\begingroup$ math.stackexchange.com/questions/1770823/… $\endgroup$ – Guy Fsone Nov 20 '17 at 13:16
From Are all limits solvable without L'Hôpital Rule or Series Expansion,
$\lim_{x\to0}\left(\dfrac{\tan x-x}{x^3}\right)=\dfrac13$
$\implies\dfrac{\tan x-x}{x^m}\to0$ for $m<3$ as $x\to0$
$$\lim_{x\to0}\left(\dfrac{\tan x}x\right)^{1/x^3}$$
$$=\left(\left(\lim_{x\to0}\left(1+\dfrac{\tan x-x}x\right)^{x/(\tan x-x)}\right)^{\lim_{x\to0}\frac{\tan x-x}{x^3}}\right)^{\lim_{x\to0}\frac1x}$$
The inner limit converges to $e^{1/3}$
What about the outermost exponent?
$\begingroup$ Isn't it $e^0$? $\endgroup$ – PiGamma Nov 20 '17 at 6:19
$\begingroup$ $\frac{x}{\tan(x)-x}\cdot \frac{\tan(x)-x}{x^2} = \frac{1}{x}$, not $\frac{1}{x^3}$. $\endgroup$ – Michael Lee Nov 20 '17 at 6:19
$\begingroup$ Actually, $\lim_{x\to 0} \left(\frac{\tan(x)}{x}\right)^{1/x^3}$ does not exist. From the left-hand side, it's $0$, and from the right-hand side, it's infinite. You can see as much by plotting the function using WolframAlpha. $\endgroup$ – Michael Lee Nov 20 '17 at 6:20
$\begingroup$ @MichaelLee I wanted to calculate only left hand limit $\endgroup$ – PiGamma Nov 20 '17 at 6:22
$\begingroup$ There's a mistake in your notation, then. You wrote $x\to 0-0$ where you meant $x\to 0^-$. I've just edited it. $\endgroup$ – Michael Lee Nov 20 '17 at 6:23
From this answer, we can see that the Taylor coefficients of $\tan(x)$ expanded around $0$ will be positive, which implies that every truncation of the Taylor series acts as a lower bound for $\tan(x)$ on $[0, \pi/2)$. Therefore, $$\tan(x)\geq x+\frac{x^3}{3}$$ for $x\in [0, \pi/2)$. As $\tan(x)/x$ is an even function, this implies $$\frac{\tan(x)}{x}\geq 1+\frac{x^2}{3}$$ for $x\in (-\pi/2, 0)\cup (0, \pi/2)$. Therefore, as $1/x^3$ is negative for $x < 0$, $$\left(\frac{\tan(x)}{x}\right)^{1/x^3}\leq \left(1+\frac{x^2}{3}\right)^{1/x^3}$$ By Bernoulli's inequality, $$\left(1+\frac{x^2}{3}\right)^{-1/x^3}\geq 1-\frac{1}{x^3}\cdot \frac{x^2}{3} = 1-\frac{1}{3x}$$ for $-1\leq x < 0$, so $$\left(1+\frac{x^2}{3}\right)^{1/x^3}\leq \left(1-\frac{1}{3x}\right)^{-1} = \frac{3x}{3x-1}$$ Therefore, $$0\leq \lim_{x\to 0^-} \left(\frac{\tan(x)}{x}\right)^{1/x^3}\leq \lim_{x\to 0^-} \left(1+\frac{x^2}{3}\right)^{1/x^3}\leq \lim_{x\to 0^-} \frac{3x}{3x-1} = 0$$
Michael Lee
By changing $x$ into $-x$ this is the same as $$ \lim_{x\to0^+}\left(\frac{\tan x}{x}\right)^{-1/x^3} = \lim_{x\to0^+}\left(\frac{x}{\tan x}\right)^{1/x^3} $$ Thus you want to find \begin{align} \lim_{x\to0^+}-\frac{\log\dfrac{\tan x}{x}}{x^3} &=-\lim_{x\to0^+}\frac{\log\left(1+\dfrac{x^2}{3}+o(x^2)\right)}{x^3}\\[6px] &=-\lim_{x\to0^+}\frac{\dfrac{x^2}{3}+o(x^2)}{x^3}\\[6px] &=-\infty \end{align} Then your limit is $\lim_{t\to-\infty}e^t=0$
Equation $(4)$ in this answer says that $$ \lim_{x\to0}\frac{\tan(x)-x}{x^3}=\frac13 $$ Therefore, $$ \begin{align} \left(\frac{\tan(x)}x\right)^{1/x^3} &=\left(1+\frac{x^2}3+o\!\left(x^2\right)\right)^{1/x^3}\\ &=\left(\left(1+\frac{x^2}3+o\!\left(x^2\right)\right)^{1/x^2}\right)^{1/x} \end{align} $$ as $x\to0$, we can make $\left(1+\frac{x^2}3+o\!\left(x^2\right)\right)^{1/x^2}$ as close to $e^{1/3}\gt1$ as we wish. Thus, $$ \begin{align} \lim_{x\to0^-}\left(\left(1+\frac{x^2}3+o\!\left(x^2\right)\right)^{1/x^2}\right)^{1/x} &=\lim_{x\to0^-}\left(e^{1/3}\right)^{1/x}\\ &=0 \end{align} $$
robjohn♦
Easy trick
$$\lim_{x\to 0^-} \left(\frac{\tan x}{x}\right)^{\frac1{x^3}} =\lim_{x\to 0^-}\exp\left(\frac{1}{x^3}\ln\left(\frac{\tan x -x}{x}+1\right)\right) \sim \lim_{x\to 0^-}\exp\left(\frac{1}{3x}\frac{\ln\left(1+\frac{x^2}{3}\right)}{\frac{x^2}{3}}\right)\\= \color{blue}{\exp(-\infty\times \frac13)= 0} $$
Given that $$\tan x -x \sim \frac{x^3}{3}~~~~and ~~~~ \lim_{h\to 0} \frac{\ln\left(1+h\right)}{h} = 1$$
Guy Fsone
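As a numerical sanity check of the answers above (all concluding that the left-hand limit is $0$), the following Python sketch evaluates the function at a few points approaching $0$ from the left; the sample points are arbitrary choices.

```python
import math

def f(x):
    # (tan x / x)^(1/x^3); the base is positive near 0, so real powers are fine
    return (math.tan(x) / x) ** (1.0 / x**3)

for x in [-0.2, -0.1, -0.05, -0.02]:
    print(f"x = {x:6.2f}   f(x) = {f(x):.3e}")
```

The printed values decrease rapidly toward $0$, consistent with the left-hand limit being $0$.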
\begin{document}
\maketitle
\begin{abstract} We introduce a new class of ``filtered" schemes for some first order non-linear Hamilton-Jacobi-Bellman equations. The work follows recent ideas of Froese and Oberman (SIAM J. Numer. Anal., Vol 51, pp.423-444, 2013). The proposed schemes are not monotone but still satisfy some $\epsilon$-monotone property. Convergence results and precise error estimates are given, of the order of $\sqrt{{\Delta x}}$ where ${\Delta x}$ is the mesh size. The framework allows to construct finite difference discretizations that are easy to implement, high--order in the domains where the solution is smooth, and provably convergent, together with error estimates. Numerical tests on several examples are given to validate the approach, also showing how the filtered technique can be applied to stabilize an otherwise unstable high--order scheme. \end{abstract}
\begin{keywords} Hamilton-Jacobi equation, high-order schemes, $\epsilon$-monotone scheme, viscosity solutions, error estimates \end{keywords}
\section{Introduction}
In this work, our aim is to develop high--order and convergent schemes for first order Hamilton-Jacobi (HJ) equations
of the following form \begin{eqnarray}\label{eq:hj1}
& & \partial_t v+H(x,\nabla v)=0, \quad (t,x)\in[0,T]\times \mathbb{R}^d\\
& & v(0,x)=v_0(x),\quad x\in\mathbb{R}^d. \end{eqnarray}
\noindent Basic assumptions on the Hamiltonian $H$ and the initial data $v_0$ will be introduced in the next section. It is well known that, in the one-dimensional case, there is a strong link between Hamilton-Jacobi equations and scalar conservation laws. Namely, the viscosity solution of the evolutive HJ equation is the primitive of the entropy solution of the corresponding hyperbolic conservation law with the same Hamiltonian. Several schemes have been developed for hyperbolic conservation laws (see~\cite{H83},~\cite{H84},~\cite{CL84},~\cite{GS98}), and most of the numerical ideas for solving hyperbolic conservation laws can be extended to HJ equations. The well-known high--order essentially non-oscillatory (ENO) schemes have been introduced by A. Harten et al.\ in~\cite{HEOC87} for hyperbolic conservation laws, and then extended to HJ equations by Osher and Shu~\cite{OS91}. ENO schemes have been shown to have high--order accuracy, although a precise convergence result is still missing, and for this property they have been quite successful in many applications. Despite the fact that there is no convergence proof of ENO schemes towards the viscosity solution of \eqref{eq:hj1} in the general case, convergence results may hold for related schemes, e.g. MUSCL schemes, as has been proved by Lions and Souganidis in~\cite{lio-sou-95}. Convergence results for some non-monotone schemes have also been shown in particular cases~\cite{bok-meg-zid-2010}. Another interesting result has been proved by Fjordholm et al.~\cite{FMT12}: they have shown that ENO interpolation is stable, but this stability result is not sufficient to conclude total variation boundedness (TVB) of the ENO reconstruction procedure. In~\cite{F13}, a conjecture related to a weak total variation property for ENO schemes is given.
Finally, let us also mention~\cite{CFR05}, where weighted essentially non-oscillatory (WENO) schemes have been applied to HJ equations; the convergence proof of the scheme relies also on the work of Ferretti~\cite{fer-02}, where higher-than-first-order schemes are proposed in a semi-Lagrangian setting, yet with restrictive conditions on the mesh steps.
In this paper we give a very simple way to construct high--order schemes in a convergent framework. It is known (by Godunov's theorem) that a monotone scheme can be at most first order. Therefore one needs to look for non-monotone schemes. The difficulty is then to combine non-monotonicity of the scheme with convergence towards the viscosity solution of \eqref{eq:hj1}, and also to obtain error estimates. In our approach we will adapt a general idea of Froese and Oberman~\cite{FO13}, that was presented for stationary second order Hamilton-Jacobi equations and based on the use of a ``filter'' function. Here we focus mainly on the case of the evolutive first order Hamilton-Jacobi equation \eqref{eq:hj1}, and an adaptation to the steady case will also be considered. We will design a slightly different filter function for which the filtered scheme is still an ``$\epsilon$-monotone'' scheme (see Eq.~\ref{eq:eps-monotone}), but that improves the numerical results. Let us mention also the work \cite{bok-fal-fer-gru-kal-zid} for steady equations where some $\epsilon$-monotone semi-Lagrangian schemes are studied.
The paper is organized as follows. In Section 2, we present the schemes and give main convergence results.
Section 3 is devoted to numerical tests on several academic examples illustrating our approach in the one- and two-dimensional cases. A test on a nonlinear steady equation, as well as an evolutive ``obstacle" HJ equation of the form $\min(u_t + H(x,u_x), u-g(x))=0$ for a given function $g$, are also included in this section. Finally, Section 4 contains our concluding remarks.
\section{Definitions and main results}
\subsection{Setting of the problem}
Let us denote by $|\cdot|$ the Euclidean norm on $\mathbb{R}^d$ ($d\geq 1$). The following classical assumptions will be considered in the sequel of this paper: \\ {\bf (A1)} $v_{0}$ is a Lipschitz continuous function, i.e., there exists $L_0 >0$ such that for every $x,y\in \mathbb{R}^d$, \begin{equation}\label{eq:L0}
|v_0(x)-v_0(y)|\leq L_0 |x-y|. \end{equation}
\noindent {\bf (A2)} $H:\mathbb{R}^d\times\mathbb{R}^d\rightarrow \mathbb{R}$ satisfies, for some constant $C\geq 0$, for all $p,q,x,y\in \mathbb{R}^d$: \begin{eqnarray}
|H(y,p) - H(x,p)|\leq C (1+|p|) |y-x|, \end{eqnarray} and \begin{eqnarray}
|H(x,q) - H(x,p)|\leq C (1+|x|) |q-p|. \end{eqnarray}
Under assumptions (A1) and (A2) there exists a unique viscosity solution of \eqref{eq:hj1} (see Ishii \cite{Ish-84}). Furthermore, $v$ is locally Lipschitz continuous on $[0,T]\times \mathbb{R}^d$.
For clarity of presentation we focus on the one-dimensional case and consider the following simplified problem: \begin{eqnarray}\label{eq:hj} &&v_t + H(x, v_x)=0, \quad x\in \mathbb{R},\ t\in [0,T],\\ &&v(0,x)=v_0(x),\quad x\in \mathbb{R}. \end{eqnarray}
\subsection{Construction of the filtered scheme}
Let $\tau = \Delta t >0$ be a time step (in the form of $\tau=\frac{T}{N}$ for some $N\geq 1$), and ${\Delta x}>0$ be a space step. A uniform mesh in time is defined by $t_n:=n\tau$, $n\in[0,\dots,N]$, and in space by the nodes $x_j:=j{\Delta x}$, $j\in \mathbb{Z}$.
The construction of a filtered scheme needs three ingredients: \begin{itemize} \item
a monotone scheme, denoted $S^M$; \item
a high--order scheme, denoted $S^A$; \item
a bounded ``filter" function, $F: \mathbb{R}\rightarrow \mathbb{R}$. \end{itemize} The high-order scheme need be neither convergent nor stable; the letter $A$ stands for ``arbitrary order", following \cite{FO13}. For a start, $S^M$ will be based on a finite difference scheme. Later on we will also propose a definition of $S^M$ based on a semi-Lagrangian scheme.
Then, the filtered scheme $S^F$ is defined as \begin{eqnarray}\label{eq:FS}
u^{n+1}_j \equiv S^F(u^n)_j := S^{M}(u^n)_j+\epsilon\tau F\bigg(\frac{S^{A}(u^n)_j-S^{M}(u^n)_j}{\epsilon\tau}\bigg), \end{eqnarray} where $\epsilon=\epsilon_{\tau,{\Delta x}}>0$ is a parameter satisfying \begin{eqnarray}\label{eq:epsgotozero}
\lim_{(\tau,{\Delta x})\rightarrow 0} \epsilon = 0. \end{eqnarray} More hints on the choice of $\epsilon$ will be given later on.
The scheme is initialized in the standard way, i.e. \begin{eqnarray} \label{eq:FDexplicit_b}
u_j^0 := v_0(x_j),\quad \forall j\in \mathbb{Z}. \end{eqnarray}
Now we make precise some requirements on $S^M$, $S^A$ and the function $F$.
\noindent {\em Definition of the monotone finite difference scheme $S^M$} \\
Following Crandall and Lions~\cite{CL84}, we consider a finite difference scheme written as $u^{n+1}=S^M(u^n)$ with \begin{eqnarray}\label{eq:FD}
S^{M}(u^{n})(x) := u^{n}(x) -\tau\ h^M(x,D^-u^n(x),D^+u^n(x)), \end{eqnarray} with $$
D^\pm u(x) :=\pm \frac{u(x\pm {\Delta x}) - u(x)}{{\Delta x}}, $$ where $h^M$ corresponds to a monotone numerical Hamiltonian that will be made precise below. We will denote also $S^M(u^n)_j := S^M(u^n)(x_j)$.
Therefore the scheme also reads, for all $j\in \mathbb{Z}$, $\forall n\geq 0$: \begin{eqnarray} \label{eq:FDexplicit_a}
u^{n+1}_j:= u^{n}_j -\tau\ h^M(x_j,D^-u^n_j,D^+u^n_j), \quad D^{\pm} u^n_j:=\pm \frac{u^n_{j\pm 1}-u^n_j}{{\Delta x}}. \end{eqnarray}
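The explicit update \eqref{eq:FDexplicit_a} is straightforward to implement. The sketch below (in Python, under illustrative assumptions: periodic boundary conditions handled by \texttt{np.roll}, and the eikonal Hamiltonian $h^M(x,u^-,u^+)=\max(u^-,-u^+)$ used later in the numerical section) also lets one check the monotonicity \eqref{eq:monotcond} numerically:

```python
import numpy as np

def monotone_step(u, x, dx, tau, hM):
    """One step u^{n+1}_j = u^n_j - tau * h^M(x_j, D^- u^n_j, D^+ u^n_j),
    with periodic boundary conditions (via np.roll)."""
    Dm = (u - np.roll(u, 1)) / dx    # D^- u_j = (u_j - u_{j-1})/dx
    Dp = (np.roll(u, -1) - u) / dx   # D^+ u_j = (u_{j+1} - u_j)/dx
    return u - tau * hM(x, Dm, Dp)

# monotone numerical Hamiltonian for the eikonal equation v_t + |v_x| = 0
hM_eik = lambda x, um, up: np.maximum(um, -up)
```

Under the CFL condition $\tau/{\Delta x}\leq 1$, ordered data $u\leq v$ should give ordered outputs, which can be verified on random inputs.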
\noindent {\bf (A3) Assumptions on $S^M$}\\[0.0cm] We will use the following assumptions throughout this paper:\\ \begin{tabular}{ll}
& $(i)$ $h^M$ is a Lipschitz continuous function.\\ & $(ii)$ (\textit{consistency})
$\forall x$, $\forall u$,\ $ h^M(x,u,u)=H(x,u).$
\\ & $(iii)$ (\textit{monotonicity}) for any functions $u,v$,\\ & \hspace{2cm}
$
\mbox{$u\leq v$ $\Longrightarrow$ $S^M(u)\leq S^M(v)$}.
$ \end{tabular}
In practice, condition (A3)-$(iii)$ is only required at mesh points, and the condition reads \begin{eqnarray} \label{eq:monotcond}
u_j\leq v_j,\ \forall j, \quad \Rightarrow\quad S^M(u)_j \leq S^M(v)_j,\ \forall j. \end{eqnarray}
At this stage, we notice that under condition (A3) the filtered scheme is ``$\epsilon$-monotone" in the sense that \begin{eqnarray}\label{eq:eps-monotone}
\ u_j\leq v_j,\ \forall j, \quad \Rightarrow\quad S^F(u)_j \leq S^F(v)_j + 2\epsilon\tau,\ \forall j, \end{eqnarray} with $\epsilon\rightarrow 0$ as $(\tau,{\Delta x})\rightarrow 0$ (indeed $|S^F(u)-S^M(u)|\leq \epsilon\tau$ pointwise since $|F|\leq 1$, and $S^M$ is monotone). This implies the convergence of the scheme by the Barles-Souganidis convergence theorem (see \cite{BS91,A09}).
\begin{rem} Under assumption $(i)$, the consistency property $(ii)$ is equivalent to saying that, for any $v\in C^2([0,T]\times\mathbb{R})$, there exists a constant $C_M\geq 0$ independent of ${\Delta x}$ such that \begin{eqnarray}
\label{eq:SM-consist}
\bigg| h^M(x,D^-v(x),D^+ v(x)) - H(x,v_x) \bigg| \leq C_M {\Delta x} \| \partial_{xx} v \|_\infty. \end{eqnarray} The same statement holds true if \eqref{eq:SM-consist} is replaced by the following consistency error estimate: \begin{eqnarray}
& & \hspace*{-2cm}
\mathcal{E}_{S^M}(v)(t,x) :=
\bigg| \frac{v(t+\tau,x)-S^M(v(t,.))(x)}{\tau} - \big( v_t(t,x) + H(x,v_x(t,x))\big) \bigg|
\nonumber \\
& & \qquad \leq \ C_M \bigg(\tau \| \partial_{tt} v \|_\infty + {\Delta x} \| \partial_{xx} v \|_\infty\bigg). \end{eqnarray} \end{rem}
\begin{rem} Assuming $(i)$, it is easily shown that the monotonicity property $(iii)$ is equivalent to saying that $h^M=h^M(x,u^-,u^+)$ satisfies, for a.e.\ $(x,u^-,u^+ )\in \mathbb{R}^3$:
\begin{eqnarray}\label{eq:MONOT-1}
\mbox{$\displaystyle\frac{\partial h^M}{\partial u^-} \geq 0$,\ \ \ $\displaystyle\frac{\partial h^M}{\partial u^+} \leq 0$,} \end{eqnarray} and the CFL condition \begin{eqnarray}\label{eq:MONOT-2}
\frac{\tau}{{\Delta x}} \bigg(\frac{\partial h^M}{\partial u^-}(x,u^-,u^+) - \frac{\partial h^M}{\partial u^+}(x,u^-,u^+)\bigg) \leq 1. \end{eqnarray}
When using finite difference schemes, it is assumed that the CFL condition \eqref{eq:MONOT-2} is satisfied; it can be written equivalently in the form \begin{eqnarray}\label{eq:CFL}
c_{0} \frac{\tau}{{\Delta x}} \leq 1, \end{eqnarray} where $c_0$ is a constant independent of $\tau$ and ${\Delta x}$. \end{rem}
\begin{example} Let us consider the Lax-Friedrichs numerical Hamiltonian $$
h^{M,LF}(x,u^-,u^+):=H(x,\frac{u^-+u^+}{2}) - \frac{c_0}{2}(u^+-u^-) $$ where $c_0>0$ is a constant. The scheme is consistent; it is furthermore monotone under the conditions
$\max_{x,p}|\partial_p H(x,p)|\leq c_0$, and $c_0\frac{\tau}{{\Delta x}}\leq 1$. \end{example}
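The Lax-Friedrichs Hamiltonian above transcribes directly into code. In the sketch below (a minimal illustration; the choices $H(x,p)=|p|$ and $c_0=1$, which satisfy $\max_{x,p}|\partial_p H|\leq c_0$, are ours):

```python
import numpy as np

def hM_LF(x, um, up, H=lambda x, p: np.abs(p), c0=1.0):
    """Lax-Friedrichs numerical Hamiltonian:
    h^{M,LF}(x,u^-,u^+) = H(x,(u^- + u^+)/2) - (c0/2)*(u^+ - u^-)."""
    return H(x, 0.5 * (um + up)) - 0.5 * c0 * (up - um)
```

Consistency (A3)-$(ii)$, i.e. $h^{M,LF}(x,u,u)=H(x,u)$, holds since the diffusive term vanishes when $u^-=u^+$.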
\noindent {\em Definition of the high--order scheme $S^A$:}
we consider an iterative scheme of ``high--order" in the form $u^{n+1}=S^A(u^n)$, written as \begin{small} \begin{eqnarray} \label{eq:SA-developped}
S^{A}(u^{n})(x)= u^{n}(x)-\tau h^{A}(x,D^{k,-}u^n(x),\dots, D^-u^n(x),D^+u^n(x),\dots,D^{k,+}u^n(x)),\nonumber \end{eqnarray} \end{small} \!\!where $h^A$ corresponds to a ``high-order" numerical Hamiltonian, and $D^{\ell,\pm}u(x):=\pm \frac{u(x\pm \ell{\Delta x})-u(x)}{{\Delta x}}$ for $\ell=1,\dots,k$. To simplify the notations we may write \eqref{eq:SA-developped} in the more compact form
\begin{eqnarray}\label{eq:SA}
S^{A}(u^{n})(x)= u^{n}(x)-\tau h^{A}\big(x, D^{\pm} u^n(x)\big) \end{eqnarray} even if there is a dependency on $\ell$ through $(D^{\ell,\pm} u^n(x))_{\ell=1,\dots,k}$.
\noindent {\bf (A4) Assumptions on $S^A$:}\\[0.2cm] We will use the following assumptions:\\ {\em
\indent $(i)$ $h^A$ is a Lipschitz continuous function.\\[0.2cm] \indent $(ii)$ (\textit{high--order consistency}) There exists $k\geq 2$ such that, for all $\ell\in[1,\dots,k]$ and any function $v=v(t,x)$ of class $C^{\ell+1}$, there exists $C_{A,\ell}\geq 0$ with \begin{eqnarray}\label{eq:H1consist_a}
& & \hspace*{-2cm}
\mathcal{E}_{S^A}(v)(t,x) :=
\bigg| \frac{v(t+\tau,x)-S^A(v(t,.))(x)}{\tau} -\big( v_t(t,x) + H(x,v_x(t,x))\big) \bigg|
\\
& & \leq \ C_{A,\ell} \bigg (\tau^\ell \|\partial_t^{\ell+1} v\|_\infty +
{\Delta x}^\ell \|\partial_x^{\ell+1} v\|_\infty \bigg). \end{eqnarray}
} Here $\partial_x^{\ell+1} v$ denotes the $(\ell+1)$-th partial derivative of $v$ w.r.t.\ $x$.
\begin{rem}\label{rem:hA-implications} The high-order consistency implies, for all $\ell \in [1,\dots,k]$, and for $v \in C^{\ell+1}(\mathbb{R})$, \begin{eqnarray*}\label{eq:H1consist-implications}
\bigg| h^A(x,\dots,D^-v,D^+v,\dots) - H(x,v_x)\bigg|
\leq \ C_{A,\ell} \|\partial_x^{\ell+1} v\|_\infty {\Delta x}^\ell. \end{eqnarray*} \end{rem}
\begin{example}\label{rem:CFD-RK2} {\em (Centered scheme)} A typical example with $k=2$ is obtained with the centered TVD (Total Variation Diminishing) approximation in space and the Runge-Kutta 2nd order scheme in time (or Heun scheme): \begin{subequations} \label{eq:SA-RK2} \begin{eqnarray}\label{eq:SA-RK2a}
& & S_0(u^n)_j:= u^n_j - \tau H(x_j,\frac{u^n_{j+1}-u^n_{j-1}}{2{\Delta x}}), \end{eqnarray} and \begin{eqnarray}\label{eq:SA-RK2b}
& & S^A(u):= \frac{1}{2}(u + S_0 (S_0(u))). \end{eqnarray} \end{subequations} Of course there is no reason for the centered scheme to be stable (as will be shown in the numerical section); using a filter will help stabilize it. \end{example} A similar example with $k=3$ can be obtained with any third order finite difference approximation in space and the TVD-RK3 scheme in time~\cite{got-shu-98}.
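The centered/Heun scheme \eqref{eq:SA-RK2} can be sketched as follows (periodic grid via \texttt{np.roll} and the advection Hamiltonian $H(x,p)=p$, for which the exact solution is $v(t,x)=v_0(x-t)$, are illustrative choices of ours):

```python
import numpy as np

def S0(u, x, dx, tau, H):
    """Euler step with the centered approximation in space (eq. SA-RK2a)."""
    Dc = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    return u - tau * H(x, Dc)

def SA(u, x, dx, tau, H):
    """Heun (RK2) step: S^A(u) = (u + S0(S0(u)))/2 (eq. SA-RK2b)."""
    return 0.5 * (u + S0(S0(u, x, dx, tau, H), x, dx, tau, H))
```

On smooth data one step should be second-order accurate, which can be checked against the exact advection of a sine profile.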
\noindent {\em Definition of the filter function $F$.}
We recall that Froese and Oberman's filter function used in~\cite{FO13} is: \begin{eqnarray*}
\tilde F(x) = \mathrm{sign}(x)\max(1-||x|-1|,0) = \left\{
\begin{array}{l l}
x & \quad \text{if } |x| \leq 1,\\
0 & \quad \text{if } |x|\geq 2,\\
-x+2 &\quad \text{if } 1 \leq x\leq 2,\\
-x-2 &\quad \text{if } -2\leq x \leq -1.\\
\end{array} \right. \end{eqnarray*} In the present work we define a new filter function simply as follows: \begin{eqnarray} \label{def:Filter}
F(x) := x 1_{|x|\leq 1} = \left\{
\begin{array}{l l}
x & \quad \text{if} \ |x| \leq 1,\\
0 & \quad \text{otherwise}.\\
\end{array} \right. \end{eqnarray}
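Both filter functions are one-liners in code. The sketch below (function names are our choice) makes the difference between the two filters explicit:

```python
import numpy as np

def F_FO(x):
    """Froese-Oberman filter: identity on [-1,1], linear decay to 0 for 1<=|x|<=2."""
    return np.sign(x) * np.maximum(1.0 - np.abs(np.abs(x) - 1.0), 0.0)

def F_new(x):
    """New filter: F(x) = x * 1_{|x|<=1}, discontinuous at |x| = 1."""
    return np.where(np.abs(x) <= 1.0, x, 0.0)
```

Both coincide on $[-1,1]$ and vanish for $|x|\geq 2$; the new filter drops to $0$ immediately outside $[-1,1]$.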
\begin{figure}
\caption{Froese and Oberman's filter (left), new filter (right)}
\end{figure}
The idea of the present filter function is to keep the high--order scheme when $|h^A-h^M|\leq \epsilon$
(because then $|S^A-S^M|/(\tau \epsilon) \leq 1$ and $S^F=S^M + \tau\epsilon F(\frac{S^A-S^M}{\tau\epsilon}) \equiv S^A$), whereas $F=0$ and $S^F=S^M$ when this bound is not satisfied, i.e., the scheme then reduces to the monotone scheme itself. Clearly the main difference with Froese and Oberman's filter is the discontinuity at $x=\pm 1$.
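Combining the three ingredients, one time step of the filtered scheme \eqref{eq:FS} can be sketched as follows (a self-contained illustration for the eikonal equation $v_t+|v_x|=0$; periodic boundaries and the choice $\epsilon=5{\Delta x}$, used later in the numerical section, are our assumptions here):

```python
import numpy as np

def filtered_step(u, dx, tau, eps):
    """u^{n+1} = S^M(u) + eps*tau*F((S^A(u)-S^M(u))/(eps*tau)), F(x) = x*1_{|x|<=1},
    for v_t + |v_x| = 0 with periodic boundary conditions."""
    Dm = (u - np.roll(u, 1)) / dx
    Dp = (np.roll(u, -1) - u) / dx
    SM = u - tau * np.maximum(Dm, -Dp)           # monotone scheme, h^M = max(u^-, -u^+)
    def S0(w):                                   # centered Euler step for H(p) = |p|
        return w - tau * np.abs((np.roll(w, -1) - np.roll(w, 1)) / (2 * dx))
    SA = 0.5 * (u + S0(S0(u)))                   # Heun / RK2 in time
    r = (SA - SM) / (eps * tau)
    return SM + eps * tau * np.where(np.abs(r) <= 1.0, r, 0.0)
```

Since $|S^F-S^M|\leq\epsilon\tau$ at each step and $S^M$ satisfies a discrete maximum principle under CFL, the computed solution stays bounded (up to $N\epsilon\tau$), which can be checked on the bump initial data of Example 1.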
\subsection{Convergence result}
The following theorem gives several basic convergence results for the filtered scheme. Note that the high-order assumption (A4) will not be necessary to obtain the error estimates $(i)$-$(ii)$; it will only be used to obtain a high-order consistency error estimate in the regular case (part $(iii)$). Globally the scheme will have only an $O(\sqrt{{\Delta x}})$ rate of convergence for merely Lipschitz continuous solutions, because the jumps in the gradient prevent high-order accuracy at the kinks.
\begin{theorem} \label{th:estimate1} Assume (A1)-(A2), and let $v_0$ be bounded.
We assume also that $S^M$ satisfies (A3), and $|F|\leq 1$. Let $u^n$ denote the filtered scheme~\eqref{eq:FS}.
Let $v^n_j:=v(t_n,x_j)$ where $v$ is the exact solution of~\eqref{eq:hj}. Assume \begin{eqnarray} \label{eq:eps-1}
0<\epsilon \leq c_0\sqrt{{\Delta x}} \end{eqnarray} for some constant $c_0>0$.
$(i)$ The scheme $u^n$ satisfies the Crandall-Lions estimate \begin{eqnarray}
\|u^{n} - v^{n}\|_{\infty} \leq C \sqrt{{\Delta x}}, \quad \forall\ n=0,...,N. \end{eqnarray} for some constant $C$ independent of ${\Delta x}$.
$(ii)$ (First order convergence for classical solutions.) If furthermore the exact solution $v$ belongs to $C^2([0,T]\times\mathbb{R})$,
and $\epsilon\leq c_0 {\Delta x}$ (instead of \eqref{eq:eps-1}), then, we have \begin{eqnarray}
\|u^{n} - v^{n}\|_{\infty} \leq C {\Delta x}, \quad n=0,...,N, \end{eqnarray} for some constant $C$ independent of ${\Delta x}$.
$(iii)$ (Local high-order consistency.)
Assume that $S^A$ is a high--order scheme satisfying (A4) for some $k\geq 2$. Let $1\leq \ell \leq k$ and $v$ be a $C^{\ell+1}$ function in a neighborhood of a point $(t,x)\in (0,T)\times \mathbb{R}$.
Assume that
\begin{eqnarray}
\mbox{$(C_{A,1} + C_M)\ \bigg(\|v_{tt}\|_\infty\, \tau + \| v_{xx}\|_\infty\, {\Delta x} \bigg) \leq \epsilon$.} \end{eqnarray} Then, for sufficiently small $t_n-t$, $x_j-x$, $\tau$, ${\Delta x}$, it holds $$
{S^F}(v^n)_j = {S^A}(v^n)_j $$ and, in particular, a local high-order consistency error for the filtered scheme $S^F$ holds: $$ {\mathcal{E}}_{S^F}(v^n)_j \equiv \mathcal{E}_{S^A}(v^n)_j = O({\Delta x}^\ell) $$ (the consistency error $\mathcal{E}_{S^A}$ is defined in \eqref{eq:H1consist_a}). \end{theorem}
\begin{proof}
$(i)$ Let $w^{n+1}_j=S^M(w^n)_j$ be defined with the monotone scheme only, with $w^0_j=v_0(x_j)=u^0_j$. By definitions, \begin{eqnarray*}
u^{n+1}_j - w^{n+1}_j
& = & S^{M}(u^{n})_j - S^M(w^n)_j +\epsilon \tau F(\cdot) \end{eqnarray*} Hence, by using the monotonicity of $S^M$, \begin{eqnarray*}
\max_j |u^{n+1}_j - w^{n+1}_j|
& \leq & \max_j |u^{n}_j - w^n_j| +\epsilon \tau, \end{eqnarray*} and by recursion, for $n\leq N$, \begin{eqnarray*}
\max_j |u^{n}_j - w^{n}_j|
& \leq & \epsilon n \tau \leq T \epsilon. \end{eqnarray*} On the other hand, by Crandall and Lions \cite{CL84}, an error estimate holds for the monotone scheme: \begin{eqnarray*}
\max_j |w^{n}_j-v^n_j| & \leq & C \sqrt{{\Delta x}}, \end{eqnarray*} for some $C\geq 0$. By summing up the previous bounds, we deduce \begin{eqnarray*}
\max_j |u^{n}_j - v^{n}_j|
& \leq & C\sqrt{{\Delta x}} + T \epsilon, \end{eqnarray*} and together with the assumption on $\epsilon$, it gives the desired result.
$(ii)$ Let $\displaystyle \mathcal{E}^n_j:=\frac{v^{n+1}_j - S^M(v^n)_j}{\tau}$. If the solution is $C^2$ regular with bounded second order derivatives, then the consistency error is bounded by \begin{eqnarray}\label{eq:Epsnj}
| \mathcal{E}^n_j | \leq C_M(\tau + {\Delta x}). \end{eqnarray}
Hence \begin{eqnarray*}
| u^{n+1}_j - v^{n+1}_j|
& = & | S^{M}(u^{n})_j - S^M(v^n)_j + \tau \mathcal{E}^n_j + \tau \epsilon F(\cdot) |\\
& \leq &
\| u^{n} - v^n\|_\infty + \tau \|\mathcal{E}^n\|_\infty + \tau \epsilon. \end{eqnarray*} By recursion, for $n\tau \leq T$, \begin{eqnarray*}
\|u^{n} - v^{n}\|_\infty
& \leq & \| u^{0} - v^0\|_\infty + T ( \max_{0\leq k\leq N-1}\|\mathcal{E}^k\|_\infty + \epsilon). \end{eqnarray*} Finally by using the assumption on $\epsilon$, the bound \eqref{eq:Epsnj} and the fact that $\tau=O({\Delta x})$ (using CFL condition \eqref{eq:CFL}), we get the desired result.
$(iii)$ To prove that $S^F(v^n)_j=S^A(v^n)_j$, one has to check that $$
\frac{|S^A(v^n)_j-S^M(v^n)_j|}{\epsilon\tau} \leq 1
$$ for $(\tau,{\Delta x})$ small enough.
By using the consistency error definitions,
\begin{eqnarray*}
\frac{|S^A(v^n)_j-S^M(v^n)_j|}{\tau}
& = &
\bigg| \frac{v^{n+1}_j - S^M(v^n)_j}{\tau}+ v_t(t_n,x_j) + H(x_j,v_x(t_n,x_j)) \\
& & \quad - \bigg( \frac{v^{n+1}_j - S^A(v^n)_j}{\tau} + v_t(t_n,x_j) + H(x_j,v_x(t_n,x_j))\bigg) \bigg| \\
& \leq & |\mathcal{E}_{S^M}(v^n)_j| + |\mathcal{E}_{S^A}(v^n)_j| \\
& \leq & (C_{A,1} + C_M) (\tau \|v_{tt}\|_\infty + {\Delta x} \| v_{xx}\|_\infty) \end{eqnarray*}
Hence, by the assumption on $\epsilon$, we get $|S^A(v^n)_j-S^M(v^n)_j|\leq \epsilon\tau$, and
the desired result follows. \end{proof}
\begin{rem}\label{rem:PROJECTION} {\bf Other approaches.} It is already known from the original work of Osher and Shu~\cite{OS91} that it is possible to modify an ENO scheme in order to obtain a convergent scheme. For instance, if $D^{\pm,A} u^n_j$ denotes a high--order finite difference derivative estimate (of ENO type), a projection on the first order finite difference derivative $D^\pm u^n_j$ can be used, up to a controlled error (see in particular Remark 2.2 of \cite{OS91}): $$ \mbox{instead of \ $D^{\pm,A} u^n_j$,} \quad \mbox{use} \ P_{[D^\pm u^n_j, M{\Delta x}]}(D^{\pm,A} u^n_j) $$
where $P_{[a,b]}(y)$ is the projection defined by: $$
P_{[a,b]}(y):=\left\{ \barr{ll}
y & \mbox{if\ \ $a-b\leq y\leq a+b$}\\
a-b & \mbox{if\ \ $y\leq a-b$}\\
a+b & \mbox{if\ \ $y\geq a+b$}
\end{array} \right. $$
and $M>0$ is some constant greater than the expected value $\frac{1}{2} |u_{xx}(t_n,x_j)|$. However, we emphasize that in our approach we do not consider a projection but a perturbation with a filter, which is slightly different. Indeed, by using a projection onto an interval of the form $[a-M{\Delta x},\, a+M{\Delta x}]$ where $a=D^\pm u^n_j$, numerical tests show that one of the extremal values $a\pm M{\Delta x}$ may be chosen too often, which then produces an overall error that is too large (worse than using the first order finite differences).
Following the present approach, we would rather advise to use, $$ \mbox{instead of \ $D^{\pm,A} u^n_j$,} \quad \mbox{the value} \ \tilde P_{[D^\pm u^n_j, M{\Delta x}]}(D^{\pm,A} u^n_j) $$ where $\tilde P_{[a,b]}(y)$ is defined by: $$
\tilde P_{[a,b]}(y):=\left\{ \barr{ll}
y & \mbox{if\ \ $a-b\leq y\leq a+b$}\\
a & \mbox{if\ \ $y\notin [a-b,a+b]$}\\
\end{array} \right. $$ \end{rem}
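The two operators can be compared directly in code (a sketch; $P$ clamps onto $[a-b,\,a+b]$ while $\tilde P$ falls back to the monotone value $a$ outside that interval):

```python
import numpy as np

def P(a, b, y):
    """Projection onto [a-b, a+b] (Osher-Shu style correction)."""
    return np.clip(y, a - b, a + b)

def P_tilde(a, b, y):
    """Variant: keep y when it lies in [a-b, a+b], return a otherwise."""
    return np.where(np.abs(y - a) <= b, y, a)
```

The difference matters exactly when the high--order derivative estimate $y$ leaves the interval: $P$ returns an extremal value, $\tilde P$ the first-order one.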
\begin{rem}{\bf Filtered semi-Lagrangian scheme.}
Let us consider the case of
\begin{eqnarray} \label{eq:h}
& & H(x,p):= \min_{b\in B} \max_{a\in A}\{-f(x,a,b)\cdot p- \ell(x,a,b)\}, \end{eqnarray} where $A\subset \mathbb{R}^m$ and $B\subset \mathbb{R}^n$ are non-empty compact sets (with $m,n \geq 1$), and $f: \mathbb{R}^d\times {A}\times B \rightarrow \mathbb{R}^d$ and $\ell: \mathbb{R}^d\times {A}\times B \rightarrow \mathbb{R}$ are Lipschitz continuous w.r.t.\ $x$: $\exists L\geq 0$, $\forall (a,b)\in {A}\times B$, $\forall x,y$: \begin{eqnarray} \label{eq:Lff}
\max(|f(x,a,b)- f(y,a,b)|,|\ell(x,a,b)-\ell(y,a,b)|)\ \leq \ L |x-y|. \end{eqnarray} (We notice that (A2) is satisfied for Hamiltonian functions such as \eqref{eq:h}.) Let $[u]$ denote the $P^1$-interpolation of $u$ in dimension one on the mesh $(x_j)$, i.e. \begin{eqnarray}\label{interpolation}
x\in[x_j,x_{j+1}] \quad \Rightarrow \quad
[u](x) := \frac{x_{j+1}-x}{{\Delta x}} u_j+\frac{x-x_{j}}{{\Delta x}} u_{j+1}. \end{eqnarray} Then a monotone SL scheme can be defined as follows: \begin{eqnarray}\label{eq:sl}
S^M(u^n)_j := \displaystyle \min_{a\in A} \max_{b\in B}
\bigg( [u^{n}]\big(x_j+ \tau f(x_j,a,b)\big)+ \tau \ell (x_j,a,b) \bigg). \end{eqnarray}
A filtered scheme based on SL can then be defined by \eqref{eq:FS} and \eqref{eq:sl}. Convergence result as well as error estimates could also be obtained in this framework. (For error estimates for the monotone SL scheme, we refer to~\cite{sor-98, FF14}.) \end{rem}
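The semi-Lagrangian scheme \eqref{eq:sl} is easy to sketch with \texttt{np.interp} providing the $P^1$-interpolation \eqref{interpolation}. In the illustration below the control problem $f(x,a)=a$, $a\in[-1,1]$, $\ell\equiv 0$ (so that $H(x,p)=\max_a(-ap)=|p|$, with no $b$ player) is our illustrative choice, and the minimum over $a$ is discretized over a few control values:

```python
import numpy as np

def sl_step(u, x, tau, A=np.linspace(-1.0, 1.0, 5)):
    """S^M(u)_j = min_a [u](x_j + tau*f(x_j,a)) with f(x,a)=a, ell=0 (eikonal case).
    [u] is the P1 interpolation of the grid values, provided here by np.interp."""
    vals = np.stack([np.interp(x + tau * a, x, u) for a in A])
    return vals.min(axis=0)
```

Since $a=0$ is an admissible control and $\ell\equiv 0$, the scheme values can only decrease, and the minimum of $v_0(x)=|x|$ is preserved at $x=0$.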
\subsection{Adding a limiter} \label{sec:limiter}
The basic filtered scheme \eqref{eq:FS} is designed to be of high--order where the solution is regular and when there are no viscosity effects. However, for instance in the case of front propagation, it can be observed that the filtered scheme may let small errors occur near extrema, when two possible directions of propagation occur in the same cell.
This is the case, for instance, near a minimum for an eikonal equation. In order to improve the scheme near extrema, we propose to introduce a limiter before the filtering process.
The limiting correction will be needed only when there is some viscosity effect (it is not needed for pure advection).
Let us consider the case of front propagation,
i.e., equation of type \eqref{eq:hj}, now with \begin{eqnarray}
H(x,v_x)= \max_{a\in A} \big( f(x,a) v_x \big) \end{eqnarray} (i.e., no distributed cost in the Hamiltonian function).
In the one-dimensional case, a viscosity effect may occur at a minimum detected at mesh point $x_j$ if \begin{eqnarray} \label{eq:changesigns}
\mbox{ $\min_a f(x_j,a) \leq 0$ \ \ and \ \ $\max_a f(x_j,a)\geq 0$.} \end{eqnarray} In that case, the solution should not go below the local minima around this point, i.e., we want \begin{eqnarray}\label{eq:limitmin}
u^{n+1}_j \geq u_{min,j}:=\min(u^n_{j-1},u^n_j, u^n_{j+1}), \end{eqnarray} and, in the same way, we want to impose that \begin{eqnarray}\label{eq:limitmax}
u^{n+1}_j \leq u_{max,j}:=\max(u^n_{j-1},u^n_j, u^n_{j+1}). \end{eqnarray}
If we consider the high-order scheme to be of the form $u^{n+1}_j=u^n_j - \tau h^A(u^n)_j$, then the limiting process amounts to requiring that $$
h^A(u^n)_j\leq h^{max}_j:=\frac{u^n_j - u_{min,j}}{\tau} $$ and $$
h^A(u^n)_j\geq h^{min}_j:=\frac{u^n_j - u_{max,j}}{\tau}. $$ This amounts to defining a limited $\bar h^A$ such that, if \eqref{eq:changesigns} holds at mesh point $x_j$, then $$
\bar h^A(u^n)_j:=\min\bigg(\max(h^A(u^n)_j,\, h^{min}_j),\, h^{max}_j\bigg), $$ and, otherwise, $$
\bar h^A_j :\equiv h^A_j. $$ Then the filtering process is the same, using $\bar h^A$ instead of $h^A$ in the definition of the high-order scheme entering $S^F$.
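The limiter is a clamp of $h^A$ between $h^{min}$ and $h^{max}$ where the sign condition \eqref{eq:changesigns} is detected. A sketch (periodic indexing via \texttt{np.roll}; the arrays \texttt{fmin}, \texttt{fmax} holding $\min_a f(x_j,a)$ and $\max_a f(x_j,a)$ are assumed precomputed):

```python
import numpy as np

def limited_hA(hA, u, tau, fmin, fmax):
    """Clamp h^A(u^n)_j into [h^min_j, h^max_j] at points where f changes sign.
    hA: array of h^A(u^n)_j; fmin, fmax: arrays of min_a f(x_j,a), max_a f(x_j,a)."""
    um, up = np.roll(u, 1), np.roll(u, -1)
    hmax = (u - np.minimum(np.minimum(um, u), up)) / tau   # (u_j - u_{min,j}) / tau
    hmin = (u - np.maximum(np.maximum(um, u), up)) / tau   # (u_j - u_{max,j}) / tau
    clamped = np.minimum(np.maximum(hA, hmin), hmax)
    return np.where((fmin <= 0.0) & (fmax >= 0.0), clamped, hA)
```

By construction, wherever the limiter is active, the update $u_j-\tau\bar h^A_j$ stays between the local minimum and maximum of the three neighboring values, which can be checked on arbitrary data.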
For two-dimensional equations a similar limiter could be developed in order to make the scheme more efficient in singular regions. However, for the numerical tests of the next section (in two dimensions) we will simply limit the scheme by using an analogue of \eqref{eq:limitmin}-\eqref{eq:limitmax}. Hence, instead of the scheme value $u^{n+1}_{ij}=S^A(u^n)_{ij}$ for the high--order scheme, we will update the value by \begin{eqnarray}\label{eq:2dlimiter}
u^{n+1}_{ij}=\min(\max(S^A(u^n)_{ij},u^{min}_{ij}),u^{max}_{ij}), \end{eqnarray} where $u^{min}_{ij}=\min(u^n_{ij},u^n_{i\pm 1,j}, u^n_{i,j\pm 1})$ and $u^{max}_{ij}=\max(u^n_{ij},u^n_{i\pm 1,j}, u^n_{i,j\pm 1})$.
\subsection{How to choose the parameter $\epsilon$: a simplified approach}
The scheme should switch to high--order scheme when some regularity of the data is detected, and in that case we should have \begin{eqnarray*}
\bigg| \frac{S^A(v)-S^M(v)}{\epsilon\tau} \bigg| \ = \
\bigg| \frac{h^{A}(\cdot)-h^{M}(\cdot)}{\epsilon} \bigg|\ \leq\ 1. \end{eqnarray*} In a region where a function $v=v(x)$ is regular enough, by using Taylor expansions, the zero order terms in $h^A(x,D^\pm v)$ and $h^M(x,D^\pm v)$ cancel (they are both equal to $H(x,v_x(x))$) and there remains an estimate of order ${\Delta x}$. More precisely, by using the high--order property (A4) we have $$h^A(x_j,D^\pm v_j)= H(x_j,v_x(x_j)) + O({\Delta x}^2).$$ On the other hand, by Taylor expansion, $$D^\pm v_j = v_x(x_j) \pm \frac{1}{2} v_{xx}(x_j) {\Delta x} + O({\Delta x}^2).$$ Hence, denoting $h^M = h^M(x,u^-,u^+)$, it holds at points where $h^M$ is regular that $$
h^M(x_j, D^- v_j, D^+ v_j) = H(x_j,v_x(x_j)) + \frac{1}{2} v_{xx}(x_j) \bigg(
\frac{\partial h^M_j}{\partial u^+} -
\frac{\partial h^M_j}{\partial u^-}
\bigg)\,{\Delta x}
+ O({\Delta x}^2). $$ Therefore, \begin{eqnarray*}
|h^{A}(v)-h^{M}(v)|
& = &
\frac{1}{2} |v_{xx}(x_j)|\,
\bigg| \frac{\partial h^M_j}{\partial u^+} - \frac{\partial h^M_j}{\partial u^-} \bigg|\,{\Delta x} + O({\Delta x}^2). \end{eqnarray*} Hence we will make the choice to take $\epsilon$ roughly such that \begin{eqnarray} \label{eq:eps-rough-estimate}
\frac{1}{2} |v_{xx}(x_j)|\,
\bigg| \frac{\partial h^M_j}{\partial u^+}
- \frac{\partial h^M_j}{\partial u^-}
\bigg|\,{\Delta x} \leq \epsilon \end{eqnarray} (where $h^M_j=h^M(x_j,v_x(x_j),v_x(x_j))$). Therefore, if \eqref{eq:eps-rough-estimate} holds at some point $x_j$, then the scheme will switch to the high-order scheme. Otherwise, when the values given by $h^M$ and $h^A$ differ too much, the scheme will switch to the monotone scheme.
In conclusion we have upper and lower bounds for the switching parameter $\epsilon$: \begin{itemize} \item Choose $\epsilon \leq c_0 \sqrt{\Delta x}$ for some constant $c_0>0$, so that the convergence and error estimate results hold (see Theorem~\ref{th:estimate1}).
\item Choose $\epsilon \geq c_1 {\Delta x}$, where $c_1$ is sufficiently large. This constant should be chosen roughly such that $$
\frac{1}{2} \|v_{xx}\|_\infty
\bigg\| \frac{\partial h^M}{\partial u^+}(.,v_x,v_x) - \frac{\partial h^M}{\partial u^-}(.,v_x,v_x) \bigg\|_\infty \leq c_1, $$ where the range of values of $v_x$ and $v_{xx}$ can, in general, be estimated from $(v_0)_x$, $(v_0)_{xx}$ and the Hamiltonian function $H$. Then the scheme is expected to switch to the high-order scheme where the solution is regular. \end{itemize}
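The two bounds above can be packaged into a small helper (a sketch; the constants $c_1=5$, matching the choice $\epsilon=5{\Delta x}$ of the numerical section, and $c_0=10$ are illustrative assumptions to be tuned per problem):

```python
import numpy as np

def choose_eps(dx, c1=5.0, c0=10.0):
    """Switching parameter: eps = c1*dx (lower bound for switching to high order),
    capped by c0*sqrt(dx) so that the O(sqrt(dx)) error estimate still applies."""
    return min(c1 * dx, c0 * np.sqrt(dx))
```

For small ${\Delta x}$ the cap is inactive and $\epsilon=c_1{\Delta x}$, so both requirements are met simultaneously.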
\section{Numerical tests}
In this section we present several numerical tests in one and two dimensions. Unless otherwise indicated, the filtered scheme will refer to the scheme where the high-order Hamiltonian is the centered scheme in space (see Remark~\ref{rem:CFD-RK2}), with the Heun (RK2) discretization in time (see in particular Eqs.~\eqref{eq:SA-RK2a}-\eqref{eq:SA-RK2b}). Hereafter this scheme will be referred to as the ``centered scheme".
The monotone finite difference scheme and function $h^M$ will be made precise for each example.
For the filtered scheme, unless otherwise specified, the switching coefficient
$\epsilon=5{\Delta x}$
will be used. In practice we have observed numerically that taking $\epsilon=c_1{\Delta x}$ with $c_1$ sufficiently large does not significantly change the numerical results in the following tests.
All the tested filtered schemes (apart from the steady and obstacle equations) enter into the convergence framework of the previous section, so in particular theoretical convergence of order $\sqrt{{\Delta x}}$ holds under the usual CFL condition.
In the tests, the filtered scheme will in general be compared to a second order ENO scheme (for a precise definition, see Appendix~\ref{app:A}), as well as to the centered (a priori unstable) scheme without filtering.
In several cases, local errors in the $L^2$ norm are computed on some subdomain $D$; at a given time $t_n$, this corresponds to $$
e_{L^2_{loc}}:= \bigg(\sum_{\{i,\ x_i\in D\}} |v(t_n,x_i) - u^n_i|^2\bigg)^{1/2}. $$
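To fix ideas, the local error above may be computed as in the following Python sketch (not part of the paper; the function name and the mask-based selection of $D$ are our own choices, and, as in the formula, no $\sqrt{{\Delta x}}$ weight is applied):

```python
import numpy as np

def local_l2_error(v_exact, u_num, x, domain):
    """Local L^2 error over a subdomain D = [a, b]:
    e = (sum_{x_i in D} |v(t_n, x_i) - u_i^n|^2)^(1/2)."""
    a, b = domain
    mask = (x >= a) & (x <= b)          # grid points belonging to D
    return np.sqrt(np.sum((v_exact[mask] - u_num[mask]) ** 2))
```

For instance, with a zero exact solution and errors $3$ and $4$ at the two grid points inside $D$, the local error is $5$.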
The first two numerical examples deal with one-dimensional HJ equations, examples 3 and 4 are concerned with two-dimensional HJ equations, and the last three examples will concern a one-dimensional steady equation and two nonlinear one-dimensional obstacle problems.
\noindent {\bf Example 1.} {\bf Eikonal equation.} We consider the case of \begin{eqnarray}\label{eq:eikonal}
& & v_t + |v_x|=0, \quad t\in(0,T), \ x\in (-2,2),\\
& & v(0,x)=v_0(x):= \max(0,1-x^2)^4, \quad x\in(-2,2)\label{eq:v02a}.
\end{eqnarray}
In Table~\ref{tab:ex2-1}, we compare the filtered scheme (with $\epsilon=5{\Delta x}$) with the centered scheme and the ENO second order scheme, with CFL $=0.37$ and terminal time $T=0.3$. For the filtered scheme, the monotone Hamiltonian used is $h^M(x,v^-,v^+):=\max(v^-,-v^+)$.
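To make this monotone Hamiltonian concrete, the following Python sketch (not part of the paper; the grid layout and the frozen boundary values are our own simplifying assumptions) performs one forward-Euler step of the monotone scheme for $v_t+|v_x|=0$:

```python
import numpy as np

def monotone_step_eikonal(u, dx, tau):
    """One forward-Euler step for v_t + |v_x| = 0 with the monotone
    Hamiltonian h^M(v-, v+) = max(v-, -v+) (Godunov-type upwinding).
    Boundary values are kept fixed (assumption for this sketch)."""
    up = np.empty_like(u)
    Dm = (u[1:-1] - u[:-2]) / dx    # backward difference, plays the role of v^-
    Dp = (u[2:] - u[1:-1]) / dx     # forward difference, plays the role of v^+
    up[1:-1] = u[1:-1] - tau * np.maximum(Dm, -Dp)
    up[0], up[-1] = u[0], u[-1]
    return up
```

For $u(x)=x$ one has $|v_x|=1$, so one step lowers the interior values by exactly $\tau$, while a constant profile is left unchanged; both checks follow directly from $h^M(p,p)=|p|$.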
As expected, we observe that the centered scheme alone is unstable. On the other hand, the filtered and ENO schemes are numerically comparable in that case, and second order convergent (the results are similar for the $L^1$ and the $L^\infty$ errors).
Then, in Table~\ref{tab:ex2-2}, we consider the same PDE but with the following reversed initial data: \begin{eqnarray}
\label{eq:v02b}
& & \tilde v_0(x):=-\max(0,1-x^2)^4, \quad x\in(-2,2). \end{eqnarray} In that case the centered scheme alone is unbounded. The filtered scheme (with $\epsilon=5{\Delta x}$) is second order. However, here, the limiter correction described in Section~\ref{sec:limiter} was needed in order to get second order behavior. We also observe that the filtered scheme gives better results than the ENO scheme. (We have also numerically tested the ENO scheme with the same limiter correction, but it does not improve on the ENO scheme alone.)
In conclusion, this first example shows, first, that the filtered scheme can stabilize an otherwise unstable scheme and, second, that it can give the desired second order behavior.
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
&
& \multicolumn{2}{|c|}{filtered ($\epsilon= 5{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & $N$ & $L^2$ error & order & $L^2$ error & order & $L^2$ error & order \\ \hline \hline
40 & 9 & 7.51E-03 & - & 1.18E-01 & - & 1.64E-02 & - \\
80 & 17 & 3.36E-03 & 1.16 & 1.14E-01 & 0.06 & 4.38E-03 & 1.91 \\
160 & 33 & 8.02E-04 & 2.07 & 1.13E-01 & 0.00 & 1.19E-03 & 1.87 \\
320 & 65 & 1.80E-04 & 2.16 & 1.13E-01 & 0.00 & 3.22E-04 & 1.89 \\
640 & 130 & 4.53E-05 & 1.99 & 1.13E-01 & 0.00 & 8.22E-05 & 1.97 \\ \hline \end{tabular} \caption{(Example 1 with initial data \eqref{eq:v02a}) $L^2$ errors for filtered scheme, centered scheme, and ENO second order scheme \label{tab:ex2-1} } \end{center} \end{table}
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{ }
& \multicolumn{2}{|c|}{filtered ($\epsilon= 5{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & $N$ & error & order & error & order & error & order \\ \hline \hline
40 & 9 & 1.27E-02 & - & 2.03E-02 & - & 2.60E-02 & - \\
80 & 17 & 3.17E-03 & 2.00 & 8.96E-03 & 1.18 & 8.00E-03 & 1.70 \\
160 & 33 & 7.90E-04 & 2.01 & 1.06E-02 & -0.24 & 2.50E-03 & 1.68 \\
320 & 65 & 1.97E-04 & 2.00 & 1.26E-01 & -3.57 & 7.80E-04 & 1.68 \\
640 & 130 & 4.92E-05 & 2.00 & 1.06E+02 & -9.71 & 2.44E-04 & 1.67 \\
\hline \end{tabular} \caption{(Example 1 with initial data \eqref{eq:v02b}.) $L^2$ errors for filtered scheme, centered scheme, and ENO second order scheme. \label{tab:ex2-2} } \end{center} \end{table}
\begin{figure}
\caption{(Example 1) Initial data~\eqref{eq:v02a} (left), and plots at time $T=0.3$ with the centered scheme (middle) and the filtered scheme (right), using $M=160$ mesh points. }
\label{fig:v02a}
\end{figure} \begin{figure}
\caption{(Example 1) Initial data~\eqref{eq:v02b} (left), and plots at time $T=0.3$ with the centered scheme (middle) and the filtered scheme (right), using $M=160$ mesh points. }
\label{fig:v02b}
\end{figure}
\noindent {\bf Example 2.} {\bf Burgers' equation.}
In this example an HJ equivalent of the nonlinear Burgers' equation is considered: \begin{subequations} \label{eq:ex3-1} \begin{eqnarray}
& & v_t + \displaystyle \ud |v_x|^2=0, \quad t>0,\ x \in(-2,2)\\
& & v(0,x)=v_0(x):= \max(0,1-x^2), \quad x\in(-2,2) \end{eqnarray} \end{subequations}
with Dirichlet boundary condition on $(-2,2)$.
The exact solution is known.\footnote{
It holds $$
v(t,x)=\frac{(\max(0,1-|\bar x|))^2}{2t} 1_{ \{t>\ud \} } + \frac{(1-2t)^2- |x|^2}{1-2t}\ 1_{ \{1\geq |x|\geq 1-2t \} }. $$ } In order to test high--order convergence, we consider the smoother initial data obtained from \eqref{eq:ex3-1} at time $t_0:=0.1$, i.e.: \begin{subequations} \label{eq:ex3-2} \begin{eqnarray}
& & w_t+ \displaystyle \ud |w_x|^2=0, \quad t>0,\ x \in(-2,2).\\
& & w(0,x):= v(t_0,x), \quad x\in(-2,2), \end{eqnarray} \end{subequations} with exact solution $w(t,x)=v(t+t_0,x)$.
An illustration is given in Fig.~\ref{fig:burger}. For the filtered scheme, the monotone Hamiltonian used is $h^M(x,v^-,v^+):=\frac{1}{2}(v^-)^2\ 1_{v^->0} + \frac{1}{2}(v^+)^2\ 1_{v^+<0}$.
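This Godunov-type upwind Hamiltonian for $H(p)=p^2/2$ can be sketched in Python as follows (illustrative only; the function name is our own, and we simply transcribe the formula above):

```python
import numpy as np

def hM_burgers(vm, vp):
    """Monotone (upwind) Hamiltonian for H(p) = p^2/2:
    h^M(v-, v+) = (v-)^2/2 * 1_{v- > 0} + (v+)^2/2 * 1_{v+ < 0}.
    Rarefaction case (v- < 0 < v+) correctly yields 0."""
    vm = np.asarray(vm, dtype=float)
    vp = np.asarray(vp, dtype=float)
    return 0.5 * vm**2 * (vm > 0) + 0.5 * vp**2 * (vp < 0)
```

Note the consistency property $h^M(p,p)=p^2/2$ for any $p$, as required of a numerical Hamiltonian.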
Errors are given in Table~\ref{tab:ex3-1}, using CFL $=0.37$ and terminal time $T=0.3$.
In conclusion we observe numerically that the filtered scheme keeps the good behavior of the centered scheme (here stable and almost second order).
\begin{figure}
\caption{(Example 2) Plots at $t=0$ and $t=0.3$ with the filtered scheme. }
\label{fig:burger}
\end{figure}
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{ }
& \multicolumn{2}{|c|}{filtered ($\epsilon= 5{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & $N$ & error & order & error & order & error & order \\ \hline \hline
40 & 9 & 2.06E-02 & - & 2.07E-02 & - & 2.55E-02 & - \\
80 & 17 & 6.24E-03 & 1.73 & 6.24E-03 & 1.73 & 8.24E-03 & 1.63 \\
160 & 33 & 1.85E-03 & 1.76 & 1.85E-03 & 1.76 & 2.81E-03 & 1.55 \\
320 & 65 & 5.51E-04 & 1.74 & 5.51E-04 & 1.74 & 1.03E-03 & 1.45 \\
640 & 130 & 1.68E-04 & 1.71 & 1.68E-04 & 1.71 & 3.74E-04 & 1.47 \\
\hline \end{tabular} \caption{(Example 2) $L^2$ errors for filtered scheme, centered scheme, and ENO second order scheme. \label{tab:ex3-1} } \end{center} \end{table}
\noindent {\bf Example 3.} {\bf 2D rotation.} We now apply the filtered scheme to an advection equation in two dimensions: \begin{eqnarray}
& & v_t - y v_x + x v_y=0,\quad (x,y)\in \Omega,\ t>0,\\
& & v(0,x,y)=v_0(x,y):= 0.5- 0.5\, \max(0,\frac{1-(x-1)^2-y^2}{1- r_0^2})^4 \end{eqnarray} where $\Omega:=(-A,A)^2$ (with $A=2.5$), $r_0=0.5$ and with Dirichlet boundary condition $v(t,x)=0.5$, $x\in\partial\Omega$. This initial condition is regular and such that the level set $v_0(x,y)=0$ corresponds to a circle centered at $(1,0)$ and of radius $r_0$.
In this example the monotone numerical Hamiltonian is defined by \begin{eqnarray}\label{eq:FD2}
h^{M}(u^-_x,u_x^+,u_y^-,u_y^+)
&:= & \max(0,f_1(a,x,y))u_x^- + \min(0,f_1(a,x,y))u_x^+ \\
& & +\ \max(0,f_2(a,x,y))u_y^-+ \min(0,f_2(a,x,y))u_y^+ \nonumber
\end{eqnarray} and the high--order scheme is the centered finite difference scheme in both spatial variables, with RK2 in time. The filtered scheme is otherwise the same as \eqref{eq:FS}. However, it is necessary to use a larger constant $c_1$ in the choice $\epsilon=c_1{\Delta x}$ in order to get (global) second order errors. We have used here $\epsilon=20{\Delta x}$.
On the other hand the CFL condition is \begin{eqnarray}\label{eq:CFL2}
\mu:=c_0(\frac{\tau}{{\Delta x}}+\frac{\tau}{{\Delta y}} )\leq 1, \end{eqnarray} where here $c_0=2.5$ (an upper bound for the dynamics in the considered domain $\Omega$). In this test the CFL number is $\mu:=0.37$.
Results are shown in Table~\ref{tab:ex3-1-2D} for terminal time $T:=\pi/2$.
Although the centered scheme is a priori unstable, in this example it is numerically stable and of second order. So we observe that the filtered scheme keeps this good behavior and is also of second order (the ENO scheme gives comparable results here).
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
& & \multicolumn{2}{|c|}{filtered} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO} \\ \hline
$Mx$ & $Ny$ & $L^2$ error & order & $L^2$ error & order & $L^2$ error & order \\ \hline \hline
20 & 20 & 5.05E-01 & - & 5.05E-01 & - & 6.99E-01 & - \\
40 & 40 & 1.48E-01 & 1.77 & 1.48E-01 & 1.77 & 4.66E-01 & 0.58 \\
80 & 80 & 3.77E-02 & 1.98 & 3.77E-02 & 1.98 & 2.04E-01 & 1.19 \\
160 & 160 & 9.40E-03 & 2.00 & 9.40E-03 & 2.00 & 5.50E-02 & 1.89 \\
320 & 320 & 2.34E-03 & 2.01 & 2.34E-03 & 2.01 & 1.29E-02 & 2.10 \\ \hline \end{tabular}
\caption{(Example 3) Global $L^2$ errors for the filtered scheme, centered and second order ENO schemes (with CFL $0.37$). \label{tab:ex3-1-2D} } \end{center} \end{table}
\begin{figure}
\caption{ (Example 3) Filtered scheme, plots at time $t=0$ (left) and $t=\pi/2$ (right) with $M=80$ mesh points. }
\label{fig:ex3}
\end{figure}
\noindent {\bf Example 4.} {\bf{Eikonal equation.}} In this example we consider the eikonal equation \begin{eqnarray}\label{eq:eikonal2d}
& & v_t+ |\nabla v|=0,\quad (x,y)\in \Omega,\ t>0 \end{eqnarray} in the domain $\Omega:=(-3,3)^2$. The initial data is defined by {\small \begin{eqnarray} \label{eq:eikonal-twoholes}
& & \hspace{-1cm} v_0(x,y) = \\
& & 0.5-0.5\, \max\bigg( \max(0,\frac{1-(x-1)^2-y^2}{1- r_0^2})^4,\ \max(0,\frac{1-(x+1)^2-y^2}{1- r_0^2})^4\bigg).
\nonumber \end{eqnarray} } The zero-level set of $v_0$ corresponds to two separate circles of radius $r_0$ centered at $A=(1,0)$ and $B=(-1,0)$ respectively. Dirichlet boundary conditions are used as in the previous example.
\if{ We first consider the following smooth initial data \begin{eqnarray} \label{eq:eikonal-onehole}
& & v(0,x,y)=v_0(x,y)=0.5 -0.5\, \max(0,\frac{1-x^2-y^2}{1- r_0^2})^4, \end{eqnarray} with $r_0=0.5$, with Drichlet boundary conditions as in the previous example.
Numerical results are given in Table~\ref{tab:ex41} were is compared the global $L^2$ errors for the filtered scheme (with $\epsilon=20\ex$), the centered scheme, and a second order ENO scheme. }\fi
The monotone Hamiltonian $h^M$ used in the filtered scheme is in Lax-Friedrichs form: \begin{eqnarray}\label{eq:LF2}
h^{M}(x,u^-_1,u_1^+,u_2^-,u_2^+)
& = & H(x,\frac{u^-_1+u_1^+}{2},\frac{u_2^-+u_2^+}{2}) \nonumber \\
& & \quad - \frac{C_x}{2}(u_1^+ - u_1^-) - \frac{C_y}{2}(u_2^+ - u_2^-), \end{eqnarray} where, here, $C_x=C_y=1$. We used the CFL condition $\mu=0.37$ as in \eqref{eq:CFL2}. Also, the simple limiter~\eqref{eq:2dlimiter} was used for the filtered scheme as described in Section~\ref{sec:limiter}. It is needed in order to get a good second order behavior at extrema of the numerical solution. The filter coefficient is set to $\epsilon=20{\Delta x}$ as in the previous example.
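The Lax-Friedrichs numerical Hamiltonian \eqref{eq:LF2} can be transcribed directly, as in the following Python sketch (illustrative only; the function names and the eikonal example $H(p,q)=\sqrt{p^2+q^2}$ without explicit $x$-dependence are our own choices):

```python
import numpy as np

def hM_lax_friedrichs(u1m, u1p, u2m, u2p, H, Cx=1.0, Cy=1.0):
    """Lax-Friedrichs numerical Hamiltonian: H evaluated at the averaged
    one-sided differences, minus dissipation proportional to the jumps."""
    p = 0.5 * (u1m + u1p)
    q = 0.5 * (u2m + u2p)
    return H(p, q) - 0.5 * Cx * (u1p - u1m) - 0.5 * Cy * (u2p - u2m)

# Eikonal Hamiltonian for Example 4 (no explicit x-dependence here)
H_eik = lambda p, q: np.sqrt(p**2 + q**2)
```

As a sanity check, consistency holds ($h^M=H$ when the left and right differences agree), and the dissipation terms lower the value when there is a positive jump.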
\if{ \begin{table} \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
& & \multicolumn{2}{|c|}{filtered ($\epsilon=20{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$Mx$ & $Ny$ & $L^2$ error & order & $L^2$ error & order & $L^2$ error & order \\ \hline \hline
25 & 25 & 5.39E-01 & - & 3.73E-01 & - & 4.22E-01 & - \\
50 & 50 & 1.82E-01 & 1.57 & 1.42E-01 & 1.39 & 1.57E-01 & 1.42 \\
100 & 100 & 3.72E-02 & 2.29 & 4.72E-02 & 1.59 & 5.12E-02 & 1.62 \\
200 & 200 & 9.36E-03 & 1.99 & 1.66E-02 & 1.51 & 1.48E-02 & 1.80 \\
400 & 400 & 2.36E-03 & 1.99 & 7.23E-03 & 1.20 & 4.34E-03 & 1.77 \\ \hline \end{tabular} \caption{(Example 4, with initial data \eqref{eq:eikonal-onehole}) Global $L^2$ errors for filtered scheme, centered and second order ENO schemes, at time $t=0.6$. \label{tab:ex41} } \end{center} \end{table}
}\fi
\if{ We now consider an other initial data corresponding to two separates holes centered in $A=(1,0)$ and $B=(-1,0)$ respectively: {\small \begin{eqnarray} \label{eq:eikonal-twoholes}
& & \hspace{-1cm} v(0,x,y) = \\
& & 0.5-0.5\, \max\bigg( \max(0,\frac{1-(x-1)^2-y^2}{1- r_0^2})^4,\ \max(0,\frac{1-(x+1)^2-y^2}{1- r_0^2})^4\bigg).
\nonumber \end{eqnarray} } }\fi
Numerical results are given in Table~\ref{tab:ex42}, showing the global $L^2$ errors for the filtered scheme, the centered scheme, and a second order ENO scheme, at time $t=0.6$. We observe that the centered scheme has some instabilities on fine meshes, while the filtered scheme performs as expected.
\begin{table}[!hbtp] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
& & \multicolumn{2}{|c|}{filtered ($\epsilon=20{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$Mx$ & $Ny$ & $L^2$ error & order & $L^2$ error & order & $L^2$ error & order \\ \hline \hline
25 & 25 & 5.39E-01 & - & 6.00E-01 & - & 5.84E-01 & - \\
50 & 50 & 1.82E-01 & 1.57 & 2.25E-01 & 1.41 & 2.11E-01 & 1.47 \\
100 & 100 & 3.72E-02 & 2.29 & 8.46E-02 & 1.41 & 6.88E-02 & 1.62 \\
200 & 200 & 9.36E-03 & 1.99 & 3.53E-02 & 1.26 & 2.02E-02 & 1.76 \\
400 & 400 & 2.36E-03 & 1.99 & 1.36E-01 & -1.95 & 5.98E-03 & 1.76 \\ \hline \end{tabular}
\caption{(Example 4) Global $L^2$ errors for filtered scheme, centered and second order ENO schemes. \label{tab:ex42} } \end{center} \end{table}
\begin{figure}
\caption{
(Example 4) Plots at times $t=0$ (top) and $t=\pi/2$ (bottom) for the filtered scheme with $M=50$ mesh points. The figures to the right represent the $0$-level sets. }
\label{fig:ex42}
\end{figure}
\if{
\noindent {\bf Example 5.} In this example the considered HJ equation is \begin{eqnarray}
v_t-yv_x+xv_y+ \|\nabla v\|=0, \quad (x,y)\in \Omega,\ t>0, \end{eqnarray} with $\Omega=(-3,3)^2$, and with the following initial data: \begin{eqnarray}
v_0(x,y)=\min(0.5,\|x-A\|_{2}-0.5,\|x- B\|_{2}-0.5),
\end{eqnarray} where $\|.\|$ is Euclidean norm, $A=(1,0)$ and $B=(-1,0)$ (together with Dirichlet boundary condition $v(t,x,y)=0.5$ for $(x,y)\in\partial \Omega$).
Again we compare the filtered scheme (with $\epsilon=5{\Delta x}$) with the centered (a priori unstable) scheme and the second order ENO scheme.
Numerical results are shown in Table~\ref{tab:ex5}, for terminal time $T=0.75$ and CFL $0.37$. Local errors has been computed in the region $|v(t,x,y)| \leq 0.1$ and also away from the singular
line $x+y=0$ (i.e., for points such that furthermore $|\frac{x+y}{\sqrt{2}}|\geq 0.1$). In this example, the naive centered scheme is unstable (as expected), while the filtered scheme is stable and of second order.
\begin{table}[H] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
& & \multicolumn{2}{|c|}{filtered $\epsilon=5{\Delta x}$} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$Mx$ & $Nx$ & $L^2$ error & order & $L^2$ error & order & $L^2$ error & order \\ \hline \hline
25 & 25 & 1.02E-01 & 0.00 & 1.11E-01 & 0.00 & 1.14E-01 & 0.00 \\
50 & 50 & 1.78E-02 & 2.51 & 1.99E-02 & 2.48 & 2.12E-02 & 2.43 \\
100 & 100 & 6.06E-03 & 1.56 & 2.04E-02 & -0.03 & 3.67E-03 & 2.53 \\
200 & 200 & 1.13E-03 & 2.43 & 1.27E-02 & 0.69 & 8.61E-04 & 2.09 \\
400 & 400 & 2.86E-04 & 1.98 & 1.13E-02 & 0.17 & 2.12E-04 & 2.02 \\ \hline \end{tabular} \caption{(Example 5) Local errors of filtered, centered and ENO scheme. \label{tab:ex5} } \end{center} \end{table}
}\fi
\noindent
{\bf Example 5\ Steady eikonal equation.} We consider a steady eikonal equation with Dirichlet boundary condition, which is taken from Abgrall~\cite{A09}: \begin{subequations} \label{eq:A} \begin{eqnarray}
& & |v_x|=f(x)\quad \ x\in (0,1),\\
& & v(0)=v(1)=0,
\end{eqnarray} \end{subequations} where $f(x)=3x^2+a$, with $a=\frac{1-2x_0^3}{2x_0-1}$ and $x_0=\frac{\sqrt[3]{2}+2}{4\sqrt[3]{2}}$. The exact solution is known: \begin{eqnarray} v(x) := \left\{
\begin{array}{l l}
x^3+ax & \quad x\in[0,x_0], \\
1+a-ax-x^3 & \quad x\in[x_0,1].
\end{array} \right. \end{eqnarray} The steady solution of \eqref{eq:A} can be considered as the limit $\displaystyle\lim_{t\rightarrow \infty} v(t,x)$, where $v$ is the solution of the corresponding time-marching form: \begin{subequations} \label{eq:A1} \begin{eqnarray}
& & v_t+|v_x|=f(x)\quad \ x\in (0,1), \ t>0, \\
& & v(t,0)=v(t,1)=0, \quad t>0. \end{eqnarray} \end{subequations}
In this example the upwind monotone scheme is used: \begin{eqnarray*}
h^M(.)_j:=\frac{u_j^{n+1}-u^n_j}{\tau} + \max\bigg\{ \frac{u_j^n -u^n_{j-1}}{{\Delta x}} , \frac{u_j^n -u^n_{j+1}}{{\Delta x}}\bigg\}
- f(x_j), \end{eqnarray*}
the high--order scheme will be the centered scheme,
and the filtered scheme \eqref{eq:FS} will be used with $\epsilon=5{\Delta x}$.
The iterations are stopped when the difference between two successive time steps is small enough or a fixed number of iterations is reached, i.e., in this example, \begin{eqnarray}
\|u^{n+1} - u^n\|_{L^\infty}:=\max_i |u^{n+1}_{i} -u^n_{i}| \leq 10^{-6} \quad \mbox{or} \quad n\geq N_{max}:=5000. \end{eqnarray} As analyzed in~\cite{bok-fal-fer-gru-kal-zid} for $\epsilon$-monotone schemes, for a given mesh step, even if the iterations may not converge (because of the non-monotonicity of the scheme), the iterates can be shown to be close to a fixed point after enough iterations.
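The time-marching iteration with this stopping criterion can be sketched as follows (illustrative only, using the monotone upwind scheme alone without the filter, the simpler right-hand side $f\equiv 1$ instead of the paper's $f$, and a pseudo-time step $\tau=0.5{\Delta x}$ of our own choosing; for $f\equiv 1$ with zero boundary data the steady solution is the distance function $\min(x,1-x)$):

```python
import numpy as np

def solve_steady_eikonal(f, x, tol=1e-6, n_max=5000):
    """March u_t + |u_x| = f(x), u(0)=u(1)=0, to steady state with the
    upwind monotone scheme; stop when ||u^{n+1}-u^n||_inf <= tol
    or n >= n_max (the criterion used in the text)."""
    dx = x[1] - x[0]
    tau = 0.5 * dx                 # CFL-stable pseudo-time step (our choice)
    u = np.zeros_like(x)
    for n in range(n_max):
        # upwind |u_x| at interior points: max(D^- u, -D^+ u)
        H = np.maximum(u[1:-1] - u[:-2], u[1:-1] - u[2:]) / dx
        unew = u.copy()
        unew[1:-1] = u[1:-1] - tau * (H - f[1:-1])
        if np.max(np.abs(unew - u)) <= tol:
            return unew, n + 1
        u = unew
    return u, n_max
```

On a uniform grid the discrete fixed point satisfies $u_j=\min(u_{j-1},u_{j+1})+{\Delta x} f_j$, so for $f\equiv 1$ the iterates converge to $\min(x_j,1-x_j)$ well before the iteration cap.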
\if{ \begin{table}[!hbtp] \begin{center}
\begin{tabular}{|c|cc|cc|cc|} \hline
& \multicolumn{2}{|c|}{filtered ($\epsilon= 5{\Delta x}$)} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & error & order & error & order & error & order \\ \hline \hline
\hline \end{tabular} \caption{(Example 5) Global errors of filtered scheme with RK2 in time. \label{tab:ex6} } \end{center} \end{table} }\fi
\begin{table}[!hbtp] \begin{center}
\begin{tabular}{|c|cc|cc||cc|} \hline
& \multicolumn{2}{|c|}{filtered} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{filtered + ENO} \\ \hline
$M$ & error & order & error & order & error & order \\ \hline \hline
50 & 2.16E-03 & - & NaN & - & 5.29E-03 & - \\
100 & 7.14E-04 & 1.60 & NaN & - & 1.35E-03 & 1.97 \\
200 & 2.17E-04 & 1.72 & NaN & - & 3.42E-04 & 1.98 \\
400 & 6.32E-05 & 1.78 & NaN & - & 8.61E-05 & 1.99 \\
800 & 2.17E-05 & 1.54 & NaN & - & 2.16E-05 & 2.00 \\ \hline
\end{tabular} \caption{(Example 5) Global errors for filtered scheme, compared with the centered (unstable) scheme, and a filtered ENO scheme. \label{tab:ex6} } \end{center} \end{table}
\if{ \begin{table}[!h] \begin{center}
\begin{tabular}{|c|cc|cc|cc|} \hline
& \multicolumn{2}{|c|}{($L_1$-Error)} & \multicolumn{2}{|c|}{$L_2$-Error)} & \multicolumn{2}{|c|}{$L_\infty$-Error)} \\ \hline
$M$ & error & order & error & order & error & order \\ \hline \hline
50 & 1.90E-03 & - & 2.97E-03 & - & 5.29E-03 & - \\
100 & 5.02E-04 & 1.92 & 7.81E-04 & 1.93 & 1.35E-03 & 1.97 \\
200 & 1.28E-04 & 1.97 & 1.99E-04 & 1.97 & 3.42E-04 & 1.98 \\
400 & 3.24E-05 & 1.98 & 5.04E-05 & 1.98 & 8.61E-05 & 1.99 \\
800 & 8.15E-06 & 1.99 & 1.27E-05 & 1.99 & 2.16E-05 & 2.00 \\
\hline \end{tabular} \caption{Example 5 Global errors of ENO2 scheme + filtered with RK2 in time. \label{tab:ex4.1-3} } \end{center} \end{table} \begin{table}[!h] \begin{center}
\begin{tabular}{|c|cc|cc|cc|} \hline
& \multicolumn{2}{|c|}{($L_1$-Error)} & \multicolumn{2}{|c|}{$L_2$-Error)} & \multicolumn{2}{|c|}{$L_\infty$-Error)} \\ \hline
$M$ & error & order & error & order & error & order \\ \hline \hline
50 & 3.46E-03 & - & 1.05E-02 & - & 7.04E-02 & - \\
100 & 9.04E-04 & 1.94 & 3.67E-03 & 1.52 & 3.54E-02 & 0.99 \\
200 & 2.30E-04 & 1.98 & 1.29E-03 & 1.51 & 1.78E-02 & 1.00 \\
400 & 5.81E-05 & 1.98 & 4.53E-04 & 1.51 & 8.89E-03 & 1.00 \\
800 & 1.46E-05 & 1.99 & 1.60E-04 & 1.50 & 4.45E-03 & 1.00 \\ \hline \end{tabular} \caption{(Example 6) Global errors of ENO alone scheme with RK2 in time. (only ENO2) \label{tab:ex6-eno} } \end{center} \end{table} }\fi
\begin{figure}
\caption{(Example 5) Filtered scheme for a steady equation, with $M=50$ mesh points.}
\end{figure}
\noindent {\bf Example 6\ Advection with an obstacle.} Here we consider an obstacle problem, which is taken from~\cite{Boka-cheng-shu-13}: \begin{eqnarray}
&& \min(v_t+v_x,\ v-g(x))=0,\quad t >0,\ x\in[-1,1],\\
&& v_0(x)= 0.5+\sin(\pi x), \quad x\in[-1, 1], \end{eqnarray} together with periodic boundary conditions. The obstacle function is $g(x):= \sin(\pi x)$. In this case the exact solution is given by: \begin{eqnarray} v(t,x) := \left\{
\begin{array}{l l}
\max(v_0(x-t), g(x))& ~ \text{if}~ t< \frac{1}{3} \\
\max(v_0(x-t), g(x), -1_{x\in\ [0.5,1]} )& ~ \text{if} ~ t \in [ \frac{1}{3},\frac{5}{6}], \\
\max(v_0(x-t), g(x), 1_{x\in\ [-1,t-\frac{5}{6}] \cup [0.5,1]} )& ~ \text{if} ~ t \in [\frac{5}{6}, 1], \\
\end{array} \right. \end{eqnarray} Results are given in Table~\ref{tab:ex7}, for terminal time $t=0.5$. Errors are computed away from singular points, i.e., in the region $[-1,1] \setminus \big( \cup_{i=1,\dots,3}[s_i-\delta, s_i+\delta]\big)$, where $s_1=-0.1349733$, $s_2=0.5$ and $s_3=2/3$ are the three singular points. The filtered scheme is numerically of second order (ENO gives comparable results here).
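For obstacle equations of the form $\min(v_t+v_x,\ v-g)=0$, a standard monotone step is an advection step followed by projection onto the obstacle; the following Python sketch illustrates this (not the paper's filtered scheme, just the monotone building block, on a periodic grid with upwinding for $v_t+v_x=0$):

```python
import numpy as np

def obstacle_advection_step(u, g, dx, tau):
    """One monotone step for min(v_t + v_x, v - g) = 0 on a periodic grid:
    upwind advection step, then projection onto the obstacle,
    u^{n+1}_j = max(u_j - tau*(u_j - u_{j-1})/dx, g_j)."""
    adv = u - tau / dx * (u - np.roll(u, 1))   # upwind step for v_t + v_x = 0
    return np.maximum(adv, g)
```

When the advected value dips below the obstacle, the $\max$ projection lifts it back to $g$; otherwise the step reduces to pure upwind advection.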
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{Errors}
& \multicolumn{2}{|c|}{filtered $\epsilon= 5{\Delta x}$} & \multicolumn{2}{|c|}{centered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & $N$ & error & order & error & order & error & order \\ \hline \hline
40 & 20 & 7.93E-03 & 2.03 & 1.63E-02 & 1.54 & 2.14E-02 & 1.59 \\
80 & 40 & 1.84E-03 & 2.10 & 2.98E-02 & -0.87 & 7.75E-03 & 1.46 \\
160 & 80 & 3.92E-04 & 2.24 & 1.46E-02 & 1.03 & 1.07E-03 & 2.86 \\
320 & 160 & 9.67E-05 & 2.02 & 8.02E-03 & 0.86 & 2.72E-04 & 1.97 \\
640 & 320 & 2.40E-05 & 2.01 & 4.10E-03 & 0.97 & 6.92E-05 & 1.98 \\ \hline \end{tabular} \caption{(Example 6) $L^\infty$ errors away from singular points, for filtered scheme, centered scheme, and second order ENO scheme. \label{tab:ex7} } \end{center} \end{table}
\if{ \begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{Errors}
& \multicolumn{2}{|c|}{($L_1$-Error)} & \multicolumn{2}{|c|}{$L_2$-Error)} & \multicolumn{2}{|c|}{$L_\infty$-Error)} \\ \hline
$M$ & $N$ & error & order & error & order & error & order \\ \hline \hline
40 & 25 & 3.56E-03 & - & 3.96E-03 & - & 5.43E-03 & - \\
80 & 49 & 8.70E-04 & 2.03 & 9.77E-04 & 2.02 & 1.53E-03 & 1.83 \\
160 & 98 & 2.15E-04 & 2.02 & 2.40E-04 & 2.03 & 3.47E-04 & 2.14 \\
320 & 196 & 5.35E-05 & 2.01 & 5.96E-05 & 2.01 & 9.53E-05 & 1.86 \\
640 & 391 & 1.33E-05 & 2.01 & 1.49E-05 & 2.00 & 2.56E-05 & 1.90 \\
\hline \end{tabular} \caption{ ({Example 6}.2) Filtered scheme ($5{\Delta x}$) with RK2 in time. \label{tab:ex72} } \end{center} \end{table} }\fi
\begin{figure}
\caption{(Example 6) Plots at $T=0$ (initial data), $T=0.3$ and $T=0.5$.}
\end{figure}
\noindent {\bf Example 7\ Eikonal with an obstacle.} We consider an Eikonal equation with an obstacle term, also taken from~\cite{Boka-cheng-shu-13}: \begin{eqnarray}
&& \min(v_t+|v_x|,\ v-g(x))=0,\quad t >0,\ x\in[-1,1],\\ && v_0(x)= 0.5+\sin(\pi x), \quad x\in[-1, 1], \end{eqnarray} with periodic boundary conditions on $(-1,1)$ and $g(x)= \sin(\pi x)$. In this case the exact solution is given by: \begin{eqnarray}
v(t,x)=\max(\bar v(t,x), g(x)),
\end{eqnarray} where $\bar v$ is the solution of the eikonal equation $\bar v_t+ |\bar v_x|=0$ with initial data $v_0$. The formula $\bar v(t,x)= \min_{y\in[x-t,x+t]}v_0(y)$ holds, which simplifies to \begin{eqnarray}
\bar v(t,x) = \left\{
\begin{array}{l l}
v_0(x+t)& ~\text{if} ~ x< -0.5-t \\
-0.5 & ~\text{if} ~ x \in [ -0.5-t, -0.5+t], \\
\min(v_0(x-t), v_0(x+t))& ~ \text{if} ~ x \geq -0.5+t, \\
\end{array} \right. \end{eqnarray}
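The representation $\bar v(t,x)= \min_{y\in[x-t,x+t]}v_0(y)$ can be checked directly against the case formula above; the following Python sketch evaluates the minimum by brute-force sampling (illustrative only, exploiting that $v_0(x)=0.5+\sin(\pi x)$ extends $2$-periodically):

```python
import numpy as np

v0 = lambda x: 0.5 + np.sin(np.pi * x)   # 2-periodic initial data

def vbar(t, x, n_samples=2001):
    """Hopf-Lax type formula bar v(t,x) = min_{y in [x-t, x+t]} v0(y),
    evaluated by sampling the window (a sketch, not a production solver)."""
    y = np.linspace(x - t, x + t, n_samples)
    return np.min(v0(y))
```

For $t=0.2$ this reproduces the three branches: $v_0(x+t)$ left of $-0.5-t$, the flat value $-0.5$ on $[-0.5-t,-0.5+t]$, and $\min(v_0(x-t),v_0(x+t))$ to the right.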
Results are given in Table~\ref{tab:ex8} for terminal time $T=0.2$. Plots are also shown in Figure~\ref{fig:ex8} for different times (for $t\geq \frac{1}{3}$ the solution remains unchanged).
\if{ \begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
Errors
& \multicolumn{2}{|c|}{filtered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & error & order & error & order \\ \hline \hline
40 & 2.35E-03 & - & 3.74E-03 & - & 9.59E-03 & - \\
80 & 3.88E-04 & 2.60 & 6.26E-04 & 2.58 & 2.79E-03 & 1.78 \\
160 & 7.57E-05 & 2.36 & 1.13E-04 & 2.47 & 4.12E-04 & 2.76 \\
320 & 1.59E-05 & 2.25 & 2.26E-05 & 2.32 & 1.04E-04 & 1.99 \\
640 & 3.73E-06 & 2.09 & 5.50E-06 & 2.04 & 3.22E-05 & 1.69 \\
\hline \end{tabular} \caption{\label{tab:ex8} Filtered scheme at time $t=0.2$} \end{center} \end{table}
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|cc|cc|cc|cc|} \hline
\multicolumn{2}{|c|}{Errors}
& \multicolumn{2}{|c|}{($L_1$-Error)} & \multicolumn{2}{|c|}{$L_2$-Error)} & \multicolumn{2}{|c|}{$L_\infty$-Error)} \\ \hline
$M$ & $N$ & error & order & error & order & error & order \\ \hline \hline
40 & 8 & 5.87E-03 & - & 6.85E-03 & - & 1.31E-02 & - \\
80 & 16 & 1.52E-03 & 1.95 & 2.12E-03 & 1.69 & 5.53E-03 & 1.24 \\
160 & 32 & 3.98E-04 & 1.93 & 6.80E-04 & 1.64 & 2.25E-03 & 1.30 \\
320 & 64 & 1.04E-04 & 1.94 & 2.18E-04 & 1.64 & 9.20E-04 & 1.29 \\
640 & 128 & 2.67E-05 & 1.96 & 6.96E-05 & 1.65 & 3.77E-04 & 1.29 \\
\hline \end{tabular} \caption{ ENO scheme (second order ) with RK2 in time and terminal time T=0.2.} \end{center} \end{table} }\fi
\begin{table}[\tablepos] \begin{center}
\begin{tabular}{|c|cc|cc|} \hline
Errors
& \multicolumn{2}{|c|}{filtered} & \multicolumn{2}{|c|}{ENO2} \\ \hline
$M$ & error & order & error & order \\ \hline \hline
40 & 3.74E-03 & - & 6.85E-03 & - \\
80 & 6.26E-04 & 2.58 & 2.12E-03 & 1.69 \\
160 & 1.13E-04 & 2.47 & 6.80E-04 & 1.64 \\
320 & 2.26E-05 & 2.32 & 2.18E-04 & 1.64 \\
640 & 5.50E-06 & 2.04 & 6.96E-05 & 1.65 \\ \hline \end{tabular} \caption{(Example 7) Errors for the filtered scheme and the ENO scheme at time $t=0.2$. \label{tab:ex8}} \end{center} \end{table}
\begin{figure}
\caption{ (Example 7) Plots at times $t=0$, $t=0.2$ and $t=0.4$. The dark line is the numerical solution, similar to the exact solution, and the light line is the obstacle function.}
\label{fig:ex8}
\end{figure}
\section{Conclusion}
We propose a ``filtered'' scheme which behaves as a high--order scheme when the solution is smooth and as a low order monotone scheme otherwise. It has a simple formulation and is easy to implement. Rigorous error bounds hold, of the same order as the Crandall-Lions estimates in $\sqrt{{\Delta x}}$, where ${\Delta x}$ is the mesh size. When the solution is smooth, a high-order consistency error estimate also holds.
Several numerical examples demonstrate the ability of the scheme to stabilize an otherwise unstable scheme, with a precision similar to that of a second order ENO scheme on basic linear and nonlinear examples.
Ongoing work concerns the application of the present approach to front propagation equations.
\appendix
\section{An essentially non-oscillatory (ENO) scheme of second order} \label{app:A}
We recall here a simple ENO method of order two based on the work of Osher and Shu~\cite{OS91} for Hamilton--Jacobi equations (the ENO method was designed by Harten et al.~\cite{HEOC87} for the approximate solution of nonlinear conservation laws).
Let $m$ be the minmod function defined by \begin{eqnarray}\label{linEquGrad} m(a,b)=\left\{
\begin{array}{ll}
\displaystyle a \quad {\rm if}\, |a|\leq |b|,~ ab >0\\
\displaystyle b \quad {\rm if}\, |b| < |a|,~ ab>0\\
\displaystyle 0 \quad {\rm if}\, ab \leq 0
\end{array} \right. \end{eqnarray}
(other functions can be considered such as $m(a,b)=a$ if $|a|\leq |b|$ and $m(a,b)=b$ otherwise). Let $D^\pm u_j=\pm (u_{j\pm 1}-u_j)/{\Delta x}$ and $$
D^2 u_j : = \frac{u_{j+1}-2 u_j + u_{j-1}}{{\Delta x}^2}. $$ Then the right and left ENO approximations of the derivative can be defined by \begin{eqnarray*}
\bar D^{\pm}u_{j}= D^{\pm}u_j \mp \frac{1}{2} {\Delta x}\ m(D^2 u_{j},D^2 u_{j\pm 1} )
\end{eqnarray*} and the ENO (Euler forward) scheme by $$
S_0(u)_j:= u_j - \tau h^M(x_j,\,\bar D^-u_j,\, \bar D^+u_j). $$ The corresponding RK2 scheme can then be defined by $S(u)=\frac{1}{2}(u + S_0(S_0(u)))$.
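The minmod function and the ENO derivative approximations above translate directly to Python (a sketch; function names and the interior index range requiring two neighbors on each side are our own choices):

```python
import numpy as np

def minmod(a, b):
    """Minmod: the smaller-magnitude argument when signs agree, else 0."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.where(a * b > 0, np.where(np.abs(a) <= np.abs(b), a, b), 0.0)

def eno2_derivatives(u, dx):
    """Second-order ENO left/right approximations of u_x:
    bar D^{+-} u_j = D^{+-} u_j -+ (dx/2) m(D^2 u_j, D^2 u_{j+-1}).
    Returns values at interior indices j = 2, ..., n-3."""
    D2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # D^2 u_j for j = 1..n-2
    Dm = (u[2:-2] - u[1:-3]) / dx                   # D^- u_j for j = 2..n-3
    Dp = (u[3:-1] - u[2:-2]) / dx                   # D^+ u_j for j = 2..n-3
    Dbar_m = Dm + 0.5 * dx * minmod(D2[1:-1], D2[:-2])  # m(D^2_j, D^2_{j-1})
    Dbar_p = Dp - 0.5 * dx * minmod(D2[1:-1], D2[2:])   # m(D^2_j, D^2_{j+1})
    return Dbar_m, Dbar_p
```

For the quadratic $u(x)=x^2$ the second-difference correction makes both one-sided approximations exactly equal to $u_x=2x$, which illustrates the second order accuracy.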
\if{
{\bf{TVD RK3 scheme }} Here we are recalling third order TVD RK scheme \begin{eqnarray*}
& & u^{n,1}_i:= u^n_i - \tau h(x_i,D^\mp u^n_i).\\
& &u^{n,2}_{i}:= \frac{3}{4}(u^n_i +\frac{1}{4}u^{n,1}-\tau h(x_i,D^\mp u^{n,1}_i).\\
&& u^{n+1}_i= \frac{1}{3}u_i^n+\frac{2}{3}u^{n,2}-\frac{2}{3}\tau h(x_i,D^\mp u^{n,2}_i),\\ \end{eqnarray*} }\fi
\end{document}
\begin{document}
\newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\R}[1]{\textcolor{red}{#1}} \newcommand{\B}[1]{\textcolor{blue}{#1}} \newcommand{\fixme}[1]{\textcolor{orange}{#1}}
\def\epsilon_{\rm arm}{\epsilon_{\rm arm}} \def\epsilon_{\rm src}{\epsilon_{\rm src}} \def\epsilon_{\rm ext}{\epsilon_{\rm ext}} \def\epsilon_{\rm int}{\epsilon_{\rm int}} \def\epsilon_{\rm out}{\epsilon_{\rm out}}
\defT_{\rm itm}{T_{\rm itm}} \defT_{\rm src}{T_{\rm src}}
\def{\bf M}_{\rm rot}{{\bf M}_{\rm rot}} \def{\bf M}_{\rm sqz}{{\bf M}_{\rm sqz}} \def{\bf M}_{\rm opt}{{\bf M}_{\rm opt}} \def{\bf M}_{\rm io}{{\bf M}_{\rm io}} \def{\bf M}_{\rm c}{{\bf M}_{\rm c}}
\title{Broadband Quantum Noise Reduction in Future Long Baseline Gravitational-wave Detectors via EPR Entanglement}
\author{Jacob L. Beckey} \affiliation{School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom } \author{Yiqiu Ma} \affiliation{Center for Gravitational Experiment, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China} \affiliation{Theoretical astrophysics 350-17, California Institute of Technology, Pasadena, California 91125, USA}
\author{Vincent Boyer} \affiliation{School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom }
\author{Haixing Miao} \affiliation{School of Physics and Astronomy and Institute of Gravitational Wave Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom}
\begin{abstract} Broadband quantum noise reduction can be achieved in gravitational wave detectors by injecting frequency-dependent squeezed light into the dark port of the interferometer. This frequency-dependent squeezing can be generated by combining squeezed light with external filter cavities. However, in future long baseline interferometers (LBIs), the filter cavity required to achieve the broadband squeezing has a low bandwidth -- necessitating a very long cavity to mitigate the issue from optical loss. It has been shown recently that by taking advantage of Einstein-Podolsky-Rosen (EPR) entanglement in the squeezed light source, the interferometer can simultaneously act as a detector and a filter cavity. This is an attractive broadband squeezing scheme for LBIs because the length requirement for the filter cavity is naturally satisfied by the length of the interferometer arms. In this paper we present a systematic way of finding the working points for this broadband squeezing scheme in LBIs. We also show that in LBIs, the EPR scheme achieves nearly perfect ellipse rotation as compared to 4km interferometers which have appreciable error around the intermediate frequency. Finally, we show that an approximation for the opto-mechanical coupling constant in the 4km case can break down for longer baselines. These results are applicable to future detectors such as the 10km Einstein Telescope and the 40km Cosmic Explorer. \end{abstract}
\maketitle
\section{Introduction} Gravitational-wave (GW) detectors, including LIGO and VIRGO, which recently made breakthrough discoveries, are Michelson-type interferometers with km-scale arms\,\cite{FirstGW,GW2}. They are among the largest and most sensitive experiments humans have ever constructed. To push the limits of scientific discovery even further, larger, more sensitive instruments are already being planned. Two such detectors are the 10km Einstein Telescope (ET) \cite{ETDesign} and the 40km Cosmic Explorer \cite{Abbott_2017}. They differ from LIGO in many ways, including scale and configuration, but for our purposes can be treated in a very similar way mathematically.
All ground-based GW detectors are plagued by various noise sources that result from the fact that they are on Earth (e.g. seismic activity). Once these and all other classical noise sources are suppressed, the sensitivity of GW detectors is ultimately limited by the quantum nature of light. The quantized electromagnetic field is analogous to a quantum harmonic oscillator (position and momentum of a mass are replaced by the amplitude and phase quadratures of light). The uncertainty in the amplitude and phase quadratures (quantum fluctuations) limits the sensitivity of interferometric measurements.
The two primary noise sources in GW interferometers are radiation-pressure noise and photon shot noise. The former is due to the quantum fluctuations in the amplitude that cause random fluctuations in the radiation pressure force on the mirrors. The latter is due to the uncertainty in the phase of the light which manifests itself as the statistical arrival time of photons. These noise sources are modelled as entering through the dark port of the interferometer and coupling to the coherent laser light. The detector sensitivity achieved when these uncorrelated (random) fluctuations enter the interferometer is called the \textbf{Standard Quantum Limit} (SQL). It has been shown that this limit could be surpassed if a squeezed vacuum was injected into the interferometer's dark port instead \cite{CavesQMNoiseInIFO,schnabel_2017}. Depending on the squeezing angle of the injected squeezed vacuum (see Fig.\,\ref{fig:SetupAndSensitivity}), we can decrease the amplitude or phase fluctuations that limit the detector sensitivity. \begin{figure}
\caption{LBI setup and sensitivity curves for various squeezing schemes. A fixed squeezing angle only surpasses the SQL (black dotted line) over a narrow frequency band. Our broadband squeezing scheme (purple curve) is achieved by rotating the noise ellipse in a frequency-dependent way as shown below the plot. Acronyms used are: end test mirror (ETM), input test mirror (ITM), power recycling mirror (PRM), signal recycling mirror (SRM), and output mode cleaner (OMC).}
\label{fig:SetupAndSensitivity}
\end{figure} Amplitude and phase quadratures are conjugate variables (like position and momentum), thus the Heisenberg Uncertainty Principle states that the product of their uncertainties must be greater than some constant. Thus, by decreasing phase fluctuations, we suffer an increase in amplitude fluctuations. This would not be a problem if, at low gravitational wave frequencies, the mirror suspension systems did not have mechanical resonances that amplify these fluctuations and make radiation pressure noise the limiting noise source. Put simply, at low frequencies we need amplitude-squeezed vacuum injection. Once far away from these resonances (at higher frequencies), the detector is then limited by shot noise and thus phase-squeezed vacuum is needed. It has been known for some time that frequency-dependent squeezing would allow one to surpass the SQL over all frequencies \cite{KLMTV}. These proposals require additional low-loss filter cavities. It was shown recently that an alternative approach to achieving frequency-dependent squeezing without additional cavities is using EPR-entangled signal and idler beams (different frequency components in a conventional squeezed light source)\,\cite{NaturePhysicsEPR}, which has been demonstrated in proof-of-principle experiments\,\cite{Jan2019, Yap2019}. In this paper, we present a systematic way of finding the working points for this broadband squeezing scheme in LBIs. We also show that in LBIs, the EPR scheme achieves nearly perfect ellipse rotation as compared to 4km interferometers which have appreciable error. Finally, we show that an approximation for the opto-mechanical coupling constant in the 4km case can break down for longer baselines. \section{Theory} \subsection{EPR Entanglement in Squeezed-light Source} In this section, we will illustrate the EPR entanglement generated in the squeezed-light source, which consists of
a non-degenerate optical parametric amplifier with a nonlinear electric susceptibility ($\chi^{(2)}$ in our case). Such a device takes in two modes and a pump beam (energy source) and produces two amplified modes which we call the signal and idler beams. In frequency space, we can visualize the OPA taking in uncorrelated sidebands and entangling (correlating) them. In fact, any frequency modes $\omega_1$, $\omega_2$ within the squeezing bandwidth that satisfy $\omega_p = \omega_1 + \omega_2$ will be entangled with each other. In our proposed scheme \cite{EPRwithFINESSE, NaturePhysicsEPR}, we detune the pump field by an amount $\Delta$ (of order MHz) such that $\omega_p = 2\omega_0 + \Delta$, where $\omega_0$ is the interferometer carrier frequency. This creates correlated sidebands around the frequencies $\omega_0$ and $\omega_0+\Delta$ (Fig.\,\ref{fig:DetunedOPA}). The corresponding amplitude and phase quadratures defined with respect to $\omega_0$ and $\omega_0+\Delta$ are therefore entangled\,\cite{ReidEPR,EPR-CV}. \begin{figure}
\caption{Visualization of the frequency-mode entanglement in the pumped OPA in our proposed scheme.}
\label{fig:DetunedOPA}
\end{figure}
In Caves and Schumaker's two-photon formalism \cite{TwoPhotonFormalism}, the amplitude and phase quadratures are written in terms of sidebands. \begin{align}
\hat{a}_1 (\Omega) &= \frac{\hat{a}(\omega_0+\Omega)+\hat{a}^{\dagger}(\omega_0-\Omega)}{\sqrt{2}}\,,\\ \hat{a}_2 (\Omega) &= \frac{\hat{a}(\omega_0+\Omega)-\hat{a}^{\dagger}(\omega_0-\Omega)}{i\sqrt{2}}\,,\\
\hat{b}_1 (\Omega) &= \frac{\hat{b}(\omega_0+\Delta+\Omega)+\hat{b}^{\dagger}(\omega_0+\Delta-\Omega)}{\sqrt{2}}\,,\\
\hat{b}_2 (\Omega) &= \frac{\hat{b}(\omega_0+\Delta+\Omega)-\hat{b}^{\dagger}(\omega_0+\Delta-\Omega)}{i\sqrt{2}}\,. \end{align} The general quadratures for the signal and idler beams can then be written as \begin{align}
\hat{a}_{\theta} &= \hat{a}_1 \cos{\theta} + \hat{a}_2 \sin{\theta}\,, \\
\hat{b}_{\theta} &= \hat{b}_1 \cos{\theta} + \hat{b}_2 \sin{\theta}\,. \end{align} In the high squeezing regime, the fluctuations in the joint quadratures, $\hat{a}_1 - \hat{b}_1$ and $\hat{a}_2+\hat{b}_2$, are simultaneously well below the vacuum level. This is the experimental signature of the EPR entanglement. This does not violate Heisenberg's Uncertainty Principle because $[\hat{a}_1 - \hat{b}_1,\hat{a}_2+\hat{b}_2]=0$. In analogy to the original EPR paper \cite{EPR1935,ReidEPR}, $\hat{a}_{-\theta}$ is maximally correlated with $\hat{b}_{\theta}$. This correlation allows us to reduce our uncertainty in $\hat{a}_{-\theta}$ by making a measurement on $\hat{b}_{\theta}$. This key theoretical result that enables our conditional squeezing scheme has been realized experimentally recently \cite{ExperimentalBroadband}. \subsection{Interferometer as Detector and Filter} The signal and idler beams enter the dark port of the interferometer and couple to the laser that enters the bright port. The interferometer we consider has both signal and power recycling cavities to increase sensitivity as shown in Fig. \ref{fig:SetupAndSensitivity}. To the signal beam, the interferometer looks like a resonant cavity. The input-output relation for the phase quadrature of the signal beam (which will contain our GW signal) is \cite{KLMTV} \begin{equation}\label{eq:SignalIO}
\hat{A}_2 = e^{2i\beta}(\hat{a}_2 - \mathcal{K} \hat{a}_1) + \sqrt{2 \mathcal{K}} \frac{h}{h_{\text{SQL}}} e^{i\beta}\,, \end{equation} where $h_{\text{SQL}}=\sqrt{8\hbar/(m \Omega^2 L_{\rm arm}^2)}$ is the square root of the SQL, $\beta$ is a phase shift given as $\beta\equiv \arctan{\Omega/\gamma}$ with $\gamma$ being the detection bandwidth, and $\mathcal{K}$ is the optomechanical coupling constant that determines the coupling between the light and the interferometer mirrors. For current GW detectors, the signal-recycling cavity length is of the order of 10 meters and the SRM transmission is quite high (tens of percent). In this case, we can effectively view the signal-recycling cavity (SRC) formed by SRM and ITM as a compound mirror by ignoring the propagation phase $\Omega L_{\rm SRC}/c$ picked up by the sidebands, as first mentioned by Buonanno and Chen\,\cite{ScalingLaw}. When the SRC is tuned, the corresponding optomechanical coupling constant $\cal K$ is the same as given by Kimble {\it et al.}\,\cite{KLMTV}: \begin{equation}
\mathcal{K}_{\text{KLMTV}}=\frac{32 \omega_0 P_{\text{arm}}}{mL_{\rm arm}^2 \Omega^2 (\Omega^2 + \gamma^2)}\,, \end{equation} where $P_{\rm arm}$ is the arm cavity power, $m$ is the mirror mass and $\gamma$ is equal to $ c T_{\rm SRC}/(4 L_{\rm arm})$ with $T_{\rm SRC}$ being the effective power transmission of the SRC when viewed as a compound mirror.
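As a quick numerical sanity check, the KLMTV coupling constant above can be evaluated directly. The sketch below uses illustrative 4km-detector-scale numbers (these values are assumptions for the sketch, not parameters taken from this paper) and shows the characteristic fall-off of $\mathcal{K}$ with frequency that makes radiation-pressure noise dominate at low $\Omega$:

```python
import numpy as np

# Illustrative 4 km-scale parameters (assumed for this sketch, not from the text).
c = 299792458.0                    # speed of light [m/s]
omega0 = 2 * np.pi * c / 1064e-9   # carrier frequency for a 1064 nm laser [rad/s]
P_arm = 800e3                      # arm cavity power [W]
m = 40.0                           # mirror mass [kg]
L_arm = 4000.0                     # arm length [m]
gamma = 2 * np.pi * 500.0          # detection bandwidth [rad/s]

def kappa_klmtv(Omega):
    """Opto-mechanical coupling constant of Kimble et al. for a tuned SRC."""
    return 32.0 * omega0 * P_arm / (m * L_arm**2 * Omega**2 * (Omega**2 + gamma**2))

# K falls off rapidly with frequency: radiation-pressure noise dominates at low Omega.
for f in (10.0, 100.0, 1000.0):
    print(f"K({f:6.0f} Hz) = {kappa_klmtv(2 * np.pi * f):.3e}")
```

The crossover where $\mathcal{K}\approx 1$ marks the transition from the radiation-pressure-dominated to the shot-noise-dominated regime.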
For LBIs, the definition of such a coupling constant can differ from that given by Kimble {\it et al.} This is because the SRC length is longer and also the SRM transmission becomes comparable to that of the ITM in order to broaden the detection bandwidth in the resonant-sideband-extraction mode, i.e., the effective transmissivity $T_{\rm SRC}$ of the SRC approaches 1. The approximation for defining $\gamma$ applied in Ref.\,\cite{ScalingLaw}, which assumes $T_{\rm SRC}\ll 1$, starts to break down. We also need to take into account the frequency dependent propagation phase of the sidebands, which leads to the following expression for the coupling constant\,\cite{martynov2019}: \begin{equation}\label{Kappa_LBI}
\mathcal{K}_{\text{LBIs}}=\frac{2 h^2_{\text{SQL}} L_{\rm arm} \omega_0 P_{\text{arm}} \gamma_s \omega^2_s}{\hbar c [\gamma_s^2 \Omega^2 + (\Omega^2-\omega^2_s)^2]}\,. \end{equation} Here $L_{\text{arm}}$ is the interferometer arm length; $\omega_s$ is a resonant frequency that arises from the coupling between the signal recycling and arm cavities. The frequency and bandwidth for such a resonance are given by \begin{equation}
\omega_s = \frac{c T_{\text{ITM}}}{2\sqrt{L_{\text{arm}}L_{\text{SRC}}}}\,,\quad
\gamma_s = \frac{c T_{\text{SRM}}}{4 L_{\text{SRC}}}\,. \end{equation}
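These resonance parameters are easy to evaluate. The sketch below implements the two expressions exactly as written above, using representative ET-scale values (taken from the parameter table later in the paper, with $L_{\rm arm}$ and $L_{\rm SRC}$ picked from inside their allowed ranges):

```python
import numpy as np

# Representative ET-scale values (from the parameter table later in the paper;
# L_arm and L_SRC are picked from inside their allowed ranges).
c = 299792458.0
L_arm = 10000.0     # arm length [m]
L_src = 100.0       # SRC length [m]
T_itm = 0.04        # ITM power transmissivity
T_srm = 0.04        # SRM power transmissivity

# Resonance frequency and bandwidth of the coupled SRC/arm-cavity mode,
# implementing the equations exactly as written in the text.
omega_s = c * T_itm / (2.0 * np.sqrt(L_arm * L_src))
gamma_s = c * T_srm / (4.0 * L_src)

print(f"omega_s / 2pi = {omega_s / (2 * np.pi):.0f} Hz")
print(f"gamma_s / 2pi = {gamma_s / (2 * np.pi):.0f} Hz")
```

For these numbers the resonance sits near 1\,kHz, i.e. inside the detection band, which is why the propagation phase in the SRC can no longer be neglected for LBIs.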
\begin{figure}
\caption{The signal recycling interferometer can be mapped to a three-mirror cavity. The signal recycling cavity can then be mapped into a single mirror with an effective transmissivities and reflectivities \cite{ScalingLaw}. This final, two-mirror cavity is resonant for the signal beam (at $\omega_0$) but detuned for the idler beam (at $\omega_0+\Delta$), thus the idler simply experiences a frequency-dependent ellipse rotation. This allows us to use the interferometer itself as a filter cavity.
The single-trip propagation phase $\phi_{\rm SRC}$ is equal to integer number of $\pi$ for the carrier in the resonant-sideband-extraction mode, and is equal to some specific number for the idler, as explained in the text.}
\label{fig:Mapping}
\end{figure}
The idler beam, however, is far away from the carrier frequency and does not produce a noticeable radiation-pressure effect on the test mass. As such, it sees the interferometer as a simple detuned cavity as shown in Fig.\,\ref{fig:Mapping}, the same as done in Ref.\,\cite{ScalingLaw}. Consequently, the idler beam simply experiences a frequency dependent ellipse rotation. This can be seen in the idler input-output relation, which is given as \begin{equation}\label{eq:IdlerIO}
\hat{B}_2 = e^{i\alpha}(-\hat{b}_1 \sin{\Phi_{\text{rot}}}+\hat{b}_2 \cos{\Phi_{\text{rot}}})\,, \end{equation} where $\alpha$ is an unimportant overall phase and the rotation angle $\Phi_{\rm rot}$ is given by\,\cite{KLMTV, EPRwithFINESSE}: \begin{equation}\label{eq:ApproxRot} \Phi_{\text{rot}} = \arctan{\bigg(\frac{\Omega+\delta_f}{\gamma_f}\bigg)}+\arctan{\bigg(\frac{-\Omega+\delta_f}{\gamma_f}\bigg)}\,. \end{equation} Here $\delta_f$ and $\gamma_f$ are the effective detuning and bandwidth of the interferometer with respect to the idler beam. They are defined through \begin{align}\label{resonance} 2(\omega_{\rm idler}+\delta_f) (L_{\rm arm}/c)&+ \arg(r^{\rm idler}_{\rm SRC})= 2n\pi\,,\\
\gamma_f &\equiv c |t^{\rm idler}_{\rm SRC}|^2/ (4L_{\rm arm})\,, \label{eq:approxgammaf} \end{align} where $\omega_{\rm idler} = \omega_0 +\Delta$, $n$ is an integer, and $r^{\rm idler}_{\rm SRC}$ and $t^{\rm idler}_{\rm SRC}$ are the effective amplitude reflectivity and transmissivity of the SRC for the idler beam: \begin{align}
r_{\rm SRC}^{\rm idler}&= \sqrt{R_{\rm ITM}} +\frac{T_{\rm ITM}\sqrt{R_{\rm SRM}}}{1-\sqrt{R_{\rm ITM}R_{\rm SRM}} e^{2i\phi_{\rm SRC}^{\rm idler}}}\,,\\
t_{\rm SRC}^{\rm idler}&=\frac{\sqrt{T_{\text{SRM}}T_{\text{ITM}}}e^{i\phi^{\rm idler}_{\text{SRC}}}}{1-\sqrt{R_{\text{ITM}}R_{\text{SRM}}}e^{2i\phi^{\rm idler}_{\text{SRC}}}}\,. \label{tSRC} \end{align} The phase $\phi_{\rm SRC}^{\rm idler} = \Delta L_{\rm SRC}/c $, by assuming $\omega_0 L_{\rm SRC}/c$ is an integer multiple of $\pi$ as in the resonant-sideband-extraction mode. Note that the issue of the compound-mirror approximation for the carrier mentioned earlier does not occur for the idler beam. Because $\Delta \gg \Omega$, the sideband propagation phase inside the SRC can be ignored, and also
the effective SRC transmissivity for the idler $T_{\rm SRC}^{\rm idler} = |t^{\rm idler}_{\rm SRC}|^2$ is much smaller than 1, which makes $\gamma_f$ properly defined.
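The effective filter parameters seen by the idler can be computed directly from the SRC response above. The sketch below picks an illustrative detuning $\Delta$ in the low-MHz regime (an assumption for the sketch, not a tuned working point) and evaluates $t_{\rm SRC}^{\rm idler}$, $\gamma_f$, and the resulting rotation angle $\Phi_{\rm rot}$:

```python
import numpy as np

# Illustrative parameters (assumptions for this sketch, not a tuned working point).
c = 299792458.0
L_arm, L_src = 10000.0, 100.0
T_itm = T_srm = 0.04
R_itm, R_srm = 1.0 - T_itm, 1.0 - T_srm
Delta = 2 * np.pi * 20e6             # idler detuning [rad/s], low-MHz regime

phi_src = Delta * L_src / c          # single-trip SRC phase seen by the idler

# Effective SRC amplitude transmissivity for the idler.
t_src = (np.sqrt(T_srm * T_itm) * np.exp(1j * phi_src)
         / (1.0 - np.sqrt(R_itm * R_srm) * np.exp(2j * phi_src)))

# Effective filter bandwidth for the idler, and detuning chosen as -gamma_f.
gamma_f = c * abs(t_src) ** 2 / (4.0 * L_arm)
delta_f = -gamma_f

def phi_rot(Omega):
    """Frequency-dependent rotation angle of the idler noise ellipse."""
    return (np.arctan((Omega + delta_f) / gamma_f)
            + np.arctan((-Omega + delta_f) / gamma_f))

print(f"|t_src|^2 = {abs(t_src)**2:.2e}, gamma_f/2pi = {gamma_f/(2*np.pi):.2f} Hz")
print(f"Phi_rot at DC limit:   {phi_rot(1e-3):.3f} rad")
print(f"Phi_rot at high freq:  {phi_rot(1e9):.3f} rad")
```

The angle sweeps from $2\arctan(\delta_f/\gamma_f)$ at low frequencies to $0$ at high frequencies, which is the qualitative behavior needed to track $\arctan\mathcal{K}$.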
The rotation angle $\Phi_{\text{rot}}$ needs to be equal to $\arctan{\mathcal{K}}$ to achieve the required frequency-dependent squeezing. This usually cannot be realised exactly with a single cavity, and two cavities are required. Indeed, for the LIGO implementation of such an idea\,\cite{NaturePhysicsEPR}, the rotation in the intermediate frequency range does deviate from the ideal one by a noticeable amount. As we will see, for LBIs, the broadband operation mode can make such a deviation negligible, because the transition frequency from the radiation-pressure-noise dominant regime to the shot-noise dominant regime is much lower than the detection bandwidth; a single filter cavity is close to being sufficient and ideal\,\cite{Khalili}. The corresponding required detuning and bandwidth, given Eq.\,\eqref{Kappa_LBI} and $\omega_s\gg \Omega$, follow from a derivation similar to Refs.\,\cite{Khalili,NaturePhysicsEPR}: \begin{align}\label{eq:gammaf}
\gamma_f = \sqrt{\frac{\Omega^2 \mathcal{K}_{\rm LBIs}}{2}}\Bigg|_{\omega_s\gg \Omega} & \approx \sqrt{\frac{4 \omega_0 P_{\text{arm}}T_{\rm SRM}}{m c^2 T^2_{\text{ITM}}}}\,,\\
\delta_f &= -\gamma_f\,.
\label{eq:deltaf} \end{align} From Eq.\,(46) in \cite{KLMTV}, one can show that the sensitivity of the interferometer with imperfect rotation angle is \begin{equation}\label{eq:Sh_loss}
S_h \approx \frac{h^2_{\text{SQL}}}{2 \cosh{2r}}\bigg(\mathcal{K} + \frac{1}{\mathcal{K}}\bigg) + \frac{h^2_{\text{SQL}}}{2}\frac{\sinh^2{2r}}{\cosh{2r}}\bigg(\mathcal{K} + \frac{1}{\mathcal{K}}\bigg)\delta \Phi^2\,, \end{equation} where $r$ is the squeezing factor and $\delta \Phi = \Phi_{\text{rot}}-\arctan{\cal K}\ll 1$. The first term in Eq.\,\eqref{eq:Sh_loss} is the sensitivity when the rotation angle is realized exactly and the second term is the degradation in sensitivity due to error in the rotation angle. In the case of a 15\,dB squeezing injection as considered in \cite{NaturePhysicsEPR}, $r=1.73$, so the ratio of the correction term to the exact term is $\approx 249\, \delta \Phi^2$. So, if we want to keep the relative correction to less than 10\%, we will need the error in the rotation angle $\delta \Phi < 0.02$ rad (i.e. $249 \times (0.02\text{ rad})^2 \times 100\% = 9.96\% < 10\%$). So, as long as the proposed scheme keeps the overall error in the rotation angle to less than $0.02$ rad, we will suffer no more than a $10\%$ degradation in noise reduction. This requirement turns out to be easily satisfied in the broadband detection mode of LBIs due to the reason mentioned earlier.
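The factor of roughly 249 can be reproduced directly: the ratio of the angle-error term of the sensitivity formula above to the ideal term is $\sinh^2(2r)\,\delta\Phi^2$. A short check, assuming exactly 15\,dB of injected squeezing:

```python
import math

# 15 dB of squeezing corresponds to a noise-power reduction e^{2r} = 10^(15/10),
# i.e. r = (15/20) * ln(10).  (The r = 1.73 quoted in the text is this, rounded.)
r = (15.0 / 20.0) * math.log(10.0)

# Ratio of the angle-error term to the ideal term in the sensitivity:
# [sinh^2(2r)/cosh(2r)] / [1/cosh(2r)] = sinh^2(2r).
ratio = math.sinh(2 * r) ** 2
print(f"r = {r:.4f}, sinh^2(2r) = {ratio:.1f}")

# With delta_Phi = 0.02 rad, the relative degradation stays just below 10%.
delta_phi = 0.02
print(f"relative correction: {ratio * delta_phi**2 * 100:.2f}%")
```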
\section{Numerical Results}
In this section, we will show the systematic approach to finding the working points for implementing this idea. The procedure was outlined in Ref.\,\cite{NaturePhysicsEPR}, which showed one working point out of many for LIGO parameters. The tunable parameters are the detuning frequency of the pump $\Delta$, and the small changes of the arm length $\delta L_{\rm arm}$ and the SRC length $\delta L_{\rm SRC}$ with respect to their macroscopic values. We show how the relevant domains for the various tunable parameters in our scheme were derived. Then, we present the result of our search for solutions to the resonance condition within these bounds.
For illustration, we choose detector parameters on a scale similar to the Einstein Telescope. To highlight that the EPR squeezing scheme is not restricted to one particular set of detector design parameters, we allow the macroscopic arm length $L_{\rm arm}$ and SRC length $L_{\rm SRC}$ to vary from the nominal values outlined in the Einstein Telescope design study \cite{ETDesign}. The detector parameters are outlined in the following table.
\bgroup \def\arraystretch{1.2}
\begin{center}
\begin{tabular}{ |c|c|c| } \hline Parameter & Name & Value \\ \hline $L_{\text{arm}}$& arm cavity length &[9995, 10005]\,m \\ $L_{\text{SRC}}$& signal recycling cavity length & [100, 200]\,m\\ $m$ & mirror mass &150\,kg \\ $T_{\text{ITM}}$&ITM power transmissivity & 0.04 \\ $T_{\text{SRM}}$&SRM power transmissivity & 0.04 \\ $P_{\text{arm}}$&intra-cavity power & 3\,MW \\ $r$ & squeezing parameter & 1.73 (15\,dB)\\ \hline \end{tabular} \end{center} \egroup
To start, we equate Eq. \eqref{eq:approxgammaf} with Eq. \eqref{eq:gammaf} and solve for $\phi_{\text{SRC}}$. This is the exact phase accumulated by the idler after one round trip in the SRC, so we denote it $\phi^{\text{exact}}_{\text{SRC}}$.
Next, for the effective cavity to have a detuning frequency satisfying Eq.\,\eqref{eq:deltaf}: $\delta_f=-\gamma_f$, we tune the idler detuning $\Delta$, $\delta L_{\rm arm}$ and $\delta L_{\rm SRC}$ to find solutions to Eq.\,\eqref{resonance}. The idler detuning $\Delta$ has to be in the low MHz regime: if it were lower, it would interfere with the carrier, but if it were too high, the electronics would not work optimally. The allowable range is taken to be \begin{equation}
\frac{\Delta}{2\pi} \in [5,50] \,\text{MHz} \end{equation} Since we want to keep $\gamma_f$
fixed while we tune $\Delta$ to make the resonance condition Eq.\,\eqref{resonance} satisfied for $\delta_f = -\gamma_f$, we can only alter $\Delta$ by integer numbers $n$ of the free spectral range of the SRC, namely $\Delta = (\phi^{\text{exact}}_{\text{SRC}} + n \pi) c/L_{\rm SRC}$. The minimum allowed detuning is 5\,MHz and this will occur when $L_{\text{SRC}}$ is at its maximum, i.e. $200$\,m. This corresponds to the minimum allowed $n$. Similarly, the maximum detuning occurs for the minimum SRC length and the maximum $n$. We find the relevant values of $n$ to be \begin{equation}
n \in [7,33] \end{equation}
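The bounds on $n$ follow directly from the allowed detuning range and SRC lengths. A minimal check (neglecting the small $\phi^{\text{exact}}_{\text{SRC}}$ offset, which does not change the integer bounds here):

```python
import math

c = 299792458.0
f_min, f_max = 5e6, 50e6            # allowed Delta/(2*pi) range [Hz]
L_src_min, L_src_max = 100.0, 200.0  # allowed SRC lengths [m]

# Delta = (phi_exact + n*pi) * c / L_SRC; neglecting the small phi_exact,
# n ~ Delta * L_SRC / (pi * c) = 2 * f * L_SRC / c.
n_min = math.ceil(2 * f_min * L_src_max / c)   # smallest Delta, longest SRC
n_max = math.floor(2 * f_max * L_src_min / c)  # largest Delta, shortest SRC
print(n_min, n_max)  # -> 7 33
```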
We found that the overall rotation angle $\Phi_{\rm rot}$ is not very sensitive to changes in the SRC phase. It is acceptable to have $\phi_{\text{SRC}}$ deviate slightly from the exact value $\phi^{\text{exact}}_{\text{SRC}}$. This makes it easier for us to find solutions to our resonance condition Eq.\,\eqref{resonance}. As noted before, we must keep the error in the overall rotation angle to less than 0.02 rad. That is, we need $|\delta \Phi|=|\Phi_{\text{rot}}-\arctan{\cal K}|<0.02 \text{ rad}$ where $ \Phi_{{\text{rot}}}$ is given in Eq.\,\eqref{eq:ApproxRot}. To ensure the error in rotation angle is less than 0.02 rad over the whole positive frequency domain, we require \begin{equation}
\max\limits_{\Omega} |\Delta \phi_{\text{SRC}} \frac{d\Phi_{\text{rot}}}{d \phi_{\text{SRC}}}|<0.02 \end{equation} Using the given parameters listed in the table, one finds \begin{equation}
|\Delta \phi_{\text{SRC}}|<0.002\,. \end{equation}
To show the working points for different macroscopic arm lengths and SRC lengths, we scan the lengths with a $1$\,m step size. Additionally, we sweep $\phi^{\text{approx}}_{\text{SRC}}$ in between $[ \phi^{\text{exact}}_{\text{SRC}}-0.002,\phi^{\text{exact}}_{\text{SRC}}+0.002]$ with a step size of $0.0001$. There are 1.2 million combinations of these parameters with the given step sizes. We took advantage of the fact that each combination is independent and can thus be checked in parallel. We define working points as those requiring microscopic changes of the arm length $\delta L_{\rm arm}$ and SRC length $\delta L_{\rm SRC}$ smaller than 1\,cm. Our search resulted in 3444 working points, summarized in Fig.\,\ref{fig:WorkingPoints}. We pick one of the many working points for illustration, and the resulting sensitivity curve is given in Fig.\,\ref{fig:ApproxNoiseReduction}. The EPR scheme achieves almost the ideal frequency dependent rotation of the squeezing quadrature angle. This is a result of heavily restricting our parameter space to bound the error. \begin{figure}
\caption{Approximate sensitivity (dotted curve) plotted alongside the sensitivity when ideal ellipse rotation is achieved (red curve). For this plot, we use $L_{\rm arm} = 10003$\,m, $L_{\rm SRC} = 100$\,m, $\Delta/(2\pi) = 1.04$\,MHz and $\phi_{\rm SRC}^{\rm approx} =0.16828$.}
\label{fig:ApproxNoiseReduction}
\end{figure}
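The structure of the parameter scan can be sketched as follows. The resonance check itself is reduced to a stub predicate here (the real criterion is the resonance condition with microscopic length tweaks below 1\,cm); the point of the sketch is to show the grid and confirm the quoted $\sim$1.2 million combinations:

```python
import itertools

import numpy as np

# Scan grid, matching the step sizes quoted in the text.
L_arm_vals = np.arange(9995, 10006, 1)      # 11 arm lengths [m]
L_src_vals = np.arange(100, 201, 1)         # 101 SRC lengths [m]
phi_offsets = np.linspace(-0.002, 0.002, 41)  # 41 offsets around phi_exact
n_vals = range(7, 34)                       # 27 allowed free-spectral-range integers

def is_working_point(L_arm, L_src, dphi, n):
    """Stub for the real test: solve the resonance condition and accept the
    point if the required microscopic length tweaks are below 1 cm."""
    return False  # placeholder -- the real criterion is the resonance condition

grid = itertools.product(L_arm_vals, L_src_vals, phi_offsets, n_vals)
total = sum(1 for _ in grid)  # each combination is independent -> parallelizable
print(f"{total} combinations")  # ~1.2 million
```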
\begin{figure*}\label{fig:WorkingPoints}
\end{figure*} The step sizes were chosen with computational expense in mind, so the resolution is not particularly high. As such, Fig.\,\ref{fig:WorkingPoints} shows several ``dead zones'' as well as a couple of ``hot points''. The natural question to ask is whether these are real or a byproduct of our numerical precision. Zooming in around two such points, we produced the subfigures on the right-hand side of Fig.\,\ref{fig:WorkingPoints}. In the case of the ``dead zone'', we see that there are actually working points where there appeared to be none. This is promising, as it points to the conclusion that a working point can be found given precise fine-tuning. Similarly, we zoomed in on a ``hot point'' (top right panel of Fig.\,\ref{fig:WorkingPoints}) and interestingly we still see a line structure, with as many as 35 working points surrounded by areas that apparently have zero working points. Zooming in around these areas once again confirmed that the apparent ``dead zones'' are simply due to the numerical precision chosen. With more time or computational power (or both) one could map a relatively smooth landscape of working points for ET. In our case, our goal was simply to show that this EPR-based squeezing scheme is not very sensitive to the actual arm length and SRC length, and that we can always find some working points for a given set of parameters. \section{Conclusions} We have shown that EPR entanglement-based squeezing can be implemented in LBIs. We derived the relevant bounds on the tunable parameters to ensure that our approximate ellipse rotation scheme very nearly matches the exact rotation achievable through the use of external filter cavities. The goal of the project was to map the interferometer working points for this squeezing scheme in LBIs like ET and Cosmic Explorer.
We accomplished this at a rather low resolution of the parameter space. Zooming in around areas that had very many working points or very few showed that the landscape of working points seems to be quite smooth. In other words, if an area appears to have no working points, it is likely because the step size used to iterate through the parameter space was too coarse. This is ideal for experimental implementation of such a scheme: if we cannot fulfill the requirement given the nominal parameters, there should be another working point less than a centimeter away. As such, we conclude that EPR-based squeezing is an appealing alternative to other broadband quantum noise reduction schemes that require additional filter cavities.
\end{document}
Maximal ideal
In mathematics, more specifically in ring theory, a maximal ideal is an ideal that is maximal (with respect to set inclusion) amongst all proper ideals.[1][2] In other words, I is a maximal ideal of a ring R if there are no other ideals contained between I and R.
Maximal ideals are important because the quotients of rings by maximal ideals are simple rings, and in the special case of unital commutative rings they are also fields.
In noncommutative ring theory, a maximal right ideal is defined analogously as being a maximal element in the poset of proper right ideals, and similarly, a maximal left ideal is defined to be a maximal element of the poset of proper left ideals. Since a one-sided maximal ideal A is not necessarily two-sided, the quotient R/A is not necessarily a ring, but it is a simple module over R. If R has a unique maximal right ideal, then R is known as a local ring, and the maximal right ideal is also the unique maximal left and unique maximal two-sided ideal of the ring, and is in fact the Jacobson radical J(R).
It is possible for a ring to have a unique maximal two-sided ideal and yet lack unique maximal one-sided ideals: for example, in the ring of 2 by 2 square matrices over a field, the zero ideal is a maximal two-sided ideal, but there are many maximal right ideals.
Definition
There are other equivalent ways of expressing the definition of maximal one-sided and maximal two-sided ideals. Given a ring R and a proper ideal I of R (that is I ≠ R), I is a maximal ideal of R if any of the following equivalent conditions hold:
• There exists no other proper ideal J of R so that I ⊊ J.
• For any ideal J with I ⊆ J, either J = I or J = R.
• The quotient ring R/I is a simple ring.
There is an analogous list for one-sided ideals, for which only the right-hand versions will be given. For a right ideal A of a ring R, the following conditions are equivalent to A being a maximal right ideal of R:
• There exists no other proper right ideal B of R so that A ⊊ B.
• For any right ideal B with A ⊆ B, either B = A or B = R.
• The quotient module R/A is a simple right R-module.
Maximal right/left/two-sided ideals are the dual notion to that of minimal ideals.
Examples
• If F is a field, then the only maximal ideal is {0}.
• In the ring Z of integers, the maximal ideals are the principal ideals generated by a prime number.
• More generally, all nonzero prime ideals are maximal in a principal ideal domain.
• The ideal $(2,x)$ is a maximal ideal in ring $\mathbb {Z} [x]$. Generally, the maximal ideals of $\mathbb {Z} [x]$ are of the form $(p,f(x))$ where $p$ is a prime number and $f(x)$ is a polynomial in $\mathbb {Z} [x]$ which is irreducible modulo $p$.
• Every prime ideal is a maximal ideal in a Boolean ring, i.e., a ring consisting of only idempotent elements. In fact, every prime ideal is maximal in a commutative ring $R$ whenever there exists an integer $n>1$ such that $x^{n}=x$ for any $x\in R$.
• The maximal ideals of the polynomial ring $\mathbb {C} [x]$ are principal ideals generated by $x-c$ for some $c\in \mathbb {C} $.
• More generally, the maximal ideals of the polynomial ring K[x1, ..., xn] over an algebraically closed field K are the ideals of the form (x1 − a1, ..., xn − an). This result is known as the weak Nullstellensatz.
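The first examples above can be checked computationally. In a commutative ring with unity, an ideal $m$ is maximal exactly when $R/m$ is a field; a brute-force sketch for $R = \mathbb{Z}$:

```python
def quotient_is_field(n):
    """Check whether Z/nZ is a field: every nonzero residue class must have a
    multiplicative inverse modulo n."""
    return n > 1 and all(
        any(a * b % n == 1 for b in range(1, n)) for a in range(1, n)
    )

# (n) is a maximal ideal of Z exactly when Z/nZ is a field, i.e. n is prime.
maximal = [n for n in range(2, 20) if quotient_is_field(n)]
print(maximal)  # -> [2, 3, 5, 7, 11, 13, 17, 19]
```

For composite n = ab, the class of a is a nonzero zero divisor and so has no inverse, which is why only the primes survive the check.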
Properties
• An important ideal of the ring called the Jacobson radical can be defined using maximal right (or maximal left) ideals.
• If R is a unital commutative ring with an ideal m, then k = R/m is a field if and only if m is a maximal ideal. In that case, R/m is known as the residue field. This fact can fail in non-unital rings. For example, $4\mathbb {Z} $ is a maximal ideal in $2\mathbb {Z} $, but $2\mathbb {Z} /4\mathbb {Z} $ is not a field.
• If L is a maximal left ideal, then R/L is a simple left R-module. Conversely in rings with unity, any simple left R-module arises this way. Incidentally this shows that a collection of representatives of simple left R-modules is actually a set since it can be put into correspondence with part of the set of maximal left ideals of R.
• Krull's theorem (1929): Every nonzero unital ring has a maximal ideal. The result is also true if "ideal" is replaced with "right ideal" or "left ideal". More generally, it is true that every nonzero finitely generated module has a maximal submodule. Suppose I is an ideal which is not R (respectively, A is a right ideal which is not R). Then R/I is a ring with unity (respectively, R/A is a finitely generated module), and so the above theorems can be applied to the quotient to conclude that there is a maximal ideal (respectively, maximal right ideal) of R containing I (respectively, A).
• Krull's theorem can fail for rings without unity. A radical ring, i.e. a ring in which the Jacobson radical is the entire ring, has no simple modules and hence has no maximal right or left ideals. See regular ideals for possible ways to circumvent this problem.
• In a commutative ring with unity, every maximal ideal is a prime ideal. The converse is not always true: for example, in any nonfield integral domain the zero ideal is a prime ideal which is not maximal. Commutative rings in which prime ideals are maximal are known as zero-dimensional rings, where the dimension used is the Krull dimension.
• A maximal ideal of a noncommutative ring might not be prime in the commutative sense. For example, let $M_{n\times n}(\mathbb {Z} )$ be the ring of all $n\times n$ matrices over $\mathbb {Z} $. This ring has a maximal ideal $M_{n\times n}(p\mathbb {Z} )$ for any prime $p$, but this is not a prime ideal since (in the case $n=2$) $A={\text{diag}}(1,p)$ and $B={\text{diag}}(p,1)$ are not in $M_{n\times n}(p\mathbb {Z} )$, but $AB=pI_{2}\in M_{n\times n}(p\mathbb {Z} )$. However, maximal ideals of noncommutative rings are prime in the generalized sense below.
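The non-unital counterexample above ($4\mathbb{Z}$ maximal in $2\mathbb{Z}$, yet $2\mathbb{Z}/4\mathbb{Z}$ not a field) is small enough to verify exhaustively: the quotient has only two cosets, represented by 0 and 2, and every product vanishes.

```python
# The quotient 2Z/4Z has two cosets, represented by 0 and 2, with arithmetic mod 4.
elems = [0, 2]
products = {(a, b): (a * b) % 4 for a in elems for b in elems}
print(products)

# Every product is 0, so there is no multiplicative identity -- in particular
# the nonzero element 2 has no inverse, and 2Z/4Z cannot be a field.
assert all(v == 0 for v in products.values())
```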
Generalization
For an R-module A, a maximal submodule M of A is a submodule M ≠ A satisfying the property that for any other submodule N, M ⊆ N ⊆ A implies N = M or N = A. Equivalently, M is a maximal submodule if and only if the quotient module A/M is a simple module. The maximal right ideals of a ring R are exactly the maximal submodules of the module RR.
Unlike rings with unity, a nonzero module does not necessarily have maximal submodules. However, as noted above, finitely generated nonzero modules have maximal submodules, and also projective modules have maximal submodules.
As with rings, one can define the radical of a module using maximal submodules. Furthermore, maximal ideals can be generalized by defining a maximal sub-bimodule M of a bimodule B to be a proper sub-bimodule of B which is contained in no other proper sub-bimodule of B. The maximal ideals of R are then exactly the maximal sub-bimodules of the bimodule RRR.
See also
• Prime ideal
References
1. Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
2. Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.
• Anderson, Frank W.; Fuller, Kent R. (1992), Rings and categories of modules, Graduate Texts in Mathematics, vol. 13 (2 ed.), New York: Springer-Verlag, pp. x+376, doi:10.1007/978-1-4612-4418-9, ISBN 0-387-97845-3, MR 1245487
• Lam, T. Y. (2001), A first course in noncommutative rings, Graduate Texts in Mathematics, vol. 131 (2 ed.), New York: Springer-Verlag, pp. xx+385, doi:10.1007/978-1-4419-8616-0, ISBN 0-387-95183-0, MR 1838439
Hermann Kinkelin
Hermann Kinkelin (11 November 1832 – 1 January 1913)[1] was a Swiss mathematician and politician.
Hermann Kinkelin
Born: 11 November 1832, Bern
Died: 1 January 1913 (aged 80), Basel
Nationality: Swiss
Known for: Glaisher–Kinkelin constant
Scientific career
Fields: Mathematics
Institutions: University of Basel; Swiss Statistical Society SSS; Statistical-economic society
Life
His family came from Lindau on Lake Constance. He studied at the Universities of Zurich, Lausanne, and Munich. In 1865 he became professor of mathematics at the University of Basel, where, until his retirement in 1908, the full burden of teaching mathematics was his responsibility. In 1867 he was naturalized in Basel. He was also a statistician: he founded the Swiss Statistical Society and the Statistical-economic society in Basel and led the 1870 and 1880 Federal censuses in Basel.
Kinkelin's works dealt with the gamma function, infinite series, and axonometric projection in solid geometry. Kinkelin produced more than 60 publications in actuarial mathematics and statistics. He was a founder of the Basel "mortality and old-age fund" (later "Patria, Swiss Mutual Life Insurance Company") and the Swiss Statistical Society, of which he was a member during 1877–86.
Hermann Kinkelin died in Basel on 1 January 1913.[2][3]
Publications
• Investigation into the formula $\scriptstyle nF(nx)=f(x)+f(x+{\frac {1}{n}})+f(x+{\frac {2}{n}})+\ldots f(x+{\frac {n-1}{n}}).$ Archiv der Mathematik und Physik 22, 1854, pp. 189–224 (Google Books, dito)
• The fundamental equations of the Γ(x) function, Mitteilungen der Naturforschenden Gesellschaft in Bern 385 und 386, 1857, pp. 1–11 (Internet-Archiv, dito)
• On some infinite series, Mitteilungen der Naturforschenden Gesellschaft in Bern 419 und 420, 1858, pp. 89–104 ( Internet Archive)
• On a transcendental function related to the gamma function and its application to the integral calculus, Journal für die reine und angewandte Mathematik 57, 1860, pp. 122–138 (GDZ)
• The oblique axonometric projection, Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich 6, 1861, pp. 358–367 (Google Books)
• On the theory of prismatoids, Archiv der Mathematik und Physik 39, 1862, pp. 181–186 (Google Books, dito, dito)
• Proof of three sibling expressions of the triangle, Archiv der Mathematik und Physik 39, 1862, pp. 186–188 (Google Books, dito, dito)
• New proof of the existence of complex roots of an algebraic equation, Mathematische Annalen 1, 1869, pp. 502–506 (Google Books, GDZ, Jahrbuch-Rezension)
• The calculation of the Christian Easter, Zeitschrift für Mathematik und Physik 15, 1870, pp. 217–228 (Internet-Archiv)
• Lecture in Die Basler Mathematiker Daniel Bernoulli und Leonhard Euler, Verhandlungen der Naturforschenden Gesellschaft in Basel 7 (Anhang), 1884, pp. 51–71 (Internet-Archiv)
• Constructions of the centers of curvature of conics, Zeitschrift für Mathematik und Physik 40, 1895, pp. 58–59 (Internet-Archiv, Jahrbuch-Rezension)
• About the gamma function, Verhandlungen der Naturforschenden Gesellschaft in Basel 16, 1903, pp. 309–328 (Internet-Archiv, dito)
Monographs
• General theory of harmonic series with applications to number theory, Schweighauser, Basel 1862 (Google Books)
• Short notice of the metric weights and measures, 1876; Nachdruck: Andreas Mächler, Riehen 2006, ISBN 3-905837-02-1
References
• Johann Jakob Burckhardt: Kinkelin. Hermann. In: Neue Deutsche Biographie (NDB). Band 11, Duncker & Humblot, Berlin 1977, pp. 625 (Digitalisat).
• H. Fäh, in: Verhh. d. Schweizer. Naturforschenden Ges., 96. J.verslg., 1913 (P); G. Schärtlin, Erinnerungen an H. K., ebd.; R. Flatt, Verz. d. gedr. Veröff. v. H. K., ebd.; ders., in: Basler Jb. 1914 (P); HBLS (P).
• Hermann Wichers: Kinkelin, Hermann in Historisches Lexikon der Schweiz
1. Dauben, Joseph W.; Scriba, Christoph J. (23 September 2002). Writing the History of Mathematics – Its Historical Development. p. 101. ISBN 9783764361679. Retrieved 29 December 2012.
2. "Baselstadt". Zürcherische Freitagszeitung (in German). 10 January 1913. p. 2. Retrieved 5 April 2020 – via NewspaperArchive.
3. "Baselstadt". Oberländer Tagblatt (in German). 3 January 1913. p. 1. Retrieved 5 April 2020 – via NewspaperArchive.
• Hermann Kinkelin in History of Social Security in Switzerland
| Wikipedia |
Bots influence opinion dynamics without direct human-bot interaction: the mediating role of recommender systems
N. Pescetelli1,2,
D. Barkoczi2,3 &
M. Cebrian2,4,5
Applied Network Science volume 7, Article number: 46 (2022) Cite this article
Bots' ability to influence public discourse is difficult to estimate. Recent studies found that hyperpartisan bots are unlikely to influence public opinion because bots often interact with already highly polarized users. However, previous studies focused on direct human-bot interactions (e.g., retweets, at-mentions, and likes). The present study suggests that political bots, zealots, and trolls may indirectly affect people's views via the mediating role of a platform's content recommendation system, thus influencing opinions without direct human-bot interaction. Using an agent-based opinion dynamics simulation, we isolated the effect of a single bot—representing 1% of nodes in a network—on the opinion of rational Bayesian agents when a simple recommendation system mediates the agents' content consumption. We compare this experimental condition with an identical baseline condition where such a bot is absent. Across conditions, we use the same random seed and a psychologically realistic Bayesian opinion update rule so that conditions remain identical except for the bot presence. Results show that, even with limited direct interactions, the mere presence of the bot is sufficient to shift the population's average opinion. Virtually all nodes—not only nodes directly interacting with the bot—shifted towards more extreme opinions. Furthermore, the bot's mere presence significantly affected the internal representation of the recommender system. Overall, these findings offer a proof of concept that bots and hyperpartisan accounts can influence population opinions not only by directly interacting with humans but also by secondary effects, such as shifting platforms' recommendation engines' internal representations. The mediating role of recommender systems creates indirect causal pathways of algorithmic opinion manipulation.
Bots and recommender systems
Bots are becoming pervasive in our social media. From Twitter to Reddit, bots can interact with humans without detection, influencing opinions, and creating artificial narratives (Hurtado et al. 2019; Yanardag et al. 2021). This study uses an agent-based simulation to explore the interaction between bots and content recommendation algorithms.
Recommender systems such as collaborative filtering can provide hyper-personalized content recommendations. However, they partly rely on average population characteristics and shared features between nodes to produce their recommendations. We test the hypothesis that recommender systems that mediate information access can also relay bot influence. We hypothesize that bots can affect a population's mean opinion not just through direct interactions with other nodes but by skewing the training sample fed to the recommender system during training (i.e., indirect interactions). Thus, a bot may influence content recommendation at the population level by subtly affecting how a centralized recommender system represents a population's preferences and patterns of content engagement. This indirect social influence may be more pervasive than direct social influence because it occurs without direct bot-human interaction.
The potential of algorithmic agents, commonly called bots, to influence public opinion has recently been under closer scrutiny. Special attention has been given to social and political bots that operate under human disguise on social media. Early studies documented the potential effects of bots on skewing opinion distributions on social media users and voters (Bessi and Ferrara 2016). Bots can inflate the perception of the popularity of particular views (Lerman et al. 2016), polarize opinions around divisive issues (Broniatowski et al. 2018; Stewart et al. 2018; Carley 2020), contribute to the spread of misinformation, conspiracy theories or hyper-partisan content (Paul and Matthews 2016; Shao et al. 2018), and promote harmful or inflammatory content (Stella et al. 2018). These generalized concerns have mobilized platforms to improve algorithmic agents' automatic detection and removal (Howard 2018; Ferrara et al. 2016; Ledford 2020; Beskow and Carley 2018). Bot influence often acts on public opinion in concert with human trolls, fake accounts, pink-slime newspapers, and "fake news" (Hurtado et al. 2019; Linvill and Warren 2018; Aral and Eckles 2019; Tucker et al. 2018). Researchers have started to untangle this complex web of interactions. The content spread by this class of agents spreads faster due to its emotional or sensationalist features (Vosoughi et al. 2018; Lazer et al. 2018). While often studied together, misinformation and extreme views can be orthogonal dimensions. Empirical evidence shows that misinformation and extreme online views tend to be more engaging than accurate or moderate content (Edelson et al. 2021). Recommender systems seem critical in promoting extreme content over moderate ones (Whittaker et al. 2021). Partisan content tends to remain confined in insulated clusters of users, thus reducing the opportunity to encounter cross-cutting content (Bakshy et al. 2015).
Although algorithmic agents represent only a small part of general media manipulation tactics (Kakutani 2019; Sunstein 2018), they pose a problem for online platforms. Their ease of implementation, low cost, and scalability hurt the overall media environment. In this paper, we estimate the lower bound of algorithmic influence by focusing on the effect of a single algorithmic agent on a population. Our findings can be generalized to other 'pre-programmed' or 'stubborn' agents of media manipulation, such as partisan accounts and human trolls (Hegselmann and Krause 2015). Pre-programmed agents share several features, such as pre-set opinions and pushing political agendas while being scarcely influenced by others' beliefs.
The effect of bots and troll factories on public opinion is hard to estimate. Several researchers have recently attempted to measure hyper-partisan content's effect by looking at social media data from the 2016 USA presidential election (Guess et al. 2019; Allen et al. 2020). These studies suggest that sharing and consuming fake or hyper-partisan content was rare relative to the total volume of content consumed. One study (Bail et al. 2020) attempted to measure the effect of exposure to Russia's Internet Research Agency (IRA) content on people's opinions. The authors found that interactions with highly partisan accounts were most common among respondents with already strong ideological alignment with those opinions. The researchers interpreted these findings as suggesting that hyper-partisan accounts might fail to change beliefs because they primarily interact with already highly polarized individuals. This phenomenon, also named "minimal effect", is not specific to social media platforms but can also be found in offline political advertisement and canvassing practices (Zaller 1992; Endres and Panagopoulos 2019; Kalla and Broockman 2018). In other words, attempts at changing political attitudes tend to be less effective than one might imagine. A recent study found that human accounts are significantly more visible during political events than unverified accounts (González-Bailón and De Domenico 2021). This finding casts doubt on the centrality and impact of bot activity on political mobilizations' coverage (Ferreira et al. 2021). Overall, these findings show that, notwithstanding the well-documented spread of bots and troll factories on social media, their effect on influencing opinions may be limited.
The studies reviewed above were primarily concerned with direct influence among agents, namely direct interactions between algorithmic and human accounts (e.g., likes, retweets, and comments). Although common in many social settings, we argue that direct social influence does not consider the complexity of the digital influence landscape. Direct social influence has long been studied outside the domain of social media platforms, e.g., opinion change in social psychology (Yaniv 2004; Bonaccio and Dalal 2006; Sherif et al. 1965; Festinger and Carlsmith 1959; Rader et al. 2017) and in opinion dynamics in sociology (Flache et al. 2017; Deffuant et al. 2000; DeGroot 1974; Friedkin and Johnsen 1990). Direct influence assumes the unfiltered exposure to another person's belief (e.g., an advisor) changes a privately held belief. However, this simple social influence model may be outdated in the modern digital environment.
Although direct interactions on most online platforms do occur (e.g., friends exchanging messages and users tweeting their views), information exchange is also mediated by algorithmic procedures that sort, rank, and disseminate or throttle information. The algorithmic ranking of content can affect exposure to specific views (Bakshy et al. 2015). Recommender systems can learn population averages and trends, forming accurate representations of individual preferences from collective news consumption patterns (Das et al. 2007; Analytis et al. 2020). One crucial difference between traditional social influence and machine-mediated social influence is that in the latter case, single users can influence not only other people's beliefs but the "belief" of the content curation algorithm (i.e., its internal model). This paper investigates a previously unexplored indirect causal pathway connecting social bots and individuals via a simple recommendation algorithm (Fig. 1a). We test the hypothesis that algorithmic agents, like bots and troll factories, can disproportionately influence the entire population by biasing the training sample of recommender algorithms predicting user engagement and user opinions (Fig. 1b). This disproportionate influence may be facilitated by their resistance to persuasion and greater content engagement and sharing activity (Yildiz et al. 2013; Hunter and Zaman 2018). Affecting a recommender system's internal representation would be a more effective strategy, as it influences a network's nodes in parallel rather than serially.
The indirect influence of bots on social information networks. (a) Representation of opinion dynamics network mediated by a content recommender system (grey box on top). Bot and Human agents (circles) consume and share content. A bot agent can influence human opinions via direct interaction with human agents (e.g., retweets, at-mentions, likes, and comments) or indirectly via affecting the internal representation of the content recommendation algorithm. (b) Schematic representation of the effect of bot presence on the internal representation learned by a simple recommender system trained to predict a user's engagement with various types of content. Including the bot behavior in the training set skews the model to think that engagement with extreme content is more likely than it would be without the bot presence. (c) Agents in the simulation were modeled to include a true private opinion and an expressed public opinion. Agents were presented with one of their neighbors' public opinions on every round based on the recommender's predicted content engagement. Then the agents decided whether to engage with this content or not according to their engagement function (Eq. 3). Opinion change took place only if the agent decided to engage with the recommended piece of content
We call this type of influence machine-mediated indirect influence, as opposed to indirect influence occurring via intermediary nodes (a bot may directly influence one human but indirectly influence all the humans to whom the first human is connected). Recent research in opinion dynamics has already shown the importance of weak ties and the indirect influence of bots on the rest of the network (Keijzer and Mäs 2021; Aldayel and Magdy 2022). Here, however, we are especially interested in the influence of social bots on network opinion dynamics when platform-wide algorithmic content recommendation mediates information sharing.
A cognitive model of Bayesian opinion update
Several opinion dynamics models represent belief updates as linear combinations of opinions, such as weighted averages (Friedkin and Johnsen 2011; DeGroot 1974). Linear models, however, fail to capture the non-linear dynamics of belief escalation often observed in online and lab settings (Bail et al. 2018; Pescetelli and Yeung 2020b). Models that try to account for these effects—e.g., similarity bias and repulsive influence (Flache et al. 2017)—often use parameters that are difficult to match with the well-known cognitive processes underlying opinion change (Resulaj et al. 2009; Fleming et al. 2018; Yaniv 2004).
Opinion change has been the focus of an active investigation in cognitive neuroscience and social psychology (Bonaccio and Dalal 2006). This research shows that people update their opinions based on subjective estimates of uncertainty in their beliefs: more confident opinions are more influential in group settings (Price and Stone 2004; Penrod and Cutler 1995; Sniezek and Van Swol 2001), and confident individuals show smaller opinions shifts (Soll and Mannes 2011; Yaniv 2004; Becker et al. 2017). Opinion dynamics models have used confidence to model the susceptibility to persuasion—or vice versa the influence of an opinion (Hegselmann and Krause 2015). However, while this literature models confidence as a free parameter, we build on recent theoretical and empirical work on the neurocognitive bases of confidence (Ma et al. 2006; Fleming et al. 2018; Fleming and Daw 2017). According to this framework, confidence behaves and is mathematically described as a probability estimate. The probabilistic framework has two direct benefits. First, it grounds opinion dynamics models in cognitive psychology and empirical behavioral findings of decisions, beliefs, and changes of mind. Second, it allows for modeling a wide range of opinion dynamics (e.g., belief escalation, risky shift, polarization, convergence to consensus, bias assimilation, similarity bias) within the well-understood mathematical framework of probability (Hahn and Oaksford 2006, 2007).
In this paper, we use a binary choice (0, 1), where opinion and confidence are represented as the sign (opinion) and the magnitude (confidence) of the difference from 0.5, respectively. Thus, zero represents extreme confidence in one opinion, 0.5 represents a moderate or uncertain opinion, and 1 represents extreme confidence in the opposite opinion. We model opinion change using a Bayesian updating rule drawn from experimental psychology (Pescetelli and Yeung 2020a, b; Harris et al. 2016; Pescetelli et al. 2016). The Bayesian update offers a natural way to consider all aspects of beliefs, including opinion direction, belief conviction, and resistance to changes of mind or new information (Sun and Müller 2013; Hegselmann and Krause 2015). This opinion update function produces non-linear dynamics mirrored by belief updates in laboratory experiments (Pescetelli and Yeung 2020b; Pescetelli et al. 2016). Agreeing people tend to reinforce each other's beliefs and move to more confident positions. In comparison, people who disagree tend to converge to more uncertain positions (see (Bail et al. 2018) for an exception). These non-linear dynamics—often called biased assimilation in opinion dynamics—naturally emerge when using Bayes' theorem to update opinions.
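The probabilistic encoding described above can be sketched in a few lines (a Python illustration of ours; the helper names `direction` and `confidence` do not come from the paper):

```python
# Decompose a probabilistic opinion p in [0, 1] into a binary direction and
# a confidence level, as described in the text: 0.5 is maximally uncertain,
# 0 and 1 are maximally confident opinions in opposite directions.
def direction(p: float) -> int:
    """Which of the two options the opinion favours (0 or 1)."""
    return 1 if p >= 0.5 else 0

def confidence(p: float) -> float:
    """Distance from the uncertain midpoint 0.5, rescaled to [0, 1]."""
    return abs(p - 0.5) * 2

# p = 0.05 and p = 0.95 are equally confident opinions in opposite directions.
```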
We created two identical, fully connected networks of 100 agents to test our hypothesis. The two networks differed only in whether the bot was present or absent. We initialized the two simulations using the same random seed, which allowed us to directly test the counterfactual of introducing a single bot in the network while holding all other conditions constant. The bot could influence other users via the recommendation algorithm (Fig. 1a). Contrary to previous studies (Friedkin and Johnsen 1990; DeGroot 1974), we distinguish between internally held beliefs and externally observable behavior. We assume that observable behavior represents a noisy reading of true internal beliefs. This assumption captures the fact that people on several online platforms, such as fora and social media, can form beliefs and change opinions simply by consuming content and never posting or sharing their own (Lazer 2020; Muller 2012). One does not need to tweet about climate change to form an opinion on climate change. The distinction between internally held and publicly displayed beliefs allows us to train the recommender algorithm with externally observable behavior. The recommender algorithm does not make the unrealistic assumption that it can access a user's private opinions. We call 'engagement' all externally observable behaviors such as tweets, likes, and reactions. Thus, the recommender algorithm and agents must infer other agents' underlying opinions from engagement behaviors.
Across a series of simulations, we quantify the effect of adding a single bot to a network of fully connected agents. We show that the bot can influence human agents even though few direct interactions exist between human agents and the bot. We conclude that in an information system where algorithmic models control who sees what, bots and hyper-partisan agents can influence the entire users' population by influencing the internal representation learned by the recommender algorithm. In other words, the recommender belief might be as crucial as people's beliefs in determining the outcome of network opinion dynamics. We discuss these findings in light of the contemporary debate on social media regulation.
We simulate a simplified social network model where a recommender system learns and presents a personalized content feed to agents in the network. This feed contains the expressed opinions of other agents in the network. Each agent can observe and interact with other agents' opinions, updating and expressing their own opinions in turn. In two separate but otherwise identical conditions, we manipulate whether a single bot is part of the pool of agents that the recommender system draws upon to create the feeds. We study whether this bot can infiltrate the feed controlled by the recommender system by influencing the statistical relationships it learns.
Simulation procedure
All simulations were run using R version 4.1.2. We simulate N = 100 fully connected agents. We run the model for 100 steps; this limit was chosen because pilot runs converged long before this point. We run 100 replications per condition.
Each agent is represented by a private opinion in the range [0.01, 0.99] drawn from a truncated Normal distribution.
$$T_{i} = tN(0.5,\;0.2)$$
and by an expressed opinion representing a noisy observation of their true opinions:
$$E_{i} = T_{i} + N(0,\;0.1)$$
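A minimal Python sketch of this initialization (the paper's implementation is in R; the redraw-based truncation and the clipping of expressed opinions to [0, 1] are our assumptions):

```python
# Initialize agents: private opinions from a Normal(0.5, 0.2) truncated to
# [0.01, 0.99] (Eq. 1), expressed opinions as a noisy reading of the private
# ones with N(0, 0.1) observation noise (Eq. 2).
import numpy as np

def init_agents(n: int, rng: np.random.Generator,
                lo: float = 0.01, hi: float = 0.99):
    true_op = rng.normal(0.5, 0.2, size=n)
    # approximate truncation by redrawing out-of-range samples
    bad = (true_op < lo) | (true_op > hi)
    while bad.any():
        true_op[bad] = rng.normal(0.5, 0.2, size=bad.sum())
        bad = (true_op < lo) | (true_op > hi)
    # clipping of expressed opinions to [0, 1] is our assumption
    expressed = np.clip(true_op + rng.normal(0.0, 0.1, size=n), 0.0, 1.0)
    return true_op, expressed

rng = np.random.default_rng(42)
T, E = init_agents(100, rng)
```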
On each time step, agents go through a two-step process:
First, agents decide whether or not to engage with content in their feed (see below). Content is the expressed opinions of other agents ranked by the recommender system for each agent, based on the model's predicted engagement for a given piece of content. Agents decide whether to engage with the content based on two well-documented biases, namely the bias to engage with similar content (similarity bias or homophily bias) (Mäs and Flache 2013; Dandekar et al. 2013) and the bias to engage with extreme content and confident opinions (Penrod and Cutler 1995; Price and Stone 2004; Hegselmann and Krause 2015; Edelson et al. 2021; Whittaker et al. 2021). An engagement function is defined as:
$$P(engage_{j} )^{t + 1} = \alpha \left( {\left| {E_{j}^{t} - T_{i}^{t} } \right|} \right) + 2(1 - \alpha ) \times \left| {E_{j}^{t} - 0.5} \right|$$
where E is the expressed opinion of another agent j. We represent engagement as a binary decision to engage or not engage, drawn from a binomial distribution with probability P(engage). The right-hand side of the equation represents a weighted sum of the similarity bias and the extremity bias. The importance of each bias is controlled by the weight parameter alpha, set to 0.2. We explore in Supplementary Material various values of alpha (Additional file 1: Fig. S2). Opinion similarity is represented by the absolute distance between E and T (the first term), whereas the extremity bias is represented by the absolute distance between E and the midpoint of 0.5 (the second term). The engagement function in Eq. 3 makes it more likely that agents engage with content that is (a) similar to their initially held opinion and (b) distant from the moderate midpoint of 0.5. This behavior represents people's online tendency to engage with shocking or counter-intuitive content more than moderate content (Vosoughi et al. 2018; Lazer et al. 2018; Edelson et al. 2021).
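Eq. 3 and the binary engagement draw can be sketched as follows (a Python re-implementation of ours; the clip to [0, 1] is a safeguard we add, as the paper does not state how out-of-range probabilities would be handled):

```python
# Engagement probability (Eq. 3): a weighted sum of a similarity term and an
# extremity term, with alpha = 0.2 as in the paper.
import numpy as np

ALPHA = 0.2

def p_engage(expressed_j: float, true_i: float, alpha: float = ALPHA) -> float:
    similarity_term = abs(expressed_j - true_i)
    extremity_term = 2 * (1 - alpha) * abs(expressed_j - 0.5)
    # clip to [0, 1] so the value is a valid probability (our safeguard)
    return float(np.clip(alpha * similarity_term + extremity_term, 0.0, 1.0))

def engage(expressed_j: float, true_i: float, rng) -> bool:
    # binary engagement decision: a Bernoulli draw with probability P(engage)
    return rng.random() < p_engage(expressed_j, true_i)
```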
Opinion update
If agents decide to engage, they update their own opinions using a Bayesian opinion update function:
$$T_{i}^{t + 1} = \frac{{T_{i}^{t} O_{j}^{t} }}{{(T_{i}^{t} O_{j}^{t} ) + (1 - T_{i}^{t} )(1 - O_{j}^{t} )}}$$
where $O_{j}$ is the expressed opinion of another agent j, discounted by a trust factor θ between 0 (no trust at all) and 1 (complete trust):
$$O_{j}^{t} = \theta (E_{j}^{t} - 0.5) + 0.5$$
The addition of a discount factor helps stabilize the model: it prevents agents who engage with distant opinions from making unrealistically large shifts in the opinion space. We set θ = 0.2.
If agents decide not to engage, they keep their opinion from the previous timestep, time t.
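Eqs. 4-5 translate directly into code (a Python sketch; the original implementation is in R):

```python
# Bayesian opinion update (Eqs. 4-5): the observed expressed opinion E_j is
# first shrunk towards the uncertain midpoint 0.5 by the trust factor theta,
# then combined with the private opinion T_i via Bayes' rule.
THETA = 0.2

def bayes_update(true_i: float, expressed_j: float,
                 theta: float = THETA) -> float:
    o_j = theta * (expressed_j - 0.5) + 0.5          # discounted opinion (Eq. 5)
    num = true_i * o_j                               # Bayes' rule (Eq. 4)
    return num / (num + (1 - true_i) * (1 - o_j))

# Agreement reinforces: if both opinions are above 0.5, the update moves the
# private opinion further above 0.5 (biased assimilation); disagreement pulls
# the opinion back towards uncertainty.
```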
Each agent is presented with a feed consisting of the expressed opinion of another agent among their neighbors. The recommended piece of content (the feed) is produced by a simple logistic recommender system; we chose this simple model to provide a minimal proof of concept of our hypothesis. Based on past observations, the feed aims to provide the content that the agent is most likely to engage with (see Engagement above). To achieve this, we train a simple logistic regression using all agents' binary engagement history as a dependent variable and the absolute distance between the agent's public opinion at time t-1 and the opinion they observed in their feed as the independent variable.
$$H \sim L(D)$$
where L is a logistic regression model, H is the history of binary engagement events (0 = did not engage; 1 = did engage), and D is the absolute difference between an agent and its neighbors' public opinions. In other words, the model aims to learn the agents' engagement function by observing their prior engagement history and the content they observed in their feeds. To provide sufficient training data for the recommender system, during the simulation's first ten steps we present content in the feeds at random. The logistic regression was implemented using the glm function in R version 4.1.2.
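The recommender step can be illustrated with a hand-rolled logistic fit (a Python sketch of ours rather than the paper's R glm call; the intercept and slope used to generate the synthetic training data below are purely illustrative, not values from the paper):

```python
# Fit H ~ L(D) (Eq. 6): logistic regression of binary engagement history H
# on opinion distance D, by gradient ascent on the Bernoulli log-likelihood.
import numpy as np

def fit_logistic(d, h, lr=0.5, steps=5000):
    """Return (intercept, slope) maximising the log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))
        g0, g1 = np.mean(h - p), np.mean((h - p) * d)  # score equations
        b0, b1 = b0 + lr * g0, b1 + lr * g1
    return b0, b1

rng = np.random.default_rng(0)
d = rng.uniform(0.0, 1.0, size=2000)                # opinion distances
p_true = 1.0 / (1.0 + np.exp(-(1.0 - 3.0 * d)))     # illustrative ground truth
h = (rng.random(2000) < p_true).astype(float)       # engagement history
b0, b1 = fit_logistic(d, h)

# The feed then ranks candidate content by predicted engagement probability:
def predicted_engagement(distance, b0=b0, b1=b1):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * distance)))
```

With the illustrative ground truth above, the fit recovers a positive intercept and a negative distance slope, so closer opinions get ranked higher in the feed.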
The bot is represented as a stubborn agent that does not change its opinion but sticks to the same opinion throughout the simulation (Stewart et al. 2019; Karan et al. 2018; Yildiz et al. 2013). In different conditions, we manipulate the degree to which this opinion is extreme (i.e., the distance from the mean opinion of the agents).
We initialize the simulation with the following parameters: N(0.5, 0.2) and bot opinion = 0.8. This setting represents a situation where agents hold a moderate opinion and are not polarized. In probabilistic terms, the average population opinion is uncertain (i.e., around 0.5). The bot's opinion is more confident than the average opinion, but the mean difference between agents and the bot is not very large. On each timestep (starting from t = 10 onwards), agents are presented with a unique feed based on which they decide whether to engage and update their opinions. Once each agent has made a decision, the simulation proceeds to the next timestep. We repeat the procedure for t = 100 timesteps and r = 100 replications. We record each agent's opinion on each timestep and the cases where the bot gets recommended to an agent. We simulate two conditions, one where the bot is present and one where it is absent. We initialize both simulation conditions with the same random seeds, thereby producing virtually identical simulation conditions except for the presence of the bot. This manipulation allows for precise measurements regarding the influence of the bot on the network.
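The paired-seed counterfactual design can be sketched as follows (a heavily simplified Python illustration: the learned recommender is replaced by a uniformly random feed and truncation by clipping, so this demonstrates only the seeding logic, not the full model):

```python
# Two runs share one random seed and differ only in the presence of a single
# stubborn bot with fixed opinion 0.8 added to the content pool.
import numpy as np

ALPHA, THETA, BOT_OPINION = 0.2, 0.2, 0.8

def step_opinion(t_i, e_j, rng):
    # engagement probability (Eq. 3), capped at 1
    p = min(1.0, ALPHA * abs(e_j - t_i) + 2 * (1 - ALPHA) * abs(e_j - 0.5))
    if rng.random() < p:                              # engage -> Bayesian update
        o = THETA * (e_j - 0.5) + 0.5                 # Eqs. 4-5
        return t_i * o / (t_i * o + (1 - t_i) * (1 - o))
    return t_i                                        # no engagement -> keep

def run(seed, with_bot, n=100, steps=100):
    rng = np.random.default_rng(seed)
    t = np.clip(rng.normal(0.5, 0.2, n), 0.01, 0.99)  # private opinions
    means = []
    for _ in range(steps):
        e = t + rng.normal(0.0, 0.1, n)               # expressed opinions
        pool = np.append(e, BOT_OPINION) if with_bot else e
        feed = rng.integers(0, len(pool), size=n)     # random "recommendation"
        t = np.array([step_opinion(t[i], pool[feed[i]], rng) for i in range(n)])
        means.append(t.mean())
    return np.array(means)

base_a = run(seed=1, with_bot=False)
base_b = run(seed=1, with_bot=False)   # identical: same seed, same condition
with_bot = run(seed=1, with_bot=True)  # diverges: only the bot differs
```

Re-running the bot-absent condition with the same seed reproduces the trajectory exactly, so any difference between the bot-present and bot-absent trajectories is attributable to the bot.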
Population-level influence of the bot on the average opinion
We start by looking at the population-level influence of the bot on agents' opinions. Figure 2a shows the mean opinion in the entire group over time for the two conditions (bot vs. no bot). Note that for the first t = 10 timesteps, there is no change in opinion since those trials serve as training samples for the recommender system and, therefore, present content in the feed randomly. From t = 10 onwards, we see a significant difference between the two conditions, with the bot shifting the average opinion of the population by an average of more than ten percentage points. This effect is also reflected by the average engagement levels in the population, as depicted in Fig. 2b. This effect holds across different initial opinion distributions (Additional file 1: Fig. S3) and different bot opinions (Fig. 5). From t = 10 onwards, we observe a significant jump in engagements. The recommender system becomes increasingly efficient at recommending content. The increase in engagement towards the end of the plot indicates that, as consensus emerges, interactions between agreeing agents lead to greater confidence (biased assimilation) and thus greater engagement. The plot shows an average between simulations where the average opinion converged to 0 and simulations where the average opinion converged to 1 (Additional file 1: Fig. S1).
Mean opinion and mean engagement in networks with and without bot influence. (a) Mean opinion of the agents over time. (b) Mean number of agents engaging with content in each timestep. Red: Treatment condition, the bot is part of the social network. Blue: Control condition, the bot is not part of the social network. A single bot can produce substantial changes in the mean opinion and mean engagement levels in the network
We let the simulation run for 300 timesteps (Additional file 1: Fig. S4). Surprisingly, Figure S4b shows lower engagement when the bot is present (red line). In some simulations, the bot's pull toward its opinion (0.8) while agents try to converge toward 0 keeps agents in the uncertainty region (around 0.5) for longer (Additional file 1: Fig. S1d). In turn, this reduces average opinion extremity and engagement. However, the bot can also increase engagement, as seen in Additional file 1: Fig. S3. When agents start with moderate levels of consensus (0.4), the bot temporarily increases engagement as it holds the most confident opinion. As agents become more confident themselves, engagement in the bot-absent condition increases again.
The magnitude of direct bot influence on the individual agents
So far, we have seen that a single bot can affect the population's average opinion and engagement levels. Here, we investigate the reasons underlying this effect more directly. Figure 3a shows the number of agents directly influenced by the bot on each timestep. By direct influence, we mean that the bot's content was recommended to an agent via the feed and the agent decided to engage with it, thus updating its private opinion. On average, 2 agents engage with the bot and change their opinions on any timestep. Among agents that do engage, however, we observe an average opinion change of 26% (Fig. 3b). Opinion change is the absolute difference between opinions pre and post-engagement. The finding of low engagement and opinion shift is consistent with the existing literature on the "minimal effect", which suggests that both online (Bail et al. 2020) and offline (Zaller 1992; Endres and Panagopoulos 2019; Kalla and Broockman 2018) efforts at persuasion are rarely effective. It suggests that direct influence (e.g., direct bot interaction or political advertisement and canvassing practices) is often ineffective at shifting population averages. This measure only captures the direct influence from bot to agent; it does not capture the bot's indirect influence via an agent that goes on to influence further agents. We believe that indirect influence may be more pervasive and more pronounced, especially in online contexts where recommender systems facilitate information spread. To measure this indirect n-th order influence of bots on agents, in the next paragraph, we compare the two simulation conditions (bot vs. no bot) while using the same random seed and holding all other conditions constant.
Direct bot influence. (a) The average number of nodes influenced by the bot on each timestep. Influence is defined as when an agent is presented with content produced by the bot, engages with this content, and shifts its own opinion. (b) Mean opinion change for agents influenced by the bot on each timestep. A single bot can influence multiple people on each timestep and produce substantial opinion change
The individual-level shift in opinion as a result of direct or indirect bot influence
We then looked at the difference in opinion for the same agent across the two simulation conditions, holding all other aspects of the simulation constant (Fig. 4). Initializing the two simulations with identical parameters and random seed allowed us to isolate the effect of the bot: estimating the within-agent effect sharpens our estimate of the bot's impact, and differences between the two counterfactual worlds reflect the direct and indirect effects caused by introducing the bot. Despite the limited direct influence (Fig. 3), we found that, compared to the counterfactual simulation, the bot had an indirect effect on the entire population, with the magnitude of influence on opinion varying considerably, from 11 to 15 percentage points (Fig. 4a). This effect is explained by agents observing other agents who interacted with the bot, leading to a trickle-down effect of the bot's opinion on agents who never interacted with it directly. Figure 4b shows the signed difference between agents' opinions in the control and bot conditions \((d = T_{nobot} - T_{bot} )\). Notice that most points are negative, indicating that nodes' opinions shifted toward the bot's opinion. Even though using the same random seed initialization does not ensure that all interaction events are the same, the fact that the signed differences are systematically skewed towards the bot's opinion (Fig. 4b) shows that the bot influenced virtually all individual agents. Our model shows that bots' influence is magnified when we account for indirect influence via the recommender system or other intermediary agents. This striking result indicates that a single bot can have a much broader and more lasting effect, reaching well beyond the individuals it directly interacts with. It suggests that studies focusing only on direct influence (bots' influence on people they directly interacted with) might have underestimated the actual capacity of a bot to bias population opinion dynamics.
Within-agents effect of bot across the two simulations. (a) The absolute difference between each agent's opinion at time t = 300 between the two simulation conditions (bot vs. no bot). This analysis measures the bot's total impact on the opinions of the same agents in the network. (b) The signed difference between each agent's opinion at time t = 300 between the two simulation conditions (bot vs. no bot). This analysis shows the direction of the social influence of the bot on individuals' opinions
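The paired-seed counterfactual design described above can be sketched as follows. This is a minimal toy model, not the paper's actual simulation: the averaging update rule, the 2% direct-exposure probability, the 0.05 learning rate, and all function names are illustrative assumptions. Both conditions consume identical random draws, so any difference between runs is attributable to the bot alone.

```python
import numpy as np

def run_simulation(n_agents=100, n_steps=300, bot_opinion=None, seed=0):
    """Toy stand-in for the paper's simulation (illustrative only): agents
    hold opinions in [0, 1] and move 5% toward a randomly observed source;
    with probability 0.02 that source is a stubborn bot, if one is present."""
    rng = np.random.default_rng(seed)
    opinions = np.clip(rng.normal(0.5, 0.2, n_agents), 0, 1)
    for _ in range(n_steps):
        for i in range(n_agents):
            u = rng.random()               # drawn in both conditions so the
            peer = rng.integers(n_agents)  # random streams stay identical
            if bot_opinion is not None and u < 0.02:
                observed = bot_opinion     # rare direct bot exposure
            else:
                observed = opinions[peer]
            opinions[i] += 0.05 * (observed - opinions[i])
    return opinions

# identical seed -> paired, within-agent counterfactual comparison
no_bot = run_simulation(seed=42)
with_bot = run_simulation(seed=42, bot_opinion=0.8)

signed_diff = no_bot - with_bot  # analogue of d = T_nobot - T_bot (Fig. 4b)
print("mean signed difference:", signed_diff.mean())
print("share shifted toward the bot:", (signed_diff < 0).mean())
```

Because the bot's opinion (0.8) lies above the population mean, the signed differences come out predominantly negative, mirroring the pattern reported for Fig. 4b.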
An exploration of the parameter space for bot opinion and population average
Finally, the above results assumed that the average opinion in the population is N(0.5, 0.2) and the bot opinion is 0.8. The results are specific to this parametrization of our model. To test the generalisability of our conclusion, we explore the sensitivity of our results to different values of agent and bot opinion. Figure 5 shows a heatmap where the x-axis shows different values of the bot opinion and the y-axis shows the mean opinion in the population. The results remain qualitatively similar to those presented in the main text. The bot has a more substantial effect on the population when its opinion is more distant from the average opinion of the population. The results further support the conclusion that a bot (representing 1% of the total population) can have a disproportionate effect on population-level dynamics when we consider indirect influence.
Heatmap of different initial opinion distributions. The principal analysis assumed that the average opinion in the population is N(0.5, 0.2) and the bot opinion is 0.8. Here, we explore the sensitivity of our results to different values of agent and bot opinion. This figure shows a heatmap where the x-axis shows the bot's different opinion values and the y-axis shows different population average opinion values. The results remain qualitatively similar to those presented in the main text. The bot has a more substantial effect on the population when its opinion is more distant from the average opinion of the population
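The sweep behind such a heatmap can be sketched as follows, using a toy averaging update in place of the paper's actual model; the exposure probability, learning rate, grid values, and the summary statistic per cell are all illustrative assumptions.

```python
import numpy as np

def bot_effect(pop_mean, bot_opinion, n_agents=50, n_steps=100, seed=0):
    """One heatmap cell: absolute shift of the population mean after
    interacting with a stubborn bot (toy update rule, not the paper's Eq. 3)."""
    rng = np.random.default_rng(seed)
    opinions = np.clip(rng.normal(pop_mean, 0.2, n_agents), 0, 1)
    start = opinions.mean()
    for _ in range(n_steps):
        for i in range(n_agents):
            if rng.random() < 0.02:  # rare direct bot exposure
                target = bot_opinion
            else:
                target = opinions[rng.integers(n_agents)]
            opinions[i] += 0.05 * (target - opinions[i])
    return abs(opinions.mean() - start)

bot_vals = [0.1, 0.3, 0.5, 0.7, 0.9]  # x-axis: bot opinion
pop_vals = [0.3, 0.4, 0.5, 0.6, 0.7]  # y-axis: population mean opinion
heat = np.array([[bot_effect(m, b) for b in bot_vals] for m in pop_vals])
print(np.round(heat, 3))
```

Consistent with the reported pattern, cells where the bot's opinion is far from the population mean show a larger shift than cells where the two coincide.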
Effect on the recommendation system's internal representations
We conducted a last set of analyses to detect differences in the model's internal representations. The recommender system used in this study was a simple logistic regression. The model was trained using agents' binary engagement history as the dependent variable and the absolute difference between an agent's public opinion at time t−1 and other agents' opinions as the independent variable. After model fitting, the logistic model's slope (beta) coefficients were used to compare the model's internal representations across conditions on the last timestep of each simulation. The distributions' mean values were negative in both conditions (no bot = −4.86; bot = −4.08), suggesting that opinion distance between an agent and its neighbor negatively predicted engagement. This is expected given the similarity bias in Eq. 3. We then ran a Welch two-sample t-test on the distributions of beta coefficients on the last time step of the simulation across the 100 repetitions. We found a significant difference between the two conditions (t(173.11) = −3.96, p < 0.001), suggesting that the bot significantly reduced the negative effect of opinion distance on engagement (cf. Fig. 1b).
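A sketch of this analysis pipeline, with all data-generating details (opinion distributions, the ground-truth engagement probabilities, sample sizes, function names) invented for illustration: simulate engagement events in which closer opinions engage more often, fit a one-feature logistic model of engagement on opinion distance, collect the slope per run, and compare conditions with Welch's t-statistic. A minimal gradient-descent fit is used here to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic_slope(x, y, lr=2.0, n_iter=2000):
    """Minimal logistic regression (intercept + slope) via gradient descent;
    returns the slope coefficient."""
    w0 = w1 = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w0 + w1 * x)))
        g = p - y
        w0 -= lr * g.mean()
        w1 -= lr * (g * x).mean()
    return w1

def engagement_slope(opinions, n_obs=1000):
    """Recover the slope the recommender would learn from engagement events
    governed by an assumed similarity bias (closer opinions engage more)."""
    i = rng.integers(len(opinions), size=n_obs)
    j = rng.integers(len(opinions), size=n_obs)
    dist = np.abs(opinions[i] - opinions[j])
    p_engage = 1.0 / (1.0 + np.exp(-(2.0 - 6.0 * dist)))  # assumed ground truth
    engaged = (rng.random(n_obs) < p_engage).astype(float)
    return fit_logistic_slope(dist, engaged)

def welch_t(a, b):
    """Welch's t-statistic (unequal variances) for two samples."""
    a, b = np.asarray(a), np.asarray(b)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# hypothetical end-of-run opinion samples for the two conditions, 20 runs each
betas_nobot = [engagement_slope(rng.normal(0.5, 0.20, 100)) for _ in range(20)]
betas_bot = [engagement_slope(rng.normal(0.6, 0.25, 100)) for _ in range(20)]
print(np.mean(betas_nobot), np.mean(betas_bot), welch_t(betas_nobot, betas_bot))
```

As in the paper, the fitted slopes are negative in both conditions, reflecting the similarity bias baked into the engagement process.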
This paper investigated the indirect influence that programmed media manipulators, such as bots, trolls, and zealots, can have on population opinion dynamics via recommender systems. We posited that even without direct exposure to bots' content, bots could influence population-wide content ranking by providing unduly training evidence to recommender algorithms. For instance, bots' greater activity, content engagement and production, and resilience to persuasion may contribute to bots skewing the training sample algorithms use to infer population preferences, averages, and typical content consumption patterns.
Using an opinion dynamics simulation on a 100-node network, we find that a single bot can substantially shift the mean opinion and engagement compared to a control condition without a bot. Even though only a minority of 'human' nodes (2%) directly engaged with the bot's content, the bot disproportionately affected the average shift in opinion observed in the population. Notably, virtually all nodes in the population were influenced by the bot's presence, with opinion shifts ranging from 11 to 15 percentage points. The results are robust across different initialization parameters and different opinion update functions. We also tested the effect of removing the bot after 40 timesteps. The number of agents directly influenced by the bot and the mean opinion change are shown in Additional file 1: Fig. S5. We find a sudden drop in direct influence after the bot is removed from the network. Comparing within-node opinion shifts across conditions, we find that all nodes still showed shifted opinions at t = 100 (Additional file 1: Fig. S6), although, compared to the main results shown in Fig. 4, the magnitude of the shift is vastly reduced. Finally, we find that the internal representations of our simple recommendation model (the beta coefficients of the logistic regression) were significantly affected by the bot's presence: the coefficients were significantly larger in the bot-present than in the bot-absent condition.
These results would be unlikely if bots could influence human agents only via direct exposure. As bots represent only a minority of the population of agents (1% in our simulation), it is implausible that they interact with and directly influence all other agents. Our findings show that a simple recommender system (a logistic regression in our simulation) dramatically increases the influence of a bot on the population. Our first contribution is thus to advance the debate around bots' influence and media manipulation: our study highlights a previously unexplored mechanism and draws attention to a subtle yet potentially pervasive form of influence.
Contrary to previous studies investigating social media bots, our work does not model direct interactions between bots and human agents (arguably representing a minority of interactions) but focuses on indirect effects via recommendation systems. Agent-based simulations have shown how bots can have a long-range, pervasive and, most critically, stealthy influence on the network even without direct social influence (Keijzer and Mäs 2021). Our findings highlight that malicious agents, such as bots and troll factories, can further increase their influence by infiltrating the internal representations of trained models tasked with content filtering. Our setup allows us to compare counterfactual worlds, thus strengthening causal inference. We initialized control (without-bot) and treatment (with-bot) simulations with the same parameters and random seed. Furthermore, effects on opinion shifts and engagement were calculated at the individual node level, thus measuring the effect of our treatment (bot presence) on the opinion dynamics and engagement of virtually identical 'human' agents.
Although our agent-based model provides valuable insights into machine-mediated information systems, it is limited by the ecological validity of simulation studies. Testing the same hypotheses in real-world contexts may be problematic due to the difficulty in conducting randomized control trials on social media platforms and the proprietary nature of real-world recommender systems. Although it may be challenging to study these systems, researchers have recently successfully inferred the hidden mechanisms underlying several proprietary algorithms by systematically prompting them (Ali et al. 2019; Hannak et al. 2013; Robertson et al. 2018). Furthermore, real-world opinion dynamics are arguably more complex than our simplified simulated world. Complex dynamics may be elicited by bots operating on media platforms in ways not captured by our simulation (Mønsted et al. 2017). Nevertheless, our findings show that one component of such a complex network of influence may occur not via direct interactions between nodes but by subtly skewing the recommender systems' training set. We invite future researchers in computational social science to investigate this indirect causal pathway linking bots and human social media accounts through recommender systems (Fig. 1a).
The second contribution of our paper lies in using a Bayesian update function that bridges classical opinion dynamics findings (e.g., opinion averaging, biased assimilation) with behavioral observations of opinion change from psychology and cognitive science. Although several other opinion models exist that reproduce these dynamics (Flache et al. 2017), not many use parameters that can be associated with explicit psychological constructs. Several classical models assume that opinion change results from a linear combination of neighboring nodes' observed opinions, such as averages and weighted means (DeGroot 1974; Friedkin and Johnsen 1990; Deffuant et al. 2000). However, experimental evidence suggests that non-linear multiplicative dynamics often govern opinion change (Bail et al. 2018; Pescetelli and Yeung 2020b; Pescetelli et al. 2016; Moscovici and Zavalloni 1969). Here, we used a Bayesian opinion update model that captures the dynamics of belief conviction, uncertainty, and probabilistic judgments (Pescetelli and Yeung 2020a, b; Harris et al. 2016). We selected this Bayesian update model because opinion shifts can be interpreted as shifts in confidence estimates. We argue that this opinion update model has several advantages. It represents opinions in the well-known language of probability. It can be seen as a normative rational model of opinion update (Harris et al. 2016). Using Bayes' theorem to model belief updates allows us to quantify a best-case scenario, namely, the impact of bots if people were rational.
Similarly, it represents opinion dynamics as shifts in subjective probability estimates (e.g., the probability of being "right"), given the perceived evidence from other agents' opinions. More confident agents (i.e., with a stronger prior) are less influenced by other agents and more influential than uncertain agents. This approach explains several social phenomena (e.g., polarization, hyper-partisanship, escalation, and averaging) without requiring arbitrary free parameters. Encounters with agreeing agents tend to increase one's belief conviction (biased assimilation), while encounters with disagreeing agents increase uncertainty (assimilative social influence). Modeling trust and perceived expertise could potentially explain recent evidence suggesting that disagreement may sometimes entrench people further in their decisions (Bail et al. 2018; Harris et al. 2016). Bayesian update represents opinion escalation dynamics better than linear aggregation models. While linear updates may better model estimation tasks, Bayesian updates may better represent belief convictions and partisan affiliations, i.e., cases where interaction with like-minded individuals makes people more extreme.
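A minimal illustration of this kind of Bayesian update (a sketch, not the paper's exact equation; the `trust` parameter and likelihood construction are assumptions): the peer's stated opinion is treated as a noisy signal that is correct with probability `trust`, and the agent's belief is updated via Bayes' theorem.

```python
def bayes_update(p_self, p_peer, trust=0.7):
    """One Bayesian opinion update (illustrative): the peer's opinion is a
    signal that is correct with probability `trust`; the agent's probability
    of the hypothesis H is updated by Bayes' rule."""
    like_h = trust * p_peer + (1 - trust) * (1 - p_peer)      # P(signal | H)
    like_not_h = trust * (1 - p_peer) + (1 - trust) * p_peer  # P(signal | not H)
    return p_self * like_h / (p_self * like_h + (1 - p_self) * like_not_h)

# agreement entrenches (biased assimilation); disagreement increases uncertainty
print(bayes_update(0.8, 0.9))  # confident agreeing peer -> more extreme (> 0.8)
print(bayes_update(0.8, 0.2))  # disagreeing peer -> pulled toward 0.5
```

Note how confident agents (beliefs near 0 or 1) move little, while agreement pushes beliefs toward the extremes, reproducing the escalation dynamics described above without extra free parameters.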
We also acknowledge that our findings are specific to our choice of parameters and may not generalize well to other scenarios. Contrary to previous work, we do not explore the effect of different network sizes and structures on the effect under consideration. We acknowledge this as a limitation of our study and invite future studies to test the robustness of our results to alternative network architectures. Our study used a simple logistic model to predict engagement scores to provide recommended content. One limitation is that existing recommender systems are more complex than the simple logistic regression employed in this study. For instance, recommender systems can consider many more features and provide greater personalization thanks to highly granular information about users and user similarity. However, the effects highlighted in our findings are likely to affect, at least to some degree, any content filtering algorithm trying to extrapolate the behavior of one user to another. We speculate that more complex recommendation systems may still be affected by the same dynamics highlighted here as long as they use population averages to predict individual preferences. By biasing the estimation of a population mean, algorithmic agents can change the model's expectation for a given cluster of users or the whole population. Extrapolating a user's behavior to another represents the standard in many recommender systems (Ricci et al. 2011), e.g., collaborative filtering algorithms (Das et al. 2007; Koren and Bell 2015; Ricci et al. 2011). Recently, researchers have shown that individual social influence can be affected by an individual's position in the population distribution and similarity with others (Analytis et al. 2018, 2020). Thus we may also expect to observe our findings with more realistic content recommendation algorithms.
Furthermore, the complexity of realistic recommender systems makes the findings of this work even more significant. Indeed, our findings suggest that the influence of bots and troll factories may be subtle but highly pervasive. The opacity and complexity of real-world recommender systems suggest that such pervasive effects may continue to operate undetected. The potential consequences are difficult to imagine but should prompt further investigation.
Finally, some of our findings may depend on the specific opinion update model used here. We explore in the Supplementary material different values of alpha in Eq. 3 (Additional file 1: Fig. S2) and different average opinion distributions (Additional file 1: Fig. S3). A caveat of our simulation pertains to the modeling of opinion and opinion change and to operationalizing bots as stubborn agents (Hunter and Zaman 2018; Yildiz et al. 2013). In the present study, we represent beliefs along a single opinion dimension; people's beliefs outside the lab are often more complex and multifaceted than our model captures.
Nevertheless, using beliefs spanning a single dimension represents a necessary first step in many opinion dynamic models and advice-taking paradigms (Bonaccio and Dalal 2006; Deffuant et al. 2000; Flache et al. 2017; Friedkin and Johnsen 1990). Similarly, political polarisation and beliefs across several domains, especially divisive issues, may be well described by a single belief dimension (Navajas et al. 2019). Future modeling efforts could generalize our findings to multi-dimensional attitude spaces.
We suggest possible ways to reduce the risk of public opinion manipulation. First, improving the detection and removal of automated accounts can reduce bots' impact on population-wide behaviors (Additional file 1: Fig. S5). However, relying solely on this strategy is not sustainable in the long run, as automated detection becomes outdated and new, more sophisticated bots are developed. Detection and removal tend to be more effective with relatively simple bots, thus creating a selective pressure for bots to develop more human-like features that are more likely to remain undetected (akin to the Red Queen hypothesis in biology). A more valuable strategy might be to regulate recommender and filtering algorithms to make them more transparent. Knowledge of the features used to make content recommendations can help academics and practitioners monitor the features that ill-intentioned entities can exploit. Open auditing of recommender systems and open-source software can go a long way in preventing some types of bots from doing harm and minimizing algorithmic tampering with public opinions.
In this paper, we explored the hypothesis that algorithmic agents may have stealthy, undue influence on online social networks by biasing the internal representations of recommender systems. Bots' more extreme views, greater activity frequency, and content generation might distort content recommendation for the entire network. Researchers and watchdogs should be aware of the indirect causal pathways of bot influence.
Barkoczi, D., & Pescetelli, N. (2021, August 21). Indirect causal influence of social bots through a simple recommendation algorithm. Retrieved from osf.io/7s83x.
Aldayel A, Magdy W (2022) Characterizing the role of bots in polarized stance on social media. Soc Netw Anal Min 12(1):30
Ali M, Sapiezynski P, Bogen M, Korolova A, Mislove A, Rieke A (2019) Discrimination through optimization: how Facebook's Ad delivery can lead to biased outcomes. Proc ACM Hum Comput Interact 199(3):1–30
Allen J, Howland B, Mobius M, Rothschild D, Watts DJ (2020) Evaluating the fake news problem at the scale of the information ecosystem. Sci Adv 6(14):eaay3539
Analytis PP, Barkoczi D, Herzog SM (2018) Social learning strategies for matters of taste. Nat Hum Behav 2(6):415–424
Analytis PP, Barkoczi D, Lorenz-Spreen P, Herzog S (2020) The structure of social influence in recommender networks. In: Proceedings of the web conference 2020, 2655–61. WWW '20. New York, NY, USA: association for computing machinery
Aral S, Eckles D (2019) Protecting elections from social media manipulation. Science 365(6456):858–861
Bail CA, Argyle LP, Brown TW, Bumpus JP, Haohan Chen MB, Hunzaker F, Lee J, Mann M, Merhout F, Volfovsky A (2018) Exposure to opposing views on social media can increase political polarization. Proc Natl Acad Sci 115(37):9216–9221
Bail CA, Guay B, Maloney E, Aidan Combs D, Hillygus S, Merhout F, Freelon D, Volfovsky A (2020) Assessing the Russian internet research agency's impact on the political attitudes and behaviors of American twitter users in late 2017. Proc Natl Acad Sci 117(1):243–250
Bakshy E, Messing S, Adamic LA (2015) Exposure to ideologically diverse news and opinion on facebook. Science. https://science.sciencemag.org/content/348/6239/1130.abstract?casa_token=93SGKMyFHO4AAAAA:NLLn7cnwU-dniTFvSJ5wC7XUJ30w5AFKxPLDLfWyijbh8Z-NWk0vsYB2zgXtq7EyGRLUhHdYX2fBfQ
Becker J, Brackbill D, Centola D (2017) Network dynamics of social influence in the wisdom of crowds. Proc Natl Acad Sci USA 114(26):E5070–E5076
Beskow DM, Carley KM (2018) Bot conversations are different: leveraging network metrics for bot detection in twitter. In: 2018 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM), 825–32. ieeexplore.ieee.org
Bessi A and Ferrara E (2016) Social bots distort the 2016 US presidential election online discussion. SSRN 21(11). https://ssrn.com/abstract=2982233
Bonaccio S, Dalal RS (2006) Advice taking and decision-making: an integrative literature review, and implications for the organizational sciences. Organ Behav Hum Decis Process 101(2):127–151
Broniatowski DA, Jamison AM, Qi S, AlKulaib L, Chen T, Benton A, Quinn SC, Dredze M (2018) Weaponized health communication: twitter bots and russian trolls amplify the vaccine debate. Am J Public Health 108(10):1378–1384
Carley KM (2020) Social cybersecurity: an emerging science. Comput Math Organ Theory 26(4):365–381
Dandekar P, Goel A, Lee DT (2013) Biased assimilation, homophily, and the dynamics of polarization. Proc Natl Acad Sci USA 110(15):5791–5796
Das A, Datar M, Garg A and Rajaram S (2007) Google news personalization: scalable online collaborative filtering. In: Proc of the 16th Int Conf on World Wide Web, 271–80
Deffuant G, Neau D, Amblard F, Weisbuch G (2000) Mixing beliefs among interacting agents. Adv Compl Syst A Multidis J 03(4):87–98
DeGroot MH (1974) Reaching a consensus. J Am Stat Assoc 69(345):118
Edelson L, Nguyen M-K, Goldstein I, Goga O, McCoy D et al (2021) Understanding engagement with U.S. (mis)information news sources on Facebook. In: IMC '21: ACM internet measurement conference, Nov 2021, virtual event, France, pp 444–463. https://hal.archives-ouvertes.fr/hal-03440083/file/news-interactions-imc2021.pdf
Endres K, Panagopoulos C (2019) Cross-pressure and voting behavior: evidence from randomized experiments. The J Polit 81(3):1090–1095
Ferrara E, Varol O, Davis C, Menczer F, Flammini A (2016) The rise of social bots. Commun ACM 59(7):96–104
Ferreira LN, Hong I, Rutherford A, Cebrian M (2021) The small-world network of global protests. Sci Rep 11(1):19215
Festinger L, Carlsmith JM (1959) Cognitive consequences of forced compliance. J Abnorm Psychol 58(2):203–210
Flache A, Mäs M, Feliciani T, Chattoe-Brown E, Deffuant G, Huet S, and Lorenz J (2017) Models of social influence: towards the next frontiers. J Artif Soc Soc Simul. https://doi.org/10.18564/jasss.3521.
Fleming SM, Daw ND (2017) Self-evaluation of decision performance: a general bayesian framework for metacognitive computation. Psychol Rev 124(1):1–59
Fleming SM, van der Putten EJ, Daw ND (2018) Neural mediators of changes of mind about perceptual decisions. Nat Neurosci 21(4):617–624
Friedkin NE, Johnsen EC (1990) Social influence and opinions. The J Math Sociol 15(3–4):193–206
Friedkin NE and Johnsen EC (2011) Social influence network theory: a sociological examination of small group dynamics. Cambridge University Press
González-Bailón S, De Domenico M (2021) Bots are less central than verified accounts during contentious political events. Proc Natl Acad Sci USA. https://doi.org/10.1073/pnas.2013443118
Guess A, Nagler J, Tucker J (2019) Less than you think: prevalence and predictors of fake news dissemination on facebook. Sci Adv 5(1):4586
Hahn U, Oaksford M (2006) A Bayesian approach to informal argument fallacies. Synthese 152(2):207–236
Hahn U, Oaksford M (2007) The rationality of informal argumentation: a Bayesian approach to reasoning fallacies. Psychol Rev 114(3):704–732
Hannak A, Sapiezynski P, Kakhki AM, Krishnamurthy B, Lazer D, Mislove A, Wilson C (2013) Measuring personalization of web search. In: Proceedings of the 22nd international conference on world wide web, 527–38. WWW '13. New York, NY, USA: Association for Computing Machinery
Harris AJL, Hahn U, Madsen JK, Hsu AS (2016) The appeal to expert opinion: quantitative support for a Bayesian network approach. Cogn Sci 40(6):1496–1533
Hegselmann R, Krause U (2015) Opinion dynamics under the influence of radical groups, charismatic leaders, and other constant signals: a simple unifying model. Netw Heterog Media 10(3):477–509
Howard P (2018) How political campaigns weaponize social media bots. IEEE Spectrum Oct
Hunter SD, and Zaman T (2018) Optimizing opinions with stubborn agents under time-varying dynamics. arXiv [cs.SI]. arXiv. http://arxiv.org/abs/1806.11253.
Hurtado S, Ray P and Marculescu R (2019) Bot detection in reddit political discussion. In: Proceedings of the fourth international workshop on social sensing, 30–35. SocialSense'19. New York, NY, USA: Association for Computing Machinery
Kakutani M (2019) The death of truth. Tim Duggan Books
Kalla JL, Broockman DE (2018) The minimal persuasive effects of campaign contact in general elections: evidence from 49 field experiments. The Am Polit Sci Rev 112(1):148–166
Karan N, Salimi F, Chakraborty S (2018) Effect of zealots on the opinion dynamics of rational agents with bounded confidence. Acta Phys Pol, B 49(1):73
Keijzer MA, Mäs M (2021) The strength of weak bots. Online Social Networks and Media 21(January):100106
Koren Y and Bell R (2015) Advances in collaborative filtering. In: Recommender systems handbook, edited by Francesco Ricci, Lior Rokach, and Bracha Shapira, 77–118. Boston, MA: Springer US
Lazer D (2020) Studying human attention on the internet. Proc Natl Acad Sci USA
Lazer D, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ et al (2018) The science of fake news. Science 359(6380):1094–1096
Ledford H (2020) Social scientists battle bots to glean insights from online chatter. Nature 578(7793):17
Lerman K, Yan X, Xin-Zeng Wu (2016) The 'Majority Illusion' in social networks. PLoS ONE 11(2):e0147617
Linvill DL, Warren PL (2018) Troll factories: the internet research agency and state-sponsored agenda building. Resource Centre on Media Freedom in Europe. https://scholar.google.com/scholar?hl=en&q=Brandon+C+Boatwright%2C+Darren+L+Linvill%2C+and+Patrick+L+Warren.+2018.+Troll+factories%3A+The+internet+research+agency+and+statesponsored+agenda+building.+Resource+Centre+on+Media+Freedom+in+Europe+%282018%29
Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9(11):1432–1438
Mäs M, Flache A (2013) Differentiation without distancing. Explaining Bi-polarization of opinions without negative influence. PLoS ONE 8(11):e74516
Mønsted B, Sapieżyński P, Ferrara E, Lehmann S (2017) Evidence of complex contagion of information in social media: an experiment using twitter bots. PLoS ONE 12(9):e0184148
Moscovici S, Zavalloni M (1969) The group as a polarizer of attitudes. J Pers Soc Psychol 12(2):125–135
Muller M (2012) Lurking as personal trait or situational disposition: lurking and contributing in enterprise social media. In: Proceedings of the ACM 2012 conference on computer supported cooperative work, 253–56. CSCW '12. New York, NY, USA: Association for Computing Machinery
Navajas J, Heduan FÁ, Garrido JM, Gonzalez PA, Garbulsky G, Ariely D, Sigman M (2019) Reaching consensus in polarized moral debates. Curr Biol: CB 29(23):4124–29.e6
Paul C, Matthews M (2016) The Russian 'firehose of falsehood' propaganda model. RAND Corporation, pp 2–7
Penrod SD, Cutler BL (1995) Witness confidence and witness accuracy: assessing their forensic relation. Psychol, Publ Pol, Law: an off Law Rev Univ Arizona College Law Univf Miami School Law 1:817–845
Pescetelli N, Yeung N (2020a) The role of decision confidence in advice-taking and trust formation. J Exp Psychol Gen. https://doi.org/10.1037/xge0000960
Pescetelli N, Yeung N (2020b) The effects of recursive communication dynamics on belief updating. Proc Royal Soc b: Biol Sci 287(1931):20200025
Pescetelli N, Rees G, Bahrami B (2016) The perceptual and social components of metacognition. J Exp Psychol Gen 145(8):949–965
Price PC, Stone ER (2004) Intuitive evaluation of likelihood judgment producers: evidence for a confidence heuristic. J Behav Decis Mak 17(1):39–57
Rader CA, Larrick RP, Soll JB (2017) Advice as a form of social influence: informational motives and the consequences for accuracy. Soc Pers Psychol Compass 11(8):e12329
Resulaj A, Kiani R, Wolpert DM, Shadlen MN (2009) Changes of mind in decision-making. Nature 461:263–266
Ricci F, Rokach L and Shapira B (2011) Introduction to recommender systems handbook. In: Recommender systems handbook, edited by Ricci F, Rokach L, Shapira B, and Kantor PB, 1–35. Boston, MA: Springer US
Robertson RE, Lazer D, and Wilson C (2018) Auditing the personalization and composition of politically-related search engine results pages. In: Proceedings of the 2018 world wide web conference on World Wide Web - WWW '18, 955–65. New York, New York, USA: ACM Press
Shao C, Ciampaglia GL, Varol O, Yang K-C, Flammini A, Menczer F (2018) The spread of low-credibility content by social bots. Nat Commun 9(1):4787
Sherif CW, Sherif MS, Nebergall RE (1965) Attitude and attitude change. W.B. Saunders Company, Philadelphia
Sniezek JA, Van Swol LM (2001) Trust, confidence, and expertise in a judge-advisor system. Organ Behav Hum Decis Process 84(2):288–307
Soll JB, Mannes AE (2011) Judgmental aggregation strategies depend on whether the self is involved. Int J Forecast 27(1):81–102
Stella M, Ferrara E, De Domenico M (2018) Bots increase exposure to negative and inflammatory content in online social systems. Proc Natl Acad Sci USA 115(49):12435–12440
Stewart LG, Arif A, and Starbird K (2018) Examining trolls and polarization with a retweet network. In: Proc ACM WSDM, workshop on misinformation and misbehavior mining on the web. http://faculty.washington.edu/kstarbi/examining-trolls-polarization.pdf
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019) Information gerrymandering and undemocratic decisions. Nature 573(7772):117–121
Sun Z, Müller D (2013) A framework for modeling payments for ecosystem services with agent-based models, Bayesian belief networks and opinion dynamics models. Environ Model Softw 45(July):15–28
Sunstein CR (2018) #Republic: divided democracy in the age of social media. Princeton University Press
Tucker JA, Guess A, Barbera P, Vaccari Cr, Siegel A, Sanovich S, Stukal D, Nyhan B (2018) Social media, political polarization, and political disinformation: a review of the scientific literature. SSRN J. https://doi.org/10.2139/ssrn.3144139
Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151
Whittaker J, Looney S, Reed A, Votta F (2021) Recommender systems and the amplification of extremist content. Internet Policy Rev. https://doi.org/10.14763/2021.2.1565
Yanardag P, Cebrian M, Rahwan I (2021) Shelley: a crowd-sourced collaborative horror writer. Creat Cognit. https://doi.org/10.1145/3450741.3465251
Yaniv I (2004) Receiving other people's advice: influence and benefit. Organ Behav Hum Decis Process 93(1):1–13
Yildiz E, Ozdaglar A, Acemoglu D, Saberi A, Scaglione A (2013) Binary opinion dynamics with stubborn agents. ACM Trans Econ Comput 19,1(4):1–30
Zaller JR (1992) The nature and origins of mass opinion. Cambridge University Press
The Redshift of the CMB vs. Dark Energy
The cosmic microwave background (CMB) radiation comprises about 98% of all electromagnetic radiation in the universe. And, from the creation of the CMB to today, that electromagnetic radiation has redshifted to about 1100 times its original wavelength. And, the energy content of electromagnetic radiation is inversely proportional to its wavelength. Therefore, the CMB has shed an immense amount of energy in the last 13.8 billion years because of the redshift due to the expanding universe.
How does this amount of energy compare to the estimate of Dark Energy in the universe?
energy cosmology dark-energy cosmic-microwave-background
Allyn Shell
In terms of direct numbers at present, dark energy comprises about 70% of all of the energy in the universe. Radiation, on the other hand, makes up less than 0.005% of the energy in the universe. It's such a small fraction that it's less than the error associated with the values for matter and dark energy.
A good way to approximate how the two energies compare over time (with expansion included) is through the scale factor of the metric, $a$. The scale factor represents the ratio of the distance between two points at any given moment of time to the distance between those two points now. Naturally, as the universe expands, the amount of volume in a given region of the universe increases like $a^3$. With that said, let's take a look at the volume density of both your types of energy.
As you quite correctly pointed out, the expansion of the universe redshifts radiation, which means the universe loses that energy entirely. Add to that the fact that the number density of a fixed number of photons is proportional to $\frac{1}{a^3}$, and it's not hard to understand why relativity says the total energy density of radiation decreases like $\frac{1}{a^4}$. In other words the total amount of energy contained in radiation decreases approximately like $\frac{1}{a}$.
As for dark energy, the current accepted model, $\Lambda$-CDM, treats dark energy like a constant energy density. That means as the universe expands, the amount of dark energy per unit volume remains constant. Yikes. This means the total amount of dark energy increases like $a^3$.
Add them together and you see there is a net increase in total energy of the universe (the total energy of matter remains more or less constant). Clearly it isn't the case that the energy lost from radiation is picked up as dark energy. But, of course, you already knew that. You had already gone as far as realizing that the radiation energy fell off like $\frac{1}{a}$ and there would need to be a seriously funky (<-- technical term) relationship between $a$ and the energy density of dark energy for the two totals to sum to a constant. Kudos to you for having figured this out by yourself and asking an excellent follow-up question.
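The bookkeeping in this answer is easy to check with a few lines of arithmetic. A sketch with round illustrative fractions of today's total energy (not precise measured cosmological parameters):

```python
# Total energy vs. scale factor a (a = 1 today), in units of today's
# total energy. The present-day fractions are round illustrative numbers.
E_RAD_TODAY = 5e-5   # radiation (photons), < 0.005% of the total
E_DE_TODAY = 0.69    # dark energy, ~70% of the total

def total_radiation_energy(a, e0=E_RAD_TODAY):
    # density ~ 1/a^4, volume ~ a^3, so the total goes like 1/a
    return e0 / a

def total_dark_energy(a, e0=E_DE_TODAY):
    # constant density, volume ~ a^3, so the total goes like a^3
    return e0 * a ** 3

# From recombination (a ~ 1/1100) to now:
a_i, a_f = 1.0 / 1100.0, 1.0
rad_lost = total_radiation_energy(a_i) - total_radiation_energy(a_f)
de_gained = total_dark_energy(a_f) - total_dark_energy(a_i)
print(rad_lost)    # ~0.055: radiation shed ~1099x its present energy
print(de_gained)   # ~0.69: far more dark energy was gained
```

With anything like the measured fractions, the dark energy gained since recombination dwarfs the radiation energy lost.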
$\begingroup$ +1, but since the OP asks "how does $E_\mathrm{CMB}$ compare to $E_\Lambda$", you could provide an actual number here. $\endgroup$ – pela Aug 5 '16 at 14:16
$\begingroup$ @Jim, do you have an estimate for the integral of the 1/a loss over 13.8 billion years as compared to DE? Was the CMB energy loss ever greater than the DE? $\endgroup$ – Allyn Shell Aug 5 '16 at 15:42
$\begingroup$ @AllynShell I don't have the exact numbers. But they aren't too hard to estimate. Blueshift the CMB in reverse by 1100 and that should give you the approximate order of magnitude of radiation energy. I do know that the total amount of dark energy gained is greater than the total amount of radiation energy lost. That's easy to see. No matter when you start counting, the amount of dark energy gained from one moment to the other is always greater than the radiation energy lost. $\frac{a_f^3}{a_i^3}>\frac{a_f}{a_i}>1$. That's simple math. $\endgroup$ – Jim Aug 5 '16 at 18:32
The dynamics of FLRW cosmology is really not that different from elementary Newtonian mechanics, and one can derive some of its salient aspects that way. The total energy $E~=~K~+~V$ is constant, while the kinetic energy $K$ and potential energy $V$ add and subtract from each other to maintain that constant total. I work out the scale factor for the evolution of the spacetime within a Newtonian framework in my answer to the question "How did the universe shift from 'dark matter dominated' to 'dark energy dominated'?".
The more complete general relativistic equation is $$ H^2~=~\left(\frac{\dot a}{a}\right)^2~=~H_0^2\left[\frac{\Omega_m}{a^3}~+~\frac{\Omega_r}{a^4}~+~(1~-~\Omega_r~-~\Omega_m)a^{-3(1+w)}\right] $$ Here $\Omega_m$ holds for matter and $\Omega_r$ for radiation. $\Omega_m$ is $.26$ for dark matter plus $.04$ for luminous matter, so $\Omega_m~=~.3$, and currently $\Omega_r$ is very small. The last term describes the dynamics due to dark energy or the vacuum. In a Newtonian framework the left-hand side is the kinetic energy and the right-hand side the potential. The loss of energy in photons means that a photon, by being stretched, has its energy taken in by spacetime.
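For concreteness, this equation can be evaluated directly. A sketch assuming a flat universe, $w = -1$, and illustrative density parameters ($\Omega_\Lambda$ is then fixed by flatness):

```python
import math

def hubble_rate(a, h0=1.0, omega_m=0.3, omega_r=8.4e-5, w=-1.0):
    """H(a) from the Friedmann equation, assuming a flat universe."""
    omega_de = 1.0 - omega_m - omega_r  # flatness fixes the vacuum term
    h_sq = h0 ** 2 * (omega_m / a ** 3
                      + omega_r / a ** 4
                      + omega_de * a ** (-3.0 * (1.0 + w)))
    return math.sqrt(h_sq)

# Today (a = 1) the expansion rate is H0 by construction:
assert abs(hubble_rate(1.0) - 1.0) < 1e-9

# Scale factor at which the dark energy term overtakes the matter term:
a_de = (0.3 / (1.0 - 0.3 - 8.4e-5)) ** (1.0 / 3.0)  # ~0.75
```

The matter term falls like $1/a^3$ against a constant vacuum term, so the crossover happens at $a = (\Omega_m/\Omega_\Lambda)^{1/3}$, well before today.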
Lawrence B. Crowell
Chromatic adaptation from achromatic stimuli with implied color
R. J. Lee (ORCID: orcid.org/0000-0001-6168-0976) & G. Mather
Attention, Perception, & Psychophysics, volume 81, pages 2890–2901 (2019)
Previous research has shown that the typical or memory color of an object is perceived in images of that object, even when the image is achromatic. We performed an experiment to investigate whether the implied color in greyscale images could influence the perceived color of subsequent, simple stimuli. We used a standard top-up adaptation technique along with a roving-pedestal, two-alternative spatial forced-choice method for measuring perceptual bias without contamination from any response or decision biases. Adaptors were achromatic images of natural objects that are normally seen with diagnostic color. We found that, in some circumstances, greyscale adapting images had a biasing effect, shifting the achromatic point toward the implied color, in comparison with phase-scrambled images. We interpret this effect as evidence of adaptation in chromatic signaling mechanisms that receive top-down input from knowledge of object color. This implied color adaptation effect was particularly strong from images of bananas, which are popular stimuli in memory color experiments. We also consider the effect in a color constancy context, in which the implied color is used by the visual system to estimate an illuminant, but find our results inconsistent with this explanation.
The mechanisms in the peripheral visual pathway that support human color vision are reasonably well understood: signals from three classes of cone photoreceptor (long-, medium-, and short-wavelength sensitive, or L, M, and S), tuned to different regions of the visible spectrum, are combined by specific populations of ganglion cells in the retina in an opponent fashion. These ganglion cells project to the lateral geniculate nucleus (LGN) and from there signals are sent to the cortex. However, it has been suggested that what we see is not only determined by the optic array (the pattern of light reaching our eyes) but is also partly determined by knowledge or prior experience of the world and an estimate of the relative likelihood of possible stimuli. The debate on the bottom-up vs. the top-down nature of vision has been ongoing through much of modern vision science and continues today (Firestone & Scholl, 2016). Relatively recently, more evidence has been provided for top-down inputs to color perception, and we discuss some of this below. In the experiment we describe here, we attempt to determine whether the typical bias from expected colors of objects, known to affect color vision when that color is not present, lasts beyond the presentation of that stimulus, in a way analogous to traditional chromatic adaptation in the peripheral mechanisms.
Memory color affects color perception
For some time, it has been suggested that the remembered typical color of an object—"memory color" (as opposed to "color memory", simply remembering colors; Bartleson, 1960; Hering, 1964)—can affect the perceived color of that object. One of the earliest people to make this suggestion was Hering (1964, p. 8), who went as far as to suggest that the memory color seen in a "fleeting glance" could replace the perception of another color, depending on attention. Several studies have shown that the typical or diagnostic color (Tanaka & Presnell, 1999; Biederman & Ju, 1988) of an object can have an effect on the perception of that object's color. These experiments use stimulus images featuring natural objects (often fruit and vegetables; Bannert & Bartels, 2013; Hansen, Olkkonen, Walter, & Gegenfurtner, 2006; Olkkonen, Hansen, & Gegenfurtner, 2008) that have a typical color, objects that are familiar to the particular observer (Bloj, Weiß, & Gegenfurtner, 2016), and/or objects with popular branding or symbolism (e.g., road signs; Witzel, Valkova, Hansen, & Gegenfurtner, 2011). The effects can go beyond the perceived color of the object; for example, it has been shown that scenes are better recognized when they contain objects in their diagnostic colors (Goffaux, Jacques, Mourax, Oliva, Schyns, & Rossion, 2005). The perceived colors of after-images, created by viewing color-inverted images of recognizable objects, have also been shown to be affected by the expected color of the object, and this can be modulated by an estimate of the precision of the signal (which is low for after-images, as they are unstable; Lupyan, 2015).
In experiments in which observers were asked to adjust an image of an object with diagnostic color to be achromatic, they usually selected colors that were opposite (on the opposite side of the achromatic point in color space) from that color (Hansen et al., 2006; Olkkonen et al., 2008; Witzel et al., 2011; Lupyan, 2015). This is consistent with having to 'over-compensate' to negate the chromatic perception caused by the familiar color of the object. This effect was reduced when the stimuli were made less realistic by having their texture reduced or made into outline shapes only (Olkkonen et al., 2008). In addition, the color of the objects used in these experiments is remembered with more saturation and lightness than typical examples (Bartleson, 1960; Delk & Fillenbaum, 1965) or the specific items known to the observer (Bloj et al., 2016).
When greyscale images of color-diagnostic images are shown to observers, it has been shown that the memory color can be predicted from neural activity in the primary visual cortex (V1, Bannert & Bartels 2013) and the authors concluded that this region is receiving signals about the prior knowledge of object colors, possibly as feedback from area V4.
Color constancy
We see objects because light from an illumination source reflects from their surface and into our eyes. The spectral composition of this light is determined by both the illuminant and the surface reflectance properties, but in general we see surfaces as being the same color in different illumination conditions. This is one, perhaps too restricted, definition of color constancy (Smithson, 2005; D'Zmura & Lennie, 1986; Foster, 2011; Hurlbert, 1998). The problem of recovering the surface properties is a difficult one for the visual system, and there are many suggestions of how the illuminant might be estimated in order to be discounted from the proximal stimulus, using chromatic information distributed in space or time in the stimulus. Another way that this might be done is by comparing the chromatic signal to the expected color determined by the diagnostic color of an object, and assuming the difference is caused by the chromaticity of the illuminant. This was also part of Hering's (1964, p. 17) observations on memory colors. A bias of the signal towards the expected object color could then be applied to the rest of the scene to achieve color constancy. Memory color has been experimentally investigated in a color constancy context, and any effect of familiar objects is small (Granzier & Gegenfurtner, 2012; Kanematsu & Brainard, 2014).
Chromatic adaptation
Much of what we know about the peripheral mechanisms of color vision comes from studying adaptation or habituation. Continued stimulation of a particular mechanism reduces the activity of that mechanism relative to others. This can be done selectively and the effects measured in psychophysical experiments (see for reviews Clifford et al., 2007). It can readily be shown that adaptation to a colored stimulus for only a few seconds produces negative after-images that also last seconds. This kind of adaptation, which we refer to as "normalization", has the effect of making the adapting stimulus appear closer to neutral and occurs in systems where the stimulus dimension is encoded by broadband mechanisms (such as the spectral sensitivities of the cones), rather than a larger number of narrowly tuned mechanisms (see Webster, 2011, for discussion) where the effect can sometimes be opposite.
In color vision, adaptation can be shown to both the time-averaged magnitude of a stimulus and to the amount of variation (contrast adaptation; see Webster, 1996, for review), and has been used as a tool to study the nature of processing in the system. For example, Krauskopf, Williams, and Heeley (1982) adapted observers to chromatic modulations in different directions in color space and demonstrated that thresholds for detecting subsequent colors could be elevated selectively in only two directions—the cardinal axes of color space, which are determined by the responses of the opponent mechanisms. The comparison of cone signals by different classes of retinal ganglion cells (RGCs), which project to the lateral geniculate nucleus (LGN), results in the perceptually opposite colors. The L-M mechanism roughly encodes redness vs. greenness, and the S-(L+M) mechanism roughly encodes blueness vs. yellowness, although these colors are not the same as the unique hues. The color visual system has a perceptual "norm", when signals might be considered in balance and stimuli appear achromatic (Webster & Leonard, 2008), although this can vary between individuals. In the experiment we describe here, we are particularly concerned with the first kind of adaptation, to the steady or time-averaged stimulus. Viewing a predominantly yellow stimulus for a few seconds, for example, produces a blue after-image when subsequently viewing an objectively neutral field. The effect is most easily demonstrated with a discrete stimulus that creates an after-image, but a chromatically biased scene will have the effect of adapting the whole visual field, producing a perceptual color bias.
This kind of normalization has been shown to result from gain-control-like processes in various stages of the chromatic pathway, including in the cones themselves (Valeton & van Norren, 1983) and the RGCs (Zaidi, Ennis, Cao, & Lee, 2012), although other observed effects suggest that a component of adaptation is cortical (e.g., Shimojo, Kamitani, & Nishida, 2001; Zeki, Cheadle, Pepper, & Mylonas, 2017). Further experiments, including reanalysis of the original data from Krauskopf et al. (1982) by Krauskopf, Williams, Mandler, and Brown (1986), suggest that there are additional mechanisms, possibly later in the pathway than the opponent ones, that are tuned to different hues (Eskew, 2009), but little is known about how adaptation in these "higher-order mechanisms" would affect perception.
Adaptation is traditionally thought of as being driven by bottom-up input from sensation. However, in this experiment, we are more interested in the potential adapting effects from top-down input, originating from memory color. We would not like to speculate about whether this occurs at any of the stages of the system that we know of, and it does seem unlikely that it occurs in any of the peripheral, opponent mechanisms. Nevertheless, the LGN does receive input from V1 (Fitzpatrick, Usrey, Schofield, & Einstein, 1994), so it is not impossible that top-down driven adaptation does take place here.
Adaptation takes place in many perceptual modalities, not limited to color or even vision. Mechanisms sensitive to stimulus motion can be adapted with a stimulus with continuous motion in one direction, so that subsequent stationary stimuli appear to drift in the opposite direction to the adapter (Mather, Verstraten, & Anstis, 1998). Particularly relevant to our current work, viewing static photographs of moving subjects (e.g., running figures) is also claimed to produce motion adaptation, affecting perceived direction of subsequent stimuli composed of drifting dots (Winawer, Huk, & Boroditsky, 2008). These stimuli with implied motion have an effect analogous to the effect we aimed to produce in our experiment, but it has been said that the measured effect is actually a result of a shift in the observers' decision criterion, not a perceptual bias (Morgan, Melmoth, & Solomon, 2013; Mather & Sharman 2015). We use a methodology designed to avoid this problem and measure a true perceptual bias, as will be described later.
We might expect that viewing images of objects that have a typical, diagnostic color will affect the perception of not just the chromaticities in those images, but also the chromaticities in subsequent stimuli, even when those images of objects are made achromatic. We suggest two subtly different reasons why this might be the case, that each cause chromatic biases in different directions: 'normalization' and 'illuminant compensation'. Firstly, knowledge of color might provide input to color vision mechanisms at some level—not necessarily the peripheral mechanisms mentioned above—adjusting their gain to cause a chromatic bias in perception that lasts beyond the adaptors. A greyscale image of a typically yellow object might provide excitation in a mechanism that responds to yellow, in a similar (but likely much reduced) way to a naturally colored image. Adaptation will occur, the output of the mechanism will be reduced, and subsequent stimuli will then appear biased away from yellow—towards blue according to opponent models of color perception—neutral stimuli would appear more blue or yellow stimuli would appear more neutral. This could also be considered as a shift in the neutral point towards yellow. Alternatively, if the visual system uses memory color to estimate illuminant chromaticity in order to discount it, then viewing the greyscale image of a typically yellow object would suggest an extremely blue-biased illuminant, resulting in an achromatic reflection. Subsequent stimuli might still be perceived as if under this blue illuminant, if presented within a short time afterwards, as it takes the visual system some time to adjust to a different illuminant (Lee, Dawson, & Smithson, 2012). The perceptual effect of the resulting compensation would be opposite to the effect of the normalization just described. 
Subsequent neutral chromatic stimuli would appear yellow, just as the adapting stimulus was, so the overall biasing effect is yellow (or a shift in the neutral point towards blue). In this experiment, we aimed to directly test whether achromatic images of typically colored objects, images with implied color, can cause normalization-like or illuminant-compensation-like perceptual chromatic biases in subsequent, simple, stimuli. The chromatic direction of any such bias might indicate which of the above routes is the cause. However, we do not anticipate that the stimuli we use will be particularly effective in causing an illuminant bias, for reasons we discuss later, nor is illuminant discounting the only way that color constancy might be achieved. We used a standard top-up adaptation experiment paradigm, with methodology designed to remove the effect of response bias or any strategy based on the semantic content of the images.
Four observers participated in this experiment. One was male, the others female. All had normal color vision (as verified with Ishihara's Test for Color Deficiency), and normal or corrected-to-normal acuity. One observer was one of the authors and has knowledge of color theory and color space; another observer was experienced with psychophysics experiments but not color specifically. The final two observers were less experienced in psychophysical observations. Each observer contributed between 5 and 16 hours of time to data collection. The study received clearance from the University of Lincoln School of Psychology Research Ethics Committee.
Adapting images were images of natural objects. The objects were chosen because they are predominantly a particular color, yet can still be identified when they are greyscale. The objects we selected were: bananas, carrots, leaves, and cucumbers (Fig. 1). In the first round of data collection, we used only the first three objects. The cucumber images were used later, in an attempt to understand unexpected effects from the leaves. The original images to be used as adaptors were found from Internet searches, and chosen because they met the following criteria: The images contained many examples of the object, such that the whole image was filled with examples; there were no other objects in the scene; the images were of sufficiently high pixel resolution that they did not appear pixelated on the experiment monitor. We made these decisions because we wanted images that did not contain other objects or surfaces that might provide a reference against which the color of the adapting objects may be judged or the illuminant estimated. Some images found that met these requirements were very large, and were split to create more than one stimulus image. Images were cropped as necessary to fit the 4:3 aspect ratio of the experimental monitor. Finally, the images were converted from RGB color to greyscale (MATLAB's rgb2gray function), and scaled in luminance so that the mean of all adapting images was the same, and the same luminance as the probe stimuli (see below).
Examples of the adapting images used in the experiment. Each column shows an image from one of the four sets (bananas, carrots, leaves, cucumbers), and each row shows the different modifications of each image. The top row shows the original images in full color, which were not used in the experiment. The middle row shows the original images in greyscale, as used in one condition of the experiment. The bottom row shows the phase-scrambled versions of the images in the middle row, as used in the control conditions
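The greyscale conversion and mean-luminance matching described above can be sketched in a few lines. This assumes the ITU-R BT.601 luminance weights that MATLAB's rgb2gray uses; the target mean (0.5) and image size are hypothetical values for illustration:

```python
import numpy as np

# Same weighted sum as MATLAB's rgb2gray (ITU-R BT.601 weights)
GREY_WEIGHTS = np.array([0.2989, 0.5870, 0.1140])

def to_grey(rgb):
    # Collapse the trailing RGB axis to a single luminance channel
    return rgb @ GREY_WEIGHTS

def match_mean(grey, target_mean):
    # Scale each image so all adaptors share one mean luminance
    return grey * (target_mean / grey.mean())

rng = np.random.default_rng(0)
photo = rng.random((48, 64, 3))          # stand-in for a cropped photograph
adaptor = match_mean(to_grey(photo), 0.5)
```

After scaling, every adaptor's mean equals the shared target, matching the probe luminance as the text requires.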
The adapting images for the control conditions were made by randomizing the phases of the Fourier components of the original images. The control images therefore contained the same Fourier amplitude components, and approximately the same luminance distributions, as the original images.
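Phase scrambling of this kind is usually implemented in the Fourier domain. A minimal sketch of one common implementation (the paper does not give its exact code):

```python
import numpy as np

def phase_scramble(img, rng):
    """Randomize Fourier phases while keeping the amplitude spectrum."""
    f = np.fft.fft2(img)
    # Phases taken from the FFT of real-valued noise have the conjugate
    # symmetry needed for a (numerically) real inverse transform.
    noise_phase = np.angle(np.fft.fft2(rng.random(img.shape)))
    g = np.abs(f) * np.exp(1j * (np.angle(f) + noise_phase))
    return np.fft.ifft2(g).real

rng = np.random.default_rng(1)
img = rng.random((64, 64))
scrambled = phase_scramble(img, rng)
# Same Fourier amplitudes, different (randomized) spatial structure
assert np.allclose(np.abs(np.fft.fft2(scrambled)),
                   np.abs(np.fft.fft2(img)), atol=1e-6)
```

Because only phases change, the control keeps the original's Fourier amplitude components and approximately its luminance distribution, as described.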
Probe stimuli consisted of a disc, approximately 1° visual angle in diameter, divided into semicircles by a narrow vertical black line (see Fig. 2). The two segments were assigned chromaticities from either a 'pedestal' or a corresponding 'pedestal plus test' set.
The configuration of the apparatus in the experiment, from the point of view of the observer. The monitor is surrounded by black card baffles to remove any view of the rest of the room. The color probe stimulus is shown, not to scale
All colors for the probe stimuli were defined in CIE L∗a∗b∗ space and converted to appropriate RGB values. CIE L∗a∗b∗ was used, as it is intended to be more perceptually uniform—the Euclidean distance between two points corresponds to a perceptual difference that is approximately the same in any part of the space—than any of the spaces based on the physiological mechanisms. Conversion to RGB was done by transforming CIE L∗a∗b∗ values to CIE (1931) XYZ values, and calculating the corresponding RGB values using the spectral measurements of the monitor. The pedestal set had three colors, one was grey (L∗a∗b∗ = (100,0,0)), and the others were slightly red and green in half of the sessions (L∗a∗b∗ = (100,1,0) and (100,− 1,0), respectively) and slightly yellow and blue (L∗a∗b∗ = (100,0,1) and (100,0,− 1), respectively) in the other half of the sessions. There was a separate test set for each pedestal, containing eight offsets from the pedestal, centered on zero offset and equally spaced over 5.0 units in L∗a∗b∗ space in the color direction being used in the session. With this range of colors used, the appearances of the pedestal and test plus pedestal colors were very similar and the task was difficult. One observer in particular was unable to make the perceptual decision with any degree of certainty, and so the range of the chromaticities was doubled for this observer.
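The first stage of this conversion is standard and easy to sketch. The example below assumes the D65 white point for illustration; the paper's actual white and the XYZ-to-RGB step depend on the measured monitor spectra, so that step is omitted:

```python
def lab_to_xyz(L, a, b, white=(95.047, 100.0, 108.883)):
    """CIE L*a*b* -> CIE XYZ, assuming the D65 reference white."""
    def f_inv(t):
        d = 6.0 / 29.0
        return t ** 3 if t > d else 3.0 * d * d * (t - 4.0 / 29.0)
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    return tuple(w * f_inv(f) for w, f in zip(white, (fx, fy, fz)))

# The neutral pedestal, L*a*b* = (100, 0, 0), maps back to the white point:
X, Y, Z = lab_to_xyz(100.0, 0.0, 0.0)
```

The ±1-unit pedestals and the 5.0-unit test range then correspond to very small XYZ perturbations around this white, which is why the task was difficult.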
The luminance of the grey pedestal, relative to which all other colors were specified, was 26.6 cd/m². All other pedestal and test chromaticities had the same luminance, as L∗ values were the same for all these stimuli.
All stimuli were presented on a Sony Trinitron G400 CRT driven by a CRS (Cambridge Research Systems, Rochester, Kent, UK) VSG 2/5 providing 14-bit per channel chromatic resolution. The display was gamma-corrected using a CRS ColorCAL, and spectral measurements were made with a JETI Specbos 1211 (JETI Technische Instrumente GmbH, Jena, Germany). Observers viewed the display from a distance of 60 cm while resting on a chin-rest, so the whole display subtended 34 × 26 degrees of visual angle. Observers gave their responses with a button box. Black baffles surrounded the monitor and the observer to prevent stray light from the monitor illuminating other objects in the room, which was otherwise dark (see Fig. 2).
Each experimental session began with an initial adaptation period of 120 s. This consisted of adapting images presented at the rate of 5 per second, in a randomized order. The sequence of trials then began immediately. Each trial consisted of the presentation of adapting images for 4 s (again at 5 Hz in a randomized order). This was followed by a 100-ms black interval, then the probe stimulus for 300 ms, and a second 100-ms black interval. The period of adaptation images for the next trial began immediately (Fig. 3), and the observer had the first 2 s of this period to give their response. The presentation rate meant that each image was displayed for 200 ms, more than long enough for object identification with color photographs and line drawings (Biederman & Ju, 1988), but not long enough to result in after-images. The sequence of each trial meant that there was no period where the routine paused to wait for a response, so any adapting effect was maintained as much as possible, but there was enough time between each probe stimulus to make the task possible for the observer.
The sequence of stimuli in one trial of the experiment. Greyscale images, either from one of the sets of images of typically colored images, or from the set of phase-scrambled controls, were shown in a random order, 5 per second for 4 seconds. A 0.1-s black frame followed, and this was followed by the probe stimulus (see description in text) for 0.3 s
The observer's task was to decide which of the two halves of the probe stimulus was least saturated or "most like grey" and respond by pressing the corresponding one of two buttons.
The procedure to measure chromatic bias was analogous to that used by Morgan (2014) and Morgan, Grant, Melmoth, and Solomon (2015) to measure orientation perception bias and by Mather and Sharman (2015) and Morgan, Schreiber, and Solomon (2016) to measure speed perception bias. One half of the probe was assigned to be the 'pedestal' and one was the 'pedestal plus test'. This assignment was randomly determined for each trial, and the observer was not aware of which was which. In each session, there were three pedestal values used, and eight test values per pedestal. Each combination was repeated ten times per session, giving 240 trials per session. Separate sessions were used for each combination of adapting image set, adapting or control condition (see below) and chromatic direction (a∗ and b∗, roughly red-green and yellow-blue). Each session was repeated 1–3 times, giving between 10 and 30 repeats of each unique trial and 3840 to 11,520 total trials per observer. Each session lasted approximately 20 min. Sessions were run in a paired fashion so that a session of the control condition was run with only a short break preceding or following the corresponding adapting images session.
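The trial counts quoted above follow directly from the design (the 4 × 2 × 2 factoring of session types into image set, adapt-vs-control condition, and chromatic direction is our reading of the text):

```python
pedestals = 3
tests_per_pedestal = 8
repeats_per_session = 10
trials_per_session = pedestals * tests_per_pedestal * repeats_per_session

# image set (4) x adapt-vs-control (2) x chromatic direction (2)
session_types = 4 * 2 * 2

print(trials_per_session)                        # 240
print(session_types * trials_per_session * 1)    # 3840  (each session run once)
print(session_types * trials_per_session * 3)    # 11520 (each session run three times)
```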
For each observer, image category (bananas, carrots, leaves, cucumbers), experimental condition (images or phase scrambled control), color direction and pedestal, we calculated the proportion of times the 'pedestal plus test' color was chosen as least saturated. These proportions, as functions of the pedestal (p) plus test (t) chromaticity, were fitted by a modified version of the function used by (Morgan et al., 2016):
$$ \Pr(\text{``}p+t\text{''}) = \frac{1}{2}\left( 1+\operatorname{erf}\left[\frac{2\mu-p-t}{2\sigma}\right]\operatorname{erf}\left[\frac{t-p}{2\sigma}\right]\right) $$
separately for each of the pedestals and for each of the two a∗ and b∗ chromatic axes. Figure 4 shows a representative example of the three functions fitted to data for one observer, image set condition, and color direction. The complete set of plots, including r² values corresponding to the fits, can be found in the Supplementary Material. Some features of these plots are important: if the test were zero (i.e., pedestal and pedestal plus test were the same, which did not actually occur in our experiment) we would expect observers to indicate that the pedestal plus test was least saturated as often as they indicate the pedestal. The fitted curves are constrained to pass through the 0.5 proportion at the chromaticity of the corresponding pedestals (indicated by the vertical dashed lines) and they appear to fit the data well. This anchor need not be the peak of the curve, although it is expected that the peak will be around this point in the case of the control condition with neutral (a∗ = 0 and b∗ = 0) pedestals.
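The anchoring property just described, and the fitting itself, can be sketched directly. The paper does not state its fitting routine, so this toy example recovers μ by a brute-force likelihood grid search on noiseless synthetic proportions:

```python
import math

def p_choose_test(t, p, mu, sigma):
    """Pr that the pedestal-plus-test half is judged least saturated."""
    return 0.5 * (1.0 + math.erf((2.0 * mu - p - t) / (2.0 * sigma))
                  * math.erf((t - p) / (2.0 * sigma)))

# Anchor property: with zero test offset (t == p) the proportion is 0.5,
# whatever the fitted achromatic point mu happens to be.
assert abs(p_choose_test(1.0, 1.0, mu=0.3, sigma=1.0) - 0.5) < 1e-12

# Toy fit: recover mu from noiseless synthetic proportions (sigma fixed)
true_mu, sigma, p = 0.4, 1.0, 0.0
offsets = (-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0)
data = [(p + dt, p_choose_test(p + dt, p, true_mu, sigma)) for dt in offsets]

def neg_log_lik(mu):
    nll = 0.0
    for t, prop in data:
        q = p_choose_test(t, p, mu, sigma)
        nll -= prop * math.log(q) + (1.0 - prop) * math.log(1.0 - q)
    return nll

fit_mu = min(range(-100, 101), key=lambda m: neg_log_lik(m / 100.0)) / 100.0
```

The cross-entropy is minimized exactly when the model proportions match the data, so the grid search lands back on the generating μ.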
Example response proportions and fitted psychometric functions from one observer, for the b∗ (yellow-blue) chromatic direction and banana stimuli. Separate panels show data from trials with different pedestal chromaticities. Left panel: blue (b∗ = − 1). Center panel: neutral (b∗ = 0). Right panel: yellow (b∗ = 1). The vertical axes are the proportions of times that the observer responded that the pedestal plus test stimulus (as opposed to the pedestal) was more neutral, for each of the pedestal plus test chromaticities on the horizontal axis. Data for the control condition (phase-scrambled images) are shown with square symbols and fitted with solid curves. Data from the image condition are shown with x symbols and fitted with dashed curves. Vertical dotted lines indicate the chromaticities of the pedestals, and the horizontal dotted line indicates the proportion at which the observer is equally likely to choose the pedestal plus test or pedestal
We are specifically interested in the effect that the adapting images had on the chromatic bias (or achromatic point), relative to the controls. This is shown by a shift in the position of the peaks of the psychometric functions (the difference between the solid and dashed curves in Fig. 4), which will be accompanied by a change in height (since they must pass through Pr("p + t") = 0.5 at the chromaticity of the pedestal). Our experimental prediction, in the case of the 'normalization' explanation, is that the curves will shift in the direction towards the implied color of the adapting images. For example, this would generally be in the + b∗ direction for bananas. Indeed, such a shift can be seen in Fig. 4 as a rightward shift from the solid curves to the dashed curves. To obtain a measure of these shifts, we take the μ parameters of the fitted curves and calculate the difference between these parameters for data from corresponding adapter and control conditions. We average the three separate shifts obtained from the three pedestals within each combination of other variables. This gives us two-dimensional (a∗, b∗) shift estimates for each observer and for each image category. These shifts are plotted in Fig. 5, where the directions and magnitudes can be seen, separately for each observer and as an average over observers. Particularly for the bananas and carrots image sets, the achromatic shift is in a direction in color space close to the direction of the predominant color of the original images. This is true for all the individual observers, with some variation. One observer shows a very small shift in the carrots condition, and this reduces the size of the mean shift vector for that condition. For the leaves and cucumbers adapting image sets, the magnitude of the shift is often not as great as for the bananas and carrots, and the direction is not consistent with that of the typical green colors of leaves and cucumbers.
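The shift computation itself is simple arithmetic on the fitted parameters. A hypothetical sketch (the μ values below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical fitted mu parameters (achromatic points), one per pedestal,
# for an adapting-image condition and its phase-scrambled control.
mu_adapt   = {"a*": [0.10, 0.15, 0.12], "b*": [0.55, 0.60, 0.50]}
mu_control = {"a*": [0.02, 0.05, 0.03], "b*": [0.10, 0.12, 0.08]}

# Shift = adapter minus control, averaged over the three pedestals,
# giving one (a*, b*) vector per observer and image category.
shift = {ax: float(np.mean(np.subtract(mu_adapt[ax], mu_control[ax])))
         for ax in ("a*", "b*")}
magnitude = float(np.hypot(shift["a*"], shift["b*"]))
direction_deg = float(np.degrees(np.arctan2(shift["b*"], shift["a*"])))
```

A shift vector pointing towards + b∗ (direction near 90°), as in this made-up example, would correspond to a bias towards the implied yellow of the banana images.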
Chromatic biases for all observers in the constant-lightness plane of CIE L∗a∗b∗ space. Each colored symbol, connected to the origin by a thin colored line, shows the chromatic vector representing the perceptual bias resulting from viewing the adapting images. These shifts are calculated from the differences between the mean (μ) parameters of the function fitted to response proportions (see Fig. 4). Biases are shown for each observer separately, distinguished by the plot symbols. Data from the different adapting image sets are shown with different colored symbols: solid yellow for bananas, solid red for carrots, solid green for leaves and open green for cucumbers. The large + symbols, connected to the origin with thick black lines, indicate the mean over all observers and the standard error of that mean. Their color again corresponds to the image set (with a dashed cross for cucumbers). The thin colored dashed lines indicate the vectors towards the mean chromaticities of the original versions of the images used as adaptors, and are displayed in that color
The fitted curves also allow us to estimate the just-noticeable-differences (JNDs) from the σ parameter. These were fairly consistent within each observer, and the average over all conditions and observers was 0.74.
We have shown that biases in color perception occur after viewing achromatic images of objects that typically have a specific color. In some cases, this bias is similar in effect to the adaptation that occurs after exposure to chromatic stimuli. The perceptual shift was measured against a control condition in which the only difference was in the Fourier phase components of the adapting images. Since we compare chromatic judgements after viewing the images of objects to judgements after viewing the same images with phase scrambling, we control for any differential adaptation that might arise from the differences in the spatial contrast sensitivity of the chromatic mechanisms in the peripheral visual system (Mullen, 1985). Since the phase information itself is unlikely to have a biasing effect on color perception, we must assume that the biases we observe come from the semantic content of the non-scrambled images.
How the visual system might use implied color
For at least some of the stimulus image sets (bananas and carrots), the bias is consistent with the memory color of objects having a normalizing input to chromatic perception, by adapting mechanisms that respond to implied chromatic input. The chromatic direction in which the neutral point shifts is, on average, towards the typical colors of the objects in the adapting images. There is some spread around these directions, but this is to be expected, since the observers never saw the original colors of the images and there are likely to be differences between individuals. However, the magnitude of the shift for the carrots images is smaller than that for the bananas. Furthermore, for the two image sets of typically green objects, leaves and cucumbers, we again see generally smaller biases than those from bananas, and these biases are in various directions. Many show a shift in the roughly red-purple direction. Considering only the directions of the shifts, this is opposite to what would be predicted by normalization to the implied green, and more consistent with a color-constancy illuminant-compensation paradigm in which the visual system is discounting a red illuminant (responsible for making the green objects achromatic). It is not immediately clear why the different adapting images have effects consistent with different explanations; however, there are some points to highlight.
That the visual system explicitly estimates the illuminant and discounts it is questionable (for evidence against, see Granzier, Brenner, & Smeets, 2009; Rutherford & Brainard 2002), and by no means the only way that color constancy might be achieved. Even if such a mechanism were in operation, we would expect that any estimation of a biased illumination from our adaptation stimuli would be quite weak. It is highly unlikely for a color-biased scene, such as any of the original images we used before they were transformed to greyscale, to be rendered completely achromatic and varying only in luminance by any plausible illumination. Such an illumination would need to be accurately specified both spectrally and spatially to achieve this. Additionally, as mentioned above, the addition of diagnostic color objects to scenes has been shown to improve color constancy very little. Perhaps more importantly, estimating the illuminant in this way would be weakened by other putative mechanisms of color constancy that operate on the lower-level chromatic information in the stimuli. One of the likely ways that the illuminant is estimated is from the average chromaticity of the scene, assuming that on average surfaces have neutral reflectance spectra (e.g., Land 1983, 1986). This would result in a neutral estimate from all our adaptation stimuli, including the controls, and so would not produce any shift in the neutral point. Our scenes do not include objects other than the ones we selected, so do not show the chromatic bias that would be introduced by a biased illuminant.
The typical yellow of bananas lies close to the yellow region of the daylight axis, which runs in a roughly blue to yellow direction (parallel to the b∗ axis in Fig. 5). It has been suggested that color constancy uses natural illuminants as a prior, and more readily compensates for illuminants that vary in chromaticity on the daylight axis (Brainard, Longere, Delahunt, Freeman, Kraft, & Xiao, 2006; Pearce, Crichton, Mackiewicz, Finlayson, & Hurlbert, 2014), but this is still under debate (e.g., Delahunt & Brainard 2004). Witzel et al., (2011) found memory color effects that were stronger for colors more closely aligned with the daylight axis, but Olkkonen et al., (2008) reject this explanation for their results. In our case, the direction of bias after adapting to images of bananas is inconsistent with a mechanism in which the visual system is discounting an illuminant. Illuminant-compensation might be responsible for some of the results from the typically green adaptors but the direction of shift, while compatible with the colors involved, is roughly orthogonal to the daylight axis and so does not benefit from any increased compensation possible in this direction.
One reason why testing the natural illuminants hypothesis is difficult in this context is that we know of no studies that use images of natural objects that are typically blue, or at least blue but still identifiable when made grayscale. Perhaps the most obvious example of blue in nature is the sky, but an image containing only blue sky would likely not be identifiable when converted to greyscale. Memory color for blue objects has been investigated with man-made objects (Bannert and Bartels, 2013; Bloj et al., 2016; Witzel et al., 2011), but this would have been difficult to achieve in our experiment in which we chose to use natural objects so that they would be familiar to any observer, and many images filled with many examples of those objects.
It is possible that both a normalization mechanism and an illuminant-discounting mechanism are in operation and working against each other. If this is the case, then they are unlikely to be perfectly balanced in their effects, and the perceptual biases we see are the net effects. In the cases of the bananas and carrots, the implied-color adaptation has a greater effect than the illuminant compensation, and vice versa for the leaves images. It is not clear why this should be the case, but we do note below that bananas seem to produce particularly strong memory-color effects. It is also conceivable that the two mechanisms operate in sequence, for example with illuminant estimation from inputs taking place after the effects of adaptation to implied color, and again interacting based on the relative effectiveness of each. However, we do not believe that we can make any interpretations about the sequence of operation from this experiment or dataset. In any case, both mechanisms must depend on signals originating after the stage at which objects are recognized.
We describe the shift directions from our green-implying adaptors as being opposite to that expected from adaptation, but this is only the case if we consider a traditional color space with opponent axes, as a representation analogous to the peripheral color-vision mechanisms. If we are accomplishing something like adaptation, we have no reason to assume that this occurs at such a peripheral opponent stage. Conceivably, the adaptation might occur at a higher-order (i.e., more central) stage, which does not have the same opponent properties. As mentioned above, systems composed of many narrowly tuned mechanisms can have adapting effects in the opposite direction to normalisation and the suggested central site might encode hue in this way. However, it is still difficult to imagine how adaptation to a "green" signal would result in the achromatic point moving in the red direction, yet a "yellow" signal moves it in the yellow direction. It is possible that the variation in strength and consistency that we see in our measured adaptation effects is because the implied color of some stimuli, bananas in particular, closely matches the preferred color of a particular higher-order mechanism in all our observers, more so than cucumbers. It is also quite possible that using chromatic directions aligned with the adapted mechanisms, rather than the axes of CIE L*a*b* that we arbitrarily chose to test, would reveal more consistent effects, but since we know little about this hypothetical stage of the color visual system this would be challenging.
Any perceptual effect from implied color must come from an observer's knowledge of the typical colors of objects, gained through experience. The colors of natural objects such as the ones in our stimuli vary, but we expect the experience with them to be relatively consistent among our observers. All observers grew up and live in the United Kingdom, where bananas are almost always yellow when seen for sale. Similarly, carrots are almost always orange. Cucumbers are common, and similarly shaped items (e.g., courgette, marrow) are also green. Leaves, on the other hand, are often at various stages of red, orange, or brown. The majority of data were collected in the spring and summer months, when the vast majority of leaves in the United Kingdom are green, but nevertheless this may be a source of the inconsistency in the adapting direction and magnitude of the leaves. Furthermore, leaves are often the background elements of a scene, rather than the useful object to be interacted with. This is particularly true when we consider the evolutionarily relevant task of searching for fruit amongst foliage. It is possible that the leaves have little adaptive effect because they are not judged as relevant, and therefore not attended to. Perhaps the memory-color effects from the green objects we chose are simply much weaker than those of the bananas and carrots, and this is why we see smaller and much more variable shifts.
Is the effect real?
The sizes of all the shifts are very small in relation to the often-quoted size of one JND in CIE L∗a∗b∗ space, which is 1.0 (although it could be considerably larger; Mahy, Van Eycken, & Oosterlink, 1994). However, compared to the JNDs specific to our task estimated from the data, the shifts do not seem as small. For example, the magnitude of the shift was 69% of the corresponding JND, on average, for the b∗ shift with the bananas images. We did not expect to find large effects, since viewing achromatic images does not lead to obviously visible after-effects that are noticeable in normal viewing, as is also true for implied-motion stimuli. However, despite considerable variation between observers, the chromatic biases we measured do seem to be in generally consistent directions across observers for the bananas and carrots image sets.
The conflicting directions of the adapting effects in some cases serve to weaken the evidence for adapting effects from implied color. However, we do not believe that this is a problem with our methodology. The 'roving pedestal' 2AFC method (Morgan et al., 2013) that we use is ideally suited to an experiment of this kind, since it allows us to be much more certain that our measurements are those of perceptual biases. Had the observer been asked to simply make a judgement of a single stimulus, then they might have based their response on their expectation after identifying the objects in the adapting set, which could be a symbolic cue. Had the pedestal chromaticity always been the same, they may have been able to identify it on each trial, and this may have influenced the decision. In addition, an adjustment task in which the observer was required to adjust a stimulus so that it appeared neutral, or the same as their memory color of an object, would not have been appropriate, since viewing that stimulus itself for any period of time would likely lead to low-level adaptation effects greater than the effects that we sought.
Comparison to other studies
We know of no other work that has attempted to show adaptation to implied color or memory color, or that the effect of the color implication lasts beyond the presentation of the achromatic implying stimulus. We measure changes in perception not of the memory-color objects themselves, but of simple geometric stimuli that are not presented at the same time. We also use a different task that avoids lengthy response periods, avoids response bias, and does not explicitly measure an achromatic percept. However, there are some other experiments that show similar features to ours in their results. In particular, Olkkonen et al. (2008) found smaller amounts of "over compensation" when observers adjusted the color of a variety of images of fruit and vegetables, including carrots and those that are typically green, to appear achromatic than when they adjusted the color of bananas. We also see a similarly weak effect from our other images, relative to bananas. Indeed, bananas are commonly used as stimuli in memory-color experiments (e.g., Tanaka & Presnell, 1999; Biederman & Ju, 1988; Bannert & Bartels, 2013; Olkkonen et al., 2008; Granzier & Gegenfurtner, 2012; Yendrikhovskij, Blommaert, & de Ridder, 1999; Vurro, Ling, & Hurlbert, 2013), sometimes as the only real object used (Kanematsu & Brainard, 2014; Witzel, 2016), but they are notably absent from Bartleson (1960). If we had limited our stimulus images to bananas, we might have drawn stronger conclusions. It is unclear why bananas elicit a particularly strong memory color, since they change color (from green to yellow to brown) as they ripen, just as many other fruits do. It might be suggested that since the typical yellow of bananas lies close to the daylight axis, an illuminant bias is more readily accepted as a reason for a change in their chromaticity but, as stated above, this seems inconsistent with the results of our experiment.
Source of the top-down signal
Our experiment took the form of a typical visual adaptation experiment, and we presented observers with greyscale images of objects with a typical color in the assumption that these images would trigger the sensation of that color. However, it might be argued that these images are not necessary, and we need only ask our observers to imagine an object without ever seeing it. Observers are able to reproduce or select a color according to their internal representation from a verbal instruction (Bartleson, 1960; Pérez-Carpinell, De Fez, Baldoví, & Soriano, 1998), and perhaps a command to imagine an object, or even simply a color, would be sufficient to cause an adapting effect. This experiment remains to be performed, but since the stimuli described do not imply an illuminant, it might provide a stronger means to distinguish between a normalization and an illuminant-compensation source of the effect. However, when other researchers used stimuli that were degraded in their similarity to the real objects (by using silhouettes), they measured weaker effects (Olkkonen et al., 2008), so perhaps an accurate representation of the object is necessary to invoke memory color.
We provide some evidence that images that do not contain chromatic information, but which imply color through the memory color of the content of the image, can produce perceptual biases that are similar to chromatic adaptation to the real color. This evidence is weak, however, and we remain cautious in drawing conclusions, particularly when different mechanisms may be working in opposition.
Bannert, M., & Bartels, A. (2013). Decoding the yellow of a gray banana. Current Biology, 23(22), 2268–2272. https://doi.org/10.1016/j.cub.2013.09.016
Bartleson, C. J. (1960). Memory colors of familiar objects. Journal of the Optical Society of America, 50(1), 73–77. https://doi.org/10.1364/JOSA.50.000073
Biederman, I., & Ju, G. (1988). Surface versus edge-based determinants of visual recognition. Cognitive Psychology, 20(1), 38–64. https://doi.org/10.1016/0010-0285(88)90024-2
Bloj, M., Weiß, D., & Gegenfurtner, K. R. (2016). Bias effects of short- and long-term color memory for unique objects. Journal of the Optical Society of America A: Optics Image Science and Vision, 33(4), 492–500. https://doi.org/10.1364/JOSAA.33.000492.
Brainard, D. H., Longere, P., Delahunt, P. B., Freeman, W. T., Kraft, J. M., & Xiao, B. (2006). Bayesian model of human color constancy. Journal of Vision, 6(11), 1267–1281. https://doi.org/10.1167/6.11.10
Clifford, C. W. G., Webster, M. A., Stanley, G. B., Stocker, A. A., Kohn, A., Sharpee, T. O., & Schwartz, O. (2007). Visual adaptation: Neural, psychological and computational aspects. Vision Research, 47(25), 3125–3131. https://doi.org/10.1016/j.visres.2007.08.023.
Delahunt, P. B., & Brainard, D. H. (2004). Does human color constancy incorporate the statistical regularity of natural daylight?. Journal of Vision, 57–81. https://doi.org/10.1167/4.2.1
Delk, J. L., & Fillenbaum, S. (1965). Differences in perceived color as a function of characteristic color. The American Journal of Psychology, 78(2), 290–293.
D'Zmura, M., & Lennie, P. (1986). Mechanisms of color constancy. Journal of the Optical Society of America A: Optics Image Science and Vision, 3(10), 1662–72. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/3772628
Eskew, R. T. (2009). Higher order color mechanisms: A critical review. Vision Research, 49(22), 2686–704. https://doi.org/10.1016/j.visres.2009.07.005
Firestone, C., & Scholl, B. J. (2016). Cognition Does not affect perception: Evaluating the evidence for "top-down" effects. The Behavioral and brain sciences, 39(2016), e229. https://doi.org/10.1017/S0140525X15000965.
Fitzpatrick, D., Usrey, W. M., Schofield, B. R., & Einstein, G. (1994). The sublaminar organization of corticogeniculate neurons in layer 6 of macaque striate cortex. Visual Neuroscience, 11(02), 307–315. https://doi.org/10.1017/S0952523800001656
Foster, D. H. (2011). Color constancy. Vision Research, 51(7), 674–700. https://doi.org/10.1016/j.visres.2010.09.006
Goffaux, V., Jacques, C., Mourax, A., Oliva, A., Schyns, P. G., & Rossion, B. (2005). Diagnostic colours contribute to the early stages of scene categorization: Behavioural and neurophysiological evidence. Visual Cognition, 12(6), 878–892. https://doi.org/10.1080/13506280444000562.
Granzier, J. J. M., Brenner, E., & Smeets, J. B. J. (2009). Can illumination estimates provide the basis for color constancy? Journal of Vision, 9(3), 1–11. https://doi.org/10.1167/9.3.18
Granzier, J. J. M., & Gegenfurtner, K. R. (2012). Effects of memory colour on colour constancy for unknown coloured objects. i-Perception, 3, 190–215. https://doi.org/10.1068/i0461.
Hansen, T., Olkkonen, M., Walter, S., & Gegenfurtner, K. R. (2006). Memory modulates color appearance. Nature Neuroscience, 9(11), 1367–8. https://doi.org/10.1038/nn1794
Hering, E. (1964). L. Hurvich, & D. Jameson (Eds.) Outlines of a theory of the light sense. Cambridge: Harvard University Press. (Translators).
Hurlbert, A. C. (1998). Computational models of color constancy. In V. Walsh, & J. Kulikowski (Eds.) Perceptual constancy: Why things look as they do (pp. 283–322). Cambridge: Cambridge University Press.
Kanematsu, E., & Brainard, D. H. (2014). No measured effect of a familiar contextual object on color constancy. Color Research and Application, 39(4), 347–359. https://doi.org/10.1002/col.21805
Krauskopf, J., Williams, D. R., & Heeley, D. W. (1982). Cardinal directions of colour space. Vision Research, 22, 1123–1131.
Krauskopf, J., Williams, D. R., Mandler, M. B., & Brown, A. M. (1986). Higher order color mechanisms. Vision Research, 26(1), 23–32. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/3716212
Land, E. H. (1983). Recent advances in Retinex theory and some implications for cortical computations: Color vision and the natural image. Proceedings of the National Academy of Sciences, 80(16), 5163–5169. https://doi.org/10.1073/pnas.80.16.5163
Land, E. H. (1986). Recent advances in Retinex theory. Vision Research, 26(1), 7–21. https://doi.org/10.1016/0042-6989(86)90067-2.
Lee, R. J., Dawson, K. A., & Smithson, H. E. (2012). Slow updating of the achromatic point after a change in illumination. Journal of Vision, 12(1), 19. https://doi.org/10.1167/12.1.19
Lupyan, G. (2015). Object knowledge changes visual appearance: Semantic effects on color afterimages. Acta Psychologica, 161, 117–130. https://doi.org/10.1016/j.actpsy.2015.08.006
Mahy, M., Van Eycken, L., & Oosterlink, A. (1994). Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV. Color Research and Application, 19(2), 105–121.
Mather, G., & Sharman, R. J. (2015). Decision-level adaptation in motion perception. Royal Society Open Science, 2, 150418.
Mather, G., Verstraten, F., & Anstis, S. M. (Eds.) (1998). The motion aftereffect: A modern perspective. Cambridge: MIT Press.
Morgan, M. (2014). A bias-free measure of retinotopic tilt adaptation. Journal of Vision, 14, 1–9. https://doi.org/10.1167/14.1.7
Morgan, M., Grant, S., Melmoth, D., & Solomon, J. A. (2015). Tilted frames of reference have similar effects on the perception of gravitational vertical and the planning of vertical saccadic eye movements. Experimental Brain Research, 233(7), 2115–2125. https://doi.org/10.1007/s00221-015-4282-0
Morgan, M., Melmoth, D., & Solomon, J. A. (2013). Linking hypotheses underlying Class A and Class B methods. Visual Neuroscience, 30(5-6), 197–206. https://doi.org/10.1017/S095252381300045X
Morgan, M., Schreiber, K., & Solomon, J. A. (2016). Low-level mediation of directionally specific motion aftereffects: Motion perception is not necessary. Attention, Perception, and Psychophysics, 78, 2621–2632. https://doi.org/10.3758/s13414-016-1160-1
Mullen, K. T. (1985). The contrast sensitivity of human colour vision to red-green and blue-yellow chromatic gratings. The Journal of Physiology, 359, 381–400. https://doi.org/10.1113/jphysiol.1985.sp015591
Olkkonen, M., Hansen, T., & Gegenfurtner, K. R. (2008). Color appearance of familiar objects: Effects of object shape, texture, and illumination changes. Journal of Vision, 8(5), 1–16. https://doi.org/10.1167/8.5.13
Pearce, B., Crichton, S., Mackiewicz, M., Finlayson, G. D., & Hurlbert, A. (2014). Chromatic illumination discrimination ability reveals that human colour constancy is optimised for blue daylight illuminations. PLoS ONE, 9(2), e87989. https://doi.org/10.1371/journal.pone.0087989.
Pérez-Carpinell, J., De Fez, M. D., Baldoví, R., & Soriano, J. C. (1998). Familiar objects and memory color. Color Research and Application, 23(6), 416–427. https://doi.org/10.1002/(SICI)1520-6378(199812)23:6<416::AID-COL10>3.0.CO;2-N
Rutherford, M. D., & Brainard, D. H. (2002). Lightness constancy: A direct test of the illumination-estimation hypothesis. Psychological Science: A Journal of the American Psychological Society/APS, 13(2), 142–149. https://doi.org/10.1111/1467-9280.00426.
Shimojo, S., Kamitani, Y., & Nishida, S. (2001). Afterimage of perceptually filled-in surface. Science, 293 (5535), 1677–1680. https://doi.org/10.1126/science.1060161
Smithson, H. E. (2005). Sensory, computational and cognitive components of human colour constancy. Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 360(1458), 1329–46. https://doi.org/10.1098/rstb.2005.1633
Tanaka, J. W., & Presnell, L. M. (1999). Color diagnosticity in object recognition. Perception and Psychophysics, 61(6), 1140–1153. https://doi.org/10.3758/BF03207619
Valeton, J. M., & van Norren, D. (1983). Light adaptation of primate cones: An analysis based on extracellular data. Vision Research, 23(12), 1539–1547. https://doi.org/10.1016/0042-6989(83)90167-0.
Vurro, M., Ling, Y., & Hurlbert, A. C. (2013). Memory color of natural familiar objects: Effects of surface texture and 3-D shape. Journal of Vision, 13(7), 20. https://doi.org/10.1167/13.7.20
Webster, M. A. (1996). Human colour perception and its adaptation. Network: Computation in Neural Systems, 7(4), 587–634.
Webster, M. A. (2011). Adaptation and visual coding. Journal of Vision, 11(5), 3–3. https://doi.org/10.1167/11.5.3
Webster, M. A., & Leonard, D. (2008). Adaptation and perceptual norms in color vision. Journal of the Optical Society of America A: Optics, Image Science and Vision, 25(11), 2817–2825.
Winawer, J., Huk, A. C., & Boroditsky, L. (2008). A motion aftereffect from still photographs depicting motion. Psychological Science, 19(3), 276–283. https://doi.org/10.1111/j.1467-9280.2008.02080.x.
Witzel, C. (2016). An easy way to show memory color effects. i-Perception, 7(5), 1–11. https://doi.org/10.1177/2041669516663751.
Witzel, C., Valkova, H., Hansen, T., & Gegenfurtner, K. R. (2011). Object knowledge modulates colour appearance. i-Perception, 2, 13–49. https://doi.org/10.1068/i0396
Yendrikhovskij, S. N., Blommaert, F. J. J., & de Ridder, H. (1999). Representation of memory prototype for an object color. Color Research and Application, 24(6), 393–410.
Zaidi, Q., Ennis, R., Cao, D., & Lee, B. (2012). Neural locus of color afterimages. Current Biology, 22, 220–224. Retrieved from http://www.perceptionweb.com/abstract.cgi?id=v110748; http://www.cell.com/current-biology/retrieve/pii/S0960982211013984
Zeki, S., Cheadle, S., Pepper, J., & Mylonas, D. (2017). The constancy of colored after-images. Frontiers in Human Neuroscience, 11(May), 1–8. https://doi.org/10.3389/fnhum.2017.00229
We thank the many individuals who commented on early presentations of this work at conferences. We thank Ingrdia Einingyte for assistance with data collection. We also thank all our observers for their time.
School of Psychology, University of Lincoln, Lincoln, UK
R. J. Lee & G. Mather
Correspondence to G. Mather.
Open Practices Statement
Data from this experiment are available as Supplementary Material. This experiment was not preregistered.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Lee, R.J., Mather, G. Chromatic adaptation from achromatic stimuli with implied color. Atten Percept Psychophys 81, 2890–2901 (2019). https://doi.org/10.3758/s13414-019-01716-5
Keywords: Color and light: color; Color and light: constancy; Adaptation and aftereffects
\begin{document}
\title[stably free modules over Laurent polynomial rings]{\textbf{On stably free modules over Laurent polynomial rings}} \author[A.Abedelfatah]{Abed Abedelfatah} \address{Department of Mathematics, University of Haifa, Mount Carmel, Haifa 31905, Israel} \email{[email protected]} \keywords{Stably free modules, Hermite rings, Unimodular rows, Laurent polynomial rings, Constructive Mathematics} \begin{abstract} We prove constructively that for any finite-dimensional commutative ring $R$ and $n\geq\dim (R)+2$, the group $\E_{n}(R[X,X^{-1}])$ acts transitively on $\Um_{n}(R[X,X^{-1}])$. In particular, we obtain that for any finite-dimensional ring $R$, every finitely generated stably free module over $R[X,X^{-1}]$ of rank $>\dim R$ is free, i.e., $R[X,X^{-1}]$ is $(\dim R)$-Hermite. \end{abstract} \maketitle \section{Introduction}
We denote by $R$ a commutative ring with unity and by $\mathbb{N}$ the set of non-negative integers. $\Um_{n}(R)$ is the set of unimodular rows of length $n$ over $R$, that is, all $(x_{0},\dots,x_{n-1})\in R^{n}$ such that $x_{0}R+\dots+x_{n-1}R=R$. If $u,v\in \Um_{n}(R)$ and $G$ is a subgroup of $\GL_{n}(R)$, we write $u\sim_{G}v$ if there exists $g\in G$ such that $v=ug$. Recall that $\E_{n}(R)$ denotes the subgroup of $\GL_{n}(R)$ generated by all $\E_{ij}(a):=I_{n}+ae_{ij}$ (where $i\neq j$, $a\in R$, and $e_{ij}$ denotes the $n\times n$ matrix whose only non-zero entry is $1$ in the $(i,j)$-th place). We abbreviate the notation $u\sim_{\E_{n}(R)}v$ to $u\sim_{\E}v$. We say that a ring $R$ is Hermite (resp.\ $d$-Hermite) if any finitely generated stably free $R$-module (resp.\ any finitely generated stably free $R$-module of rank $>d$) is free.
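To fix ideas, here is a small worked example (our illustration, not taken from the cited sources) of the right action of elementary matrices on a unimodular row; each arrow denotes right multiplication by the displayed matrix.

```latex
% Illustrative example (not from the source): reducing a unimodular row
% over R = \mathbb{Z} to e_1 by elementary matrices.
The row $(5,3)\in\Um_{2}(\mathbb{Z})$ is unimodular, since
$2\cdot 5-3\cdot 3=1$. Right multiplication by $\E_{ij}(a)=I_{2}+ae_{ij}$
adds $a$ times the $i$-th entry to the $j$-th entry, so
\[
(5,3)\xrightarrow{\E_{21}(-1)}(2,3)
\xrightarrow{\E_{12}(-1)}(2,1)
\xrightarrow{\E_{21}(-2)}(0,1)
\xrightarrow{\E_{21}(1)}(1,1)
\xrightarrow{\E_{12}(-1)}(1,0)=e_{1},
\]
hence $(5,3)\sim_{\E}e_{1}$.
```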
In \cite{F}, A.A.Suslin proved: \begin{thm}\emph{(A.A.Suslin)}\\ Let $R$ be a Noetherian ring and let $$A=R[X_{1}^{\pm1},\dots,X_{k}^{\pm1},X_{k+1},\dots,X_{n}].$$ Then for $n\geq\max{(3,\dim(R)+2)}$ the group $\E_{n}(A)$ acts transitively on $\Um_{n}(A)$. \end{thm} In particular, we obtain that $\E_{n}(R[X,X^{-1}])$ acts transitively on $\Um_{n}(R[X,X^{-1}])$ for any Noetherian ring $R$, where $n\geq\max{(3,\dim(R)+2)}$. In \cite{G}, I.Yengui proved: \begin{thm}\emph{(I.Yengui)}\\ Let $R$ be a ring of dimension $d$, $n\geq d+1$, and let $f\in \Um_{n+1}(R[X])$. Then there exists $E\in \E_{n+1}(R[X])$ such that $f\cdot E=e_{1}$. \end{thm} In this article we generalize these results by proving: \begin{thm}\label{30} For any finite-dimensional ring $R$, $\E_{n}(R[X,X^{-1}])$ acts transitively on $\Um_{n}(R[X,X^{-1}])$, where $n\geq \dim(R)+2$. \end{thm}
This gives a positive answer to Yengui's question (Question $9$ of \cite{G}). The proof we give is a close adaptation of Yengui's proof to the Laurent case.
\section{Preliminary results on unimodular rows}
A.A.Suslin proved in \cite{F} that if $f=(f_{0},\dots,f_{n})\in \Um_{n+1}(R[X])$, where $f_{1}$ is unitary and $n\geq 1$, then there exists $w\in \SL_{2}(R[X])\cdot\E_{n+1}(R[X])$ such that $f\cdot w=e_{1}$. In fact, this theorem is a crucial point in his proof of Serre's conjecture. R.A.Rao generalized this in [\cite{D}, Corollary 2.5] by proving:
\begin{thm}\emph{(R.A.Rao, \cite{D})}\label{70}\\ Let $f=(f_{0},\dots,f_{n})\in \Um_{n+1}(R[X])$, where $n\geq 2$. If some $f_{i}$ is unitary, then $f$ is completable to a matrix in $\E_{n+1}(R[X])$. \end{thm}
Recall that the boundary ideal of an element $a$ of a ring $R$ is the ideal $\mathcal{I}(a)$ of $R$ generated by $a$ and all $y\in R$ such that $ay$ is nilpotent. Moreover, $\dim R\leq d\Leftrightarrow\dim(R/\mathcal{I}(a))\leq d-1$ for all $a\in R$ \cite{C}.
\begin{thm}\emph{[\cite{B}, Theorem 2.4]}\label{73}\\ Let $R$ be a ring of dimension $\leq d$ and $a=(a_{0},\dots,a_{n})\in \Um_{n+1}(R)$, where $n\geq d+1$. Then there exist $b_{1},\dots,b_{n}\in R$ such that $$\langle a_{1}+b_{1}a_{0},\dots,a_{n}+b_{n}a_{0}\rangle=R$$ \end{thm}
In fact, we can obtain a stronger result if $f\in \Um_{n+1}(R_{S})$, where $S$ is a multiplicative subset of $R$:
\begin{prop}\label{41} Let $S$ be a multiplicative subset of $R$ such that $S^{-1}R$ has dimension $d$. Let $(a_{0},\dots,a_{n})\in M_{n+1}(R)$ be a row such that $(\frac{a_{0}}{1},\dots,\frac{a_{n}}{1})\in \Um_{n+1}(S^{-1}R)$, where $n> d$. Then there exist $b_{1},\dots,b_{n}\in R$ and $s\in S$ such that $$s\in (a_{1}+b_{1}a_{0})R+\dots+(a_{n}+b_{n}a_{0})R.$$ \end{prop} \begin{proof} We proceed by induction on $d$. If $d=0$, then $R_{S}/\mathcal{I}(\frac{a_{n}}{1})\cong (R/J)_{\overline{S}}$ is trivial, where $\overline{S}=\setf{s+J}{s\in S}$, $J=i^{-1}(\mathcal{I}(\frac{a_{n}}{1}))$, and $i:R\rightarrow R_{S}$ is the natural homomorphism. So $1\in\langle \frac{a_{n}}{1},\frac{b_{n}}{1}\rangle$ in $R_{S}$, where $b_{n}\in R$ and $\frac{a_{n}b_{n}}{1}$ is nilpotent. Since $1\in \langle\frac{a_{1}}{1},\dots,\frac{a_{n-1}}{1},\frac{a_{n}}{1},\frac{b_{n}a_{0}}{1}\rangle$, by [\cite{B}, Lemma 2.3] we get $1\in \langle\frac{a_{1}}{1},\dots,\frac{a_{n-1}}{1},\frac{a_{n}+b_{n}a_{0}}{1}\rangle$, i.e., there exists $s\in S$ such that $s\in a_{1}R+\dots+a_{n-1}R+(a_{n}+b_{n}a_{0})R$.
Assume now $d>0$. By the induction assumption with respect to the ring $R_{S}/\mathcal{I}(\frac{a_{n}}{1})\cong (R/J)_{\overline{S}}$ we can find $\bar{b}_{1},\dots,\bar{b}_{n-1}\in R/J$ such that $$\langle\frac{\bar{a}_{1}+\bar{b}_{1}\bar{a}_{0}}{\overline{1}},\dots,\frac{\bar{a}_{n-1}+\bar{b}_{n-1}\bar{a}_{0}}{\overline{1}}\rangle=(R/J)_{\overline{S}}.$$ So $\langle\overline{\frac{a_{1}+b_{1}a_{0}}{1}},\dots,\overline{\frac{a_{n-1}+b_{n-1}a_{0}}{1}}\rangle=R_{S}/\mathcal{I}(\frac{a_{n}}{1})$, this means that $$\langle\frac{a_{1}+b_{1}a_{0}}{1},\dots,\frac{a_{n-1}+b_{n-1}a_{0}}{1},\frac{a_{n}}{1},\frac{b_{n}}{1}\rangle=R_{S}$$ where $\frac{a_{n}b_{n}}{1}$ is nilpotent. So by [\cite{B}, Lemma 2.3] $$\langle\frac{a_{1}+b_{1}a_{0}}{1},\dots,\frac{a_{n-1}+b_{n-1}a_{0}}{1},\frac{a_{n}+b_{n}a_{0}}{1}\rangle=R_{S}.$$ \end{proof}
Let $f\in \Um_{n+1}(R[X])$, where $n\geq \frac{d}{2}+1$, with $R$ a local ring of dimension $d$. M.Roitman's argument in [\cite{E}, Theorem 5] shows how one can decrease the degree of all but one (special) co-ordinate of $f$. In the absence of a monic polynomial as a co-ordinate of $f$ he uses a Euclidean algorithm, and this is achieved via the following lemma.
\begin{lem}\emph{(M.Roitman, [\cite{E}, Lemma 1])}\label{2}\\ Let $(x_{0},\dots,x_{n})\in \Um_{n+1}(R)$, $n\geq2$, and let $t$ be an element of $R$ which is invertible $\bmod{(Rx_{0}+\dots+Rx_{n-2})}$. Then $$(x_{0},\dots,x_{n})\sim_{\E_{n+1}(R)}(x_{0},\dots,t^{2}x_{n})\sim_{\E_{n+1}(R)}(x_{0},\dots ,tx_{n-1},tx_{n}).$$ \end{lem}
\section{The main results}
\begin{defns} Let $f\in R[X,X^{-1}]$ be a nonzero Laurent polynomial. We denote $\deg(f)=\hdeg(f)-\ldeg(f)$, where $\hdeg(f)$ and $\ldeg(f)$ denote respectively the highest and the lowest degree of $f$.
Let $\hc(f)$ and $\lc(f)$ denote respectively the coefficients of the highest and the lowest degree term of $f$. An element $f\in R[X,X^{-1}]$ is called doubly unitary if $\hc(f),\lc(f)\in U(R)$. \end{defns} For example, $\deg(X^{-3}+X^{2})=5$. \begin{lem}\label{40} Let $f_{1},\dots,f_{n}\in R[X,X^{-1}]$ be such that $\hdeg(f_{i})\leq k-1$, $\ldeg(f_{i})\geq -m$ for all $1\leq i\leq n$. Let $f\in R[X,X^{-1}]$ with $\hdeg(f)=k,~ \ldeg(f)\geq -m$, where $k,m\in \mathbb{N}$. Assume that $\hc(f)\in U(R)$ and that the coefficients of $f_{1},\dots,f_{n}$ generate the ideal $(1)$ of $R$. Then $I=\langle f_{1},\dots,f_{n},f\rangle$ contains a polynomial $h$ with $\hdeg(h)=k-1$, $\ldeg(h)\geq -m$ and $\hc(h)\in U(R)$. \end{lem} \begin{proof} Since $X^{m}f_{1},\dots,X^{m}f_{n},X^{m}f\in R[X]$, by [\cite{A}, \S4, Lemma 1(b)], $I$ contains a unitary polynomial $h_{1}\in R[X]$ of degree $m+k-1$. So $h=X^{-m}h_{1}\in I$ satisfies $\hdeg(h)=k-1$, $\ldeg(h)\geq -m$ and $\hc(h)\in U(R)$. \end{proof}
\begin{prop}\label{42} Let $I\unlhd R[X,X^{-1}]$ be an ideal, $J\unlhd R$, such that $I$ contains a doubly unitary polynomial. If $I+J[X,X^{-1}]=R[X,X^{-1}]$ then $(I\cap R)+J=R$. \end{prop} \begin{proof} Let us denote by $h_1$ a doubly unitary polynomial in $I$. Since $I+J[X,X^{-1}]=R[X,X^{-1}]$, there exist $h_2\in I$ and $h_3\in J[X,X^{-1}]$ such that $h_2+h_3=1$. Let $g_i=X^{-\ldeg(h_i)}h_i$, for $i=1,2,3$. Since $X^l\in \sum_{i=1}^{3}g_{i}R[X]$, for some $l\geq 0$, and $g_{1}\equiv u\bmod{XR[X]}$, where $u\in U(R)$, we obtain that $\langle g_1,g_2,g_3\rangle=\langle1\rangle$ in $R[X]$. By [\cite{H}, Lemma 2], we obtain $(\langle g_1,g_2\rangle\cap R)+J=R$. So $(I\cap R)+J=R$. \end{proof}
\begin{thm}\label{43} Let $f=(f_{0},\dots,f_{n})\in \Um_{n+1}(R[X,X^{-1}])$, where $n\geq 2$. Assume that $f_{0}$ is a doubly unitary polynomial. Then $$f\sim_{\E_{n+1}(R[X,X^{-1}])}(1,0,\dots,0).$$ \end{thm} \begin{proof} By (\ref{2}), $f\sim_{\E}(X^{-\ldeg(f_{0})}f_{0},X^{-\ldeg(f_{0})}f_{1},f_{2},\dots,f_{n})\sim_{\E}\\(X^{-\ldeg(f_{0})}f_{0},X^{-\ldeg(f_{0})+2k}f_{1},X^{2k}f_{2},\dots,X^{2k}f_{n})=(g_{0},\dots,g_{n})$ where $k\in \mathbb{N}$. For sufficiently large $k$, we obtain that $g_{0},\dots,g_{n}\in R[X]$. Clearly, $X^{l}\in \sum_{i=0}^{n}g_{i}R[X]$ for some $l\geq 0$. Since $g_{0}\equiv u\bmod{XR[X]}$, where $u\in U(R)$, we get $X^{l}R[X]+g_{0}R[X]=R[X]$, so $g\in \Um_{n+1}(R[X])$. By (\ref{70}), $g\sim_{\E} e_{1}$. \end{proof}
\begin{rem}\label{44} Let $a=(a_{1},\dots,a_{n})\in \Um_{n+1}(R)$, where $n\geq 2$. If $$a\sim_{\E_{n}(R/\mathrm{Nil}(R))}e_{1}$$ then $a\sim_{\E_{n}(R)}e_{1}$. \end{rem}
\begin{prop}\label{48} Let $R$ be a zero-dimensional ring and $f=(f_{0},\dots,f_{n})\\\in \Um_{n+1}(R[X,X^{-1}])$, where $n\geq 1$. Then $$f\sim_{\E}e_{1}.$$ \end{prop} \begin{proof} We proceed by induction on $\deg f_0+\deg f_1$. We may assume that $R$ is a reduced ring. Let $a=\hc(f_{0})$ and $b=\lc(f_{0})$. Assume that $ab\in U(R)$. Then by elementary transformations of the form $$f_1-X^{\ldeg(f_1)-\ldeg(f_0)}b^{-1}\lc(f_1)f_0$$ we obtain that $f\sim_{\E}(f_0,h_1,f_2,\dots,f_n)$, where $\ldeg(h_1)>\ldeg(f_0)$. By elementary transformations of the form $$f_1-X^{\hdeg(f_1)-\hdeg(f_0)}a^{-1}\hc(f_1)f_0$$ we obtain that $f\sim_{\E}(f_0,g_1,f_2,\dots,f_n)$, where $\ldeg(g_1)\geq\ldeg(f_0)$ and $\hdeg(g_1)<\hdeg(f_0)$. So we may assume that $\deg f_{0}\leq \deg f_{1}$ and $ab\notin U(R)$. Assume that $a\notin U(R)$. We have $Ra=Re$ for some idempotent $e$. Let $c=\hc(f_1)$. Since $e\in Ra$, we may assume that $c\ne0$ and that $c\in R(1-e)$. Note that\begin{center} $(1-e)f=(f_{0}(1-e),\dots,f_{n}(1-e))\in \Um_{n+1}(R(1-e)[X,X^{-1}])$ and $ef=(f_{0}e,\dots,f_{n}e)\in \Um_{n+1}(Re[X,X^{-1}])$.\end{center} By the inductive assumption, there are matrices $$A\in \E_{n+1}(R(1-e)[X,X^{-1}]),~B\in \E_{n+1}(Re[X,X^{-1}])$$ so that $(1-e)fA=(1-e,0,\dots,0)$ and $efB=(e,0,\dots,0).$ Let $$A=\prod_{s=1}^{k}\E_{ij}(h_{s}),~B=\prod_{s=1}^{t}\E_{ij}(g_{s})$$ where $$\E_{ij}(h_{s})=(1-e)I_{n+1}+h_{s}e_{ij},~\E_{ij}(g_{s})=eI_{n+1}+g_{s}e_{ij}$$ and $i\ne j\in \{1,\dots,n+1\},~h_{s}\in R(1-e)[X,X^{-1}],~g_{s}\in Re[X,X^{-1}]$. Let $$A'=\prod_{s=1}^{k}(I_{n+1}+h_{s}e_{ij}),~B'=\prod_{s=1}^{t}(I_{n+1}+g_{s}e_{ij}).$$ Clearly, $(1-e)A'=A$, $eB'=B$ and $A',B'\in \E_{n+1}(R[X,X^{-1}])$. Let $C=A'B'$, then $C\in \E_{n+1}(R[X,X^{-1}])$ and $$(1-e)C=(1-e)A'(1-e)B'=A(1-e)I_{n+1}=(1-e)A'=A.$$ Similarly, we have $eC=B$. Let $fC=(g_{0},\dots,g_{n})=g$. 
Thus \begin{center}$g_{0}(1-e)=1-e$ and $g_{1}e=e$.\end{center} So \begin{center}$f\sim_{\E_{n+1}(R[X,X^{-1}])}(g_0,\dots,g_n)\sim_{\E_{n+1}(R[X,X^{-1}])}(g_{0}+e,\dots,g_n)=(1+g_{0}e,\dots,g_{n})\sim_{\E_{n+1}(R[X,X^{-1}])}(1+g_{0}e,-g_{0}e,\dots,g_{n})\sim_{\E_{n+1}(R[X,X^{-1}])}e_{1}.$ \end{center} Similarly, if $b\notin U(R)$, then $f\sim_{\E}e_1$. \end{proof}
\begin{prop}\label{50} If $R$ is a zero-dimensional ring, then $$\SL_{n}(R[X,X^{-1}])=\E_{n}(R[X,X^{-1}])$$ for all $n\geq 2$. \end{prop} \begin{proof} Clearly, $\E_{n}(R[X,X^{-1}])\subseteq \SL_{n}(R[X,X^{-1}])$. Let $M\in \SL_{n}(R[X,X^{-1}])$. By (\ref{48}), we can perform suitable elementary transformations to bring $M$ to $M_1$ with first row $(1,0,\dots,0)$. Now a sequence of row transformations bring $M_1$ to $$M_{2}=\left(\begin{array}{cc} 1 & 0\\ 0 & M'\end{array}\right)$$ where $M'\in \SL_{n-1}(R[X,X^{-1}])$. The proof now proceeds by induction on $n$. \end{proof}
\begin{lem}\label{13} Let $f=(f_0,\dots,f_n)\in \Um_{n+1}(R[X,X^{-1}])$, where $n\geq 2$. Assume that $\hc(f_0)$ is invertible modulo $f_0$. Then $$f\sim_{\E}(f_0,g_1,\dots,g_n)$$ where $\hdeg(g_i)<\hdeg(f_0),\ldeg(g_i)\geq\ldeg(f_0)$, for all $1\leq i\leq n$. \end{lem}
\begin{proof} By (\ref{2}), $f\sim_{\E}(f_0,X^{2k}f_1,\dots,X^{2k}f_n)$ for all $k\in \mathbb{Z}$. So we may assume that $\ldeg(f_i)>\ldeg(f_0)$. Let $a=\hc(f_0)$. By (\ref{2}) we have $$f\sim_{\E}(f_0,a^{2}f_1,\dots,a^{2}f_n).$$ Using elementary transformations of the form $$a^{2}f_i-aX^{\hdeg(f_i)-\hdeg(f_0)}\hc(f_i)f_0$$ we lower the degrees of $f_i$, for all $1\leq i\leq n$, and obtain the required row. \end{proof}
\begin{lem}\label{10} Let $R$ be a ring of dimension $d>0$ and $$f=(r,f_{1},\dots,f_{n})\in \Um_{n+1}(R[X,X^{-1}])$$ where $r\in R,~n\geq d+1$. Assume that for every ring $T$ of dimension $<~d$ and $n\geq\dim(T)+1$, the group $\E_{n+1}(T[X,X^{-1}])$ acts transitively on $\Um_{n+1}(T[X,X^{-1}])$. Then $f\sim_{\E(R[X,X^{-1}])}e_{1}$. \end{lem} \begin{proof} Since $\dim(R/\mathcal{I}(r))<\dim(R)$, over $R/\mathcal{I}(r)$ we can complete $(f_{1},\dots,f_{n})$ to a matrix in $\E_{n}(R/\mathcal{I}(r)[X,X^{-1}])$. If we lift this matrix, we obtain that \begin{center} $(r,f_{1},\dots,f_{n})\sim_{\E_{n+1}(R[X,X^{-1}])}(r,1+rw_{1}+h_{1},\dots,rw_{n}+h_{n})\sim_{\E_{n+1}(R[X,X^{-1}])}(r,1+h_{1},\dots,h_{n})$ \end{center} where $h_{i},~w_{i}\in R[X,X^{-1}]$ and $rh_{i}=0$ for all $1\leq i\leq n$. Then $$f\sim_{\E_{n+1}(R[X,X^{-1}])}(r-r(1+h_{1}),1+h_{1},\dots,h_{n})\sim_{\E_{n+1}(R[X,X^{-1}])}e_{1}.$$ \end{proof}
\begin{lem}\label{11} Let $R$ be a ring of dimension $d>0$ and $$f=(f_{0},\dots,f_{n})\in \Um_{n+1}(R[X,X^{-1}])$$ such that $n\geq d+1$, $f_{0}=ag$ and $a^{t}=\hc(f_{0})$, where $a\in R\setminus U(R),~0\neq t\in \mathbb{N}$. Assume that for every ring $T$ of dimension $<~d$ and $n\geq\dim(T)+1$, the group $\E_{n+1}(T[X,X^{-1}])$ acts transitively on $\Um_{n+1}(T[X,X^{-1}])$. Then $f\sim_{\E(R[X,X^{-1}])}e_{1}$. \end{lem}
\begin{proof} We prove by induction on the number $M$ of non-zero coefficients of the polynomial $f_{0}$ that $f\sim_{\E}e_{1}$. If $M=1$, then $f_{0}=rX^{m}$, where $r\in R,m\in\mathbb{Z}$. By (\ref{2}), $f\sim_{\E}(r,X^{-m}f_{1},f_{2},\dots,f_{n})$. So by (\ref{10}), we obtain that $f\sim_{\E}e_{1}$. Assume now that $M>1$. Let $S$ be the multiplicative subset of $R$ generated by $a,b$, where $b=\lc(g)$, i.e., $S=\setf{a^{k_{1}}b^{k_{2}}}{k_{1},k_{2}\in \mathbb{N}}$. By the inductive step, with respect to the ring $R/abR$, we obtain from $f$ a row $\equiv (1,0,\dots,0)\bmod{abR[X,X^{-1}]}$; moreover, we can perform these transformations so that at every stage the row contains a doubly unitary polynomial in $R_{S}[X,X^{-1}]$. Indeed, if we have to perform, e.g., the elementary transformation $$(g_{0},\dots,g_{n})\rightarrow (g_{0},g_{1}+hg_{0},\dots,g_{n})$$ and $g_{1}$ is a doubly unitary polynomial in $R_{S}[X,X^{-1}]$, then we replace this elementary transformation by the two transformations: \begin{center} $(g_{0},\dots,g_{n})\rightarrow (g_{0}+abX^{m}g_{1}+abX^{k}g_{1},g_{1},\dots,g_{n})\rightarrow (g_{0}+abX^{m}g_{1}+abX^{k}g_{1},g_{1}+h(g_{0}+abX^{m}g_{1}+abX^{k}g_{1}),\dots,g_{n})$ \end{center} where $m>\hdeg(g_{0}),~k<\ldeg(g_{0})$. So we may assume that $$(f_{0},\dots,f_{n})\equiv (1,0,\dots,0)\bmod{abR[X,X^{-1}]}$$ and $f_{0}$ is a doubly unitary polynomial in $R_{S}[X,X^{-1}]$. By (\ref{13}), we may assume that $\hdeg(f_{i})<\hdeg(f_{0}),~\ldeg(f_{i})\geq \ldeg(f_{0})$.
We prove that $f$ can be transformed by elementary transformations into a row with one constant entry. We use an argument similar to that in the proof of [\cite{E}, Theorem 5].
Assume that the number of the coefficients of $f_{2},\dots,f_{n}$ is $\geq 2(n-1)$. Since $d>0$, we obtain that $2(n-1)\geq d+1$. Let $a_{1},\dots,a_{t}$ be the coefficients of $f_{2},\dots,f_{n}$ and $J=\frac{a_1}{1}R_{S}+\dots+\frac{a_t}{1}R_S$. Let $I=R_{S}[X,X^{-1}]f_{0}+R_{S}[X,X^{-1}]f_{1}$. Since $I+J[X,X^{-1}]=R_{S}[X,X^{-1}]$ and $f_{0}$ is doubly unitary in $R_{S}[X,X^{-1}]$, by (\ref{42}), we obtain that $(I\cap R_{S})+J=R_{S}$. So $(\frac{f_{0}h_{0}+f_{1}h_{1}}{s})+\frac{r_{1}}{s_{1}}\frac{a_{1}}{1}+\dots+\frac{r_{t}}{s_{t}}\frac{a_{t}}{1}=\frac{1}{1}$, where $h_{0},h_{1}\in R[X,X^{-1}]$ and $r_{i}\in R,~s,s_{i}\in S$ for all $1\leq i\leq t$. This means that $(\frac{f_{0}h_{0}+f_{1}h_{1}}{1},\frac{a_{1}}{1},\dots,\frac{a_{t}}{1})\in \Um_{t+1}(R_{S})$. By (\ref{41}), there exist $s\in S$ and $b_{1},\dots,b_{t}\in R$ such that $$s\in (a_{1}+b_{1}(f_{0}h_{0}+f_{1}h_{1}))R+\dots+(a_{t}+b_{t}(f_{0}h_{0}+f_{1}h_{1}))R.$$ Using elementary transformations, we may assume that $J=R_{S}$. By (\ref{40}), the ideal $\langle f_{0},f_{2},\dots,f_{n}\rangle$ contains a polynomial $h$ such that $a^{k_{1}}b^{k_{2}}=\hc(h)$ and $\hdeg(h)=\hdeg(f_{0})-1,\ldeg(h)\geq \ldeg(f_{0})$ where $k_{1},k_{2}\in \mathbb{N}$. Let $r=\hc(f_{1})$. So \begin{center} $f\sim_{\E}(f_{0},a^{2k_{1}}b^{2k_{2}}f_{1},f_{2},\dots,f_{n})\sim_{\E}(f_{0},a^{2k_{1}}b^{2k_{2}}f_{1}+(1-a^{k_{1}}b^{k_{2}}r)h,f_{2},\dots,f_{n}).$ \end{center} Then we may assume that $a^{k_{1}}b^{k_{2}}=\hc(f_{1})$. By the proof of Lemma (\ref{13}), we can decrease the $\hdeg(f_{i})$ for all $2\leq i\leq n$.
Repeating the argument above, we obtain that $$f\sim_{\E}(rX^{m},g_{1},\dots,g_{n})\sim_{\E}(r,g_{1}X^{-m},g_{2},\dots,g_{n})$$ where $r\in R,m\in\mathbb{Z}$, $g_{1},\dots,g_{n}\in R[X,X^{-1}]$. By (\ref{10}), $f\sim_{\E}e_{1}$. \end{proof}
\begin{lem}\label{12} Let $R$ be a ring of dimension $d>0$ and $$f=(f_{0},\dots,f_{n})\in \Um_{n+1}(R[X,X^{-1}])$$ such that $n\geq d+1$, $f_{0}=cg$ and $c^{t}=\lc(f_{0})$, where $c\in R\setminus U(R),~0\neq t\in \mathbb{N}$. Assume that for every ring $T$ of dimension $<~d$ and $n\geq\dim(T)+1$, the group $\E_{n+1}(T[X,X^{-1}])$ acts transitively on $\Um_{n+1}(T[X,X^{-1}])$. Then $f\sim_{\E(R[X,X^{-1}])}e_{1}$. \end{lem} \begin{proof} By making the change of variable $X\rightarrow X^{-1}$ and applying Lemma (\ref{11}), we obtain that $f\sim_{\E(R[X,X^{-1}])}e_{1}$. \end{proof}
\begin{thm}\label{51} Let $R$ be a ring of dimension $d$ and let $n\geq d+1$. Then $\E_{n+1}(R[X,X^{-1}])$ acts transitively on $\Um_{n+1}(R[X,X^{-1}])$. \end{thm} \begin{proof} Let $f=(f_0,\dots,f_n)\in \Um_{n+1}(R[X,X^{-1}])$. We prove the theorem by induction on $d$; we may assume that $R$ is a reduced ring. If $d=0$, by (\ref{48}), we are done. Assume that the theorem is true for the dimensions $0,1,\dots,d-1$, where $d>0$. We prove by induction on the number $N$ of nonzero coefficients of the polynomials $f_{0},\dots,f_{n}$ that $f\sim_{\E}e_{1}$ if $\dim R=d$, starting with $N=1$. Assume now that $N>1$. Let $a=\hc(f_{0})$ and $c=\lc(f_{0})$; if $ac\in U(R)$ then by (\ref{43}), we are done. Otherwise, assume that $a\notin U(R)$. By the inductive step, with respect to the ring $R/aR$, we obtain from $f$ a row $\equiv (1,0,\dots,0)\bmod{aR[X,X^{-1}]}$ using elementary transformations. We can perform such transformations so that at every stage the row contains a polynomial $g\in R[X,X^{-1}]$ such that $\hc(g)=a^{t}$, where $t\in \mathbb{N}$. Indeed, if we have to perform, e.g., the elementary transformation $$(g_{0},\dots,g_{n})\rightarrow (g_{0},g_{1}+hg_{0},\dots,g_{n})$$ and $\hc(g_{1})\in U(R_{a})$, then we replace this elementary transformation by the two transformations: \begin{center} $(g_{0},\dots,g_{n})\rightarrow (g_{0}+aX^{m}g_{1},g_{1},\dots,g_{n})\rightarrow (g_{0}+aX^{m}g_{1},g_{1}+h(g_{0}+aX^{m}g_{1}),\dots,g_{n})$ \end{center} where $m>\hdeg(g_{0})$.
So we have $f_{0}=ag$, and $a^{t}=\hc(f_{0})$, where $0\neq t\in \mathbb{N}$. By (\ref{11}), $f\sim_{\E}e_{1}$. Similarly, if $c\notin U(R)$, by (\ref{12}) we obtain that $f\sim_{\E}e_{1}$. \end{proof}
\begin{cor}\label{52} For any ring $R$ with Krull dimension $\leq d$, all finitely generated stably free modules over $R[X,X^{-1}]$ of rank $>d$ are free. \end{cor}
The following conjecture is the analogue of Conjecture 8 of \cite{G} in the Laurent case:
\begin{con} For any ring $R$ with Krull dimension $\leq d$, all finitely generated stably free modules over $R[X_{1}^{\pm1},\dots,X_{k}^{\pm1},X_{k+1},\dots,X_{n}]$ of rank $>d$ are free. \end{con} \textbf{Acknowledgments.} I would like to thank Professor Moshe Roitman, my M.Sc. thesis advisor, for his interest in this project.
\end{document}
\begin{document}
\title{Echo Spectroscopy of Atomic Dynamics in a Gaussian Trap via Phase Imprints}
\author{Daniel Oblak \and J\"{u}rgen Appel \and Patrick Windpassinger \and Ulrich Busk Hoff \and Niels Kj{\ae}rgaard\thanks{\email{[email protected]}} \and Eugene S. Polzik
}
\institute{QUANTOP, Danish National Research Foundation Center for Quantum Optics, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen \O, Denmark}
\date{Dated: \today}
\abstract{We report on the collapse and revival of Ramsey fringe visibility when a spatially dependent phase is imprinted in the coherences of a trapped ensemble of two-level atoms. The phase is imprinted via the light shift from a Gaussian laser beam which couples the dynamics of internal and external degrees of freedom for the atoms in an echo spectroscopy sequence. The observed revivals are directly linked to the oscillatory motion of atoms in the trap. An understanding of the effect is important for quantum state engineering of trapped atoms.
\PACS{
{37.10}{Atom traps and guides}\and {32.}{Atomic properties and interactions with photons}\and{42.50.Dv}{Quantum state engineering and measurements}\and {06.30.Ft}{Time and frequency}
} }
\maketitle
\section{Introduction}\label{intro} A trapped gas of atoms can act as a dispersive medium with a refractive index which depends on the internal state of the atoms. The state-dependent phase shift of probe laser light propagating through an ensemble of Cs atoms has recently been used to observe Rabi flopping on the clock transition non-destructively \cite{Chaudhury2006,Windpassinger2008}. Such non-destructive measurements of a collective atomic quantum state component hold the promise of predicting the outcome of subsequent measurements beyond the standard quantum limit \cite{Kuzmich1998}. This reduction in uncertainty is referred to as conditional squeezing, and the resulting nonclassical atomic state may be used to increase the precision of atomic clocks \cite{Oblak2005}.
In a recent paper \cite{pwarxiv1}, we considered the effect of inhomogeneous light shifts on the atomic quantum state evolution when using a Gaussian laser beam for dispersive probing of an ensemble of atoms confined in a dipole trap. The spatial intensity distribution of the probe beam implies that an atom experiences a position dependent \textit{differential} ac Stark shift of the clock levels and the atomic cloud acquires a spatial phase imprint as illustrated schematically in Fig.~\ref{fig:1}. In the present paper we shall focus on the fact that the individual atoms are not stationary in the trap, but move about to explore regions of different probe light intensities. This is of consequence for how well their inhomogeneous phase spread can be compensated for by using Hahn echo techniques. Specifically, we shall investigate the degradation of Ramsey fringe contrast in echo spectroscopy as a result of atomic movement in between two perturbing light pulses on either side of the echo pulse. The fringe contrast can be observed to revive at half-integer multiples of the radial trap period and we demonstrate how this effect can be used to measure the trap frequency \textit{in situ} without actually exciting collective oscillation modes. The specific form of the collapse and revival of fringe visibility is modelled readily when the anharmonicity of the trapping potential is taken into account.
\begin{figure}
\caption{The Gaussian intensity distribution of a laser beam is mapped onto the atoms as a spatially dependent phase of the clock state superpositions.}
\label{fig:1}
\end{figure}
\section{Ramsey Interrogation and Echo Spectroscopy}
\subsection{Ramsey spectroscopy of two-level atoms}
In Ramsey spectroscopy, a collection of two-level atoms with quantum levels \mbox{$|\!\downarrow\rangle$} and \mbox{$|\!
\uparrow\rangle$} separated by an energy difference $\hbar\Omega_0$ interacts with two near resonant fields of angular frequency $\Omega\approx \Omega_0$ separated by a time $\mathcal{T}$ \cite{Vanier1989}. With all the atoms initially in the state \mbox{$|\!\downarrow\rangle$} the first field interaction --- a $\pi/2$-pulse --- produces an equal coherent superposition state
\mbox{$(|\!\downarrow\rangle+|\!\uparrow\rangle)/\sqrt{2}$} for each atom. The atoms now evolve freely for the time $\mathcal{T}$ during which a phase $\phi=(\Omega-\Omega_0)\mathcal{T}$ is accumulated by the state:
\mbox{$(|\!\downarrow\rangle\,+\, e^{i\phi}|\!\uparrow\rangle)/\sqrt{2}$}. Finally, a second $\pi/2$-pulse is applied and the population difference of the two states is measured. This quantity will vary periodically with $\phi$, giving rise to so-called Ramsey fringes when either $\mathcal{T}$ or $\Omega$ is scanned.
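For ideal, instantaneous $\pi/2$-pulses and in the absence of decoherence, this sequence yields the standard textbook fringe signal (quoted here for reference, up to an overall sign convention for $\phi$)
\[
P_{\uparrow}=\tfrac{1}{2}\bigl(1+\cos\phi\bigr),\qquad \phi=(\Omega-\Omega_{0})\mathcal{T},
\]
so that the population difference $P_{\uparrow}-P_{\downarrow}=\cos\phi$ oscillates with period $2\pi/\mathcal{T}$ when $\Omega$ is scanned.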
If the two energy levels \mbox{$|\!\downarrow\rangle$} and \mbox{$|\!\uparrow\rangle$} are perturbed differentially in any way during the free evolution period, the energy splitting $\hbar \Omega_0$ and thus $\phi$ will be affected. This may occur homogeneously such that all the atoms are perturbed by the same amount, causing an overall shift of the Ramsey fringe pattern \cite{Featonby1998}. Essentially, this fringe shift constitutes an interferometric measurement of the perturbation strength. For nonuniform perturbations the phase $\phi$ differs from atom to atom and in the Ramsey measurement of the whole collection of atoms the fringe shift is accompanied by a degraded visibility as a result of \textit{inhomogeneous} dephasing \cite{pwarxiv1}.
\begin{figure}
\caption{Echo spectroscopy time line as employed in our experiments. A near-resonant microwave (MW) field is applied to the atoms in a $\pi/2-\pi-\pi/2$ pulse sequence. The effect of short perturbing light pulses occurring at a time $t_1$ before and a time $t_2$ after the $\pi$-pulse is investigated. }
\label{fig:sequence}
\end{figure}
\subsection{Coherence echoes} \begin{figure*}
\caption{(a) Dynamical phase space representation of atoms in a 1D harmonic oscillator potential of angular frequency $\omega$. (b) Phase imprint on the atoms from the position dependent light intensity of Gaussian laser beam during echo spectroscopy, where two identical light pulses act on each side of the echo $\pi$-pulse. (i) In general echo spectroscopy cannot completely rephase the atoms. (ii) In the special case of the time between the two light pulses being $\tau_1+\tau_2=\pi({\rm mod}\pi)/\omega$, the effects of the two light pulses cancel.}
\label{fig:2}
\end{figure*}
Effects of any form of inhomogeneous dephasing encountered in Ramsey spectroscopy can, to a large extent, be cancelled by introducing a so-called echo-pulse ($\pi$-pulse) between the two $\pi/2$-pulses at time $t_1$ in the Ramsey sequence \cite{HAHN1950}. The echo pulse essentially inverts the sign of the phase $\phi$ accumulated so far by each atom, and if subsequently each atom encounters the same amount of perturbation as prior to the $\pi$-pulse, a rephasing will occur at time $t_2=t_1$ after the echo pulse. In this sense the perturbation is \textit{reversible}, e.g. for a collection of stationary atoms in an inhomogeneous light field.
Irreversible dephasing may result from fluctuating perturbations such as the variations in phase shift due to noise in the intensity of the inhomogeneous light field or movement of atoms therein \cite{Kuhr2005}.
Since echo spectroscopy nullifies the effect of reversible perturbations it has proven to be a powerful tool for investigating the nature and magnitude of atomic decoherence due to the combined effect of irreversible processes including dephasing and spontaneous decay of atoms \cite{Kuhr2005,Andersen2003,Ozeri2005,Lengwenus2007}. Several studies focus on the influence of the inhomogeneous dipole trapping light and it has been demonstrated that an echo pulse is efficient in compensating the effect of the trapping laser on a time scale $\ll h/\Delta E$, where $\Delta E$ is the differential ac Stark shift between \mbox{$|\!\downarrow\rangle$} and
\mbox{$|\!\uparrow\rangle$} \cite{Andersen2003}. Furthermore, it has been shown that the differential light shift for an oscillator mode of a far-off-resonant trap (FORT) is well described by its time averaged value \cite{Kuhr2005} such that inhomogeneous dephasing introduced by the Gaussian trapping laser beam profile can in many cases be reversed in an echo sequence.
\subsection{Inhomogeneous light shifts and trap dynamics}\label{sec:theodynamics} In the present paper we shall consider the perturbations from an auxiliary inhomogeneous pulse of light applied before the echo pulse and an identical light pulse applied after the echo pulse as indicated in Fig.~\ref{fig:sequence}. For a stationary atomic ensemble perfect rephasing would be expected except for decoherence of the atomic state due to irreversible spontaneous scattering events and quantum fluctuations (shot noise) of the two light pulses. Hence, a measurement of the Ramsey fringe visibility would appear to be ideally suited for determining the amount of decoherence introduced by a given laser beam. The situation is somewhat complicated when atoms change positions between the application of the two light pulses. To elaborate on this issue we consider a simple 1D model of particles evolving in a harmonic trap. The distributions of atoms in momentum and poisition are represented as a dynamical phase space plot [see Fig.~\ref{fig:2} (a)]. The application of a light pulse with Gaussian intensity profile will imprint a phase which only depends on the position if we assume that the duration of the light pulse is much shorter than an oscillation period $T$ in the trap. Now, during the free evolution the imprint will rotate in dynamical phase space at angular frequency $\omega=2\pi/T$. In Fig.~\ref{fig:2}(b), we outline the situation for echo spectroscopy in the cases (i) $\tau_1=\tau_2=\pi/4\omega$ and (ii) $\tau_1=\tau_2=\pi/2\omega$, where $\tau_1$ and $\tau_2$ are the durations from the light pulse before and after to the echo pulse, respectively. Complete rephasing of the atoms is encountered only if $\tau_1+\tau_2=\pi({\rm mod}~\pi)/\omega$, i.e. the atoms oscillate in the trap for a half-integer multipla of a trap period between the two light pulses. 
For an echo spectroscopy experiment this implies that the Ramsey fringe contrast will depend on the time separation $\tau_1+\tau_2$ between the light pulses.
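The rephasing condition can be made explicit in this harmonic model (a sketch under the stated idealizations: instantaneous light pulses and a perfect echo $\pi$-pulse). Writing the imprinted phase as $\varphi(x)=\varphi_{0}\,e^{-2x^{2}/w_{0}^{2}}$, with $w_{0}$ the beam waist and $\varphi_{0}$ the peak phase, the net phase of an atom after the full sequence is
\[
\Delta\phi=\varphi(x_{2})-\varphi(x_{1}),
\]
where $x_{1}$ and $x_{2}$ are its positions at the two light pulses and the sign difference is due to the intervening $\pi$-pulse. For harmonic motion $x(t)=A\cos(\omega t+\theta)$ one has $x_{2}=(-1)^{n}x_{1}$ for every atom precisely when $\omega(\tau_{1}+\tau_{2})=n\pi$; since $\varphi$ is even in $x$, $\Delta\phi$ then vanishes and the ensemble-averaged fringe visibility $|\langle e^{i\Delta\phi}\rangle|$ revives completely. For intermediate separations $x_{1}$ and $x_{2}$ decorrelate and the visibility collapses.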
\section{Experimental} \subsection{Setup} Details of our experimental setup and atomic sample preparation can be found in \cite{pwarxiv1}. The starting point for the experiment presented in the present paper is an ensemble of $\sim 10,000-50,000$ Cs atoms polarized in the
$6S_{1/2}(F=3,m_F=0)\equiv|\!\!\downarrow\rangle$ clock state and confined by a $\sim 4~W$ Yb:YAG laser beam focussed to a waist of $\rm \sim 40 \mu m$. This dipole trap is characterized by oscillation frequencies on the order of a kHz radially and a few Hz axially. The clock transition
\mbox{$|\!\downarrow\rangle$}~$\leftrightarrow6S_{1/2}(F=4,m_F=0)\equiv$~\mbox{$|\!\uparrow\rangle$} is driven using microwave radiation around 9.2~GHz applied in a
$\pi/2-\pi-\pi/2$ echo spectroscopy sequence as depicted in Fig.~\ref{fig:sequence}. At the end of this sequence we determine the fraction of atoms residing in the $|\!\uparrow\rangle$ state. The probing of atoms is performed with a beam of light propagating along the trap axis with a waist of 18~$\rm \mu m$ located at the center of the trap. The frequency of the probe light is blue detuned by 160~MHz from the {$6S_{1/2}(F=4)\rightarrow6P_{3/2}(F=5)$} transition and via the dispersive atom-light interaction the probe light experiences a phase shift proportional to the number of
\mbox{$|\!\uparrow\rangle$}-atoms which is measured using a shot noise limited Mach Zehnder interferometer \cite{Oblak2005,Petrov2007a}. By applying light resonant with the {$6S_{1/2}(F=3)\rightarrow6P_{3/2}(F=4)$} transition all atoms are pumped into the $6S_{1/2}(F=4)$ level and the total number of atoms involved is determined from a subsequent phase shift measurement. Optionally, we can apply light from an additional probe laser which is red detuned by 135~MHz from the
{$6S_{1/2}(F=3)\rightarrow6P_{3/2}(F=2)$} transition and hence couples to the \mbox{$|\!\downarrow\rangle$} population. By engaging the two probes simultaneously we can obtain a zero (mean) interferometer phase shift for ensembles in an equal clock state superposition irrespective of the total number of atoms. This two-color probe configuration has proven convenient in our measurements of atomic projection noise \cite{Windpassinger2008}. The two probes of this two-color scheme are merged in a single-mode optical fiber to ensure good spatial overlap and hence enter the interferometer at the same input port.
\subsection{Decoherence and inhomogeneous dephasing} For low optical powers the dispersive probing scheme is close to nondestructive in the sense that spontaneous scattering is very limited. However, the ratio of signal to noise (which in our case is the shot noise of light) for a phase shift measurement increases with increasing probe photon number. Hence, there is a trade-off between information gained and coherence lost. Establishing an optimal balance between decoherence and measurement strength is important for quantum state engineering of a squeezed clock state population difference via a dispersive measurement. As it turns out, the optical depth of the atomic sample is the key figure of merit determining the optimal optical power and thus the amount of decoherence \cite{Hammerer2004}. Preferably, and to achieve a high degree of squeezing, the optical depth should be large such that each probe photon interacts with many atoms. In this respect the $\sim 1:200$ radial to axial aspect ratio of our sample provided by the dipole trap potential is favorable and gives rise to an optical depth of up to $\sim 20$ along the direction of the probe laser beam.
From these considerations, it is obviously important to have a handle on the amount of decoherence introduced by the dispersive probing scheme and echo spectroscopy would appear to be the method of choice \cite{Ozeri2005}. Ideally, a light pulse derived from the probe laser could be divided into two with each part applied before and after the echo pulse, respectively, and the reduction in Ramsey fringe amplitude would then gauge the decoherence from probe light considered as a perturbation. This method relies on the complete cancellation of the reversible dephasing by the inhomogeneous light shift. However, the movement of atoms in between the two perturbing pulses may also lead to imperfect rephasing causing a Ramsey fringe reduction as outlined in section~\ref{sec:theodynamics}. In fact, this effect is significant and is prominently manifested in the echo spectroscopy.
\subsection{Results} \subsubsection{Echo Spectroscopy} \begin{figure}
\caption{Ramsey fringes as recorded in echo spectroscopy as a function of the time separation $\tau_1+\tau_2$ between two light shifting pulses around the echo pulse. The fringe amplitudes have been normalized to the case when no light is applied. For increasing separation times the fringes are observed to first wash out and subsequently revive.}
\label{fig:expramsfringe}
\end{figure}
As described in section \ref{sec:theodynamics} the dynamics of atoms in our dipole trap is expected to affect the ability of an echo pulse to balance the effect of two surrounding phase shifting light pulses, as can indeed be observed experimentally in echo spectroscopy. To illuminate the effect, we present in Fig.~\ref{fig:expramsfringe} examples of Ramsey fringes as recorded in the two-color probing scheme using the echo sequence of Fig.~\ref{fig:sequence} for light pulse separations $\tau_1+\tau_2$ of 12, 250, and 500~$\rm \mu s$, respectively. The light pulses have a duration of 2~$\mu s$ which is short compared to all other relevant time scales in the experiment such as the trap oscillation period of $\rm\sim1000~\mu s$. For the traces shown, $t_1$ was kept fixed at 1500~ms while $t_2$ was scanned between 500~ms and 2500~ms. To enable the observation of Ramsey fringes in the time domain the frequency of our microwave source was detuned by 3~kHz. The detuning sets the undulation frequency of the Ramsey fringes which have their maximum amplitude at $t_2=t_1=1500$~ms due to the echo rephasing pulse.
The most intriguing aspect of Fig.~\ref{fig:expramsfringe} is the collapse and revival of fringe visibility. For a time separation $\rm\tau_1+\tau_2=12~\mu s$ between the light pulses, which is short compared to the trap period, the echo pulse is effective in restoring the Ramsey fringe visibility: Since the light pulses are so closely spaced that the atoms hardly have time to move in between, the second pulse essentially undoes the inhomogeneous phase imprint of the first. An upper bound on the amount of decoherence can be estimated from the reduction in fringe amplitude, which in this case is 76\%. A completely different situation is encountered at $\rm\tau_1+\tau_2=250~\mu s$ with a light pulse separation in the vicinity of a quarter of a radial trap period. Here the atomic ensemble develops a nonuniform phase as indicated in Fig.~\ref{fig:2}b(i) and the Ramsey fringes for individual atoms will in general not interfere constructively. Hence the ensemble fringe contrast degrades to some extent. Finally, for a pulse separation of $\rm\tau_1+\tau_2=500~\mu s$ in the vicinity of half a radial trap period, the Ramsey fringe visibility revives. At the time of the second light pulse an atom will find itself close to the radial distance (from the trap center) it had when the first pulse was applied and hence experience the same light intensity. Due to the inverting effect of the interposed echo pulse a phase shift cancelation will occur.
\subsubsection{Revival frequency versus trap power} The revival of Ramsey fringe visibility, resulting from interference between spatially imprinted phases in the atomic coherences, is linked to the radial trap oscillation frequency. Measurements of trap frequencies in cold atom experiments are typically performed by exciting either monopole or dipole oscillation modes, or by driving parametric losses when modulating the trap \cite{Grimm2000,Boiron1998}. Using the fringe revival, we are able to extract the trap frequency \textit{in situ} without actually exciting motion --- an atom is tagged in its \textit{internal} degrees of freedom according to its position. Towards this end we simply measure the height of the central Ramsey fringe in our echo sequence (Fig.~\ref{fig:sequence}) as a function of the separation between the two light shifting pulses. Hence, we keep $t_2=t_1$ fixed and vary $\rm\tau_1+\tau_2$. As the height of the central Ramsey fringe provides a measure of the fringe visibility, this quantity oscillates in $\rm\tau_1+\tau_2$ at twice the trap frequency as discussed above. Figure~\ref{fig:expfreqdep} shows the measured revival frequency as a function of the optical power of our dipole trapping beam. The observed revival frequencies are described well by a square root dependence on the optical power as expected from theory \cite{Grimm2000}. \begin{figure}
\caption{Observed revival frequency of Ramsey fringes in echo spectroscopy as a function of the dipole trapping power. A square root scaling (line) describes the data well. Each data point was extracted from a revival trace as shown in the inset as recorded using the single color probing scheme.}
\label{fig:expfreqdep}
\end{figure}
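The square-root scaling of the revival frequency with trap power can also be checked numerically. The following sketch (an illustrative helper, not part of our analysis code) estimates the scale factor $a$ in $f = a\sqrt{P}$ by closed-form least squares, avoiding any fitting-library dependency:

```python
import math

def fit_sqrt_scaling(powers, freqs):
    """Closed-form least-squares estimate of a in f = a*sqrt(P).

    Minimizing sum_i (f_i - a*sqrt(P_i))^2 over a gives
    a = sum_i f_i*sqrt(P_i) / sum_i P_i.
    """
    num = sum(f * math.sqrt(p) for p, f in zip(powers, freqs))
    return num / sum(powers)
```

For data following $f = 2\sqrt{P}$ exactly, e.g. powers [1, 4, 9] and frequencies [2, 4, 6], the estimate returns 2.0; comparing the residuals of such a fit against a linear model is a quick consistency check on the data of Fig.~\ref{fig:expfreqdep}.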
\subsubsection{Modeling the revivals} \begin{figure}
\caption{Collapse and revival of the fringe amplitude in echo spectroscopy. Filled circles show the experimental normalized fringe amplitude as a function of the time separation $\tau_1+\tau_2$ between perturbing light pulses around the echo pulse for ten different values of the number of perturbing photons $N_{\rm pert}$. Full lines show the result of our numerical simulations of the dynamics.}
\label{fig:expdecaysrevivals}
\end{figure} In a perfect harmonic trap the radial oscillation period is independent of oscillation amplitude and complete revival of the Ramsey fringe would hence be expected when $\rm\tau_1+\tau_2$ equals half a trap period. This is clearly not the case for the revival presented in the inset of Fig.~\ref{fig:expfreqdep}. A possible explanation could be sought in the anharmonicity of the Gaussian trapping potential. To investigate this in more detail we perform numerical simulations of the phase evolution of $N=50,000$ particles in a Gaussian trap when subjected to the echo spectroscopy sequence shown in Fig.~\ref{fig:sequence} including a non-uniform light shift from a Gaussian laser beam. During half a radial trapping period the axial movement of even the most energetic trapped particles will only amount to a small fraction of the probe beam Rayleigh length which characterizes the length scale for the light-atom interaction volume. Furthermore, collisions between particles are expected to be negligible for the low densities and short time scales involved \cite{Mudrich2002}. In addition, we shall assume that the mean kinetic energy for trapped particles is sufficiently large for the radial frequency variation along the trap axis to be negligible compared with the local frequency spread caused by the trap potential anharmonicity. Hence, as an approximation, we shall restrict our treatment to the 2D case of radial dynamics. The details of the implementation of our numerical simulations are given in appendix~\ref{app:spatdist}.
The expected fringe amplitude (relative to the unperturbed case) as a function of the time separation between the two perturbing light pulses, $t\equiv \tau_1+\tau_2$, is given by \begin{equation}\label{eqfringecontrast} \mathcal{F}(t)=\frac{\sum_{j=0}^Ne^{-\rho_j^2(t/2+t_2)/\gamma^2w_0^2}\cos{\phi_f^{(j)}(t)}}{\sum_{j=0}^Ne^{-\rho_j^2(t/2+t_2)/\gamma^2w_0^2}}, \end{equation} where $\rho_j$ and $\phi_f^{(j)}$ are the radial position and echo phase [cf. Eq.~(\ref{eqechophase})], respectively, of particle $j$ in the ensemble. Since the atomic ensemble is probed with a Gaussian light beam, each particle contributes to the total signal by a weight according to the intensity at its radial position at the time of probing. Figure~\ref{fig:expdecaysrevivals} shows the fringe amplitude versus light pulse separation time as recorded in echo spectroscopy experiments for various perturbing light powers $P_{\rm pert}$. We model our data using $\mathcal{F}(t)$ as provided by our numerical simulations. The functional behavior of $\mathcal{F}(t)$ depends on the three parameters $T$, $\gamma$, and $\phi_0$, i.e., the cloud temperature prior to the separatrix truncation (see appendix \ref{app:spatdist}), the probe to trap beam ratio, and the peak phase shift. These parameters are adjusted to minimize the mean square difference to our experimental data. The best fit is obtained for $kT/U_0=1$, $\gamma=0.6$, and $\phi_0/N_{\rm pert}=5.0\times10^{-7}$, which corresponds well to what would be expected in the experiment. Fixing the free parameters of our simulation code to these values, we plot the normalized echo amplitude in Fig.~\ref{fig:expdecaysrevivals}. We note that our simple 2D model gives an excellent overall agreement between experiment and simulation.
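Equation~(\ref{eqfringecontrast}) is straightforward to evaluate once the particle trajectories are known. A minimal sketch (illustrative only; the variable names are ours) computes the intensity-weighted ensemble average of $\cos\phi_f$:

```python
import math

def fringe_amplitude(rho_at_probe, echo_phases, gamma, w0):
    """Evaluate Eq. (fringecontrast): each particle's cos(phi_f) is
    weighted by the Gaussian probe intensity at the particle's radial
    position at the time of probing."""
    weights = [math.exp(-rho**2 / (gamma**2 * w0**2)) for rho in rho_at_probe]
    signal = sum(w * math.cos(phi) for w, phi in zip(weights, echo_phases))
    return signal / sum(weights)
```

Perfect rephasing (all echo phases zero) gives unit amplitude, while subensembles with opposite phases cancel, reproducing the collapse of fringe visibility.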
The model does not include decoherence with the result that all the simulated curves in Fig.~\ref{fig:expdecaysrevivals} indicate perfect rephasing at $t=0$. In the experimental data it is evident that rephasing is not perfect especially for the large probe powers. However, the effect of trap dynamics on the fringe visibility clearly dominates over the decoherence for most pulse separations and it is unclear how much the, in principle, reversible dephasing contributes to the fringe reduction at the smallest pulse separation of 12~$\rm\mu s$. Thus, our measurements provide an upper bound on the (irreversible) decoherence.
\section{Discussion} In summary, we have reported on the manifestation of motional dynamics of trapped atoms in echo spectroscopy on the Cs clock transition. The movement of particles implies that the ac Stark shifts from two identical light pulses generally do not cancel completely on the application of an in-between Hahn echo. Rather, the echo fringe amplitude is observed to decrease with increasing separation of the two light pulses. For obtaining squeezing on the clock transition via dispersive measurements this is unfortunate for two reasons. First, the effect limits the applicability of echo techniques to restore the adverse inhomogeneous light shift effects of a strong off-resonant probe pulse. Second, using echo spectroscopy to gauge the atomic decoherence resulting from spontaneously scattered probe photons when probing an ensemble is not straightforward. Obviously, an understanding of the effect of motional dynamics is important when dispersive measurements are used for quantum state engineering.
An alternative approach to eliminate inhomogeneous differential light shift of the clock states induced by a probe pulse is to apply a simultaneous light pulse from a second laser \cite{Kaplan2004}. The frequency of the second probe laser should be chosen so that the differential ac Stark shifts introduced by each probe laser exactly cancel. In the present configuration, where the two probe beams enter the Mach Zehnder interferometer via the same input port, the detunings are pegged such that simultaneous application of the two probes on a coherent superposition state yields a zero mean phase shift. At these detunings the ac Stark shifts imposed on the two clock states add with the same sign, and instead of cancelling the differential light shift, the two-color probing scheme essentially doubles it. However, injecting the two beams via \textit{different} ports of the interferometer input beam splitter establishes a scheme where the detunings can be chosen to eliminate the differential light shift and achieve a state sensitive, balanced interferometer signal at the same time \cite{saffmaninprep}. We are presently in the course of reconfiguring our experimental setup to this favourable scheme. Echo spectroscopy experiments along the lines of the present paper should then easily indicate if light shift cancelation has been accomplished.
We recall that the primary motivation for our experimental efforts is to demonstrate squeezing on the clock transition. Here a balanced interferometer measurement of an ensemble in an equal clock state superposition will be used to predict the outcome of a subsequent measurement beyond the standard quantum limit \cite{Windpassinger2008,Kuzmich1998,Oblak2005}. The present paper and ref. \cite{pwarxiv1} have investigated some adverse effects on the collective ensemble state which may accompany these ``nondestructive" measurements and discussed routes to minimize them. Even when applying these strategies to eliminate inhomogeneous light shifts, the atomic trap dynamics will lead to retardation effects when comparing two consecutive measurements on an ensemble. Particle motion and a Gaussian probe beam profile imply that an atom in general contributes to the interferometer signal by different weights in the two measurements. A fuller treatment of this is beyond the scope of this paper, but it is expected that the degree of squeezing (i.e. the amount of correlation between the two pulses) is going to diminish and revive analogously to the collapse and revival of the echo fringe amplitude reported here.
\begin{appendix} \section{Methods for numerical simulations} \label{app:spatdist} We obtain the initial conditions for our numerical simulations by considering a canonical ensemble of particles with mass $m$ at a temperature $T$. Using polar coordinates $(\rho,\theta)$, the phase space density is \begin{equation} \sigma({\rho,\theta,p_{\rho}},p_{\theta})\propto e^{-[({p_{\rho}^2+p_{\theta}^2 /\rho^2})/2m+V({\rho})]/kT}, \end{equation}
where $k$ is the Boltzmann constant and $p_{\rho}$ and $p_{\theta}$ are the conjugate momenta to $\rho$ and $\theta$, respectively \cite{Reif1985}. For the Gaussian trap the confinement potential is $V({\rho})=- U_0e^{-(2\rho^2/w_0^2)}$, which has the (finite) trap depth $U_0$ and depends only on the radial distance $\rho$ to the trap center at the origin. We want to restrict ourselves to particles inside the separatrix of stable motion, i.e., we disregard unbound particles for which $({p_{\rho}^2+p_{\theta}^2 /\rho^2})/2m+V({\rho})>0$ (see Fig.~\ref{fig:separatrix}). Integrating over particles inside the separatrix yields a spatial density distribution \begin{equation} n({\bf r})\propto\left[e^{-V(\rho)/kT}-1\right]\propto\left\{ \begin{array}{lcc}\exp\left(-\frac{U_0}{kT}\frac{2\rho^2}{w_0^2}\right)&,&U_0\gg kT \\ \exp\left(-\frac{2\rho^2}{w_0^2}\right)&,&U_0 \ll kT \\ \end{array}\right., \end{equation} which for all practical purposes can be assumed to vanish for $\rho\gtrsim w_0$. Applying a finite cut-off radius $\rho_0\gtrsim w_0$, we assign random radii to the particles of our ensemble according to a probability density $\propto \rho e^{-V(\rho)/kT}$ using the rejection method \cite{Press1986} on the interval $[0...\rho_0]$. Next we assign the canonical momenta $p_\rho$ and $p_\theta$ for each particle in correspondence with normal distributions of widths $\sqrt{kT}$ and $\rho\sqrt{kT}$, respectively. We discard untrapped particles as described above and tag each trapped particle $j$ with a phase $\phi_i^{(j)}=\phi_0\exp(-2\rho_j^2/\gamma^2w_0^2)$, where $\gamma$ is the ratio between the waists of the trapping laser and the perturbing laser and $\phi_0$ is the peak phase shift. We finally integrate the equations of motion using the Verlet method to obtain $\rho_j(t)$ from which the echo phase \begin{equation}\label{eqechophase} \phi_f^{(j)}(t)=\phi_i^{(j)}-\phi_0\exp(-2\rho_j^2(t)/\gamma^2w_0^2) \end{equation} can be calculated. \begin{figure}
\caption{Illustration of the Gaussian potential separatrix of bound motion projected onto $(\rho,p_\rho,p_\theta)$ phase-space. Particles inside the separatrix are trapped and form the starting point for our numerical simulations.
}
\label{fig:separatrix}
\end{figure} \end{appendix}
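The sampling and integration procedure of appendix~\ref{app:spatdist} can be sketched as follows. This is a simplified reimplementation, not our production code: units are chosen with $m=1$, and the envelope constant, time step and parameter values are illustrative assumptions.

```python
import math
import random

def sample_trapped_particles(n, kT, U0, w0, rho0, seed=0):
    """Rejection-sample radii with density ~ rho*exp(-V(rho)/kT) on
    [0, rho0], draw the conjugate momenta from normal distributions of
    widths sqrt(kT) and rho*sqrt(kT), and keep only particles bound
    inside the separatrix (total energy < 0; mass m = 1)."""
    rng = random.Random(seed)
    V = lambda rho: -U0 * math.exp(-2.0 * rho**2 / w0**2)
    f_max = rho0 * math.exp(U0 / kT)  # envelope for the rejection step
    particles = []
    while len(particles) < n:
        rho = rng.uniform(0.0, rho0)
        if rho == 0.0 or rng.uniform(0.0, f_max) > rho * math.exp(-V(rho) / kT):
            continue
        p_rho = rng.gauss(0.0, math.sqrt(kT))
        p_theta = rng.gauss(0.0, rho * math.sqrt(kT))
        if (p_rho**2 + p_theta**2 / rho**2) / 2.0 + V(rho) < 0.0:  # bound
            particles.append((rho, p_rho, p_theta))
    return particles

def verlet_radius(rho, p_rho, p_theta, U0, w0, t, dt=1e-3):
    """Velocity-Verlet integration of the radial motion in the effective
    potential V(rho) + p_theta^2/(2 rho^2); returns rho(t)."""
    def force(r):
        return p_theta**2 / r**3 - 4.0 * U0 * r / w0**2 * math.exp(-2.0 * r**2 / w0**2)
    v, f = p_rho, force(rho)
    for _ in range(max(1, round(t / dt))):
        v_half = v + 0.5 * dt * f
        rho += dt * v_half
        f = force(rho)
        v = v_half + 0.5 * dt * f
    return rho
```

The echo phase of Eq.~(\ref{eqechophase}) then follows from the initial and final radii of each particle.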
\end{document}
HiC-bench: comprehensive and reproducible Hi-C data analysis designed for parameter exploration and benchmarking
Charalampos Lazaris, Stephen Kelly, Panagiotis Ntziachristos, Iannis Aifantis & Aristotelis Tsirigos (ORCID: orcid.org/0000-0002-7512-8477)
BMC Genomics volume 18, Article number: 22 (2017)
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at an unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most of the existing tools do not cover all aspects of the analysis and only offer a few quality assessment options. Additionally, availability of a multitude of tools makes scientists wonder how these tools and associated parameters can be optimally used, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be assured that slight changes in parameters and/or methods do not affect the conclusions of their studies.
To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, scoring and annotation of specific interactions using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available.
HiC-bench constitutes an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
Nuclear organization is of fundamental importance to gene regulation. Recently, proximity ligation assays have greatly enhanced our understanding of chromatin organization and its relationship to gene expression [1]. Here we focus on Hi-C, a powerful genome-wide chromosome conformation capture variant, which detects genome-wide chromatin interactions [2, 3]. In Hi-C, chromatin is cross-linked and DNA is fragmented using restriction enzymes; the interacting fragments are ligated forming hybrids that are then sequenced and mapped back to the genome. Hi-C is a very powerful technique that has led to important discoveries regarding the organizational principles of the genome. More specifically, Hi-C has revealed that the mammalian genome is organized in active and repressed areas (A and B compartments) [2] that are further divided into "meta-TADs" [4], TADs [5] and sub-TADs [6]. TADs are evolutionarily conserved, megabase-scale, non-overlapping areas with increased frequency of intra-domain compared to inter-domain chromatin interactions [5, 7]. Despite the fact that Hi-C is very powerful, it is known to be prone to systematic biases [8–10]. Moreover, as the sequencing costs plummet allowing for increased Hi-C resolution, Hi-C poses formidable challenges to computational analysis in terms of data storage, memory usage and processing speed. Thus, various tools have been recently developed to mitigate biases in Hi-C data and make Hi-C analysis faster and more efficient in terms of resource usage. HiC-Box [11], hiclib [9] and HiC-Pro [12] perform various Hi-C analysis tasks, such as alignment and binning of Hi-C sequencing reads into Hi-C contact matrices, noise reduction and detection of specific DNA-DNA interactions.
Hi-Corrector [13] has been developed for noise reduction of Hi-C data, allowing parallelization and effective memory management, whereas Hi-Cpipe [14] offers parallelization options and includes steps for alignment, filtering, quality control, detection of specific interactions and visualization of contact matrices. Other tools that allow parallelization are HiFive [15], HOMER [16] and HiC-Pro [12]. Allele-specific Hi-C contact maps can be generated using HiC-Pro and HiCUP [17] (with SNPsplit [18]). TADbit can be used to map raw reads, create interaction matrices, normalize and correct the matrices, call topological domains and build three-dimensional (3D) models based on the Hi-C matrices [19]. HiCdat performs binning, matrix normalization, integration of other data (e.g., ChIP-seq) and visualization [20]. HIPPIE offers similar functionality to HiCdat and allows detection of specific enhancer-promoter interactions [21]. Other tools mainly focus on visualization of Hi-C data (e.g., Sushi [22] and HiCPlotter [23]). Despite the recent boom in the development of computational methods for Hi-C analysis, most of these tools only focus on certain aspects of the analysis, thus failing to encompass the entire Hi-C data analysis workflow. More importantly, these tools or pipelines are not easily extensible, and, for any given Hi-C task, they do not allow the integration of multiple alternative tools (use of alternative TAD calling methods for example) whose performance could then be qualitatively or quantitatively compared. Available tools do not support comprehensive reporting of the parameters used for each task and they do not enable reproducible computational analysis which is an imperative requirement in the era of big data [24], especially given the complexity of Hi-C analyses. The recently released HiFive is an exception as it offers a Galaxy interface [15].
However, use of Galaxy [25] can become problematic for data-heavy analyses, especially when the remote Galaxy server is used.
To facilitate comprehensive processing, reproducibility, parameter exploration and benchmarking of Hi-C data analyses, we introduce HiC-bench, a data flow platform which is extensible and allows the integration of different task-specific tools. Current and future tools related to Hi-C analysis can be easily incorporated into HiC-bench by implementing simple wrapper scripts. HiC-bench covers all current aspects of a standard Hi-C analysis workflow, including read mapping, filtering, quality control, binning, noise correction and identification of specific interactions (Table 1). Moreover, it integrates multiple alternative tools for performing each task (such as matrix correction tools and TAD-calling algorithms), while at the same time allowing simultaneous exploration of different parameter settings that are propagated from one task to all subsequent tasks in the pipeline. HiC-bench also generates a variety of quality assessment plots and offers other visualization options, such as generating genome browser tracks as well as snapshots using HiCPlotter. We have built this platform with reproducibility in mind, as all tools, versions and parameter settings are recorded throughout the analysis. HiC-bench is released as open-source software and the source code is available on GitHub and Zenodo (for details please refer to "Availability and requirements" section). Our team provides installation and usage support.
Table 1 Comparison of HiC-bench with published Hi-C analysis or visualization tools
The HiC-bench workflow
HiC-bench is a comprehensive computational pipeline for Hi-C sequencing data analysis. It covers all aspects of Hi-C data analysis, ranging from alignment of raw reads to boundary-score calculation, TAD calling, boundary detection, annotation of specific interactions and enrichment analysis. Thus, HiC-bench constitutes the most complete computational Hi-C analysis pipeline to date (Table 1). Importantly, every step of the pipeline includes summary statistics (when applicable) and direct comparative visualization of the results. This feature is essential for quality control and facilitates troubleshooting. The HiC-bench workflow (Fig. 1) starts with the alignment of Hi-C sequencing reads and ends with the annotation and enrichment of specific interactions. More specifically, in the first step, the raw reads (fastq files) are aligned to the reference genome using Bowtie2 [26] (align). The aligned reads are further filtered in order to determine those Hi-C read pairs that will be used for downstream analysis (filter). A detailed statistics report showing the numbers and percentages of reads assigned to the different categories is automatically generated in the next step (filter-stats). The reads that satisfy the filtering criteria are used for the creation of Hi-C contact matrices (matrix-filtered). These contact matrices can either be directly visualized in the WashU Epigenome Browser [27] as Hi-C tracks (tracks), or further processed using three alternative matrix correction methods: (a) matrix scaling (matrix-prep), (b) iterative correction (matrix-ic) [9] and (c) HiCNorm (matrix-hicnorm) [28]. As quality control, plots of the average number of Hi-C interactions as a function of the distance between the interacting loci are automatically generated in the next step (matrix-stats). The Hi-C matrices, before and after matrix correction, are used as inputs in various subsequent pipeline tasks.
First, they are directly compared in terms of Pearson or Spearman correlation (compare-matrices and compare-matrices-stats) in order to estimate the similarity between Hi-C samples. Second, they are used for the calculation of boundary scores (boundary-scores and boundary-scores-pca), identification of topological domains (domains) and comparison of boundaries (compare-boundaries and compare-boundaries-stats). Third, high-resolution Hi-C matrices are used for detection and annotation of specific chromatin interactions (interactions and annotations), enrichment analysis in transcription factors, chromatin marks or other segmented data (annotation-stats) and visualization of chromatin interactions in certain genomic loci of interest (hicplotter). We should note here that HiC-bench is totally extensible and customizable as new tools can be easily integrated into the HiC-bench workflow (see Additional file 1 User Manual for more details). In addition to the multiple alternative tools that can be used to perform certain tasks, HiC-bench allows simultaneous exploration of different parameter settings that are propagated from one task to all subsequent tasks in the pipeline (for details please refer to "Main concepts and pipeline architecture" section). For example, after contact matrices are generated and corrected using alternative methods, HiC-bench proceeds with TAD calling using all computed matrices as inputs (Figs. 1 and 2a). This unique feature enables the design and execution of complex benchmark studies that may include combinations of multiple tool/parameter choices in each step. HiC-bench focuses on the reproducibility of the analysis by keeping records of the source code, tool versions and parameter settings, and it is the only HiC-analysis pipeline that allows combinatorial parameter exploration facilitating benchmarking of Hi-C analyses.
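The combinatorial "trail" expansion described above can be sketched in a few lines. The stage and option names below are hypothetical illustrations of a benchmark like ours, not HiC-bench's actual labels or API:

```python
from itertools import product

def enumerate_trails(stages):
    """Expand a pipeline specification into all computational 'trails':
    every combination of one option per stage, in stage order.
    `stages` is a list of (stage_name, [option, ...]) pairs."""
    names = [name for name, _ in stages]
    for combo in product(*(options for _, options in stages)):
        yield dict(zip(names, combo))

# Hypothetical stage/option names for a TAD-calling benchmark.
pipeline = [
    ("correction", ["scaling", "ic", "hicnorm"]),
    ("resolution", ["40kb", "100kb"]),
    ("tad_caller", ["topdom", "armatus", "di"]),
]
trails = list(enumerate_trails(pipeline))  # 3 * 2 * 3 = 18 distinct trails
```

Each resulting dictionary corresponds to one trail of the kind highlighted in red in Fig. 2a, and every trail is executed to produce its own collection of output objects.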
HiC-bench workflow. Raw reads (input fastq files) are aligned and then filtered (align and filter tasks). Filtered reads are used for the creation of Hi-C track files (tracks) that can be directly uploaded to the WashU Epigenome Browser [27]. A report with a statistics summary of filtered Hi-C reads is also automatically generated (filter-stats). Raw Hi-C matrices (matrix-filtered) are normalized using (a) scaling (matrix-prep), (b) iterative correction (matrix-ic) [9] or (c) HiCNorm (matrix-hicnorm) [28]. A report with the plots of the normalized Hi-C counts as a function of the distance between the interacting partners (matrix-stats) is automatically generated for all methods. The resulting matrices are compared across all samples in terms of Pearson and Spearman correlation (compare-matrices and compare-matrices-stats). Boundary scores are calculated and the corresponding report with the Principal Component Analysis (PCA) is automatically generated (boundary-scores and boundary-scores-pca). Domains are identified using various TAD calling algorithms (domains) followed by comparison of TAD boundaries (compare-boundaries and compare-boundaries-stats). A report with the statistics of boundary comparison is also automatically generated. Hi-C visualization of user-defined genomic regions is performed using HiCPlotter (hicplotter) [23]. Specific chromatin interactions (interactions) are detected and annotated (annotations). Finally, enrichment of top interactions in certain chromatin marks, transcription factors, etc., provided by the user is automatically calculated (annotations-stats)
a Computational trails. Each combination of tools and parameter settings can be imagined as a unique computational "trail" that is executed simultaneously with all the other possible trails to create a collection of output objects. As an example, one of these possible trails is presented in red. The raw reads were aligned, filtered and then binned in 40 kb resolution matrices. Our own naïve matrix scaling method was then used for matrix correction and domains were called using TopDom [31]. b HiC-bench pipeline task architecture. All pipeline tasks are performed by a single R script, "pipeline-master-explorer.r". This script generates output objects based on all combinations of input objects and parameter scripts while taking into account the split variable, group variable and tuple settings. The output objects are stored in the corresponding "results" directory. As an example, domain calling for IMR90 is presented. The filtered reads of the IMR90 Hi-C sample (digested with HindIII) are used as input. The pipeline-master-explorer script tests if TAD calling with these settings has been performed and if not it calls the domain calling wrapper script (code/hicseq-domains.tcsh) with the corresponding parameters (e.g., params/params.armatus.gamma_0.5.tcsh). After the task is complete, the output is stored in the corresponding "results" directory
The HiC-bench toolkit
HiC-bench performs various tasks of Hi-C analysis ranging from read alignment to annotation of specific interactions and visualization. We have developed two new tools, gtools-hic and hic-matrix, to execute the multiple tasks in the HiC-bench pipeline, but we have also integrated existing tools to allow comparative and complementary analyses and facilitate benchmarking. More specifically, the alignment task is performed either with Bowtie2 [26] or with the "align" function of gtools-hic, our newest addition to GenomicTools [29]. Likewise, filtering, creation of Hi-C tracks and generation of Hi-C contact matrices are performed using the functions "filter", "bin/convert" and "matrix" of gtools-hic respectively. For advanced users, we have implemented a series of novel features for these common Hi-C analysis tasks. For example, the operation "matrix" of gtools-hic allows generation of arbitrary chimeric Hi-C contact matrices, a feature particularly useful for the study of the effect of chromosomal translocations on chromatin interactions. Another example is the generation of distance-restricted matrices (up to some maximum distance off the diagonal) in order to save storage space and reduce memory usage at fine resolutions. For matrix correction we use either published algorithms (iterative correction (IC/ICE) [9], HiCNorm [28]) or our "naïve scaling" method where we divide the Hi-C counts by (a) the total number of (usable) reads, and (b) the "effective length" [8, 28] of each genomic bin. We also integrated published TAD callers like DI [5], Armatus [30], TopDom [31], insulation index (Crane) [32] and our own TAD calling method (similar but not identical to contrast index [33, 34]) implemented as the "domains" operation in hic-matrix. Additionally, the "domains" operation produces genome-wide boundary scores using multiple methods and allowing flexibility in choosing parameters. Boundaries are simply defined as local maxima of the boundary scores. 
For the detection of specific interactions, we introduce the "loops" function of hic-matrix, while GenomicTools is used for annotation of these interactions with gene names, ChIP-seq and other user-defined data. Finally, we implemented a wrapper for HiCPlotter, taking advantage of its advanced visualization features in order to allow the user to quickly generate snapshots of areas of interest in batch. The HiC-bench toolkit is summarized in Table 2. All the tools we developed appear in bold. Further information on the toolkit is provided in the User Manual found online and in the Supplemental Information section.
Table 2 The HiC-bench toolkit
Main concepts and pipeline architecture
We built our platform based on principles outlined in scientific workflow systems such as Kepler [35], Taverna [36] and VisTrails [37]. The main idea behind our platform is the ability to track data provenance [37, 38]: the origin of the data, computational tasks, tool versions and parameter settings used in order to generate a certain output (or collection of outputs) from a given input (or collection of inputs). Thus, our pipeline ensures reproducibility, which is a particularly important feature for such a complex computational task. In addition, HiC-bench enables combinatorial analysis and parameter exploration by implementing the idea of computational "trails": a unique combination of inputs, tools and parameter values can be imagined as a unique (computational) trail that is followed simultaneously with all the other possible trails in order to generate a collection of output objects (Fig. 2a). Our platform consists of three main components: (a) data, (b) code and (c) pipelines. These components are organized in respective directories in our local repository, and synchronized with a remote GitHub repository for public access. The data directory is used to store data that would be used by any analysis, for example genome-related data such as DNA sequences and indices (e.g., Bowtie2), gene annotations and, in general, any type of data that is required for the analysis. The code directory is used to store scripts, source code and executables. More details about the directory structure can be found in the User Manual. Finally, the "pipelines" directory is used to store the structure of each pipeline. Here, we will focus on our Hi-C pipeline, but we have also implemented a ChIP-seq pipeline, which is very useful for integrating CTCF and histone modification ChIP-seq data with Hi-C data. The structure of the pipeline is presented to the user as a numbered list of directories, each one corresponding to one operation (or task) of the pipeline. As shown in Fig. 1, our Hi-C pipeline currently consists of several tasks, starting with alignment and ending with the identification of specific DNA-DNA interactions and their annotation with ChIP-seq and other genome-wide data (see also Table 2 and Additional file 2: Table S1). We will examine these tasks in detail in the Results section of this manuscript.
Parameter exploration, input and output objects
In conventional computational pipelines, several computational tasks (operations) are executed on their required inputs. However, in existing genomics pipelines, each task generates a single result object (e.g., TAD calling using one method with fixed parameter settings) which is then used by downstream tasks. To allow full parameter (and method/tool) exploration, we introduce instead a data flow model, where every task may accommodate an arbitrary number of output objects. Downstream tasks will then operate on all computed objects generated by the tasks they depend on. Pipeline tasks are implemented as shown in the diagram of Fig. 2b. First, input objects are filtered according to user-specified criteria (e.g., TAD calling is only done for Hi-C contact matrices at 40 kb resolution). Then, pipeline-master-explorer (implemented as an R script; see Additional file 1 User Manual for usage and input arguments) generates the commands that create all desired output objects. In principle, all combinations of input objects with all parameter settings will be created, subject to user-defined filtering criteria. In the interest of extensibility, new pipeline tasks can be conveniently implemented using a single-line pipeline-master-explorer command (see Additional file 3: Table S2), provided that wrapper scripts for each task (e.g., TAD calling using TopDom) have been properly set up. In the simplest scenario, any task in our pipeline will generate computational objects for each combination of parameter file and input objects obtained from upstream tasks. For example, suppose the aligned reads from 12 Hi-C datasets are filtered using three different parameter settings, and that we need to create contact matrices at four resolutions (1 Mb, 100 kb, 40 kb and 10 kb). Then, the number of output objects (contact matrices in this case) will be 144 (i.e., 12 × 3 × 4). 
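The output-object count in this example can be sketched as a cross-product of computational trails (an illustration only, not HiC-bench code; sample and parameter names are hypothetical):

```python
from itertools import product

# Every combination of dataset, filtering setting and resolution is one
# computational "trail" that produces one output contact matrix.
datasets = [f"sample{i:02d}" for i in range(1, 13)]   # 12 Hi-C datasets
filter_settings = ["strict", "default", "lenient"]    # 3 filtering settings
resolutions = ["1Mb", "100kb", "40kb", "10kb"]        # 4 matrix resolutions

trails = list(product(datasets, filter_settings, resolutions))
print(len(trails))  # 144 output contact matrices (12 x 3 x 4)
```
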
Although many computational scenarios can be realized by this simple one-to-one mapping of input–output objects, more complex scenarios are frequently encountered, as described in the next section.
Filtering, splitting and grouping input objects into new output objects
Oftentimes, a simple one-to-one mapping of input objects to output objects is not desirable. For this reason, we introduce the concepts of filtering, splitting and grouping of input objects which are used to modify the behavior of pipeline-master-explorer (see Fig. 2b). Filtering is required when some input objects are not relevant for a given task, e.g., TAD calling is not performed on 1 Mb-resolution contact matrices, and specific DNA-DNA interactions are not meaningful for resolutions greater than 10–20 kb. Splitting is necessary in some cases: for example, we split the input objects by genome assembly (hg19, mm10) when comparing contact matrices or domains across samples, since only matrices or domains from the same genome assembly can be compared directly. In our platform, the user is allowed to split a collection of input objects by any variable contained in the sample sheet (except fastq files), thus allowing user-defined splits of the data, such as by cell type or treatment. Complementary to the splitting concept, grouping permits the aggregation of a collection of input objects (sharing the same value of a variable defined in the sample sheet) into a single output object. For example, the user may want to create genome browser tracks or contact matrices of combined technical and/or biological replicates, or group all input objects (samples) together in tasks such as Principal Component Analysis (PCA) or alignment/filtering statistics.
Combinatorial objects
Even after introducing the concepts described above, more complex scenarios are possible as some tasks require the input of pairs (or triplets etc.) of objects. This feature has also been implemented in our pipeline (tuples in Fig. 2b) and is currently used in the compare-matrices and compare-boundaries tasks. However, it should be utilized wisely (for example in conjunction with filtering, splitting and grouping) because it may lead to a combinatorial "explosion" of output objects.
Parameter scripts
The design of our platform is motivated by the need to facilitate the use of different parameter settings for each pipeline task. For this reason, we have implemented wrapper scripts for each tool/method used in each task. For example, we have implemented a wrapper script for alignment, filtering, correcting contact matrices using IC or HiCNorm (separate wrappers), TAD calling using Armatus [30], TopDom [31], DI [5] and insulation index (Crane) [32] (separate wrappers). The main motivation is to hide most of the complexity inside the wrapper script and allow the user to modify the parameters using a simple but flexible parameter script. Unlike static parameter files, parameter scripts allow for dynamic calculation of parameters based on certain input variables (e.g., enzyme name, group name etc.). Within this framework, by adding and/or modifying simple parameter scripts, the user can explore the effect of different parameters (a) on the task directly affected by these parameters, and (b) on all dependent downstream tasks. Additionally, these parameter scripts serve as a record of parameters and tool versions that were used to produce the results, facilitating analysis reproducibility as well as documentation in scientific reports and manuscripts.
Results stored as computational trails
All the concepts described above have been implemented in a single R script named pipeline-master-explorer. This script maintains a database of input-output objects for each task, stored in a hidden directory under results (results/.db). It also creates a "run" script which is executed in order to generate all the desired results. All results are stored in the results directory in a tree structure that reveals the computational trail for each object (see examples shown in Fig. 2b and Additional file 3: Table S2). Therefore, the user can easily infer how each object was created, including what inputs and what parameters were used.
Initiating a new reproducible analysis
In the interest of data analysis reproducibility, any new analysis requires creating a copy of the code and pipeline structure into a desired location, effectively creating a branch. This way, any changes in the code repository will not affect the analysis and conversely, the user can customize the code according to the requirements of each project without modifying the code repository. Copying of the code and initiating a new analysis is done simply by invoking the script "pipeline-new-analysis.tcsh" as described in the User Manual.
Pipeline tasks
A pipeline consists of a number of (partially) ordered tasks that can be described by a directed acyclic graph which defines all dependencies. HiC-bench implements a total of 20 tasks as shown in the workflow of Fig. 1. In the analysis directory structure, each task is assigned its own subdirectory found inside the pipeline directory starting from the top level. This directory includes a symbolic link to the inputs of the analysis (fastq files, sample sheet, etc.), a link to the code, a directory (inpdirs) containing links to all dependencies, a directory containing parameter scripts (see below) and a "run" script which can be used to generate all the results of this task. The "run" scripts of each task are executed in the specified order by the master "run" script located at the top level (see Additional file 1 User Manual for details on pipeline directory structure).
Input data and the sample sheet
Before performing any analysis, a computational pipeline needs input data. All input data for our pipeline tasks are stored in their own "inputs" directory accessible at the top level (along with the numbered pipeline tasks) and via symbolic links from within the directories assigned to each task to allow easy access to the corresponding input data. A "readme" file explains how to organize the input data inside the inputs directory (see Additional file 1 User Manual for details). Briefly, the fastq subdirectory is used to store all fastq files, organized into one subdirectory per sample. Then, the sample sheet needs to be generated. This can be done automatically using the "create-sample-sheet.tcsh" script, but the user can also manually modify and expand the sample sheet with features beyond what is required. Currently required features are the sample name (to be used in all downstream analyses), fastq files (R1 and R2 in separate columns), genome assembly version (e.g., hg19, mm10) and restriction enzyme name (e.g., HindIII, NcoI). Adding more features, such as different group names (e.g., sample, cell type, treatment), allows the user to perform more sophisticated downstream analyses, such as grouping replicates for generating genome browser tracks, or splitting samples by genome assembly to compare boundaries (see previous section on grouping and splitting).
Executing the pipeline
The entire pipeline can be executed automatically by the "pipeline-execute.tcsh" script, as shown below:
code/code.main/pipeline-execute &lt;project name&gt; &lt;user e-mail address&gt;
where < project name > will be substituted by the name of the project and < user e-mail address > by the preferred e-mail address of the person who runs the analysis in order to be notified upon completion. The "pipeline-execute.tcsh" script essentially executes the "run" script for each task (following the specified order). At the completion of every task, the log files of all finished jobs are inspected for error messages. If error messages are found, the pipeline aborts with an error message.
Timestamping
Besides creating the "run" script used to generate all results, the "pipeline-master-explorer.r" script also checks whether existing output objects are up-to-date with respect to their dependencies (i.e., input objects and parameter scripts; this can be expanded to include code dependencies as well). Currently, the pipelines are set up so that out-of-date objects are not deleted and recomputed automatically, but are only presented to the user as a warning. The user can then choose to delete them manually and re-compute. The reason for this is to protect the user against accidentally repeating computationally demanding tasks (e.g., alignments) without first being given the chance to review why certain objects may be out-of-date. From a more philosophical point of view, and in the interest of keeping a record of all computations (when possible), the user may never want to modify parameter files or the code for a given project, but instead only add new parameter files. Then, no object will be out-of-date, and only new objects will need to be computed each time.
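The staleness check can be sketched roughly as follows (a simplified stand-in for the logic in pipeline-master-explorer.r, not the actual code):

```python
import os

def out_of_date(output_path, dependencies):
    # An output object is considered stale when it is missing, or when any
    # dependency (input object or parameter script) has a newer
    # modification time than the output itself.
    if not os.path.exists(output_path):
        return True
    out_mtime = os.path.getmtime(output_path)
    return any(os.path.getmtime(dep) > out_mtime for dep in dependencies)
```
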
Alignment and filtering
Paired-end reads were mapped to the reference genome (hg19 or mm10) using Bowtie2 [26]. Reads with low mapping quality (MAPQ < 30) were discarded. Local (rather than end-to-end) alignment was used because Hi-C read pairs may contain chimeric reads spanning two (non-consecutive) interacting fragments. This approach yielded a high percentage of mappable reads (>95%) for all datasets (Additional file 4: Figure S1). Mapped read pairs were subsequently filtered for known artifacts of the Hi-C protocol, such as self-ligation or mapping too far from the enzyme's known cutting sites. More specifically, the following were discarded: reads mapping to multiple locations on the reference genome (multihit), double-sided reads that mapped to the same enzyme fragment (ds-same-fragment), reads whose 5'-end mapped too far from the enzyme cutting site (ds-too-far), reads with only one mappable end (single-sided) and unmapped reads (unmapped). Read pairs mapping very close to each other in linear genomic distance (less than 25 kilobases, ds-too-close) as well as duplicate read pairs (ds-duplicate-intra and ds-duplicate-inter) were also discarded. In Additional file 4: Figure S1, we show detailed paired-end read statistics for the Hi-C datasets used in this study. We include the read numbers (Additional file 4: Figure S1A) and their corresponding percentages (Additional file 4: Figure S1B). Eventually, approximately 10–50% of read pairs passed all filtering criteria and were used for downstream analysis (Additional file 4: Figure S1B). The statistics report is automatically generated for all input samples. The tools and parameter settings used for the alignment and filtering tasks are fully customizable and can be defined in the corresponding parameter files.
Contact matrix generation, normalization and correction
The read pairs that passed the filtering task were used to create Hi-C contact matrices for all samples. The elements of each contact matrix correspond to pairs of genomic "bins", and the value of each element is the number of read pairs aligning to the corresponding pair of genomic regions. In this study, we used various resolutions, ranging from fine (10 kb) to coarse (1 Mb). The resulting matrices either remained unprocessed (filtered), or were corrected using different methods, including HiCNorm [28], iterative correction (IC or ICE) [9] and "naïve scaling". In Additional file 5: Figure S2, we present the average Hi-C count as a function of the distance between the interacting fragments, separately for each Hi-C matrix, for uncorrected (filtered) and IC-corrected matrices.
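Contact-matrix binning as described above can be sketched for a single chromosome as follows (illustration only; the actual "matrix" operation of gtools-hic handles whole genomes and supports distance-restricted storage):

```python
import numpy as np

def contact_matrix(read_pairs, chrom_length, resolution):
    # Each genomic "bin" spans `resolution` bp; each matrix element counts
    # the read pairs whose two ends fall into the corresponding bin pair.
    n_bins = chrom_length // resolution + 1
    m = np.zeros((n_bins, n_bins))
    for pos1, pos2 in read_pairs:
        i, j = pos1 // resolution, pos2 // resolution
        m[i, j] += 1
        if i != j:
            m[j, i] += 1  # store the symmetric entry as well
    return m
```
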
Comparison of contact matrices
Our pipeline allows direct comparison and visualization of the generated Hi-C contact matrices. More specifically, using our hic-matrix tool, all pairwise Pearson and Spearman correlations were automatically calculated for each (a) input sample, (b) resolution, and (c) matrix correction method. The corresponding correlograms were automatically generated using the corrgram R package [39]. A representative example is shown in Additional file 6: Figure S3. The correlograms summarizing the pairwise Pearson correlations for all samples used in this study are presented before and after matrix correction using the iterative correction algorithm. These plots are very useful because the user can quickly assess the similarity between technical and biological replicates as well as differences between various cell types. As shown before (Additional file 6: Figure S3 in [5]), iterative correction improves the correlation between enzymes at the expense of a decreased correlation between samples prepared using the same enzyme.
Boundary scores
Topological domains (TADs) are defined as genomic neighborhoods of highly interacting chromatin, with relatively infrequent inter-domain interactions [5, 40, 41]. Topological domains are demarcated by boundaries, i.e., genomic regions bound by insulators that hamper DNA contacts across adjacent domains. For each genomic position, at a given resolution (typically 40 kb or less), we define a "boundary score" to quantify the insulation strength of this position. The higher the boundary score, the higher the insulation strength and the probability that this region actually acts as a boundary between adjacent domains. The idea of boundary scores is further illustrated in Additional file 7: Figure S4, where two adjacent TADs are shown. The upstream TAD on the left (L) is separated from the downstream TAD on the right (R) by a boundary (black circle). We define two parameters: the distance from the diagonal of the Hi-C contact matrix to be excluded from the boundary score calculation (δ) (not shown) and the maximum distance from the diagonal to be considered (d). The corresponding parameter values can be selected by the user. For this analysis, we used δ = 0 and d = 2 Mb as suggested before [5]. In addition to the published directionality index [5], we defined and computed the "inter", "intra-max" and "ratio" scores, where L and R denote the intra-domain regions on either side of the candidate boundary and X the inter-domain region of contacts between them:
$$ \mathrm{inter} = \mathrm{mean}\left(\mathrm{X}\right) $$
$$ {\mathrm{intra}}_{\max } = \max \left(\mathrm{mean}(L),\ \mathrm{mean}(R)\right) $$
$$ \mathrm{ratio} = {\mathrm{intra}}_{\max }/\mathrm{inter} $$
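A minimal sketch of these scores for a candidate boundary at bin b (with δ = 0; the exact shapes of the L, R and X regions used by hic-matrix are our assumption for this illustration):

```python
import numpy as np

def boundary_scores(m, b, d):
    # L and R: intra-domain contacts up to d bins on either side of bin b;
    # X: inter-domain contacts crossing the candidate boundary.
    L = m[b - d:b, b - d:b]
    R = m[b:b + d, b:b + d]
    X = m[b - d:b, b:b + d]
    inter = X.mean()
    intra_max = max(L.mean(), R.mean())
    return inter, intra_max, intra_max / inter
```

A strong boundary yields a high "ratio": dense contacts within L and R, sparse contacts in X.
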
Principal component analysis (PCA) of boundary scores across samples in this study, before and after matrix correction, shows that biological replicates tend to cluster together, either in the case of filtered or corrected (IC) matrices (Additional file 8: Figure S5).
Topological domains
Topologically-associated domains (TADs) are increasingly recognized as an important feature of genome organization [5]. Despite the importance of TADs in genome organization, very few Hi-C pipelines have integrated TAD calling (e.g., TADbit [19]). In HiC-bench, we have integrated TAD calling as a pipeline task and we demonstrate this integration using different TAD callers: (a) Armatus [30], (b) TopDom [31], (c) DI [5], (d) insulation index (Crane) [32] and (e) our own hic-matrix (domains). Our pipeline makes it straightforward to plug in additional TAD callers, by installing these tools and setting up the corresponding wrapper scripts. HiC-bench also facilitates the direct comparison of TADs across samples by automatically calculating the number of TAD boundaries and all the pairwise overlaps of TAD boundaries across all inputs, generating the corresponding graphs (as in the case of matrix correlations described in a previous section). We define boundary overlap as the ratio of the intersection of boundaries between two replicates (A and B) over the union of boundaries in these two replicates, as shown in the equation below:
$$ \mathrm{boundary}\_\mathrm{overlap} = \left(\mathrm{A}\cap \mathrm{B}\right)/\left(\mathrm{A}\cup \mathrm{B}\right) $$
For the boundary overlap calculation, we extended each boundary by 40 kb on both sides (+/−40 kb flanking region, i.e., the size of one bin). The fact that HiC-bench allows simultaneous exploration of all parameter settings for all installed TAD-calling algorithms greatly facilitates parameter exploration and optimization, as well as assessment of algorithm effectiveness. Pairwise comparisons of boundaries (boundary overlaps) across samples are shown in Fig. 3 and Additional file 9: Figure S6.
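One way to operationalize this overlap with the ±40 kb flank is sketched below (an assumption made for illustration, not necessarily the HiC-bench implementation):

```python
def boundary_overlap(a, b, flank=40_000):
    # Jaccard-style overlap: a boundary counts as shared when a boundary
    # of the other replicate lies within `flank` bp of it.
    a, b = set(a), set(b)
    shared = sum(1 for p in a if any(abs(p - q) <= flank for q in b))
    union = len(a) + len(b) - shared
    return shared / union if union else 0.0
```
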
Fig. 3 Comparison of topological domain calling methods subject to Hi-C contact matrix preprocessing by simple filtering or iterative correction (IC). The methods were assessed in terms of boundary overlap between replicates (a), change (%) in mean boundary overlap after matrix correction (b), change (%) in the standard deviation of mean overlap across replicates after matrix correction (c) and number of identified topological domains per cell type (d). The different colors correspond to the different callers. Gradients of the same color are used for different values of the same parameter, ranging from low (light color) to high (dark color) values. The TAD callers along with the corresponding parameter settings are presented in the legend. For this analysis all available read pairs were used
In our pipeline, we also take advantage of the great visualization capabilities offered by the recently released HiCPlotter [23], in order to allow the user to visualize Hi-C contact matrices along with TADs (in triangle format) for multiple genomic regions of interest. The user can also add binding profiles in BedGraph format for factors (e.g., CTCF), boundary scores, histone marks of interest (e.g., H3K4me3, H3K27ac) etc. An example is shown in Additional file 10: Figure S7, where an area of the contact matrix of human embryonic stem cells (H1) (HindIII) is presented along with the corresponding TADs (triangles), various boundary scores, the CTCF binding profile and annotations of selected genomic elements, before and after matrix correction (IC). The integration of HiCPlotter in our pipeline, allows the user to easily create publication-quality figures for multiple areas of interest simultaneously.
Specific interactions, annotations and enrichments
The plummeting costs of next-generation sequencing have resulted in a dramatic increase in the resolution achieved in Hi-C experiments. While the original Hi-C study reported interaction matrices of 1 Mb resolution [2], recently 1 kb resolution was reported [42]. Thus, the characterization and annotation of specific genomic interactions from Hi-C data is an important feature of a modern Hi-C analysis pipeline. HiC-bench generates a table of the interacting loci based on parameters defined by the user. These parameters include the resolution, the lowest number of read pairs required per interacting area as well as the minimum distance between the interacting partners. The resulting table contains the coordinates of the interacting loci, the raw count of interactions between them, the number of interactions after "scaling" and the number of interactions between the partners after distance normalization (observed Hi-C counts normalized by expected counts as a function of distance). This table is further annotated with the gene names or the factors (e.g., CTCF) and histone modification marks (e.g., H3K4me1, H3K27ac, H3K4me3) that overlap with the interacting loci. The user can provide bed files with the features of interest to be used for annotation. As an example, the enrichment of chromatin marks in the top 50,000 chromatin interactions in the H1 and IMR90 samples is presented in Additional file 11: Figure S8.
The main software requirements are: Bowtie2 aligner [26], Python (2.7 or later) (along with Numpy, Scipy and Matplotlib libraries), R (3.0.2) [43] and various R packages (lattice, RColorBrewer, corrplot, reshape, gplots, preprocessCore, zoo, reshape2, plotrix, pastecs, boot, optparse, ggplot2, igraph, Matrix, MASS, flsa, VennDiagram, futile.logger and plyr). More details on the versions of the packages can be found in the User Manual (sessionInfo()). In addition, installation of mirnylib Python library [44] is required for matrix balancing based on IC (ICE). The pipeline has been tested on a high-performance computing cluster based on Sun Grid Engine (SGE). The operating system used was Redhat Linux GNU (64 bit).
We used HiC-bench to analyze several published Hi-C datasets and the results of our analysis are presented below. Additionally, we performed a comprehensive benchmark of existing and new TAD callers exploring different matrix correction methods, parameter settings and sequencing depths. Our results can be reproduced by re-running the corresponding pipeline snapshot available upon request as a single compressed archive file (too big to include as a Supplemental file).
Comprehensive reanalysis of available Hi-C datasets using HiC-bench
Our platform is designed to facilitate and streamline the analysis of a large number of available Hi-C datasets in batch. Thus, we collected and comprehensively analyzed multiple Hi-C samples from three large studies [5, 42, 45]. From the first study we analyzed IMR90 (HindIII) samples; from the second we analyzed Hi-C samples from lymphoblastoid cells (GM12878), human lung fibroblasts (IMR90 (MboI)), erythroleukemia cells (K562), chronic myelogenous leukemia (CML) cells (KBM-7) and keratinocytes (NHEK); and from the third, we analyzed samples from human embryonic stem cells (H1) and the embryonic stem cell-derived lineages from that study, including mesendoderm, mesenchymal stem cells, neural progenitor cells and trophectoderm cells. All datasets yielded at least 40 million usable intra-chromosomal read pairs in at least two biological replicates. We performed extensive quality control on all datasets, calculating the read counts and percentages per classification category (Additional file 4: Figure S1), the attenuation of Hi-C signal over genomic distance (Additional file 5: Figure S2), the correlation of Hi-C matrices before and after matrix correction (Additional file 6: Figure S3), the similarity of boundary scores (Additional file 8: Figure S5) and all pairwise boundary overlaps across samples (Additional file 9: Figure S6). In addition, we performed a comprehensive benchmarking of our own and published TAD callers, across all reanalyzed Hi-C datasets. The results of our benchmark are presented in the following sections.
Iterative correction of Hi-C contact matrices improves reproducibility of TAD boundaries
Iterative correction has been shown to correct for known biases in Hi-C [9]. Thus, we hypothesized that IC may increase the reproducibility of TAD calling. We performed a comprehensive analysis calculating the TAD boundary overlaps for biological replicates of the same sample (as described in the Methods section), using different TAD callers and different main parameter values for each TAD caller (Fig. 3a). After comparing TAD boundary overlaps between filtered (uncorrected) and IC-corrected matrices, we observed an improvement in the boundary overlap when corrected matrices were used, irrespective of TAD caller and parameter settings (Fig. 3b). The only exception was DI. Careful examination of the overlaps per sample revealed that IC introduced outliers only in the case of DI (in general, the opposite was true for the other callers). We hypothesize that IC may occasionally negatively affect the computation of the directionality index, especially because its calculation depends on a smaller number of bins (a 1-dimensional line) compared to the rest of the methods (2-dimensional triangles). In addition to the increase in the mean value of boundary overlap upon correction, we observed that the standard deviation of boundary overlaps among replicates decreased (again, with the exception of DI) (Fig. 3c). While this seems to be the trend for almost all TAD caller/parameter value combinations, the effect of correction on variance is more profound in certain cases (e.g., hic-intra-max) than others. It is also worth mentioning that an increased size of the insulation window (in the case of Crane), of the resolution parameter γ (Armatus) or of the distance d (hic-inter, hic-intra-max, hic-ratio) may in certain cases result in increased boundary overlap (e.g., Armatus), but this is not generalizable (e.g., hic-intra-max).
Interestingly, increased TAD boundary overlap does not necessarily mean increased consistency in the number of predicted TADs across sample types, as would be expected since TADs are largely invariant across cell types [5]. For example, the TAD calling algorithm which is based on insulation score (Crane), predicted similar TAD overlaps and similar TAD numbers for different insulation windows (ranging from 0.5 Mb to 2 Mb), whereas Armatus performed well in terms of TAD boundary reproducibility (Fig. 3a) but the corresponding predicted TAD numbers varied considerably (Fig. 3d). This may be partly due to the nature of the Armatus algorithm, as it has been built to reveal multiple levels of chromatin organization (TADs, sub-TADs etc.). We conclude that while iterative correction improves the reproducibility of TAD boundary detection across replicates, the number of predicted TADs should be also taken into account when selecting TAD calling method for downstream analysis. The method of choice should be the one that is robust in terms of both reproducibility and number of predicted TADs.
Increased sequencing depth improves the reproducibility of TAD boundaries
After selecting, for each TAD caller, the parameter setting that optimized TAD boundary overlap between biological replicates of the same sample, we also investigated the effect of sequencing depth on the reproducibility of TAD boundary detection. Since some of the input samples were limited to only 40 million usable intra-chromosomal Hi-C read pairs, we resampled 10 million, 20 million and 40 million read pairs from each sample and evaluated the effect of increasing sequencing depth on TAD boundary reproducibility. The results are depicted in Fig. 4a. We noticed that increased sequencing depth resulted in increased TAD boundary overlap, regardless of the TAD calling algorithm used (Fig. 4a, c). As far as TAD numbers are concerned, increased sequencing depth decreased TAD number variability for certain callers (e.g., hic-ratio) but not in all cases (e.g., Armatus) (Fig. 4b). In many cases, increased sequencing depth decreased the variance of TAD boundary overlap among replicates (Fig. 4c). In summary, based on this benchmark, we recommend that Hi-C samples be sequenced deeply, as sequencing depth affects TAD calling reproducibility.
Fig. 4 Comparison of topological domain calling methods for different preprocessing methods and sequencing depths. TAD calling methods were assessed in terms of boundary overlap between replicates (a), number of identified topological domains (b) and boundary overlap across replicates upon increasing sequencing depth (c) for different matrix preprocessing (filtered and IC-corrected) and different sequencing depths (10 million, 20 million and 40 million reads). For TAD calling, only the optimal caller/parameter value pairs are shown (defined as the ones achieving the maximum boundary overlap for IC and 40 million reads). The boxplot and line colors correspond to the different TAD callers
Recently, several computational tools and pipelines have been developed for Hi-C analysis. Some focus on matrix correction, others on the detection of specific chromatin interactions and their differences across conditions, and others on the visualization of these interactions. However, very few of these tools offer a complete Hi-C analysis (e.g., HiC-Pro) addressing tasks that range from alignment to interaction annotation. HiC-bench is a comprehensive Hi-C analysis pipeline with the ability to process many samples in parallel and to record and visualize the results of each task, thus facilitating troubleshooting and further analyses. It integrates existing tools as well as new tools that we developed for certain Hi-C analysis tasks. In addition, HiC-bench focuses on parameter exploration, reproducibility and extensibility. All parameter settings used in each pipeline task are automatically recorded, while future tools can be easily added using the supplied wrapper template. More importantly, HiC-bench is the only Hi-C pipeline so far that allows extensive parameter exploration, thus facilitating direct comparison of the results obtained by different tools, methods and parameters. This unique feature helps users test the robustness of the analysis, optimize the parameter settings and eventually obtain reliable and biologically meaningful results. To demonstrate the usefulness of HiC-bench, we performed a comprehensive benchmark of popular and newly introduced TAD callers, varying the matrix preprocessing (filtered matrices or matrices corrected with the IC method), the sequencing depth, and the value of the main parameter of each TAD caller, which is usually the window used for the calculation of the directionality index or insulation score. We found that matrix correction has a positive effect on the boundary overlap between replicates and that increased sequencing depth leads to higher boundary overlap.
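The parameter exploration highlighted here amounts to a Cartesian sweep over caller × preprocessing × depth, with every combination recorded alongside its result. A minimal sketch follows; the scoring function is a mock placeholder for launching a real TAD caller and measuring replicate boundary overlap:

```python
from itertools import product

def run_combination(caller, preprocessing, depth_millions):
    # placeholder for a real caller run; returns a mock overlap score
    base = {"armatus": 0.60, "crane": 0.70, "hicratio": 0.75}[caller]
    bonus = 0.05 if preprocessing == "ic" else 0.0   # IC helps overlap
    return base + bonus + 0.002 * depth_millions     # depth helps too

grid = product(["armatus", "crane", "hicratio"],     # caller
               ["filtered", "ic"],                   # matrix preprocessing
               [10, 20, 40])                         # depth (million reads)
results = [dict(caller=c, preprocessing=p, depth=d,
                overlap=run_combination(c, p, d)) for c, p, d in grid]

# with every setting recorded, picking the most reproducible one is a max()
best = max(results, key=lambda r: r["overlap"])
print(best["caller"], best["preprocessing"], best["depth"])  # hicratio ic 40
```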
In conclusion, HiC-bench is an easy-to-use framework for systematic, comprehensive, integrative and reproducible analysis of Hi-C datasets. We expect that use of our platform will facilitate current analyses and enable scientists to further develop and test interesting hypotheses in the field of chromatin organization and epigenetics.
Abbreviations
DI: Directionality index
IC or ICE: Iterative correction
TAD: Topological domain or topologically associating domain
Dekker J, Marti-Renom MA, Mirny LA. Exploring the three-dimensional organization of genomes: interpreting chromatin interaction data. Nat Rev Genet. 2013;14:390–403.
Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.
Belton J-M, McCord RP, Gibcus JH, Naumova N, Zhan Y, Dekker J. Hi-C: a comprehensive technique to capture the conformation of genomes. Methods. 2012;58:268–76.
Fraser J, Ferrai C, Chiariello AM, Schueler M, Rito T, Laudanno G, et al. Hierarchical folding and reorganization of chromosomes are linked to transcriptional changes in cellular differentiation. Mol Syst Biol. 2015;11:852.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485:376–80.
Phillips-Cremins JE, Sauria MEG, Sanyal A, Gerasimova TI, Lajoie BR, Bell JSK, et al. Architectural protein subclasses shape 3D organization of genomes during lineage commitment. Cell. 2013;153:1281–95.
Vietri Rudan M, Barrington C, Henderson S, Ernst C, Odom DT, Tanay A, et al. Comparative Hi-C Reveals that CTCF Underlies Evolution of Chromosomal Domain Architecture. Cell Rep. 2015;10:1297–309.
Yaffe E, Tanay A. Probabilistic modeling of Hi-C contact maps eliminates systematic biases to characterize global chromosomal architecture. Nat Genet. 2011;43:1059–65.
Imakaev M, Fudenberg G, McCord RP, Naumova N, Goloborodko A, Lajoie BR, et al. Iterative correction of Hi-C data reveals hallmarks of chromosome organization. Nat Meth. 2012;9:999–1003.
Cournac A, Marie-Nelly H, Marbouty M, Koszul R, Mozziconacci J. Normalization of a chromosomal contact map. BMC Genomics. 2012;13:436.
Koszul R. HiC-Box. https://github.com/rkoszul/HiC-Box. Accessed 20 Feb 2016.
Servant N, Varoquaux N, Lajoie BR, Viara E, Chen C-J, Vert J-P, et al. HiC-Pro: an optimized and flexible pipeline for Hi-C data processing. Genome Biol. 2015;16:259.
Li W, Gong K, Li Q, Alber F, Zhou XJ. Hi-Corrector: a fast, scalable and memory-efficient package for normalizing large-scale Hi-C data. Bioinformatics. 2015;31:960–2.
Castellano G, Le Dily F, Hermoso Pulido A, Beato M, Roma G. Hi-Cpipe: a pipeline for high-throughput chromosome capture. bioRxiv. 2015. doi:10.1101/020636.
Sauria ME, Phillips-Cremins JE, Corces VG, Taylor J. HiFive: a tool suite for easy and efficient HiC and 5C data analysis. Genome Biol. 2015;16:237.
Heinz S, Benner C, Spann N, Bertolino E, Lin YC, Laslo P, et al. Simple combinations of lineage-determining transcription factors prime cis-regulatory elements required for macrophage and B cell identities. Mol Cell. 2010;38:576–89.
Wingett S, Ewels P, Furlan-Magaril M, Nagano T, Schoenfelder S, Fraser P, et al. HiCUP: pipeline for mapping and processing Hi-C data. F1000Res. 2015;4:1310.
Krueger F, Andrews SR. SNPsplit: Allele-specific splitting of alignments between genomes with known SNP genotypes. F1000Res. 2016;5:1479.
Serra F, Baù D, Filion G, Marti-Renom MA. Structural features of the fly chromatin colors revealed by automatic three-dimensional modeling. bioRxiv. 2016. doi:10.1101/036764.
Schmid MW, Grob S, Grossniklaus U. HiCdat: a fast and easy-to-use Hi-C data analysis tool. BMC Bioinf. 2015;16:390–6.
Hwang Y-C, Lin C-F, Valladares O, Malamon J, Kuksa PP, Zheng Q, et al. HIPPIE: a high-throughput identification pipeline for promoter interacting enhancer elements. Bioinformatics. 2015;31:1290–2.
Phanstiel DH, Boyle AP, Araya CL, Snyder MP. Sushi.R: flexible, quantitative and integrative genomic visualizations for publication-quality multi-panel figures. Bioinformatics. 2014;30:2808–10.
Akdemir KC, Chin L. HiCPlotter integrates genomic data with interaction matrices. Genome Biol. 2015;16:198.
Editorial. Rebooting review. Nat Biotechnol. 2015;33:319.
Goecks J, Nekrutenko A, Taylor J, Galaxy Team. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome Biol. 2010;11(8):R86.
Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Meth. 2012;9:357–9.
Zhou X, Lowdon RF, Li D, Lawson HA, Madden PAF, Costello JF, et al. Exploring long-range genome interactions using the WashU Epigenome Browser. Nat Meth. 2013;10:375–6.
Hu M, Deng K, Selvaraj S, et al. HiCNorm: removing biases in Hi-C data via Poisson regression. Bioinformatics. 2012;28:3131–3.
Tsirigos A, Haiminen N, Bilal E, Utro F. GenomicTools: a computational platform for developing high-throughput analytics in genomics. Bioinformatics. 2012;28:282–3.
Filippova D, Patro R, Duggal G, Kingsford C. Identification of alternative topological domains in chromatin. Algorithms Mol Biol. 2014;9:14.
Shin H, Shi Y, Dai C, Tjong H, Gong K, Alber F, et al. TopDom: an efficient and deterministic method for identifying topological domains in genomes. Nucleic Acids Res. 2016;44(7):e70.
Crane E, Bian Q, McCord RP, Lajoie BR, Wheeler BS, Ralston EJ, et al. Condensin-driven remodelling of X chromosome topology during dosage compensation. Nature. 2015;523:240–4.
Van Bortle K, Nichols MH, Li L, Ong C-T, Takenaka N, Qin ZS, et al. Insulator function and topological domain border strength scale with architectural protein occupancy. Genome Biol. 2014;15:R82.
Alekseyenko AA, Walsh EM, Wang X, Grayson AR, Hsi PT, Kharchenko PV, et al. The oncogenic BRD4-NUT chromatin regulator drives aberrant transcription within large topological domains. Genes Dev. 2015;29:1507–23.
Ludäscher B, Altintas I, Berkley C, Higgins D, Jaeger E, Jones M, et al. Scientific workflow management and the Kepler system. Concurrency Comput. 2006;18:1039–65.
Oinn T, Addis M, Ferris J, Marvin D, Senger M, Greenwood M, et al. Taverna: a tool for the composition and enactment of bioinformatics workflows. Bioinformatics. 2004;20:3045–54.
Freire J. Making computations and publications reproducible with VisTrails. Comput Sci Eng. 2012;14:18–25.
Bavoil L, Callahan SP, Crossno PJ, Freire J, Scheidegger CE, Silva CT, et al. VisTrails: enabling interactive multiple-view visualizations. VIS 05 IEEE; 2005. pp. 135–42.
Wright K. Plot a Correlogram. R package. http://CRAN.R-project.org/package=corrgram
Sexton T, Yaffe E, Kenigsberg E, Bantignies F, Leblanc B, Hoichman M, et al. Three-dimensional folding and functional organization principles of the drosophila genome. Cell. 2012;148:458–72.
Nora EP, Lajoie BR, Schulz EG, Giorgetti L, Okamoto I, Servant N, et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature. 2012;485:381–5.
Rao SSP, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, et al. A 3D Map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.
R Core Team. R: A language and environment for statistical computing. R Foundation for statistical Computing Vienna, Austria 2016. https://www.R-project.org/
mirnylib. https://bitbucket.org/mirnylab/mirnylib. Accessed 20 May 2016.
Dixon JR, Jung I, Selvaraj S, Shen Y, Antosiewicz-Bourget JE, Lee AY, et al. Chromatin architecture reorganization during stem cell differentiation. Nature. 2015;518:331–6.
Aristotelis Tsirigos was supported by a Research Scholar Grant, RSG-15-189-01 - RMC from the American Cancer Society and a Leukemia & Lymphoma Society New Idea Award, 8007–17. We would like to thank Dennis Shasha and Juliana Freire for inspiring discussions on data flows. We would also like to thank Kadir Caner Akdemir for useful discussions on the usage of HiCPlotter, and the NYU Applied Bioinformatics Laboratories for providing bioinformatics support and helping with the analysis and interpretation of the data. This work has used computing resources at the High Performance Computing Facility (HPCF) at the NYU Langone Medical Center.
The study was supported by a Research Scholar Grant, RSG-15-189-01 - RMC from the American Cancer Society and a Leukemia & Lymphoma Society New Idea Award, 8007–17 to Aristotelis Tsirigos (AT). NYU Genome Technology Center (GTC) is a shared resource, partially supported by the Cancer Center Support Grant, P30CA016087, at the Laura and Isaac Perlmutter Cancer Center.
Published Hi-C data were downloaded from Gene Expression Omnibus, using the corresponding accession numbers: GSE35156 [5], GSE63525 [42] and GSE52457 [45].
HiC-bench source code is freely available on GitHub and Zenodo.
Project name: HiC-bench
Project home page: https://github.com/NYU-BFX/hic-bench/wiki
Archived version: https://zenodo.org/badge/latestdoi/20915/NYU-BFX/hic-bench
Operating system: Redhat Linux GNU (64 bit)
Programming language: R, C++, Python, Unix shell scripts
Other requirements: Mentioned in "Software requirements" section and the manual.
Any restrictions to use by non-academics: None.
CL performed computational analyses, generated figures and implemented certain wrapper scripts. SK wrote the user manual. PN and IA offered biological insights and helped with the interpretation of Hi-C data. AT designed and implemented the pipeline. CL and AT wrote the manuscript. All authors read and approved the final manuscript.
Department of Pathology, NYU School of Medicine, New York, NY, 10016, USA
Charalampos Lazaris, Iannis Aifantis & Aristotelis Tsirigos
Laura and Isaac Perlmutter Cancer Center and Helen L. and Martin S. Kimmel Center for Stem Cell Biology, NYU School of Medicine, New York, NY, 10016, USA
Applied Bioinformatics Laboratories, Office of Science & Research, NYU School of Medicine, New York, NY, 10016, USA
Stephen Kelly & Aristotelis Tsirigos
Genome Technology Center, Office of Science & Research, NYU School of Medicine, New York, NY, 10016, USA
Department of Biochemistry and Molecular Genetics, Feinberg School of Medicine, Northwestern University, Chicago, IL, 60611, USA
Panagiotis Ntziachristos
Correspondence to Iannis Aifantis or Aristotelis Tsirigos.
HiC-Bench Manual. (PDF 3853 kb)
HiC-bench task implementation. The table summarizes how the pipeline tasks are implemented, which are the requirements for their execution and how they are handled by the pipeline-master-explorer script. The first column lists all the tasks performed by the pipeline ranging from alignment to annotation. The second column lists the input directory required for each task while the third one lists the parameter files. Certain tasks depend on the reference genome (human or mouse), thus the genome assembly acts as split variable (column 4). In some tasks, replicates can be grouped using the group variable (column 5). Pairwise comparisons between replicates or samples are also allowed using tuples (column 6). The last column lists the full pipeline-master-explorer command for each pipeline task. (XLSX 10 kb)
HiC-bench input-output objects. The table summarizes the inputs and outputs of the TAD-calling task using three different methods with parameter values stored in the params files (column 2). The first column describes the tree structure of the input directories that are essentially the different Hi-C matrices for each sample, before (filtered) and after matrix correction using different methods (e.g., IC). The second column lists all the different parameter scripts and the third column corresponds to the tree structure of the generated output objects. (XLSX 10 kb)
Additional file 4: Figure S1.
Hi-C reads filtering statistics. Number (A) and percentage (B) of the various read categories identified during filtering for all datasets used in the study. Mappable reads were over 95% in all samples. Duplicate (ds-duplicate-intra and ds-duplicate-inter; red and pink respectively), non-uniquely mappable (multihit; light blue), single-end mappable (single-sided; dark blue) and unmapped reads (unmapped; dark purple) were discarded. Self-ligation products (ds-same-fragment; orange) and reads mapping too far (ds-too-far; light purple) from restriction sites or too close to one another (ds-too-close; orange) were also discarded. Only double-sided uniquely mappable cis (ds-accepted-intra; dark green) and trans (ds-accepted-inter; light green) read pairs were used for downstream analysis. The x axis represents either the raw read number (A) or the percentage of reads (B) falling within each of the categories described in the legend. The y axis represents the samples. (PDF 1380 kb)
Matrix statistics. Normalized Hi-C counts are presented as a function of the distance between the interacting partners for all samples and correction methods. The Hi-C samples analyzed were GM12878 (light blue), hESCs (H1) (blue), mesenchymal cells (light green), mesendoderm (dark green), neural progenitors (pink), trophectoderm (red), IMR90 (light and dark orange), K562 (light purple), KBM7 (dark purple) and NHEK (yellow). The matrices were either unprocessed (filtered) (top) or corrected using IC (bottom). The y axis represents the normalized count of Hi-C interactions and the x axis the distance between the interacting partners in kilobases. (PDF 2050 kb)
Pairwise Pearson correlation of Hi-C matrices. Correlograms summarizing all pairwise Pearson correlations for all Hi-C samples used in this study: raw (filtered) matrices (left panel) and matrices after iterative correction (right panel). Dark red indicates strong positive correlation and dark blue strong negative. The resolution of the matrices is 40 kb. (PDF 1405 kb)
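A minimal sketch of the pairwise Pearson computation behind such correlograms, run here on toy flattened contact matrices (a real analysis would operate on full 40-kb-resolution matrices):

```python
from math import sqrt
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equally long vectors (flattened matrices)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

mats = {  # toy flattened contact matrices
    "rep1": [1.0, 2.0, 3.0, 4.0],
    "rep2": [1.1, 2.0, 2.9, 4.2],   # near-duplicate of rep1
    "other": [4.0, 3.0, 2.0, 1.0],  # anti-correlated
}
corr = {(a, b): pearson(mats[a], mats[b])
        for a, b in combinations(sorted(mats), 2)}
print(corr[("rep1", "rep2")] > 0.99, corr[("other", "rep1")] < 0)  # True True
```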
Boundary score calculation. Two adjacent topological domains (red triangles) are depicted. The left domain (L) is separated from the right domain (R) by a boundary (black circle). The areas of more-frequent intra-domain interactions are in red. The area of less-frequent cross-domain (or inter-domain) interactions is X. We also introduce parameter d which is the maximum distance from the diagonal to be considered for the calculation of boundary scores (default value: d = 2 Mb). (PDF 1546 kb)
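Under the geometry described above, one plausible form of the ratio score is the mean intra-domain signal (blocks L and R flanking the candidate boundary) divided by the mean cross-domain signal (block X); the exact HiC-bench definition may differ, and the toy matrix below is illustrative:

```python
def block_mean(m, rows, cols):
    vals = [m[r][c] for r in rows for c in cols]
    return sum(vals) / len(vals)

def ratio_boundary_score(m, i, w):
    """Mean intra-domain signal (L, R blocks flanking bin i) over mean
    cross-domain signal (X block), within w bins of the diagonal — one
    plausible form of the ratio score; the exact definition may differ."""
    L = block_mean(m, range(i - w, i), range(i - w, i))  # upstream domain
    R = block_mean(m, range(i, i + w), range(i, i + w))  # downstream domain
    X = block_mean(m, range(i - w, i), range(i, i + w))  # cross-domain block
    return (L + R) / (2 * X)

# toy 10x10 matrix: two 5-bin domains, strong within, weak across
m = [[10.0 if (r < 5) == (c < 5) else 1.0 for c in range(10)]
     for r in range(10)]
print(ratio_boundary_score(m, 5, 5))  # 10.0 at the true boundary
print(ratio_boundary_score(m, 3, 3))  # much lower inside a domain
```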
Principal component analysis of boundary scores. Boundary scores were calculated using ratio score, for all samples either before (filtered) (left panel) or after iterative correction (IC) (right panel). (PDF 882 kb)
Pairwise overlaps of TAD boundaries. The pairwise overlaps of TAD boundaries are shown for all samples of this study, after calling boundaries using hicratio (all reads, d = 0500). Before TAD calling, the Hi-C matrices were either unprocessed (filtered) or corrected using iterative correction (IC) (resolution = 40 kb). (PDF 3847 kb)
Additional file 10: Figure S7.
Visualization of TADs and certain areas of interest. HiC-bench integrates HiCPlotter [23], offering the ability to easily prepare publication-quality figures. We present the area surrounding NANOG, a gene of particular importance for the maintenance of pluripotency. The Hi-C matrix corresponding to the chr12:3940389–11948655 genomic region is presented for H1 cells, before and after matrix correction. The matrix is also rotated 45° to facilitate TAD visualization. Various boundary scores (intra-max, DI, ratio) are shown as individual tracks along with CTCF binding. The location of NANOG is presented as a blue line. (PDF 1307 kb)
Enrichment of chromatin interactions in human fibroblasts (IMR90) and embryonic stem cells (H1). The enrichment of certain chromatin marks and CTCF in the top 50,000 chromatin interactions in the IMR90 and H1 samples is shown. Deep blue and larger circle size indicate higher enrichment. (PDF 921 kb)
Lazaris, C., Kelly, S., Ntziachristos, P. et al. HiC-bench: comprehensive and reproducible Hi-C data analysis designed for parameter exploration and benchmarking. BMC Genomics 18, 22 (2017). https://doi.org/10.1186/s12864-016-3387-6
Keywords: Computational pipeline, Parameter exploration
Biomimetic structural coloration with tunable degree of angle-independence generated by two-photon polymerization
Gordon Zyla,1,* Alexander Kovalev,2 Silas Heisterkamp,1 Cemal Esen,1 Evgeny L. Gurevich,1 Stanislav Gorb,2 and Andreas Ostendorf1
1Applied Laser Technologies, Ruhr-Universität Bochum, Universitätsstraße 150, 44801, Bochum, Germany
2Functional Morphology and Biomechanics, Christian-Albrechts-Universität zu Kiel, Am Botanischen Garten 9, 24098, Kiel, Germany
*Corresponding author: [email protected]
https://doi.org/10.1364/OME.9.002630
Gordon Zyla, Alexander Kovalev, Silas Heisterkamp, Cemal Esen, Evgeny L. Gurevich, Stanislav Gorb, and Andreas Ostendorf, "Biomimetic structural coloration with tunable degree of angle-independence generated by two-photon polymerization," Opt. Mater. Express 9, 2630-2639 (2019)
Keywords: Constructive interference, Structural color, Two-photon polymerization
Original Manuscript: April 1, 2019
Revised Manuscript: May 2, 2019
Manuscript Accepted: May 2, 2019
Optical Materials Express Laser Writing (2019)
A successful realization of photonic systems with the characteristics of the Morpho butterfly coloration is reported using two-photon polymerization. Submicron structural features were fabricated through the interference of the incident and reflected beams in a thin polymer film. Furthermore, the influence of the lateral microstructure organization on color formation was studied in detail. The design of the polymerized structures was validated by scanning electron microscopy, and the optical properties were analyzed using an angle-resolved spectrometer. Tunable angle-independence, based on reflection intensity modulation, was investigated using photonic structures with a lower degree of symmetry. Finally, these findings demonstrate the high potential of two-photon polymerization in the field of biomimetic research and for technical applications, e.g., sensing and anti-counterfeiting.
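The color mechanism summarized above follows the standard thin-film interference relations (textbook optics, stated here as background rather than the paper's own derivation): the standing wave formed by the incident and reflected beams inside a film of refractive index $n$ sets the period of the polymerized layers, and a stack with that period reflects constructively at the writing wavelength:

```latex
% Standing-wave period along the film normal, for vacuum wavelength
% \lambda and internal propagation angle \theta:
\Lambda = \frac{\lambda}{2 n \cos\theta}

% Bragg-like constructive-interference condition for a multilayer of
% period \Lambda (reflection maxima of order m = 1, 2, \dots):
2 n \Lambda \cos\theta = m \lambda
```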
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
\begin{document}
\date{}
\title{ Infinite Lewis Weights in Spectral Graph Theory}
\begin{titlepage}
\maketitle
\begin{abstract}
We study the spectral implications of re-weighting a graph by the $\ell_{\infty}$-Lewis weights of its edges.
Our main motivation is the ER-Minimization problem (Saberi et al., SIAM'08): Given an undirected graph $G$, the goal is to
find positive normalized edge-weights $w\in \mathbb{R}_+^m$ which minimize the sum of pairwise \emph{effective-resistances} of $G_w$ (Kirchhoff's index). By contrast, $\ell_{\infty}$-Lewis weights minimize the \emph{maximum} effective-resistance of \emph{edges}, but are much cheaper to approximate, especially for Laplacians. With this algorithmic motivation, we study the ER-approximation ratio obtained by Lewis weights.
Our first main result is that $\ell_{\infty}$-Lewis weights provide a constant ($\approx 3.12$) approximation for ER-minimization on \emph{trees}. The proof introduces a new technique, a local polarization process for effective-resistances ($\ell_2$-congestion) on trees, which is of independent interest in electrical network analysis. For general graphs, we prove an upper bound $\alpha(G)$ on the approximation ratio obtained by Lewis weights, which is always $\leq \min\{ \text{diam}(G), \kappa(L_{w_\infty})\}$, where $\kappa$ is the condition number of the weighted Laplacian. All our approximation algorithms run in \emph{input-sparsity} time $\wt{O}(m)$, a major improvement over Saberi et al.'s $O(m^{3.5})$ SDP for exact ER-minimization. We conjecture that Lewis weights provide an $O(\log n)$-approximation for \emph{any} graph, and show experimentally that the ratio on many graphs is $O(1)$.
Finally, we demonstrate the favorable effects of $\ell_{\infty}$-LW reweighting on the \emph{spectral-gap} of graphs and on their \emph{spectral-thinness} (Anari and Gharan, 2015). En route to our results, we prove a weighted analogue of Mohar's classical bound on $\lambda_2(G)$, and provide a new characterization of the leverage-scores of a matrix, as the gradient (w.r.t.\ weights) of the volume of the enclosing ellipsoid.
\end{abstract}
\thispagestyle{empty}
\end{titlepage}
\tableofcontents
\section{Introduction}
The $\ell_p$-Lewis weights of a matrix $A \in \mathbb{R}^{m\times n}$ generalize the notion of statistical \emph{leverage scores} of rows, which informally quantify the importance of row $a^\top_i$ in composing the spectrum of $A$. Formally, the $\ell_p$-Lewis weights ($\ell_p$-LW) of $A$ are the unique vector $\overline{w} \in \mathbb{R}^m$ satisfying the following \emph{fixed-point} equation
\begin{align}\label{eq_Lp_LW_def} a_i^T\left(A^T \overline{W}^{1-2 / p} A \right)^{-1} a_i=\overline{w}_i^{2 / p} \;\;\;\;, \;\;\;\;\; \forall \; i \in [m] \end{align} where $\overline{W} = \diag(\overline{w}) \in \mathbb{R}^{m\times m}$. This definition of \cite{Lewis78} can be viewed as a \emph{change-of-density} of rows, that requires each of the reweighted rows to have leverage score $1$, i.e., that the $i$th row of the reweighted matrix
$\overline{W}^{1/2 - 1/p} A$ should end up with leverage score $\overline{w}_i$, hence the ``fixed point'' condition \eqref{eq_Lp_LW_def}. $p$-norm Lewis weights have found important algorithmic applications in optimization and random matrix theory in recent years, from dimensionality-reduction for $p$-norm regression (Row-Sampling \cite{CP14, flps21}) to the design of optimal barrier functions for linear programming \cite{LS14}, and spectral sparsification for Laplacian matrices \cite{SS11, KMP16}. For $p=\infty$, which is the focus of this paper, the $\ell_{\infty}$-LW have a particularly nice geometric interpretation -- the ellipsoid
$\mathcal{E} = \{ x \in \mathbb{R}^n \mid x^T (A^T \overline{W} A)^{-1} x \leq 1 \} $ defined by $\ell_{\infty}$-LW$(A)$ is the \emph{dual} solution of the minimum-volume ellipsoid enclosing the pointset $\{ a_i \}_{i=1}^m$, known as the Outer John Ellipsoid:
\begin{align}\label{equ_John_Ellipsoid} \max\limits_{M \succeq 0} \left\{ \log\det M \; : a_i^T M a_i \leq 1, \; \forall i \in[m] \right\} . \end{align}
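To make the fixed-point condition \eqref{eq_Lp_LW_def} concrete at $p=\infty$, here is a minimal numerical sketch of the simple iteration analyzed in \cite{CP14,ccly}: repeatedly replace each weight by the leverage score of the corresponding row of $\overline{W}^{1/2}A$, i.e., $w_i \leftarrow w_i \cdot a_i^T (A^T \overline{W} A)^{-1} a_i$. (This is an illustration only, not the exact algorithm or analysis of those papers; the example matrix is a hypothetical choice.)

```python
import numpy as np

def linf_lewis_weights(A, iters=200):
    """Approximate the l_inf-Lewis weights of the rows of A by the
    fixed-point iteration w_i <- w_i * a_i^T (A^T W A)^{-1} a_i,
    i.e., repeated leverage-score computations."""
    m, _ = A.shape
    w = np.ones(m)
    for _ in range(iters):
        M = np.linalg.pinv(A.T @ (w[:, None] * A))
        w = w * np.einsum('ij,jk,ik->i', A, M, A)  # new leverage scores
    return w

# Five points in the plane; their outer John ellipsoid is the unit circle,
# so at the fixed point the four axis points carry weight 1/2 each and the
# strictly interior point carries weight 0.
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.], [0.5, 0.5]])
w = linf_lewis_weights(A)
```

At the fixed point the weights sum to $\operatorname{rank}(A)$, and every point satisfies $a_i^T (A^T \overline{W} A)^{-1} a_i \leq 1$, i.e., lies inside the dual ellipsoid of \eqref{equ_John_Ellipsoid}.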
In this paper we investigate $\ell_{\infty}$-LW as a tool for design and analysis of \emph{Laplacian} matrices. Since leverage-scores ($\ell_2$-LW) play a key role in spectral graph algorithms \cite{st04, SS11,KMP16}, it is natural to ask what role higher-order Lewis weights play in graph theory and network analysis: \begin{quote} \emph{What are the spectral implications of reweighting a graph by the $\ell_{\infty}$-LW of its edges?} \end{quote}
In order to explain this motivation and the role of $\ell_{\infty}$-LW in spectral graph theory, it is useful to interpret the optimization Problem~\eqref{equ_John_Ellipsoid} as a special case of \emph{Experiment Optimal Design} \cite{boyd_convex_2004},
where the goal is to find an optimal convex combination of fixed linear measurements $\{a_i\}_{i=1}^m$, inducing a maximal-confidence ellipsoid for its least-squares estimator, where ``maximal'' is with respect to \emph{some} partial order on positive-semidefinite matrices (e.g., the Loewner order; see \cref{appendix_optimal_design}).
A natural choice for such order is the \emph{volume} (determinant) of the confidence ellipsoid, known as \emph{D-optimal design}: Given the experiment matrix, $\{ a_i \in \mathbb{R}^n \}_{i=1}^m$, the primal problem of \eqref{equ_John_Ellipsoid} is \begin{align}\label{equ_D_optimal_def} \begin{array}{ll} \text { minimize } & \logdet \tag{D-optimal Design} \left(\sum_{j=1}^m g_j a_j a_j^T\right)^{-1} \\ \text { subject to } & \boldsymbol{1}^T g = 1 \ , \ g \geq 0. \end{array} \end{align} Another natural order on PSD matrices is the matrix \emph{trace}, commonly known as \emph{A-optimal design} \cite{boyd_convex_2004}: \begin{align}\label{equ_A_optimal_def} \begin{array}{ll} \text { minimize } & \Tr \left(\sum_{j=1}^m g_j a_j a_j^T\right)^{-1} \tag{A-optimal Design} \\ \text { subject to } & \boldsymbol{1}^T g = 1 \ , \ g \geq 0, \end{array} \end{align} which is a Semidefinite program (SDP). In both of these convex problems, the optimization is over positive normalized \emph{weight vectors} $g \in \mathbb{R}^m$, defining enclosing ellipsoids of the pointset $\{a_i\}_{i=1}^m$. Understanding the relationship between \eqref{equ_D_optimal_def} and \eqref{equ_A_optimal_def} is a central theme of this paper.
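The two criteria can disagree even on tiny instances. The following sketch (a hypothetical example with three measurement vectors in $\mathbb{R}^2$, solved by brute-force grid search over the simplex rather than by any method from this paper) computes both a D-optimal and an A-optimal weight vector; the optimizers differ noticeably.

```python
import numpy as np

# Hypothetical toy instance: three measurement vectors in R^2.
A = np.array([[1., 0.], [0., 1.], [1., 1.]])

def info_matrix(g):
    return A.T @ (g[:, None] * A)  # sum_j g_j a_j a_j^T

best_D = best_A = None
for x in np.arange(0.0, 1.0001, 0.005):
    for z in np.arange(0.0, 1.0 - x + 1e-9, 0.005):
        g = np.array([x, 1.0 - x - z, z])   # point on the simplex
        M = info_matrix(g)
        if np.linalg.det(M) < 1e-12:
            continue
        d = np.log(np.linalg.det(M))    # D-criterion: maximize log det
        a = np.trace(np.linalg.inv(M))  # A-criterion: minimize trace of inverse
        if best_D is None or d > best_D[0]:
            best_D = (d, g)
        if best_A is None or a < best_A[0]:
            best_A = (a, g)

gD, gA = best_D[1], best_A[1]
# D-optimal puts weight 1/3 on the long vector (1,1); A-optimal puts less.
```

On this instance the D-optimal weights are uniform, while the A-optimal solution shifts weight away from the long vector, illustrating the geometric-mean vs.\ harmonic-mean tension discussed next.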
The two different objective functions have a natural geometric interpretation: \eqref{equ_D_optimal_def} seeks to minimize the \emph{geometric mean} of the ellipsoid's semiaxis lengths, whereas \eqref{equ_A_optimal_def} seeks to minimize their \emph{harmonic mean}. Intuitively, this HM-GM viewpoint renders the problems quite different, and indeed we show that for \emph{general} PSD matrices, there is an $\Omega(n/\log^2 n)$ separation between the two objectives:
\begin{theorem}[Informal]\label{thm_natural_gap}
There are $n \times n$ PSD experiment matrices $A \succeq 0$ for which the D-optimal solution ($\ell_{\infty}$-LW($A$)) is no better than an $\Omega(n/\log^2 n)$-approximation to the A-optimal design problem w.r.t $A$. \end{theorem}
This lower bound implies that a nontrivial relation between the two minimization problems can only hold for structured subfamilies of PSD matrices. As mentioned earlier, the focus of this paper is on \emph{graph Laplacians}. Our main motivation for studying A-vs-D optimal design for Laplacian matrices comes from network design applications, as in this case both problems correspond to controlling the \emph{distribution of electrical flows} and \emph{Effective Resistances} of the underlying graph. We now explain this correspondence.
\paragraph{Electric Network Design: Minimizing Effective Resistances on Graphs} Given an undirected graph $G$, the \emph{effective resistance} $R_{ij}(G)$ is the electrical potential difference that appears across terminals $i$ and $j$ when a unit current source is applied between them (see Definition \eqref{def_ER}).
Intuitively, the ER between $i$ and $j$ is small when there are many paths between the two nodes with high-conductance edges, and it is large when there are few paths, with lower conductance, between them. Effective resistances have many applications in electrical network analysis, spectral sparsification \cite{SS11} and Laplacian linear-system solvers \cite{KMP16}, maximum-flow algorithms \cite{CKM+11}, the commute and cover times in Markov chains \cite{CRR+97, Mat88}, continuous-time averaging networks, finding thin trees \cite{AO15}, and in generating random spanning trees \cite{KM09, MST15, DKP+17}. Beyond these applications, the distribution of effective resistances of a graph is a natural graph property to be investigated in its own right, as advocated by \cite{anari17, saberi}.
In a seminal work, Ghosh, Boyd and Saberi \cite{saberi} introduced the \emph{Effective Resistance Minimization Problem} (ERMP): Given an unweighted graph $G$, find nonnegative, normalized edge-weights that minimize the total sum of effective resistance over all pairs of vertices of the reweighted graph $G_g$, also known as the \emph{Kirchhoff index} $\mathcal{K}_g(G)$ \cite{Lukovits1999}. This problem can be expressed as the following Semidefinite Program:
\begin{equation}\label{equ_ERMP_def} \begin{array}{ll@{}ll} \text{minimize} & \mathcal{K}_g(G) := \sum\limits_{i,j} R_{ij} &\\ \text{subject to}& g \geq 0 , \tag{ERMP} \ \boldsymbol{1}^T g=1 \end{array} \end{equation}
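For reference, the ERMP objective can be evaluated directly from the pseudoinverse of the weighted Laplacian: summing $R_{ij} = L^+_{ii} + L^+_{jj} - 2L^+_{ij}$ over unordered pairs gives $n \cdot \Tr L_g^{+}$, a standard identity. A minimal numpy sketch, with a hypothetical toy graph:

```python
import numpy as np

def kirchhoff_index(edges, g, n):
    """Sum of effective resistances over all unordered vertex pairs
    of the weighted graph on vertices {0, ..., n-1}."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, g):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)
    # R_ij = Lp_ii + Lp_jj - 2*Lp_ij; summing over i < j equals n * tr(Lp).
    R = np.diag(Lp)[:, None] + np.diag(Lp)[None, :] - 2 * Lp
    return R[np.triu_indices(n, k=1)].sum()

# Unit-weight path 0-1-2: R_01 = R_12 = 1 and R_02 = 2, so K = 4.
K = kirchhoff_index([(0, 1), (1, 2)], np.ones(2), 3)
```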
It is straightforward to verify that the ERMP is an \emph{A-optimal design problem over Laplacians} (see equation~\eqref{equ_R_tot_A_design}). This formulation of ERMP has the following interpretation -- we have real numbers $x_1,...,x_n$ at the nodes of $G$, which have zero sum. Each edge in $G$ corresponds to a possible `measurement', which yields the difference of its adjacent node values, plus noise. Given a fixed budget of measurements for estimating $\boldsymbol{x}$, the problem is to select the \emph{fraction} of experiments that should be devoted to each edge measurement. The optimal fractions are precisely the ERMP weights.
The motivation of \cite{saberi} for studying this problem was improving the ``electrical connectivity" and mixing time of the underlying network. Minimizing resistance distances and electrical flows is an important primitive in routing networks (and molecular chemistry \cite{Lukovits1999}, where the problem in fact originated), and can be viewed as a \emph{convex} proxy for maximizing the spectral-gap and communication throughput of the reweighted network. As such, ERMP can be viewed as a network configuration algorithm \cite{HAAKSM13,MAK+20}.
In \cite{saberi}, the authors design an ad-hoc interior-point method (IPM) which solves~\eqref{equ_ERMP_def} in $O(m^{3.5})$ time (more precisely, via $\sim \sqrt{m}$ generic linear-system solves involving \emph{pseudo-inverses} of Laplacians, which are no longer SDD). We note that even with recent machinery of \emph{inverse-maintenance} for speeding up general IPMs \cite{jkl+20, HJ0T022}, one could at best
achieve $\wt{O}(mn^{2.5} + n^{0.5}m^\omega)$,
but such acceleration (for sparse graphs) is only of theoretical value. In either case, the above runtimes are quite daunting for large-scale networks, let alone for dense ones ($m\gg n$). This motivates resorting to \emph{approximate} algorithms. Unfortunately, using black-box approximate SDP solvers (projection-free algorithms \`a la \cite{AK16, Haz08}) seems inherently too slow for the ERMP SDP, due to its large \emph{width} and the additive guarantees of MWU.\footnote{The \emph{width} of the ERMP SDP is $\geq \Omega(n^2)$, hence approximate SDP solvers based on multiplicative-weight updates \cite{Haz08, AK16} require at least $mn^2$ time to achieve nontrivial approximation.
Moreover, Hazan's algorithm only guarantees an \emph{additive} approximation $\epsilon\|C\|_F$ with respect to the Frobenius norm of the objective matrix, which in our case is $\Omega(n^{1.5})$, see~\cref{appendix_ermp_sdp}.}
By contrast, $\ell_{\infty}$-Lewis weights are much cheaper to approximate, especially for Laplacians -- While \emph{high-accuracy} computation of the John ellipsoid \eqref{equ_John_Ellipsoid} is no faster than the aforementioned IPM ($\sim m^{3.5}\log(m/\epsilon)$ \cite{Nem99}), low-precision approximation of $\ell_{\infty}$-LW turns out to be dramatically cheaper -- \cite{ccly,CP14} gave a simple and practical approximation algorithm to arbitrarily small precision $\epsilon$, via $\wt{O}(1/\epsilon)$ repeated \emph{leverage-score} computations, which in the case of Laplacians can be done in input-sparsity time \cite{st04,KMP16}. Fortunately, low-accuracy approximation of $\ell_{\infty}$-LW suffices for ERMP (see \cref{sec_computing_LW}). We remark that very recently, \cite{flps21} gave a \emph{high-accuracy} algorithm for computing $\ell_p$-LW using only $\wt{O}(p^3 \log(1/\epsilon))$ leverage-score computations (Laplacian linear systems in our case), hence setting $p=n^{o(1)}$ yields an alternative $O(m^{1+o(1)}\log(1/\epsilon))$ time approximate algorithm, which is good enough for most applications \cite{flps21}.
Compared to the ERMP objective, the $\ell_{\infty}$-LW of the graph Laplacian $L(G)$ can be shown to minimize the \emph{maximal} effective-resistance over \emph{edges} of $G$. In fact, we show something stronger: the ERs of edges in $G$ are the gradient, w.r.t.\ the weights $g\in \mathbb{R}^m$, of the \emph{volume} of the ellipsoid induced by the weighted Laplacian $L_g$. To the best of our knowledge, this provides a new geometric characterization of the ER of a graph (and more generally, of the statistical leverage-scores of a general matrix; see~\cref{sec_new_char_LS_ER}):
\begin{lemma} \label{lem_ER_char}
$ER(G_g) = - \nabla_g \log\det L_g^+ .$ \end{lemma}
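Lemma \ref{lem_ER_char} can be checked numerically by finite differences: interpreting $\log\det L_g^{+}$ via the pseudo-determinant (product of nonzero eigenvalues), the gradient of $\log\operatorname{pdet} L_g$ with respect to the edge weights should equal the edge effective resistances $b_e^T L_g^{+} b_e$. A small sketch on a weighted triangle (the graph and weights are arbitrary choices for illustration):

```python
import numpy as np

# Triangle on vertices {0,1,2}; incidence rows b_e for edges (0,1),(1,2),(0,2).
B = np.array([[1., -1., 0.], [0., 1., -1.], [1., 0., -1.]])

def laplacian(g):
    return B.T @ (g[:, None] * B)   # L_g = sum_e g_e b_e b_e^T

def log_pdet(L):
    ev = np.linalg.eigvalsh(L)
    return np.log(ev[ev > 1e-10]).sum()  # log of product of nonzero eigenvalues

g = np.array([1.0, 2.0, 3.0])
Lp = np.linalg.pinv(laplacian(g))
R = np.einsum('ej,jk,ek->e', B, Lp, B)   # effective resistances of the edges

# Central finite differences of log pdet(L_g) w.r.t. each edge weight.
eps = 1e-6
grad = np.array([
    (log_pdet(laplacian(g + eps * np.eye(3)[e]))
     - log_pdet(laplacian(g - eps * np.eye(3)[e]))) / (2 * eps)
    for e in range(3)])
```

The finite-difference gradient matches the edge ERs to numerical precision; e.g., the edge of weight $1$ has $R = 5/11$ (a unit resistor in parallel with the series resistance $1/2 + 1/3$).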
The above discussion and the efficiency of approximating $\ell_{\infty}$-LW raise the following natural question:
\emph{How well do the $\ell_{\infty}$-Lewis weights of a graph approximate the Kirchhoff index (optimal ERMP weights)?}
The main message of this paper is that $\ell_{\infty}$-LW reweighting of edges has various favorable effects on the spectrum of graphs, and provides an efficient ``preprocessing'' operation on large-scale (undirected) networks, well beyond the Kirchhoff index. We summarize our main findings in the next subsection.
\subsection{Main Results}
Let $\alpha_{A,D}(G)$ denote the approximation ratio obtained by $\ell_{\infty}$-LW for the ERMP objective (Definition \eqref{def_alpha_AD}). Our first main result is that $\alpha_{A,D}$ is constant for
\emph{trees}:
\begin{theorem}\label{thm_LW_apx_ERMP_trees}
$\ell_{\infty}$-LW are a $3.12$-approximate solution for the ERMP problem on trees. \end{theorem} It is noteworthy that there are trees for which $\alpha_{A,D} > 2.5$ (see \cref{sec_experiments}), so the bound is nearly tight. The proof of Theorem~\ref{thm_LW_apx_ERMP_trees} relies on a new technique, tailored to trees, which may be of independent interest in electric network analysis -- we show that we can always locally modify the tree in a way that increases the approximation ratio $\alpha_{A,D}(T)$. More precisely, our proof introduces a (finite) ``polarization process'' of ERs based on $\ell_2$-congestion of trees, that can be repeatedly applied to yield the worst-case ``polarized'' family of trees (the \emph{Bowtie graph}, see Figure \ref{fig_tps_tree}). We provide a high-level overview of the proof in Section \ref{sec_tree_TOV}.
Our second main result is an upper bound on the approximation ratio of $\ell_{\infty}$-LW for general graphs. The precise upper bound we develop is a function of the spectral parameter $ \alpha_1(g) := \frac{2}{(n-1)^2} \Tr L_g^{+}$ of the weighted Laplacian of $G$, which is a central quantity in our analysis. Denoting by $g_{lw}$ the $\ell_{\infty}$-LW edge weights, define \begin{align} \label{eq_alpha} \alpha_{min}(G) := \min \left\{ \alpha_1(g_{\ell w})\; , \; \left\lVert -\nabla_g \left(\log \alpha_1(g_{\ell w})\right) \right\rVert_\infty \right\}. \end{align} For intuition, note that $\min\{x,(\log x)'\} = \min\{x, 1/x\} \leq 1$, so the two quantities tend to be anti-monotone in each other. \begin{theorem}[Upper Bound for General Graphs]\label{thm_UB_general_graphs} For any undirected graph $G$, \[ \alpha_{A,D}(G) \leq \alpha_{min}(G). \]
\end{theorem} We prove that $\alpha_{min}(G)$ is always at most $\min \left\{ \text{diam}(G), \kappa(L_{g_{\ell w}})\right\}$, where $\text{diam}(G)$ is the diameter of $G$, and $\kappa(L_{g_{\ell w}})$ is the \emph{condition-number} of its $\ell_{\infty}$-LW-weighted Laplacian. While this already gives a good approximation for low-diameter graphs, we stress that $\alpha_{min}$ typically provides a much tighter upper bound on $\alpha_{A,D}$: The \emph{lollipop} graph (clique of size $n$ connected to a path of length $n$) has both diameter and LW-condition-number $\Omega(n)$, but simulations show that $\alpha_{min}(\text{lollipop}_n) \leq O(\log n)$. In \cref{sec_experiments}, we showcase the approximation ratio $\alpha_{min}$ for various different graph families, and show that in practice it grows as $\wt{O}(1)$. In light of Theorems \ref{thm_LW_apx_ERMP_trees}, \ref{thm_UB_general_graphs}, and our empirical evidence, we conjecture that $\ell_{\infty}$-LW provide an $O(\log n)$ approximation for the Kirchhoff index of any graph:
\begin{conjecture}[Lewis meets Kirchhoff]\label{conj_LW_apx_ERMP} $\forall G$, \;
$\alpha_{A,D}(G) \leq O(\log n)$. \end{conjecture}
A stronger conjecture would be $\alpha_{\min}(G) \leq O(\log n)$; in fact, both our analysis and experiments indicate that this stronger conjecture holds (see \cref{sec_experiments}). We do not have a super-constant separation between $\alpha_{\min}(G)$ and $\alpha_{A,D}(G)$, and whether $\alpha_{\min}(G)$ \eqref{eq_alpha} is a tight upper bound is an intriguing question.
\paragraph{Spectral Implications of $\ell_{\infty}$-LW}
Our third set of results demonstrates the favorable effects of $\ell_{\infty}$-LW reweighting on the eigenvalue distribution of graph Laplacians. We present two fundamental results in this direction:
\begin{theorem}[$\ell_{\infty}$-LW reweighting and Mixing-time, Informal] \label{thm_spectral_LW} \;
\begin{enumerate}
\item We generalize the classic spectral-gap bound of \cite{Mohar91} $(\lambda_2 \geq 4/(nD))$ to \emph{weighted} graphs:
$\lambda_2(G_w) \geq 2/(nD\cdot R_{\max}(G_w))$, where $R_{\max}$ is the maximal effective resistance of \emph{an edge} in the weighted graph.
\item We show a close connection between the optimality condition of the smooth-max-eigenvalue of a graph \cite{Nesterov03} ($\max_g \log (\Tr e^{L_g^+})$) and the fixed-point condition for $\ell_{\infty}$-LW \eqref{eq_Lp_LW_def}, implying a condition under which $\ell_{\infty}$-LW reweighting increases the softmax function.
\end{enumerate} \end{theorem}
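As a quick numerical sanity check of part 1 (one weighted example, not a proof; $D$ is taken here to be the hop diameter, and the graph and weights are arbitrary choices):

```python
import numpy as np

# Weighted path on 5 vertices with one weak edge.
n = 5
w = np.array([1.0, 0.2, 3.0, 0.5])   # weight of edge (i, i+1)
L = np.zeros((n, n))
for i, wi in enumerate(w):
    L[i, i] += wi; L[i + 1, i + 1] += wi
    L[i, i + 1] -= wi; L[i + 1, i] -= wi

lam2 = np.sort(np.linalg.eigvalsh(L))[1]   # spectral gap of L_w
Lp = np.linalg.pinv(L)
R = [Lp[i, i] + Lp[i + 1, i + 1] - 2 * Lp[i, i + 1] for i in range(n - 1)]
Rmax = max(R)   # max effective resistance of an edge (= 1/w_i on a tree)
D = n - 1       # hop diameter of the path
bound = 2.0 / (n * D * Rmax)
# The weighted bound lambda_2 >= 2/(n * D * R_max) holds on this example.
```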
\paragraph{Spectrally-Thin Trees \cite{AO15}} In an influential result, \cite{AGM10} presented a rounding scheme for the Asymmetric TSP problem that breaks the $O(\log n)$ approximation barrier, assuming the underlying graph admits a \emph{spectrally-thin} tree (STT) ($L_T \preceq \gamma L_G$, where $\gamma = \gamma_G(T)$ is the spectral thinness of the tree). Anari et al. \cite{AO15} showed that for any graph, $\gamma_G(T)$ is at least $R_{\max_e}(G)$ and that an optimal tree with thinness $\wt{\Theta}(R_{\max_e}(G))$ can be found in polynomial time. We generalize this result in the following way: for any graph $G$, we can find a \emph{weighted} tree $T_w$ with spectral thinness $L_{T_w} \preceq O((n-1)/m)L_G$: \begin{lemma}
For any connected graph $G = \langle V,E \rangle$ there is a weighted spanning tree $T_w$ such that $T_w$ has spectral thinness $((n-1)/m) \cdot O(\log n / \log \log n)$. \end{lemma} We show this is always at least as good as the \cite{AO15} tree. It would be interesting to see if this weighted version of STTs can be used for the ATSP rounding primitive. We believe that our results will have further applications in the design and analysis of spectral graph algorithms.
\section{Technical Overview}
Here we provide a high-level overview of the main new ideas required to prove Theorems \ref{thm_LW_apx_ERMP_trees} and \ref{thm_UB_general_graphs}.
\paragraph{ER Polarization: Technical Overview of Theorem \ref{thm_LW_apx_ERMP_trees} }\label{sec_tree_TOV}
Our proof technique for bounding the ERMP approximation ratio on trees ($\alpha_{A,D}(T)$) is based on the following important observation: the optimal weight of the $l$'th edge of a tree, $g^*_l(T)$, is proportional to its congestion, $c_l(T)$ (see \eqref{equ_opt_g_trees}). Therefore the optimal weight will be ``farther away'' from uniform (hence far from the LW solution \eqref{clm_LW_tree}) when the tree has a severe ``bottleneck'' (i.e., an edge crossed by many paths). With this observation, we show that given any tree $T$, we can
construct an alternative tree $T'$ whose approximation ratio is worse, i.e., $\alpha_{A,D}(T') > \alpha_{A,D}(T)$. Our construction is iterative, leveraging the above observation -- at each iteration we will increase the ``bottleneck'' of the tree, until we reach a `fixed point' (the Bowtie graph). To formally define this polarization-process, we introduce the notion of \emph{Local Transformations}\footnote{The idea of LT is inspired by elementary operations on matrices, and indeed they play a similar role in some sense.} (LT) on a tree. An LT is a local graph operation that changes the congestion of a \emph{single} edge of the tree. Formally, we denote by $E_k$ an LT such that \begin{equation}\label{equ_LT_def}
c_l(E_k \circ T) = c_l(T) , \ \forall \ l \neq k , \end{equation} where $T' = E_k \circ T$ is the tree after the transformation.
There are two natural ways to increase the bottleneck of a tree (i.e., make the distribution in~\eqref{equ_opt_g_trees} further from uniform): (1) reduce the number of different paths, or (2) add more entry points (i.e., leaves). It turns out that these two operations can each be formulated as an LT. This allows us to gradually construct the new tree with repeated use of LTs. For this purpose, we shall define an \emph{upper LT} and a \emph{lower LT}, denoted $E^{\uparrow}_k$ and $E^{\downarrow}_k$ respectively, such that for any tree $T$: \begin{equation}\label{equ_upper_lower_LT}
\begin{split}
&c_k(E^{\uparrow}_k \circ T) > c_k(T), \\
&c_k(E_k^{\downarrow} \circ T) < c_k(T).
\end{split} \end{equation} (This definition will become clear below). The upper LT will correspond to (1) above, i.e., reducing the number of paths, and lower LT will correspond to (2) above, i.e., adding another entry point, which is consistent with our initial intuition.
It turns out that the exact threshold for choosing which local operation to perform next is the squared norm of the optimal weight vector, $||g^*(T)||_2^2$:
\begin{claim}\label{clm_apx_ratio_ET}
Let $T$ be a tree of order $n$, and let $E_k$ be an LT. Then $\alpha_{A,D}(T) \leq \alpha_{A,D}(E_k \circ T)$ if one of the following holds:
\[
E_k \text{ is a lower LT } \ \& \ g_k^*(T) \leq ||g^*(T)||_2^2
\]
\qquad \qquad \qquad \qquad \qquad \quad or,
\[
E_k \text{ is an upper LT } \ \& \ g_k^*(T) > ||g^*(T)||_2^2
\] \end{claim}
Following this claim, we divide the edges of the tree into two sets: \[
E_<(T) \coloneqq \{ l \ \mid \ g_l^*(T) \leq ||g^*(T)||_2^2 \ \} \ , \ E_>(T) \coloneqq \{ l \ \mid \ g_l^*(T) > ||g^*(T)||_2^2 \ \} \] and operate with the appropriate LT on each set (until saturation -- $E_k \circ T = T$). This process is possible due to a key feature of these sets: they are \emph{invariant under LTs}. More precisely, we prove that
for any tree $T$, and $k \in E_>$ we have that, \( E_>(E_k^{\uparrow} \circ T) = E_>(T)\), and vice versa for $E_<$.
We apply this process repeatedly (each iteration choosing the `right' LT), and show that it must terminate after a finite number of steps, in a ``fixed-point'' tree whose congestion is as ``polarized'' as possible -- we call this family of trees \emph{Bowtie graphs} of some order (see \Cref{fig_tps_tree}). We prove that Bowtie graphs maximize the ratio $\alpha_{A,D}(T)$ over the set of trees, and then bound the latter quantity for Bowtie graphs (by $\approx 3.12$) using the tools we develop for general graphs, on which we elaborate in the next paragraph.
\begin{figure}
\caption{A Bowtie graph $\mathcal{B}_{t,p,s}$: a path of length $p$ joined with stars of sizes $t$ and $s$ on both sides.}
\label{fig_tps_tree}
\end{figure}
\paragraph{Overview of the General Upper Bound (Theorem \ref{thm_UB_general_graphs})}
There is a simple intuitive explanation for why $\ell_{\infty}$-LW($G$) provide an $O(\text{diameter})$-approximation to the ERMP problem on any graph $G$, i.e., $\alpha_{A,D} \leq \diam(G)$. Indeed, since LW minimize the maximal ER among \emph{edges} of $G$ (see next section), and since effective-resistances are well-known to form a \emph{metric}, the triangle-inequality implies that $ER(i,j)$ for any pair of vertices in $G_{lw}$ is at most the sum of ERs along the edges of a \emph{shortest path} between $i$ and $j$, which is at most $\diam(G_{lw}) = \diam(G)$. We use an AM-GM argument to prove a stronger bound: For any $G$, \begin{equation}\label{eq_alpha_1}
\alpha_{A,D}(G) \leq \alpha_1(g_{\ell w}), \end{equation} where $ \alpha_1(g) := \frac{2}{(n-1)^2} \Tr L_g^{+}$ is a parameter closely related to the optimal solution $\mathcal{K}_G(g)$ -- Indeed, the harmonic-mean characterization mentioned earlier in the introduction implies that the optimal ERMP value can be rewritten as (see Equation~\eqref{equ_R_tot_rep_tr}) \[
\mathcal{K}(g^*) = n \Tr L_{g^*}^+ =
\frac{n(n-1)}{HM(\lambda(L_{g^*}))}, \] where $HM(\lambda(L_{g^*}))$ is the harmonic mean of the $n-1$ (positive) eigenvalues of the weighted Laplacian with the optimal ERMP weights. With some further manipulations, we can use the AM-GM inequality to show that the approximation ratio $\mathcal{K}(g_{lw})/\mathcal{K}(g^*) \leq \alpha_1(G)$, which is always at most $\diam(G)$ via the triangle inequality, but is typically smaller, see the experiments section.
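These identities hold for \emph{any} normalized weight vector, not only the optimizer, and are easy to sanity-check numerically. The following sketch (a hypothetical small graph with arbitrary weights; \texttt{numpy} is assumed to be available) verifies $\mathcal{K}_G(g) = n \Tr L_g^+ = n(n-1)/HM(\lambda(L_g))$:

```python
import numpy as np

def weighted_laplacian(n, edges, g):
    """L_g = B diag(g) B^T, assembled edge by edge."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, g):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

# hypothetical example: a 4-cycle plus a chord, arbitrary normalized weights
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
g = np.array([1.0, 2.0, 3.0, 4.0, 5.0]); g /= g.sum()   # 1^T g = 1
L = weighted_laplacian(n, edges, g)
Lp = np.linalg.pinv(L)

# K(g) = sum of ER over vertex pairs = n * Tr L^+
I = np.eye(n)
K_pairs = sum((I[i] - I[j]) @ Lp @ (I[i] - I[j])
              for i in range(n) for j in range(i + 1, n))
K_trace = n * np.trace(Lp)

# harmonic-mean form, over the n-1 positive eigenvalues of L_g
lam = np.linalg.eigvalsh(L)[1:]
K_hm = n * (n - 1) / ((n - 1) / np.sum(1.0 / lam))
print(K_pairs, K_trace, K_hm)   # all three agree
```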
What if the diameter of $G$ is large? In this case we show that a ``\emph{dual}" function of $\alpha_1(G)$ is typically small. Specifically, we prove that the following quantity is also an upper bound on the approximation ratio: \begin{equation}\label{eq_alpha_2}
\alpha_{A,D}(G) \leq \left\lVert -\nabla_g \left(\log \alpha_1(g_{\ell w})\right) \right\rVert_\infty. \end{equation} This bound is somewhat more technical and less intuitive, but the key to deriving it is the ERMP \emph{duality gap} of \cite{saberi}, which naturally involves gradient-suboptimality conditions with respect to weights. Using calculus and the Courant-Fischer principle we then show that this is at most $\kappa(L(G_{lw}))$.
Combining \eqref{eq_alpha_1} and \eqref{eq_alpha_2} yields Theorem~\ref{thm_UB_general_graphs}: $\alpha_{A,D}(G) \leq \min \left\{ \alpha_1(g_{\ell w})\; , \; \left\lVert -\nabla_g \left(\log \alpha_1(g_{\ell w})\right) \right\rVert_\infty \right\}$. As mentioned earlier in the introduction, the intuition for why the \emph{minimum} of the two aforementioned quantities should always be small comes from the scalar inequality $\min\{x,(\log x)'\} \leq 1$. Indeed, all our simulations corroborate that the minimum is $\wt{O}(1)$, hence $\alpha_{min}(G)$ is typically a much tighter bound than $\min\{\diam(G), \kappa(L(G_{lw}))\}$. Conjecture \ref{conj_a_min} postulates that the minimum is always $O(\log n)$.
\paragraph{Organization of this paper:} In \cref{sec_prelims} we provide background on Lewis weights, Laplacians and graph Effective Resistances, and prove our new characterization for Leverage Scores and Effective Resistance (Lemma \ref{lem_ER_char}). In \cref{sec_R_tot} we prove \Cref{thm_LW_apx_ERMP_trees,thm_UB_general_graphs}. \cref{sec_experiments} summarizes our experimental results. In \cref{sec_LW_spectral}, we prove \Cref{thm_spectral_LW} and the application for spectral-thin trees. Finally, in \cref{sec_computing_LW} we restate the algorithm for computing LW, and show that low-accuracy LW are sufficient for our results. We finish in \cref{sec_optimal_design} by exploring the relation of A- and D-optimal design, showing the geometric interpretation in terms of Pythagorean means, and proving \Cref{thm_natural_gap}.
\section{Preliminaries}\label{sec_prelims}
\subsection{Notations} We denote by $\boldsymbol{S}^n_+$ the subspace of symmetric matrices in $\mathbb{R}^{n \times n}$. The unit vectors, $e_i$, are the vectors with all entries $0$ except the $i$'th entry (which equals $1$). We denote the eigenvalues (EV) of a matrix $M \in \mathbb{R}^{n \times n}$ by \[ \lambda(M) = \lambda_1(M) \leq \lambda_2(M) \leq \dots \leq \lambda_n(M). \]
For any matrix $A \in \mathbb{R}^{n \times d}$, we denote the \emph{projection} matrix onto the column space of $A$ as \[ \Pi_A = A(A^T A)^+ A^T. \]
For future purposes, we restate here the AM-GM inequality. For any sequence $X = \{x_i\}_{i=1}^n$ of $n$ positive numbers, define: \[ AM(X) = \frac{1}{n}\sum_{i} x_i \ , GM(X) = \sqrt[n]{x_1 \dots x_n} \ , \ HM(X) = \frac{n}{\sum_{i=1}^n x_i^{-1}} \ , \] which satisfy: \[
HM(X) \leq GM(X) \leq AM(X), \] where equality holds iff $X$ is a constant sequence.
We use the following definition of ellipsoids: an ellipsoid $\mathcal{E}$ is defined by $\{ v \in \mathbb{R}^n \ \mid v^T A v \leq 1 \}$, where $A$ is a symmetric PSD matrix. The semiaxes of an ellipsoid $\mathcal{E}$ are given by $\boldsymbol{\sigma} = \{ \sigma_i(\mathcal{E}) \}_{i=1}^n$, and equal \begin{align*} \sigma_i = \lambda_i(A)^{-1/2} . \refstepcounter{equation}\tag{\theequation} \label{equ_ellipsoid_semiaxis_def} \end{align*}
From this it is clear that, up to a constant factor, the volume of an ellipsoid satisfies $\text{vol}(\mathcal{E}) \propto |A|^{-1/2}$.
\subsection{Leverage Scores and Lewis Weights}\label{sec_preliminary_LS}
For a matrix $A \in \mathbb{R}^{n \times d}$ we define the \emph{Leverage Score} (LS) of the $i$'th row of $A$ as \[ \tau_i(A) = a_i^T (A^T A)^+ a_i , \] where $a_i$ is the $i$'th row of $A$. We call $\tau(A) = \{\tau_i(A)\}_{i=1}^n$ the Leverage Scores of $A$. Note that $\tau(A)$ is exactly the diagonal of the projection matrix $\Pi_A$. Since \(0 \preceq \Pi_A \preceq I\), we have $0 \leq \tau_i(A) \leq 1$, and: \begin{fact}(Foster's lemma, \cite{Fos53}) \label{equ_LS_facts} $\sum\limits_{i} \tau_i(A) = \Tr \ \Pi_A = \rank(A)$. \end{fact}
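As a concrete illustration of leverage scores and Foster's lemma, the following sketch computes $\tau(A)$ as the diagonal of $\Pi_A$ for a hypothetical random matrix (\texttt{numpy} assumed; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))          # a hypothetical 8x3 matrix

# tau(A) = diag(Pi_A), the diagonal of the projection onto col(A)
Pi = A @ np.linalg.pinv(A.T @ A) @ A.T
tau = np.diag(Pi)

print(tau)          # each score lies in [0, 1]
print(tau.sum())    # Foster's lemma: equals rank(A) = 3
```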
Leverage scores are traditionally used as a tool for sketching and $\ell_2$-subspace embedding \cite{cw13}. Indeed, row-sampling of a matrix based on LS is known to produce a good spectral approximation \cite{cw13,dmmw12}. This is also the core idea behind spectral graph sparsification \cite{SS11}. \\
As mentioned in the introduction, we are interested in $\ell_{\infty}$-LW, which are a generalization of leverage scores: \begin{definition}($\ell_{\infty}$-LW)\label{equ_inf_LW}
For a matrix $A \in \mathbb{R}^{n \times d}$, the $\ell_\infty$-LW of $A$, denoted by $w_\infty(A) \in \mathbb{R}^n$, is the \emph{unique} weight vector such that for each row $i \in [n]$ we have
\[
(w_\infty)_i = \tau_i(W_\infty^{1/2}A) ,
\]
or equivalently,
\[
a_i^T(A^T W_\infty A)^+ a_i = 1 ,
\]
where $W_\infty = \diag(w_\infty)$. From now on, we will use LW to denote the $\ell_{\infty}$-LW$(A) \coloneqq w_\infty(A)$. \end{definition}
Note that Definition~\ref{equ_inf_LW} is \emph{cyclic}, and indeed it is not a priori clear that $\ell_{\infty}$-LW even exist. One way to prove existence and uniqueness is by repeatedly computing the LS of the resulting matrix until a fixed point is reached, see \cref{sec_computing_LW}. Observe that for any matrix $A$, Fact~\ref{equ_LS_facts} gives $w_\infty(A) \leq \boldsymbol{1}$, and, since $W_\infty$ is a positive diagonal matrix (so that $\rank(W_\infty^{1/2}A) = \rank(A)$), also $\sum_{i=1}^n (w_\infty(A))_i = \rank(A)$.
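The fixed-point computation just mentioned can be sketched in a few lines. We stress that this is the naive iteration (not the algorithm of \cref{sec_computing_LW}), and the $5$-cycle below is a hypothetical example where symmetry forces $w_\infty = \frac{n-1}{n}\boldsymbol{1}$:

```python
import numpy as np

def leverage_scores(A):
    return np.diag(A @ np.linalg.pinv(A.T @ A) @ A.T)

def linf_lewis_weights(A, iters=50):
    """Naive fixed-point iteration w <- tau(W^{1/2} A), starting from w = 1."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = leverage_scores(np.sqrt(w)[:, None] * A)
    return w

# rows of A = B^T are the edge orientations of the 5-cycle
n = 5
A = np.zeros((n, n))
for l in range(n):
    A[l, l], A[l, (l + 1) % n] = 1.0, -1.0

w = linf_lewis_weights(A)
M = np.linalg.pinv(A.T @ (w[:, None] * A))
residual = np.array([A[i] @ M @ A[i] for i in range(n)])
print(w)          # (n-1)/n on every edge, by symmetry
print(residual)   # fixed-point condition a_i^T (A^T W A)^+ a_i = 1
```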
\subsection{Effective Resistance of Graphs}\label{sec_preliminary_graphs}
In this section we formally define the Effective Resistance (ER) of a graph, and derive a few key properties we will use later on. More details can be found in \cite{saberi}. Let $G=\langle V,E \rangle$ be an undirected graph with $n$ vertices and $m$ edges. W.l.o.g., $V = [n]$. We always assume that $G$ is connected and without self loops (if $G$ is not connected we can operate separately on each connected component). The degree of $u\in V$ is denoted $d(u) \coloneqq d_G(u)$. The adjacency matrix of $G$ is $A = A_G$. For each edge $l=(i,j)$ such that $i<j$, define an (arbitrary) \emph{orientation} $b_l \in \mathbb{R}^n$ as $(b_l)_i = 1$, $(b_l)_j = -1$ and all other entries $0$. $B = \{b_l\}_{l=1}^m \in \mathbb{R}^{n \times m}$ is called the \emph{edge-incidence} matrix of $G$.
This paper explores the effects of \emph{re-weighting} a graph's edges on various graph functions. The re-weighted graph, $G=\langle V,E,w \rangle$, is defined by the associated weight vector $g \in \mathbb{R}^m$ such that, $g_l = w(e_l)$.
The corresponding \emph{weighted Laplacian} $L_g = L_g(G)$, w.r.t the weight vector $g \in \mathbb{R}^m$, is the PSD matrix \[ L_g := B \cdot \diag(g) \cdot B^T. \] When clear from context, we sometimes denote $W = W_g \coloneqq \diag(g)$. Equivalently, $L_g = D_g - A_G(g)$, where $A_G(g)$ is the weighted adjacency matrix of $G$ and $D_g$ is the diagonal matrix with $d_g(u) = \sum_{v \sim u} w(u,v)$ as the $u$'th diagonal entry. Note that the unweighted case corresponds to $g= \boldsymbol{1}\in \mathbb{R}^m$.\\
An important fact about Laplacians is that the rank of $L$ is $n-1$ and that $L \boldsymbol{1} = 0$. It is easily verified that $L +(1/n)\boldsymbol{1} \boldsymbol{1}^T$ is invertible and that its inverse equals \begin{equation}\label{equ_laplac_inv}
(L + (1/n)\boldsymbol{1} \boldsymbol{1}^T)^{-1} = L^+ + (1/n)\boldsymbol{1} \boldsymbol{1}^T, \end{equation} where $L^+$ is the Pseudo-inverse of $L$.
Throughout the paper we will always assume that $g$ is normalized such that $\boldsymbol{1}^T g =1$. One nice consequence of this normalization is that $AM(\lambda(L_g)) = \frac{2}{n-1}$. To see why, recall that $L_g = D_g - A(g)$. So \[
\Tr L_g = \Tr D_g - \Tr A(g) = \Tr D_g = \sum_{i} \sum_{j \sim i} w(i,j) =2 \cdot \sum_{e \in E} w(e) = 2 \cdot \boldsymbol{1}^T g = 2. \] $L_g$ has only $n-1$ positive EVs, so \begin{align*}
AM(\lambda(L_g)) = \frac{1}{n-1} \sum_{i=2}^n \lambda_i = \frac{1}{n-1} \Tr L_g = \frac{2}{n-1}. \refstepcounter{equation}\tag{\theequation} \label{equ_AM_Laplacian} \end{align*} We will use that later on.\\
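This computation is easy to verify numerically; the sketch below (a hypothetical graph with random normalized weights, \texttt{numpy} assumed) checks both $\Tr L_g = 2$ and $AM(\lambda(L_g)) = \frac{2}{n-1}$:

```python
import numpy as np

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # hypothetical graph
g = np.random.default_rng(1).random(len(edges))
g /= g.sum()                                             # 1^T g = 1

L = np.zeros((n, n))
for (i, j), w in zip(edges, g):
    L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w

lam = np.linalg.eigvalsh(L)[1:]          # the n-1 positive eigenvalues
print(np.trace(L))                       # 2
print(lam.mean())                        # 2/(n-1)
```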
Throughout the paper, we shall use the following shorthand for the $\ell_{\infty}$-Lewis Weight of a given graph $G$ \begin{equation}\label{equ_LW_normalize_grpahs}
g_{lw} := \frac{1}{n-1} w_\infty(B^T) . \end{equation} This normalization is due to Fact~\ref{equ_LS_facts} and the fact that $\rank(B^T) = n-1$.\\
With these definitions, we can now formally define the Effective Resistances of a graph: \begin{definition}[Effective Resistance] \label{def_ER} Given a weighted graph $G$, with Laplacian $L_g = BWB^T$, the Effective Resistance (ER) between a pair of terminal nodes $(i,j)$ in the graph is:
\[ ER(i,j) = b_{ij}^T L_g^+ b_{ij} \] where $b_{ij} = e_i - e_j$, and $L^+$ is the pseudo-inverse of $L$. \end{definition}
Throughout the paper we sometimes denote the ER between two vertices $i$ and $j$, with weight vector $g$, as $R_{ij}(g)$. We denote the Effective Resistance on the $l$'th edge of $G(g)$ by \[ R_l(g) = b_l^T L_g^+ b_l \ \ , \ l=1\dots m . \] When the weights $g$ are clear, we will write $R_l$. One useful property, which follows directly from the definition and the properties of the pseudo-inverse, is that ER is a homogeneous function of $g$ of degree $-1$, i.e., \begin{equation}\label{equ_ER_homogenous}
R_{ij}(\alpha g) = R_{ij}(g)/\alpha. \end{equation} Another important property of ER is that it defines a metric on the graph \cite{Klein93}, and as such satisfies the triangle inequality (for any weights $g$): \begin{align*}
R_{ij} \leq R_{ik} + R_{kj} \ \forall i,k,j \in V \refstepcounter{equation}\tag{\theequation} \label{equ_ER_tri_ineq} \end{align*}
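Both properties, homogeneity~\eqref{equ_ER_homogenous} and the triangle inequality~\eqref{equ_ER_tri_ineq}, can be checked directly from Definition~\ref{def_ER}; the following sketch (hypothetical graph and weights, \texttt{numpy} assumed) does so numerically:

```python
import numpy as np
from itertools import permutations

def laplacian(n, edges, g):
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, g):
        L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    return L

def ER(n, edges, g):
    """All-pairs effective resistances R_ij = L+_ii + L+_jj - 2 L+_ij."""
    Lp = np.linalg.pinv(laplacian(n, edges, g))
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # hypothetical graph
g = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
R = ER(n, edges, g)

# homogeneity of degree -1: R(alpha * g) = R(g) / alpha
hom_ok = np.allclose(ER(n, edges, 3 * g), R / 3)
# triangle inequality over all triples of vertices
tri_ok = all(R[i, j] <= R[i, k] + R[k, j] + 1e-12
             for i, j, k in permutations(range(n), 3))
print(hom_ok, tri_ok)
```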
Our paper continues the work of \cite{saberi} on the problem of minimizing the total Effective Resistance of a graph (ERMP), also known as the Kirchhoff Index $\mathcal{K}_G(g)$ of $G$ \cite{Lukovits1999}. $\mathcal{K}_G$ can also be expressed as \begin{align*}
\mathcal{K}_G(g) := & \sum_{i,j} R_{ij}(g) \\
= & n\Tr L_g^{+} \refstepcounter{equation}\tag{\theequation} \label{equ_R_tot_rep_tr} \\
= & n\Tr (L_g + (1/n) \boldsymbol{1}\boldsymbol{1}^T)^{-1} - n \end{align*} where we used the definition and trace laws for the first equality, and equation~\eqref{equ_laplac_inv} for the second one. Using the above expression, we can re-write the ERMP as the following SDP: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & \Tr X^{-1} &\\ \text{subject to}& X = \sum_{l=1}^{m} g_lb_lb_l^T + (1/n)\boldsymbol{1}\boldsymbol{1}^T \tag{ERMP}\\ \label{equ_R_tot_A_design}
& \boldsymbol{1}^Tg=1 \end{array} \end{equation*} which is a special case of A-optimal design (see~\eqref{equ_A_optimal_def}) \emph{over Laplacians} (with the incidence matrix $B$ as the experiment matrix).
\subsection{A New Characterization of Leverage Scores and Effective Resistances}\label{sec_new_char_LS_ER}
Before proceeding to prove our results, we present here a new geometric characterization for leverage scores (LS) and effective resistances (ER). As mentioned earlier in the introduction, the optimal solution for D-optimal design is precisely the LW of the experiment matrix \cite{CP14,ccly}. Our characterization follows from a different (and to the best of our knowledge, new) proof of the latter result, using elementary convex optimization analysis. For brevity, we only provide here a high-level overview, and defer the details to~\cref{appendix_D_design}. Let $B$ be the edge-incidence matrix of some graph $G$, $L_g$ the weighted Laplacian of $G(g)$, and denote our objective (see remark~\ref{remark_LD_G_objective}) by \[ LD_G(g) := \logdet (L_g^+ + (1/n) \boldsymbol{1} \boldsymbol{1}^T).\] In~\eqref{equ_logdet_der}, we prove that the ER on the $l$'th edge equals \begin{align*}
R_l(g) = b_l^T L_g^+ b_l = b_l^T(B W_g B^T)^+ b_l = -\frac{\partial LD(g)}{\partial g_l}. \end{align*} In other words, the gradient of $LD_G(g)$ (w.r.t $g$) equals the ER on the edges (up to a minus sign). This can be generalized to Leverage Scores of an arbitrary matrix as well: Let $V \in \mathbb{R}^{n \times m}$ and define $C = W_g^{1/2} V^T$, the weighted experiment matrix. Define $E_V(g) = V W_g V^T$ and $LD_V(g) = \logdet (E_V(g)^{-1}) = -\logdet (E_V(g))$. Then the $l$'th LS of $C$ equals \begin{align}\label{eq_char_LS_grad} && \tau_l(C) = -g_l \cdot \frac{\partial LD_V(g)}{\partial g_l} && \text{see~\eqref{equ_der_LD_g_with_LS}} \end{align} I.e., the Leverage Scores of the weighted matrix are the weighted gradient of $LD_V(g)$ (up to a minus sign).
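The gradient identity can be validated by finite differences. The sketch below uses $LD_G(g) = -\logdet(L_g + \frac{1}{n}\boldsymbol{1}\boldsymbol{1}^T)$, which equals the stated objective by~\eqref{equ_laplac_inv} (hypothetical graph and weights, \texttt{numpy} assumed):

```python
import numpy as np

n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # hypothetical graph

def laplacian(g):
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, g):
        L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    return L

def LD(g):
    # logdet(L_g^+ + (1/n) 11^T) = -logdet(L_g + (1/n) 11^T)
    return -np.linalg.slogdet(laplacian(g) + np.ones((n, n)) / n)[1]

g = np.array([0.1, 0.2, 0.3, 0.25, 0.15])
Lp = np.linalg.pinv(laplacian(g))
R = np.array([Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in edges])

# central finite differences of LD along each edge weight
h = 1e-6
grad = np.array([(LD(g + h * e) - LD(g - h * e)) / (2 * h)
                 for e in np.eye(len(edges))])
ok = np.allclose(grad, -R, atol=1e-5)
print(ok)   # the gradient equals minus the edge ERs
```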
This characterization has the following geometric interpretation: We know that D-optimal design is equivalent to finding the minimum-volume enclosing ellipsoid on $V$. This means that the gradient of $LD_V(g)$ is proportional to the gradient of the volume of $\mathcal{E}$. Thus, \eqref{eq_char_LS_grad} implies: \begin{lemma}
The Leverage Scores of a matrix $V$ are equal (up to constant factors) to the gradient of the volume of the ellipsoid induced by $E_V(g)$.
In particular, in the case of \emph{Laplacians}, the Effective Resistance on the edges of a weighted graph are equal to the gradient of the volume of the ellipsoid induced by $L_g$. \end{lemma}
In fact, for Laplacians a stronger statement holds, namely, the optimality criterion for $LD_G$ becomes \begin{flalign*} && -\frac{\partial LD_G(g)}{\partial g_l} = R_l(g) \leq \rank(B) = n-1. && \text{see~\eqref{equ_LD_G_opt_crit}} \end{flalign*} Moreover, at $g=g_{\ell w}$, we saturate this condition (see equation~\eqref{equ_LD_LW_opt_saturation}), meaning that \[ R_l(g_{\ell w}) = n-1 \ , \ \forall \ l=1\dots m \]
Now, from the uniqueness of the optimal solution for $LD(g)$, we know that for any other normalized weight vector $g$ the optimality criterion does not hold. In other words, for any $\wh{g} \neq g_{\ell w} \in \mathbb{R}^m$, such that $\boldsymbol{1}^T \wh{g} = 1$, there exists $l \in [m]$ such that \[ R_l \left( \ \wh{g} \ \right) > n-1. \] We conclude that $\ell_{\infty}-$LW \emph{minimizes the maximal} ER over edges, among all (normalized) weight-vectors.
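On a tree this saturation is easy to see concretely: every edge is a bridge with unweighted ER equal to $1$, and the uniform weights $\frac{1}{n-1}$ (which, as shown later for trees, are exactly $g_{\ell w}$) scale each edge ER up to $n-1$. A numerical sketch on a hypothetical star (\texttt{numpy} assumed):

```python
import numpy as np

n = 6
edges = [(0, i) for i in range(1, n)]    # a star: a tree with m = n-1 edges
g_lw = np.ones(n - 1) / (n - 1)          # for trees, g_lw = g_uni

L = np.zeros((n, n))
for (i, j), w in zip(edges, g_lw):
    L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
Lp = np.linalg.pinv(L)

R = [Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in edges]
print(R)   # every edge saturates the optimality criterion: R_l = n - 1
```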
\section{ ERMP Approximation via $\ell_{\infty}$-Lewis Weights }\label{sec_R_tot}
Recall that the Kirchhoff Index of a graph, $\mathcal{K}_G^*$, is the optimal value of the ERMP problem~\eqref{equ_ERMP_def}. Throughout this section, we denote by $\mathcal{K}_G^{\ell w}$ the value of ERMP at $g_{\ell w}$. The approximation ratio obtained by $\ell_{\infty}$-LW is defined as \begin{equation}\label{def_alpha_AD}
\alpha_{A,D} \coloneqq \frac{\mathcal{K}_G^{\ell w}}{\mathcal{K}_G^*}. \end{equation} Recall that $g_{\ell w}$ is the optimal solution for the D-optimal design, so $\alpha_{A,D}$ exactly captures the gap of A- and D-optimal design over Laplacians. Since we are proposing $\ell_{\infty}$-LW as an approximation-algorithm for the ERMP, $\alpha_{A,D}$ is simply the \emph{approximation ratio} of our algorithm. \\ This section contains roughly two independent results. We first focus solely on \emph{trees}, thus proving \Cref{thm_LW_apx_ERMP_trees}. For trees, the ERMP takes a simpler form, and we use a dedicated technique to prove our theorem. The second result extends the analysis to general graphs: we give a more general analysis, showing two different upper bounds on our approximation ratio. We conjecture that they yield an $O(\log n)$-UB, in particular implying \Cref{conj_LW_apx_ERMP}. We finish this section by showing some formulas regarding these UBs.
\subsection{$\ell_{\infty}$-LW are $O(1)$-Approximation for Trees}
We divide this section into two parts. First, we rewrite the approximation ratio specifically for trees, using the formulas for the optimal weights derived by \cite{saberi}. Then we give the technical details of the proof of \Cref{thm_LW_apx_ERMP_trees}.\\
\noindent We first show that for trees $g_{\ell w} = g_{uni} = (1/m)\boldsymbol{1}$: \begin{claim}\label{clm_LW_tree}
For any tree $T$, we have $g_{\ell w} = g_{uni}$. \end{claim} \begin{proof}
For simplicity, let us first look at the non-normalized weights, $w_\infty = w_\infty(B^T)$. We know that $w_\infty \leq \boldsymbol{1}$ (Fact~\ref{equ_LS_facts}). Moreover, we have by Fact~\ref{equ_LS_facts} that
\begin{align*}
\sum_{l=1}^m (w_\infty)_l = \underbrace{n-1 = m}_\text{for trees}.
\end{align*}
Thus, having $m$ positive numbers, each at most $1$, that sum up to $m$, they must all equal $1$. Hence,
$ (w_\infty)_l = 1 \ , \forall \ l=1 \dots m$, so normalizing by $n-1 = m$ we get $g_{\ell w}=g_{uni}$, as claimed.
\end{proof}
Next, we restate some key formulas from \cite{saberi} for the ERMP on trees. First, when the graph is a tree, for any two vertices $i,j$, the ER between them is \[ R_{ij}(g) = \sum_{e_l \in P(i,j)} \frac{1}{g_l} \]
This is clear from an electrical network POV\footnote{I.e., the ER of resistors in series is additive.}. In other words, the $l$'th edge contributes $(1/g_l)$ to all the paths that contain it. With this in mind, we define the \emph{congestion} of the $l$'th edge as follows:
\begin{definition}[congestion of an edge]\label{def_congestion} The congestion of an edge $e_l \in T$, denoted $c_l(T)$, is the number of paths in $T$ that contain it. \end{definition} It's easy to see that \[ c_l = n_l(n-n_l) , \] where $n_l$ is the number of nodes in the \emph{sub-tree} of one side of $e_l$ (this is symmetric). Note that from the concavity of the function $f(x) = x(n-x)$, the minimal congestion is achieved at the leaves of $T$ ($1(n-1) = m$), whereas the maximal congestion is $\frac{n}{2}(n-\frac{n}{2}) = (n/2)^2$. More generally, \begin{align*} f(x) = x(n-x) < y(n-y) = f(y) \iff \min(x,n-x) < \min(y,n-y). \refstepcounter{equation}\tag{\theequation} \label{equ_cong_relation} \end{align*} With this definition and the above fact, we get that for any tree $T$, and for any weight-vector $g \in \mathbb{R}^m$, \[ \mathcal{K}_T(g) = \sum_{l=1}^m \frac{c_l}{g_l} \;\; . \] In addition, \cite{saberi} (Section 5.1) proved that, for trees, the optimal ERMP solution is \begin{equation}\label{equ_opt_tree}
\mathcal{K}_T^* = \left(\sum_{l} c_l^{1/2} \right)^2, \end{equation} and that the closed-form expression for the optimal weights is \begin{equation}\label{equ_opt_g_trees}
g^*_l = (c_l/\mathcal{K}^*)^{1/2} = \frac{c_l^{1/2}}{\sum_{k} c_k^{1/2}} \ \ \ \ \forall l=1,\dots,m. \end{equation} Using the last expressions, it immediately follows that \begin{equation}\label{equ_cong_sum}
c_l = \mathcal{K}^* \cdot (g^*_l)^2 \implies \sum_l c_l = \mathcal{K}^* \cdot \sum_l (g^*_l)^2 = \mathcal{K}^* \cdot ||g^*||_2^2 . \end{equation} Lastly, our proof will use the fact that the optimal weights are at most $1/2$. Indeed, \begin{align*} g^*_{max} \leq \frac{c_{max}^{1/2}}{\min \{\sum_l c_l^{1/2} \}} \leq \frac{n/2}{m\sqrt{m}} \leq \frac{1}{\sqrt{m}} \ll \frac{1}{2}, \refstepcounter{equation}\tag{\theequation} \label{equ_g_star_UB} \end{align*} where we used the bounds from \eqref{equ_cong_relation}.\\
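Congestions are cheap to compute: root the tree, accumulate subtree sizes in reverse DFS order, and apply $c_l = n_l(n-n_l)$. A pure-Python sketch (the path example is hypothetical):

```python
def congestions(n, edges):
    """Congestion c_l = n_l * (n - n_l) of every edge of a tree."""
    adj = [[] for _ in range(n)]
    for l, (u, v) in enumerate(edges):
        adj[u].append((v, l))
        adj[v].append((u, l))
    # root the tree at vertex 0 and record a DFS order
    parent, order, stack = {0: (-1, -1)}, [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, l in adj[u]:
            if v not in parent:
                parent[v] = (u, l)
                stack.append(v)
    # subtree sizes in reverse DFS order give n_l for each edge
    size, c = [1] * n, [0] * len(edges)
    for u in reversed(order):
        p, l = parent[u]
        if p >= 0:
            size[p] += size[u]
            c[l] = size[u] * (n - size[u])
    return c

# hypothetical example: the path 0-1-2-3-4
print(congestions(5, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # [4, 6, 6, 4]
```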
Using the fact that $g_{\ell w}= g_{uni}$ (claim~\ref{clm_LW_tree}), we can write $\mathcal{K}^{\ell w}_T$ as: \[ \mathcal{K}_T^{\ell w} =\sum_{l=1}^m \frac{c_l}{(g_{\ell w})_l} = \sum_{l=1}^m \frac{c_l}{1/m} = m\sum_{l} c_l \] Thus, for any tree $T_n$ of order $n$, we can write the approximation ratio in terms of the congestion as: \begin{equation}\label{equ_apx_ratio_trees}
\alpha_{A,D}(T_n) = \alpha(T_n) = \frac{m\sum\limits_{l} c_l}{\left(\sum\limits_{l} c_l^{1/2} \right)^2} \end{equation}
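To get a feel for~\eqref{equ_apx_ratio_trees}, consider a star versus a path: a star has uniform congestion, so $g^* = g_{uni} = g_{\ell w}$ and $\alpha = 1$ exactly, while a path already has a mild bottleneck in the middle. A short sketch (illustrative numbers only; the $\approx 3.12$ tree bound of \Cref{thm_LW_apx_ERMP_trees} serves as a reference):

```python
import math

n = 50
m = n - 1
alpha = lambda c: m * sum(c) / sum(math.sqrt(x) for x in c) ** 2

c_star = [m] * m                             # every edge of a star is a leaf edge
c_path = [l * (n - l) for l in range(1, n)]  # path congestions c_l = l(n-l)

print(alpha(c_star))   # exactly 1: uniform congestion means g* is uniform
print(alpha(c_path))   # slightly above 1, far below the worst case ~3.12
```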
We shall also use the following claim in the proof of Theorem \ref{thm_LW_apx_ERMP_trees} : \begin{claim}\label{clm_apx_ratio_subtree}
Let $T,T'$ be two trees such that $T \subset T'$. Then $\alpha(T) \leq \alpha(T')$. \end{claim} \begin{proof}
Recall the definition of $\alpha(T)$:
\[
\alpha(T) = \frac{\mathcal{K}^{\ell w}_T}{\mathcal{K}^*_T} = \frac{m \cdot \sum_l c_l}{\mathcal{K}^*_T}
\]
Clearly, $\mathcal{K}^{\ell w}_T < \mathcal{K}^{\ell w}_{T'}$ (we are only adding positive terms). Moreover, \cite{saberi} showed that
\[
T \subset T' \implies \mathcal{K}^*_{T'} \leq \mathcal{K}^*_T \;\; .
\]
(this can be seen by noting that the optimal weights for $T$ are feasible for $T'$). From this, it's easy to see that
\[
\alpha(T) \leq \alpha(T')
\] \end{proof}
\subsubsection{Proof of Theorem \ref{thm_LW_apx_ERMP_trees}}\label{sec_prf_LW_apx_trees}
We begin by showing when an LT increases the approximation ratio.
\begin{proof}[Proof of Claim~\ref{clm_apx_ratio_ET}] \normalfont
Denote $T' = E_k \circ T$ and $c_k' = c_k(T')$ (the rest of the congestions remain the same).
Recall that $\alpha = (m\sum_l c_l)/ \left(\sum_l c_l^{1/2}\right)^2$, so let's write the modified numerator and denominator.
\begin{align*}
&\sum_{l} c_l' = \sum_{l} c_l + (c_k' - c_k). \\
&\sum_{l} c_l'^{1/2} = \sum_{l} c_l^{1/2} +(c_k'^{1/2} - c_k^{1/2}).
\end{align*}
We get that,
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \frac{\sum_{l} c_l}{\left(\sum_l c_l^{1/2}\right)^2} \leq \frac{\sum_{l} c_l + (c_k' - c_k)}{\left(\sum_{l} c_l^{1/2} +(c_k'^{1/2} - c_k^{1/2})\right)^2} \\
&\iff \frac{\sum_{l} c_l}{\left(\sum_l c_l^{1/2}\right)^2} \leq \frac{\sum_{l} c_l + (c_k' - c_k)}{\left(\sum_{l} c_l^{1/2}\right)^2 +\left(c_k'^{1/2} - c_k^{1/2}\right)^2 + 2\left(\sum_{l} c_l^{1/2}\right) \left(c_k'^{1/2} - c_k^{1/2}\right) } \\
\end{align*}
Using the fact that, assuming the denominators are positive (which is true in our case),
\[
\frac{A}{B} \leq \frac{A+D}{B+C} \iff AC \leq BD,
\]
we get that
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \left(\sum_l c_l \right)\left[ \left(c_k'^{1/2} - c_k^{1/2}\right)^2 + 2\left(\sum_{l} c_l^{1/2}\right) \left(c_k'^{1/2} - c_k^{1/2}\right) \right] \leq (c_k' - c_k)\left(\sum_{l} c_l^{1/2}\right)^2 \\
&\iff \left(\sum_l c_l \right) \left(c_k'^{1/2} - c_k^{1/2}\right) \left( 2\sum_{l} c_l^{1/2} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} - c_k^{1/2}) (c_k'^{1/2} + c_k^{1/2})\left(\sum_{l} c_l^{1/2}\right)^2 \\
\end{align*}
We divide the rest of the proof to two cases.\\
\textbf{Case 1:} $E_k$ is an upper LT. By definition, $c_k' > c_k$. Then, $c_k'^{1/2} - c_k^{1/2} > 0$, so we can divide by the latter to get,
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \left(\sum_l c_l \right) \left( 2\sum_{l} c_l^{1/2} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} + c_k^{1/2})\left(\sum_{l} c_l^{1/2}\right)^2 \\
&\iff ||g^*||_2^2 \left( 2\frac{c_k^{1/2}}{g^*_k} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} + c_k^{1/2}) && \text{using~\eqref{equ_cong_sum}, and \eqref{equ_opt_g_trees} for the $k$'th edge.}\\
&\iff 2\frac{c_k^{1/2}}{g^*_k} - c_k^{1/2}\left(1 + \frac{1}{||g^*||_2^2} \right) \leq c_k'^{1/2}\left(\frac{1}{||g^*||_2^2} -1\right) && \text{group terms by $c_k^{1/2}$ and $c_k'^{1/2}$.}
\end{align*}
In our case, $c_k'/c_k > 1$, so after dividing both sides by $c_k^{1/2}$, it's sufficient to require
\begin{align*}
\frac{2}{g^*_k} - 1 - \frac{1}{||g^*||_2^2} < \frac{1}{||g^*||_2^2} -1 \iff g_k^* > ||g^*||_2^2 .
\end{align*}
Thus we get that,
\[
g_k^* > ||g^*||_2^2 \implies \alpha(T) \leq \alpha(T').
\]
\textbf{Case 2:} $E_k$ is a lower LT. Using the same arguments we get that,
\[
g_k^* \leq ||g^*||_2^2 \implies \alpha(T) \leq \alpha(T')
\]
Unifying the two cases completes the proof.
\end{proof}
Remember that we defined the following partition: \[
E_<(T) \coloneqq \{ l \ \mid \ g_l^*(T) \leq ||g^*(T)||_2^2 \} \ , \ E_>(T) \coloneqq \{ l \ \mid \ g_l^*(T) > ||g^*(T)||_2^2 \} \]
We now show the key feature of this partition -- $E_<$ and $E_>$ are \emph{invariant} under upper and lower LT: \begin{claim}\label{clm_invariant_LT}
For any tree $T$, and $k \in E_>$ we have that,
\[
E_>(E_k^{\uparrow} \circ T) = E_>(T).
\]
and vice versa for $E_<$. \end{claim} \begin{proof} \normalfont
We will only prove this for $E_>$ as the proof for $E_<$ is symmetric. Let $k \in E_>(T)$. Denote by $T'=E_k^{\uparrow} \circ T$. We first show that $g^*_k(T') > g^*_k(T)$. From equation~\eqref{equ_opt_g_trees} we know that
\[
g_l^*(T) = \left(\frac{c_l(T)}{\mathcal{K}^*_T}\right)^{1/2} \ , \ g_l^*(T') = \left(\frac{c_l(T')}{\mathcal{K}^*_{T'}}\right)^{1/2}
\]
In addition, from the definition of upper LT, the congestion on the $k$'th edge increases, and doesn't change on any other edge, meaning that:
\[
\mathcal{K}^*_{T'} = \left(\sum_l c_l(T')^{1/2} \right)^2 > \left(\sum_l c_l(T)^{1/2} \right)^2 = \mathcal{K}^*_T.
\]
Combining the last two equations implies that for all $l \neq k$ we have:
\begin{equation}\label{equ_g_l_upper_LT}
c_l(T') = c_l(T) \implies \frac{g_l^*(T)}{g_l^*(T')} = \left(\frac{\mathcal{K}^*_{T'}}{\mathcal{K}^*_T}\right)^{1/2} > 1 \implies g_l^*(T') < g_l^*(T).
\end{equation}
Recall that $\boldsymbol{1}^T g^* = 1$, so,
\begin{equation}\label{equ_g_k_upper_LT}
g_k^*(T') = 1-\sum_{l\neq k} g^*_l(T') > 1 - \sum_{l\neq k} g^*_l(T) = g_k^*(T).
\end{equation}
We need to show that $k \in E_>(T')$, i.e.,
\[
g_k^*(T') > ||g^*(T')||_2^2
\]
Using~\eqref{equ_g_k_upper_LT}, we can write:
\begin{align*}
g_k^*(T') &= g_k^*(T) + (g_k^*(T') - g_k^*(T)) \\
&> \sum_{l \neq k} g_l^*(T)^2 + g_k^*(T)^2 +(g_k^*(T') - g_k^*(T)) && \text{since $k \in E_>(T)$} \\
&> \sum_{l \neq k} g_l^*(T')^2 + g_k^*(T)^2 +(g_k^*(T') - g_k^*(T)) && \text{using~\eqref{equ_g_l_upper_LT}} \\
&= \sum_{l \neq k} g_l^*(T')^2 + g_k^*(T')^2 + \left(g_k^*(T') - g_k^*(T) + g_k^*(T)^2 - g_k^*(T')^2 \right) \\
&> \sum_{l} g_l^*(T')^2 = ||g^*(T')||_2^2
\end{align*}
The last inequality is because $\forall l , \ g^*_l < 1/2$ (see~\eqref{equ_g_star_UB} above), and for any two numbers $a,b$ such that $a-b>0 , a+b<1$ we have that:
\[
a-b > (a-b)(a+b) = a^2 - b^2 \implies (a-b)-(a^2-b^2) >0.
\]
Using this fact with $a=g_k^*(T') \ , \ b=g_k^*(T)$ concludes the proof. \end{proof}
So far we showed that if there exists an LT as defined in~\eqref{equ_upper_lower_LT}, then the sets $E_<, \ E_>$ are invariant under the matching transformations, and the approximation ratio of the transformed tree is larger than that of the original tree. Using these properties, we can repeatedly apply an LT (until saturation), and the final tree is guaranteed to be the hardest instance. So, we are ready to describe the main process of this section. We begin with an arbitrary tree $T$ of order $n$. We partition its edges into two sets $E_<(T),E_>(T)$ as defined above. We know that these two sets are invariant under the lower and upper LTs -- $E^{\uparrow} , E^{\downarrow}$ -- so we can repeatedly apply them on both sets until we have $E \circ T = T$. Furthermore, from claim~\ref{clm_apx_ratio_ET} we can say that for the final tree $\Tilde{T}$, $\alpha(\Tilde{T})$ is an UB for the approximation ratio of the original tree. It is left to explicitly define $E^{\uparrow} , E^{\downarrow}$ and compute $\alpha(\Tilde{T})$. \\ \begin{figure}
\caption{Description of lower LT }
\label{fig_lower_ET}
\end{figure}
We first define $E^{\downarrow}$. We assume that the tree is rooted, so the "up" and "down" directions are well defined. Let $k \in E_<(T)$ be a non-leaf edge (meaning its congestion is greater than $m$). Let $e_k = (u,v)$, and denote by $T_v$ the downward subtree rooted at $v$. Then $E^{\downarrow}_k$ takes $T_v$ and "ascends" it to $u$ (see Figure~\ref{fig_lower_ET}). Clearly, for any $l \neq k$, $c_l(E^{\downarrow}_k \circ T) = c_l(T)$, so $E^{\downarrow}_k$ is an LT. Furthermore, $c_k(E^{\downarrow}_k \circ T) = m < c_k(T)$
\eqref{equ_cong_relation}, so $E^{\downarrow}_k$ is indeed a lower LT. In other words, applying $E^{\downarrow}_k$ to all $k \in E_<(T)$ simply turns all of them into leaves in $T'$. So, we first apply $E^{\downarrow}_k$ to every edge $k \in E_<(T)$, leaving us with a tree $T'$ in which every $l \in E_<(T')$ is a leaf edge. From now on we will therefore assume that all edges in $E_<(T)$ are leaf edges. \\
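The congestions driving these transformations are just products of subtree sizes, so they are cheap to compute for any concrete rooted tree. The following sketch (illustrative Python, not part of the construction; the helper name is ours) computes $c_l = |T_v|\,(n-|T_v|)$ for every edge $(\mathrm{parent}(v),v)$:

```python
# Illustrative sketch: edge congestions c_l = |T_v| * (n - |T_v|) of a rooted
# tree, computed from subtree sizes. Assumes vertex ids are ordered so that
# parent[v] < v (the root 0 has parent None).
def congestions(n, parent):
    size = [1] * n                       # size[v] = |T_v|, filled bottom-up
    for v in range(n - 1, 0, -1):
        size[parent[v]] += size[v]
    return {(parent[v], v): size[v] * (n - size[v]) for v in range(1, n)}

# path on 5 vertices: congestions 1*4, 2*3, 3*2, 4*1
print(congestions(5, [None, 0, 1, 2, 3]))
# → {(0, 1): 4, (1, 2): 6, (2, 3): 6, (3, 4): 4}
```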
Next, we define $E^{\uparrow}$. We can focus on the sub-tree $T_>$ spanned by $E_>(T)$ -- all the other edges are leaves, so the structure of both trees is the same. Let $k \in E_>(T)$, and denote $e_k = (u,v)$. If $d_{T_>}(u) , d_{T_>}(v) \leq 2 $, do nothing. We emphasize that $d(u)$ is the degree of $u$ in the \textbf{sub-tree} $T_>$, i.e., the number of edges incident to $u$ that are in $E_>$. We now divide the transformation into two cases: \\ \\
\textbf{Case 1:} $d_{T_>}(u) , d_{T_>}(v) > 2 $ (see Figure~\ref{fig_upper_ET_case1}). Let $n_v = |T_v|$, and w.l.o.g. assume that $n_v \leq n/2$. Denote $l_i = (v,x_i)$ for $i=1,...,d(v)$, ordered such that $|T_{x_i}| = n_i$ and $n_1 \leq n_2 \leq ... \leq n_{d(v)}$. Then, $E^{\uparrow}_k$ changes $(v,x_2)$ to $(x_1,x_2)$. Clearly the congestion changes only on $l_1'=(x_1,x_2)$. But, since $n_1,n_2 \leq n_1+n_2 \leq n_v \leq n/2$, then \begin{align*}
\min&(n_1,n-n_1) = n_1 < n_1+n_2 = \min(n_1+n_2,n-(n_1+n_2)) \\
&\implies \underbrace{b_1(T) = n_1(n-n_1)}_{\text{congestion on $l_1$}} < \underbrace{(n_1+n_2)(n-(n_1+n_2)) = b_1(E^{\uparrow}_k \circ T)}_\text{congestion on $l_1'$}. \end{align*}
So, indeed $E^{\uparrow}_k$ is an upper LT\footnote{Although $E^{\uparrow}_k$ is defined by $k$, it actually changes the congestion on $l_1$. This is somewhat ambiguous, but since $l_1 \in E_>$ it does not affect the proof, so we abuse the notation.}. \\ \\ \textbf{Case 2:} w.l.o.g. $d(u)=2 , \ d(v) > 2$ (see Figure~\ref{fig_upper_ET_case2.1}).
Similarly to the first case, let $l' = (u,y) \ , \ l_i = (v,x_i)$ for $i=1,...,d(v)$, ordered such that $|T_{x_i}| = n_i$ and $n_1 \leq n_2 \leq ... \leq n_{d(v)} \ , \ n' = n-|T_v|$.\\
\textbf{Case 2.1:} If $n_1 \leq n'$, then once again $E^{\uparrow}_k$ is to change $(v,x_2)$ to $(x_1,x_2)$. The congestion changes only on $l_1'=(x_1,x_2)$, but $n_1 < n/(d(v)-1) \leq n/2$ and $n_1<n_i,n'$, so \begin{align*}
\min&(n_1,n-n_1) = n_1 < \min (n_1+n_2 , 1+n' + n_3+\dots)\\
&\implies b_1(T) = n_1(n-n_1) < (n_1+n_2)(n-(n_1+n_2)) = b_1(E^{\uparrow}_k \circ T), \end{align*}
and, indeed, $E^{\uparrow}_k$ is an upper LT. \\
\textbf{Case 2.2:} If $n' < n_1$, then $E^{\uparrow}_k$ is to take $(v,x_1)$ with $T_{x_1}$ and move it to $(u,x_1)$ (see Figure~\ref{fig_upper_ET_case2.2}). The congestion changes only on $e_k=(u,v)$. But $n' < n/(d(v)-1) \leq n/2 \ , \ n' < n_1 \leq n_i$ so, \begin{align*}
\min&( n',n-n') = n' < \min ( n_1+n' , 1+n_2+\dots)\\
&\implies c_k(T) = n'(n-n') < (n'+n_1)(n-(n'+n_1)) = c_k(E^{\uparrow}_k \circ T), \end{align*} so, again, we get that $E^{\uparrow}_k$ is an upper LT. \\
\begin{remark}
Note that at the end of case \textbf{(2.2)} we either move to case \textbf{(1)} (if $d(v) > 3$) or to case \textbf{(2.1)} (if $d(v) =3$), so we can focus only on the first type of upper LT (the first two cases). For this type, the degree of $v$ after the transformation is strictly smaller than before, so the process terminates at some point, and we are guaranteed to end up with $d(v) = 2$ for every vertex $v$ of $T_>$ -- i.e., a \emph{path graph} (of edges in $E_>$). \end{remark}
\begin{figure}
\caption{Stem graph with branches. The "black dots" are leaves, the black edges are from $E_>$, and the blue edges from $E_<$.}
\label{fig_path_exits}
\end{figure}
We can finally describe the final tree $\Tilde{T}$. As mentioned earlier, we apply lower LTs until all edges from $E_<$ are leaves. In addition, we apply upper LTs until we get a single path of edges from $E_>$. Thus the final tree has a "stem" structure (i.e., a path) of size $p=|E_>(T)|$ with $s=|E_<(T)|$ "branches" along the way (see \Cref{fig_path_exits}). At this point it should be clear that "pushing aside" branches to the closest end would increase the approximation ratio, since this increases the bottleneck in the tree. Formally speaking, pushing leaves to the ends increases the congestion along the path, which is again an upper LT, thus increasing the approximation ratio as well (remember that edges on the path are from $E_>$). So the final tree $\Tilde{T}$ is simply \[ \Tilde{T} = \mathcal{B}_{s_1,p,s_2} \] where $s_1+s_2 = s$, and $\mathcal{B}_{t,p,s}$ is the \emph{Bowtie} graph (see \Cref{fig_tps_tree}).\\
All that is left is to compute $\alpha(\mathcal{B}_{t,p,s})$. We know that $T \subset T' \implies \alpha(T) \leq \alpha(T')$ (see claim~\ref{clm_apx_ratio_subtree}), so it suffices to prove the bound for $\mathcal{B}_{n,n,n}$ ($n$ is the size of the original tree). While we could do this explicitly, we will use a much easier calculation based on a UB we derive ahead for general graphs, giving us a UB of $\sim 3$ and concluding the proof (see lemma~\ref{lem_a2_bowtie} for further details).
\iffalse
We have $2n$ leaf edges with congestion of $m=3n-1$ (In the "big" tree there are $3n$ vertices). On the $i$'th edge on the path the congestion is $(n+i)(3n-(n+i)) = (n+i)(2n-i)$ for $i=1...n-1$. Hence, \begin{align*}
\alpha(\mathcal{B}_{n,n,n}) &= \frac{m\cdot \sum_l c_l}{(\sum_l c_l^{1/2})^2} = m \cdot \frac{2n\cdot m + \sum_{i=1}^{n-1} (n+i)(3n-(n+i))}{(2n\cdot \sqrt{m} + \sum_{i=1}^{n-1} (n+i)^{1/2}(3n-(n+i))^{1/2})^2}\\
&= m \cdot \frac{2n\cdot m + \sum_{k=n+1}^{2n-1} k(3n-k)}{(2n\cdot \sqrt{m} + \sum_{k=n+1}^{2n-1} k^{1/2}(3n-k)^{1/2})^2}\\
&=m \cdot \frac{2n\cdot m + A}{(2n\cdot \sqrt{m} + B)^2}\\ \end{align*} where \[ A = \sum_{k=n+1}^{2n-1} k(3n-k) = \sum_{k=1}^{2n-1} k(3n-k) - \sum_{k=1}^{n} k(3n-k) \approx \frac{10}{3}n^3 - \frac{7}{6}n^3 = \frac{13}{6}n^3 \] and \[ B = \sum_{k=n+1}^{2n-1} (k(3n-k))^{1/2} = \sum_{k=1}^{2n-1} (k(3n-k))^{1/2} - \sum_{k=1}^{n} (k(3n-k))^{1/2} \approx (c_2 - c_1)n^2 = cn^2 \] Using the approximation \[
\sum_{k=1}^{b \cdot n} (k(3n-k))^{1/2} \approx cn^2 \ \ , \ \ c_b = \int_0^b (t(3-t))^{1/2} \approx \left\{ \begin{array}{ll}
1.031, & \text{for } b=1\\
2.503, & \text{for } b=2
\end{array}\right. \] Thus, we get \begin{align*}
\alpha(T_{tps}) \leq \alpha(T_{n,n,n}) &= (3n)\cdot \frac{(2n)\cdot (3n) + A}{((2n)\cdot \sqrt{3n} + B)^2} \approx (3n) \cdot \frac{6n^2 + \frac{13}{6}n^3}{(3n)(4n^2) + c^2n^4}\\
& \leq \frac{18n^3 + \frac{13}{2} n^4}{c^2n^4} \leq \frac{9n^4+\frac{13}{2} n^4}{c^2 n^4} \approx 10.5 \refstepcounter{equation}\tag{\theequation} \label{equ_bowtie_analitic_apx} \end{align*} (Actually, we can a tighter bound of $\approx 3$, see lemma~\eqref{lem_a2_bowtie}).
\fi
\subsection{ERMP for General Graphs}\label{sec_general_UB}
We now continue to the case of general graphs. Unlike for trees, for general graphs the ERMP does not have a clear "combinatorial" form but rather an algebraic expression, so our proof method must take a different approach. We derive two different UBs for the approximation ratio, and we conjecture that their \emph{minimum} is $\Tilde{O}(1)$. Indeed, we will show that in the case of the Bowtie graph the \emph{minimum} is $\Tilde{O}(1)$, matching \Cref{thm_LW_apx_ERMP_trees}. Finally, we provide many simulations that support our conjecture. \\
Recall that the ERMP can be formulated as: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & \Tr X^{-1} &\\ \text{subject to}& X = \sum_{l=1}^{m} g_lb_lb_l^T + (1/n)\boldsymbol{1}\boldsymbol{1}^T \\
& \boldsymbol{1}^Tg=1 \end{array} \end{equation*}
Since a closed-form expression for the exact approximation ratio seems hard to compute, our goal is to derive an upper bound on the approximation ratio of $\ell_{\infty}$-LW, which is henceforth denoted $\alpha_{A,D}$. It turns out that we can derive two \emph{different} upper bounds for $\alpha_{A,D}$, with an interesting relation between them. The first bound uses the $AM-GM$ inequality, and the second is obtained via Lagrange duality analysis.
\subsubsection{The AM-GM UB} Recall that we can formulate both A-optimality and D-optimality in terms of "means" (see \cref{sec_optimal_design}), so trying to use the $AM-GM$ inequality is natural. We know we can write $\mathcal{K}_G(g)$ as (equation~\eqref{equ_R_tot_rep_tr}) \begin{equation}\label{equ_R_tot_rep_HM}
\mathcal{K}(g) = n \Tr L_g^+ = n \sum_{i=1}^{n-1} \lambda_i(L_g^+) = n \sum_{i=1}^{n-1} \lambda_i(L_g)^{-1} = \frac{n(n-1)}{HM(\lambda(L_g))} \end{equation} (we only have $n-1$ positive eigenvalues). So, we get \[ HM(\lambda(L_{g^*})) = \frac{n(n-1)}{\mathcal{K}_G^*} \ ,\ HM(\lambda(L_{g_{\ell w}})) = \frac{n(n-1)}{\mathcal{K}_G^{\ell w}} \] Using \emph{AM-GM} inequality, with equation~\eqref{equ_AM_Laplacian}, gives us, \[ \alpha_{A,D} = \frac{\mathcal{K}_G^{\ell w}}{\mathcal{K}_G^*} = \frac{HM(\lambda(L_{g^*}))}{HM(\lambda(L_{g_{\ell w}}))} \leq \frac{AM(\lambda(L_{g^*}))}{HM(\lambda(L_{g_{\ell w}}))} = \frac{2/(n-1)}{HM(\lambda(L_{g_{\ell w}}))} = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} \]
We denote this bound, the \emph{AM-GM} bound, by $\alpha_{AM}$: \[ \alpha_{A,D} \leq \alpha_{AM}(\ell w) = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} = \frac{2}{(n-1)^2} \Tr L_{\ell w}^+ \]
It is reassuring that, since $\mathcal{K}_G^* \geq n(n-1)^2/2$ (see the lower bound on the optimal solution from \cite{saberi}), we get that: \[ \alpha_1(g) = \frac{2}{n(n-1)^2}\mathcal{K}_G(g) \geq \frac{2}{n(n-1)^2}\mathcal{K}_G^* \geq 1 , \] with equality iff $g=g^*$, so the bound is well defined.
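As a quick numerical illustration (a sketch of ours, not the paper's code), one can evaluate $\alpha_1(g)$ for a concrete feasible $g$ and confirm it is at least $1$, e.g., uniform weights on a cycle:

```python
import numpy as np

# Sanity check: alpha_1(g) = 2/((n-1)^2) * Tr(L_g^+) >= 1 for any feasible g.
# Here: uniform weights on an 8-cycle (1^T g = 1).
n = 8
edges = [(i, (i + 1) % n) for i in range(n)]
g = np.ones(len(edges)) / len(edges)
L = np.zeros((n, n))
for w, (u, v) in zip(g, edges):
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w
alpha1 = 2 / (n - 1) ** 2 * np.trace(np.linalg.pinv(L))
print(alpha1)          # ≈ 1.714 (= 84/49) for the 8-cycle
assert alpha1 >= 1
```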
\subsubsection{The Duality-gap UB} Next we would like to use the \emph{duality gap} of the ERMP to derive a second UB. Using the dual problem to bound the sub-optimality of a feasible solution is a well-known technique, so our motivation is clear. We begin by restating the result of \cite{saberi} regarding the duality gap of the ERMP problem. For the sake of completeness, as it is one of the key components of our result, we derive the duality gap from scratch, but note that this was already done by \cite{saberi}. We state here the final results; see the full proof in \cref{appendix_ERMP_dual_gap}.\\
The dual problem of ERMP is (see equation~\eqref{equ_ERMP_dual_prob}), \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ & n(\Tr V)^2 \\
& \text{subject to} & b_l^T V^2 b_l = ||V b_l||^2 \leq 1 , \\
& & V \succeq 0 \ ,\ V \boldsymbol{1} = 0 \end{array} \end{equation*}
We know that the gradient of $\mathcal{K}_G(g)$ w.r.t.\ $g$ equals (see equation~\eqref{equ_der_L_g_plus}): \[
\frac{\partial \mathcal{K}_G(g)}{\partial g_l} = -n \Tr \left( L_g^+ b_l b_l^T L_g^+ \right) = -n||L_g^+ b_l||^2 \]
Given any feasible $g$, let $V = \frac{1}{\underset{l}{\max} ||L_g^+ b_l||} L_g^+$ (this is clearly dual feasible). By definition, the duality gap at $g$ equals \begin{align*}
\eta &= n \Tr L_g^+ - n(\Tr V)^2 = n \Tr L_g^+ - \frac{n (\Tr L_g^+)^2}{\underset{l}{\max} ||L_g^+ b_l||^2} \\
&= \frac{\mathcal{K}_G(g)}{-\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} (-\min_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l} - \mathcal{K}_G(g)) \end{align*} Using this, we can derive our second UB. By the definition of the duality gap, \[ \mathcal{K}_G(g) - \mathcal{K}_G^* \leq \eta = \frac{\mathcal{K}_G(g)}{-\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} (-\min_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l} - \mathcal{K}_G(g)) \\ = \mathcal{K}_G(g) + \frac{(\mathcal{K}_G(g))^2}{\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} \\ \] Remembering that the derivative of $\mathcal{K}_G(g)$ is negative, a simple rearrangement gives \[
\frac{\mathcal{K}_G(g)}{\mathcal{K}_G^*} \leq \frac{-\min\limits_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l}}{\mathcal{K}_G(g)} = \frac{-\min\limits_l ( -n||L^+ b_l||_2^2 )}{n \Tr L^+} = \frac{\max\limits_l ||L^+ b_l||_2^2}{\Tr L^+} \]
Inserting $g_{\ell w}$, we denote this bound, the \emph{duality gap} bound, by $\alpha_{dual}$, \[
\alpha_{A,D} \leq \alpha_{dual}(\ell w) \coloneqq \frac{\max\limits_l ||L_{\ell w}^+ b_l||_2^2}{\Tr L_{\ell w}^+} \]
\subsubsection{Approximation Ratio UB for the ERMP}
We can now define our derived UB for the approximation ratio $\alpha_{A,D}$: \[
\alpha_{A,D} \leq \alpha_{min} \coloneqq \min(\alpha_1 , \alpha_2). \]
Throughout the paper, $\alpha_{AM}$ and $\alpha_1$ always refer to $\alpha_{AM}(\ell w)$, and $\alpha_{dual}$ and $\alpha_2$ always refer to $\alpha_{dual}(\ell w)$, while plain $\alpha$ refers to $\alpha_{min}$ unless stated otherwise.\\
We can now propose the following, stronger, conjecture: \begin{conjecture}\label{conj_a_min}
For any graph $G$, \( \alpha_{min} \leq O(\log n) \) \end{conjecture} We emphasize that this is not equivalent to \Cref{conj_LW_apx_ERMP}, as we don't know whether $\alpha_{min}$ is tight. However, our simulations indicate that $\alpha_{min}$ is sufficient (see ahead).\\
For the rest of this section, we derive some formulas for $\alpha_1$ and $\alpha_2$ and their relation. Our first lemma establishes the connection between the two bounds: \begin{lemma}\label{lem_a2_der_log_a1}
$\alpha_2(g) \ = \left\lVert - \nabla_g (\log \alpha_1) \right\rVert_\infty $ \end{lemma}
\begin{proof}
Recall that \(\alpha_1(g) = \frac{2}{(n-1)^2} \Tr L_g^+ \).
Using the chain rule and equation~\eqref{equ_der_L_g_plus}, we get that
\begin{align*}
- \frac{\partial }{\partial g_l} \log(\alpha_1(g)) &= - \frac{1}{\alpha_1(g)} \frac{\partial \alpha_1}{\partial g_l} = \frac{-(n-1)^2}{2\Tr L_g^+}\frac{2}{(n-1)^2} \Tr \frac{\partial L_g^+}{\partial g_l} \\
&= \frac{-1}{\Tr L_g^+} \Tr( -L_g^+ b_l b_l^T L_g^+ ) = \frac{\Tr b_l^T L_g^+ L_g^+ b_l}{\Tr L_g^+} = \frac{||L_g^+ b_l||^2}{\Tr L_g^+}
\end{align*}
Hence,
\[
\left\lVert - \nabla_g (\log \alpha_1) \right\rVert_\infty = \max_l \frac{||L_g^+ b_l||^2}{\Tr L_g^+} = \alpha_2(g) .
\] \end{proof}
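Lemma~\ref{lem_a2_der_log_a1} is also easy to verify numerically with finite differences (an illustrative sketch; the helper names and the small test graph are ours):

```python
import numpy as np

def laplacian(g, edges, n):
    """Weighted graph Laplacian L_g = B diag(g) B^T."""
    L = np.zeros((n, n))
    for w, (u, v) in zip(g, edges):
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

# Direct alpha_2(g) = max_l ||L_g^+ b_l||^2 / Tr L_g^+ vs. the sup-norm of the
# numerical gradient of -log(alpha_1(g)); the lemma says they coincide.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
m = len(edges)
rng = np.random.default_rng(0)
g = rng.random(m); g /= g.sum()

Lp = np.linalg.pinv(laplacian(g, edges, n))
b = np.zeros((m, n))
for l, (u, v) in enumerate(edges):
    b[l, u], b[l, v] = 1.0, -1.0
alpha2 = max(np.linalg.norm(Lp @ b[l]) ** 2 for l in range(m)) / np.trace(Lp)

# log(alpha_1) and log(Tr L_g^+) differ by a constant, so gradients agree
f = lambda gg: np.log(np.trace(np.linalg.pinv(laplacian(gg, edges, n))))
eps, grad = 1e-6, np.zeros(m)
for l in range(m):
    gp, gm = g.copy(), g.copy()
    gp[l] += eps; gm[l] -= eps
    grad[l] = (f(gp) - f(gm)) / (2 * eps)
assert abs(alpha2 - np.max(-grad)) < 1e-4
```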
We can also give a more ``combinatorial'' interpretation of $\alpha_1$: \begin{lemma}\label{lem_apx_diam_UB}
\( \alpha_1 \leq D \), where $D$ is the diameter of the graph. \end{lemma} \begin{proof}
We know that at $g_{\ell w}$, the effective resistance on any edge is $n-1$.
Let SP($i,j$) be the shortest path between $i,j$. We can bound $R_{ij}^{\ell w} = R_{ij}(g_{\ell w})$ using the triangle inequality (see equation~\eqref{equ_ER_tri_ineq}):
\[
R_{ij}^{\ell w} \leq \sum_{l \in \text{SP}(i,j)} R_l^{\ell w} = \sum_{l \in \text{SP}(i,j)} (n-1) = (n-1)|\text{SP}(i,j)| \leq (n-1)D ,
\]
where $D$ is the diameter of the graph. Thus we get that,
\[
\mathcal{K}_G^{\ell w} = \sum_{i<j} R_{ij}^{\ell w} \leq \sum_{i<j} (n-1)D = {n \choose 2} (n-1)D = \frac{n(n-1)^2}{2} D ,
\]
and,
\[
\alpha_1 = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} \leq \frac{2}{n(n-1)^2} \cdot \frac{n(n-1)^2}{2} D = D
\] \end{proof}
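For intuition, this bound is quite loose on a path, where (per claim~\ref{clm_LW_tree}) the LW weights of a tree are uniform: a short calculation gives $\alpha_1 = (n+1)/3$ while $D = n-1$. A minimal numerical sketch (ours, not the paper's code):

```python
import numpy as np

# alpha_1 vs. the diameter bound on a path with uniform weights (for trees the
# text claims g_lw is uniform). Here alpha_1 = (n+1)/3 <= D = n-1.
n = 20
L = np.zeros((n, n))
w = 1.0 / (n - 1)                     # uniform feasible weights on the n-1 edges
for u in range(n - 1):
    L[u, u] += w; L[u + 1, u + 1] += w
    L[u, u + 1] -= w; L[u + 1, u] -= w
alpha1 = 2 / (n - 1) ** 2 * np.trace(np.linalg.pinv(L))
assert abs(alpha1 - (n + 1) / 3) < 1e-8
assert alpha1 <= n - 1                # the diameter bound of the lemma
```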
In contrast, $\alpha_{dual}$ has a close connection to the \emph{condition number} of the Laplacian: \begin{lemma}\label{lem_a2_cond_num_UB}
\( \alpha_{dual} \leq \kappa(L_{lw}) = \frac{\lambda_{n}(L_{lw})}{\lambda_{2}(L_{lw})}\) \end{lemma} \begin{proof}
By the Courant--Fischer principle,
\[
\lambda_{n}(L_{lw}^+) = \underset{x \in \mathbb{R}^n}{\max} \left\{ \frac{x^T L_{lw}^+ x}{x^T x} \right\}
\]
In particular, taking $x = (L_{lw}^+)^{1/2} b_l$ (remember that $L_g^+$ is a symmetric PSD matrix) gives us,
\[
\frac{b_l^T L_{lw}^+ L_{lw}^+ b_l}{b_l^T L_{lw}^+ b_l} \leq \lambda_{n}(L_{lw}^+)
\]
i.e,
\[
||L_{lw}^+ b_l ||^2 = b_l^T L_{lw}^+ L_{lw}^+ b_l \leq \lambda_{n}(L_{lw}^+) \cdot b_l^T L_{lw}^+ b_l = (n-1)\lambda_{n}(L_{lw}^+).
\]
In addition, remember that \( \min(x_1,...,x_k) \leq AM(x_1,...x_k)\). Using this for the eigenvalues of $L_{lw}^+$, together with the fact that the trace is the sum of the eigenvalues, we get that
\[
\lambda_{2}(L_{lw}^+) \leq \frac{1}{n-1} \Tr L_{lw}^+
\]
Thus,
\[
\alpha_2 = \max_l \frac{||L_{lw}^+ b_l||^2}{\Tr L_{lw}^+} \leq \max_l \frac{(n-1)\lambda_{n}(L_{lw}^+)}{\Tr L_{lw}^+} \leq \frac{(n-1)\lambda_{n}(L_{lw}^+)}{(n-1)\lambda_{2}(L_{lw}^+)} = \frac{\lambda_{n}(L_{lw}^+)}{\lambda_{2}(L_{lw}^+)} = \frac{\lambda_{n}(L_{lw})}{\lambda_{2}(L_{lw})} = \kappa(L_{lw})
\]
where we used the fact that the eigenvalues of $L^+$ are the reciprocals of the nonzero eigenvalues of $L$. \end{proof}
As promised, we use $\alpha_{dual}$ to derive the approximation ratio for trees: \begin{lemma}\label{lem_a2_bowtie}
For the Bowtie graph, $\mathcal{B}_{n,n,n}$, $\alpha_{dual} \approx 3.12$ \end{lemma}
\begin{proof}
Recall that for trees
\[
\mathcal{K}_T(g) = \sum_l \frac{c_l}{g_l} ,
\]
and,
\[
\frac{\partial \mathcal{K}(g)}{\partial g_l} = -\frac{c_l}{g_l^2}
\]
So, we can write $\alpha_{dual}$, for trees, as
\begin{flalign*}
&&\alpha_{dual} = \underset{l}{\max} \left( \left. -\frac{\partial \mathcal{K}(g)}{\partial g_l}\Big/\mathcal{K}(g) \right|_{g_{\ell w}} \right) = \underset{l}{\max} \frac{m^2 c_l}{m \sum_k c_k} = \frac{m \cdot c_{\max}}{\sum_k c_k} &&\text{$g_{\ell w} = g_{uni}$, see claim~\ref{clm_LW_tree}}
\end{flalign*}
Clearly, $c_{max} \leq (3n/2)^2$ (the middle edge). As for the denominator, an easy calculation gives us,
\begin{align*}
\sum_k c_k &= \underbrace{2n\cdot m}_\text{$2n$ leaf edges} + \underbrace{\sum_{k=n+1}^{2n-1} k(3n-k)}_\text{path edges}\\
&= 2n\cdot m + \sum_{k=1}^{2n-1} k(3n-k) - \sum_{k=1}^{n} k(3n-k) \approx 2n\cdot m + \frac{10}{3}n^3 - \frac{7}{6}n^3 = \frac{13}{6}n^3 + 6n^2.
\end{align*}
Thus, we get that
\[
\alpha_{dual} (\mathcal{B}_{n,n,n}) \leq \underbrace{\frac{m(3n/2)^2}{(13/6)n^3} \leq \frac{27/4}{13/6}}_{m = 3n-1} \approx 3.12
\] \end{proof}
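The lemma's bound is easy to confirm numerically from the congestion formula $\alpha_{dual} = m \cdot c_{\max}/\sum_k c_k$ (an illustrative sketch of ours):

```python
# Numerical check for the Bowtie tree B_{n,n,n}: alpha_dual = m*c_max/sum(c),
# with 2n leaf edges of congestion m and path edges of congestion
# (n+i)(3n-(n+i)), i = 1..n-1.
n = 2000
N, m = 3 * n, 3 * n - 1               # vertices and edges of B_{n,n,n}
cong = [m] * (2 * n)                  # leaf edges
cong += [(n + i) * (N - (n + i)) for i in range(1, n)]
alpha_dual = m * max(cong) / sum(cong)
print(round(alpha_dual, 2))           # → 3.11 for large n
assert alpha_dual < 3.12
```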
We should note that this is only a UB, and experimental results show a slightly better ratio (see ahead).\\
While we have not managed to prove our conjecture for general graphs, we can use the above formulas to establish it for certain families. For instance, for low-diameter graphs, lemma~\ref{lem_apx_diam_UB} proves our conjecture. In addition, we know that good expander graphs have a low condition number \cite{spielman11}, so lemma~\ref{lem_a2_cond_num_UB} proves the conjecture for them as well. For other graphs, we provide strong evidence for our conjecture using various simulations; more on that below.
\section{Experimental Results}\label{sec_experiments}
\paragraph{Setup.} The simulation computes the approximation ratio using $\alpha_{min}$, thus matching \Cref{conj_a_min}. All random-graph results are taken to be the maximum over $100$ runs. We compute the LW of the graph using \Cref{alg_LW_computation_ccly} with $\epsilon = 0.01$. Whenever the graph is not connected or simple, we remove all self-loops and take its largest connected component (this is just a technicality). We simulate our conjecture for several elementary graph families, e.g., $k$-regular graphs, lollipop graphs, and grids.
\Cref{tab_dataset_elem} summarizes the (interesting) graphs we checked, and the approximation ratio of these graphs.
\Cref{fig_simulation_charts} shows an asymptotic behavior of regular graphs and lollipop.
\begin{savenotes} \begin{table*}[ht!]\small \normalfont \footnotesize
\setlength{\tabcolsep}{6pt}
\centering
\begin{tabular}{rrn{9}{0}n{9}{0}n{9}{0}r}
\toprule
\multicolumn{4}{c}{{\bfseries Dataset}}& \multicolumn{1}{c}{}\\
{\bfseries type} & {\bfseries graph} & \multicolumn{1}{r}{\bfseries nodes} & \multicolumn{1}{r}{\bfseries edges} & \multicolumn{1}{c}{\bfseries $\alpha_{min}(G)$} \\
\midrule
\bfseries random, high diameter, $d$-regular & $3$-regular & 400 & 600 & $1 \period 55$ \\
& $4$-regular & 400 & 800 & $1\period 17$ \\
& $5$-regular & 400 & 1000 & $1\period 11$ \\
& $6$-regular & 400 & 1200 & $1\period 08$ \\
\midrule
\bfseries small-world graph & Watts–Strogatz small-world graph\footnote{with parameters -- $k=4$, $p=2/3$} & 400 & 800 & $1\period 64$ \\
\midrule
\bfseries grid & balanced grid\footnote{$2$-dimensional square} & 400 & 760 & $1\period 35$ \\
& 10x-grid\footnote{rectangle with width size $10$} & 400 & 750 & $1 \period 94$ \\
\midrule
\bfseries expanders & Margulis-Gabber-Galil graph & 400 & 1520 & $1\period 06$ \\
& chordal-cycle graph & 400 & 1196 & $1\period 64$ \\
\midrule
\bfseries dense graphs & (400,400)--lollipop & 800 & 80200 & $3\period 03$ \\
\midrule
\bfseries trees & Bowtie & 3000 & 2999 & $2\period 5$ \\
\midrule
\bfseries real-world graphs\footnote{graphs taken from \cite{MSJ12}} & \textsc{Yeast} & 2224 & 6609 & $2\period 23$ \\
& \textsc{Stif} & 17720 & 31799 & $4\period 08$ \\
& \textsc{royal} & 2939 & 4755 & $8\period 96$ \\
\bottomrule
\end{tabular}
\caption{Summary of approximation bounds for elementary graphs.
\label{tab_dataset_elem}} \end{table*} \end{savenotes}
\begin{figure}
\caption{$\alpha_{min}$ of high diameter regular graphs}
\label{fig_simulation_charts_regular}
\caption{$\alpha_{min}$ of lollipop graph up to $800$ vertices}
\label{fig_simulation_charts_lolipop}
\caption{Asymptotic behavior of elementary graphs}
\label{fig_simulation_charts}
\end{figure}
\iffalse \begin{table*}[ht!]\small
\setlength{\tabcolsep}{6pt}
\centering
\begin{tabular}{rrn{9}{0}n{9}{0}n{9}{0}}
\toprule
\multicolumn{4}{c}{{\bfseries Dataset}}& \multicolumn{1}{c}{\bfseries Approximation}\\
{\bfseries type} & {\bfseries name} & \multicolumn{1}{r}{\bfseries nodes} & \multicolumn{1}{r}{\bfseries edges} & \multicolumn{1}{c}{\bfseries ratio UB} \\
\midrule
\bfseries infrastructure & \textsc{Ca} & 1965206 & 2766607 & 5 \\
& \textsc{Bucharest} & 189732 & 223143 & 21 \\
& \textsc{HongKong} & 321210 & 409038 & 32 \\
& \textsc{Paris} & 4325486 & 5395531 & 55 \\
& \textsc{London} & 2099114 & 2588544 & 57 \\
& \textsc{Stif} & 17720 & 31799 & 28 \\
\midrule
\bfseries social & \textsc{Facebook} & 4039 & 88234 & 142 \\
& \textsc{Stack-TCS} & 25232 & 69026 & 143 \\
& \textsc{Stack-Math} & 1132468 & 2853815 & 850 \\
& \textsc{LiveJournal} & 3997962 & 34681189 & 360 \\
\midrule
\bfseries web & \textsc{Wikipedia} & 252335 & 2427434 & 1007 \\
\midrule
\bfseries hierarchy & \textsc{Royal} & 3007 & 4862 & 11 \\
& \textsc{Math} & 101898 & 105131 & 56 \\
\midrule
\bfseries ontology & \textsc{Yago} & 2635315 & 5216293 & 836 \\
& \textsc{DbPedia} & 7697211 & 30622392 & 28 \\
\midrule
\bfseries database & \textsc{Tpch} & 1381291 & 79352127 & 699 \\
\midrule
\bfseries biology & \textsc{Yeast} & 2284 & 6646 & 54 \\
\bottomrule
\end{tabular}
\caption{Dataset and summary of approximation bounds from \cite{MSJ12}. \amit{ignore the current numbers, I dont have them yet} \label{tab_dataset_realworld}} \end{table*}
\fi
\iffalse \begin{remark}
For computing the LW of the graph, we used \Cref{alg_LW_computation_ccly} with the following implementation -- in order to compute the $l$'th LS, $b_l^T L_g^{-1} b_l$, at each iteration we first compute the cholesky decomposition of $L_g = B W B^T = U^T U $, using \textsc{Scipy-Sparse} engine (we employ here the fact that our graphs are sparse). Then, we use a triangular-system solver from \textsc{Scipy} to compute $Z = (U^T)^{-1} B$. Note that $Z^T Z = B^T L_g^+ B$, so we want $\diag(Z^T Z)$. But clearly the $i'th$ diagonal term is equals to the sum of $i$'th column in $Z$ squared, leaving us with the following procedure:
\begin{enumerate}
\item Find Cholesky decomposition -- $L_g = U^T U$ where $U$ is upper triangular.
\item Solve $(U^T)Z = B$ using fast traingular system solver.
\item Compute \textsc{ColSum}($Z \circ Z$) where $\circ$ is the Hadamard product, and \textsc{ColSum}($M$) is the sum along columns of $M$.
\end{enumerate} \end{remark} \fi
\section{Spectral Implications of $\ell_{\infty}$-LW Reweighting (Proof of \Cref{thm_spectral_LW})}\label{sec_LW_spectral}
In this section we study how $\ell_{\infty}$-LW reweighting affects the eigenvalue distribution of the reweighted graph Laplacian. We present several results in this direction, which are of both mathematical and algorithmic interest.
\subsection{Bounding the Algebraic Connectivity}
A classic result of \cite{Mohar91} asserts that for unweighted graphs the algebraic connectivity is bounded by \[ \lambda_2 \geq 4/(nD). \]
We generalize this bound to weighted graphs in the following way: \begin{lemma}
Given a graph $G(g)$, denote the maximal edge-ER by $R_{max}(g) = \max\limits_l R_l(g)$, and the maximal pairwise ER by $\Tilde{R_{max}}(g)$. Then,
\begin{flalign}
&&\lambda_2(L_g) \geq \frac{2}{n \Tilde{R_{max}}} \geq \frac{2}{nD \cdot R_{max}(g)} &&\\
\text{and}&& && \nonumber \\
&& \lambda_2(L_g) \geq \frac{4}{n \sum_l R_l(g)}
\end{flalign}
\end{lemma} \begin{proof}
We know that ER is a metric, so for any two vertices $i,j$,
\[
R_{ij}(g) \leq \sum_{l \in P_{ij}} R_l(g) \leq D\cdot R_{max}(g)
\]
where $P_{ij}$ is a shortest (length) path between $i$ and $j$. In particular, $\Tilde{R_{max}}(g) \leq D \cdot R_{max}(g)$. Thus
\[
\mathcal{K}_G(g) = \sum_{i<j} R_{ij}(g) \leq {n \choose 2} \cdot \Tilde{R_{max}} = \frac{n(n-1)}{2} \Tilde{R_{max}}(g)
\]
Using the fact that $\mathcal{K}_G(g) = n \Tr L_g^+$, we get
\[
\Tr L_g^+ \leq \frac{(n-1)}{2} \Tilde{R_{max}}(g)
\]
We know that $\lambda_{max}(L_g^+) \leq \Tr L_g^+ $, so
\[
\lambda_{max}(L_g^+) \leq \frac{(n-1)}{2}\Tilde{R_{max}}(g) \implies \lambda_2 \geq \frac{2}{(n-1)\Tilde{R_{max}}(g)} \geq \frac{2}{n\Tilde{R_{max}}(g)} \geq \frac{2}{n D \cdot R_{max}(g)}
\]
For the second bound, define the characteristic function of $P_{ij}$ as
\[
\chi_{ij}(e) = \twopartdef{1}{e \in P_{ij}}{0}{otherwise}
\]
Now, we can write
\begin{align*}
\mathcal{K}(g) &= \frac{1}{2}\sum_{i,j} R_{ij}(g) \\
&\leq \frac{1}{2} \sum_i \sum_j \sum_{l \in P_{ij}} R_l = \frac{1}{2} \sum_i \sum_j \sum_{l \in E} R_l \cdot \chi_{ij}(e) \\
&=\frac{1}{2} \sum_{l} R_l \sum_i \sum_j \chi_{ij}(e) \leq \frac{n^2}{4} \sum_{l} R_l
\end{align*}
where in the last line we use the fact that any edge $e$ belongs to at most $\frac{n^2}{4}$ of the paths $P_{ij}$ \cite{Mohar91}. Following the same arguments from before, we get that
\[
\lambda_{max}(L_g^+) \leq \frac{n}{4} \sum_{l} R_l \implies \lambda_2 \geq \frac{4}{n \sum_l R_l}
\]
\end{proof}
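Both lower bounds of the lemma can be checked numerically on a random weighted graph (an illustrative sketch; the test graph and tolerances are ours):

```python
import numpy as np

# Numerical check of both lower bounds on lambda_2(L_g):
#   lambda_2 >= 2/(n * max pairwise ER)  and  lambda_2 >= 4/(n * sum of edge ERs).
rng = np.random.default_rng(1)
n = 12
edges = [(i, i + 1) for i in range(n - 1)]                 # path backbone (connected)
edges += [(i, j) for i in range(n) for j in range(i + 2, n)
          if rng.random() < 0.2]                           # random chords
m = len(edges)
g = rng.random(m); g /= g.sum()
B = np.zeros((n, m))
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0
L = B @ np.diag(g) @ B.T
Lp = np.linalg.pinv(L)
R = np.array([[Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for j in range(n)]
              for i in range(n)])                          # pairwise ERs
Rl = [R[u, v] for (u, v) in edges]                         # edge ERs
lam2 = np.sort(np.linalg.eigvalsh(L))[1]
assert lam2 >= 2 / (n * R.max()) - 1e-9
assert lam2 >= 4 / (n * sum(Rl)) - 1e-9
```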
As a sanity check, let us see what we get for unweighted graphs. Indeed, for $g=\boldsymbol{1}$, we have \begin{align*}
&L_g = BB^T \implies L_g^+ = (BB^T)^+ \\
&\implies B L_g^+ B^T = B(BB^T)^+ B^T = BB^+ \\
&\implies R_l = \diag( B L_g^+ B^T)_l = \diag(BB^+)_l \leq \boldsymbol{1} \end{align*}
where the last line is justified because $BB^+$ is a projection matrix. So we get that $R_{max} \leq 1$, and thus (using the first bound) \[ \lambda_2 \geq \frac{2}{nD \cdot R_{max}} \geq \frac{2}{nD} \] which almost recovers the classic bound (and is better than the weaker version of $1/nD$). \\
\begin{remark}
We discussed here only the algebraic connectivity, but there are many other graph properties and bounds that apply only to unweighted graphs. The above technique, and possibly others, may be used to generalize those bounds to weighted graphs in terms of ER. \end{remark}
\iffalse For the next theorem we first need the following claim. \begin{claim}\label{clm_psd1}
For any graph $G$,
\[
B^T L_{lw}^+ B \preceq 2m\boldsymbol{I}_m
\]
and in general, if $m > \frac{2}{\beta} n$, for $\beta \leq 2$, then,
\[
B^T L_{lw}^+ B \preceq \beta \cdot m\boldsymbol{I}_m
\] \end{claim} The proof is a purely technical calculation, so we defer it to \cref{appendix_prf_clm_psd1}. We now use this claim to prove the following theorem.
\begin{theorem}[Theorem \ref{thm_spectral_LW}.II] \label{lem_lam2_LW_uni}
For any graph $G$,
\[
\frac{1}{2} \lambda_2(L_{uni}) \leq \lambda_2(L_{lw})
\]
and if $m > \frac{2}{\beta} n$, then,
\[
\frac{1}{\beta} \lambda_2(L_{uni}) \leq \lambda_2(L_{lw})
\]
where $L_{uni}$ is the Laplacian with uniform weight vector -- $g = (1/m) \boldsymbol{1}$. \end{theorem}
\begin{proof}
We first prove the first case. We do this in terms of $L^+$, i.e \(\lambda_{max}(L_{lw}^+) \leq 2 \cdot \lambda_{max}(L_{uni}^+)\).
Recall that from the Courant-Fisher principle \[ \lambda_{max}(L^+) = \max \left\{ \frac{x^T L^+ x}{x^T x} \mid x \in \mathbb{R}^n \right\} \]
However, we know that $\ker(L^+) = \dim \ker(B^T)$. In other words, for any $x \in \mathbb{R}^n$ the following holds \[ L^+ x = L^+ \Pi_B x \ , \ x^T L^+ = x^T \Pi_B^T L^+ \] where \( \Pi_B = B B^+ \), is the projection matrix onto $\span(B^T)$. So, \[ \lambda_{max}(L^+) = \max \left\{ \frac{x^T (B^T)^+ B^T L^+ B B^+ x}{x^T x} \mid x \in \mathbb{R}^n \right\} \]
Now, using claim~\ref{clm_psd1}, for any $y \in \mathbb{R}^m$ \[ \frac{y^T B^T L_{lw}^+ B y}{y^T y} \leq \lambda_{max}(B^T L_{lw}^+ B) \leq 2m \] Apply it for \(y = B^+ x\), we get
\begin{align*}
\lambda_{max}(L_{lw}^+) = \max_x \frac{x^T (B^T)^+ B^T L_{lw}^+ B B^+ x}{x^T x} &\leq \max_x \frac{x^T(B^T)^+(2m)B^+ x}{x^T x} = (2m) \cdot \max_x \frac{x^T(BB^T)^+ x}{x^T x} \\
&= 2 \cdot \max_x\frac{x^TL_{uni}^+ x}{x^T x} = 2\lambda_{max}(L_{uni}^+) \end{align*} Where we used the fact that pseudo-inverse is reciprocal multiplicative and commutes with Transpose, so \begin{flalign*}
&& &(B^T)^+ B^+ = (BB^T)^+ && \\
\text{and, } && &m(BB^T)^+ = (1/m \ BB^T)^+ = (BW_{uni}B^T)^+ = L_{uni}^+ && \end{flalign*}
The proof for the the general case is exactly the same using the second case from claim~\ref{clm_psd1}.
\end{proof}
\fi
\subsection{Minimizing the Mixing Time of a Graph}
Recall that the spectral gap of a graph is the smallest positive eigenvalue of the Laplacian. Since our graphs are assumed to be connected, this equals $\lambda_2(L_g) = \lambda_2$. Now, a smaller (faster) mixing time is equivalent to a higher spectral gap, so our goal is to maximize $\lambda_2$.
It is well known that the maximum of $n$ variables can be "smoothed" using the \emph{LogSumExp} (LSE) function \cite{Nesterov03}: \[ \LSE(x_1,...,x_n) = \log \left(\sum_i e^{x_i} \right) \] The LSE is a differentiable, convex function, and for any $n$ numbers it satisfies \[ x_{max} \leq \LSE(x_1,...,x_n) \leq x_{max} + \log(n) \] Thus, in an attempt to analyze $\lambda_2 = 1/ \lambda_n(L^+)$, it is natural to consider the LSE of the eigenvalues of $L^+$ (maximizing $\lambda_2$ is equivalent to minimizing $\lambda_n(L^+)$). Using the fact that \[ \Tr \exp(M) = \sum_i e^{\lambda_{i}(M)} \] we define the softmaxEV (SEV) function as: \[ \SEV(g) = \log(\Tr[\exp(L_g^+)]) \] To simplify calculations, exploiting the fact that \emph{log} is monotone, we will analyze: \[ f(g) = \Tr[ \exp(L_g^+)] \] Note that $f(g)$ is still convex, as the exponential preserves convexity. In \cref{appendix_LSE_opt_crit} we prove that the optimality criterion for the SEV is: \begin{align}\label{equ_sev_opt_crit}
\Tr [ e^{L_g^+}L_g^+ (I - b_l b_l^T L_g^+)] \geq 0. \end{align} In the next section, we show that when \eqref{equ_sev_opt_crit} is \emph{pointwise nonnegative} (i.e., all summands are PSD), the criterion \eqref{equ_sev_opt_crit} is tightly connected to the $\ell_{\infty}$-LW optimality condition, and hence provides a condition under which LW reweighting improves the mixing time of the graph $G$.
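The LSE sandwich above is easy to check directly (a minimal sketch; the sample vector is arbitrary):

```python
import numpy as np

# The LogSumExp sandwich: x_max <= LSE(x) <= x_max + log(n).
x = np.array([0.3, -1.2, 2.5, 2.4])
lse = np.log(np.exp(x).sum())
assert x.max() <= lse <= x.max() + np.log(len(x))
```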
\paragraph{A sufficient condition for optimality} Note that equation~\eqref{equ_sev_opt_crit} is easily satisfied if \(\mathbf{I} - b_l b_l^T L_g^+\) is a PSD matrix. Moreover, the spectrum of this matrix is quite elegant: \(b_l b_l^T L_g^+\) is a rank-one matrix whose single non-zero eigenvalue equals $R_l(g)$ (with eigenvector $b_l$). So the spectrum of \(\mathbf{I} - b_l b_l^T L_g^+ \) is simply \[ \underbrace{1,\dots,1}_{n-1} , \ 1 - R_l(g). \] Thus, a sufficient condition for $g$ to be optimal is that the effective resistance of each edge is at most $1$. Note that this condition resembles the normalized LW condition, where the ER of each edge is (at most) $n-1$.\\ Unfortunately, no feasible solution satisfies this condition. To see why, note that for any feasible $g$ (equation~\eqref{equ_logdet_grad_g_prod}) \[
-\nabla \logdet(g)^T \cdot g = \sum_l R_l(g) \cdot g_l = n-1. \] But if some feasible $g'$ satisfied $R_l(g') \leq 1$ for every edge, then \[ n-1 = \sum_l R_l(g')g'_l \leq \sum_l g'_l = 1, \] a contradiction\footnote{The case $n \leq 2$ is degenerate.}. In conclusion, there is no feasible solution for which \(I - b_l b_l^T L_g^+\) is PSD for every $l$. However, this does not mean that no optimal solution exists -- the trace can be positive even if the matrix is not PSD. Unfortunately, we do not know how to derive a closed-form expression for the optimal solution beyond the above condition.
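The identity $\sum_l R_l(g)\, g_l = n-1$ invoked above is also easy to verify numerically. The following sketch (variable names are ours, for illustration only) builds the incidence matrix of a small connected graph, draws a random feasible $g$, and evaluates the sum:

```python
import numpy as np

# Small connected graph on n = 4 vertices; columns of B are the b_l = e_u - e_v.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n, m = 4, len(edges)
B = np.zeros((n, m))
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0

rng = np.random.default_rng(0)
g = rng.random(m)
g /= g.sum()                                   # feasible: g > 0, sum = 1

L_plus = np.linalg.pinv(B @ np.diag(g) @ B.T)  # L_g^+
R = np.array([B[:, l] @ L_plus @ B[:, l] for l in range(m)])  # R_l(g)

print(R @ g)  # = n - 1 = 3
```

Since $\sum_l g_l R_l(g) = \Tr(L_g^+ L_g) = \rank(L_g)$, the printed value is $n-1$ for any connected graph and any feasible $g$.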
\paragraph{Relative optimality} While it is not clear what the optimal solution for the SEV function is, we can use convexity to compare two candidate solutions, i.e., to prove statements of the form ``solution 1 is better than solution 2''. Formally, for a convex function $f$,
\[ \nabla f(x)^T(y-x) \geq 0 \implies f(x) \leq f(y). \]
For example, taking $x=g_{lw}=\wh{g} \ , \ y=g_{uni}=(1/m)\boldsymbol{1}$, we get the following condition \[ \nabla f(\wh{g})^T((1/m)\boldsymbol{1} - \wh{g}) = (1/m)\sum_l \frac{\partial f}{\partial g_l} - \nabla f(\wh{g})^T \wh{g} \geq 0 \]
Using the derived formulas from \cref{appendix_LSE_opt_crit}, this takes the form \[ -(1/m)\sum_l \Tr [e^{L_{lw}^+} L_{lw}^+ b_l b_l^T L_{lw}^+] +\Tr [ e^{L_{lw}^+} L_{lw}^+] = \Tr [e^{L_{lw}^+} L_{lw}^+ (\mathbf{I} - \sum_l (1/m)b_l b_l^T L_{lw}^+)] = \Tr [e^{L_{lw}^+} L_{lw}^+ (\mathbf{I} - L_{uni} L_{lw}^+)] \geq 0 \]
\begin{remark} The last condition is not a trivial one. To see why, note that if $\mathbf{I} - L_{uni} L_{lw}^+$ is PSD, then the condition is satisfied. It is easy to see that \[ \mathbf{I} - L_{uni} L_{lw}^+ \succeq 0 \iff L_{uni} L_{lw}^+ \preceq \mathbf{I} \iff (1/m) B^T L_{lw}^+ B \preceq \mathbf{I} \iff B^T L_{lw}^+ B \preceq m\mathbf{I} . \] Multiplying both sides by $W^{1/2}$, where $W = 1/(n-1) \cdot \diag(w_\infty)$, we get that \[
W^{1/2} B^T (B W B^T)^+ B W^{1/2} = \Pi_{W^{1/2} B^T} \preceq m W. \] It is known that the eigenvalues of a projection matrix are either $0$ or $1$. On the other hand, the $i$'th eigenvalue of $m W$ is clearly $m (w_\infty)_i /(n-1)$. In other words, if $(w_\infty)_i \geq (n-1)/m$ for every $i$, then we can conclude that the mixing time of the reweighted graph is faster than that of the unweighted graph. Unfortunately, this can never happen -- remember that the sum of the LW is $\rank(B)=n-1$, so if $(w_\infty)_{\min} > (n-1)/m$ we get \[ n-1 = \sum_i (w_\infty)_i \geq m \cdot (w_\infty)_{\min} > m \cdot \frac{n-1}{m} = n-1, \] a contradiction, unless $g_{\ell w} = g_{uni}$. We emphasize that this does not mean that LW do not improve the mixing time, since $\mathbf{I} - L_{uni} L_{lw}^+ \succeq 0$ is only a sufficient condition, and simulations indeed show that LW improve the mixing time. However, we do not have any closed-form condition for this improvement. \end{remark}
\subsection{Optimal Spectrally-Thin Trees}
Our final application of LW is to spectrally-thin trees. A spanning tree $T$ of $G$ is $\gamma$-spectrally-thin if $L_T \preceq \gamma L_G$. In \cite{AGM10} it was shown that there is a deep connection between Asymmetric TSP (ATSP) and spectrally-thin trees: a tree of low spectral thinness can be combined with the fractional solution of ATSP to find an approximate solution. This has been the key ingredient in \cite{AGM10} and follow-up work. In \cite{HO14} the authors showed that for a connected graph $G = \langle V,E \rangle$, any spanning tree $T$ of $G$ has spectral thinness of at least $O(\max\limits_{e \in T} R_G(e))$ (i.e., the maximal edge-ER of $T$ in $G$), and that a tree achieving this lower bound can be found in polynomial time. Now, we know (\cref{sec_new_char_LS_ER}) that LW minimize the latter quantity. So, it is natural to ask how LW can be used to find the optimal spectrally-thin tree of a graph.
Following the work of \cite{HO14}, we prove the following lemma: \begin{lemma}\label{lem_spectral_thin_tree}
For any connected graph $G = \langle V,E \rangle$ there is a weighted spanning tree $T_g$ such that $T_g$ has spectral thinness of $((n-1)/m) \cdot O(\log n / \log \log n)$. \end{lemma}
The proof is very similar to that of \cite{HO14} and makes use of the following lemma: \begin{lemma}\label{lem_independent_set}
Let $w_1,\dots,w_m \in \mathbb{R}^n$ be unit-norm vectors, and let $p_1,\dots,p_m$ be a probability distribution on these vectors such that the covariance matrix satisfies $\sum_i p_i w_i w_i^T = (1/n) \mathbf{I}$. Then there is a polynomial-time algorithm that computes a subset $S \subseteq [m]$ such that $\{ w_i \mid i \in S \}$ forms a basis of $\mathbb{R}^n$, and $|| \sum_{i \in S} w_i w_i^T || \leq \alpha$, where $\alpha = O(\log n / \log \log n)$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem_spectral_thin_tree}] \normalfont Let $w_\infty$ be the LW of the graph. Define $g_0 = \frac{n-1}{m}\boldsymbol{1}$ and $\overline{w} = \frac{n-1}{m}w_\infty$. For each $e_l \in E$, define $w_l = \sqrt{(g_0)_l} \cdot (L_{\overline{w}})^{+/2} b_l$ and $\boldsymbol{p} = \frac{1}{n-1}w_\infty$. Indeed, by~\eqref{equ_LS_facts} and the multiplicative property of the pseudo-inverse we have \begin{align*}
\sum_l p_l = 1 \ , \ ||w_l||^2 = (g_0)_l \cdot \frac{m}{n-1} b_l^T (B W_\infty B^T)^+ b_l = \frac{n-1}{m} \cdot \frac{m}{n-1} = 1 \end{align*} In addition, \begin{align*}
\sum_{e_l \in E} p_l w_l w_l^T &= \frac{1}{n-1} (L_{\overline{w}})^{+/2} \left (\sum_l (w_\infty)_l (g_0)_l b_l b_l^T \right) (L_{\overline{w}})^{+/2} \\
&= \frac{1}{n-1} (L_{\overline{w}})^{+/2} \left( \sum_l \overline{w}_l b_l b_l^T \right) (L_{\overline{w}})^{+/2} \\
&= \frac{1}{n-1} \mathbf{I}_{\text{im} L_G} \end{align*} So, we can view the vectors $\{w_l \mid l \in E \}$ as $(n-1)$-dimensional vectors (in the linear span of $B$) and apply \cref{lem_independent_set} to get a set $T \subset E$ of size $n-1$ ($T$ forms a basis of $\mathbb{R}^{n-1}$) such that $\{ w_e \mid e \in T\}$ is linearly independent and \[ \sum_{e_l \in T} w_l w_l^T \preceq O(\log n/\log \log n) \mathbf{I}_{\text{im} L_G}. \] The first two conditions imply that $T$ induces a spanning tree of $G$. The last condition then gives us (after a simple rearrangement) \[ \sum_{e_l \in T} (g_0)_l b_l b_l^T = L_{T_g} \preceq O(\log n/\log \log n) L_{\overline{w}}. \] Now, the LW are at most $1$, so $\overline{w} = \frac{n-1}{m}w_\infty \leq \frac{n-1}{m} \boldsymbol{1}$, implying that \[ L_{\overline{w}} \preceq \frac{n-1}{m} BB^T = \frac{n-1}{m} L_G. \] Thus we get that \[ L_{T_g} \preceq ((n-1)/m) \cdot O(\log n / \log \log n) L_G. \] \end{proof}
\paragraph{The ATSP Angle} It would be interesting to understand if and how \emph{weighted STTs} can be used for ATSP rounding schemes, \`a la \cite{AGM10}. Nevertheless, we emphasize two advantages of LW-weighted STTs. First, in contrast to the unweighted case \cite{AO15}, we are guaranteed to find an $O((n-1)/m)$-STT \emph{regardless} of the original graph, which is optimal -- to see why, recall that LW is the optimal minimizer of the maximal edge-ER. Taking $g_{\ell w}$ and $g_{uni}$, both normalized, we have \[ R_{max}(g_{\ell w}) = n-1 \leq R_{max}(g_{uni}) = m \cdot R_{max}(\boldsymbol{1}) \implies R_{max}(\boldsymbol{1}) \geq \frac{n-1}{m}. \] Second, the unweighted STT guaranteed by \cite{AO15} can at best achieve this bound with total weight $n-1$. By contrast, the total weight of our weighted STT is $\frac{(n-1)^2}{m} \leq n-1$ (and for most graphs much smaller). In this sense, the tree from Lemma~\ref{lem_spectral_thin_tree} is always ``cheaper''.
\section{Computing Lewis Weights}\label{sec_computing_LW}
In this section we outline recent accelerated methods for computing the $\ell_{\infty}$-LW of a matrix via \emph{repeated leverage-score} computations (in our case, Laplacian linear systems). The first method, due to \cite{ccly} (building on \cite{CP14}), runs in input-sparsity time and is very practical, but provides only a low-accuracy solution. The second method, due to \cite{flps21}, provides a \emph{high-accuracy} algorithm for $\ell_p$-LW, using $\wt{O}(p^3 \log (1/\epsilon))$ leverage-score computations (Laplacian linear systems in our case). Fortunately, as we show below, low accuracy is sufficient for the ERMP application, hence we focus on the algorithm of \cite{ccly}. However, we remark that in most optimization applications,
computing $\ell_p$-LW for $p=n^{o(1)}$ suffices (see \cite{flps21,LS14}), in which case the \cite{flps21} algorithm yields a high-accuracy $O(m^{1+o(1)}\log(1/\epsilon))$-time algorithm for Laplacians.
\subsection{Computing $\ell_{\infty}$-Lewis Weights to Low-Precision \cite{ccly}}
\begin{algorithm} \caption{Computing $\ell_{\infty}$-Lewis weight} \label{alg_LW_computation_ccly}
\hspace*{\algorithmicindent}\textbf{Input:} A matrix $A \in \mathbb{R}^{m \times n}$ with rank $k$, and $T$, the number of iterations. \\ \hspace*{\algorithmicindent}\textbf{Result:} The $\ell_{\infty}$-LW, $w \in \mathbb{R}^m$. \\ \begin{algorithmic} \State Initialize: $w_l^{(1)} = \frac{k}{m}$ for $l=1, \dots ,m$. \For{$t = 1, \dots ,T-1$}
\State $w_l^{(t+1)} = w_l^{(t)} \cdot a_l^T (A^T \diag(w^{(t)})A)^+ a_l$ ; for $l=1, \dots, m$ \Comment{// We can use here a Laplacian LS solver.}
\EndFor \State $(w)_i = \frac{1}{T} \sum\limits_{t=1}^T w_i^{(t)}$ for $i=1,\dots ,m$ \\ \Return $w$ \end{algorithmic} \end{algorithm}
Since we use \cite{ccly} in a black-box, we only give a high level overview of Algorithm \ref{alg_LW_computation_ccly}. The basic idea of \cite{ccly, CP14} is to use the observation that for (exact) $\ell_{\infty}$-LW, we have \begin{align}\label{equ_ccly_alg_fixed_equ}
w_i = \tau_i(W^{1/2} A). \end{align} \cite{ccly} use \eqref{equ_ccly_alg_fixed_equ} to define a fixed point iteration described in \Cref{alg_LW_computation_ccly}. In other words, the algorithm repeatedly computes the leverage scores of the weighted matrix, updating the weights according to the average of past iterations, until an (approximate) fixed-point is reached. The performance of this algorithm is as follows:
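As an illustration, the fixed-point iteration of \Cref{alg_LW_computation_ccly} can be sketched in a few lines of NumPy (a minimal dense sketch, not the input-sparsity implementation of \cite{ccly}; for Laplacians, the pseudo-inverse solve would be replaced by a fast Laplacian solver):

```python
import numpy as np

def lewis_weights_linf(A, T):
    """Fixed-point iteration for approximate l_inf Lewis weights:
    repeatedly compute leverage scores of W^{1/2} A and average the iterates."""
    m, n = A.shape
    k = np.linalg.matrix_rank(A)
    w = np.full(m, k / m)            # w^(1) = k/m
    avg = w.copy()
    for _ in range(T - 1):
        M = np.linalg.pinv(A.T @ (w[:, None] * A))   # (A^T diag(w) A)^+
        w = w * np.array([A[l] @ M @ A[l] for l in range(m)])
        avg += w
    return avg / T                   # average of w^(1), ..., w^(T)
```

On the incidence matrix of a tree, every iterate stays at $w \equiv \boldsymbol{1}$: each edge's leverage score equals its effective resistance, which is $1$ on a tree.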
Let $\epsilon >0$ and denote by $\wt{w}$ the output of the algorithm for input $A \in \mathbb{R}^{m \times n}$. Our goal is to compute the approximate LW, $\wt{w}$, such that \begin{align*}
\wt{w} \approx_\epsilon w_\infty \refstepcounter{equation}\tag{\theequation} \label{equ_mul_apx_LW} \end{align*} where $a \approx_\epsilon b$ iff $a = (1 \pm \epsilon)b$. However, the approximation guarantee of \cref{alg_LW_computation_ccly} is in the \emph{optimality} sense, meaning \begin{align*} \tau_i(\wt{W}^{1/2} A) \approx_\epsilon \wt{w}_i , \ \ \forall i \in [m]. \refstepcounter{equation}\tag{\theequation} \label{equ_opt_apx_LW} \end{align*} Fortunately, \cite{flps21} recently showed that $\epsilon$-approximate optimal LS \eqref{equ_opt_apx_LW} imply $\epsilon$-approximate LW \eqref{equ_mul_apx_LW}: \begin{theorem}
(LS-apx $\Rightarrow$ LW-apx, \cite{flps21}). For all $\epsilon \in (0, 1)$, Algorithm~\ref{alg_LW_computation_ccly} outputs a ($1+\epsilon$)-approximation of LW with $T = O(\epsilon^{-1} \log \frac{m}{n})$ iterations. \end{theorem}
\paragraph{Approximate Lewis Weights:} Since the output of \Cref{alg_LW_computation_ccly} satisfies $\wt{w} \approx_\epsilon w_\infty$, this implies \[ \Tilde{L_{w}} = \sum_l (\wt{g_{w}})_l b_l b_l^T \approx_\epsilon \sum_l (g_{\ell w})_l b_l b_l^T = L_{\ell w}, \] where $\wt{g_{w}}, g_{\ell w}$ are the weight vectors defined by $\wt{w}, w_\infty$, respectively. Thus, \[ \Tr \Tilde{L_{w}} \approx_\epsilon \Tr L_{\ell w}. \] In other words, computing the approximate LW guarantees a $(1+\epsilon)$ factor for the trace of $L_{\ell w}$. Since our objective function is solely the trace, approximate LW give us a $(1+\epsilon)$ factor in our approximation ratio. Since we are only shooting for an $O(1)$-approximation for ERMP, the latter is sufficient, so \Cref{alg_LW_computation_ccly} can be used to produce the approximate-ERMP weights claimed in Theorems \ref{thm_LW_apx_ERMP_trees} and \ref{thm_UB_general_graphs}. Note that this also applies to the rest of the results, as the proximity notion in~\eqref{equ_mul_apx_LW} also implies spectral approximation.
\section{ An $\Omega(n/\log^2 n)$ Separation of A-vs-D Design for General PSD Matrices}
\label{sec_optimal_design}
We finish by justifying our focus on Laplacians. We show that, for general PSD matrices, the problems of A- and D-optimal design are in fact very different, in the sense that the $\ell_{\infty}$-LW cannot provide a better than $\Omega(n/\log^2 n)$ approximation to the A-optimal design problem.
The motivation for this separation is a reformulation of A- and D-optimal design as a \emph{Harmonic mean} (HM) vs. \emph{Geometric mean} (GM) minimization problem. We show an unbounded gap between the HM and GM of a general sequence (up to some constraint; more on that later). We then construct a simple experiment matrix (i.e., a pointset in space) for which the D-optimal solution (given by the $\ell_{\infty}$-LW) is an $\Omega(n/\log^2 n)$ multiplicative approximation to the A-optimal solution.
\paragraph{A-optimal design as Harmonic mean optimization:}
We begin with A-optimal design. Given an experiment matrix, $V = (v_1, \dots, v_m) \ \in \mathbb{R}^{n \times m}$, the dual problem of A-optimal design is
\begin{equation*} \begin{array}{ll@{}ll} \text{maximize} & \Tr(W^{1/2})^2&\\ \text{subject to}& v_i^T W v_i \leq 1 \ , \ i=1,\dots,m \end{array} \end{equation*} with $W \in \boldsymbol{S}^n_+$ (see also~\eqref{equ_ERMP_dual_prob}). Note that the constraints define an ellipsoid $\mathcal{E}(W) = \{ x \in \mathbb{R}^n \mid x^T W x \leq 1 \}$ that encloses the pointset $\{v_i\}_{i=1}^m$. We can write our objective function (neglecting the square, since it does not change the optimal solution, only its value) as (see~\eqref{equ_ellipsoid_semiaxis_def}): \[ \Tr \ W^{1/2} = \sum_{i=1}^n \lambda_i(W)^{1/2} = \sum_{i=1}^n (\sigma_i)^{-1} = \frac{n}{HM(\boldsymbol{\sigma})} , \] where $\boldsymbol{\sigma}$ is the vector of semiaxis lengths of the ellipsoid $\mathcal{E}(W)$.
In other words, the A-optimal design problem attempts to minimize the \emph{Harmonic mean} of the semiaxis lengths of the enclosing ellipsoid $\mathcal{E}(W)$. Thus, A-optimal design is equivalent to the following problem
\begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & HM(\boldsymbol{\sigma}(W))&\\ \text{subject to}& v_i^T W v_i \leq 1 \ , \ i=1,\dots,m \end{array} \end{equation*}
\paragraph{D-optimal design as Geometric mean minimization} We can do the same for D-optimal design. The dual problem of D-optimal design is \begin{equation*} \begin{array}{ll@{}ll} \text{maximize} & \logdet(W) &\\ \text{subject to}& v_i^T W v_i \leq 1 \ , \ i=1,\dots,m \end{array} \end{equation*} with $W \in \boldsymbol{S}^n_+$. As before, we can write our objective function as \[
\log |W| = \log(\lambda_1(W) \cdots \lambda_n(W)) = \log((\sigma_1 \cdots \sigma_n)^{-2}), \] and we get that \[
\logdet \ W = \log |W| = -2n\log((\sigma_1 \cdots \sigma_n)^{1/n}) = -2n\log(GM(\boldsymbol{\sigma})), \] where we used the same notation as above.
In other words, maximizing the function $\logdet(W)$ is the same as minimizing the \emph{Geometric mean} of the semiaxis lengths of the enclosing ellipsoid $\mathcal{E}(W)$. Thus, D-optimal design is equivalent to the following problem
\begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & GM(\boldsymbol{\sigma}(W))&\\ \text{subject to}& v_i^T W v_i \leq 1 \ , \ i=1,\dots,m \end{array} \end{equation*}
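Both reformulations are elementary spectral identities and can be checked numerically (a sketch; recall that the semiaxis lengths are $\sigma_i = \lambda_i(W)^{-1/2}$, and all names below are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
W = M @ M.T + np.eye(5)            # a positive-definite matrix, n = 5
n = W.shape[0]

lam = np.linalg.eigvalsh(W)        # eigenvalues of W
sigma = lam ** -0.5                # semiaxis lengths of E(W)

hm = n / np.sum(1.0 / sigma)       # harmonic mean of sigma
gm = np.exp(np.mean(np.log(sigma)))  # geometric mean of sigma

# Tr W^{1/2} = n / HM(sigma)   and   logdet W = -2n log GM(sigma)
print(np.sum(np.sqrt(lam)), n / hm)
print(np.linalg.slogdet(W)[1], -2 * n * np.log(gm))
```

Each printed pair agrees up to floating-point error, matching the two objective rewritings above.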
\begin{remark}
The geometric interpretation of D-optimal design as the minimal-volume enclosing ellipsoid is also known as the outer John ellipsoid, and it is a well-studied problem. For further details see \cite{KT93,ccly,ZF20,todd16,Puk06}. \end{remark}
\subsection{HM and GM comparison}\label{sec_HM_GM_comp}
As we just saw, A-optimality and D-optimality correspond to the Harmonic and Geometric mean of the same quantity -- the semiaxis lengths of an enclosing ellipsoid. We know that for any vector $\boldsymbol{x} \in \mathbb{R}^n$, \(HM(\boldsymbol{x}) \leq GM(\boldsymbol{x})\), with equality achieved only at a uniform vector, and that both are continuous functions of $\boldsymbol{x}$. So it is natural to think that they are ``monotone'' in some sense. For example, one might think that for two vectors $\boldsymbol{x},\boldsymbol{x}' \in \mathbb{R}^n$, \[ GM(\boldsymbol{x}) \leq \alpha \ GM(\boldsymbol{x}') \iff HM(\boldsymbol{x}) \leq \alpha \ HM(\boldsymbol{x}') , \] for some constant factor $\alpha$. Note that such a property would make the two problems equivalent (up to constant factors). We show that this is not the case. The following technical claim shows that, from any sequence $\boldsymbol{x}$ of $n$ positive numbers, it is possible to construct a related sequence $\boldsymbol{x}'$ such that $GM(\boldsymbol{x}') = GM(\boldsymbol{x})$ but $HM(\boldsymbol{x}') \ll HM(\boldsymbol{x})$. We defer the proof to \cref{appendix_prf_clm_hm-gm}.
\begin{claim}\label{clm_HM_GM_gap}
Let $n \in \mathbb{N}$ and $\boldsymbol{x} \in \mathbb{R}^n_+$. For any $t \in (0,1)$, there exists $\boldsymbol{x}' \in \mathbb{R}^n_+$ such that $GM(\boldsymbol{x}')=GM(\boldsymbol{x})$ but $HM(\boldsymbol{x}') = t \cdot HM(\boldsymbol{x})$. \end{claim} Note that this claim is a bit stronger than what we need -- we do not even have to ``pay'' in the GM (although we do pay in the AM) in order to decrease the HM \emph{as much as we want}.
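A toy numeric illustration of the mechanism behind the claim (our own simplified construction, not the proof from \cref{appendix_prf_clm_hm-gm}): multiplying one entry by $s$ and dividing another by $s$ preserves the GM while collapsing the HM, and continuity in $s$ then yields any target factor $t$.

```python
import numpy as np

def hm(x):
    """Harmonic mean of a positive vector."""
    return len(x) / np.sum(1.0 / x)

def gm(x):
    """Geometric mean of a positive vector."""
    return np.exp(np.mean(np.log(x)))

x = np.array([1.0, 2.0, 3.0, 4.0])
s = 50.0
x2 = x.copy()
x2[0] *= s        # multiply one entry by s ...
x2[1] /= s        # ... divide another by s: product (hence GM) unchanged

print(gm(x), gm(x2))   # equal
print(hm(x), hm(x2))   # HM drops by more than a factor of 10
```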
\subsection{A Separation of A-Optimal and D-Optimal Design for General PSD Matrices}
As we just showed, the HM and GM are not approximately close for general sequences (i.e., there is an unbounded gap between the HM and GM). Since we can interpret A-optimal and D-optimal design as an ``HM vs. GM'' problem, this naturally raises the following claim: \begin{claim}\label{clm_A_D_gen_gap}
There exist $V \in \mathbb{R}^{n \times m}_+$ such that
\[
\Tr (Vg^*_D V^T)^{-1} \gg \Tr (Vg^*_AV^T)^{-1}
\]
where $g^*_D,g^*_A$ are the optimal solutions for D-optimal and A-optimal design, respectively. \end{claim}
While we could use Claim~\ref{clm_HM_GM_gap} to construct such a counterexample, we offer a much simpler instance with an $\Tilde{\Omega}(n)$ gap, which is more than enough to show a separation.
\begin{proof}
Let $V$ be the diagonal matrix with $[1,\dots,n]$ on the diagonal. We already mentioned that the optimal solution for D-optimal design is given by the LW of the experiment matrix $V$, i.e.,
\[
(g^*_D)_l = \left(w(V)_\infty \right)_l.
\]
By definition of LW, $g^*_D$ satisfies equation~\eqref{equ_inf_LW}:
\[
v_l^T (V^T W_\infty V)^{-1} v_l = 1 ,
\]
where $v_l$ is the $l$'th row of $V$. Using the fact that $V$ is a diagonal matrix whose $l$'th diagonal entry equals $l$, we can write the last equation as
\[
l \cdot (l^2 (w_\infty)_l)^{-1} \cdot l = 1 \implies (w_\infty)_l = 1 ,
\]
and with normalization we get
\[
g^*_D = (1/n) \boldsymbol{1}.
\]
The value of the A-optimal objective at $g^*_D$ equals
\[
\Tr (V g^*_D V^T)^{-1} = \sum_{i=1}^n (i^2/n)^{-1} = n \sum_{i=1}^n \frac{1}{i^2} \approx \frac{\pi^2}{6} n \approx 1.64n.
\]
Next, consider $(g_0)_i = 1/i$ (up to normalization). We know that
\[
\sum_{i=1}^n \frac{1}{i} \approx \log n,
\]
so $(g_0)_i \approx \frac{1}{i \cdot \log n}$. The value of the A-optimal objective at $g_0$ is
\[
\Tr (V g_0 V^T)^{-1} \approx \sum_{i=1}^n \left(\frac{i^2}{i \cdot \log n}\right)^{-1} = \log n \sum_{i=1}^n \frac{1}{i} \approx \log^2 n.
\]
Thus, we get that
\[
\Tr (V g^*_D V^T)^{-1} = \Theta(n) \gg \Tr (V g_0 V^T)^{-1} = \Theta(\log^2 n) \geq \Tr (V g^*_A V^T)^{-1}
\]
where the last inequality follows from the optimality of $g^*_A$.
Thus we can conclude that there is a natural gap between A-optimal and D-optimal design of \emph{at least} $\Omega(n/\log^2 n)$. \end{proof}
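The two values computed in the proof are easy to reproduce numerically (a sketch; \texttt{a\_value} is our name for the A-objective $\Tr(V\diag(g)V^T)^{-1}$ on the diagonal instance $V=\diag(1,\dots,n)$):

```python
import numpy as np

def a_value(g):
    """A-objective Tr (V diag(g) V^T)^{-1} for V = diag(1, ..., n)."""
    i = np.arange(1, len(g) + 1)
    return float(np.sum(1.0 / (g * i ** 2)))

n = 1000
g_D = np.full(n, 1.0 / n)            # D-optimal solution (the Lewis weights)
g_0 = 1.0 / np.arange(1, n + 1)
g_0 /= g_0.sum()                     # g_0 proportional to 1/i

print(a_value(g_D))   # ~ (pi^2/6) n, i.e. Theta(n)
print(a_value(g_0))   # ~ log^2 n
```

For $n = 1000$ the first value is roughly $1.64\cdot 10^3$ while the second is about $H_n^2 \approx 56$, exhibiting the $\Tilde{\Omega}(n)$ gap.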
\section{ ERMP Approximation via $\ell_{\infty}$-Lewis Weights }\label{sec_R_tot}
Recall that the Kirchhoff index of a graph, $\mathcal{K}_G^*$, is the optimal value of the ERMP problem \eqref{equ_ERMP_def}. Throughout this section, we denote by $\mathcal{K}_G^{\ell w}$ the value of the ERMP at $g_{\ell w}$. The approximation ratio obtained by the $\ell_{\infty}$-LW is defined as \begin{equation}\label{def_alpha_AD}
\alpha_{A,D} \coloneqq \frac{\mathcal{K}_G^{\ell w}}{\mathcal{K}_G^*}. \end{equation} Recall that $g_{\ell w}$ is the optimal solution for D-optimal design, so $\alpha_{A,D}$ exactly captures the gap between A- and D-optimal design over Laplacians. Since we are proposing the $\ell_{\infty}$-LW as an approximation algorithm for the ERMP, $\alpha_{A,D}$ is simply the \emph{approximation ratio} of our algorithm. \\ This section contains roughly two independent results. We first focus solely on \emph{trees}, thus proving \Cref{thm_LW_apx_ERMP_trees}. For trees, the ERMP takes a simpler form, and we use a designated technique to prove our theorem. The second result is an attempt to extend to general graphs. We give a more general analysis and show two different upper bounds on our approximation ratio. We conjecture that they yield an $O(\log n)$ upper bound, in particular implying \Cref{conj_LW_apx_ERMP}. We finish this section by deriving some formulas regarding these upper bounds.
\subsection{$\ell_{\infty}$-LW are $O(1)$-Approximation for Trees}
We divide this section into two parts. First, we rewrite the approximation ratio specifically for trees, using the formulas for the optimal weights derived by \cite{saberi}. Then we give the technical details of the proof of \Cref{thm_LW_apx_ERMP_trees}.\\
\noindent We first show that for trees $g_{\ell w} = g_{uni} = (1/m)\boldsymbol{1}$: \begin{claim}\label{clm_LW_tree}
For any tree $T$, we have $g_{\ell w} = g_{uni}$. \end{claim} \begin{proof}
For simplicity, let us look first at the non-normalized weights, $w_\infty = w_\infty(B^T)$. We know that $w_\infty \leq \boldsymbol{1}$ (equation~\eqref{equ_LS_facts}). Moreover, we have by~\eqref{equ_LS_facts} that
\begin{align*}
\sum_{l=1}^m (w_\infty)_l = \underbrace{n-1 = m}_\text{for trees}.
\end{align*}
Thus, having $m$ positive numbers, each at most $1$, that sum up to $m$, they are inevitably all equal to $1$. Hence
$(w_\infty)_l = 1 \ , \forall \ l=1 \dots m$, so normalizing by $n-1 = m$ we get $g_{\ell w}=g_{uni}$, as claimed.
\end{proof}
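Claim~\ref{clm_LW_tree} can also be seen through the fixed-point characterization of LW: on a tree, the ER of every edge is $1$, so $w \equiv \boldsymbol{1}$ is a fixed point. A small numeric sketch (on a star; variable names are ours):

```python
import numpy as np

# star on n = 5 vertices; columns of B are b_l = e_u - e_v
edges = [(0, i) for i in range(1, 5)]
n, m = 5, len(edges)
B = np.zeros((n, m))
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0

L_plus = np.linalg.pinv(B @ B.T)    # Laplacian pseudo-inverse, unit weights
R = np.array([B[:, l] @ L_plus @ B[:, l] for l in range(m)])
print(R)   # all 1: every edge's ER in a tree is 1, so w = 1 is the fixed point
```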
Next, we restate some key formulas from \cite{saberi} for the ERMP on trees. First, when the graph is a tree, for any two vertices $i,j$, the ER between them is \[ R_{ij}(g) = \sum_{e_l \in P(i,j)} \frac{1}{g_l} \]
This is clear from an electrical-network point of view\footnote{I.e., the ER of resistors in series is additive.}. In other words, the $l$'th edge contributes $(1/g_l)$ to all the paths that contain it. With this in mind, we define the \emph{congestion} of the $l$'th edge as follows:
\begin{definition}[congestion of an edge]\label{def_congestion} The congestion of an edge $e_l \in T$, denoted $c_l(T)$, is the number of paths in $T$ that contain it. \end{definition} It is easy to see that \[ c_l = n_l(n-n_l) , \] where $n_l$ is the number of nodes in the \emph{sub-tree} on one side of $e_l$ (this is symmetric in the two sides). Note that, by the concavity of the function $f(x) = x(n-x)$, the minimal congestion is achieved at the leaf edges of $T$ ($1 \cdot (n-1) = m$), whereas the maximal congestion is $\frac{n}{2}(n-\frac{n}{2}) = (n/2)^2$. More generally, \begin{align*} f(x) = x(n-x) < y(n-y) = f(y) \iff \min(x,n-x) < \min(y,n-y). \refstepcounter{equation}\tag{\theequation} \label{equ_cong_relation} \end{align*} With this definition and the above fact, we get that for any tree $T$ and any weight vector $g \in \mathbb{R}^m$, \[ \mathcal{K}_T(g) = \sum_{l=1}^m \frac{c_l}{g_l} \;\; . \] In addition, \cite{saberi} (Section 5.1) proved that, for trees, the optimal ERMP value is \begin{equation}\label{equ_opt_tree}
\mathcal{K}_T^* = \left(\sum_{l} c_l^{1/2} \right)^2, \end{equation} and that the closed-form expression for the optimal weights is \begin{equation}\label{equ_opt_g_trees}
g^*_l = (c_l/\mathcal{K}^*)^{1/2} = \frac{c_l^{1/2}}{\sum_{k} c_k^{1/2}} \ \ \ \ \forall l=1,\dots,m. \end{equation} Using the last expressions, it immediately follows that \begin{equation}\label{equ_cong_sum}
c_l = \mathcal{K}^* \cdot (g^*_l)^2 \implies \sum_l c_l = \mathcal{K}^* \cdot \sum_l (g^*_l)^2 = \mathcal{K}^* \cdot ||g^*||_2^2 . \end{equation} Lastly, our proof will use the fact that the optimal weights are at most $1/2$. Indeed, \begin{align*} g^*_{max} \leq \frac{c_{max}^{1/2}}{\min \{\sum_l c_l^{1/2} \}} \leq \frac{n/2}{m\sqrt{m}} \leq \frac{1}{\sqrt{m}} \ll \frac{1}{2}, \refstepcounter{equation}\tag{\theequation} \label{equ_g_star_UB} \end{align*} where we used the bounds from \eqref{equ_cong_relation}.\\
Using the fact that $g_{\ell w}= g_{uni}$ (claim~\ref{clm_LW_tree}), we can write $\mathcal{K}^{\ell w}_T$ as: \[ \mathcal{K}_T^{\ell w} =\sum_{l=1}^m \frac{c_l}{(g_{\ell w})_l} = \sum_{l=1}^m \frac{c_l}{1/m} = m\sum_{l} c_l \] Thus, for any tree $T_n$ of order $n$, we can write the approximation ratio in terms of the congestion as: \begin{equation}\label{equ_apx_ratio_trees}
\alpha_{A,D}(T_n) = \alpha(T_n) = \frac{m\sum\limits_{l} c_l}{\left(\sum\limits_{l} c_l^{1/2} \right)^2} \end{equation}
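Equation~\eqref{equ_apx_ratio_trees} makes $\alpha$ immediate to evaluate from the congestion vector. A short sketch (our helper \texttt{alpha\_tree}, names ours) on two extreme trees, the star and the path:

```python
import numpy as np

def alpha_tree(c):
    """alpha(T) = m * sum(c) / (sum sqrt(c))^2, from the congestion vector c."""
    c = np.asarray(c, dtype=float)
    return len(c) * c.sum() / (np.sqrt(c).sum() ** 2)

n = 100
c_star = [n - 1] * (n - 1)                   # star: every edge is a leaf edge
c_path = [l * (n - l) for l in range(1, n)]  # path: edge l cuts l vs n-l nodes

print(alpha_tree(c_star))   # exactly 1 (uniform congestion)
print(alpha_tree(c_path))   # slightly above 1
```

The star attains $\alpha = 1$ exactly (uniform congestion), while the path is only slightly worse.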
We shall also use the following claim in the proof of Theorem \ref{thm_LW_apx_ERMP_trees} : \begin{claim}\label{clm_apx_ratio_subtree}
Let $T,T'$ be two trees such that $T \subset T'$. Then $\alpha(T) \leq \alpha(T')$. \end{claim} \begin{proof}
Recall the definition of $\alpha(T)$:
\[
\alpha(T) = \frac{\mathcal{K}^{\ell w}_T}{\mathcal{K}^*_T} = \frac{m \cdot \sum_l c_l}{\mathcal{K}^*_T}
\]
Clearly, $\mathcal{K}^{\ell w}_T < \mathcal{K}^{\ell w}_{T'}$ (we are only adding positive terms). Moreover, \cite{saberi} showed that
\[
T \subset T' \implies \mathcal{K}^*_{T'} \leq \mathcal{K}^*_T \;\; .
\]
(this follows since the optimal weights for $T$ remain feasible for $T'$). From this, it is easy to see that
\[
\alpha(T) \leq \alpha(T')
\] \end{proof}
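On nested paths $P_3 \subset P_4 \subset \cdots$ the claim predicts a monotone approximation ratio, which is easy to confirm with the congestion formula~\eqref{equ_apx_ratio_trees} (a sketch; names ours):

```python
import numpy as np

def alpha_path(n):
    """alpha(P_n) via the tree formula: edge l of the path has c_l = l(n-l)."""
    c = np.array([l * (n - l) for l in range(1, n)], dtype=float)
    return len(c) * c.sum() / (np.sqrt(c).sum() ** 2)

ratios = [alpha_path(n) for n in range(3, 40)]
print(all(a <= b for a, b in zip(ratios, ratios[1:])))  # monotone in n
```

The ratios start at $\alpha(P_3) = 1$ and increase with $n$, consistent with $P_n \subset P_{n+1}$ and the claim.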
\subsubsection{Proof of Theorem \ref{thm_LW_apx_ERMP_trees}}\label{sec_prf_LW_apx_trees}
We begin by showing when an LT increases the approximation ratio.
\begin{proof}[Proof of Claim~\ref{clm_apx_ratio_ET}] \normalfont
Denote by $T' = E_k \circ T$ and $c_k' = c_k(T')$ (the congestions of all other edges remain the same).
Recall that $\alpha = (m\sum_l c_l)/ \left(\sum_l c_l^{1/2}\right)^2$, so let's write the modified numerator and denominator.
\begin{align*}
&\sum_{l} c_l' = \sum_{l} c_l + (c_k' - c_k). \\
&\sum_{l} c_l'^{1/2} = \sum_{l} c_l^{1/2} +(c_k'^{1/2} - c_k^{1/2}).
\end{align*}
We get that,
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \frac{\sum_{l} c_l}{\left(\sum_l c_l^{1/2}\right)^2} \leq \frac{\sum_{l} c_l + (c_k' - c_k)}{\left(\sum_{l} c_l^{1/2} +(c_k'^{1/2} - c_k^{1/2})\right)^2} \\
&\iff \frac{\sum_{l} c_l}{\left(\sum_l c_l^{1/2}\right)^2} \leq \frac{\sum_{l} c_l + (c_k' - c_k)}{\left(\sum_{l} c_l^{1/2}\right)^2 +\left(c_k'^{1/2} - c_k^{1/2}\right)^2 + 2\left(\sum_{l} c_l^{1/2}\right) \left(c_k'^{1/2} - c_k^{1/2}\right) } \\
\end{align*}
Using the fact that, assuming the denominators are positive (which is true in our case),
\[
\frac{A}{B} \leq \frac{A+D}{B+C} \iff AC \leq BD,
\]
we get that
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \left(\sum_l c_l \right)\left[ \left(c_k'^{1/2} - c_k^{1/2}\right)^2 + 2\left(\sum_{l} c_l^{1/2}\right) \left(c_k'^{1/2} - c_k^{1/2}\right) \right] \leq (c_k' - c_k)\left(\sum_{l} c_l^{1/2}\right)^2 \\
&\iff \left(\sum_l c_l \right) \left(c_k'^{1/2} - c_k^{1/2}\right) \left( 2\sum_{l} c_l^{1/2} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} - c_k^{1/2}) (c_k'^{1/2} + c_k^{1/2})\left(\sum_{l} c_l^{1/2}\right)^2 \\
\end{align*}
We divide the rest of the proof to two cases.\\
\textbf{Case 1:} $E_k$ is an upper LT. By definition, $c_k' > c_k$. Then, $c_k'^{1/2} - c_k^{1/2} > 0$, so we can divide by the latter to get,
\begin{align*}
\alpha(T) \leq \alpha(T') &\iff \left(\sum_l c_l \right) \left( 2\sum_{l} c_l^{1/2} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} + c_k^{1/2})\left(\sum_{l} c_l^{1/2}\right)^2 \\
&\iff ||g^*||_2^2 \left( 2\frac{c_k^{1/2}}{g^*_k} + c_k'^{1/2} - c_k^{1/2} \right) \leq (c_k'^{1/2} + c_k^{1/2}) && \text{using~\eqref{equ_cong_sum}, and \eqref{equ_opt_g_trees} for the $k$'th edge.}\\
&\iff 2\frac{c_k^{1/2}}{g^*_k} - c_k^{1/2}\left(1 + \frac{1}{||g^*||_2^2} \right) \leq c_k'^{1/2}\left(\frac{1}{||g^*||_2^2} -1\right) && \text{group terms by $c_k^{1/2}$ and $c_k'^{1/2}$.}
\end{align*}
In our case, $c_k'/c_k > 1$, so after dividing both sides by $c_k^{1/2}$, it's sufficient to require
\begin{align*}
\frac{2}{g^*_k} - 1 - \frac{1}{||g^*||_2^2} < \frac{1}{||g^*||_2^2} -1 \iff g_k^* > ||g^*||_2^2 .
\end{align*}
Thus we get that,
\[
g_k^* > ||g^*||_2^2 \implies \alpha(T) \leq \alpha(T').
\]
\textbf{Case 2:} $E_k$ is a lower LT . Using the same arguments we get that,
\[
g_k^* \leq ||g^*||_2^2 \implies \alpha(T) \leq \alpha(T')
\]
Unifying the two cases completes the proof.
\end{proof}
Remember that we defined the following partition: \[
E_<(T) \coloneqq \{ l \ \mid \ g_l^*(T) \leq ||g^*(T)||_2^2 \} \ , \ E_>(T) \coloneqq \{ l \ \mid \ g_l^*(T) > ||g^*(T)||_2^2 \} \]
We now show the key feature of this partition -- $E_<$ and $E_>$ are \emph{invariant} under upper and lower LT: \begin{claim}\label{clm_invariant_LT}
For any tree $T$, and $k \in E_>$ we have that,
\[
E_>(E_k^{\uparrow} \circ T) = E_>(T).
\]
and vice versa for $E_<$. \end{claim} \begin{proof} \normalfont
We will only prove this for $E_>$ as the proof for $E_<$ is symmetric. Let $k \in E_>(T)$. Denote by $T'=E_k^{\uparrow} \circ T$. We first show that $g^*_k(T') > g^*_k(T)$. From equation~\eqref{equ_opt_g_trees} we know that
\[
g_l^*(T) = \left(\frac{c_l(T)}{\mathcal{K}^*_T}\right)^{1/2} \ , \ g_l^*(T') = \left(\frac{c_l(T')}{\mathcal{K}^*_{T'}}\right)^{1/2}
\]
In addition, by the definition of an upper LT, the congestion of the $k$'th edge increases and the congestion of every other edge is unchanged, meaning that:
\[
\mathcal{K}^*_{T'} = \left(\sum_l c_l(T')^{1/2} \right)^2 > \left(\sum_l c_l(T)^{1/2} \right)^2 = \mathcal{K}^*_T.
\]
Combining the last two equations implies that for all $l \neq k$ we have:
\begin{equation}\label{equ_g_l_upper_LT}
c_l(T') = c_l(T) \implies \frac{g_l^*(T)}{g_l^*(T')} = \left(\frac{\mathcal{K}^*_{T'}}{\mathcal{K}^*_T}\right)^{1/2} > 1 \implies g_l^*(T') < g_l^*(T).
\end{equation}
Recall that $\boldsymbol{1}^T g^* = 1$, so,
\begin{equation}\label{equ_g_k_upper_LT}
g_k^*(T') = 1-\sum_{l\neq k} g^*_l(T') > 1 - \sum_{l\neq k} g^*_l(T) = g_k^*(T).
\end{equation}
We need to show that $k \in E_>(T')$, i.e
\[
g_k^*(T') > ||g^*(T')||_2^2
\]
Using~\eqref{equ_g_k_upper_LT}, we can write:
\begin{align*}
g_k^*(T') &= g_k^*(T) + (g_k^*(T') - g_k^*(T)) \\
&> \sum_{l \neq k} g_l^*(T)^2 + g_k^*(T)^2 +(g_k^*(T') - g_k^*(T)) && \text{since $k \in E_>(T)$} \\
&> \sum_{l \neq k} g_l^*(T')^2 + g_k^*(T)^2 +(g_k^*(T') - g_k^*(T)) && \text{using~\eqref{equ_g_l_upper_LT}} \\
&= \sum_{l \neq k} g_l^*(T')^2 + g_k^*(T')^2 + \left(g_k^*(T') - g_k^*(T) + g_k^*(T)^2 - g_k^*(T')^2 \right) \\
&> \sum_{l} g_l^*(T')^2 = ||g^*(T')||_2^2
\end{align*}
The last inequality holds because $g^*_l < 1/2$ for all $l$ (see~\eqref{equ_g_star_UB} above), and for any two numbers $a,b$ such that $a-b>0$ and $a+b<1$ we have:
\[
a-b > (a-b)(a+b) = a^2 - b^2 \implies (a-b)-(a^2-b^2) >0.
\]
Using this fact with $a=g_k^*(T') \ , \ b=g_k^*(T)$ concludes the proof. \end{proof}
So far we have shown that if there exists an LT as defined in~\eqref{equ_upper_lower_LT}, then the sets $E_<, \ E_>$ are invariant under the matching transformations, and the approximation ratio of the transformed tree is larger than that of the original tree. Using these properties, we can repeatedly apply LTs (until saturation), and the final tree is guaranteed to be the hardest instance. So, we are ready to describe the main process of this section. We begin with an arbitrary tree $T$ of order $n$. We partition its edges into the two sets $E_<(T),E_>(T)$ defined above. We know that these two sets are invariant under lower and upper LTs -- $E^{\uparrow} , E^{\downarrow}$ -- so we can repeatedly apply them on both sets until we have $E \circ T = T$. Furthermore, from Claim~\ref{clm_apx_ratio_ET}, for the final tree $\Tilde{T}$, $\alpha(\Tilde{T})$ is an upper bound on the approximation ratio of the original tree. It is left to explicitly define $E^{\uparrow} , E^{\downarrow}$ and compute $\alpha(\Tilde{T})$. \\ \begin{figure}
\caption{Description of lower LT }
\label{fig_lower_ET}
\end{figure}
We first define $E^{\downarrow}$. We assume that the tree is rooted, so the "up" and "down" directions are well defined. Let $k \in E_<(T)$ be a non-leaf edge (meaning its congestion is greater than $m$). Let $e_k = (u,v)$. Denote by $T_v$ the downward subtree rooted at $v$. Then $E^{\downarrow}_k$ takes $T_v$ and "ascends" it to $u$ (see Figure~\ref{fig_lower_ET}). Clearly, for any $l \neq k$, $c_l(E^{\downarrow}_k \circ T) = c_l(T)$, so $E^{\downarrow}_k$ is an LT. Furthermore, $c_k(E^{\downarrow}_k \circ T) = m < c_k(T)$
by~\eqref{equ_cong_relation}, so $E^{\downarrow}_k$ is indeed a lower LT. In other words, applying $E^{\downarrow}_k$ on all $k \in E_<(T)$ simply turns all of them into leaves in $T'$. So, we first apply $E^{\downarrow}_k$ on all edges $k \in E_<(T)$, leaving us with $T'$ such that for any $l \in E_<(T')$, $e_l$ is a leaf edge. From now on we will therefore assume that all edges in $E_<(T)$ are leaf edges. \\
Next, we define $E^{\uparrow}$. We can focus on the sub-tree, $T_>$, spanned by $E_>(T)$ - all the other edges are leaves, so the structure of both trees is the same. Let $k \in E_>(T)$, and denote $e_k = (u,v)$. If $d_{T_>}(u) , d_{T_>}(v) \leq 2 $, do nothing. We emphasize that $d(u)$ is the degree of $u$ in the \textbf{sub-tree} $T_>$ - i.e., the number of edges of $u$ that are in $E_>$. We now divide the transformation into two cases: \\ \\
\textbf{Case 1:} $d_{T_>}(u) , d_{T_>}(v) > 2 $ (see Figure~\ref{fig_upper_ET_case1}). Let $n_v = |T_v|$. W.l.o.g. assume that $n_v \leq n/2$. Denote $l_i = (v,x_i)$ for $i=1,...,d(v)$, ordered such that $|T_{x_i}| = n_i$ and $n_1 \leq n_2 \leq ... \leq n_{d(v)}$. Then, $E^{\uparrow}_k$ changes the edge $(v,x_2)$ to $(x_1,x_2)$. Clearly the congestion changes only on $l_1'=(x_1,x_2)$. But since $n_1,n_2 \leq n_1+n_2 \leq n_v \leq n/2$, we have \begin{align*}
\min&(n_1,n-n_1) = n_1 < n_1+n_2 = \min(n_1+n_2,n-(n_1+n_2)) \\
&\implies \underbrace{b_1(T) = n_1(n-n_1)}_{\text{congestion on $l_1$}} < \underbrace{(n_1+n_2)(n-(n_1+n_2)) = b_1(E^{\uparrow}_k \circ T)}_\text{congestion on $l_1'$}. \end{align*}
So, indeed $E^{\uparrow}_k$ is an upper LT\footnote{While $E^{\uparrow}_k$ is indeed defined by $k$, it actually changes the congestion on $l_1$. While this is somewhat ambiguous, since $l_1 \in E_>$ it doesn't affect the proof, so we abuse this notation.}. \\ \\ \textbf{Case 2:} w.l.o.g. $d(u)=2 , \ d(v) > 2$ (see Figure~\ref{fig_upper_ET_case2.1}).
Similarly to the first case, let $l' = (u,y)$ and $l_i = (v,x_i)$ for $i=1,...,d(v)$, ordered such that $|T_{x_i}| = n_i$ and $n_1 \leq n_2 \leq ... \leq n_{d(v)}$, and let $n' = n-|T_v|$.\\
\textbf{Case 2.1:} If $n_1 \leq n'$, then once again $E^{\uparrow}_k$ is to change $(v,x_2)$ to $(x_1,x_2)$. The congestion changes only on $l_1'=(x_1,x_2)$, but $n_1 < n/(d(v)-1) \leq n/2$ and $n_1<n_i,n'$, so \begin{align*}
\min&(n_1,n-n_1) = n_1 < \min (n_1+n_2 , 1+n' + n_3+\dots)\\
&\implies b_1(T) = n_1(n-n_1) < (n_1+n_2)(n-(n_1+n_2)) = b_1(E^{\uparrow}_k \circ T), \end{align*}
and, indeed, $E^{\uparrow}_k$ is an upper LT. \\
\textbf{Case 2.2:} If $n' < n_1$, then $E^{\uparrow}_k$ takes the edge $(v,x_1)$, together with $T_{x_1}$, and moves it to $(u,x_1)$ (see Figure~\ref{fig_upper_ET_case2.2}). The congestion changes only on $e_k=(u,v)$. But $n' < n/(d(v)-1) \leq n/2$ and $n' < n_1 \leq n_i$, so \begin{align*}
\min&( n',n-n') = n' < \min ( n_1+n' , 1+n_2+\dots)\\
&\implies c_k(T) = n'(n-n') < (n'+n_1)(n-(n'+n_1)) = c_k(E^{\uparrow}_k \circ T), \end{align*} so, again, we get that $E^{\uparrow}_k$ is an upper LT. \\
\begin{remark}
Note that at the end of case \textbf{(2.2)} we either move to case \textbf{(1)} (if $d(v) > 3$) or to case \textbf{(2.1)} (if $d(v) =3$), so we can focus only on the first type of upper LT (the first two cases). Now, for this type, the degree of $v$ after the transformation is strictly smaller than before, so this process terminates at some point, and we are guaranteed to end up with $d(v) = 2$ for every vertex $v$ of $T_>$ - i.e., a \emph{path graph} (of edges in $E_>$). \end{remark}
\begin{figure}
\caption{Stem graph with branches. The "black dots" are leaves, the black edges are from $E_>$, and the blue edges are from $E_<$.}
\label{fig_path_exits}
\end{figure}
We can finally describe the final tree $\Tilde{T}$. As mentioned earlier, we apply lower LTs until all edges from $E_<$ are leaves. In addition, we apply upper LTs until we get a single path of edges from $E_>$. Thus the final tree has a "stem" structure (i.e., a path) of size $p=|E_>(T)|$ with $s=|E_<(T)|$ "branches" along the way (see \Cref{fig_path_exits}). At this point it should be clear that "pushing aside" branches to the closest side would increase the approximation ratio, since this increases the bottleneck in the tree. Formally speaking, pushing leaves to the end increases the congestion along the path, which is again an upper LT, thus increasing the approximation ratio as well (recall that edges on the path are from $E_>$). So the final tree $\Tilde{T}$ is simply \[ \Tilde{T} = \mathcal{B}_{s_1,p,s_2} \] where $s_1+s_2 = s$, and $\mathcal{B}_{t,p,s}$ is the \emph{Bowtie} graph (see \Cref{fig_tps_tree}).\\
All that is left is to compute $\alpha(\mathcal{B}_{t,p,s})$. We know that $T \subset T' \implies \alpha(T) \leq \alpha(T')$ (see claim~\ref{clm_apx_ratio_subtree}), so it suffices to prove the bound for $\mathcal{B}_{n,n,n}$ ($n$ is the size of the original tree). While we could do so explicitly, we use a much easier calculation via a UB we derive ahead for general graphs, giving a UB of $\sim 3$ and concluding the proof (see lemma~\ref{lem_a2_bowtie} for further details).
\iffalse
We have $2n$ leaf edges with congestion of $m=3n-1$ (In the "big" tree there are $3n$ vertices). On the $i$'th edge on the path the congestion is $(n+i)(3n-(n+i)) = (n+i)(2n-i)$ for $i=1...n-1$. Hence, \begin{align*}
\alpha(\mathcal{B}_{n,n,n}) &= \frac{m\cdot \sum_l c_l}{(\sum_l c_l^{1/2})^2} = m \cdot \frac{2n\cdot m + \sum_{i=1}^{n-1} (n+i)(3n-(n+i))}{(2n\cdot \sqrt{m} + \sum_{i=1}^{n-1} (n+i)^{1/2}(3n-(n+i))^{1/2})^2}\\
&= m \cdot \frac{2n\cdot m + \sum_{k=n+1}^{2n-1} k(3n-k)}{(2n\cdot \sqrt{m} + \sum_{k=n+1}^{2n-1} k^{1/2}(3n-k)^{1/2})^2}\\
&=m \cdot \frac{2n\cdot m + A}{(2n\cdot \sqrt{m} + B)^2}\\ \end{align*} where \[ A = \sum_{k=n+1}^{2n-1} k(3n-k) = \sum_{k=1}^{2n-1} k(3n-k) - \sum_{k=1}^{n} k(3n-k) \approx \frac{10}{3}n^3 - \frac{7}{6}n^3 = \frac{13}{6}n^3 \] and \[ B = \sum_{k=n+1}^{2n-1} (k(3n-k))^{1/2} = \sum_{k=1}^{2n-1} (k(3n-k))^{1/2} - \sum_{k=1}^{n} (k(3n-k))^{1/2} \approx (c_2 - c_1)n^2 = cn^2 \] Using the approximation \[
\sum_{k=1}^{b \cdot n} (k(3n-k))^{1/2} \approx cn^2 \ \ , \ \ c_b = \int_0^b (t(3-t))^{1/2} \approx \left\{ \begin{array}{ll}
1.031, & \text{for } b=1\\
2.503, & \text{for } b=2
\end{array}\right. \] Thus, we get \begin{align*}
\alpha(T_{tps}) \leq \alpha(T_{n,n,n}) &= (3n)\cdot \frac{(2n)\cdot (3n) + A}{((2n)\cdot \sqrt{3n} + B)^2} \approx (3n) \cdot \frac{6n^2 + \frac{13}{6}n^3}{(3n)(4n^2) + c^2n^4}\\
& \leq \frac{18n^3 + \frac{13}{2} n^4}{c^2n^4} \leq \frac{9n^4+\frac{13}{2} n^4}{c^2 n^4} \approx 10.5 \refstepcounter{equation}\tag{\theequation} \label{equ_bowtie_analitic_apx} \end{align*} (Actually, we can a tighter bound of $\approx 3$, see lemma~\eqref{lem_a2_bowtie}).
\fi
\subsection{ERMP for General Graphs}\label{sec_general_UB}
We now continue to the case of general graphs. Unlike for trees, for general graphs the ERMP doesn't have a clear "combinatorial" form but rather an algebraic expression. Hence, our proof strategy must take a different approach. We derive two different UBs on the approximation ratio, and we conjecture that their \emph{minimum} is $\Tilde{O}(1)$. Indeed, we will show that in the case of the Bowtie graph the \emph{minimum} is $\Tilde{O}(1)$, matching \Cref{thm_LW_apx_ERMP_trees}. Finally, we provide many simulations that support our conjecture. \\
Recall that the ERMP can be formulated as: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & \Tr X^{-1} &\\ \text{subject to}& X = \sum_{l=1}^{m} g_lb_lb_l^T + (1/n)\boldsymbol{1}\boldsymbol{1}^T \\
& \boldsymbol{1}^Tg=1 \end{array} \end{equation*}
Since a closed-form expression for the exact approximation ratio seems hard to compute, our goal is to derive an upper bound on the approximation ratio of $\ell_{\infty}$-LW, which is henceforth denoted $\alpha_{A,D}$. It turns out that we can derive two \emph{different} upper bounds (with an interesting relation between them) for $\alpha_{A,D}$. The first bound uses the AM-GM inequality, and the second is obtained via Lagrange duality analysis.
\subsubsection{The AM-GM UB} Recall that we can formulate both A-optimality and D-optimality in terms of "means" (see \cref{sec_optimal_design}), so applying the AM-GM inequality is natural. We know we can write $\mathcal{K}_G(g)$ as (equation~\eqref{equ_R_tot_rep_tr}) \begin{equation}\label{equ_R_tot_rep_HM}
\mathcal{K}(g) = n \Tr L_g^+ = n \sum_{i=1}^{n-1} \lambda_i(L_g^+) = n \sum_{i=1}^{n-1} \lambda_i(L_g)^{-1} = \frac{n(n-1)}{HM(\lambda(L_g))} \end{equation} (we only have $n-1$ positive eigenvalues). So, we get \[ HM(\lambda(L_{g^*})) = \frac{n(n-1)}{\mathcal{K}_G^*} \ ,\ HM(\lambda(L_{g_{\ell w}})) = \frac{n(n-1)}{\mathcal{K}_G^{\ell w}} \] Using \emph{AM-GM} inequality, with equation~\eqref{equ_AM_Laplacian}, gives us, \[ \alpha_{A,D} = \frac{\mathcal{K}_G^{\ell w}}{\mathcal{K}_G^*} = \frac{HM(\lambda(L_{g^*}))}{HM(\lambda(L_{g_{\ell w}}))} \leq \frac{AM(\lambda(L_{g^*}))}{HM(\lambda(L_{g_{\ell w}}))} = \frac{2/(n-1)}{HM(\lambda(L_{g_{\ell w}}))} = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} \]
We denote this bound, the \emph{AM-GM} bound, by $\alpha_{AM}$: \[ \alpha_{A,D} \leq \alpha_{AM}(\ell w) = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} = \frac{2}{(n-1)^2} \Tr L_{\ell w}^+ \]
Note that, since $\mathcal{K}_G^* \geq n(n-1)^2/2$ (see the lower bound on the optimal solution from \cite{saberi}), we get: \[ \alpha_1(g) = \frac{2}{n(n-1)^2}\mathcal{K}_G(g) \geq \frac{2}{n(n-1)^2}\mathcal{K}_G^* \geq 1 , \] with equality iff $g=g^*$, so the bound is well defined.
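To make the bound concrete, here is a minimal numerical sketch (illustrative only, not part of the proof) evaluating $\alpha_1$ on a small path graph; the uniform weights stand in for the LW, which are uniform on trees (claim~\ref{clm_LW_tree}), and the size $n=8$ is an arbitrary choice.

```python
import numpy as np

def laplacian(n_nodes, edges, weights):
    # L_g = sum_l g_l * b_l b_l^T
    L = np.zeros((n_nodes, n_nodes))
    for (u, v), w in zip(edges, weights):
        b = np.zeros(n_nodes)
        b[u], b[v] = 1.0, -1.0
        L += w * np.outer(b, b)
    return L

n = 8
edges = [(i, i + 1) for i in range(n - 1)]      # path graph P_8 (a tree)
g = np.full(len(edges), 1.0 / len(edges))       # uniform weights = LW on a tree
Lp = np.linalg.pinv(laplacian(n, edges, g))
alpha_AM = 2.0 / (n - 1) ** 2 * np.trace(Lp)    # the AM-GM bound alpha_1
print(alpha_AM)
```

The value is at least $1$, as guaranteed above, and well below the diameter.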
\subsubsection{The Duality-gap UB} Next, we use the \emph{duality gap} of the ERMP to derive a second UB. Using the dual problem to bound the sub-optimality of a feasible solution is a well-known technique, so our motivation is clear. We begin by restating the result of \cite{saberi} regarding the duality gap of the ERMP problem. For the sake of completeness, as it is one of the key components of our result, we derive the duality gap from scratch, though we note that this was already done by \cite{saberi}. We state here the final results; see the full proof in \cref{appendix_ERMP_dual_gap}.\\
The dual problem of ERMP is (see equation~\eqref{equ_ERMP_dual_prob}), \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ & n(\Tr V)^2 \\
& \text{subject to} & b_l^T V^2 b_l = ||V b_l||^2 \leq 1 , \\
& & V \succeq 0 \ ,\ V \boldsymbol{1} = 0 \end{array} \end{equation*}
We know that the gradient of $\mathcal{K}_G(g)$ w.r.t. $g$ equals (see equation~\eqref{equ_der_L_g_plus}): \[
\frac{\partial \mathcal{K}_G(g)}{\partial g_l} = -n \Tr( L_g^+ b_l b_l^T L_g^+ ) = -n||L_g^+ b_l||^2 \]
Given any feasible $g$, let $V = \frac{1}{\underset{l}{\max} ||L_g^+ b_l||} L_g^+$ (this is clearly dual feasible). By definition, the duality gap at $g$ equals \begin{align*}
\eta &= n \Tr L_g^+ - n(\Tr V)^2 = n \Tr L_g^+ - \frac{n (\Tr L_g^+)^2}{\underset{l}{\max} ||L_g^+ b_l||^2} \\
&= \frac{\mathcal{K}_G(g)}{-\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} \left(-\min_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l} - \mathcal{K}_G(g)\right) \end{align*} Using this, we can derive our second UB. By the definition of the duality gap, \[ \mathcal{K}_G(g) - \mathcal{K}_G^* \leq \eta = \frac{\mathcal{K}_G(g)}{-\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} \left(-\min_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l} - \mathcal{K}_G(g)\right) = \mathcal{K}_G(g) + \frac{(\mathcal{K}_G(g))^2}{\min\limits_l \ \partial \mathcal{K}_G(g)/\partial g_l} \] Recall that the derivative of $\mathcal{K}_G(g)$ is negative; a simple rearrangement then gives, \[
\frac{\mathcal{K}_G(g)}{\mathcal{K}_G^*} \leq \frac{-\min\limits_l \frac{\partial \mathcal{K}_G(g)}{\partial g_l}}{\mathcal{K}_G(g)} = \frac{-\min\limits_l ( -n||L^+ b_l||_2^2 )}{n \Tr L^+} = \frac{\max\limits_l ||L^+ b_l||_2^2}{\Tr L^+} \]
Inserting $g_{\ell w}$, we denote this bound, the \emph{duality gap} bound, by $\alpha_{dual}$, \[
\alpha_{A,D} \leq \alpha_{dual}(\ell w) \coloneqq \frac{\max\limits_l ||L_{\ell w}^+ b_l||_2^2}{\Tr L_{\ell w}^+} \]
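As a small sanity check of this formula (an illustrative sketch, not from the analysis above), we can evaluate $\alpha_{dual}$ for a star graph with uniform weights, which are its LW since it is a tree (claim~\ref{clm_LW_tree}); by symmetry all edge terms are equal, so the bound should come out exactly $1$.

```python
import numpy as np

n = 8
edges = [(0, i) for i in range(1, n)]   # star graph: a tree, so its LW are uniform
m = len(edges)
B = np.zeros((n, m))                    # incidence matrix, columns b_l
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0
Lp = np.linalg.pinv(B @ (np.eye(m) / m) @ B.T)   # L_{lw}^+ with g = 1/m
alpha_dual = max(np.linalg.norm(Lp @ B[:, l]) ** 2 for l in range(m)) / np.trace(Lp)
print(alpha_dual)
```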
\subsubsection{Approximation Ratio UB for the ERMP}
We can now define our derived UB on the approximation ratio $\alpha_{A,D}$: \[
\alpha_{A,D} \leq \alpha_{min} \coloneqq \min(\alpha_1 , \alpha_2) . \]
Throughout the paper, $\alpha_{AM}$ and $\alpha_1$ always refer to $\alpha_{AM}(\ell w)$, $\alpha_{dual}$ and $\alpha_2$ always refer to $\alpha_{dual}(\ell w)$, and simply $\alpha$ refers to $\alpha_{min}$ unless stated otherwise.\\
We can now propose the following, stronger, conjecture: \begin{conjecture}\label{conj_a_min}
For any graph $G$, \( \alpha_{min} \leq O(\log n) \). \end{conjecture} We emphasize that this is not equivalent to \Cref{conj_LW_apx_ERMP}, since we don't know whether $\alpha_{min}$ is tight. However, our simulations indicate that $\alpha_{min}$ suffices (see ahead).\\
For the rest of this section, we derive some formulas for $\alpha_1$ and $\alpha_2$ and their relation. Our first lemma establishes the connection between the two bounds: \begin{lemma}\label{lem_a2_der_log_a1}
$\alpha_2(g) \ = \left\lVert - \nabla_g (\log \alpha_1) \right\rVert_\infty $ \end{lemma}
\begin{proof}
Recall that \(\alpha_1(g) = \frac{2}{(n-1)^2} \Tr L_g^+ \).
Using the chain rule and equation~\eqref{equ_der_L_g_plus}, we get that
\begin{align*}
- \frac{\partial }{\partial g_l} \log(\alpha_1(g)) &= - \frac{1}{\alpha_1(g)} \frac{\partial \alpha_1}{\partial g_l} = \frac{-(n-1)^2}{2\Tr L_g^+}\frac{2}{(n-1)^2} \Tr \frac{\partial L_g^+}{\partial g_l} \\
&= \frac{-1}{\Tr L_g^+} \Tr( -L_g^+ b_l b_l^T L_g^+ ) = \frac{\Tr b_l^T L_g^+ L_g^+ b_l}{\Tr L_g^+} = \frac{||L_g^+ b_l||^2}{\Tr L_g^+}
\end{align*}
Hence,
\[
\left\lVert - \nabla_g (\log \alpha_1) \right\rVert_\infty = \max_l \frac{||L_g^+ b_l||^2}{\Tr L_g^+} = \alpha_2(g) .
\] \end{proof}
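The lemma is easy to check numerically (a sketch, not part of the proof): compare the analytic expression for $\alpha_2$ with a central finite-difference gradient of $\log \alpha_1$. The triangle graph and the weight vector below are arbitrary choices.

```python
import numpy as np

nodes = 3
edges = [(0, 1), (1, 2), (0, 2)]                 # triangle graph
B = np.zeros((nodes, len(edges)))
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0

def log_alpha1(g):
    # log of the AM-GM bound: alpha_1(g) = 2/(n-1)^2 * Tr L_g^+
    Lp = np.linalg.pinv(B @ np.diag(g) @ B.T)
    return np.log(2.0 / (nodes - 1) ** 2 * np.trace(Lp))

g = np.array([0.5, 0.3, 0.2])
Lp = np.linalg.pinv(B @ np.diag(g) @ B.T)
alpha2 = max(np.linalg.norm(Lp @ B[:, l]) ** 2 for l in range(len(edges))) / np.trace(Lp)

eps = 1e-6
grad = np.array([(log_alpha1(g + eps * e) - log_alpha1(g - eps * e)) / (2 * eps)
                 for e in np.eye(len(edges))])
print(float(np.max(np.abs(grad))), alpha2)       # the two values should agree
```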
We can also give a more "combinatorial" interpretation of $\alpha_1$: \begin{lemma}\label{lem_apx_diam_UB}
\( \alpha_1 \leq D \), where $D$ is the diameter of the graph. \end{lemma} \begin{proof}
We know that at $g_{\ell w}$, the effective resistance on any edge is $n-1$.
Let SP($i,j$) be the shortest path between $i,j$. We can bound $R_{ij}^{\ell w} = R_{ij}(g_{\ell w})$ using the triangle inequality (see equation~\eqref{equ_ER_tri_ineq}):
\[
R_{ij}^{\ell w} \leq \sum_{l \in \text{SP}(i,j)} R_l^{\ell w} = \sum_{l \in \text{SP}(i,j)} (n-1) = (n-1)|\text{SP}(i,j)| \leq (n-1)D ,
\]
Thus we get that,
\[
\mathcal{K}_G^{\ell w} = \sum_{i<j} R_{ij}^{\ell w} \leq \sum_{i<j} (n-1)D = {n \choose 2} (n-1)D = \frac{n(n-1)^2}{2} D ,
\]
and,
\[
\alpha_1 = \frac{2}{n(n-1)^2} \mathcal{K}_G^{\ell w} \leq \frac{2}{n(n-1)^2} \cdot \frac{n(n-1)^2}{2} D = D
\] \end{proof}
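A quick numeric illustration of the lemma (a sketch with arbitrarily chosen sizes): for uniform-weight paths, which are their own LW (claim~\ref{clm_LW_tree}), $\alpha_1$ stays well below the diameter $n-1$; in fact a short calculation shows it equals $(n+1)/3$ there.

```python
import numpy as np

def alpha1_path(n):
    # alpha_1 for the path P_n with uniform weights (= LW on a tree)
    L = np.zeros((n, n))
    for i in range(n - 1):
        b = np.zeros(n)
        b[i], b[i + 1] = 1.0, -1.0
        L += np.outer(b, b) / (n - 1)
    return 2.0 / (n - 1) ** 2 * np.trace(np.linalg.pinv(L))

for n in (4, 8, 16):
    print(n, alpha1_path(n), n - 1)   # alpha_1 vs. diameter D = n-1
```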
In contrast, $\alpha_{dual}$ has a close connection to the \emph{condition number} of the Laplacian: \begin{lemma}\label{lem_a2_cond_num_UB}
\( \alpha_{dual} \leq \kappa(L_{lw}) = \frac{\lambda_{n}(L_{lw})}{\lambda_{2}(L_{lw})}\) \end{lemma} \begin{proof}
By the Courant-Fischer principle,
\[
\lambda_{n}(L_{lw}^+) = \underset{x \in \mathbb{R}^n}{\max} \left\{ \frac{x^T L_{lw}^+ x}{x^T x} \right\}
\]
In particular, taking $x = (L_{lw}^+)^{1/2} b_l$ (recall that $L_g^+$ is a symmetric PSD matrix) gives us,
\[
\frac{b_l^T L_{lw}^+ L_{lw}^+ b_l}{b_l^T L_{lw}^+ b_l} \leq \lambda_{n}(L_{lw}^+)
\]
i.e.,
\[
||L_{lw}^+ b_l ||^2 = b_l^T L_{lw}^+ L_{lw}^+ b_l \leq \lambda_{n}(L_{lw}^+) \cdot b_l^T L_{lw}^+ b_l = (n-1)\lambda_{n}(L_{lw}^+).
\]
In addition, recall that \( \min(x_1,...,x_k) \leq AM(x_1,...,x_k)\). Using this for the positive eigenvalues of $L_{lw}^+$, together with the fact that the trace is the sum of the eigenvalues, we get,
\[
\lambda_{2}(L_{lw}^+) \leq \frac{1}{n-1} \Tr L_{lw}^+
\]
Thus,
\[
\alpha_2 = \max_l \frac{||L_{lw}^+ b_l||^2}{\Tr L_{lw}^+} \leq \max_l \frac{(n-1)\lambda_{n}(L_{lw}^+)}{\Tr L_{lw}^+} \leq \frac{(n-1)\lambda_{n}(L_{lw}^+)}{(n-1)\lambda_{2}(L_{lw}^+)} = \frac{\lambda_{n}(L_{lw}^+)}{\lambda_{2}(L_{lw}^+)} = \frac{\lambda_{n}(L_{lw})}{\lambda_{2}(L_{lw})} = \kappa(L_{lw})
\]
where we used the fact that the nonzero eigenvalues of $L^+$ are the reciprocals of the nonzero eigenvalues of $L$. \end{proof}
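The lemma can also be sanity-checked numerically (a sketch; graph and size are arbitrary): for a uniform-weight path, which is its own LW, compare $\alpha_{dual}$ against the condition number $\kappa(L_{lw})$.

```python
import numpy as np

n = 8
m = n - 1
B = np.zeros((n, m))                     # incidence matrix of the path P_n
for l in range(m):
    B[l, l], B[l + 1, l] = 1.0, -1.0
L = B @ (np.eye(m) / m) @ B.T            # uniform weights = LW on a tree
Lp = np.linalg.pinv(L)
alpha_dual = max(np.linalg.norm(Lp @ B[:, l]) ** 2 for l in range(m)) / np.trace(Lp)
lam = np.linalg.eigvalsh(L)              # ascending; lam[1] = lambda_2
kappa = lam[-1] / lam[1]
print(alpha_dual, kappa)                 # alpha_dual should not exceed kappa
```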
$ $\\ As promised, we use $\alpha_{dual}$ to derive the approximation ratio for trees: \begin{lemma}\label{lem_a2_bowtie}
For the Bowtie graph, $\mathcal{B}_{n,n,n}$, $\alpha_{dual} \approx 3.12$ \end{lemma}
\begin{proof}
Recall that for trees
\[
\mathcal{K}_T(g) = \sum_l \frac{c_l}{g_l} ,
\]
and,
\[
\frac{\partial \mathcal{K}(g)}{\partial g_l} = -\frac{c_l}{g_l^2}
\]
So, we can write $\alpha_{dual}$, for trees, as
\begin{flalign*}
&&\alpha_{dual} = \underset{l}{\max} \left( \left. -\frac{\partial \mathcal{K}(g)}{\partial g_l}/\mathcal{K}(g) \right|_{g_{\ell w}} \right) = \underset{l}{\max} \frac{m^2 c_l}{m \sum_k c_k} = \frac{m \cdot c_{\max}}{\sum_k c_k} &&\text{$g_{\ell w} = g_{uni}$, see claim~\ref{clm_LW_tree}}
\end{flalign*}
Clearly, $c_{max} \leq (3n/2)^2$ (the middle edge). As for the denominator, an easy calculation gives us,
\begin{align*}
\sum_k c_k &= \underbrace{2n\cdot m}_\text{$2n$ leafs edges} + \underbrace{\sum_{k=n+1}^{2n-1} k(3n-k)}_\text{path's edges}\\
&= 2n\cdot m + \sum_{k=1}^{2n-1} k(3n-k) - \sum_{k=1}^{n} k(3n-k) \approx 2n\cdot m + \frac{10}{3}n^3 - \frac{7}{6}n^3 = \frac{13}{6}n^3 + 6n^2.
\end{align*}
Thus, we get that
\[
\alpha_{dual} (\mathcal{B}_{n,n,n}) \leq \underbrace{\frac{m(3n/2)^2}{(13/6)n^3} \leq \frac{27/4}{13/6}}_{m = 3n-1} \approx 3.12
\] \end{proof}
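The computation above can be reproduced numerically from the tree congestion formula $\alpha_{dual} = m \cdot c_{\max}/\sum_k c_k$ (a sketch; $n=200$ is an arbitrary size):

```python
n = 200
m = 3 * n - 1                                         # edges of B_{n,n,n} (3n vertices)
cong = [m] * (2 * n)                                  # 2n leaf edges, congestion m
cong += [(n + i) * (2 * n - i) for i in range(1, n)]  # path edges: (n+i)(3n-(n+i))
alpha_dual = m * max(cong) / sum(cong)
print(alpha_dual)                                     # close to, and below, 3.12
```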
We note that this is a UB, and experimental results show a slightly better ratio (see ahead).\\
While we have not managed to prove our conjecture for general graphs, we can use the above formulas to establish it for certain families. For instance, for low-diameter graphs, lemma~\ref{lem_apx_diam_UB} proves our conjecture. In addition, good expander graphs have a low condition number \cite{spielman11}, so lemma~\ref{lem_a2_cond_num_UB} proves the conjecture for them as well. For other graphs, we provide strong evidence for our conjecture using various simulations, described below.
\section{Experimental Results}\label{sec_experiments}
\paragraph{Setup.} The simulation computes the approximation ratio using $\alpha_{min}$, matching \Cref{conj_a_min}. All random-graph results are taken as the maximum over $100$ runs. We compute the LW of the graph using \Cref{alg_LW_computation_ccly} with $\epsilon = 0.01$. Whenever the graph is not connected or simple, we remove all self-loops and take its largest connected component (this is just a technicality). We simulate our conjecture for several elementary graph families, e.g., $k$-regular graphs, the lollipop, and grids.
\Cref{tab_dataset_elem} summarizes the (interesting) graphs we checked, and the approximation ratio of these graphs.
\Cref{fig_simulation_charts} shows an asymptotic behavior of regular graphs and lollipop.
\begin{savenotes} \begin{table*}[ht!]\small \normalfont \footnotesize
\setlength{\tabcolsep}{6pt}
\centering
\begin{tabular}{rrn{9}{0}n{9}{0}n{9}{0}r}
\toprule
\multicolumn{4}{c}{{\bfseries Dataset}}& \multicolumn{1}{c}{}\\
{\bfseries type} & {\bfseries graph} & \multicolumn{1}{r}{\bfseries nodes} & \multicolumn{1}{r}{\bfseries edges} & \multicolumn{1}{c}{\bfseries $\alpha_{min}(G)$} \\
\midrule
\bfseries random, high diameter, $d$-regular & $3$-regular & 400 & 600 & $1 \period 55$ \\
& $4$-regular & 400 & 800 & $1\period 17$ \\
& $5$-regular & 400 & 1000 & $1\period 11$ \\
& $6$-regular & 400 & 1200 & $1\period 08$ \\
\midrule
\bfseries small-world graph & Watts–Strogatz small-world graph\footnote{with parameters -- $k=4$, $p=2/3$} & 400 & 800 & $1\period 64$ \\
\midrule
\bfseries grid & balanced grid\footnote{$2$-dimensional square} & 400 & 760 & $1\period 35$ \\
& 10x-grid\footnote{rectangle with width size $10$} & 400 & 750 & $1 \period 94$ \\
\midrule
\bfseries expanders & Margulis-Gabber-Galil graph & 400 & 1520 & $1\period 06$ \\
& chordal-cycle graph & 400 & 1196 & $1\period 64$ \\
\midrule
\bfseries dense graphs & (400,400)--lollipop & 800 & 80200 & $3\period 03$ \\
\midrule
\bfseries trees & Bowtie & 3000 & 2999 & $2\period 5$ \\
\midrule
\bfseries real-world graphs\footnote{graphs taken from \cite{MSJ12}} & \textsc{Yeast} & 2224 & 6609 & $2\period 23$ \\
& \textsc{Stif} & 17720 & 31799 & $4\period 08$ \\
& \textsc{royal} & 2939 & 4755 & $8\period 96$ \\
\bottomrule
\end{tabular}
\caption{Summary of approximation bounds for elementary graphs.
\label{tab_dataset_elem}} \end{table*} \end{savenotes}
\begin{figure}
\caption{$\alpha_{min}$ of high diameter regular graphs}
\label{fig_simulation_charts_regular}
\caption{$\alpha_{min}$ of lollipop graph up to $800$ vertices}
\label{fig_simulation_charts_lolipop}
\caption{Asymptotic behavior of elementary graphs}
\label{fig_simulation_charts}
\end{figure}
\iffalse \begin{table*}[ht!]\small
\setlength{\tabcolsep}{6pt}
\centering
\begin{tabular}{rrn{9}{0}n{9}{0}n{9}{0}}
\toprule
\multicolumn{4}{c}{{\bfseries Dataset}}& \multicolumn{1}{c}{\bfseries Approximation}\\
{\bfseries type} & {\bfseries name} & \multicolumn{1}{r}{\bfseries nodes} & \multicolumn{1}{r}{\bfseries edges} & \multicolumn{1}{c}{\bfseries ratio UB} \\
\midrule
\bfseries infrastructure & \textsc{Ca} & 1965206 & 2766607 & 5 \\
& \textsc{Bucharest} & 189732 & 223143 & 21 \\
& \textsc{HongKong} & 321210 & 409038 & 32 \\
& \textsc{Paris} & 4325486 & 5395531 & 55 \\
& \textsc{London} & 2099114 & 2588544 & 57 \\
& \textsc{Stif} & 17720 & 31799 & 28 \\
\midrule
\bfseries social & \textsc{Facebook} & 4039 & 88234 & 142 \\
& \textsc{Stack-TCS} & 25232 & 69026 & 143 \\
& \textsc{Stack-Math} & 1132468 & 2853815 & 850 \\
& \textsc{LiveJournal} & 3997962 & 34681189 & 360 \\
\midrule
\bfseries web & \textsc{Wikipedia} & 252335 & 2427434 & 1007 \\
\midrule
\bfseries hierarchy & \textsc{Royal} & 3007 & 4862 & 11 \\
& \textsc{Math} & 101898 & 105131 & 56 \\
\midrule
\bfseries ontology & \textsc{Yago} & 2635315 & 5216293 & 836 \\
& \textsc{DbPedia} & 7697211 & 30622392 & 28 \\
\midrule
\bfseries database & \textsc{Tpch} & 1381291 & 79352127 & 699 \\
\midrule
\bfseries biology & \textsc{Yeast} & 2284 & 6646 & 54 \\
\bottomrule
\end{tabular}
\caption{Dataset and summary of approximation bounds from \cite{MSJ12}. \amit{ignore the current numbers, I dont have them yet} \label{tab_dataset_realworld}} \end{table*}
\fi
\iffalse \begin{remark}
For computing the LW of the graph, we used \Cref{alg_LW_computation_ccly} with the following implementation -- in order to compute the $l$'th LS, $b_l^T L_g^{-1} b_l$, at each iteration we first compute the cholesky decomposition of $L_g = B W B^T = U^T U $, using \textsc{Scipy-Sparse} engine (we employ here the fact that our graphs are sparse). Then, we use a triangular-system solver from \textsc{Scipy} to compute $Z = (U^T)^{-1} B$. Note that $Z^T Z = B^T L_g^+ B$, so we want $\diag(Z^T Z)$. But clearly the $i'th$ diagonal term is equals to the sum of $i$'th column in $Z$ squared, leaving us with the following procedure:
\begin{enumerate}
\item Find Cholesky decomposition -- $L_g = U^T U$ where $U$ is upper triangular.
\item Solve $(U^T)Z = B$ using fast traingular system solver.
\item Compute \textsc{ColSum}($Z \circ Z$) where $\circ$ is the Hadamard product, and \textsc{ColSum}($M$) is the sum along columns of $M$.
\end{enumerate} \end{remark} \fi
\section{Spectral Implications of $\ell_{\infty}$-LW Reweighting (Proof of \Cref{thm_spectral_LW})}\label{sec_LW_spectral}
In this section we study how $\ell_{\infty}$-LW reweighting affects the eigenvalue distribution of the reweighted graph Laplacian. We present several results in this direction, which are of both mathematical and algorithmic interest.
\subsection{Bounding the Algebraic Connectivity}
A classic result of \cite{Mohar91} asserts that for unweighted graphs the algebraic connectivity is bounded by \[ \lambda_2 \geq \frac{4}{nD}. \]
We generalize this bound for weighted graphs in the following way: \begin{lemma}
Given a graph $G(g)$, denote the maximal edge-ER by $R_{max}(g) = \max\limits_l R_l(g)$, and the maximal pairwise ER by $\Tilde{R_{max}}(g)$. Then,
\begin{flalign}
&&\lambda_2(L_g) \geq \frac{2}{n \Tilde{R_{max}}} \geq \frac{2}{nD \cdot R_{max}(g)} &&\\
\text{and}&& && \nonumber \\
&& \lambda_2(L_g) \geq \frac{4}{n \sum_l R_l(g)}
\end{flalign}
\end{lemma} \begin{proof}
We know that ER is a metric, so for any two vertices $i,j$,
\[
R_{ij}(g) \leq \sum_{l \in P_{ij}} R_l(g) \leq D\cdot R_{max}(g)
\]
where $P_{ij}$ is the shortest (length) path between $i$ and $j$. In particular, $\Tilde{R_{max}}(g) \leq D \cdot R_{max}$. Thus
\[
\mathcal{K}_G(g) = \sum_{i<j} R_{ij}(g) \leq {n \choose 2} \cdot \Tilde{R_{max}} = \frac{n(n-1)}{2} \Tilde{R_{max}}(g)
\]
Using the fact that $\mathcal{K}_G(g) = n \Tr L_g^+$, we get
\[
\Tr L_g^+ \leq \frac{(n-1)}{2} \Tilde{R_{max}}(g)
\]
We know that $\lambda_{max}(L_g^+) \leq \Tr L_g^+ $, so
\[
\lambda_{max}(L_g^+) \leq \frac{(n-1)}{2}\Tilde{R_{max}}(g) \implies \lambda_2 \geq \frac{2}{(n-1)\Tilde{R_{max}}(g)} \geq \frac{2}{n\Tilde{R_{max}}(g)} \geq \frac{2}{n D \cdot R_{max}(g)}
\]
For the second bound, define the characteristic function of $P_{ij}$ as
\[
\chi_{ij}(e) = \twopartdef{1}{e \in P_{ij}}{0}{otherwise}
\]
Now, we can write
\begin{align*}
\mathcal{K}(g) &= \frac{1}{2}\sum_{i,j} R_{ij}(g) \\
&\leq \frac{1}{2} \sum_i \sum_j \sum_{l \in P_{ij}} R_l = \frac{1}{2} \sum_i \sum_j \sum_{l \in E} R_l \cdot \chi_{ij}(e) \\
&=\frac{1}{2} \sum_{l} R_l \sum_i \sum_j \chi_{ij}(e) \leq \frac{n^2}{4} \sum_{l} R_l
\end{align*}
where in the last line we use the fact that any edge $e$ belongs to at most $\frac{n^2}{4}$ of the paths $P_{ij}$ \cite{Mohar91}. Following the same arguments from before, we get that
\[
\lambda_{max}(L_g^+) \leq \frac{n}{4} \sum_{l} R_l \implies \lambda_2 \geq \frac{4}{n \sum_l R_l}
\]
\end{proof}
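Both bounds can be sanity-checked numerically; the sketch below does so for a randomly weighted cycle (graph, size, and weight range are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
edges = [(i, (i + 1) % n) for i in range(n)]     # cycle C_n
g = rng.uniform(0.5, 2.0, size=n)                # random positive edge weights
L = np.zeros((n, n))
for (u, v), w in zip(edges, g):
    b = np.zeros(n)
    b[u], b[v] = 1.0, -1.0
    L += w * np.outer(b, b)
Lp = np.linalg.pinv(L)
lam2 = np.linalg.eigvalsh(L)[1]                  # algebraic connectivity
R_edge = [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges]
R_pair_max = max(Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]
                 for i in range(n) for j in range(i + 1, n))
print(lam2, 2 / (n * R_pair_max), 4 / (n * sum(R_edge)))
```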
As a sanity check, let us see what we get for unweighted graphs. Indeed, for $g=\boldsymbol{1}$, we have \begin{align*}
&L_g = BB^T \implies L_g^+ = (BB^T)^+ \\
&\implies B L_g^+ B^T = B(BB^T)^+ B^T = BB^+ \\
&\implies R_l = \diag( B L_g^+ B^T)_l = \diag(BB^+)_l \leq \boldsymbol{1} \end{align*}
where the last line is justified because $BB^+$ is a projection matrix. So we get that $R_{max} \leq 1$, and thus (using the first bound) \[ \lambda_2 \geq \frac{2}{nD \cdot R_{max}} \geq \frac{2}{nD} \] which is almost the previous bound (and is better than the weaker version of $1/nD$). \\
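The fact that $R_{max} \leq 1$ in the unweighted case is easy to confirm numerically (a sketch; the cycle-with-a-chord graph is an arbitrary example):

```python
import numpy as np

n = 6
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 3)]  # cycle plus a chord
L = np.zeros((n, n))
for u, v in edges:
    b = np.zeros(n)
    b[u], b[v] = 1.0, -1.0
    L += np.outer(b, b)                                   # g = 1 (unweighted)
Lp = np.linalg.pinv(L)
R = [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges]
print(max(R))                                             # never exceeds 1
```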
\begin{remark}
We have discussed here only the algebraic connectivity, but there are many other graph properties and bounds that apply only to unweighted graphs. The above technique, and possibly others, may be used to generalize those bounds to weighted graphs, in terms of ER. \end{remark}
\iffalse For the next theorem we first need the following claim. \begin{claim}\label{clm_psd1}
For any graph $G$,
\[
B^T L_{lw}^+ B \preceq 2m\boldsymbol{I}_m
\]
and in general, if $m > \frac{2}{\beta} n$, for $\beta \leq 2$, then,
\[
B^T L_{lw}^+ B \preceq \beta \cdot m\boldsymbol{I}_m
\] \end{claim} The proof is a purely technical calculation, so we defer it to \cref{appendix_prf_clm_psd1}. We now use this claim to prove the following theorem.
\begin{theorem}[Theorem \ref{thm_spectral_LW}.II] \label{lem_lam2_LW_uni}
For any graph $G$,
\[
\frac{1}{2} \lambda_2(L_{uni}) \leq \lambda_2(L_{lw})
\]
and if $m > \frac{2}{\beta} n$, then,
\[
\frac{1}{\beta} \lambda_2(L_{uni}) \leq \lambda_2(L_{lw})
\]
where $L_{uni}$ is the Laplacian with uniform weight vector -- $g = (1/m) \boldsymbol{1}$. \end{theorem}
\begin{proof}
We first prove the first case. We do this in terms of $L^+$, i.e \(\lambda_{max}(L_{lw}^+) \leq 2 \cdot \lambda_{max}(L_{uni}^+)\).
Recall that from the Courant-Fisher principle \[ \lambda_{max}(L^+) = \max \left\{ \frac{x^T L^+ x}{x^T x} \mid x \in \mathbb{R}^n \right\} \]
However, we know that $\ker(L^+) = \dim \ker(B^T)$. In other words, for any $x \in \mathbb{R}^n$ the following holds \[ L^+ x = L^+ \Pi_B x \ , \ x^T L^+ = x^T \Pi_B^T L^+ \] where \( \Pi_B = B B^+ \), is the projection matrix onto $\span(B^T)$. So, \[ \lambda_{max}(L^+) = \max \left\{ \frac{x^T (B^T)^+ B^T L^+ B B^+ x}{x^T x} \mid x \in \mathbb{R}^n \right\} \]
Now, using claim~\ref{clm_psd1}, for any $y \in \mathbb{R}^m$ \[ \frac{y^T B^T L_{lw}^+ B y}{y^T y} \leq \lambda_{max}(B^T L_{lw}^+ B) \leq 2m \] Apply it for \(y = B^+ x\), we get
\begin{align*}
\lambda_{max}(L_{lw}^+) = \max_x \frac{x^T (B^T)^+ B^T L_{lw}^+ B B^+ x}{x^T x} &\leq \max_x \frac{x^T(B^T)^+(2m)B^+ x}{x^T x} = (2m) \cdot \max_x \frac{x^T(BB^T)^+ x}{x^T x} \\
&= 2 \cdot \max_x\frac{x^TL_{uni}^+ x}{x^T x} = 2\lambda_{max}(L_{uni}^+) \end{align*} Where we used the fact that pseudo-inverse is reciprocal multiplicative and commutes with Transpose, so \begin{flalign*}
&& &(B^T)^+ B^+ = (BB^T)^+ && \\
\text{and, } && &m(BB^T)^+ = (1/m \ BB^T)^+ = (BW_{uni}B^T)^+ = L_{uni}^+ && \end{flalign*}
The proof for the the general case is exactly the same using the second case from claim~\ref{clm_psd1}.
\end{proof}
\fi
\subsection{Minimize the Mixing Time of a Graph}
Recall that the spectral gap of a graph is the smallest non-zero eigenvalue of the Laplacian. Since our graphs are assumed to be connected, this equals $\lambda_2(L_g) = \lambda_2$. Now, a smaller (faster) mixing time is equivalent to a higher spectral gap, so our goal is to maximize $\lambda_2$.
It is well known that the maximum of $n$ variables can be "smoothed" using the \emph{LogSumExp} (LSE) function \cite{Nesterov03}: \[ \LSE(x_1,...,x_n) = \log \left(\sum_i e^{x_i} \right) \] The LSE is a differentiable, convex function, and for any $n$ numbers it satisfies \[ x_{max} \leq \LSE(x_1,...,x_n) \leq x_{max} + \log(n) \] Thus, in an attempt to analyze $\lambda_2 = 1/ \lambda_n(L^+)$, it is natural to consider the LSE of the eigenvalues of $L^+$ (maximizing $\lambda_2$ is equivalent to minimizing $\lambda_n(L^+)$). Using the fact that \[ \Tr \exp(M) = \sum_i e^{\lambda_{i}(M)} \] we define the softmaxEV (SEV) function as: \[ \SEV(g) = \log(\Tr[\exp(L_g^+)]) \] To simplify calculations, exploiting the fact that \emph{log} is a monotone function, we will analyze: \[ f(g) = \Tr[ \exp(L_g^+)] \] Note that $f(g)$ is still convex, since composing with the (convex, increasing) exponential preserves convexity. In \cref{appendix_LSE_opt_crit} we prove that the optimality criterion for SEV is: \begin{align}\label{equ_sev_opt_crit}
\Tr [ e^{L_g^+}L_g^+ (I - b_l b_l^T L_g^+)] \geq 0. \end{align} In the next section, we show that when \eqref{equ_sev_opt_crit} is \emph{pointwise nonnegative} (i.e., all summands are PSD), the criterion~\eqref{equ_sev_opt_crit} is tightly connected to the $\ell_{\infty}$-LW optimality condition, hence provides a condition under which LW reweighting improves the mixing time of the graph $G$.
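The LSE sandwich bound above, and its trace-exponential form for $\SEV$, are straightforward to verify numerically (a toy sketch; the vector and the uniformly weighted cycle are arbitrary choices):

```python
import numpy as np

def lse(x):
    # LogSumExp of a vector
    return np.log(np.sum(np.exp(x)))

x = np.array([0.3, 1.7, 0.9, 1.2])
print(x.max(), lse(x), x.max() + np.log(len(x)))   # sandwich bound on a vector

# SEV of a uniformly weighted cycle: log Tr exp(L_g^+) = LSE of eig(L_g^+)
n = 6
A = np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0)  # cycle adjacency
L = (2 * np.eye(n) - A) / n                         # uniform weights g = 1/m, m = n
lam = np.linalg.eigvalsh(np.linalg.pinv(L))         # eigenvalues of L_g^+
sev = lse(lam)
print(lam.max(), sev, lam.max() + np.log(n))        # SEV sandwiches lambda_n(L^+)
```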
\paragraph{A sufficient condition for optimality} Note that equation~\eqref{equ_sev_opt_crit} is trivially satisfied if \(\mathbf{I} - b_l b_l^T L_g^+\) is a PSD matrix. Moreover, the spectrum of this matrix is quite elegant: \(b_l b_l^T L_g^+\) is a rank-one matrix whose single nonzero eigenvalue equals $R_l(g)$ (corresponding to the eigenvector $b_l$). So the spectrum of \(\mathbf{I} - b_l b_l^T L_g^+ \) is simply \[ \underbrace{1,...,1}_{n-1} , \; 1 - R_l(g) \] Thus, a sufficient condition for $g$ to be optimal is that the effective resistance of every edge is at most $1$. Note that this condition resembles the normalized LW condition, where the ER of each edge is (at most) $n-1$.\\ Unfortunately, no feasible solution satisfies this condition. To see why, note that for any feasible $g$ (equation~\eqref{equ_logdet_grad_g_prod}) \[
-\nabla \logdet(g)^T \cdot g = \sum_l R_l(g) \cdot g_l = n-1 \] But if for some feasible $g'$ we had $R_l(g') \leq 1$ for all $l$, then \[ n-1 = \sum_l R_l(g')g'_l \leq \sum_l g'_l = 1 , \] a contradiction\footnote{The case $n \leq 2$ is negligible.}. In conclusion, there is no feasible solution for which \(I - b_l b_l^T L_g^+\) is PSD. However, this does not mean that no optimal solution exists -- the trace can be nonnegative even if the matrix is not PSD. Unfortunately, we do not know how to derive a closed-form expression for the optimal solution beyond the above condition.
\paragraph{Relative optimality} While it is unclear what the optimal solution of the SEV function is, we can use convexity to compare two candidate solutions, i.e., to prove that "solution 1 is better than solution 2". Formally, for a convex function $f$, the first-order characterization $f(y) \geq f(x) + \nabla f(x)^T(y-x)$ gives
\[ \nabla f(x)^T(y-x) \geq 0 \implies f(x) \leq f(y) \]
For example, taking $x=g_{lw}=\wh{g} \ , \ y=g_{uni}=(1/m)\boldsymbol{1}$, we get the following condition \[ \nabla f(\wh{g})^T((1/m)\boldsymbol{1} - \wh{g}) = (1/m)\sum_l \frac{\partial f}{\partial g_l} - \nabla f(\wh{g})^T \wh{g} \geq 0 \]
Using the derived formulas from \cref{appendix_LSE_opt_crit}, this condition takes the form \[ -(1/m)\sum_l \Tr [e^{L_{lw}^+} L_{lw}^+ b_l b_l^T L_{lw}^+] +\Tr [ e^{L_{lw}^+} L_{lw}^+] = \Tr [e^{L_{lw}^+} L_{lw}^+ (\mathbf{I} - \sum_l (1/m)b_l b_l^T L_{lw}^+)] = \Tr [e^{L_{lw}^+} L_{lw}^+ (\mathbf{I} - L_{uni} L_{lw}^+)] \geq 0 \]
\begin{remark} The last condition is not a trivial one. To see why, note that if $\mathbf{I} - L_{uni} L_{lw}^+$ is PSD, then the condition is satisfied. It is easy to see that \[ \mathbf{I} - L_{uni} L_{lw}^+ \succeq 0 \iff L_{uni} L_{lw}^+ \preceq \mathbf{I} \iff (1/m) B^T L_{lw}^+ B \preceq \mathbf{I} \iff B^T L_{lw}^+ B \preceq m\mathbf{I} . \] Multiplying by $W^{1/2}$ on both sides, where $W = \frac{1}{n-1} \diag(w_\infty)$, we get that \[
W^{1/2} B^T (B W B^T)^+ B W^{1/2} = \Pi_{W^{1/2} B^T} \preceq m W \] It is known that the eigenvalues of a projection matrix are either $0$ or $1$. On the other hand, the $i$'th eigenvalue of $m W$ is clearly $m (w_\infty)_i /(n-1)$. In other words, if $(w_\infty)_i \geq (n-1)/m$ for all $i$, then we could conclude that the mixing time of the reweighted graph is faster than that of the unweighted graph. Unfortunately, this can never happen -- recall that the LW sum to $\rank(B)=n-1$, so if $(w_\infty)_{\min} > (n-1)/m$ we get \[ n-1 = \sum_i (w_\infty)_i \geq m \cdot (w_\infty)_{\min} > m \cdot ((n-1)/m) = n-1 , \] a contradiction, unless $g_{\ell w} = g_{uni}$. We emphasize that this does not mean that LW do not improve the mixing time, since $\mathbf{I} - L_{uni} L_{lw}^+ \succeq 0$ is merely a sufficient condition, and simulations indeed show that LW improve the mixing time. However, we do not have a closed-form condition for this improvement. \end{remark}
\subsection{Optimal Spectrally-Thin Trees}
Our final application of LW is to spectrally-thin trees. A spanning tree $T$ of $G$ is $\gamma$-spectrally-thin if $L_T \preceq \gamma L_G$. In \cite{AGM10} it was shown that there is a deep connection between Asymmetric TSP (ATSP) and spectrally-thin trees: a spanning tree of low spectral thinness, combined with the fractional solution of ATSP, can be used to find an approximate solution. This has been the key ingredient in \cite{AGM10} and follow-up work. In \cite{HO14} the authors showed that for a connected graph $G = \langle V,E \rangle$, any spanning tree $T$ of $G$ has spectral thinness at least $\Omega(\max\limits_{e \in T} R_G(e))$ (i.e., the maximal edge ER of $T$ in $G$). Furthermore, a tree achieving this lower bound can be found in polynomial time. Now, we know (\cref{sec_new_char_LS_ER}) that LW minimize the latter quantity. So, it is natural to ask how LW can be used to find the optimal spectrally-thin tree of a graph.
Following the work of \cite{HO14}, we prove the following lemma: \begin{lemma}\label{lem_spectral_thin_tree}
For any connected graph $G = \langle V,E \rangle$ there is a weighted spanning tree $T_g$ with spectral thinness $((n-1)/m) \cdot O(\log n / \log \log n)$. \end{lemma}
The proof is very similar to \cite{HO14} and makes use of the following lemma: \begin{lemma}\label{lem_independent_set}
Let $w_1,\dots,w_m \in \mathbb{R}^n$ be unit-norm vectors, and let $p_1,\dots,p_m$ be a probability distribution on these vectors such that the covariance matrix satisfies $\sum_i p_i w_i w_i^T = (1/n) \mathbf{I}$. Then, there is a polynomial-time algorithm that computes a subset $S \subseteq [m]$ such that $\{ w_i \mid i \in S \}$ forms a basis of $\mathbb{R}^n$, and $|| \sum_{i \in S} w_i w_i^T || \leq \alpha$ where $\alpha = O(\log n / \log \log n)$. \end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem_spectral_thin_tree}] \normalfont Let $w_\infty$ be the LW of the graph. Define $g_0 = \frac{n-1}{m}\boldsymbol{1}$ , $\overline{w} = \frac{n-1}{m}w_\infty$. For each $e_l \in E$, define $w_l = \sqrt{(g_0)_l} \cdot (L_{\overline{w}})^{+/2} b_l$, $\boldsymbol{p} = \frac{1}{n-1}w_\infty$. Indeed, by~\eqref{equ_LS_facts} and the multiplicative property of the pseudo-inverse we have \begin{align*}
\sum_l p_l = 1 \ , \ ||w_l||^2 = (g_0)_l \cdot \frac{m}{n-1} b_l^T (B W_\infty B^T)^+ b_l = \frac{n-1}{m} \cdot \frac{m}{n-1} = 1 \end{align*} In addition, \begin{align*}
\sum_{e_l \in E} p_l w_l w_l^T &= \frac{1}{n-1} (L_{\overline{w}})^{+/2} \left (\sum_l (w_\infty)_l (g_0)_l b_l b_l^T \right) (L_{\overline{w}})^{+/2} \\
&= \frac{1}{n-1} (L_{\overline{w}})^{+/2} \left( \sum_l \overline{w}_l b_l b_l^T \right) (L_{\overline{w}})^{+/2} \\
&= \frac{1}{n-1} \mathbf{I}_{\text{im} L_G} \end{align*} So, we can view the vectors $\{w_l \mid l \in E \}$ as $(n-1)$-dimensional vectors (in the column span of $B$) and apply \cref{lem_independent_set} to get a set $T \subseteq E$ of size $n-1$ such that $\{ w_e \mid e \in T\}$ forms a basis of $\mathbb{R}^{n-1}$ and \[ \sum_{e_l \in T} w_l w_l^T \preceq O(\log n/\log \log n) \mathbf{I}_{\text{im} L_G} \] The first condition implies that $T$ induces a spanning tree of $G$. The second condition gives us (after a simple rearrangement) \[ \sum_{e_l \in T} (g_0)_l b_l b_l^T = L_{T_g} \preceq O(\log n/\log \log n) L_{\overline{w}} \] Now, since the LW are at most $1$, we have $\overline{w} = \frac{n-1}{m}w_\infty \leq \frac{n-1}{m} \boldsymbol{1}$, implying that \[ L_{\overline{w}} \preceq \frac{n-1}{m} BB^T = \frac{n-1}{m} L_G \] Thus, \[ L_{T_g} \preceq ((n-1)/m) \cdot O(\log n / \log \log n) L_G \] \end{proof}
\paragraph{The ATSP Angle} It would be interesting to understand whether and how \emph{weighted STTs} can be used in ATSP rounding schemes, a-la \cite{AGM10}. Nevertheless, we emphasize two advantages of LW-weighted STTs: First, in contrast to the unweighted case \cite{AO15}, we are guaranteed to find an $O((n-1)/m)$-STT (up to the $O(\log n / \log\log n)$ factor) \emph{regardless} of the original graph, and this bound is optimal -- to see why, recall that LW is the optimal minimizer of the maximal edge-ER. Taking $g_{\ell w}$ and $g_{uni}$, both normalized, we have \[ R_{max}(g_{\ell w}) = n-1 \leq R_{max}(g_{uni}) = m \cdot R_{max}(\boldsymbol{1}) \implies R_{max}(\boldsymbol{1}) \geq \frac{n-1}{m}. \] Second, the unweighted STT guaranteed by \cite{AO15} can at best achieve this bound with total weight $n-1$. By contrast, the total weight of our weighted STT is $\frac{(n-1)^2}{m} \leq n-1$ (and for most graphs much smaller). In this sense, the tree from Lemma \ref{lem_spectral_thin_tree} is always "cheaper".
\section{Computing Lewis Weights}\label{sec_computing_LW}
In this section we outline recent accelerated methods for computing the $\ell_{\infty}$-LW of a matrix via \emph{repeated leverage-score} computations (in our case, Laplacian linear systems). The first method, due to \cite{ccly} (building on \cite{CP14}), runs in input-sparsity time and is very practical, but provides only a low-accuracy solution. The second method, due to \cite{flps21}, is a \emph{high-accuracy} algorithm for $\ell_p$-LW, using $\wt{O}(p^3 \log (1/\epsilon))$ leverage-score computations (Laplacian linear systems in our case). Fortunately, as we show below, low accuracy is sufficient for the ERMP application, hence we focus on the algorithm of \cite{ccly}. We remark, however, that in most optimization applications,
computing $\ell_p$-LW for $p=n^{o(1)}$ suffices (see \cite{flps21,LS14}), in which case the \cite{flps21} algorithm yields a high-accuracy $O(m^{1+o(1)}\log(1/\epsilon))$-time algorithm for Laplacians.
\subsection{Computing $\ell_{\infty}$-Lewis Weights to Low-Precision \cite{ccly}}
\begin{algorithm} \caption{Computing $\ell_{\infty}$-Lewis weights} \label{alg_LW_computation_ccly}
\hspace*{\algorithmicindent}\textbf{Input:} A matrix $A \in \mathbb{R}^{m \times n}$ with rank $k$; $T$ -- the number of iterations. \\ \hspace*{\algorithmicindent}\textbf{Result:} $\ell_{\infty}$-LW, $w \in \mathbb{R}^m$. \\ \begin{algorithmic} \State Initialize: $w_l^{(1)} = \frac{k}{m}$ for $l=1, \dots ,m$. \For{$t = 1, \dots ,T-1$}
\State $w_l^{(t+1)} = w_l^{(t)} \cdot a_l^T (A^T \diag(w^{(t)})A)^+ a_l$ ; for $l=1, \dots, m$ \Comment{// Here one can use a Laplacian linear-system solver.}
\EndFor \State $(w)_i = \frac{1}{T} \sum\limits_{t=1}^T w_i^{(t)}$ for $i=1,\dots ,m$ \\ \Return $w$ \end{algorithmic} \end{algorithm}
Since we use \cite{ccly} as a black box, we only give a high-level overview of Algorithm \ref{alg_LW_computation_ccly}. The basic idea of \cite{ccly, CP14} is the observation that the (exact) $\ell_{\infty}$-LW satisfy \begin{align}\label{equ_ccly_alg_fixed_equ}
w_i = \tau_i(W^{1/2} A). \end{align} \cite{ccly} use \eqref{equ_ccly_alg_fixed_equ} to define the fixed-point iteration described in \Cref{alg_LW_computation_ccly}. In other words, the algorithm repeatedly computes the leverage scores of the reweighted matrix and updates the weights accordingly, finally outputting the average of all iterates, which is an (approximate) fixed point. The performance of this algorithm is as follows:
Let $\epsilon >0$ and denote by $\wt{w}$ the output of the algorithm for input $A \in \mathbb{R}^{m \times n}$. Our goal is to compute the approximate LW, $\wt{w}$, such that \begin{align*}
\wt{w} \approx_\epsilon w_\infty \refstepcounter{equation}\tag{\theequation} \label{equ_mul_apx_LW} \end{align*} where $a \approx_\epsilon b$ iff $a = (1 \pm \epsilon)b$. However, the approximation guarantee of \cref{alg_LW_computation_ccly} is in the \emph{optimality} sense, meaning \begin{align*} \tau_i(\wt{W}^{1/2} A) \approx_\epsilon \wt{w}_i , \ \ \forall i \in [m]. \refstepcounter{equation}\tag{\theequation} \label{equ_opt_apx_LW} \end{align*} Fortunately, \cite{flps21} recently showed that $\epsilon$-approximate optimal LS \eqref{equ_opt_apx_LW} implies $\epsilon$-approximate LW \eqref{equ_mul_apx_LW}: \begin{theorem}
(LS-apx $\Rightarrow$ LW-apx, \cite{flps21}). For all $\epsilon \in (0, 1)$, Algorithm~\ref{alg_LW_computation_ccly} outputs a ($1+\epsilon$)-approximation of LW with $T = O(\epsilon^{-1} \log \frac{m}{n})$ iterations. \end{theorem}
\paragraph{Approximate Lewis Weights:} Since the output of \Cref{alg_LW_computation_ccly} satisfies $\wt{w} \approx_\epsilon w_\infty$, it follows that \[ \Tilde{L_{w}} = \sum_l (\wt{g_{w}})_l b_l b_l^T \approx_\epsilon \sum_l (g_{\ell w})_l b_l b_l^T = L_{\ell w} \] where $\wt{g_{w}}, g_{\ell w}$ are the weight vectors defined by $\wt{w}, w_\infty$, respectively. Since this is a spectral approximation, it also implies $\Tilde{L_{w}}^+ \approx_\epsilon L_{\ell w}^+$, and thus \[ \Tr \Tilde{L_{w}}^+ \approx_\epsilon \Tr L_{\ell w}^+ \] In other words, computing the approximate LW guarantees a $(1+\epsilon)$ factor for the trace objective. Since our objective function is solely the trace, approximate LW cost us only a $(1+\epsilon)$ factor in the approximation ratio. Since we are only shooting for an $O(1)$-approximation for ERMP, the latter is sufficient, so \Cref{alg_LW_computation_ccly} can be used to produce the approximate-ERMP weights claimed in Theorems \ref{thm_LW_apx_ERMP_trees} and \ref{thm_UB_general_graphs}. Note that this also applies to the rest of the results, as the proximity notion in~\eqref{equ_mul_apx_LW} implies spectral approximation.
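The last step (entrywise $(1\pm\epsilon)$ weights give a spectral approximation, and hence a $(1\pm\epsilon)$ factor on the trace of the pseudo-inverse) can be sanity-checked numerically; a sketch with an arbitrary small graph and $\epsilon = 0.1$:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # triangle + pendant edge
n, m, eps = 4, 4, 0.1
B = np.zeros((n, m))                         # edge-incidence matrix
for l, (u, v) in enumerate(edges):
    B[u, l], B[v, l] = 1.0, -1.0

rng = np.random.default_rng(2)
g = np.full(m, 1.0 / m)
g_tilde = g * (1 + eps * (2 * rng.random(m) - 1))   # entrywise (1 +- eps)

tr = np.trace(np.linalg.pinv(B @ np.diag(g) @ B.T))
tr_tilde = np.trace(np.linalg.pinv(B @ np.diag(g_tilde) @ B.T))

# (1-eps) L <= L~ <= (1+eps) L  ==>  Tr L^+/(1+eps) <= Tr L~^+ <= Tr L^+/(1-eps)
assert tr / (1 + eps) <= tr_tilde <= tr / (1 - eps)
```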
\section{Appendix}
\subsection{Background: Convex Optimization Analysis}\label{appendix_cvx_opt_recipe}
All the optimization problems discussed in this paper are convex. In our analysis of the optimal solutions, we will use the following recipe for problems of the form: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & f(g) &\\ \text{subject to}& \boldsymbol{1}^Tg=1 \ , \ g \in \mathbb{R}_+^m \end{array} \end{equation*} where $f(g)$ is convex. It is a well-known fact that a feasible $g$ is optimal for $f$ iff \[ \nabla f(g)^T \cdot (\wh{g} - g) \geq 0 \ , \ \text{for every feasible} \ \wh{g} . \] This is the same as \[ \nabla f(g)^T \cdot (e_l - g) \geq 0 \ , \ \forall \ l=1\dots m \] (write $\wh{g}$ as a convex combination of the standard basis vectors $e_l$). So, the optimality criterion for $g$ is \[ \frac{\partial f}{\partial g_l}(g) -\nabla f(g)^T \cdot g \geq 0 \ , \ \forall \ l=1\dots m . \] Thus, we will use the following recipe - given a convex problem of the above form: \begin{enumerate}
\item Compute the partial derivatives of $f$, i.e $\frac{\partial f(g)}{\partial g_l}$.
\item Compute $\langle \nabla f(g) , g \rangle = \nabla f(g)^T \cdot g$.
\item Derive the optimality criteria:
\begin{align*}
\frac{\partial f}{\partial g_l} -\nabla f(g)^T \cdot g \geq 0 \ , \ \forall \ l=1 \dots m \refstepcounter{equation}\tag{\theequation} \label{equ_cvx_opt_criteria}
\end{align*} \end{enumerate}
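As a concrete instance of this recipe, consider the toy problem $f(g)=\sum_l c_l/g_l$ on the simplex, whose minimizer is $g_l \propto \sqrt{c_l}$ (by Lagrange multipliers). The NumPy sketch below (with arbitrary $c$) checks the criterion \eqref{equ_cvx_opt_criteria} at that point; here it holds with equality:

```python
import numpy as np

c = np.array([1.0, 4.0, 9.0])
g = np.sqrt(c) / np.sqrt(c).sum()   # analytic minimizer of sum(c/g) on the simplex

grad = -c / g**2                    # partial derivatives of f(g) = sum_l c_l / g_l
crit = grad - grad @ g              # df/dg_l - grad^T g, per coordinate
assert np.allclose(crit, 0.0, atol=1e-9)   # optimality criterion, with equality
```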
\subsection{Background: Experimental Optimal Design}\label{appendix_optimal_design}
Here we give a high level overview of experimental optimal design. For more information see \cite{boyd_convex_2004,Puk06}.
Assume we want to estimate a vector $x \in \mathbb{R}^n$ using the following experiments: in each round $i = 1,\dots, m$, we may choose one \emph{test} vector out of $p$ possible choices -- $v_1,\dots,v_p$ -- and we observe $y_i = v_{j_i}^T x + w_i$, where $j_i \in [p]$ and $w_i$ is Gaussian noise (i.e., independent Gaussian RVs with zero mean and unit variance). It is usually assumed that $V = (v_1, \dots, v_p)$ is a full-rank matrix, but this can easily be generalized. The optimal estimate is given by the least-squares solution: \[ \wh{x} = \underbrace{\left(\sum_{i=1}^m v_{j_i} v_{j_i}^T\right)^{-1}}_{E} \ \cdot\sum_{i=1}^m y_i v_{j_i} . \] For an estimate $\wh{x}$, we define the estimation error $e=\wh{x}-x$, whose covariance matrix is $E$. The goal of optimal design is to minimize $E$ w.r.t.\ some partial order (e.g., the Loewner order). In general, optimal design is a hard combinatorial problem. To see why, let $m_j$ be the number of rounds in which we chose $v_j$, and express $E$ using the $m_j$: \[
E = \left(\sum_{i=1}^m v_{j_i} v_{j_i}^T\right)^{-1} = \left(\sum_{j=1}^p m_jv_jv_j^T\right)^{-1} \] This shows that optimal design can be formulated with the integer variables $m_j$, under the constraint that they sum to $m$ -- in general, a hard integer program. However, we may relax it in the following way. Define $g_j = m_j/m$, and express $E$ in terms of the $g_j$: \[
E(g) = \frac{1}{m} \left(\sum_{j=1}^p g_jv_jv_j^T\right)^{-1} \] Now, instead of searching over the integers, we let the $g_j$ be real numbers, under the constraint that they sum to $1$. Thus, the relaxed optimal design problem asks to minimize $E$ subject to $\boldsymbol{1}^T g = 1$, w.r.t.\ some partial order. For example, A- and D-optimal design minimize the trace and the determinant, respectively. We refer to $V$ as the \emph{experiment} matrix, and denote $E_V(g) = V\cdot \diag(g) \cdot V^T$, so that $E_V(g)^{-1}$ is the error covariance matrix.
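To make the relaxation concrete, here is a toy A-optimal comparison in $\mathbb{R}^2$ (the test vectors and the two designs are arbitrary choices for illustration): spending the whole budget on the two axis directions beats a uniform design that also measures the redundant diagonal direction.

```python
import numpy as np

V = np.array([[1.0, 0.0, 1 / np.sqrt(2)],
              [0.0, 1.0, 1 / np.sqrt(2)]])   # p = 3 test vectors in R^2

def cov_trace(g):
    # trace of the error covariance (sum_j g_j v_j v_j^T)^{-1}, ignoring the 1/m factor
    return np.trace(np.linalg.inv(V @ np.diag(g) @ V.T))

g_uniform = np.array([1/3, 1/3, 1/3])
g_axes    = np.array([1/2, 1/2, 0.0])

assert np.isclose(cov_trace(g_uniform), 4.5)
assert np.isclose(cov_trace(g_axes), 4.0)    # the better of these two designs
```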
\subsection{The D-Optimal Design Problem}\label{appendix_D_design}
Here we provide an alternative way to solve the D-optimal design problem via a convex-analysis approach. We focus on the case of Laplacians, but it generalizes to any experiment matrix. We follow the recipe described in \cref{appendix_cvx_opt_recipe}.
We define the D-optimal design problem over Laplacians, as a relaxation of problem~\eqref{equ_D_optimal_def}, in the following manner: \begin{definition}\label{def_LD_g} Given an undirected graph $G$, denote the Laplacian of the weighted graph $G(g)$ by $L_g$. The \emph{LD} minimization problem asks how to re-weight the edges in order to minimize the \emph{logdet} of $L_g^+$: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & \logdet( L_g^+ + (1/n)\boldsymbol{1}\boldsymbol{1}^T) &\\ \text{subject to}& \boldsymbol{1}^Tg=1 \ , \ g \geq 0 \end{array} \end{equation*} \end{definition}
\begin{remark}\label{remark_LD_G_objective}
One might wonder why this is our objective function. Note that $L_g^+$ has a zero eigenvalue, while we are interested only in the positive eigenvalues. However, it is not difficult to prove that:
\[ \lambda(L_g^+ + 1/n \boldsymbol{1}\boldsymbol{1}^T) = \twopartdef {\lambda(L_g^+)} {\lambda(L_g^+) \neq 0} {1} {\lambda(L_g^+) = 0} \]
So, the determinant in our objective is simply the product of the positive eigenvalues of $L_g^+$. We further abuse notation and write $|L_g^+ + (1/n)\boldsymbol{1} \boldsymbol{1}^T|$ as $|L_g^+|$. \end{remark}
We will write the objective function as $LD_G(g) = \logdet(L_g^+)$. Whenever it is clear that we refer to the graph version, we simply write $LD(g)$. Note that for a general experiment matrix $V$ the objective function equals $LD_V(g) = \logdet(E_V(g)^+)$. This leads us to the following result: \begin{lemma}\label{thm_D_optimal_LW}
Given an undirected graph $G$ with edge-incidence matrix $B \in \mathbb{R}^{n \times m}$, the weight vector that minimizes the $LD_G(g)$ function is $LW(B^T)$, up to normalization. More generally, for an experiment matrix $V \in \mathbb{R}^{n \times m}$, the solution for D-optimality for $V$ is
$LW(V^T)$, up to normalization. \end{lemma}
Let's start by computing the partial derivatives. We will use the following fact: Suppose that the invertible symmetric matrix $X(t)$ is a differentiable function of parameter $t \in \mathbb{R}$. Then we have \begin{equation}\label{equ_der_inv_mat}
\frac{\partial X^{-1}}{\partial t} = -X^{-1}\frac{\partial X}{\partial t} X^{-1} \end{equation} We also use \emph{Jacobi's formula} - given differentiable matrix $X(t)$: \begin{equation}\label{equ_jacobi_formula}
\frac{\partial |X|}{\partial t} = |X| \Tr (X^{-1} \frac{\partial X}{\partial t}) \end{equation}
Now, if we write the rank-one decomposition of $L_g = \sum_{l=1}^m g_l b_l b_l^T$, it is clear that \begin{equation}\label{equ_der_L_g}
\frac{\partial L_g}{\partial g_l} = b_l b_l^T \end{equation}
Using the above, and equation~\eqref{equ_laplac_inv}, we get that, \begin{align*}
\frac{\partial L_g^+}{\partial g_l} &= \frac{\partial (L_g + 1/n\boldsymbol{11}^T)^{-1}}{\partial g_l} = -(L_g + 1/n\boldsymbol{11}^T)^{-1}\frac{\partial L_g}{\partial g_l}(L_g + 1/n\boldsymbol{11}^T)^{-1} && \text{using~\eqref{equ_der_inv_mat} } \\
&= -(L_g^+ + 1/n\boldsymbol{11}^T)\frac{\partial L_g}{\partial g_l}(L_g^+ + 1/n\boldsymbol{11}^T) && \text{from~\eqref{equ_laplac_inv}}\\
&= -(L_g^+ + 1/n\boldsymbol{11}^T)b_l b_l^T(L_g^+ + 1/n\boldsymbol{11}^T) && \text{using~\eqref{equ_der_L_g}} \\
&= -L_g^+ b_l b_l^T L_g^+ \refstepcounter{equation}\tag{\theequation} \label{equ_der_L_g_plus} \end{align*} where we used the fact that $b_l^T \boldsymbol{1} = 0$.
Now, using \emph{Jacobi's formula}~\eqref{equ_jacobi_formula}, and basic chain rule we get: \begin{align*}
& \frac{\partial |L_g^+|}{\partial g_l} = |L_g^+| \Tr ((L_g^+)^{-1} \frac{\partial L_g^+}{\partial g_l}) \\
&\implies \frac{\partial \log |L_g^+|}{\partial g_l} = |L_g^+|^{-1} \frac{\partial |L_g^+|}{\partial g_l} = \Tr((L_g^+)^{-1} \frac{\partial L_g^+}{\partial g_l}) \end{align*}
Combining the last two equations leads to \begin{align*}
\frac{\partial \log |L_g^+|}{\partial g_l} & = -\Tr( (L_g^+)^{-1} L_g^+ b_l b_l^T L_g^+ ) \\ & = - \Tr ( b_l b_l^T L_g^+ ) = - b_l^T L_g^+ b_l = -R_l(g) \refstepcounter{equation}\tag{\theequation} \label{equ_logdet_der} \end{align*}
\begin{remark}
For the general case, we get:
\begin{align*}
\frac{\partial LD_V(g)}{\partial g_l} & = -\Tr( E_V(g)^{-1} v_l v_l^T ) \\
& = - \Tr ( v_l^T E_V(g)^{-1} v_l ) = -v_l^T (V \cdot \diag(g) \cdot V^T)^{-1} v_l \refstepcounter{equation}\tag{\theequation} \label{equ_der_LD_g}
\end{align*}
and, in terms of LS:
\begin{equation}\label{equ_der_LD_g_with_LS}
\frac{\partial LD_V(g)}{\partial g_l} = - g_l^{-1} \cdot \tau_l(W_g^{1/2} V^T)
\end{equation} \end{remark}
Next, we need to compute $\langle \nabla LD(g) , g \rangle = \nabla LD(g)^T \cdot g$. We will use the following "trick", based on the multiplicativity of the determinant and properties of the pseudo-inverse: \begin{align*}
LD(\alpha g) &= \log |L_{\alpha g}^+| = \log | (B \cdot \diag(\alpha g) \cdot B^T)^+| \\
&=\log |\alpha^{-1} L_g^+| \\
&= \log ( \alpha^{-(n-1)}|L_g^+|) && \text{because $\rank(L_g) = n-1$} \\
&= LD(g) - (n-1)\log(\alpha) && \text{logarithm laws} \end{align*} Taking the derivative of both sides w.r.t.\ $\alpha$, we get: \[ \nabla LD(\alpha g)^T \cdot g = \frac{\partial}{\partial \alpha}(LD(g) - (n-1)\log (\alpha)) = -\frac{n-1}{\alpha} \] Evaluating the expression at $\alpha = 1$ yields \begin{equation}\label{equ_logdet_grad_g_prod}
\nabla LD(g)^T \cdot g = -(n-1) \end{equation}
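This Euler-type identity in fact holds for any positive weight vector, since $\sum_l g_l R_l(g) = \Tr[L_g^+ L_g] = \rank(L_g) = n-1$ for a connected graph. A quick numerical confirmation (a sketch assuming numpy; the graph choice is arbitrary):

```python
import numpy as np

def euler_identity_gap(n=6, seed=1):
    rng = np.random.default_rng(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(edges)
    B = np.zeros((n, m))
    for l, (i, j) in enumerate(edges):
        B[i, l], B[j, l] = 1.0, -1.0
    g = rng.random(m)                        # arbitrary positive weights
    J = np.ones((n, n)) / n
    Lplus = np.linalg.inv(B @ np.diag(g) @ B.T + J) - J
    grad = -np.einsum('il,ij,jl->l', B, Lplus, B)   # dLD/dg_l = -R_l(g)
    return abs(grad @ g + (n - 1))           # should be ~ 0
```
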
\begin{remark}
It's not hard to see that for a general matrix $V$ of rank $d$, we have $\rank(E_V) = \rank(V) = d$ and $|\alpha^{-1} E_V^{-1}| = \alpha^{-d} |E_V^{-1}|$. So we will have,
\[
LD_V(\alpha g) = LD_V(g) - d\log(\alpha) ,
\]
and accordingly,
\[
\nabla LD_V(g)^T \cdot g = -d.
\] \end{remark}
Now, we can plug all the above in the optimality criteria (equation~\eqref{equ_cvx_opt_criteria}), and get that $g$ is optimal for $LD(g)$ iff \[ \frac{\partial LD(g)}{\partial g_l} + n-1 = -R_l(g) + n-1 \geq 0 \ , \ l=1,\dots, m \] or, \begin{align*} R_l(g) \leq n-1 \ , \ \forall \ l=1,\dots,m . \refstepcounter{equation}\tag{\theequation} \label{equ_LD_G_opt_crit} \end{align*} And for general $V$, \begin{align*} g_l^{-1} \cdot \tau_l(W_g^{1/2} V^T) \leq \rank(V) \ , \ l=1, \dots , m \refstepcounter{equation}\tag{\theequation} \label{equ_LD_opt_criteria} \end{align*}
Finally, we can show that $\ell_{\infty}$-LW is indeed the optimal solution for $LD(g)$. We will show it for the general case. Assume $V$ has rank $d$, and let $g_{\ell w} \coloneqq (1/d) w_\infty(V^T)$. First of all, from equation~\eqref{equ_LS_facts}, \[ \sum_{l=1}^m (g_{\ell w})_l = (1/d) \sum_{l=1}^m w_\infty(V^T)_l = (1/d) \cdot d = 1 , \] so $g_{\ell w}$ is feasible. Next, we need to show that it satisfies the optimality criteria. Recall from equation~\eqref{equ_inf_LW} that for the non-normalized LW, \[ v_l^T(V W_\infty V^T)^+ v_l = 1 , \ l=1, \dots , m. \] Since we normalize by $(1/d)$, we get that for $g=g_{\ell w}$, \[ v_l^T(V W_g V^T)^+ v_l = d , \ l=1, \dots , m. \] Hence, for $g=g_{\ell w}$ we have \[ \tau_l(W_g^{1/2} V^T) = g_l^{1/2} \cdot v_l^T (V \cdot W_g V^T)^+ v_l \cdot g_l^{1/2} = d \cdot g_l \ , \ \forall \ l=1, \dots , m . \] Substituting the latter in equation~\eqref{equ_LD_opt_criteria} gives us \begin{align*} g_l^{-1} \cdot \tau_l(W_g^{1/2} V^T) = d \cdot g_l \cdot g_l^{-1} = d \leq d, \refstepcounter{equation}\tag{\theequation} \label{equ_LD_LW_opt_saturation} \end{align*} which matches the optimality criteria exactly, proving \Cref{thm_D_optimal_LW}. Note that $g_{\ell w}$ indeed \emph{saturates} the optimality criteria.
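As an illustration of this saturation, on an edge-transitive graph such as the complete graph $K_n$, symmetry forces the normalized $\ell_\infty$ Lewis weights to be uniform, so every effective resistance under uniform weights should equal $n-1$ exactly, saturating \eqref{equ_LD_G_opt_crit}. A small check (a sketch assuming numpy):

```python
import numpy as np

def uniform_Kn_resistances(n=6):
    # effective resistances R_l(g) on K_n under uniform weights g_l = 1/m
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(edges)
    B = np.zeros((n, m))
    for l, (i, j) in enumerate(edges):
        B[i, l], B[j, l] = 1.0, -1.0
    g = np.full(m, 1.0 / m)
    J = np.ones((n, n)) / n
    Lplus = np.linalg.inv(B @ np.diag(g) @ B.T + J) - J
    return np.einsum('il,ij,jl->l', B, Lplus, B)
```

Every entry of the returned vector equals $n-1$, so the D-optimality criterion holds with equality on every edge.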
\iffalse
First, let's compute the gradient of $R_{tot}$. Recall that from equation~\ref{equ_der_L_g_plus} we know that \[ \frac{\partial L_g^+}{\partial g_l} = -L_g^+ b_l b_l^T L_g^+ \]
Using it for our objective function yield \begin{align*}
\frac{\partial R_{tot}}{\partial g_l} & = -n \Tr L_g^+ b_l b_l^T L_g^+ = -n\Tr b_l^T L_g^+ L_g^+ b_l \\
& = -n||L_g^+ b_l||^2 \end{align*}
We can also express the gradient of $R_{tot}$ as \[ \nabla R_{tot}(g) = -n \ \boldsymbol{diag}(B^T(L_g^+)^2 B) = -n \ \boldsymbol{diag}(B^T(L_g+ (1/n)\boldsymbol{1}\boldsymbol{1}^T)^{-2} B) \]
finally we note the following useful identity: \[ \nabla R_{tot}(g)^T \cdot g = -R_{tot} \] \label{equ_R_tot_der} for any feasible g. This can be shown using the fact that $R_{tot}$ is homogeneous function of $g$ of degree $-1$ (see ~\ref{equ_ER_homogenous}), meaning $R_{tot}(\alpha g) = R_{tot}(g)/\alpha$. Taking the derivative with respect to $\alpha$ in $\alpha=1$ yield \[ \nabla R_{tot}(g)^T \cdot g = -R_{tot} \]
Next, we derive the optimality criteria. Recall that our objective function may be written as: \[ \Tr X^{-1} \] which is strictly convex. Again, for any convex function, a necessary and sufficient condition for optimality of feasible solution $g$ is: \[ \nabla R_{tot}(g)^T(\wh{g}-g) \geq 0 \text{ for all feasible } \wh{g} \] This is the same as requiring \[ \nabla R_{tot}(g)^T(e_l-g) \geq 0 \text{ for } l=1 \dots m \]
using identity~\ref{equ_R_tot_der} we can write this as: \[ \frac{\partial R_{tot}}{\partial g_l} + R_{tot}(g) \geq 0\]
\fi
\subsection{Proof of Claim~\ref{clm_HM_GM_gap}}\label{appendix_prf_clm_hm-gm}
\begin{proof}
Let $\boldsymbol{x} = (x_1,x_2,\dots, x_n)$. Define $\boldsymbol{x}' = (p x_1, x_2/p, x_3,\dots,x_n)$. Indeed,
\[
GM(\boldsymbol{x}') = (\Pi_i \ x'_i)^{1/n} = (\Pi_i \ x_i)^{1/n} = GM(\boldsymbol{x})
\]
Let $S = \sum_{i=1}^n \frac{1}{x_i}$ and similarly for $S'$. Clearly, $HM(\boldsymbol{x}) = n/S$, so to multiply the HM by $t$ we should divide $S$ by $t$.
Explicitly, we need that
\[
S' = \sum_{i=1}^n \frac{1}{x'_i} = S/t = (1/t)\sum_{i=1}^n \frac{1}{x_i}.
\]
From our definition of $\boldsymbol{x}'$ we get that
\[
S' = \frac{1}{px_1} + \frac{p}{x_2} + \sum_{i=3}^n \frac{1}{x_i} = \frac{1}{px_1} + \frac{p}{x_2} + S - \frac{1}{x_1} - \frac{1}{x_2} = S + (1/p - 1)\frac{1}{x_1} + (p-1)\frac{1}{x_2} = S/t,
\]
which is a quadratic equation in $p$:
\begin{align*}
(1-1/t)S - \left(\frac{1}{x_1} + \frac{1}{x_2} \right) +\frac{1}{x_1}\frac{1}{p} + \frac{1}{x_2}p = 0. \refstepcounter{equation}\tag{\theequation} \label{equ_quad_p_t}
\end{align*}
We can change variables to $y_1 = \frac{1}{x_1} , y_2 = \frac{1}{x_2} , t' = (1/t - 1)S$ to get:
\[
y_2 \cdot p^2 - (t'+y_1+y_2) \cdot p + y_1 = 0,
\]
which has the solution:
\begin{align*}
p &=\frac{1}{2y_2}\left[(t' + y_1 + y_2 ) + \sqrt{(t'+y_1+y_2)^2 - 4y_2y_1} \right] \\
&= \frac{1}{2y_2}\left[(t' + y_1 + y_2) + \sqrt{t'^2 + (y_1+y_2)^2 + 2t'(y_1+y_2) - 4y_2y_1} \right] \\
&= \frac{1}{2y_2}\left[(t' + y_1 + y_2) + \sqrt{t'^2 + 2t'(y_1+y_2) + (y_1 - y_2)^2} \right] \refstepcounter{equation}\tag{\theequation} \label{equ_sol_p_t}
\end{align*}
Note that $t', y_1,y_2$ are nonnegative, so $p$ is well defined.
We can conclude that for any $t<1$ there exists $p_t$, given by equation~\eqref{equ_sol_p_t}, with which we can construct $\boldsymbol{x}'$ such that $GM(\boldsymbol{x}) = GM(\boldsymbol{x}')$ and $HM(\boldsymbol{x}') = t \cdot HM(\boldsymbol{x})$. \\
It is worth mentioning how the AM changes in this construction. Indeed,
\[
AM(\boldsymbol{x}') = \frac{1}{n}\sum_{i=1}^n x'_i = \frac{1}{n}(\sum_i x_i - x_1 - x_2 +px_1 + x_2/p) = AM(\boldsymbol{x}) + \frac{1}{n}((p_t-1)x_1+(1/p_t - 1)x_2).
\]
Interestingly, we get that the arithmetic mean always increases. To see why, consider equation~\eqref{equ_quad_p_t}:
\[
(1/p -1)\frac{1}{x_1} + (p-1)\frac{1}{x_2} = S(1/t-1)
\]
Multiplying both sides by $x_1 x_2$ gives us
\[
(1/p-1)x_2 + (p-1)x_1 =S(1/t-1)x_1x_2 > 0.
\]
\end{proof}
\begin{remark}
It is worth noticing the behavior of $\Delta AM = AM(x') - AM(x)$. While $\frac{HM'}{HM} = t$,
the difference in the AM is (see the last equation in the proof above)
\[
\Delta AM = S x_1 x_2 \frac{1}{n}(1/t - 1) = \Theta \left( \frac{1}{n}(1/t - 1) \right) \ \ \ \ \ (\text{treating $\boldsymbol{x}$ as constant.})
\]
\]
For example, if $t = \Theta(1/n)$ then,
\[
\frac{HM'}{HM} = \Theta \left(\frac{1}{n} \right) \ , \ \Delta AM = \Theta(1).
\]
So, although the value of the $HM$ shrinks by a factor of $n$, the change in the value of the $AM$ is relatively small. In other words, the decrease in the value of the $HM$ outweighs the slight increase in the value of the $AM$. \end{remark}
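The construction in the proof above can be checked numerically: given $\boldsymbol{x}$ and $t<1$, compute $p$ from \eqref{equ_sol_p_t}, build $\boldsymbol{x}'$, and verify that the GM is preserved, the HM is scaled by exactly $t$, and the AM increases. A sketch assuming numpy (the concrete $\boldsymbol{x}$ and $t$ below are arbitrary):

```python
import numpy as np

def hm_gm_construction(x, t):
    # returns x' with GM(x') = GM(x) and HM(x') = t * HM(x), for 0 < t < 1
    x = np.asarray(x, dtype=float)
    S = np.sum(1.0 / x)
    y1, y2 = 1.0 / x[0], 1.0 / x[1]
    tp = (1.0 / t - 1.0) * S                      # t' in the proof
    p = ((tp + y1 + y2) + np.sqrt((tp + y1 + y2) ** 2 - 4 * y1 * y2)) / (2 * y2)
    xp = x.copy()
    xp[0], xp[1] = p * x[0], x[1] / p
    return xp

gm = lambda v: np.prod(v) ** (1.0 / len(v))        # geometric mean
hm = lambda v: len(v) / np.sum(1.0 / v)            # harmonic mean

x = np.array([1.0, 2.0, 3.0, 4.0])
t = 0.5
xp = hm_gm_construction(x, t)
```

With these values, `gm(xp)` equals `gm(x)`, `hm(xp)` equals `t * hm(x)` up to roundoff, and the mean of `xp` exceeds that of `x`, as the remark predicts.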
\subsection{The Lagrange Dual Problem of ERMP}\label{appendix_ERMP_dual_gap}
We first rewrite the ERMP problem as:
\begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & n \Tr X^{-1} - n &\\ \text{subject to}& X = \sum_{l=1}^{m} g_lb_lb_l^T + (1/n)\boldsymbol{1}\boldsymbol{1}^T\\
& \boldsymbol{1}^Tg=1 , g \geq 0 \end{array} \end{equation*}
with variables $g \in \mathbb{R}^m$ and $X = X^T \in \mathbb{R}^{n \times n}_+$. Define the dual variables $Z \in \boldsymbol{S}^{n}_+$ , $v \in \mathbb{R}$, for the equality constraints, and $\lambda \in \mathbb{R}^m$ for the non-negativity constraint $g \geq 0$. The Lagrangian is \[ L(X,g,Z,v,\lambda) = n\Tr X^{-1} - n + \Tr [Z(X - \sum_{l=1}^{m} g_lb_lb_l^T - (1/n)\boldsymbol{1}\boldsymbol{1}^T) ] + v(\boldsymbol{1}^T g-1) - \lambda^T g . \]
Then, the dual function is, \begin{align*} h(Z,v,\lambda) &= \underset{ X \succeq 0, g}{\inf} L(X,g,Z,v,\lambda) \\ &= \underset{X \succeq 0}{\inf} \ \Tr [nX^{-1} + ZX] + \underset{g}{\inf} \ \left(\sum_{l=1}^{m} g_l(v - b_l^T Z b_l - \lambda_l)\right) -n -v -(1/n)\boldsymbol{1}^T Z \boldsymbol{1} \end{align*} \begin{equation*} = \left\{ \begin{array}{ll@{}ll}
& -v -(1/n)\boldsymbol{1}^T Z \boldsymbol{1} + 2\Tr(nZ)^{1/2} -n \ \ \ \ \ &\text{if} \ \ v - b_l^T Z b_l = \lambda_l \ , \ Z \succeq 0 \\
& -\infty & \text{otherwise} \end{array} \right. \end{equation*}
The last line can be explained as follows. The first term $\Tr[nX^{-1} + ZX]$ is unbounded below unless $Z \succeq 0$: otherwise, let $u$ be an eigenvector of $Z$ corresponding to a negative eigenvalue and take $X = I + c\,uu^T$ with $c$ arbitrarily large; then $\Tr[ZX] \to -\infty$ while $\Tr[nX^{-1}]$ remains bounded. Now, if $Z \succ 0$ the unique $X$ that minimizes it is $X = (Z/n)^{-1/2}$ (this can be shown by setting the derivative w.r.t.\ $X$ to zero), hence its optimal value is, \[ \Tr[nX^{-1} + ZX] = \Tr[n(Z/n)^{1/2} + Z(Z/n)^{-1/2}] = 2\Tr(nZ)^{1/2} . \] If $Z$ is PSD but not PD, this is still the infimum, as we can take a sequence of PD matrices $Z_i$ that converges to $Z$ and use the continuity of $Z \mapsto 2\Tr(nZ)^{1/2}$.
Additionally, the second term $\sum_{l=1}^{m} g_l(v - b_l^T Z b_l - \lambda_l)$ is also unbounded from below unless $\forall l, \ v - b_l^T Z b_l - \lambda_l=0$. Otherwise, take $g=\alpha e_k$, such that $v - b_k^T Z b_k - \lambda_k = c \neq 0$, leading to $\sum_{l=1}^{m} g_l(v - b_l^T Z b_l - \lambda_l) = \alpha \cdot c$, which is unbounded from below as a function of $\alpha$.\\
The Lagrange dual problem is, \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ &h(Z,v,\lambda) \\
& \text{subject to} & b_l^T Z b_l \leq v \ , \ Z \succeq 0 \ , \ \lambda \geq 0 \end{array} \end{equation*}
Using the formula for $h(Z,v,\lambda)$ we derived above, and eliminating $\lambda$ (since it isn't taking part in the objective function), we obtain the following form: \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ & -v -(1/n)\boldsymbol{1}^T Z \boldsymbol{1} + 2\Tr(nZ)^{1/2} -n \\
& \text{subject to} & b_l^T Z b_l \leq v , Z \succeq 0 \end{array} \end{equation*}
This problem is another convex optimization problem with variables $Z , v$.
It can be verified that if $X^* = L^* + 1/n \boldsymbol{1}\boldsymbol{1}^T$ (where $L^* = \sum_{l=1}^{m} g_l^* b_l b_l^T$ is the optimal weighted Laplacian) is the optimal solution for the primal ERMP, then \[ Z^* = n(X^*)^{-2} = n(L^* + 1/n \boldsymbol{1}\boldsymbol{1}^T)^{-2} \ , \ v^* = \underset{l}{\max} \ b_l^T Z^* b_l \] is the optimal solution for the dual problem (this follows readily from our analysis). We can use it to further reduce the dual problem.
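This claim admits a numerical sanity check: on the complete graph $K_n$, where the uniform $g$ is primal-optimal by symmetry, the dual objective evaluated at $Z^* = n(X^*)^{-2}$, $v^* = \max_l b_l^T Z^* b_l$ matches the primal value $n\Tr (X^*)^{-1} - n$ exactly, confirming strong duality on this instance. A sketch assuming numpy:

```python
import numpy as np

def duality_gap_Kn(n=4):
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(edges)
    B = np.zeros((n, m))
    for l, (i, j) in enumerate(edges):
        B[i, l], B[j, l] = 1.0, -1.0
    g = np.full(m, 1.0 / m)                  # primal-optimal on K_n by symmetry
    J = np.ones((n, n)) / n
    X = B @ np.diag(g) @ B.T + J
    primal = n * np.trace(np.linalg.inv(X)) - n
    Z = n * np.linalg.inv(X) @ np.linalg.inv(X)      # candidate Z* = n (X*)^{-2}
    v = max(B[:, l] @ Z @ B[:, l] for l in range(m)) # candidate v*
    w = np.linalg.eigvalsh(n * Z)                    # 2 Tr (nZ)^{1/2} via eigenvalues
    dual = -v - (np.ones(n) @ Z @ np.ones(n)) / n \
           + 2 * np.sum(np.sqrt(np.clip(w, 0, None))) - n
    return abs(primal - dual)
```
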
For any $g$, we have $(L_g + 1/n \boldsymbol{1}\boldsymbol{1}^T)^{-2} \boldsymbol{1} = \boldsymbol{1}$ (see equation~\eqref{equ_laplac_inv}). Thus, $Z^* \boldsymbol{1} = n\boldsymbol{1}$, meaning the optimal solution satisfies $Z \boldsymbol{1}=n\boldsymbol{1}$. So, we can add this constraint without changing the optimal value (which is $\mathcal{K}^*$). The obtained dual problem is now, \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ & -v -(1/n)\boldsymbol{1}^T Z \boldsymbol{1} + 2\Tr(nZ)^{1/2} -n \\
& \text{subject to} & b_l^T Z b_l \leq v , \\
& & Z \succeq 0 \ ,\ Z \boldsymbol{1} = n \boldsymbol{1} \end{array} \end{equation*} \\ Changing variables to $Y = Z - \boldsymbol{1}\boldsymbol{1}^T$, leads to the following form, \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ & -v + 2\Tr(nY)^{1/2} \\
& \text{subject to} & b_l^T Y b_l \leq v , \\
& & Y \succeq 0 \ ,\ Y \boldsymbol{1} = 0 \end{array} \end{equation*} \\ Let $W = Y/v$ (note that from our constraints $v$ is a positive number), and our problem becomes: \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ &-v + 2(nv)^{1/2}\Tr W^{1/2} \\
& \text{subject to} & b_l^T W b_l \leq 1 , \\
& & W \succeq 0 \ ,\ W \boldsymbol{1} = 0 \end{array} \end{equation*} \\ Now, the maximum of $f(v) = -v + 2C \cdot v^{1/2}$ for a positive constant $C$ is attained at $v = C^2$, where $f(C^2) = C^2$. Taking $C = n^{1/2}\Tr W^{1/2}$, the problem becomes \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ &n(\Tr W^{1/2})^2 \\
& \text{subject to} & b_l^T W b_l \leq 1 , \\
& & W \succeq 0 \ ,\ W \boldsymbol{1} = 0 \end{array} \end{equation*} \\ Finally, changing variables to $V = W^{1/2}$, we can express the dual problem as, \begin{equation*}
\begin{array}{ll@{}ll}
& \text{maximize} \ \ &n(\Tr V)^2 \\
& \text{subject to} & b_l^T V^2 b_l = \|V b_l\|^2 \leq 1 , \\ \refstepcounter{equation}\tag{\theequation} \label{equ_ERMP_dual_prob}
& & V \succeq 0 \ ,\ V \boldsymbol{1} = 0 \end{array} \end{equation*}
\iffalse
We first prove the following two claims: \begin{claim}
For fixed $l$, let \(f_l(g) = \Tr b_l^T L_g^+ L_g^+ b_l\). Then $f_l(g)$ is convex at the domain \(\{ g | \boldsymbol{1}^Tg=1 \}\) \end{claim} \begin{proof}
We will show that $g(Y) = c^T Y^{-2} c$ is convex for $Y \in \boldsymbol{S}^n_{++} $. Then since $Y=L_g + (1/n)\boldsymbol{1}\boldsymbol{1}^T$ is affine mapping the composition $f(g) = g(L_g + (1/n)\boldsymbol{1}\boldsymbol{1}^T)$ is also convex.
\\
Let $g(Y) = c^T Y^{-2} c$. $g$ is convex iff the Hessian of $g$ is PSD. Indeed, its second-order Taylor approximation is
\begin{align*}
c^T (Y+\Delta)^{-2} c &\approx c^T Y^{-1} (I+Y^{-1/2}\Delta Y^{-1/2})^{-2}Y^{-1} c \\
&= c^T Y^{-1}(I - 2 Y^{-1/2}\Delta Y^{-1/2} + 3Y^{-1/2}\Delta Y^{-1}\Delta Y^{-1/2})Y^{-1}c \\
\end{align*}
The second order term can be expressed as
\[
3c^T Y^{-3/2}\Delta Y^{-1} \Delta Y^{-3/2} c = 3||Y^{-1/2} \Delta Y^{-3/2} c||^2
\]
Where $Y^{-1/2},Y^{-3/2}$ are well defined since $Y$ is PD. Note that this is a positive definite quadratic function of $\Delta$ (evaluate to zero iff $\Delta = 0$) and so we can conclude that the Hessian of $g$ is PSD (actually it is PD, meaning $g$ is \textbf{strictly} convex).
\end{proof}
\begin{claim}\label{clm_1}
For any $l=1...m$ we have
\[
\Tr (b_l^T L_g^+ L_g^+ b_l) \leq \frac{2}{3} + min_k \frac{2}{3}\Tr (b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l)
\] \end{claim} \begin{proof}
Denote by $f_l(g) = \Tr (b_l^T L_g^+ L_g^+ b_l)$ for some fixed $l$ (in order to simple notations we will neglect subscript $l$ for the rest of the proof).\\
$f(g)$ is convex and as such, as mentioned several times, satisfies
\begin{equation*}
\frac{\partial f(g)}{\partial g_k} \leq f(e_k) - f(g) + \nabla f(g)^T g
\end{equation*}
With standard calculations, we get
\begin{align*}
L_{e_k} = b_k b_k^T &\implies L_{e_k}^+ = (b_k b_k^T)^+ = (1/2)b_k b_k^T \\
&\implies f(e_k) = \Tr \ (b_l^T (L_{e_k}^+)^2 b_l )= (1/4)\Tr \ (b_l^T b_k b_k^T b_k b_k^T b_l) = \left\{ \begin{array}{ll}
2, & \text{if } k=l\\
0.5, & \text{if } e_k \sim e_l\\
0, & \text{if } o/w
\end{array}\right.
\end{align*}
To see why, note that
\[
b_k^T b_k = 2 \implies \Tr \ (b_l^T b_k b_k^T b_k b_k^T b_l) = \Tr \ (b_l^T b_k (b_k^T b_k) b_k^T b_l)=2\Tr \ ((b_l b_l^T) (b_k b_k^T))
\]
Now, $b_i b_i^T$ is a matrix with $1$ in two diagonal terms and $-1$ on two off-diagonal terms, based on $e_i$ vertices. If $k=l$, we can use the same identity to get $\Tr \ ((b_l b_l^T) (b_k b_k^T)) = 4$, matching the first case. If $e_k$ and $e_l$ shares a vertex, then their corresponding matrices share exactly one element, so their pairwise product is exactly $1$. Since trace of product is the sum of the pairwise elements product this concludes the second case. In any other case, the pairwise product is $0$ matching the last case.\\
Next,
\begin{align*}
\frac{\partial f}{\partial g_k} &= \Tr b_l^T \frac{\partial (L_g^+)^2}{\partial g_k} b_l \\
&= 2\Tr b_l^T L_g^+ \frac{\partial L_g^+}{\partial g_k} b_l \\
&= -2\Tr (b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l )
\end{align*}
using the chain rule and equation~\eqref{equ_der_L_g_plus}.
Next,
\[
f(\alpha g) = \Tr (b_l^T (L_{\alpha g}^+)^2 b_l) = \frac{1}{\alpha^2} f(g)
\]
\[
\implies \frac{\partial}{\partial \alpha} f(\alpha g) = \frac{-2}{\alpha^3} f(g)
\overset{\alpha=1}{\implies} \nabla f(g)^T g = -2f(g)
\]
Now we can plug in this expressions in our above criteria, to get
\[
-2\Tr( b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l) \leq f(e_k) -3f(g) = f(e_k)-3 \Tr (b_l^T L_g^+ L_g^+ b_l)
\]
Changing sides we get
\[
\Tr (b_l^T L_g^+ L_g^+ b_l) \leq \frac{1}{3}(f(e_k) + 2\Tr( b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l))
\]
Note that this inequality holds for any $k=1...m$, and in particular we can take the minimum over all $k$:
\[
\Tr (b_l^T L_g^+ L_g^+ b_l) \leq \frac{1}{3} \cdot min_k \{ f(e_k) + 2\Tr (b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l) \}
\]
Since $f(e_k) \leq 2$ we get
\[
\Tr (b_l^T L_g^+ L_g^+ b_l) \leq \frac{2}{3} \cdot min_k \{ 1 + \Tr (b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l) \}
\]
\end{proof}
Using the above two claims we can prove our lemma \begin{lemma}
$\alpha_2 \leq \frac{4}{3(n-1)^2} + \frac{2}{3} max_l \{ min_k \{ ((B^T L_{lw}^+ B)_{lk})^2 \} \} \approx max_l \{ min_k \{ ((B^T L_{lw}^+ B)_{lk})^2 \} \}$ \end{lemma}
\begin{proof}
By definition
\[
\alpha_2 = \frac{max_l \ \Tr(b_l^T L_{lw}^+ L_{lw}^+ b_l)}{\Tr L_{lw}^+} = \frac{max_l \ f_l(lw)}{\Tr L_{lw}^+}
\]
From claim~\eqref{clm_1}
\[
f_l(g) \leq \frac{2}{3} + min_k \ \frac{2}{3}\Tr(b_l^T L_g^+ L_g^+ b_k b_k^T L_g^+ b_l) \leq \frac{2}{3} + min_k \ \frac{2}{3}\Tr(b_l^T L_g^+ b_k b_k^T L_g^+ b_l) \cdot \Tr L_g^+
\]
Where that last line comes from the well known fact that for $A,B$ - PSD matrices
\( \Tr(AB) \leq \Tr A \cdot \Tr B \) \\
Using that, it's easy to see that
\begin{align*}
\alpha_2 = \frac{max_l \ f_l(lw)}{\Tr L_{lw}^+} \leq \frac{2}{3\Tr L_{lw}^+} + max_l \ min_k \ \frac{2}{3}\Tr(b_l^T L_{lw}^+ b_k b_k^T L_{lw}^+ b_l)
\end{align*}
Recall that \(\Tr L_{lw}^+ \geq \frac{(n-1)^2}{2}\) and we get
\[
\alpha_2 \leq \frac{4}{3(n-1)^2} + max_l \ min_k \ \frac{2}{3}(b_l^T L_{lw}^+ b_k )^2 = \frac{4}{3(n-1)^2} + max_l \ min_k \ \frac{2}{3}((B^T L_{lw}^+ B)_{lk})^2
\]
\end{proof} \fi
\subsection{SDP Formulation for The ERMP}\label{appendix_ermp_sdp}
As shown by \cite{saberi,boyd_convex_2004}, the ERMP can be formulated as an SDP: \begin{equation*} \begin{array}{ll@{}ll} \text{minimize} & n \Tr Y &\\ \text{subject to}& g \geq 0 ,\ \boldsymbol{1}^T g=1 &\\ & \begin{bmatrix} L_g + (1/n)\boldsymbol{1} \boldsymbol{1}^T & I \\ I & Y \end{bmatrix} \succeq 0 \end{array} \end{equation*}
Define the coefficient matrix \[ C \coloneqq n \begin{bmatrix} 0 & 0 \\ 0 & -I \end{bmatrix} \] and the constraints \[ A \coloneqq \begin{bmatrix} I & 0 \end{bmatrix}, b \coloneqq \begin{bmatrix} L_g + (1/n)\boldsymbol{1} \boldsymbol{1}^T & I \end{bmatrix} \] Then the above formulation takes the form: \begin{equation*} \begin{array}{ll@{}ll} \text{maximize} & C \bullet X &\\ \text{subject to}& g \geq 0 ,\ \boldsymbol{1}^T g=1 &\\ & A X = b &\\ & X \succeq 0, X = X^T \end{array} \end{equation*} which is the formulation of \cite{Haz08}.
\iffalse
\subsection{Analytical Approximation Ratio for The Bowtie Graph}
Here we provide an analytic computation for $\alpha(\mathcal{B}_{n,n,n})$. We have $2n$ leaf edges with congestion of $m=3n-1$. On the $i$'th edge in the path the congestion equals to $(n+i)(3n-(n+i)) = (n+i)(2n-i)$ for $i=1...n-1$. Hence, \begin{align*}
\alpha(\mathcal{B}_{n,n,n}) &= \frac{m\cdot \sum_l b_l}{(\sum_l b_l^{1/2})^2} = m \cdot \frac{2n\cdot m + \sum_{i=1}^{n-1} (n+i)(3n-(n+i))}{(2n\cdot \sqrt{m} + \sum_{i=1}^{n-1} (n+i)^{1/2}(3n-(n+i))^{1/2})^2}\\
&= m \cdot \frac{2n\cdot m + \sum_{k=n+1}^{2n-1} k(3n-k)}{(2n\cdot \sqrt{m} + \sum_{k=n+1}^{2n-1} k^{1/2}(3n-k)^{1/2})^2}\\
&=m \cdot \frac{2n\cdot m + A}{(2n\cdot \sqrt{m} + B)^2}\\ \end{align*} where \[ A = \sum_{k=n+1}^{2n-1} k(3n-k) = \sum_{k=1}^{2n-1} k(3n-k) - \sum_{k=1}^{n} k(3n-k) \approx \frac{10}{3}n^3 - \frac{7}{6}n^3 = \frac{13}{6}n^3 \] and \[ B = \sum_{k=n+1}^{2n-1} (k(3n-k))^{1/2} = \sum_{k=1}^{2n-1} (k(3n-k))^{1/2} - \sum_{k=1}^{n} (k(3n-k))^{1/2} \approx (c_2 - c_1)n^2 = cn^2 \] when we used the approximation \[
\sum_{k=1}^{b \cdot n} (k(3n-k))^{1/2} \approx cn^2 \ \ , \ \ c_b = \int_0^b (t(3-t))^{1/2} \approx \left\{ \begin{array}{ll}
1.031, & \text{for } b=1\\
2.503, & \text{for } b=2
\end{array}\right. \] Thus, we get \begin{align*}
\alpha(T_{tps}) \leq \alpha(T_{n,n,n}) &= (3n)\cdot \frac{(2n)\cdot (3n) + A}{((2n)\cdot \sqrt{3n} + B)^2} \approx (3n) \cdot \frac{6n^2 + \frac{13}{6}n^3}{(3n)(4n^2) + c^2n^4}\\
& \leq \frac{18n^3 + \frac{13}{2} n^4}{c^2n^4} \leq \frac{9n^4+\frac{13}{2} n^4}{c^2 n^4} \approx 10.5 \refstepcounter{equation}\tag{\theequation} \label{equ_bowtie_analitic_apx} \end{align*}
\begin{proof}
We begin with the first case.
Define the following matrix
\[
C = 2m\boldsymbol{I}_m - B^T L_{lw}^+ B
\]
We will prove that $C$ is PSD. A sufficient condition for an $m \times m$ (hermitian) matrix to be PSD is
\[
\Tr C \geq 0 \ \text{and, } \ \frac{(\Tr \ C)^2}{\Tr \ C^2} > m-1
\]
Exploiting the fact the the diagonal terms of \(B^T L_{lw}^+ B\) are \(R_l = n-1\) , it's easy to see that \(\Tr C = 2m^2 - (n-1)m = m(2m-n+1) \geq 0\). All that left to show is the second condition. Indeed,
\begin{align*}
\Tr C^2 &= \Tr[4m^2 \mathbf{I} - 4mB^T L_{lw}^+ B + B^T L_{lw}^+ B B^T L_{lw}^+ B ] = 4m^3 - 4m^2(n-1) + \Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \\
&= 4m^2(m-n+1) + \Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \overset{?}{<} \frac{(\Tr C)^2}{m-1} = \frac{m^2(2m-n+1)^2}{m-1}
\end{align*}
With little algebra the inequality gets the form
\[
(m-1)\Tr [B^T L_{lw}^+ B \cdot B^T L_{lw}^+ B] \overset{?}{<} 4m^3 + n^2m^2 - 6nm^2 + 5m^2
\]
Now, we know that for any matrix $A$,
\[
\Tr AA^T = \sum_{i,j}a_{ij}^2
\]
Hence,
\[
\Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] = \sum_{i,j=1}^m (B^T L_{lw}^+ B)_{ij}^2
\]
We want to bound the last expression. Since $L_{lw}^+$ is PSD matrix we know that for any $x \in \mathbb{R}^n$
\[
x^T L_{lw}^+ x \geq 0
\]
In particular, for $x= \sum_{l} b_l$ we get that
\[
(\sum_l b_l^T) L_{lw}^+ (\sum_l b_l) = \sum_l b_l^TL_{lw}^+b_l + 2\sum_{i < j}(B^T L_{lw}^+ B)_{ij} = m(n-1) + 2\sum_{i < j}(B^T L_{lw}^+ B)_{ij} \geq 0
\]
Next, from inequality of norms we know that \(||x||_1 \leq \sqrt{d}||x||_2 \) for $x \in \mathbb{R}^d$. Imply it to the above term we get that,
\[
-\frac{m}{2}(n-1) \leq \sum_{i =1 < j }^m (B^T L_{lw}^+ B)_{ij} \leq \sqrt{{m \choose 2} \sum_{i < j} (B^T L_{lw}^+ B)_{ij}^2}
\]
And finally we get, \amit{wrong:(}
\begin{align*}
& {m \choose 2} \sum_{i < j} (B^T L_{lw}^+ B)_{ij}^2 \leq \frac{m^2}{4}(n-1)^2 \implies \sum_{i < j} (B^T L_{lw}^+ B)_{ij}^2 \leq \frac{m}{2(m-1)}(n-1)^2 \\
&\implies \Tr[B^T L_{lw}^+ B B^T L_{lw}^+ B] = \sum_{i,j} (B^T L_{lw}^+ B)_{ij}^2 = m(n-1)^2 + \sum_{i \neq j} (B^T L_{lw}^+ B)_{ij}^2 \leq m(n-1)^2 + \frac{m}{2(m-1)}(n-1)^2
\end{align*}
plug it in the above condition leaves us with,
\[
(m-1)\Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \leq m(m-1)(n-1)^2 + \frac{m}{2}(n-1)^2 \overset{?}{<} 4m^3 + m^2(n^2- 6n + 5)
\]
Divide both sides by $m$ and rearrange, we get
\[
4m^2 + (n-6)(n-1)m + \frac{1}{2}(n-1)^2 \overset{?}{>} 0
\]
This is simple quadratic equation of $m$, and one can easily verified that all $m > n-1$ satisfies this condition.
So, we successfully showed that
\[
\Tr C \geq 0 \ \ \text{and} \ \ \frac{(\Tr C)^2}{\Tr C^2} > n-1
\]
concluding that $C$ is PSD matrix, and implying that \( B^T L_{lw}^+ B \preceq 2m\boldsymbol{I}_m\). \\
For the second case we repeat this process but with $C = \beta m\boldsymbol{I} - B^T L_{lw}^+ B$. Now, \(\Tr C = \beta m^2 - (n-1)m = m(\beta m-n+1) \) which is positive iff $m > (n-1)/\beta $, but we assumed $m > 2n/\beta$, so this is true.
As for the second part, we have,
\begin{align*}
\Tr C^2 &= \Tr[\beta^2 m^2 \mathbf{I} - 2\beta mB^T L_{lw}^+ B + B^T L_{lw}^+ B B^T L_{lw}^+ B ] = \beta^2 m^3 - 2\beta m^2(n-1) + \Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \\
&= \beta m^2(\beta m-2n+2) + \Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \overset{?}{<} \frac{(\Tr C)^2}{m-1} = \frac{m^2(\beta m-n+1)^2}{m-1}
\end{align*}
Rearrange and we get that,
\[
(m-1)\Tr [B^T L_{lw}^+ B \cdot B^T L_{lw}^+ B] \overset{?}{<} m^2(\beta^2 m + n^2 - (2+2\beta)n + (1+2\beta))
\]
Using the bound we got above, we have
\[
(m-1)\Tr [B^T L_{lw}^+ B B^T L_{lw}^+ B] \leq m(m-1)(n-1)^2 + \frac{m}{2}(n-1)^2 \overset{?}{<} m^2(\beta^2 m + n^2 - (2+2\beta)n + (1+2\beta))
\]
Divide both sides by $m$ and rearrange, we get
\[
\beta^2 m^2 - 2\beta(n-1)m + \frac{1}{2}(n-1)^2 \overset{?}{>} 0
\]
which again is a quadratic equation in $m$. One can check that any $m > \frac{2}{\beta} n$ satisfies this condition, and by the same argument as before, this concludes the proof.
(actually, a more tight threshold is $\frac{(2+\sqrt{2})}{2\beta}$, but since our goal is to present here a general technique and motivation, the exact threshold is not a big interest for us).
\end{proof}
\fi
\subsection{Optimality Criteria for SEV}\label{appendix_LSE_opt_crit}
In accordance with the previous analysis, we will derive the optimality criteria for $f(g) = \Tr e^{L_g^+}$, using the familiar condition, \[ g \ \text{is optimal for } f \ \iff \nabla f(g)^T(e_l - g) \geq 0 \ \text{, for} \ l=1\dots m \]
Let's start by computing the gradient of $f(g)$. Using the chain rule we get that: \[ \frac{\partial f}{\partial g_l} = \Tr[ e^{L_g^+} \frac{\partial L_g^+}{\partial g_l} ]= -\Tr [e^{L_g^+} L_g^+ b_l b_l^T L_g^+] \] where we used equation~\eqref{equ_der_L_g_plus} for the gradient of $L_g^+$.\\
Next we want to compute \( \nabla f(g)^T g\). We will use the standard "trick":
\begin{align*} &f(\alpha g) = \Tr[ e^{L_{\alpha g}^+} ] = \Tr[ e^{L_{g}^+ / \alpha} ] \\ &\implies \frac{\partial }{\partial \alpha}f(\alpha g) = \frac{\partial }{\partial \alpha} \Tr[ e^{L_{g}^+ / \alpha} ] = \Tr[ e^{L_{g}^+ / \alpha} \frac{\partial }{\partial \alpha}(L_{g}^+ / \alpha) ] = \Tr[ e^{L_{g}^+ / \alpha} L_g^+ (-1/\alpha^2) ] \end{align*} and setting $\alpha=1$, leads to \[
\nabla f(g)^T g = \frac{\partial }{\partial \alpha}f(\alpha g) \Big|_{\alpha=1} = -\Tr [ e^{L_{g}^+} L_g^+] . \]
Plugging these into the optimality condition, we get the following form: \[ -\Tr [e^{L_g^+} L_g^+ b_l b_l^T L_g^+] + \Tr[ e^{L_{g}^+} L_g^+] \geq 0 \] and using the linearity of the trace, we get equation~\eqref{equ_sev_opt_crit}: \[ \Tr [ e^{L_g^+}L_g^+ (I - b_l b_l^T L_g^+)] \geq 0 \]
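The identity $\nabla f(g)^T g = -\Tr[e^{L_g^+} L_g^+]$ used above can be confirmed with finite differences. The following sketch assumes numpy and computes the matrix exponential through an eigendecomposition (valid since $L_g^+$ is symmetric); the complete graph is an arbitrary test instance:

```python
import numpy as np

def sev_euler_gap(n=5, seed=2):
    rng = np.random.default_rng(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(edges)
    B = np.zeros((n, m))
    for l, (i, j) in enumerate(edges):
        B[i, l], B[j, l] = 1.0, -1.0
    g = rng.random(m); g /= g.sum()
    J = np.ones((n, n)) / n

    def Lplus(w):
        # pseudo-inverse via (L_w + (1/n)11^T)^{-1} = L_w^+ + (1/n)11^T
        return np.linalg.inv(B @ np.diag(w) @ B.T + J) - J

    def f(w):
        return np.sum(np.exp(np.linalg.eigvalsh(Lplus(w))))   # Tr e^{L_w^+}

    Lp = Lplus(g)
    lam, U = np.linalg.eigh(Lp)
    expLp = (U * np.exp(lam)) @ U.T                           # e^{L_g^+}
    analytic = -np.trace(expLp @ Lp)                          # -Tr[e^{L^+} L^+]
    eps, acc = 1e-6, 0.0
    for l in range(m):                                        # acc ~ grad f(g)^T g
        gp, gm_ = g.copy(), g.copy()
        gp[l] += eps
        gm_[l] -= eps
        acc += g[l] * (f(gp) - f(gm_)) / (2 * eps)
    return abs(acc - analytic)
```
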
\subsection{Figures of Upper LT}
\begin{figure}
\caption{Description of upper LT for case 1 }
\label{fig_upper_ET_case1}
\end{figure}
\begin{figure}
\caption{Description of upper LT for case 2.1 }
\label{fig_upper_ET_case2.1}
\end{figure}
\begin{figure}
\caption{Description of upper LT for case 2.2 }
\label{fig_upper_ET_case2.2}
\end{figure}
\end{document}
\end{document} | arXiv |
What does the leading turnstile operator mean?
I know that different authors use different notation to represent programming language semantics. As a matter of fact Guy Steele addresses this problem in an interesting video.
Can someone help me understand? Thanks.
On the left of the turnstile, you can find the local context, a finite list of assumptions on the types of the variables at hand.
Above, $n$ can be zero, resulting in $\vdash e:T$. This means that no assumptions on variables are made. Usually, this means that $e$ is a closed term (without any free variables) having type $T$.
Often, the rule you mention is written in a more general form, where there can be more hypotheses than the one mentioned in the question.
Here, $\Gamma$ represents any context, and $\Gamma, x:T_1$ represents its extension obtained by appending the additional hypothesis $x:T_1$ to the list $\Gamma$. It is common to require that $x$ did not appear in $\Gamma$, so that the extension does not "conflict" with a previous assumption.
As a complement to the other answers, note that there are three levels of "implication" in typing derivations. And the same remark holds with logical derivations since there is actually a correspondence between the two (called the Curry-Howard's correspondance).
The first implication is the arrow that appears in formulas, and it corresponds to logical implication in a formula (or a function type for the $\lambda$-calculus).
The second implication is materialized by the turnstile symbol, and means "assuming every formula on the left, the formula on the right holds". In particular, the rule you give tells how one should prove an implication: to prove $A \Rightarrow B$, then one must prove $B$ under the assumption that $A$ holds. In terms of the $\lambda$-calculus, to prove that $\lambda x.t$ has type $A \to B$, one must show that $t$ has type $B$, assuming that $x$ is a variable of type $A$ (see the correspondence?).
The third level of implication is materialized by the horizontal bar, and means "if every premise (elements at the top) holds, then the conclusion (the element at the bottom) holds". You can link that to the interpretation of the typing rule for $\lambda$-abstraction that you gave (see the explanation in the previous paragraph).
In type checking systems, the ($\vdash$) represents the ternary relation over type environments, expressions and types: $\vdash \texttt Env \times \texttt Exp \times \texttt Typ$.
Note that, the operator reserves its functionality regardless of where it appears, either in the premise or the conclusion of the rule.
In every situation that I've seen, $X\vdash Y$ means that there is a proof of $Y$ assuming that $X$ holds. If $X$ is empty, that means that $Y$ is a tautology: it has a proof without needing any assumptions.
Not the answer you're looking for? Browse other questions tagged type-theory denotational-semantics or ask your own question.
To what does typing correspond in a Turing Machine?
What does Godels Incompleteness theorem "true but unprovable" mean?
What does Harper mean by "class"?
What is the semantic model of types?
What does "terms evaluated in related environments yield related values" means in the context of typing judgements?
What are the concequences of the unit type and the unit value being the same?
What is the use case of multi-type-parameters generic interface?
What Happened to "Top" in Denotational Semantics?
Why does the denotational semantics for a while loop have a existence quantifier? | CommonCrawl |
\begin{document}
\newcounter{tmpcnt} \setcounter{tmpcnt}{1}
\journal{arXiv}
\begin{frontmatter}
\title{Local well-posedness of the complex Ginzburg-Landau Equation in general domains}
\author[add1]{Takanori Kuroda} \ead{1d\_est\_quod\[email protected]} \author[add2,fn2]{Mitsuharu \^Otani} \ead{[email protected]}
\fntext[fn2]{Partly supported by the Grant-in-Aid for Scientific Research, \#18K03382, the Ministry of Education, Culture, Sports, Science and Technology, Japan.}
\address[add1]{Department of Mathematics, School of Science and Engineering, \\ Waseda University, 3-4-1 Okubo Shinjuku-ku, Tokyo, 169-8555, JAPAN} \address[add2]{Department of Applied Physics, School of Science and Engineering, \\ Waseda University, 3-4-1 Okubo Shinjuku-ku, Tokyo, 169-8555, JAPAN}
\begin{abstract} In this paper, complex Ginzburg-Landau (CGL) equations with superlinear growth terms are studied. We discuss the local well-posedness in the energy space \({\rm H}^1\) for the initial-boundary value problem of the equations in general domains. The local well-posedness in \({\rm H}^1\) in bounded domains is already examined by authors (2019).
Our approach to CGL equations is based on the theory of parabolic equations governed by subdifferential operators with non-monotone perturbations. By using this method together with the Yosida approximation procedure, we discuss the existence and the uniqueness of local solutions as well as the global existence of solutions with small initial data. \end{abstract}
\begin{keyword} initial boundary value problem\sep local well-posedness\sep complex Ginzburg-Landau equation\sep unbounded general domain\sep subdifferential operator \MSC[2010] 35Q56\sep 47J35\sep 35K61 \end{keyword} \end{frontmatter}
\section{Introduction}\label{sec-1}
In this paper we are concerned with the following complex Ginzburg-Landau equation in a general domain \(\Omega \subset \mathbb{R}^N\) with smooth boundary \(\partial \Omega\):
\begin{equation} \tag*{(CGL)} \left\{ \begin{aligned}
&\partial_t u(t, x) \!-\! (\lambda \!+\! i\alpha)\Delta u \!-\! (\kappa \!+\! i\beta)|u|^{q-2}u \!-\! \gamma u \!=\! f(t, x) &&\hspace{-2mm}\mbox{in}\ (t, x) \in [0, T] \times \Omega,\\ &u(t, x) = 0&&\hspace{-2mm}\mbox{on}\ (t, x) \in [0, T] \times \partial\Omega,\\ &u(0, x) = u_0(x)&&\hspace{-2mm}\mbox{in}\ x \in \Omega, \end{aligned} \right. \end{equation} where \(\lambda, \kappa > 0\), \(\alpha, \beta, \gamma \in \mathbb{R}\) are parameters; \(i = \sqrt{-1}\) denotes the imaginary unit; \(u_0: \Omega \rightarrow \mathbb{C}\) is a given initial value; \(f: \Omega \times [0, T] \rightarrow \mathbb{C}\) (\(T > 0\)) is a given external force. The unknown function \(u:\overline{\Omega} \times [0,\infty) \rightarrow \mathbb{C}\) is complex-valued. Under a suitable assumption on \(q\), namely the Sobolev subcriticality condition, we establish the local well-posedness of (CGL) for \(u_0 \in {\rm H}^1\).
As extreme cases, (CGL) corresponds to two well-known equations: semi-linear heat equations (when \(\alpha=\beta=0\)) and nonlinear Schr\"odinger equations (when \(\lambda=\kappa=0\)). Thus for the general case, (CGL) could be regarded as an ``intermediate'' equation between these two equations.
As for the case where \(\kappa < 0\), equation (CGL) was introduced by Landau and Ginzburg in 1950 \cite{GL1} as a mathematical model for superconductivity. Subsequently, it was revealed that many nonlinear partial differential equations arising from physics can be rewritten in the form of (CGL) (\cite{N1}).
Mathematical studies for the case where \(\kappa < 0\) have been pursued extensively by several authors. The first treatment is due to Temam \cite{T1}, where weak global solutions were constructed by the Galerkin method. Ginibre-Velo \cite{GinibreJVeloG1996} showed the existence of global strong solutions of (CGL) in the whole space \(\mathbb{R}^N\) under suitable conditions on \(\lambda,\kappa,\alpha,\beta\) in terms of \(q\), with initial data taken from \(\mathrm{H}^1(\mathbb{R}^N)\cap\mathrm{L}^q(\mathbb{R}^N)\). They approximated the nonlinear terms by mollification and then established a priori estimates to obtain solutions of (CGL). Okazawa-Yokota \cite{OY1} regarded (CGL) as a parabolic equation with perturbations governed by maximal monotone operators in complex Hilbert spaces and showed the existence of global solutions together with some smoothing effects. The global existence of solutions of (CGL) in general (unbounded) domains is investigated in \cite{KOS1}, where (CGL) is regarded as a parabolic equation with monotone and non-monotone perturbations governed by subdifferential operators in a product space of real Hilbert spaces.
For the case where \(\kappa > 0\), the blow-up phenomenon of solutions was first shown by Masmoudi-Zaag \cite{MZ1} and Cazenave et al. \cite{CDW1,CDF1}. As for the local well-posedness, Shimotsuma-Yokota-Yoshii \cite{SYY1} discussed it in \({\rm L}^p\) for various kinds of domains via a suitable estimate for the heat kernel; the local well-posedness in \({\rm H}_0^1\) in bounded domains was studied in \cite{KO2} by means of the non-monotone perturbation theory of nonlinear parabolic equations governed by subdifferential operators developed in \cite{O2}.
In this paper, we show the local well-posedness in the energy space \({\rm H}_0^1\) for general domains. We first regard (CGL) as an abstract evolution equation governed by subdifferential operators in the product Hilbert space \(({\rm L}^2(\Omega))^2\) over real coefficients. Our previous work \cite{KO2} relies essentially on a compactness argument, which is guaranteed by the boundedness of the domain with the aid of the Rellich-Kondrachov theorem. The heat kernel for \(-(\lambda+i\alpha)\Delta\) is constructed for general domains and examined in \cite{SYY1} for \({\rm L}^p\) (\(1<p<\infty\)) spaces. However, estimates for the derivatives of heat kernels for this elliptic operator with complex coefficients in general domains are not available. Hence we cannot apply this approach. Here we propose a different strategy: we first introduce an auxiliary equation, namely (CGL) augmented with a dissipative term \(\varepsilon|u|^{r-2}u\) (\(r>q,\ \varepsilon>0\)) to dominate the non-monotone term \(-(\kappa+i\beta)|u|^{q-2}u\). To show the global well-posedness of this auxiliary equation, we make use of the Yosida approximation instead of the compactness argument. By establishing a priori estimates for the solutions \(u_\varepsilon\) of the auxiliary equations independent of \(\varepsilon>0\) and letting \(\varepsilon\) tend to \(0\), we show that \(u_\varepsilon\) converges to the desired solution.
In addition to the local existence result, we also give related results concerning global solutions.
This paper consists of six sections. In \S2, we fix some notations and prepare some preliminaries. Main results are stated in \S3. In \S4, we introduce auxiliary problems for (CGL) and show their global well-posedness. The local well-posedness of (CGL) is discussed in \S5 as well as the alternative concerning the asymptotic behavior of solutions, i.e., blow-up or global existence of solutions. The last section \S6 is devoted to the study of the existence of small global solutions.
\section{Notations and Preliminaries}\label{sec-2} In this section, we fix some notations in order to formulate (CGL) as an evolution equation in a real product function space based on the following identification: \[ \mathbb{C} \ni u_1 + iu_2 \mapsto (u_1, u_2)^{\rm T} \in \mathbb{R}^2. \] Then define the following: \[ \begin{aligned}
&(U \cdot V)_{\mathbb{R}^2} := u_1 v_1 + u_2 v_2,\quad |U|=|U|_{\mathbb{R}^2}, \qquad U=(u_1, u_2)^{\rm T}, \ V=(v_1, v_2)^{\rm T} \in \mathbb{R}^2,\\[1mm] &\mathbb{L}^2(\Omega) :={\rm L}^2(\Omega) \times {\rm L}^2(\Omega),\quad (U, V)_{\mathbb{L}^2} := (u_1, v_1)_{{\rm L}^2} + (u_2, v_2)_{{\rm L}^2},\\[1mm] &\qquad U=(u_1, u_2)^{\rm T},\quad V=(v_1, v_2)^{\rm T} \in \mathbb{L}^2(\Omega),\\[1mm]
&\mathbb{L}^r(\Omega) := {\rm L}^r(\Omega) \times {\rm L}^r(\Omega),\quad |U|_{\mathbb{L}^r}^r := |u_1|_{{\rm L}^r}^r + |u_2|_{{\rm L}^r}^r\quad\ U \in \mathbb{L}^r(\Omega)\ (1\leq r < \infty),\\[1mm] &\mathbb{H}^1_0(\Omega) := {\rm H}^1_0(\Omega) \times {\rm H}^1_0(\Omega),\ (U, V)_{\mathbb{H}^1_0} := (u_1, v_1)_{{\rm H}^1_0} + (u_2, v_2)_{{\rm H}^1_0}\ \ U, V \in \mathbb{H}^1_0(\Omega),\\ &\mathbb{H}^2(\Omega) := {\rm H}^2(\Omega) \times {\rm H}^2(\Omega). \end{aligned} \] We use the differential symbols to indicate differential operators which act on each component of \({\mathbb{H}^1_0}(\Omega)\)-elements: \[ \begin{aligned} & D_i = \frac{\partial}{\partial x_i}: \mathbb{H}^1_0(\Omega) \rightarrow \mathbb{L}^2(\Omega),\\ &D_i U = (D_i u_1, D_i u_2)^{\rm T} \in \mathbb{L}^2(\Omega) \ (i=1, \cdots, N),\\[2mm] & \nabla = \left(\frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_N}\right): \mathbb{H}^1_0(\Omega) \rightarrow ({\rm L}^2(\Omega))^{2 N},\\ &\nabla U=(\nabla u_1, \nabla u_2)^T \in ({\rm L}^2(\Omega))^{2 N}. \end{aligned} \]
We further define, for \(U=(u_1, u_2)^{\rm T},\ V= (v_1, v_2)^{\rm T},\ W = (w_1, w_2)^{\rm T}\), \[ \begin{aligned} &U(x) \cdot \nabla V(x) := u_1(x) \nabla v_1(x) + u_2(x) \nabla v_2(x) \in \mathbb{R}^N,\\[2mm] &( U(x) \cdot \nabla V(x) ) W(x) := ( u_1(x) ~\! w_1(x) \nabla v_1(x), u_2(x) w_2(x) \nabla v_2(x) )^{\rm T} \in \mathbb{R}^{2N},\\[2mm] &(\nabla U(x) \cdot \nabla V(x)) := \nabla u_1(x) \cdot \nabla v_1(x) + \nabla u_2(x) \cdot \nabla v_2(x) \in \mathbb{R}^1,\\[2mm]
&|\nabla U(x)| := \left(|\nabla u_1(x)|^2_{\mathbb{R}^N} + |\nabla u_2(x)|^2_{\mathbb{R}^N} \right)^{1/2}. \end{aligned} \]
In addition, \(\mathcal{H}^S\) denotes the space of functions with values in \(\mathbb{L}^2(\Omega)\) defined on \([0, S]\) (\(S > 0\)), which is a Hilbert space with the following inner product and norm: \[ \begin{aligned} &\mathcal{H}^S := {\rm L}^2(0, S; \mathbb{L}^2(\Omega)) \ni U(t), V(t),\\ &\quad\mbox{with inner product:}\ (U, V)_{\mathcal{H}^S} = \int_0^S (U, V)_{\mathbb{L}^2} dt,\\
&\quad\mbox{and norm:}\ \|U\|_{\mathcal{H}^S}^2 = (U, U)_{\mathcal{H}^S}. \end{aligned} \]
As the realization in \(\mathbb{R}^2\) of the imaginary unit \(i\) in \(\mathbb{C}\), we introduce the following matrix \(I\), which is a linear isometry on \(\mathbb{R}^2\): \[ I = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}. \] We abuse \(I\) for the realization of \(I\) in \(\mathbb{L}^2(\Omega)\), i.e., \(I U = ( - u_2, u_1 )^{\rm T}\) for all \(U = (u_1, u_2)^{\rm T} \in \mathbb{L}^2(\Omega)\).
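Under this identification, \(a+bI\) (\(a,b\in\mathbb{R}\)) realizes multiplication by the complex number \(a+ib\); indeed,
\[
(a+ib)(u_1+iu_2) = (au_1-bu_2)+i(bu_1+au_2)
\;\longmapsto\;
\begin{pmatrix} au_1-bu_2\\ bu_1+au_2 \end{pmatrix}
=(a+bI)
\begin{pmatrix} u_1\\ u_2 \end{pmatrix},
\]
and in particular \(I^2=-\mathrm{Id}\). This is how the complex coefficients \(\lambda+i\alpha\) and \(\kappa+i\beta\) in (CGL) turn into \(\lambda+\alpha I\) and \(\kappa+\beta I\) in the evolution equation (ACGL) below.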
Then \(I\) satisfies the following properties (see \cite{KOS1}):
\begin{enumerate} \item Skew-symmetric property: \begin{equation} \label{skew-symmetric_property} (IU \cdot V)_{\mathbb{R}^2} = - (U \cdot IV)_{\mathbb{R}^2}; \hspace{4mm} (IU \cdot U)_{\mathbb{R}^2} = 0 \hspace{4mm} \mbox{for each}\ U, V \in \mathbb{R}^2. \end{equation}
\item Commutative property with the differential operator \(D_i = \frac{\partial}{\partial x_i}\): \begin{equation} \label{commutative_property} I D_i = D_i I:\mathbb{H}^1_0 \rightarrow \mathbb{L}^2\ (i=1, \cdots, N). \end{equation}
\end{enumerate}
Let \({\rm H}\) be a Hilbert space and denote by \(\Phi({\rm H})\) the set of all lower semi-continuous convex functions \(\phi\) from \({\rm H}\) into \((-\infty, +\infty]\) such that the effective domain of \(\phi\) given by \({\rm D}(\phi) := \{u \in {\rm H}\mid \ \phi(u) < +\infty \}\) is not empty. Then for \(\phi \in \Phi({\rm H})\), the subdifferential of \(\phi\) at \(u \in {\rm D}(\phi)\) is defined by \[ \partial \phi(u) := \{w \in {\rm H}\mid (w, v - u)_{\rm H} \leq \phi(v)-\phi(u) \hspace{2mm} \mbox{for all}\ v \in {\rm H}\}, \] which is a possibly multivalued maximal monotone operator with domain\\ \({\rm D}(\partial \phi) = \{u \in {\rm H}\mid \partial\phi(u) \neq \emptyset\}\). However, for the discussion below, we have only to consider the case where \(\partial \phi\) is single-valued.
We define functionals \(\varphi, \ \psi_r:\mathbb{L}^2(\Omega) \rightarrow [0, +\infty]\) (\(r\geq2\)) by \begin{align} \label{varphi} &\varphi(U) := \left\{ \begin{aligned}
&\frac{1}{2} \displaystyle\int_\Omega |\nabla U(x)|^2 dx &&\mbox{if}\ U \in \mathbb{H}^1_0(\Omega),\\[3mm] &+ \infty &&\mbox{if}\ U \in \mathbb{L}^2(\Omega)\setminus\mathbb{H}^1_0(\Omega), \end{aligned} \right. \\[2mm] \label{psi} &\psi_r(U) := \left\{ \begin{aligned}
&\frac{1}{r} \displaystyle\int_\Omega |U(x)|_{\mathbb{R}^2}^r dx &&\mbox{if}\ U \in \mathbb{L}^r(\Omega) \cap \mathbb{L}^2(\Omega),\\[3mm] &+\infty &&\mbox{if}\ U \in \mathbb{L}^2(\Omega)\setminus\mathbb{L}^r(\Omega). \end{aligned} \right. \end{align} Then it is easy to see that \(\varphi, \psi_r \in \Phi(\mathbb{L}^2(\Omega))\) and their subdifferentials are given by \begin{align} \label{delvaphi} &\begin{aligned}[t] &\partial \varphi(U)=-\Delta U\ \mbox{with} \ {\rm D}( \partial \varphi) = \mathbb{H}^1_0(\Omega)\cap\mathbb{H}^2(\Omega),\\[2mm] \end{aligned}\\ \label{delpsi}
&\partial \psi_r(U) = |U|_{\mathbb{R}^2}^{r-2}U=|U|^{r-2}U\ {\rm with} \ {\rm D}( \partial \psi_r) = \mathbb{L}^{2(r-1)}(\Omega) \cap \mathbb{L}^2(\Omega). \end{align} Furthermore for any $\mu>0$, we can define the Yosida approximations
\(\partial \varphi_\mu,\ \partial \psi_{r,\mu}\) of \(\partial \varphi,\ \partial \psi_r\) by \begin{align} \label{Yosida:varphi} &\partial \varphi_\mu(U) := \frac{1}{\mu}(U - J_\mu^{\partial \varphi}U) = \partial \varphi(J_\mu^{\partial \varphi} U), \quad J_\mu^{\partial \varphi} : = ( 1 + \mu \partial \varphi)^{-1}, \\[2mm] \label{Yosida:psi} &\partial \psi_{r,\mu}(U) := \frac{1}{\mu} (U - J_\mu^{\partial \psi_r} U) = \partial \psi_r( J_\mu^{\partial \psi_r} U ), \quad J_\mu^{\partial \psi_r} : = ( 1 + \mu \partial \psi_r)^{-1}. \end{align} The second identity holds since \(\partial\varphi\) and \(\partial\psi_r\) are single-valued.
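To make the Yosida approximation concrete, the following one-dimensional numerical sketch (an illustration of ours, not part of the analysis; the bisection solver and all parameter values are arbitrary choices) computes \(J_\mu^{\partial \psi_r}\) and \(\partial\psi_{r,\mu}\) for the scalar potential \(\psi_r(u)=|u|^r/r\) on \(\mathbb{R}\):

```python
# Toy 1-D illustration of the Yosida approximation of dpsi_r(u) = |u|^{r-2} u.
# (Hypothetical numerical sketch; solver and parameters are our own choices.)

def dpsi(u, r):
    """Subdifferential dpsi_r(u) = |u|^{r-2} u (single-valued for r > 2)."""
    return abs(u) ** (r - 2) * u

def resolvent(u, mu, r):
    """J_mu(u): the unique v solving v + mu*|v|^{r-2} v = u, by bisection."""
    lo, hi = (0.0, u) if u >= 0 else (u, 0.0)
    for _ in range(200):  # g(v) = v + mu*dpsi(v, r) - u is increasing in v
        v = 0.5 * (lo + hi)
        if v + mu * dpsi(v, r) - u > 0:
            hi = v
        else:
            lo = v
    return 0.5 * (lo + hi)

def yosida(u, mu, r):
    """dpsi_{r,mu}(u) = (u - J_mu(u)) / mu."""
    return (u - resolvent(u, mu, r)) / mu
```

One can check numerically that \(\partial\psi_{r,\mu}(u)=\partial\psi_r(J_\mu^{\partial\psi_r}u)\), that \(|\partial\psi_{r,\mu}(u)|\leq|\partial\psi_r(u)|\) as in \eqref{as}, and that \(\partial\psi_{r,\mu}(u)\to\partial\psi_r(u)\) as \(\mu\downarrow 0\).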
Then it is well known that \(\partial \varphi_\mu, \ \partial \psi_{r,\mu}\) are Lipschitz continuous on \(\mathbb{L}^2(\Omega)\) and satisfy the following properties (see \cite{B1}, \cite{B2}): \begin{align} \label{asd} \psi_r(J_\mu^{\partial\psi_r}U)&\leq\psi_{r,\mu}(U) \leq \psi_r(U),\\ \label{as}
|\partial\psi_{r,\mu}(U)|_{\mathbb{L}^2}&=|\partial\psi_{r}(J_\mu^{\partial\psi_r}U)|_{\mathbb{L}^2}\leq|\partial\psi_r(U)|_{\mathbb{L}^2}\quad\forall\ U \in {\rm D}(\partial\psi_r),\ \forall \mu > 0, \end{align} where \(\psi_{r,\mu}\) is the Moreau-Yosida regularization of \(\psi_r\) given by the following formula: \[ \psi_{r,\mu}(U) = \inf_{V \in \mathbb{L}^2(\Omega)}\left\{
\frac{1}{2\mu}|U-V|_{\mathbb{L}^2}^2+\psi_r(V) \right\}
=\frac{\mu}{2}|\partial\psi_{r,\mu}(U)|_{\mathbb{L}^2}^2+\psi_r(J_\mu^{\partial\psi_r}U)\geq0. \] Moreover, since \(U = J_\mu^{\partial\psi_r}U + \mu\,\partial\psi_{r,\mu}(U)\), the subdifferential inequality for \(\psi_r\) at \(J_\mu^{\partial\psi_r}U\) together with \eqref{asd} yields \begin{align}\label{asdf} (\partial\psi_{r,\mu}(U),U)_{\mathbb{L}^2} = r\psi_r(J_\mu^{\partial\psi_r}U) + (\partial\psi_r(J_\mu^{\partial\psi_r}U),U-J_\mu^{\partial\psi_r}U)_{\mathbb{L}^2} \leq (r-1)\psi_r(J_\mu^{\partial\psi_r}U) + \psi_r(U) \leq r\psi_r(U). \end{align}
Here for later use, we prepare some fundamental properties of \(I\) in connection with \(\partial \varphi,\ \partial \psi_r,\ \partial \varphi_\mu,\ \partial \psi_{r,\mu}\).
\begin{Lem2}[(cf. \cite{KOS1} Lemma 2.1)] The following angle conditions hold: \label{Lem:2.1} \begin{align} \label{orth:IU} &(\partial \varphi(U), I U)_{\mathbb{L}^2} = 0\quad \forall U \in {\rm D}(\partial \varphi),\quad (\partial \psi_r(U), I U)_{\mathbb{L}^2} = 0\quad \forall U \in {\rm D}(\partial \psi_r), \\[2mm] \label{orth:mu:IU} &(\partial \varphi_\mu(U), I U)_{\mathbb{L}^2} = 0,\quad (\partial \psi_{r,\mu}(U), I U)_{\mathbb{L}^2} = 0 \quad \forall U \in \mathbb{L}^2(\Omega), \\[2mm]
\label{orth:Ipsi} &\begin{aligned} &(\partial \psi_q(U), I \partial \psi_r(U))_{\mathbb{L}^2} = 0,\\ &(\partial \psi_q(U), I \partial \psi_{r,\mu}(U))_{\mathbb{L}^2}=0\quad \forall U \in {\rm D}(\partial \psi_q)\cap{\rm D}(\partial \psi_r), \forall q,r \geq 2, \end{aligned}\\ \label{angle} &(\partial\varphi(U),\partial\psi_r(U))_{\mathbb{L}^2} \geq 0\quad\forall U \in{\rm D}(\partial \varphi)\cap{\rm D}(\partial \psi_r). \end{align} \end{Lem2}
\begin{proof} The first relation in \eqref{orth:Ipsi} is obvious, so we only give a proof of the second one. Let \(W=J_\mu^{\partial\psi_r}U\); then \(U = W + \mu\partial\psi_r(W)\), so that \(U(x)\) is pointwise parallel to \(W(x)\). Since \((IV \cdot V)_{\mathbb{R}^2} = 0\) for every \(V \in \mathbb{R}^2\), it holds \[ (\partial \psi_q(U), I \partial \psi_{r,\mu}(U))_{\mathbb{L}^2} =
(|U|^{q-2}(W + \mu|W|^{r-2}W),I|W|^{r-2}W)_{\mathbb{L}^2}=0. \] \end{proof}
We also recall a property of the sum of \(\partial\varphi\) and \(\partial\psi_r\). \begin{Lem2}[(cf. \cite{KOS1} Lemma 2.3)] The operator \(\lambda\partial\varphi(U) + \varepsilon\partial\psi_r(U)\) (\(\varepsilon>0\)) is maximal monotone in \(\mathbb{L}^2(\Omega)\). \label{Lem:2.3} Moreover, the following relation holds: \[ \lambda\partial\varphi(U) + \varepsilon\partial\psi_r(U) = \partial(\lambda\varphi+\varepsilon\psi_r)(U). \] \end{Lem2}
Then in view of \eqref{delvaphi}, \eqref{delpsi} and the property of \(I\), we can see that (CGL) can be reduced to the following evolution equation: \[ \tag*{(ACGL)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U) \!+\! \alpha I \partial \varphi(U) \!-\! (\kappa+ \beta I) \partial \psi_q(U) \!-\! \gamma U \!=\! F(t),\quad t \in (0,T),\\ &U(0) =U_0, \end{aligned} \right. \] where \(f(t, x) = f_1(t, x) + i f_2(t, x)\) is identified with \(F(t) = (f_1(t, \cdot), f_2(t, \cdot))^{\rm T} \in \mathbb{L}^2(\Omega)\).
We conclude this section by preparing two lemmas for later use. The first one is a pointwise estimate for the difference of nonlinear terms: \begin{Lem} Let $r \in (2,\infty)$ and put \[ d_r := \begin{cases} \frac{r-1}{2}&\mbox{if}\ 4 \leq r,\\ \frac{3}{2}&\mbox{if}\ 3 < r < 4,\\ 1&\mbox{if}\ 2 < r \leq 3. \end{cases} \] Then the following inequality holds.\label{locLip1}
\begin{equation}\label{locLip2} \begin{aligned}[t]
\left|\left(|U|^{r - 2} u_i - |V|^{r - 2} v_i\right)(x_j - y_j)\right|
\leq d_r\left(|U|^{r - 2} + |V|^{r - 2}\right) |U-V||X - Y|&\\ \forall i,j=1,2& \end{aligned} \end{equation} for all \(U=(u_1,u_2),V=(v_1,v_2),X=(x_1,x_2),Y=(y_1,y_2)\in \mathbb{R}^2\). \end{Lem} This can be proved by the same arguments as in the proof of Lemma 5 in \cite{KO2} with obvious modifications (see (6.2), (6.3) and (6.4) in \cite{KO2}).
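As a quick numerical sanity check of the constant \(d_r\) (an illustrative test of ours, not part of the proof, which is given in \cite{KO2}), one may verify \eqref{locLip2} on randomly sampled vectors:

```python
# Random sanity check of the pointwise estimate with constant d_r.
# (Illustrative only; the actual proof is Lemma 5 of [KO2].)
import math
import random

def d_r(r):
    if r >= 4:
        return (r - 1) / 2
    if r > 3:
        return 1.5
    return 1.0  # case 2 < r <= 3

def check(r, trials=1000, seed=0):
    """Return True if the inequality holds on all sampled quadruples."""
    rng = random.Random(seed)
    for _ in range(trials):
        U, V, X, Y = [[rng.uniform(-2, 2), rng.uniform(-2, 2)] for _ in range(4)]
        nU, nV = math.hypot(*U), math.hypot(*V)
        dUV = math.hypot(U[0] - V[0], U[1] - V[1])
        dXY = math.hypot(X[0] - Y[0], X[1] - Y[1])
        rhs = d_r(r) * (nU ** (r - 2) + nV ** (r - 2)) * dUV * dXY
        for i in range(2):
            for j in range(2):
                lhs = abs((nU ** (r - 2) * U[i] - nV ** (r - 2) * V[i])
                          * (X[j] - Y[j]))
                if lhs > rhs + 1e-9:
                    return False
    return True
```

Running `check` for representative exponents in each of the three regimes of \(d_r\) produces no counterexamples, as expected.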
The next lemma is concerned with the accretivity of the operator \(\partial\psi_q\) in \(\mathbb{L}^r(\Omega)\); namely, the following assertion holds: \begin{Lem} Let \(V_i = J_\mu^{\partial\psi_q}U_i\) (\(i=1,2\)).\label{ACC} Then the following inequality holds: \begin{equation}\label{ACC1}
|V_1-V_2|_{\mathbb{L}^r} \leq |U_1-U_2|_{\mathbb{L}^r}. \end{equation} \end{Lem} \begin{proof}
By the definition of resolvent operators, we have \(U_i = V_i + \mu\partial\psi_q(V_i) = V_i + \mu|V_i|^{q-2}V_i\) (\(i=1,2\)). Multiplying \(U_1-U_2\) by \(|V_1-V_2|^{r-2}(V_1-V_2)\) and applying H\"older's inequality, we get \[ \begin{aligned}
&|U_1-U_2|_{\mathbb{L}^r}|V_1-V_2|_{\mathbb{L}^r}^{r-1}\\ &\geq
(U_1-U_2, |V_1-V_2|^{r-2}(V_1-V_2))_{\mathbb{L}^2}\\ &=
|V_1-V_2|_{\mathbb{L}^r}^r +\mu
(|V_1|^{q-2}V_1-|V_2|^{q-2}V_2, |V_1-V_2|^{r-2}(V_1-V_2))_{\mathbb{L}^2}. \end{aligned} \] Here, to derive \eqref{ACC1}, it suffices to show that \[ \begin{aligned}
&(|V_1|^{q-2}V_1-|V_2|^{q-2}V_2, |V_1-V_2|^{r-2}(V_1-V_2))_{\mathbb{L}^2}\\ &= \int_\Omega
|V_1-V_2|^{r-2}
\{|V_1|^q+|V_2|^q-(|V_1|^{q-2}+|V_2|^{q-2})V_1\cdot V_2\} dx\geq0. \end{aligned} \] In fact, by the Cauchy-Schwarz inequality \(V_1\cdot V_2\leq|V_1|\,|V_2|\) and Young's inequality, we get \[
|V_1|^{q-2}V_1\cdot V_2 \leq |V_1|^{q-1}|V_2|
\leq \left(1-\frac{1}{q}\right)|V_1|^q+\frac{1}{q}|V_2|^q, \quad
|V_2|^{q-2}V_1\cdot V_2 \leq |V_1|\,|V_2|^{q-1}
\leq \left(1-\frac{1}{q}\right)|V_2|^q+\frac{1}{q}|V_1|^q. \] Adding these two inequalities shows that the integrand above is nonnegative, which completes the proof. \end{proof}
\section{Main Results}\label{sec-3} Our main results are stated as follows.
\setcounter{tmpcnt}{1} \begin{Thm}[Local well-posedness in general domains] Let \(\Omega \subset \mathbb{R}^N\) be a general domain of uniformly \({\rm C}^2\)-regular class \label{lwpgd}
, \(F \in \mathcal{H}^T\) and \(2 < q < 2^*\) (subcritical), where \begin{equation}\label{SobSub} 2^* = \begin{cases} +\infty & (N = 1, 2),\\ \frac{2N}{N - 2} & (N \geq 3). \end{cases} \end{equation} Then for all \(U_0 \in \mathbb{H}_0^1(\Omega) = {\rm D}(\varphi)\), there exist \(T_0 \in (0, T]\) and a unique function \(U(t) \in {\rm C}([0, T_0]; \mathbb{H}^1_0(\Omega))\) satisfying: \begin{enumerate} \renewcommand{(\roman{enumi})}{(\roman{enumi})} \item \(U \in {\rm W}^{1, 2}(0, T_0; \mathbb{L}^2(\Omega))\), \item \(U(t) \in {\rm D}(\partial\varphi) \subset {\rm D}(\partial\psi_q)\) for a.e. \(t \in [0, T_0]\) and satisfies (ACGL) for a.e. \(t \in [0, T_0]\), \item \(\partial\varphi(U(\cdot)), \partial\psi_q(U(\cdot)) \in \mathcal{H}^{T_0}\). \end{enumerate} \end{Thm}
Furthermore the following alternative on the maximal existence time of the solution holds:
\begin{Thm}[Alternative] Let \(T_m\) be the maximal existence time of a solution to (ACGL) satisfying the regularity (i)-(iii) given in Theorem \ref{lwpgd} for all $T_0 \in (0,T_m)$.\label{altgd} Then the following alternative on \(T_m\) holds: \begin{itemize} \item \(T_m = T\) or
\item \(T_m < T\) and \( \lim_{t \uparrow T_m}\left\{|U(t)|_{\mathbb{L}^2}^2+2\varphi(U(t))\right\} = +\infty\). \end{itemize} \end{Thm}
In order to formulate the existence of small global solutions for \(F \in \mathcal{H}^T\), let \(\tilde{F}\) be the extension of \(F\) by zero to \((0, +\infty)\). We set the following notation in order to measure the external force \(F\) in terms of \(\tilde{F}\): \[ \@ifstar\@opnorms\@opnorm{F}^2_2 :=
\sup\left\{ \int_s^{s + 1}|\tilde{F}(t)|_{\mathbb{L}^2}^2 ~\! dt \mathrel{;} 0 \leq s < +\infty \right\}. \]
\begin{Thm}[Existence of small global solutions] Let all assumptions in Theorem \ref{lwpgd} be satisfied and let \(\gamma < 0\).\label{gegd}
Then there exists a sufficiently small number \(r\) independent of \(T\) such that for all \(U_0 \in D(\varphi)\) and \(F \in {\rm L}^2(0, T; \mathbb{L}^2(\Omega))\) with \(\varphi(U_0)+\frac{1}{2}|U_0|_{\mathbb{L}^2}^2 \leq r^2\) and \(\@ifstar\@opnorms\@opnorm{F}_2 \leq r^2\), every local solution given in Theorem \ref{lwpgd} can be continued globally up to \([0, T]\). \end{Thm}
\section{Auxiliary Problems}\label{sec-4} In this section, we consider the following auxiliary equations and show their global well-posedness. \[ \tag*{(AE)\(^\varepsilon\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) \!+\! \lambda\partial\varphi(U)\!+\!\alpha I\partial\varphi(U)\!+\!\varepsilon\partial\psi_r(U)\!-\!(\kappa\!+\! \beta I) \partial \psi_q(U) \!-\! \gamma U \!=\! F(t),\ t \!\in\! (0,T),\\ &U(0) = U_0. \end{aligned} \right. \]
\setcounter{tmpcnt}{1} \begin{Prop} Let \(\Omega \subset \mathbb{R}^N\) be a general domain of uniformly \({\rm C}^2\)-regular class
, \(F \in \mathcal{H}^T\), \(2 < q < 2^*\), \(\varepsilon>0\) and \(q<r<2^*\).\label{GWP} Then for all \(U_0 \in \mathbb{H}_0^1(\Omega) = {\rm D}(\varphi)\), there exists a unique function \(U(t) \in {\rm C}([0, T]; \mathbb{H}^1_0(\Omega))\) satisfying: \begin{enumerate} \renewcommand{(\roman{enumi})}{(\roman{enumi})} \item \(U \in {\rm W}^{1, 2}(0, T; \mathbb{L}^2(\Omega))\), \item \(U(t) \in {\rm D}(\partial\varphi) \subset {\rm D}(\partial\psi_r)\) for a.e. \(t \in [0, T]\) and satisfies (AE)\(^\varepsilon\) for a.e. \(t \in [0, T]\), \item \(\partial\varphi(U(\cdot)), \partial\psi_r(U(\cdot)) \in \mathcal{H}^{T}\). \end{enumerate} \end{Prop}
\begin{proof} We consider another approximate equation: \[ \tag*{(AE)\(_\mu^\varepsilon\)} \left\{ \begin{aligned} &\frac{dU}{dt}(t) + \lambda\partial\varphi(U)+\alpha I\partial\varphi(U)+\varepsilon\partial\psi_r(U)-(\kappa+ \beta I) \partial \psi_{q,\mu}(U) - \gamma U = F(t),\\[-1mm] &\hspace{80mm} t \in (0,T),\\ &U(0) = U_0. \end{aligned} \right. \] Since \(r<2^*\) implies \(\mathrm{D}(\varphi)\subset\mathrm{D}(\psi_r)\) and \(U\mapsto-(\kappa+\beta I)\partial\psi_{q,\mu}(U)-\gamma U\) is Lipschitz continuous in \(\mathbb{L}^2(\Omega)\), by virtue of Lemma \ref{Lem:2.3}, there exists a unique global solution \(U_\mu\) of (AE)\(_\mu^\varepsilon\) satisfying (i)-(iii) of Proposition \ref{GWP} (see \cite{B1} and Proposition 5.1 of \cite{KO2}).
To see the convergence of \(U_\mu\) as \(\mu \downarrow 0\), we establish a priori estimates. For this purpose, we frequently use the following interpolation inequalities \begin{equation}\label{inter} \begin{aligned}[t]
&|U|_{\mathbb{L}^q}^q \leq |U|_{\mathbb{L}^r}^{r\theta}|U|_{\mathbb{L}^2}^{2(1-\theta)},\\
&|U|_{\mathbb{L}^{2(q-1)}}^{2(q-1)} \leq |U|_{\mathbb{L}^{2(r-1)}}^{2(r-1)\theta}|U|_{\mathbb{L}^2}^{2(1-\theta)}=|\partial\psi_r(U)|_{\mathbb{L}^2}^{2\theta}|U|_{\mathbb{L}^2}^{2(1-\theta)},\\ &\theta=\frac{q-2}{r-2}. \end{aligned} \end{equation}
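The exponent \(\theta\) can be verified directly (a routine check): writing the first inequality in \eqref{inter} as the H\"older interpolation \(|U|_{\mathbb{L}^q}\leq|U|_{\mathbb{L}^r}^{\alpha}|U|_{\mathbb{L}^2}^{1-\alpha}\) with \(\frac{1}{q}=\frac{\alpha}{r}+\frac{1-\alpha}{2}\), and matching \(q\alpha=r\theta\), gives \(\alpha=\frac{r(q-2)}{q(r-2)}\), and indeed
\[
\frac{\alpha}{r}+\frac{1-\alpha}{2}
=\frac{q-2}{q(r-2)}+\frac{r-q}{q(r-2)}
=\frac{1}{q}.
\]
The second inequality follows in the same way with \((q,r)\) replaced by \((2(q-1),2(r-1))\), which leads to the same \(\theta=\frac{q-2}{r-2}\).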
\begin{Lem} Let \(U=U_\mu\) be the solution of (AE)\(_\mu^\varepsilon\).\label{1eAEm}
Then there exists \(C_1\) depending only on \(\lambda, \kappa, \beta, \gamma, \varepsilon\), \(T\), \(|U_0|_{\mathbb{L}^2}\) and \(\|F\|_{\mathcal{H}^T}\) but not on \(\mu\) such that \begin{equation} \label{1eAEm-1}
\sup_{t \in [0, T]}|U(t)|_{\mathbb{L}^2}^2 + \int_{0}^{T}\varphi(U(t)) dt + \int_0^T\psi_r(U(t))dt \leq C_1. \end{equation} \end{Lem} \begin{proof} Multiplying (AE)\(_\mu^\varepsilon\) by \(U\) and noting orthogonalities \eqref{orth:IU}, \eqref{orth:mu:IU}, \eqref{asdf} with \(r=q\) and \eqref{inter}, we obtain \begin{equation}\label{lkjhgfds} \begin{aligned}
\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 +2\lambda\varphi(U) +r\varepsilon\psi_r(U) &\leq \kappa q\psi_q(U)
+\gamma_+|U|_{\mathbb{L}^2}^2
+|F|_{\mathbb{L}^2}|U|_{\mathbb{L}^2}\\
&\leq \kappa|U|^{r\theta}_{\mathbb{L}^r}|U|^{2(1-\theta)}_{\mathbb{L}^2}
+\gamma_+|U|_{\mathbb{L}^2}^2
+|F|_{\mathbb{L}^2}|U|_{\mathbb{L}^2}\\
&\leq \frac{\varepsilon}{2}|U|^r_{\mathbb{L}^r}
+(C_\varepsilon+\gamma_++1)|U|^2_{\mathbb{L}^2}
+\frac{1}{4}|F|_{\mathbb{L}^2}^2, \end{aligned} \end{equation} where \(C_\varepsilon = (1-\theta)\left\{\kappa\left(\frac{2\theta}{\varepsilon }\right)^\theta\right\}^{\frac{1}{1-\theta}}\) and \(\gamma_+ = \max\{0,\gamma\}\). Applying Gronwall's inequality to \eqref{lkjhgfds}, we obtain \eqref{1eAEm-1}. \end{proof}
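We remark that the Gronwall step above is explicit (a routine computation): set \(y(t):=|U(t)|_{\mathbb{L}^2}^2\). Since \(|U|_{\mathbb{L}^r}^r=r\psi_r(U)\), absorbing the term \(\frac{\varepsilon}{2}|U|_{\mathbb{L}^r}^r\) into the left-hand side of \eqref{lkjhgfds} gives
\[
\frac{1}{2}y'(t)+2\lambda\varphi(U)+\frac{r\varepsilon}{2}\psi_r(U)
\leq (C_\varepsilon+\gamma_++1)\,y(t)+\frac{1}{4}|F|_{\mathbb{L}^2}^2.
\]
Dropping the two nonnegative terms and applying Gronwall's inequality yields
\[
\sup_{t\in[0,T]}y(t)\leq e^{2(C_\varepsilon+\gamma_++1)T}\Bigl(y(0)+\frac{1}{2}\|F\|_{\mathcal{H}^T}^2\Bigr),
\]
and integrating the first inequality over \([0,T]\) then bounds \(\int_0^T\varphi(U(t))\,dt\) and \(\int_0^T\psi_r(U(t))\,dt\), which gives \eqref{1eAEm-1}.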
\begin{Lem} Let \(U=U_\mu\) be the solution of (AE)\(_\mu^\varepsilon\).\label{2eAEm}
Then there exists \(C_2\) depending only on \(\lambda, \kappa, \beta, \gamma,\varepsilon\), \(T\), \(|U_0|_{\mathbb{L}^2}, \varphi(U_0)\) and \(\|F\|_{\mathcal{H}^T}\) but not on \(\mu\) such that \begin{equation} \label{2eAEm-1} \begin{aligned} &\sup_{t \in [0, T]}\varphi(U(t)) + \sup_{t \in [0, T]}\psi_r(U(t))\\
&+ \int_{0}^{T}|\partial\varphi(U(t))|_{\mathbb{L}^2}^2dt + \int_0^T|\partial\psi_r(U(t))|_{\mathbb{L}^2}^2dt +\int_{0}^{T}\left|\frac{dU}{dt}(t)\right|_{\mathbb{L}^2}^2\!dt \leq C_2. \end{aligned} \end{equation} \end{Lem} \begin{proof} Multiplying (AE)\(_\mu^\varepsilon\) by \(\partial\varphi(U)\) and \(\partial\psi_r(U)\), by \eqref{orth:Ipsi}, \eqref{angle} and \eqref{as}, we have \begin{align} \label{1qw}
&\frac{d}{dt}\varphi(U(t))+\lambda|\partial\varphi(U)|_{\mathbb{L}^2}^2 \leq \begin{aligned}[t]
&\sqrt{\kappa^2+\beta^2}|\partial\psi_q(U)|_{\mathbb{L}^2}|\partial\varphi(U)|_{\mathbb{L}^2} +2\gamma_+\varphi(U)\\
&+|F|_{\mathbb{L}^2}|\partial\varphi(U)|_{\mathbb{L}^2}, \end{aligned}\\ \label{2qw}
&\frac{d}{dt}\psi_r(U(t))+\varepsilon|\partial\psi_r(U)|_{\mathbb{L}^2}^2 \leq \begin{aligned}[t]
&\kappa|\partial\psi_q(U)|_{\mathbb{L}^2}|\partial\psi_r(U)|_{\mathbb{L}^2} +r\gamma_+\psi_r(U)\\
&+|F|_{\mathbb{L}^2}|\partial\psi_r(U)|_{\mathbb{L}^2}. \end{aligned} \end{align}
Using Young's inequality, we obtain \begin{align} \label{1qw2}
&\frac{d}{dt}\varphi(U(t))+\frac{\lambda}{2}|\partial\varphi(U)|_{\mathbb{L}^2}^2 \leq
\frac{\kappa^2+\beta^2}{\lambda}|\partial\psi_q(U)|_{\mathbb{L}^2}^2 +2\gamma_+\varphi(U)
+\frac{1}{\lambda}|F|_{\mathbb{L}^2}^2,\\ \label{2qw2}
&\frac{d}{dt}\psi_r(U(t))+\frac{\varepsilon}{2}|\partial\psi_r(U)|_{\mathbb{L}^2}^2 \leq
\frac{\kappa^2}{\varepsilon}|\partial\psi_q(U)|_{\mathbb{L}^2}^2 +r\gamma_+\psi_r(U)
+\frac{1}{\varepsilon}|F|_{\mathbb{L}^2}^2. \end{align}
We add \eqref{1qw2} to \eqref{2qw2} and apply \eqref{inter} to obtain \begin{equation} \begin{aligned} &\frac{d}{dt}\varphi(U(t)) +\frac{d}{dt}\psi_r(U(t))
+\frac{\lambda}{2}|\partial\varphi(U)|_{\mathbb{L}^2}^2
+\frac{\varepsilon}{2}|\partial\psi_r(U)|_{\mathbb{L}^2}^2\\ &\leq
\left(\frac{\kappa^2+\beta^2}{\lambda}+\frac{\kappa^2}{\varepsilon}\right)|\partial\psi_q(U)|_{\mathbb{L}^2}^2 +2\gamma_+\varphi(U)+r\gamma_+\psi_r(U)
+\left(\frac{1}{\lambda}+\frac{1}{\varepsilon}\right)|F|_{\mathbb{L}^2}^2\\ &\leq \left(\frac{\kappa^2+\beta^2}{\lambda}+\frac{\kappa^2}{\varepsilon}\right)
|\partial\psi_r(U)|_{\mathbb{L}^2}^{2\theta}|U|_{\mathbb{L}^2}^{2(1-\theta)} +2\gamma_+\varphi(U)+r\gamma_+\psi_r(U)
+\left(\frac{1}{\lambda}+\frac{1}{\varepsilon}\right)|F|_{\mathbb{L}^2}^2\\ &\leq
\frac{\varepsilon}{4}|\partial\psi_r(U)|_{\mathbb{L}^2}^2
+\tilde{C}_\varepsilon|U|_{\mathbb{L}^2}^2 +2\gamma_+\varphi(U)+r\gamma_+\psi_r(U)
+\left(\frac{1}{\lambda}+\frac{1}{\varepsilon}\right)|F|_{\mathbb{L}^2}^2, \end{aligned} \end{equation} where \[ \tilde{C}_\varepsilon = (1-\theta) \left\{ \left(\frac{4\theta}{\varepsilon}\right)^\theta \left(\frac{\kappa^2+\beta^2}{\lambda}+\frac{\kappa^2}{\varepsilon}\right) \right\}^{\frac{1}{1-\theta}}. \]
Then \eqref{1eAEm-1} and Gronwall's inequality ensure the estimates for the first four terms in \eqref{2eAEm-1}. Hence the estimate for \(\left|\frac{dU}{dt}\right|_{\mathrm{L}^2(0,T;\mathbb{L}^2(\Omega))}\) follows from the equation. \end{proof} Noting \eqref{inter} and the above estimates, we can deduce \begin{equation}\label{2eAEm-2}
\sup_{t\in[0,T]}\psi_q(U(t))+\int_0^T|\partial\psi_q(U(t))|_{\mathbb{L}^2}^2dt\leq C_2. \end{equation}
Let \(U_\mu\) and \(U_\nu\) be solutions to \begin{align} \tag*{(AE)\(_\mu\)} &\frac{dU_\mu}{dt}(t) + (\lambda+\alpha I)\partial\varphi(U_\mu)+\varepsilon\partial\psi_r(U_\mu)-(\kappa+ \beta I) \partial \psi_{q,\mu}(U_\mu) - \gamma U_\mu = F(t),\\ \tag*{(AE)\(_\nu\)} &\frac{dU_\nu}{dt}(t) + (\lambda+\alpha I)\partial\varphi(U_\nu)+\varepsilon\partial\psi_r(U_\nu)-(\kappa+ \beta I) \partial \psi_{q,\nu}(U_\nu) - \gamma U_\nu = F(t) \end{align} with initial condition \(U_\mu(0) = U_\nu(0) = U_0\) respectively.
Multiplying (AE)\(_\mu\)\(-\)(AE)\(_\nu\) by \(U_\mu-U_\nu\) and using \eqref{orth:IU} (so that the \(\alpha I\partial\varphi\)-terms vanish) together with the monotonicity of \(\partial\psi_r\), we obtain \[ \begin{aligned}
&\frac{1}{2}\frac{d}{dt}|U_\mu-U_\nu|_{\mathbb{L}^2}^2 +2\lambda\varphi(U_\mu-U_\nu)\\
&\leq \bigl((\kappa+ \beta I)(\partial\psi_{q,\mu}(U_\mu)-\partial\psi_{q,\nu}(U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2}
+\gamma_+|U_\mu-U_\nu|_{\mathbb{L}^2}^2. \end{aligned} \] Integrating the above inequality over \([0,T]\) with respect to \(t\), we obtain \begin{equation}\label{gfd} \begin{aligned}
&\frac{1}{2}|U_\mu-U_\nu|_{\mathbb{L}^2}^2 +2\lambda \int_0^T\varphi(U_\mu-U_\nu)dt \\ &\leq \bigl((\kappa+ \beta I)(\partial\psi_{q,\mu}(U_\mu)-\partial\psi_{q,\nu}(U_\nu)), U_\mu-U_\nu\bigr)_{\mathcal{H}^T}
+\gamma_+\!\int_0^T|U_\mu-U_\nu|_{\mathbb{L}^2}^2dt. \end{aligned} \end{equation}
The first term on the right hand side of \eqref{gfd} can be decomposed in the following way: \begin{equation}\label{gfds} \begin{aligned} &\bigl((\kappa+ \beta I)(\partial\psi_{q,\mu}(U_\mu)-\partial\psi_{q,\nu}(U_\nu)), U_\mu-U_\nu\bigr)_{\mathcal{H}^T}\\ &=\int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\mu)-\partial\psi_q(J_\nu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt\\ &= \begin{aligned}[t] &\int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\mu)-\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt\\ &+ \int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)-\partial\psi_q(J_\nu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt. \end{aligned} \end{aligned} \end{equation}
Put \(\tilde{C} := \sqrt{\kappa^2+\beta^2}d_q\). Then by \eqref{locLip2} and \eqref{ACC1}, we have \begin{equation}\label{gfdsa} \begin{aligned} &\int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\mu)-\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt\\ &= \int_0^T \int_\Omega
\bigl((\kappa+ \beta I)(|J_\mu^{\partial\psi_q}U_\mu|^{q-2}J_\mu^{\partial\psi_q}U_\mu-|J_\mu^{\partial\psi_q}U_\nu|^{q-2}J_\mu^{\partial\psi_q}U_\nu)\cdot (U_\mu-U_\nu)\bigr)_{\mathbb{R}^2} dxdt\\ &\leq
\tilde{C} \int_0^T \int_\Omega \bigl(
|J_\mu^{\partial\psi_q}U_\mu|^{q-2} +
|J_\mu^{\partial\psi_q}U_\nu|^{q-2} \bigr)
|J_\mu^{\partial\psi_q}U_\mu -
J_\mu^{\partial\psi_q}U_\nu|
|U_\mu-U_\nu| dxdt\\ &\leq \tilde{C}q^{\frac{q-2}{q}} \int_0^T \bigl( \psi_q(J_\mu^{\partial\psi_q}U_\mu)^{\frac{q-2}{q}} \!+\! \psi_q(J_\mu^{\partial\psi_q}U_\nu)^{\frac{q-2}{q}} \bigr)
|J_\mu^{\partial\psi_q}U_\mu
\!-\! J_\mu^{\partial\psi_q}U_\nu|_{\mathbb{L}^q}
|U_\mu-U_\nu|_{\mathbb{L}^q} dt \\ &\leq \tilde{C}q^{\frac{q-2}{q}} \int_0^T \bigl( \psi_q(J_\mu^{\partial\psi_q}U_\mu)^{\frac{q-2}{q}} + \psi_q(J_\mu^{\partial\psi_q}U_\nu)^{\frac{q-2}{q}} \bigr)
|U_\mu-U_\nu|_{\mathbb{L}^q}^2 dt\\ &\leq \lambda \int_0^T \varphi(U_\mu-U_\nu) dt + \bar{C} \int_0^T
|U_\mu-U_\nu|_{\mathbb{L}^2}^2 dt, \end{aligned} \end{equation} where
we used estimates \eqref{2eAEm-2}, \eqref{asd}, the interpolation inequality: \begin{equation}\label{GNH1} \begin{aligned}[t]
&|U|_{\mathbb{L}^q}^q \begin{aligned}[t] &\leq C_b^{\frac{q}{2}}
|\nabla U|_{\mathbb{L}^2}^{2\cdot\frac{q-\xi}{q}}
|U|_{\mathbb{L}^2}^{2\cdot\frac{\xi}{q}}\\ &\leq
\frac{\lambda}{2}|\nabla U|_{\mathbb{L}^2}^2 +
\bar{C}|U|_{\mathbb{L}^2}^2, \end{aligned}\\ &\xi=\frac{2^*-q}{2(N-2)}\in(0,q)\ \mbox{for}\ N\geq 3\ \mbox{and}\ \xi=\frac{1}{2}\ \mbox{for}\ N=1,2, \end{aligned} \end{equation} and Young's inequality; here \(\bar{C}\) is a constant depending on \(\lambda,\tilde{C},q,C_b\) and \(\xi\).
As for the second term in \eqref{gfds}, again by \eqref{locLip2}, we get \begin{equation}\label{bvc} \begin{aligned} &\int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)-\partial\psi_q(J_\nu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt\\ &= \int_0^T \int_\Omega
\bigl((\kappa+ \beta I)(|J_\mu^{\partial\psi_q}U_\nu|^{q-2}J_\mu^{\partial\psi_q}U_\nu-|J_\nu^{\partial\psi_q}U_\nu|^{q-2}J_\nu^{\partial\psi_q}U_\nu), U_\mu-U_\nu\bigr) dxdt\\ &\leq \tilde{C} \int_0^T \int_\Omega \bigl(
|J_\mu^{\partial\psi_q}U_\nu|^{q-2} +
|J_\nu^{\partial\psi_q}U_\nu|^{q-2} \bigr)
|J_\mu^{\partial\psi_q}U_\nu -
J_\nu^{\partial\psi_q}U_\nu|
|U_\mu-U_\nu| dxdt\\ &\leq \tilde{C} \sum_{i,j=\mu,\nu} \int_0^T \int_\Omega
|J_i^{\partial\psi_q}U_\nu|^{q-2}
|J_\mu^{\partial\psi_q}U_\nu -
J_\nu^{\partial\psi_q}U_\nu|
|U_j| dxdt\\ &\leq \tilde{C} \sum_{i,j=\mu,\nu} \int_0^T
|J_i^{\partial\psi_q}U_\nu|_{\mathbb{L}^{2(q-1)}}^{q-2}
|J_\mu^{\partial\psi_q}U_\nu -
J_\nu^{\partial\psi_q}U_\nu|_{\mathbb{L}^2}
|U_j|_{\mathbb{L}^{2(q-1)}} dt, \end{aligned} \end{equation} where we note \[ \frac{q-2}{2(q-1)}+\frac{1}{2}+\frac{1}{2(q-1)}=1. \]
Let \(V_1=J_\mu^{\partial\psi_q}U_\nu\) and \(V_2=J_\nu^{\partial\psi_q}U_\nu\), then the definition of resolvent operator yields \(U_\nu=V_1+\mu\partial\psi_q(V_1)=V_2+\nu\partial\psi_q(V_2)\), that is \[ J_\mu^{\partial\psi_q}U_\nu-J_\nu^{\partial\psi_q}U_\nu = V_1-V_2 = \nu\partial\psi_q(V_2) - \mu\partial\psi_q(V_1), \] whence follows \begin{equation}\label{poiuyt} \begin{aligned}
|J_\mu^{\partial\psi_q}U_\nu -
J_\nu^{\partial\psi_q}U_\nu|_{\mathbb{L}^2} &\leq (\mu+\nu) (
|\partial\psi_q(V_2)|_{\mathbb{L}^2} +
|\partial\psi_q(V_1)|_{\mathbb{L}^2} )\\ &= (\mu+\nu) (
|\partial\psi_q(J_\nu^{\partial\psi_q}U_\nu)|_{\mathbb{L}^2} +
|\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)|_{\mathbb{L}^2} ). \end{aligned} \end{equation} Combining \eqref{bvc} with \eqref{poiuyt}, we have \begin{equation}\label{LKJ} \begin{aligned} &\int_0^T \bigl((\kappa+ \beta I)(\partial\psi_q(J_\mu^{\partial\psi_q}U_\nu)-\partial\psi_q(J_\nu^{\partial\psi_q}U_\nu)), U_\mu-U_\nu\bigr)_{\mathbb{L}^2} dt\\ &\leq (\mu+\nu)\tilde{C} \sum_{i,j,k=\mu,\nu} \int_0^T
|J_i^{\partial\psi_q}U_\nu|_{\mathbb{L}^{2(q-1)}}^{q-2}
|\partial\psi_q(J_k^{\partial\psi_q}U_\nu)|_{\mathbb{L}^2}
|U_j|_{\mathbb{L}^{2(q-1)}} dt\\ &\leq (\mu+\nu)\tilde{C} \sum_{i,j,k=\mu,\nu} \begin{aligned}[t] &\left\{ \int_0^T
|J_i^{\partial\psi_q}U_\nu|_{\mathbb{L}^{2(q-1)}}^{2(q-1)} dt \right\}^{\frac{q-2}{2(q-1)}}\\ &\times\left\{ \int_0^T
|\partial\psi_q(J_k^{\partial\psi_q}U_\nu)|_{\mathbb{L}^2}^2 dt \right\}^{\frac{1}{2}} \left\{ \int_0^T
|U_j|_{\mathbb{L}^{2(q-1)}}^{2(q-1)} dt \right\}^{\frac{1}{2(q-1)}} \end{aligned}\\ &\leq(\mu+\nu)\bar{\bar{C}}, \end{aligned} \end{equation} where \(\bar{\bar{C}} = 8\tilde{C}C_2\) and we used \eqref{2eAEm-2}, \eqref{as} and the fact that \[
|\partial\psi_q(U)|_{\mathbb{L}^2}^2 =
\bigl||U|^{q-2}U\bigr|_{\mathbb{L}^2}^2 =
|U|_{\mathbb{L}^{2(q-1)}}^{2(q-1)}. \]
Thus in view of \eqref{gfd}, \eqref{gfdsa} and \eqref{LKJ}, we obtain \[
\frac{1}{2}|U_\mu-U_\nu|_{\mathbb{L}^2}^2 +\lambda \int_0^T\varphi(U_\mu-U_\nu)dt \leq (\mu+\nu)\bar{\bar{C}}
+(\gamma_++\bar{C})\int_0^T|U_\mu-U_\nu|_{\mathbb{L}^2}^2dt. \] Therefore Gronwall's inequality yields that \(\{U_\mu\}_{\mu>0}\) forms a Cauchy net in \(\mathrm{C}([0,T];\mathbb{L}^2(\Omega))\) as \(\mu,\nu\downarrow0\).
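To spell out the Gronwall step: writing \(y(t):=\frac{1}{2}|U_\mu(t)-U_\nu(t)|_{\mathbb{L}^2}^2\), the estimate above (with \(T\) replaced by any \(t\in[0,T]\)) reads \(y(t)\leq(\mu+\nu)\bar{\bar{C}}+2(\gamma_++\bar{C})\int_0^t y(s)\,ds\), whence
\[
\frac{1}{2}|U_\mu(t)-U_\nu(t)|_{\mathbb{L}^2}^2
\leq (\mu+\nu)\bar{\bar{C}}\,e^{2(\gamma_++\bar{C})t}
\quad \forall t\in[0,T],
\]
so that \(\sup_{t\in[0,T]}|U_\mu(t)-U_\nu(t)|_{\mathbb{L}^2}\to0\) as \(\mu,\nu\downarrow0\).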
By the a priori estimates \eqref{2eAEm-1} and \eqref{2eAEm-2}, we obtain the following convergences of subsequence \(\{U_{\mu_n}\}_{n\in\mathbb{N}}\subset\{U_\mu\}_{\mu>0}\) as \(n\to\infty\): \begin{align}\label{aaaa} U_{\mu_n}&\rightarrow U&&\mbox{strongly in}\ {\rm C}([0, T]; \mathbb{L}^2(\Omega)),\\ \frac{dU_{\mu_n}}{dt}&\rightharpoonup \frac{dU}{dt}&&\mbox{weakly in}\ {\rm L}^2(0, T; \mathbb{L}^2(\Omega)),\\ \partial\varphi(U_{\mu_n})&\rightharpoonup \partial\varphi(U)&&\mbox{weakly in}\ {\rm L}^2(0, T; \mathbb{L}^2(\Omega)),\\ \partial\psi_r(U_{\mu_n})&\rightharpoonup \partial\psi_r(U)&&\mbox{weakly in}\ {\rm L}^2(0, T; \mathbb{L}^2(\Omega)),\\ \partial\psi_q(U_{\mu_n})&\rightharpoonup \partial\psi_q(U)&&\mbox{weakly in}\ {\rm L}^2(0, T; \mathbb{L}^2(\Omega)),\\ \partial\psi_{q,{\mu_n}}(U_{\mu_n})&\rightharpoonup \partial\psi_q(U)&&\mbox{weakly in}\ {\rm L}^2(0, T; \mathbb{L}^2(\Omega)), \end{align} where we used the demi-closedness of \(\frac{d}{dt}, \partial\varphi, \partial\psi_r, \partial\psi_q\). We note that \eqref{aaaa} implies \(J_{\mu_n}^{\partial\psi_q}U_{\mu_n} \to U\) strongly in \({\rm L}^2(0,T;\mathbb{L}^2(\Omega))\) (see \cite{KOS1}).
Hence \(U\) is the desired solution of (AE)\(^\varepsilon\) and the uniqueness follows from the fact that \(\{U_\mu\}_{\mu>0}\) forms a Cauchy net in \({\rm C}([0,T];\mathbb{L}^2(\Omega))\). \end{proof}
\section{Proofs of Theorems \ref{lwpgd} and \ref{altgd}} In this section we establish local (in time) a priori estimates for solutions \(\{U^\varepsilon\}_{\varepsilon>0}\) of auxiliary equations (AE)\(^\varepsilon\) in order to show the existence of the unique local solution of (ACGL). In the sequel, we assume \(2<q<r<2^*\).
\begin{Lem} Let \(U=U^\varepsilon\) be the solution of (AE)\(^\varepsilon\).\label{eAEe}
Then there exist \(C_3\) and \(T_0>0\) depending only on \(\lambda, \kappa, \beta, \gamma\), \(|U_0|_{\mathbb{L}^2}, \varphi(U_0)\) and \(\|F\|_{\mathcal{H}^T}\) but not on \(\varepsilon\) such that \begin{equation} \label{eAEe1} \begin{aligned}
&\sup_{t \in [0, T_0]}|U(t)|_{\mathbb{L}^2}^2 + \sup_{t \in [0, T_0]}\varphi(U(t))
+ \int_{0}^{T_0}|\partial\varphi(U(t))|_{\mathbb{L}^2}^2dt\leq C_3. \end{aligned} \end{equation} \end{Lem}
\begin{proof} Multiplying (AE)\(^\varepsilon\) by \(U\) and \(\partial\varphi(U)\) and using Young's inequality, \eqref{orth:IU}, \eqref{angle} and \eqref{GNH1}, we obtain the following inequalities respectively: \begin{align}\label{Poi} &\begin{aligned}[t]
\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + 2\lambda\varphi(U)
&\leq
(\gamma_++1)|U|_{\mathbb{L}^2}^2
+\frac{1}{4}|F|_{\mathbb{L}^2}^2 +q\kappa\psi_q(U)\\ &\leq
(\gamma_++1)|U|_{\mathbb{L}^2}^2
+|F|_{\mathbb{L}^2}^2 +\kappa \left\{ \lambda \varphi(U) + \bar{C}
|U|_{\mathbb{L}^2}^2 \right\}^{\frac{q}{2}}, \end{aligned}\\ &\begin{aligned}[t] \frac{d}{dt}\varphi(U) +
\frac{\lambda}{2}|\partial\varphi(U)|_{\mathbb{L}^2}^2 &\leq 2\gamma_+\varphi(U)
+\frac{1}{\lambda}|F|_{\mathbb{L}^2}^2
+\frac{\kappa^2+\beta^2}{\lambda}|\partial\psi_q(U)|_{\mathbb{L}^2}^2\\ &\leq \begin{aligned}[t] &2\gamma_+\varphi(U)
+\frac{1}{\lambda}|F|_{\mathbb{L}^2}^2\\ &+ \frac{k}{\lambda}(\kappa^2+\beta^2) \left[
|\partial\varphi(U)|_{\mathbb{L}^2}^{2-\theta}\cdot (2\varphi(U))^{\frac{2q-4+\theta}{2}} +
|U|_{\mathbb{L}^2}^{2(q-1)} \right], \end{aligned} \end{aligned} \end{align} where we used the following inequality (see \cite{OS1}): \begin{align}\label{LKJH} &
|\partial\psi_q(U)|_{\mathbb{L}^2}^2 =
|U|_{\mathbb{L}^{2(q-1)}}^{2(q-1)} \leq k (
|\Delta U|_{\mathbb{L}^2}^{2-\theta}|\nabla U|_{\mathbb{L}^2}^{2q-4+\theta}+|U|_{\mathbb{L}^2}^{2(q-1)} ), \\ \notag & \theta = \left\{ \begin{aligned} &2&&\mbox{if}\ N=1,2\ \mbox{or if}\ N\geq 3\ \mbox{and}\ q\in\left(2,\frac{2N-2}{N-2}\right], \\ &2q-N(q-2)&&\mbox{if}\ N\geq 3\ \mbox{and}\ \frac{2N-2}{N-2}<q, \end{aligned} \right. \end{align} where \(k\) is a constant depending on \(q\), \(N\) and \(\Omega\).
Hence, since \(\frac{2N-2}{N-2}<q<2^*\) implies that \(\theta\in(0,2)\), by Young's inequality, there exists \(C_0>0\) such that \begin{equation}\label{POi} \frac{d}{dt}\varphi(U) +
\frac{\lambda}{4}|\partial\varphi(U)|_{\mathbb{L}^2}^2 \leq 2\gamma_+\varphi(U)
+\frac{1}{\lambda}|F|_{\mathbb{L}^2}^2\\ +C_0 \left(
|U|_{\mathbb{L}^2}^{2(q-1)} + \varphi(U)^\rho \right) \end{equation} with \(\rho=1+\frac{2(q-2)}{\theta}>q-1\).
We add \eqref{Poi} and \eqref{POi} together to obtain \begin{equation}\label{POI}
\frac{1}{2}\frac{d}{dt}|U|_{\mathbb{L}^2}^2 + \frac{d}{dt}\varphi(U) +
\frac{\lambda}{4}|\partial\varphi(U)|_{\mathbb{L}^2}^2 \leq
\left(1+\frac{1}{\lambda}\right)|F|_{\mathbb{L}^2}^2 +l\left(
|U|_{\mathbb{L}^2}^2+2\varphi(U)\right), \end{equation} where \(l(s)=2(\gamma_++1)s + \kappa\left(\frac{\lambda}{2}+2\bar{C}\right)^\frac{q}{2}2s^{\frac{q}{2}} + C_0\{(2s)^{2(q-1)} + s^\rho\}\) is a non-decreasing function.
Here we recall the following lemma: \begin{Lem*}[(\^Otani \cite{O5}, p. 360. Lemma 2.2)] Let \(y(t)\) be a bounded measurable non-negative function on \([0, T]\) and suppose that there exist \(y_0 \geq 0\) and a monotone non-decreasing function \(m(\cdot): [0, +\infty) \to [0, +\infty)\) such that \begin{equation} \label{O5Lem22:1} y(t) \leq y_0 + \int_0^tm(y(s))ds\quad\mbox{a.e.}\ t \in (0, T). \end{equation} Then there exists a number \(S = S(y_0, m(\cdot)) \in (0, T]\) such that \begin{equation} \label{O5Lem22:2} y(t) \leq y_0 + 1\quad\mbox{a.e.}\ t \in [0, S]. \end{equation} \end{Lem*}
We apply the above lemma with \(y(t) = |U(t)|_{\mathbb{L}^2}^2 + 2\varphi(U(t))\), \(y_0=|U_0|_{\mathbb{L}^2}^2 + 2\varphi(U_0)+2\left(1+\frac{1}{\lambda}\right)\|F\|_{\mathcal{H}^T}^2\) and \(m(\cdot)=2l(\cdot)\) so that we obtain \eqref{eAEe1} with \begin{equation}\label{Szero}
T_0 = S\left(|U_0|_{\mathbb{L}^2}^2 + 2\varphi(U_0)+2\left(1+\frac{1}{\lambda}\right)\|F\|_{\mathcal{H}^T}^2, 2l(\cdot)\right). \end{equation} \end{proof}
\begin{proof}[Proof of Theorem \ref{lwpgd}] By the a priori estimate \eqref{eAEe1}, inequality \eqref{LKJH} with \(q\) replaced by \(r\), and the assumption \(2 < q < r < 2^*\), the following estimates can be derived as well: \begin{equation}\label{aaa} \begin{aligned} &\sup_{t\in[0,T_0]}\psi_q(U(t)) +\sup_{t\in[0,T_0]}\psi_r(U(t))\\
&+\int_{0}^{T_0}|\partial\psi_r(U(t))|_{\mathbb{L}^2}^2dt
+\int_{0}^{T_0}|\partial\psi_q(U(t))|_{\mathbb{L}^2}^2dt \leq C_3. \end{aligned} \end{equation}
Moreover, we have the strong convergence of \(\{U^\varepsilon\}_{\varepsilon>0}\) in \({\rm C}([0,T_0];\mathbb{L}^2(\Omega))\). Indeed, multiplying the difference of the two equations \({\rm (AE)}^\varepsilon-{\rm (AE)}^{\varepsilon'}\) by \(U^\varepsilon - U^{\varepsilon'}\), using the self-adjoint property of \(\partial\varphi\), (\ref{orth:IU}) and the same argument as in \eqref{gfdsa}, we get \begin{equation} \label{DUeUep} \begin{aligned}
&\frac{1}{2}\frac{d}{dt}|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + 2\lambda\varphi(U^\varepsilon - U^{\varepsilon'}) + (\varepsilon\partial\psi_r(U^\varepsilon) - \varepsilon'\partial\psi_r(U^{\varepsilon'}), U^\varepsilon - U^{\varepsilon'})_{\mathbb{L}^2}\\
&\leq \gamma_+|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + ((\kappa + I\beta)(\partial\psi_q(U^\varepsilon) - \partial\psi_q(U^{\varepsilon'})), U^\varepsilon - U^{\varepsilon'})_{\mathbb{L}^2}\\ &\leq
\gamma_+|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + \tilde{\tilde{C}}(\psi_q(U^\varepsilon)^{\frac{q - 2}{q}}
+ \psi_q(U^{\varepsilon'})^{\frac{q - 2}{q}})|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^q}^2 +\lambda\varphi(U^\varepsilon-U^{\varepsilon'}), \end{aligned} \end{equation} where the constant \(\tilde{\tilde{C}}=\tilde{C}q^{\frac{q-2}{q}}\) depends only on \(q, \kappa, \beta\).
We here assume \(\varepsilon < \varepsilon'\) without loss of generality. By monotonicity of \(\partial\psi_r\) and the definition of subdifferential operators, we obtain \begin{equation} \label{eeprime} \begin{aligned} &(\varepsilon\partial\psi_r(U^\varepsilon) - \varepsilon'\partial\psi_r(U^{\varepsilon'}), U^\varepsilon - U^{\varepsilon'})_{\mathbb{L}^2}\\ &= \varepsilon(\partial\psi_r(U^\varepsilon) - \partial\psi_r(U^{\varepsilon'}), U^\varepsilon - U^{\varepsilon'})_{\mathbb{L}^2} + (\varepsilon - \varepsilon')(\partial\psi_r(U^{\varepsilon'}), U^\varepsilon - U^{\varepsilon'})_{\mathbb{L}^2}\\ &\geq (\varepsilon - \varepsilon')(\psi_r(U^\varepsilon) - \psi_r(U^{\varepsilon'})). \end{aligned} \end{equation} Then in view of \eqref{aaa}, \eqref{DUeUep} and \eqref{eeprime}, we obtain \begin{equation} \label{DUeUep1} \begin{aligned}
&\frac{1}{2}\frac{d}{dt}|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + \lambda\varphi(U^\varepsilon - U^{\varepsilon'})\\ &\leq \begin{aligned}[t]
&\gamma_+|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + \tilde{\tilde{C}}(\psi_q(U^\varepsilon)^{\frac{q - 2}{q}} + \psi_q(U^{\varepsilon'})^{\frac{q - 2}{q}})|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^q}^2\\ &+ (\varepsilon' - \varepsilon)(\psi_r(U^\varepsilon) - \psi_r(U^{\varepsilon'})). \end{aligned}\\ &\leq
\gamma_+|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + 2\tilde{\tilde{C}}C_3^{\frac{q - 2}{q}}
|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^q}^2 + (\varepsilon' - \varepsilon)C_3. \end{aligned} \end{equation} Applying \eqref{GNH1} and Young's inequality to \eqref{DUeUep1}, we see that there exists a constant \(C_4\) such that \[ \begin{aligned}
&\frac{1}{2}\frac{d}{dt}|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + \lambda\varphi(U^\varepsilon - U^{\varepsilon'})\\ &\leq
\gamma_+|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + \frac{\lambda}{2}
|\nabla(U^\varepsilon - U^{\varepsilon'})|_{\mathbb{L}^2}^2
+C_4|U^\varepsilon - U^{\varepsilon'}|_{\mathbb{L}^2}^2 + (\varepsilon' - \varepsilon)C_3. \end{aligned} \]
Thus by Gronwall's inequality, we can conclude that \(\{U^\varepsilon\}_{\varepsilon > 0}\) forms a Cauchy net in \(\mathrm{C}([0,T_0];\mathbb{L}^2(\Omega))\).
By the a priori estimates \eqref{eAEe1} and \eqref{aaa}, we can extract a subsequence \(\{U^{\varepsilon_n}\}_{n\in\mathbb{N}}\subset\{U^\varepsilon\}_{\varepsilon>0}\) such that: \begin{align} U^{\varepsilon_n}&\rightarrow U&&\mbox{strongly in}\ {\rm C}([0, T_0]; \mathbb{L}^2(\Omega)),\\ \frac{dU^{\varepsilon_n}}{dt}&\rightharpoonup \frac{dU}{dt}&&\mbox{weakly in}\ {\rm L}^2(0, T_0; \mathbb{L}^2(\Omega)),\\ \partial\varphi(U^{\varepsilon_n})&\rightharpoonup \partial\varphi(U)&&\mbox{weakly in}\ {\rm L}^2(0, T_0; \mathbb{L}^2(\Omega)),\\ \varepsilon_n\partial\psi_r(U^{\varepsilon_n})&\rightarrow 0&&\mbox{strongly in}\ {\rm L}^2(0, T_0; \mathbb{L}^2(\Omega)),\\ \partial\psi_q(U^{\varepsilon_n})&\rightharpoonup \partial\psi_q(U)&&\mbox{weakly in}\ {\rm L}^2(0, T_0; \mathbb{L}^2(\Omega)), \end{align} where we used the demi-closedness of \(\frac{d}{dt}, \partial\varphi, \partial\psi_q\). Then we see that \(U\) is the desired solution of (ACGL).
The uniqueness part follows from the fact that \(\{U^\varepsilon\}_{\varepsilon>0}\) forms a Cauchy net.
\end{proof} \begin{proof}[Proof of Theorem \ref{altgd}] We shall prove Theorem \ref{altgd} by contradiction.
Let \(T_m<T\) and \(\liminf_{t\uparrow T_m}\left\{|U(t)|_{\mathbb{L}^2}^2+2\varphi(U(t))\right\}<+\infty\). Then there exist an increasing sequence \(\{t_n\}_{n\in\mathbb{N}}\) and a positive number \(K_0>0\), independent of \(n\), such that \begin{align} \label{convtn} &t_n\uparrow T_m\\ \intertext{and}
&|U(t_n)|_{\mathbb{L}^2}^2+2\varphi(U(t_n))\leq K_0\quad\mbox{for all}\ n\in\mathbb{N}. \end{align}
We repeat the same argument as in the proof of Theorem \ref{lwpgd} with \(t_0\) replaced by \(t_n\) (\(n\in\mathbb{N}\)). Then by \eqref{Szero}, \(U(t)\) can be continued up to \(t_n+T_0\) as a solution of (ACGL), where \[
0<T_0=S\left(K_0+2\left(1+\frac{1}{\lambda}\right)\|F\|_{\mathcal{H}^T}^2,2l(\cdot)\right), \] which is independent of \(n\). By \eqref{convtn}, we can take a sufficiently large number \(n\in\mathbb{N}\) such that \(T_m<t_n+T_0\), which contradicts the definition of \(T_m\). \end{proof}
\section{Proof of Theorem \ref{gegd}} First we recall the following lemma. \begin{Lem2}[(c.f. \cite{KO2} Lemma 7)]
Let \(f(t) \in {\rm L}^1(0, T)\) and \(j(t)\) be an absolutely \label{Gtyineq}
continuous positive function on \([0, S]\) with \(0 < S \leq T\) such that \begin{equation}\label{7.5}
\frac{d}{dt} j(t) + \delta j(t) \leq K |f(t)| \quad \mbox{a.e.}\ t \in [0, S], \end{equation} where \(\delta > 0\) and \(K > 0\). Then we have \begin{equation}\label{7.6}
\begin{aligned}
& j(t) \leq j(0) ~\! e^{-\delta t}
+ \frac{K}{1 - e^{-\delta}} ~\! \@ifstar\@opnorms\@opnorm{f}_1
\quad \forall t \in [0, S], \\
& \@ifstar\@opnorms\@opnorm{f}_1 = \sup\left\{\int_S^{S + 1}|\tilde{f}(t)|dt
\mathrel{;} 0 \leq S < \infty\right\}, \end{aligned} \end{equation} where \(\tilde{f}\) is the zero extension of \(f\) to \([0, \infty)\). \end{Lem2}
Next, in order to give an estimate from below for the real part of equation (CGL), we prepare the following lemma. \begin{Lem} Let all assumptions in Theorem \ref{gegd} be satisfied. \label{coergd}
Then there exist \(\varepsilon_0 > 0\) and \(\delta > 0\) such that for all \(U \in D(\varphi) = \mathbb{H}_0^1(\Omega)\) satisfying \(\frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\leq \varepsilon_0\), the following estimate holds: \begin{equation} \label{coer1}
(\lambda \partial \varphi(U) - \kappa \partial \psi_q(U) - \gamma U, U)_{\mathbb{L}^2} \geq \delta\left( \frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\right). \end{equation} \end{Lem} \begin{proof} We multiply \(\lambda \partial \varphi(U) - \kappa\partial\psi_q(U) - \gamma U\) by \(U\). Then we get by (\ref{GNH1}) \begin{equation} \label{ReUgd} \begin{aligned} &(\lambda\partial\varphi(U) - \kappa\partial\psi_q(U) - \gamma U, U)_{\mathbb{L}^2}\\
&= 2\lambda\varphi(U) - q\kappa\psi_q(U) - \gamma |U|_{\mathbb{L}^2}^2\\
&\geq 2\delta\left(\frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\right)-\kappa C_b^\frac{q}{2}(|U|_{\mathbb{L}^2}^2+2\varphi(U))^\frac{q}{2}\\ &\geq
\left(2\delta-\kappa C_b^\frac{q}{2} 2^\frac{q}{2} \left(\frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\right)^\frac{q-2}{2}\right)
\left(\frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\right),
\end{aligned} \end{equation}
where \(\delta=\min\{\lambda,|\gamma|\}>0\). Then choosing \(\varepsilon_0=\left(\frac{\delta}{\kappa C_b^\frac{q}{2}2^\frac{q}{2}}\right)^{\frac{2}{q-2}}\), we have \(\kappa C_b^\frac{q}{2}2^\frac{q}{2}\left(\frac{1}{2}|U|_{\mathbb{L}^2}^2+\varphi(U)\right)^\frac{q-2}{2}\leq \kappa C_b^\frac{q}{2}2^\frac{q}{2}\,\varepsilon_0^\frac{q-2}{2}=\delta\), so the factor in the last line of \eqref{ReUgd} is at least \(\delta\), and \eqref{coer1} follows. \end{proof}
With the aid of Lemma \ref{coergd}, we can derive the global boundedness of \(\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2+\varphi(U(t))\) for small initial data. \begin{Lem} Let all assumptions in Theorem \ref{gegd} be satisfied. \label{gbddgd}
Then there exist \(\varepsilon_1>0\) and \(L>0\) independent of \(T\) such that for any \(r\in(0,\varepsilon_1)\), if \(\frac{1}{2}|U_0|_{\mathbb{L}^2}^2 + \varphi(U_0) \leq r^2\) and \(\@ifstar\@opnorms\@opnorm{F}_2 \leq r\), then the corresponding solution \(U(t)\) on \([0, S]\), \(0 < S \leq T\) satisfies \begin{equation} \label{gbddgd1}
\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2 + \varphi(U(t)) < Lr^2\quad \forall t \in [0, S]. \end{equation} \end{Lem} \begin{proof} We fix \(L\) and \(\varepsilon_1\) by \begin{align}
L & = \left[2 + L_1^2+\frac{1}{1 - e^{-2|\gamma|}}\left\{\frac{1}{\lambda}+C_0(L_1^2+L_2)\right\}\right] \intertext{(\(C_0\) is the constant appearing in \eqref{POi}),} \notag
L_2 & = \frac{1}{\delta} \left(L_1 + \frac{1}{2}L_1^2 \right), \quad L_1
= 1 + \frac{1}{1 - e^{- \frac{\delta}{2}}}, \\[1mm] \label{7.11}
\varepsilon_1 & = \frac{\varepsilon_0}{L}
\quad \mbox{(\(\varepsilon_0\) is the number appearing in Lemma \ref{coergd}).} \end{align}
Then we claim that (\ref{gbddgd1}) holds true for all \(t \in [0, S]\). Suppose that this is not the case. Then by the continuity of \(\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2+\varphi(U(t))\), there exists \(t_1 \in (0, S)\) such that \begin{equation}\label{6.8}
\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2+\varphi(U(t)) < Lr^2 \quad \forall t \in [0, t_1)
\ \ \mbox{and} \ \ \frac{1}{2}|U(t_1)|_{\mathbb{L}^2}^2+\varphi(U(t_1)) = L r^2. \end{equation} We are going to show that this leads to a contradiction.
We first multiply (ACGL) by \(U(t)\) for \(t \in [0, t_1]\). Then since \(\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2 +\varphi(U(t)) \leq Lr^2 \leq \varepsilon_0\) for all \(t \in [0, t_1]\),
Lemma \ref{coergd} and \eqref{orth:IU} give \begin{align}\label{7.13}
& \frac{1}{2}\frac{d}{dt}|U(t)|_{\mathbb{L}^2}^2
+ \delta\left(\frac{1}{2}|U(t)|_{\mathbb{L}^2}^2+\varphi(U(t))\right) \leq |F(t)|_{\mathbb{L}^2}|U(t)|_{\mathbb{L}^2}
& \forall t \in [0, t_1]. \end{align} Hence, by (\ref{7.13}) and Lemma \ref{Gtyineq}, we get \begin{equation}\label{6.10}
\sup_{0 \leq t \leq t_1}|U(t)|_{\mathbb{L}^2}
\leq \left(1
+ \frac{1}{1 - e^{-\frac{\delta}{2}}}\right)r = L_1 r, \end{equation} where we used the fact that
\(|U(0)|_{\mathbb{L}^2} \leq r\)
and \( \@ifstar\@opnorms\@opnorm{|F(t)|_{\mathbb{L}^2}}_1 = \@ifstar\@opnorms\@opnorm{F}_2 \leq r\).
Hence the integration of (\ref{7.13}) over \((t, t+1)\) gives \begin{equation}\label{7.16}
\sup_{0 \leq t < \infty} \int_t^{t + 1} \tilde{\varphi}(U(\tau))d\tau
\leq \frac{1}{\delta} \left( L_1 r^2 + \frac{1}{2} ~\! L_1^2 ~\! r^2 \right) = L_2 ~\! r^2, \end{equation} where \(\tilde{\varphi}(U(\cdot))\) is the zero extension of \(\varphi(U(\cdot))\) to \([0, \infty)\).
By the same argument as for \eqref{POi}, we have \begin{equation}\label{7.18} \begin{aligned}
&\frac{d}{dt} \varphi(U) + 2|\gamma| \varphi(U) +\frac{\lambda}{4}|\partial\varphi(U)|_{\mathbb{L}^2}^2\\
&\leq \frac{1}{\lambda} |F|_{\mathbb{L}^2}^2 +C_0\left(\frac{1}{2}|U|_{\mathbb{L}^2}^{2(q-1)}+\varphi(U)^\rho\right)
\quad \forall t \in [0, t_1]. \end{aligned} \end{equation} Without loss of generality, we can take \(Lr \leq L\varepsilon_1 = \varepsilon_0 \leq 1\). Then since \(\rho > 1\), in view of \eqref{6.10}, \eqref{7.16} and Lemma \ref{Gtyineq}, we integrate \eqref{7.18} on \([\tilde{t}_1-1,t_1]\) with \(\tilde{t}_1=\max\{1,t_1\}\) to obtain \begin{equation} \frac{1}{2}
|U(t_1)|_{\mathbb{L}^2}^2 + \varphi(U(t_1))
\leq \left[1 + \frac{1}{2}L_1^2+ \frac{1}{1 - e^{-2|\gamma|}}\left\{\frac{1}{\lambda}+C_0(L_1^2+L_2)\right\}\right]r^2 < Lr^2, \end{equation} which together with \eqref{6.10} contradicts \eqref{6.8}. \end{proof}
\begin{proof}[Proof of Theorem \ref{gegd}]
Theorem \ref{gegd} is a direct consequence of the uniform boundedness of \(|U|_{\mathbb{L}^2}^2 + 2\varphi(U)\) based on Lemma \ref{gbddgd} and Theorem \ref{altgd}. \end{proof}
\end{document}
Annie Marie Watkins Garraway
Annie Marie Watkins Garraway (born 1940) is an American mathematician who worked in telecommunications and electronic data transmission. She is also a philanthropist.
Born: Annie Marie Watkins, 1940 (age 82–83), Parsons, Kansas
Nationality: American
Alma mater: Northwestern University, University of California at Berkeley
Occupation: Mathematician, philanthropist
Spouse(s): Michael Garraway, Ira W. Deep
Children: 3, including Levi Garraway
Parents:
• Levi Watkins (1911–1994) (father)
• Lillian Bernice Varnado (1917–2013) (mother)
Relatives: Levi Watkins (brother)
Biography
Garraway was born Annie Marie Watkins in Parsons, Kansas, the oldest daughter of Levi Watkins (1911–1994) and Lillian Bernice Varnado (1917–2013)[1][2] who met when they were both high school teachers.[2]
Annie Marie attended Booker T. Washington High School[3] and then enrolled in S. A. Owen Junior College, which her father had founded and served as the first president.[2] As a freshman in 1957, she intended to major in engineering, but a math teacher at Owen, Juanita R. Turner, suggested that Annie Marie consider a different course of study. As Garraway recalled later, Turner taught math full-time at Manassas High School while also teaching college algebra at the junior college.[3]
She recognized I had a talent for math. She had me stay after class to do more math exercises. She did this even though she had spent a full work day at Manassas. ... As a result of her working with me, I never had trouble with math.[3]
In 1959, her family moved to Montgomery, Alabama where her father was an administrator and then the sixth president of Alabama State College, a historically black college, from 1962 to 1983. That college is known today as Alabama State University (ASU).[2]
Garraway continued her studies at Northwestern University in Evanston, Illinois, where she earned a B.S. and M.S. in mathematics.[4] In 1967, she completed a Ph.D. in mathematics at the University of California at Berkeley with a dissertation titled Structure of some cocycles in analysis.[2]
She had a successful career at AT&T Labs and its spinoff company, Lucent Technologies.[5][6] According to one of her brothers, "Her pioneering mathematical algorithms and inventions for Bell Laboratories and Lucent Technologies paved the way for the modern era of telecommunications and the electronic transmission of data around the world."[2]
She married Michael Oliver Garraway in 1965.[4] In 2004, she married Ira W. Deep, Jr., professor emeritus at The Ohio State University[7] and the first chair of the university's Department of Plant Pathology.[8] Her three children together have earned three doctorates and two medical degrees.[5]
Philanthropy
Vanderbilt University
Garraway's 2017 gift to Vanderbilt was made in honor of her brother Levi Watkins Jr., who died in 2015, for "his transformative leadership and service, his historical medical inquiry and the tremendous imprint he left on students and faculty at Vanderbilt University School of Medicine (VUSM)." He was the first African-American to graduate from the university's school of medicine, as a member of the Class of 1970.[9] According to the school, "When Watkins walked through the doors of VUSM in 1966, he broke new ground by becoming the school's first African-American student. When he graduated four years later after being elected into the Alpha Omega Alpha (AOA) medical honor society, he was still the only one."[9][10]
Johns Hopkins University
In 2019, Garraway created a scholarship at Johns Hopkins, also in memory of her brother, Levi Watkins Jr. who was the first African American to become the university's chief resident in cardiac surgery. In 1979, he established Hopkins' national recruiting program for medical students of color and in 1980, he implanted the first automatic heart defibrillator at Hopkins.[5]
LeMoyne-Owen College
Garraway's 2020 gift to LeMoyne-Owen College was inspired by the movie and book, Hidden Figures, which describes the true story of three African-American female mathematicians working at NASA as human computers, who played a critical role in the 1960s U.S. space efforts. "Seeing the movie and reading the book made me think that she (Mrs. Turner) saw hidden figures in me," Garraway said.[3]
According to Garraway, after watching the film she made plans to create an endowed scholarship fund at LeMoyne-Owen College, a historically black college in Memphis, that now includes the institution formerly known as S.A. Owen Junior College.[6] The resulting Juanita R. Turner Memorial Scholarship is named for the junior college math professor who had made an extra effort in 1957 to tutor Garraway in math. Little is known of Turner except that when she was attending Grant Elementary School, she was the youngest winner of the citywide spelling contest (for African-American students) in 1927. She earned a master's of science degree in mathematics from the University of Illinois at Urbana.[6]
Publications
• Garraway, Annie Marie Watkins (1967). Structure of some cocycles in analysis. Berkeley, California: University of California. OCLC 952184806.
References
1. Obituary, The Montgomery Advertiser; Publication Date: 6 Mar 1994; Publication Place: Montgomery, Alabama, United States of America
2. Watkins, Donald V. (2019-05-11). "A Mother's Day Tribute to Lillian Bernice Varnado Watkins". donaldwatkins. Retrieved 2020-11-04.
3. Special to TSDMemphis.com (2018-07-02). "'Hidden Figures' moves LOC alum to endow math scholarship". TSDMemphis.com. Retrieved 2020-11-03.
4. "19 Jun 1966, 40 - The Montgomery Advertiser at Newspapers.com". Newspapers.com. Retrieved 2020-11-03.
5. "A Labor of Love". Giving to Johns Hopkins. Retrieved 2020-11-03.
6. Copeland, Raven. "'Hidden Figures' leads to math scholarship for LeMoyne-Owen College students". The Commercial Appeal. Retrieved 2020-11-03.
7. Ohio Department of Health; Columbus, Ohio; Ohio Marriage Index, 1970 and 1972-2007
8. "Ira W. Deep | Plant Pathology". plantpath.osu.edu. Retrieved 2020-11-03.
9. Whitney, Kathy. "Garraway creates scholarship in honor of Levi Watkins Jr". Vanderbilt University. Retrieved 2020-11-03.
10. "Levi Watkins Obituary (2015) - Baltimore Sun". www.legacy.com. Retrieved 2020-11-03.
Shape Registration in Implicit Spaces Using Information Theory and Free Form Deformations
Huang, Xiaolei and Paragios, Nikos and Metaxas, Dimitris N. (2006)
Paper summary by anmolsharma

Shape registration problems have been an active research topic in the computational geometry, computer vision, medical image analysis and pattern recognition communities. Also called shape alignment, it has extensive uses in recognition, indexing, retrieval, generation and other downstream analysis of a set of shapes. There have been a variety of works that approach this problem, with the methods varying mostly in terms of (what may be called the pillars of registration) the shape representation, the transformation and the registration criterion used. One such method is proposed by Huang et al. in this paper, which uses a novel combination of the three pillars, where an implicit shape representation is used to register an object both globally and locally. For the registration criterion, the proposed method uses a Mutual Information based criterion for its global registration phase and sum-squared differences (SSD) for its local phase.

The method starts off by defining an implicit, non-parametric shape representation which is translation, rotation and scale invariant. This is the first step of the registration pipeline, which transforms the input images into a domain where the shape is implicitly defined. The image is first partitioned into three spaces, namely $[\Omega]$ (the image domain), $[R_S]$ (points inside the shape), $[\Omega - R_S]$ (points outside the shape), and $[S]$ (points lying on the shape boundary). Using this partition, a function based upon the Lipschitz function $\phi : \Omega \to \mathbb{R}^+$ is defined as:

\begin{equation}
\phi_S(x,y) =
\begin{cases}
0 & (x,y) \in S \\
+ D((x,y), S) > 0 & (x,y) \in [R_S] \\
- D((x,y), S) < 0 & (x,y) \in [\Omega - R_S]
\end{cases}
\end{equation}

Where $D((x,y),S)$ is the distance function which gives the minimum Euclidean distance between point $(x,y)$ and the shape $S$.
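As a rough illustration (not from the paper — the boundary sampling, function names and the circle example are mine), the implicit representation $\phi_S$ can be approximated by sampling the shape boundary and checking which side of it a query point lies on:

```python
import math

def signed_distance(point, boundary_pts, inside):
    """Approximate phi_S: +min-distance for points inside the shape,
    -min-distance for points outside, and ~0 on the boundary itself."""
    d = min(math.dist(point, b) for b in boundary_pts)
    return d if inside(point) else -d

# Toy shape: the unit circle, with 360 sampled boundary points.
boundary = [(math.cos(math.radians(t)), math.sin(math.radians(t)))
            for t in range(360)]
inside = lambda p: math.hypot(p[0], p[1]) < 1.0

signed_distance((0.0, 0.0), boundary, inside)  # ≈ +1.0 (centre, inside)
signed_distance((2.0, 0.0), boundary, inside)  # ≈ -1.0 (outside)
```

Note this sketch only produces the representation; the invariance properties discussed in the paper come from the distance transform itself, not from anything extra here.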
Given the implicit representation, global shape alignment is performed using a Mutual Information (MI) objective function defined between the probability density functions of the pixels in the source and target images, sampled from the domain $\Omega$:

\begin{equation}
MI(f_{\Omega}, g_{\Omega}^{A}) = \underbrace{\mathcal{H}[p^{f_{\Omega}}(l_1)]}_{\substack{\text{Entropy of the}\\ \text{distribution representing $f_{\Omega}$}}} + \underbrace{\mathcal{H}[p^{g_{\Omega}^{A}}(l_2)]}_{\substack{\text{Entropy of the}\\ \text{distribution representing $g_{\Omega}^{A}$,} \\ \text{the source ISR}\\ \text{transformed by $A(\theta)$}}} - \underbrace{\mathcal{H}[p^{f_{\Omega}, g_{\Omega}^{A}}(l_1, l_2)]}_{\substack{\text{Entropy of the}\\ \text{joint distribution}\\\text{representing $f_{\Omega}, g_{\Omega}^{A}$}}}
\end{equation}

Following global registration, local registration is performed by embedding a control point grid using the Incremental Free Form Deformation (IFFD) method, with the sum of squared differences (SSD) as the objective function to minimize. The local registration also uses a multi-resolution framework, which performs deformations on control points of varying resolution in order to account for small local deformations in the shape. In cases where prior information about feature point correspondences between the two shapes is available, this prior knowledge can be added as a plugin term to the overall local registration objective. The method was applied to statistical modeling of anatomical structures, 3D face scans and mesh registration.
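To make the global criterion concrete, here is a minimal plug-in estimator of mutual information from empirical joint counts of two quantized images (my own sketch, not the authors' implementation, which estimates densities over the ISR domain):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in MI estimate (in nats) from two equal-length
    sequences of quantized values: H(X) + H(Y) - H(X, Y),
    computed directly as sum p(x,y) log(p(x,y) / (p(x) p(y)))."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

mutual_information([0, 0, 1, 1], [0, 0, 1, 1])  # identical: log 2 ≈ 0.693
mutual_information([0, 0, 1, 1], [0, 1, 0, 1])  # independent: 0.0
```

In the paper's setting, the global phase maximizes this quantity over the rigid/similarity transform parameters $A(\theta)$ applied to the source representation.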
www.wikidata.org
Shape Registration in Implicit Spaces Using Information Theory and Free Form Deformations
Huang, Xiaolei and Paragios, Nikos and Metaxas, Dimitris N.
IEEE Transactions on Pattern Analysis and Machine Intelligence - 2006 via Local Bibsonomy
The shape registration problem has been an active research topic in the computational geometry, computer vision, medical image analysis and pattern recognition communities. Also called shape alignment, it has extensive uses in recognition, indexing, retrieval, generation and other downstream analysis of sets of shapes. A variety of works approach this problem, differing mostly in terms of what can be called the three pillars of registration: the shape representation, the transformation model and the registration criterion. One such method is proposed by Huang et al. in this paper, which uses a novel combination of the three pillars, where an implicit shape representation is used to register an object both globally and locally. For the registration criterion, the proposed method uses a Mutual Information based criterion in its global registration phase, and the sum of squared differences (SSD) in its local phase.
The method starts by defining an implicit, non-parametric shape representation which is translation, rotation and scale invariant. This is the first step of the registration pipeline, which transforms the input images into a domain where the shape is implicitly defined. The image domain $\Omega$ is partitioned into three regions: $R_S$ (points inside the shape), $\Omega - R_S$ (points outside the shape), and $S$ (points lying on the shape boundary). Using this partition, a function based on the Lipschitz function $\phi : \Omega \to \mathbb{R}$ is defined as:
\begin{equation}
\phi_S(x,y) =
\begin{cases}
0 & (x,y) \in S \\
+ D((x,y), S) > 0 & (x,y) \in R_S \\
- D((x,y), S) < 0 & (x,y) \in \Omega - R_S
\end{cases}
\end{equation}
where $D((x,y),S)$ denotes the minimum Euclidean distance between the point $(x,y)$ and the shape boundary $S$.
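The implicit representation above can be sketched on a binary mask as follows. This is a minimal brute-force illustration, not the authors' implementation (real pipelines would use a fast Euclidean distance transform); the boundary-detection convention (inside pixels with an outside 4-neighbour) is an assumption for the sketch.

```python
import numpy as np

def signed_distance_map(mask):
    """Implicit shape representation phi_S on a binary mask.

    mask[i, j] == 1 marks pixels inside the shape R_S; the boundary S
    is taken as inside pixels with at least one outside 4-neighbour.
    Brute-force O(N * |S|); real pipelines use a fast distance transform.
    """
    h, w = mask.shape
    inside = mask.astype(bool)
    # Boundary: inside pixels adjacent to an outside pixel.
    padded = np.pad(inside, 1, constant_values=False)
    neigh_out = (~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
                 ~padded[1:-1, :-2] | ~padded[1:-1, 2:])
    boundary = inside & neigh_out
    by, bx = np.nonzero(boundary)
    ys, xs = np.mgrid[0:h, 0:w]
    # Minimum Euclidean distance from every pixel to the boundary set S.
    d = np.sqrt((ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2).min(-1)
    return np.where(inside, d, -d)  # +D inside, -D outside, 0 on S
```

The resulting map is zero on $S$, positive inside and negative outside, matching the case definition above.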
Given the implicit representation, global shape alignment is performed using the Mutual Information (MI) objective function defined between the probability density functions of the pixel values in the source and target images, sampled over the domain $\Omega$.
\begin{equation}
MI(f_{\Omega}, g_{\Omega}^{A}) = \underbrace{\mathcal{H}[p^{f_{\Omega}}(l_1)]}_{\substack{\text{Entropy of the}\\ \text{distribution representing $f_{\Omega}$}}} + \underbrace{\mathcal{H}[p^{g_{\Omega}^{A}}(l_2)]}_{\substack{\text{Entropy of the}\\ \text{distribution representing $g_{\Omega}^{A}$} \\ \text{which is the} \\ \text{transformed source ISR using $A(\theta)$}}} - \underbrace{\mathcal{H}[p^{f_{\Omega}, g_{\Omega}^{A}}(l_1, l_2)]}_{\substack{\text{Entropy of the}\\ \text{joint distribution}\\\text{representing $f_{\Omega}, g_{\Omega}^{A}$}}}
\end{equation}
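The MI criterion is the sum of the two marginal entropies minus the joint entropy. A minimal histogram-based estimator (an illustrative sketch, not the paper's density estimator, which uses smoothed densities) can be written as:

```python
import numpy as np

def mutual_information(f, g, bins=32):
    """MI(f, g) = H[p_f] + H[p_g] - H[p_fg], estimated from a joint
    histogram of two images sampled over the same domain."""
    joint, _, _ = np.histogram2d(f.ravel(), g.ravel(), bins=bins)
    p_fg = joint / joint.sum()
    p_f = p_fg.sum(axis=1)   # marginal of f
    p_g = p_fg.sum(axis=0)   # marginal of g

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(p_f) + entropy(p_g) - entropy(p_fg.ravel())
```

Maximizing this quantity over the global transformation parameters $A(\theta)$ aligns the two implicit representations; MI is maximal when the images are deterministically related and zero when they are independent.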
Following global registration, local registration is performed by embedding a control point grid using the Incremental Free Form Deformation (IFFD) method. The objective function minimized is the sum of squared differences (SSD). The local registration also uses a multi-resolution framework, which deforms control point grids of varying resolution in order to account for small local deformations in the shape. In cases where prior information is available on feature point correspondences between the two shapes, this prior knowledge can be added as a plug-in term to the overall local registration objective.
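The free-form deformation underlying IFFD interpolates a dense displacement field from the sparse control grid with cubic B-splines. The sketch below shows the standard evaluation at a single point; the grid layout and indexing convention here are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def bspline_basis(u):
    # Cubic B-spline basis functions, u in [0, 1); they sum to 1.
    return np.array([(1 - u) ** 3 / 6,
                     (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
                     (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
                     u ** 3 / 6])

def ffd_displacement(x, y, grid, spacing):
    """Displacement at (x, y) from a control-point grid (H x W x 2),
    using cubic B-spline free-form deformation interpolation."""
    i, u = int(x // spacing), (x / spacing) % 1.0
    j, v = int(y // spacing), (y / spacing) % 1.0
    bu, bv = bspline_basis(u), bspline_basis(v)
    d = np.zeros(2)
    for k in range(4):       # 4 x 4 neighbourhood of control points
        for l in range(4):
            d += bu[k] * bv[l] * grid[i + k, j + l]
    return d
```

Because the basis functions sum to one, a uniform grid of identical control-point offsets reproduces a pure translation, and moving a single control point only deforms the shape locally, which is what allows SSD-driven local refinement after the global alignment.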
The method was applied to statistical modeling of anatomical structures, 3D face scan and mesh registration.
April 2012, 32(4): 1209-1229. doi: 10.3934/dcds.2012.32.1209
Symbolic approach and induction in the Heisenberg group
Jean-Francois Bertazzon
Institut de Mathématiques de Luminy (UMR 6206), Université de la Méditerranée, Campus de Luminy, 13288 Marseille Cedex 9, France
Received: January 2010; Revised: August 2011; Published: October 2011
We associate a homomorphism in the Heisenberg group to each hyperbolic unimodular automorphism of the free group on two generators. We show that the first return maps of some flows in "good" sections are conjugate to nil-translations, which have the property of being self-induced.
Keywords: Heisenberg group, renormalization, induction.
Mathematics Subject Classification: Primary: 28D10, 37C5.
Citation: Jean-Francois Bertazzon. Symbolic approach and induction in the Heisenberg group. Discrete & Continuous Dynamical Systems - A, 2012, 32 (4) : 1209-1229. doi: 10.3934/dcds.2012.32.1209
Circulating tumour cell gene expression and chemosensitivity analyses: predictive accuracy for response to multidisciplinary treatment of patients with unresectable refractory recurrent rectal cancer or unresectable refractory colorectal cancer liver metastases
Stefano Guadagni, Francesco Masedu, Giammaria Fiorentini, Donatella Sarti, Caterina Fiorentini, Veronica Guadagni, Panagiotis Apostolou, Ioannis Papasotiriou, Panagiotis Parsonidis, Marco Valenti, Enrico Ricevuto, Gemma Bruera, Antonietta R. Farina, Andrew R. Mackay & Marco Clementi
Patients with unresectable recurrent rectal cancer (RRC) or colorectal cancer (CRC) with liver metastases, refractory to at least two lines of traditional systemic therapy, may receive third line intraarterial chemotherapy (IC) and targeted therapy (TT) using drugs selected by chemosensitivity and tumor gene expression analyses of liquid biopsy-derived circulating tumor cells (CTCs).
In this retrospective study, 36 patients with refractory unresectable RRC or refractory unresectable CRC liver metastases were submitted for IC and TT with agents selected by precision oncotherapy chemosensitivity assays performed on liquid biopsy-derived CTCs, transiently cultured in vitro, and by tumor gene expression in the same CTC population, as a ratio to tumor gene expression in peripheral mononuclear blood cells (PMBCs) from the same individual. The endpoint was to evaluate the predictive accuracy of a specific liquid biopsy precision oncotherapy CTC purification and in vitro culture methodology for a positive RECIST 1.1 response to the therapy selected.
Our analyses resulted in evaluations of 94.12% (95% CI 0.71–0.99) for sensitivity, 5.26% (95% CI 0.01–0.26) for specificity, a predictive value of 47.06% (95% CI 0.29–0.65) for a positive response, a predictive value of 50% (95% CI 0.01–0.98) for a negative response, with an overall calculated predictive accuracy of 47.22% (95% CI 0.30–0.64).
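The reported percentages are consistent with a 2×2 confusion matrix of 16 true positives, 18 false positives, 1 false negative and 1 true negative over the 36 patients. This matrix is inferred from the percentages, not stated explicitly in the text; a minimal arithmetic check:

```python
# Hypothetical confusion matrix inferred from the reported percentages
# (16 TP, 18 FP, 1 FN, 1 TN over 36 patients); not stated explicitly above.
tp, fp, fn, tn = 16, 18, 1, 1

sensitivity = tp / (tp + fn)                 # 16/17 ~ 94.12%
specificity = tn / (tn + fp)                 # 1/19  ~ 5.26%
ppv = tp / (tp + fp)                         # 16/34 ~ 47.06%
npv = tn / (tn + fn)                         # 1/2   = 50%
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 17/36 ~ 47.22%
```

Each computed value reproduces the corresponding figure quoted in the results.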
This is the first reported estimation of predictive accuracy derived from combining chemosensitivity and tumor gene expression analyses on liquid biopsy-derived CTCs transiently cultured in vitro, which, despite limitations, represents a baseline and benchmark that we envisage will be improved upon by methodological and technological advances and future clinical trials.
Clinical and biological prognostic markers identify patients with differing risks of a specific outcome regardless of treatment, such as progression or death [1], and can select individuals at high risk of relapse, as potential candidates for alternative treatments. In contrast, predictive markers are associated with response (benefit) or lack of response to a particular therapy relative to other available therapies and identify patients more likely to benefit from a particular treatment [2]. Within this context, tissue and liquid biopsy precision oncotherapy holds great promise in improving therapeutic outcomes, as both represent important methods for detecting prognostic and predictive markers, with additional potential to identify therapeutic targets. This is particularly true for less invasive, serially repeatable liquid biopsies, which are potentially more relevant to disseminated disease.
Tissue biopsies, in accordance with American Society of Clinical Oncology (ASCO) and European Society for Medical Oncology (ESMO) recommendations [3], can be used to detect oncogenic mutations, oncogene overexpression and to evaluate chemosensitivity [4]. Liquid biopsies (blood, ascites, urine, pleural effusion, or cerebrospinal fluid) can be used to do the same but also permit purification and analysis of tumour-derived components including circulating tumor cells (CTCs), exosomes, circulating tumor DNAs (ctDNA), microRNAs (miRNAs), long non-coding RNAs (lncRNAs) and proteins [5, 6], with enhanced potential for identifying prognostic markers, predictive markers and therapeutic targets.
When compared to tumor-derived non-cellular components, intact CTCs not only provide a valuable source of quality tumour cell-derived nucleic acids and proteins for immunocytological, fluorescence in situ hybridization (FISH), DNA sequence, PCR, RT-PCR and multiplex RNA analysis but also the opportunity for transient in vitro culture, enabling chemosensitivity and tumor gene expression analysis to be performed on the same CTC population. Up to 2010, CTC detection and purification methodologies were based upon either biophysical (i.e. Biocoll/Ficoll®, Oncoquick® or Screen-Cell® Cyto IS - Sarcelles, France) or antigenic (i.e. CellSearch® system - Menarini Silicon Biosystems) properties. Following 2010, progress in microfluidic-based technologies, devices and principles have resulted in the use of novel antibody-based marker-dependent platforms, physical characteristic-based platforms, and secretome and transcriptome-based platforms, the latter of which, however, involves CTC lysis, eliminating the possibility of CTC culture for drug screening.
The FDA has approved liquid biopsy [7] and has validated the CellSearch™ system (Veridex LCC, Raritan, NJ, USA) as a diagnostic tool for CTC identification and enumeration in a variety of tumours, including colorectal cancer (CRC), and this system currently holds a dominant position amongst competitors [8]. It is employed to detect CD45 (cluster of differentiation 45)-negative, EpCAM (epithelial cell adhesion molecule)- and cytokeratin CK8/18/19-positive CTCs, using anti-human CK8/18/19/20 antibodies conjugated with ferrous beads for magnetic purification [8]. Cytokeratins, and particularly CK20, are well-established markers of epithelial cell and metastatic CRC CTCs [9] and are also employed for semi-quantitative qRT-PCR analyses [10], and the majority of studies using this system report CTC enumeration as a prognostic marker for progression free survival (PFS) and overall survival (OS) [11]. Drawbacks with this approach, however, include potential CTC underestimation due to epithelial to mesenchymal transition (EMT) and loss of epithelial marker expression, technical bottlenecks and cost, which reduce routine use [8]. Alternative isolation and analytical methods and techniques have also been reported that overcome some of these problems (i.e. leukapheresis and Hydro-Seq technology) [8], and flow cytometry has also been reported to detect CTCs with high sensitivity and specificity [12].
To date, few papers have investigated the therapeutic predictive potential of CTC isolation and enumeration in CRC patients [13,14,15,16] or the promise of combining molecular profiling with chemosensitivity analysis, considered for some time to be the best way to improve disease characterisation and selection of individualised therapy [17, 18]. Within this context, patients presenting with unresectable recurrent rectal cancer (RRC) or unresectable CRC liver metastases, refractory to at least two lines of standard systemic chemotherapy, targeted and radiation therapy, represent good potential candidates for determining the predictive accuracy of combined in vitro (ex-vivo) CTC chemosensitivity and gene expression analysis.
CRC is currently the third most common cancer type, with upwards of ≈ 1.8 million new cases and ≈ 9% of all cancer-related deaths reported annually worldwide [19]. Treatment strategies depend upon disease stage, patient condition, molecular mechanisms, economic parameters, health care systems and unexpected factors, such as the current Sars-Covid-2 pandemic. In highly developed countries, 5-year survival rates for localized CRC of ≈ 90% plummet to ≈ 14% for metastatic or relapsed disease. For stage IV metastatic or recurrent CRC, devoid of known cancer driving mutations or targetable biomarkers (≈ 85% of patients), current systemic therapeutic regimes include: FOLFOX (leucovorin plus 5-fluorouracil plus oxaliplatin), CAPEOX (capecitabine plus oxaliplatin), FOLFIRI (leucovorin plus 5-fluorouracil plus irinotecan), FOLFOXIRI (leucovorin plus 5-fluorouracil plus oxaliplatin plus irinotecan), typically combined with bevacizumab or cetuximab or aflibercept and recently new oral drugs such as regorafenib and trifluridine/tipiracil [18, 20,21,22,23,24,25,26,27,28]. Patients with dominant liver metastatic disease or limited extrahepatic disease, may receive liver-directed intraarterial therapies such as hepatic arterial chemotherapy infusion, chemoembolization [20] and radioembolization to improve local tumor response and to reduce systemic side effects [18]. For all of these patients, liquid biopsy-derived CTCs deserve thorough investigation as a potential source of important predictive information with respect chemosensitivity and chemoresistance, enhancing the possibility of identifying novel alternative therapeutic strategies and reducing toxicity.
For stage IV metastatic or recurrent CRCs refractory to standard systemic therapies, that exhibit detectable overexpression or mutation of known driver oncogenes (≈ 4–15%), precision oncotherapy represents a viable therapeutic option, in conditions of: i) deficient mismatch repair protein expression (dMMR) and/or DNA microsatellite instability (MSI-H/high), treatable with pembrolizumab and nivolumab or a combination of nivolumab and ipilimumab check-point inhibitors in first and subsequent lines of treatment; ii) specific BRAFV600E mutation treatable with a combination of encorafenib and cetuximab in second or third lines; iii) EGFR overexpression treatable with cetuximab and panitumumab; iv) HER-2 3+ overexpression or HER2 FISH/ISH amplification treatable with trastuzumab, lapatinib, tucatinib and deruxtecan/trastuzumab; v) neurotrophic tropomyosin receptor kinases (NTRKs) overexpression or expression of novel NTRK chimeric fusions treatable with larotrectinib or entrectinib [21,22,23,24,25,26,27,28].
Within this context, we report a retrospective study of the accuracy of in vitro ex vivo CTC chemosensitivity and tumour gene expression analyses in predicting response to treatment in CRC patients presenting with unresectable refractory RRC or unresectable refractory CRC liver metastases. In this setting, chemotherapeutic agents and monoclonal antibodies used for the multidisciplinary treatment of patients with refractory CRC, were chosen from chemosensitivity assays performed on transiently in vitro cultured liquid biopsy-derived CTCs and from the ratio of tumor gene expression exhibited by the same CTC populations compared to purified peripheral blood mononuclear cells from the same patient.
This study involved patients with unresectable and predictable disease course and was approved by the ASL n. 1 Ethics Committee, Abruzzo, Italy (Chairperson: G. Piccioli; protocol number 10/CE/2018; approved 19 July 2018, n. 1419). Drug selection, clinical treatments and evaluations were performed at the University of L'Aquila, L'Aquila, Italy. CTC isolation, culture, gene expression and chemosensitivity analyses were performed at the Research Genetic Cancer Centre, Florina, Greece. All patients provided written consent and received complete information about their disease and the implications of the proposed conventional treatment, in accordance with the Helsinki Declaration and the University of L'Aquila committee on human experimentation.
From a cohort of 168 CRC patients, enrolled from 2007 to 2019, comprising 62 patients with unresectable recurrent rectal cancer in progression following two lines of systemic chemotherapy and radiotherapy and 106 patients with unresectable CRC liver metastases in progression after two lines of systemic chemotherapy, 36 patients were retrospectively selected based on the following criteria: submission for precision oncotherapy by combined intraarterial chemotherapy and systemic venous targeted therapy, using drugs selected by comparative chemosensitivity and gene expression analyses performed on in vitro cultured CTCs and PBMCs from the same patient. To ensure the stability and consistency of sample collection and detection, only patients submitted for combined locoregional intraarterial chemotherapy and systemic venous targeted-therapy, whose CTCs had been purified using the same methodology, were included in this retrospective study, whereas patients submitted for systemic venous chemotherapy and/or systemic targeted therapy, and those in which CTC isolation and culture had been performed with more recent techniques and methodologies, not available at the beginning of the study, were excluded.
Decisions concerning unresectability and precision oncotherapy were made by experienced surgeons, oncologists and radiologists during multidisciplinary meetings. Inclusion criteria were: i) histologically confirmed colorectal cancer diagnosis and complete primary tumor resection; ii) failure of two lines of systemic chemotherapy; iii) Eastern Cooperative Oncology Group (ECOG) performance status of < 4; iv) adequate liver and renal function (total bilirubin serum level < 3 mg/dL, serum albumin level > 20 g/L, serum creatinine level < 2 mg/dL). In all cases, systemic chemotherapy ceased 4 weeks prior to the 1st cycle of tailored intraarterial chemotherapy in association with targeted therapy. Patients with inadequate medical records were excluded from this study. Demographic and clinical characteristics of the 36 patients are presented in Table 1.
Table 1 Patient demographic and clinical characteristics
Liquid biopsy, CTC gene expression and chemosensitivity assays
Sample collection, storage and transportation
Liquid biopsies (≥ 20 mL venous blood) were collected from each patient in sterile 50 mL Falcon tubes, containing 7 ml of 0.02 M ethylenediaminetetraacetic acid (EDTA) anticoagulant (E0511.0250, Duchefa Biochemie B.V., Haarlem, The Netherlands), transported in impact-resistant containers, under refrigeration at 2–8 °C, and analysed within 80 hours [29].
CTC isolation
For CTC isolation, blood samples were layered over 4 ml polysucrose solution (Biocoll 1077, Biochrom, Berlin, Germany) and centrifuged for 20 min at 2500×g. Peripheral blood mononuclear cells (PBMCs) and CTCs (the buffy coat) were collected and washed with phosphate-buffered saline (PBS, P3813, Sigma-Aldrich, Germany). Cell pellets were resuspended for 10 min in erythrocyte lysis buffer [154 mM NH4Cl (31,107, Sigma-Aldrich, Germany), 10 mM KHCO3 (4854, Merck, Germany) and 0.1 mM EDTA]. Cells were then collected by centrifugation, washed in PBS, and incubated with mouse monoclonal anti-human CD45 antibody-conjugated magnetic beads (39-CD45–250, Gentaur, Belgium) for 30 min at 4 °C. Anti CD45 bead-bound cells were collected in a magnetic field and saved for use as non-cancer control PBMCs in qRT-PCR gene expression analyses. Remaining cells were incubated with mouse monoclonal Anti-Pan cytokeratin-conjugated microbeads (PCK/CK4, CK5, CK6, CK8, CK10, CK13 and CK18) (MA1081-M, Gentaur, Belgium) for 30 min at 4 °C and PCK bead-bound cells (CTCs) collected in a magnetic field and washed in PBS. Isolated viable CTCs (IV-CTCs) were counted, and samples containing ≥5 viable CTCs per ml of blood were cultured for 6 days in order to obtain sufficient cell numbers for gene expression and chemosensitivity assays.
CTC culture
Purified pan-cytokeratin positive/CD45-negative bead-bound cells were cultured in RPMI-1640 (R0883, Sigma-Aldrich, Germany) containing 10% Fetal Bovine Serum-FBS (F4135, Sigma-Aldrich, Germany) and 2 mmol/l glutamine (G5792; Sigma), in 12-well cell culture plates (Corning, Merck, Germany), without antibiotics, at 37 °C, 5% CO2, and culture medium was changed every 2 days, as previously described [30].
CTC validation post-culture
CTC validation was confirmed by positive CK18, CK19, and negative CD45, CD31, N-Cadherin qRT-PCR (see section 2.2.5 for methodology) and by positive Pan-Cytokeratin-APC, and EpCAM-FITC, and negative CD45-PE Immunofluorescence (IF), in 6-day CTC cultures (Fig. 1). Briefly for IF, purified CTCs and PBMCs from the same patient were initially stained with anti-human CD45-PE (phycoerythin) conjugated mouse monoclonal antibody (304,008, Biolegend, CA, USA) and anti-human EpCAM-FITC (fluorescein isothioyanate) conjugated mouse monoclonal antibody (324,204, Biolegend, CA, USA), at recommended dilutions, and subsequently, cells were stained with anti-human Pan-Cytokeratin-APC (allophycocyanin, ab201807, Cambridge, UK) conjugated mouse monoclonal antibody (SAB4700666, Sigma-Aldrich, MO, USA), using Leucoperm staining protocol (BUF09C, Bio-Rad, CA, USA). Nuclei were counterstained with DAPI (4′,6-diamidino-2-phenylindole, Abbott Molecular, Illinois, USA) and cells were visualised under a Nikon Eclipse 50i microscope, armed with Cytovision software (Leica Biosystems, United States).
Representative: A) qRT-PCR CTC validation and B) accompanying histogram, demonstrating β-actin, CK18 and CK19 but not CD45, CD31 or N-Cadherin mRNA expression in a 6-day CTC culture, plus C) IF validation demonstrating positive IF immunoreactivity for EpCAM, Pan-CK but not CD45 in a 6 day CTC culture (left panels), and positive IF immunoreactivity for CD45 but not EpCAM or Pan-CK in PBMCs from the same patient (right panels) (bar = 50 μm)
CTC gene expression
For gene expression analysis, RNAs were purified from CTC cultures and from corresponding PBMCs using RNeasy Mini Kits, as directed by the manufacturer (74,105, Qiagen, Hilden, Germany). RNAs (1 μg) were reverse transcribed using a PrimeScript RT Reagent Kit, as directed by the manufacturer (RR037A, Takara, Beijing, China) and subjected to KAPA SYBR Fast Master Mix (2×) Universal (KK4618, KAPA Biosystems, MA, USA) real-time qPCR, in a final volume of 20 μl. Real-time qRT-PCR reactions were performed in a final volume of 20 μl and characterised by 2 min denaturation at 95 °C, followed by 40 cycles consisting of 10 sec denaturation at 95 °C and 30 sec annealing at 59 °C. Melting-curve analysis was performed from 70 °C to 90 °C, with 0.5 °C increments of 5 s, at each step. Reactions were employed to evaluate the expression of multidrug resistance gene-ABCB1 (MDR1), thymidylate synthase (TYMS), dihydro-folate reductase (DHFR), DNA excision repair protein (ERCC1), glutathione S-transferase (GST), epidermal growth factor (EGF), vascular epidermal growth factor (VEGF), 18S ribosomal RNA (18S rRNA), β actin (ACTB), glyceraldehyde 3-phosphate dehydrogenase (GAPDH), the specific primers for which have been previously reported [31]. All reactions were performed in triplicate, compared to template-free negative controls, and analysed by Livak relative quantification [32]. Gene expression was quantified using the following equations:
$$\Delta\mathrm{Ct} = \mathrm{Ct}_{\text{target}} - \mathrm{Ct}_{\beta\text{-actin}} \quad (\mathrm{Ct} = \text{threshold cycle})$$
$$\Delta\Delta\mathrm{Ct} = \Delta\mathrm{Ct}_{\text{treated CTCs}} - \Delta\mathrm{Ct}_{\text{non-cancer cells}}$$
$$\text{Relative expression level} = 2^{-\Delta\Delta\mathrm{Ct}}$$
$$\%\ \text{Gene expression} = 100 \times \left(2^{-\Delta\Delta\mathrm{Ct}} - 1\right)$$
and classified as low (< 50%) or high (> 50%) over-expression, as previously described [31].
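The Livak relative-quantification steps above can be condensed into a few lines; the Ct values in the usage line are illustrative, not taken from the study.

```python
def percent_gene_expression(ct_target_ctc, ct_actin_ctc,
                            ct_target_pbmc, ct_actin_pbmc):
    """Livak 2^-ddCt relative quantification of CTC vs PBMC expression."""
    d_ct_ctc = ct_target_ctc - ct_actin_ctc      # dCt in treated CTCs
    d_ct_pbmc = ct_target_pbmc - ct_actin_pbmc   # dCt in non-cancer cells
    dd_ct = d_ct_ctc - d_ct_pbmc
    rel = 2 ** (-dd_ct)                          # relative expression level
    return 100 * (rel - 1)                       # % gene over-expression

# Illustrative Ct values: ddCt = -3, so 2^3 = 8-fold relative expression,
# i.e. 700% over-expression, classified as high (> 50%).
pct = percent_gene_expression(24.0, 18.0, 27.0, 18.0)
```

A lower Ct means earlier amplification, so a negative ΔΔCt corresponds to over-expression in the CTCs relative to the patient's PBMCs.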
CTC chemosensitivity
For chemosensitivity assays, 6 day CTC cultures in 12-well plates were treated with the following drug concentrations: 1 μM alkeran (Μ2011, Sigma-Aldrich, Germany), 1 μM doxorubicin (D1515, Sigma-Aldrich, Germany), 1 μM cisplatin (P4394, Sigma-Aldrich, Germany), 10 μM 5-fluorouracil (F6627, Sigma-Aldrich, Germany), 1.12 μM oxaliplatin (O9512, Sigma-Aldrich, Germany), 1 μM carboplatin (41575–94-4, Sigma-Aldrich, Germany), 5 μM irinotecan (I1406, Sigma-Aldrich, Germany), 1 μM raltitrexed (112887–68-0, Sigma-Aldrich, Germany) and 2 μM mitomycin C (M4287, Sigma-Aldrich, Germany), in complete culture medium. Cell viability was assessed by Annexin V-PE (559,763; BD Bioscience, USA) flow cytometry (BD Instruments Inc., San José, CA, USA), and the percentages of living, dead and dying cells were evaluated using BD CellQuest Software (BD Instruments Inc., USA) (Fig. 2). Controls included untreated cells incubated in the presence and absence of chemotherapeutic drugs and cell-free counterparts. The percentage of non-viable CTCs was calculated under non-drug and drug-treated conditions, and chemosensitivity was classified as either: i) non-sensitive, < 35% death; ii) partially sensitive, 35–80% death; or iii) highly sensitive, > 80% death.
Representative flow cytometric analyses, with corresponding tables, showing the percentage changes in live, dead, late apoptotic and apoptotic Annexin-V-positive CTCs in 72-hour chemosensitivity assays of cultured CTCs from the same individual (Case 1), demonstrating chemosensitivity to Mitomycin C (MMC, 2 μM) (A) but not Fluorouracil (5-FU, 10 μM) (B), compared to respective untreated CTC controls
Precision oncotherapy protocol criteria
Decisions concerning precision oncotherapy were made by experienced oncologists, surgeons and radiologists during multidisciplinary meetings, based on gene expression and chemosensitivity analyses, previous systemic chemotherapeutic protocols, and in vitro drug cytotoxicity under hypoxic conditions [33]. For intraarterial chemotherapy, drug regimens were selected according to the following criteria: i) mono-chemotherapy for CTCs with high sensitivity (> 80% dead and dying cells) to one or more drugs, with the highest chemosensitivity indicating the drug to be used; ii) poly-chemotherapy for CTCs exhibiting partial sensitivity; and iii) drug selection with respect to activity under conditions of hypoxia. For targeted therapy, the presence/absence of KRAS and NRAS mutations in exon 2 (codons 12 and 13), exon 3 (codons 59 and 61) and/or exon 4 (codons 117 and 146) was verified, and subsequent drug selection was based upon existing therapeutic options already recommended by evolving international guidelines for clinical practice and upon high CTC:PBMC gene over-expression ratios, with the highest percentage used to select each drug. Specifically, cetuximab was selected for EGFR over-expression and bevacizumab for high VEGF over-expression. If CTC:PBMC expression ratios were > 50%, targeted-therapy selection was based upon the highest ratio of tumour gene over-expression.
Intraarterial chemotherapy techniques
Intraarterial chemotherapeutic procedures for unresectable refractory RRC and for unresectable refractory CRC liver metastases, together with eligibility criteria, have been described previously [17, 18]. For pelvic recurrences, hypoxic pelvic perfusion (HPP) requires specialized surgical skill and can also be performed percutaneously [34], whereas regional intraarterial chemotherapy for CRC liver metastases requires an interventional radiologist [18, 35]. Both procedures included hemofiltration to reduce toxicity [17, 18].
Response and adverse events criteria
Responses, assessed by computed tomography (CT), magnetic resonance imaging (MRI) and positron emission tomography (PET), were evaluated 3 months following the first cycle of intraarterial chemotherapy combined with targeted therapy and classified using Response Evaluation Criteria in Solid Tumors (version 1.1), as either complete responses (CR), partial responses (PR), corresponding to a minimum 30% reduction in tumour volume, stable disease (SD) or progressive disease (PD) [36]. Patient responses prior to 2009 were re-classified retrospectively. Adverse reactions were evaluated using National Cancer Institute-Common Terminology Criteria for Adverse Events software (version 4.03) and classified from 0 to 4.
Due to the low sample size and plausible deviations from distributional symmetry, patient demographic and clinical characteristics are summarized as percentages and median values. Cut-off values for chemosensitivity tests and tumor gene expression analyses were calculated from combined optima of positive predictive values. Specifically, this pair of cut-off values was derived from the pair of thresholds that maximized predictive accuracy, considering all possible confusion matrices. Binomial exact confidence intervals of 95% are provided for sensitivity, specificity, positive predictive, negative predictive and accuracy values.
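The combined cut-off derivation described above can be sketched as a small grid search: for each candidate pair of thresholds, build the confusion matrix against observed responses and keep the pair that maximizes predictive accuracy. The patient records below are entirely hypothetical, used only to show the mechanics.

```python
# Sketch of combined cut-off selection: search candidate chemosensitivity and
# gene-expression thresholds for the pair maximizing predictive accuracy.
# The patient tuples are hypothetical (% CTC death, % over-expression, CR/PR).

patients = [
    (85, 60, True), (90, 75, True), (40, 30, False),
    (72, 55, True), (30, 20, False), (65, 45, False),
]

def accuracy(chemo_cut, expr_cut):
    """Fraction of patients whose predicted positivity matches the response."""
    correct = 0
    for death, expr, responded in patients:
        predicted_positive = death >= chemo_cut and expr >= expr_cut
        correct += predicted_positive == responded
    return correct / len(patients)

# Exhaustive search over a threshold grid (all possible confusion matrices).
best = max(
    ((c, e) for c in range(30, 95, 5) for e in range(20, 80, 5)),
    key=lambda ce: accuracy(*ce),
)
```

With this toy data the search recovers a perfectly discriminating pair; on real data the maximum accuracy would be below 1 and the chosen pair would be the study's reported cut-offs.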
CTC chemosensitivity and gene expression analyses, used for selecting precision oncotherapy protocols, and patient therapeutic responses
With respect to CTC purification, the CTC detection rate in this cohort of advanced stage metastatic CRC patients was 100%; the mean (± s.e.) number of purified CTCs per millilitre of blood was 10.3 ± 0.35 (range 6.2 to 16.3) (Table 2). For 25 ml blood samples, this translated into total numbers of CTCs per patient ranging from ≈ 150 to 400, which were expanded by 6-day in vitro culture to between ≈ 35,000 and 100,000 CTCs per patient for the gene expression and chemosensitivity assays used to select locoregional chemotherapy and systemic targeted therapy protocols.
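The per-patient yields above follow from simple arithmetic on the stated per-ml range and a 25 ml draw; a quick check, with the culture expansion factor derived as an implication rather than a reported figure:

```python
# Per-patient CTC totals implied by the reported per-ml counts (25 ml draw).
blood_volume_ml = 25
per_ml_low, per_ml_high = 6.2, 16.3

total_low = blood_volume_ml * per_ml_low     # 155,   ~150 in the text
total_high = blood_volume_ml * per_ml_high   # 407.5, ~400 in the text

# Implied fold-expansion of the 6-day culture (35,000-100,000 CTCs/patient);
# this factor is a derived assumption, not a value reported in the study.
expansion_low = 35_000 / total_low
expansion_high = 100_000 / total_high
```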
Table 2 Protocols of intraarterial chemotherapy based on CTC chemosensitivity, and RECIST 1.1 responses
RECIST 1.1 responses to combined intraarterial chemotherapy and targeted therapy, selected by CTC chemosensitivity and gene expression assays, and the KRAS mutational status of tumors are presented in Tables 2 and 3.
Table 3 Targeted therapy protocols selected according to CTC: PBMC percentage gene overexpression ratios, and associated RECIST 1.1 responses
Response predictive accuracy of precision oncotherapy protocols, selected by chemosensitivity and gene expression analyses performed on in vitro cultured liquid biopsy-derived CTCs and PBMCs
With respect to the 36 patients assessed in this study, intraarterial chemotherapy combined with targeted therapy elicited 17 partial responses (PR), 18 stable disease (SD) responses and 1 progressive disease (PD) response. RECIST 1.1 responses subdivided into positive (CR + PR) and negative (SD + PD) responses, in relation to CTC chemosensitivity and tumor gene expression tests for targeted therapy, are presented in Table 4. Tests were defined as positive when ≥ 70% of drug-treated CTCs were killed and CTC:PBMC tumor gene expression ratios were ≥ 50%. Tests were defined as negative when < 70% of drug-treated CTCs were killed and CTC:PBMC tumor gene expression ratios were < 50%.
Table 4 Positive and negative RECIST 1.1 responses after intraarterial chemotherapy and targeted therapy, using protocols selected by positive or negative CTC chemosensitivity and tumor gene expression analyses
Considering both types of therapy (intraarterial and targeted) and the associated cut-offs for assay positivity, we report 16 true positive responses (TP), 1 false positive response (FP), 18 true negative responses (TN) and 1 false negative response (FN), which translate into a sensitivity of 94.12% (16/17) (95% CI 0.71–0.99), specificity of 5.26% (1/19) (95% CI 0.01–0.26), a positive predictive value of 47.06% (16/34) (95% CI 0.29–0.65), a negative predictive value of 50% (1/2) (95% CI 0.01–0.98) and an overall predictive accuracy of 47.22% (17/36) (95% CI 0.30–0.64).
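The "binomial exact" intervals quoted here are Clopper-Pearson intervals. A pure-stdlib sketch via bisection on the binomial tail reproduces, for example, the sensitivity interval for 16 successes out of 17 (lower bound ≈ 0.71); the implementation below is illustrative, not the statistical package used in the study.

```python
# Clopper-Pearson exact binomial 95% CI via bisection on the binomial tail.
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def clopper_pearson(k, n, alpha=0.05):
    def solve(f):
        # Bisection for the root of an increasing function f on [0, 1].
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(mid) < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: P(X >= k | p) = alpha/2 (tail increases with p).
    lower = 0.0 if k == 0 else solve(lambda p: binom_tail_ge(k, n, p) - alpha / 2)
    # Upper bound: P(X <= k | p) = alpha/2, rewritten as an increasing function.
    upper = 1.0 if k == n else solve(
        lambda p: alpha / 2 - (1 - binom_tail_ge(k + 1, n, p)))
    return lower, upper

lo, hi = clopper_pearson(16, 17)   # sensitivity 16/17 = 94.12%
```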
No technical, hemodynamic or vascular complications were observed during HPP and HAI procedures, no perfusion-related postoperative deaths were registered, and all adverse events are reported in Table 5.
Table 5 Adverse events in 27 patients with unresectable refractory RRC and 9 patients with unresectable refractory CRC liver metastases submitted for Hypoxic Pelvic Perfusion (HPP)/targeted therapy or Hepatic Artery Infusion/targeted therapy based on in vitro CTC chemosensitivity and gene expressions analyses
Interest in precision oncotherapy has grown dramatically over the past 20 years, and routine characterisation of oncogenic pathways activated in primary, recurrent and metastatic tumour tissue biopsies forms the mainstay of precision oncology selection of individualised therapeutic strategies, which has led to improvements in survival rates. Tissue biopsy procedures, however, are invasive, can facilitate tumour dissemination and are of limited use in patient follow-up. These drawbacks have stimulated interest in the potential use of liquid biopsies as a source of tumour-derived components, including CTCs that can be enumerated as a prognostic marker, purified and characterised at the biomolecular and chemosensitivity levels to provide predictive information for precision oncology.
Although current clinical use of liquid biopsies remains limited, CTCs have been isolated from patients with a variety of tumour types [37, 38], including CRC [17, 18]. CTC research has evolved from prognostic studies based upon CTC enumeration, to predictive clinical response studies based upon the in vitro chemosensitivity and tumor gene expression profiles of cultured CTCs. Various methods for CTC isolation and enumeration have been reported [10], although FDA approval of a particular method and equipment has led to a commanding position [8]. Furthermore, in addition to liquid biopsy-derived CTC purification methodologies, a variety of methods have been reported for subsequent CTC in vitro cultivation in order to facilitate chemosensitivity and tumor gene expression analysis [12, 39].
A potential drawback of current methodologies, however, is the exclusive use of epithelial markers for detecting and purifying CTCs, which may miss CTCs that have undergone epithelial-to-mesenchymal transition (EMT). EMT is a pre-requisite for metastatic dissemination in the majority of carcinomas; it is characterised by a shift to SNAIL, ZEB and TWIST transcriptional activity, TGFβ and Wnt signaling and matrix metalloproteinase activity, and results in repression of epithelial marker expression and "metaplastic" conversion to a motile mesenchymal phenotype [10]. CTC numbers may, therefore, be underestimated by methods that exclusively employ epithelial markers. Furthermore, purified epithelial CTCs may differ from mesenchymal CTCs in biomolecular profiles and chemosensitivity, potentially making information from CTC analysis incomplete and more relevant to epithelial rather than mesenchymal tumour components; such information may also differ from that gained from tissue biopsies, leading to differences between liquid and tissue biopsies in precision oncotherapy predictions of clinical efficacy.
Given these variables and considerations, the salient question remains whether it is possible to provide a calculated prediction of clinical response to select tailored drugs using a specific CTC methodology, and would this alter if the tumour type, patient population or analytical methodology changes?
The predictive accuracy of precision oncotherapy depends initially upon patient characteristics, tumor type and stage. Here, we evaluated the predictive accuracy of precision oncotherapy on a specific population of CRC patients, refractory to at least two standard lines of systemic therapy, undoubtedly more chemo-resistant than patients with unresectable recurrent rectal cancer or unresectable CRC liver metastases submitted to first line treatment. The predictive accuracy of precision oncotherapy also depends upon the methodology used for CTC isolation, enrichment, in vitro culture, chemosensitivity, tumor gene expression, etc. In this study, due to the long accrual period, the predictive accuracy of a particular methodology for CTC purification, culture and analysis, was evaluated. This methodology has now been improved by recent technical advances, which if available during the period of study, may have enhanced predictive accuracy.
Based on a predictive accuracy value of 47.2% for the methodology employed, the main clinical message is that patients with unresectable recurrent rectal cancer or unresectable CRC liver metastases, refractory to at least two lines of traditional systemic therapy, exhibited a positive RECIST 1.1 response in ≈ 50% of cases. This represents the first reported estimation of predictive accuracy derived from combined chemosensitivity and tumor gene expression analysis of in vitro cultured CTCs, and extends a recent meta-analysis reporting the predictive value of chemosensitivity assays alone for individualized CRC chemotherapy [40]. Furthermore, the combined positivity cut-offs of ≥ 70% cell death for chemosensitivity and ≥ 50% for CTC:PBMC tumor gene expression ratios were selected considering the positive predictive value, which essentially focuses on an optimized RECIST 1.1 response, maximizing predictive accuracy. Had lower pairs of cut-off values been chosen, precision oncotherapy protocols would have had to consider chemotherapeutic agents to which CTCs exhibit greater resistance and/or monoclonal antibodies against less strongly expressed CTC antigens.
The reported predictive accuracy value of 47.2%, is indeed impressive considering that the CTCs from 77.8% of patients exhibited over-expression of the multi-drug resistance gene MDR1 (≥ 65%), CTCs from 19.5% of patients exhibited significant over-expression of ERCC1 and GST (> 15%), involved in platinum resistance [41] and CTCs from 22.8% of patients exhibited ≥5% over-expression of TYMS or DHFR, involved in 5-fluorouracil resistance [42, 43].
Another important message from this study is that transient in vitro culture of liquid biopsy-derived CTCs can provide sufficient cell numbers for screening anticancer compounds including agents not normally prescribed for any particular tumor type, of relevance for "drug repurposing" [39]. Indeed, 72% of CTCs in the refractory CRC patient group exhibited sensitivity to Mitomycin C, 5.5% exhibited sensitivity to Alkeran and 2.7% exhibited sensitivity to Doxorubicin, agents that are not currently recognized as particularly active against CRC and have been reported to be ≈10 times more cytotoxic under hypoxic conditions [33]. In the present study, CTC chemosensitivity assays were not performed under hypoxic conditions, suggesting that the cytotoxic potential of these agents could be further enhanced via intraarterial administration to improve access to hypoxic tumour regions or using therapeutic protocols that promote tissue hypoxia, such as hypoxic pelvic perfusion (HPP), in order to take therapeutic advantage of chronic or transient tumour tissue hypoxia [44].
Limitations of this study include: i) the small patient sample size, which nevertheless provided a 95% confidence interval for predictive accuracy (CI 0.30–0.64); ii) the inclusion of data from CTC populations obtained from patients with recurrent rectal cancer and CRC liver metastases, mitigated somewhat by the need to compare CTCs disseminating from recurrent and overt metastatic sites; iii) transient in vitro CTC culture, used to obtain sufficient numbers for assay, which may have altered gene expression and chemosensitivity and reduced predictive accuracy, but was deemed necessary for reasons of methodological homogeneity in this study, initiated in 2007 and terminated in 2019, which also explains why novel miniaturized system-based methods were not used; iv) the use of methodology for the evaluation of apoptosis that may have underestimated total death by not measuring paraptosis, ferroptosis and/or necroptosis; and v) the lack of recent methodological improvements (microfluidic-based systems, anti-CK20 antibodies and EMT markers), also for reasons of methodological homogeneity for the duration of this 2007–2019 retrospective study.
Despite these limitations and emphasizing combined treatment with intraarterial chemotherapy and systemic target therapy for advanced CRC patients, refractory to at least two lines of systemic therapy, this study provides interesting biostatistical information for multidisciplinary oncological teams for quantifying the predictive accuracy of the particular CTC-isolation/assay methodology employed. We do not, however, exclude the possibility that predictive accuracy could be improved by recent technological and biomolecular innovations (see above) nor does this study evaluate the relative importance of intraarterial chemotherapy in determining tumor response.
The biomolecular methodology utilised in this retrospective study of patients with unresectable refractory RRC and unresectable refractory CRC liver metastases, provides a predictive accuracy of ≈ 50% for response to liquid biopsy precision oncology-selected intraarterial chemotherapy and targeted therapy. We envisage that this value will be improved by novel technologies and therapeutic agents, and by upgrading methodology to ensure purification of all potentially metastatic epithelial, quasi-mesenchymal and mesenchymal CTC sub-populations, to include physiologically relevant controls in addition to PBMCs, and by eventual methodological standardization. Despite the challenges in applying these procedures to wider populations in non-research settings, the encouraging result of this retrospective study, employing CTC purification procedures relevant to 2007–2019, sets the stage for potential improvements in predictive accuracy in subsequent clinical trials employing current and future technological and methodological improvements.
The datasets generated and/or analysed during the current study are not publicly available due to privacy reasons but are available from the corresponding author on reasonable request.
BRAF:
v-Raf murine sarcoma viral oncogene homolog B
CK-20:
Cytokeratin-20
EGFR:
Epidermal Growth Factor Receptor
EMT:
Epithelial-Mesenchymal-Transition
CD45:
Cluster of differentiation 45
EpCAM:
Epithelial cell adhesion molecule
pan-CK:
Pan-Cytokeratin
qRT-PCR:
Reverse Transcriptase quantitative Polymerase Chain Reaction
SNAIL:
Gene family of zinc-finger transcription factors
ZEB:
Gene family of zinc finger transcription factors
TWIST:
Twist gene family
TGFβ:
Transforming growth factor beta
Wnt:
Wingless and Int − 1 signaling pathway
PET:
Positron emission tomography
DAPI:
4′,6-diamidino-2-phenylindole
EpCAM-FITC:
Antibody against Epithelial cell adhesion molecule conjugated to fluorescein isothiocyanate
CD45-PE:
Antibody against Cluster of differentiation 45 conjugated to phycoerythrin
Pan-Cytokeratin-APC:
Antibody against a panel of cytokeratins conjugated to allophycocyanin
ACTB:
β actin
18S rRNA:
18S ribosomal RNA
5-FU:
5-Fluorouracil
GAPDH:
Glyceraldehyde 3-phosphate dehydrogenase
MMC:
Mitomycin C
IRI:
Irinotecan
OX:
Oxaliplatin
Raltitrexed
DOX:
Doxorubicin
ALK:
Alkeran
CIS:
Cisplatin
Sargent DJ, Conley BA, Allegra C, Collette L. Clinical trial designs for predictive marker validation in cancer treatment trials. J Clin Oncol. 2005;23:2020–7.
McShane LM. Statistical challenges in the development and evaluation of marker-based clinical tests. BMC Med. 2012;10:52.
Sepulveda AR, Hamilton SR, Allegra CJ, Grody W, Cushman-Vokoun AM, Funkhouseret WK, et al. Molecular biomarkers for the evaluation of colorectal cancer. Guideline from the American Society for Clinical Pathology, College of American Pathologists, Association for Molecular Pathology, and American Society of Clinical Oncology. Arch Pathol Lab Med. 2017;141:625–57.
Yoon YS, Kim JC. Recent applications of chemosensitivity tests for colorectal cancer treatment. World J Gastroenterol. 2014;20:16398–408.
Fernández-Lázaro D, García Hernández JL, García AC, Córdova Martínez A, Mielgo-Ayuso J, Cruz-Hernández JJ. Liquid biopsy as novel tool in precision medicine: origins, properties, identification and clinical perspective of cancer's biomarkers. Diagnostics. 2020;10:215.
Wong C-H, Chen Y-C. Clinical significance of exosomes as potential biomarkers in cancer. World J Clin Cases. 2019;7:171–90.
Karachaliou N, Mayo-de-Las-Casas C, Molina-Vila MA, Rosell R. Real-time liquid biopsies become a reality in cancer treatment. Ann Transl Med. 2015;3:36.
Ding Y, Li W, Wang K, Xu C, Hao M, Ding L. Perspectives of the application of liquid biopsy in colorectal Cancer. Biomed Res Int. 2020:6843180. https://doi.org/10.1155/2020/6843180.
Welinder C, Jansson B, Lindell G, Wenner J. Cytokeratin 20 improves the detection of circulating tumor cells in patients with colorectal cancer. Cancer Lett. 2015;358:43–6. https://doi.org/10.1016/j.canlet.2014.12.024.
Hendricks A, Brandt B, Geisen R, Dall K, Röder C, Schafmayer C, et al. Isolation and enumeration of CTC in colorectal Cancer patients: introduction of a novel cell imaging approach and comparison to cellular and molecular detection techniques. Cancers. 2020;12:2643. https://doi.org/10.3390/cancers12092643.
Vacante M, Ciuni R, Basile F, Biondi A. The liquid biopsy in the Management of Colorectal Cancer: an overview. Biomedicines. 2020;8:308. https://doi.org/10.3390/biomedicines8090308.
Papasotiriou I, Chatziioannou M, Pessiou K, Retsas I, Dafouli G, Kyriazopoulou A, et al. Detection of circulating tumor cells in patients with breast, prostate, pancreatic, Colon and Melanoma Cancer: a blinded comparative study using healthy donors. J C T. 2015;6:543–53. https://doi.org/10.4236/jct.2015.67059.
Seeberg LT, Waage A, Brunborg C, Hugenschmidt H, Renolen A, Stav I, et al. Circulating tumor cells in patients with colorectal liver metastasis predict impaired survival. Ann Surg. 2015;261:164–71.
Arrazubi V, Mata E, Antelo ML, Tarifa A, Herrera J, Zazpe C, et al. Circulating tumor cells in patients undergoing resection of colorectal cancer liver metastases. Clinical utility for long-term outcome: a prospective trial. Ann Surg Oncol. 2019;26:2805–11.
Wang L, Zhou S, Zhang W, Wang J, Wang M, Hu X, et al. Circulating tumor cells as an independent prognostic factor in advanced colorectal cancer: a retrospective study in 121 patients. Int J Color Dis. 2019;34:589–97.
Musella V, Pietrantonio F, Di Buduo E, Iacovelli R, Martinetti A, Sottotetti E, et al. Circulating tumor cells as a longitudinal biomarker in patients with advanced chemorefractory, RAS-BRAF wild-type colorectal cancer receiving cetuximab or panitumumab. Int J Cancer. 2015;137:1467–74.
Guadagni S, Fiorentini G, De Simone M, Masedu F, Zoras O, Mackay AR, et al. Precision oncotherapy based on liquid biopsies in multidisciplinary treatment of unresectable recurrent rectal cancer: a retrospective cohort study. J Cancer Res Clin Oncol. 2020;146:205–19. https://doi.org/10.1007/s00432-019-03046-3.
Guadagni S, Clementi M, Mackay AR, Ricevuto E, Fiorentini G, Sarti D, et al. Real-life multidisciplinary treatment for unresectable colorectal cancer liver metastases including hepatic artery infusion with chemo-filtration and liquid biopsy precision oncotherapy. Observational cohort study. J Cancer Res Clin Oncol. 2020;146:1273–90.
Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68:394–424.
Aliberti C, Carandina R, Sarti D, Mulazzani L, Pizzirani E, Guadagni S, et al. Chemoembolization adopting polyethylene glycol drug-eluting embolics loaded with doxorubicin for the treatment of hepatocellular carcinoma. Am J Roentgenol. 2017;209:430–4.
Bruera G, Cannita K, Di Giacomo D, Lamy A, Frébourg T, Sabourin JC, et al. Worse prognosis of KRAS c.35 G > a mutant metastatic colorectal cancer (MCRC) patients treated with intensive triplet chemotherapy plus bevacizumab (FIr-B/FOx). BMC Med. 2013;11:59.
Bruera G, Cannita K, Tessitore A, Russo A, Alesse E, Ficorella C, et al. The prevalent KRAS exon 2 c.35 G > a mutation in metastatic colorectal cancer patients: a biomarker of worse prognosis and potential benefit of bevacizumab-containing intensive regimens? Crit Rev Oncol Hematol. 2015;93:190–202.
Bruera G, Pepe F, Malapelle U, Pisapia P, Dal Mas A, Di Giacomo D, et al. KRAS, NRAS and BRAF mutations detected by next generation sequencing, and differential clinical outcome in metastatic colorectal cancer (MCRC) patients treated with first line FIr-B/FOx adding bevacizumab (BEV) to triplet chemotherapy. Oncotarget. 2018;9:26279–90.
Morelli MF, Santomaggio A, Ricevuto E, Cannita K, De Galitiis F, Tudini M, et al. Triplet schedule of weekly 5-fluorouracil and alternating irinotecan or oxaliplatin in advanced colorectal Cancer: a dose-finding and phase II study. Oncol Rep. 2010;23:1635–40.
Bruera G, Santomaggio A, Cannita K, Lanfiuti Baldi P, Tudini M, De Galitiis F, et al. "Poker" association of weekly alternating 5-fluorouracil, Irinotecan, bevacizumab and Oxaliplatin (FIr-B/FOx) in first line treatment of metastatic colorectal cancer: a phase II study. BMC Cancer. 2010;10:567.
Bruera G, Ricevuto E. Intensive chemotherapy of metastatic colorectal cancer: weighing between safety and clinical efficacy. Evaluation of Masi G, Loupakis F, Salvatore L, et al. Bevaizumab with FOLFOXIRI (irinotecan, oxaliplatin, fluorouracil, and folinate) as first-line treatment for metastatic colorectal cancer: a phase 2 trial. Lancet Oncol 2010; 11:845-52. Expert Opin Biol Ther. 2011;11:821–4.
Bruera G, Cannita K, Giuliante F, Lanfiuti Baldi P, Vicentini R, Marchetti P, et al. Effectiveness of liver metastasectomies in metastatic colorectal Cancer (MCRC) patients treated with triplet chemotherapy plus bevacizumab (FIr-B/FOx). Clin Colorectal Cancer. 2012;11:119–26.
Sartore-Bianchi A, Pietrantonio F, Lonardi S, Mussolin B, Rua F, Fenocchio E, et al. Phase II study of anti-EGFR rechallenge therapy with panitumumab driven by circulating tumor DNA molecular selection in metastatic colorectal cancer: The CHRONOS trial (abstract). J Clin Oncol. 2021;39(suppl 15) Abstract available online at https://meetinglibrary.asco.org/record/195971/abstract (Accessed on 08 June 2021).
Apostolou P, Ntanovasilis DA, Papasotiriou I. Evaluation of a simple method for storage of blood samples that enables isolation of circulating tumor cells 96 h after sample collection. J Biol Res Thessalon. 2017;24:11.
Toloudi M, Ioannou E, Chatziioannou M, Apostolou P, Kiritsis C, Manta S, et al. Comparison of the growth curves of Cancer cells and Cancer stem cells. Curr Stem Cell Res Ther. 2014;9:112–6.
Apostolou P, Iliopoulos AC, Parsonidis P, Papasotiriou I. Gene expression profiling as a potential predictor between normal and cancer samples in gastrointestinal carcinoma. Oncotarget. 2019;10:3328–38.
Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2^(−ΔΔCT) method. Methods. 2001;25:402–8.
Teicher BA, Lazo JS, Sartorelli AC. Classification of antineoplastic agents by their selective toxicities toward oxygenated and hypoxic tumor cells. Cancer Res. 1981;41:73–81.
Guadagni S, Fiorentini G, Clementi M, Palumbo P, Mambrini A, Masedu F. Mitomycin C hypoxic pelvic perfusion for unresectable recurrent rectal cancer: pharmacokinetic comparison of surgical and percutaneous techniques. Updat Surg. 2017;69:403–10. https://doi.org/10.1007/s13304-017-0480-6.
Mambrini A, Bassi C, Pacetti P, Torri T, Iacono C, Ballardini M, et al. Prognostic factors in patients with advanced pancreatic adenocarcinoma treated with intra-arterial chemotherapy. Pancreas. 2008;36:56–60.
Eisenhauer EA, Therasse P, Bogaerts J, Schwartz LH, Sargent D, Ford R, et al. New response evaluation criteria in solid tumors: revised RECIST guideline (version 1.1). Eur J Cancer. 2009;45:228–47.
Guadagni S, Clementi M, Masedu F, Fiorentini G, Sarti D, Deraco M, et al. A pilot study of the predictive potential of chemosensitivity and gene expression assays using circulating tumour cells from patients with recurrent ovarian cancer. Int J Mol Sci. 2020;21:4813. https://doi.org/10.3390/ijms21134813.
Guadagni S, Fiorentini G, Papasotiriou I, Apostolou P, Masedu F, Sarti D, et al. Circulating tumour cell liquid biopsy in selecting therapy for recurrent cutaneous melanoma with locoregional pelvic metastases: a pilot study. BMC Res Notes. 2020;13:176.
Popova AA, Levkin PA. Precision Medicine in Oncology: In Vitro Drug Sensitivity and Resistance Test (DSRT) for Selection of Personalized Anticancer Therapy. Adv Therap. 2020;3:1900100. https://doi.org/10.1002/adtp.201900100.
Blom K, Nygren P, Larsson R, Andersson CR. Predictive value of ex vivo Chemosensitivity assays for individualized Cancer chemotherapy: a Meta-analysis. SLAS Technol. 2017;22:306–14.
Yu Y, Kanwar SS, Patel BB, Nautiyal J, Sarkar FH, Majumdar APN. Elimination of Colon Cancer stem-like cells by the combination of curcumin and FOLFOX. Transl Oncol. 2009;2:321–8.
Jensen SA, Vainer B, Witton CJ, Jørgensen JT, Sørensen JB. Prognostic significance of numeric aberrations of genes for thymidylate synthase, thymidine phosphorylase and Dihydrofolate reductase in colorectal Cancer. Acta Oncol. 2008;47:1054–61.
Di Paolo A, Chu E. The role of thymidylate synthase as a molecular biomarker. Clin Cancer Res. 2004;10:411–2.
Farina AR, Cappabianca L, Sebastiano M, Zelli V, Guadagni S, Mackay AR. Hypoxia-induced alternative splicing: the 11th Hallmark of Cancer. J Exp Clin Cancer Res. 2020;39:110. https://doi.org/10.1186/s13046-020-01616-9.
The authors report no proprietary or commercial interest in any product mentioned or concept discussed in this article. None of the authors have any COI/ Financial Disclosure.
This research received no external funding.
Andrew R. Mackay and Marco Clementi shared last authors.
Department of Applied Clinical and Biotechnological Sciences, University of L'Aquila, 67100, L'Aquila, Italy
Stefano Guadagni, Francesco Masedu, Marco Valenti, Enrico Ricevuto, Gemma Bruera, Antonietta R. Farina, Andrew R. Mackay & Marco Clementi
Department of Oncology and Hematology, Azienda Ospedaliera "Ospedali Riuniti Marche Nord", Pesaro, Italy
Giammaria Fiorentini & Donatella Sarti
Department of Prevention and Sports Medicine, University Hospital Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
Caterina Fiorentini
Department of Physiology and Pharmacology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
Veronica Guadagni
Research Genetic Cancer Centre S.A, Florina, Greece
Panagiotis Apostolou & Panagiotis Parsonidis
Research Genetic Cancer Centre International GmbH, Zug, Switzerland
Ioannis Papasotiriou
Stefano Guadagni
Francesco Masedu
Giammaria Fiorentini
Donatella Sarti
Panagiotis Apostolou
Panagiotis Parsonidis
Enrico Ricevuto
Gemma Bruera
Antonietta R. Farina
Andrew R. Mackay
Marco Clementi
S.G., F.M., A.R.M, P.A., I.P. wrote the main manuscript text. S.G., P.P., A.R.F., prepared figures and Tables. D.S., G.B., C.F., V.G. made data curation. P.A., P.P., made biomolecular analyses. G.S., M.C., G.F., E.R., G.B. made clinical treatments. M.V. supervised statistical analysis. All authors reviewed the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Stefano Guadagni.
This study was conducted according to the guidelines of the Declaration of Helsinki, and was approved by the Ethics Committee of the "Azienda Sanitaria Locale" (ASL) n.1 of the "Regione Abruzzo", Italy (Chairperson: G. Piccioli; protocol number 10/CE/2018; approved: 19 July 2018 (n.1419)).
Members of ASL n.1 Ethics committee
Gianlorenzo PICCIOLI Chairperson; Magistrate
Claudio FERRI Clinician
Marco Valenti Biostatistician
Domenico PARISE Substitute of Hospital Health Director
Maurizio PAOLONI Clinician
Marco CARMIGNANI Pharmacologist
Elvira D'ALESSANDRO Geneticist
Goffredo DEL ROSSO Clinician
Mario DI PIETRO Pediatrician
Antonio BARILE Radiologist
Roberto BERRETTONI Medical Devices
Ester LIBERATORE Hospital Pharmacy
Patrizia MASCIOVECCHIO Coroner
Fabrizio ANDREASSI Clinical Engineering
Carmine ORLANDI Nutritionist
Luciano LIPPA Clinician
Giovanni MUTTILLI Nurse
Eleonora CORONA Patient Organization
Carlo DI STANISLAO Clinical Secretary
Marivera DE ROSA Political Secretary
Informed consent was obtained from all subjects in the study.
All authors declare that they have no competing interests.
Guadagni, S., Masedu, F., Fiorentini, G. et al. Circulating tumour cell gene expression and chemosensitivity analyses: predictive accuracy for response to multidisciplinary treatment of patients with unresectable refractory recurrent rectal cancer or unresectable refractory colorectal cancer liver metastases. BMC Cancer 22, 660 (2022). https://doi.org/10.1186/s12885-022-09770-3
Predictive accuracy
Precision oncotherapy
Liquid biopsies
Chemosensitivity tests
Tumor gene expressions analyses
Recurrent rectal cancer
Colorectal cancer liver metastases
Intraarterial chemotherapy